On Thu, Nov 29, 2007 at 02:05:13PM -0800, Brendan Gregg - Sun Microsystems
wrote:
I'd recommend running filebench for filesystem benchmarks, and see what
the results are:
http://www.solarisinternals.com/wiki/index.php/FileBench
Filebench is able to purge the ZFS cache
I never said I was a typical consumer. After all, I bought a $1600 DSLR.
If you look around photo forums, you'll see an interest in the digital workflow
which includes long term storage and archiving. A chunk of these users will
opt for an external RAID box (10%? 20%?). I suspect ZFS will
I'm seeing some other issues with delegation+iscsi with the latest
Nevada bits. I will need to investigate them and will likely raise some
bugs once I figure out what's going on.
Thanks. For now my sudo wrapper works, but I would be very happy if this
can be sorted out without any hacks.
Thanks for your observations.
HOWEVER, I didn't pose the question:
"How do I architect the HA and storage and everything for an email system?"
Our site like many other data centers has HA standards and politics and all
this other baggage that may lead a design to a certain point. Thus our answer
Hello,
I'm trying to track down a problem with taking zfs snapshots. On
occasion the zfs command will report:
cannot snapshot 'dataset name': dataset is busy
The problem is, I don't know what causes zfs to think the data set is
busy. Anyone out there know what constitutes a busy dataset?
I did
revised indentation:
mirror2 / # zpool status
pool: tank
state: ONLINE
scrub: none requested
config:
NAME        STATE     READ WRITE CKSUM
tank        ONLINE       0     0     0
  raidz2    ONLINE       0     0     0
    c0t1d0  ONLINE       0     0     0
Hi,
We have a number of 4200s set up using a combination of an SVM 4-way mirror and
a ZFS raidz stripe.
Each disk (of 4) is divided up like this:
/       6GB   UFS  s0
swap    8GB        s1
/var    6GB   UFS  s3
metadb  50MB  UFS  s4
/data   48GB  ZFS  s5
For SVM we do a 4-way mirror on /, swap, and /var
So we have 3 SVM
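(For reference, a 4-way SVM mirror of one slice is typically built along the
following lines. This is only a sketch; the metadevice names d10-d14 and the
assumption that the four disks are c0t0d0 through c0t3d0 are illustrative, not
taken from the poster's setup.)

```
# Illustrative: 4-way SVM mirror of the root slice (s0).
metainit -f d11 1 1 c0t0d0s0   # submirror on disk 0 (-f: slice is in use)
metainit d12 1 1 c0t1d0s0      # submirror on disk 1
metainit d13 1 1 c0t2d0s0      # submirror on disk 2
metainit d14 1 1 c0t3d0s0      # submirror on disk 3
metainit d10 -m d11            # create mirror with the first submirror
metattach d10 d12              # attach the remaining three submirrors
metattach d10 d13
metattach d10 d14
```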
mirror2 / # zpool history
History for 'tank':
2007-11-07.14:15:19 zpool create -f tank raidz2 c0t0d0 c0t1d0 c0t2d0 c2t0d0
c2t1d0 c2t2d0 c3t0d0 c3t1d0 c3t2d0 c3t3d0
2007-11-07.14:17:21 zfs set atime=off tank
2007-11-07.14:18:16 zfs create tank/datatel
2007-11-07.14:52:16 zfs set
Jonathan,
Thanks for providing the zpool history output. :-)
You probably missed the message after this command:
# zpool add tank c4t0d0
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses raidz and new vdev is disk
I provided some
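(The non-forced way to grow a raidz2 pool is to add another complete raidz2
vdev rather than a bare disk, so the replication levels match. A sketch, with
the six c4 disk names purely illustrative:)

```
# Illustrative: add a second raidz2 top-level vdev of matching redundancy.
zpool add tank raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0
```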
Can someone point me to a whitepaper, blog, or other document that
discusses ZFS' metadata structure and properties?
Thanks,
Michael
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
Got an issue which is rather annoying to me - three of my
ZFS caches are regularly using nearly 1/2 of the 1.09Gb of
allocated kmem in my system
ultra20 m2
1x 2.2GHz dual-core Opteron
2Gb ram, 16Gb swap
2 zpools, not ZFS root
snv_77
2 non-global
Have you read the ZFS On-Disk Specification?
http://www.opensolaris.org/os/community/zfs/docs/
Or are you trying to find something specific?
Rayson
On Dec 3, 2007 5:31 PM, Michael A Walters [EMAIL PROTECTED] wrote:
Can someone point me to a whitepaper, blog, or other document that
Hi,
I have a system that had Solaris 10 8/07 previously installed with disks c0t0d0
and c0t1d0 mirrored using SVM and c0t2d0 and c0t3d0 added as whole disks to a
zpool.
When attempting to reinstall the OS via jumpstart the following error is output:
...
Using rules.ok from jumpstart:/js.
My guess is that you're looking for the on-disk format document:
http://www.opensolaris.org/os/community/zfs/docs/ondiskformat0822.pdf
If you're more interested in the user-visible metadata (e.g. the settings for
pools and filesystems), then you'd probably have to dig through the various
2007-11-07.14:15:19 zpool create -f tank raidz2 [ ... ]
2007-12-03.14:42:28 zpool add tank c4t0d0
c4t0d0 is not part of raidz2. How can I fix this?
Back up your data; destroy the pool; and re-create it.
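(A sketch of that cycle, assuming enough space on some backup location; the
snapshot name "migrate" and the path /backup/datatel.zfs are illustrative, and
the create command repeats the disk list from the zpool history above:)

```
# Illustrative: back up, destroy, and re-create the pool.
zfs snapshot tank/datatel@migrate
zfs send tank/datatel@migrate > /backup/datatel.zfs
zpool destroy tank
zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c2t0d0 c2t1d0 \
    c2t2d0 c3t0d0 c3t1d0 c3t2d0 c3t3d0
zfs receive tank/datatel < /backup/datatel.zfs
```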
Ideally I would like to create another zpool with c4t0d0 plus some more disks
since
Anton B. Rang wrote:
Got an issue which is rather annoying to me - three of my
ZFS caches are regularly using nearly 1/2 of the 1.09Gb of
allocated kmem in my system.
I think this is just the ARC; you can limit its size using:
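(The message is cut off here; on Solaris/Nevada of this vintage the ARC is
usually capped with the zfs_arc_max tunable in /etc/system, which takes effect
after a reboot. The 512MB value below is only an example:)

```
# /etc/system fragment: cap the ZFS ARC at 512MB (0x20000000 bytes).
set zfs:zfs_arc_max = 0x20000000
```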