Re: [zfs-discuss] If you have ZFS in production, willing to share some details (with me)?

2009-09-22 Thread Steffen Weiberle
On 09/18/09 14:34, Jeremy Kister wrote: On 9/18/2009 1:51 PM, Steffen Weiberle wrote: I am trying to compile some deployment scenarios of ZFS. # of systems do zfs root count? or only big pools? Non-root is more interesting to me. However, if you are sharing the root pool with your data

[zfs-discuss] If you have ZFS in production, willing to share some details (with me)?

2009-09-18 Thread Steffen Weiberle
I am trying to compile some deployment scenarios of ZFS. If you are running ZFS in production, would you be willing to provide the following (publicly or privately)? # of systems, amount of storage, application profile(s), type of workload (low, high; random, sequential; read-only, read-write, write-only)

Re: [zfs-discuss] limiting the ARC cache during early boot, without /etc/system

2009-08-06 Thread Steffen Weiberle
On 08/06/09 14:28, Matt Ingenthron wrote: If ZFS is not being used significantly, then the ARC should not grow. The ARC grows based on usage (i.e., the amount of ZFS files/data accessed). Hence, if you are sure that the ZFS usage is low, things should be fine. I understand that it won't grow, but I
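As background for the thread above, the tunable commonly cited for capping the ARC is zfs_arc_max. A minimal sketch, assuming a Solaris 10 / OpenSolaris kernel on which the tunable is writable; the 1 GB value is purely illustrative:

```shell
# Persistent cap via /etc/system (takes effect at next boot):
#   set zfs:zfs_arc_max = 0x40000000

# Runtime adjustment on a live system, without touching /etc/system,
# using mdb in kernel read/write mode (whether this sticks safely
# depends on the kernel build -- verify against your release):
echo "zfs_arc_max/Z 0x40000000" | mdb -kw

# Observe the current ARC size:
kstat -p zfs:0:arcstats:size
```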

[zfs-discuss] ?: SMI vs. EFI label and a disk's write cache

2009-08-05 Thread Steffen Weiberle
For Solaris 10 5/09... There are supposed to be performance improvements if you create a zpool on a full disk, such as one with an EFI label. Does the same apply if the full disk is used with an SMI label, which is required to boot? I am trying to determine the trade-off, if any, of having a

Re: [zfs-discuss] ?: SMI vs. EFI label and a disk's write cache

2009-08-05 Thread Steffen Weiberle
I looked in the Evil Tuning Guide. The BP mentions the whole disk; however, it is not clear whether that applies to the root, non-EFI pool, so your information is of value to me. Steffen Cindy On 08/05/09 15:07, Steffen Weiberle wrote: For Solaris 10 5/09... There are supposed
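For context on the write-cache question in this thread: when a pool is built on a whole disk (no slice), ZFS can enable the disk's write cache itself; on an SMI-labeled slice it does not, but the cache state can be inspected and changed by hand. A sketch of the interactive steps, assuming a SCSI disk whose expert-mode menu exposes the cache entries (menus differ by disk type):

```shell
# Run format in expert mode and walk the menus:
format -e
#   > select the disk
#   > cache
#   > write_cache
#   > display          # show the current write-cache state
#   > enable           # turn it on (ZFS issues cache flushes as needed)
```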

Re: [zfs-discuss] ZFS/zpool Versions in Solaris 10

2009-04-21 Thread Steffen Weiberle
On 04/21/09 11:08, Andrew Nicols on behalf of LUNS Root Output wrote: All, Is there anywhere which suggests what versions of zfs and zpool will make it into Solaris 10 update 7 05/09 next month? I'm currently running Update 6 on an x4500 but would really like to have the new zpool scrub code

Re: [zfs-discuss] ZFS/zpool Versions in Solaris 10

2009-04-21 Thread Steffen Weiberle
On 04/21/09 13:12, bob netherton wrote: Since I am trying to keep my pools at a version that different updates can handle, I personally am glad it did not get rev'ed. I did get into trouble recently when SXCE build 112 created a file system on an old pool with a version newer than Solaris 10

[zfs-discuss] ?: any effort for snapshot management

2008-09-05 Thread Steffen Weiberle
I have seen Tim Foster's auto-snapshot and it looks interesting. Is there a bug id or effort to deliver a snapshot policy and space management framework? Not looking for a GUI, although a CLI-based UI might be helpful. Customer needs something that allows the use of snapshots on 100s of systems,
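In the absence of the framework asked about above, a cron-driven rotation is the usual stopgap. A hypothetical sketch (the dataset name, snapshot prefix, and retention count are all illustrative; Tim Foster's zfs-auto-snapshot service referenced in the thread is the more complete approach):

```shell
#!/bin/sh
# Take a timestamped snapshot of one dataset and keep only the
# newest $KEEP "auto-" snapshots.
DATASET=pool1/data
KEEP=7

zfs snapshot "${DATASET}@auto-$(date +%Y%m%d-%H%M)"

# List snapshots newest-first, drop the $KEEP we want to keep,
# destroy the rest.
zfs list -H -t snapshot -o name -S creation |
  grep "^${DATASET}@auto-" |
  sed "1,${KEEP}d" |
  while read snap; do
    zfs destroy "$snap"
  done
```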

[zfs-discuss] ?: 1/2 Billion files in ZFS

2008-06-16 Thread Steffen Weiberle
Has anybody stored 1/2 billion small (< 50KB) files in a ZFS data store? If so, any feedback on how many file systems [and sub-file systems, if any] you used? How were ls times? Any insights into snapshots, clones, send/receive, or restores in general? How about NFS access? Thanks Steffen
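One approach that bears on the "how many file systems" question is hashing files into many child datasets so that no single filesystem (or its snapshot) carries all half-billion entries. A hypothetical sketch; the pool name, dataset layout, and fan-out of 256 are illustrative, not a recommendation from the thread:

```shell
# Create 256 child filesystems named 00..ff under pool1/files;
# files would then be placed by the first byte of a name hash.
i=0
while [ "$i" -lt 256 ]; do
  zfs create "pool1/files/$(printf '%02x' "$i")"
  i=$((i + 1))
done
```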

[zfs-discuss] ?: cyclical kernel/system processing (approx every 4 minutes)

2006-10-23 Thread Steffen Weiberle
Customer benchmarked an X4600 using UFS on top of VxVM a while back and got consistent performance under heavy load. Now they have put the system into system test, but in the process moved from UFS/VxVM to ZFS. This is 6/06. They are running at approximately 40% idle most of the time, with 10+%

[zfs-discuss] ?: ZFS and POSIX

2006-10-20 Thread Steffen Weiberle
Customer asks whether ZFS is fully POSIX compliant, for example with respect to flock? Is this a function of ZFS, or does VFS deliver this? Thanks Steffen

[zfs-discuss] ?: ZFS and jumpstart export race condition

2006-09-08 Thread Steffen Weiberle
I have a jumpstart server where the install images are on a ZFS pool. For PXE boot, several lofs mounts are created and configured in /etc/vfstab. My system does not boot properly anymore because the mounts referring to jumpstart files haven't been mounted yet via ZFS. What is the best way of

Re: [zfs-discuss] ?: ZFS and jumpstart export race condition

2006-09-08 Thread Steffen Weiberle
[EMAIL PROTECTED] wrote on 09/08/06 09:06: I have the same with my home install server. As a dirty solution I set mount-at-boot to no for the lofs filesystems, to get the system up. But with every new OS added by JET the mount at reboot reappears. Seems to me as the question when should a
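One workaround sketch for the race discussed in this thread: give the jumpstart dataset a legacy mountpoint so it is mounted through /etc/vfstab, where its ordering relative to the dependent lofs entries can be controlled (vfstab is processed top to bottom). Dataset and path names are illustrative:

```shell
# Switch the dataset to legacy mounting:
zfs set mountpoint=legacy pool1/jumpstart

# /etc/vfstab: mount the ZFS dataset before the lofs mounts that
# refer into it:
#   pool1/jumpstart  -  /jumpstart     zfs   -  yes  -
#   /jumpstart/OS    -  /tftpboot/OS   lofs  -  yes  -
```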

[zfs-discuss] ?: zfs mv within pool seems slow

2006-06-12 Thread Steffen Weiberle
I have just upgraded my jumpstart server to S10 u2 b9a. It is an Ultra 10 with two 120GB EIDE drives. The second drive (disk1) is new, and has u2 b9a installed on a slice, with most of the space in slice 7 for the ZFS pool. I created pool1 on disk1, and created the filesystem pool1/ro (for

Re: [zfs-discuss] ?: zfs mv within pool seems slow

2006-06-12 Thread Steffen Weiberle
Darren J Moffat wrote on 06/12/06 09:09: Steffen Weiberle wrote: I created a second filesystem pool1/jumpstart, and decided to mv pool1/ro/jumpstart/* to pool1/jumpstart. All the data is staying in the same pool. No data is actually getting changed, it is just being relocated. If this were
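The behavior under discussion: rename(2) only works within a single filesystem, so when mv crosses two ZFS datasets it gets EXDEV back and falls back to copy-then-unlink, even though both datasets live in the same pool. A sketch of the distinction, with the dataset paths from the thread:

```shell
# Within one dataset: mv is a rename(2), effectively instant
# regardless of file size.
mv /pool1/ro/a /pool1/ro/b

# Across datasets (even in the same pool): rename(2) fails with
# EXDEV, so mv copies every byte and then removes the source --
# slow for large trees.
mv /pool1/ro/jumpstart/* /pool1/jumpstart/
```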