On 09/18/09 14:34, Jeremy Kister wrote:
On 9/18/2009 1:51 PM, Steffen Weiberle wrote:
I am trying to compile some deployment scenarios of ZFS.
# of systems
Do ZFS root pools count, or only big pools?
Non-root is more interesting to me. However, if you are sharing the root
pool with your data
I am trying to compile some deployment scenarios of ZFS.
If you are running ZFS in production, would you be willing to provide
the following (publicly or privately)?
# of systems
amount of storage
application profile(s)
type of workload (low, high; random, sequential; read-only, read-write,
write-only)
On 08/06/09 14:28, Matt Ingenthron wrote:
If ZFS is not being used significantly, then the ARC should not grow.
The ARC grows based on usage (i.e., the amount of ZFS files/data
accessed). Hence, if you are sure that ZFS usage is low, things
should be fine.
I understand that it won't grow, but I
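For reference, the ARC's current and target sizes can be observed with
kstat (a minimal sketch; arcstats is the standard ZFS kstat module):

  $ kstat -p zfs:0:arcstats:size    (current ARC size, in bytes)
  $ kstat -p zfs:0:arcstats:c       (ARC target size, in bytes)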
For Solaris 10 5/09...
There are supposed to be performance improvements if you create a zpool
on a full disk, such as one with an EFI label. Does the same apply if
the full disk is used with an SMI label, which is required to boot?
I am trying to determine the trade-off, if any, of having a
I looked in the Evil Tuning Guide and the Best Practices guide. The BP
mentions using the whole disk; however, it is not clear whether that
applies to a root, non-EFI pool, so your information is of value to me.
Steffen
Cindy
On 08/05/09 15:07, Steffen Weiberle wrote:
For Solaris 10 5/09...
There are supposed
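For concreteness, the two layouts being compared look like this (pool
and device names are hypothetical):

  # zpool create datapool c1t1d0    (whole disk: ZFS writes an EFI label)
  # zpool create rpool c1t0d0s0     (slice on an SMI-labeled disk, as
                                     required for a bootable pool)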
On 04/21/09 11:08, Andrew Nicols on behalf of LUNS Root Output wrote:
All,
Is there anywhere which suggests what versions of zfs and zpool will make
it into Solaris 10 update 7 05/09 next month? I'm currently running Update
6 on an x4500 but would really like to have the new zpool scrub code
On 04/21/09 13:12, bob netherton wrote:
Since I am trying to keep my pools at a version that different updates
can handle, I personally am glad it did not get rev'ed. I did get into
trouble recently when SX-CE build 112 created a file system on an old
pool with a version newer than Solaris 10
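One way to stay within what every attached release understands is to
check the supported versions and pin new pools at create time (a
sketch; the pool name, device, and version number are illustrative):

  # zpool upgrade -v                          (list pool versions this
                                               release supports)
  # zfs upgrade -v                            (list file system versions)
  # zpool create -o version=10 pool1 c1t1d0   (pin the new pool at
                                               version 10)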
I have seen Tim Foster's auto-snapshot and it looks interesting.
Is there a bug id or effort to deliver a snapshot policy and space
management framework? Not looking for a GUI, although a CLI-based UI
might be helpful. Customer needs something that allows the use of
snapshots on 100s of systems,
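In the absence of such a framework, a minimal cron-driven policy can be
sketched in shell (the dataset name, the auto- prefix, and the keep-7
retention are all assumptions):

  #!/bin/sh
  # Snapshot pool1/data with a sortable timestamp, then keep only the
  # seven newest auto- snapshots.
  DS=pool1/data
  zfs snapshot $DS@auto-`date +%Y%m%d%H%M`
  zfs list -r -H -t snapshot -o name $DS | grep "^$DS@auto-" | \
      sort -r | sed '1,7d' | while read snap
  do
      zfs destroy "$snap"
  done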
Has anybody stored half a billion small (< 50KB) files in a ZFS data
store? If so, any feedback on how many file systems [and sub-file
systems, if any] you used?
How were ls times? Any insights into snapshots, clones, send/receive,
or restores in general?
How about NFS access?
Thanks
Steffen
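For what it's worth, the usual way to keep any single file system's
file count down at that scale is to fan out across many datasets in one
pool (a sketch only; the names and the fan-out of 100 are assumptions):

  #!/bin/sh
  # create pool1/files/00 .. pool1/files/99 as separate file systems
  zfs create pool1/files
  i=0
  while [ $i -lt 100 ]; do
      zfs create pool1/files/`printf '%02d' $i`
      i=`expr $i + 1`
  done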
Customer benchmarked an X4600 using UFS on top of VxVM a while back
and got consistent performance under heavy load. Now they have put the
system into system test, but in the process moved from UFS/VxVM to
ZFS. This is Solaris 10 6/06.
They are running at approximately 40% idle most of the time, with 10+%
Customer asks whether ZFS is fully POSIX compliant, e.g., supporting
flock(). Is this a function of ZFS, or does VFS deliver this?
Thanks
Steffen
I have a jumpstart server where the install images are on a ZFS pool.
For PXE boot, several lofs mounts are created and configured in
/etc/vfstab. My system does not boot properly anymore because the
file systems the lofs mounts refer to haven't been mounted yet by ZFS.
What is the best way of
On 09/08/06 09:06, [EMAIL PROTECTED] wrote:
I have the same issue with my home install server. As a dirty
workaround I set mount-at-boot to 'no' for the lofs file systems, to
get the system up. But with every new OS added by JET, the
mount-at-boot entry reappears. It seems to me the question is when should a
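A cleaner approach (dataset and paths here are hypothetical) is to give
the dataset a legacy mountpoint so that /etc/vfstab controls the mount
order, placing the zfs entry ahead of the lofs entries that depend on it:

  # zfs set mountpoint=legacy pool1/install

and then in /etc/vfstab:

  pool1/install    -  /export/install   zfs   -  yes  -
  /export/install  -  /tftpboot/media   lofs  -  yes  -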
I have just upgraded my jumpstart server to S10 U2 b9a.
It is an Ultra 10 with two 120GB EIDE drives. The second drive (disk1)
is new, and has U2 b9a installed on a slice, with most of the space in
slice 7 for the ZFS pool.
I created pool1 on disk1, and created the file system pool1/ro (for
On 06/12/06 09:09, Darren J Moffat wrote:
Steffen Weiberle wrote:
I created a second file system, pool1/jumpstart, and decided to mv
pool1/ro/jumpstart/* to pool1/jumpstart. All the data is staying in
the same pool. No data is actually getting changed; it is just being
relocated. If this were
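One point worth making explicit here: pool1/ro and pool1/jumpstart are
separate file systems even though they share a pool, so mv(1) cannot
simply rename the files across them; it falls back to copy-and-unlink:

  # zfs create pool1/jumpstart
  # mv /pool1/ro/jumpstart/* /pool1/jumpstart/
      (crosses a file system boundary: the data blocks are copied and
       the originals unlinked, even though both datasets share one pool)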