[zfs-discuss] Wishlist items

2007-06-25 Thread Joshua . Goodall
I've been saving up a few wishlist items for zfs. Time to share. 1. A verbose (-v) option to the zfs command line. In particular, zfs sometimes takes a while to return from zfs snapshot -r tank/[EMAIL PROTECTED] when there are a great many iSCSI-shared volumes underneath. A little pr
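
There is no -v today, so one rough way to get some feedback is to list what the recursive snapshot actually created once the command returns. A minimal sketch, with made-up pool, dataset and snapshot names:

    # take a recursive snapshot of every dataset under tank/vols
    zfs snapshot -r tank/vols@backup-20070625

    # confirm what was created (and roughly how many volumes were involved)
    zfs list -t snapshot -r tank/vols | grep backup-20070625 | wc -l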

Re: [zfs-discuss] Write over read priority possible ?

2007-06-25 Thread Roch Bourbonnais
Dedicate some CPU to the task. Create a psrset and bind the ftp daemon to it. If that works, then add a few of the read threads, as many as fit the requirements. -r On 25 Jun 07, at 15:00, Paul van der Zwan wrote: On 25 Jun 2007, at 14:37, [EMAIL PROTECTED] wrote: On 25 Ju
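
For reference, the processor-set approach Roch describes would look roughly like this; the CPU numbers, the set ID printed by psrset, and the in.ftpd daemon name are assumptions, not from the thread:

    # create a processor set from two CPUs; psrset prints the new set's ID (say, 1)
    psrset -c 2 3

    # bind the running ftp daemon processes to that set
    psrset -b 1 $(pgrep in.ftpd)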

Re: [zfs-discuss] zfs space efficiency

2007-06-25 Thread Frank Cusack
On June 25, 2007 1:02:38 PM -0700 Erik Trimble <[EMAIL PROTECTED]> wrote: algorithms. I think (as Casper said) that, should you need to, you use SHA to weed out the cases where the checksums are different (since that definitively indicates they are different), then do a bitwise compare on any that

Re: [zfs-discuss] zfs space efficiency

2007-06-25 Thread Erik Trimble
Bill Sommerfeld wrote: [This is version 2. the first one escaped early by mistake] On Sun, 2007-06-24 at 16:58 -0700, dave johnson wrote: The most common non-proprietary hash calc for file-level deduplication seems to be the combination of the SHA1 and MD5 together. Collisions have been s

Re: [zfs-discuss] zfs space efficiency

2007-06-25 Thread Bill Sommerfeld
[This is version 2. the first one escaped early by mistake] On Sun, 2007-06-24 at 16:58 -0700, dave johnson wrote: > The most common non-proprietary hash calc for file-level deduplication seems > to be the combination of the SHA1 and MD5 together. Collisions have been > shown to exist in MD5 a

Re: [zfs-discuss] zfs space efficiency

2007-06-25 Thread Casper . Dik
>I wouldn't de-duplicate without actually verifying that two blocks were >actually bitwise identical. Absolutely not, indeed. But the nice property of hashes is that if the hashes don't match then the inputs do not either. I.e., the likelihood of having to do a full bitwise compare is vanishi
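
The point that a hash mismatch always means the inputs differ (false positives are possible, false negatives are not) is what keeps the verify step cheap. A file-level sketch of "hash first, byte-compare only on a match", with made-up paths, using Solaris digest(1) and cmp(1):

    a=/tank/data/file1
    b=/tank/data/file2
    if [ "$(digest -a sha1 "$a")" = "$(digest -a sha1 "$b")" ]; then
        # hashes match: only now pay for the full bitwise comparison
        cmp -s "$a" "$b" && echo "identical: safe to deduplicate"
    else
        # hashes differ, so the files certainly differ; no compare needed
        echo "different"
    fi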

Re: [zfs-discuss] zfs space efficiency

2007-06-25 Thread Bill Sommerfeld
On Sun, 2007-06-24 at 16:58 -0700, dave johnson wrote: > The most common non-proprietary hash calc for file-level deduplication seems > to be the combination of the SHA1 and MD5 together. Collisions have been > shown to exist in MD5 and theorized to exist in SHA1 by extrapolation, but > the prob

[zfs-discuss] ZVOLs and O_DSYNC, fsync() behavior

2007-06-25 Thread Bill Moloney
I've spent some time searching, and I apologize if I've missed this somewhere, but in testing ZVOL write performance I cannot see any noticeable difference between opening a ZVOL with or without O_DSYNC. Does the O_DSYNC flag have any actual influence on ZVOL writes? For ZVOLs that I have ope

[zfs-discuss] ZFS hotplug

2007-06-25 Thread Eric Ham
Hello, I ran across this entry on the following page: ZFS hotplug - PSARC/2007/197 http://www.opensolaris.org/os/community/on/flag-days/66-70/ Since Build 68 hasn't closed yet, I assume this would currently be in a nightly build? If so, has anyone had a chance to play with this yet and see how
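
If this PSARC case is the one that added the pool-level autoreplace behaviour (my assumption, not stated on that flag-day page), then on a build that has it the knob should be visible as an ordinary pool property:

    zpool get autoreplace tank
    zpool set autoreplace=on tank   # newly inserted devices in the same slot are used automatically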

[zfs-discuss] Odd behaviour with heavy workloads.

2007-06-25 Thread Dickon Hood
I'm seeing some odd behaviour with ZFS and a reasonably heavy workload. I'm currently on contract to BBC R&D to build what is effectively a network-based personal video recorder. To that end, I have a rather large collection of discs, arranged very poorly as it's something of a hack at present, an

Re: [zfs-discuss] zpool import minor bug in snv_64a

2007-06-25 Thread Dennis Clarke
> You've tripped over a variant of: > > 6335095 Double-slash on /. pool mount points > > - Eric > oh well .. no points for originality then I guess :-) Thanks

Re: [zfs-discuss] zpool import minor bug in snv_64a

2007-06-25 Thread Eric Schrock
You've tripped over a variant of: 6335095 Double-slash on /. pool mount points - Eric On Mon, Jun 25, 2007 at 02:11:33AM -0400, Dennis Clarke wrote: > > Not sure if this has been reported or not. > > This is fairly minor but slightly annoying. > > After fresh install of snv_64a I run zpool im

[zfs-discuss] Re: Suggestions on 30 drive configuration?

2007-06-25 Thread Bryan Wagoner
What is the controller setup going to look like for the 30 drives? Is it going to be fibre channel, SAS, etc. and what will be the Controller-to-Disk ratio? ~Bryan

Re: [zfs-discuss] Re: Slow write speed to ZFS pool (via NFS)

2007-06-25 Thread Thomas Garner
Thanks, Roch! Much appreciated knowing what the problem is and that a fix is in a forthcoming release. Thomas On 6/25/07, Roch - PAE <[EMAIL PROTECTED]> wrote: Sorry about that; looks like you've hit this: 6546683 marvell88sx driver misses wakeup for mv_empty_cv http://bugs.

[zfs-discuss] Re: ZIL on user specified devices?

2007-06-25 Thread Bryan Wagoner
Thanks for the info Eric and Eric.

[zfs-discuss] Re: ZIL on user specified devices?

2007-06-25 Thread Bryan Wagoner
Outstanding! Wow that was a ka-winky-dink in timing. This will clear up a lot of problems for my customers in HPC environments and in some of the SAN environments. Thanks a lot for the info. I'll keep my eyes open.

Re: [zfs-discuss] Re: ZFS Scalability/performance

2007-06-25 Thread Peter Schuller
> FreeBSD plays it safe too. It's just that UFS, and other file systems on > FreeBSD, understand write caches and flush at appropriate times. Do you have something to cite w.r.t. UFS here? Because as far as I know, that is not correct. FreeBSD shipped with write caching turned off by default for
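
For anyone who wants to check their own FreeBSD box: as far as I recall the IDE/ATA write cache is governed by the hw.ata.wc loader tunable (treat the name and its default as assumptions for your particular release):

    # show the current setting: 1 = drive write cache enabled, 0 = disabled
    sysctl hw.ata.wc

    # disable the drive write cache at the next boot
    echo 'hw.ata.wc="0"' >> /boot/loader.conf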

Re: [zfs-discuss] ZFS version 5 to version 6 fails to import or upgrade

2007-06-25 Thread Mark J Musante
On Tue, 19 Jun 2007, John Brewer wrote: > bash-3.00# zpool import > pool: zones > id: 4567711835620380868 > state: ONLINE > status: The pool is formatted using an older on-disk version. > action: The pool can be imported using its name or numeric identifier, though > some features w
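
For context, the normal sequence for a pool reporting an older on-disk version is below; the pool name comes from the quoted output, the rest is a sketch of what should work in the case where the import does not fail:

    zpool import zones       # import the pool despite the older on-disk version
    zpool upgrade -v         # list the on-disk versions this build understands
    zpool upgrade zones      # upgrade the pool to the current version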

Re: [zfs-discuss] zpool import minor bug in snv_64a

2007-06-25 Thread Dennis Clarke
> On Mon, Jun 25, 2007 at 02:34:21AM -0400, Dennis Clarke wrote: note that it was well after 2 AM for me .. half blind asleep that's my excuse .. I'm sticking to it. :-) >> >> > in /usr/src/cmd/zpool/zpool_main.c : >> > >> >> at line 680 forwards we can probably check for this scenario

Re: [zfs-discuss] Write over read priority possible ?

2007-06-25 Thread Paul van der Zwan
On 25 Jun 2007, at 14:37, [EMAIL PROTECTED] wrote: On 25 Jun 2007, at 14:00, [EMAIL PROTECTED] wrote: I'm testing an X4500 where we need to send over 600MB/s over the network. This is no problem, I get about 700MB/s over a single 10G interface. Problem is the box also needs to accept

Re: [zfs-discuss] Write over read priority possible ?

2007-06-25 Thread Casper . Dik
> >On 25 Jun 2007, at 14:00, [EMAIL PROTECTED] wrote: > >> >>> I'm testing an X4500 where we need to send over 600MB/s over the >>> network. >>> This is no problem, I get about 700MB/s over a single 10G interface. >>> Problem is the box also needs to accept incoming data at 100MB/s. >>> If I do a

Re: [zfs-discuss] Write over read priority possible ?

2007-06-25 Thread Paul van der Zwan
On 25 Jun 2007, at 14:00, [EMAIL PROTECTED] wrote: I'm testing an X4500 where we need to send over 600MB/s over the network. This is no problem, I get about 700MB/s over a single 10G interface. Problem is the box also needs to accept incoming data at 100MB/s. If I do a simple test ftp-ing fil

Re: [zfs-discuss] Write over read priority possible ?

2007-06-25 Thread Casper . Dik
>I'm testing an X4500 where we need to send over 600MB/s over the >network. >This is no problem, I get about 700MB/s over a single 10G interface. >Problem is the box also needs to accept incoming data at 100MB/s. >If I do a simple test ftp-ing files into the same filesystem I see >the FTP being

Re: [zfs-discuss] Re: /dev/random problem after moving to zfs boot:

2007-06-25 Thread Darren J Moffat
I think the problem is a timing one. Something must be attempting to use the in-kernel API to /dev/random sooner with ZFS boot than with UFS boot. We need some boot-time DTrace output to find out who is attempting to call any of the APIs in misc/kcf - particularly the random provider ones.
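
A rough sketch of how that boot-time trace could be gathered with anonymous DTrace; the fbt probe names below (the DDI random routines) are my guess at a useful starting point, not something confirmed in the thread:

    # install an anonymous enabling, reboot, then claim the trace data
    dtrace -A -n 'fbt::random_get_bytes:entry, fbt::random_get_pseudo_bytes:entry { stack(); }'
    reboot

    # once the ZFS-boot system is up, consume what was recorded during boot
    dtrace -ae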

Re: [zfs-discuss] Re: Slow write speed to ZFS pool (via NFS)

2007-06-25 Thread Roch - PAE
Sorry about that; looks like you've hit this: 6546683 marvell88sx driver misses wakeup for mv_empty_cv http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6546683 Fixed in snv_64. -r Thomas Garner writes: > > We have seen this behavior, but it appears to be entirely re

[zfs-discuss] Write over read priority possible ?

2007-06-25 Thread Paul van der Zwan
I'm testing an X4500 where we need to send over 600MB/s over the network. This is no problem, I get about 700MB/s over a single 10G interface. Problem is the box also needs to accept incoming data at 100MB/s. If I do a simple test ftp-ing files into the same filesystem I see the FTP being limite
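
One simple way to watch the two streams compete while reproducing this (the pool name is a placeholder): per-second read and write bandwidth for the whole pool, and per-device latency, during the ftp upload:

    zpool iostat tank 1

    # or per-device service times and utilisation
    iostat -xnz 1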

[zfs-discuss] 'How ZFS works' (40 pages in English, 42 in Russian) and 'ZFS—The Last Word in File Systems'

2007-06-25 Thread Graham Perrin
On 22 Jun 2007, at 19:35, Victor Latushkin wrote: Hi, Recently the Russian edition of PC Magazine published an article about ZFS titled ZFS - Новый взгляд на файловые системы, or in English, ZFS - A new view on file systems http://www.pcmag.ru/solutions/detail.php?ID=9141 There's already so