Re: [zfs-discuss] ZFS/WAFL lawsuit

2007-09-06 Thread johansen-osdev
It's Columbia Pictures vs. Bunnell: http://www.eff.org/legal/cases/torrentspy/columbia_v_bunnell_magistrate_order.pdf The Register syndicated a Security Focus article that summarizes the potential impact of the court decision: http://www.theregister.co.uk/2007/08/08/litigation_data_retention/

Re: [zfs-discuss] Extremely long creat64 latencies on highly utilized zpools

2007-08-15 Thread johansen-osdev
You might also consider taking a look at this thread: http://mail.opensolaris.org/pipermail/zfs-discuss/2007-July/041760.html Although I'm not certain, this sounds a lot like the other pool fragmentation issues. -j On Wed, Aug 15, 2007 at 01:11:40AM -0700, Yaniv Aknin wrote: > Hello friends, >

Re: [zfs-discuss] is send/receive incremental

2007-08-08 Thread johansen-osdev
You can do it either way. Eric Kustarz has a good explanation of how to set up incremental send/receive on your laptop. The description is on his blog: http://blogs.sun.com/erickustarz/date/20070612 The technique he uses is applicable to any ZFS filesystem. -j On Wed, Aug 08, 2007 at
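
A minimal sketch of the incremental technique Eric describes, using hypothetical pool, dataset, and snapshot names:

   # zfs snapshot tank/home@mon
   # zfs send tank/home@mon | zfs receive backup/home
   # zfs snapshot tank/home@tue
   # zfs send -i tank/home@mon tank/home@tue | zfs receive backup/home

The first send/receive copies the whole dataset; the -i form sends only the blocks that changed between the two snapshots, which is what makes repeated backups cheap.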

Re: [zfs-discuss] si3124 controller problem and fix (fwd)

2007-07-17 Thread johansen-osdev
In an attempt to speed up progress on some of the si3124 bugs that Roger reported, I've created a workspace with the fixes for 6565894 (sata drives are not identified by si3124 driver) and 6566207 (si3124 driver loses interrupts). I'm attaching a driver which contains these fixes as well as a diff
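
Before and after installing a test driver like this, it can be worth confirming which si3124 module is actually loaded; a hedged sketch using standard commands:

   # modinfo | grep si3124            (shows the loaded module and its version string)
   # grep si3124 /etc/driver_aliases  (shows which PCI IDs bind to the driver)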

Re: [zfs-discuss] ZFS performance and memory consumption

2007-07-06 Thread johansen-osdev
> But now I have another question. > How 8k blocks will impact on performance ? When tuning recordsize for things like databases, we try to recommend that the customer's recordsize match the I/O size of the database record. I don't think that's the case in your situation. ZFS is clever enough th
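
A minimal sketch of the recordsize tuning being described, with a hypothetical dataset name; the property only affects files written after it is set:

   # zfs set recordsize=8k tank/db
   # zfs get recordsize tank/db

Matching recordsize to the database's record size avoids read-modify-write of larger blocks; for general-purpose file storage the 128K default is usually the right choice.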

Re: [zfs-discuss] si3124 controller problem and fix (fwd)

2007-06-07 Thread johansen-osdev
> it's been assigned CR 6566207 by Linda Bernal. Basically, if you look > at si_intr and read the comments in the code, the bug is pretty > obvious. > > si3124 driver's interrupt routine is incorrectly coded. The ddi_put32 > that clears the interrupts should be enclosed in an "else" block, >

Re: [zfs-discuss] Re: [storage-discuss] NCQ performance

2007-05-29 Thread johansen-osdev
> When sequential I/O is done to the disk directly there is no performance > degradation at all. All filesystems impose some overhead compared to the rate of raw disk I/O. It's going to be hard to store data on a disk unless some kind of filesystem is used. All the tests that Eric and I have p
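
The comparison being made can be reproduced with a pair of timed sequential reads, one against the raw device and one against a file in the pool (device and file names are hypothetical):

   # /usr/bin/time dd if=/dev/dsk/c0t0d0 of=/dev/null bs=128k count=10000
   # /usr/bin/time dd if=/tank/testfile of=/dev/null bs=128k count=10000

The difference between the two is the filesystem overhead under discussion; some gap is expected for any filesystem.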

Re: [zfs-discuss] Lots of overhead with ZFS - what am I doing wrong?

2007-05-16 Thread johansen-osdev
Marko, Matt and I discussed this offline some more and he had a couple of ideas about double-checking your hardware. Your controller (or perhaps the disks) appears to have trouble with multiple simultaneous I/Os to the same disk, and prefetch seems to aggravate the problem. When I asked M
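
One hedged way to test the prefetch theory is to disable ZFS prefetch and rerun the same workload; this assumes a build that exposes the zfs_prefetch_disable tunable:

   set zfs:zfs_prefetch_disable = 1   (add to /etc/system and reboot)

If the simultaneous-I/O behavior on the controller changes with prefetch off, that points at the hardware interaction rather than ZFS itself.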

Re: [zfs-discuss] Lots of overhead with ZFS - what am I doing wrong?

2007-05-16 Thread johansen-osdev
At Matt's request, I did some further experiments and found that this appears to be particular to your hardware. This is not a general 32-bit problem. I re-ran this experiment on a 1-disk pool using both 32- and 64-bit kernels and got identical results: 64-bit == $ /usr/bin/time dd if=/test

Re: [zfs-discuss] Re: Lots of overhead with ZFS - what am I doing wrong?

2007-05-16 Thread johansen-osdev
> >*sata_hba_list::list sata_hba_inst_t satahba_next | ::print > >sata_hba_inst_t satahba_dev_port | ::array void* 32 | ::print void* | > >::grep ".!=0" | ::print sata_cport_info_t cport_devp.cport_sata_drive | > >::print -a sata_drive_info_t satadrv_features_support satadrv_settings > >satadrv

Re: [zfs-discuss] Re: Lots of overhead with ZFS - what am I doing wrong?

2007-05-15 Thread johansen-osdev
> Each drive is freshly formatted with one 2G file copied to it. How are you creating each of these files? Also, would you please include the output from the isalist(1) command? > These are snapshots of iostat -xnczpm 3 captured somewhere in the > middle of the operation. Have you double-che
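
For reference, the information being requested can be gathered with something like the following (pool and file names are hypothetical):

   $ isalist                      (reports the instruction sets supported, e.g. whether this is a 64-bit environment)
   # mkfile 2g /tank/testfile     (one way the 2G test files might be created)
   $ iostat -xnczpm 3             (capture a few intervals while the copy is in progress)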

Re: [zfs-discuss] Re: Lots of overhead with ZFS - what am I doing wrong?

2007-05-14 Thread johansen-osdev
Marko, I tried this experiment again using 1 disk and got nearly identical times: # /usr/bin/time dd if=/dev/dsk/c0t0d0 of=/dev/null bs=128k count=1 1+0 records in 1+0 records out real 21.4 user 0.0 sys 2.4 $ /usr/bin/time dd if=/test/filebench/testfile of=/dev/

Re: [zfs-discuss] Re: Lots of overhead with ZFS - what am I doing wrong?

2007-05-14 Thread johansen-osdev
This certainly isn't the case on my machine. $ /usr/bin/time dd if=/test/filebench/largefile2 of=/dev/null bs=128k count=1 1+0 records in 1+0 records out real 1.3 user 0.0 sys 1.2 # /usr/bin/time dd if=/dev/dsk/c0t0d0 of=/dev/null bs=128k count=1 1+0 re

Re: [zfs-discuss] Re: Re: gzip compression throttles system?

2007-05-03 Thread johansen-osdev
A couple more questions here. [mpstat] > CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl > 00 0 3109 3616 316 1965 17 48 45 2450 85 0 15 > 10 0 3127 3797 592 2174 17 63 46 1760 84 0 15 > CPU minf mjf xcal
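
One way to capture the data being discussed while the gzip-compressed pool is under load (interval and count are arbitrary):

   $ mpstat 5 12 > mpstat.out
   $ prstat -mL 5 12 > prstat.out   (per-thread microstate accounting)

The xcal, smtx, and sys columns in the mpstat output are the ones worth watching during the stalls.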

Re: [zfs-discuss] Re: Help me understand ZFS caching

2007-04-20 Thread johansen-osdev
Tony: > Now to another question related to Anton's post. You mention that > directIO does not exist in ZFS at this point. Are there plans to > support DirectIO; any functionality that will simulate directIO or > some other non-caching ability suitable for critical systems such as > databases if t

Re: [zfs-discuss] Bottlenecks in building a system

2007-04-20 Thread johansen-osdev
Adam: > Hi, hope you don't mind if I make some portions of your email public in > a reply--I hadn't seen it come through on the list at all, so it's no > duplicate to me. I don't mind at all. I had hoped to avoid sending the list a duplicate e-mail, although it looks like my first post never m

Re: [zfs-discuss] Bottlenecks in building a system

2007-04-18 Thread johansen-osdev
Adam: > Does anyone have a clue as to where the bottlenecks are going to be with > this: > > 16x hot swap SATAII hard drives (plus an internal boot drive) > Tyan S2895 (K8WE) motherboard > Dual GigE (integral nVidia ports) > 2x Areca 8-port PCIe (8-lane) RAID drivers > 2x AMD Opteron 275 CPUs (2

Re: [zfs-discuss] Re: C'mon ARC, stay small...

2007-03-16 Thread johansen-osdev
> I've been seeing this failure to cap on a number of (Solaris 10 update > 2 and 3) machines since the script came out (arc hogging is a huge > problem for me, esp on Oracle). This is probably a red herring, but my > v490 testbed seemed to actually cap on 3 separate tests, but my t2000 > testbed do

Re: [zfs-discuss] C'mon ARC, stay small...

2007-03-15 Thread johansen-osdev
I suppose I should have been more forward about making my last point. If the arc_c_max isn't set in /etc/system, I don't believe that the ARC will initialize arc.p to the correct value. I could be wrong about this; however, next time you set c_max, set c to the same value as c_max and set p to ha
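
A hedged sketch of capping the ARC from /etc/system, assuming a build recent enough to honor the zfs_arc_max tunable (older builds require patching the variables with mdb -kw instead); the value is an arbitrary 1GB example and takes effect after a reboot:

   set zfs:zfs_arc_max = 0x40000000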

Re: [zfs-discuss] C'mon ARC, stay small...

2007-03-15 Thread johansen-osdev
Something else to consider, depending upon how you set arc_c_max, you may just want to set arc_c and arc_p at the same time. If you try setting arc_c_max, and then setting arc_c to arc_c_max, and then set arc_p to arc_c / 2, do you still get this problem? -j On Thu, Mar 15, 2007 at 05:18:12PM -0
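
A rough sketch of the live-tuning variant suggested here, using the symbol names from this thread; whether these are standalone globals or members of an arc structure varies by build, so verify each with ::print before writing, and remember that patching a running kernel is inherently risky:

   # mdb -kw
   > arc_c_max/Z 0x40000000
   > arc_c/Z 0x40000000
   > arc_p/Z 0x20000000     (half of arc_c, per the suggestion above)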

Re: [zfs-discuss] C'mon ARC, stay small...

2007-03-15 Thread johansen-osdev
Gar. This isn't what I was hoping to see. Buffers that aren't available for eviction aren't listed in the lsize count. It looks like the MRU has grown to 10Gb and most of this could be successfully evicted. The calculation for determining if we evict from the MRU is in arc_adjust() and looks so

Re: [zfs-discuss] C'mon ARC, stay small...

2007-03-15 Thread johansen-osdev
This seems a bit strange. What's the workload, and also, what's the output for: > ARC_mru::print size lsize > ARC_mfu::print size lsize and > ARC_anon::print size For obvious reasons, the ARC can't evict buffers that are in use. Buffers that are available to be evicted should be on the mru or mf
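
Those values can be pulled non-interactively with something like the following, assuming the symbols are visible via kernel CTF on the build in question:

   # echo "ARC_mru::print size lsize" | mdb -k
   # echo "ARC_mfu::print size lsize" | mdb -k
   # echo "ARC_anon::print size" | mdb -k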

Re: [zfs-discuss] understanding zfs/thunoer "bottlenecks"?

2007-02-27 Thread johansen-osdev
> it seems there isn't an algorithm in ZFS that detects sequential writes; > in a traditional fs such as ufs, that would trigger directio. There is no directio for ZFS. Are you encountering a situation in which you believe directio support would improve performance? If so, please explain. -j ___

Re: [zfs-discuss] ZFS multi-threading

2007-02-08 Thread johansen-osdev
> Would the logic behind ZFS take full advantage of a heavily multicored > system, such as on the Sun Niagara platform? Would it utilize all of the > 32 concurrent threads for generating its checksums? Has anyone > compared ZFS on a Sun Tx000, to that of a 2-4 thread x64 machine? Pete and I are workin

Re: [zfs-discuss] Re: ZFS direct IO

2007-01-24 Thread johansen-osdev
> And this feature is independent of whether or not the data is > DMA'ed straight into the user buffer. I suppose so; however, it seems like it would make more sense to configure a dataset property that specifically describes the caching policy that is desired. When directio implies different

Re: [zfs-discuss] Re: ZFS direct IO

2007-01-23 Thread johansen-osdev
> Note also that for most applications, the size of their IO operations > would often not match the current page size of the buffer, causing > additional performance and scalability issues. Thanks for mentioning this, I forgot about it. Since ZFS's default block size is configured to be larger th
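
For context, the default block size can be confirmed per dataset (dataset name hypothetical):

   $ zfs get recordsize tank/fs
   NAME     PROPERTY    VALUE    SOURCE
   tank/fs  recordsize  128K     default

which is considerably larger than the 4K or 8K page sizes most application I/O is aligned to.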

Re: [zfs-discuss] Re: ZFS direct IO

2007-01-23 Thread johansen-osdev
> Basically speaking - there needs to be some sort of strategy for > bypassing the ARC or even parts of the ARC for applications that > may need to advise the filesystem of either: > 1) the delicate nature of imposing additional buffering for their > data flow > 2) already well optimized applicatio

Re: [zfs-discuss] Limit ZFS Memory Utilization

2007-01-10 Thread johansen-osdev
Robert: > Better yet would be if memory consumed by ZFS for caching (dnodes, > vnodes, data, ...) would behave similar to page cache like with UFS so > applications will be able to get back almost all memory used for ZFS > caches if needed. I believe that a better response to memory pressure is a

Re: [zfs-discuss] Question: ZFS + Block level SHA256 ~= almost free CAS Squishing?

2007-01-08 Thread johansen-osdev
> > Note that you'd actually have to verify that the blocks were the same; > > you cannot count on the hash function. If you didn't do this, anyone > > discovering a collision could destroy the colliding blocks/files. > > Given that nobody knows how to find sha256 collisions, you'd of course > ne
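
A minimal illustration of the "verify, don't just trust the hash" point, using hypothetical file names in place of blocks:

   $ digest -a sha256 blockA blockB        (matching digests only suggest the blocks are identical)
   $ cmp blockA blockB && echo identical   (a byte-for-byte comparison is what proves it)

A CAS scheme that collapses blocks on hash match alone is betting that no collision, accidental or constructed, ever occurs; adding the byte comparison removes that bet at the cost of an extra read.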

Re: [zfs-discuss] ZFS and savecore

2006-11-10 Thread johansen-osdev
This is CR: 4894692 caching data in heap inflates crash dump I have a fix which I am testing now. It still needs review from Matt/Mark before it's eligible for putback, though. -j On Fri, Nov 10, 2006 at 02:40:40PM -0800, Thomas Maier-Komor wrote: > Hi, > > I'm not sure if this is the right f

Re: [zfs-discuss] Re: slow reads question...

2006-09-22 Thread johansen-osdev
Harley: > Old 36GB drives: > > | # time mkfile -v 1g zeros-1g > | zeros-1g 1073741824 bytes > | > | real 2m31.991s > | user 0m0.007s > | sys 0m0.923s > > Newer 300GB drives: > > | # time mkfile -v 1g zeros-1g > | zeros-1g 1073741824 bytes > | > | real 0m8.425s > | user 0m0.010

Re: [zfs-discuss] Re: slow reads question...

2006-09-22 Thread johansen-osdev
Harley: >I had tried other sizes with much the same results, but > hadn't gone as large as 128K. With bs=128K, it gets worse: > > | # time dd if=zeros-10g of=/dev/null bs=128k count=102400 > | 81920+0 records in > | 81920+0 records out > | > | real 2m19.023s > | user 0m0.105s > | sys