Re: [zfs-discuss] Running on Dell hardware?

2010-10-23 Thread Henrik Johansen

Re: [zfs-discuss] Running on Dell hardware?

2010-10-14 Thread Henrik Johansen
'Edward Ned Harvey' wrote: From: Henrik Johansen [mailto:hen...@scannet.dk] The 10g models are stable - especially the R905's are real workhorses. You would generally consider all your machines stable now? Can you easily pdsh to all those machines? Yes - the only problem child has been 1

Re: [zfs-discuss] Running on Dell hardware?

2010-10-13 Thread Henrik Johansen

Re: [zfs-discuss] future of OpenSolaris

2010-02-22 Thread Henrik Johansen

Re: [zfs-discuss] future of OpenSolaris

2010-02-22 Thread Henrik Johansen
On 02/22/10 03:35 PM, Jacob Ritorto wrote: On 02/22/10 09:19, Henrik Johansen wrote: On 02/22/10 02:33 PM, Jacob Ritorto wrote: On 02/22/10 06:12, Henrik Johansen wrote: Well - one thing that makes me feel a bit uncomfortable is the fact that you can no longer buy OpenSolaris Support

Re: [zfs-discuss] [indiana-discuss] future of OpenSolaris

2010-02-22 Thread Henrik Johansen
running Solaris on Sun hardware. Sun System Service Plans != (Open)Solaris Support subscriptions. But thank you for the scare, Chicken Little. --Tim

Re: [zfs-discuss] Large scale ZFS deployments out there (200 disks)

2010-01-29 Thread Henrik Johansen
running for about a year with no major issues so far. The only hiccups we've had were all HW-related (no fun in firmware-upgrading 200+ disks). Will you? :) Thanks, Robert

Re: [zfs-discuss] Large scale ZFS deployments out there (200 disks)

2010-01-29 Thread Henrik Johansen
On 01/29/10 07:36 PM, Richard Elling wrote: On Jan 29, 2010, at 12:45 AM, Henrik Johansen wrote: On 01/28/10 11:13 PM, Lutz Schumann wrote: While thinking about ZFS as the next-generation filesystem without limits, I am wondering if the real world is ready for this kind of incredible technology

Re: [zfs-discuss] Pulsing write performance

2009-08-27 Thread Henrik Johansen

Re: [zfs-discuss] Pool iscsi /zfs performance in opensolaris 0906

2009-08-05 Thread Henrik Johansen

Re: [zfs-discuss] Pool iscsi /zfs performance in opensolaris 0906

2009-08-05 Thread Henrik Johansen

Re: [zfs-discuss] Pool iscsi /zfs performance in opensolaris 0906

2009-08-05 Thread Henrik Johansen

Re: [zfs-discuss] Pool iscsi /zfs performance in opensolaris 0906

2009-08-05 Thread Henrik Johansen

Re: [zfs-discuss] Best controller card for 8 SATA drives ?

2009-06-23 Thread Henrik Johansen
and portability. If I remember correctly, I think we're using the Adaptec 3085. I've pulled 465MB/s write and 1GB/s read off the MD1000 filled with SATA drives. Regards, Erik Ableson +33.6.80.83.58.28 Sent from my iPhone On 23 June 2009, at 21:18, Henrik Johansen hen...@scannet.dk wrote

Re: [zfs-discuss] Large zpool design considerations

2008-07-04 Thread Henrik Johansen

Re: [zfs-discuss] Large zpool design considerations

2008-07-03 Thread Henrik Johansen
of limited by 2 things: 1. We are a 100% Dell shop. 2. We already have lots of enclosures that I would like to reuse for my project. The HBA cards are SAS 5/E (LSI SAS1068 chipset) cards, and the enclosures are Dell MD1000 disk arrays.

Re: [zfs-discuss] zfs data corruption

2008-04-24 Thread johansen
I'm just interested in understanding how zfs determined there was data corruption when I have checksums disabled and there were no non-retryable read errors reported in the messages file. If the metadata is corrupt, how is ZFS going to find the data blocks on disk? I don't believe it was a
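For context on the point being made here: the checksum property only governs user data, while pool metadata is always checksummed, which is how ZFS can still detect problems with checksums "disabled". A minimal sketch of what one might check (pool and dataset names are hypothetical):

  zfs get checksum tank/data     # shows whether user-data checksums are disabled
  zpool status -v tank           # lists any objects with detected errors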

Re: [zfs-discuss] ZFS Performance Issue

2008-02-11 Thread johansen
Is deleting the old files/directories in the ZFS file system sufficient, or do I need to destroy/recreate the pool and/or file system itself? I've been doing the former. The former should be sufficient; it's not necessary to destroy the pool. -j
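A minimal sketch of the two approaches being compared (pool, dataset, and device names are hypothetical):

  # former: simply remove the old data between test runs
  rm -rf /tank/testfs/*

  # latter (not required): recreate the dataset, or even the whole pool
  zfs destroy -r tank/testfs && zfs create tank/testfs
  zpool destroy tank && zpool create tank c0t1d0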

Re: [zfs-discuss] ZFS Performance Issue

2008-02-07 Thread johansen
-Still playing with 'recsize' values but it doesn't seem to be doing much... I don't think I have a good understanding of what exactly is being written... I think the whole file might be overwritten each time because it's in binary format. The other thing to keep in mind is that the tunables like
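The 'recsize' being discussed is the per-dataset recordsize property. A minimal sketch (the dataset name and the 8k value are hypothetical), with the caveat that the property only affects blocks written after the change:

  zfs set recordsize=8k tank/db     # match the application's typical I/O size
  zfs get recordsize tank/db        # verify the current setting

Existing files keep the block size they were written with, which is one reason tuning it mid-experiment can appear to do nothing.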

Re: [zfs-discuss] mdb ::memstat including zfs buffer details?

2007-11-12 Thread johansen
I don't think it should be too bad (for ::memstat), given that (at least in Nevada), all of the ZFS caching data belongs to the zvp vnode, instead of kvp. ZFS data buffers are attached to zvp; however, we still keep metadata in the crashdump. At least right now, this means that

Re: [zfs-discuss] Fileserver performance tests

2007-10-08 Thread johansen
statfile1      988ops/s   0.0mb/s   0.0ms/op   22us/op-cpu
deletefile1    991ops/s   0.0mb/s   0.0ms/op   48us/op-cpu
closefile2     997ops/s   0.0mb/s   0.0ms/op    4us/op-cpu
readfile1      997ops/s 139.8mb/s   0.2ms/op

Re: [zfs-discuss] Direct I/O ability with zfs?

2007-10-05 Thread johansen
But note that, for ZFS, the win with direct I/O will be somewhat less. That's because you still need to read the page to compute its checksum. So for direct I/O with ZFS (with checksums enabled), the cost is W:LPS, R:2*LPS. Is saving one page of writes enough to make a difference?

Re: [zfs-discuss] ZFS/WAFL lawsuit

2007-09-06 Thread johansen-osdev
It's Columbia Pictures vs. Bunnell: http://www.eff.org/legal/cases/torrentspy/columbia_v_bunnell_magistrate_order.pdf The Register syndicated a Security Focus article that summarizes the potential impact of the court decision: http://www.theregister.co.uk/2007/08/08/litigation_data_retention/

Re: [zfs-discuss] Extremely long creat64 latencies on higly utilized zpools

2007-08-15 Thread johansen-osdev
You might also consider taking a look at this thread: http://mail.opensolaris.org/pipermail/zfs-discuss/2007-July/041760.html Although I'm not certain, this sounds a lot like the other pool fragmentation issues. -j On Wed, Aug 15, 2007 at 01:11:40AM -0700, Yaniv Aknin wrote: Hello friends,

Re: [zfs-discuss] is send/receive incremental

2007-08-08 Thread johansen-osdev
You can do it either way. Eric Kustarz has a good explanation of how to set up incremental send/receive on your laptop. The description is on his blog: http://blogs.sun.com/erickustarz/date/20070612 The technique he uses is applicable to any ZFS filesystem. -j On Wed, Aug 08, 2007 at
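In outline, the technique is a full send of an initial snapshot followed by incremental sends of later snapshots. A minimal sketch with hypothetical pool, dataset, and snapshot names (not necessarily the exact steps from the blog post):

  zfs snapshot tank/home@monday
  zfs send tank/home@monday | zfs receive backup/home              # initial full copy
  zfs snapshot tank/home@tuesday
  zfs send -i @monday tank/home@tuesday | zfs receive backup/home  # incremental update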

Re: [zfs-discuss] si3124 controller problem and fix (fwd)

2007-07-17 Thread johansen-osdev
--- /onnv-clone/usr/src/uts/common/io/sata/adapters/si3124/si3124.c  Mon Nov 13 23:20:01 2006
+++ /export/johansen/si-fixes/usr/src/uts/common/io/sata/adapters/si3124/si3124.c  Tue Jul 17 14:37:17 2007
@@ -22,11 +22,11 @@
 /*
  * Copyright 2006 Sun Microsystems, Inc. All rights reserved.
  * Use

Re: [zfs-discuss] Re: [storage-discuss] NCQ performance

2007-05-29 Thread johansen-osdev
When sequential I/O is done to the disk directly, there is no performance degradation at all. All filesystems impose some overhead compared to the rate of raw disk I/O. It's going to be hard to store data on a disk unless some kind of filesystem is used. All the tests that Eric and I have

Re: [zfs-discuss] Re: Lots of overhead with ZFS - what am I doing wrong?

2007-05-16 Thread johansen-osdev
*sata_hba_list::list sata_hba_inst_t satahba_next | ::print sata_hba_inst_t satahba_dev_port | ::array void* 32 | ::print void* | ::grep .!=0 | ::print sata_cport_info_t cport_devp.cport_sata_drive | ::print -a sata_drive_info_t satadrv_features_support satadrv_settings

Re: [zfs-discuss] Lots of overhead with ZFS - what am I doing wrong?

2007-05-16 Thread johansen-osdev
At Matt's request, I did some further experiments and have found that this appears to be particular to your hardware. This is not a general 32-bit problem. I re-ran this experiment on a 1-disk pool using a 32 and 64-bit kernel. I got identical results: 64-bit == $ /usr/bin/time dd

Re: [zfs-discuss] Lots of overhead with ZFS - what am I doing wrong?

2007-05-16 Thread johansen-osdev
Marko, Matt and I discussed this offline some more and he had a couple of ideas about double-checking your hardware. It looks like your controller (or disks, maybe?) is having trouble with multiple simultaneous I/Os to the same disk. It looks like prefetch aggravates this problem. When I asked

Re: [zfs-discuss] Re: Lots of overhead with ZFS - what am I doing wrong?

2007-05-15 Thread johansen-osdev
Each drive is freshly formatted with one 2G file copied to it. How are you creating each of these files? Also, would you please include the output from the isalist(1) command? These are snapshots of iostat -xnczpm 3 captured somewhere in the middle of the operation. Have you

Re: [zfs-discuss] Re: Lots of overhead with ZFS - what am I doing wrong?

2007-05-14 Thread johansen-osdev
This certainly isn't the case on my machine. $ /usr/bin/time dd if=/test/filebench/largefile2 of=/dev/null bs=128k count=1 1+0 records in 1+0 records out real 1.3 user 0.0 sys 1.2 # /usr/bin/time dd if=/dev/dsk/c0t0d0 of=/dev/null bs=128k count=1 1+0

Re: [zfs-discuss] Re: Lots of overhead with ZFS - what am I doing wrong?

2007-05-14 Thread johansen-osdev
Marko, I tried this experiment again using 1 disk and got nearly identical times: # /usr/bin/time dd if=/dev/dsk/c0t0d0 of=/dev/null bs=128k count=1 1+0 records in 1+0 records out real 21.4 user 0.0 sys 2.4 $ /usr/bin/time dd if=/test/filebench/testfile

Re: [zfs-discuss] Re: Re: gzip compression throttles system?

2007-05-03 Thread johansen-osdev
A couple more questions here. [mpstat] CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl 00 0 3109 3616 316 1965 17 48 45 2450 85 0 15 10 0 3127 3797 592 2174 17 63 46 1760 84 0 15 CPU minf mjf xcal

Re: [zfs-discuss] C'mon ARC, stay small...

2007-03-15 Thread johansen-osdev
This seems a bit strange. What's the workload, and also, what's the output for: ARC_mru::print size lsize, ARC_mfu::print size lsize, and ARC_anon::print size? For obvious reasons, the ARC can't evict buffers that are in use. Buffers that are available to be evicted should be on the mru or mfu
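Those are mdb dcmds run against the live kernel; a minimal sketch of the requested session (output omitted; the ARC_* symbol names are the ones given in the message above):

  # mdb -k
  > ARC_mru::print size lsize
  > ARC_mfu::print size lsize
  > ARC_anon::print size
  > $q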

Re: [zfs-discuss] C'mon ARC, stay small...

2007-03-15 Thread johansen-osdev
Gar. This isn't what I was hoping to see. Buffers that aren't available for eviction aren't listed in the lsize count. It looks like the MRU has grown to 10Gb and most of this could be successfully evicted. The calculation for determining if we evict from the MRU is in arc_adjust() and looks

Re: [zfs-discuss] C'mon ARC, stay small...

2007-03-15 Thread johansen-osdev
Something else to consider: depending upon how you set arc_c_max, you may just want to set arc_c and arc_p at the same time. If you try setting arc_c_max, then setting arc_c to arc_c_max, and then setting arc_p to arc_c / 2, do you still get this problem? -j On Thu, Mar 15, 2007 at 05:18:12PM

Re: [zfs-discuss] C'mon ARC, stay small...

2007-03-15 Thread johansen-osdev
I suppose I should have been more forward about making my last point. If the arc_c_max isn't set in /etc/system, I don't believe that the ARC will initialize arc.p to the correct value. I could be wrong about this; however, next time you set c_max, set c to the same value as c_max and set p to
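A minimal sketch of how those three values might be poked on a live kernel with mdb -kw, assuming the kernel exports them under the names arc_c_max, arc_c, and arc_p used above; the 4 GB figure is purely illustrative:

  # mdb -kw
  > arc_c_max/Z 0x100000000
  > arc_c/Z 0x100000000
  > arc_p/Z 0x80000000
  > $q

The first write caps the ARC target maximum at 4 GB, the second sets the current target c to the same value, and the third sets p to c / 2, as suggested above.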

Re: [zfs-discuss] understanding zfs/thunoer bottlenecks?

2007-02-27 Thread johansen-osdev
it seems there isn't an algorithm in ZFS that detects sequential writes; in a traditional fs such as UFS, one would trigger directio. There is no directio for ZFS. Are you encountering a situation in which you believe directio support would improve performance? If so, please explain. -j
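For contrast, on UFS direct I/O can be forced for a whole filesystem at mount time; a minimal sketch with a hypothetical device and mount point (ZFS, as stated above, has no equivalent option):

  mount -F ufs -o forcedirectio /dev/dsk/c0t0d0s6 /data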

Re: [zfs-discuss] ZFS multi-threading

2007-02-08 Thread johansen-osdev
Would the logic behind ZFS take full advantage of a heavily multicored system, such as the Sun Niagara platform? Would it utilize all of the 32 concurrent threads for generating its checksums? Has anyone compared ZFS on a Sun Tx000 to that of a 2-4 thread x64 machine? Pete and I are working

Re: [zfs-discuss] Re: ZFS direct IO

2007-01-24 Thread johansen-osdev
And this feature is independent of whether or not the data is DMA'ed straight into the user buffer. I suppose so; however, it seems like it would make more sense to configure a dataset property that specifically describes the caching policy that is desired. When directio implies different
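Later ZFS releases did grow dataset properties roughly along these lines; a minimal sketch using the primarycache/secondarycache properties (dataset name hypothetical), shown only to illustrate the kind of per-dataset caching knob being proposed here:

  zfs set primarycache=metadata tank/db    # cache only this dataset's metadata in the ARC
  zfs set secondarycache=none tank/db      # keep this dataset out of any L2ARC device
  zfs get primarycache,secondarycache tank/db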

Re: [zfs-discuss] Re: ZFS direct IO

2007-01-23 Thread johansen-osdev
Basically speaking - there needs to be some sort of strategy for bypassing the ARC or even parts of the ARC for applications that may need to advise the filesystem of either: 1) the delicate nature of imposing additional buffering for their data flow 2) already well optimized applications

Re: [zfs-discuss] Re: ZFS direct IO

2007-01-23 Thread johansen-osdev
Note also that for most applications, the size of their IO operations would often not match the current page size of the buffer, causing additional performance and scalability issues. Thanks for mentioning this; I forgot about it. Since ZFS's default block size is configured to be larger than

[zfs-discuss] Re: slow reads question...

2006-09-22 Thread johansen
ZFS uses a 128k block size. If you change dd to use bs=128k, do you observe any performance improvement? | # time dd if=zeros-10g of=/dev/null bs=8k count=102400 | 102400+0 records in | 102400+0 records out | real 1m8.763s | user 0m0.104s | sys 0m1.759s It's also worth

Re: [zfs-discuss] Re: slow reads question...

2006-09-22 Thread johansen-osdev
Harley: I had tried other sizes with much the same results, but hadn't gone as large as 128K. With bs=128K, it gets worse: | # time dd if=zeros-10g of=/dev/null bs=128k count=102400 | 81920+0 records in | 81920+0 records out | | real 2m19.023s | user 0m0.105s | sys

Re: [zfs-discuss] Re: slow reads question...

2006-09-22 Thread johansen-osdev
Harley: Old 36GB drives: | # time mkfile -v 1g zeros-1g | zeros-1g 1073741824 bytes | | real 2m31.991s | user 0m0.007s | sys 0m0.923s Newer 300GB drives: | # time mkfile -v 1g zeros-1g | zeros-1g 1073741824 bytes | | real 0m8.425s | user 0m0.010s | sys

[zfs-discuss] Re: Memory Usage

2006-09-12 Thread johansen
1) You should be able to limit your cache max size by setting arc.c_max. It's currently initialized to be phys-mem-size - 1GB. Mark's assertion that this is not a best practice is something of an understatement. ZFS was designed so that users/administrators wouldn't have to configure
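On builds that expose it, a similar cap can be applied persistently through the zfs_arc_max tunable in /etc/system; a minimal sketch (the 1 GB value is purely illustrative, and a reboot is required for it to take effect):

  * /etc/system: cap the ARC at 1 GB
  set zfs:zfs_arc_max = 0x40000000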