Re: [zfs-discuss] Faster copy from UFS to ZFS

2011-05-19 Thread Daniel Rock
On Thu, 19 May 2011 15:39:50 +0200, Frank Van Damme wrote: Op 03-05-11 17:55, Brandon High schreef: -H: Hard links If you're going to do this for 2 TB of data, remember to expand your swap space first (or have tons of memory). Rsync will need it to store every inode number in the directory.
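A minimal sketch of such a copy, with placeholder paths (not taken from the thread):

  # -a preserves permissions/timestamps, -H preserves hard links; tracking
  # hard links is what costs rsync the extra memory mentioned above
  rsync -aH /old-ufs/data/ /tank/data/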

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Daniel Rock
Am 05.01.2010 16:22, schrieb Mikko Lammi: However when we deleted some other files from the volume and managed to raise free disk space from 4 GB to 10 GB, the "rm -rf directory" method started to perform significantly faster. Now it's deleting around 4,000 files/minute (240,000/h - quite an impr

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-13 Thread Daniel Rock
Hi, Solaris 10U7, patched to the latest released patches two weeks ago. Four ST31000340NS attached to two SI3132 SATA controllers, RAIDZ1. Self-built system with 2GB RAM and an x86 (chipid 0x0 AuthenticAMD family 15 model 35 step 2 clock 2210 MHz) AMD Athlon(tm) 64 X2 Dual Core Processo

Re: [zfs-discuss] Data size grew.. with compression on

2009-04-13 Thread Daniel Rock
Will Murnane schrieb: Perhaps ZFS could do some very simplistic de-dup here: it has the B-tree entry for the file in question when it goes to overwrite a piece of it, so it could calculate the checksum of the new block and see if it matches the checksum for the block it is overwriting. This is a

Re: [zfs-discuss] Data size grew.. with compression on

2009-04-09 Thread Daniel Rock
Jonathan schrieb: OpenSolaris Forums wrote: if you have a snapshot of your files and rsync the same files again, you need to use the "--inplace" rsync option, otherwise completely new blocks will be allocated for the new files. That's because rsync will write an entirely new file and rename it over th
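A minimal sketch of the --inplace variant (paths are placeholders):

  # update changed blocks inside the existing file instead of writing a new
  # file and renaming it, so unchanged blocks stay shared with old snapshots
  rsync -a --inplace /source/data/ /tank/backup/data/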

Re: [zfs-discuss] Can I create ZPOOL with missing disks?

2009-01-16 Thread Daniel Rock
Jim Klimov schrieb: > Is it possible to create a (degraded) zpool with placeholders specified > instead > of actual disks (parity or mirrors)? This is possible in linux mdadm > ("missing" > keyword), so I kinda hoped this can be done in Solaris, but didn't manage to. Create sparse files with th

Re: [zfs-discuss] create raidz with 1 disk offline

2008-09-28 Thread Daniel Rock
Brandon High schrieb:
> 1. Create a sparse file on an existing filesystem. The size should be the same as your disks.
> 2. Create the raidz with 4 of the drives and the sparse file.
> 3. Export the zpool.
> 4. Delete the sparse file
> 5. Import the zpool. It should come up as degraded, since one
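Those steps as a command sketch, assuming 500 GB disks and example device names; the real fifth disk would be attached later with zpool replace:

  # 1. sparse placeholder file the same size as the real disks
  mkfile -n 500g /var/tmp/fakedisk
  # 2. build the raidz from four real disks plus the placeholder
  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 /var/tmp/fakedisk
  # 3.-5. export, remove the placeholder, re-import; the pool comes back DEGRADED
  zpool export tank
  rm /var/tmp/fakedisk
  zpool import tank
  # later: swap the missing member for the real fifth disk and let it resilver
  zpool replace tank /var/tmp/fakedisk c1t4d0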

Re: [zfs-discuss] SIL3124 stability?

2008-09-25 Thread Daniel Rock
Joerg Schilling schrieb: > Daniel Rock <[EMAIL PROTECTED]> wrote: >> I disabled the BIOS on my cards because I don't need it. I boot from one >> of the onboard SATA ports. > > OK, how did you do this? This will depend on the card you are using. I simply had to

Re: [zfs-discuss] SIL3124 stability?

2008-09-25 Thread Daniel Rock
Joerg Schilling schrieb: > If it works for your system, be happy. I mentioned that the controller may > not be usable in all systems as it hangs up the BIOS in my machine if there > is a > disk connected to the card. I disabled the BIOS on my cards because I don't need it. I boot from one of th

Re: [zfs-discuss] SIL3124 stability?

2008-09-25 Thread Daniel Rock
Mikael Karlsson schrieb: > Hello! > > Anyone with experience with the SIL3124 chipset? Does it work well? > > It's in the HCL, but since SIL3114 apparently is totally crap I'm a bit > skeptical of Silicon Image.. I'm running with two SIL3132 PCIe cards in my system with no problems. The only pan

Re: [zfs-discuss] ZFS Pools 1+TB

2008-08-28 Thread Daniel Rock
Kenny schrieb:
> 2. c6t600A0B800049F93C030A48B3EA2Cd0
>    /scsi_vhci/[EMAIL PROTECTED]
> 3. c6t600A0B800049F93C030D48B3EAB6d0
>    /scsi_vhci/[EMAIL PROTECTED]

Disk 2: 931GB
Disk 3: 931MB

Do you see the difference?

Daniel

Re: [zfs-discuss] How to delete hundreds of emtpy snapshots

2008-07-17 Thread Daniel Rock
Sylvain Dusart schrieb:
> Hi Joe,
>
> I use this script to delete my empty snapshots :
>
> #!/bin/bash
>
> NOW=$(date +%Y%m%d-%H%M%S)
>
> POOL=tank
>
> zfs snapshot -r [EMAIL PROTECTED]
>
> FS_WITH_SNAPSHOTS=$(zfs list -t snapshot | grep '@' | cut -d '@' -f 1 | uniq)
>
> for fs in $FS_WITH_S
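The preview cuts off mid-loop. One way the loop could continue (a sketch, not necessarily Sylvain's original logic) is to destroy every snapshot that holds no unique data:

  for fs in $FS_WITH_SNAPSHOTS ; do
      for snap in $(zfs list -H -o name -t snapshot | grep "^${fs}@") ; do
          # -p prints exact byte counts, so an empty snapshot reports used=0
          used=$(zfs get -Hp -o value used "$snap")
          if [ "$used" -eq 0 ] ; then
              zfs destroy "$snap"
          fi
      done
  done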

Re: [zfs-discuss] ZFS for write-only media?

2008-04-24 Thread Daniel Rock
Joerg Schilling schrieb: > WOM Write-only media http://www.national.com/rap/files/datasheet.pdf Daniel

Re: [zfs-discuss] 24-port SATA controller options?

2008-04-15 Thread Daniel Rock
Tim schrieb: > I'm sure you're already aware, but if not, 22 drives in a raid-6 is > absolutely SUICIDE when using SATA disks. 12 disks is the upper end of > what you want even with raid-6. The odds of you losing data in a 22 > disk raid-6 is far too great to be worth it if you care about your

Re: [zfs-discuss] FiberChannel, and EMC tutorial?

2008-03-17 Thread Daniel Rock
Kyle McDonald schrieb: > Hi all, > > Can anyone explain to me, or point me to any docs that explain how the > following numbers map together? > > I have multiple LUNS exported to my HBA's from multiple EMC arrays. > > zpool status, and /dev/dsk show device names like: > > c0t6006048000

Re: [zfs-discuss] file server performance - slow 64 bit sparc or fast 32 bit intel

2007-06-17 Thread Daniel Rock
Joe S schrieb: I've read that ZFS runs best on 64 bit solaris. Which box would make the best (fastest) fileserver: Sun v120 650 MHz UltraSPARC IIi 2GB RAM --or-- Intel D875PBZ Intel Pentium D 3.0 GHz (32-bit) 2GB RAM I'd go with the Intel server. If you want to build a pure file server just

Re: [zfs-discuss] # devices in raidz.

2006-11-07 Thread Daniel Rock
Richard Elling - PAE schrieb: For modern machines, which *should* be the design point, the channel bandwidth is underutilized, so why not use it? And what about encrypted disks? Simply create a zpool with checksum=sha256, fill it up, then scrub. I'd be happy if I could use my machine during s
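A minimal sketch of that experiment, assuming an example pool named tank:

  # switch new writes to the heavier SHA-256 checksum, write a large amount
  # of data, then force every block to be re-read and verified
  zfs set checksum=sha256 tank
  dd if=/dev/urandom of=/tank/fill bs=1024k    # stop when the pool is nearly full
  zpool scrub tank
  zpool status tank    # watch scrub progress (and system responsiveness)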

Re: [zfs-discuss] # devices in raidz.

2006-11-07 Thread Daniel Rock
Richard Elling - PAE schrieb: The big question, though, is "10% of what?" User CPU? iops? Maybe something like the "slow" parameter of VxVM? slow[=iodelay] Reduces the system performance impact of copy operations. Such operations are us

Re: [zfs-discuss] Very high system loads with ZFS

2006-10-29 Thread Daniel Rock
Peter Guthrie schrieb: So far I've seen *very* high loads twice using ZFS which does not happen when the same task is implemented with UFS. This is a known bug in the SPARC IDE driver. http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6421427 You could try the following workaround u

Re: [zfs-discuss] Re: ZFS Inexpensive SATA Whitebox

2006-10-18 Thread Daniel Rock
Richard Elling - PAE schrieb: Where most people get confused is the expectation that a hot-plug device works like a hot-swap device. Well, seems like you should also inform your documentation team about this definition: http://www.sun.com/products-n-solutions/hardware/docs/html/819-3722-15/i

Re: [zfs-discuss] Re: ZFS Inexpensive SATA Whitebox

2006-10-17 Thread Daniel Rock
Richard Elling - PAE schrieb: The operational definition of "hot pluggable" is: The ability to add or remove a system component while the system remains powered up, and without inducing any hardware errors. This does not imply anything about whether the component is automati

Re: [zfs-discuss] Re: ZFS Inexpensive SATA Whitebox

2006-10-17 Thread Daniel Rock
Richard Elling - PAE schrieb: Frank Cusack wrote: I'm sorry, but that's ridiculous. Sun sells a hardware product which their software does not support. The worst part is it is advertised as working. What is your definition of "work"? NVidia

[zfs-discuss] Re: Re: Comments on a ZFS multiple use of a pool, RFE.

2006-09-14 Thread Daniel Rock
> The OP was just showing a test case. On a real system your HA software > would exchange a heartbeat and not do a double import. The problem with > zfs is that after the original system fails and the second system imports > the pool, the original system also tries to import on [re]boot, and the

Re: [zfs-discuss] Re: Comments on a ZFS multiple use of a pool, RFE.

2006-09-13 Thread Daniel Rock
Anton B. Rang schrieb: The hostid solution that VxVM uses would catch this second problem, > because when A came up after its reboot, it would find that -- even > though it had created the pool -- it was not the last machine to access > it, and could refuse to automatically mount it. If the admi

Re: [zfs-discuss] Oracle on ZFS

2006-08-30 Thread Daniel Rock
[EMAIL PROTECTED] schrieb: Robert and Daniel, How did you put oracle on ZFS:
- one zpool + one filesystem
- one zpool + many filesystems
- a few zpools + one filesystem on each
- a few zpools + many filesystems on each
My goal was not to maximize tuning for ZFS but just compare ZFS vs.

Re: [zfs-discuss] Oracle on ZFS

2006-08-25 Thread Daniel Rock
[EMAIL PROTECTED] schrieb: Hi all, Does anybody use Oracle on ZFS in production (not as a background/testing database but as a front line)? I haven't used it in production yet - but I'm planning to by the end of the year. I did some performance stress tests on ZFS vs. UFS+SVM. For testing I us

Re: [zfs-discuss] Proposal: zfs create -o

2006-08-15 Thread Daniel Rock
Eric Schrock schrieb: This RFE is also required for crypto support, as the encryption algorithm must be known when the filesystem is created. It also has the benefit of cleaning up the implementation of other creation-time properties (volsize and volblocksize) that were previously special cases.
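A sketch of the kind of usage this RFE enables, with illustrative dataset names:

  # set creation-time properties in a single step instead of create-then-set
  zfs create -o compression=on -o mountpoint=/export/scratch tank/scratch
  # volumes already needed their size (and block size) fixed at creation time
  zfs create -V 10g -o volblocksize=8k tank/vol1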

[zfs-discuss] Re: zfs questions from Sun customer

2006-07-28 Thread Daniel Rock
> > * follow-up question from customer > > > Yes, using the c#t#d# disks work, but anyone using fibre-channel storage > on somethink like IBM Shark or EMC Clariion will want multiple paths to > disk using either IBMsdd, EMCpower or Solaris native MPIO. Does ZFS > work wit

Re: [zfs-discuss] legato support

2006-07-20 Thread Daniel Rock
Anne Wong schrieb: The EMC/Legato NetWorker (a.k.a. Sun StorEdge EBS) support for ZFS NFSv4/ACLs will be in NetWorker 7.3.2 release currently targeting for September release. Will it also support the new zfs style automounts? Or do I have to set zfs set mountpoint=legacy zfs/file/syst
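For reference, the legacy-mount fallback being asked about looks roughly like this (dataset and mount point are illustrative):

  # hand mount control back to /etc/vfstab for backup software that expects
  # traditionally mounted filesystems
  zfs set mountpoint=legacy tank/export/home
  # corresponding /etc/vfstab entry:
  # tank/export/home  -  /export/home  zfs  -  yes  -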

Re: [zfs-discuss] Big JBOD: what would you do?

2006-07-19 Thread Daniel Rock
Richard Elling schrieb: First, let's convince everyone to mirror and not RAID-Z[2] -- boil one ocean at a time, there are only 5 you know... :-) For maximum protection 4-disk RAID-Z2 is *always* better than 4-disk RAID-1+0. With more disks use multiple 4-disk RAID-Z2 packs. Daniel
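For concreteness, the two 4-disk layouts being compared (device names are placeholders; the two commands are alternatives, not a sequence):

  # double-parity raidz: any two of the four disks may fail
  zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0
  # striped mirrors: two failures are survived only if they hit different pairs
  zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0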

Re: [zfs-discuss] Big JBOD: what would you do?

2006-07-18 Thread Daniel Rock
Al Hopper schrieb: On Tue, 18 Jul 2006, Daniel Rock wrote: I think this type of calculation is flawed. Disk failures are rare and multiple disk failures at the same time are even more rare. Stop right here! :) If you have a large number of identical disks which operate in the same

Re: [zfs-discuss] Big JBOD: what would you do?

2006-07-18 Thread Daniel Rock
Richard Elling schrieb: Jeff Bonwick wrote: For 6 disks, 3x2-way RAID-1+0 offers better resiliency than RAID-Z or RAID-Z2. Maybe I'm missing something, but it ought to be the other way around. With 6 disks, RAID-Z2 can tolerate any two disk failures, whereas for 3x2-way mirroring, of the (6 ch

Re: [zfs-discuss] system unresponsive after issuing a zpool attach

2006-07-13 Thread Daniel Rock
Joseph Mocker schrieb: Today I attempted to upgrade to S10_U2 and migrate some mirrored UFS SVM partitions to ZFS. I used Live Upgrade to migrate from U1 to U2 and that went without a hitch on my SunBlade 2000. And the initial conversion of one side of the UFS mirrors to a ZFS pool and subseq

[zfs-discuss] zfs snapshot restarts scrubbing?

2006-06-22 Thread Daniel Rock
Hi, yesterday I implemented a simple hourly snapshot on my filesystems. I also regularly initiate a manual "zpool scrub" on all my pools. Usually the scrubbing will run for about 3 hours. But after enabling hourly snapshots I noticed that zfs scrub is always restarted if a new snapshot is cr

Re: [zfs-discuss] Safe to enable write cache?

2006-06-19 Thread Daniel Rock
Robert Milkowski schrieb: Hello UNIX, Monday, June 19, 2006, 10:02:03 AM, you wrote: Ua> Simple question: is it safe to enable the disk write cache when using ZFS? As ZFS should send proper ioctl to flush cache after each transaction group it should be safe. Actually if you give ZFS whole dis
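A sketch of the whole-disk distinction the cut-off sentence points at (device names are examples):

  # given a whole disk, ZFS writes an EFI label and enables the drive's write cache
  zpool create tank c1t1d0
  # given only a slice, ZFS cannot assume it owns the disk and leaves the cache alone
  zpool create tank c1t1d0s0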

[zfs-discuss] ZFS ACLs and Samba

2006-06-18 Thread Daniel Rock
Hi, is anyone working on ZFS ACL support in Samba? Currently I have to disable ACL support in samba. Otherwise I get "permission denied" error messages trying to synchronize my offline folders residing on a Samba server (now on ZFS). Daniel
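The workaround described (disabling ACL support in Samba) would typically be an smb.conf setting along these lines; the share definition is illustrative:

  [offline]
      path = /tank/offline
      # keep Samba from trying to map NT ACLs onto the underlying ACLs,
      # which this Samba version cannot do for ZFS/NFSv4-style ACLs
      nt acl support = no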

Re: [zfs-discuss] fdsync(5, FSYNC) problem and ZFS

2006-06-18 Thread Daniel Rock
Sean Meighan schrieb: The box runs less than 20% load. Everything has been working perfectly until two days ago, now it can take 10 minutes to exit from vi. The following truss shows that the 3 line file that is sitting on the ZFS volume (/archives) took almost 15 minutes in fdsync. /me have

Re: [zfs-discuss] panic in buf_hash_remove

2006-06-13 Thread Daniel Rock
Noel Dellofano schrieb: Out of curiosity, is this panic reproducible? Hmm, not directly. The panic happened during a long running I/O stress test in the middle of the night. The tests had already run for ~6 hours at that time. > A bug should be filed on this for more investigation. Feel fre

[zfs-discuss] panic in buf_hash_remove

2006-06-12 Thread Daniel Rock
Hi, recently had this panic during some I/O stress tests: > $BAD TRAP: type=e (#pf Page fault) rp=fe80005c3980 addr=30 occurred in module "zfs" due to a NULL pointer dereference sched: #pf Page fault Bad kernel fault at addr=0x30 pid=0, pc=0xf3ee322e, sp=0xfe80005c3a70, eflag

Re: [zfs-discuss] Re: Life after the pool party

2006-05-27 Thread Daniel Rock
James Dickens schrieb: tried it again... same results... restores the damn efi label.. disk starts on block 34 not 0, there is no slice 2... that solaris installer demands can not start any track at block 0.. so I can't create a backup slice aka 2. This is a SCSI disk? Then you can send SCSI

Re: [zfs-discuss] hard drive write cache

2006-05-27 Thread Daniel Rock
[EMAIL PROTECTED] schrieb: What about IDE drives (PATA, SATA). Currently only the sd driver implements enabling/disabling the write cache? They typically have write caches enabled by default; and some don't take kindly to disabling the write cache or do not allow it at all. But you could at

Re: [zfs-discuss] hard drive write cache

2006-05-27 Thread Daniel Rock
Bart Smaalders schrieb: ZFS enables the write cache and flushes it when committing transaction groups; this insures that all of a transaction group appears or does not appear on disk. What about IDE drives (PATA, SATA). Currently only the sd driver implements enabling/disabling the write cache

[zfs-discuss] ZFS mirror and read policy; kstat I/O values for zfs

2006-05-26 Thread Daniel Rock
Hi, after some testing with ZFS I noticed that read requests are not scheduled evenly across the drives but the first one gets predominantly selected. My pool is set up as follows:

NAME      STATE   READ WRITE CKSUM
tpc       ONLINE     0     0     0
  mirror  ONLI

Re: [zfs-discuss] Oracle on ZFS vs. UFS

2006-05-19 Thread Daniel Rock
Richard Elling schrieb: On Fri, 2006-05-19 at 23:09 +0200, Daniel Rock wrote: (*) maxphys = 8388608 Pedantically, because ZFS does 128kByte I/Os. Setting maxphys > 128kBytes won't make any difference. I know, but with the default maxphys value of 56kByte on x86 a 128kByte request
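The tuning behind that footnote, as an /etc/system sketch:

  * raise the maximum size of a single physical I/O so a 128 kByte ZFS
  * request is not split up by the old 56 kByte x86 default
  set maxphys=8388608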

Re: [zfs-discuss] Oracle on ZFS vs. UFS

2006-05-19 Thread Daniel Rock
Bart Smaalders schrieb: How big is the database? After all the data has been loaded, all datafiles together 2.8GB, SGA 320MB. But I don't think size matters on this problem, since you can already see during the catalog creation phase that UFS is 2x faster. Since oracle writes in small blo

[zfs-discuss] Oracle on ZFS vs. UFS

2006-05-19 Thread Daniel Rock
Hi, I'm preparing a personal TPC-H benchmark. The goal is not to measure or optimize the database performance, but to compare ZFS to UFS in similar configurations. At the moment I'm preparing the tests at home. The test setup is as follows: . Solaris snv_37 . 2 x AMD Opteron 252 . 4 GB RAM . 2 x

[zfs-discuss] Re: snapshots and directory traversal (inode numbers

2006-05-09 Thread Daniel Rock
Ok, found the BugID: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6416101 and the relevant code: http://cvs.opensolaris.org/source/diff/on/usr/src/uts/common/fs/zfs/zfs_dir.c?r2=1.7&r1=1.6 So I will wait for snv_39 -- Daniel

[zfs-discuss] snapshots and directory traversal (inode numbers

2006-05-09 Thread Daniel Rock
Just noticed this:

# zfs create scratch/test
# cd /scratch/test
# mkdir d1 d2 d3
# zfs snapshot scratch/[EMAIL PROTECTED]
# cd .zfs/snapshot/snap
# ls
d1    d2    d3
# du -k
1    ./d3
3    .

{so "du" doesn't traverse the other directories 'd1' and 'd2'}

# pwd
/scratch/test/.zfs/snapshot/snap
#

Re: [zfs-discuss] ZFS RAM requirements?

2006-05-07 Thread Daniel Rock
Roch Bourbonnais - Performance Engineering schrieb: As already noted, this need not be different from other FS but is still an interesting question. I'll touch 3 aspects here:
- reported freemem
- syscall writes to mmap pages
- application write throttling
Reported free