Re: [zfs-discuss] New Supermicro SAS/SATA controller: AOC-USAS2-L8e in SOHO NAS and HD HTPC

2010-01-20 Thread Günther
Hello, I have basically tested the Supermicro mainboard X8DTH-6F together with Nexenta: http://www.supermicro.com/products/motherboard/QPI/5500/X8DTH-6F.cfm (same SAS-II LSI-2008 chipset). Nexenta 2: did not work. Nexenta 3 (snv 124+): installed without problems, but no further testing. See also my referen

Re: [zfs-discuss] Hang zpool detach while scrub is running..

2010-01-20 Thread Steve Radich, BitShop, Inc.
It came back right after I posted (had waited SEVERAL minutes). Perhaps related to dedup slowdowns(?). -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

[zfs-discuss] Hang zpool detach while scrub is running..

2010-01-20 Thread Steve Radich, BitShop, Inc.
I know, we should have done zpool scrub -s first.. but.. sigh..

bits...@zfs:/opt/StorMan# zpool status -v tankmir1
  pool: tankmir1
 state: ONLINE
 scrub: scrub in progress for 0h16m, 0.14% done, 187h17m to go
config:

        NAME        STATE     READ WRITE CKSUM
        tankmir1    ONLINE

Re: [zfs-discuss] x4500...need input and clarity on striped/mirrored configuration

2010-01-20 Thread Richard Elling
On Jan 20, 2010, at 8:14 PM, Brad wrote: > I was reading your old posts about load-shares > http://opensolaris.org/jive/thread.jspa?messageID=294580 . > > So between raidz and load-share "striping", raidz stripes a file system block > evenly across each vdev but with load sharing the file syst

Re: [zfs-discuss] Panic running a scrub

2010-01-20 Thread Frank Middleton
On 01/20/10 04:27 PM, Cindy Swearingen wrote: Hi Frank, I couldn't reproduce this problem on SXCE build 130 by failing a disk in mirrored pool and then immediately running a scrub on the pool. It works as expected. As noted, the disk mustn't go offline until well after the scrub has started.

Re: [zfs-discuss] x4500...need input and clarity on striped/mirrored configuration

2010-01-20 Thread Brad
I was reading your old posts about load-shares http://opensolaris.org/jive/thread.jspa?messageID=294580 . So between raidz and load-share "striping", raidz stripes a file system block evenly across each vdev but with load sharing the file system block is written on a vdev that's not filled up

Re: [zfs-discuss] CR# 6574286, remove slog device

2010-01-20 Thread Moshe Vainer
Hi George. Any news on this?

Re: [zfs-discuss] x4500...need input and clarity on striped/mirrored configuration

2010-01-20 Thread Brad
"Zfs does not do striping across vdevs, but its load share approach will write based on (roughly) a round-robin basis, but will also prefer a less loaded vdev when under a heavy write load, or will prefer to write to an empty vdev rather than write to an almost full one." I'm trying to visualize t

Re: [zfs-discuss] x4500...need input and clarity on striped/mirrored configuration

2010-01-20 Thread Brad
@hortnon - ASM is not within the scope of this project.

Re: [zfs-discuss] x4500...need input and clarity on striped/mirrored configuration

2010-01-20 Thread John
Have you looked at using Oracle ASM instead of or with ZFS? Recent Sun docs concerning the F5100 seem to recommend a hybrid of both. If you don't go that route, generally you should separate redo logs from actual data so they don't compete for I/O, since a redo switch lagging hangs the database

Re: [zfs-discuss] x4500...need input and clarity on striped/mirrored configuration

2010-01-20 Thread Bob Friesenhahn
On Wed, 20 Jan 2010, Brad wrote: Can anyone recommend an optimal and redundant striped configuration for an X4500? We'll be using it for an OLTP (Oracle) database and will need best performance. Is it also true that the reads will be load-balanced across the mirrors? Is this considered a raid

[zfs-discuss] x4500...need input and clarity on striped/mirrored configuration

2010-01-20 Thread Brad
Can anyone recommend an optimal and redundant striped configuration for an X4500? We'll be using it for an OLTP (Oracle) database and will need the best performance. Is it also true that reads will be load-balanced across the mirrors? Is this considered a RAID 1+0 configuration? zpool create -f
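A pool built from many two-way mirror vdevs is ZFS's closest analogue to RAID 1+0, and it matches the load-share behavior discussed in this thread. A hedged sketch follows; the device names are hypothetical placeholders, not the actual X4500 controller layout:

```shell
# Build the pool from two-way mirror vdevs; ZFS load-shares writes
# across the vdevs and can balance reads across both sides of each mirror.
# Device names below are illustrative only.
zpool create -f tank \
  mirror c0t0d0 c1t0d0 \
  mirror c0t1d0 c1t1d0 \
  mirror c0t2d0 c1t2d0 \
  spare c2t0d0
```

Adding more mirror pairs widens the set of vdevs that writes are load-shared across, which is what gives the RAID 1+0-like behavior.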

Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2010-01-20 Thread Daniel Carosone
On Wed, Jan 20, 2010 at 10:04:34AM -0800, Willy wrote: > To those concerned about this issue, there is a patched version of > smartmontools that enables the querying and setting of TLER/ERC/CCTL > values (well, except for recent desktop drives from Western > Digitial). [Joining together two recent

Re: [zfs-discuss] can i make a COMSTAR zvol bigger?

2010-01-20 Thread Thomas Burgess
> > > > Yes you can. Size of the vol is a ZFS property. > > > yes, i knew this but i wasn't sure how to do the REST. =) > Set the volsize property to what you want then, then modify the > logical unit e.g. > > Usage: stmfadm modify-lu [OPTIONS] >OPTIONS: >-p, --lu-prop
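Put together, the resize described above is two steps: grow the backing zvol, then tell COMSTAR the logical unit grew. A hedged sketch; the dataset name and LU GUID are hypothetical:

```shell
# 1. Grow the backing zvol (volsize is an ordinary ZFS property):
zfs set volsize=250g tank/timemachine
# 2. Resize the COMSTAR logical unit to match the new zvol size:
stmfadm modify-lu -s 250g 600144f0c73abf0000004b5806620001
```

The initiator side then needs to rescan (and the filesystem on the LUN grown) before the extra space is visible.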

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Joerg Schilling
Ragnar Sundblad wrote: > Yes! Modern LTO drives can typically vary their speed about a factor four > or so, so even if you can't keep up with the tape drive maximum speed, > it will typically work pretty good anyway. If you can't keep up even then, > it will have to stop, back up a bit, and resta

Re: [zfs-discuss] L2ARC in Cluster is picked up although not part of the pool

2010-01-20 Thread Daniel Carosone
On Wed, Jan 20, 2010 at 03:20:20PM -0800, Richard Elling wrote: > Though the ARC case, PSARC/2007/618 is "unpublished," I gather from > googling and the source that L2ARC devices are considered auxiliary, > in the same category as spares. If so, then it is perfectly reasonable to > expect that it g

Re: [zfs-discuss] Panic running a scrub

2010-01-20 Thread Frank Middleton
On 01/20/10 05:55 PM, Cindy Swearingen wrote: Hi Frank, We need both files. The vmcore is 1.4GB. An http upload is never going to complete. Is there an ftp-able place to send it, or can you download it if I post it somewhere? Cheers -- Frank

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Ragnar Sundblad
On 21 jan 2010, at 00.20, Al Hopper wrote: > I remember for about 5 years ago (before LT0-4 days) that streaming > tape drives would go to great lengths to ensure that the drive kept > streaming - because it took so much time to stop, backup and stream > again. And one way the drive firmware acc

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Ragnar Sundblad
On 20 jan 2010, at 17.22, Julian Regel wrote: > >It is actually not that easy. > > > >Compare a cost of 2x x4540 with 1TB disks to equivalent solution on LTO. > > > >Each x4540 could be configured as: 4x 11 disks in raidz-2 + 2x hot spare > >+ 2x OS disks. > >The four raidz2 group form a single

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Al Hopper
On Wed, Jan 20, 2010 at 2:52 PM, David Magda wrote: > > On Jan 20, 2010, at 12:21, Robert Milkowski wrote: > >> On 20/01/2010 16:22, Julian Regel wrote: >>> >> [...] >>> >>> So you could provision a tape backup for just under £3 (~$49000). In >>> comparison, the cost of one X4540 with ~ 36TB u

Re: [zfs-discuss] L2ARC in Cluster is picked up although not part of the pool

2010-01-20 Thread Richard Elling
Though the ARC case, PSARC/2007/618 is "unpublished," I gather from googling and the source that L2ARC devices are considered auxiliary, in the same category as spares. If so, then it is perfectly reasonable to expect that it gets picked up regardless of the GUID. This also implies that it is share

Re: [zfs-discuss] Panic running a scrub

2010-01-20 Thread Cindy Swearingen
Hi Frank, We need both files. Thanks, Cindy On 01/20/10 15:43, Frank Middleton wrote: On 01/20/10 04:27 PM, Cindy Swearingen wrote: Hi Frank, I couldn't reproduce this problem on SXCE build 130 by failing a disk in mirrored pool and then immediately running a scrub on the pool. It works as

Re: [zfs-discuss] Panic running a scrub

2010-01-20 Thread Frank Middleton
On 01/20/10 04:27 PM, Cindy Swearingen wrote: Hi Frank, I couldn't reproduce this problem on SXCE build 130 by failing a disk in mirrored pool and then immediately running a scrub on the pool. It works as expected. The disk has to fail whilst the scrub is running. It has happened twice now, on

Re: [zfs-discuss] L2ARC in Cluster is picked up although not part of the pool

2010-01-20 Thread Tomas Ögren
On 20 January, 2010 - Richard Elling sent me these 2,7K bytes: > Hi Lutz, > > On Jan 20, 2010, at 3:17 AM, Lutz Schumann wrote: > > > Hello, > > > > we tested clustering with ZFS and the setup looks like this: > > > > - 2 head nodes (nodea, nodeb) > > - head nodes contain l2arc devices (node

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Joerg Schilling
Miles Nordin wrote: > From the perspective of MY business, I would much rather have the dark > OOB acl/fork/whatever-magic that's gone into ZFS and NFSv4 supported > in standard tools like rsync and GNUtar. This is, for example, what GNU tar does not support: any platform-specific feature on any

Re: [zfs-discuss] L2ARC in Cluster is picked up although not part of the pool

2010-01-20 Thread Richard Elling
Hi Lutz, On Jan 20, 2010, at 3:17 AM, Lutz Schumann wrote: > Hello, > > we tested clustering with ZFS and the setup looks like this: > > - 2 head nodes (nodea, nodeb) > - head nodes contain l2arc devices (nodea_l2arc, nodeb_l2arc) This makes me nervous. I suspect this is not in the typical Q

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Joerg Schilling
Ian Collins wrote: > > We are talking about TAR and I did give a pointer to the star archive > > format > > documentation, so it is obvious that I was talking about the ACL format from > > Sun tar. This format is not documented. > > > > > It is, Sun's ZFS ACL aware tools use acltotext() to f

Re: [zfs-discuss] Mirror of SAN Boxes with ZFS ? (split site mirror)

2010-01-20 Thread Richard Elling
Comment below. Perhaps someone from Sun's ZFS team can fill in the blanks, too. On Jan 20, 2010, at 3:34 AM, Lutz Schumann wrote: > Actually I found some time (and reason) to test this. > > Environment: > - 1 osol server > - one SLES10 iSCSI Target > - two LUN's exported via iSCSi to the OSol

Re: [zfs-discuss] can i make a COMSTAR zvol bigger?

2010-01-20 Thread Errol Neal
On Wed, Jan 20, 2010 02:38 PM, Thomas Burgess wrote: > I finally got iscsi working, and it's amazing... it took a minute for me to > figure out... i didn't realize it required 2 tools... but anyways. > > my original zvol is too small... i created a 120 gb zvol for time machine > but i really n

Re: [zfs-discuss] Unavailable device

2010-01-20 Thread John
Unfortunately, since we got a new priority on the project, I had to scrap and recreate the pool, so I don't have any of the information anymore.

Re: [zfs-discuss] Panic running a scrub

2010-01-20 Thread Cindy Swearingen
Hi Frank, I couldn't reproduce this problem on SXCE build 130 by failing a disk in a mirrored pool and then immediately running a scrub on the pool. It works as expected. Any other symptoms (like a power failure?) before the disk went offline? Is it possible that both disks went offline? We

Re: [zfs-discuss] Filesystem Quotas

2010-01-20 Thread Tomas Ögren
On 20 January, 2010 - Mr. T Doodle sent me these 1,0K bytes: > I currently have one filesystem / (root), is it possible to put a quota on > let's say /var? Or would I have to move /var to it's own filesystem in the > same pool? Only filesystems can have different settings. /Tomas -- Tomas Ögren
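In practice that means giving /var its own dataset before capping it. A hedged sketch; the pool and dataset names are hypothetical, and migrating the live contents of /var is left out:

```shell
# Quotas are per-dataset, so /var must be its own filesystem first.
zfs create -o mountpoint=/var rpool/var   # after moving /var's data aside
zfs set quota=10g rpool/var
zfs get quota rpool/var
```

Other per-dataset settings (reservation, compression, recordsize) become available the same way once /var is split out.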

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread David Magda
On Jan 20, 2010, at 12:21, Robert Milkowski wrote: On 20/01/2010 16:22, Julian Regel wrote: [...] So you could provision a tape backup for just under £3 (~ $49000). In comparison, the cost of one X4540 with ~ 36TB usable storage is UK list price £30900. I've not factored in backup s

Re: [zfs-discuss] ZFS default compression and file size limit?

2010-01-20 Thread Daniel Carosone
On Wed, Jan 20, 2010 at 12:42:35PM -0500, Wajih Ahmed wrote: > Mike, > > Thank you for your quick response... > > Is there a way for me to test the compression from the command line to > see if lzjb is giving me more or less than the 12.5% mark? I guess it > will depend if there is a lzjb comm

[zfs-discuss] Filesystem Quotas

2010-01-20 Thread Mr. T Doodle
I currently have one filesystem / (root), is it possible to put a quota on let's say /var? Or would I have to move /var to its own filesystem in the same pool? Thanks

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Miles Nordin
> "jr" == Julian Regel writes: jr> While I am sure that star is technically a fine utility, the jr> problem is that it is effectively an unsupported product. I have no problems with this whatsoever. jr> If our customers find a bug in their backup that is caused by jr> a fail

[zfs-discuss] can i make a COMSTAR zvol bigger?

2010-01-20 Thread Thomas Burgess
I finally got iscsi working, and it's amazing... it took a minute for me to figure out... i didn't realize it required 2 tools... but anyways. my original zvol is too small... i created a 120 gb zvol for time machine but i really need more like 250 gb so this is a 2-part question. First, can

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Ian Collins
Joerg Schilling wrote: Ian Collins wrote: The correct way to archive ACLs would be to put them into extended POSIX tar attributes as star does. See http://cdrecord.berlios.de/private/man/star/star.4.html for a format documentation or have a look at ftp://ftp.berlios.de/pub/star/alpha, e.

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Ian Collins
Julian Regel wrote: >It is actually not that easy. > >Compare a cost of 2x x4540 with 1TB disks to equivalent solution on LTO. > >Each x4540 could be configured as: 4x 11 disks in raidz-2 + 2x hot spare >+ 2x OS disks. >The four raidz2 group form a single pool. This would provide well over >30TB

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Richard Elling
On Jan 20, 2010, at 3:15 AM, Joerg Schilling wrote: > Richard Elling wrote: > >>> >>> ufsdump/restore was perfect in that regard. The lack of equivalent >>> functionality is a big problem for the situations where this functionality >>> is a business requirement. >> >> How quickly we forget

Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2010-01-20 Thread Willy
To those concerned about this issue, there is a patched version of smartmontools that enables the querying and setting of TLER/ERC/CCTL values (well, except for recent desktop drives from Western Digital). It's available here: http://www.csc.liv.ac.uk/~greg/projects/erc/ Unfortunately, smartmo

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-20 Thread Miles Nordin
> "ml" == Mikko Lammi writes: ml> "rm -rf" to problematic directory from parent level. Running ml> this command shows directory size decreasing by 10,000 ml> files/hour, but this would still mean close to ten months ml> (over 250 days) to delete everything! interesting. does

Re: [zfs-discuss] Unavailable device

2010-01-20 Thread Victor Latushkin
John wrote: I was able to solve it, but it actually worried me more than anything. Basically, I had created the second pool using the mirror as a primary device. So three disks but two full disk root mirrors. Shouldn't zpool have detected an active pool and prevented this? The other LDOM was

Re: [zfs-discuss] ZFS default compression and file size limit?

2010-01-20 Thread Wajih Ahmed
Mike, Thank you for your quick response... Is there a way for me to test the compression from the command line to see if lzjb is giving me more or less than the 12.5% mark? I guess it will depend if there is a lzjb command line utility. I am just a little surprised because gzip-6 is able to
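There is no standalone lzjb utility, but lzjb's effect can be measured through ZFS itself by copying the file into an lzjb-compressed dataset and reading back the compression ratio. A hedged sketch; the pool, dataset, and file names are illustrative:

```shell
# compression=on selects lzjb on this dataset.
zfs create -o compression=on mypool/lzjbtest
cp /tank/bigfile.txt /mypool/lzjbtest/
sync                                         # let the data reach disk
zfs get -H -o value compressratio mypool/lzjbtest
du -sh /mypool/lzjbtest/bigfile.txt          # on-disk (compressed) size
```

Note that if lzjb saves less than the 12.5% mark on a block, ZFS stores that block uncompressed, so a barely-compressible file can show no savings at all even with compression enabled.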

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Robert Milkowski
On 20/01/2010 17:21, Robert Milkowski wrote: On 20/01/2010 16:22, Julian Regel wrote: >It is actually not that easy. > >Compare a cost of 2x x4540 with 1TB disks to equivalent solution on LTO. > >Each x4540 could be configured as: 4x 11 disks in raidz-2 + 2x hot spare >+ 2x OS disks. >The four

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Robert Milkowski
On 20/01/2010 16:22, Julian Regel wrote: >It is actually not that easy. > >Compare a cost of 2x x4540 with 1TB disks to equivalent solution on LTO. > >Each x4540 could be configured as: 4x 11 disks in raidz-2 + 2x hot spare >+ 2x OS disks. >The four raidz2 group form a single pool. This would pro

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Miles Nordin
> "ae" == Allen Eastwood writes: > "ic" == Ian Collins writes: >> If people are really still backing up to tapes or DVD's, just >> use file vdev's, export the pool, and then copy the unmounted >> vdev onto the tape or DVD. ae> And some of those enterprises require bac

[zfs-discuss] ZFS default compression and file size limit?

2010-01-20 Thread Wajih Ahmed
I have a 13GB text file. I turned ZFS compression on with "zfs set compression=on mypool". When i copy the 13GB file into another file, it does not get compressed (checking via du -sh). However if i set compression=gzip, then the file gets compressed. Is there a limit on file size with the def

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Julian Regel
>It is actually not that easy. > >Compare a cost of 2x x4540 with 1TB disks to equivalent solution on LTO. > >Each x4540 could be configured as: 4x 11 disks in raidz-2 + 2x hot spare >+ 2x OS disks. >The four raidz2 group form a single pool. This would provide well over >30TB of logical storage p

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Bob Friesenhahn
On Wed, 20 Jan 2010, Julian Regel wrote: If our customers find a bug in their backup that is caused by a failure in a Sun supplied utility, then they have a legal course of action. The customer's system administrators are covered because they were using tools provided by the vendor. The wrath

Re: [zfs-discuss] Unavailable device

2010-01-20 Thread Cindy Swearingen
Hi John, In general, ZFS will warn you when you attempt to add a device that is already part of an existing pool. One exception is when the system is being re-installed. I'd like to see the set of steps that led to the notification failure. Thanks, Cindy On 01/19/10 20:58, John wrote: I was

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread David Dyer-Bennet
On Wed, January 20, 2010 04:48, Ragnar Sundblad wrote: > LTO media is still cheaper than equivalent sized disks, maybe a factor 5 > or so. LTO drives cost a little, but so do disk shelves. So, now that > there is no big price issue, there is choice instead. Use it! Depends on the scale you're o

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread David Dyer-Bennet
On Wed, January 20, 2010 09:23, Robert Milkowski wrote: > Now you rsync all the data from your clients to a dedicated filesystem > per client, then create a snapshot. Is there an rsync out there that can reliably replicate all file characteristics between two ZFS/Solaris systems? I haven't fou

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Robert Milkowski
On 20/01/2010 11:26, Joerg Schilling wrote: Edward Ned Harvey wrote: Star implements this in a very effective way (by using libfind) that is even faster than the find(1) implementation from Sun. Even if I just "find" my filesystem, it will run for 7 hours. But zfs can create my w

Re: [zfs-discuss] ZFS default compression and file size limit?

2010-01-20 Thread Robert Milkowski
On 20/01/2010 13:39, Wajih Ahmed wrote: I have a 13GB text file. I turned ZFS compression on with "zfs set compression=on mypool". When i copy the 13GB file into another file, it does not get compressed (checking via du -sh). However if i set compression=gzip, then the file gets compressed. I

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Robert Milkowski
On 20/01/2010 10:48, Ragnar Sundblad wrote: On 19 jan 2010, at 20.11, Ian Collins wrote: Julian Regel wrote: Based on what I've seen in other comments, you might be right. Unfortunately, I don't feel comfortable backing up ZFS filesystems because the tools aren't there to do it (bu

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Robert Milkowski
On 19/01/2010 19:11, Ian Collins wrote: Julian Regel wrote: Based on what I've seen in other comments, you might be right. Unfortunately, I don't feel comfortable backing up ZFS filesystems because the tools aren't there to do it (built into the operating system or using Zmanda/Amanda). C

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-20 Thread Simon Breden
Hi Constantin, It's good to hear your setup with the Samsung drives is working well. Which model/revision are they? My personal preference is to use drives of the same model & revision. However, in order to help ensure that the drives will perform reliably, I prefer to do a fair amount of rese

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Michael Schuster
Joerg Schilling wrote: Julian Regel wrote: If you like to have a backup that allows to access files, you need a file based backup and I am sure that even a filesystem level scan for recently changed files will not be much faster than what you may achive with e.g. star. Note that ufsdump dir

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-20 Thread Constantin Gonzalez
Hi, I'm using 2 x 1.5 TB drives from Samsung (EcoGreen, I believe) in my current home server. One reported 14 read errors a few weeks ago, roughly 6 months after install, which went away during the next scrub/resilver. This reminded me to order a 3rd drive, a 2.0 TB WD20EADS from Western Digit

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Joerg Schilling
Julian Regel wrote: > >> While I am sure that star is technically a fine utility, the problem is > >> that it is effectively an unsupported product. > > >From this viewpoint, you may call most of Solaris "unsupported". > > From the perspective of the business, the contract with Sun provides that

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Julian Regel
>> While I am sure that star is technically a fine utility, the problem is that >> it is effectively an unsupported product. >From this viewpoint, you may call most of Solaris "unsupported". From the perspective of the business, the contract with Sun provides that support. >> If our customers

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Joerg Schilling
Julian Regel wrote: > > If you like to have a backup that allows to access files, you need a file > > based > > > backup and I am sure that even a filesystem level scan for recently changed > > files will not be much faster than what you may achive with e.g. star. > > > > Note that ufsdump dir

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-20 Thread Simon Breden
I see also that Samsung have very recently released the HD203WI 2TB 4-platter model. It seems to have good customer ratings so far at newegg.com, but currently there are only 13 reviews so it's a bit early to tell if it's reliable. Has anyone tried this model with ZFS? Cheers, Simon http://br

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Julian Regel
> If you like to have a backup that allows to access files, you need a file > based > backup and I am sure that even a filesystem level scan for recently changed > files will not be much faster than what you may achive with e.g. star. > > Note that ufsdump directly accesees the raw disk device

Re: [zfs-discuss] New Supermicro SAS/SATA controller: AOC-USAS2-L8e in SOHO NAS and HD HTPC

2010-01-20 Thread Simon Breden
Yes, this model looks to be interesting. SuperMicro seem to have produced two new models that satisfy the SATA III requirement of 6Gbps per channel: 1. AOC-USAS2-L8e: http://www.supermicro.com/products/accessories/addon/AOC-USAS2-L8i.cfm?TYP=E 2. AOC-USAS2-L8i: http://www.supermicro.co

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Julian Regel
While I can appreciate that ZFS snapshots are very useful in being able to recover files that users might have deleted, they do not do much to help when the entire disk array experiences a crash/corruption or catches fire. Backing up to a second array helps if a) the array is off-site and for ma

Re: [zfs-discuss] Mirror of SAN Boxes with ZFS ? (split site mirror)

2010-01-20 Thread Lutz Schumann
Actually I found some time (and reason) to test this. Environment: - 1 osol server - one SLES10 iSCSI target - two LUNs exported via iSCSI to the OSol server I did some resilver tests to see how ZFS resilvers devices. Prep: osol: create a pool (myiscsi) with one mirror pair made from the

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Joerg Schilling
Edward Ned Harvey wrote: > > Star implements this in a very effective way (by using libfind) that is > > even > > faster than the find(1) implementation from Sun. > > Even if I just "find" my filesystem, it will run for 7 hours. But zfs can > create my whole incremental snapshot in a minute or t

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Joerg Schilling
Ian Collins wrote: > > The correct way to archive ACLs would be to put them into extended POSIX > > tar > > attributes as star does. > > > > See http://cdrecord.berlios.de/private/man/star/star.4.html for a format > > documentation or have a look at ftp://ftp.berlios.de/pub/star/alpha, e.g. >

[zfs-discuss] L2ARC in Cluster is picked up although not part of the pool

2010-01-20 Thread Lutz Schumann
Hello, we tested clustering with ZFS and the setup looks like this: - 2 head nodes (nodea, nodeb) - head nodes contain l2arc devices (nodea_l2arc, nodeb_l2arc) - two external jbods - two mirror zpools (pool1,pool2) - each mirror is a mirror of one disk from each jbod - no ZIL (anyone knows a

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Joerg Schilling
Richard Elling wrote: > > > > ufsdump/restore was perfect in that regard. The lack of equivalent > > functionality is a big problem for the situations where this functionality > > is a business requirement. > > How quickly we forget ufsdump's limitations :-). For example, it is not > suppor

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Ragnar Sundblad
On 19 jan 2010, at 20.11, Ian Collins wrote: > Julian Regel wrote: >> >> Based on what I've seen in other comments, you might be right. >> Unfortunately, I don't feel comfortable backing up ZFS filesystems because >> the tools aren't there to do it (built into the operating system or using >>

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Ian Collins
Allen Eastwood wrote: On Jan 19, 2010, at 22:54 , Ian Collins wrote: Allen Eastwood wrote: On Jan 19, 2010, at 18:48 , Richard Elling wrote: Many people use send/recv or AVS for disaster recovery on the inexpensive side. Obviously, enterprise backup systems also provide DR c