[zfs-discuss] lofi crypto pools and *cache properties

2010-02-10 Thread Daniel Carosone
Until zfs-crypto arrives, I am using a pool for sensitive data inside several files encrypted via lofi crypto. The data is also valuable, of course, so the pool is mirrored, with one file on each of several pools (laptop rpool, and a couple of usb devices, not always connected). These backing fil
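The setup described might be sketched as below. All paths and sizes are illustrative, and the commands require OpenSolaris with lofi crypto support, so this is a non-runnable sketch rather than a recipe:

```shell
# Create a backing file on each host pool (hypothetical paths/sizes).
mkfile 10g /rpool/vault.img
mkfile 10g /usbpool/vault.img

# Attach each file as an encrypted lofi device (prompts for a passphrase).
lofiadm -c aes-256-cbc -a /rpool/vault.img      # e.g. /dev/lofi/1
lofiadm -c aes-256-cbc -a /usbpool/vault.img    # e.g. /dev/lofi/2

# Mirror the sensitive pool across the encrypted lofi devices.
zpool create vault mirror /dev/lofi/1 /dev/lofi/2
```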

Re: [zfs-discuss] Dedup Questions.

2010-02-09 Thread Daniel Carosone
On Tue, Feb 09, 2010 at 08:26:42AM -0800, Richard Elling wrote: > >> "zdb -D poolname" will provide details on the DDT size. FWIW, I have a > >> pool with 52M DDT entries and the DDT is around 26GB. I wish -D was documented; I had forgotten about it and only found the (expensive) -S variant, whic
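The figures quoted in the thread (52M DDT entries, roughly 26GB of DDT) imply an on-disk cost of around 500 bytes per entry. A quick back-of-envelope check, with the zdb invocation left as a comment since it needs a live pool:

```shell
# "zdb -D poolname" reports DDT entry counts cheaply, unlike the
# expensive "zdb -S" simulation, e.g.:
#   zdb -D tank
# Arithmetic from the figures quoted in the thread:
entries=52000000
ddt_bytes=$((26 * 1024 * 1024 * 1024))
per_entry=$((ddt_bytes / entries))
echo "approx bytes per DDT entry: ${per_entry}"
```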

Re: [zfs-discuss] Anyone with experience with a PCI-X SSD card?

2010-02-08 Thread Daniel Carosone
On Tue, Feb 09, 2010 at 03:11:38PM +1100, Daniel Carosone wrote: > I didn't find anything to indicate either way whether there was > bootable bios on board Ah - in the install guide there's a mention about pressing "F4" or "Ctrl-S" when prompted at boot to c

Re: [zfs-discuss] Anyone with experience with a PCI-X SSD card?

2010-02-08 Thread Daniel Carosone
On Mon, Feb 08, 2010 at 07:33:56PM -0800, Erik Trimble wrote: > To reply to myself, the best I can do is this: > >http://www.apricorn.com/product_detail.php?type=family&id=59 > > (it uses a sil3124 controller, so it /might/ work with OpenSolaris ) Nice. I'd certainly like to know if you t

Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-08 Thread Daniel Carosone
> > Although I am in full support of what sun is doing, to play devils > > advocate: supermicro is. They're not the only ones, although the most-often discussed here. Dell will generally sell hardware and warranty and service add-ons in any combination, to anyone willing and capable of figurin

Re: [zfs-discuss] zpool list size

2010-02-08 Thread Daniel Carosone
On Mon, Feb 08, 2010 at 05:23:29PM -0700, Cindy Swearingen wrote: > Hi Lasse, > > I expanded this entry to include more details of the zpool list and > zfs list reporting. > > See if the new explanation provides enough details. Cindy, feel free to crib from or refer to my text in whatever way migh

Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-08 Thread Daniel Carosone
This is a long thread, with lots of interesting and valid observations about the organisation of the industry, the segmentation of the market, getting what you pay for vs paying for what you want, etc. I don't really find within, however, an answer to the original question, at least the way I re

Re: [zfs-discuss] zpool list size

2010-02-08 Thread Daniel Carosone
On Mon, Feb 08, 2010 at 11:28:11PM +0100, Lasse Osterild wrote: > Ok thanks I know that the amount of used space will vary, but what's > the usefulness of the total size when ie in my pool above 4 x 1G > (roughly, depending on recordsize) are reserved for parity, it's not > like it's useable for an

Re: [zfs-discuss] Intrusion Detection - powered by ZFS Checksumming ?

2010-02-08 Thread Daniel Carosone
On Mon, Feb 08, 2010 at 11:24:56AM -0800, Lutz Schumann wrote: > > Only with the zdb(1M) tool but note that the > > checksums are NOT of files > > but of the ZFS blocks. > > Thanks - blocks, right (doh) - that's what I was missing. Damn it would be so > nice :( If you're comparing the current dat

Re: [zfs-discuss] L2ARC in Cluster is picked up althought not part of the pool

2010-02-08 Thread Daniel Carosone
On Mon, Feb 01, 2010 at 12:22:55PM -0800, Lutz Schumann wrote: > > > Created a pool on head1 containing just the cache > > device (c0t0d0). > > > > This is not possible, unless there is a bug. You > > cannot create a pool > > with only a cache device. I have verified this on > > b131: > > # zpoo

Re: [zfs-discuss] Mounting a snapshot of an iSCSI volume using Windows

2010-02-07 Thread Daniel Carosone
On Thu, Feb 04, 2010 at 04:17:17PM -0800, Scott Meilicke wrote: > At this point, my server Gallardo can see the LUN, but like I said, it looks > blank to the OS. I suspect the 'sbdadm create-lu' phase. Yeah, try the import version of that command. -- Dan.
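The suggestion might look like this (hypothetical zvol path; requires a COMSTAR system, so a sketch only): "create-lu" writes fresh LU metadata, which is why the clone appeared blank, whereas "import-lu" picks up the LU metadata already present in the volume.

```shell
# Import the existing LU metadata from the cloned zvol instead of
# stamping new metadata over it with create-lu.
sbdadm import-lu /dev/zvol/rdsk/tank/iscsi/clone-of-vol

# Confirm the LU is registered before mapping it to the host.
stmfadm list-lu -v
```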

Re: [zfs-discuss] ZFS ZIL + L2ARC SSD Setup

2010-02-07 Thread Daniel Carosone
On Mon, Feb 08, 2010 at 04:58:38AM +0100, Felix Buenemann wrote: > I have some questions about the choice of SSDs to use for ZIL and L2ARC. I have one answer. The other questions are mostly related to your raid controller, which I can't answer directly. > - Is it safe to run the L2ARC without ba

Re: [zfs-discuss] ZFS send/recv checksum transmission

2010-02-07 Thread Daniel Carosone
On Sat, Feb 06, 2010 at 09:22:57AM -0800, Richard Elling wrote: > I'm interested in anecdotal evidence which suggests there is a > problem as it is currently designed. I like to look at it differently: I'm not sure if there is a problem. I'd like to have a simple way to discover a problem, using
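One simple way to get that discoverability today is to record a digest of the stream at send time and verify it before a later receive. A sketch, with an ordinary file standing in for "zfs send" output so it runs anywhere:

```shell
# Stand-in for a send stream; in practice this would be
# "zfs send tank/fs@snap > /tmp/stream.zfs" or similar.
printf 'pretend this is a zfs send stream\n' > /tmp/stream.zfs

# At backup time: store the stream's digest alongside it.
sha256sum /tmp/stream.zfs | awk '{print $1}' > /tmp/stream.zfs.sha256

# At restore time: verify before piping into "zfs receive".
now=$(sha256sum /tmp/stream.zfs | awk '{print $1}')
saved=$(cat /tmp/stream.zfs.sha256)
if [ "$now" = "$saved" ]; then
  echo "stream intact"
else
  echo "stream corrupted" >&2
fi
```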

[zfs-discuss] quick overhead sizing for DDT and L2ARC

2010-01-31 Thread Daniel Carosone
Two related questions: - given an existing pool with dedup'd data, how can I find the current size of the DDT? I presume some zdb work to find and dump the relevant object, but what specifically? - what's the expansion ratio for the memory overhead of L2ARC entries? If I know my DDT
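Both questions reduce to arithmetic once the per-entry costs are known. The figures below are assumptions for illustration only, not measured constants: roughly 500 bytes of on-disk DDT per unique block, and an ARC header on the order of 200 bytes per record held in L2ARC.

```shell
# Rough sizing sketch; both per-entry figures are assumed, not measured.
unique_blocks=52000000
ddt_bytes_per_entry=500     # assumed on-disk DDT cost per unique block
l2arc_hdr_bytes=200         # assumed ARC header cost per L2ARC record

ddt_gib=$((unique_blocks * ddt_bytes_per_entry / 1024 / 1024 / 1024))
hdr_gib=$((unique_blocks * l2arc_hdr_bytes / 1024 / 1024 / 1024))
echo "DDT held on L2ARC: ~${ddt_gib} GiB"
echo "RAM for L2ARC headers: ~${hdr_gib} GiB"
```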

Re: [zfs-discuss] Home ZFS NAS - 2 drives or 3?

2010-01-31 Thread Daniel Carosone
On Sat, Jan 30, 2010 at 06:07:48PM -0500, Frank Middleton wrote: > On 01/30/10 05:33 PM, Ross Walker wrote: >> Just install the OS on the first drive and add the second drive to form >> a mirror. > > After more than a year or so of experience with ZFS on drive constrained > systems, I am convinced

Re: [zfs-discuss] ZFS configuration suggestion with 24 drives

2010-01-28 Thread Daniel Carosone
On Thu, Jan 28, 2010 at 09:33:19PM -0800, Ed Fang wrote: > We considered a SSD ZIL as well but from my understanding it won't > help much on sequential bulk writes but really helps on random > writes (to sequence going to disk better). slog will only help if your write load involves lots of sync

Re: [zfs-discuss] ZFS configuration suggestion with 24 drives

2010-01-28 Thread Daniel Carosone
On Thu, Jan 28, 2010 at 07:26:42AM -0800, Ed Fang wrote: > 4 x x6 vdevs in RaidZ1 configuration > 3 x x8 vdevs in RaidZ2 configuration Another choice might be 2 x x12 vdevs in raidz2 configuration This gets you the space of the first, with the recovery properties of the second - at a cost in pot
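The capacity arithmetic behind that comparison, in units of whole drives (ignoring metadata overhead); the cost is fewer vdevs, hence less aggregate random IOPS:

```shell
# Usable data drives for each 24-drive layout under discussion.
a=$(( 4 * (6 - 1) ))   # 4 x 6-drive raidz1  -> 20 data drives
b=$(( 3 * (8 - 2) ))   # 3 x 8-drive raidz2  -> 18 data drives
c=$(( 2 * (12 - 2) ))  # 2 x 12-drive raidz2 -> 20 data drives
echo "raidz1 4x6: $a   raidz2 3x8: $b   raidz2 2x12: $c"
```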

Re: [zfs-discuss] zvol being charged for double space

2010-01-27 Thread Daniel Carosone
On Wed, Jan 27, 2010 at 09:57:08PM -0800, Bill Sommerfeld wrote: Hi Bill! :-) > On 01/27/10 21:17, Daniel Carosone wrote: >> This is as expected. Not expected is that: >> >> usedbyrefreservation = refreservation >> >> I would expect this to be 0, sinc

[zfs-discuss] zvol being charged for double space

2010-01-27 Thread Daniel Carosone
In a thread elsewhere, trying to analyse why the zfs auto-snapshot cleanup code was cleaning up more aggressively than expected, I discovered some interesting properties of a zvol. http://mail.opensolaris.org/pipermail/zfs-auto-snapshot/2010-January/000232.html The zvol is not thin-provisioned.

Re: [zfs-discuss] primarycache=off, secondarycache=all

2010-01-27 Thread Daniel Carosone
On Wed, Jan 27, 2010 at 02:47:47PM -0800, Christo Kutrovsky wrote: > In the case of a ZVOL with the following settings: > > primarycache=off, secondarycache=all > > How does the L2ARC get populated if the data never makes it to ARC ? > Is this even a valid configuration? It's valid, I assume, i

Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboa

2010-01-27 Thread Daniel Carosone
On Wed, Jan 27, 2010 at 02:34:29PM -0600, David Dyer-Bennet wrote: > Google is working heavily with the philosophy that things WILL fail, so > they plan for it, and have enough redundance to survive it -- and then > save lots of money by not paying for premium components. I like that > appro

Re: [zfs-discuss] backing this up

2010-01-27 Thread Daniel Carosone
On Wed, Jan 27, 2010 at 12:01:36PM -0800, Gregory Durham wrote: > Hello All, > I read through the attached threads and found a solution by a poster and > decided to try it. That may have been mine - good to know it helped, or at least started to. > The solution was to use 3 files (in my case I ma

Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboa

2010-01-26 Thread Daniel Carosone
On Tue, Jan 26, 2010 at 07:32:05PM -0800, David Dyer-Bennet wrote: > Okay, so this SuperMicro AOC-USAS-L8i is an "SAS" card? I've never > done SAS; is it essentially a controller as flexible as SCSI that > then talks to SATA disks out the back? Yes, or SAS disks. > Amazon seems to be the only

Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2010-01-25 Thread Daniel Carosone
On Mon, Jan 25, 2010 at 05:36:35PM -0500, Miles Nordin wrote: > > "sb" == Simon Breden writes: > > sb> 1. In simple non-RAID single drive 'desktop' PC scenarios > sb> where you have one drive, if your drive is experiencing > sb> read/write errors, as this is the only drive you hav

Re: [zfs-discuss] zfs streams

2010-01-25 Thread Daniel Carosone
On Mon, Jan 25, 2010 at 05:42:59PM -0500, Miles Nordin wrote: > et> You cannot import a stream into a zpool of earlier revision, > et> thought the reverse is possible. > > This is very bad, because it means if your backup server is pool > version 22, then you cannot use it to back up pool

Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboa

2010-01-25 Thread Daniel Carosone
On Mon, Jan 25, 2010 at 04:08:04PM -0600, David Dyer-Bennet wrote: > > - Don't be afraid to dike out the optical drive, either for case > >space or available ports. [..] > >[..] Put the drive in an external USB case if you want, > >or leave it in the case connected via a USB bridge in

Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboa

2010-01-25 Thread Daniel Carosone
Some other points and recommendations to consider: - Since you have the bays, get the controller to drive them, regardless. They will have many uses, some of which below. A 4-port controller would allow you enough ports for both the two empty hotswap bays, plus the dual 2.5" carrier.

Re: [zfs-discuss] Degraded Zpool

2010-01-24 Thread Daniel Carosone
On Thu, Jan 21, 2010 at 03:55:59PM +0100, Matthias Appel wrote: > I have a serious issue with my zpool. Yes. You need to figure out what the root cause of the issue is. > My zpool consists of 4 vdevs which are assembled to 2 mirrors. > > One of this mirrors got degraded cause of too many errors

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-24 Thread Daniel Carosone
Another issue with all this arithmetic: one needs to factor in the cost of additional spare disks (what were you going to resilver onto?). I look at it like this: you purchase the same number of total disks (active + hot spare + cold spare), and raidz2 vs raidz3 simply moves a disk from one of the

Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2010-01-23 Thread Daniel Carosone
On Fri, Jan 22, 2010 at 04:12:48PM -0500, Miles Nordin wrote: > w> http://www.csc.liv.ac.uk/~greg/projects/erc/ > > dead link? Works for me - this is someone who's written patches for smartctl to set this feature; these are standardised/documented commands, no reverse engineering of DOS tool

[zfs-discuss] experience with sata p-m's?

2010-01-23 Thread Daniel Carosone
As I said in another post, it's coming time to build a new storage platform at home. I'm revisiting all the hardware options and permutations again, for current kit. Build 125 added something I was very eager for earlier, sata port-multiplier support. Since then, I've seen very little, if any,

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-23 Thread Daniel Carosone
On Sat, Jan 23, 2010 at 06:39:25PM -0500, Frank Cusack wrote: > On January 23, 2010 5:17:16 PM -0600 Tim Cook wrote: >> Smaller devices get you to raid-z3 because they cost less money. >> Therefore, you can afford to buy more of them. > > I sure hope you aren't ever buying for my company! :) :) >

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-23 Thread Daniel Carosone
On Sat, Jan 23, 2010 at 09:04:31AM -0800, Simon Breden wrote: > For resilvering to be required, I presume this will occur mostly in > the event of a mechanical failure. Soft failures like bad sectors > will presumably not require resilvering of the whole drive to occur, > as these types of error ar

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-23 Thread Daniel Carosone
On Sat, Jan 23, 2010 at 12:30:01PM -0800, Simon Breden wrote: > And regarding mirror vdevs etc, I can see the usefulness of being > able to build a mirror vdev of multiple drives for cases where you > have really critical data -- e.g. a single 4-drive mirror vdev. I > suppose regular backups can he

Re: [zfs-discuss] zfs zvol available space vs used space vs reserved space

2010-01-21 Thread Daniel Carosone
On Thu, Jan 21, 2010 at 07:33:47PM -0800, Younes wrote: > Hello all, > > I have a small issue with zfs. > I created a 1TB volume. > > # zfs get all tank/test01 > NAME  PROPERTY  VALUE >

Re: [zfs-discuss] L2ARC in Cluster is picked up althought not part of the pool

2010-01-21 Thread Daniel Carosone
On Thu, Jan 21, 2010 at 05:52:57PM -0800, Richard Elling wrote: > I agree with this, except for the fact that the most common installers > (LiveCD, Nexenta, etc.) use the whole disk for rpool[1]. Er, no. You certainly get the option of "whole disk" or "make partitions", at least with the opensola

Re: [zfs-discuss] L2ARC in Cluster is picked up althought not part of the pool

2010-01-21 Thread Daniel Carosone
On Thu, Jan 21, 2010 at 03:33:28PM -0800, Richard Elling wrote: > [Richard makes a hobby of confusing Dan :-)] Heh. > > Lutz, is the pool autoreplace property on? If so, "god help us all" > > is no longer quite so necessary. > > I think this is a different issue. I agree. For me, it was the ma

Re: [zfs-discuss] Zpool is a bit Pessimistic at failures

2010-01-21 Thread Daniel Carosone
On Thu, Jan 21, 2010 at 11:14:33PM +0100, Henrik Johansson wrote: > I think this could scare or even make new users do terrible things, > even if the errors could be fixed. I think I'll file a bug, agree? Yes, very much so. -- Dan.

Re: [zfs-discuss] 2gig file limit on ZFS?

2010-01-21 Thread Daniel Carosone
On Thu, Jan 21, 2010 at 02:54:21PM -0800, Richard Elling wrote: > + support file systems larger then 2GiB include 32-bit UIDs a GIDs file systems, but what about individual files within? -- Dan.

Re: [zfs-discuss] Dedup memory overhead

2010-01-21 Thread Daniel Carosone
On Fri, Jan 22, 2010 at 08:55:16AM +1100, Daniel Carosone wrote: > For performance (rather than space) issues, I look at dedup as simply > increasing the size of the working set, with a goal of reducing the > amount of IO (avoided duplicate writes) in return. I should add "and

Re: [zfs-discuss] 2gig file limit on ZFS?

2010-01-21 Thread Daniel Carosone
On Thu, Jan 21, 2010 at 01:55:53PM -0800, Michelle Knight wrote: > The error messages are in the original post. They are... > /mirror2/applications/Microsoft/Operating Systems/Virtual PC/vm/XP-SP2/XP-SP2 > Hard Disk.vhd: File too large > /mirror2/applications/virtualboximages/xp/xp.tar.bz2: File t

Re: [zfs-discuss] Dedup memory overhead

2010-01-21 Thread Daniel Carosone
On Thu, Jan 21, 2010 at 05:04:51PM +0100, erik.ableson wrote: > What I'm trying to get a handle on is how to estimate the memory > overhead required for dedup on that amount of storage. We'd all appreciate better visibility of this. This requires: - time and observation and experience, and -

Re: [zfs-discuss] L2ARC in Cluster is picked up althought not part of the pool

2010-01-21 Thread Daniel Carosone
On Thu, Jan 21, 2010 at 09:36:06AM -0800, Richard Elling wrote: > On Jan 20, 2010, at 4:17 PM, Daniel Carosone wrote: > > > On Wed, Jan 20, 2010 at 03:20:20PM -0800, Richard Elling wrote: > >> Though the ARC case, PSARC/2007/618 is "unpublished," I gather from

Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2010-01-20 Thread Daniel Carosone
On Wed, Jan 20, 2010 at 10:04:34AM -0800, Willy wrote: > To those concerned about this issue, there is a patched version of > smartmontools that enables the querying and setting of TLER/ERC/CCTL > values (well, except for recent desktop drives from Western > Digitial). [Joining together two recent

Re: [zfs-discuss] L2ARC in Cluster is picked up althought not part of the pool

2010-01-20 Thread Daniel Carosone
On Wed, Jan 20, 2010 at 03:20:20PM -0800, Richard Elling wrote: > Though the ARC case, PSARC/2007/618 is "unpublished," I gather from > googling and the source that L2ARC devices are considered auxiliary, > in the same category as spares. If so, then it is perfectly reasonable to > expect that it g

Re: [zfs-discuss] ZFS default compression and file size limit?

2010-01-20 Thread Daniel Carosone
On Wed, Jan 20, 2010 at 12:42:35PM -0500, Wajih Ahmed wrote: > Mike, > > Thank you for your quick response... > > Is there a way for me to test the compression from the command line to > see if lzjb is giving me more or less than the 12.5% mark? I guess it > will depend if there is a lzjb comm

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-19 Thread Daniel Carosone
There is a tendency to conflate "backup" and "archive", both generally and in this thread. They have different requirements. Backups should enable quick restore of a full operating image with all the necessary system level attributes. They are concerned with SLA and uptime and outage and data loss w

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-19 Thread Daniel Carosone
On Tue, Jan 19, 2010 at 12:16:01PM +0100, Joerg Schilling wrote: > Daniel Carosone wrote: > > > I also don't recommend files >1Gb in size for DVD media, due to > > iso9660 limitations. I haven't used UDF enough to say much about any > > limitations there. &g
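One hedged workaround for the media limit is to split the stream into chunks below the limit and rejoin them before use, since lexical suffix ordering makes reassembly trivial. A runnable demo with a small stand-in file (for real DVD media you might use something like "-b 1000m"):

```shell
# Stand-in for a large backup stream so the sketch runs anywhere.
printf 'stand-in for a large backup stream\n' > /tmp/big.stream

# Split into fixed-size chunks; suffixes (aa, ab, ...) sort lexically.
split -b 16 /tmp/big.stream /tmp/big.stream.part.

# Rejoin in glob (= lexical) order and verify byte-for-byte.
cat /tmp/big.stream.part.* > /tmp/big.rejoined
cmp /tmp/big.stream /tmp/big.rejoined && echo "chunks rejoin cleanly"
```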

Re: [zfs-discuss] Snapshot that won't go away.

2010-01-18 Thread Daniel Carosone
On Mon, Jan 18, 2010 at 05:52:25PM +1300, Ian Collins wrote: >> Is it the parent snapshot for a clone? >> > I'm almost certain it isn't. I haven't created any clones and none show > in zpool history. What about snapshot holds? I don't know if (and doubt whether) these are in S10, but since

Re: [zfs-discuss] Is ZFS internal reservation excessive?

2010-01-18 Thread Daniel Carosone
On Mon, Jan 18, 2010 at 03:25:56PM -0800, Erik Trimble wrote: > Hopefully, once BP rewrite materializes (I know, I'm treating this > much to much as a Holy Grail, here to save us from all the ZFS > limitations, but really...), we can implement defragmentation which > will seriously reduce the amou

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-18 Thread Daniel Carosone
On Mon, Jan 18, 2010 at 01:38:16PM -0800, Richard Elling wrote: > The Solaris 10 10/09 zfs(1m) man page says: > > The format of the stream is committed. You will be able > to receive your streams on future versions of ZFS. > > I'm not sure when that hit snv, but obviously it wa

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-18 Thread Daniel Carosone
On Mon, Jan 18, 2010 at 07:34:51PM +0100, Lassi Tuura wrote: > > Consider then, using a zpool-in-a-file as the file format, rather than > > zfs send streams. > > This is an interesting suggestion :-) > > Did I understand you correctly that once a slice is written, zfs > won't rewrite it? In other
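The zpool-in-a-file idea might be sketched as below (Solaris-only commands, illustrative paths and names, so not runnable elsewhere). Unlike a raw send stream, the pool inside the container file carries end-to-end checksums and can be scrubbed in place.

```shell
# Create a container file and a pool inside it.
mkfile 10g /backup/container.img
zpool create bpool /backup/container.img

# Receive datasets into the container pool instead of storing a stream.
zfs send tank/data@today | zfs receive bpool/data

# Export; the file is now a self-contained, scrubbable backup container.
zpool export bpool
```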

Re: [zfs-discuss] Backing up a ZFS pool

2010-01-18 Thread Daniel Carosone
On Mon, Jan 18, 2010 at 03:24:19AM -0500, Edward Ned Harvey wrote: > Unless I am mistaken, I believe, the following is not possible: > > On the source, create snapshot "1" > Send snapshot "1" to destination > On the source, create snapshot "2" > Send incremental, from "1" to "2" to the destination
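The sequence in question, sketched with illustrative dataset names; the incremental receive depends on the destination still holding snapshot "1" as its base:

```shell
# Full send of the first snapshot.
zfs snapshot tank/fs@1
zfs send tank/fs@1 | ssh dest zfs receive backup/fs

# Later: incremental from @1 to @2 transmits only the blocks that
# changed between the two snapshots.
zfs snapshot tank/fs@2
zfs send -i tank/fs@1 tank/fs@2 | ssh dest zfs receive backup/fs
```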

Re: [zfs-discuss] Snapshot that won't go away.

2010-01-17 Thread Daniel Carosone
On Sun, Jan 17, 2010 at 06:21:45PM +1300, Ian Collins wrote: > I have a Solaris 10 update 6 system with a snapshot I can't remove. > > zfs destroy -f reports the device as being busy. fuser doesn't > show any process using the filesystem and it isn't shared. Is it the parent snapshot for a c

Re: [zfs-discuss] Backing up a ZFS pool

2010-01-17 Thread Daniel Carosone
On Sun, Jan 17, 2010 at 04:38:03PM -0600, Bob Friesenhahn wrote: > On Mon, 18 Jan 2010, Daniel Carosone wrote: >> >> .. as long as you scrub both the original pool and the backup pool >> with the same regularity. sending the full backup from the source is >> basically

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-17 Thread Daniel Carosone
On Sun, Jan 17, 2010 at 05:31:39AM -0500, Edward Ned Harvey wrote: > Instead, it is far preferable to "zfs send | zfs receive" ... That is, > receive the data stream on external media as soon as you send it. Agree 100% - but.. .. it's hard to beat the convenience of a "backup file" format, for

Re: [zfs-discuss] Backing up a ZFS pool

2010-01-17 Thread Daniel Carosone
On Sun, Jan 17, 2010 at 08:05:27AM -0800, Richard Elling wrote: > > Personally, I like to start with a fresh "full" image once a month, and > > then do daily incrementals for the rest of the month. > > This doesn't buy you anything. .. as long as you scrub both the original pool and the backup

Re: [zfs-discuss] zfs fast mirror resync?

2010-01-15 Thread Daniel Carosone
On Fri, Jan 15, 2010 at 10:37:15AM -0500, Charles Menser wrote: > Perhaps an ISCSI mirror for a laptop? Online it when you are back > "home" to keep your backup current. I do exactly this, but: - It's not the only thing I do for backup. - The iscsi initiator is currently being a major PITA for

Re: [zfs-discuss] How do separate ZFS filesystems affect performance?

2010-01-13 Thread Daniel Carosone
On Wed, Jan 13, 2010 at 08:21:13AM -0600, Gary Mills wrote: > Yes, I understand that, but do filesystems have separate queues of any > sort within the ZIL? I'm not sure. If you can experiment and measure a benefit, understanding the reasons is helpful but secondary. If you can't experiment so eas

Re: [zfs-discuss] rpool mirror on zvol, can't offline and detach

2010-01-12 Thread Daniel Carosone
On Tue, Jan 12, 2010 at 01:26:15PM -0700, Cindy Swearingen wrote: > I see now how you might have created this config. > > I tried to reproduce this issue by creating a separate pool on another > disk and a volume to attach to my root pool, but my system panics when > I try to attach the volume to t

Re: [zfs-discuss] internal backup power supplies?

2010-01-12 Thread Daniel Carosone
On Mon, Jan 11, 2010 at 10:10:37PM -0800, Lutz Schumann wrote: > p.s. While writing this I'm thinking if a-card handles this case well ? ... > maybe not. apart from the fact that they seem to be hard to source, this is a big question about this interesting device for me too. I hope so, since it

Re: [zfs-discuss] internal backup power supplies?

2010-01-11 Thread Daniel Carosone
> [google server with batteries] These are cool, and a clever rethink of the typical data centre power supply paradigm. They keep the server running, until either a generator is started or a graceful shutdown can be done. Just to be clear, I'm talking about something much smaller, that provides

Re: [zfs-discuss] rpool mirror on zvol, can't offline and detach

2010-01-11 Thread Daniel Carosone
On Mon, Jan 11, 2010 at 06:03:40PM -0800, Richard Elling wrote: > IMHO, a split mirror is not as good as a decent backup :-) I know.. that was more by way of introduction and background. It's not the only method of backup, but since this disk does get plugged into the netbook frequently enough it

Re: [zfs-discuss] rpool mirror on zvol, can't offline and detach

2010-01-11 Thread Daniel Carosone
On Tue, Jan 12, 2010 at 02:38:56PM +1300, Ian Collins wrote: > How did you set the subdevice in the off line state? # zpool offline rpool /dev/zvol/dsk/ sorry if that wasn't clear. > Did you detach the device from the mirror? No, because then: - it will have to resilver fully on next atta

Re: [zfs-discuss] rpool mirror on zvol, can't offline and detach

2010-01-11 Thread Daniel Carosone
I should have mentioned: - opensolaris b130 - of course I could use partitions on the usb disk, but that's so much less flexible. -- Dan.

[zfs-discuss] rpool mirror on zvol, can't offline and detach

2010-01-11 Thread Daniel Carosone
I have a netbook with a small internal ssd as rpool. I have an external usb HDD with much larger storage, as a separate pool, which is sometimes attached to the netbook. I created a zvol on the external pool, the same size as the internal ssd, and attached it as a mirror to rpool for backup. I d
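The setup described might be sketched as follows (illustrative device, pool, and zvol names; requires a live system, so a sketch only):

```shell
# Create a zvol on the external pool, sized to match the internal ssd.
zfs create -V 16g external/rpool-mirror

# Attach it as a second side of the rpool mirror; zfs resilvers onto it.
zpool attach rpool c0d0s0 /dev/zvol/dsk/external/rpool-mirror

# Before unplugging the USB disk, take the submirror offline so the
# pool degrades cleanly and later resyncs only the changed blocks.
zpool offline rpool /dev/zvol/dsk/external/rpool-mirror
```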

[zfs-discuss] internal backup power supplies?

2010-01-11 Thread Daniel Carosone
With all the recent discussion of SSD's that lack suitable power-failure cache protection, surely there's an opportunity for a separate modular solution? I know there used to be (years and years ago) small internal UPS's that fit in a few 5.25" drive bays. They were designed to power the motherbo

Re: [zfs-discuss] I/O Read starvation

2010-01-10 Thread Daniel Carosone
On Sun, Jan 10, 2010 at 09:54:56AM -0600, Bob Friesenhahn wrote: > WTF? urandom is a character device and is returning short reads (note the 0+n vs n+0 counts). dd is not padding these out to the full blocksize (conv=sync) or making multiple reads to fill blocks (conv=fullblock). Evidently the ur

Re: [zfs-discuss] Thin device support in ZFS?

2010-01-08 Thread Daniel Carosone
Yet another way to thin-out the backing devices for a zpool on a thin-provisioned storage host, today: resilver. If your zpool has some redundancy across the SAN backing LUNs, simply drop and replace one at a time and allow zfs to resilver only the blocks currently in use onto the replacement LUN
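A hedged sketch of the replace-and-resilver approach (hypothetical pool and LUN names): resilver copies only the blocks currently in use onto the fresh thin LUN, so space freed in the pool is never written to the new backing store.

```shell
# Replace one redundant member at a time with a fresh thin LUN.
zpool replace sanpool oldlun1 newlun1

# Wait for the resilver to complete before replacing the next member,
# so redundancy is never reduced further than one device at a time.
zpool status sanpool
```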

Re: [zfs-discuss] Benchmarks results for ZFS + NFS, using SSD's as slog devices (ZIL)

2009-12-23 Thread Daniel Carosone
On Thu, Dec 24, 2009 at 12:07:03AM +0100, Jeroen Roodhart wrote: > We are under the impression that a setup that server NFS over UFS has > the same assurance level than a setup using "ZFS without ZIL". Is this > impression false? Completely. It's closer to "UFS mount -o async", without the risk o

[zfs-discuss] zfs-crypto vs. tampering and replay

2009-12-22 Thread Daniel Carosone
On Mon, Dec 21, 2009 at 02:44:00PM -0800, Darren J Moffat wrote: > The IV generation when doing deduplication > is done by calculating an HMAC of the plaintext using a separate per > dataset key (that is also refreshed if 'zfs key -K' is run to rekey the > dataset). > [..] > In the case where

Re: [zfs-discuss] DeDup and Compression - Reverse Order?

2009-12-17 Thread Daniel Carosone
Your parenthetical comments here raise some concerns, or at least eyebrows, with me. Hopefully you can lower them again. > compress, encrypt, checksum, dedup. > (and you need to use zdb to get enough info to see the > leak - and that means you have access to the raw devices) An attacker with

Re: [zfs-discuss] all zfs snapshot made by TimeSlider destroyed after upgrading to b129

2009-12-15 Thread Daniel Carosone
None of these look like the issue either. With 128, I did have to edit the code to avoid the month rollover error, and add the missing dependency dbus-python26. I think I have a new install that went to 129 without having auto snapshots enabled yet. When I can get to that machine later, I wil

Re: [zfs-discuss] all zfs snapshot made by TimeSlider destroyed after upgrading to b129

2009-12-14 Thread Daniel Carosone
> There was an announcement made in November about auto > snapshots being made obsolete in build 128 That thread (which I know well) talks about the replacement of the implementation, while retaining (the majority of) the behaviour and configuration interface. The old implementation had

Re: [zfs-discuss] all zfs snapshot made by TimeSlider destroyed after upgrading to b129

2009-12-13 Thread Daniel Carosone
I can't (yet!) say I've seen the same, with respect to disappearing snapshots. However, I can confirm that I am seeing the same thing, with respect to snapshots without the "frequent" prefix.. $ zfs list -t snapshot | fgrep :- rp...@zfs-auto-snap:-2009-12-14-13:15

Re: [zfs-discuss] Accidentally added disk instead of attaching

2009-12-08 Thread Daniel Carosone
>>> Doesn't the "mismatched replication" message help? >> >> Not if you're trying to make a single disk pool redundant by adding .. >> er, attaching .. a mirror; then there won't be such a warning, however >> effective that warning might or might not be otherwise. > > Not a problem because you ca

Re: [zfs-discuss] Accidentally added disk instead of attaching

2009-12-07 Thread Daniel Carosone
> but if you attempt to "add" a disk to a redundant > config, you'll see an error message similar [..] > > Doesn't the "mismatched replication" message help? Not if you're trying to make a single disk pool redundant by adding .. er, attaching .. a mirror; then there won't be such a warning, howe

Re: [zfs-discuss] Accidentally added disk instead of attaching

2009-12-07 Thread Daniel Carosone
> Jokes aside, this is too easy to make a mistake with > the consequences that are > too hard to correct. Anyone disagrees? No, and this sums up the situation nicely, in that there are two parallel paths toward a resolution: - make the mistake harder to make (various ideas here) - make the co

Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread Daniel Carosone
> > Isn't this only true if the file sizes are such that the concatenated > > blocks are perfectly aligned on the same zfs block boundaries they used > > before? This seems unlikely to me. > > Yes that would be the case. While eagerly awaiting b128 to appear in IPS, I have been giving this iss

Re: [zfs-discuss] FreeNAS 0.7 zfs performance

2009-11-30 Thread Daniel Carosone
I haven't used it myself, but you could look at the EON software NAS appliance: http://eonstorage.blogspot.com/

Re: [zfs-discuss] zfs-raidz - simulate disk failure

2009-11-25 Thread Daniel Carosone
> Speaking practically, do you evaluate your chipset > and disks for hotplug support before you buy? Yes, if someone else has shared their test results previously.

Re: [zfs-discuss] Heads up: SUNWzfs-auto-snapshot obsoletion in snv 128

2009-11-25 Thread Daniel Carosone
> So we also need a "txg dirty" or similar > property to be exposed from the kernel. Or not.. if you find this condition, defer, but check again in a minute (really, after a full txg_interval has passed) rather than on the next scheduled snapshot. On that next check, if the txg has advanced aga

Re: [zfs-discuss] Heads up: SUNWzfs-auto-snapshot obsoletion in snv 128

2009-11-25 Thread Daniel Carosone
> you missed my point: you can't compare the current > txg to an old cr_txg directly, since the current > txg value will be at least 1 higher, even if > no changes have been made. OIC. So we also need a "txg dirty" or similar property to be exposed from the kernel.

Re: [zfs-discuss] zfs-raidz - simulate disk failure

2009-11-25 Thread Daniel Carosone
>> [verify on real hardware and share results] > Agree 110%. Good :) > > Yanking disk controller and/or power cables is an > > easy and obvious test. > The problem is that yanking a disk tests the failure > mode of yanking a disk. Yes, but the point is that it's a cheap and easy test, so you mi

Re: [zfs-discuss] heads-up: dedup=fletcher4,verify was broken

2009-11-25 Thread Daniel Carosone
It seems b128 will be re-spun for IPS, and was canceled only for SXCE.

Re: [zfs-discuss] zfs-raidz - simulate disk failure

2009-11-24 Thread Daniel Carosone
Those are great, but they're about testing the zfs software. There's a small amount of overlap, in that these injections include trying to simulate the hoped-for system response (e.g., EIO) to various physical scenarios, so it's worth looking at for scenario suggestions. However, for most of us

Re: [zfs-discuss] Heads up: SUNWzfs-auto-snapshot obsoletion in snv 128

2009-11-24 Thread Daniel Carosone
> you can fetch the "cr_txg" (cr for creation) for a snapshot using zdb,

yes, but this is hardly an appropriate interface. zdb is also likely to cause disk activity, because it looks at many things other than the specific item in question.

> but the very creation of a snapshot requires a new
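For what it's worth, a rough sketch of the zdb scrape being discussed. This is exactly the "hardly an appropriate interface" caveat above: zdb's output is not a stable interface, the header line in the comment is from memory, and the snapshot name is an invented example.

```shell
# Scrape a snapshot's cr_txg out of zdb's dataset dump. The dump's
# header line looks roughly like:
#   Dataset tank/home@snap [ZPL], ID 42, cr_txg 1234, ...
# so sed can pull the number out of it.
snap=tank/home@zfs-auto-snap_hourly-2009-11-24-12h00
zdb -dddd "$snap" | sed -n 's/.*cr_txg \([0-9][0-9]*\).*/\1/p'
```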

Re: [zfs-discuss] Heads up: SUNWzfs-auto-snapshot obsoletion in snv 128

2009-11-23 Thread Daniel Carosone
> Daniel Carosone writes:
> > Would there be a way to avoid taking snapshots if they're going to be zero-sized?
>
> I don't think it is easy to do, the txg counter is on a pool level,
> [..]
> it would help when the entire pool is idle, t

Re: [zfs-discuss] [Fwd: [zfs-auto-snapshot] Heads up: SUNWzfs-auto-snapshot obsoletion in snv 128]

2009-11-17 Thread Daniel Carosone
I welcome the re-write. The deficiencies of the current snapshot cleanup implementation have been a source of constant background irritation to me for a while, and the subject of a few bugs. Regarding the issues in contention - the send hooks capability is useful and should remain, but the i

Re: [zfs-discuss] snv_110 -> snv_121 produces checksum errors on Raid-Z pool

2009-09-02 Thread Daniel Carosone
Furthermore, this clarity needs to be posted somewhere much, much more visible than buried in some discussion thread.

Re: [zfs-discuss] incremental backup with zfs to file

2009-08-24 Thread Daniel Carosone
> How about if you don't 'detach' them? Just unplug the backup device in the pair, plug in the temporary replacement, and tell zfs to replace the device.

Hm. I had tried a variant: a three-way mirror, with one device missing most of the time. The annoyance of that was that the pool c
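A sketch of what the quoted suggestion might look like at the command line; the pool and device names here are invented for illustration, and this is untested:

```shell
# The backup disk (c2t0d0) has been unplugged; swap in the temporary
# replacement (c3t0d0) and let it resilver. The vdev stays a two-way
# mirror throughout, rather than a three-way mirror with one side
# usually absent.
zpool replace tank c2t0d0 c3t0d0
zpool status tank    # watch the resilver progress
```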

Re: [zfs-discuss] incremental backup with zfs to file

2009-08-24 Thread Daniel Carosone
> You can validate a stream stored as a file at any time using the "zfs receive -n" option.

Interesting. Maybe it's just a documentation issue, but the man page doesn't make it clear that this command verifies much more than the names in the stream, and suggests that the rest of the data cou
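For reference, the dry-run validation being described looks like this; the file and dataset names are examples:

```shell
# Dry-run receive: -n reads the stream end-to-end without updating
# the pool (so the stream's embedded checksums can be checked),
# -v prints what it finds along the way.
zfs receive -n -v tank/restored < /backup/home-20090824.zfs
```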

Re: [zfs-discuss] incremental backup with zfs to file

2009-08-23 Thread Daniel Carosone
> On Sun, 23 Aug 2009, Daniel Carosone wrote:
> > Userland tools to read and verify a stream, without having to play it into a pool (seek and io overhead) could really help here.
>
> This assumes that the problem is data corruption of the stream, which

Re: [zfs-discuss] incremental backup with zfs to file

2009-08-23 Thread Daniel Carosone
> Save the data to a file stored in zfs. Then you are covered. :-)

Only if the stream was also separately covered in transit. While you want in-transit protection regardless, "zfs recv"ing the stream into a pool validates that it was not damaged in transit, as well as giving you at-rest prot

Re: [zfs-discuss] zfs send/receive and compression

2009-08-23 Thread Daniel Carosone
> I have a gzip-9 compressed filesystem that I want to backup to a remote system and would prefer not to have to recompress everything again at such great computation expense.

This would be nice, and a similar desire applies to upcoming streams after zfs-crypto lands. However, the presen

Re: [zfs-discuss] zfs snapshoot of rpool/* to usb removable drives?

2009-07-08 Thread Daniel Carosone
> Thankyou! Am I right in thinking that rpool snapshots will include things like swap? If so, is there some way to exclude them?

Hi Carl :) You can't exclude them from the send -R with something like --exclude, but you can make sure there are no such snapshots (which aren't useful anyway)
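As a sketch of that workaround (snapshot name and the receiving pool are invented; swap and dump are the usual culprits under the default rpool layout):

```shell
# Take the recursive snapshot, prune the snapshots nobody wants to
# keep, then send what remains.
zfs snapshot -r rpool@backup
zfs destroy rpool/swap@backup
zfs destroy rpool/dump@backup
zfs send -R rpool@backup | zfs receive -Fdu backup/rpool
```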

Re: [zfs-discuss] Single disk parity

2009-07-07 Thread Daniel Carosone
> Sorry, don't have a thread reference to hand just now.

http://www.opensolaris.org/jive/thread.jspa?threadID=100296

Note that there's little empirical evidence that this is directly applicable to the kinds of errors (single bit, or otherwise) that a single failing disk medium would produce.

Re: [zfs-discuss] Single disk parity

2009-07-07 Thread Daniel Carosone
There was a discussion in zfs-code around the error-correcting (rather than just -detecting) properties of the checksums currently kept, and of potential additional checksum methods with stronger properties. It came out of another discussion about fletcher2 being both weaker than desired, and flawed

Re: [zfs-discuss] Broken snapshot is a bit of a disaster

2009-06-22 Thread Daniel Carosone
> Other details - the original ZFS was created at ZFS version 14 on SNV b105, trying to be restored to ZFS version 15 on SNV b114. Any help would be appreciated.

The zfs send/recv format is not warranted to be compatible between revisions. I don't know, offhand, if that is the problem in
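A quick sketch of checking the versions in play on each end before blaming the stream format (pool and dataset names are examples):

```shell
# On each machine: what versions the data carries, and what this
# build supports. Receiving onto a build that doesn't know the
# source's versions is a likely failure mode.
zpool get version tank
zfs get -H -o value version tank/home
zfs upgrade -v     # lists the filesystem versions this build understands
```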

Re: [zfs-discuss] zfs on 32 bit?

2009-06-16 Thread Daniel Carosone
> Not a ZFS bug. [SMI vs EFI labels vs BIOS booting]

and so also only a problem for disks that are members of the root pool. i.e., I can have >1Tb disks as part of a non-bootable data pool, with EFI labels, on a 32-bit machine?
