Until zfs-crypto arrives, I am using a pool for sensitive data inside
several files encrypted via lofi crypto. The data is also valuable,
of course, so the pool is mirrored, with one file on each of several
pools (laptop rpool, and a couple of usb devices, not always
connected).
These backing fil
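A minimal sketch of that kind of setup (file paths, sizes and lofi
device numbers here are illustrative, not my actual layout):

  # mkfile 2g /rpool/vault-file /extusb1/vault-file
  # lofiadm -c aes-256-cbc -a /rpool/vault-file     (prompts for passphrase)
  # lofiadm -c aes-256-cbc -a /extusb1/vault-file
  # zpool create vault mirror /dev/lofi/1 /dev/lofi/2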
On Tue, Feb 09, 2010 at 08:26:42AM -0800, Richard Elling wrote:
> >> "zdb -D poolname" will provide details on the DDT size. FWIW, I have a
> >> pool with 52M DDT entries and the DDT is around 26GB.
I wish -D was documented; I had forgotten about it and only found the
(expensive) -S variant, whic
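For the record, the variants look like this (pool name illustrative):

  # zdb -D tank      summarises DDT entry counts and sizes for a deduped pool
  # zdb -DD tank     adds a histogram of entries by reference count
  # zdb -S tank      the expensive variant: simulates dedup block by block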
On Tue, Feb 09, 2010 at 03:11:38PM +1100, Daniel Carosone wrote:
> I didn't find anything to indicate either way whether there was
> bootable bios on board
Ah - in the install guide there's a mention about pressing "F4" or
"Ctrl-S" when prompted at boot to c
On Mon, Feb 08, 2010 at 07:33:56PM -0800, Erik Trimble wrote:
> To reply to myself, the best I can do is this:
>
>http://www.apricorn.com/product_detail.php?type=family&id=59
>
> (it uses a sil3124 controller, so it /might/ work with OpenSolaris )
Nice. I'd certainly like to know if you t
> > Although I am in full support of what sun is doing, to play devils
> > advocate: supermicro is.
They're not the only ones, although they're the most often discussed here.
Dell will generally sell hardware and warranty and service add-ons in
any combination, to anyone willing and capable of figurin
On Mon, Feb 08, 2010 at 05:23:29PM -0700, Cindy Swearingen wrote:
> Hi Lasse,
>
> I expanded this entry to include more details of the zpool list and
> zfs list reporting.
>
> See if the new explanation provides enough details.
Cindy, feel free to crib from or refer to my text in whatever way migh
This is a long thread, with lots of interesting and valid observations
about the organisation of the industry, the segmentation of the
market, getting what you pay for vs paying for what you want, etc.
I don't really find within, however, an answer to the original
question, at least the way I re
On Mon, Feb 08, 2010 at 11:28:11PM +0100, Lasse Osterild wrote:
> Ok thanks I know that the amount of used space will vary, but what's
> the usefulness of the total size when ie in my pool above 4 x 1G
> (roughly, depending on recordsize) are reserved for parity, it's not
> like it's useable for an
On Mon, Feb 08, 2010 at 11:24:56AM -0800, Lutz Schumann wrote:
> > Only with the zdb(1M) tool but note that the
> > checksums are NOT of files
> > but of the ZFS blocks.
>
> Thanks - blocks, right (doh) - that's what I was missing. Damn it would be so
> nice :(
If you're comparing the current dat
On Mon, Feb 01, 2010 at 12:22:55PM -0800, Lutz Schumann wrote:
> > > Created a pool on head1 containing just the cache
> > device (c0t0d0).
> >
> > This is not possible, unless there is a bug. You
> > cannot create a pool
> > with only a cache device. I have verified this on
> > b131:
> > # zpoo
On Thu, Feb 04, 2010 at 04:17:17PM -0800, Scott Meilicke wrote:
> At this point, my server Gallardo can see the LUN, but like I said, it looks
> blank to the OS. I suspect the 'sbdadm create-lu' phase.
Yeah, try the import version of that command.
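Something along these lines (the zvol path is illustrative):

  # sbdadm import-lu /dev/zvol/rdsk/tank/gallardo-lun

import-lu reads the LU metadata already on the device, rather than
writing a fresh header over it the way create-lu does.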
--
Dan.
On Mon, Feb 08, 2010 at 04:58:38AM +0100, Felix Buenemann wrote:
> I have some questions about the choice of SSDs to use for ZIL and L2ARC.
I have one answer. The other questions are mostly related to your
raid controller, which I can't answer directly.
> - Is it safe to run the L2ARC without ba
On Sat, Feb 06, 2010 at 09:22:57AM -0800, Richard Elling wrote:
> I'm interested in anecdotal evidence which suggests there is a
> problem as it is currently designed.
I like to look at it differently: I'm not sure if there is a
problem. I'd like to have a simple way to discover a problem, using
Two related questions:
- given an existing pool with dedup'd data, how can I find the
current size of the DDT? I presume some zdb work to find and dump the
relevant object, but what specifically?
- what's the expansion ratio for the memory overhead of L2ARC entries?
If I know my DDT
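To make the second question concrete with a purely illustrative
calculation (the per-entry figures are assumptions, not measurements):
52M DDT entries at roughly 500 bytes each is ~26GB, matching the
figures quoted elsewhere; if those blocks are pushed out to L2ARC,
each L2ARC-resident buffer still costs an ARC header of a couple of
hundred bytes, so 52M of them would want on the order of 10GB of RAM
for the headers alone.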
On Sat, Jan 30, 2010 at 06:07:48PM -0500, Frank Middleton wrote:
> On 01/30/10 05:33 PM, Ross Walker wrote:
>> Just install the OS on the first drive and add the second drive to form
>> a mirror.
>
> After more than a year or so of experience with ZFS on drive constrained
> systems, I am convinced
On Thu, Jan 28, 2010 at 09:33:19PM -0800, Ed Fang wrote:
> We considered a SSD ZIL as well but from my understanding it won't
> help much on sequential bulk writes but really helps on random
> writes (to sequence going to disk better).
slog will only help if your write load involves lots of sync
On Thu, Jan 28, 2010 at 07:26:42AM -0800, Ed Fang wrote:
> 4 x x6 vdevs in RaidZ1 configuration
> 3 x x8 vdevs in RaidZ2 configuration
Another choice might be
2 x x12 vdevs in raidz2 configuration
This gets you the space of the first, with the recovery properties of
the second - at a cost in pot
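Spelling out the space arithmetic, all with 24 disks:

  4 x 6-disk raidz1:   4 x (6-1)  = 20 data disks
  3 x 8-disk raidz2:   3 x (8-2)  = 18 data disks
  2 x 12-disk raidz2:  2 x (12-2) = 20 data disks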
On Wed, Jan 27, 2010 at 09:57:08PM -0800, Bill Sommerfeld wrote:
Hi Bill! :-)
> On 01/27/10 21:17, Daniel Carosone wrote:
>> This is as expected. Not expected is that:
>>
>> usedbyrefreservation = refreservation
>>
>> I would expect this to be 0, sinc
In a thread elsewhere, trying to analyse why the zfs auto-snapshot
cleanup code was cleaning up more aggressively than expected, I
discovered some interesting properties of a zvol.
http://mail.opensolaris.org/pipermail/zfs-auto-snapshot/2010-January/000232.html
The zvol is not thin-provisioned.
On Wed, Jan 27, 2010 at 02:47:47PM -0800, Christo Kutrovsky wrote:
> In the case of a ZVOL with the following settings:
>
> primarycache=off, secondarycache=all
>
> How does the L2ARC get populated if the data never makes it to ARC ?
> Is this even a valid configuration?
It's valid, I assume, i
On Wed, Jan 27, 2010 at 02:34:29PM -0600, David Dyer-Bennet wrote:
> Google is working heavily with the philosophy that things WILL fail, so
> they plan for it, and have enough redundance to survive it -- and then
> save lots of money by not paying for premium components. I like that
> appro
On Wed, Jan 27, 2010 at 12:01:36PM -0800, Gregory Durham wrote:
> Hello All,
> I read through the attached threads and found a solution by a poster and
> decided to try it.
That may have been mine - good to know it helped, or at least started to.
> The solution was to use 3 files (in my case I ma
On Tue, Jan 26, 2010 at 07:32:05PM -0800, David Dyer-Bennet wrote:
> Okay, so this SuperMicro AOC-USAS-L8i is an "SAS" card? I've never
> done SAS; is it essentially a controller as flexible as SCSI that
> then talks to SATA disks out the back?
Yes, or SAS disks.
> Amazon seems to be the only
On Mon, Jan 25, 2010 at 05:36:35PM -0500, Miles Nordin wrote:
> > "sb" == Simon Breden writes:
>
> sb> 1. In simple non-RAID single drive 'desktop' PC scenarios
> sb> where you have one drive, if your drive is experiencing
> sb> read/write errors, as this is the only drive you hav
On Mon, Jan 25, 2010 at 05:42:59PM -0500, Miles Nordin wrote:
> et> You cannot import a stream into a zpool of earlier revision,
> et> thought the reverse is possible.
>
> This is very bad, because it means if your backup server is pool
> version 22, then you cannot use it to back up pool
On Mon, Jan 25, 2010 at 04:08:04PM -0600, David Dyer-Bennet wrote:
> > - Don't be afraid to dike out the optical drive, either for case
> >space or available ports. [..]
> >[..] Put the drive in an external USB case if you want,
> >or leave it in the case connected via a USB bridge in
Some other points and recommendations to consider:
- Since you have the bays, get the controller to drive them,
regardless. They will have many uses, some of which below.
A 4-port controller would allow you enough ports for both the two
empty hotswap bays, plus the dual 2.5" carrier.
On Thu, Jan 21, 2010 at 03:55:59PM +0100, Matthias Appel wrote:
> I have a serious issue with my zpool.
Yes. You need to figure out what the root cause of the issue is.
> My zpool consists of 4 vdevs which are assembled to 2 mirrors.
>
> One of this mirrors got degraded cause of too many errors
Another issue with all this arithmetic: one needs to factor in the
cost of additional spare disks (what were you going to resilver onto?).
I look at it like this: you purchase the same number of total disks
(active + hot spare + cold spare), and raidz2 vs raidz3 simply moves a
disk from one of the
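As an illustrative count (the numbers aren't from the thread): with 12
disks purchased either way, raidz2 might be laid out as 9 data + 2
parity + 1 hot spare, where raidz3 is 9 data + 3 parity and no hot
spare - the extra parity disk is effectively a spare that is already
resilvered, all the time.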
On Fri, Jan 22, 2010 at 04:12:48PM -0500, Miles Nordin wrote:
> w> http://www.csc.liv.ac.uk/~greg/projects/erc/
>
> dead link?
Works for me - this is someone who's written patches for smartctl to
set this feature; these are standardised/documented commands, no
reverse engineering of DOS tool
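I can't speak for the exact option spelling in those patches, but as
an illustration, later stock smartmontools releases drive the same
standardised SCT ERC commands like this (device path illustrative):

  # smartctl -l scterc /dev/rdsk/c5t1d0        show current read/write ERC timers
  # smartctl -l scterc,70,70 /dev/rdsk/c5t1d0  set both timers to 7.0 seconds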
As I said in another post, it's coming time to build a new storage
platform at home. I'm revisiting all the hardware options and
permutations again, for current kit.
Build 125 added something I was very eager for earlier: SATA
port-multiplier support. Since then, I've seen very little, if any,
On Sat, Jan 23, 2010 at 06:39:25PM -0500, Frank Cusack wrote:
> On January 23, 2010 5:17:16 PM -0600 Tim Cook wrote:
>> Smaller devices get you to raid-z3 because they cost less money.
>> Therefore, you can afford to buy more of them.
>
> I sure hope you aren't ever buying for my company! :) :)
>
On Sat, Jan 23, 2010 at 09:04:31AM -0800, Simon Breden wrote:
> For resilvering to be required, I presume this will occur mostly in
> the event of a mechanical failure. Soft failures like bad sectors
> will presumably not require resilvering of the whole drive to occur,
> as these types of error ar
On Sat, Jan 23, 2010 at 12:30:01PM -0800, Simon Breden wrote:
> And regarding mirror vdevs etc, I can see the usefulness of being
> able to build a mirror vdev of multiple drives for cases where you
> have really critical data -- e.g. a single 4-drive mirror vdev. I
> suppose regular backups can he
On Thu, Jan 21, 2010 at 07:33:47PM -0800, Younes wrote:
> Hello all,
>
> I have a small issue with zfs.
> I create a volume 1TB.
>
> # zfs get all tank/test01
> NAME          PROPERTY      VALUE
>
On Thu, Jan 21, 2010 at 05:52:57PM -0800, Richard Elling wrote:
> I agree with this, except for the fact that the most common installers
> (LiveCD, Nexenta, etc.) use the whole disk for rpool[1].
Er, no. You certainly get the option of "whole disk" or "make
partitions", at least with the opensola
On Thu, Jan 21, 2010 at 03:33:28PM -0800, Richard Elling wrote:
> [Richard makes a hobby of confusing Dan :-)]
Heh.
> > Lutz, is the pool autoreplace property on? If so, "god help us all"
> > is no longer quite so necessary.
>
> I think this is a different issue.
I agree. For me, it was the ma
On Thu, Jan 21, 2010 at 11:14:33PM +0100, Henrik Johansson wrote:
> I think this could scare or even make new users do terrible things,
> even if the errors could be fixed. I think I'll file a bug, agree?
Yes, very much so.
--
Dan.
On Thu, Jan 21, 2010 at 02:54:21PM -0800, Richard Elling wrote:
> + support file systems larger than 2GiB, include 32-bit UIDs and GIDs
file systems, but what about individual files within?
--
Dan.
On Fri, Jan 22, 2010 at 08:55:16AM +1100, Daniel Carosone wrote:
> For performance (rather than space) issues, I look at dedup as simply
> increasing the size of the working set, with a goal of reducing the
> amount of IO (avoided duplicate writes) in return.
I should add "and
On Thu, Jan 21, 2010 at 01:55:53PM -0800, Michelle Knight wrote:
> The error messages are in the original post. They are...
> /mirror2/applications/Microsoft/Operating Systems/Virtual PC/vm/XP-SP2/XP-SP2
> Hard Disk.vhd: File too large
> /mirror2/applications/virtualboximages/xp/xp.tar.bz2: File t
On Thu, Jan 21, 2010 at 05:04:51PM +0100, erik.ableson wrote:
> What I'm trying to get a handle on is how to estimate the memory
> overhead required for dedup on that amount of storage.
We'd all appreciate better visibility of this. This requires:
- time and observation and experience, and
-
On Thu, Jan 21, 2010 at 09:36:06AM -0800, Richard Elling wrote:
> On Jan 20, 2010, at 4:17 PM, Daniel Carosone wrote:
>
> > On Wed, Jan 20, 2010 at 03:20:20PM -0800, Richard Elling wrote:
> >> Though the ARC case, PSARC/2007/618 is "unpublished," I gather from
On Wed, Jan 20, 2010 at 10:04:34AM -0800, Willy wrote:
> To those concerned about this issue, there is a patched version of
> smartmontools that enables the querying and setting of TLER/ERC/CCTL
> values (well, except for recent desktop drives from Western
> Digitial).
[Joining together two recent
On Wed, Jan 20, 2010 at 03:20:20PM -0800, Richard Elling wrote:
> Though the ARC case, PSARC/2007/618 is "unpublished," I gather from
> googling and the source that L2ARC devices are considered auxiliary,
> in the same category as spares. If so, then it is perfectly reasonable to
> expect that it g
On Wed, Jan 20, 2010 at 12:42:35PM -0500, Wajih Ahmed wrote:
> Mike,
>
> Thank you for your quick response...
>
> Is there a way for me to test the compression from the command line to
> see if lzjb is giving me more or less than the 12.5% mark? I guess it
> will depend if there is a lzjb comm
There is a tendency to conflate "backup" and "archive", both generally
and in this thread. They have different requirements.
Backups should enable quick restore of a full operating image with all
the necessary system-level attributes. They are concerned with SLA and
uptime and outage and data loss w
On Tue, Jan 19, 2010 at 12:16:01PM +0100, Joerg Schilling wrote:
> Daniel Carosone wrote:
>
> > I also don't recommend files >1Gb in size for DVD media, due to
> > iso9660 limitations. I haven't used UDF enough to say much about any
> > limitations there.
On Mon, Jan 18, 2010 at 05:52:25PM +1300, Ian Collins wrote:
>> Is it the parent snapshot for a clone?
>>
> I'm almost certain it isn't. I haven't created any clones and none show
> in zpool history.
What about snapshot holds? I don't know if (and doubt whether) these
are in S10, but since
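If holds do exist there, a sketch of checking and clearing them
(dataset and tag names illustrative):

  # zfs holds -r tank/fs@snap1
  # zfs release keep tank/fs@snap1
  # zfs destroy tank/fs@snap1

A held snapshot fails destroy with a "dataset is busy" style error,
which would match the symptom.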
On Mon, Jan 18, 2010 at 03:25:56PM -0800, Erik Trimble wrote:
> Hopefully, once BP rewrite materializes (I know, I'm treating this
> much to much as a Holy Grail, here to save us from all the ZFS
> limitations, but really...), we can implement defragmentation which
> will seriously reduce the amou
On Mon, Jan 18, 2010 at 01:38:16PM -0800, Richard Elling wrote:
> The Solaris 10 10/09 zfs(1m) man page says:
>
> The format of the stream is committed. You will be able
> to receive your streams on future versions of ZFS.
>
> I'm not sure when that hit snv, but obviously it wa
On Mon, Jan 18, 2010 at 07:34:51PM +0100, Lassi Tuura wrote:
> > Consider then, using a zpool-in-a-file as the file format, rather than
> > zfs send streams.
>
> This is an interesting suggestion :-)
>
> Did I understand you correctly that once a slice is written, zfs
> won't rewrite it? In other
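For reference, the zpool-in-a-file idea looks roughly like this (names
and sizes illustrative):

  # mkfile 64g /backup/tank-archive.pool
  # zpool create tankarch /backup/tank-archive.pool
  # zfs send -R tank@archive | zfs recv -d tankarch
  # zpool export tankarch

The resulting file can be copied around much like a send stream, but
at the destination it can be imported, scrubbed and read selectively.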
On Mon, Jan 18, 2010 at 03:24:19AM -0500, Edward Ned Harvey wrote:
> Unless I am mistaken, I believe, the following is not possible:
>
> On the source, create snapshot "1"
> Send snapshot "1" to destination
> On the source, create snapshot "2"
> Send incremental, from "1" to "2" to the destination
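For reference, that sequence is spelled like this (dataset and
snapshot names illustrative):

  # zfs snapshot tank/data@1
  # zfs send tank/data@1 | ssh dest zfs recv -d backup
  # zfs snapshot tank/data@2
  # zfs send -i tank/data@1 tank/data@2 | ssh dest zfs recv -d backup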
On Sun, Jan 17, 2010 at 06:21:45PM +1300, Ian Collins wrote:
> I have a Solaris 10 update 6 system with a snapshot I can't remove.
>
> zfs destroy -f reports the device as being busy. fuser doesn't
> show any process using the filesystem and it isn't shared.
Is it the parent snapshot for a c
On Sun, Jan 17, 2010 at 04:38:03PM -0600, Bob Friesenhahn wrote:
> On Mon, 18 Jan 2010, Daniel Carosone wrote:
>>
>> .. as long as you scrub both the original pool and the backup pool
>> with the same regularity. sending the full backup from the source is
>> basically
On Sun, Jan 17, 2010 at 05:31:39AM -0500, Edward Ned Harvey wrote:
> Instead, it is far preferable to "zfs send | zfs receive" ... That is,
> receive the data stream on external media as soon as you send it.
Agree 100% - but..
.. it's hard to beat the convenience of a "backup file" format, for
On Sun, Jan 17, 2010 at 08:05:27AM -0800, Richard Elling wrote:
> > Personally, I like to start with a fresh "full" image once a month, and
> > then do daily incrementals for the rest of the month.
>
> This doesn't buy you anything.
.. as long as you scrub both the original pool and the backup
On Fri, Jan 15, 2010 at 10:37:15AM -0500, Charles Menser wrote:
> Perhaps an ISCSI mirror for a laptop? Online it when you are back
> "home" to keep your backup current.
I do exactly this, but:
- It's not the only thing I do for backup.
- The iscsi initiator is currently being a major PITA for
On Wed, Jan 13, 2010 at 08:21:13AM -0600, Gary Mills wrote:
> Yes, I understand that, but do filesystems have separate queues of any
> sort within the ZIL?
I'm not sure. If you can experiment and measure a benefit,
understanding the reasons is helpful but secondary. If you can't
experiment so eas
On Tue, Jan 12, 2010 at 01:26:15PM -0700, Cindy Swearingen wrote:
> I see now how you might have created this config.
>
> I tried to reproduce this issue by creating a separate pool on another
> disk and a volume to attach to my root pool, but my system panics when
> I try to attach the volume to t
On Mon, Jan 11, 2010 at 10:10:37PM -0800, Lutz Schumann wrote:
> p.s. While writing this I'm thinking if a-card handles this case well ? ...
> maybe not.
apart from the fact that they seem to be hard to source, this is a big
question about this interesting device for me too. I hope so, since
it
> [google server with batteries]
These are cool, and a clever rethink of the typical data centre power
supply paradigm. They keep the server running, until either a
generator is started or a graceful shutdown can be done.
Just to be clear, I'm talking about something much smaller, that
provides
On Mon, Jan 11, 2010 at 06:03:40PM -0800, Richard Elling wrote:
> IMHO, a split mirror is not as good as a decent backup :-)
I know.. that was more by way of introduction and background. It's
not the only method of backup, but since this disk does get plugged
into the netbook frequently enough it
On Tue, Jan 12, 2010 at 02:38:56PM +1300, Ian Collins wrote:
> How did you set the subdevice in the off line state?
# zpool offline rpool /dev/zvol/dsk/
sorry if that wasn't clear.
> Did you detach the device from the mirror?
No, because then:
- it will have to resilver fully on next atta
I should have mentioned:
- opensolaris b130
- of course I could use partitions on the usb disk, but that's so much less
flexible.
--
Dan.
I have a netbook with a small internal ssd as rpool. I have an
external usb HDD with much larger storage, as a separate pool, which
is sometimes attached to the netbook.
I created a zvol on the external pool, the same size as the internal
ssd, and attached it as a mirror to rpool for backup. I d
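A sketch of that setup, with illustrative sizes and device names:

  # zfs create -V 16g external/rpool-backup
  # zpool attach rpool c0d0s0 /dev/zvol/dsk/external/rpool-backup

When the USB disk is unplugged, the zvol half of the mirror is simply
offlined (see below) rather than detached.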
With all the recent discussion of SSD's that lack suitable
power-failure cache protection, surely there's an opportunity for a
separate modular solution?
I know there used to be (years and years ago) small internal UPS's
that fit in a few 5.25" drive bays. They were designed to power the
motherbo
On Sun, Jan 10, 2010 at 09:54:56AM -0600, Bob Friesenhahn wrote:
> WTF?
urandom is a character device and is returning short reads (note the
0+n vs n+0 counts). dd is not padding these out to the full blocksize
(conv=sync) or making multiple reads to fill blocks (conv=fullblock).
Evidently the ur
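To illustrate (sizes illustrative; exact counts will vary):

  $ dd if=/dev/urandom of=/tmp/rand bs=1024k count=16
  0+16 records in
  0+16 records out

The 0 full-block count is the tell-tale. With GNU dd, iflag=fullblock
re-reads until each block is filled; conv=sync instead pads each short
read out with zeros, which is not what you want from urandom.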
Yet another way to thin-out the backing devices for a zpool on a
thin-provisioned storage host, today: resilver.
If your zpool has some redundancy across the SAN backing LUNs, simply
drop and replace one at a time and allow zfs to resilver only the
blocks currently in use onto the replacement LUN
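i.e., for each redundant backing LUN in turn (names illustrative):

  # zpool replace tank OLD-LUN NEW-LUN
  (wait for the resilver to complete: zpool status tank)

The resilver writes only the allocated blocks onto NEW-LUN, so the
thin-provisioned storage behind it never sees the freed space that had
accumulated on OLD-LUN.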
On Thu, Dec 24, 2009 at 12:07:03AM +0100, Jeroen Roodhart wrote:
> We are under the impression that a setup that server NFS over UFS has
> the same assurance level than a setup using "ZFS without ZIL". Is this
> impression false?
Completely. It's closer to "UFS mount -o async", without the risk o
On Mon, Dec 21, 2009 at 02:44:00PM -0800, Darren J Moffat wrote:
> The IV generation when doing deduplication
> is done by calculating an HMAC of the plaintext using a separate per
> dataset key (that is also refreshed if 'zfs key -K' is run to rekey the
> dataset).
> [..]
> In the case where
Your parenthetical comments here raise some concerns, or at least eyebrows,
with me. Hopefully you can lower them again.
> compress, encrypt, checksum, dedup.
> (and you need to use zdb to get enough info to see the
> leak - and that means you have access to the raw devices)
An attacker with
None of these look like the issue either. With 128, I did have to edit the
code to avoid the month rollover error, and add the missing dependency
dbus-python26.
I think I have a new install that went to 129 without having auto snapshots
enabled yet. When I can get to that machine later, I wil
> There was an announcement made in November about auto
> snapshots being made obsolete in build 128
That thread (which I know well) talks about the replacement of the
*implementation*, while retaining (the majority of) the behaviour and
configuration interface. The old implementation had
I can't (yet!) say I've seen the same, with respect to disappearing snapshots.
However, I can confirm that I am seeing the same thing, with respect to
snapshots without the "frequent" prefix..
$ zfs list -t snapshot | fgrep :-
rp...@zfs-auto-snap:-2009-12-14-13:15
>>> Doesn't the "mismatched replication" message help?
>>
>> Not if you're trying to make a single disk pool redundant by adding ..
>> er, attaching .. a mirror; then there won't be such a warning, however
>> effective that warning might or might not be otherwise.
>
> Not a problem because you ca
> but if you attempt to "add" a disk to a redundant
> config, you'll see an error message similar [..]
>
> Doesn't the "mismatched replication" message help?
Not if you're trying to make a single disk pool redundant by adding .. er,
attaching .. a mirror; then there won't be such a warning, howe
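The distinction, for a pool currently consisting of the single disk
c0t0d0 (device names illustrative):

  # zpool attach tank c0t0d0 c1t0d0   makes a mirror - what was wanted
  # zpool add tank c1t0d0             adds a second non-redundant top-level
                                      vdev - the hard-to-undo mistake

and it's the second form that produces no mismatched-replication
warning here, because a single-disk pool has no redundancy to mismatch.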
> Jokes aside, this is too easy to make a mistake with
> the consequences that are
> too hard to correct. Anyone disagrees?
No, and this sums up the situation nicely, in that there are two parallel paths
toward a resolution:
- make the mistake harder to make (various ideas here)
- make the co
> > Isn't this only true if the file sizes are such that the concatenated
> > blocks are perfectly aligned on the same zfs block boundaries they used
> > before? This seems unlikely to me.
>
> Yes that would be the case.
While eagerly awaiting b128 to appear in IPS, I have been giving this iss
I haven't used it myself, but you could look at the EON software NAS appliance:
http://eonstorage.blogspot.com/
> Speaking practically, do you evaluate your chipset
> and disks for hotplug support before you buy?
Yes, if someone else has shared their test results previously.
> So we also need a "txg dirty" or similar
> property to be exposed from the kernel.
Or not..
if you find this condition, defer, but check again in a minute (really, after a
full txg_interval has passed) rather than on the next scheduled snapshot.
on that next check, if the txg has advanced aga
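A sketch of that logic, in pseudo-shell (current_txg and take_snapshot
are hypothetical helpers, not existing commands):

  last=$(current_txg tank)
  sleep $TXG_INTERVAL                  # defer, rather than skip outright
  if [ $(current_txg tank) -gt $last ]; then
      take_snapshot tank               # something was written after all
  fi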
> you missed my point: you can't compare the current
> txg to an old cr_txg directly, since the current
> txg value will be at least 1 higher, even if
> no changes have been made.
OIC. So we also need a "txg dirty" or similar property to be exposed from the
kernel.
>> [verify on real hardware and share results]
> Agree 110%.
Good :)
> > Yanking disk controller and/or power cables is an
> > easy and obvious test.
> The problem is that yanking a disk tests the failure
> mode of yanking a disk.
Yes, but the point is that it's a cheap and easy test, so you mi
It seems b128 will be re-spun for IPS, and was canceled only for SXCE.
Those are great, but they're about testing the zfs software. There's a small
amount of overlap, in that these injections include trying to simulate the
hoped-for system response (e.g, EIO) to various physical scenarios, so it's
worth looking at for scenario suggestions.
However, for most of us
> you can fetch the "cr_txg" (cr for creation) for a
> snapshot using zdb,
yes, but this is hardly an appropriate interface. zdb is also likely to cause
disk activity because it looks at many things other than the specific item in
question.
> but the very creation of a snapshot requires a new
>
> Daniel Carosone writes:
>
> > Would there be a way to avoid taking snapshots if
> > they're going to be zero-sized?
>
> I don't think it is easy to do, the txg counter is on
> a pool level,
> [..]
> it would help when the entire pool is idle, t
I welcome the re-write. The deficiencies of the current snapshot cleanup
implementation have been a source of constant background irritation to me for a
while, and the subject of a few bugs.
Regarding the issues in contention
- the send hooks capability is useful and should remain, but the
i
Furthermore, this clarity needs to be posted somewhere much, much more visible
than buried in some discussion thread.
> How about if you don't 'detach' them? Just unplug
> the backup device in the pair, plug in the
> temporary replacement, and tell zfs to
> replace the device.
Hm. I had tried a variant: a three-way mirror, with one device missing most of
the time. The annoyance of that was that the pool c
> You can validate a stream stored as a file at any
> time using the "zfs receive -n" option.
Interesting. Maybe it's just a documentation issue, but the man page doesn't
make it clear that this command verifies much more than the names in the
stream, and suggests that the rest of the data cou
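i.e., something like (paths illustrative):

  # zfs receive -n -v tank/restore-test < /backup/tank-full.zfs

-n parses the stream without updating the pool; how much of the stream
payload that actually verifies, beyond the names, is exactly the open
question here.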
> On Sun, 23 Aug 2009, Daniel Carosone wrote:
> > Userland tools to read and verify a stream, without having to play
> > it into a pool (seek and io overhead) could really help here.
>
> This assumes that the problem is data corruption of
> the stream, which
> Save the data to a file stored in zfs. Then you are
> covered. :-)
Only if the stream was also separately covered in transit.
While you want in-transit protection regardless, "zfs recv"ing the stream into
a pool validates that it was not damaged in transit, as well as giving you
at-rest prot
> I have a gzip-9 compressed filesystem that I want to
> backup to a remote system and would prefer
> not to have to recompress everything
> again at such great computation expense.
This would be nice, and a similar desire applies to upcoming streams after
zfs-crypto lands.
However, the presen
> Thankyou! Am I right in thinking that rpool
> snapshots will include things like swap? If so, is
> there some way to exclude them?
Hi Carl :)
You can't exclude them from the send -R with something like --exclude, but you
can make sure there are no such snapshots (which aren't useful anyway)
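e.g., before the send -R, something like (snapshot names illustrative):

  # zfs destroy rpool/swap@backup-snap
  # zfs destroy rpool/dump@backup-snap

and, if the snapshots come from the auto-snapshot service, setting
com.sun:auto-snapshot=false on rpool/swap and rpool/dump keeps it from
creating them in the first place.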
> Sorry, don't have a thread reference
> to hand just now.
http://www.opensolaris.org/jive/thread.jspa?threadID=100296
Note that there's little empirical evidence that this is directly applicable to
the kinds of errors (single bit, or otherwise) that a single failing disk
medium would produce.
There was a discussion in zfs-code around error-correcting (rather than just
-detecting) properties of the checksums currently kept, and of potential
additional checksum methods with stronger properties.
It came out of another discussion about fletcher2 being both weaker than
desired, and flawed
> Other details - the original ZFS was created at ZFS
> version 14 on SNV b105, trying to be restored to ZFS
> version 15 on SNV b114. Any help would be
> appreciated.
The zfs send/recv format is not warranted to be compatible between revisions.
I don't know, offhand, if that is the problem in
> Not a ZFS bug. [SMI vs EFI labels vs BIOS booting]
and so also only a problem for disks that are members of the root pool.
ie, I can have >1Tb disks as part of a non-bootable data pool, with EFI labels,
on a 32-bit machine?