Damon,
Yes, we can provide simple concat inside the array (even though today we
provide RAID5 or RAID1 as our standard, using Veritas with concat); the
question is more whether it's worth switching the redundancy from the array to
the ZFS layer.
The RAID5/1 features of the high-end EMC ar
Hi Bob,
On Fri, 13 Feb 2009 19:58:51 -0600 (CST)
Bob Friesenhahn wrote:
> On Fri, 13 Feb 2009, Tim wrote:
>
> > I don't think it hurts in the least to throw out some ideas. If
> > they aren't valid, it's not hard to ignore them and move on. It
> > surely isn't a waste of anyone's time to s
On February 13, 2009 7:58:51 PM -0600 Bob Friesenhahn
wrote:
With this level of overhead, I am surprised that there is any remaining
development motion on ZFS at all.
come on now. with all due respect, you are attempting to stifle
relevant discussion and that is, well, bordering on ridiculous.
On Fri, 13 Feb 2009, Tim wrote:
I don't think it hurts in the least to throw out some ideas. If
they aren't valid, it's not hard to ignore them and move on. It
surely isn't a waste of anyone's time to spend 5 minutes reading a
response and weighing if the idea is valid or not.
Today I sat
Hi,
When I read the ZFS manual, it usually recommends configuring redundancy at
the ZFS layer, mainly because some features (such as repairing corrupted data)
only work with a redundant configuration, and it implies that the overall
robustness will improve.
My question is simple,
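To make the two options concrete, here is a minimal sketch, assuming a pool named tank and hypothetical LUN names:
# Redundancy at the ZFS layer: a mirror of two array LUNs lets ZFS both
# detect and repair corrupted blocks using the other copy.
zpool create tank mirror c2t0d0 c2t1d0
# Redundancy only in the array: ZFS sees a single LUN, so it can still
# detect corruption via checksums but cannot repair it (unless the
# copies property is raised on a dataset).
zpool create tank c2t0d0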
Tim wrote:
On Fri, Feb 13, 2009 at 4:21 PM, Bob Friesenhahn
<bfrie...@simple.dallas.tx.us>
wrote:
On Fri, 13 Feb 2009, Ross Smith wrote:
However, I've just had another idea. Since the uberblocks are pretty
vital in recovering a pool, and I believe it's a
On Fri, Feb 13, 2009 at 4:21 PM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:
> On Fri, 13 Feb 2009, Ross Smith wrote:
>
> However, I've just had another idea. Since the uberblocks are pretty
>> vital in recovering a pool, and I believe it's a fair bit of work to
>> search the disk to
On Fri, 13 Feb 2009, Ross Smith wrote:
However, I've just had another idea. Since the uberblocks are pretty
vital in recovering a pool, and I believe it's a fair bit of work to
search the disk to find them, might it be a good idea to allow ZFS to
store uberblock locations elsewhere for recover
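As a rough illustration of where those uberblocks can already be inspected, zdb exposes them; the pool and device names below are assumptions:
# Print the currently active uberblock of a pool:
zdb -u tank
# Dump the on-disk vdev labels; the uberblock rings are kept alongside
# these labels at the start and end of each device:
zdb -l /dev/dsk/c2t0d0s0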
Richard Elling wrote:
Greg Palmer wrote:
Miles Nordin wrote:
gm> That implies that ZFS will have to detect removable devices
gm> and treat them differently than fixed devices.
please, no more of this garbage, no more hidden unchangeable automatic
condescending behavior. The whole format vs rmf
You don't, but that's why I was wondering about time limits. You have
to have a cutoff somewhere, but if you're checking the last few
minutes of uberblocks that really should cope with a lot. It seems
like a simple enough thing to implement, and if a pool still gets
corrupted with these checks i
On January 30, 2009 1:09:49 PM -0500 Mark J Musante
wrote:
On Fri, 30 Jan 2009, Frank Cusack wrote:
so, is there a way to tell zfs not to perform the mounts for data2? or
another way i can replicate the pool on the same host, without exporting
the original pool?
There is not a way to do that
> How does mounting the card work? Can one reverse the
> slot cover and screw it in like that, or is the card hanging free?
unfortunately, the cover does not fit in the case, so I fixed it with a dab of
hot glue, the same as I used to fix the Intel gig-e PCIe card (which is a
low-profile version)
On Fri, Feb 13, 2009 at 02:00:28PM -0600, Nicolas Williams wrote:
> Ordering matters for atomic operations, and filesystems are full of
> those.
Also, note that ignoring barriers is effectively as bad as dropping
writes if there's any chance that some writes will never hit the disk
because of, say
This shouldn't be taking anywhere *near* half an hour. The snapshots
differ trivially, by one or two files and less than 10k of data (they're
test results from working on my backup script). But so far, it's still
sitting there after more than half an hour.
local...@fsfs:~/src/bup2# zfs destroy r
On Fri, 13 Feb 2009, Ross Smith wrote:
Thinking about this a bit more, you've given me an idea: Would it be
worth ZFS occasionally reading previous uberblocks from the pool, just
to check they are there and working ok?
That sounds like a good idea. However, how do you know for sure that
the
Hello,
I formatted an unallocated partition using Gparted, and now my table looks like this:
sh-3.2# format -e
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c9d0
/p...@0,0/pci-...@1f,2/i...@0/c...@0,0
Specify disk (enter its number): 0
selecting c9d0
NO Alt slice
No defect list fou
On Fri, 13 Feb 2009, Ross Smith wrote:
Also, that's a pretty extreme situation, since you'd need a device that
is being written to but not read from to fail in this exact way. It
also requires that no scrub has been run, so the problem has remained
undetected.
On systems with a lot of RAM, 10
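A periodic scrub is the usual way to catch exactly that case; a quick sketch, assuming a pool named tank:
# Read and verify every allocated block, so latent corruption on data
# that is written but rarely read gets detected (and repaired if the
# pool has redundancy):
zpool scrub tank
# Check scrub progress and any errors found:
zpool status -v tank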
On Fri, Feb 13, 2009 at 8:24 PM, Bob Friesenhahn
wrote:
> On Fri, 13 Feb 2009, Ross Smith wrote:
>>
>> You have to consider that even with improperly working hardware, ZFS
>> has been checksumming data, so if that hardware has been working for
>> any length of time, you *know* that the data on it
Greg Palmer wrote:
Miles Nordin wrote:
gm> That implies that ZFS will have to detect removable devices
gm> and treat them differently than fixed devices.
please, no more of this garbage, no more hidden unchangeable automatic
condescending behavior. The whole format vs rmformat mess is just
ridi
On Fri, 13 Feb 2009, Ross Smith wrote:
You have to consider that even with improperly working hardware, ZFS
has been checksumming data, so if that hardware has been working for
any length of time, you *know* that the data on it is good.
You only know this if the data has previously been read.
Bob Friesenhahn wrote:
> On Fri, 13 Feb 2009, Ross wrote:
>>
>> Something like that will have people praising ZFS' ability to
>> safeguard their data, and the way it recovers even after system
>> crashes or when hardware has gone wrong. You could even have a
>> "common causes of this are..." mes
On Fri, Feb 13, 2009 at 7:41 PM, Bob Friesenhahn
wrote:
> On Fri, 13 Feb 2009, Ross wrote:
>>
>> Something like that will have people praising ZFS' ability to safeguard
>> their data, and the way it recovers even after system crashes or when
>> hardware has gone wrong. You could even have a "comm
On Fri, Feb 13, 2009 at 10:29:05AM -0800, Frank Cusack wrote:
> On February 13, 2009 1:10:55 PM -0500 Miles Nordin wrote:
> >>"fc" == Frank Cusack writes:
> >
> >fc> If you're misordering writes
> >fc> isn't that a completely different problem?
> >
> >no. ignoring the flush cache com
On Fri, Feb 13, 2009 at 04:51, Nicola Fankhauser
wrote:
> hi
>
> I have an AOC-USAS-L8i working in both a Gigabyte GA-P35-DS3P and Gigabyte
> GA-EG45M-DS2H under OpenSolaris build 104+ (Nexenta Core 2.0 beta).
Very cool! It's good to see people having success with this card.
How does mounting th
On Fri, 13 Feb 2009, Ross wrote:
Something like that will have people praising ZFS' ability to
safeguard their data, and the way it recovers even after system
crashes or when hardware has gone wrong. You could even have a
"common causes of this are..." message, or a link to an online help
a
Superb news, thanks Jeff.
Having that will really raise ZFS up a notch, and align it much better with
people's expectations. I assume it'll work via zpool import, and let the user
know what's gone wrong?
If you think back to this case, imagine how different the user's response would
have been i
> "fc" == Frank Cusack writes:
fc> why would dropping a flush cache imply dropping every write
fc> after the flush cache?
it wouldn't and probably never does. It was an imaginary scenario
invented to argue with you and to agree with the guy in the USB bug
who said ``dropping a cache
On February 13, 2009 10:29:05 AM -0800 Frank Cusack
wrote:
On February 13, 2009 1:10:55 PM -0500 Miles Nordin wrote:
"fc" == Frank Cusack writes:
fc> If you're misordering writes
fc> isn't that a completely different problem?
no. ignoring the flush cache command causes writes to b
On February 13, 2009 1:10:55 PM -0500 Miles Nordin wrote:
"fc" == Frank Cusack writes:
fc> If you're misordering writes
fc> isn't that a completely different problem?
no. ignoring the flush cache command causes writes to be misordered.
oh. can you supply a reference or if you hav
> "fc" == Frank Cusack writes:
fc> If you're misordering writes
fc> isn't that a completely different problem?
no. ignoring the flush cache command causes writes to be misordered.
fc> Even then, I don't see how it's worse than DROPPING a write.
fc> The data eventually get
On Fri, 13 Feb 2009 17:53:00 +0100, Eric D. Mudama
wrote:
On Fri, Feb 13 at 9:14, Neil Perrin wrote:
Having a separate intent log on good hardware will not prevent
corruption
on a pool with bad hardware. By "good" I mean hardware that correctly
flushes its write cache when requested.
C
On February 13, 2009 12:41:12 PM -0500 Miles Nordin wrote:
"fc" == Frank Cusack writes:
fc> if you have 100TB of data, wouldn't you have a completely
fc> redundant storage network
If you work for a ponderous leaf-eating brontosaurus, maybe. If your
company is modern I think having s
On February 13, 2009 12:10:08 PM -0500 Miles Nordin wrote:
please, no more of this garbage, no more hidden unchangeable automatic
condescending behavior. The whole format vs rmformat mess is just
ridiculous.
thank you.
On February 13, 2009 12:20:21 PM -0500 Miles Nordin wrote:
"fc" == Frank Cusack writes:
>> Dropping a flush-cache command is just as bad as dropping a
>> write.
fc> Not that it matters, but it seems obvious that this is wrong
fc> or anyway an exaggeration. Dropping a flush-c
Miles Nordin wrote:
gm> That implies that ZFS will have to detect removable devices
gm> and treat them differently than fixed devices.
please, no more of this garbage, no more hidden unchangeable automatic
condescending behavior. The whole format vs rmformat mess is just
ridiculous. An
> "fc" == Frank Cusack writes:
fc> if you have 100TB of data, wouldn't you have a completely
fc> redundant storage network
If you work for a ponderous leaf-eating brontosaurus, maybe. If your
company is modern I think having such an oddly large amount of data in
one pool means you'd
> "t" == Tim writes:
t> I would like to believe it has more to do with Solaris's
t> support of USB than ZFS, but the fact remains it's a pretty
t> glaring deficiency in 2009, no matter which part of the stack
t> is at fault.
maybe, but for this job I don't much mind glar
> "fc" == Frank Cusack writes:
>> Dropping a flush-cache command is just as bad as dropping a
>> write.
fc> Not that it matters, but it seems obvious that this is wrong
fc> or anyway an exaggeration. Dropping a flush-cache just means
fc> that you have to wait until the d
> "gm" == Gary Mills writes:
gm> That implies that ZFS will have to detect removable devices
gm> and treat them differently than fixed devices.
please, no more of this garbage, no more hidden unchangeable automatic
condescending behavior. The whole format vs rmformat mess is just
ri
On Thu, Feb 12 at 19:43, Toby Thain wrote:
^^ Spec compliance is what we're testing for... We wouldn't know if this
special variant is working correctly either. :)
Time the difference between NCQ reads with and without FUA in the
presence of overlapped cached write data. That should have a
sig
On Fri, Feb 13 at 9:14, Neil Perrin wrote:
Having a separate intent log on good hardware will not prevent corruption
on a pool with bad hardware. By "good" I mean hardware that correctly
flushes its write cache when requested.
Can someone please name a specific piece of bad hardware?
--eric
Having a separate intent log on good hardware will not prevent corruption
on a pool with bad hardware. By "good" I mean hardware that correctly
flushes its write cache when requested.
Note, a pool is always consistent (again when using good hardware).
The function of the intent log is not to pro
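For reference, attaching a separate intent log looks like the sketch below, with a hypothetical device name; as noted above, it moves synchronous-write logging onto better hardware but does not make a pool on bad hardware consistent:
# Add a dedicated log (slog) device to an existing pool:
zpool add tank log c3t0d0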
On Fri, 13 Feb 2009, Tony Marshall wrote:
How would I obtain the current setting for the vdev_cache from a
production system? We are looking at trying to tune ZFS for better
performance with respect to Oracle databases; however, before we start
changing settings via the /etc/system file we wou
Hi All,
How would I obtain the current setting for the vdev_cache from a
production system? We are looking at trying to tune ZFS for better
performance with respect to Oracle databases; however, before we start
changing settings via the /etc/system file we would like to confirm the
setting from th
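One way to read those values from a live system is with mdb; the tunable names below come from the OpenSolaris-era ZFS tuning notes and should be verified against your build before relying on them:
# Query the current vdev cache tunables from the running kernel:
echo "zfs_vdev_cache_size/D" | mdb -k
echo "zfs_vdev_cache_max/D" | mdb -k
echo "zfs_vdev_cache_bshift/D" | mdb -k
The matching /etc/system entry (takes effect after a reboot) would be, for example:
set zfs:zfs_vdev_cache_max = 16384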
On 2/13/2009 5:58 AM, Ross wrote:
huh? but that loses the convenience of USB.
I've used USB drives without problems at all, just remember to "zpool export"
them before you unplug.
I think there is a subcommand of cfgadm you should run to notify
Solaris that you intend to unplug the
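Something along these lines, with a hypothetical pool name and USB attachment point:
# Flush everything and mark the pool cleanly exported before pulling the cable:
zpool export usbpool
# List attachment points, then unconfigure the one for the USB drive:
cfgadm -l
cfgadm -c unconfigure usb1/3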
Hello,
thanks for the answer.
The partition table shows that Wind and OS run on:
1. c9d0
/p...@0,0/pci-...@1f,2/i...@0/c...@0,0
Partition   Status    Type          Start   End   Length    %
=========   ======    ============  =====   ===   ======    ===
1
While mobility could be lost, USB storage still has the advantage of being
cheap and easy to install compared to installing internal disks in a PC, so if I
just want to use it to provide ZFS storage space for a home file server, can a
small intent log located on an internal SATA disk prevent the pool co
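A sketch of that layout, with hypothetical device names; note that, per the replies above, a separate log helps synchronous-write latency but does not protect the pool if the USB device itself drops cache flushes:
# USB disk as the data vdev, a small slice of an internal SATA disk as
# the separate intent log:
zpool create usbpool c5t0d0 log c1t0d0s7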
>
> I have seen this 'phantom dataset' with a pool on nv93. I created a
> zpool, created a dataset, then destroyed the zpool. When creating a new
> zpool on the same partitions/disks as the destroyed zpool, upon export I
> receive the same message as you describe above, even though I never
> c
huh? but that loses the convenience of USB.
I've used USB drives without problems at all, just remember to "zpool export"
them before you unplug.
I am wondering: if the USB storage device is not reliable for ZFS usage, can the
situation be improved if I put the intent log on an internal SATA disk, to avoid
corruption and still get the convenience of USB storage
at the same time?
I'm moving some data off an old machine to something reasonably new.
Normally, the new machine performs better, but I have one case just now
where the new system is terribly slow.
Old machine - V880 (Solaris 8) with SVM raid-5:
# ptime du -kds foo
15043722    foo
real        6.955
user
hi
I have an AOC-USAS-L8i working in both a Gigabyte GA-P35-DS3P and Gigabyte
GA-EG45M-DS2H under OpenSolaris build 104+ (Nexenta Core 2.0 beta).
the controller looks like this in lspci:
01:00.0 SCSI storage controller: LSI Logic / Symbios Logic SAS1068E PCI-Express
Fusion-MPT SAS (rev 08)
Subs
On Mon, Feb 2, 2009 at 6:55 AM, Robert Milkowski wrote:
> It definitely does. I ran some tests today comparing b101 with b105 while
> doing 'zfs send -R -I A B >/dev/null' with several dozen snapshots between A
> and B. Well, b105 is almost 5x faster in my case - that's pretty good.
>
> --
> Ro