Yes, if you value your data you should change from USB drives to normal drives.
I've heard that USB does some strange things; a normal connection such as SATA is
more reliable.
Hello,
I'm now thinking there is some _real_ bug in the way ZFS handles file systems
created with the pool itself (i.e. the tank filesystem when the zpool is tank, usually
mounted as /tank).
My own experience shows that ZFS is unable to send/receive recursively
(snapshots, child fs) properly when the desti
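(The kind of incantation I mean, roughly; the pool, snapshot and destination names
here are just placeholders:

  zfs snapshot -r tank@backup1
  zfs send -R tank@backup1 | zfs receive -F -d backuppool

-R on the send side is supposed to pick up the child filesystems and their snapshots,
and -d on the receive side recreates the source layout under the destination.)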
I just have to say this, and I don't mean it in a bad way... If you
really care about your data, why then use USB drives with loose cables and
(apparently) no backup?
USB-connected drives for data backup are okay, and for playing around and
getting to know ZFS they also seem okay. Using it for onlin
I had a very similar problem. 8 external USB drives running OpenSolaris
native. When I moved the machine into a different room and powered it back
up (there were a couple of reboots and a couple of broken usb cables and
drive shutdowns in between), I got the same error. Losing that much data
is d
On 4-Aug-09, at 9:28 AM, Roch Bourbonnais wrote:
On 26-Jul-09, at 01:34, Toby Thain wrote:
On 25-Jul-09, at 3:32 PM, Frank Middleton wrote:
On 07/25/09 02:50 PM, David Magda wrote:
Yes, it can be affected. If the snapshot's data structure / record is
underneath the corrupted data in
On 26-Jul-09, at 01:34, Toby Thain wrote:
On 25-Jul-09, at 3:32 PM, Frank Middleton wrote:
On 07/25/09 02:50 PM, David Magda wrote:
Yes, it can be affected. If the snapshot's data structure / record is
underneath the corrupted data in the tree then it won't be able to be
reached.
On 19-Jul-09, at 16:47, Bob Friesenhahn wrote:
On Sun, 19 Jul 2009, Ross wrote:
The success of any ZFS implementation is *very* dependent on the
hardware you choose to run it on.
To clarify:
"The success of any filesystem implementation is *very* dependent on
the hardware you choose
On 03.08.09 03:44, Stephen Pflaum wrote:
George,
I have a pool with family photos on it which needs recovery. Is there a livecd with a tool to invalidate the uberblock which will boot on a macbookpro?
This has been recovered by rolling two txgs back. The pool is being scrubbed now.
More details
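(Scrub progress and any errors it turns up can be watched with something like

  zpool status -v tank

where tank stands in for the pool's actual name.)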
George,
I have a pool with family photos on it which needs recovery. Is there a livecd
with a tool to invalidate the uberblock which will boot on a macbookpro?
Steve
Have you considered this?
*Maybe* a little time travel to an old uberblock could help you?
http://www.opensolaris.org/jive/thread.jspa?threadID=85794
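(As a rough sketch of what that "time travel" involves, assuming a pool named tank:
each device label keeps an array of recent uberblocks, and the active one can at
least be inspected with

  zdb -u tank

Actually rolling the pool back to an older uberblock is the hard part; see the
linked thread.)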
> On Fri, 31 Jul 2009, Brian wrote:
>
> > I must say this thread has also damaged the view I have of ZFS.
> > I've been considering just getting a Raid 5 controller and going the
> > linux route I had planned on.
>
> Thankfully, the zfs users who have never lost a pool do not spend much
>
Dave Stubbs wrote:
I don't mean to be offensive Russel, but if you do
ever return to ZFS, please promise me that you will
never, ever, EVER run it virtualized on top of NTFS
(a.k.a. worst file system ever) in a production
environment. Microsoft Windows is a horribly
unreliable operating system
> wow, talk about a knee jerk reaction...
Not at all. A long thread is started where the user lost his pool, and
discussion shows it's a known problem. I love ZFS and I'm still very nervous
about the risk of losing an entire pool.
> As has been described many times over the past few
> years
+--
| On 2009-07-31 17:00:54, Jason A. Hoffman wrote:
|
| I have thousands and thousands and thousands of zpools. I started
| collecting such zpools back in 2005. None have been lost.
I don't have thousands and thousand
On 31-Jul-09, at 20:00 , Jason A. Hoffman wrote:
I have thousands and thousands and thousands of zpools. I started
collecting such zpools back in 2005. None have been lost.
Best regards, Jason
Jason A. Hoffman, PhD | Founder, CTO,
Brian wrote:
I must say this thread has also damaged the view I have of ZFS. I've been
considering just getting a Raid 5 controller and going the linux route I had
planned on.
That'll be your loss. I've never managed to lose a pool, and I've all
sorts of unreliable media and all sorts of na
On 31-Jul-09, at 7:15 PM, Richard Elling wrote:
wow, talk about a knee jerk reaction...
On Jul 31, 2009, at 3:23 PM, Dave Stubbs wrote:
I don't mean to be offensive Russel, but if you do
ever return to ZFS, please promise me that you will
never, ever, EVER run it virtualized on top of NTFS
(
On Jul 31, 2009, at 20:00, Jason A. Hoffman wrote:
On Jul 31, 2009, at 4:54 PM, Bob Friesenhahn wrote:
On Fri, 31 Jul 2009, Brian wrote:
I must say this thread has also damaged the view I have of ZFS.
I've been considering just getting a Raid 5 controller and going
the linux route I had pl
On Jul 31, 2009, at 4:54 PM, Bob Friesenhahn wrote:
On Fri, 31 Jul 2009, Brian wrote:
I must say this thread has also damaged the view I have of ZFS. I've
been considering just getting a Raid 5 controller and going the
linux route I had planned on.
Thankfully, the zfs users who have never
On Fri, 31 Jul 2009, Brian wrote:
I must say this thread has also damaged the view I have of ZFS.
I've been considering just getting a Raid 5 controller and going the
linux route I had planned on.
Thankfully, the zfs users who have never lost a pool do not spend much
time posting about their
On Jul 31, 2009, at 19:26, Brian wrote:
I must say this thread has also damaged the view I have of ZFS. I've
been considering just getting a Raid 5 controller and going the
linux route I had planned on.
It's your data, and you are responsible for it. So this thread, if
nothing else, allo
I must say this thread has also damaged the view I have of ZFS. I've been
considering just getting a Raid 5 controller and going the linux route I had
planned on.
wow, talk about a knee jerk reaction...
On Jul 31, 2009, at 3:23 PM, Dave Stubbs wrote:
I don't mean to be offensive Russel, but if you do
ever return to ZFS, please promise me that you will
never, ever, EVER run it virtualized on top of NTFS
(a.k.a. worst file system ever) in a production
envi
> I don't mean to be offensive Russel, but if you do
> ever return to ZFS, please promise me that you will
> never, ever, EVER run it virtualized on top of NTFS
> (a.k.a. worst file system ever) in a production
> environment. Microsoft Windows is a horribly
> unreliable operating system in situatio
On Jul 28, 2009, at 6:34 PM, Eric D. Mudama wrote:
On Mon, Jul 27 at 13:50, Richard Elling wrote:
On Jul 27, 2009, at 10:27 AM, Eric D. Mudama wrote:
Can *someone* please name a single drive+firmware or RAID
controller+firmware that ignores FLUSH CACHE / FLUSH CACHE EXT
commands? Or worse, re
Hi James
Many thanks for finding & posting that link.
I'm sure many people on this forum will
be interested in trying out Brad Fitzpatrick's
perl script 'diskchecker.pl'.
It will be interesting to hear their results.
I've not yet had time to work out how Brad's
script works. It would be good if
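(From a quick look the usage seems to be roughly the following; I may be off on the
exact arguments, so treat this as a sketch. You run the script in listener mode on a
second machine, write from the machine under test, cut its power hard, then verify
after reboot:

  ./diskchecker.pl -l                                      # on a second "server" machine
  ./diskchecker.pl -s server.example.com create test 500   # on the machine under test
  # ...pull the plug on the test machine while that runs, reboot it, then:
  ./diskchecker.pl -s server.example.com verify test

If verify reports blocks the server was told were synced but which are missing or
stale after the power cut, something in the stack is lying about flushes.)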
Nigel Smith wrote:
> David Magda wrote:
>> This is also (theoretically) why a drive purchased from Sun is more
>> expensive than a drive purchased from your neighbourhood computer
>> shop: Sun (and presumably other manufacturers) takes the time and
>> effort to test things to make sure t
On Mon, Jul 27 at 13:50, Richard Elling wrote:
On Jul 27, 2009, at 10:27 AM, Eric D. Mudama wrote:
Can *someone* please name a single drive+firmware or RAID
controller+firmware that ignores FLUSH CACHE / FLUSH CACHE EXT
commands? Or worse, responds "ok" when the flush hasn't occurred?
two seco
> This is also (theoretically) why a drive purchased from Sun is more
> expensive than a drive purchased from your neighbourhood computer
> shop:
It's more significant than that. Drives aimed at the consumer market are at a
competitive disadvantage if they do handle cache flush corr
>
> Can *someone* please name a single drive+firmware or RAID
> controller+firmware that ignores FLUSH CACHE / FLUSH CACHE EXT
> commands? Or worse, responds "ok" when the flush hasn't occurred?
I think it would be a shorter list if one were to name the drives/controllers
that actually imp
I think people can understand the concept of missing flushes. The big
conceptual problem is how this manages to hose an entire filesystem, which is
assumed to have rather a lot of data which ZFS has already verified to be ok.
Hardware ignoring flushes and losing recent data is understandable,
On 27-Jul-09, at 3:44 PM, Frank Middleton wrote:
On 07/27/09 01:27 PM, Eric D. Mudama wrote:
Everyone on this list seems to blame lying hardware for ignoring
commands, but disks are relatively mature and I can't believe that
major OEMs would qualify disks or other hardware that willingly
ig
David Magda wrote:
> This is also (theoretically) why a drive purchased from Sun is more
> expensive than a drive purchased from your neighbourhood computer
> shop: Sun (and presumably other manufacturers) takes the time and
> effort to test things to make sure that when a drive says "I'
On 27-Jul-09, at 15:14 , David Magda wrote:
Also, I think it may have already been posted, but I haven't found the
option to disable VirtualBox' disk cache. Anyone have the incantation
handy?
http://forums.virtualbox.org/viewtopic.php?f=8&t=13661&start=0
It tells VB not to ignore the sync/fl
On Jul 27, 2009, at 10:27 AM, Eric D. Mudama wrote:
On Sun, Jul 26 at 1:47, David Magda wrote:
On Jul 25, 2009, at 16:30, Carson Gaspar wrote:
Frank Middleton wrote:
Doesn't this mean /any/ hardware might have this problem, albeit
with much lower probability?
No. You'll lose unwritten
On 07/27/09 01:27 PM, Eric D. Mudama wrote:
Everyone on this list seems to blame lying hardware for ignoring
commands, but disks are relatively mature and I can't believe that
major OEMs would qualify disks or other hardware that willingly ignore
commands.
You are absolutely correct, but if th
On Mon, July 27, 2009 13:59, Adam Sherman wrote:
> Also, I think it may have already been posted, but I haven't found the
> option to disable VirtualBox' disk cache. Anyone have the incantation
> handy?
http://forums.virtualbox.org/viewtopic.php?f=8&t=13661&start=0
It tells VB not to ignore the
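(For reference, the incantation that turned up elsewhere in this thread; VMNAME and
the LUN number [x] have to match your VM's IDE attachment:

  VBoxManage setextradata VMNAME
  "VBoxInternal/Devices/piix3ide/0/LUN#[x]/Config/IgnoreFlush" 0

Setting IgnoreFlush to 0 tells VirtualBox to pass the guest's flush requests through
rather than silently acknowledging them.)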
On Mon, Jul 27, 2009 at 12:54 PM, Chris Ridd wrote:
>
> On 27 Jul 2009, at 18:49, Thomas Burgess wrote:
>
>>
>> i was under the impression it was virtualbox and it's default setting that
>> ignored the command, not the hard drive
>
> Do other virtualization products (eg VMware, Parallels, Virtual P
On 27-Jul-09, at 13:54 , Chris Ridd wrote:
i was under the impression it was virtualbox and it's default
setting that ignored the command, not the hard drive
Do other virtualization products (eg VMware, Parallels, Virtual PC)
have the same default behaviour as VirtualBox?
I've a suspicion
On 27 Jul 2009, at 18:49, Thomas Burgess wrote:
i was under the impression it was virtualbox and it's default
setting that ignored the command, not the hard drive
Do other virtualization products (eg VMware, Parallels, Virtual PC)
have the same default behaviour as VirtualBox?
I've a s
i was under the impression it was virtualbox and it's default setting that
ignored the command, not the hard drive
On Mon, Jul 27, 2009 at 1:27 PM, Eric D. Mudama
wrote:
> On Sun, Jul 26 at 1:47, David Magda wrote:
>
>>
>> On Jul 25, 2009, at 16:30, Carson Gaspar wrote:
>>
>> Frank Middleton wr
On Sun, Jul 26 at 1:47, David Magda wrote:
On Jul 25, 2009, at 16:30, Carson Gaspar wrote:
Frank Middleton wrote:
Doesn't this mean /any/ hardware might have this problem, albeit
with much lower probability?
No. You'll lose unwritten data, but won't corrupt the pool, because
the on-disk
Heh, I'd kill for failures to be handled in 2 or 3 seconds. I saw the failure
of a mirrored iSCSI disk lock the entire pool for 3 minutes. That has been
addressed now, but device hangs have the potential to be *very* disruptive.
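(Related knob, for what it's worth: the pool-level failmode property controls whether
I/O against a pool that has suffered catastrophic device failure blocks, returns
errors, or panics the box. A sketch, with tank as a placeholder pool name:

  zpool get failmode tank
  zpool set failmode=continue tank   # alternatives are wait (the default) and panic

It doesn't make a hanging disk get detected any faster, but it does change how the
rest of the system behaves while the pool is stuck.)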
> That's only one element of it Bob. ZFS also needs
> devices to fail quickly and in a predictable manner.
>
> A consumer grade hard disk could lock up your entire
> pool as it fails. The kit Sun supply is more likely
> to fail in a manner ZFS can cope with.
I agree 100%.
Hardware, firmware,
On 26-Jul-09, at 11:08 AM, Frank Middleton wrote:
On 07/25/09 04:30 PM, Carson Gaspar wrote:
No. You'll lose unwritten data, but won't corrupt the pool, because
the on-disk state will be sane, as long as your iSCSI stack doesn't
lie about data commits or ignore cache flush commands. Why is th
On Sun, 26 Jul 2009, David Magda wrote:
That's the whole point of this thread: what should happen, or what should the
file system do, when the drive (real or virtual) lies about the syncing? It's
just as much a problem with any other POSIX file system (which have to deal
with fsync(2))--ZFS i
On 07/25/09 04:30 PM, Carson Gaspar wrote:
No. You'll lose unwritten data, but won't corrupt the pool, because
the on-disk state will be sane, as long as your iSCSI stack doesn't
lie about data commits or ignore cache flush commands. Why is this so
difficult for people to understand? Let me crea
On Jul 25, 2009, at 15:32, Frank Middleton wrote:
Can you comment on if/how mirroring or raidz mitigates this, or tree
corruption in general? I have yet to lose a pool even on a machine
with fairly pathological problems, but it is mirrored (and copies=2).
Presumably at least one of the drives i
On Jul 25, 2009, at 16:30, Carson Gaspar wrote:
Frank Middleton wrote:
Doesn't this mean /any/ hardware might have this problem, albeit
with much lower probability?
No. You'll lose unwritten data, but won't corrupt the pool, because
the on-disk state will be sane, as long as your iSCSI s
On 25-Jul-09, at 3:32 PM, Frank Middleton wrote:
On 07/25/09 02:50 PM, David Magda wrote:
Yes, it can be affected. If the snapshot's data structure / record is
underneath the corrupted data in the tree then it won't be able to be
reached.
Can you comment on if/how mirroring or raidz mitigat
Frank Middleton wrote:
Finally, a number of posters blamed VB for ignoring a flush, but
according to the evil tuning guide, without any application syncs,
ZFS may wait up to 5 seconds before issuing a sync, and there
must be all kinds of failure modes even on bare hardware where
it never gets a
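(The interval Frank is referring to is the txg sync interval. On recent builds it is
governed by the zfs_txg_timeout tunable, which can be set in /etc/system, e.g.

  set zfs:zfs_txg_timeout = 5

Treat the exact tunable name and any default value as my assumption; the Evil Tuning
Guide for your build is the authority here.)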
On 07/25/09 02:50 PM, David Magda wrote:
Yes, it can be affected. If the snapshot's data structure / record is
underneath the corrupted data in the tree then it won't be able to be
reached.
Can you comment on if/how mirroring or raidz mitigates this, or tree
corruption in general? I have yet t
On Jul 25, 2009, at 14:17, roland wrote:
thanks for the explanation !
one more question:
there are situations where the disks doing strange things
(like lying) have caused the ZFS data structures to become wonky. The
'broken' data structure will cause all branches underneath it to be
lost--a
thanks for the explanation !
one more question:
> there are situations where the disks doing strange things
>(like lying) have caused the ZFS data structures to become wonky. The
>'broken' data structure will cause all branches underneath it to be
>lost--and if it's near the top of the tree, it c
On Jul 25, 2009, at 12:24, roland wrote:
why can I lose a whole 10TB pool including all the snapshots with
the logging/transactional nature of zfs?
Because ZFS does not (yet) have an (easy) way to go back to a previous
state. That's what this bug is about:
need a way to rollback to an uber
>As soon as you have more then one disk in the equation, then it is
>vital that the disks commit their data when requested since otherwise
>the data on disk will not be in a consistent state.
OK, but doesn't that refer only to the most recent data?
Why can I lose a whole 10TB pool including all t
On Sat, 25 Jul 2009, roland wrote:
When that happens, ZFS believes the data is safely written, but a
power cut or crash can cause severe problems with the pool.
didn't I read a million times that ZFS ensures an "always consistent
state" and is self-healing, too?
So, if new blocks are alwa
> Running this kind of setup absolutely can give you NO guarantees at all.
> Virtualisation, OSOL/zfs on WinXP. It's nice to play with and see it
> "working", but would I TRUST precious data to it? No way!
Why not?
If I write some data through a virtualization layer which goes straight through to
a raw disk
On 07/21/09 01:21 PM, Richard Elling wrote:
I never win the lottery either :-)
Let's see. Your chance of winning a 49 ball lottery is apparently
around 1 in 14*10^6, although it's much better than that because of
submatches (smaller payoffs for matches on less than 6 balls).
There are about 3
> "aym" == Anon Y Mous writes:
> "mg" == Mario Goebbels writes:
aym> I don't mean to be offensive Russel, but if you do ever return
aym> to ZFS, please promise me that you will never, ever, EVER run
aym> it virtualized on top of NTFS
he said he was using raw disk devices IIRC.
To All : The ECC discussion was very interesting as I had never
considered it that way! I will be buying ECC memory for my home
machine!!
You have to make sure your mainboard, chipset and/or CPU support it,
otherwise any ECC modules will just work like regular modules.
The mainboard needs to
Once these bits are available in Opensolaris then users will be able to
upgrade rather easily. This would allow you to take a liveCD running
these bits and recover older pools.
Do you currently have a pool which needs recovery?
Thanks,
George
Alexander Skwar wrote:
Hi.
Good to Know!
But ho
I don't mean to be offensive Russel, but if you do ever return to ZFS, please
promise me that you will never, ever, EVER run it virtualized on top of NTFS
(a.k.a. worst file system ever) in a production environment. Microsoft Windows
is a horribly unreliable operating system in situations where
Hi.
Good to Know!
But how do we deal with that on older systems, which don't have the
patch applied, once it is out?
Thanks, Alexander
On Tuesday, July 21, 2009, George Wilson wrote:
> Russel wrote:
>
> OK.
>
> So do we have a zpool import --xtg 56574 mypoolname
> or help to do it (script?)
>
Thanks for the feed back George.
I hope we get the tools soon.
At home I have now blown the ZFS away and am creating
a HW raid-5 set :-( Hopefully in the future when the tools
are there I will return to ZFS.
To All : The ECC discussion was very interesting as I had never
considered it that way
On Jul 20, 2009, at 12:48 PM, Frank Middleton wrote:
On 07/19/09 06:10 PM, Richard Elling wrote:
Not that bad. Uncommitted ZFS data in memory does not tend to
live that long. Writes are generally out to media in 30 seconds.
Yes, but memory hits are instantaneous. On a reasonably busy
system
Russel wrote:
OK.
So do we have a zpool import --xtg 56574 mypoolname
or help to do it (script?)
Russel
We are working on the pool rollback mechanism and hope to have that
soon. The ZFS team recognizes that not all hardware is created equal and
thus the need for this mechanism. We are usi
My understanding of the root cause of these issues is that the vast majority
are happening with consumer grade hardware that is reporting to ZFS that writes
have succeeded, when in fact they are still in the cache.
When that happens, ZFS believes the data is safely written, but a power cut or
c
On 07/19/09 06:10 PM, Richard Elling wrote:
Not that bad. Uncommitted ZFS data in memory does not tend to
live that long. Writes are generally out to media in 30 seconds.
Yes, but memory hits are instantaneous. On a reasonably busy
system there may be buffers in queue all the time. You may hav
On 20-Jul-09, at 6:26 AM, Russel wrote:
Well I did have a UPS on the machine :-)
but the machine hung and I had to power it off...
(yep it was virtual, but that happens on direct HW too,
As has been discussed here before, the failure modes are different as
the layer stack from filesystem t
OK.
So do we have a zpool import --xtg 56574 mypoolname
or help to do it (script?)
Russel
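(No such option exists in the bits being discussed here. The rollback mechanism George
describes elsewhere in the thread eventually surfaced as a recovery-mode import in
later builds; roughly, and treating the exact flags as my assumption:

  zpool import -nF mypool   # dry run: report what a recovery import would discard
  zpool import -F mypool    # import, discarding the last few transactions if necessary

mypool is a placeholder for the damaged pool's name.)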
> the machine hung and I had to power it off.
kinda getting off the "zpool import --tgx -3" request, but
"hangs" are exceptionally rare and usually ram or other
hardware issue, solairs usually abends on software faults.
r...@pdm # uptime
9:33am up 1116 day(s), 21:12, 1 user, load average:
Well I did have a UPS on the machine :-)
but the machine hung and I had to power it off...
(yep it was virtual, but that happens on direct HW too,
and virtualisation is the happening thing at Sun and elsewhere!
I have a version of the data backed up, but it will
take ages (10 days) to restore).
On Sun, 19 Jul 2009, Richard Elling wrote:
I do, even though I have a small business. Neither InDesign nor
Illustrator will be ported to Linux or OpenSolaris in my lifetime...
besides, iTunes rocks and it is the best iPhone developer's environment
on the planet.
Richard,
I think the point
Gavin Maltby wrote:
Hi,
David Magda wrote:
On Jul 19, 2009, at 20:13, Gavin Maltby wrote:
No, ECC memory is a must too. ZFS checksumming verifies and corrects
data read back from a disk, but once it is read from disk it is stashed
in memory for your application to use - without ECC you ero
Hi,
David Magda wrote:
On Jul 19, 2009, at 20:13, Gavin Maltby wrote:
No, ECC memory is a must too. ZFS checksumming verifies and corrects
data read back from a disk, but once it is read from disk it is stashed
in memory for your application to use - without ECC you erode
confidence that
wh
On Sun, 19 Jul 2009, David Magda wrote:
Right, because once (say) Apple incorporates ZFS into Mac OS X
they'll also start shipping MacBooks and iMacs with ECC. If it's so
necessary we might as well have any kernel that has ZFS in it only
allow 'zpool create' to be run if the kernel detects EC
On Jul 19, 2009, at 20:13, Gavin Maltby wrote:
No, ECC memory is a must too. ZFS checksumming verifies and corrects
data read back from a disk, but once it is read from disk it is stashed
in memory for your application to use - without ECC you erode
confidence that what you read from memor
dick hoogendijk wrote:
true. Furthermore, much so-called consumer hardware is very good these
days. My guess is ZFS should work quite reliably on that hardware.
(i.e. non ECC memory should work fine!) / mirroring is a -must- !
No, ECC memory is a must too. ZFS checksumming verifies and correc
On Sun, 19 Jul 2009, Miles Nordin wrote:
"r" == Ross writes:
"tt" == Toby Thain writes:
r> ZFS was never designed to run on consumer hardware,
this is markedroid garbage, as well as post-facto apologetics.
Don't lower the bar. Don't blame the victim.
I think that the standard discl
Frank Middleton wrote:
On 07/19/09 05:00 AM, dick hoogendijk wrote:
(i.e. non ECC memory should work fine!) / mirroring is a -must- !
Yes, mirroring is a must, although it doesn't help much if you
have memory errors (see several other threads on this topic):
http://en.wikipedia.org/wiki/Dyna
> "r" == Ross writes:
> "tt" == Toby Thain writes:
r> ZFS was never designed to run on consumer hardware,
this is markedroid garbage, as well as post-facto apologetics.
Don't lower the bar. Don't blame the victim.
tt> I posted about that insane default, six months ago. Obvi
On Sun, 19 Jul 2009, Frank Middleton wrote:
Yes, mirroring is a must, although it doesn't help much if you
have memory errors (see several other threads on this topic):
http://en.wikipedia.org/wiki/Dynamic_random_access_memory#Errors_and_error_correction
"Tests[ecc]give widely varying error rat
On 07/19/09 05:00 AM, dick hoogendijk wrote:
(i.e. non ECC memory should work fine!) / mirroring is a -must- !
Yes, mirroring is a must, although it doesn't help much if you
have memory errors (see several other threads on this topic):
http://en.wikipedia.org/wiki/Dynamic_random_access_memory
That's only one element of it Bob. ZFS also needs devices to fail quickly and
in a predictable manner.
A consumer grade hard disk could lock up your entire pool as it fails. The kit
Sun supply is more likely to fail in a manner ZFS can cope with.
On 19-Jul-09, at 7:12 AM, Russel wrote:
Guys guys please chill...
First, thanks for the info about the VirtualBox option to bypass the
cache (I don't suppose you can give me a reference for that info?
(I'll search the VB site :-))
I posted about that insane default, six months ago. Obviously ZFS
On Sun, 19 Jul 2009, Ross wrote:
The success of any ZFS implementation is *very* dependent on the
hardware you choose to run it on.
To clarify:
"The success of any filesystem implementation is *very* dependent on
the hardware you choose to run it on."
ZFS requires that the hardware cache s
Heh, yes, I assumed similar things Russel. I also assumed that a faulty disk
in a raid-z set wouldn't hang my entire pool indefinitely, that hot plugging a
drive wouldn't reboot Solaris, and that my pool would continue working after I
disconnected one half of an iscsi mirror.
I also like yours
> From the experience myself and others have had, and Sun's approach with their
> Amber Road storage (FISHWORKS - fully integrated *hardware* and software), my
> feeling is very much that ZFS was designed by Sun to run on Sun's own
> hardware, and as such, they were able to make certain assumptions
Guys guys please chill...
First, thanks for the info about the VirtualBox option to bypass the
cache (I don't suppose you can give me a reference for that info?
(I'll search the VB site :-)) As this was not clear to me. I use VB
like others use vmware etc to run Solaris because it's the ONLY
way I can, a
On Sun, 19 Jul 2009 01:48:40 PDT
Ross wrote:
> As far as I can see, the ZFS Administrator Guide is sorely lacking in
> any warning that you are risking data loss if you run on consumer
> grade hardware.
And yet, ZFS is not only for NON-consumer grade hardware, is it? The
fact that many, many peop
While I agree with Brent, I think this is something that should be stressed in
the ZFS documentation. Those of us with long term experience of ZFS know that
it's really designed to work with hardware meeting quite specific requirements.
Unfortunately, that isn't documented anywhere, and more an
On Sun, 19 Jul 2009 00:00:06 -0700
Brent Jones wrote:
> No offense, but you trusted 10TB of important data, running in
> OpenSolaris from inside Virtualbox (not stable) on top of Windows XP
> (arguably not stable, especially for production) on probably consumer
> grade hardware with unknown suppo
. July 2009 11:24
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Another user loses his pool (10TB) in this case and
40 days work
>>>>> "bj" == Brent Jones writes:
bj> many levels of fail here,
pft. Virtualbox isn't unstable in any of my e
> "bj" == Brent Jones writes:
bj> many levels of fail here,
pft. Virtualbox isn't unstable in any of my experience. It doesn't
by default pass cache flushes from guest to host unless you set
VBoxManage setextradata VMNAME
"VBoxInternal/Devices/piix3ide/0/LUN#[x]/Config/IgnoreFlush" 0
On Sat, Jul 18, 2009 at 7:39 PM, Russel wrote:
> Yes you'll find my name all over VB at the moment, but I have found it to be
> stable
> (don't install the addons disk for solaris!!, use 3.0.2, and for me
> winXP32bit and
> OpenSolaris 2009.6 has been rock solid, it was (seems) to be opensolaris
Yes you'll find my name all over VB at the moment, but I have found it to be
stable
(don't install the addons disk for Solaris!!, use 3.0.2, and for me WinXP 32-bit and
OpenSolaris 2009.6 has been rock solid; it was (seems to be) OpenSolaris that failed
with extract_boot_list doesn't belong to 101, but
Sorry to hear that, but you do know that VirtualBox is not really stable?
VirtualBox does show some instability from time to time. Haven't you read the
VirtualBox forums? I would advise against VirtualBox for saving all your data
in ZFS. I would use OpenSolaris without virtualization. I hope your
Well I have a 10TB (5x2TB) in RAIDZ on VirtualBox,
got it all working on Windows XP and Windows 7.
SMB shares back to my PC: great, managed the impossible!
Copied all my data over from loads of old external disks,
sorted it, all in all 15 days work (my holiday :-)). Used
raw disks to the VirtualBox