Brent Jones:
My results are much improved, on the order of 5-100 times faster
(either over Mbuffer or SSH).
this is good news - although not quite soon enough for my current 5TB zfs send
;-)
have you tested if this also improves the performance
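For anyone following along, a typical mbuffer-based zfs send pipeline looks roughly like this (pool, snapshot, and host names are made up for illustration; buffer sizes are one common choice, not a recommendation):

```shell
# On the receiving host: listen on a TCP port and feed the stream to zfs receive.
mbuffer -s 128k -m 1G -I 9090 | zfs receive -F tank/backup

# On the sending host: stream the snapshot through mbuffer to the receiver.
zfs send tank/data@snap1 | mbuffer -s 128k -m 1G -O receiver:9090
```

The large in-memory buffer (-m) is what smooths out the bursty producer/consumer behaviour of send/receive that plain SSH pipes suffer from.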
I thought I'd noticed that my crashes tended to occur when I was running a
scrub, and saw at least one open bug that was scrub-related that could
cause such a crash. However, I eventually tracked my problem down (as it
got worse) to a bad piece of memory (been nearly a week since I replaced
the
On Thu, 22 Jan 2009, Ross wrote:
However, now I've written that, Sun uses SATA (SAS?) SSDs in their
high end fishworks storage, so I guess it definitely works for some
use cases.
But the fishworks (Fishworks is a development team, not a product)
write cache device is not based on FLASH.
That's my understanding too. One (STEC?) drive as a write cache,
basically a write-optimised SSD. And cheaper, larger, read-optimised
SSDs for the read cache.
I thought it was an odd strategy until I read into SSD's a little more
and realised you really do have to think about your usage cases
On Fri, January 23, 2009 09:52, casper@sun.com wrote:
Which leaves me wondering, how safe is running a scrub? Scrub is one of
the things that made ZFS so attractive to me, and my automatic reaction
when I first hook up the data disks during a recovery is to run a scrub!
If your memory is
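For reference, starting and keeping an eye on a scrub takes only a couple of commands (pool name hypothetical):

```shell
zpool scrub tank        # start a scrub of the pool
zpool status -v tank    # check progress and any errors found so far
zpool scrub -s tank     # stop a scrub in flight if it causes trouble
```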
If I'm not mistaken (and somebody please correct me if I'm wrong), the
Sun 7000 series storage appliances (the Fishworks boxes) use enterprise
SSDs, with dram caching. One such product is made by STEC.
My understanding is that the Sun appliances use one SSD for the ZIL, and
one as a read
* David Dyer-Bennet (d...@dd-b.net) wrote:
On Fri, January 23, 2009 09:52, casper@sun.com wrote:
Which leaves me wondering, how safe is running a scrub? Scrub is one of
the things that made ZFS so attractive to me, and my automatic reaction
when I first hook up the data disks during a
On Fri, January 23, 2009 12:01, Glenn Lagasse wrote:
* David Dyer-Bennet (d...@dd-b.net) wrote:
But what I'm wondering is, are there known bugs in 101b that make
scrubbing inadvisable with that code? I'd love to *find out* what
horrors
may be lurking.
There's nothing in the release notes
Yes, everything seems to be fine, but that was still scary, and the fix was not
completely obvious. At the very least, I would suggest adding text such as the
following to the page at http://www.sun.com/msg/ZFS-8000-FD :
When physically replacing the failed device, it is best to use the same
--- On Tue, 12/30/08, Andrew Gabriel agabr...@opensolaris.org wrote:
If you were doing a rolling upgrade, I suspect the old disks are all
horribly out of sync with each other?
If that is the problem, then if the filesystem(s) have a snapshot that
existed when all the old disks were still online,
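If such a snapshot exists, the rollback being suggested would look something like this (filesystem and snapshot names hypothetical):

```shell
# Roll the filesystem back to the last snapshot taken while all of
# the old disks were still online. -r also destroys any more recent
# snapshots and clones, so check 'zfs list -t snapshot' first.
zfs rollback -r tank/data@pre-upgrade
```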
Yes, but I can't export a pool that has never been imported. These drives are
no longer connected to their original system, and at this point, when I connect
them to their original system, the results are the same.
Thanks,
Michael
--- On Tue, 12/30/08, Weldon S Godfrey 3 wel...@excelsus.com
Roger wrote:
Hi!
I'm running OpenSolaris b101 and I've made a zfs pool called tank and a filesystem
inside of it, tank/public; I've shared it with smb:
zfs set sharesmb=on tank/public
I'm using the Solaris CIFS service, not Samba.
The problem is this: when I connect and create a file, it's readable to
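A common fix for permission surprises on files created over SMB is to set up ACL inheritance on the dataset. A minimal sketch, assuming the poster's dataset name and one reasonable choice of ACLs (not the only possible policy):

```shell
# Let files created over SMB inherit the directory's ACL entries.
zfs set aclinherit=passthrough tank/public

# Give the parent directory inheritable (f=file, d=dir) ACL entries;
# full_set/read_set are Solaris chmod's compact permission sets.
chmod A=owner@:full_set:fd:allow,group@:read_set:fd:allow,everyone@:read_set:fd:allow /tank/public
```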
Hi All,
sorry for all the duplicates. Feel free to pass on to other interested
parties.
The OpenSolaris Storage Community is holding a Storage Summit on
February 23 at the Grand Hyatt San Francisco, prior to the FAST
conference.
The registration wiki is here:
I have two 280R systems. System A has Solaris 10u6, and its (2) drives
are configured as a ZFS rpool, and are mirrored. I would like to pull
these drives, and move them to my other 280, system B, which is
currently hard drive-less.
Although unsupported by Sun, I have done this before without
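For the archives, the unsupported sequence is roughly this, assuming system B is booted from install or failsafe media after the drives are moved (device paths omitted):

```shell
# On system B, with both mirrored drives attached:
zpool import            # list pools visible on the attached disks
zpool import -f rpool   # -f: the pool was last in use on system A and
                        # could not be cleanly exported while A was
                        # booted from it
```

On SPARC you may also need to reinstall the boot block and update the boot archive on system B before it will boot from the pool.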
I was having CIFS problems on my Mac, so I upgraded to build 105.
After getting all my shares populated with data I ran zpool scrub on
the raidz array and it told me the version was out of date so I
upgraded.
One of my shares is now inaccessible and I cannot even delete it :(
It was rumored that Nevada build 105 would have ZFS encrypted file
systems integrated into the main source.
In reviewing the change logs (URLs below) I did not see anything
mentioned that this had come to pass. It's going to be another week
before I have a chance to play with b105.
Does
Richard Elling wrote:
mijenix wrote:
yes, that's the way zpool likes it
I think I have to understand how (Open)Solaris creates disks, or how the
partitioning works under OSol. Do you know any guide or howto?
We've tried to make sure the ZFS Admin Guide covers these things, including
Hi All,
Since switching to ZFS I get a lot of "beach balls". I think for
productivity's sake I should switch back to HFS+. My home directory was on
this ZFS partition.
I backed up my data to another drive and tried using Disk Utility to select
my ZFS partition, un-mount it and format just that
It also wouldn't be a bad idea for ZFS to verify that drives designated as
hot spares in fact have sufficient capacity to be compatible replacements
for particular configurations, prior to actually being critically required
(as drives otherwise appearing to have equivalent capacity may not, it
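Pending such a check in ZFS itself, the raw capacities can be compared by hand before a spare is configured; on Solaris something like this (device names hypothetical, the spare must report at least as many sectors as the smallest pool member):

```shell
# Print accessible sectors for an existing pool member and for the
# candidate spare, then compare the two numbers.
prtvtoc /dev/rdsk/c1t2d0s0 | awk '/accessible sectors/ {print $2}'
prtvtoc /dev/rdsk/c1t6d0s0 | awk '/accessible sectors/ {print $2}'
```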
Hi all,
I moved from Sol 10 Update 4 to Update 6.
Before doing this I exported both of my zpools, and replaced the discs
containing the ufs root with two new discs (these discs did not have any
zpool/zfs info and are mirrored in hardware RAID).
Once I had installed update6 I did a zpool
Colin Johnson wrote:
I was having CIFs problems on my Mac so I upgrade to build 105.
After getting all my shares populated with data I ran zpool scrub on
the raidz array and it told me the version was out of date so I
upgraded.
One of my shares is now inaccessible and I cannot even
Jerry K wrote:
It was rumored that Nevada build 105 would have ZFS encrypted file
systems integrated into the main source.
In reviewing the Change logs (URL's below) I did not see anything
mentioned that this had come to pass. Its going to be another week
before I have a chance to play
James Nord wrote:
Hi all,
I moved from Sol 10 Update4 to update 6.
Before doing this I exported both of my zpools, and replace the discs
containing the ufs root on with two new discs (these discs did not have any
zpool /zfs info and are raid mirrored in hardware)
Once I had installed
A little gotcha that I found in my 10u6 update process was that 'zpool
upgrade [poolname]' is not the same as 'zfs upgrade
[poolname]/[filesystem(s)]'
What does 'zfs upgrade' say? I'm not saying this is the source of
your problem, but it's a detail that seemed to affect stability for
me.
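To make the distinction concrete (pool name hypothetical) — the two commands upgrade different on-disk formats, and upgrading one does not touch the other:

```shell
zpool upgrade tank     # upgrade the pool's on-disk version
zfs upgrade -r tank    # upgrade each filesystem's version, recursively
zpool upgrade -v       # list pool versions this build supports
zfs upgrade -v         # list filesystem versions this build supports
```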
On
I've seen reports of a recent Seagate firmware update bricking drives again.
What's the output of 'zpool import' from the LiveCD? It sounds like
more than 1 drive is dropping off.
On Thu, Jan 22, 2009 at 10:52 PM, Brad Hill b...@thosehills.com wrote:
I would get a new 1.5 TB and make sure it
This is primarily a list for OpenSolaris ZFS - OS X is a little different ;)
However, I think you need to do a 'sudo zpool destroy [poolname]' from
Terminal.app
Be warned, you can't go back once you have done this!
On Sun, Jan 18, 2009 at 4:42 PM, Jason Todd Slack-Moehrle
+1
On Thu, Jan 22, 2009 at 11:12 PM, Paul Schlie sch...@comcast.net wrote:
It also wouldn't be a bad idea for ZFS to also verify drives designated as
hot spares in fact have sufficient capacity to be compatible replacements
for particular configurations, prior to actually being critically
Does anyone know specifically if b105 has ZFS encryption?
IIRC it has been pushed back to b109.
-mg
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
On Tue, Jan 6, 2009 at 4:23 PM, John Arden jarden.ar...@oryx.cc wrote:
I have two 280R systems. System A has Solaris 10u6, and its (2) drives
are configured as a ZFS rpool, and are mirrored. I would like to pull
these drives, and move them to my other 280, system B, which is
currently hard