Richard,
thanks for the explanation.
So can we say that the problem is the disks losing a command now and then
under stress?
Best regards.
Maurilio.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
On Mon, Oct 5, 2009 at 10:27 AM, David Dyer-Bennet d...@dd-b.net wrote:
On Sat, October 3, 2009 17:18, Ray Clark wrote:
Thank you all for your help, not to snub anyone, but Darren, Richard, and
Cindy especially come to mind. Thanks for sparring with me until we
understood each other.
I'd
Question (for Richard E): Is there a write-up on the ZFS broken fletcher fix?
Is the default checksum for new pool creation changed in U8?
Is the default checksum for new pool creation changed in OpenSolaris or
SXCE (which versions)?
Is there a case open to allow the user to select the checksum
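Whatever the defaults turn out to be, the checksum algorithm can already be inspected and set per dataset. A minimal sketch ("tank" is a placeholder pool name):

```
# zfs get checksum tank
# zfs set checksum=fletcher4 tank
```

Note that changing the property only affects blocks written afterwards; existing blocks keep the checksum they were written with.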
I am one of the much blessed university users who wishes to provide home
directories and web space to thousands of users and is being bitten by the
abysmal scaling behaviour of zfs: the overhead of creating thousands of zfs
file systems in a pool can take days to complete. Sharing or unsharing
them
On Oct 4, 2009, at 11:52 PM, Maurilio Longo wrote:
Richard,
thanks for the explanation.
So can we say that the problem is the disks losing a command now
and then under stress?
It may be the disks or the HBA. I'll bet a steak dinner it is the HBA.
-- richard
Richard,
it is the same controller used inside Sun's thumpers; it could be a problem in
my unit (which is a couple of years old now), though.
Is there something I can do to find out if I owe you that steak? :)
Thanks.
Maurilio.
Victor Latushkin wrote:
Liam Slusser wrote:
Long story short, my cat jumped on my server at my house, crashing two
drives at the same time. It was a 7-drive raidz (next time I'll do
raidz2).
Long story short - we've been able to get access to data in the pool.
This involved finding better
re == Richard Elling richard.ell...@gmail.com writes:
re As I said before, if the checksum matches, then the data is
re checked for sequence number = previous + 1, the blk_birth ==
re 0, and the size is correct. Since this data lives inside the
re block, it is unlikely that a
On 05.10.09 23:07, Miles Nordin wrote:
re == Richard Elling richard.ell...@gmail.com writes:
re As I said before, if the checksum matches, then the data is
re checked for sequence number = previous + 1, the blk_birth ==
re 0, and the size is correct. Since this data lives inside
On Sun, Oct 4, 2009 at 3:23 PM, Trevor Pretty trevor_pre...@eagle.co.nz wrote:
I think you've taken volume snapshots. I believe you need to make each
users/username its own zfs file system and take file-system snapshots.
Let's play...
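A sketch of that layout (pool and user names are placeholders): give each user a file system of their own, then snapshots can be taken per user or recursively for everyone at once:

```
# zfs create tank/home/alice
# zfs snapshot tank/home/alice@monday     # one user
# zfs snapshot -r tank/home@monday        # all users at once
```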
Automatic .snapshot directories are a feature of NetApp filers and
vl == Victor Latushkin victor.latush...@sun.com writes:
vl It changes setting of checksum=on to mean fletcher4
oh, good. so it is only the ZIL that's unfixed? At least that fix
could come from a simple upgrade, if it ever gets fixed.
I have a snapshot that I'd like to destroy:
# zfs list rpool/ROOT/be200909160...@200909160720
NAME USED AVAIL REFER MOUNTPOINT
rpool/ROOT/be200909160...@200909160720 1.88G - 4.18G -
But when I try it warns me of dependent clones:
# zfs destroy
Sorry. My environment:
# uname -a
SunOS xx 5.10 Generic_141414-10 sun4v sparc SUNW,SPARC-Enterprise-T5220
Replying to a few folks in a digest format, because I'm lazy and don't
have that much to say.
On Wed, Sep 30, 2009 at 5:53 PM, Tim Cook t...@cook.ms wrote:
What are you hoping to accomplish? You're still going to need a drive's
worth of free space, and if you're so performance strapped that one
On 5-Oct-09, at 3:32 PM, Miles Nordin wrote:
bm == Brandon Mercer yourcomputer...@gmail.com writes:
I'm now starting to feel that I understand this issue,
and I didn't for quite a while. And that I understand the
risks better, and have a clearer idea of what the possible
fixes are. And I
Hi Osvald,
Can you comment on how the disks shrank or how the labeling on these
disks changed?
We would like to track the issues that cause the hardware underneath
a live pool to change so that we can figure out how to prevent pool
failures in the future.
Thanks,
Cindy
On 10/03/09 09:46,
On Mon, Oct 05, 2009 at 02:14:24PM -0700, Mark Horstman wrote:
I have a snapshot that I'd like to destroy:
If you have a filesystem and a clone of that filesystem, a snapshot
always connects them. You can destroy the snapshot only if there are no
clones.
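A sketch of the usual ways out (dataset names below are placeholders, not from the original post): either destroy the clones first, or promote a clone so the snapshot's ownership moves to it:

```
# zfs list -t all -o name,origin      # find clones whose origin is the snapshot
# zfs promote rpool/ROOT/newbe        # snapshot migrates to the promoted clone
# zfs destroy rpool/ROOT/newbe@snap   # destroyable once nothing else depends on it
```

After `zfs promote`, the snapshot is renamed under the promoted clone, so the destroy must use the new name.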
--
Darren
Just a reminder...
Hope to see you there.
-Jennifer
-
To: Developers and Students
You are invited to participate in the first OpenSolaris Security Summit
OpenSolaris Security Summit
Tuesday, November 3rd, 2009
Baltimore Marriott Waterfront
700