Hello,
I am new to this list, but I have a big problem:
We have a Sun Fire V440 with a SCSI RAID system connected. I can see all the
devices and partitions.
After a failure in the UPS system, the zpool is not accessible anymore.
The zpool is a normal stripe over 4 partitions.
Your pool is on a device that requires a 16-byte CDB to address the entire LUN.
That is, the LUN is more than 2 TB in size. However, the host bus adapter driver
that is being used does not support 16-byte CDBs.
Quite how you got into this situation, i.e. how you could create the volume, I don't
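A hedged sketch of how one might confirm the LUN really is past the 2 TB limit that 10-byte CDBs can address; the device selection is up to you, nothing here is from the original post:

  $ iostat -En          # lists each device with vendor, product and a "Size:" field
  $ format -e           # expert mode; select the RAID LUN and check the reported capacity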
Daniel Carosone d...@geek.com.au writes:
you can fetch the cr_txg (cr for creation) for a
snapshot using zdb,
yes, but this is hardly an appropriate interface.
agreed.
zdb is also likely to cause disk activity because it looks at many
things other than the specific item in question.
I'd
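For reference, a hedged sketch of the zdb approach being discussed, assuming the verbose dataset dump prints a cr_txg field (output format varies by build); the pool/snapshot name is a placeholder, and as noted above this will generate disk activity:

  $ zdb -dddd tank/home@snap-20091125 | grep -i cr_txg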
Richard,
First, thank you for the detailed reply ... (comments in line below)
On Tue, Nov 24, 2009 at 6:31 PM, Richard Elling
richard.ell...@gmail.com wrote:
more below...
On Nov 24, 2009, at 9:29 AM, Paul Kraus wrote:
On Tue, Nov 24, 2009 at 11:03 AM, Richard Elling
On Nov 24, 2009, at 3:41 PM, dick hoogendijk wrote:
I have a solution using zfs set sharenfs=rw,nosuid zpool, but I prefer to
use the sharemgr command.
Then you prefer wrong. ZFS filesystems are not shared this way.
Read up on ZFS and NFS.
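For completeness, a minimal example of the ZFS-native approach being discussed; the dataset name is a placeholder:

  $ zfs set sharenfs=rw,nosuid tank/export/home
  $ zfs get sharenfs tank/export/home    # confirm the share options took effect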
It can also be done with sharemgr. Sharing via
dick hoogendijk wrote:
glidic anthony wrote:
I have a solution using zfs set sharenfs=rw,nosuid zpool, but I prefer to
use the sharemgr command.
Then you prefer wrong.
To each their own.
ZFS filesystems are not shared this way.
They can be. I do it all the time. There's nothing
I posted baseline stats at http://www.ilk.org/~ppk/Geek/
baseline test was 1 thread, 3 GiB file, 64 KiB to 512 KiB record size
480-3511-baseline.xls is an iozone output file
iostat-baseline.txt is the iostat output for the device in use (annotated)
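For anyone trying to reproduce this, a hedged reconstruction of the kind of iozone run described (not the poster's exact command line): one thread, a 3 GiB file, record sizes from 64 KiB to 512 KiB, with an Excel-compatible output file; the test-file path is a placeholder:

  $ iozone -a -n 3g -g 3g -y 64k -q 512k -b 480-3511-baseline.xls -f /pool/iozone.tmp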
I also noted an odd behavior yesterday and
When will SXCE 129 be released since 128 was passed over? There used to
be a release calendar on opensolaris.org but I can't find it anymore.
Jeff Bonwick wrote:
And, for the record, this is my fault. There is an aspect of endianness
that I simply hadn't thought of. When I have a little
On Nov 24, 2009, at 2:51 PM, Daniel Carosone wrote:
Those are great, but they're about testing the zfs software.
There's a small amount of overlap, in that these injections include
trying to simulate the hoped-for system response (e.g. EIO) to
various physical scenarios, so it's worth
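As a concrete (and hedged) illustration of that kind of injection, something along these lines; pool and vdev names are placeholders and zinject behaviour varies by build:

  $ zinject -d c1t2d0 -e io tank    # return EIO for I/O to one vdev of pool "tank"
  $ zinject                         # list the active injection handlers
  $ zinject -c all                  # clear them when finished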
On Wed, 2009-11-25 at 10:00 -0500, Kyle McDonald wrote:
To each their own.
[cut the rest of your reply]
In general: I stand corrected. I was rude.
If you are using (3) 3511s, then won't it be possible that your 3 GB workload
will be largely or entirely served out of RAID controller cache?
Also, I had a question about your production backups (millions of small files):
do you have atime=off set for the filesystems? That might be helpful.
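A minimal example of that suggestion, with a placeholder dataset name:

  $ zfs set atime=off tank/backups
  $ zfs get atime tank/backups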
Maybe 11/30/2009?
According to
http://hub.opensolaris.org/bin/view/Community+Group+on/schedule we have
onnv_129 11/23/2009 11/30/2009
But as far as I know, those release dates are on a best-effort basis.
Bruno
Karl Rossing wrote:
When will SXCE 129 be released since 128 was passed over?
Jim Sez:
Like many others, I've come close to making a home NAS server based on
ZFS and OpenSolaris. This is not an enterprise solution with high IOPS
expectations, but rather a low-power system for storing everything I have;
I plan on cramming in some 6-10 5400 RPM Green drives with
Hello!
I'm currently using an X2200 with an LSI HBA connected to a Supermicro
JBOD chassis; however, I want to have more redundancy in the JBOD.
So I have looked into the market, and into the wallet, and I think
that the Sun J4400 suits my goals nicely. However, I have some
concerns and if
On Wed, Nov 25, 2009 at 7:54 AM, Paul Kraus pk1...@gmail.com wrote:
You're peaking at 658 256KB random IOPS for the 3511, or ~66
IOPS per drive. Since ZFS will max out at 128KB per I/O, the disks
see something more than 66 IOPS each. The IOPS data from
iostat would be a better metric to
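Rough arithmetic behind that statement (the drive count of 10 is an assumption implied by 658/66, not something stated in the post):

  $ echo "scale=1; 658 / 10" | bc        # ~65.8 application IOPS per drive
  $ echo "scale=1; 658 * 2 / 10" | bc    # ~131.6 disk IOPS per drive, if each
                                         # 256 KiB I/O splits into two 128 KiB I/Os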
more below...
On Nov 25, 2009, at 5:54 AM, Paul Kraus wrote:
Richard,
First, thank you for the detailed reply ... (comments in line
below)
On Tue, Nov 24, 2009 at 6:31 PM, Richard Elling
richard.ell...@gmail.com wrote:
more below...
On Nov 24, 2009, at 9:29 AM, Paul Kraus wrote:
more below...
On Nov 25, 2009, at 7:10 AM, Paul Kraus wrote:
I posted baseline stats at http://www.ilk.org/~ppk/Geek/
baseline test was 1 thread, 3 GiB file, 64 KiB to 512 KiB record size
480-3511-baseline.xls is an iozone output file
iostat-baseline.txt is the iostat output for the device
I am trying to understand the ARC's behavior based on different
permutations of (a)sync reads and (a)sync writes.
Thank you in advance.
o does the data for a *sync-write* *ever* go into the ARC?
e.g., my understanding is that the data goes to the ZIL (and
the SLOG, if present), but how does
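One hedged way to watch the ARC while running those permutations (kstat statistic names can vary between builds):

  $ kstat -p zfs:0:arcstats:size
  $ kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses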
Is there still any interest in this? I've done a bit of hacking (then
searched for this thread - I picked -P instead of -c)...
$ zfs get -P compression,dedup /var
NAME                PROPERTY     VALUE  SOURCE
rpool/ROOT/zfstest  compression  on     inherited from rpool/ROOT
On 2009-Nov-24 14:07:06 -0600, Mike Gerdts mger...@gmail.com wrote:
On Tue, Nov 24, 2009 at 1:39 PM, Richard Elling
richard.ell...@gmail.com wrote:
Also, the performance of /dev/*random is not very good. So prestaging
lots of random data will be particularly challenging.
This depends on the
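A rough illustration of the prestaging cost being discussed; the output path is a placeholder, and /dev/urandom throughput is what limits how fast this runs:

  $ time dd if=/dev/urandom of=/tank/scratch/random.dat bs=1024k count=3072   # ~3 GiB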
On Nov 25, 2009, at 11:55 AM, andrew.r...@sun.com wrote:
I am trying to understand the ARC's behavior based on different
permutations of (a)sync reads and (a)sync writes.
Thank you in advance.
o does the data for a *sync-write* *ever* go into the ARC?
always
eg, my understanding is that
[verify on real hardware and share results]
Agree 110%.
Good :)
Yanking disk controller and/or power cables is an
easy and obvious test.
The problem is that yanking a disk tests the failure
mode of yanking a disk.
Yes, but the point is that it's a cheap and easy test, so you might as
On Nov 25, 2009, at 4:43 PM, Daniel Carosone wrote:
[verify on real hardware and share results]
Agree 110%.
Good :)
Yanking disk controller and/or power cables is an
easy and obvious test.
The problem is that yanking a disk tests the failure
mode of yanking a disk.
Yes, but the point
On Wed, Nov 25 at 16:43, Daniel Carosone wrote:
The problem is that yanking a disk tests the failure
mode of yanking a disk.
Yes, but the point is that it's a cheap and easy test, so you might
as well do it -- just beware of what it does, and most importantly
does not, tell you. It's a valid
So we also need a txg dirty or similar
property to be exposed from the kernel.
Or not...
if you find this condition, defer, but check again in a minute (really, after a
full txg_interval has passed) rather than on the next scheduled snapshot.
On that next check, if the txg has advanced again,
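Something like the following sketch of that deferral logic; get_pool_txg is a hypothetical helper standing in for whatever interface ends up exposing the current txg, and 30 seconds stands in for one txg interval:

  last_txg=$(get_pool_txg tank)      # txg recorded at the previous snapshot
  sleep 30                           # wait roughly one full txg interval
  if [ "$(get_pool_txg tank)" -gt "$last_txg" ]; then
      zfs snapshot tank/data@auto-$(date +%Y%m%d-%H%M%S)   # pool saw new writes
  else
      echo "pool idle since last snapshot: deferring"
  fi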
Speaking practically, do you evaluate your chipset
and disks for hotplug support before you buy?
Yes, if someone else has shared their test results previously.
et == Erik Trimble erik.trim...@sun.com writes:
et I'd still get the 7310 hardware.
et Worst case scenario is that you can blow away the AmberRoad
okay but, AIUI he was saying pricing is 6% more for half as much
physical disk. This is also why it ``uses less energy'' while
supposedly
Miles Nordin wrote:
et == Erik Trimble erik.trim...@sun.com writes:
et I'd still get the 7310 hardware.
et Worst case scenario is that you can blow away the AmberRoad
okay but, AIUI he was saying pricing is 6% more for half as much
physical disk. This is also why it
Interesting. Unfortunately, I cannot zpool offline, nor zpool
detach, nor zpool remove the existing c6t4d0s0 device.
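Spelled out, those attempts look like the following; the pool name "mypool" is a placeholder since the post does not name it:

  $ zpool offline mypool c6t4d0s0
  $ zpool detach mypool c6t4d0s0
  $ zpool remove mypool c6t4d0s0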
I thought perhaps we could boot something newer than b125 [*1] and I would be
able to remove the slog device that is too big.
The dev-127.iso does not boot [*2] due to