Erik Trimble wrote:
On a related note - does anyone know of a good Solaris-supported 4+ port
SATA card for PCI-Express? Preferably 1x or 4x slots...
From what I can tell, all the vendors are only making SAS controllers for
PCIe with more than 4 ports. Since SAS supports SATA, I guess they don't see
much point in doing SATA-only controllers.
that is my thread and I'm still having issues even after applying that patch.
It just came up again this week.
[localhost] uname -a
Linux dv-121-25.centtech.com 2.6.18-53.1.14.el5 #1 SMP Wed Mar 5 11:37:38 EST
2008 x86_64 x86_64 x86_64 GNU/Linux
[localhost] cat /etc/issue
CentOS release 5
kevin kramer wrote:
that is my thread and I'm still having issues even after applying that patch.
It just came up again this week.
[localhost] uname -a
Linux dv-121-25.centtech.com 2.6.18-53.1.14.el5 #1 SMP Wed Mar 5 11:37:38 EST
2008 x86_64 x86_64 x86_64 GNU/Linux
[localhost] cat /etc/issue
Did you try mounting with nfs version 3?
mount -o vers=3
On May 28, 2008, at 10:38 AM, kevin kramer wrote:
that is my thread and I'm still having issues even after applying
that patch. It just came up again this week.
[localhost] uname -a
Linux dv-121-25.centtech.com 2.6.18-53.1.14.el5 #1
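For the CentOS 5 client shown above, the suggestion spelled out would be
something like this (server name and paths are hypothetical):

  mount -t nfs -o vers=3 nfsserver:/export/data /mnt/data

or, persistently, a line in /etc/fstab:

  nfsserver:/export/data  /mnt/data  nfs  vers=3  0 0

Pinning the mount to version 3 rules out NFSv4-specific behavior on the
2.6.18 (CentOS 5) client.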
Hello, I'm having the same exact situation on one VM, and not on another VM on
the same infrastructure.
The only difference is that on the failing VM I initially created the pool with
a name and then changed the mountpoint to another name.
Did you find a solution to the issue?
Should I consider
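For reference, the kind of rename described above is normally just a property
change; a minimal sketch, with hypothetical pool and path names:

  zpool create data c0t1d0                 # pool mounts at /data by default
  zfs set mountpoint=/export/data data     # move it elsewhere
  zfs get mountpoint data                  # verify the active setting

On its own that change should be harmless, so it may be worth comparing
zfs get -s local all on the two VMs for other differences.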
At home I have an old Ultra-60 attached to a SCSI shoebox with 6x18GB
disks. I created the zpool as raidz with one hot spare. Recently, one of the
non-hot-spare disks failed and now zpool commands hang. Also, I/O to the pool
just hangs for periods of time. I'm using release 0807
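If the pool will still answer status queries, the usual recovery path is a
replace; a minimal sketch, with hypothetical pool and device names (tank,
c1t2d0 for the failed disk, c1t6d0 for the spare):

  zpool status -x                    # identify the faulted device
  zpool replace tank c1t2d0 c1t6d0   # resilver onto the spare (or a new disk)
  zpool status tank                  # watch resilver progress

When commands hang outright, the pool is often stuck waiting on the dead
drive itself; pulling the disk or taking it offline (zpool offline tank
c1t2d0) sometimes lets the pool respond again.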
Hi guys, I wrote my first post about ZFS (
http://silveiraneto.net/2008/05/28/trying-to-corrupt-data-in-a-zfs-mirror/)
showing how to create a pool with a mirror and so trying to corrupt the
data. I used the Self Healing with ZFS demo
(http://opensolaris.org/os/community/zfs/demos/selfheal/) as a base.
I
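For anyone who wants to repeat that experiment, a minimal sketch along the
lines of the self-heal demo, using file-backed vdevs (all names and sizes are
made up):

  mkfile 128m /tmp/d1 /tmp/d2
  zpool create demo mirror /tmp/d1 /tmp/d2
  cp -r /usr/share/man/man1 /demo         # put some data in the pool
  dd if=/dev/urandom of=/tmp/d1 bs=512 seek=2048 count=2048 conv=notrunc
                                          # scribble on one side, past the front vdev labels
  zpool scrub demo                        # checksums catch the damage
  zpool status -v demo                    # repaired from the intact mirror
  zpool destroy demo && rm /tmp/d1 /tmp/d2

The scrub should report checksum errors on /tmp/d1 while the data stays
readable, which is the self-healing behavior the post demonstrates.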
On May 28, 2008, at 05:11, James Andrewartha wrote:
From what I can tell, all the vendors are only making SAS controllers for
PCIe with more than 4 ports. Since SAS supports SATA, I guess they don't see
much point in doing SATA-only controllers.
For example, the LSI SAS3081E-R is $260
J. Les Bemont wrote:
At home I have an old Ultra-60 attached to a SCSI shoebox with 6x18GB
disks. I created the zpool as raidz with one hot spare. Recently, one of
the non-hot-spare disks failed and now zpool commands hang. Also, I/O to the
pool just hangs for periods of time.
On Wed, May 28, 2008 at 11:20 AM, Richard Elling [EMAIL PROTECTED]
wrote:
J. Les Bemont wrote:
At home I have an old Ultra-60 attached to a SCSI shoebox with
6x18GB disks. I created the zpool as raidz with one hot spare. Recently,
one of the non-hot-spare disks failed and now zpool
Bill McGonigle wrote:
On May 28, 2008, at 05:11, James Andrewartha wrote:
From what I can tell, all the vendors are only making SAS controllers for
PCIe with more than 4 ports. Since SAS supports SATA, I guess they don't see
much point in doing SATA-only controllers.
For example,
On May 28, 2008, at 10:27 AM, Richard Elling wrote:
Since the mechanics are the same, the difference is in the electronics
In my very distant past, I did QA work for an electronic component
manufacturer. Even parts which were identical were expected to
behave quite differently ...
Tim wrote:
On Wed, May 28, 2008 at 11:20 AM, Richard Elling
[EMAIL PROTECTED] wrote:
J. Les Bemont wrote:
At home I have an old Ultra-60 attached to a SCSI
shoebox with 6x18GB disks. I created the zpool as raidz with one
hot spare.
I strongly agree with most of the comments. I guess I tried to keep it simple,
perhaps a little bit too simple.
If I am not mistaken, most NAND disks virtualize the underlying cells, so even
if you update the same sector, the update will be made somewhere else.
So the time to corrupt an
By the way, all enterprise SSDs have an internal DRAM-based cache. Some
vendors may quote the write performance of that internal RAM device.
Normally, NAND drives will not perform very well under write-heavy loads, due
to read-after-write operations and several other reasons.
Mertol Ozyoney
On Wed, 28 May 2008, Mertol Ozyoney wrote:
Think that you have a 146 GB SSD and the write cycle is around 100k,
and you can write/update data at 10 MB/sec (depending on the I/O pattern it
could be a lot slower or a lot faster). It will take about 4 hours, or
14,600 secs, to fully populate the drive.
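Carrying that arithmetic one step further (shell arithmetic, same assumed
figures; the endurance conclusion is inferred, since the post is cut off here):

  echo $((146000 / 10))                    # one full overwrite: 14,600 s, about 4 hours
  echo $((14600 * 100000 / 86400 / 365))   # ~100k cycles on every cell: roughly 46 years

So at a sustained 10 MB/sec, wearing out the rated write cycles would take
decades of continuous writing.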
On Wed, 2008-05-28 at 10:34 -0600, Keith Bierman wrote:
On May 28, 2008, at 10:27 AM, Richard Elling wrote:
Since the mechanics are the same, the difference is in the electronics
In my very distant past, I did QA work for an electronic component
manufacturer. Even parts which
Hello, I am fairly new to Solaris and ZFS. I am testing both out in a sandbox
at work. I am playing with virtual machines running on a Windows front-end that
connects to a ZFS back-end for its data needs. As far as I know my two options
are sharesmb and shareiscsi for data sharing. I have a
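For reference, both options are ZFS properties; a minimal sketch, with
hypothetical pool and dataset names:

  zfs create tank/vmfiles
  zfs set sharesmb=on tank/vmfiles      # share the filesystem over SMB/CIFS
  zfs create -V 50g tank/vmvol          # create a zvol (block device)
  zfs set shareiscsi=on tank/vmvol      # export the zvol as an iSCSI target

The iSCSI/zvol route hands the Windows host a raw block device to format,
while sharesmb presents a file share.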
On Wed, May 28, 2008 at 9:27 AM, Richard Elling [EMAIL PROTECTED] wrote:
There are BigDriveCos which sell enterprise-class SATA drives.
Since the mechanics are the same, the difference is in the electronics
and software. Vote with your pocketbook for the enterprise-class
products.
CMU released a study comparing the MTBF of enterprise-class drives with
consumer drives, and found no real differences.
On Wed, 28 May 2008, Brandon High wrote:
CMU released a study comparing the MTBF of enterprise-class drives with
consumer drives, and found no real differences.
That should really not be a surprise. Chips are chips and, in the
economies of scale, as few chips will be used as possible. The
http://blogs.sun.com/relling/entry/adaptec_webinar_on_disks_and
-- richard
Bob Friesenhahn wrote:
On Wed, 28 May 2008, Brandon High wrote:
CMU released a study comparing the MTBF of enterprise-class drives with
consumer drives, and found no real differences.
That should really not be
Is there a way to create a ZFS file system
(e.g. zpool create boot /dev/dsk/c0t0d0s1)
Then, (after vacating the old boot disk) add another
device and make the zpool a mirror?
(as in: zpool create boot mirror /dev/dsk/c0t0d0s1 /dev/dsk/c1t0d0s1)
Thanks!
emike
E. Mike Durbin wrote:
Is there a way to create a ZFS file system
(e.g. zpool create boot /dev/dsk/c0t0d0s1)
Then, (after vacating the old boot disk) add another
device and make the zpool a mirror?
zpool attach
-- richard
(as in: zpool create boot mirror /dev/dsk/c0t0d0s1
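Spelled out, Richard's one-liner looks like this (device names taken from the
question):

  zpool create boot c0t0d0s1             # start with a single-device pool
  # ... later, after vacating the old boot disk:
  zpool attach boot c0t0d0s1 c1t0d0s1    # pool becomes a two-way mirror
  zpool status boot                      # resilver runs automatically

Note it is zpool attach, not zpool add: attach mirrors the new device against
the existing one, while add would stripe it into the pool.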
Greetings all.
I am facing serious problems running ZFS on a storage server assembled out of
commodity hardware that is supposed to be Solaris-compatible.
Although I am quite familiar with Linux distros and other Unix variants, I am
new to Solaris, so any suggestions are highly appreciated.
First I
E. Mike Durbin wrote:
Is there a way to create a ZFS file system
(e.g. zpool create boot /dev/dsk/c0t0d0s1)
Then, (after vacating the old boot disk) add another
device and make the zpool a mirror?
zpool attach
-- richard
(as in: zpool create boot mirror