I am converting a 4-disk raidz from OS X to OpenSolaris, and I want to keep the
data intact. I want ZFS to get access to the full disk instead of a slice,
i.e. c8d0 instead of c8d0s1. I wanted to do this one disk at a time and let
each one resilver. What is the proper way to do this?
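A minimal sketch of the one-disk-at-a-time approach, assuming a pool named tank and a spare whole disk c9d0 (both names hypothetical). As a reply in this thread notes, replacing a slice with the whole *same* disk fails because that disk is still in use by the pool, so a spare disk is needed:

```shell
# Replace one slice-based member with the spare, given as a whole disk:
zpool replace tank c8d0s1 c9d0

# Wait for the resilver to finish before touching the next member:
zpool status tank

# Repeat for each remaining slice, rotating the freed disk in as the
# next spare.
```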
I tried, I
By the way,
there are more than fifty bugs logged for marvell88sx, many of them about
problems with DMA handling and/or driver behaviour under stress.
Can it be that I'm stumbling upon something along these lines?
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6826483
Maurilio.
Dirk,
I'm not sure I'm following you exactly, but this is what I think you are
trying to do: you have a RAIDZ pool that is built with slices and you are
trying to convert the slice configuration to whole disks. This isn't possible
because you are trying to replace the same disk with itself. This is what
Yes, that was what I was doing.
I wanted to give the raidz whole disks because GRUB didn't want to
install.
(I forgot which command I used; bootadm?)
I have another slice free on a disk shared with OS X and Win7, but I am
having problems with GRUB.
I will try that again and document it.
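For reference, on OpenSolaris the GRUB boot blocks are installed with installgrub(1M) rather than bootadm. A sketch, assuming the root slice is c8d0s0 (hypothetical; substitute the actual raw device):

```shell
# Write GRUB stage1/stage2 onto the raw boot slice:
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c8d0s0
```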
I am running zfs send/receive on a ~1.2TB zfs spread across 10x200GB LUNs.
It has copied only 650GB in ~42 hours. Source pool and destination pool are on
the same storage subsystem. Last time it ran, it took ~20 hours.
Something is terribly wrong here. What do I need to look at to figure out the
reason?
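Some places to start looking, as a sketch; the pool names srcpool and dstpool are hypothetical placeholders:

```shell
# Watch per-device throughput on both pools; a slow or erroring disk
# will stand out as one vdev lagging far behind its peers:
zpool iostat -v srcpool dstpool 5

# Check for devices reporting errors or a degraded state:
zpool status -x

# Extended device statistics with error counters (transport/soft/hard):
iostat -xne 5
```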
You might try setting zfs_scrub_limit to 1 or 2 and attaching a customer
service record to:
6494473 ZFS needs a way to slow down resilvering
-r
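A sketch of how such a kernel tunable can be set, either persistently in /etc/system or on the live kernel with mdb(1); the value 1 follows the suggestion above:

```shell
# Persistently, add to /etc/system (takes effect after reboot):
#   set zfs:zfs_scrub_limit = 1

# Or patch the running kernel with mdb (0t1 = decimal 1):
echo "zfs_scrub_limit/W0t1" | mdb -kw

# Verify the current value:
echo "zfs_scrub_limit/D" | mdb -k
```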
Le 7 oct. 09 à 06:14, John a écrit :
Hi,
We are running b118, with an LSI 3801 controller which is connected
to 44 drives (yes, it's a
Sorry if this is a noob question but I can't seem to find this info anywhere.
Are hot spares generally spun down until they are needed?
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
To recap for those who don't recall my plaintive cries for help, I lost a pool
due to the following sequence of events:
- One drive in my raidz array becomes flaky, has frequent stuck I/Os due to
drive error recovery, trashing performance
- I take flaky drive offline (zpool offline...)
- I
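The offline step described above, expressed as commands; the pool name tank and device c2t3d0 are hypothetical placeholders:

```shell
# Take the flaky drive out of service so its error-recovery retries
# stop stalling pool I/O:
zpool offline tank c2t3d0

# Later, after replacing or testing the drive, bring it back and let
# it resilver:
zpool online tank c2t3d0
zpool status tank
```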
Hi there,
On Oct 8, 2009, at 9:46 PM, bjbm wrote:
Sorry if this is a noob question but I can't seem to find this info
anywhere.
Are hot spares generally spun down until they are needed?
No, but have a look at power.conf(4) and the device-thresholds keyword
to spin down disks.
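A sketch of such an entry in /etc/power.conf; the device path below is hypothetical (find the real one with prtconf -v), and the 30m threshold is just an example:

```shell
# /etc/power.conf: spin this disk down after 30 minutes of idle time
device-thresholds  /pci@0,0/pci1022,7458@2/disk@4,0  30m
```

After editing the file, run pmconfig to make power management re-read it.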
Here
In an attempt to recycle some old PATA disks, we bought some
really cheap PATA/SATA adapters, some of which actually work
to the point where it is possible to boot from a ZFS installation
(e.g., c1t2d0s0). Not all PATA disks work, just Seagates, it would
seem, but not Maxtors. I wonder why?
Frank Middleton wrote:
In an attempt to recycle some old PATA disks, we bought some
really cheap PATA/SATA adapters, some of which actually work
to the point where it is possible to boot from a ZFS installation
(e.g., c1t2d0s0). Not all PATA disks work, just Seagates, it would
seem, but not