Hi Tim,
The p* devices represent the larger Solaris fdisk container, so one
possible scenario is that someone could create a pool on the p0 device,
which points to the same blocks as the other partitions inside that
container, while one of those partitions is also included in a pool.
This would be bad. I think it's a bug that you can create a pool on a p*
device, because we are not sure that all operations are supported on p*
devices. We don't test pool operations on p* devices.
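For example, something like this would end up with two pools sharing
the same underlying blocks (the device names are only for illustration):

    # c6d1p0 is the whole fdisk container; c6d1p1 is a partition inside
    # it, so both pools would write to overlapping blocks. ZFS may not
    # detect the overlap (or may only warn and require -f).
    zpool create poolA c6d1p0
    zpool create poolB c6d1p1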
Cindy
On 12/18/09 14:09, Tim wrote:
Hi Cindy,
I had similar concerns; however, I wasn't aware of that bug. Before I
bought this controller I had read a number of people saying that they
had problems with the sil3114, and other people saying they didn't. I
was originally after a sil3124 (SATA II), but given that my future
drives didn't need the extra speed I settled on the cheaper sil3114;
the sil3124 was five times the cost. A friend was running a sil3112
(2-port SATA I card) and that appeared to be fine, so I bought the
sil3114.
As far as I've seen, the card is fine. I have already connected and
created a pool on the 1.5TB disk attached to the sil3114 using snv111.
From memory, I even booted back to snv101 and it still recognised it.
It was after I started copying files to this new pool that I found my
read problem on the main 'storage' pool. That was my plan for a backup
device: just a single drive to start, in its own pool off the sil3114,
which I could add to as needed, hence the sil3114 & 1.5TB disk.
I'm fairly sure, though, that when I created the new pool 'backup1' I
used the device c6d1p0, not p1. I'll try c6d1p1 today to make sure that
is OK. Is there a problem using c6d1p0?
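(For what it's worth, I can double-check which device the pool ended up
on; 'backup1' is just my pool name from above:)

    # List the devices backing the pool; the output shows whether it
    # sits on c6d1p0 or c6d1p1.
    zpool status backup1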
Do you or anyone else know, when a disk is replaced via:

zpool replace pool_name old_disk new_disk

when does the old_disk actually get removed from the pool? Is it before
the new_disk starts its resilver, or after the new_disk has been
resilvered?
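I suppose I can watch the replace while it runs to see how the two
disks are listed (pool and disk names as above):

    # During the replace, status reports the resilver progress; my guess
    # is the old disk only disappears once the resilver completes.
    zpool status -v pool_name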
Once I get the drives swapped in the array, I was going to reformat the
suspect 750GB and give it another workout to see if the read slowdown
persists, before sending it back to Samsung. The drive is 14 months old,
but it's had probably 2 weeks of total use.
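A simple sequential read with dd is probably enough for the workout
(the device name is just a guess at what the 750GB will show up as once
it's out of the pool):

    # Read 4GB from the raw device and time it; a steady rate is good,
    # while the problem should show up as poor overall throughput.
    time dd if=/dev/rdsk/c5d0p0 of=/dev/null bs=1024k count=4096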
Cindy, thanks for the reply, I really appreciate it.
Tim
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss