Cindy Swearingen wrote:
> Chris,
> 
> I agree that your best bet is to replace the 128MB device with
> another device, fix the emcpower2a manually, and then replace it
> back. I don't know these drives at all, so I'm unclear about the
> "fix it manually" step.
> 
> Because your pool isn't redundant, you can't use zpool offline
> or detach.
> 
> I'm curious if the capacity of this pool is 128MB x 3? If so,
> then I think you could replace the emcpower2a with a 128MB file.

It should be 125G+125G+128M. I think this is a good idea; just create the 
file somewhere outside of your pool.
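
One quick sanity check before going that route: make sure the filesystem
that will hold the file is not itself on the pool being repaired. Something
like this (mypool is just the example pool name used elsewhere in this
thread):

# df -k /files                 <- check which filesystem backs /files
# zfs list -r mypool           <- it should not be one of these datasets

If /files turns out to live on the pool, pick another location.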

Hth,
Victor

> Then, replace it back. Like this:
> 
> 0. Back up your data.
> 
> 1. Create the file.
> # mkdir /files
> # mkfile 128m /files/file1
> 
> 2. Replace the device with the file:
> 
> # zpool replace mypool emcpower2a /files/file1
> 
> 3. Fix the emcpower2a drive.
> 
> 4. Replace the file with the device
> 
> # zpool replace mypool /files/file1 emcpower2a
> 
> I have no experience with these drives, but in theory, this should work.
> I'm also wondering if you should make the 128MB file slightly larger to
> account for any differences in size between a UFS file and the emcpower
> drive.
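
FWIW, a slightly bigger file should be fine -- a replacement vdev only has
to be at least as large as the device it replaces -- so you could build in
some headroom from the start. Roughly (the 256m below is just an arbitrary
cushion, not a magic number):

# mkfile 256m /files/file1                      <- any size >= 128m will do
# zpool replace mypool emcpower2a /files/file1
# zpool status mypool

and then continue with steps 3 and 4 above.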
> 
> Cindy
> 
> Krzys wrote:
> 
>> Yes, I was thinking about this, but I wanted to just remove the whole
>> 128MB disk and then use format to repartition the complete disk to give
>> it full capacity... I have all the disks set up this way, so I wanted to
>> be consistent with it, but it's not letting me remove that disk from the
>> pool at all... 128MB is not much to waste and I am not concerned about
>> it, but as I said, I wanted to be consistent, and that's the reason why
>> I wanted to remove the other disk...
>>
>> Maybe what I can do is replace it with a different device, if I can find
>> one, then repartition the original disk to my needs, and then replace
>> the temporary disk with the newly repartitioned disk... I thought there
>> might be an easier way to do it...
>>
>> Thanks for help.
>>
>> Chris
>>
>>
>> On Tue, 30 Oct 2007, Mark J Musante wrote:
>>
>>  
>>
>>> On Mon, 29 Oct 2007, Krzys wrote:
>>>    
>>>
>>>> Everything is great, but I've made a mistake and I would like to remove
>>>> emcpower2a from my pool, and I cannot do that...
>>>>
>>>> Well, the mistake that I made is that I did not format my device
>>>> correctly, so instead of adding 125GB I added 128MB.
>>>>      
>>>>
>>> You can't remove it directly, but you certainly can *replace* it with a
>>> larger drive.  If this is critical data, then obviously back up first, and
>>> test these steps on alternate storage.
>>>
>>>    
>>>
>>>> Part      Tag    Flag     Cylinders         Size            Blocks
>>>>  0       root    wm       0 -    63      128.00MB    (64/0/0)       262144
>>>>  1       swap    wu      64 -   127      128.00MB    (64/0/0)       262144
>>>>  2     backup    wu       0 - 63997      125.00GB    (63998/0/0) 262135808
>>>>  3 unassigned    wm       0                0         (0/0/0)             0
>>>>  4 unassigned    wm       0                0         (0/0/0)             0
>>>>  5 unassigned    wm       0                0         (0/0/0)             0
>>>>  6        usr    wm     128 - 63997      124.75GB    (63870/0/0) 261611520
>>>>  7 unassigned    wm       0                0         (0/0/0)             0
>>>>      
>>>>
>>> The easiest thing would be to replace s0 with s6.
>>>
>>> You'll be 128MB shy of the full disk, but that's a drop in the bucket.
>>> The command would be:
>>>     zpool replace mypool emcpower2a emcpowerXX
>>> where XX is the name of slice 6.  You should see the new size right away.
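
(Once that replace completes you can double-check the result with:

# zpool list mypool
# zpool status mypool

zpool list should report the larger pool size after the resilver finishes.)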
>>>
>>> Another option would be to use a different drive, formatted to give you
>>> the entire disk, and then do a replace of emcpower2a with emcpower3a.
>>> Then you could repartition emcpower2 properly, and replace emcpower3
>>> with emcpower2.
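
In case it helps, that sequence would look roughly like this -- emcpower3a
just stands in for whatever spare device ends up being used:

# zpool replace mypool emcpower2a emcpower3a    <- move the data to the spare
  (wait for the resilver to finish; watch zpool status)
# format                                        <- relabel emcpower2 with one full-size slice
# zpool replace mypool emcpower3a emcpower2a    <- move the data back

Like the earlier suggestions, this is untested with the EMC devices, so
treat it as a sketch and try it on non-critical storage first.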
>>>
>>>
>>> Regards,
>>> markm
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
