Hi Richard,
I think I'll update all our servers to the same version of ZFS...
That will hopefully make sure that this doesn't happen again :-)
Darren and Richard: thank you very much for your help!
Sascha
--
This message posted from opensolaris.org
On Sep 21, 2009, at 8:59 AM, Sascha wrote:
Hi Darren,
sorry that it took so long before I could answer.
The good thing:
I found out what went wrong.
What I did:
After resizing a disk on the storage, Solaris recognizes it immediately.
Every time you resize a disk, the EVA storage updates the description, which
contains the size. So typing
Hi Darren,
It took me a while to figure out which device is meant by zdb -l...
Original size was 20GB
After resizing on the EVA, format -e showed the new, correct size:
17. c6t6001438002A5435A0001006Dd0
/scsi_vhci/s...@g6001438002a5435a0001006d
Here is the output of zdb -l:
zdb -l /dev/dsk/
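For anyone else puzzled by the same thing: zdb -l wants the slice that actually carries the vdev labels, not the bare disk node. A minimal sketch, using the device name from this thread (substitute your own LUN's c#t#d# name):

```shell
# Device name from this thread -- replace with your own LUN.
DISK=c6t6001438002A5435A0001006Dd0
# For a whole-disk, EFI-labeled pool member the vdev labels live on
# slice 0, so point zdb at s0 rather than the bare disk node:
zdb -l /dev/dsk/${DISK}s0
```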
On Wed, Aug 12, 2009 at 04:53:20AM -0700, Sascha wrote:
> confirmed, it's really an EFI Label. (see below)
>
>format> label
>[0] SMI Label
>[1] EFI Label
>Specify Label type[1]: 0
>Warning: This disk has an EFI label. Changing to SMI label will erase all
>current partitions
Darren, I want to give you a short overview of what I tried:
1.
created a zpool on a LUN
resized the LUN on the EVA
exported the zpool
used format -e and label
tried to enlarge slice 0 -> impossible (see posting above)
2.
Same as 1., but exported the zpool before resizing on the EVA.
Same result
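The two attempts above boil down to the following sequence (a sketch using the pool and device names from this thread; only the point at which the pool is exported differs between attempts 1 and 2):

```shell
# Attempt 1: grow the LUN while the pool is still imported.
zpool create -m /zones/huhctmp huhctmppool c6t6001438002A5435A0001005Ad0
# ... resize the LUN on the EVA ...
zpool export huhctmppool
format -e      # select the disk, relabel, try to grow slice 0 -> fails
# Attempt 2: export the pool first, then resize on the EVA -- same failure.
```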
Hi Darren,
thanks for your quick answer.
> On Tue, Aug 11, 2009 at 09:35:53AM -0700, Sascha
> wrote:
> > Then creating a zpool:
> > zpool create -m /zones/huhctmp huhctmppool c6t6001438002A5435A0001005Ad0
> >
> > zpool list
> > NAME         SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
On Tue, Aug 11, 2009 at 09:35:53AM -0700, Sascha wrote:
> Then creating a zpool:
> [b]zpool create -m /zones/huhctmp huhctmppool c6t6001438002A5435A0001005Ad0[/b]
>
> [b]zpool list[/b]
> NAME         SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
> huhctmppool 59.5G 103K 59.5G 0%
Hi Darren,
I tried exactly the same, but it doesn't seem to work.
First the size of the disk:
[b]echo | format -e | grep -i 05A[/b]
17. c6t6001438002A5435A0001005Ad0
/scsi_vhci/s...@g6001438002a5435a0001005a
Then creating a zpool:
[b]zpool create -m /zones/
On Mon, Aug 03, 2009 at 01:15:49PM -0700, Jan wrote:
> Yes, I have an EFI label on that device.
> This is my procedure to try growing the capacity of the device:
> -> export the zpool
> -> overwrite the existing EFI label with format tool
> -> auto-configure it
> -> import the zpool
>
> What do y
Hi Darren,
thanks for your reply.
> What did you try?
> Since you're larger than 1T, you certainly have an EFI label. What you
> have to do is destroy the existing EFI label, then have format create a
> new one for the larger LUN. Finally, create slice 0 as the size of the
> entire (now larger) disk.
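Darren's recipe, written out as a sketch (it assumes the pool sits on the whole LUN; the pool and device names are the ones used earlier in the thread; this touches only the label, and the data survives as long as the new slice 0 starts at the same sector as before):

```shell
# Relabel procedure described above -- a sketch, not a verbatim session.
zpool export huhctmppool
format -e c6t6001438002A5435A0001005Ad0
#   label     -> write a new EFI label (format picks up the new size)
#   partition -> recreate slice 0 spanning the whole, now larger, disk
zpool import huhctmppool
```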
On Wed, Jul 29, 2009 at 03:51:22AM -0700, Jan wrote:
> Hi all,
> I need to know if it is possible to expand the capacity of a zpool
> without loss of data by growing the LUN (2TB) presented from an HP EVA
> to a Solaris 10 host.
Yes.
> I know that there is a possible way in Solaris Express Community Edition
Hi all,
I need to know if it is possible to expand the capacity of a zpool without loss
of data by growing the LUN (2TB) presented from an HP EVA to a Solaris 10 host.
I know that there is a possible way in Solaris Express Community Edition, b117,
with the autoexpand property. But I still work with Solaris 10.
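For reference, on builds that do have the autoexpand property (the b117 feature mentioned above), the manual relabel dance is unnecessary. A hedged sketch, with "tank" as a placeholder pool name:

```shell
# With the autoexpand pool property (snv_117 and later), ZFS grows the
# pool itself once the underlying LUN grows:
zpool set autoexpand=on tank
# ... grow the LUN on the array ...
# Those builds also offer a one-shot expansion of a single device:
zpool online -e tank c6t6001438002A5435A0001005Ad0
```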