Re: [OpenIndiana-discuss] Partitions for co-exist NAS drive

2021-05-01 Thread Reginald Beardsley via openindiana-discuss
That's what it should do, but the messages in the install program are quite 
confusing.

It will actually let you do anything you want, but it is not easy to sort out 
how. In 34 years of doing admin work I have *never* had to spend more than a 
few hours to get a working system. The failure to reconfigure on the first boot 
took me 50 hours of effort to diagnose, and I only succeeded because I did not 
give up.

I normally take several days sorting out exactly how I want a new system 
configured, but that is rather different from it not booting when it obviously 
should.

Reg


 On Saturday, May 1, 2021, 11:38:04 AM CDT, Michelle 
 wrote:  
 
 I just followed the text install on the 2021.04 GUi live image, and it
created a single Rpool for the whole disk.

I'm not sure where I'm going to go from here. Need to have a sit and
think.

Thank you very much for all your efforts on this. I really do
appreciate it.

Michelle.

On Sat, 2021-05-01 at 16:23 +, Reginald Beardsley via openindiana-
discuss wrote:
>  I just installed the 2021.04.30 text ISO on my Z400 test system. The
> disk had a pool in a single slice which I imported and destroyed. Had
> I not done that it would have named the pool rpool1. I left the disk
> label alone. This was from testing manually creating a single slice
> and installing into that using the F5 option which doesn't create
> dump and swap on the expectation they already exist.
> 
> The installer still has the usual spurious messages about only using
> 2 TB. I selected an EFI label and did the install.
> 
> That created a 256 MB s0 slice beginning in sector 256 with the rest
> of the disk in s1.
> 
> "zfs list" shows the pool has 4.39 TB available which is correct for
> having used the entire disk.
> 
> If you're going to run a 2 disk mirror with the 6 TB drives I'd
> install to a single pool using the text installer. Assuming it works
> properly. When the 8 TB drive returns, copy the data and then create
> a matching label and add the drive to the pool to create a mirror.
> 
> Unfortunately we are a tad shy on current documentation.
> 
> Reg
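
A sketch of the attach step Reg describes above, with the capacity arithmetic it implies. All pool and device names here are my own placeholders, not from the thread:

```shell
# Sketch only -- "rpool", c1t0d0 and c1t1d0 are hypothetical names.
# With the installer's pool on the 6 TB drive (c1t0d0) and the returned
# 8 TB drive appearing as c1t1d0, the attach would be:
#
#   zpool attach rpool c1t0d0 c1t1d0   # second side added, vdev becomes a 2-way mirror
#   zpool status rpool                 # watch the resilver complete
#
# A mirror's capacity is that of its smallest member, so a 6 TB + 8 TB
# pair yields 6 TB usable:
echo "$(( 6 < 8 ? 6 : 8 )) TB usable"   # 6 TB usable
```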
> 
>      On Saturday, May 1, 2021, 10:15:53 AM CDT, Michelle <
> miche...@msknight.com> wrote:  
>  
>  Actually, I was wrong. It was the 2021.04 text install USB that I
> was
> using. I was using the text install because it's just a file server.
> I
> don't need X.
> 
> Well, the situation I find myself in is as follows...
> 
> A pair of 6tb reds, one of which has data on that I can't get off
> until
> one of my 8tb has come back from RMA
> 
> So I thought I'd use the other 6tb to get this sorted, and by the
> time
> the other 6tb is ready to be put in, I would be able to install it
> and
> add it in.
> 
> Going via the GUI installer, I'm wondering whether I'll be able to do
> that.
> 
> This is why I was following your script, but when it came to step 4,
> the solaris partition was only 2tb max. However, I've just noted that
> I
> wasn't using the -e at the end of the format command, so after I've
> made dinner I'll read up on that switch in the hope that this is what
> will enable me to create the larger Solaris partition.
> 
> Michelle.
> 
> 
> On Sat, 2021-05-01 at 14:04 +, Reginald Beardsley via
> openindiana-
> discuss wrote:
> >  Michelle,
> > 
> > What disks are you trying to use? If they are different sizes the
> > recipe gets a bit more complex, but it's quite possible to get any
> > arrangement for which you have a sufficient number of disks. Having
> > spent 50 hours over 5 days battling the 2021.04.05 install I've had
> > *lots* of practice installing both 2021.04.05 and 2020.10.
> > 
> > Incidentally the failure of 2021.04.05 to run the nVIDIA driver is
> > because it installed the wrong driver for that system.
> > 
> > Reg
> > 
> > 
> > 
> > 
> > On Saturday, May 1, 2021, 08:16:07 AM CDT, Michelle <
> > miche...@msknight.com> wrote:
> > 
> > 
> > OK - I give in.
> > 
> > I've tried various combinations and I just can't get this into a
> > configuration where I can get things installed.
> > 
> > I need a step by step guide please, because I'm lost. I don't know
> > what
> > options to choose in the installer to not avoid it wanting to wipe
> > whatever combinations of partitions I've set.
> > 
> > Michelle.
> > 
> > 
> >  
> > ___
> > openindiana-discuss mailing list
> > openindiana-discuss@openindiana.org
> > https://openindiana.org/mailman/listinfo/openindiana-discuss
> 


Re: [OpenIndiana-discuss] Partitions for co-exist NAS drive

2021-05-01 Thread Reginald Beardsley via openindiana-discuss
 
The text-install disk doesn't install X. I think the two text installers may not 
be the same, but I have not verified that. They *should* be.

You can do all the stuff with format(1m) in single-user mode. I use the GUI 
installer because I want the GUI and it is more convenient to have the MATE 
terminals.

I have raised an issue about the spurious 2 TB limit message. It is just that: a 
spurious message. It caused me a *lot* of confusion.

Reg

 On Saturday, May 1, 2021, 09:00:05 AM CDT, Michelle 
 wrote:  
 
 Thanks. I'll give that a go.

I'm using the text installer 2020.10 and on that, I can't create a
Solaris partition greater than 2tb.

I've got the GUI live image here somewhere.

Michelle.


On Sat, 2021-05-01 at 13:50 +, Reginald Beardsley via openindiana-
discuss wrote:
>  What release are you trying to install? I last did this using
> 2021.04.05, but I can easily verify the behavior with 2020.10.
> 
> I *strongly* recommend that you put all the disks you plan to use in
> the machine and run the text-installer from the Live Desktop. Select
> F2 on the first screen. When you get to the disk list, select all the
> disks. Make sure you don't have a USB stick plugged in. I wiped out
> mine by accident. Selecting multiple disks triggers the appearance of
> options to create a mirror, RAIDZ1 or RAIDZ2 depending on whether you
> have 2, 3 or 4 disks. Just make sure you select an EFI label, not MBR
> 
> Let it wipe out the partitions and create the 250 MB slice and the
> rest of disk slice. That *is* the best practice today. The mirrored
> root pool slice is a bodge for releases which can't boot from RAIDZ.
> 
> If for some reason you want to do something different this is the
> step by step recipe. It will give you exactly whatever you set up
> with format(1m). Because the F5 option is expecting to be used on an
> existing bootable pool it doesn't create dump and swap, so you have
> to do that by hand. The absence of /reconfigure was what beat me up
> for a week before I figured it out quite by accident.
> 
> All this is done in a terminal window after doing "sudo /bin/su" as
> jack.
> 
> > format -e
> # select disk
> > fdisk
> # create Solaris partition for entire drive and commit to disk
> > partition
> # create the desired slices 
> > label
> # write an EFI label
> > quit
> > verify
> > quit
> 
> You should now have a Sun EFI label with 9 slices. 8 is set by
> format(1m) and can't be changed. The other two should be the ones you
> created. You will need to do this for each disk and then check using
> prtvtoc(1m) that they all match. The disks don't need to match, but
> the slice sizes should.
> 
> I should note that format(1m) will not allow deleting a slice by
> setting the start and end to 0 easily. I have managed to do it, but
> I'm not sure what the correct incantation is. It gives "0 is out of
> range" messages most of the time.
> 
> In the first text installer screen chose F5, in the 2nd screen select
> the slice you want to use. Continue with the install. When It
> completes reboot to single user.
> 
> zfs create -V <size> rpool/dump
> zfs create -V <size> rpool/swap
> dumpadm -d /dev/zvol/dsk/rpool/dump
> swap -a /dev/zvol/dsk/rpool/swap
> touch /reconfigure
> init 6
> 
> Reg
> 
> On Saturday, May 1, 2021, 08:16:07 AM CDT, Michelle <
> miche...@msknight.com> wrote:  
>  
>  OK - I give in.
> 
> I've tried various combinations and I just can't get this into a
> configuration where I can get things installed.
> 
> I need a step by step guide please, because I'm lost. I don't know
> what
> options to choose in the installer to not avoid it wanting to wipe
> whatever combinations of partitions I've set.
> 
> Michelle.
> 
> 
> On Sat, 2021-05-01 at 15:42 +0300, Toomas Soome via openindiana-
> discuss 
> wrote:
> > > On 1. May 2021, at 15:30, Michelle  wrote:
> > > 
> > > That's where I'm becoming unstuck.
> > > 
> > > A Solaris 2 partition will only see the first 2Tb.
> > > 
> > > There hangs my first problem.
> > > 
> > > If I try and create any other partition it gives me the warning
> > > about
> > > the 2TB limit and if I then try and create the EFI partition, it
> > > won't
> > > co-exist with anything and wants to wipe the whole disk again.
> > > 
> > > Michelle.
> > 
> > MBR and GPT can not co-exist. On this disk, you need GPT and this
> > means, MBR partitions will be removed.
> > 
> > Rgds,
> > Toomas
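
Toomas's point is worth pinning down with arithmetic: an MBR partition entry stores its start and size as 32-bit sector counts, so with 512-byte sectors nothing larger than 2 TiB can be described. A quick sanity check (my own illustration, not from the thread):

```shell
# MBR partition entries hold 32-bit LBA fields; with 512-byte sectors
# the largest region they can describe is 2^32 * 512 bytes.
max_bytes=$(( 4294967296 * 512 ))
echo "$max_bytes bytes"                       # 2199023255552 bytes
echo "$(( max_bytes / 1099511627776 )) TiB"   # 2 TiB
```

This is why the installer's "Solaris 2" (MBR) partition tops out at 2 TB and an EFI/GPT label is required for the 6 TB drives.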
> > 


Re: [OpenIndiana-discuss] Partitions for co-exist NAS drive

2021-05-01 Thread Reginald Beardsley via openindiana-discuss
 What release are you trying to install? I last did this using 2021.04.05, but 
I can easily verify the behavior with 2020.10.

I *strongly* recommend that you put all the disks you plan to use in the 
machine and run the text-installer from the Live Desktop. Select F2 on the 
first screen. When you get to the disk list, select all the disks. Make sure 
you don't have a USB stick plugged in. I wiped out mine by accident. Selecting 
multiple disks triggers the appearance of options to create a mirror, RAIDZ1 or 
RAIDZ2 depending on whether you have 2, 3 or 4 disks. Just make sure you select 
an EFI label, not MBR.

Let it wipe out the partitions and create the 250 MB slice and the rest of disk 
slice. That *is* the best practice today. The mirrored root pool slice is a 
bodge for releases which can't boot from RAIDZ.

If for some reason you want to do something different, this is the step-by-step 
recipe. It will give you exactly whatever you set up with format(1m). Because 
the F5 option is expecting to be used on an existing bootable pool it doesn't 
create dump and swap, so you have to do that by hand. The absence of 
/reconfigure was what beat me up for a week before I figured it out quite by 
accident.

All this is done in a terminal window after doing "sudo /bin/su" as jack.

> format -e
# select disk
> fdisk
# create Solaris partition for entire drive and commit to disk
> partition
# create the desired slices 
> label
# write an EFI label
> quit
> verify
> quit

You should now have a Sun EFI label with 9 slices. 8 is set by format(1m) and 
can't be changed. The other two should be the ones you created. You will need 
to do this for each disk and then check using prtvtoc(1m) that they all match. 
The disks don't need to match, but the slice sizes should.
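
The prtvtoc(1m) comparison can be scripted. This is a hypothetical helper, not from the thread; the label dumps below are made up (in real use you would capture them with something like `prtvtoc /dev/rdsk/c1t0d0 > disk0.vtoc`):

```shell
# prtvtoc comment lines start with '*'; on data lines field 1 is the
# slice number and field 5 the sector count, which is what must match.
slice_sizes() { awk '!/^\*/ { print $1, $5 }' "$1"; }

# Two inline stand-in label dumps (invented sector counts):
cat > disk0.vtoc <<'EOF'
* /dev/rdsk/c1t0d0 partition map
       0      4    00        256    524288    524543
       1      4    00     524544 117205899 117730442
EOF
cp disk0.vtoc disk1.vtoc   # pretend the second disk was labeled to match

slice_sizes disk0.vtoc > sizes0.txt
slice_sizes disk1.vtoc > sizes1.txt
diff sizes0.txt sizes1.txt && echo "slice sizes match"   # prints "slice sizes match"
```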

I should note that format(1m) will not allow deleting a slice by setting the 
start and end to 0 easily. I have managed to do it, but I'm not sure what the 
correct incantation is. It gives "0 is out of range" messages most of the time.

In the first text installer screen choose F5; in the 2nd screen select the slice 
you want to use. Continue with the install. When it completes, reboot to single 
user.

zfs create -V <size> rpool/dump
zfs create -V <size> rpool/swap
dumpadm -d /dev/zvol/dsk/rpool/dump
swap -a /dev/zvol/dsk/rpool/swap
touch /reconfigure
init 6

Reg

On Saturday, May 1, 2021, 08:16:07 AM CDT, Michelle 
 wrote:  
 
 OK - I give in.

I've tried various combinations and I just can't get this into a
configuration where I can get things installed.

I need a step by step guide please, because I'm lost. I don't know what
options to choose in the installer to not avoid it wanting to wipe
whatever combinations of partitions I've set.

Michelle.


On Sat, 2021-05-01 at 15:42 +0300, Toomas Soome via openindiana-discuss 
wrote:
> > On 1. May 2021, at 15:30, Michelle  wrote:
> > 
> > That's where I'm becoming unstuck.
> > 
> > A Solaris 2 partition will only see the first 2Tb.
> > 
> > There hangs my first problem.
> > 
> > If I try and create any other partition it gives me the warning
> > about
> > the 2TB limit and if I then try and create the EFI partition, it
> > won't
> > co-exist with anything and wants to wipe the whole disk again.
> > 
> > Michelle.
> 
> MBR and GPT can not co-exist. On this disk, you need GPT and this
> means, MBR partitions will be removed.
> 
> Rgds,
> Toomas
> 
> > 
> > > On Sat, 2021-05-01 at 12:21 +, Reginald Beardsley via
> > > openindiana-
> > > discuss wrote:
> > > 
> > > I just went through several iterations of this, and like you the
> > > last
> > > time I had done it was long ago. The following is based on
> > > 2021.04.05.
> > > 
> > > Large disks require an EFI or GPT label. The gparted program
> > > creates
> > > 128 slices which is a bit much. format(1m) will also write an EFI
> > > label which is usable with large disks.
> > > 
> > > > format -e
> > > # select disk
> > > > fdisk
> > > # create Solaris partition for entire drive and commit to disk
> > > > partition
> > > # create the desired slices and write an EFI label
> > > > quit
> > > > verify
> > > > quit
> > > 
> > > You should now have a Sun EFI label with 9 slices. 8 is set by
> > > format(1m) and can't be changed. The other two should be the ones
> > > you
> > > created.
> > > 
> > > In the first text installer screen chose F5, in the 2nd screen
> > > select
> > > the slice you want to use. Continue with the install. When It
> > > completes reboot to single user.
> > > 
> > > zfs create -V  rpool/dump rpool
> > > zfs create -V  rpool/swap rpool
> > > dumpadm -d /dev/zvol/dsk/rpool/dump
> > > swap -a /dev/zvol/dsk/rpool/swap
> > > touch /reconfigure
> > > init 6
> > > 
> > > You should now come up with rpool in the 100 GB slice.
> > > 
> > > That said, we can boot from RAIDZ now. The text-install on the
> > > Desktop live image will let you create a mirror, RAIDZ1 or RAIDZ2
> > > and
> > > will take care of all the label stuff. Despite the 

Re: [OpenIndiana-discuss] Partitions for co-exist NAS drive

2021-05-01 Thread Reginald Beardsley via openindiana-discuss
 Toomas,

Thanks for the explanation. That certainly sounds like a good idea. It provides 
lots of flexibility.

It would be nice if the text-installer allowed specifying the BE name when 
creating a fresh pool, or defaulted to the release, e.g. 2021.04, instead of 
openindiana. The latter is not very informative.

I don't recommend the 100 GB root pool approach now that we can boot from RAIDZ, 
unless some HW issue prevents using a RAIDZ root pool. If you install updates 
using F5 you will run out of room fairly quickly unless you delete older BEs. 
Ten years ago I did it to avoid using a USB stick to boot an NL40 running a 4x 
2 TB disk RAIDZ2 configuration. It was also the only option with Solaris 10 u8 
that provided redundancy for the root pool and RAIDZ for the rest of the disk. 
It has served me well, but it is no longer needed in most cases.

Reg




On Saturday, May 1, 2021, 07:40:37 AM CDT, Toomas Soome  wrote:

This is the EFI System Partition: a partition with a FAT file system from which 
UEFI firmware can load and start applications such as OS boot loaders and 
firmware updates.

The 250 MB is too large for the illumos loader alone, but is allocated to allow 
storing other applications too.

In fact, 250 MB was picked with 4K-sector disks in mind, and is actually a 
buggy value - it should be about 260 MB instead. The fix is still on my desk ;)
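A back-of-the-envelope check (my arithmetic, not from the original mail) of why 250 MB falls short on 4K-sector disks: FAT32 needs at least 65,525 data clusters, and on a 4K-native disk a cluster is at least one 4096-byte sector.

```shell
# FAT32 requires a minimum of 65,525 data clusters. With 4096-byte
# sectors the smallest cluster is 4 KiB, so the data area alone needs:
echo "$(( 65525 * 4096 / 1024 / 1024 )) MiB"   # prints "255 MiB"
# FAT tables, reserved sectors and alignment push the practical minimum
# to roughly 260 MiB -- hence a 260 MB ESP rather than 250 MB.
```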

Rgds,
Toomas

  
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Partitions for co-exist NAS drive

2021-05-01 Thread Michelle
OK - I give in.

I've tried various combinations and I just can't get this into a
configuration where I can get things installed.

I need a step by step guide please, because I'm lost. I don't know what
options to choose in the installer to stop it wanting to wipe
whatever combination of partitions I've set.

Michelle.


On Sat, 2021-05-01 at 15:42 +0300, Toomas Soome via openindiana-discuss 
wrote:
> > On 1. May 2021, at 15:30, Michelle  wrote:
> > 
> > That's where I'm becoming unstuck.
> > 
> > A Solaris 2 partition will only see the first 2 TB.
> > 
> > There hangs my first problem.
> > 
> > If I try and create any other partition it gives me the warning
> > about
> > the 2TB limit and if I then try and create the EFI partition, it
> > won't
> > co-exist with anything and wants to wipe the whole disk again.
> > 
> > Michelle.
> 
> MBR and GPT cannot co-exist. On this disk you need GPT, and this
> means the MBR partitions will be removed.
> 
> Rgds,
> Toomas
> 
> > 
> > > On Sat, 2021-05-01 at 12:21 +, Reginald Beardsley via
> > > openindiana-
> > > discuss wrote:
> > > 
> > > I just went through several iterations of this, and like you the
> > > last
> > > time I had done it was long ago. The following is based on
> > > 2021.04.05.
> > > 
> > > Large disks require an EFI or GPT label. The gparted program
> > > creates
> > > 128 slices which is a bit much. format(1m) will also write an EFI
> > > label which is usable with large disks.
> > > 
> > > > format -e
> > > # select disk
> > > > fdisk
> > > # create Solaris partition for entire drive and commit to disk
> > > > partition
> > > # create the desired slices and write an EFI label
> > > > quit
> > > > verify
> > > > quit
> > > 
> > > You should now have a Sun EFI label with 9 slices. Slice 8 is set by
> > > format(1m) and can't be changed. The other two should be the ones
> > > you
> > > created.
> > > 
> > > In the first text installer screen choose F5; in the 2nd screen
> > > select the slice you want to use. Continue with the install. When it
> > > completes, reboot to single user.
> > > 
> > > zfs create -V <size> rpool/dump
> > > zfs create -V <size> rpool/swap
> > > dumpadm -d /dev/zvol/dsk/rpool/dump
> > > swap -a /dev/zvol/dsk/rpool/swap
> > > touch /reconfigure
> > > init 6
> > > 
> > > You should now come up with rpool in the 100 GB slice.
> > > 
> > > That said, we can boot from RAIDZ now. The text-install on the
> > > Desktop live image will let you create a mirror, RAIDZ1 or RAIDZ2
> > > and
> > > will take care of all the label stuff. Despite the statement that
> > > it
> > > will only use 2 TB, it in fact uses the entire disk.
> > > 
> > > It creates a 250 MB s0 slice and the rest of the disk in s1. The
> > > 250
> > > MB slice is labeled "System", but I've not seen any explanation
> > > of
> > > it. I've also created RAIDZ2 pools by hand and used F5 to install
> > > into them. F5 appears to be intended to install into a new BE in
> > > an
> > > existing pool, hence the need to set up dump and swap by hand.
> > > 
> > > Ultimately I decided I didn't care about 1 GB of unused space in
> > > 16
> > > TB of space. So I just went with the text-install created RAIDZ2
> > > pool. The reconfigure on the first boot after the install is
> > > critical
> > > to getting 2021.04.05 up properly.
> > > 
> > > Reg
> > > On Saturday, May 1, 2021, 02:57:23 AM CDT, Michelle <
> > > miche...@msknight.com> wrote:  
> > > 
> > > OK - I appear to be well out of touch.
> > > 
> > > I booted the installer and went into prompt.
> > > 
> > > Used format (only 1 x 6TB drive in the machine at this point) to
> > > create
> > > a new Solaris 2 partition table and then fdisk'd an all free hog
> > > to
> > > partition 1, giving partition 0 100gig. 
> > > 
> > > I noticed that it must have gone on for 40 odd partitions, and
> > > also
> > > there was none of the usual backup and reserved partitions for 2,
> > > 8
> > > and
> > > 9 as I saw before.
> > > 
> > > On installation of OI, I selected the drive and got the
> > > warning...
> > > "you have chosen a gpt labelled disk. installing onto a gpt
> > > labelled
> > > disk will cause the loss of all existing data"
> > > 
> > > Out of interest I continued through and got the options for whole
> > > disk
> > > or partition (MBR) ... the second of which gave me a 2Tb Solaris
> > > 2
> > > partition in the list.
> > > 
> > > I did try F5 to change partition, but it just took me straight
> > > back
> > > to
> > > the installation menu at the start again.
> > > 
> > > Things have obviously moved on and I haven't kept pace.
> > > 
> > > I now have to work out how to do this on a gpt drive.
> > > 
> > > If anyone has any notes, I'd be grateful.
> > > 
> > > Michelle.
> > > 
> > > 
> > > > On Sat, 2021-05-01 at 08:31 +0100, Michelle wrote:
> > > > Well, I looked over my notes and the last time I did this was
> > > > in
> > > > 2014.
> > > > 
> > > > My preference has always been to run OI on its own drive and
> > > > 

Re: [OpenIndiana-discuss] Partitions for co-exist NAS drive

2021-05-01 Thread Toomas Soome via openindiana-discuss


> On 1. May 2021, at 15:30, Michelle  wrote:
> 
> That's where I'm becoming unstuck.
> 
> A Solaris 2 partition will only see the first 2 TB.
> 
> There hangs my first problem.
> 
> If I try and create any other partition it gives me the warning about
> the 2TB limit and if I then try and create the EFI partition, it won't
> co-exist with anything and wants to wipe the whole disk again.
> 
> Michelle.

MBR and GPT cannot co-exist. On this disk you need GPT, and this means the MBR 
partitions will be removed.
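The 2 TB ceiling follows directly from MBR's on-disk format; a quick check (my arithmetic, not part of the original mail):

```shell
# An MBR partition entry stores start and length as 32-bit LBA sector
# counts, so with 512-byte sectors the largest addressable size is:
echo "$(( 2**32 * 512 / 1024**4 )) TiB"   # prints "2 TiB"
# Anything bigger needs GPT, which uses 64-bit LBAs.
```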

Rgds,
Toomas

> 
> 
>> On Sat, 2021-05-01 at 12:21 +, Reginald Beardsley via openindiana-
>> discuss wrote:
>> 
>> I just went through several iterations of this, and like you the last
>> time I had done it was long ago. The following is based on
>> 2021.04.05.
>> 
>> Large disks require an EFI or GPT label. The gparted program creates
>> 128 slices which is a bit much. format(1m) will also write an EFI
>> label which is usable with large disks.
>> 
>>> format -e
>> # select disk
>>> fdisk
>> # create Solaris partition for entire drive and commit to disk
>>> partition
>> # create the desired slices and write an EFI label
>>> quit
>>> verify
>>> quit
>> 
>> You should now have a Sun EFI label with 9 slices. Slice 8 is set by
>> format(1m) and can't be changed. The other two should be the ones you
>> created.
>> 
>> In the first text installer screen choose F5; in the 2nd screen select
>> the slice you want to use. Continue with the install. When it
>> completes, reboot to single user.
>> 
>> zfs create -V <size> rpool/dump
>> zfs create -V <size> rpool/swap
>> dumpadm -d /dev/zvol/dsk/rpool/dump
>> swap -a /dev/zvol/dsk/rpool/swap
>> touch /reconfigure
>> init 6
>> 
>> You should now come up with rpool in the 100 GB slice.
>> 
>> That said, we can boot from RAIDZ now. The text-install on the
>> Desktop live image will let you create a mirror, RAIDZ1 or RAIDZ2 and
>> will take care of all the label stuff. Despite the statement that it
>> will only use 2 TB, it in fact uses the entire disk.
>> 
>> It creates a 250 MB s0 slice and the rest of the disk in s1. The 250
>> MB slice is labeled "System", but I've not seen any explanation of
>> it. I've also created RAIDZ2 pools by hand and used F5 to install
>> into them. F5 appears to be intended to install into a new BE in an
>> existing pool, hence the need to set up dump and swap by hand.
>> 
>> Ultimately I decided I didn't care about 1 GB of unused space in 16
>> TB of space. So I just went with the text-install created RAIDZ2
>> pool. The reconfigure on the first boot after the install is critical
>> to getting 2021.04.05 up properly.
>> 
>> Reg
>> On Saturday, May 1, 2021, 02:57:23 AM CDT, Michelle <
>> miche...@msknight.com> wrote:  
>> 
>> OK - I appear to be well out of touch.
>> 
>> I booted the installer and went into prompt.
>> 
>> Used format (only 1 x 6TB drive in the machine at this point) to
>> create
>> a new Solaris 2 partition table and then fdisk'd an all free hog to
>> partition 1, giving partition 0 100gig. 
>> 
>> I noticed that it must have gone on for 40 odd partitions, and also
>> there was none of the usual backup and reserved partitions for 2, 8
>> and
>> 9 as I saw before.
>> 
>> On installation of OI, I selected the drive and got the warning...
>> "you have chosen a gpt labelled disk. installing onto a gpt labelled
>> disk will cause the loss of all existing data"
>> 
>> Out of interest I continued through and got the options for whole
>> disk
>> or partition (MBR) ... the second of which gave me a 2Tb Solaris 2
>> partition in the list.
>> 
>> I did try F5 to change partition, but it just took me straight back
>> to
>> the installation menu at the start again.
>> 
>> Things have obviously moved on and I haven't kept pace.
>> 
>> I now have to work out how to do this on a gpt drive.
>> 
>> If anyone has any notes, I'd be grateful.
>> 
>> Michelle.
>> 
>> 
>>> On Sat, 2021-05-01 at 08:31 +0100, Michelle wrote:
>>> Well, I looked over my notes and the last time I did this was in
>>> 2014.
>>> 
>>> My preference has always been to run OI on its own drive and have
>>> the
>>> main ZFS tank as a "whole drive" basis. However, thanks to the
>>> QNAP,
>>> that's changed.
>>> 
>>> In 2014 I did a test. I took two 40gig drives and did the
>>> partitions
>>> as
>>> an all free hog on partition 0 ... I was simply testing the ability
>>> to
>>> configure rpool on two drives and have both active, so if one
>>> failed
>>> the other would keep running the OS.
>>> 
>>> My immediate thought is to have 100gig for the OS on partition 0
>>> and
>>> the rest on partition 1. Also, turn on auto expand for the tank
>>> pool
>>> and off for the rpool.
>>> 
>>> That's my gut feel.
>>> 
>>> Anyone got any advice to offer please, before I commit finger to
>>> keyboard?
>>> 
>>> Michelle.
>>> 
>>> 

Re: [OpenIndiana-discuss] Partitions for co-exist NAS drive

2021-05-01 Thread Toomas Soome via openindiana-discuss


> On 1. May 2021, at 15:22, Reginald Beardsley via openindiana-discuss 
>  wrote:
> 
> 
> I just went through several iterations of this, and like you the last time I 
> had done it was long ago. The following is based on 2021.04.05.
> 
> Large disks require an EFI or GPT label. The gparted program creates 128 
> slices which is a bit much. format(1m) will also write an EFI label which is 
> usable with large disks.
> 
>> format -e
> # select disk
>> fdisk
> # create Solaris partition for entire drive and commit to disk
>> partition
> # create the desired slices and write an EFI label
>> quit
>> verify
>> quit
> 
> You should now have a Sun EFI label with 9 slices. Slice 8 is set by format(1m) 
> and can't be changed. The other two should be the ones you created.
> 
> In the first text installer screen choose F5; in the 2nd screen select the 
> slice you want to use. Continue with the install. When it completes, reboot to 
> single user.
> 
> zfs create -V <size> rpool/dump
> zfs create -V <size> rpool/swap
> dumpadm -d /dev/zvol/dsk/rpool/dump
> swap -a /dev/zvol/dsk/rpool/swap
> touch /reconfigure
> init 6
> 
> You should now come up with rpool in the 100 GB slice.
> 
> That said, we can boot from RAIDZ now. The text-install on the Desktop live 
> image will let you create a mirror, RAIDZ1 or RAIDZ2 and will take care of 
> all the label stuff. Despite the statement that it will only use 2 TB, it in 
> fact uses the entire disk.
> 
> It creates a 250 MB s0 slice and the rest of the disk in s1. The 250 MB slice 
> is labeled "System", but I've not seen any explanation of it. I've also 
> created RAIDZ2 pools by hand and used F5 to install into them. F5 appears to 
> be intended to install into a new BE in an existing pool, hence the need to 
> set up dump and swap by hand.

This is the EFI System Partition: a partition with a FAT file system from which 
UEFI firmware can load and start applications such as OS boot loaders and 
firmware updates.

The 250 MB is too large for the illumos loader alone, but is allocated to allow 
storing other applications too.

In fact, 250 MB was picked with 4K-sector disks in mind, and is actually a 
buggy value - it should be about 260 MB instead. The fix is still on my desk ;)

Rgds,
Toomas

> 
> Ultimately I decided I didn't care about 1 GB of unused space in 16 TB of 
> space. So I just went with the text-install created RAIDZ2 pool. The 
> reconfigure on the first boot after the install is critical to getting 
> 2021.04.05 up properly.
> 
> Reg
> On Saturday, May 1, 2021, 02:57:23 AM CDT, Michelle 
>  wrote:  
> 
> OK - I appear to be well out of touch.
> 
> I booted the installer and went into prompt.
> 
> Used format (only 1 x 6TB drive in the machine at this point) to create
> a new Solaris 2 partition table and then fdisk'd an all free hog to
> partition 1, giving partition 0 100gig. 
> 
> I noticed that it must have gone on for 40 odd partitions, and also
> there was none of the usual backup and reserved partitions for 2, 8 and
> 9 as I saw before.
> 
> On installation of OI, I selected the drive and got the warning...
> "you have chosen a gpt labelled disk. installing onto a gpt labelled
> disk will cause the loss of all existing data"
> 
> Out of interest I continued through and got the options for whole disk
> or partition (MBR) ... the second of which gave me a 2Tb Solaris 2
> partition in the list.
> 
> I did try F5 to change partition, but it just took me straight back to
> the installation menu at the start again.
> 
> Things have obviously moved on and I haven't kept pace.
> 
> I now have to work out how to do this on a gpt drive.
> 
> If anyone has any notes, I'd be grateful.
> 
> Michelle.
> 
> 
>> On Sat, 2021-05-01 at 08:31 +0100, Michelle wrote:
>> Well, I looked over my notes and the last time I did this was in
>> 2014.
>> 
>> My preference has always been to run OI on its own drive and have the
>> main ZFS tank as a "whole drive" basis. However, thanks to the QNAP,
>> that's changed.
>> 
>> In 2014 I did a test. I took two 40gig drives and did the partitions
>> as
>> an all free hog on partition 0 ... I was simply testing the ability
>> to
>> configure rpool on two drives and have both active, so if one failed
>> the other would keep running the OS.
>> 
>> My immediate thought is to have 100gig for the OS on partition 0 and
>> the rest on partition 1. Also, turn on auto expand for the tank pool
>> and off for the rpool.
>> 
>> That's my gut feel.
>> 
>> Anyone got any advice to offer please, before I commit finger to
>> keyboard?
>> 
>> Michelle.
>> 
>> 

Re: [OpenIndiana-discuss] Partitions for co-exist NAS drive

2021-05-01 Thread Michelle
That's where I'm becoming unstuck.

A Solaris 2 partition will only see the first 2 TB.

There hangs my first problem.

If I try and create any other partition it gives me the warning about
the 2TB limit and if I then try and create the EFI partition, it won't
co-exist with anything and wants to wipe the whole disk again.

Michelle.


On Sat, 2021-05-01 at 12:21 +, Reginald Beardsley via openindiana-
discuss wrote:
>  
> I just went through several iterations of this, and like you the last
> time I had done it was long ago. The following is based on
> 2021.04.05.
> 
> Large disks require an EFI or GPT label. The gparted program creates
> 128 slices which is a bit much. format(1m) will also write an EFI
> label which is usable with large disks.
> 
> > format -e
> # select disk
> > fdisk
> # create Solaris partition for entire drive and commit to disk
> > partition
> # create the desired slices and write an EFI label
> > quit
> > verify
> > quit
> 
> You should now have a Sun EFI label with 9 slices. Slice 8 is set by
> format(1m) and can't be changed. The other two should be the ones you
> created.
> 
> In the first text installer screen choose F5; in the 2nd screen select
> the slice you want to use. Continue with the install. When it
> completes, reboot to single user.
> 
> zfs create -V <size> rpool/dump
> zfs create -V <size> rpool/swap
> dumpadm -d /dev/zvol/dsk/rpool/dump
> swap -a /dev/zvol/dsk/rpool/swap
> touch /reconfigure
> init 6
> 
> You should now come up with rpool in the 100 GB slice.
> 
> That said, we can boot from RAIDZ now. The text-install on the
> Desktop live image will let you create a mirror, RAIDZ1 or RAIDZ2 and
> will take care of all the label stuff. Despite the statement that it
> will only use 2 TB, it in fact uses the entire disk.
> 
> It creates a 250 MB s0 slice and the rest of the disk in s1. The 250
> MB slice is labeled "System", but I've not seen any explanation of
> it. I've also created RAIDZ2 pools by hand and used F5 to install
> into them. F5 appears to be intended to install into a new BE in an
> existing pool, hence the need to set up dump and swap by hand.
> 
> Ultimately I decided I didn't care about 1 GB of unused space in 16
> TB of space. So I just went with the text-install created RAIDZ2
> pool. The reconfigure on the first boot after the install is critical
> to getting 2021.04.05 up properly.
> 
> Reg
>  On Saturday, May 1, 2021, 02:57:23 AM CDT, Michelle <
> miche...@msknight.com> wrote:  
>  
>  OK - I appear to be well out of touch.
> 
> I booted the installer and went into prompt.
> 
> Used format (only 1 x 6TB drive in the machine at this point) to
> create
> a new Solaris 2 partition table and then fdisk'd an all free hog to
> partition 1, giving partition 0 100gig. 
> 
> I noticed that it must have gone on for 40 odd partitions, and also
> there was none of the usual backup and reserved partitions for 2, 8
> and
> 9 as I saw before.
> 
> On installation of OI, I selected the drive and got the warning...
> "you have chosen a gpt labelled disk. installing onto a gpt labelled
> disk will cause the loss of all existing data"
> 
> Out of interest I continued through and got the options for whole
> disk
> or partition (MBR) ... the second of which gave me a 2Tb Solaris 2
> partition in the list.
> 
> I did try F5 to change partition, but it just took me straight back
> to
> the installation menu at the start again.
> 
> Things have obviously moved on and I haven't kept pace.
> 
> I now have to work out how to do this on a gpt drive.
> 
> If anyone has any notes, I'd be grateful.
> 
> Michelle.
> 
> 
> On Sat, 2021-05-01 at 08:31 +0100, Michelle wrote:
> > Well, I looked over my notes and the last time I did this was in
> > 2014.
> > 
> > My preference has always been to run OI on its own drive and have
> > the
> > main ZFS tank as a "whole drive" basis. However, thanks to the
> > QNAP,
> > that's changed.
> > 
> > In 2014 I did a test. I took two 40gig drives and did the
> > partitions
> > as
> > an all free hog on partition 0 ... I was simply testing the ability
> > to
> > configure rpool on two drives and have both active, so if one
> > failed
> > the other would keep running the OS.
> > 
> > My immediate thought is to have 100gig for the OS on partition 0
> > and
> > the rest on partition 1. Also, turn on auto expand for the tank
> > pool
> > and off for the rpool.
> > 
> > That's my gut feel.
> > 
> > Anyone got any advice to offer please, before I commit finger to
> > keyboard?
> > 
> > Michelle.
> > 
> > 

Re: [OpenIndiana-discuss] Partitions for co-exist NAS drive

2021-05-01 Thread Reginald Beardsley via openindiana-discuss
 
I just went through several iterations of this, and like you the last time I 
had done it was long ago. The following is based on 2021.04.05.

Large disks require an EFI or GPT label. The gparted program creates 128 slices 
which is a bit much. format(1m) will also write an EFI label which is usable 
with large disks.

> format -e
# select disk
> fdisk
# create Solaris partition for entire drive and commit to disk
> partition
# create the desired slices and write an EFI label
> quit
> verify
> quit

You should now have a Sun EFI label with 9 slices. Slice 8 is set by format(1m) 
and can't be changed. The other two should be the ones you created.

In the first text installer screen choose F5; in the 2nd screen select the 
slice you want to use. Continue with the install. When it completes, reboot to 
single user.

zfs create -V <size> rpool/dump
zfs create -V <size> rpool/swap
dumpadm -d /dev/zvol/dsk/rpool/dump
swap -a /dev/zvol/dsk/rpool/swap
touch /reconfigure
init 6

You should now come up with rpool in the 100 GB slice.
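Filled in with illustrative sizes (the 4G/8G values are my assumptions, not from the message; scale dump to RAM and swap to workload), the post-install sequence might look like:

```shell
# Run from single-user mode after the F5 install completes.
# Sizes are examples only.
zfs create -V 4G rpool/dump           # zvol for crash dumps
zfs create -V 8G rpool/swap           # zvol for swap
dumpadm -d /dev/zvol/dsk/rpool/dump   # register the dump device
swap -a /dev/zvol/dsk/rpool/swap      # activate the swap device
touch /reconfigure                    # force device reconfiguration on boot
init 6                                # reboot
```

Afterwards, `dumpadm` with no arguments and `swap -l` should show both devices.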

That said, we can boot from RAIDZ now. The text-install on the Desktop live 
image will let you create a mirror, RAIDZ1 or RAIDZ2 and will take care of all 
the label stuff. Despite the statement that it will only use 2 TB, it in fact 
uses the entire disk.

It creates a 250 MB s0 slice and the rest of the disk in s1. The 250 MB slice 
is labeled "System", but I've not seen any explanation of it. I've also created 
RAIDZ2 pools by hand and used F5 to install into them. F5 appears to be 
intended to install into a new BE in an existing pool, hence the need to set up 
dump and swap by hand.

Ultimately I decided I didn't care about 1 GB of unused space in 16 TB of 
space. So I just went with the text-install created RAIDZ2 pool. The 
reconfigure on the first boot after the install is critical to getting 
2021.04.05 up properly.

Reg
 On Saturday, May 1, 2021, 02:57:23 AM CDT, Michelle 
 wrote:  
 
 OK - I appear to be well out of touch.

I booted the installer and went into prompt.

Used format (only 1 x 6TB drive in the machine at this point) to create
a new Solaris 2 partition table and then fdisk'd an all free hog to
partition 1, giving partition 0 100gig. 

I noticed that it must have gone on for 40 odd partitions, and also
there was none of the usual backup and reserved partitions for 2, 8 and
9 as I saw before.

On installation of OI, I selected the drive and got the warning...
"you have chosen a gpt labelled disk. installing onto a gpt labelled
disk will cause the loss of all existing data"

Out of interest I continued through and got the options for whole disk
or partition (MBR) ... the second of which gave me a 2Tb Solaris 2
partition in the list.

I did try F5 to change partition, but it just took me straight back to
the installation menu at the start again.

Things have obviously moved on and I haven't kept pace.

I now have to work out how to do this on a gpt drive.

If anyone has any notes, I'd be grateful.

Michelle.


On Sat, 2021-05-01 at 08:31 +0100, Michelle wrote:
> Well, I looked over my notes and the last time I did this was in
> 2014.
> 
> My preference has always been to run OI on its own drive and have the
> main ZFS tank as a "whole drive" basis. However, thanks to the QNAP,
> that's changed.
> 
> In 2014 I did a test. I took two 40gig drives and did the partitions
> as
> an all free hog on partition 0 ... I was simply testing the ability
> to
> configure rpool on two drives and have both active, so if one failed
> the other would keep running the OS.
> 
> My immediate thought is to have 100gig for the OS on partition 0 and
> the rest on partition 1. Also, turn on auto expand for the tank pool
> and off for the rpool.
> 
> That's my gut feel.
> 
> Anyone got any advice to offer please, before I commit finger to
> keyboard?
> 
> Michelle.
> 
> 


Re: [OpenIndiana-discuss] Partitions for co-exist NAS drive

2021-05-01 Thread Toomas Soome via openindiana-discuss



> On 1. May 2021, at 11:10, Michelle  wrote:
> 
> I'm just going to try breaking it down to two Solaris 2 partitions and
> see where that takes me.
> 
> Michelle.

Solaris2 (as an MBR partition type) - our tools assume there is one Solaris2 
partition. As you have 6 TB disks, you cannot really use MBR (it does not allow 
addressing disks that big).

With zfs pools, the automatic partitioning code uses the tag usr (with GPT it 
is translated to a UUID). We do not really care about slice tags, but some 
other systems do (so they may be meaningful in multi-boot setups).

rgds,
toomas

> 
> 
> On Sat, 2021-05-01 at 08:56 +0100, Michelle wrote:
>> OK - I appear to be well out of touch.
>> 
>> I booted the installer and went into prompt.
>> 
>> Used format (only 1 x 6TB drive in the machine at this point) to
>> create
>> a new Solaris 2 partition table and then fdisk'd an all free hog to
>> partition 1, giving partition 0 100gig. 
>> 
>> I noticed that it must have gone on for 40 odd partitions, and also
>> there was none of the usual backup and reserved partitions for 2, 8
>> and
>> 9 as I saw before.
>> 
>> On installation of OI, I selected the drive and got the warning...
>> "you have chosen a gpt labelled disk. installing onto a gpt labelled
>> disk will cause the loss of all existing data"
>> 
>> Out of interest I continued through and got the options for whole
>> disk
>> or partition (MBR) ... the second of which gave me a 2Tb Solaris 2
>> partition in the list.
>> 
>> I did try F5 to change partition, but it just took me straight back
>> to
>> the installation menu at the start again.
>> 
>> Things have obviously moved on and I haven't kept pace.
>> 
>> I now have to work out how to do this on a gpt drive.
>> 
>> If anyone has any notes, I'd be grateful.
>> 
>> Michelle.
>> 
>> 
>> On Sat, 2021-05-01 at 08:31 +0100, Michelle wrote:
>>> Well, I looked over my notes and the last time I did this was in
>>> 2014.
>>> 
>>> My preference has always been to run OI on its own drive and have
>>> the
>>> main ZFS tank as a "whole drive" basis. However, thanks to the
>>> QNAP,
>>> that's changed.
>>> 
>>> In 2014 I did a test. I took two 40gig drives and did the
>>> partitions
>>> as
>>> an all free hog on partition 0 ... I was simply testing the ability
>>> to
>>> configure rpool on two drives and have both active, so if one
>>> failed
>>> the other would keep running the OS.
>>> 
>>> My immediate thought is to have 100gig for the OS on partition 0
>>> and
>>> the rest on partition 1. Also, turn on auto expand for the tank
>>> pool
>>> and off for the rpool.
>>> 
>>> That's my gut feel.
>>> 
>>> Anyone got any advice to offer please, before I commit finger to
>>> keyboard?
>>> 
>>> Michelle.
>>> 
>>> 




Re: [OpenIndiana-discuss] Partitions for co-exist NAS drive

2021-05-01 Thread Michelle
I'm just going to try breaking it down to two Solaris 2 partitions and
see where that takes me.

Michelle.


On Sat, 2021-05-01 at 08:56 +0100, Michelle wrote:
> OK - I appear to be well out of touch.
> 
> I booted the installer and went into prompt.
> 
> Used format (only 1 x 6TB drive in the machine at this point) to
> create
> a new Solaris 2 partition table and then fdisk'd an all free hog to
> partition 1, giving partition 0 100gig. 
> 
> I noticed that it must have gone on for 40 odd partitions, and also
> there was none of the usual backup and reserved partitions for 2, 8
> and
> 9 as I saw before.
> 
> On installation of OI, I selected the drive and got the warning...
> "you have chosen a gpt labelled disk. installing onto a gpt labelled
> disk will cause the loss of all existing data"
> 
> Out of interest I continued through and got the options for whole
> disk
> or partition (MBR) ... the second of which gave me a 2Tb Solaris 2
> partition in the list.
> 
> I did try F5 to change partition, but it just took me straight back
> to
> the installation menu at the start again.
> 
> Things have obviously moved on and I haven't kept pace.
> 
> I now have to work out how to do this on a gpt drive.
> 
> If anyone has any notes, I'd be grateful.
> 
> Michelle.
> 
> 
> On Sat, 2021-05-01 at 08:31 +0100, Michelle wrote:
> > Well, I looked over my notes and the last time I did this was in
> > 2014.
> > 
> > My preference has always been to run OI on its own drive and have
> > the
> > main ZFS tank as a "whole drive" basis. However, thanks to the
> > QNAP,
> > that's changed.
> > 
> > In 2014 I did a test. I took two 40gig drives and did the
> > partitions
> > as
> > an all free hog on partition 0 ... I was simply testing the ability
> > to
> > configure rpool on two drives and have both active, so if one
> > failed
> > the other would keep running the OS.
> > 
> > My immediate thought is to have 100gig for the OS on partition 0
> > and
> > the rest on partition 1. Also, turn on auto expand for the tank
> > pool
> > and off for the rpool.
> > 
> > That's my gut feel.
> > 
> > Anyone got any advice to offer please, before I commit finger to
> > keyboard?
> > 
> > Michelle.
> > 
> > 




Re: [OpenIndiana-discuss] Partitions for co-exist NAS drive

2021-05-01 Thread Michelle
OK - I appear to be well out of touch.

I booted the installer and went into prompt.

Used format (only one 6 TB drive in the machine at this point) to create
a new Solaris 2 partition table, then fdisk'd an all-free hog to
partition 1, giving partition 0 100 GB.

I noticed that it must have gone on for 40-odd partitions, and also
there were none of the usual backup and reserved partitions at 2, 8 and
9 that I saw before.

On installation of OI, I selected the drive and got the warning...
"you have chosen a gpt labelled disk. installing onto a gpt labelled
disk will cause the loss of all existing data"

Out of interest I continued through and got the options for whole disk
or partition (MBR) ... the second of which gave me a 2 TB Solaris 2
partition in the list.

I did try F5 to change partition, but it just took me straight back to
the installation menu at the start again.

Things have obviously moved on and I haven't kept pace.

I now have to work out how to do this on a gpt drive.
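For what it's worth, a rough sketch of the slice-based approach on a GPT/EFI-labelled disk. The device name c0t0d0 is hypothetical and the format menu path is approximate; adjust to whatever `format` lists on your machine:

```shell
# Inspect the current label and slice layout (prints the slice table
# on an SMI/EFI-labelled disk; errors out on a bare MBR disk):
prtvtoc /dev/rdsk/c0t0d0s0

# Relabel as EFI (GPT) and carve slices interactively in expert mode:
#   partition -> modify -> free hog, then label and pick EFI
format -e c0t0d0

# After installing OI into s0, put the data pool on the remaining slice:
zpool create -f tank c0t0d0s1
```

The `zpool create -f` is destructive to anything already in that slice, so double-check the slice numbers with `prtvtoc` first.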

If anyone has any notes, I'd be grateful.

Michelle.






Re: [OpenIndiana-discuss] Partitions for co-exist NAS drive

2021-05-01 Thread Toomas Soome via openindiana-discuss



> On 1. May 2021, at 10:31, Michelle  wrote:
> 
> Well, I looked over my notes and the last time I did this was in 2014.
> 
> My preference has always been to run OI on its own drive and have the
> main ZFS tank as a "whole drive" basis. However, thanks to the QNAP,
> that's changed.
> 
> In 2014 I did a test. I took two 40gig drives and did the partitions as
> an all free hog on partition 0 ... I was simply testing the ability to
> configure rpool on two drives and have both active, so if one failed
> the other would keep running the OS.
> 
> My immediate thought is to have 100gig for the OS on partition 0 and
> the rest on partition 1. Also, turn on auto expand for the tank pool
> and off for the rpool.
> 
> That's my gut feel.
> 
> Anyone got any advice to offer please, before I commit finger to
> keyboard?
> 
> Michelle.

In general, it is a good idea to keep autoexpand off, especially for a
mirror. You do not want unexpected pool growth.
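A minimal sketch of checking and setting the property (the pool names rpool and tank are assumed from earlier in the thread):

```shell
# Show the current setting for both pools (the default is off):
zpool get autoexpand rpool tank

# Keep the root pool at its current size:
zpool set autoexpand=off rpool

# Let the data pool grow automatically onto larger replacement disks:
zpool set autoexpand=on tank
```

With autoexpand off you can still grow a pool deliberately later with `zpool online -e`.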

rgds,
toomas


[OpenIndiana-discuss] Partitions for co-exist NAS drive

2021-05-01 Thread Michelle
Well, I looked over my notes and the last time I did this was in 2014.

My preference has always been to run OI on its own drive and have the
main ZFS tank as a "whole drive" basis. However, thanks to the QNAP,
that's changed.

In 2014 I did a test. I took two 40gig drives and did the partitions as
an all free hog on partition 0 ... I was simply testing the ability to
configure rpool on two drives and have both active, so if one failed
the other would keep running the OS.
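That two-drive rpool test comes down to something like the following sketch (device names c0t0d0/c0t1d0 are hypothetical; s0 is the OS slice on each drive):

```shell
# Attach the second drive's OS slice to mirror the root pool:
zpool attach rpool c0t0d0s0 c0t1d0s0

# Watch the resilver complete:
zpool status rpool

# On illumos, also install the boot loader on the new drive so
# either disk can boot the system on its own:
installboot -m /boot/pmbr /boot/gptzfsboot /dev/rdsk/c0t1d0s0
```

Without the installboot step the mirror protects the data but the machine may not boot if the original drive is the one that fails.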

My immediate thought is to have 100gig for the OS on partition 0 and
the rest on partition 1. Also, turn on auto expand for the tank pool
and off for the rpool.

That's my gut feel.

Anyone got any advice to offer please, before I commit finger to
keyboard?

Michelle.

