Re: [OpenIndiana-discuss] RAIDZ issue

2021-02-25 Thread Toomas Soome via openindiana-discuss



> On 25. Feb 2021, at 14:30, Thebest videos  wrote:
> 
> Hi Toomas,
> 
> I was trying to understand zpool spare functionality, so as part of that I am
> doing the following:
> I have 15 disks arranged as 3 vdevs (5 disks in each vdev) and 2 disks as spares. I
> removed one disk that belongs to one of the vdevs of datapool and tried to
> write data. Then I put the same disk back to check the automatic replacement
> functionality from the spare disks (autoreplace is already enabled), but auto
> replacement is not working, and I got a checksum count of 1 (screenshot attached).
> I then tried a manual replacement with one spare disk.
> Questions:
> 1. Why doesn't automatic replacement work even after enabling autoreplace=on on
> the zpool?

 autoreplace=on|off
 Controls automatic device replacement.  If set to off, device
 replacement must be initiated by the administrator by using the
 zpool replace command.  If set to on, any new device, found in
 the same physical location as a device that previously belonged
 to the pool, is automatically formatted and replaced.  The
 default behavior is off.  This property can also be referred to
 by its shortened column name, replace.


That is, you need to insert the new disk into the same physical slot where the old disk was.
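
For illustration, a minimal sketch; the pool name datapool comes from this thread,
the device names (da3, da15) are hypothetical:

zpool get autoreplace datapool      # check the current setting
zpool set autoreplace=on datapool   # a new disk in the same physical slot is then formatted and used automatically
zpool replace datapool da3 da15     # with autoreplace=off, replace by hand: old device first, then the new one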


> 2. Why is the checksum count 1 on the removed disk? How do I make the checksum
> zero, or can I ignore it? Or should I use zpool clear (what does it do exactly)?

zpool clear will reset error counters to 0, nothing more. zpool scrub will 
check the whole pool.
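
For example, with the datapool from this thread:

zpool scrub datapool       # read and verify every block, repairing what it can from redundancy
zpool status -v datapool   # watch scrub progress and the per-device READ/WRITE/CKSUM counters
zpool clear datapool       # once satisfied, reset the error counters to 0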

> 3. After the manual replacement, why do the replaced disk (disk15) and the old
> disk (disk14) both appear under spare-4? The old disk versus the replacement disk
> is confusing. Please let me know; I tried Google but did not get clarity.
> 

When you have a spare disk configured and that spare is in use, that fact does
not turn the spare into something else; it is still a spare. You still need to
replace the broken disk, and once that is done, the currently INUSE spare gets
status AVAIL again.
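
A hedged sketch of that workflow, with hypothetical device names (disk14 failed,
disk16 is its permanent replacement):

zpool status datapool                  # the spare shows INUSE under spare-4 while it stands in for disk14
zpool replace datapool disk14 disk16   # permanently replace the failed disk; after resilvering the spare should return to AVAIL
zpool detach datapool disk14           # alternative: detach the failed disk so the in-use spare becomes a permanent member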

rgds,
toomas


___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] RAIDZ issue

2021-02-25 Thread Thebest videos
>
> Hi Toomas,
>

I was trying to understand zpool spare functionality, so as part of that I am
doing the following:
I have 15 disks arranged as 3 vdevs (5 disks in each vdev) and 2 disks as spares. I
removed one disk that belongs to one of the vdevs of datapool and tried to
write data. Then I put the same disk back to check the automatic replacement
functionality from the spare disks (autoreplace is already enabled), but auto
replacement is not working, and I got a checksum count of 1 (screenshot
attached). I then tried a manual replacement with one spare disk.
Questions:
1. Why doesn't automatic replacement work even after enabling autoreplace=on
on the zpool?
2. Why is the checksum count 1 on the removed disk? How do I make the checksum
zero, or can I ignore it? Or should I use zpool clear (what does it do exactly)?
3. After the manual replacement, why do the replaced disk (disk15) and the old
disk (disk14) both appear under spare-4? The old disk versus the replacement disk
is confusing. Please let me know; I tried Google but did not get clarity.
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] RAIDZ issue

2021-02-19 Thread Toomas Soome via openindiana-discuss


> On 19. Feb 2021, at 07:34, Thebest videos  wrote:
> 
> If I create one vdev (raidz2 with 6 disks) it boots fine. (The VirtualBox
> limitation of 5 disks does not come into play here.)

Yes, this is because you have raidz2, where the number of parity disks is 2, and
therefore at least 4 of the 6 disks must be readable at any time. With this raidz
setup on vbox, you have 4 data + 1 parity disks visible, and even when one of those
disks has failures, you can still boot.

> If I create two vdevs (raidz2 with 6 disks each) I see a boot issue.

A pool with 2 (or more) vdevs spreads data evenly over all vdevs, so literally
half of the kernel is on the second raidz2. Unfortunately vbox does not show those
disks to the loader, and therefore half of the data (because you have 2 vdevs)
cannot be read.


> I need to understand why the problem only comes up in the second case.


I hope the explanation above is clear enough.

rgds,
toomas


> 
> On Thu, Feb 18, 2021 at 10:12 PM Toomas Soome  > wrote:
> 
> 
>> On 18. Feb 2021, at 18:15, Thebest videos > > wrote:
>> 
>> we are able to achieve RAIDZ configuration in other way like , we are able 
>> to create RAIDZ2 with 5 disks in vdev at initial. After reboot we are adding 
>> disks to the existing pool as 2nd vdev with 5 disks and then reboot again 
>> adding disks to the same pool as 3 vdev and so on... the small change we 
>> have in command is as below(giving labelname of disk with /dev/gpt,  before 
>> we were giving disk name as ada0.)
>> --before---
>> zpool create datapool raidz2 ada0 ada1 ada2 ada3 ada4
>> ---after
>> zpool create datapool raidz2 /dev/gpt/disk0 /dev/gpt/disk1 /dev/gpt/disk2 
>> /dev/gpt/disk3 /dev/gpt/disk4   #for the first time
>> then reboot 
>> then we are adding disks to the pool in the existing pool with 5 disks. This 
>> process is repeated for every reboot. to make these 15 disks part of RAIDZ. 
>> But the problem , this is not our requirement we should create RAIDZ with 
>> multiple vdev's in single commands instead of adding on reboot
>> zpool create datapool raidz2 /dev/gpt/disk0 ...raidz2 
>> /dev/gpt/disk4..raidz2 /dev/gpt/disk9 . #this 
>> way it should work
>> in short, we need create RAIDZ with all disks all at once 
>> 
>> So any suggestion to achieve at once...!? 
> 
> nono, do not let yourself to be deceived.
> 
> On the initial setup, all data is on the first vdev; once the second vdev is added,
> the initial data is still on the first vdev. When you update the OS, the old
> kernel will not be overwritten, but new blocks will be allocated from all the
> vdevs, especially from the most recently added ones, because zfs tries to
> balance the vdev allocations. Once that happens, the bootability is gone.
> 
> rgds,
> toomas
> 
> 
>> 
>> 
>> On Thu, Feb 18, 2021 at 6:16 PM Toomas Soome > > wrote:
>> 
>> 
>>> On 18. Feb 2021, at 14:23, Thebest videos >> > wrote:
>>> 
>>> I am still new in the freebsd zfs world so I am asking the question below 
>>> kindly bear with me:
>>> Is it necessary to create a boot partition on each disk, to make it part of 
>>> raidz configuration
>> 
>> boot partition (zfs-boot/efi) needs to be on member of bootable pool for two 
>> reasons; first, if you have disk failing, you want to be able to boot from 
>> other disk, and secondly, it will help to keep devices in pool to have 
>> exactly the same layout.
>> 
>>> When you say boot pool what does it mean exactly ?
>> 
>> boot pool is the pool you use to load boot loader and OS (kernel). 
>> Specifically, you point your BIOS to use boot disk belonging to boot pool 
>> and the pool itself does have bootfs property set (zpool get bootfs).
>> 
>> boot pool normally does contain the OS installation.
>> 
>>>  you mean to say should I create separate boot pool and data pool 
>>> something like
>>>  zpool create bootpool raidz disk1-p1 disk2-p1
>>>  zpool create datapool raidz disk1-p3 disk2-p3 
>>> Or you mean something else.
>>> I am still not able to understand virtualbox limit of 5 disk how it is 
>>> blocking me.
>> 
>> with virtualbox, this limit means your boot pool must be built from max 5 
>> disks, and those 5 disks must be first in disk list. If you use more disks, 
>> then virtualbox will not see the extra ones and those disks are marked as 
>> UNKNOWN. if more than parity number disks are missing, we can not read the 
>> pool.
>> 
>>> what is your recommendation to arrange 13 disk in raidz configuration (you 
>>> can avoid this question if it is going beyond )
>> 
>> There is no one answer for this question, it depends on what kind of IO will 
>> be done there. You can create one single 10+2 raidz2 with spare or 10+3 
>> raidz3, but with raidz, all writes are whole stripe writes.
>> 
>> rgds,
>> toomas
>> 
>>> 
>>> On Thu, Feb 18, 2021 at 4:48 PM Toomas Soome >> > wrote:
>>> 

Re: [OpenIndiana-discuss] RAIDZ issue

2021-02-18 Thread Thebest videos
If I create one vdev (raidz2 with 6 disks) it boots fine. (The VirtualBox
limitation of 5 disks does not come into play here.)
If I create two vdevs (raidz2 with 6 disks each) I see a boot issue.
I need to understand why the problem only comes up in the second case.

On Thu, Feb 18, 2021 at 10:12 PM Toomas Soome  wrote:

>
>
> On 18. Feb 2021, at 18:15, Thebest videos 
> wrote:
>
> we are able to achieve RAIDZ configuration in other way like , we are able
> to create RAIDZ2 with 5 disks in vdev at initial. After reboot we are
> adding disks to the existing pool as 2nd vdev with 5 disks and then reboot
> again adding disks to the same pool as 3 vdev and so on... the small
> change we have in command is as below(giving labelname of disk with
> /dev/gpt,  before we were giving disk name as ada0.)
> --before---
> zpool create datapool raidz2 ada0 ada1 ada2 ada3 ada4
> ---after
> zpool create datapool raidz2 /dev/gpt/disk0 /dev/gpt/disk1 /dev/gpt/disk2
> /dev/gpt/disk3 /dev/gpt/disk4   #for the first time
> then reboot
> then we are adding disks to the pool in the existing pool with 5 disks.
> This process is repeated for every reboot. to make these 15 disks part of
> RAIDZ. But the problem , this is not our requirement we should create RAIDZ
> with multiple vdev's in single commands instead of adding on reboot
> zpool create datapool raidz2 /dev/gpt/disk0 ...raidz2
> /dev/gpt/disk4..raidz2 /dev/gpt/disk9 . #this
> way it should work
> in short, we need create RAIDZ with all disks all at once
>
> So any suggestion to achieve at once...!?
>
>
> nono, do not let yourself to be deceived.
>
> On the initial setup, all data is on the first vdev; once the second vdev is
> added, the initial data is still on the first vdev. When you update the
> OS, the old kernel will not be overwritten, but new blocks will be
> allocated from all the vdevs, especially from the most recently added ones,
> because zfs tries to balance the vdev allocations. Once that happens,
> the bootability is gone.
>
> rgds,
> toomas
>
>
>
>
> On Thu, Feb 18, 2021 at 6:16 PM Toomas Soome  wrote:
>
>>
>>
>> On 18. Feb 2021, at 14:23, Thebest videos 
>> wrote:
>>
>> I am still new in the freebsd zfs world so I am asking the question below 
>> kindly bear with me:Is it necessary to create a boot partition on each disk, 
>> to make it part of raidz configuration
>>
>>
>> boot partition (zfs-boot/efi) needs to be on member of bootable pool for
>> two reasons; first, if you have disk failing, you want to be able to boot
>> from other disk, and secondly, it will help to keep devices in pool to have
>> exactly the same layout.
>>
>> When you say boot pool what does it mean exactly ?
>>
>>
>> boot pool is the pool you use to load boot loader and OS (kernel).
>> Specifically, you point your BIOS to use boot disk belonging to boot pool
>> and the pool itself does have bootfs property set (zpool get bootfs).
>>
>> boot pool normally does contain the OS installation.
>>
>>  you mean to say should I create separate boot pool and data pool 
>> something like zpool create bootpool raidz disk1-p1 disk2-p1
>>  zpool create datapool raidz disk1-p3 disk2-p3 Or you mean something 
>> else.I am still not able to understand virtualbox limit of 5 disk how it is 
>> blocking me.
>>
>>
>> with virtualbox, this limit means your boot pool must be built from max 5
>> disks, and those 5 disks must be first in disk list. If you use more disks,
>> then virtualbox will not see the extra ones and those disks are marked as
>> UNKNOWN. if more than parity number disks are missing, we can not read the
>> pool.
>>
>> what is your recommendation to arrange 13 disk in raidz configuration (you 
>> can avoid this question if it is going beyond )
>>
>>
>> There is no one answer for this question, it depends on what kind of IO
>> will be done there. You can create one single 10+2 raidz2 with spare or
>> 10+3 raidz3, but with raidz, all writes are whole stripe writes.
>>
>> rgds,
>> toomas
>>
>>
>> On Thu, Feb 18, 2021 at 4:48 PM Toomas Soome  wrote:
>>
>>>
>>>
>>> On 18. Feb 2021, at 12:52, Thebest videos 
>>> wrote:
>>>
>>> Ok, We also generated .img file using our custom OS from freebsd source.
>>> we are uploaded img file to digital ocean images. then we are creating
>>> droplet. everything working fine for basic operating system. but we are
>>> facing same issue at droplet side. atleast on virtual box we are able to
>>> create single vdev upto 5 disks and 2 vdevs with 3 disk each vdev(i mean
>>> upto 6 disks). but digital ocean side we are unable to create atleast
>>> single vdev with 3 disks. its working fine with 2 disks as mirror pool. we
>>> are raised the issue on digital ocean like any restrictions on number of
>>> disks towards the RAIDZ. but they says there is no constraints on number of
>>> disks. we can create as RAIDZ as many number of disks. we still don't
>>> understand where is the mistake. we also raised same query on 

Re: [OpenIndiana-discuss] RAIDZ issue

2021-02-18 Thread Toomas Soome via openindiana-discuss


> On 18. Feb 2021, at 18:15, Thebest videos  wrote:
> 
> we are able to achieve RAIDZ configuration in other way like , we are able to 
> create RAIDZ2 with 5 disks in vdev at initial. After reboot we are adding 
> disks to the existing pool as 2nd vdev with 5 disks and then reboot again 
> adding disks to the same pool as 3 vdev and so on... the small change we 
> have in command is as below(giving labelname of disk with /dev/gpt,  before 
> we were giving disk name as ada0.)
> --before---
> zpool create datapool raidz2 ada0 ada1 ada2 ada3 ada4
> ---after
> zpool create datapool raidz2 /dev/gpt/disk0 /dev/gpt/disk1 /dev/gpt/disk2 
> /dev/gpt/disk3 /dev/gpt/disk4   #for the first time
> then reboot 
> then we are adding disks to the pool in the existing pool with 5 disks. This 
> process is repeated for every reboot. to make these 15 disks part of RAIDZ. 
> But the problem , this is not our requirement we should create RAIDZ with 
> multiple vdev's in single commands instead of adding on reboot
> zpool create datapool raidz2 /dev/gpt/disk0 ...raidz2 
> /dev/gpt/disk4..raidz2 /dev/gpt/disk9 . #this way 
> it should work
> in short, we need create RAIDZ with all disks all at once 
> 
> So any suggestion to achieve at once...!? 

No, no, do not let yourself be deceived.

On the initial setup, all data is on the first vdev; once the second vdev is added,
the initial data is still on the first vdev. When you update the OS, the old
kernel will not be overwritten, but new blocks will be allocated from all the
vdevs, especially from the most recently added ones, because zfs tries to
balance the vdev allocations. Once that happens, the bootability is gone.
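
For reference, the zpool create syntax does accept several raidz2 groups in one
invocation; a hedged sketch using the /dev/gpt labels from this thread, assuming
disks 0 through 14:

zpool create datapool \
    raidz2 /dev/gpt/disk0 /dev/gpt/disk1 /dev/gpt/disk2 /dev/gpt/disk3 /dev/gpt/disk4 \
    raidz2 /dev/gpt/disk5 /dev/gpt/disk6 /dev/gpt/disk7 /dev/gpt/disk8 /dev/gpt/disk9 \
    raidz2 /dev/gpt/disk10 /dev/gpt/disk11 /dev/gpt/disk12 /dev/gpt/disk13 /dev/gpt/disk14

Note that this does not change the boot problem described above: the loader still has
to be able to read every vdev of the pool it boots from.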

rgds,
toomas


> 
> 
> On Thu, Feb 18, 2021 at 6:16 PM Toomas Soome  > wrote:
> 
> 
>> On 18. Feb 2021, at 14:23, Thebest videos > > wrote:
>> 
>> I am still new in the freebsd zfs world so I am asking the question below 
>> kindly bear with me:
>> Is it necessary to create a boot partition on each disk, to make it part of 
>> raidz configuration
> 
> boot partition (zfs-boot/efi) needs to be on member of bootable pool for two 
> reasons; first, if you have disk failing, you want to be able to boot from 
> other disk, and secondly, it will help to keep devices in pool to have 
> exactly the same layout.
> 
>> When you say boot pool what does it mean exactly ?
> 
> boot pool is the pool you use to load boot loader and OS (kernel). 
> Specifically, you point your BIOS to use boot disk belonging to boot pool and 
> the pool itself does have bootfs property set (zpool get bootfs).
> 
> boot pool normally does contain the OS installation.
> 
>>  you mean to say should I create separate boot pool and data pool 
>> something like
>>  zpool create bootpool raidz disk1-p1 disk2-p1
>>  zpool create datapool raidz disk1-p3 disk2-p3 
>> Or you mean something else.
>> I am still not able to understand virtualbox limit of 5 disk how it is 
>> blocking me.
> 
> with virtualbox, this limit means your boot pool must be built from max 5 
> disks, and those 5 disks must be first in disk list. If you use more disks, 
> then virtualbox will not see the extra ones and those disks are marked as 
> UNKNOWN. if more than parity number disks are missing, we can not read the 
> pool.
> 
>> what is your recommendation to arrange 13 disk in raidz configuration (you 
>> can avoid this question if it is going beyond )
> 
> There is no one answer for this question, it depends on what kind of IO will 
> be done there. You can create one single 10+2 raidz2 with spare or 10+3 
> raidz3, but with raidz, all writes are whole stripe writes.
> 
> rgds,
> toomas
> 
>> 
>> On Thu, Feb 18, 2021 at 4:48 PM Toomas Soome > > wrote:
>> 
>> 
>>> On 18. Feb 2021, at 12:52, Thebest videos >> > wrote:
>>> 
>>> Ok, We also generated .img file using our custom OS from freebsd source. we 
>>> are uploaded img file to digital ocean images. then we are creating 
>>> droplet. everything working fine for basic operating system. but we are 
>>> facing same issue at droplet side. atleast on virtual box we are able to 
>>> create single vdev upto 5 disks and 2 vdevs with 3 disk each vdev(i mean 
>>> upto 6 disks). but digital ocean side we are unable to create atleast 
>>> single vdev with 3 disks. its working fine with 2 disks as mirror pool. we 
>>> are raised the issue on digital ocean like any restrictions on number of 
>>> disks towards the RAIDZ. but they says there is no constraints on number of 
>>> disks. we can create as RAIDZ as many number of disks. we still don't 
>>> understand where is the mistake. we also raised same query on freebsd forum 
>>> but no response. as i already shares the manual steps which we are 
>>> following to create partitions and RAIDZ configuration. are we making any 
>>> mistake from 

Re: [OpenIndiana-discuss] RAIDZ issue

2021-02-18 Thread Thebest videos
We are able to achieve the RAIDZ configuration another way: we can create a RAIDZ2
with 5 disks in one vdev initially, and after a reboot we add disks to the existing
pool as a 2nd vdev with 5 disks, then reboot again and add disks to the same pool
as a 3rd vdev, and so on. The small change we have in the command is shown below
(we give the GPT label of the disk via /dev/gpt; before, we were giving the raw
disk name such as ada0):
--before---
zpool create datapool raidz2 ada0 ada1 ada2 ada3 ada4
---after
zpool create datapool raidz2 /dev/gpt/disk0 /dev/gpt/disk1 /dev/gpt/disk2
/dev/gpt/disk3 /dev/gpt/disk4   #for the first time
then reboot
then we add the next 5 disks to the existing pool as a new vdev. This process is
repeated on every reboot to make all 15 disks part of the RAIDZ. But the problem is
that this is not our requirement: we should create the RAIDZ with multiple vdevs in
a single command instead of adding them across reboots
zpool create datapool raidz2 /dev/gpt/disk0 ...raidz2
/dev/gpt/disk4..raidz2 /dev/gpt/disk9 . #this
way it should work
in short, we need to create the RAIDZ with all disks at once.

So, any suggestions on how to achieve this at once?


On Thu, Feb 18, 2021 at 6:16 PM Toomas Soome  wrote:

>
>
> On 18. Feb 2021, at 14:23, Thebest videos 
> wrote:
>
> I am still new in the freebsd zfs world so I am asking the question below 
> kindly bear with me:Is it necessary to create a boot partition on each disk, 
> to make it part of raidz configuration
>
>
> boot partition (zfs-boot/efi) needs to be on member of bootable pool for
> two reasons; first, if you have disk failing, you want to be able to boot
> from other disk, and secondly, it will help to keep devices in pool to have
> exactly the same layout.
>
> When you say boot pool what does it mean exactly ?
>
>
> boot pool is the pool you use to load boot loader and OS (kernel).
> Specifically, you point your BIOS to use boot disk belonging to boot pool
> and the pool itself does have bootfs property set (zpool get bootfs).
>
> boot pool normally does contain the OS installation.
>
>  you mean to say should I create separate boot pool and data pool 
> something like zpool create bootpool raidz disk1-p1 disk2-p1
>  zpool create datapool raidz disk1-p3 disk2-p3 Or you mean something 
> else.I am still not able to understand virtualbox limit of 5 disk how it is 
> blocking me.
>
>
> with virtualbox, this limit means your boot pool must be built from max 5
> disks, and those 5 disks must be first in disk list. If you use more disks,
> then virtualbox will not see the extra ones and those disks are marked as
> UNKNOWN. if more than parity number disks are missing, we can not read the
> pool.
>
> what is your recommendation to arrange 13 disk in raidz configuration (you 
> can avoid this question if it is going beyond )
>
>
> There is no one answer for this question, it depends on what kind of IO
> will be done there. You can create one single 10+2 raidz2 with spare or
> 10+3 raidz3, but with raidz, all writes are whole stripe writes.
>
> rgds,
> toomas
>
>
> On Thu, Feb 18, 2021 at 4:48 PM Toomas Soome  wrote:
>
>>
>>
>> On 18. Feb 2021, at 12:52, Thebest videos 
>> wrote:
>>
>> Ok, We also generated .img file using our custom OS from freebsd source.
>> we are uploaded img file to digital ocean images. then we are creating
>> droplet. everything working fine for basic operating system. but we are
>> facing same issue at droplet side. atleast on virtual box we are able to
>> create single vdev upto 5 disks and 2 vdevs with 3 disk each vdev(i mean
>> upto 6 disks). but digital ocean side we are unable to create atleast
>> single vdev with 3 disks. its working fine with 2 disks as mirror pool. we
>> are raised the issue on digital ocean like any restrictions on number of
>> disks towards the RAIDZ. but they says there is no constraints on number of
>> disks. we can create as RAIDZ as many number of disks. we still don't
>> understand where is the mistake. we also raised same query on freebsd forum
>> but no response. as i already shares the manual steps which we are
>> following to create partitions and RAIDZ configuration. are we making any
>> mistake from commands which we are following towards RAIDZ configuration or
>> as you said its king of restrictions on number of disks on virtual box and
>> might digital ocean side. i mean restricitons on vender side?!.  Any
>> guesses if it works(if no mistakes from commands we are using)if we attach
>> CD/image to any bare metal server...?! or any suggestions?
>>
>>
>>
>> I have no personal experience with digital ocean, but the basic test is
>> the same; if you get loader OK prompt, use lsdev -v command to check how
>> many disks you can actually see. There actually is another option too —
>> with BIOS boot, when you see the very first spinner, press space key and
>> you will get boot: prompt. This is very limited but still useful prompt
from gptzfsboot program 

Re: [OpenIndiana-discuss] RAIDZ issue

2021-02-18 Thread Toomas Soome via openindiana-discuss


> On 18. Feb 2021, at 14:23, Thebest videos  wrote:
> 
> I am still new in the freebsd zfs world so I am asking the question below 
> kindly bear with me:
> Is it necessary to create a boot partition on each disk, to make it part of 
> raidz configuration

The boot partition (zfs-boot/efi) needs to be on the members of the bootable pool
for two reasons: first, if you have a disk failing, you want to be able to boot
from another disk; and secondly, it helps keep the devices in the pool with exactly
the same layout.

> When you say boot pool what does it mean exactly ?

The boot pool is the pool you use to load the boot loader and the OS (kernel).
Specifically, you point your BIOS at a boot disk belonging to the boot pool, and
the pool itself has the bootfs property set (zpool get bootfs).

The boot pool normally contains the OS installation.
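
For example, with the pool and dataset names used later in this thread:

zpool get bootfs datapool                 # show which dataset the loader will boot from
zpool set bootfs=datapool/boot datapool   # point the pool at the dataset holding the OS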

>  you mean to say should I create separate boot pool and data pool 
> something like
>  zpool create bootpool raidz disk1-p1 disk2-p1
>  zpool create datapool raidz disk1-p3 disk2-p3 
> Or you mean something else.
> I am still not able to understand virtualbox limit of 5 disk how it is 
> blocking me.

With virtualbox, this limit means your boot pool must be built from at most 5
disks, and those 5 disks must be first in the disk list. If you use more disks,
then virtualbox will not show the extra ones to the loader and those disks are
marked as UNKNOWN. If more disks are missing than the parity allows, we cannot
read the pool.

> what is your recommendation to arrange 13 disk in raidz configuration (you 
> can avoid this question if it is going beyond )

There is no single answer to this question; it depends on what kind of IO will be
done there. You can create one single 10+2 raidz2 with a spare, or a 10+3 raidz3,
but keep in mind that with raidz, all writes are whole-stripe writes.
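
A hedged sketch of the first option, with hypothetical device names disk0..disk12:

zpool create datapool \
    raidz2 disk0 disk1 disk2 disk3 disk4 disk5 disk6 disk7 disk8 disk9 disk10 disk11 \
    spare disk12
# one 12-disk raidz2 vdev (10 data + 2 parity) plus one hot spare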

rgds,
toomas

> 
> On Thu, Feb 18, 2021 at 4:48 PM Toomas Soome  > wrote:
> 
> 
>> On 18. Feb 2021, at 12:52, Thebest videos > > wrote:
>> 
>> Ok, We also generated .img file using our custom OS from freebsd source. we 
>> are uploaded img file to digital ocean images. then we are creating droplet. 
>> everything working fine for basic operating system. but we are facing same 
>> issue at droplet side. atleast on virtual box we are able to create single 
>> vdev upto 5 disks and 2 vdevs with 3 disk each vdev(i mean upto 6 disks). 
>> but digital ocean side we are unable to create atleast single vdev with 3 
>> disks. its working fine with 2 disks as mirror pool. we are raised the issue 
>> on digital ocean like any restrictions on number of disks towards the RAIDZ. 
>> but they says there is no constraints on number of disks. we can create as 
>> RAIDZ as many number of disks. we still don't understand where is the 
>> mistake. we also raised same query on freebsd forum but no response. as i 
>> already shares the manual steps which we are following to create partitions 
>> and RAIDZ configuration. are we making any mistake from commands which we 
>> are following towards RAIDZ configuration or as you said its king of 
>> restrictions on number of disks on virtual box and might digital ocean side. 
>> i mean restricitons on vender side?!.  Any guesses if it works(if no 
>> mistakes from commands we are using)if we attach CD/image to any bare metal 
>> server...?! or any suggestions?
> 
> 
> I have no personal experience with digital ocean, but the basic test is the 
> same; if you get loader OK prompt, use lsdev -v command to check how many 
> disks you can actually see. There actually is another option too — with BIOS 
> boot, when you see the very first spinner, press space key and you will get 
> boot: prompt. This is very limited but still useful prompt from gptzfsboot 
> program (the one which will try to find and start /boot/loader). On boot: 
> prompt, you can enter: status — this will produce the same report as you get 
> from lsdev.
> 
> So, if you know your VM should have, say, 10 disks, but boot: status or ok 
> lsdev will show less, then you know, there must be BIOS limit (we do use BIOS 
> INT13h to access the disks).
> 
> Please note, if the provider does offer option to use UEFI, it *may* support 
> greater number of boot disks, the same check does apply with UEFI as well 
> (lsdev -v).
> 
> rgds,
> toomas
> 
> 
>> These are commands we are using to create partitions and RAIDZ configuration
>> NOTE: we are creating below gpart partitions(boot,swap,root) on all hard 
>> disks then adding those hard disks in zpool command
>> Doubt: should we create partitions(boot,swap,root) on all hard disks to make 
>> part of RAIDZ configuration or is it enough to add in zpool as raw disks or 
>> making 2-3 disks as bootable then remaining as raw disks? anyway please 
>> check below commands we are using to create partitions and zpool 
>> configurations
>> gpart create -s gpt /dev/da0
>> gpart add -a 4k -s 512K -t freebsd-boot da0
>> gpart bootcode -b /boot/pmbr -p 

Re: [OpenIndiana-discuss] RAIDZ issue

2021-02-18 Thread Thebest videos
I am still new to the FreeBSD ZFS world, so I am asking the questions below;
kindly bear with me:
Is it necessary to create a boot partition on each disk to make it part of the
raidz configuration?
When you say boot pool, what does it mean exactly?
Do you mean I should create a separate boot pool and data pool, something like:
 zpool create bootpool raidz disk1-p1 disk2-p1
 zpool create datapool raidz disk1-p3 disk2-p3
Or do you mean something else?
I am still not able to understand how the VirtualBox limit of 5 disks is blocking me.
What is your recommendation for arranging 13 disks in a raidz configuration? (You
can skip this question if it is going beyond the scope.)


On Thu, Feb 18, 2021 at 4:48 PM Toomas Soome  wrote:

>
>
> On 18. Feb 2021, at 12:52, Thebest videos 
> wrote:
>
> Ok, We also generated .img file using our custom OS from freebsd source.
> we are uploaded img file to digital ocean images. then we are creating
> droplet. everything working fine for basic operating system. but we are
> facing same issue at droplet side. atleast on virtual box we are able to
> create single vdev upto 5 disks and 2 vdevs with 3 disk each vdev(i mean
> upto 6 disks). but digital ocean side we are unable to create atleast
> single vdev with 3 disks. its working fine with 2 disks as mirror pool. we
> are raised the issue on digital ocean like any restrictions on number of
> disks towards the RAIDZ. but they says there is no constraints on number of
> disks. we can create as RAIDZ as many number of disks. we still don't
> understand where is the mistake. we also raised same query on freebsd forum
> but no response. as i already shares the manual steps which we are
> following to create partitions and RAIDZ configuration. are we making any
> mistake from commands which we are following towards RAIDZ configuration or
> as you said its king of restrictions on number of disks on virtual box and
> might digital ocean side. i mean restricitons on vender side?!.  Any
> guesses if it works(if no mistakes from commands we are using)if we attach
> CD/image to any bare metal server...?! or any suggestions?
>
>
>
> I have no personal experience with digital ocean, but the basic test is
> the same; if you get loader OK prompt, use lsdev -v command to check how
> many disks you can actually see. There actually is another option too —
> with BIOS boot, when you see the very first spinner, press space key and
> you will get boot: prompt. This is very limited but still useful prompt
> from gptzfsboot program (the one which will try to find and start
> /boot/loader). On boot: prompt, you can enter: status — this will produce
> the same report as you get from lsdev.
>
> So, if you know your VM should have, say, 10 disks, but boot: status or ok
> lsdev will show less, then you know, there must be BIOS limit (we do use
> BIOS INT13h to access the disks).
>
> Please note, if the provider does offer option to use UEFI, it *may*
> support greater number of boot disks, the same check does apply with UEFI
> as well (lsdev -v).
>
> rgds,
> toomas
>
>
> These are commands we are using to create partitions and RAIDZ
> configuration
> NOTE: we are creating below gpart partitions(boot,swap,root) on all hard
> disks then adding those hard disks in zpool command
> Doubt: should we create partitions(boot,swap,root) on all hard disks to
> make part of RAIDZ configuration or is it enough to add in zpool as raw
> disks or making 2-3 disks as bootable then remaining as raw disks? anyway
> please check below commands we are using to create partitions and zpool
> configurations
> gpart create -s gpt /dev/da0
> gpart add -a 4k -s 512K -t freebsd-boot da0
> gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0
> gpart add -a 1m -s 2G -t freebsd-swap -l swap1 da0
> gpart add -a 1m -t freebsd-zfs -l disk1 da0
> zpool create -o altroot=/mnt datapool raidz2 ada0p3 ada1p3 ada2p3 ada3p3
> raidz2 ada4p3 ada5p3 ada6p3 ada7p3
> zfs create -o mountpoint=/ -o canmount=noauto datapool/boot
> mount -t zfs datapool/boot /mnt
> cp -r /temp/* /mnt/.
> zpool set bootfs=datapool/boot datapool
> zfs create -o mountpoint=/storage -o canmount=noauto datapool/storage
> zfs create -o mountpoint=/conf -o canmount=noauto datapool/conf
> shutdown and remove iso/img and start it again
> zpool import datapool
> mkdir /conf /storage
> mount -t zfs datapool/conf /conf
> mount -t zfs datapool/storage /storage
>
>
>
> On Thu, Feb 18, 2021 at 3:33 PM Toomas Soome  wrote:
>
>>
>>
>> On 18. Feb 2021, at 11:52, Thebest videos 
>> wrote:
>>
>> as per your reply, im not clear
>> although i've tried to create 2 pools with 4 disks(for testing purpose)
>> each pool in a single vdev as expected it works. but that is not our
>> requirement since we intended to chose single pool as many number of disks
>> which should part of multiple vdev's based of condition(max 5 disks each
>> vdev) and any disks left after part of vdev should act as spare 

Re: [OpenIndiana-discuss] RAIDZ issue

2021-02-18 Thread Toomas Soome via openindiana-discuss


> On 18. Feb 2021, at 12:52, Thebest videos  wrote:
> 
> Ok, We also generated .img file using our custom OS from freebsd source. we 
> are uploaded img file to digital ocean images. then we are creating droplet. 
> everything working fine for basic operating system. but we are facing same 
> issue at droplet side. atleast on virtual box we are able to create single 
> vdev upto 5 disks and 2 vdevs with 3 disk each vdev(i mean upto 6 disks). but 
> digital ocean side we are unable to create atleast single vdev with 3 disks. 
> its working fine with 2 disks as mirror pool. we are raised the issue on 
> digital ocean like any restrictions on number of disks towards the RAIDZ. but 
> they says there is no constraints on number of disks. we can create as RAIDZ 
> as many number of disks. we still don't understand where is the mistake. we 
> also raised same query on freebsd forum but no response. as i already shares 
> the manual steps which we are following to create partitions and RAIDZ 
> configuration. are we making any mistake from commands which we are following 
> towards RAIDZ configuration or as you said its king of restrictions on number 
> of disks on virtual box and might digital ocean side. i mean restricitons on 
> vender side?!.  Any guesses if it works(if no mistakes from commands we are 
> using)if we attach CD/image to any bare metal server...?! or any suggestions?


I have no personal experience with DigitalOcean, but the basic test is the same:
if you get the loader OK prompt, use the lsdev -v command to check how many disks
you can actually see. There is actually another option too: with BIOS boot, when
you see the very first spinner, press the space key and you will get the boot:
prompt. This is a very limited but still useful prompt from the gptzfsboot program
(the one which tries to find and start /boot/loader). At the boot: prompt you can
enter: status. This will produce the same report as you get from lsdev.

So, if you know your VM should have, say, 10 disks, but boot: status or OK lsdev
shows fewer, then you know there must be a BIOS limit (we use BIOS INT13h to access
the disks).

Please note that if the provider offers the option to use UEFI, it *may* support a
greater number of boot disks; the same check applies with UEFI as well (lsdev -v).
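
In short, the two checks look like this (only the commands are shown, at the
prompts described above; no output is reproduced here):

OK lsdev -v     # at the loader prompt: list the devices the loader can actually see
boot: status    # at the gptzfsboot prompt (press space at the very first spinner)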

rgds,
toomas


> These are commands we are using to create partitions and RAIDZ configuration
> NOTE: we are creating below gpart partitions(boot,swap,root) on all hard 
> disks then adding those hard disks in zpool command
> Doubt: should we create partitions(boot,swap,root) on all hard disks to make 
> part of RAIDZ configuration or is it enough to add in zpool as raw disks or 
> making 2-3 disks as bootable then remaining as raw disks? anyway please check 
> below commands we are using to create partitions and zpool configurations
> gpart create -s gpt /dev/da0
> gpart add -a 4k -s 512K -t freebsd-boot da0
> gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0
> gpart add -a 1m -s 2G -t freebsd-swap -l swap1 da0
> gpart add -a 1m -t freebsd-zfs -l disk1 da0
> zpool create -o altroot=/mnt datapool raidz2 ada0p3 ada1p3 ada2p3 ada3p3
> raidz2 ada4p3 ada5p3 ada6p3 ada7p3
> zfs create -o mountpoint=/ -o canmount=noauto datapool/boot
> mount -t zfs datapool/boot /mnt
> cp -r /temp/* /mnt/.
> zpool set bootfs=datapool/boot datapool
> zfs create -o mountpoint=/storage -o canmount=noauto datapool/storage
> zfs create -o mountpoint=/conf -o canmount=noauto datapool/conf
> shutdown and remove iso/img and start it again
> zpool import datapool
> mkdir /conf /storage
> mount -t zfs datapool/conf /conf
> mount -t zfs datapool/storage /storage
> 
> 
> 
> On Thu, Feb 18, 2021 at 3:33 PM Toomas Soome  > wrote:
> 
> 
>> On 18. Feb 2021, at 11:52, Thebest videos > > wrote:
>> 
>> as per your reply, im not clear 
>> although i've tried to create 2 pools with 4 disks(for testing purpose) each 
>> pool in a single vdev as expected it works. but that is not our requirement 
>> since we intended to chose single pool as many number of disks which should 
>> part of multiple vdev's based of condition(max 5 disks each vdev) and any 
>> disks left after part of vdev should act as spare disks.  
>> finally max 5 disks are coming ONLINE in vdev remaining disks going as says 
>> OFFLINE state disk state is UNKNOWN. is there anyway to fix this issue.
>> 
> 
> 
> If you want to use virtualbox, then there is limit that virtualbox does only 
> see first 5 disk devices. This is vbox limit and there are only two options 
> about it - either accept it or to file feature request to virtualbox 
> developers.
> 
> Different systems can set different limits there, for example, VMware Fusion 
> does support booting from first 12 disks. It also can have more disks than 
> 12, but  only first 12 are visible for boot loader.
> 
> Real hardware is vendor 

Re: [OpenIndiana-discuss] RAIDZ issue

2021-02-18 Thread Thebest videos
OK. We also generated a .img file of our custom OS from the FreeBSD source and
uploaded it to DigitalOcean images, then created a droplet from it. Everything
works fine for the basic operating system, but we are facing the same issue on the
droplet side. At least on VirtualBox we are able to create a single vdev with up
to 5 disks, or 2 vdevs with 3 disks each (i.e. up to 6 disks); on the DigitalOcean
side we are unable to get even a single vdev with 3 disks to work, though it works
fine with 2 disks as a mirror pool. We raised the issue with DigitalOcean, asking
whether there are any restrictions on the number of disks for RAIDZ, but they say
there is no constraint on the number of disks and we can create RAIDZ with as many
disks as we like. We still don't understand where the mistake is. We also raised
the same query on the FreeBSD forum but got no response. I already shared the
manual steps we follow to create the partitions and the RAIDZ configuration. Are
we making a mistake in the commands we use for the RAIDZ configuration, or, as you
said, is it a kind of restriction on the number of disks on VirtualBox and perhaps
on the DigitalOcean side, i.e. a restriction on the vendor side? Any guess whether
it would work (assuming no mistakes in the commands we are using) if we attach the
CD/image to a bare-metal server? Or any suggestions?
These are the commands we are using to create the partitions and the RAIDZ
configuration.
NOTE: we create the gpart partitions below (boot, swap, root) on every hard disk,
then add those disks in the zpool command (a loop sketch for all the disks follows
the command listing).
Doubt: should we create the partitions (boot, swap, root) on all hard disks to
make them part of the RAIDZ configuration, or is it enough to add them to the
zpool as raw disks, or to make only 2-3 disks bootable and leave the rest as raw
disks? In any case, please check the commands below that we use to create the
partitions and the zpool configuration:
gpart create -s gpt /dev/da0
gpart add -a 4k -s 512K -t freebsd-boot da0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0
gpart add -a 1m -s 2G -t freebsd-swap -l swap1 da0
gpart add -a 1m -t freebsd-zfs -l disk1 da0

zpool create -o altroot=/mnt datapool raidz2 ada0p3 ada1p3 ada2p3 ada3p3
raidz2 ada4p3 ada5p3 ada6p3 ada7p3
zfs create -o mountpoint=/ -o canmount=noauto datapool/boot
mount -t zfs datapool/boot /mnt
cp -r /temp/* /mnt/.
zpool set bootfs=datapool/boot datapool
zfs create -o mountpoint=/storage -o canmount=noauto datapool/storage
zfs create -o mountpoint=/conf -o canmount=noauto datapool/conf
shutdown and remove iso/img and start it again
zpool import datapool
mkdir /conf /storage
mount -t zfs datapool/conf /conf
mount -t zfs datapool/storage /storage
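
As noted above, the gpart steps are repeated for every disk; a minimal sh loop
sketch, assuming 15 disks da0 through da14 and the swapN/diskN label scheme used
above:

#!/bin/sh
# Partition every disk identically: boot code, 2 GB swap, rest for ZFS.
for i in $(jot 15 0); do      # jot 15 0 prints 0..14 on FreeBSD
    d=da$i
    gpart create -s gpt /dev/$d
    gpart add -a 4k -s 512K -t freebsd-boot $d
    gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 $d
    gpart add -a 1m -s 2G -t freebsd-swap -l swap$i $d
    gpart add -a 1m -t freebsd-zfs -l disk$i $d
done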



On Thu, Feb 18, 2021 at 3:33 PM Toomas Soome  wrote:

>
>
> On 18. Feb 2021, at 11:52, Thebest videos 
> wrote:
>
> as per your reply, im not clear
> although i've tried to create 2 pools with 4 disks(for testing purpose)
> each pool in a single vdev as expected it works. but that is not our
> requirement since we intended to chose single pool as many number of disks
> which should part of multiple vdev's based of condition(max 5 disks each
> vdev) and any disks left after part of vdev should act as spare disks.
> finally max 5 disks are coming ONLINE in vdev remaining disks going as
> says OFFLINE state disk state is UNKNOWN. is there anyway to fix this issue.
>
>
>
> If you want to use virtualbox, then there is limit that virtualbox does
> only see first 5 disk devices. This is vbox limit and there are only two
> options about it - either accept it or to file feature request to
> virtualbox developers.
>
> Different systems can set different limits there, for example, VMware
> Fusion does support booting from first 12 disks. It also can have more
> disks than 12, but  only first 12 are visible for boot loader.
>
> Real hardware is vendor specific.
>
> rgds,
> toomas
>
>
> On Thu, Feb 18, 2021 at 1:24 AM Toomas Soome via openindiana-discuss <
> openindiana-discuss@openindiana.org> wrote:
>
>>
>>
>> > On 17. Feb 2021, at 20:49, Thebest videos 
>> wrote:
>> >
>> > NOTE: we are getting issues after shutdown , then remove ISO file from
>> > virtualBox then power on the server. if we attach an ISO file we are
>> safe
>> > with our Zpool stuff. and we are creating boot,swap,root partitions on
>> each
>> > disks.
>>
>> vbox seems to have limit on boot disks - it appears to “see” 5, My vbox
>> has IDE for boot disk, and I did add 6 sas disks, I only can see 5 — ide +
>> 4 sas.
>>
>> So all you need to do is to add disk for boot pool, and make sure it is
>> first one - once kernel is up, it can see all the disks.
>>
>> rgds,
>> toomas
>>
>>
>> > I'm not able to understand First 5 disks are ONLINE and remaining disks
>> are
>> > UNKNOWN state after power off and then power on
>> > actually our requirement is to create RAIDZ1/RAIDZ2 with single
>> vdev(upto 5
>> > disks per vdev) if more than 5 or less than 10 disks then those
>> disks(after
>> > 5disks) are spare part shouldn't be included any vdev. if we 

Re: [OpenIndiana-discuss] RAIDZ issue

2021-02-18 Thread Toomas Soome via openindiana-discuss


> On 18. Feb 2021, at 11:52, Thebest videos  wrote:
> 
> as per your reply, im not clear 
> although i've tried to create 2 pools with 4 disks(for testing purpose) each 
> pool in a single vdev as expected it works. but that is not our requirement 
> since we intended to chose single pool as many number of disks which should 
> part of multiple vdev's based of condition(max 5 disks each vdev) and any 
> disks left after part of vdev should act as spare disks.  
> finally max 5 disks are coming ONLINE in vdev remaining disks going as says 
> OFFLINE state disk state is UNKNOWN. is there anyway to fix this issue.
> 


If you want to use virtualbox, then there is a limit: virtualbox only lets the boot
loader see the first 5 disk devices. This is a vbox limit and there are only two
options about it - either accept it or file a feature request with the virtualbox
developers.

Different systems set different limits there; for example, VMware Fusion supports
booting from the first 12 disks. It can also have more disks than 12, but only the
first 12 are visible to the boot loader.

Real hardware is vendor specific.

rgds,
toomas


> On Thu, Feb 18, 2021 at 1:24 AM Toomas Soome via openindiana-discuss 
>  > wrote:
> 
> 
> > On 17. Feb 2021, at 20:49, Thebest videos  > > wrote:
> > 
> > NOTE: we are getting issues after shutdown , then remove ISO file from
> > virtualBox then power on the server. if we attach an ISO file we are safe
> > with our Zpool stuff. and we are creating boot,swap,root partitions on each
> > disks.
> 
> vbox seems to have limit on boot disks - it appears to “see” 5, My vbox has 
> IDE for boot disk, and I did add 6 sas disks, I only can see 5 — ide + 4 sas.
> 
> So all you need to do is to add disk for boot pool, and make sure it is first 
> one - once kernel is up, it can see all the disks.
> 
> rgds,
> toomas
> 
> 
> > I'm not able to understand First 5 disks are ONLINE and remaining disks are
> > UNKNOWN state after power off and then power on
> > actually our requirement is to create RAIDZ1/RAIDZ2 with single vdev(upto 5
> > disks per vdev) if more than 5 or less than 10 disks then those disks(after
> > 5disks) are spare part shouldn't be included any vdev. if we have
> > multiple's of 5 disks then we need to create multiple vdev in a pool
> > example: RAIDZ2 : if total 7 disks then 5 disks as single vdev, remaining 2
> > disks as spare parts nothing to do. and if we have 12 disks intotal then 2
> > vdevs (5 disks per vdev) so total 10 disks in 2 vdevs remaining 2disks as
> > spare.
> > RAIDZ1: if we have only 3 disks then we should create RAIDZ1
> > 
> > Here, we wrote a zfs script for our requirements(but currently testing with
> > manual commands). We are able to createRAIDZ2 with a single vdev in a pool
> > for 5 disks. it works upto 9 disks but if we have 10 disks then 2 vdevs are
> > created after power on the same error coming like zfs: i/o error all copies
> > blocked.
> > I was testing the RAIDZ like I'm creating 2 vdevs which have 3 disks per
> > each vdev.its working fine even after shutdown and power on(as says that we
> > are removing the ISO file after shutdown).
> > but the issue is when we create 2 vdevs with 4 disks per each vdev.this
> > time we are not getting error its giving options like we press esc button
> > what kind of options we see those options are coming. if i type lsdev -v(as
> > you said before). first 5 disks are online and the remaining 3 disks are
> > UNKNOWN.
> > 
> > FInally, I need to setup RAIDZ configuration with 5 multiples of disks per
> > each vdev.  please look once again below commands im using to create
> > partitions and RAIDZ configuration
> > 
> > NOTE: below gpart commands are running for each disk
> > 
> > gpart create -s gpt ada0
> > 
> > 
> > gpart add -a 4k -s 512K -t freebsd-boot ada0
> > 
> > 
> > gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
> > 
> > 
> > gpart add -a 1m -s 2G -t freebsd-swap -l swap0 ada0
> > 
> > 
> > gpart add -a 1m -t freebsd-zfs -l disk0 ada0
> > 
> > 
> > zpool create -o altroot=/mnt datapool raidz2 ada0p3 ada1p3 ada2p3
> > ada3p3  raidz2 ada4p3 ada5p3 ada6p3 ada7p3
> > 
> > 
> > zfs create -o mountpoint=/ -o canmount=noauto datapool/boot
> > 
> > 
> > mount -t zfs datapool/boot /mnt
> > 
> > 
> > mount_cd9660 /dev/cd0 /media
> > 
> > 
> > cp -r /media/* /mnt/.
> > 
> > 
> > zpool set bootfs=datapool/boot datapool
> > 
> > 
> > shutdown and remove ISO and power on the server
> > 
> > 
> > kindly suggest me steps if im wrong
> > 
> > On Wed, Feb 17, 2021 at 11:51 PM Thebest videos  > >
> > wrote:
> > 
> >> prtconf -v | grep biosdev not working on freebsd
> >> i think its legacy boot system(im not sure actually i didnt find anything
> >> about EFI related stuff) is there anyway to check EFI
> >> 
> >> Create the pool with EFI boot:
> >> # zpool create -B rpool raidz c0t0d0 c0t1d0 c0t3d0
> >> 
> >> how can i 

Re: [OpenIndiana-discuss] RAIDZ issue

2021-02-17 Thread Thebest videos
++ruchir.bharad...@yahoo.com

On Thu, Feb 18, 2021 at 10:23 AM Thebest videos 
wrote:

> ++ruchir.bharad...@yahoo.com
>
> On Thu, Feb 18, 2021 at 1:24 AM Toomas Soome via openindiana-discuss <
> openindiana-discuss@openindiana.org> wrote:
>
>>
>>
>> > On 17. Feb 2021, at 20:49, Thebest videos 
>> wrote:
>> >
>> > NOTE: we are getting issues after shutdown , then remove ISO file from
>> > virtualBox then power on the server. if we attach an ISO file we are
>> safe
>> > with our Zpool stuff. and we are creating boot,swap,root partitions on
>> each
>> > disks.
>>
>> vbox seems to have limit on boot disks - it appears to “see” 5, My vbox
>> has IDE for boot disk, and I did add 6 sas disks, I only can see 5 — ide +
>> 4 sas.
>>
>> So all you need to do is to add disk for boot pool, and make sure it is
>> first one - once kernel is up, it can see all the disks.
>>
>> rgds,
>> toomas
>>
>>
>> > I'm not able to understand First 5 disks are ONLINE and remaining disks
>> are
>> > UNKNOWN state after power off and then power on
>> > actually our requirement is to create RAIDZ1/RAIDZ2 with single
>> vdev(upto 5
>> > disks per vdev) if more than 5 or less than 10 disks then those
>> disks(after
>> > 5disks) are spare part shouldn't be included any vdev. if we have
>> > multiple's of 5 disks then we need to create multiple vdev in a pool
>> > example: RAIDZ2 : if total 7 disks then 5 disks as single vdev,
>> remaining 2
>> > disks as spare parts nothing to do. and if we have 12 disks intotal
>> then 2
>> > vdevs (5 disks per vdev) so total 10 disks in 2 vdevs remaining 2disks
>> as
>> > spare.
>> > RAIDZ1: if we have only 3 disks then we should create RAIDZ1
>> >
>> > Here, we wrote a zfs script for our requirements(but currently testing
>> with
>> > manual commands). We are able to createRAIDZ2 with a single vdev in a
>> pool
>> > for 5 disks. it works upto 9 disks but if we have 10 disks then 2 vdevs
>> are
>> > created after power on the same error coming like zfs: i/o error all
>> copies
>> > blocked.
>> > I was testing the RAIDZ like I'm creating 2 vdevs which have 3 disks per
>> > each vdev.its working fine even after shutdown and power on(as says
>> that we
>> > are removing the ISO file after shutdown).
>> > but the issue is when we create 2 vdevs with 4 disks per each vdev.this
>> > time we are not getting error its giving options like we press esc
>> button
>> > what kind of options we see those options are coming. if i type lsdev
>> -v(as
>> > you said before). first 5 disks are online and the remaining 3 disks are
>> > UNKNOWN.
>> >
>> > FInally, I need to setup RAIDZ configuration with 5 multiples of disks
>> per
>> > each vdev.  please look once again below commands im using to create
>> > partitions and RAIDZ configuration
>> >
>> > NOTE: below gpart commands are running for each disk
>> >
>> > gpart create -s gpt ada0
>> >
>> >
>> > gpart add -a 4k -s 512K -t freebsd-boot ada0
>> >
>> >
>> > gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
>> >
>> >
>> > gpart add -a 1m -s 2G -t freebsd-swap -l swap0 ada0
>> >
>> >
>> > gpart add -a 1m -t freebsd-zfs -l disk0 ada0
>> >
>> >
>> > zpool create -o altroot=/mnt datapool raidz2 ada0p3 ada1p3 ada2p3
>> > ada3p3  raidz2 ada4p3 ada5p3 ada6p3 ada7p3
>> >
>> >
>> > zfs create -o mountpoint=/ -o canmount=noauto datapool/boot
>> >
>> >
>> > mount -t zfs datapool/boot /mnt
>> >
>> >
>> > mount_cd9660 /dev/cd0 /media
>> >
>> >
>> > cp -r /media/* /mnt/.
>> >
>> >
>> > zpool set bootfs=datapool/boot datapool
>> >
>> >
>> > shutdown and remove ISO and power on the server
>> >
>> >
>> > kindly suggest me steps if im wrong
>> >
>> > On Wed, Feb 17, 2021 at 11:51 PM Thebest videos <
>> sri.chityala...@gmail.com>
>> > wrote:
>> >
>> >> prtconf -v | grep biosdev not working on freebsd
>> >> i think its legacy boot system(im not sure actually i didnt find
>> anything
>> >> about EFI related stuff) is there anyway to check EFI
>> >>
>> >> Create the pool with EFI boot:
>> >> # zpool create -B rpool raidz c0t0d0 c0t1d0 c0t3d0
>> >>
>> >> how can i create pool with EFI
>> >> and -B what it refers?
>> >>
>> >> On Wed, Feb 17, 2021 at 11:00 PM John D Groenveld 
>> >> wrote:
>> >>
>> >>> In message <272389262.2537371.1613575739...@mail.yahoo.com>, Reginald
>> >>> Beardsley
>> >>> via openindiana-discuss writes:
>>  I was not aware that it was possible to boot from RAIDZ. It wasn't
>> >>> possible wh
>> >>>
>> >>> With the current text installer, escape to a shell.
>> >>> Confirm the disks are all BIOS accessible:
>> >>> # prtconf -v | grep biosdev
>> >>> Create the pool with EFI boot:
>> >>> # zpool create -B rpool raidz c0t0d0 c0t1d0 c0t3d0
>> >>> Exit and return to the installer and then F5 Install to an Existing
>> Pool
>> >>>
>> >>> John
>> >>> groenv...@acm.org
>> >>>
>> >>> ___
>> >>> openindiana-discuss mailing list
>> >>> openindiana-discuss@openindiana.org
>> >>> 

Re: [OpenIndiana-discuss] RAIDZ issue

2021-02-17 Thread Thebest videos
++ruchir.bharad...@yahoo.com

On Thu, Feb 18, 2021 at 1:24 AM Toomas Soome via openindiana-discuss <
openindiana-discuss@openindiana.org> wrote:

>
>
> > On 17. Feb 2021, at 20:49, Thebest videos 
> wrote:
> >
> > NOTE: we are getting issues after shutdown , then remove ISO file from
> > virtualBox then power on the server. if we attach an ISO file we are safe
> > with our Zpool stuff. and we are creating boot,swap,root partitions on
> each
> > disks.
>
> vbox seems to have limit on boot disks - it appears to “see” 5, My vbox
> has IDE for boot disk, and I did add 6 sas disks, I only can see 5 — ide +
> 4 sas.
>
> So all you need to do is to add disk for boot pool, and make sure it is
> first one - once kernel is up, it can see all the disks.
>
> rgds,
> toomas
>
>
> > I'm not able to understand First 5 disks are ONLINE and remaining disks
> are
> > UNKNOWN state after power off and then power on
> > actually our requirement is to create RAIDZ1/RAIDZ2 with single
> vdev(upto 5
> > disks per vdev) if more than 5 or less than 10 disks then those
> disks(after
> > 5disks) are spare part shouldn't be included any vdev. if we have
> > multiple's of 5 disks then we need to create multiple vdev in a pool
> > example: RAIDZ2 : if total 7 disks then 5 disks as single vdev,
> remaining 2
> > disks as spare parts nothing to do. and if we have 12 disks intotal then
> 2
> > vdevs (5 disks per vdev) so total 10 disks in 2 vdevs remaining 2disks as
> > spare.
> > RAIDZ1: if we have only 3 disks then we should create RAIDZ1
> >
> > Here, we wrote a zfs script for our requirements(but currently testing
> with
> > manual commands). We are able to createRAIDZ2 with a single vdev in a
> pool
> > for 5 disks. it works upto 9 disks but if we have 10 disks then 2 vdevs
> are
> > created after power on the same error coming like zfs: i/o error all
> copies
> > blocked.
> > I was testing the RAIDZ like I'm creating 2 vdevs which have 3 disks per
> > each vdev.its working fine even after shutdown and power on(as says that
> we
> > are removing the ISO file after shutdown).
> > but the issue is when we create 2 vdevs with 4 disks per each vdev.this
> > time we are not getting error its giving options like we press esc button
> > what kind of options we see those options are coming. if i type lsdev
> -v(as
> > you said before). first 5 disks are online and the remaining 3 disks are
> > UNKNOWN.
> >
> > FInally, I need to setup RAIDZ configuration with 5 multiples of disks
> per
> > each vdev.  please look once again below commands im using to create
> > partitions and RAIDZ configuration
> >
> > NOTE: below gpart commands are running for each disk
> >
> > gpart create -s gpt ada0
> >
> >
> > gpart add -a 4k -s 512K -t freebsd-boot ada0
> >
> >
> > gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
> >
> >
> > gpart add -a 1m -s 2G -t freebsd-swap -l swap0 ada0
> >
> >
> > gpart add -a 1m -t freebsd-zfs -l disk0 ada0
> >
> >
> > zpool create -o altroot=/mnt datapool raidz2 ada0p3 ada1p3 ada2p3
> > ada3p3  raidz2 ada4p3 ada5p3 ada6p3 ada7p3
> >
> >
> > zfs create -o mountpoint=/ -o canmount=noauto datapool/boot
> >
> >
> > mount -t zfs datapool/boot /mnt
> >
> >
> > mount_cd9660 /dev/cd0 /media
> >
> >
> > cp -r /media/* /mnt/.
> >
> >
> > zpool set bootfs=datapool/boot datapool
> >
> >
> > shutdown and remove ISO and power on the server
> >
> >
> > kindly suggest me steps if im wrong
> >
> > On Wed, Feb 17, 2021 at 11:51 PM Thebest videos <
> sri.chityala...@gmail.com>
> > wrote:
> >
> >> prtconf -v | grep biosdev not working on freebsd
> >> i think its legacy boot system(im not sure actually i didnt find
> anything
> >> about EFI related stuff) is there anyway to check EFI
> >>
> >> Create the pool with EFI boot:
> >> # zpool create -B rpool raidz c0t0d0 c0t1d0 c0t3d0
> >>
> >> how can i create pool with EFI
> >> and -B what it refers?
> >>
> >> On Wed, Feb 17, 2021 at 11:00 PM John D Groenveld 
> >> wrote:
> >>
> >>> In message <272389262.2537371.1613575739...@mail.yahoo.com>, Reginald
> >>> Beardsley
> >>> via openindiana-discuss writes:
>  I was not aware that it was possible to boot from RAIDZ. It wasn't
> >>> possible wh
> >>>
> >>> With the current text installer, escape to a shell.
> >>> Confirm the disks are all BIOS accessible:
> >>> # prtconf -v | grep biosdev
> >>> Create the pool with EFI boot:
> >>> # zpool create -B rpool raidz c0t0d0 c0t1d0 c0t3d0
> >>> Exit and return to the installer and then F5 Install to an Existing
> Pool
> >>>
> >>> John
> >>> groenv...@acm.org
> >>>
> >>> ___
> >>> openindiana-discuss mailing list
> >>> openindiana-discuss@openindiana.org
> >>> https://openindiana.org/mailman/listinfo/openindiana-discuss
> >>>
> >>
> > ___
> > openindiana-discuss mailing list
> > openindiana-discuss@openindiana.org
> > https://openindiana.org/mailman/listinfo/openindiana-discuss
>
>
> 

Re: [OpenIndiana-discuss] RAIDZ issue

2021-02-17 Thread Toomas Soome via openindiana-discuss


> On 17. Feb 2021, at 20:49, Thebest videos  wrote:
> 
> NOTE: we are getting issues after shutdown , then remove ISO file from
> virtualBox then power on the server. if we attach an ISO file we are safe
> with our Zpool stuff. and we are creating boot,swap,root partitions on each
> disks.

VirtualBox seems to have a limit on boot disks - it appears to "see" only 5. My VM has an
IDE boot disk and 6 SAS disks attached, yet only 5 are visible at boot: the IDE disk plus 4 SAS disks.

So all you need to do is add a dedicated disk for the boot pool and make sure it is the
first one - once the kernel is up, it can see all the disks.
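For example, something along these lines (a rough sketch only, with example FreeBSD device
names; adjust it to your own partition layout):

# small boot pool on the first (BIOS-visible) disk
zpool create -o altroot=/mnt bootpool ada0p3
zfs create -o mountpoint=/ -o canmount=noauto bootpool/boot
zpool set bootfs=bootpool/boot bootpool
# data pool on the remaining disks; the loader only has to read bootpool
zpool create -o altroot=/mnt datapool raidz2 ada1p3 ada2p3 ada3p3 ada4p3 ada5p3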

rgds,
toomas


> I'm not able to understand First 5 disks are ONLINE and remaining disks are
> UNKNOWN state after power off and then power on
> actually our requirement is to create RAIDZ1/RAIDZ2 with single vdev(upto 5
> disks per vdev) if more than 5 or less than 10 disks then those disks(after
> 5disks) are spare part shouldn't be included any vdev. if we have
> multiple's of 5 disks then we need to create multiple vdev in a pool
> example: RAIDZ2 : if total 7 disks then 5 disks as single vdev, remaining 2
> disks as spare parts nothing to do. and if we have 12 disks intotal then 2
> vdevs (5 disks per vdev) so total 10 disks in 2 vdevs remaining 2disks as
> spare.
> RAIDZ1: if we have only 3 disks then we should create RAIDZ1
> 
> Here, we wrote a zfs script for our requirements(but currently testing with
> manual commands). We are able to createRAIDZ2 with a single vdev in a pool
> for 5 disks. it works upto 9 disks but if we have 10 disks then 2 vdevs are
> created after power on the same error coming like zfs: i/o error all copies
> blocked.
> I was testing the RAIDZ like I'm creating 2 vdevs which have 3 disks per
> each vdev.its working fine even after shutdown and power on(as says that we
> are removing the ISO file after shutdown).
> but the issue is when we create 2 vdevs with 4 disks per each vdev.this
> time we are not getting error its giving options like we press esc button
> what kind of options we see those options are coming. if i type lsdev -v(as
> you said before). first 5 disks are online and the remaining 3 disks are
> UNKNOWN.
> 
> FInally, I need to setup RAIDZ configuration with 5 multiples of disks per
> each vdev.  please look once again below commands im using to create
> partitions and RAIDZ configuration
> 
> NOTE: below gpart commands are running for each disk
> 
> gpart create -s gpt ada0
> 
> 
> gpart add -a 4k -s 512K -t freebsd-boot ada0
> 
> 
> gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
> 
> 
> gpart add -a 1m -s 2G -t freebsd-swap -l swap0 ada0
> 
> 
> gpart add -a 1m -t freebsd-zfs -l disk0 ada0
> 
> 
> zpool create -o altroot=/mnt datapool raidz2 ada0p3 ada1p3 ada2p3
> ada3p3  raidz2 ada4p3 ada5p3 ada6p3 ada7p3
> 
> 
> zfs create -o mountpoint=/ -o canmount=noauto datapool/boot
> 
> 
> mount -t zfs datapool/boot /mnt
> 
> 
> mount_cd9660 /dev/cd0 /media
> 
> 
> cp -r /media/* /mnt/.
> 
> 
> zpool set bootfs=datapool/boot datapool
> 
> 
> shutdown and remove ISO and power on the server
> 
> 
> kindly suggest me steps if im wrong
> 
> On Wed, Feb 17, 2021 at 11:51 PM Thebest videos 
> wrote:
> 
>> prtconf -v | grep biosdev not working on freebsd
>> i think its legacy boot system(im not sure actually i didnt find anything
>> about EFI related stuff) is there anyway to check EFI
>> 
>> Create the pool with EFI boot:
>> # zpool create -B rpool raidz c0t0d0 c0t1d0 c0t3d0
>> 
>> how can i create pool with EFI
>> and -B what it refers?
>> 
>> On Wed, Feb 17, 2021 at 11:00 PM John D Groenveld 
>> wrote:
>> 
>>> In message <272389262.2537371.1613575739...@mail.yahoo.com>, Reginald
>>> Beardsley
>>> via openindiana-discuss writes:
 I was not aware that it was possible to boot from RAIDZ. It wasn't
>>> possible wh
>>> 
>>> With the current text installer, escape to a shell.
>>> Confirm the disks are all BIOS accessible:
>>> # prtconf -v | grep biosdev
>>> Create the pool with EFI boot:
>>> # zpool create -B rpool raidz c0t0d0 c0t1d0 c0t3d0
>>> Exit and return to the installer and then F5 Install to an Existing Pool
>>> 
>>> John
>>> groenv...@acm.org
>>> 
>>> ___
>>> openindiana-discuss mailing list
>>> openindiana-discuss@openindiana.org
>>> https://openindiana.org/mailman/listinfo/openindiana-discuss
>>> 
>> 
> ___
> openindiana-discuss mailing list
> openindiana-discuss@openindiana.org
> https://openindiana.org/mailman/listinfo/openindiana-discuss


___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] RAIDZ issue

2021-02-17 Thread Thebest videos
NOTE: the issue only appears after we shut down, remove the ISO file from
VirtualBox, and power the server back on. If the ISO file stays attached, our
zpool setup is fine. We create boot, swap, and root partitions on each disk.
I don't understand why the first 5 disks are ONLINE while the remaining disks
are in an UNKNOWN state after the power cycle.
Our requirement is to create RAIDZ1/RAIDZ2 with up to 5 disks per vdev. With
more than 5 but fewer than 10 disks, the disks beyond the first 5 are spares
and should not be included in any vdev. With a multiple of 5 disks we create
multiple vdevs in the pool.
Example, RAIDZ2: with 7 disks total, 5 disks form a single vdev and the
remaining 2 disks are spares with nothing else to do; with 12 disks total we
get 2 vdevs (5 disks per vdev), so 10 disks in vdevs and the remaining 2 disks
as spares (see the example layout below).
RAIDZ1: if we have only 3 disks, we create RAIDZ1.
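For instance, with 12 disks the intended pool would look roughly like this (placeholder
disk names, only to illustrate the layout; the spare line is optional if the extra disks
are meant to stay completely unused):

zpool create datapool \
  raidz2 disk1 disk2 disk3 disk4 disk5 \
  raidz2 disk6 disk7 disk8 disk9 disk10 \
  spare disk11 disk12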

Here, we wrote a ZFS script for this requirement (currently we are testing
with manual commands). We are able to create RAIDZ2 with a single vdev of 5
disks, and it keeps working with up to 9 disks. But with 10 disks, 2 vdevs are
created, and after the power cycle the same error appears: zfs: i/o error all
copies blocked.
I also tested RAIDZ with 2 vdevs of 3 disks each; that works fine even after
shutdown and power on (with the ISO removed after shutdown, as noted above).
The issue shows up when we create 2 vdevs with 4 disks each. This time there
is no error message; instead we land at the loader prompt, the same as if we
had pressed the Esc button. If I type lsdev -v (as you suggested before), the
first 5 disks are online and the remaining 3 disks are UNKNOWN.

Finally, I need to set up a RAIDZ configuration with multiples of 5 disks per
vdev. Please look once more at the commands below that I use to create the
partitions and the RAIDZ configuration.

NOTE: the gpart commands below are run for each disk (a scripted version
follows the partition commands).

gpart create -s gpt ada0


gpart add -a 4k -s 512K -t freebsd-boot ada0


gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0


gpart add -a 1m -s 2G -t freebsd-swap -l swap0 ada0


gpart add -a 1m -t freebsd-zfs -l disk0 ada0
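In script form the per-disk partitioning looks roughly like this (an untested sketch,
with example device names):

n=0
for d in ada0 ada1 ada2 ada3 ada4 ada5 ada6 ada7; do
  # GPT label plus boot, swap and ZFS partitions, as in the commands above
  gpart create -s gpt $d
  gpart add -a 4k -s 512K -t freebsd-boot $d
  gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 $d
  gpart add -a 1m -s 2G -t freebsd-swap -l swap$n $d
  gpart add -a 1m -t freebsd-zfs -l disk$n $d
  n=$((n + 1))
done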


zpool create -o altroot=/mnt datapool raidz2 ada0p3 ada1p3 ada2p3
ada3p3  raidz2 ada4p3 ada5p3 ada6p3 ada7p3


zfs create -o mountpoint=/ -o canmount=noauto datapool/boot


mount -t zfs datapool/boot /mnt


mount_cd9660 /dev/cd0 /media


cp -r /media/* /mnt/.


zpool set bootfs=datapool/boot datapool


shutdown and remove ISO and power on the server


kindly suggest me steps if im wrong

On Wed, Feb 17, 2021 at 11:51 PM Thebest videos 
wrote:

> prtconf -v | grep biosdev not working on freebsd
> i think its legacy boot system(im not sure actually i didnt find anything
> about EFI related stuff) is there anyway to check EFI
>
> Create the pool with EFI boot:
> # zpool create -B rpool raidz c0t0d0 c0t1d0 c0t3d0
>
> how can i create pool with EFI
> and -B what it refers?
>
> On Wed, Feb 17, 2021 at 11:00 PM John D Groenveld 
> wrote:
>
>> In message <272389262.2537371.1613575739...@mail.yahoo.com>, Reginald
>> Beardsley
>>  via openindiana-discuss writes:
>> >I was not aware that it was possible to boot from RAIDZ. It wasn't
>> possible wh
>>
>> With the current text installer, escape to a shell.
>> Confirm the disks are all BIOS accessible:
>> # prtconf -v | grep biosdev
>> Create the pool with EFI boot:
>> # zpool create -B rpool raidz c0t0d0 c0t1d0 c0t3d0
>> Exit and return to the installer and then F5 Install to an Existing Pool
>>
>> John
>> groenv...@acm.org
>>
>> ___
>> openindiana-discuss mailing list
>> openindiana-discuss@openindiana.org
>> https://openindiana.org/mailman/listinfo/openindiana-discuss
>>
>
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] RAIDZ issue

2021-02-17 Thread Thebest videos
prtconf -v | grep biosdev does not work on FreeBSD.
I think this is a legacy (BIOS) boot system - I'm not sure, I actually didn't
find anything EFI-related. Is there any way to check for EFI?

Create the pool with EFI boot:
# zpool create -B rpool raidz c0t0d0 c0t1d0 c0t3d0

How can I create the pool with EFI boot,
and what does -B refer to?

On Wed, Feb 17, 2021 at 11:00 PM John D Groenveld  wrote:

> In message <272389262.2537371.1613575739...@mail.yahoo.com>, Reginald
> Beardsley
>  via openindiana-discuss writes:
> >I was not aware that it was possible to boot from RAIDZ. It wasn't
> possible wh
>
> With the current text installer, escape to a shell.
> Confirm the disks are all BIOS accessible:
> # prtconf -v | grep biosdev
> Create the pool with EFI boot:
> # zpool create -B rpool raidz c0t0d0 c0t1d0 c0t3d0
> Exit and return to the installer and then F5 Install to an Existing Pool
>
> John
> groenv...@acm.org
>
> ___
> openindiana-discuss mailing list
> openindiana-discuss@openindiana.org
> https://openindiana.org/mailman/listinfo/openindiana-discuss
>
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] RAIDZ issue

2021-02-17 Thread Reginald Beardsley via openindiana-discuss
 
Thanks. I'm not expecting to have a reason to build a new OI system, but good 
to know.

I set up a Windows 7 Pro / Debian 9.3 dual boot a couple of days ago as a
concession to all the EE software that is only available for one of those two
platforms.

At present my biggest issue is some bad DRAM in my Solaris 10 system. If it's 
been up for a couple of days or longer a scrub will cause a kernel panic. After 
I reboot, a scrub will complete with no errors. Other than getting a new DIMM 
and moving it from slot to slot until the problem goes away I can't think of 
any way to locate the bad DIMM. I don't know of a program that will write a 
pattern to memory and then scan it a few days later for errors.

Is there a traditional bare metal monitor such as was common in the past? It's 
obviously possible, but it's hard to justify all the time it would take to 
learn the details. Though now that I think about it, I might be able to hack 
the memtest program.

I ran into a similar issue on my 3/60 running 4.1.1. I built a stripped kernel 
to save memory. But if I accessed the 1/4" tape after it had run for a day or 
more it would kernel panic. Immediately after a boot tape access was fine. 
After a lot of head scratching I went back to using the generic kernel which 
mapped the bad DRAM to some device I did not have.

Have Fun!
Reg

 On Wednesday, February 17, 2021, 11:30:05 AM CST, John D Groenveld 
 wrote:  
 
 In message <272389262.2537371.1613575739...@mail.yahoo.com>, Reginald Beardsley
 via openindiana-discuss writes:
>I was not aware that it was possible to boot from RAIDZ. It wasn't possible wh

With the current text installer, escape to a shell.
Confirm the disks are all BIOS accessible:
# prtconf -v | grep biosdev
Create the pool with EFI boot:
# zpool create -B rpool raidz c0t0d0 c0t1d0 c0t3d0
Exit and return to the installer and then F5 Install to an Existing Pool

John
groenv...@acm.org

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss
  
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] RAIDZ issue

2021-02-17 Thread John D Groenveld
In message <272389262.2537371.1613575739...@mail.yahoo.com>, Reginald Beardsley
 via openindiana-discuss writes:
>I was not aware that it was possible to boot from RAIDZ. It wasn't possible wh

With the current text installer, escape to a shell.
Confirm the disks are all BIOS accessible:
# prtconf -v | grep biosdev
Create the pool with EFI boot:
# zpool create -B rpool raidz c0t0d0 c0t1d0 c0t3d0
Exit and return to the installer and then F5 Install to an Existing Pool
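(For reference: as I understand the illumos zpool(1M) man page, -B makes zpool create
reserve an EFI System Partition on each whole-disk device so the pool can be booted
under UEFI; the reserved size comes from the bootsize property, which defaults to 256M
and can be set at creation time, e.g.:)
# zpool create -B -o bootsize=256M rpool raidz c0t0d0 c0t1d0 c0t3d0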

John
groenv...@acm.org

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] RAIDZ issue

2021-02-17 Thread Reginald Beardsley via openindiana-discuss
 All my Solaris/OI systems run RAIDZ and have for over 10 years. I run RAIDZ1 
with 3 disks on two and RAIDZ2 with 4 disks on the third. 

I was not aware that it was possible to boot from RAIDZ. It wasn't possible 
when I set the systems up. I've never seen anything to suggest that one could 
now boot from a RAIDZ volume. However, I'm not running BSD.

The way I solved the problem was to create 2 slices on each disk: a 100 GB
slice for rpool, which is a mirror, and the rest of the disk in a second slice
for a RAIDZ pool (roughly the layout sketched below). It has worked fine.
Despite losing a disk from time to time, I've never lost data. It was also
dead simple to set up.
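In zpool terms the layout is approximately the following - a reconstruction rather than
the exact commands I ran, using the slice names from the status output further down:

zpool create rpool mirror c4d0s0 c6d1s0 c7d0s0
zpool create epool raidz c4d0s1 c6d1s1 c7d0s1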

Reg

Here's what Hipster 2017.10 reports:

root@Hipster:~# zpool status
  pool: epool
 state: ONLINE
  scan: scrub repaired 0 in 2h32m with 0 errors on Sun Feb 14 21:17:16 2021
config:

        NAME        STATE     READ WRITE CKSUM
        epool       ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c4d0s1  ONLINE       0     0     0
            c6d1s1  ONLINE       0     0     0
            c7d0s1  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: scrub repaired 0 in 1h39m with 0 errors on Sun Feb 14 20:24:35 2021
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c4d0s0  ONLINE       0     0     0
            c6d1s0  ONLINE       0     0     0
            c7d0s0  ONLINE       0     0     0

errors: No known data errors


root@Hipster:~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
epool 594G 2.75T 29.3K /epool
epool/app 10.9G 2.75T 10.8G /app
epool/export 93.3G 2.75T 30.6K /export
epool/export/home 93.3G 2.75T 29.3K /export/home
epool/export/home/old_rhb 11.3G 2.75T 11.3G /export/home/old_rhb
epool/export/home/reg 235M 2.75T 234M /export/home/reg
epool/export/home/rhb 81.8G 2.75T 73.7G /export/home/rhb
epool/iso 8.76G 2.75T 8.76G /iso
epool/misc 29.3K 2.75T 29.3K /export/misc
epool/vbox 29.3K 2.75T 29.3K /export/vbox
epool/vdisk 29.3K 2.75T 29.3K /export/vdisk
epool/vdisks 425G 2.75T 379G /export/vbox/vdisks
rpool 77.9G 18.5G 32.5K /rpool
rpool/ROOT 16.3G 18.5G 23K legacy
rpool/ROOT/openindiana 16.3G 18.5G 16.1G /
rpool/dump 3.99G 18.5G 3.99G -
rpool/fubar_1 23K 18.5G 23K none
rpool/fubar_2 25K 18.5G 25K none
rpool/swap 4.24G 18.7G 4.07G -
rpool/tmp_rhb 53.3G 18.5G 53.3G /export/home/tmp_rhb
  
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] RAIDZ issue

2021-02-17 Thread Toomas Soome via openindiana-discuss
Disconnect the data disks, start from USB/CD, and verify with lsdev -v that the boot
pool disks are there.
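Something like this at the loader prompt (press Esc at the boot menu to get to the
prompt; only the BIOS-visible disks will be listed):

OK lsdev -v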

Sent from my iPhone

> On 17. Feb 2021, at 16:15, Thebest videos  wrote:
> 
> 
> I've created 3disks as bootable disk, and remaining disks as raw disks. Then 
> I ran 
> Zpool create -o altroot=/mnt datapool raidz1 disk1 disk2 disk3 disk4 raidz1 
> disk5 disk6 disk7 disk8
> Still getting same zfs I/o error
> 
> 
>> On Wed, Feb 17, 2021, 15:56 Toomas Soome via openindiana-discuss 
>>  wrote:
>> hi!
>> 
>> From your screenshot, you do get 5 disks recognized (disk0 - disk4), that 
>> means, those are disks you can use for booting. It *may* be the limit is 
>> higher with UEFI boot.
>> 
>> You can check the number of disks while booting from usb/cd, press esc to 
>> get out of boot menu, and enter: lsdev -v, loader will report only those 
>> disk devices which BIOS can see, and only those devices can be used for 
>> booting.
>> 
>> Since you are writing to OI list (and not freebsd), there is another way to 
>> check; in illumos you can get list of BIOS accessible disks with:
>> 
>> $ prtconf -v | grep biosdev
>> name='biosdev-0x83' type=byte items=588
>> name='biosdev-0x82' type=byte items=588
>> name='biosdev-0x81' type=byte items=588
>> name='biosdev-0x80' type=byte items=588
>> 
>> in this example, the system does have 4 BIOS-visible disk devices, and 
>> incidentally those are all disks this system does have, and I have:
>> 
>> tsoome@beastie:/code$ zpool status
>>   pool: rpool
>>  state: ONLINE
>>   scan: resilvered 1,68T in 0 days 10:10:07 with 0 errors on Fri Oct 25 
>> 05:05:34 2019
>> config:
>> 
>> NAMESTATE READ WRITE CKSUM
>> rpool   ONLINE   0 0 0
>>   raidz1-0  ONLINE   0 0 0
>> c3t0d0  ONLINE   0 0 0
>> c3t1d0  ONLINE   0 0 0
>> c3t3d0  ONLINE   0 0 0
>> c3t4d0  ONLINE   0 0 0
>> 
>> errors: No known data errors
>> tsoome@beastie:/code$ 
>> 
>> 
>> Please note; in theory, if you have 5 visible disks, you could create boot 
>> pool by having those 5 disks for data + parity disks, but such configuration 
>> would not be advisable, because if one data disk will get an issue, then you 
>> can not boot (unless you swap around physical disks).
>> 
>> Therefore, the suggestion would be to verify, how many disks your system 
>> BIOS or UEFI can see, and plan the boot pool accordingly.
>> 
>> rgds,
>> toomas
>> 
>> > On 17. Feb 2021, at 11:26, Thebest videos  
>> > wrote:
>> > 
>> > Hi there,
>> > 
>> > I'm facing one issue since long time.i.e., I'm trying to create raidz
>> > configuration with multiple disks as below
>> > Create RAIDZ for 4 disks each vdev in RAIDZ1/RAIDZ2 intotal 2 vdevs
>> > 
>> > I'm running below commands running for each hard disk
>> > 
>> > gpart create -s gpt ada0
>> > 
>> > gpart add -a 4k -s 512K -t freebsd-boot ada0
>> > 
>> > gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
>> > 
>> > gpart add -a 1m -s 2G -t freebsd-swap -l swap0 ada0
>> > 
>> > gpart add -a 1m -t freebsd-zfs -l disk0 ada0
>> > 
>> > 
>> > #RAIDZ creating
>> > 
>> > zpool create -o altroot=/mnt datapool raidz2 ada0p3 ada1p3 ada2p3 ada3p3
>> > raidz2 ada4p3 ada5p3 ada6p3 ada7p3
>> > 
>> > zfs create -o mountpoint=/ -o canmount=noauto datapool/boot
>> > 
>> > mount -t zfs datapool/boot /mnt
>> > 
>> > mount_cd9660 /dev/cd0 /media
>> > 
>> > cp -r /media/* /mnt/.
>> > 
>> > zpool set bootfs=datapool/boot datapool
>> > 
>> > 
>> > I've tried both RAIDZ1 and RAIDZ2 getting same issue has been attached
>> > 
>> > 
>> > 2 attachments has attached screenshot 1 (17th feb) issue: when i create
>> > with 2vdevs with 4 disks each vdev
>> > 
>> > 2nd attachment (16th feb)issue : creating RAIDZ2 with 5 disks with each
>> > vdev total 2vdevs in total
>> > 
>> > 
>> > Kindly respond since i need to fix this issue anyway
>> > > > 5.58.31 PM.png>___
>> > openindiana-discuss mailing list
>> > openindiana-discuss@openindiana.org
>> > https://openindiana.org/mailman/listinfo/openindiana-discuss
>> 
>> ___
>> openindiana-discuss mailing list
>> openindiana-discuss@openindiana.org
>> https://openindiana.org/mailman/listinfo/openindiana-discuss
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] RAIDZ issue

2021-02-17 Thread Thebest videos
I've set up 3 disks as bootable disks and left the remaining disks as raw
disks. Then I ran:
zpool create -o altroot=/mnt datapool raidz1 disk1 disk2 disk3 disk4 raidz1
disk5 disk6 disk7 disk8
I still get the same zfs I/O error.


On Wed, Feb 17, 2021, 15:56 Toomas Soome via openindiana-discuss <
openindiana-discuss@openindiana.org> wrote:

> hi!
>
> From your screenshot, you do get 5 disks recognized (disk0 - disk4), that
> means, those are disks you can use for booting. It *may* be the limit is
> higher with UEFI boot.
>
> You can check the number of disks while booting from usb/cd, press esc to
> get out of boot menu, and enter: lsdev -v, loader will report only those
> disk devices which BIOS can see, and only those devices can be used for
> booting.
>
> Since you are writing to OI list (and not freebsd), there is another way
> to check; in illumos you can get list of BIOS accessible disks with:
>
> $ prtconf -v | grep biosdev
> name='biosdev-0x83' type=byte items=588
> name='biosdev-0x82' type=byte items=588
> name='biosdev-0x81' type=byte items=588
> name='biosdev-0x80' type=byte items=588
>
> in this example, the system does have 4 BIOS-visible disk devices, and
> incidentally those are all disks this system does have, and I have:
>
> tsoome@beastie:/code$ zpool status
>   pool: rpool
>  state: ONLINE
>   scan: resilvered 1,68T in 0 days 10:10:07 with 0 errors on Fri Oct 25
> 05:05:34 2019
> config:
>
> NAMESTATE READ WRITE CKSUM
> rpool   ONLINE   0 0 0
>   raidz1-0  ONLINE   0 0 0
> c3t0d0  ONLINE   0 0 0
> c3t1d0  ONLINE   0 0 0
> c3t3d0  ONLINE   0 0 0
> c3t4d0  ONLINE   0 0 0
>
> errors: No known data errors
> tsoome@beastie:/code$
>
>
> Please note; in theory, if you have 5 visible disks, you could create boot
> pool by having those 5 disks for data + parity disks, but such
> configuration would not be advisable, because if one data disk will get an
> issue, then you can not boot (unless you swap around physical disks).
>
> Therefore, the suggestion would be to verify, how many disks your system
> BIOS or UEFI can see, and plan the boot pool accordingly.
>
> rgds,
> toomas
>
> > On 17. Feb 2021, at 11:26, Thebest videos 
> wrote:
> >
> > Hi there,
> >
> > I'm facing one issue since long time.i.e., I'm trying to create raidz
> > configuration with multiple disks as below
> > Create RAIDZ for 4 disks each vdev in RAIDZ1/RAIDZ2 intotal 2 vdevs
> >
> > I'm running below commands running for each hard disk
> >
> > gpart create -s gpt ada0
> >
> > gpart add -a 4k -s 512K -t freebsd-boot ada0
> >
> > gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
> >
> > gpart add -a 1m -s 2G -t freebsd-swap -l swap0 ada0
> >
> > gpart add -a 1m -t freebsd-zfs -l disk0 ada0
> >
> >
> > #RAIDZ creating
> >
> > zpool create -o altroot=/mnt datapool raidz2 ada0p3 ada1p3 ada2p3 ada3p3
> > raidz2 ada4p3 ada5p3 ada6p3 ada7p3
> >
> > zfs create -o mountpoint=/ -o canmount=noauto datapool/boot
> >
> > mount -t zfs datapool/boot /mnt
> >
> > mount_cd9660 /dev/cd0 /media
> >
> > cp -r /media/* /mnt/.
> >
> > zpool set bootfs=datapool/boot datapool
> >
> >
> > I've tried both RAIDZ1 and RAIDZ2 getting same issue has been attached
> >
> >
> > 2 attachments has attached screenshot 1 (17th feb) issue: when i create
> > with 2vdevs with 4 disks each vdev
> >
> > 2nd attachment (16th feb)issue : creating RAIDZ2 with 5 disks with each
> > vdev total 2vdevs in total
> >
> >
> > Kindly respond since i need to fix this issue anyway
> >  5.58.31 PM.png>___
> > openindiana-discuss mailing list
> > openindiana-discuss@openindiana.org
> > https://openindiana.org/mailman/listinfo/openindiana-discuss
>
> ___
> openindiana-discuss mailing list
> openindiana-discuss@openindiana.org
> https://openindiana.org/mailman/listinfo/openindiana-discuss
>
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] RAIDZ issue

2021-02-17 Thread Toomas Soome via openindiana-discuss
hi!

From your screenshot, you do get 5 disks recognized (disk0 - disk4); those are
the disks you can use for booting. It *may* be that the limit is higher with
UEFI boot.

You can check the number of disks while booting from USB/CD: press Esc to get out
of the boot menu and enter "lsdev -v". The loader will report only those disk
devices the BIOS can see, and only those devices can be used for booting.

Since you are writing to the OI list (and not a FreeBSD one), there is another way to
check; on illumos you can get the list of BIOS-accessible disks with:

$ prtconf -v | grep biosdev
name='biosdev-0x83' type=byte items=588
name='biosdev-0x82' type=byte items=588
name='biosdev-0x81' type=byte items=588
name='biosdev-0x80' type=byte items=588

In this example the system has 4 BIOS-visible disk devices; incidentally, those are
all the disks this system has:

tsoome@beastie:/code$ zpool status
  pool: rpool
 state: ONLINE
  scan: resilvered 1,68T in 0 days 10:10:07 with 0 errors on Fri Oct 25 
05:05:34 2019
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c3t0d0  ONLINE       0     0     0
            c3t1d0  ONLINE       0     0     0
            c3t3d0  ONLINE       0     0     0
            c3t4d0  ONLINE       0     0     0

errors: No known data errors
tsoome@beastie:/code$ 


Please note: in theory, if you have 5 visible disks, you could create the boot pool
from those 5 disks as data + parity, but such a configuration would not be advisable,
because if one data disk develops an issue you cannot boot (unless you swap the
physical disks around).

Therefore, the suggestion is to verify how many disks your system BIOS or UEFI can
see, and plan the boot pool accordingly.
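For example (a sketch only, with made-up device names): count the BIOS-visible disks,
keep the boot pool within them, and put everything else into a separate data pool:

# how many disks does the BIOS expose?
prtconf -v | grep -c biosdev
# small, redundant boot pool on the first visible disks
zpool create -B rpool mirror c0t0d0 c0t1d0
# the rest of the disks go into the data pool
zpool create datapool raidz2 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0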

rgds,
toomas

> On 17. Feb 2021, at 11:26, Thebest videos  wrote:
> 
> Hi there,
> 
> I'm facing one issue since long time.i.e., I'm trying to create raidz
> configuration with multiple disks as below
> Create RAIDZ for 4 disks each vdev in RAIDZ1/RAIDZ2 intotal 2 vdevs
> 
> I'm running below commands running for each hard disk
> 
> gpart create -s gpt ada0
> 
> gpart add -a 4k -s 512K -t freebsd-boot ada0
> 
> gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
> 
> gpart add -a 1m -s 2G -t freebsd-swap -l swap0 ada0
> 
> gpart add -a 1m -t freebsd-zfs -l disk0 ada0
> 
> 
> #RAIDZ creating
> 
> zpool create -o altroot=/mnt datapool raidz2 ada0p3 ada1p3 ada2p3 ada3p3
> raidz2 ada4p3 ada5p3 ada6p3 ada7p3
> 
> zfs create -o mountpoint=/ -o canmount=noauto datapool/boot
> 
> mount -t zfs datapool/boot /mnt
> 
> mount_cd9660 /dev/cd0 /media
> 
> cp -r /media/* /mnt/.
> 
> zpool set bootfs=datapool/boot datapool
> 
> 
> I've tried both RAIDZ1 and RAIDZ2 getting same issue has been attached
> 
> 
> 2 attachments has attached screenshot 1 (17th feb) issue: when i create
> with 2vdevs with 4 disks each vdev
> 
> 2nd attachment (16th feb)issue : creating RAIDZ2 with 5 disks with each
> vdev total 2vdevs in total
> 
> 
> Kindly respond since i need to fix this issue anyway
>  PM.png>___
> openindiana-discuss mailing list
> openindiana-discuss@openindiana.org
> https://openindiana.org/mailman/listinfo/openindiana-discuss

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss