> On 19. Feb 2021, at 07:34, Thebest videos <sri.chityala...@gmail.com> wrote:
> 
> If I create one vdev (raidz2 with 6 disks) it boots fine. (The Virtualbox 
> limitation of 5 disks does not come into play here.)

Yes, this is because you have raidz2, where the number of parity disks is 2, 
and therefore only 4 of the 6 disks must be available at any time. With this 
raidz setup on vbox you have 4 data + 1 parity disks available, and even when 
one of those disks has failures, you can still boot.

> If I create two vdevs (raidz2 with 6 disks) I see a boot issue.

A pool with 2 (or more) vdevs spreads data evenly over all vdevs, so 
literally half of the kernel is on the second raidz2. Unfortunately vbox does 
not show those disks to the loader, and therefore half of the data (because 
you have 2 vdevs) cannot be read.
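You can see this balancing for yourself once the pool is imported: zpool 
list -v prints the ALLOC column per vdev, so with two raidz2 vdevs you should 
see roughly equal allocations on both (assuming your pool is named datapool, 
as in your commands):

zpool list -v datapool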


> I need to understand why the problem only occurs in the second case?


I hope the explanation above is clear enough.

rgds,
toomas


> 
> On Thu, Feb 18, 2021 at 10:12 PM Toomas Soome <tso...@me.com> wrote:
> 
> 
>> On 18. Feb 2021, at 18:15, Thebest videos <sri.chityala...@gmail.com> wrote:
>> 
>> We are able to achieve a RAIDZ configuration another way: we create a 
>> RAIDZ2 with 5 disks in one vdev initially. After a reboot we add 5 more 
>> disks to the existing pool as a 2nd vdev, then reboot again and add disks 
>> to the same pool as a 3rd vdev, and so on. The small change we made in the 
>> command is shown below (giving the GPT label of the disk via /dev/gpt; 
>> before, we were giving disk names such as ada0):
>> ------before-------
>> zpool create datapool raidz2 ada0 ada1 ada2 ada3 ada4
>> -------after--------
>> zpool create datapool raidz2 /dev/gpt/disk0 /dev/gpt/disk1 /dev/gpt/disk2 /dev/gpt/disk3 /dev/gpt/disk4   #for the first time
>> then reboot
>> Then we add disks to the existing 5-disk pool, and this process is repeated 
>> on every reboot to make all 15 disks part of the RAIDZ. But the problem is 
>> that this is not our requirement: we should create the RAIDZ with multiple 
>> vdevs in a single command instead of adding them across reboots, i.e.
>> zpool create datapool raidz2 /dev/gpt/disk0 ............... raidz2 /dev/gpt/disk4 .............. raidz2 /dev/gpt/disk9 .................   #this way it should work
>> In short, we need to create the RAIDZ with all disks at once.
>> 
>> So, any suggestion on how to achieve this at once?
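>> For reference, the add-a-vdev step we run after each reboot is essentially 
>> the following (the GPT labels are hypothetical, following the disk0..disk4 
>> naming above):
>> zpool add datapool raidz2 /dev/gpt/disk5 /dev/gpt/disk6 /dev/gpt/disk7 /dev/gpt/disk8 /dev/gpt/disk9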
> 
> nono, do not let yourself be deceived.
> 
> On the initial setup, all data is on the first vdev, and once you have the 
> second vdev added, that initial data is still on the first vdev. When you 
> update the OS, the old kernel will not be overwritten, but new blocks will 
> be allocated from all the vdevs, especially from the most recently added 
> ones, because zfs tries to balance the vdev allocations. Once that happens, 
> the bootability is gone.
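> You can watch this happen after an update: zpool iostat -v prints capacity 
> and activity per vdev, so you will see new blocks landing on the second 
> raidz2 as well (assuming your pool is named datapool):
> 
> zpool iostat -v datapool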
> 
> rgds,
> toomas
> 
> 
>> 
>> 
>> On Thu, Feb 18, 2021 at 6:16 PM Toomas Soome <tso...@me.com> wrote:
>> 
>> 
>>> On 18. Feb 2021, at 14:23, Thebest videos <sri.chityala...@gmail.com> wrote:
>>> 
>>> I am still new to the freebsd zfs world, so kindly bear with the question 
>>> below:
>>> Is it necessary to create a boot partition on each disk to make it part of 
>>> the raidz configuration?
>> 
>> The boot partition (zfs-boot/efi) needs to be on the members of a bootable 
>> pool for two reasons: first, if you have a disk failing, you want to be 
>> able to boot from another disk, and second, it helps to keep the devices in 
>> the pool with exactly the same layout.
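>> For example, a BIOS-boot sketch for one additional member disk (the device 
>> name da1 is only a placeholder), mirroring what you already do for the 
>> first disk:
>> 
>> gpart add -a 4k -s 512K -t freebsd-boot da1
>> gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da1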
>> 
>>> When you say boot pool, what does it mean exactly?
>> 
>> The boot pool is the pool you use to load the boot loader and the OS 
>> (kernel). Specifically, you point your BIOS at a boot disk belonging to the 
>> boot pool, and the pool itself has the bootfs property set (zpool get 
>> bootfs).
>> 
>> The boot pool normally contains the OS installation.
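>> With the pool and dataset names from your own commands, checking and 
>> setting it looks like this:
>> 
>> zpool get bootfs datapool
>> zpool set bootfs=datapool/boot datapool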
>> 
>>>      Do you mean to say I should create a separate boot pool and data 
>>> pool, something like
>>>      zpool create bootpool raidz disk1-p1 disk2-p1
>>>      zpool create datapool raidz disk1-p3 disk2-p3
>>> or do you mean something else?
>>> I am still not able to understand how the virtualbox limit of 5 disks is 
>>> blocking me.
>> 
>> With virtualbox, this limit means your boot pool must be built from at 
>> most 5 disks, and those 5 disks must be the first ones in the disk list. If 
>> you use more disks, virtualbox will not show the extra ones to the loader 
>> and those disks are marked as UNKNOWN. If more disks are missing than the 
>> number of parity disks, we cannot read the pool.
>> 
>>> What is your recommendation for arranging 13 disks in a raidz 
>>> configuration? (You can skip this question if it is going too far.)
>> 
>> There is no single answer to this question; it depends on what kind of IO 
>> will be done there. You can create one single 10+2 raidz2 with a spare, or 
>> a 10+3 raidz3, but with raidz, all writes are whole-stripe writes.
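>> As a sketch only (the partition names da0p3..da12p3 are placeholders for 
>> your 13 disks), those two layouts would be created like this:
>> 
>> zpool create datapool raidz2 da0p3 da1p3 da2p3 da3p3 da4p3 da5p3 da6p3 da7p3 da8p3 da9p3 da10p3 da11p3 spare da12p3
>> zpool create datapool raidz3 da0p3 da1p3 da2p3 da3p3 da4p3 da5p3 da6p3 da7p3 da8p3 da9p3 da10p3 da11p3 da12p3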
>> 
>> rgds,
>> toomas
>> 
>>> 
>>> On Thu, Feb 18, 2021 at 4:48 PM Toomas Soome <tso...@me.com> wrote:
>>> 
>>> 
>>>> On 18. Feb 2021, at 12:52, Thebest videos <sri.chityala...@gmail.com> wrote:
>>>> 
>>>> Ok. We also generated a .img file of our custom OS built from the 
>>>> FreeBSD source. We uploaded the img file to Digital Ocean images and then 
>>>> created a droplet. Everything works fine for the basic operating system, 
>>>> but we are facing the same issue on the droplet side. At least on 
>>>> virtualbox we are able to create a single vdev with up to 5 disks, and 2 
>>>> vdevs with 3 disks per vdev (i.e. up to 6 disks), but on the Digital 
>>>> Ocean side we are unable to create even a single vdev with 3 disks; it 
>>>> works fine with 2 disks as a mirror pool. We raised the issue with 
>>>> Digital Ocean, asking whether there is any restriction on the number of 
>>>> disks for RAIDZ, but they say there are no constraints and that we can 
>>>> create a RAIDZ with as many disks as we want. We still don't understand 
>>>> where the mistake is. We also raised the same query on the FreeBSD forum 
>>>> but got no response. I have already shared the manual steps we follow to 
>>>> create the partitions and the RAIDZ configuration. Are we making any 
>>>> mistake in the commands we use for the RAIDZ configuration, or is it, as 
>>>> you said, a kind of restriction on the number of disks on virtualbox and 
>>>> possibly on the Digital Ocean side, i.e. a restriction on the vendor 
>>>> side? Any guess whether it would work (if there is no mistake in the 
>>>> commands we are using) if we attach the CD/image to a bare metal server? 
>>>> Or any suggestions?
>>> 
>>> 
>>> I have no personal experience with digital ocean, but the basic test is 
>>> the same: if you get the loader OK prompt, use the lsdev -v command to 
>>> check how many disks you can actually see. There is another option too. 
>>> With BIOS boot, when you see the very first spinner, press the space key 
>>> and you will get the boot: prompt. This is a very limited but still useful 
>>> prompt from the gptzfsboot program (the one which tries to find and start 
>>> /boot/loader). At the boot: prompt you can enter status, which will 
>>> produce the same report as you get from lsdev.
>>> 
>>> So, if you know your VM should have, say, 10 disks, but boot: status or 
>>> ok lsdev shows fewer, then you know there must be a BIOS limit (we use 
>>> BIOS INT13h to access the disks).
>>> 
>>> Please note that if the provider offers the option to use UEFI, it *may* 
>>> support a greater number of boot disks; the same check applies with UEFI 
>>> as well (lsdev -v).
>>> 
>>> rgds,
>>> toomas
>>> 
>>> 
>>>> These are the commands we are using to create the partitions and the 
>>>> RAIDZ configuration.
>>>> NOTE: we create the gpart partitions below (boot, swap, root) on all hard 
>>>> disks, then add those disks in the zpool command.
>>>> Doubt: should we create the partitions (boot, swap, root) on all hard 
>>>> disks to make them part of the RAIDZ configuration, or is it enough to 
>>>> add them to the zpool as raw disks, or to make 2-3 disks bootable and 
>>>> leave the rest as raw disks? Anyway, please check the commands below that 
>>>> we use to create the partitions and the zpool configuration.
>>>>     gpart create -s gpt /dev/da0
>>>>     gpart add -a 4k -s 512K -t freebsd-boot da0
>>>>     gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0
>>>>     gpart add -a 1m -s 2G -t freebsd-swap -l swap1 da0
>>>>     gpart add -a 1m -t freebsd-zfs -l disk1 da0
>>>>     zpool create -o altroot=/mnt datapool raidz2 ada0p3 ada1p3 ada2p3 ada3p3 raidz2 ada4p3 ada5p3 ada6p3 ada7p3
>>>>     zfs create -o mountpoint=/ -o canmount=noauto datapool/boot
>>>>     mount -t zfs datapool/boot /mnt
>>>>     cp -r /temp/* /mnt/.
>>>>     zpool set bootfs=datapool/boot datapool
>>>>     zfs create -o mountpoint=/storage -o canmount=noauto datapool/storage
>>>>     zfs create -o mountpoint=/conf -o canmount=noauto datapool/conf
>>>>     shutdown and remove iso/img and start it again
>>>>     zpool import datapool
>>>>     mkdir /conf /storage
>>>>     mount -t zfs datapool/conf /conf
>>>>     mount -t zfs datapool/storage /storage
>>>>     
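>>>> To run the same partitioning on every disk, a simple loop is enough (a 
>>>> sketch, assuming sh syntax and disks da0 through da7; adjust the disk 
>>>> list and the labels to your layout):
>>>> 
>>>>     n=0
>>>>     for d in da0 da1 da2 da3 da4 da5 da6 da7; do
>>>>         gpart create -s gpt $d
>>>>         gpart add -a 4k -s 512K -t freebsd-boot $d
>>>>         gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 $d
>>>>         gpart add -a 1m -s 2G -t freebsd-swap -l swap$n $d
>>>>         gpart add -a 1m -t freebsd-zfs -l disk$n $d
>>>>         n=$((n+1))
>>>>     done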
>>>> 
>>>> 
>>>> On Thu, Feb 18, 2021 at 3:33 PM Toomas Soome <tso...@me.com> wrote:
>>>> 
>>>> 
>>>>> On 18. Feb 2021, at 11:52, Thebest videos <sri.chityala...@gmail.com> wrote:
>>>>> 
>>>>> As per your reply, I'm not clear.
>>>>> Although I have tried to create 2 pools with 4 disks each (for testing 
>>>>> purposes), each pool being a single vdev, and as expected that works, it 
>>>>> is not our requirement, since we intend to have a single pool with as 
>>>>> many disks as needed, spread over multiple vdevs based on the condition 
>>>>> of max 5 disks per vdev, with any disks left over acting as spare disks.
>>>>> In the end, at most 5 disks come ONLINE in the vdev; the remaining disks 
>>>>> go OFFLINE and their disk state is UNKNOWN. Is there any way to fix this 
>>>>> issue?
>>>>> 
>>>>> 
>>>> 
>>>> 
>>>> If you want to use virtualbox, then there is a limit: virtualbox only 
>>>> shows the first 5 disk devices to the boot loader. This is a vbox limit 
>>>> and there are only two options: either accept it or file a feature 
>>>> request with the virtualbox developers.
>>>> 
>>>> Different systems set different limits here; for example, VMware Fusion 
>>>> supports booting from the first 12 disks. It can also have more disks 
>>>> than 12, but only the first 12 are visible to the boot loader.
>>>> 
>>>> Real hardware is vendor specific.
>>>> 
>>>> rgds,
>>>> toomas
>>>> 
>>>> 
>>>>> On Thu, Feb 18, 2021 at 1:24 AM Toomas Soome via openindiana-discuss 
>>>>> <openindiana-discuss@openindiana.org> wrote:
>>>>> 
>>>>> 
>>>>> > On 17. Feb 2021, at 20:49, Thebest videos <sri.chityala...@gmail.com> wrote:
>>>>> > 
>>>>> > NOTE: we get the issue after a shutdown, then removing the ISO file from
>>>>> > virtualBox and then powering on the server. If we keep an ISO file 
>>>>> > attached, our zpool stuff is safe. And we are creating boot, swap and 
>>>>> > root partitions on each disk.
>>>>> 
>>>>> vbox seems to have a limit on boot disks: it appears to “see” 5. My 
>>>>> vbox has an IDE boot disk, and when I added 6 sas disks I could only see 
>>>>> 5: ide + 4 sas.
>>>>> 
>>>>> So all you need to do is add a disk for the boot pool and make sure it 
>>>>> is the first one; once the kernel is up, it can see all the disks.
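>>>>> A minimal sketch of that layout, assuming the first (BIOS-visible) disk 
>>>>> is ada0 and reusing the partition scheme from your commands: a small 
>>>>> boot pool on ada0p3 that holds the OS, plus the big raidz pool on the 
>>>>> remaining disks, which the kernel imports once it is up.
>>>>> 
>>>>> zpool create -o altroot=/mnt bootpool ada0p3
>>>>> zfs create -o mountpoint=/ -o canmount=noauto bootpool/boot
>>>>> zpool set bootfs=bootpool/boot bootpool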
>>>>> 
>>>>> rgds,
>>>>> toomas
>>>>> 
>>>>> 
>>>>> > I'm not able to understand why the first 5 disks are ONLINE and the 
>>>>> > remaining disks are in the UNKNOWN state after a power off and then 
>>>>> > power on.
>>>>> > Our actual requirement is to create RAIDZ1/RAIDZ2 with vdevs of up to 
>>>>> > 5 disks each. If there are more than 5 but fewer than 10 disks, the 
>>>>> > disks beyond the first 5 should be spares and not included in any 
>>>>> > vdev. If we have a multiple of 5 disks, then we need to create 
>>>>> > multiple vdevs in the pool.
>>>>> > Example, RAIDZ2: if there are 7 disks in total, then 5 disks form a 
>>>>> > single vdev and the remaining 2 disks are spares with nothing else to 
>>>>> > do; and if we have 12 disks in total, then 2 vdevs (5 disks per vdev), 
>>>>> > so 10 disks in 2 vdevs and the remaining 2 disks as spares.
>>>>> > RAIDZ1: if we have only 3 disks, then we should create RAIDZ1.
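>>>>> > (For the 12-disk case, the single command we have in mind would be 
>>>>> > something like the sketch below; the partition names are placeholders:)
>>>>> > zpool create datapool raidz2 ada0p3 ada1p3 ada2p3 ada3p3 ada4p3 raidz2 ada5p3 ada6p3 ada7p3 ada8p3 ada9p3 spare ada10p3 ada11p3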
>>>>> > 
>>>>> > Here, we wrote a zfs script for our requirements (but are currently 
>>>>> > testing with manual commands). We are able to create RAIDZ2 with a 
>>>>> > single vdev in a pool of 5 disks. It works with up to 9 disks, but if 
>>>>> > we have 10 disks then 2 vdevs are created, and after power on the same 
>>>>> > error appears: zfs: i/o error all copies blocked.
>>>>> > I was testing RAIDZ by creating 2 vdevs with 3 disks per vdev. It 
>>>>> > works fine even after shutdown and power on (as said, we remove the 
>>>>> > ISO file after shutdown).
>>>>> > But the issue comes when we create 2 vdevs with 4 disks per vdev. This 
>>>>> > time we do not get the error; instead we get the same options we would 
>>>>> > see by pressing the esc button. If I type lsdev -v (as you said 
>>>>> > before), the first 5 disks are online and the remaining 3 disks are 
>>>>> > UNKNOWN.
>>>>> > 
>>>>> > Finally, I need to set up a RAIDZ configuration with multiples of 5 
>>>>> > disks per vdev. Please look once again at the commands below that I 
>>>>> > use to create the partitions and the RAIDZ configuration.
>>>>> > 
>>>>> > NOTE: the gpart commands below are run for each disk
>>>>> > 
>>>>> > gpart create -s gpt ada0
>>>>> > gpart add -a 4k -s 512K -t freebsd-boot ada0
>>>>> > gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
>>>>> > gpart add -a 1m -s 2G -t freebsd-swap -l swap0 ada0
>>>>> > gpart add -a 1m -t freebsd-zfs -l disk0 ada0
>>>>> > zpool create -o altroot=/mnt datapool raidz2 ada0p3 ada1p3 ada2p3 ada3p3 raidz2 ada4p3 ada5p3 ada6p3 ada7p3
>>>>> > zfs create -o mountpoint=/ -o canmount=noauto datapool/boot
>>>>> > mount -t zfs datapool/boot /mnt
>>>>> > mount_cd9660 /dev/cd0 /media
>>>>> > cp -r /media/* /mnt/.
>>>>> > zpool set bootfs=datapool/boot datapool
>>>>> > 
>>>>> > Then shutdown, remove the ISO and power on the server.
>>>>> > 
>>>>> > Kindly suggest the steps if I am wrong.
>>>>> > 
>>>>> > On Wed, Feb 17, 2021 at 11:51 PM Thebest videos 
>>>>> > <sri.chityala...@gmail.com> wrote:
>>>>> > 
>>>>> >> prtconf -v | grep biosdev does not work on freebsd.
>>>>> >> I think it is a legacy boot system (I'm not sure, actually; I didn't 
>>>>> >> find anything EFI related). Is there any way to check for EFI?
>>>>> >> 
>>>>> >> Create the pool with EFI boot:
>>>>> >> # zpool create -B rpool raidz c0t0d0 c0t1d0 c0t3d0
>>>>> >> 
>>>>> >> How can I create the pool with EFI, and what does -B refer to?
>>>>> >> 
>>>>> >> On Wed, Feb 17, 2021 at 11:00 PM John D Groenveld 
>>>>> >> <groenv...@acm.org> wrote:
>>>>> >> 
>>>>> >>> In message <272389262.2537371.1613575739...@mail.yahoo.com>, 
>>>>> >>> Reginald Beardsley via openindiana-discuss writes:
>>>>> >>>> I was not aware that it was possible to boot from RAIDZ. It wasn't
>>>>> >>>> possible wh
>>>>> >>> 
>>>>> >>> With the current text installer, escape to a shell.
>>>>> >>> Confirm the disks are all BIOS accessible:
>>>>> >>> # prtconf -v | grep biosdev
>>>>> >>> Create the pool with EFI boot:
>>>>> >>> # zpool create -B rpool raidz c0t0d0 c0t1d0 c0t3d0
>>>>> >>> Exit and return to the installer and then F5 Install to an Existing 
>>>>> >>> Pool
>>>>> >>> 
>>>>> >>> John
>>>>> >>> groenv...@acm.org
>>>>> >>> 
>>>>> >> 
>>>>> 
>>>>> 
>>>>> <Screenshot 2021-02-18 at 12.38.35 PM.png>
>>>> 
>>> 
>> 
> 
