Re: [OpenIndiana-discuss] OI 2020.10 install disk FAIL #2 !!!

2021-03-01 Thread Reginald Beardsley via openindiana-discuss
 Toomas,

Thank you. That was quite helpful. "format -e" does indeed work in the live 
desktop environment once one does a sudo /bin/su. At the time I got the SEGV on 
2017.10, I was primarily focused on recovering my Solaris 10 u8 instance. As I 
had successfully scrubbed the 3 pools in single-user mode, I wanted to back them 
up before repairing grub. In the end, with your confirmation of the installgrub 
operation, I simply did it and all is well. That system has a 3 TB 2-disk mirror 
scratch pool in addition to the 3-disk s0/s1 root and export pools.

I have run a 3- or 4-way mirror on s0 and a RAIDZ1 (two 3-disk systems) or RAIDZ2 
(one 4-disk system) pool for /export for around 8 years. The systems currently 
run Solaris 10 u8, oi151_a7 and Hipster 2017.10. The oi151_a7 machine is the 
RAIDZ2 system, but heat limitations in the small space I keep them in have meant 
it sees little use in practice.

Some systems have identical disks for the pools and some do not. By sizing the 
slices properly I have had no issues at all. And the fact that I was able to 
completely recover my u8 instance when all the pools were corrupted is strong 
testimony to its robustness, even though grub was not installed on the one drive 
in the mirror that doesn't show errors. I think I have now corrected that, but 
have no desire to test it. Concurrent with the u8 fault I had a router running 
DD-WRT fail, so I was hopping.

I agree that configurations such as mine do require expertise and care, but I 
don't think it can be called "bad practice". The 4 way rpool mirror in the s0 
slice of my NL40 is certainly far more robust than having rpool on a single 
flash drive as was suggested generally when I built the RAIDZ2 on the NL40. I 
tested that when it was built by pulling 2 disks and it recovered perfectly, 
albeit slowly. I should note that I have only used 2 TB disks for the s0/s1 
arrangement. An EFI label might not support that arrangement, even though it 
should. I shall test that in the process of upgrading from Hipster 2017.10.

On the 2020.10 live desktop gparted dumps core in jack's home directory. 
Unfortunately dbx running on 2017.10 doesn't see it as an ELF file though 
file(1) does. Thus far I have not been able to get a working copy of gdb to 
determine where it fails.
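For what it's worth, when no working gdb is at hand, illumos's native mdb(1) can usually open such a core; a minimal session (the core path is illustrative) would look roughly like:

```
$ mdb /export/home/jack/core   # path to the dumped core; illustrative
> ::status      # shows the terminating signal and fault details
> $C            # stack backtrace at the time of the fault
> ::quit
```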

My attempt to install gdb via pkg on 2017.10 resulted in this:

rhb@Hipster:/app/pkgs# pkg install gdb
Creating Plan (Solver setup): /
pkg install: No matching version of developer/debug/gdb can be installed:
 Reject: pkg://openindiana.org/developer/debug/gdb@7.10.1-2020.0.1.6
 Reason: This version is excluded by installed incorporation 
consolidation/userland/userland-incorporation@0.5.11-2017.0.0.9657
rhb@Hipster:/app/pkgs# find /usr -name gdb
/usr/share/glib-2.0/gdb
/usr/share/gdb
rhb@Hipster:/app/pkgs# 
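As a possible diagnostic (a sketch, not verified on 2017.10), pkg(1) itself can show every published version of the package and the incorporate dependency doing the pinning:

```
# List every published version of gdb, installable or not
$ pkg list -af developer/debug/gdb
# Dump the incorporation's manifest and find the line that pins gdb
$ pkg contents -m consolidation/userland/userland-incorporation | grep gdb
```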

FWIW, the /app/pkgs tree is where I build 3rd-party software from source. 
Unfortunately, gdb 10.1 failed to build, as did 9.2. It's a fairly elaborate 
system that allows me to toggle links in /app/{bin,lib} to select particular 
versions, each of which lives in /app/pkgs///{bin,lib,man,src} etc. 
I have found the ability to seamlessly fall back to a different version 
invaluable. Shell scripts create or remove symlinks as needed, allowing 
complete control over which software versions are selected in PATH. 
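A minimal sketch of such a toggle scheme (the /app/pkgs/<name>/<version>/bin layout and the function names here are my own illustration, not Reg's actual scripts):

```shell
#!/bin/sh
# Toggle which package version is live by (re)pointing symlinks in $APPBIN.
# Assumed layout: $PKGROOT/<name>/<version>/bin/... -- illustrative only.
PKGROOT=${PKGROOT:-/app/pkgs}
APPBIN=${APPBIN:-/app/bin}

select_version() {            # e.g. select_version gdb 9.2
    name=$1; version=$2
    for f in "$PKGROOT/$name/$version/bin/"*; do
        [ -e "$f" ] || continue
        ln -sf "$f" "$APPBIN/$(basename "$f")"
    done
}

deselect() {                  # e.g. deselect gdb -- drop all of its links
    name=$1
    for l in "$APPBIN"/*; do
        [ -L "$l" ] || continue
        case $(readlink "$l") in
            "$PKGROOT/$name/"*) rm -f "$l" ;;
        esac
    done
}
```

Falling back to another version is then just `deselect gdb; select_version gdb 9.2`.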

With an EFI label on the 5 TB disk, I shall see if the text install will allow 
me to create the mirror and RAIDZ configuration.

Best regards,
Reg


  
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] OI 2020.10 install disk FAIL #2 !!!

2021-03-01 Thread cretin1997 via openindiana-discuss
‐‐‐ Original Message ‐‐‐
On Monday, March 1, 2021 10:06 PM, John D Groenveld  wrote:

> In message 971533125.1431110.1614570927...@mail.yahoo.com, Reginald Beardsley 
> via openindiana-discuss writes:
>
> > Out of curiosity I just booted FreeBSD 12.2 and messed with gpart.
> > It does not offer "apple-zfs as an option. Aside from ZFS not being
>
> URL: https://www.freebsd.org/cgi/man.cgi?query=gpart&apropos=0&sektion=8&manpath=FreeBSD+13.0-current&arch=default&format=html
> | CAVEATS
> | Partition type apple-zfs (6a898cc3-1dd2-11b2-99a6-080020736631) is also
> | being used on illumos/Solaris platforms for ZFS volumes.
>
> John
> groenv...@acm.org

Next time you should read others' mail more carefully.

This is for 12.2:

https://www.freebsd.org/cgi/man.cgi?query=gpart&apropos=0&sektion=8&manpath=FreeBSD+12.2-RELEASE&arch=default&format=html

No apple-zfs there. The apple-zfs type comes from ZoL (ZFS on Linux): FreeBSD 
rebased its ZFS from the illumos code to ZoL, in what they call ZoF.

He should use freebsd-zfs.



Re: [OpenIndiana-discuss] OI 2020.10 install disk FAIL #2 !!!

2021-03-01 Thread John D Groenveld
In message <971533125.1431110.1614570927...@mail.yahoo.com>, Reginald Beardsley 
via openindiana-discuss writes:
>Out of curiosity I just booted FreeBSD 12.2 and messed with gpart.
>It does not offer "apple-zfs as an option. Aside from ZFS not being

https://www.freebsd.org/cgi/man.cgi?query=gpart&apropos=0&sektion=8&manpath=FreeBSD+13.0-current&arch=default&format=html
| CAVEATS
| Partition type apple-zfs (6a898cc3-1dd2-11b2-99a6-080020736631) is also
| being used on illumos/Solaris platforms for ZFS volumes.

John
groenv...@acm.org



Re: [OpenIndiana-discuss] OI 2020.10 install disk FAIL #2 !!!

2021-02-28 Thread Toomas Soome via openindiana-discuss



> On 1. Mar 2021, at 05:55, Reginald Beardsley via openindiana-discuss 
>  wrote:
> 
> The Debian derived gparted disk did not offer any zfs FS types.
> 
> Out of curiosity I just booted FreeBSD 12.2 and messed with gpart. It does 
> not offer "apple-zfs as an option. Aside from ZFS not being an Apple 
> creation, it's rather perverse that in 2021 one would need to use the beta 
> from another OS to partition a >2TB disk for OI.
> 
> From my admin log book:
> ---
> 1-26-13 
> 
> oi_151a7
> 
> text installer limited system to 2 TB of 3 TB disk
> 
> backed up to shell and ran format which correctly detected disk
> 
> successfully labeled disk with 2 slices of 128 GB and 2.6 GB
> 
> created pools w/ zpool on both slices
> 
> relabeling 3 TB disk using OI format(1m) runs into logic errors in the 
> partition.
> 
> solution is to do a "free hug" modify & take defaults for all slices, then 
> rename s0
> ---
> 
> From there I moved the disk to my Solaris 10 u8 system which happily created 
> the pools. I was also building an NL40 based system at the time and the log 
> book gets a bit unclear. The s0 & s1 slices are the hallmark of my ZFS boot 
> setup. Mirror on s0 and RAIDZ on s1. I don't think the Sol 10 instance has 
> the 128 GB s0 slice now. As I'm pretty sure it will kernel panic if I run 
> "format -e" and select the disk I'd rather not look.
> 
> The message here is OI was capable of doing the geometry for a >2 TB disk at 
> oi_151_a7. So for Hipster 2020.10 to not be able to do that is a considerable 
> regression.


root@beastie:/var/log# zpool status
  pool: rpool
 state: ONLINE
  scan: resilvered 1,68T in 0 days 10:10:07 with 0 errors on Fri Oct 25 
05:05:34 2019
config:

NAMESTATE READ WRITE CKSUM
rpool   ONLINE   0 0 0
  raidz1-0  ONLINE   0 0 0
c3t0d0  ONLINE   0 0 0
c3t1d0  ONLINE   0 0 0
c3t3d0  ONLINE   0 0 0
c3t4d0  ONLINE   0 0 0

errors: No known data errors
root@beastie:/var/log# prtvtoc /dev/rdsk/c3t0d0
* /dev/rdsk/c3t0d0 partition map
*
* Dimensions:
* 512 bytes/sector
*  7814037168 sectors
*  7814037101 accessible sectors
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
* First   Sector  Last
* Sector   Count  Sector
*34 222 255
*
*First   Sector  Last
* Partition  Tag  Flags  Sector   Count  Sector  Mount Directory
   0 1200  256  524288  524543
   1  400   524544  7813496207  7814020750
   8 1100   7814020751   16384  7814037134
root@beastie:/var/log# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
   0. c3t0d0 
  /pci@0,0/pci15d9,805@1f,2/disk@0,0
   1. c3t1d0 
  /pci@0,0/pci15d9,805@1f,2/disk@1,0
   2. c3t3d0 
  /pci@0,0/pci15d9,805@1f,2/disk@3,0
   3. c3t4d0 
  /pci@0,0/pci15d9,805@1f,2/disk@4,0
Specify disk (enter its number): 0
selecting c3t0d0
[disk formatted]
/dev/dsk/c3t0d0s1 is part of active ZFS pool rpool. Please see zpool(1M).


FORMAT MENU:
disk   - select a disk
type   - select (define) a disk type
partition  - select (define) a partition table
current- describe the current disk
format - format and analyze the disk
fdisk  - run the fdisk program
repair - repair a defective sector
label  - write label to the disk
analyze- surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
inquiry- show vendor, product and revision
volname- set 8-character volume name
! - execute , then return
quit
format> ver

Volume name = <>
ascii name  = 
bytes/sector=  512
sectors = 7814037168
accessible sectors = 7814020717
first usable sector = 34
last usable sector = 7814037134
Part  TagFlag First Sector  Size  Last Sector
  0 systemwm   256   256.00MB   524543
  1usrwm524544 3.64TB   7814020750
  2 unassignedwm 000
  3 unassignedwm 000
  4 unassignedwm 000
  5 unassignedwm 000
  6 unassignedwm 000
  8   reservedwm

Re: [OpenIndiana-discuss] OI 2020.10 install disk FAIL #2 !!!

2021-02-28 Thread Reginald Beardsley via openindiana-discuss
 The Debian derived gparted disk did not offer any zfs FS types.

Out of curiosity I just booted FreeBSD 12.2 and messed with gpart. It does not 
offer "apple-zfs as an option. Aside from ZFS not being an Apple creation, it's 
rather perverse that in 2021 one would need to use the beta from another OS to 
partition a >2TB disk for OI.

From my admin log book:
---
 1-26-13 

oi_151a7

text installer limited system to 2 TB of 3 TB disk

backed up to shell and ran format which correctly detected disk

successfully labeled disk with 2 slices of 128 GB and 2.6 GB

created pools w/ zpool on both slices

relabeling 3 TB disk using OI format(1m) runs into logic errors in the 
partition.

solution is to do a "free hug" modify & take defaults for all slices, then 
rename s0
---
 
From there I moved the disk to my Solaris 10 u8 system which happily created 
the pools. I was also building an NL40 based system at the time and the log 
book gets a bit unclear. The s0 & s1 slices are the hallmark of my ZFS boot 
setup. Mirror on s0 and RAIDZ on s1. I don't think the Sol 10 instance has the 
128 GB s0 slice now. As I'm pretty sure it will kernel panic if I run "format 
-e" and select the disk I'd rather not look.

The message here is OI was capable of doing the geometry for a >2 TB disk at 
oi_151_a7. So for Hipster 2020.10 to not be able to do that is a considerable 
regression.

Reg
 On Sunday, February 28, 2021, 09:00:17 PM CST, John D Groenveld 
 wrote:  
 
 In message <845919546.1414404.1614561339...@mail.yahoo.com>, Reginald Beardsley
 via openindiana-discuss writes:
>Following hints from others, I used a *working* copy of gparted to put a GPT
>label on a 5 TB disk in advance of attempting to install OI.

I booted the FreeBSD 13 Beta installer:
https://download.freebsd.org/ftp/releases/ISO-IMAGES/13.0/
With gpart(8), I created a GPT scheme and then added an apple-zfs
partition.
Then I created a zpool named rpool with features disabled and
exported it.
# zpool create -d rpool ada0p1
# zpool export rpool

Using the OI text installer I was able to F5 to install to an existing
pool.
http://dlc.openindiana.org/

>I took photos of the screen should anyone question this

As Josh Clulow noted, the OI text installer keeps a logfile in /tmp
which would be helpful if you're interested in providing a bug report.

John
groenv...@acm.org



Re: [OpenIndiana-discuss] OI 2020.10 install disk FAIL #2 !!!

2021-02-28 Thread John D Groenveld
In message <845919546.1414404.1614561339...@mail.yahoo.com>, Reginald Beardsley
 via openindiana-discuss writes:
>Following hints from others, I used a *working* copy of gparted to put a GPT
>label on a 5 TB disk in advance of attempting to install OI.

I booted the FreeBSD 13 Beta installer:
https://download.freebsd.org/ftp/releases/ISO-IMAGES/13.0/
With gpart(8), I created a GPT scheme and then added an apple-zfs
partition.
Then I created a zpool named rpool with features disabled and
exported it.
# zpool create -d rpool ada0p1
# zpool export rpool
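Spelled out, the whole sequence on the FreeBSD side would be roughly as follows (a sketch assembled from the steps above; ada0 is the target disk, and the initial gpart destroy is my addition — the sequence wipes the disk):

```
# On the FreeBSD 13 installer shell; ada0 is the whole target disk.
# WARNING: destroys any existing partitioning on ada0.
gpart destroy -F ada0        # wipe any existing scheme (skip if disk is blank)
gpart create -s gpt ada0     # write a fresh GPT scheme
gpart add -t apple-zfs ada0  # one partition spanning the disk -> ada0p1
zpool create -d rpool ada0p1 # -d: create with all feature flags disabled
zpool export rpool           # release the pool for the OI installer
```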

Using the OI text installer I was able to F5 to install to an existing
pool.
http://dlc.openindiana.org/

>I took photos of the screen should anyone question this

As Josh Clulow noted, the OI text installer keeps a logfile in /tmp
which would be helpful if you're interested in providing a bug report.

John
groenv...@acm.org



Re: [OpenIndiana-discuss] OI 2020.10 install disk FAIL #2 !!!

2021-02-28 Thread Judah Richardson
About to eat so firing this reply off quickly:

OI has an oddity in which the live USB supports GPT and UEFI boot (on the
USB media itself) but the actual OS installation and boot is entirely
legacy (MBR, at least by default).

Yes, I know that doesn't make sense. There has been some debate about whether OI
supports UEFI (see the mailing list archives from the past 6 months), but in my
opinion that debate is semantic rather than practical: relative to other
current-generation OSes, the practical answer is "no". I believe the sentence
above is an accurate assessment. Hopefully that helps.

On Sun, Feb 28, 2021 at 7:15 PM Reginald Beardsley via openindiana-discuss <
openindiana-discuss@openindiana.org> wrote:

> Following hints from others, I used a *working* copy of gparted to put a
> GPT label on a 5 TB disk in advance of attempting to install OI.
>
> I took photos of the screen should anyone question this, but I don't see a
> reason to post them lest they cost someone on a measured connection.  After
> the OI fail, I booted gparted and took a picture of what it reported to
> verify that it was what I had done.
>
> The simple fact of the matter is both the text and the GUI install
> completely ignore the GPT label on the disk.
>
> I created a 2 GB partition, a 100 GB partition and allocated the rest of
> the disk to a 3rd partition.  I then booted the OI disk which ignored the
> partitioning and refused to use more than 2 TB.
>
> This is simply a failure to actually test the image before release.
> Relative to creating a distribution ISO image, testing it is vanishingly
> little work.  I do not know and do not care whose neck this albatross
> should be hung around.  But I firmly hope that those who do know remove
> this person from the role.  This does more damage to OI than can be
> described.  I have multiple Z400s and an Ultra 20 as well as several
> functional older machines.  I shall be more than happy to test an install
> image before it is put up for general use.
>
> I was, and still am willing to work on OI.  But the lack of anything
> resembling cooperation makes that rather difficult.  The computer is the
> final arbiter.  If OI fails on a system Sun certified for Solaris 10 there
> is a very serious QC issue.  You can't blame this on the difficulties posed
> by "arbitrary hardware".
>
> Reg
>
>