Re: [OpenIndiana-discuss] The kiss of death

2021-04-21 Thread Jacob Ritorto
I'm highly motivated to help test for SPARC (SB100, SB2000, Ultra 5, 2, 1),
and I do have one shitty PC (HP Z400) I can add to the fray.  Please let me
know how I can be useful.



On Wed, Apr 21, 2021 at 7:30 PM Reginald Beardsley via openindiana-discuss <
openindiana-discuss@openindiana.org> wrote:

> I'm extremely concerned about the situation I've encountered.
>
> I'm trying to bring up OI on Oracle Solaris 11 certified hardware.
> 2020.10 crashes booting the Desktop Live Image.  The 2021.04_rc1 Live Image
> works fine, but the installed image crashes on reboot.
>
> This is disastrous for the future of OI.  If it is not fixed, OI is dead.
> I feel confident that I can determine the source of my problem and fix it.
> However, I only have a few hardware platforms to test releases on.  Fixing
> my problem doesn't make OI reliable on other systems. If we do not achieve
> reliable operation on many platforms it's "game over".
>
> We need to have multiple people test a release candidate on as many pieces
> of hardware as possible.  We can't do everything, but we can do better.
> Release decisions need to be based on a sign off from multiple people
> testing a release candidate, not on a schedule.
>
> Reg
>
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] The kiss of death

2021-04-21 Thread John D Groenveld
In message, Jacob Ritorto writes:
>and I do have one shitty pc (HP Z400) I can add to the fray.  Please let me
>know how I can be useful.

Can you install from the latest installation media on that PC?
http://dlc.openindiana.org/isos/hipster/test/

John
groenv...@acm.org



Re: [OpenIndiana-discuss] The kiss of death

2021-04-21 Thread Gary Mills
On Wed, Apr 21, 2021 at 11:29:55PM +, Reginald Beardsley via 
openindiana-discuss wrote:

> I'm trying to bring up OI on Oracle Solaris 11 certified hardware.
> 2020.10 crashes booting the Desktop Live Image.  The 2021.04_rc1
> Live Image works fine, but the installed image crashes on reboot.

I've seen this behavior before: you can boot from the install media
and even do an install with no problems, but the installed image fails
to boot fully.  It's caused by the VESA video driver, which will only
run correctly with a BIOS boot, not with a UEFI boot.  The installer
itself can be booted either way.
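Gary's point about driver selection can be verified directly: the Xorg log records which modules were loaded and unloaded. Below is a minimal sketch, run here against an embedded sample log so it is self-contained; on a real install, point LOG at /var/log/Xorg.0.log instead (the sample lines are illustrative, not taken from an actual OI log):

```shell
# Sketch: see which video driver Xorg actually settled on. On a real OI
# system, set LOG=/var/log/Xorg.0.log; here we embed a sample so the
# example runs anywhere.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
[    10.001] (II) LoadModule: "nv"
[    10.050] (II) UnloadModule: "nv"
[    10.051] (II) LoadModule: "vesa"
EOF
# The most recently loaded module is a reasonable hint of the active driver.
grep 'LoadModule' "$LOG" | tail -1   # prints the line naming "vesa"
rm -f "$LOG"
```

On a failing install, seeing vesa selected while booted via UEFI would be consistent with the diagnosis above.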


-- 
-Gary Mills--refurb--Winnipeg, Manitoba, Canada-



Re: [OpenIndiana-discuss] The kiss of death

2021-04-21 Thread Reginald Beardsley via openindiana-discuss
 That is what I've been fighting with for 3 days.
 On Wednesday, April 21, 2021, 06:40:15 PM CDT, John D Groenveld wrote:
> Can you install from the latest installation media on that PC?


Re: [OpenIndiana-discuss] The kiss of death

2021-04-21 Thread John D Groenveld
In message <20210422000856.ga10...@imap.fastmail.com>, Gary Mills writes:
>to boot fully.  It's caused by the VESA video driver, which will only
>run correctly with a BIOS boot, not with a UEFI boot.  The installer

Possible hack:
# pkg uninstall xorg-video xorg-video-vesa
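A slightly safer variant of this hack, sketched below under the assumption that you are on an OI install where beadm is available (the boot environment name is arbitrary), snapshots the current boot environment first so the change can be rolled back from the loader menu:

```shell
# Sketch (OI-specific; beadm and pkg exist only on illumos distributions):
# keep a fallback boot environment before removing the VESA driver
# packages, so a bad result can be undone from the boot loader menu.
beadm create before-vesa-removal   # "before-vesa-removal" is an arbitrary name
pkg uninstall xorg-video xorg-video-vesa
# If the desktop no longer starts, roll back:
#   beadm activate before-vesa-removal && reboot
```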

John
groenv...@acm.org



Re: [OpenIndiana-discuss] The kiss of death

2021-04-21 Thread Reginald Beardsley via openindiana-discuss
 That is in contradiction to what Allan said, but I'll write an xorg.conf file 
by hand tomorrow. 8-9 hours of this is all I can stand. If I try to run longer, 
I make things worse rather than better.

I am doing a UEFI boot as it is a 4x 4 TB RAIDZ2 pool. The UEFI vs SMI boot 
would make a lot of sense. However, it still leaves the problem of 2020.10 
crashing before reaching the desktop.

I want to resolve this. If it's not fixed, OI is dead. That would make me 
*very* unhappy. 

I was quite stunned by the number of symlinks and *not* happy. I've done battle 
with profligate use of symlinks as bandaids too many times.

Reg



 On Wednesday, April 21, 2021, 07:09:00 PM CDT, Gary Mills wrote:
> It's caused by the VESA video driver, which will only run correctly
> with a BIOS boot, not with a UEFI boot.


Re: [OpenIndiana-discuss] The kiss of death

2021-04-21 Thread Reginald Beardsley via openindiana-discuss
 

I've been referring to this as 2021.04_rc1 for 2 weeks. We need to adopt what 
most other projects do and have a release candidate process where people test 
the images before a formal release. 

Expecting the people building them to do it is simply not reasonable. They are 
already quite busy and cannot possibly support the number of hardware platforms 
needed even if they could afford it. It's simply too much work.

I maintain, and will continue to maintain, systems whose sole purpose is 
testing such things.

Reg
 On Wednesday, April 21, 2021, 06:40:15 PM CDT, John D Groenveld wrote:
> Can you install from the latest installation media on that PC?


Re: [OpenIndiana-discuss] The kiss of death

2021-04-21 Thread Nelson H. F. Beebe
John D Groenveld  asks today

>> Can you install from the latest installation media on that PC?
>> http://dlc.openindiana.org/isos/hipster/test/

There are a lot of recent discussions on this list about installation
problems on particular hardware systems, but not much mention of
routine test installations on virtualization hypervisors, like bhyve,
Parallels, OVirt/QEMU, virt-manager/QEMU, VirtualBox, VMware ESX, and
Xen.

At my site, we have a test farm with hundreds of VMs, primarily on
OVirt and virt-manager, with one small system capable of supporting
just one or two simultaneous VMs on VirtualBox.

I have had at least 8 instances of various releases of OpenIndiana and
Hipster, plus more VMs with the relatives OmniOS, OmniTribblix,
Sun/Oracle Solaris, Tribblix, Unleashed, and XstreamOS, mostly on the
QEMU-based hypervisors.

Several classes of installer bugs have afflicted some of the VMs that
we run, or have tried to install:

(a) uncontrollable streaming garbage input on the console device,
making correct typein almost impossible;

(b) black screen-of-death on the console at boot time;

(c) random patterns of green and red lines on the console (hiding all
text);

(d) failure of the mouse to track the screen cursor, sometimes showing
two cursors, one of which cannot reach certain portions of the
screen;

(e) installer screens with critical buttons lying off the visible
console screen, and no resizing possible;

(f) boot wait times that are too short to get to the BIOS setup before
booting starts (on OVirt, it can take 20 to 30 seconds after a
power-on before the console is visible, by which time a lot of
early output is lost).

Are openindiana developers on this list willing to post a list of VMs
on which candidate ISO images have been tested before end users try to
install them on physical machines?

It seems to me that such testing should be normal these days, given
the extremely low cost of virtualization --- I personally run more than
80 VMs simultaneously on each of my home and campus office
workstations.

I will make an effort over the next few days to try some of the latest
ISO images from the Web site above on some of our virtualization
platforms.

---
- Nelson H. F. BeebeTel: +1 801 581 5254  -
- University of UtahFAX: +1 801 581 4148  -
- Department of Mathematics, 110 LCBInternet e-mail: be...@math.utah.edu  -
- 155 S 1400 E RM 233   be...@acm.org  be...@computer.org -
- Salt Lake City, UT 84112-0090, USAURL: http://www.math.utah.edu/~beebe/ -
---



Re: [OpenIndiana-discuss] The kiss of death

2021-04-22 Thread Reginald Beardsley via openindiana-discuss
 There are very few developers. As a consequence, responsibility for testing 
must fall on the regular user community. It should in any case, for the simple 
reason that installation on physical hardware is more of an issue than on a VM, 
and no one can support very many physical machines. I have a single Z400 set up 
to allow testing OS installs to old disks using a trayless SATA socket.

The point of starting this thread is to establish a user-community review 
process for each release candidate before the release is formalized. The list 
of systems on which a release has been tested should be in the release notes, 
along with the name of the person who did the testing.

I got started on testing when what I thought would be a routine install of 
2020.10 to a 5 TB disk revealed that gparted dumps core. So I started looking 
for other issues.

This is why I have been doing extensive testing of the ISO images John linked, 
which I have been referring to as 2021.04_rc1.

The only way this can work is to post release candidates that are tested by 
the user community, with issues logged; when those issues have been fixed, 
another RC is created and the testing repeated. When all the issues found have 
been verified as fixed, it becomes an official release, with the list of 
systems used in testing in the release notes. Semiannual releases which break 
things that were working are not conducive to the survival of OI. Gparted 
works on the 2017.10 ISO, but dumps core on 2020.10. That cannot be allowed to 
happen. Better no release than one that has obvious major flaws.

Reg



On Wednesday, April 21, 2021, 10:14:00 PM CDT, Nelson H. F. Beebe wrote:
> Are openindiana developers on this list willing to post a list of VMs
> on which candidate ISO images have been tested before end users try to
> install them on physical machines?


Re: [OpenIndiana-discuss] The kiss of death

2021-04-22 Thread Gary Mills
On Thu, Apr 22, 2021 at 12:48:26AM +, Reginald Beardsley wrote:
>I am doing a UEFI boot as it is a 4x 4 TB RAIDZ2 pool. The UEFI vs SMI
>boot would make a lot of sense. However, it still leaves the problem of
>2020.10 crashing before reaching the desktop.

This is a known broken configuration of OI.  It cannot work.  The VESA
driver is a fall-back driver in Xorg.  It only runs if the other
drivers all fail.  Unfortunately, the VESA driver does not work with
a UEFI boot.  As long as the nvidia driver succeeds, it will be
selected by Xorg, and VESA will never be used.  The Xorg log will
contain this information.

My AMD systems all use the Radeon HD 7450 video card, which requires
the VESA driver on OI.  They all use a BIOS boot.  OI is installed on
a mirrored pair of small SSD disks.  They also use a mirrored pair of
large mechanical disks for the data pool.  This configuration does
work.  Other people use different configurations that also work.




Re: [OpenIndiana-discuss] The kiss of death

2021-04-22 Thread Reginald Beardsley via openindiana-discuss
 
VESA *does* work with a UEFI boot. I had 2020.10 booting from a 4x 4 TB RAIDZ2 
pool using the VESA driver before I tried to boot 2021.04_rc1.





Re: [OpenIndiana-discuss] The kiss of death

2021-04-22 Thread Toomas Soome via openindiana-discuss
VESA can only work when the adapter has a VESA/VGA BIOS. With UEFI, there is no 
requirement to have one. 

Sent from my iPhone

> On 22. Apr 2021, at 16:26, Reginald Beardsley via openindiana-discuss 
>  wrote:
> 
> 
> VESA *does* work with a UEFI boot. I had 2020.10 booting from a 4x 4 TB 
> RAIDZ2 pool using the VESA driver before I tried to boot 2021.04_rc1.


Re: [OpenIndiana-discuss] The kiss of death

2021-04-22 Thread Reginald Beardsley via openindiana-discuss
 
I don't understand "adapter". I'm using a DisplayPort on the K5000, feeding an 
HP DP-to-DVI adapter into the DVI port of the 1600x1200 Princeton monitor, so I 
don't think VGA enters into this at the moment. It might once I get a new KVM 
switch, as the one I've been using died and would pass only the video, not the 
USB signals.

The Z840 boot order BIOS menu allows UEFI or legacy boots of all devices. I've 
not seen any mention of graphics adapters in the BIOS screens, but it's a large 
and complex BIOS so I may have missed it.

Reg



Re: [OpenIndiana-discuss] The kiss of death

2021-04-22 Thread Austin Kim
On Apr 21, 2021, at 11:13 PM, Nelson H. F. Beebe  wrote:
> 
> Are openindiana developers on this list willing to post a list of VMs
> on which candidate ISO images have been tested before end users try to
> install them on physical machines?
Was reading Appendix A:  “MakeIndex” of Leslie Lamport’s _LaTeX:  A Document 
Preparation System_, second edition (which somehow perennially outlives the 
Tower of Babel of documentation and markup languages that seem to come into and 
out of vogue every year) when I saw your post pop up in my inbox and my eyes 
did a double-take!

Like a blindfolded chess-master playing multiple amateurs in round-robin 
fashion, I think the sheer passion and dedication of the OI developers often 
mask the fact that there are actually a small number of people doing an 
impossibly enormous amount of work (at least as far as I can tell from GitHub 
commits), all the more remarkable considering that unlike the BSDs there is no 
Foundation providing any financial, logistical, or organizational support for 
their efforts (apart from hosting and upstream sources from the illumos 
project).

One possible solution might be to do what some of the BSDs do and set up a Wiki 
page with an architecture × platform grid where OI users who have had 
successful installs out in the field can document which version of OI they got 
to work on which architecture/platform combination (possibly with notes, _e. 
g._, on which graphics cards or disk controllers they tested).

As with Amdahl’s Law, when the number of developers is constrained, it makes 
sense to focus those development resources on the use cases and subsystems 
where OpenIndiana is used most of the time.  (I’m admittedly biased here 
because OpenIndiana isn’t an option on the IBM PowerPC 970*-based Apple Power 
Mac G5s that I use as my personal machines, so my personal use case for OI is 
solely as a server OS.)

And thank you so much for all your work over the years on MakeIndex and *TeX!  
The only reason more people around the world don’t reach out to you to thank 
you is I suspect that most people don’t know whom to thank :)

Austin

“We are responsible for actions performed in response to circumstances for 
which we are not responsible.”  —Allan Massie, _A Question of Loyalties_, 1989

Re: [OpenIndiana-discuss] The kiss of death

2021-04-22 Thread John D Groenveld
In message, "Nelson H. F. Beebe" writes:
>Are openindiana developers on this list willing to post a list of VMs
>on which candidate ISO images have been tested before end users try to
>install them on physical machines?

Using bhyve(5), I was able to install via
OI-hipster-text-20210405.iso.

Using VirtualBox configured in UEFI mode, I was able to
install via that ISO image.

Unfortunately, using the USB image, OI-hipster-text-20210405.usb,
the VM boots into maintenance mode under both bhyve and VirtualBox.
John
groenv...@acm.org



Re: [OpenIndiana-discuss] The kiss of death

2021-04-22 Thread Richard L. Hamilton
I don't reinstall from an image, but run pkg update almost daily, and I have 
rarely (maybe once, a long time ago) run into anything that didn't boot 
properly again (such that I had to boot an older BE and get rid of the failed 
one). That's as VirtualBox guests (usually the current VirtualBox, except that 
right now 6.1.20 aborts VMs on startup, so I reverted to 6.1.18 and its 
extension pack, which is fine) on a macOS host (currently Mojave).

root@openindiana:~# uname -a;cat /etc/release
SunOS openindiana 5.11 illumos-64b8fdd9a2 i86pc i386 i86pc
 OpenIndiana Hipster 2020.10 (powered by illumos)
OpenIndiana Project, part of The Illumos Foundation (C) 2010-2020
Use is subject to license terms.
   Assembled 31 October 2020



-- 
eMail:  mailto:rlha...@smart.net






Re: [OpenIndiana-discuss] The kiss of death

2021-04-22 Thread Judah Richardson
On Thu, Apr 22, 2021 at 10:20 AM Richard L. Hamilton 
wrote:

> I don't reinstall from an image, but run pkg update almost daily, and have
> rarely (maybe once, a long time ago) run into anything that didn't boot
> properly again (such that I had to boot an older BE and get rid of the
> failed one).

This has been my experience running OI Hipster on a Dell OptiPlex 390 MT
(specs and config at link) with legacy boot. Aside from the initial
installation challenges and some instability running with 8 GB of RAM
(upgrading to 16 GB fixed that), I haven't experienced any bugs that prevent
OI from running or booting smoothly. I update pkg and pkgin at least once a
month (usually coinciding with Patch Tuesday), which is also the only time
I ever have to reboot (solely to move to the new BE).

In addition to OI, I run Debian, Ubuntu, FreeBSD, Windows 10 (2 release
channels), and Raspberry Pi OS all on bare metal. Although the lack of
package support/availability limits its usefulness relative to the other
OSes for my workflow, OI has been the most stable and reliable of the
bunch. In fact, I frequently use OI's out of the box integrated DE as a
model for FreeBSD to follow.

I track issues with my OS installations on GitHub, and currently, of the 19
open, only 1 is an OI issue. That issue is one I theoretically have the
solution to, but haven't had time to implement (plus, thanks to the magic of
DHCP IP address lease renewal, it's easy to work around).



Re: [OpenIndiana-discuss] The kiss of death

2021-04-22 Thread Nelson H. F. Beebe
Today, I downloaded 

http://dlc.openindiana.org/isos/hipster/test/OI-hipster-gui-20210405.iso

and attempted to create a new VM on CentOS 7.9.2009 with virt-manager 1.5.0
and qemu-system-x86_64 2.0.0.

The initial BIOS screen comes up, asks for a keyboard type, and proceeds
to display a nice GUI.  However, I have two cursors on the display:
one frozen (and the one needed to do selections), and the other
movable, but whose selection clicks are ignored.  No progress
is possible.

I'm next going to try the text mode installer, and the GUI mode
on an Ubuntu 20.04 system.  More later...



Re: [OpenIndiana-discuss] The kiss of death

2021-04-22 Thread Nelson H. F. Beebe
Following my report earlier today of problems installing the latest
OpenIndiana ISO GUI installer on CentOS 7.9.2009, I next downloaded

http://dlc.openindiana.org/isos/hipster/test/OI-hipster-text-20210405.iso

and that led to a successful installation on CentOS 7.9.2009 with
virt-manager 1.5.0 and qemu-system-x86_64 2.0.0.

There was one nit: at the point that it asks for a domain name, it
cuts it off at 15 characters, which is way too small.  See

https://www.ietf.org/rfc/rfc1035.txt
https://en.wikipedia.org/wiki/Hostname

https://web.archive.org/web/20190518124533/https://devblogs.microsoft.com/oldnewthing/?p=7873

The limit for the domain name field should be 252 (allowing a one-byte
count, one-byte hostname, and dot for the unqualified hostname), so
the total fully-qualified domain name string is less than 256 bytes.

In my case, my required domain name is "vm.math.utah.edu", one longer
than the OpenIndiana installer permits.  Presumably I can fix this
trivially in files in the /etc tree after installation.
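As a rough illustration of those length rules (a sketch only: the
253-character figure is the usual presentation-form bound implied by RFC
1035's 255-octet wire limit, not the installer's actual check):

```shell
# Hypothetical length check for a fully-qualified domain name.
# RFC 1035 caps a name at 255 octets on the wire; in presentation form
# (dot-separated, no trailing dot) that leaves at most 253 characters,
# and each individual label may be at most 63 characters.
fqdn="vm.math.utah.edu"   # the domain from the report above

len=${#fqdn}
if [ "$len" -le 253 ]; then
    echo "ok: $len characters"
else
    echo "too long: $len characters"
fi
```

By that bound, a 15-character cutoff in the installer is far too small.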

During installation, I assigned a static IPv4 address, and when the
system later rebooted, it gave me a text console, but no startx or
xstart or xinit to bring up a window system.

I can successfully login to the new VM via ssh from another system.
Presumably, I could continue to configure this system, and perhaps
later, add enough X11 packages to get a desktop.  However, I'm
delaying that for now.

Next, I ran the GUI installer from

  http://dlc.openindiana.org/isos/hipster/test/OI-hipster-gui-20210405.iso

on Ubuntu 20.04, which has virt-manager 2.2.1 and qemu-system-x86_64
4.2.1, considerably newer than available on CentOS 7.9.2009.

The GUI installation proceeded as expected, with no mouse problems
like those I hit on CentOS, but it quickly reached a panel complaining
about two missing device drivers, highlighting them in pink:

Audio   Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family)
        High Definition Audio Controller        Misconfigured: [audiohd]

Red Hat, Inc. Virtio console                    UNK Info

It will not proceed when I press the Install button, reporting in a
popup:

Add Driver
Driver not installed
The driver package is empty
Close

I have no idea what driver package is expected there, so there is
nothing to do but abort the installer.  

I don't have audio on any of my 500+ VMs, and given that the installer
desktop screen looks normal, it is puzzling why the GUI installer
thinks a console driver is missing.

I may be able to repeat these experiments next week on VirtualBox on
another system on campus; this week, I am working from home.

---
- Nelson H. F. Beebe                    Tel: +1 801 581 5254                  -
- University of Utah                    FAX: +1 801 581 4148                  -
- Department of Mathematics, 110 LCB    Internet e-mail: be...@math.utah.edu  -
- 155 S 1400 E RM 233                       be...@acm.org  be...@computer.org -
- Salt Lake City, UT 84112-0090, USA    URL: http://www.math.utah.edu/~beebe/ -
---



Re: [OpenIndiana-discuss] The kiss of death

2021-04-22 Thread Reginald Beardsley via openindiana-discuss
 
The text-install ISO doesn't have a windowing system on it. I don't know if 
there is anything analogous to the Solaris 11.4 "solaris-desktop" pkg.

 On Thursday, April 22, 2021, 04:21:01 PM CDT, Nelson H. F. Beebe 
 wrote:  
 


Re: [OpenIndiana-discuss] The kiss of death

2021-04-22 Thread Tim Mooney via openindiana-discuss

In regard to: Re: [OpenIndiana-discuss] The kiss of death, Nelson H. F:


There was one nit: at the point that it asks for a domain name, it
cuts it off at 15 characters, which is way too small.  See

   https://www.ietf.org/rfc/rfc1035.txt
   https://en.wikipedia.org/wiki/Hostname
   
https://web.archive.org/web/20190518124533/https://devblogs.microsoft.com/oldnewthing/?p=7873


I'm not positive about this, so take it with a grain of salt, but
I think it's asking not for the DNS domain but for the NIS domain.

Because of Solaris' lengthy history with NIS/YP, and because a lot of sites
never really went for the replacement (NIS+), I suspect the installer wants
a NIS domainname.

See domainname(1m) and defaultdomain(4) on an installed system.

Tim
--
Tim Mooney                                            tim.moo...@ndsu.edu
Enterprise Computing & Infrastructure /
Division of Information Technology    /               701-231-1076 (Voice)
North Dakota State University, Fargo, ND 58105-5164



Re: [OpenIndiana-discuss] The kiss of death

2021-04-22 Thread Tim Mooney via openindiana-discuss

In regard to: Re: [OpenIndiana-discuss] The kiss of death, Reginald...:


The text-install ISO doesn't have a windowing system on it. I don't know
if there is anything analogous to the Solaris 11.4 "solaris-desktop "
pkg.


It's 'mate-install'.  And all it does is force the installation of the
desktop components, so the recommended steps are

pfexec pkg install mate-install
pfexec pkg uninstall mate-install

Tim
--
Tim Mooney tim.moo...@ndsu.edu
Enterprise Computing & Infrastructure /
Division of Information Technology/701-231-1076 (Voice)
North Dakota State University, Fargo, ND 58105-5164

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] The kiss of death

2021-04-22 Thread Richard L. Hamilton
That must be something with the user interface for configuration, because the 
"domainname" command allows 256 characters (257 including a terminating null 
byte, maybe). Even if you go by something else that says 64, it's still a lot 
bigger than 15.
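As a side note, the local POSIX bound on host names can be queried
directly. A sketch (the reported value varies by OS; glibc-based Linux
typically prints 64, and the NIS domainname limit on illumos is a
separate constant):

```shell
# Query the POSIX limit on host name length for this system.
# The reported value varies by OS; glibc-based Linux typically prints 64.
getconf HOST_NAME_MAX
```

Whatever the exact constant, every candidate is much larger than 15.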


-- 
eMail:  mailto:rlha...@smart.net






Re: [OpenIndiana-discuss] The kiss of death

2021-04-22 Thread John D Groenveld
In message <30db2d9-2732-992c-f23-10d8e386f...@ndsu.edu>, Tim Mooney via
openindiana-discuss writes:
>It's 'mate-install'.  And all it does is force the installation of the
>desktop components, so the recommended steps are

http://docs.openindiana.org/handbook/getting-started/>

>   pfexec pkg install mate-install
>   pfexec pkg uninstall mate-install

This currently fails because thunderbird-lightning is now obsolete:
http://pkg.openindiana.org/hipster/manifest/0/mate_install%400.1%2C5.11-2020.0.1.42%3A20200819T002124Z>
http://pkg.openindiana.org/hipster/manifest/0/mail%2Fthunderbird%2Fplugin%2Fthunderbird-lightning%4052.9.1%2C5.11-2020.0.1.2%3A20210419T204405Z>

If someone has the source downloaded, please submit a PR to
remove it from the mate_install manifest.
John
groenv...@acm.org



Re: [OpenIndiana-discuss] The kiss of death

2021-04-22 Thread Nelson H. F. Beebe
After my last message, I ran "pkg install" on the OpenIndiana VM
created from the text-mode installer on CentOS 7.9.2009, to get gcc,
build-essential, and a few other packages installed.  That went as
expected.

I then ran

# pkg update
# pkg upgrade

That updated 347 packages and 8074 files, then reported

A clone of openindiana exists and has been updated and activated.
On the next boot the Boot Environment openindiana-2021:04:22 will be
mounted on '/'.  Reboot when ready to switch to this updated BE.

I typed

# reboot

and the system came partly up with

Loading unix...
Loading /platform/i86pc/amd64/boot_archive...
ZFS: i/o error - all block copies unavailable
...
No rootfs module provided, aborting

The VM was allocated an 80GB disk, 2GB DRAM, and 2 CPUs, and the
physical CentOS 7.9.2009 host has 1.8TB of free space.

I powered off the VM, increased the DRAM from 2GB to 8GB, and retried,
but behavior is identical.  Another attempt with 1 CPU and 32GB DRAM
got the same failure.

I powered off, rebooted from the ISO image, logged in as
root/openindiana, and ran

# zpool import
# df -h /rpool
Filesystem        Size     Used  Available  Capacity  Mounted on
rpool           77.02G      33K     70.01G        1%   /rpool

# ls /rpool
boot

# ls /rpool/boot
grub menu.lst

So, the disk looks like it has almost been wiped clean, which should
not be the case after all of the package updating.

Things were not this bad with earlier OpenIndiana releases over the
last almost 11 years, back to my first VM for this O/S created on
14-Sep-2010 from oi-dev-147-x86.iso.

---
- Nelson H. F. Beebe                    Tel: +1 801 581 5254                  -
- University of Utah                    FAX: +1 801 581 4148                  -
- Department of Mathematics, 110 LCB    Internet e-mail: be...@math.utah.edu  -
- 155 S 1400 E RM 233                       be...@acm.org  be...@computer.org -
- Salt Lake City, UT 84112-0090, USA    URL: http://www.math.utah.edu/~beebe/ -
---



Re: [OpenIndiana-discuss] The kiss of death

2021-04-22 Thread Reginald Beardsley via openindiana-discuss
 
I'll give that route a try if the current text install from the Desktop fails. 
I updated the BIOS and that stopped Solaris 11.4 from booting :-( What a mess.

So I figured it was worth a shot to try 2021.04_rc1 again.

Reg



Re: [OpenIndiana-discuss] The kiss of death

2021-04-22 Thread Volker A. Brandt
Hi Nelson!


Good to see you're not giving up.

[...]
> I powered off, rebooted from the ISO image, logged in as
> root/openindiana, and ran
>
> # zpool import
> # df -h /rpool
> Filesystem        Size     Used  Available  Capacity  Mounted on
> rpool           77.02G      33K     70.01G        1%   /rpool
>
> # ls /rpool
> boot
>
> # ls /rpool/boot
> grub menu.lst

This is all that's in /rpool, so that is correct.

> So, the disk looks like it has almost been wiped clean, which should
> not be the case after all of the package updating.

You have 7GB worth of data on that disk.

Try a "zfs list -tall" to see what's really there.  I am guessing that
a lot of stuff did not get mounted since you did not specify a temporary
root directory (mkdir /tmp/mypool ; zpool import -R /tmp/mypool rpool).


Regards -- Volker
-- 

Volker A. BrandtConsulting and Support for Solaris-based Systems
Brandt & Brandt Computer GmbH   WWW: http://www.bb-c.de/
Am Wiesenpfad 6, 53340 Meckenheim, GERMANYEmail: v...@bb-c.de
Handelsregister: Amtsgericht Bonn, HRB 10513  Schuhgröße: 46
Geschäftsführer: Rainer J.H. Brandt und Volker A. Brandt

"When logic and proportion have fallen sloppy dead"



Re: [OpenIndiana-discuss] The kiss of death

2021-04-22 Thread Nelson H. F. Beebe
Following my report a few minutes ago of a failure to reboot after
package updates, Volker A. Brandt kindly suggested a recovery attempt
like this:

# zfs list -tall
# mkdir /tmp/mypool
# zpool import -R /tmp/mypool rpool

That showed that the installed files from the pkg system are there, so
I ran

# zpool export
# reboot

However, that brings me back to the boot prompt with the old complaint

ZFS: i/o error - all block copies unavailable
ZFS: can't read MOS of pool rpool
Can't find /boot/loader
Can't find /boot/zfsloader

I forgot to mention in my earlier report that in the text-mode
install, I chose MBR booting, rather than UEFI; the latter does not
work on the QEMU version available on CentOS 7.

Ideas of what to do next?

---
- Nelson H. F. Beebe                    Tel: +1 801 581 5254                  -
- University of Utah                    FAX: +1 801 581 4148                  -
- Department of Mathematics, 110 LCB    Internet e-mail: be...@math.utah.edu  -
- 155 S 1400 E RM 233                       be...@acm.org  be...@computer.org -
- Salt Lake City, UT 84112-0090, USA    URL: http://www.math.utah.edu/~beebe/ -
---



Re: [OpenIndiana-discuss] The kiss of death

2021-04-22 Thread John D Groenveld
In message , "Nelson H. F. Beebe" writes:
>A clone of openindiana exists and has been updated and activated.
>On the next boot the Boot Environment openindiana-2021:04:22 will be
>mounted on '/'.  Reboot when ready to switch to this updated BE.
>
>I typed
>
># reboot
>
>and the system came partly up with

Can you boot the previous BE from loader?
If so, it would be interesting if you can try again:
# pkg update
# init 6

John
groenv...@acm.org




Re: [OpenIndiana-discuss] The kiss of death

2021-04-22 Thread Reginald Beardsley via openindiana-discuss
 
What do those mean? I have seen them numerous times even though the system 
booted.



On Thursday, April 22, 2021, 05:46:30 PM CDT, Nelson H. F. Beebe 
 wrote:


 ZFS: i/o error - all block copies unavailable
 ZFS: can't read MOS of pool rpool
  


Re: [OpenIndiana-discuss] The kiss of death

2021-04-22 Thread Jacob Ritorto
Got this scrambled graphics screen. Going to try text install. 

Sent from my iPhone

> On Apr 22, 2021, at 11:54 AM, Judah Richardson  
> wrote:
> 
> On Thu, Apr 22, 2021 at 10:20 AM Richard L. Hamilton 
> wrote:
> 
>> I don't reinstall from an image, but run pkg update almost daily, and have
>> rarely (maybe once, a long time ago) run into anything that didn't boot
>> properly again (such that I had to boot an older BE and get rid of the
>> failed one).
> 
> This has been my experience running OI Hipster on a Dell OptiPlex 390 MT
> 
> (specs and config at link) with legacy boot. Aside from the initial
> installation challenges and some instability running with 8 GB of RAM
> (upgrading to 16 GB fixed that), I haven't experienced any bugs that prevent OI
> from running or booting smoothly. I update pkg and pkgin at least once a
> month (usually coinciding with Patch Tuesday), which is also the only time
> I ever have to reboot (solely to move to the new BE).
> 
> In addition to OI, I run Debian, Ubuntu, FreeBSD, Windows 10 (2 release
> channels), and Raspberry Pi OS all on bare metal. Although the lack of
> package support/availability limits its usefulness relative to the other
> OSes for my workflow, OI has been the most stable and reliable of the
> bunch. In fact, I frequently use OI's out of the box integrated DE as a
> model for FreeBSD to follow.
> 
> I track issues with my OS installations on GitHub
> , and currently of the 19 open, only
> 1 is an OI issue . That issue
> is one I theoretically have the solution to, but haven't had time to
> implement (plus thanks to the magic of DHCP IP address lease renewal, it's
> easy to work around.)
> 
> That's as VirtualBox guests (usually current VirtualBox, except that right
>> now, 6.1.20 aborted VMs on startup, so I reverted to 6.1.18 and its
>> extension pack, which is fine), on a macOS host (currently Mojave).
>> 
>> root@openindiana:~# uname -a;cat /etc/release
>> SunOS openindiana 5.11 illumos-64b8fdd9a2 i86pc i386 i86pc
>> OpenIndiana Hipster 2020.10 (powered by illumos)
>>OpenIndiana Project, part of The Illumos Foundation (C) 2010-2020
>>Use is subject to license terms.
>>   Assembled 31 October 2020
>> 
>> 
 On Apr 22, 2021, at 11:05, John D Groenveld  wrote:
>>> 
>>> In message , "Nelson
>> H. F. Bee
>>> be" writes:
 Are openindiana developers on this list willing to post a list of VMs
 on which candidate ISO images have been tested before end users try to
 install them on physical machines?
>>> 
>>> Using bhyve(5), I was able to install via the
>>> OI-hipster-text-20210405.iso.
>>> 
>>> Using VirtualBox with the VBox configured in UEFI mode, I was able to
>>> install via that ISO image.
>>> 
>>> Unfortunately using the USB image, OI-hipster-text-20210405.usb,
>>> the VM boots into maintenance under both bhyve and VirtualBox.
>>> John
>>> groenv...@acm.org
>>> 
>>> 
>> 
>> --
>> eMail:  mailto:rlha...@smart.net
>> 
>> 
>> 
>> 


Re: [OpenIndiana-discuss] The kiss of death

2021-04-22 Thread John D Groenveld
In message , "Nelson H. F. Beebe" writes:
>There was one nit: at the point that it asks for a domain name, it
>cuts it off at 15 characters, which is way too small.  See

I reproduced this bug:
https://www.illumos.org/issues/13742>

John
groenv...@acm.org



Re: [OpenIndiana-discuss] The kiss of death

2021-04-22 Thread John D Groenveld
In message <20210413.13mmdxii004...@groenveld.us>, John D Groenveld writes:
>>  pfexec pkg install mate-install
>>  pfexec pkg uninstall mate-install
>
>This currently fails because thunderbird-lightning is now obsolete:

The mate_install obsolete 'require' dependency is filed as issue 13743:
https://www.illumos.org/issues/13743>

John
groenv...@acm.org



Re: [OpenIndiana-discuss] The kiss of death

2021-04-22 Thread Toomas Soome via openindiana-discuss


> On 23. Apr 2021, at 01:57, Reginald Beardsley via openindiana-discuss 
>  wrote:
> 
> 
> What do those mean? I have seen them numerous times even though the system 
> booted.
> 
> 
> 
> On Thursday, April 22, 2021, 05:46:30 PM CDT, Nelson H. F. Beebe 
>  wrote:
> 
> 
> ZFS: i/o error - all block copies unavailable


I/O error means exactly what it says: we encountered an error while doing I/O, 
in this case, while attempting to read the disk. An I/O error may come from 
BIOS INT13:

usr/src/boot/sys/boot/i386/libi386/biosdisk.c:

function bd_io(), whose message template is:

printf("%s%d: Read %d sector(s) from %lld to %p ...", ...);

You should see the disk name as disk0: etc.

or, in the UEFI case, usr/src/boot/sys/boot/efi/libefi/efipart.c:

function efipart_readwrite(), whose message template is:

printf("%s: rw=%d, blk=%ju size=%ju status=%lu\n", ...)

With the ZFS reader, we can also get an I/O error because of a failing checksum 
check. That is, we issued a read command to the disk and did not receive an 
error from that read command, but the data checksum does not match.

A checksum error may happen in these scenarios:

1. The data actually is corrupt (bitrot, partial write, ...).
2. The disk read did not return an error, but read wrong data or did not 
actually return any data. This is often the case when we hit the 2TB barrier 
and the BIOS INT13 implementation is buggy.

We can see this case with large disks: the setup used to boot fine, but as the 
rpool filled up, at some point in time (after a pkg update) the vital boot 
files (kernel or boot_archive) were written past the 2TB "line". The next boot 
then attempts to read the kernel or boot_archive and gets an error.

3. A bug in the loader's ZFS reader code. If for some reason the ZFS reader 
code instructs the disk layer to read the wrong blocks, the disk I/O is most 
likely OK, but the logical data is not. I cannot exclude this cause, but it is 
a very unlikely one.

When we do get an I/O error and the pool has redundancy (mirror, raidz, or zfs 
set copies=...), we attempt to read an alternate copy of the file system block; 
if all reads fail, we get the second half of this message (all block copies).

Unfortunately, the current ZFS reader code reports a generic EIO from its 
internal stack, and therefore this error message is a very generic one. I plan 
to fix this along with the decryption code addition.
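The 2TB figure mentioned here is plain 32-bit sector arithmetic with
512-byte sectors; a quick check:

```shell
# A 32-bit LBA addresses 2^32 sectors; at 512 bytes per sector that is
# exactly 2 TiB, which is where buggy INT13 implementations wrap around.
sectors=$((1 << 32))
bytes=$((sectors * 512))
echo "$bytes bytes"                                # 2199023255552
echo "$((bytes / 1024 / 1024 / 1024 / 1024)) TiB"  # 2
```

Boot-critical files written past that byte offset are unreadable through
a buggy INT13 path, even though the installed system itself was fine.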


> ZFS: can't read MOS of pool rpool
> 

This means we got I/O errors while opening the pool for reading: we found a 
pool label (there are 4 copies of the pool label on disk), but got an error 
while attempting to read the MOS (roughly the ZFS analogue of the superblock 
in UFS); without the MOS we cannot continue reading this pool.

If this is a virtualized system, the error appears after a reboot, and the 
disk is not over 2TB, one possible reason is the VM manager failing to write 
virtual disk blocks to stable storage. This happens when the VM manager does 
not implement cache flush for its disks and, for example, crashes. I have seen 
this myself with VMware Fusion.

rgds,
toomas



Re: [OpenIndiana-discuss] The kiss of death

2021-04-23 Thread Toomas Soome via openindiana-discuss


> On 22. Apr 2021, at 16:49, Reginald Beardsley  wrote:
> 
> 
> I don't understand "adapter". I'm using a Display Port on the K5000 feeding 
> an HP DP to DVI adapter to the DVI port of the 1600x1200 Princeton monitor.

I am sorry, I meant the video controller or card. In the UEFI world, the boot 
loader can access the console device either via the UEFI terminal emulator 
(SimpleTextOutput protocol interface) in "text" mode, or by using the Graphics 
Output Protocol (GOP). GOP provides Blt() for Block Transfer, to draw data on 
the console display, and may provide a linear framebuffer (address, size, and 
32-bit depth). It *may* be implemented on top of VGA/VESA BIOS interfaces, but 
such details are not exposed.

Once we start the OS kernel, the UEFI protocols are no longer accessible (UEFI 
Boot Services are disabled), and OS drivers must implement the access to the 
hardware. If you happen to have a gfx card without VGA/VESA support, then the 
X11 VESA driver will not work for you (obviously).

rgds,
toomas


> So I don't think VGA enters into this at the moment. It might once I get a 
> new KVM switch as the one I've been using died and wouldn't pass the USB 
> signals, just the video.
> 
> The Z840 boot order BIOS menu allows UEFI or legacy boots of all devices. 
> I've not seen any mention of graphics adapters in the BIOS screens, but it's 
> a large and complex BIOS so I may have missed it.
> 
> Reg
> 
> On Thursday, April 22, 2021, 08:35:25 AM CDT, Toomas Soome  
> wrote:
> 
> 
> VESA only can work when adapter does have VESA/VGA bios. With UEFI, there is 
> no requirement to have it. 
> 
> Sent from my iPhone
> 
> > On 22. Apr 2021, at 16:26, Reginald Beardsley via openindiana-discuss 
> >  > > wrote:
> > 
> > 
> > VESA *does* work with a UEFI boot. I had 2020.10 booting from a 4x 4 TB 
> > RAIDZ2 pool using the VESA driver before I tried to boot 2021.04_rc1.
> > 
> > 
> > 
> > On Thursday, April 22, 2021, 08:07:18 AM CDT, Gary Mills 
> > mailto:gary_mi...@fastmail.fm>> wrote:
> > 
> > 
> > This is a known broken configuration of OI. It cannot work. The VESA
> > driver is a fall-back driver in Xorg. It only runs if the other
> > drivers all fail. Unfortunately, the VESA driver does not work with
> > a UEFI boot. As long as the nvidia driver succeeds, it will be
> > selected by Xorg, and VESA will never be used. The Xorg log will
> > contain this information.
> > 



Re: [OpenIndiana-discuss] The kiss of death

2021-04-23 Thread Reginald Beardsley via openindiana-discuss
 Hmmm,

That suggests that I was perhaps too optimistic about using disks >2 TB despite 
having successfully installed 2020.10 on a single slice 5 TB disk.

I've got S11.4 installed and running again.

Reg



Re: [OpenIndiana-discuss] The kiss of death

2021-04-23 Thread Toomas Soome via openindiana-discuss


> On 23. Apr 2021, at 14:49, Reginald Beardsley  wrote:
> 
> Hmmm,
> 
> That suggests that I was perhaps too optimistic about using disks >2 TB 
> despite having successfully installed 2020.10 on a single slice 5 TB disk.
> 
> I've got S11.4 installed and running again.
> 
> Reg
> 

A simple test setup is to create an rpool partition past the 2TB line and see
if it is bootable. The 2TB issue seems to have reappeared with UEFI CSM (BIOS
emulation on UEFI systems); native UEFI boot likely does not have this limit.

I cannot remember the exact model numbers, but there was a user with this
issue, and the previous generation from the same vendor had no problems ;)
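The 2TB “line” in question falls out of the legacy BIOS INT13 interface
carrying a 32-bit sector number: with 512-byte sectors that addresses at
most 2 TiB. A quick arithmetic check:

```shell
# 32-bit LBA * 512-byte sectors = the 2TB barrier discussed above.
echo $(( 2**32 ))        # first unaddressable sector: 4294967296
echo $(( 2**32 * 512 ))  # bytes: 2199023255552, i.e. exactly 2 TiB
```

Any boot-critical block written at or beyond sector 2^32 is unreachable
through a buggy INT13 implementation, which is why a pool can boot fine for
months and then fail right after a pkg update relocates boot_archive.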

rgds,
toomas

> On Friday, April 23, 2021, 01:26:37 AM CDT, Toomas Soome  
> wrote:
> 
> 
> 
> 
>> On 23. Apr 2021, at 01:57, Reginald Beardsley via openindiana-discuss 
>> > > wrote:
>> 
>> 
>> What do those mean? I have seen them numerous times even though the system 
>> booted.
>> 
>> 
>> 
>> On Thursday, April 22, 2021, 05:46:30 PM CDT, Nelson H. F. Beebe 
>> mailto:be...@math.utah.edu>> wrote:
>> 
>> 
>> ZFS: i/o error - all block copies unavailable
> 
> 
> I/O error means what it is telling us - we did encounter error while doing 
> I/O, in this case, we were attempting to read disk. I/O error may come from 
> BIOS INT13:
> 
> usr/src/boot/sys/boot/i386/libi386/biosdisk.c:
> 
> function bd_io() message template is:
> 
> printf("%s%d: Read %d sector(s) from %lld to %p " … );
> 
> You should see disk name as disk0: etc.
> 
> or UEFI case in usr/src/boot/sys/boot/efi/libefi/efipart.c:
> 
> function efipart_readwrite() and the message template is:
> 
> printf("%s: rw=%d, blk=%ju size=%ju status=%lu\n" …)
> 
> 
> With ZFS reader, we can also get I/O error because of failing checksum check. 
> That is, we did issue read command to disk, we did not receive error from 
> that read command, but the data checksum does not match.
> 
> Checksum error may happen in scenarios:
> 
> 1. the data actually is corrupt (bitrot, partial write…)
> 2. the disk read did not return error, but did read wrong data or does not 
> actually return any data. This is often case when we meet 2TB barrier and 
> BIOS INT13 implementation is buggy.
> 
> We can see this case with large disks, the setup used to be booting fine, but 
> as the rpool has been filling up, at one point in time (after pkg update), 
> the vital boot files (kernel or boot_archive) are written past 2TB “line”. 
> Then next boot will attempt to read kernel or boot_archive and will get error.
> 
> 3. BUG in loader ZFS reader code. If by some reason the zfs reader code will 
> instruct disk layer to read wrong blocks, the disk IO is most likely OK, but 
> the logical data is not. I can not exclude this cause, but it is very 
> unlikely case.
> 
> When we do get IO error, and the pool has redundancy (mirror, raidz or zfs 
> set copies=..), we attempt to read alternate copy of the file system block, 
> if all reas fail, we get the second half of this message (all block copies).
> 
> Unfortunately, the current ZFS reader code does report generic EIO from its 
> internal stack and therefore this error message is very generic one. I do 
> plan to fix this with decryption code addition.
> 
> 
>> ZFS: can't read MOS of pool rpool
>> 
>> 
> 
> This means, we got IO errors while we are opening pool for reading, we got 
> pool label (there are 4 copies of pool label on disk), but we have error 
> while attempting to read MOS (this is sort of like superblock in UFS), 
> without MOS we can not continue reading this pool.
> 
> If this is a virtualized system and the error does appear after reboot and 
> the disk is not over 2TB, one possible reason is the VM manager failing to 
> write down virtual disk blocks. This is the case when the VM manager is not 
> implementing the cache flush for disks and, for example, is crashing. I have 
> seen this myself with VMware Fusion.
> 
> rgds,
> toomas
> 



Re: [OpenIndiana-discuss] The kiss of death

2021-04-23 Thread Nelson H. F. Beebe
This progress report is 2/3 positive.  

I reported yesterday on my problems with a frozen mouse cursor in the
GUI install from

http://dlc.openindiana.org/isos/hipster/test/OI-hipster-gui-20210405.iso

on CentOS 7.9.2009 with virt-manager 1.5.0 and qemu-system-x86_64
2.0.0.

Later that day, I repeated that experiment on Ubuntu 20.04, which has
virt-manager 2.2.1 and qemu-system-x86_64 4.2.1, considerably newer
than the versions available on CentOS 7.9.2009, but found that the
installation would not proceed because of missing audio and console
drivers.  After my posting, I turned to other unrelated projects, had
a night's sleep, and woke up with an idea for OpenIndiana.

The virt-manager console for configuring a virtual machine has only
one driver for each of mouse and keyboard, but it has THREE for the
video controller: QXL (default), VGA, and Virtio.  It has been rare to
have to use other than the default video type, so I normally don't
even think about changing the default.

For OpenIndiana on Ubuntu 20.04, I shut down the stuck installer,
powered off the VM, changed to VGA, rebooted, and found that the
installer would then not produce a GUI display, but only a bare
console.

I powered off again, changed to Virtio, and this time, the installer
worked and the installation completed!

It then asked me to reboot, which I did, and it came up normally.  I
immediately shut it down, took a virt-manager snapshot, brought it up,
ran "pkg install build-essential", powered off, took another snapshot,
powered on, installed more packages and user accounts, powered off,
took yet another snapshot, powered on, and I now have a working system
on which I'm attempting to build TeX Live 2021 (see my reports for
other systems at

http://www.math.utah.edu/pub/texlive-utah/

).

Later this morning, I returned to my campus office in order to meet
with a colleague who is a network and ZFS expert with long experience
in the Solaris family.  We tried to find a way to get past the boot
failure on CentOS 7.9.2009, which I reported yesterday said

Loading unix...
Loading /platform/i86pc/amd64/boot_archive...
ZFS: i/o error - all block copies unavailable
...
No rootfs module provided, aborting

That happens early in the boot process, well before there is any
chance to try to revert to an earlier boot environment.  Despite Web
searching last night and today, and finding others have reported a
similar problem with ZFS on Solaris and FreeBSD, none of their
proposed fixes worked.  So, a new OpenIndiana on CentOS 7.9.2009
remains out of reach.

Finally, I tried the GUI installer on VirtualBox on a small memory
Ubuntu 20.04 system.  There, the installer worked flawlessly,
including a following "pkg install build-essential" and "pkg update"
and subsequent reboot.  No configuration changes whatever were needed.

So, I now have 2 working OpenIndiana Hipster 2020.10 systems out of 4
attempts, and I can now get on with actually putting these VMs to
doing useful work.

It might be useful to add a README file to the download site that
summarizes the lessons:

VirtualBox:              20210405 installer works perfectly
newer virt-manager/QEMU: change video to Virtio from default QXL
older virt-manager/QEMU: probably unworkable

along with notes for other physical systems where people have gotten
successful installs, including instructions about what list members
have learned about the various NVIDIA driver choices.
 
Next week, when I'm back in my campus office, I may try to install
OpenIndiana with OVIRT/Qemu, our third virtualization platform: it is
newer than the virt-manager on CentOS 7, and offers the nice feature
of live VM migration across our multiple physical VM servers, but is
considerably more painful to manage and configure VMs for.

If OpenIndiana developers remember to post notices about new ISO
images added at

http://dlc.openindiana.org/isos/hipster/test/

then I'm certainly willing to try new VM installations; our experience
this week should speed up their creation.



Some background and comments of support:

Our use of SunOS, and later Solaris, began in 1987, and we were
predominantly based on Sun hardware until Sun was bought by Oracle
in 2010. After that, Oracle hardware pricing made their products
unaffordable for our always meager academic budgets.  Since then,
our backbone servers have been mostly Dell, IBM, and HP hardware
running Red Hat, CentOS, Ubuntu, and ArchLinux, with hundreds of
VMs in our test lab running other operating systems.

Over our 33+ years of use, SunOS/Solaris has always been rock
solid, and Sun hardware (MC68K, SPARC, x86, and x86_64) was too.

We now have about 120 physical and virtual systems using ZFS,
including multiple Linux distributions, FreeBSD, soon NetBSD, and
of course all of the Solaris family O/Ses.  In 15+ years of ZFS

Re: [OpenIndiana-discuss] The kiss of death

2021-04-23 Thread Reginald Beardsley via openindiana-discuss
 I should like to say that from my perspective as the person who started this 
thread I consider your results 100% successful.

My goal was to increase the involvement of the user community in the process of 
creating a new release. It is a tremendous amount of work and many of us such 
as myself have been far too passive.

My profound thanks for your efforts.
Reg



Re: [OpenIndiana-discuss] The kiss of death

2021-04-24 Thread John D Groenveld
In message , "Nelson H. F. 
Beebe" writes:
>in the Solaris family.  We tried to find a way to get past the boot
>failure on CentOS 7.9.2009, which I reported yesterday said
>
>Loading unix...
>Loading /platform/i86pc/amd64/boot_archive...
>ZFS: i/o error - all block copies unavailable
>...
>No rootfs module provided, aborting
>
>That happens early in the boot process, well before there is any
>chance to try to revert to an earlier boot environment.  Despite Web

That is after the boot loader menu.
The second page of loader includes an option to boot from alt BEs.

John
groenv...@acm.org



Re: [OpenIndiana-discuss] The kiss of death

2021-04-24 Thread Nelson H. F. Beebe
[This is temporarily off the openindiana-discuss mailing list]

You wrote on Sat, 24 Apr 2021 11:58:06 -0400

>> That is after the boot loader menu.
>> The second page of loader includes an option to boot from alt BEs.

If I use the virt-manager menu to send Ctl-Alt-Del, I get this output:

Press ESC for boot menu.

I quickly press ESC, producing:

1. AHCI/0: QEMU HARDDISK ...
2. DVD/CD ...
3. Legacy option rom
4. iPXE ...

I press 1 and get

Booting from Hard Disk...
BIOS drive C: is disk0
ZFS: i/o error - all block copies unavailable
ZFS: can't read MOS of pool rpool

Can't find /boot/loader

Can't find /boot/zfsloader

illumos/x86 boot
Default: /boot/loader
boot:

At that point, I have no idea how to get to a ``second page of loader
options.''

The boot prompt is opaque: it only prompts again when I try input like
"?", "help", "boot", ESCape, F1, F2, ..., F12, PageUp, PageDown, 

I find it often aggravating that BIOS code is so low level and
unusable without a separate manual.

---
- Nelson H. F. Beebe                 Tel: +1 801 581 5254                  -
- University of Utah                 FAX: +1 801 581 4148                  -
- Department of Mathematics, 110 LCB Internet e-mail: be...@math.utah.edu  -
- 155 S 1400 E RM 233                be...@acm.org  be...@computer.org     -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
---



Re: [OpenIndiana-discuss] The kiss of death

2021-04-24 Thread Toomas Soome via openindiana-discuss


> On 24. Apr 2021, at 22:52, Nelson H. F. Beebe  wrote:
> 
> [This is temporarily off the openindiana-discuss mailing list]
> 
> You wrote on Sat, 24 Apr 2021 11:58:06 -0400
> 
>>> That is after the boot loader menu.
>>> The second page of loader includes an option to boot from alt BEs.
> 
> If I use the virt-manager menu to send Ctl-Alt-Del, I get this output:
> 
>   Press ESC for boot menu.
> 
> I quickly press ESC, producing:
> 
>   1. AHCI/0: QEMU HARDDISK ...
>   2. DVD/CD ...
>   3. Legacy option rom
>   4. iPXE ...
> 
> I press 1 and get
> 
>   Booting from Hard Disk...
>   BIOS drive C: is disk0
>   ZFS: i/o error - all block copies unavailable
>   ZFS: can't read MOS of pool rpool
> 
>   Can't find /boot/loader
> 
>   Can't find /boot/zfsloader
> 
>   illumos/x86 boot
>   Default: /boot/loader
>   boot:
> 
> At that point, I no idea how to get to a ``second page of loader
> options.''
> 


At that point, you can’t. You have reached the boot loader stage1 program
(gptzfsboot) and its prompt (the boot: prompt). It is failing to read the zfs
pool in order to load and start the next stage (/boot/loader). At this point
only simple diagnostics are possible: you can enter "status" to get a list of
disks, and ?path (like ?/) to get a directory listing for that path. You can
also enter an alternate file name to load, and an alternate disk device to
load from. See also gptzfsboot(5).

I cannot guess what triggers this situation, but one option is to start from
alternate media (USB/ISO) and check what is going on there - what zpool scrub
reports, etc.
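A minimal sketch of that recovery path, assuming the pool is named rpool
(commands are generic, not quoted from the thread):

```shell
# Boot the USB/ISO live image first, then from a root shell:
zpool import -f -R /mnt rpool   # import under an alternate root
zpool scrub rpool               # walk and verify every block checksum
zpool status -v rpool           # watch scrub progress / any listed errors
zpool export rpool              # clean export before rebooting from disk
```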

rgds,
toomas



Re: [OpenIndiana-discuss] The kiss of death

2021-04-24 Thread Nelson H. F. Beebe
Thanks for the additional suggestions to get the CentOS-7 based
OpenIndiana to boot.  Here is what I get:

boot: status
disk device:
disk0:   BIOS driver C (167772160 X 512)
  disk0s1: Solaris 2 79GB
disk0s1a: root   79GB
disk0s1i: root 8032KB

illumos/x86 boot
Default: /boot/loader
boot: ?/

illumos/x86 boot
Default: /boot/loader

I propose that we drop this one for now, unless someone has deeper
insight into how to recover from this failed installation.

There is more information on the boot loader at

https://illumos.org/man/5/loader

but none of the few commands documented there worked for me.

Next week, I'll retry an OpenIndiana installation with the OVirt
alternative on CentOS 7; I may also play with different disk types on
virt-manager.  The disk is currently SATA, but IDE, SCSI, USB, and
VirtIO are also possible (though USB isn't a candidate).

I can also report that the big package update (300+) that I ran today
on the OpenIndiana system on Ubuntu 20.04 was problem free, and the
system has been rebooted twice since, to make a virt-manager snapshot,
as well as a ZFS snapshot.  On this system, as on our many other
ZFS-based O/Ses, I have cron jobs that take daily snapshots, and the
VM disk images themselves are backed-up nightly to LTO-8 tape, and to
a remote datacenter.
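A minimal sketch of such a daily-snapshot cron job (dataset name, schedule,
and 30-day retention are placeholders, not Nelson's actual setup):

```shell
#!/bin/sh
# Take a dated recursive snapshot of the root pool.
SNAP="rpool@daily-$(date +%Y%m%d)"
zfs snapshot -r "$SNAP"

# Retention: keep the 30 newest daily snapshots, destroy older ones.
# (head -n -30 is a GNU extension; adjust on stock Solaris tools.)
zfs list -H -t snapshot -o name -s creation |
    grep '^rpool@daily-' |
    head -n -30 |
    while read -r s; do
        zfs destroy -r "$s"
    done
```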



Re: [OpenIndiana-discuss] The kiss of death

2021-04-24 Thread Toomas Soome via openindiana-discuss



> On 24. Apr 2021, at 23:56, Nelson H. F. Beebe  wrote:
> 
> Thanks for the additional suggestions to get the CentOS-7 based
> OpenIndiana to boot.  Here is what I get:
> 
>   boot: status
>   disk device:
>   disk0:   BIOS driver C (167772160 X 512)
> disk0s1: Solaris 2 79GB
>   disk0s1a: root   79GB
>   disk0s1i: root 8032KB


Why are there two root slices? It should not disturb us, but it is still
weird. Anyhow, can you mail me the full partition table - format -> verify,
or partition -> print for the whole disk?

Since this is a VM with no dual-boot, I recommend doing a whole-disk setup
(that is, with GPT prepared automatically). But for now, I wonder how your
current slices are defined :)

rgds,
toomas



Re: [OpenIndiana-discuss] The kiss of death

2021-04-24 Thread Nelson H. F. Beebe
Toomas Soome  responds today:

>> ...
>> > On 24. Apr 2021, at 23:56, Nelson H. F. Beebe  wrote:
>> >
>> > Thanks for the additional suggestions to get the CentOS-7 based
>> > OpenIndiana to boot.  Here is what I get:
>> >
>> >   boot: status
>> >   disk device:
>> >   disk0:   BIOS driver C (167772160 X 512)
>> > disk0s1: Solaris 2 79GB
>> >   disk0s1a: root   79GB
>> >   disk0s1i: root 8032KB
>> 
>> Why are there two root slices? It should not disturb us, but it is still
>> weird. Anyhow, can you mail me the full partition table - format -> verify,
>> or partition -> print for the whole disk?
>> 
>> Since this is a VM with no dual-boot, I recommend doing a whole-disk setup
>> (that is, with GPT prepared automatically). But for now, I wonder how your
>> current slices are defined :)
>> 
>> ...

I booted the failing VM from the CD-ROM, ran "ssh-keygen -A", edited
/etc/ssh/sshd_config to change PermitRootLogin from no to yes, then
ran "/usr/lib/ssh/sshd &".  That let me login remotely from a terminal
window from which I can do cut-and-paste, and I could then do

# zpool import -R /mnt rpool

# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
   0. c4t0d0 
  /pci@0,0/pci1af4,1100@6/disk@0,0
Specify disk (enter its number): 0

selecting c4t0d0
[disk formatted]
/dev/dsk/c4t0d0s0 is part of active ZFS pool rpool. Please see 
zpool(1M).

FORMAT MENU:
disk   - select a disk
type   - select (define) a disk type
partition  - select (define) a partition table
current- describe the current disk
format - format and analyze the disk
fdisk  - run the fdisk program
repair - repair a defective sector
label  - write label to the disk
analyze- surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save   - save new disk/partition definitions
inquiry- show vendor, product and revision
volname- set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format> verify
Warning: Primary label on disk appears to be different from
current label.

Warning: Check the current partitioning and 'label' the disk or use the
 'backup' command.

Primary label contents:

Volume name = <>
ascii name  = 
pcyl= 10442
ncyl= 10440
acyl=2
bcyl=0
nhead   =  255
nsect   =   63
Part  TagFlag Cylinders SizeBlocks
  0   rootwm   1 - 10439   79.97GB(10439/0/0) 
167702535
  1 unassignedwm   00 (0/0/0)   
  0
  2 backupwu   0 - 10439   79.97GB(10440/0/0) 
167718600
  3 unassignedwm   00 (0/0/0)   
  0
  4 unassignedwm   00 (0/0/0)   
  0
  5 unassignedwm   00 (0/0/0)   
  0
  6 unassignedwm   00 (0/0/0)   
  0
  7 unassignedwm   00 (0/0/0)   
  0
  8   bootwu   0 - 07.84MB(1/0/0) 
16065
  9 unassignedwm   00 (0/0/0)   
  0

I have to leave soon for the weekend, so likely cannot respond before Monday.



Re: [OpenIndiana-discuss] The kiss of death

2021-04-24 Thread Toomas Soome via openindiana-discuss



> On 25. Apr 2021, at 00:53, Nelson H. F. Beebe  wrote:
> 
> Toomas Soome mailto:tso...@me.com>> responds today:
> 
>>> ...
 On 24. Apr 2021, at 23:56, Nelson H. F. Beebe  wrote:
 
 Thanks for the additional suggestions to get the CentOS-7 based
 OpenIndiana to boot.  Here is what I get:
 
  boot: status
  disk device:
  disk0:   BIOS driver C (167772160 X 512)
disk0s1: Solaris 2 79GB
  disk0s1a: root   79GB
  disk0s1i: root 8032KB
>>> 
>>> Why are there two root slices? It should not disturb us, but it is still
>>> weird. Anyhow, can you mail me the full partition table - format ->
>>> verify, or partition -> print for the whole disk?
>>> 
>>> Since this is a VM with no dual-boot, I recommend doing a whole-disk
>>> setup (that is, with GPT prepared automatically). But for now, I wonder
>>> how your current slices are defined :)
>>> 
>>> ...
> 
> I booted the failing VM from the CD-ROM, ran "ssh-keygen -A", edited
> /etc/ssh/sshd_config to change PermitRootLogin from no to yes, then
> ran "/usr/lib/ssh/sshd &".  That let me login remotely from a terminal
> window from which I can do cut-and-paste, and I could then do
> 
>   # zpool import -R /mnt rpool
> 
>   # format
>   Searching for disks...done
> 
>   AVAILABLE DISK SELECTIONS:
>  0. c4t0d0 
> /pci@0,0/pci1af4,1100@6/disk@0,0
>   Specify disk (enter its number): 0
> 
>   selecting c4t0d0
>   [disk formatted]
>   /dev/dsk/c4t0d0s0 is part of active ZFS pool rpool. Please see 
> zpool(1M).
> 
>   FORMAT MENU:
>   disk   - select a disk
>   type   - select (define) a disk type
>   partition  - select (define) a partition table
>   current- describe the current disk
>   format - format and analyze the disk
>   fdisk  - run the fdisk program
>   repair - repair a defective sector
>   label  - write label to the disk
>   analyze- surface analysis
>   defect - defect list management
>   backup - search for backup labels
>   verify - read and display labels
>   save   - save new disk/partition definitions
>   inquiry- show vendor, product and revision
>   volname- set 8-character volume name
>   !<cmd> - execute <cmd>, then return
>   quit
>   format> verify
>   Warning: Primary label on disk appears to be different from
>   current label.
> 
>   Warning: Check the current partitioning and 'label' the disk or use the
>'backup' command.
> 
>   Primary label contents:
> 
>   Volume name = <>
>   ascii name  = 
>   pcyl= 10442
>   ncyl= 10440
>   acyl=2
>   bcyl=0
>   nhead   =  255
>   nsect   =   63
>   Part  TagFlag Cylinders SizeBlocks
> 0   rootwm   1 - 10439   79.97GB(10439/0/0) 
> 167702535
> 1 unassignedwm   00 (0/0/0)   
>   0
> 2 backupwu   0 - 10439   79.97GB(10440/0/0) 
> 167718600
> 3 unassignedwm   00 (0/0/0)   
>   0
> 4 unassignedwm   00 (0/0/0)   
>   0
> 5 unassignedwm   00 (0/0/0)   
>   0
> 6 unassignedwm   00 (0/0/0)   
>   0
> 7 unassignedwm   00 (0/0/0)   
>   0
> 8   bootwu   0 - 07.84MB(1/0/0) 
> 16065
> 9 unassignedwm   00 (0/0/0)   
>   0

*this* label does make sense. That warning above - what is it about? What
does partition -> print show?

rgds,
toomas



Re: [OpenIndiana-discuss] The kiss of death

2021-04-24 Thread Reginald Beardsley via openindiana-discuss
 FWIW I saw the messages that Nelson posted at the start of this discussion on 
systems that booted. However, they very likely had relic zfs labels. I've had 
mysterious "corrupted pools" appear which I was only able to fix by using dd(1) 
to wipe out the old label.

I've come to the conclusion that zfs is saving information in places I don't 
know about and which may or may not get cleared by "zpool labelclear".

Once I recover from the ordeal of this past week I'll go back and conduct some 
experiments such as creating a slice, creating a pool and then modifying the 
slice and creating a new pool to see if I can sort out what is happening.
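The label locations explain why a plain wipe of the start of a slice is not
always enough: ZFS writes two label copies at the front of the device and two
in the last 512 KiB, so a stale pair at the end can survive repartitioning. A
destructive sketch of the cleanup Reg describes (device names and the VTOC
field parsing are placeholders; triple-check before running):

```shell
# Preferred: let zpool clear its own labels.
zpool labelclear -f /dev/dsk/c4t0d0s0

# Belt and braces with dd: zero the first and last 512 KiB of the slice.
dd if=/dev/zero of=/dev/rdsk/c4t0d0s0 bs=512k count=1
# Sector count of slice 0 from the VTOC (field position is approximate):
SECTORS=$(prtvtoc /dev/rdsk/c4t0d0s0 | awk '$1 == "0" { print $5 }')
dd if=/dev/zero of=/dev/rdsk/c4t0d0s0 bs=512 \
   seek=$(( SECTORS - 1024 )) count=1024
```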

I am very sorry that some of the developers took offense to my posts, but I am 
very pleased that there is more engagement by the users in testing. "It is 
meet, right and our bounden duty..."

Reg


 On Saturday, April 24, 2021, 05:16:59 PM CDT, Toomas Soome via 
openindiana-discuss  wrote:  
 
 

> On 25. Apr 2021, at 00:53, Nelson H. F. Beebe  wrote:
> 
> Toomas Soome mailto:tso...@me.com>> responds today:
> 
>>> ...
 On 24. Apr 2021, at 23:56, Nelson H. F. Beebe  wrote:
 
 Thanks for the additional suggestions to get the CentOS-7 based
 OpenIndiana to boot.  Here is what I get:
 
      boot: status
      disk device:
          disk0:  BIOS driver C (167772160 X 512)
            disk0s1: Solaris 2            79GB
              disk0s1a: root              79GB
              disk0s1i: root            8032KB
>>> 
>>> Why there are two root slices? it should not disturb us but still weird. 
>>> anyhow, can you mail
>>> me full partition table, format -> verify or partition -> print.ole disk
>>> 
>>> Since this is a VM and not dual-boot, I recommend doing only a whole-disk 
>>> setup (that is, with GPT prepared automatically). But for now, I wonder how 
>>> your current slices are defined:)
>>> 
>>> ...
> 
> I booted the failing VM from the CD-ROM, ran "ssh-keygen -A", edited
> /etc/ssh/sshd_config to change PermitRootLogin from no to yes, then
> ran "/usr/lib/ssh/sshd &".  That let me login remotely from a terminal
> window from which I can do cut-and-paste, and I could then do
> 
>     # zpool import -R /mnt rpool
> 
>     # format
>     Searching for disks...done
> 
>     AVAILABLE DISK SELECTIONS:
>           0. c4t0d0 
>           /pci@0,0/pci1af4,1100@6/disk@0,0
>     Specify disk (enter its number): 0
> 
>     selecting c4t0d0
>     [disk formatted]
>     /dev/dsk/c4t0d0s0 is part of active ZFS pool rpool. Please see zpool(1M).
> 
>     FORMAT MENU:
>         disk      - select a disk
>         type      - select (define) a disk type
>         partition  - select (define) a partition table
>         current    - describe the current disk
>         format    - format and analyze the disk
>         fdisk      - run the fdisk program
>         repair    - repair a defective sector
>         label      - write label to the disk
>         analyze    - surface analysis
>         defect    - defect list management
>         backup    - search for backup labels
>         verify    - read and display labels
>         save      - save new disk/partition definitions
>         inquiry    - show vendor, product and revision
>         volname    - set 8-character volume name
>         !<cmd>  - execute <cmd>, then return
>         quit
>     format> verify
>     Warning: Primary label on disk appears to be different from
>     current label.
> 
>     Warning: Check the current partitioning and 'label' the disk or use the
>         'backup' command.
> 
>     Primary label contents:
> 
>     Volume name = <        >
>     ascii name  = 
>     pcyl        = 10442
>     ncyl        = 10440
>     acyl        =    2
>     bcyl        =    0
>     nhead      =  255
>     nsect      =  63
>     Part      Tag    Flag    Cylinders        Size            Blocks
>       0      root    wm      1 - 10439      79.97GB    (10439/0/0) 167702535
>       1 unassigned    wm      0                0        (0/0/0)            0
>       2    backup    wu      0 - 10439      79.97GB    (10440/0/0) 167718600
>       3 unassigned    wm      0                0        (0/0/0)            0
>       4 unassigned    wm      0                0        (0/0/0)            0
>       5 unassigned    wm      0                0        (0/0/0)            0
>       6 unassigned    wm      0                0        (0/0/0)            0
>       7 unassigned    wm      0                0        (0/0/0)            0
>       8      boot    wu      0 -    0        7.84MB    (1/0/0)        16065
>       9 unassigned    wm      0                0        (0/0/0)            0

*this* label does make sense. As for that warning above, what is it about? What 
does partition -> print say?

rgds,
toomas
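Nelson's live-media rescue steps quoted above (regenerate host keys, permit root login, start sshd) also work as a small script. A sketch, assuming OpenIndiana's paths (/etc/ssh/sshd_config, /usr/lib/ssh/sshd); with no argument it rehearses the config edit on a scratch copy rather than the real file:

```shell
#!/bin/sh
# Rescue sketch: the three steps from the transcript.  The key-generation
# and daemon-start lines are only printed here (they need the live system);
# the config edit runs for real, on a scratch copy unless a path is given.
CONF=${1:-/tmp/sshd_config.rehearsal}
[ -f "$CONF" ] || printf 'PermitRootLogin no\n' > "$CONF"

echo "ssh-keygen -A"                 # step 1: create any missing host keys
sed 's/^PermitRootLogin .*/PermitRootLogin yes/' "$CONF" > "$CONF.new" &&
  mv "$CONF.new" "$CONF"             # step 2: permit root login (POSIX sed)
echo "/usr/lib/ssh/sshd &"           # step 3: start sshd in the background
```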


> 
> I have to leave soon for the weekend, so likely cannot respond before Monday.
> 
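As a cross-check on the transcript above: the block counts in that label are plain CHS arithmetic, blocks = cylinders x nhead x nsect, which is one quick way to confirm a label is self-consistent:

```shell
#!/bin/sh
# Verify the slice sizes in the format(1m) verify output: with nhead=255
# and nsect=63 there are 16065 sectors per cylinder.
SPC=$((255 * 63))                          # sectors per cylinder

echo "slice 0: $((10439 * SPC)) blocks"    # cyl 1-10439 -> 167702535
echo "slice 2: $((10440 * SPC)) blocks"    # cyl 0-10439 -> 167718600
echo "slice 8: $((1 * SPC)) blocks"        # 1 cyl; * 512 bytes = 8225280, ~7.84MB
```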

Re: [OpenIndiana-discuss] The kiss of death

2021-04-24 Thread Chris

On 2021-04-24 16:02, Reginald Beardsley via openindiana-discuss wrote:
FWIW I saw the messages that Nelson posted at the start of this discussion 
on
systems that booted. However, they very likely had relic zfs labels. I've 
had
mysterious "corrupted pools" appear which I was only able to fix by using 
dd(1) to

wipe out the old label.
If it's anything like gpt, you're right. With gpt it's always best to perform a
gpart destroy prior to writing a new disk. If you don't delete the indices first,
then a gpart destroy -F is often required. As memory serves, zfs also has the
destroy option, which will clear the tables so you don't end up with "foreign"
un-referenced labels.

--Chris



Re: [OpenIndiana-discuss] The kiss of death

2021-04-25 Thread Volker A. Brandt
Hi Reg!


>  FWIW I saw the messages that Nelson posted at the start of this discussion 
> on systems that booted. However, they very likely had relic zfs labels. I've 
> had mysterious "corrupted pools" appear which I was only able to fix by using 
> dd(1) to wipe out the old label.

Recent-ish illumos ZFS versions (and presumably OpenZFS, too) have

  zpool labelclear <device>


Regards -- Volker
-- 

Volker A. Brandt        Consulting and Support for Solaris-based Systems
Brandt & Brandt Computer GmbH                  WWW: http://www.bb-c.de/
Am Wiesenpfad 6, 53340 Meckenheim, GERMANY            Email: v...@bb-c.de
Handelsregister: Amtsgericht Bonn, HRB 10513              Schuhgröße: 46
Geschäftsführer: Rainer J.H. Brandt und Volker A. Brandt

"When logic and proportion have fallen sloppy dead"

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] The kiss of death

2021-04-25 Thread Reginald Beardsley via openindiana-discuss
 
I have been using labelclear which is a huge improvement. In the past I had to 
write a script to do it using dd. So far I've been unable to find a good 
diagram showing the various disk labels, boot blocks, etc. The best 
documentation I've found to date is the ZFS chapter in McKusick et al. I've not 
yet ordered the "IT Mastery" ZFS books but shall.

IIRC I had done an install into an existing pool created by hand. When 
Groenveld commented about the ability to create a RAIDZ2 pool in the text 
installer I tried using that. However, whereas I had created a single s0 slice, 
the text installer created a 250 MB s0 slice with the rest of the disk in s1. I 
think that the error message was the result of a label in the now much smaller 
and unused s0 slice.

This is somewhat speculative given the staggering number of installs I've done 
trying to collect information about what prevented 2021.04-rc1 from booting to 
the desktop.

Maddeningly, format(1m) will sometimes let me modify the label to go back to a 
single s0 slice and sometimes it won't. I've been unable to discern a pattern, 
so that will have to wait until I have time to run it under dbx.
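When format(1m) refuses an interactive label change, the VTOC can often be captured, edited, and written back non-interactively with prtvtoc(1m) and fmthard(1m) instead. A dry-run sketch that only prints the commands (c4t0d0 is borrowed from the transcripts earlier in the thread; adapt before use):

```shell
#!/bin/sh
# Dry run: print, do not execute, a save/edit/restore cycle for a VTOC.
DISK=c4t0d0
CMDS="prtvtoc /dev/rdsk/${DISK}s2 > /tmp/${DISK}.vtoc   # capture current label
vi /tmp/${DISK}.vtoc                                    # e.g. merge s0 and s1
fmthard -s /tmp/${DISK}.vtoc /dev/rdsk/${DISK}s2        # write the label back"
echo "$CMDS"
```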

Reg