Suppose you have a failing disk controller. In a RAIDZ1 context it could
corrupt two or three drives in a three-drive pool, and you would lose the
pool. By keeping the data pool in a separate slice from the root pool, you can
take it offline and reduce the risk to its contents.
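A minimal sketch of that recovery posture, assuming the data pool is named "export" and using placeholder device names (neither is stated in the thread):

```shell
# Hypothetical: 'export' is the data pool kept separate from rpool.
# Detach it before working on the suspect controller:
zpool export export

# After moving the disks to a known-good controller, bring it back:
zpool import export

# Walk every block and verify checksums to confirm nothing was
# silently corrupted while the controller was failing:
zpool scrub export
zpool status -v export
```

Because the root pool lives in its own slice, the system stays bootable while the data pool is exported.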
That's something I just went through.
At the time I created the two-slice arrangement, it was not possible to boot
from a RAIDZ array. Over the course of ten years I have found that it is very
robust. It has survived several disk failures and, more recently, a
motherboard failure without loss of data on my S10_u8 system. That is not to
be sneezed at.
On Fri, 9 Apr 2021 at 16:15, Reginald Beardsley via
openindiana-discuss wrote:
> Here's what the system reports after the initial reboot. The console log
> while I fumbled around with "format -e" and "I'm sorry. I can't do that."
> was too crazy to bother cleaning up. I'd have to do it all over.
Sadly rather messy. I was forced to use a gparted live CD to relabel the disk
and got 128 partitions instead of the 9 the Sun EFI label creates.
Here's what the system reports after the initial reboot. The console log while
I fumbled around with "format -e" and "I'm sorry. I can't do that." was too
crazy to bother cleaning up.
This is my standard install style using rpool on s0 and export on s1. However,
I feel quite certain that one could install with a single pool if desired.
What will not work IIRC is to have more than one pool on the target disk at
install time.
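A sketch of how that layout might be built by hand after install, assuming a pool named "export" on slice 1 of three disks (the pool name matches the thread; the device names are placeholders):

```shell
# Hypothetical: the installer put rpool on s0 of each disk; a separate
# RAIDZ1 data pool goes on s1. Adjust controller/target numbers to
# whatever 'format' reports on your system.
zpool create export raidz1 c0t0d0s1 c0t1d0s1 c0t2d0s1

# Confirm the layout:
zpool status export
```

This matches the constraint above: only one pool exists on the target disk at install time, and the second pool on s1 is created afterwards.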
The process was as follows:
Boot GUI Live Image
On April 8, 2021 1:12:42 PM UTC, Judah Richardson
wrote:
>It looks to me like you're trying to SSH from a Debian host. My guess is
>that everything after $ssh jason@ is resolving to the host machine/OS,
>which is then rejecting the connection. I would guess the issue here lies
>in either Debian