>In 2001, Dr. Nemeth retired to her sailboat and sailed the
>world - a long, long way from her CU-Boulder professorship.
>In June 2013, she and the crew of the vintage yacht Niña
>were lost in a huge storm in the Tasman Sea between New
>Zealand and Australia. Sigh.
I knew Evi personally, we
On Fri, 13 May 2022, Ben Koenig wrote:
As a nod to Dr. Nemeth and the people who made all this possible, I'll
uninstall NetworkManager this weekend and configure a static IP for eth0
in /etc/rc.d/rc.inet1.conf.
Ben,
Or, '/etc/rc.d/rc.networkmanager stop' followed by 'chmod -x
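For anyone following along at home, a static eth0 on Slackware comes down to a few variables in /etc/rc.d/rc.inet1.conf. The addresses below are placeholders, not anything from this thread:

```sh
# /etc/rc.d/rc.inet1.conf -- interface 0 is eth0
IPADDR[0]="192.168.1.50"    # placeholder address
NETMASK[0]="255.255.255.0"
USE_DHCP[0]=""              # empty = static addressing
GATEWAY="192.168.1.1"       # placeholder gateway
```

Then /etc/rc.d/rc.inet1 brings the interface up at boot with those values.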
If you wanna host it, great. Personally, I don’t. - Robert
On Fri, May 13, 2022 at 3:18 PM Ben Koenig
wrote:
> Github??? Do I need to write a howto for hosting a repo with cgit?
>
> -Ben
>
>
> --- Original Message ---
> On Friday, May 13th, 2022 at 2:05 PM, Robert Citek
> wrote:
>
>
>
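Not a full howto, but the cgit side of hosting is mostly one config file plus a web server that runs cgit.cgi via CGI/FastCGI. A minimal /etc/cgitrc sketch, with placeholder paths and titles:

```
# /etc/cgitrc -- minimal setup (paths and URLs are placeholders)
root-title=Example repositories
root-desc=a hypothetical cgit instance
cache-size=1000
clone-prefix=https://git.example.org
# keep scan-path last: settings after it don't apply to scanned repos
scan-path=/srv/git
```

With scan-path set, cgit auto-discovers any bare repositories under that directory.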
On Fri, May 13, 2022 at 01:38:32PM -0700, Keith Lofstrom wrote:
> I have /many/ verbose and vaguely-useful computer books on
I pondered that. The fat books that transitioned me (*) from
Windoze to BSDI, then BSDI to Redhat Linux, were Evi Nemeth's
"Unix System Administration Handbook", editions
I have /many/ verbose and vaguely-useful computer books on
my shelves - the authors run on for pages about a few
subjects, rather than provide well indexed terse paragraphs
about MANY subjects.
Over the years, the PLUG list has accumulated some nonsense
and MUCH wisdom. I can imagine that
I've forced this hundreds of times over the years working for a hosting
provider where we used software RAID / LVM on almost all of our
servers.
I've found the following commands quite helpful in a situation like the
one you describe. I'd usually be using a CentOS rescue environment
mainly
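The actual command list is cut off in the archive, but for software RAID plus LVM from a rescue environment, commands along these lines are the usual suspects (read-only inspection first; device names are placeholders for the ones in this thread):

```shell
cat /proc/mdstat                 # what the kernel currently sees
mdadm --detail /dev/md1          # array state, member list, event counts
mdadm --examine /dev/sdb2        # per-member superblock (repeat per member)
mdadm --assemble --scan          # assemble arrays from on-disk superblocks
pvs; vgs; lvs                    # is the array an LVM physical volume?
vgchange -ay                     # activate any volume groups found
blkid /dev/md1                   # what filesystem/signature sits on the array
```

Run these as root on the affected machine; only the assemble and vgchange steps change any state.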
On Fri, May 13, 2022 at 12:53 PM Ben Koenig
wrote:
> > I suspect this is a result of many years of different sysadmins replacing
> > drives as they failed. We probably had the idea of eventually increasing
> > the array's storage size once all the smaller drives were replaced with
> > larger
On Fri, May 13, 2022 at 12:07 PM Robert Citek
wrote:
>
> In contrast, if we parse the same information for md1, we see that it is
> also made up of 5 devices, sdb2-sdf2, but of varying sizes:
>
> The RAID makes sense. The smallest partition size is 228.2 GB. And 684.1
> / 228.2 = 3 which is
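To spell out that arithmetic: RAID6 gives (n - 2) times the smallest member, so five members yield 3x the smallest. A quick check, using the 228.2 GB figure from the thread and made-up sizes for the other four members:

```shell
# RAID6 usable capacity = (members - 2) * smallest member.
# 228.2 is the smallest-partition figure above; the other sizes are invented.
printf '%s\n' 228.2 250.1 300.0 320.5 400.0 |
sort -n |
awk '{ n++; if (NR == 1) min = $1 } END { printf "%.1f\n", (n - 2) * min }'
# prints 684.6
```

That lines up with the 684.1 figure in the email, modulo rounding in lsblk's human-readable sizes.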
Just to clarify, md0 seems to be working just fine. It's md1 that seems to
be having issues. And if md1 is partitioned or set up as an LVM physical
volume, mounting it directly won't work.
If we parse the data from lsblk, we can see that md0 is made of 5 devices,
sdb1-sdf1, which are all of the
On Fri, May 13, 2022 at 10:02 AM Ben Koenig
wrote:
> I might be channeling Captain Obvious here, but /dev/md0 is basically just
> a block device.
>
I believe you are channeling a very different captain.
> Sounds like you should identify the filesystem the same way you would for
> a normal HDD
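Concretely, identifying what's on an md device works the same as on a plain disk. From a rescue shell, as root, something like the following (everything here only reads the device):

```shell
blkid /dev/md0             # filesystem type / UUID from the on-disk signature
file -s /dev/md0           # same idea, via libmagic
lsblk -f /dev/md0          # FSTYPE column from util-linux
mount -o ro /dev/md0 /mnt  # once the type is known, mount it read-only
```

If blkid reports LVM2_member instead of a filesystem, that points back to the LVM discussion above.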
On Fri, May 13, 2022 at 8:04 AM Robert Citek wrote:
> Admittedly, I haven't played with LVM in a while. But here's a nice
> resource:
>
>
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_and_managing_logical_volumes/index
>
>
this does look nice, but
Admittedly, I haven't played with LVM in a while. But here's a nice
resource:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_and_managing_logical_volumes/index
From the output you posted, your md1 RAID6 looks like it's working fine,
i.e. no failed