Re: After Install last physical disk is not mounted on reboot

2018-10-16 Thread Bruce Ferrell

On 10/15/18 6:39 AM, Larry Linder wrote:

When you look at /dev/disk and its directories there is no occurrence
of "sde".

We tried to modify "fstab" manually, but the device-name decoding scheme
didn't work.  The system booted to "rescue".

There are a number of problems with the Gigabyte MB, and one has to do
with the serial communication.

I looked into the BIOS and all 4 WD disks are present.  Disk 5, as "sde",
is not seen there.  We tried moving disks around with the same result, so
it's not a disk problem.
These are all WD disks.

However, we have noticed that when you count up the devices to be mounted
in "fstab" there are 16.  A number of the mounts are due to the user and
the SL OS.

On this server we will stick with ext4 for the time being.

We have investigated a port expansion board to allow us to use more
physical disks, but when you peek under the covers at how they work,
the performance penalty is not worth the trouble.

Larry Linder

On Sat, 2018-10-13 at 09:55 -0700, Bruce Ferrell wrote:

My one and only question is, do you see the device for sde, in any
form (/dev/sdeX, /dev/disk/by-*, etc) present in /etc/fstab with the
proper mount point(s)?

It really doesn't matter WHAT the device tech is.  /etc/fstab just
tells the OS where to put the device into the filesystem... Or it did
before systemd  got into the mix.

Just for grins and giggles, I'd put sde (and its correct
partition/mount point) into fstab and reboot during a maintenance
window.

If that fails, I'd be taking a hard look at systemd and the units that
took over disk mounting.  Systemd is why I'm still running SL 6.x

Also, if you hot swapped the drive, the kernel has a nasty habit of
assigning a new device name. What WAS sde becomes sdf until the next
reboot... But fstab and systemd just don't get that.  Look for
anomalies: disk devices that you don't recognize in fstab or the
systemd configs.


On 10/13/18 7:20 AM, Larry Linder wrote:


The problem is not associated with the file system.
We have a newer system with SL 7.5 and xfs and we have the same problem.

I omitted a lot of directories for reasons of time and importance.  fstab is
what is mounted and used by the OS.

The fstab was copied exactly as SL 7.5 built it.  It does not give you a
clue as to what the directories are and it shouldn't.

The point is that I would like to use more physical drives on this system,
but because of the MB or the OS the last physical disk is not seen, which is
"sde".  One of our older SCSI systems had 31 disks attached to it.

The BIOS does see 1 SSD, 4 Western Digital drives and 1 DVD.
SSD sda
WD  sdb
WD  sdc
WD  sdd
WD  sde is missing from "fstab" and not mounted.
plextor dvd

We tried a manual mount and it works, but when you reboot it is gone
because it is not in "fstab".

Why so many disks:
Two of these disks are used for backup of users on the server, twice a
day, at 12:30 and at 0:30.  These are also kept in sync with two disks
at another physical location.  Using "rsync" you have to be careful or it
can become an eternal garbage collector.  This is off topic.

A disk has a finite life, so every 6 months we rotate in a new disk and
toss the oldest one.  It takes two and a half years to cycle through the
pack.
This scheme has worked for us for the last 20 years.  We have never had
a server die on us.  We have used SL Linux from version 4 to current and
before that RH 7->9 and BSD 4.3.

We really do not have a performance problem, even on long 3D renderings.
The slowest thing in the room is the speed at which one can type or point.
Models, simulations, and drawings are done before you can reach for your
cup.

Thank You
Larry Linder


On Fri, 2018-10-12 at 23:07 -0700, Bruce Ferrell wrote:

On 10/12/18 8:09 PM, ~Stack~ wrote:

On 10/12/2018 07:35 PM, Nico Kadel-Garcia wrote:
[snip]

On SL 7? Why? Is there any reason not to use xfs? I've appreciated the
ext filesystems, I've known its original author for decades. (He was
my little brother in my fraternity!) But there's not a compelling
reason to use it in recent SL releases.

Sure there is. Anyone who has to manage fluctuating disks in an LVM knows
precisely why you avoid XFS - Shrink an XFS-formatted LVM partition. Oh,
wait. You can't. ;-)

My server with EXT4 will be back on line with adjusted filesystem sizes
before the XFS partition has even finished backing up! It is a trivial,
well-documented, and quick process to adjust an ext4 file-system.

Granted, I'm in a world where people can't seem to judge how they are
going to use the space on their server and frequently have to come to me
needing help because they did something silly like allocate 50G to /opt
and 1G to /var. *rolls eyes* (sadly that was a true event.) Adjusting
filesystems for others happens far too frequently for me. At least it is
easy for the EXT4 crowd.

Also, I can't think of a single compelling reason to use XFS over EXT4.
Supposedly XFS is great for large files of 30+ GB, but I can promise you
that most of the servers and desktops I support have easily 95% of their

Re: XFS v EXT4 was: After Install last physical disk is not mounted on reboot

2018-10-15 Thread Konstantin Olchanski
>
> Unfortunately, in the world I am in we have ...
> jr admins build the box ...
> sr admins ... fix it.
> 


I was going to comment on the high labour costs of creating,
managing and resizing all these partitions (vs buying a separate
120GB SSD for each partition, at $70 a pop).

But in the world where you have *both* junior *and* senior sysadmins,
maybe it is not such a big issue...

As somebody wrote somewhen:
the life purpose of sysadmins is to create continued employment for sysadmins.


-- 
Konstantin Olchanski
Data Acquisition Systems: The Bytes Must Flow!
Email: olchansk-at-triumf-dot-ca
Snail mail: 4004 Wesbrook Mall, TRIUMF, Vancouver, B.C., V6T 2A3, Canada


Re: After Install last physical disk is not mounted on reboot

2018-10-15 Thread Adam Jensen
On 10/15/2018 05:04 PM, Konstantin Olchanski wrote:
>> 5.  Plextor DVD
> No paper tape reader?
> 

LOL.

I backup my irreplaceable data onto M-Disc DVD's - one copy is kept here
with me and two additional copies are kept, one at an East coast
location and the other at a West coast location. This critical data-set
is currently about 100GB and this method still seems like a reasonably
cost-effective and reliable way to do it. I don't think the technology
is completely antiquated.


Re: After Install last physical disk is not mounted on reboot

2018-10-15 Thread Konstantin Olchanski
On Fri, Oct 12, 2018 at 04:33:56PM -0400, Larry Linder wrote:
> Disk:
> 1.  WD2 TBsdb contains /usr/local & /engr, /engr/users
> 2.  WD2 TBsdc contains /mariadb, & company library
> 3.  WD2 TBsdd contains /backup for other machines
> 4.  WD2 TBsde contains ...

You are very brave to run HDDs without any redundancy. If any HDD springs
a bad sector and you discover it 6 months later when you cannot
read an important file, just hope your backups go back that far.

By my calculation the cost of extra HDDs + learning how to set up
and manage mdadm RAID (or ZFS RAID) is much less than the hassle
of recovering data from backups (if there is anything to recover,
otherwise the cost of eating complete data loss).
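
For anyone weighing that option, a minimal sketch of the mdadm route
(device names, filesystem and mount point are hypothetical - adjust to
your own layout, and back up first):

$ mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdf /dev/sdg
$ mkfs.ext4 /dev/md0
$ blkid /dev/md0                            # note the UUID for fstab
$ mdadm --detail --scan >> /etc/mdadm.conf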

> 5.  Plextor DVD

No paper tape reader?

> fstab#
> UUID=1aa38030-b573-4537-bc9d-83f0a9748c9b /   ext4 
> defaults1 1
> ...

"mount -a" finds the missing disk or not?

-- 
Konstantin Olchanski
Data Acquisition Systems: The Bytes Must Flow!
Email: olchansk-at-triumf-dot-ca
Snail mail: 4004 Wesbrook Mall, TRIUMF, Vancouver, B.C., V6T 2A3, Canada


Re: After Install last physical disk is not mounted on reboot

2018-10-15 Thread Larry Linder
When you look at /dev/disk and its directories there is no occurrence
of "sde".

We tried to modify "fstab" manually, but the device-name decoding scheme
didn't work.  The system booted to "rescue".

There are a number of problems with the Gigabyte MB, and one has to do
with the serial communication.

I looked into the BIOS and all 4 WD disks are present.  Disk 5, as "sde",
is not seen there.  We tried moving disks around with the same result, so
it's not a disk problem.
These are all WD disks.

However, we have noticed that when you count up the devices to be mounted
in "fstab" there are 16.  A number of the mounts are due to the user and
the SL OS.

On this server we will stick with ext4 for the time being.

We have investigated a port expansion board to allow us to use more
physical disks, but when you peek under the covers at how they work,
the performance penalty is not worth the trouble.

Larry Linder

On Sat, 2018-10-13 at 09:55 -0700, Bruce Ferrell wrote:
> My one and only question is, do you see the device for sde, in any
> form (/dev/sdeX, /dev/disk/by-*, etc) present in /etc/fstab with the
> proper mount point(s)?
> 
> It really doesn't matter WHAT the device tech is.  /etc/fstab just
> tells the OS where to put the device into the filesystem... Or it did
> before systemd  got into the mix.
> 
> Just for grins and giggles, I'd put sde (and its correct
> partition/mount point) into fstab and reboot during a maintenance
> window.
> 
> If that fails, I'd be taking a hard look at systemd and the units that
> took over disk mounting.  Systemd is why I'm still running SL 6.x
> 
> Also, if you hot swapped the drive, the kernel has a nasty habit of
> assigning a new device name. What WAS sde becomes sdf until the next
> reboot... But fstab and systemd just don't get that.  Look for
> anomalies: disk devices that you don't recognize in fstab or the
> systemd configs.
> 
> 
> On 10/13/18 7:20 AM, Larry Linder wrote:
> 
> > The problem is not associated with the file system.
> > We have a newer system with SL 7.5 and xfs and we have the same problem.
> > 
> > I omitted a lot of directories for reasons of time and importance.  fstab is
> > what is mounted and used by the OS.
> > 
> > The fstab was copied exactly as SL 7.5 built it.  It does not give you a
> > clue as to what the directories are and it shouldn't.
> > 
> > The point is that I would like to use more physical drives on this system,
> > but because of the MB or the OS the last physical disk is not seen, which is
> > "sde".  One of our older SCSI systems had 31 disks attached to it.
> > 
> > The BIOS does see 1 SSD, 4 Western Digital drives and 1 DVD.
> > SSD sda
> > WD  sdb
> > WD  sdc
> > WD  sdd
> > WD  sde is missing from "fstab" and not mounted.
> > plextor dvd
> > 
> > We tried a manual mount and it works, but when you reboot it is gone
> > because it is not in "fstab".
> > 
> > Why so many disks:
> > Two of these disks are used for backup of users on the server, twice a
> > day, at 12:30 and at 0:30.  These are also kept in sync with two disks
> > at another physical location.  Using "rsync" you have to be careful or it
> > can become an eternal garbage collector.  This is off topic.
> > 
> > A disk has a finite life, so every 6 months we rotate in a new disk and
> > toss the oldest one.  It takes two and a half years to cycle through the
> > pack.
> > This scheme has worked for us for the last 20 years.  We have never had
> > a server die on us.  We have used SL Linux from version 4 to current and
> > before that RH 7->9 and BSD 4.3.
> > 
> > We really do not have a performance problem, even on long 3D renderings.
> > The slowest thing in the room is the speed at which one can type or point.
> > Models, simulations, and drawings are done before you can reach for your
> > cup.
> > 
> > Thank You
> > Larry Linder
> > 
> > 
> > On Fri, 2018-10-12 at 23:07 -0700, Bruce Ferrell wrote:
> > > On 10/12/18 8:09 PM, ~Stack~ wrote:
> > > > On 10/12/2018 07:35 PM, Nico Kadel-Garcia wrote:
> > > > [snip]
> > > > > On SL 7? Why? Is there any reason not to use xfs? I've appreciated the
> > > > > ext filesystems, I've known its original author for decades. (He was
> > > > > my little brother in my fraternity!) But there's not a compelling
> > > > > reason to use it in recent SL releases.
> > > > Sure there is. Anyone who has to manage fluctuating disks in an LVM knows
> > > > precisely why you avoid XFS - Shrink an XFS-formatted LVM partition. Oh,
> > > > wait. You can't. ;-)
> > > > 
> > > > My server with EXT4 will be back on line with adjusted filesystem sizes
> > > > before the XFS partition has even finished backing up! It is a trivial,
> > > > well-documented, and quick process to adjust an ext4 file-system.
> > > > 
> > > > Granted, I'm in a world where people can't seem to judge how they are
> > > > going to use the space on their server and frequently have to come to me
> > > > needing help because they did something silly like allocate 50G to /opt
> > > > and 1G to /var. *rolls eyes* (sadly that

Re: XFS v EXT4 was: After Install last physical disk is not mounted on reboot

2018-10-14 Thread Adam Jensen
On 10/14/2018 09:51 AM, ~Stack~ wrote:
> We do a pool of mirrored disks with fast SSD's for our ZFS caching.
> Performance is fantastic and, as I mentioned in another reply, the
> rebuild time of a failed drive (or a resilvering when I upgraded all of
> the drives on the fly without downtime) is way faster than any RAID I've
> ever worked on before (which is quite a few in my career).
> 

That's interesting. I have no experience with SSD caching.

> However, even if performance wasn't great we would still probably be
> using it because of the tooling around ZFS. We utilize a lot of the
> tools it provides for shared file-systems, backups, compression,
> de-dupe, etc.
> 

ZFS is super nifty, for sure :) I still have a small machine (Celeron
J1900, 8GB RAM) running FreeBSD-11.2 from a two disk ZFS mirror.

> Never used ZFS on *BSD. I've only used it on SL7 so I can't say anything
> about an OS difference.

I had a ZFS mirror on SL-7.4 earlier this year. It would occasionally
have a minor problem - mostly stuff that would self resolve but it was
still sort of surprising and a little worrisome that it would sometimes
stumble or fall down. I've used ZFS on FreeBSD on and off for years,
pretty much since the beginning - mostly simple setups on small
machines, and it has steadily grown to be so reliable, in my experience,
that I expect it and trust it to work like any other part of a critical
infrastructure. The problem I recently discovered is that on a
consistently busy machine it can consume a significant portion of the
resources. So I went with SL and a hardware RAID over FreeBSD with ZFS.
I'm curious to see how it plays out.


Re: XFS v EXT4 was: After Install last physical disk is not mounted on reboot

2018-10-14 Thread ~Stack~
On 10/13/2018 11:22 AM, Adam Jensen wrote:
> On 10/12/2018 11:09 PM, ~Stack~ wrote:
>> For all the arguments of performance, well I wouldn't use either XFS or
>> EXT4. I use ZFS and Ceph on the systems I want performance out of.
> 
> For a single, modest server that runs everything - email, web, DBMS,
> etc. - I've recently switched from FreeBSD-11.2 with a four disk ZFS
> RAID-10 to SL-7.5 with XFS on a four disk hardware RAID-5. While ZFS was
> very convenient and had a lot of nifty capabilities, the resource
> consumption was enormous and performance didn't seem to be as good as it
> is now. (E3-1245, 32GB RAM, MR9266-4i)
> 

We do a pool of mirrored disks with fast SSD's for our ZFS caching.
Performance is fantastic and, as I mentioned in another reply, the
rebuild time of a failed drive (or a resilvering when I upgraded all of
the drives on the fly without downtime) is way faster than any RAID I've
ever worked on before (which is quite a few in my career).

However, even if performance wasn't great we would still probably be
using it because of the tooling around ZFS. We utilize a lot of the
tools it provides for shared file-systems, backups, compression,
de-dupe, etc.
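
For anyone who hasn't tried it, a rough sketch of that kind of layout
(all device names hypothetical, and on SL7 the ZFS-on-Linux packages
have to be installed separately):

$ zpool create tank mirror /dev/sdf /dev/sdg mirror /dev/sdh /dev/sdi \
    cache /dev/nvme0n1                 # mirrored pool plus SSD read cache
$ zfs set compression=lz4 tank
$ zfs create tank/backups
$ zpool status tank                    # shows resilver progress after a swap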

Never used ZFS on *BSD. I've only used it on SL7 so I can't say anything
about an OS difference.

~Stack~


Re: XFS v EXT4 was: After Install last physical disk is not mounted on reboot

2018-10-14 Thread ~Stack~
On 10/13/2018 04:41 AM, Nico Kadel-Garcia wrote:
> On Fri, Oct 12, 2018 at 11:09 PM ~Stack~  wrote:
>>
>> On 10/12/2018 07:35 PM, Nico Kadel-Garcia wrote:
>> [snip]
>>> On SL 7? Why? Is there any reason not to use xfs? I've appreciated the
>>> ext filesystems, I've known its original author for decades. (He was
>>> my little brother in my fraternity!) But there's not a compelling
>>> reason to use it in recent SL releases.
>>
>>
>> Sure there is. Anyone who has to manage fluctuating disks in an LVM knows
>> precisely why you avoid XFS - Shrink an XFS-formatted LVM partition. Oh,
>> wait. You can't. ;-)
> 
> I gave up, roughly 10 years ago, on LVM and more partitions than I
> absolutely needed. I cope with it professionally, most recently with
> some tools to replicate a live OS to a new disk image with complex LVM
> layouts for the filesystem. LVM has usually involved complexity I
> do not need. In the modern day of virtualization and virtualization
> disk images, I just use disk images, not LVM, to create new
> filesystems of tuned sizes. Not so helpful for home desktops, I admit,
> but quite feasible in a "VirtualBox" or "Xen" or "VMware" set of Linux
> VMs.

Unfortunately, in the world I am in we have an Audit/Security
requirement that we *must* have separate partitions for /, swap, /tmp,
/home, and /var with a recommendation for /opt if it is heavily used.
I'm also in a world where researchers get to pick their layouts and have
the jr admins build the box to their specs.  Then when they break
something, we few sr admins have to come in and fix it.


>> My server with EXT4 will be back on line with adjusted filesystem sizes
>> before the XFS partition has even finished backing up! It is a trivial,
>> well-documented, and quick process to adjust an ext4 file-system.
> 
> xfsresize is not working for you? Is that an LVM specific deficit?

Please provide more information. To the best of my knowledge, RH
official support still says shrinking an XFS partition cannot be done,
only growing. I am not familiar with an xfsresize command. Where do I
find it?

$ yum provides */xfsresize
No matches found
$ cat /etc/redhat-release
Scientific Linux release 7.5 (Nitrogen)
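
For reference, a hedged sketch of the commands that do exist (LV and
mount point names made up; growing works for both filesystems online,
shrinking is ext4-only and needs the filesystem unmounted):

$ lvextend -L +20G /dev/vg0/opt
$ resize2fs /dev/vg0/opt                    # ext4: grow to fill the LV
$ xfs_growfs /opt                           # XFS: grow, takes the mount point
$ umount /opt
$ lvreduce --resizefs -L 10G /dev/vg0/opt   # ext4 shrink: fs and LV together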

> 
>> Granted, I'm in a world where people can't seem to judge how they are
>> going to use the space on their server and frequently have to come to me
>> needing help because they did something silly like allocate 50G to /opt
>> and 1G to /var. *rolls eyes* (sadly that was a true event.) Adjusting
>> filesystems for others happens far too frequently for me. At least it is
>> easy for the EXT4 crowd.
> 
> That's a fairly compelling reason not to use the finely divided
> filesystems. The benefits in protecting a system from corrupting data
> when an application overflows a shared partition and interferes with
> another critical system have, typically, been overwhelmed by the
> wailing and gnashing of teeth when *one* partition overflows and
> screws up simple operations like logging, RPM updates, or SSH for
> anyone other than root.
> 
> If it's eating a lot of your time, there's a point where your time
> spent tuning systems is much more expensive than simply buying more
> storage and consistently overprovisioning. Not saying you should spend
> that money, just something I hope you keep in mind.

I don't disagree with you at all. But those partition regulations come
down from a higher level than me. As for buying more disks, I'm quite glad
that the server class SSD's have fallen in price and they aren't buying
60GB disks anymore. Most of them are getting in the ~200GB range and it
is less of an issue.

> 
>> Also, I can't think of a single compelling reason to use XFS over EXT4.
>> Supposedly XFS is great for large files of 30+ GB, but I can promise you
>> that most of the servers and desktops I support have easily 95% of their
>> files under 100M (and I would guess ~70% are under 1M). I know this,
>> because I help the backup team on occasion. I've seen the histograms of
>> file size distributions.
> 
> Personally, I found better performance for proxies, which wound up
> with many, many thousands of files in the same directory because the
> developers had never really thought about the cost of the kernel
> "stat" call to get an ordered list of the files in a directory. I
> ran into that one a lot, especially as systems were scaled up, and
> some people got bit *really hard* when they found that some things
> did not scale up linearly.
> 
> Also: if you're running proxies, email archives, or other tools likely
> to support many small files, ext4 only supports 4 Billion files. ext4
> supports 2^64.

(For others following along: typo. He meant XFS supports 2^64.)

We have had researchers who get pretty high up there, but thankfully no
one has hit that limit yet. For the foreseeable future I think I'm good
on that limit.


>> For all the arguments of performance, well I wouldn't use either XFS or
>> EXT4. I use ZFS and Ceph on the systems I want perf

Re: After Install last physical disk is not mounted on reboot

2018-10-13 Thread Bruce Ferrell
My one and only question is, do you see the device for sde, in any form 
(/dev/sdeX, /dev/disk/by-*, etc) present in /etc/fstab with the proper 
mount point(s)?


It really doesn't matter WHAT the device tech is.  /etc/fstab just tells 
the OS where to put the device into the filesystem... Or it did before 
systemd  got into the mix.


Just for grins and giggles, I'd put sde (and its correct
partition/mount point) into fstab and reboot during a maintenance window.
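
Something along these lines, purely as a sketch - the UUID and mount
point below are placeholders, and the real UUID comes from blkid:

UUID=paste-blkid-output-here  /mc_5   ext4   defaults   1 2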


If that fails, I'd be taking a hard look at systemd and the units that
took over disk mounting.  Systemd is why I'm still running SL 6.x


Also, if you hot swapped the drive, the kernel has a nasty habit of 
assigning a new device name. What WAS sde becomes sdf until the next 
reboot... But fstab and systemd just don't get that.  Look for
anomalies: disk devices that you don't recognize in fstab or the
systemd configs.
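
To see what systemd thinks it is mounting, and to pin the drive to
something more stable than an sdX name, a couple of suggestions (the
mc_5.mount unit name is hypothetical):

$ systemctl list-units --type=mount           # units generated from fstab
$ systemctl status mc_5.mount                 # one specific mount unit
$ ls -l /dev/disk/by-uuid/ /dev/disk/by-id/   # names that survive reshuffling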



On 10/13/18 7:20 AM, Larry Linder wrote:

The problem is not associated with the file system.
We have a newer system with SL 7.5 and xfs and we have the same problem.

I omitted a lot of directories for reasons of time and importance.  fstab is
what is mounted and used by the OS.

The fstab was copied exactly as SL 7.5 built it.  It does not give you a
clue as to what the directories are and it shouldn't.

The point is that I would like to use more physical drives on this system,
but because of the MB or the OS the last physical disk is not seen, which is
"sde".  One of our older SCSI systems had 31 disks attached to it.

The BIOS does see 1 SSD, 4 Western Digital drives and 1 DVD.
SSD sda
WD  sdb
WD  sdc
WD  sdd
WD  sde is missing from "fstab" and not mounted.
plextor dvd

We tried a manual mount and it works, but when you reboot it is gone
because it is not in "fstab".

Why so many disks:
Two of these disks are used for backup of users on the server, twice a
day, at 12:30 and at 0:30.  These are also kept in sync with two disks
at another physical location.  Using "rsync" you have to be careful or it
can become an eternal garbage collector.  This is off topic.

A disk has a finite life, so every 6 months we rotate in a new disk and
toss the oldest one.  It takes two and a half years to cycle through the
pack.
This scheme has worked for us for the last 20 years.  We have never had
a server die on us.  We have used SL Linux from version 4 to current and
before that RH 7->9 and BSD 4.3.

We really do not have a performance problem, even on long 3D renderings.
The slowest thing in the room is the speed at which one can type or point.
Models, simulations, and drawings are done before you can reach for your
cup.

Thank You
Larry Linder


On Fri, 2018-10-12 at 23:07 -0700, Bruce Ferrell wrote:

On 10/12/18 8:09 PM, ~Stack~ wrote:

On 10/12/2018 07:35 PM, Nico Kadel-Garcia wrote:
[snip]

On SL 7? Why? Is there any reason not to use xfs? I've appreciated the
ext filesystems, I've known its original author for decades. (He was
my little brother in my fraternity!) But there's not a compelling
reason to use it in recent SL releases.

Sure there is. Anyone who has to manage fluctuating disks in an LVM knows
precisely why you avoid XFS - Shrink an XFS-formatted LVM partition. Oh,
wait. You can't. ;-)

My server with EXT4 will be back on line with adjusted filesystem sizes
before the XFS partition has even finished backing up! It is a trivial,
well-documented, and quick process to adjust an ext4 file-system.

Granted, I'm in a world where people can't seem to judge how they are
going to use the space on their server and frequently have to come to me
needing help because they did something silly like allocate 50G to /opt
and 1G to /var. *rolls eyes* (sadly that was a true event.) Adjusting
filesystems for others happens far too frequently for me. At least it is
easy for the EXT4 crowd.

Also, I can't think of a single compelling reason to use XFS over EXT4.
Supposedly XFS is great for large files of 30+ GB, but I can promise you
that most of the servers and desktops I support have easily 95% of their
files under 100M (and I would guess ~70% are under 1M). I know this,
because I help the backup team on occasion. I've seen the histograms of
file size distributions.

For all the arguments of performance, well I wouldn't use either XFS or
EXT4. I use ZFS and Ceph on the systems I want performance out of.

Lastly, (I know - single data point) I almost never get the "help my
file system is corrupted" from the EXT4 crowd but I've long stopped
counting how many times I've heard XFS eating files. And the few times
it is EXT4 I don't worry because the tools for recovery are long and
well tested. The best that can be said for XFS recovery tools is "Well,
they are better now than they were."

To me, it still boggles my mind why it is the default FS in the EL world.

But that's me. :-)

~Stack~


The one thing I'd offer you in terms of EXT4 vs XFS: do NOT have a system crash
on very large filesystems (> 1TB) with EXT4.

It will take days to fsck completely.  Trust me on this.  I 

Re: XFS v EXT4 was: After Install last physical disk is not mounted on reboot

2018-10-13 Thread Adam Jensen
On 10/12/2018 11:09 PM, ~Stack~ wrote:
> For all the arguments of performance, well I wouldn't use either XFS or
> EXT4. I use ZFS and Ceph on the systems I want performance out of.

For a single, modest server that runs everything - email, web, DBMS,
etc. - I've recently switched from FreeBSD-11.2 with a four disk ZFS
RAID-10 to SL-7.5 with XFS on a four disk hardware RAID-5. While ZFS was
very convenient and had a lot of nifty capabilities, the resource
consumption was enormous and performance didn't seem to be as good as it
is now. (E3-1245, 32GB RAM, MR9266-4i)


After Install last physical disk is not mounted on reboot

2018-10-13 Thread Larry Linder
The problem is not associated with the file system.
We have a newer system with SL 7.5 and xfs and we have the same problem.

I omitted a lot of directories for reasons of time and importance.  fstab is
what is mounted and used by the OS.

The fstab was copied exactly as SL 7.5 built it.  It does not give you a
clue as to what the directories are and it shouldn't.

The point is that I would like to use more physical drives on this system,
but because of the MB or the OS the last physical disk is not seen, which is
"sde".  One of our older SCSI systems had 31 disks attached to it.

The BIOS does see 1 SSD, 4 Western Digital drives and 1 DVD.
SSD sda
WD  sdb
WD  sdc
WD  sdd
WD  sde is missing from "fstab" and not mounted.
plextor dvd

We tried a manual mount and it works, but when you reboot it is gone
because it is not in "fstab".

Why so many disks:
Two of these disks are used for backup of users on the server, twice a
day, at 12:30 and at 0:30.  These are also kept in sync with two disks
at another physical location.  Using "rsync" you have to be careful or it
can become an eternal garbage collector.  This is off topic.
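
For anyone copying the idea, the "garbage collector" trap usually comes
down to whether deletions propagate; a sketch of one way to handle it
(the destination host and paths are made up):

$ rsync -a --delete --backup --backup-dir=/backup/deleted-$(date +%F) \
    /engr/users/ backuphost:/backup/users/
# --delete keeps the mirror clean; --backup-dir parks removed files
# in a dated directory instead of silently discarding them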

A disk has a finite life, so every 6 months we rotate in a new disk and
toss the oldest one.  It takes two and a half years to cycle through the
pack.
This scheme has worked for us for the last 20 years.  We have never had
a server die on us.  We have used SL Linux from version 4 to current and
before that RH 7->9 and BSD 4.3.

We really do not have a performance problem, even on long 3D renderings.
The slowest thing in the room is the speed at which one can type or point.
Models, simulations, and drawings are done before you can reach for your
cup.

Thank You
Larry Linder


On Fri, 2018-10-12 at 23:07 -0700, Bruce Ferrell wrote:
> On 10/12/18 8:09 PM, ~Stack~ wrote:
> > On 10/12/2018 07:35 PM, Nico Kadel-Garcia wrote:
> > [snip]
> >> On SL 7? Why? Is there any reason not to use xfs? I've appreciated the
> >> ext filesystems, I've known its original author for decades. (He was
> >> my little brother in my fraternity!) But there's not a compelling
> >> reason to use it in recent SL releases.
> >
> > Sure there is. Anyone who has to manage fluctuating disks in an LVM knows
> > precisely why you avoid XFS - Shrink an XFS-formatted LVM partition. Oh,
> > wait. You can't. ;-)
> >
> > My server with EXT4 will be back on line with adjusted filesystem sizes
> > before the XFS partition has even finished backing up! It is a trivial,
> > well-documented, and quick process to adjust an ext4 file-system.
> >
> > Granted, I'm in a world where people can't seem to judge how they are
> > going to use the space on their server and frequently have to come to me
> > needing help because they did something silly like allocate 50G to /opt
> > and 1G to /var. *rolls eyes* (sadly that was a true event.) Adjusting
> > filesystems for others happens far too frequently for me. At least it is
> > easy for the EXT4 crowd.
> >
> > Also, I can't think of a single compelling reason to use XFS over EXT4.
> > Supposedly XFS is great for large files of 30+ GB, but I can promise you
> > that most of the servers and desktops I support have easily 95% of their
> > files under 100M (and I would guess ~70% are under 1M). I know this,
> > because I help the backup team on occasion. I've seen the histograms of
> > file size distributions.
> >
> > For all the arguments of performance, well I wouldn't use either XFS or
> > EXT4. I use ZFS and Ceph on the systems I want performance out of.
> >
> > Lastly, (I know - single data point) I almost never get the "help my
> > file system is corrupted" from the EXT4 crowd but I've long stopped
> > counting how many times I've heard XFS eating files. And the few times
> > it is EXT4 I don't worry because the tools for recovery are long and
> > well tested. The best that can be said for XFS recovery tools is "Well,
> > they are better now than they were."
> >
> > To me, it still boggles my mind why it is the default FS in the EL world.
> >
> > But that's me. :-)
> >
> > ~Stack~
> >
> 
> The one thing I'd offer you in terms of EXT4 vs XFS: do NOT have a system
> crash on very large filesystems (> 1TB) with EXT4.
> 
> It will take days to fsck completely.  Trust me on this.  I did it (5.5TB 
> RAID6)... and then converted to XFS.  Been running well for 3 years now.


Re: XFS v EXT4 was: After Install last physical disk is not mounted on reboot

2018-10-13 Thread Nico Kadel-Garcia
On Fri, Oct 12, 2018 at 11:09 PM ~Stack~  wrote:
>
> On 10/12/2018 07:35 PM, Nico Kadel-Garcia wrote:
> [snip]
> > On SL 7? Why? Is there any reason not to use xfs? I've appreciated the
> > ext filesystems, I've known its original author for decades. (He was
> > my little brother in my fraternity!) But there's not a compelling
> > reason to use it in recent SL releases.
>
>
> Sure there is. Anyone who has to manage fluctuating disks in an LVM knows
> precisely why you avoid XFS - Shrink an XFS-formatted LVM partition. Oh,
> wait. You can't. ;-)

I gave up, roughly 10 years ago, on LVM and more partitions than I
absolutely needed. I cope with it professionally, most recently with
some tools to replicate a live OS to a new disk image with complex LVM
layouts for the filesystem. LVM has usually involved complexity I
do not need. In the modern day of virtualization and virtualization
disk images, I just use disk images, not LVM, to create new
filesystems of tuned sizes. Not so helpful for home desktops, I admit,
but quite feasible in a "VirtualBox" or "Xen" or "VMware" set of Linux
VMs.

> My server with EXT4 will be back on line with adjusted filesystem sizes
> before the XFS partition has even finished backing up! It is a trivial,
> well-documented, and quick process to adjust an ext4 file-system.

xfsresize is not working for you? Is that an LVM specific deficit?

> Granted, I'm in a world where people can't seem to judge how they are
> going to use the space on their server and frequently have to come to me
> needing help because they did something silly like allocate 50G to /opt
> and 1G to /var. *rolls eyes* (sadly that was a true event.) Adjusting
> filesystems for others happens far too frequently for me. At least it is
> easy for the EXT4 crowd.

That's a fairly compelling reason not to use the finely divided
filesystems. The benefits in protecting a system from corrupting data
when an application overflows a shared partition and interferes with
another critical system have, typically, been overwhelmed by the
wailing and gnashing of teeth when *one* partition overflows and
screws up simple operations like logging, RPM updates, or SSH for
anyone other than root.

If it's eating a lot of your time, there's a point where your time
spent tuning systems is much more expensive than simply buying more
storage and consistently overprovisioning. Not saying you should spend
that money, just something I hope you keep in mind.

> Also, I can't think of a single compelling reason to use XFS over EXT4.
> Supposedly XFS is great for large files of 30+ GB, but I can promise you
> that most of the servers and desktops I support have easily 95% of their
> files under 100M (and I would guess ~70% are under 1M). I know this,
> because I help the backup team on occasion. I've seen the histograms of
> file size distributions.

Personally, I found better performance for proxies, which wound up
with many, many thousands of files in the same directory because the
developers had never really thought about the cost of the kernel
"stat" call to get an ordered list of the files in a directory. I
ran into that one a lot, especially as systems were scaled up, and
some people got bit *really hard* when they found that some things
did not scale up linearly.

Also: if you're running proxies, email archives, or other tools likely
to support many small files, ext4 only supports 4 Billion files. ext4
supports 2^64.
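
If you ever want to see how close an ext4 filesystem is to its inode
ceiling, a quick check (path is illustrative):

$ df -i /var/spool/proxy               # IUsed/IFree columns show inode headroom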

> For all the arguments of performance, well I wouldn't use either XFS or
> EXT4. I use ZFS and Ceph on the systems I want performance out of.

I've not personally needed those, but I've not been in the big
filesystem world for a few years. Are they working well for you?

> Lastly, (I know - single data point) I almost never get the "help my
> file system is corrupted" from the EXT4 crowd but I've long stopped
> counting how many times I've heard XFS eating files. And the few times
> it is EXT4 I don't worry because the tools for recovery are long and
> well tested. The best that can be said for XFS recovery tools is "Well,
> they are better now than they were."

Five years ago, I was much more leery of xfs. It's been performing
very well for me since SL 7 and the upstream RHEL 7 came out.

> To me, it still boggles my mind why it is the default FS in the EL world.

I suspect its stability has improved. The "large filesystem" specs are
pretty good, and the latency numbers do show some benefits.


Re: XFS v EXT4 was: After Install last physical disk is not mounted on reboot

2018-10-13 Thread MAH Maccallum
Another issue is that Dropbox have announced that henceforth
they will only support ext4.

On 13/10/18 04:09, ~Stack~ wrote:
> On 10/12/2018 07:35 PM, Nico Kadel-Garcia wrote:
> [snip]
>> On SL 7? Why? Is there any reason not to use xfs? I've appreciated the
>> ext filesystems, I've known its original author for decades. (He was
>> my little brother in my fraternity!) But there's not a compelling
>> reason to use it in recent SL releases.
>
> Sure there is. Anyone who has to manage fluctuating disks in an LVM knows
> precisely why you avoid XFS - Shrink an XFS-formatted LVM partition. Oh,
> wait. You can't. ;-)
>
> My server with EXT4 will be back on line with adjusted filesystem sizes
> before the XFS partition has even finished backing up! It is a trivial,
> well-documented, and quick process to adjust an ext4 file-system.
>
> Granted, I'm in a world where people can't seem to judge how they are
> going to use the space on their server and frequently have to come to me
> needing help because they did something silly like allocate 50G to /opt
> and 1G to /var. *rolls eyes* (sadly that was a true event.) Adjusting
> filesystems for others happens far too frequently for me. At least it is
> easy for the EXT4 crowd.
>
> Also, I can't think of a single compelling reason to use XFS over EXT4.
> Supposedly XFS is great for large files of 30+ GB, but I can promise you
> that most of the servers and desktops I support have easily 95% of their
> files under 100M (and I would guess ~70% are under 1M). I know this,
> because I help the backup team on occasion. I've seen the histograms of
> file size distributions.
>
> For all the arguments of performance, well I wouldn't use either XFS or
> EXT4. I use ZFS and Ceph on the systems I want performance out of.
>
> Lastly, (I know - single data point) I almost never get the "help my
> file system is corrupted" from the EXT4 crowd but I've long stopped
> counting how many times I've heard XFS eating files. And the few times
> it is EXT4 I don't worry because the tools for recovery are long and
> well tested. The best that can be said for XFS recovery tools is "Well,
> they are better now than they were."
>
> To me, it still boggles my mind why it is the default FS in the EL world.
>
> But that's me. :-)
>
> ~Stack~
>



Re: XFS v EXT4 was: After Install last physical disk is not mounted on reboot

2018-10-12 Thread Bruce Ferrell

On 10/12/18 8:09 PM, ~Stack~ wrote:

On 10/12/2018 07:35 PM, Nico Kadel-Garcia wrote:
[snip]

On SL 7? Why? Is there any reason not to use xfs? I've appreciated the
ext filesystems, I've known its original author for decades. (He was
my little brother in my fraternity!) But there's not a compelling
reason to use it in recent SL releases.


Sure there is. Anyone who has to manage fluctuating disks in an LVM knows
precisely why you avoid XFS - Shrink an XFS-formatted LVM partition. Oh,
wait. You can't. ;-)

My server with EXT4 will be back on line with adjusted filesystem sizes
before the XFS partition has even finished backing up! It is a trivial,
well-documented, and quick process to adjust an ext4 file-system.

Granted, I'm in a world where people can't seem to judge how they are
going to use the space on their server and frequently have to come to me
needing help because they did something silly like allocate 50G to /opt
and 1G to /var. *rolls eyes* (sadly that was a true event.) Adjusting
filesystems for others happens far too frequently for me. At least it is
easy for the EXT4 crowd.

Also, I can't think of a single compelling reason to use XFS over EXT4.
Supposedly XFS is great for large files of 30+ GB, but I can promise you
that most of the servers and desktops I support have easily 95% of their
files under 100M (and I would guess ~70% are under 1M). I know this,
because I help the backup team on occasion. I've seen the histograms of
file size distributions.

For all the arguments of performance, well I wouldn't use either XFS or
EXT4. I use ZFS and Ceph on the systems I want performance out of.

Lastly, (I know - single data point) I almost never get the "help my
file system is corrupted" from the EXT4 crowd but I've long stopped
counting how many times I've heard XFS eating files. And the few times
it is EXT4 I don't worry because the tools for recovery are long and
well tested. The best that can be said for XFS recovery tools is "Well,
they are better now than they were."

To me, it still boggles my mind why it is the default FS in the EL world.

But that's me. :-)

~Stack~



The one thing I'd offer you in terms of EXT4 vs XFS: do NOT have a system crash
on very large filesystems (> 1TB) with EXT4.

It will take days to fsck completely.  Trust me on this.  I did it (5.5TB 
RAID6)... and then converted to XFS.  Been running well for 3 years now.


XFS v EXT4 was: After Install last physical disk is not mounted on reboot

2018-10-12 Thread ~Stack~
On 10/12/2018 07:35 PM, Nico Kadel-Garcia wrote:
[snip]
> On SL 7? Why? Is there any reason not to use xfs? I've appreciated the
> ext filesystems, I've known its original author for decades. (He was
> my little brother in my fraternity!) But there's not a compelling
> reason to use it in recent SL releases.


Sure there is. Anyone who has to manage fluctuating disks in an LVM knows
precisely why you avoid XFS - Shrink an XFS-formatted LVM partition. Oh,
wait. You can't. ;-)

My server with EXT4 will be back on line with adjusted filesystem sizes
before the XFS partition has even finished backing up! It is a trivial,
well-documented, and quick process to adjust an ext4 file-system.

Granted, I'm in a world where people can't seem to judge how they are
going to use the space on their server and frequently have to come to me
needing help because they did something silly like allocate 50G to /opt
and 1G to /var. *rolls eyes* (sadly that was a true event.) Adjusting
filesystems for others happens far too frequently for me. At least it is
easy for the EXT4 crowd.

Also, I can't think of a single compelling reason to use XFS over EXT4.
Supposedly XFS is great for large files of 30+ GB, but I can promise you
that most of the servers and desktops I support have easily 95% of their
files under 100M (and I would guess ~70% are under 1M). I know this,
because I help the backup team on occasion. I've seen the histograms of
file size distributions.

For all the arguments of performance, well I wouldn't use either XFS or
EXT4. I use ZFS and Ceph on the systems I want performance out of.

Lastly, (I know - single data point) I almost never get the "help my
file system is corrupted" from the EXT4 crowd but I've long stopped
counting how many times I've heard XFS eating files. And the few times
it is EXT4 I don't worry because the tools for recovery are long and
well tested. The best that can be said for XFS recovery tools is "Well,
they are better now than they were."

To me, it still boggles my mind why it is the default FS in the EL world.

But that's me. :-)

~Stack~



signature.asc
Description: OpenPGP digital signature


Re: After Install last physical disk is not mounted on reboot

2018-10-12 Thread Nico Kadel-Garcia
On Fri, Oct 12, 2018 at 4:50 PM Larry Linder wrote:
>
> New System:
>
> Gigabyte Mother board.
> 32 G Ram
> 6 core AMD processor.
> ext4 FS  ??

On SL 7? Why? Is there any reason not to use xfs? I've appreciated the
ext filesystems, I've known its original author for decades. (He was
my little brother in my fraternity!) But there's not a compelling
reason to use it in recent SL releases.

> Disk:
> 0.  SSD  240G   sda contains o/s

What the hell? No partitions? Where did you put the boot loader?

> 1.  WD  2 TBsdb contains /usr/local & /engr, /engr/users

*Stop* putting your software and bundled directories in "/". It's a
violation of the Filesystem Hierarchy Standard, and will cause endless grief.

> 2.  WD  2 TBsdc contains /mariadb, & company library

See above.

> 3.  WD  2 TBsdd contains /backup for other machines

See above.

> 4.  WD  2 TBsde contains ...
> 5.  Plextor DVD
> Mother board has 6 ports.
> These are physical disks setup during install, using a manual install.
> After install is complete, system reboots, everything works but there is
> no sde present in fstab and it is not mounted.
> According to RH website we are not exceeding any published limits.


> There is nothing about this problem with the Gigabyte MB.
>
> This system does not use logical volumes or any RAID.
>
> Any clues as to what is going on?
>
> Don't know how to decode the disk definitions in fstab?
> My description is how I set it up.
>
> Thank You
> Larry Linder

And you need to run "parted -l" and "blkid" in order to unfurl the
UUID associations with particular devices. "parted -l" will show you
if the devices are detected. "blkid" will show you what the
relationship is between the devices and their UUIDs. And as much as I
love the author of ext-based filesystems, I no longer
recommend them for default use, mostly due to their performance with
directories with many, many thousands of files or directories in them.
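
Concretely, something like this (the sde1 partition name is assumed):

$ parted -l                            # every detected disk and its partition table
$ blkid /dev/sde1                      # prints the UUID= string for fstab
$ lsblk -f                             # filesystem, UUID and mountpoint per device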

> fstab#
> UUID=1aa38030-b573-4537-bc9d-83f0a9748c9b /   ext4
> defaults1 1
> UUID=0f5a32b3-4f87-4d44-9c87-76da4ae6e5f5 /boot   ext4
> defaults1 2
> UUID=22E1-6183  /boot/efi   vfat
> umask=0077,shortname=winnt 0 0
> UUID=8137d36b-5bf7-4499-8c00-b62486bfe24b /engr   ext4
> defaults1 2
> UUID=d46c96a4-aad3-4cf0-a73c-826f8426d553 /home   ext4
> defaults1 2
> UUID=0636e7dd-b750-44da-97af-36e8b5296030 /mc_1   ext4
> defaults1 2
> UUID=4f73b695-ef2a-4c7a-a535-88e2146d4f20 /mcdb   ext4
> defaults1 2
> UUID=6fd9460d-f036-4b67-b464-7017deb91f7d /mc_4   ext4
> defaults1 2
> UUID=bec01426-038b-4600-8af9-7641bfd3f5cb /mc_lib ext4
> defaults1 2
> UUID=48907d16-332e-40dd-a1f6-5dc240cc061a /optext4
> defaults1 2
> UUID=937d8f1e-4d9c-4ed0-abbb-c37ea0336869 /tmpext4
> defaults1 2
> UUID=aee46348-f657-4132-87cb-7d1df890472b /usrext4
> defaults1 2
> UUID=544e3db7-f670-4c5d-903a-176b05d63bcf /usr/local  ext4
> defaults1 2
> UUID=8fca9d68-579a-475b-85f2-3ea08967cc93 /varext4
> defaults1 2
> UUID=2b39c434-723b-494e-8e3f-db5a8c4a1a14 swapswap
> defaults0 0
> tmpfs   /dev/shmtmpfs   defaults
> 0 0
> devpts  /dev/ptsdevpts  gid=5,mode=620
> 0 0
> sysfs   /syssysfs   defaults
> 0 0
> proc/proc   procdefaults
> 0 0
> ~


Re: After Install last physical disk is not mounted on reboot

2018-10-12 Thread Andrew Z
And the BIOS sees it, right?

On Fri, Oct 12, 2018, 16:50 Larry Linder wrote:

> New System:
>
> Gigabyte Mother board.
> 32 G Ram
> 6 core AMD processor.
> ext4 FS  ??
>
> Disk:
> 0.  SSD  240G   sda contains o/s
> 1.  WD  2 TBsdb contains /usr/local & /engr, /engr/users
> 2.  WD  2 TBsdc contains /mariadb, & company library
> 3.  WD  2 TBsdd contains /backup for other machines
> 4.  WD  2 TBsde contains ...
> 5.  Plextor DVD
> Mother board has 6 ports.
> These are physical disks setup during install, using a manual install.
> After install is complete, system reboots, everything works but there is
> no sde present in fstab and it is not mounted.
> According to RH website we are not exceeding any published limits.
>
> There is nothing about this problem with the Gigabyte MB.
>
> This system does not use logical volumes or any RAID.
>
> Any clues as to what is going on?
>
> Don't know how to decode the disk definitions in fstab?
> My description is how I set it up.
>
> Thank You
> Larry Linder
>
> fstab#
> UUID=1aa38030-b573-4537-bc9d-83f0a9748c9b /   ext4
> defaults1 1
> UUID=0f5a32b3-4f87-4d44-9c87-76da4ae6e5f5 /boot   ext4
> defaults1 2
> UUID=22E1-6183  /boot/efi   vfat
> umask=0077,shortname=winnt 0 0
> UUID=8137d36b-5bf7-4499-8c00-b62486bfe24b /engr   ext4
> defaults1 2
> UUID=d46c96a4-aad3-4cf0-a73c-826f8426d553 /home   ext4
> defaults1 2
> UUID=0636e7dd-b750-44da-97af-36e8b5296030 /mc_1   ext4
> defaults1 2
> UUID=4f73b695-ef2a-4c7a-a535-88e2146d4f20 /mcdb   ext4
> defaults1 2
> UUID=6fd9460d-f036-4b67-b464-7017deb91f7d /mc_4   ext4
> defaults1 2
> UUID=bec01426-038b-4600-8af9-7641bfd3f5cb /mc_lib ext4
> defaults1 2
> UUID=48907d16-332e-40dd-a1f6-5dc240cc061a /optext4
> defaults1 2
> UUID=937d8f1e-4d9c-4ed0-abbb-c37ea0336869 /tmpext4
> defaults1 2
> UUID=aee46348-f657-4132-87cb-7d1df890472b /usrext4
> defaults1 2
> UUID=544e3db7-f670-4c5d-903a-176b05d63bcf /usr/local  ext4
> defaults1 2
> UUID=8fca9d68-579a-475b-85f2-3ea08967cc93 /varext4
> defaults1 2
> UUID=2b39c434-723b-494e-8e3f-db5a8c4a1a14 swapswap
> defaults0 0
> tmpfs   /dev/shmtmpfs   defaults
> 0 0
> devpts  /dev/ptsdevpts  gid=5,mode=620
> 0 0
> sysfs   /syssysfs   defaults
> 0 0
> proc/proc   procdefaults
> 0 0
> ~
>


After Install last physical disk is not mounted on reboot

2018-10-12 Thread Larry Linder
New System:

Gigabyte Mother board.
32 G Ram
6 core AMD processor.
ext4 FS  ??

Disk:
0.  SSD  240G   sda contains o/s
1.  WD  2 TBsdb contains /usr/local & /engr, /engr/users
2.  WD  2 TBsdc contains /mariadb, & company library
3.  WD  2 TBsdd contains /backup for other machines
4.  WD  2 TBsde contains ...
5.  Plextor DVD
Mother board has 6 ports.
These are physical disks setup during install, using a manual install.
After install is complete, system reboots, everything works but there is
no sde present in fstab and it is not mounted.
According to RH website we are not exceeding any published limits.

There is nothing about this problem with the Gigabyte MB.

This system does not use logical volumes or any RAID.

Any clues as to what is going on?

Don't know how to decode the disk definitions in fstab?
My description is how I set it up.

Thank You
Larry Linder

fstab#
UUID=1aa38030-b573-4537-bc9d-83f0a9748c9b /   ext4
defaults1 1
UUID=0f5a32b3-4f87-4d44-9c87-76da4ae6e5f5 /boot   ext4
defaults1 2
UUID=22E1-6183  /boot/efi   vfat
umask=0077,shortname=winnt 0 0
UUID=8137d36b-5bf7-4499-8c00-b62486bfe24b /engr   ext4
defaults1 2
UUID=d46c96a4-aad3-4cf0-a73c-826f8426d553 /home   ext4
defaults1 2
UUID=0636e7dd-b750-44da-97af-36e8b5296030 /mc_1   ext4
defaults1 2
UUID=4f73b695-ef2a-4c7a-a535-88e2146d4f20 /mcdb   ext4
defaults1 2
UUID=6fd9460d-f036-4b67-b464-7017deb91f7d /mc_4   ext4
defaults1 2
UUID=bec01426-038b-4600-8af9-7641bfd3f5cb /mc_lib ext4
defaults1 2
UUID=48907d16-332e-40dd-a1f6-5dc240cc061a /optext4
defaults1 2
UUID=937d8f1e-4d9c-4ed0-abbb-c37ea0336869 /tmpext4
defaults1 2
UUID=aee46348-f657-4132-87cb-7d1df890472b /usrext4
defaults1 2
UUID=544e3db7-f670-4c5d-903a-176b05d63bcf /usr/local  ext4
defaults1 2
UUID=8fca9d68-579a-475b-85f2-3ea08967cc93 /varext4
defaults1 2
UUID=2b39c434-723b-494e-8e3f-db5a8c4a1a14 swapswap
defaults0 0
tmpfs   /dev/shmtmpfs   defaults
0 0
devpts  /dev/ptsdevpts  gid=5,mode=620
0 0
sysfs   /syssysfs   defaults
0 0
proc/proc   procdefaults
0 0
~