Re: After Install last physical disk is not mounted on reboot
On 10/15/2018 05:04 PM, Konstantin Olchanski wrote:
>> 5. Plextor DVD
> No paper tape reader?

LOL. I back up my irreplaceable data onto M-Disc DVDs: one copy is kept here with me, and two additional copies are kept elsewhere, one at an East coast location and the other at a West coast location. This critical data set is currently about 100 GB, and this method still seems like a reasonably cost-effective and reliable way to do it. I don't think the technology is completely antiquated.
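[Editor's note: for anyone wanting to reproduce this kind of optical archive, a minimal sketch using dvd+rw-tools. The device node /dev/sr0 and the paths are assumptions, not details from the thread.]

    # master and burn a Rock Ridge + Joliet disc in one step
    # (/data/critical/ is an illustrative source directory)
    growisofs -Z /dev/sr0 -R -J /data/critical/

    # verify the burn against the source afterwards
    mount /dev/sr0 /mnt && diff -r /data/critical /mnt && umount /mnt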
Re: After Install last physical disk is not mounted on reboot
On Fri, Oct 12, 2018 at 04:33:56PM -0400, Larry Linder wrote:
> Disk:
> 1. WD 2 TB sdb contains /usr/local & /engr, /engr/users
> 2. WD 2 TB sdc contains /mariadb, & company library
> 3. WD 2 TB sdd contains /backup for other machines
> 4. WD 2 TB sde contains ...

You are very brave to run HDDs without any redundancy. If any HDD springs a bad sector and you discover it 6 months later when you cannot read an important file, just hope your backups go back that far. By my calculation, the cost of extra HDDs + learning how to set up and manage mdadm RAID (or ZFS RAID) is much less than the hassle of recovering data from backups (if there is anything to recover; otherwise, the cost of eating complete data loss).

> 5. Plextor DVD

No paper tape reader?

> fstab#
> UUID=1aa38030-b573-4537-bc9d-83f0a9748c9b  /  ext4  defaults  1 1
> ...

Does "mount -a" find the missing disk or not?

--
Konstantin Olchanski
Data Acquisition Systems: The Bytes Must Flow!
Email: olchansk-at-triumf-dot-ca
Snail mail: 4004 Wesbrook Mall, TRIUMF, Vancouver, B.C., V6T 2A3, Canada
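[Editor's note: a minimal sketch of the mdadm setup Konstantin suggests, plus his quick fstab test. The device names are assumptions, and --create destroys any existing data on them.]

    # mirror two empty disks into a RAID1 array
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    mkfs.ext4 /dev/md0
    mdadm --detail --scan >> /etc/mdadm.conf   # persist the array across reboots

    # and the quick test for the missing-disk question:
    mount -a    # attempts every fstab entry that isn't already mounted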
Re: After Install last physical disk is not mounted on reboot
When you look at /dev/disk and the directories under it, there is no occurrence of "sde". We tried to modify "fstab" manually, but the device-code decoding scheme didn't work; the system booted to "rescue". There are a number of problems with the Gigabyte MB, and one has to do with the serial communication. I looked into the BIOS and all 4 WD disks are present; disk 5, as "sde", is not seen there. We tried moving disks around with the same result, so it's not a disk problem. These are all WD disks.

However, we have noticed that when you count up the devices to be mounted in "fstab" there are 16. A number of the mounts are due to the user and the SL OS. On this server we will stick with ext4 for the time being. We have investigated a port-expansion board to allow us to use more physical disks, but when you peek under the covers and look at how they work, the performance penalty is not worth the trouble.

Larry Linder

On Sat, 2018-10-13 at 09:55 -0700, Bruce Ferrell wrote:
> My one and only question is: do you see the device for sde, in any
> form (/dev/sdeX, /dev/disk/by-*, etc.), present in /etc/fstab with the
> proper mount point(s)?
>
> It really doesn't matter WHAT the device tech is. /etc/fstab just
> tells the OS where to put the device into the filesystem... Or it did
> before systemd got into the mix.
>
> Just for grins and giggles, I'd put sde (and its correct
> partition/mount point) into fstab and reboot during a maintenance
> window.
>
> If that fails, I'd be taking a hard look at systemd and the units that
> took over disk mounting. Systemd is why I'm still running SL 6.x.
>
> Also, if you hot-swapped the drive, the kernel has a nasty habit of
> assigning a new device name. What WAS sde becomes sdf until the next
> reboot... but fstab and systemd just don't get that. Look for
> anomalies: disk devices that you don't recognize in fstab or the
> systemd configs.
>
> On 10/13/18 7:20 AM, Larry Linder wrote:
> > The problem is not associated with the file system.
> > We have a newer system with SL 7.5 and xfs, and we have the same problem.
> >
> > I omitted a lot of directories because of time and importance. fstab is
> > what is mounted and used by the OS.
> >
> > The fstab was copied exactly as SL 7.5 built it. It does not give you a
> > clue as to what the directories are, and it shouldn't.
> >
> > The point is that I would like to use more physical drives on this system,
> > but because of the MB or OS the last physical disk, "sde", is not seen.
> > One of our older SCSI systems had 31 disks attached to it.
> >
> > The BIOS does see 1 SSD, 4 Western Digital drives and 1 DVD:
> > SSD      sda
> > WD       sdb
> > WD       sdc
> > WD       sdd
> > WD       sde is missing from "fstab" and not mounted.
> > Plextor DVD
> >
> > We tried a manual mount and it works, but when you reboot it is gone
> > because it is not in "fstab".
> >
> > Why so many disks:
> > Two of these disks are used for backup of users on the server, twice a
> > day, at 12:30 and at 0:30. These are also in sync with two disks
> > that are at another physical location. Using "rsync" you have to be
> > careful or it can be an eternal garbage collector. This is off topic.
> >
> > A disk has a finite life, so every 6 mo. we rotate in a new disk and
> > toss the oldest one. It takes two and a half years to cycle through the
> > pack. This scheme has worked for us for the last 20 years. We have never
> > had a server die on us. We have used SL Linux from version 4 to current,
> > and before that RH 7->9 and BSD 4.3.
> >
> > We really do not have a performance problem, even on long 3D renderings.
> > The slowest thing in the room is the speed one can type or point;
> > models, simulations, drawings are done before you can reach for your
> > cup.
> >
> > Thank You
> > Larry Linder
> >
> > On Fri, 2018-10-12 at 23:07 -0700, Bruce Ferrell wrote:
> > > On 10/12/18 8:09 PM, ~Stack~ wrote:
> > > > On 10/12/2018 07:35 PM, Nico Kadel-Garcia wrote:
> > > > [snip]
> > > > > On SL 7? Why? Is there any reason not to use xfs? I've appreciated the
> > > > > ext filesystems, I've known its original author for decades. (He was
> > > > > my little brother in my fraternity!) But there's not a compelling
> > > > > reason to use it in recent SL releases.
> > > > Sure there is. Anyone who has to manage fluctuating disks in an LVM knows
> > > > precisely why you avoid XFS - shrink an XFS-formatted LVM partition. Oh,
> > > > wait. You can't. ;-)
> > > >
> > > > My server with EXT4 will be back online with adjusted filesystem sizes
> > > > before the XFS partition has even finished backing up! It is a trivial,
> > > > well-documented, and quick process to adjust an ext4 filesystem.
> > > >
> > > > Granted, I'm in a world where people can't seem to judge how they are
> > > > going to use the space on their server and frequently have to come to me
> > > > needing help because they did something silly like allocate 50G to /opt
> > > > and 1G to /var. *rolls eyes* (sadly that was a true event.)
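[Editor's note: since a hand-edited fstab entry is what dropped this box to rescue, Bruce's "just put sde in fstab" experiment is safer done with a UUID and the nofail option. A sketch; the mount point /mc_5 and the single-partition layout are assumptions.]

    blkid /dev/sde1    # assumes the disk has one partition; prints its UUID

    # /etc/fstab entry: the UUID survives sde->sdf renames after a hot-swap,
    # and "nofail" stops systemd from dropping to emergency mode when the
    # disk is absent.  <uuid> is whatever blkid printed; /mc_5 is made up.
    UUID=<uuid>  /mc_5  ext4  defaults,nofail  1 2

    mkdir -p /mc_5 && mount -a    # test the entry without rebooting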
Re: After Install last physical disk is not mounted on reboot
My one and only question is: do you see the device for sde, in any form (/dev/sdeX, /dev/disk/by-*, etc.), present in /etc/fstab with the proper mount point(s)?

It really doesn't matter WHAT the device tech is. /etc/fstab just tells the OS where to put the device into the filesystem... Or it did before systemd got into the mix.

Just for grins and giggles, I'd put sde (and its correct partition/mount point) into fstab and reboot during a maintenance window.

If that fails, I'd be taking a hard look at systemd and the units that took over disk mounting. Systemd is why I'm still running SL 6.x.

Also, if you hot-swapped the drive, the kernel has a nasty habit of assigning a new device name. What WAS sde becomes sdf until the next reboot... but fstab and systemd just don't get that. Look for anomalies: disk devices that you don't recognize in fstab or the systemd configs.

On 10/13/18 7:20 AM, Larry Linder wrote:
> The problem is not associated with the file system.
> We have a newer system with SL 7.5 and xfs, and we have the same problem.
>
> I omitted a lot of directories because of time and importance. fstab is
> what is mounted and used by the OS.
>
> The fstab was copied exactly as SL 7.5 built it. It does not give you a
> clue as to what the directories are, and it shouldn't.
>
> The point is that I would like to use more physical drives on this system,
> but because of the MB or OS the last physical disk, "sde", is not seen.
> One of our older SCSI systems had 31 disks attached to it.
>
> The BIOS does see 1 SSD, 4 Western Digital drives and 1 DVD:
> SSD      sda
> WD       sdb
> WD       sdc
> WD       sdd
> WD       sde is missing from "fstab" and not mounted.
> Plextor DVD
>
> We tried a manual mount and it works, but when you reboot it is gone
> because it is not in "fstab".
>
> Why so many disks:
> Two of these disks are used for backup of users on the server, twice a
> day, at 12:30 and at 0:30. These are also in sync with two disks that
> are at another physical location. Using "rsync" you have to be careful
> or it can be an eternal garbage collector. This is off topic.
>
> A disk has a finite life, so every 6 mo. we rotate in a new disk and
> toss the oldest one. It takes two and a half years to cycle through the
> pack. This scheme has worked for us for the last 20 years. We have never
> had a server die on us. We have used SL Linux from version 4 to current,
> and before that RH 7->9 and BSD 4.3.
>
> We really do not have a performance problem, even on long 3D renderings.
> The slowest thing in the room is the speed one can type or point;
> models, simulations, drawings are done before you can reach for your cup.
>
> Thank You
> Larry Linder
>
> On Fri, 2018-10-12 at 23:07 -0700, Bruce Ferrell wrote:
> > On 10/12/18 8:09 PM, ~Stack~ wrote:
> > > On 10/12/2018 07:35 PM, Nico Kadel-Garcia wrote:
> > > [snip]
> > > > On SL 7? Why? Is there any reason not to use xfs? I've appreciated
> > > > the ext filesystems, I've known its original author for decades.
> > > > (He was my little brother in my fraternity!) But there's not a
> > > > compelling reason to use it in recent SL releases.
> > > Sure there is. Anyone who has to manage fluctuating disks in an LVM
> > > knows precisely why you avoid XFS - shrink an XFS-formatted LVM
> > > partition. Oh, wait. You can't. ;-)
> > >
> > > My server with EXT4 will be back online with adjusted filesystem
> > > sizes before the XFS partition has even finished backing up! It is a
> > > trivial, well-documented, and quick process to adjust an ext4
> > > filesystem.
> > >
> > > Granted, I'm in a world where people can't seem to judge how they
> > > are going to use the space on their server and frequently have to
> > > come to me needing help because they did something silly like
> > > allocate 50G to /opt and 1G to /var. *rolls eyes* (sadly that was a
> > > true event.) Adjusting filesystems for others happens far too
> > > frequently for me. At least it is easy for the EXT4 crowd.
> > >
> > > Also, I can't think of a single compelling reason to use XFS over
> > > EXT4. Supposedly XFS is great for large files of 30+ GB, but I can
> > > promise you that most of the servers and desktops I support have
> > > easily 95% of their files under 100M (and I would guess ~70% are
> > > under 1M). I know this because I help the backup team on occasion;
> > > I've seen the histograms of file size distributions.
> > >
> > > For all the arguments about performance, well, I wouldn't use either
> > > XFS or EXT4. I use ZFS and Ceph on the systems I want performance
> > > out of.
> > >
> > > Lastly (I know, a single data point): I almost never get the "help,
> > > my file system is corrupted" from the EXT4 crowd, but I've long
> > > stopped counting how many times I've heard of XFS eating files. And
> > > the few times it is EXT4, I don't worry, because the tools for
> > > recovery are mature and well tested. The best that can be said for
> > > XFS recovery tools is "Well, they are better now than they were." It
> > > still boggles my mind why it is the default FS in the EL world. But
> > > that's me. :-)
> > >
> > > ~Stack~
> >
> > The one thing I'd offer you in terms of EXT4 vs XFS: do NOT have a
> > system crash on very large filesystems (> 1 TB) with EXT4. It will
> > take days to fsck completely. Trust me on this.
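[Editor's note: for reference, the ext4 shrink ~Stack~ describes looks roughly like this. A sketch only: the volume group/LV names and the 20G target are invented for the example.]

    # shrink an ext4 filesystem living on the LV /dev/vg0/opt
    umount /opt
    e2fsck -f /dev/vg0/opt                      # required before shrinking
    lvreduce --resizefs -L 20G /dev/vg0/opt     # shrinks the fs, then the LV
    mount /dev/vg0/opt /opt

    # the XFS equivalent does not exist: xfs_growfs can only grow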
Re: After Install last physical disk is not mounted on reboot
On Fri, Oct 12, 2018 at 4:50 PM Larry Linder wrote:
>
> New System:
>
> Gigabyte Mother board.
> 32 G Ram
> 6 core AMD processor.
> ext4 FS ??

On SL 7? Why? Is there any reason not to use xfs? I've appreciated the ext filesystems, I've known its original author for decades. (He was my little brother in my fraternity!) But there's not a compelling reason to use it in recent SL releases.

> Disk:
> 0. SSD 240 G sda contains o/s

What the hell? No partitions? Where did you put the boot loader?

> 1. WD 2 TB sdb contains /usr/local & /engr, /engr/users

*Stop* putting your software and bundled directories in "/". It's a violation of the Filesystem Hierarchy Standard and will cause endless grief.

> 2. WD 2 TB sdc contains /mariadb, & company library

See above.

> 3. WD 2 TB sdd contains /backup for other machines

See above.

> 4. WD 2 TB sde contains ...
> 5. Plextor DVD
> Mother board has 6 ports.
> These are physical disks set up during install, using a manual install.
> After install is complete, system reboots, everything works but there is
> no sde present in fstab and it is not mounted.
> According to the RH website we are not exceeding any published limits.
> There is nothing about this problem with the Gigabyte MB.
>
> This system does not use logical anything or any RAID.
>
> Any clues as to what is going on?
>
> Don't know how to decode the disk definitions in fstab?
> My description is how I set it up.
>
> Thank You
> Larry Linder

And you need to run "parted -l" and "blkid" in order to unfurl the UUID associations with particular devices. "parted -l" will show you whether the devices are detected. "blkid" will show you what the relationship is between the devices and their UUIDs.

And as much as I love the author of ext-based filesystems, I no longer recommend them for default use, mostly due to their performance with directories containing many, many thousands of files or subdirectories.

> fstab#
> UUID=1aa38030-b573-4537-bc9d-83f0a9748c9b  /           ext4  defaults  1 1
> UUID=0f5a32b3-4f87-4d44-9c87-76da4ae6e5f5  /boot       ext4  defaults  1 2
> UUID=22E1-6183                             /boot/efi   vfat  umask=0077,shortname=winnt  0 0
> UUID=8137d36b-5bf7-4499-8c00-b62486bfe24b  /engr       ext4  defaults  1 2
> UUID=d46c96a4-aad3-4cf0-a73c-826f8426d553  /home       ext4  defaults  1 2
> UUID=0636e7dd-b750-44da-97af-36e8b5296030  /mc_1       ext4  defaults  1 2
> UUID=4f73b695-ef2a-4c7a-a535-88e2146d4f20  /mcdb       ext4  defaults  1 2
> UUID=6fd9460d-f036-4b67-b464-7017deb91f7d  /mc_4       ext4  defaults  1 2
> UUID=bec01426-038b-4600-8af9-7641bfd3f5cb  /mc_lib     ext4  defaults  1 2
> UUID=48907d16-332e-40dd-a1f6-5dc240cc061a  /opt        ext4  defaults  1 2
> UUID=937d8f1e-4d9c-4ed0-abbb-c37ea0336869  /tmp        ext4  defaults  1 2
> UUID=aee46348-f657-4132-87cb-7d1df890472b  /usr        ext4  defaults  1 2
> UUID=544e3db7-f670-4c5d-903a-176b05d63bcf  /usr/local  ext4  defaults  1 2
> UUID=8fca9d68-579a-475b-85f2-3ea08967cc93  /var        ext4  defaults  1 2
> UUID=2b39c434-723b-494e-8e3f-db5a8c4a1a14  swap        swap  defaults  0 0
> tmpfs   /dev/shm  tmpfs   defaults        0 0
> devpts  /dev/pts  devpts  gid=5,mode=620  0 0
> sysfs   /sys      sysfs   defaults        0 0
> proc    /proc     proc    defaults        0 0
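[Editor's note: concretely, the two commands Nico names, plus one way to cross-check their output against fstab. Output formats vary by version; /dev/sde1 is an assumption about the disk's layout.]

    parted -l    # one stanza per detected disk: model, size, partition table
    blkid        # UUID and filesystem type for every block device, e.g.
                 #   /dev/sdb1: UUID="8137d36b-..." TYPE="ext4"

    # does any fstab line reference the fifth disk's partition?
    grep "$(blkid -s UUID -o value /dev/sde1)" /etc/fstab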
Re: After Install last physical disk is not mounted on reboot
And the BIOS sees it, right?

On Fri, Oct 12, 2018, 16:50 Larry Linder wrote:
> New System:
>
> Gigabyte Mother board.
> 32 G Ram
> 6 core AMD processor.
> ext4 FS ??
>
> Disk:
> 0. SSD 240 G sda contains o/s
> 1. WD 2 TB sdb contains /usr/local & /engr, /engr/users
> 2. WD 2 TB sdc contains /mariadb, & company library
> 3. WD 2 TB sdd contains /backup for other machines
> 4. WD 2 TB sde contains ...
> 5. Plextor DVD
> Mother board has 6 ports.
> These are physical disks set up during install, using a manual install.
> After install is complete, system reboots, everything works but there is
> no sde present in fstab and it is not mounted.
> According to the RH website we are not exceeding any published limits.
>
> There is nothing about this problem with the Gigabyte MB.
>
> This system does not use logical anything or any RAID.
>
> Any clues as to what is going on?
>
> Don't know how to decode the disk definitions in fstab?
> My description is how I set it up.
>
> Thank You
> Larry Linder
>
> fstab#
> UUID=1aa38030-b573-4537-bc9d-83f0a9748c9b  /           ext4  defaults  1 1
> UUID=0f5a32b3-4f87-4d44-9c87-76da4ae6e5f5  /boot       ext4  defaults  1 2
> UUID=22E1-6183                             /boot/efi   vfat  umask=0077,shortname=winnt  0 0
> UUID=8137d36b-5bf7-4499-8c00-b62486bfe24b  /engr       ext4  defaults  1 2
> UUID=d46c96a4-aad3-4cf0-a73c-826f8426d553  /home       ext4  defaults  1 2
> UUID=0636e7dd-b750-44da-97af-36e8b5296030  /mc_1       ext4  defaults  1 2
> UUID=4f73b695-ef2a-4c7a-a535-88e2146d4f20  /mcdb       ext4  defaults  1 2
> UUID=6fd9460d-f036-4b67-b464-7017deb91f7d  /mc_4       ext4  defaults  1 2
> UUID=bec01426-038b-4600-8af9-7641bfd3f5cb  /mc_lib     ext4  defaults  1 2
> UUID=48907d16-332e-40dd-a1f6-5dc240cc061a  /opt        ext4  defaults  1 2
> UUID=937d8f1e-4d9c-4ed0-abbb-c37ea0336869  /tmp        ext4  defaults  1 2
> UUID=aee46348-f657-4132-87cb-7d1df890472b  /usr        ext4  defaults  1 2
> UUID=544e3db7-f670-4c5d-903a-176b05d63bcf  /usr/local  ext4  defaults  1 2
> UUID=8fca9d68-579a-475b-85f2-3ea08967cc93  /var        ext4  defaults  1 2
> UUID=2b39c434-723b-494e-8e3f-db5a8c4a1a14  swap        swap  defaults  0 0
> tmpfs   /dev/shm  tmpfs   defaults        0 0
> devpts  /dev/pts  devpts  gid=5,mode=620  0 0
> sysfs   /sys      sysfs   defaults        0 0
> proc    /proc     proc    defaults        0 0
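[Editor's note: a quick way to answer that question from the running system rather than the BIOS screen. Standard commands; the grep pattern is just illustrative.]

    lsblk -o NAME,SIZE,TYPE,MOUNTPOINT    # is sde enumerated at all?
    dmesg | grep -i 'ata\|sd[a-e]'        # the kernel's probe of each SATA port
    ls -l /dev/disk/by-id/                # one stable symlink per detected disk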