[CentOS] mdadm: --update=resync not understood for 1.x metadata

2023-12-03 Thread Strahil Nikolov via CentOS
Hi All,

Recently I had to update a host that uses RAID1 for /boot and /boot/efi, and I
noticed that my resync service is failing with "mdadm: --update=resync not
understood for 1.x metadata".
Does anyone know why 'mdadm-4.2-rc2.el8.x86_64.rpm' supports it but
'mdadm-4.2-13.el8.x86_64' does not?

Why wasn't the version properly bumped if such a drastic change is intentional?
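
A possible workaround, assuming the array is /dev/md0 (a hypothetical name
here), would be to trigger the resync through sysfs instead:

    # force a resync/repair pass on the array
    echo repair > /sys/block/md0/md/sync_action
    # watch progress
    cat /proc/mdstat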


Best Regards,
Strahil Nikolov
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] bash script input password automatically.

2022-07-26 Thread Strahil Nikolov via CentOS
Or just try:
echo PASS | your_script.sh
If it needs to answer multiple prompts:
echo -e 'USER\nPASS' | your_script.sh

Best Regards,
Strahil Nikolov
 
 
On Fri, Jul 22, 2022 at 23:06, Paul Heinlein wrote:
On Fri, 22 Jul 2022, Kaushal Shriyan wrote:

> Hi,
>
> I have the below commands to generate keystore.pkcs12 and keystore.jks
> files on CentOS Linux release 7.9.2009 (Core)
>
> openssl pkcs12 -export -clcerts -in fullchain1.pem -inkey privkey1.pem -out
> keystore.pkcs12 -name javasso
> keytool -importkeystore -srckeystore keystore.pkcs12 -srcstoretype pkcs12
> -destkeystore keystore.jks -deststoretype jks -alias javasso
>
> I have created a small shell script to generate both keystore.pkcs12 and
> keystore.jks files. It prompts for a password. Is there a way to key in the
> password non-interactively, without a prompt?
> For example, the password is stored in a file and the bash script sources it
> instead of someone manually typing the password.
>
> Please suggest. Thanks in advance.

See the "PASS PHRASE ARGUMENTS" section of the openssl(1) man page for 
the various ways openssl can get a password.
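
For openssl, one way (a minimal sketch; the file name is just a placeholder)
is to read the export password from a file via those pass phrase arguments:

    # password stored in a file readable only by the owner
    echo 'MySecret' > /root/ks-pass.txt
    chmod 600 /root/ks-pass.txt
    openssl pkcs12 -export -clcerts -in fullchain1.pem -inkey privkey1.pem \
        -out keystore.pkcs12 -name javasso -passout file:/root/ks-pass.txt
    # keytool can take the store passwords on the command line
    keytool -importkeystore -srckeystore keystore.pkcs12 -srcstoretype pkcs12 \
        -srcstorepass MySecret -destkeystore keystore.jks -deststoretype jks \
        -deststorepass MySecret -alias javasso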

-- 
Paul Heinlein
heinl...@madboa.com
45°22'48" N, 122°35'36" W
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos
  
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] SELINUX blocks procmail from executing perl script without logging

2021-04-03 Thread Strahil Nikolov via CentOS
Have you checked with 'semodule -DB'?
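
A quick sketch of that workflow (assuming the denial is hidden by a dontaudit
rule):

    # temporarily disable dontaudit rules and rebuild the policy
    semodule -DB
    # reproduce the problem, then look for the previously silent denials
    ausearch -m avc -ts recent
    # restore the default policy (re-enable dontaudit rules)
    semodule -B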
Source: Chapter 5. Troubleshooting problems related to SELinux, Red Hat
Enterprise Linux 8 | Red Hat Customer Portal

Best Regards,
Strahil Nikolov
 
On Thu, Apr 1, 2021 at 14:43, Radu Radutiu wrote:
Hi,

I'm upgrading our request tracker from Centos 7 to 8 and found some
unexpected SELINUX issues with procmail. Even after I create a policy which
allows all denied operations, procmail is still not allowed to run a perl
script (in my case rt-mailgate). I get the following error in the procmail
log: "Can't open perl script "/opt/rt5/bin/rt-mailgate": Permission denied"
but I have no denied audit entry in /var/log/audit/audit.log.
If I set selinux to permissive, everything works fine. Any idea how to
debug this?

Best regards,
Radu
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos
  
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Migrating to nvme and expand

2021-03-31 Thread Strahil Nikolov via CentOS
Are you using the MD devices as Physical Volumes? If yes, then create a PV from
the NVMe and then pvmove.
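
A minimal sketch of that approach (device and VG names are assumptions; adjust
to your layout):

    # create a PV on the new NVMe disk and add it to the volume group
    pvcreate /dev/nvme0n1p1
    vgextend myvg /dev/nvme0n1p1
    # move all extents off the old MD PV, then drop it from the VG
    pvmove /dev/md126 /dev/nvme0n1p1
    vgreduce myvg /dev/md126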
Best Regards,
Strahil Nikolov
 
 
On Wed, Mar 31, 2021 at 21:16, Jerry Geis wrote:
I have older SATA disks (2 of them, size 2T) in a software raid config
running CentOS 7.

/dev/md127 is / and xfs
/dev/sda2 is swap
/dev/md126 is /home and xfs

I desire to get a new (single) NVME 4T disk.

What is the correct way to copy the software raid to a single "new" NVMe
disk, and then expand the /home file system to all the remaining space?

Thanks,

Jerry
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos
  
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


[CentOS] Anyone running CentOS 8 on ML350p Gen8

2021-03-21 Thread Strahil Nikolov via CentOS
Hi All,
Has anyone managed to run CentOS 8 (Stream or not) on an ML350p Gen8? Did you
experience any hiccups, or did you have to apply workarounds to make it work?

Thanks in advance.
Best Regards,
Strahil Nikolov
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Expand XFS filesystem on CentOS Linux release 8.2.2004 (Core)

2021-03-14 Thread Strahil Nikolov via CentOS
True. I wiped a VM this way, years ago.

Best Regards,
Strahil Nikolov
 
On Sun, Mar 14, 2021 at 20:05, Simon Matter wrote:
> I'm constantly using fdisk on GPT and everything has been fine.
> Best Regards,Strahil Nikolov

That's only true in recent times, because in the past fdisk didn't support
GPT at all. Back then you had to use tools like parted.

Simon

>
>
> On Fri, Mar 12, 2021 at 15:30, Simon Matter wrote:
>> Hi,
>>
>> Is there a way to expand xfs filesystem /dev/nvme0n1p2 which is 7.8G and
>> occupy the remaining free disk space of 60GB?
>>
>> [root@ip-10-0-0-218 centos]# df -hT --total
>> Filesystem    Type      Size  Used Avail Use% Mounted on
>> devtmpfs      devtmpfs  1.7G    0  1.7G  0% /dev
>> tmpfs          tmpfs    1.7G    0  1.7G  0% /dev/shm
>> tmpfs          tmpfs    1.7G  23M  1.7G  2% /run
>> tmpfs          tmpfs    1.7G    0  1.7G  0% /sys/fs/cgroup
>> /dev/nvme0n1p2 xfs      7.8G  7.0G  824M  90% /
>> expand /dev/nvme0n1p2 which is 7.8G and occupy the remaining free disk
>> space of 60GB.
>> /dev/nvme0n1p1 vfat      599M  6.4M  593M  2% /boot/efi
>> tmpfs          tmpfs    345M    0  345M  0% /run/user/1000
>> total          -          16G  7.0G  8.5G  46% -
>> [root@ip-10-0-0-218 centos]# fdisk -l
>> GPT PMBR size mismatch (20971519 != 125829119) will be corrected by
>> write.
>> The backup GPT table is not on the end of the device. This problem will
>> be
>> corrected by write.
>
> How did you end up in this situation? Did you copy the data from a smaller
> disk to this 60G disk?
>
>> Disk /dev/nvme0n1: 60 GiB, 64424509440 bytes, 125829120 sectors
>> Units: sectors of 1 * 512 = 512 bytes
>> Sector size (logical/physical): 512 bytes / 512 bytes
>> I/O size (minimum/optimal): 512 bytes / 512 bytes
>> Disklabel type: gpt
>> Disk identifier: E97B9FFA-2C13-474E-A0E4-ABF1572CD20C
>>
>> Device            Start      End  Sectors  Size Type
>> /dev/nvme0n1p1    2048  1230847  1228800  600M EFI System
>> /dev/nvme0n1p2  1230848 17512447 16281600  7.8G Linux filesystem
>> /dev/nvme0n1p3 17512448 17514495    2048    1M BIOS boot
>
> Looks like you could move p3 to the end of the disk and then enlarge p2
> and then grow the XFS on it.
>
> I'm not sure it's a good idea to use fdisk on a GPT disk. At least in the
> past this wasn't supported and I don't know how much has changed here. I
> didn't touch a lot of GPT systems yet, and where I did I felt frightened
> by the whole EFI stuff :)
>
> Regards,
> Simon
>
> ___
> CentOS mailing list
> CentOS@centos.org
> https://lists.centos.org/mailman/listinfo/centos
>
>


  
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Expand XFS filesystem on CentOS Linux release 8.2.2004 (Core)

2021-03-14 Thread Strahil Nikolov via CentOS
I'm constantly using fdisk on GPT and everything has been fine.
Best Regards,
Strahil Nikolov
 
 
On Fri, Mar 12, 2021 at 15:30, Simon Matter wrote:
> Hi,
>
> Is there a way to expand xfs filesystem /dev/nvme0n1p2 which is 7.8G and
> occupy the remaining free disk space of 60GB?
>
> [root@ip-10-0-0-218 centos]# df -hT --total
> Filesystem    Type      Size  Used Avail Use% Mounted on
> devtmpfs      devtmpfs  1.7G    0  1.7G  0% /dev
> tmpfs          tmpfs    1.7G    0  1.7G  0% /dev/shm
> tmpfs          tmpfs    1.7G  23M  1.7G  2% /run
> tmpfs          tmpfs    1.7G    0  1.7G  0% /sys/fs/cgroup
> /dev/nvme0n1p2 xfs      7.8G  7.0G  824M  90% /
> expand /dev/nvme0n1p2 which is 7.8G and occupy the remaining free disk
> space of 60GB.
> /dev/nvme0n1p1 vfat      599M  6.4M  593M  2% /boot/efi
> tmpfs          tmpfs    345M    0  345M  0% /run/user/1000
> total          -          16G  7.0G  8.5G  46% -
> [root@ip-10-0-0-218 centos]# fdisk -l
> GPT PMBR size mismatch (20971519 != 125829119) will be corrected by write.
> The backup GPT table is not on the end of the device. This problem will be
> corrected by write.

How did you end up in this situation? Did you copy the data from a smaller
disk to this 60G disk?

> Disk /dev/nvme0n1: 60 GiB, 64424509440 bytes, 125829120 sectors
> Units: sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disklabel type: gpt
> Disk identifier: E97B9FFA-2C13-474E-A0E4-ABF1572CD20C
>
> Device            Start      End  Sectors  Size Type
> /dev/nvme0n1p1    2048  1230847  1228800  600M EFI System
> /dev/nvme0n1p2  1230848 17512447 16281600  7.8G Linux filesystem
> /dev/nvme0n1p3 17512448 17514495    2048    1M BIOS boot

Looks like you could move p3 to the end of the disk and then enlarge p2
and then grow the XFS on it.
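
A minimal sketch of that sequence (assuming the BIOS boot partition p3 has
first been recreated at the end of the disk, and that growpart from
cloud-utils-growpart is available):

    # move the backup GPT header to the actual end of the enlarged disk
    sgdisk -e /dev/nvme0n1
    # grow partition 2 into the free space, then grow the XFS on it
    growpart /dev/nvme0n1 2
    xfs_growfs /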

I'm not sure it's a good idea to use fdisk on a GPT disk. At least in the
past this wasn't supported and I don't know how much has changed here. I
didn't touch a lot of GPT systems yet, and where I did I felt frightened
by the whole EFI stuff :)

Regards,
Simon

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos
  
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] el7 systemd service:: ensure var/log owner when User is specified

2021-02-11 Thread Strahil Nikolov via CentOS
Is there any reason to use a service file to create the logs? After all, we've
got systemd-tmpfiles for that purpose.
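
A minimal sketch (path, owner, and mode are placeholders for illustration):

    # /etc/tmpfiles.d/myapp.conf
    # type  path            mode  user   group  age
    d       /var/log/myapp  0750  myapp  myapp  -

    # apply immediately instead of waiting for a reboot
    systemd-tmpfiles --create /etc/tmpfiles.d/myapp.conf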
Best Regards,
Strahil Nikolov
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] smba shares are unreachable once or twice per month

2021-02-09 Thread Strahil Nikolov via CentOS
Have you checked this one: 
https://wiki.samba.org/index.php/Setting_up_Samba_as_a_Domain_Member
Maybe you missed something.
Best Regards,
Strahil Nikolov
 
 
On Tue, Feb 9, 2021 at 7:21, Prengel, Ralf wrote:
Hello,
I have a problem with my Samba server.
We are using CentOS 7.7 with Samba exporting shares. Logins
are managed by our Windows AD. The Samba server's only function is to
export shares.
Windows clients are Win10, patched every month.
Our problem:
Once or twice a month the shares are no longer active and reachable for
the Windows clients.
The network cannot be the reason, because the same directory exported via NFS
is reachable for all Linux NFS clients.
Normally we disconnect and reconnect the Samba server in the AD, and after a
reboot the shares work again, but that can't be the solution ;-)
Can anyone give me hints on how to analyse and solve the problem?

Thanks, Ralf

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos
  
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Resizing a QEMU guest display

2021-02-09 Thread Strahil Nikolov via CentOS
Have you restarted the service? Yesterday I built a CentOS 7 system and spice
didn't resize until I restarted the service.
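
A quick check on the guest (a sketch; spice-vdagentd is the agent service name
shipped by the spice-vdagent package, assuming a systemd-based guest):

    systemctl status spice-vdagentd
    systemctl restart spice-vdagentd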
Best Regards,
Strahil Nikolov
 
 
On Mon, Feb 8, 2021 at 14:40, Bill Gee wrote:
Hi Strahil -

Thanks for the link.  Unfortunately it is not helpful.  The spice agent is 
already installed on the guest, and the spice channel is already configured.  

[bgee@practice21a ~]$ rpm -qa | grep spice 
spice-vdagent-0.20.0-3.fc33.x86_64

A question occurs to me ...  The working guest also has both the spice-vdagent 
package and the spice channel.  Why does it work and the other guest does not?

-- 
Bill Gee



On Monday, February 8, 2021 2:12:39 AM CST Strahil Nikolov wrote:
> I think the following is a good start:
> 11.3. SPICE Agent, Red Hat Enterprise Linux 7 | Red Hat Customer Portal
> Best Regards,
> Strahil Nikolov
>  
>  
> On Sun, Feb 7, 2021 at 20:29, Bill Gee wrote:
> Hi Strahil -
> 
> How does one reach a guest via spice?  I am not familiar with any remote 
> access application called "spice".  I have tried two methods of accessing the 
> host.  First is to open it from the Virtual Machine Manager on the host.  
> Second is to use TigerVNC to access it across the local network.
> 
> I see a package installed on the guest which looks like the guest agent.  Is 
> there more that needs to be added?  This is the only qemu package installed 
> on both the Fedora machine that does not resize and the CentOS7 machine that 
> will resize.  The versions are way different between the two guests.
> 
> [root@practice21a ~]# rpm -qa | grep qemu 
> qemu-guest-agent-5.1.0-9.fc33.x86_64
> 
> The Fedora guest has, as you see, version 5.1.0-9.  The CentOS7 guest has 
> version 2.12.0-3.  Perhaps the Fedora guest version is too new to run on a 
> CentOS7 host??
> 
> I see other QEMU packages available.  One of them is called 
> "qemu-device-display-qxl" which is very suggestive.  However, that package is 
> NOT installed on the CentOS7 guest and yet that guest works.
> 
> Another package I see is "libvirt-daemon-driver-qemu".  This package is 
> installed on the host but is not present on either guest.  Is it needed on 
> guests?
> 
> > Have you tried to reach the VM via spice? Also check if qemu's guest agent
> > is running in the VM.
> > 
> > Best Regards,Strahil Nikolov
> 
> ___
> CentOS mailing list
> CentOS@centos.org
> https://lists.centos.org/mailman/listinfo/centos
>  
> 

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos
  
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Resizing a QEMU guest display

2021-02-08 Thread Strahil Nikolov via CentOS
I think the following is a good start:
11.3. SPICE Agent, Red Hat Enterprise Linux 7 | Red Hat Customer Portal
Best Regards,
Strahil Nikolov
 
 
On Sun, Feb 7, 2021 at 20:29, Bill Gee wrote:
Hi Strahil -

How does one reach a guest via spice?  I am not familiar with any remote access 
application called "spice".  I have tried two methods of accessing the host.  
First is to open it from the Virtual Machine Manager on the host.  Second is to 
use TigerVNC to access it across the local network.

I see a package installed on the guest which looks like the guest agent.  Is 
there more that needs to be added?  This is the only qemu package installed on 
both the Fedora machine that does not resize and the CentOS7 machine that will 
resize.  The versions are way different between the two guests.

[root@practice21a ~]# rpm -qa | grep qemu 
qemu-guest-agent-5.1.0-9.fc33.x86_64

The Fedora guest has, as you see, version 5.1.0-9.  The CentOS7 guest has 
version 2.12.0-3.  Perhaps the Fedora guest version is too new to run on a 
CentOS7 host??

I see other QEMU packages available.  One of them is called 
"qemu-device-display-qxl" which is very suggestive.  However, that package is 
NOT installed on the CentOS7 guest and yet that guest works.

Another package I see is "libvirt-daemon-driver-qemu".  This package is 
installed on the host but is not present on either guest.  Is it needed on 
guests?

-- 
Bill Gee



On Sunday, February 7, 2021 1:18:14 AM CST Strahil Nikolov wrote:
> Have you tried to reach the VM via spice? Also check if qemu's guest agent is
> running in the VM.
> 
> Best Regards,
> Strahil Nikolov

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos
  
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


[CentOS] Recommendation for 10 gigabit NICs on CentOS8

2021-02-07 Thread Strahil Nikolov via CentOS
Hi All,


Can you share which old NICs you use on CentOS 8 (Stream or
not, it doesn't matter) without any issues?
I was looking at eBay and found some pretty old Mellanox "ConnectX"
or "ConnectX-2" cards, but I seriously doubt they will work on CentOS 8.

Any proposals are also welcome. I don't care about the brand as long as it
is PCIe and supported by the vanilla kernel.

Best Regards,
Strahil Nikolov

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Resizing a QEMU guest display

2021-02-06 Thread Strahil Nikolov via CentOS
Have you tried to reach the VM via spice? Also check if qemu's guest agent is
running in the VM.

Best Regards,
Strahil Nikolov
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Filesystem choice for BackupPC extrenal drive

2021-02-04 Thread Strahil Nikolov via CentOS
XFS is suited to parallel workloads, so I would pick ext4 for this case.
Here is a quote from https://access.redhat.com/articles/3129891 :
Another way to characterize this is that the Ext4 file system variants tend to 
perform better on systems that have limited I/O capability. Ext3 and Ext4 
perform better on limited bandwidth (< 200MB/s) and up to ~1,000 IOPS 
capability. For anything with higher capability, XFS tends to be faster. 


Best Regards,
Strahil Nikolov

Sent from Yahoo Mail on Android 
 
On Fri, Feb 5, 2021 at 2:42, Kenneth Porter wrote:
I'm setting up a CentOS 7 box as a BackupPC 4 server to back up Windows 
boxes on my LAN. I'm using an external 1.5 TB USB drive for the "pool". 
BackupPC deduplicates by saving all files in a pool, a directory hiearchy 
with each file named for the checksum of the file, and the directories 
acting as a hash tree to reach each pool file. A backup for a specific 
workstation is a directory tree of checksums and metadata that point into 
the pool for the actual file data. Incremental backups are reverse deltas 
from periodic "filled" backups of all files. I'm using rsyncd to pull 
changed files from the workstations.

I'm deciding which filesystem to use for my external drive. I'm thinking 
the main candidates are ext4 and xfs. What's the best filesystem for this 
application?



Repo for CentOS 7 users:




___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos
  
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Infiniband special ops?

2021-01-25 Thread Strahil Nikolov via CentOS
I have never played with Infiniband, but I think that those cards most
probably offer some checksum offloading capabilities.
Have you explored in that direction and tested with checksum offloading
enabled?
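
A quick way to look at that (a sketch; offload names vary by driver, and not
all of them apply to IPoIB):

    # show the current offload settings on the interface
    ethtool -k ib1
    # example: toggle checksum offloads on
    ethtool -K ib1 rx on tx on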

Best Regards,
Strahil Nikolov

At 15:49 on 25.01.2021 (Mon), lejeczek via CentOS wrote:
> 
> On 22/01/2021 00:33, Steven Tardy wrote:
> > On Thu, Jan 21, 2021 at 6:34 PM lejeczek via CentOS 
> > mailto:centos@centos.org>> wrote:
> > 
> > Hi guys.
> > 
> > Hoping some net experts may stumble upon this message:
> > I have
> > an IPoIB direct host-to-host connection and:
> > 
> > -> $ ethtool ib1
> > Settings for ib1:
> >  Supported ports: [  ]
> >  Supported link modes:   Not reported
> >  Supported pause frame use: No
> >  Supports auto-negotiation: No
> >  Supported FEC modes: Not reported
> >  Advertised link modes:  Not reported
> >  Advertised pause frame use: No
> >  Advertised auto-negotiation: No
> >  Advertised FEC modes: Not reported
> >  Speed: 4Mb/s
> >  Duplex: Full
> >  Auto-negotiation: on
> >  Port: Other
> >  PHYAD: 255
> >  Transceiver: internal
> >  Link detected: yes
> > 
> > and that's both ends, both hosts, yet:
> > 
> >  > $ iperf3 -c 10.5.5.97
> > Connecting to host 10.5.5.97, port 5201
> > [  5] local 10.5.5.49 port 56874 connected to
> > 10.5.5.97 port
> > 5201
> > [ ID] Interval   Transfer Bitrate
> > Retr Cwnd
> > [  5]   0.00-1.00   sec  1.36 GBytes  11.6 Gbits/sec0
> > 2.50 MBytes
> > [  5]   1.00-2.00   sec  1.87 GBytes  16.0 Gbits/sec0
> > 2.50 MBytes
> > [  5]   2.00-3.00   sec  1.84 GBytes  15.8 Gbits/sec0
> > 2.50 MBytes
> > [  5]   3.00-4.00   sec  1.83 GBytes  15.7 Gbits/sec0
> > 2.50 MBytes
> > [  5]   4.00-5.00   sec  1.61 GBytes  13.9 Gbits/sec0
> > 2.50 MBytes
> > [  5]   5.00-6.00   sec  1.60 GBytes  13.8 Gbits/sec0
> > 2.50 MBytes
> > [  5]   6.00-7.00   sec  1.56 GBytes  13.4 Gbits/sec0
> > 2.50 MBytes
> > [  5]   7.00-8.00   sec  1.52 GBytes  13.1 Gbits/sec0
> > 2.50 MBytes
> > [  5]   8.00-9.00   sec  1.52 GBytes  13.1 Gbits/sec0
> > 2.50 MBytes
> > [  5]   9.00-10.00  sec  1.52 GBytes  13.1 Gbits/sec0
> > 2.50 MBytes
> > - - - - - - - - - - - - - - - - - - - - - - - - -
> > [ ID] Interval   Transfer Bitrate Retr
> > [  5]   0.00-10.00  sec  16.2 GBytes  13.9 Gbits/sec
> > 0 sender
> > [  5]   0.00-10.00  sec  16.2 GBytes  13.9
> > Gbits/sec  receiver
> > 
> > It's rather an oldish platform which hosts the link; PCIe is
> > only 2.0, but with an x8 link it should be able to carry
> > more than ~13 Gbit/s.
> > Infiniband is Mellanox's ConnectX-3.
> > 
> > Any thoughts on how to track the bottleneck or any
> > thoughts
> > 
> > 
> > 
> > Care to capture (a few seconds) of the *sender* side .pcap?
> > Often TCP receive window is too small or packet loss is to 
> > blame or round-trip-time.
> > All of these would be evident in the packet capture.
> > 
> > If you do multiple streams with the `-P 8` flag does that 
> > increase the throughput?
> > 
> > Google says these endpoints are 1.5ms apart:
> > 
> > (2.5 megabytes) / (13 Gbps) =
> > 1.53846154 milliseconds
> > 
> > 
> > 
> Seems that the platform overall might not be enough. That
> bitrate goes down even further when the CPUs are fully loaded &
> occupied.
> (I'll try to keep on investigating)
> 
> What I'm trying next is to have both ports (a dual-port card)
> "teamed" by NM, with runner set to broadcast. I'm leaving
> out "p-key", which NM sets to "default" (which is working with
> a "regular" IPoIB connection).
> RHEL's "networking guide" docs say "...create a team from 
> two or more Wired or InfiniBand connections..."
> When I try to stand up such a team, master starts but 
> slaves, both, fail with:
> "...
>   [1611588576.8887] device (ib1): Activation: starting 
> connection 'team1055-slave-ib1' 
> (900d5073-366c-4a40-8c32-ac42c76f9c2e)
>   [1611588576.8889] device (ib1): state change: 
> disconnected -> prepare (reason 'none', sys-iface-state: 
> 'managed')
>   [1611588576.8973] device (ib1): state change: 
> prepare -> config (reason 'none', sys-iface-state: 'managed')
>   [1611588576.9199] device (ib1): state change: config 
> -> ip-config (reason 'none', sys-iface-state: 'managed')
>   [1611588576.9262] device (ib1): Activation: 
> connection 'team1055-slave-ib1' could not be enslaved
>   [1611588576.9272] device (ib1): state change: 
> ip-config -> failed (reason 'unknown', sys-iface-state: 
> 'managed')
>   [1611588576.9280] device (ib1): released from master 
> device nm-team
>   [1611589045.6268] device (ib1): carrier: link connected
> ..."
> 
> Any suggestions also 

Re: [CentOS] RHEL changes

2021-01-23 Thread Strahil Nikolov via CentOS
At 18:42 +0100 on 22.01.2021 (Fri), Nicolas Kovacs wrote:
> Le 22/01/2021 à 18:04, Valeri Galtsev a écrit :
> > I tried SUSE maybe 2-3 years later than you (around 2003). The
> > first thing I
> > disliked was: they have yast on top of standard configurations.
> > First of
> > all, it is quite unpleasant to deal with: infinitely long single
> > file
> > containing all configs. Next, you change one single thing, and yast
> > to
> > enable your change touches all config files.
You need to create extra ".local" files to preserve your
customizations. Totally different from RHEL.


> All the hardcore distribution users out there (Slackware, Arch,
> Gentoo, Crux,
> FreeBSD) like to make fun of YaST.
> 
> Ever tried to connect any Linux or BSD desktop to an LDAPS server
> running Red
> Hat Directory Server for authentication?
> 
> With YaST it's done in less than 30 seconds in half a dozen mouse
> clicks, and
> it JustWorks(tm).

I can confirm that YaST is quite powerful, and I wish it were like
'smitty' (AIX) and allowed you to invoke it with command-line params.

openSUSE has one big benefit which we do not have with CentOS -> you
can upgrade your openSUSE to pure SUSE (if you need a subscription) and
you will be fully supported. RH refused to do that with CentOS - it's always
a reinstall.

Also, openSUSE/SUSE introduced booting and reverting from a snapshot.
Now RH is on the same path with the "BOOM Boot Manager".

Best Regards,
Strahil Nikolov

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] RHEL changes

2021-01-23 Thread Strahil Nikolov via CentOS
At 12:12 +0100 on 22.01.2021 (Fri), Ljubomir Ljubojevic wrote:
> On 1/22/21 9:29 AM, Marc Balmer via CentOS wrote:
> > > Hence it is as good as dead in my mind when looking into the
> > > future, I
> > > am looking for future distro of choice.
> > 
> > A little mentioned choice would be openSUSE, which is direction I
> > am taking.
> 
> I do not like a system where a configuration app can overwrite manually
> set config.
That's why you need to use ".local" files for most of the configs to preserve
your settings. SUSE is not another RH clone and it has its own
specifics.

Best Regards,
Strahil Nikolov

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] btrfs to ext4

2021-01-19 Thread Strahil Nikolov via CentOS
> Does anyone know if it's possible to convert BTRFS partitions to
> ext4?

I think that you can convert ext to btrfs, but not the opposite.

The cleanest way is to reinstall.

Best Regards,
Strahil Nikolov

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] centos-release-gluster8 for centos 7

2021-01-18 Thread Strahil Nikolov via CentOS
Last time I asked, there was no reply - which means that most probably v8 is
not yet built for C7.

Best Regards,
Strahil Nikolov






On Monday, 18 January 2021, 15:04:12 GMT+2, Nicolas Zuber wrote:





Hi all,

We have some CentOS 7 servers running with gluster 7 installed. We are
using the packages provided by the storage SIG
(centos-release-gluster7). "yum search centos-release-gluster" shows
centos-release-gluster7 as the latest gluster version provided. But
according to https://wiki.centos.org/SpecialInterestGroup/Storage there
should already be packages for the gluster 8 release available.

So I am wondering what I can do to be able to install glusterfs 8 from 
the storage SIG.

Thanks,
Nicolas
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Reboot/shutdown without login

2021-01-11 Thread Strahil Nikolov via CentOS


> That makes perfect sense in a company data room. My situation is a 
> roommate that wants to power if off to sleep and I've left for an 
> emergency and didn't have time to power it down myself.

Usually pressing the power button should do the trick.
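
The power key is handled by systemd-logind, so a minimal sketch for adjusting
that behaviour (poweroff is also the usual default):

    # /etc/systemd/logind.conf
    [Login]
    HandlePowerKey=poweroff

    # pick up the change
    systemctl restart systemd-logind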

Best Regards,
Strahil Nikolov

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] CentOS Stream suitability as a production webserver

2021-01-06 Thread Strahil Nikolov via CentOS
> At the moment my question possibly would have been better phrased "Why isn't
> Stream a suitable platform for a production web server".

It is, but expect rough edges.
The differences will be:
- Shorter lifetime. If you skip the first 2 minor releases, it will be even shorter.
- No chance to "yum history undo last", as there are no older packages. You
have to use the Boom boot manager to roll back OS updates.
- More testing is needed, as the chance that someone broke something is bigger.

Best Regards,
Strahil Nikolov
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] CentOS Stream suitability as a production webserver

2021-01-05 Thread Strahil Nikolov via CentOS


> We will need to (manually) migrate to Stream 9.x after 5 years
> instead of
> 10 though?

Most probably after 3 years. Currently Stream should be equal to RHEL
8.4.


Best Regards,
Strahil Nikolov

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] CentOS Stream suitability as a production webserver

2021-01-05 Thread Strahil Nikolov via CentOS
> Given we are not developing drivers or applications (other than
> websites
> and web applications), is the change a non-issue for my use-case? 
If you decide to go with Stream, you will need to test each version
carefully and use some kind of repository management, as there will be
no older versions of the packages.
Thankfully the 'Boom boot manager' is now fully working, so you can
easily roll back an OS update.

If you decide that you don't want to fight with updates and the short
life cycle of Stream, there are plenty of clones available:
- Springdale Linux
- Oracle Enterprise Linux

And 2 more expected to come:
- Rocky Linux (from Gregory Kurtzer, founder of the original CentOS)
- Lenix (backed by CloudLinux)

I would prefer the full lifecycle of a RHEL clone instead of Stream.

Best Regards,
Strahil Nikolov

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] rare but repeating system crash in C7

2021-01-04 Thread Strahil Nikolov via CentOS
Verify that:
1. Autofs is not running
2. Systemd has created '.mount' and '.automount' units
systemctl status mnt-backup.mount mnt-backup.automount
systemctl cat mnt-backup.mount mnt-backup.automount

3. Verify that there are no errors in local-fs.target
systemctl status local-fs.target

4. Check for errors via:
mount -a
journalctl -e

Best Regards,
Strahil Nikolov





On Monday, 4 January 2021, 01:29:25 GMT+2, Fred wrote:





OK, I think I've got it set up as described here, while fixing the
misplaced fields in /etc/fstab:

UUID=259ec5ea-e8a4-465a-9263-1c06217b9aaf      /mnt/backup    ext4
x-systemd.automount,x-systemd.idle-timeout=15min,noauto 0      2

now when I do, e.g., "ls /mnt/backup"

I get:

$ sudo !!
sudo ls /mnt/backup
ls: cannot open directory /mnt/backup: No such file or directory

if I do:

ls /mnt

I see:

backup

use su to become root, then:
ls -l /mnt shows:

# ls -al
total 4
drwxr-xr-x.  3 root root    0 Jan  2 13:24 .
dr-xr-xr-x. 21 root root 4096 Jan  2 09:22 ..
dr-xr-xr-x.  2 root root    0 Jan  2 13:24 backup

ls backup shows:

# ls -al backup
ls: cannot open directory backup: No such file or directory

why? it clearly appears to exist 

the FS isn't mounted, but /mnt/backup exists, so it should be visible as an
entry directory. also, I can mount it manually:

mount UUID=259ec5ea-e8a4-465a-9263-1c06217b9aaf      /mnt/backup

and then access it. but it doesn't automount with, e.g. "ls /mnt/backup" or
"ls /mnt/backup/backups".

I must still be doing something wrong but maybe I'm too stupid to see it.
(Please don't agree with me publicly...! :=) )

Fred

On Sun, Jan 3, 2021 at 4:36 PM Pete Biggs  wrote:

> >
> > I commented out those entries in /etc/auto.master before modifying the
> > fstab entry:
> >
> > UUID=259ec5ea-e8a4-465a-9263-1c06217b9aaf      /mnt/backup
> > ext4,x-systemd.automount,x-systemd.idle-timeout=15min  noauto  0      2
>
> That's not correct.  See 'man fstab'. It should be
>
>    device  mount-point  filesystem-type  options  dump  fsck
>
> So you should have:
>
> UUID=259ec5ea-e8a4-465a-9263-1c06217b9aaf  /mnt/backup  ext4
>  x-systemd.automount,x-systemd.idle-timeout=15min,noauto 0 2
>
>
> >
> > which is exactly as it was before except for the x-systemd entries as you
> > described.
>
> Yeah, you put them in the wrong place.
>
>
> P.
>
>
> ___
> CentOS mailing list
> CentOS@centos.org
> https://lists.centos.org/mailman/listinfo/centos

>
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] rare but repeating system crash in C7

2021-01-03 Thread Strahil Nikolov via CentOS
Reboot is not necessary as long as local-fs.target is restarted, but a fix for 
the /etc/fstab might be needed.


Best Regards,
Strahil Nikolov






On Sunday, 3 January 2021, 21:18:30 GMT+2, Simon Matter wrote:





> $ cat /etc/centos-release
> CentOS Linux release 7.9.2009 (Core)
>
> $ sudo systemctl status mnt-backup.mount mnt-backup.automount
> [sudo] password for fredex:
> ● mnt-backup.mount - /mnt/backup
>    Loaded: loaded (/etc/fstab; bad; vendor preset: disabled)
>    Active: active (mounted) since Sat 2021-01-02 22:20:05 EST; 14h ago
>    Where: /mnt/backup
>      What: /dev/sdc1
>      Docs: man:fstab(5)
>            man:systemd-fstab-generator(8)
>    Tasks: 0
>
> ● mnt-backup.automount
>    Loaded: loaded
>    Active: inactive (dead)
>    Where: /mnt/backup
> [fredex@fcshome Desktop]$ systemctl cat mnt-backup.mount
> mnt-backup.automount
> No files found for mnt-backup.automount.
> # /run/systemd/generator/mnt-backup.mount
> # Automatically generated by systemd-fstab-generator
>
> [Unit]
> SourcePath=/etc/fstab
> Documentation=man:fstab(5) man:systemd-fstab-generator(8)
> RequiresOverridable=systemd-fsck@dev-disk-by
> \x2duuid-259ec5ea\x2de8a4\x2d465a\x2
> After=systemd-fsck@dev-disk-by
> \x2duuid-259ec5ea\x2de8a4\x2d465a\x2d9263\x2d1c062
>
> [Mount]
> What=/dev/disk/by-uuid/259ec5ea-e8a4-465a-9263-1c06217b9aaf
> Where=/mnt/backup
> Type=ext4
> Options=noauto
>
> the fstab statement I put in my last posting was a copy/paste from
> /etc/fstab, so it should be correct as shown. I don't see a comma before
> noauto.
>

Did you already try a reboot?

Don't ask me why I ask this.

Regards,
Simon

>
>
> On Sun, Jan 3, 2021 at 11:42 AM Strahil Nikolov 
> wrote:
>
>> Are you still on 7.6 ? I recently discovered that a bug in sysstat was
>> fixed in 7.7 that prevented autofs from umounting the filesystem.
>>
>> The following should show if it's taking into action:
>> systemctl status mnt-backup.mount mnt-backup.automount
>> systemctl cat mnt-backup.mount mnt-backup.automount
>>
>>
>> Are you sure that you got no "," before that "noauto" ?
>>
>> Best Regards,
>> Strahil Nikolov
>>
>>
>>
>>
>>
>>
>> On Sunday, 3 January 2021, 16:25:47 GMT+2, Fred <
>> fred.fre...@gmail.com> wrote:
>>
>>
>>
>>
>>
>> Strahil:
>>
>> I WAS using that, but the automatic umount never worked, leaving it
>> mounted all the time.
>>
>> I commented out those entries in /etc/auto.master before modifying the
>> fstab entry:
>>
>> UUID=259ec5ea-e8a4-465a-9263-1c06217b9aaf      /mnt/backup
>> ext4,x-systemd.automount,x-systemd.idle-timeout=15min  noauto  0
>> 2
>>
>> which is exactly as it was before except for the x-systemd entries as
>> you
>> described.
>>
>> and the peculiar thing is it STILL does not automount. and yes, I did do
>> systemctl restart local-fs.target.
>>
>> do I need to reboot (or something simpler, maybe) to fully disable the
>> auto.master stuff?
>>
>> Thanks again!
>>
>> Fred
>>
>> On Sun, Jan 3, 2021 at 5:54 AM Strahil Nikolov via CentOS <
>> centos@centos.org> wrote:
>> > Hi Fred,
>> >
>> > do you use automatic umount for the map in /etc/auto.master
>> (--timeout) ?
>> >
>> > If yes, then the systemd mount options probably won't help.
>> >
>> > Best Regards,
>> > Strahil Nikolov
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> On Sunday, 3 January 2021, 04:27:17 GMT+2, Fred <
>> fred.fre...@gmail.com> wrote:
>> >
>> >
>> >
>> >
>> >
>> > Yeah, and the instructions for setting RAID-1 or RAID-0 have the
>> switch
>> > positions exactly reversed.
>> >
>> > Strahil: I'm using autofs to automount the unit. but just turned that
>> off
>> > and enabled the xsystemd.automount in fstab, we'll see how that works.
>> >
>> > Fred
>> >
>> >
>> > On Sat, Jan 2, 2021 at 4:11 PM Warren Young 
>> wrote:
>> >
>> >> On Jan 2, 2021, at 11:17 AM, Fred  wrote:
>> >> >
>> >> > I assume that the yottamaster device runs Linux, just like 99% of
>> other
>> >> > such devices.
>> >>
>> >> 99% of NAS boxes, maybe, but not dumb RAID boxes like the one I
>> believe
>> >> you’re ref

Re: [CentOS] rare but repeating system crash in C7

2021-01-03 Thread Strahil Nikolov via CentOS
Erm ... the noauto should be part of the options column, so append it to the 
previous option (and of course delimit with a ",").

I see that the '.automount' was not generated ... Maybe it's related to the 
noauto issue.

By the way, "mount -a" should complain if fstab is not OK.

Best Regards,
Strahil Nikolov







On Sunday, 3 January 2021, 21:01:29 GMT+2, Fred wrote:





$ cat /etc/centos-release
CentOS Linux release 7.9.2009 (Core)

$ sudo systemctl status mnt-backup.mount mnt-backup.automount
[sudo] password for fredex: 
● mnt-backup.mount - /mnt/backup
   Loaded: loaded (/etc/fstab; bad; vendor preset: disabled)
   Active: active (mounted) since Sat 2021-01-02 22:20:05 EST; 14h ago
    Where: /mnt/backup
     What: /dev/sdc1
     Docs: man:fstab(5)
           man:systemd-fstab-generator(8)
    Tasks: 0

● mnt-backup.automount
   Loaded: loaded
   Active: inactive (dead)
    Where: /mnt/backup
[fredex@fcshome Desktop]$ systemctl cat mnt-backup.mount mnt-backup.automount
No files found for mnt-backup.automount.
# /run/systemd/generator/mnt-backup.mount
# Automatically generated by systemd-fstab-generator

[Unit]
SourcePath=/etc/fstab
Documentation=man:fstab(5) man:systemd-fstab-generator(8)
RequiresOverridable=systemd-fsck@dev-disk-by\x2duuid-259ec5ea\x2de8a4\x2d465a\x2
After=systemd-fsck@dev-disk-by\x2duuid-259ec5ea\x2de8a4\x2d465a\x2d9263\x2d1c062

[Mount]
What=/dev/disk/by-uuid/259ec5ea-e8a4-465a-9263-1c06217b9aaf
Where=/mnt/backup
Type=ext4
Options=noauto

the fstab statement I put in my last posting was a copy/paste from /etc/fstab, 
so it should be correct as shown. I don't see a comma before noauto.



On Sun, Jan 3, 2021 at 11:42 AM Strahil Nikolov  wrote:
> Are you still on 7.6 ? I recently discovered that a bug in sysstat was fixed 
> in 7.7 that prevented autofs from umounting the filesystem.
> 
> The following should show if it's taking into action:
> systemctl status mnt-backup.mount mnt-backup.automount
> systemctl cat mnt-backup.mount mnt-backup.automount
> 
> 
> Are you sure that you got no "," before that "noauto" ?
> 
> Best Regards,
> Strahil Nikolov 
> 
> 
> 
> 
> 
> 
On Sunday, 3 January 2021, 16:25:47 GMT+2, Fred wrote:
> 
> 
> 
> 
> 
> Strahil:
> 
> I WAS using that, but the automatic umount never worked, leaving it mounted 
> all the time.
> 
> I commented out those entries in /etc/auto.master before modifying the fstab 
> entry:
> 
> UUID=259ec5ea-e8a4-465a-9263-1c06217b9aaf       /mnt/backup     
> ext4,x-systemd.automount,x-systemd.idle-timeout=15min   noauto  0       2
> 
> which is exactly as it was before except for the x-systemd entries as you 
> described.
> 
> and the peculiar thing is it STILL does not automount. and yes, I did do 
> systemctl restart local-fs.target.
> 
> do I need to reboot (or something simpler, maybe) to fully disable the 
> auto.master stuff?
> 
> Thanks again!
> 
> Fred
> 
> On Sun, Jan 3, 2021 at 5:54 AM Strahil Nikolov via CentOS  
> wrote:
>> Hi Fred,
>> 
>> do you use automatic umount for the map in /etc/auto.master (--timeout) ?
>> 
>> If yes, then the systemd mount options probably won't help.
>> 
>> Best Regards,
>> Strahil Nikolov
>> 
>>  
>> 
>> 
>> 
>> 
>> 
>> 
>> On Sunday, 3 January 2021, 04:27:17 GMT+2, Fred wrote:
>> 
>> 
>> 
>> 
>> 
>> Yeah, and the instructions for setting RAID-1 or RAID-0 have the switch
>> positions exactly reversed.
>> 
>> Strahil: I'm using autofs to automount the unit. but just turned that off
>> and enabled the xsystemd.automount in fstab, we'll see how that works.
>> 
>> Fred
>> 
>> 
>> On Sat, Jan 2, 2021 at 4:11 PM Warren Young  wrote:
>> 
>>> On Jan 2, 2021, at 11:17 AM, Fred  wrote:
>>> >
>>> > I assume that the yottamaster device runs Linux, just like 99% of other
>>> > such devices.
>>>
>>> 99% of NAS boxes, maybe, but not dumb RAID boxes like the one I believe
>>> you’re referring to.
>>>
>>> (And I doubt even that, with the likes of FreeNAS extending down from the
>>> enterprise space where consumer volume can affect that sort of thing.)
>>>
>>> I have more than speculation to back that guess: the available firmware
>>> images are far too small to contain a Linux OS image, their manuals don’t
>>> talk about Linux or GPL that I can see, and there’s no place to download
>>> their Linux source code per the GPL.
>>>
>>> While doing this exploration, I’ve run into multiple problems with their
>>> web

Re: [CentOS] rare but repeating system crash in C7

2021-01-03 Thread Strahil Nikolov via CentOS
Are you still on 7.6? I recently discovered that a bug in sysstat, which
prevented autofs from unmounting the filesystem, was fixed in 7.7.

The following should show if it's taking effect:
systemctl status mnt-backup.mount mnt-backup.automount
systemctl cat mnt-backup.mount mnt-backup.automount


Are you sure that you don't have a "," before that "noauto"?

Best Regards,
Strahil Nikolov 






On Sunday, 3 January 2021, 16:25:47 GMT+2, Fred wrote:





Strahil:

I WAS using that, but the automatic umount never worked, leaving it mounted all 
the time.

I commented out those entries in /etc/auto.master before modifying the fstab 
entry:

UUID=259ec5ea-e8a4-465a-9263-1c06217b9aaf       /mnt/backup     
ext4,x-systemd.automount,x-systemd.idle-timeout=15min   noauto  0       2

which is exactly as it was before except for the x-systemd entries as you 
described.

and the peculiar thing is it STILL does not automount. and yes, I did do 
systemctl restart local-fs.target.

do I need to reboot (or something simpler, maybe) to fully disable the 
auto.master stuff?

Thanks again!

Fred

On Sun, Jan 3, 2021 at 5:54 AM Strahil Nikolov via CentOS  
wrote:
> Hi Fred,
> 
> do you use automatic umount for the map in /etc/auto.master (--timeout) ?
> 
> If yes, then the systemd mount options probably won't help.
> 
> Best Regards,
> Strahil Nikolov
> 
>  
> 
> 
> 
> 
> 
> 
> On Sunday, 3 January 2021, 04:27:17 GMT+2, Fred wrote:
> 
> 
> 
> 
> 
> Yeah, and the instructions for setting RAID-1 or RAID-0 have the switch
> positions exactly reversed.
> 
> Strahil: I'm using autofs to automount the unit. but just turned that off
> and enabled the xsystemd.automount in fstab, we'll see how that works.
> 
> Fred
> 
> 
> On Sat, Jan 2, 2021 at 4:11 PM Warren Young  wrote:
> 
>> On Jan 2, 2021, at 11:17 AM, Fred  wrote:
>> >
>> > I assume that the yottamaster device runs Linux, just like 99% of other
>> > such devices.
>>
>> 99% of NAS boxes, maybe, but not dumb RAID boxes like the one I believe
>> you’re referring to.
>>
>> (And I doubt even that, with the likes of FreeNAS extending down from the
>> enterprise space where consumer volume can affect that sort of thing.)
>>
>> I have more than speculation to back that guess: the available firmware
>> images are far too small to contain a Linux OS image, their manuals don’t
>> talk about Linux or GPL that I can see, and there’s no place to download
>> their Linux source code per the GPL.
>>
>> While doing this exploration, I’ve run into multiple problems with their
>> web site, which strengthens my suspicion that this box is your culprit.  If
>> they’re this slipshod with their marketing material, what does that say
>> about their engineering department?
>> ___
>> CentOS mailing list
>> CentOS@centos.org
>> https://lists.centos.org/mailman/listinfo/centos
> 
>>
> ___
> CentOS mailing list
> CentOS@centos.org
> https://lists.centos.org/mailman/listinfo/centos
> ___
> CentOS mailing list
> CentOS@centos.org
> https://lists.centos.org/mailman/listinfo/centos
> 
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] rare but repeating system crash in C7

2021-01-03 Thread Strahil Nikolov via CentOS
Hi Fred,

do you use automatic umount for the map in /etc/auto.master (--timeout) ?

If yes, then the systemd mount options probably won't help.

Best Regards,
Strahil Nikolov

 






On Sunday, 3 January 2021, 04:27:17 GMT+2, Fred wrote:





Yeah, and the instructions for setting RAID-1 or RAID-0 have the switch
positions exactly reversed.

Strahil: I'm using autofs to automount the unit. but just turned that off
and enabled the xsystemd.automount in fstab, we'll see how that works.

Fred


On Sat, Jan 2, 2021 at 4:11 PM Warren Young  wrote:

> On Jan 2, 2021, at 11:17 AM, Fred  wrote:
> >
> > I assume that the yottamaster device runs Linux, just like 99% of other
> > such devices.
>
> 99% of NAS boxes, maybe, but not dumb RAID boxes like the one I believe
> you’re referring to.
>
> (And I doubt even that, with the likes of FreeNAS extending down from the
> enterprise space where consumer volume can affect that sort of thing.)
>
> I have more than speculation to back that guess: the available firmware
> images are far too small to contain a Linux OS image, their manuals don’t
> talk about Linux or GPL that I can see, and there’s no place to download
> their Linux source code per the GPL.
>
> While doing this exploration, I’ve run into multiple problems with their
> web site, which strengthens my suspicion that this box is your culprit.  If
> they’re this slipshod with their marketing material, what does that say
> about their engineering department?
> ___
> CentOS mailing list
> CentOS@centos.org
> https://lists.centos.org/mailman/listinfo/centos

>
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] rare but repeating system crash in C7

2021-01-02 Thread Strahil Nikolov via CentOS


> Just add "x-systemd.automount,x-systemd.idle-timeout=10min" to the
> fstab mount options, or create ".mount" + ".automount" entries
> for it (autofs is also an option) and test.
If you picked the systemd automounter as an option, you will have to
run:
systemctl restart local-fs.target

Best Regards,
Strahil Nikolov

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] rare but repeating system crash in C7

2021-01-02 Thread Strahil Nikolov via CentOS


> I assume that the yottamaster device runs Linux, just like 99% of
> other
> such devices. as to whether it uses linux software raid or some cheap
> (megaraid???) chipset, I don't know, nor know how to tell. but I'll
> check
> that URL you sent and see what happens.
Just add "x-systemd.automount,x-systemd.idle-timeout=10min" to the
fstab mount options, or create ".mount" + ".automount" entries for
it (autofs is also an option) and test.

The "x-systemd.automount" option tells systemd to create a
".automount" unit which will monitor the mount point and automatically
mount your drive, while the idle-timeout tells systemd to
automatically unmount the share when not in use (ls, df, du and others
count as usage and reset the counter). Also, if you use 7.6, there is
a bug in sysstat that forces autofs and systemd's automounter to mount
the share.
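
A minimal sketch of such an fstab entry (the UUID and mount point are
placeholders):

    UUID=<disk-uuid>  /mnt/backup  ext4  x-systemd.automount,x-systemd.idle-timeout=10min,noauto  0  2

    # regenerate the generated units and apply
    systemctl daemon-reload
    systemctl restart local-fs.target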

Best Regards,
Strahil Nikolov

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Out of office: "CentOS Digest, Vol 191, Issue 26"

2020-12-27 Thread Strahil Nikolov via CentOS
> Then let's make a little contest out of it: what's the most stupid thing
> you've done as a system administrator?
1)

rm -rf /prodnfs_mountpoint/*

Thankfully, it had some delay before deleting, so "Ctrl + C" almost broke on
my keyboard.

2) Powered off the primary prod DB instead of the standby, which usually has
planned downtime 3 times per year :)

3) Wiped the whole VM during my RHCSA (paid with my own money) with only 40
min left

 
Best Regards,
Strahil Nikolov
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


[CentOS] centos-release-gluster8 missing

2020-12-24 Thread Strahil Nikolov via CentOS
Hello All,


does anyone know where centos-release-gluster8 for CentOS 7 is?

Best Regards,
Strahil Nikolov

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] mariadb or mysql web gui

2020-12-20 Thread Strahil Nikolov via CentOS
At 17:04 -0500 on 20.12.2020 (Sun), Ranbir wrote:
> Hello,
> 
> Is there a web gui available in CentOS 8 for managing mariadb and/or 
> mysql DB servers? I've used phpMyAdmin for many years, but I don't
> see 
> an RPM for it in EPEL for CentOS 8.
> 
> Is there a popular alternative to phpMyAdmin that's packaged for
> CentOS 
> 8?

According to this guide (
https://computingforgeeks.com/install-and-configure-phpmyadmin-on-rhel-8/
) it should work, yet I haven't tested it.

Best Regards,
Strahil Nikolov

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] MTU report question

2020-12-17 Thread Strahil Nikolov via CentOS
seems so
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] MTU report question

2020-12-14 Thread Strahil Nikolov via CentOS
What happens when you run 'ping -M do -s 65000 -c 4 ...'?

Best Regards,
Strahil Nikolov






On Monday, 14 December 2020, 15:46:50 GMT+2, Patrick Bégou wrote:





Hi,

I'm deploying a CentOS8 (not Stream) cluster and I have a question
about MTU on the interfaces. I have a ConnectX-6 Mellanox interface where
I need IPoIB set up.
I've set up this interface via nmcli and set the MTU to 65520 with:

    nmcli connection modify ib0 mtu 65520
    nmcli connection up ib0

Running "nmcli connection show ib0" report:

    infiniband.mtu: 65520

But "ip addr show ib0" report a mtu of 2044:

    6: ib0: mtu 2044 qdisc mq state UP group default qlen 256

Why? Who is wrong (possibly me)?

Thanks

Patrick

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] [CentOS-devel] https://blog.centos.org/2020/12/future-is-centos-stream/

2020-12-09 Thread Strahil Nikolov via CentOS


> How many of us, complaining here about a supposed "breach of trust",
> are involved
> in making CentOS better and not just taking what others do and make
> money on top
> of it? I never participated in anything CentOS related, I happily use
> it but you should
> know what you are buying when you choose to use a (once) volunteer-
> based project.

It's your own problem. I have opened at least 10 bugs on CentOS's
bugzilla and at least 30-40 bugs (not counting the docs) on RH's
bugzilla, providing the necessary details for identifying bugs and
misconfigurations.

Best Regards,
Strahil Nikolov

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] https://blog.centos.org/2020/12/future-is-centos-stream/

2020-12-08 Thread Strahil Nikolov via CentOS


> I promise you, to the best of my knowledge, IBM had nothing to do
> with
> this decision.  Red Hat is a distinct unit inside IBM and Red Hat
> still
> has a CEO, CFO, etc.  Red Hat also maintains a neutral relationship
> with
> many IBM competitors. So this was not an IBM decision.
> 
So why the hurry? Why was this not done when EL8 came out?

Best Regards,
Strahil Nikolov

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] [CentOS-devel] https://blog.centos.org/2020/12/future-is-centos-stream/

2020-12-08 Thread Strahil Nikolov via CentOS
If anyone is considering forking CentOS 8 (I'm not talking about that
"Stream"), count me in.

Otherwise I will switch to openSUSE Leap. At least they are not pushing
me onto some testing ground.

Best Regards,
Strahil Nikolov

At 12:07 -0500 on 08.12.2020 (Tue), Phelps, Matthew wrote:
> I still haven't seen an answer to the question, "Who made this
> decision?"
> and, "How can we lobby to get it changed?"
> 
> 
> 
> On Tue, Dec 8, 2020 at 9:06 AM Rich Bowen  wrote:
> 
> > The future of the CentOS Project is CentOS Stream, and over the
> > next
> > year we’ll be shifting focus from CentOS Linux, the rebuild of Red
> > Hat
> > Enterprise Linux (RHEL), to CentOS Stream, which tracks just ahead
> > of a
> > current RHEL release. CentOS Linux 8, as a rebuild of RHEL 8, will
> > end
> > at the end of 2021. CentOS Stream continues after that date,
> > serving as
> > the upstream (development) branch of Red Hat Enterprise Linux.
> > 
> > Meanwhile, we understand many of you are deeply invested in CentOS
> > Linux
> > 7, and we’ll continue to produce that version through the remainder
> > of
> > the RHEL 7 life cycle.
> > https://access.redhat.com/support/policy/updates/errata/#Life_Cycle_Dates
> > 
> > CentOS Stream will also be the centerpiece of a major shift in
> > collaboration among the CentOS Special Interest Groups (SIGs). This
> > ensures SIGs are developing and testing against what becomes the
> > next
> > version of RHEL. This also provides SIGs a clear single goal,
> > rather
> > than having to build and test for two releases. It gives the CentOS
> > contributor community a great deal of influence in the future of
> > RHEL.
> > And it removes confusion around what “CentOS” means in the Linux
> > distribution ecosystem.
> > 
> > When CentOS Linux 8 (the rebuild of RHEL8) ends, your best option
> > will
> > be to migrate to CentOS Stream 8, which is a small delta from
> > CentOS
> > Linux 8, and has regular updates like traditional CentOS Linux
> > releases.
> > If you are using CentOS Linux 8 in a production environment, and
> > are
> > concerned that CentOS Stream will not meet your needs, we encourage
> > you
> > to contact Red Hat about options.
> > 
> > We have an FAQ - https://centos.org/distro-faq/ - to help with your
> > information and planning needs, as you figure out how this shift of
> > project focus might affect you.
> > 
> > [See also: Red Hat's perspective on this.
> > 
> > https://www.redhat.com/en/blog/centos-stream-building-innovative-future-enterprise-linux
> > ]
> > 
> > ___
> > CentOS-devel mailing list
> > centos-de...@centos.org
> > https://lists.centos.org/mailman/listinfo/centos-devel
> > 
> 
> 

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] dnsmasq centos 7

2020-10-31 Thread Strahil Nikolov via CentOS
Are you sure you have opened 53/udp?
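
A quick way to check and open it (a sketch, assuming firewalld on CentOS 7):

    # list what's currently allowed
    firewall-cmd --list-all
    # open DNS (53/tcp and 53/udp) permanently
    firewall-cmd --permanent --add-service=dns
    firewall-cmd --reload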

Best Regards,
Strahil Nikolov






On Saturday, 31 October 2020, 16:15:10 GMT+2, Jerry Geis wrote:





Hi Niki,

Thanks, good article... I was close in what I did - but it's still not working.


I made this config file in /etc/dnsmasq.d

more lsi.conf
domain-needed
bogus-priv
interface = eth0
expand-hosts
local = / LayeredSolutionsInc.com /
domain = LayeredSolutionsInc.com

# The address 192.168.1.14 is the static IP of this server
# You can find this ip by running ifconfig and look for the
# IP of the interface which is connected to the router.
listen-address=127.0.0.1
listen-address=192.168.1.14
bind-interfaces

# Use open source DNS servers
server=8.8.8.8

# Create custom 'domains'.
# Custom 'domains' can also be added in /etc/hosts
address=/LayeredSolutionsInc.com/192.168.1.14


I restart dnsmasq of course... The resolution works on the same
machine - but not for any other linux box.

I add the nameserver 192.168.1.14 to the /etc/resolv.conf of that
machine - but resolution does not work.

Thoughts? (note I moved from my original 192.168.1.8 to 192.168.1.14
machine) - same issue resolves locally but not for other machines.


Jerry
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Unable to get dummy interfaces to persist across reboots in CentOS 8

2020-10-28 Thread Strahil Nikolov via CentOS
Requirement is a very strong word, but you should consider using it, and here 
is a short demo why:

- By default, RHEL uses NetworkManager to configure and manage network 
connections, and the /usr/sbin/ifup and /usr/sbin/ifdown scripts use 
NetworkManager to process ifcfg files in the /etc/sysconfig/network-scripts/ 
directory.

[root@system ~]# ls -l /usr/sbin/ifup
lrwxrwxrwx. 1 root root 22 21 окт 21,29 /usr/sbin/ifup -> /etc/alternatives/ifup

[root@system ~]# alternatives --list  | grep ifup
ifup    auto    /usr/libexec/nm-ifup

[root@system ~]# rpm -qf /usr/libexec/nm-ifup
NetworkManager-1.22.8-5.el8_2.x86_64

- the old network-scripts have been deprecated and are no longer the default


It's about time to switch to NM, but you've got some 5-8 years until the next EL 
release.

Best Regards,
Strahil Nikolov
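
As an aside, if the dummy interface also needs a persistent address, nmcli can store that in the same connection (a sketch, assuming the dummy0 connection from the demo quoted below; the address is illustrative):

[root@system ~]# nmcli connection modify dummy0 ipv4.method manual ipv4.addresses 192.168.100.10/24
[root@system ~]# nmcli connection up dummy0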



On Wednesday, 28 October 2020, 12:47:12 GMT+2, Frank Even 
 wrote: 





No.  Network Manager is always disabled on our builds since at least
Cent5 days.  The network stack has always been able to be managed
properly without relying on Network Manager.  Is that now an absolute
requirement?  It never has been prior.

On Mon, Oct 26, 2020 at 6:26 PM Strahil Nikolov via CentOS
 wrote:
>
> Have you tried to use NetworkManager ?
> After all ,anything network related should be done by it.
>
> [root@system ~]# nmcli connection add con-name dummy0 ifname dummy0 type dummy
> Connection 'dummy0' (9fdd74fa-c143-4991-9bac-0e542704ac89) successfully added.
>
> [root@system ~]# reboot
> Shared connection to glustera closed.
>
>
> [root@system ~]# uptime
> 03:23:44 up 0 min,  1 user,  load average: 1,57, 0,48, 0,17
> [root@glustera ~]# nmcli connection show
> NAME    UUID                                  TYPE      DEVICE
> dummy0  9fdd74fa-c143-4991-9bac-0e542704ac89  dummy    dummy0
>
>
> [root@system ~]# ip a s dummy0
> 3: dummy0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
> group default qlen 1000
>    link/ether ce:c9:83:97:10:ee brd ff:ff:ff:ff:ff:ff
>    inet6 fe80::599:a978:9457:df10/64 scope link noprefixroute
>      valid_lft forever preferred_lft forever
>
> P.S.: This is the first time I hear about dummy interfaces. What are those 
> used for?
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>
>
>
>
On Tuesday, 27 October 2020, 02:42:06 GMT+2, Frank Even 
 wrote: 
>
>
>
>
>
> Anyone have any ideas?  It's rather annoying that I can't get these to
> persist across reboots without using some kind of helper script.
>
> On Fri, Oct 16, 2020 at 6:37 AM Frank Even
>  wrote:
> >
> > Hello all, hoping someone can help me out here.
> >
> > I cannot get dummy interfaces on a new Cent8 build to persist across 
> > reboots.
> >
> > On Cent7 - this is the process I use:
> >
> > Create Dummies:
> > # cat /etc/modules-load.d/dummy.conf
> > dummy
> > # cat /etc/modprobe.d/dummyopts.conf
> > options dummy numdummies=4
> > # ip link add dummy0 type dummy
> > ## - repeating a/ ascending dummyN adapters for as many needed
> > # service network start
> > # dracut -f
> >
> > Now  this  was  different than even how 6 handled it, forget how I
> > finally dug that up (possible I even asked here).  I've applied this
> > same configuration to a Cent8 box I'm trying to stand up and it all
> > appears to work fine, but unlike the Cent7 boxes,  when the Cent8 box
> > comes back up,  all the dummy adapters are missing.  I've been
> > searching all over trying to find some documentation on this to no
> > avail.  I'm hoping someone has some suggestions here to help out.
> >
> > Thanks,
> > Frank
> ___
> CentOS mailing list
> CentOS@centos.org
> https://lists.centos.org/mailman/listinfo/centos

> ___
> CentOS mailing list
> CentOS@centos.org
> https://lists.centos.org/mailman/listinfo/centos
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Unable to get dummy interfaces to persist across reboots in CentOS 8

2020-10-26 Thread Strahil Nikolov via CentOS
Have you tried with NetworkManager? After all, this is the thing that manages 
all networks:


[root@system ~]# nmcli connection add con-name dummy0 ifname dummy0 type dummy  
Connection 'dummy0' (9fdd74fa-c143-4991-9bac-0e542704ac89) successfully added.

 [root@system ~]# reboot
Shared connection to glustera closed.

 [root@system ~]# nmcli connection show  
NAME    UUID  TYPE  DEVICE  
dummy0  9fdd74fa-c143-4991-9bac-0e542704ac89  dummy dummy0


 [root@system ~]# ip a s dummy0
3: dummy0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
group default qlen 1000
   link/ether ce:c9:83:97:10:ee brd ff:ff:ff:ff:ff:ff
   inet6 fe80::599:a978:9457:df10/64 scope link noprefixroute  
  valid_lft forever preferred_lft forever


Best Regards,
Strahil Nikolov







On Tuesday, 27 October 2020, 02:42:06 GMT+2, Frank Even 
 wrote: 





Anyone have any ideas?  It's rather annoying that I can't get these to
persist across reboots without using some kind of helper script.

On Fri, Oct 16, 2020 at 6:37 AM Frank Even
 wrote:
>
> Hello all, hoping someone can help me out here.
>
> I cannot get dummy interfaces on a new Cent8 build to persist across reboots.
>
> On Cent7 - this is the process I use:
>
> Create Dummies:
> # cat /etc/modules-load.d/dummy.conf
> dummy
> # cat /etc/modprobe.d/dummyopts.conf
> options dummy numdummies=4
> # ip link add dummy0 type dummy
> ## - repeating a/ ascending dummyN adapters for as many needed
> # service network start
> # dracut -f
>
> Now  this  was  different than even how 6 handled it, forget how I
> finally dug that up (possible I even asked here).  I've applied this
> same configuration to a Cent8 box I'm trying to stand up and it all
> appears to work fine, but unlike the Cent7 boxes,  when the Cent8 box
> comes back up,  all the dummy adapters are missing.  I've been
> searching all over trying to find some documentation on this to no
> avail.  I'm hoping someone has some suggestions here to help out.
>
> Thanks,
> Frank
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Unable to get dummy interfaces to persist across reboots in CentOS 8

2020-10-26 Thread Strahil Nikolov via CentOS
Have you tried to use NetworkManager ?
After all ,anything network related should be done by it.

[root@system ~]# nmcli connection add con-name dummy0 ifname dummy0 type dummy  
Connection 'dummy0' (9fdd74fa-c143-4991-9bac-0e542704ac89) successfully added.

[root@system ~]# reboot
Shared connection to glustera closed.


[root@system ~]# uptime
03:23:44 up 0 min,  1 user,  load average: 1,57, 0,48, 0,17
[root@glustera ~]# nmcli connection show  
NAME    UUID  TYPE  DEVICE  
dummy0  9fdd74fa-c143-4991-9bac-0e542704ac89  dummy dummy0


[root@system ~]# ip a s dummy0
3: dummy0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
group default qlen 1000
   link/ether ce:c9:83:97:10:ee brd ff:ff:ff:ff:ff:ff
   inet6 fe80::599:a978:9457:df10/64 scope link noprefixroute  
  valid_lft forever preferred_lft forever

P.S.: This is the first time I hear about dummy interfaces. What are those used 
for?

Best Regards,
Strahil Nikolov









On Tuesday, 27 October 2020, 02:42:06 GMT+2, Frank Even 
 wrote: 





Anyone have any ideas?  It's rather annoying that I can't get these to
persist across reboots without using some kind of helper script.

On Fri, Oct 16, 2020 at 6:37 AM Frank Even
 wrote:
>
> Hello all, hoping someone can help me out here.
>
> I cannot get dummy interfaces on a new Cent8 build to persist across reboots.
>
> On Cent7 - this is the process I use:
>
> Create Dummies:
> # cat /etc/modules-load.d/dummy.conf
> dummy
> # cat /etc/modprobe.d/dummyopts.conf
> options dummy numdummies=4
> # ip link add dummy0 type dummy
> ## - repeating a/ ascending dummyN adapters for as many needed
> # service network start
> # dracut -f
>
> Now  this  was  different than even how 6 handled it, forget how I
> finally dug that up (possible I even asked here).  I've applied this
> same configuration to a Cent8 box I'm trying to stand up and it all
> appears to work fine, but unlike the Cent7 boxes,  when the Cent8 box
> comes back up,  all the dummy adapters are missing.  I've been
> searching all over trying to find some documentation on this to no
> avail.  I'm hoping someone has some suggestions here to help out.
>
> Thanks,
> Frank
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Server entering Emergency Shell, but continues fine after pressing Enter

2020-09-10 Thread Strahil Nikolov via CentOS
I had a similar issue on 7.6 - the LVM timeouts were too short and it was timing 
out because we had a lot of multipath devices. Once those were up, you could just 
continue.

journalctl will show you what has happened.


Best Regards,
Strahil Nikolov
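
For example (a sketch — the unit names assume the stock lvm2/multipath services, and '-b -1' needs persistent journaling):

journalctl -b -1 -p err                                          # errors from the previous boot
journalctl -b -1 -u lvm2-monitor.service -u multipathd.service   # LVM/multipath activation messages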






On Thursday, 10 September 2020, 18:57:02 GMT+3, Quinn Comendant 
 wrote: 





Hi Thomas,

On 10 Sep 2020 10:06:01, Thomas Bendler wrote:
> If I'm not mistaken, problems after UTMP point to problems with X/ hardware
> configuration. So I guess you might find more information when you also
> have a look at the log files of systemd.

I don't see any hardware issues. Here's the output from `journalctl -p 5 -xb`: 
https://write.as/2vjgz6pfmopg7fnf.txt The time of the last interruption during 
boot was at Sep 10 15:01:46.

Thanks,

Quinn
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Fixing grub/shim issue Centos 7

2020-08-07 Thread Strahil Nikolov via CentOS
Hi Alessandro,

Compared to Microsoft, both RH and SuSE are awesome.
You always need a patch management strategy with locked repos 
(Spacewalk/Pulp) which can be tested on less important systems prior to 
deployment on Prod.
Keep in mind that Secure Boot is hard to deploy in virtual environments and 
thus testing is not so easy.

Of course, contributing to the community is always welcome.

Best Regards,
Strahil Nikolov

On 7 August 2020 10:40:01 GMT+03:00, Alessandro Baggi 
 wrote:
>
>On 07/08/20 08:22, Johnny Hughes wrote:
>>> "How on earth could this have passed Q & A ?"
>
>Hi Johnny,
>Niki's question is widespread and legit in the thoughts of many, many
>users,
>so don't see this as an attack. Many, many users wonder "if 
>this was tested before release", and I think that many of us are 
>incredulous at what happened on CentOS and in the upstream (especially
>in 
>the upstream), but as you said, CentOS inherits RHEL bugs. I'm reading 
>about many users that lost their trust in RH with the last 2 problems 
>(microcode and shim). This is bad for CentOS.
>
>> Well, I mean that would be a valid point if it happened for every
>> install.  The issue did not happen on every install.  There is no way
>to
>> test every single hardware and firmware combination for every single
>> computer ever built :)
>>
>> It would be great if things like this did not happen, but with the
>> universe of possible combinations, i am surprised it does not happen
>> more often.
>
>Probably many users have not updated their machines between the bug 
>release and the resolution (thanks to your fast fix over the weekend, 
>thank you) and many update their CentOS machines on a 2-month basis (if
>not worse). I also think that many users of the CentOS user base have not 
>proclaimed their disappointment/the issue on this list or in other 
>channels. For example, I simply updated at the wrong time.
>
>> We do run boot tests of every single kernel for CentOS.  The RHEL
>team
>> runs many more tests for RHEL.  But every possible combination from
>> every vendor can't possibly be tested. Right?
>
>You are right, but isn't UEFI a standard, and shouldn't it work the same
>across vendors? I ask this because this patch broke all my UEFI 
>workstations.
>
>While the CentOS team may not have the resources to run this type of 
>test, it would be great to know what happened to RHEL QA (RH being a giant) 
>for this release and, given the partnership between CentOS and RH, if you
>know something more on this.
>
>Thank you.
>
>___
>CentOS mailing list
>CentOS@centos.org
>https://lists.centos.org/mailman/listinfo/centos
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] CentOS 8 DNS resolution not working as expected

2020-08-06 Thread Strahil Nikolov via CentOS
I also don't see a search stanza.

Best Regards,
Strahil Nikolov
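
(A sketch of one workaround for the behaviour discussed below — raising ndots so that a name like foo.subdomain still goes through the search list; based on the resolv.conf quoted below:)

# /etc/resolv.conf
search subdomain.company.com company.com
nameserver 1.2.3.4
options ndots:2    # names with fewer than 2 dots are tried against the search domains first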

On 6 August 2020 13:30:13 GMT+03:00, isdtor wrote:
>Pete Biggs writes:
>> On Thu, 2020-08-06 at 10:26 +0100, isdtor wrote:
>> > [root@localhost ~]# lsb_release -d
>> > Description:   CentOS Linux release 8.2.2004 (Core) 
>> > [root@localhost ~]# cat /etc/resolv.conf 
>> > # Generated by NetworkManager
>> > search subdomain.company.com company.com
>> > nameserver 1.2.3.4
>> > nameserver 5.6.7.8
>> > 
>> > [root@localhost ~]# host foo
>> > foo.subdomain.company.com has address 1.2.3.4
>> > 
>> > [root@localhost ~]# host foo.subdomain
>> > Host foo.subdomain not found: 3(NXDOMAIN)
>> > 
>> > [root@localhost ~]# host foo.subdomain.company.com
>> > foo.subdomain.company.com has address 1.2.3.4
>> > [root@localhost ~]# 
>> > 
>> > The expected result is that the lookup for foo.subdomain works,
>like it does under CentOS < 8.
>> 
>> man host
>> 
>>-N ndots
>>The number of dots that have to be in name for it to be
>considered absolute. The default value is that defined using
>>the ndots statement in /etc/resolv.conf, or 1 if no ndots
>statement is present. Names with fewer dots are interpreted
>>as relative names and will be searched for in the domains
>listed in the search or domain directive in
>>/etc/resolv.conf.
>
>As per man resolv.conf, the default setting hasn't changed. It is n=1
>on all of CentOS 6/7/8.
>
>___
>CentOS mailing list
>CentOS@centos.org
>https://lists.centos.org/mailman/listinfo/centos
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] 8.2.2004 Latest yum update renders machine unbootable

2020-08-01 Thread Strahil Nikolov via CentOS
Don't forget that EL 7 (non-UEFI systems) & 8 support booting from an LVM snapshot 
(a.k.a. the BOOM boot manager), so a revert is only a few steps away:
- boot from the snapshot (grub menu)
- revert from the LVM snapshot
- reboot and wait for the revert to complete


Best Regards,
Strahil Nikolov
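
A rough sketch of that workflow with the boom-boot package (VG/LV names are illustrative; check boom(8) for the exact syntax on your release):

# snapshot the root LV before updating
lvcreate -s -n root_preupdate -L 5G centos/root
# add a grub boot entry for the snapshot
boom create --title "Before yum update" --rootlv centos/root_preupdate
# if the update goes wrong, boot the snapshot entry and revert with:
#   lvconvert --merge centos/root_preupdate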

On 1 August 2020 23:58:27 GMT+03:00, david wrote:
>At 01:03 PM 8/1/2020, you wrote:
>>On 8/1/20 6:56 AM, david wrote:
>>>At 02:54 AM 8/1/2020, Alessandro Baggi wrote:
Hi Johnny,
thank you very much for clarification.

You said that in the centos infrastructure only one server got the
>problem.
What are the conditions that permit the breakage? There is a
>particular
configuration (hw/sw) case that match always the problem or it is
>random?

Thank you
>>>
>>>I have two servers running Centos 7 on apple 
>>>hardware (one mac-mini and one mac 
>>>server).  They both failed to reboot a few 
>>>days ago.  So perhaps whatever anti-boot bug 
>>>hit Centos 8, also hit Centos 7.  I can't tell 
>>>what version got updated since the system 
>>>simply fails to boot.  I don't even get a grub 
>>>screen. I'll have to rebuild the systems from scratch.
>>>
>>
>>You should be able to boot off of installation 
>>media into rescue mode, and downgrade the grub2* and/or shim* RPMs.
>>
>>-Greg
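
(From the rescue shell that would look roughly like this — a sketch; the exact versions depend on what the configured repos still carry:)

chroot /mnt/sysimage          # enter the installed system mounted by rescue mode
yum downgrade shim\* grub2\*
exit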
>
>This is a good idea, if I knew how to 
>"downgrade...".  But in any event, I had decided 
>to rebuild from scratch, which of course failed 
>as soon as I did a yum update.  So, I'm 
>installing 7.8.2003 with no updates until I see 
>the "all clear -- updates will no longer make 
>your system unbootable" message from the Centos team.
>
>In my many years of blindly updating my 
>installations, starting from the free Redhat 
>distributions, through Whitehat and onto Centos, 
>this is the first disaster, and luckily, it 
>didn't hit all my systems.  Let's hope there aren't many more.
>
>David 
>
>___
>CentOS mailing list
>CentOS@centos.org
>https://lists.centos.org/mailman/listinfo/centos
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] 8.2.2004 Latest yum update renders machine unbootable

2020-08-01 Thread Strahil Nikolov via CentOS
I got 5 CentOS 8.1 VMs with a minimal install.
When I did the update, I got one that had dnf issues and a corrupted rpm DB, and 
another one that didn't boot.
Restore from snapshot and update again -> no issues. To me it looks more like 
a random issue.

Best Regards,
Strahil Nikolov

On 1 August 2020 18:28:09 GMT+03:00, Lamar Owen wrote:
>On 8/1/20 11:02 AM, Lamar Owen wrote:
>> ...
>> [lowen@localhost ~]$ rpm -qa | grep ^kernel|grep 147
>> kernel-devel-4.18.0-147.8.1.el8_1.x86_64
>> kernel-4.18.0-147.8.1.el8_1.x86_64
>> kernel-modules-4.18.0-147.8.1.el8_1.x86_64
>> kernel-core-4.18.0-147.8.1.el8_1.x86_64
>> [lowen@localhost ~]$ 
>
>Well, I sure fat-fingered that command let's try it again:
>
>[lowen@localhost ~]$ rpm -qa | grep ^kernel|grep 193.14
>kernel-headers-4.18.0-193.14.2.el8_2.x86_64
>kernel-devel-4.18.0-193.14.2.el8_2.x86_64
>kernel-modules-4.18.0-193.14.2.el8_2.x86_64
>kernel-tools-4.18.0-193.14.2.el8_2.x86_64
>kernel-core-4.18.0-193.14.2.el8_2.x86_64
>kernel-tools-libs-4.18.0-193.14.2.el8_2.x86_64
>kernel-4.18.0-193.14.2.el8_2.x86_64
>[lowen@localhost ~]$ uname -a
>Linux localhost.localdomain 4.18.0-193.14.2.el8_2.x86_64 #1 SMP Sun Jul
>
>26 03:54:29 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
>[lowen@localhost ~]$
>
>
>___
>CentOS mailing list
>CentOS@centos.org
>https://lists.centos.org/mailman/listinfo/centos
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] tmpfs / selinux issue

2020-07-26 Thread Strahil Nikolov via CentOS
Hi Leon,

have you tried mounting with 'httpd_sys_rw_content_t' instead of 
'httpd_var_run_t'?

Best Regards,
Strahil Nikolov
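
That would make the fstab entry look like this (a sketch based on the entry quoted below, with only the context label changed):

tmpfs  /var/lib/php/session  tmpfs  defaults,noatime,mode=770,gid=apache,size=16777216,context="system_u:object_r:httpd_sys_rw_content_t:s0"  0 0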

On 25 July 2020 14:20:19 GMT+03:00, Leon Fauster via CentOS 
 wrote:
>Hi all,
>
>I have some AVC in the logs and wonder how to resolve this: Under
>EL8 (enforcing SElinux) I have /var/lib/php/session mounted as tmpfs.
>
>
># tail -1 /etc/fstab
>tmpfs  /var/lib/php/session  tmpfs 
>defaults,noatime,mode=770,gid=apache,size=16777216,context="system_u:object_r:httpd_var_run_t:s0"
>
>  0 0
>
># df -a |grep php
>tmpfs  16384   0  16384   0% /var/lib/php/session
>
># ls -laZ /var/lib/php/session
>insgesamt 0
>drwxrwx---. 2 root apache system_u:object_r:httpd_var_run_t:s0 40 24. 
>Jul 15:36 .
>drwxr-xr-x. 6 root root   system_u:object_r:httpd_var_lib_t:s0 68  7. 
>Jul 10:54 ..
>
>
>the applications can read the session data without any problems.
>
>
>
>When I reboot the system following AVC appears:
>
># last |grep ^re|head -3
>reboot   system boot  4.18.0-193.6.3.e Fri Jul 24 15:28   still running
>reboot   system boot  4.18.0-193.6.3.e Fri Jul 24 13:33 - 15:27 
>(01:54)
>reboot   system boot  4.18.0-193.6.3.e Fri Jul 24 01:20 - 13:33 
>(12:13)
>
>
># ausearch -m avc --start today
>
>time->Fri Jul 24 01:20:08 2020
>type=AVC msg=audit(1595546408.754:28): avc:  denied  { remount } for 
>pid=952 comm="(ostnamed)" scontext=system_u:system_r:init_t:s0 
>tcontext=system_u:object_r:httpd_var_run_t:s0 tclass=filesystem
>permissive=0
>
>time->Fri Jul 24 13:34:04 2020
>type=AVC msg=audit(1595590444.080:29): avc:  denied  { remount } for 
>pid=1020 comm="(ostnamed)" scontext=system_u:system_r:init_t:s0 
>tcontext=system_u:object_r:httpd_var_run_t:s0 tclass=filesystem
>permissive=0
>
>time->Fri Jul 24 15:28:40 2020
>type=AVC msg=audit(1595597320.783:28): avc:  denied  { remount } for 
>pid=934 comm="(ostnamed)" scontext=system_u:system_r:init_t:s0 
>tcontext=system_u:object_r:httpd_var_run_t:s0 tclass=filesystem
>permissive=0
>
>
>I wonder about the "remount" and the comm="ostnamed".
>
>I do not found any ostnamed application, the closest is hostnamed.
>
>Should the tmpfs be mounted differently (without fstab entry)?
>
>To get rid of the AVC I could add the corresponding policy
>"allow init_t httpd_var_run_t:filesystem remount;" but is this
>not a bit of overkill?
>
>Any hints about what the cause is?
>
>I'd really appreciate any ideas on this.
>
>--
>Leon
>
>
>
>
>
>
>
>___
>CentOS mailing list
>CentOS@centos.org
>https://lists.centos.org/mailman/listinfo/centos
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Slow terminal response Centos 7.7 1908

2020-07-03 Thread Strahil Nikolov via CentOS
Hi Erick,

What was the value of 'si' in top?

Best Regards,
Strahil Nikolov

On 3 July 2020 18:48:30 GMT+03:00, Erick Perez - Quadrian Enterprises 
 wrote: 
>It was found that the software NIC team created in CentOS was having
>issues due to a failing network cable. The team was going berserk with
>up/down changes.
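
(For reference, that kind of flapping is visible without waiting for the logs — a sketch, assuming a teamd-managed interface named team0:)

teamdctl team0 state                        # per-port runner state and link status
cat /sys/class/net/team0/carrier_changes    # counter keeps growing while a port flaps
journalctl -u NetworkManager | grep -i team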
>
>
>On Fri, Jul 3, 2020 at 10:12 AM Erick Perez - Quadrian Enterprises <
>epe...@quadrianweb.com> wrote:
>
>> Hey!
>> I have a strange condition in one of the servers that I don't where
>to
>> start looking.
>> I login to the server via SSH (cant doit any other way) and anything
>that
>> I type is slow
>> HTTP sessions timeout waiting for screen redraw. So, the server is
>acting
>> "slow".
>>
>> server is bare metal. no virtual services.
>> no alarms in the disk raid
>>
>> note: server was restarted because of power failure.
>>
>> Some outputs from this server that is a mail server:
>> [root@correo ~]# top
>> top - 09:54:43 up 23:51,  2 users,  load average: 0.18, 0.23, 0.28
>> Tasks: 210 total,   1 running, 209 sleeping,   0 stopped,   0 zombie
>> %Cpu(s):  0.2 us,  0.1 sy,  0.0 ni, 99.8 id,  0.0 wa,  0.0 hi,  0.0
>si,
>> 0.0 st
>> KiB Mem : 32606084 total, 25106412 free,  5932244 used,  1567428
>buff/cache
>> KiB Swap: 16449532 total, 16449532 free,0 used. 26282624
>avail Mem
>>
>> **iostat**
>> [root@correo ~]# iostat -y 5
>> Linux 3.10.0-1062.12.1.el7.x86_64 (correo.binal.ac.pa)  07/03/2020
>> _x86_64_(4 CPU)
>>
>> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>>0.050.000.050.050.00   99.85
>>
>> Device:tpskB_read/skB_wrtn/skB_read   
>kB_wrtn
>> sda   0.00 0.00 0.00  0 
>0
>> dm-0  0.00 0.00 0.00  0 
>0
>> dm-1  0.00 0.00 0.00  0 
>0
>>
>> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>>0.050.000.050.050.00   99.85
>>
>> Device:tpskB_read/skB_wrtn/skB_read   
>kB_wrtn
>> sda  21.40 0.00   169.60  0   
>848
>> dm-0 21.40 0.00   169.60  0   
>848
>> dm-1  0.00 0.00 0.00  0 
>0
>>
>> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>>0.600.000.050.450.00   98.90
>>
>> Device:tpskB_read/skB_wrtn/skB_read   
>kB_wrtn
>> sda   1.2016.80 0.00 84 
>0
>> dm-0  1.2016.80 0.00 84 
>0
>> dm-1  0.00 0.00 0.00  0 
>0
>>
>> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>>0.050.000.000.050.00   99.90
>>
>> Device:tpskB_read/skB_wrtn/skB_read   
>kB_wrtn
>> sda   8.00 0.00   100.20  0   
>501
>> dm-0  9.00 0.00   100.20  0   
>501
>> dm-1  0.00 0.00 0.00  0 
>0
>>
>> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>>0.450.000.350.050.00   99.15
>>
>> Device:tpskB_read/skB_wrtn/skB_read   
>kB_wrtn
>> sda   1.00 0.80 3.20  4
>16
>> dm-0  1.00 0.80 3.20  4
>16
>> dm-1  0.00 0.00 0.00  0 
>0
>>
>>
>> **dstop**
>> [root@correo ~]# dstat -cd --disk-util --disk-tps
>> total-cpu-usage -dsk/total- sda- -dsk/total-
>> usr sys idl wai hiq siq| read  writ|util|reads writs
>>   1   0  99   0   0   0|  20k   17k|0.14|   1 1
>>   0   0 100   0   0   0|   0 0 |   0|   0 0
>>   0   0 100   0   0   0|   0 0 |   0|   0 0
>>   0   0 100   0   0   0|   0 0 |   0|   0 0
>>   0   0 100   0   0   0|   0 0 |   0|   0 0
>>   0   0 100   0   0   0|   0 0 |   0|   0 0
>>   4   0  84  11   0   0|2512k  228k|52.3| 123 2
>>  31   4  58   7   0   0|1912k 1026k|38.1| 13223
>>   0   0  99   0   0   0|   0 0 |   0|   0 0
>>   1   0  99   1   0   0|4096B 3819k|22.5|   1   270
>>   0   0 100   0   0   0|   0 0 |   0|   0 0
>>  13   1  83   4   0   0| 148k 2304k|15.3|  18   214
>>   1   0  98   1   0   0| 140k  499k|9.70|  14 8
>>  26   5  69   0   0   0|   0  1260k|1.30|   046
>>  56   7  38   0   0   0|   0   204k|0.30|   012
>>  14  11  75   0   0   0|   0 0 |   0|   0 0
>>  22  10  68   0   0   0|   0 0 |   0|   0 0
>>  16  10  71   3   0   0| 192k   37k|14.0|  12 2
>>   0   0 100   0   0   0|   0 0 |   0|   0 0
>>   0   0 100   0   0   0|   0   152k|   0|   0 2
>>   0   0 100   0   0   0|   0 0 |   0|   0 0
>>   1   1  98   1   0   0|  16k 2569k|14.8|   1   207
>>   1   1  98 

[CentOS] No samba-vfs-glusterfs package

2020-06-29 Thread Strahil Nikolov via CentOS
Hello Community,
does anyone know if the samba-vfs-glusterfs package is available and, if yes, in 
which repo?

On CentOS 7 it was part of the base repo, but I can't find it.

Thanks in advance.

Best Regards,
Strahil Nikolov
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] halt versus shutdown

2020-06-14 Thread Strahil Nikolov via CentOS
Working with different Linux distributions makes life harder.
So far I have found out that 'poweroff' & 'reboot' have the same behaviour on 
Linux/Unix/BSDs.

Best Regards,
Strahil Nikolov

On 15 June 2020 5:22:28 GMT+03:00, John Pierce wrote:
>On Sun, Jun 14, 2020 at 6:19 PM Pete Biggs  wrote:
>
>>
>> > I'm quite sure that in original Berkeley Unix, as on the VAX
>11/780, halt
>> > was an immediate halt of the CPU without any process cleanup or
>file
>> system
>> > umounting or anything.   Early SunOS (pre-Solaris) was like this,
>too.
>> >
>> The SunOS 4.1.2 man page for halt says
>>
>>NAME
>>   halt - stop the processor
>>SYNOPSIS
>> /usr/etc/halt [ -oqy ]
>>DESCRIPTION
>> halt writes out any information pending to the disks and then
>> stops the processor.
>>  halt normally logs the system shutdown to the system log
>>   daemon, syslogd(8), and places a shutdown record in the
>>   login accounting file /var/adm/wtmp.
>>   These actions are inhibited if the -o or -q options are
>present.
>>
>> The BSD 4.3 (that ran on VAXen) man pages say largely similar things:
>>
>>
>>
>https://www.freebsd.org/cgi/man.cgi?query=halt&apropos=0&sektion=0&manpath=4.3BSD+Reno&arch=default&format=html
>>
>>
>ok, so it does a sync then hard halts, but it doesn't gracefully exit
>services, or unmount file systems.
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] CentOS7 and NFS

2020-05-16 Thread Strahil Nikolov via CentOS
On May 16, 2020 12:41:09 PM GMT+03:00, "Patrick Bégou" 
 wrote:
>Hi Barbara,
>
>Thanks for all these suggestions. Yes, jumbo frames are activated and I
>have only two 10Gb ethernet switch between the server and the client,
>connected with a monomode fiber.
>I saw yesterday that the client showing the problem had not the right
>MTU (1500 instead of 9000). I don't know why. I changed the MTU to 9000
>yesterday and I'm looking at the logs now to see if the problems occur
>again.
>
>I will try to increase the number of nfs daemon in a few day, to check
>each setup change one after the other. Because of covid19, I'm working
>from home so I should be really careful when changing the setup of the
>servers.
>
>On a cluster node I try to set "rsize=1048576,wsize=1048576,vers=4,tcp"
>(I cannot have a larger value for rsize/wsize) but comparison with the
>mount using default setup do not show significant improvements. I sent
>20GB to the server or 2x10GB (2 concurrent processes) with dd to be
>larger than the raid controller cache but lower than the  server and
>client RAM. It was just a short test this morning.
>
>Patrick
>
>On 15/05/2020 at 15:32, Barbara Krašovec wrote:
>> The number of threads has nothing to do with the number of cores on
>the machine. It depends on the I/O, network speed, type of workload
>etc.
>> We usually start with 32 threads and increase if necessary. 
>>
>> You can check the statistics with:
>> watch 'cat /proc/net/rpc/nfsd | grep th’
>>
>> Or you can check on the client
>> nfsstat -rc
>> Client rpc stats:
>> calls  retransauthrefrsh
>> 1326777974   0  1326645701
>>
>> If you see a large number of retransmissions, you should increase the
>number of threads.
>>
>> However, your problem could also be related to the filesystem or
>network.
>>
>> Do you have jumbo frames (if yes, you should have them on clients and
>server)? You might think about disabling flow control on the switch and
>on the network card. Are there a lot of dropped packets?
>>
>> For network tuning, check http://fasterdata.es.net/host-tuning/linux/
>>
>> Did you try to enable readahead (blockdev —setra) on the filesystem?
>>
>> On the client side, changing the mount options helps. The default
>read/write block size is quite little, increase it (rsize, wsize), and
>use noatime.
>>
>>
>> Cheers,
>> Barbara
>>
>>
>>
>>
>>
>>> On 15 May 2020, at 09:26, Patrick Bégou
> wrote:
>>>
>>> On 13/05/2020 at 15:36, Patrick Bégou wrote:
 On 13/05/2020 at 07:32, Simon Matter via CentOS wrote:
>> On 12/05/2020 at 16:10, James Pearson wrote:
>>> Patrick Bégou wrote:
 Hi,

 I need some help with NFSv4 setup/tuning. I have a dedicated
>nfs server
 (2 x E5-2620  8cores/16 threads each, 64GB RAM, 1x10Gb ethernet
>and 16x
 8TB HDD) used by two servers and a small cluster (400 cores).
>All the
 servers are running CentOS 7, the cluster is running CentOS6.

 From time to time on the server I get:

   kernel: NFSD: client xxx.xxx.xxx.xxx testing state ID
>with
  incorrect client ID

 And the client xxx.xxx.xxx.xxx freezes with:

   kernel: nfs: server x.legi.grenoble-inp.fr not
>responding,
  still trying
   kernel: nfs: server x.legi.grenoble-inp.fr OK
   kernel: nfs: server x.legi.grenoble-inp.fr not
>responding,
  still trying
   kernel: nfs: server x.legi.grenoble-inp.fr OK

 There is a discussion on RedHat7 support about this but only
>open to
 subscribers. Other searches with google do not provide  useful
 information.

 Do you have an idea how to solve these freeze states ?

 More generally I would be really interested with some
>advice/tutorials
 to improve NFS performances in this dedicated context. There
>are so
 many
 [different] things about tuning NFS available on the web that
>I'm a
 little bit lost (the opposite of the previous question). So if
>some one
 has "the tutorial"...;-)
>>> How many nfsd threads are you running on the server? - current
>count
>>> will be in /proc/fs/nfsd/threads
>>>
>>> James Pearson
>> Hi James,
>>
>> Thanks for your answer. I've configured 24 threads (for 16
>hardware
>> cores / 32 threads on the NFS server with these processors)
>>
>> But it seems that there are buffer setups to modify too when
>increasing
>> the threads number... It is not done.
>>
>> Load average on the server is below 1
> I'd be very careful with higher thread numbers than physical
>cores. NFS
> threads and so called CPU hyper/simultaneous threads are quite
>different
> things and it can hurt performance if not configured correctly.
>
 So you suggest to limit the setup to 16 daemons ? I'll try this
>evening.
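
(For the archives: on CentOS 7 the nfsd thread count can be inspected and made persistent like this — a sketch, assuming the stock nfs-utils layout:)

cat /proc/fs/nfsd/threads       # current thread count
# /etc/sysconfig/nfs — read by nfs-server.service on restart:
RPCNFSDCOUNT=16
systemctl restart nfs-server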


Re: [CentOS] kvm: C8 as guest on C6 host / huge delay while booting

2020-05-16 Thread Strahil Nikolov via CentOS
On May 16, 2020 6:09:22 PM GMT+03:00, Leon Fauster via CentOS 
 wrote:
>Am 11.05.20 um 15:59 schrieb Leon Fauster:
>> Since C8.1  kvm guests have a huge delay while booting on a kvm host 
>> based on C6. This delay was not present with C8.0. The "pause"
>happend
>> direct after the grub step. The VNC session shows only a "_"
>character.
>> 
>
>and this are the corresponding logs. The guest system continues to boot
>
>after ~5 minutes. At that point this appears
>
>May 16 16:44:14 ev kernel: kvm: 8458: cpu0 unhandled rdmsr: 0x140
>May 16 16:44:14 ev kernel: kvm: 8458: cpu0 disabled perfctr wrmsr: 0xc2
>
>data 0x
>May 16 16:44:14 ev kernel: kvm: 8458: cpu1 unhandled rdmsr: 0x140
>May 16 16:44:14 ev kernel: kvm: 8458: cpu2 unhandled rdmsr: 0x140
>May 16 16:44:14 ev kernel: kvm: 8458: cpu3 unhandled rdmsr: 0x140
>May 16 16:44:15 ev kernel: kvm: emulating exchange as write
>
>Any ideas?
>
>--
>Leon
>___
>CentOS mailing list
>CentOS@centos.org
>https://lists.centos.org/mailman/listinfo/centos

What is the output of:
systemd-analyze blame
systemd-analyze critical-chain
systemd-analyze plot > somefile

Best Regards,
Strahil Nikolov
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Understanding VDO vs ZFS

2020-05-03 Thread Strahil Nikolov via CentOS
On May 3, 2020 8:33:33 AM GMT+03:00, Erick Perez - Quadrian Enterprises 
 wrote:
>sorry corrections:
>For this test I created a 40GB lvm volume group with /dev/sdb and
>/dev/sdc
>then a 40GB LV
>then a 60GB VDO vol (for testing purposes)
>
>vdostats --verbose /dev/mapper/vdoas | grep -B6 'saving percent'
>output from just created vdoas
>
>[root@localhost ~]# vdostats --verbose /dev/mapper/vdoas | grep -B6
>'saving
>percent'
>physical blocks : 10483712
>  logical blocks  : 15728640
>  1K-blocks   : 41934848
>  1K-blocks used  : 4212024
>  1K-blocks available : 37722824
>  used percent: 10
>  saving percent  : 99
>[root@localhost ~]#
>
>FIRST copy CentOS-7-x86_64-Minimal-2003.iso (1.1G) to vdoas from source
>outside vdo volume
>[root@localhost ~]# vdostats --verbose /dev/mapper/vdoas | grep -B6
>'saving
>percent'
>  1K-blocks used  : 4721348
>  1K-blocks available : 37213500
>  used percent: 11
>  saving percent  : 9
>
>SECOND copy  CentOS-7-x86_64-Minimal-2003.iso (1.1G) to vdoas form
>source
>outside vdo volume
>#cp /root/CentOS-7-x86_64-Minimal-2003.iso
>/mnt/vdomounts/CentOS-7-x86_64-Minimal-2003-version2.iso
>  1K-blocks used  : 5239012
>  1K-blocks available : 36695836
>  used percent: 12
>  saving percent  : 52
>
>THIRD  copy  CentOS-7-x86_64-Minimal-2003.iso (1.1G) to
>vdoas form inside vdo volume to inside vdo volume
>  1K-blocks used  : 5248060
>  1K-blocks available : 36686788
>  used percent: 12
>  saving percent  : 67
>
>Then I did this a total of 9 more times to have 10 ISOs copied. Total
>data
>copied 10.6GB.
>
>
>Do note this:
>When using DF, it will show the VDO size, in my case 60G
>when using vdostats it will show the size of the LV, in my case 40G
>Remeber dedupe AND compression are enabled.
>
>The df -hT output shows the logical space occupied by these iso files
>as
>seen by the filesystem on the VDO volume.
>Since VDO manages a logical to physical block map, df sees logical
>space
>consumed according to the file system that resides on top of the VDO
>volume.
>vdostats --hu is viewing the physical block device as managed by VDO.
>Physically a single .ISO image is residing on the disk, but logically
>the
>file system thinks there are 10 copies, occupying 10.6GB.
>
>So at the end I have 10 .ISOs of 1086 1MB blocks (total 10860 1MB
>blocks)
>that yield these results:
>  1K-blocks used  : 5248212
>  1K-blocks available : 36686636
>  used percent: 12
>  saving percent  : 89
>
>So at the end it is using 5248212 1K blocks minus  4212024  initial
>used 1K
>blocks, gives (5248212 - 4212024) = 1036188 1K blocks / 1024 = about
>1012MB
>total.
>
>Hope this helps understanding where the space goes.
>
>BTW: Testing system is CentOS Linux release 7.8.2003 stock. with only
>"yum
>install vdo kmod-kvdo"
>
>History of commands:
>[root@localhost vdomounts]# history
>2  pvcreate /dev/sdb
>3  pvcreate /dev/sdc
>8  vgcreate -v -A y vgvol01 /dev/sdb /dev/sdc
>9  vgdisplay
>   13  lvcreate -l 100%FREE -n lvvdo01 vgvol01
>   14   yum install vdo kmod-kvdo
>   18  vdo create --name=vdoas --device=/dev/vgvol01/lvvdo01
>--vdoLogicalSize=60G --writePolicy=async
>   19  mkfs.xfs -K /dev/mapper/vdoas
>   20  ls /mnt
>   21  mkdir /mnt/vdomounts
>   22  mount /dev/mapper/vdoas /mnt//vdomounts/
>   26  vdostats --verbose /dev/mapper/vdoas | grep -B6 'saving percent'
>   28  cp /root/CentOS-7-x86_64-Minimal-2003.iso /mnt/vdomounts/ -vvv
>   29  vdostats --verbose /dev/mapper/vdoas | grep -B6 'saving percent'
>   30  cp /root/CentOS-7-x86_64-Minimal-2003.iso
>/mnt/vdomounts/CentOS-7-x86_64-Minimal-2003-version2.iso
>   31  vdostats --verbose /dev/mapper/vdoas | grep -B6 'saving percent'
>   33  cd /mnt/vdomounts/
>   35  cp CentOS-7-x86_64-Minimal-2003-version2.iso
>./CentOS-7-x86_64-Minimal-2003-version3.iso
>   36  vdostats --verbose /dev/mapper/vdoas | grep -B6 'saving percent'
>   37  df
>   39  vdostats --hu
>   40  ls -l --block-size=1MB /root/CentOS-7-x86_64-Minimal-2003.iso
>   41  df -hT
>   42  vdo status | grep Dedupl
>   43  vdostats --hu
>   44  vdostats
>   48  cp CentOS-7-x86_64-Minimal-2003-version2.iso
>./CentOS-7-x86_64-Minimal-2003-version4.iso
>   49  cp CentOS-7-x86_64-Minimal-2003-version2.iso
>./CentOS-7-x86_64-Minimal-2003-version5.iso
>   50  cp CentOS-7-x86_64-Minimal-2003-version2.iso
>./CentOS-7-x86_64-Minimal-2003-version6.iso
>   51  cp CentOS-7-x86_64-Minimal-2003-version2.iso
>./CentOS-7-x86_64-Minimal-2003-version7.iso
>   52  cp CentOS-7-x86_64-Minimal-2003-version2.iso
>./CentOS-7-x86_64-Minimal-2003-version8.iso
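
(A small helper in the same spirit — pulls the two interesting numbers out of vdostats, assuming the vdoas device from the test above:)

vdostats --verbose /dev/mapper/vdoas | awk '/1K-blocks used/ {u=$4} /saving percent/ {s=$4} END {print "used KiB: " u ", saving %: " s}'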
>  

Re: [CentOS] can't boot after volume rename

2020-01-07 Thread Strahil Nikolov via CentOS
 Get a CentOS install media, boot from it and select Troubleshooting. Then mount 
your root LV, the boot LV, /proc, /sys, /dev & /run (the last 4 with the "bind" 
mount option). Then chroot into the root LV's mount point, fix the grub menu 
and run "dracut -f --regenerate-all".
The last step is to reboot and test.
Best Regards,
Strahil Nikolov
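
In commands, the procedure is roughly (a sketch — the VG/LV and boot partition names are illustrative):

mount /dev/mapper/newvg-root /mnt         # the renamed root LV
mount /dev/sda1 /mnt/boot                 # hypothetical /boot partition
for d in /proc /sys /dev /run; do mount --bind $d /mnt$d; done
chroot /mnt
# update the old VG name in /etc/fstab and /etc/default/grub, then:
grub2-mkconfig -o /boot/grub2/grub.cfg
dracut -f --regenerate-all
exit
reboot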

On Monday, 6 January 2020, 17:05:54 GMT-5, Paul Amaral via 
CentOS  wrote:  
 
 I renamed my volume group with vgrename; however, I didn't complete the other steps,
mainly updating fstab and the initramfs. Once I booted, I was dropped into the
Dracut shell. From here I can see the newly renamed VG and I can run lvm lvscan
as well as activate it with lvm vgchange -ay. 

 

However I can't figure out what to do next, I'm assuming I need to
regenerate the initramfs and then boot to change grub? Could someone point
me in the right direction to recovering a FS from Dracut, or other means,
once the volume group name was changed.

 

TIA,

Paul 

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos
  
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Centos-8 Minimal iso?

2019-09-28 Thread Strahil Nikolov via CentOS
On September 27, 2019 4:57:42 PM GMT+03:00, Jay Beattie - local 
 wrote:
>Is there a minimal CentOS 8 iso image available like CentOS 7?
>
>___
>CentOS mailing list
>CentOS@centos.org
>https://lists.centos.org/mailman/listinfo/centos

The nearest mirror has a boot image:

http://centos.uni-sofia.bg/centos/8/isos/x86_64/CentOS-8-x86_64-1905-boot.iso

Best Regards,
Strahil Nikolov
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] CentOS linux VS CentOS stream, which to choose for learning?

2019-09-28 Thread Strahil Nikolov via CentOS
On September 28, 2019 3:36:10 AM GMT+03:00, Yang Guo  
wrote:
>Hi,
>I am a student and would like to use CentOS to learn how to use
>Linux.
>Two versions of CentOS available: CentOS Linux and CentOS Stream.
>
>Which one is better for learning Linux?
>___
>CentOS mailing list
>CentOS@centos.org
>https://lists.centos.org/mailman/listinfo/centos

Hi Yang,

Every distro is OK for learning purposes.
The main difference is:
CentOS 8 is RHEL 8 stripped of any proprietary (licensed) stuff and built 
based on Red Hat's source rpms.
CentOS Stream should be a mixture of Fedora (the development ground for future RHEL 
technology - highly unstable) and pure CentOS. As per my understanding, any 
future features for RHEL will first be deployed on CentOS Stream before being moved 
to RHEL.

Still, the above could have some gaps - so do not take anything for granted.
I would recommend you to start with CentOS 7.7 (quite stable and 
reliable) and later move to v8.

Best Regards,
Strahil Nikolov
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos