[CentOS-docs] [Gitblit] arrfab pushed 1 commits => websites/centos.org.git

2019-01-09 Thread Gitblit
https://git.centos.org/summary/websites!centos.org.git

>---
 master branch updated (1 commits)
>---

 Fabian Arrotin 
 Thursday, January 10, 2019 07:55 +

 Switched rss js from FeedEk to https://github.com/sdepold/jquery-rss

 
https://git.centos.org/commit/websites!centos.org.git/5b39063b7d103ed708c629aad9fde775b73f79cf
___
CentOS-docs mailing list
CentOS-docs@centos.org
https://lists.centos.org/mailman/listinfo/centos-docs


Re: [CentOS] Help finishing off Centos 7 RAID install

2019-01-09 Thread Keith Keller
On 2019-01-09, Gordon Messmer  wrote:
> On 1/9/19 2:30 AM, Gary Stainburn wrote:
>
>> 2) is putting SWAP in a RAID a good idea? Will it help, will it cause
>> problems?
>
> The only "drawback" that I'm aware of is that RAID consistency checks 
> become meaningless, because it's common for swap writes to be canceled 
> before complete, in which case one disk will have the page written but 
> the other won't.  This is by design, and considered the optimal 
> operation.  However, consistency checks don't exclude blocks used for 
> swap, and they'll typically show mismatched blocks.

If the swap is RAID1 on its own partitions (e.g., sda5/sdb5), then
CHECK_DEVS in /etc/sysconfig/raid-check can be configured to check
only specific devices.
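As an illustration, the relevant part of /etc/sysconfig/raid-check on
CentOS 7 might look like the fragment below (the md device names are
hypothetical, matching the sda5/sdb5 swap layout above; check the
comments shipped in the file on your system):

# /etc/sysconfig/raid-check (fragment)
ENABLED=yes
CHECK=check
# Either list only the arrays you want scrubbed ...
CHECK_DEVS="md0 md1"
# ... or leave CHECK_DEVS empty and skip the swap array instead:
#SKIP_DEVS="md2"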

--keith

-- 
kkel...@wombat.san-francisco.ca.us


___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] systemd

2019-01-09 Thread Jonathan Billings
On Wed, Jan 09, 2019 at 06:00:31PM +0100, Simon Matter via CentOS wrote:
> Maybe things _could_ be done the right way with systemd, but it doesn't
> happen because it quickly starts to be very complex and it's a lot of work
> to do it for a complete distribution. It just doesn't happen - or at least
> did not happen in all the years since its introduction.

There are a couple of ways that systemd can handle service startup so
that dependent services can gracefully start up after it.  One way
is to have systemd open the socket, then hand it to the service when
it is ready.  This requires quite a bit of hacking and I don't think
it is as reliable; it's more the inetd way of doing things.

Another is to have the code send a message when it is ready.  This
isn't really that complicated, you can look at the change in
postgresql's git here:

https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=7d17e683fcc28a1b371c7dd02935728cd2cbf9bf

Basically, when the database is ready, it calls the C function:
  sd_notify(0, "READY=1");

and when it's shutting down, it runs:
  sd_notify(0, "STOPPING=1");

To be honest, that's not too complicated.  It does require minor
changes to the code to support systemd, but you can replace idle loops
in shell scripts with a smarter database (which knows when it is
ready) telling PID 1 that it is ready.
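For the curious, the notification itself is just a datagram sent to the
socket systemd names in the NOTIFY_SOCKET environment variable, so the
mechanism can be sketched without linking against libsystemd.  This
Python sketch is illustrative only (not code from the thread, and not a
substitute for sd_notify(3)):

```python
import os
import socket


def sd_notify(state: str) -> bool:
    """Send a systemd service notification such as "READY=1" or
    "STOPPING=1" to the datagram socket named in $NOTIFY_SOCKET.
    Returns False when not running under a Type=notify unit."""
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return False
    # systemd announces abstract-namespace sockets with a leading '@';
    # the kernel address for those starts with a NUL byte instead.
    if addr.startswith("@"):
        addr = "\0" + addr[1:]
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
        sock.sendto(state.encode("utf-8"), addr)
    return True
```

A daemon would call sd_notify("READY=1") once it can actually serve
requests, which is exactly the point at which systemd releases units
ordered After= it.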

-- 
Jonathan Billings 


[CentOS] high kworker CPU usage in 3.10.0-957 w/ Xorg nouveau driver?

2019-01-09 Thread Sean
Hi all,

I have a number of Gnome/X desktop workstations with NVidia GeForce GT
1030 adapters, dual monitors, Core i7-3770 quad-core hyper-threaded
CPUs, and 32GB of RAM.  Most (I haven't checked them all yet) are
exhibiting problems that include significant sluggishness with mouse
movement and typing, as well as screen rendering problems, since
upgrading from kernel 3.10.0-862.14.4.el7.x86_64 to
3.10.0-957.1.3.el7.x86_64.  The users have seen this behavior after
logging into Gnome, but without any additional applications running
(Chrome/Firefox/LibreOffice, etc.).  I can see in top that there are
multiple kworker processes consuming a large amount of CPU time, and
unusually high load averages - in the 5-7 range on the 5-minute
average, where the normal load average would be between 1 and 2 for
these users.  At one point, while troubleshooting with a user, I was
logged in remotely while the user was working on the desktop when it
became completely unresponsive.  /var/log/messages had nouveau
messages like:

kernel: nouveau: evo channel stalled
kernel: nouveau :01:00.0: disp: chid 1 mthd  data 
10003000 
kernel: nouveau :01:00.0: DRM: base-1: timeout
kernel: nouveau :01:00.0: DRM: core notifier timeout

Those messages might be meaningless, but they are abundant in the
logs.  For grins, before rebooting, I attempted to stop and start GDM.
Both operations seemed successful; I verified all processes owned by
the user were gone, and asked him to log in again, but he reported his
screens still looked like they did before I restarted GDM and that he
didn't have a login screen.

Users are currently booting their systems to the 3.10.0-862 kernel, and
this problem does not present itself.  I can also add that running the
proprietary nvidia driver (from nvidia.com, not elrepo) version 410.78
does not produce this problem.  I config manage all these desktops
with Puppet and they were all built by the same kickstart file.
The nvidia driver is not purposefully managed by Puppet; I just
happened to be experimenting with it on my workstation.

Before I load the proprietary driver on all the problematic systems, I
was hoping someone on the list might have some insight or suggestions.

Thanks!

--Sean


Re: [CentOS] systemd

2019-01-09 Thread Gordon Messmer

On 1/9/19 9:00 AM, Simon Matter via CentOS wrote:

Maybe things_could_  be done the right way with systemd, but it doesn't
happen because it quickly starts to be very complex and it's a lot of work
to do it for a complete distribution.



If you've looked at the sysv init script for postgresql, you know that 
that statement describes both init systems.


Systems engineering is hard.  It's fashionable to blame systemd, but 
it's not systemd's fault that there's a delay between the point at which 
postgresql forks and the point at which it's available for use.  SysV 
didn't magically solve that problem. Someone had to specifically write a 
delay loop in the init script to make the system work reliably, 
beforehand.  PostgreSQL isn't alone in that.  Other services needed 
their own hacks.  And collectively, "Maybe things _could_ be done the 
right way with SysV, but it doesn't happen because it quickly starts to 
be very complex and it's a lot of work to do it for a complete 
distribution."




Re: [CentOS] Help finishing off Centos 7 RAID install

2019-01-09 Thread Gordon Messmer

On 1/9/19 2:30 AM, Gary Stainburn wrote:

1) The big problem with this is that it is dependent on sda for booting.  I
did find an article on how to set up boot loading on multiple HDDs,
including cloning /boot/efi but I now can't find it.  Does anyone know of a
similar article?



Use RAID1 for /boot/efi as well.  The installer should get the details 
right.




2) is putting SWAP in a RAID a good idea? Will it help, will it cause
problems?



It'll be moderately more reliable.  If you have swap on a non-redundant 
disk and the kernel tries to read it, bad things (TM) will happen.


The only "drawback" that I'm aware of is that RAID consistency checks 
become meaningless, because it's common for swap writes to be canceled 
before they complete, in which case one disk will have the page written but 
the other won't.  This is by design, and considered the optimal 
operation.  However, consistency checks don't exclude blocks used for 
swap, and they'll typically show mismatched blocks.




Re: [CentOS] systemd

2019-01-09 Thread Jonathan Billings
On Wed, Jan 09, 2019 at 12:04:29PM -0500, Steve Clark wrote:
> Hmm...
> I don't see that in the postgresql.service file - this is CentOS Linux 
> release 7.5.1804 (Core)
> postgresql-server-9.2.24-1.el7_5.x86_64
> 
> from /usr/lib/systemd/system/postgresql.service
> ...
> [Service]
> Type=forking
> 
> User=postgres
> Group=postgres

You're right!  My mistake.

I was looking at the systemd service for Postgresql 10 (check out the
rh-postgresql10-postgresql-server package in SCL).  It seems that
they've managed to get sd_notify notification working in version 10, but
it's still using Type=forking in version 9. 

By the way, this isn't really a systemd issue -- even with sysvinit
you'd be stuck trying to figure out when the service was *really* up
in a shell script or something.  At least now there's a mechanism to
tell the startup service that the service has actually started, so
proper ordering of services can be automatically performed, rather
than stringing together a collection of shell scripts.

-- 
Jonathan Billings 


Re: [CentOS] systemd

2019-01-09 Thread Valeri Galtsev




On 1/9/19 11:00 AM, Simon Matter via CentOS wrote:

On Wed, Jan 09, 2019 at 10:43:38AM -0500, Steve Clark wrote:

I am trying to understand what After= means in a unit file. Does it
mean after the specified target is up and operational or only that
the target has been started?

I have something that needs postgres but postgres needs to be
operational not just started. Sometimes it can take a bit for
postgres to become operational.


I believe that the postgresql service has Type=notify in its service
definition, which means that it will notify systemd when it is
operational.  This means that if you have a service that has
After=postgresql.service, systemd should wait until after the
postgresql service notifies systemd that it is operational before your
service will be started.

If your service is starting and unable to connect to postgresql, then
I would say that's a bug in postgresql -- it shouldn't be notifying
systemd that it is operational until it actually is.


This is, in fact, one of the points why I'm very unhappy with systemd and
the way it is implemented here and most likely in most distributions.

Maybe things _could_ be done the right way with systemd, but it doesn't
happen because it quickly starts to be very complex and it's a lot of work
to do it for a complete distribution. It just doesn't happen - or at least
did not happen in all the years since its introduction.


Yes, the introduction of systemd earned Linux a lot of refugees. In my 
worst moments I feel that maybe that was its goal. But then I think about 
the split of refugees from Linux between UNIX descendants (FreeBSD, NetBSD, 
etc.) and MS products, and I am not quite certain that was the goal (though 
I do remember the MS alliance with RedHat...), but if it was, I doubt the 
refugee split was in MS's favor (though, as one says, something is better 
than nothing).


I hope this didn't come across as a rant; I should probably have used rant 
tags ;-)


Valeri



In this example, PG gets just started with "pg_ctl start" and that's it.

Regards,
Simon




--

Valeri Galtsev
Sr System Administrator
Department of Astronomy and Astrophysics
Kavli Institute for Cosmological Physics
University of Chicago
Phone: 773-702-4247



Re: [CentOS] systemd

2019-01-09 Thread Steve Clark
On 01/09/2019 11:36 AM, Jonathan Billings wrote:
> On Wed, Jan 09, 2019 at 10:43:38AM -0500, Steve Clark wrote:
>> I am trying to understand what After= means in a unit file. Does it
>> mean after the specified target is up and operational or only that
>> the target has been started? 
>>
>> I have something that needs postgres but postgres needs to be
>> operational not just started. Sometimes it can take a bit for
>> postgres to become operational. 
> I believe that the postgresql service has Type=notify in its service
> definition, which means that it will notify systemd when it is
> operational.  This means that if you have a service that has
> After=postgresql.service, systemd should wait until after the
> postgresql service notifies systemd that it is operational before your
> service will be started.
>
> If your service is starting and unable to connect to postgresql, then
> I would say that's a bug in postgresql -- it shouldn't be notifying
> systemd that it is operational until it actually is.
>
Hmm...
I don't see that in the postgresql.service file - this is CentOS Linux release 
7.5.1804 (Core)
postgresql-server-9.2.24-1.el7_5.x86_64

from /usr/lib/systemd/system/postgresql.service
...
[Service]
Type=forking

User=postgres
Group=postgres
...

Regards,
Steve


Re: [CentOS] systemd

2019-01-09 Thread Simon Matter via CentOS
> On Wed, Jan 09, 2019 at 10:43:38AM -0500, Steve Clark wrote:
>> I am trying to understand what After= means in a unit file. Does it
>> mean after the specified target is up and operational or only that
>> the target has been started?
>>
>> I have something that needs postgres but postgres needs to be
>> operational not just started. Sometimes it can take a bit for
>> postgres to become operational.
>
> I believe that the postgresql service has Type=notify in its service
> definition, which means that it will notify systemd when it is
> operational.  This means that if you have a service that has
> After=postgresql.service, systemd should wait until after the
> postgresql service notifies systemd that it is operational before your
> service will be started.
>
> If your service is starting and unable to connect to postgresql, then
> I would say that's a bug in postgresql -- it shouldn't be notifying
> systemd that it is operational until it actually is.

This is, in fact, one of the points why I'm very unhappy with systemd and
the way it is implemented here and most likely in most distributions.

Maybe things _could_ be done the right way with systemd, but it doesn't
happen because it quickly starts to be very complex and it's a lot of work
to do it for a complete distribution. It just doesn't happen - or at least
did not happen in all the years since its introduction.

In this example, PG gets just started with "pg_ctl start" and that's it.

Regards,
Simon



Re: [CentOS] [SOLVED] upg. CentOS 7.5 to 7.6: unable to mount smb shares - samba NT domain member using ldap

2019-01-09 Thread Miroslav Geisselreiter

On 7.1.2019 at 12:36, Miroslav Geisselreiter wrote:

On 5.1.2019 at 0:46, Gordon Messmer wrote:

On 1/3/19 11:46 PM, Miroslav Geisselreiter wrote:


Previously I deleted all files from /var/lib/samba, then set the ldap 
admin password:

smbpasswd -W

Then I re-joined the DC; it did not help.



Shame.  I'm not really sure what else to try, beyond my previous 
suggestion that it doesn't make sense to be both a domain member and 
use an ldap passdb backend.


Try reverting the configuration file to the last known-good state.  
Leave the domain.  Change "security = user".  I'd expect that your 
system would work without any interactions with the DC.


I found a solution which solves only part of my problem and is not 
very "clean".


When I run winbind with these options, clients which are members of my 
NT4DOMAIN are now able to mount smb shares from the NT4MEMBER server:


# winbindd -i -d 3 -S -n --option="netbios name"=NT4DOMAIN 
--option="ntlm auth"=yes


The option "netbios name"=NT4DOMAIN overrides this option from smb.conf: 
"netbios name"=NT4MEMBER


Nevertheless I am not able to mount smb shares from clients which are 
not members of NT4DOMAIN.



SOLVED:

I had to change only two parameters in smb.conf:
security = user
ntlm auth = yes

Everything works now like before the upgrade, and I do not even run the 
winbind daemon.


Thanks to all for help and hints.



Re: [CentOS] Help finishing off Centos 7 RAID install

2019-01-09 Thread Simon Matter via CentOS
> I've just finished installing a new Bacula storage server. Prior to doing
> the
> install I did some research and ended up deciding to do the following
> config.
>
> 6x4TB drives
> /boot/efi efi_fs  sda1
> /boot/efi_copyefi_fs  sdb1
> /boot xfs RAID1   sda2 sdb2
> VGRAID6   all drives containing
>   SWAP
>   /
>   /home
>   /var/bacula
>
> Questions:
>
> 1) The big problem with this is that it is dependent on sda for booting.
> I
> did find an article on how to set up boot loading on multiple HDDs,
> including cloning /boot/efi but I now can't find it.  Does anyone know of
> a
> similar article?

I also spent (wasted?) quite some time on this issue because I couldn't
believe things don't work as nicely with EFI as they did before. The
designers of EFI obviously forgot that some people might want to boot from
software RAID in a redundant way.

I ended up with a design similar to yours; my fstab has this:
/dev/md0        /boot            xfs    defaults                   0 0
/dev/nvme0n1p1  /boot/efi        vfat   umask=0077,shortname=winnt 0 0
/dev/nvme1n1p1  /boot/efi.backup vfat   umask=0077,shortname=winnt 0 0

Then in my package update tool I have a hook which syncs like this:

# The primary ESP and its backup copy (mounted via the fstab above)
EFISRC="/boot/efi"
EFIDEST="${EFISRC}.backup"

# Mirror the primary ESP to the backup, but only when both are mounted
efisync() {
  if [ -d "${EFISRC}/EFI" -a -d "${EFIDEST}/EFI" ]; then
    rsync --archive --delete --verbose "${EFISRC}/EFI" "${EFIDEST}/"
  fi
}

BTW, another method could be to put /boot/efi on RAID1 with metadata
version 1.0, but that doesn't seem to be reliable: it works for some
systems but fails on others, according to reports I read.

>
> 2) is putting SWAP in a RAID a good idea? Will it help, will it cause
> problems?

No problem at all, and I don't want to lose a swap device if a disk fails.
So putting it on RAID is the correct way, IMHO.

Regards,
Simon



Re: [CentOS] systemd

2019-01-09 Thread Jonathan Billings
On Wed, Jan 09, 2019 at 10:43:38AM -0500, Steve Clark wrote:
> I am trying to understand what After= means in a unit file. Does it
> mean after the specified target is up and operational or only that
> the target has been started? 
> 
> I have something that needs postgres but postgres needs to be
> operational not just started. Sometimes it can take a bit for
> postgres to become operational. 

I believe that the postgresql service has Type=notify in its service
definition, which means that it will notify systemd when it is
operational.  This means that if you have a service that has
After=postgresql.service, systemd should wait until after the
postgresql service notifies systemd that it is operational before your
service will be started.

If your service is starting and unable to connect to postgresql, then
I would say that's a bug in postgresql -- it shouldn't be notifying
systemd that it is operational until it actually is.
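For the dependent service, that combination looks roughly like the unit
fragment below (the unit name and ExecStart path are hypothetical; note
that After= only orders startup, while Requires=/Wants= adds the
dependency itself):

# /etc/systemd/system/myapp.service (sketch)
[Unit]
Description=App that needs a running PostgreSQL
Requires=postgresql.service
After=postgresql.service

[Service]
ExecStart=/usr/local/bin/myapp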

-- 
Jonathan Billings 


[CentOS] systemd

2019-01-09 Thread Steve Clark
Hi List,

I am trying to understand what After= means in a unit file. Does it mean after 
the specified target is up and operational or
only that the target has been started?

I have something that needs postgres but postgres needs to be operational not 
just started. Sometimes it can take a bit
for postgres to become operational.

Thanks,
Steve



Re: [CentOS] CentOS 7.6 1810 vs. VirtualBox : bug with keyboard layout selection

2019-01-09 Thread Klaus Kolle
On 1/4/19 3:55 PM, Jonathan Billings wrote:
> After some thought, it makes sense to use VirtualBox for teaching,
> since many people will probably start testing Linux using VirtualBox
> on Windows or macOS.  Too bad the kernel bugs will prevent CentOS
> 7.6.1810 from being useful there.
It makes sense to use CentOS on top of VirtualBox.

In fact I deliver a "Software-in-a-box" package consisting of a CentOS
installation on top of VirtualBox for my first-semester students in
software development.

The package includes all the necessary software for the courses 2 years
ahead.

This removes most of my support problems keeping the Eclipse C/C++
environment stable on Windows and Mac, which I do not know very well and
therefore do not offer support for.

|<

-- 
Med venlig hilsen

Klaus Kolle

Teknikumingeniør, B.Sc.EE.  e-mail: kl...@kolle.dk
Master of IT                www: www.kolle.dk
Kollundvej 5                Telephone: +4586829682 / +4522216044
DK-8600 Silkeborg, Denmark

"One should not attribute to conspiracy what can adequately be
explained by incompetence"
Poul Henning Kamp

Planning is thoughts about something one intends to do sometime in the
future, if circumstances permit.
Klaus Kolle 2006

Perfection is achieved not when nothing more to add, but when there is
nothing more left to take away.
Antoine de Saint-Exupery





Re: [CentOS] Kickstart finishing Installation

2019-01-09 Thread Pete Biggs


> which switch is the right one for CentOS 7.6 to finish the
> installation?
> Every installation needs an acknowledgement at the end, when the
> network configuration is shown, while installing with graphics.
> 

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/installation_guide/sect-kickstart-syntax

Make sure your kickstart file has the 'reboot' command in it.
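That is, a line like this in the command section of the kickstart file
(a sketch; see the linked syntax reference for the alternatives):

# Kickstart command section (fragment).  Without one of
# reboot/poweroff/halt/shutdown, anaconda waits for a final
# acknowledgement at the end of the install.
reboot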

P.




Re: [CentOS] NFS deny access

2019-01-09 Thread James Pearson
Thomas Plant via CentOS wrote:
> 
> Hello all,
> 
> I have an NFS server where I want to give access to a specific address to a
> specific path.
> Problem is that I have some other shares active which I do not want the
> specific IP to access.
> 
> The /etc/exports looks like the following:
> 
> /nfs/Share1 10.10.*(rw)
> /nfs/Share2 10.10.*(rw)
> /kdnbckp/CS21   10.10.193.43(rw)
> 
> The client on the last line (IP 10.10.193.43) I'd like to exclude from
> mounting the first two shares.
> 
> How can I do this? 'man exports' does not give any hint if this is 
> possible.

I don't know of an option to exclude a single host - but you might be 
able to do something clever with the 'refer' option ...

BTW, the export man page says that you shouldn't use wildcards in IP 
network addresses - i.e. instead of exporting to '10.10.*', you should 
use '10.10.0.0/16'

So something like the following may work:

  /nfs/Share1   10.10.193.43(rw,refer=/dummy@127.0.0.1) 10.10.0.0/16(rw)
  /nfs/Share2   10.10.193.43(rw,refer=/dummy@127.0.0.1) 10.10.0.0/16(rw)
  /kdnbckp/CS21 10.10.193.43(rw)

The above _should_ cause the client at 10.10.193.43 to attempt to mount 
"/dummy" from itself when it tries to mount either /nfs/Share1 or 
/nfs/Share2 from the server - and if "/dummy" isn't exported from itself 
(or if NFS isn't running), then the mount will fail ...

However, I believe the refer= option is NFSv4 only - so if the client 
attempts an NFSv3 mount, it will successfully mount from the server (and 
not use the refer mount point) - i.e. to make sure this doesn't happen, 
you will need to disable NFSv3 (and NFSv2) access - e.g see:

  https://opsech.io/posts/2016/Jan/26/nfsv4-only-on-centos-72.html
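On CentOS 7 the usual place for that is /etc/sysconfig/nfs, roughly as
below (the -N flag is per rpc.nfsd(8); verify the exact knobs on your
release, since the linked post covers 7.2):

# /etc/sysconfig/nfs (fragment) - serve NFSv4 only
RPCNFSDARGS="-N 2 -N 3"
# rpc.mountd is only needed for v2/v3; it also accepts -N to match.
# Then: systemctl restart nfs-server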

However, the above is all a bit messy - so I would be interested if you 
come across a simpler way of achieving this ...

James Pearson


[CentOS] Kickstart finishing Installation

2019-01-09 Thread Ralf Prengel
Hallo,
which switch is the right one for CentOS 7.6 to finish the installation?
Every installation needs an acknowledgement at the end, when the network 
configuration is shown, while installing with graphics.
Thanks
Ralf

Sent from my iPad


[CentOS] Squashfs as rootfs

2019-01-09 Thread Marcin Trendota
Hello.

I'm trying to add an option to the grub menu (amongst other options) to boot
from a squashfs image. But 'root=live:/path/to/file' doesn't work. I didn't
find anything useful on the internet. Can anybody point me in the right
direction?

Maybe a better choice is to replace grub with isolinux?
I have a working solution booting an iso with squashfs through PXE, but
I don't know how to do this in grub.
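For comparison, a dracut-style live entry in grub looks roughly like
this (all paths, the device, and the kernel version are hypothetical;
root=live: and the rd.live.* options are handled by dracut's
dmsquash-live module, which must be built into the initramfs for this
to work at all):

menuentry 'CentOS (squashfs image)' {
    # rd.live.dir/rd.live.squashimg locate the image on the root=live: device
    linux16 /vmlinuz-3.10.0-957.el7.x86_64 root=live:/dev/sda1 \
        rd.live.dir=/images rd.live.squashimg=rootfs.squashfs rd.live.image ro
    initrd16 /initramfs-live.img
}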

TIA
-- 
Marcin Trendota


[CentOS] NFS deny access

2019-01-09 Thread Thomas Plant via CentOS

Hello all,

I have an NFS server where I want to give access to a specific address to a 
specific path.
Problem is that I have some other shares active which I do not want the 
specific IP to access.


The /etc/exports looks like the following:

/nfs/Share1     10.10.*(rw)
/nfs/Share2     10.10.*(rw)
/kdnbckp/CS21   10.10.193.43(rw)

The client on the last line (IP 10.10.193.43) I'd like to exclude from 
mounting the first two shares.


How can I do this? 'man exports' does not give any hint if this is possible.

Thanks,
Thomas


[CentOS] Help finishing off Centos 7 RAID install

2019-01-09 Thread Gary Stainburn
I've just finished installing a new Bacula storage server. Prior to doing the 
install I did some research and ended up deciding to do the following 
config.

6x4TB drives
/boot/efi   efi_fs  sda1
/boot/efi_copy  efi_fs  sdb1
/boot   xfs RAID1   sda2 sdb2 
VG  RAID6   all drives containing
SWAP
/   
/home   
/var/bacula

Questions:

1) The big problem with this is that it is dependent on sda for booting.  I 
did find an article on how to set up boot loading on multiple HDDs, 
including cloning /boot/efi but I now can't find it.  Does anyone know of a 
similar article?

2) is putting SWAP in a RAID a good idea? Will it help, will it cause 
problems? 


Re: [CentOS-virt] [QEMU-KVM] Centos guest VM freezing

2019-01-09 Thread John Haxby



> On 9 Jan 2019, at 09:50, Akshar Kanak  wrote:
> 
> Hi  
> Thanks for the reply 
> 
> We have seen the same guest VM freezing on a vmware ESXi machine also, so we 
> were interested in knowing the internal condition of the guest vm when the 
> freeze happened.
> How can we analyse the core file generated by "virsh dump"?
> 

Use crash(8).

When you've discovered the internal state of the comatose guest, what are you 
going to do with it?   Find out when it was fixed in the years since you 
updated, and update to that version?   Add it to the list of rediscovered bugs?  
Wait to see if it happens again, before you are pwned through one of the many, 
many security holes fixed since you last updated?

jch

> Thanks and regards
> Akshar
> 
> On Wed, Jan 9, 2019 at 1:58 PM Manuel Wolfshant  
> wrote:
> On 1/9/19 10:24 AM, Akshar Kanak wrote:
>> Dear team 
>> I am running a centos guest VM  which freezes for every few days . The 
>> qemu-kvm on  shows 100% cpu utilization.
>> Ping to the guest might work or may not work .Please can you tell me 
>> what approach can i take to debug it .
>> using "virsh dump" I can dump the core of the  guest vm but I am not 
>> sure how to analyse it . 
>> Guest Centos VM : "Linux GUESTCentOS70 3.10.0-123.4.4.el7.x86_64 #1 SMP 
>> Fri Jul 25 05:07:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux"
>>"CentOS Linux release 7.0.1406 (Core)"
>>1 vcpu and 2 GB ram 
>>
>>  Host machine : "Linux HOST 3.10.51-1.el6.elrepo.x86_64 #1 SMP Fri Aug 1 
>> 13:14:11 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux"
>>  "CentOS release 6.5 (Final)"
>>  qemu-kvm package used : qemu-kvm-0.12.1.2-2.415.el6_5.10.x86_64
>>  
>>  Thanks and regards
>>  Akshar
> 
> I'd say that you should start by updating the OS on both host and guest. Both 
> OSes are heavily outdated, you lack YEARS of updates.
> 
> 
> 
> Regards
> 
> 
> 
> Manuel
> 

___
CentOS-virt mailing list
CentOS-virt@centos.org
https://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] [QEMU-KVM] Centos guest VM freezing

2019-01-09 Thread Manuel Wolfshant

On 1/9/19 11:50 AM, Akshar Kanak wrote:

Hi
Thanks for the reply

We have seen the same guest VM freezing on vmware ESXi machine also ,


No wonder, given that the guest remains 5 years out of date even when 
using a different hypervisor. And that's leaving aside that the long-term 
kernel installed from ElRepo that you are using is also more than 4 
years out of date.


Please update the OS(es) to the currently supported OS versions (that is, 
7.6 / 6.10) and verify whether the problems persist. But you've already been 
told that by several persons...



so we were interested in knowing the internal condition of the guest 
vm when the freeze happened.

How can we analyse the core file generated by "virsh dump"?


http://bfy.tw/LhMS might help with that


Regards,

    manuel





Thanks and regards
Akshar

On Wed, Jan 9, 2019 at 1:58 PM Manuel Wolfshant 
<wo...@nobugconsulting.ro> wrote:


On 1/9/19 10:24 AM, Akshar Kanak wrote:

Dear team
    I am running a centos guest VM  which freezes for every few
days . The qemu-kvm on  shows 100% cpu utilization.
    Ping to the guest might work or may not work .Please can you
tell me what approach can i take to debug it .
    using "virsh dump" I can dump the core of the  guest vm but I
am not sure how to analyse it .
    Guest Centos VM : "Linux GUESTCentOS70
3.10.0-123.4.4.el7.x86_64 #1 SMP Fri Jul 25 05:07:12 UTC 2014
x86_64 x86_64 x86_64 GNU/Linux"
"CentOS Linux release 7.0.1406 (Core)"
1 vcpu and 2 GB ram
Host machine : "Linux HOST 3.10.51-1.el6.elrepo.x86_64 #1 SMP Fri
Aug 1 13:14:11 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux"
"CentOS release 6.5 (Final)"
qemu-kvm package used : qemu-kvm-0.12.1.2-2.415.el6_5.10.x86_64
Thanks and regards
Akshar


I'd say that you should start by updating the OS on both host and
guest. Both OSes are heavily outdated, you lack YEARS of updates.





Re: [CentOS-virt] [QEMU-KVM] Centos guest VM freezing

2019-01-09 Thread Akshar Kanak
Hi
Thanks for the reply

We have seen the same guest VM freezing on a vmware ESXi machine also, so we
were interested in knowing the internal condition of the guest vm when the
freeze happened.
How can we analyse the core file generated by "virsh dump"?

Thanks and regards
Akshar

On Wed, Jan 9, 2019 at 1:58 PM Manuel Wolfshant 
wrote:

> On 1/9/19 10:24 AM, Akshar Kanak wrote:
>
> Dear team
> I am running a centos guest VM  which freezes for every few days . The
> qemu-kvm on  shows 100% cpu utilization.
> Ping to the guest might work or may not work .Please can you tell me
> what approach can i take to debug it .
> using "virsh dump" I can dump the core of the  guest vm but I am not
> sure how to analyse it .
> Guest Centos VM : "Linux GUESTCentOS70 3.10.0-123.4.4.el7.x86_64 #1
> SMP Fri Jul 25 05:07:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux"
>   "CentOS Linux release 7.0.1406 (Core)"
>   1 vcpu and 2 GB ram
>
> Host machine : "Linux HOST 3.10.51-1.el6.elrepo.x86_64 #1 SMP Fri Aug 1
> 13:14:11 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux"
> "CentOS release 6.5 (Final)"
> qemu-kvm package used : qemu-kvm-0.12.1.2-2.415.el6_5.10.x86_64
> Thanks and regards
> Akshar
>
>
> I'd say that you should start by updating the OS on both host and guest.
> Both OSes are heavily outdated, you lack YEARS of updates.
>
>
> Regards
>
>
> Manuel
>


Re: [CentOS-virt] [QEMU-KVM] Centos guest VM freezing

2019-01-09 Thread Manuel Wolfshant

On 1/9/19 10:24 AM, Akshar Kanak wrote:

Dear team
    I am running a centos guest VM  which freezes for every few days . 
The qemu-kvm on  shows 100% cpu utilization.
    Ping to the guest might work or may not work .Please can you tell 
me what approach can i take to debug it .
    using "virsh dump" I can dump the core of the  guest vm but I am 
not sure how to analyse it .
    Guest Centos VM : "Linux GUESTCentOS70 3.10.0-123.4.4.el7.x86_64 
#1 SMP Fri Jul 25 05:07:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux"

  "CentOS Linux release 7.0.1406 (Core)"
  1 vcpu and 2 GB ram
Host machine : "Linux HOST 3.10.51-1.el6.elrepo.x86_64 #1 SMP Fri Aug 
1 13:14:11 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux"

"CentOS release 6.5 (Final)"
qemu-kvm package used : qemu-kvm-0.12.1.2-2.415.el6_5.10.x86_64
Thanks and regards
Akshar


I'd say that you should start by updating the OS on both host and guest. 
Both OSes are heavily outdated, you lack YEARS of updates.



    Regards


    Manuel



[CentOS-virt] [QEMU-KVM] Centos guest VM freezing

2019-01-09 Thread Akshar Kanak
Dear team
I am running a centos guest VM which freezes every few days.  The
qemu-kvm on the host shows 100% cpu utilization.
Ping to the guest might work or may not work.  Please can you tell me
what approach I can take to debug it?
Using "virsh dump" I can dump the core of the guest vm but I am not
sure how to analyse it.
Guest Centos VM : "Linux GUESTCentOS70 3.10.0-123.4.4.el7.x86_64 #1 SMP
Fri Jul 25 05:07:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux"
  "CentOS Linux release 7.0.1406 (Core)"
  1 vcpu and 2 GB ram

Host machine : "Linux HOST 3.10.51-1.el6.elrepo.x86_64 #1 SMP Fri Aug 1
13:14:11 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux"
"CentOS release 6.5 (Final)"
qemu-kvm package used : qemu-kvm-0.12.1.2-2.415.el6_5.10.x86_64
Thanks and regards
Akshar


[CentOS] [Qemu-KVM] Centos 7.0 Guest vm freezing

2019-01-09 Thread Akshar Kanak
Dear team
I am running a centos guest VM which freezes every few days.  The
qemu-kvm on the host shows 100% cpu utilization.
Ping to the guest might work or may not work.  Please can you tell me
what approach I can take to debug it?
Using "virsh dump" I can dump the core of the guest vm but I am not
sure how to analyse it.
Guest Centos VM : "Linux GUESTCentOS70 3.10.0-123.4.4.el7.x86_64 #1 SMP
Fri Jul 25 05:07:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux"
  "CentOS Linux release 7.0.1406 (Core)"
  1 vcpu and 2 GB ram

Host machine : "Linux HOST 3.10.51-1.el6.elrepo.x86_64 #1 SMP Fri Aug 1
13:14:11 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux"
"CentOS release 6.5 (Final)"
qemu-kvm package used : qemu-kvm-0.12.1.2-2.415.el6_5.10.x86_64
Thanks and regards
Akshar