Re: Who Uses Scientific Linux, and How/Why?

2020-02-25 Thread Winnie Lacesso
Bonjour,

This was posted to SLU in 2012 but didn't get any definite answers. It's
reposted in case anyone can say firmly, one way or the other, whether the
situation has changed or is the same. *Is* it true that CentOS still have a
period when they do *not* release security updates for earlier OS dot
releases, thus leaving those earlier dot releases vulnerable?

(Security is one reason we stuck with SL with Super-Gratitude to them!)


My security colleagues said:

My reading of the thread surrounding that quote is that CentOS *do* 
release security patches between "dot" releases, but that they stop in the 
period between Red Hat releasing an update and the time that they have 
pushed that update out themselves. Thus, 5.3 has been released by both Red 
Hat and CentOS and is receiving updates, but when 5.4 comes out from Red Hat, 
not all of its security updates will necessarily work on 5.3, so CentOS stops 
releasing them. As soon as CentOS gets 5.4 out of the door, the updates will 
start again (and they will have rolled the missing ones into their 5.4 
release). 

It is significant though (i.e. potentially a couple of months without
security fixes when a new CentOS point release is being prepared), and
something I wasn't aware of. At the very least, CentOS admins need to be
aware of this until and unless the policy changes.
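(For anyone wanting to gauge their own exposure, a minimal sketch of checking
for outstanding security errata with yum; it assumes the repo publishes
updateinfo security metadata, as SL does, and on EL5/EL6 it needs the
yum-security plugin, while EL7 has it built in:)

   # list the security advisories the configured repos say are outstanding
   yum updateinfo list security
   # apply only the security updates
   yum --security update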


Original post follows. (PS: I haven't verified that the links are still valid; sorry!)

In 2009 I was surprised to learn from this useful+informative SL-Users 
list that CentOS does not always release security updates in a timely 
manner: 

http://listserv.fnal.gov/scripts/wa.exe?A2=ind0908=scientific-linux-users=0=0=4484
"It has come to light that the maintainers don't/can't release interim  
security updates while they are rebuilding a new dot release from 
upstream" 

http://listserv.fnal.gov/scripts/wa.exe?A2=ind0908=SCIENTIFIC-LINUX-USERS=R7106=-3
"For example, once Redhat releases a point release, an attacker knows that
any subsequent errata can be used against a CentOS box at least until the 
CentOS project releases the corresponding point release. It is quite 
literally a sitting duck."

http://listserv.fnal.gov/scripts/wa.exe?A2=ind0908=scientific-linux-users=0=0=4999
"(About CentOS & why user is switching from CentOS to SL:) So there is a
potential delay of weeks and months before security updates are passed on 
whilst a distribution is being rebuilt, as they currently don't start 
rebuilding the dependencies of an errata updated package, unless it is
part of the release. I am quite happy to wait a few days for security 
updates, but I do take issue with an unknown exposure where security updates
are delayed for an unspecified length of time."

Question: that was in 2009. Does anyone know whether the above is still true 
of CentOS? (Apols - I don't wish to join the CentOS list just to find that out 
& am unable to find out via searching.)
(We are debating building some new servers as SL vs CentOS, & timely
security updates are relevant to us.)

Many thanks for pointers/enlightenment.


Resolved! Re: unavailable SLU archives? and gom from epel-testing NOT WANTED!

2019-01-11 Thread Winnie Lacesso
Happy Friday!

Thanks, the SLU archives are available again!
& my colleague pointed out the update is indeed in epel, not just 
epel-testing (my bad - we keep the epel repo disabled, so must run yum 
--enablerepo=epel update), so our SL7 clients all update ok :)
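(For anyone curious, a minimal sketch of the setup assumed above: the epel
repo file stays disabled, and epel is only enabled per command; only the
relevant line of the repo file is shown:)

   # /etc/yum.repos.d/epel.repo -- left disabled by default
   [epel]
   enabled=0

   # one-off update that is also allowed to pull from epel
   yum --enablerepo=epel update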

> Many many thank you for SL, & for advice/pointers!

Still true!

> PS Konstantin Olchanski's wry posts are HILARIOUS!!
> "instead of this obsolete email thing" yuk yuk yuk!

Still true!


unavailable SLU archives? and gom from epel-testing NOT WANTED!

2019-01-04 Thread Winnie Lacesso
Happy New Year!

Back in ?Oct?Nov?Dec? there was mention of SL7 clients not updating, running 
into ???libgtop (or was it something else) dependency failures. If I could 
find the SLU archives, I could quote who said what when, but ATM an attempt 
to access the SLU archives cycles for a long time then ends up at a blank page:

http://listserv.fnal.gov/archives/scientific-linux-users.html

Can someone admin@SLU check whether the archives are available & the link is 
ok (or not)? (It's what's on the SL "Community" page for "List Archives".)

The tail end of that thread was "enable the epel-testing repo and your SL7 
client update will succeed", to which someone responded (paraphrased), 
"This works, but ... we are uneasy that for production SL7 systems, 
something from epel-testing is a dependency." (Implication, with which we
agree: nothing from epel-testing should be required on a production system.)

IIRC someone else replied something like "You could complain to epel about 
it"

It's now much later & this is *STILL* not resolved. We don't want to 
install anything from epel-testing on our production system(s), but in the 
meantime updates are balking.

We really like SL & want to rely on it for stable reliable production 
servers. Can anyone take action to resolve this issue?

Many many thank you for SL, & for advice/pointers!

PS Konstantin Olchanski's wry posts are HILARIOUS!!
"instead of this obsolete email thing" yuk yuk yuk!


"Update notice SLBA-2018:0764-1 (from sl-security) is broken, or a bad duplicate," etc again

2018-05-25 Thread Winnie Lacesso
Happy Friday!

Happening again:

[root@lcgnetmon02 ~]# yum clean all
Loaded plugins: fastestmirror
Cleaning repos: EGI-trustanchors sl sl-security
Cleaning up everything
Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos
Cleaning up list of fastest mirrors

[root@lcgnetmon02 ~]# yum check-update
Loaded plugins: fastestmirror
Determining fastest mirrors
 * sl: ftp2.scientificlinux.org
 * sl-security: ftp2.scientificlinux.org
EGI-trustanchors                                            | 2.5 kB  00:00:00
sl                                                          | 3.8 kB  00:00:00
sl-security                                                 | 2.9 kB  00:00:00
(1/6): EGI-trustanchors/primary_db                          |  59 kB  00:00:00
(2/6): sl-security/x86_64/updateinfo                        |  20 kB  00:00:00
(3/6): sl/x86_64/group_gz                                   | 113 kB  00:00:00
(4/6): sl/x86_64/updateinfo                                 | 2.5 MB  00:00:01
(5/6): sl/x86_64/primary_db                                 | 5.1 MB  00:00:04
(6/6): sl-security/x86_64/primary_db                        | 1.3 MB  00:00:05
Update notice SLBA-2018:0764-1 (from sl-security) is broken, or a bad 
duplicate, skipping.
You should report this problem to the owner of the sl-security repository.
If you are the owner, consider re-running the same command with --verbose to 
see the exact data that caused the conflict.
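(A sketch of following that suggestion - refresh the metadata and re-run with
--verbose to see which fields of the notice conflict:)

   yum clean metadata
   yum --verbose check-update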


client info:
[root@lcgnetmon02 ~]# lsb_release -a
LSB Version:    :core-4.1-amd64:core-4.1-noarch
Distributor ID: Scientific
Description:    Scientific Linux release 7.5 (Nitrogen)
Release:        7.5
Codename:       Nitrogen

hopefully fix soon eh?


SLEA-2017:1977-1 (from sl-security) is broken, or a bad duplicate

2018-05-01 Thread Winnie Lacesso
Good morning,

yum check-update said "SLEA-2017:1977-1 (from sl-security) is broken, or a 
bad duplicate" & recommended --verbose. Output is below.
Me not sure what "SLEA-2017:1977-1" is!!

Is this something that someone supporting SL can fix?
PS we LOVE SL here!

root@lcgnetmon02> yum --verbose check-update
Loading "fastestmirror" plugin
Config time: 0.013
Yum version: 3.4.3
rpmdb time: 0.000
Building updates object
Setting up Package Sacks
EGI-trustanchors                                            | 2.5 kB  00:00:00
sl                                                          | 4.0 kB  00:00:00
sl-security                                                 | 2.9 kB  00:00:00
(1/6): EGI-trustanchors/primary_db                          |  60 kB  00:00:00
(2/6): sl/x86_64/group_gz                                   | 107 kB  00:00:00
(3/6): sl-security/x86_64/updateinfo                        |  94 kB  00:00:00
(4/6): sl/x86_64/primary_db                                 | 5.0 MB  00:00:02
(5/6): sl/x86_64/updateinfo                                 | 2.2 MB  00:00:03
(6/6): sl-security/x86_64/primary_db                        | 3.4 MB  00:00:03
Determining fastest mirrors
 * sl: ftp1.scientificlinux.org
 * sl-security: ftp1.scientificlinux.org
pkgsack time: 6.937
up:Obs Init time: 0.207
up:simple updates time: 0.014
up:obs time: 0.003
up:condense time: 0.000
updates time: 7.632

kernel.x86_64                      3.10.0-862.el7                  sl-security
kernel-devel.x86_64                3.10.0-862.el7                  sl-security
kernel-headers.x86_64              3.10.0-862.el7                  sl-security
kernel-tools.x86_64                3.10.0-862.el7                  sl-security
kernel-tools-libs.x86_64           3.10.0-862.el7                  sl-security
kmod.x86_64                        20-21.el7                       sl-security
kmod-libs.x86_64                   20-21.el7                       sl-security
linux-firmware.noarch              20180220-62.git6d51311.el7      sl-security
Duplicate of SLEA-2017:1977-1 differs in some fields:
<<< sl-security:description
None
===
''
>>> sl:description
Update notice SLEA-2017:1977-1 (from sl-security) is broken, or a bad
duplicate, skipping.
You should report this problem to the owner of the sl-security repository.
If you are the owner, consider re-running the same command with --verbose to
see the exact data that caused the conflict.
updateinfo time: 2.242

Currently installed on node:
root@lcgnetmon02> rpm -qa | sort | egrep "kernel|kmod|linux-firm"
kernel-3.10.0-327.el7.x86_64
kernel-3.10.0-693.21.1.el7.x86_64
kernel-devel-3.10.0-693.21.1.el7.x86_64
kernel-headers-3.10.0-693.21.1.el7.x86_64
kernel-tools-3.10.0-693.21.1.el7.x86_64
kernel-tools-libs-3.10.0-693.21.1.el7.x86_64
kmod-20-15.el7.x86_64
kmod-libs-20-15.el7.x86_64
linux-firmware-20170606-58.gitc990aae.el7_4.noarch

Is it a problem with the linux-firmware package? (Just a guess!!)
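(A hedged way to test that guess - ask updateinfo which advisories mention
linux-firmware, and what SLEA-2017:1977-1 itself claims to cover; both
commands are the EL7 yum spelling:)

   yum updateinfo list all | grep -i linux-firmware
   yum updateinfo info SLEA-2017:1977-1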


SLU: SL5 logwatch not summarizing from SL6 WN logging to it

2014-06-19 Thread Winnie Lacesso
Greetings SL Users!

Many apologies if this is not exactly SL-specific, but all the servers run 
SL so I am hoping it's okay to ask here (& that someone has debugged this!).

A cluster of WN (worker nodes) echo their syslogs to 2 central log/mon hosts. 
When the WN changed from SL5 to SL6 (but the central log/mon hosts for various 
reasons must remain SL5), logwatch on the central log/mon hosts stopped 
reporting anything from them. (I read logwatch once a week on the central 
log/mon hosts to watch for disk or similar badness.)
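(For context, a minimal sketch of the kind of client-side forwarding assumed
here, in /etc/rsyslog.conf on each WN; the loghost names are placeholders:)

   # forward everything to both central log/mon hosts
   *.* @logmon1.example.org
   *.* @logmon2.example.org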

Does anyone have SL5 central log/mon hosts with SL6 clients syslogging to 
them, & have debugged what needs changing/fixing in the SL5 logwatch 
processing scripts so they report on the SL6 clients (especially disk 
badness)?

I've debugged it to the point where
/usr/share/logwatch/scripts/shared/onlyservice 'smartd' 
on the 2 SL5 log/mon hosts does not pass thru smartd-logged entries from 
the SL6 WN. Near the end of the output of 

   logwatch --debug 6 --detail 5 --service smartd --range Today --print

is

 Processing Service: smartd
 ( cat /var/cache/logwatch/logwatch.AN03MJZX/messages | /usr/bin/perl 
/usr/share/logwatch/scripts/shared/onlyservice 'smartd' | /usr/bin/perl 
/usr/share/logwatch/scripts/shared/removeheaders '' | /usr/bin/perl 
/usr/share/logwatch/scripts/services/smartd) 2>&1

On another pair of SL5 log/mon hosts with only SL5 clients logging to 
them, that pipeline finds entries exactly as expected.

On the SL5 log/mon hosts with SL6 clients logging to them, onlyservice 
'smartd' finds zero:
root@smnat grep -i smartd /var/log/messages | wc -l
1096
root@smnat grep -i smartd /var/log/messages | tail -2
Jun 18 10:53:28 sm10.hadoop.cluster sm10 smartd[1811]: Device: /dev/sda [SAT], 37 Currently unreadable (pending) sectors
Jun 18 11:01:06 sm05.hadoop.cluster sm05 smartd[1824]: Device: /dev/sda [SAT], 11 Offline uncorrectable sectors
# yep, definitely there
root@smnat cat /var/log/messages | \
/usr/bin/perl /usr/share/logwatch/scripts/shared/onlyservice 'smartd' > /tmp/m;
wc /tmp/m
0 0 0 /tmp/m

When the WN were SL5, it worked:

root@sm00 cat /var/log/messages.8 | /usr/bin/perl 
/usr/share/logwatch/scripts/shared/onlyservice 'smartd' > /tmp/m; wc /tmp/m 
   843  11879 105783 /tmp/m
root@sm00 head -4 /tmp/m
Apr 20 04:07:51 sm06.hadoop.cluster smartd[11331]: Device: /dev/sda, SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 111 to 112
Apr 20 04:17:54 sm18.hadoop.cluster smartd[11547]: Device: /dev/sda, 1199 Currently unreadable (pending) sectors
Apr 20 04:17:54 sm18.hadoop.cluster smartd[11547]: Device: /dev/sda, 1069 Offline uncorrectable sectors
Apr 20 04:18:13 sm16.hadoop.cluster smartd[11470]: Device: /dev/sda, 1 Currently unreadable (pending) sectors
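(One difference that stands out between the two samples: the SL6 lines carry
the hostname twice ("sm10.hadoop.cluster sm10 smartd[...]"), while the SL5
lines carry it once. A hedged test of whether that extra token is what
onlyservice trips over - strip it and re-run the filter:)

   # drop the second hostname token, then feed through onlyservice as before
   cat /var/log/messages | \
     sed -r 's/^([A-Za-z]+ +[0-9]+ [0-9:]+ [^ ]+) [^ ]+ (smartd\[)/\1 \2/' | \
     /usr/bin/perl /usr/share/logwatch/scripts/shared/onlyservice 'smartd' > /tmp/m
   wc /tmp/m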

Very grateful if someone has an SL6-client-compatible onlyservice part of 
logwatch for an SL5 central log/mon host!

Winnie Lacesso / Bristol University Particle Physics Computing Systems
HH Wills Physics Laboratory, Tyndall Avenue, Bristol, BS8 1TL, UK