Re: Is KVM now abandoned in EL 6?

2015-05-20 Thread John Lauro
Why not run wine32 in an SL6 VM on SL7?  Personally, why run anything critical 
that you can't run elsewhere (such as a browser, SSH client, etc.) anywhere 
other than in a VM?


From: owner-scientific-linux-us...@listserv.fnal.gov 
 on behalf of ToddAndMargo 

Sent: Wednesday, May 20, 2015 12:31 AM
To: scientific-linux-users@fnal.gov
Subject: Re: Is KVM now abandoned in EL 6?

...
Deal killer: without Wine 32 I can not get my work done.
...


Re: systemd (again)

2015-02-15 Thread John Lauro
Sounds like just what hackers would like.  A nice web interface that
doesn't even show up as a resource after it's been idle for 10
minutes, so admins might not even realize if it's wide open...



- Original Message -
> From: "David Sommerseth" 
> To: kei...@kl-ic.com
> Cc: "scientific-linux-users" 
> Sent: Sunday, February 15, 2015 7:11:52 AM
> Subject: Re: systemd (again)
> 
> Cockpit is not running by default, but if you go to
> https://$IPADDRESS_OF_SERVER:9090/ systemd starts it
> on-the-fly (through socket activation).  Once it's been
> lingering idle for approx. 10 minutes, it is shut down again.
> So there's basically zero-footprint when it is not being used.
> This is one of the nice things about systemd.


Re: Is there any data base collecting data on breakin attempts?

2015-02-09 Thread John Lauro
Check out https://dshield.org/howto.html for a central place to submit 
attempts...

Some useful pages:
https://dshield.org/reports.html
https://dshield.org/sources.html

As many sources can be anonymous, it's easy for hosts to end up on someone's 
lists from spoofed traffic or replies to spoofed IPs, etc., so it shouldn't be 
used as a blacklist, at least not exclusively.  (i.e., you wouldn't want to 
block port 80 based on this for a public web server)

- Original Message -
> From: "hansel" 
> To: SCIENTIFIC-LINUX-USERS@FNAL.GOV
> Sent: Sunday, February 8, 2015 12:41:56 PM
> Subject: Is there any data base collecting data on breakin attempts?
> 
> I accept as normal the many (upwards of several thousand) daily root
> break-in attempts. My defense is careful sshd configuration and a
> restrictive incoming router firewall.
> 
> Does anyone maintain a database of consistently offending sites (maybe a
> news source, such as politico or propublica)? Initial use of whois and dig
> for a few returned familiar countries of origin, countries that may
> encourage or even sponsor some attempts.
> 
> I searched the archive for "breakin" and "failed" with and without subject
> line qualifiers (like "root") and found nothing.
> 
> Thank you.
> mark hansel
> 


Re: Library security updates

2015-01-28 Thread John Lauro
Actually, looking at what files were updated, that should probably be
lsof -n | grep -e libc-

(Probably not a lot of difference in the programs listed, but...)

- Original Message -
> From: "John Lauro" 
> To: "Steven Haigh" 
> Cc: scientific-linux-users@fnal.gov
> Sent: Wednesday, January 28, 2015 9:13:08 AM
> Subject: Re: Library security updates
> 
> > 
> > > If it doesn't protect us is there practicable way to make sure we
> > > are
> > > genuinely protected short of rebooting the whole system every
> > > time
> > > there
> > > is a security update?
> > 
> > Depending on what the update is. If you want to be 100% certain,
> > reboot.
> > If you don't want to reboot, you can hunt through what programs use
> > certain libraries using ld - however the effort taken to do this is
> > much
> > more than a reboot - and probably takes longer.
> > 
> 
> 
> It actually isn't that hard to track down.
> [root@colo-a2vm t2]# lsof -n | grep gcc
> hpiod      2649     root  DEL  REG  252,0         4718941 /lib64/libgcc_s-4.1.2-20080825.so.1;545bc2ea
> mysqld     2851    mysql  DEL  REG  252,0         4718941 /lib64/libgcc_s-4.1.2-20080825.so.1;545bc2ea
> libvirtd   3121     root  DEL  REG  252,0         4718941 /lib64/libgcc_s-4.1.2-20080825.so.1;545bc2ea
> yum-updat  3343     root  DEL  REG  252,0         4718941 /lib64/libgcc_s-4.1.2-20080825.so.1;545bc2ea
> smartd     3469     root  DEL  REG  252,0         4718941 /lib64/libgcc_s-4.1.2-20080825.so.1;545bc2ea
> automount  6482     root  DEL  REG  252,0         4718600 /lib64/libgcc_s-4.1.2-20080825.so.1.#prelink#.dvRyeN
> httpd     11089     root  mem  REG  252,0  58400  4718834 /lib64/libgcc_s-4.1.2-20080825.so.1
> php       11639      ioi  mem  REG  252,0  58400  4718834 /lib64/libgcc_s-4.1.2-20080825.so.1
> php       24239      ioi  mem  REG  252,0  58400  4718834 /lib64/libgcc_s-4.1.2-20080825.so.1
> httpd     27057   daemon  mem  REG  252,0  58400  4718834 /lib64/libgcc_s-4.1.2-20080825.so.1
> httpd     27058   daemon  mem  REG  252,0  58400  4718834 /lib64/libgcc_s-4.1.2-20080825.so.1
> 
> 
> You can tell which processes were not restarted, as they show DEL
> instead of mem...
> 


Re: Library security updates

2015-01-28 Thread John Lauro
> 
> > If it doesn't protect us is there practicable way to make sure we
> > are
> > genuinely protected short of rebooting the whole system every time
> > there
> > is a security update?
> 
> Depending on what the update is. If you want to be 100% certain,
> reboot.
> If you don't want to reboot, you can hunt through what programs use
> certain libraries using ld - however the effort taken to do this is
> much
> more than a reboot - and probably takes longer.
> 


It actually isn't that hard to track down.
[root@colo-a2vm t2]# lsof -n | grep gcc
hpiod      2649     root  DEL  REG  252,0         4718941 /lib64/libgcc_s-4.1.2-20080825.so.1;545bc2ea
mysqld     2851    mysql  DEL  REG  252,0         4718941 /lib64/libgcc_s-4.1.2-20080825.so.1;545bc2ea
libvirtd   3121     root  DEL  REG  252,0         4718941 /lib64/libgcc_s-4.1.2-20080825.so.1;545bc2ea
yum-updat  3343     root  DEL  REG  252,0         4718941 /lib64/libgcc_s-4.1.2-20080825.so.1;545bc2ea
smartd     3469     root  DEL  REG  252,0         4718941 /lib64/libgcc_s-4.1.2-20080825.so.1;545bc2ea
automount  6482     root  DEL  REG  252,0         4718600 /lib64/libgcc_s-4.1.2-20080825.so.1.#prelink#.dvRyeN
httpd     11089     root  mem  REG  252,0  58400  4718834 /lib64/libgcc_s-4.1.2-20080825.so.1
php       11639      ioi  mem  REG  252,0  58400  4718834 /lib64/libgcc_s-4.1.2-20080825.so.1
php       24239      ioi  mem  REG  252,0  58400  4718834 /lib64/libgcc_s-4.1.2-20080825.so.1
httpd     27057   daemon  mem  REG  252,0  58400  4718834 /lib64/libgcc_s-4.1.2-20080825.so.1
httpd     27058   daemon  mem  REG  252,0  58400  4718834 /lib64/libgcc_s-4.1.2-20080825.so.1


You can tell which processes were not restarted, as they show DEL instead of 
mem...
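
As a rough sketch (assuming a similar lsof version; column layout can vary), 
you can narrow that down to just the command names and PIDs still mapping a 
deleted library:

# List command name and PID of every process still mapping a deleted library
# (DEL entries), so you know what to restart (or whether to just reboot).
lsof -n | grep -e 'DEL.*lib' | awk '{print $1, $2}' | sort -u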


Re: Torrent for SL7

2014-11-19 Thread John Lauro
I suggest you use wget --continue.  If for some reason the download fails 
partway through, then when you run it again it will continue from where it left 
off.  That should work with most HTTP servers.  For me, direct download is 
typically faster than torrents, assuming the server has a good connection and 
the RTT latency is lower than to most torrent peers.
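
For example (the URL below is only a placeholder for whichever mirror and ISO 
you are downloading):

# Re-running the exact same command after an interruption resumes the partial file.
wget --continue http://mirror.example.org/scientific/7/x86_64/iso/SL-7-DVD.iso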


- Original Message -
> From: bac...@landtrekker.com
> To: scientific-linux-users@fnal.gov
> Sent: Wednesday, November 19, 2014 3:33:36 AM
> Subject: Torrent for SL7
> 
> Hello,
> 
> Is there any plan to release SL7 as a torrent download ?
> 
> My ADSL link is too weak to sustain a one-shot DVD download.
> 
> Thanks
> 


Re: about realtime system

2014-08-24 Thread John Lauro
The recommendation changed with 5.5.  
http://blogs.vmware.com/performance/2013/09/deploying-extremely-latency-sensitive-applications-in-vmware-vsphere-5-5.html

"... However, performance demands of latency-sensitive applications with very 
low latency requirements such as distributed in-memory data management, stock 
trading, and high-performance computing have long been thought to be 
incompatible with virtualization.
vSphere 5.5 includes a new feature for setting latency sensitivity in order to 
support virtual machines with strict latency requirements."


- Original Message -
> From: "Paul Robert Marino" 
> To: "Nico Kadel-Garcia" 
> Cc: "John Lauro" , "Brandon Vincent" 
> , "Lee Kin"
> , "SCIENTIFIC-LINUX-USERS@FNAL.GOV" 
> 
> Sent: Sunday, August 24, 2014 3:27:39 PM
> Subject: Re: about realtime system
...
> By the way, one of those stock exchanges is where the VMware engineers
> told us never to use their product in production. In fact we had huge
> problems with VMware in our development environments because some of
> our applications would actually detect the clock instability in the
> VMware clocks and would shut themselves down rather than have
> inaccurate audit logs. As a result we found we had trouble even using
> it in our development environments.


Re: about realtime system

2014-08-24 Thread John Lauro
Why spread FUD about VMware?  Anyway, to hear what they say on the subject:
http://www.vmware.com/files/pdf/techpaper/latency-sensitive-perf-vsphere55.pdf

In any case, KVM will not handle latency any better than VMware.

- Original Message -
> From: "Paul Robert Marino" 
> To: "Nico Kadel-Garcia" 
> Cc: "Brandon Vincent" , llwa...@gmail.com, 
> "SCIENTIFIC-LINUX-USERS@FNAL.GOV"
> 
> Sent: Sunday, August 24, 2014 12:26:17 PM
> Subject: Re: about realtime system
> 
> Wow, I don't know how VMware got mentioned in this thread, but VMware is
> not capable of real-time operation, and if you ask the senior engineers
> at VMware they will tell you they don't want you even trying it on
> their product because they know it won't work. The reason is VMware
> plays games with the clock on the VM so the clocks can never be 100%
> accurate.
> It should be possible to do real time in KVM assuming you don't
> overbook your CPU cores or RAM. Apparently Red Hat has been doing VMs
> with microsecond-accurate clocks with PTP running on the
> virtualization


Re: Boot hangs / loops

2014-07-10 Thread John Lauro
> 
> Filesystem      Size  Used  Avail  Use%  Mounted on
> /dev/sda6       3.9G  133M   3.8G    4%  /
> tmpfs           3.9G     0   3.9G    0%  /dev/shm
> /dev/sda7       3.9G  133M   3.8G    4%  /home
> 

Well, that is not looking good.  Was that booted from the rescue CD?

Sometimes when you boot from a different drive (such as a CD) it can switch the 
devices around.  Maybe they are on /dev/sdb?
What does "fdisk -l" show?


Re: Boot hangs / loops

2014-07-10 Thread John Lauro
These are the types of things that can be difficult to do over email...

Try mounting /dev/sda6 after the fsck in rescue mode and make sure the 
filesystem has at least 10% free space.
Is it ext2, 3, 4, or something else?

What other partitions are on sda?  I assume /boot is one; any others?
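
A rough sketch of that check from the rescue shell (the device and mount point 
names are assumptions; confirm the device with "fdisk -l" first):

# Force a full check, then mount and verify free space before rebooting.
fsck -f /dev/sda6
mkdir -p /mnt/root
mount /dev/sda6 /mnt/root
df -h /mnt/root      # aim for at least ~10% free space
umount /mnt/root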


- Original Message -
> From: "Dormition Skete (Hotmail)" 
> To: SCIENTIFIC-LINUX-USERS@FNAL.GOV
> Sent: Thursday, July 10, 2014 4:34:00 PM
> Subject: Boot hangs / loops
> 
> I needed some more storage space on our SL6.5 server, so I hooked up
> a USB external drive to the machine.  The external drive had a
> Macintosh file system on it, so I installed kmod-hfsplus from the
> elrepo.org/elrepo-release-6-5-el6.elrepo.noarch.org repository.  I
> mounted the drive, and everything worked fine.  I could read and
> write files to it just fine.  I set up a file share under Samba, and
> that worked fine, too.
> 
> Then I made the stupid mistake of trying to delete a bunch of files
> off of it using nautilus from a thin client, rather than from the
> command line.
> 
> In the middle of the deletion process, it took the entire server
> down.  Now it won’t reboot.
> 
> Whenever I try to reboot it, I get the following message:
> 
> —
> 
> Checking file systems.
> /dev/sda6 is in use.
> e2fsck: Cannot continue, aborting.
> 
> *** An error occurred during the file system check.
> *** Dropping you into a shell; The system will reboot
> *** when you leave the shell.
> 
> —
> 
> I booted it from a Rescue CD, and did not mount the volumes.  I ran
> fsck on the two Linux ext4 file systems with:
> 
> fsck /dev/sda6   (my / file system)
> fsck /dev/sda7  (my /home file system)
> 
> 
> That did not help.  Something gave me the thought to try changing the
> labels in the /etc/fstab file.  I changed the “UUID=…..” with the
> device names /dev/sda5 (swap), 6 and 7.
> 
> 
> That didn’t help.
> 
> I tried booting it with “fastboot” in the kernel line, and that
> places me in a perpetual loop.  I get a message saying:
> 
> —
> 
> Warning — SELinux targeted policy relabel is required.
> 
> Relabeling could take a very long time, depending on
> file system size and speed of hard drives.
> 
> —
> 
> I’ve also tried putting “fastboot enforcing=0 autorelabel=0” in the
> kernel line, and that does not seem to do anything.
> 
> 
> Without “fastboot”, I get the file system check kicking me into a
> maintenance prompt.
> 
> With “fastboot”, I get the perpetual SELinux relabeling.
> 
> 
> I also find it really odd that when I changed the fstab entries, I
> first made a backup copy of the fstab file.  I also copied the
> existing lines I was going to change, commented them out, and
> changed the second set of lines.  Neither the backup fstab file, nor
> the commented lines are anywhere to be found.
> 
> If somebody would please help me get this machine back up, I would
> *greatly* appreciate it!
> 
> Peter, hieromonk
> 


Re: [SL-Users] Re: Scientific Linux 7 ALPHA

2014-07-04 Thread John Lauro
Looks like the first usable C7 builds were around the 18th (didn't realize they 
were out, just decided to check) and sound a little rougher (at least when 
first released) than SL7.  However, the C6 builds took a very long time 
compared to SL6...

One question...  Is it expected that the current alpha can be upgraded to the 
non-beta release, or will a reinstall be required once it is out of beta?



- Original Message -
> From: "John R. Dennison" 
> To: scientific-linux-users@fnal.gov
> Sent: Friday, July 4, 2014 7:51:18 AM
> Subject: Re: [SL-Users] Re: Scientific Linux 7 ALPHA
> 
> On Fri, Jul 04, 2014 at 06:14:05AM -0400, Nico Kadel-Garcia wrote:
> > 
> > There was already a copy in my local rsync mirror, and I'm happy to
> install from there and keep the load off your servers. Getting the
> > alpha into people's hands this quickly is one of the reasons I've
> > come
> > to personally prefer Scientific Linux over CentOS.
> 
> Oh come, now.  The C7 builds have been public and installable for how
> long now?
> 
> 
> 
>   John
> --
>  You have to wonder why monosyllabic is not one syllable,
> and why phonetic isn't spelled the way it sounds.
> 


Re: Clarity on current status of Scientific Linux build

2014-06-27 Thread John Lauro
Looking at the license, it sounds like there are no such restrictions, but you 
would have to check the individual software to verify; the exceptions should 
mainly be 3rd-party binary-only code...

One reason to remove public sources is to keep the load off of their servers.

This is from: 
http://www.redhat.com/f/pdf/licenses/GLOBAL_EULA_RHEL_English_20101110.pdf

License Grant.
Subject to the following terms, Red Hat, Inc. ("Red Hat") grants to you a 
perpetual, worldwide license to the Programs (most of which include multiple 
software components) pursuant to the GNU General Public License v.2.  The 
license agreement for each software component is located in the software 
component's source code and permits you to run, copy, modify, and redistribute 
the software component (subject to certain obligations in some cases), both in 
source code and binary code forms, with the exception of (a) certain binary 
only firmware components and (b) the images identified in Section 2 below.  
The license rights for the binary only firmware components are located with 
the components themselves.  This EULA pertains solely to the Programs and does 
not limit your rights under, or grant you rights that supersede, the license 
terms of any particular component. 


- Original Message -
> From: "Mark Rousell" 
> To: SCIENTIFIC-LINUX-USERS@FNAL.GOV
> Sent: Friday, June 27, 2014 3:28:49 PM
> Subject: Re: Clarity on current status of Scientific Linux build
> 
> Thanks to everyone who commented and I apologise for the delay in
> replying.
> 
> So it seems that complete clarity is not yet available. Ok.
> 
> A couple more questions in the search for clarity:-
> 
> 1) Can anyone confirm or deny that Red Hat places contractual
> limitations on what a subscriber (who has access to the RHEL7 SRPMs)
> can
> do with the source code so obtained? Yes, I know this has been
> discussed
> but I don't think it has been explicitly confirmed. One must infer
> that
> there are contractual limitations (otherwise why remove public access
> to
> SRPMs) but it would be nice to be absolutely clear.
> 
> 2) This is a legal question but it is relevant: If Red Hat uses a
> contract with its customers to prevent a customer who is a recipient
> of
> the GPLd source code (when received via SRPM) from redistributing it
> or
> rebuilding it as they please, wouldn't this mean that Red Hat itself
> was
> in breach of the GPL licence conditions?
> 


Re: Add a remote disk to LVM

2014-05-06 Thread John Lauro
What type of remote disk? NFS? 

A more common approach would be to move some directories to /arch and use 
symlinks. 
You could also create a loopback disk file somewhere on /arch and add that to 
LVM.  It's going to make bootup messy, so you wouldn't want any volumes on it 
that are required for bootup (especially / or /usr or /sbin, and probably not 
/var)... 
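
A minimal sketch of that loopback idea, assuming the remote filesystem is 
mounted at /arch and /dev/loop0 is free (the file name and size are 
placeholders; the VG name comes from the vgdisplay output below).  Note the 
loop device would have to be re-attached before LVM activation on every boot, 
which is part of what makes bootup messy:

# Create a ~100 GB backing file on the remote mount and add it to the existing VG.
dd if=/dev/zero of=/arch/lvm-extra.img bs=1M count=102400
losetup /dev/loop0 /arch/lvm-extra.img
pvcreate /dev/loop0
vgextend tigerfiler1 /dev/loop0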

- Original Message -

> From: "Mahmood Naderan" 
> To: scientific-linux-users@fnal.gov
> Sent: Wednesday, May 7, 2014 1:12:41 AM
> Subject: Add a remote disk to LVM

> Hello,
> Is it possible to add a network drive to existing LVM? I have created
> a group and have added three local drives. Now I want to add a
> remote disk from another node. The remote node has an additional
> hard drive and is mounted to /arch (remote node)

> Is that possible? How? All examples I see are trying to add extra
> local drives and not remote drives.

> Here are some info

> # vgdisplay
> --- Volume group ---
> VG Name tigerfiler1
> System ID
> Format lvm2
> Metadata Areas 3
> Metadata Sequence No 2
> VG Access read/write
> VG Status resizable
> MAX LV 0
> Cur LV 1
> Open LV 1
> Max PV 0
> Cur PV 3
> Act PV 3
> VG Size 2.73 TiB
> PE Size 4.00 MiB
> Total PE 715401
> Alloc PE / Size 715401 / 2.73 TiB
> Free PE / Size 0 / 0
> VG UUID 8Ef8Vj-bDc7-H4ia-D3X4-cDpY-kE9Z-njc8lj

> pvdisplay
> --- Physical volume ---
> PV Name /dev/sdb
> VG Name tigerfiler1
> PV Size 931.51 GiB / not usable 1.71 MiB
> Allocatable yes (but full)
> PE Size 4.00 MiB
> Total PE 238467
> Free PE 0
> Allocated PE 238467
> PV UUID FmC77z-9UaR-FhYa-ONHZ-EazF-5Hm2-8zmUuj

> --- Physical volume ---
> PV Name /dev/sdc
> VG Name tigerfiler1
> PV Size 931.51 GiB / not usable 1.71 MiB
> Allocatable yes (but full)
> PE Size 4.00 MiB
> Total PE 238467
> Free PE 0
> Allocated PE 238467
> PV UUID 1jBQUn-gkkD-37I3-R3nL-KeHA-Hn2A-4zgNcR

> --- Physical volume ---
> PV Name /dev/sdd
> VG Name tigerfiler1
> PV Size 931.51 GiB / not usable 1.71 MiB
> Allocatable yes (but full)
> PE Size 4.00 MiB
> Total PE 238467
> Free PE 0
> Allocated PE 238467
> PV UUID mxi8jW-O868-iPse-IfY7-ag3m-R3vZ-gS3Jdx

> Regards,
> Mahmood

Re: problem installing SL6.3 from live usb

2014-04-18 Thread John Lauro
Sounds like you are not using the default LVM setup?  It should just work with LVM.

Generally I prefer just using /dev/sda, /dev/sdb, etc., but LVM is safer when 
devices change...  In this case, stick with the UUID or LVM instead of the 
changeable /dev/sda, etc...


If you don't want to switch to LVM, it is possible to boot in rescue mode and 
manually reinstall grub, telling it what the devices will be instead of what 
they currently are (a rough sketch is below)...  it's been a while since I had 
to do that.  Generally not a recommended option for a new install...

Another option: make your JetFlash the lowest boot priority.  As long as D0/D1 
are not bootable (you may have to wipe the start of the drives), it should 
still boot the JetFlash, and the device mappings should then be consistent from 
the BIOS.
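
A rough sketch of the rescue-mode GRUB reinstall mentioned above, assuming you 
let the SL6 rescue environment mount the installed system under /mnt/sysimage 
and that the target will be the first internal disk once the USB stick is 
removed (device names are assumptions; verify with "fdisk -l"):

chroot /mnt/sysimage
grub-install /dev/sda    # install GRUB (legacy) to the MBR of the first disk
# double-check that /boot/grub/device.map and grub.conf refer to the right devices
exit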



- Original Message -
> From: "Orion Poplawski" 
> To: "Mahmood Naderan" , scientific-linux-users@fnal.gov
> Sent: Friday, April 18, 2014 12:59:40 AM
> Subject: Re: problem installing SL6.3 from live usb
> 
> On 04/17/2014 04:51 AM, Mahmood Naderan wrote:
> > Hi
> > Recently I faced  problem installing SL6.3 on a machine. I didn't
> > have
> > such problem before so your comments are appreciated.
> > 
> > The machine has two physical disks (D0, D1) each has 1TB capacity
> > and I
> > attach a USB flash (2 GB) which contains a live image of SL6.3. So
> > the
> > boot priority in the BIOS looks like
> >  JetFlash
> >  D0
> >  D1
> > 
> > Using the installation wizard, I see a layout like this
> > /dev/sda 2GB
> > /dev/sdb 1TB
> >  /   900GB
> > swap  100GB
> > /dev/sdc 1TB
> > 
> > 
> > The default location for GRUB is then /dev/sda. Problem is, after
> > installation I remove the USB flash but it doesn't boot because the
> > grub
> > had been installed on /dev/sda (which was JetFlash) and now there
> > is no
> > such device
> > 
> > As another try, in the installation wizard where we can configure
> > the
> > boot loader, I selected /dev/sdb (where root and swap is
> > installed).
> > This time when I remove/disconnect the flash, the boot stuck again.
> > The
> > reason is that, when I disconnect the flash, D0 becomes /dev/sda
> > and D1
> > becomes /dev/sdb. So again there is no grub!!
> > 
> > Overall there is a loop and I haven't figured out how to resolve it
> > 
> >  
> > Regards,
> > Mahmood
> 
> Perhaps:
> - physically swap the disks after install?
> - install to /dev/sdc?
> 
> 
> --
> Orion Poplawski
> Technical Manager 303-415-9701 x222
> NWRA/CoRA DivisionFAX: 303-415-9702
> 3380 Mitchell Lane  or...@cora.nwra.com
> Boulder, CO 80301  http://www.cora.nwra.com
> 


Re: RedHat CentOS acquisition: stating the obvious

2014-01-15 Thread John Lauro
> At the risk of repeating myself... I refer you to Red Hat's 10-K
> filing:
> 
> http://www.sec.gov/Archives/edgar/data/1087423/000119312513173724/d484576d10k.htm#tx484576_1
> 
> See the "Competition" section on pages 12-14. Search for "Oracle" and
> "CentOS".
> 
> So when I say, "Red Hat considers CentOS a competitor", that is a
> demonstrable statement of fact, appearing in an authoritative
> document
> where lies can result in prison sentences. (Unsurprisingly, the
> "mission statement" you keep citing appears nowhere in this document.
> When choosing between "words" and "legally binding words", which to
> believe? Hm, hard to say...)

They mention CentOS and Fedora at the end of paragraphs stating "we also", 
meaning they are (at least at this time) not considered significant competitors 
relative to the others mentioned.  Also, you need to read the entire document.  
For example, they also list Fedora as something they compete with, but if you 
search for Fedora in that document you will also notice sections like:

Red Hat’s role in the open source community

We are an active contributor in many open source communities, often in a 
leadership role. Red Hat’s participation in the open source development process 
is illustrated by our sponsorship of the Fedora Project, JBoss.org, GlusterFS 
and other open source communities. This participation enables us to leverage 
the efforts of these worldwide communities, which we believe allows us to 
reduce both development cost and time and enhance acceptance and support of our 
offerings and technologies. Thus, we are able to use the Fedora Project, 
JBoss.org and other open source communities as proving grounds and virtual 
laboratories for innovations that we can draw upon for inclusion in our 
enterprise offerings and technologies. Additionally, the open and transparent 
nature of these communities provides our customers and potential customers with 
access and insights into, and the ability to influence, the future direction of 
Red Hat offerings and technologies.

We are dedicated to helping serve the interests and needs of open source 
software users and developers online. Our websites, which include redhat.com, 
fedoraproject.org, jboss.org, opensource.com and gluster.org, serve as 
substantial resources for information related to open source initiatives and 
our open source offerings. These websites contain news we believe to be of 
interest to open source users and developers, features for the open source 
community, a commerce site and a point-of-access for software downloads and 
upgrades. Visitors to our websites can organize and participate in user groups, 
make available fixes and enhancements and share knowledge regarding the use and 
development of open source software and methods. By acting as a publisher of 
open source information and by facilitating the interaction of users and 
developers, particularly through the Fedora and JBoss.org projects, we believe 
our websites have become community centers for open source. Additionally, 
redhat.com serves as a primary customer interface, web store and order 
mechanism for many of our offerings. 


Future versions will likely mention CentOS the way they do Fedora, in terms of 
being an active contributor.


Re: RedHat CentOS acquisition: stating the obvious

2014-01-14 Thread John Lauro
Your first assumption, although largely correct as a generality, is not 
entirely accurate, and at a minimum making money is not the sole purpose.  That 
is why companies have mission statements.  They rarely highlight the purpose of 
making money, although that is often the main purpose even if not stated.  What 
is Red Hat's mission?  It is listed as:
To be the catalyst in communities of customers, contributors, and partners 
creating better technology the open source way.

Making things exceedingly difficult would go against the stated mission.  In my 
opinion it would also go against making money, as it would kill the ecosystem 
of vendors that support Red Hat Enterprise Linux for their applications.

There are so many distributions out there that the biggest way for them to not 
make money is to become insignificant.  Having free alternatives like CentOS 
keeps the market share of the EL product high and ensures compatibility and a 
healthy ecosystem.  If there were no open clones of EL, then Ubuntu or 
something else would take over as the main supported platform for enterprise 
applications, and then the large enterprises that pay for Red Hat support 
contracts would move completely off.

Having people use CentOS or Scientific Linux might not directly help the bottom 
line, but for Red Hat it's a lot better than having people use Ubuntu or SUSE.  
Oracle not being free could pose a bigger threat, but either Red Hat remains on 
top as the main source for good support, or it does not and Oracle would have 
to pick up the slack for driving Red Hat out of business, and what's left of 
Red Hat would have to start using Oracle as TUV...  I don't see too many 
switching to Oracle besides those that are already Oracle shops.



- Original Message -
> From: "Patrick J. LoPresti" 
> To: scientific-linux-users@fnal.gov
> Sent: Tuesday, January 14, 2014 12:45:01 PM
> Subject: RedHat CentOS acquisition: stating the obvious
> 
> RedHat is a company. Companies exist for the sole purpose of making
> money. Every action by any company -- literally every single action,
> ever -- is motivated by that goal.
> 
> The question you should be asking is: How does Red Hat believe this
> move is going to make them money?
> 
> Those were statements of fact. What follows is merely my opinion.
> 
> Right now, anybody can easily get for free the same thing Red Hat
> sells, and their #1 competitor is taking their products, augmenting
> them, and reselling them. If you think Red Hat perceives this as
> being
> in their financial interest, I think you are out of your mind.
> 
> SRPMs will go away and be replaced by an ever-moving git tree. Red
> Hat
> will make it as hard as legally possible to rebuild their commercial
> releases. The primary target of this move is Oracle, but Scientific
> Linux will be collateral damage.
> 
> I consider all of this pretty obvious, but perhaps I am wrong. I hope
> I am.
> 
>  - Pat
> 


Re: RedHat CentOS acquisition: stating the obvious

2014-01-14 Thread John Lauro
Sounds like one step forward and five steps backward.  BSD lacks much and has 
little that isn't in the Linux kernel. 

- Original Message -

> From: "Jean-Victor Côté" 
> To: "Scientific Linux Users List" 
> Sent: Tuesday, January 14, 2014 1:06:02 PM
> Subject: RE: RedHat CentOS acquisition: stating the obvious

> Debian is moving to the BSD kernel.
> How does Scientific BSD sound to you?

> Jean-Victor Côté, M.Sc.(Sciences économiques), (CPA, CMA), Post MBA
> J'ai aussi passé d'autres examens, dont les examens CFA.

> J'ai un profil Viadeo sommaire:
> http://www.viadeo.com/fr/profile/jean-victor.cote
> I also have a LinkedIn profile:
> http://www.linkedin.com/profile/view?id=2367003&trk=tab_pro

> > Date: Tue, 14 Jan 2014 09:45:01 -0800
> > Subject: RedHat CentOS acquisition: stating the obvious
> > From: lopre...@gmail.com
> > To: scientific-linux-users@fnal.gov
> >
> > RedHat is a company. Companies exist for the sole purpose of making
> > money. Every action by any company -- literally every single
> > action,
> > ever -- is motivated by that goal.
> >
> > The question you should be asking is: How does Red Hat believe this
> > move is going to make them money?
> >
> > Those were statements of fact. What follows is merely my opinion.
> >
> > Right now, anybody can easily get for free the same thing Red Hat
> > sells, and their #1 competitor is taking their products, augmenting
> > them, and reselling them. If you think Red Hat perceives this as
> > being
> > in their financial interest, I think you are out of your mind.
> >
> > SRPMs will go away and be replaced by an ever-moving git tree. Red
> > Hat
> > will make it as hard as legally possible to rebuild their
> > commercial
> > releases. The primary target of this move is Oracle, but Scientific
> > Linux will be collateral damage.
> >
> > I consider all of this pretty obvious, but perhaps I am wrong. I
> > hope I am.
> >
> > - Pat


Re: Centos / Redhat announcement

2014-01-09 Thread John Lauro
Chances are, if it's going to be in git then there will likely be a full commit
log and branches, which might make it extremely easy to pull out what RH did,
what CentOS did, etc...

OK, maybe I am a bit over-optimistic, but not all change is bad.
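
For example, something along these lines (the repository URL and branch names 
are hypothetical; whatever layout actually gets published may differ):

git clone https://git.example.org/rpms/somepackage.git
cd somepackage
git log --oneline --all              # full commit history across all branches
git diff rhel-branch centos-branch   # exactly what the rebuild changed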


- Original Message -
> I have been struggling with this myself tbh. If RH adds a line in a
> GPL
> program that says "Welcome to Red Hat", releases the binary as RHEL
> and
> then modifies it for CentOS to read "Welcome to CentOS" and only
> releases the source that says "Welcome to CentOS", then they are in
> technical violation of the GPL, I would say. (IANAL).


Re: Still having problem with epel

2014-01-02 Thread John Lauro
Those instructions look like they are for CentOS.  Try: 
yum install yum-conf-epel 

instead of wget/rpm. 


Did you use: 
yum install yum-conf-rpmfusion 


- Original Message -


From: "Mahmood Naderan"  
To: scientific-linux-users@fnal.gov 
Sent: Thursday, January 2, 2014 2:39:35 AM 
Subject: Still having problem with epel 



Hi, 
Recently I have faced a problem with the epel repository and asked a question about 
that. That thread was messy so I decided to create a simpler scenario. 


I have followed the instructions from 
http://www.rackspace.com/knowledge_center/article/installing-rhel-epel-repo-on-centos-5x-or-6x
 and downloaded the rpm file. However the epel repository doesn't work for me. 
If I remove it, then everything is normal. Please see the commands 


# wget 
http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm 
--2014-01-02 11:07:27-- 
http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm 
Resolving dl.fedoraproject.org... 209.132.181.23, 209.132.181.24, 
209.132.181.25, ... 
Connecting to dl.fedoraproject.org|209.132.181.23|:80... connected. 
HTTP request sent, awaiting response... 200 OK 
Length: 14540 (14K) [application/x-rpm] 
Saving to: "epel-release-6-8.noarch.rpm" 

100%[>]
 14,540 59.5K/s in 0.2s 

2014-01-02 11:07:33 (59.5 KB/s) - "epel-release-6-8.noarch.rpm" 





# rpm -Uvh epel-release-6-8.noarch.rpm 
Preparing... ### [100%] 
1:epel-release ### [100%] 



# ls -1 /etc/yum.repos.d/epel* 
/etc/yum.repos.d/epel.repo 
/etc/yum.repos.d/epel-testing.repo 



# cat /etc/yum.repos.d/epel.repo 
[epel] 
name=Extra Packages for Enterprise Linux 6 - $basearch 
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch 
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
 
failovermethod=priority 
enabled=1 
gpgcheck=1 
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6 

[epel-debuginfo] 
name=Extra Packages for Enterprise Linux 6 - $basearch - Debug 
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch/debug 
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-6&arch=$basearch
 
failovermethod=priority 
enabled=0 
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6 
gpgcheck=1 

[epel-source] 
name=Extra Packages for Enterprise Linux 6 - $basearch - Source 
#baseurl=http://download.fedoraproject.org/pub/epel/6/SRPMS 
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-6&arch=$basearch
 
failovermethod=priority 
enabled=0 
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6 
gpgcheck=1 





# yum makecache 
Loaded plugins: fastestmirror, refresh-packagekit, security 
Loading mirror speeds from cached hostfile 
Error: Cannot retrieve metalink for repository: epel. Please verify its path 
and try again 



# rpm -qa | grep epel 
rpm epel-release-6-8.noarch 



# rpm -e epel-release-6-8.noarch 

# yum makecache 
Loaded plugins: fastestmirror, refresh-packagekit, security 
Loading mirror speeds from cached hostfile 
* sl: ftp2.scientificlinux.org 
* sl-security: ftp2.scientificlinux.org 
sl | 3.5 kB 00:00 
sl-livecd-extra | 1.4 kB 00:00 
sl-security | 3.0 kB 00:00 
Metadata Cache Created 




So, how can I resolve the issue? 

Regards, 
Mahmood 




Re: Kernel Panic after power failure

2013-10-21 Thread John Lauro
Is it an SSD?  A lot of SSDs (especially consumer grade) lie about writes 
completing, and that causes nasty problems when the power goes out... 

Try mounting it with ro,noload. 
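
A minimal sketch (the device path is taken from the error messages below; the 
mount point is arbitrary):

# Mount read-only without replaying the (damaged) journal, so data can be copied off.
mkdir -p /mnt/recovery
mount -t ext4 -o ro,noload /dev/mapper/VolGroup-lv_root /mnt/recovery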

- Original Message -

> From: "Henrique C. S. Junior" 
> To: "Scientific Linux Users" 
> Sent: Monday, October 21, 2013 8:52:52 AM
> Subject: Kernel Panic after power failure

> After a power failure in my building, my SL 6.4 (x86_64) is
> experiencing a kernel panic in the first stages of booting.
> Error messages suggests that a serious problem occurred with my Ext4
> filesystem in /dev/mapper/VolGroup-lv_root.
> Here are the error messages aftes the kernel panic:

> ata3.00: status: { DRDY ERR }
> ata3.00: error: { UNC }
> ata3.00: exception Emask 0xoSAct 0x0 SErr 0x0 action 0x0
> ata3.00: BMDMA stat 0x25
> ata3.00: failed command: READ DMA
> ata3.00: cmd c8/00:00:c0:86:14/00:00:00:00:00/e3 tag 0 dm 131072 in
> res 51/40:cf:e8:86:14/00:00:00:00:00/e3 Emask 0x9 (media error)

> ata3.00: status: { DRDY ERR }
> ata3.00: error: { UNC }
> JBD: Failed to read block at offset 6886
> EXT4-fs (dm-0): error loading journal
> mount: wrong fs type, bad option, bad superblock on
> /dev/mapper/VolGroup-lv_root,
> missing codepage or helper program, or other error
> In some cases useful info is found in syslog - try
> dmesg | tail or so

> Rescue disk says that I "don't have any Linux partitions" and I need
> to rescue, at least, two MySQL databases in this server.
> Can someone, please, provide some help with this case?

> ---
> Henrique C. S. Junior
> http://about.me/henriquejunior
> Química Industrial - UFRRJ
> Prefeitura Muncipal de Paracambi
> Centro de Processamento de Dados


Re: How a user can execute a file from anothe user

2013-09-26 Thread John Lauro
One minor note:

Read isn't needed on the directories if the user/script/etc. knows the path.  If 
the filename is known (no need to do an ls on the directory), then execute is 
sufficient.  If you give read, then all the filenames in your directory are 
revealed (but not necessarily their contents).
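
A small sketch of the difference (the directory and script names below are 
placeholders):

chmod o+x /home/mahmood            # execute (search) only, no read
ls /home/mahmood                   # as another user: Permission denied
/home/mahmood/shared/script1       # still runs, if the full path is known and
                                   # the script's own permissions allow it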

- Original Message -
> From: "Earl Ramirez" 
> To: "Mahmood Naderan" 
> Cc: scientific-linux-users@fnal.gov
> Sent: Thursday, September 26, 2013 4:43:31 PM
> Subject: Re: How a user can execute a file from anothe user
> 
...
> Sorry, I just saw the mistake, I forgot to mention that you need to
> grant access to the your home directory as mentioned by Mark.
> 
> chmod o+rx /home/mahmood (I added read as the user didn't have
> permission to access the directory.
> 
> You should now be able to execute the script as another user.
> 
> For your reference:
> 
> I created a folder named "shared" in user2 home directory
> 
> @lab19 ~]# ls -la /home/user2
> total 40
> drwx---r-x. 5 user2 user2  4096 Sep 26 15:57 .
> drwxr-xr-x. 5 root  root   4096 Sep 26 15:53 ..
> -rw---. 1 user2 user2  1387 Sep 26 16:27 .bash_history
> -rw-r-. 1 user2 user218 Feb 21  2013 .bash_logout
> -rw-r-. 1 user2 user2   176 Feb 21  2013 .bash_profile
> -rw-r-. 1 user2 user2   124 Feb 21  2013 .bashrc
> drwxr-x---. 2 user2 user2  4096 Nov 11  2010 .gnome2
> drwxr-x---. 4 user2 user2  4096 Dec 20  2012 .mozilla
> drwxrws---. 2 user2 public 4096 Sep 26 15:57 shared
> -rw---. 1 user2 user2   641 Sep 26 15:57 .viminfo
> 
> Created the script and was able to execute it from the user name
> user1
> 
> @lab19 ~]# ls -la /home/user2/shared/
> total 12
> drwxrws---. 2 user2 public 4096 Sep 26 15:57 .
> drwx---r-x. 5 user2 user2  4096 Sep 26 15:57 ..
> -rwxrwx---. 1 user2 public   18 Sep 26 15:57 script1
> 
> user1@lab19 ~]$ /home/user2/shared/script1
> FilesystemSize  Used Avail Use% Mounted on
> /dev/mapper/vg_lab11-lv_root
>   5.5G  2.8G  2.5G  54% /
> tmpfs 504M  232K  504M   1% /dev/shm
> /dev/vda1 485M   92M  369M  20% /boot
> /dev/md1272.0G  100M  1.9G   5% /home/labs
> 
> 
> 
> 
> --
> 
> 
> Kind Regards
> Earl Ramirez
> GPG Key: http://trinipino.com/PublicKey.asc
> 


Re: yum update failure for 6x x86_64 - [Errno 14] Downloaded more than max size

2013-09-13 Thread John Lauro
Sounds like you did the update in the middle of or an only partially complete 
sync, or the sync didn't finish and so primary.sqlite.bz2 is newer or older 
than other parts of the repo.  Make sure your rsync runs cleanly, and then try 
again.  Does your rsync have --delete-delay --delay-updates ?   If not, that 
should at least reduce the window of a partial sync in the future.
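
A rough sketch of such a sync (the upstream module path and local directory are 
placeholders; adapt to your existing mirror script):

# Updated files are moved into place at the end of the run and deletions happen
# last, which narrows the window in which clients see a half-updated repo.
rsync -avH --delete-delay --delay-updates \
  rsync://rsync.scientificlinux.org/scientific/6x/x86_64/ \
  /var/www/html/scientific/6x/x86_64/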



- Original Message -
From: "Paul Jochum" 
To: scientific-linux-users@fnal.gov
Sent: Thursday, September 12, 2013 11:21:55 AM
Subject: yum update failure for 6x x86_64 - [Errno 14] Downloaded more than max 
size

Hi All:

 This morning, I am trying to perform a "yum update" from our 
rsync'd copy of the Scientific Linux 6.x x86_64 repo, and ran into the 
following error:

[root@lss-desktop01 yum.repos.d]# yum clean all
Loaded plugins: refresh-packagekit, security
Cleaning repos: adobe-linux-i386 adobe-linux-x86_64 sl6x sl6x-fastbugs 
sl6x-security
Cleaning up Everything

[root@lss-desktop01 yum.repos.d]# yum update
Loaded plugins: refresh-packagekit, security
adobe-linux-i386 |  951 B 00:00
adobe-linux-i386/primary |  11 kB 00:00
adobe-linux-i386 17/17
adobe-linux-x86_64 |  951 B 00:00
adobe-linux-x86_64/primary | 1.2 kB 00:00
adobe-linux-x86_64 2/2
sl6x | 3.7 kB 00:00
sl6x/primary_db | 4.2 MB 00:02
sl6x-fastbugs | 2.6 kB 00:00
http://lss-kickstart1.ih.lucent.com/scientific/6x/x86_64/updates/fastbugs/repodata/primary.sqlite.bz2:
 
[Errno 14] Downloaded more than max size for 
http://lss-kickstart1.ih.lucent.com/scientific/6x/x86_64/updates/fastbugs/repodata/primary.sqlite.bz2:
 
365918 > 332221
Trying other mirror.
http://lss-kickstart1.ih.lucent.com/scientific/6x/x86_64/updates/fastbugs/repodata/primary.sqlite.bz2:
 
[Errno 14] Downloaded more than max size for 
http://lss-kickstart1.ih.lucent.com/scientific/6x/x86_64/updates/fastbugs/repodata/primary.sqlite.bz2:
 
365918 > 332221
Trying other mirror.
Error: failure: repodata/primary.sqlite.bz2 from sl6x-fastbugs: [Errno 
256] No more mirrors to try.


Re: Bug in yum-autoupdate

2013-08-03 Thread John Lauro
- Original Message -
> From: "Nico Kadel-Garcia" 
>
> It's exceedingly dangerous in a production environment. I've helped
> run, and done OS specifications and installers for a system over
> 10,000 hosts. and you *never*, *never*, *never* auto-update them
> without warning or outside the maintenance windows. *Never*. If I
> caught someone else on the team doing that as a matter of policy, I
> would have campaigned to have them fired ASAP.


If you have to manage 10,000 hosts then you are lucky if you have never had to 
deal with no maintenance window and zero downtime, where most maintenance has 
to be possible outside of a maintenance window.  That is how many IT shops with 
thousands of machines have to operate these days.  You might even want to read 
up on Netflix's thoughts on Chaos Monkey.  Auto-updates are just another form 
of random outage you might have to deal with.  As long as you have different 
hosts upgrading on different days and times, and you have automated routines 
that test and take servers out of service automatically if things fail, then 
auto-updates are perfectly fine.  If things break from the auto-updates, it 
becomes obvious from the update history which machines broke.

Campaigning to have someone fired without even hearing their reason for 
upgrading, or without first warning them that at your location it is standard 
practice never to auto-update because you have a separate QA process that even 
critical security patches must go through, is a very bad practice on your part.

I am not going to state what patch policy I use, only that different policies 
work for different environments.  Based on your statement, it sounds like you 
could be losing some valuable co-workers by lobbying to get people fired who 
have a different opinion from you, instead of trying to educate and/or learn 
from each other.  If you feel you cannot learn from your peers, you have 
already proven you are correct in that respect, but you have also shown there 
is much you don't know by being incapable of learning new things.


(Personally I would hate to use Nagios for 10,000 hosts.  It didn't really 
scale that well IMHO, but to be honest I haven't bothered looking at it in over 
4 years, and maybe it's improved.  Not familiar with Icinga, but I have had 
good luck with Zabbix for large scale)


Large filesystem recommendation

2013-07-24 Thread John Lauro
What is recommended for a large filesystem (40TB) under SL6?

In the past I have always had good luck with jfs.  It might not be the fastest, 
but it is very stable.  It can repair huge filesystems in a reasonable amount 
of RAM, and it handles large directories and large files.  Unfortunately jfs 
doesn't appear to be supported in 6?  (or is there a repo I can add?)


Besides support for a 40+TB filesystem, I also need support for files >4TB and 
directories with hundreds of thousands of files.  What do people recommend?


Re: advice on using latest firefox from mozilla

2013-06-06 Thread John Lauro
If you read the bug report, he wasn't complaining about the requirement for 
root to update, only that it does not ask for it and simply tries to update and 
crashes. 


- Original Message -

From: "Paul Robert Marino"  
To: "Todd And Margo Chester" , "scientific-linux-users" 
 
Sent: Thursday, June 6, 2013 8:03:29 PM 
Subject: Re: advice on using latest firefox from mozilla 

Todd and Margo (I'm never sure who I'm addressing with you lol) 

There is a long-standing security reason that non-root users can't update software 
that affects all users on the system. Remember, the overall *nix design is based on 
a multi-user model where only people granted root access, by password or (even 
better) sudo access, can affect all users. This is a good thing; it was done 
in response to computer viruses in the 70s. 




-- Sent from my HP Pre3 

On Jun 6, 2013 7:40 PM, Todd And Margo Chester  wrote: 

On 06/06/2013 01:18 PM, Todd And Margo Chester wrote: 
> I have had no 
> problems at all, except that the updates can not be installed 
> by the users -- you have to fire up Firefox as root. 

I just filed this: Linux upgrade required root privileges. 
And, it's looking good for implementation: 

https://bugzilla.mozilla.org/show_bug.cgi?id=880504 

"Ya All" please vote for it! 

-T 



Re: is this a this virus or an error

2013-05-24 Thread John Lauro
Linux can get viruses too including ones that could cause the symptoms 
described. Not sure what you mean by oos viruses, but the claim was blaster 
like, not the blaster virus.  That said, it sounds suspicious like an attempt 
to get you to buy something.  Anyways, a virus on Linux is possible, but you 
can use argus or tcpdump or a ton of other network monitoring tools on your 
machine and see if it is spewing out random connections that it shouldn't be.  
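
A quick sketch of such a check with tcpdump (the interface name is an 
assumption; 192.168.1.144 is the address the gateway flagged):

# Watch outgoing TCP connection attempts (SYNs without ACK) from the flagged host;
# a steady stream to random addresses would back up the gateway's warning.
tcpdump -ni eth0 'src host 192.168.1.144 and tcp[tcpflags] & tcp-syn != 0 and tcp[tcpflags] & tcp-ack == 0'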



- Original Message -
From: "g" 
To: "scientific linux users" 
Sent: Friday, May 24, 2013 12:50:12 PM
Subject: is this a this virus or an error

greetings.

last night while reading articles at 'news.yahoo.com' using firefox 17.0.6,
i had 3 pages opened and this message popped up;

+++
Excessive Sessions Warning
Error

Your 2701HG-B Gateway has intercepted your web page request to provide you
with this important message. The following devices on your network are using
a large number of simultaneous Internet sessions:

192.168.1.144

The most likely cause of this issue is a ~blaster~ type virus which has
infected the device. It is strongly recommended that the devices above be
scanned for potential viruses.

Note that a large number of sessions may occasionally be the result of
application software or gaming software installed on the device. If you
believe this is the case, click the ~Do not show me excessive session
warnings in the future~ to disable this feature.

To access the requested Web page that was intercepted, please close all
browser windows and then restart your Web browser software.

If you continue to see this page after closing all open Web browser windows,
restart your computer.

[ ] Do not show me excessive session warnings in the future
+++

i have, at previous times, had 8 to 10 pages opened and not received such
a notice.

curious as to what such a virus infected, i looked up 'blaster' at
wikipedia.org to find;

+++
The Blaster Worm (also known as Lovsan, Lovesan or MSBlast) was a computer
worm that spread on computers running the Microsoft operating systems: Windows
XP and Windows 2000, during August 2003.[1]

The worm was first noticed and started spreading on August 11, 2003. The rate
that it spread increased until the number of infections peaked on August 13,
2003. Filtering by ISPs and widespread publicity about the worm curbed the
spread of Blaster.
+++

i contacted bellsouth and the rep insisted that i had a virus that was
causing message.

when i told her that i had doubt that it was a virus, because i run linux
and oos viruses do not effect linux.

she insisted that "viruses have a way of creeping into a system" and that
for $100, i could have an online scan run to check my system.

when i mentioned that notice stated;

  It is strongly recommended that the devices above be scanned for potential
  viruses.

rep insisted that meant my computer and not the dsl modem.

needless to say, if she did not understand what i was trying to explain
to her that i was not using oos, she has little understanding about any
virus problem.

so, have any readers run across above notice or know of any virus that can
enter a linux system to cause such a message to appear?

tia.

-- 

peace out.

in a world with out fences, who needs gates.

sl5.9 linux

tc.hago.

g
.


Re: Network is ok, web browser won't browse

2013-04-25 Thread John Lauro
One possibility is a proxy issue: configured when it shouldn't be, or the proxy 
server is down, or it's not configured but an upstream firewall enforces that 
you go through it, etc... 

Less likely, but sometimes a switch or router just acts up and needs to be 
reset.  Assuming you have identically configured machines and whole groups are 
acting differently, that makes it a little more likely... 

Have you tried different utilities for HTTP, such as wget? 
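
A quick sketch of what to try from one of the affected machines (example.com is 
just a stand-in for any outside site):

env | grep -i proxy                        # is a proxy configured in the environment?
wget -S -O /dev/null http://example.com/   # verbose headers show where the request dies
# System-wide proxy settings often live in /etc/environment, /etc/profile.d/*,
# /etc/wgetrc, or /etc/yum.conf.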


- Original Message -

From: "Nathan Moore"  
To: "scientific-linux-users"  
Sent: Thursday, April 25, 2013 10:03:41 PM 
Subject: Network is ok, web browser won't browse 

Context: small academic cluster of SL5.8 machines, x86_64. 


Recently, some of the machines on the cluster stopped being able to access the 
outside internet. Machines can still update via ftp, yum, nslookup, and ping, 
but http requests beyond the local cluster seem to be universally denied. I've 
never encountered a problem like this before (and it is probably my own fault, 
as I am a self-taught sys-admin), but I'm wondering if anyone else has seen a 
similar problem. 


Bonus points if you can tell me where in /etc or /var I should look to find the 
offending config file. 


regards, 


NTM 





Mirror questions

2013-04-22 Thread John Lauro
I am trying to set up a mirror (internal initially, might make it public).  It 
seems to be taking significantly more space than expected based on the FAQ.

The page at https://www.scientificlinux.org/download/mirroring/mirror.rsync 
recommends --exclude=archive/obsolete and --exclude=archive/debuginfo.  Any 
reason not to also exclude archives/obsolete and archives/debuginfo?  They seem 
rather big and are under the 5rolling directory.  Perhaps a mistake in the 
directory name, as 6rolling's is called archive?

Also, sites/example under most releases seems to recurse forever and is causing 
part of my problem.  Is some symlink or hard link not syncing correctly?  For 
example:  
http://rsync.scientificlinux.org/linux/scientific/53/i386/sites/example/sites/example/sites/example/sites/example/sites/example/sites/example/sites/example/sites/example/
 (etc...)
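
For reference, a sketch of the rsync invocation in question (the module path 
and destination are placeholders; the extra archives/* excludes are the open 
question above, not something the mirroring page itself recommends):

rsync -avH --delete \
  --exclude=archive/obsolete  --exclude=archive/debuginfo \
  --exclude=archives/obsolete --exclude=archives/debuginfo \
  rsync://rsync.scientificlinux.org/scientific/ /srv/mirror/scientific/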