sata0 is not sda

2013-02-18 Thread Ken Teh

During a kickstart install, how are drives mapped?  I notice that sata0 is not 
always sda.  This is especially true when there are very large drives in the 
mix.


Re: sata0 is not sda

2013-02-19 Thread Ken Teh

If the disks cannot be identified deterministically, then I cannot avoid
making my kickstart installs two-step, whether I pre-label the disks or
install the system with a single disk and add the second disk after the
install.

I appreciate the situation from the kernel's perspective. Driver loads, disk
detection, etc.  But it sure puts a crinkle in the beauty of a kickstart
install.

Thanks everyone for your replies.


On 02/18/2013 10:06 PM, Nico Kadel-Garcia wrote:

On Mon, Feb 18, 2013 at 4:01 PM, Ken Teh  wrote:

During a kickstart install, how are drives mapped?  I notice that sata0 is
not always sda.  This is especially true when there are very large drives in
the mix.


It Depends(tm). There are confusing difficulties because the drive
controllers may be pre-loaded modules, which will be loaded first, and
because later updates or manual drivers compiled for custom kernels may
be loaded in a different order or pre-loaded with mkinitrd. Then, as
drives (or RAID arrays which look like drives) are detected by the BIOS
starting up and loading the drivers from the *boot* partition for
additional controllers, they're assigned names in the order detected:
first drive /dev/sda, second drive /dev/sdb, etc. This is why the boot
loader is usually on "/dev/sda".

IDE drives used to be listed as "/dev/ide0, /dev/ide1, etc." in
deterministic fashion, but that got tossed out when they started
labeling all drives as /dev/sda to give access to special
SCSI-compatible commands.

The result is that it's guesswork. This is why our favorite upstream
vendor tried for a while to use "LABEL=" settings to identify
particular partitions, instead of trying to deduce what would be
detected where.
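The LABEL= approach mentioned above looks roughly like this (a sketch; the device name and label are examples, not values from the thread):

```shell
# Label the filesystem once (ext2/3/4); /dev/sda1 is an example device.
e2label /dev/sda1 boot

# Then /etc/fstab can reference the label instead of a detection-order name:
#   LABEL=boot  /boot  ext4  defaults  1 2
```

Because the label travels with the filesystem, it survives the controller-probe order changing between boots.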


Re: Installing on a new laptop

2013-02-26 Thread Ken Teh

I never boot a new laptop into Windows.  I replace the original hard drive
with a new one and install Linux on it.  This way I can put the original disk
back in and never void my warranty.  You can then even sell it in its
"original" state.

Of course, this works only if you don't plan to use Windows.

I use a $500 Lenovo X120e netbook.


On 02/26/2013 11:26 AM, Scott_Gates wrote:

OK, if I needed a desktop, I'd just roll my own, probably starting with
something bare-bones from TigerDirect.

I'm thinking of buying a new laptop, rather than just recycling old ones
like I have been.

I have HEARD there are issues with trying to install on computers with
Windows 8 already installed, and those are the only source I have of
"CHEAP" laptops: basically Wal-Mart or Best Buy boxes that I can get in
the $250-$400 range.

Does anybody have experience with this?  Yeah, I know I'll be voiding
the warranty, but I need a laptop for real work, not socializing or
netflicking.  You know what I mean.


Re: Installing on a new laptop

2013-03-01 Thread Ken Teh

Time to stop this thread, methinks.


On 03/01/2013 08:06 AM, Paul Robert Marino wrote:

Have you run a ps -ax on a Linux box lately? You call that monolithic?
The Linux kernel has been migrating toward a micro kernel slowly for the last
decade now. In some ways it's benefited from its older cousin GNU Hurd, because
the Linux kernel developers had the benefit of knowing what went wrong and what
worked well in Hurd. By the way, from my understanding, one of the things they
really got wrong was using Mach as a base, because it was the root cause of the
majority of the issues they've had, and last I heard they were stripping Mach
out of the Hurd kernel.



-- Sent from my HP Pre3


On Mar 1, 2013 12:13 AM, zxq9  wrote:

On 03/01/2013 01:03 PM, Yasha Karant wrote:
 > Modern BSD is a micro-kernel ("MACH") design, whereas Linux still is a
 > monolithic kernel design...

implying...

That monolithic kernel design is demonstrably inferior in every respect
to micro-kernel design, and that there is a universal evolutionary path
predicted by some law we have yet to discover.

That kernel design trumps driver availability in every respect.

That any of this matters since nobody is fronting the development time
to implement $astronaut_arch_X.

That this is the place to discuss this.


Re: sata0 is not sda

2013-03-04 Thread Ken Teh

On 03/04/2013 12:03 PM, Lamar Owen wrote:


This is partially due in EL6 to the use of dracut and its new initrd
udev-ish system. I have one RHEL 6 box that is hooked to a pretty good-sized
array on fibre-channel; it's fully HA, so there are four paths to any given
LUN. My boot device, a 3Ware 9500-series SATA RAID card, ends up with a
device name for its first logical disk anywhere between /dev/sda and
/dev/sdah; it's been /dev/sdu, /dev/sds, /dev/sdt, /dev/sdz, /dev/sdab, and
pretty much everything in between, and it will vary from one boot to the
next; it's at /dev/sdad right now. But I have the 3ware card, an
on-motherboard U320 SCSI controller, a four-port Silicon Image SATA card,
and a dual-port FC card hooked to the SAN.



I've seen the same behaviour with my LSI MegaRAIDs and I find it very
disconcerting.

I use kickstart to upgrade machines by doing a full install.  I reserve the
system disks so I can wipe them out and reinstall at will. Now that I don't
know which disk is which, kickstart becomes more cumbersome.  Either I have to
record the UUIDs and embed them in the kickstart file or open the box and
disconnect any non-system drive.
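Recording the stable identifiers for the kickstart file can be done before the wipe; a sketch (device names are examples):

```shell
# by-id names are derived from model/serial, so they survive
# detection-order changes between boots; skip the -partN entries.
ls -l /dev/disk/by-id/ | grep -v -- -part

# UUIDs and labels of existing filesystems, suitable for embedding
# in the kickstart file:
blkid /dev/sda1
```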


mod_wsgi version?

2013-03-08 Thread Ken Teh

I'm having trouble getting mod_wsgi to work.  There is a warning in the httpd 
logs that says mod_wsgi was compiled for Python 2.6.5; I am running 2.6.6 which 
is the current version in the SL repo.

There are 2 mod_wsgi SRPMs - a 3.2.1 version and a 3.2.3 version, but there is 
only a 3.2.1 binary rpm.  Can someone clarify?

Should I grab the 3.2.3 srpm and rebuild it with my current python?  It's only 
a warning so I thought it would work but it's pretty obvious the 
WSGIScriptAlias directive has no effect.
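Rebuilding the 3.2.3 SRPM against the installed Python would look roughly like this (a sketch; the exact package file names are assumptions based on the versions mentioned):

```shell
# Rebuild the source RPM against the locally installed Python 2.6.6.
rpmbuild --rebuild mod_wsgi-3.2.3-*.src.rpm

# Install the resulting binary package (output path depends on your
# rpmbuild configuration).
rpm -Uvh ~/rpmbuild/RPMS/x86_64/mod_wsgi-3.2.3-*.x86_64.rpm
```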

Thanks!


Re: XRandR + nVidia

2013-03-10 Thread Ken Teh

I can really identify with the xkcd graphic.  X11 graphics on Linux has really 
come a long way. It was a real struggle back in the early 90's.

Florian's response is spot on.  Having an xorg.conf actually messes things up.
Don't know why.  But when you get rid of it, something automagical happens.



On 03/09/2013 02:59 PM, Joseph Areeda wrote:

I need some advice on how to turn on RANDR.

I have a few systems with nVidia GPU 5xx and 6xx series. Latest kmod drivers, 
multiple monitors with Xinerama enabled.

Newer systems work fine, but I have one that has been upgraded since before the
two were compatible. I have libXrandr installed but it doesn't seem to be enabled.

This reminds me of: [xkcd graphic; image not preserved in the archive]


Would someone point me to a link that explains what I have to do?

Thanks,

Joe


advice on using latest firefox from mozilla

2013-06-05 Thread Ken Teh

I'd like to hear some pros and cons with using the latest firefox from mozilla 
instead of using the ESR version that comes with the stock distro.  I am 
deploying a web app that fails to render properly. It is a bug in firefox which 
has been fixed since version 18.

Naturally, the ESR version is 17. Sigh...


Re: advice on using latest firefox from mozilla

2013-06-06 Thread Ken Teh

I'd appreciate some more details on how you implement the Mozilla update
protocol in a (not quite) enterprise environment.  IOW, not hundreds or
thousands of machines, but enough to make manual updates unfeasible.

Isn't mozilla's update user-based?  When a user launches firefox, the browser
checks for updates, and if there is a newer version, asks the user to download
and install it.  What do you do if the user has no privileges to install
software?

Thanks!


On 06/05/2013 08:40 PM, Yasha Karant wrote:

On 06/05/2013 01:57 PM, Ken Teh wrote:

I'd like to hear some pros and cons with using the latest firefox from
mozilla instead of using the ESR version that comes with the stock
distro.  I am deploying a web app that fails to render properly. It is a
bug in firefox which has been fixed since version 18.

Naturally, the ESR version is 17. Sigh...


As we were deploying new Nvidia-equipped stereoscopic 3D scientific
visualisation workstations using X86-64 SL6x, we had to make a decision as
to whether to use the SL distribution Firefox (ESR) or the latest production
release.  After considering the pros and cons (including that the machines
are behind a network firewall), we selected the current production version.
Thus far, we have had no issues, and have done several updates using the
Mozilla Firefox update technique, not the SL6x update, keeping Firefox up to
the current production release.  Part of the reason for the decision was the
observation you also have made; certain defects that were corrected in the
production release did not have the corrections backported to the earlier
release ESR SL version.

Yasha Karant
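One common pattern for the approach Yasha describes (a sketch with assumed paths, not necessarily what his group does) is to install the Mozilla tarball release system-wide as root, so unprivileged users run it but never see the updater prompt:

```shell
# Unpack the Mozilla tarball release under /opt, root-owned, so the
# in-browser updater cannot write to it and users are not prompted.
tar -xjf firefox-*.tar.bz2 -C /opt
ln -sf /opt/firefox/firefox /usr/local/bin/firefox

# To update: fetch a newer tarball centrally (cron job or configuration
# management) and repeat, replacing /opt/firefox.
```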


Re: advice on using latest firefox from mozilla

2013-06-06 Thread Ken Teh

I don't know Remi's repo and I'm wary of including a lot of repos.  The only
extra repos I use are elrepo, epel, and adobe.  I even have doubts about epel.

Comments?  Assurances?



On 06/06/2013 02:31 PM, Graham Allan wrote:

Latest firefox is available pre-packaged for SL at Remi's repo,
http://rpms.famillecollet.com/

we used it for a while when TUV was still supplying a desperately old
version, though more recently switched back to the supplied ESR release.

Graham

On Thu, Jun 06, 2013 at 12:19:23PM -0500, Ken Teh wrote:

I'd appreciate some more details on how you implement the Mozilla update
protocol in a (not quite) enterprise environment.  IOW, not hundreds or
thousands of machines, but enough to make manual updates unfeasible.

Isn't mozilla's update user-based?  When a user launches firefox, the browser
checks for updates, and if there is a newer version, asks the user to download
and install it.  What do you do if the user has no privileges to install
software?

Thanks!


On 06/05/2013 08:40 PM, Yasha Karant wrote:

On 06/05/2013 01:57 PM, Ken Teh wrote:

I'd like to hear some pros and cons with using the latest firefox from
mozilla instead of using the ESR version that comes with the stock
distro.  I am deploying a web app that fails to render properly. It is a
bug in firefox which has been fixed since version 18.

Naturally, the ESR version is 17. Sigh...


As we were deploying new Nvidia-equipped stereoscopic 3D scientific
visualisation workstations using X86-64 SL6x, we had to make a decision as
to whether to use the SL distribution Firefox (ESR) or the latest production
release.  After considering the pros and cons (including that the machines
are behind a network firewall), we selected the current production version.
Thus far, we have had no issues, and have done several updates using the
Mozilla Firefox update technique, not the SL6x update, keeping Firefox up to
the current production release.  Part of the reason for the decision was the
observation you also have made; certain defects that were corrected in the
production release did not have the corrections backported to the earlier
release ESR SL version.

Yasha Karant




question about scipy

2013-10-28 Thread Ken Teh

I see we have numpy but no scipy.  I looked on the scipy home page; it 
describes scipy as an ecosystem of packages including numpy and matplotlib.  
What's the recipe for getting scipy?

For the moment I need numpy and matplotlib which I can install piecemeal.  But 
if I install the scipy stack from its home site will I run into problems with 
numpy and matplotlib?  Anyone tried installing just the scipy package from its 
home?

Thanks!


Re: question about scipy

2013-10-29 Thread Ken Teh

Thanks!  I'm at 6.4 and scipy'ed.



On 10/28/2013 04:25 PM, Kraus, Dave (GE Healthcare) wrote:

Scipy came into TUV (and SL) in 6.4.

-Original Message-
From: owner-scientific-linux-us...@listserv.fnal.gov 
[mailto:owner-scientific-linux-us...@listserv.fnal.gov] On Behalf Of Ken Teh
Sent: Monday, October 28, 2013 4:13 PM
To: scientific-linux-users
Subject: question about scipy

I see we have numpy but no scipy.  I looked on the scipy home page; it 
describes scipy as an ecosystem of packages including numpy and matplotlib.  
What's the recipe for getting scipy?

For the moment I need numpy and matplotlib which I can install piecemeal.  But 
if I install the scipy stack from its home site will I run into problems with 
numpy and matplotlib?  Anyone tried installing just the scipy package from its 
home?

Thanks!



Re: Get screen resolution from remote system

2013-11-15 Thread Ken Teh

Did you try xdpyinfo?

'man X'  gives you an intro to the available X commands.
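For example (host name and display number are placeholders; assumes your account can reach the remote X display):

```shell
# dimensions of the root window, i.e. the running resolution
ssh remote-host 'DISPLAY=:0 xdpyinfo | grep dimensions'
```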


On 11/15/2013 09:14 AM, Stephen Berg (Contractor) wrote:

I'm searching for a command line utility of some fashion to let me get the Xwindows 
screen resolution from a remote system.  What I'd like is very simply a command, utility, 
log entry that I can reliably tell that "System X, over there on the network, is 
running an Xwindows session at 1920x1080."

Does anyone have any ideas on how I can accomplish this?  So far xrandr doesn't seem to 
let me do it.  There's lots of "Setting mode" entries in /var/log/Xorg.0.log 
but no easy way to tell if that's the actual resolution currently running.




Re: RedHat CentOS acquisition: stating the obvious

2014-01-16 Thread Ken Teh

Can we please stop with all the chatter on this topic?  Granted, the
topic has some relevance to Scientific Linux, but the conversation has
run amok and I'm resorting to deleting all the emails, possibly
deleting something that is really, really relevant.

Connie, Pat: if you have some announcement on this topic, please use a
different subject line.

Thanks!

On 01/16/2014 05:29 PM, Patrick J. LoPresti wrote:

On Thu, Jan 16, 2014 at 2:30 AM, Jos Vos  wrote:

On Wed, Jan 15, 2014 at 10:49:51AM -0800, Patrick J. LoPresti wrote:


[...] (Always remember that companies,
like politicians, do not make statements to communicate information.
They make statements to achieve a desired result. Their statements may
happen to communicate information, but if and only if it helps to
achieve their desired result.)


It's probably because of my reading problems that I read this as
"companies are bad and they are lying all the time".  I know it's not
said literally, but that's where "reading between the lines" comes in.


Is "reading between the lines" sort of like "putting words in someone's mouth"?

OK, this is going to be way off topic. But what the heck, I am on a
roll. Oh, and I will definitely be making some value judgments this
time.

Of course I do not think companies lie all the time. They tell the
truth when it is in their interest. They mislead and lie by omission
when it is in their interest. And they outright lie when it is in
their interest, if they can do so without legal or reputational risk.

Quick aside: Companies do care about their reputation, but not for the
same reason you or I do. Well, unless you are a sociopath. Companies
care about their reputation to the extent that loss of reputation
translates to loss of sales. Period.

Small companies are often an exception. They are still capable of
behaving like human beings, acting ethically and even altruistically
for its own sake. Large companies are not so capable, because a CEO's
"fiduciary duty" is to generate wealth for shareholders by any and all
legal means. Anything less would be a violation of that duty.

Most companies start small and good, but have steadily increasing
difficulty "not being evil". Red Hat and Canonical, for example, were
unquestionably positive forces for Linux at one time. But it is highly
questionable whether we still live in that time. I think it is very
unclear whether corporate involvement in open source will ultimately
turn out to be a blessing or a curse. We are just now entering the
later chapters of that story...

To summarize my world view: Small corporations are good. Big
corporations are evil. Small government is good. Big government is
evil. I am still searching for a label that captures this view. I am
pretty sure "communist" is not it.

  - Pat



Re: RedHat CentOS acquisition: stating the obvious

2014-01-17 Thread Ken Teh

On 01/17/2014 05:32 AM, Dag Wieers wrote:


To put it in perspective, this deal is huge, not all information is
available, so it is normal that some people are speculating, share opinions,
or provide answers. Even Karanbir is involved, so some of the speculation
can result in new information right here.

So I don't think there is a need for action (at least not at this point).



Agreed.

Let's keep the discussion to topics that are relevant to the future of
Scientific Linux.  Not the merits of capitalism vs communism, the motivations
of companies, etc.


Re: Creating Live CD

2014-03-17 Thread Ken Teh

I run a data acquisition system I wrote under a minimal live SL system.  About 
250MB.  I studied Urs' scripts, stole a bunch of his work, and wrote my own 
scripts to create my own live SL CD.

My systems are still running SL5x since I've not had time to update the 
scripts.  They are not as nice as Urs' live CDs but I was really after an 
appliance that I can cycle power on without worrying about saving data or 
corrupting an actual hard drive.

I can definitely recommend Urs' www.livecd.ethz.ch site.  If you need help, we 
can discuss this off-line.

Ken


On 03/15/2014 12:22 AM, Yogi A. Patel wrote:

Hi -

I develop a real-time electrophysiology platform (rtxi.org) using
Scientific Linux with kernel 3.8 and the real-time layer, Xenomai
(xenomai.org).

I would like to create a LiveCD of my system to make it easier for users to
adopt; however, I am having trouble. The standard scripts only make LiveCDs of
the stock Scientific Linux distribution+kernel.

Any suggestions on how to accomplish this?

Thank you in advance!

Yogi


advice on auto version upgrade with sl6x.repo

2014-03-18 Thread Ken Teh

I've had 2 successful upgrades from 6.4 to 6.5 with the sl6x.repo enabled.  In 
the past, I've never done upgrades, preferring to re-install.

I'd like to know what folks are doing with respect to enabling the sl6x.repo.  Is it
"just enable it, it's ready for primetime", or are you still disabling it,
doing a test drive on a test machine before reenabling across all machines?

thanks


sl7 systemd sysvinit

2014-08-25 Thread Ken Teh

I read the following article on systemd

http://ifwnewsletters.newsletters.infoworld.com/t/9625863/474699771/826094/14/

The comments suggested one could still revert to sysvinit.  Is this just 
wishful thinking on my part?


Re: about realtime system

2014-08-27 Thread Ken Teh

I used to play with a realtime Linux system back in the 90s that had a sort of
virtualization architecture.  It had a realtime executive that ran realtime
tasks.  One of its tasks was the Linux kernel itself, so the realtime tasks
could "talk" with Linux processes and make use of Linux capabilities the
executive lacked.

When I first worked with it, it ran the 1.3 kernel and it was really fast: a
6µsec latency.  It got progressively worse with 2.x kernels, but was still much
better than the 150µsec quoted earlier.

I can't remember what it was called. Somebody from New Mexico developed it.  
There was also an offshoot development by a group in Italy.



On 08/27/2014 12:07 PM, Michael Duvall wrote:

Hello,

While I am thoroughly interested in SL topics, I rarely comment on threads.  
Today is an exception.  I work for a real-time linux vendor.  I concur with 
David Somerseth's summation.  Real-time cannot be achieved under virtualization.

Regards,
--
*Michael Duvall*
Systems Analyst, Real-Time
michael.duv...@ccur.com 

(954) 973-5395 direct
(954) 531-4538 mobile

CONCURRENT | 2881 Gateway Drive | Pompano Beach, FL 33069 | www.real-time.ccur.com 


-Original Message-
*From*: David Sommerseth
*Reply-to*: scientific-linux-us...@listserv.fnal.gov
*To*: John Lauro, Paul Robert Marino
*Cc*: Brandon Vincent, llwa...@gmail.com, SCIENTIFIC-LINUX-USERS@FNAL.GOV, Nico Kadel-Garcia
*Subject*: Re: about realtime system
*Date*: Wed, 27 Aug 2014 12:27:50 -0400

On 24/08/14 18:57, John Lauro wrote:

Why spread FUD about VMware?  Anyway, to hear what they say on the subject:
http://www.vmware.com/files/pdf/techpaper/latency-sensitive-perf-vsphere55.pdf

KVM will not handle latency any better than VMware.


You currently cannot achieve true realtime characteristics when adding
virtualization, no matter the technology.  The reason is that realtime
tasks must be able to preempt running tasks to keep their deadlines.

Consider a virtualized realtime kernel running a realtime task, on a
host with a realtime kernel.  When the task gets CPU time, it preempts
all other running tasks on the provided CPU core.  But if the VM host
is not aware of this happening, it may well not give enough runtime in
the right time-window to the realtime guest OS, increasing the latency
quite noticeably.  For this to work, the guest OS kernel must be able
to communicate to the host OS kernel that it has a task which needs
attention right now.  And AFAIK, this mechanism is not implemented
anywhere.

I know some research was done on this topic some years ago, and there is
an interesting paper on it.  But I don't know whether it has gone any
further.






Re: Bizarre bug

2015-03-03 Thread Ken Teh

I set mine at uid/gid=2000 and pray it's good till I retire :)



On 03/03/2015 04:44 PM, Chris Schanzle wrote:

On 03/03/2015 03:33 PM, P. Larry Nelson wrote:

That used to happen in the old days before
system-config-users pretty much kept generated UIDs/GIDs well out
of the range that an installed piece of software might use.
I believe the rule is now that real people users get a UID > 500
and installed apps (like ntop, UID:103, GID:160) use UIDs < 500,
but I don't know if that's a hard and fast rule with apps or not.
I do the same thing with any local group I create - give it a
GID > 500.


The authoritative source used by useradd (perhaps others) is /etc/login.defs:

grep ^UID_MIN /etc/login.defs
UID_MIN  500

Historically it was UID >= 500 (note 500 was the first); in recent Fedoras and
EL7, it's now 1000:

grep ^UID_MIN /etc/login.defs
UID_MIN  1000


Note new systems also have min/max values for system accounts in login.defs:

# Min/max values for automatic uid selection in useradd
#
UID_MIN  1000
UID_MAX 60000
# System accounts
SYS_UID_MIN   201
SYS_UID_MAX   999



Re: USB point to point computer communications link

2015-03-27 Thread Ken Teh

I was about to suggest Mark's point about the second nic in the desktop.  It 
seems to me the easiest and most versatile.  And a small switch to connect the 
laptop and the desktop (on the second nic).  No cross-over cable.  A small 
disjoint lan with hard-wired addresses in /etc/hosts. You can add as many 
machines as you have ports on the switch.  You could even turn your desktop 
into a NAT gateway if you wish.


On 03/27/2015 08:38 AM, Mark Stodola wrote:

On 03/26/2015 06:51 PM, Kevin K wrote:

On Mar 26, 2015, at 6:37 PM, Yasha Karant  wrote:

My desktop workstation (currently X86-64 SL 7) has only one 802.3 physical 
port.  At my university, the IT gestapo will not allow the use of a local 802.3 
repeater (switch or hub) but requires a valid NIC MAC address and will 
disconnect any changes.  I have no 802.11 WNIC on my desktop workstation.  I 
just have obtained a new HP Zbook to run X86-64 Linux to replace my old mobile 
workstation (laptop) that was underprovisioned for 64 bit operation, had a worn 
out keyboard and pointing device, etc. (I regret to state that I am 
experimenting with OpenSUSE 13.2 on that machine for reasons beyond the subject 
matter of this post.)  The IT gestapo will not allow my workstation to serve as 
a HTTP server, etc. -- one cannot use scp, sftp, etc., for file transfer over 
the IT network from a desktop workstation (not a designated server).  I could 
attempt to transfer all of the files to the research network that has much less 
IT gestapo control -- but this is as tedious as what I am no

w

doing. H

ence, a question:


Is there a software application utility that will convert a USB network between 
two machines running standard open systems protocols to allow file transfer 
between the two machines?  I am not referring to the methods used with an 
Android device, but with a regular Linux workstation.  A cursory search of such 
things on the web did not provide any insight.  At one time, UUCP would do this 
over a RS232 point-to-point link (cable) -- will this approach still work over 
a USB (not RS232) link?  Is there something better than UUCP?


Are you wanting to do a one time transfer between the two computers?  Or be 
able to get both on the net at the same time?

For 1 time use, I would suggest a crossover cable.  Configure one to allow the 
SSH daemon to run, and copy files using scp or sftp.

If you want both to connect to the net at the same time, and be able to talk to 
each other, then an inexpensive NAT router should do the trick.  Unless they 
are running special software that can detect that you have multiple computers 
attached to it, there should be no issue.  You still wouldn’t be able to 
connect BACK to your computer from outside if servers aren’t allowed.

Behind NAT, your workstation should be able to be a server to the zbook.


If all you are looking for is file transfer, is there any reason why a USB
drive is not a viable option?  With USB length limits, it sounds like the two
machines will be in the same room with physical access.

Have you considered just adding a second NIC to the desktop for use with the 
laptop?

I recall seeing USB link devices for migrating Windows systems between 
computers several years ago, but do not have any experience with them.


ld-linux.so.2

2015-04-27 Thread Ken Teh

I have a user who has installed an executable built on another Linux distro.
He claims it was built on 64-bit Linux (doubtful).  He has no problems running
it on a 32-bit SL6.x machine but cannot run it on a 64-bit SL6.x machine.
It chokes with the following:

...:/lib/ld-linux.so: bad ELF interpreter: No such file or directory.

I'm wondering if it is "safe" to add a symbolic link to the 
ld-linux-x86_64.so.2 to fix this.
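Rather than symlinking, the usual fix for this error is to install the 32-bit loader and C library; a sketch (the binary path is a placeholder):

```shell
# Confirm the executable is 32-bit: "ELF 32-bit LSB executable" means it
# needs the 32-bit runtime even on an x86_64 host.
file /path/to/the/binary

# On SL6 x86_64 this provides /lib/ld-linux.so.2:
yum install glibc.i686
```

The binary may also need 32-bit builds of any other libraries it links against (check with `ldd` once the loader is present).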


help debugging a kickstart install

2015-10-12 Thread Ken Teh

I'm having problems with an 6.7 install.  Here are the relevant lines:

# partitions

#clearpart --drives=disk/by-id/ata-SATA_SSD_96D70756062400160297
part /boot --fstype=ext4 --size=1024 --asprimary 
--ondisk=disk/by-id/ata-SATA_SSD_96D70756062400160297
part pv.01 --size=1 --grow --asprimary 
--ondisk=disk/by-id/ata-SATA_SSD_96D70756062400160297

volgroup sysvg pv.01
logvol swap --fstype=swap --vgname=svsvg --size=12288 --name=swap
logvol / --fstype=ext4 --vgname=sysvg --size=1 --grow --name=root


Kickstart stops while trying to create the swap logical volume, claiming there
is no such volume group sysvg.  I did an alt-F2 and ran parted on the disk.  The 'part'
command never created the partitions.  This is my first time using the 
'disk/by-id/...'  syntax.  Also, first time with an SSD disk.  I checked 
/dev/disk/by-id and the disk is listed with the correct id.

I'm going to prep the partitions with a rescuecd and try again.  I'd appreciate 
any suggestions you may have debugging this.

Has anyone tried the ssh option with kickstart?  I understand you can ssh to 
the machine and monitor it during the installation.  The one advantage I can 
see is the saved lines on a terminal window instead of the 80x24 console.


Re: help debugging a kickstart install

2015-10-12 Thread Ken Teh

Good grief!

Vielen Dank!



On 10/12/2015 09:27 AM, Stephan Wiesand wrote:

On 12 Oct 2015, at 16:14, Ken Teh  wrote:

I'm having problems with an 6.7 install.  Here are the relevant lines:

# partitions

#clearpart --drives=disk/by-id/ata-SATA_SSD_96D70756062400160297
part /boot --fstype=ext4 --size=1024 --asprimary 
--ondisk=disk/by-id/ata-SATA_SSD_96D70756062400160297
part pv.01 --size=1 --grow --asprimary 
--ondisk=disk/by-id/ata-SATA_SSD_96D70756062400160297

volgroup sysvg pv.01
logvol swap --fstype=swap --vgname=svsvg --size=12288 --name=swap
logvol / --fstype=ext4 --vgname=sysvg --size=1 --grow --name=root


Kickstart stops trying to create the swap logical volume.  Claims there is no 
such sysvg volume.


Does it help to fix the typo in "--vgname=svsvg" ?


  I did an alt-F2 and ran parted on the disk.  The 'part' command never created 
the partitions.  This is my first time using the 'disk/by-id/...'  syntax.  
Also, first time with an SSD disk.  I checked /dev/disk/by-id and the disk is 
listed with the correct id.

I'm going to prep the partitions with a rescuecd and try again.  I'd appreciate 
any suggestions you may have debugging this.

Has anyone tried the ssh option with kickstart?  I understand you can ssh to 
the machine and monitor it during the installation.  The one advantage I can 
see is the saved lines on a terminal window instead of the 80x24 console.


Re: help debugging a kickstart install

2015-10-12 Thread Ken Teh

I looked up the documentation on %pre and its example.  I see
what you are saying.

Thanks for the tip.

I usually use kickstart via nfs so I have a copy of the kickstart
that installed the machine.



On 10/12/2015 09:34 AM, Nico Kadel-Garcia wrote:

On Mon, Oct 12, 2015 at 10:14 AM, Ken Teh  wrote:

I'm having problems with an 6.7 install.  Here are the relevant lines:

# partitions

#clearpart --drives=disk/by-id/ata-SATA_SSD_96D70756062400160297
part /boot --fstype=ext4 --size=1024 --asprimary
--ondisk=disk/by-id/ata-SATA_SSD_96D70756062400160297
part pv.01 --size=1 --grow --asprimary
--ondisk=disk/by-id/ata-SATA_SSD_96D70756062400160297

volgroup sysvg pv.01
logvol swap --fstype=swap --vgname=svsvg --size=12288 --name=swap
logvol / --fstype=ext4 --vgname=sysvg --size=1 --grow --name=root



Kickstart stops trying to create the swap logical volume.  Claims there is
no such sysvg volume.  I did an alt-F2 and ran parted on the disk.  The
'part' command never created the partitions.  This is my first time using
the 'disk/by-id/...'  syntax.  Also, first time with an SSD disk.  I checked
/dev/disk/by-id and the disk is listed with the correct id.


Don't hurt yourself. That "disk-by-id", or using UUID, is not stable.
If you need to ensure particular disk layouts, put in a '%pre'
statement to partition things the way *you* want in a saveable,
scriptable format, and use the resulting LABEL or LVM based volumes
to hand off to the rest of the kickstart configuration. The anaconda
disk configuration tools are powerful, but awfully confusing and very
difficult to get right if you try to do *anything* that is not bog
standard. And the "system-config-kickstart" GUI for resetting
kickstart files is not much help: it profoundly reformats the
kickstart file you start with, and throws out multiple "%pre" or
"%post" steps.


And ooohh, if you're using kickstart files? Put in a %post --nochroot"
to copy /tmp/ks.cfg to /mnt/sysimage/root/ks.cfg, so that you have an
actual copy of the kickstart file you actually used on that particular
system!
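As a kickstart fragment, that suggestion looks like this (a sketch; the destination file name is arbitrary):

```shell
%post --nochroot
# Preserve the kickstart file actually used for this install on the
# installed system's root filesystem.
cp /tmp/ks.cfg /mnt/sysimage/root/ks.cfg
%end
```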


Has anyone tried the ssh option with kickstart?  I understand you can ssh to
the machine and monitor it during the installation.  The one advantage I can
see is the scrollback in a terminal window instead of the 80x24 console.


I've not tried that, I'm not sure the SSH binaries are even in the CD
boot images: I don't see them in the "boot.iso" images.
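For what it's worth, anaconda does have a remote-monitoring hook; as I understand it (untested here), the EL7 boot option is `inst.sshd` and EL6 used plain `sshd`, with the `sshpw` kickstart command setting the install-time password:

```
# on the installer's boot command line:
#   EL6:  ... ks=http://server/ks.cfg sshd
#   EL7:  ... inst.ks=http://server/ks.cfg inst.sshd
#
# in the kickstart, set a password for the install-time sshd
# ("changeme" is obviously a placeholder):
sshpw --username=root changeme
```

Then `ssh root@installing-host` from another machine while the install runs.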



installing devtoolset-3 on sl6x

2015-11-03 Thread Ken Teh

I don't seem to be able to install devtoolset-3 on an sl6x x86_64 system.
I've done it successfully with devtoolset-2 on an i686 system.  My
procedure is

1. Wget the yum-conf-devtoolset rpm and install it.

2. Then, when I do a

  # yum --enablerepo=devtoolset install devtoolset-3-toolchain

it just comes back and says there is no such available package, though
the package is listed clear as day on the mirror.


I've tried this on 2 different systems and the result is the same.  I'm
baffled.  What am I missing?

Thanks.
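Some checks that might narrow this down (a sketch; the wildcard repo match is a guess — confirm the real repo id against what 'yum repolist all' prints):

```
# clear cached metadata first, in case the mirror listing is stale
yum clean all
# see which devtoolset repos yum actually knows about, and their state
yum repolist all | grep -i devtoolset
# ask for the package with a wildcard repo match
yum --enablerepo='*devtoolset*' list available 'devtoolset-3*'
```

If the repolist line shows the repo as disabled or with an unexpected id, that would explain why --enablerepo=devtoolset finds nothing.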


how to troubleshoot a ups

2015-11-09 Thread Ken Teh

I need some advice on how to troubleshoot an apc smart ups.  I am
getting pairs of onbatt/online messages from nut 3-4 times a day.  No
particular regularity.

My first attempt at a fix was to set the power quality to fair to see if
that would help.  Nope.  There is an advanced config option called the
transfer setting, which I gather is a voltage threshold for triggering
an on-battery transition.  I've not tried this.

I'm thinking of trying an alternative circuit to see if that helps.

But, all this is basically stabbing at the wind.  I've tried to find a
write-up on how one goes about diagnosing ups problems like this but no
luck.

I would appreciate any advice you have.  Thanks.


Re: how to troubleshoot a ups

2015-11-09 Thread Ken Teh

It's not clear to me that the ups is at fault.  There is no indication
on the front panel that anything is wrong with it except for the
multiple on-battery/on-line messages a day.  I have other upses from apc
and they don't do this except very occasionally.

If it were a self-test, then I'd expect some regularity, say, once or
twice a day when the ups switches to battery, then back to line power
during the self-test.

It seems to me that either the circuit it is plugged into is really bad,
ie, has large swings in voltage, or the ups itself is overly sensitive,
which suggests the ups is bad, or at least cannot be relied upon to
function properly in a real event.

On 11/09/2015 08:31 AM, ONeal, Miles wrote:

Ken,

Why do you think the UPS is at fault?

-Miles


On Nov 9, 2015, at 07:13, Ken Teh  wrote:

I need some advice on how to troubleshoot an apc smart ups.  I am
getting pairs of onbatt/online messages from nut 3-4 times a day.  No
particular regularity.

My first attempt at a fix was to set the power quality to fair to see if
that would help.  Nope.  There is an advanced config option called the
transfer setting which I gather is a specific value for triggering an
on battery transition.  I've not tried this.

I'm thinking of trying an alternative circuit to see if that helps.

But, all this is basically stabbing at the wind.  I've tried to find a
write-up on how one goes about diagnosing ups problems like this but no
luck.

I would appreciate any advice you have.  Thanks.


sl7 iptables firewalld

2016-06-23 Thread Ken Teh

I'm trying to set up NAT on an SL7x machine.  I know how to do it via
iptables but am a little hesitant because of firewalld.

It's obvious from the lack of /etc/sysconfig/iptables that iptables
configuration is stored elsewhere probably in several xml files.

I'm going to try to do it via 'firewall-cmd --direct' in the hope that
my reconfiguration persists across reboots.

I dumped out the nat table.  There are several chains that did not exist
in SL6x.  They appear to be stubs.  Does anyone know what their intended
purpose is?  For example, my default zone is 'work' and I see among
others, POST_work, POST_work_log, POST_work_deny, POST_work_allow, etc.

The POSTROUTING chain also contains several targets with explicit rules
on 192.168.122.0/24.  Googling says they are libvirt related.  I suppose
I could retain them.  Does anyone know if things will break if I delete
them?  It's a NAT gateway, not a virtualization server.


what runs libvirt?

2016-06-24 Thread Ken Teh

I was trying to set up dnsmasq and discovered it's already running.  Apparently 
as part of libvirt.  Why is libvirt started?  What starts it?

I tried looking through systemd output but the only thing about systemd that I 
can understand are its services.  Everything else is so far gobbledy-gook.


Re: [SCIENTIFIC-LINUX-USERS] what runs libvirt?

2016-06-24 Thread Ken Teh

Thanks for the tip.  Very useful, especially since 'list-units' dumps out a huge 
list, many more entries than the list of files in /etc/init.d.


On 06/24/2016 10:34 AM, Pat Riehecky wrote:



On 06/24/2016 09:48 AM, Ken Teh wrote:

I was trying to set up dnsmasq and discovered it's already running. Apparently 
as part of libvirt.  Why is libvirt started?  What starts it?

I tried looking through systemd output but the only thing about systemd that I 
can understand are its services.  Everything else is so far gobbledy-gook.


Perhaps:

systemctl status 

will help track it down.

Pat


Re: what runs libvirt?

2016-06-24 Thread Ken Teh

libvirt's website has instructions on how to run your own dnsmasq alongside its 
instance
of dnsmasq.   The trick is to add 'bind-interfaces' to dnsmasq.conf and to
explicitly specify the listening address or interface.
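The trick sketched above amounts to a two-line dnsmasq.conf fragment; the address below is the public address from later in this thread, used purely as an example:

```
# /etc/dnsmasq.conf -- coexisting with libvirt's private dnsmasq instance
bind-interfaces
listen-address=146.139.198.23
# or, equivalently, name the interface to listen on:
# interface=enp4s0
```

With bind-interfaces, dnsmasq binds only the named address instead of the wildcard, so it no longer collides with libvirt's instance on virbr0.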



On 06/24/2016 10:12 AM, Mark Stodola wrote:

On 06/24/2016 09:48 AM, Ken Teh wrote:

I was trying to set up dnsmasq and discovered it's already running.
Apparently as part of libvirt.  Why is libvirt started?  What starts it?

I tried looking through systemd output but the only thing about systemd
that I can understand are its services.  Everything else is so far
gobbledy-gook.


I ran into this recently on my Fedora laptop.  It was quite 
annoying/frustrating to find out about this default configuration.  I issued a 
'systemctl stop libvirtd' and 'systemctl disable libvirtd' to disable it.  It 
is used for the virtualization system, which relies on dnsmasq for the virtual 
lan these days...  It uses an alternate configuration file from the normal 
/etc/dnsmasq.d/ files or wherever they live these days.  After that, I was able 
to configure it as I normally do and start it using 'systemctl start dnsmasq'.

If you rely on it for virtualization, you probably have to go fiddle with 
libvirtd's alternate dnsmasq config files to add the options you need for other 
purposes.  This wasn't the case for me.


printing a man page

2016-06-24 Thread Ken Teh

Does anyone know enough groff to help me print this man page?


# man -t firewall-cmd > /tmp/firewall-cmd.ps
:397: warning [p 4, 4.4i]: can't break line
:434: warning [p 4, 6.8i]: can't break line
:446: warning [p 4, 7.9i]: can't break line
error: page 11: table will not fit on one page; use .TS H/.TH with a
supporting macro package


Re: printing a man page

2016-06-24 Thread Ken Teh

I *am* working on a headless server.  But I'll keep yelp in mind.

I decided to try the firewalld project home page.  They have the manual
pages on the web. Prints very nicely via the browser.

On 06/24/2016 04:33 PM, Jim Campbell wrote:

What about trying this:  yelp man:firewall-cmd

. . . and then using yelp to find and print the appropriate page? I am
pretty sure that yelp can be used to print (and I know that you can use
it to at least view man pages).

Of course, this is all moot if you are working from a server or don't
have yelp installed.

Jim

On Fri, Jun 24, 2016, at 04:28 PM, Mark Stodola wrote:

On 06/24/2016 03:30 PM, Ken Teh wrote:

Does anyone know enough groff to help me print this man page?


# man -t firewall-cmd > /tmp/firewall-cmd.ps
:397: warning [p 4, 4.4i]: can't break line
:434: warning [p 4, 6.8i]: can't break line
:446: warning [p 4, 7.9i]: can't break line
error: page 11: table will not fit on one page; use .TS H/.TH with a
supporting macro package


I haven't tried generating a postscript in a while, but there is a
man2html that does a pretty decent job.  Unfortunately it doesn't like
gzipped man pages it seems, so it might be easiest to copy the
firewall-cmd.1.gz (or whatever section it is) to /tmp, gunzip it, then
run man2html on it.  You could also get fancy with 'man' options and
piping things together if you want.

Digging in an old conversion script I have, I have done:
groff -mandoc source_file > dest_file.ps

The default is ps output, so it looks like the same command as your 'man
-t'.

Experience from compiling docbook/xml contents, the warnings/errors you
are seeing are familiar.  Rewriting an entire table structure just to
get a postscript seems a waste of time.


Re: printing a man page

2016-06-27 Thread Ken Teh

Thanks for the tip.  Grog outputs

  groff -t -man

which is what 'man -t' does.  So, the error is still there.  The error
description explicitly says how to fix the problem.

  error: page 11: table will not fit on one page; use .TS H/.TH with a
  supporting macro package

If I read this right, it says to add troff markup to the man page source
and to rerun groff with one or more additional packages.

Chasing this down will take me too far afield.  The firewalld site has
the man pages in html and they print nicely via the browser.



On 06/24/2016 11:23 PM, James Cloos wrote:

"KT" == Ken Teh  writes:


KT> Does anyone know enough groff to help me print this man page?
KT> # man -t firewall-cmd > /tmp/firewall-cmd.ps

Copy the source man page to someplace like /tmp, run grog on it to see
what options are required (for things like tables, equations and the
like) and then add -Txhtml or -Thtml to the arguments.

grohtml specifies an extremely long page length (infinite is
unavailable) so the table should work.

And then use a browser to print the x?html.

-JimC



Re: printing a man page

2016-06-27 Thread Ken Teh

I did try it exactly as you described below with -T html option.  When I
opened the html file with the browser, the table was missing.

I see what you mean by an "infinite page length".  Maybe the html output
is done with post-processing and groff internally still imposes a page
length.

Thanks for taking the time to reply.  I learnt one thing: the 'grog'
command.



On 06/27/2016 10:05 AM, James Cloos wrote:

"KT" == Ken Teh  writes:


KT> Thanks for the tip.  Grog outputs
KT>   groff -t -man

My point was that then doing one of:

   groff -t -man -Txhtml filename >file.html
   groff -t -man -Thtml filename >file.html

should work since the page length for html is as close to infinite as
can be specified.

KT>   error: page 11: table will not fit on one page; use .TS H/.TH with a
KT>   supporting macro package

KT> If I read this right, it says to add troff markup to the man page source
KT> and to rerun groff with one or more additional packages.

The "supporting macro package" part suggests that the an macros probably
do not work with that, so the only way is to use a device with an
essentially endless page length.

And then use a browser to print it.

-JimC



nat setup with firewalld

2016-06-28 Thread Ken Teh

After reading and poking around, I've discovered it's actually quite easy
to set up NAT with firewalld instead of disabling it and resorting to
iptables-services.

Firewalld provides /etc/firewalld/direct.xml, where one can create chains
and rules directly in a selected table.  See firewalld.direct(5) for
details.

The appropriate chain is apparently POSTROUTING_direct.  Firewalld
creates this chain whether direct.xml exists or not.  The other stub
chains I asked about correspond to the active zones.

The net.ipv4.ip_forward is already set because of libvirtd.

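Put concretely, the NAT rule described above can live in direct.xml like this; the chain name is the one firewalld creates, while the outgoing interface is an assumption for illustration (see firewalld.direct(5) for the exact schema):

```
<?xml version="1.0" encoding="utf-8"?>
<!-- /etc/firewalld/direct.xml: masquerade traffic leaving the uplink -->
<direct>
  <rule ipv="ipv4" table="nat" chain="POSTROUTING_direct" priority="0">-o enp4s0 -j MASQUERADE</rule>
</direct>
```

A `firewall-cmd --reload` afterwards makes firewalld pick the file up, and the rule then survives reboots.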

On 06/23/2016 07:45 AM, Ken Teh wrote:

I'm trying to set up NAT on an SL7x machine.  I know how to do it via
iptables but am a little hesitant because of firewalld.

It's obvious from the lack of /etc/sysconfig/iptables that iptables
configuration is stored elsewhere probably in several xml files.

I'm going to try to do it via 'firewall-cmd --direct' in the hopes that
my reconfiguration is stored across reboots.

I dumped out the nat table.  There are several chains that did not exist
in SL6x.  They appear to be stubs.  Does anyone know what their intended
purpose is?  For example, my default zone is 'work' and I see among
others, POST_work, POST_work_log, POST_work_deny, POST_work_allow, etc.

The POSTROUTING chain also contains several targets with explicit rules
on 192.168.122.0/24.  Googling says they are libvirt related.  I suppose
I could retain them  Does anyone know if things will break if I delete
them?  It's a NAT gateway, not a virtualization server.




Re: pam_mount

2016-07-22 Thread Ken Teh

I also noted this in the TUV's docs with great interest.  One caveat:  The docs
recommend at least Windows Server 2012 for the trust between IPA and AD.

On 07/22/2016 05:42 AM, David Sommerseth wrote:

On 22/07/16 09:45, Lars Behrens wrote:

Am 22.07.2016 um 01:11 schrieb David Sommerseth:


Have a look at authconfig and sssd.  The former should help configure
all these things for you, including proper PAM setup as well as LDAP and
Kerberos.  For SSSD it is in particular helpful on laptops, where
authentication data can be cached locally to be capable of offline
authentication as well as caching enough information to automatically
fetch a Kerberos ticket once the network access has been established.


I already had been using authconfig for sssd setup. Authentication (via
AD/ldap) and caching works well. I only need  per user mounting of their
AD-directories and hadn't found a hint in the authconfig man page.


And SSSD does have some support for handling the autofs/automount stuff too.


Ok, that seems the way to go. Through your tip I now found that there is
an autofs/automount via "ldap_autofs_*" in sssd. Let's see if I get this
set up.
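A hypothetical sssd.conf fragment for the ldap_autofs_* route mentioned above; the domain name and search base are illustrative placeholders, and the option names come from sssd-ldap(5):

```
# /etc/sssd/sssd.conf (fragment) -- automount maps from LDAP
[sssd]
services = nss, pam, autofs

[domain/example.com]
autofs_provider = ldap
ldap_autofs_search_base = ou=automount,dc=example,dc=com
ldap_autofs_map_object_class = automountMap
ldap_autofs_entry_object_class = automount
```

The autofs daemon then asks sssd for its maps instead of talking to LDAP directly.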


Otherwise, do have a look at the FreeIPA stuff too.  There's a lot of
good things in that package, which also doesn't require much resources
on the server side.  For clients, it gets even easier.  You just need to
install the proper IPA packages and run ipa-server-install or
ipa-client-install, that's mostly all you need.  FreeIPA also makes use
of SSSD and authconfig under the hood.


Yeah, looks like good thing but afaics I would have to set up a server
for that. I think at first I have to get comfy with the basics in the
"red hatted" world (I am coming from a debianic and SUSE background).

Thank you for your hints!


As you seem to also use AD, you might be pleased to know that it is
possible to integrate AD and FreeIPA.  IIRC, one of the new features in
EL7.2 was also "one way trust" in addition to the "full trust" available
in earlier versions of FreeIPA.

This means that AD users get access to machines enrolled in IPA,
according to the configured policies.

And the automount/autofs stuff is also very easily configured in IPA too.

A final note on setting up FreeIPA on SL7:

* server side
   yum install ipa-server
   ipa-server-install   # see --help for several useful options
   # wait for install script to complete
   # done

Now you can log into the web admin UI by accessing the servers host name
from a browser.

* client side
   yum install ipa-client
   ipa-client-install
   # wait for install script to complete
   # done

I'll admit that I have never installed and configured IPA together with
AD, but the demos I have seen on several conferences have not been
really scary.  If you setup IPA to use external DNS servers, you might
need to add a few entries there, but it might also be that the AD
integration does a lot of it for you too.

You may very well also install IPA server on an existing server, if that
would work with your sys-admin policies.  The IPA server does not
require too much resources at all, neither RAM, CPU nor disk.

Setting up replica IPA servers is also not a big challenge.  Setup
scripts usually works very well and the official documentation is quite
good too.





--
kind regards,

David Sommerseth



Re: Python 2.7 OS requirements

2016-08-01 Thread Ken Teh

I suggest trying Anaconda from Continuum Analytics.  It installs into /opt and 
provides its own ecosystem, ie, all the support libraries it needs.  Because of 
this, it will run on an SL6 machine.  The install script gives you the 
option of installing it under a different root.  It provides numpy, scipy, 
matplotlib, pandas, jupyter, etc., for data analysis, and pretty much whatever 
else you need.  It comes in a python 2.7 and a python 3.5 version.  I used the 3.5 
version and can vouch for it.  If it doesn't have a package, you can simply 
install it by running its version of pip, which will install into its 
ecosystem.  But I've actually never had to do this because its included package 
set is quite complete.



On 07/30/2016 09:01 PM, P. Larry Nelson wrote:

To the two Stevens,

Thanks for the possible solutions to this!

However, I did hear back from the grad student and his response was:

"I'm installing some python packages and need a higher version of numpy, which asks 
for python 2.7.  I'll try on CERN system. Thanks!"

Hopefully that's the last I'll hear of it  :-)
I have 4 weeks left with the U of I, I'm totally consumed working on another
project involving Docker and Shifter, and don't really have the time nor the
wherewithal to deal with it.

- Larry

Steven J. Yellin wrote on 7/30/16 8:20 PM:

Another way is to get Python-2.7.12.tar.xz from
https://www.python.org/downloads/, extract into directory Python-2.7.12 with
'tar -xJf Python-2.7.12.tar.xz', and see its README file for what to do next to
get it in /usr/local.

Steven Yellin
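The build-from-source route sketched out, following the README's usual sequence; `make altinstall` is the important step, since it installs /usr/local/bin/python2.7 without replacing the system's /usr/bin/python 2.6:

```
# Sketch of a side-by-side install (leaves the system python untouched)
tar -xJf Python-2.7.12.tar.xz
cd Python-2.7.12
./configure --prefix=/usr/local
make
sudo make altinstall    # installs python2.7, not "python"
```

Users would then invoke python2.7 explicitly, or put it first in a virtualenv.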

On Sun, 31 Jul 2016, Steven Haigh wrote:


You can look at virtualenv from EPEL.

You can install a separate python environment in a users home directory.

On 31/07/16 09:36, P. Larry Nelson wrote:

Hi all,

Please don't shoot the questioner (me), as I have no experience with
Python, other than knowing "what" it is and that my SL6.8 systems have
version 2.6.6 installed.

I have been asked by one of our Professors that one of his grad students
apparently needs Python 2.7.x installed on our cluster (optimally in
/usr/local, which is an NFS mounted dir everywhere).

In my brief Googling, I have not found OS requirements for 2.7.x, but
have inferred that it probably needs SL7.x.

Can anyone confirm that?
Or has anyone installed Python 2.7.x (and which .x?) on an SL6.8 system
without replacing 2.6.x?

I'm guessing this can be quite a morass to delve into as when I do a
'rpm -qa|grep -i python|wc'
It returns with 67 rpms with python in the rpm name!

If the solution is indeed simple, I might proceed, otherwise, I'm
of a tendency to reply to the Professor and student, "No way - won't work."
I think the student probably has access to CERN systems that probably
have what he's looking for.

I've followed up with that inquiry to the student and waiting to hear back.

Thanks!
- Larry




--
Steven Haigh

Email: net...@crc.id.au
Web: https://www.crc.id.au
Phone: (03) 9001 6090 - 0412 935 897







Re: SL 7.2 on a HP Zbook

2016-10-07 Thread Ken Teh

If you grep the rc init files for hwclock, you will find it in halt.  You 
can't grep systemd.  All you can do is read the man pages, and there are a lot of 
man pages to read.  :(






On 10/07/2016 01:28 PM, stod...@pelletron.com wrote:

- Original Message -
From: "Bill Askew" 
To: scientific-linux-us...@listserv.fnal.gov
Sent: Friday, October 7, 2016 1:14:04 PM
Subject: SL 7.2 on a HP Zbook

Hi everyone
I am using SL 7.2 on a HP Zbook.  So far the only issue that I have is setting 
the date and time does not set the Zbook's hardware clock.  It does change the 
time for the duration of the session but when the ZBook is rebooted the time 
goes back to what it was before plus the amount of time I spent during the 
session.
Does anyone have a fix for this?
Thanks

There is probably a more "systemd" type method, but this as root has always 
worked:
hwclock --systohc
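For reference, the systemd-native method is timedatectl (a sketch; the date string is an example value):

```
timedatectl set-time "2016-10-07 13:30:00"   # sets system time and syncs the RTC
timedatectl set-local-rtc 0                  # keep the hardware clock in UTC
timedatectl status                           # shows both system and RTC time
```

set-time writes the hardware clock as a side effect, which should fix the time loss across reboots.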



nfsroot post install configure checklist

2016-10-28 Thread Ken Teh

I've made a minimal install with the --installroot option into a folder that I 
want to export as an nfs root.  I'm wondering if someone has a checklist on 
what I should do post-install to configure the files in the install.  I can 
think of some obvious ones.  But it looks like painstaking work to go through 
all the files in /etc.

One thought I had was kickstart.  Since a kickstart install configures as well, 
it should be possible to trace the kickstart install to see what it does, which 
files it updates, etc.  I use kickstart all the time but have never bothered to 
poke around how it does things.

I'd appreciate any pointers, suggestions, alternatives, etc.

Thanks.
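One way to make such a checklist concrete is a small script that seeds the obvious files under the install root; everything below (the directory name, hostname, nameserver, fstab entries) is an illustrative assumption, not a complete list:

```shell
# Sketch: post-install fixups for an NFS-root tree.
ROOT=${ROOT:-nfsroot-demo}     # point this at your --installroot target

mkdir -p "$ROOT/etc"

# fstab: the root fs arrives over NFS, so list only client-side mounts
cat > "$ROOT/etc/fstab" <<'EOF'
none  /tmp      tmpfs  defaults  0 0
none  /dev/shm  tmpfs  defaults  0 0
EOF

# identity and DNS (placeholder values)
echo nfsclient1.example.org > "$ROOT/etc/hostname"
echo "nameserver 192.0.2.53" > "$ROOT/etc/resolv.conf"

# other candidates: /etc/sysconfig/network, ssh host keys, /etc/machine-id
ls "$ROOT/etc"
```

Diffing the resulting tree against /etc on a kickstart-installed machine is another way to discover which files anaconda actually touches.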


firewalld help

2016-11-10 Thread Ken Teh

I'm trying to isolate a network problem and I need some debugging help.  
Frustrating when I am not fluent in the new sys admin tools.

Symptom is as follows:  I have a machine running Fedora 24 with its firewall 
zone set to work.  I cannot ping the machine except from the same subnet.  I 
don't have this problem with a second machine running the same OS/rev with the 
same firewall setup.  I'm not sure where to look.

I've dumped out both machines' iptables.  See attachment.  I did a diff -y and they look 
almost identical.  The machine that does not work has 2 nics, one which is connected to a 
192.168 network.  It has additional rules in the various chains but they are all 
"from anywhere to anywhere".  I'm assuming the additional rules come from the 
second interface.

I've put a query to my networking folks to see if the problem is further 
upstream.  But I thought I'd ask if I have missed something obvious.

I know it's not SL7 but they use the same tools:  nmcli and firewall-cmd.

Chain INPUT (policy ACCEPT)
target prot opt source   destination 
ACCEPT all  --  anywhere anywhere ctstate 
RELATED,ESTABLISHED
ACCEPT all  --  anywhere anywhere
INPUT_direct  all  --  anywhere anywhere
INPUT_ZONES_SOURCE  all  --  anywhere anywhere
INPUT_ZONES  all  --  anywhere anywhere
DROP   all  --  anywhere anywhere ctstate INVALID
REJECT all  --  anywhere anywhere reject-with 
icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target prot opt source   destination 
ACCEPT all  --  anywhere anywhere ctstate 
RELATED,ESTABLISHED
ACCEPT all  --  anywhere anywhere
FORWARD_direct  all  --  anywhere anywhere
FORWARD_IN_ZONES_SOURCE  all  --  anywhere anywhere
FORWARD_IN_ZONES  all  --  anywhere anywhere
FORWARD_OUT_ZONES_SOURCE  all  --  anywhere anywhere
FORWARD_OUT_ZONES  all  --  anywhere anywhere
DROP   all  --  anywhere anywhere ctstate INVALID
REJECT all  --  anywhere anywhere reject-with 
icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target prot opt source   destination 
OUTPUT_direct  all  --  anywhere anywhere

Chain FORWARD_IN_ZONES (1 references)
target prot opt source   destination 
FWDI_work  all  --  anywhere anywhere[goto] 
FWDI_work  all  --  anywhere anywhere[goto] 
FWDI_work  all  --  anywhere anywhere[goto] 

Chain FORWARD_IN_ZONES_SOURCE (1 references)
target prot opt source   destination 

Chain FORWARD_OUT_ZONES (1 references)
target prot opt source   destination 
FWDO_work  all  --  anywhere anywhere[goto] 
FWDO_work  all  --  anywhere anywhere[goto] 
FWDO_work  all  --  anywhere anywhere[goto] 

Chain FORWARD_OUT_ZONES_SOURCE (1 references)
target prot opt source   destination 

Chain FORWARD_direct (1 references)
target prot opt source   destination 

Chain FWDI_work (3 references)
target prot opt source   destination 
FWDI_work_log  all  --  anywhere anywhere
FWDI_work_deny  all  --  anywhere anywhere
FWDI_work_allow  all  --  anywhere anywhere
ACCEPT icmp --  anywhere anywhere

Chain FWDI_work_allow (1 references)
target prot opt source   destination 

Chain FWDI_work_deny (1 references)
target prot opt source   destination 

Chain FWDI_work_log (1 references)
target prot opt source   destination 

Chain FWDO_work (3 references)
target prot opt source   destination 
FWDO_work_log  all  --  anywhere anywhere
FWDO_work_deny  all  --  anywhere anywhere
FWDO_work_allow  all  --  anywhere anywhere

Chain FWDO_work_allow (1 references)
target prot opt source   destination 

Chain FWDO_work_deny (1 references)
target prot opt source   destination 

Chain FWDO_work_log (1 references)
target prot opt source   destination 

Chain INPUT_ZONES (1 references)
target prot opt source   destination 
IN_work  all  --  anywhere anywhere[goto] 
IN_work  all  --  anywhere anywhere[goto] 
IN_work  all  --  anywhere 

Re: firewalld help

2016-11-10 Thread Ken Teh

Default routes on the failing system.


[root@saudade ~]# ip --details route
unicast default via 192.168.203.1 dev enp3s0  proto static  scope global  
metric 100
unicast default via 146.139.198.1 dev enp4s0  proto static  scope global  
metric 101
unicast 146.139.198.0/23 dev enp4s0  proto kernel  scope link  src 
146.139.198.23  metric 100
unicast 192.168.203.0/24 dev enp3s0  proto kernel  scope link  src 
192.168.203.39  metric 100



On 11/10/2016 08:27 AM, Stephan Wiesand wrote:



On 10 Nov 2016, at 15:09, Ken Teh  wrote:

I'm trying to isolate a network problem and I need some debugging help.  
Frustrating when I am not fluent in the new sys admin tools.

Symptom is as follows:  I have a machine running Fedora 24 with its firewall 
zone set to work.  I cannot ping the machine except from the same subnet.  I 
don't have this problem with a second machine running the same OS/rev with the 
same firewall setup.  I'm not sure where to look.

I've dumped out both machines iptables.  See attachment.  I did a diff -y and they look 
almost identical.  The machine that does not work has 2 nics, one which is connected to a 
192.168 network.  It has additional rules in the various chains but they are all 
"from anywhere to anywhere".  I'm assuming the additional rules come from the 
second interface.

I've put a query to my networking folks to see if the problem is further 
upstream.  But I thought I'd ask if I have missed something obvious.


What's the default route on the "failing" system?


I know it's not SL7 but they use the same tools:  nmcli and firewall-cmd.






Re: firewalld help

2016-11-10 Thread Ken Teh

Ok.  I see the problem now.  Default routes have always been a bit of a mystery 
to me.  Based on your reply, I manually deleted the default route for enp3s0 to 
confirm it works.  Then, I edited the connection with nmcli to remove the 
default permanently across reboots.

For everyone's benefit, the property setting is ipv4.never-default in nmcli.
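Concretely, the sequence looks something like this (the connection name is taken from the interface in this thread; nmcli connection names do not always match interface names, so check `nmcli connection show` first):

```
nmcli connection modify enp3s0 ipv4.never-default yes
nmcli connection up enp3s0    # re-activate to apply the change
ip route                      # should now show a single default, via enp4s0
```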



On 11/10/2016 09:02 AM, Stephan Wiesand wrote:



On 10 Nov 2016, at 15:41, Ken Teh  wrote:

Default routes on the failing system.


[root@saudade ~]# ip --details route
unicast default via 192.168.203.1 dev enp3s0  proto static  scope global  
metric 100
unicast default via 146.139.198.1 dev enp4s0  proto static  scope global  
metric 101
unicast 146.139.198.0/23 dev enp4s0  proto kernel  scope link  src 
146.139.198.23  metric 100
unicast 192.168.203.0/24 dev enp3s0  proto kernel  scope link  src 
192.168.203.39  metric 100


This suggests that saudade will send the response packets through enp3s0, unless the 
request originates from "the same subnet" (146.139.198.0/23).  Is that expected 
to work?

You could check this with tcpdump.


On 11/10/2016 08:27 AM, Stephan Wiesand wrote:



On 10 Nov 2016, at 15:09, Ken Teh  wrote:

I'm trying to isolate a network problem and I need some debugging help.  
Frustrating when I am not fluent in the new sys admin tools.

Symptom is as follows:  I have a machine running Fedora 24 with its firewall 
zone set to work.  I cannot ping the machine except from the same subnet.  I 
don't have this problem with a second machine running the same OS/rev with the 
same firewall setup.  I'm not sure where to look.

I've dumped out both machines iptables.  See attachment.  I did a diff -y and they look 
almost identical.  The machine that does not work has 2 nics, one which is connected to a 
192.168 network.  It has additional rules in the various chains but they are all 
"from anywhere to anywhere".  I'm assuming the additional rules come from the 
second interface.

I've put a query to my networking folks to see if the problem is further 
upstream.  But I thought I'd ask if I have missed something obvious.


What's the default route on the "failing" system?


I know it's not SL7 but they use the same tools:  nmcli and firewall-cmd.








srpms?

2017-01-12 Thread Ken Teh
There was a discussion on the issue of srpms, or the lack of them, in SL7.  I don't build 
packages much so did not pay attention to it.  I am now in need of an srpm that 
shows how a particular rpm was built, specifically, the stuff in the spec file. 
I need the package for a Fedora 25 system where it is not available except as 
a tarball.


Is there a quick write-up on how one handles this nowadays setting aside the 
pros and cons of this issue?


thanks


Re: srpms?

2017-01-12 Thread Ken Teh

Thanks Connie,

I looked on our mirror.  Apparently, they've not bothered to host the SRPMS 
folder.  Another instance of "use the source, Ken!".


Thanks again!



On 01/12/2017 02:59 PM, Connie Sieh wrote:



There was a discussion on the issue of srpms or lack of in SL7.  I don't build
packages much so did not pay attention to it.  I am now in need of an srpm that
shows how a particular rpm was built.  Specifically, the stuff in the spec file.
 I need the package for a Fedora 25 system where it is not available except as
a tarball.

Is there a quick write-up on how one handles this nowadays setting aside the
pros and cons of this issue?

thanks



There is no lack of srpms for SL7 because we create srpms out of the "src" code
that Red Hat provides.  Red Hat does not provide publicly downloadable srpms for
RHEL 7 like they used to.  RHEL 7 srpms are available if one has a RHEL 7
subscription.

SL 7 srpms are available at

ftp://ftp.scientificlinux.org/linux/scientific/7x/SRPMS/

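Once an srpm is in hand, getting at the spec file is straightforward; "package.src.rpm" below is a placeholder for whichever srpm you download:

```
# extract just the spec without installing anything
rpm2cpio package.src.rpm | cpio -idmv '*.spec'

# or install it into your rpmbuild tree and read it there
rpm -ivh package.src.rpm
less ~/rpmbuild/SPECS/*.spec
```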

systemd/journald teething pains

2017-01-24 Thread Ken Teh
I'm debugging some code that logs messages via syslog.  I was under the 
assumption that syslog messages would display with journalctl.  But I'm not 
seeing them.  I tried using logger and it also does not display unless I 
explicitly say 'logger --journald' which suggests that I still need syslogd.  Is 
this true?  What gives?
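A quick end-to-end check for this path (a sketch; the tag name is arbitrary) — on EL7, journald itself listens on /dev/log, so plain syslog messages should land in the journal without rsyslog running:

```
logger -t sltest "hello from the syslog path"
journalctl -t sltest -n 5    # filter the journal by that tag
```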


Re: systemd/journald teething pains

2017-01-24 Thread Ken Teh

Never mind.  A bare syslog(3) works.  Problem is elsewhere.


On 01/24/2017 08:32 AM, Ken Teh wrote:

I'm debugging some code that logs messages via syslog.  I was under the
assumption that syslog messages would display with journalctl.  But I'm not
seeing them.  I tried using logger and it also does not display unless I
explicitly say 'logger --journald' which suggests that I still need syslogd.  Is
this true?  What gives?


Re: Connie Sieh, founder of Scientific Linux, retires from Fermilab

2017-02-24 Thread Ken Teh

Congratulations, Connie.

I recall your gallery of pictures in earlier versions of the SL installer.  I 
wish you the very best light for the many shots to come in your retirement.


Ken


On 02/24/2017 03:52 PM, Bonnie King wrote:

Friends,

The Scientific Linux team is at once happy and sad to announce Connie Sieh's
retirement after 23 years. Today is her last full-time day at Fermilab.

Connie Sieh founded the Fermi Linux and Scientific Linux projects and has worked
on them continuously. She has sometimes preferred to toil behind the scenes and
leave public announcements to others, but has always been a driving force behind
the projects.

The Scientific Linux story started in the late 1990s when Connie's group
explored using commodity PC hardware and Linux as an alternative to commercial
servers with proprietary UNIX operating systems. From the distributions
available at the time, Red Hat Linux was chosen.

In 1998, Connie announced Fermi Linux at HEPiX, a semi-annual meeting of High
Energy Physics IT staff. Fermi Linux was a customized and re-branded version of
Red Hat Linux with some tweaks for integration with the Fermilab environment. It
also introduced an installer modification called Workgroups, a framework to
customize package sets for use at different sites and for different purposes.
The Workgroups concept lives on today in the form of Contexts for SL7.

In October 2003 TUV changed their product model and introduced Red Hat
Enterprise Linux. Enterprise Linux was no longer freely distributed in binary
form, but sources remained available.

Connie and her colleagues started building from these sources, creating one of
the first Enterprise Linux rebuilds. A preview, dubbed HEPL, was presented at
spring HEPiX 2004. In May 2004, the rebuild was released as Scientific Linux.
The name was chosen to reflect the goals and user base of the product.

Our colleagues at CERN collaborated, customizing and using Scientific Linux as
Scientific Linux CERN (SLC). SL became a standard OS for Scientific Computing in
High Energy Physics at Fermilab, CERN and beyond.

SL is freely available to the general public, and is a popular Enterprise Linux
rebuild. As a result, it has built a community outside of Fermilab and HEP.

With gratitude, the Scientific Linux team would like to recognize Connie's many
years of service and her immense contribution to the project she founded.

Connie's outstanding technical and non-technical judgement are the foundation of
Scientific Linux. Her legacy will continue to inform the way we run SL and we
hope she'll remain as a collaborator.

All the best to Connie in her well-earned retirement. She will be dearly missed!



yum update initial setup hang up

2017-02-27 Thread Ken Teh

I encountered this error twice now.

Initial install of 7x via netinst.iso works.  Then, I log in remotely to run yum 
update.  Update proceeds but during cleanup, yum hangs.  Apparently cleaning up 
initial-setup.


I need some instructions on how to debug this problem.

Thanks.


Re: yum update initial setup hang up

2017-02-27 Thread Ken Teh
I solved my problem by doing the install "interactively" and updating 
initial-setup separately.


I must have missed something thinking I could do an install, reboot, remote 
login, and update.




On 02/27/2017 11:02 AM, Ken Teh wrote:

I encountered this error twice now.

Initial install of 7x via netinst.iso works.  Then, I log in remotely to run yum
update.  Update proceeds but during cleanup, yum hangs.  Apparently cleaning up
initial-setup.

I need some instructions on how to debug this problem.

Thanks.


Re: nmcli question

2017-04-10 Thread Ken Teh

On 04/10/2017 10:59 AM, Tom H wrote:



The lead NM developer's replied to you on fedora-devel@ or
fedora-user@ in the past that he and his fellow NM developers have
worked hard to add to NM configuration options for complex server
setups as well as a cli tool for managing settings. Sadly, NM seems to
be a project that can do nothing right in the eyes of its users even
though it's left the flakiness of its early years behind.




I'm not sure why I'm jumping into the fray, but this paragraph struck me as 
exactly why network manager is anathema to so many of us, even if it is not as 
flaky as it used to be.


I'm living with it, but I can't tell you the number of times nmcli and 
firewall-cmd have made my blood pressure go up.  The latter is even worse, with 
its option-style subcommands and near-impossible-to-remember choices.  Is it 
list-all, zone-get-info, zone-list-all?  Wtf?


Re: tip: Secondary Selection clipboard

2017-06-27 Thread Ken Teh
Haha.  I've been on fedora for almost a year and learning to unlearn everything 
I've learnt about Unix and X11 over 25 years.




On 06/27/2017 06:23 AM, Tom H wrote:

On Tue, Jun 20, 2017 at 4:38 PM, ToddAndMargo  wrote:


I have been using UNIX and Linux for over 25 years and did not realize
X11 has four clipboards. I recently discovered the Secondary Selection
keyboard.

It really saves a bunch of time when I am programming as I don't lose
my cursor's hot spot.

Here is a great 8 minute video demonstrating all four clipboards. It
is must learn for anyone using Linux.

http://www.cs.man.ac.uk/~chl/Secondary-Selection.mp4

To support this clipboard, your program has to use the GTK Toolkit.


Thanks. I didn't know about this secondary clipboard. I've just tried
it on my laptop running Ubuntu 17.10 but it didn't work. I suspect
that it's been deep-sixed in Gnome Shell and Unity.



Re: tip: Secondary Selection clipboard

2017-06-27 Thread Ken Teh

Time to hang it up?

I use the clipboard all the time especially when I'm coding. Multiple terminals 
each running a copy of vim.


I notice that many young programmers also use terminals and vim (or neovim) on 
Macs. At least on videos of programming topics I'm interested in.  Do they not 
use the clipboard?


On a more philosophical note: I recall reading that X11 was all about capability 
and not policy.  People who design software nowadays seem to be all about policy 
and not capability.  This is how you should do things.  F**k you if you don't get 
it.


Very un-unix if you ask me.  The one feature I love about unix is the countless 
ways one can combine its command line utilities to solve problems.  Feeds the 
creative side of me, methinks.  That's why I never got much into GUIs.


Oh well, the future belongs to the young. Maybe it is time to hang it up.



On 06/27/2017 08:05 AM, Tom H wrote:

On Tue, Jun 27, 2017 at 8:19 AM, Andrew C Aitchison
 wrote:

On Tue, 27 Jun 2017, Tom H wrote:

On Tue, Jun 20, 2017 at 4:38 PM, ToddAndMargo 
wrote:


I have been using UNIX and Linux for over 25 years and did not
realize X11 has four clipboards. I recently discovered the Secondary
Selection keyboard.

It really saves a bunch of time when I am programming as I don't
lose my cursor's hot spot.

Here is a great 8 minute video demonstrating all four clipboards. It
is must learn for anyone using Linux.

http://www.cs.man.ac.uk/~chl/Secondary-Selection.mp4

To support this clipboard, your program has to use the GTK Toolkit.


Thanks. I didn't know about this secondary clipboard. I've just tried
it on my laptop running Ubuntu 17.10 but it didn't work. I suspect
that it's been deep-sixed in Gnome Shell and Unity.


I was interested in the secondary clipboard too, and looked at
http://www.cs.man.ac.uk/~chl/secondary-selection.html which makes
clear that this is not a standard gtk feature; there are experimental
modified gtk3 libraries which support secondary selection (no source
yet).

gtk3 means it doesn't run on SL6, so I haven't been able to explore further.


The author of "Secondary-Selection.mp4" asked about it on the gtk
development list

https://mail.gnome.org/archives/gtk-devel-list/2016-August/msg00036.html

and the answer was

https://mail.gnome.org/archives/gtk-devel-list/2016-August/msg00037.html

Part of the response:

We still (optionally) support the PRIMARY selection on the X11 backend,
and some compatibility layer for it on Wayland, but we have no plans on
adding support for the SECONDARY selection, as it's both barely
specified and, like the PRIMARY, highly confusing for anybody who is not
well-versed in 20+ years of use of textual interfaces on the X Windows
System. Personally, I would have jettisoned the PRIMARY selection a long
time ago as well, but apparently a very vocal minority is still holding
tight to that particular Easter egg. Adding support for the even more
esoteric SECONDARY selection on the X11 backend when we're trying to
move the Linux world towards the more modern and less legacy-ridden
Wayland display system would be problematic to say the least, and an ill
fit for the majority of graphical user experiences in use these days.



SL signing keys

2017-10-12 Thread Ken Teh
On the first update of a newly installed system, there are SL signing keys that 
have to be installed. Yum prompts for confirmation.


Is there a way to install the keys before the first yum update? Are they in an 
rpm somewhere?


Thanks.


389-ds fastbugs or epel

2017-11-28 Thread Ken Teh
Epel has many more 389-ds rpms as opposed to the main SL repos (specifically, 
fastbugs) which only has 3 rpms.


Can someone advise which ones to install? From epel or SL?


Re: 389-ds fastbugs or epel

2017-11-28 Thread Ken Teh
Never mind. I was confused by the yum list output. I thought I was looking at 
only the epel listing, having disabled all other repos except epel.



On 11/28/2017 11:48 AM, Ken Teh wrote:
Epel has many more 389-ds rpms as opposed to the main SL repos (specifically, 
fastbugs) which only has 3 rpms.


Can someone advise which ones to install? From epel or SL?




Re: Question on Python 3.5.x on HELiOS 6.9

2018-02-22 Thread Ken Teh
I recommend getting Anaconda from Continuum Analytics. They have a complete 
standalone python install (usually into /opt) with all the necessary updated 
runtime libraries to support the latest python, and a very nicely curated set of 
python modules specifically useful to science, engineering, finance, data 
science, etc.


www.anaconda.com






On 02/22/2018 02:06 PM, Harsha Purushotham wrote:

Hi,

I am trying to include Python 3.5.x for custom applications on HELiOS 6.9. I 
would like to know if Python 3 package RPMs can be downloaded from Scientific 
Linux Packages. If yes, what would the link be.


Best regards,
Harsha


apply automatic updates

2018-08-10 Thread Ken Teh
I noticed that apply_updates in /etc/yum/yum-cron.conf is set to No on a centos 
7 system while it is set to Yes on SL7x.


Is there any reason not to set it to yes for centos?  I have a cron job that 
emails the user assigned to the desktop to reboot their machine when updates have 
been installed. With apply_updates set to No, the job fails to detect the install 
with 'yum history'. I can fix this, but I was wondering if there is some reason 
why I shouldn't configure the daily yum cron the same way, ie, apply_updates=yes.


Thanks.
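For reference, the two knobs involved live in /etc/yum/yum-cron.conf; a sketch of the setup described above (defaults differ between CentOS and SL):

```
# /etc/yum/yum-cron.conf (excerpt)
download_updates = yes
# "no" downloads only; "yes" also installs, which is what 'yum history' records
apply_updates = yes
```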


Re: [SCIENTIFIC-LINUX-USERS] apply automatic updates

2018-08-10 Thread Ken Teh

Thank you. I think I will set it to yes on centos as well for the desktops.


On 08/10/2018 11:23 AM, Pat Riehecky wrote:
That is a bit of a complex question.  From the SL side I can point you towards: 
http://ftp.scientificlinux.org/linux/scientific/7x/x86_64/release-notes/#_sl_provides_automatic_updates



On 08/10/2018 11:11 AM, Ken Teh wrote:
I noticed that apply_updates in /etc/yum/yum-cron.conf is set to No on a 
centos 7 system while it is set to Yes on SL7x.


Is there any reason not to set to yes for centos?  I have cron job that emails 
the user assigned to the desktop to reboot their machine when updates have 
been installed. With apply_updates set to No, the job fails to detect the 
install with 'yum history'. I can fix this but I was wondering if there is 
some reason why I shouldn't configure the daily yum cron the same way, ie, 
apply_updates=yes.


Thanks.




systemd tftp xinetd

2018-09-11 Thread Ken Teh
I need help with how to enable the tftp service. I am trying to get something 
done and I have no patience for systemd's convoluted logic.


The tftp-server installs

(1) /etc/xinetd.d/tftp

(2) tftp.socket  (what's this?)

(3) tftp.service

Manually, I can start the service and everything works. But when I enable the 
service, it still shows as disabled or indirect. Enabling the socket does not 
start the service on reboot. Do I need xinetd, or does systemd deprecate xinetd?


Geez!  I miss the old days when Unix was simple.


Re: systemd tftp xinetd

2018-09-11 Thread Ken Teh

What you described works manually.

Basically, the service is not started on reboot even though I've enabled it. So 
I don't know what 'enabling' a service means.


Since tftp-server installs /etc/xinetd.d/tftp, is it hinting that it should be 
started via xinetd?  Do I need to install xinetd or is systemd so do-it-all, 
know-it-all that it's taken over xinetd's functions?


I've tried the obvious steps. I'm working my way through all the permutations 
(however illogical) to see what works.  Obviously, enabling a systemd service 
does not necessarily start the service on reboot. When does enabling a service 
not enable it?


I wanted to test an application I wrote and I've spent 3 hours trying to 
configure the system so it will let me.


Systemd is really too much.




On 9/11/18 9:35 AM, Hinz, David (GE Healthcare) wrote:

If you're asking what I think you're asking:

systemctl enable tftp # This adds a symlink for tftp into the (target? Milestone? One of 
those), equivalent to saying "/etc/rc2.d is done, now let's go to rc3.d".
systemctl start tftp # This tries to start it
systemctl status tftp # This gives you success, or debug information if it 
didn't work.

If I missed your question entirely, then can you word it differently?

Dave Hinz


On 9/11/18, 9:32 AM, "owner-scientific-linux-us...@listserv.fnal.gov on behalf of Ken 
Teh"  
wrote:

 I need help with how to enable tftp service. I am trying to get something done
 and I have no patience for systemd's convoluted logic.

 The tftp-server installs

 (1) /etc/xinetd.d/tftp

 (2) tftp.socket  (what's this?)

 (3) tftp.service

 Manually, I can start the service and everything works. But enabling the
 service stays disabled or indirect. Enabling the socket does not start the
 service on reboot. Do I need xinetd or does systemd deprecate xinetd?

 Geez!  I miss the old days when Unix was simple.
 



Re: systemd tftp xinetd

2018-09-11 Thread Ken Teh
I've done all that.  But after I reboot the system, I cannot tftp a file from 
the server.  But if I start tftp.service manually, I can get the file.


If a service is never available on reboot after you've enabled it, what does 
'systemctl enable' mean?


Is there some magic sequence of steps I need to take to "really" enable the tftp 
service?


Thanks for the tip on retiring. I think you've got something there. ;)



On 9/11/18 10:03 AM, R P Herrold wrote:

On Tue, 11 Sep 2018, Ken Teh wrote:


I need help with how to enable tftp service. I am trying to
get something done and I have no patience for systemd's
convoluted logic.


Time then, to retire from modern Unix, perhaps.  Change and
the tide of systemd will not be reversing


The tftp-server installs

(1) /etc/xinetd.d/tftp


Old way: Please examine this file, and as needed, edit to
enable the service (normally services are / were shipped
disabled, pre-systemd, as part of a hardening push back at RHL
7.2, back at the turn of the century).

Particularly the line:
disable = yes

Alternatively (the old and) LSB specified way was: try as
root:
chkconfig tftp on

- or the 'systemd way is: -
systemctl enable tftp

-

View what is enabled, or not, thus.  'grep' will work with
this form:
 systemctl list-unit-files --no-pager

viz:

[herrold@centos-7 ~]$  systemctl list-unit-files --no-pager | \
grep tftp
tftp.service  indirect
tftp.socket   enabled

-- Russ herrold



Re: systemd tftp xinetd

2018-09-11 Thread Ken Teh

I finally figured out my problem.

(1) You don't need xinetd. The tftp-server package is enough. Iow, systemd 
supersedes xinetd.


(2) Although the tftp-server rpm installs /etc/xinetd.d/tftp, there is no need 
to change `disable = yes` in this file.


(3) The command `systemctl enable tftp` will enable tftp.socket. On reboot, the 
socket will be "listening".  The tftp.service will still be dead.


(4) If the tftp client has a firewall, it needs to do:

# firewall-cmd --zone=public --add-port=7130-7140/udp
$ tftp -R 7130:7140 mytftpserver.org
tftp> ...

Then, all works.

My problem was actually step 4 which I did to test the server. In my application 
this is never necessary as I'm using tftp for pxebooting.
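For anyone following along: step 3 works because systemd uses socket activation here. tftp.socket listens on udp/69 and starts tftp.service on the first incoming request, which is why the service shows as dead until then. The stock unit looks roughly like this (a sketch; details may differ by release):

```
# /usr/lib/systemd/system/tftp.socket (sketch)
[Unit]
Description=Tftp Server Activation Socket

[Socket]
ListenDatagram=69

[Install]
WantedBy=sockets.target
```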





On 9/11/18 9:30 AM, Ken Teh wrote:
I need help with how to enable tftp service. I am trying to get something done 
and I have no patience for systemd's convoluted logic.


The tftp-server installs

(1) /etc/xinetd.d/tftp

(2) tftp.socket  (what's this?)

(3) tftp.service

Manually, I can start the service and everything works. But enabling the service 
stays disabled or indirect. Enabling the socket does not start the service on 
reboot. Do I need xinetd or does systemd deprecate xinetd?


Geez!  I miss the old days when Unix was simple.


Re: Red Hat on the Desktop - was Re: Calibre current

2020-01-31 Thread Ken Teh

Haha!  I like this one.

On 1/30/20 7:48 PM, Konstantin Olchanski wrote:


  - removal of support for NIS (LDAP is "light weight" is like Titanic is a row 
boat).


Re: EL 8

2020-01-31 Thread Ken Teh
I've switched to using Fedora for myself and my users. If you are prepared for 
its short lifecycle, it is actually very usable. I've found upgrading to be 
quite painless.  I don't use Fedora on servers for obvious reasons.


I used ubuntu briefly a decade ago on a laptop and struggled with it. Not a put 
down on ubuntu but more a statement about myself.


I was used to redhat's conventions. I wrote a lot of code then and knew how to 
find development rpms and where the files end up after installation. I struggled 
with dpkg/apt/synaptic. Rpm/dnf is a lot easier for me, probably because I am 
used to it. Dnf is pretty much yum, so there is no problem there. And I knew a 
bit about how to build rpms myself. I was loath to learn dpkg.


We've gone with Fedora on desktops and CentOS on servers.

Hope this helps.


On 1/30/20 7:57 PM, Yasha Karant wrote:
At this point in terms of application support for EL 7 (including SL 7) from 
external entities (such as Calibre -- there are others), I am going soon to be 
forced to go to another Linux.  The options appear to be drop EL entirely and go 
to Ubuntu  LTS ("stable") current, or to stay with EL and use Springdale 
(Princeton) EL8 when (if?) it is available, or Oracle 8 EL.  Thus far, everyone 
I have contacted who did a clean install of Oracle 8 (and then copied back 
files, directory trees, etc., from the non-systems areas of an EL 7 working 
system) have had no issues. However, I am very concerned about support for 
Oracle 8 other than purchasing support from Oracle.  Do the various professional 
repositories for SL 7 (and EL 7 in general) such as EPEL have an EL 8 version 
that work seamlessly with Oracle 8 (or Springdale for that matter)?


In the best of all possible worlds, I or my students would have time to build 
applications from source -- but there are too many and not enough time, forcing 
the use of repositories with pre-built RPMs (or DEBs if we switch to Ubuntu).  
Note that we run the same base OS on servers (including HPC compute servers with 
Nvidia CUDA GPUs) as well as desktop and laptop machines, all presently X86-64 
based (this may change for at least some of the servers).


Any advice would be appreciated.

Yasha Karant


disabling the mirror list

2020-02-20 Thread Ken Teh
I tried to do a yum localinstall somename.rpm and yum went out looking for 
mirrors. I have a machine behind a nat box that restricts outbound access, so it 
is timing out trying to reach http://ftp.scientificlinux.org.


Why does it try accessing various mirrors when I do a local install?

We have a local mirror here at ANL and I want yum to use our local mirror and 
not wander around the net trying to reach various mirrors. To this end, I've put 
the baseurl in sl6x.repo to point at our local mirror. But it still goes out to 
look for mirrors.


What gives? I tried looking at yum.conf parameters, but none jumps out at me 
that might possibly disable this behaviour.


Would appreciate some advice.  Thanks.
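One approach that may help: in each .repo file, set baseurl to the local mirror and comment out the mirrorlist line, since a mirrorlist entry makes yum fetch the mirror list even when a baseurl is present. A sketch (the mirror URL below is a placeholder, not the real ANL path):

```
# /etc/yum.repos.d/sl6x.repo (sketch)
[sl6x]
name=Scientific Linux 6x
baseurl=http://mirror.example.org/scientific/6x/$basearch/os/
#mirrorlist=...
gpgcheck=1
enabled=1
```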


comps, primary, other, filelists

2020-02-21 Thread Ken Teh
I've been using the comps file to construct the packages list for my kickstart 
installs, ignoring the other files (primary, other, and filelists) that are in 
repodata folder.


Can someone explain what the other files are and if I need to review them for 
"other" packages? I notice that for centos 8, the comps file is called 
comps-baseos, suggesting that there are more (?) package groups listed elsewhere.


Thanks.
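For context, the other files in repodata are standard createrepo metadata: primary.xml holds the per-package information yum uses for depsolving, filelists.xml lists the files each package owns, and other.xml carries changelogs. Only the comps file describes package groups. A sketch of pulling group package lists out of a comps-style file for a kickstart %packages section (the sample XML below is made up, but the element names follow the comps format):

```python
import xml.etree.ElementTree as ET

# A tiny stand-in for repodata/comps.xml; the group and packages are invented.
SAMPLE_COMPS = """<comps>
  <group>
    <id>core</id>
    <packagelist>
      <packagereq type="mandatory">bash</packagereq>
      <packagereq type="default">vim-minimal</packagereq>
    </packagelist>
  </group>
</comps>"""

def groups_to_packages(comps_text):
    """Map each comps group id to its list of packages."""
    root = ET.fromstring(comps_text)
    return {
        g.findtext("id"): [p.text for p in g.findall("packagelist/packagereq")]
        for g in root.findall("group")
    }

print(groups_to_packages(SAMPLE_COMPS))  # -> {'core': ['bash', 'vim-minimal']}
```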


Re: Is Scientfic Linux Still Active as a Distribution?

2020-02-24 Thread Ken Teh

+1

On 2/22/20 5:41 PM, Keith Lofstrom wrote:

I'm an independent electronics inventor, heavily dependent
on both competent software and competent laboratory science,
both for the knowledge I depend on and the tools I use to
transform that knowledge into products and services for
my customers.

SL has been a very good tool for that.  Thanks to all who
have contributed.

I depend on "benign neglect" for a stable computing
platform - just enough funding and staffing to fix urgent
problems, but not continuously mutate the platform to
conform to ephemeral fashion or management whim.

I moved /from/ Windows to gain that stability, even if
that limits the choice of new widgets I can attach to my
(older) computers.  I have plenty of replacement-spare
old widgets, and I don't need the distraction of a
rapidly mutating platform optimized for market churn
and planned-obsolescence sales.

I'm actually glad that Microsoft, Apple, and IBM are
busily churning those markets, because it keeps their
customers distracted and not bothering me with those
distractions while I think and work.  The hardware cast
off by the fashion-chasers is still abundant on eBay,
and I have enough of it to last me for life (except
for the batteries and backlights for my old Thinkpads).

I presume there are enough like me, some of whom are on
this list, that we can continue to carve out a community
space on top of CentOS, focused on inquiry and reliability.

If CentOS 9 or 10 or 11 goes off the rails, there are
enough of us here to tweak CentOS 7 or 8 into something
we can continue to use, just like Linux was "in the good
old days".

While "security by obscurity" is not optimum, I presume a
smaller community of impoverished science geeks is a less
tempting target for professional software criminals than
million-dollar IT departments for billion-dollar
corporations and governments, or billions of hapless
consumers.  We are part of the global target, but we are
unlikely to attract specific attention from the bad guys.

And while we still benefit from the use of servers at
Fermilabs for our "static" distro and our active mailing
list, perhaps we should have a backup plan for migration
in case some bureaucrat decides to pull the plug on us.
That has /always/ been a risk for what we do here; we are
one presidential tweet away from Saint Louis USDA exile.

As a community of scientific, like-minded Linux users,
let's begin to prepare a rudimentary plan B, and hope
that we never need to implement it.

Keith



Re: Who Uses Scientific Linux, and How/Why?

2020-02-26 Thread Ken Teh

Didn't plan on chiming in but Larry's post tugged.

I started with Slackware in '93, kernel 1.3, looking for a cheap X11 
workstation alternative to the then $15k a pop SunOS workstations, of which we 
could only afford 2.  I proposed to my division director to let me buy 12 
Pentium 90's at $2k a pop to deploy this new thing called a linux workstation. I 
recall a committee of two, one from our scientific computing division, who 
advised against it, saying there was no vendor backing. Lucky for me, my 
division director took a chance. Well, the rest, they say, is history.


Like Larry I switched to RedHat when they came in boxes. I stumbled on SL when I 
started working on ROOT. Version 0.6 then. There was this thing called 
FermiLinux and when Redhat stopped selling boxes and wanted a subscription for 
RHEL, I switched us over to SL.


Remember Connie's photos when SL started installing?

The SL mailing list is a fantastic resource. I suspect like all good things it 
will also come to an end. I hope it lasts a little longer, at least till I 
retire, so we can all bitch about CentOS 8 and commiserate together over the 
loss of SL. Lol.




On 2/25/20 1:56 PM, P. Larry Nelson wrote:

Brett Viren wrote on 2/25/20 8:15 AM:

"Peter Willis"  writes:

Perhaps, if it’s not too much trouble, people on the list might give a short 
blurb about

how they use it and why.


Not quite a short blurb, but not too long either.

I am retired now (nearly 4 years) after nearly 50 years in the IT biz - 44 of 
those at UIUC and 20 of those as an IT Admin for our local HEP group, and I can 
tell you that there are two people who made my life immeasurably better.  So I 
just want to toot their horn.


Troy Dawson and Connie Sieh of FermiLab.  Here's a great interview with Troy 
that will answer a lot of questions as well as elucidate why we went with SL.
(I suspect the following will get transmogrified by Fermi's Proof Point URL 
secret encoder ring)


http://old.montanalinux.org/interview-troy-dawson-scientific-linux-june2011.html

Alas, much to my initial dismay, Troy announced in 2011 he was going to work for 
RedHat, but Pat Riehecky jumped in to those big shoes (Thanks Pat!).  I would be 
remiss if I didn't also mention Urs Beyerle and his work on the SL Live 
CDs/DVDs.  And, of course, the (then) smallish but amazingly helpful SL user 
community on this list.


After infuriatingly frustrating and hapless encounters with RHEL support on even 
the simplest of issues, being able to have one-on-one interactions with Troy, 
Connie, and Urs (and other users on the list) was like stepping out of a cold 
dark cave onto a warm sun drenched beach. [not hyperbole]


Our journey (in case you're interested and still reading) went something like:

Late 90's and early 2000's - SunOS (expensive hardware, expensive maintenance 
contracts, expensive licensing). Start playing with this new toy Redhat 2.0. 
(spare desktop hardware, almost free software, no licensing).  Then Redhat 3, 
then 4 - now seeing that we can replicate all services from SunOS to RH.
No longer a toy.  Then RH 5, 6, 7, 8, 9 and End-of-Life.  LHC was ramping up 
and about to spew petabytes of ATLAS experiment data.  Time to start building 
racks of storage farms and compute clusters.  Switch to RHEL.  But with that 
came confusing and frustrating licensing plus the aforementioned support snafus.


Then an epiphany - one of our engineers was collaborating with another 
institution on loading linux onto embedded processors as part of the Dark Energy 
Survey telescope and came to me for linux advice.  They were using a free linux 
installation from CERN called Scientific Linux (SLC).  "Really!"  He said 
FermiLab had a similar version (SLF) but that they chose SLC for whatever 
reason. He said it's the same as RHEL. "Really!" (again)  I found FermiLab's 
website for SLF and the rest, they say, is history!


We started with RHEL3, moved to SL4, then SL5 (my favorite) and wound up at SL6. 
  SL7 was out and the HEP community was transitioning to it when I retired so I 
didn't have to deal with it.  :-)


Anyways, now back to retirement.
- Larry