Re: [SCIENTIFIC-LINUX-USERS] Yum update issue

2018-11-21 Thread jdow
I'll hold off on it. Thanks for the info. And thanks for the help waking the 
server up about my account.


{^_^}   Joanne

On 20181121 10:46:13, Pat Riehecky wrote:
I'm showing xorgxrdp-0.2.8-3.el7.x86_64 is an EPEL7 package which is tracking 
RHEL7 and has RHSA-2018:3410, and RHSA-2018:3059.


The SL equivalent packages are on hold pending 
https://bugzilla.redhat.com/show_bug.cgi?id=1650634


There are packages available in the sl-testing repo, but I'd recommend reading 
the bugzilla in full.
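Until the held packages land, one workaround (a sketch; the repo and package names are the ones given in this thread) is to skip the broken package, or to pull the testing stack after reading the bugzilla:

```shell
# Skip the broken dependency until SL ships the matching xorg packages:
yum update --exclude='xorgxrdp*'

# Or, having read the bugzilla in full, try the held packages from testing:
yum --enablerepo=sl-testing update xorgxrdp xorg-x11-server-Xorg
```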


Pat

On 11/21/18 12:35 PM, jdow wrote:

Four days running on SL7x:


Failed to check for updates with the following error message:
Failed to build transaction: xorgxrdp-0.2.8-3.el7.x86_64 requires 
xorg-x11-server-Xorg(x86-64) = 1.20.1



Latest available, and installed, is 1.19.5-5.el7

{^_^}   Joanne




Yum update issue

2018-11-21 Thread jdow

Four days running on SL7x:


Failed to check for updates with the following error message:
Failed to build transaction: xorgxrdp-0.2.8-3.el7.x86_64 requires 
xorg-x11-server-Xorg(x86-64) = 1.20.1



Latest available, and installed, is 1.19.5-5.el7

{^_^}   Joanne


Fascinating

2018-08-16 Thread jdow

Look at the time stamps closely

/etc/cron.daily/0yum-daily.cron:

Not using downloaded sl-security/repomd.xml because it is older than what we 
have:
  Current   : Wed Aug 15 06:46:15 2018
  Downloaded: Wed Aug 15 06:46:13 2018
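A sketch of the usual cure when the cached repomd.xml is newer than the copy the mirror serves (standard yum commands; nothing here is specific to this repository):

```shell
# Throw away the cached repomd.xml and friends, then re-fetch:
yum clean metadata
yum makecache
```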

{^_^}   Joanne


Re: L1TF

2018-08-15 Thread jdow
Maybe it would be appropriate to ask Red Hat (and Intel) when it will be 
available. SL will get it some time slightly later.

{^_^}

On 2018-08-15 05:16, Maarten wrote:

When are the kernel and microcode_ctl package updates coming out for SL?

https://access.redhat.com/security/vulnerabilities/L1TF






Re: ClamAV issue in repository

2018-08-06 Thread jdow
7x, and it seems to have corrected itself. I got it twice, so I forwarded it for 
your consideration as the message suggested. (Once is bad timing; twice means a 
problem exists. So I waited for two.)


{^_^}   Joanne

On 2018-08-06 09:01, Scott Reid wrote:


Hi,

Thanks for the report! I haven't been able to reproduce this, but I'm assuming 
you're running SL 7.5. Is this correct or are you running another version?

Thanks!

On 8/2/18, 11:56 AM, "owner-scientific-linux-us...@listserv.fnal.gov on behalf of 
jdow"  wrote:

 ===8<---
 /etc/cron.daily/0yum-daily.cron:
 
 Update notice SLBA-2018:1424-1 (from sl-fastbugs) is broken, or a bad duplicate,

 skipping.
 You should report this problem to the owner of the sl-fastbugs repository.
 If you are the owner, consider re-running the same command with --verbose 
to see
 the exact data that caused the conflict.
 ===8<---
 
 {^_^}
 



ClamAV issue in repository

2018-08-02 Thread jdow

===8<---
/etc/cron.daily/0yum-daily.cron:

Update notice SLBA-2018:1424-1 (from sl-fastbugs) is broken, or a bad duplicate, 
skipping.

You should report this problem to the owner of the sl-fastbugs repository.
If you are the owner, consider re-running the same command with --verbose to see 
the exact data that caused the conflict.

===8<---

{^_^}


Re: urldefence?!? Re: Create bootable ISO that can be copied to a USB key

2018-07-21 Thread jdow

On 20180721 07:13, Konstantin Olchanski wrote:

On Fri, Jul 20, 2018 at 07:04:52PM -0700, Konstantin Olchanski wrote:

On Fri, Jul 20, 2018 at 05:37:34PM -0700, Konstantin Olchanski wrote:

Here is my recipe for doing what you are doing:
https://daqshare.triumf.ca/~olchansk/linux/CentOS7/AAA-README-USBBOOT.txt

WTF, urldefence, really?!?


On second thoughts, never mind. It's just the Department of Privacy (the new
sibling of Dept of Love and Dept of Peace) going about their business.

Same as after using the SSL Labs HTTPS test tool I see my newly set up web server
getting probed by everybody and their dog; now posting a URL to a public mailing
list attracts the same kind of attention.

(maybe it's not "urldefence" leaking the data to 3rd parties, maybe it's the
mailing list "archive service", or they just scrape the mailing list archives directly).

I hope this is not wildly off-topic, but I am certainly surprised and I will be 
more
careful about what I post to this mailing list. "The walls have ears", as they 
say,
but these specific ears seem to have hands, as well.

Here are the hits on my AAA-README file from the web server logs. No hits before 
my message to the mailing list. Afterwards, at least some of the repeated GETs from 
repeated IP addresses look like robots. And too many hits altogether. I wish everybody
would read my README file and become enlightened, but I think not.

K.O.


[root@daqshare httpd]# grep AAA ssl_access_log | sort
108.233.44.194 - - [20/Jul/2018:19:08:25 -0700] "GET 
/~olchansk/linux/CentOS7/AAA-README-USBBOOT.txt HTTP/1.1" 200 2102
130.180.57.94 - - [21/Jul/2018:04:46:51 -0700] "GET 
/~olchansk/linux/CentOS7/AAA-README-USBBOOT.txt HTTP/1.1" 200 2102


...

Don't forget that more than just a few people are reading this list. Many will 
check your page out of curiosity as well as need.

{o.o}


Hm - SLBA-2018:1066-1 (from sl-fastbugs) is broken

2018-06-28 Thread jdow

/etc/cron.daily/0yum-daily.cron:

Update notice SLBA-2018:1066-1 (from sl-fastbugs) is broken, or a bad duplicate, 
skipping.

You should report this problem to the owner of the sl-fastbugs repository.
If you are the owner, consider re-running the same command with --verbose to see 
the exact data that caused the conflict.
Update notice SLBA-2018:1424-1 (from sl-fastbugs) is broken, or a bad duplicate, 
skipping.


{^_^}


Re: Why is 7.x still stuck at 7.4?

2018-05-15 Thread jdow

On 20180515 15:15, Orion Poplawski wrote:

On 05/13/2018 12:50 PM, Gilles Detillieux wrote:

On 2018-05-12 04:29, jdow wrote:

On 20180511 21:26, jdow wrote:
I have yum-conf-sl7x.noarch installed. 7.5 seems to be out. But yum update 
still leaves the system declaring it is 7.4.


{o.o}   Joanne


At least that's what I get on one system. The other is still declaring 7.3:

[... /etc]$ cat /etc/yum/vars/slreleasever
7.3

Shouldn't that read 7.x or something else if it's really following 7x?


I've found that sometimes yum-conf-sl7x doesn't properly update 
/etc/yum/vars/slreleasever. I suspect it might be because that file is 
shared/co-owned by yum-conf-sl7x and sl-release packages, and yum seems 
sometimes to get confused as to whether the config file should be updated or 
not. That happened on a few of my systems going from 7.3 to 7.4, but not with 
the recent 7.4 to 7.5 update. My fix was to copy the updated slreleasever file 
from an updated system to one that wasn't updating. Someone else has suggested 
removing and reinstalling the yum-conf-sl7x package: 
https://www.mail-archive.com/scientific-linux-users@fnal.gov/msg04927.html


If all else fails, you could try manually updating the slreleasever file:  
echo 7.5 > /etc/yum/vars/slreleasever


Yeah, this system just isn't robust.  Whichever of sl-release or yum-conf-sl7x 
is installed last "wins".  So currently, after every new point release, you'll 
need to reinstall yum-conf-sl7x or echo 7x > /etc/yum/vars/slreleasever.


It might be possible with the use of rpm triggers to have yum-conf-sl7x "fix" 
slreleasever after every update of sl-release.


Perhaps:

%triggerin -- sl-release
echo 7x > /etc/yum/vars/slreleasever


After actually getting "yum-conf-sl7x" to work, it appears it is an abbreviation 
for "yum-conf-sl7x-7.5-2.sl7.noarch". It appears this loaded slreleasever 
with 7.5, which is the current 7x. Until there is an update for "yum-conf-sl7x", 
meaning something like "yum-conf-sl7x-7.6-?.sl7.noarch" (fill in the ? later), it 
reads 7.5.


It appears that this may not really work until "yum clean all" is run. I 
don't know whether that can be run from inside the update to "yum-conf-sl7x" or not. 
The symptoms I had after installing yum-conf-sl7x, which agree with the "proper" 
command sequence mentioned in Takashi Ichihara's post (remove, install, clean all, 
update), suggest that it cannot, which more or less makes the 7x concept not work 
right. However, there may be an initial pump priming needed after the 7x conf 
install, with everything working as desired thereafter. I honestly don't know if I 
did the "clean all" step when installing 7x on the two machines that seemed to 
behave oddly. At any rate, I think I have it solved for now.
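The four-step sequence referenced above, spelled out as commands (a sketch; the package name yum-conf-sl7x is taken from this thread):

```shell
yum remove yum-conf-sl7x
yum install yum-conf-sl7x
yum clean all
yum update

# Sanity check that the release variable now tracks the point release:
cat /etc/yum/vars/slreleasever
```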


{^_^}


Re: Why is 7.x still stuck at 7.4?

2018-05-14 Thread jdow
FWIW, uninstalling the 7x conf and reinstalling it with the cleanup led to 702 
files being downloaded. Being paranoid, I am taking a couple of good backups before 
I load that many new things on my machine. So apparently the previous 7x install 
didn't take completely.


{^_-}

On 20180514 01:04, David Sommerseth wrote:

On 14/05/18 02:13, jdow wrote:

I notice that the system which declares 7.3 has been supposedly running 7x for
a very long time now, before 7.4 went public. I also notice some of the
installed repos, such as elrepo, explicitly say 7x. Others use $slreleasever.

I figure there must be some good reason for this. I'm wondering what that good
reason might be.


$ rpm -q sl-release

This should be a good indication.  If this package is not updated, then the
whole system announces itself as an older release - plus the base sl7-*
repositories point at an older release as well.




Re: Why is 7.x still stuck at 7.4?

2018-05-13 Thread jdow
I notice that the system which declares 7.3 has been supposedly running 7x for a 
very long time now, before 7.4 went public. I also notice some of the installed 
repos, such as elrepo, explicitly say 7x. Others use $slreleasever.


I figure there must be some good reason for this. I'm wondering what that good 
reason might be.


{o.o}

On 20180512 06:15, Steven C Timm wrote:

Two things could be happening--

(1) yum typically has a delay built into it of a couple days before it refreshes 
the repo cache when the repo has been changed, and thus may not have detected 
the new repo is there.
(2) yum update could be failing for some reason--if you have a stock system 
there will be e-mail in your root account saying why.


My systems got the 7.5 updates yesterday May 11.

Steve Timm


*From:* owner-scientific-linux-us...@listserv.fnal.gov 
<owner-scientific-linux-us...@listserv.fnal.gov> on behalf of jdow 
<j...@earthlink.net>

*Sent:* Saturday, May 12, 2018 4:29:35 AM
*To:* scientific-linux-users
*Subject:* Re: Why is 7.x still stuck at 7.4?
On 20180511 21:26, jdow wrote:

I have yum-conf-sl7x.noarch installed. 7.5 seems to be out. But yum update still
leaves the system declaring it is 7.4.

{o.o}   Joanne


At least that's what I get on one system. The other is still declaring 7.3:

[... /etc]$ cat /etc/yum/vars/slreleasever
7.3

Shouldn't that read 7.x or something else if it's really following 7x?

{^_^}


Re: Why is 7.x still stuck at 7.4?

2018-05-12 Thread jdow

On 20180511 21:26, jdow wrote:
I have yum-conf-sl7x.noarch installed. 7.5 seems to be out. But yum update still 
leaves the system declaring it is 7.4.


{o.o}   Joanne


At least that's what I get on one system. The other is still declaring 7.3:

[... /etc]$ cat /etc/yum/vars/slreleasever
7.3

Shouldn't that read 7.x or something else if it's really following 7x?

{^_^}


Why is 7.x still stuck at 7.4?

2018-05-11 Thread jdow
I have yum-conf-sl7x.noarch installed. 7.5 seems to be out. But yum update still 
leaves the system declaring it is 7.4.


{o.o}   Joanne


Re: menu items small and up close

2018-04-09 Thread jdow

On 20180409 05:03, ken wrote:

On 04/07/2018 05:22 AM, Maarten wrote:

I recently installed "Cinnamon Desktop" on my laptop, but the weird
thing is all the menu items are really small and up close to each
other. See the attachments; my desktop has the same packages installed
and doesn't have this problem. I'm running nvidia driver 390.48 from
elrepo. I tried installing the other drivers but those don't seem to
be compatible with my system. Anyone have an idea why my menu items
are looking like this?


It may be because you have a relatively large screen and/or your screen
is set to a relatively high resolution.  If so, this has been a
long-standing issue:  Video drivers set the size of screen object based
on pixels.  Of course those objects should be sized instead according to
their visibility to humans.  To fix this, you could set the screen to an
inferior resolution.  Or perhaps there is a setting in the video driver
to make screen objects larger.  Some apps also have settings for larger
icons and/or text.


Conceptually you should be able to tell the windowing system that you have, say, 
a 2000 by 3000 display that is 20" by 15" and the display would scale fonts and 
images appropriately. A dpi setting should cover this. And ideally it would be 
the same dpi on both axes to get images looking their best. I've never had to 
worry about that as I feel the bigger the display the better. So I don't know if 
X11 supports this concept. If not, it damn well should if it purports to be 
useful for desktops, notebooks, laptops, and everything else.
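The scaling arithmetic for the hypothetical panel above, sketched in shell (the panel dimensions are the made-up ones from the paragraph; xrandr's --dpi flag is one place X11 accepts such a declaration):

```shell
# Hypothetical 2000x3000-pixel panel on a 20in-by-15in surface:
hdpi=$((2000 / 20))   # 100 dpi horizontally
vdpi=$((3000 / 15))   # 200 dpi vertically -- not square, so images distort
echo "$hdpi $vdpi"
```

With a square-pixel panel the two numbers match, and something like `xrandr --dpi 96` can declare the value to X11.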


{o.o}  Joanne "Opinionated" Dow


Re: Tip: when your terminal gets all screwed up

2017-11-11 Thread jdow

On 2017-11-11 04:26, Tom H wrote:


So it should be:

PS1="\[\e[0m\][\u@\h:\l \w]\$ "


Maybe. I got silly and experimented.

PS1="\[\e[1m\][\u@\h:\l \w]\$ "
and
PS1="\e[1m[\u@\h:\l \w]\$ "
and
PS1="\e[1m[\\u@\\h:\\l \\w]\$ "

all produce the same thing, which leaves the issue even more confused than when 
we started.


{^_^}


Re: Tip: when your terminal gets all screwed up

2017-11-11 Thread jdow

On 2017-11-11 06:30, Nico Kadel-Garcia wrote:

On Sat, Nov 11, 2017 at 8:10 AM, Tom H  wrote:

[ Hundreds of lines of fine-tuning prompt manipulation code and theory
snipped, especially involving quote handling ]

And *this* is why I ignore it all and just use "stty sane" when my
console gets confused.


Back with Hurricane, I think it was, I simply built an alias for that stty 
command and called it "clr". I "think" the \e[0m cured the screwed up text. At 
the very least I've not had it happen to me in a VERY long time.
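A sketch of such an alias (the name "clr" and the trailing attribute reset are as described above; adjust to taste):

```shell
# Restore sane terminal modes, then clear any lingering text attributes
# (\033[0m is ESC [ 0 m, the "reset all attributes" sequence):
alias clr='stty sane; printf "\033[0m"'
```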


{^_^}


Re: Tip: when your terminal gets all screwed up

2017-11-11 Thread jdow

On 2017-11-11 03:28, Tom H wrote:

On Fri, Nov 10, 2017 at 10:15 PM, Steven Haigh  wrote:


For what its worth, I've been using this for years:
PS1="\[\033[01;37m\]\$? \$(if [[ \$? == 0 ]]; then echo \"\[\033[01;32m\]
\342\234\223\"; else echo \"\[\033[01;31m\]\342\234\227\"; fi) $(if [[ ${EUID}
== 0 ]]; then echo '\[\033[01;31m\]\h'; else echo '\[\033[01;32m\]\u@\h'; fi)\
[\033[01;34m\] \w \$\[\033[00m\] "


If you use single-quotes for PS1, you can use unescaped double-quotes
and dollar signs within it. It makes it more legible:

PS1='\[\033[01;37m\]$? $(if [[ $? == 0 ]]; then echo
"\[\033[01;32m\]\342\234\223"; else echo
"\[\033[01;31m\]\342\234\227"; fi) $(if [[ ${EUID} == 0 ]]; then echo
"\[\033[01;31m\]\h"; else echo "\[\033[01;32m\]\u@\h";
fi)\[\033[01;34m\] \w \$\[\033[00m\] '

You might be better off using "printf" (it's a bash builtin) because
"echo" might not interpret escapes depending on the bash or shell
options that are set.


It works for him, apparently. So that's good. Now, in pedantic mode each of the 
\033 strings can be changed to \e for easier readability.
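The pedantic-mode substitution is safe because bash's printf (and its prompt expansion) treat \e and \033 as the same byte, 0x1b. A quick check, assuming bash is available:

```shell
# Both spellings emit ESC; cat -v renders the byte visibly as ^[ :
a=$(bash -c "printf '\033[1m'" | cat -v)
b=$(bash -c "printf '\e[1m'" | cat -v)
[ "$a" = "$b" ] && echo "same sequence: $a"
```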


{^_-}


Re: Tip: when your terminal gets all screwed up

2017-11-10 Thread jdow

On 2017-11-10 16:38, ToddAndMargo wrote:

On 11/10/2017 04:21 PM, jdow wrote:

On 2017-11-10 15:14, ToddAndMargo wrote:

Dear List,

Ever cat a binary file by accident and your
terminal gets all screwed up.

I had a developer on the Perl 6 chat line give me
a tip on how to unscrew your terminal and set it
back to normal.  (He was helping me do a binary
read from the keyboard.)

stty sane^j

Note: it is <Ctrl-J>, not "enter".

-T


Make "\033]0;" the first bit of your prompt. Never worry about it again.

ESC-0 sets the terminal to have no attribute bits set. So it clears funny 
display. I've had that as a standard part of my prompts for decades, even back 
in the CP/M days.

{^_^}   Joanne


Sweet!

Here is what I have in my .bash_profile file:


if [ "$PS1" ]; then
  # extra [ in front of \u unconfuses confused Linux VT parser
  PS1="\e[0 [[\\u@\\h:\\l \\w]\\$ "
fi

{^_^}


Re: Tip: when your terminal gets all screwed up

2017-11-10 Thread jdow
And that has an oops in it. Don't include the ";". The proper escape sequence 
would be "esc[0". My bad. Been too long since I did more than copy stuff.
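Putting the correction together, a minimal prompt that leads with a full attribute reset might look like this (a sketch, using the bracketed zero-width markers bash expects):

```shell
# \[ \] mark the escape as zero-width so bash computes line length correctly;
# \e[0m is ESC [ 0 m, the "reset all attributes" sequence:
PS1='\[\e[0m\][\u@\h:\l \w]\$ '
```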


{^_^}

On 2017-11-10 16:28, Carl Friedberg wrote:

That is cool.

I've had (in the VT220/320) some pretty wild prompts.

There was a thunderbolt, and for the holidays, someone
crafted Santa, a sleigh, and reindeer. People must have
had more time back then, or else no one who was in
charge had any idea what they were doing. Or both.

Carl Friedberg
(212) 798-0718
www.esb.com
The Elias Book of Baseball Records
2017 Edition

-Original Message-
From: owner-scientific-linux-us...@listserv.fnal.gov 
[mailto:owner-scientific-linux-us...@listserv.fnal.gov] On Behalf Of jdow
Sent: Friday, November 10, 2017 7:22 PM
To: scientific-linux-users@fnal.gov
Subject: Re: Tip: when your terminal gets all screwed up

On 2017-11-10 15:14, ToddAndMargo wrote:

Dear List,

Ever cat a binary file by accident and your
terminal gets all screwed up.

I had a developer on the Perl 6 chat line give me
a tip on how to unscrew your terminal and set it
back to normal.  (He was helping me do a binary
read from the keyboard.)

stty sane^j

Note: it is <Ctrl-J>, not "enter".

-T


Make "\033]0;" the first bit of your prompt. Never worry about it again.

ESC-0 sets the terminal to have no attribute bits set. So it clears funny
display. I've had that as a standard part of my prompts for decades, even back
in the CP/M days.
{^_^}   Joanne





Re: Tip: when your terminal gets all screwed up

2017-11-10 Thread jdow

On 2017-11-10 15:14, ToddAndMargo wrote:

Dear List,

Ever cat a binary file by accident and your
terminal gets all screwed up.

I had a developer on the Perl 6 chat line give me
a tip on how to unscrew your terminal and set it
back to normal.  (He was helping me do a binary
read from the keyboard.)

stty sane^j

Note: it is <Ctrl-J>, not "enter".

-T


Make "\033]0;" the first bit of your prompt. Never worry about it again.

ESC-0 sets the terminal to have no attribute bits set. So it clears funny 
display. I've had that as a standard part of my prompts for decades, even back 
in the CP/M days.

{^_^}   Joanne


Re: 7.4

2017-06-18 Thread jdow

On 2017-06-18 21:16, ToddAndMargo wrote:

Hi All,

Any rumors on when 7.4 will hit SL?

-T


Well, WiseOldCrow remarked it was just around the corner, "second Tuesday this 
week."


{O,o}  Ack! Plbbltptpbbb! (Fed that straight line I cannot resist.)


Re: [SCIENTIFIC-LINUX-USERS] RAID 6 array and failing harddrives

2017-04-11 Thread jdow

On 2017-04-11 09:44, Konstantin Olchanski wrote:

On Tue, Apr 11, 2017 at 11:13:25AM +0200, David Sommerseth wrote:


But that aside, according to [1], ZFS on Linux was considered stable in
2013.  That is still fairly fresh, and my concerns regarding the time it
takes to truly stabilize file systems for production [2] still stands.



Why do you worry about filesystem stability?

So I suppose the extended downtime while several terabytes of data are restored 
after their loss due to filesystem malfunction is of no consequence to you. 
Others find extended downtime both extremely frustrating and expensive. And that 
ignores the last few {interval between backups} worth of data loss, which 
can also be expensive.


{o.o}   Joanne


Re: Second network adapter

2017-04-01 Thread jdow

On 2017-03-31 22:31, Konstantin Olchanski wrote:

On Fri, Mar 31, 2017 at 07:10:22PM -0700, jdow wrote:


That's why I pictured IT plus other corporate authorities. When you
compromise security on a company's network you give away the keys to
the corporate kingdom. That can, has, and should lead to a firing.

Having a password that doesn't meet spec is a whole different
ballgame.



None of this makes sense. Installing a wifi hotspot in a locked
room in a locked building (where it cannot possibly be accessed
by unauthorised people) is a firing offense but using the same
password for root and for yahoo is ok (or just a slap on the wrist).


THAT demands a reply, considering that I am somewhat of an expert in the 
electronics and technology used in radios of all kinds, including WiFi hot 
spots. Unless the building is a Tempest qualified facility or is a considerable 
distance from roads it's a very easy matter to exchange signals with a WiFi 
MODEM. A nice high gain directional antenna is a common tool for WiFi hackers. 
Radio does not abide by locks, locked rooms, or locked buildings. It does care 
about the walls between the transmitter and receiver. But usually adding some 
antenna gain solves that problem neatly.


In about 1990 give or take a little somebody had setup in a car outside the 
Torrance courthouse in California. He had a gadget that demonstrated why TEMPEST 
standards meant something. His screen painted what was on the screens in the 
court house offices. (These days it would be harder because of the number of 
computers there. In those days separating the leakage signals was not as hard.) 
The TEMPEST facility where I worked near there was a well shielded area with 
special locks to keep people out and NO network in or out of the room. They 
spent two months building it and securing it. It had a guard in it 24/7 to keep 
material inside on the inside. Fortunately USB did not exist in those days. The 
computer was a small VAX running VMS.


As for using the same password multiple places - how in h-e-double-toothpicks 
are you going to police that in a legal and secure manner? Of course it should 
get the guy tossed out on his ear if he does it and is caught. That event shows 
he is beyond stupid into criminally stupid enough to be caught doing it. (Who 
was shoulder surfing while he was typing his password?)


Should I take it as a fact that you have set up such a configuration where you 
work and are trying to justify your act? Don't answer, I'd feel compelled to be 
a nasty tattle-tale about it.


{^_^}


Re: Second network adapter

2017-03-31 Thread jdow

On 2017-03-31 18:09, Konstantin Olchanski wrote:


anybody who did [this] [should] be fired upon its discovery, not "let go" or
"laid off" but really "fired for cause".



I am not sure I like this crazy talk about IT departments becoming judge, jury
and executioner, and about firing people left and right for violating some
arbitrarily made-up rules. ("your password is only 28 characters long, you are 
fired!").

My dislike notwithstanding, in practice all this KGB stuff comes to nothing when 
you try to fire a Nobel-prize-winning professor or when you discover that the boss
of your boss is reading Playboy instead of the NY Times.


That's why I pictured IT plus other corporate authorities. When you compromise 
security on a company's network you give away the keys to the corporate kingdom. 
That can, has, and should lead to a firing. Having a password that doesn't meet 
spec is a whole different ballgame. And using your CDROM drive as a coffee cup 
holder is something else again. Using your USB ports to plug in random dongles 
you picked up on the street is a potential serious compromise to the corporate 
systems. But, it's up to the IT department to fill them with epoxy, chewing gum, 
or whatever else they want. For that matter password parameters are up to the IT 
department to enforce by not allowing entry of a bad password or providing more 
secure alternate means.


The WiFi node the person wanted to install on a company computer on company 
property simply creates a wide open hole into the network. If that's OK what is 
all this bother with SELinux, firewalls, and other security tools that 
supposedly Linux doesn't really need because well it's magic. (Yes, a Linux 
machine with a user connected to the keyboard and mouse rather than an IT drone 
is going to pick up malware. Recent exploits suggest this can be serious. In 
that case AV helps if you're not among the very first exposed to it.) I 
personally believe companies should have a published policy (hah - PUBLISHED you 
say? Our policies are secret - been there, too) declaring that such a WiFi 
tap on their network is a firing offense leading to immediate dismissal unless 
you have a REALLY REALLY good story. On the other paw, if the company isn't 
worth preserving in the minds of its owners and management, then go ahead and 
put in the WiFi tap. Have the grace to feel guilty if it does hasten the 
company's demise, though.


{o.o}   Fortunately I am exposed to VERY weak hacking attempts locally. I live 
uncomfortably dangerously and monitor security logs religiously. If I owned a 
company with me as an IT manager I'd be fired long ago. OTOH - only one 
penetration by malware since 1985 on open networks isn't altogether bad for a 
novice, even if she is paranoid. (They really are out to get me; but, there is 
nothing personal about it. You'll do just as well as me as a victim.) {^_-} And 
methinks me has said enough. IT should have published policies that employees 
are kept aware of. THEN things like an open WiFi (aka any WiFi) router covertly 
installed by an employee can lead to immediate dismissal.


Re: Second network adapter

2017-03-31 Thread jdow

On 2017-03-31 15:29, David Sommerseth wrote:

On 31/03/17 23:04, jdow wrote:

On 2017-03-31 13:44, David Sommerseth wrote:

On 31/03/17 13:40, James M. Pulver wrote:

Shouldn't we all take a step back here and ask why your IT support isn't
providing the resources you need to run the experiment?


This is absolutely important to consider for the person going to do such
a project.  But that shouldn't stop us on this ML to try to provide some
solutions which can work.

If this person doesn't use these solutions, others in completely
different needs may use this information for their challenges.

Mailing lists are a good place to "think aloud" and get input and ideas,
to share knowledge and learn new things.

I for one will not stop sharing my knowledge when I feel I have
something valuable to share.  Whether people use or find my input
valuable and useful or not is secondary to the sharing itself.  Because
we all grow when we share.


If the fellow cannot figure out how to add the second card and make it
work (he had to ask here, after all) then he is utterly in over his head
for security. I read the SANS Diary following their discussions as an
utter amateur. It gives me an idea of what is going around so I can work
to avoid it. This question is one of the chief IT nightmare scenarios.
It opens a side door into a secure network. Side doors are generally
very easy to penetrate. Once that happens the entire network and all the
site's data storage are up for grabs or are at least one layer of the
onion closer to being penetrated.


All very true.  But if the information doesn't come from here, it comes
from another place.  Perhaps even more hackier and with even worse
security around it.

There is however only one realistic way to avoid such scenarios though.
Corporate policies, where people get reminded about them on a regular
basis over time.  I know some places they did minor updates (could just
be simple grammar fixes) so they could send out a mail to all employees
about the updated policy (even though I found _that_ approach a bit
annoying).


I know that if I was "in charge"
anybody who did this would be fired upon its discovery, not "let go" or
"laid off" but really "fired for cause". Security compromises can eat
his salary up each minute for hours on end in costs, legal fees,
restitution, and so forth.


That is exactly what the employee contract needs to state, with a
reference to the corporate policy defining what you can and cannot do.


The correct "hack" here is to walk up the management chain properly.
Sell your case to your boss. Have your  boss sell it to his boss.
Lather, rinse, repeat until the "boss" is at a level to communicate with
the IT boss. And be prepared to compromise. Also remember that YOUR
convenience is a very weak selling point in the face of security; but,
not being able to perform your assigned duties, is. It can be very hard
to explain in many cases. But solutions need to be worked out. One such
case is a department which makes dramatic presentations using
computerized hardware in a secure workplace with an aggressive IT
department.


Yes, in an ideal world this would be perfect.  And it would probably
work wonderfully well too!

I've been working with IT professionally since late 90s, worked in both
small start-ups and large enterprises, and everything in between, over
all these years.  Most of the times as full-time employee other times as
a hired consultant.  I have been through several rounds of PCI-DSS and
Visa/MasterCard security audits and certifications, been responsible for
the network security several more places.

My experience is that you need to be very lucky with your organisation
to actually find that each of these "leadership levels" fully understand
and be interested in what you are saying as a requestor.

I have experienced leaders saying "NO!" by default by just hearing the
word "network", without further chances to discuss it.  And I have
experienced my closest leader saying "Go ahead, just do it! I'll ensure
it gets approved!" and then experience the request never got a formal
approval, sometimes it never went further.  And the "fun" detail is that
it doesn't matter what kind of organisation it is, small or large, from
basically no processes to way too slow and long-dragging processes; I've
seen the whole spectrum of leaders in all of them.

In my experience, with way too many leaders there is only one primary
question you need to have a good answer to to get an approval:  "What's
in it for me [as the leader]?"   If it can make the leader look good
among his group of fellows, the odds of getting an approval gets
annoyingly higher.  The technical aspect of it?  In reality, way too
often too boring for such leaders.   (I do not claim all leaders are
like that, not at all!  But way too many are).

Which again is why a corporate p

Re: Second network adapter

2017-03-31 Thread jdow

On 2017-03-31 13:44, David Sommerseth wrote:

On 31/03/17 13:40, James M. Pulver wrote:

Shouldn't we all take a step back here and ask why your IT support isn't
providing the resources you need to run the experiment?


This is absolutely important to consider for the person going to do such
a project.  But that shouldn't stop us on this ML to try to provide some
solutions which can work.

If this person doesn't use these solutions, others in completely
different needs may use this information for their challenges.

Mailing lists are a good place to "think aloud" and get input and ideas,
to share knowledge and learn new things.

I for one will not stop sharing my knowledge when I feel I have
something valuable to share.  Whether people use or find my input
valuable and useful or not is secondary to the sharing itself.  Because
we all grow when we share.


If the fellow cannot figure out how to add the second card and make it work (he 
had to ask here, after all) then he is utterly in over his head for security. I 
read the SANS Diary following their discussions as an utter amateur. It gives me 
an idea of what is going around so I can work to avoid it. This question is one 
of the chief IT nightmare scenarios. It opens a side door into a secure network. 
Side doors are generally very easy to penetrate. Once that happens the entire 
network and all the site's data storage are up for grabs or are at least one 
layer of the onion closer to being penetrated. I know that if I was "in charge" 
anybody who did this would be fired upon its discovery, not "let go" or "laid 
off" but really "fired for cause". Security compromises can eat his salary up 
each minute for hours on end in costs, legal fees, restitution, and so forth.


The correct "hack" here is to walk up the management chain properly. Sell your 
case to your boss. Have your  boss sell it to his boss. Lather, rinse, repeat 
until the "boss" is at a level to communicate with the IT boss. And be prepared 
to compromise. Also remember that YOUR convenience is a very weak selling point 
in the face of security; but, not being able to perform your assigned duties, 
is. It can be very hard to explain in many cases. But solutions need to be 
worked out. One such case is a department which makes dramatic presentations 
using computerized hardware in a secure workplace with an aggressive IT 
department. What is needed is a secondary network that is treated as being 
outside the normal network with very tenuous connections to it. Otherwise the AV 
software will trigger at the wrong time ruining a presentation for which the 
take of the show is in 6 digits or more. Furthermore some network appliances 
used in theaters are um being charitable here "flaky". They do nasty things like 
expropriate the 2.0.0.0 network space for their own use. (ARTNET) Others put 
tremendous amounts of carefully timed (!) traffic on the network (COBRANET). So 
a working solution must be found. It behooves the applicant to communicate clearly 
and the IT department to adapt. But, first, explain why you cannot do your job 
with their rules. And walk that explanation through the hierarchy. THAT is GOOD 
hacking.


{^_^}   Joanne (I have, indeed, done some strange things in my time.)


Re: Connie Sieh, founder of Scientific Linux, retires from Fermilab

2017-02-24 Thread jdow

Thank you Connie for all the years of effort creating what we all enjoy today.

I hope your retirement is a time of enjoyment and comfort.

{^_^}   Joanne Dow

On 2017-02-24 13:52, Bonnie King wrote:

Friends,

The Scientific Linux team is at once happy and sad to announce Connie Sieh's
retirement after 23 years. Today is her last full-time day at Fermilab.

Connie Sieh founded the Fermi Linux and Scientific Linux projects and has worked
on them continuously. She has sometimes preferred to toil behind the scenes and
leave public announcements to others, but has always been a driving force behind
the projects.

The Scientific Linux story started in the late 1990s when Connie's group
explored using commodity PC hardware and Linux as an alternative to commercial
servers with proprietary UNIX operating systems. From the distributions
available at the time, Red Hat Linux was chosen.

In 1998, Connie announced Fermi Linux at HEPiX, a semi-annual meeting of High
Energy Physics IT staff. Fermi Linux was a customized and re-branded version of
Red Hat Linux with some tweaks for integration with the Fermilab environment. It
also introduced an installer modification called Workgroups, a framework to
customize package sets for use at different sites and for different purposes.
The Workgroups concept lives on today in the form of Contexts for SL7.

In October 2003 TUV changed their product model and introduced Red Hat
Enterprise Linux. Enterprise Linux was no longer freely distributed in binary
form, but sources remained available.

Connie and her colleagues started building from these sources, creating one of
the first Enterprise Linux rebuilds. A preview, dubbed HEPL, was presented at
spring HEPiX 2004. In May 2004, the rebuild was released as Scientific Linux.
The name was chosen to reflect the goals and user base of the product.

Our colleagues at CERN collaborated, customizing and using Scientific Linux as
Scientific Linux CERN (SLC). SL became a standard OS for Scientific Computing in
High Energy Physics at Fermilab, CERN and beyond.

SL is freely available to the general public, and is a popular Enterprise Linux
rebuild. As a result, it has built a community outside of Fermilab and HEP.

With gratitude, the Scientific Linux team would like to recognize Connie's many
years of service and her immense contribution to the project she founded.

Connie's outstanding technical and non-technical judgement is the foundation of
Scientific Linux. Her legacy will continue to inform the way we run SL and we
hope she'll remain as a collaborator.

All the best to Connie in her well-earned retirement. She will be dearly missed!



Re: Anacron has an upset tummy over sl-security

2017-01-29 Thread jdow

Yes (sigh).

I'll know from logwatch in the next couple days if it worked.

{^_^}
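
The cleanup being discussed (Bill's "yum clean all" correction) boils down to this sketch; the `rm -rf` step is an optional heavier hammer, not something either poster prescribed:

```shell
# Drop cached repo metadata so the next run re-downloads repomd.xml
# and the update notices (run as root):
yum clean all
# Optional, heavier: remove the yum cache directory outright.
rm -rf /var/cache/yum
# Force a fresh metadata fetch and see what updates are pending:
yum check-update
```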

On 2017-01-29 18:03, Bill Maidment wrote:

I trust that you mean "yum clean all".


-Original message-

From:jdow 
Sent: Monday 30th January 2017 11:34
To: scientific-linux-users@fnal.gov
Subject: Re: Anacron has an upset tummy over sl-security

I am not using a local repo. I'm trying a "repo clean all" and then an update.

{^_^}

On 2017-01-29 16:15, Bill Maidment wrote:

Hi
I got the same issue, but when I deleted repodata from my local repo and 
resynced from the rsync master, it sorted itself out.
I believe this may have been caused by the firefox update which was released 
about the same time as 7.3
Anyway, I no longer have the problem (for now).

Cheers
Bill


-Original message-

From:jdow 
Sent: Monday 30th January 2017 10:30
To: scientific-linux-users@fnal.gov
Subject: Anacron has an upset tummy over sl-security

/etc/cron.daily/0yum-daily.cron:

Not using downloaded repomd.xml because it is older than what we have:
   Current   : Wed Jan 25 11:54:42 2017
   Downloaded: Wed Jan 25 11:54:25 2017
Update notice SLBA-2015:2562-1 (from sl-security) is broken, or a bad duplicate,
skipping.
You should report this problem to the owner of the sl-security repository.
Update notice SLBA-2016:1445-1 (from sl-security) is broken, or a bad duplicate,
skipping.
Update notice SLBA-2016:1526-1 (from sl-security) is broken, or a bad duplicate,
skipping.
... many more



I thought this was supposed to be fixed. This suggests there still is some form
of problem with the sl-security repo.

{^_^}   Joanne











Re: Anacron has an upset tummy over sl-security

2017-01-29 Thread jdow

I am not using a local repo. I'm trying a "repo clean all" and then an update.

{^_^}

On 2017-01-29 16:15, Bill Maidment wrote:

Hi
I got the same issue, but when I deleted repodata from my local repo and 
resynced from the rsync master, it sorted itself out.
I believe this may have been caused by the firefox update which was released 
about the same time as 7.3
Anyway, I no longer have the problem (for now).

Cheers
Bill


-Original message-

From:jdow 
Sent: Monday 30th January 2017 10:30
To: scientific-linux-users@fnal.gov
Subject: Anacron has an upset tummy over sl-security

/etc/cron.daily/0yum-daily.cron:

Not using downloaded repomd.xml because it is older than what we have:
   Current   : Wed Jan 25 11:54:42 2017
   Downloaded: Wed Jan 25 11:54:25 2017
Update notice SLBA-2015:2562-1 (from sl-security) is broken, or a bad duplicate,
skipping.
You should report this problem to the owner of the sl-security repository.
Update notice SLBA-2016:1445-1 (from sl-security) is broken, or a bad duplicate,
skipping.
Update notice SLBA-2016:1526-1 (from sl-security) is broken, or a bad duplicate,
skipping.
... many more



I thought this was supposed to be fixed. This suggests there still is some form
of problem with the sl-security repo.

{^_^}   Joanne






Anacron has an upset tummy over sl-security

2017-01-29 Thread jdow

/etc/cron.daily/0yum-daily.cron:

Not using downloaded repomd.xml because it is older than what we have:
  Current   : Wed Jan 25 11:54:42 2017
  Downloaded: Wed Jan 25 11:54:25 2017
Update notice SLBA-2015:2562-1 (from sl-security) is broken, or a bad duplicate, 
skipping.

You should report this problem to the owner of the sl-security repository.
Update notice SLBA-2016:1445-1 (from sl-security) is broken, or a bad duplicate, 
skipping.
Update notice SLBA-2016:1526-1 (from sl-security) is broken, or a bad duplicate, 
skipping.

... many more



I thought this was supposed to be fixed. This suggests there still is some form 
of problem with the sl-security repo.


{^_^}   Joanne


Re: 6.6 to 6.x (6.8 now, I guess)

2017-01-27 Thread jdow
That sort of appears to be the case. I tried nuking it. There is still a 
libasound.so. file from other RPMs. So it's likely still workable. This WAS 
the main gateway machine to the Internet here for a dozen or so 'thingies' on 
the network. At the moment all it is doing is spamassassin filtering for my 
partner. I've migrated (migrained?) the other functions off to the newer $400 
wonder running 7.3 (as of today). This is fortunate. The disk in the older 
machine is starting to throw SMART pending sector errors. So now I can clear 
that disk off and donate the machine, perhaps with a recent FC on it, to the 
local SF club or something. Heh, donate it to the local home for people my age 
or older who can't get around but like really good puzzles. {^_-}


It's not so bad risking things like nuking a library from the system when it's 
not an important machine anymore.


{^_^}   Joanne

On 2017-01-27 23:43, Andrew C Aitchison wrote:

On Fri, 27 Jan 2017, jdow wrote:


Ran yum --releasever=6.7 update sl-release

Same error going 6.6 to 6.7 as 6.6 to 6.x (6.8).

Transaction Check Error:
 file /lib64/libasound.so.2.0.0 from install of alsa-lib-1.1.0-4.el6.x86_64
conflicts with file from package libasound2-1.0.24.1-35.el6.x86_64


I can't find libasound2 in the SL tree, although it is in ATrpms.

You may be trying to mix too many repos ...


What now?
{^_^}

On 2017-01-27 15:03, jdow wrote:

 It gets through downloading a half gigabyte, trundles a bit and whines:

 Transaction Check Error:
   file /lib64/libasound.so.2.0.0 from install of
 alsa-lib-1.1.0-4.el6.x86_64
 conflicts with file from package libasound2-1.0.24.1-35.el6.x86_64

 
 Trying a hail Mary yum --releasever=6.11 update sl-release

 {o.o}







Re: 6.6 to 6.x (6.8 now, I guess)

2017-01-27 Thread jdow

Ran yum --releasever=6.7 update sl-release

Same error going 6.6 to 6.7 as 6.6 to 6.x (6.8).

Transaction Check Error:
  file /lib64/libasound.so.2.0.0 from install of alsa-lib-1.1.0-4.el6.x86_64 
conflicts with file from package libasound2-1.0.24.1-35.el6.x86_64


What now?
{^_^}

On 2017-01-27 15:03, jdow wrote:

It gets through downloading a half gigabyte, trundles a bit and whines:

Transaction Check Error:
  file /lib64/libasound.so.2.0.0 from install of alsa-lib-1.1.0-4.el6.x86_64
conflicts with file from package libasound2-1.0.24.1-35.el6.x86_64


Trying a hail Mary yum --releasever=6.11 update sl-release

{o.o}



Re: I need to add something to my global PATH

2017-01-13 Thread jdow

On 2017-01-13 17:42, ToddAndMargo wrote:

Hi All,

Google is failing me here. All I get is how to alter the path locally.

I want to add something to the PATH so the EVERYONE see it.


Edit /etc/profile


And one I make the changes, how do I reload the thing without having to reboot?


That's easy, logout and log back in.

So far as I know you can't do this without the logout. /etc/profile is read on 
login. Changing an existing bash instance seems to require manually fiddling 
with the PATH environment setting inside the running bash instance.


{^_^}
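
The usual EL refinement of "edit /etc/profile" is to drop a fragment into /etc/profile.d instead, which login shells source automatically. A runnable sketch (written to /tmp so it works without root; /opt/mytools/bin and the file name are placeholders):

```shell
# A profile fragment; in real use this would be /etc/profile.d/mytools.sh.
cat > /tmp/mytools.sh <<'EOF'
# Append only if not already present, so PATH does not grow on re-source.
case ":$PATH:" in
    *:/opt/mytools/bin:*) ;;
    *) export PATH="$PATH:/opt/mytools/bin" ;;
esac
EOF

# Existing shells do not see the change until logout/login; for the
# current shell you can source the file by hand:
. /tmp/mytools.sh
echo "$PATH"
```

As the thread says, there is no way to push the change into other users' already-running shells; they pick it up at their next login.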


Re: Adventures with 7.2

2017-01-09 Thread jdow

On 2017-01-09 16:04, Konstantin Olchanski wrote:

On Sat, Jan 07, 2017 at 08:18:38PM -0800, jdow wrote:


Blanket disabling both of [selinux and iptables] at once, permanently is stupid 
beyond
belief ...




And then there is the reality:

In el6 (and earlier), selinux was not functional and iptables were not enabled 
by default.

So I see el7 is a big improvement:

a) iptables/firewalld is enabled by default and is easy to manage. no reason to 
turn it off ever.
b) selinux is mostly functional except for obscure bugs.

So we go from 0-out-of-2 to 2-out-of-2, unless you have been burned and scarred
(but not fired) by the NFS server bug, that it is 1-out-of-2.


SELinux worked for me for quite awhile on 6.2 on up. Now, with 7 (and perhaps 
with 6) there are some problems I don't know enough to work around. I have a 
MESSY workaround in 6.x. I learned what the files in /etc/dhcp/dhclient.d do. 
So I used that to update a manually generated iptables that has a trick on open 
ports that allows one login per 90 seconds (or whatever I set it to). That 
worked. A file named "iptables.sh" calls the real iptables script I have tucked 
away in /etc/sysconfig.
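
A common way to get that "one login per 90 seconds" behaviour is the iptables `recent` match. This is a generic sketch, not Joanne's actual script; port 22 and the list name are assumptions:

```shell
# Record the source address of every NEW connection to the port...
iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
         -m recent --set --name SSHLIMIT
# ...and drop a second NEW connection from the same address arriving
# within 90 seconds; everything else falls through to the accept rule.
iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
         -m recent --update --seconds 90 --hitcount 2 --name SSHLIMIT -j DROP
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
```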


Now, all that works; but I have an email arrangement that uses "fetchmail" to 
pull mail down from my ISP. I've found in the past it seems to have problems 
when the IP address from the ISP changes. (Damnifinowhy) And I have to get it 
started in the first place. "RestartMail.sh" seemed like the perfectly logical 
place to make sure it starts.


RestartMail.sh at first tried to "sudo" to the appropriate account and run a 
start mail script there. Nope. Fetchmail could not save or manipulate its pid 
file. Besides, sudo would not reliably run. I tried "su -l user command". Nope. It 
seems to vary with the phase of the Moon or something whether su or sudo is even 
accepted in the script. And always "fetchmail -d 120" has trouble with its pid 
file. The semodules "trick" doesn't seem to work or stick around through reboots.


So, I have to fark around with crontab and a script that detects changed 
conditions so that fetchmail gets started properly.


Some REALLY good documentation for SELinux with some good drawings as well as a 
snow job of words would be worthwhile. I'm not holding my breath. I'm just 
working around the various SELinux imposed annoyances. I feel naked without it; 
but, it wears like a wool bikini - itchy and scratchy.


{o.o}


Re: Adventures with 7.2

2017-01-09 Thread jdow

On 2017-01-09 06:04, David Sommerseth wrote:

On 08/01/17 05:18, jdow wrote:

And in a fairly clean (no servers) install iptables opened wide for
brief periods can be considered "safe enough".


Absolutely right!  Of course you should do a security assessment before
doing it, just to have an idea of the worst possible outcome and
consider if the risk is worth it or not.  But in many cases, this might
be very sensible to do.


When I can I cheat this test. The new server machine initially sits behind the 
existing firewall. {^_-}


With that in mind, I have an interesting mystery here. If I am running a 
nameserver open to the local network (but firewalled off and configured to only 
accept local queries) on a machine at say 192.168.159.3 and place that as the 
nameserver into resolv.conf it does not work. But if I tell resolv.conf to use 
127.0.0.1 it works. And at the same time that it fails for the local machine, a 
laptop connected to it on the local net, getting an appropriate address, say 
192.168.159.123, can use the name server.


Now, the obvious thing to do is use "nameserver localhost". But, to ease my mind 
I'd REALLY like to know why "dig" behaves dramatically differently with 
resolv.conf pointing to 192.168.159.3 when I use "dig www.foobar.com" (fails) or 
"dig @192.168.159.3 www.foobar.com" (succeeds). This is a headscratcher for me. 
Um, of course "dig @localhost www.foobar.com" also works in all cases.
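
One plausible way to narrow that down (pure diagnosis; the addresses are the ones from the message, and www.foobar.com is obviously a stand-in):

```shell
# Which addresses is named actually listening on for DNS?
ss -lnup 'sport = :53'
# Does a direct query to the LAN address work from this host?
dig @192.168.159.3 www.foobar.com +short
# What does the stub resolver think it should use?
cat /etc/resolv.conf
# If @192.168.159.3 succeeds but plain "dig www.foobar.com" fails,
# suspect an iptables rule on the non-loopback path, or named's
# allow-query/listen-on configuration treating the two paths differently.
```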



Now, if you have a
telnetd running (but --- why would you do something so stupid?) opening
the firewall is suicidal.


Yes.  But there might also be misconfigured inetd/xinetd services, http
servers providing information which should be restricted, databases,
various management interfaces, etc, etc.  Hence the security assessment
is a practical exercise before doing such a stunt.


Now you've touched on a singular benefit of the "old tried and true ways". (Hey, 
I had no internet at all when I was young. I had to resort to ham radio for my 
techie fixes. BBS systems came in my 30s. So I have a right to be a fossil. 
{^_-}) A quick look at "/etc/rc.d/rc5.d" very quickly showed you what was usable 
and what was not. Then another look into /etc/xinetd.d perhaps opening a few 
files completed the survey. With systemctl the search is not quite so obvious or 
easy to read. With the old way a K in front means it's not running and an S in 
front means it is. A 5 second sweep covers the biggest set of possible holes 
before you open iptables wide.
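
For what it's worth, the old 5-second S/K sweep has rough systemd equivalents using stock systemctl invocations:

```shell
# What will start at boot (the old S-links):
systemctl list-unit-files --type=service --state=enabled
# What is running right now:
systemctl list-units --type=service --state=running
# And what is actually listening on the network:
ss -lntup
```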



Running 'ss -lntup' gives you a pretty good idea what the consequences
might be.  Of course if the box is routing traffic to other subnets,
that may also increase the risk.


Nice, but it's not a quick parse with a (sigh) traditional 80 character wide 
terminal window. (Yes, 132 is the other traditional width. It's er annoying to 
use as it eats screen real estate too er broadly.)



Blanket disabling both of them at once, permanently is stupid beyond
belief, IMAO.


Yes!


Some people seem to be inherently suicidal. {^_-}


OTOH the people who got in so easily might figure it's a
honeypot or something and walk away. But that's a stretch.


You're probably right, especially if the purpose of an attack was to get
insight.  If it was a drive-by-bot just wanting to install a spam-bot,
crawler or similar slave node, such details can just as well be ignored
on a target system.


Isn't it a civic duty to participate in huge DDoS attacks on "The Man?"

{O,o}   Yes, Joanne is feeling a little punchy this "morning". (Just got up.)


Re: Adventures with 7.2

2017-01-07 Thread jdow

On 2017-01-07 19:30, David Sommerseth wrote:

On 06/01/17 23:56, Konstantin Olchanski wrote:

On Sat, Dec 31, 2016 at 04:28:04PM -0800, jdow wrote:

... new 7.2 machine.
... SELinux issues.


You *must* disable SELinux in CentOS-7.


*That* deserves the prize for the worst advice in 2017.  With '*must*',
that is just a way too strong advice which I hope nobody really
considers strongly.  It's as equally bad as saying "disable and flush
iptables because it blocks connections to your host".

I honestly hoped we had moved much further forward than this ...


I have turned SELinux permissive to try to track down problems. It removes one 
giant unknown variable from the picture. I seldom leave it that way very long.


And in a fairly clean (no servers) install iptables opened wide for brief 
periods can be considered "safe enough". Now, if you have a telnetd running (but 
--- why would you do something so stupid?) opening the firewall is suicidal.


Blanket disabling both of them at once, permanently is stupid beyond belief, 
IMAO. OTOH the people who got in so easily might figure it's a honeypot or 
something and walk away. But that's a stretch.


{^_-}


Re: Adventures with 7.2

2017-01-04 Thread jdow

On 2017-01-04 09:01, David Sommerseth wrote:

On 04/01/17 05:54, jdow wrote:



Off the top of my head, dnsdomainname, domainname, nisdomainname,
ypdomainname are symlinks to hostname; halt, poweroff, reboot,
shutdown are symlinks to systemctl; view is a symlink to vi; etc.


I hadn't dug that far. But, again, it makes sense in a weird sort of
way. It is really an ultimate reuse of code, right? {^_-}


In essence, yes.  IMO, there is often a misconception of the Unix
philosophy.  There is a good thought behind "a single program does a
single task, and does it well".  But that does not mean that each single
program must be a standalone binary, built from a standalone source code.


Besides, "one thing" is about as vague as the politicians' offers of "hope" or 
"change". Each one is modulo the speaker's definition of whatever is being 
discussed. If it is "add an iptables entry" then you "need" multiple files. If 
it means "manages iptables well" then you are encouraged to use one file. But, 
in the dark corners I inhabited decades ago that meant "ls" was neither a bunch 
of files, one for each way ls can be used, nor a single file whose behavior is 
based on input parameter 0. It meant we had "-" options. That feels more 
"wholesome", if you can catch my drift. If you go looking for "ls", for whatever 
reason - binary patch maybe, it is right there staring you in the face. With 
"foobar" that behaves differently when you call it "foo", "bar", or "baz" 
looking for the command "bar" could become tedious. But, then, why should one go 
looking for it? Erm, why should anybody ever need more than 64k? (About where 
computers started becoming human usable. Let's hear it for the HP2100S, my real 
birth machine. We shall ignore the IBM 7090 from my college days, PLEASE.)


There might be a parable in the above. Clarity at the expense of efficiency is 
bad. Efficiency at the expense of Clarity is bad. Finding a good compromise is 
best. And even that's not easy.


{^_^}


Re: Adventures with 7.2

2017-01-03 Thread jdow

On 2017-01-03 14:31, Tom H wrote:

On Tue, Jan 3, 2017 at 3:11 PM, jdow <j...@earthlink.net> wrote:

On 2017-01-03 09:56, David Sommerseth wrote:


Remember that firewalld provides an API over D-Bus for dynamic
firewall updates, so this is kind of to "seal" the configuration
without breaking any component depending on manipulating the firewall
as the system is running. NetworkManager and libvirt are two
components which adjusts the firewall on-the-fly, depending on which
network you're connected to or which VMs have been started, and so on.


That still leaves me mumbling and led me down a midget rabbit hole.
The "iptables" command is 777 root root system_u:object_r:bin_t:s0;
but, that's OK. It's a link - to xtables-multi, which is rwxr-xr-x.
root root system_u:object_r:iptables_exec_t:s0. Waitaminit says I to
meself. (or is it me to iself? Whatever) Let's give that a try. The
results are reassuring:
===8<---
[jdow@whereever ~]$ xtables-multi iptables -L -v
iptables v1.4.21: can't initialize iptables table `filter': Permission
denied (you must be root)
Perhaps iptables or your kernel needs to be upgraded.
===8<---
I guess the ancient philosophy of one task one command is passé and
now a monstrosity like xtables-multi finds itself masquerading as
iptables and about a dozen other things.


/usr/sbin/iptables-restore
/usr/sbin/iptables-save
/usr/sbin/iptables
/usr/sbin/ip6tables-restore
/usr/sbin/ip6tables-save
/usr/sbin/ip6tables


Notice the command I issued. I started, of course, with something like 
xtables-multi -L -v as a first approximation. It coughed up a list of some 14 
different things it can be called as. That was not reassuring since I called it 
as a user rather than root. Then I tried the command listed. It failed but the 
message was informative enough. I, of course, escalated to prepending "sudo " to 
the command, giving my password as usual, and admired the results.



are symlinks to "/usr/sbin/xtables-multi" because it's a multi-call
binary, like busybox.


I was simply bemused that the old UNIX philosophy of one small task one command 
with results chained into the next command ad nauseum has finally been 
discovered to be silly and furthermore good sense is catching on past busybox. 
(I have the same attitude about "goto". (And despite dogma even at UniSys many 
see Dijkstra's pontification on the subject as flawed er and harmful. I live 
with one such.) {^_-}
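
The xtables-multi/busybox trick is easy to demonstrate: one program dispatches on the name it was invoked under (argv[0], i.e. $0 in shell). A toy sketch, written to /tmp so it runs anywhere; the names "hello" and "bye" are of course made up:

```shell
# One script, several personalities, selected by the name it is run as.
cat > /tmp/multi <<'EOF'
#!/bin/sh
case "$(basename "$0")" in
    hello) echo "hello mode" ;;
    bye)   echo "bye mode"   ;;
    *)     echo "usage: invoke me as hello or bye" ;;
esac
EOF
chmod +x /tmp/multi

# The "applets" are just symlinks back to the one binary:
ln -sf /tmp/multi /tmp/hello
ln -sf /tmp/multi /tmp/bye

/tmp/hello    # -> hello mode
/tmp/bye      # -> bye mode
```

This is exactly why `ls -l /usr/sbin/iptables` shows a link to xtables-multi: the kernel passes the link's name as $0, and the binary picks its behaviour from that.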



There are others.

Off the top of my head, dnsdomainname, domainname, nisdomainname,
ypdomainname are symlinks to hostname; halt, poweroff, reboot,
shutdown are symlinks to systemctl; view is a symlink to vi; etc.


I hadn't dug that far. But, again, it makes sense in a weird sort of way. It is 
really an ultimate reuse of code, right? {^_-}



It's normal for "iptables" to fail if you call it as jdow; but if you
have polkit installed, "pkexec iptables" might work (depending on your
polkit policies; "sudo ..." and "su -c ..." will work if you're
authorized).


But of course. I've been using sudo for a very long time. (I don't remember if I 
did it with the real SVR4 machine I had. But certainly I've been using it from 
the first RH 5 or so - not RHEL or Fedora, Hurricane if my memory works tonight.)


If sudo didn't work I'd have made a scene about it, probably.

{^_^}


Re: Adventures with 7.2

2017-01-03 Thread jdow

On 2017-01-03 09:56, David Sommerseth wrote:

On 03/01/17 05:59, jdow wrote:

...

The lockdown-whitelist thing is more or less a "but why?" component.


lockdown in firewalld jargon is more like "which component/user may
modify the firewall if the firewall configuration has been locked down".

When firewalld is set into locked-down mode, no-one is able to
manipulate the firewall.  Otherwise, anyone granted admin privileges (as
defined in the PolicyKit policy for the firewalld component) may
manipulate the firewall.  So it tightens the access, regardless of whether
PolicyKit grants access.  The default policy has uid=0,
firewall-config, NetworkManager and libvirtd in this whitelist.

Remember that firewalld provides an API over D-Bus for dynamic firewall
updates, so this is kind of to "seal" the configuration without breaking
any component depending on manipulating the firewall as the system is
running.  NetworkManager and libvirt are two components which adjusts
the firewall on-the-fly, depending on which network you're connected to
or which VMs have been started, and so on.


That still leaves me mumbling and led me down a midget rabbit hole. The 
"iptables" command is 777 root root system_u:object_r:bin_t:s0; but, that's OK. 
It's a link - to xtables-multi, which is rwxr-xr-x. root root 
system_u:object_r:iptables_exec_t:s0. Waitaminit says I to meself. (or is it me 
to iself? Whatever) Let's give that a try. The results are reassuring:

===8<---
[jdow@whereever ~]$ xtables-multi iptables -L -v
iptables v1.4.21: can't initialize iptables table `filter': Permission denied 
(you must be root)

Perhaps iptables or your kernel needs to be upgraded.
===8<---
I guess the ancient philosophy of one task one command is passé and now a 
monstrosity like xtables-multi finds itself masquerading as iptables and about a 
dozen other things. I have a skew sense of humor so I find that amusing. I see 
it's been that way for some years now even in 6.x. I just never had cause to 
look for this. Somebody liked the inetd model later called xinetd. (Speaking of 
which, I notice systemd seems to have subsumed even that functionality. It's 
good from a central management standpoint. It's yet another unclear puzzle when 
initially trying to wrap one's mind around systemd.)


Preserving the lockdown file for something that is removed from the system, 
though, seems to be silly to my fevered brain.


Gee, my rant has led to some good learning and a slightly fascinating rabbit 
hole, as well as the frustrating systemd mile deep rabbit hole.


{^_^}


Re: Adventures with 7.2

2017-01-02 Thread jdow

On 2017-01-02 18:40, Tom H wrote:

On Mon, Jan 2, 2017 at 5:06 PM, jdow <j...@earthlink.net> wrote:

...

/usr/lib/python2.7/site-packages/initial_setup/tui/spokes/eula.py: remove
failed: No such file or directory
  Erasing: anaconda-core-21.48.22.56-1.sl7.1.x86_64
4/7
  Erasing: anaconda-tui-21.48.22.56-1.sl7.1.x86_64
5/7
  Erasing: firewall-config-0.4.3.2-8.el7.noarch
6/7
  Erasing: firewalld-0.4.3.2-8.el7.noarch
7/7
warning: /etc/firewalld/lockdown-whitelist.xml saved as
/etc/firewalld/lockdown-whitelist.xml.rpmsave

That smells amusing and puzzling but not dangerous to me.


So it's not fully or properly installed, :) and :(


...

One wonders about the missing EULA info.

The lockdown-whitelist thing is more or less a "but why?" component.

{^_-}


Re: Adventures with 7.2

2017-01-02 Thread jdow

On 2017-01-02 18:37, Tom H wrote:

On Mon, Jan 2, 2017 at 4:58 PM, jdow <j...@earthlink.net> wrote:

On 2017-01-02 07:26, Tom H wrote:

On Mon, Jan 2, 2017 at 4:12 AM, jdow <j...@earthlink.net> wrote:


The SYS5 stuff in 6.x and prior lacked flexibility, to be sure. It was
simple enough that figuring out what was going on became easy. And
where the documentation failed the workarounds were not all that
difficult. But, then,the first 'ix I played with was one of the first
commercial renditions of SVR4 - on the Amiga. So over about 25-ish
years I'd learned it. I don't HAVE another 25 years to learn something
with documentation that requires extreme google-fu to find. (I did
manage to find a page that described /etc/sysconfig contents, FINALLY.
I've been looking for that off and /on for 5 years or more.



/usr/share/doc/initscripts-9.49.37/sysconfig.txt


"man --index" is needed methinks.


Pointers to that list in the documentation for RHEL tuned systemd
would be a good thing.)


It's not a systemd directory - and it's a directory that systemd
upstream dislikes.


It is intimately involved with systemd as used on RHEL based systems.
Cross references can tie it all together in a nice logical package
with bows on it.


Indeed but it's provided by the initscripts package so it should be
the latter's responsibility to provide, for example, "man sysconfig"
but it never has. AFAIR, neither upstart in EL6 nor sysvinit in
previous EL versions referred to "/etc/sysconfig/" in their
documentation (just as they don't refer to "/etc/default/" on
Debian/Ubuntu).


There are signs that somebody goes through the man pages and RHEL documentation 
to tune aspects of the documentation, chiefly file locations, for the RHEL 
environment. That process probably should include initscripts documentation in 
the references where appropriate. It would be a nice value added component.


{^_^}


Re: Adventures with 7.2

2017-01-02 Thread jdow

On 2017-01-02 08:47, Tom H wrote:

On Mon, Jan 2, 2017 at 6:42 AM, jdow <j...@earthlink.net> wrote:

On 2017-01-02 01:35, David Sommerseth wrote:


Anaconda is the installer. To be honest, I've never understood why
anaconda needs to be installed on a final production server. The
production boxes I have where firewalld is uninstalled also have no
anaconda installed. And these boxes do get their proper updates
through yum regardless.


It's not involved with system maintenance past the initial
installation? I had the impression it was intimately involved with the
system's overall configuration including updates. But, I must admit
that it's not something I have dug into in any serious way. Thanks for
the suggestion. I'll keep this option in mind. This is good to know.


I don't have anaconda installed on any RHEL or RHEL clone system - and
never have.


So I erased it with this fascinating transaction report excerpt:
Running transaction
  Erasing: initial-setup-gui-0.3.9.30-1.el7.x86_64  1/7
warning: file 
/usr/lib/python2.7/site-packages/initial_setup/gui/spokes/eula.pyo: remove 
failed: No such file or directory
warning: file 
/usr/lib/python2.7/site-packages/initial_setup/gui/spokes/eula.pyc: remove 
failed: No such file or directory
warning: file /usr/lib/python2.7/site-packages/initial_setup/gui/spokes/eula.py: 
remove failed: No such file or directory
warning: file 
/usr/lib/python2.7/site-packages/initial_setup/gui/spokes/eula.glade: remove 
failed: No such file or directory

  Erasing: anaconda-gui-21.48.22.56-1.sl7.1.x86_64  2/7
  Erasing: initial-setup-0.3.9.30-1.el7.x86_64  3/7
warning: file 
/usr/lib/python2.7/site-packages/initial_setup/tui/spokes/eula.pyo: remove 
failed: No such file or directory
warning: file 
/usr/lib/python2.7/site-packages/initial_setup/tui/spokes/eula.pyc: remove 
failed: No such file or directory
warning: file /usr/lib/python2.7/site-packages/initial_setup/tui/spokes/eula.py: 
remove failed: No such file or directory

  Erasing: anaconda-core-21.48.22.56-1.sl7.1.x86_64 4/7
  Erasing: anaconda-tui-21.48.22.56-1.sl7.1.x86_64  5/7
  Erasing: firewall-config-0.4.3.2-8.el7.noarch 6/7
  Erasing: firewalld-0.4.3.2-8.el7.noarch   7/7
warning: /etc/firewalld/lockdown-whitelist.xml saved as 
/etc/firewalld/lockdown-whitelist.xml.rpmsave


That smells amusing and puzzling but not dangerous to me.

Thanks for the information.

{^_^}


Re: Adventures with 7.2

2017-01-02 Thread jdow

On 2017-01-02 07:26, Tom H wrote:

On Mon, Jan 2, 2017 at 4:12 AM, jdow <j...@earthlink.net> wrote:



The SYS5 stuff in 6.x and prior lacked flexibility, to be sure. It was
simple enough that figuring out what was going on became easy. And
where the documentation failed the workarounds were not all that
difficult. But, then, the first 'ix I played with was one of the first
commercial renditions of SVR4 - on the Amiga. So over about 25-ish
years I'd learned it. I don't HAVE another 25 years to learn something
with documentation that requires extreme google-fu to find. (I did
manage to find a page that described /etc/sysconfig contents, FINALLY.
I've been looking for that off and on for 5 years or more.


/usr/share/doc/initscripts-9.49.37/sysconfig.txt


"man --index" is needed methinks.


Pointers to that list in the documentation for RHEL tuned systemd
would be a good thing.)


It's not a systemd directory - and it's a directory that systemd
upstream dislikes.


It is intimately involved with systemd as used on RHEL based systems. Cross 
references can tie it all together in a nice logical package with bows on it.


{^_-}


Re: Adventures with 7.2

2017-01-02 Thread jdow

On 2017-01-02 06:16, Tom H wrote:

On Mon, Jan 2, 2017 at 4:03 AM, jdow <j...@earthlink.net> wrote:


systemctl unmask firewalld failed.


I run "systemctl disable firewalld" before running "systemctl unmask
firewalld" because otherwise the logs have the "firewalld is masked"
messages.


Thought I did it in that order. But I'm not sure. (stop, disable, unmask.) I 
believe I also noticed that with it stopped I'd suddenly find a mishmash of my 
firewall and firewalld's firewall. Firewalld had started back up. So that might 
have left me in a "smash it over the head" frame of mind. I've discovered with 
the projects I worked on that if there is a command like mask it would stop, 
disable, then put a very heavy stone coffin around it. (I'd drive the stake 
through it, last, only if "uninstall" was indicated.) So it's likely I could 
have made a rash assumption somewhere. I need to remember that's doing multiple 
"things" in one command which is not the 'ix way, I suppose.



Did you run "systemctl enable firewalld" after running "systemctl
unmask firewalld"? Having to re-install firewalld doesn't make sense.


Indeed, it didn't make sense to me either. I got the same error message that was 
flopping around in the logs.


{^_^}


Re: Adventures with 7.2

2017-01-02 Thread jdow

On 2017-01-02 01:35, David Sommerseth wrote:

On 02/01/17 10:24, jdow wrote:

On 2017-01-01 14:24, David Sommerseth wrote:

On 01/01/17 01:28, jdow wrote:


Obviously I really do NOT want firewalld to run. This is, apparently,
usually done using "systemctl mask firewalld". Unfortunately this leaves
divots all over the logs about systemctl not being able to bring up
/dev/null er firewalld. That seems "unfriendly" to say the least. (And
it seems there is no "friendly" way to undo the "systemctl mask"
command, at least for firewalld.)


# yum erase firewalld
# yum install iptables-services


Did the second half. The first half had a large collection of
dependencies that would be removed as well, little things like
"anaconda-core". Erm, that might not be a good thing. I'm not interested
in throwing the system into the dark ages. I just want to use some
iptables features that it firewalld doesn't seem to be able to approach.


I've discussed several details with the firewalld developers (reasonable
group of people, btw) and they do acknowledge that firewalld do have
some challenges, also in regards to logging.

The approach I've recommended have been deployed on two production systems.

Btw, the official documentation provides this guidance:
<https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Security_Guide/sec-Using_Firewalls.html#sec-Using_iptables>


I found that page. I've had one indication that keeping firewalld disabled may 
be a chore through a reboot. It's on my todo list to solve.



But remove Anaconda? K!


Anaconda is the installer.  To be honest, I've never understood why
anaconda needs to be installed on a final production server.  The
production boxes I have where firewalld is uninstalled also have no
anaconda installed.  And these boxes do get their proper updates through
yum regardless.


It's not involved with system maintenance past the initial installation? I had 
the impression it was intimately involved with the system's overall 
configuration including updates. But, I must admit that it's not something I 
have dug into in any serious way. Thanks for the suggestion. I'll keep this 
option in mind. This is good to know.


{^_^}


Re: Adventures with 7.2

2017-01-02 Thread jdow

On 2017-01-01 14:24, David Sommerseth wrote:

On 01/01/17 01:28, jdow wrote:


Obviously I really do NOT want firewalld to run. This is, apparently,
usually done using "systemctl mask firewalld". Unfortunately this leaves
divots all over the logs about systemctl not being able to bring up
/dev/null er firewalld. That seems "unfriendly" to say the least. (And
it seems there is no "friendly" way to undo the "systemctl mask"
command, at least for firewalld.)


# yum erase firewalld
# yum install iptables-services


Did the second half. The first half had a large collection of dependencies that 
would be removed as well, little things like "anaconda-core". Erm, that might 
not be a good thing. I'm not interested in throwing the system into the dark 
ages. I just want to use some iptables features that it firewalld doesn't seem 
to be able to approach. It's gui doesn't even seem to have a way to turn SOME 
logging on leaving most logging off. That's rude. (I find I am even eschewing 
the iptables-services tools. I'm using the dhclient script capability to reset 
the firewall when a new address is assigned. The actual firewall design right 
now closely resembles that produced by firewalld. It was useful for a template 
for retuning the firewall's features.)
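The dhclient-hook approach mentioned in the parenthetical can be sketched roughly as follows. The reason codes are the standard ones dhclient-script passes to exit hooks; the echo here is a stand-in for whatever script actually rebuilds the firewall.

```shell
# Rough sketch of an /etc/dhcp/dhclient-exit-hooks handler: dhclient sets
# $reason and $new_ip_address before sourcing the hook, so a handler can
# rebuild the firewall whenever a new lease is bound.
handle_dhclient_event() {
  reason=$1; new_ip=$2
  case "$reason" in
    BOUND|RENEW|REBIND|REBOOT)
      # stand-in for invoking the real firewall-reset script
      echo "rebuilding firewall for $new_ip"
      ;;
    *)
      : ;;                        # other reasons (EXPIRE, FAIL, ...): no-op
  esac
}
handle_dhclient_event BOUND 192.0.2.10
```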


This little stanza is one I've been using since my first iptables setup:
$IPT -t filter -A IN_public_deny -p tcp --dport ssh --syn -m recent --name 
ssh_attack --rcheck --seconds 90 --hitcount 1 -j LOG --log-prefix 'SSH2 REJECT: 
' --log-level info
$IPT -t filter -A IN_public_deny -p tcp --dport ssh --syn -m recent --name 
ssh_attack --rcheck --seconds 90 --hitcount 1 -j REJECT --reject-with tcp-reset
$IPT -t filter -A IN_public_deny -p tcp --dport ssh --syn -m recent --name 
ssh_attack --set


A given site cannot feed a SYN packet to the ssh port more often than once every 
90 seconds. It makes password guessing rather time consuming. Firewalld 
documentation was not clear how I'd add that into its firewall via the gui, 
especially if it is conditional to a tiny configuration file in /etc to disable 
all ingress ports or open them up and how to open them up. Open when traveling. 
Close when home.
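Wrapped as a function so it can be previewed, the stanza above looks like this. The INPUT default chain and the dry-run wrapper are my assumptions; the original used firewalld's IN_public_deny chain.

```shell
# The three-rule "recent" throttle: log and reject a second SSH SYN from
# the same source within 90 seconds, otherwise record the source address.
add_ssh_throttle() {
  IPT=${1:-iptables}              # pass "echo iptables" for a dry run
  CHAIN=${2:-INPUT}
  $IPT -t filter -A "$CHAIN" -p tcp --dport ssh --syn -m recent \
       --name ssh_attack --rcheck --seconds 90 --hitcount 1 \
       -j LOG --log-prefix 'SSH2 REJECT: ' --log-level info
  $IPT -t filter -A "$CHAIN" -p tcp --dport ssh --syn -m recent \
       --name ssh_attack --rcheck --seconds 90 --hitcount 1 \
       -j REJECT --reject-with tcp-reset
  $IPT -t filter -A "$CHAIN" -p tcp --dport ssh --syn -m recent \
       --name ssh_attack --set
}
add_ssh_throttle "echo iptables"  # dry run: print the three rules
```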


But remove Anaconda? K!

{o.o}


Re: Adventures with 7.2

2017-01-02 Thread jdow

On 2017-01-02 01:00, Nico Kadel-Garcia wrote:

On Sun, Jan 1, 2017 at 5:19 PM, David Sommerseth
<sl+us...@lists.topphemmelig.net> wrote:

On 01/01/17 01:28, jdow wrote:

...

I don't mind flame wars of controversial topics, but let it at least
start with proper facts ... In my experience, systemd is far better
documented than any other init system I've used over the last 15+ years
or so.


daemontools was much lighter, much cleaner, and well documented. It
never took off due to some unfortunate copyright policies by its
author. It's too late to switch now, because of the integration of
more modern logging with systemd.


The SYS5 stuff in 6.x and prior lacked flexibility, to be sure. It was simple 
enough that figuring out what was going on became easy. And where the 
documentation failed the workarounds were not all that difficult. But, then, the 
first 'ix I played with was one of the first commercial renditions of SVR4 - on 
the Amiga. So over about 25-ish years I'd learned it. I don't HAVE another 25 
years to learn something with documentation that requires extreme google-fu to 
find. (I did manage to find a page that described /etc/sysconfig contents, 
FINALLY. I've been looking for that off and on for 5 years or more. {^_-} 
Pointers to that list in the documentation for RHEL tuned systemd would be a 
good thing.)


I can see the improvement systemd gives over the old stuff. But, I reserve the 
right to bitch when the learning curve is made artificially steep. (Then I get 
down to business and worry the problems to a solution.)


{^_-}


Adventures with 7.2

2016-12-31 Thread jdow
I've just built a fresh new 7.2 machine. It's been an SELINUX adventure, so far. 
Here are some observations for the developers of various parts of the system. 
Mostly they are documentation and SELinux issues. Documentation would have 
helped deal with the SELinux issues. There is one serious deficiency noted below 
in subversion. (And for the standard whine, systemd sucks dead bunnies through 
garden hoses - chiefly due to nearly totally absent documentation.)


Among the problems I found is that subversion does NOT work out of the box. Its 
install does not create the repository location declared in its /etc/sysconfig 
file. Creating it "almost" makes it work. I fussed around following 
troubleshooter suggestions which all failed. Ultimately I bit the bullet and 
tried a system wide relabel. (I am not sure if I've broken dovecot yet. I think 
not.) That worked. Two things would have helped. First setting up /var/svn with 
the right contextual rules right off the bat. Second is, IMAO, a serious 
deficiency in svnserve. It apparently cannot be run as a user (subversion) 
rather than root. On the 6.x system, subversion ran in a subversion account with 
limited permissions. It's a belt-and-suspenders thing. SELinux has been a severe 
PITA during this build. Its troubleshooting advice is usually useless.
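For what it's worth, a targeted alternative to a system-wide relabel would be to register a context for /var/svn and restore it. A sketch, assuming the stock policy's svnserve_content_t type is the right one (worth verifying against your policy before applying):

```shell
# Persistently label a subversion repository tree instead of relabeling
# the whole filesystem. Default is a dry run that prints the commands.
label_svn_repo() {
  run=${1:-echo}                  # pass "" to actually apply (needs root)
  # register the file-context rule (svnserve_content_t is an assumption)
  $run semanage fcontext -a -t svnserve_content_t '/var/svn(/.*)?'
  # rewrite the on-disk labels to match the rule just added
  $run restorecon -Rv /var/svn
}
label_svn_repo
```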


On to the real reason for writing this. I have some idiosyncratic iptables rules 
I rather like. They involved "recent". I setup some open ports, notably ssh, for 
limited access. A given address can attempt a SYN packet to the machine no more 
frequently than once every 90 seconds. Imagine trying to "guess" an even "fair 
at best" password under those conditions without lighting a nuclear level flare 
in the logs.


Obviously I really do NOT want firewalld to run. This is, apparently, usually 
done using "systemctl mask firewalld". Unfortunately this leaves divots all over 
the logs about systemctl not being able to bring up /dev/null er firewalld. That 
seems "unfriendly" to say the least. (And it seems there is no "friendly" way to 
undo the "systemctl mask" command, at least for firewalld.


And I mentioned dovecot above. It was an adventure to get my usual 
fetchmail->sendmail->procmail->spamc->mbox file working. I managed to find the 
way to do this with postfix. (procmail? Um, historical - dates from before 
modern alternatives and is usefully aggressive about forcing the email through 
spamassassin.) Then I needed to setup Dovecot to suck the email back off the 
system. This was a PITA on 6.x. It is a ROYAL SELinux PITA on 7.2. Some better 
documentation would be most helpful, if anybody can be motivated to commit the 
writing effort.


{^_^}


Re: EPEL

2016-12-22 Thread jdow

On 2016-12-22 02:00, Stephan Wiesand wrote:

On 22 Dec 2016, at 09:10, jdow <j...@earthlink.net> wrote:

# yum install epel-release

Transaction test succeeded
Running transaction
Traceback (most recent call last):
 File "/bin/yum", line 29, in 
   yummain.user_main(sys.argv[1:], exit_code=True)
 File "/usr/share/yum-cli/yummain.py", line 365, in user_main
   errcode = main(args)
 File "/usr/share/yum-cli/yummain.py", line 271, in main
   return_code = base.doTransaction()
 File "/usr/share/yum-cli/cli.py", line 773, in doTransaction
   resultobject = self.runTransaction(cb=cb)
 File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 1736, in 
runTransaction
   if self.fssnap.available and ((self.conf.fssnap_automatic_pre or
 File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 1126, in 
   fssnap = property(fget=lambda self: self._getFSsnap(),
 File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 1062, in 
_getFSsnap
   devices=devices)
 File "/usr/lib/python2.7/site-packages/yum/fssnapshots.py", line 158, in 
__init__
   self._vgnames = _list_vg_names() if self.available else []
 File "/usr/lib/python2.7/site-packages/yum/fssnapshots.py", line 56, in 
_list_vg_names
   names = lvm.listVgNames()
lvm.LibLVMError: (0, '')


I saw similar errors on one SL7.2 system after applying the security updates 
from 7.3. Any lvmetad segfaults in your logs?


Oh interesting. There SHOULD be no LVM involved. It's a brandy spanky new 
install on btrfs. For the use I plan for this machine LVM is a wasteful extra 
layer of nonsense. So, of course, there they are.


===8<---
/var/log/messages:Dec 22 08:03:57 thursday lvmetad: WARNING: Ignoring 
unsupported value for cmd.
/var/log/messages:Dec 22 08:03:57 thursday kernel: lvmetad[6113]: segfault at 
5 ip 7f8c92cfa528 sp 7f8c90c70728 error 4 in 
libc-2.17.so[7f8c92bc7000+1b6000]
/var/log/messages:Dec 22 08:03:57 thursday systemd: lvm2-lvmetad.service: main 
process exited, code=killed, status=11/SEGV
/var/log/messages:Dec 22 08:03:57 thursday systemd: Unit lvm2-lvmetad.service 
entered failed state.

/var/log/messages:Dec 22 08:03:57 thursday systemd: lvm2-lvmetad.service failed.
/var/log/messages:Dec 22 08:03:57 thursday systemd: lvm2-lvmetad.service holdoff 
time over, scheduling restart.

===8<---

This raises the questions "Whyinell are they there and why is 
lvm2-lvmetad.service being enabled?" And "Howinell do I give it the proper kiss 
of death?"



Eventually (I think after a couple of restarts of lvm2-lvmetad), the problem downgraded 
itself to a message "lvmetad: WARNING: Ignoring unsupported value for cmd." 
being logged whenever yum installs or updates anything and lvmetad is running. Still a 
bit scary, but yum works. And most of the time starting lvmetad seems to fail anyway...




Can't get EPEL to install.

{^_^}


She whines, "But there should not be any lvm at all involved? What is it 
starting?"

{o.o}


EPEL

2016-12-22 Thread jdow

# yum install epel-release

Transaction test succeeded
Running transaction
Traceback (most recent call last):
  File "/bin/yum", line 29, in 
yummain.user_main(sys.argv[1:], exit_code=True)
  File "/usr/share/yum-cli/yummain.py", line 365, in user_main
errcode = main(args)
  File "/usr/share/yum-cli/yummain.py", line 271, in main
return_code = base.doTransaction()
  File "/usr/share/yum-cli/cli.py", line 773, in doTransaction
resultobject = self.runTransaction(cb=cb)
  File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 1736, in 
runTransaction

if self.fssnap.available and ((self.conf.fssnap_automatic_pre or
  File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 1126, in 

fssnap = property(fget=lambda self: self._getFSsnap(),
  File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 1062, in 
_getFSsnap
devices=devices)
  File "/usr/lib/python2.7/site-packages/yum/fssnapshots.py", line 158, in 
__init__
self._vgnames = _list_vg_names() if self.available else []
  File "/usr/lib/python2.7/site-packages/yum/fssnapshots.py", line 56, in 
_list_vg_names

names = lvm.listVgNames()
lvm.LibLVMError: (0, '')


Can't get EPEL to install.

{^_^}


Re: unsubscribe

2016-10-06 Thread jdow

On 2016-10-06 12:20, Arnaldo Olegario wrote:




This doesn't work?
List-Unsubscribe: 



{o.o}


Re: Regarding latest Linux level 3 rootkits

2016-09-07 Thread jdow
Is the part of the filesystem which handles links in kernel space or user space? 
That would make a great deal of difference as this rootkit tool evolves. At the 
moment it appears it is "contagious", meaning Linux installs can become 
infected. Since the files that are infected are shared system library files it 
suggests at least one route into root level privileges to make the install. 
After that you have a root level user who does not appear in conventional checks 
on /etc/passwd and directory listings. There is nothing that says this tool 
cannot evolve more self-protection capabilities and still remain hiding in user 
space, where it would normally not be expected to hang out. This would get 
around some of the new secure BIOS protections that exist. It can keep the 
original files around and let checksum utilities read it instead of the modified 
files.


I figure with all malware the best thing to do is not catch it, use "safe 
computing" with condoms like SELinux enabled and screwed down even tighter than 
RHEL out of the box. I'm mostly musing about how it could be made more likely 
for "the usual tools" to discover the hacking. (And as noted I am bemused 
because this resembles several pieces of old Amiga malware.)


{^_^}   Joanne

On 2016-09-07 19:03, Steven J. Yellin wrote:

Are rpm and the check sum tools statically linked?  If not, hiding copies of
them might not help if libraries have been compromised.  But busybox is
statically linked, and it looks like it can be easily used to replace most
commands used to check security without going to the trouble of pulling files
from it.  For example, 'ln -s busybox md5sum' allows use of busybox's md5sum and
'ln -s busybox vi' allows use of its vi. See
https://busybox.net/FAQ.html#getting_started .
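The symlink trick described above scales up to a small private toolkit directory; a sketch (the /sbin/busybox path is an assumption; point BB at wherever a statically linked busybox actually lives):

```shell
# Build a throwaway directory of busybox applet symlinks for integrity
# checks. ln -s does not require the target to exist, so this runs even
# where busybox is installed elsewhere; fix BB before using the tools.
BB=${BB:-/sbin/busybox}
toolkit=$(mktemp -d)
for applet in md5sum sha256sum ls ps vi; do
  ln -s "$BB" "$toolkit/$applet"
done
ls "$toolkit"
```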

Steven Yellin

On Wed, 7 Sep 2016, prmari...@gmail.com wrote:


Jdow,

Why are you looking at that for rootkit prevention? It's a very old-fashioned
approach; I would use RPM's verify command or one of the many filesystem
checksum tools available for that instead. Either one can tell you very easily
if any critical binaries or libraries have been compromised, and there are
even tools built around them to do it on a network-wide level. Furthermore, if
you really want to make your systems resistant to rootkits, a read-only mount
of / and /usr is still your best bet; even Red Hat products like RHEV use that
method on appliances.
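For context, rpm -V prints one line per differing file: a nine-column flag string (S size, 5 digest, T mtime, and so on), an optional attribute marker, then the path. A sketch of filtering that output for digest changes, using fabricated sample lines:

```shell
# Pick out files whose digest ("5" flag) differs from the packaged state.
# The sample input below is made up for illustration; in practice you
# would pipe in the output of `rpm -Va`.
sample='S.5....T.  c /etc/ssh/sshd_config
.......T.    /usr/bin/ls
missing     /usr/bin/du'
echo "$sample" | awk '$1 ~ /5/ {print "digest changed:", $NF}'
```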


  Original Message
From: jdow
Sent: Wednesday, September 7, 2016 19:09
To: scientific-linux-users@fnal.gov
Subject: Re: Re: Regarding latest Linux level 3 rootkits

Thanks Vladimir,

I suppose I could pull the necessary files from busybox as a means of keeping a
more generic Linux system in security trim. This might be a useful tool set to
suggest upstream. A statically linked less would allow a quick check for the
hidden user. A statically linked chkrootkit would find the bad file size for the
affected glibc libraries.

{^_^} Joanne

On 2016-09-07 03:36, Vladimir Mosgalin wrote:

Hi jdow!
On 2016.09.06 at 23:15:04 -0700, jdow wrote next:


Is there any source for a VI, VIM, or even EMACS that has all libraries
compiled into it statically? That would make monitoring for the rootkit much
easier. The same could be said for utilities such as chkrootkit. With
compiled in static libraries these level three (user space) rootkits can't
edit the results you get, as easily. (Any file system components in user
space would also have to be statically linked.)


Busybox would work. It's usually built statically (either that, or it's
easy to make that kind of build) and includes vi clone. Very poor man's
vi, just like other busybox utilities, but nevertheless. Current version
supports some neat stuff like autoindent and undo.





Re: Re: Regarding latest Linux level 3 rootkits

2016-09-07 Thread jdow

Very simple.

This rootkit works in user space by co-opting glibc. The modified libraries 
tell the user whatever the rootkit wants them to report. It creates a new user, 
and that user is an exception to the /etc/passwd (etc.) rewrites performed on 
the way from the disk to the application. The files involved do not show up in 
file requests.


Since the rootkit is in user space, files that are statically linked against 
all the user-space libraries will be unaffected. Note that rpm is not such a 
file, so it gets told what the modified glibc tells it. How can it detect the 
rootkit with that happening?
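One concrete check along these lines: ldd reports "not a dynamic executable" for statically linked binaries, so it is easy to see which tools would dodge a glibc-level hook. (Caveat: ldd itself is a script that drives the dynamic loader, so on a compromised box you would want to run this from known-clean media.)

```shell
# Classify a binary as static or dynamic from ldd's output.
is_static() {
  if ldd "$1" 2>&1 | grep -q "not a dynamic executable"; then
    echo "$1: static"
  else
    echo "$1: dynamic"
  fi
}
is_static /bin/sh
```

On a stock EL system /bin/sh is dynamically linked, so the rpm problem described above applies to it too.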


I simply mentioned chkrootkit as an example of a tool that would not be able to 
do its job properly unless it was statically linked. I was not advocating it. I 
just want to see if there is a real way around this infection.


The readonly mount option is not suitable in the face of updates. And SELinux, 
if it is full on, should prevent this problem. However, I have yet to get a 
fully configured machine that does not throw SELinux problems. Fully configured 
in this context means local DNS service, local DHCP service for dozens of 
devices, ntp service, samba running with user directories available, 
spamassassin running, very restricted smtp, and little else. By the time I get 
through Samba, DNS, and DHCP I am getting occasional SELinux problems with 
rather spurious seeming troubleshooting reports. It appears, at the moment, 7.2 
may be a little nicer in this regard than 6.8 and 6.8 is nicer than earlier 
versions of 6.


One thing that kills me on this "new" rootkit is that I first ran across it in 
the late 80s on Commodore Amigas. And it is STILL not completely addressed, or 
so it appears.


{o.o}   Joanne


On 2016-09-07 18:22, prmari...@gmail.com wrote:

Jdow,

Why are you looking at that for rootkit prevention?
It's a very old-fashioned approach; I would use RPM's verify command or one 
of the many filesystem checksum tools available for that instead.
Either one can tell you very easily if any critical binaries or libraries have 
been compromised, and there are even tools built around them to do it on 
a network-wide level.
Furthermore, if you really want to make your systems resistant to rootkits, a 
read-only mount of / and /usr is still your best bet; even Red Hat products 
like RHEV use that method on appliances.


  Original Message
From: jdow
Sent: Wednesday, September 7, 2016 19:09
To: scientific-linux-users@fnal.gov
Subject: Re: Re: Regarding latest Linux level 3 rootkits

Thanks Vladimir,

I suppose I could pull the necessary files from busybox as a means of keeping a
more generic Linux system in security trim. This might be a useful tool set to
suggest upstream. A statically linked less would allow a quick check for the
hidden user. A statically linked chkrootkit would find the bad file size for the
affected glibc libraries.

{^_^} Joanne

On 2016-09-07 03:36, Vladimir Mosgalin wrote:

Hi jdow!
On 2016.09.06 at 23:15:04 -0700, jdow wrote next:


Is there any source for a VI, VIM, or even EMACS that has all libraries
compiled into it statically? That would make monitoring for the rootkit much
easier. The same could be said for utilities such as chkrootkit. With
compiled in static libraries these level three (user space) rootkits can't
edit the results you get, as easily. (Any file system components in user
space would also have to be statically linked.)


Busybox would work. It's usually built statically (either that, or it's
easy to make that kind of build) and includes vi clone. Very poor man's
vi, just like other busybox utilities, but nevertheless. Current version
supports some neat stuff like autoindent and undo.





Re: Re: Regarding latest Linux level 3 rootkits

2016-09-07 Thread jdow

Thanks Vladimir,

I suppose I could pull the necessary files from busybox as a means of keeping a 
more generic Linux system in security trim. This might be a useful tool set to 
suggest upstream. A statically linked less would allow a quick check for the 
hidden user. A statically linked chkrootkit would find the bad file size for the 
affected glibc libraries.


{^_^}   Joanne

On 2016-09-07 03:36, Vladimir Mosgalin wrote:

Hi jdow!

 On 2016.09.06 at 23:15:04 -0700, jdow wrote next:


Is there any source for a VI, VIM, or even EMACS that has all libraries
compiled into it statically? That would make monitoring for the rootkit much
easier. The same could be said for utilities such as chkrootkit. With
compiled in static libraries these level three (user space) rootkits can't
edit the results you get, as easily. (Any file system components in user
space would also have to be statically linked.)


Busybox would work. It's usually built statically (either that, or it's
easy to make that kind of build) and includes vi clone. Very poor man's
vi, just like other busybox utilities, but nevertheless. Current version
supports some neat stuff like autoindent and undo.



Regarding latest Linux level 3 rootkits

2016-09-07 Thread jdow
Is there any source for a VI, VIM, or even EMACS that has all libraries compiled 
into it statically? That would make monitoring for the rootkit much easier. The 
same could be said for utilities such as chkrootkit. With compiled in static 
libraries these level three (user space) rootkits can't edit the results you 
get, as easily. (Any file system components in user space would also have to be 
statically linked.)


{^_^}   Joanne


Re: SL7 CUPS/SELinux problem trying to install Brother HL-3150CDN printer driver

2016-06-29 Thread jdow

Nonsense.

Haven't met a distro yet that has SELinux correctly set up from the get-go. It 
still doesn't completely like samba on my 6.6 install, for example. I had to 
make some "fake" changes to get something else rather pedestrian to work. (It's 
in the archives some time back with projected fix pushed all the way out to 6.7.)


{^_^}

On 2016-06-29 03:12, David Sommerseth wrote:

On 29/06/16 10:00, Bill Maidment wrote:

My final attempt was successful, sort of.
I switched SElinux to enabled and rebooted, then the install worked OK.
Then I had to use a live CD to be able to boot, changed SElinux to disabled, 
and reboot again.
Then I had to use lpoptions to set the default parameters, as the CUPS GUI tool 
refused to change anything.
Phew. What a tortuous route.
Back to sleep now.




Let this be an example why NOT to disable SELinux.  SELinux has been (if
my memory serves me right) available since Fedora 6 (released in 2006)
and RHEL *4*!  I believe it was turned on by default in Fedora 8 and
RHEL 5.  And in RHEL 6 you could no longer disable SELinux at install time.

SELinux is not the obstacle it once was over a decade ago.  So if you
have issues when it is enabled, learn to use the proper tools to debug
and fix it correctly.  (audit2why, audit2allow, semanage, restorecon,
etc, etc)

Disabling SELinux is in 2016 *not* a solution and can barely be
considered a workaround.

Refusing to use, accept, and learn SELinux will serve you no good in
the long run.

Seriously, I've been running a various amount of Fedora, RHEL/SL/CentOS
installations and versions over the last 8-9 years.  In SL7 SELinux has
not bitten me much at all (only one issue with logrotate on servers
running Zimbra Collaboration Suite, that's all).   I have the last 6-7
years never needed to disable SELinux to accomplish my tasks.  Yes, I've
put systems into permissive modes to see if SELinux was to blame, but
mostly that was not the issue.

So if you are badly hit by SELinux troubles, you need to look into if
you or the software you use are doing the right things.




--
kind regards,

David Sommerseth




-Original message-

From:Bill Maidment 
Sent: Wednesday 29th June 2016 16:34
To: Akemi Yagi ; SL Users 
Subject: RE: SL7 CUPS/SELinux problem trying to install Brother HL-3150CDN 
printer driver

Well I've heard back from Brother and they suggest that my SELinux setup has a 
problem. They recommended that I do
semodule -vR
This gave me exactly the same error messages. Then I did semodule -vB which 
worked OK, but repeating semodule -vR still gives

[root@ferguson src]# semodule -vB
Committing changes:
Ok: transaction number 0.
[root@ferguson src]# semodule -vR
SELinux:  Could not downgrade policy file 
/etc/selinux/targeted/policy/policy.29, searching for an older version.
SELinux:  Could not open policy file <= /etc/selinux/targeted/policy/policy.29: 
 No such file or directory
/sbin/load_policy:  Can't load policy:  No such file or directory
libsemanage.semanage_reload_policy: load_policy returned error code 2. (No such 
file or directory).
[root@ferguson src]#

This is happening on two different SL 7.2 machines with SElinux installed but 
disabled.

I even tried uninstalling selinux* but that got me into deeper trouble.

[root@ferguson src]# rpm -qv selinux-policy
selinux-policy-3.13.1-60.el7_2.7.noarch

Is there an issue with this version of selinux???

Cheers
Bill

-Original message-

From:Bill Maidment 
Sent: Saturday 25th June 2016 17:26
To: Akemi Yagi ; SL Users 
Subject: RE: SL7 CUPS/SELinux problem trying to install Brother HL-3150CDN 
printer driver

Thanks for the suggestion Akemi.
Unfortunately, it made no difference.
I'm awaiting comment from Brother, but I suspect they will say change to Ubuntu 
:-(
Cheers
Bill

-Original message-

From:Akemi Yagi 
Sent: Saturday 25th June 2016 1:10
To: SL Users 
Subject: Re: SL7 CUPS/SELinux problem trying to install Brother HL-3150CDN 
printer driver

On Fri, Jun 24, 2016 at 2:33 AM, Bill Maidment  wrote:

Has anyone any suggestions how to get a Brother HL-3150CDN printer driver 
installed on SL7.
I have been trying to install using the Brother supplied installation script, 
which worked OK on SL6.

With SL7 I get error messages such as:
SELinux:  Could not downgrade policy file 
/etc/selinux/targeted/policy/policy.29, searching for an older version.
SELinux:  Could not open policy file <= /etc/selinux/targeted/policy/policy.29: 
 No such file or directory /sbin/load_policy:  Can't load policy:  No such file or 
directory

The file in question does exist, but I have selinux disabled anyway.

SL7 is using cups version 1.6 whereas SL6 uses cups version 1.4. Is that an 
issue?
I guess the Brother script is a bit out of date as it was created in 2012.


Re: GPT?

2016-06-14 Thread jdow

On 2016-06-14 11:16, ToddAndMargo wrote:
...


Hi David,

I should have said GPT partition with HFS+ format.

I am basically looking for a shared format that will
accommodate large file transfers.

And OSx doesn't support NTFS write.  The fuse is a paid
service and I would not want to install it on every Apple
I see, even if there is a 14 day trial.  NTFS-3G is supported,
but I can't find a download for MAC for my life.  (They point
you to the paid version.)

I don't have an Apple either. Apple does not allow you to
use a virtual machine of OSx, unless the base system is
Apple hardware (not going to happen).

I am seeing a lot more Apple computers out there since
the advent of Frankenstein and Sons (Windows 8 and Nein,
oops, 10).  I personally find OSx to be excruciatingly weird,
but I need to eat, so I will work on anything folks are willing
to pay for.  (I make a lot of money off M$'s endless quality,
security, and reliability issues.)

I prefer to work on Linux, but most of my customer's are small
business and they need their Windows to run their apps.
I have a few Linux server and workstations out there.
And, my shop is Linux.  (I just don't have the patience to
fight with Windows on my own system after fighting with
it all day on my customer's machines.)

-T


Have you considered SAMBA? With modern gigabit networking that's not a huge 
speed penalty over sneakernet with disks.


{^_^}   Joanne


Re: Is KVM now abandoned in EL 6?

2015-05-20 Thread jdow

On 2015-05-20 09:28, ToddAndMargo wrote:

On 05/20/2015 05:20 AM, Nico Kadel-Garcia wrote:

On Wed, May 20, 2015 at 3:49 AM, John Lauro john.la...@covenanteyes.com wrote:

Why not run wine32 in a SL6 VM in SL7?  Personally, IMHO, why run anything
critical that you can't run elsewhere (such as a brower, ssh client, etc) not
in a VM?


Or, for pity's sake, if you need to run Windows based software, run it
in a Windows VM. Needing Wine32 on an SL6 or RHEL6 host suggests that
you have a Windows 32 requirement.



Hi Nico,

Sure can tell you haven't had to do that.  Wait till you have VMs
open and five or six other programs open at the same time and get
lost trying to figure out what tool bar goes to what program.
Not to mention sizing and placing the buggers on the screen.
And then the @$#ing clipboard stops working between them!

If Coherence ever gets going in Spice, a lot of that will go away.
Probably not the Clipboard issues though.

-T

The Clipboard has an anxiety detector built in.  It
checks your fingers for signs of stress and develops
a crash commensurate with how badly you need to use the
thing.


Have you tried VirtualBox's seamless mode?
{^_^}


Re: need SSD RAID controller advice

2015-04-13 Thread jdow
The 3Ware RAID cards I have vastly outstrip the motherboard's built-in Intel RAID 
implementations for a RAID 5 setup. (I don't consider RAID 1 to be economically 
sensible for most uses.) A four disk RAID 5 SSD configuration can be 
breathtakingly fast, too.


{^_^}   Joanne
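For scale on the four-disk RAID 5 point above: a RAID 5 array yields (n-1) disks of usable space, with one disk's worth consumed by distributed parity. A quick sketch of the arithmetic (the 480 GB drive size is an invented example, not from this thread):

```shell
# RAID 5 across n disks yields (n-1) disks of usable space;
# one disk's worth is consumed by distributed parity.
DISKS=4
DISK_GB=480   # hypothetical SSD size, not from the thread
USABLE_GB=$(( (DISKS - 1) * DISK_GB ))
RAW_GB=$(( DISKS * DISK_GB ))
echo "RAID 5: ${USABLE_GB} GB usable of ${RAW_GB} GB raw"
```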

On 2015-04-13 05:10, James M. Pulver wrote:

I would point out that I'm not sure I've ever really seen the benefit of Real 
Raid except  for the vendor making more money. The only place I've used it is in 
iSCSI boxes that run everything in firmware.

On all computers / servers, I've always used MDADM on Linux and ZFS on FreeNAS. 
Both have been excellent for my intended use, though FreeNAS is only at home 
for ~3 concurrent users, so take that whole thing with a grain of salt. Neither 
has lost data due to power outages or drive failures.

--
James Pulver
CLASSE Computer Group
Cornell University


-Original Message-
From: owner-scientific-linux-us...@listserv.fnal.gov 
[mailto:owner-scientific-linux-us...@listserv.fnal.gov] On Behalf Of Vladimir 
Mosgalin
Sent: Monday, April 13, 2015 5:30 AM
To: scientific-linux-users@fnal.gov
Subject: Re: need SSD RAID controller advice

Hi ToddAndMargo!

  On 2015.04.12 at 17:35:04 -0700, ToddAndMargo wrote next:


On 04/12/2015 10:54 AM, Nico Kadel-Garcia wrote:

and I*loved*  3Ware


Me too.  LSI gobbled them up.


Well, consolidation is often a good thing.

You can still buy the best performing 3Ware 9750 (2011 model!) from LSI; they are 
selling them for those who are fine with 6 Gbps speeds. Don't think they'll be 
upgrading it to 12 Gbps (not that many people are interested in real 
RAID-supporting cards at such speeds; these are mostly for connecting external 
storage...)



Re: Resolved: NVidia drops off bus

2015-02-05 Thread jdow

On 2015-02-05 05:08, Joseph Areeda wrote:

On 01/29/2015 08:41 AM, Phil Wyett wrote:

On Thu, 2015-01-29 at 08:30 -0800, Joseph Areeda wrote:

Hi All,

I've been getting random crashes of X on my development workstation with
messages like:

on console: cpu #X stuck in [X:...]
.xsession-errors contain things like: gnome-session: Fatal IO error 11
(Resource temporarily unavailable) on X server :0.
/var/log/messages has NVRM: GPU at :01:00.0 has fallen off the bus.

Most things I've been able to find suggest a driver vs. kernel problem
so I updated to the latest driver from NVidia (346.35) with no luck.

Have others seen this?  Any hints?  Could it be the NVidia card failing?

Thanks,
Joe

Hi,

I have seen this once before with a persons system. The issue in that
case was a bad physical connection. Removing the card, cleaning the
connections and reseating the card corrected the issue.

Regards

Phil


This took a while because I had to finish a project and couldn't afford random
reboots, but I took Phil's advice, used the professional connector cleaner
(AKA pencil eraser) on the connectors, and reseated the card.  I have now gone 4 days
without an incident whereas I was forced to reboot 3 or 4 times per day.

Thanks Phil!

Joe


Just a note about pink erasers - don't. They contain a lot of sulfur. That 
corrodes contacts rather rapidly. So the fix may be lamentably temporary and a 
re-fix may be impossible after the second or third time.


{o.o}   Joanne


Re: Scientific Linux 7 BETA

2014-12-15 Thread jdow

On 2014-12-14 09:29, Yasha Karant wrote:

On 12/14/2014 03:03 AM, jdow wrote:

On 2014-12-13 09:16, Yasha Karant wrote:

On 12/13/2014 09:02 AM, Santu Roy wrote:

wine 1.7 does not work in SL7, how can i run  windows file in SL7

Assuming you have a license for MS Windows if one is required and enforced by
your nation state (it is in the USA, EU, etc.), a very effective alternative is
to load Oracle VirtualBox (licensed for free), load MS Windows under VirtualBox,
and then install whatever MS Windows applications you need within MS Windows
under VirtualBox.  Unlike Wine that has some issues with executing various MS
Windows applications, if the application runs in the release of MS Windows you
have, it will run under MS Windows under Virtual Box.


I'd change that last line to most likely it will run ...


Yasha Karant


There are some cheap fun little toys/tools for people who are into things RF
within the 25MHz through 1800 MHz frequency range called DVB-T dongles. The
ones with RTL2832U chips can be used as samplers for Software Defined Radios.

If you can get one to work on a VirtualBox virtualized machine at its usual
full sample rate capability, 2.4 Msps, please let me know. In my experience
the virtualized USB bus is vastly too slow to work.

This holds with Win7 hosts and SL6.6 hosts and VirtualBox as of about this
time last year when I last played with it.

So I'd say most likely without the implied assurance of 100% such as your
turn of phrase suggests.

{^_-}   Joanne
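For context on why the virtualized USB bus can't keep up: the RTL2832U delivers 8-bit I/Q pairs, i.e. 2 bytes per complex sample, so the sustained throughput required at 2.4 Msps is easy to work out. The USB 2.0 bulk figure in the comment is an approximate rule of thumb, not from this thread:

```shell
# RTL2832U output: 8-bit I + 8-bit Q = 2 bytes per complex sample
SAMPLE_RATE=2400000          # 2.4 Msps
BYTES_PER_SAMPLE=2
REQUIRED_BPS=$(( SAMPLE_RATE * BYTES_PER_SAMPLE ))
echo "Sustained USB throughput needed: ${REQUIRED_BPS} bytes/s (~4.8 MB/s)"
# Practical USB 2.0 bulk throughput is roughly 35-40 MB/s on native hardware;
# a virtualized bus running at a fraction of that can fall below ~4.8 MB/s,
# at which point the dongle drops samples.
```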


Agreed -- I was ignoring hardware and/or driver limitations.  There are cases in
which peripheral devices physically will connect to a Linux machine but for
which the VirtualBox peripheral hardware does not exist or function, and thus
will not work.  This is the case for almost all virtual environments unless the
host can release the hardware fully to the guest (including physical machine
buses -- typically not allowed).
With this caveat, all typical software applications for MS Windows do work under
the same MS Windows under VirtualBox -- a very different situation from Wine or
CrossOver.

If the peripheral can be mounted via Linux as a file system component that is in
a shared folder with MS Win under VirtualBox, the item typically can be read and
written.  This may not work for those situations under which MS Windows
requires/demands full direct access and control.

Yasha


As it happens I can almost run the SDR applications on virtual machines with the 
Windows 7 host. The limitation is chiefly the speed of the virtualized USB bus. 
It's under 1/10th speed for bulk transfers. This includes SL6 VMs.


To be honest I forget the details on the SL6 host with the various VMs. I 
vaguely remember it was grim at best. I'll have to try again to find out.


{^_^}


Re: Scientific Linux 7 BETA

2014-12-14 Thread jdow

On 2014-12-13 09:16, Yasha Karant wrote:

On 12/13/2014 09:02 AM, Santu Roy wrote:

wine 1.7 does not work in SL7, how can i run  windows file in SL7

Assuming you have a license for MS Windows if one is required and enforced by
your nation state (it is in the USA, EU, etc.), a very effective alternative is
to load Oracle VirtualBox (licensed for free), load MS Windows under VirtualBox,
and then install whatever MS Windows applications you need within MS Windows
under VirtualBox.  Unlike Wine that has some issues with executing various MS
Windows applications, if the application runs in the release of MS Windows you
have, it will run under MS Windows under Virtual Box.


I'd change that last line to most likely it will run ...


Yasha Karant


There are some cheap fun little toys/tools for people who are into things RF 
within the 25MHz through 1800 MHz frequency range called DVB-T dongles. The ones 
with RTL2832U chips can be used as samplers for Software Defined Radios.


If you can get one to work on a VirtualBox virtualized machine at its usual full 
sample rate capability, 2.4 Msps, please let me know. In my experience the 
virtualized USB bus is vastly too slow to work.


This holds with Win7 hosts and SL6.6 hosts and VirtualBox as of about this time 
last year when I last played with it.


So I'd say most likely without the implied assurance of 100% such as your turn 
of phrase suggests.


{^_-}   Joanne


Heads up to the el6 guys

2014-11-19 Thread jdow

Latest patches won't install:
Error: Package: hdf5-mpich-1.8.5.patch1-9.el6.x86_64 (epel)
   Requires: libmpichf90.so.12()(64bit)
Error: Package: hdf5-mpich-1.8.5.patch1-9.el6.x86_64 (epel)
   Requires: mpich
Error: Package: hdf5-mpich-1.8.5.patch1-9.el6.x86_64 (epel)
   Requires: libmpich.so.12()(64bit)

This traces back to the octave install I have for doing some RF MODEM work.

{^_^}   Joanne
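One way to keep yum updating everything else while this dependency is broken is an exclude line; a sketch, assuming the package name from the error above (drop it once EPEL ships a fixed build):

```ini
# /etc/yum.conf (or the [epel] section of the repo file) -- hold back the
# broken package until its mpich dependencies land
[main]
exclude=hdf5-mpich*
```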


Re: SL6 incompatible update of X11

2014-11-06 Thread jdow

On 2014-11-06 11:28, Chris Schanzle wrote:

On 11/05/2014 06:36 PM, Konstantin Olchanski wrote:

A few days ago an updated linux kernel and updated xorg packages were
pushed into the SL6 updates. These updates are automatically installed
by the default yum configuration of SL6.5.

Unfortunately these updates are incompatible with pre-installed X11 video
drivers for NVIDIA (GeForce 210) and AMD/ATI (AMD E-350/E-450 and socket AM1
on-board video) from ELREPO.

These are the ELREPO kmod-fglrx and kmod-nvidia packages.

So all computers with these video cards promptly broke.

[snip]

Many (7) years ago I gave up on outsourcing my NVIDIA proprietary driver 
needs and wrote my own init.d script that installs/removes/kmod-updates the nvidia driver 
using nvidia's .run files.  Yes, there are a few things to be concerned about with some xorg 
updates stepping on files, but all easily fixable and under *my* control and has served me 
well.  Trying various versions is fairly simple (telinit 3, uninstall, move new version to 
/root/, install, telinit 5).  My update processes are heavily scripted as well.  And yes, I 
test some common configs before releasing to 150+ workstations of various configurations.  
It is not easy.

Admittedly, I'm a control freak with good scripting skills...a dangerous 
combination.


The trouble I have here is that distros with which I am familiar all keep around 
the last three kernel updates in case something goes wrong with the new one. 
Breaking X11's backwards compatibility breaks this safety mechanism. Is XOrg 
REALLY worth using when you periodically lose your capability of stepping 
backwards if something else breaks? The XOrg people probably have been 
approached over this unfortunate behavior and don't pay any attention to 
complaints. The best complaint would be for a major distro to quit using XOrg 
over this issue.


{^_^}   Joanne


Re: Final Solution to Chinese Break in

2014-10-05 Thread jdow
If credit card fraud was involved you might check with the Secret Service. At 
least in the mid 80s credit card fraud was investigated by the Secret Service. 
I've no freaking idea why; but, during an investigation about some online 
stalking featuring me as one of the victims credit card fraud was involved. I 
was interviewed about it by a Secret Service agent and an FBI agent 
concurrently. Both were just a touch out of their depth. Sigi Kluger was 
ultimately prosecuted for the CC fraud, not the stalking, not the death threats, 
not the bodily harm (chop me up and feed me to his dog) but CC fraud that was 
small amounts over a full year. VERY few women stayed with McGraw-Hill's BIX 
or Byte Information eXchange through that year. I was too damn stubborn to be 
run out. But - damn - CC fraud was the Secret Service's domain? Washington DC 
was hopelessly screwed up even then.


{^_^}   Joanne

On 2014-10-04 21:26, Paul Robert Marino wrote:

You may be right; Interpol's economic crimes division might be the
right way to go. I've never considered that before.


On Sat, Oct 4, 2014 at 8:56 PM, Nico Kadel-Garcia nka...@gmail.com wrote:

On Sat, Oct 4, 2014 at 9:26 PM, Bill Maidment b...@maidment.me wrote:

There used to be an organisation called Interpol to deal with international 
crime. I haven't heard anything recent about them; do they still exist?

Regards
Bill Maidment


Interpol still exists; they've a web site at
https://www.interpol.int/. Since we've gone way off this mailing
list's announced purpose, I'll stop here.




Re: about realtime system

2014-08-24 Thread jdow

The stock exchange could remove most of the problem, meaning high
frequency trades, by placing a purely random 0 to 1 second latency
on all incoming data and all outgoing data. The high frequency trading
reads to me as just another means of skimming now that they're not
allowed to round down fractional pennies and pocket the change. It's
time to give mere mortals some practical access to the exchanges. And
this interest in microsecond clocks would simply vanish from the
exchanges.
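The purely random 0 to 1 second holding delay proposed above can be sketched as follows. This is a toy illustration, not exchange-grade code; the modulo on bash's RANDOM introduces a slight bias that doesn't matter for the idea:

```shell
# Toy sketch of the proposed exchange-side jitter: hold each message
# for a uniformly random 0-1 second before passing it on.
jitter_ms=$(( RANDOM % 1000 ))          # bash RANDOM, reduced to 0..999 ms
sleep "$(printf '0.%03d' "$jitter_ms")" # fractional sleep (GNU coreutils)
echo "forwarded after ${jitter_ms} ms"
```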

On a different point, the word I can find is that the free version of
VMWare does not support this high latency sensitivity setting.

{o.o}   Joanne, Just sayin'

On 2014-08-24 13:42, Paul Robert Marino wrote:

John
reread the first and third paragraph of my previous email.
Trading firms care about low latency but never cared about the
accuracy millisecond of the clocks. Sock exchanges on the other hand
want predictable latency not necessarily low latency but absolutely
require millisecond and if possible microsecond accurate clocks.
The reason for this is trading funds are worried about getting quotes,
bids, executions, etc. to the exchanges gateways as fast as possible;
however the exchange has to be able to prove to both the member firms
and the regulators that everyone is treated fairly once they put an
order into the gateway.

While yes, VMware says that under very specific configurations, with 3/4ths
of their features disabled and special network cards which offload
their virtual switch's work, VMware can handle low latency, they
still can not handle clocks accurate to the millisecond and certainly
can't handle it to the microsecond.
Furthermore I find this article highly suspect because it's talking
about reducing the latency overhead in their virtualization stack to
the point where it becomes less noticeable, not necessarily true low
latency. This makes it acceptable for small hedge funds which have
staff and equipment budget constraints, but not really good enough for
the big boys if they are smart. I would advise you to be careful with
VMware's technical marketing docs and blogs in this area because the
sales people will tell you anything to get you to buy it; their high
level engineers will actually tell you the truth if they know what you
are using it for. In true real time and high precision situations
their senior engineers will tell the sales department to wave you off
of using their product if your employer is a big enough name for
the real senior engineers (not sales engineers) to look at your
design prior to sale.

If you dive deep into that article it says you need
1) very specific hardware support specifically network cards
2) you need to turn off vmotion and all of the other fault tolerance 
features
3) you need to have very specific features turned on
4) It makes a strongly implied suggestion, but doesn't state flat out,
that for best performance you need to align the number of cores you
assign to the layout of the cache in your CPU so you don't get
multiple VMs sharing CPU cache even if that means assigning more
VCPUs than you need.
5) a separate physical network card for each VM
6) disable memory over committing (Same as KVM)
7) disable CPU over committing (Same as KVM)

Even with all of that you still do not have a 10 microsecond
latency jitter in the network stack, and the accuracy of the clocks
is still not guaranteed to the millisecond. In no place in that
article or the blog is clock accuracy mentioned at all.
All they are talking about is better response time latency, not real precision.




On Sun, Aug 24, 2014 at 3:46 PM, John Lauro john.la...@covenanteyes.com wrote:

The recommendation changed with 5.5.
http://blogs.vmware.com/performance/2013/09/deploying-extremely-latency-sensitive-applications-in-vmware-vsphere-5-5.html

... However, performance demands of latency-sensitive applications with very 
low latency requirements such as distributed in-memory data management, stock 
trading, and high-performance computing have long been thought to be incompatible 
with virtualization.
vSphere 5.5 includes a new feature for setting latency sensitivity in order to 
support virtual machines with strict latency requirements.


- Original Message -

From: Paul Robert Marino prmari...@gmail.com
To: Nico Kadel-Garcia nka...@gmail.com
Cc: John Lauro john.la...@covenanteyes.com, Brandon Vincent 
brandon.vinc...@asu.edu, Lee Kin
llwa...@gmail.com, SCIENTIFIC-LINUX-USERS@FNAL.GOV 
scientific-linux-users@fnal.gov
Sent: Sunday, August 24, 2014 3:27:39 PM
Subject: Re: about realtime system

...

By the way one of those stock exchanges is where the VMware engineers
told us never to use their product in production. In fact we had huge
problems with VMware in our development environments because some of
our applications would actually detect the clock instability in the
VMware clocks and would shut themselves down rather than have
inaccurate audit logs. as a result we found we had 

Re: Clarity on current status of Scientific Linux build

2014-07-01 Thread jdow

On 2014-07-01 08:16, Patrick J. LoPresti wrote:

On Tue, Jul 1, 2014 at 6:17 AM, Lamar Owen lo...@pari.edu wrote:

...


That's part of the
reason the CentOS team has changed the statement '100% binary compatible' to
'functionally compatible' since they do mean different things, but the
latter is more indicative of reality than the former ever has.


The *goal* of CentOS used to be binary compatibility, even if it was
never 100% achieved. Since the acquisition by Red Hat, that is no
longer even the goal, for obvious reasons.


Pat, this is nominally impossible with modern compilers, as I discovered a
long time ago. The compilers I am used to embed a time stamp in the various
files compiled. This time stamp will differ between machines used to compile
the code. So when you perform the binary compare you will hit errors. With the
compiler on which I discovered this first (Microsoft C7), the embedded dates
were not even the same length. So once I hit that difference the binary
compare was broken for the rest of the file. If GNU C has this same feature,
prattling about binary identity and so forth is nonsense.
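The effect is easy to demonstrate: two images of identical code that differ only in an embedded build timestamp fail a byte-for-byte compare. The timestamps and file contents below are invented purely for illustration:

```shell
# Identical "code" bytes, different embedded build timestamps.
printf 'CODE[built 2014-07-01 08:00:00]CODE' > build_a.bin
printf 'CODE[built 2014-07-01 09:30:00]CODE' > build_b.bin
if cmp -s build_a.bin build_b.bin; then
  result=identical
else
  result=differ          # the compare trips on the timestamp bytes
fi
echo "binary compare: $result"
```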

...


  - Pat



It is good that some of these issues are being aired. But some of the issues
raised smokescreen other more pertinent and substantive issues such as RHEL
traceability. (In the hardware world I infested long ago all test equipment
had to be calibrated in a manner traceable to NBS (now NIST). That issue was
solved.) This RHEL traceability issue is significant as is traceability back
to creators for non-RHEL code replacements for RHEL proprietary software and
for any add-on software provided by sources in the path from SL back to RHEL.
Fussing about binary compatibility is needless obfuscation.

The issue at its base is who do you trust? It helps to know who is asking
you to trust when you make that decision.

{^_^}   Joanne


Re: DELL server and hw Raid problem with latest 6.5 kernel

2014-06-23 Thread jdow

On 2014/06/23 11:34, Andras Horvath wrote:

On Tue, 17 Jun 2014 07:52:32 -0700
Patrick J. LoPresti lopre...@gmail.com wrote:


On Tue, Jun 17, 2014 at 1:25 AM, Andras Horvath m...@log69.com wrote:


I've been having a problem with the latest kernel version for some time now. The previous kernel 
version boots fine and everything works just well, but the latest kernel 
(v2.6.32-431.17.1.el6.x86_64) cannot boot, and Grub says something like trying to 
reach blocks outside of partition; that's all the message there is and boot 
hangs.


This sounds to me like your kernel has some blocks that lie beyond
what GRUB can read during boot (using the system BIOS). It worked
before because you got lucky; any time you reinstalled a kernel, you
were running the risk of some of the new boot image's blocks lying
outside the bootable range.

If this is correct, checking the inode number will not help, because
the problem is the blocks inside the file itself, not the inode.

Possible fixes, in increasing order of difficulty:

Copy the kernel and initrd images until you get lucky again
See if your system BIOS has a setting related to booting from large disks
Reinstall grub with the --force-lba option
Reinstall the system, using an EFI boot partition (have fun)
Reinstall the system, creating a small (500M) /boot partition as the
first partition on the drive


That last is what I have done for years. I tried not doing so for my
last install on a large RAID -- figuring this is the 21st century --
and my system failed to boot. I reinstalled with a small /boot
partition and now it consistently works fine across dozens of
reinstalls. I do not know whether this is due to a buggy RAID BIOS or
something else, and I do not care...

Good luck.

  - Pat



An update on my issue. The latest kernel update (2.6.32-431.20.3.el6.x86_64) 
seems to have fixed my problem. The system boots fine without any grub or other 
error message.

I hope it stays like this. Cheers and thanks for the help!


Andras


I recently installed a dual boot SL6 on top of the XP32 I had on the machine.
It's out in the last 64 megabytes on a two terabyte drive. It boots very well
going through ntldr to grubldr to Linux. That hints that even grub should be
quite happy at least in the first two terabytes of any disk or array of disks.
(This machine is a fakeraid to boot - something I am in the process of
changing if the 3Ware card I bought on E-Bay is going to work for me. Um, the
machine has PCI-X slots and is otherwise overpowered. So there's no sense
throwing it away when I can recycle it to run virtual machines for testing.)

{^_^}   Joanne


Re: 224.0.0.251

2014-05-23 Thread jdow

On 2014/05/23 14:25, ToddAndMargo wrote:

On 05/23/2014 02:08 PM, Alan Bartlett wrote:

On 23 May 2014 22:02, ToddAndMargo toddandma...@zoho.com wrote:

Hi All,

Is there some special meaning (like 127.0.0.1.) to
the following IP address?

 224.0.0.251

Many thanks,
-T


It is an IP Multicast address.

host 224.0.0.251

will tell you a bit more.

Alan.



Hi Alan,

$ host 224.0.0.251
Host 251.0.0.224.in-addr.arpa. not found: 3(NXDOMAIN)

Not sure what I am suppose to find.

This is why I ask (VLC's doing):

kernel: Vlan-out Everything Else IN= OUT=eth0.5 SRC=192.168.254.10
DST=224.0.0.251 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=0 DF PROTO=UDP SPT=5353
DPT=5353 LEN=36

eth0.5 is a virtual Ethernet too, not hooked to the Internet.

And port 3535 UDP?

$ grep -i 3535 /etc/services
ms-la   3535/tcp# MS-LA
ms-la   3535/udp# MS-LA


Thank you for the help,
-T


Lysdexic are we? It's 5353, which seems to be an alternate DNS port.

{^_^}   Joanne me too be lysdexic
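For anyone puzzling over why 224.0.0.251 behaves differently from a normal host address: any IPv4 address with a first octet of 224 through 239 falls in the multicast range 224.0.0.0/4, and 224.0.0.251 with UDP port 5353 is specifically the multicast DNS (mDNS) rendezvous point. A quick sketch of the range check:

```shell
# IPv4 multicast is 224.0.0.0/4, i.e. first octet 224-239.
addr=224.0.0.251
first_octet=${addr%%.*}                 # strip everything after the first dot
if [ "$first_octet" -ge 224 ] && [ "$first_octet" -le 239 ]; then
  kind=multicast        # 224.0.0.251 on 5353/udp is the mDNS group
else
  kind=unicast
fi
echo "$addr is $kind"
```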


Re: 224.0.0.251

2014-05-23 Thread jdow

On 2014/05/23 18:38, ToddAndMargo wrote:

On 05/23/2014 06:17 PM, jdow wrote:

On 2014/05/23 14:25, ToddAndMargo wrote:

On 05/23/2014 02:08 PM, Alan Bartlett wrote:

On 23 May 2014 22:02, ToddAndMargo toddandma...@zoho.com wrote:

Hi All,

Is there some special meaning (like 127.0.0.1.) to
the following IP address?

 224.0.0.251

Many thanks,
-T


It is an IP Multicast address.

host 224.0.0.251

will tell you a bit more.

Alan.



Hi Alan,

$ host 224.0.0.251
Host 251.0.0.224.in-addr.arpa. not found: 3(NXDOMAIN)

Not sure what I am suppose to find.

This is why I ask (VLC's doing):

kernel: Vlan-out Everything Else IN= OUT=eth0.5 SRC=192.168.254.10
DST=224.0.0.251 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=0 DF PROTO=UDP
SPT=5353
DPT=5353 LEN=36

eth0.5 is a virtual Ethernet too, not hooked to the Internet.

And port 3535 UDP?

$ grep -i 3535 /etc/services
ms-la   3535/tcp# MS-LA
ms-la   3535/udp# MS-LA


Thank you for the help,
-T


Lysdexic are we? It's 5353, which seems to be an alternate DNS port.

{^_^}   Joanne me too be lysdexic



Hi Joanne,

$ grep -i 5353 /etc/services
mdns5353/tcp# Multicast DNS
mdns5353/udp# Multicast DNS

Makes more sense.

Interesting.  M$'s list of official ports does not
list it:

http://support.microsoft.com/kb/832017#method67

I have been working on a PCI (credit card security) probe
of a customer's site all day.  I keep mixing up my ports
with theirs.   (When you probe the entire network, you get your own
IP as well as theirs and everyone else's on the network.)

Apparently lysdexic is catching.  Now the point I really
wanted to make was, was, was...  Oh phooey, I forgot.  :')

Who are you again?  Are you still in the Navy?   :-D

-T


Naw, even the Swiss navy would not accept me. But I have done some Swiss
Navy projects in my time. (Local slang for personal. If they now have a
navy I'll have to use something like Nigerian Navy.)

{O,o]   Back into my hole in the wall.   Joanne Glad I helped you make sense.


NVIDIA again

2014-04-11 Thread jdow

NVIDIA: API mismatch: the NVIDIA kernel module has version 304.119,
but this NVIDIA driver component has version 304.121.  Please make
sure that the kernel module and all NVIDIA driver components
have the same version.

I figure it'll be cleaned up soon. But, just in case, I thought I'd let
people know.

{^_^}   Joanne


Re: NVIDIA again

2014-04-11 Thread jdow

Nevermind - a reboot solved it. (Hm, usually I don't NEED to reboot. I
wonder why. Maybe a new kernel that slipped under my radar.)

{^_^}

On 2014/04/11 20:02, jdow wrote:

NVIDIA: API mismatch: the NVIDIA kernel module has version 304.119,
but this NVIDIA driver component has version 304.121.  Please make
sure that the kernel module and all NVIDIA driver components
have the same version.

I figure it'll be cleaned up soon. But, just in case, I thought I'd let
people know.

{^_^}   Joanne



Re: Installing Virtualbox 4.3.8 on SL 6.3

2014-03-01 Thread jdow

On 2014/02/28 17:03, David Sommerseth wrote:

On 28/02/14 14:33, jdow wrote:


And for reasons I can't figure out kvm does not work on a Win7 host. (sarcasm)


I'd say that's rather irrelevant, as the subject says installing
virtualbox on SL.  Not running SL inside virtualbox on Windows7.


The specific error he saw is one I see, on BOTH virtual machines, when I
try to install the VB drivers on the SL6.5 guest OS. It seems to be
endemic in the beast. That suggests an error somewhere with nobody seeming
to be quite sure of what the right way to fix this might be. Since I see
the error installing VB drivers on a guest OS my logical presumption is
that there is a reasonable chance that either there are two very related
bugs or he was doing the same thing I was.

{o.o}


Re: Installing Virtualbox 4.3.8 on SL 6.3

2014-02-28 Thread jdow

I messed around with information I could find on the web. I did the same
things, but in different order, on two different SL6.5 installs on two
different virtual machines. It worked on one and not the other. This was
within the last couple weeks. Akemi Yagi, I believe it was, offered some
potential solutions.

Maybe by now if you search the web about it better data can be found. But
you surely can look at back messages on this list, somewhere, I bet.

{^_^}   Joanne

On 2014/02/27 22:42, Mahmood Naderan wrote:

So is this a good solution

ln -s /usr/src/kernels/2.6.32-279.5.1.el6.x86_64/include/linux/autoconf.h
/usr/include/linux/autoconf.h

?
Regards,
Mahmood


On Friday, February 28, 2014 10:09 AM, jdow j...@earthlink.net wrote:
Because it's not in /usr/include/linux/autoconf.h. Somehow you and I need
to learn how to convince the compiler that it should look in the /usr/src
hierarchy to find the includes rather than the standard /usr/include
hierarchy. Frustrating, isn't it?

{^_^}  Joanne

On 2014/02/27 21:13, Mahmood Naderan wrote:
  Hi
  I am trying to install Virtualbox 4.3.8 on my SL6.3. The installation seems
to be OK
 
  |# rpm -ivh VirtualBox-4.3-4.3.8_92456_el6-1.x86_64.rpm
  warning: VirtualBox-4.3-4.3.8_92456_el6-1.x86_64.rpm: Header V4 DSA/SHA1
  Signature, key ID 98ab5139: NOKEY
  Preparing...### 
[100%]
 1:VirtualBox-4.3### [100%]
 
  Creating group 'vboxusers'. VM users must be member of that group!
 
  No precompiled module for this kernel found -- trying to build one. Messages
  emitted during module compilation will be logged to 
/var/log/vbox-install.log.
 
  Stopping VirtualBox kernel modules [ OK  ]
  Recompiling VirtualBox kernel modules [  OK  ]
  Starting VirtualBox kernel modules [  OK  ]|
 
  However the log file at /var/log contains multiple messages like this
 
 |ld -r -m elf_x86_64 -T
 /usr/src/kernels/2.6.32-279.5.1.el6.x86_64/scripts/module-common.lds
 --build-id -o /tmp/vbox.0/vboxnetadp.ko /tmp/vbox.0/vboxnetadp.o
 /tmp/vbox.0/vboxnetadp.mod.o
 make KBUILD_VERBOSE=1 SUBDIRS=/tmp/vbox.0 SRCROOT=/tmp/vbox.0
 CONFIG_MODULE_SIG= -C /lib/modules/2.6.32-279.5.1.el6.x86_64/build modules
 test -e include/linux/autoconf.h -a -e include/config/auto.conf || (  
\
 echo;\
 echo   ERROR: Kernel configuration is invalid.;  \
 echo include/linux/autoconf.h or include/config/auto.conf are
 missing.;  \
 echo Run 'make oldconfig  make prepare' on kernel src to 
fix
 it.;  \
 echo;\
 /bin/false)
 mkdir -p /tmp/vbox.0/.tmp_versions ; rm -f /tmp/vbox.0/.tmp_versions/*|
 
 
 
  Actually the autoconf.h file exists in the source path
 
  |# find /usr/ -name autoconf.h
  /usr/src/kernels/2.6.32-279.5.1.el6.x86_64/include/linux/autoconf.h
 
  |So why should I receive such messages
 
 
  Regards,
  Mahmood




Re: Installing Virtualbox 4.3.8 on SL 6.3

2014-02-27 Thread jdow

Because it's not in /usr/include/linux/autoconf.h. Somehow you and I need
to learn how to convince the compiler that it should look in the /usr/src
hierarchy to find the includes rather than the standard /usr/include
hierarchy. Frustrating, isn't it?

{^_^}   Joanne

On 2014/02/27 21:13, Mahmood Naderan wrote:

Hi
I am trying to install Virtualbox 4.3.8 on my SL6.3. The installation seems to 
be OK

|# rpm -ivh VirtualBox-4.3-4.3.8_92456_el6-1.x86_64.rpm
warning: VirtualBox-4.3-4.3.8_92456_el6-1.x86_64.rpm: Header V4 DSA/SHA1
Signature, key ID 98ab5139: NOKEY
Preparing...### [100%]
1:VirtualBox-4.3 ### [100%]

Creating group 'vboxusers'. VM users must be member of that group!

No precompiled module for this kernel found -- trying to build one. Messages
emitted during module compilation will be logged to /var/log/vbox-install.log.

Stopping VirtualBox kernel modules [ OK  ]
Recompiling VirtualBox kernel modules [  OK  ]
Starting VirtualBox kernel modules [  OK  ]|

However the log file at /var/log contains multiple messages like this

|ld -r -m elf_x86_64 -T
/usr/src/kernels/2.6.32-279.5.1.el6.x86_64/scripts/module-common.lds
--build-id -o /tmp/vbox.0/vboxnetadp.ko /tmp/vbox.0/vboxnetadp.o
/tmp/vbox.0/vboxnetadp.mod.o
make KBUILD_VERBOSE=1 SUBDIRS=/tmp/vbox.0 SRCROOT=/tmp/vbox.0
CONFIG_MODULE_SIG= -C /lib/modules/2.6.32-279.5.1.el6.x86_64/build modules
test -e include/linux/autoconf.h -a -e include/config/auto.conf || (  \
echo;\
echo   ERROR: Kernel configuration is invalid.;  \
echo  include/linux/autoconf.h or include/config/auto.conf are
missing.;   \
echo  Run 'make oldconfig  make prepare' on kernel src to fix
it.;   \
echo;\
/bin/false)
mkdir -p /tmp/vbox.0/.tmp_versions ; rm -f /tmp/vbox.0/.tmp_versions/*|



Actually the autoconf.h file exists in the source path

|# find /usr/ -name autoconf.h
/usr/src/kernels/2.6.32-279.5.1.el6.x86_64/include/linux/autoconf.h

|So why should I receive such messages


Regards,
Mahmood


Re: Missing src RPMS

2014-02-26 Thread jdow

They are also under 6.5.

{^_^}

On 2014/02/26 03:34, Klaus Steinberger wrote:

Am 25.02.2014 12:54, schrieb jdow:

Hm, they were there a week or so ago when I went to get the kernel source.
Look in SRPMS/vendor:
ftp://ftp1.scientificlinux.org/linux/scientific/6x/SRPMS/vendor/



uhh, changed place, before they were under sl6

Sincerely,
Klaus




Re: Missing src RPMS

2014-02-25 Thread jdow

Hm, they were there a week or so ago when I went to get the kernel source.
Look in SRPMS/vendor:
ftp://ftp1.scientificlinux.org/linux/scientific/6x/SRPMS/vendor/

{^_^}

On 2014/02/24 23:24, Klaus Steinberger wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi,

the src rpm for sl-release-6.5-1  seems to be missing from the download server.

Sincerely,
Klaus



Re: OpenGL

2014-02-21 Thread jdow

On 2014/02/21 04:10, Nico Kadel-Garcia wrote:

I'm going to urge you to seriously simplify your setup: get rid of all
but one or two of the kernels, and make sure that the kernel you are
actually running right now is the last kernel you installed from RPM.
Reboot if necessary.


Bog standard install with yum update handling it all. I figure only two
kernels are really needed. But having three around is no big deal. It's
original function was to test SL6 as a candidate for use as a replacement
for something that had been around the block a few times too many. I made sure
it could handle the critical services properly. Then I made the switch. I
kept the virtual machines around for any experiments I wanted to try.

I did such a thing and discovered VBox running on Win7 64 is pretty poor
working with USB. I intend to figure out, RSN, whether the USB service
is any better when working with SL6-x64. I have an interest in
Software Defined Radios and the little $20 DVB-T dongles that work well
on the native machine don't work at all well on the virtual machine.
And neither configuration should be stressing the USB very much at all.

So I backed off the snapshot.

{^_^}


Re: OpenGL

2014-02-19 Thread jdow

On 2014/02/19 01:59, Akemi Yagi wrote:

On Tue, Feb 18, 2014 at 9:29 PM, jdow j...@earthlink.net wrote:

What happened to it? It's really hard to get VirtualBox clients to work
without
OpenGL when you want to use the extra features. They won't compile because
OpenGL seems to be missing. And I don't find it in the usual suspect repos.


This is a known issue. You can find a workaround here:

https://forums.virtualbox.org/viewtopic.php?f=3t=58855

This and some more useful info are in this CentOS wiki:

http://wiki.centos.org/HowTos/Virtualization/VirtualBox/CentOSguest

Akemi



Thanks Akemi. I did discover where the ghc_OpenGL files were. Here's the story.
There is a missing kernel file and perhaps a problem in the kernel source.

Trying to install the virtual box extensions I ran across a misleading
error message about OpenGL not compiling. The machine I tried it on first
had OpenGL missing. I finally figured out it had never been configured for
epel. Half the problem was solved. But I got the same error. I traced back
to the build error:
   echo;  \
   echo "  ERROR: Kernel configuration is invalid.";  \
   echo "         include/linux/autoconf.h or include/config/auto.conf are missing.";  \
   echo "         Run 'make oldconfig && make prepare' on kernel src to fix it.";  \


So to save time I went looking for the kernel source and attempted
make oldconfig and make prepare. The latter failed. I am either missing
something interesting or the kernel source is missing something interesting.

[jdow@sl6 2.6.32-431.5.1.el6.x86_64]$ sudo make oldconfig
scripts/kconfig/conf -o arch/x86/Kconfig
#
# configuration written to .config
#
[jdow@sl6 2.6.32-431.5.1.el6.x86_64]$ sudo make prepare
scripts/kconfig/conf -s arch/x86/Kconfig
  CHK include/linux/version.h
  CHK include/linux/utsrelease.h
  SYMLINK include/asm - include/asm-x86
make[1]: *** No rule to make target `missing-syscalls'.  Stop.
make: *** [prepare0] Error 2
[jdow@sl6 2.6.32-431.5.1.el6.x86_64]$


So there may be two bugs here.

First, the kernel-devel includes do not have the autoconf file.

Second, trying to make prepare fails. (So does a subsequent make
clean.)
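Before suspecting the source tree itself, a quick sanity check helps: on EL kernels the packaged kernel-devel headers are meant to be used as-is (no make prepare), so the usual culprit is a version mismatch between the running kernel and the installed headers. A minimal check, guarded so it is harmless on any system:

```shell
# Compare the running kernel against the installed kernel-devel trees.
running=$(uname -r)
echo "running kernel: $running"
# rpm only exists on RPM-based systems; guard so this is safe anywhere.
if command -v rpm >/dev/null 2>&1; then
    rpm -q kernel-devel || echo "kernel-devel not installed"
fi
ls /usr/src/kernels 2>/dev/null || echo "no /usr/src/kernels"
```

If the versions do not line up, installing the matching kernel-devel (or rebooting into the newest kernel) usually beats patching the tree by hand.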

{^_^}


OpenGL

2014-02-18 Thread jdow

What happened to it? It's really hard to get VirtualBox clients to work without
OpenGL when you want to use the extra features. They won't compile because
OpenGL seems to be missing. And I don't find it in the usual suspect repos.

{^_^}   Joanne


Re: Scientific Linux 6.5 Live Media released

2014-02-07 Thread jdow

Can the live CD be cozened into working with a fakeraid 5 disk setup on
an ICH9 motherboard?

{^_^}   Joanne

On 2014/02/07 10:06, Urs Beyerle wrote:

Hi,

Scientific Linux 6.5 LiveCD, LiveMiniCD and LiveDVD are officially released.
They are available for 32-bit and 64-bit and come with the following window managers:

LiveMiniCD  icewm
LiveCD      gnome
LiveDVD     gnome, kde, icewm

Software was added from rpmforge, epel and elrepo (see EXTRA SOFTWARE) to 
include
additional filesystem support (ntfs, reiserfs), secure network connection 
(openvpn,
vpnc, pptp), filesystem tools (dd_rescue, ddrescue, gparted), and better 
multimedia
support (gstreamer-ffmpeg, flash-plugin)


DOWNLOAD

- http://ftp.scientificlinux.org/linux/scientific/livecd/65/i386
- http://ftp.scientificlinux.org/linux/scientific/livecd/65/x86_64
Or
- http://ftp.scientificlinux.org/linux/scientific/6.5/i386/iso
- http://ftp.scientificlinux.org/linux/scientific/6.5/x86_64/iso

Alternatively use a public mirror or torrent:
- http://www.scientificlinux.org/download/mirrors
- http://www.osst.co.uk/Download/scientific/livecd/?id=28


CHANGES SINCE SL6.4 LIVE

- software based on SL6.5
- cups, phonon-backend-gstreamer, pidgin, brasero, qt, gcalctool, gdisk,
   lftp, spice-client, minicom, nc were removed from LiveCD to save diskspace


NOTES

- The Live images are based on the Fedora LiveCD tools.
- If you install the LiveCD to hard drive, the installation of the live
   image is done by anaconda similar to the normal SL6 installation.
   All changes done during LiveCD usage are lost!
- You can install the LiveCD on an USB stick with persistent changes using
   liveusb-creator included in sl-addons:
 yum --enablerepo=sl-addons install liveusb-creator
- To build your own LiveCD use livecd-tools from sl-addons:
 yum --enablerepo=sl-addons install livecd-tools


SOFTWARE

- kernel 2.6.32-431.1.2.el6
- gnome 2.28
- firefox 24.3.0
- thunderbird 24.3.0
- icewm 1.2.37 (LiveMiniCD)
- libreoffice 4.0.4.2 (only on LiveDVD)
- kde 4.3.4 (only on LiveDVD)
   etc.


EXTRA SOFTWARE (repo sl-livecd-extra)

- fuse-sshfs
- ntfs-3g
- ntfsprogs
- dd_rescue
- ddrescue
- iperf
- flash-plugin
- gstreamer-ffmpeg
- rxvt-unicode (only MiniCD)
- gparted
- NetworkManager-openvpn
- NetworkManager-vpnc
- NetworkManager-pptp
- vpnc-consoleuser
- kmod-reiserfs
- reiserfs-utils


BOOT PARAMETERS

- live_ram         copy entire Live image to RAM (takes a few minutes)
- noswap           do not use SWAP partition found on hard drive
- pw=any_password  set password
- noautologin      disable auto login
- automount        enable auto mounting (rw) of all found hard drives
- user=username    username of local user, default is liveuser
- cups=server      set CUPS server
- hostname=name    set hostname
- check            verify LiveCD before booting
- liveinst         directly start graphical installation to hard drive
- textinst         directly start text based installation to hard drive
- overlay=UUID=    defines the UUID of the USB device used for persistent overlay
- rdinitdebug      debug dracut boot process
- eject            eject LiveCD/DVD at shutdown


More information can be found at http://www.livecd.ethz.ch

Urs Beyerle



Re: RedHat CentOS acquisition: stating the obvious

2014-01-15 Thread jdow

On 2014/01/15 15:27, Patrick J. LoPresti wrote:

On Wed, Jan 15, 2014 at 2:06 PM, David Sommerseth da...@sommerseths.net wrote:

On 15/01/14 19:49, Patrick J. LoPresti wrote:



- Red Hat (the company) considers Oracle (the company) one of their
top two competitors.

- Red Hat considers CentOS a competitor.

- Red Hat believes acquiring CentOS will improve their bottom line.

These statements are not attacks. They are neither good nor bad.
They simply are.



They simply are pure speculations.  You might be right in the first point,
based on that both parties are commercial companies delivering competing
products.

But the rest is pure garbage.


At the risk of repeating myself... I refer you to Red Hat's 10-K filing:

http://www.sec.gov/Archives/edgar/data/1087423/000119312513173724/d484576d10k.htm#tx484576_1

See the Competition section on pages 12-14. Search for Oracle and CentOS.

So when I say, Red Hat considers CentOS a competitor, that is a
demonstrable statement of fact, appearing in an authoritative document
where lies can result in prison sentences. (Unsurprisingly, the
mission statement you keep citing appears nowhere in this document.
When choosing between words and legally binding words, which to
believe? Hm, hard to say...)

When I say Red Hat considers Oracle one of their top two
competitors, I base that on the same section of the 10-K, where
Oracle features far more prominently than any other company, save
perhaps Microsoft.


What further do they say about CentOS? It is obvious that CentOS is
a competitor for OS distribution. It is also obvious that CentOS is
not a competitor for support. They give a lot of peer to peer sort
of support. CentOS does not give direct hands on professional
support. One can expect Red Hat to deliver accurate, timely, and
detailed support. One cannot expect that from a list like this. At
worst you get conflicting advice and must make an educated guess
as to which advice to follow. The Red Hat business is support of
very stable and well wrung out versions of the tools delivered by
RHEL. The stable and well wrung out versions make the support they
are selling possible. But it's not necessarily that code they are
selling. It's the code with the support as a value added component.

Their 10k should point out something like this. They should explain
how they differ from their competition and why is this desirable
enough they will maintain a customer base.


When I say Red Hat believes acquiring CentOS will improve their
bottom line, that is so blindingly obvious I am not even sure how to
debate it. Companies do not make acquisitions for the fun of it.


The wording here is not particularly neutral, you know. There is a
strong insinuation that the intent is to remove CentOS as a
competitor. Might the reason be what is stated in the document that
was published stating that Red Hat felt CentOS could fill a useful
functional gap in their development and training cycles? I'd expect
a CentOS equivalent of SL6 Rolling to appear if one does not already
exist. This would be an intermediate level build between RHEL and
Fedora. Presumably they'd hope this would result in better testing
for modules and updates scheduled for the formal RHEL release.

Yes, they do expect acquiring CentOS to help their bottom line. But,
it's not a slam dunk the intent is to shut out derivative systems.
Heck, the document revealing this acquisition expected this to make
derivative systems easier to generate. (That results in more testing
for RHEL candidate modules in a relatively controlled environment
very similar to RHEL. That is surely a significant benefit.)

{^_^}   Joanne


Re: Centos / Redhat announcement

2014-01-09 Thread jdow

On 2014/01/09 15:27, Ian Murray wrote:

On 09/01/14 22:53, jdow wrote:

Ian, I suspect the SL staff position is more proper engineering with
its concern about what could possibly go wrong than it is about
minimizing their work or compromising their main sponsor's needs. I
suspect that the SL staff position is also tempered with a healthy
dose of, What do our customers want and need?

I didn't suggest otherwise. However, I could have sworn I read somewhere
that Red Hat would stop release their source as SRPMs (which would have
a direct impact on the build process of SL I assume), but I can't find
that now. Maybe I mis-read that. I'll keep looking.


This is an excellent source of information.
http://wordshack.wordpress.com/2014/01/07/centos-welcome/

It contains many links including the link to the faq:
http://community.redhat.com/centos-faq/

The faq is a very good source of information. It's best to go to these
good sources rather than listen to FUD spread around the net. I saw
nothing to indicate SRPMs were no longer going to be distributed. That
would be of questionable legality as a matter of fact given GPL
requirements.


The main SL customers are their sponsers, Fermilab and Cern. They do
not need the latest and greatest. They need stable support for what
we already have for as long as practical.


I thought core CentOS would still track Red Hat in releases and support
lengths. If I have that wrong, then that does throw a spanner in the works.


The impression I received is that Centos policies would not change. But
read the faq, don't trust me. I looked at and tried Centos many years ago
and decided what I got was a somewhat slower and outdated "you gotta
update frequently" situation, like regular Red Hat (now Fedora), with very
unstable leadership. It was going through one of its "We're gonna die!"
phases. I looked elsewhere. I hit on SL by accident and liked their
policies. I've not been disappointed. For, perhaps, different reasons I
want what SL's sponsors want. And I don't see SL's sponsors dropping it
any time soon. I'll probably die first.


All the other SL customers, such as you and I, don't matter a hill of
beans against the billion dollar investments of their sponsors. I am
sitting back and watching. I certainly respect their work, appreciate
their work, and admittedly sponge off their work. So I'd not dream of
trying to tell them what to do.

I wouldn't dream of telling them what to do either. All I am doing here
is chewing the cud, as it were.

FWIW, I don't feel like I sponge... I merely drink from the same open
source cup that SL and Red Hat does. I have a few lines of code accepted
in the Xen project; does that mean all Xen users (4.3+) are sponging off
me? I don't think so.


I'm taking advantage of a good situation. I make my income writing for-pay
software. So while it's legal, I do feel like a sponge rather than
someone who contributes enough to pay for my use. I don't have time.
(And time seems to be getting shorter every day as I get older. Too
many decade years, I suppose.)

{^_^}


I do note that for the machine on which I use SL it is precisely the
sort of thing I want, too.

{^_^}   Joanne Dow

On 2014/01/09 14:30, Ian Murray wrote:

On 09/01/14 21:12, William R. Somsky wrote:

One thing people should keep in mind while discussing this is the why
the original Fermilab distro (and Cern distro) which then became
Scientific Linux was created, and why Fermilab continues to actively
commit resources to SL. Remember Fermilab (and Cern) are particle
accelerator facilities with million/billion dollar experiments that
*must* have long-term guarantees of stable and supported software.

To make Scientific Linux a variant of Centos would be to introduce an
unknown/uncontrollable element as a controlling factor in the mix.
What if Centos pulled an Ubuntu and decided to start introducing
controversial changes in an attempt to become more user friendly or
to win the desktop?

A merging w/ Centos would need to carefully consider such issues.

I don't come from a scientific background, just more of a piggy-backer
on what seems to be a well governed and reliably supported operating
system. An O/S with some big names behind it, such as they ones you
mentioned above. I was a longterm CentOS user until it became clear that
there was surprisingly little transparency around the governance and
processes of the project, and it seemed overly reliant on one or two
individuals. Despite it having a huge userbase, I came to the
conclusion that this was largely a vanity project for those individuals.

Now, the Red Hat news has completely changed that situation. So for me,
CentOS is now viable again.

To answer your concern, directly:-

To make Scientific Linux a variant of Centos would be to introduce an
unknown/uncontrollable element as a controlling factor in the mix.

Scientific Linux is already based on Red Hat Enterprise Linux, so in
that sense you are not introducing any new element, in my opinion

Re: Centos / Redhat announcement

2014-01-09 Thread jdow

On 2014/01/09 16:00, Ian Murray wrote:

On 09/01/14 23:27, Ian Murray wrote:

On 09/01/14 22:53, jdow wrote:

Ian, I suspect the SL staff position is more proper engineering with
its concern about what could possibly go wrong than it is about
minimizing their work or compromising their main sponsor's needs. I
suspect that the SL staff position is also tempered with a healthy
dose of, What do our customers want and need?

I didn't suggest otherwise. However, I could have sworn I read somewhere
that Red Hat would stop release their source as SRPMs (which would have
a direct impact on the build process of SL I assume), but I can't find
that now. Maybe I mis-read that. I'll keep looking.


Right, I have found it:

http://community.redhat.com/centos-faq/


Will this new relationship change the way CentOS obtains Red Hat
Enterprise Linux source code?

Yes. Going forward, the source code repository at git.centos.org will replace
and obsolete the Red Hat Enterprise Linux source rpms on ftp.redhat.com. Git
provides an attractive alternative to ftp because it saves time, reduces human
error, and makes it easier for CentOS users to collaborate on and build their
own distributions, including those of SIGs.


So, as I read it, SL will need to change whether it likes it or not, unless RHEL
SRPMs will be available through other channels.


I hope what they are doing is putting the RHEL sources into the Centos GIT
repository and Centos then derives from the posted RHEL sources with its
own sources OR that Centos simply becomes the source code distribution for
RHEL.

Don't forget that GPL means you must have the sources available when asked
for. Therefore they have to be available to all chronologically before any
potential Centos massaging might take place on those sources.

Pulling changes from git may be easier than pulling down the entire batch
of SRPMs, too. It may well simplify the SL process.
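If the FAQ's description pans out, fetching one package's source could be as simple as a clone. The rpms/<name> repository layout here is an assumption on my part, not something the announcement spells out:

```shell
# Hypothetical: fetch the source for a single package from git.centos.org
# instead of downloading the whole SRPM. One repo per source package is
# an assumed layout.
pkg=kernel
url="https://git.centos.org/rpms/$pkg.git"
echo "git clone $url"
```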

{^_^}


Re: CentOS + RHEL join forces...

2014-01-07 Thread jdow

Here is a good discussion of it from a Fedora viewpoint with links to
more information. Read the FAQ. It's huge and informative.

http://wordshack.wordpress.com/2014/01/07/centos-welcome/

Little changes other than slightly faster updates for Centos. This should
speed things up for SL as well.

(There is some discussion of this event on the Fedora list. The post with
the above link is the cream of the crop, so far.)

{^_^}   Joanne

On 2014/01/07 19:43, Jamie Duncan wrote:

so RH wants to get new versions of selected apps faster to the RHEL than if
they were going thru current Fedora - RHEL route?
am i reading this right?

no, not at all, I don't think.


On Tue, Jan 7, 2014 at 10:25 PM, Andrew Z form...@gmail.com
mailto:form...@gmail.com wrote:

so RH wants to get new versions of selected apps faster to the RHEL than if
they were going thru current Fedora - RHEL route?
am i reading this right?


On Tue, Jan 7, 2014 at 10:07 PM, ~Stack~ i.am.st...@gmail.com
mailto:i.am.st...@gmail.com wrote:

On 01/07/2014 08:27 PM, Steven Haigh wrote:
  On 8/01/2014 1:08 PM, Steven Miano wrote:
  So how does that impact Scientific Linux?
 
  In a nutshell? It doesn't.

I don't think it will hurt Scientific at all and from what I have been
reading it might make things easier and better. I (as a non-dev user, so
take this opinion accordingly) see two things that might help:
1) the hidden process of how CentOS rebuilds the SRPMs is being opened
up which should make CentOS even close to their binary-equivalent goal.
2) the variant ( http://centos.org/variants/ ) might actually make
things easier if Scientific just wanted to start with a core base and
build from there. I am sure there are going to be a dozen different
spin-offs of CentOS for this reason alone.

There are still a TON of details yet to be given, so we will see what is
actually delivered, but this is great news for the community as a whole.
Here is hoping that it makes things easier and better! Cheers!


~Stack~





--
Thanks,

Jamie Duncan
@jamieeduncan



Re: ddclient vs selinux

2013-12-16 Thread jdow

On 2013/12/16 02:48, David Sommerseth wrote:

On 15. des. 2013 03:13, jdow wrote:

On 2013/12/14 18:05, S.Tindall wrote:

On Sat, 2013-12-14 at 17:36 -0800, jdow wrote:

I kinda wondered if somebody here had an idea.

Ah well
{o.o}


I would start with:

   # restorecon -vr /etc/ddclient*
   # restorecon -vr /var/cache/ddclient

and then retest in permissive mode.

   # setenforce 0

Steve



More or less been there done that.

restorecon -r /var took a bit longer, and fixed one other unrelated
file. But the basic problem persisted.


Most likely the EPEL package does not include a proper file context for
the /var/cache/ddclient directory.

As a quick-fix, which I believe should be fairly safe, you can add the
dhcpc_t security context to that directory.  Just run as root:

# semanage fcontext -a -t dhcpc_t '/var/cahce/ddclient(/.*)?'

Then you can try the restorecon command again and see if it helps.


--
kind regards,

David Sommerseth


I think I'll wait a little bit pending a reply from the SELinux guru. It
looks like one of those hard to undo things that makes going forward
cleanly very awkward.

It is something akin to what I had figured trying.

Thanks for providing precise syntax for me.

{^_^}


Re: ddclient vs selinux

2013-12-16 Thread jdow

On 2013/12/16 04:37, David Sommerseth wrote:

On 16. des. 2013 12:52, jdow wrote:

On 2013/12/16 02:48, David Sommerseth wrote:

On 15. des. 2013 03:13, jdow wrote:

On 2013/12/14 18:05, S.Tindall wrote:

On Sat, 2013-12-14 at 17:36 -0800, jdow wrote:

I kinda wondered if somebody here had an idea.

Ah well
{o.o}


I would start with:

# restorecon -vr /etc/ddclient*
# restorecon -vr /var/cache/ddclient

and then retest in permissive mode.

# setenforce 0

Steve



More or less been there done that.

restorecon -r /var took a bit longer, and fixed one other unrelated
file. But the basic problem persisted.


Most likely the EPEL package does not include a proper file context for
the /var/cache/ddclient directory.

As a quick-fix, which I believe should be fairly safe, you can add the
dhcpc_t security context to that directory.  Just run as root:

 # semanage fcontext -a -t dhcpc_t '/var/cahce/ddclient(/.*)?'

Then you can try the restorecon command again and see if it helps.


--
kind regards,

David Sommerseth


I think I'll wait a little bit pending a reply from the SELinux guru. It
looks like one of those hard to undo things that makes going forward
cleanly very awkward.


To undo that command above ... replace -a with -d  really, SELinux
isn't that hard or complicated ;-)   'semanage fcontext' is basically
comparable to 'chown' - just for SELinux instead.

Of course, the harder way to do this is to implement a separate SELinux
type for ddclient, and set up the proper accesses the ddclient program
needs.  That requires far more skills.  I see that ddclient does have
such a policy ready in Fedora 19 (just checked the source package for
selinux-policy).  But I doubt that policy will get into EL6 as part of
the base policy, also because ddclient is just an EPEL package.

If you pick out the ddclient.{te,fc,if} files from the contrib SELinux
reference policy used in newer Fedoras, you might be lucky to build that
as a separate SELinux module (you need the selinux-policy-devel package
installed).  But that does require a bit more skills, and it might also
require some backporting too.  From a quick glance at the policy, it
isn't too complicated.  But it uses macros heavily, which I'd suspect
would be the biggest hurdle - as many of them might be from newer
reference policies than what is shipped in EL6.  Anyhow, if you're able
to build this as a SELinux module, it's 'semodule -i ddclient.pp' and to
unload it (back to how it was before) you use 'semodule -r ddclient'.


--
kind regards,

David Sommerseth


Were I about 40 years younger I'd be pushing to learn that stuff. But I'm
old enough and deep enough into a different field getting prepackaged
stuff is well worth it.

My passion at the moment is Software Defined Radios. They complete a
circle. I started out designing radio communications equipment,
sometimes for satellites. I moved into software. Then I am moving back
to the merger of the two fields. SDRs are fully complex enough to keep
my brain going these days.

Thanks for the additional information. I'll give a try tomorrow. (It is
bed time by a somewhat insomniac's definition of bed time.)

{^_-}   Joanne
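Putting David's pieces together, the quick fix and its inverse form a small round trip. Using dhcpc_t as the temporary type is his suggestion above; the commands need root and an SELinux host, so this sketch guards for that:

```shell
# Apply the quick-fix label, relabel on disk, and note how to undo it.
ctx_path='/var/cache/ddclient(/.*)?'
if command -v semanage >/dev/null 2>&1 && [ "$(id -u)" -eq 0 ] \
   && [ -d /var/cache/ddclient ]; then
    semanage fcontext -a -t dhcpc_t "$ctx_path"   # add the file-context rule
    restorecon -vr /var/cache/ddclient            # relabel existing files
    # Revert later with: semanage fcontext -d -t dhcpc_t "$ctx_path"
else
    echo "would label $ctx_path as dhcpc_t"
fi
```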


Re: ddclient vs selinux

2013-12-16 Thread jdow

On 2013/12/16 02:48, David Sommerseth wrote:

On 15. des. 2013 03:13, jdow wrote:

On 2013/12/14 18:05, S.Tindall wrote:

On Sat, 2013-12-14 at 17:36 -0800, jdow wrote:

I kinda wondered if somebody here had an idea.

Ah well
{o.o}


I would start with:

   # restorecon -vr /etc/ddclient*
   # restorecon -vr /var/cache/ddclient

and then retest in permissive mode.

   # setenforce 0

Steve



More or less been there done that.

restorecon -r /var took a bit longer, and fixed one other unrelated
file. But the basic problem persisted.


Most likely the EPEL package does not include a proper file context for
the /var/cache/ddclient directory.

As a quick-fix, which I believe should be fairly safe, you can add the
dhcpc_t security context to that directory.  Just run as root:

# semanage fcontext -a -t dhcpc_t '/var/cahce/ddclient(/.*)?'

Then you can try the restorecon command again and see if it helps.


--
kind regards,

David Sommerseth



I did catch a typo:
semanage fcontext -a -t dhcpc_t '/var/cahce/ddclient(/.*)?'
should be
semanage fcontext -a -t dhcpc_t '/var/cache/ddclient(/.*)?'

{^_^}


Re: ddclient vs selinux

2013-12-16 Thread jdow

On 2013/12/16 05:28, David Sommerseth wrote:

On 16. des. 2013 13:57, jdow wrote:

On 2013/12/16 02:48, David Sommerseth wrote:

On 15. des. 2013 03:13, jdow wrote:

On 2013/12/14 18:05, S.Tindall wrote:

On Sat, 2013-12-14 at 17:36 -0800, jdow wrote:

I kinda wondered if somebody here had an idea.

Ah well
{o.o}


I would start with:

# restorecon -vr /etc/ddclient*
# restorecon -vr /var/cache/ddclient

and then retest in permissive mode.

# setenforce 0

Steve



More or less been there done that.

restorecon -r /var took a bit longer, and fixed one other unrelated
file. But the basic problem persisted.


Most likely the EPEL package does not include a proper file context for
the /var/cache/ddclient directory.

As a quick-fix, which I believe should be fairly safe, you can add the
dhcpc_t security context to that directory.  Just run as root:

 # semanage fcontext -a -t dhcpc_t '/var/cahce/ddclient(/.*)?'

Then you can try the restorecon command again and see if it helps.


--
kind regards,

David Sommerseth



I did catch a typo:
semanage fcontext -a -t dhcpc_t '/var/cahce/ddclient(/.*)?'
should be
semanage fcontext -a -t dhcpc_t '/var/cache/ddclient(/.*)?'


Right!  cache, not cahce.  Sorry about that!

I've run that command on a test system, so the rest should work.  It
would just miss labelling the proper directory with the typo.


No problem, David. I'm human, too. And my nifgers fmuble quite often.

{^_-}


Re: ddclient vs selinux

2013-12-15 Thread jdow

On 2013/12/14 18:33, S.Tindall wrote:

On Sat, 2013-12-14 at 18:13 -0800, jdow wrote:

On 2013/12/14 18:05, S.Tindall wrote:

On Sat, 2013-12-14 at 17:36 -0800, jdow wrote:

I kinda wondered if somebody here had an idea.

Ah well
{o.o}


I would start with:

   # restorecon -vr /etc/ddclient*
   # restorecon -vr /var/cache/ddclient

and then retest in permissive mode.

   # setenforce 0

Steve



More or less been there done that.

restorecon -r /var took a bit longer, and fixed one other unrelated
file. But the basic problem persisted.

Googling indicates current Fedora stuff has a ddclient_t tag that does
not exist on my machine. So it seems the ddclient selinux policy setup
was skipped or missing. At least, that's my best guess.

{^_^}


For now, you could build/implement local policy for ddclient:

http://wiki.centos.org/HowTos/SELinux#head-faa96b3fdd922004cdb988c1989e56191c257c01

...unless the local policy looks ridiculous. The above link also offers
other options that you might try.

Or you can hope that Daniel Walsh (Red Hat's SELinux go-to guy) reads
this thread. :-)

You can post a bug report to epel, but my experience with getting
movement, or even acknowledgment, is poor, to be polite.

Steve


I see it has not changed since my last attempt at Red Hat's Bugzilla. That
is why I went here first. (VirtualBox is another such bugzilla black hole.)

{^_^}


Re: ddclient vs selinux

2013-12-15 Thread jdow

On 2013/12/14 18:33, S.Tindall wrote:

On Sat, 2013-12-14 at 18:13 -0800, jdow wrote:

On 2013/12/14 18:05, S.Tindall wrote:

On Sat, 2013-12-14 at 17:36 -0800, jdow wrote:

I kinda wondered if somebody here had an idea.

Ah well
{o.o}


I would start with:

   # restorecon -vr /etc/ddclient*
   # restorecon -vr /var/cache/ddclient

and then retest in permissive mode.

   # setenforce 0

Steve



More or less been there done that.

restorecon -r /var took a bit longer, and fixed one other unrelated
file. But the basic problem persisted.

Googling indicates current Fedora stuff has a ddclient_t tag that does
not exist on my machine. So it seems the ddclient selinux policy setup
was skipped or missing. At least, that's my best guess.

{^_^}


For now, you could build/implement local policy for ddclient:

http://wiki.centos.org/HowTos/SELinux#head-faa96b3fdd922004cdb988c1989e56191c257c01

...unless the local policy looks ridiculous. The above link also offers
other options that you might try.

Or you can hope that Daniel Walsh (Red Hat's SELinux go-to guy) reads
this thread. :-)

You can post a bug report to epel, but my experience with getting
movement, or even acknowledgment, is poor, to be polite.

Steve


I just cheated and asked him to check the bugzilla report.

Hopefully I didn't irritate him by contacting him directly about it.

{o.o}


Re: ddclient vs selinux

2013-12-14 Thread jdow

I kinda wondered if somebody here had an idea.

Ah well
{o.o}

On 2013/12/14 17:11, Nico Kadel-Garcia wrote:

Submit a bug report to EPEL?

On Sat, Dec 14, 2013 at 7:18 PM, jdow j...@earthlink.net wrote:

For some time now ddclient has not been working quite right. I made some
changes that finally brought to light the reason for this.

I removed the tweaked ddclient.conf, then yum removed ddclient, yum install
ddclient, and finally edited the ddclient.conf file to make it happy.

I started getting errors. This sequence is typical:
Dec 14 14:40:29 me2 ddclient[5711]: WARNING:  updating .dyndns.org:
nochg: No update required; unnecessary attempts to change to the current
address are considered abusive
Dec 14 14:40:29 me2 ddclient[5711]: FATAL:Cannot create file
'/var/cache/ddclient/ddclient.cache'. (Permission denied)

I figured it's not nice to abuse the kind folks at dyndns so I dug further
into it.

setenforce 0 allows it to run properly.

So I dug into the audit logs.
These two lines do not look right.
type=AVC msg=audit(1387064159.179:461956): avc:  denied  { getattr } for
pid=6296 comm=ddclient path=/var/cache/ddclient/ddclient.cache dev=dm-0
ino=2621901 scontext=unconfined_u:system_r:dhcpc_t:s0-s0:c0.c1023
tcontext=unconfined_u:object_r:var_t:s0 tclass=file
type=SYSCALL msg=audit(1387064159.179:461956): arch=c03e syscall=4
success=yes exit=0 a0=1b234a0 a1=1b02130 a2=1b02130 a3=28 items=0 ppid=6281
pid=6296 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0
tty=(none) ses=10540 comm=ddclient exe=/usr/bin/perl
subj=unconfined_u:system_r:dhcpc_t:s0-s0:c0.c1023 key=(null)

ddclient with a dhcpc_t tag? I note there does not seem to be a ddclient_t
or similar tag on the system.

The ddclient is from epel. I'd expect it to have a proper selinux setup.
I am rash enough to expect that should be handled in the ddclient rpm
setup.

What do I need to do to get this to work properly with setenforce 1
restored?

{^_^}   Joanne
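A quick way to see the problem in the audit lines above: the scontext field of an AVC record names the domain the process actually ran in. Picking it out of the quoted denial (the grep/cut below is just one illustrative way to slice the record):

```shell
# The AVC denial from the log above (abridged to one line):
avc='type=AVC msg=audit(1387064159.179:461956): avc: denied { getattr } for pid=6296 comm=ddclient path=/var/cache/ddclient/ddclient.cache scontext=unconfined_u:system_r:dhcpc_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:var_t:s0 tclass=file'

# Field 3 of scontext is the SELinux type: ddclient is running as
# dhcpc_t -- inherited from the dhclient policy -- not a domain of its own.
domain=$(printf '%s\n' "$avc" | grep -o 'scontext=[^ ]*' | cut -d: -f3)
echo "$domain"   # prints: dhcpc_t
```

From there the usual stopgap, pending a properly packaged policy, is a local module: ausearch -m avc -c ddclient | audit2allow -M ddclientlocal, then semodule -i ddclientlocal.pp. That is a workaround, not the packaged fix the bug report should eventually deliver.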




Re: ddclient vs selinux

2013-12-14 Thread jdow

On 2013/12/14 18:05, S.Tindall wrote:

On Sat, 2013-12-14 at 17:36 -0800, jdow wrote:

I kinda wondered if somebody here had an idea.

Ah well
{o.o}


I would start with:

  # restorecon -vr /etc/ddclient*
  # restorecon -vr /var/cache/ddclient

and then retest in permissive mode.

  # setenforce 0

Steve



More or less been there done that.

restorecon -r /var took a bit longer, and fixed one other unrelated
file. But the basic problem persisted.

Googling indicates current Fedora stuff has a ddclient_t tag that does
not exist on my machine. So it seems the ddclient selinux policy setup
was skipped or missing. At least, that's my best guess.

{^_^}


Re: media comparisons

2013-11-04 Thread jdow

On 2013/11/04 17:13, ToddAndMargo wrote:

On 11/04/2013 05:07 PM, Yasha Karant wrote:

On 11/04/2013 04:53 PM, ToddAndMargo wrote:

On 11/04/2013 04:21 PM, Yasha Karant wrote:

I need to do a media comparison between a data DVD and the .iso file
that purportedly contains the image of the exact DVD (including any
bootable or autoload binary files, not for an Intel instruction set
architecture).

When burning to the DVD, applications such as K3B and Nero (for Linux)
will do a verify of the burned media.  My understanding is that these
applications go through the device driver and device controller
hardware/firmware that may be applying error correction to the raw bit
stream; any such detected hardware media errors typically are reported
by the driver to a log file, but typically (if corrected) do not cause
the application to fail.

If one mounts the .iso file, by a command similar to that below,

# mount -t iso9660 -o ro,loop=/dev/loop0 /files/dvdimage.iso
/media1/virtualdisc

and likewise has the physical DVD in the DVD drive and mounted from,
say, /dev/sr0

will a diff /dev/loop0 /dev/sr0 suffice?

Is there a utility that will do the same thing that Nero would do as it
verifies after burning, but not requiring the burn -- that is, verify a
DVD against an ISO image file?

If /dev/sr0 were mounted on, say, /media/someDVD, and the ISO image
file on
/media1/virtualdisk , is there a utility or script to do a bit by bit
comparison via the mount points (not just the raw mount as /dev/sr0 )?

Yasha Karant




Hi Yasha,

Check the DVD as a raw device.

After you burn the ISO, eject the DVD (clears out something,
I don't know what, but had to learn the hard way):
/usr/bin/eject /dev/sr0

Then inject the DVD (close the door).  Can be on the same
line.
/usr/bin/eject -t /dev/sr0

Then make an MD5SUM of each
md5sum /files/dvdimage.iso /dev/sr0

Eyeball the sums.  One will be on top of the other.

If you like, I have some leftover code I can send you.

-T




 From http://en.wikipedia.org/wiki/Md5sum

As with all such hashing algorithms, there is theoretically an unlimited
number of files that will have any given MD5 hash. However, it is very
unlikely that any two non-identical files in the real world will have
the same MD5 hash, unless they have been specifically created to have
the same hash.

End quote.

I explain the above reality to my students, although I do use MD5SUM
myself.  I was hoping for a utility that did a true bit-by-bit
comparison of the two files.


Aside:  Note that a (very) clever attacker can embed specific issues
into a file such that the corrupted (and perhaps infected) file will
pass a MD5 hash test.  Note that USA NSA and other entities often do
employ such clever persons (do recall the cyber attack on the fissile
material enrichment facilities of a Middle Eastern nation state not in
full agreement with USA foreign policy, albeit an attack not
specifically limited to this mechanism).  I am not suggesting that the
DVD and ISO image file I am using are subject to this sort of clever
corruption; but, it is important to understand the limitations of
certain techniques.

Yasha Karant



Hi Yasha,

The likelihood is pretty low.

You could always try using the SHA sums.  Maybe do
both.

-T


Or use the cmp command, cmp -l infile outfile if he really wants a
byte by byte comparison of the disks.

{o.o}
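One gotcha with hashing the raw device, worth noting alongside the cmp suggestion: drives often return padding past the end of the ISO image, so md5sum of all of /dev/sr0 can mismatch even on a good burn. A sketch of the usual workaround, with ordinary files standing in for the disc (paths and sizes here are illustrative): read exactly the ISO's sector count off the device before hashing.

```shell
set -eu
workdir=$(mktemp -d)
cd "$workdir"

# Stand-ins: "image.iso" plays the ISO file, "burned.img" plays /dev/sr0
# with 4 sectors of trailing pad, as a burned disc often has.
dd if=/dev/urandom of=image.iso bs=2048 count=16 2>/dev/null
cp image.iso burned.img
dd if=/dev/zero bs=2048 count=4 >> burned.img 2>/dev/null

# Hash only as many 2048-byte sectors as the ISO itself contains.
blocks=$(( $(stat -c%s image.iso) / 2048 ))
sum_iso=$(md5sum < image.iso | awk '{print $1}')
sum_dvd=$(dd if=burned.img bs=2048 count="$blocks" 2>/dev/null | md5sum | awk '{print $1}')

[ "$sum_iso" = "$sum_dvd" ] && echo MATCH || echo MISMATCH   # prints: MATCH
```

With a real disc, substitute /dev/sr0 for burned.img (isoinfo -d -i image.iso also reports the volume size in 2048-byte sectors), and `cmp -n $((blocks*2048)) image.iso /dev/sr0` gives the true byte-by-byte comparison Yasha asked for.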


Re: UEFI SL 6x boot

2013-09-24 Thread jdow

I don't think that's directly what has been said, Yasha. In less loaded
terms, it appears there are motherboard vendors who are not strictly
compliant with the UEFI specification and do not allow you to turn off
UEFI without using some software with Microsoft roots. They have elected
to leave out, or to so thoroughly obscure, any BIOS-based means of disabling
the UEFI protections. This is neither a Linux nor a Microsoft issue. It is
an issue to take up with the motherboard vendor and get a fix for it.

In this sense your last paragraph is true, if couched in inflammatory
terms that place the blame on the wrong party. The party responsible
is the motherboard vendor, for violating the UEFI specification as I
understand it. And that only holds if you cannot find some place in
the BIOS that disables secure boot, UEFI, or whatever else the board
manufacturer calls it, AND the motherboard vendor will not provide you
with the required information in a way that does not depend on any OS.

{^_^}   Joanne

On 2013/09/24 17:20, Yasha Karant wrote:

Let me see if I understand the current situation. This question was prompted by
a colleague attempting to use OpenSuSE (not SL nor TUV) on UEFI
Secure Boot who was not able to get a reliably booting operating
environment.  The colleague wondered if SL would fare better.

Depending upon the particular BIOS or BIOS equivalent, using MS Windows 8, it
may be possible to disable Secure Boot and allow for SL to be booted.  Secure
Boot, and many other technologies put forward by, through, or under the auspices
of the monopoly primarily exist to move forward the market share, return on
investment, and general economic wealth of the monopoly (not a surprise in
oligopolistic non-market economics).

SL with Fermilab participation is participating in projects that will allow SL
to boot on UEFI Secure Boot hardware without the use of any monopoly operating
environment software or applications -- Microsoft not required.  Presumably, TUV
is participating as well as TUV supported-for-fee environments must be able to
reliably boot and run on UEFI Secure Boot platforms without the use of monopoly
software to enable the booting process.  Apple is not a matter for discussion
because Apple provides the entire hardware and software package, and does not
allow the use of MacOS on non-Apple hardware platforms. Presumably VirtualBox
and other means to allow MS Windows to run as a guest environment has or will
have some means to provide UEFI Secure Boot to MS Windows guests requiring such.

At present, there is no production Linux that will reliably run on all hardware
platforms that use UEFI Secure Boot, whereas MS Windows environments will do so
on any hardware platform that proclaims compliance with the monopoly
(certification).

Is the above substantially correct as of this instant?

Yasha Karant

On 09/24/2013 04:40 PM, Connie Sieh wrote:

On Tue, 24 Sep 2013, Nico Kadel-Garcia wrote:



Down, boy.

Scientific Linux is behind the times on available tools, because our
favorite upstream vendor has not yet released tools. Tools to work with
have been tested, effectively, with Fedora, and I expect our favorite
upstream vendor will include tools with release 7.x, which is not yet in
alpha or beta release. Check out
http://docs.fedoraproject.org/en-US/Fedora/18/html-single/UEFI_Secure_Boot_Guide/index.html
for a good breakdown of the issues and trade-offs.

UEFI is part of the old Palladium project from Microsoft, relabeled as
Trusted Computing. It is aimed squarely at DRM and vendor lock-in, not
security, for reasons that I could spend a whole day discussing. In the
meantime, yes, you can disable it for SL booting if needed, and reasonably
expect our favorite upstream vendor to have shims available when version 7
is published; they're already working well with recent Fedora releases. I'd
also *expect* those shims to be workable for SL 7, but someone may have to
plunk down some cash to get some keys signed, and spend some extra effort
to maintain the security needed for the relevant shims to work well with SL
kernels and environments.


Last week at LinuxCon North America the shim developers were still
developing.

I attended the UEFI Plugfest last week as part of Linux Con. Microsoft
gave a presentation on UEFI signing.  The presentation will be posted to
uefi.org website.

We are working on this.  Fermilab is a member of the UEFI Forum.

-Connie Sieh




On Tue, Sep 24, 2013 at 11:53 AM, Yasha Karant ykar...@csusb.edu wrote:


Secure boot is enabled.  Evidently, the only means to disable secure
boot
requires that a secure boot loader/configuration program be running --
e.g., the MS proprietary boot loader (typically, supplied as part of MS
Windows 8) must be used to disable secure boot if the UEFI actually
permits
this to be disabled (I have heard of some UEFI implementations that
do not
permit secure boot truly 

Re: rpmfusion failure due to conflict

2013-07-16 Thread jdow

It seems people here are adults with enough demands on their time that we
just adapt rather than complain. The lack of comment here on this issue
seems to support this view. It's REALLY refreshing.

In this case my rule is to simply do what the last commenter did to avoid
tangled morasses of comments. The exception is when I am interspersing
comments. Then I put my reply right under the section to which I am
commenting. Overall that seems to work best for the needs of clear
communications, our real goal.

{^_^}   Joanne

On 2013/07/16 12:40, Yasha Karant wrote:
...

(Etiquette:  does this list want start or end replies?  I have forgotten.)

Yasha Karant

On 07/16/2013 12:01 PM, John Pilkington wrote:




Re: SL6 on SSDs?

2013-06-10 Thread jdow

On 2013/06/10 10:29, Vladimir Mosgalin wrote:

Hi Chuck Munro!

  On 2013.06.10 at 09:42:38 -0700, Chuck Munro wrote next:


My question is how do I minimize writes to the disk array?  Is it
possible to significantly reduce disk writes once the host SL6 OS
has booted up and the guest OS's are running?


There is no real reason to do anything about it; any modern SSD, even
cheapest consumer model won't die from too many writes on low-loaded
server (they might die from controller failure or as a result of power
failure, but not from NAND flash chips wear). The special cases where
you should care about wear are loaded SQL databases (they often make all
data go through write buffer of sorts, such as xlog on PostgreSQL or
redo logs on Oracle DB), ZFS ZIL, maybe ext4 with data=journal option
(unsure) and similar cases, where a lot of data is constantly passing
through device.

I'm using SSDs on many servers and, except for the cases above, writes
never exceed typical desktop writes (1-3 TB / year). Any SSD should
endure such writes for many years; if it will die, something else would
be the reason.

So main advice is don't bother. If you are worried, make sure that SSD
you get provides information about life time writes in SMART; Crucial M4
and various Intel models do that, among many (but some don't).
Then, just check SMART after a few months of usage and stop worrying
after seeing how small write numbers will be.


The current hard-drive-based box has lots of RAM (8 GBytes) and each


You could mount /tmp as tmpfs, but then again, there isn't much point
for just a virtual host.


guest has a pretty small footprint, so swapping has never been
invoked.  So far the current SL6-based firewall pretty well runs
itself with very little effort on my part, so things like syslog are
usually not monitored much.  Can the /var filesystem be safely
mounted from a file server or does it have to be on a local drive
during bootup?


It can, but it requires tricks. /var/run and /var/lock should be local or
you'll have to hack init scripts. It's easier to run the whole / from NFS
(dracut supports that, though I'm not 100% sure the SL6 version does).

I really advise you not to try this, at least in SL6. Future versions
will have changes to make it easier (/var/run and /var/lock in tmpfs), but
currently, you'd better not.

Anyhow, writing logs hardly contributes to writes on an SSD because these
writes are buffered. Even 1 GB of written logs per day is only ~0.4 TB of
writes per year. Even the cheapest SSDs will handle 10 TB of lifetime
writes; most will handle more.


I do let logwatch and logrotate run on the current box (with hard
drives) but I'm tempted to reduce logging to a bare minimum and
rotate logs to /dev/null


Well, once logs WERE written, rotating them doesn't contribute to writes
(it usually just renames files).

However, you can do full remote logging - turn off local logs and send
them to remote system, if you really are desperate.



At this point it's all somewhat academic, but I'd like to consider
the possibility of reducing heat and eliminating as many moving


I wouldn't expect much difference from a heat perspective, unless you are
replacing tons of drives. Under medium load, an HDD consumes around 6 W and
an SSD around 2 W. You can save ten times that much by switching
to low-power CPU models, for example.


Your suggestions??


I really believe that you are looking in the wrong direction. There is
no need to do anything on system you described to reduce writes to SSD.
You will prolong theoretical flash life from 200 years to 210 years or
similar (real numbers, on most systems I've seen SSD flash wear is less
than 1% after 2 years of usage), which won't change real life of SSD.

Stop bothering about writes unless we are talking about special cases
like ones mentioned above (I've seen 0.5 PB writes per year for ZFS ZIL,
for example). If any, not using TRIM (and having huge write
amplification) reduces SSD life much more than saving up on writes.
Check out
http://blog.neutrino.es/2013/howto-properly-activate-trim-for-your-ssd-on-linux-fstrim-lvm-and-dmcrypt/
or google for SSD write amplification if you are curious.



Just a little note, Vladimir: please be aware that there appears to be
a problem with SSDs when you read the same portion of the disk very many
times per day. The section of flash seems to lose data and cannot be
refreshed after a couple of years. We have customers who use SSDs in theme
park ride vehicles as an audio server. It was a short, ride-length
audio track repeated on every run - every few minutes for a 12-hour day,
365 days per year.

We now counsel customers to use features in the program to allow storing
many copies of the audio track and rotate their use to avoid this wear
problem.

This is mentioned so infrequently in the literature that I am not sure it
has been generally recognized or dealt with by the disk manufacturers. It
surely astounded us when the reports started coming in.

{^_^}   
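On Vladimir's TRIM point: where the discard mount option isn't in use, a periodic batch trim keeps write amplification down. A sketch as a root crontab fragment, not a tested recipe - the paths and filesystem list are illustrative, and it assumes the fstrim utility from util-linux is actually installed on the system:

```shell
# Weekly batch TRIM instead of the 'discard' mount option.
@weekly /sbin/fstrim -v /     >> /var/log/fstrim.log 2>&1
@weekly /sbin/fstrim -v /var  >> /var/log/fstrim.log 2>&1
```

The same logic covers the /tmp-as-tmpfs suggestion earlier in the thread: an /etc/fstab line like `tmpfs /tmp tmpfs defaults,size=2g 0 0` keeps those writes off the flash entirely (the size is an arbitrary example).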

Re: need command line support of zoho

2013-03-15 Thread jdow

The point is that he seemed to use an SSL port and he apparently found
using SSL did not work. So was something else providing the SSL (or other
encryption) or was the provider all fooed up?

{o.o}


On 2013/03/15 20:05, Paul Robert Marino wrote:

Well, that depends.
If it's clear text and you have the right flags set, it will show you all of the
raw data.
Wireshark can in many cases decode it further.
However, if it's SSL/TLS encrypted, there is a tool, much to most infosec people's
dismay (and joy when it's useful), called ssldump that can take a tcpdump that
captures the full conversation and decode it.
But the answer is no, not out of the box.



-- Sent from my HP Pre3


On Mar 15, 2013 10:27 PM, jdow j...@earthlink.net wrote:

On 2013/03/15 19:14, Todd And Margo Chester wrote:
  On 03/15/2013 02:17 PM, Todd And Margo Chester wrote:
  Hi All,
 
  The connection just times out. Does anyone know what I am
  doing wrong here? This is Linux and the nail program.
  (The account does work from Thunderbird.)
 
  #!/bin/bash
  echo nail test | \
  nail -v \
  -S smtp-use-starttls \
  -S from=taperepo...@.com \
  -S smtp-auth=login \
  -S ssl-verify=ignore \
  -S smtp-auth-user=taperepo...@.com \
  -S smtp-auth-password=zz \
  -S smtp=smtp.zoho.com:465 \
  -s `dnsdomainname` zoho smtp test subject y...@zoho.com
 
 
  Many thanks,
  -T
 
 
  Okay, I've gotten a little further along. I am able to test
  with gmail but not yet with zoho:
 
  #!/bin/bash
  echo nail test | nail -v -s `dnsdomainname` zoho smtp test subject \
  -S smtp-use-starttls \
  -S smtp-auth=plain \
  -S ssl-verify=ignore \
  -S smtp=smtps://smtp.zoho.com:465 \
  -S from=x...@zoho.com \
  -S smtp-auth-user= \
  -S smtp-auth-password=hahahahaha \
  -S nss-config-dir=/home/linuxutil/mailcerts/ \
  yy...@zoho.com
 
 
  Gives me:
 
  250 AUTH LOGIN PLAIN
  STARTTLS
  220 Ready to start TLS
  SSL/TLS handshake failed: Unknown error -5938.
 
  Anyone know what causes this?
 
  Many thanks,
  -T
 
 
  Okay. I figured it out. I commented out -S smtp-use-starttls.
  Go figure.
 
  [editorial comment] AAHH!![/editorial comment]
 
  -T

Out of curiosity does tcpdump show the plain text login and message
transfer or is it encrypted?

{O.O}


Re: need command line support of zoho

2013-03-15 Thread jdow

On 2013/03/15 20:39, Todd And Margo Chester wrote:

On 03/15/2013 08:05 PM, Paul Robert Marino wrote:

Well, that depends.
If it's clear text and you have the right flags set, it will show you all
of the raw data.
Wireshark can in many cases decode it further.
However, if it's SSL/TLS encrypted, there is a tool, much to most infosec
people's dismay (and joy when it's useful), called ssldump that can take a
tcpdump that captures the full conversation and decode it.
But the answer is no, not out of the box.



-- Sent from my HP Pre3


On Mar 15, 2013 10:27 PM, jdow j...@earthlink.net wrote:

On 2013/03/15 19:14, Todd And Margo Chester wrote:
  On 03/15/2013 02:17 PM, Todd And Margo Chester wrote:
  Hi All,
 
  The connection just times out. Does anyone know what I am
  doing wrong here? This is Linux and the nail program.
  (The account does work from Thunderbird.)
 
  #!/bin/bash
  echo nail test | \
  nail -v \
  -S smtp-use-starttls \
  -S from=taperepo...@.com \
  -S smtp-auth=login \
  -S ssl-verify=ignore \
  -S smtp-auth-user=taperepo...@.com \
  -S smtp-auth-password=zz \
  -S smtp=smtp.zoho.com:465 \
  -s `dnsdomainname` zoho smtp test subject y...@zoho.com
 
 
  Many thanks,
  -T
 
 
  Okay, I've gotten a little further along. I am able to test
  with gmail but not yet with zoho:
 
  #!/bin/bash
  echo nail test | nail -v -s `dnsdomainname` zoho smtp test
subject \
  -S smtp-use-starttls \
  -S smtp-auth=plain \
  -S ssl-verify=ignore \
  -S smtp=smtps://smtp.zoho.com:465 \
  -S from=x...@zoho.com \
  -S smtp-auth-user= \
  -S smtp-auth-password=hahahahaha \
  -S nss-config-dir=/home/linuxutil/mailcerts/ \
  yy...@zoho.com
 
 
  Gives me:
 
  250 AUTH LOGIN PLAIN
  STARTTLS
  220 Ready to start TLS
  SSL/TLS handshake failed: Unknown error -5938.
 
  Anyone know what causes this?
 
  Many thanks,
  -T
 
 
  Okay. I figured it out. I commented out -S smtp-use-starttls.
  Go figure.
 
  [editorial comment] AAHH!![/editorial comment]
 
  -T

Out of curiosity does tcpdump show the plain text login and message
transfer or is it encrypted?

{O.O}



Don't know.  Does this help?

# ./MailxTest.rla
Resolving host smtp.zoho.com . . . done.
Connecting to 74.201.154.90 . . . connected.
220 mx.zohomail.com SMTP Server ready March 15, 2013 8:34:27 PM PDT
  EHLO server.aa.local
250-mx.zohomail.com Hello server.aaa.local
(static-50-124-80-106.drr01.grdv.nv.nv.frontiernet.net (50.124.80.106))
250-SIZE 2500
250 AUTH LOGIN PLAIN
  AUTH LOGIN
334 VXNlcm5hbWU6
  YWNjb3VudGluZ0BhbHBpbmVmYXN0ZW5lci5jb20=
334 UGFzc3dvcmQ6
  ZmNhOTMyRGNtYQ==
235 Authentication Successful
  MAIL FROM:account...@.com
250 Sender account...@.com OK
  RCPT TO:a...@.com
250 Recipient a...@.com OK
  RCPT TO:cc...@.net
250 Recipient cc...@.net OK
  DATA
354 Ok Send data ending with CRLF.CRLF
  .
250 Message received
  QUIT
221 mx.zohomail.com closing connection



tcpdump would show whether the transaction was in clear text or not. It
does appear there might be some encryption on the login, though.

{^_^}
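One clarification worth pinning down: the 334 challenges in the transcript are base64, not encryption. AUTH LOGIN merely encodes the prompts and credentials, so without TLS the password crosses the wire in trivially reversible form. Decoding the server's own challenge strings shows it:

```shell
# The two 334 challenge strings from the session above, decoded:
u=$(printf 'VXNlcm5hbWU6' | base64 -d)
p=$(printf 'UGFzc3dvcmQ6' | base64 -d)
printf '%s %s\n' "$u" "$p"   # prints: Username: Password:
```

It also bears on the earlier STARTTLS failure: port 465 is smtps, which speaks TLS from the first byte, so issuing STARTTLS on it is redundant and commonly breaks the handshake - consistent with the fix of dropping -S smtp-use-starttls.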


Latest kernel update

2013-03-11 Thread jdow

Something about the latest kernel update made the standard consoles
unusable. X is OK. And I can ssh in just fine. So it's not critical,
yet. There is a problem with alt-F2 and the like. It appears that the
window is either in a VERY large font or is somehow magnified so that
only a little bit of it shows.

Does that sound like a set of symptoms anybody knows how to fix?

{^_^}
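No fix surfaced in the thread, so this is only a hedged guess: oversized text on the virtual consoles after a kernel update is more often a kernel mode setting (framebuffer resolution) regression than a font problem. One low-risk way to test it, as a grub.conf fragment - the kernel version and root device below are placeholders, not from the thread:

```shell
# /boot/grub/grub.conf -- duplicate the new kernel's stanza and append
# nomodeset; if the VTs come back at a sane size under this entry, the
# regression is in the KMS driver, not the console font.
title Scientific Linux (new kernel, KMS disabled)
        root (hd0,0)
        kernel /vmlinuz-<new-version> ro root=/dev/mapper/vg-root nomodeset
        initrd /initramfs-<new-version>.img
```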

