Re: [SCIENTIFIC-LINUX-USERS] firefox 45.1 crashes

2016-04-29 Thread David Sommerseth
On 29/04/16 17:43, Graham Allan wrote:
> On 4/29/2016 10:37 AM, Iosif Fettich wrote:
>>
>> I'll just second Pat: no problem encountered so far, despite heavy use.
>>
>> Just in case that might make a difference (it did, occasionally, in the
>> past): I'm having 16 GiB of memory in my desktop.
>>
>> Iosif Fettich
> 
> Good point; I have a machine with 16GB or so, and no problems, but we
> certainly have some people with old/weak machine - we'll check if the amount
> of RAM correlates at all.

I've run Firefox 45 ESR since its release in RHEL 7.2 on a machine with 8GB RAM,
with 10-15 tabs open, without any issues.

I've also upgraded my laptops to SL7.2, but it is too early to say much about
the stability.  That said, I haven't really noticed anything special.


--
kind regards,

David Sommerseth


Re: SL7.2 Live DVD kde would not boot

2016-04-03 Thread David Sommerseth
On 03/04/16 17:10, John Pilkington wrote:
>>
> 
> I've had an interesting week with a new 3TB drive and a family box that has
> been running MS Vista for years.  I disconnected the Windows HD 'for safety'
> and installed kubuntu from the live DVD, with few problems until I tried a
> 'real' boot, which failed.  Eventually I installed buntu 14 with grub
> alongside Vista on the original HD, and also have buntu 16 beta on the new
> one; at present they will all boot and run.  Don't know if SL7 would do the
> same.  But the USB drive exploit looks handy.

This should work on the majority of Linux distributions, at least if you
use UUID for the /boot partitions.  Use of LVM can also simplify mounting the
root partition (/) and so on - unless you use UUID for those mount points too.

I've booted several old Linux installations from hard drives put into a USB
enclosure.  I haven't tried to do that with Windows, though that might work
too - but somehow I imagine it will freak out at some point when the drive
letters don't match properly.

To get an overview you can run 'blkid' or 'lsblk -o NAME,UUID' on your system
to see all devices and their unique UUID.  These tools are also valuable when
you need to modify /etc/crypttab manually.
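
For example (the device name, UUID and fstab line below are purely
illustrative):

  $ blkid /dev/sda1
  /dev/sda1: UUID="9b2c5618-e3f1-4f82-a33a-123456789abc" TYPE="xfs"

  # and then in /etc/fstab:
  UUID=9b2c5618-e3f1-4f82-a33a-123456789abc  /boot  xfs  defaults  0 2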


--
kind regards,

David Sommerseth


Re: How does NetworkManager monitor the connection files?

2016-03-30 Thread David Sommerseth
On 31/03/16 00:10, Benjamin Lefoul wrote:
> But sed -i ALSO changes the inode, and as I said it doesn't work:
> 
> root@hoptop:~# touch a
> root@hoptop:~# ls -i a
> 9700011 a
> root@hoptop:~# sed -i 's/q/a/g' a
> root@hoptop:~# ls -i a
> 9700013 a
> 

I've not looked into the NM code, but it wouldn't surprise me that much
if inotify is used.  It might be that it doesn't catch all the events
sed triggers, but only the modifications triggered by editors.

Can you also check what happens to the inode on a file if you use vim/emacs?
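
If you have inotify-tools installed, a quick way to test my speculation is to
watch which events actually fire:

  $ inotifywait -m /etc/sysconfig/network-scripts/
  (then run 'sed -i' and a vim save in another terminal,
   and compare the events reported for each)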


-- 
kind regards,

David Sommerseth



> 
> From: owner-scientific-linux-us...@listserv.fnal.gov 
> <owner-scientific-linux-us...@listserv.fnal.gov> on behalf of Tom H 
> <tomh0...@gmail.com>
> Sent: 30 March 2016 23:00
> To: SL Users
> Subject: Re: How does NetworkManager monitor the connection files?
> 
> On Wed, Mar 30, 2016 at 3:49 PM, Benjamin Lefoul
> <benjamin.lef...@nwise.se> wrote:
>>
>> I have set monitor-connection-files=true in my
>> /etc/NetworkManager/NetworkManager.conf
>>
>> It works fine (in fact, instantly) if I edit
>> /etc/sysconfig/network-scripts/ifcfg-eth0 with emacs or vi (for instance,
>> changing the IP).
>>
>> It fails miserably if I use sudoedit, or sed:
>>
>> # grep 100 /etc/sysconfig/network-scripts/ifcfg-eth0
>> IPADDR=192.168.4.100
>>
>> # sed -i 's/100/155/g' /etc/sysconfig/network-scripts/ifcfg-eth0
>>
>> Even though all stats (access modify and change) are renewed.
>>
>> It's worse than that: even nmcli con reload afterwards fails.
>>
>> In fact, the only way to get the ip to change is by entering the file with
>> vi, not touching it, and leave with ":wq" (not just ":q").
>>
>> Why is that? What is going on here?
>>
>> I know, I know, I can use nmcli in scripts, and not string-manipulation
>> tools, but say I don't want to... :)
>>
>> And still, during operations, I'd rather edit the files with sudoedit...
> 
> "sudo -e ifcfg-file" doesn't change the inode. Can you use "sudo vi
> ifcfg-file"? (Or whichever editor you prefer.)
> 


Re: named-chroot issue - AGAIN

2016-03-19 Thread David Sommerseth
On 17/03/16 12:40, Steven Haigh wrote:
> On 17/03/2016 10:25 PM, David Sommerseth wrote:
>> On 17/03/16 06:36, Bill Maidment wrote:
>>> Hi guys
>>> Another named update and still the named-chroot.service file has not been 
>>> fixed. It is really annoying to have to manually fix it every time, just to 
>>> get DNS working after an update.
>>> Why is the -t /var/named/chroot option included in the ExecStart but not in 
>>> the ExecStartPre
>>>
>>> ExecStartPre=/bin/bash -c 'if [ ! "$DISABLE_ZONE_CHECKING" == "yes" ]; then 
>>> /usr/sbin/named-checkconf -z /etc/named.conf; else echo "Checking of zone 
>>> files is disabled"; fi'
>>> ExecStart=/usr/sbin/named -u named -t /var/named/chroot $OPTIONS
>>>
>>> Surely named-checkconf should be run with the same named.conf file as named 
>>> !!!
>>>
>>> This was reported back in November 2015
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1278082
>>> This should have been fixed by now. How hard is a one line change to fix ???
>>
>> This bug has severity set to medium. That means it has most likely not been
>> considered critical enough by Red Hat to go into an errata in the 7.2 life
>> cycle. But as the status is ASSIGNED (not NEW, which is the first status
>> level) - it means someone is working on it.
>>
>> If you do not like this pace, you can log in to the Red Hat customer portal
>> and get in touch with Red Hat support.  If you can provide them with good
>> technical arguments why this must be added in the 7.2 life cycle, then you
>> might see this fixed sooner.
>>
>> If you do not have Red Hat subscription with support ... well, then you need
>> to patiently wait.  Scientific Linux builds on the source RPMs Red Hat 
>> releases.
>>
>> And the reason for these things to take time is that every BZ for RHEL goes
>> through several steps of quality control before a fix gets released. It means
>> Red Hat needs to allocate resources getting these bugs fixed, verified and
>> tested before users see the update. This is the key concept of enterprise
>> distributions, to put efforts into avoiding regressions or new bugs as much 
>> as
>> possible and to try to deliver a stable distribution which is properly
>> maintained and updated over many years.
> 
> I would agree with you if they didn't remove that option in a release.
> the -t /blah was actually removed in a commit - which QA failed to pick
> up (and likely chroot bind setup wasn't tested at all).
> 
> I don't think this is a great example of what RedHat does well - this is
> an example of what they do *really* bad.

I'm not going to argue against that; this could have been done better, and I
agree with you here.  On the other hand, maybe *that* is one reason it takes
time to get this issue resolved too: Red Hat QE working on improving the
situation, adding the needed regression tests and so on for this use case.  I
know I'm speculating now, but I also know that these guys really do their best
to avoid painful experiences for users and customers.  Unfortunately, they make
mistakes - as we all do from time to time.


--
kind regards,

David Sommerseth


Re: named-chroot issue - AGAIN

2016-03-19 Thread David Sommerseth
On 17/03/16 13:23, Tom H wrote:
> On Thu, Mar 17, 2016 at 12:53 PM, David Sommerseth
> <sl+us...@lists.topphemmelig.net> wrote:
>>
>> Not going to argue that this could have been done better, I agree with you
>> here. On the other hand, maybe *that* is one reason it takes time to get this
>> issue resolved too? That Red Hat QE is working on improving the situation,
>> adding needed regression tests and so on for this use case. I know I'm
>> speculating now, but I also know that these guys really do their best to 
>> avoid
>> painful experiences for users and customers. Unfortunately, they do mistakes
>> - as we all do from time to time.
> 
> Given the
> https://git.centos.org/blobdiff/rpms!bind.git/d56ed2d3a2736a07a09c268f3b2607cca8f1b6ca/SOURCES!named-chroot.service
> commit, there's probably a lot of hype in RH's QA marketing claims.
> I'm not implying that there's no QA at all but, in this case, if there
> was any, it sucked.

The people working on CentOS are not the same people working on RHEL,
even if they're working in the same company.  And RHEL is still the
upstream source of CentOS.

So if CentOS decides to fix this on their own, they need to keep track
of this until it's fixed in RHEL and then remove their own fix.  Of
course SL could do that as well, but that can be a maintenance burden.

That's the downside of being a downstream distro.


-- 
kind regards,

David Sommerseth


Re: named-chroot issue - AGAIN

2016-03-19 Thread David Sommerseth
On 17/03/16 06:36, Bill Maidment wrote:
> Hi guys
> Another named update and still the named-chroot.service file has not been 
> fixed. It is really annoying to have to manually fix it every time, just to 
> get DNS working after an update.
> Why is the -t /var/named/chroot option included in the ExecStart but not in 
> the ExecStartPre
> 
> ExecStartPre=/bin/bash -c 'if [ ! "$DISABLE_ZONE_CHECKING" == "yes" ]; then 
> /usr/sbin/named-checkconf -z /etc/named.conf; else echo "Checking of zone 
> files is disabled"; fi'
> ExecStart=/usr/sbin/named -u named -t /var/named/chroot $OPTIONS
> 
> Surely named-checkconf should be run with the same named.conf file as named 
> !!!
> 
> This was reported back in November 2015
> https://bugzilla.redhat.com/show_bug.cgi?id=1278082
> This should have been fixed by now. How hard is a one line change to fix ???

This bug has severity set to medium.  That means it has most likely not been
considered critical enough by Red Hat to go into an errata in the 7.2 life
cycle.  But as the status is ASSIGNED (not NEW, which is the first status
level), someone is working on it.

If you do not like this pace, you can log in to the Red Hat customer portal
and get in touch with Red Hat support.  If you can provide them with good
technical arguments why this must be fixed in the 7.2 life cycle, then you
might see it fixed sooner.

If you do not have a Red Hat subscription with support ... well, then you need
to wait patiently.  Scientific Linux builds on the source RPMs Red Hat releases.

And the reason these things take time is that every BZ for RHEL goes through
several steps of quality control before a fix gets released.  It means Red Hat
needs to allocate resources to get these bugs fixed, verified and tested
before users see the update.  This is the key concept of enterprise
distributions: to put effort into avoiding regressions and new bugs as much as
possible, and to deliver a stable distribution which is properly maintained
and updated over many years.
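
In the meantime, a local workaround which survives package updates would be a
systemd drop-in overriding ExecStartPre (just a sketch based on the unit file
quoted above; verify the paths on your own system):

  # mkdir -p /etc/systemd/system/named-chroot.service.d
  # cat > /etc/systemd/system/named-chroot.service.d/chroot-check.conf <<'EOF'
  [Service]
  ExecStartPre=
  ExecStartPre=/bin/bash -c 'if [ ! "$DISABLE_ZONE_CHECKING" == "yes" ]; then /usr/sbin/named-checkconf -t /var/named/chroot -z /etc/named.conf; else echo "Checking of zone files is disabled"; fi'
  EOF
  # systemctl daemon-reload

The empty ExecStartPre= line clears the packaged one before replacing it.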


--
kind regards,

David Sommerseth


Re: "Not using downloaded repomd.xml because it is older than what we have:"

2016-03-08 Thread David Sommerseth
On 9 March 2016 06:05:04 CET, Eero Volotinen <eero.voloti...@iki.fi> wrote:
>Do you have transparent/forced proxy in network?

It happens for me also on boxes connected directly to the internet; no proxy 
involved.  Every single day I see this message.


--
kind regards,

David Sommerseth 

>On Wed, 9 March 2016 at 6.53, Thomas Leavitt <tleav...@eag.com>
>wrote:
>
>> No feedback? Is everyone else just ignoring these messages?
>>
>> Thomas
>>
>> -Original Message-
>> From: owner-scientific-linux-us...@listserv.fnal.gov [mailto:
>> owner-scientific-linux-us...@listserv.fnal.gov] On Behalf Of Thomas
>> Leavitt
>> Sent: Monday, February 29, 2016 10:53 AM
>> To: scientific-linux-us...@fnal.gov
>> Subject: RE: "Not using downloaded repomd.xml because it is older
>than
>> what we have:"
>>
>> Not using downloaded repomd.xml because it is older than what we
>have:
>>   Current   : Tue Feb 16 08:58:20 2016
>>   Downloaded: Tue Feb 16 08:58:13 2016
>>
>> A 7 second variance shouldn't be that much of an issue.
>>
>> Regards,
>> Thomas Leavitt
>>
>> -Original Message-
>> From: Thomas Leavitt
>> Sent: Monday, February 29, 2016 10:52 AM
>> To: 'scientific-linux-us...@fnal.gov'
>> Subject: RE: "Not using downloaded repomd.xml because it is older
>than
>> what we have:"
>>
>> I've been meaning to write about this for a while... my inbox is
>flooded
>> every day with messages with this as the content from my SL7 machines
>(the
>> ones I have set up to forward mail sent to root). I checked the time
>on
>> them, they're using NTP, and the time agrees, almost to the second,
>with
>> that of network time services and other machines, I doubt there's
>even a 30
>> second variance.
>>
>> What's the strategy for dealing with this? Seems like it isn't an
>isolated
>> problem.
>>
>> Regards,
>> Thomas Leavitt
>>
>> -Original Message-
>> From: owner-scientific-linux-us...@listserv.fnal.gov [mailto:
>> owner-scientific-linux-us...@listserv.fnal.gov] On Behalf Of David
>> Sommerseth
>> Sent: Thursday, February 18, 2016 2:51 AM
>> To: Peter Boy; SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV
>> Subject: Re: "Not using downloaded repomd.xml because it is older
>than
>> what we have:"
>>
>> On 13/02/16 19:24, Peter Boy wrote:
>> > Hi all,
>> >
>> > since several months I  get constantly from anacreon:
>> >> ——<
>> > /etc/cron.daily/0yum-daily.cron:
>> >
>> > Not using downloaded repomd.xml because it is older than what we
>have:
>> > Current   : Thu Feb  4 16:13:26 2016
>> > Downloaded: Thu Feb  4 16:13:25 2016
>> >> ——<
>> >
>> >
>> > The time difference is quite minimal. And a manual „yum update“
>> > confirms that no updates are waiting.
>> >
>> > Using my favourite search engine I found it might have be caused by
>an
>> > unresponsive of lazy mirror. But I use the standard configuration,
>i.e.
>> > the mirror list just includes the three scientificlinux servers.
>Other
>> > entries refer old bugs long fixed.
>> >
>> > I tried a yum clean all but it didn’t fix it.
>> >
>> > And all our other don’t show this issue, but the configuration is
>all
>> > the same, at least according to my knowledge.
>> >
>> >
>> > Obviously, there is no harm done and it can be safely ignored. But
>it
>> > always pulls our issue alert button.
>>
>> Hi,
>>
>> I am seeing exactly the same.  I thought it was NTP issues related to
>my
>> own setup, where I have a local rsync mirror.  But then I installed
>from
>> scratch SL7 on another site without any local mirrors, and the same
>issue
>> appears there too.  So I see this both with public repositories as
>well as
>> local rsync repositories.
>>
>> I have also seen this on SL6, but not as frequent as on SL7.
>>
>> Even though they cause no obvious harm, it gets quite annoying when
>you
>> receive many of them during a day ... sometimes even several days in
>a row.
>>
>> Perhaps yum should be more graceful to the timestamp?  And just quiet
>> these messages if the time difference is less than 30 seconds or so.
>>
>>
>> --
>> kind regards,
>>
>> David Sommerseth
>>
>> 
>>
>> This e-mail may contain privileged or confidential information. If
>you are
>> not the intended recipient: (1) you may not disclose, use,
>distribute, copy
>> or rely upon this message or attachment(s); and (2) please notify the
>> sender by reply e-mail, and then delete this message and its
>attachment(s).
>> EAG, Inc. and its affiliates disclaim all liability for any errors,
>> omissions, corruption or virus in this message or any attachments.
>>
>>


Re: snooping windows 10 - how to stop it on a linux gateway?

2016-03-05 Thread David Sommerseth
On 05/03/16 13:23, David Sommerseth wrote:
> On 05/03/16 11:36, jdow wrote:
>> If squid can find usefully unique patterns in encrypted traffic I suppose 
>> that
>> might work. But that's one heck of a big "if".
> 
> A quick google search on "transparent https proxy" gave me these:
> 
> <http://docs.mitmproxy.org/en/stable/howmitmproxy.html>
> <http://rahulpahade.com/content/squid-transparent-proxy-over-ssl-https>
> 
> I probably have more "faith" in the mitmproxy approach, as that seems
> generally more designed with https in mind.

Another idea just came to mind.  You only need the transparent proxy when
connecting to IP ranges belonging to Microsoft.  So instead of an iptables
REDIRECT for all http/https connections, you can add separate rules with
--destination for the different Microsoft subnets.
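
Roughly like this (the interface, subnet and proxy port below are just
placeholders, not an authoritative list of Microsoft ranges):

  # iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 443 \
        -d 131.107.0.0/16 -j REDIRECT --to-ports 8080

One such rule per Microsoft subnet, instead of a single catch-all REDIRECT.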


--
kind regards,

David Sommerseth


>> On 2016-03-05 02:15, Karel Lang AFD wrote:
>>> Hmm ... yes, yes.
>>> Thanks for bringing this up.
>>> I force all http traffic through the squid proxy on our SL 6 gateway, this
>>> could
>>> be also helpful..
>>>
>>>
>>>
>>> On 03/05/2016 11:00 AM, prmari...@gmail.com wrote:
>>>> The only way I can think of is to force all internet access through a proxy
>>>> and filter it out in the proxy.
>>>> Then you don't give the machines any internet access just access to the 
>>>> proxy.
>>>> Unfortunately I do not have details for you on how to filter the snoop
>>>> messages because in I haven't looked at them but it should be fairly easy
>>>> using squid and an external Perl regex filter script or other filter
>>>> application, but you will take a latency hit because you will have to 
>>>> inspect
>>>> every transaction.
>>>>
>>>>Original Message
>>>> From: jdow
>>>> Sent: Friday, March 4, 2016 23:35
>>>> To: scientific-linux-us...@fnal.gov
>>>> Subject: Re: snooping windows 10 - how to stop it on a linux gateway?
>>>>
>>>> That windows update server is a relay for the "snoop" messages. About the 
>>>> only
>>>> way to totally stop the snoop messages is to totally isolate the network
>>>> containing Windows machines from the network. Any windows machine can serve
>>>> as a
>>>> relay point for any others.
>>>>
>>>> {o.o}
>>>>
>>>> On 2016-03-04 20:16, Karel Lang AFD wrote:
>>>>> Hi guys,
>>>>>
>>>>> firstly, sorry Todd, i don't know how it happened i got attached to your
>>>>> thread.
>>>>>
>>>>> secondly, thank you all for your thoughtful posts.
>>>>>
>>>>> I know it is not easy to block the selected traffic from windows 10 and
>>>>> you are
>>>>> right, it is being backported to windows 7 as well. Horrible and 
>>>>> disgusting.
>>>>>
>>>>> I already have windows server in LAN dedicated as a update server (work 
>>>>> of my
>>>>> windows colleagues), so the PC don't have to access windows update servers
>>>>> outside LAN - this should simplify things.
>>>>>
>>>>> Also the PCs must have internet access to email, http, https, ftp, sftp -
>>>>> simply
>>>>> the 'usual' stuff.
>>>>> I think, yet, there should be a way. I'll try to consult mikrotik experts
>>>>> (the
>>>>> router brand we use) and guys from our ISP.
>>>>> If i have something, i'll let you know :-)
>>>>>
>>>>> thank you, bb
>>>>>
>>>>> Karel
>>>>>
>>>>> On 03/05/2016 12:40 AM, Steven Haigh wrote:
>>>>>> On 05/03/16 07:24, Karel Lang AFD wrote:
>>>>>>> Hi all,
>>>>>>>
>>>>>>> guys, i think everyone heard already about how windows 10 badly treat
>>>>>>> its users privacy.
>>>>>>
>>>>>> My solution to this was to finally rid Windows 7 off my desktop PC - as
>>>>>> most of the telemetry has also been 'back ported' to Windows 7 also. You
>>>>>> can't stop it.
>>>>>>
>>>>>>> I'm now thinking about a way howto stop a windows 10 sending these data
>>>>>>> mining results to a microsoft telemetry servers and filter it on our SL
>>>>>> 6 linux gateway.

Re: snooping windows 10 - how to stop it on a linux gateway?

2016-03-05 Thread David Sommerseth
On 05/03/16 11:36, jdow wrote:
> If squid can find usefully unique patterns in encrypted traffic I suppose that
> might work. But that's one heck of a big "if".

A quick google search on "transparent https proxy" gave me these:

<http://docs.mitmproxy.org/en/stable/howmitmproxy.html>
<http://rahulpahade.com/content/squid-transparent-proxy-over-ssl-https>

I probably have more "faith" in the mitmproxy approach, as that seems
generally more designed with https in mind.


--
kind regards,

David Sommerseth



> On 2016-03-05 02:15, Karel Lang AFD wrote:
>> Hmm ... yes, yes.
>> Thanks for bringing this up.
>> I force all http traffic through the squid proxy on our SL 6 gateway, this
>> could
>> be also helpful..
>>
>>
>>
>> On 03/05/2016 11:00 AM, prmari...@gmail.com wrote:
>>> The only way I can think of is to force all internet access through a proxy
>>> and filter it out in the proxy.
>>> Then you don't give the machines any internet access just access to the 
>>> proxy.
>>> Unfortunately I do not have details for you on how to filter the snoop
>>> messages because in I haven't looked at them but it should be fairly easy
>>> using squid and an external Perl regex filter script or other filter
>>> application, but you will take a latency hit because you will have to 
>>> inspect
>>> every transaction.
>>>
>>>Original Message
>>> From: jdow
>>> Sent: Friday, March 4, 2016 23:35
>>> To: scientific-linux-us...@fnal.gov
>>> Subject: Re: snooping windows 10 - how to stop it on a linux gateway?
>>>
>>> That windows update server is a relay for the "snoop" messages. About the 
>>> only
>>> way to totally stop the snoop messages is to totally isolate the network
>>> containing Windows machines from the network. Any windows machine can serve
>>> as a
>>> relay point for any others.
>>>
>>> {o.o}
>>>
>>> On 2016-03-04 20:16, Karel Lang AFD wrote:
>>>> Hi guys,
>>>>
>>>> firstly, sorry Todd, i don't know how it happened i got attached to your
>>>> thread.
>>>>
>>>> secondly, thank you all for your thoughtful posts.
>>>>
>>>> I know it is not easy to block the selected traffic from windows 10 and
>>>> you are
>>>> right, it is being backported to windows 7 as well. Horrible and 
>>>> disgusting.
>>>>
>>>> I already have windows server in LAN dedicated as a update server (work of 
>>>> my
>>>> windows colleagues), so the PC don't have to access windows update servers
>>>> outside LAN - this should simplify things.
>>>>
>>>> Also the PCs must have internet access to email, http, https, ftp, sftp -
>>>> simply
>>>> the 'usual' stuff.
>>>> I think, yet, there should be a way. I'll try to consult mikrotik experts
>>>> (the
>>>> router brand we use) and guys from our ISP.
>>>> If i have something, i'll let you know :-)
>>>>
>>>> thank you, bb
>>>>
>>>> Karel
>>>>
>>>> On 03/05/2016 12:40 AM, Steven Haigh wrote:
>>>>> On 05/03/16 07:24, Karel Lang AFD wrote:
>>>>>> Hi all,
>>>>>>
>>>>>> guys, i think everyone heard already about how windows 10 badly treat
>>>>>> its users privacy.
>>>>>
>>>>> My solution to this was to finally rid Windows 7 off my desktop PC - as
>>>>> most of the telemetry has also been 'back ported' to Windows 7 also. You
>>>>> can't stop it.
>>>>>
>>>>>> I'm now thinking about a way howto stop a windows 10 sending these data
>>>>>> mining results to a microsoft telemetry servers and filter it on our SL
>>>>>> 6 linux gateway.
>>>>>
>>>>> Nope. There are no specific servers in use - just general - so whatever
>>>>> you block will end up killing other services.
>>>>>
>>>>>> I think it could be (maybe?) done via DPI (deep packet inspection). I
>>>>>> similarly filter torrent streams on our gateway - i patched standard SL
>>>>>> 6 kernel with 'xtables' (iptables enhancement) and it is working
>>>>>> extremely well.
>>>>>
>>>>> I would be interested to see if you could identify telemetry packets in
>>>>> the flow - but I'm not predicting much success. If you do get it, make
>>>>> sure you let the world know though!
>>>>>
>>>>>> I read (not sure if true) that some DNS resolutions to M$ servers are
>>>>>> even 'hardwired' via some .dll library, so it makes it even harder.
>>>>>
>>>>> Correct.
>>>>>
>>>>>> I'm no windows expert, but i'm and unix administrator concerned about
>>>>>> privacy of windows desktop/laptop users sitting inside my LAN.
>>>>>>
>>>>>> What i'd like to come up is some more general iptables rules, than
>>>>>> blocking specific IP addresses or names, because, apparently they may
>>>>>> change in any incoming windows update ...
>>>>>>
>>>>>> Anyone gave this thought already? Anyone else's concerned the way i am?
>>>>>
>>>>> Yup - and as I said, I'm now running Fedora 23 on my desktop (EL lags on
>>>>> a few things that I like - so Fedora is a happy medium for me - as I
>>>>> still have the fedora-updates-testing repo enabled. My work laptop as
>>>>> well as my personal laptop - and now my home desktop all run Fedora 23
>>>>> (KDE Spin if you hate Gnome 3 - like me).
>>>>>
>>>>
>>>
>>


Re: samba and ntfs flash drives ???

2016-03-04 Thread David Sommerseth
On 04/03/16 11:05, ToddAndMargo wrote:
[...snip...]
> # grep denied /var/log/audit/audit.log
> type=AVC msg=audit(1457071461.014:2015): avc:  denied  { write } for pid=26451
> comm="smbd" name="test" dev="dm-1" ino=593703
> scontext=system_u:system_r:smbd_t:s0 tcontext=unconfined_u:object_r:mnt_t:s0
> tclass=dir
> 
> These stem from when I was trying to get SeLinux to work
> on the share.  "Test" was a shared directory.  "Test"
> has since been removed.
> 
> I can browse/use the mount point without issue as
> long as I do not have an NTFS Flash Drive mounted to it.
> 
> No mention of /mnt/iso in the above
> # grep denied /var/log/audit/audit.log | grep iso
> # 

You skipped the 'audit2allow' tip I gave you.

------------------------------------

cat | audit2allow

type=AVC msg=audit(1457071461.014:2015): avc:  denied  { write } for pid=26451
comm="smbd" name="test" dev="dm-1" ino=593703
scontext=system_u:system_r:smbd_t:s0 tcontext=unconfined_u:object_r:mnt_t:s0
tclass=dir



#============= smbd_t ==============

# This avc can be allowed using the boolean 'samba_export_all_rw'
allow smbd_t mnt_t:dir write;
------------------------------------

See the line "This avc can be allowed using the boolean ..."  So just do:

  # setsebool -P samba_export_all_rw 1


--
kind regards,

David Sommerseth


Re: samba and ntfs flash drives ???

2016-03-04 Thread David Sommerseth
On 4 March 2016 09:45:38 CET, ToddAndMargo <toddandma...@zoho.com> wrote:
>Hi All,
>
>Google is killing me here!
>
>Scientific Linux 7.2, 64 bit
>
>$ rpm -qa samba
>samba-4.2.3-11.el7_2.x86_64
>
>Is there some trick to mounting an NTFS USB flash drive and
>sharing it with Samba?
>
>I am trying to share an NTFS flash drive with samba.
>If the drive is not mounted, I can do what I want
>from Windows 7 and XP on the mount point.  I have
>full access.
>
>But, when I mount the stick to the mount point and
>try to browse the mount with W7 or XP, I get "permission
>denied".  Specifically, from the W7 machines samba log:
>
>   ../source3/smbd/uid.c:384(change_to_user)
>   Skipping user change - already user
>
>   ../source3/smbd/open.c:881(open_file)
>   Error opening file . (NT_STATUS_ACCESS_DENIED)
>   (local_flags=0) (flags=0)
>
>I mount suchlike:
>
># mount -t ntfs -rw -o 
>users,exec,sync,uid=todd,gid=users,fmask=000,dmask=000 /dev/sdc1
>/mnt/iso
>
>(I know I don't need the masks, but I left them there in case
>they were needed)
>
>After mounting:
># ls -al /mnt/iso
>
>total 1193
>drwxrwxrwx.  1 todd users   4096 Mar  3 23:30 .
>drwxr-xr-x. 13 todd users   4096 Mar  3 21:47 ..
>-rwxrwxrwx.  1 todd users122 Apr 12  2011 autorun.inf
>drwxrwxrwx.  1 todd users   4096 Apr 12  2011 boot
>-rwxrwxrwx.  1 todd users 383786 Apr 12  2011 bootmgr
>-rwxrwxrwx.  1 todd users 669568 Apr 12  2011 bootmgr.efi
>drwxrwxrwx.  1 todd users  0 Apr 12  2011 efi
>-rwxrwxrwx.  1 todd users 106768 Apr 12  2011 setup.exe
>drwxrwxrwx.  1 todd users  40960 Apr 12  2011 sources
>drwxrwxrwx.  1 todd users  0 Apr 12  2011 support
>drwxrwxrwx.  1 todd users  0 Apr 12  2011 upgrade
>
>My smb.conf:
>
>[iso]
>   comment = mnt on rn1 -- Mount as M:
>   path = /mnt/iso
>   valid users = @users
>   write list = @users
>   force group = users
>   force user = todd
>   oplocks = no
>   level2 oplocks = no
>   strict locking = no
>   blocking locks = no
>   force create mode = 
>   create mode = 0777
>   force directory mode = 
>   directory mode = 0777
>   map system = yes
>   map hidden = yes
>   writable = yes
>
>Trying simpler:
>   [iso]
>   comment = mnt on rn1 -- Mount as M:
>   path = /mnt/iso
>   force group = users
>   force user = todd
>Doesn't work either
>
>What am I doing wrong?
>
>Many thanks,
>-T


# grep denied /var/log/audit/audit.log

If you see something which looks related, pipe it to audit2allow and see what 
it suggests.  Often you may get some hints that you need to flip an SELinux boolean.
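
Something like this should do it, all in one go:

  # grep denied /var/log/audit/audit.log | audit2allow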


--
kind regards,

David Sommerseth


Re: "Not using downloaded repomd.xml because it is older than what we have:"

2016-02-18 Thread David Sommerseth
On 13/02/16 19:24, Peter Boy wrote:
> Hi all, 
> 
> since several months I  get constantly from anacreon:
>> ——<
> /etc/cron.daily/0yum-daily.cron:
> 
> Not using downloaded repomd.xml because it is older than what we have:
> Current   : Thu Feb  4 16:13:26 2016
> Downloaded: Thu Feb  4 16:13:25 2016
>> ——<
> 
> 
> The time difference is quite minimal. And a manual „yum update“
> confirms that no updates are waiting.
> 
> Using my favourite search engine I found it might have be caused by
> an unresponsive of lazy mirror. But I use the standard configuration, i.e.
> the mirror list just includes the three scientificlinux servers. Other
> entries refer old bugs long fixed.
> 
> I tried a yum clean all but it didn’t fix it.
> 
> And all our other don’t show this issue, but the configuration is
> all the same, at least according to my knowledge.
> 
> 
> Obviously, there is no harm done and it can be safely ignored. But
> it always pulls our issue alert button.

Hi,

I am seeing exactly the same.  I thought it was NTP issues related to my
own setup, where I have a local rsync mirror.  But then I installed SL7
from scratch on another site without any local mirrors, and the same
issue appears there too.  So I see this both with public repositories and
with local rsync mirrors.

I have also seen this on SL6, but not as frequently as on SL7.

Even though these messages cause no obvious harm, it gets quite annoying when
you receive many of them during a day ... sometimes even several days in a row.

Perhaps yum should be more forgiving about the timestamp, and just quiet
these messages if the time difference is less than 30 seconds or so.


-- 
kind regards,

David Sommerseth



Re: two mysteries

2016-01-27 Thread David Sommerseth
On 27/01/16 17:58, Yasha Karant wrote:
> My laptop has an external "hardware" expansion insert slot, and I might be
> able to find such a 802.11 NIC.

USB wireless interfaces should also work, using USB redirection.  But it
depends on your performance needs on the wireless network.  I've done that a
few times with USB Ethernet devices when playing around on odd projects.

The external "hardware" expansion may or may not work, depending on if is
possible to use a PCI Pass-through mode on that interface or not.
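
As an illustration with KVM/libvirt, a USB device can be handed to a guest
using a hostdev snippet (the vendor/product IDs and the guest name below are
placeholders - look the IDs up with 'lsusb'):

  $ cat usb-wifi.xml
  <hostdev mode='subsystem' type='usb' managed='yes'>
    <source>
      <vendor id='0x0bda'/>
      <product id='0x8187'/>
    </source>
  </hostdev>
  $ virsh attach-device myguest usb-wifi.xml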


--
kind regards,

David Sommerseth


Re: two mysteries

2016-01-27 Thread David Sommerseth
On 27/01/16 11:13, jdow wrote:
>>
> Fascinating. I made a bad "assumption" about network devices. It seems they
> are created dynamically without any presence in /dev.

IIRC, *BSD provides /dev nodes for network devices which user space can
use for configuring them and such.  But it's been many years since I played
with FreeBSD, so my memory is hazy.


--
kind regards,

David Sommerseth


Re: two mysteries

2016-01-26 Thread David Sommerseth
On 26/01/16 08:13, Yasha Karant wrote:
> On 01/25/2016 04:30 PM, David Sommerseth wrote:
[...snip...]
>> But  KVM is the core hypervior.  It is in fact just a kernel
>> module which
>> you can load at any time on systems with CPUs supporting hardware
>> virtualization (VT-d or similar, most modern Intel, AMD and IBM Power 7/8
>> supports KVM).
>>
>> libvirt is the management backend, which provides a generic API. 
>> libvirt can
>> be used against other hypervisors as well, such as Xen, but probably more
>> often used with KVM.
>>
>> qemu-kvm is the KVM virtual machine process.  Each qemu-kvm process is
>> started
>> per VM.  You seldom start these processes manually, but they are
>> kicked off by
>> libvirt.
>>
>> virt-manager is a management GUI front-end.  And virsh is a console based
>> management tool.  Both connects to the libvirt API.
>>
>> Further, you can also download an oVirt Live image and boot that on a
>> bare-metal or virtual machine.  oVirt can then connect to libvirt and
>> provide
>> an even more feature rich management tool.
>>
>> virt-manager and oVirt can also connect to several systems running
>> libvirt
>> simultaneously, so you can manage more hypervisors from a single
>> front-end.
>> And there are probably even more front-ends, like "Boxes" (not really
>> tried it).
>>
>> I dunno much about vmware stuff, so I will refrain to comment that.  But
>> VirtualBox is also two-fold.  My experience with VirtualBox is now
>> quite old
>> (5-6 years ago).  You can start VirtualBox guests without a kernel
>> support
>> module loaded, which would work on most hardware.  But performance was
>> not too
>> good at all.  If you got the init.d script to build the kernel module,
>> you
>> could get quite acceptable performance.  However, I see VirtualBox
>> more like a
>> single package which gives you both hypervisor and management tool in
>> a single
>> software package.
>>
>> Even though VirtualBox is more a "single unit" and KVM/Qemu/libvirt
>> consists
>> of more components ... you normally don't notice that when you start
>> VMs via
>> the management tools.
>>
> Thank you for your detailed exposition.  My primary concern is that I do
> *NOT* want a hypervisor actually controlling the physical hardware; we
> have enough security vulnerabilities with a "hardened" supervisor such
> as EL 7.  

You can run virtual machines without a hypervisor.  But that will not
give you good performance in general.  Running in this mode is often
called 'emulation': the hardware a computer needs is emulated by
software in user space, without anything running in kernel space at all.
You can do this with libvirt and qemu too, but then you use 'qemu'
and not 'qemu-kvm'.

As a related side-track: running with a hypervisor only allows guests of
the same CPU family as the bare-metal host.  With emulation, the CPU seen
on the inside of the guest can be whatever the emulator supports.  With
emulation you can run powerpc, mips or even s/390 based environments -
but it is slow compared to bare-metal performance, as everything you do
is emulated.
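
For example, booting a PowerPC guest under full emulation could look like
this, if your distro ships the qemu-system-* binaries (the disk image name
is just a placeholder):

  $ qemu-system-ppc -m 512 -hda powerpc-guest.img

No KVM involved here; everything goes through qemu's emulation, hence the
performance penalty.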

Likewise with VirtualBox: it goes into emulated mode when it does not
have the kernel module (vbox.ko? I don't recall right now).  This also
gives much poorer performance.

I do not know enough about vmware, but their early products did run on
hardware before hardware had any virtualization features at all.  I
suspect they also needed some kind of kernel module to provide decent
performance.  Once the bare-metal hardware got virtualization support,
you still need the kernel module - but now the module takes advantage of
the hardware capabilities as well, increasing the performance even more.

So to simplify it a bit: Qemu, VirtualBox and vmware (I suspect) need a
kernel module to provide decent performance, and these modules instrument
the kernel with at least hypervisor-like capabilities.

> My secondary issue is the actual human clock execution time in
> the VM as contrasted with the same OS/environment running on the
> physical hardware.  I have found that current production releases of
> VirtualBox and VMware (e.g., VMware player) provide acceptable
> performance, although the USB interface on VMware now does seem better
> than VirtualBox that evidently still has issues (one of the mysteries).

And this is what the hypervisor does.  It provides a channel from the
hardware on the bare-metal to the guest VM.

And to get an acceptable human clock execution time inside a virtual
guest OS, you will need a hypervisor.  So you have most likely been
running both 

Re: two mysteries

2016-01-25 Thread David Sommerseth
On 25/01/16 19:32, Yasha Karant wrote:
> On 01/24/2016 06:06 PM, Lamar Owen wrote:
>> On 01/23/2016 01:30 PM, Yasha Karant wrote:
>>> Perhaps someone else has experienced what I related below and can comment
>>> -- SL 7x.
>>>
>>> 1.  ... For 802.3, I prefer to use a manual configuration, not 
>>> NetworkManager.
>>
>> For a dynamic connection even with a wired Ethernet you should use the
>> supported NetworkManager stack, your personal preferences aside.  NM works
>> and doesn't require munging for a simple DHCP wired connection.
>>
>>>
>>> 2.  ...Note that I must use MS Win to work with these devices as the
>>> application software for the device in question is *NOT* available for
>>> linux, the device is proprietary (no source code available), and
>>> CrossOver/Wine does not support USB -- forcing the use of a VM running a MS
>>> Win gues
>>
>> Neither VMware nor VirtualBox ship as part of SL.  KVM does, and USB
>> passthrough works very well with Windows 7 running in a KVM virtual machine
>> on my laptop.  It just works, and it's already part of SL; why not use it? 
>> Performance is very good in my experience, and I'm running a few pieces of
>> software in Win 7 for the same reasons as you.  You're also far more likely
>> to get useful help using KVM, either from the list or from other sources,
>> such as the Red Hat or Fedora documentation.
> 
> From the KVM site (http://www.linux-kvm.org/page/Management_Tools) that has a
> RedHat logo, there is a list of management interfaces, including VMM (Virtual
> Machine Manager -- https://virt-manager.org/screenshots/ ) that also appears
> to be a Red Hat entity.  Anyone using VMM?  VMM appears to allow a true host
> OS (supervisor, not hypervisor) with the VM ("hypervisor") running under the
> OS (as with VMWare workstation/player or VirtualBox), thus booting an OS, not
> a hypervisor that actually provisions for guest supervisors.  Is this correct?

This was a bit confusing for me (getting late, so probably stupid to reply now).

But KVM is the core hypervisor.  It is in fact just a kernel module which
you can load at any time on systems with CPUs supporting hardware
virtualization (VT-x or similar; most modern Intel, AMD and IBM Power 7/8
CPUs support KVM).
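
For example, loading the module manually on an Intel box (kvm_amd on AMD CPUs):

  # modprobe kvm_intel
  # lsmod | grep kvm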

libvirt is the management backend, which provides a generic API.  libvirt can
be used against other hypervisors as well, such as Xen, but is probably most
often used with KVM.

qemu-kvm is the KVM virtual machine process.  Each qemu-kvm process is started
per VM.  You seldom start these processes manually, but they are kicked off by
libvirt.

virt-manager is a management GUI front-end.  And virsh is a console based
management tool.  Both connect to the libvirt API.

Further, you can also download an oVirt Live image and boot that on a
bare-metal or virtual machine.  oVirt can then connect to libvirt and provide
an even more feature rich management tool.

virt-manager and oVirt can also connect to several systems running libvirt
simultaneously, so you can manage more hypervisors from a single front-end.
And there are probably even more front-ends, like "Boxes" (I haven't really
tried it).


I dunno much about the vmware stuff, so I will refrain from commenting on
that.  But VirtualBox is also two-fold.  My experience with VirtualBox is now
quite old (5-6 years ago).  You can start VirtualBox guests without a kernel
support module loaded, which would work on most hardware.  But performance was
not too good at all.  If you got the init.d script to build the kernel module,
you could get quite acceptable performance.  However, I see VirtualBox more as
a single package which gives you both the hypervisor and the management tool.

Even though VirtualBox is more a "single unit" and KVM/Qemu/libvirt consists
of more components ... you normally don't notice that when you start VMs via
the management tools.


I hope this gave a broader perspective.


--
kind regards,

David Sommerseth


Re: forced fsck

2016-01-17 Thread David Sommerseth
On 17 January 2016 04:18:22 CET, ToddAndMargo <toddandma...@zoho.com> wrote:
>On 01/16/2016 07:10 PM, Brandon Vincent wrote:
>> On Sat, Jan 16, 2016 at 8:06 PM, ToddAndMargo <toddandma...@zoho.com>
>wrote:
>>> Would this be a one time fsck or every boot?
>>
>> As long as the kernel parameter(s) is passed during boot (manually
>> entered or set as the default in GRUB) a fsck should run every time.
>>
>> Brandon Vincent
>>
>
>Any way to do it once?

When you boot the box and see the grub menu, select the proper line with arrow 
keys and hit the E key on the keyboard; that enters the edit mode.  Use arrow 
keys again to locate the kernel command line and append these options at the 
end of the line.  IIRC, it is ctrl-X to boot with these changes.  But these 
changes are only temporary and only valid for the current boot.
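
For reference, on SL7 with systemd the kernel parameters in question would be
something like this (as documented for systemd-fsck):

  fsck.mode=force fsck.repair=yes
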
--
kind regards,

David Sommerseth


Re: Perl 6?

2016-01-10 Thread David Sommerseth
On 10/01/16 23:29, ToddAndMargo wrote:
> On 01/10/2016 09:12 AM, James Rogers wrote:
>> The usual question: is anyone actually using P6 for anything productive?
>>
>> --W
> 
> Isn't it a little early for that question considering it
> was just released?

So the benefit of seeing Perl6 in EL distributions "now" is?

Yeah, I know, it can be seen as a "chicken and egg" problem.  But that's
where EPEL may be a middle ground, to prove it is needed.

However, not everything in Fedora ends up in RHEL either.  So don't have too
high expectations just yet.  There is no reason RH will spend much time and
energy on this before Perl6 gains a critical mass of users and Perl6 software
RH customers want to use.

Just my 2 cents


--
kind regards,

David Sommerseth


Re: need rsync exclude help

2015-03-07 Thread David Sommerseth
 From: ToddAndMargo toddandma...@zoho.com
 To: SCIENTIFIC-LINUX-USERS SCIENTIFIC-LINUX-USERS@listserv.fnal.gov
 Sent: 7. March 2015 05:40:43
 Subject: Re: need rsync exclude help

 --exclude='{wine-*,wine-1.7.24}' /home/CDs/Linux /mnt/MyCDs/.

 
 I am not real certain that the {} thingy works correctly.
 Anyway, I only needed 'wine-*'

That seems redundant in this case.  You can always test such expansions using 
'echo':

  $ echo {wine-*,wine-1.7.24}
  wine-* wine-1.7.24
  $ echo wine-{1.7.24,package-1.2.3}
  wine-1.7.24 wine-package-1.2.3

Here I also added a little demonstration of how the {} expansion can work.


--
kind regards,

David Sommerseth


Re: ypbind not registering with rpcbind on SL7

2015-01-06 Thread David Sommerseth
- Original Message -
 From: Konstantin Olchanski olcha...@triumf.ca
 To: Stephen John Smoogen smo...@gmail.com
 Cc: Dirk Hoffmann hoffm...@cppm.in2p3.fr, SCIENTIFIC-LINUX-USERS 
 SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV
 Sent: Tuesday, 6 January, 2015 22:18:54
 Subject: Re: ypbind not registering with rpcbind on SL7

 On Tue, Jan 06, 2015 at 11:20:30AM -0700, Stephen John Smoogen wrote:
 On 6 January 2015 at 05:08, Dirk Hoffmann hoffm...@cppm.in2p3.fr wrote:
 
  I installed SL7 yesterday from the standard DVD in Computing node
  flavour. yum update ran correctly, then I needed YP/NIS.


 Wow.. I didn't know ypbind was still in use :)?

 
 There is no replacement to NIS for small clusters.
 
 Vendors send us in the direction of LDAP, which is supposed to be light
 weight.
 
 Well, if LDAP is light-weight, I hate to see what they consider as
 normal-weight.
 
 With NIS, management is vi /etc/auto.home; make -C /var/yp.
 
 Wake me up when LDAP gets anywhere near that easy to use.

I'll admit that my IT career has mostly missed the yp/nis days (mostly due to 
working in companies with just a handful of servers or less).  But!

I dare you to try out FreeIPA.  I've tested it in a slightly bigger environment 
(~30 boxes), and decided to roll it out at home just for fun, to play more with 
it.  It doesn't eat that much CPU or disk resources (well, some 100MB), and it 
is really easy to set up and play with, with both a reasonable web UI and a 
command line interface for the same tasks.  It is firewall and SELinux 
friendly, and lets you do really nice stuff such as DNS SSHFP records (no more 
need for hosts in ~/.ssh/known_hosts), centralised SSH public key management, 
Kerberos SSO and all the other stuff NIS can do.

Regarding resource usage: at home I installed FreeIPA on a fairly well-loaded
HP Microserver G7 (AMD N36L) with 8GB RAM running 5 VMs.  The average CPU load 
is 60%, with ~7GB used for the VMs.  The admin web console works very well and 
all IPA domain members get the authentication done fairly quickly.  I've not 
noticed any performance drop on the VMs either.

What I basically did:

* IPA server
  - yum install ipa-server
  - ipa-server-install (see --help for enabling DNS server and more features)
  - Go to http://$SERVER
  - Login as admin and start playing

* IPA clients to become domain members
  - yum install ipa-client
  - Ensure /etc/resolv.conf 'nameserver' points at the IPA server
  - ipa-client-install  (see --help for more advanced features)

Also check out the documentation (you'll find relevant versions of it in 
https://access.redhat.com
under Identity Management).  It is quite good and accurate.

And that's basically it ... run kinit and you have SSO to all your boxes.  Or 
upload your
SSH public key to your IPA user account, and you can SSH to all boxes without 
uploading
any public keys anywhere else.
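
For example, uploading a key from the command line (the username is just a
placeholder):

  $ ipa user-mod youruser --sshpubkey="$(cat ~/.ssh/id_rsa.pub)"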

My playing has been done with SL6, SL7 and Fedora 19.  My next step is to start 
playing with IPA servers on SL7, which ships an even newer version of FreeIPA 
with some more features.

By the way, setting up master-master replication with more IPA servers is also 
really easy.  However, there is a bug in the LDAP server which needs a 
configuration workaround.  But once that's done, it works really smoothly.

Yes, IPA is probably using more resources than yp/nis, but it also provides 
much more than just yp/nis.

--
kind regards,

David Sommerseth


Re: How to do WPA wifi authentication at run level 3 on SL 5.5 ?

2014-10-08 Thread David Sommerseth
On 08/10/14 04:30, Allen Wilkinson wrote:
 David,
 
 Key question is how do I configure network connections with
 NetworkManager from the command line?

Ahh!  I see.  IIRC, nmcli in EL5 does not support that.  I believe there
is some support for it in EL6 and even more in EL7.  But for EL5, I
believe you need to dig into the configuration files in /etc/NetworkManager.

I'm sorry, I have only two production boxes left with EL5, and neither
of them use NetworkManager, so it's hard for me to point you further.

-- 
kind regards,

David Sommerseth



 On Wed, 8 Oct 2014, David Sommerseth wrote:
 
 On 07/10/14 13:55, Allen Wilkinson wrote:
 I could use help on the SUBJECT problem.

 This is for an old laptop that uses the ipw2200 wifi driver.
 It assigns the wifi to eth1.

 eth0 for wired Ethernet is active okay, and I want eth1 active at the
 same time.

 ifup eth1 seems to only allow WEP keys successfully.
 NetworkManager never seems to connect at run level 3 using WPA for any
 configuration that I can figure out. nm-tool does show WPA should be
 possible.


 Hi Allen,

 If you already have configured the network connections using
 NetworkManager, it should be fairly possible to start the wireless
 network using 'nmcli'.  That's a command line tool for Network Manager.

 You most likely need to play around with 'nmcli con'.  F.ex. I have a
 wirelss config called 'home'.  So to connect from the command line, I do
 this:

  [user@host:~] $ nmcli con up id home

 I'll admit, it's a long time since I played with EL5, so it might not be
 fully supported.  But on EL6 and newer, this is possible.


 -- 
 kind regards,

 David Sommerseth



Re: How to do WPA wifi authentication at run level 3 on SL 5.5 ?

2014-10-07 Thread David Sommerseth
On 07/10/14 13:55, Allen Wilkinson wrote:
 I could use help on the SUBJECT problem.
 
 This is for an old laptop that uses the ipw2200 wifi driver.
 It assigns the wifi to eth1.
 
 eth0 for wired Ethernet is active okay, and I want eth1 active at the
 same time.
 
 ifup eth1 seems to only allow WEP keys successfully.
 NetworkManager never seems to connect at run level 3 using WPA for any
 configuration that I can figure out. nm-tool does show WPA should be
 possible.
 

Hi Allen,

If you already have configured the network connections using
NetworkManager, it should be quite possible to start the wireless
network using 'nmcli'.  That's a command line tool for NetworkManager.

You most likely need to play around with 'nmcli con'.  F.ex. I have a
wireless config called 'home'.  So to connect from the command line, I do
this:

  [user@host:~] $ nmcli con up id home

I'll admit, it's been a long time since I played with EL5, so it might not be
fully supported there.  But on EL6 and newer, this is possible.


-- 
kind regards,

David Sommerseth


Re: [SCIENTIFIC-LINUX-USERS] Questions about SL 7.0

2014-09-03 Thread David Sommerseth
On 03/09/14 10:33, Andreas Mock wrote:
 Hi Pat, hi Patrick,
 
 thanks for your answers and comments.
 
 How would someone like me get a SRPM for a binary package found or installed 
 on
 a SL 7.0 system?

yumdownloader --source $PKGNAME
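
(yumdownloader comes from the yum-utils package, in case it is not already
installed.)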


--
kind regards,

David Sommerseth



 Von: Patrick J. LoPresti [mailto:lopre...@gmail.com]
 Gesendet: Dienstag, 2. September 2014 23:22
 An: Pat Riehecky
 Cc: Andreas Mock; SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV
 Betreff: Re: AW: [SCIENTIFIC-LINUX-USERS] Questions about SL 7.0

 On Tue, Sep 2, 2014 at 2:11 PM, Pat Riehecky riehe...@fnal.gov wrote:

 The sources were taken from git.  They were then compared to the
 sources from the public Release Candidate provided by upstream on April
 22 2014.
 There were very few changes from this Release Candidate to the
 official release.

 Nice work.

 All the Security/Enhancement/Bugfix code comes out of git as the
 source rpms for these were never publicly released.

 Does this mean there is no way to correlate security/bugfix updates from
 Red Hat with the changes in git, and therefore no way to know how far SL is
 diverging from RHEL over time?

 Is the git tree entirely RHEL + released updates, or are unreleased CentOS
 changes mixed in as well?

 Presumably, anyone with a RHEL subscription (and the right tools) could
 compare the git repository against the update SRPMs, at least to tell you
 whether they are the same. Would that be a violation of the subscription
 terms, I wonder?

 Just curious.

  - Pat


-- 
kind regards,

David Sommerseth


Re: nmap to find mac addressees

2014-09-02 Thread David Sommerseth
On 02/09/14 20:16, Earl Ramirez wrote:
 On Tue, 2014-09-02 at 09:47 -0400, Lamar Owen wrote:
 On 09/01/2014 05:37 PM, ToddAndMargo wrote:


 An RPM [of autoscan-network] yet to try?



 In my own quick build attempts, I was not successful in building a 
 running binary from source.  Also, the binary tarball didn't run, 
 either.  But I'm running 64-bit and the tarball is 32-bit, and there are 
 a few indications that it might not build properly as 64-bit.

 But it wasn't high on my priority list, either, so I only put half an 
 hour or so into seeing just how easy or difficult it would be to build.  
 I did track down dependencies as they were declared, and the binary 
 built, but it would not actually run as built.  So I've put it on the 
 back burner here, since, as I say, it's not a terribly high priority to 
 me at the moment.  If you need it, you can use a BackTrack/Kali live 
 disk which as far as I know still includes autoscan-network.
 I have stumble upon the source RPM for it and I am also in the process
 of getting it to work on el6 (for now, will be another story for el7).
 
 I thinking about moving the install directory from /usr/share/apps
 to /opt/* directory, thoughts are welcome. I am doing the build on a
 vanilla el6 install and below are the build dependencies, which I plan
 to sort out within a few days if time permits.
 
 rpmbuild -ba rpmbuild/SPECS/autoscan.spec
 error: Failed build dependencies:
   gnomeui2-devel is needed by autoscan-1.50-1.el6.x86_64
   libao-devel is needed by autoscan-1.50-1.el6.x86_64
   libvorbis-devel is needed by autoscan-1.50-1.el6.x86_64
   net-snmp-devel is needed by autoscan-1.50-1.el6.x86_64
   gtk+2-devel is needed by autoscan-1.50-1.el6.x86_64
   libgtk-vnc-devel is needed by autoscan-1.50-1.el6.x86_64
   libgnome-keyring-devel is needed by autoscan-1.50-1.el6.x86_64
   vte-devel is needed by autoscan-1.50-1.el6.x86_64
   pcap-devel is needed by autoscan-1.50-1.el6.x86_64
 
 This is my hobby and if there anyone with extensive experience I am
 happy to get some little tips.
 

I'd recommend installing mock and building it via mock.  Your user
account must be a member of the 'mock' group for this to work.

Mock pulls down the needed packages for the distribution you build your
packages for, unpacks them in a chroot and does the complete build process
inside that chroot.  On success you'll get RPMs, and on failure you'll get
a lot of log files to study :)

As I have my own mock configurations, I don't remember the exact syntax
... but it's something like this:

   $ mock -r epel-6-x86_64 --rebuild autoscan-1.50-1.el6.src.rpm

Sit down and wait for a while, and the results can be found in the
/var/lib/mock/epel-6-x86_64/results directory.

Using this approach, the same computer can build the vast majority of Fedora
and EPEL packages, for several CPU architectures.  Look in /etc/mock to see
all the available configurations out-of-the-box.  Adapting them for specific
SL builds isn't that hard to accomplish either.

Mock is one of the building-blocks koji (the Fedora build system) uses.


-- 
kind regards,

David Sommerseth


Re: having trouble downloading with wget

2014-08-22 Thread David Sommerseth
On 21/08/14 19:57, ToddAndMargo wrote:
 Hi All,
 
 Any idea why I can download
 
 http://www.uvnc.eu/download/1201/UltraVNC_1_2_0_X64_Setup.exe
 
 with Firefox, but not wget?

I haven't checked the URLs, but my first hunch is that they do
User-Agent filtering on the server side.  Try using wget with -U and a
user agent string from f.ex. Firefox.
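
For example, something like this (the user-agent string is borrowed from a
Firefox release; any recent one should do):

  $ wget -U "Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Firefox/31.0" \
        'http://www.uvnc.eu/download/1201/UltraVNC_1_2_0_X64_Setup.exe'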


--
kind regards,

David Sommerseth


Re: wget: need help extracting name

2014-07-29 Thread David Sommerseth
On 29/07/14 19:04, ToddAndMargo wrote:
 Hi All,
 
 I am trying to extract the file name (for the revision)
 of a file with --content-disposition.  I don't actually
 want the file.
 
 Is there a way to have wget tell me the name of the file
 without actually downloading it?
 
 Many thanks,
 -T
 
 This is what I have so far.  Tell me the file exists
 but not its name.
 
 
  wget --spider --content-disposition
 http://www.overlooksoft.com/packages/download?plat=lx64ext=rpm;
 
 Spider mode enabled. Check if remote file exists.
 --2014-07-29 10:00:38--
 http://www.overlooksoft.com/packages/download?plat=lx64ext=rpm
 
 Resolving www.overlooksoft.com... 96.127.149.74
 Connecting to www.overlooksoft.com|96.127.149.74|:80... connected.
 HTTP request sent, awaiting response... 200 OK
 Length: 6737765 (6.4M) [application/x-rpm]
 
 Remote file exists.

$ curl -I 'http://www.overlooksoft.com/packages/download?plat=lx64&ext=rpm' \
   | awk -F= '/Content-Disposition: attachment;/ {print $2}'

Not sure if wget has a similar feature.  I just happen to like curl
slightly better.


--
kind regards,

David Sommerseth


Re: wget: need help extracting name

2014-07-29 Thread David Sommerseth
On 29/07/14 20:37, ToddAndMargo wrote:
 On 07/29/2014 10:40 AM, David Sommerseth wrote:
 On 29/07/14 19:04, ToddAndMargo wrote:
 Hi All,

 I am trying to extract the file name (for the revision)
 of a file with --content-disposition.  I don't actually
 want the file.

 Is there a way to have wget tell me the name of the file
 without actually downloading it?

 Many thanks,
 -T

 This is what I have so far.  Tell me the file exists
 but not its name.


   wget --spider --content-disposition
 http://www.overlooksoft.com/packages/download?plat=lx64ext=rpm;

 Spider mode enabled. Check if remote file exists.
 --2014-07-29 10:00:38--
 http://www.overlooksoft.com/packages/download?plat=lx64ext=rpm

 Resolving www.overlooksoft.com... 96.127.149.74
 Connecting to www.overlooksoft.com|96.127.149.74|:80... connected.
 HTTP request sent, awaiting response... 200 OK
 Length: 6737765 (6.4M) [application/x-rpm]

 Remote file exists.

 $ curl -I
 'http://www.overlooksoft.com/packages/download?plat=lx64ext=rpm' \
 | awk -F= '/Content-Disposition: attachment;/ {print $2}'

 Not sure if wget got a similar feature.  I just happen to like curl
 slightly better.


 -- 
 kind regards,

 David Sommerseth

 
 Hi David,
 
 I must be missing something.  Don't see the name anywhere.
 
 And, can't figure out why I get something different with
 the AWK pipe.
 
 -T
 
 $ curl -I 'http://www.overlooksoft.com/packages/download?plat=lx64&ext=rpm'
 HTTP/1.1 403 Forbidden
 Date: Tue, 29 Jul 2014 18:34:31 GMT
 Server: Apache
 Content-Type: text/html; charset=iso-8859-1
 
 $ curl -I
 'http://www.overlooksoft.com/packages/download?plat=lx64&ext=rpm'|
 awk -F= '/Content-Disposition: attachment;/ {print $2}'
   % Total% Received % Xferd  Average Speed   TimeTime Time
  Current
  Dload  Upload   Total   SpentLeft
  Speed
   0 00 00 0  0  0 --:--:-- --:--:--
 --:--:-- 0

Odd!  You get 403 Forbidden.  I don't get that from my Fedora 19 box:

$ curl -I 'http://www.overlooksoft.com/packages/download?plat=lx64&ext=rpm'
HTTP/1.1 200 OK
Date: Tue, 29 Jul 2014 20:14:31 GMT
Server: Apache
X-Powered-By: PHP/5.4.28
Pragma: public
Expires: 0
Cache-Control: public
Content-Description: File Transfer
Content-Disposition: attachment; filename=overlook-fing-2.2.rpm
Content-Transfer-Encoding: binary
Content-Length: 6737765
Vary: User-Agent
Content-Type: application/x-rpm

However on a SL6.5 box, I do get 403 Forbidden.

I tried changing the User-Agent string, and that helped.  Seems some
curl versions have been banned on that site.

Pick a user-agent string from here:
http://www.useragentstring.com/pages/Firefox/

And run curl with '-A $USERAGENT' ...  that should fix it.
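
For example, with one string picked from that page:

   $ curl -I -A 'Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Firefox/24.0' \
       'http://www.overlooksoft.com/packages/download?plat=lx64&ext=rpm'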


--
kind regards,

David Sommerseth


Re: SL REST API

2014-05-28 Thread David Sommerseth
On 27/05/14 18:52, Yasha Karant wrote:
 Does SL (i.e., TUV EL) have a standard enterprise-quality production
 REST API that will interoperate with non-EL clouds?
 
 The most I could find on a short search is:
 
 http://developerblog.redhat.com/2013/12/12/advanced_integration_rhevm-part1/
 
 
 Advanced integration with Red Hat Enterprise Virtualization Manager
 (RHEV-M) – Part 1 of 2
 
 and
 
 https://fedorahosted.org/rhevm-api/
 
 This is an effort to define an official REST API for Red Hat Enterprise
 Virtualization http://www.redhat.com/virtualization/rhev/.
 
 but that the fedorahosted project above is obsolete, replaced by:
 
 http://www.ovirt.org/Subprojects
 
 in which any mention of TUV by name is in the title of each reference.

Hi,

I think you might find deltacloud interesting.

http://deltacloud.apache.org/


--
kind regards,

David Sommerseth


Re: ssh -X xinit failure

2014-05-07 Thread David Sommerseth
On 07/05/14 04:33, Yasha Karant wrote:
 Thanks for the information.  At my institution, we were told by the
 university network security group that after ssh -X, one still needed to
 activate X for the session by xinit or the like for security reasons. 
 Evidently, the persons were thinking of some other environment (MS
 Windows perhaps?).  Indeed, xeyes and firefox both work fine from the
 remote host to the local client workstation.
 
 A question:  as a regular X window manager desktop from the remote
 machine is not displayed (that is, the pull down menu Applications
 under Gnome or the equivalent from KDE), is there any mechanism to get
 such a menu, etc., displayed?  What is the default GUI file manager
 (that allows an end user to point and click on an executable file to
 execute the application) that can be invoked from a remote terminal?

Running this over ssh will most likely not work well at all.  If you
want a remote desktop experience, look into nomachine or freenx:

https://www.nomachine.com/
http://wiki.centos.org/HowTos/FreeNX

Another alternative is to start Xvnc and tunnel the VNC port Xvnc
establishes from your remote server via SSH.  Then use a local VNC
client to connect to the same port.  This may work, but may also be
worse than nomachine.
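
A rough sketch of that tunnel, assuming a TigerVNC style vncserver on
the remote side (display :1 listens on TCP port 5901):

   remote$ vncserver :1
   local$  ssh -L 5901:localhost:5901 user@remotehost
   local$  vncviewer localhost:1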

Using anything else will most likely just cause grief and frustration.
The X11 protocol isn't easily tunnelled, and requires quite a stable
bandwidth to work decently.


--
kind regards,

David Sommerseth


Re: Security ERRATA Important: openssl on SL6.x i386/x86_64

2014-04-09 Thread David Sommerseth
On 09/04/14 16:42, Pat Riehecky wrote:
 This is a reminder of this security errata.
 
 Any SL6 system should apply this update.  If your system has been
 applying security errata regularly it is vulnerable until this update is
 applied.
 
 Systems with yum-autoupdate enabled using the default configuration have
 the update applied and only need to restart applications linked against
 openssl.
 
 All applications linked against openssl must be restarted for this
 update to take effect.

I installed lsof on my boxes and used

  [root@host: ~]# lsof | grep -E 'libcrypto|libssl' | grep DEL

to identify processes/services which need to be restarted.


--
kind regards,

David Sommerseth


Re: Any 7 rumors?

2014-04-09 Thread David Sommerseth
On 09/04/14 16:27, Paul Robert Marino wrote:
 No it was always required because the shopping cart itself may in some
 cases contain data which could possibly be used to gain access to
 sensitive customer data. Also in a sense data about who purchases what
 and where could also be used to mask credit card fraud by making the
 fraudulent charges look like the normal shopping activities of the
 card holder.

Really!?  I've been involved in a few PCI-DSS certification rounds for a
company which provided online payment services back in the days.
Granted that's some years ago now (2005 to 2008-ish).  Even though our
scope was limited to only processing credit card information, we did not
see any requirements anywhere at that time for the shopping cart to be
PCI-DSS certified.

In fact one of our sales arguments at that time was that our customers
could avoid certifications by implementing our online payment
terminal.  We even had some discussions with our auditor about this,
who gave his blessings to our product.  The solution we provided in this
case would take care of retrieving the credit card information from the
customer, process the payment and just provide a status back to the
merchant.  Merchants using a payment API for processing payments would
in some cases need certification, based on the amount of transactions
they had; this I believe has become much stricter since those days.

And just to have it mentioned, the solutions we provided were based upon
Gentoo(!) servers.  We even got very positive feedback for having
absolutely minimum installs on our production servers, plus kudos for
our maintenance routines.

Of course, many of the requirements have most likely changed since then.
 But I don't recognise the "always required" in regards to shopping carts.


--
kind regards,

David Sommerseth


 
 On Wed, Apr 9, 2014 at 8:13 AM, James M. Pulver jmp...@cornell.edu wrote:
 We were recently informed PCI compliance also extends to the shopping cart
 software, this may be new this year...



 --

 James Pulver

 CLASSE Computer Group

 Cornell University



 From: owner-scientific-linux-us...@listserv.fnal.gov
 [mailto:owner-scientific-linux-us...@listserv.fnal.gov] On Behalf Of Paul
 Robert Marino
 Sent: Tuesday, April 08, 2014 11:26 PM
 To: Nico Kadel-Garcia; ToddAndMargo
 Cc: Scientific Linux Users
 Subject: Re: Any 7 rumors?



 Well frankly if you need PCI-DSS compliance pay for RHEL. Its honestly not
 that expensive for the few systems that really require it. Only  the
 system's that handle credit cards supposedly require it and in most
 ecommerce companies that's probably 2 to 4 system's so what's the problem
 wit paying $750 a year each for those few systems to not have to deal with
 the problems and giving the stock investors a warm and fuzzy feeling. Your
 time spent on it costs them more money and ti reduces all the stress on
 every one if you buy compliance on the cheap.


 -- Sent from my HP Pre3



 

 On Apr 8, 2014 22:55, Nico Kadel-Garcia nka...@gmail.com wrote:

 On Tue, Apr 8, 2014 at 10:14 PM, ToddAndMargo toddandma...@zoho.com wrote:
 Hi All,

 I have a customer who is going to have to upgrade a
 whole pail of stuff for PCI compliance (credit card
 security).

 Part of what he is going to have upgrade is his old
 CentOS 5.x server (it is too underpowered to handle
 his new software along with the addition drag
 caused by adding File Integrity Monitoring
 [FIM] Software).

 Any rumors as to when EL 7 will be out?

 Many thanks,
 -T

 Shortly after our favorite upstream vendor publishes it? I don't see
 the relevance though. If he needs to update CentOS 5, update it to SL
 6 or CentOS 6. Why wait for RHE 7 to update? It's going to be major
 cluster futz with the switch to systemd from init scripts, with
 /bin being migrated to /usr/bin, and the other major changes. It
 will be much simpler, and much, much safer, to update to CentOS 6 or
 SL 6 first!


Re: How do I create a link from vfat to ext4?

2014-04-04 Thread David Sommerseth
 On 04/03/2014 08:52 PM, Patrick J. LoPresti wrote: You can use a bind
 mount to make an ext4 directory appear in the VFAT tree.

 Do man mount and search for bind, and/or search the web for bind
 mount.

   - Pat
 
 Hi Pat,
 
 Thank you!
 
 From man page, it sounds like I am writing to the
 two different locations at the same time:
 
  After this call the same contents is accessible in
  two places
 
 Is this just a play on words?


Bind mounts are special.  It basically mounts an already mounted
directory yet another place.  Say you have this scheme:

   /dev/sda4  - /mnt/mydata
   /dev/sdb2  - /mnt/friendsdata

If you add a 'friends' directory in /mnt/mydata ... giving you
/mnt/mydata/friends, and then do a bind mount:

   mount -o bind /mnt/friendsdata /mnt/mydata/friends

The result is that you have access to the same data in both
/mnt/friendsdata and /mnt/mydata/friends ... but all data is read and
written from/to /dev/sdb2.  You have essentially just loaned an already
mounted directory into your /mnt/mydata directory.

These bind mounts are kind of an "I want what you have" mount.

Bind mounts are particularly handy when you work with chroots and want
to grant access to certain files outside the chroot, where a symlink is
impossible.  With bind mounts, you can also do the same with files; not
just directories.
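
And if you want such a bind mount to survive a reboot, an /etc/fstab
entry along these lines should do (paths taken from the example above):

   /mnt/friendsdata  /mnt/mydata/friends  none  bind  0 0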

I hope this clarified a bit.


--
kind regards,

David Sommerseth


Re: Need help with rpm rebuild error

2014-03-25 Thread David Sommerseth
On 25/03/14 03:03, ToddAndMargo wrote:
 On Mon, Mar 24, 2014 at 9:48 PM, ToddAndMargo toddandma...@zoho.com
 wrote:
 Hi All,

 SL 6.5, 64 bit

 The following does not rebuild (fc15 or fc20):

 # rpmbuild --rebuild clipit-1.4.2-5.fc20.src.rpm

 /root/rpmbuild/BUILDROOT/clipit-1.4.2-5.fc20.x86_64/etc/xdg/autostart/clipit-startup.desktop:

 error: value "GNOME;XFCE;LXDE;Unity;MATE;" for key "OnlyShowIn" in group
 "Desktop Entry" contains an unregistered value "MATE"; values
 extending the
 format should start with "X-"
 Error on file
 /root/rpmbuild/BUILDROOT/clipit-1.4.2-5.fc20.x86_64/etc/xdg/autostart/clipit-startup.desktop:

 Failed to validate the created desktop file


 Any way around this MATE error?

 Many thanks,
 -T
 
 On 03/24/2014 06:58 PM, Nico Kadel-Garcia wrote:
 
 Fedora has a significantly more recent version of RPM than Scientific
 Linux 6, and many of the dependencies have significantly modified
 names. That seems to be a stable package, it was in Fedora 15. Perhaps
 you could rebuild the SRPM from a Fedora 15 archive site, or compare
 the .spec files, to look for the changes from the older version?

 
 Hi Nico,
 
 You missed the "fc15" in "does not rebuild (fc15 or fc20)".
 First thing I tried.  Same exact error.  RATS!

Bear in mind that EL6 is based upon Fedora 12-13 (roughly).  So you'll
need to figure out which packaging features have been changed since that
time.  But I believe this is more related to an issue inside the
clipit-startup.desktop file than an RPM .spec file issue - most likely
related to xdg-utils (Portland).


--
kind regards,

David Sommerseth


Re: gstreamer1-plugins-base

2014-03-06 Thread David Sommerseth
On 06/03/14 10:18, Ian A Taylor wrote:
 Sir/Madam
 
 Does anyone know how I can install the package
 gstreamer1-plugins-base
 
 on SL 6.5
 
 So I can playback video mp4 files

I can highly recommend the Fluendo plug-ins, if you want to stay on the
legal side (which is a requirement for the computers I use for work).

Fluendo are heavily involved in gstreamer, and they provide gstreamer
modules to play back formats which are patented.  It costs a little bit,
but the cost isn't that bad.  And renewals are even more affordable.

http://eu.fluendo.com/shop/product/oneplay-codec-pack/

For my part, to support their open source work on the gstreamer core, I
don't mind paying a little bit.


--
kind regards,

David Sommerseth


Re: running SL6.5 in emulator qemu-system-x86_64

2014-03-05 Thread David Sommerseth
On 05/03/14 18:23, Boryeu Mao wrote:
 I did use qemu-system-x86_64 on the SL65 image.  I will first try
 qemu-kvm, and then (as a last resort) virt-manager and libvirtd - I had
 to do quite a bit work to get libvirt going when I was using
 Debian/Knoppix, and the relative simplicity of qemu-system-x86_64
 (just 'qemu-system-x86_64 -cdrom CDROM.iso') was quite nice.  In any
 event the advantage of libvirt may be worth the work.  

My experience with virt-manager on Fedora, RHEL and SL over the last 3-4
years has been that it's quite capable as a desktop virtualisation
tool.  The packages available out of the box have sane default configs
to "just get started" (tm).

If you want to use more advanced features, virsh is the command line
interface to libvirt.  But having a little idea of the features
available in virt-manager can help you understand virsh better.


--
kind regards,

David Sommerseth



 On 3/5/14, David Sommerseth sl+us...@lists.topphemmelig.net wrote:
 On 05/03/14 16:32, Boryeu Mao wrote:
 I have recently migrated from Debian-based Knoppix to SL
 (livecd-iso-to-disk from SL-65-x86_64-2014-02-06-LiveDVD.iso), and
 installed the emulator from the package
 qemu-system-x86-1.0-27.2.x86_64.rpm).  The emulator booted a Knoppix
 image without incidents (albeit somewhat slowly,
 KNOPPIX_V7.2.0DVD-2013-06-16-EN.iso).  However, booting the SL 65
 LiveDVD didn't go so well - it hangs in the startup screen with the
 inner arc stopping at about 3 o'clock position.  I wonder if there may
 be another package for qemu that I should use, or if there is another
 virtual machine that may be more robust for SL.  Thanks in advance for
 help or suggestions.

 SL-65-x86_64-2014-02-06-LiveDVD.iso (via livecd-iso-to-disk)
 HP laptop Pavilion g6 with dual AMD A6-4400M APU

 AFAICT, AMD A6-4400M supports hardware virtualization (aka KVM in the
 Linux world).  I'd recommend you to try using qemu-kvm instead of
 qemu-system-x86.  That should improve the performance noticeably.

 In addition, the SL image you've downloaded is 64 bits.  I don't know if
 you try to run that via qemu-system-i386 or qemu-system-x86_64.  The
 former is 32bit /emulation/, the latter 64bit.

 But only qemu-kvm uses virtualization.

 Try also to kick off your images using virt-manager.  It's usually
 fairly simple to configure new VMs that way, and just point it to your
 images.  This way, virt-manager/libvirtd will take care of setting up
 the proper arguments to qemu-kvm, for better performance.  Also try to
 use the virtio drivers wherever possible, as they don't require the
 complete hardware emulation layer.


 --
 kind regards,

 David Sommerseth




Re: server side spam filters

2014-02-11 Thread David Sommerseth
On 11/02/14 02:13, Yasha Karant wrote:
 Our site has been edicted to Microsoft Exchange server with a Barracuda
 spam filter.  There are numerous difficulties, one of which is spam not
 being filtered and non-spam being so filtered (significant increase in
 mission critical false positives).  At present, the administrative
 authorities (all of whom appear to be management professionals, not
 internals nor systems folks) insist on Exchange, allowing open systems
 standards compliant end-users to have IMAP service.  Given this, what
 are the best server-side spam filters, either hardware or software? 
 Best should be based upon current field-deployed experience and/or
 unsolicited external reviews (not vendor-supported independent reviews).

I've put up a fairly simple Postfix + Amavis-new + SpamAssassin server in
front of some of my Zimbra servers to get rid of the worst trash (we
also had some other requirements, but that's not important in this
thread).  I configured Postfix with several RBLs, SPF and postgrey.  In
addition I added these smtpd_recipient_restrictions:

reject_unknown_reverse_client_hostname,
reject_invalid_hostname,
reject_non_fqdn_hostname,
reject_non_fqdn_sender,
reject_non_fqdn_recipient,
reject_unknown_sender_domain,
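
For context, these sit inside the smtpd_recipient_restrictions block in
/etc/postfix/main.cf - a trimmed sketch, not my complete config (the
postgrey socket path depends on how the package sets it up):

   smtpd_recipient_restrictions =
       permit_mynetworks,
       reject_unauth_destination,
       reject_unknown_reverse_client_hostname,
       ...
       check_policy_service unix:postgrey/socket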

The RBLs I have had great success with are:

reject_rbl_client bl.spamcop.net,
reject_rbl_client zen.spamhaus.org,
reject_rbl_client bl.blocklist.de,
reject_rbl_client b.barracudacentral.org,
reject_rbl_client bl.spamcannibal.org,
reject_rbl_client cidr.bl.mcafee.com,

The first two and barracudacentral.org seem to be the ones being
triggered most.  Barracudacentral requires a registration (they want the
IP of the DNS resolver doing the queries).

With all this in place, I reduced the spam which SpamAssassin filtered
out from 75-80% to ~20-25%.

I had to remove SORBS, as they actually listed a lot of valid SMTP
relays ... and for the companies being hit by that, it was just too
costly an operation to fix each time it happened.  On the other hand, the
other RBLs catch well enough what SORBS blocked correctly.

In regards to SPF, that works pretty well.  I made it even stricter than
the default configuration (I use python-policyd-spf), where I set
PermError_reject = True.  That enforces explicit SPF rules much
harder.

And with postgrey, I learned that you need at least a 10 minute
threshold.  For one of the servers I maintain, postgrey blocks ~25% of
all mail attempts.  On another one (low traffic), the hit rate was so
low I actually removed it.  So you need to test and see if it matches
your needs.


--
kind regards,

David Sommerseth


Re: Ping Nico: AD?

2014-02-10 Thread David Sommerseth
On 10/02/14 04:19, ToddAndMargo wrote:
 On 02/09/2014 02:45 PM, Nico Kadel-Garcia wrote:
 On Sun, Feb 9, 2014 at 2:45 PM, ToddAndMargo toddandma...@zoho.com
 wrote:

 
 I take it old-out-of-date (SL) isn't supporting Samba 4 yet.

 Nope, it's in Fedora and RHEL 7 beta and places .like my github repo.

 
 What would you think of just doing a fedora 20 server,
 instead of suffering with all the out of date stuff
 on RHEL?

I did that once, as it took quite a long time before CentOS 6 came out
(before I got to know SL, and SL6 was not shipped yet either).  It worked,
kind of.  But would I do it again?  Probably not.

I installed Fedora 11, and there were regularly tons of updates.  At some
point I did an upgrade to Fedora 12 (with massive downtime, due to the
Fedora upgrade process - at that time, Anaconda could upgrade via the
ISO image).  But after that, I stayed far too long on Fedora 12.  This
server was a host for KVM guest hosts.

But I also stayed on Fedora 12 on purpose, as EL6 was based on F12-F13.
 So what I ended up doing was a scratch install of SL6 on some of the
free space I had on the LVM (on a completely separate VG).  Reused /boot
without reformatting it (so I could fall back to F12 if needed).  Copied
over the needed /etc configs from F12 to SL6, and modified them as
needed too (not too much work, but took some time).  And then booted SL6
... And that actually worked out far better than I ever dared hoping for.

Having said this, I did this process on my private laptop (which also
had been stuck on F12 on purpose too), where I also run KVM guests.  So
I gained quite some experience and felt comfortable enough to dare this
on a production server.

I don't regret moving to SL6 at all.  So if you have time to wait for
SL7, wait for it.  If you're in a pinch and must have this server
running yesterday, consider this approach, but I would probably stay on
Fedora 19 for a little while - but I would probably try to stay cool as
long as possible.

The good thing with Fedora right now is that they're figuring out the
process of Fedora.next.  So Fedora 21 will most likely not come on the
normal schedule (which would be sometime around Q4), but somewhat
later - which means F19 will not go EOL for a little while longer.


--
kind regards,

David Sommerseth


Re: Ping Nico: AD?

2014-02-10 Thread David Sommerseth
On 09/02/14 20:45, ToddAndMargo wrote:
 Question: what do you see as an advantage of Samba's AD over
 just using Samba as an old fashioned Domain Controller?

A side track actually.  If you're looking for something like AD for
Linux, which can also integrate with AD (in version 3)  ... Have a look
at the FreeIPA project.  I believe FreeIPA integrates with Samba4 as
well.

http://www.freeipa.org/page/About

Not sure how it is with RPM packages/yum repos for FreeIPA, though.

 I take it old-out-of-date (SL) isn't supporting Samba 4 yet.

In SL6, there are samba4 packages available.  I believe it is a
tech-preview in RHEL6.


--
kind regards,

David Sommerseth


Re: Autoupdate

2014-01-31 Thread David Sommerseth
On 31/01/14 16:43, CS DBA wrote:
 On 1/31/14, 8:39 AM, Jose Marques wrote:
 On 31 Jan 2014, at 15:38, David Sommerseth
 sl+us...@lists.topphemmelig.net wrote:

 If you have enabled the sl6x repositories, then yes, this is the
 expected behaviour.
 Is there a way of having auto-update and retaining the old behaviour?

 The University of St Andrews is a charity registered in Scotland, No.
 SC013532.

 # yum erase yum-autoupdate

This might be the solution if you want to disable all auto-updates.  But
it is a far more drastic move than just disabling the sl6x
repositories; with those disabled you will still stay updated within the
same SL minor release, until you do the upgrade manually.


--
kind regards,

David Sommerseth


Re: Finally figured out what is causing bug 836696

2013-12-11 Thread David Sommerseth
On 10. des. 2013 20:43, Jeff Siddall wrote:
 On 12/09/2013 11:20 PM, ToddAndMargo wrote:
 then you absolutely want to be running
 them against a snapshot rather than a live FS and LVM makes this easy.

 Never really cared for LVM.  Always used the direct partition approach.
 
 Well, perhaps I can try to convince you some more.
 
 Take another example of upgrading to a bigger disk.  Huge PITA if you
 use direct partitions.  Shut the system down and use a live OS or
 something while you move over all the data -- which could take hours or
 days depending on what you need to move.  If you are really obsessive
 you probably want to make sure nothing got lost in the move so there is
 a whole compare exercise after it finishes.
 
 If you have LVM you simply install your new drive (assuming you can
 hotswap you don't even have to shutdown for that) run pvcreate, vgextend
 and then pvmove.  Some hours (or days) later it finishes and your data
 is magically on the new disk without even a moment of downtime.
 
 A lvextend and a resize2fs (or whatever utility resizes the FS you use)
 and you can start using the extra space, still with no downtime.
 
 That is pure sysadmin gold!
 
 BTW: I also use LVM on my offsite backup disks.  I just use the same
 volume group/volume name on all the disks.  Works with LUKS also.

I'll even recommend fsadm, which makes resizing live filesystems even
easier and safer.  I've even resized / on a running system
without any issues.  The fsadm utility will do much of the filesystem
work for you, in a safe and controlled manner.

   # fsadm -v -l resize /dev/vg.../lv...  $NEWSIZE

If you're more cautious, you can add -n (--dry-run) and even do a 'fsadm
check' first.  This takes care of resizing all the needed pieces.  But
'fsadm check' can only be done on an unmounted volume, iirc.

I have not tried fsadm on a direct partition, though.  It might also
work there.

These LVM features can also be very useful if you're using virtual
machines with LVM, as adding and removing virtual drives on-the-fly is
very easy in such environments.
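
And as a concrete sketch of the pvcreate/vgextend/pvmove workflow quoted
above (device and volume group names are just examples):

   # pvcreate /dev/sdc
   # vgextend vg0 /dev/sdc
   # pvmove /dev/sdb1
   # vgreduce vg0 /dev/sdb1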


kind regards,

David Sommerseth


Re: Finally figured out what is causing bug 836696

2013-12-11 Thread David Sommerseth
On 10. des. 2013 21:27, Bluejay Adametz wrote:
 Never really cared for LVM.  Always used the direct partition approach.

 Well, perhaps I can try to convince you some more.
 
 I never used LVM either, but in my defense, these were/are factory
 systems where it was extremely unlikely that the system would grow and
 require more storage, and if one did, we'd have to remove the smaller
 drives and put in bigger ones.

This is actually one typical area where LVM would really make life
easier with shorter downtime.  But it of course depends on the amount of
data.  Without hotswap, you would need a reboot to add the new
hard drive and another one later on to remove the old one.  But with
hotswap capable hardware you could do this job with 0 downtime.


kind regards,

David Sommerseth


Re: How do I reset my default router without rebooting?

2013-12-05 Thread David Sommerseth
On 05. des. 2013 06:41, James Rogers wrote:
 Someone said: 
 P.S. the route command is a legacy command from the 2.2 kernel days
 and should not be used any more.
 
 Did they mean the 'route' command or is the 'ip route' hierarchy now
 deprecated?
 
 If we're not supposed to use the ip commands what are we supposed to use
 now?

In the Linux world iproute2 (that is, the 'ip' command) seems to be the
preferred utility these days.  However net-tools (which is ifconfig,
route, netstat, arp, etc, etc) is still shipped in basically all Linux
distros.  And if you also look at FreeBSD, they still cling to net-tools
as the primary tool.  I believe the same is for NetBSD and OpenBSD too.

iproute2 does have quite some advantages on Linux over net-tools, like
better IPv6 configuration, secondary IP addresses without using
interface aliases, slicker tunnel configurations to mention a few.  I
personally prefer the configuration syntaxes in iproute2 over net-tools
these days.  But it's taken me a little while to get used to the output of
iproute2.
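
To give a quick taste of the differences, a few net-tools commands and
their iproute2 counterparts (addresses are just examples):

   ifconfig eth0                      -  ip addr show dev eth0
   route -n                           -  ip route show
   arp -n                             -  ip neigh show
   route add default gw 192.168.1.1   -  ip route add default via 192.168.1.1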

But I still wouldn't expect net-tools to disappear any time soon.
Net-tools is still shipped in Fedora and other more bleeding edge Linux
distros.


--
kind regards,

David Sommerseth


SELinux de-mystified

2013-11-13 Thread David Sommerseth
Hi all,

As there has been a couple of SELinux discussions lately, I thought this
article could help explain better what SELinux is all about and why it's
such a great tool.

It's written by one of the core SELinux guys in Red Hat, Daniel Walsh
and illustrations by one of Fedora's UX designers, Máirín Duffy.

Your visual how-to guide for SELinux policy enforcement
http://opensource.com/business/13/11/selinux-policy-guide

Hope you'll enjoy the reading :)


--
kind regards,

David Sommerseth


Re: SELinux de-mystified

2013-11-13 Thread David Sommerseth
On 13. nov. 2013 20:26, John Musbach wrote:
 Maybe it's just me, but it seems like a serious failing of SELinux's
 efforts when most people I've encountered in the Linux world have the
 policy of just disabling SELinux in their images.

Not sure if this was intended as a fire torch or not, or if I'm just
being a bit sensitive.

But I can turn it around:  IPv6 has been available for over a decade (if
not longer).  Is it a failure of IPv6 that so few enable IPv6 in their
networked environments?  Of course not.  It's about convenience and
resistance to changing your attitude to new technologies.  But
eventually you're forced to take the step.

And it's been a similar situation with iptables (and firewalling in
Windows, for that matter).  People were mostly ignorant of the concept
of firewalling, until they realised they had to implement it to have a
more secure environment.  Is iptables (or firewalling) considered a
failure today?

During EL6 installation, there is no way you can disable SELinux.  It
needs to be done explicitly afterwards.  This is because SELinux is
considered to work so well most users really don't need to think about
it.  Seriously.

SELinux has also been available since EL4 and Fedora Core 3.  SELinux is
celebrating 10 years these days.  It's not something brand new, but it
is beginning to really gain traction.  These days even SEAndroid is on
the way (that is SELinux for Android).  SELinux has developed a lot, and
is far more easily available and usable today than it was 10 years ago.
 Please don't be afraid of it!

To all of you SELinux sceptics, I have only this to say: If you first
grasp the concept of labelling, SELinux isn't much more difficult than
what iptables used to be in the beginning.  And that article from Dan
Walsh gives a very easy to understand introduction to SELinux.

And seriously, unless you have a really odd setup, SELinux will not
give you any trouble in EL6.

I have set up roughly 20 different SL6.x servers the last years.  I
can't remember having had any real issues related to SELinux.  This has
been everything from LDAP servers, web servers (apache and nginx),
e-mail servers (both postfix+amavis+spamassasin and Zimbra), database
servers (both PostgreSQL and MySQL).  I honestly can't remember having
had much trouble with SELinux at all.

If SELinux did kick in, it was usually just to flip some SELinux
booleans (semanage boolean --list), modifying some network ports context
(semanage port --list) or adding some extra paths for correct file
labelling (semanage fcontext --list).  Changing those things is really
not more difficult than adding additional iptables rules.  And to figure
out if it is SELinux to blame:  grep denied /var/log/audit/audit.log
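
Typical examples of those three operations (the boolean, port number and
path below are just illustrations):

   # setsebool -P httpd_can_network_connect on
   # semanage port -a -t http_port_t -p tcp 8080
   # semanage fcontext -a -t httpd_sys_content_t '/srv/www(/.*)?'
   # restorecon -Rv /srv/www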

Really, stop disabling it!  Try it for real and embrace SELinux now!


kind regards,

David Sommerseth



 On Nov 13, 2013, at 2:17 PM, David Sommerseth 
 sl+us...@lists.topphemmelig.net wrote:
 
 Hi all,

 As there has been a couple of SELinux discussions lately, I thought this
 article could help explain better what SELinux is all about and why it's
 such a great tool.

 It's written by one of the core SELinux guys in Red Hat, Daniel Walsh
 and illustrations by one of Fedora's UX designers, Máirín Duffy.

 Your visual how-to guide for SELinux policy enforcement
 http://opensource.com/business/13/11/selinux-policy-guide

 Hope you'll enjoy the reading :)


 --
 kind regards,

 David Sommerseth



Re: Anyone know of a best ISO VM for security testing?

2013-09-11 Thread David Sommerseth
On 11/09/13 19:03, Todd And Margo Chester wrote:
 Hi All,
 
 I am getting tooled up to do some Penetration Testing
 for PCI compliance (Ethical Hacking).
 
 Reference:
 https://www.pcisecuritystandards.org/pdfs/infosupp_11_3_penetration_testing.pdf
 

It's a long time since I've looked at such tools, but I vaguely remember
Backtrack-Linux being quite state of the art.  Not sure if it still is.

http://www.backtrack-linux.org/


kind regards,

David Sommerseth


Re: Robust local mirroring of SL

2013-09-03 Thread David Sommerseth
On 03/09/13 12:47, John Rowe wrote:
 Until now there seems to have been no _robust_ way of using a local
 repository for yum updates.
 
 Editing the .repo files, as advised by the HowTo, breaks updates of the
 repo files themselves, and each of the methods on yum's own web page has
 a list of disadvantages. Using a local squid proxy has problems with
 mirroring.
 
 But SL6x would finally seem to provide a simple way to do this! From
 man yum.conf:
 
 As  of  3.2.28,  any  file  in /etc/yum/vars is turned into a
 variable named after the filename 
 
 This means that sl6x.repo can contain baseurl entries such as:
 
 baseurl=http://ftp.scientificlinux.org/linux/scientific/6x/$basearch/updates/security/
 $localrepo/6x/$basearch/updates/security/
 http://ftp1.scientificlinux.org/linux/scientific/6x/$basearch/updates/security/
   
 ftp://ftp.scientificlinux.org/linux/scientific/6x/$basearch/updates/security/
 
 and the yum-conf-sl6x rpm could contain the above SL6x.repo plus a
 file /etc/yum/vars/localrepo consisting of the single line:
  http://ftp2.scientificlinux.org/linux/scientific
 
 To use a local repository we just replace the content
 of /etc/yum/vars/localrepo with the URL of the local repository, eg:
 file:///somewhere/SL/scientific
 
 The change is simple, persistent across upgrades and doesn't break the
 repo files.
 
 This raises the questions:
 
 1. Am I the only one finding using a local repository to be irksome, or
 is everybody else just smarter than I am?
 
 2. Would the SL maintainers be willing to put something like this into
 the yum-conf-sl6x RPM?

Hi John

I have a local mirror of SL, EPEL and a few more.  They're basically
just rsync mirrors.  To enforce the use of these mirrors, I've just
added my own RPM package which I've installed on all my systems.

This package contains the needed .repo file with pointers to the
mirrored repositories.  And in one of the RPM spec file scriptlets
(forgot which one, too lazy to dig it up) I'm calling yum-config-manager
to disable the other repositories if they are available.

With this approach, I've not had any issues at all.  And my kickstart
scripts ensure this package is installed as early as possible.  I've also
added the same repositories with 'repo' statements in the kickstart
files, which ensures a quite quick install of new systems with all the
latest bits.

I think one of the advantages here is that the original repository files
are not modified (apart from being disabled).  That means it's quick and
easy to roll back in case of some critical issues with the repo server.
And if I decide to add or remove some of the mirrors, I just spin a new
package with the updated repo files (and yum-config-manager statements)
and it gets installed automatically on all systems.
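
For illustration, the main payload of such a package is just a .repo
file along these lines (the mirror host and path layout here are
hypothetical):

   [sl-local]
   name=Scientific Linux $releasever - local mirror
   baseurl=http://mirror.example.local/scientific/$releasever/$basearch/os/
   enabled=1
   gpgcheck=1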


kind regards,

David Sommerseth


Re: Replacement for Acrobat Professional?

2013-06-11 Thread David Sommerseth
On 11/06/13 06:59, Yasha Karant wrote:
 The only thing missing is Adobe Distiller that converts a
 PostScript file to PDF.
 

You have ps2pdf from the ghostscript RPM which can do that for you.
Only commandline, though.
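
Usage is as simple as (file names being examples, of course):

   $ ps2pdf document.ps document.pdf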


--
kind regards,

David Sommerseth


Re: [SCIENTIFIC-LINUX-USERS] is this a this virus or an error

2013-05-29 Thread David Sommerseth
On 28/05/13 21:36, Pat Riehecky wrote:
 On 05/28/2013 02:08 PM, Yasha Karant wrote:
 The latest ClamAV that I can find pre-ported fro SL6 x86-64 is

 http://pkgs.repoforge.org/clamav/clamd-0.97.7-1.el6.rf.x86_64.rpm
 
 EPEL has a slightly newer version of this package:
 
 http://koji.fedoraproject.org/koji/buildinfo?buildID=413926

I can recommend the EPEL version.  I'm using it in several production
environments.

 Will this RPM override dependencies in the stock SL distribution?
 EL (and Linux in general) does not seem to have reliable polymorphism
 -- the default for these sorts of dependencies generally does not seem
 to install a different executable/library sub-tree independent of the
 stock distribution except in so far as the same files (e.g.,
 libraries) are used.

There should be no issues with running the EPEL version at least.  No
dependency issues from what I've seen.

 However, ClamAV still appears to be pre-production (0.x, not 1.x).  Is
 it stable and useful?

Don't look yourself blind on version numbers - especially for Open Source
projects.  A release being 0.x-something doesn't mean it's not
stable or usable.  You need to check what each project defines as
their stable release.

http://www.clamav.net/lang/en/

And FWIW, Zimbra ships with these ClamAV 0.x versions as well as part of
the integrated anti-virus mail checks.

What is even more important with anti-virus is an updated virus database,
which gets updates almost every day; freshclam in ClamAV takes care
of that job.


--
kind regards,

David Sommerseth


Re: PYCURL ERROR 7 - couldn't connect to host

2013-05-29 Thread David Sommerseth
On 29/05/13 12:38, Semi wrote:
 Yum doesn't work on SL6.4, it gets an error and quits.
 
 yum update
 Loaded plugins: refresh-packagekit, security
 http://dl.atrpms.net/el6-x86_64/atrpms/stable/repodata/repomd.xml:
 ^

This isn't really related to SL at all, it's the ATrpms repo not being
functional.  Try getting in touch with them instead.  I presume this
would be a good place to start:

  http://lists.atrpms.net/pipermail/atrpms-users/
  http://lists.atrpms.net/mailman/listinfo/atrpms-users


--
kind regards,

David Sommerseth


Re: PYCURL ERROR 7 - couldn't connect to host

2013-05-29 Thread David Sommerseth
On 29/05/13 14:10, Semi wrote:
 easy_install of python doesn't work too on SL6.4.
 
 easy_install python-Levenshtein
 Searching for python-Levenshtein
 Reading http://pypi.python.org/simple/python-Levenshtein/
 Download error: [Errno 111] Connection refused -- Some packages may not
 be found!
 Reading http://pypi.python.org/simple/python-Levenshtein/
 Download error: [Errno 111] Connection refused -- Some packages may not
 be found!
 Couldn't find index page for 'python-Levenshtein' (maybe misspelled?)
 Scanning index of all packages (this may take a while)
 Reading http://pypi.python.org/simple/
 Download error: [Errno 111] Connection refused -- Some packages may not
 be found!
 No local packages or download links found for python-Levenshtein
 error: Could not find suitable distribution for
 Requirement.parse('python-Levenshtein')

Come on!  Can you please read the error messages more carefully?  It is
pretty obvious it cannot connect to pypi.python.org in this case.  That
is also not a Scientific Linux related server.


--
kind regards,

David Sommerseth


Re: PYCURL ERROR 7 - couldn't connect to host

2013-05-29 Thread David Sommerseth
On 29/05/13 14:16, Semi wrote:
 You can try links, all of them alive.
 Something happened with Linux.
 Please check this problem carefully.

--
$ curl -k -D- https://pypi.python.org/simple/python-Levenshtein/
HTTP/1.1 200 OK
Date: Wed, 29 May 2013 12:17:28 GMT
Server: nginx/1.1.19
Content-Type: text/html; charset=utf-8
Strict-Transport-Security: max-age=31536000
Cache-Control: max-age=3600, public
Via: 1.1 varnish
Content-Length: 800
Accept-Ranges: bytes
Via: 1.1 varnish
Age: 969
X-Served-By: cache-s34-SJC2, cache-a16-AMS
X-Cache: MISS, HIT
X-Cache-Hits: 0, 3
X-Timer: S1369828879.616590023,VS0,VS77,VE129,VE969233
Vary: Accept-Encoding

<html><head><title>Links for python-Levenshtein</title><meta
name="api-version" value="2" /></head><body><h1>Links for
python-Levenshtein</h1><a
href="../../packages/source/p/python-Levenshtein/python-Levenshtein-0.10.2.tar.gz#md5=c8af7296dc640abdf511614ee677bbb8"
rel="internal">python-Levenshtein-0.10.2.tar.gz</a><br/>
<a href="http://trific.ath.cx/resources/python/levenshtein/"
rel="homepage">0.10.1 home_page</a><br/>
<a href="http://github.com/miohtama/python-Levenshtein"
rel="homepage">0.10.2 home_page</a><br/>
<a
href="http://webandmobile.mfabrik.com">http://webandmobile.mfabrik.com</a><br/>
<a href="http://celljam.net/">http://celljam.net/</a><br/>
<a
href="http://github.com/miohtama/python-Levenshtein/tree/">http://github.com/miohtama/python-Levenshtein/tree/</a><br/>
</body></html>
--

This works ... you probably have a local network issue.


--
kind regards

David Sommerseth


 
 On 29-May-13 15:13, David Sommerseth wrote:
 On 29/05/13 14:10, Semi wrote:
 easy_install of python doesn't work too on SL6.4.

 easy_install python-Levenshtein
 Searching for python-Levenshtein
 Reading http://pypi.python.org/simple/python-Levenshtein/
 Download error: [Errno 111] Connection refused -- Some packages may not
 be found!
 Reading http://pypi.python.org/simple/python-Levenshtein/
 Download error: [Errno 111] Connection refused -- Some packages may not
 be found!
 Couldn't find index page for 'python-Levenshtein' (maybe misspelled?)
 Scanning index of all packages (this may take a while)
 Reading http://pypi.python.org/simple/
 Download error: [Errno 111] Connection refused -- Some packages may not
 be found!
 No local packages or download links found for python-Levenshtein
 error: Could not find suitable distribution for
 Requirement.parse('python-Levenshtein')
 Come on!  Can you please read the error messages more carefully?  It is
 pretty obvious it cannot connect to pypi.python.org in this case.  That
 is also not a Scientific Linux related server.


 -- 
 kind regards,

 David Sommerseth
 


Re: Anyone know of a status site for the Internet?

2013-04-16 Thread David Sommerseth
On 16/04/13 06:45, Mihamina Rakotomandimby wrote:
 On 2013-04-15 15:15, Paul Robert Marino wrote:
 In direct answer to your question, oddly enough twitter seems to be the
 site to find out about this kind of thing quickly.
 
 Twitter yes, but what to follow?

Not necessarily follow ... ask on twitter and provide some relevant
hashtags, like #scientificlinux


--
kind regards,

David Sommerseth


Re: SL 6.3

2013-01-31 Thread David Sommerseth
On 31/01/13 11:46, Arun Kishore wrote:
 Dear SL,
 
 I find updated repositories like below - good enough. i have been using
 below link for SL 6.3. I think it may be of help to you and all.
 There are many such similar sites.. for updates
 
 
 http://mirror.fraunhofer.de/download.fedora.redhat.com/epel/6/

This is just the EPEL repository ... I'd rather recommend just
installing the yum-conf-epel package (yum install it), and you'll get a
package which will always be maintained with the proper repository URLs
- including mirrors.


kind regards,

David Sommerseth


Re: New install boot problem -

2013-01-29 Thread David Sommerseth
On 29/01/13 20:40, Trenton Ray wrote:
 GRUB has a strange methodology (well, strange to me given that I'm one of a
 handful of neo-luddites still using LILO) for determining disks. It's got
 root as hd2,0 which means (and I may be mistaken) the 2nd disk, first
 partition.. which makes sense in accordance with /dev/sdb but since GRUB
 sees 0 as a value and not a null, maybe change it to 1,0 ? I fear I'm just
 going to add to the confusing at this point and apologize for not being able
 to help offhand.

Most commonly you'll find that:

/dev/sda - hd0
/dev/sdb - hd1
/dev/sdc - hd2

And then with partitions:

/dev/sdX1 - hdX,0
/dev/sdX2 - hdX,1
/dev/sdX3 - hdX,2
...
and so forth ...
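
The mapping actually used on a given box is recorded by the installer in
/boot/grub/device.map, so it is worth checking there when the numbering
looks odd.  It typically looks like this:

   (hd0)   /dev/sda
   (hd1)   /dev/sdb
   (hd2)   /dev/sdc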


 [root@box7 bobg]# cat /boot/grub/grub.conf
 # grub.conf generated by anaconda
 #
 # Note that you do not have to rerun grub after making changes to
 this file
 # NOTICE:  You have a /boot partition.  This means that
 #  all kernel and initrd paths are relative to /boot/, eg.
 #  root (hd2,0)
 #  kernel /vmlinuz-version ro root=/dev/sdc3
 #  initrd /initrd-[generic-]version.img
 #boot=/dev/sdb

This line ^^^ can be set to /dev/sda if you want GRUB on the MBR of
/dev/sda.  But remove the leading #

 default=0
 timeout=15
 splashimage=(hd2,0)/grub/splash.xpm.gz

This looks for the splashimage (background graphics) from /dev/sdb1 ...
inside the grub/ directory on this partition.

If you mount /dev/sdb1 on /boot ... that means /boot/grub on your file
system.

 hiddenmenu
 title Scientific Linux (2.6.32-279.19.1.el6.x86_64)
  root (hd2,0)

This expects kernels to be found on /dev/sdb1

  kernel /vmlinuz-2.6.32-279.19.1.el6.x86_64 ro
 root=UUID=088eabf7-7644-4acb-9017-bc510456357b rd_NO_LUKS rd_NO_LVM
 LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16
 crashkernel=auto  KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
  initrd /initramfs-2.6.32-279.19.1.el6.x86_64.img
 title Scientific Linux (2.6.32-279.5.1.el6.x86_64)
  root (hd2,0)

Same here as well.

  kernel /vmlinuz-2.6.32-279.5.1.el6.x86_64 ro
 root=UUID=088eabf7-7644-4acb-9017-bc510456357b rd_NO_LUKS rd_NO_LVM
 LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16
 crashkernel=auto  KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
  initrd /initramfs-2.6.32-279.5.1.el6.x86_64.img
 title Other
  rootnoverify (hd0,0)
  chainloader +1

This would typically load Windows or so, installed on /dev/sda1

Hope this helps you understand the drive references a bit better.


--
kind regards,

David Sommerseth


Re: SL-6.3 Install xfce -

2013-01-29 Thread David Sommerseth
On 29/01/13 21:42, Alec T. Habig wrote:
 Bob Goodwin - Zuni, Virginia, USA writes:
I would like to install xfce but there doesn't seem to be a yum rpm.
What is the best way to deal with that?
 
 It's there, but in the epel repository.  Go get the
 epel-release package from here:
 
   http://fedoraproject.org/wiki/EPEL

Or just do:

   yum install yum-conf-epel

And you'll get all the needed files for EPEL ... no need to go via the
Fedora wiki any more :)

Then just do:  yum groupinstall xfce

That should be it.


--
kind regards

David Sommerseth


Re: SL-6.3 Install xfce -

2013-01-29 Thread David Sommerseth
On 29/01/13 22:47, Bob Goodwin - Zuni, Virginia, USA wrote:
 On 01/29/2013 03:57 PM, David Sommerseth wrote:
 On 29/01/13 21:42, Alec T. Habig wrote:
 Bob Goodwin - Zuni, Virginia, USA writes:
 I would like to install xfce but there doesn't seem to be a yum
 rpm.
 What is the best way to deal with that?
 It's there, but in the epel repository.  Go get the
 epel-release package from here:

http://fedoraproject.org/wiki/EPEL
 Or just do:

 yum install yum-conf-epel

 And you'll get all the needed files for EPEL ... no need to go via the
 Fedora wiki any more :)

 Then just do:  yum groupinstall xfce

 That should be it.

 Yes, thanks, that eased the pain a lot! Now if I can figure out
 how to change from gnome to xfce. Maybe there's a switchdesk rpm?

When you log in via the graphical login screen, you should be able to
choose the desktop environment directly there.  Usually at the bottom of the
screen, iirc.


--
kind regards,

David Sommerseth


Re: SL-6 KVM Disk Activity

2013-01-22 Thread David Sommerseth
On 22/01/13 07:34, Chuck Munro wrote:
 - I tried to use the Virtio flavor of disk, but the BSD-based VMs can't
 use them, so I had to stay with IDE disk emulation.

Depending on the BSD flavour, of course, but FreeBSD 8.2+ and 9 have
virtio support ...

  http://www.area536.com/projects/freebsd-as-a-kvm-guest-using-virtio/
  http://people.freebsd.org/~kuriyama/virtio/

I believe there is some ongoing work on virtio support in both OpenBSD
and NetBSD as well.  But that's also all I know.


--
kind regards,

David Sommerseth


Re: growisofs and dual disk (+R) failure

2013-01-02 Thread David Sommerseth
On 02/01/13 06:27, Andrew Z wrote:
 more details:
  it appears that cdrkit/wodim are pretty old and might not support DL
 burning well. And indeed, genisoimage is a leftover from wodim.
 
 Would appreciate a (rpm) spec file for cdrtools. The one for suse is
 giving me hard time.

As a normal (non-root) user, you can do this:

  $ yumdownloader --source cdrkit
  $ rpm -i cdrkit-*.el6.src.rpm
  $ cd ~/rpmbuild/SPECS

And you'll have the cdrkit.spec file there.
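
From there, a rebuild would be something along these lines (yum-builddep
comes from the yum-utils package):

   $ su -c 'yum-builddep cdrkit-*.el6.src.rpm'
   $ rpmbuild -ba cdrkit.spec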


kind regards,

David Sommerseth


Re: Does anyone have virt-manager working with sudo?

2012-12-21 Thread David Sommerseth
On 20/12/12 19:49, Paul Robert Marino wrote:
 Its base64 with DIGEST-MD5 hashing with no salt.
 If you don't believe me just decode it through any base64 tool and you
 will see the entire conversation
 And if you still don't believe me read the RFC that describes SASL its
 very clearly explained and a relatively short read as RFCs go.

I've read through RFC2831 [1] a few more times now; it describes the
DIGEST-MD5 protocol pretty well.  And there are some details there
which libvirt uses, and which make it impossible to use any base64 tool
to extract the password, as you claim.

First, this is the dialogue I captured between client and server:

S: DIGEST-MD5
C: DIGEST-MD5
S: nonce=2VE5+WoJF+WSOHai6UpvyPUyjPV3TVdU//x25Bivx/w=,
   realm=virtsrv1.local,
   qop=auth-conf,cipher=rc4-56,rc4,3des,
   maxbuf=65536,charset=utf-8,
   algorithm=md5-sess
C: username=vmadmin,realm=virtsrv1.local,
   nonce=2VE5+WoJF+WSOHai6UpvyPUyjPV3TVdU//x25Bivx/w=,
   cnonce=3G5mp5dUDXKs6Z/D9pvMLHMk177+Y20DNWfXoK0iscc=,nc=0001,
   qop=auth-conf,cipher=rc4,
   maxbuf=65536,
   digest-uri=libvirt/172.16.21.10,
   response=145b633fe038af33327d720bdb9111b0
S: rspauth=d0c38ff1236b11aa22966cafc78d67f8

This matches very well the protocol described in RFC2831.  And there are
a few things to notice here.

a) qop-options
   From the RFC:
 A quoted string of one or more tokens indicating the "quality of
  protection" values supported by the server.  The value "auth"
  indicates authentication; the value "auth-int" indicates
  authentication with integrity protection; the value "auth-conf"
  indicates authentication with integrity protection and encryption.
  This directive is optional; if not present it defaults to "auth".
  The client MUST ignore unrecognized options; if the client
  recognizes no option, it should abort the authentication
  exchange.

The server and client both send: qop=auth-conf

b) cipher-opts
   From the RFC
 A list of ciphers that the server supports. This directive must be
  present exactly once if "auth-conf" is offered in the
  "qop-options" directive, in which case the "3des" and "des" modes
  are mandatory-to-implement. The client MUST ignore unrecognized
  options; if the client recognizes no option, it should abort the
  authentication exchange.

The server sends "cipher=rc4-56,rc4,3des" and the client responds
with "cipher=rc4".  Which means a 128 bit RC4 encryption was chosen.

c) response field from client

   From the RFC:
 A string of 32 hex digits computed as defined below, which proves
  that the user knows a password. This directive is required and
  MUST be present exactly once; otherwise, authentication fails.

   Further, when decoding the client's 'response', it is not
   representing anything 'text readable'.  You can try to base64 or hex
   decode it.  All you get is binary data.

   This field is built up by hex-encoding a concatenation of several
   fields, some of which have been through a hashing round first.

   The first step is to hash this string:

HASH1 = MD5($USERNAME:$REALM:$PASSWD)

And then this hash is concatenated and hashed once more like this:

HASH2 = MD5($HASH1:$NONCE:$CNONCE)

$NONCE is the nonce value from the server, and $CNONCE is the cnonce
value from the client.  Both of which are unique for this
authentication session.  An MD5 hashing algorithm is used on this
string.

   A second string is prepared as well, like this:

 STR = MD5(AUTHENTICATE:$DIGESTURI:)

   The string 'AUTHENTICATE:' is added in front of $DIGESTURI which is
   the same as the digest-uri from the client.

   Then the final result is put together, using hex encoding of the
   input data:

HEX( HEX($HASH2):$NONCE:$NCVALUE:$CNONCE:$QOP:HEX($STR) )

The $NONCE and $CNONCE are the same as in the previous step, but
the $NCVALUE is the nc field from the client and $QOP is the qop
field from the client.

In addition, as the client responded with 'cipher=rc4', an
encryption is performed as well, with the key based on parts of
$HASH2.  But I will not go into the deep details here now.


So if you still claim that libvirt sends passwords in clear-text, just
BASE64 encoded, well, then I challenge you to expose the password in the
client/server dialogue pasted above.  How I understand this, the
password has been hashed several times, with different salts (even
though not proper random salts).  And it is being RC4 encrypted in
addition.  I simply cannot find anything which looks like a BASE64
encoded password in neither the client/server dialogue nor the RFC -
when qop is set to 'auth-conf' and having 'cipher' set.


kind regards,

David Sommerseth


[1] RFC 2831 http://www.ietf.org/rfc/rfc2831.txt



 On Dec 20, 2012 10:16 AM, David Sommerseth
 sl+us...@lists.topphemmelig.net wrote:
 
 On 19/12/12 23:45, Paul Robert

Re: Does anyone have virt-manager working with sudo?

2012-12-20 Thread David Sommerseth
On 19/12/12 23:45, Paul Robert Marino wrote:
 this statement
  (I double checked the network traffic, and even though not using SSL
 the password is not transferred over the network in clear-text)
 is wrong
 It is clear text that has been base64 encoded unless you are using
 gssapi with kerberos 5 or some other encrypted authentication
 mechanism. you can mitigate this slightly by using the digest-md5 auth
 but all you are doing then is sending the hash of the password in
 clear text which isn't that much better

I might be wrong, but I am quite confident that the data I see is
encrypted.  I've looked at the data via wireshark, and even though it
looks like base64, it doesn't render my password.

And looking further in the comments in /etc/libvirt/libvirtd.conf, there
is this statement:

   # Using the TCP socket requires SASL authentication by default. Only
   # SASL mechanisms which support data encryption are allowed. This is
   # DIGEST_MD5 and GSSAPI (Kerberos5)

And what I see, looks pretty much like digest-md5 with RC4 encryption.
My libvirtd daemon tells this to my client:

   DIGEST-MD5 Dnonce=BASE64_encoded_data,
   realm=DOMAIN,qop=auth-conf,
   cipher=rc4-56,rc4,3des,
   maxbuf=65536,charset=utf-8,algorithm=md5-sess

The client responds with this:

   DIGEST-MD5 Eusername=USERNAME,realm=DOMAIN,
   nonce=BASE64_encoded_data,
   cnonce=BASE64_encoded_data_2,
   nc=0001,qop=auth-conf,cipher=rc4,maxbuf=65536,
   digest-uri=libvirt/1035110,
   response=1f4023d0417acb495ed187255ba80fcf

And then the server responds with:

   rspauth=40922952d194910a08f3654b28d5485f

After this response, there comes a data flow which looks more like
normal data traffic.

As far as I can interpret this, the data from the client is encrypted,
using some shared data for a key exchange.  But I haven't dug into the
source code to verify this yet.

If I'm wrong, then I need to learn more about the DIGEST-MD5 protocol.


kind regards,

David Sommerseth


 On Wed, Dec 19, 2012 at 6:14 AM, David Sommerseth
 sl+us...@lists.topphemmelig.net wrote:
 On 19/12/12 04:38, Nico Kadel-Garcia wrote:
 I'd love to be able to use sudo with virt-manager, but it simply
 fails. It does work on Ubuntu, and I'd like to be able to use sudo for
 all access to my KVM servers, rather than direct root login.

 Is anyone using sudo successfully with virt-manager on SL 6.3? Other X
 applications work just fine.

 I do something similar to this, but slightly differently.  I'm running
 virt-manager as my local user, but connecting to the TCP port,
 authenticating using the SASL feature.  (I double checked the network
 traffic, and even though not using SSL the password is not transferred
 over the network in clear-text)

 * In /etc/libvirt/libvirtd.conf I set the following parameters:

listen_tls=0
listen_tcp=1
listen_address=x.x.x.x # optional
auth_tcp = sasl


 * Create the SASL database with the username/password I want to use

   Look up the sasldb_path in /etc/sasl2/libvirt.conf.  In my setup
   it was set to /etc/libvirt/passwd.db.  Then do:

   [root@host ~]# saslpasswd2 -f /etc/libvirt/passwd.db USERNAME

   The username can be completely virtual if you want.  saslpasswd2 will
   ask for the wanted password.  This username/password will only be
   used for libvirt.

   You can check the user database like this:

   [root@host ~]# sasldblistusers2 -f /etc/libvirt/passwd.db


 * Make libvirtd listen to TCP sockets
   Edit /etc/sysconfig/libvirtd so that LIBVIRTD_ARGS have --listen set

   F.ex:
   LIBVIRTD_ARGS=--listen


 * Restart libvirtd

   [root@host ~]# service libvirtd restart

 If you want to connect to another IP than localhost, you also need to
 open up your firewall as well.  But at this point, it should be possible
 to connect to libvirt over the network:

[user@host ~]$ virsh -c qemu+tcp://localhost/system
[user@host ~]$ virt-manager -c qemu+tcp://localhost/system


 I'm primarily using this approach to manage a couple of KVM hosts over a
 VPN connection, running virsh and virt-manager locally, connecting to
 the IP which is accessible over the VPN.  Works pretty well, but I had
 to do some changes in the guest configs so that SPICE or VNC consoles
 also listened to an IP address available over the VPN.  Changing these
 parameters requires cold-booting the virtual guests.

 I have tried to use SSL/TLS mode as well, but when managing more
 independent libvirt based servers, it gets quite annoying with how the
  client certificates need to be configured.  (Granted, it's a long time
 ago I tried it, so it might have changed)  But I figured, as long as I
 use SASL and does this over VPN connections with strict firewalls, the
 setup is safe enough in my use cases.


 kind regards,

 David Sommerseth


Re: Does anyone have virt-manager working with sudo?

2012-12-19 Thread David Sommerseth
On 19/12/12 04:38, Nico Kadel-Garcia wrote:
 I'd love to be able to use sudo with virt-manager, but it simply
 fails. It does work on Ubuntu, and I'd like to be able to use sudo for
 all access to my KVM servers, rather than direct root login.
 
 Is anyone using sudo successfully with virt-manager on SL 6.3? Other X
 applications work just fine.

I do something similar to this, but slightly differently.  I'm running
virt-manager as my local user, but connecting to the TCP port,
authenticating using the SASL feature.  (I double checked the network
traffic, and even though not using SSL the password is not transferred
over the network in clear-text)

* In /etc/libvirt/libvirtd.conf I set the following parameters:

   listen_tls=0
   listen_tcp=1
   listen_address=x.x.x.x # optional
   auth_tcp = sasl


* Create the SASL database with the username/password I want to use

  Look up the sasldb_path in /etc/sasl2/libvirt.conf.  In my setup
  it was set to /etc/libvirt/passwd.db.  Then do:

  [root@host ~]# saslpasswd2 -f /etc/libvirt/passwd.db USERNAME

  The username can be completely virtual if you want.  saslpasswd2 will
  ask for the wanted password.  This username/password will only be
  used for libvirt.

  You can check the user database like this:

  [root@host ~]# sasldblistusers2 -f /etc/libvirt/passwd.db


* Make libvirtd listen to TCP sockets
  Edit /etc/sysconfig/libvirtd so that LIBVIRTD_ARGS has --listen set

  F.ex:
  LIBVIRTD_ARGS=--listen


* Restart libvirtd

  [root@host ~]# service libvirtd restart

If you want to connect to another IP than localhost, you also need to
open up your firewall.  But at this point, it should be possible
to connect to libvirt over the network:

   [user@host ~]$ virsh -c qemu+tcp://localhost/system
   [user@host ~]$ virt-manager -c qemu+tcp://localhost/system
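
For the firewall bit, something like this should do on EL6 (a sketch;
16509 is libvirt's plain TCP port, and rule ordering may differ on your
box):

   [root@host ~]# iptables -I INPUT -p tcp --dport 16509 -j ACCEPT
   [root@host ~]# service iptables save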


I'm primarily using this approach to manage a couple of KVM hosts over a
VPN connection, running virsh and virt-manager locally, connecting to
the IP which is accessible over the VPN.  Works pretty well, but I had
to do some changes in the guest configs so that SPICE or VNC consoles
also listened to an IP address available over the VPN.  Changing these
parameters requires cold-booting the virtual guests.
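
The guest config change was essentially along these lines (via
'virsh edit GUESTNAME'; the listen address below is just an illustration,
and type='vnc' works the same way):

   <graphics type='spice' autoport='yes' listen='10.8.0.1'/>

Use whatever IP is reachable over your VPN, or 0.0.0.0 to listen on all
addresses.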

I have tried to use SSL/TLS mode as well, but when managing more
independent libvirt based servers, it gets quite annoying with how the
client certificates needs to be configured.  (Granted, it's a long time
ago I tried it, so it might have changed)  But I figured, as long as I
use SASL and does this over VPN connections with strict firewalls, the
setup is safe enough in my use cases.


kind regards,

David Sommerseth


Re: Desktop Cinnamon

2012-12-13 Thread David Sommerseth
On 13/12/12 14:32, Larry Linder wrote:
 Has anyone tried to use the Cinnamon desktop on SL?  The desktops on 6.X were
 unusable from our vantage point and that is why we stalled at 5.8.
 
 Some others I have talked to have tried it and found that they love it,
 and they indicated they were running Fedora.

Cinnamon requires GNOME 3.  It's basically just a GNOME Shell
replacement in GNOME 3.  So to run Cinnamon on EL6, you need to bring
over the complete GNOME 3 stack as well.

I know Cinnamon is available in Fedora 17.


kind regards,

David Sommerseth


Re: Cannot start luci

2012-12-12 Thread David Sommerseth
On 12/12/12 17:28, Evan Sather wrote:
 Hi Oleg,
 
 Here is what repeats in my luci.log:
 
 Traceback (most recent call last):
   File "/usr/bin/paster", line 9, in <module>
     load_entry_point('PasteScript==1.7.3', 'console_scripts', 'paster')()
   File "/usr/lib/python2.6/site-packages/paste/script/command.py", line 84, in run
     invoke(command, command_name, options, args[1:])
   File "/usr/lib/python2.6/site-packages/paste/script/command.py", line 123, in invoke
     exit_code = runner.run(args)
   File "/usr/lib/python2.6/site-packages/paste/script/command.py", line 218, in run
     result = self.command()
   File "/usr/lib/python2.6/site-packages/paste/script/serve.py", line 274, in command
     relative_to=base, global_conf=vars)
   File "/usr/lib/python2.6/site-packages/paste/script/serve.py", line 308, in loadserver
     relative_to=relative_to, **kw)
   File "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 210, in loadserver
     return loadobj(SERVER, uri, name=name, **kw)
   File "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 224, in loadobj
     global_conf=global_conf)
   File "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 248, in loadcontext
     global_conf=global_conf)
   File "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 278, in _loadconfig
     return loader.get_context(object_type, name, global_conf)
   File "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 363, in get_context
     object_type, name=name)
   File "/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py", line 528, in find_config_section
     self.filename))
 LookupError: No section 'init' (prefixed by 'server') found in
 config /var/lib/luci/etc/luci.ini
 Removing PID file /var/run/luci/luci.pid
 
 When I installed luci, I had to do a touch /var/lib/luci/etc/luci.ini
 because I couldn't get past this start failure message:
 
 -bash-4.1$ sudo service luci start
 Unable to create the luci base configuration file
 (`/var/lib/luci/etc/luci.ini').
 Start luci...  [FAILED]
 
 Did I miss a step after installing luci that I should have done before
 trying to start it?

Yes, you missed reading the error message:

  No section 'init' (prefixed by 'server') found in config
/var/lib/luci/etc/luci.ini

I'm no luci user so can't help you much there.  But looks like you need
to get a proper configuration file set up, not just use touch and hope
it works ;-)


kind regards,

David Sommerseth


Re: Scala missing in the official repo(s)

2012-11-29 Thread David Sommerseth
On 29/11/12 13:14, Freak Trick wrote:
 The developers at Scala themselves offer an RPM in the downloads section
 of the official website, but it conflicts with the jline.jar file of the
 JVM. I have never done/maintained packages. Let me check out some
 resources over the net and see if I can make a useful one. If I can, I'd
 definitely look to contribute it.

Why make it so difficult? ;-)

I would first go here:
http://koji.fedoraproject.org/koji/packageinfo?packageID=6830

If I find .el{5,6} packages here, I would go and install the Fedora EPEL
repository (yum install yum-conf-epel, iirc).

If there are no EPEL packages available, I would find the appropriate
package version, probably choose the builds for the oldest Fedora
releases (EL6 is based on Fedora 12/Fedora 13) and download the
.src.rpm.  Then I would install mock and rpmbuild (yum install mock
rpm-build) ... then you just do:

  $ mock -r epel-6-x86_64 --rebuild /path/to/scala-*.src.rpm

(replace x86_64 with i386 to get a 32bit build ... for other targets,
see the files in /etc/mock)

After a little while, you'll get what you need in /var/lib/mock ...
ready to be installed with 'yum localinstall' on the resulting rpms.

Using mock will download and install all the needed stuff for compiling
the src.rpm in a chroot in /var/lib/mock ... so you'll need some space
there to be able to make this work.  But the log files in this directory
are usually quite helpful as well.
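
In practice, the whole round trip looks something like this (a sketch,
assuming yum-utils is installed for yumdownloader):

   $ yum install yum-utils mock rpm-build
   $ yumdownloader --source scala        # or fetch the .src.rpm from koji
   $ mock -r epel-6-x86_64 --rebuild scala-*.src.rpm

Note that 'yumdownloader --source' only works if a source repo carrying the
package is enabled, so grabbing the .src.rpm directly from the koji page is
often easier.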


--
kind regards,

David Sommerseth


Re: SL6.3 - groupinstall error

2012-11-16 Thread David Sommerseth
On 15/11/12 17:33, Duke Nguyen wrote:
 On 11/15/12 11:27 PM, David Sommerseth wrote:
 On 15/11/12 16:56, Duke Nguyen wrote:
 On 11/15/12 7:17 PM, David Sommerseth wrote:
 On 15/11/12 12:03, Duke Nguyen wrote:
 Hi folks,

 I am trying to install a Base system into a directory using yum
 groupinstall, but I got error as below. Any suggestion to solve the
 errors? Thanks.

 $ cat /etc/redhat-release
 Scientific Linux release 6.3 (Carbon)
 $ uname -a
 Linux biowulf.grih.org 2.6.32-279.14.1.el6.x86_64 #1 SMP Tue Nov 6
 11:21:14 CST 2012 x86_64 x86_64 x86_64 GNU/Linux
 $ sudo yum -y groupinstall Base Server Platform
 --installroot=/diskless/root
 Loaded plugins: fastestmirror, refresh-packagekit, security
 Loading mirror speeds from cached hostfile
* sl: ftp.scientificlinux.org
* sl-security: ftp.scientificlinux.org
 http://ftp.scientificlinux.org/linux/scientific/%24releasever/x86_64/os/repodata/repomd.xml:


 ^^^
  This looks wrong.  This should be $releasever and should have been
  expanded to 6.3 before being sent to the web server.  Not sure why this
  happens though :/  %24 is the ASCII hex code for '$'.
  Thanks David, I also noticed that. But that is only one strange error!
  I did try to change $releasever to 6.3 in all repo files in
  /etc/yum.repos.d/, and did not have the above error, but still got a similar
  error at the end:

  --> Processing Dependency: kernel >= 2.6.9-11 for package:
  systemtap-runtime-1.7-5.el6.x86_64
  ---> Package xml-common.noarch 0:0.6.3-32.el6 will be installed
  --> Finished Dependency Resolution
  Error: Package: libdrm-2.4.25-2.el6.x86_64 (sl)
             Requires: kernel >= 2.6.29.1-52.fc11
  Error: Package: pcmciautils-015-4.2.el6.x86_64 (sl)
             Requires: kernel >= 2.6.12-1.1411_FC5
  Error: Package: systemtap-runtime-1.7-5.el6.x86_64 (sl)
             Requires: kernel >= 2.6.9-11
   You could try using --skip-broken to work around the problem
   You could try running: rpm -Va --nofiles --nodigest
  Question is: why did the system install fine with the live CD, and now,
  on the exact same system, installation to a different location
  (/diskless/root) fails with all kinds of dependency errors?
 I'll admit I didn't look too carefully at the rest last time.  Fixing
 the first steps usually helps solving the next ones ... but there are a
 couple of things here I wonder about:

  kernel >= 2.6.29.1-52.fc11
  kernel >= 2.6.12-1.1411_FC5
  kernel >= 2.6.9-11

  Those are ancient kernels; none of them have (to my knowledge) ever been
  EL kernels, and all of them predate EL6 and even EL5.

 So I'm wondering where it got the information about these kernels.
 Could you please provide the output of 'rpm -qa kernel' ?
 
 Here you go:
 
 $ rpm -qa kernel
 kernel-2.6.32-279.14.1.el6.x86_64
 kernel-2.6.32-279.5.1.el6.x86_64
 

Hmm.  Just a silly question ... but have you tried a 'yum clean all' and
then tried the groupinstall?  It's really peculiar that it complains
about these old kernels when you don't have them - and the versions you
have should really be good enough.
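
In case it helps, the clean needs to hit the installroot's cache as well
(a sketch):

   $ sudo yum --installroot=/diskless/root clean all
   $ sudo yum clean all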


kind regards,

David Sommerseth


Re: SL6.3 - groupinstall error

2012-11-15 Thread David Sommerseth
On 15/11/12 12:03, Duke Nguyen wrote:
 Hi folks,
 
 I am trying to install a Base system into a directory using yum
 groupinstall, but I got error as below. Any suggestion to solve the
 errors? Thanks.
 
 $ cat /etc/redhat-release
 Scientific Linux release 6.3 (Carbon)
 $ uname -a
 Linux biowulf.grih.org 2.6.32-279.14.1.el6.x86_64 #1 SMP Tue Nov 6
 11:21:14 CST 2012 x86_64 x86_64 x86_64 GNU/Linux
 $ sudo yum -y groupinstall Base Server Platform
 --installroot=/diskless/root
 Loaded plugins: fastestmirror, refresh-packagekit, security
 Loading mirror speeds from cached hostfile
  * sl: ftp.scientificlinux.org
  * sl-security: ftp.scientificlinux.org
 http://ftp.scientificlinux.org/linux/scientific/%24releasever/x86_64/os/repodata/repomd.xml:
  ^^^
This looks wrong.  This should be $releasever and should have been
expanded to 6.3 before being sent to the web server.  Not sure why this
happens though :/  %24 is the ASCII hex code for '$'.
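
One guess: with --installroot, yum tries to read $releasever from the
(still empty) target root and ends up not expanding it at all.  If so,
passing it explicitly might help (a sketch, untested here):

   $ sudo yum --releasever=6.3 --installroot=/diskless/root groupinstall Base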


kind regards,

David Sommerseth


Re: SL6.3 - groupinstall error

2012-11-15 Thread David Sommerseth
On 15/11/12 16:56, Duke Nguyen wrote:
 On 11/15/12 7:17 PM, David Sommerseth wrote:
 On 15/11/12 12:03, Duke Nguyen wrote:
 Hi folks,

 I am trying to install a Base system into a directory using yum
 groupinstall, but I got error as below. Any suggestion to solve the
 errors? Thanks.

 $ cat /etc/redhat-release
 Scientific Linux release 6.3 (Carbon)
 $ uname -a
 Linux biowulf.grih.org 2.6.32-279.14.1.el6.x86_64 #1 SMP Tue Nov 6
 11:21:14 CST 2012 x86_64 x86_64 x86_64 GNU/Linux
 $ sudo yum -y groupinstall Base Server Platform
 --installroot=/diskless/root
 Loaded plugins: fastestmirror, refresh-packagekit, security
 Loading mirror speeds from cached hostfile
   * sl: ftp.scientificlinux.org
   * sl-security: ftp.scientificlinux.org
 http://ftp.scientificlinux.org/linux/scientific/%24releasever/x86_64/os/repodata/repomd.xml:

^^^
 This looks wrong.  This should be $releasever and should have been
 expanded to 6.3 before being sent to the web server.  Not sure why this
 happens though :/  %24 is the ASCII hex code for '$'.
 
 Thanks David, I also noticed that. But that is only one strange error!
 I did try to change $releasever to 6.3 in all repo files in
 /etc/yum.repos.d/, and did not have the above error, but still got a similar
 error at the end:
 
 --> Processing Dependency: kernel >= 2.6.9-11 for package:
 systemtap-runtime-1.7-5.el6.x86_64
 ---> Package xml-common.noarch 0:0.6.3-32.el6 will be installed
 --> Finished Dependency Resolution
 Error: Package: libdrm-2.4.25-2.el6.x86_64 (sl)
            Requires: kernel >= 2.6.29.1-52.fc11
 Error: Package: pcmciautils-015-4.2.el6.x86_64 (sl)
            Requires: kernel >= 2.6.12-1.1411_FC5
 Error: Package: systemtap-runtime-1.7-5.el6.x86_64 (sl)
            Requires: kernel >= 2.6.9-11
  You could try using --skip-broken to work around the problem
  You could try running: rpm -Va --nofiles --nodigest
 
 Question is: why did the system install fine with the live CD, and now,
 on the exact same system, installation to a different location
 (/diskless/root) fails with all kinds of dependency errors?

I'll admit I didn't look too carefully at the rest last time.  Fixing
the first steps usually helps solving the next ones ... but there are a
couple of things here I wonder about:

kernel >= 2.6.29.1-52.fc11
kernel >= 2.6.12-1.1411_FC5
kernel >= 2.6.9-11

Those are ancient kernels; none of them have (to my knowledge) ever been
EL kernels, and all of them predate EL6 and even EL5.

So I'm wondering where it got the information about these kernels.
Could you please provide the output of 'rpm -qa kernel' ?


kind regards,

David Sommerseth


Re: flash replacement

2012-11-14 Thread David Sommerseth
On 14/11/12 10:58, Todd And Margo Chester wrote:
 On 11/13/2012 08:57 PM, Andrew Z wrote:
 hello,
   is there an alternative for ( lots of grumpy swearing omitted ) flash
 plugin in FF? with the latest 11.2.202.251 x86 release the HD videos in
 full-screen a crashing.
 [bitching on]
   first i deal with an invasion of Avatars - the blue people and now
 $$$ thing won't even play for longer than 4 minutes in full screen
 [bitching off]
 
 
 Hi Andrew,
 
Oh good.  I get to help someone for a change.
 
The recent Flash player is a DISASTER.  Can't seem to get
 anyone to fix it either.
 
Back off to flash-plugin-11.1.102.63-0.1.el6.rf.x86_64.rpm
 (or 32 bit, if that's what you are running) and turn off auto
 updates.
 
 http://apt.sw.be/redhat/el6/en/x86_64/dag/RPMS/flash-plugin-11.1.102.63-0.1.el6.rf.x86_64.rpm
 
 
 # rpm -e flash-plugin
 # rpm -ivh flash-plugin-11.1.102.63-0.1.el6.rf.x86_64.rpm
 

Seriously, what's wrong with using the official Adobe Flash player?  I
see you're using one from the rpmforge/fusion repo.

1.  Go here: http://get.adobe.com/flashplayer/
2.  Select YUM for Linux and click download.
3.  Install that RPM
4.  as root: yum erase flash-*
5.  Disable all those extra elrepo, atrpms, rpmfusion/rpmforge
    repositories (look inside the files in /etc/yum.repos.d/ )
6.  as root: yum install flash-plugin
7.  Enjoy a more reliable flash.
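
Put together, the whole dance is something like this (the adobe-release
filename below is illustrative - use whatever the download page gives you):

   [root@host ~]# rpm -ivh adobe-release-x86_64-1.0-1.noarch.rpm
   [root@host ~]# yum erase flash-*
   [root@host ~]# yum install flash-plugin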

This way you get no 32bit nonsense on 64bit installations.  And you get
something which works quite a bit more reliably.  I'm using the Adobe Flash
player (64bit) on a couple of my private laptops.  And I'm watching
fullscreen TV broadcasts in high quality resolution (3.5Mbit/s streams)
scaled up to my 1920x1200 display ... and if there have been issues, they've
been related to not getting the needed bandwidth.

I've mostly had trouble with those additional repos, so I only
enable them when it's needed.  Otherwise, SL repos + EPEL repos give a
really stable and reliable system.  At least, that's been my experience.


kind regards,

David Sommerseth


Re: recent kernel and root raid1

2012-11-13 Thread David Sommerseth
On 12/11/12 22:14, Konstantin Olchanski wrote:
 On Sat, Nov 10, 2012 at 08:14:41AM -0600, Robert Blair wrote:
 I have a system that failed to boot after the most recent kernel update.
  It took a while, but I eventually traced it to the initramfs not having
 raid1 included.  I had to manually do a mkinitrd --preload raid1 ...

 ... dracut-004-283.el6.noarch

 
 All are invited to inspect the mdadm-related code in dracut.
 
 Instead of running mdadm -As and hoping for the best, there is a maze of
 twisty write-only shell scripts to do who knows what (there is no
 documentation whatsoever) and they seem to get things wrong as often as
 they get them right.
 
 People who develop that kind of stuff and push it on us as improvements
 should have their heads inspected. (Insult intended).

I've been uncertain if I should respond to this mail or not.  But I
struggle to let such rants pass when I think this is really an unfair
attack.

I will not respond to the technical solutions chosen by upstream
communities or projects (mdadm, dracut, etc).  But I do react to your
claim which sounds like you think the upstream developers are clueless
and don't care about what they do.  In addition you do this on a mailing
list which is not directly related to the upstream components which you
criticise.

All the code in this Enterprise Linux distro comes from an open source
upstream source.  There are usually upstream communities to discuss
things with.  And there are communities where you can provide patches
fixing things you find not being in a good shape.  Insightful feedback
is always appreciated.  Using the proper channels to provide feedback
and patches can at least result in something fruitful.  Like improving
dracut.

https://dracut.wiki.kernel.org/index.php/Main_Page

Throwing out such trash which you did will definitely not improve
anything.  Upstream developers do really deserve better treatment from
us - no matter what we think of their work.  And especially if they
don't hang around on this mailing list to respond to such (intended)
insults.

So please, can we try to lift these discussions from throwing dirt
behind peoples back to rather provide useful feedback on the right
places?  Thank you!


kind regards,

David Sommerseth


Re: recent kernel and root raid1

2012-11-13 Thread David Sommerseth

On 13/11/12 21:04, Konstantin Olchanski wrote:

On Tue, Nov 13, 2012 at 08:33:05PM +0100, David Sommerseth wrote:

On 12/11/12 22:14, Konstantin Olchanski wrote:



... But I do react to your claim which sounds like you think the upstream
developers are clueless and don't care about what they do.


That is not what I said. I did not say they are stupid, I did not say they
are indifferent or malicious. I said that they are crazy. There is a difference.


You did not use the term crazy, but you did indicate very well you wanted to 
insult someone.  Insults in general are not something I personally appreciate, 
and feel rather confident I'm not alone in that opinion.  No matter how it is 
expressed.



All the code in this Enterprise Linux distro comes from an open source
upstream source.  There are usually upstream communities to discuss
things with.  And there are communities where you can provide patches
fixing things you find not being in a good shape.


There is a minor problem with your suggestion that I send bug fixes
to upstream:

Upstream is at dracut-024 (Oct-2012), SL6 is at dracut-004 (January-2010).


Yes, you are right.  But you are also wrong.  Yes, SL6 is based on the 
dracut-004 release.  But that does not mean the latest SL6 dracut package is 
comparable to the -004 release any more.  There has been quite some fixes 
since that release.  Just do a 'rpm -q --changelog dracut' and see for 
yourself.  Even better, download the src.rpm and look at the patches applied 
on top of the -004 release.
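
If you want to look at those patches yourself, something along these lines
works (a sketch, assuming yum-utils for yumdownloader):

   $ yumdownloader --source dracut
   $ rpm2cpio dracut-004-*.src.rpm | cpio -idv '*.patch'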


Just an example.  The latest change to dracut-004-284 contains
0284-fips-set-boot-as-symlink-to-sysroot-boot-if-no-boot-.patch.  This
points at upstream git commit f22f08d857fb239c38daf7099fc55e82506f4abe which 
can be found in the RHEL-6 branch in the upstream git tree.


Important fixes are included and fixed when needed.  So if it doesn't work for
you, get in touch with upstream and/or file a bugzilla with the upstream
project.  If it is important for RHEL, it will arrive in RHEL at some point
too.  The severity of the issue decides how quickly a fix gets pushed out.
And when RHEL pushes out a fix ... then SL gets the fix too.  Bottom line is:
Working together with upstream projects does benefit SL too.


Red Hat does a lot of backports.  A package is frozen on a specific version to 
be stabilised when the next RHEL release is being worked on.  From that point 
on, the idea is that fixes and enhancements are backported from newer 
versions.  As an example: KVM was introduced in the 2.6.20 kernel.  RHEL5 
ships with a 2.6.18 based kernel.  How come RHEL5 supports KVM?



I can elaborate on how bug fixes to upstream do no good to SL users.


No need.


Throwing out such trash which you did will definitely not improve
anything.  Upstream developers do really deserve better treatment from
us - no matter what we think of their work.


Please refer to: http://en.wikipedia.org/wiki/The_Emperor's_New_Clothes


I'm very well aware of that fairytale, but I'm not sure why you bring that one 
up in this discussion.  My point is that telling the right people upfront of 
issues generally makes things better.  Ranting behind peoples backs sounds 
more familiar to the fairytale to me.



kind regards,

David Sommerseth


Re: mt and LTO

2012-11-01 Thread David Sommerseth

On 01/11/12 22:57, Jeff Siddall wrote:

Not to bash tools like cpio, tar and mt, but one of the coolest pieces of
software out there has to be BackupPC. Huge cheap magnetic disks combined with
the compression and file pooling capabilities of BackupPC makes it hard for me
to imagine going back to tapes.

Just last week I restored an entire 1.5 GB LTSP client image from a backup
made a few months ago, all in a matter of a few minutes with no digging for
tapes.


I've been using BoxBackup for quite some time.  This is an online backup 
solution.  It might not be as fancy as Amanda or Bacula, but it is darn easy 
to set up and works quite well.  Most of my boxes are set up with lazy backup, 
which basically means rolling backups based on time stamps on the files.  All 
backups are encrypted on the client side too, before they are sent to the 
server.  This means you'll have to have the decryption key saved somewhere 
else safely to be able to do disaster recoveries.  But this can be ideal 
for off-site storage where you might not trust the storage environment.


What I don't like about BoxBackup is that the project almost seems dead, even 
though some development is happening.  And the command line tools can 
sometimes be a bit tricky to work with.  But when you find the right tool and 
the right commands, it seems to work very well.  There are also some issues 
with deleted files being cleaned up a bit too early and something about 
timestamps on directories, iirc.  The Windows client does not handle files 
over 2GB well, but those files can be restored on a Linux computer (provided 
you have the encryption key available)


If I get some more time, I'm planning to submit BoxBackup to Fedora (including 
EPEL) after I've cleaned up the .spec file a bit more (the one in the source 
tree is rather nasty, and definitely does not comply with current Fedora 
packaging rules).  But if someone is interested to give this a try and/or help 
out, get in touch and I'll provide a src.rpm for testing.


But I have to admit ... if Bacula would have been as easy to set up (including 
encryption) as BoxBackup, I would probably have been using Bacula instead - 
mostly because of the tools and it seems to be better developed.  Bacula can 
use file storage as well as tapes.  Another option for backup to hard drives 
is also to use iSCSI and export partitions or files as iSCSI tape volumes. 
I've not tested this yet, but if the docs are correct, it should even support 
mtx to exchange tapes.



kind regards,

David Sommerseth


Re: [SCIENTIFIC-LINUX-USERS] Yum update problem -

2012-10-31 Thread David Sommerseth

(sorry! resending with proper e-mail address)

On 31/10/12 15:50, Bob Goodwin - Zuni, Virginia, USA wrote:

On 31/10/12 10:14, Pat Riehecky wrote:

syslinux-4.05-1.el6.rfx.x86_64 is not from SL, I suspect your problem is
coming from there.

Pat


Well where would it come from? I would have chosen an XFCE Live USB
installation if that was available but I'm not certain ...

Bob


Try this:

 # yum --disablerepo=* --enablerepo=sl --enablerepo=sl-security update

You might also consider adding --enablerepo=sl-fastbugs.

With this approach, you first disable all repos and will only add the official 
SL repos.  That should hopefully get you through the worst part.


Otherwise, have a look in /etc/yum.repos.d ... and use 'rpm -qif <repo-file>' 
to see where this file came from.  If it's something you've added manually, 
you won't find a match.  Otherwise, it's usually coming from a package somewhere.
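
For example (the repo filename here is purely illustrative):

   $ rpm -qif /etc/yum.repos.d/rpmforge.repo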


My experience is that rpmforge/rpmfusion and elrepo repositories often cause 
conflicts, so if I need them I have them disabled by default.  This might be 
because I'm often depending on the EPEL repository too.  YMMV.



kind regards,

David Sommerseth


Re: DeltaRPM Support for Scientific Linux

2012-10-13 Thread David Sommerseth
- Original Message -
 From: Piruthiviraj Natarajan piruthivi...@gmail.com
 To: SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV
 Cc: Piruthiviraj Natarajan piruthivi...@gmail.com
 Sent: Saturday, 13 October, 2012 8:34:41 PM
 Subject: DeltaRPM Support for Scientific Linux
 
 Hi Everyone,
 
 I'm new to this list and Scientific Linux.
 I have been using Fedora and RHEL based clones for a while.
 I was thinking that it would be a big benefit to the users to save some
 bandwidth if SL deployed Delta updates in the official repos.
 I asked the question in the forum and they directed me here.
 
 Where can I make the request for the feature?

Just do:

 [root@host ~]# yum install yum-presto

That's all, the presto plug-in is enabled automatically when installing it, and 
then delta-rpms are pulled down on updates.  It works quite fine for me at 
least.


kind regards,

David Sommerseth


Re: DeltaRPM Support for Scientific Linux

2012-10-13 Thread David Sommerseth
- Original Message -
 From: Akemi Yagi amy...@gmail.com
 To: David Sommerseth sl+us...@lists.topphemmelig.net
 Cc: Piruthiviraj Natarajan piruthivi...@gmail.com, 
 SCIENTIFIC-LINUX-USERS@listserv.fnal.gov
 Sent: Saturday, 13 October, 2012 10:08:49 PM
 Subject: Re: DeltaRPM Support for Scientific Linux
 
 On Sat, Oct 13, 2012 at 12:26 PM, David Sommerseth
 sl+us...@lists.topphemmelig.net wrote:
  - Original Message -
  From: Piruthiviraj Natarajan piruthivi...@gmail.com
  To: SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV
 
  Hi Everyone,
 
  I'm new to this list and Scientific Linux.
  I have been using Fedora and RHEL based clones for a while.
  I was thinking that it would be a big benefit to the users to save some
  bandwidth if SL deployed Delta updates in the official repos.
  I asked the question in the forum and they directed me here.
 
  Where can I make the request for the feature?
 
  Just do:
 
   [root@host ~]# yum install yum-presto
 
  That's all, the presto plug-in is enabled automatically when
  installing it, and then delta-rpms are pulled down on updates.  It
  works quite fine for me at least.
 
 
  kind regards,
 
  David Sommerseth
 
 Are you sure about this? My understanding is that deltaRPMs are not
 available for SL. I know CentOS has them but ...

Oh, good point!  I might be mixing it up with some other installations (with 
Fedora and RHEL).  I know I have installed presto on SL boxes as well, so I've 
taken it for granted.  And it might be that I've seen it in action on 
third-party repos like EPEL or so.


kind regards,

David Sommerseth


Re: The opposite SL and VirtualBox problem

2012-10-05 Thread David Sommerseth
- Original Message -
 From: Nico Kadel-Garcia nka...@gmail.com
 To: David Sommerseth sl+us...@lists.topphemmelig.net
 Cc: Joseph Areeda newsre...@areeda.com, 
 SCIENTIFIC-LINUX-USERS@listserv.fnal.gov,
 owner-scientific-linux-us...@listserv.fnal.gov
 Sent: Thursday, 4 October, 2012 2:53:01 AM
 Subject: Re: The opposite SL and VirtualBox problem
 
 On Tue, Oct 2, 2012 at 6:59 PM, David Sommerseth
 sl+us...@lists.topphemmelig.net wrote:
  - Original Message -
  From: Joseph Areeda newsre...@areeda.com
  To: owner-scientific-linux-us...@listserv.fnal.gov
  Cc: SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV
  Sent: Tuesday, 2 October, 2012 10:51:52 PM
  Subject: Re: The opposite SL and VirtualBox problem
 
  Well, I'm not going to touch Nico's comment because I don't know
  KVM.
 
  For me it's the Devil you know kind of thing.  I've had good
  experience
  with Vbox on multiple OS and am just playing in my comfort zone.
 
  I do have reasons to explore other VMs but none of them pressing.
   I
  just want to install one of the University's free site license
  copy
  of
  Windows as a courtesy to our students.
 
  Even though Nico has some good points, I feel some of them are
  also dated due to the shape of virt-manager in earlier versions.
  In EL6.3, it's become quite good IMO and very usable.  If you're
  running KVM locally on your own computer, there would be no
  benefits of using vbox IMO.
 
 What do you find improved? I'm writing a new KVM setup guideline for
 complete newbies on an open source project, and would welcome your
 insights. I did a 6.3 based installation today and found no
 significant improvementn in the virt-manager itself.

I've been playing with virt-manager since Fedora 11 (maybe even a little bit in 
Fedora 8/9-ish) and RHEL5, so that's my background.  And from that perspective, 
I can now easily through virt-manager create a new network bridge [1], setup 
LVM controlled storage pools (or adding iSCSI based disks [2], making the iSCSI 
layer invisible to the KVM guest) and create new VMs using the virt-manager 
configured network bridge and storage pool (including allocating a logical 
volume in the storage pool's volume group).  I have all OSes downloaded as ISO 
images, saved them under /var/lib/libvirt/images, and use the ISO install 
from that directory.  Or I can mount these ISO images and provide them via a 
locally configured http server and use the network install.  And I do all this 
via the virt-manager.  I've not played much with the Virtual Network feature, 
but from what I can see, it should be fairly straight forward to also configure 
that - as either a closed, NATed or routed network, where libvirt configures 
dnsmasq automatically if you want DHCP on the virtual network.

To set up the network and storage pools, I right click on the localhost entry 
in virt-manager and click on Details.  The VM installation is done using the 
Create new VM wizard, just filling out the fields.  And VM management is done 
double clicking the VMs.  So to answer your question, in the early days, all 
this was cumbersome and didn't always work.  In SL6.3, I have no troubles doing 
this at all, It Just Works.  So I struggle to understand your criticism of the 
virt-manager functionality (from a GUI perspective).  It might be I'm blind to 
those issues you see, as I've had my fights earlier and which I don't have any 
more.

So it almost feels like it is not doing it the way you want it to work, to 
which I can't add much.


kind regards,

David Sommerseth


[1] If adding VLAN and bridges to the mix, you need to manually add VLAN=yes 
to the ifcfg-eth* file virt-manager creates for the eth device.  Then creating 
the bridge afterwards will allow you to complete the setup without issues.  
That's the only issue I've hit in virt-manager in my last tests.
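
For reference, the manual tweak ends up looking roughly like this
(illustrative interface and VLAN names):

   # /etc/sysconfig/network-scripts/ifcfg-eth0.100
   DEVICE=eth0.100
   VLAN=yes
   BRIDGE=br100
   ONBOOT=yes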

[2] 
http://berrange.com/posts/2010/05/04/provisioning-kvm-virtual-machines-on-iscsi-with-qnap-virt-manager-part-2-of-2/


Re: Iptable rule required to block youtube

2012-10-05 Thread David Sommerseth
- Original Message - 
 From: vivek chalotra vivekat...@gmail.com
 To: Henrique Junior henrique...@gmail.com
 Cc: Konstantin Olchanski olcha...@triumf.ca,
 scientific-linux-us...@fnal.gov
 Sent: Friday, 5 October, 2012 9:10:24 AM
 Subject: Re: Iptable rule required to block youtube

 I have blocked youtube(ips from 74.125.236.0- 74.125.236.14) in my
 gateway machine using the below rules:

 iptables -A INPUT -i eth1 -s 74.125.236.0 -j DROP
 iptables -A INPUT -i eth1 -p tcp -s 74.125.236.0 -j DROP
 iptables -A INPUT -i eth0 -s 74.125.236.0 -j DROP
 iptables -A INPUT -i eth0 -p tcp -s 74.125.236.0 -j DROP

 but how to block on the whole network. Other hosts are still able to
 access youtube.

With whole network, do you mean your local LAN which your firewall (this SL 
box you're configuring) controls?  If so, you should probably add those DROP 
rules to the FORWARD chain and not the INPUT chain.
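
Something like this, roughly (a sketch; the /28 covers 74.125.236.0-15,
which is close to the range you listed):

   iptables -I FORWARD -d 74.125.236.0/28 -j DROP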

See this URL for more info: 
http://www.netfilter.org/documentation/HOWTO//packet-filtering-HOWTO-6.html


kind regards,

David Sommerseth


Re: The opposite SL and VirtualBox problem

2012-10-02 Thread David Sommerseth
- Original Message -
 From: Nico Kadel-Garcia nka...@gmail.com
 To: David Sommerseth sl+us...@lists.topphemmelig.net
 Cc: Joseph Areeda newsre...@areeda.com, 
 SCIENTIFIC-LINUX-USERS@listserv.fnal.gov
 Sent: Tuesday, 2 October, 2012 1:53:29 PM
 Subject: Re: The opposite SL and VirtualBox problem
 
 On Tue, Oct 2, 2012 at 6:15 AM, David Sommerseth
 sl+us...@lists.topphemmelig.net wrote:

[...snip...]

 When I am on a remote connection with limited bandwidth, and I need
 to
 add or modify hardware configurations for a VM, I *do not want* to
 have to run the graphics console of the VM simply in order to add a
 network port. The virt-manager tool got this *dead wrong*.

Well, I might be quite used to work with low-level stuff.  But I find 'virsh' 
quite handy to work with when just using ssh to my KVM host.  And when I wonder 
about anything, this resource provides the info I need to figure out things:

http://libvirt.org/sources/virshcmdref/html-single/

Otherwise, I've set up libvirt to be accessible over a TCP port, with 
authentication.  So I use a local running instance of virt-manager, managing a 
remote KVM host.  And it doesn't even really have to be the same version for 
general functionality.  Even though installing a new OS has been a bit 
annoying (unless doing networked installs).

 Oh, and did I forget to mention that if your X setup is not just
 right, virt-manager fails to start completely silently? It doesn't
 evem check if your DISPLAY is set and send an appropriate error
 message?

Agreed, that's annoying.

 KVM also has very serious problems with network configuration. The
 necessary bridged network configuration for VM's that are not going
 to be behind a NAT or completely isolated is not actually supported
 by
 any of our upstream vendor's configuration tools. You have to hand
 edit your core network configuration files with a text editor, and
 any
 use of NetworkManager to manage VPN  or wireless connections puts you
 at risk of breaking it unless you very rigorously and manually put
 'NM_CONTROLLED=no' in each /etc/sysconfig/network-scripts/ifcfg-*
 file.

It's been a while since I tried to set this up.  And last time, I needed VLANs as 
well, so that required manually tweaking things even more.  But I know it used 
to be like that.  I haven't really tried it with the latest 6.3 release.
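
For completeness, the hand-edited files Nico refers to look roughly like
this on EL6 (a minimal sketch, illustrative names):

   # /etc/sysconfig/network-scripts/ifcfg-eth0
   DEVICE=eth0
   BRIDGE=br0
   NM_CONTROLLED=no
   ONBOOT=yes

   # /etc/sysconfig/network-scripts/ifcfg-br0
   DEVICE=br0
   TYPE=Bridge
   BOOTPROTO=dhcp
   NM_CONTROLLED=no
   ONBOOT=yes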

 It is a very, very steep learning curve to get your first KVM setups
 working, where with VirtualBox it's very plug and play. VirtualBox is
 unlikely to have the high scalability, live server migration, or
 kernel integration that KVM has, but for a casual virtual machine or
 two running common operating systems on common architectures, who
 cares?

It used to be hard to set up KVM.  Maybe I've just gotten so used to the process, 
but I find it quite intuitive mostly these days.  And the docs have also 
improved.  Anyhow, I'm also quite sure the libvirt guys wouldn't mind patches 
from users fixing their itches as well ... as you mentioned you did some 
coding, I mean.


kind regards,

David Sommerseth


Re: The opposite SL and VirtualBox problem

2012-10-02 Thread David Sommerseth
- Original Message -
 From: Joseph Areeda newsre...@areeda.com
 To: owner-scientific-linux-us...@listserv.fnal.gov
 Cc: SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV
 Sent: Tuesday, 2 October, 2012 10:51:52 PM
 Subject: Re: The opposite SL and VirtualBox problem
 
 Well, I'm not going to touch Nico's comment because I don't know KVM.
 
 For me it's the Devil you know kind of thing.  I've had good
 experience
 with Vbox on multiple OS and am just playing in my comfort zone.
 
 I do have reasons to explore other VMs but none of them pressing.  I
 just want to install one of the University's free site license copy
 of
 Windows as a courtesy to our students.

Even though Nico has some good points, I feel some of them are also dated due 
to the shape of virt-manager in earlier versions.  In EL6.3, it's become quite 
good IMO and very usable.  If you're running KVM locally on your own computer, 
there would be no benefits of using vbox IMO.

You might find the RHEL6 Virtualisation guides quite handy though,

Getting started:
https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Getting_Started_Guide/index.html

Administration Guide:
https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Administration_Guide/index.html

Host Configuration and Guest Installation Guide:
https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Host_Configuration_and_Guest_Installation_Guide/index.html

Security Guide:
https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Security_Guide/index.html

V2V Guide (import VMs from other vendors into KVM)
https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/6/html/V2V_Guide/index.html

You basically need to install libvirt and virt-manager on your box, and you're 
ready to go.  To get SPICE support, you need to install some extra spice 
packages as well, which will improve the graphical console performance 
considerably (compared to VNC).  All packages and dependencies are available in 
SL6.  Give it a shot, and you'll see it's not necessarily that much harder than 
vbox. 

You can also run Windows guests within KVM.  And there's even virtio drivers 
available [1] to improve the performance of disk and network IO.


kind regards,

David Sommerseth


[1] http://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers


  On 10/2/12 3:15 AM, David Sommerseth wrote:
 
 
  - Original Message -
  From: Joseph Areeda newsre...@areeda.com
  To: SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV
  Sent: Tuesday, 2 October, 2012 12:33:59 AM
  Subject: The opposite SL and VirtualBox problem
 
  I want to run Windows as a guest system on my Sl6.3 box.
 
  Installing vbox from the Oracle repository gives me an error
  trying
  to
  create the kernel modules.
  Just a silly question.  Why bother with VirtualBox when you have
  KVM built into the OS?  Using the SPICE protocol (yum search
  spice) and you'll even get a decent console performance.  And it's
  really easy to setup and configure using virt-manager.
 
 
  kind regards,
 
  David Sommerseth
 
  xxx
 
 


Re: [SCIENTIFIC-LINUX-USERS] Password required in single-user mode? (solved)

2012-08-26 Thread David Sommerseth

On 08/23/2012 03:18 PM, Pat Riehecky wrote:

On 08/18/2012 03:57 PM, David Sommerseth wrote:

Hi,

I've been running Scientific Linux since the 6.0 days, and single-user mode
has basically behaved how I have expected those few times I needed it.
As I usually set up my boxes' root accounts with passwords disabled,
single-user mode needs to be without root password.

Today, after having upgraded to 6.3, I needed to enter single-user mode at
boot. And I was asked for a password at boot time. Is this change intentional?

# cat /etc/redhat-release
Scientific Linux release 6.3 (Carbon)
# rpm -qa | grep -i sl_password_for_singleuser | wc -l
0
# grep SINGLE /etc/sysconfig/init
SINGLE=/sbin/sushell

If this change was intentional, how can I go back to the old behaviour? I
double checked the behaviour with an old VM with SL6.1, and that behaves as
expected.


kind regards,

David Sommerseth


Hi David,

The behavior shouldn't have changed. You've provided just about all the
relevant details in your email, so there isn't really anything I want to ask
for more information.

Can I have you try setting SINGLE in /etc/sysconfig/init to /sbin/sulogin,
rebooting, and setting it back to /sbin/sushell? Perhaps something got 'stuck'
wrong.

/sbin/sushell is a shell script, so can I have you verify its contents? Mine
looks like:

#!/bin/bash

[ -z "$SUSHELL" ] && SUSHELL=/bin/bash

exec $SUSHELL


Hi Pat,

First of all, thanks a lot for your answer, and to Stephan for his input as 
well.  I checked /sbin/sushell and the SINGLE variable in /etc/sysconfig/init. 
 And it was currently working just as expected - in both modes.


But as I knew it had been failing, I added an extra disk to the VM I have with 
SL6.3.  Double checked adding 's' to the kernel command line gave me a shell 
without asking for password.  Then I added this disk to /etc/fstab ... and 
removed the extra disk.  This time, I was asked for a root password when 
booting up - both when I was asking for 's'ingle user-mode and without it. 
Adding the disk back, and the proper behaviour is back too.


So that's the issue I hit, as I in my real setup had just added and configured 
a new disk with encryption on my VM - but messed up the encryption key.  When 
I then wanted to rescue the system, instead of solving it via 'single-user 
mode' I had to use guestfish on the root filesystem of the failing machine and 
disable the new disk in /etc/crypttab and /etc/fstab.  Then I could start 
setting up the disk again with a proper key.


But the hint from Stephan, setting EMERGENCY=/sbin/sushell in 
/etc/sysconfig/init did the trick.  So that's what I need to set on all my boxes.
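
For my own notes, /etc/sysconfig/init now carries both of these (the
EMERGENCY line is the one Stephan suggested adding):

   SINGLE=/sbin/sushell
   EMERGENCY=/sbin/sushell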


Lesson learnt: Emergency mode supersedes single-user mode, especially with 
filesystem failures.



Thanks all!


kind regards,

David Sommerseth


Re: SL 6 vs. other RHEL clones: security advisory comparison

2012-08-20 Thread David Sommerseth
On 20/08/12 14:02, Janne Snabb wrote:
 Hello,
 
 I made some statistics and comparisons about security advisories
 published by three popular RHEL 6 clones: CentOS 6, Oracle Linux 6 and
 Scientific Linux 6.
 
 The article is available at the following URL:
 
 http://bitrate.epipe.com/rhel-vs-centos-scientific-oracle-linux-6_187
 
 I hope you find it interesting.

This is really interesting.  However, there are five Important and
Critical erratas which are delivered considerably slower
(RHSA-2012:0387, RHSA-2012:0388, RHSA-2012:1009, RHSA-2012:1054 and
RHSA-2012:1064).  SL's average is very much impacted by 3 of them.

What would be more interesting, from a statistical point of view, is to
take those extremes out of the comparison.  On an average level, all
distros deliver errata updates fairly fast, but those extremes skew
the real average.  Of course extreme delays happened and will also
happen in the future, I'm not trying to hide that.  But getting an
average which is closer to the typically expected average would probably
give a better indication.

So my suggestion is to take those five erratas I listed out of the
equation, which are only tagged as Important or Critical.  Those
five erratas are really a minority in a bigger set of data and surely
look like exceptions, across all distros.

As I said, I'm not trying to hide the fact that SL had some slow
deliveries.  We need those graphs too.  Such things happen to all
distros; sometimes you just get set behind.  But they do actually skew
the /typically expected/ delivery delay badly.

And people with statistics background can probably explain even better
than me why it's interesting to remove the extremes and compare that too.


kind regards,

David Sommerseth


Password required in single-user mode?

2012-08-18 Thread David Sommerseth

Hi,

I've been running Scientific Linux since the 6.0 days, and single-user mode 
has basically behaved how I have expected those few times I needed it.  As 
I usually set up my boxes' root accounts with passwords disabled, single-user 
mode needs to be without root password.


Today, after having upgraded to 6.3, I needed to enter single-user mode at 
boot.  And I was asked for a password at boot time.  Is this change intentional?


# cat /etc/redhat-release
Scientific Linux release 6.3 (Carbon)
# rpm -qa | grep -i sl_password_for_singleuser | wc -l
0
# grep SINGLE /etc/sysconfig/init
SINGLE=/sbin/sushell

If this change was intentional, how can I go back to the old behaviour?  I 
double checked the behaviour with an old VM with SL6.1, and that behaves as 
expected.



kind regards,

David Sommerseth


Re: SL to 6.3 release

2012-07-19 Thread David Sommerseth
On 19/07/12 16:05, Federico Alves wrote:
 What I would like to see is a faster release of updates, somewhat close to
 the upstream.

Uhm ... what about making a guess ... Guess why SL/CentOS come without
any cost and why RH increased their pricing ... For my part, I'd
probably guess it's somewhat related ... I might be wrong, of course...


kind regards,

David Sommerseth


Re: is the drop to fsck visual fixed in 6.3?

2012-07-19 Thread David Sommerseth
On 18/07/12 23:02, Orion Poplawski wrote:
 On 07/18/2012 02:50 PM, Todd And Margo Chester wrote:

 I don't know the exact number, I think it is 27 reboots,
 your boot will automatically drop to an FSCK.  In RHEL5,
 you would see a status bar showing you progress.  In 6,
 you get no indication that an FSCK is happening and you
 think you are frozen.  The temptation to throw the power
 switch is overwhelming.
 
 I can't speak to the lack of status, but you can disable the automatic
 fsck with:
 
 tune2fs -c 0 -i 0 /dev/<device>
 
 This is pretty much done automatically now for any filesystems that the
 install makes, but not ones that you create later.

If using ext{2,3,4}, I would strongly recommend *against* doing this.
Running fsck from time to time isn't a bad thing.  It can surely happen
that it finds some things which should be fixed.  Whether this is needed
for xfs, I dunno.  For reiserfs it is not needed; it will take care of this
on its own - and when it's really needed, it'll scream loudly, you won't be
able to mount that partition/lv, and the fix might take hours to
complete.  Regarding other file systems, I have no experience.

Removing these fscks is like skipping taking your car to a car check
every now and then.  Your car may run for a long way without a service.
But when it is really needed, it can take a long time to fix and might
be more costly compared to if you did this regularly ... and if you want
a reliable server, having a little maintenance downtime a couple of times
a year might not be such a bad investment - in the long run.

Just my 2cents


kind regards,

David Sommerseth


Re: is the drop to fsck visual fixed in 6.3?

2012-07-19 Thread David Sommerseth
On 19/07/12 20:20, Konstantin Olchanski wrote:
 I think fsck should be able to run in the background all the time.
 
 I am sure the ext2/3/4 filesystem driver and fsck can be made to talk to
 each other and permit checking a mounted live filesystem. (SUN ZFS can do
 this).

This is actually partly possible if you use LVM.  You create a snapshot
of the file system, and run fsck on the snapshot.  If no issues are
found, no worries.  If there are issues there, then you need to unmount
the file system and run the fsck on the main image (not the snapshot
image).  And when that passes, you can delete the snapshot - this way
you also have a backup in case something goes really bad.
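
A minimal sketch of that dance (VG/LV names are illustrative, and the
snapshot needs free extents in the volume group; fsck's -n only checks,
-f forces the check even if the fs looks clean):

   [root@host ~]# lvcreate -s -n rootsnap -L 2G /dev/vg0/root
   [root@host ~]# fsck.ext4 -fn /dev/vg0/rootsnap
   [root@host ~]# lvremove /dev/vg0/rootsnap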

Anyhow ... this can save you downtime, when the fsck is clean, even an
automatic fsck shouldn't cause much extra delay next time you boot.


kind regards,

David Sommerseth


Re: significance of indicating root partition location in boot process

2012-07-19 Thread David Sommerseth
On 18/07/12 16:28, anuraag chowdhry wrote:
 thanks for replying. but the files needed to boot, i.e. stage1, stage2,
 initramfs disk image and kernel, are present in the boot partition, right?
 and /boot can be on a different hard disk partition too, right?

stage1, stage2, the kernel and initramfs images are actually bootloader
(GRUB, syslinux, lilo, etc) related.  Those are the needed pieces to put
the kernel into RAM, boot it and enable the hardware needed to access
harddrives and such.  GRUB also has limited file system support, to be
able to locate and load those mentioned files.

When the kernel has started, the hardware is enabled, then it goes ahead
and mounts the root file system (rootfs) which it takes from the root=
kernel command line.

When rootfs is ready, it starts the init process (upstart, /sbin/init or
whatever the init= kernel command line argument says).  This is the
first process which should make sure to start the rest of the system
processes ... like starting all the enabled services etc.  It also takes
care of enabling all the physical consoles (usually tty{1-6}) and
respawning these consoles when you're logging out again.

It's a little detail I haven't mentioned, but the very first step is to
run some scripts which are inside the initramfs ... but that's a more
advanced topic.

 are you referring to /sbin/init or /etc/sysconfig/init files that reside
 in the / partition?
 
 if I remove the root= parameter, the system will try to boot to
 some extent, right?

Nope, the kernel will panic, as it doesn't know about any rootfs.  The
rootfs usually needs at minimum /bin, /sbin and /etc.  But it's possible
to boot a system if you just have /etc available - but then the first
init process (pid 1) needs to be in /etc as well.


So to sum up the boot process:

- You press the POWER ON button
- Bootstrap/BIOS Selfcheck
- BIOS POST starts
- BIOS locates boot harddrive and loads the MBR boot loader (512 bytes)
- MBR boots GRUB (stage1/stage2)
- GRUB loads initramfs and kernel into RAM
- GRUB boots the kernel
- kernel executes scripts in the initramfs, prepares for mounting rootfs
- kernel mounts rootfs and starts init process
- init process mounts the rest of the file system(s)
- init process continues loading/starting system config
- init finally starts the console processes (Xorg, *getty)
- The system is ready for logins

(This is the typical Intel arch PC BIOS boot procedure; with EFI this
changes a little bit.  And non-Intel is quite a bit different.)
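
By the way, you can always see which root= and init= your running system
was booted with:

   $ cat /proc/cmdline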


kind regards,

David Sommerseth


Re: server crashing out of memory

2012-07-18 Thread David Sommerseth
 [quoted process listing garbled in the archive: a handful of qemu-kvm
 processes, each using only a few percent CPU]
 
 
 slabtop:
  Active / Total Slabs (% used)  : 1599178 / 1599216 (100.0%)
  Active / Total Caches (% used) : 132 / 204 (64.7%)
  Active / Total Size (% used)   : 6195130.43K / 6274921.86K (98.7%)
  Minimum / Average / Maximum Object : 0.02K / 0.28K / 4096.00K
 
    OBJS  ACTIVE  USE OBJ SIZE   SLABS OBJ/SLAB CACHE SIZE NAME
 7227321 6709793  92%    0.10K  195333       37    781332K buffer_head
 4242650 4242336  99%    0.07K   80050       53    320200K selinux_inode_security
 4224700 4224638  99%    1.00K 1056175        4   4224700K ext4_inode_cache

This smells a bit bad ... ext4_inode_cache is using a lot of memory ...

 3257480 3257186  99%    0.19K  162874       20    651496K dentry
 1324786 1250981  94%    0.06K   22454       59     89816K size-64
  484128  484094  99%    0.02K    3362      144     13448K avtab_node
  347088  342539  98%    0.03K    3099      112     12396K size-32
  342580  324110  94%    0.55K   48940        7    195760K radix_tree_node
  236059  235736  99%    0.06K    4001       59     16004K ksm_rmap_item
  123980  123566  99%    0.19K    6199       20     24796K size-192
  105630   47803  45%    0.12K    3521       30     14084K size-128
   24300   24261  99%    0.14K     900       27      3600K sysfs_dir_cache
   17402   15599  89%    0.05K     226       77       904K anon_vma_chain
   16055   14874  92%    0.20K     845       19      3380K vm_area_struct
    9844    8471  86%    0.04K     107       92       428K anon_vma
    8952    8775  98%    0.58K    1492        6      5968K inode_cache
    7518    5829  77%    0.62K    1253        6      5012K proc_inode_cache
    6840    4692  68%    0.19K     342       20      1368K filp
    5888    5532  93%    0.04K      64       92       256K dm_io
 
 
 top - 10:10:02 up 22:34,  4 users,  load average: 1.02, 1.15, 1.53
 Tasks: 888 total,   1 running, 887 sleeping,   0 stopped,   0 zombie
 Cpu(s):  0.8%us,  1.2%sy,  0.0%ni, 97.9%id,  0.1%wa,  0.0%hi,  0.0%si, 
 0.0%st
 Mem:  49421492k total, 43619512k used,  5801980k free,  4409144k buffers
 Swap:  8388600k total,16308k used,  8372292k free, 25837164k cached

Somehow, this doesn't reflect what the kernel complains about when the
OOM killer starts its mission.

I see that you're using kernel-2.6.32-279.1.1.el6.x86_64 ... that
smells a bit like an SL 6.3 beta ... is that right?  SL 6.2 is usually
around 2.6.32-220-something.  I would probably recommend trying a
6.2 kernel if you're running something much more bleeding edge.
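
Something like this shows which kernel versions the repos carry, if you
want to pick an older one (a sketch):

   $ yum --showduplicates list kernel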

And it somehow seems to be related to some file system issues ... at
least from what I can see.  Could be a buggy kernel which leaks memory,
somewhere in either the partition table code or ext4 code paths.

Not sure I'm able to provide any better clues right now.


kind regards,

David Sommerseth


Re: SL to 6.3 release

2012-07-18 Thread David Sommerseth
On 18/07/12 15:37, Semi wrote:
 1) When is SL6.3 expected? CentOS 6.3 already exists.

*sigh* do you read these mails in the thread you just replied to?

Again, here's a couple of pointers which might give you a better clue:

http://listserv.fnal.gov/scripts/wa.exe?A2=ind1207L=scientific-linux-usersT=0P=11071

http://listserv.fnal.gov/scripts/wa.exe?A2=ind1201L=scientific-linux-usersT=0P=17067

 2) When I run update from CentOS I am also updating the release number.
 
 For example:
 from Centos 5.2
 yum update
 I'll get Centos 5.8
 
 and SL doesn't raises the release version ?

That's documented here:

SL 6.x:
http://www.scientificlinux.org/documentation/howto/upgrade.6x

SL 5.x:
http://www.scientificlinux.org/documentation/howto/upgrade.5x

Unless you have installed the yum-conf-sl6x package (on SL6.x), you
won't upgrade the minor version of the distro automatically.  So in that
case, you need to do it explicitly, as described above.


kind regards,

David Sommerseth


Re: kernel-2.6.18-308.8.2.el5.x86_64 stalled at stage2 during boot

2012-07-07 Thread David Sommerseth

On 07/07/2012 02:09 AM, Zhi-Wei Lu wrote:

Hi all,

I have a Supermicro box (motherboard X7DW3) with 3ware RAID card (9690SA-4I).
I have 24 drives attached to this raid card, while the first three drives were
exported as SINGLE (JBOD) drives. The /boot and /root are Linux software raid
1 on these first three drives.

The latest kernel-2.6.18-308.8.2.el5.x86_64 would fail to boot, but the
previous kernel, kernel-2.6.18-308.8.1.el5.x86_64, worked just fine. I also
tested with CentOS kernel-2.6.18-308.8.2.el5.x86_64 and ended up with the same
fate. Does anyone see this problem at all?



Maybe taking a screenshot (with a camera) of the boot process where it stops 
could be an idea?  Make sure to remove 'rhgb' and 'quiet' from the kernel 
command line (you can edit the grub settings on the fly, if you reach for the 
menu).
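
That is, in the GRUB menu: highlight the entry, press 'e', select the
kernel line, press 'e' again and trim it down to something like this
(the root= value here is illustrative):

   kernel /vmlinuz-2.6.18-308.8.2.el5 ro root=/dev/md1

Then 'b' boots the edited entry, and the boot messages stay on screen.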



kind regards,

David Sommerseth


Re: KVM issues with dump

2012-07-07 Thread David Sommerseth

On 07/06/2012 02:23 AM, Nico Kadel-Garcia wrote:



  By the way, if you skip using AHCI and just program the bios
  for IDE, you don't even have to use an f6 disk to install XP.
  No sign of drivers doing XP in yet.  Although the day will come.

  Great letter.  Thank you!

You didn't notice any performance issues with virtualized IDE versus SCSI?


IDE and SCSI emulation is not going to impress you performance-wise.  Using 
virtio for disk access may improve the performance, and the Windows drivers 
should be available here:


http://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers
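
With the driver installed in the guest, the disk element in the guest XML
('virsh edit GUESTNAME') is switched over to the virtio bus, roughly like
this (path and names illustrative):

   <disk type='file' device='disk'>
     <driver name='qemu' type='raw'/>
     <source file='/var/lib/libvirt/images/winxp.img'/>
     <target dev='vda' bus='virtio'/>
   </disk>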


kind regards,

David Sommerseth


Re: single user mode

2012-06-23 Thread David Sommerseth

On 06/23/2012 10:38 AM, anuraag chowdhry wrote:

Hi
I have SL6.1, and the moment the second stage GRUB presents the kernel
selection menu, pressing 'a' (for appending), then a spacebar, then
typing 'single' and pressing ENTER ultimately prompts me for the root
password after booting.

Is the behaviour not different here?

The system should boot in single user mode; it does, but then it asks for the
root password for maintenance, or press Ctrl-D to continue.

I tried typing '1' or 's' instead of 'single', but still it prompts for the
password.

my kernel is   2.6.32-131.0.15.el6.i686



Try booting with 'init=/bin/sh' in the kernel command line.  This approach 
might force you to manually mount the root filesystem containing /etc ... and 
then you can change the root password, unmount and reboot.  Then you can enter 
single user with a known password.  If you save the root password stored in 
/etc/shadow before you change the password, you can also restore the old root 
password if that's needed afterwards.
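
The dance inside that shell is roughly this (a sketch - the root fs
usually comes up read-only, so remount it first):

   sh-4.1# mount -o remount,rw /
   sh-4.1# passwd root
   sh-4.1# sync
   sh-4.1# reboot -f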



kind regards,

David Sommerseth