Re: How does NetworkManager monitor the connection files?

2016-03-31 Thread jdow

On 2016-03-31 02:53, Tom H wrote:

On Thu, Mar 31, 2016 at 12:10 AM, Benjamin Lefoul wrote:


But sed -i ALSO changes the inode, and as I said it doesn't work:

root@hoptop:~# touch a
root@hoptop:~# ls -i a
9700011 a
root@hoptop:~# sed -i 's/q/a/g' a
root@hoptop:~# ls -i a
9700013 a

Benjamin Lefoul



From: owner-scientific-linux-us...@listserv.fnal.gov on behalf of Tom H

Sent: 30 March 2016 23:00
To: SL Users
Subject: Re: How does NetworkManager monitor the connection files?

On Wed, Mar 30, 2016 at 3:49 PM, Benjamin Lefoul wrote:


I have set monitor-connection-files=true in my
/etc/NetworkManager/NetworkManager.conf

It works fine (in fact, instantly) if I edit
/etc/sysconfig/network-scripts/ifcfg-eth0 with emacs or vi (for instance,
changing the IP).

It fails miserably if I use sudoedit, or sed:

# grep 100 /etc/sysconfig/network-scripts/ifcfg-eth0
IPADDR=192.168.4.100

# sed -i 's/100/155/g' /etc/sysconfig/network-scripts/ifcfg-eth0

Even though all the stat times (access, modify, and change) are updated.

It's worse than that: even nmcli con reload afterwards fails.

In fact, the only way to get the IP to change is by entering the file with
vi, not touching it, and leaving with ":wq" (not just ":q").

Why is that? What is going on here?

I know, I know, I can use nmcli in scripts, and not string-manipulation
tools, but say I don't want to... :)

And still, during operations, I'd rather edit the files with sudoedit...


"sudo -e ifcfg-file" doesn't change the inode. Can you use "sudo vi
ifcfg-file"? (Or whichever editor you prefer.)


Please bottom-post.

Sorry, my mind somehow discarded the sed case.

So the inode's not being monitored...
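The inode behavior under discussion can be checked against a scratch file; a minimal sketch, assuming GNU sed and coreutils (`stat -c`):

```shell
# Scratch file to show which kinds of edits preserve the inode.
f=$(mktemp)
echo "IPADDR=192.168.4.100" > "$f"
ino1=$(stat -c %i "$f")

# GNU sed -i writes a temporary file and renames it over the original,
# so the path keeps its name but ends up with a NEW inode.
sed -i 's/100/155/g' "$f"
ino2=$(stat -c %i "$f")

# A shell redirection truncates and rewrites the SAME file in place,
# so the inode is preserved.
echo "IPADDR=192.168.4.100" > "$f"
ino3=$(stat -c %i "$f")

echo "sed -i changed the inode:      $([ "$ino1" != "$ino2" ] && echo yes || echo no)"
echo "redirection changed the inode: $([ "$ino2" != "$ino3" ] && echo yes || echo no)"
rm -f "$f"
```

Editors differ the same way: some rename a temp file into place (new inode), others rewrite the file (same inode), which is why different tools look different to a watcher.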



Careful, Tom. Too much of that whining about top/bottom post may prod me to side 
post and annoy everybody. It's done it in the past.


{^_-}


Re: How does NetworkManager monitor the connection files?

2016-03-30 Thread jdow

On 2016-03-30 20:35, olli hauer wrote:

On 2016-03-31 05:02, Yasha Karant wrote:

On 03/30/2016 06:56 PM, jdow wrote:

On 2016-03-30 10:59, Yasha Karant wrote:

...

Yasha, you may find you have to modify the VirtualBox settings so that they
are not trying to use a network connection that is not active. That will also 
mean shutting down VB and restarting it. This is an issue I have with a Windows 
7 host, as well.

Disconnected adapters won't communicate with anything. And when you connect 
802.3 the 802.11 connection is shut down.

{^_^}

Thank you for that information.  However, reading the VirtualBox manual 
(https://www.virtualbox.org/manual/) I cannot find the command to shutdown 
VirtualBox.  Because EL 7 no longer uses the standard rc scripts, where do I 
look?  Does VirtualBox have its own command?  I have looked through the 
VBoxManage switches, and I cannot seem to find one that allows one to shutdown 
all of the VirtualBox services and then to restart these.   Details for SL7 
would be most appreciated.



Hm, on FreeBSD most users configure a LAGG interface with both interfaces as
members; that way, changing the real interface is transparent to VirtualBox.


Hm, this might do the same thing:

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/pdf/Load_Balancer_Administration/Red_Hat_Enterprise_Linux-6-Load_Balancer_Administration-en-US.pdf
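On the Linux side, the rough equivalent of a FreeBSD LAGG is channel bonding. A sketch of what that looks like with ifcfg-style files — the device names and the active-backup mode are illustrative assumptions, and whether a given wireless driver can be enslaved at all varies:

```
# /etc/sysconfig/network-scripts/ifcfg-bond0 (hypothetical bond master)
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=dhcp
BONDING_OPTS="mode=active-backup miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth0 (hypothetical slave)
DEVICE=eth0
ONBOOT=yes
MASTER=bond0
SLAVE=yes
```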

{^_^}


Re: How does NetworkManager monitor the connection files?

2016-03-30 Thread jdow

On 2016-03-30 10:59, Yasha Karant wrote:

On 03/30/2016 09:14 AM, Andrew C Aitchison wrote:

On Wed, 30 Mar 2016, Benjamin Lefoul wrote:


Hi,


I have set monitor-connection-files=true in my
/etc/NetworkManager/NetworkManager.conf


It works fine (in fact, instantly) if I edit
/etc/sysconfig/network-scripts/ifcfg-eth0 with emacs or vi (for instance,
changing the IP).


It fails miserably if I use sudoedit, or sed:


I *think* emacs writes a new file with a different name and then renames it.
Try "ls -li /etc/sysconfig/network-scripts/ifcfg-eth0" before and after
editting; if the inode/inum (the number at the beginning) has changed
that is what your editor is doing, and what NetworkManager is looking for.


# grep 100 /etc/sysconfig/network-scripts/ifcfg-eth0
IPADDR=192.168.4.100


# sed -i 's/100/155/g' /etc/sysconfig/network-scripts/ifcfg-eth0


Even though all the stat times (access, modify, and change) are updated.

It's worse than that: even nmcli con reload afterwards fails.

In fact, the only way to get the IP to change is by entering the file with
vi, not touching it, and leaving with ":wq" (not just ":q").


Why is that? What is going on here?


I know, I know, I can use nmcli in scripts, and not string-manipulation
tools, but say I don't want to... :)


And still, during operations, I'd rather edit the files with sudoedit...


Thanks in advance,


Benjamin Lefoul

nWISE AB


I have a related question.  I have now inserted an appropriate UTP cable into
the RJ-45 jack on my laptop, and I have a green LED (meaning MAC signal) -- thus I
have an 802.3 connection.  However, unlike previous incarnations of Network
Manager, the present SL7 one does not allow me to activate the 802.3 connection,
but only the 802.11 connection. Evidently, despite claims that NAT should work
with 802.11 to VirtualBox running a MS Win 7 Pro guest (a claim as I recall
contradicted by other respondents), it does not (MS Win sees no network).  Thus,
I am attempting to run my laptop on a wired 802.3 connection, but I cannot seem
to activate it.  I do need Network Manager when I am in the field and must
connect to arbitrary 802.11 WLANs in a fashion similar to MS Win and Mac OS X
(for which the "automagic" 802.11 DHCP hotel, etc., networks seem to be
designed).  How do I get Network Manager to allow me to activate the wired
connection?  Note that Network Manager does "see" the 802.3 NIC (it displays
"Ethernet Network (Intel Ethernet Connection I217-LM)") but shows it as
disconnected and will not let me connect.  Do I need to be root to make this happen?

If this related query should be a new thread, I will repost as such if that is
appropriate.

Any help would be appreciated.

Yasha Karant


Yasha, you may find you have to modify the VirtualBox settings so that they are
not trying to use a network connection that is not active. That will also mean 
shutting down VB and restarting it. This is an issue I have with a Windows 7 
host, as well.


Disconnected adapters won't communicate with anything. And when you connect 
802.3 the 802.11 connection is shut down.


{^_^}


Re: snooping windows 10 - how to stop it on a linux gateway?

2016-03-05 Thread jdow
If squid can find usefully unique patterns in encrypted traffic I suppose that 
might work. But that's one heck of a big "if".


{o.o}   Joanne

On 2016-03-05 02:15, Karel Lang AFD wrote:

Hmm ... yes, yes.
Thanks for bringing this up.
I force all http traffic through the squid proxy on our SL 6 gateway; this could
also be helpful.



On 03/05/2016 11:00 AM, prmari...@gmail.com wrote:

The only way I can think of is to force all internet access through a proxy
and filter it out in the proxy.
Then you don't give the machines any internet access just access to the proxy.
Unfortunately I do not have details for you on how to filter the snoop
messages, because I haven't looked at them, but it should be fairly easy
using squid and an external Perl regex filter script or other filter
application. You will take a latency hit, though, because you will have to
inspect every transaction.
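A minimal sketch of the squid side of such a filter, assuming plain-HTTP traffic: the ACL name and the domain list below are hypothetical placeholders — the real destinations would have to be discovered by inspection, and encrypted traffic can only be matched by its CONNECT host, not its contents:

```
# squid.conf fragment (sketch; hostnames are placeholders, not real telemetry hosts)
acl telemetry dstdomain .telemetry.example.com .vortex.example.net
http_access deny telemetry
http_access allow localnet
```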

   Original Message
From: jdow
Sent: Friday, March 4, 2016 23:35
To: scientific-linux-us...@fnal.gov
Subject: Re: snooping windows 10 - how to stop it on a linux gateway?

That windows update server is a relay for the "snoop" messages. About the only
way to totally stop the snoop messages is to totally isolate the network
containing Windows machines from the network. Any windows machine can serve as a
relay point for any others.

{o.o}

On 2016-03-04 20:16, Karel Lang AFD wrote:

Hi guys,

firstly, sorry Todd, i don't know how it happened i got attached to your thread.

secondly, thank you all for your thoughtful posts.

I know it is not easy to block the selected traffic from windows 10, and you are
right, it is being backported to windows 7 as well. Horrible and disgusting.

I already have a windows server in the LAN dedicated as an update server (work of my
windows colleagues), so the PCs don't have to access windows update servers
outside the LAN - this should simplify things.

Also the PCs must have internet access to email, http, https, ftp, sftp - simply
the 'usual' stuff.
I think there should still be a way. I'll try to consult mikrotik experts (the
router brand we use) and the guys at our ISP.
If i have something, i'll let you know :-)

thank you, bb

Karel

On 03/05/2016 12:40 AM, Steven Haigh wrote:

On 05/03/16 07:24, Karel Lang AFD wrote:

Hi all,

guys, i think everyone has already heard about how badly windows 10 treats
its users' privacy.


My solution to this was to finally rid my desktop PC of Windows 7 - as
most of the telemetry has also been 'back ported' to Windows 7. You
can't stop it.


I'm now thinking about a way to stop windows 10 from sending these data-mining
results to microsoft telemetry servers, and to filter it on our SL
6 linux gateway.


Nope. There are no specific servers in use - just general - so whatever
you block will end up killing other services.


I think it could (maybe?) be done via DPI (deep packet inspection). I
similarly filter torrent streams on our gateway - i patched the standard SL
6 kernel with 'xtables' (an iptables enhancement) and it is working
extremely well.


I would be interested to see if you could identify telemetry packets in
the flow - but I'm not predicting much success. If you do get it, make
sure you let the world know though!


I read (not sure if true) that some DNS resolutions to M$ servers are
even 'hardwired' via some .dll library, so it makes it even harder.


Correct.


I'm no windows expert, but i'm a unix administrator concerned about the
privacy of the windows desktop/laptop users sitting inside my LAN.

What i'd like to come up with is some more general iptables rules, rather than
blocking specific IP addresses or names, because apparently they may
change in any incoming windows update ...

Has anyone given this thought already? Is anyone else concerned the way i am?


Yup - and as I said, I'm now running Fedora 23 on my desktop (EL lags on
a few things that I like, so Fedora is a happy medium for me, as I
still have the fedora-updates-testing repo enabled). My work laptop as
well as my personal laptop - and now my home desktop - all run Fedora 23
(the KDE Spin if, like me, you hate Gnome 3).









Re: snooping windows 10 - how to stop it on a linux gateway?

2016-03-05 Thread jdow
The basic problem as I understand it is that you will have to filter every
Windows machine, 7 or later, out of Internet access. One can act as a relay for
another. (This has existed for some time now, as I understand it and have
experienced it.) If one presumes MS is sane enough to have the messages
encrypted, filtering becomes somewhat difficult. This is actually quite important
when Windows Update is considered. I've had 10 reboot the machine if you leave
it overnight running programs. As a result our only 10 machine is a test machine
for some work we do. The best you can do is pick the time for the reboot. Alas,
I keep some SW development programs up 24/7 and have had to bite a painful
bullet to select the best time to take everything down and reboot. Of course, that
happens to an extent with Linux, too. So far that's the only way to get new
kernels installed. (This is annoying because the old AmigaDOS could be patched
while it was live. I got spoiled. Picture an OS with exactly one constant
address. Everything else was accessed off that address via doubly linked lists.)


{^_-}


On 2016-03-05 02:00, prmari...@gmail.com wrote:

The only way I can think of is to force all internet access through a proxy and 
filter it out in the proxy.
Then you don't give the machines any internet access just access to the proxy.
Unfortunately I do not have details for you on how to filter the snoop messages,
because I haven't looked at them, but it should be fairly easy using squid
and an external Perl regex filter script or other filter application. You
will take a latency hit, though, because you will have to inspect every transaction.

   Original Message
From: jdow
Sent: Friday, March 4, 2016 23:35
To: scientific-linux-us...@fnal.gov
Subject: Re: snooping windows 10 - how to stop it on a linux gateway?

That windows update server is a relay for the "snoop" messages. About the only
way to totally stop the snoop messages is to totally isolate the network
containing Windows machines from the network. Any windows machine can serve as a
relay point for any others.

{o.o}

On 2016-03-04 20:16, Karel Lang AFD wrote:

Hi guys,

firstly, sorry Todd, i don't know how it happened i got attached to your thread.

secondly, thank you all for your thoughtful posts.

I know it is not easy to block the selected traffic from windows 10, and you are
right, it is being backported to windows 7 as well. Horrible and disgusting.

I already have a windows server in the LAN dedicated as an update server (work of my
windows colleagues), so the PCs don't have to access windows update servers
outside the LAN - this should simplify things.

Also the PCs must have internet access to email, http, https, ftp, sftp - simply
the 'usual' stuff.
I think there should still be a way. I'll try to consult mikrotik experts (the
router brand we use) and the guys at our ISP.
If i have something, i'll let you know :-)

thank you, bb

Karel

On 03/05/2016 12:40 AM, Steven Haigh wrote:

On 05/03/16 07:24, Karel Lang AFD wrote:

Hi all,

guys, i think everyone has already heard about how badly windows 10 treats
its users' privacy.


My solution to this was to finally rid my desktop PC of Windows 7 - as
most of the telemetry has also been 'back ported' to Windows 7. You
can't stop it.


I'm now thinking about a way to stop windows 10 from sending these data-mining
results to microsoft telemetry servers, and to filter it on our SL
6 linux gateway.


Nope. There are no specific servers in use - just general - so whatever
you block will end up killing other services.


I think it could (maybe?) be done via DPI (deep packet inspection). I
similarly filter torrent streams on our gateway - i patched the standard SL
6 kernel with 'xtables' (an iptables enhancement) and it is working
extremely well.


I would be interested to see if you could identify telemetry packets in
the flow - but I'm not predicting much success. If you do get it, make
sure you let the world know though!


I read (not sure if true) that some DNS resolutions to M$ servers are
even 'hardwired' via some .dll library, so it makes it even harder.


Correct.


I'm no windows expert, but i'm a unix administrator concerned about the
privacy of the windows desktop/laptop users sitting inside my LAN.

What i'd like to come up with is some more general iptables rules, rather than
blocking specific IP addresses or names, because apparently they may
change in any incoming windows update ...

Has anyone given this thought already? Is anyone else concerned the way i am?


Yup - and as I said, I'm now running Fedora 23 on my desktop (EL lags on
a few things that I like, so Fedora is a happy medium for me, as I
still have the fedora-updates-testing repo enabled). My work laptop as
well as my personal laptop - and now my home desktop - all run Fedora 23
(the KDE Spin if, like me, you hate Gnome 3).







Re: snooping windows 10 - how to stop it on a linux gateway?

2016-03-04 Thread jdow
That windows update server is a relay for the "snoop" messages. About the only 
way to totally stop the snoop messages is to totally isolate the network 
containing Windows machines from the network. Any windows machine can serve as a 
relay point for any others.


{o.o}

On 2016-03-04 20:16, Karel Lang AFD wrote:

Hi guys,

firstly, sorry Todd, i don't know how it happened i got attached to your thread.

secondly, thank you all for your thoughtful posts.

I know it is not easy to block the selected traffic from windows 10, and you are
right, it is being backported to windows 7 as well. Horrible and disgusting.

I already have a windows server in the LAN dedicated as an update server (work of my
windows colleagues), so the PCs don't have to access windows update servers
outside the LAN - this should simplify things.

Also the PCs must have internet access to email, http, https, ftp, sftp - simply
the 'usual' stuff.
I think there should still be a way. I'll try to consult mikrotik experts (the
router brand we use) and the guys at our ISP.
If i have something, i'll let you know :-)

thank you, bb

Karel

On 03/05/2016 12:40 AM, Steven Haigh wrote:

On 05/03/16 07:24, Karel Lang AFD wrote:

Hi all,

guys, i think everyone has already heard about how badly windows 10 treats
its users' privacy.


My solution to this was to finally rid my desktop PC of Windows 7 - as
most of the telemetry has also been 'back ported' to Windows 7. You
can't stop it.


I'm now thinking about a way to stop windows 10 from sending these data-mining
results to microsoft telemetry servers, and to filter it on our SL
6 linux gateway.


Nope. There are no specific servers in use - just general - so whatever
you block will end up killing other services.


I think it could (maybe?) be done via DPI (deep packet inspection). I
similarly filter torrent streams on our gateway - i patched the standard SL
6 kernel with 'xtables' (an iptables enhancement) and it is working
extremely well.


I would be interested to see if you could identify telemetry packets in
the flow - but I'm not predicting much success. If you do get it, make
sure you let the world know though!


I read (not sure if true) that some DNS resolutions to M$ servers are
even 'hardwired' via some .dll library, so it makes it even harder.


Correct.


I'm no windows expert, but i'm a unix administrator concerned about the
privacy of the windows desktop/laptop users sitting inside my LAN.

What i'd like to come up with is some more general iptables rules, rather than
blocking specific IP addresses or names, because apparently they may
change in any incoming windows update ...

Has anyone given this thought already? Is anyone else concerned the way i am?


Yup - and as I said, I'm now running Fedora 23 on my desktop (EL lags on
a few things that I like, so Fedora is a happy medium for me, as I
still have the fedora-updates-testing repo enabled). My work laptop as
well as my personal laptop - and now my home desktop - all run Fedora 23
(the KDE Spin if, like me, you hate Gnome 3).





Re: snooping windows 10 - how to stop it on a linux gateway?

2016-03-04 Thread jdow
Can't be done economically. ANY machine that can reach Windows Update will also 
feed the snooping reports.


The blocking is probably not needed as it consists of error reports after you've 
turned off everything in the various settings dialogs. Of course, one must never 
run Cortana if one is concerned about privacy.


Note that you MAY have better overall security for personal information if you 
figure out what the reporting addresses are and explicitly block all other 
addresses as a means of mitigating potential third party attacks through these 
semi-open doors. How open they really are depends on the degree of encryption MS 
has used in these reports and interfaces.


{^_^}

On 2016-03-04 16:24, ToddAndMargo wrote:

On 03/04/2016 03:49 PM, Andrew Z wrote:

Uninstall. :)



Or just block all access from Windows machines to the Internet.
And   turn off Windows Update Service.

And test out your critical Windows software with
Wine Staging

And if need be, run XP inside a KVM virtual machine




Re: CVE 2015-7547

2016-02-19 Thread jdow

# rpm -qa | grep glibc
glibc-2.12-1.166.el6_7.7.x86_64
glibc-2.12-1.166.el6_7.7.i686
glibc-utils-2.12-1.166.el6_7.7.x86_64
glibc-common-2.12-1.166.el6_7.7.x86_64
glibc-devel-2.12-1.166.el6_7.7.x86_64
glibc-headers-2.12-1.166.el6_7.7.x86_64

Already installed with updates as of a day or so ago.

{^_^}

On 2016-02-19 17:49, Kenny Noe wrote:

Yes.   Follow these instructions.

http://www.thegeekstuff.com/2016/02/glibc-patch-cve-2015-7547/

Thanks,

--Kenny

On Fri, Feb 19, 2016 at 8:33 PM, ToddAndMargo wrote:

Hi All,

Are we affected by this?


http://www.infoworld.com/article/3033862/security/patch-now-unix-bug-puts-linux-android-and-ios-systems-at-risk.html

-T


--
~~
Computers are like air conditioners.
They malfunction when you open windows
~~




Re: two mysteries

2016-01-28 Thread jdow

On 2016-01-28 14:33, Patrick Mahan wrote:

On 1/27/16 1:23 PM, David Sommerseth wrote:

On 27/01/16 11:13, jdow wrote:



Fascinating. I made a bad "assumption" about network devices. It seems they
are created dynamically without any presence in /dev.


IIRC, *BSD provides /dev nodes for network devices which the user-space can
use for configuring it and such.  But it's many years since I played with
FreeBSD, so my memory is scarce.



Nope, the BSDs (FreeBSD, NetBSD, etc.) do not show any network devices under /dev.
And kernel device configuration is done via sysctl commands, as opposed to using
sysfs on Linux.

Patrick Mahan


Was that true a decade and a half ago give or take a little?

{^_^}


Re: two mysteries

2016-01-27 Thread jdow

On 2016-01-27 13:23, David Sommerseth wrote:

On 27/01/16 11:13, jdow wrote:



Fascinating. I made a bad "assumption" about network devices. It seems they
are created dynamically without any presence in /dev.


IIRC, *BSD provides /dev nodes for network devices which the user-space can
use for configuring it and such.  But it's many years since I played with
FreeBSD, so my memory is scarce.


--
kind regards,

David Sommerseth


That matches my memory of BSD from many years ago. I tried it after getting 
disgusted with RedHat, then Mandrake, then Ubuntu and Mint. I eventually found 
this distro during one of Centos' periods of dying and have lived here 
comfortably for some time now.


Nonetheless, as soon as I need something a little out of the ordinary for
networking I disable network mangler and hand-craft my solutions. Moving to 7 is
going to be painful. I use a seldom-used feature of iptables to make a nice
killer firewall that makes repeated attempts to log in via SSH with passwords
cost too much time for guessing the password. You can't retry until 2 minutes have
passed. It's fascinating to see the chains of attempts on SSH where the first one
got far enough to reject the password and the chain of 200 that followed were
simply dropped on the floor. I don't see that fun much anymore. Smaller logs are
easier logs to watch. I moved ssh et al to uncommon, purely random port
numbers - and left the other protection in place.
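The iptables feature being described sounds like the `recent` match; a sketch of such rules in iptables-save form, under the assumption of a 2-minute window on port 22:

```
# iptables-save fragment (sketch). A new SSH connection from a source seen
# within the last 120 seconds is dropped (and its timer refreshed); otherwise
# the source is recorded and the connection accepted.
-A INPUT -p tcp --dport 22 -m state --state NEW -m recent --name ssh --update --seconds 120 -j DROP
-A INPUT -p tcp --dport 22 -m state --state NEW -m recent --name ssh --set -j ACCEPT
```

Because `--update` refreshes the timestamp on every hit, a persistent guesser keeps getting dropped until it stays quiet for the full window.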


{^_^}   Dis broad likes multiple barriers for safety.


Re: two mysteries

2016-01-27 Thread jdow

On 2016-01-26 22:52, Yasha Karant wrote:

On 01/26/2016 09:41 PM, jdow wrote:

On 2016-01-26 05:17, Tom H wrote:

On Tue, Jan 26, 2016 at 10:12 AM, David Sommerseth wrote:

On 26/01/16 08:13, Yasha Karant wrote:


As neither VMware player nor VirtualBox seem capable of providing a MS
Win guest with any form of Internet access to an 802.11 connection from
the host (in both cases, the claim from a MS Win 7 Pro guest is that
there is no networking hardware, despite being shown by the guest as
existing), it is possible that the "native" (ships with) vm
functionality of EL 7 may address this issue.


So you want the guest to have full control over the wireless network
adapter?  That is possible, but only through a hypervisor ... and these
days, unless the adapter supports PCI SR-IOV [1], you need to disable
the interface (unload all drivers, unconfigure it) and allow your guest
to access the PCI interface directly (so called PCI passthrough).

With PCI SR-IOV support (this requires hardware support), you can
actually split a physical PCI device also supporting SR-IOV into
multiple "virtual functions" (VF) which results in more PCI devices
appearing on your bare-metal host and you can then grant a VM access to
this VF based PCI device.  For network cards, that also includes a
separate MAC address per VF.

[1] <http://blog.scottlowe.org/2009/12/02/what-is-sr-iov/>

But the downside, from your perspective, all this requires a hypervisor.
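As one concrete example, with a libvirt-managed guest the PCI passthrough described above is declared as a hostdev element in the domain XML; the PCI address below is a placeholder:

```
<!-- libvirt domain XML fragment: hand host PCI device 0000:03:00.0 to the guest -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```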


IIRC, Yasha's issue with 802.11 is that he cannot bridge a wifi NIC (I
pointed out in Oct/Nov that it's because the kernel prevents it).


Have you gone into /dev and made the appropriate permissions change on the
device?

{o.o}

Obviously, there is some point I am missing:

The physical 802.11 device has an instantiated driver interface wlp61s0 on the
machine in question.

bash-4.2$ ls -a /dev/wl*
ls: cannot access /dev/wl*: No such file or directory
bash-4.2$ ls /dev | grep -a wl
bash-4.2$
bash-4.2$ locate wlp61s0
/home/ykarant/.gkrellm2/data/net/wlp61s0
/var/lib/NetworkManager/dhclient-568cb7e6-daa1-4768-b13e-0ac4d3d61864-wlp61s0.lease
/var/lib/NetworkManager/dhclient-646c0914-6eff-4c67-ad42-330f130e6f8c-wlp61s0.lease
/var/lib/NetworkManager/dhclient-6ece21f4-61c7-47a1-bc0f-85b36632da7e-wlp61s0.lease
/var/lib/NetworkManager/dhclient-76d98a93-e645-4da2-b190-e2de2e2b9333-wlp61s0.lease
/var/lib/NetworkManager/dhclient-8811aaa3-40a9-43f7-b1d5-7d00f3e0c4fc-wlp61s0.lease
/var/lib/NetworkManager/dhclient-b31e96c6-392c-4c73-a6a5-8532908a0e44-wlp61s0.lease
/var/lib/NetworkManager/dhclient-ba0ab7fc-e666-4969-86d9-7e343ea8f722-wlp61s0.lease
/var/lib/NetworkManager/dhclient-c806cddf-1d8b-46da-a2a8-40bcf7e9956e-wlp61s0.lease
/var/lib/NetworkManager/dhclient-ef685b95-88bf-4a0d-acea-837443a026c0-wlp61s0.lease
/var/lib/NetworkManager/dhclient-wlp61s0.conf

Fascinating. I made a bad "assumption" about network devices. It seems they are 
created dynamically without any presence in /dev. So you may want to check the 
/etc/sysconfig/network-scripts files for wireless devices and their permissions 
structures. (Look for files analogous to ifcfg-eth0.) Now, I have seldom used 
network mangler because historically it has blown up in my face too often. By
now it should be better, but... You could check in network mangler to see if it
has a permission setting of one sort or another that enables it to be used by other
than root. There is such for wired ethernet, I believe. That is where I'd look to
try to unlock this puzzle.


Regarding virtualbox - if it's as finicky to setup as with Windows "good luck". 
Just for grins setting one up in Windows might give you an idea of information 
needed to make it work. The GUI is handy and mostly works. I don't, at this 
time, have a large enough machine dedicated to Linux for experimentation. What I 
have are all little things - mail service for 6-12 accounts with routing is 
basically what they amount to.


{^_^}   Joanne


Re: two mysteries

2016-01-26 Thread jdow

On 2016-01-26 05:17, Tom H wrote:

On Tue, Jan 26, 2016 at 10:12 AM, David Sommerseth wrote:

On 26/01/16 08:13, Yasha Karant wrote:


As neither VMware player nor VirtualBox seem capable of providing a MS
Win guest with any form of Internet access to an 802.11 connection from
the host (in both cases, the claim from a MS Win 7 Pro guest is that
there is no networking hardware, despite being shown by the guest as
existing), it is possible that the "native" (ships with) vm
functionality of EL 7 may address this issue.


So you want the guest to have full control over the wireless network
adapter?  That is possible, but only through a hypervisor ... and these
days, unless the adapter supports PCI SR-IOV [1], you need to disable
the interface (unload all drivers, unconfigure it) and allow your guest
to access the PCI interface directly (so called PCI passthrough).

With PCI SR-IOV support (this requires hardware support), you can
actually split a physical PCI device also supporting SR-IOV into
multiple "virtual functions" (VF) which results in more PCI devices
appearing on your bare-metal host and you can then grant a VM access to
this VF based PCI device.  For network cards, that also includes a
separate MAC address per VF.

[1] 

But the downside, from your perspective, all this requires a hypervisor.


IIRC, Yasha's issue with 802.11 is that he cannot bridge a wifi NIC (I
pointed out in Oct/Nov that it's because the kernel prevents it).


Have you gone into /dev and made the appropriate permissions change on the 
device?

{o.o}


Re: a year later - CERN move to Centos - what are we doing?

2016-01-13 Thread jdow
There is at least one other sub group around here who value generally quiet 
sober support when they have a problem. Ubuntu and many other distros are 
quality distros, and perhaps better support what some people want. But the 
Ubuntu and Fedora mailing lists and support are unbelievably noisy and getting 
help often (usually?)requires rhinoceros hide and extreme persistence. This very 
mailing list which is getting defiled by this pointless discussion is what I 
value about SL. This list is mercifully seldom sullied by trolls. And the SL 
distro itself is competent and has what I need, a few smart people supporting me 
on the mailing list and reasonably timely updates. So, every time I even think 
of moving I slap myself across the face and go do something important.


{^_^}   Joanne

On 2016-01-13 02:43, lejeczek wrote:

I think we see two groups of people here, taking part in this "exciting" debate.

One is people who genuinely try to learn a bit about the more concrete plans
Fermilab has (or doesn't have) for the future of Scientific Linux - and they do it
here on the list because they can google all over the Net and they won't find
any solid statement, ideally rendered by the very source itself.

The second group is people who don't give a toss; a good chance is it's because
they have not seen much besides Scientific Linux and they really believe it's very
unique indeed. You can see the scientist type of mentality here -- I have got my
one little or big app, it runs great! and I'm happy with it and I can say I do it
all by myself because I do not have to deal with any commercial third parties
(which is very important for a scientist) -- a very simplistic and egotistic
attitude, but it works. And a good chance is this bunch don't even have to ask
about the bigger picture. Some individuals from this group can even be rude and
hostile towards others, even if only asked to share a thought or two on the
issue.

Well, like the esteemed fellows already said - we're free to do as we wish,
fortunately; what is unfortunate is that we cannot always make fully informed
decisions. I guess that's science, that's life. Heh. :)
b.w.



On 13/01/16 10:13, Iosif Fettich wrote:

Hi there,


anyone who wants to go with centos, is free to do so, right? So just go and
do it and don't 'fuss' about it on SL mail list.


Thanks for the nice advice.

Would you have another one too? On which list should current SL users
ask/discuss about the short/long term plans regarding SL? And/or of the
problems related with a migration from/to SL?


We all should be grateful for continuing support of Fermilab for SL builds.


That's something we most probably all on this list agree with unconditionally.

Thank you,

Iosif Fettich

---
Iosif Fettich | e-mail: ifett...@netsoft.ro   phone+fax: +40-265-260256
Gen. Manager  | web:http://www.netsoft.ro phone: +40-365-806800
NetSoft SRL   | GPGkey: http://www.netsoft.ro/ifettich/public_gpg_key





Re: [SCIENTIFIC-LINUX-USERS] Heads up to the el6 guys

2014-11-19 Thread jdow
Thanks. I figured I was not the only one; and redundancy helps. (I'm 2000+ miles 
from the machine in a hotel room with a raging head cold. Otherwise I'd have 
gone looking.)


{^_^}  Joanne

On 2014/11/19 07:34, Pat Riehecky wrote:

On 11/19/2014 09:25 AM, jdow wrote:

Latest patches won't install:
Error: Package: hdf5-mpich-1.8.5.patch1-9.el6.x86_64 (epel)
   Requires: libmpichf90.so.12()(64bit)
Error: Package: hdf5-mpich-1.8.5.patch1-9.el6.x86_64 (epel)
   Requires: mpich
Error: Package: hdf5-mpich-1.8.5.patch1-9.el6.x86_64 (epel)
   Requires: libmpich.so.12()(64bit)

This traces back to the octave install I have for doing some RF MODEM work.

{^_^}   Joanne


Upstream bug report:

https://bugzilla.redhat.com/show_bug.cgi?id=1165635

Pat
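Until the fixed epel packages land upstream, the rest of an update can be let
through by skipping the broken ones. A sketch (the package glob is an
assumption; match it to the Error lines above):

```shell
# Skip the broken hdf5-mpich package for this run only
yum update --exclude='hdf5-mpich*'

# Or pin the exclusion persistently by adding this line to /etc/yum.conf:
#   exclude=hdf5-mpich*
```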



Re: OpenGL

2014-02-20 Thread jdow

On 2014/02/20 03:21, David Sommerseth wrote:

On 19/02/14 15:08, jdow wrote:

On 2014/02/19 01:59, Akemi Yagi wrote:

On Tue, Feb 18, 2014 at 9:29 PM, jdow  wrote:

What happened to it? It's really hard to get VirtualBox clients to work
without
OpenGL when you want to use the extra features. They won't compile
because
OpenGL seems to be missing. And I don't find it in the usual suspect
repos.


This is a known issue. You can find a workaround here:

https://forums.virtualbox.org/viewtopic.php?f=3&t=58855

This and some more useful info are in this CentOS wiki:

http://wiki.centos.org/HowTos/Virtualization/VirtualBox/CentOSguest

Akemi



Thanks Akemi. I did discover where the ghc_OpenGL files were. Here's the
story.
There is a missing kernel file and perhaps a problem in the kernel source.

Trying to install the virtual box extensions I ran across a misleading
error message about OpenGL not compiling. The machine I tried it on first
had OpenGL missing. I finally figured out it had never been configured for
epel. Half the problem was solved. But I got the same error. I traced back
to the build error:
echo;   \
 echo "  ERROR: Kernel configuration is invalid.";   \
 echo " include/linux/autoconf.h or
include/config/auto.conf are missing.";  \
 echo " Run 'make oldconfig && make prepare' on kernel
src to fix it.";  \

So to save time I went looking for the kernel source and attempted
make oldconfig and make prepare. The latter failed. I am either missing
something interesting or the kernel source is missing something
interesting.

[jdow@sl6 2.6.32-431.5.1.el6.x86_64]$ sudo make oldconfig
scripts/kconfig/conf -o arch/x86/Kconfig
#
# configuration written to .config
#
[jdow@sl6 2.6.32-431.5.1.el6.x86_64]$ sudo make prepare
scripts/kconfig/conf -s arch/x86/Kconfig
   CHK include/linux/version.h
   CHK include/linux/utsrelease.h
   SYMLINK include/asm -> include/asm-x86
make[1]: *** No rule to make target `missing-syscalls'.  Stop.
make: *** [prepare0] Error 2
[jdow@sl6 2.6.32-431.5.1.el6.x86_64]$


So there may be two bugs here.

First the kernel-devel includes do not have the autoconf file.

Second, trying to make prepare fails. (So does trying a subsequent "make
clean".)


Do you have the kernel-devel package installed at all?  I'd try that
before doing manual kernel builds.  I believe that

And btw ... don't compile things as root (unless explicitly told to by
documentation).  That can completely mess up things, even though the
kernel is probably one of the safer packages to build.  Doing 'make
install', however, requires root privileges.  But wait until that point
to use root access.


Kernel-devel is installed as are all the other required items. Somehow I got
one of the two virtual machines to work. The other is still semi-unhappy. The
3D stuff won't compile, still.

The information Akemi posted is what guided me towards the solution.

Re actually compiling the kernel I'm a tad reluctant to do that. But going
through make prep is no problem. The process has changed a little since the
last time I compiled a kernel to make sure a driver I wanted was installed.
I even contributed the fixes to the kernel that made the driver work right,
since I was one of the designers of the data the driver had to parse
in order to work right.

{^_^}
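For anyone hitting the same autoconf.h error: the Guest Additions build
normally compiles against the already-configured kernel-devel headers rather
than a kernel source tree prepared by hand. A sketch (the installer filename
and KERN_DIR handling are assumptions for the VirtualBox version in use):

```shell
# Install the headers matching the running kernel, then point the
# Guest Additions installer at them instead of the raw kernel source
yum install kernel-devel-$(uname -r) gcc make
export KERN_DIR=/usr/src/kernels/$(uname -r)
sh ./VBoxLinuxAdditions.run
```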


6x->6.4?

2014-01-13 Thread jdow

I figure this is a stupid question, but is 6.5 really not ready for prime
time yet? I see that the 6.5 repo tree appears to be open for business. But
6x seems to point to 6.4.

{o.o}   Joanne


ddclient vs selinux

2013-12-14 Thread jdow

For some time now ddclient has not been working quite right. I made some
changes that finally brought to light the reason for this.

I removed the tweaked ddclient.conf, then did a yum remove ddclient and a yum
install ddclient, and finally edited the ddclient.conf file to make it happy.


I started getting errors. This sequence is typical:
Dec 14 14:40:29 me2 ddclient[5711]: WARNING:  updating .dyndns.org: nochg: 
No update required; unnecessary attempts to change to the current address are 
considered abusive
Dec 14 14:40:29 me2 ddclient[5711]: FATAL:Cannot create file 
'/var/cache/ddclient/ddclient.cache'. (Permission denied)


I figured it's not nice to abuse the kind folks at dyndns so I dug further
into it.

"setenforce 0" allows it to run properly.

So I dug into the audit logs.
These two lines do not look right.
type=AVC msg=audit(1387064159.179:461956): avc:  denied  { getattr } for 
pid=6296 comm="ddclient" path="/var/cache/ddclient/ddclient.cache" dev=dm-0 
ino=2621901 scontext=unconfined_u:system_r:dhcpc_t:s0-s0:c0.c1023 
tcontext=unconfined_u:object_r:var_t:s0 tclass=file
type=SYSCALL msg=audit(1387064159.179:461956): arch=c03e syscall=4 
success=yes exit=0 a0=1b234a0 a1=1b02130 a2=1b02130 a3=28 items=0 ppid=6281 
pid=6296 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=10540 comm="ddclient" exe="/usr/bin/perl" 
subj=unconfined_u:system_r:dhcpc_t:s0-s0:c0.c1023 key=(null)


ddclient with a dhcpc_t tag? I note there does not seem to be a ddclient_t
or similar tag on the system.

The ddclient is from epel. I'd expect it to have a proper selinux setup.
I am rash enough to expect that should be handled in the ddclient rpm
setup.

What do I need to do to get this to work properly with "setenforce 1"
restored?

{^_^}   Joanne
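Not an answer to why the package transitions to dhcpc_t in the first place,
but a local policy module generated from the logged denials usually gets
things running again under enforcing mode. A sketch, assuming the EL6
tooling (audit2allow comes from policycoreutils-python) and that the denials
above are the only ones:

```shell
# Build a local policy module from the ddclient AVC denials and load it
grep ddclient /var/log/audit/audit.log | audit2allow -M ddclientlocal
semodule -i ddclientlocal.pp

# Then re-enable enforcement and re-test
setenforce 1
service ddclient restart
```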


Re: cloning with dd

2013-01-29 Thread jdow

On 2013/01/29 10:20, Bluejay Adametz wrote:

target

hard drive.  The hard drive on A is /dev/sda; call it Ahd.  A is shut down and
powered off.  Bhd is installed into an available bay on A, A is booted, and
Bhd appears as /dev/sdb in A.  Using dd on A, clone /dev/sda to /dev/sdb .
Mount on A the partition of /dev/sdb that contains /etc (there are no end
user home directories -- only home directories are those of the system
administration users).  Using a text editor (e.g., vi), modify the
/etc/sysconfig/net* scripts/directories, as well as /etc/hosts, for the name
and IP address of machine B that will contain Bhd (resolv.conf will be the
same -- all of these machines are in the same DNS subzone, same TCP/IP
subnet).  Iterate through all of the target workstation hard drives.  As
there are no other distributed services running, this should suffice.


Using tar rather than dd would likely be faster if there's any
significant amount of free space on the source disk. dd will copy all
those unused blocks, while tar will copy just the Useful Data.

The network interface config files
(/etc/sysconfig/network-scripts/ifcfg-*) contain the interface MAC
address. You'll probably need to modify that. It may be easiest to set
up a one-time boot script so this can be set on the destination (B)
machine the first time it boots.

If you use selinux, you may want to touch .autorelabel in the B root
file system so the contexts will get set properly when it boots.

  - Bluejay Adametz

Definition: Alponium n. (chemical symbol:Ap) Initial blast of odor
from a can of dog food.
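The tar-based copy described above can be sketched as below (the new disk must
already be partitioned, mkfs'ed, and mounted at the destination; depending on
the tar build, --selinux/--xattrs flags may also be wanted):

```shell
#!/bin/sh
# Stream-copy one filesystem tree into another, preserving permissions
# and staying on one filesystem so /proc, /sys etc. are not dragged in.
copy_tree() {
  src=$1; dst=$2
  tar -C "$src" --one-file-system -cf - . | tar -C "$dst" -xpf -
}
# e.g.: copy_tree / /mnt/newroot && touch /mnt/newroot/.autorelabel
```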



Do remember that when you use tar for copying you have to partition and
mkfs each of the disks. If this can be automated, so much the better.
Automating the system duplication would be the best time saver over all.
It frees the administrator's time for other activities if the process can
be automated end to end.

Whatever method is being used make sure it is amenable to scripting. It
should be possible to implant a working copy of disk B to disk C reasonably
quickly with a script that automates partitioning and/or copying. From
that point you need to mount C and make the changes for the system name
as needed and then dismount C.

Once C is ready to use, place it in the target machine. Boot the target
machine with a Live CD. Place disk D into the first prep system and disk
E into the "C" system. Prepare F and G simultaneously using the prep
script. Once you get up to 16 working machines you may find yourself
running around like a one-armed paperhanger keeping up with the prep
process. Note that you do NOT need the network active at this time. You
simply need a script that uses sed or equivalent to make modifications
to the necessary files to make the machines appear "unique" on the
network and other places as needed.

The exact cloning process is not as important as automating it with
a script that accepts the new information for the sed editing. Start
it. Forget it. Wait for the script to beep at the end while you take
care of emails or other problems.

At least, that's how I'd do it. "Do it once is for humans. Do it many
times is for computers, scripts, and automation."

{^_^}   Joanne
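The sed-based uniquifying step described above can be sketched as a small
function. The paths match the EL6 layout; the hostname/IP arguments and the
eth0 interface name are assumptions to adapt per site:

```shell
#!/bin/sh
# Uniquify a cloned root filesystem mounted at $1 before first boot.
prep_clone() {
  mnt=$1; name=$2; ip=$3
  sed -i "s/^HOSTNAME=.*/HOSTNAME=$name/" "$mnt/etc/sysconfig/network"
  sed -i "s/^IPADDR=.*/IPADDR=$ip/" \
      "$mnt/etc/sysconfig/network-scripts/ifcfg-eth0"
  # Drop the cloned MAC so the new box re-detects its own NIC
  sed -i '/^HWADDR=/d' "$mnt/etc/sysconfig/network-scripts/ifcfg-eth0"
  # Relabel on first boot if selinux is enforcing
  touch "$mnt/.autorelabel"
}
```

Then, e.g., `prep_clone /mnt/discC wks03 192.168.4.103`, dismount, and move
the disk to the target machine.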


Re: nvidia mess, Re: Is there a kernel update for SL 6.2 coming up soon?

2013-01-25 Thread jdow

It looks like Nux has it under control if people listen.

I figure there should be some people around ElRepo at least as smart
and clever as a roughly 70 year old woman who's too dumb to retire
and keeps pushing her work, which is not system administration. And
I figure at my age I've earned the right to be cranky.

(Or are you telling me old age and guile beat youth and enthusiasm
yet again?)

{^_-}

On 2013/01/25 15:04, Akemi Yagi wrote:

On Fri, Jan 25, 2013 at 2:48 PM, jdow  wrote:


It seems there should have been some way to cozen yum into sending the
administrator email regarding the process that needs to be taken.



I ask once again that all elrepo-related talks be taken to the elrepo
mailing list. The SL list is the place where SL-specific topics are
handled. But more importantly, users who are concerned (this includes
RHEL and CentOS users) can join the discussion on the elrepo M/L. In
fact, there is already one post/idea by Nux:

http://lists.elrepo.org/pipermail/elrepo/2013-January/001620.html

And by now everyone is subscribed to the elrepo list, right? Right?

Akemi



Re: nvidia mess, Re: Is there a kernel update for SL 6.2 coming up soon?

2013-01-25 Thread jdow

On 2013/01/25 15:04, Akemi Yagi wrote:

On Fri, Jan 25, 2013 at 2:48 PM, jdow  wrote:


It seems there should have been some way to cozen yum into sending the
administrator email regarding the process that needs to be taken.



I ask once again that all elrepo-related talks be taken to the elrepo
mailing list. The SL list is the place where SL-specific topics are
handled. But more importantly, users who are concerned (this includes
RHEL and CentOS users) can join the discussion on the elrepo M/L. In
fact, there is already one post/idea by Nux:

http://lists.elrepo.org/pipermail/elrepo/2013-January/001620.html

And by now everyone is subscribed to the elrepo list, right? Right?


You ask too much, perhaps.


Akemi


{^_^}


Re: nvidia mess, Re: Is there a kernel update for SL 6.2 coming up soon?

2013-01-25 Thread jdow

On 2013/01/25 13:31, Lamar Owen wrote:

On Jan 25, 2013, at 3:57 PM, jdow wrote:

To a degree I can sympathize with the elrepo people. Nvidia has screwed up.


This is not the first time nvidia cards have gone 'legacy'.  There are now 
three supported legacy nvidia driver versions (304.xx, 173.xx, and 96.xx).  
Prior to 310.xx, there were but two.


After getting disgusted with ATI I abandoned them for Nvidia. Now maybe I
need to figure out how good the support for Intel cards is these days.



Yeah, good luck with that.


But, still, the gentlemen at elrepo could have handled this a little more
gracefully, methinks. It's a shame they're stuck in the middle here. They
seem to be basically very good folks.



They handled it as gracefully as it could have been handled, since the heads-up 
was posted on the elrepo list quite a while ago.  I do think that if one uses 
third party packages, one should follow at least the announcement lists for 
each such repo.


It seems there should have been some way to cozen yum into sending the
administrator email regarding the process that needs to be taken. Better
yet would be a test for cards gone legacy. This could be added to the
install for a 304.65 driver update that is basically the same as 304.64
with the addition of a test program.

In the post processing the test program is run.

At best that test program should setup a sequence of yum steps to remove
the newly installed 310 drivers and install the 304xx legacy drivers. I
suspect it would take fairly little script-fu to do either just the email
or the full reinstall magilla. For the emailing this line should work:

mail -v -s "Your Nvidia graphics card is going legacy" root < msg > /dev/null

Just install a batch file with that line and the "msg" text then run it
if the test program reports the board is unsupported. That message could
have the update script embedded within it for easy reinstallation.

{^_^}
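The test program proposed above can be sketched as a filter over lspci output.
The pattern below is an assumption covering the 6xxx/7xxx chips named in this
thread, not nvidia's authoritative support list:

```shell
#!/bin/sh
# Return success if the lspci output on stdin names a GPU that needs
# the 304.xx legacy driver (GeForce 6xxx/7xxx series).
is_legacy_gpu() {
  grep -Eq 'GeForce [67][0-9]{3}'
}

# On a real box, the post-install hook would be something like:
#   if lspci | is_legacy_gpu; then
#     mail -s "Your Nvidia graphics card is going legacy" root < msg
#   fi
```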


Re: nvidia mess, Re: Is there a kernel update for SL 6.2 coming up soon?

2013-01-25 Thread jdow

On 2013/01/25 13:04, Akemi Yagi wrote:

On Fri, Jan 25, 2013 at 12:57 PM, jdow  wrote:


  the gentlemen at elrepo .
  They seem to be basically very good folks.


s/basically//

;-)

By the way there is more than "gentlemen" on the ELRepo team... but
that's not important.

Akemi


I had a strong suspicion that was the case. However, as used "Gentlemen"
was generic human based. I gave up on the "him/her" or "himer" kind of
nonsense about the time people started thinking it'd be a good idea. I
embraced it for all of a week, maybe. If you male guys want some good
male only pronouns and nouns go generate your own new ones for referring
to men. All the others are generic now.

As for the textual substitution I stick with the way I phrased it. Remember
that one "Aw S***!" wipes out 1000 attaboys.

{^_-}


Re: nvidia mess, Re: Is there a kernel update for SL 6.2 coming up soon?

2013-01-25 Thread jdow

On 2013/01/25 12:28, Konstantin Olchanski wrote:

On Fri, Jan 25, 2013 at 12:19:14PM -0800, Konstantin Olchanski wrote:


3) nvidia and elrepo make a mess of the information on which cards are 
supported by which drivers.



To elaborate:

http://elrepo.org/tiki/kmod-nvidia

The grand total of information on supported video cards is this:

Supported Chipsets
This driver is the current release and supports the most recent NVIDIA graphics 
cards (GeForce 8 series GPUs onwards, as well as Quadro series). Users of cards 
based on older chipsets should use one of the following legacy drivers.

This is not helpful information. Are the GeForce 210 cards I bought in December
"recent" or not? Are they "8 series and onwards" or not? They are not listed in
the legacy lists; is that an omission, or have I dodged a bullet today?


To a degree I can sympathize with the elrepo people. Nvidia has screwed up.
After getting disgusted with ATI I abandoned them for Nvidia. Now maybe I
need to figure out how good the support for Intel cards is these days.

But, still, the gentlemen at elrepo could have handled this a little more
gracefully, methinks. It's a shame they're stuck in the middle here. They
seem to be basically very good folks.

{^_^}


Re: Is there a kernel update for SL 6.2 coming up soon?

2013-01-24 Thread jdow

On 2013/01/24 01:12, Phil Perry wrote:

On 24/01/13 08:08, jdow wrote:

On 2013/01/23 23:55, Phil Perry wrote:


Here is the announcement I made back in November that the 310.xx
series nvidia
drivers were dropping support for older 6xxx and 7xxx based hardware:

http://lists.elrepo.org/pipermail/elrepo/2012-November/001525.html


And how was I to know that and how was I to prevent 310 being placed
on a no longer supported brand new system? It's rather a bummer you know.



Did you read the release notes for the new driver? That's how I found out. Did
you read my discussion thread on the issue? That's how other elrepo users found
out and suggested the solution.


Chicken, please meet Mr. Egg. Mr. Egg, please meet Ms. Chicken. How am I to
have the release notes before an automatic update happens?


I really don't know what you expect me to say. We have set up an email list to
communicate with our users and we use it. We use our IRC channel too. Many
thousands of people use the software we package. Only a very small percentage
subscribe to the lists. There will be many people in exactly the same position
as you. I guess when things "break" for them they will come looking for answers
as you did, and we do our best to provide them. In this case we knew of the
issue, we had documented the issue and we had a solution prepared and waiting
for you. I'm really not sure what more you expect me to do for you, for free in
my own volunteered time? I'm really sorry if you feel it's a bummer.

As I said before, if you subscribe to the elrepo mailing list (or even hang out
in #elrepo on IRC) we *will* highlight important issues that affect the software
that we release as we did above in a discussion thread that ran for 2 months.



No, this is the nvidia driver telling you that your hardware is no longer
supported. It even tells you that you need the NVIDIA 304.xx Legacy
drivers.


That's not obvious. And I feel I have a rather perfect right to presume
the board should be supported. It is a brand new machine as of May last
year.


That's correct - you need to stay at the 304.xx driver as this is the
*last*
driver that will support your older hardware (7xxx based chipset). We
released
the legacy kmod-nvidia-304xx and nvidia-x11-drv-304xx packages to aid
in this
(see the thread linked above) and pushed them out to the main repo
*before* we
released the updated 310.xx series drivers.

Please uninstall the kmod-nvidia driver and install the
kmod-nvidia-304xx and
then you can continue to receive updates from elrepo.


I've just tried to downgrade and see what happens.


Nothing screwed up, nvidia simply decided it was time to move on from
supporting
aging hardware (~8 years old?) in the current driver release.


Nvidia screwed up. The hardware was brand new about 8 months ago. So I feel
I have a perfect right to be annoyed.



You'd need to take that up with nvidia, or maybe even your hardware vendor why
they are using old chipsets.


Now, how do I stop new stuff from coming in? If there is a change in what
is supported then it behooves somebody to provide an automated test to
make sure the systems keep running by not downloading updates that do not
fit the particular system. After all "lspci" exists, reports this line
"00:0d.0 VGA compatible controller: nVidia Corporation C61 [GeForce 7025 /
nForce 630a] (rev a2)", and the install could be aborted when that is
found and the administrator notified.



Yes, we had that discussion and if we knew of a way to technically implement
that we would have seriously considered it.

Please, if you can suggest a mechanism for an RPM package to know what hardware
is present *before* it installs itself, and then prevent itself from installing
if the correct hardware isn't present, and do all this from within a yum
transaction, then I'm all ears. You can run such tests in %pre or %post scripts
but by then the yum transaction is already underway and the package is set to be
installed or is already installed. At which point the best you can do is log the
issue to warn the user, which is *exactly* what the nvidia driver does - even
then you didn't understand what the log entry was telling you. We didn't see any
need to replicate that.


Hm, this might be an issue to be bumped into the RPM maintainers.

There is a trick I used with (much) older redhat versions that might apply.
I built a dummy rpm for spamassassin with a high version number so that it
would be perpetually happy with what I had and not overwrite versions I was
working with via direct downloads.

Suppose you pushed a kmod 309 version that was really a "test". Within
the test, if it detects an incompatible board, it sets up a download for
the 304 versions, if applicable, plus some "alongside" dummy versions
numbered 500 or something. Otherwise it does nothing and allows the
310 version to load a week later.


If you are

Re: Is there a kernel update for SL 6.2 coming up soon?

2013-01-24 Thread jdow

On 2013/01/23 23:55, Phil Perry wrote:


Here is the announcement I made back in November that the 310.xx series nvidia
drivers were dropping support for older 6xxx and 7xxx based hardware:

http://lists.elrepo.org/pipermail/elrepo/2012-November/001525.html


And how was I to know that and how was I to prevent 310 being placed
on a no longer supported brand new system? It's rather a bummer you know.



No, this is the nvidia driver telling you that your hardware is no longer
supported. It even tells you that you need the NVIDIA 304.xx Legacy drivers.


That's not obvious. And I feel I have a rather perfect right to presume
the board should be supported. It is a brand new machine as of May last
year.


That's correct - you need to stay at the 304.xx driver as this is the *last*
driver that will support your older hardware (7xxx based chipset). We released
the legacy kmod-nvidia-304xx and nvidia-x11-drv-304xx packages to aid in this
(see the thread linked above) and pushed them out to the main repo *before* we
released the updated 310.xx series drivers.

Please uninstall the kmod-nvidia driver and install the kmod-nvidia-304xx and
then you can continue to receive updates from elrepo.


I've just tried to downgrade and see what happens.


Nothing screwed up, nvidia simply decided it was time to move on from supporting
aging hardware (~8 years old?) in the current driver release.


Nvidia screwed up. The hardware was brand new about 8 months ago. So I feel
I have a perfect right to be annoyed.

Now, how do I stop new stuff from coming in? If there is a change in what
is supported then it behooves somebody to provide an automated test to
make sure the systems keep running by not downloading updates that do not
fit the particular system. After all "lspci" exists, reports this line
"00:0d.0 VGA compatible controller: nVidia Corporation C61 [GeForce 7025 /
nForce 630a] (rev a2)", and the install could be aborted when that is
found and the administrator notified.

{o.o}


Re: Is there a kernel update for SL 6.2 coming up soon?

2013-01-23 Thread jdow

On 2013/01/23 20:33, Akemi Yagi wrote:

On Wed, Jan 23, 2013 at 8:04 PM, jdow  wrote:

On 2013/01/23 19:27, Alan Bartlett wrote:



I fail to see the significance -- or relevance -- of your last sentence.

Please remember this is the main support channel for Scientific Linux.

Alan.



It's not this channel's support issue. I understand that. This is why
I wondered if 6.2 was going to have a kernel update.

ElRepo pushed the newer 310 NVidia modules before any appropriate kernel
appeared on SL 6.2. So I have to back off. I simply wondered if waiting for
a new kernel was practical or not.

And that's been answered. ElRepo messed up pushing updates. (Or else
there should be a more recent kernel for 6.2 than what I am running,
2.6.32-279.19.1.el6.x86_64.)


Let me post once again the link I provided for you earlier in this
thread. I'm afraid you missed it.

http://elrepo.org/tiki/kmod-nvidia-304xx

That page has a list of supported GPUs. This is all about the hardware
you have and the version of Nvidia's driver that supports it. Which
kernel version is _not_ relevant.

You may also want to check out this post on the ELRepo's mailing list:

http://lists.elrepo.org/pipermail/elrepo/2013-January/001587.html

I quoted an essential part of it in my earlier post as well. I
strongly suggest you subscribe to the ELRepo general mailing list. If
you still have questions about the Nvidia-related packages offered by
ELRepo, please ask on the ELRepo's list.


With all due respect, Akemi, I'd like you to note two details.

First the message you point to is for version 304. ElRepo pushed 310.
Before that program load I was up to date with nVidia as well as kernel.
I'm not sure if I had 304 or earlier. Given the date, that is the version
I had loaded as of yesterday when 310 replaced it.

Second I get this message in the dmesg log from a reboot this morning.
===8<---
nvidia: module license 'NVIDIA' taints kernel.
Disabling lock debugging due to kernel taint
NVRM: The NVIDIA GeForce 7025 / nForce 630a GPU installed in this system is
NVRM:  supported through the NVIDIA 304.xx Legacy drivers. Please
NVRM:  visit http://www.nvidia.com/object/unix.html for more
NVRM:  information.  The 310.32 NVIDIA driver will ignore
NVRM:  this GPU.  Continuing probe...
NVRM: No NVIDIA graphics adapter found!
===8<---

Apparently the kernel or something is NOT supported with 310. So pushing
the 310 was apparently an error of overoptimism for this system. I'd have
expected the RPM to take this into account. But this is an ElRepo issue
not one from here. So I've tried to be brief. I see I had to give up that
effort.

That said it appears I have to go back to 304 and turn off elrepo updates,
which is moderately inconvenient. The Nvidia 7025 is embedded on the
motherboard that is less than a year old. So I figure SOMETHING screwed
up if it's no longer supported.

{^_^}


Re: Is there a kernel update for SL 6.2 coming up soon?

2013-01-23 Thread jdow

On 2013/01/23 19:27, Alan Bartlett wrote:

On 24 January 2013 02:52, jdow  wrote:

Um, 4 weeks is a trifle long for no "Gnome" or "KDE". Ah well, I guess
I wait. I seldom use the machine from a GUI anyway.

It seems ElRepo may have screwed up.

{^_^}


I fail to see the significance -- or relevance -- of your last sentence.

Please remember this is the main support channel for Scientific Linux.

Alan.


It's not this channel's support issue. I understand that. This is why
I wondered if 6.2 was going to have a kernel update.

ElRepo pushed the newer 310 NVidia modules before any appropriate kernel
appeared on SL 6.2. So I have to back off. I simply wondered if waiting for
a new kernel was practical or not.

And that's been answered. ElRepo messed up pushing updates. (Or else
there should be a more recent kernel for 6.2 than what I am running,
2.6.32-279.19.1.el6.x86_64.)

{^_^}


Re: Is there a kernel update for SL 6.2 coming up soon?

2013-01-23 Thread jdow

On 2013/01/23 08:51, Akemi Yagi wrote:

On Wed, Jan 23, 2013 at 8:20 AM, jdow  wrote:

Is there a kernel update for SL 6.2 coming up soon?

It seems elrepo released a new set of nvidia modules that don't work
with the 2.6.32-279.19.1.el6.x86_64 kernel. That one only supports
through nvidia 304 and the download is 310.


Copying a note from Phil Perry on the ELRepo list:

"Just a quick note to say kmod-nvidia-304xx and nvidia-x11-drv-304xx
legacy packages have now been released to the main repositories for
both el5 and el6.

These packages support 6xxx and 7xxx based graphics cards.

http://elrepo.org/tiki/kmod-nvidia-304xx "

So, you'd need to install kmod-nvidia-304xx to stay with the 304 series.

Akemi


So I have to downgrade all four upgraded modules?
kmod-nvidia, both nvidia*, and vino modules?

I can do it. But they released 310 modules rather than 304 modules
yesterday.

{^_-}


Re: Is there a kernel update for SL 6.2 coming up soon?

2013-01-23 Thread jdow

Um, 4 weeks is a trifle long for no "Gnome" or "KDE". Ah well, I guess
I wait. I seldom use the machine from a GUI anyway.

It seems ElRepo may have screwed up.

{^_^}

On 2013/01/23 08:21, Jamie Duncan wrote:

Red Hat is releasing RHEL 6.3 in approximately 4 weeks.


On Wed, Jan 23, 2013 at 11:20 AM, jdow <j...@earthlink.net> wrote:

Is there a kernel update for SL 6.2 coming up soon?

It seems elrepo released a new set of nvidia modules that don't work
with the 2.6.32-279.19.1.el6.x86_64 kernel. That one only supports
through nvidia 304 and the download is 310.

{^_^}




--
Thanks,

Jamie Duncan
@jamieeduncan



Is there a kernel update for SL 6.2 coming up soon?

2013-01-23 Thread jdow

Is there a kernel update for SL 6.2 coming up soon?

It seems elrepo released a new set of nvidia modules that don't work
with the 2.6.32-279.19.1.el6.x86_64 kernel. That one only supports
through nvidia 304 and the download is 310.

{^_^}


Re: STrange thing with sudo, fg and vim

2012-12-17 Thread jdow

On 2012/12/17 06:55, sam_pb wrote:

Good $(date) !

I have a server running SL 6.0 (Carbon) x86; the installation is up-to-date.

When I need to edit a config file, I run (for example):
$ sudo vim /etc/rc.local

But a strange thing has been happening since an update about 1.5 - 2 months ago:
1). I ran
 sudo vim /etc/rc.local
2). press CTRL+Z to background vim (or sudo?) and I get return code = 148
3). do some other things in my terminal
4). try to resume the vim process, and it fails:
 $ fg

If I try to do anything, vim goes to background again, but the return code is
now 149 :(

All subsequent attempts to foreground vim are useless; return code 149 is
returned :| ( For example: )
 $ fg
 sudo vim /etc/rc.local

 [1]+  Stopped sudo vim /etc/rc.local
 ret:149^t:25@17:45^u:adska@s01^d:~^sh:-bash$


The same problem appeared on another SL 6.3 x86_64 server right after
installation. But after an update from the SL-FASTBUGS repo, the problem was gone.
 $ sudo yum --enablerepo=sl-fastbugs update

With the first, x86, server that update was useless.



PS: This happens only with sudo ( if I do ``sudo su'' and work in the root
terminal, that stuff doesn't happen )
PPS: I googled a little and found that SIGTTIN (#21) is involved
PPPS: Any suggestions?
P: This is really annoying stuff :((



On a hunch try "sudo fg".

{^_^}


Re: [SCIENTIFIC-LINUX-USERS] Yum update problem -

2012-10-31 Thread jdow

On 2012/10/31 15:40, Bob Goodwin - Zuni, Virginia, USA wrote:

On 31/10/12 18:17, David Sommerseth wrote:

Otherwise, have a look in /etc/yum.repos.d ... and use rpm -qif <file> to
see where this file came from.  If it's something you've added manually, you
won't find a match. Otherwise, it's usually coming from a package somewhere.


This is what I get, it looks to me like it came with the SL Live files:

[root@192 bobg]# rpm -qif syslinux-extlinux-4.02-4.el6.x86_64
error: file /home/bobg/syslinux-extlinux-4.02-4.el6.x86_64: No
such file or directory
[root@192 bobg]# rpm -qif /boot/extlinux/whichsys.c32
Name        : syslinux-extlinux            Relocations: (not relocatable)
Version     : 4.02                         Vendor: Scientific Linux
Release     : 4.el6                        Build Date: Fri 20 May 2011 21:40:06 EDT
Install Date: Thu 16 Feb 2012 06:01:48 EST Build Host: spacewalk.fnal.gov
Group       : System/Boot                  Source RPM: syslinux-4.02-4.el6.src.rpm
Size        : 1262476                      License: GPLv2+
Signature   : DSA/SHA1, Sat 21 May 2011 08:58:42 EDT, Key ID b0b4183f192a7d7d
Packager    : Scientific Linux
URL         : http://syslinux.zytor.com/wiki/index.php/The_Syslinux_Project
Summary     : The EXTLINUX bootloader, for booting the local system.
Description :
The EXTLINUX bootloader, for booting the local system, as well as all
the SYSLINUX/PXELINUX modules in /boot.

Bob


Bob, this is what I get with a fairly carefully updated and maintained
6.2 system, which magically has been getting more and more stable since
I installed it as 6.0:
[root ~]# rpm -q syslinux-extlinux -i
Name: syslinux-extlinuxRelocations: (not relocatable)
Version : 4.02  Vendor: Scientific Linux
Release : 4.el6 Build Date: Fri 20 May 2011 06:40:06 
PM PDT

Install Date: Wed 10 Aug 2011 01:16:11 PM PDT  Build Host: 
spacewalk.fnal.gov
Group   : System/Boot   Source RPM: 
syslinux-4.02-4.el6.src.rpm
Size: 1262476  License: GPLv2+
Signature   : DSA/SHA1, Sat 21 May 2011 05:58:42 AM PDT, Key ID b0b4183f192a7d7d
Packager: Scientific Linux
URL : http://syslinux.zytor.com/wiki/index.php/The_Syslinux_Project
Summary : The EXTLINUX bootloader, for booting the local system.
Description :
The EXTLINUX bootloader, for booting the local system, as well as all
the SYSLINUX/PXELINUX modules in /boot.

and

[root ~]# rpm -q syslinux -i
Name        : syslinux                     Relocations: (not relocatable)
Version     : 4.02                         Vendor: Scientific Linux
Release     : 4.el6                        Build Date: Fri 20 May 2011 06:40:06 PM PDT
Install Date: Wed 10 Aug 2011 01:10:36 PM PDT  Build Host: spacewalk.fnal.gov
Group       : Applications/System          Source RPM: syslinux-4.02-4.el6.src.rpm
Size        : 2121438                      License: GPLv2+
Signature   : DSA/SHA1, Sat 21 May 2011 05:58:42 AM PDT, Key ID b0b4183f192a7d7d
Packager    : Scientific Linux
URL         : http://syslinux.zytor.com/wiki/index.php/The_Syslinux_Project
Summary : Simple kernel loader which boots from a FAT filesystem
Description :
SYSLINUX is a suite of bootloaders, currently supporting DOS FAT
filesystems, Linux ext2/ext3 filesystems (EXTLINUX), PXE network boots
(PXELINUX), or ISO 9660 CD-ROMs (ISOLINUX).  It also includes a tool,
MEMDISK, which loads legacy operating systems from these media.


Perhaps you might consider uninstalling the rpmforge version and install
the SL version in its place. You might also take some measures to make
sure that machine is still yours and has not been thoroughly compromised.
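A hedged sketch of both checks for an RPM-based system (the file path is the one from the thread; no output from rpm -V means nothing on disk has changed):

```shell
# Which package owns the file, and did it come from a Scientific Linux build?
rpm -qf /boot/extlinux/whichsys.c32
rpm -qi syslinux-extlinux | grep -E 'Vendor|Signature|Build Host'

# Verify on-disk files against the rpm database (size, digest, permissions).
# Silence is good; 'rpm -Va' does the same check for every installed package.
rpm -V syslinux-extlinux
```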

{^_^}


Re: Iptable rule required to block youtube

2012-10-05 Thread jdow

Presuming that is the right address for your region on this ball of
dirt, how do you access Google? Google and YouTube share the same
address block, which is addresses 74.125.239.0-74.125.239.14.

Google owns 74.125.0.0/16 for that matter. I don't doubt that they
have other netblocks, too.

{o.o}

On 2012/10/05 00:10, vivek chalotra wrote:

I have blocked youtube(ips from 74.125.236.0- 74.125.236.14) in my gateway
machine using the below rules:


iptables -A INPUT -i eth1 -s 74.125.236.0 -j DROP
iptables -A INPUT -i eth1 -p tcp -s 74.125.236.0 -j DROP
iptables -A INPUT -i eth0 -s 74.125.236.0 -j DROP
iptables -A INPUT -i eth0 -p tcp -s 74.125.236.0 -j DROP

but how to block on the whole network. Other hosts are still able to access 
youtube.

Vivek Chalotra
GRID Project Associate,
High Energy Physics Group,
Department of Physics & Electronics,
University of Jammu,
Jammu 180006,
INDIA.
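For the question above: traffic from other LAN hosts crosses the gateway's FORWARD chain, not INPUT, and outbound blocking should match the destination (-d), not the source. A hedged sketch (the /28 covers 74.125.236.0-74.125.236.15 and is an assumption; verify the current netblocks yourself):

```shell
# Block forwarded traffic from all LAN hosts to the published range.
iptables -A FORWARD -d 74.125.236.0/28 -j DROP
# Optionally keep the gateway itself from reaching it too.
iptables -A OUTPUT -d 74.125.236.0/28 -j DROP
# Persist across reboots on EL6.
service iptables save
```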


On Thu, Oct 4, 2012 at 11:57 PM, Henrique Junior <henrique...@gmail.com> wrote:

Maybe you should take a look at ClearOS[1].
It is a RHEL based distribution from a company that now develops
layer7-filter. In a simple way I was able to block all FLV videos (even if
the users are still able to reach youtube.com, they can
not see any videos).

[1] - http://www.clearfoundation.com/Software/overview.html
--
Henrique "LonelySpooky" Junior
http://about.me/henriquejunior



*From:* Konstantin Olchanski <olcha...@triumf.ca>
*To:* vivek chalotra <vivekat...@gmail.com>
*Cc:* scientific-linux-us...@fnal.gov

*Sent:* Thursday, October 4, 2012 3:10 PM

*Subject:* Re: Iptable rule required to block youtube

On Thu, Oct 04, 2012 at 12:57:00PM +0530, vivek chalotra wrote:
 >
 > And now i want to block youtube on my network. kindly suggest iptable
rules to do that.
 >

"block youtube on my network" is not a very well defined wish.

If you want to merely block the well known youtube IP and DNS addresses,
you can use iptables, etc. Be prepared to update these lists frequently
to keep up with things like youtu.be & co.

If you want to prevent users of the network from watching all youtube
videos always,
give up now.

First of all, you will have to be able to handle legitimate exceptions:
"how do I watch training videos for Altera Quartus software that
happen to be hosted on youtube?!?".

Second, you will have to handle all the possible 3rd party redirectors,
proxies, and other kludges specifically designed to circumvent
youtube blockers such as you are trying to build.

--
Konstantin Olchanski
Data Acquisition Systems: The Bytes Must Flow!
Email: olchansk-at-triumf-dot-ca
Snail mail: 4004 Wesbrook Mall, TRIUMF, Vancouver, B.C., V6T 2A3, Canada





Re: clock factor file

2012-10-03 Thread jdow

On 2012/10/03 01:33, g wrote:


On 10/03/2012 06:45 AM, jdow wrote:

On 2012/10/02 22:09, g wrote:

greetings.

in unix, there is a file, name of which i do not recall, used as a
'clock factor' and controls the 'tick rate' for the system clock.

is such a file used in scientific linux and what is its name?

tia.


Do you mean adjtime by any chance? Unfortunately the values in its
fields do not seem to be well defined. It is part of the initscripts
package.

{^_^}


it has been way too long since i used unix to say yes, but 'adj-time'
does not sound familiar.

there were no 'fields', as in linux's 'adj-time', only a single value
that was over 10 digits long.

as stated in orig post, it was used as a _factor_to_adjust_ the
'tick rate'. no speed up or slow down as 'adj-time' does.


adjtime (no dash) is various clock correction factors which may or
may not affect tick rate.

If you mean adjusting the kernel task time quantum or the basic time
quantum on timers from like 10ms to 1ms that's another ballgame. I
have the impression that Linux tends to be tickless and adjusts
itself to perceived needs to a large degree. But I've not followed
that very much of late.

{^_^}
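For reference, hwclock(8) describes /etc/adjtime as a three-line file; an annotated sketch with made-up values:

```
0.000000 1349222400 0.000000   <- systematic drift (sec/day), last adjust time (epoch), status
1349222400                     <- epoch of last calibration
UTC                            <- hardware clock kept in UTC (or LOCAL)
```

The real file carries no annotations; only the bare values appear.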


Re: clock factor file

2012-10-02 Thread jdow

Do you mean adjtime by any chance? Unfortunately the values in its
fields do not seem to be well defined. It is part of the initscripts
package.

{^_^}

On 2012/10/02 22:09, g wrote:

greetings.

in unix, there is a file, name of which i do not recall, used as a
'clock factor' and controls the 'tick rate' for the system clock.

is such a file used in scientific linux and what is its name?

tia.



Re: SSD and RAID question

2012-09-08 Thread jdow

On 2012/09/08 18:34, Todd And Margo Chester wrote:

On 09/05/2012 03:34 PM, jdow wrote:

But if the real limit is related to
read write cycles on the memory locations you may find that temperature
has little real effect on the system lifetime.



I did some reliability analysis for the military about 25 years
ago.  It was pretty much following general guidelines and most
of it was baloney.  What I do remember was that failure rates from
temperature were not a linear curve, they were an exponential
curve.  I will strongly concur with you that heat is your enemy.

What I would love to see, but have never seen, is a Peltier
heat pump to mount hard drives on.

-T


Um, yes, temperature is a REAL problem when it gets high. But so is a
limited number of useable rewrites for memory locations. At "reasonable"
temperatures it should last a long time if write cycle limits are not
a problem. (Relays have a cycle related lifetime limit as well as the
usual temperature limit, as well.)

And thank Ghu that we're not dealing with radiation here. That gets
ugly, fast. I had to use two generations old TTL logic without gold
doping for the Frequency Synthesizer and Distribution Unit when
creating the basic Phase 2 GPS satellite design for that reason. Ick!

{^_-}


Re: SSD and RAID question

2012-09-05 Thread jdow

On 2012/09/05 11:38, Todd And Margo Chester wrote:

On 09/04/2012 12:21 PM, Konstantin Olchanski wrote:

Cherryville drives have a 1.2 million hour MTBF (mean time
between failure) and a 5 year warranty.

Note that MTBF of 1.2 Mhrs (137 years?!?) is the *vendor's estimate*.


Baloney check.  1.2 Mhrs does not mean that the device is expected
to last 137 years.  It means that if you have 1.2 million devices
in front of you on a test bench, you would expect one device to
fail in one hour.

-T


Baloney check back at you. If you have 1.2 million devices in front
of you, all operating under the same conditions as specified for the 1.2
million hours MTBF, half of them would have failed by the end of
the 1.2 million hours. Commentary here indicates those conditions are
a severe derating on the drive's transaction capacity.
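Both readings of the 1.2 Mhr figure can be checked numerically under the standard constant-failure-rate (exponential) model, which is itself an assumption since real drives wear out. It gives roughly one failure per hour across 1.2 million units early on, and the fraction failed by t = MTBF works out to about 63%, a bit more than half:

```python
import math

MTBF_HOURS = 1.2e6  # the vendor figure from the thread

def fraction_failed(hours, mtbf=MTBF_HOURS):
    """Fraction of a large population expected to have failed by `hours`,
    assuming a constant failure rate (exponential lifetime model)."""
    return 1.0 - math.exp(-hours / mtbf)

# 1.2 million drives on the bench: expected failures in the first hour (~1).
print(1.2e6 * fraction_failed(1))

# By t = MTBF, about 63% of the population has failed, not half.
print(round(fraction_failed(MTBF_HOURS), 3))

# The median lifetime is MTBF * ln 2, roughly 0.83 Mhrs.
print(round(MTBF_HOURS * math.log(2)))
```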

It does not say much of anything about the drive's life under other
conditions because no failure mechanism is cited. For example, if the
drive is well cooled and that means the components inside are well
cooled rather than left in usual mountings the life might be far
greater simply based on the component temperature drop. 10C can make
a large difference in lifetime. But if the real limit is related to
read write cycles on the memory locations you may find that temperature
has little real effect on the system lifetime.

If I could design a system that worked off a fast normal RAID and could
buffer in the SSD RAID with a safe automatic fall over when the SSD RAID
failed, regardless of failure mode, and I needed the speed you can bet
I'd be in there collecting statistics for the company for whom I did the
work. There is a financial incentive here and potential competitive
advantage to KNOW how these drives fail. With 100 drives after the first
5 or 10 had died some knowledge might be gained. And, of course, if the
drives did NOT die even more important knowledge would be gained.

Simple MTBF under gamer type use is pretty useless for a massive database
application. And if manufacturers are not collecting the data there is a
really good potential for competitive advantage if you collect your own
data and hold it proprietary. I betcha somebody out there is doing this
right now for Wall Street automated trading uses if nothing else.

{^_^}


Re: SSD and RAID question

2012-09-02 Thread jdow

On 2012/09/02 20:26, Nathan wrote:


In my experience, I've had more problems with hardware RAID controllers than any
other component (hardware OR software) except for traditional hard drives
themselves.  We switched to software RAID (Linux) and ZFS (*BSD and Solaris)
  years ago.

But that's just us.  YMMV.


Speaking of software raid, I have four disks that are from a RAID on
a motherboard with the Intel ICH10 controller. They were in RAID 5.
The motherboard is a "was a motherboard" for the most part. I note
that the Linux raid could read the disks in that machine. If I stick
the four disks into four USB<->SATA adapters is it likely the Linux
raid software will be able to piece them together so I can get the
"not so critical" last few bits of effort off them that I've not
been able to keep the motherboard up long enough to get already? (The
native system on the disks was Windows 7 with which I make some real
income.)

If there's a good chance it would work that will change my recovery
strategy a little.

{^_^}
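For what it's worth, the ICH10 on-board RAID uses Intel Matrix (IMSM) metadata, which Linux md can usually assemble directly as long as all four members are visible, whatever bus they arrive on. A hedged sketch only (device names are hypothetical; work on read-only copies if the data matters):

```shell
# Check the four members for Intel RAID metadata first.
mdadm --examine /dev/sd[bcde]

# Assemble the IMSM container and the RAID5 volume inside it, read-only.
mdadm --assemble --scan --readonly
cat /proc/mdstat

# The NTFS volume should then be mountable read-only, e.g.:
#   mount -o ro /dev/md126p2 /mnt/rescue
```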


Re: jumbo frames?

2012-08-27 Thread jdow

On 2012/08/27 15:32, Todd And Margo Chester wrote:

On 08/27/2012 03:16 PM, jdow wrote:




ifconfig comes to mind.
{^_^}



$ ifconfig virbr0
virbr0Link encap:Ethernet  HWaddr 52:54:00:EB:2D:7B
   inet addr:192.168.122.1  Bcast:192.168.122.255 Mask:255.255.255.0
   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
   RX packets:0 errors:0 dropped:0 overruns:0 frame:0
   TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
   collisions:0 txqueuelen:0
   RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

What am I looking for?



MTU:1500

This interface is not doing jumbo frames.

Betcha this shows the same thing:
cat /sys/class/net/virbr0/mtu

(Or find it under /sys/devices/virtual/net possibly.)
Mess with it carefully.
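That check needs no root, since sysfs is world-readable; a minimal sketch (lo used as a stand-in interface name):

```shell
# Read an interface's MTU; this mirrors what ifconfig reports.
IF=lo   # substitute virbr0, eth0, ... as needed
mtu=$(cat /sys/class/net/$IF/mtu)
echo "$IF mtu=$mtu"

# Raising it to jumbo size needs root plus NIC and switch support, e.g.:
#   ip link set dev eth0 mtu 9000
# or MTU=9000 in the interface's ifcfg file to make it persistent.
```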

{^_^}


Re: jumbo frames?

2012-08-27 Thread jdow

On 2012/08/27 14:37, Todd And Margo Chester wrote:

On 08/27/2012 01:57 PM, Carl Friedberg wrote:


-Original Message-
From: owner-scientific-linux-us...@listserv.fnal.gov [mailto:owner-
scientific-linux-users@listserv.fnal.gov] On Behalf Of Todd And Margo
Chester
Sent: Monday, August 27, 2012 4:49 PM
To: Scientific Linux Users
Subject: jumbo frames?

Hi All,

Can anyone tell me what this means?

  just disable jumbo frames on centos host interface
  ifcfg and ethernet switch.

Many thanks,
-T




 > Todd and Margo Chester:
 >
 >   I can give you some information.
 >
 > jumbo frames refer to a capability to send very large packets
 > over gigabit Ethernet (somewhere near 9,000 bytes), as
 > opposed to the traditional ~1500 byte packet size on
 > traditional Ethernet.
 >
 > This only works between two end-points if every switch
 > handling the frame/packet has jumbo frame capability
 > enabled.
 >
 > There have been instances (I've run into them, but not
 > on SL) where enabling jumbo frames can cause issues.
 >
 > So, since you didn't provide context on that piece of
 > advice,  I can't guess why they were suggesting
 > disabling jumbo frames.
 >
 > Typically, jumbo frames are disabled by default (but,
 > I don't know what the SL policy is).
 >
 > Carl
 >
 > Carl Friedberg
 > www.about.me/carl.friedberg
 > friedb...@comets.com
 > www.comets.com
 > Problems Solved
 >
 >

Where would I go to check on this?  Is there a utility?



ifconfig comes to mind.
{^_^}


Re: Procmail problem

2012-08-23 Thread jdow

On 2012/08/23 01:04, Anne Wilson wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 22/08/12 22:40, jdow wrote:

On 2012/08/22 11:08, Anne Wilson wrote:

-BEGIN PGP SIGNED MESSAGE- Hash: SHA1

On 22/08/12 18:17, Konstantin Olchanski wrote:

On Wed, Aug 22, 2012 at 12:33:09PM +0100, Anne Wilson wrote:



... procmail appears to be not doing its stuff



What could be wrong?



We just had a spike of cartalk style questions "my car does
not start, what could be wrong?"

(my procmail, my wifi, whatever).

What could be wrong?



Instead of facile retorts, why not ask a question if you think
particular information may help?  In fact someone with more sense
has helped me off-list, if only to confirm that my install is
broken beyond what would be a sensible repair.

Just as a matter of interest, when I ask a question with only
outline information, as this was, it's because I need a pointer
as to which particular bit of the problem is most likely to need
investigation, in order to avoid swamping the list with
irrelevancies.  You spectacularly failed to give this.


Anne, I know you are not a newby. But your question was sort of
vague and seemed to betray a failure to perform some basic
troubleshooting 101 steps, "man procmail" is one. (There IS a lot
to read there in the various files. But learning the basics of how
procmail works is going to be step one for a lasting good solution.
This is hard for ancient bitches like me but I persevere. Having
learned is a nice feeling. The distressing part is how fast it all
dribbles away these days.)

{o.o}


That I can empathise with.  However, procmail has worked for me for
several years until this problem, so I was pretty sure that it was not
the obvious problem of broken procmailrc.  As I said, I wanted a
pointer to other things that could affect the working.

Browsing through logs (something that I simply hadn't had time for in
the last few weeks due to family problems) I found that an update had
broken some perl packages, and I couldn't get those mended.  The
problem almost certainly started when I followed web-page advice about
fixing priorities on the repos.  The more I dig, the more I find that
it has messed up such a large part of the installation that it is
simply not feasible to try to fix it.  A new install would be much
quicker.


Look to what procmail calls, like SpamAssassin. THAT might be where the
breakage exists.

{^_^}


Re: Procmail problem

2012-08-22 Thread jdow

On 2012/08/22 11:08, Anne Wilson wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 22/08/12 18:17, Konstantin Olchanski wrote:

On Wed, Aug 22, 2012 at 12:33:09PM +0100, Anne Wilson wrote:



... procmail appears to be not doing its stuff



What could be wrong?



We just had a spike of cartalk style questions "my car does not
start, what could be wrong?"

(my procmail, my wifi, whatever).

What could be wrong?



Instead of facile retorts, why not ask a question if you think
particular information may help?  In fact someone with more sense has
helped me off-list, if only to confirm that my install is broken
beyond what would be a sensible repair.

Just as a matter of interest, when I ask a question with only outline
information, as this was, it's because I need a pointer as to which
particular bit of the problem is most likely to need investigation, in
order to avoid swamping the list with irrelevancies.  You
spectacularly failed to give this.


Anne, I know you are not a newby. But your question was sort of vague
and seemed to betray a failure to perform some basic troubleshooting
101 steps, "man procmail" is one. (There IS a lot to read there in the
various files. But learning the basics of how procmail works is going
to be step one for a lasting good solution. This is hard for ancient
bitches like me but I persevere. Having learned is a nice feeling. The
distressing part is how fast it all dribbles away these days.)

{o.o}


Re: Procmail problem

2012-08-22 Thread jdow

On 2012/08/22 04:33, Anne Wilson wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 21/08/12 20:02, Gerald Waugh wrote:


On 08/21/2012 01:50 PM, Anne Wilson wrote:

For reasons not relevant to this list, I've been using GMail to
read my mail for a while, so I hadn't noticed that my local mail
server now has big problems.



I see mail arriving there in maillog, and my .forward file
passes it to procmail - but procmail appears to be not doing its
stuff - the messages aren't being delivered at all.



What could be wrong?



Anne

What is in  your .procmailrc $HOME/.procmailrc


Thanks for answering.  .procmailrc is a restored file that worked
perfectly before - I'll give you the details if you still want to see
it after reading this.

Digging around in logs I found that there have been some problems with
perl updates - and I assume that procmail is a perl app.  It seems
likely that I messed things up when setting repo priorities, so I set
out to clean things up.

First I removed the packages that wouldn't update, keeping a list so
that I can restore them as soon as I have the problem sorted, and
tweaked the priorities of rpmforge, since that seems to be the one
that was missing out.  At first I thought that had done it, as I got a
clean list of updates about to perform.  Unfortunately, I then got a
transaction error -

file /sbin/extlinux from install of syslinux-4.05-1.el6.rfx.x86_64
conflicts with file from package syslinux-extlinux-4.02-4.el6.x86_64

If I try to update the remaining packages as they stand at present I get

Error: glibc-2.12-1.25.el6.i686 (sl)
   Requires: glibc-common = 2.12-1.25.el6
   Installed: glibc-common-2.12.1.47.el6_2.5.x86_64 (@sl-security)
 glibc-common = 2.12.1.47.el6_2.5
   Available: glibc-common-2.12-1.25.el6.x86_64 (sl)
 glibc-common = 2.12-1.25.el6

I thought of removing glibc, again, to instantly re-install, to ensure
I got the right version, but that promises to remove hundreds of
packages, so I abandoned that idea.

By now I am completely out of my depth.  Is it possible to repair this
system, or would be simpler to just abandon it, and try a clean
install, setting up everything afresh?  I hate to be beaten, but a
broken system is a liability.
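For a mixed-up glibc like the error above, a hedged repair sketch worth trying before a reinstall (package-cleanup comes from yum-utils; all of this assumes the SL base repos are still reachable):

```shell
# Show dependency problems and duplicate package versions.
package-cleanup --problems
package-cleanup --dupes

# Keep third-party repos out of the way while repairing base packages.
yum --disablerepo='*' --enablerepo=sl,sl-security update glibc glibc-common

# Afterwards, cross-check what rpm thinks is installed.
rpm -Va --nofiles --nodigest | less
```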


Anne, procmail is not in any way related to perl. It is a modest ELF
binary executable.

So let's start with some basics. Is there an /etc/procmailrc file? What
does it look like. What does your home directory's .procmailrc file look
like? Procmail has been solid for me for two decades or so now. It has
also grown as I put in "discard with extreme prejudice" anti-troll
measures. (That would be sites that require me to verify I am really
me before they will forward a mailing list email to the customer, for
example.) Make sure you have no such filters in your .procmailrc file.

Generally consider procmail as a filter. Text goes in, is manipulated,
and usually flows out the other end ultimately into the user's mailbox.
You CAN cozen procmail into inserting log messages as it processes emails.
(Heck, I had mine rigged to play chimes when I received customer emails at
one time.)

If you have log messages of these emails going into procmail then it is
time to get procmail to tell you what it is doing. You CAN set a "VERBOSE"
flag, "VERBOSE=yes". That gets it to spit out error messages of some use.
Note that signals USR1 will turn off VERBOSE and USR2 will turn it on.

Note that this little incantation right after any defines in .procmailrc
will preserve emails so you can manually process them after splitting
the email out of the destination file using "mail -f$HOME/mail/rawmbox":
:0c: clone.lock
$HOME/mail/rawmbox

Then you can run procmail manually with a copy of your .procmailrc and
figure out pretty decently what is actually going on.
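Putting those pieces together, a hedged sketch of such a manual run (the rcfile and message names are made up; procmail treats a non-assignment argument as its rcfile):

```shell
# Re-run one saved raw message through a test copy of the recipes,
# with verbose logging to a scratch file.
procmail VERBOSE=yes LOGFILE=/tmp/procmail.log ./procmailrc.test < /tmp/one.msg
less /tmp/procmail.log
```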

{^_^}


Re: is the drop to fsck visual fixed in 6.3?

2012-07-19 Thread jdow

On 2012/07/19 16:33, Konstantin Olchanski wrote:

On Fri, Jul 20, 2012 at 01:17:47AM +0200, David Sommerseth wrote:


... even an automatic fsck shouldn't cause much extra delay next time you boot.




With full respect, "extra delay" being a subjective term, etc,
but do you have any idea how long it takes to fsck a 20TB filesystem 99% full
with a mixture of small and big files? (Hint: it takes more than 30 seconds).

But I guess it is the modern view of things: "if it is quick on my laptop
it would not cause much extra delay for anybody else". No need to put numbers
on it or think about scaling (at least fsck is mostly O(n) in disk size).


What happens if you periodically run fsck on the filesystems with the -n
option so no repairs are attempted? If that is done suitably niced into
the background an email or other message could be sent to "root" and inform
the system administrator that there is a problem that needs fixing.

{^_^}
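One hedged way to wire that up as a weekly job (device name and schedule are placeholders; fsck returns non-zero when it finds problems):

```shell
# /etc/cron.d/fsck-check -- read-only weekly check, niced into the background.
# m h dom mon dow  user  command
0 3 * * 0  root  nice -n 19 ionice -c3 fsck -n /dev/sdb1 > /tmp/fsck-sdb1.log 2>&1 || mail -s "fsck -n found problems on /dev/sdb1" root < /tmp/fsck-sdb1.log
```

Note that fsck -n on a mounted, busy filesystem can report spurious errors, so this is best pointed at an idle or unmounted volume.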


Re: For the RPMForge guys

2012-06-27 Thread jdow

On 2012/06/27 13:58, S.Tindall wrote:

On Wed, 2012-06-27 at 13:12 -0700, jdow wrote:

On 2012/06/27 12:43, S.Tindall wrote:

On Wed, 2012-06-27 at 12:31 -0700, jdow wrote:

Latest clamav update main.cvd is an empty file. It apparently should not be
empty. For two days now I've gotten this message:

ERROR: Corrupted database file /var/clamav/main.cld: Broken or not a CVD file

{^_^}


Run freshclam and then restart clamd.

Steve



rerunning freshclam gives:
ClamAV update process started at Wed Jun 27 13:11:13 2012
main.cvd is up to date (version: 54, sigs: 1044387, f-level: 60, builder: sven)
daily.cld is up to date (version: 15092, sigs: 222617, f-level: 63, builder:
ccordes)
bytecode.cld is up to date (version: 185, sigs: 39, f-level: 63, builder: neo)
WARNING: [LibClamAV] cli_cvdverify: Can't read CVD header
ERROR: Corrupted database file /var/clamav/main.cld: Broken or not a CVD file
Corrupted database file renamed to /var/clamav/main.cld.broken
Trying again in 5 secs...




ClamAV update process started at Wed Jun 27 13:11:19 2012
main.cvd is up to date (version: 54, sigs: 1044387, f-level: 60, builder: sven)
daily.cld is up to date (version: 15092, sigs: 222617, f-level: 63, builder:
ccordes)
bytecode.cld is up to date (version: 185, sigs: 39, f-level: 63, builder: neo)


It's broken, Jim! (Sorry Star Trek)

{^_^}


You "fixed" it with freshclam. As per the final section, main.cvd,
daily.cld and bytecode.cld are now up to date.

If /var/clamav/*broken bothers you, then delete it/them.

# rm /var/clamav/*broken

# ls /var/clamav/
bytecode.cld  daily.cld  main.cvd  mirrors.dat


At least on my EL6 systems, those satisfy clamd.

# service clamd restart
Stopping Clam AntiVirus Daemon:[  OK  ]
Starting Clam AntiVirus Daemon:[  OK  ]


Steve



Then main.cld is a surplus file now? I didn't know that!

{^_^}


Re: For the RPMForge guys

2012-06-27 Thread jdow

On 2012/06/27 12:43, S.Tindall wrote:

On Wed, 2012-06-27 at 12:31 -0700, jdow wrote:

Latest clamav update main.cvd is an empty file. It apparently should not be
empty. For two days now I've gotten this message:

ERROR: Corrupted database file /var/clamav/main.cld: Broken or not a CVD file

{^_^}


Run freshclam and then restart clamd.

Steve



Addendum - if I rename the broken main.cld.broken back to main.cld and try to run the
daemon it fails to run:
Starting Clam AntiVirus Daemon: LibClamAV Error: cli_cvdverify: Can't read CVD 
header

LibClamAV Error: Can't load /var/clamav/main.cld: Broken or not a CVD file
ERROR: Broken or not a CVD file
   [FAILED]
rerunning freshclam gives:
ClamAV update process started at Wed Jun 27 13:11:13 2012
main.cvd is up to date (version: 54, sigs: 1044387, f-level: 60, builder: sven)
daily.cld is up to date (version: 15092, sigs: 222617, f-level: 63, builder: 
ccordes)

bytecode.cld is up to date (version: 185, sigs: 39, f-level: 63, builder: neo)
WARNING: [LibClamAV] cli_cvdverify: Can't read CVD header
ERROR: Corrupted database file /var/clamav/main.cld: Broken or not a CVD file
Corrupted database file renamed to /var/clamav/main.cld.broken
Trying again in 5 secs...
ClamAV update process started at Wed Jun 27 13:11:19 2012
main.cvd is up to date (version: 54, sigs: 1044387, f-level: 60, builder: sven)
daily.cld is up to date (version: 15092, sigs: 222617, f-level: 63, builder: 
ccordes)

bytecode.cld is up to date (version: 185, sigs: 39, f-level: 63, builder: neo)


It's broken, Jim! (Sorry Star Trek)

{^_^}


Re: For the RPMForge guys

2012-06-27 Thread jdow

On 2012/06/27 12:43, S.Tindall wrote:

On Wed, 2012-06-27 at 12:31 -0700, jdow wrote:

Latest clamav update main.cvd is an empty file. It apparently should not be
empty. For two days now I've gotten this message:

ERROR: Corrupted database file /var/clamav/main.cld: Broken or not a CVD file

{^_^}


Run freshclam and then restart clamd.

Steve



Don't run clamd. I use it as a filter in spamassassin. I'm trying restarting
spamassassin. I'm also trying running up clamd. I still have the broken
zero size file: /var/clamav/main.cld.broken and no /var/clamav/main.cld.

This is a very new happening.
{^_^}


For the RPMForge guys

2012-06-27 Thread jdow

Latest clamav update main.cvd is an empty file. It apparently should not be
empty. For two days now I've gotten this message:

ERROR: Corrupted database file /var/clamav/main.cld: Broken or not a CVD file

{^_^}


Re: yum vs sl-source repo

2012-06-15 Thread jdow

On 2012/06/15 17:20, Tom H wrote:

On Fri, Jun 15, 2012 at 7:52 PM, jdow  wrote:

On 2012/06/15 16:36, Tom H wrote:


On Fri, Jun 15, 2012 at 6:35 PM, jdow  wrote:


On 2012/06/15 05:27, Tom H wrote:


On Fri, Jun 15, 2012 at 7:47 AM, jdow  wrote:



Ah - nope. "all" does not work, either.

On 2012/06/15 04:24, Adam Bishop wrote:




Does it work if you take off the .src.rpm?

On 15 Jun 2012, at 04:26, jdow wrote:


yum --enablerepo=sl-source list available libusb1-1.0.3-1.el6.src.rpm



yumdownloader --source ...



===8<--- nope
yumdownloader --source --enablerepo=sl-source
libusb-0.1.12-23.el6.src.rpm
Loaded plugins: fastestmirror, refresh-packagekit, versionlock
Loading mirror speeds from cached hostfile
  * elrepo: elrepo.org
  * epel: fedora-epel.fastsoft.net
  * rpmforge: mirror.cpsc.ucalgary.ca
  * sl: ftp1.scientificlinux.org
  * sl-security: ftp1.scientificlinux.org
  * sl-source: ftp1.scientificlinux.org
Checking for new repos for mirrors
Enabling epel-source repository
No Match for argument libusb-0.1.12-23.el6.src.rpm
Nothing to download
===8<--- repo entry:
[sl-source]
name=Scientific Linux $releasever - Source

baseurl=http://ftp.scientificlinux.org/linux/scientific/$releasever/SRPMS/

  http://ftp1.scientificlinux.org/linux/scientific/$releasever/SRPMS/

  http://ftp2.scientificlinux.org/linux/scientific/$releasever/SRPMS/

  ftp://ftp.scientificlinux.org/linux/scientific/$releasever/SRPMS/
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl
file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl6
file:///etc/pki/rpm-gpg/RPM-GPG-KEY-cern
===8<--

I'm puzzled. ftp works so I am not desperate. I am wondering what is
broken.



yumdownloader --source libusb


Ah - that did get it - the wrong one. yumdownloader --source libusb1 got
the correct one.

Now - list what sources are available? Or do I fly blind based on what
non-source rpms exist. (I like to be specific about what I want.)

And why doesn't yum itself work?


You tried to get the source for both libusb and libusb1 above, but it
was just an example anyway.

You can use "yum list ..." and then download the src.rpm(s) with yumdownloader.

There's an extension (not the right word, but I can't remember
the correct one; and it might be a Fedora-only one) of yum that allows
the download of a src.rpm but only if the corresponding rpm isn't
installed.
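As a hedged sketch of that workflow (repoquery ships with yum-utils; the package names are just the ones from the thread):

```shell
# List what the source repo actually carries; note no .src.rpm suffix
# on the arguments -- yum matches package names, not file names.
repoquery --repoid=sl-source --archlist=src 'libusb*'
yum --disablerepo='*' --enablerepo=sl-source list available 'libusb*'

# Then fetch the one you want.
yumdownloader --source libusb1
```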


That's stupid. If yum downloads the "executable" then it should be able
to go find the source and download it, too, regardless of whether the
"executable" is installed. I wonder what the yahoos were thinking of
when they committed that blunder.

{^_^}


Re: yum vs sl-source repo

2012-06-15 Thread jdow

On 2012/06/15 16:50, Akemi Yagi wrote:

On Fri, Jun 15, 2012 at 4:36 PM, Tom H  wrote:


yumdownloader --source libusb


Try the following (without --source):

yumdownloader libusb

works4me. :)


I'm talking about getting sources. I am trying to use libusb1 to communicate
with a TV tuner dongle (to use with SDR apps). And the source will help me
diagnose the problems. It looks like I may have to work a cheat to get the
code to work with the current device I'm playing with. And it looks like
libusb is a little stingy with error reports.

{^_^}


Re: yum vs sl-source repo

2012-06-15 Thread jdow

On 2012/06/15 16:36, Tom H wrote:

On Fri, Jun 15, 2012 at 6:35 PM, jdow  wrote:

On 2012/06/15 05:27, Tom H wrote:

On Fri, Jun 15, 2012 at 7:47 AM, jdow  wrote:


Ah - nope. "all" does not work, either.

On 2012/06/15 04:24, Adam Bishop wrote:



Does it work if you take off the .src.rpm?

On 15 Jun 2012, at 04:26, jdow wrote:


yum --enablerepo=sl-source list available libusb1-1.0.3-1.el6.src.rpm


yumdownloader --source ...



===8<--- nope
yumdownloader --source --enablerepo=sl-source libusb-0.1.12-23.el6.src.rpm
Loaded plugins: fastestmirror, refresh-packagekit, versionlock
Loading mirror speeds from cached hostfile
  * elrepo: elrepo.org
  * epel: fedora-epel.fastsoft.net
  * rpmforge: mirror.cpsc.ucalgary.ca
  * sl: ftp1.scientificlinux.org
  * sl-security: ftp1.scientificlinux.org
  * sl-source: ftp1.scientificlinux.org
Checking for new repos for mirrors
Enabling epel-source repository
No Match for argument libusb-0.1.12-23.el6.src.rpm
Nothing to download
===8<--- repo entry:
[sl-source]
name=Scientific Linux $releasever - Source
baseurl=http://ftp.scientificlinux.org/linux/scientific/$releasever/SRPMS/

  http://ftp1.scientificlinux.org/linux/scientific/$releasever/SRPMS/

  http://ftp2.scientificlinux.org/linux/scientific/$releasever/SRPMS/

  ftp://ftp.scientificlinux.org/linux/scientific/$releasever/SRPMS/
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl
file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl6
file:///etc/pki/rpm-gpg/RPM-GPG-KEY-cern
===8<--

I'm puzzled. ftp works so I am not desperate. I am wondering what is
broken.


yumdownloader --source libusb



Ah - that did get it - the wrong one. yumdownloader --source libusb1 got
the correct one.

Now - list what sources are available? Or do I fly blind based on what
non-source rpms exist. (I like to be specific about what I want.)

And why doesn't yum itself work?
{^_^}


Re: yum vs sl-source repo

2012-06-15 Thread jdow

On 2012/06/15 05:27, Tom H wrote:

On Fri, Jun 15, 2012 at 7:47 AM, jdow  wrote:

Ah - nope. "all" does not work, either.

On 2012/06/15 04:24, Adam Bishop wrote:


Does it work if you take off the .src.rpm?

On 15 Jun 2012, at 04:26, jdow wrote:


yum --enablerepo=sl-source list available libusb1-1.0.3-1.el6.src.rpm


yumdownloader --source ...



===8<--- nope
yumdownloader --source --enablerepo=sl-source libusb-0.1.12-23.el6.src.rpm
Loaded plugins: fastestmirror, refresh-packagekit, versionlock
Loading mirror speeds from cached hostfile
 * elrepo: elrepo.org
 * epel: fedora-epel.fastsoft.net
 * rpmforge: mirror.cpsc.ucalgary.ca
 * sl: ftp1.scientificlinux.org
 * sl-security: ftp1.scientificlinux.org
 * sl-source: ftp1.scientificlinux.org
Checking for new repos for mirrors
Enabling epel-source repository
No Match for argument libusb-0.1.12-23.el6.src.rpm
Nothing to download
===8<--- repo entry:
[sl-source]
name=Scientific Linux $releasever - Source
baseurl=http://ftp.scientificlinux.org/linux/scientific/$releasever/SRPMS/

http://ftp1.scientificlinux.org/linux/scientific/$releasever/SRPMS/

http://ftp2.scientificlinux.org/linux/scientific/$releasever/SRPMS/

ftp://ftp.scientificlinux.org/linux/scientific/$releasever/SRPMS/
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl 
file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl6 file:///etc/pki/rpm-gpg/RPM-GPG-KEY-cern

===8<--

I'm puzzled. ftp works so I am not desperate. I am wondering what is
broken.

{^_^}


Re: yum vs sl-source repo

2012-06-15 Thread jdow

Ah - nope. "all" does not work, either.

{^_^}

On 2012/06/15 04:24, Adam Bishop wrote:

Does it work if you take off the .src.rpm?

Adam Bishop

On 15 Jun 2012, at 04:26, jdow wrote:


yum --enablerepo=sl-source list available libusb1-1.0.3-1.el6.src.rpm



Janet is a trading name of The JNT Association, a company limited
by guarantee which is registered in England under No. 2881024
and whose Registered Office is at Lumen House, Library Avenue,
Harwell Oxford, Didcot, Oxfordshire. OX11 0SG



yum vs sl-source repo

2012-06-14 Thread jdow

This line returns nothing:
yum --enablerepo=sl-source list available *src.rpm
Loaded plugins: aliases, changelog, downloadonly, fastestmirror, refresh-
  : packagekit, security, tmprepo, verify, versionlock
Loading mirror speeds from cached hostfile
 * elrepo: elrepo.org
 * epel: mirrors.solfo.com
 * rpmforge: mirror.cpsc.ucalgary.ca
 * sl: ftp1.scientificlinux.org
 * sl-security: ftp1.scientificlinux.org
 * sl-source: ftp1.scientificlinux.org
Error: No matching Packages to list

Now, I know that's not really empty. I just used ftp to download the
libusb1 source RPM. Even getting specific does not work.

yum --enablerepo=sl-source list available libusb1-1.0.3-1.el6.src.rpm
Loaded plugins: aliases, changelog, downloadonly, fastestmirror, refresh-
  : packagekit, security, tmprepo, verify, versionlock
Loading mirror speeds from cached hostfile
 * elrepo: elrepo.org
 * epel: mirrors.solfo.com
 * rpmforge: mirror.cpsc.ucalgary.ca
 * sl: ftp1.scientificlinux.org
 * sl-security: ftp1.scientificlinux.org
 * sl-source: ftp1.scientificlinux.org
Error: No matching Packages to list


Am I suffering a senior moment and missing something important?

{o.o}


Re: Please update vsftpd package

2012-06-08 Thread jdow

On 2012/06/08 07:46, Dennis Schridde wrote:

Hello everyone!

On Friday, 8 June 2012 at 08:44:35, you wrote:

...

Or are they using FTPS?

So far I found no client that reliably supports FTPS. Especially nothing that
comes with the OS "by default" (I tried Chrome, Firefox, KDE/Dolphin). Can you
suggest one?


yum info sftp

{^_^|


Re: Please update vsftpd package

2012-06-08 Thread jdow

On 2012/06/08 05:44, Nico Kadel-Garcia wrote:


And in this day and age with password sniffing going on over
local networks by zombied machines and happening as a matter of government
policy worldwide in data centers, and the historic firewall wackiness with FTP's
2 channel communications, *WHY* is your client using FTP for anything that is
password based? You can cross-hook it to normal logins, true, but this is a
really bad idea for basic security reasons and should be avoided wherever 
feasible.
Or are they using FTPS?


rpm -qi vsftpd or yum info vsftpd should answer your question, Nico, I
suspect.

{o.o}


Re: Which kernel SRPMS should I get for SL 6.2

2012-05-30 Thread jdow

On 2012/05/30 10:51, Akemi Yagi wrote:

On Wed, May 30, 2012 at 9:36 AM, EXT-Askew, R W  wrote:

Akemi

Thanks for the link; those are great HowTos. I double-checked and have the
packages installed.
I think the issue is that the kernel source rpm I downloaded 
(kernel-2.6.32.220.17.1.el6.src.rpm) from the SL 6.2 SRPM is missing some 
files.  Here is a snippet of the errors I get when running make mrproper and 
make.


May I suggest that you follow the instructions on the wiki [to the
letter] please? There is no execution of 'make' from the command line.
You will be running the rpmbuild command. It should work [TM]. :)


At least use "rpmbuild -bp" before you try to make anything. And if you
are rebuilding the kernel there is probably also a reason to run one
of the configuration steps first. Even if you wish to change nothing
it's a good thing to review it. You might change your mind about
doing nothing.

{o.o}
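For reference, the wiki procedure Akemi points to boils down to a few commands. This is only a sketch, assuming the standard per-user ~/rpmbuild layout on EL6; the SRPM name is taken from the thread:

```shell
# Install the source RPM into ~/rpmbuild (as a normal user, not root):
rpm -ivh kernel-2.6.32-220.17.1.el6.src.rpm
cd ~/rpmbuild/SPECS

# -bp only unpacks the sources and applies all patches; nothing is compiled:
rpmbuild -bp --target=$(uname -m) kernel.spec

# The prepped, fully patched tree then lives under ~/rpmbuild/BUILD/kernel-*/
```

Running "make" directly against the bare SRPM contents skips the patching step, which would explain the missing-file errors described above.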


fcoe

2012-03-14 Thread jdow

Lately my log has been cluttered with these messages:

Mar 14 12:43:53 me2 fcoemon: error 111 Connection refused
Mar 14 12:43:53 me2 fcoemon: Failed to connect to lldpad


WTF is it, why, and how do I get rid of the messages? Starting or
stopping the fcoe service does not stop the infernal messages.

{^_^}


Re: Wine RPM's

2012-02-26 Thread jdow

On 2012/02/26 11:54, Todd And Margo Chester wrote:

On 02/26/2012 04:41 AM, jdow wrote:

On 2012/02/25 23:08, Todd And Margo Chester wrote:

On 2012/02/25 19:59, Todd And Margo Chester wrote:

Hi All,

Anyone know of a source of up to date RPMs for Wine?

Many thanks,
-T



On 02/25/2012 09:36 PM, jdow wrote:
> It depends on just HOW up to date you mean:
> Available Packages
> wine.i686 1.2.3-1.el6 epel
> wine.x86_64 1.2.3-1.el6 epel
...
>
> {^_^}
>


I need at least 1.4rc5. See
http://bugs.winehq.org/show_bug.cgi?id=18231


Tag, you may be "it" for building it, creating the RPMs, and testing it.
It appears to be off TUV's radar screen. The flip comment would be "thanks
for volunteering"; but, I figure that's a bit much for this venue.

Seriously, I suspect that is what you are going to have to do. Check and
see if Fedora has the later material so you can adapt it to SL6. You may
need to check to see if that recent a wine version is compatible with the
kernel level SL6 uses, too.

{O.O}



Just heard back from Andreas Bierfert, the maintainer of
Wine for FC and RHEL:

http://fedoraproject.org/wiki/AndreasBierfert/Wine

He is planning on supporting 1.4 or 1.4.1 when
1.4 gets out of the RC phase. So, patience is now
the issue.

-T


Todd, doesn't that mean you can look for those versions of Wine in SL7, maybe?

{o.o}


Re: Wine RPM's

2012-02-26 Thread jdow

On 2012/02/26 09:25, Nico Kadel-Garcia wrote:

2012/2/26 Łukasz Posadowski <lukasz.posadow...@gmail.com>:

Sat, 25 Feb 2012 23:08:20 -0800
Todd And Margo Chester <toddandma...@gmail.com>:

 > I need at least 1.4rc5.  See
 > http://bugs.winehq.org/show_bug.cgi?id=18231

Fedora 17 has 1.4rc5 SRPMs in repo, but they have tons of
dependencies to build.

ftp://ftp.icm.edu.pl/pub/linux/fedora/linux/development/rawhide/source/SRPMS/w/

Just checking spec in a headache.

Have you used "mock"? I find it very handy to set up a local Scientific Linux or
CentOS repository, point mock's yum setup to that, and use it to build RPM's
inside a well organized and easily replaced chroot cage. It's a very, very
useful tool I use for Repoforge tools all the time.


Really, all they need to do is put up F17 in a virtual machine and have done
with it.

{^_-}


Re: Wine RPM's

2012-02-26 Thread jdow

On 2012/02/25 23:08, Todd And Margo Chester wrote:

On 2012/02/25 19:59, Todd And Margo Chester wrote:

Hi All,

Anyone know of a source of up to date RPMs for Wine?

Many thanks,
-T



On 02/25/2012 09:36 PM, jdow wrote:
 > It depends on just HOW up to date you mean:
 > Available Packages
 > wine.i686 1.2.3-1.el6 epel
 > wine.x86_64 1.2.3-1.el6 epel
...
 >
 > {^_^}
 >


I need at least 1.4rc5. See
http://bugs.winehq.org/show_bug.cgi?id=18231


Tag, you may be "it" for building it, creating the RPMs, and testing it.
It appears to be off TUV's radar screen. The flip comment would be "thanks
for volunteering"; but, I figure that's a bit much for this venue.

Seriously, I suspect that is what you are going to have to do. Check and
see if Fedora has the later material so you can adapt it to SL6. You may
need to check to see if that recent a wine version is compatible with the
kernel level SL6 uses, too.

{O.O}


Re: Wine RPM's

2012-02-25 Thread jdow

It depends on just HOW up to date you mean:
Available Packages
wine.i686   1.2.3-1.el6 epel
wine.x86_64 1.2.3-1.el6 epel
wine-alsa.i686  1.2.3-1.el6 epel
wine-alsa.x86_641.2.3-1.el6 epel
wine-capi.i686  1.2.3-1.el6 epel
wine-capi.x86_641.2.3-1.el6 epel
wine-cms.i686   1.2.3-1.el6 epel
wine-cms.x86_64 1.2.3-1.el6 epel
wine-common.noarch  1.2.3-1.el6 epel
wine-core.i686  1.2.3-1.el6 epel
wine-core.x86_641.2.3-1.el6 epel
wine-courier-fonts.noarch   1.2.3-1.el6 epel
wine-debuginfo.x86_64   1.2.3-1.el6 epel-debuginfo
wine-desktop.noarch 1.2.3-1.el6 epel
wine-devel.i686 1.2.3-1.el6 epel
wine-devel.x86_64   1.2.3-1.el6 epel
wine-docs.noarch1.2-1.el6   epel
wine-esd.i686   1.2.3-1.el6 epel
wine-esd.x86_64 1.2.3-1.el6 epel
wine-fonts.noarch   1.2.3-1.el6 epel
wine-gecko.x86_64   1.4-1.nodist.rftrpmforge-testing
wine-jack.i686  1.2.3-1.el6 epel
wine-jack.x86_641.2.3-1.el6 epel
wine-ldap.i686  1.2.3-1.el6 epel
wine-ldap.x86_641.2.3-1.el6 epel
wine-marlett-fonts.noarch   1.2.3-1.el6 epel
wine-nas.i686   1.2.3-1.el6 epel
wine-nas.x86_64 1.2.3-1.el6 epel
wine-openal.i6861.2.3-1.el6 epel
wine-openal.x86_64  1.2.3-1.el6 epel
wine-oss.i686   1.2.3-1.el6 epel
wine-oss.x86_64 1.2.3-1.el6 epel
wine-pulseaudio.i6861.2.3-1.el6 epel
wine-pulseaudio.x86_64  1.2.3-1.el6 epel
wine-small-fonts.noarch 1.2.3-1.el6 epel
wine-symbol-fonts.noarch1.2.3-1.el6 epel
wine-system-fonts.noarch1.2.3-1.el6 epel
wine-twain.i686 1.2.3-1.el6 epel
wine-twain.x86_64   1.2.3-1.el6 epel
wine-wow.i686   1.2.3-1.el6 epel
wine-wow.x86_64 1.2.3-1.el6 epel


{^_^}

On 2012/02/25 19:59, Todd And Margo Chester wrote:

Hi All,

Anyone know of a source of up to date RPMs for Wine?

Many thanks,
-T



Re: coreutils for 64 bit

2012-02-06 Thread jdow

On 2012/02/06 13:37, Chris Schanzle wrote:

On 02/06/2012 04:02 PM, jdow wrote:

(On a heavily loaded system, just when are you going to find 12 gigabytes
of fully contiguous storage?)


Probably lots of places on the below 1.0 TB Dell R910 box: :-) [no, not heavily
loaded at the moment, so your point is still valid, but don't forget times are
a-changin'!]

# numactl --hardware
available: 4 nodes (0-3)
node 0 size: 258511 MB
node 0 free: 255268 MB
node 1 size: 258560 MB
node 1 free: 258467 MB
node 2 size: 258560 MB
node 2 free: 258356 MB
node 3 size: 256540 MB
node 3 free: 256450 MB
node distances:
node 0 1 2 3
0: 10 20 20 20
1: 20 10 20 20
2: 20 20 10 20
3: 20 20 20 10


I don't forget. My partner works on machines that dwarf that Dell. (He writes
emulator code for UniSys.) {^_-}

At the time I worked on HardFrame I had a 16 meg machine. So 100k was a small
chunk of memory and usually easy to find, until the machine had been really
active for a week or so.

Fragmentation Happens. And it's simply another major factor in slowdowns.
Each high level transaction involves that transaction time, disk rotation
time, command queuing time, low level transaction time, low level command
queuing, rotational latency, data transfer time, and possible copying time.
(I worked in a zero copy environment, which can be noticeably faster.)

In a nice world the low level latency is a one time per transaction thing.
It needs to be considered because sometimes machines really are heavily
loaded and your values above aren't anywhere to be found.

Once the transaction overhead equals the actual data transfer time you're
not going to go faster than twice your speed no how no way. So REALLY large
buffers are little or no improvement and in case of memory fragmentation
can start costing you time.

Note that some systems have "fairness" built in. That means really large
transfers are automatically broken into smaller chunks to avoid blocking
the machine for extended DMA transfers. I've seen this in driver software,
disk firmware, and even bus level DMA firmware. (That latter was an Amiga
feature in its last days.)

Really, I don't see much need for actual low level transfer sizes to exceed
2gig even with SSD type devices - YET. We're an order of magnitude or more
away from the conditions that will change that overly broad statement.

{^_^}
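The "no more than twice your speed" ceiling falls out of a toy model: total time is per-transaction overhead times the number of blocks, plus the streaming transfer time. The constants below (5 ms per transaction, 100 MB/s bandwidth, a 1 GB copy) are illustrative assumptions, not measurements from this thread:

```shell
awk 'BEGIN {
  total = 1e9; bw = 100e6; oh = 0.005          # bytes, bytes/s, seconds/transaction
  for (bs = 65536; bs <= 64 * 2^20; bs *= 4) {
    t = (total / bs) * oh + total / bw         # overhead term + streaming term
    printf "bs=%-9d throughput=%5.1f MB/s\n", bs, total / t / 1e6
  }
}'
```

Throughput climbs quickly and then saturates near the raw bandwidth. At the block size where the per-block overhead equals the per-block transfer time, throughput is exactly half the bandwidth, and growing the buffer past that point buys less and less, which is the "sweet spot" effect described above.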


Re: coreutils for 64 bit

2012-02-06 Thread jdow

You are seeing the memory fragmentation effect I mentioned. A 2g allocation
may be possible. But it's going to be a largish number of individual and
smaller allocations within physical memory. Drivers transfer into physical
memory. So really large blocks are a problem. They get broken into many
smaller transactions, one per physical block within the 2g virtual block.

At 2 meg for the buffer size the transfer time is falling into the time it
takes for the individual transaction, presuming you manage to get a full
2 meg contiguous buffer. 8 meg is probably close to your system's sweet
spot.

{^_^}   (Back in the 80s into the 90s I maintained and enhanced the drivers
for the Microbotics Amiga disk controller line, from the parallel
port StarDrive through the DMA capable HardFrame. So I spent a fair
amount of time analyzing this issue. We were proud of our speeds.)

On 2012/02/06 09:47, Stephen J. Gowdy wrote:

Hi Chris,
I understand using larger than 32kB block sizes can help the throughput but I'd
doubt you'd get an advantage with a 2GB block size over an 8MB block size for most
devices. It may also be due to my laptop only having 4GB of RAM but it is much
better to use 8MB rather than 2GB for my SSD drive;

[root@antonia ~]# time dd if=/dev/sda of=/scratch/gowdy/test bs=8MB count=256
256+0 records in
256+0 records out
2048000000 bytes (2.0 GB) copied, 36.1101 s, 56.7 MB/s

real 0m36.125s
user 0m0.002s
sys 0m2.420s
root@antonia ~]# time dd if=/dev/sda of=/scratch/gowdy/test bs=2GB count=1
1+0 records in
1+0 records out
2000000000 bytes (2.0 GB) copied, 56.1444 s, 35.6 MB/s

real 0m56.738s
user 0m0.001s
sys 0m14.715s

(oops, and I should have said 8M and 2G bs I guess). 2MB buffer isn't much 
slower;

[root@antonia ~]# time dd if=/dev/sda of=/scratch/gowdy/test bs=2MB count=1024
1024+0 records in
1024+0 records out
2048000000 bytes (2.0 GB) copied, 38.4204 s, 53.3 MB/s

real 0m38.781s
user 0m0.004s
sys 0m2.410s

regards,

Stephen.


On Mon, 6 Feb 2012, Chris Schanzle wrote:


It's a shame the original question didn't explain what and why he was trying
to do something with these large blocks.

Huge block sizes are useful if you have lots of ram and are copying very large
files on the same set of spindles. This minimizes disk seeking caused by head
repositioning for reads and writes and is vastly more efficient than say, "cp"
which often uses at most 32 KB reads/writes and relies on the VM system to
flush the writes (buffered by dirtying memory pages) pages as it deems
appropriate (tunables in /proc/sys/vm/dirty*).

Anyway, let's look at what system calls 'dd' does:

$ strace dd if=/dev/zero of=/dev/shm/deleteme bs=12G count=1
...
open("/dev/shm/deleteme", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3
dup2(3, 1) = 1
close(3) = 0
mmap(NULL, 12884914176, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1,
0) = 0x2af98c7a
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
12884901888) = 2147479552
write(1,
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
2147479552) = 2147479552
close(0) = 0
close(1) = 0
...

(count=2 is also interesting)

Things to notice:

1. strace shows dd is issuing a 12GB read from the input descriptor
(/dev/zero) but is getting a 'short read' from the kernel of 2GB. Short reads
are not an error.

2. The "count=" option in the dd man page specifies that it limits the number
of INPUT blocks. So it writes what it read (2GB) and quits.

So it seems to be working as designed, though perhaps not as you want.

Adding 'iflag=fullblock' will cause dd to perform multiple reads to fill the
input block size.

mmap(NULL, 12884914176, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1,
0) = 0x2b2d8735e000
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
12884901888) = 2147479552
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
10737422336) = 2147479552
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
8589942784) = 2147479552
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
6442463232) = 2147479552
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
4294983680) = 2147479552
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
2147504128) = 2147479552
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
24576) = 24576
write(1, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
12884901888) = 2147479552
write(1, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
10737422336) = 2147479552
write(1,
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
8589942784) = 2147479552
write(1, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
6442463232) = 2147479552
write(1,
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
4294983680) = 2147479552

Notice how the writes empty the input 2GB at a time.

Of course, all this reading/writing goes through typical VM buffering, so you
might want to consider direct i/o: iflag=direct and oflag=direct.

Which begs t
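The short-read behaviour in the strace above can be reproduced without a 12 GB buffer: reads from a pipe come back at most one pipe buffer at a time, so a 1 MB block is "short" in exactly the same way. A sketch (GNU dd assumed; the partial byte count depends on the kernel's pipe buffer size):

```shell
# Without iflag=fullblock, count=1 stops after the first (short) read:
yes | head -c 1048576 | dd bs=1M count=1 2>/dev/null | wc -c

# With iflag=fullblock, dd retries until the full 1M block is gathered:
yes | head -c 1048576 | dd bs=1M count=1 iflag=fullblock 2>/dev/null | wc -c
```

The second command copies all 1048576 bytes; the first typically stops at a single pipe buffer (64 KiB on stock Linux), mirroring the 12G-read-returns-2G behaviour shown in the strace.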

Re: coreutils for 64 bit

2012-02-06 Thread jdow

Let's think about the read and write advantages with very large block sizes.

With small (default 512 byte) reads you get extreme overhead with modern
disks. With older disks you got one disk block per read transaction. "Way
back when" the disk read time was actually large compared to the transaction
time. So the real win for issuing reads in larger sizes came from rotational
latency wins. In the late 80s I discovered there were serious advantages up
to between 4 and 8 blocks per transaction with Adaptec SCSI to ST-506 boards
and straight SCSI disks. So I built a transaction read ahead buffer that
amounted to 32 blocks. This improved compile times to a most gratifying
degree. As time went by the compile type performance did not improve much
with larger buffers. But issuing larger reads made a difference up to about
65536 bytes. So I habitually worked copies with 131072 byte buffers. Later
on it became profitable with "dd" copies on 'ix to go up to about a megabyte
per block. At that size the transaction delays were on the order of the
read/write times.

Note that this is on systems with hundreds to thousands of megabytes of
RAM and are lightly loaded so there is a chance to find a contiguous
buffer allowing transfers at the actual SCSI/SATA level to be singular. If
you try buffers that are too large they are generally not contiguous and
get slowed down as they are broken into multiple transactions due to the
usual levels of memory fragmentation.

These days you might be able to go as high as 100 megabytes for buffers if
you want transaction delays to be small compared to the actual disk read/write
times. Past that you win too little with respect to speed. So I for one am
sitting here reading this thread with a bemused expression wondering where
the real win is with 2 gigabyte buffers let alone 12 gigabyte buffers. That
is probably why the OS might have a practical read buffer size of 2 gigabytes.

(On a heavily loaded system, just when are you going to find 12 gigabytes
of fully contiguous storage?)

{^_^}

On 2012/02/06 09:24, Chris Schanzle wrote:

It's a shame the original question didn't explain what and why he was trying to
do something with these large blocks.

Huge block sizes are useful if you have lots of ram and are copying very large
files on the same set of spindles. This minimizes disk seeking caused by head
repositioning for reads and writes and is vastly more efficient than say, "cp"
which often uses at most 32 KB reads/writes and relies on the VM system to flush
the writes (buffered by dirtying memory pages) pages as it deems appropriate
(tunables in /proc/sys/vm/dirty*).

Anyway, let's look at what system calls 'dd' does:

$ strace dd if=/dev/zero of=/dev/shm/deleteme bs=12G count=1
...
open("/dev/shm/deleteme", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3
dup2(3, 1) = 1
close(3) = 0
mmap(NULL, 12884914176, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0)
= 0x2af98c7a
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
12884901888) = 2147479552
write(1, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
2147479552) = 2147479552
close(0) = 0
close(1) = 0
...

(count=2 is also interesting)

Things to notice:

1. strace shows dd is issuing a 12GB read from the input descriptor (/dev/zero)
but is getting a 'short read' from the kernel of 2GB. Short reads are not an 
error.

2. The "count=" option in the dd man page specifies that it limits the number of
INPUT blocks. So it writes what it read (2GB) and quits.

So it seems to be working as designed, though perhaps not as you want.

Adding 'iflag=fullblock' will cause dd to perform multiple reads to fill the
input block size.

mmap(NULL, 12884914176, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0)
= 0x2b2d8735e000
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
12884901888) = 2147479552
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
10737422336) = 2147479552
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
8589942784) = 2147479552
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
6442463232) = 2147479552
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
4294983680) = 2147479552
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
2147504128) = 2147479552
read(0, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
24576) = 24576
write(1, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
12884901888) = 2147479552
write(1, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
10737422336) = 2147479552
write(1, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
8589942784) = 2147479552
write(1, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
6442463232) = 2147479552
write(1, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
4294983680) = 2147479552

Notice how the writes empty the input 2GB at a time.

Of course, all this reading/writing goes through typical VM buffering, so you
might want to cons

Re: serious bug in boot sequence when fsck is required

2012-02-01 Thread jdow

On 2012/02/01 15:38, Tom H wrote:

On Wed, Feb 1, 2012 at 6:05 PM, Nico Kadel-Garcia  wrote:

On Wed, Feb 1, 2012 at 5:36 PM, Yasha Karant  wrote:


Back to my primary point:  the bug in accepting the root password upon a
failed fsck during boot is from TUV and documented (please see a previous
post nominally in this thread).  Is there any fix?  I do not care if the fix
"breaks" TUV bug-for-bug compatibility -- is there a fix to which routine(s)
are causing the problem?


This is, in fact, an option to configure in grub, as it was in the older
LILO boot loader. Run the command "info grub-md5-crypt" for more
information.

This is not normally considered a "bug". The software is not doing
anything that is not expected or undocumented. It's a *risk*, and some
folks might think it's a security flaw. But the burden of storing and
managing a separate password for deployed systems is not, historically,
taken up by default. It would have to be written into the OS installer
to apply to the existing boot loader software. So it's not set by
default.


It's not a bug; it's a TUV decision. Requiring the root password for
single user mode can be set through "/etc/sysconfig/init".

As Nico's shown, you can also set a grub password to prevent anyone
from adding "init=/bin/sh"/"init=/bin/bash" to the "kernel" line
without that password.


It is a bug, IIRC. The original complaint is that it claims it is ready
to accept the root password and something prevents it by causing the
login prompt to recycle with each character typed. That has been declared
a TUV bug. I think somebody mentioned there might be a fix for it that has
not percolated through yet. It'd be worth checking TUV's bugzilla.

{^_^}


Re: serious bug in boot sequence when fsck is required

2012-02-01 Thread jdow

On 2012/02/01 09:28, Yasha Karant wrote:

On 02/01/2012 09:03 AM, Konstantin Olchanski wrote:

On Wed, Feb 01, 2012 at 08:47:28AM -0800, Yasha Karant wrote:

https://bugzilla.redhat.com/show_bug.cgi?id=636628

[snip]

Anyone with physical access to the machine can walk away with your disks,
or boot their own OS from a USB disk or from the network, and have root access
to all files without having to get root access. So you can safely assume
that for unfriendly purposes, having physical access is the same as knowing
the root password.



It is my understanding that if the BIOS on a standard IA-32 or X86-64 machine is
protected by a boot password, then there is no access to the boot procedure of
the BIOS and thus the media you suggest cannot be booted unless these are in
BIOS boot order preceding the physical internal hard drive.

Am I an in error?

Yasha Karant


Only two things provide security as far as I know. The first is a FULLY
encrypted file system. The other is not permitting other people physical
access to the machine. "Case opened" detection can tell you if you've been
compromised. It can't protect the disks. BIOS passwords are bypass-able in
some cases by simply shorting the coin cell for a couple of seconds. They can be
worked around by simply removing the disks. If there is time they can be
copied with dd and worked upon at leisure.

At the very least keep critical files fully protected with encryption. It
slows the machine down somewhat. But that is a worthwhile tradeoff methinks.

{^_^}


Re: coreutils for 64 bit

2012-02-01 Thread jdow

Just on a hunch, how much does it copy if you give it bs=1GB?

This might be an uncaught 32-bit int limit on the block size value.

{^_^}
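As a cross-check on that hunch (my gloss, not from the thread): 2147479552 is exactly INT_MAX rounded down to a 4 KiB page boundary, which matches the Linux kernel's cap on a single read()/write() call (MAX_RW_COUNT) rather than a bug in dd itself; dd's iflag=fullblock works around it by looping.

```shell
# INT_MAX (2147483647) masked down to a 4096-byte page boundary:
echo $(( 2147483647 & ~4095 ))   # prints 2147479552, the byte count dd reported
```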

On 2012/02/01 09:58, Andrey Y. Shevel wrote:

Hi Stephen,

thanks for the reply.

I am not sure that I do understand you (sorry for my stupidity).

I have
===
[root@pcfarm-10 ~]# yum list | grep coreutil
Failed to set locale, defaulting to C
coreutils.x86_64 5.97-34.el5 installed
policycoreutils.x86_64 1.33.12-14.8.el5 installed
policycoreutils-gui.x86_64 1.33.12-14.8.el5 installed
policycoreutils-newrole.x86_64 1.33.12-14.8.el5 installed
[root@pcfarm-10 ~]# rpm -q --file /bin/dd
coreutils-5.97-34.el5
=

Presumably all packages are appropriate (they have suffix x86_64) as shown by 
yum.

At the same time rpm does show packages without above suffixes

=
[root@pcfarm-10 ~]# rpm -qa | grep coreutil
policycoreutils-1.33.12-14.8.el5
policycoreutils-newrole-1.33.12-14.8.el5
coreutils-5.97-34.el5
policycoreutils-gui-1.33.12-14.8.el5
=




On Wed, 1 Feb 2012, Stephen J. Gowdy wrote:


Date: Wed, 1 Feb 2012 11:32:40 +0100 (CET)
From: Stephen J. Gowdy 
To: Andrey Y Shevel 
Cc: SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV
Subject: Re: coreutils for 64 bit

It says it only copied 2.1GB. You are running a 64bit OS. You reinstalled the
same coreutils package. You need to change the format of the package names
from "rpm -qa" if you want to see the architecture ("man rpm" should help you
figure out how).
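The format change alluded to can be done per-invocation; a sketch (the query-format tags are standard rpm, the grep just narrows the output):

```shell
# Show the architecture explicitly; EL5's default "rpm -qa" format omits it:
rpm -qa --queryformat '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' | grep coreutils
```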

On Wed, 1 Feb 2012, Andrey Y Shevel wrote:


Hi,

I just paid attention that utility 'dd' uses just 2 GB even I use greater
block size (BS). For example

=
[root@pcfarm-10 ~]# dd if=/dev/zero of=/mnt/sdb/TestFile-S1 bs=12GB
count=1
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB) copied, 15.8235 seconds, 136 MB/s


BTW,

[root@pcfarm-10 ~]# uname -a
Linux pcfarm-10.pnpi.spb.ru 2.6.18-274.17.1.el5xen #1 SMP Tue Jan 10
16:41:16 EST 2012 x86_64 x86_64 x86_64 GNU/Linux
[root@pcfarm-10 ~]# cat /etc/issue
Scientific Linux SL release 5.7 (Boron)
Kernel \r on an \m




I decided to reinstall coreutils:

[root@pcfarm-10 ~]# yum reinstall coreutils.x86_64
Failed to set locale, defaulting to C
Loaded plugins: kernel-module
Setting up Reinstall Process
Resolving Dependencies
--> Running transaction check
---> Package coreutils.x86_64 0:5.97-34.el5 set to be updated
--> Finished Dependency Resolution
Beginning Kernel Module Plugin
Finished Kernel Module Plugin

Dependencies Resolved

===

Package Arch Version
Repository
Size
===

Reinstalling:
coreutils x86_64 5.97-34.el5 sl-base
3.6 M

Transaction Summary
===

Remove 0 Package(s)
Reinstall 1 Package(s)
Downgrade 0 Package(s)

Total download size: 3.6 M
Is this ok [y/N]: y
Downloading Packages:
coreutils-5.97-34.el5.x86_64.rpm | 3.6
MB
00:05
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : coreutils
1/1

Installed:
coreutils.x86_64 0:5.97-34.el5


Complete!
=

However after that I see


[root@pcfarm-10 ~]# ls -l /bin/dd
-rwxr-xr-x 1 root root 41464 Jul 26 2011 /bin/dd
[root@pcfarm-10 ~]# rpm -q --file /bin/dd
coreutils-5.97-34.el5


[root@pcfarm-10 ~]# rpm -qa | grep coreutils
policycoreutils-1.33.12-14.8.el5
policycoreutils-newrole-1.33.12-14.8.el5
coreutils-5.97-34.el5
policycoreutils-gui-1.33.12-14.8.el5


i.e. no package with name coreutils.x86_64

I failed to find anything on the topic in scientific linux mailing list.

Does somebody know about dd for 64 bit ?

Many thanks in advance,

Andrey








Re: No route to host

2011-12-30 Thread jdow

On 2011/12/30 19:04, MT Julianto wrote:

On 31 December 2011 03:16, jdow <j...@earthlink.net>
wrote:

On 2011/12/30 18:05, MT Julianto wrote:

On 30 December 2011 14:22, jdow <j...@earthlink.net> wrote:

This allows me to typo the password. All I have to do is wait a
couple minutes
between tries

Is it the same as fail2ban with setting: maxretry=1 ?


I don't know. I learned of fail2ban from the BSD mailing list long after I'd
learned that iptables trick. I feel more comfortable with the iptables trick
since it is right there instantly rather than with any log reading delays.
It even prevents two attempts from the same address if the first one was
successful, which is not something I've ever wanted to do. It's one less
piece of software on the system. It means I had to learn iptables a bit.


If I were you, I would do the same :-)  It is always a great pleasure to use our
own tricks and to keep learning about them.


I learned the trick on one of the Red Hat lists about a decade ago.


I wish to have a chance someday to learn iptables...


There is no present like the time.

I first learned ipchains. I found the Trinity firewall project long ago and
built up some tweaks to their ipchains firewall. Then I had to learn iptables
to keep the goodies I'd built in, like a dedicated hole in the firewall in
case the usual login method failed. I also learned to redirect incoming
connection requests to another machine when I experimented with a little
video streaming on a Windows machine.

It's a little mind-bending at first. But taking working scripts and adapting
them is a good way to learn.

{^_^}


Re: No route to host

2011-12-30 Thread jdow

On 2011/12/30 18:11, MT Julianto wrote:

On 31 December 2011 03:01, jdow <j...@earthlink.net>
wrote:

On 2011/12/30 17:24, MT Julianto wrote:

On 30 December 2011 14:15, jdow <j...@earthlink.net> wrote:
What happens if you ping the host before trying ssh?

Sometimes it failed as well.


As in the ssh login failed after the ping failed or did the ping find the
host and you still had an ssh failure.


I mean the ping sometimes fails, but the ssh login afterward succeeds.


Bingo - if the numeric address of the destination machine changes fairly
frequently for one reason or another or if its DNS entry has a short enough
TTL the DNS lookup might fail the first try and succeed on the second after
the DNS server has finally received the information from the authoritative
sources it uses. It's something I remember seeing happen long ago. I've not
seen it recently. But then my address has been staying remarkably constant
for a long time now compared to the times I am on the road trying to get
into my machine. (I use dnsalias to find myself. Back in the mid 90s I
used a perverted "ping" command to trigger my remote machine to respond
to me with its address. I'd figured out the range of addresses it might be
on and made a broad enough ping request with a specific payload to the ping.
Yea verily I have sinned - but the statute of limitations has run out.)

{^_^}


Re: No route to host

2011-12-30 Thread jdow

On 2011/12/30 18:05, MT Julianto wrote:

On 30 December 2011 14:22, jdow <j...@earthlink.net> wrote:

On 2011/12/30 00:14, MT Julianto wrote:

Indeed, I found some traces of intruders trying to get root access via
ssh, but
none succeeded.  Now, I use fail2ban (available at atrpms) to handle
them.


I find zero to five tries a day. For some strange reason every try is from a
different address.


Exactly!  I have a web server which still gets thousands of sshd attacks per month,
although fail2ban is installed with bantime = 1 hour :-(

For the current machine, just before fail2ban was installed yesterday, I found
about 500 tries in half an hour from the same address.  sshd attacks dropped
drastically after fail2ban was installed.
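For anyone wanting to try the same thing, a jail fragment along these lines is the usual shape (section and option names as in fail2ban 0.8's jail.conf on EL6; the values here are illustrative, not the poster's actual settings):

```ini
# /etc/fail2ban/jail.local -- illustrative sketch, values are examples only
[ssh-iptables]
enabled  = true
filter   = sshd
action   = iptables[name=SSH, port=ssh, protocol=tcp]
logpath  = /var/log/secure
maxretry = 5
# 3600 seconds = the "bantime = 1 hour" mentioned above
bantime  = 3600
```

Settings placed in jail.local override the shipped jail.conf, so package upgrades don't clobber local changes.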

I have my own iptables script with lines like these in it:
$IPTABLES -A INPUT -p tcp --syn --dport 22 -m recent --name sshattack --set
$IPTABLES -A INPUT -p tcp --dport 22 --syn -m recent --name sshattack \
  --rcheck --seconds 60 --hitcount 2 -j LOG --log-prefix 'SSH REJECT: ' \
  --log-level info
$IPTABLES -A INPUT -p tcp --dport 22 --syn -m recent --name sshattack \
  --rcheck --seconds 60 --hitcount 2 -j REJECT --reject-with tcp-reset

The -m recent, --seconds 60, and --hitcount 2 phrases are the magic. Much of
that is so that I get the rejects logged, thanks to my sick curiosity.

Interesting!  However, I don't know much about iptables.

This allows me to typo the password. All I have to do is wait a couple 
minutes
between tries

Is it the same as fail2ban with setting: maxretry=1 ?


I don't know. I learned of fail2ban from the BSD mailing list long after I'd
learned that iptables trick. I feel more comfortable with the iptables trick
since it is right there instantly rather than with any log reading delays.
It even prevents two attempts from the same address if the first one was
successful, which is not something I've ever wanted to do. It's one less
piece of software on the system. It means I had to learn iptables a bit.

I learned the trick on one of the Red Hat lists about a decade ago.

{^_^}


Re: output of var/log/messages on the terminal

2011-12-30 Thread jdow

On 2011/12/10 12:54, Mark Stodola wrote:

On 12/10/2011 1:49 PM, Andrew Z wrote:

gys,
how can i send the /var/log/messages on a designated terminal?
i remember many years ago i saw one of the SAs having the messages file on
terminal 11 or 10 from the moment the machine booted. The gain - they just
switch to it instead of typing "tail ".

Hope it makes sense, and thank you in advance.
Andrew

I don't practice this myself, but I suspect an entry in /etc/inittab like this
would suffice:
tty11::respawn:/usr/bin/tail -f /var/log/messages

-Mark


In the dark ancient past I discovered these two options helped get through
log rotations:  --retry --max-unchanged-stats=5

{^_^}


Re: No route to host

2011-12-30 Thread jdow

On 2011/12/30 17:24, MT Julianto wrote:


On 30 December 2011 14:15, jdow <j...@earthlink.net> wrote:


All possibilities are negative: not power mode issue, not dhcp issue 
(see
below), not iptables issue (see below), not hacking issue
(/var/log/secure is
clear (no attack) at that fail time).


What happens if you ping the host before trying ssh?


Sometimes it failed as well.


As in the ssh login failed after the ping failed, or did the ping find the
host and you still had an ssh failure? The error I remember you receiving
(long trimmed out of this thread) had suggested to me that your machine
was not finding the other machine in DNS lookup (or maybe found an inaccurate
address) that cleared by the time of the next lookup. If the destination
machine's actual numeric address does not change I'm not sure how the DNS
lookup could be wrong unless it got stale with a timeout.


Can you check if the ssh daemon is running or if it's setup to use xinetd?
(Normally it runs as a daemon. I'm hypothesizing that it is running off an
inet daemon and taking too long to load or something.)


sshd runs as a daemon.  Even xinetd is not installed.


So much for that idea, eh?

{^_^}


Re: No route to host

2011-12-30 Thread jdow

On 2011/12/30 00:14, MT Julianto wrote:

On 27 December 2011 21:02, jdow <j...@earthlink.net> wrote:

If the server is not busy that might be an interesting way to keep
hackers out of the machine. It would also make my log files smaller.


Indeed, I found some traces of intruders trying to get root access via ssh, but
none succeeded.  Now, I use fail2ban (available at atrpms) to handle them.

-Tito.


I find zero to five tries a day. For some strange reason every try is from a
different address.

I have my own iptables script with lines like these in it:
$IPTABLES -A INPUT -p tcp --syn --dport 22 -m recent --name sshattack --set
$IPTABLES -A INPUT -p tcp --dport 22 --syn -m recent --name sshattack \
  --rcheck --seconds 60 --hitcount 2 -j LOG --log-prefix 'SSH REJECT: ' \
  --log-level info
$IPTABLES -A INPUT -p tcp --dport 22 --syn -m recent --name sshattack \
  --rcheck --seconds 60 --hitcount 2 -j REJECT --reject-with tcp-reset


The -m recent, --seconds 60, and --hitcount 2 phrases are the magic. Much of
that is so that I get the rejects logged, thanks to my sick curiosity.

This allows me to typo the password. All I have to do is wait a couple minutes
between tries. (Not all the portable hardware has a good enough ssh
implementation that I can eschew passwords.) I also use this for pop3s and imaps,
neither of which have been attacked, yet. That's a little easier than trying
to tunnel pop3 or imap through ssh.
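Rules like these are dense at first sight. As a rough illustration only -- a toy shell model of what `-m recent ... --rcheck --seconds 60 --hitcount 2` amounts to, not the real netfilter module -- each source address is remembered, and a repeat attempt arriving inside the window is rejected:

```shell
#!/bin/bash
# Toy model of iptables' "-m recent" rate limit: remember the last
# attempt per source address and reject a repeat within WINDOW seconds.
# (Requires bash for the associative array.)
WINDOW=60
declare -A last_seen

attempt() {              # attempt <src-ip> <epoch-seconds>
    local src=$1 now=$2 prev=${last_seen[$src]:--999999}
    last_seen[$src]=$now
    if (( now - prev < WINDOW )); then
        echo "REJECT $src"        # second hit inside the window
    else
        echo "ACCEPT $src"
    fi
}

attempt 10.0.0.1 100     # first try: accepted
attempt 10.0.0.1 130     # 30s later: rejected
attempt 10.0.0.1 200     # 70s after the last try: accepted again
```

With the real rules, the first SYN is recorded by `--set` and a second SYN within 60 seconds trips the `--rcheck ... --hitcount 2` rule, which is what forces the couple-of-minutes wait between tries described above.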

{^_^}


Re: No route to host

2011-12-30 Thread jdow

On 2011/12/30 00:06, MT Julianto wrote:


On 27 December 2011 15:11, MT Julianto <mtjulia...@gmail.com> wrote:


On 27 December 2011 08:57, zxq9 <z...@zxq9.com> wrote:

On 12/27/2011 04:12 PM, MT Julianto wrote:

The machine looks (sometimes) asleep, although it is always on and idle, no
screensaver is running, and there are no network changes in the surroundings.
That never happened before the migration, and never happens when connecting
to neighbor machines in the office.


I haven't had this issue myself at all, but power settings is the first
place I would start looking. Another place to check might be the sshd
settings (but again, this would be strange since others haven't reported
the same issue).


What settings of sshd might be related to the problem?  I used a fresh
installation of SL61 and made a small change to /etc/ssh/sshd_config:
"PermitRootLogin no" and always use ssh with authentication key.

It could also be routing in the office. But I'd check the above two
things before I start wondering if routers don't like to remember my
system's MAC or DHCP address or something (though it's possible if your
office needs some setting for DHCP leases to last the right amount of
time or whatever).


The machine got IP (public IP) via dhcp with fixed-address

I'll watch the dhcp lease further whenever the problem occurs again later,


Thanks for your replies :-)

All possibilities are negative: not power mode issue, not dhcp issue (see
below), not iptables issue (see below), not hacking issue (/var/log/secure is
clear (no attack) at that fail time).


What happens if you ping the host before trying ssh?

Can you check if the ssh daemon is running or if it's setup to use xinetd?
(Normally it runs as a daemon. I'm hypothesizing that it is running off an
inet daemon and taking too long to load or something.)

{^_^}


Re: No route to host

2011-12-27 Thread jdow

On 2011/12/27 09:13, Bluejay Adametz wrote:

When it fails, does it fail immediately, or does it take a few seconds
before the error shows up?

If it fails immediately, it could be a router or firewall blocking
something or maybe iptables.


It fails immediately, and the error message is gone on the next try.
That happens sometimes, say once in seven ssh attempts.  I don't think a
firewall could work that way.  Am I right?


Yes, I would expect a firewall to either deny (possibly generating the
'no route' error by explicitly denying the connect), or allow it,
every time.

Are there any routers involved, or are both the home and office
machines on the same LAN, in the configuration where you see the
failures?


On the other hand, there may be a way to actually achieve this effect
intentionally with iptables. When I get a chance I'll have to explore
it. At the moment I use one of the iptables options to allow only one
connection to my machine within a short period. The command for
doing this might be able to be perverted into rejecting the first
attempt, allowing the second, and denying all subsequent until a few
seconds have passed. Then you'd have to fail once before connecting.

If the server is not busy that might be an interesting way to keep
hackers out of the machine. It would also make my log files smaller.
I log each ssh attempt that is rejected with my iptables setup. I
just had a dumb from Zimbabwe spend nearly 4 hours attempting
an ssh connection. That amounted to 160,000 rejects. Some scripts are
DUMB.

{^_-}   Joanne


Re: Move a SL6 server from md software raid 5 to hardware raid 5

2011-12-20 Thread jdow

(privately) There is another factor I did not mention called IPAHS, Innate
Perversity of Animate Homo Sapiens. We may be faced with that at this time.

{^_-}

On 2011/12/20 17:14, Jason Bronner wrote:


There is something to be said for minimizing downtime, but every time i've tried
getting tricky with something like that something else goes horribly wrong. i
believe IPIO was mentioned and Murphy probably has something else to say about
it, too.



Re: Move a SL6 server from md software raid 5 to hardware raid 5

2011-12-20 Thread jdow

Trying to rush this kind of procedure generally leads to much greater
downtime and lost data.

The technical name for those who take this risk without a backup is "foolish".

{o.o}

On 2011/12/20 06:22, Felip Moll wrote:

Thank you for your answers!.

Regarding to the backups I have an external backup system with bacula + tapes
and another with NFS, so it should not be a problem. The problem is that I want
to do all this process in a short period time to minimize the downtime.

Be sure that your e-mails will be useful to me. Thank you.

2011/12/20 Nico Kadel-Garcia <nka...@gmail.com>

On Tue, Dec 20, 2011 at 5:52 AM, Jason Bronner <jason.bron...@gmail.com> wrote:
 > Felip.
 >
 > always always always: back up the array. create the new array. move the 
old
 > array to the new array. destroy the old array.

Amen. Also, in making the backup, consider using "star" if you use
SELinux. rsync and normal tar do not preserve SELinux attributes.

It is a good time to consider your backup policy. RAID is *not*
backup, and the white paper from Google at

http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en/us/archive/disk_failures.pdf




Re: Move a SL6 server from md software raid 5 to hardware raid 5

2011-12-19 Thread jdow

Not as I see it. You take a backup to a large disk, or disks as the case
may be. That is your safety net. Then you try the md disks in the hard raid
controller. If they work, Bob's your uncle. If they do not work then create
the proper raid configuration on the hardware controller with the md disks
and copy in the backup. Perform the copying using a live CD to the extent
you can. At no time do you end up with twin RAID arrays. Of course, if you
have enough disks simply copy the md raid as a disk to the hard raid as a
disk. "tar" or "dd" imaging can work. If you use different disks in the
RAIDs then use tar or even cpio to copy the files rather than copy a pure
image. That will tend to optimize the partitioning to use the drive's
actual internal block size for creating partition boundaries.

{^_^}

On 2011/12/19 15:43, Felip Moll wrote:

Doing it this way seems to be a high risk operation.

Furthermore I do not want to do this because then I will have two raids: one
software raid (md) inside one hardware raid. My thoughts are about manually
copying the dirs of the operating system, then modifying configurations. I
think it is a "more secure" process.

Thanks for the answer jdow ;)


2011/12/20 jdow <j...@earthlink.net>

First take a complete backup of the md raid.

Then, by the laws of Innate Perversity of Inanimate Objects, you'll be able to
move the disks and have them just work. Your data is protected. (If you had
no backup IPIO would, of course, lead to the transition failing 
expensively.)

Even if IPIO does not work you restore from the complete backup to the same
disks they were on after the hardware RAID assembles itself. (Despite the
numerous times IPIO seems to work, I still figure it's a silly superstition.
It does lead to a correct degree of paranoia, though.)

{^_^}


On 2011/12/19 09:18, Felip Moll wrote:

Well, I will remake my question to not scare possible "answerers":

How to move a SL6.0 system with md raid (raid per software), to another
server
without maintaining the raid per software?

Thanks!



2011/12/16 Felip Moll <lip...@gmail.com>


Hello all!

Recently I installed and configured a Scientific Linux to run as a 
high
performance computing cluster with 15 slave nodes and one master. I
did this
while an older system with RedHat 5.0 was running in order
to avoid users having to stop their computations. All went well. I migrated
node to
node and now I have a flawless cluster with SL6!

Well, the fact is that while migrating I used the node1 to install
SL6 while
the node0 was hosting the old master operating system. Node1 has
less ram
and no raid capabilities, so I configured a Raid5 per software when
installing, using md linux software (which comes per default to a 
normal
installation when you select "raid"). Node0 has a Raid 5 hardware
controller.

Now I want to move the new master node1, into node0. I thought about
this
and I have to shutdown node1, node0, and with a LiveCD partition the
harddisk of node0 and copy the contents of the disk of node1 into
it. Then
make grub install.

All right but, what do you think I should take into consideration
regarding RAID and md? I will have to modify /etc/fstab and also
delete
/etc/mdadm.conf to avoid md running. Anything more?

Thank you very much!





Re: Move a SL6 server from md software raid 5 to hardware raid 5

2011-12-19 Thread jdow

First take a complete backup of the md raid.

Then, by the laws of Innate Perversity of Inanimate Objects, you'll be able to
move the disks and have them just work. Your data is protected. (If you had
no backup IPIO would, of course, lead to the transition failing expensively.)

Even if IPIO does not work you restore from the complete backup to the same
disks they were on after the hardware RAID assembles itself. (Despite the
numerous times IPIO seems to work, I still figure it's a silly superstition.
It does lead to a correct degree of paranoia, though.)

{^_^}

On 2011/12/19 09:18, Felip Moll wrote:

Well, I will remake my question to not scare possible "answerers":

How to move a SL6.0 system with md raid (raid per software), to another server
without maintaining the raid per software?

Thanks!



2011/12/16 Felip Moll <lip...@gmail.com>

Hello all!

Recently I installed and configured a Scientific Linux to run as a high
performance computing cluster with 15 slave nodes and one master. I did this
while an older system with RedHat 5.0 was running in order
to avoid users having to stop their computations. All went well. I migrated node to
node and now I have a flawless cluster with SL6!

Well, the fact is that while migrating I used the node1 to install SL6 while
the node0 was hosting the old master operating system. Node1 has less ram
and no raid capabilities, so I configured a Raid5 per software when
installing, using md linux software (which comes per default to a normal
installation when you select "raid"). Node0 has a Raid 5 hardware 
controller.

Now I want to move the new master node1, into node0. I thought about this
and I have to shutdown node1, node0, and with a LiveCD partition the
harddisk of node0 and copy the contents of the disk of node1 into it. Then
make grub install.

All right but, what do you think I should take into consideration
regarding RAID and md? I will have to modify /etc/fstab and also delete
/etc/mdadm.conf to avoid md running. Anything more?

Thank you very much!




Re: Repo update error

2011-12-12 Thread jdow

On 2011/12/12 02:28, Stephan Wiesand wrote:

On Dec 12, 2011, at 10:49 , jdow wrote:



YUM - security

Error: Package: icewm-1.3.7-1.el6.x86_64 (epel)
   Requires: bluecurve-icon-theme
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest

I installed it to see what it was like. Then I never deleted it. It seems
to need something not present.

{^_^}


EPEL bug?


Likely is - I believe they read this list. For the time being I'll disable
their repo.

{^_^}


Repo update error

2011-12-12 Thread jdow

 
 YUM - security
 
Error: Package: icewm-1.3.7-1.el6.x86_64 (epel)
   Requires: bluecurve-icon-theme
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest

I installed it to see what it was like. Then I never deleted it. It seems
to need something not present.

{^_^}


Re: output of var/log/messages on the terminal

2011-12-10 Thread jdow

On 2011/12/10 14:17, Bluejay Adametz wrote:

  how can i send the  /var/log/messages on a designated terminal?
  i remember many years ago i saw one of the SAs having the messages file on
terminal 11 or 10 from the moment the machine booted. The gain - they just
switch to it instead of typing "tail ".


Perhaps an entry in /etc/rsyslog.conf to direct messages of the
desired type to /dev/tty11?


I don't practice this myself, but I suspect an entry in /etc/inittab like this 
would
suffice:
tty11::respawn:/usr/bin/tail -f /var/log/messages


That might run into problems when the messages log file rolls over.


tail --follow=name -n 100 --retry --max-unchanged-stats=5 /var/log/messages
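If you want to see why following by *name* matters across a rotation, here's a self-contained sketch (a temp file stands in for /var/log/messages; GNU coreutils tail is assumed):

```shell
#!/bin/sh
# Follow a log by NAME, rotate it, and show that tail picks up the
# replacement file -- the behavior --follow=name --retry buys you
# for /var/log/messages across logrotate runs.
dir=$(mktemp -d)
log="$dir/messages"
out="$dir/tailed"

echo "before rotate" > "$log"
tail --follow=name --retry -n +1 "$log" > "$out" 2>/dev/null &
tailpid=$!
sleep 1

mv "$log" "$log.1"             # what logrotate does
echo "after rotate" > "$log"   # fresh file under the old name
sleep 2

kill "$tailpid"
grep "after rotate" "$out"     # the line from the NEW file was captured
```

With plain -f (follow by file descriptor) tail would keep watching the rotated file instead, which is the rollover problem mentioned earlier in the thread.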

{^_^}


Re: unable to cut a DVD from iso

2011-12-06 Thread jdow

On 2011/12/06 16:54, Todd And Margo Chester wrote:

On 12/06/2011 04:50 PM, jdow wrote:

On 2011/12/06 16:38, Todd And Margo Chester wrote:

On 12/06/2011 04:29 PM, jdow wrote:

On 2011/12/06 14:37, Todd And Margo Chester wrote:

On 12/06/2011 02:09 PM, Andrew Z wrote:


On Tue, Dec 6, 2011 at 4:13 PM, Todd And Margo Chester
<toddandma...@gmail.com> wrote:

On Tue, Dec 6, 2011 at 3:40 PM, Todd And Margo Chester
<toddandma...@gmail.com> wrote:
growisofs -dvd-compat -Z /dev/sr0
./SL-61-x86_64-2011-07-27-__Everything-DVD2.iso


Syntax:
growisofs -dvd-compat -Z /dev/dvd=image.iso

Change your command to:
growisofs -dvd-compat -Z
/dev/sr0=./SL-61-x86_64-2011-07-27-__Everything-DVD2.iso

(Add the = sign.)

The above is the burn an ISO example from the man file for growisofs.

{^_^}



# growisofs -dvd-compat -allow-limited-size -Z \
/dev/sr0=SL-61-x86_64-2011-07-27-Everything-DVD1.iso

growisofs: no mkisofs options are permitted with =, aborting...


mumble, mumble ...


Let's try a "naw, it can't be!"

Try escaping all the hyphens in the name or simply renaming it to image.iso
and see what happens.

When doing it "right" doesn't work, try asinine solutions. The question
this
will answer is whether "growisofs" interprets the hyphens to mean mkisofs
options are present.

{o.o}



# mv SL-61-x86_64-2011-07-27-Everything-DVD1.iso x.iso

# growisofs --dry-run -dvd-compat -allow-limited-size -Z /dev/sr0=x.iso
growisofs: no mkisofs options are permitted with =, aborting...

naw mumble, it mumble can't mumble be mumble [expletive deleted]!



Hm, TUV, bugreport, filed, one each?

Bug TUV about it. It sounds like the man page does not match reality.

Lemme see, I am using SL6.1. The growisofs tool is version 7.1. Not feeling
like wasting a DVD or CDROM I simply tried the command with no disk in the
drive.

growisofs -dvd-compat -Z /dev/sr0=image.iso   (With an appropriate file.)

It trundled for a few moments then complained:
:-( /dev/sr0: media is not recognized as recordable DVD: 0

growisofs is found at /usr/bin/growisofs and is seen as:
-rwxr-xr-x. 1 root root 93952 Nov 22  2010 /usr/bin/growisofs

I ran it as root for write privileges.

{^_^}


Re: unable to cut a DVD from iso

2011-12-06 Thread jdow

On 2011/12/06 16:38, Todd And Margo Chester wrote:

On 12/06/2011 04:29 PM, jdow wrote:

On 2011/12/06 14:37, Todd And Margo Chester wrote:

On 12/06/2011 02:09 PM, Andrew Z wrote:


On Tue, Dec 6, 2011 at 4:13 PM, Todd And Margo Chester
<toddandma...@gmail.com> wrote:

On Tue, Dec 6, 2011 at 3:40 PM, Todd And Margo Chester
<toddandma...@gmail.com> wrote:
growisofs -dvd-compat -Z /dev/sr0
./SL-61-x86_64-2011-07-27-__Everything-DVD2.iso


Syntax:
growisofs -dvd-compat -Z /dev/dvd=image.iso

Change your command to:
growisofs -dvd-compat -Z
/dev/sr0=./SL-61-x86_64-2011-07-27-__Everything-DVD2.iso

(Add the = sign.)

The above is the burn an ISO example from the man file for growisofs.

{^_^}



# growisofs -dvd-compat -allow-limited-size -Z \
/dev/sr0=SL-61-x86_64-2011-07-27-Everything-DVD1.iso

growisofs: no mkisofs options are permitted with =, aborting...


mumble, mumble ...


Let's try a "naw, it can't be!"

Try escaping all the hyphens in the name or simply renaming it to image.iso
and see what happens.

When doing it "right" doesn't work, try asinine solutions. The question this
will answer is whether "growisofs" interprets the hyphens to mean mkisofs
options are present.

{o.o}


Re: unable to cut a DVD from iso

2011-12-06 Thread jdow

On 2011/12/06 14:37, Todd And Margo Chester wrote:

On 12/06/2011 02:09 PM, Andrew Z wrote:


On Tue, Dec 6, 2011 at 4:13 PM, Todd And Margo Chester
<toddandma...@gmail.com> wrote:

On Tue, Dec 6, 2011 at 3:40 PM, Todd And Margo Chester
<toddandma...@gmail.com> wrote:
growisofs -dvd-compat -Z /dev/sr0 
./SL-61-x86_64-2011-07-27-__Everything-DVD2.iso


Syntax:
growisofs -dvd-compat -Z /dev/dvd=image.iso

Change your command to:
growisofs -dvd-compat -Z 
/dev/sr0=./SL-61-x86_64-2011-07-27-__Everything-DVD2.iso

(Add the = sign.)

The above is the burn an ISO example from the man file for growisofs.

{^_^}


Re: Can't access SL repos - DNS problem

2011-12-05 Thread jdow

Addendum -  Thought I'd give "host scientificlinux.org" a try. It works.

scientificlinux.org has address 131.225.111.32

{^_^}

On 2011/12/05 15:00, jdow wrote:

[jdow@me2 ~]$ host ftp.scientificlinux.org
Host ftp.scientificlinux.org not found: 3(NXDOMAIN)


{^_^} generic Ontario California area

On 2011/12/05 08:53, N.N. wrote:

Hello Vladim.

ftp.scientificlinux.org is on-line.

Alain.


On 12/5/11, Vladimir Mosgalin wrote:

Hello everybody.

My DNS server (bind-9.7.3-2.el6_1.P3.3.x86_64 running on SL 6.1) stopped
resolving ftp.scientificlinux.org, ftp1.scientificlinux.org and such!
In logs it writes about probing every parent dns server in config and
then finally gives up.

$ host ftp.scientificlinux.org
Host ftp.scientificlinux.org not found: 3(NXDOMAIN)

in logs:
Dec 5 19:13:49 lime named[2109]: validating @0x7f719007dfe0: fnal.gov
DNSKEY: no valid signature found (DS)
Dec 5 19:13:49 lime named[2109]: error (no valid RRSIG) resolving
'fnal.gov/DNSKEY/IN': 8.8.4.4#53
Dec 5 19:13:49 lime named[2109]: validating @0x7f7194018900: fnal.gov
DNSKEY: no valid signature found (DS)
Dec 5 19:13:49 lime named[2109]: error (no valid RRSIG) resolving
'fnal.gov/DNSKEY/IN': 8.8.8.8#53
Dec 5 19:13:49 lime named[2109]: validating @0x7f7188067920: fnal.gov
DNSKEY: no valid signature found (DS)
Dec 5 19:13:49 lime named[2109]: error (no valid RRSIG) resolving
'fnal.gov/DNSKEY/IN': 198.49.208.71#53
Dec 5 19:13:49 lime named[2109]: validating @0x7f71900ab2b0: fnal.gov
DNSKEY: no valid signature found (DS)
Dec 5 19:13:49 lime named[2109]: error (no valid RRSIG) resolving
'fnal.gov/DNSKEY/IN': 198.49.208.70#53
Dec 5 19:13:49 lime named[2109]: validating @0x7f719007dfe0: fnal.gov
DNSKEY: no valid signature found (DS)
Dec 5 19:13:49 lime named[2109]: error (no valid RRSIG) resolving
'fnal.gov/DNSKEY/IN': 198.124.252.22#53
Dec 5 19:13:49 lime named[2109]: validating @0x7f7194018900: fnal.gov
DNSKEY: no valid signature found (DS)
Dec 5 19:13:49 lime named[2109]: error (no valid RRSIG) resolving
'fnal.gov/DNSKEY/IN': 2001:400:910:1::2#53
Dec 5 19:13:50 lime named[2109]: validating @0x7f7198603180: fnal.gov
DNSKEY: no valid signature found (DS)
Dec 5 19:13:50 lime named[2109]: error (no valid RRSIG) resolving
'fnal.gov/DNSKEY/IN': 198.128.2.10#53
Dec 5 19:13:50 lime named[2109]: validating @0x7f71900ba5f0: fnal.gov
DNSKEY: no valid signature found (DS)
Dec 5 19:13:50 lime named[2109]: error (no valid RRSIG) resolving
'fnal.gov/DNSKEY/IN': 2001:400:6000::22#53
Dec 5 19:13:50 lime named[2109]: validating @0x7f7194018900: fnal.gov
DNSKEY: no valid signature found (DS)
Dec 5 19:13:50 lime named[2109]: error (no valid RRSIG) resolving
'fnal.gov/DNSKEY/IN': 198.129.252.34#53
Dec 5 19:13:50 lime named[2109]: validating @0x7f719862a1a0: fnal.gov
DNSKEY: no valid signature found (DS)
Dec 5 19:13:50 lime named[2109]: error (no valid RRSIG) resolving
'fnal.gov/DNSKEY/IN': 2001:400:14:2::10#53
Dec 5 19:13:50 lime named[2109]: error (broken trust chain) resolving
'linux9.fnal.gov/A/IN': 8.8.8.8#53

I tried restarting it, didn't help. Is something broken on my side or
SL side? Looks like my DNS server resolves other names, including
DNSSEC-secured ones.

--

Vladimir








Re: Can't access SL repos - DNS problem

2011-12-05 Thread jdow

[jdow@me2 ~]$ host ftp.scientificlinux.org
Host ftp.scientificlinux.org not found: 3(NXDOMAIN)


{^_^}  generic Ontario California area

On 2011/12/05 08:53, N.N. wrote:

Hello Vladim.

ftp.scientificlinux.org is on-line.

Alain.


On 12/5/11, Vladimir Mosgalin  wrote:

Hello everybody.

My DNS server (bind-9.7.3-2.el6_1.P3.3.x86_64 running on SL 6.1) stopped
resolving ftp.scientificlinux.org, ftp1.scientificlinux.org and such!
In logs it writes about probing every parent dns server in config and
then finally gives up.

$ host ftp.scientificlinux.org
Host ftp.scientificlinux.org not found: 3(NXDOMAIN)

in logs:
Dec  5 19:13:49 lime named[2109]: validating @0x7f719007dfe0: fnal.gov
DNSKEY: no valid signature found (DS)
Dec  5 19:13:49 lime named[2109]: error (no valid RRSIG) resolving
'fnal.gov/DNSKEY/IN': 8.8.4.4#53
Dec  5 19:13:49 lime named[2109]: validating @0x7f7194018900: fnal.gov
DNSKEY: no valid signature found (DS)
Dec  5 19:13:49 lime named[2109]: error (no valid RRSIG) resolving
'fnal.gov/DNSKEY/IN': 8.8.8.8#53
Dec  5 19:13:49 lime named[2109]: validating @0x7f7188067920: fnal.gov
DNSKEY: no valid signature found (DS)
Dec  5 19:13:49 lime named[2109]: error (no valid RRSIG) resolving
'fnal.gov/DNSKEY/IN': 198.49.208.71#53
Dec  5 19:13:49 lime named[2109]: validating @0x7f71900ab2b0: fnal.gov
DNSKEY: no valid signature found (DS)
Dec  5 19:13:49 lime named[2109]: error (no valid RRSIG) resolving
'fnal.gov/DNSKEY/IN': 198.49.208.70#53
Dec  5 19:13:49 lime named[2109]: validating @0x7f719007dfe0: fnal.gov
DNSKEY: no valid signature found (DS)
Dec  5 19:13:49 lime named[2109]: error (no valid RRSIG) resolving
'fnal.gov/DNSKEY/IN': 198.124.252.22#53
Dec  5 19:13:49 lime named[2109]: validating @0x7f7194018900: fnal.gov
DNSKEY: no valid signature found (DS)
Dec  5 19:13:49 lime named[2109]: error (no valid RRSIG) resolving
'fnal.gov/DNSKEY/IN': 2001:400:910:1::2#53
Dec  5 19:13:50 lime named[2109]: validating @0x7f7198603180: fnal.gov
DNSKEY: no valid signature found (DS)
Dec  5 19:13:50 lime named[2109]: error (no valid RRSIG) resolving
'fnal.gov/DNSKEY/IN': 198.128.2.10#53
Dec  5 19:13:50 lime named[2109]: validating @0x7f71900ba5f0: fnal.gov
DNSKEY: no valid signature found (DS)
Dec  5 19:13:50 lime named[2109]: error (no valid RRSIG) resolving
'fnal.gov/DNSKEY/IN': 2001:400:6000::22#53
Dec  5 19:13:50 lime named[2109]: validating @0x7f7194018900: fnal.gov
DNSKEY: no valid signature found (DS)
Dec  5 19:13:50 lime named[2109]: error (no valid RRSIG) resolving
'fnal.gov/DNSKEY/IN': 198.129.252.34#53
Dec  5 19:13:50 lime named[2109]: validating @0x7f719862a1a0: fnal.gov
DNSKEY: no valid signature found (DS)
Dec  5 19:13:50 lime named[2109]: error (no valid RRSIG) resolving
'fnal.gov/DNSKEY/IN': 2001:400:14:2::10#53
Dec  5 19:13:50 lime named[2109]: error (broken trust chain) resolving
'linux9.fnal.gov/A/IN': 8.8.8.8#53

I tried restarting it, didn't help. Is something broken on my side or
SL side? Looks like my DNS server resolves other names, including
DNSSEC-secured ones.

--

Vladimir






Re: Strange issues with my bind9 host

2011-10-29 Thread jdow

It occurs to me that he's locking out an Australian /24 subnet with his
choice of IP there. 10.1.1.1 might be a little better.

{o.o}

On 2011/10/29 16:56, Cristian Ciupitu wrote:

Hi,


Try using something like this if you want to use only server 1.1.1.1:


 forward only;
 forwarders { 1.1.1.1; };


Kind regards,
Cristian Ciupitu




Re: Red Hat engineer renews attack on Windows 8-certified secure boot

2011-10-20 Thread jdow

Your rant is an off topic rant, too, sir.

Please stop it lest you issue further proof of your dysfunctional personality.
{o.o}

On 2011/10/20 19:24, Nico Kadel-Garcia wrote:

On Thu, Oct 20, 2011 at 6:16 PM, Yasha Karant  wrote:

Any idea how to get persons such as Victor Helsing to understand the issue
here? -- this is NOT a rant.  If we ignore it, we shall all be in the soup.


It's a rant. By continuing to rant, you discourage people from paying
attention to such issues where and when they *are* relevant, such as
(hopefully) the material below.


Re: Red Hat engineer renews attack on Windows 8-certified secure boot

2011-10-20 Thread jdow

Somebody ought to complain to the CSUSB and to Verizon about his spam.

Or maybe configure the spam filter being used to block CSUSB until he is
removed. That's a little harsh. But, what is there to do when it's really
easy for him to simply set up a new alias and have more of his fun? (Sadly,
lynch mobs result in way too much paperwork for the mob members.)

{^_^}

On 2011/10/20 17:27, g wrote:

On 10/20/2011 08:58 PM, Yasha Karant wrote:
<>


the practical issue


"the practical issue" is that you are posting links that have nothing
to do with the intent of this list as they are "off topic".

what is even more aggravating and irritating about your posting is that
you post a link and then quote that link.

please stop posting what *you think* is interesting. *it is not*.



Re: UEFI

2011-10-20 Thread jdow

On 2011/10/20 08:10, Tom H wrote:

On Thu, Oct 20, 2011 at 4:58 AM, Thomas Bendler
  wrote:


Secure boot is simply a design mistake. Instead of giving everyone the
opportunity to upload own certificates to the certificate store (like
browsers do), they implemented a hard coded list of certificates so that
only a few systems benefit from secure boot (the general idea of secure boot
is fine). This is the problem, the root of trust is moved to the vendors
instead of the owner. Unfortunately a lot of commercial interests will most
likely push it to the market as it is, so the only hope will be to be able
to switch it off.


The only intelligent post in this totally OT thread...


By definition there have been no intelligent posts to this thread. It does not
belong here. That it was posted here indicates the utter lack of intelligence
(can't read and follow directions) of the people wheezing in and starting this
thread.

{o.o}


Re: Distribution Servers Downtime - 15 hours on October 15th 2011 from 03:00 - 18:00 CDT (Completed)

2011-10-17 Thread jdow

Pat, you did send it out on the 12th. 2011/10/12 14:08
{o.o}

On 2011/10/17 06:27, Pat Riehecky wrote:

Apologies,

I apologize for not sending this out on time. I have lots of good sounding
excuses, but they are just excuses and avoid the fact that I didn't do it. The
downtime is completed and was finished on Saturday. There were a few lingering
issues which have since been resolved.


Everything should be working again as expected. Please let us know if you find
things which are not.

Pat

-

Hello,

The distribution servers rsync.scientificlinux.org, ftp.scientificlinux.org,
ftp1.scientificlinux.org, and ftp2.scientificlinux.org will be going down on:

Saturday October 15, 2011 at 03:00am CDT (Chicago)

Affected Machines:
* rsync.scientificlinux.org
* ftp.scientificlinux.org
* ftp1.scientificlinux.org
* ftp2.scientificlinux.org

Depending on your configuration, nightly updates may fail during this process,
as the servers will be unavailable.

Begin Downtime:
October 15, 2011 at 03:00am CDT (Chicago)

The downtime is expected to last for 15 hours.

End Downtime:
October 15, 2011 at 18:00 CDT (Chicago)

For your local time you can run date -d '2011-10-15 03:00 CDT'

Thank you for your patience while we perform this maintenance.

Pat Riehecky



Re: [jdow solved] How to run to launch script when nic interface is up

2011-10-13 Thread jdow

On 2011/10/13 21:24, William Scott wrote:

On 14 October 2011 14:13, jdow  wrote:



It acts as if the file is not even seen since there are no selinux problems
reported for it. So that makes me think something spooky is going on.


Where did you put your script?

if [ -x /sbin/ifup-pre-local ]; then
 /sbin/ifup-pre-local ${CONFIG} $2
fi

if [ -x /sbin/ifdown-pre-local ]; then
 /sbin/ifdown-pre-local ${DEVICE}
fi

The scripts get given arguments. Maybe echo $1, $2 to a text file and
see what they get.


# ls --lcontext /sbin/ifup-local
-rwxr-xr-x. 1 system_u:object_r:bin_t:s0   root root 329 Jul 25 13:30 /sbin/ifup-local



The argument is used. If it is eth1, the rest of the script runs; otherwise
it skips on through.

Its contents (now) are:
===8<---
#!/bin/sh

DEVICE=$1

echo >>/tmp/ifup "ifup-local: $DEVICE"

if [ ${DEVICE} = "eth1" ]; then
echo >>/tmp/ifup "ifup-local resetting scripts etc"
/etc/sysconfig/network-scripts/iptables.up
/etc/sysconfig/network-scripts/dyndns-ip-up.local
/etc/mail/spamassassin/RestartMail.sh
fi
===8<---
It appears the script tries to run.
# ls --lcontext /etc/sysconfig/network-scripts/iptables.up
-rwxr-xr-x. 1 system_u:object_r:bin_t:s0   root root 534 Jul 25 13:37 /etc/sysconfig/network-scripts/iptables.up


# ls --lcontext /etc/sysconfig/network-scripts/dyndns-ip-up.local
-rwxr-xr-x. 1 system_u:object_r:bin_t:s0   root root 1735 Jul 25 13:38 /etc/sysconfig/network-scripts/dyndns-ip-up.local


# ls --lcontext /etc/mail/spamassassin/RestartMail.sh
-rwxr-xr-x. 1 system_u:object_r:bin_t:s0   root root 393 Jul 25 13:36 /etc/mail/spamassassin/RestartMail.sh


So the sub-scripts look good. (The iptables script perpetrates some
horrors I can't do with the firewall tool, and it makes the system somewhat
more secure against ssh attacks. Making an attacker wait two or three
seconds before trying to get in again makes it annoyingly hard to guess
even abcdefg as a password. What I use is somewhat better, of course.)
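
The actual iptables.up script isn't shown in the thread, but the throttling
described can be sketched with the iptables "recent" match. This is a
hypothetical illustration, not the poster's script; the port, thresholds, and
list name are assumptions, and add_rule echoes each command instead of
running it (replace the echo with a real iptables call, as root, to apply):

```shell
#!/bin/sh
# Hedged sketch of ssh throttling via the iptables "recent" match.
# add_rule prints the command so the sketch can be read (and tested)
# without root; swap the echo for `iptables "$@"` to apply for real.
add_rule() {
    echo iptables "$@"
}

# Track each new ssh connection by source address in a list named SSH.
add_rule -A INPUT -p tcp --dport 22 -m state --state NEW \
    -m recent --set --name SSH

# Drop a source making a 4th new attempt within 60 seconds, forcing an
# attacker to wait before the next guess.
add_rule -A INPUT -p tcp --dport 22 -m state --state NEW \
    -m recent --update --seconds 60 --hitcount 4 --name SSH -j DROP
```

The "recent" match keeps a per-source timestamp list in the kernel, which is
what makes the delay-between-attempts behavior possible without any daemon.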

It appears RestartMail.sh might actually run. It stops fetchmail, restarts
spamd, and restarts each user's fetchmail with a sudo to the user. There is
no indication that fetchmail starts - no mail inflow starts. The iptables
script prints out an "I am running" note using echo to a file in /tmp. But
the firewall does not end up set up. That's about where I got frustrated.
The iptables script outputs about a half dozen notes to syslog and they
don't appear. Nor do any error messages. That's the really confusing part.
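
The restart sequence described above might look roughly like the following
sketch. This is a hypothetical reconstruction, not the actual RestartMail.sh;
the user names and service name are assumptions, and the commands are echoed
rather than executed so the flow can be read safely:

```shell
#!/bin/sh
# Hypothetical sketch of the described sequence: stop fetchmail,
# restart spamd, then restart each user's fetchmail under sudo.
# User list ("alice", "bob") and service name are assumptions;
# the echo prefixes print the commands instead of running them.
restart_mail() {
    echo pkill fetchmail
    echo service spamassassin restart
    for u in alice bob; do            # assumed user list
        echo sudo -u "$u" fetchmail
    done
}

restart_mail
```

A script like this run from ifup-local on boot would inherit a much sparser
environment than an interactive root shell, which is one reason such scripts
behave differently at boot time.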

So I guess I had indeed wrestled it around to actually running. But the files
it uses don't seem to run right. They run perfectly if I run /sbin/ifup-local
directly (as root, of course.)

And ifdown eth1 followed by ifup eth1 works fine. It's on boot that it seems
like nothing started up. Does syslog start after network?

I'll ask my system... Duh - I think I answered the question. Now I have to
ask other questions of my system after I boot it next time. Indeed, network
starts before syslog so of course I will not see the messages I expected in
syslog. Reapproaching the problem after being away from it for awhile works.
Fortunately I did not need to boot all that often.

I figure this one is solved.

{^_^}


Re: How to run to launch script when nic interface is up

2011-10-13 Thread jdow

On 2011/10/13 05:45, Vladimir Mosgalin wrote:

Hi jdow!

  On 2011.10.12 at 18:28:02 -0700, jdow wrote next:


  Is it possible under SL6.1 to run a script (or insert commands in
ifcfg-ethX files) when a nic is up, immediately after the network script
runs? Like, for example, one can with debian/ubuntu: post-up,
post-down.


/sbin/ifup-pre-local and /sbin/ifdown-pre-local are executed (if they exist)
before bringing an interface up or down.

/sbin/ifup-local and /sbin/ifdown-local are executed (if they exist) after
bringing an interface up or down and setting up routes.

The argument to each will be the interface name. Note that if you have aliases,
the script will be called for each alias.


OK, I have such a file that I still have to manually run when the system
reboots. I get no error messages nor is the file run.

# ls --lcontext /sbin/ifup-local
-rwxr-xr-x. 1 system_u:object_r:bin_t:s0   root root 329 Jul 25
13:30 /sbin/ifup-local

Any hints what I may have wrong?


Is your network setup managed by the "network" service or NetworkManager? If
the latter, I don't think it cares about these local scripts at all.

If you are using "network", try to find out if it runs or not by putting
something simple inside, like

#!/bin/sh
echo "$@" > /tmp/ifup-local-test

If you need NetworkManager, I don't think I can give any advice on how
to make it work when NM manages the network.


Hm, turns out I do not have that interface on Network Manager. So something
else is preventing it from working. I don't remember if ifdown;ifup picks
up the file. I think it does but a reboot fails to run the local file. I
really need to recheck that and then check in here again.

It acts as if the file is not even seen since there are no selinux problems
reported for it. So that makes me think something spooky is going on.

{^_^}


Re: How to run to launch script when nic interface is up

2011-10-13 Thread jdow

On 2011/10/13 07:15, Tom H wrote:

On Thu, Oct 13, 2011 at 8:45 AM, Vladimir Mosgalin
  wrote:

  On 2011.10.12 at 18:28:02 -0700, jdow wrote next:


  Is it possible under SL6.1 to run a script (or insert commands in
ifcfg-ethX files) when a nic is up, immediately after the network script
runs? Like, for example, one can with debian/ubuntu: post-up,
post-down.


/sbin/ifup-pre-local and /sbin/ifdown-pre-local are executed (if they exist)
before bringing an interface up or down.

/sbin/ifup-local and /sbin/ifdown-local are executed (if they exist) after
bringing an interface up or down and setting up routes.

The argument to each will be the interface name. Note that if you have aliases,
the script will be called for each alias.


OK, I have such a file that I still have to manually run when the system
reboots. I get no error messages nor is the file run.

# ls --lcontext /sbin/ifup-local
-rwxr-xr-x. 1 system_u:object_r:bin_t:s0   root root 329 Jul 25 13:30 /sbin/ifup-local


Is your network setup managed by the "network" service or NetworkManager? If
the latter, I don't think it cares about these local scripts at all.

If you are using "network", try to find out if it runs or not by putting
something simple inside, like

#!/bin/sh
echo "$@" > /tmp/ifup-local-test

If you need NetworkManager, I don't think I can give any advice on how
to make it work when NM manages the network.


For NM, you can use "/etc/NetworkManager/dispatcher.d/" (never tried it myself).
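
A dispatcher script of the kind mentioned above might be sketched as follows.
This is an illustration only (the poster says he never tried it): NetworkManager
runs executables placed in /etc/NetworkManager/dispatcher.d/, passing the
interface name and the action (e.g. "up", "down") as the two arguments. The
eth1 check mirrors the ifup-local script earlier in the thread; the commented
command names are assumptions:

```shell
#!/bin/sh
# Sketch of a NetworkManager dispatcher.d script. NM invokes it as:
#   script <interface> <action>
handle_event() {
    iface="$1"
    action="$2"
    if [ "$iface" = "eth1" ] && [ "$action" = "up" ]; then
        # A real script might run iptables.up, dyndns-ip-up.local, etc.
        echo "dispatcher: $iface is up"
    fi
}

handle_event "$1" "$2"
```

The script would need to be executable and root-owned to be picked up;
unlike ifup-local, this hook is honored even when NM manages the interface.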


Old way works. NetworkManager seems to be a fix that breaks what worked. Not
good.

{^_^}


Re: How to run to launch script when nic interface is up

2011-10-13 Thread jdow

On 2011/10/13 05:45, Vladimir Mosgalin wrote:

Hi jdow!

  On 2011.10.12 at 18:28:02 -0700, jdow wrote next:


  Is it possible under SL6.1 to run a script (or insert commands in
ifcfg-ethX files) when a nic is up, immediately after the network script
runs? Like, for example, one can with debian/ubuntu: post-up,
post-down.


/sbin/ifup-pre-local and /sbin/ifdown-pre-local are executed (if they exist)
before bringing an interface up or down.

/sbin/ifup-local and /sbin/ifdown-local are executed (if they exist) after
bringing an interface up or down and setting up routes.

The argument to each will be the interface name. Note that if you have aliases,
the script will be called for each alias.


OK, I have such a file that I still have to manually run when the system
reboots. I get no error messages nor is the file run.

# ls --lcontext /sbin/ifup-local
-rwxr-xr-x. 1 system_u:object_r:bin_t:s0   root root 329 Jul 25 13:30 /sbin/ifup-local

Any hints what I may have wrong?


Is your network setup managed by the "network" service or NetworkManager? If
the latter, I don't think it cares about these local scripts at all.

If you are using "network", try to find out if it runs or not by putting
something simple inside, like

#!/bin/sh
echo "$@" > /tmp/ifup-local-test

If you need NetworkManager, I don't think I can give any advice on how
to make it work when NM manages the network.


I knew there was a reason I didn't like NetworkManager. It seems it is
time to get it out of the picture entirely. If it ignores those files it
is extremely broken.

{^_^}


Re: How to run to launch script when nic interface is up

2011-10-12 Thread jdow

On 2011/10/12 10:53, Vladimir Mosgalin wrote:

Hi carlopmart!

  On 2011.10.12 at 18:42:30 +0200, carlopmart wrote next:


  Is it possible under SL6.1 to run a script (or insert commands in
ifcfg-ethX files) when a nic is up, immediately after the network script
runs? Like, for example, one can with debian/ubuntu: post-up,
post-down.


/sbin/ifup-pre-local and /sbin/ifdown-pre-local are executed (if they exist)
before bringing an interface up or down.

/sbin/ifup-local and /sbin/ifdown-local are executed (if they exist) after
bringing an interface up or down and setting up routes.

The argument to each will be the interface name. Note that if you have aliases,
the script will be called for each alias.


OK, I have such a file that I still have to manually run when the system
reboots. I get no error messages nor is the file run.

# ls --lcontext /sbin/ifup-local
-rwxr-xr-x. 1 system_u:object_r:bin_t:s0   root root 329 Jul 25 13:30 /sbin/ifup-local


Any hints what I may have wrong?

{^_^}


AMD Athlon II 215 X2 ASRock N68C-GS - no sound

2011-10-10 Thread jdow

VIA VT1705 Audio Codec

Latest kernel update leads to no sound. Booting old kernels also gives no sound.
The modprobe configuration includes:

dist-alsa.conf:
install snd-pcm /sbin/modprobe --ignore-install snd-pcm && /sbin/modprobe snd-seq

dist-oss.conf is disabled (for good reason.)

The only new thing is openfwwf.conf
options b43 nohwcrypt=1 qos=0

No messages in dmesg or /var/log/messages about sound, snd, or alsa.

As a result of an earlier kernel update screwing up a known good nouveau
configuration I now have the nVidia drivers installed.

Sound preferences declares no audio.

Have tried reboots and complete shutdown and reboots to no avail.

Any ideas?

{^_^}


Re: Flash plugin

2011-10-07 Thread jdow

On 2011/10/07 00:12, Dag Wieers wrote:

On Thu, 6 Oct 2011, Yasha Karant wrote:


On 10/06/2011 04:37 PM, Dag Wieers wrote:

On Thu, 6 Oct 2011, Yasha Karant wrote:

> I realise that except for the Fermilab/CERN staff persons, almost all
> of the rest of those maintaining material for SL are unpaid
> volunteers. With that stated, what is the
> typical/average/median/whatever delay from the Adobe release until the
> SL compatible port for the flash plugin?
>
> In some cases, Adobe adds functionality -- but in most cases it is a
> matter of bug and security-hole fixes -- and the sooner one installs a
> valid security fix, the better.

Do you have proof that this is a security fix? Because I track the RHEL
packages and no such update has come through their channels. It seems as
if the release was simply their official Flash Player 11 release, rather
than a security fix.

If it is a security fix, even Red Hat is behind. Somehow I don't believe
that, but it is for you to provide proof of what you state. Thanks.


I use the direct Mozilla (and OpenOffice) distributions and updates. For
Firefox 7.x (that the Firefox update on Help --> About Firefox reports as up
to date), I ran an update check on the addons, including plugins using Tools
--> Add ons and URL https://www.mozilla.org/en-US/plugincheck/ and the
following was displayed:

Vulnerable plugins:
Plugin Icon
Shockwave Flash
Shockwave Flash 11.0 r1 Vulnerable (more info)

(11.0.1.129 is what actually is installed)


Again, without any information it is hard to determine whether the plugincheck
is mainly checking the version against the latest (known) available, or whether
it actually knows about vulnerabilities.

I bet the first option is what is implemented (because the second adds
complexity without any real gain). Their aim is to have people running the 
latest.

Also, if we look at TUV, they still offer flash-plugin-10.3.183.10-1.el6, which
is most likely not vulnerable (and which was the version offered by Repoforge
until this morning too). In other words, we are now disconnected from the RHSA
information.

If you noticed a flash-plugin update from Adobe, feel free to let us know so we
can update our flash-plugin package too.


In that vein it seems "odd" to me that a 32-bit package would be accepted as an
update for a 64-bit package. This seems to me to be a bug.

{^_^}

