spice-guest-tools for Windows 8?

2013-03-27 Thread Todd And Margo Chester

Hi All,

spice-guest-tools-0.52.exe does not support Windows 8.  Anyone
know of a version that does?  No QXL driver is annoying.

Many thanks,
-T


Re: Issues with the recent kernel and proprietary nvidia drivers

2013-03-27 Thread Yasha Karant

On 03/26/2013 10:10 AM, Connie Sieh wrote:

On Tue, 26 Mar 2013, Yasha Karant wrote:


Am I missing something here?  Does any production vendor other than
Nvidia supply GPU compute engine cards?  Are any GPU compute cards fully
supported (including any additional interconnects beyond PCI) using
fully open source drivers and compilers/application support
generators/libraries?  To use the Nvidia GPU compute cards under CUDA,
it appears that the Nvidia proprietary driver is necessary.

Yasha Karant

On 03/25/2013 07:34 PM, Paul Robert Marino wrote:

Um, well,
Frankly, the proprietary driver is never up to date with the kernel, and
it's luck if it ever works with a new version of the kernel
after you have reinstalled and recompiled the module (with code you can't
see) against the new code.


If you have a problem with the proprietary driver, take it up with
Nvidia. In theory you pay them to make it work correctly, right?
If you don't pay them for support, then find a card that doesn't use
proprietary code.



-- Sent from my HP Pre3


On Mar 25, 2013 9:59 PM, Jeff Siddall n...@siddall.name wrote:

On 03/25/2013 12:41 PM, Yasha Karant wrote:
 We are forced to use the Nvidia proprietary driver for two reasons:

 1. We use the switched stereoscopic 3D mode of professional Nvidia
 video cards with the external Nvidia 3D switching emitter for the
 stereoscopic 3D shutter glass mode of various applications that
 display stereoscopic 3D images (both still and motion).

 2. We need to load Nvidia CUDA in order to use the CUDA computational
 functions of Nvidia GPU compute cards in our GPU based compute
engines.
 The Nvidia CUDA system appears to require the proprietary Nvidia
driver.

Yup, I run the proprietary driver for VDPAU support. If anyone knows
how to get that from the open source driver I would like to know.

Jeff





The issue with the video card driver is with the vendor of the card.
Since you paid for the video card then you should contact the video card
vendor and have them fix what needs fixing.  They got your money now
they need to support their products.

-Connie Sieh


An excellent suggestion.  The unfortunate reality is that for at least 
some of these products, the only well-supported environment is MS Windows, 
sometimes Mac OS X, and only lastly, open systems such as Linux or the 
BSD variants.  In the case of Nvidia, the CUDA compute engines as well 
as the professional switched stereoscopic 3D do have strong advertised 
support for and deployment under enterprise Linux.  However, the reality 
is that the compatibility with existing Linux environments, e.g., SL, is 
not what one might desire for a supported product.


However, if enough of us who professionally use SL and other TUV EL 
variants complain, it is possible that Nvidia will make all 
of this work -- provided the profit is there for the corporation. 
Unlike SL and related efforts, as far as I can tell, USA for-profit 
corporations, such as Nvidia, exist for only one purpose -- that all-
overarching profit.  All else is just lip service.  (This may not be the 
case in some EU nations, in which for-profit corporations are required to 
have real workers share in the shaping of policy and in real business 
management, unlike the Gompers model in which all the workers are 
concerned with are working conditions and financial compensation -- not 
the direction or societal value of the corporation.)


The above is not meant as a political statement or a statement of 
philosophy -- it is meant purely as a statement of factual reality -- a 
reality within which we must work if we want them to 'fix' what needs 
fixing.


Yasha Karant


Re: Issues with the recent kernel and proprietary nvidia drivers

2013-03-27 Thread Thomas Bendler
2013/3/27 Yasha Karant ykar...@csusb.edu

 [...]
 An excellent suggestion.  The unfortunate reality is that for at least some
 of these products, the only well-supported environment is MS Windows,
 sometimes Mac OS X, and only lastly, open systems such as Linux or the BSD
 variants.  In the case of Nvidia, the CUDA compute engines as well as the
 professional switched stereoscopic 3D do have strong advertised support
 for and deployment under enterprise Linux.  However, the reality is that
 the compatibility with existing Linux environments, e.g., SL, is not what
 one might desire for a supported product.


If they support RHEL, they support SL as well (not officially, but if the
driver works with RHEL it should also work with SL). If the driver
isn't working with RHEL either, file a bug with Nvidia and ask them to
fix it. Everything else is a waste of time. If Nvidia
provides a driver for Linux, they are responsible for the driver. If the
driver is bad, ask them to make a better one. But this is nothing that SL,
TUV, or anyone else in the community can help with; especially if the driver
is closed source, it's up to Nvidia and no one else, quite simply.

Regards Thomas
-- 
Linux ... enjoy the ride!


Re: Issues with the recent kernel and proprietary nvidia drivers

2013-03-27 Thread Lamar Owen

On 03/27/2013 03:10 AM, Yasha Karant wrote:
An excellent suggestion.  The unfortunate reality is that for at least 
some of these products, the only well-supported environment is MS 
Windows, sometimes Mac OS X, and only lastly, open systems such as 
Linux or the BSD variants.  In the case of Nvidia, the CUDA compute 
engines as well as the professional switched stereoscopic 3D do have 
strong advertised support for and deployment under enterprise Linux.  
However, the reality is that the compatibility with existing Linux 
environments, e.g., SL, is not what one might desire for a supported 
product.




Microway, just to mention one company, supports CUDA under Linux. We 
have one of their machines here, and one of our researchers is working 
on using it under EL6 with a fairly large CUDA GPU setup. He's still 
early into his research, but I'm sure I'll hear about any issues he may 
have.


In case you're not familiar with them, Microway's Tesla information can 
be found at http://www.microway.com/tesla/


They've been in the business a long time.  This is the first one of 
their machines we have purchased, so we'll see how well the support works.


And SL, along with CentOS, qualifies as an 'enterprise Linux.'


Re: Canon IR 5055 driver

2013-03-27 Thread Andrew Z
I was slightly ;) surprised as well, since this Canon printer is a
relatively old model (I think 2009).
So today I put aside 4 hours to learn the intricacies of CUPS:
http://www.openprinting.org/download/kpfeifle/LinuxKongress2002/Tutorial/VII.cups-help/VII.cups-help.html#Supply
http://fedoraproject.org/wiki/How_to_debug_printing_problems
and ended up with tons of cumbersome debug information ...

The solution was simple: I picked the next model in the list of
available models in the print assistant (or whatever that GUI app is called).
So instead of selecting the 5000 model, I chose the IR5570 and it nicely
printed the test page.

Oh well .. just a click away :)



On Tue, Mar 26, 2013 at 1:18 PM, Paul Robert Marino prmari...@gmail.comwrote:

 Well, vendors often create x86_64 packages that have i686 dependencies,
 mostly because they weren't paying attention when they compiled them, so that
 happens.
 As far as CUPS just not working, that's rare. Often CUPS may have odd
 output because a vendor decided to make a tweak or some other odd reason,
 which you can usually tinker your way around. That said, without seeing the
 logs I can't tell you definitively, but most likely it's a network issue, such
 as trying to use a NetBIOS name instead of the IP address.



 -- Sent from my HP Pre3

 --
 On Mar 26, 2013 12:17 PM, Andrew Z form...@gmail.com wrote:

 hello,
  i'm trying to hook up a network Cannon IR2055 @ the office.
  First i can't seemed to find any drivers to it in the regular gui
 choices. the one for IR5000 doesn't seemed to work - displays connecting
 to printer and hangs.

 Second, i went to cannon.com ( well eu because .com doesn't have anything
 for Linux drivers):
 http://software.canon-europe.com/products/0010428.asp

 got this :
 g12bmeng_lindeb64_0204.rpm - CQue 2.0.4 Linux Driver RPM 64-bit
 but when installing :

 
 ================================================================================
  Package             Arch    Version            Repository               Size
 ================================================================================
 Installing:
  cque-en             x86_64  2.0-4              /g12bmeng_lindeb64_0204   10 M
 Installing for dependencies:
  glibc               i686    2.12-1.80.el6_3.5  sl-security              4.3 M
  nss-softokn-freebl  i686    3.12.9-11.el6      sl                       115 k

 Transaction Summary
 ================================================================================

 so it needs i686 glibc while being 64-bit?

 Any hints on how to get it added?

 thank you
 AZ



RE: spice-guest-tools for Windows 8?

2013-03-27 Thread Brown, Chris (GE Healthcare)
This would be a question better suited to spice-de...@lists.freedesktop.org

However, as I have noted over time, the spice guys do not seem to be very 
motivated to create and publish up-to-date Windows drivers. They have yet 
to publish properly signed QXL drivers for Windows 7.

*HINT*
These drivers do exist as part of the virtio-win package, however, if you have 
an active RHEL subscription :-)
- Chris

-Original Message-
From: owner-scientific-linux-us...@listserv.fnal.gov 
[mailto:owner-scientific-linux-us...@listserv.fnal.gov] On Behalf Of Todd And 
Margo Chester
Sent: Wednesday, March 27, 2013 1:38 AM
To: Scientific Linux Users
Subject: spice-guest-tools for Windows 8?

Hi All,

spice-guest-tools-0.52.exe does not support Windows 8.  Anyone know of a 
version that does?  No QXL driver is annoying.

Many thanks,
-T


Re: memory per process/core

2013-03-27 Thread Attilio De Falco
Just a stab in the dark, but did you check the Shared Memory kernel parameter 
(shmmax)? Type cat /proc/sys/kernel/shmmax.  We have it set very high so that 
any process/thread can use as much memory as it needs.  You can set the limit 
to 1 GB without rebooting by typing echo 1073741824 > /proc/sys/kernel/shmmax, 
or modify /etc/sysctl.conf and add the line kernel.shmmax = 1073741824 so it 
remains after a reboot.  I'm not sure about abinit, but some Fortran programs 
need the shmmax limit to be set high…
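
For reference, the arithmetic behind the value above can be sketched like this (a minimal sketch; the 1 GB figure and the paths are the ones from this message, and the sysctl lines are shown as comments since they need root):

```shell
# 1 GB expressed in bytes, the value used for kernel.shmmax above
bytes=$((1 * 1024 * 1024 * 1024))
echo "$bytes"    # → 1073741824

# apply immediately, without a reboot (needs root):
#   sysctl -w kernel.shmmax=$bytes
# persist across reboots by adding this line to /etc/sysctl.conf:
#   kernel.shmmax = 1073741824
```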

Good luck,

~Attilio



On Mar 26, 2013, at 9:59 PM, Duke Nguyen duke.li...@gmx.com wrote:

 Hi folks,
 
 We have SL6.3 64-bit installed on a box with two quad-core CPUs and 8GB RAM.
 We installed openmpi, Intel Studio XE, and abinit to run some of our
 applications in parallel (8 cores/processes). To our surprise, the system
 usually takes only about half of the available memory (about 500MB per core)
 and then the job/task is killed with a low-resource error.
 
 We don't really understand why there is a cap of 512MB (I guess it would be
 512MB rather than 500MB) on each of our cores, whereas in theory each core
 should be able to run up to 1GB. Any suggestions/comments/experience with
 this issue?
 
 Thanks in advance,
 
 D.
 


perl yum problem

2013-03-27 Thread Todd And Margo Chester

Hi All,

Any idea how to fix this?

Many thanks,
-T

Error: Package: perl-IO-Compress-Bzip2-2.020-127.el6.x86_64 (@sl/6x)
   Requires: perl = 4:5.10.1-127.el6
   Removing: 4:perl-5.10.1-127.el6.x86_64 (@sl/6x)
   perl = 4:5.10.1-127.el6
   Updated By: 4:perl-5.10.1-130.el6_4.x86_64 (sl-security)
   perl = 4:5.10.1-130.el6_4


Re: [SCIENTIFIC-LINUX-USERS] perl yum problem

2013-03-27 Thread Pat Riehecky

You may have gotten your repodata during the rebuild.

Try 'yum clean expire-cache' and see if that helps.

Pat

On 03/27/2013 01:28 PM, Todd And Margo Chester wrote:

Hi All,

Any idea how to fix this?

Many thanks,
-T

Error: Package: perl-IO-Compress-Bzip2-2.020-127.el6.x86_64 (@sl/6x)
   Requires: perl = 4:5.10.1-127.el6
   Removing: 4:perl-5.10.1-127.el6.x86_64 (@sl/6x)
   perl = 4:5.10.1-127.el6
   Updated By: 4:perl-5.10.1-130.el6_4.x86_64 (sl-security)
   perl = 4:5.10.1-130.el6_4



--
Pat Riehecky

Scientific Linux developer
http://www.scientificlinux.org/


how to find internet dead spots

2013-03-27 Thread Todd And Margo Chester

Hi All,

I have a CentOS 5.x server sitting on a DSL line
acting as a firewall.  I have noticed that there are
dead spots, up to a minute, every so often in the
Internet service.

It could be a traffic storm on someone's part, but the worst
they run is IMAP.  No music; no video.

Is there a utility I can run to map this?

Many thanks,
-T


AD Integration - what do you do about user/group pairs like puppet/puppet ?

2013-03-27 Thread James M. Pulver
So we're working along on our SL6 and AD Server 2008R2 integration, using SSSD 
for authentication and such. We've realized that AD won't allow groups and users 
to have the same name. For common software like puppet and qemu that has this 
setup, what do you do? Change the program configuration to use a different 
group name? Do some hackery with OUs and sAMAccountNames and have it use gids 
(do the right things do this)? Technet says sAMAccountName must be unique and 
cannot be munged... 

--
James Pulver
LEPP Computer Group
Cornell University


Re: AD Integration - what do you do about user/group pairs like puppet/puppet ?

2013-03-27 Thread Paul Robert Marino
Well... The same user should be able to log in from multiple clients at the same time, so as long as the gids and uids on your file system are consistent across the board, that's a non-issue.

But a word of advice: DO NOT PUT THE USERS FOR YOUR SERVICES IN AD OR ANY OTHER LDAP SERVER. It's a horrible idea, because if you lose LDAP connectivity and the SSSD cache fails, your server turns into a paperweight. Also, the way file system lookups of gid/uid maps and ACLs work on Linux and Unix wasn't designed with remote authentication in mind, so every file access generates a lookup query. SSSD alleviates this, but again, if SSSD fails you will fall back to doing a DoS attack on your LDAP server. This is one of the old common problems people had with nscd: the default tuning options were optimized for local file based lookups and not for NSS or LDAP, so it would get overwhelmed and would either crash or, worse, get into a loop where it's trying to answer expired requests delivered via the memory mapped file instead of the socket. By the way, that default of using the memory mapped file instead of the socket was the actual cause of most of the issues, but I digress.

Only use LDAP for users that actually log in, and never for root.

-- Sent from my HP Pre3

On Mar 27, 2013 3:56 PM, James M. Pulver jmp...@cornell.edu wrote: So we're working along on our SL6 and AD Server 2008R2 integration, using SSSD for authentication and such. We've realized that AD won't allow groups and users to have the same name. For common software like puppet and qemu that has this setup, what do you do? Change the program configuration to use a different group name? Do some hackery with OUs and sAMAccountNames and have it use gids (do the right things do this)? Technet says sAMAccountName must be unique and cannot be munged... 

--
James Pulver
LEPP Computer Group
Cornell University
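
The advice above about keeping service accounts out of LDAP can be sketched as a small idempotent helper (a sketch only; the function name is hypothetical, and it assumes the usual shadow-utils groupadd/useradd tools and must be run as root on each host):

```shell
#!/bin/bash
# Ensure a local (non-LDAP) service account and matching group exist.
# getent consults NSS, so each step is a no-op if the entry already exists,
# which makes the helper safe to re-run from configuration management.
ensure_local_svc() {
  local name=$1
  getent group  "$name" >/dev/null || groupadd -r "$name"
  getent passwd "$name" >/dev/null || useradd -r -g "$name" -s /sbin/nologin "$name"
}

# usage (as root):
#   ensure_local_svc puppet
```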

Re: how to find internet dead spots

2013-03-27 Thread Joseph Areeda

Hi Todd,

If you mean the DSL goes out for a while, what I've done is pretty low 
tech but works for reporting downtime.


A cron job from inside that pings a couple of servers on the outside, and 
one on the outside that pings the server in question.  I usually grep 
for the summary line and redirect it out to a log file.


Something like:

   #!/bin/bash
   ips="example.com another.example.com"
   for ip in $ips; do
       dat=$(date '+%Y%m%d %H%M')
       res=$(ping -c 3 "$ip" | grep loss | awk '{print $6 "," $10}')
       echo "$dat,$ip,$res" >> /home/joe/ping.stats
   done
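
The awk field extraction in a script like that can be sanity-checked against a canned ping summary line (a sketch; the sample line below mimics typical iputils ping output, where fields 6 and 10 are the loss percentage and total time):

```shell
# fields (default whitespace splitting):
#   $1=3 $2=packets $3=transmitted, $4=3 $5=received,
#   $6=0% $7=packet $8=loss, $9=time $10=2003ms
line='3 packets transmitted, 3 received, 0% packet loss, time 2003ms'
echo "$line" | awk '{print $6 "," $10}'    # → 0%,2003ms
```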

Joe


On 3/27/13 12:36 PM, Todd And Margo Chester wrote:


Hi All,

I have a CentOS 5.x server sitting on a DSL line
acting as a firewall.  I have noticed that there are
dead spots, up to a minute, every so often in the
Internet service.

It could be a traffic storm on someone's part, but the worst
they run is IMAP.  No music; no video.

Is there a utility I can run to map this?

Many thanks,
-T






Re: memory per process/core

2013-03-27 Thread Duke Nguyen

On 3/27/13 11:52 PM, Attilio De Falco wrote:

Just a stab in the dark, but did you check the Shared Memory kernel parameter (shmmax)? Type cat 
/proc/sys/kernel/shmmax.  We have it set very high so that any process/thread can use as much memory as it 
needs.  You can set the limit to 1 GB without rebooting by typing echo 1073741824 > 
/proc/sys/kernel/shmmax, or modify /etc/sysctl.conf and add the line kernel.shmmax = 
1073741824 so it remains after a reboot.  I'm not sure about abinit, but some Fortran programs need the shmmax 
limit to be set high…


Hi Attilio, we already had it set to a very high value (not sure why; I never 
changed/edited this value before):


[root@biobos:~]# sysctl -p
net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
error: net.bridge.bridge-nf-call-ip6tables is an unknown key
error: net.bridge.bridge-nf-call-iptables is an unknown key
error: net.bridge.bridge-nf-call-arptables is an unknown key
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
[root@biobos:~]# cat /proc/sys/kernel/shmmax
68719476736

Any other suggestions?
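
One thing worth checking here, as an assumption on my part rather than something established in the thread: shmmax only caps SysV shared memory segments, while a per-process kill at around 512MB is more often a ulimit or a batch-system/MPI memory cap. A quick look:

```shell
# per-process limits in effect for the current shell
# (a capped "ulimit -v" around 524288 KB would explain a ~512MB ceiling)
ulimit -v    # max virtual memory in KB, or "unlimited"
ulimit -m    # max resident set size in KB, or "unlimited"

# persistent caps, if any, usually live in limits.conf;
# look for "as" (address space) or "rss" entries
grep -hv '^#' /etc/security/limits.conf 2>/dev/null | grep -E 'as|rss' || true
```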



On Mar 26, 2013, at 9:59 PM, Duke Nguyen duke.li...@gmx.com wrote:


Hi folks,

We have SL6.3 64-bit installed on a box with two quad-core CPUs and 8GB RAM. We 
installed openmpi, Intel Studio XE, and abinit to run some of our applications 
in parallel (8 cores/processes). To our surprise, the system usually takes only 
about half of the available memory (about 500MB per core) and then the job/task 
is killed with a low-resource error.

We don't really understand why there is a cap of 512MB (I guess it would be 
512MB rather than 500MB) on each of our cores, whereas in theory each core should 
be able to run up to 1GB. Any suggestions/comments/experience with this issue?

Thanks in advance,

D.