nfs+xfs - was Re: SL6.3 2.6.32-358.2.1.el6.x86_64 kernel panic

2013-03-18 Thread Dr Andrew C Aitchison

On Sun, 17 Mar 2013, Nico Kadel-Garcia wrote:


Also, *why* are you mixing xfs and nfs services in the same
environment? And what kind of NFS and XFS servers are you using?


Out of curiosity, why not ?

In theory the choice of disk filesystem and network file sharing
protocol should be independent.

How different is the practice ?

--
Dr. Andrew C. Aitchison Computer Officer, DPMMS, Cambridge
a.c.aitchi...@dpmms.cam.ac.uk   http://www.dpmms.cam.ac.uk/~werdna


Re: nfs+xfs - was Re: SL6.3 2.6.32-358.2.1.el6.x86_64 kernel panic

2013-03-18 Thread Nico Kadel-Garcia
On Mon, Mar 18, 2013 at 2:59 AM, Dr Andrew C Aitchison
a.c.aitchi...@dpmms.cam.ac.uk wrote:
 On Sun, 17 Mar 2013, Nico Kadel-Garcia wrote:

 Also, *why* are you mixing xfs and nfs services in the same
 environment? And what kind of NFS and XFS servers are you using?


 Out of curiosity, why not ?

 In theory the choice of disk filesystem and network file sharing
 protocol should be independent.

 How different is the practice ?

I had some bad, bad experience with XFS and haven't used it since. It
completely destabilized my bulk storage environment: things may have
changed.

I've deliberately and effectively kept my file systems below the 16 TB
range, and that has worked well with ext4. I've occasionally used larger-scale
commercial storage servers such as NetApp's for larger NFS environments
since then.


Re: nfs+xfs - was Re: SL6.3 2.6.32-358.2.1.el6.x86_64 kernel panic

2013-03-18 Thread Sergio Ballestrero

On 18 Mar 2013, at 08:37, Steven Haigh wrote:

 On 03/18/2013 06:34 PM, Nico Kadel-Garcia wrote:
 On Mon, Mar 18, 2013 at 2:59 AM, Dr Andrew C Aitchison
 a.c.aitchi...@dpmms.cam.ac.uk wrote:
 On Sun, 17 Mar 2013, Nico Kadel-Garcia wrote:
 
 Also, *why* are you mixing xfs and nfs services in the same
 environment? And what kind of NFS and XFS servers are you using?
 
 
 Out of curiosity, why not ?
 
 In theory the choice of disk filesystem and network file sharing
 protocol should be independent.
 
 How different is the practice ?
 
 I had some bad, bad experience with XFS and haven't used it since. It
 completely destabilized my bulk storage environment: things may have
 changed.
 
 I've deliberately and effectively kept my file systems below the 16 TB
 range, and that has worked well with ext4. I've occasionally used larger-scale
 commercial storage servers such as NetApp's for larger NFS environments
 since then.
 
 
 I use XFS on a small RAID6 array (it's 2 TB - not huge), and I mount it via NFS
 to other systems. I haven't had a kernel crash as yet.


We use XFS for some heavily loaded buffer storage systems, and we haven't had 
an issue - but no NFS there.
We also have an NFS server using XFS (mostly because of the project quota 
feature we needed on some shares) and that's also working fine with NFSv3, 
serving about 200 clients; NFSv4 performance on XFS is disappointing compared 
to ext3, but we are not in a hurry to migrate.
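
(For reference, the XFS project quota setup is roughly the following; a minimal
sketch only, where the /dev/sdb1 device, the /srv/shares mount point, the
"shares1" project name and the 500g limit are all made-up examples:)

	mount -o prjquota /dev/sdb1 /srv/shares            # enable project quota accounting
	echo "42:/srv/shares/shares1" >> /etc/projects     # map project id 42 to a directory tree
	echo "shares1:42" >> /etc/projid                   # give the project a human-readable name
	xfs_quota -x -c 'project -s shares1' /srv/shares   # mark the directory tree as project 42
	xfs_quota -x -c 'limit -p bhard=500g shares1' /srv/shares   # set a hard block limit
	xfs_quota -x -c 'report -p' /srv/shares            # report per-project usage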

Cheers,
  Sergio

-- 
 Sergio Ballestrero  - http://physics.uj.ac.za/psiwiki/Ballestrero
 University of Johannesburg, Physics Department
 ATLAS TDAQ sysadmin team - Office:75282 OnCall:164851


kdevelop on sl6.3

2013-03-18 Thread Mahmood Naderan
Hi
I want to install kdevelop on SL6.3. However, it is not available in the default
repositories or EPEL.
What is the correct repository, then?

 
Regards,
Mahmood


Re: double precision versus single/float ?

2013-03-18 Thread Bill Askew
I see you mentioned the use of sin() and cos().  These both take and 
return double.  For float you would need to use sinf() and cosf().


Re: Power management with ATI Radeon cards using the radeon driver.

2013-03-18 Thread Paul Robert Marino
Steven,

Wow, thanks for sharing that; it's certainly useful information about the
kernel Radeon driver I didn't know. I wonder if it's true for the AMD Fusion
as well, or does it scale based on the CPU frequency since they are on the
same die? Looks like I have some experiments to do later.

-- Sent from my HP Pre3

On Mar 18, 2013 3:47 AM, Steven Haigh net...@crc.id.au wrote:

Hi all,

I've been on a path of discovery lately regarding the state of play for 
ATI graphics cards. I started off using the ATI binary driver due to the 
high fan speed (resulting from high power usage) of the open source driver.

I decided to take a different approach today and stick with the open 
source 'radeon' driver. I managed to find that by default, the OSS 
driver keeps the card in a 'high power / performance' state.

This can be changed by using the sysfs entries exposed.

I found that using the following puts the card in low power mode:
	echo profile > /sys/class/drm/card0/device/power_method
	echo low > /sys/class/drm/card0/device/power_profile

Now, this is great to shut the fan up, and works on multi-head systems 
(more than one screen).

If you only use one screen, then you're in luck.
	echo dynpm > /sys/class/drm/card0/device/power_method

The "dynpm" method dynamically changes the clocks based on the number of 
pending fences, so performance is ramped up when running GPU  intensive 
apps, and ramped down when the GPU is idle. The reclocking is attemped 
during vertical blanking periods, but due to the timing of the 
reclocking functions, doesn't not always complete in the blanking 
period, which can lead to flicker in the display. Due to this, dynpm 
only works when a single head is active.

If you are like me and have multiple screens, you have the following 
options to set power_profile to:

"default" uses the default clocks and does not change the power state. 
This is the default behavior.

"auto" selects between "mid" and "high" power states based on the 
whether the system is on battery power or not. The "low" power state are 
selected when the monitors are in the dpms off state.

"low" forces the gpu to be in the low power state all the time. Note 
that "low" can cause display problems on some laptops; this is why auto 
does not use "low" when displays are active.

"mid" forces the gpu to be in the "mid" power state all the time. The 
"low" power state is selected when the monitors are in the dpms off state.

"high" forces the gpu to be in the "high" power state all the time. The 
"low" power state is selected when the monitors are in the dpms off state.

I've found that the 'low' setting seems to work fine in everyday
desktop tasks - and it certainly causes the fan to be much, much quieter 
than the default profile.
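
(As a quick sanity check, the current settings can be read back from the same
sysfs entries, and one possible way to re-apply them at boot on an EL6-style
system is via rc.local. A minimal sketch only - it assumes a single card0 and
that the 'low' profile is what you want persisted:)

	cat /sys/class/drm/card0/device/power_method    # should now read "profile" (or "dynpm")
	cat /sys/class/drm/card0/device/power_profile   # should now read "low"

	# Re-apply the low-power profile on every boot:
	echo 'echo profile > /sys/class/drm/card0/device/power_method' >> /etc/rc.d/rc.local
	echo 'echo low > /sys/class/drm/card0/device/power_profile' >> /etc/rc.d/rc.local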

References:
* http://www.x.org/wiki/RadeonFeature

-- 
Steven Haigh

Email: net...@crc.id.au
Web: https://www.crc.id.au
Phone: (03) 9001 6090 - 0412 935 897
Fax: (03) 8338 0299

Re: nfs+xfs - was Re: SL6.3 2.6.32-358.2.1.el6.x86_64 kernel panic

2013-03-18 Thread Paul Robert Marino
I've used XFS for over a decade now. It's the most reliable, crash-resistant
filesystem I've ever used, according to all my tests and experience. But I have
had a few bad patches on older versions of RHEL (before Red Hat started
supporting it) where it didn't work well; historically it has worked perfectly
on every non-Red-Hat-based distro.

By the way, I know why the performance goes down on NFS4: it's mostly due to
the fact that it supports xattribs natively and ext3 does not unless you
explicitly turn it on when you mount the file system.

By the way, I currently have several production servers running Gluster on top
of XFS, serving both Gluster native and NFS 3 clients, and in several clusters
it works perfectly for me.

Oh, and about the earlier confusion: NetApps are BSD UNIX boxes, and just like
UNIX, Linux can serve NFS volumes; NetApp didn't invent NFS, nor are they even
the best implementation. Give me a Linux box with a SAN or a good RAID
controller any day; they are faster, and in the case of a SAN they are cheaper
to scale.

-- Sent from my HP Pre3



Re: nfs+xfs - was Re: SL6.3 2.6.32-358.2.1.el6.x86_64 kernel panic

2013-03-18 Thread Vladimir Mosgalin
Hi Paul Robert Marino!

 On 2013.03.18 at 08:55:39 -0400, Paul Robert Marino wrote next:

 I've used XFS for over a decade now. Its the most reliable crash resistant
 filesystem I've ever used according to all my tests and experience. But I have

This might be true, but it's not the case for everyone. I've experienced very bad
corruption on xfs myself, resulting in lots of inaccessible fake
files (random size, attributes etc) with random filenames including
non-printable characters - and there was no way to remove them; fsck
refused to fix them, too. The filesystem was in a total mess and producing
various errors - it's fortunate that I was able to copy all real data
off it without corruption, though. Since then I try not to approach xfs
without a serious reason.

I'd rather use JFS for huge filesystems, which I had been using for many
years until ext4 appeared. But for filesystems over 16 TB, JFS is still the
best option, I believe (far more stable in my experience compared to xfs,
though it might not be as fast).

For several reasons most people don't consider JFS, but I used it on tons
of servers for filesystems over 1 TB (ext3 was a bad choice for huge
filesystems for various reasons) and never had a single issue with it.

At most, after multiple power failures during heavy write access I had
errors which remounted it into R/O mode and fsck always fixed it.

 By the way I know why the performance goes down on NFS4 its mostly due to the
 fact that it supports xattribs natively and ext3 does not unless you 
 explicitly
 turn it on when you mount the file system.

I don't really understand your implication: xfs is slower *due* to xattr
support? So if I mount ext4 with the user_xattr option, NFS4 from it
will become slower? How come?


-- 

Vladimir


Re: kdevelop on sl6.3

2013-03-18 Thread Steven J. Yellin
You could try the SL5 version.  A little bit of googling found 
http://stackoverflow.com/questions/7340375/why-there-is-no-kdevelop-on-centos-6


Steven Yellin

On Mon, 18 Mar 2013, Mahmood Naderan wrote:


Hi

I want to install kdevelop on SL6.3. However, it is not available in the default
repositories or EPEL.
What is the correct repository, then?

 
Regards,
Mahmood

Re: nfs+xfs - was Re: SL6.3 2.6.32-358.2.1.el6.x86_64 kernel panic

2013-03-18 Thread Paul Robert Marino
It's mostly due to the uid and gid mapping to names instead of numbers
introduced in NFS 4; by default, if possible, a backup is saved as an extended
attribute, which can also compound the atime update speed issue.

As for JFS, it's been a long time since I tested it, but I had the reverse issue.

Oh, and I know the issue you ran into with XFS. It's rare but has been known to
happen; I've hit it once myself on a laptop. It's a journal problem, and fsck
isn't the tool to use. There is a specific XFS repair tool that can fix the
journal, or it can rebuild it from the backup inodes.

-- Sent from my HP Pre3
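
(The tool referred to is xfs_repair from xfsprogs; a minimal sketch of the
usual sequence, assuming the damaged filesystem lives on /dev/sdb1 - the
device name is only an example:)

	umount /dev/sdb1          # xfs_repair requires the filesystem to be unmounted
	xfs_repair -n /dev/sdb1   # dry run: report what would be fixed without changing anything
	xfs_repair /dev/sdb1      # attempt the repair, replaying the log first if possible
	# If the log itself is too damaged to replay, -L zeroes it as a last resort
	# (this can lose the most recently written metadata):
	xfs_repair -L /dev/sdb1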


Re: Power management with ATI Radeon cards using the radeon driver.

2013-03-18 Thread Steven Haigh

On 19/03/13 09:05, David Crick wrote:

Thanks for this.

The Wiki actually says kernel 2.6.35 or newer is required,
but TUV must have backported it because they're there
and available to be set in 2.6.32-358.2.1.el6.x86_64.


Yeah - this is one of the 'joys' of the TUV Franken-kernel. You never 
know what backported stuff you'll get. Sometimes I think it is called 
2.6.32 only because that is what it started with.


The end result certainly isn't 2.6.32 anymore ;)

--
Steven Haigh

Email: net...@crc.id.au
Web: https://www.crc.id.au
Phone: (03) 9001 6090 - 0412 935 897
Fax: (03) 8338 0299