Re: Large filesystem recommendation

2013-07-24 Thread Paul Robert Marino
I use XFS on SL 6.x; it's great in my opinion, very stable and fast. Red Hat themselves suggest it as the backend filesystem for Gluster storage nodes. Early versions on SL6 had some problems, but those have all been worked out now, and on other distros (SuSE and Gentoo) I've used it for a decade now with no issues. In SL 6.4 I've even been using it at install time via kickstarts; it is supported by the anaconda installer for every filesystem except / and /boot.

There are just three things to keep in mind with XFS:

1) If you rely on the undelete feature of EXT3 and EXT4, note that there is no such feature on XFS.

2) XFS has its own diagnostic tools; fsck does nothing on XFS, and even the fsck.xfs command is just a dummy to satisfy the startup scripts that want to check filesystems every so many mounts.

3) If you want to do a full filesystem backup, it may behoove you to look at the xfsdump command rather than traditional tools like tar, because the file produced by xfsdump will include any SELinux contexts, POSIX ACLs, and extended attributes set on the files. This comes in really handy for things like the OpenStack Swift/Gluster integration, which stores all the Swift ACLs as extended attributes on the backend filesystem. (A brief sketch is included at the end of this message.)

--
Sent from my HP Pre3

On Jul 24, 2013 16:25, Connie Sieh wrote:

On Wed, 24 Jul 2013, Ray Van Dolson wrote:

> On Wed, Jul 24, 2013 at 01:59:03PM -0400, John Lauro wrote:
>> What is recommended for a large file system (40TB) under SL6?
>>
>> In the past I have always had good luck with jfs.  Might not be the
>> fastest, but very stable.  It works well with being able to repair
>> huge filesystems in reasonable amount of RAM, and handle large
>> directories, and large files.  Unfortunately jfs doesn't appear to be
>> supported in 6?  (or is there a repo I can add?)
>>
>>
>> Besides for support of 40+TB filesystem, also need support of files
>> >4TB, and directories with hundreds of thousands of files.  What do
>> people recommend?
>
> Echoing what others have said, sounds like XFS might be the best option
> if you can find a repository with a quality version (EPEL perhaps?)

Do you have issues with the xfs that is provided in SL 6?

-Connie Sieh

>
> Interesting on the Backblaze and ext4 thing.  While ext4 itself may
> support this larger file system size, I'm not sure if the "default"
> ext4 tools will.  Could be a risk worth investigating if you go this route.
>
> Other options I can think of:
>
> - btrfs (not sure if something like EPEL provides a release with this)
> - ZFS on Linux (for the adventurous only, but I believe they have a
>  version that works well with RHEL).
>
> Personally, I'd go XFS.
>
> Ray
>
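
A minimal sketch of the XFS-specific tooling mentioned above; the device, mount point, labels and file names are purely illustrative:

    # there is no real fsck for XFS; check/repair an *unmounted* filesystem with xfs_repair
    umount /data
    xfs_repair /dev/sdb1
    mount /dev/sdb1 /data

    # level-0 (full) dump of /data to a file; SELinux contexts, POSIX ACLs
    # and extended attributes are carried along in the dump
    xfsdump -l 0 -L data-full -M backup01 -f /backup/data.xfsdump /data

    # restore the dump into an XFS destination
    xfsrestore -f /backup/data.xfsdump /data

    # and in a kickstart, XFS can be requested for any partition other than / and /boot, e.g.:
    # part /data --fstype=xfs --size=1 --grow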

RE: Large filesystem recommendation

2013-07-24 Thread Brown, Chris (GE Healthcare)
If you enable the sl-other repository, ZFS is now an option to try as well.

- Chris Brown
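
Assuming the repository id is simply "sl-other" and the ZFS packages are named along the lines of "zfs" (worth confirming with a search first), enabling it for a single yum transaction would look roughly like:

    # see what ZFS-related packages the repository actually provides
    yum --enablerepo=sl-other search zfs
    # then install; the package name here is an assumption, use what the search shows
    yum --enablerepo=sl-other install zfs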

-Original Message-
From: owner-scientific-linux-us...@listserv.fnal.gov 
[mailto:owner-scientific-linux-us...@listserv.fnal.gov] On Behalf Of Connie Sieh
Sent: Wednesday, July 24, 2013 3:25 PM
To: Ray Van Dolson
Cc: SCIENTIFIC-LINUX-USERS@LISTSERV.FNAL.GOV
Subject: Re: Large filesystem recommendation

On Wed, 24 Jul 2013, Ray Van Dolson wrote:

> On Wed, Jul 24, 2013 at 01:59:03PM -0400, John Lauro wrote:
>> What is recommended for a large file system (40TB) under SL6?
>>
>> In the past I have always had good luck with jfs.  Might not be the 
>> fastest, but very stable.  It works well with being able to repair 
>> huge filesystems in reasonable amount of RAM, and handle large 
>> directories, and large files.  Unfortunately jfs doesn't appear to be 
>> supported in 6?  (or is there a repo I can add?)
>>
>>
>> Besides for support of 40+TB filesystem, also need support of files
>> >4TB, and directories with hundreds of thousands of files.  What do
>> people recommend?
>
> Echoing what others have said, sounds like XFS might be the best 
> option if you can find a repository with a quality version (EPEL 
> perhaps?)

Do you have issues with the xfs that is provided in SL 6?

-Connie Sieh

>
> Interesting on the Backblaze and ext4 thing.  While ext4 itself may 
> support this larger file system size, I'm not sure if the "default"
> ext4 tools will.  Could be a risk worth investigating if you go this route.
>
> Other options I can think of:
>
> - btrfs (not sure if something like EPEL provides a release with this)
> - ZFS on Linux (for the adventurous only, but I believe they have a  
> version that works well with RHEL).
>
> Personally, I'd go XFS.
>
> Ray
>


Re: Large filesystem recommendation

2013-07-24 Thread Connie Sieh

On Wed, 24 Jul 2013, Ray Van Dolson wrote:


On Wed, Jul 24, 2013 at 01:59:03PM -0400, John Lauro wrote:

What is recommended for a large file system (40TB) under SL6?

In the past I have always had good luck with jfs.  Might not be the
fastest, but very stable.  It works well with being able to repair
huge filesystems in reasonable amount of RAM, and handle large
directories, and large files.  Unfortunately jfs doesn't appear to be
supported in 6?  (or is there a repo I can add?)


Besides for support of 40+TB filesystem, also need support of files
>4TB, and directories with hundreds of thousands of files.  What do
people recommend?


Echoing what others have said, sounds like XFS might be the best option
if you can find a repository with a quality version (EPEL perhaps?)


Do you have issues with the xfs that is provided in SL 6?

-Connie Sieh



Interesting on the Backblaze and ext4 thing.  While ext4 itself may
support this larger file system size, I'm not sure if the "default"
ext4 tools will.  Could be a risk worth investigating if you go this route.

Other options I can think of:

- btrfs (not sure if something like EPEL provides a release with this)
- ZFS on Linux (for the adventurous only, but I believe they have a
 version that works well with RHEL).

Personally, I'd go XFS.

Ray
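
For the original 40 TB question, a rough sketch of creating and mounting a large XFS volume on SL6; the LVM device and mount point are illustrative:

    # mkfs.xfs defaults are generally sane, even for very large volumes
    mkfs.xfs -L bigdata /dev/vg_data/lv_data

    # inode64 allows inodes to be placed anywhere on the filesystem; on EL6 it is
    # not the default and is commonly recommended for filesystems over ~1 TB
    mount -o inode64 /dev/vg_data/lv_data /srv/bigdata

    # corresponding /etc/fstab entry
    # /dev/vg_data/lv_data  /srv/bigdata  xfs  inode64  0 0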



Re: [SCIENTIFIC-LINUX-USERS] Upgrade SLC5 to SLC6 ?

2013-07-24 Thread Konstantin Olchanski
On Tue, Jul 23, 2013 at 05:56:17PM -0700, Jeffrey Anderson wrote:
> On Tue, Jul 23, 2013 at 5:45 PM, Nico Kadel-Garcia  wrote:
> 
> > the problem here is the switch from one OS, a CERN release, to an
> > actual Scientific Linux release. That can be an adventure. I've
> > done it for CentOS=>Scientific Linux, and for RHEL=>CentOS and
> > CentOS=>RHEL. It's nasty, and I don't recommend it unless you're
> > getting paid hourly.
> >
> 
> Actually that is not quite the problem.  I was trying to upgrade from the
> CERN SL5 to the CERN SL6, and that apparently is not supported.  But
> neither is a vanilla SL5 to vanilla SL6.
> 


One way to think about this:

the difference between a minor update (5.8->5.9) and a major update (5.9->6.x)
is that the latter requires a full reinstall.

Why? There are too many incompatible changes, and writing a foolproof automated
upgrade tool is significantly more difficult than simply requiring a full reinstall.

In other words, if "yum update" could handle it, they would have named it 
"5.10", not "6.x".

And of course if you do not require a bulletproof automated tool and
if you do not mind ending up with a funny mongrel system,
you can do the update/upgrade manually, even live, even successfully -
there are enough reports of people who have done exactly that.
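
As a concrete illustration of the minor-versus-major distinction (release strings below are examples only):

    cat /etc/redhat-release   # e.g. "Scientific Linux release 5.8 (Boron)"
    yum update                # minor point releases (5.8 -> 5.9) update in place
    cat /etc/redhat-release   # now 5.9; a 5.x -> 6.x jump still means a fresh install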


-- 
Konstantin Olchanski
Data Acquisition Systems: The Bytes Must Flow!
Email: olchansk-at-triumf-dot-ca
Snail mail: 4004 Wesbrook Mall, TRIUMF, Vancouver, B.C., V6T 2A3, Canada


Re: [SCIENTIFIC-LINUX-USERS] Upgrade SLC5 to SLC6 ?

2013-07-24 Thread Yasha Karant
The remote mass upgrade/update problem -- as distinct from the full set of
issues that an environment such as CFEngine addresses for live partial
updates -- has been solved over a network for some time.  The solution
assumes a stable network running what is today the IETF Internet
protocol suite, root access on all nodes, adequate local hard drive
space for the image (theoretically, NFS or the like would work, but it
is too unreliable in many environments), compatible file systems between
the present and upgraded OS, and the ability to remotely reboot a system
from the network (issuing a shutdown and reboot command).  In all cases,
the network configuration should be carried over into the upgraded (new)
environment (OS); DHCP or similar discovery/configuration protocols may
be more problematic than a stable static logical network.  There are two
methods:


1) Install a base bootable production system onto the hard drive (in
partition terms, onto an "empty" or overwritable partition) using the
existing system, install a small program that overwrites the bootloader
(the MBR on most current systems) to point to the new image, and then
reboot.  (A rough sketch of this method follows below.)

2) Install an upgrade application environment, download one upgrade file
(an RPM in EL) at a time under control of an upgrade application that
has a preconfigured list of which files to upgrade, upgrade each set of
targets of the upgrade files (building the upgraded system in situ
rather than installing a full image as in the other method), and then
reboot remotely.
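
A rough sketch of the first method using legacy GRUB as found on EL5/EL6; the partition, kernel and initrd names are illustrative, not prescriptive:

    # 1. the new OS tree has been unpacked onto /dev/sda3 and its kernel and
    #    initrd copied into the existing /boot
    # 2. add an entry like this to /boot/grub/grub.conf:
    #
    #    title New system image (sda3)
    #            root (hd0,0)
    #            kernel /vmlinuz-new ro root=/dev/sda3
    #            initrd /initramfs-new.img
    #
    # 3. make it the default (entries are numbered from 0) and reboot remotely
    sed -i 's/^default=.*/default=1/' /boot/grub/grub.conf
    reboot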


In most cases, I prefer a hardwired static network for the above rather
than a wireless (e.g., 802.11) dynamic network, because of stability,
bandwidth, and related issues.  The process is simplest when all of the
nodes to be upgraded are homogeneous (same OS, same instruction set
architecture, very similar or identical total architecture), but it can
be done successfully on a largely inhomogeneous set with caution (and,
preferably, a dry test run on each platform/environment combination).


Yasha Karant

On 07/24/2013 05:11 AM, Nico Kadel-Garcia wrote:

On Tue, Jul 23, 2013 at 10:46 PM, Yasha Karant  wrote:

On 07/23/2013 06:02 PM, Nico Kadel-Garcia wrote:



I'm glad for you, and startled myself. Our favorite upstream vendor
certainly supported doing updates from major OS versions to major OS
versions: you just couldn't gracefully do it *live*, because changing
things like major versions of glibc and rpm while you're in the midst
of using them to do the update is... intriguingly problematic.
(Tried it once: don't recommend it!)



One should never have to deal with the "intriguingly problematic" situation
to which you allude, at least not with a properly engineered software
system.  The upgrading runtime system (that which is actually executing to
do the upgrade) should not depend upon any executable images from the system
being upgraded, but should stand alone -- installing / overwriting to new
executable images.  The only primary issue would be a power/hardware failure
during the upgrade, possibly leaving the system in an unstable ("mixed"
between incompatible executables) state.  Otherwise, upon completion of the
upgrading environment (possibly executing from a "temp" area on the hard
drive into some portion of main memory), the new standalone bootable system
(with bootloader and boot files) would be installed, and the system should
do a full reboot equivalent to a "cold" power-on.


Theory is nice. Practice... can get a bit interesting. The problem
isn't the "properly engineered system", it's the practices of handling
a remote system without boot media access to provide exactly the
isolated wrapper environment you describe.

There are additional issues, unfortunately. When doing the software
installations, software updates such as "rpm" installations are
normally done inside a "use this alternative as if it were the /
directory instead" directive handed to the rpm command. But when the
newer version of RPM is a lot newer, you get into format changes of
the old RPM database in /var/lib/rpm.


The fact that you did need to deal with an "intriguingly problematic"
situation seems to indicate a not very good upgrade implementation.  The
same thing could happen with an update, depending upon which system
dependencies are changed (e.g., a new glibc that is not backward compatible
with the one being used by the previous running image).


I simply didn't have access to boot media and boot time console access
on the remotely installed systems, which had to be down for less than
one hour apiece. I was asked if I *could* do it, and with some testing
found that I could. Doing the testing to get the procedure, now *THAT*
cost time. And mind you, this was years back, with the original Red
Hat 6.2, and a similar in-place upgrade years later with RHEL 4.3. The
latter indicates that it's probably feasible with Scientific Linux 4
to Scientific Linux 5, but it was problematic. I don't recommend it.

Incompatible glibc's are almost inevitable with major OS updates. So
are database software changes, such as when RPM went from Berkeley DB
to SQLite. (Thank you, upstream vendor, for that one, thank you!)

Re: [SCIENTIFIC-LINUX-USERS] Upgrade SLC5 to SLC6 ?

2013-07-24 Thread Nico Kadel-Garcia
On Tue, Jul 23, 2013 at 10:46 PM, Yasha Karant  wrote:
> On 07/23/2013 06:02 PM, Nico Kadel-Garcia wrote:

>> I'm glad for you, and startled myself. Our favorite upstream vendor
>> certainly supported doing updates from major OS versions to major OS
>> versions: you just couldn't gracefully do it *live*, because changing
>> things like major versions of glibc and rpm while you're in the midst
>> of using them to do the update is... intriguingly problematic.
>> (Tried it once: don't recommend it!)
>>
>
> One should never have to deal with the "intriguingly problematic" situation
> to which you allude, at least not with a properly engineered software
> system.  The upgrading runtime system (that which is actually executing to
> do the upgrade) should not depend upon any executable images from the system
> being upgraded, but should stand alone -- installing / overwriting to new
> executable images.  The only primary issue would be a power/hardware failure
> during the upgrade, possibly leaving the system in an unstable ("mixed"
> between incompatible executables) state.  Otherwise, upon completion of the
> upgrading environment (possibly executing from a "temp" area on the hard
> drive into some portion of main memory), the new standalone bootable system
> (with bootloader and boot files) would be installed, and the system should
> do a full reboot equivalent to a "cold" power-on.

Theory is nice. Practice... can get a bit interesting. The problem
isn't the "properly engineered system", it's the practices of handling
a remote system without boot media access to provide exactly the
isolated wrapper environment you describe.

There are additional issues, unfortunately. When doing the software
installations, software updates such as "rpm" installations are
normally done inside a "use this alternative as if it were the /
directory instead" directive handed to the rpm command. But when the
newer version of RPM is a lot newer, you get into format changes of
the old RPM database in /var/lib/rpm.
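
The "alternative /" directive being described is presumably rpm's --root option; a rough sketch with illustrative paths and package names:

    # install packages into a tree mounted at /mnt/newsys instead of the running /
    rpm --root /mnt/newsys -Uvh glibc-*.rpm rpm-*.rpm

    # after a large version jump, the old database under /var/lib/rpm in that tree
    # may need rebuilding in the newer format
    rpm --root /mnt/newsys --rebuilddb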

> The fact that you did need to deal with an "intriguingly problematic"
> situation seems to indicate a not very good upgrade implementation.  The
> same thing could happen with an update, depending upon which system
> dependencies are changed (e.g., a new glibc that is not backward compatible
> with the one being used by the previous running image).

I simply didn't have access to boot media and boot time console access
on the remotely installed systems, which had to be down for less than
one hour apiece. I was asked if I *could* do it, and with some testing
found that I could. Doing the testing to get the procedure, now *THAT*
cost time. And mind you, this was years back, with the original Red
Hat 6.2, and a similar in-place upgrade years later with RHEL 4.3. The
latter indicates that it's probably feasible with Scientific Linux 4
to Scientific Linux 5, but it was problematic. I don't recommend it.

Incompatible glibc's are almost inevitable with major OS updates. So
are database software changes, such as when RPM went from Berkeley DB
to SQLite. (Thank you, upstream vendor, for that one, thank you!)


Re: xulrunner-debuginfo-17.0.7-1.el5_9.x86_64.rpm ?

2013-07-24 Thread Dr Andrew C Aitchison

On Wed, 24 Jul 2013, Dr Andrew C Aitchison wrote:


On Wed, 26 Jun 2013, Bonnie King wrote:


Synopsis:  Critical: firefox security update
Advisory ID:   SLSA-2013:0981-1
Issue Date:2013-06-25



SL5
 x86_64
   firefox-17.0.7-1.el5_9.i386.rpm
   firefox-17.0.7-1.el5_9.x86_64.rpm
   firefox-debuginfo-17.0.7-1.el5_9.i386.rpm
   firefox-debuginfo-17.0.7-1.el5_9.x86_64.rpm
   xulrunner-17.0.7-1.el5_9.i386.rpm
   xulrunner-17.0.7-1.el5_9.x86_64.rpm
   xulrunner-debuginfo-17.0.7-1.el5_9.i386.rpm
   xulrunner-debuginfo-17.0.7-1.el5_9.x86_64.rpm


I can't find the xulrunner-debuginfo-17.0.7-1.el5_9 packages in 
http://ftp.scientificlinux.org/linux/scientific/59/x86_64/updates/security/
(the i386.rpm *is* in
http://ftp.scientificlinux.org/linux/scientific/59/i386/updates/security/)


No it isn't. I can't tell devel from debuginfo :-( sorry.

The question should have been: should we remove
http://ftp.scientificlinux.org/linux/scientific/59/x86_64/updates/security/xulrunner-debuginfo-17.0.5-1.el5_9.x86_64.rpm
?

It is in
http://ftp.scientificlinux.org/linux/scientific/5rolling/archives/debuginfo/
where I would expect to find it.

--
Dr. Andrew C. Aitchison Computer Officer, DPMMS, Cambridge
a.c.aitchi...@dpmms.cam.ac.uk   http://www.dpmms.cam.ac.uk/~werdna


Re: [SCIENTIFIC-LINUX-USERS] Upgrade SLC5 to SLC6 ?

2013-07-24 Thread Matthias Schroeder

On 07/24/2013 01:02 AM, g wrote:

hello pat,

On 07/23/2013 01:17 PM, Pat Riehecky wrote:

It is not possible to upgrade from the 5x branch to the 6x branch.  A
fresh
install is required.

Pat


i have to disagree with you on this issue.

iirc, my last 5x was 5.9. i first ran "yum update" and then ran "yum
upgrade"; both ran without any problems.

my /etc/redhat-release shows:

 Scientific Linux release 6.3 (Carbon)


i did not check to ensure that all packages are now at 6.3,


And just for this reason it is not recommended to try this. You might
get a system that runs, but you are not sure what exactly you have got.
And one of these days you will get strange problems that cannot be
reproduced elsewhere. We do not support such systems, and if a user who
has done that form of upgrade reports a problem, we tell him to properly
re-install his system first.



but i would
presume that they are.


Perhaps they are, perhaps not. You will waste much more time debugging
strange issues than you save by skipping a proper install. So why take
the risk?
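
For anyone who has already ended up with such a mixed system, one quick sanity check is to look for leftover 5.x packages by their release tag (a rough check, not a complete audit):

    # count, then list, packages still carrying an el5 release tag after the jump
    rpm -qa | grep -c '\.el5'
    rpm -qa | grep '\.el5' | sort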


Matthias