Re: [CentOS] Odd nfs mount problem [SOLVED]

2015-02-28 Thread Louis Lagendijk
On Fri, 2015-02-27 at 16:46 -0500, m.r...@5-cent.us wrote:
 m.r...@5-cent.us wrote:
  m.r...@5-cent.us wrote:
  I'm exporting a directory, firewall's open on both machines (one CentOS
  6.6, the other RHEL 6.6), it automounts on the exporting machine, but
  the
  other server, not so much.
 
  ls /mountpoint/directory eventually times out (directory being the NFS
  mount). mount -t nfs server:/location/being/exported /mnt works... but
  an
  immediate ls /mnt gives me stale file handle.
 
  The twist on this: the directory being exported is on an xfs
  filesystem...
  one that's 33TB (it's an external RAID 6 appliance).
 
  Any ideas?
 
  Oh, yes: I did just think to install xfs_progs, and did that, but still no
  joy.
 
 
 Since we got the RAID appliance mounted, we'd started with a project
 directory on it, and that exported just fine. So what seems to work was to
 put the new directory under that, and then export *that*.  That is,
 /path/to/ourproj, which mounts under /ourproj, and we wanted to mount
 something else under /otherproj, (note that ourproj is the large xfs
 filesystem), so instead of /path/to/otherproj, I just exported
 /path/to/ourproj/otherproj, and mounted that on the other system as
 /otherproj.
 
What NFS version are you using? V4? If so, have a look at the NFSv4
requirement to export the parent of your exports with fsid=0.
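
Something along these lines in /etc/exports is what I mean (the paths and
network below are only placeholders, not mark's actual setup):

# NFSv4 pseudo-root sketch
/exports           192.168.1.0/24(ro,fsid=0,crossmnt)
/exports/ourproj   192.168.1.0/24(rw,no_subtree_check)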

 Does that make sense? Clear as mud? Anyway, it looks like we have our
 workaround.
 
 mark, wishing nfs could handle an inode64 option

I have no experience with the combination of XFS and NFS, but it seems
to be possible; see:
http://xfs.org/index.php/XFS_FAQ#Q:_Why_doesn.27t_NFS-exporting_subdirectories_of_inode64-mounted_filesystem_work.3F
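
If I read that FAQ entry right, the workaround it describes boils down to
roughly this (device, mount point and fsid value are invented for
illustration):

# mount the large filesystem with 64-bit inodes
mount -o inode64 /dev/sdb1 /path/to

# older kernels may need an explicit fsid on subdirectory exports of an
# inode64 filesystem so the NFS file handles stay stable
# /etc/exports
/path/to/otherproj  *(rw,no_subtree_check,fsid=2)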

Louis



Re: [CentOS] OT: AF 4k sector drives with 512 emulation

2015-02-28 Thread Chris Murphy
On Sat, Feb 28, 2015 at 12:33 AM, Robert Arkiletian rob...@gmail.com wrote:
 According to this pdf [1] alignment is important but from what I understand
 512e emulation still has a small RMW performance hit from writes that are
 smaller than 4k or if the writes are not a multiple of 4k.

There shouldn't be writes smaller than 4KB, since ext2/3/4, XFS, and
Btrfs all use 4KB block sizes. There is a possible case where the XFS
journal writes are 512 bytes; this can be fixed by specifying a 4KB
sector size at mkfs time if it isn't auto-detected.
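
For example, something like this at mkfs time (the device name is made up):

# force 4 KiB sectors so XFS log writes are 4 KiB, not 512 bytes
mkfs.xfs -s size=4096 /dev/sdb1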

 Also it's probably not a good idea to mix 512e with 512n in a raid set.

Scrambled eggs mixed with yogurt? Offhand it doesn't seem like a bad
idea (won't kill me), even if it also may not be a good idea (sounds
suboptimal).

-- 
Chris Murphy


[CentOS-es] [OT] - High availability question

2015-02-28 Thread Diego Sanchez
Hello all,

I have to build a server setup that can handle approximately 10k visits per
day. The site is an e-commerce site, currently hosted on Yahoo!'s
infrastructure.

The site is going to be migrated to Magento.

My task is to design the servers that will support the setup, and the
requirement is that it run on DigitalOcean.

I have in mind:

proxy nginx _ nginx magento1 + mysql master
            |_ nginx magento2 + mysql slave1
            |_ [...]
            |_ nginx magento[n] + mysql slave[n]
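
Roughly, I imagine the proxy layer like this (just a sketch; the backend
addresses and names are invented):

# /etc/nginx/conf.d/magento-proxy.conf  (illustrative only)
upstream magento_backends {
    server 10.0.0.11;   # nginx + magento1 (mysql master)
    server 10.0.0.12;   # nginx + magento2 (mysql slave1)
    # ... more backends ...
}

server {
    listen 80;
    server_name shop.example.com;

    location / {
        proxy_pass http://magento_backends;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}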


Do you think this could be viable?
Should I put the database on separate servers to offload work and give
the web tier more capacity?

Any recommendations on tools?
(For example, glusterfs vs. rsync for handling the updates?)

Thanks

-- 
Diego - I'm not paranoid! (but they are following me, they are following me)


Re: [CentOS] Looking for a life-save LVM Guru

2015-02-28 Thread Chris Murphy
On Sat, Feb 28, 2015 at 4:29 PM, Valeri Galtsev
galt...@kicp.uchicago.edu wrote:

 You are implying that firmware of hardware RAID cards is somehow buggier
 than software of software RAID plus Linux kernel (sorry if I
 misinterpreted your point).

Drives and hardware RAID cards are subject to firmware bugs, just as
we have software bugs in the kernel. I make no assessment of how
common such bugs are relative to each other.

 I disagree: embedded system of RAID card and
 RAID function they have to fulfill are much simpler than everything
 involved into software RAID. Therefore, with the same effort invested,
 firmware of (good) hardware is less buggy.

There's no evidence provided for this. All I've stated is that bugs happen
in both software and the firmware on hardware RAID cards.
http://www.cs.toronto.edu/~bianca/papers/fast08.pdf

And further there's a widespread misperception that RAID56 (whether
software or hardware) is capable of detecting and correcting
corruption.


 And again, Linux kernel can be
 panicked more likely than trivial embedded system of hardware RAID
 card/box. At least my experience over decade and a half confirms that.

I'd say this is not a scientific sample and therefore unproven. I can
provide my own non-scientific sample: an XServe running OS X with
software RAID 1 that has never, in 8 years, kernel panicked. Its
longest uptime was over 500 days, and it was only rebooted for a
system upgrade that required it. There's nothing special about the
XServe that makes this magic; it's just good hardware with ECC memory,
enterprise SAS drives, and a capable though limited kernel. So there's
no good reason to expect kernel panics. Having them means something is
wrong.

 I have my raids verified once a week. If you don't
 verify them for a year, what happens then: you don't discover individual
 drive degradation until it is too late and larger number than the level of
 redundancy are kicked out because of fatal failures.

The lack of scrubbing is a common problem on software and hardware RAID
alike. Also recognize that software RAID tends to be paired with
cheaper drives that aren't well suited for RAID use, whereas people
spending money on hardware RAID tend to invest in appropriate drives,
whose proper SCT ERC settings prevent a whole class of problems.
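
To be concrete, the scrubbing and SCT ERC checks I'm talking about look
roughly like this on md software RAID (the array and drive names are
examples only):

# check / set the drive's SCT ERC (error recovery) timeout, in tenths of a second
smartctl -l scterc /dev/sda
smartctl -l scterc,70,70 /dev/sda

# start a scrub of the md array and watch it run
echo check > /sys/block/md0/md/sync_action
cat /proc/mdstat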

 Anyway, these horror stories were purely poor sysadmin's job IMHO.

I agree. This is common in any case.


-- 
Chris Murphy


Re: [CentOS] Looking for a life-save LVM Guru

2015-02-28 Thread Valeri Galtsev

On Sat, February 28, 2015 4:22 pm, Chris Murphy wrote:
 On Sat, Feb 28, 2015 at 1:26 PM, Valeri Galtsev
 galt...@kicp.uchicago.edu wrote:
 Indeed. That is why: no LVMs in my server room. Even no software RAID.
 Software RAID relies on the system itself to fulfill its RAID function;
 what if kernel panics before software RAID does its job? Hardware RAID
 (for huge filesystems I can not afford to back up) is what only makes
 sense for me. RAID controller has dedicated processors and dedicated
 simple system which does one simple task: RAID.

 Biggest problem is myriad defaults aren't very well suited for
 multiple device configurations. There are a lot of knobs in Linux and
 on the drives and in hardware RAID cards. None of this is that simple.

 Drives, and hardware RAID cards are subject to firmware bugs, just as
 we have software bugs in the kernel. We know firmware bugs cause
 corruption.

Speaking of which: I would only use good hardware cards, and only good
external RAID boxes. Over the last decade and a half I have never had
trouble due to RAID firmware bugs. What I use is:

1. 3ware (mostly)
2. LSI MegaRAID (a few; I don't like their user interface or their poor
notification abilities)
3. Areca (also a few; better UI than LSI's)

External RAID boxes: Infortrend

I will never go for cheap fake RAID (Adaptec is one, off the top of my
head). Also, it was not my choice, but I have had to deal with, hm... not so
good external RAID boxes: by Promise and by Raid.com, to mention two.

You are implying that the firmware of hardware RAID cards is somehow buggier
than the software RAID stack plus the Linux kernel (sorry if I
misinterpreted your point). I disagree: the embedded system of a RAID card
and the RAID function it has to fulfill are much simpler than everything
involved in software RAID. Therefore, with the same effort invested, the
firmware of (good) hardware is less buggy. And again, the Linux kernel is
more likely to panic than the trivial embedded system of a hardware RAID
card/box. At least my experience over a decade and a half confirms that.

I have heard horror stories from people who used the same good hardware I
mentioned (3ware). However, when I dug into the details in each case, I
discovered that they just didn't have everything set up correctly, which
is trivial as a matter of fact. Namely, the common mistake in all cases
was not setting up the RAID verify task (it is set at the RAID
configuration level). I have my RAIDs verified once a week. If you don't
verify them for a year, what happens is that you don't discover individual
drive degradation until it is too late, and more drives than the redundancy
level allows are kicked out because of fatal failures. Even then, once the
array is already not redundant, 3ware doesn't kick out newly failing drives;
it just makes the RAID read-only, so you can still salvage something. Anyway,
these horror stories were purely the result of a poor sysadmin job, IMHO.
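
To illustrate, the weekly verify can also be scheduled with something like
this if you prefer cron over 3ware's own scheduler (the controller/unit
numbers and the tw_cli path are just examples):

# /etc/cron.d/raid-verify -- verify unit 0 on controller 0 every Saturday night
0 2 * * 6  root  /usr/sbin/tw_cli /c0/u0 start verify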

 Not all hardware RAID cards are the same, some are total
 junk. Many others get you vendor lock in due to proprietary metadata
 written to the drives. You can't get your data off if the card dies,
 you have to buy a similar model card sometimes with the same firmware
 version in order to regain access.

I would not consider that a disadvantage. I have yet to see a dead 3ware
card (yes, you can burn one if you plug it into a slot with gross
misalignment, like a tilt). And with 3ware, a later model will accept drives
originally making up a RAID on an older model, only it will make the RAID
read-only, so you can salvage your data first and then re-create the RAID
with the new card's metadata standard. I guess I may have a different
philosophy than you do. If I use a RAID card, I choose a good one indeed.
Once I use the good one, I feel no need to move drives to a card made by a
different manufacturer. And one last, yet important, thing: if you have to
use these drives with a different card (even just a different model by the
same manufacturer), then you had better re-create the RAID from scratch on
the new card. If you value your data...

Just my $0.02

Valeri


Valeri Galtsev
Sr System Administrator
Department of Astronomy and Astrophysics
Kavli Institute for Cosmological Physics
University of Chicago
Phone: 773-702-4247



Re: [CentOS] Looking for a life-save LVM Guru

2015-02-28 Thread James A. Peltier


- Original Message -
| On Fri, 27 Feb 2015 19:24:57 -0800
| John R Pierce pie...@hogranch.com wrote:
|  On 2/27/2015 4:52 PM, Khemara Lyn wrote:
|  
|   What is the right way to recover the remaining PVs left?
|  
|  take a filing cabinet packed full of 10s of 1000s of files of 100s of
|  pages each,   with the index cards interleaved in the files, and
|  remove 1/4th of the pages in the folders, including some of the
|  indexes... and toss everything else on the floor...this is what
|  you have.   3 out of 4 pages, semi-randomly with no idea whats what.
| 
| And this is why I don't like LVM to begin with. If one of the drives
| dies, you're screwed not only for the data on that drive, but even for
| data on remaining healthy drives.
| 
| I never really saw the point of LVM. Storing data on plain physical
| partitions, having an intelligent directory structure and a few wise
| well-placed symlinks across the drives can go a long way in having
| flexible storage, which is way more robust than LVM. With today's huge
| drive capacities, I really see no reason to adjust the sizes of
| partitions on-the-fly, and putting several TB of data in a single
| directory is just Bad Design to begin with.
| 
| That said, if you have a multi-TB amount of critical data while not
| having at least a simple RAID-1 backup, you are already standing in a
| big pile of sh*t just waiting to become obvious, regardless of LVM and
| stuff. Hardware fails, and storing data without a backup is just simply
| a disaster waiting to happen.
| 
| Best, :-)
| Marko
| 

This is not an LVM vs. physical partitioning problem.  This is a "system 
component failed, wasn't being monitored, and now we're in deep doo-doo" 
problem.  This problem also came to us after many attempts to recover it that 
were likely done incorrectly.  If the disk were still at least partially 
accessible (monitoring would have caught that), there would be an increased 
chance of data recovery, although maybe not much better.

People who understand how to use the system do not suffer these problems.  LVM 
adds a bit of complexity for a bit of extra benefit.  You can't blame LVM for 
user error.  Not having monitoring in place or backups is a user problem, not 
an LVM one.

I have managed petabytes worth of data on LVM and not suffered this sort of 
problem (*knock on wood*), but I also know that I'm not immune to it.  I don't 
even use partitions for anything but system drives; I use whole-disk PVs to 
avoid things like partition alignment issues.  Not a single bit of data loss in 
7 years dealing with these servers either.  At least none that wasn't user 
error. ;)
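
For what it's worth, the whole-disk PV setup I mean is roughly this (device
and volume group names are examples):

# whole disk as PV -- no partition table, so no alignment to worry about
pvcreate /dev/sdb
vgcreate data_vg /dev/sdb
lvcreate -n projects -l 100%FREE data_vg
mkfs.xfs /dev/data_vg/projects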

-- 
James A. Peltier
IT Services - Research Computing Group
Simon Fraser University - Burnaby Campus
Phone   : 778-782-6573
Fax : 778-782-3045
E-Mail  : jpelt...@sfu.ca
Website : http://www.sfu.ca/itservices
Twitter : @sfu_rcg
Powering Engagement Through Technology
Build upon strengths, and weaknesses will generally take care of themselves - 
Joyce C. Lock



Re: [CentOS] Looking for a life-save LVM Guru

2015-02-28 Thread Chris Murphy
On Sat, Feb 28, 2015 at 1:26 PM, Valeri Galtsev
galt...@kicp.uchicago.edu wrote:
 Indeed. That is why: no LVMs in my server room. Even no software RAID.
 Software RAID relies on the system itself to fulfill its RAID function;
 what if kernel panics before software RAID does its job? Hardware RAID
 (for huge filesystems I can not afford to back up) is what only makes
 sense for me. RAID controller has dedicated processors and dedicated
 simple system which does one simple task: RAID.

The biggest problem is that myriad defaults aren't very well suited for
multiple-device configurations. There are a lot of knobs in Linux, on
the drives, and in hardware RAID cards. None of this is that simple.

Drives and hardware RAID cards are subject to firmware bugs, just as
we have software bugs in the kernel. We know firmware bugs cause
corruption. Not all hardware RAID cards are the same; some are total
junk. Many others get you vendor lock-in due to proprietary metadata
written to the drives. You can't get your data off if the card dies;
you have to buy a similar model card, sometimes with the same firmware
version, in order to regain access. Some cards support SNIA's DDF
format, in which case there's a chance mdadm can assemble the array,
should the hardware card die.
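
If a DDF-capable card does die, the recovery attempt with mdadm would look
roughly like this (device names invented, and no guarantee it works):

# look for DDF (or other) RAID metadata on the member disks
mdadm --examine /dev/sd[bcd]

# try to assemble from the on-disk metadata
mdadm --assemble --scan
cat /proc/mdstat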

Anyway, the main thing is knowing where the land mines are regardless
of what technology you pick. If you don't know where they are, you're
inevitably going to run into trouble with anything you choose.

-- 
Chris Murphy


Re: [CentOS] Looking for a life-save LVM Guru

2015-02-28 Thread Chris Murphy
On Sat, Feb 28, 2015 at 4:28 PM, James A. Peltier jpelt...@sfu.ca wrote:

 People who understand how to use the system do not suffer these problems.  
 LVM adds a bit of complexity for a bit of extra benefits.  You can't blame 
 LVM for user error.  Not having monitoring in place or backups is a user 
 problem, not an LVM one.

It's a good point. Suggesting the OP's problem is an example of why LVM
should not be used is like saying dropped laptops are a good example
of why laptops shouldn't be used.

A fair criticism is whether LVM should be used by default for single-disk
system installations. I've always been suspicious of this choice.
(But now even Apple does this on OS X by default, possibly as a
prelude to making full-volume encryption a default - their LVM
equivalent implements encryption as an LV-level attribute called a
"logical volume family".)

-- 
Chris Murphy


Re: [CentOS] Cyrus 2.4 and Centos6

2015-02-28 Thread Timothy Kesten
On Friday, 27 February 2015 at 16:38, Mike McCarthy, W1NR wrote:
 Is there a reason why you need 2.4 vs. the 2.3 package from the CentOS6
 repos?

We are using Outlook clients and need support for XLIST, which AFAIK was
implemented in 2.4.

Timothy


Re: [CentOS] Looking for a life-save LVM Guru

2015-02-28 Thread Valeri Galtsev

On Fri, February 27, 2015 10:00 pm, Marko Vojinovic wrote:
 On Fri, 27 Feb 2015 19:24:57 -0800
 John R Pierce pie...@hogranch.com wrote:
 On 2/27/2015 4:52 PM, Khemara Lyn wrote:
 
  What is the right way to recover the remaining PVs left?

 take a filing cabinet packed full of 10s of 1000s of files of 100s of
 pages each,   with the index cards interleaved in the files, and
 remove 1/4th of the pages in the folders, including some of the
 indexes... and toss everything else on the floor...this is what
 you have.   3 out of 4 pages, semi-randomly with no idea whats what.

 And this is why I don't like LVM to begin with. If one of the drives
 dies, you're screwed not only for the data on that drive, but even for
 data on remaining healthy drives.

 I never really saw the point of LVM. Storing data on plain physical
 partitions, having an intelligent directory structure and a few wise
 well-placed symlinks across the drives can go a long way in having
 flexible storage, which is way more robust than LVM. With today's huge
 drive capacities, I really see no reason to adjust the sizes of
 partitions on-the-fly, and putting several TB of data in a single
 directory is just Bad Design to begin with.

 That said, if you have a multi-TB amount of critical data while not
 having at least a simple RAID-1 backup, you are already standing in a
 big pile of sh*t just waiting to become obvious, regardless of LVM and
 stuff. Hardware fails, and storing data without a backup is just simply
 a disaster waiting to happen.


Indeed. That is why there are no LVMs in my server room, and no software RAID
either. Software RAID relies on the system itself to fulfill its RAID
function; what if the kernel panics before software RAID does its job?
Hardware RAID (for huge filesystems I cannot afford to back up) is the only
thing that makes sense for me. A RAID controller has dedicated processors and
a dedicated, simple system which does one simple task: RAID.

Just my $0.02

Valeri


Valeri Galtsev
Sr System Administrator
Department of Astronomy and Astrophysics
Kavli Institute for Cosmological Physics
University of Chicago
Phone: 773-702-4247



Re: [CentOS] Looking for a life-save LVM Guru

2015-02-28 Thread James A. Peltier
- Original Message -
| On Sat, Feb 28, 2015 at 4:28 PM, James A. Peltier jpelt...@sfu.ca wrote:
| 
|  People who understand how to use the system do not suffer these problems.
|  LVM adds a bit of complexity for a bit of extra benefits.  You can't
|  blame LVM for user error.  Not having monitoring in place or backups is a
|  user problem, not an LVM one.
| 
| It's a good point. Suggesting the OP's problem is an example why LVM
| should not be used, is like saying dropped laptops is a good example
| why laptops shouldn't be used.
| 
| A fair criticism is whether LVM should be used by default with single
| disk system installations. I've always been suspicious of this choice.
| (But now, even Apple does this on OS X by default, possibly as a
| prelude to making full volume encryption a default - their LVM
| equivalent implements encryption as an LV level attribute called
| logical volume family.)
| 
| --
| Chris Murphy

There is no difference between a single-disk system and a multi-disk system in 
terms of being able to dynamically resize volumes that reside on a volume 
group.  Having the ability to resize a volume to be either larger or smaller on 
demand is a really nice feature to have.  Did you make / too small while there 
is free space in /home, and you're using ext3/4?  Then simply resize the /home 
logical volume to be smaller and give the freed extents to /.  It's a pretty 
simple process; growing can even be done online, though shrinking an ext3/4 
filesystem requires unmounting it first.  This is just one example.  There are 
others, but this has nothing to do with the OP.
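
A rough sketch of that resize, assuming a VG named vg0 with LVs named home
and root (names, sizes, and paths are invented):

# shrink /home by 20G; an ext3/4 filesystem must be unmounted to shrink
umount /home
e2fsck -f /dev/vg0/home
lvreduce --resizefs -L -20G /dev/vg0/home
mount /home

# hand the freed extents to / and grow it online
lvextend --resizefs -l +100%FREE /dev/vg0/root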

Getting back to the OP, it would seem that you may be stuck in a position where 
you need to restore from backup.  Without further details on what exactly is 
happening, I fear you're not going to be able to recover.  I'd be available to 
talk off-list if needed.

-- 
James A. Peltier
IT Services - Research Computing Group
Simon Fraser University - Burnaby Campus
Phone   : 778-782-6573
Fax : 778-782-3045
E-Mail  : jpelt...@sfu.ca
Website : http://www.sfu.ca/itservices
Twitter : @sfu_rcg
Powering Engagement Through Technology
Build upon strengths, and weaknesses will generally take care of themselves - 
Joyce C. Lock



Re: [CentOS] Looking for a life-save LVM Guru

2015-02-28 Thread Chris Murphy
On Sat, Feb 28, 2015 at 5:59 PM, James A. Peltier jpelt...@sfu.ca wrote:
 There is no difference between a single disk system and a multi-disk system 
 in terms of being able to dynamically resize volumes that reside on a volume 
 group.  Having the ability to resize a volume to be either larger or smaller 
 on demand is a really nice feature to have.

I'll better qualify this. For CentOS it's a fine default, as it is for
Fedora Server. For Workstation and Cloud I think LVM overly
complicates things. More non-enterprise users get confused by LVM
than ever have a need to resize volumes.

  Did you make / too small and have space on home and you're using ext3/4 then 
 simply resize the home logical volume to be smaller and all the free extents 
 to /.  Pretty simple process really and it can be done online.

XFS doesn't support shrinking, only growing, and XFS is the CentOS 7 default.
The main advantage of LVM for CentOS system disks is the ability to use
pvmove to replace a drive online, rather than resizing. If Btrfs
stabilizes sufficiently for RHEL/CentOS 8, overall it's a win because
it meets the simple needs of mortal users and supports advanced
features for advanced users. (Ergo I think LVM is badass, but it's also
the storage equivalent of emacs - managing it is completely crazy.)
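
That pvmove-based replacement goes roughly like this (device and VG names
are examples):

# add the new disk to the volume group
pvcreate /dev/sdc
vgextend vg0 /dev/sdc

# migrate all extents off the old disk while the LVs stay online
pvmove /dev/sdb

# remove the old disk from the VG so it can be pulled
vgreduce vg0 /dev/sdb
pvremove /dev/sdb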

 This is just one example.  There are others, but this has nothing to do with 
 the OP.

 Getting back to the OP, it would seem that you may be stuck in a position 
 where you need to restore from backup.  Without having further details into 
 what exactly is happening I fear you're not going to be able to recover.  I'd 
 be available to talk off list if needed.

Yeah, my bad for partly derailing this thread. Hopefully the original
poster hasn't been scared off, not least of which may be due to my
bark about cross-posting being worse than my bite.

-- 
Chris Murphy