[CentOS] Install CentOS 6.3 to partitionable mdadm array

2012-11-07 Thread Hal Martin
Hello all,

I'm trying to install CentOS 6.3 to an mdadm partitionable array and
not having any luck.

The installer only allows me to create one file system per md device,
or specify the md device as a LVM physical volume. I don't want to do
either, I want to create one md device and create multiple partitions
on top of the md device.

I thought that perhaps the installer was preventing me from doing this
because it wasn't possible to boot off a partitionable mdadm array,
but through googling I don't believe this is the case. (See links at
the bottom of this email)

I've tried the graphical installer and the text-mode installer, and
neither gives me the ability to install to a partitionable mdadm
array.

Even more annoying is that if I manually create such an array,
partition it and create file systems, the installer will stop the
array when starting the partitioning utility, preventing me from
installing to the partitions I created.
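
(For reference, the manual setup I mean looks roughly like this; the
exact flags and device names are only illustrative:

  # create a partitionable array; --auto=part requests an md_dNN device
  mdadm --create /dev/md_d0 --auto=part --level=1 --raid-devices=2 /dev/sda /dev/sdb
  # partition the array, then put file systems on the partitions
  fdisk /dev/md_d0
  mkfs.ext4 /dev/md_d0p1
  mkfs.ext4 /dev/md_d0p2

It's partitions like those that the installer won't let me install to.)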

Is there an advanced mode in the installer where I can force it to
install to a partitionable md device, or is that not an option? I'd
like to avoid having multiple md devices on the same physical drives,
and unfortunately the application I'm installing doesn't play nicely
with LVM.

Thanks,
Hal

http://www.kernel.org/doc/Documentation/md.txt
http://www.miquels.cistron.nl/raid/
https://bugzilla.redhat.com/show_bug.cgi?id=447818


Re: [CentOS] Install CentOS 6.3 to partitionable mdadm array

2012-11-07 Thread Hal Martin
On Wed, Nov 7, 2012 at 10:15 AM, Nux! n...@li.nux.ro wrote:
 On 07/11/12 15:10, Hal Martin wrote:
 Hello all,

 I'm trying to install CentOS 6.3 to an mdadm partitionable array and
 not having any luck.

 The installer only allows me to create one file system per md device,
 or specify the md device as a LVM physical volume. I don't want to do
 either, I want to create one md device and create multiple partitions
 on top of the md device.

 I thought that perhaps the installer was preventing me from doing this
 because it wasn't possible to boot off a partitionable mdadm array,
 but through googling I don't believe this is the case. (See links at
 the bottom of this email)

 I've tried the graphical installer and the text-mode installer, and
 neither gives me the ability to install to a partitionable mdadm
 array.

 Even more annoying is that if I manually create such an array,
 partition it and create file systems, the installer will stop the
 array when starting the partitioning utility, preventing me from
 installing to the partitions I created.

 Is there an advanced mode in the installer where I can force it to
 install to a partitionable md device, or is that not an option? I'd
 like to avoid having multiple md devices on the same physical drives,
 and unfortunately the application I'm installing doesn't play nicely
 with LVM.

 Thanks,
 Hal

 Hello Hal,

 AFAIK installing onto a partitionable md is a hack that is not
 supported by upstream; it's cool, but when it breaks I hear it can be
 quite unpleasant to fix.
 What are your requirements exactly? Maybe we can advise on an
 alternative partitioning scheme, etc.

The software we're testing does not support being installed on LVM, so
if we want vendor support we need to install it on a partition.

Hardware RAID is going to be used for deployment, but for lab testing
we were hoping to use mdadm and avoid buying expensive RAID
controllers.

Creating multiple RAID arrays looks like the only supported solution,
although it's quite annoying, since partitionable md support would make
the setup much simpler.
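
If we do go the multi-array route, I assume the layout ends up looking
something like this (partition numbers and mount points are just an
example), i.e. the disks are partitioned first and each partition pair
becomes its own md device:

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # /boot
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # /
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3   # swap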

Was it ever supported upstream? Or did it fall out of favour when LVM
became a popular installation method?

-Hal






Re: [CentOS] disk device lvm iscsi questions ....

2012-11-07 Thread Hal Martin
Götz,

Did you remove the devices as well? I was working with an HP MSA, and
I ran into the errors you're seeing unless I removed the devices from
multipath before the firmware update. It happens because the old
devices are still known to multipath but no longer map to a LUN on the
target.

https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Online_Storage_Reconfiguration_Guide/removing_devices.html
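
Roughly, the procedure there boils down to something like this (the map
and device names below are just placeholders):

  # flush the stale multipath map, then delete the underlying SCSI devices
  multipath -f stale_map_name
  blockdev --flushbufs /dev/sdb
  echo 1 > /sys/block/sdb/device/delete

After that, re-logging into the iSCSI target should give you clean
devices to activate the VG on.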

-Hal

On Tue, Nov 6, 2012 at 5:14 AM, Götz Reinicke - IT Koordinator
goetz.reini...@filmakademie.de wrote:
 Hi,

 I have an iSCSI storage which I attached to a new CentOS 6.3 server. I
 added logical volumes as usual; the block devices (sdb and sdc) showed
 up in dmesg, and I can mount and access the stored files.

 Now we did a firmware/software update to that storage (while it was
 unmounted/detached from the fileserver), and after rebooting the
 storage and reattaching the iSCSI nodes I get new devices (sdd and sde).

 The last time, I remember a simple vgchange -ay changed all the needed
 settings so that I could mount and use the volumes as expected.

 But this time I do get an IO error like

   /dev/sun7110to_dali_LUN_3/lvol0: read failed after 0 of 4096 at
 1610608476160: Eingabe-/Ausgabefehler [Input/output error]

 but also:

   1 logical volume(s) in volume group sun7110to_dali_LUN_3 now active

 What might be messed up? What can I try to get the storage/filesystems
 back online?

 Thanks for any suggestions, and regards, Götz


 --
 Götz Reinicke - IT-Koordinator - Filmakademie Baden-Württemberg GmbH




[CentOS] Corosync init-script broken on CentOS6

2011-11-23 Thread Hal Martin
Hello all,

I am trying to create a corosync/pacemaker cluster using CentOS 6.0.
However, I'm having a great deal of difficulty doing so.

Corosync has a valid configuration file and an authkey has been generated.

When I run /etc/init.d/corosync I see that only corosync is started.
From experience working with corosync/pacemaker before, I know that
this is not enough to have a functioning cluster. For some reason the
base install (with or without updates) is not starting corosync
dependencies.
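
(A quick way to see what actually came up, for what it's worth:

  /etc/init.d/corosync start
  ps axf | egrep 'corosync|heartbeat'

Only the corosync process itself shows up.)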

I've even tried using corosync/pacemaker from the EPEL 6 repo, and
still the init-script will not start the corosync dependencies.

Expected:
corosync
/usr/lib64/heartbeat/stonithd
/usr/lib64/heartbeat/cib
/usr/lib64/heartbeat/lrmd
/usr/lib64/heartbeat/attrd
/usr/lib64/heartbeat/pengine

Observed:
corosync

My install options are:
%packages
@base
@core
@ha
@nfs-file-server
@network-file-system-client
@resilient-storage
@server-platform
@server-policy
@storage-client-multipath
@system-admin-tools
pax
oddjob
sgpio
pacemaker
dlm-pcmk
screen
lsscsi
-rgmanager
%end

The logs from the server aren't terribly helpful either:
Nov 23 12:13:45 cheapo4 corosync[2509]:   [pcmk  ] info: spawn_child:
Forked child 2515 for process stonith-ng
Nov 23 12:13:45 cheapo4 corosync[2509]:   [pcmk  ] info: spawn_child:
Forked child 2516 for process cib
Nov 23 12:13:45 cheapo4 corosync[2509]:   [pcmk  ] info: spawn_child:
Forked child 2517 for process lrmd
Nov 23 12:13:45 cheapo4 corosync[2509]:   [pcmk  ] info: spawn_child:
Forked child 2518 for process attrd
Nov 23 12:13:45 cheapo4 corosync[2509]:   [pcmk  ] info: spawn_child:
Forked child 2519 for process pengine
Nov 23 12:13:45 cheapo4 corosync[2509]:   [pcmk  ] info: spawn_child:
Forked child 2520 for process crmd
Nov 23 12:13:45 cheapo4 corosync[2509]:   [SERV  ] Service engine
loaded: Pacemaker Cluster Manager 1.1.2
Nov 23 12:13:45 cheapo4 corosync[2509]:   [SERV  ] Service engine
loaded: corosync extended virtual synchrony service
Nov 23 12:13:45 cheapo4 corosync[2509]:   [SERV  ] Service engine
loaded: corosync configuration service
Nov 23 12:13:45 cheapo4 corosync[2509]:   [SERV  ] Service engine
loaded: corosync cluster closed process group service v1.01
Nov 23 12:13:45 cheapo4 corosync[2509]:   [SERV  ] Service engine
loaded: corosync cluster config database access v1.01
Nov 23 12:13:45 cheapo4 corosync[2509]:   [SERV  ] Service engine
loaded: corosync profile loading service
Nov 23 12:13:45 cheapo4 corosync[2509]:   [SERV  ] Service engine
loaded: corosync cluster quorum service v0.1
Nov 23 12:13:45 cheapo4 corosync[2509]:   [MAIN  ] Compatibility mode
set to whitetank.  Using V1 and V2 of the synchronization engine.
Nov 23 12:13:46 cheapo4 corosync[2509]:   [pcmk  ] ERROR:
pcmk_wait_dispatch: Child process lrmd exited (pid=2517, rc=100)
Nov 23 12:13:46 cheapo4 corosync[2509]:   [pcmk  ] notice:
pcmk_wait_dispatch: Child process lrmd no longer wishes to be
respawned
Nov 23 12:13:46 cheapo4 corosync[2509]:   [pcmk  ] info:
update_member: Node cheapo4.jrz.cbn now has process list:
00111302 (1118978)
Nov 23 12:13:46 cheapo4 corosync[2509]:   [pcmk  ] ERROR:
pcmk_wait_dispatch: Child process cib exited (pid=2516, rc=100)
Nov 23 12:13:46 cheapo4 corosync[2509]:   [pcmk  ] notice:
pcmk_wait_dispatch: Child process cib no longer wishes to be respawned
Nov 23 12:13:46 cheapo4 corosync[2509]:   [pcmk  ] info:
update_member: Node cheapo4.jrz.cbn now has process list:
00111202 (1118722)
Nov 23 12:13:46 cheapo4 corosync[2509]:   [pcmk  ] ERROR:
pcmk_wait_dispatch: Child process crmd exited (pid=2520, rc=100)
Nov 23 12:13:46 cheapo4 corosync[2509]:   [pcmk  ] notice:
pcmk_wait_dispatch: Child process crmd no longer wishes to be
respawned
Nov 23 12:13:46 cheapo4 corosync[2509]:   [pcmk  ] info:
update_member: Node cheapo4.jrz.cbn now has process list:
00111002 (1118210)
Nov 23 12:13:46 cheapo4 corosync[2509]:   [pcmk  ] ERROR:
pcmk_wait_dispatch: Child process attrd exited (pid=2518, rc=100)
Nov 23 12:13:46 cheapo4 corosync[2509]:   [pcmk  ] notice:
pcmk_wait_dispatch: Child process attrd no longer wishes to be
respawned
Nov 23 12:13:46 cheapo4 corosync[2509]:   [pcmk  ] info:
update_member: Node cheapo4.jrz.cbn now has process list:
00110002 (1114114)
Nov 23 12:13:46 cheapo4 corosync[2509]:   [pcmk  ] ERROR:
pcmk_wait_dispatch: Child process pengine exited (pid=2519, rc=100)
Nov 23 12:13:46 cheapo4 corosync[2509]:   [pcmk  ] notice:
pcmk_wait_dispatch: Child process pengine no longer wishes to be
respawned
Nov 23 12:13:46 cheapo4 corosync[2509]:   [pcmk  ] info:
update_member: Node cheapo4.jrz.cbn now has process list:
0012 (1048578)
Nov 23 12:13:46 cheapo4 corosync[2509]:   [pcmk  ] ERROR:
pcmk_wait_dispatch: Child process stonith-ng exited (pid=2515, rc=100)
Nov 23 12:13:46 cheapo4 corosync[2509]:   [pcmk  ] notice:
pcmk_wait_dispatch: Child process stonith-ng no longer 

Re: [CentOS] Corosync init-script broken on CentOS6

2011-11-23 Thread Hal Martin
On 23.11.2011 14:24, Michel van Deventer mic...@van.deventer.cx wrote:

 Hi,

 Did you configure corosync? Normally corosync starts pacemaker, which
 in turn starts the heartbeat daemons.
 But you have to configure the latter using, for example, a pcmk file
 with the configuration in /etc/corosync/conf.d/ (from the top of my
 head).

Yes. /etc/corosync/services.d/pcmk is configured as per that document;
still not getting the expected start behaviour.

Corosync does start properly, just not via the init-script.
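
For reference, the pcmk service file here contains the usual stanza,
roughly like this (judging by the log above, corosync is spawning the
daemons itself, i.e. the ver: 0 behaviour):

  service {
      # with ver: 0 corosync spawns the pacemaker daemons itself;
      # with ver: 1 pacemakerd is expected to start them instead
      name: pacemaker
      ver:  0
  }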

Thanks,
-Hal

 I normally use :

 http://www.clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Clusters_from_Scratch/

 Regards,

 Michel

 On Wed, 2011-11-23 at 12:16 -0500, Hal Martin wrote:
  Hello all,
 
  I am trying to create a corosync/pacemaker cluster using CentOS 6.0.
  However, I'm having a great deal of difficulty doing so.
 
  Corosync has a valid configuration file and an authkey has been
generated.
 
  When I run /etc/init.d/corosync I see that only corosync is started.
  From experience working with corosync/pacemaker before, I know that
  this is not enough to have a functioning cluster. For some reason the
  base install (with or without updates) is not starting corosync
  dependencies.
 
   I've even tried using corosync/pacemaker from the EPEL 6 repo, and
   still the init-script will not start the corosync dependencies.
 
  Expected:
  corosync
  /usr/lib64/heartbeat/stonithd
  /usr/lib64/heartbeat/cib
  /usr/lib64/heartbeat/lrmd
  /usr/lib64/heartbeat/attrd
  /usr/lib64/heartbeat/pengine
 
  Observed:
  corosync
 
  My install options are:
  %packages
  @base
  @core
  @ha
  @nfs-file-server
  @network-file-system-client
  @resilient-storage
  @server-platform
  @server-policy
  @storage-client-multipath
  @system-admin-tools
  pax
  oddjob
  sgpio
  pacemaker
  dlm-pcmk
  screen
  lsscsi
  -rgmanager
  %end
 
  The logs from the server aren't terribly helpful either:
  Nov 23 12:13:45 cheapo4 corosync[2509]:   [pcmk  ] info: spawn_child:
  Forked child 2515 for process stonith-ng
  Nov 23 12:13:45 cheapo4 corosync[2509]:   [pcmk  ] info: spawn_child:
  Forked child 2516 for process cib
  Nov 23 12:13:45 cheapo4 corosync[2509]:   [pcmk  ] info: spawn_child:
  Forked child 2517 for process lrmd
  Nov 23 12:13:45 cheapo4 corosync[2509]:   [pcmk  ] info: spawn_child:
  Forked child 2518 for process attrd
  Nov 23 12:13:45 cheapo4 corosync[2509]:   [pcmk  ] info: spawn_child:
  Forked child 2519 for process pengine
  Nov 23 12:13:45 cheapo4 corosync[2509]:   [pcmk  ] info: spawn_child:
  Forked child 2520 for process crmd
  Nov 23 12:13:45 cheapo4 corosync[2509]:   [SERV  ] Service engine
  loaded: Pacemaker Cluster Manager 1.1.2
  Nov 23 12:13:45 cheapo4 corosync[2509]:   [SERV  ] Service engine
  loaded: corosync extended virtual synchrony service
  Nov 23 12:13:45 cheapo4 corosync[2509]:   [SERV  ] Service engine
  loaded: corosync configuration service
  Nov 23 12:13:45 cheapo4 corosync[2509]:   [SERV  ] Service engine
  loaded: corosync cluster closed process group service v1.01
  Nov 23 12:13:45 cheapo4 corosync[2509]:   [SERV  ] Service engine
  loaded: corosync cluster config database access v1.01
  Nov 23 12:13:45 cheapo4 corosync[2509]:   [SERV  ] Service engine
  loaded: corosync profile loading service
  Nov 23 12:13:45 cheapo4 corosync[2509]:   [SERV  ] Service engine
  loaded: corosync cluster quorum service v0.1
  Nov 23 12:13:45 cheapo4 corosync[2509]:   [MAIN  ] Compatibility mode
  set to whitetank.  Using V1 and V2 of the synchronization engine.
  Nov 23 12:13:46 cheapo4 corosync[2509]:   [pcmk  ] ERROR:
  pcmk_wait_dispatch: Child process lrmd exited (pid=2517, rc=100)
  Nov 23 12:13:46 cheapo4 corosync[2509]:   [pcmk  ] notice:
  pcmk_wait_dispatch: Child process lrmd no longer wishes to be
  respawned
  Nov 23 12:13:46 cheapo4 corosync[2509]:   [pcmk  ] info:
  update_member: Node cheapo4.jrz.cbn now has process list:
  00111302 (1118978)
  Nov 23 12:13:46 cheapo4 corosync[2509]:   [pcmk  ] ERROR:
  pcmk_wait_dispatch: Child process cib exited (pid=2516, rc=100)
  Nov 23 12:13:46 cheapo4 corosync[2509]:   [pcmk  ] notice:
  pcmk_wait_dispatch: Child process cib no longer wishes to be respawned
  Nov 23 12:13:46 cheapo4 corosync[2509]:   [pcmk  ] info:
  update_member: Node cheapo4.jrz.cbn now has process list:
  00111202 (1118722)
  Nov 23 12:13:46 cheapo4 corosync[2509]:   [pcmk  ] ERROR:
  pcmk_wait_dispatch: Child process crmd exited (pid=2520, rc=100)
  Nov 23 12:13:46 cheapo4 corosync[2509]:   [pcmk  ] notice:
  pcmk_wait_dispatch: Child process crmd no longer wishes to be
  respawned
  Nov 23 12:13:46 cheapo4 corosync[2509]:   [pcmk  ] info:
  update_member: Node cheapo4.jrz.cbn now has process list:
  00111002 (1118210)
  Nov 23 12:13:46 cheapo4 corosync[2509]:   [pcmk  ] ERROR:
  pcmk_wait_dispatch: Child process attrd exited (pid=2518, rc=100)
  Nov 23 12:13:46 cheapo4 corosync[2509

Re: [CentOS] how long to reboot server ?

2010-09-02 Thread Hal Martin
Unless you have zombie processes or are upgrading the kernel, IMHO
there is no reason to reboot.
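
(A quick way to check whether a pending kernel update is a reason to
reboot, for example, is to compare the running kernel with the newest
installed one:

  uname -r
  rpm -q --last kernel | head -1

If they differ, a reboot picks up the new kernel; otherwise there's
little to gain.)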

-Hal

On Thu, Sep 2, 2010 at 1:18 PM, Tim Nelson tnel...@rockbochs.com wrote:
 - mcclnx mcc mcc...@yahoo.com.tw wrote:
 We have CentOS 5 on Dell servers. Some servers have gone longer than a
 year without a reboot. Our consultant suggests we need to reboot at
 least once a year to clean out memory junk.

 What is your opinion?


 If you're running a Windows server, yes, a periodic reboot is necessary to
 'clean it out'. However, in Linux land, this is not typically necessary as a
 'rule'. You could certainly be running applications with memory leaks or
 other special circumstances that warrant a clean boot.

 I have several Linux boxes running a variety of flavors including CentOS,
 Debian, and even Red Hat (think old 8.x/9.x days) with uptimes ranging from
 13 months to over two years. They're running perfectly without the 'yearly
 reboot'.

 --Tim


Re: [CentOS] real SATA RAID

2009-02-09 Thread Hal Martin
As many people have stated in this thread, 3ware and Areca make some
good hardware RAID solutions.

Software RAID using mdadm works too, and quite well, I might add. I was
one of those people who stayed away from software RAID in Linux,
thinking it was too complicated and difficult; I was wrong. It's dead
easy to do, and expanding your disks can be done on the fly (if you have
a SATA controller that supports hot-plugging, as many do these days).
Perhaps the best thing about software RAID, in my opinion, is that it's
light enough on the machine that even a PIII server can read from and
write to the array at 10MB/s.
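
(To give an idea of how simple the on-the-fly part is, adding a
hot-plugged disk to an existing mirror is roughly:

  mdadm --add /dev/md0 /dev/sdc
  mdadm --grow /dev/md0 --raid-devices=3

Device names are just an example, of course.)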

Now you might say: 10MB/s, who cares? That's small stuff. But when
that's the load the network infrastructure at this location was designed
to handle, you don't need anything with higher performance.

Just my two cents.
-Hal

Christopher Chan wrote:
 Paolo Supino wrote:
   
 Hi 

  Last week I had a lengthy thread in which someone indicated that my SIL
  card is a FRAID (don't know if F stands for the F word or Fake, though
  it doesn't really matter). I want to replace it with a controller where
  Linux will see the RAID1 group as a single HD and not as multiple HDs,
  as happens with the SIL controller. Recommendations, anyone?

 

 Adaptec RAID 2405, 3ware, Areca




Re: [CentOS] RAID 1 Post Install

2008-12-17 Thread Hal Martin
Just along these lines, would it be possible for me to break RAID 1 on
the two internal drives into RAID 0 and then mirror that new RAID 0
array onto a SATA drive using RAID 1, without losing any data?

I used JFS as the file system for the RAID 1 array, so that may have to
be changed to XFS, as you cannot dynamically expand JFS to the best of
my knowledge.

-Hal


Kai Schaetzl wrote:
 Tom Brown wrote on Tue, 02 Dec 2008 17:29:06 +:

   
 http://tldp.org/HOWTO/Software-RAID-HOWTO-5.html
 

 unfortunately that and the mini-howto are both very much outdated. Much of
 the stuff it mentions (like mkraid, /etc/raidtab) is not part of the
 distro anymore. You use mdadm nowadays. Those parts that contain mdadm
 commands are still valid.

 Does this Silicon Image SATA controller not include Hardware RAID by 
 chance? 
 The basic steps for software-RAID are:
 - decide about the partitions the RAID devices will be based on
 - if it is used only for data, you probably want to have just one RAID
 partition:
 mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
 (I assume you can use sda and sdb. I always use several RAID partitions, 
 so I never used the whole disk.)
 - put LVM on it:
   pvcreate /dev/md0
   vgcreate myvolumegroupname /dev/md0
   - creates vg myvolumegroupname on it
 - start adding your logical volumes:
   lvcreate -L50G --name myname myvolumegroupname
   - adds a 50G logical volume named myname
   - format that lv:
 mkfs.ext3 /dev/myvolumegroupname/myname
 - copy any data you like from the old drives
 - add the mount points to fstab
 - if you don't boot from the RAID partition you are done now


 Kai

   
