[CentOS] Intel DP55WG centos 5.5 support?

2010-10-18 Thread Coert Waagmeester
Hello all,

I have looked around on the HCL and on other hardware sites.

Do any of you have experience with CentOS 5.5 64-bit on these motherboards?

Regards,
Coert Waagmeester


Re: [CentOS] LARTC and CentOS question

2010-09-09 Thread Coert Waagmeester
Kahlil Hodgson wrote:
 On 08/09/10 19:26, Coert Waagmeester wrote:
 Could someone point me in the right direction to CentOS/Red Hat-specific 
 documentation on the whole /etc/sysconfig/network* setup?
 
 
 Might want to have a look at
 
 /usr/share/doc/initscripts-8.45.30/sysconfig.txt
 
 Kal

This is indeed the starting point I was looking for.

Thanks.


[CentOS] LARTC and CentOS question

2010-09-08 Thread Coert Waagmeester
Hello all,

I got myself the Linux Advanced Routing & Traffic Control (LARTC) HOWTO:
http://lartc.org/howto/

None of the commands in the guide survive reboots.

Could someone point me in the right direction to CentOS/Red Hat-specific 
documentation on the whole /etc/sysconfig/network* setup?
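
For what it's worth, the initscripts replay LARTC-style routes and rules at
boot from per-interface files; a minimal sketch (interface name, addresses,
and table number are assumptions, not from this thread):

/etc/sysconfig/network-scripts/route-eth0:
192.168.50.0/24 via 10.0.0.1 table 100

/etc/sysconfig/network-scripts/rule-eth0:
from 192.168.50.0/24 table 100

The lines are handed to 'ip route add' and 'ip rule add' respectively, so
they survive reboots and 'service network restart'.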


Kind regards,
Coert Waagmeester


[CentOS] drbd xen question

2009-08-20 Thread Coert Waagmeester
Hello all,


I am running drbd protocol A to a secondary machine to have 'backups' of
my xen domUs.

Is it necessary to change the xen domU configs to use /dev/drbd*
instead of the LVM volume that drbd mirrors and that the xen domU runs
off?
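
For reference, the change in question is a one-line edit in the domU config;
a hedged sketch using the phy: disk syntax that appears elsewhere in this
archive (device names assumed):

# before: the domU writes to the backing LV, bypassing drbd
disk = [ "phy:/dev/vg0/xenfilesrv,xvda,w" ]
# after: the domU writes through the drbd device, so writes replicate
disk = [ "phy:/dev/drbd1,xvda,w" ]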


regards,
Coert



Re: [CentOS] drbd xen question

2009-08-20 Thread Coert Waagmeester

On Thu, 2009-08-20 at 09:38 -0600, Alan Sparks wrote:
 Ross Walker wrote:
  On Aug 20, 2009, at 10:22 AM, Coert Waagmeester lgro...@waagmeester.co.za 
wrote:

  Hello all,
 
 
  I am running drbd protocol A to a secondary machine to have  
  'backups' of
  my xen domUs.
 
  Is it necessary to change the xen domU configs to use /dev/drbd*
  instead of the LVM volume that drbd mirrors and that the xen domU runs
  off?
  
 
  Yes, otherwise the data won't be replicated and your drbd volume will  
  be inconsistent and need to be resynced.
 
  -Ross
 
 
 To be clear, are you saying you have a DRBD partition on both host
 machines, and LVM on top of that to allocate LVs for host storage?
 
 You would not want to bypass the LVM layer in that case.  The hosts
 would still be configured to map the LV devices into the domUs.  You
 need to go through the LVM layer, which uses the DRBD partition as a
 physical block device.  The writes down through the DRBD layer will
 still be replicated.
 -Alan
 
Hello Alan,

This is my current setup (top to bottom):

  Xen domU
  DRBD
  LVM volume
  RAID 1

What I first wanted to do was:

  Xen domU | DRBD
  LVM volume
  RAID 1

Is this possible, or is it not recommended?

Regards,
Coert



[CentOS] OT: eclipse on x86_64 compile for x86?

2009-08-14 Thread Coert Waagmeester
Hello all,

I have installed Eclipse 3.5 x86_64 from the eclipse site, with CDT and Qt
integration.


I am just starting to learn C++, but I would like to know how to set up
the ability to compile for 32-bit as well.

At the moment I am googling this as well.
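
For reference, a minimal sketch of a 32-bit compile on x86_64 CentOS
(assumes the 32-bit glibc-devel and libstdc++-devel packages are installed;
file names are made up):

g++ -m32 -o hello32 hello.cpp
file hello32   # should report a 32-bit ELF executable

In Eclipse/CDT the same -m32 flag can be added to the project's compiler
and linker settings.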


Regards,
Coert



Re: [CentOS-virt] Can I bridge a bonded and vlan tagged interface directly to a guest?

2009-08-03 Thread Coert Waagmeester

On Fri, 2009-07-31 at 11:08 -0400, David Knierim wrote:
 I am running CentOS 5.3 x86_64 as my dom0 and CentOS 5.3 on my domU's.
 On the dom0, I have two interfaces that are bonded and have tagged
 VLANs.   I can get the networks to the domU's by creating a bridge for
 each of the VLANS (bond0.3, bond0.4, etc).   On the domU, the
 interfaces show up as eth0, eth1, etc.
 
 Is there a way to set up the network on the dom0 so my domU's see a
 single interface with tagged VLAN support??   
 
 Thanks!
David

Hello David,

Sorry this is not an answer to your question, but how did you set up the
bonds with xen?

I tried doing the same, and did not succeed.


Regards,

Coert



Re: [CentOS] [SOLVED sort of] DRBD very slow....

2009-07-30 Thread Coert Waagmeester

On Wed, 2009-07-29 at 11:42 +0400, Roman Savelyev wrote:
 No way in 8.2
 It's a socket option, managed well in 8.3 and later releases.
 If you don't have a large amount of very small synchronous writes, you don't 
 need it.
 - Original Message - 
 From: Coert Waagmeester lgro...@waagmeester.co.za
 To: CentOS mailing list centos@centos.org
 Sent: Monday, July 27, 2009 10:30 AM
 Subject: Re: [CentOS] DRBD very slow
 
 
 
  On Mon, 2009-07-27 at 10:18 +0400, Roman Savelyev wrote:
   Invest in a HW RAID card with NVRAM cache that will negate the need
   for barrier writes from the OS as the controller will issue them async
   from cache allowing I/O to continue flowing. This really is the safest
   method.
  It's a better way. But socket options in DRBD up to 8.2 (Nagle algorithm)
  can decrease performance with a large amount of small synchronous writes.
 
 
  Hello Roman,
 
   I am running drbd 8.2.6 (the standard CentOS version).
  
   How do I disable the Nagle algorithm?
 

Hi all,

Just want to thank you for all your help on this so far.
We are now using that server for something else, so at the moment my
DRBD plans are on hold. After playing around with the send/receive buffer
and max-buffers settings, I did manage to crank the speed up to 10 MB/sec.


Thanks again for all your help,
Coert



Re: [CentOS] DRBD very slow....

2009-07-28 Thread Coert Waagmeester

On Mon, 2009-07-27 at 18:18 -0400, Ross Walker wrote:
 On Jul 27, 2009, at 4:09 PM, Coert Waagmeester
 lgro...@waagmeester.co.za wrote:
 
 
 
 
  
  On Mon, 2009-07-27 at 12:37 +0200, Coert Waagmeester wrote:
   On Mon, 2009-07-27 at 12:02 +0200, Alexander Dalloz wrote:
 
 On Mon, 2009-07-27 at 08:30 +0200, Coert Waagmeester wrote:

  Hello Roman,
  
   I am running drbd 8.2.6 (the standard CentOS version).

have you considered testing the drbd-8.3 packages?

http://bugs.centos.org/view.php?id=3598

http://dev.centos.org/centos/5/testing/{i386,x86_64}/RPMS/

   
    Thank you very much for this tip! It was one very obvious place
    where I had not yet looked.
   
   
    Would it still be necessary to recompile it for TCP_NODELAY and
    such?
   
   I am just making sure, because
   http://www.nabble.com/Huge-latency-issue-with-8.2.6-td18947965.html
   makes it seem unnecessary.
   
   Why do the repositories provide both DRBD 8.0.x and 8.2.6?
   
  
  Here is a status update
  ___
  on both hosts I now run from the testing repository:
  # rpm -qa | grep drbd
  drbd83-8.3.1-5.el5.centos
  kmod-drbd83-xen-8.3.1-4.el5.centos
  ___
  Here is my config (slightly condensed):
  -
  global {
   usage-count yes;
  }
  common {
   protocol C;
   syncer { rate 50M; }
    net {
      # allow-two-primaries;
      sndbuf-size 0;
    }
    # disk { no-disk-flushes;
    #        no-md-flushes; }
    startup { wfc-timeout 0; }
  }
  resource xenfilesrv {
   device    /dev/drbd1;
   disk      /dev/vg0/xenfilesrv;
   meta-disk internal;
  
   on baldur.mydomain.local {
 address   10.99.99.1:7788;
   }
   on thor.mydomain.local {
 address   10.99.99.2:7788;
   }
  }
  resource xenfilesrvdata {
   device    /dev/drbd2;
   disk      /dev/vg0/xenfilesrvdata;
   meta-disk internal;
  
   on baldur.mydomain.local {
 address   10.99.99.1:7789;
   }
   on thor.mydomain.local {
 address   10.99.99.2:7789;
   }
  }
  ___
  
   xenfilesrv is a xen domU;
   in this domU I ran dd with oflag=direct:
  -
  # dd if=/dev/zero of=1gig.file bs=1M count=1000 oflag=direct
  1000+0 records in
  1000+0 records out
  1048576000 bytes (1.0 GB) copied, 147.997 seconds, 7.1 MB/s
  
   Just before I ran the dd this popped up in the secondary host's
   syslog:
  --
  Jul 27 21:51:42 thor kernel: drbd2: Method to ensure write ordering:
  flush
  Jul 27 21:51:42 thor kernel: drbd1: Method to ensure write ordering:
  flush
  
  
  ___
  
  What more can I try?
  
   To be quite honest, I have no idea what to do with, or where to find,
   the TCP_NODELAY socket options...
  
 
 
 Use drbd option to disable flush/sync, but understand that during a
 power failure or system crash data will not be consistent on disk and
 you will need to sync the storage from the other server.
 
 
 -Ross
 
 
 

That also does not really make a difference.
According to DRBD everything goes into barrier mode.

I still get speed of around 7.5 MB/sec

In the config I now have this:
disk { no-disk-barrier;
       no-disk-flushes;
       no-md-flushes; }

according to /proc/drbd it then goes into 'drain' mode.

I still get only 8MB/sec throughput.

Would it be unwise to consider using Protocol A?

I have just tried Protocol A, and I also only get 8 MB/sec.
But if I disconnect the secondary node and do the dd again, I get
32 MB/sec!


P.S. I sent another mail with an attachment; I have a feeling it is
being held for moderation, though.



Re: [CentOS] DRBD very slow....

2009-07-27 Thread Coert Waagmeester

On Mon, 2009-07-27 at 10:18 +0400, Roman Savelyev wrote:
  Invest in a HW RAID card with NVRAM cache that will negate the need
  for barrier writes from the OS as the controller will issue them async
  from cache allowing I/O to continue flowing. This really is the safest
  method.
 It's a better way. But socket options in DRBD up to 8.2 (Nagle algorithm) 
 can decrease performance with a large amount of small synchronous writes. 
 

Hello Roman,

I am running drbd 8.2.6 (the standard CentOS version).

How do I disable the Nagle algorithm?



Re: [CentOS] DRBD very slow....

2009-07-27 Thread Coert Waagmeester

On Mon, 2009-07-27 at 08:30 +0200, Coert Waagmeester wrote:
 On Mon, 2009-07-27 at 10:18 +0400, Roman Savelyev wrote:
   Invest in a HW RAID card with NVRAM cache that will negate the need
   for barrier writes from the OS as the controller will issue them async
   from cache allowing I/O to continue flowing. This really is the safest
   method.
  It's a better way. But socket oprions in DRBD up to 8.2 (Nagel alghoritm) 
  can decrease performance in large amount of small syncronius writes. 
  
 
 Hello Roman,
 
  I am running drbd 8.2.6 (the standard CentOS version).
  
  How do I disable the Nagle algorithm?

On Google I found the following page:
http://www.nabble.com/Huge-latency-issue-with-8.2.6-td18947965.html

I have found in the drbdsetup (8) man page the sndbuf-size option, and I
will try setting this.

On the nabble page they talk about the TCP_NODELAY and TCP_QUICKACK
socket options. Do these have to do with the Nagle algorithm?

Where do I set these socket options? Do I have to compile drbd with
them?
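
For what it's worth, no recompile should be needed with the 8.3 packages,
which are said to set TCP_NODELAY themselves (per the nabble thread above
and Roman's note); the buffer tunables live in the net section of drbd.conf.
A hedged sketch (values are illustrations, not recommendations):

net {
    sndbuf-size 512k;
    max-buffers 8000;
    max-epoch-size 8000;
}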

 



Re: [CentOS] DRBD very slow....

2009-07-27 Thread Coert Waagmeester

On Mon, 2009-07-27 at 12:02 +0200, Alexander Dalloz wrote:
 
  On Mon, 2009-07-27 at 08:30 +0200, Coert Waagmeester wrote:
 
  Hello Roman,
 
  I am running drbd 8.2.6 (the standard CentOS version).
 
 
 Hi,
 
 have you considered testing the drbd-8.3 packages?
 
 http://bugs.centos.org/view.php?id=3598
 
 http://dev.centos.org/centos/5/testing/{i386,x86_64}/RPMS/
 
 Best regards
 
 Alexander
 
 
 

Thank you very much for this tip! It was one very obvious place where I
had not yet looked.


Would it still be necessary to recompile it for TCP_NODELAY and
such?

I am just making sure, because
http://www.nabble.com/Huge-latency-issue-with-8.2.6-td18947965.html
makes it seem unnecessary.

Why do the repositories provide both DRBD 8.0.x and 8.2.6?

Thank you all again,
Coert



Re: [CentOS] DRBD very slow....

2009-07-27 Thread Coert Waagmeester

On Mon, 2009-07-27 at 12:37 +0200, Coert Waagmeester wrote:
 On Mon, 2009-07-27 at 12:02 +0200, Alexander Dalloz wrote:
  
   On Mon, 2009-07-27 at 08:30 +0200, Coert Waagmeester wrote:
  
   Hello Roman,
  
   I am running drbd 8.2.6 (the standard CentOS version).
  
  
  Hi,
  
  have you considered testing the drbd-8.3 packages?
  
  http://bugs.centos.org/view.php?id=3598
  
  http://dev.centos.org/centos/5/testing/{i386,x86_64}/RPMS/
  
  Best regards
  
  Alexander
  
  
  
 
  Thank you very much for this tip! It was one very obvious place where I
  had not yet looked.
 
 
  Would it still be necessary to recompile it for TCP_NODELAY and
  such?
 
 I am just making sure, because
 http://www.nabble.com/Huge-latency-issue-with-8.2.6-td18947965.html
 makes it seem unnecessary.
 
 Why do the repositories provide both DRBD 8.0.x and 8.2.6?
 
 Thank you all again,
 Coert
 



Hello all,

Here is a status update
___
on both hosts I now run from the testing repository:
# rpm -qa | grep drbd
drbd83-8.3.1-5.el5.centos
kmod-drbd83-xen-8.3.1-4.el5.centos
___
Here is my config (slightly condensed):
-
global {
  usage-count yes;
}
common {
  protocol C;
  syncer { rate 50M; }
  net {
    # allow-two-primaries;
    sndbuf-size 0;
  }
  # disk { no-disk-flushes;
  #        no-md-flushes; }
  startup { wfc-timeout 0; }
}
resource xenfilesrv {
  device    /dev/drbd1;
  disk      /dev/vg0/xenfilesrv;
  meta-disk internal;

  on baldur.mydomain.local {
address   10.99.99.1:7788;
  }
  on thor.mydomain.local {
address   10.99.99.2:7788;
  }
}
resource xenfilesrvdata {
  device    /dev/drbd2;
  disk      /dev/vg0/xenfilesrvdata;
  meta-disk internal;

  on baldur.mydomain.local {
address   10.99.99.1:7789;
  }
  on thor.mydomain.local {
address   10.99.99.2:7789;
  }
}
___

xenfilesrv is a xen domU;
in this domU I ran dd with oflag=direct:
-
# dd if=/dev/zero of=1gig.file bs=1M count=1000 oflag=direct
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 147.997 seconds, 7.1 MB/s

Just before I ran the dd this popped up in the secondary host's syslog:
--
Jul 27 21:51:42 thor kernel: drbd2: Method to ensure write ordering:
flush
Jul 27 21:51:42 thor kernel: drbd1: Method to ensure write ordering:
flush


___

What more can I try?

To be quite honest, I have no idea what to do with, or where to find, the
TCP_NODELAY socket options...


Kind regards,
Coert




Re: [CentOS] DRBD very slow....

2009-07-26 Thread Coert Waagmeester

On Fri, 2009-07-24 at 09:27 -0400, Ross Walker wrote:
 On Jul 24, 2009, at 3:28 AM, Coert Waagmeester lgro...@waagmeester.co.za 
   wrote:
 
 
  On Fri, 2009-07-24 at 10:21 +0400, Roman Savelyev wrote:
  1. You are hit by the Nagle algorithm (slow TCP response). You can
  build DRBD 8.3. In 8.3 TCP_NODELAY and QUICK_RESPONSE are implemented in place.
  2. You are hit by the DRBD protocol. In most cases, B is enough.
  3. You are hit by triple barriers. In most cases you need only one of
  barrier, flush, drain - see the documentation; it depends on the type of
  storage hardware.
 
 
  I have googled the triple barriers thing but can't find that much
  information.
 
  Would it help if I used IPv6 instead of IPv4?
 
 Triple barriers wouldn't affect you as this is on top of LVM and LVM  
 doesn't support barriers, so it acts like a filter for them. Not good,  
 but that's the state of things.
 
 I would have run the dd tests locally and not with netcat, the idea is  
 to take the network out of the picture.
 
I have run the dd again locally.

It writes to an LVM volume on top of Software RAID 1 mounted in dom0:
# dd if=/dev/zero of=/mnt/data/1gig.file oflag=direct bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 24.3603 seconds, 43.0 MB/s

 Given the tests though it looks like the disks have their write caches  
 disabled which cripples them, but with LVM filtering barriers, it's  
 the safest configuration.
 
 The way to get fast and safe is to use partitions instead of logical  
 volumes. If you need more then 4 then use GPT partition table which  
 allows up to 256 I believe. Then you can enable the disk caches as  
 drbd will issue barrier writes to assure consistency (hmmm maybe the  
 barrier problem is with devmapper which means software RAID will be a  
 problem too? Need to check that).

I am reading up on GPT, and that seems like a viable option.
Will keep you posted.

Most Google results suggest software RAID 1 supports barriers; not too
sure, though.
 
 Or
 
 Invest in a HW RAID card with NVRAM cache that will negate the need  
 for barrier writes from the OS as the controller will issue them async  
 from cache allowing I/O to continue flowing. This really is the safest  
 method.
This is not going to be easy. The servers we use are 1U rack mounts,
and the single available PCI Express slot is used up on both servers by
a quad gigabit network card.
 
 -Ross


Thanks for all the valuable tips so far, I will keep you posted.



Re: [CentOS] DRBD very slow....

2009-07-24 Thread Coert Waagmeester

On Fri, 2009-07-24 at 10:21 +0400, Roman Savelyev wrote:
 1. You are hit by the Nagle algorithm (slow TCP response). You can build DRBD 
 8.3. In 8.3 TCP_NODELAY and QUICK_RESPONSE are implemented in place.
 2. You are hit by the DRBD protocol. In most cases, B is enough.
 3. You are hit by triple barriers. In most cases you need only one of 
 barrier, flush, drain - see the documentation; it depends on the type of 
 storage hardware.
 

I have googled the triple barriers thing but can't find that much
information.

Would it help if I used IPv6 instead of IPv4?

Ross, here are the results of those tests you suggested:

For completeness here is my current setup:

host1: 10.99.99.2
Xeon Quad-Core
8GB RAM
Centos 5.3 64bit
2x 1TB seagate sata disks in software raid level 1
LVM on top of the raid for dom0 root fs and for all domU root FSses

host2: 10.99.99.1
Xeon Dual-Core
8GB RAM
Centos 5.3 64bit
2x 1TB seagate sata disks in software raid level 1
LVM on top of the raid for dom0 root fs and for all domU root FSses

common:
hosts are connected to local LAN
and directly to each other with a CAT6 gigabit crossover.

I have 6 DRBDs running for 5 domUs over the back to back link.
DRBD version drbd82-8.2.6-1.el5.centos
___
___




Ok, here is what I have done:

___
I have added the following to the drbd config:
disk { no-disk-flushes;
 no-md-flushes; }

That made the resync go up to 50MB/sec after I issued a
drbdsetup /dev/drbdX syncer -r 110M

It used to stick around at 11MB/sec

As far as I can tell it has improved the domUs' disk access as well.

I do see that there are a lot of warnings to be heeded about disk and 
metadata flushing...
___

iperf results:

on host 1:
# iperf -s

Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)

[  5] local 10.99.99.1 port 5001 connected with 10.99.99.2 port 58183
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec  1.16 GBytes   990 Mbits/sec


on host 2:
# iperf -c 10.99.99.1

Client connecting to 10.99.99.1, TCP port 5001
TCP window size: 73.8 KByte (default)

[  3] local 10.99.99.2 port 58183 connected with 10.99.99.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.16 GBytes   992 Mbits/sec


I am assuming those results are to be expected from a back-to-back
gigabit link.
___

The dd test:
I think I did this completely wrong; how is it supposed to be done?

This is what I did:

host 1:
nc -l 8123 | dd of=/mnt/data/1gig.file oflag=direct
(/mnt/data is an ext3 FS on LVM mounted in dom0;
not DRBD, as I first wanted to try it locally.)

host 2:
date; dd if=/dev/zero bs=1M count=1000 | nc 10.99.99.2 8123 ; date


I did not wait for it to finish; according to ifstat, the average speed
I got during this transfer was 1.6 MB/sec.
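
For what it's worth, the receiving dd above defaults to 512-byte blocks, and
with oflag=direct every one of those becomes a tiny synchronous write, which
by itself can explain the 1.6 MB/sec. A hedged sketch with matched block
sizes (host, path, and port taken from the test above; note dd may still see
short reads from a pipe, and newer coreutils can force full blocks with
iflag=fullblock):

receiving host:
nc -l 8123 | dd of=/mnt/data/1gig.file bs=1M oflag=direct

sending host:
dd if=/dev/zero bs=1M count=1000 | nc 10.99.99.2 8123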

___

Any tips would be greatly appreciated.



Re: [CentOS] DRBD very slow....

2009-07-23 Thread Coert Waagmeester

Hello all,


For completeness here is my current setup:

host1:
Xeon Quad-Core
8GB RAM
Centos 5.3 64bit
2x 1TB seagate sata disks in software raid level 1
LVM on top of the raid for dom0 root fs and for all domU root FSses

host2:
Xeon Dual-Core
8GB RAM
Centos 5.3 64bit
2x 1TB seagate sata disks in software raid level 1
LVM on top of the raid for dom0 root fs and for all domU root FSses

common:
hosts are connected to local LAN
and directly to each other with a CAT6 gigabit crossover.

I have 6 DRBDs running for 5 domUs over the back to back link.
DRBD version drbd82-8.2.6-1.el5.centos
___
___




Ok, here is what I have done:

___
I have added the following to the drbd config:
disk { no-disk-flushes;
 no-md-flushes; }

That made the resync go up to 50MB/sec after I issued a
drbdsetup /dev/drbdX syncer -r 110M

It used to stick around at 11MB/sec

As far as I can tell it has improved the domUs' disk access as well.

I do see that there are a lot of warnings to be heeded about disk and 
metadata flushing...
___

iperf results:

on host 1:
# iperf -s

Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)

[  5] local 10.99.99.1 port 5001 connected with 10.99.99.2 port 58183
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec  1.16 GBytes   990 Mbits/sec


on host 2:
# iperf -c 10.99.99.1

Client connecting to 10.99.99.1, TCP port 5001
TCP window size: 73.8 KByte (default)

[  3] local 10.99.99.2 port 58183 connected with 10.99.99.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.16 GBytes   992 Mbits/sec


I am assuming those results are to be expected from a back-to-back
gigabit link.
___

The dd test:
I think I did this completely wrong; how is it supposed to be done?

This is what I did:

host 1:
nc -l 8123 | dd of=/mnt/data/1gig.file oflag=direct
(/mnt/data is an ext3 FS on LVM mounted in dom0)

host 2:
date; dd if=/dev/zero bs=1M count=1000 | nc 10.99.99.2 8123 ; date


I did not wait for it to finish; according to ifstat, the average speed
I got during this transfer was 1.6 MB/sec.

___

Any tips would be greatly appreciated.


Kind regards,
Coert







Re: [CentOS] DRBD very slow....

2009-07-23 Thread Coert Waagmeester

On Wed, 2009-07-22 at 18:16 -0700, Ian Forde wrote:
 On Wed, 2009-07-22 at 11:16 +0200, Coert Waagmeester wrote:
  The highest speed I can get through that link with drbd is 11 MB/sec
  (megabytes)
 
 Not good...
 
  But if I copy a 1 gig file over that link I get 110 MB/sec.
 
 That tells me that the network connection is fine.  The issue is at a
 higher layer...
 
  Why is DRBD so slow? 
 
 Let's see...
 
  common {
protocol C;
syncer { rate 80M; }
net {
  allow-two-primaries;
}
  }
 
 You want allow-two-primaries?  That implies that you're using something
 like ocfs2, but that's probably immaterial to the discussion... Here's a
 question - do you have another syncer statement in the resource
 definition that's set to a lower number?  That would definitely throttle
 the sync rate...
 
   -I

I occasionally do migrations from one dom0 to the other.

I do not have clustered file systems, so I make sure the two are only
both primary during the migration.

I have no automation yet; I do it all manually to be sure.

I only have one syncer definition, and according to the drbd manual that
is the rate for full resyncs?
 


[CentOS] DRBD very slow....

2009-07-22 Thread Coert Waagmeester
Hello all,

We have a new setup with Xen on CentOS 5.3.

I run drbd on LVM volumes to mirror data between the two servers.

Both servers are 1U NEC rack mounts with 8 GB RAM and 2x mirrored 1 TB
Seagate SATA disks.

One is a dual-core Xeon, and the other a quad-core Xeon.

I have a gigabit crossover link between the two with an MTU of 9000 on
each end.

I currently have 6 drbds mirroring across that link.

The highest speed I can get through that link with drbd is 11 MB/sec
(megabytes)

But if I copy a 1 gig file over that link I get 110 MB/sec.

Why is DRBD so slow? 

I am not using drbd encryption because of the back to back link.
Here is a part of my drbd config:

# cat /etc/drbd.conf
global {
  usage-count yes;
}
common {
  protocol C;
  syncer { rate 80M; }
  net {
allow-two-primaries;
  }
}
resource xenotrs {
  device    /dev/drbd6;
  disk      /dev/vg0/xenotrs;
  meta-disk internal;

  on baldur.somedomain.local {
address   10.99.99.1:7793;
  }
  on thor.somedomain.local {
address   10.99.99.2:7793;
  }
}


Kind regards,
Coert



Re: [CentOS] DRBD very slow....

2009-07-22 Thread Coert Waagmeester

On Wed, 2009-07-22 at 11:16 +0200, Coert Waagmeester wrote:
 Hello all,
 
 We have a new setup with Xen on CentOS 5.3.
 
 I run drbd on LVM volumes to mirror data between the two servers.
 
 Both servers are 1U NEC rack mounts with 8 GB RAM and 2x mirrored 1 TB
 Seagate SATA disks.
 
 One is a dual-core Xeon, and the other a quad-core Xeon.
 
 I have a gigabit crossover link between the two with an MTU of 9000 on
 each end.
 
 I currently have 6 drbds mirroring across that link.
 
 The highest speed I can get through that link with drbd is 11 MB/sec
 (megabytes)
 
 But if I copy a 1 gig file over that link I get 110 MB/sec.
 
 Why is DRBD so slow? 
 
 I am not using drbd encryption because of the back to back link.
 Here is a part of my drbd config:
 
 # cat /etc/drbd.conf
 global {
   usage-count yes;
 }
 common {
   protocol C;
   syncer { rate 80M; }
   net {
 allow-two-primaries;
   }
 }
 resource xenotrs {
   device    /dev/drbd6;
   disk      /dev/vg0/xenotrs;
   meta-disk internal;
 
   on baldur.somedomain.local {
 address   10.99.99.1:7793;
   }
   on thor.somedomain.local {
 address   10.99.99.2:7793;
   }
 }
 
 
 Kind regards,
 Coert
 


I am reading up on this on the internet as well, but all the TCP
settings and disk settings make me slightly nervous...



[CentOS-virt] Windows on Xen rant....

2009-07-21 Thread Coert Waagmeester
Hello all,

Windows.

I have installed a Windows Server 2003 fully virt domU with the GPLPV
drivers.

The network settings reset on every restart of the domU, and weird STOP
errors keep popping up.

Was I naive to think you can run a Windows server on Xen?

Are any of you guys doing it successfully?

Am I better off just installing Windows Server 2003 on bare hardware?


Just my Tuesday rant.


Regards,
Coert



[CentOS] Centos 5 samba with AD integration and XFS with extended ACLs

2009-07-21 Thread Coert Waagmeester
Hello all,

Firstly, I have checked on google, and there are indeed howtos on this
subject.


Have any of you done this or something similar on CentOS? If so, could
you send me the configs maybe?

How can I find out if the CentOS version of Samba supports extended
ACLs?
I ran modinfo xfs, and XFS supports them.

I want to set up a Samba server that authenticates to AD.

I have that up and running; only the extended ACLs are still to do.
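
A hedged sketch of how one might verify both layers (mount path and user
name are made up):

# was smbd built with POSIX ACL support?
smbd -b | grep -i acl

# do extended ACLs actually work on the XFS mount?
touch /data/acltest
setfacl -m u:someuser:rw /data/acltest
getfacl /data/acltest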


Thanks in advance,
Coert



[CentOS] raid 1 disks upgrade

2009-07-13 Thread Coert Waagmeester
Hello all,

I have a machine with 2 SATA 250GB disks which I want to upgrade to 1TB
SATAs

This is the partition structure on both disks:

Disk /dev/sda: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *            1          25      200781   fd  Linux raid autodetect
/dev/sda2               26        1971    15631245   fd  Linux raid autodetect
/dev/sda3             1972       30401   228363975   fd  Linux raid autodetect

There are 3 RAID 1 arrays:
the first is for /boot,
the second is for swap,
the third is for LVM (contains / and other filesystems).

What is the easiest way to get this upgraded?

I thought that I could maybe dd all the LVM volumes and /boot into
files, setup the new RAID1 arrays on the 1TB disks, and dd everything
back? or is there an easier way?

Regards,
Coert 



Re: [CentOS] raid 1 disks upgrade

2009-07-13 Thread Coert Waagmeester

On Mon, 2009-07-13 at 12:20 +0200, Tim Verhoeven wrote:
 On Mon, Jul 13, 2009 at 12:13 PM, Coert
 Waagmeesterlgro...@waagmeester.co.za wrote:
 
  I have a machine with 2 SATA 250GB disks which I want to upgrade to 1TB
  SATAs
 
  This is the partition structure on both disks:
 
  Disk /dev/sda: 250.0 GB, 250059350016 bytes
  255 heads, 63 sectors/track, 30401 cylinders
  Units = cylinders of 16065 * 512 = 8225280 bytes
 
     Device Boot      Start         End      Blocks   Id  System
  /dev/sda1   *            1          25      200781   fd  Linux raid autodetect
  /dev/sda2               26        1971    15631245   fd  Linux raid autodetect
  /dev/sda3             1972       30401   228363975   fd  Linux raid autodetect
 
  There are 3 RAID 1 arrays:
  the first is for /boot,
  the second is for swap,
  the third is for LVM (contains / and other filesystems).
 
  What is the easiest way to get this upgraded?
 
  I thought that I could maybe dd all the LVM volumes and /boot into
  files, setup the new RAID1 arrays on the 1TB disks, and dd everything
  back? or is there an easier way?
 
 
 The software RAID 1 implementation of the kernel allows the array to
 be extended. First you replace each old disk with a new 1TB disk
 and each time rebuild the array. After this the array is still only
 250GB but the partitions are already 1TB in size. Then use the --grow
 option of mdadm to increase the array to 1TB; it then starts rebuilding
 the new space. When this is ready you can use the pvresize command to
 tell LVM that the PV has grown. Then the new space should be available
 in the volume group and you can increase the LVs and the filesystems
 inside them.
 
 Regards,
 Tim
 

Great, I will give that a try.

Thanks. I will of course still make backups to be on the safe side.
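
A hedged sketch of the sequence Tim describes, one disk at a time (md and
partition names assumed from the layout above; the LVM step is pvresize):

mdadm /dev/md2 --fail /dev/sda3 --remove /dev/sda3
# swap in the 1TB disk, create a larger type-fd partition, then:
mdadm /dev/md2 --add /dev/sda3
# wait for the rebuild, repeat for the second disk, then grow:
mdadm --grow /dev/md2 --size=max
pvresize /dev/md2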

Regards,
Coert



Re: [CentOS] Is there an openssh security problem?

2009-07-10 Thread Coert Waagmeester

On Thu, 2009-07-09 at 15:18 -0700, Bill Campbell wrote:
 This appeared today on Macworld, an article saying this is
 probably a hoax:
 
 http://www.macworld.com/article/141628/2009/07/openssh_securityhoax.html?lsrc=rss_main
 
 Bill

In my iptables setup I have the following rules (excuse the ugly line
breaks):

/sbin/iptables -A INPUT -i eth0 -p tcp -s 196.1.1.0/24 -d 196.1.1.31 \
--dport 22 -m state -m recent --state NEW --update --seconds 15 -j \
DROPLOG

/sbin/iptables -A INPUT -i eth0 -p tcp -s 196.1.1.0/24 -d 196.1.1.31 \
--dport 22 -m state -m recent --state NEW --set -j ACCEPT

/sbin/iptables -A INPUT -i eth0 -p tcp -s 196.1.1.0/24 -d 196.1.1.31 \
--dport 22 -m state --state ESTABLISHED,RELATED -j ACCEPT

It only allows one NEW connection to ssh every 15 seconds.

That is also good protection, right?


Regards,
Coert



[CentOS] new RAID5 array: 3x500GB with XFS

2009-07-03 Thread Coert Waagmeester
Hello all,

Yesterday, after some typos, I sent my ext3 RAID 5 array into the
void...

I want to recreate it now, but I read on
http://wiki.centos.org/HowTos/Disk_Optimization
that you can optimize the filesystem on top of the RAID.

Will this wiki article be exactly the same for XFS?

Is it worth the trouble to also create an LVM volume on the RAID array?


Regards,
Coert



Re: [CentOS] CentOS and Redhat Directory Server

2009-07-02 Thread Coert Waagmeester

On Mon, 2009-06-29 at 11:29 -0400, Giovanni Torres wrote:
 I have implemented LDAP on CentOS successfully using Redhat's Directory 
 Server and the great how-to on the CentOS wiki.
 
 Being new to LDAP, I have a question and maybe one of you guys can point 
 me in the right direction:  I have LDAP implemented on the network for 
 logins to the workstation pcs.  I also have an apache website that I now 
 use LDAP for authentication.  What I want, however, is to be able to 
 allow a group of users to authenticate to the apache website, but not be 
 able to login to any of the systems directly nor via ssh.
 
 Any suggestions or pointers in the right direction on where to read up 
 on how to accomplish this specific task would be much appreciated.
 
 Thanks,
 Giovanni

Hello Giovanni,

I have also just installed CentOS Directory Server.
The install was successful, but to be quite honest I have no idea where
to go from here. Is there a howto somewhere that explains how to make
workstations authenticate to the DS and such?


Regards,
Coert



Re: [CentOS-virt] fully virt Xen DomU network question

2009-06-29 Thread Coert Waagmeester

On Fri, 2009-06-26 at 09:19 +0200, Fabian Arrotin wrote:
 Coert Waagmeester wrote:
  Hello all fellow CentOS users!
  
  I have a working xen setup with 3 paravirt domUs and one Windblows 2003
  fully virt domU.
  
  
   There are two virtual networks.
  
  As far as I can tell in the paravirt Linux DomUs I have gigabit
  networking, but not in the fully virt Windows 2003 domU
  
  Is there a setting for this, or is it not yet supported?
 
 
  That's not on the dom0 side, but directly in the w2k3 domU: you'll 
  get *bad* performance (at the I/O and network level) if the xenpv drivers 
  for Windows aren't installed. Unfortunately you will not be able to 
  find them for CentOS. (While upstream has them, of course.)
 

I see on Google that no one really knows when they will make xenpv
available in CentOS.

But I will be able to live with the performance... I intend to make
the w2k3 server a domain controller and print spooler.
Any warnings I should heed in this respect?

I am using Samba as a file server.

Kind regards,
Coert



Re: [CentOS-virt] fully virt Xen DomU network question

2009-06-29 Thread Coert Waagmeester

On Mon, 2009-06-29 at 10:27 +0100, Karanbir Singh wrote:
 On 06/29/2009 07:59 AM, Coert Waagmeester wrote:
   That's not on the dom0 side, but directly in the w2k3 domU: you'll
   get *bad* performance (at the I/O and network level) if the xenpv drivers
   for Windows aren't installed. Unfortunately you will not be able to
   find them for CentOS. (While upstream has them, of course.)
   I see on Google that no one really knows when they will make xenpv
   available in CentOS.
 
 well, how did you reach that conclusion? Essentially, you get your 
 credit card out and pay Citrix for the drivers. Considering they use 
 CentOS within their development / testing process, I am quite sure they 
 have the right stuff required for a Windows domU hosted on a CentOS dom0.
 
 - KB

I will definitely look at such an option, but I do not want to go custom
with my CentOS implementation, and I am only going to use the Windows
2k3 domU as a domain controller and print server, so if it does not
negatively impact the speed of the other PV domUs then I can live with
100 Mbps and slower I/O.

But I am definitely keeping a lookout for when xenpv comes out for
CentOS.


Thanks,
Coert



[CentOS-virt] fully virt Xen DomU network question

2009-06-24 Thread Coert Waagmeester
Hello all fellow CentOS users!

I have a working xen setup with 3 paravirt domUs and one Windblows 2003
fully virt domU.


There are two virtual networks.

As far as I can tell in the paravirt Linux domUs I have gigabit
networking, but not in the fully virt Windows 2003 domU.

Is there a setting for this, or is it not yet supported?

I run xen-3.0.3-64.el5_2.3


Kind regards,
Coert



Re: [CentOS-virt] fully virt Xen DomU network question

2009-06-24 Thread Coert Waagmeester

On Wed, 2009-06-24 at 10:58 +0200, Tim Verhoeven wrote:
 On Wed, Jun 24, 2009 at 10:19 AM, Coert
 Waagmeesterlgro...@waagmeester.co.za wrote:
 
  I have a working xen setup with 3 paravirt domUs and one Windblows 2003
  fully virt domU.
 
  There are two virtual networks.
 
  As far as I can tell in the paravirt Linux DomUs I have gigabit
  networking, but not in the fully virt Windows 2003 domU
 
  Is there a setting for this, or is it not yet supported?
 
  I run xen-3.0.3-64.el5_2.3
 
 
 Both networks should be available in the Linux and the Windows domU's.
 Could you send the Xen configfiles for the domU's ? They should list
 to which network each domU gets attached.
 
 Regards,
 Tim
 

Attached are my config files:
name = xenfilesrv
uuid = removed
maxmem = 1024
memory = 512
vcpus = 2
bootloader = /usr/bin/pygrub
on_poweroff = destroy
on_reboot = restart
on_crash = restart
vfb = [  ]
disk = [ "phy:/dev/vg0/xenfilesrv,xvda,w", "phy:/dev/vg0/lvdata1,xvdb1,w" ]
vif = [ "mac=00:16:3e:2b:ff:da,bridge=xenbr0",
        "mac=00:16:3e:2d:41:b5,bridge=xenbr1" ]


[attachment: network-bridge-more (shell script)]
name = xenwin2k3
uuid = removed
maxmem = 512
memory = 512
vcpus = 2
builder = hvm
kernel = /usr/lib/xen/boot/hvmloader
boot = dc
pae = 1
acpi = 1
apic = 1
on_poweroff = destroy
on_reboot = destroy
on_crash = restart
device_model = /usr/lib64/xen/bin/qemu-dm
sdl = 0
vnc = 1
vncunused = 1
keymap = en-us
disk = [ "phy:/dev/vg0/xenwin2k3,hda,w",
         "file:/root/isos/win2k3srvd2.iso,ioemu:hdc:cdrom,r" ]
vif = [ "mac=00:16:3e:37:7c:b3,bridge=xenbr0,type=ioemu",
        "mac=00:16:3e:48:2e:72,bridge=xenbr1,type=ioemu" ]
serial = pty


Re: [CentOS] How do I change passwords / remove users for Samba?

2009-06-23 Thread Coert Waagmeester

On Tue, 2009-06-23 at 10:08 +0100, Kevin Thorpe wrote:
 I've got a bit of a problem with Samba. I just can't work out how to 
 change passwords or remove users.
 I've just got user security.. lines in smb.conf are:
 
  security = user
  passdb backend = tdbsam
 
 I've removed the user using pdbedit, I've removed the unix user, 
 smbpasswd says the user doesn't exist
 yet I can still connect to the shares. I'm obviously just missing 
 something here. Can anyone point me in the
 right direction?
 
 thanks

Check out smbpasswd.
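
For the tdbsam case above, a sketch (username made up); note that SMB
sessions that are already established stay authenticated until they
disconnect, which can make a deleted user appear to still work:

smbpasswd -x someuser   # delete the user from the passdb
pdbedit -L              # verify the user is gone
smbstatus               # look for lingering sessions from that user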



Re: [CentOS] NAS Storage server question

2009-06-19 Thread Coert Waagmeester

On Thu, 2009-06-18 at 15:28 +0200, Rainer Duffner wrote:
 Coert Waagmeester schrieb:
  On Fri, 2009-06-12 at 22:59 +0200, Giuseppe Fuggiano wrote:

  2009/6/11 Coert Waagmeester lgro...@waagmeester.co.za:
  
  Hello all,

  Hi,
 
  
  At our office I have a server running 3 Xen domains: mail server, etc.
 
  I want to make this setup more redundant.
 
  There are a few howtos on the combination of Xen, DRBD, and heartbeat.
  That is probably the best way.

  I am using a combination of DRBD+GFS.  Since v8.2, DRBD [1] can be
  configured in dual-primary mode [2].  You can mount your local
  partitions in r/w mode using a Distributed Lock Manager and GFS.  It
  works pretty well in my case, both my partitions are correctly
  replicated at device block level.  Please, note that with this
  solution you have to configure a fence device to preserve the file
  system integrity.  The DRBD documentation contains everything you need
  to realize this solution.
 
  [1] http://www.drbd.org/users-guide/
  [2] http://www.drbd.org/users-guide/s-dual-primary-mode.html
 
  Cheers
  
 
  Hello,
 
  Thanks, I will give this a bash, trying to set up GFS now. (very hairy!)
 
  What is your opinion on OCFS and GlusterFS? Or am I better off
  sticking with GFS?
 

 
 
 Have it set up by someone who knows what he's doing and who can bail you
 out in case it goes boom.
 
 Otherwise, you just introduce another layer (or two) of complexity that
 gives you no additional uptime over the one a simple-setup, solid server
 from HP/IBM/Sun (or maybe even Dell) + a UPS will give you.
 
 What were your primary reasons for outages over the last two years?
 
 
 Rainer

We have not really had major outages yet... Most of our stuff is already
at least RAID 1 (even Windblows). I might just be a little too paranoid.

I have decided that at first I will be going for the DRBD solution.
With the current hardware I have it will be the easiest solution.

Have any of you guys used GlusterFS or OCFS yet?



Re: [CentOS] NAS Storage server question

2009-06-18 Thread Coert Waagmeester

On Fri, 2009-06-12 at 22:59 +0200, Giuseppe Fuggiano wrote:
 2009/6/11 Coert Waagmeester lgro...@waagmeester.co.za:
  Hello all,
 
 Hi,
 
  At our office I have a server running 3 Xen domains: mail server, etc.
 
  I want to make this setup more redundant.
 
  There are a few howtos on the combination of Xen, DRBD, and heartbeat.
  That is probably the best way.
 
 I am using a combination of DRBD+GFS.  Since v8.2, DRBD [1] can be
 configured in dual-primary mode [2].  You can mount your local
 partitions in r/w mode using a Distributed Lock Manager and GFS.  It
 works pretty well in my case, both my partitions are correctly
 replicated at device block level.  Please, note that with this
 solution you have to configure a fence device to preserve the file
 system integrity.  The DRBD documentation contains everything you need
 to realize this solution.
 
 [1] http://www.drbd.org/users-guide/
 [2] http://www.drbd.org/users-guide/s-dual-primary-mode.html
 
 Cheers

Hello,

Thanks, I will give this a bash, trying to set up GFS now. (very hairy!)

What is your opinion on OCFS and GlusterFS? Or am I better off
sticking with GFS?


Kind regards,
Coert



[CentOS-virt] Xen with multiple virtual network interfaces with one bond

2009-06-17 Thread Coert Waagmeester
Hello all,

I have been running xen for a while now with two interfaces:
dummy0 for host only communication, and
eth0 for the outside network.

my script looks like this: (/etc/xen/scripts/network-bridge-more)
--
#! /bin/sh

dir=$(dirname $0)
$dir/network-bridge $@ vifnum=0 netdev=dummy0 bridge=xenbr0
$dir/network-bridge $@ vifnum=1 netdev=eth0 bridge=xenbr1
--

Now I have a newer setup where eth0 and eth1 are bonded.

If I change eth0 in the above script to bond0, it messes up the bond
completely and stops working.

I have used /etc/xen/scripts/network-bridge-bonding, and that works, but
then I can only have one virtual network for my domUs.

I have tried this:
--
#! /bin/sh

dir=$(dirname $0)
$dir/network-bridge $@ vifnum=0 netdev=dummy0 bridge=xenbr0
$dir/network-bridge-bonding $@ vifnum=1 netdev=bond0 bridge=xenbr1
--

but that also does not work.
It also messes up my bond.

Any tips would be greatly appreciated.
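
One approach that sidesteps the network-bridge scripts entirely is to let
the initscripts build the bridges and tell xend to leave networking alone;
a hedged sketch (addresses and file contents are assumptions):

/etc/sysconfig/network-scripts/ifcfg-bond0:
DEVICE=bond0
ONBOOT=yes
BRIDGE=xenbr1

/etc/sysconfig/network-scripts/ifcfg-xenbr1:
DEVICE=xenbr1
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.0.0.2
NETMASK=255.255.255.0

/etc/xen/xend-config.sxp:
(network-script /bin/true)

The dummy0/xenbr0 pair could be wired up the same way.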

Kind regards,
Coert





Re: [CentOS] Xen with multiple virtual network interfaces with one bond

2009-06-17 Thread Coert Waagmeester
Sorry for mailing this to the wrong list. Rectified.

On Wed, 2009-06-17 at 14:21 +0200, Coert Waagmeester wrote:
 Hello all,
 
 I have been running xen for a while now with two interfaces:
 dummy0 for host only communication, and
 eth0 for the outside network.
 
 my script looks like this: (/etc/xen/scripts/network-bridge-more)
 --
 #! /bin/sh
 
 dir=$(dirname $0)
 $dir/network-bridge $@ vifnum=0 netdev=dummy0 bridge=xenbr0
 $dir/network-bridge $@ vifnum=1 netdev=eth0 bridge=xenbr1
 --
 
  Now I have a newer setup where eth0 and eth1 are bonded.
  
  If I change eth0 in the above script to bond0, it messes up the bond
  completely and stops working.
  
  I have used /etc/xen/scripts/network-bridge-bonding, and that works, but
  then I can only have one virtual network for my domUs.
 
 I have tried this:
 --
 #! /bin/sh
 
 dir=$(dirname $0)
 $dir/network-bridge $@ vifnum=0 netdev=dummy0 bridge=xenbr0
 $dir/network-bridge-bonding $@ vifnum=1 netdev=bond0 bridge=xenbr1
 --
 
 but that also does not work.
 It also messes up my bond.
 
 Any tips would be greatly appreciated.
 
 Kind regards,
 Coert
 
 



[CentOS] NAS Storage server question

2009-06-11 Thread Coert Waagmeester
Hello all,


At our office I have a server running 3 Xen domains: mail server, etc.

I want to make this setup more redundant.

There are a few howtos on the combination of Xen, DRBD, and heartbeat.
That is probably the best way.

Another option I am looking at is a piece of shared storage,
a machine running CentOS with a large software RAID 5 array.

What is the best means of sharing the storage?
I would really like to use a combination of an iSCSI target server, and
GFS or OCFS.

But the iSCSI target server in the CentOS repos is a 'technology
preview'.

Have any of you used the iSCSI target server in a production environment
yet?

Is NFS an option?
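
If the tech-preview target is the worry, the out-of-tree iSCSI Enterprise
Target (the 'iet' that comes up later in this thread) has a small config;
a hedged sketch (the IQN and backing LV are made up):

/etc/ietd.conf:
Target iqn.2009-06.local.office:storage.disk1
    Lun 0 Path=/dev/vg0/shared,Type=blockio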

Kind regards,
Coert Waagmeester



Re: [CentOS] NAS Storage server question

2009-06-11 Thread Coert Waagmeester

On Thu, 2009-06-11 at 17:14 +0200, Rainer Duffner wrote:
 Coert Waagmeester schrieb:
  Hello all,
 
 
   At our office I have a server running 3 Xen domains: mail server, etc.
 
  I want to make this setup more redundant.
 
  There are a few howtos on the combination of Xen, DRBD, and heartbeat.
  That is probably the best way.
 
  Another option I am looking at is a piece of shared storage,
  a machine running CentOS with a large software RAID 5 array.

 
 How large?
 Depending on the size, RAID6 is the better option (with >=1TB disks, the
 rebuild can take longer than the statistical average time another disk
 needs to fail).

I am starting with 4 1TB SATA disks.

With RAID 6 that will give me 2 TB, right?
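
(For reference: RAID 6 spends two disks' worth of space on parity, so usable
capacity is (n - 2) x disk size; with 4 x 1 TB disks that is
(4 - 2) x 1 TB = 2 TB, and the array survives any two disk failures.)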
 
 
  What is the best means of sharing the storage?
  I would really like to use a combination of an iSCSI target server, and
  GFS or OCFS.
 

 
 
 If you don't already do GFS (and have been doing so for years), I'd say
 you better only do it in a configuration that is either supported by
 RedHat (e.g. with RHEL) or some competent 3rd-party that can help you
 over the pitfalls.
 Else you are on your own, with only the GFS mailinglist, yourself and
 your keyboard ;-)
 
Will OCFS be easier?
 
 
 
 
 Rainer



Re: [CentOS] NAS Storage server question

2009-06-11 Thread Coert Waagmeester

On Thu, 2009-06-11 at 13:35 -0700, RobertH wrote:
 
  
  Briefly, but iet has been rock stable for me. It just runs forever...
  I have only used NFS under vmware, it worked good.
  
  jlc
  ___
 
 jlc,
 
 what has been rock stable?
 
 can you be more specific on the implementaion?
 
 are you saying it or iet
 
 if iet what is that?
 
 ;-)
 
  - rh
 

jlc was talking about iet, the iSCSI Enterprise Target, I think...



Re: [CentOS] Building a custom install CD

2009-06-09 Thread Coert Waagmeester

On Mon, 2009-06-08 at 15:13 -0700, Fred Moyer wrote:
 Greetings,
 
 I am looking for resources on how to build my own Centos install CD
 for a preselected package set that I want to install.  I think Red Hat
 may have had this functionality at some point but it has been a while
 since I have needed to do this.
 
 I found this on how to build my own kernel -
 http://wiki.centos.org/HowTos/BuildingKernelModules  - which I will
 need to exercise as well, but I want to build my own .iso that I can
 run a kickstart or similar mechanism from.
 
 Thanks in advance.


What about rPath? http://www.rpath.org/
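
For the kickstart side of the question, the preselected package set is just
a %packages section in the ks.cfg; a minimal sketch (group and package names
are examples only):

# ks.cfg fragment
%packages
@core
@base
postfix
-sendmail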

