Re: [CentOS] GFS and Small Files

2009-04-29 Thread Marc Grimme
Hi,
Independent of the results you have seen, it is generally reasonable to tune a
GFS filesystem as described here:
http://kbase.redhat.com/faq/docs/DOC-6533
especially:
mount with noatime and
gfs_tool settune <mountpoint> glock_purge 50
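
A minimal sketch of how these two tunings could be applied (the mount point
/export/gfs is taken from the mail below; the fstab device path is only an
illustration; note that gfs_tool settune values do not persist across
remounts, so they need to be reapplied after every mount, e.g. from an init
script):

  # /etc/fstab: mount the GFS filesystem with noatime (device path is assumed)
  /dev/vg_san/lv_export  /export/gfs  gfs  defaults,noatime  0 0

  # or add noatime to an already mounted GFS filesystem on the fly
  mount -o remount,noatime /export/gfs

  # purge up to 50% of unused glocks; re-run after every mount
  gfs_tool settune /export/gfs glock_purge 50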

Regards Marc.
On Wednesday 29 April 2009 13:01:17 Hairul Ikmal Mohamad Fuzi wrote:
 Hi all,

 We are running CentOS 5.2 64bit as our file server.
 Currently we use GFS (with CLVM underneath) as our filesystem
 (for our multiple 2TB SAN volume exports), since we plan to add more
 file servers (serving the same content) later on.

 The issue we are facing at the moment is that commands such as 'ls'
 give a very slow response (e.g. 3-4 minutes for the output of ls to be
 printed, or in certain cases 20 minutes or so). This is especially
 true in directories containing a large number of small files
 (e.g. 9+ files of 1-4kb each). The thing is, most of our users are
 generating these small files frequently as part of their workflow.

 We tried emulating the same scenario (9+ small files) on an ext3
 partition and it gave almost the same result.

 I believe most of the CLVM/GFS settings are at their default
 parameters. Additionally, we would prefer to stick with GFS (or at
 least ext3), as it is part of the CentOS / RHEL distribution, rather
 than changing to other small-file-friendly filesystems (such as XFS or
 ReiserFS).

 I'm exploring whether there is any way we can tune the GFS parameters
 to make the system more responsive.
 I have read that we can apply the 'dir_index' option to an ext3
 partition to speed things up, but I'm not so sure about GFS.

 Below is the output of 'gfs_tool gettune /export/gfs':

 ilimit1 = 100
 ilimit1_tries = 3
 ilimit1_min = 1
 ilimit2 = 500
 ilimit2_tries = 10
 ilimit2_min = 3
 demote_secs = 300
 incore_log_blocks = 1024
 jindex_refresh_secs = 60
 depend_secs = 60
 scand_secs = 5
 recoverd_secs = 60
 logd_secs = 1
 quotad_secs = 5
 inoded_secs = 15
 glock_purge = 0
 quota_simul_sync = 64
 quota_warn_period = 10
 atime_quantum = 3600
 quota_quantum = 60
 quota_scale = 1.   (1, 1)
 quota_enforce = 1
 quota_account = 1
 new_files_jdata = 0
 new_files_directio = 0
 max_atomic_write = 4194304
 max_readahead = 262144
 lockdump_size = 131072
 stall_secs = 600
 complain_secs = 10
 reclaim_limit = 5000
 entries_per_readdir = 32
 prefetch_secs = 10
 statfs_slots = 64
 max_mhc = 1
 greedy_default = 100
 greedy_quantum = 25
 greedy_max = 250
 rgrp_try_threshold = 100
 statfs_fast = 0


 TIA.

 .ikmal



-- 
Gruss / Regards,

Marc Grimme
Phone: +49-89 452 3538-14
http://www.atix.de/   http://www.open-sharedroot.org/




[CentOS] [CLUSTER] Bugfix in cman for Centos 5.2

2009-02-24 Thread Marc Grimme
Hello *
For anybody who is interested.
There is a critical bug in the cman package (2.0.84) that CentOS 5.2 provides.
It is already known and fixed.
The bug can be found here:
https://bugzilla.redhat.com/show_bug.cgi?id=485026

I backported the fix to the current CentOS version (2.0.84).
The resulting packages can be downloaded here:

http://download.atix.de/yum/comoonics/testrpms/cman-2.0.84-2.4.i386.rpm
http://download.atix.de/yum/comoonics/testrpms/cman-2.0.84-2.4.x86_64.rpm
http://download.atix.de/yum/comoonics/testrpms/cman-devel-2.0.84-2.4.i386.rpm
http://download.atix.de/yum/comoonics/testrpms/cman-devel-2.0.84-2.4.x86_64.rpm
http://download.atix.de/yum/comoonics/testrpms/cman-2.0.84-2.4.src.rpm

They are neither signed nor verified, and I don't want to be held responsible
for any harm they may cause. In my tests, however, they worked perfectly well. ;-)

Use them at your own risk.

Regards
Marc.
-- 
Gruss / Regards,

Marc Grimme
http://www.atix.de/   http://www.open-sharedroot.org/



Re: [CentOS] can't get Ethernet SNMP information

2008-08-13 Thread Marc Grimme
Hi,
I didn't follow the whole thread, but I had the same problem when using
net-snmp with bridges (and seeing peth0 rang a bell). The problem seems to be
that net-snmp does not like NICs with the same IP address (I didn't find much
on that topic). You should see some errors from net-snmp in the syslogs of
your managed servers. What does your syslog say?

I solved it by changing the bridge configuration from the script provided by
Xen (/etc/xen/scripts/network-bridge) to ifcfg-based bridges. After that, cacti
could monitor the interfaces normally.
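
For reference, a rough sketch of such an ifcfg-based bridge on CentOS 5 (the
interface/bridge names and the IP address are only examples; the xend bridge
script is disabled so eth0 is no longer renamed to peth0):

  # /etc/xen/xend-config.sxp: disable the xen network-bridge script
  (network-script /bin/true)

  # /etc/sysconfig/network-scripts/ifcfg-xenbr0
  DEVICE=xenbr0
  TYPE=Bridge
  BOOTPROTO=static
  IPADDR=192.168.1.10
  NETMASK=255.255.255.0
  ONBOOT=yes

  # /etc/sysconfig/network-scripts/ifcfg-eth0
  DEVICE=eth0
  BRIDGE=xenbr0
  ONBOOT=yes

With the bridge defined by the initscripts, net-snmp (and therefore cacti)
sees an ordinary interface configuration again.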

Hope that helps.

Regards marc.
On Wednesday 13 August 2008 20:42:30 nate wrote:
 Rudi Ahlers wrote:
  Here's my verbose output:

 [..]

  + Found item [ifName='lo'] index: 1 [from value]
  + Found item [ifName='peth0'] index: 2 [from value]
  + Found item [ifName='sit0'] index: 3 [from value]

 It certainly looks like cacti is finding a bunch of stuff. What
 does the status column say to the left of the green circle? For
 my test system it says -

 Success [24 Items, 4 Rows]

 When you go to the 'Create graphs for this host', do you see just
 the Header with nothing below it? (Data Query [SNMP - Interface
 Statistics])

 What version of cacti? Though it shouldn't matter, this is a pretty
 basic thing.

 nate




-- 
Gruss / Regards,

Marc Grimme
Phone: +49-89 452 3538-14
http://www.atix.de/   http://www.open-sharedroot.org/




Re: [CentOS] slow NFS speed

2008-07-30 Thread Marc Grimme
On Wednesday 30 July 2008 05:20:10 Mag Gam wrote:
 We upgraded from 10/100 Mb/s to a bond of two 100/1000 NICs. We notice
 NFS speeds of around 70-80Mb/sec, which is slow, especially with
 bonding. I was wondering if we need to tune anything special in the
 network or NFS configuration. Does anyone have any experience with this?

 TIA

Did you configure the bonding as it should be? Configuring a load-balanced
bond (which I suppose you're using) is not at all an easy task.

For example, if you are using rr bonding, the switch the NFS server is
connected to has to have the corresponding ports configured as a channel
(etherchannel). And there is much more to it.

I would recheck the bonding. Test how fast it is with only one NIC in the bond,
and so on. But first of all read the bonding.txt that comes with the kernel
docs, or can be found here:
http://www.mjmwired.net/kernel/Documentation/networking/bonding.txt
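
As an illustration only (device names, IP and mode are assumptions, not a
recommendation), a classic CentOS 5 round-robin bond looks roughly like this:

  # /etc/modprobe.conf
  alias bond0 bonding
  options bond0 mode=balance-rr miimon=100

  # /etc/sysconfig/network-scripts/ifcfg-bond0
  DEVICE=bond0
  IPADDR=192.168.1.20
  NETMASK=255.255.255.0
  ONBOOT=yes
  BOOTPROTO=none

  # /etc/sysconfig/network-scripts/ifcfg-eth0 (and the same for eth1)
  DEVICE=eth0
  MASTER=bond0
  SLAVE=yes
  ONBOOT=yes
  BOOTPROTO=none

Remember that balance-rr additionally needs the switch ports configured as an
etherchannel, as mentioned above.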

-- 
Gruss / Regards,

Marc Grimme
http://www.atix.de/   http://www.open-sharedroot.org/



Re: [CentOS] slow NFS speed

2008-07-30 Thread Marc Grimme
On Wednesday 30 July 2008 20:52:07 John R Pierce wrote:
 Mag Gam wrote:
  70-80Mb/sec.
 
  MB, sorry :-)

 that's on the order of 700-800Mbit/sec, which is quite good for a single
 session on GigE. As others have said, the sort of bonding you're doing
 doesn't speed up single transfers; instead it helps with multiple
 concurrent sessions.
I would not agree that round-robin (mode 0) bonding does not scale with a
single session. I would say it should, or even must, scale:

from bonding.txt:
  balance-rr: This mode is the only mode that will permit a single
  TCP/IP connection to stripe traffic across multiple
  interfaces. It is therefore the only mode that will allow a
  single TCP/IP stream to utilize more than one interface's
  worth of throughput.  This comes at a cost, however: the
  striping generally results in peer systems receiving packets out
  of order, causing TCP/IP's congestion control system to kick
  in, often by retransmitting segments.
That means it could, if we disregard the out-of-order delivery.

What I see in a project where we have bonded four NICs together with rr is
that the outgoing traffic is evenly spread over the four NICs (although we are
communicating with only two hosts). But the way back is still the problem,
because all packets arrive at only one NIC. Again, this is explained by
bonding.txt:

  This mode requires the switch to have the appropriate ports
  configured for etherchannel or trunking.

I also remember reading that you can configure how the etherchannel balances
the return traffic. Some switches by default spread the returning packets by
MAC address (that's what we are seeing), but there should also be other
balancing modes for an etherchannel. Here it is:

  If ARP monitoring is used in an etherchannel compatible mode
  (modes 0 and 2), the switch should be configured in a mode
  that evenly distributes packets across all links. If the
  switch is configured to distribute the packets in an XOR
  fashion, all replies from the ARP targets will be received on
  the same link which could cause the other team members to
  fail.

What we have done is first measure the possible throughput over the network
alone (with nc, for example, so no local I/O is involved); that gives you a
baseline. Then see what NFS can do on top of it.
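
A rough sketch of such a raw network measurement (host name, port and transfer
size are made up, and the exact nc flags depend on the netcat variant that is
installed):

  # on the NFS server: listen on a port and discard everything received
  nc -l -p 5001 > /dev/null

  # on the client: push 2 GB of zeros over the wire and time it
  time sh -c 'dd if=/dev/zero bs=1M count=2048 | nc nfsserver 5001'

Dividing the transferred bytes by the elapsed time gives the raw network
baseline to compare NFS against.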

BTW, I didn't write bonding.txt, but it covers several of the topics discussed
here.

BTW, from the filesystem's point of view it should not be a problem to get
some 200 MB/sec out of a bunch of disks (depending on the speed and cache of
the disks and the bus the data goes through).

-- 
Gruss / Regards,

Marc Grimme
http://www.atix.de/   http://www.open-sharedroot.org/



Re: [CentOS] Bonding and Xen

2008-07-16 Thread Marc Grimme
On Tuesday 15 July 2008 22:34:56 Victor Padro wrote:
 Has anyone implemented this successfully?

 I am asking this because we are implementing Xen on our test lab machines,
 which hold up to three 3Com and Intel 10/100 Mbps NICs.

 These servers are meant to replace MS messaging and intranet web servers
 which handle up to 5000 hits per day and thousands of mails. The Dom0
 probably could not handle this kind of setup with only one 100 Mbps link,
 and we cannot afford to change all the networking hardware to gigabit, at
 least not yet.

 Any pointers perhaps?
Just go the normal way. As long as you are not using VLANs on top of bonds,
the default bridge scripts should do just fine.
Before using a bond in anything other than an active/backup configuration, I
urge you to read and understand the bonding.txt from the kernel source:
http://www.mjmwired.net/kernel/Documentation/networking/bonding.txt or just
web-search for: bonding.txt
The problem is not configuring the bonding on a Linux machine but getting the
network setup right (etherchannel, LACP, one switch or multiple switches,
etc.) and knowing what to expect from each setup.
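
For what it's worth, a minimal sketch of pointing the default Xen bridge
script at an existing active/backup bond (names are illustrative; the bond
itself is configured as usual via modprobe.conf and the ifcfg files):

  # /etc/modprobe.conf
  alias bond0 bonding
  options bond0 mode=active-backup miimon=100

  # /etc/xen/xend-config.sxp: bridge the bond instead of eth0
  (network-script 'network-bridge netdev=bond0')

Active/backup needs no special switch configuration, which is why it is the
least surprising mode to start with.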

And last but not least: communication between the network people and the OS
people.

Those are the biggest problems with bonding, in my experience.

-marc.

 Greetings from Mexico.


 --
 It is human nature to think wisely and act in an absurd fashion.

 Todo el desorden del mundo proviene de las profesiones mal o mediocremente
 servidas



-- 
Gruss / Regards,

Marc Grimme
http://www.atix.de/   http://www.open-sharedroot.org/



[CentOS] Last and final official release candidate of the com.oonics open shared root cluster installation DVD is available (RC4)

2008-07-03 Thread Marc Grimme
Hello,
we are very happy to announce the availability of the last and final official 
release candidate of the com.oonics open shared root cluster installation DVD 
(RC4). 

The com.oonics open shared root cluster installation DVD allows the 
installation of a single node open shared root cluster with the use of 
anaconda, the well-known installation software provided by Red Hat. After the 
installation, the open shared root cluster can easily be scaled up to more 
than a hundred cluster nodes.

You can now download the open shared root installation DVD from 
www.open-sharedroot.org.

We are very interested in feedback. Please either file a bug or feature 
request, or post to the mailing list (see www.open-sharedroot.org).

More details can be found here: 
http://open-sharedroot.org/news-archive/availability-of-rc4-of-the-com-oonics-version-of-anaconda

Note: the download ISOs are based on CentOS 5.1!
  RHEL 5.1 versions will be provided on request.

Have fun testing it and let us know what you think.
-- 
Gruss / Regards,

Marc Grimme
http://www.atix.de/   http://www.open-sharedroot.org/



[CentOS] First official release candidate of the com.oonics open shared root cluster installation DVD is available (RC3)

2008-03-20 Thread Marc Grimme
Hello,
we are very happy to announce the availability of the first official release 
candidate of the com.oonics open shared root cluster installation DVD (RC3). 

The com.oonics open shared root cluster installation DVD allows the 
installation of a single node open shared root cluster with the use of 
anaconda, the well-known installation software provided by Red Hat. After the 
installation, the open shared root cluster can easily be scaled up to more 
than a hundred cluster nodes.

You can now download the open shared root installation DVD from 
www.open-sharedroot.org.

We are very interested in feedback. Please either file a bug or feature 
request, or post to the mailing list (see www.open-sharedroot.org).

More details can be found here: 
http://www.open-sharedroot.org/news-archive/availability-of-first-beta-of-the-com-oonics-version-of-anaconda.

Note: the download ISOs are based on CentOS 5.1!

Have fun testing it and let us know the outcome.
-- 
Gruss / Regards,

Marc Grimme
http://www.atix.de/   http://www.open-sharedroot.org/



Re: [CentOS] Xen, GFS, GNBD and DRBD?

2008-01-03 Thread Marc Grimme
Hi Tom,
On Wednesday 02 January 2008 23:44:19 Tom Lanyon wrote:
 Hi all,

 We're looking at deploying a small Xen cluster to run some of our
 smaller applications. I'm curious to get the list's opinions and advice
 on what's needed.
I'm not the biggest fan of DRBD with Xen and everything, but it's for a small
Xen cluster, isn't it? ;-) In my opinion it brings way too much complexity into
a concept that should always stay as simple as possible.

 The plan at the moment is to have two or three servers running as the
 Xen dom0 hosts and two servers running as storage servers. As we're
 trying to do this on a small scale, there is no means to hook the
 system into our SAN, so the storage servers do not have a shared
 storage subsystem.

 Is it possible to run DRBD on the two storage servers and then export
 the block devices over the network to the xen hosts? Ideally the goal
 is to have the effect of shared storage on the xen hosts so that
 domains can be migrated between them in case one server needs to go
 offline. Do I run GFS on top of the DRBD mirrored device, exported via
 GNBD to the xen hosts; or the other way around, using GNBD to export
 the DRBD mirrored device and then GFS running on the xen hosts?

 Is this possible; is there an easier/simpler/better way to do it?
For DRBD as a base for GFS you might want to have a look at
http://gfs.wikidev.net/DRBD_Cookbook

I didn't test it, but it might be what you are looking for.

When thinking about GNBD, you could also consider iSCSI (as already stated),
as it is a standard. Make it highly available, move it onto two other nodes,
and there you go; but you'll still need shared storage there. You could also
extend your thoughts to NFS; again, you'll have to make it highly available,
which brings you back to shared storage or DRBD.
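
To illustrate the iSCSI route, a very rough sketch with scsi-target-utils on
the storage node that currently holds the DRBD primary (target name, device
and host names are made up, and the target setup would have to fail over
together with the DRBD primary role):

  # on the storage server: export the DRBD device as an iSCSI LUN
  tgtadm --lld iscsi --op new --mode target --tid 1 \
 --targetname iqn.2008-01.org.example:drbd0
  tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
 --backing-store /dev/drbd0
  tgtadm --lld iscsi --op bind --mode target --tid 1 --initiator-address ALL

  # on each xen host: discover the target and log in
  iscsiadm -m discovery -t sendtargets -p storage1
  iscsiadm -m node --login

GFS would then sit on top of the imported device on the xen hosts, with the
usual cluster infrastructure (cman, clvmd, fencing) around it.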

BTW: Don't make your *small* cluster too complex to manage. ;-)

Have fun
Marc.

-- 
Gruss / Regards,

Marc Grimme
http://www.atix.de/   http://www.open-sharedroot.org/
