Re: [Gluster-users] gluster local vs local = gluster x4 slower

2010-03-24 Thread Jeremy Enos
I'll do my best to describe it.  I have QDR IB between each host.  I'd 
like a very large filesystem that's reliable and high performance for 
both large and small files- but coming back to reality, I don't actually 
expect to be able to get all those features in a single filesystem, but 
perhaps two.  It's an HPC cluster that the filesystem is required for, so 
it really does span large and small files, at varying reliability 
requirements.  My estimation is that I could do it in two filesystems- 
one unreliable high i/o (/scratch) and one reliable medium i/o (/home).


/scratch would not be required to be reliable, but if spread across 
enough nodes- AFR may be prudent, as I have it configured now.  This 
configuration has already yielded good aggregate i/o for large block 
transfers.  Small block is quite slow though.  If I do span multiple 
hosts, I can't dedicate those hosts to it- so the configuration would be 
spread wide to distribute i/o load as thinly as possible too.  If I can't 
get good small block i/o performance out of the same filesystem, I'm ok 
with using /home for that.
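
For illustration, the shape I have in mind for /scratch is roughly the 
following: AFR pairs aggregated with distribute.  The hostnames (n1-n4), the 
export name "disk", the volfile path, and the mount point below are only 
placeholders, not my actual config:

cat > /etc/glusterfs/scratch.vol <<'EOF'
# two replicate (AFR) pairs, pulled into one namespace with distribute
volume n1
  type protocol/client
  option transport-type ib-verbs/client
  option remote-host n1
  option remote-subvolume disk
end-volume

volume n2
  type protocol/client
  option transport-type ib-verbs/client
  option remote-host n2
  option remote-subvolume disk
end-volume

volume n3
  type protocol/client
  option transport-type ib-verbs/client
  option remote-host n3
  option remote-subvolume disk
end-volume

volume n4
  type protocol/client
  option transport-type ib-verbs/client
  option remote-host n4
  option remote-subvolume disk
end-volume

volume afr-a
  type cluster/replicate
  subvolumes n1 n2
end-volume

volume afr-b
  type cluster/replicate
  subvolumes n3 n4
end-volume

volume scratch
  type cluster/distribute
  subvolumes afr-a afr-b
end-volume
EOF

glusterfs -f /etc/glusterfs/scratch.vol /scratch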


/home would need to be reliable of course- and also medium performance 
for large block and medium to high performance for small block i/o.  I 
thought centralizing disks to a single host would help achieve this, but 
it has not- the small block overhead is apparently not as much due to 
distribution over hosts as it is to gluster itself.

Or my specific configuration, perhaps.
Thanks very much for all your help here-

Jeremy


On 3/23/2010 10:27 PM, Tejas N. Bhise wrote:

It might also be useful overall to know what you want to achieve. It's better to 
do sizing, performance, etc. if there is clarity on what is to be achieved. Once 
that is clear, it would be more useful to say whether something is possible or not 
with the config you are trying, why or why not, and whether the expectations are 
even justified for what is essentially a distributed networked FS.


- Original Message -
From: Jeremy Enos <je...@ncsa.uiuc.edu>
To: Stephan von Krawczynski <sk...@ithnet.com>
Cc: Tejas N. Bhise <te...@gluster.com>, gluster-users@gluster.org
Sent: Wednesday, March 24, 2010 5:41:28 AM GMT +05:30 Chennai, Kolkata, Mumbai, 
New Delhi
Subject: Re: [Gluster-users] gluster local vs local = gluster x4 slower

Stephan is correct- I primarily did this test to show a demonstrable
overhead example that I'm trying to eliminate.  It's pronounced enough
that it can be seen on a single disk / single node configuration, which
is good in a way (so anyone can easily repro).
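
For anyone who wants to try it, the sequence is roughly this (volfile paths, 
the mount point, and the tarball are placeholders; the actual volfiles are 
pasted further down):

# start the server side with the pasted glusterfsd.vol
glusterfsd -f /etc/glusterfs/glusterfsd.vol

# mount the same machine back through the client volfile
mkdir -p /mnt/gluster
glusterfs -f /etc/glusterfs/ghome.vol /mnt/gluster

# untar once through gluster and once straight to the backend export
cd /mnt/gluster && time tar xzf /path/to/some-large-tarball.tgz
cd /export      && time tar xzf /path/to/some-large-tarball.tgz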

My distributed/clustered solution would be ideal if it were fast enough
for small block i/o as well as large block- I was hoping that single
node systems would achieve that, hence the single node test.  Because
the single node test performed poorly, I eventually reduced down to
single disk to see if it could still be seen, and it clearly can be.
Perhaps it's something in my configuration?  I've pasted my config files
below.
thx-

  Jeremy

##glusterfsd.vol##
volume posix
type storage/posix
option directory /export
end-volume

volume locks
type features/locks
subvolumes posix
end-volume

volume disk
type performance/io-threads
option thread-count 4
subvolumes locks
end-volume

volume server-ib
type protocol/server
option transport-type ib-verbs/server
option auth.addr.disk.allow *
subvolumes disk
end-volume

volume server-tcp
type protocol/server
option transport-type tcp/server
option auth.addr.disk.allow *
subvolumes disk
end-volume

##ghome.vol##

#---IB remotes--
volume ghome
type protocol/client
option transport-type ib-verbs/client
#  option transport-type tcp/client
option remote-host acfs
option remote-subvolume raid
end-volume

#Performance Options---

volume readahead
type performance/read-ahead
option page-count 4   # 2 is default option
option force-atime-update off # default is off
subvolumes ghome
end-volume

volume writebehind
type performance/write-behind
option cache-size 1MB
subvolumes readahead
end-volume

volume cache
type performance/io-cache
option cache-size 1GB
subvolumes writebehind
end-volume

##END##



On 3/23/2010 6:02 AM, Stephan von Krawczynski wrote:
   

On Tue, 23 Mar 2010 02:59:35 -0600 (CST)
Tejas N. Bhise <te...@gluster.com> wrote:


 

Out of curiosity, if you want to do stuff only on one machine,
why do you want to use a distributed, multi node, clustered,
file system ?

   

Because what he does is a very good way to show the overhead produced only by
glusterfs and nothing else (i.e. no network involved).
A pretty relevant test scenario I would say.

--
Regards,
Stephan



 

Am I missing something here ?

Regards,
Tejas.

- 

Re: [Gluster-users] gluster local vs local = gluster x4 slower

2010-03-24 Thread Stephan von Krawczynski
Hi Jeremy,

have you tried to reproduce with all performance options disabled? They are
possibly no good idea on a local system.
What local fs do you use?
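
For example, a client volfile stripped down to nothing but the protocol/client 
volume might look like this (remote-host/remote-subvolume copied from the 
config quoted below; the volfile path and mount point are arbitrary):

cat > /etc/glusterfs/ghome-noperf.vol <<'EOF'
volume ghome
  type protocol/client
  option transport-type ib-verbs/client
  option remote-host acfs
  option remote-subvolume raid
end-volume
EOF

glusterfs -f /etc/glusterfs/ghome-noperf.vol /mnt/test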


--
Regards,
Stephan


On Tue, 23 Mar 2010 19:11:28 -0500
Jeremy Enos <je...@ncsa.uiuc.edu> wrote:

 Stephan is correct- I primarily did this test to show a demonstrable 
 overhead example that I'm trying to eliminate.  It's pronounced enough 
 that it can be seen on a single disk / single node configuration, which 
 is good in a way (so anyone can easily repro).
 
 My distributed/clustered solution would be ideal if it were fast enough 
 for small block i/o as well as large block- I was hoping that single 
 node systems would achieve that, hence the single node test.  Because 
 the single node test performed poorly, I eventually reduced down to 
 single disk to see if it could still be seen, and it clearly can be.  
 Perhaps it's something in my configuration?  I've pasted my config files 
 below.
 thx-
 
  Jeremy
 
 ##glusterfsd.vol##
 volume posix
type storage/posix
option directory /export
 end-volume
 
 volume locks
type features/locks
subvolumes posix
 end-volume
 
 volume disk
type performance/io-threads
option thread-count 4
subvolumes locks
 end-volume
 
 volume server-ib
type protocol/server
option transport-type ib-verbs/server
option auth.addr.disk.allow *
subvolumes disk
 end-volume
 
 volume server-tcp
type protocol/server
option transport-type tcp/server
option auth.addr.disk.allow *
subvolumes disk
 end-volume
 
 ##ghome.vol##
 
 #---IB remotes--
 volume ghome
type protocol/client
option transport-type ib-verbs/client
 #  option transport-type tcp/client
option remote-host acfs
option remote-subvolume raid
 end-volume
 
 #Performance Options---
 
 volume readahead
type performance/read-ahead
option page-count 4   # 2 is default option
option force-atime-update off # default is off
subvolumes ghome
 end-volume
 
 volume writebehind
type performance/write-behind
option cache-size 1MB
subvolumes readahead
 end-volume
 
 volume cache
type performance/io-cache
option cache-size 1GB
subvolumes writebehind
 end-volume
 
 ##END##
 
 
 
 On 3/23/2010 6:02 AM, Stephan von Krawczynski wrote:
  On Tue, 23 Mar 2010 02:59:35 -0600 (CST)
  Tejas N. Bhise <te...@gluster.com> wrote:
 
 
  Out of curiosity, if you want to do stuff only on one machine,
  why do you want to use a distributed, multi node, clustered,
  file system ?
   
  Because what he does is a very good way to show the overhead produced only 
  by
  glusterfs and nothing else (i.e. no network involved).
  A pretty relevant test scenario I would say.
 
  --
  Regards,
  Stephan
 
 
 
  Am I missing something here ?
 
  Regards,
  Tejas.
 
  - Original Message -
  From: Jeremy Enos <je...@ncsa.uiuc.edu>
  To: gluster-users@gluster.org
  Sent: Tuesday, March 23, 2010 2:07:06 PM GMT +05:30 Chennai, Kolkata, 
  Mumbai, New Delhi
  Subject: [Gluster-users] gluster local vs local = gluster x4 slower
 
  This test is pretty easy to replicate anywhere- only takes 1 disk, one
  machine, one tarball.  Untarring to local disk directly vs thru gluster
  is about 4.5x faster.  At first I thought this may be due to a slow host
  (Opteron 2.4ghz).  But it's not- same configuration, on a much faster
  machine (dual 3.33ghz Xeon) yields the performance below.
 
  THIS TEST WAS TO A LOCAL DISK THRU GLUSTER
  [r...@ac33 jenos]# time tar xzf
  /scratch/jenos/intel/l_cproc_p_11.1.064_intel64.tgz
 
  real0m41.290s
  user0m14.246s
  sys 0m2.957s
 
  THIS TEST WAS TO A LOCAL DISK (BYPASS GLUSTER)
  [r...@ac33 jenos]# cd /export/jenos/
  [r...@ac33 jenos]# time tar xzf
  /scratch/jenos/intel/l_cproc_p_11.1.064_intel64.tgz
 
  real0m8.983s
  user0m6.857s
  sys 0m1.844s
 
  THESE ARE TEST FILE DETAILS
  [r...@ac33 jenos]# tar tzvf
  /scratch/jenos/intel/l_cproc_p_11.1.064_intel64.tgz  |wc -l
  109
  [r...@ac33 jenos]# ls -l
  /scratch/jenos/intel/l_cproc_p_11.1.064_intel64.tgz
  -rw-r--r-- 1 jenos ac 804385203 2010-02-07 06:32
  /scratch/jenos/intel/l_cproc_p_11.1.064_intel64.tgz
  [r...@ac33 jenos]#
 
  These are the relevant performance options I'm using in my .vol file:
 
  #Performance Options---
 
  volume readahead
  type performance/read-ahead
  option page-count 4   # 2 is default option
  option force-atime-update off # default is off
  subvolumes ghome
  end-volume
 
  volume writebehind
  type performance/write-behind
  option cache-size 1MB
  subvolumes readahead
  end-volume
 
  volume cache
  type performance/io-cache
  option cache-size 1GB
  subvolumes writebehind
  

Re: [Gluster-users] gluster local vs local = gluster x4 slower

2010-03-24 Thread Jeremy Enos

I also neglected to mention that the underlying filesystem is ext3.

On 3/24/2010 3:44 AM, Jeremy Enos wrote:
I haven't tried all performance options disabled yet- I can try that 
tomorrow when the resource frees up.  I was actually asking first 
before blindly trying different configuration matrices in case there's 
a clear direction I should take with it.  I'll let you know.


Jeremy

On 3/24/2010 2:54 AM, Stephan von Krawczynski wrote:

Hi Jeremy,

have you tried to reproduce with all performance options disabled?  They are 
possibly no good idea on a local system.
What local fs do you use?


--
Regards,
Stephan


On Tue, 23 Mar 2010 19:11:28 -0500
Jeremy Enos <je...@ncsa.uiuc.edu> wrote:


Stephan is correct- I primarily did this test to show a demonstrable
overhead example that I'm trying to eliminate.  It's pronounced enough
that it can be seen on a single disk / single node configuration, which
is good in a way (so anyone can easily repro).

My distributed/clustered solution would be ideal if it were fast enough
for small block i/o as well as large block- I was hoping that single
node systems would achieve that, hence the single node test.  Because
the single node test performed poorly, I eventually reduced down to
single disk to see if it could still be seen, and it clearly can be.
Perhaps it's something in my configuration?  I've pasted my config files
below.
thx-

  Jeremy

##glusterfsd.vol##
volume posix
type storage/posix
option directory /export
end-volume

volume locks
type features/locks
subvolumes posix
end-volume

volume disk
type performance/io-threads
option thread-count 4
subvolumes locks
end-volume

volume server-ib
type protocol/server
option transport-type ib-verbs/server
option auth.addr.disk.allow *
subvolumes disk
end-volume

volume server-tcp
type protocol/server
option transport-type tcp/server
option auth.addr.disk.allow *
subvolumes disk
end-volume

##ghome.vol##

#---IB remotes--
volume ghome
type protocol/client
option transport-type ib-verbs/client
#  option transport-type tcp/client
option remote-host acfs
option remote-subvolume raid
end-volume

#Performance Options---

volume readahead
type performance/read-ahead
option page-count 4   # 2 is default option
option force-atime-update off # default is off
subvolumes ghome
end-volume

volume writebehind
type performance/write-behind
option cache-size 1MB
subvolumes readahead
end-volume

volume cache
type performance/io-cache
option cache-size 1GB
subvolumes writebehind
end-volume

##END##



On 3/23/2010 6:02 AM, Stephan von Krawczynski wrote:

On Tue, 23 Mar 2010 02:59:35 -0600 (CST)
Tejas N. Bhise <te...@gluster.com> wrote:



Out of curiosity, if you want to do stuff only on one machine,
why do you want to use a distributed, multi node, clustered,
file system ?

Because what he does is a very good way to show the overhead produced only by
glusterfs and nothing else (i.e. no network involved).
A pretty relevant test scenario I would say.

--
Regards,
Stephan




Am I missing something here ?

Regards,
Tejas.

- Original Message -
  From: Jeremy Enos <je...@ncsa.uiuc.edu>
To: gluster-users@gluster.org
Sent: Tuesday, March 23, 2010 2:07:06 PM GMT +05:30 Chennai, 
Kolkata, Mumbai, New Delhi

Subject: [Gluster-users] gluster local vs local = gluster x4 slower

  This test is pretty easy to replicate anywhere- only takes 1 disk, one
  machine, one tarball.  Untarring to local disk directly vs thru gluster
  is about 4.5x faster.  At first I thought this may be due to a slow host
  (Opteron 2.4ghz).  But it's not- same configuration, on a much faster
  machine (dual 3.33ghz Xeon) yields the performance below.

THIS TEST WAS TO A LOCAL DISK THRU GLUSTER
[r...@ac33 jenos]# time tar xzf
/scratch/jenos/intel/l_cproc_p_11.1.064_intel64.tgz

real0m41.290s
user0m14.246s
sys 0m2.957s

THIS TEST WAS TO A LOCAL DISK (BYPASS GLUSTER)
[r...@ac33 jenos]# cd /export/jenos/
[r...@ac33 jenos]# time tar xzf
/scratch/jenos/intel/l_cproc_p_11.1.064_intel64.tgz

real0m8.983s
user0m6.857s
sys 0m1.844s

THESE ARE TEST FILE DETAILS
[r...@ac33 jenos]# tar tzvf
/scratch/jenos/intel/l_cproc_p_11.1.064_intel64.tgz  |wc -l
109
[r...@ac33 jenos]# ls -l
/scratch/jenos/intel/l_cproc_p_11.1.064_intel64.tgz
-rw-r--r-- 1 jenos ac 804385203 2010-02-07 06:32
/scratch/jenos/intel/l_cproc_p_11.1.064_intel64.tgz
[r...@ac33 jenos]#

These are the relevant performance options I'm using in my .vol file:

#Performance Options---

volume readahead
 type performance/read-ahead
 option page-count 4   # 2 is default option
 option force-atime-update off # default is off
 

Re: [Gluster-users] Lock error with mysql over gluster

2010-03-24 Thread Christopher Hawkins
Martin,
I do something similar with version 3.0.2 and do not have locking problems. My 
vol file syntax is slightly different:

volume locks1
type features/locks

whereas you have 
 volume locks1
 type features/posix-locks

Perhaps try a 3.x version? It looks like there have been some changes made to 
locking. I'm not sure how 'posix-locks' became 'locks', but clearly something was 
updated. I am not using DHT, but that probably should not matter for locks.

Chris

- Martín Eduardo Bradaschia <martin.bradasc...@intercomgi.net> wrote:

 Hi
 
 I have a production environment like this:  Debian Etch (4.0), ext3, 
 glusterfs-2.0.9 (built from LATEST src), fuse-2.7.4glfs11, mysql 5.1.
 
 When I restart mysql it fills the syslog with messages like this:
 
 InnoDB: Unable to lock ./ibdata1, error: 11
 
 I tried the following translator variations:
 
 type features/posix-locks
 type features/locks
 option mandatory-locks on
 
 I even restarted gluster along with mysql
 
 ... the same problem again and again
 
 Can anybody help me? Thanx in advance!
 
 Here my current configuration:
 
 --- Server
 
 volume posix1
 type storage/posix
 option directory /media/vol1
 option background-unlink yes
 end-volume
 
 volume locks1
 type features/posix-locks
 option mandatory-locks on
 subvolumes posix1
 end-volume
 
 volume brick1
 type performance/io-threads
 option thread-count 8    # Default is 16
 subvolumes locks1
 end-volume
 
 volume posix2
 type storage/posix
 option directory /media/vol2
 option background-unlink yes
 end-volume
 
 volume locks2
 type features/posix-locks
 option mandatory-locks on
 subvolumes posix2
 end-volume
 
 volume brick2
 type performance/io-threads
 option thread-count 8    # Default is 16
 subvolumes locks2
 end-volume
 
 volume server1
 type protocol/server
 option transport-type tcp
 option transport.socket.bind-address 127.0.0.1
 option transport.socket.listen-port 7001    # Default is 6996
 option auth.addr.brick1.allow *
 subvolumes brick1
 end-volume
 
 
 volume server2
 type protocol/server
 option transport-type tcp
 option transport.socket.bind-address 127.0.0.1
 option transport.socket.listen-port 7002    # Default is 6996
 option auth.addr.brick2.allow *
 subvolumes brick2
 end-volume
 
 --- Client
 
 
 volume client1
 type protocol/client
 option transport-type tcp
 option remote-host 127.0.0.1    # The server is local
 option remote-port 7001    # Default is 6995
 option remote-subvolume brick1    # name of the remote volume
 end-volume
 
 
 volume client2
 type protocol/client
 option transport-type tcp
 option remote-host 127.0.0.1
 option remote-port 7002    # Default is 6995
 option remote-subvolume brick2    # name of the remote volume
 end-volume
 
 
 volume completo
 type cluster/distribute
 option min-free-disk 20%
 subvolumes client1 client2
 end-volume
 
 
 volume writebehind
 type performance/write-behind
 option cache-size 4MB
 subvolumes completo
 end-volume
 
 volume iocache
 type performance/io-cache
 option cache-size 64MB
 subvolumes writebehind
 end-volume
 
 -- 
 Martin Eduardo Bradaschia
 Intercomgi Argentina 
  
 
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Setup for production - which one would you choose?

2010-03-24 Thread Oliver Hoffmann
Haha, there are loads of Linux distributions out there and even
strange OSes like *BSD or windooze or what's it called? ;-)

I tried out Gentoo a while ago but I dropped it because all the compiling
took way too long. The big plus there is the big minus on Debian-like
systems. The current Ubuntu 9.10, for example, has glusterfs 2.0.9,
period. If you want to have 3.0.2 then you have to wait for Ubuntu
10.04 or you compile it. 
But now that we have (almost) 10.04 with 3.0.2 I'll go that way.
Having such a system up and running on recent hardware is a matter of
maybe 10 or 20 minutes.

Cheers!



 On 22/03/2010 17:59, Oliver Hoffmann wrote:
  Hi all,
 
  I just made some tests on two old machines using Ubuntu 10.4 (server
  i386) with fuse-2.7.4glfs11 and glusterfs-3.0.3. At a first glance
  it seems to be OK.
  The next step is deploying a system which could be used for
  production. What would you suggest? Ubuntu 10.4 (server 64bit) is my
  first choice because of LTS. Whatsoever, I think it is more the
  version of glusterfs which makes it stable or not, isn't it?
  In the end I'd like to have a distributed & replicated storage which
  provides data for a bunch of (virtualized) LAMPS.
 
  TIA for your recommendations!
 
 
 I'm intrigued.  I had not realised that there were other options than 
 Gentoo for use on a server?!  (Bang up to date, flexible
 configuration and strong support of various virtualisation
 solutions.  Slight negative in update speeds, but can be mitigated by
 using a binary package cache)
 
 Will try out those new fangled options you suggested above, but in
 the meantime have a look at Gentoo (at least if you are fairly
 confident with your linux skills).  Big plug for linux-vservers also,
 especially in combination with some custom server profiles to define
 required software versions and options
 
 Good luck
 
 Ed W
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
 


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Setup for production - which one would you choose?

2010-03-24 Thread Lee Simpson
To have glusterfs 3.0.3 on ubuntu 9.10 you can also just install the debian 
package for gluster 3.0.3 with dpkg -i.

http://packages.debian.org/source/sid/glusterfs

But then 10.04 is only a month away, so depends how much of a rush you're in!
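
In case it helps, the install itself is only a couple of commands; the package 
filenames below are just an example of what the sid build produces, so check 
the page above for the current ones:

dpkg -i libglusterfs0_3.0.3-1_amd64.deb \
        glusterfs-client_3.0.3-1_amd64.deb \
        glusterfs-server_3.0.3-1_amd64.deb
# pull in any missing dependencies dpkg complained about
apt-get -f install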



On Wednesday 24 Mar 2010 16:45:40 Oliver Hoffmann wrote:
 Haha, there are loads of Linux distributions out there and even
 strange OSes like *BSD or windooze or what's it called? ;-)
 
 I tried out Gentoo a while ago but I dropped it because all the compiling
 took way too long. The big plus here is the big minus on debian like
 systems. The current Ubuntu 9.10 for example has glusterfs 2.0.9,
 period. If you want to have 3.0.2 then you have to wait for Ubuntu
 10.04 or you compile it. 
 But now that we have (almost) 10.04 with 3.0.2 I'll take this way.
 Having such a system up and running on recent hardware is a matter of
 maybe 10 or 20 minutes.
 
 Cheers!
 
 
 
  On 22/03/2010 17:59, Oliver Hoffmann wrote:
   Hi all,
  
   I just made some tests on two old machines using Ubuntu 10.4 (server
   i386) with fuse-2.7.4glfs11 and glusterfs-3.0.3. At a first glance
   it seems to be OK.
   The next step is deploying a system which could be used for
   production. What would you suggest? Ubuntu 10.4 (server 64bit) is my
   first choice because of LTS. Whatsoever, I think it is more the
   version of glusterfs which makes it stable or not, isn't it?
   In the end I'd like to have a distributed & replicated storage which
   provides data for a bunch of (virtualized) LAMPS.
  
   TIA for your recommendations!
  
  
  I'm intrigued.  I had not realised that there were other options than 
  Gentoo for use on a server?!  (Bang up to date, flexible
  configuration and strong support of various virtualisation
  solutions.  Slight negative in update speeds, but can be mitigated by
  using a binary package cache)
  
  Will try out those new fangled options you suggested above, but in
  the meantime have a look at Gentoo (at least if you are fairly
  confident with your linux skills).  Big plug for linux-vservers also,
  especially in combination with some custom server profiles to define
  required software versions and options
  
  Good luck
  
  Ed W
  ___
  Gluster-users mailing list
  Gluster-users@gluster.org
  http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
  
 
 
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
 

-- 
Lee Simpson : Bsc(hons) Software Engineering
Software Development : Server Specification : Manufacture : Configuration

9 Partridge Rd,
St Albans
Herts
AL3 6HH

Email : l...@leesimpson.me.uk
www : http://www.leenix.co.uk

Tel : 01727 855 124
Mob : 07961 348 790






[ Disclaimer ]
This e-mail and any files transmitted with it are confidential and intended 
solely for the use of the individual or entity to whom they are addressed. If 
you have received this email in error please notify the sender by replying to 
this e-mail.

This email has been scanned for viruses 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Setup for production - which one would you choose?

2010-03-24 Thread Ian Rogers


I've just done part one of a writeup of my EC2 gluster LAMP installation 
at 
http://www.sirgroane.net/2010/03/distributed-file-system-on-amazon-ec2/ 
- may or may not be useful to you :-)


Ian

On 24/03/2010 17:09, Oliver Hoffmann wrote:

Yes, that's an idea. Thanx. That will be important for all the debian
clients, mostly lenny.

I think waiting and testing a month is quite ok though.

   

To have glusterfs 3.0.3 on ubuntu 9.10 you can also just install the
debian package for gluster 3.0.3 with dpkg -i.

http://packages.debian.org/source/sid/glusterfs

But then 10.04 is only a month away, so depends how much of a rush
you're in!



On Wednesday 24 Mar 2010 16:45:40 Oliver Hoffmann wrote:
 

Haha, there are loads of Linux distributions out there and even
strange OSes like *BSD or windooze or what's it called? ;-)

I tried out Gentoo a while ago but I dropped it because all the
compiling took way too long. The big plus here is the big minus on
debian like systems. The current Ubuntu 9.10 for example has
glusterfs 2.0.9, period. If you want to have 3.0.2 then you have to
wait for Ubuntu 10.04 or you compile it.
But now that we have (almost) 10.04 with 3.0.2 I'll take this way.
Having such a system up and running on recent hardware is a matter
of maybe 10 or 20 minutes.

Cheers!



   

On 22/03/2010 17:59, Oliver Hoffmann wrote:
 

Hi all,

I just made some tests on two old machines using Ubuntu 10.4
(server i386) with fuse-2.7.4glfs11 and glusterfs-3.0.3. At a
first glance it seems to be OK.
The next step is deploying a system which could be used for
production. What would you suggest? Ubuntu 10.4 (server 64bit)
is my first choice because of LTS. Whatsoever, I think it is
more the version of glusterfs which makes it stable or not,
isn't it? In the end I'd like to have a distributed &
replicated storage which provides data for a bunch of
(virtualized) LAMPS.

TIA for your recommendations!

   

I'm intrigued.  I had not realised that there were other options
than Gentoo for use on a server?!  (Bang up to date, flexible
configuration and strong support of various virtualisation
solutions.  Slight negative in update speeds, but can be
mitigated by using a binary package cache)

Will try out those new fangled options you suggested above, but in
the meantime have a look at Gentoo (at least if you are fairly
confident with your linux skills).  Big plug for linux-vservers
also, especially in combination with some custom server profiles
to define required software versions and options

Good luck

Ed W
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

 


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

   
 



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
   



--
www.ContactClean.com
Making changing email address as easy as clicking a mouse.
Helping you keep in touch.


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] quick-read vs. io-cache - was Re: mod_glusterfs?

2010-03-24 Thread Ian Rogers

On 23/03/2010 18:40, Raghavendra G wrote:



On Tue, Mar 23, 2010 at 5:05 PM, Ian Rogers 
<ian.rog...@contactclean.com> wrote:


Yes I know, I reported that bug :-)

If you're using io-cache then quick-read seems irrelevant as
io-cache has 128K pages internally.


io-cache and quick-read perform different functions. Normally, while 
reading a file, glusterfs (fuse) gets the following calls from VFS:


lookup,
open,
read,
.
.
.
read,
flush,
close.

But quick-read exploits an internal feature of glusterfs present in 
lookup to fetch the entire file in the lookup call itself. Hence the open, 
read and close calls are short-circuited at quick-read itself and don't 
reach the server, thereby saving 3 transactions over the network (probably 
more, due to reads).


Is it possible for io-cache to detect if a file is smaller than the 
internal page size (or whatever the cut-off is) and then use the 
quick-read shortcut itself?


In a LAMP situation I'd be wanting to use io-cache but there's also a 
lot of small files. Having both io-cache and the quick-read cache would 
just mean the files are double cached - using up twice the memory needed 
as well as an extra set of memory copies to shuffle things around.


Because of this, and the no-total-size-limit bug in quick-read, I'd 
quite like quick-read to be deprecated and it's functionality added to 
io-cache rather than fixing the memory limit bug in quick-read.


Regards,

Ian
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Setup for production - which one would you choose?

2010-03-24 Thread Oliver Hoffmann

Yep, thanx.

@Stephan: It is not a matter of knowing how to use tar and make, but if you 
have a bunch of servers then you want to do an apt-get update/upgrade 
once in a while without compiling this piece of software on that server 
and another one on another server, etc.

It is hard to fully understand what you just wrote.  If you are
suggesting that someone else's personal preferences (or company
objectives) are incorrect or misguided simply because they don't match
your own, I'm trying to understand how your last post pertains to the
user forum for Gluster.  There are plenty of reasons to prefer packages
over source installations, but that academic conversation is also not
appropriate for this list.

Cheers,
Benjamin



-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Stephan von
Krawczynski
Sent: Wednesday, March 24, 2010 4:37 PM
To: Ian Rogers
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Setup for production - which one would you
choose?

Ok, guys, honestly: it is allowed to learn (RMS fought for your right to
do so)
:-)
Really rarely in the open source universe will you find a piece of software
that is as easy to compile and run as glusterfs. All you have to know is how
to use tar. Then enter the source directory and do ./configure ; make ;
make install. What exactly is difficult about that? Why would you install
_some_ rpm that is outdated anyway (be it 2.0.9 or 3.0.2)?
Please don't tell me you configure and drive LAMP but can't build
glusterfs. The docs for 5 apache config options are longer than the whole
glusterfs source...
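
Spelled out in full, and with the version number only as an example, that is:

tar xzf glusterfs-3.0.3.tar.gz
cd glusterfs-3.0.3
./configure
make
make install    # as root, or prefix with sudo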

--
Regards,
Stephan

PS: yes, I know it's the user-list. 




On Wed, 24 Mar 2010 17:14:32 +
Ian Rogers <ian.rog...@contactclean.com> wrote:

 I've just done part one of a writeup of my EC2 gluster LAMP installation
 at http://www.sirgroane.net/2010/03/distributed-file-system-on-amazon-ec2/
 - may or may not be useful to you :-)

 Ian

 On 24/03/2010 17:09, Oliver Hoffmann wrote:

  Yes, that's an idea. Thanx. That will be important for all the debian
  clients, mostly lenny.

  I think waiting and testing a month is quite ok though.

   To have glusterfs 3.0.3 on ubuntu 9.10 you can also just install the
   debian package for gluster 3.0.3 with dpkg -i.

   http://packages.debian.org/source/sid/glusterfs

   But then 10.04 is only a month away, so depends how much of a rush
   you're in!

   On Wednesday 24 Mar 2010 16:45:40 Oliver Hoffmann wrote:

    Haha, there are loads of Linux distributions out there and even
    strange OSes like *BSD or windooze or what's it called? ;-)

    I tried out Gentoo a while ago but I dropped it because all the
    compiling took way too long. The big plus there is the big minus on
    Debian-like systems. The current Ubuntu 9.10 for example has
    glusterfs 2.0.9, period. If you want to have 3.0.2 then you have to
    wait for Ubuntu 10.04 or you compile it.
    But now that we have (almost) 10.04 with 3.0.2 I'll go that way.
    Having such a system up and running on recent hardware is a matter
    of maybe 10 or 20 minutes.

    Cheers!

    On 22/03/2010 17:59, Oliver Hoffmann wrote:

     Hi all,

     I just made some tests on two old machines using Ubuntu 10.4
     (server i386) with fuse-2.7.4glfs11 and glusterfs-3.0.3. At a
     first glance it seems to be OK.
     The next step is deploying a system which could be used for
     production. What would you suggest? Ubuntu 10.4 (server 64bit)
     is my first choice because of LTS. Whatsoever, I think it is
     more the version of glusterfs which makes it stable or not,
     isn't it? In the end I'd like to have a distributed &
     replicated storage which provides data for a bunch of
     (virtualized) LAMPS.

     TIA for your recommendations!

    I'm intrigued.  I had not realised that there were other options
    than Gentoo for use on a server?!  (Bang up to date, flexible
    configuration and strong support of various virtualisation
    solutions.  Slight negative in update speeds, but can be
    mitigated by using a binary package cache)

    Will try out those new fangled options you suggested above, but in
    the meantime have a look at Gentoo (at least if you are fairly
    confident with your linux skills).  Big plug for linux-vservers
    also, especially in combination with some custom server profiles
    to define required software versions and options

    Good luck

    Ed W
    ___
    Gluster-users mailing list
    Gluster-users@gluster.org
    http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

  ___
  Gluster-users mailing list
  Gluster-users@gluster.org
  http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org

Re: [Gluster-users] Setup for production - which one would you choose?

2010-03-24 Thread Steve

 Original Message 
 Date: Wed, 24 Mar 2010 22:36:42 +0100
 From: Stephan von Krawczynski <sk...@ithnet.com>
 To: Ian Rogers <ian.rog...@contactclean.com>
 CC: gluster-users@gluster.org
 Subject: Re: [Gluster-users] Setup for production - which one would you 
 choose?

 Ok, guys, honestly: it is allowed to learn (RMS fought for your right to
 do so)
 :-)
 Really rarely in the open source universe you will find a piece of
 software
 that is as easy to compile and run as glusterfs. All you have to know
 yourself
 is how to use tar. Then enter the source directory and do ./configure ;
 make ;
 make install What exactly is difficult to do? Why would you install
 _some_
 rpm that is outdated anyways (be it 2.0.9 or 3.0.2)?
 Please don't tell you configure and drive LAMP but can't build glusterfs.
 The docs for 5 apache config options are longer than the whole
 glusterfs-source...
 
Sorry Stephan, but that is (sorry to write it): bullshit.
The source of GlusterFS is not small at all.

Regarding installing from source vs RPM: well... I install EVERYTHING from source 
(except if it is a closed source application). But I would never install from source 
on a system that has RPM as its packaging system. There I would still build RPMs and 
install RPMs. It's not difficult to make GlusterFS RPMs. Anyway... if you install 
everything from source by using ./configure ; make ; make install then you probably 
have an ultra low number of systems to take care of (try doing that on many systems 
and still keeping track of what you have installed and what not). I would never ever 
pollute a system with ./configure ; make ; make install. There is a reason for 
packaging systems (RPM, DEB and friends). I use Gentoo Linux, btw, and installing 
from source is part of the idea behind Gentoo Linux.
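
For the record, on an RPM-based box that usually comes down to something like 
the following- assuming the release tarball ships a spec file, which as far as 
I know the glusterfs tarballs of this era do:

# build binary packages straight from the tarball's embedded spec file
rpmbuild -ta glusterfs-3.0.3.tar.gz
# then install whatever landed in the RPMS directory (path varies by distro)
rpm -Uvh ~/rpmbuild/RPMS/x86_64/glusterfs*.rpm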


 --
 Regards,
 Stephan
 
Steve


 PS: yes, I know it's the user-list. 
 
 
 On Wed, 24 Mar 2010 17:14:32 +
 Ian Rogers ian.rog...@contactclean.com wrote:
 
  
  I've just done part one of a writeup of my EC2 gluster LAMP installation
  at 
  http://www.sirgroane.net/2010/03/distributed-file-system-on-amazon-ec2/ 
  - may or may not be useful to you :-)
  
  Ian
  
  On 24/03/2010 17:09, Oliver Hoffmann wrote:
   Yes, that's an idea. Thanx. That will be important for all the debian
   clients, mostly lenny.
  
   I think waiting and testing a month is quite ok though.
  
  
   To have glusterfs 3.0.3 on ubuntu 9.10 you can also just install the
   debian package for gluster 3.0.3 with dpkg -i.
  
   http://packages.debian.org/source/sid/glusterfs
  
   But then 10.04 is only a month away, so depends how much of a rush
   you're in!
  
  
  
   On Wednesday 24 Mar 2010 16:45:40 Oliver Hoffmann wrote:

   Haha, there are loads of Linux distributions out there and even
   strange OSes like *BSD or windooze or what's it called? ;-)
  
   I tried out Gentoo a while ago but I dropped it because all the
   compiling took way too long. The big plus here is the big minus on
   debian like systems. The current Ubuntu 9.10 for example has
   glusterfs 2.0.9, period. If you want to have 3.0.2 then you have to
   wait for Ubuntu 10.04 or you compile it.
   But now that we have (almost) 10.04 with 3.0.2 I'll take this way.
   Having such a system up and running on recent hardware is a matter
   of maybe 10 or 20 minutes.
  
   Cheers!
  
  
  
  
   On 22/03/2010 17:59, Oliver Hoffmann wrote:

   Hi all,
  
   I just made some tests on two old machines using Ubuntu 10.4
   (server i386) with fuse-2.7.4glfs11 and glusterfs-3.0.3. At a
   first glance it seems to be OK.
   The next step is deploying a system which could be used for
   production. What would you suggest? Ubuntu 10.4 (server 64bit)
   is my first choice because of LTS. Whatsoever, I think it is
   more the version of glusterfs which makes it stable or not,
    isn't it? In the end I'd like to have a distributed &
   replicated storage which provides data for a bunch of
   (virtualized) LAMPS.
  
   TIA for your recommendations!
  
  
   I'm intrigued.  I had not realised that there were other options
   than Gentoo for use on a server?!  (Bang up to date, flexible
   configuration and strong support of various virtualisation
   solutions.  Slight negative in update speeds, but can be
   mitigated by using a binary package cache)
  
   Will try out those new fangled options you suggested above, but in
   the meantime have a look at Gentoo (at least if you are fairly
   confident with your linux skills).  Big plug for linux-vservers
   also, especially in combination with some custom server profiles
   to define required software versions and options
  
   Good luck
  
   Ed W
   ___
   Gluster-users mailing list
   Gluster-users@gluster.org
   http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
  

  
   ___

Re: [Gluster-users] Setup for production - which one would you choose?

2010-03-24 Thread Chad

Please take your pro/con discussion of package management off line.
This is a gluster discussion list, not a sysadmin discussion list.

^C



Steve wrote:

 Original Message 

Date: Wed, 24 Mar 2010 22:36:42 +0100
From: Stephan von Krawczynski <sk...@ithnet.com>
To: Ian Rogers <ian.rog...@contactclean.com>
CC: gluster-users@gluster.org
Subject: Re: [Gluster-users] Setup for production - which one would you choose?



Ok, guys, honestly: it is allowed to learn (RMS fought for your right to
do so)
:-)
Really rarely in the open source universe you will find a piece of
software
that is as easy to compile and run as glusterfs. All you have to know
yourself
is how to use tar. Then enter the source directory and do ./configure ;
make ;
make install What exactly is difficult to do? Why would you install
_some_
rpm that is outdated anyways (be it 2.0.9 or 3.0.2)?
Please don't tell you configure and drive LAMP but can't build glusterfs.
The docs for 5 apache config options are longer than the whole
glusterfs-source...


Sorry Stephan. But that is (sorry to write that): Bullshit.
The source of GlusterFS is not small at all.

Regarding installing source vs RPM: Well... I install EVERYTHING from source (except if it is a 
closed source application). But I would never install from source if I would use a system that has 
RPM as their packaging system. Then I would still build RPMS and install RPMS. It's not difficult 
to make GlusterFS RPMS. Anyway... if you install everything from source by using ./configure 
; make ; make install then you probably have a ultra low amount of systems that you need to 
take care of (try doing that thing on many systems and still keep track what you have installed and 
what not). I would never ever pollute a system with ./configure ; make ; make install. 
There is a reason for packaging systems (RPM, DEB and friends). I use btw Gentoo Linux and 
installing from source is part of the idea behind Gentoo Linux.



--
Regards,
Stephan


Steve


PS: yes, I know it's the user-list. 



On Wed, 24 Mar 2010 17:14:32 +
Ian Rogers ian.rog...@contactclean.com wrote:


I've just done part one of a writeup of my EC2 gluster LAMP installation
at 
http://www.sirgroane.net/2010/03/distributed-file-system-on-amazon-ec2/ 
- may or may not be useful to you :-)


Ian

On 24/03/2010 17:09, Oliver Hoffmann wrote:

Yes, that's an idea. Thanx. That will be important for all the debian
clients, mostly lenny.

I think waiting and testing a month is quite ok though.

   

To have glusterfs 3.0.3 on ubuntu 9.10 you can also just install the
debian package for gluster 3.0.3 with dpkg -i.

http://packages.debian.org/source/sid/glusterfs

But then 10.04 is only a month away, so depends how much of a rush
you're in!



On Wednesday 24 Mar 2010 16:45:40 Oliver Hoffmann wrote:
 

Haha, there are loads of Linux distributions out there and even
strange OSes like *BSD or windooze or what's it called? ;-)

I tried out Gentoo a while ago but I dropped it because all the
compiling took way too long. The big plus here is the big minus on
debian like systems. The current Ubuntu 9.10 for example has
glusterfs 2.0.9, period. If you want to have 3.0.2 then you have to
wait for Ubuntu 10.04 or you compile it.
But now that we have (almost) 10.04 with 3.0.2 I'll take this way.
Having such a system up and running on recent hardware is a matter
of maybe 10 or 20 minutes.

Cheers!



   

On 22/03/2010 17:59, Oliver Hoffmann wrote:
 

Hi all,

I just made some tests on two old machines using Ubuntu 10.4
(server i386) with fuse-2.7.4glfs11 and glusterfs-3.0.3. At a
first glance it seems to be OK.
The next step is deploying a system which could be used for
production. What would you suggest? Ubuntu 10.4 (server 64bit)
is my first choice because of LTS. Whatsoever, I think it is
more the version of glusterfs which makes it stable or not,
isn't it? In the end I'd like to have a distributed &
replicated storage which provides data for a bunch of
(virtualized) LAMPS.

TIA for your recommendations!

   

I'm intrigued.  I had not realised that there were other options
than Gentoo for use on a server?!  (Bang up to date, flexible
configuration and strong support of various virtualisation
solutions.  Slight negative in update speeds, but can be
mitigated by using a binary package cache)

Will try out those new fangled options you suggested above, but in
the meantime have a look at Gentoo (at least if you are fairly
confident with your linux skills).  Big plug for linux-vservers
also, especially in combination with some custom server profiles
to define required software versions and options

Good luck

Ed W
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

 

___
Gluster-users mailing list
Gluster-users@gluster.org

Re: [Gluster-users] quick-read vs. io-cache - was Re: mod_glusterfs?

2010-03-24 Thread Ian Rogers



On 25/03/2010 00:11, Krishna Srinivas wrote:

On Wed, Mar 24, 2010 at 10:38 AM, Ian Rogers
<ian.rog...@contactclean.com> wrote:
   


In a LAMP situation I'd be wanting to use io-cache but there's also a lot of
small files. Having both io-cache and the quick-read cache would just mean
the files are double cached - using up twice the memory needed as well as an
extra set of memory copies to shuffle things around.
 

Good point :) You will have this double-cache problem if you use
iocache above quickread. But if you use quickread above iocache, only
quickread caches it (as iocache will never get any open/read calls for
the small files, since quickread fetches all the data in the lookup call).

Hence, to avoid the double caching problem, always use quickread on top
of iocache (i.e. in the volfile quickread will be below iocache).

Krishna
   


Good tip, thanks for that :-)
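
For anyone else reading the archive, my understanding of that ordering in 
volfile terms is something like the sketch below; the volume names and the 
'writebehind' subvolume underneath are illustrative, not a tested config:

cat >> /etc/glusterfs/client.vol <<'EOF'
volume iocache
  type performance/io-cache
  option cache-size 64MB
  subvolumes writebehind
end-volume

# defined after (textually below) iocache, so quickread sits on top of it
volume quickread
  type performance/quick-read
  subvolumes iocache
end-volume
EOF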




like quick-read to be deprecated and it's functionality added to io-cache
 


Argh, I hate making punctuation typos!! %-}

Ian
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Lock error with mysql over gluster

2010-03-24 Thread Amar Tumballi
 volume locks1
 type features/locks

 whereas you have
  volume locks1
  type features/posix-locks

 Perhaps try 3.x version? Looks like there have been some changes made to
 locking. Not sure how 'posix-locks' became 'locks' but clearly something is
 updated. I am not using DHT but that probably should not matter for locks.


To make things clear about how glusterfs understands the 'type' of a volume
defined in the vol file:

GlusterFS tries to open a shared library file located at
${XLATOR_DIR}/<type-given-in-vol-file>.so. Now, if you look at the
directory '${XLATOR_DIR}/features/', you can find that posix-locks.so is a
symlink to 'locks.so'. Hence they are both actually the same library (i.e., the
same type in the volume file).

Other symlinks used are

cluster/dht -> cluster/distribute
cluster/afr -> cluster/replicate
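
If anyone wants to verify this on their own install, something along these 
lines should show the symlinks (the xlator directory depends on your prefix 
and version, so the path below is only a guess):

ls -l /usr/lib/glusterfs/*/xlator/features/ | grep locks
ls -l /usr/lib/glusterfs/*/xlator/cluster/  | grep -E 'dht|afr|distribute|replicate'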

-

Now coming to locks issues, yes, we did fix some issues with 'flock()' in
recent 3.0.x version. Please use 3.0.2+ version, and let us know how it
goes.


Regards,
Amar
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users