Re: [Gluster-users] DRBD like performance?

2009-11-30 Thread Hiren Joshi
I'm having a similar problem. I'm looking into DRBD, but the downside
there is that if the head server goes down, the clients won't
automatically switch over to the slave server...
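
For reference, the GlusterFS counterpart to a DRBD pair is usually a
client volfile that replicates two server bricks and layers write-behind
on top so writes return before they are fully flushed remotely. A minimal
sketch, with host and brick names as placeholders rather than anything
from this thread:

volume server1
  type protocol/client
  option transport-type tcp
  option remote-host serverA            # placeholder hostname
  option remote-subvolume brick
end-volume

volume server2
  type protocol/client
  option transport-type tcp
  option remote-host serverB            # placeholder hostname
  option remote-subvolume brick
end-volume

volume mirror
  type cluster/replicate
  subvolumes server1 server2
end-volume

volume writebehind
  type performance/write-behind
  option window-size 1MB                # size is illustrative
  subvolumes mirror
end-volume

Note that cluster/replicate still writes to both subvolumes synchronously;
write-behind only hides some of the latency, so this is not a full DRBD
equivalent.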

 -Original Message-
 From: gluster-users-boun...@gluster.org 
 [mailto:gluster-users-boun...@gluster.org] On Behalf Of Jeffery Soo
 Sent: 29 November 2009 09:30
 To: gluster-users@gluster.org
 Subject: [Gluster-users] DRBD like performance?
 
 I had the intention of using GlusterFS to replace DRBD to set up a
 clustered/redundant webserver, but so far the performance is about 7-8x
 slower than native due to the live writing that GlusterFS does.
 Is it possible to have a setup like DRBD to improve performance?
 
 Basically I want to know if I can get the same functionality and
 performance as DRBD. I have 2 servers, and with DRBD each server
 performs all reads locally (giving native performance) and does not
 replicate a write until it is fully written locally (a delayed write, I
 guess you could say). This way you get the replication but still get
 native performance.
 
 Is there a current way to set up GlusterFS like this in order to get
 this 'DRBD-like' functionality?
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] performance/stat-prefetch

2009-11-27 Thread Hiren Joshi
Hi,
 
I have a 2-server setup (6 bricks on each server, replicated and then
hashed). I'm trying to load performance/stat-prefetch at the end of the
client vol, but it doesn't seem to work.

The mount itself works, but a df -h then just hangs, with nothing using
up resources in top.

Any thoughts?

Josh.
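
For reference, a stat-prefetch stanza loaded at the top of a client
volfile would look roughly like this (the subvolume name is a
placeholder for whatever volume currently sits at the top of the graph,
not something taken from the poster's actual volfile):

volume statprefetch
  type performance/stat-prefetch
  subvolumes distribute        # placeholder subvolume
end-volume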
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Poor performance

2009-11-27 Thread Hiren Joshi
I'm seeing

### Add quick-read for small files
volume quickread
  type performance/quick-read
  option cache-timeout 1 # default 1 second
  option max-file-size 256KB    # default 64Kb
  subvolumes iocache
end-volume 


Should that not be:
### Add quick-read for small files
volume quickread
  type performance/quick-read
  option cache-timeout 1 # default 1 second
  option max-file-size 256KB    # default 64Kb
  subvolumes writeback
end-volume 

Or is the writeback vol supposed to be dangling?
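
For context, a chain where nothing dangles stacks the translators one on
top of the other, roughly as in the sketch below. This is a guess at the
intended order, not the article's exact config: if iocache already has
"subvolumes writeback", then "subvolumes iocache" in quickread keeps
writeback in the chain and nothing is dangling.

volume writeback
  type performance/write-behind
  subvolumes distribute        # placeholder for whatever sits below
end-volume

volume iocache
  type performance/io-cache
  subvolumes writeback
end-volume

volume quickread
  type performance/quick-read
  option cache-timeout 1
  option max-file-size 256KB
  subvolumes iocache
end-volume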


 -Original Message-
 From: gluster-users-boun...@gluster.org 
 [mailto:gluster-users-boun...@gluster.org] On Behalf Of Andre 
 Felipe Machado
 Sent: 27 November 2009 12:58
 To: gluster-users@gluster.org
 Subject: Re: [Gluster-users] Poor performance
 
 Hello,
 You could study and adapt for your environment and apps behaviour:
 http://www.techforce.com.br/news/linux_blog/glusterfs_tuning_small_files
 Regards.
 Andre Felipe Machado
 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] performance/stat-prefetch

2009-11-27 Thread Hiren Joshi
Was this the correct link?

The server hangs with or without quick-read. Can you send me a link to
the bug? I'll put myself on the list.

Thanks,
Josh. 

 -Original Message-
 From: gluster-users-boun...@gluster.org 
 [mailto:gluster-users-boun...@gluster.org] On Behalf Of Andre 
 Felipe Machado
 Sent: 27 November 2009 13:02
 To: gluster-users@gluster.org
 Subject: Re: [Gluster-users] performance/stat-prefetch
 
 Hello,
 Before 2.0.8 and its bug 314 fix, stat-prefetch and quick-read caused
 hangs when used in the same setup at the same time.
 http://www.techforce.com.br/news/linux_blog/glusterfs_tuning_small_files
 Regards.
 Andre Felipe Machado
 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] performance/stat-prefetch

2009-11-27 Thread Hiren Joshi

That's good news. I'll avoid stat-prefetch until the next version.

Richard de Vries wrote:

I've observed the same problem in version 2.0.8.
If you strace, for example, an ls -l, it waits forever for the lstat64
to finish.
The current version pulled from git doesn't show this behavior.

Richard
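
For anyone wanting to reproduce that observation, something along these
lines shows the stuck call (the mount path is a placeholder):

strace ls -l /mnt/glusterfs 2>&1 | grep lstat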



-Original Message-
Thanks.
I don't have io-cache loaded either. The server will hang even if
stat-prefetch is the only performance translator loaded.







___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Rsync

2009-10-12 Thread Hiren Joshi
Another update: I thought I'd try it without direct-io-mode=write-only;
no joy. Next I'm going to try to simplify my setup (thinking 6 hashes
mirrored might be too much?). As always, if you can shed any more light
or offer up any more ideas, please let me know.

Thanks,
Josh.
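
For reference, the two mount invocations being compared would look
roughly like this (paths are placeholders):

mount -t glusterfs /path/to/client.vol /mnt/
mount -t glusterfs -o direct-io-mode=write-only /path/to/client.vol /mnt/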

 -Original Message-
 From: gluster-users-boun...@gluster.org 
 [mailto:gluster-users-boun...@gluster.org] On Behalf Of Hiren Joshi
 Sent: 05 October 2009 11:01
 To: Pavan Vilas Sondur
 Cc: gluster-users@gluster.org
 Subject: Re: [Gluster-users] Rsync
 
 Just a quick update: The rsync is *still* not finished. 
 
  -Original Message-
  From: gluster-users-boun...@gluster.org 
  [mailto:gluster-users-boun...@gluster.org] On Behalf Of Hiren Joshi
  Sent: 01 October 2009 16:50
  To: Pavan Vilas Sondur
  Cc: gluster-users@gluster.org
  Subject: Re: [Gluster-users] Rsync
  
  Thanks!
  
  I'm keeping a close eye on the "is glusterfs DHT really distributed?"
  thread =)
  
  I tried nodelay on and unhashd no. I tarred about 400G to 
 the share in
  about 17 hours (~6MB/s?) and am running an rsync now. Will post the
  results when it's done.
  
   -Original Message-
   From: Pavan Vilas Sondur [mailto:pa...@gluster.com] 
   Sent: 01 October 2009 09:00
   To: Hiren Joshi
   Cc: gluster-users@gluster.org
   Subject: Re: Rsync
   
   Hi,
   We're looking into the problem on similar setups and 
 workng on it. 
   Meanwhile can you let us know if performance increases if you 
   use this option:
   
   option transport.socket.nodelay on' in each of your
   protocol/client and protocol/server volumes.
   
   Pavan
   
   On 28/09/09 11:25 +0100, Hiren Joshi wrote:
Another update:
It took 1240 minutes (over 20 hours) to complete on the 
 simplified
system (without mirroring). What else can I do to debug?

 -Original Message-
 From: gluster-users-boun...@gluster.org 
 [mailto:gluster-users-boun...@gluster.org] On Behalf Of 
   Hiren Joshi
 Sent: 24 September 2009 13:05
 To: Pavan Vilas Sondur
 Cc: gluster-users@gluster.org
 Subject: Re: [Gluster-users] Rsync
 
  
 
  -Original Message-
  From: Pavan Vilas Sondur [mailto:pa...@gluster.com] 
  Sent: 24 September 2009 12:42
  To: Hiren Joshi
  Cc: gluster-users@gluster.org
  Subject: Re: Rsync
  
  Can you let us know the following:
  
   * What is the exact directory structure?
 /abc/def/ghi/jkl/[1-4]
 now abc, def, ghi and jkl are one of a thousand dirs.
 
   * How many files are there in each individual 
 directory and 
  of what size?
 Each of the [1-4] dirs has about 100 files in, all under 1MB.
 
   * It looks like each server process has 6 export 
  directories. Can you run one server process each 
 for a single 
  export directory and check if the rsync speeds up?
 I had no idea you could do that. How? Would I need to 
   create 6 config
 files and start gluster:
 
 /usr/sbin/glusterfsd -f /etc/glusterfs/export1.vol or similar?
 
 I'll give this a go
 
   * Also, do you have any benchmarks with a similar setup on 
 say, NFS?
 NFS will create the dir tree in about 20 minutes then start 
 copying the
 files over, it takes about 2-3 hours.
 
  
  Pavan
  
  On 24/09/09 12:13 +0100, Hiren Joshi wrote:
   It's been running for over 24 hours now.
   Network traffic is nominal, top shows about 200-400% cpu 
 (7 cores so
   it's not too bad).
   About 14G of memory used (the rest is being used as 
   disk cache).
   
   Thoughts?
   
   
   
   snip
   
   An update, after running the rsync for a day, 
   I killed it 
  and remounted
   all the disks (the underlying filesystem, not the 
 gluster) 
  with noatime,
   the rsync completed in about 600 minutes. I'm now 
 going to 
  try one level
   up (about 1,000,000,000 dirs).
   
-Original Message-
From: Pavan Vilas Sondur 
  [mailto:pa...@gluster.com] 
Sent: 23 September 2009 07:55
To: Hiren Joshi
Cc: gluster-users@gluster.org
Subject: Re: Rsync

Hi Hiren,
What glusterfs version are you using? Can you 
 send us the 
volfiles and the log files.

Pavan

On 22/09/09 16:01 +0100, Hiren Joshi wrote:
 I forgot to mention, the mount is 
 mounted with 
  direct-io, would this
 make a difference? 
 
  -Original Message-
  From: gluster-users-boun...@gluster.org 
  
 [mailto:gluster-users-boun...@gluster.org] On 
  Behalf Of 
Hiren Joshi
  Sent: 22 September 2009 11:40
  To: gluster-users

Re: [Gluster-users] Rsync

2009-10-07 Thread Hiren Joshi
The initial copy has to happen via gluster, as I'm using distribution as
well as replication.

 -Original Message-
 From: Stephan von Krawczynski [mailto:sk...@ithnet.com] 
 Sent: 06 October 2009 16:39
 To: Hiren Joshi
 Cc: Pavan Vilas Sondur; gluster-users@gluster.org
 Subject: Re: [Gluster-users] Rsync
 
 Remember, the gluster team does not like my way of data-feeding. If your
 setup blows up, don't blame them (or me :-)
 I can only tell you what I am doing: simply move (or copy) the initial
 data to the primary server of the replication setup and then start
 glusterfsd for exporting.
 You will notice that the data gets replicated as soon as stat is going
 on (first ls or the like). If you already exported the data via nfs
 before, you probably only need to set up glusterfs on the very same box
 and use it as the primary server. Then there is no data copying at all.
 
 After months of experiments I can say that glusterfs runs pretty stable
 on _low_ performance setups. But you have to do one thing: lengthen the
 ping-timeout (something like "option ping-timeout 120").
 If you do not do that you will lose some of your server(s) at some
 point, and that will turn your glusterfs setup into a mess.
 If your environment is ok, it works. If your environment fails, it will
 fail too, sooner or later. In other words: it exports data, but it does
 not fulfill the promise of keeping your setup alive during failures - at
 this stage.
 My advice for the team is to stop whatever they may be working on, take
 four physical boxes (2 servers, 2 clients), run a lot of bonnies and
 unplug/re-plug the servers non-deterministically. You can find all kinds
 of weirdness this way.
 
 Regards,
 Stephan
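
For reference, the ping-timeout option mentioned above goes into each
protocol/client volume; against one of the client volumes posted
elsewhere in this archive it would look roughly like this (120 is
Stephan's suggested value, the posted volfiles use 30):

volume glust1a_1
  type protocol/client
  option transport-type tcp/client
  option remote-host glust1a
  option ping-timeout 120
  option remote-subvolume brick1
end-volume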
 
 
 On Mon, 5 Oct 2009 16:49:53 +0100
 Hiren Joshi j...@moonfruit.com wrote:
 
  My users are more pitchfork, less shooting.
  
  I don't understand what you're saying, should I have 
 locally copied all
  the files over not using gluster before attempting an rsync?
  
   -Original Message-
   From: Stephan von Krawczynski [mailto:sk...@ithnet.com] 
   Sent: 05 October 2009 14:13
   To: Hiren Joshi
   Cc: Pavan Vilas Sondur; gluster-users@gluster.org
   Subject: Re: [Gluster-users] Rsync
   
   It would be nice to remember my thread about _not_ copying 
   data initially to
   gluster via the mountpoint. And one major reason for _local_ 
   feed was: speed. 
   Obviously a lot of cases are merely impossible because of the 
   pure waiting
   time. If you had a live setup people would have already 
 shot you...
   This is why I talked about a feature and not an accepted bug 
   behaviour.
   
   Regards,
   Stephan
   
   
   On Mon, 5 Oct 2009 11:00:36 +0100
   Hiren Joshi j...@moonfruit.com wrote:
   
Just a quick update: The rsync is *still* not finished. 

 -Original Message-
 From: gluster-users-boun...@gluster.org 
 [mailto:gluster-users-boun...@gluster.org] On Behalf Of 
   Hiren Joshi
 Sent: 01 October 2009 16:50
 To: Pavan Vilas Sondur
 Cc: gluster-users@gluster.org
 Subject: Re: [Gluster-users] Rsync
 
 Thanks!
 
  I'm keeping a close eye on the "is glusterfs DHT really distributed?"
  thread =)
 
 I tried nodelay on and unhashd no. I tarred about 400G to 
   the share in
 about 17 hours (~6MB/s?) and am running an rsync now. 
   Will post the
 results when it's done.
 
  -Original Message-
  From: Pavan Vilas Sondur [mailto:pa...@gluster.com] 
  Sent: 01 October 2009 09:00
  To: Hiren Joshi
  Cc: gluster-users@gluster.org
  Subject: Re: Rsync
  
  Hi,
  We're looking into the problem on similar setups and working on it.
  Meanwhile can you let us know if performance increases if you
  use this option:
  
  'option transport.socket.nodelay on' in each of your
  protocol/client and protocol/server volumes.
  
  Pavan
  
  On 28/09/09 11:25 +0100, Hiren Joshi wrote:
   Another update:
   It took 1240 minutes (over 20 hours) to complete on 
   the simplified
   system (without mirroring). What else can I do to debug?
   
-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of 
  Hiren Joshi
Sent: 24 September 2009 13:05
To: Pavan Vilas Sondur
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Rsync

 

 -Original Message-
 From: Pavan Vilas Sondur [mailto:pa...@gluster.com] 
 Sent: 24 September 2009 12:42
 To: Hiren Joshi
 Cc: gluster-users@gluster.org
 Subject: Re: Rsync
 
 Can you let us know the following:
 
  * What is the exact directory structure?
/abc/def/ghi/jkl/[1-4]
now abc, def, ghi and jkl are one of a thousand dirs

Re: [Gluster-users] Rsync

2009-10-05 Thread Hiren Joshi
Just a quick update: The rsync is *still* not finished. 

 -Original Message-
 From: gluster-users-boun...@gluster.org 
 [mailto:gluster-users-boun...@gluster.org] On Behalf Of Hiren Joshi
 Sent: 01 October 2009 16:50
 To: Pavan Vilas Sondur
 Cc: gluster-users@gluster.org
 Subject: Re: [Gluster-users] Rsync
 
 Thanks!
 
 I'm keeping a close eye on the "is glusterfs DHT really distributed?"
 thread =)
 
 I tried nodelay on and unhashd no. I tarred about 400G to the share in
 about 17 hours (~6MB/s?) and am running an rsync now. Will post the
 results when it's done.
 
  -Original Message-
  From: Pavan Vilas Sondur [mailto:pa...@gluster.com] 
  Sent: 01 October 2009 09:00
  To: Hiren Joshi
  Cc: gluster-users@gluster.org
  Subject: Re: Rsync
  
  Hi,
  We're looking into the problem on similar setups and working on it.
  Meanwhile can you let us know if performance increases if you
  use this option:
  
  'option transport.socket.nodelay on' in each of your
  protocol/client and protocol/server volumes.
  
  Pavan
  
  On 28/09/09 11:25 +0100, Hiren Joshi wrote:
   Another update:
   It took 1240 minutes (over 20 hours) to complete on the simplified
   system (without mirroring). What else can I do to debug?
   
-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of 
  Hiren Joshi
Sent: 24 September 2009 13:05
To: Pavan Vilas Sondur
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Rsync

 

 -Original Message-
 From: Pavan Vilas Sondur [mailto:pa...@gluster.com] 
 Sent: 24 September 2009 12:42
 To: Hiren Joshi
 Cc: gluster-users@gluster.org
 Subject: Re: Rsync
 
 Can you let us know the following:
 
  * What is the exact directory structure?
/abc/def/ghi/jkl/[1-4]
now abc, def, ghi and jkl are one of a thousand dirs.

  * How many files are there in each individual directory and 
 of what size?
Each of the [1-4] dirs has about 100 files in, all under 1MB.

  * It looks like each server process has 6 export 
 directories. Can you run one server process each for a single 
 export directory and check if the rsync speeds up?
I had no idea you could do that. How? Would I need to 
  create 6 config
files and start gluster:

/usr/sbin/glusterfsd -f /etc/glusterfs/export1.vol or similar?

I'll give this a go

  * Also, do you have any benchmarks with a similar setup on 
say, NFS?
NFS will create the dir tree in about 20 minutes then start 
copying the
files over, it takes about 2-3 hours.

 
 Pavan
 
 On 24/09/09 12:13 +0100, Hiren Joshi wrote:
  It's been running for over 24 hours now.
  Network traffic is nominal, top shows about 200-400% cpu 
(7 cores so
  it's not too bad).
  About 14G of memory used (the rest is being used as 
  disk cache).
  
  Thoughts?
  
  
  
  snip
  
  An update, after running the rsync for a day, 
  I killed it 
 and remounted
  all the disks (the underlying filesystem, not the 
gluster) 
 with noatime,
  the rsync completed in about 600 minutes. I'm now 
going to 
 try one level
  up (about 1,000,000,000 dirs).
  
   -Original Message-
   From: Pavan Vilas Sondur 
 [mailto:pa...@gluster.com] 
   Sent: 23 September 2009 07:55
   To: Hiren Joshi
   Cc: gluster-users@gluster.org
   Subject: Re: Rsync
   
   Hi Hiren,
   What glusterfs version are you using? Can you 
send us the 
   volfiles and the log files.
   
   Pavan
   
   On 22/09/09 16:01 +0100, Hiren Joshi wrote:
I forgot to mention, the mount is mounted with 
 direct-io, would this
make a difference? 

 -Original Message-
 From: gluster-users-boun...@gluster.org 
 [mailto:gluster-users-boun...@gluster.org] On 
 Behalf Of 
   Hiren Joshi
 Sent: 22 September 2009 11:40
 To: gluster-users@gluster.org
 Subject: [Gluster-users] Rsync
 
 Hello all,
  
 I'm getting what I think is bizarre 
 behaviour I have 
   about 400G to
 rsync (rsync -av) onto a gluster share, 
  the data is 
 in a directory
 structure which has about 1000 directories 
 per parent and 
   about 1000
 directories in each of them.
  
 When I try to rsync an end leaf 
 directory (this 
   has about 4 
 dirs and 100
 files in each) the operation takes about 10 
   seconds. When I 
 go one level
 above

Re: [Gluster-users] Rsync

2009-10-05 Thread Hiren Joshi
My users are more pitchfork, less shooting.

I don't understand what you're saying. Should I have copied all the
files over locally, not using gluster, before attempting an rsync?

 -Original Message-
 From: Stephan von Krawczynski [mailto:sk...@ithnet.com] 
 Sent: 05 October 2009 14:13
 To: Hiren Joshi
 Cc: Pavan Vilas Sondur; gluster-users@gluster.org
 Subject: Re: [Gluster-users] Rsync
 
 It would be nice to remember my thread about _not_ copying data
 initially to gluster via the mountpoint. And one major reason for
 _local_ feed was: speed.
 Obviously a lot of cases are merely impossible because of the pure
 waiting time. If you had a live setup people would have already shot
 you...
 This is why I talked about a feature and not an accepted bug behaviour.
 
 Regards,
 Stephan
 
 
 On Mon, 5 Oct 2009 11:00:36 +0100
 Hiren Joshi j...@moonfruit.com wrote:
 
  Just a quick update: The rsync is *still* not finished. 
  
   -Original Message-
   From: gluster-users-boun...@gluster.org 
   [mailto:gluster-users-boun...@gluster.org] On Behalf Of 
 Hiren Joshi
   Sent: 01 October 2009 16:50
   To: Pavan Vilas Sondur
   Cc: gluster-users@gluster.org
   Subject: Re: [Gluster-users] Rsync
   
   Thanks!
   
   I'm keeping a close eye on the "is glusterfs DHT really distributed?"
   thread =)
   
   I tried nodelay on and unhashd no. I tarred about 400G to 
 the share in
   about 17 hours (~6MB/s?) and am running an rsync now. 
 Will post the
   results when it's done.
   
-Original Message-
From: Pavan Vilas Sondur [mailto:pa...@gluster.com] 
Sent: 01 October 2009 09:00
To: Hiren Joshi
Cc: gluster-users@gluster.org
Subject: Re: Rsync

Hi,
We're looking into the problem on similar setups and working on it.
Meanwhile can you let us know if performance increases if you
use this option:

'option transport.socket.nodelay on' in each of your
protocol/client and protocol/server volumes.

Pavan

On 28/09/09 11:25 +0100, Hiren Joshi wrote:
 Another update:
 It took 1240 minutes (over 20 hours) to complete on 
 the simplified
 system (without mirroring). What else can I do to debug?
 
  -Original Message-
  From: gluster-users-boun...@gluster.org 
  [mailto:gluster-users-boun...@gluster.org] On Behalf Of 
Hiren Joshi
  Sent: 24 September 2009 13:05
  To: Pavan Vilas Sondur
  Cc: gluster-users@gluster.org
  Subject: Re: [Gluster-users] Rsync
  
   
  
   -Original Message-
   From: Pavan Vilas Sondur [mailto:pa...@gluster.com] 
   Sent: 24 September 2009 12:42
   To: Hiren Joshi
   Cc: gluster-users@gluster.org
   Subject: Re: Rsync
   
   Can you let us know the following:
   
* What is the exact directory structure?
  /abc/def/ghi/jkl/[1-4]
  now abc, def, ghi and jkl are one of a thousand dirs.
  
* How many files are there in each individual 
 directory and 
   of what size?
  Each of the [1-4] dirs has about 100 files in, all 
 under 1MB.
  
* It looks like each server process has 6 export 
   directories. Can you run one server process each 
 for a single 
   export directory and check if the rsync speeds up?
  I had no idea you could do that. How? Would I need to 
create 6 config
  files and start gluster:
  
  /usr/sbin/glusterfsd -f /etc/glusterfs/export1.vol 
 or similar?
  
  I'll give this a go
  
* Also, do you have any benchmarks with a 
 similar setup on 
  say, NFS?
  NFS will create the dir tree in about 20 minutes then start 
  copying the
  files over, it takes about 2-3 hours.
  
   
   Pavan
   
   On 24/09/09 12:13 +0100, Hiren Joshi wrote:
It's been running for over 24 hours now.
Network traffic is nominal, top shows about 
 200-400% cpu 
  (7 cores so
it's not too bad).
About 14G of memory used (the rest is being used as 
disk cache).

Thoughts?



snip

An update, after running the rsync for a day, 
I killed it 
   and remounted
all the disks (the underlying 
 filesystem, not the 
  gluster) 
   with noatime,
the rsync completed in about 600 
 minutes. I'm now 
  going to 
   try one level
up (about 1,000,000,000 dirs).

 -Original Message-
 From: Pavan Vilas Sondur 
   [mailto:pa...@gluster.com] 
 Sent: 23 September 2009 07:55
 To: Hiren Joshi
 Cc: gluster-users@gluster.org
 Subject: Re: Rsync
 
 Hi Hiren,
 What glusterfs version are you using? Can you 
  send us the 
 volfiles and the log files

Re: [Gluster-users] unable to start client, new setup 2.0.6 centos5.3

2009-10-02 Thread Hiren Joshi
How are you trying to mount the client?
mount -t glusterfs -o direct-io-mode=write-only -f /path/to/client.vol
/mnt/

Works for me. 
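
For comparison, the client can also be started by pointing the glusterfs
binary at the volfile directly (paths are placeholders); the error quoted
below is what you get when the mount point argument is missing or the
volfile has no top-level translator:

glusterfs -f /etc/glusterfs/glusterfs.vol /mnt/glusterfs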

 -Original Message-
 From: gluster-users-boun...@gluster.org 
 [mailto:gluster-users-boun...@gluster.org] On Behalf Of Charles
 Sent: 02 October 2009 17:31
 To: gluster-users@gluster.org
 Subject: [Gluster-users] unable to start client, new setup 
 2.0.6 centos5.3
 
 Hi,
 
 when attempting to start the client I received:
 no valid translator loaded at the top or no mount point given. exiting
 [2009-10-02 10:34:40] E 
 [glusterfsd.c:570:glusterfs_graph_init] glusterfs: no valid 
 translator loaded at the top or no mount point given. exiting
 [2009-10-02 10:34:40] E [glusterfsd.c:1217:main] glusterfs: 
 translator initialization failed.  exiting
 
 - running 2.0.6 on centos 5.3
 - 3 servers, 1 client
 - I'm attempting to replicate across the 3 servers
 - pls help
 
 # client config
 # file: /etc/glusterfs/glusterfs.vol
 volume remote1
   type protocol/client
   option transport-type tcp
   option remote-host rh1
   option remote-subvolume brick
 end-volume
 
 volume remote2
   type protocol/client
   option transport-type tcp
   option remote-host rh2
   option remote-subvolume brick
 end-volume
 
 volume remote3
   type protocol/client
   option transport-type tcp
   option remote-host rh3
   option remote-subvolume brick
 end-volume
 
 volume replicate
   type cluster/replicate
   subvolumes remote1 remote2 remote3
 end-volume
 
 volume writebehind
   type performance/write-behind
   option window-size 1MB
   subvolumes replicate
 end-volume
 
 volume cache
   type performance/io-cache
   option cache-size 512MB
   subvolumes writebehind
 end-volume
 
 #
 # server config
 #
 volume posix
   type storage/posix
   option directory /bigpartition
 end-volume
 
 volume locks
   type features/locks
   subvolumes posix
 end-volume
 
 volume brick
   type performance/io-threads
   option thread-count 8
   subvolumes locks
 end-volume
 
 volume server
   type protocol/server
   option transport-type tcp
   option auth.addr.brick.allow *
   subvolumes brick
 end-volume
 
 =cm
 
 
 
   
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Rsync

2009-10-01 Thread Hiren Joshi
Thanks!

I'm keeping a close eye on the "is glusterfs DHT really distributed?"
thread =)

I tried nodelay on and unhashed no. I tarred about 400G to the share in
about 17 hours (~6MB/s?) and am running an rsync now. Will post the
results when it's done.
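
For reference, the transport.socket.nodelay option suggested below would
slot into the existing client and server volfiles roughly like this
(volume names are abridged from the volfiles posted elsewhere in this
archive; only the added line matters):

volume glust1a_1
  type protocol/client
  option transport-type tcp/client
  option remote-host glust1a
  option transport.socket.nodelay on    # added option
  option remote-subvolume brick1
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  option transport.socket.nodelay on    # added option
  option auth.addr.brick1.allow *
  subvolumes brick1
end-volume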

 -Original Message-
 From: Pavan Vilas Sondur [mailto:pa...@gluster.com] 
 Sent: 01 October 2009 09:00
 To: Hiren Joshi
 Cc: gluster-users@gluster.org
 Subject: Re: Rsync
 
 Hi,
 We're looking into the problem on similar setups and working on it.
 Meanwhile can you let us know if performance increases if you
 use this option:
 
 'option transport.socket.nodelay on' in each of your
 protocol/client and protocol/server volumes.
 
 Pavan
 
 On 28/09/09 11:25 +0100, Hiren Joshi wrote:
  Another update:
  It took 1240 minutes (over 20 hours) to complete on the simplified
  system (without mirroring). What else can I do to debug?
  
   -Original Message-
   From: gluster-users-boun...@gluster.org 
   [mailto:gluster-users-boun...@gluster.org] On Behalf Of 
 Hiren Joshi
   Sent: 24 September 2009 13:05
   To: Pavan Vilas Sondur
   Cc: gluster-users@gluster.org
   Subject: Re: [Gluster-users] Rsync
   

   
-Original Message-
From: Pavan Vilas Sondur [mailto:pa...@gluster.com] 
Sent: 24 September 2009 12:42
To: Hiren Joshi
Cc: gluster-users@gluster.org
Subject: Re: Rsync

Can you let us know the following:

 * What is the exact directory structure?
   /abc/def/ghi/jkl/[1-4]
   now abc, def, ghi and jkl are one of a thousand dirs.
   
 * How many files are there in each individual directory and 
of what size?
   Each of the [1-4] dirs has about 100 files in, all under 1MB.
   
 * It looks like each server process has 6 export 
directories. Can you run one server process each for a single 
export directory and check if the rsync speeds up?
   I had no idea you could do that. How? Would I need to 
 create 6 config
   files and start gluster:
   
   /usr/sbin/glusterfsd -f /etc/glusterfs/export1.vol or similar?
   
   I'll give this a go
   
 * Also, do you have any benchmarks with a similar setup on 
   say, NFS?
   NFS will create the dir tree in about 20 minutes then start 
   copying the
   files over, it takes about 2-3 hours.
   

Pavan

On 24/09/09 12:13 +0100, Hiren Joshi wrote:
 It's been running for over 24 hours now.
 Network traffic is nominal, top shows about 200-400% cpu 
   (7 cores so
 it's not too bad).
 About 14G of memory used (the rest is being used as 
 disk cache).
 
 Thoughts?
 
 
 
 snip
 
 An update, after running the rsync for a day, 
 I killed it 
and remounted
 all the disks (the underlying filesystem, not the 
   gluster) 
with noatime,
 the rsync completed in about 600 minutes. I'm now 
   going to 
try one level
 up (about 1,000,000,000 dirs).
 
  -Original Message-
  From: Pavan Vilas Sondur [mailto:pa...@gluster.com] 
  Sent: 23 September 2009 07:55
  To: Hiren Joshi
  Cc: gluster-users@gluster.org
  Subject: Re: Rsync
  
  Hi Hiren,
  What glusterfs version are you using? Can you 
   send us the 
  volfiles and the log files.
  
  Pavan
  
  On 22/09/09 16:01 +0100, Hiren Joshi wrote:
   I forgot to mention, the mount is mounted with 
direct-io, would this
   make a difference? 
   
-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On 
Behalf Of 
  Hiren Joshi
Sent: 22 September 2009 11:40
To: gluster-users@gluster.org
Subject: [Gluster-users] Rsync

Hello all,
 
I'm getting what I think is bizarre 
behaviour I have 
  about 400G to
rsync (rsync -av) onto a gluster share, 
 the data is 
in a directory
structure which has about 1000 directories 
per parent and 
  about 1000
directories in each of them.
 
When I try to rsync an end leaf directory (this 
  has about 4 
dirs and 100
files in each) the operation takes about 10 
  seconds. When I 
go one level
above (1000 dirs with about 4 dirs in each 
with about 100 
files in each)
the operation takes about 10 minutes.
 
Now, if I then go one level above that 
 (that's 1000 
   dirs with 
1000 dirs
in each with about 4 dirs in each with about 
100 files in 
  each) the
operation takes days! Top shows glusterfsd 
takes 300-600% 
  cpu usage
(2X4core), I have

Re: [Gluster-users] Rsync

2009-09-28 Thread Hiren Joshi
Another update:
It took 1240 minutes (over 20 hours) to complete on the simplified
system (without mirroring). What else can I do to debug?

 -Original Message-
 From: gluster-users-boun...@gluster.org 
 [mailto:gluster-users-boun...@gluster.org] On Behalf Of Hiren Joshi
 Sent: 24 September 2009 13:05
 To: Pavan Vilas Sondur
 Cc: gluster-users@gluster.org
 Subject: Re: [Gluster-users] Rsync
 
  
 
  -Original Message-
  From: Pavan Vilas Sondur [mailto:pa...@gluster.com] 
  Sent: 24 September 2009 12:42
  To: Hiren Joshi
  Cc: gluster-users@gluster.org
  Subject: Re: Rsync
  
  Can you let us know the following:
  
   * What is the exact directory structure?
 /abc/def/ghi/jkl/[1-4]
 now abc, def, ghi and jkl are one of a thousand dirs.
 
   * How many files are there in each individual directory and 
  of what size?
 Each of the [1-4] dirs has about 100 files in, all under 1MB.
 
   * It looks like each server process has 6 export 
  directories. Can you run one server process each for a single 
  export directory and check if the rsync speeds up?
 I had no idea you could do that. How? Would I need to create 6 config
 files and start gluster:
 
 /usr/sbin/glusterfsd -f /etc/glusterfs/export1.vol or similar?
 
 I'll give this a go
 
   * Also, do you have any benchmarks with a similar setup on 
 say, NFS?
 NFS will create the dir tree in about 20 minutes then start 
 copying the
 files over, it takes about 2-3 hours.
 
  
  Pavan
  
  On 24/09/09 12:13 +0100, Hiren Joshi wrote:
   It's been running for over 24 hours now.
   Network traffic is nominal, top shows about 200-400% cpu 
 (7 cores so
   it's not too bad).
   About 14G of memory used (the rest is being used as disk cache).
   
   Thoughts?
   
   
   
   snip
   
   An update, after running the rsync for a day, I killed it 
  and remounted
   all the disks (the underlying filesystem, not the 
 gluster) 
  with noatime,
   the rsync completed in about 600 minutes. I'm now 
 going to 
  try one level
   up (about 1,000,000,000 dirs).
   
-Original Message-
From: Pavan Vilas Sondur [mailto:pa...@gluster.com] 
Sent: 23 September 2009 07:55
To: Hiren Joshi
Cc: gluster-users@gluster.org
Subject: Re: Rsync

Hi Hiren,
What glusterfs version are you using? Can you 
 send us the 
volfiles and the log files.

Pavan

On 22/09/09 16:01 +0100, Hiren Joshi wrote:
 I forgot to mention, the mount is mounted with 
  direct-io, would this
 make a difference? 
 
  -Original Message-
  From: gluster-users-boun...@gluster.org 
  [mailto:gluster-users-boun...@gluster.org] On 
  Behalf Of 
Hiren Joshi
  Sent: 22 September 2009 11:40
  To: gluster-users@gluster.org
  Subject: [Gluster-users] Rsync
  
  Hello all,
   
  I'm getting what I think is bizarre 
  behaviour I have 
about 400G to
  rsync (rsync -av) onto a gluster share, the data is 
  in a directory
  structure which has about 1000 directories 
  per parent and 
about 1000
  directories in each of them.
   
  When I try to rsync an end leaf directory (this 
has about 4 
  dirs and 100
  files in each) the operation takes about 10 
seconds. When I 
  go one level
  above (1000 dirs with about 4 dirs in each 
  with about 100 
  files in each)
  the operation takes about 10 minutes.
   
  Now, if I then go one level above that (that's 1000 
 dirs with 
  1000 dirs
  in each with about 4 dirs in each with about 
  100 files in 
each) the
  operation takes days! Top shows glusterfsd 
  takes 300-600% 
cpu usage
  (2X4core), I have about 48G of memory 
 (usage is 0% as 
  expected).
   
  Has anyone seen anything like this? How can I 
  speed it up?
   
  Thanks,
   
  Josh.
  
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 
  http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

  
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
 
  
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Rsync

2009-09-24 Thread Hiren Joshi
 

 -Original Message-
 From: Pavan Vilas Sondur [mailto:pa...@gluster.com] 
 Sent: 24 September 2009 12:42
 To: Hiren Joshi
 Cc: gluster-users@gluster.org
 Subject: Re: Rsync
 
 Can you let us know the following:
 
  * What is the exact directory structure?
/abc/def/ghi/jkl/[1-4]
now abc, def, ghi and jkl are each one of a thousand dirs.

  * How many files are there in each individual directory and 
 of what size?
Each of the [1-4] dirs has about 100 files in it, all under 1MB.

  * It looks like each server process has 6 export 
 directories. Can you run one server process each for a single 
 export directory and check if the rsync speeds up?
I had no idea you could do that. How? Would I need to create 6 config
files and start gluster:

/usr/sbin/glusterfsd -f /etc/glusterfs/export1.vol or similar?

I'll give this a go
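
A per-export server volfile would presumably be just one
posix/locks/io-threads/server chain, e.g. the sketch below, cut down
from the full server config posted elsewhere in this archive (the
listen-port line is an assumption: separate processes would each need
their own port):

# /etc/glusterfs/export1.vol -- one such file per export
volume posix1
  type storage/posix
  option directory /gluster/export1
end-volume

volume locks1
  type features/locks
  subvolumes posix1
end-volume

volume brick1
  type performance/io-threads
  option thread-count 8
  subvolumes locks1
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  option transport.socket.listen-port 6997   # assumption; default is 6996
  option auth.addr.brick1.allow *
  subvolumes brick1
end-volume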

  * Also, do you have any benchmarks with a similar setup on say, NFS?
NFS will create the dir tree in about 20 minutes then start copying the
files over, it takes about 2-3 hours.

 
 Pavan
 
 On 24/09/09 12:13 +0100, Hiren Joshi wrote:
  It's been running for over 24 hours now.
  Network traffic is nominal, top shows about 200-400% cpu (7 cores so
  it's not too bad).
  About 14G of memory used (the rest is being used as disk cache).
  
  Thoughts?
  
  
  
  snip
  
  An update, after running the rsync for a day, I killed it 
 and remounted
  all the disks (the underlying filesystem, not the gluster) 
 with noatime,
  the rsync completed in about 600 minutes. I'm now going to 
 try one level
  up (about 1,000,000,000 dirs).
  
   -Original Message-
   From: Pavan Vilas Sondur [mailto:pa...@gluster.com] 
   Sent: 23 September 2009 07:55
   To: Hiren Joshi
   Cc: gluster-users@gluster.org
   Subject: Re: Rsync
   
   Hi Hiren,
   What glusterfs version are you using? Can you send us the 
   volfiles and the log files.
   
   Pavan
   
   On 22/09/09 16:01 +0100, Hiren Joshi wrote:
I forgot to mention, the mount is mounted with 
 direct-io, would this
make a difference? 

 -Original Message-
 From: gluster-users-boun...@gluster.org 
 [mailto:gluster-users-boun...@gluster.org] On 
 Behalf Of 
   Hiren Joshi
 Sent: 22 September 2009 11:40
 To: gluster-users@gluster.org
 Subject: [Gluster-users] Rsync
 
 Hello all,
  
 I'm getting what I think is bizarre 
 behaviour I have 
   about 400G to
 rsync (rsync -av) onto a gluster share, the data is 
 in a directory
 structure which has about 1000 directories 
 per parent and 
   about 1000
 directories in each of them.
  
 When I try to rsync an end leaf directory (this 
   has about 4 
 dirs and 100
 files in each) the operation takes about 10 
   seconds. When I 
 go one level
 above (1000 dirs with about 4 dirs in each 
 with about 100 
 files in each)
 the operation takes about 10 minutes.
  
 Now, if I then go one level above that (that's 1000 
dirs with 
 1000 dirs
 in each with about 4 dirs in each with about 
 100 files in 
   each) the
 operation takes days! Top shows glusterfsd 
 takes 300-600% 
   cpu usage
 (2X4core), I have about 48G of memory (usage is 0% as 
 expected).
  
 Has anyone seen anything like this? How can I 
 speed it up?
  
 Thanks,
  
 Josh.
 
___
Gluster-users mailing list
Gluster-users@gluster.org

 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
   
 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Rsync

2009-09-23 Thread Hiren Joshi
2.0.6-1.el5. It's a 6-brick distributed system which is mirrored over 2
machines (each brick has a mirror on another machine, and these mirrors
are put in the hash).

An update, after running the rsync for a day, I killed it and remounted
all the disks (the underlying filesystem, not the gluster) with noatime,
the rsync completed in about 600 minutes. I'm now going to try one level
up (about 1,000,000,000 dirs).
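
For reference, the remount described is of the underlying export
filesystems only, not the gluster mount; against the export paths from
the server volfile it would be something like:

mount -o remount,noatime /gluster/export1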

 -Original Message-
 From: Pavan Vilas Sondur [mailto:pa...@gluster.com] 
 Sent: 23 September 2009 07:55
 To: Hiren Joshi
 Cc: gluster-users@gluster.org
 Subject: Re: Rsync
 
 Hi Hiren,
 What glusterfs version are you using? Can you send us the 
 volfiles and the log files.
 
 Pavan
 
 On 22/09/09 16:01 +0100, Hiren Joshi wrote:
  I forgot to mention, the mount is mounted with direct-io, would this
  make a difference? 
  
   -Original Message-
   From: gluster-users-boun...@gluster.org 
   [mailto:gluster-users-boun...@gluster.org] On Behalf Of 
 Hiren Joshi
   Sent: 22 September 2009 11:40
   To: gluster-users@gluster.org
   Subject: [Gluster-users] Rsync
   
   Hello all,

   I'm getting what I think is bizarre behaviour I have 
 about 400G to
   rsync (rsync -av) onto a gluster share, the data is in a directory
   structure which has about 1000 directories per parent and 
 about 1000
   directories in each of them.

   When I try to rsync an end leaf directory (this has about 4 
   dirs and 100
   files in each) the operation takes about 10 seconds. When I 
   go one level
   above (1000 dirs with about 4 dirs in each with about 100 
   files in each)
   the operation takes about 10 minutes.

   Now, if I then go one level above that (that's 1000 dirs with 
   1000 dirs
   in each with about 4 dirs in each with about 100 files in 
 each) the
   operation takes days! Top shows glusterfsd takes 300-600% 
 cpu usage
   (2X4core), I have about 48G of memory (usage is 0% as expected).

   Has anyone seen anything like this? How can I speed it up?

   Thanks,

   Josh.
   
  ___
  Gluster-users mailing list
  Gluster-users@gluster.org
  http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Rsync

2009-09-23 Thread Hiren Joshi
 -Original Message-
 From: Pavan Vilas Sondur [mailto:pa...@gluster.com] 
 Sent: 23 September 2009 13:38
 To: Hiren Joshi
 Cc: gluster-users@gluster.org
 Subject: Re: Rsync
 
 Can you send us the volfiles and logfiles as well?
 
 Pavan
 
 On 23/09/09 10:08 +0100, Hiren Joshi wrote:
  2.0.6-1.el5, it's a 6 brick distributed system which is 
 mirrored over 2
  machines (each brick has a mirror on another machine, these 
 mirrors are
  put in the hash).
  
  An update, after running the rsync for a day, I killed it 
 and remounted
  all the disks (the underlying filesystem, not the gluster) 
 with noatime,
  the rsync completed in about 600 minutes. I'm now going to 
 try one level
  up (about 1,000,000,000 dirs).
  
   -Original Message-
   From: Pavan Vilas Sondur [mailto:pa...@gluster.com] 
   Sent: 23 September 2009 07:55
   To: Hiren Joshi
   Cc: gluster-users@gluster.org
   Subject: Re: Rsync
   
   Hi Hiren,
   What glusterfs version are you using? Can you send us the 
   volfiles and the log files.
   
   Pavan
   
   On 22/09/09 16:01 +0100, Hiren Joshi wrote:
I forgot to mention, the mount is mounted with 
 direct-io, would this
make a difference? 

 -Original Message-
 From: gluster-users-boun...@gluster.org 
 [mailto:gluster-users-boun...@gluster.org] On Behalf Of 
   Hiren Joshi
 Sent: 22 September 2009 11:40
 To: gluster-users@gluster.org
 Subject: [Gluster-users] Rsync
 
 Hello all,
  
 I'm getting what I think is bizarre behaviour I have 
   about 400G to
 rsync (rsync -av) onto a gluster share, the data is 
 in a directory
 structure which has about 1000 directories per parent and 
   about 1000
 directories in each of them.
  
 When I try to rsync an end leaf directory (this has about 4 
 dirs and 100
 files in each) the operation takes about 10 seconds. When I 
 go one level
 above (1000 dirs with about 4 dirs in each with about 100 
 files in each)
 the operation takes about 10 minutes.
  
 Now, if I then go one level above that (that's 1000 dirs with 
 1000 dirs
 in each with about 4 dirs in each with about 100 files in 
   each) the
 operation takes days! Top shows glusterfsd takes 300-600% 
   cpu usage
 (2X4core), I have about 48G of memory (usage is 0% as 
 expected).
  
 Has anyone seen anything like this? How can I speed it up?
  
 Thanks,
  
 Josh.
 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
   
 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Rsync

2009-09-23 Thread Hiren Joshi
Also, there's nothing happening in the log files beyond the initial
mount so it looks like the filesystem aspect of it is fine. 

 -Original Message-
 From: gluster-users-boun...@gluster.org 
 [mailto:gluster-users-boun...@gluster.org] On Behalf Of Hiren Joshi
 Sent: 23 September 2009 14:02
 To: Pavan Vilas Sondur
 Cc: gluster-users@gluster.org
 Subject: Re: [Gluster-users] Rsync
 
 Below.
 
 I've found that I get a performance hit if I add read cache or
 writebehind.
 
 Server conf:
 
 ##Open vols
 
 volume posix1
   type storage/posix
   option directory /gluster/export1
 end-volume
 
 volume posix2
   type storage/posix
   option directory /gluster/export2
 end-volume
 
 volume posix3
   type storage/posix
   option directory /gluster/export3
 end-volume
 
 volume posix4
   type storage/posix
   option directory /gluster/export4
 end-volume
 
 volume posix5
   type storage/posix
   option directory /gluster/export5
 end-volume
 
 volume posix6
   type storage/posix
   option directory /gluster/export6
 end-volume
 ## Add the ability to lock files etc
 
 volume locks1
   type features/locks
   subvolumes posix1
 end-volume
 
 volume locks2
   type features/locks
   subvolumes posix2
 end-volume
 
 volume locks3
   type features/locks
   subvolumes posix3
 end-volume
 
 volume locks4
   type features/locks
   subvolumes posix4
 end-volume
 
 volume locks5
   type features/locks
   subvolumes posix5
 end-volume
 
 volume locks6
   type features/locks
   subvolumes posix6
 end-volume
 ## Performance translators
 
 volume brick1
   type performance/io-threads
   option thread-count 8
   subvolumes locks1
 end-volume
 
 volume brick2
   type performance/io-threads
   option thread-count 8
   subvolumes locks2
 end-volume
 
 volume brick3
   type performance/io-threads
   option thread-count 8
   subvolumes locks3
 end-volume
 
 volume brick4
   type performance/io-threads
   option thread-count 8
   subvolumes locks4
 end-volume
 
 volume brick5
   type performance/io-threads
   option thread-count 8
   subvolumes locks5
 end-volume
 
 volume brick6
   type performance/io-threads
   option thread-count 8
   subvolumes locks6
 end-volume
 ##export the lot
 
 #volume brick6
 #  type debug/trace
 #  subvolumes trace_brick6
 #end-volume
 
 volume server
   type protocol/server
   option transport-type tcp/server
   option auth.addr.brick1.allow *
   option auth.addr.brick2.allow *
   option auth.addr.brick3.allow *
   option auth.addr.brick4.allow *
   option auth.addr.brick5.allow *
   option auth.addr.brick6.allow *
   subvolumes brick1 brick2 brick3 brick4 brick5 brick6
 end-volume 
 
 
 Vol file:
 ##Clent config
 ##import all the briks from all the mirrors and mirror them
 
 volume glust1a_1
   type protocol/client
   option transport-type tcp/client
   option remote-host glust1a
   option ping-timeout 30
   option remote-subvolume brick1
 end-volume
 
 volume glust1b_1
   type protocol/client
   option transport-type tcp/client
   option remote-host glust1b
   option ping-timeout 30
   option remote-subvolume brick1
 end-volume
 
 volume mirror1_1
   type cluster/replicate
   subvolumes glust1a_1 glust1b_1
 end-volume
 
 volume glust1a_2
   type protocol/client
   option transport-type tcp/client
   option remote-host glust1a
   option ping-timeout 30
   option remote-subvolume brick2
 end-volume
 
 volume glust1b_2
   type protocol/client
   option transport-type tcp/client
   option remote-host glust1b
   option ping-timeout 30
   option remote-subvolume brick2
 end-volume
 
 volume mirror1_2
   type cluster/replicate
   subvolumes glust1a_2 glust1b_2
 end-volume
 
 volume glust1a_3
   type protocol/client
   option transport-type tcp/client
   option remote-host glust1a
   option ping-timeout 30
   option remote-subvolume brick3
 end-volume
 
 volume glust1b_3
   type protocol/client
   option transport-type tcp/client
   option remote-host glust1b
   option ping-timeout 30
   option remote-subvolume brick3
 end-volume
 
 volume mirror1_3
   type cluster/replicate
   subvolumes glust1a_3 glust1b_3
 end-volume
 
 volume glust1a_4
   type protocol/client
   option transport-type tcp/client
   option remote-host glust1a
   option ping-timeout 30
   option remote-subvolume brick4
 end-volume
 
 volume glust1b_4
   type protocol/client
   option transport-type tcp/client
   option remote-host glust1b
   option ping-timeout 30
   option remote-subvolume brick4
 end-volume
 
 volume mirror1_4
   type cluster/replicate
   subvolumes glust1a_4 glust1b_4
 end-volume
 
 volume glust1a_5
   type protocol/client
   option transport-type tcp/client
   option remote-host glust1a
   option ping-timeout 30
   option remote-subvolume brick5
 end-volume
 
 volume glust1b_5
   type protocol/client
   option transport-type tcp/client
   option remote-host glust1b
   option ping-timeout 30
   option remote-subvolume brick5
 end-volume
 
 volume mirror1_5
   type cluster/replicate
   subvolumes glust1a_5 glust1b_5

[Gluster-users] Rsync

2009-09-22 Thread Hiren Joshi
Hello all,
 
I'm getting what I think is bizarre behaviour. I have about 400G to
rsync (rsync -av) onto a gluster share; the data is in a directory
structure which has about 1000 directories per parent and about 1000
directories in each of them.
 
When I try to rsync an end leaf directory (this has about 4 dirs and 100
files in each) the operation takes about 10 seconds. When I go one level
above (1000 dirs with about 4 dirs in each with about 100 files in each)
the operation takes about 10 minutes.
 
Now, if I then go one level above that (that's 1000 dirs with 1000 dirs
in each with about 4 dirs in each with about 100 files in each) the
operation takes days! Top shows glusterfsd takes 300-600% cpu usage
(2X4core), I have about 48G of memory (usage is 0% as expected).
 
Has anyone seen anything like this? How can I speed it up?
 
Thanks,
 
Josh.
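
For reference, the per-level timings described above come from runs of
roughly this shape (paths are placeholders, following the directory
layout described later in the thread):

time rsync -av /source/abc/def/ghi/jkl/ /mnt/gluster/abc/def/ghi/jkl/
time rsync -av /source/abc/def/ghi/     /mnt/gluster/abc/def/ghi/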
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Rsync

2009-09-22 Thread Hiren Joshi
I forgot to mention: the mount is mounted with direct-io. Would this
make a difference?

 -Original Message-
 From: gluster-users-boun...@gluster.org 
 [mailto:gluster-users-boun...@gluster.org] On Behalf Of Hiren Joshi
 Sent: 22 September 2009 11:40
 To: gluster-users@gluster.org
 Subject: [Gluster-users] Rsync
 
 Hello all,
  
 I'm getting what I think is bizarre behaviour I have about 400G to
 rsync (rsync -av) onto a gluster share, the data is in a directory
 structure which has about 1000 directories per parent and about 1000
 directories in each of them.
  
 When I try to rsync an end leaf directory (this has about 4 
 dirs and 100
 files in each) the operation takes about 10 seconds. When I 
 go one level
 above (1000 dirs with about 4 dirs in each with about 100 
 files in each)
 the operation takes about 10 minutes.
  
 Now, if I then go one level above that (that's 1000 dirs with 
 1000 dirs
 in each with about 4 dirs in each with about 100 files in each) the
 operation takes days! Top shows glusterfsd takes 300-600% cpu usage
 (2X4core), I have about 48G of memory (usage is 0% as expected).
  
 Has anyone seen anything like this? How can I speed it up?
  
 Thanks,
  
 Josh.
 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Worrying

2009-09-03 Thread Hiren Joshi
Hello all,
 
I have 2 servers, each exporting 6 bricks. The client mirrors the 2
servers and AFRs the 6 mirrors it creates.
 
Running bonnie, the servers kept dropping (according to the gluster logs
they stopped responding to pings within 10 seconds), so I set the ping
timeout to 30 seconds; now, although bonnie runs, I still see dropouts
in the log.
 
The worrying thing is that one of the servers is localhost! What's
happening here? I'm frustratingly close to putting this system on our
live network.
 
The log:
[2009-09-03 02:10:01] E
[client-protocol.c:437:client_ping_timer_expired] glust1a_1: Server
127.0.0.1:6996 has not responded in the last 30 seconds, disconnecting.
[2009-09-03 01:10:01] E
[client-protocol.c:437:client_ping_timer_expired] glust1b_1: Server
192.168.4.51:6996 has not responded in the last 30 seconds,
disconnecting.
[2009-09-03 01:10:01] E [saved-frames.c:165:saved_frames_unwind]
glust1a_1: forced unwinding frame type(2) op(PING)
[2009-09-03 01:10:01] E
[client-protocol.c:437:client_ping_timer_expired] glust1a_2: Server
127.0.0.1:6996 has not responded in the last 30 seconds, disconnecting.
[2009-09-03 01:10:01] E
[client-protocol.c:437:client_ping_timer_expired] glust1b_2: Server
192.168.4.51:6996 has not responded in the last 30 seconds,
disconnecting.
[2009-09-03 01:10:01] E
[client-protocol.c:437:client_ping_timer_expired] glust1a_3: Server
127.0.0.1:6996 has not responded in the last 30 seconds, disconnecting.
[2009-09-03 01:10:01] E
[client-protocol.c:437:client_ping_timer_expired] glust1b_3: Server
192.168.4.51:6996 has not responded in the last 30 seconds,
disconnecting.
[2009-09-03 01:10:01] E
[client-protocol.c:437:client_ping_timer_expired] glust1a_5: Server
127.0.0.1:6996 has not responded in the last 30 seconds, disconnecting.
[2009-09-03 01:10:01] E
[client-protocol.c:437:client_ping_timer_expired] glust1b_5: Server
192.168.4.51:6996 has not responded in the last 30 seconds,
disconnecting.
[2009-09-03 01:10:01] E
[client-protocol.c:437:client_ping_timer_expired] glust1a_6: Server
127.0.0.1:6996 has not responded in the last 30 seconds, disconnecting.
[2009-09-03 01:10:01] E
[client-protocol.c:437:client_ping_timer_expired] glust1b_6: Server
192.168.4.51:6996 has not responded in the last 30 seconds,
disconnecting.
[2009-09-03 01:10:01] N [client-protocol.c:6246:notify] glust1a_1:
disconnected
[2009-09-03 01:10:01] E [saved-frames.c:165:saved_frames_unwind]
glust1b_6: forced unwinding frame type(2) op(PING)
[2009-09-03 01:10:01] N [client-protocol.c:6246:notify] glust1b_6:
disconnected
[2009-09-03 01:10:01] E [saved-frames.c:165:saved_frames_unwind]
glust1a_6: forced unwinding frame type(2) op(PING)
[2009-09-03 01:10:01] N [client-protocol.c:6246:notify] glust1a_6:
disconnected
[2009-09-03 01:10:01] E [afr.c:2228:notify] mirror1_6: All subvolumes
are down. Going offline until atleast one of them comes back up.
[2009-09-03 01:10:01] E [saved-frames.c:165:saved_frames_unwind]
glust1b_5: forced unwinding frame type(2) op(PING)
[2009-09-03 01:10:01] N [client-protocol.c:6246:notify] glust1b_5:
disconnected
[2009-09-03 01:10:01] E [saved-frames.c:165:saved_frames_unwind]
glust1a_5: forced unwinding frame type(2) op(PING)
[2009-09-03 01:10:01] N [client-protocol.c:6246:notify] glust1a_5:
disconnected
[2009-09-03 01:10:01] E [afr.c:2228:notify] mirror1_5: All subvolumes
are down. Going offline until atleast one of them comes back up.
[2009-09-03 01:10:01] E [saved-frames.c:165:saved_frames_unwind]
glust1b_3: forced unwinding frame type(2) op(PING)
[2009-09-03 01:10:01] N [client-protocol.c:6246:notify] glust1b_3:
disconnected
[2009-09-03 01:10:01] E [saved-frames.c:165:saved_frames_unwind]
glust1a_3: forced unwinding frame type(2) op(PING)
[2009-09-03 01:10:01] N [client-protocol.c:6246:notify] glust1a_3:
disconnected
[2009-09-03 01:10:01] E [afr.c:2228:notify] mirror1_3: All subvolumes
are down. Going offline until atleast one of them comes back up.
[2009-09-03 01:10:01] E [saved-frames.c:165:saved_frames_unwind]
glust1b_2: forced unwinding frame type(2) op(PING)
[2009-09-03 01:10:01] N [client-protocol.c:6246:notify] glust1b_2:
disconnected
[2009-09-03 01:10:01] E [saved-frames.c:165:saved_frames_unwind]
glust1a_2: forced unwinding frame type(2) op(PING)
[2009-09-03 01:10:01] N [client-protocol.c:6246:notify] glust1a_2:
disconnected
[2009-09-03 01:10:01] E [afr.c:2228:notify] mirror1_2: All subvolumes
are down. Going offline until atleast one of them comes back up.
[2009-09-03 01:10:01] E [saved-frames.c:165:saved_frames_unwind]
glust1b_1: forced unwinding frame type(2) op(PING)
[2009-09-03 01:10:02] N [client-protocol.c:6246:notify] glust1b_1:
disconnected
[2009-09-03 01:10:02] E [afr.c:2228:notify] mirror1_1: All subvolumes
are down. Going offline until atleast one of them comes back up.
[2009-09-03 01:10:31] E
[client-protocol.c:437:client_ping_timer_expired] glust1a_4: Server
127.0.0.1:6996 has not responded in the last 30 seconds, disconnecting.
[2009-09-03 01:10:31] E

[Gluster-users] Bonnie crash

2009-08-28 Thread Hiren Joshi
Hello all,
 
I'm using gluster 2.0.4 and bonnie++ 1.96, and I can't get the test to
complete.
 
bonnie++ -u 99:99 -d /home/webspace_glust/test/
 
Using uid:99, gid:99.
Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...Can't make directory ./Bonnie.24759
Cleaning up test directory after error.
Bonnie: drastic I/O error (rmdir): No such file or directory

I can't see what's wrong; a quick google yielded very little. Any
pointers appreciated.
 
Josh.
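
Not a fix, but one way to narrow it down is to skip the throughput
phases and shrink the creation phase that is failing (flag values are
illustrative; -s 0 skips the read/write tests and -n sets the number of
files, in multiples of 1024):

bonnie++ -u 99:99 -d /home/webspace_glust/test/ -s 0 -n 16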
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Interesting experiment

2009-08-19 Thread Hiren Joshi
 

 -Original Message-
 From: Liam Slusser [mailto:lslus...@gmail.com] 
 Sent: 18 August 2009 18:51
 To: Hiren Joshi
 Cc: gluster-users@gluster.org
 Subject: Re: [Gluster-users] Interesting experiment
 
 On Tue, Aug 18, 2009 at 3:05 AM, Hiren 
 Joshij...@moonfruit.com wrote:
  Hi,
 
  Ok, the basic setup is 6 bricks per server, 2 servers. Mirror the six
  bricks and DHT them.
 
  I'm running three tests, dd 1G of zeros to the gluster mount, dd 1000
  100k files and dd 1000 1M files.
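
  Presumably something along these lines; paths and filenames are
  guesses, not taken from the thread:

  dd if=/dev/zero of=/mnt/gluster/bigfile bs=1M count=1024
  for i in $(seq 1 1000); do
    dd if=/dev/zero of=/mnt/gluster/small_$i bs=100k count=1 2>/dev/null
  done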
 
  With 3M write-behind I get:
  0m35.460s for 1G file
  0m52.427s for 100k files
  1m37.209s for 1M files
 
  Then I added a 400M external journal to all the bricks, the twist
  being the journals were made on a ram drive.
 
  Running the same tests:
  0m33.614s for 1G file
  0m52.851s for 100k files
  1m31.693s for 1M files
 
  So why is it that adding an external journal (in the ram!) seems to
  make no difference at all?
 
 I would imagine that most of your bottleneck is the network and not the
 disks. Modern raid disk storage systems are much quicker than gigabit
 ethernet.

You're right, the raid gives me great (SSD type) performance!

This is interesting, I'm on a gigabit network and it looks like it's
maxing out:

when I dd a 1Gig file: about 18 kbits/sec
when I dd 1000 1M files: about 8 kbits/sec

Is it worth bonding? This looks like I'm maxing out the network
connection.
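
To confirm whether the gigabit link really is the ceiling, a raw network
test between the two hosts is the usual check (the hostname is a
placeholder taken from the volfiles elsewhere in this archive):

iperf -s             # on one server
iperf -c glust1a     # on the other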

 
 liam
 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Performance

2009-08-13 Thread Hiren Joshi
We've had large ext3 filesystems go readonly (underlying hardware
problem) and recovery can take days.
 
For this I'm using ext3 as well, and as it's a raid 6 disk (hardware
raided) I can probably get away with 4x1TB. But I'm currently
experiencing bad performance on a single brick that's not mirrored.




From: Mark Mielke [mailto:m...@mark.mielke.cc] 
Sent: 12 August 2009 18:51
To: Hiren Joshi
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Performance


On 08/12/2009 01:24 PM, Hiren Joshi wrote: 

36 partitions on each server - the word partition is ambiguous. Are they 36
separate drives? Or multiple partitions on the same drive? If multiple
partitions on the same drive, this would be a bad idea, as it would require
the disk head to move back and forth between the partitions, significantly
increasing the latency, and therefore significantly reducing the
performance. If each partition is on its own drive, you still won't see
benefit unless you have many clients concurrently changing many different
files. In your above case, it's touching a single file in sequence, and
having a cluster is costing you rather than benefitting you.



We went with 36 partitions (on a single raid 6 drive) in case we get file
system corruption; it would take less time to fsck a 100G partition than a
3.6TB one. Would a 3.6TB single disk be better?


Putting 3.6 TB on a single disk sounds like a lot of eggs in one
basket. :-)

If you are worried about fsck, I would definitely do as the
other poster suggested and use a journalled file system. This nearly
eliminates the fsck time for most situations. This would be whether
using 100G partitions or using 3.6T partitions. In fact, there are very
few reasons not to use a journalled file system these days.

As for how to deal with data on this partition - the file system
is going to have a better chance of placing files close to each other,
than setting up 36 partitions and having Gluster scatter the files
across all of them based on a hash. Personally, I would choose 4 x 1
Tbyte drives over 1 x 3.6 Tbyte drive, as this nearly quadruples my
bandwidth and for highly concurrent loads, nearly divides by four the
average latency to access files. 

But, if you already have the 3.6 Tbyte drive, I think the only
performance-friendly use would be to partition it based upon access
requirements, rather than a hash (random). That is, files that are
accessed frequently should be clustered together at the front of a disk,
files accessed less frequently could be in the middle, and files
accessed infrequently could be at the end. This would be a three
partition disk. Gluster does not have a file system that does this
automatically (that I can tell), so it would probably require a software
solution on your end. For example, I believe dovecot (IMAP server)
allows an alternative storage location to be defined, so that
infrequently read files can be moved to another disk, and it knows to
check the primary storage first, and fall back to the alternative
storage after.

If you can't break up your storage by access patterns, then I
think a 3.6 Tbyte file system might still be the next best option - it's
still better than 36 partitions. But, make sure you have a good file
system on it, that scales well to this size.

Cheers,
mark


-- 
Mark Mielke m...@mielke.cc

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
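
One point worth keeping in mind here: ext3 is already a journalled file
system, so the long-fsck worry mainly applies to forced full checks rather
than ordinary crash recovery. A quick way to confirm the journal is there on
a brick (the device name is just a placeholder):

tune2fs -l /dev/sdb1 | grep -i features
# a "has_journal" entry means a crash is recovered by replaying the journal
# rather than by a full fsck of the whole partition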


Re: [Gluster-users] Performance

2009-08-13 Thread Hiren Joshi
What are the advantages of XFS over ext3 (which I'm currently using)? My
fear with XFS when selecting a filesystem was that it's not as active or as
well supported as ext3, and if things go wrong, how easy would it be to
recover?
 
I have 6 x 1TB disks in a hardware raid 6 with battery backup and UPS;
it's now just the performance I need to get sorted...




From: Liam Slusser [mailto:lslus...@gmail.com] 
Sent: 12 August 2009 20:35
To: Mark Mielke
Cc: Hiren Joshi; gluster-users@gluster.org
Subject: Re: [Gluster-users] Performance



I had a similar situation.  My larger gluster cluster has two
nodes, but each node has 72 1.5TB hard drives.  I ended up creating three
30TB 24-drive raid6 arrays, formatted with XFS and 64-bit inodes, and then
exporting three bricks with gluster.  I would recommend using a hardware
raid controller with battery backup power, UPS power, and a journaled
filesystem, and I think you'll be fine.

I'm exporting the three bricks on each of my two nodes, the
clients are using replication to replicate each of the three bricks on
each server and then using distribute to tie it all together.

liam


On Wed, Aug 12, 2009 at 10:51 AM, Mark Mielke
m...@mark.mielke.cc wrote:


On 08/12/2009 01:24 PM, Hiren Joshi wrote:


36 partitions on each server - the word partition is ambiguous. Are they 36
separate drives? Or multiple partitions on the same drive? If multiple
partitions on the same drive, this would be a bad idea, as it would require
the disk head to move back and forth between the partitions, significantly
increasing the latency, and therefore significantly reducing the
performance. If each partition is on its own drive, you still won't see
benefit unless you have many clients concurrently changing many different
files. In your above case, it's touching a single file in sequence, and
having a cluster is costing you rather than benefitting you.




We went with 36 partitions (on a single raid 6 drive) in case we get file
system corruption; it would take less time to fsck a 100G partition than a
3.6TB one. Would a 3.6TB single disk be better?



Putting 3.6 TB on a single disk sounds like a lot of
eggs in one basket. :-)

If you are worried about fsck, I would definitely do as
the other poster suggested and use a journalled file system. This nearly
eliminates the fsck time for most situations. This would be whether
using 100G partitions or using 3.6T partitions. In fact, there are very
few reasons not to use a journalled file system these days.

As for how to deal with data on this partition - the
file system is going to have a better chance of placing files close to
each other, than setting up 36 partitions and having Gluster scatter the
files across all of them based on a hash. Personally, I would choose 4 x
1 Tbyte drives over 1 x 3.6 Tbyte drive, as this nearly quadruples my
bandwidth and for highly concurrent loads, nearly divides by four the
average latency to access files.

But, if you already have the 3.6 Tbyte drive, I think
the only performance-friendly use would be to partition it based upon
access requirements, rather than a hash (random). That is, files that
are accessed frequently should be clustered together at the front of a
disk, files accessed less frequently could be in the middle, and files
accessed infrequently could be at the end. This would be a three
partition disk. Gluster does not have a file system that does this
automatically (that I can tell), so it would probably require a software
solution on your end. For example, I believe dovecot (IMAP server)
allows an alternative storage location to be defined, so that
infrequently read files can be moved to another disk, and it knows to
check the primary storage first, and fall back to the alternative
storage after.

If you can't break up your storage by access patterns,
then I think a 3.6 Tbyte file system might still be the next best option
- it's still better than 36 partitions. But, make sure you have a good
file system on it, that scales well to this size. 


Cheers,
mark
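
For anyone wanting to try the layout Liam describes, a minimal sketch
(device and mount point are placeholders; mkfs options left at their
defaults):

mkfs.xfs /dev/sdb1
mount -o inode64,noatime /dev/sdb1 /export/brick1
# inode64 lets inodes be allocated across the whole of a large (>1TB)
# volume, which appears to be what the "64bit-inodes" above refers to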

[Gluster-users] Performance

2009-08-12 Thread Hiren Joshi
Hello all,
 
I have 2 servers both exporting 36 partitions. The client mirrors the 36
partitions on each server and puts them all into a DHT.
 
dd if=/dev/zero of=/home/webspace_glust/zeros bs=1024 count=1024000
Takes 8 minutes, compared to 30 seconds via NFS. Now granted I'm using
mostly default settings, but a transfer rate of 2.1Mb/s vs about 40Mb/s
seems low.
 
What am I doing wrong? Where should I start with the performance tuning?
 
Josh.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Performance

2009-08-12 Thread Hiren Joshi
 

 -Original Message-
 From: Mark Mielke [mailto:m...@mark.mielke.cc] 
 Sent: 12 August 2009 16:35
 To: Hiren Joshi
 Cc: gluster-users@gluster.org
 Subject: Re: [Gluster-users] Performance
 
 On 08/12/2009 11:24 AM, Hiren Joshi wrote:
  Hello all,
 
  I have 2 servers both exporting 36 partitions. The client 
 mirrors the 36
  partitions on each server and puts them all into a DHT.
 
  dd if=/dev/zero of=/home/webspace_glust/zeros bs=1024 count=1024000
  Takes 8 minutes, compared to 30 seconds vi NFS, now granted 
 I'm using
  mostly default settings but a transfer rate of 2.1Mb/s vs 
 about 40Mb/s
  seems low.
 
  What am I doing wrong? Where should I start with the 
 performance tuning?
 
 How about starting with a 1:1 comparison? Unless NFS is somehow doing 
 mirroring, you should remove the mirroring from your solution before 
 comparing.


Fair point, I've put on a single brick (no distribution no mirror) and I
get:
time dd if=/dev/zero of=/home/webspace_glust/zeros bs=1024 count=1024000
1024000+0 records in
1024000+0 records out
1048576000 bytes (1.0 GB) copied, 109.289 seconds, 9.6 MB/s

real    1m49.291s
user    0m0.156s
sys     0m2.701s

Compared to:
time dd if=/dev/zero of=/home/webspace_nfs/josh/zeros bs=1024
count=1024000
1024000+0 records in
1024000+0 records out
1048576000 bytes (1.0 GB) copied, 29.4801 seconds, 35.6 MB/s

real    0m29.526s
user    0m0.486s
sys     0m1.927s


For NFS.

 
 Are you using striping? If you are not using striping, then the file 
 /home/webspace_glust is going to be assigned to a single partition, 
 and you would only be using a mirror of one partition.

No striping, thanks, I didn't consider this, so I tried:
time for x in `seq 1 1 1000`; do dd if=/dev/zero
of=/home/webspace_nfs/zeros$x bs=1024 count=1024; echo $x; done
real    0m45.292s
user    0m0.760s
sys     0m3.209s
(about 20-40MB/s)

and
time for x in `seq 1 1 1000`; do dd if=/dev/zero
of=/home/webspace_glust/zeros$x bs=1024 count=1024; echo $x; done
real    1m58.931s
user    0m0.432s
sys     0m6.412s
(about 8-10MB/s)

 
 36 partitions on each server - the word partition is ambiguous. Are 
 they 36 separate drives? Or multiple partitions on the same drive. If 
 multiple partitions on the same drive, this would be a bad 
 idea, as it 
 would require the disk head to move back and forth between the 
 partitions, significantly increasing the latency, and therefore 
 significantly reducing the performance. If each partition is 
 on its own 
 drive, you still won't see benefit unless you have many clients 
 concurrently changing many different files. In your above case, it's 
 touching a single file in sequence, and having a cluster is 
 costing you 
 rather than benefitting you.

We went with 36 partitions (on a single raid 6 drive) in case we get file
system corruption; it would take less time to fsck a 100G partition than a
3.6TB one. Would a 3.6TB single disk be better?


 
 As for the 2.1Mb/s vs 40Mb/s, I have no clue. I'm new to 
 Gluster myself, 
 and I have yet to install it on my node cluster (10 servers) 
 and perform 
 timing myself.
 
 Cheers,
 mark
 
 -- 
 Mark Mielke  m...@mielke.cc
 
 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
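
One more variable worth isolating in the comparison above (a side note, not
from the thread): with bs=1024 each run issues about a million 1KB writes,
so a large share of the elapsed time is per-write round trips through FUSE
and the network rather than raw throughput. Re-running with a larger block
size separates the two:

time dd if=/dev/zero of=/home/webspace_glust/zeros bs=1M count=1000
# still roughly 1GB of data, but ~1000 write calls instead of ~1,000,000

If this is much faster than the bs=1024 run, the gap to NFS is mostly
per-operation latency rather than bandwidth, and aggregating writes (e.g.
write-behind) is the place to look.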


Re: [Gluster-users] Fuse problem

2009-08-11 Thread Hiren Joshi
 

 -Original Message-
 From: Jason Williams [mailto:jas...@jhu.edu] 
 Sent: 11 August 2009 15:42
 To: Hiren Joshi
 Cc: Vikas Gorur; gluster-users@gluster.org
 Subject: Re: [Gluster-users] Fuse problem
 
 Hiren Joshi wrote:
   
  
  -Original Message-
  From: Vikas Gorur [mailto:vi...@gluster.com] 
  Sent: 11 August 2009 15:34
  To: Hiren Joshi
  Cc: gluster-users@gluster.org
  Subject: Re: [Gluster-users] Fuse problem
 
 
  - Hiren Joshi j...@moonfruit.com wrote:
 
  Hello all,
   
  I'm running a 64bit Centos5 setup and am trying to mount a gluster
  filesystem (which is exported out of the same box).
   
  glusterfs --debug --volfile=/root/gluster/webspace2.vol
  /home/webspace_glust/
   
  Gives me:
  snip
  [2009-08-11 16:26:37] D [client-protocol.c:5963:init] glust1b_36:
  defaulting ping-timeout to 10
  [2009-08-11 16:26:37] D [transport.c:141:transport_load] 
 transport:
  attempt to load file 
 /usr/lib64/glusterfs/2.0.4/transport/socket.so
  [2009-08-11 16:26:37] D [transport.c:141:transport_load] 
 transport:
  attempt to load file 
 /usr/lib64/glusterfs/2.0.4/transport/socket.so
  fuse: device not found, try 'modprobe fuse' first
  Make sure you have the fuse module loaded (modprobe fuse).
 
  
  I have fuse.x86_64 2.7.4-1.el5.rf installed but:
  # modprobe fuse
  FATAL: Module fuse not found.
  
  
  Vikas
  -- 
  Engineer - http://gluster.com/
 
 
 
 That is the userland part of fuse.  There is also a kernel 
 module that 
 you will need to get and install.  I'm guessing you have yum 
 since this 
 appears to be a RHEL 5 machine so try a 'yum search fuse' to 
 see if you 
 can find the name of the kernel module.  It escapes me at the moment, 
 which is why I suggest the yum search.

yum install dkms-fuse did the trick.. thanks!


 
 --
 Jason
 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
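
For anyone hitting the same thing, a quick sanity check after installing the
kernel module package (a sketch; package names vary between repositories):

modprobe fuse && lsmod | grep fuse    # the module loads and is listed
ls -l /dev/fuse                       # the device node glusterfs mounts through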


Re: [Gluster-users] Fuse problem

2009-08-11 Thread Hiren Joshi
 

 -Original Message-
 From: Vikas Gorur [mailto:vi...@gluster.com] 
 Sent: 11 August 2009 15:34
 To: Hiren Joshi
 Cc: gluster-users@gluster.org
 Subject: Re: [Gluster-users] Fuse problem
 
 
 - Hiren Joshi j...@moonfruit.com wrote:
 
  Hello all,
   
  I'm running a 64bit Centos5 setup and am trying to mount a gluster
  filesystem (which is exported out of the same box).
   
  glusterfs --debug --volfile=/root/gluster/webspace2.vol
  /home/webspace_glust/
   
  Gives me:
  snip
  [2009-08-11 16:26:37] D [client-protocol.c:5963:init] glust1b_36:
  defaulting ping-timeout to 10
  [2009-08-11 16:26:37] D [transport.c:141:transport_load] transport:
  attempt to load file /usr/lib64/glusterfs/2.0.4/transport/socket.so
  [2009-08-11 16:26:37] D [transport.c:141:transport_load] transport:
  attempt to load file /usr/lib64/glusterfs/2.0.4/transport/socket.so
  fuse: device not found, try 'modprobe fuse' first
 
 Make sure you have the fuse module loaded (modprobe fuse).
 

I have fuse.x86_64 2.7.4-1.el5.rf installed but:
# modprobe fuse
FATAL: Module fuse not found.


 Vikas
 -- 
 Engineer - http://gluster.com/
 
 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] DHT with AFR

2009-07-15 Thread Hiren Joshi
 

 -Original Message-
 From: Vikas Gorur [mailto:vi...@gluster.com] 
 Sent: 15 July 2009 09:01
 To: Hiren Joshi
 Cc: Kirby Zhou; gluster-users@gluster.org
 Subject: Re: [Gluster-users] DHT with AFR
 
 
 - Hiren Joshi j...@moonfruit.com wrote:
 
  My thinking is both on the client so:
  I AFR my nodes.
  I then DHT my AFR bricks.
  I then mount the DHT vols.
   
  Or would I get better performance the other way around?
 
 DHT over AFR'd pairs is the configuration you want. You can
 then add another AFR pair whenever you want to scale up.

When I add another pair, is there a way of re-distributing the data
evenly? Will this have a big performance hit?

 
 Vikas
 --
 Engineer - Gluster
 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] io-threads/io-cache

2009-07-15 Thread Hiren Joshi
Hi All,
 
http://www.gluster.org/docs/index.php/Translators/performance
 
Can anyone tell me the difference between the two? They have the same
description.
 
Thanks,
Josh.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
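
The wiki descriptions read the same, but the two translators do different
jobs: performance/io-threads adds a pool of worker threads so several
requests can be serviced concurrently (usually loaded on the server side,
above the storage volumes), while performance/io-cache keeps recently read
file data in memory (usually loaded on the client side). A minimal sketch of
where each typically sits; the volume names are placeholders and the option
values are examples only, so check them against the translator docs for your
version:

# server side
volume iothreads
  type performance/io-threads
  option thread-count 16        # example value
  subvolumes posix1
end-volume

# client side
volume iocache
  type performance/io-cache
  option cache-size 64MB        # example value
  subvolumes distribute1
end-volume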


Re: [Gluster-users] DHT with AFR

2009-07-15 Thread Hiren Joshi
If I do this on a client that's solely dedicated to copying out and in
again, will there be a performance hit on all the other clients using
it?

I really like the simplicity of glusterfs, hats off to the dev guys! 

 -Original Message-
 From: Kirby Zhou [mailto:kirbyz...@sohu-rd.com] 
 Sent: 15 July 2009 16:45
 To: Hiren Joshi; 'Vikas Gorur'
 Cc: gluster-users@gluster.org
 Subject: RE: [Gluster-users] DHT with AFR
 
 AFAIK, there is no method to redistribute your already-stored files except
 to copy them out and then copy them in again.
 
 
 
 -Original Message-
 From: Hiren Joshi [mailto:j...@moonfruit.com] 
 Sent: Wednesday, July 15, 2009 4:52 PM
 To: Vikas Gorur
 Cc: Kirby Zhou; gluster-users@gluster.org
 Subject: RE: [Gluster-users] DHT with AFR
 
  
 
  -Original Message-
  From: Vikas Gorur [mailto:vi...@gluster.com] 
  Sent: 15 July 2009 09:01
  To: Hiren Joshi
  Cc: Kirby Zhou; gluster-users@gluster.org
  Subject: Re: [Gluster-users] DHT with AFR
  
  
  - Hiren Joshi j...@moonfruit.com wrote:
  
   My thinking is both on the client so:
   I AFR my nodes.
   I then DHT my AFR bricks.
   I then mount the DHT vols.

   Or would I get better performance the other way around?
  
  DHT over AFR'd pairs is the configuration you want. You can
  then add another AFR pair whenever you want to scale up.
 
 When I add another pair, is there a way of re-distributing the data
 evenly? Will this have a big performance hit?
 
  
  Vikas
  --
  Engineer - Gluster
  
 
 
 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
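
To make the copy-out/copy-in suggestion concrete, a rough sketch only,
assuming /mnt/glust is the client mount and /scratch has enough space for
the directory being moved:

rsync -a /mnt/glust/somedir/ /scratch/somedir/     # copy out through the mount
rm -rf /mnt/glust/somedir                          # only once the copy is verified
rsync -a /scratch/somedir/ /mnt/glust/somedir/     # copy back in; files get hashed across the new layout

Even when this is driven from a single dedicated client, the reads and
writes still land on every server that holds the data, so the other clients
sharing those servers are likely to see some impact while it runs.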


[Gluster-users] DHT with AFR

2009-07-14 Thread Hiren Joshi
Hi all,
 
I'm looking to create a system where all nodes are mirrored with at
least one other node (using afr) but I also want it to be fully scalable
(that is to say I can add another mirrored pair at any point to the
system).
 
Will AFR with DHT do what I want? Are there any problems/caveats with
this setup that I should know about? What have been your experiences?
 
Thanks,
Josh.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] DHT with AFR

2009-07-14 Thread Hiren Joshi
My thinking is both on the client so:
I AFR my nodes.
I then DHT my AFR bricks.
I then mount the DHT vols.
 
Or would I get better performance the other way around?




From: Kirby Zhou [mailto:kirbyz...@sohu-rd.com] 
Sent: 14 July 2009 17:51
To: Hiren Joshi; gluster-users@gluster.org
Subject: RE: [Gluster-users] DHT with AFR



Which do you want to get? AFR based on DHT or DHT based on AFR?

 

 

From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Hiren Joshi
Sent: Wednesday, July 15, 2009 12:31 AM
To: gluster-users@gluster.org
Subject: [Gluster-users] DHT with AFR

 

Hi all,

 

I'm looking to create a system where all nodes are mirrored with
at least one other node (using afr) but I also want it to be fully
scalable (that is to say I can add another mirrored pair at any point to
the system).

 

Will AFR with DHT do what I want? Are there any problems/caveats
with this setup that I should know about? What have been your
experiences?

 

Thanks,

Josh.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
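
For reference, "AFR the nodes, then DHT the AFR'd pairs" looks roughly like
this on the client side (a sketch only; the protocol/client volumes
glust1a_1, glust1b_1 and so on are assumed to be defined earlier in the
volfile, and the names are just illustrative):

volume mirror1
  type cluster/replicate           # afr
  subvolumes glust1a_1 glust1b_1
end-volume

volume mirror2
  type cluster/replicate
  subvolumes glust1a_2 glust1b_2
end-volume

volume dist
  type cluster/distribute          # dht
  subvolumes mirror1 mirror2
end-volume

Adding capacity later then means defining another replicate pair and
appending it to the distribute subvolumes line.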


Re: [Gluster-users] GlusterFS Preformance

2009-07-09 Thread Hiren Joshi
 

 -Original Message-
 From: Stephan von Krawczynski [mailto:sk...@ithnet.com] 
 Sent: 09 July 2009 13:50
 To: Hiren Joshi
 Cc: Liam Slusser; gluster-users@gluster.org
 Subject: Re: [Gluster-users] GlusterFS Preformance
 
 On Thu, 9 Jul 2009 09:33:59 +0100
 Hiren Joshi j...@moonfruit.com wrote:
 
   
  
   -Original Message-
   From: Stephan von Krawczynski [mailto:sk...@ithnet.com] 
   Sent: 09 July 2009 09:08
   To: Liam Slusser
   Cc: Hiren Joshi; gluster-users@gluster.org
   Subject: Re: [Gluster-users] GlusterFS Preformance
   
   On Wed, 8 Jul 2009 10:05:58 -0700
   Liam Slusser lslus...@gmail.com wrote:
   
You have to remember that when you are writing with NFS you're writing to
one node, whereas your gluster setup below is copying the same data to two
nodes, so you're doubling the bandwidth. Don't expect NFS-like performance
on writing with multiple storage bricks. However, read performance should be
quite good.
liam
   
   Do you think this problem can be solved by using 2 storage 
   bricks on two
   different network cards on the client?
  
  I'd be surprised if the bottleneck here was the network. 
 I'm testing on
  a xen network but I've only been given one eth per slice.
 
 Do you mean your clients and servers are virtual XEN 
 installations (on the
 same physical box) ?

They are on different boxes and using different disks (don't ask); this
seemed like a good way to evaluate, as I set up an NFS server using the same
equipment to get relative timings. The plan is to roll it out onto new
physical boxes in a month or two.


 
 Regards,
 Stephan
 
 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users