Re: [Gluster-users] Newbee Question: GlusterFS on Compute Cluster?

2013-05-10 Thread Fabricio Cannini

On 10-05-2013 19:38, Bradley, Randy wrote:


I've got a 24-node compute cluster. Each node has one extra terabyte
drive. It seemed reasonable to install Gluster on each of the compute
nodes and the head node. I created a volume from the head node:

gluster volume create gv1 rep 2 transport tcp compute000:/export/brick1
compute001:/export/brick1 compute002:/export/brick1
compute003:/export/brick1 compute004:/export/brick1
compute005:/export/brick1 compute006:/export/brick1
compute007:/export/brick1 compute008:/export/brick1
compute009:/export/brick1 compute010:/export/brick1
compute011:/export/brick1 compute012:/export/brick1
compute013:/export/brick1 compute014:/export/brick1
compute015:/export/brick1 compute016:/export/brick1
compute017:/export/brick1 compute018:/export/brick1
compute019:/export/brick1 compute020:/export/brick1
compute021:/export/brick1 compute022:/export/brick1
compute023:/export/brick1

And then I mounted the volume on the head node. So far, so good. Approx. 10
TB available.

Now I would like each compute node to be able to access files on this
volume. Would this be done by NFS mount from the head node to the
compute nodes or is there a better way?


Back in the days of 3.0.x (~3 years ago) I made a 'distributed
scratch' in a scenario just like yours, Randy. I remember using
gluster's own protocol to access the files, mounting the volume locally.
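
For the archives, a minimal sketch of that native-client mount, assuming
Randy's volume name 'gv1' and a hypothetical mount point ('head' stands
for the head node's hostname). With rep 2, the 24 x 1TB bricks give ~12TB
raw, hence the ~10TB usable:

  # on each compute node
  mkdir -p /mnt/gv1
  mount -t glusterfs head:/gv1 /mnt/gv1

  # or in /etc/fstab, so it survives reboots:
  # head:/gv1  /mnt/gv1  glusterfs  defaults,_netdev  0  0

No NFS re-export from the head node is needed; every client talks to the
bricks directly.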



Re: [Gluster-users] Estimated date for release of Gluster 3.3

2012-03-14 Thread Fabricio Cannini
2012/3/14 Tim Bell tim.b...@cern.ch:

 Is there an estimate of when Gluster 3.3 would be out of beta?

Before Enlightenment 17, right guys? Right?


Re: [Gluster-users] FW: Upcoming August webinars

2011-08-17 Thread Fabricio Cannini
On Wednesday, 17 August 2011, at 15:26:04, Charles Williams wrote:
 Will we be able to get a copy of this at a later date? Would love to
 attend but am not able (allowed?) to stay up so late when I have to work.
 :(
 
 thanks,
 chuck

And please, pretty please, no WebEx video format.


Re: [Gluster-users] Fwd: files not syncing up with glusterfs 3.1.2

2011-02-21 Thread Fabricio Cannini
On Monday, 21 February 2011, at 12:01:11, Joe Landman wrote:
 On 02/21/2011 09:53 AM, David Lloyd wrote:
  I'm working with Paul on this.
  
  We did take advice on XFS beforehand, and were given the impression that
  it would just be a performance issue rather than things not actually
  working.
 
 Hi David
 
XFS works fine as a backing store for GlusterFS.  We've deployed this
 now to many customer sites, and have not run into issues with it.

That's nice to hear. Next time I set up a gluster volume I'm going to
take a look at XFS as a backend.
BTW Joe, in these deployments with XFS as the backing filesystem, which
version of gluster did you use?

  We've got quite fast hardware, and are more comfortable with XFS than
  ext4 from our own experience, so we did our own tests and were happy with
  XFS performance.
 
There are many reasons to choose XFS in general, and there are no
 issues with using it with GlusterFS.  Especially on large file transfers.

Indeed. But I still had the 3.0.x series docs in mind.

  Likewise, we're aware of the very poor performance of gluster with small
  files. We serve a lot of large files, and we've now moved most of the
  small files off to a normal NFS server. Again, small files aren't known
  to break gluster, are they?
 
 Small files are the bane of every cluster file system.  We recommend
 using the NFS client with GlusterFS for smaller files, simply due to the
 additional caching you can get out of the NFS system.

Good to know. Thanks for the tip.
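
If I read the tip right, it amounts to something like this on the client
side (hypothetical names; as I understand it, gluster's built-in NFS
server speaks NFSv3 over TCP):

  # mount the same volume via the kernel NFS client instead of FUSE
  mount -t nfs -o vers=3,tcp server1:/gv0 /mnt/smallfiles

The kernel-side attribute and page caching is what helps with the small
files.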

Regards,
 
 Joe


Re: [Gluster-users] Reconfiguring volume type

2011-02-16 Thread Fabricio Cannini
Hi all.

 However, if you have a replicate volume with count 2 already created,
 then you just need to add 2 more bricks and it will automatically become
 a distributed-replicate volume. This can be scaled to 6 bricks by adding
 2 more bricks again, and these steps can be done while the volume is
 online.

Is it possible, with the current 3-server setup that Udo has, to add 1
server to the volume, then change the volume type to distributed-replicate,
and afterwards add the other 2 servers?
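
For reference, my reading of the quoted steps as a sketch, with
hypothetical names (a replica-2 volume 'gv0' growing by one brick pair
while online):

  gluster volume add-brick gv0 server3:/export/brick1 server4:/export/brick1
  gluster volume info gv0   # type should now read Distributed-Replicate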


Re: [Gluster-users] Gluster 3.1.1 issues over RDMA and HPC environment

2011-02-07 Thread Fabricio Cannini
On Sunday, 6 February 2011, at 16:35:45, Claudio Baeza Retamal wrote:

Hi.

 Dear friends,
 
 I have several stability and reliability problems in a small-to-middle
 sized cluster; my configuration is the following:
 
 66 compute nodes (IBM iDataPlex, X5550, 24 GB RAM)
 1 access node (front end)
 1 master node (queue manager and monitoring)
 2 servers for I/O with GlusterFS configured in distributed mode (4 TB in
 total)
 
 All computers have a Mellanox ConnectX QDR (40 Gbps) dual-port HCA
 1 Qlogic 12800-180 switch, 7 leafs of 24 ports each and two double
 spines with QSFP plugs
 
 CentOS 5.5 and xCAT as cluster manager
 OFED 1.5.1
 Gluster 3.1.1 over InfiniBand

I have a smaller but relatively similar setup, and am facing the same
issues as Claudio.

- 1 frontend node (2 Intel Xeon 5420, 16GB DDR2 ECC RAM, 4TB of raw disk
space) with 2 Mellanox Technologies MT26418 [ConnectX VPI PCIe 2.0 5GT/s -
IB DDR] HCAs

- 1 storage node (2 Intel Xeon 5420, 24GB DDR2 ECC RAM, 8TB of raw disk
space) with 2 Mellanox Technologies MT26418 [ConnectX VPI PCIe 2.0 5GT/s -
IB DDR] HCAs

- 22 compute nodes (2 Intel Xeon 5420, 16GB DDR2 ECC RAM, 750GB of raw
disk space) with 1 Mellanox Technologies MT25204 [InfiniHost III Lx]
InfiniBand HCA

Each compute node has a /glstfs partition of 615GB, serving a gluster
volume of ~3.1TB mounted at /scratch on all nodes and the frontend, using
the stock 3.0.5 packages from Debian Squeeze 6.0.

 When the cluster is fully loaded with applications that use MPI
 heavily, in combination with other applications that do a lot of file
 system I/O, GlusterFS stops working.
 Also, when gendb runs InterProScan bioinformatics applications with 128
 or more jobs, GlusterFS dies or disconnects clients randomly, so some
 applications shut down because they no longer see the file system.
 
 This does not happen with Gluster over TCP (1 Gbps Ethernet), and
 neither does it happen with Lustre 1.8.5 over InfiniBand; under the
 same conditions Lustre works fine.
 
 My question is: does there exist any documentation with more specific
 information on GlusterFS tuning?
 
 I have only found basic information on configuring Gluster, but no
 deeper (i.e. expert-level) information. I think there must exist some
 option in GlusterFS to handle this situation; moreover, other people
 should have the same problems, since we replicated the configuration
 at another site with the same results.
 Perhaps the question is about gluster scalability: how many clients
 are recommended per gluster server when using RDMA and an InfiniBand
 fabric at 40 Gbps?
 
 I would appreciate any help. I want to use Gluster, but stability and
 reliability are very important for us.

I have solved it by taking out of the execution queue the first node that
was listed in the client file '/etc/glusterfs/glusterfs.vol'.
And this is what I *think* is the reason it worked:

I can't find it now, but I saw in the 3.0 docs that "the first hostname
found in the client config file acts as a lock server for the whole
volume". In other words, the first hostname found in the client config
coordinates the locking/unlocking of files in the whole volume. Taken out
of the queue, that node does not accept any jobs, and can dedicate its
processing power solely to being a 'lock server'.
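
To make the idea concrete, a trimmed sketch of the top of such a client
volfile (names are hypothetical); by that doc snippet, compute000, the
first protocol/client volume listed, would be the one acting as lock
server:

  volume compute000
    type protocol/client
    option transport-type ib-verbs
    option remote-host compute000
    option remote-subvolume brick1
  end-volume

  # ... the remaining 21 client volumes follow, then the cluster
  # translators that tie them together

So pulling compute000 out of the job queue leaves its CPU free for lock
traffic.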

It may well be the case that gluster is not yet as optimized for
InfiniBand as it is for Ethernet, too; I just can't say.

I am also unable to find out how to specify something like this in the
gluster config: "node n is a lock server for nodes a, b, c, d". Does
anybody know if it is possible?

Hope this helps you somehow, and helps to improve gluster performance over IB/RDMA.


Re: [Gluster-users] 3.1.2 feedback

2011-01-21 Thread Fabricio Cannini
On Friday, 21 January 2011, at 09:49:10, David Lloyd wrote:
 Hello,
 
 Haven't heard much feedback about installing glusterfs 3.1.2.
 
 Should I infer that it's all gone extremely smoothly for everyone, or
 is everyone being as cowardly as me and waiting for others to do it first?

Hi David.

3.1 is very promising indeed. As an example, yesterday I felt the need to
use the 'migrate' feature. But I'm one of those 'cowards', and I think
that (many will agree with me) you can never be too cowardly with
production machines. ;)

Bye.


Re: [Gluster-users] Frequent stale nfs file handle error

2011-01-12 Thread Fabricio Cannini
On Wednesday, 12 January 2011, at 08:19:50, Amar Tumballi wrote:
   If anybody can make sense of why this is happening, I'd be really,
   really thankful.
  
  We fixed many issues in the 3.1.x releases compared to 3.0.5 (even some
  issues fixed in 3.0.6). Please consider testing/upgrading to a higher
  version.

I'm thinking about upgrading, but I'd rather stay with Debian stock
packages if possible.
I'll talk with Patrick Matthäi, Debian's gluster maintainer, and see if it
is possible to backport the fixes.
Also, if there is any work-around available, please tell us.

  Got the same problem on 3.1.1 - 3.1.2qa4
 
 Can you paste the logs? Also, when you say 'problem', what user
 application errors are seen?

I've put a bunch of log messages here (http://pastebin.com/gkf3CmK9) and
here (http://pastebin.com/wDgF74j8).

 Regards,
 Amar


[Gluster-users] Frequent stale nfs file handle error

2011-01-11 Thread Fabricio Cannini
Hi all.

I've been having this error very frequently, at least once a week.
Whenever this happens, restarting all the gluster daemons makes things
work again.
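
On these Debian boxes that amounts to something like the following on
every node (the init script name is from the stock squeeze package, if I
recall correctly):

  /etc/init.d/glusterfs-server restart
  # then remount on the client side, assuming an fstab entry for /scratch:
  umount /scratch && mount /scratch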

This is the hardware I'm using:

22 nodes, each with 2x Intel Xeon 5420 2.5GHz, 16GB DDR2 ECC, and 1 SATA2
HD of 750GB, of which ~600GB is a partition (/glstfs) dedicated to
gluster. Each node has 1 Mellanox MT25204 [InfiniHost III Lx] InfiniBand
DDR HCA used by gluster through the 'verbs' interface. The switch is a
Voltaire ISR 9024S/D.
Each node is also a client of the gluster volume, which is accessed
through the '/scratch' mount point.
The machine itself is a scientific cluster, with all nodes and the head
running Debian Squeeze amd64, with stock 3.0.5 packages.

These are the server and client configs:

Client config
http://pastebin.com/6d4BjQwd

Server config
http://pastebin.com/4ZmX9ir1

And here are some of the messages in the head node log:
http://pastebin.com/gkf3CmK9

If anybody can make sense of why this is happening, I'd be really, really
thankful.


Re: [Gluster-users] glusterfs debian package

2011-01-06 Thread Fabricio Cannini
On Thursday, 6 January 2011, at 17:24:02, Piotr Kandziora wrote:
 Hello,
 
 Quick question for the gluster developers: could you add automatic
 creation of rc scripts in the postinst action of the Debian package?
 Currently this is not supported and the user has to manually execute the
 update-rc.d command. This would be helpful in large cluster
 installations...

Hi All.

I second Piotr's suggestion.
May I also suggest 2 things to the gluster devs:

- Follow Debian's way of separating packages (server, client, libraries,
and common data packages). It makes automated installation much easier and
cleaner.

- Create a proper Debian repository of the latest and greatest gluster
release at gluster.org. Again, it would make our lives as sysadmins much
easier to just set up a repo in '/etc/apt/sources.list' and let
$management_system take care of the rest.
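
Something like this hypothetical entry (URL and suite invented here, just
to illustrate) would be all it takes on our side:

  # /etc/apt/sources.list
  deb http://download.gluster.org/apt squeeze main

followed by the usual 'apt-get update && apt-get install
glusterfs-server'.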


[Gluster-users] Instability with large setup and infiniband

2011-01-06 Thread Fabricio Cannini
Hi all

I've set up glusterfs, version 3.0.5 (Debian Squeeze amd64 stock
packages), like this, with each node being both a server and a client:

Client config
http://pastebin.com/6d4BjQwd

Server config
http://pastebin.com/4ZmX9ir1


Configuration of each node:
2x Intel Xeon 5420 2.5GHz, 16GB DDR2 ECC, 1 SATA2 HD of 750GB.
Of which ~600GB is a partition (/glstfs) dedicated to gluster. Each node
also has 1 Mellanox MT25204 [InfiniHost III Lx] InfiniBand DDR HCA used by
gluster through the 'verbs' interface.

This cluster of 22 nodes is used for scientific computing, and glusterfs is 
used to create a scratch area for I/O intensive apps.

And this is one of the problems: *one* I/O-intensive job can bring the
whole volume to its knees, with "Transport endpoint not connected" errors
and so on, to the point of complete uselessness, especially if the job is
running in parallel (through MPI) on more than one node.

The other problem is that gluster has been somewhat unstable, even without
I/O-intensive jobs. Out of the blue, a simple 'ls -la /scratch' is
answered with a "Transport endpoint not connected" error. But when this
happens, restarting all servers brings things back to a working state.

If anybody here using glusterfs with InfiniBand has been through this (or
something like it) and could share your experiences, please, please,
please do.

TIA,
Fabricio.