Re: [Gluster-users] Automatic Failover

2011-03-09 Thread Daniel Müller
That is part of Apache: a .htaccess file will do that for you, see:
http://httpd.apache.org/docs/2.0/howto/htaccess.html
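For example, a minimal sketch of restricting access with HTTP basic auth via .htaccess (the password file path and user name below are placeholders, not from this thread):

AuthType Basic
AuthName "Restricted glusterfs docs"
AuthUserFile /etc/httpd/conf/gluster-htpasswd
Require valid-user

The referenced password file would be created beforehand with something like
htpasswd -c /etc/httpd/conf/gluster-htpasswd someuser, and the directory needs
AllowOverride AuthConfig for the .htaccess file to take effect.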

---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---
-Original Message-
From: Mohit Anchlia [mailto:mohitanch...@gmail.com]
Sent: Wednesday, March 09, 2011 8:40 PM
To: muel...@tropenklinik.de; gluster-users@gluster.org
Subject: Re: [Gluster-users] Automatic Failover

Thanks, that's what I was thinking initially! Maybe this question
doesn't belong in this forum, but how do I protect files from being
accessed by unauthorized users in this case?

On Wed, Mar 9, 2011 at 10:48 AM, Daniel Müller 
wrote:
> It is sufficient then to point your htmldocs (root of your apache) to a
> glusterfs client share.
> Thats all.
> ex: mount -t glusterfs /glusterfs/export /srv/www/htmldocs
>
> On Wed, 9 Mar 2011 09:25:22 -0800, Mohit Anchlia 
> wrote:
>> This is providing http service to clients using apache to serve them
>> with documents/images etc. on demand. These clients can be web browser
>> or desktop.
>>
>> On Tue, Mar 8, 2011 at 11:51 PM, Daniel Müller 
>> wrote:
>>> What exactly are you trying to do?
>>> Do you mean apache glusterfs over webdav??!!
>>>
>>> ---
>>> EDV Daniel Müller
>>>
>>> Leitung EDV
>>> Tropenklinik Paul-Lechler-Krankenhaus
>>> Paul-Lechler-Str. 24
>>> 72076 Tübingen
>>>
>>> Tel.: 07071/206-463, Fax: 07071/206-499
>>> eMail: muel...@tropenklinik.de
>>> Internet: www.tropenklinik.de
>>> ---
>>> -Original Message-
>>> From: Mohit Anchlia [mailto:mohitanch...@gmail.com]
>>> Sent: Tuesday, March 08, 2011 7:45 PM
>>> To: muel...@tropenklinik.de; gluster-users@gluster.org
>>> Subject: Re: [Gluster-users] Automatic Failover
>>>
>>> Thanks! I am still trying to figure out how it works with HTTP. In
>>> some docs I see that GFS supports HTTP and I am not sure how that
>>> works. Does anyone know how that works or what has to be done to make
>>> it available over HTTP?
>>>
>>> On Mon, Mar 7, 2011 at 11:14 PM, Daniel Müller
> 
>>> wrote:
If you use an http server, samba, etc. (ex: apache) you need ucarp. There is a
little blog about it:

http://www.misdivision.com/blog/setting-up-a-highly-available-storage-cluster-using-glusterfs-and-ucarp

Ucarp gives you a virtual IP for your httpd. If the one node is down, the second
with the same IP is still alive and serving. Glusterfs serves
the data in HA mode. Try it, it works like a charm.

 ---
 EDV Daniel Müller

 Leitung EDV
 Tropenklinik Paul-Lechler-Krankenhaus
 Paul-Lechler-Str. 24
 72076 Tübingen

 Tel.: 07071/206-463, Fax: 07071/206-499
 eMail: muel...@tropenklinik.de
 Internet: www.tropenklinik.de
 ---

-Original Message-
From: Mohit Anchlia [mailto:mohitanch...@gmail.com]
Sent: Monday, March 07, 2011 6:47 PM
To: muel...@tropenklinik.de; gluster-users@gluster.org
Subject: Re: [Gluster-users] Automatic Failover

 Thanks! Can you please give me some scenarios where applications will
 need ucarp, are you referring to nfs clients?

 Does glusterfs support http also? can't find any documentation.

 On Mon, Mar 7, 2011 at 12:34 AM, Daniel Müller
 
 wrote:
> Yes the normal way the clients will follow the failover but some
> applications and services running on top
> Of glusterfs will not and will need this.
>
> ---
> EDV Daniel Müller
>
> Leitung EDV
> Tropenklinik Paul-Lechler-Krankenhaus
> Paul-Lechler-Str. 24
> 72076 Tübingen
>
> Tel.: 07071/206-463, Fax: 07071/206-499
> eMail: muel...@tropenklinik.de
> Internet: www.tropenklinik.de
> ---
>
> -Original Message-
> From: Mohit Anchlia [mailto:mohitanch...@gmail.com]
> Sent: Friday, March 04, 2011 6:36 PM
> To: muel...@tropenklinik.de; gluster-users@gluster.org
> Subject: Re: [Gluster-users] Automatic Failover
>
> Thanks! I thought for native clients failover is inbuilt. Isn't that
>>> true?
>
> On Thu, Mar 3, 2011 at 11:46 PM, Daniel Müller
> 
> wrote:
>> I use ucarp to make a real failover for my gluster-vols
>> For the clients:
>> Ex:In  my ucarp config, vip-001.conf on both nodes (or nnn...
> nodes):
>>
>> vip-001.conf on node1
>> #ID
>> ID=001
>> #Network Interface
>> BIND_INTERFACE=eth0
>> #Real IP
>> SOURCE_ADDRESS=192.168.132.56
>>

Re: [Gluster-users] How to use gluster for WAN/Data Center replication

2011-03-09 Thread Count Zero
Thank you very much for the Sector info, I have been dying to find something
like this for some time now.
I will try it out over the weekend on a small cluster of machines that are on a
WAN (20 nodes).


On Mar 10, 2011, at 6:51 AM, 沈允中 wrote:

> Hi, 
> According to my understanding, there is no data replication mechanism for WAN.
> I know that Sector file system have this function you want and it uses 
> enhanced UDT protocol.
> The way you can use in Gluster is to replicate data by using the original TCP 
> protocol.
> But it is influenced by network a lot.
> I tried this way before but the performance was not good.
> I asked gluster.com about the question and they said they are going to 
> provide the function this year, for your reference.
> 
> Best Regards, 
> Sylar Shen
> 
> -Original Message-
> From: gluster-users-boun...@gluster.org 
> [mailto:gluster-users-boun...@gluster.org] On Behalf Of Mohit Anchlia
> Sent: Thursday, March 10, 2011 8:44 AM
> To: gluster-users@gluster.org
> Subject: [Gluster-users] How to use gluster for WAN/Data Center replication
> 
> How to setup gluster for WAN/Data Center replication? Are there others
> using it this way?
> 
> Also, how do we make the writes asynchronous for data center replication?
> 
> We have a requirement to replicate data to other data center as well.
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] How to use gluster for WAN/Data Center replication

2011-03-09 Thread 沈允中
Hi, 
As far as I understand, there is no data replication mechanism for WAN.
I know that the Sector file system has the function you want, and it uses an
enhanced UDT protocol.
The way you can do it in Gluster is to replicate data using the ordinary TCP
protocol.
But that is influenced by the network a lot.
I tried this way before but the performance was not good.
I asked gluster.com about this and they said they are going to provide
the function this year, for your reference.

Best Regards, 
Sylar Shen

-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Mohit Anchlia
Sent: Thursday, March 10, 2011 8:44 AM
To: gluster-users@gluster.org
Subject: [Gluster-users] How to use gluster for WAN/Data Center replication

How to setup gluster for WAN/Data Center replication? Are there others
using it this way?

Also, how do we make the writes asynchronous for data center replication?

We have a requirement to replicate data to other data center as well.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] How to use gluster for WAN/Data Center replication

2011-03-09 Thread Mohit Anchlia
How do I set up gluster for WAN/Data Center replication? Are there others
using it this way?

Also, how do we make the writes asynchronous for data center replication?

We have a requirement to replicate data to the other data center as well.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Backup Strategy

2011-03-09 Thread Mohit Anchlia
Thanks! ZFS brings up another question about encrypting data. Is there
a feature in gluster that helps with encrypting data when it's written
to the file system?

On Wed, Mar 9, 2011 at 2:17 PM, Liam Slusser  wrote:
> Netbackup is great and can probably backup directly from a glusterFS
> client mount, however, the license and software cost for a few clients
> and one server/media server is nearly $50k.  Not exactly cheap.  Id
> look into Amanda backup if I was on a budget and wanted to backup to
> tape.
>
> Another option is to just do rsync your gluster cluster to a Sun
> Solaris server with a ZFS partition.  Then you can do nightly zfs
> snapshots of your data (snapshots only save what changes so it uses
> very little space).
>
> liam
>
> On Wed, Mar 9, 2011 at 11:40 AM, Mohit Anchlia  wrote:
>> Thanks! have you heard of netbackup? Our co. already has license for
>> it. I think it can be used.
>>
>> On Wed, Mar 9, 2011 at 11:11 AM, Sabuj Pattanayek  wrote:
>>> I read the docs. But here you go :
>>>
>>> http://lmgtfy.com/?q=backuppc+howto
>>>
>>> On Wed, Mar 9, 2011 at 1:05 PM, Mohit Anchlia  
>>> wrote:
 Thanks! Is there a short blog or steps that I can look at.
 documentaion looks overwhelming at first look :)

 On Wed, Mar 9, 2011 at 10:53 AM, Sabuj Pattanayek  wrote:
> for the amount of features that you get with backuppc, it's worth the
> fairly painless setup. Btw, we've found that it's better/faster to use
> tar via backuppc (it supports rsync as well) to do the backups rather
> than rsync in backuppc. Rsync can be really slow if you have
> thousands/millions of files.
>
> On Wed, Mar 9, 2011 at 12:50 PM, Mohit Anchlia  
> wrote:
>> Is there a problem with using just rsync vs backupcc? I need to read
>> about backupcc and how easy it is to setup.
>

>>> ___
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Backup Strategy

2011-03-09 Thread Liam Slusser
Netbackup is great and can probably backup directly from a glusterFS
client mount, however, the license and software cost for a few clients
and one server/media server is nearly $50k.  Not exactly cheap.  I'd
look into Amanda backup if I was on a budget and wanted to backup to
tape.

Another option is to just rsync your gluster cluster to a Sun
Solaris server with a ZFS partition.  Then you can do nightly zfs
snapshots of your data (snapshots only save what changes so it uses
very little space).
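
A rough sketch of that approach (host name, pool, and target paths are placeholders; adjust the rsync flags to taste):

rsync -aHAX --delete /mnt/glusterfs/ backuphost:/tank/gluster-backup/
ssh backuphost zfs snapshot tank/gluster-backup@`date +%Y-%m-%d`

Run it from a machine that has the volume mounted via the native client; each nightly snapshot then only consumes space for the blocks that changed since the previous one.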

liam

On Wed, Mar 9, 2011 at 11:40 AM, Mohit Anchlia  wrote:
> Thanks! have you heard of netbackup? Our co. already has license for
> it. I think it can be used.
>
> On Wed, Mar 9, 2011 at 11:11 AM, Sabuj Pattanayek  wrote:
>> I read the docs. But here you go :
>>
>> http://lmgtfy.com/?q=backuppc+howto
>>
>> On Wed, Mar 9, 2011 at 1:05 PM, Mohit Anchlia  wrote:
>>> Thanks! Is there a short blog or steps that I can look at.
>>> documentaion looks overwhelming at first look :)
>>>
>>> On Wed, Mar 9, 2011 at 10:53 AM, Sabuj Pattanayek  wrote:
 for the amount of features that you get with backuppc, it's worth the
 fairly painless setup. Btw, we've found that it's better/faster to use
 tar via backuppc (it supports rsync as well) to do the backups rather
 than rsync in backuppc. Rsync can be really slow if you have
 thousands/millions of files.

 On Wed, Mar 9, 2011 at 12:50 PM, Mohit Anchlia  
 wrote:
> Is there a problem with using just rsync vs backupcc? I need to read
> about backupcc and how easy it is to setup.

>>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Backup Strategy

2011-03-09 Thread Mohit Anchlia
Thanks! Have you heard of NetBackup? Our company already has a license for
it. I think it can be used.

On Wed, Mar 9, 2011 at 11:11 AM, Sabuj Pattanayek  wrote:
> I read the docs. But here you go :
>
> http://lmgtfy.com/?q=backuppc+howto
>
> On Wed, Mar 9, 2011 at 1:05 PM, Mohit Anchlia  wrote:
>> Thanks! Is there a short blog or steps that I can look at.
>> documentaion looks overwhelming at first look :)
>>
>> On Wed, Mar 9, 2011 at 10:53 AM, Sabuj Pattanayek  wrote:
>>> for the amount of features that you get with backuppc, it's worth the
>>> fairly painless setup. Btw, we've found that it's better/faster to use
>>> tar via backuppc (it supports rsync as well) to do the backups rather
>>> than rsync in backuppc. Rsync can be really slow if you have
>>> thousands/millions of files.
>>>
>>> On Wed, Mar 9, 2011 at 12:50 PM, Mohit Anchlia  
>>> wrote:
 Is there a problem with using just rsync vs backupcc? I need to read
 about backupcc and how easy it is to setup.
>>>
>>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Automatic Failover

2011-03-09 Thread Mohit Anchlia
Thanks, that's what I was thinking initially! Maybe this question
doesn't belong in this forum, but how do I protect files from being
accessed by unauthorized users in this case?

On Wed, Mar 9, 2011 at 10:48 AM, Daniel Müller  wrote:
> It is sufficient then to point your htmldocs (root of your apache) to a
> glusterfs client share.
> Thats all.
> ex: mount -t glusterfs /glusterfs/export /srv/www/htmldocs
>
> On Wed, 9 Mar 2011 09:25:22 -0800, Mohit Anchlia 
> wrote:
>> This is providing http service to clients using apache to serve them
>> with documents/images etc. on demand. These clients can be web browser
>> or desktop.
>>
>> On Tue, Mar 8, 2011 at 11:51 PM, Daniel Müller 
>> wrote:
>>> What exactly are you trying to do?
>>> Do you mean apache glusterfs over webdav??!!
>>>
>>> ---
>>> EDV Daniel Müller
>>>
>>> Leitung EDV
>>> Tropenklinik Paul-Lechler-Krankenhaus
>>> Paul-Lechler-Str. 24
>>> 72076 Tübingen
>>>
>>> Tel.: 07071/206-463, Fax: 07071/206-499
>>> eMail: muel...@tropenklinik.de
>>> Internet: www.tropenklinik.de
>>> ---
>>> -Original Message-
>>> From: Mohit Anchlia [mailto:mohitanch...@gmail.com]
>>> Sent: Tuesday, March 08, 2011 7:45 PM
>>> To: muel...@tropenklinik.de; gluster-users@gluster.org
>>> Subject: Re: [Gluster-users] Automatic Failover
>>>
>>> Thanks! I am still trying to figure out how it works with HTTP. In
>>> some docs I see that GFS supports HTTP and I am not sure how that
>>> works. Does anyone know how that works or what has to be done to make
>>> it available over HTTP?
>>>
>>> On Mon, Mar 7, 2011 at 11:14 PM, Daniel Müller
> 
>>> wrote:
If you use an http server, samba, etc. (ex: apache) you need ucarp. There is a
little blog about it:

http://www.misdivision.com/blog/setting-up-a-highly-available-storage-cluster-using-glusterfs-and-ucarp

Ucarp gives you a virtual IP for your httpd. If the one node is down, the second
with the same IP is still alive and serving. Glusterfs serves
the data in HA mode. Try it, it works like a charm.

 ---
 EDV Daniel Müller

 Leitung EDV
 Tropenklinik Paul-Lechler-Krankenhaus
 Paul-Lechler-Str. 24
 72076 Tübingen

 Tel.: 07071/206-463, Fax: 07071/206-499
 eMail: muel...@tropenklinik.de
 Internet: www.tropenklinik.de
 ---

-Original Message-
From: Mohit Anchlia [mailto:mohitanch...@gmail.com]
Sent: Monday, March 07, 2011 6:47 PM
To: muel...@tropenklinik.de; gluster-users@gluster.org
Subject: Re: [Gluster-users] Automatic Failover

 Thanks! Can you please give me some scenarios where applications will
 need ucarp, are you referring to nfs clients?

 Does glusterfs support http also? can't find any documentation.

 On Mon, Mar 7, 2011 at 12:34 AM, Daniel Müller
 
 wrote:
> Yes the normal way the clients will follow the failover but some
> applications and services running on top
> Of glusterfs will not and will need this.
>
> ---
> EDV Daniel Müller
>
> Leitung EDV
> Tropenklinik Paul-Lechler-Krankenhaus
> Paul-Lechler-Str. 24
> 72076 Tübingen
>
> Tel.: 07071/206-463, Fax: 07071/206-499
> eMail: muel...@tropenklinik.de
> Internet: www.tropenklinik.de
> ---
>
> -Original Message-
> From: Mohit Anchlia [mailto:mohitanch...@gmail.com]
> Sent: Friday, March 04, 2011 6:36 PM
> To: muel...@tropenklinik.de; gluster-users@gluster.org
> Subject: Re: [Gluster-users] Automatic Failover
>
> Thanks! I thought for native clients failover is inbuilt. Isn't that
>>> true?
>
> On Thu, Mar 3, 2011 at 11:46 PM, Daniel Müller
> 
> wrote:
>> I use ucarp to make a real failover for my gluster-vols
>> For the clients:
>> Ex:In  my ucarp config, vip-001.conf on both nodes (or nnn...
> nodes):
>>
>> vip-001.conf on node1
>> #ID
>> ID=001
>> #Network Interface
>> BIND_INTERFACE=eth0
>> #Real IP
>> SOURCE_ADDRESS=192.168.132.56
>> #Virtual IP used by ucarp
>> VIP_ADDRESS=192.168.132.58
>> #Ucarp Password
>> PASSWORD=Password
>>
>> On node2
>>
>> #ID
>> ID=002
>> #Network Interface
>> BIND_INTERFACE=eth0
>> #Real IP
>> SOURCE_ADDRESS=192.168.132.57
>> #Virtual IP used by ucarp
>> VIP_ADDRESS=192.168.132.58
>> #Ucarp Password
>> PASSWORD=Password
>>
>> Then
>> mount -t glusterfs 192.168.132.58:/samba-vol /mnt/glusterfs
>> Set entries in fstab:
>>
>> 192.168.132.58:/samba-vol /mnt/glusterfs glusterfs defaults 0 0
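
For reference, those config variables map roughly onto a plain ucarp invocation like the following (a sketch only; the vhid value and the up/down scripts that add or remove the address are assumptions, not taken from this thread):

ucarp --interface=eth0 --srcip=192.168.132.56 --vhid=1 --pass=Password \
      --addr=192.168.132.58 --upscript=/etc/vip-up.sh --downscript=/etc/vip-down.sh

where /etc/vip-up.sh would typically run "ip addr add 192.168.132.58/24 dev eth0" and the down script removes the address again.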
>>
>> Now one Server fails there is still the same service on for

Re: [Gluster-users] Backup Strategy

2011-03-09 Thread Sabuj Pattanayek
I read the docs. But here you go :

http://lmgtfy.com/?q=backuppc+howto

On Wed, Mar 9, 2011 at 1:05 PM, Mohit Anchlia  wrote:
> Thanks! Is there a short blog or steps that I can look at.
> documentaion looks overwhelming at first look :)
>
> On Wed, Mar 9, 2011 at 10:53 AM, Sabuj Pattanayek  wrote:
>> for the amount of features that you get with backuppc, it's worth the
>> fairly painless setup. Btw, we've found that it's better/faster to use
>> tar via backuppc (it supports rsync as well) to do the backups rather
>> than rsync in backuppc. Rsync can be really slow if you have
>> thousands/millions of files.
>>
>> On Wed, Mar 9, 2011 at 12:50 PM, Mohit Anchlia  
>> wrote:
>>> Is there a problem with using just rsync vs backupcc? I need to read
>>> about backupcc and how easy it is to setup.
>>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Strange behaviour glusterd 3.1

2011-03-09 Thread Mohit Anchlia
If it's a problem, can you please enable debug logging on that volume and
then see what gets logged?

I suggest filing a bug since it sounds critical. What if it was production :)
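
A sketch of how that could look, assuming the diagnostics log-level volume options listed for 3.1 (please verify the option names against your release):

gluster volume set samba-vol diagnostics.client-log-level DEBUG
gluster volume set samba-vol diagnostics.brick-log-level DEBUG

The client log would then land under /var/log/glusterfs/ (named after the mount point) and the brick logs under /var/log/glusterfs/bricks/ on the servers; remember to set the levels back to the default once the traces are captured.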

On Wed, Mar 9, 2011 at 10:42 AM, Daniel Müller  wrote:
>
>
> did set gluster volume set samba-vol performance.quick-read off.
>
> vim a new file on node1. ssh node2. ls new file -> read error, file not found.
>
> did set gluster volume set samba-vol performance.quick-read on.
>
> I can ls, change content one time. Then the same again. No solution!!!
>
> Should I delete the vol and build a new one?
>
> I am glad that it is no production environment. It would be a mess.
>
> On Wed, 09 Mar 2011 23:10:21 +0530, Vijay Bellur wrote:
> On Wednesday 09 March 2011 04:00 PM, Daniel Müller wrote:
> /mnt/glusterfs is the mount point of the client where the samba-vol
> (backend:/glusterfs/export) is mounted on. So it should work. And it did
> work until last week.
>
> Can you please check by disabling the quick-read translator in your setup
> via the following command:
>
> #gluster volume set <volname> performance.quick-read off
>
> You may be hitting bug 2027 with 3.1.0
>
> Thanks,
> Vijay
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Self heal doesn't seem to work when file is updated

2011-03-09 Thread Mohit Anchlia
Thanks! Looks like the documentation is not up to date.

Asking my question again since it's very important to know how to find
out whether self-healing worked.

How will I know if self-heal worked or not? What's the best way to
tell? I see there are 2 find commands, and it looks like in some cases
running one may not be sufficient. So how can we make sure that self-heal
worked? Currently I am testing with one file so it's easy to
verify, but with millions of files it may not be possible.
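
A sketch of the usual approach, assuming the stat-based trigger from the linked 3.1 page and the AFR changelog xattrs (the mount point and brick path below are placeholders):

find /mnt/glusterfs -noleaf -print0 | xargs --null stat >/dev/null

To spot-check that a file has actually been healed, its trusted.afr.* extended attributes on the backend bricks should read as all zeros, e.g.:

getfattr -d -m trusted.afr -e hex /glusterfs/export/path/to/file

Non-zero changelog values there indicate operations still pending for the other replica.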

On Wed, Mar 9, 2011 at 10:51 AM, Burnash, James  wrote:
> Here's the out-dated page - I didn't even notice that stat-prefetch wasn't on 
> the new one.
>
> http://www.gluster.com/community/documentation/index.php/Translators
>
> Documentation is a bit of a challenge with GlusterFS ... I know that doing it 
> and keeping it up to date is one of the harder things to do on a project - I 
> did it for a living at one point.
>
> James Burnash, Unix Engineering
>
> -Original Message-
> From: Mohit Anchlia [mailto:mohitanch...@gmail.com]
> Sent: Wednesday, March 09, 2011 1:48 PM
> To: gluster-users@gluster.org; Burnash, James
> Subject: Re: [Gluster-users] Self heal doesn't seem to work when file is 
> updated
>
> Is this the complete list? I don't see stat prefetch translator in there.
>
> Just so this important question doesn't get lost I will ask again :)
>
> How will I know if self heal worked or not. What's the best way to
> tell? I see there are 2 find commands and it looks in some cases
> running one may not be sufficient. So how can we make sure that self
> heal worked. Currently I am testing with one file so it's easy to
> verify but with millions of files it may not be possible.
>
> On Wed, Mar 9, 2011 at 10:43 AM, Burnash, James  wrote:
>> Just found the updated link for translators - you made me look :-)
>>
>> http://www.gluster.com/community/documentation/index.php/Gluster_3.1:_Setting_Volume_Options
>>
>> I would also like to know HOW I would know for sure that self heal worked ...
>>
>> James Burnash, Unix Engineering
>>
>> -Original Message-
>> From: gluster-users-boun...@gluster.org 
>> [mailto:gluster-users-boun...@gluster.org] On Behalf Of Mohit Anchlia
>> Sent: Wednesday, March 09, 2011 1:33 PM
>> To: Pranith Kumar. Karampuri; gluster-users@gluster.org
>> Subject: Re: [Gluster-users] Self heal doesn't seem to work when file is 
>> updated
>>
>> Thanks!
>>
>> How will I know if self heal worked or not. What's the best way to
>> tell? I see there are 2 find commands and it looks in some cases
>> running one may not be sufficient. So how can we make sure that self
>> heal worked. Currently I am testing with one file so it's easy to
>> verify but with millions of files it may not be possible.
>>
>> Also, is there any place where I can read about these translators. I
>> didn;t even know there is something called as stat translator and this
>> could be a problem too.
>>
>> Thanks a lot for responding to my questions.
>> On Tue, Mar 8, 2011 at 7:39 PM, Pranith Kumar. Karampuri
>>  wrote:
>>> hi Mohit,
>>>       ls -laR does not trigger the self-heal when the stat-prefetch 
>>> translator
>>> is loaded. The command to use for triggering self-heal is "find". Please see
>>> our documentation of the same.
>>> http://europe.gluster.org/community/documentation/index.php/Gluster_3.1:_Triggering_Self-Heal_on_Replicate
>>>
>>>
>>> I executed the same example on my machine and it works fine.
>>>
>>> root@pranith-laptop:/mnt/client# cat /tmp/2/a.txt
>>> sds
>>> root@pranith-laptop:/mnt/client# find .
>>> .
>>> ./a.txt
>>> root@pranith-laptop:/mnt/client# cat /tmp/2/a.txt
>>> sds
>>> DD
>>>
>>> Pranith
>>>
>>> - Original Message -
>>> From: "Pranith Kumar. Karampuri" 
>>> To: land...@scalableinformatics.com
>>> Cc: gluster-users@gluster.org
>>> Sent: Wednesday, March 9, 2011 8:42:37 AM
>>> Subject: Re: [Gluster-users] Self heal doesn't seem to work when file   is  
>>>     updated
>>>
>>> hi,
>>>     Glusterfs identifies files using a gfid. Same file on both the replicas 
>>> contain same gfid. What happens when you edit a text file is a new backup 
>>> file(different gfid) is created and the data is written to it and then it 
>>> is renamed to the original file thus changing the gfid on the bricks that 
>>> are up. When the old brick comes back up it finds that the gfid of the same 
>>> file has changed. This case is not handled in the code at the moment, I am 
>>> working on it.
>>>
>>> Pranith.
>>> - Original Message -
>>> From: "Joe Landman" 
>>> To: "Mohit Anchlia" 
>>> Cc: gluster-users@gluster.org
>>> Sent: Tuesday, March 8, 2011 11:43:37 PM
>>> Subject: Re: [Gluster-users] Self heal doesn't seem to work when file is    
>>>     updated
>>>
>>> On 03/08/2011 01:10 PM, Mohit Anchlia wrote:
>>>
>>> ok ...
>>>
> Turn up debugging on the server.  Then try your test.  See if it still
> fails.
>>>
>>> It fails with the gluster client but not the NFS client?
>>>
>>> Have you opened a bug report on the b

Re: [Gluster-users] Strange behaviour glusterd 3.1

2011-03-09 Thread Daniel Müller


did set gluster volume set samba-vol performance.quick-read off.

vim a new file on node1. ssh node2. ls new file -> read error, file not found.

did set gluster volume set samba-vol performance.quick-read on.

I can ls, change content one time. Then the same again. No solution!!!

Should I delete the vol and build a new one?

I am glad that it is no production environment. It would be a mess.

On Wed, 09 Mar 2011 23:10:21 +0530, Vijay Bellur wrote:
On Wednesday 09 March 2011 04:00 PM, Daniel Müller wrote:
/mnt/glusterfs is the mount point of the client where the samba-vol
(backend:/glusterfs/export) is mounted on. So it should work. And it did
work until last week.

Can you please check by disabling the quick-read translator in your setup
via the following command:

#gluster volume set <volname> performance.quick-read off

You may be hitting bug 2027 with 3.1.0

Thanks,
Vijay

 ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Backup Strategy

2011-03-09 Thread Mohit Anchlia
Thanks! Is there a short blog or a list of steps that I can look at?
The documentation looks overwhelming at first glance :)

On Wed, Mar 9, 2011 at 10:53 AM, Sabuj Pattanayek  wrote:
> for the amount of features that you get with backuppc, it's worth the
> fairly painless setup. Btw, we've found that it's better/faster to use
> tar via backuppc (it supports rsync as well) to do the backups rather
> than rsync in backuppc. Rsync can be really slow if you have
> thousands/millions of files.
>
> On Wed, Mar 9, 2011 at 12:50 PM, Mohit Anchlia  wrote:
>> Is there a problem with using just rsync vs backupcc? I need to read
>> about backupcc and how easy it is to setup.
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Files per directory

2011-03-09 Thread Mohit Anchlia
Thanks for sharing. It looks like 3.5 million files are directly
placed under /export/read-write/g0{1,2} on 2 servers. I am assuming
performance stays the same as the number of files increases.

A couple more questions:

Did you also consider using RAID 0 over these 2 disks on the same host?
Do you know what happens when a new node is added? Since gluster uses
elastic hashing, does it still work, or does the new node throw off the
hashing algorithm? I am wondering how gluster is able to retrieve old
files after adding a new node, because now it's hashing across the new
nodes.
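
For reference, the usual expansion sequence looks roughly like this (hostname, brick path, and volume name are placeholders; check the exact rebalance sub-commands against the 3.1 docs):

gluster peer probe newserver
gluster volume add-brick VOLNAME newserver:/export/read-write/g01
gluster volume rebalance VOLNAME start
gluster volume rebalance VOLNAME status

Existing files stay reachable even before the rebalance completes: when a lookup misses on the subvolume the new layout points to, DHT falls back to a wider search and leaves link files behind, so the rebalance mainly moves data to where the new hash layout expects it.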

Thanks again and keep us posted!

I will be starting similar testing in next 2 weeks also.


On Wed, Mar 9, 2011 at 10:53 AM, Burnash, James  wrote:
> I'm running ext4 on CentOS 5.5. Each storage server has two bricks, each in 
> their own directory, mounted as straight partitions. No LVM used in this 
> config.
>
> I did find that I had to keep the ext4 format and mount options simple to 
> avoid crashes that I encountered with more "tuned" configs.
>
> # create ext4 read-write storage
>
> yum -y install e4fsprogs e4fsprogs-libs e4fsprogs-devel dmapi
>
> create a single partition on each disk, maximum size, primary type
>
> time mke4fs -F -L g01 -v -j /dev/cciss/c1d0p1
> time mke4fs -F -L g02 -v -j /dev/cciss/c2d0p1
>
> mkdir -p /export/read-write/g0{1,2}
> mount -t ext4 /dev/cciss/c1d0p1 /export/read-write/g01
> mount
> mount -t ext4 /dev/cciss/c2d0p1 /export/read-write/g02
>
> fgrep read-write /etc/fstab
> /dev/cciss/c1d0p1       /export/read-write/g01  ext4    defaults        0 0
> /dev/cciss/c2d0p1       /export/read-write/g02  ext4    defaults        0 0
>
> James Burnash, Unix Engineering
>
> -Original Message-
> From: Mohit Anchlia [mailto:mohitanch...@gmail.com]
> Sent: Wednesday, March 09, 2011 1:15 PM
> To: Burnash, James; gluster-users@gluster.org
> Subject: Re: [Gluster-users] Files per directory
>
> Thanks! Is this on ext3 or ext4? Are all these files in mount
> directory or they are in sub directories. On glusterfs does it matter
> if all the files are placed in same directory? Generally from what
> I've seen in the past is that multiple no. of subdirs are recommended
> to improve performance.
>
> On Wed, Mar 9, 2011 at 9:59 AM, Burnash, James  wrote:
>> I'm going through a rebalance operation now on my "small" Glusterfs storage 
>> pool - 2 servers, 4 bricks, 30TB of total storage, 175 native Glusterfs 
>> clients.
>>
>> Current files checked is at 3.5 million - a lot of those are in the 1-1.5GB 
>> size range.
>>
>> Hopefully that is of some help - more stats to follow as I get a chance to 
>> document them ...
>>
>> James Burnash, Unix Engineering
>>
>>
>> -Original Message-
>> From: gluster-users-boun...@gluster.org 
>> [mailto:gluster-users-boun...@gluster.org] On Behalf Of Mohit Anchlia
>> Sent: Wednesday, March 09, 2011 12:47 PM
>> To: gluster-users@gluster.org
>> Subject: Re: [Gluster-users] Files per directory
>>
>> It will be good if I can get some suggestion from people who already
>> have millions of files on glusterFS.
>>
>> On Mon, Mar 7, 2011 at 4:11 PM, Mohit Anchlia  wrote:
>>> Is there any recommendation about how many files should be stored in
>>> one directory in glusterFS? In my experience I've seen spreading files
>>> accross many directories helps, but I am not sure if it's same with
>>> glusterFS.
>>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>
>>
>>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Files per directory

2011-03-09 Thread Burnash, James
I'm running ext4 on CentOS 5.5. Each storage server has two bricks, each in 
their own directory, mounted as straight partitions. No LVM used in this config.

I did find that I had to keep the ext4 format and mount options simple to avoid 
crashes that I encountered with more "tuned" configs.

# create ext4 read-write storage

yum -y install e4fsprogs e4fsprogs-libs e4fsprogs-devel dmapi

create a single partition on each disk, maximum size, primary type

time mke4fs -F -L g01 -v -j /dev/cciss/c1d0p1
time mke4fs -F -L g02 -v -j /dev/cciss/c2d0p1

mkdir -p /export/read-write/g0{1,2}
mount -t ext4 /dev/cciss/c1d0p1 /export/read-write/g01
mount
mount -t ext4 /dev/cciss/c2d0p1 /export/read-write/g02

fgrep read-write /etc/fstab
/dev/cciss/c1d0p1   /export/read-write/g01  ext4    defaults    0 0
/dev/cciss/c2d0p1   /export/read-write/g02  ext4    defaults    0 0
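
For context, bricks like these would typically be combined into a distributed-replicated volume with something along these lines (the volume name and server hostnames below are hypothetical, not from this mail):

gluster volume create read-write replica 2 transport tcp \
  server1:/export/read-write/g01 server2:/export/read-write/g01 \
  server1:/export/read-write/g02 server2:/export/read-write/g02
gluster volume start read-write

With replica 2, consecutive bricks on the command line form the replica pairs, so each g01/g02 directory is mirrored across the two servers.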

James Burnash, Unix Engineering

-Original Message-
From: Mohit Anchlia [mailto:mohitanch...@gmail.com] 
Sent: Wednesday, March 09, 2011 1:15 PM
To: Burnash, James; gluster-users@gluster.org
Subject: Re: [Gluster-users] Files per directory

Thanks! Is this on ext3 or ext4? Are all these files in the mount
directory, or are they in subdirectories? On glusterfs, does it matter
if all the files are placed in the same directory? Generally, from what
I've seen in the past, multiple subdirectories are recommended
to improve performance.

On Wed, Mar 9, 2011 at 9:59 AM, Burnash, James  wrote:
> I'm going through a rebalance operation now on my "small" Glusterfs storage 
> pool - 2 servers, 4 bricks, 30TB of total storage, 175 native Glusterfs 
> clients.
>
> Current files checked is at 3.5 million - a lot of those are in the 1-1.5GB 
> size range.
>
> Hopefully that is of some help - more stats to follow as I get a chance to 
> document them ...
>
> James Burnash, Unix Engineering
>
>
> -Original Message-
> From: gluster-users-boun...@gluster.org 
> [mailto:gluster-users-boun...@gluster.org] On Behalf Of Mohit Anchlia
> Sent: Wednesday, March 09, 2011 12:47 PM
> To: gluster-users@gluster.org
> Subject: Re: [Gluster-users] Files per directory
>
> It will be good if I can get some suggestion from people who already
> have millions of files on glusterFS.
>
> On Mon, Mar 7, 2011 at 4:11 PM, Mohit Anchlia  wrote:
>> Is there any recommendation about how many files should be stored in
>> one directory in glusterFS? In my experience I've seen spreading files
>> accross many directories helps, but I am not sure if it's same with
>> glusterFS.
>>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Backup Strategy

2011-03-09 Thread Sabuj Pattanayek
For the amount of features that you get with backuppc, it's worth the
fairly painless setup. Btw, we've found that it's better/faster to use
tar via backuppc (it supports rsync as well) to do the backups rather
than rsync in backuppc. Rsync can be really slow if you have
thousands/millions of files.

On Wed, Mar 9, 2011 at 12:50 PM, Mohit Anchlia  wrote:
> Is there a problem with using just rsync vs backupcc? I need to read
> about backupcc and how easy it is to setup.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Self heal doesn't seem to work when file is updated

2011-03-09 Thread Burnash, James
Here's the out-dated page - I didn't even notice that stat-prefetch wasn't on 
the new one.

http://www.gluster.com/community/documentation/index.php/Translators

Documentation is a bit of a challenge with GlusterFS ... I know that doing it 
and keeping it up to date is one of the harder things to do on a project - I 
did it for a living at one point.

James Burnash, Unix Engineering

-Original Message-
From: Mohit Anchlia [mailto:mohitanch...@gmail.com] 
Sent: Wednesday, March 09, 2011 1:48 PM
To: gluster-users@gluster.org; Burnash, James
Subject: Re: [Gluster-users] Self heal doesn't seem to work when file is updated

Is this the complete list? I don't see stat prefetch translator in there.

Just so this important question doesn't get lost I will ask again :)

How will I know if self-heal worked or not? What's the best way to
tell? I see there are 2 find commands, and it looks like in some cases
running one may not be sufficient. So how can we make sure that self-heal
worked? Currently I am testing with one file so it's easy to
verify, but with millions of files it may not be possible.

On Wed, Mar 9, 2011 at 10:43 AM, Burnash, James  wrote:
> Just found the updated link for translators - you made me look :-)
>
> http://www.gluster.com/community/documentation/index.php/Gluster_3.1:_Setting_Volume_Options
>
> I would also like to know HOW I would know for sure that self heal worked ...
>
> James Burnash, Unix Engineering
>
> -Original Message-
> From: gluster-users-boun...@gluster.org 
> [mailto:gluster-users-boun...@gluster.org] On Behalf Of Mohit Anchlia
> Sent: Wednesday, March 09, 2011 1:33 PM
> To: Pranith Kumar. Karampuri; gluster-users@gluster.org
> Subject: Re: [Gluster-users] Self heal doesn't seem to work when file is 
> updated
>
> Thanks!
>
> How will I know if self heal worked or not. What's the best way to
> tell? I see there are 2 find commands and it looks in some cases
> running one may not be sufficient. So how can we make sure that self
> heal worked. Currently I am testing with one file so it's easy to
> verify but with millions of files it may not be possible.
>
> Also, is there any place where I can read about these translators. I
> didn;t even know there is something called as stat translator and this
> could be a problem too.
>
> Thanks a lot for responding to my questions.
> On Tue, Mar 8, 2011 at 7:39 PM, Pranith Kumar. Karampuri
>  wrote:
>> hi Mohit,
>>       ls -laR does not trigger the self-heal when the stat-prefetch 
>> translator
>> is loaded. The command to use for triggering self-heal is "find". Please see
>> our documentation of the same.
>> http://europe.gluster.org/community/documentation/index.php/Gluster_3.1:_Triggering_Self-Heal_on_Replicate
>>
>>
>> I executed the same example on my machine and it works fine.
>>
>> root@pranith-laptop:/mnt/client# cat /tmp/2/a.txt
>> sds
>> root@pranith-laptop:/mnt/client# find .
>> .
>> ./a.txt
>> root@pranith-laptop:/mnt/client# cat /tmp/2/a.txt
>> sds
>> DD
>>
>> Pranith
>>
>> - Original Message -
>> From: "Pranith Kumar. Karampuri" 
>> To: land...@scalableinformatics.com
>> Cc: gluster-users@gluster.org
>> Sent: Wednesday, March 9, 2011 8:42:37 AM
>> Subject: Re: [Gluster-users] Self heal doesn't seem to work when file   is   
>>    updated
>>
>> hi,
>>     Glusterfs identifies files using a gfid. Same file on both the replicas 
>> contain same gfid. What happens when you edit a text file is a new backup 
>> file(different gfid) is created and the data is written to it and then it is 
>> renamed to the original file thus changing the gfid on the bricks that are 
>> up. When the old brick comes back up it finds that the gfid of the same file 
>> has changed. This case is not handled in the code at the moment, I am 
>> working on it.
>>
>> Pranith.
>> - Original Message -
>> From: "Joe Landman" 
>> To: "Mohit Anchlia" 
>> Cc: gluster-users@gluster.org
>> Sent: Tuesday, March 8, 2011 11:43:37 PM
>> Subject: Re: [Gluster-users] Self heal doesn't seem to work when file is     
>>    updated
>>
>> On 03/08/2011 01:10 PM, Mohit Anchlia wrote:
>>
>> ok ...
>>
 Turn up debugging on the server.  Then try your test.  See if it still
 fails.
>>
>> It fails with the gluster client but not the NFS client?
>>
>> Have you opened a bug report on the bugzilla?
>>
>> --
>> Joseph Landman, Ph.D
>> Founder and CEO
>> Scalable Informatics Inc.
>> email: land...@scalableinformatics.com
>> web  : http://scalableinformatics.com
>>        http://scalableinformatics.com/sicluster
>> phone: +1 734 786 8423 x121
>> fax  : +1 866 888 3112
>> cell : +1 734 612 4615
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>> _

Re: [Gluster-users] Backup Strategy

2011-03-09 Thread Mohit Anchlia
Is there a problem with using just rsync vs backuppc? I need to read
about backuppc and how easy it is to set up.

On Wed, Mar 9, 2011 at 10:45 AM, Burnash, James  wrote:
> I think the answer to that problem is to back up from a machine that has the 
> native Gluster client mount the data that needs to be backed up - that way 
> you don't have to worry about replicas and attributes.
>
> Of course this seems to indicate that you would need to restore the same way 
> ... but I'm not positive about that.
>
> James Burnash, Unix Engineering
>
> -Original Message-
> From: gluster-users-boun...@gluster.org 
> [mailto:gluster-users-boun...@gluster.org] On Behalf Of Mohit Anchlia
> Sent: Wednesday, March 09, 2011 1:39 PM
> To: Sabuj Pattanayek; gluster-users@gluster.org
> Subject: Re: [Gluster-users] Backup Strategy
>
> One confusion I have is when maintaining multiple bricks on multiple
> physical volume and a replica also how do we do effective backups?
>
> So if I have 4 node glusterfs cluster and replica of 2 then how do I
> effectively do the backups such that I don't backup multiple copies or
> overwrite files when doing backups.
>
> On Wed, Mar 9, 2011 at 10:34 AM, Sabuj Pattanayek  wrote:
>> If you want to do backup to disk then I highly recommend backuppc .
>>
>> On Wed, Mar 9, 2011 at 12:16 PM, Mohit Anchlia  
>> wrote:
>>> I would like to hear from experienced users of glusterFS about how
>>> they are currently doing backups. I am looking at solutions but not
>>> sure which one to use and how to effectively use them. First thing
>>> comes to my mind is using rsync.
>>>
>>> Please advice.
>>> ___
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Self heal doesn't seem to work when file is updated

2011-03-09 Thread Mohit Anchlia
Is this the complete list? I don't see stat prefetch translator in there.

Just so this important question doesn't get lost I will ask again :)

How will I know if self-heal worked or not? What's the best way to
tell? I see there are 2 find commands, and it looks like in some cases
running one may not be sufficient. So how can we make sure that self-heal
worked? Currently I am testing with one file so it's easy to
verify, but with millions of files it may not be possible.

On Wed, Mar 9, 2011 at 10:43 AM, Burnash, James  wrote:
> Just found the updated link for translators - you made me look :-)
>
> http://www.gluster.com/community/documentation/index.php/Gluster_3.1:_Setting_Volume_Options
>
> I would also like to know HOW I would know for sure that self heal worked ...
>
> James Burnash, Unix Engineering
>
> -Original Message-
> From: gluster-users-boun...@gluster.org 
> [mailto:gluster-users-boun...@gluster.org] On Behalf Of Mohit Anchlia
> Sent: Wednesday, March 09, 2011 1:33 PM
> To: Pranith Kumar. Karampuri; gluster-users@gluster.org
> Subject: Re: [Gluster-users] Self heal doesn't seem to work when file is 
> updated
>
> Thanks!
>
> How will I know if self heal worked or not. What's the best way to
> tell? I see there are 2 find commands and it looks in some cases
> running one may not be sufficient. So how can we make sure that self
> heal worked. Currently I am testing with one file so it's easy to
> verify but with millions of files it may not be possible.
>
> Also, is there any place where I can read about these translators. I
> didn;t even know there is something called as stat translator and this
> could be a problem too.
>
> Thanks a lot for responding to my questions.
> On Tue, Mar 8, 2011 at 7:39 PM, Pranith Kumar. Karampuri
>  wrote:
>> hi Mohit,
>>       ls -laR does not trigger the self-heal when the stat-prefetch 
>> translator
>> is loaded. The command to use for triggering self-heal is "find". Please see
>> our documentation of the same.
>> http://europe.gluster.org/community/documentation/index.php/Gluster_3.1:_Triggering_Self-Heal_on_Replicate
>>
>>
>> I executed the same example on my machine and it works fine.
>>
>> root@pranith-laptop:/mnt/client# cat /tmp/2/a.txt
>> sds
>> root@pranith-laptop:/mnt/client# find .
>> .
>> ./a.txt
>> root@pranith-laptop:/mnt/client# cat /tmp/2/a.txt
>> sds
>> DD
>>
>> Pranith
>>
>> - Original Message -
>> From: "Pranith Kumar. Karampuri" 
>> To: land...@scalableinformatics.com
>> Cc: gluster-users@gluster.org
>> Sent: Wednesday, March 9, 2011 8:42:37 AM
>> Subject: Re: [Gluster-users] Self heal doesn't seem to work when file   is   
>>    updated
>>
>> hi,
>>     Glusterfs identifies files using a gfid. Same file on both the replicas 
>> contain same gfid. What happens when you edit a text file is a new backup 
>> file(different gfid) is created and the data is written to it and then it is 
>> renamed to the original file thus changing the gfid on the bricks that are 
>> up. When the old brick comes back up it finds that the gfid of the same file 
>> has changed. This case is not handled in the code at the moment, I am 
>> working on it.
>>
>> Pranith.
>> - Original Message -
>> From: "Joe Landman" 
>> To: "Mohit Anchlia" 
>> Cc: gluster-users@gluster.org
>> Sent: Tuesday, March 8, 2011 11:43:37 PM
>> Subject: Re: [Gluster-users] Self heal doesn't seem to work when file is     
>>    updated
>>
>> On 03/08/2011 01:10 PM, Mohit Anchlia wrote:
>>
>> ok ...
>>
 Turn up debugging on the server.  Then try your test.  See if it still
 fails.
>>
>> It fails with the gluster client but not the NFS client?
>>
>> Have you opened a bug report on the bugzilla?
>>
>> --
>> Joseph Landman, Ph.D
>> Founder and CEO
>> Scalable Informatics Inc.
>> email: land...@scalableinformatics.com
>> web  : http://scalableinformatics.com
>>        http://scalableinformatics.com/sicluster
>> phone: +1 734 786 8423 x121
>> fax  : +1 866 888 3112
>> cell : +1 734 612 4615
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
>

Re: [Gluster-users] Backup Strategy

2011-03-09 Thread Burnash, James
I think the answer to that problem is to back up from a machine that mounts the
data to be backed up via the native Gluster client - that way you
don't have to worry about replicas and attributes.

Of course this seems to indicate that you would need to restore the same way 
... but I'm not positive about that.
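
A sketch of that idea (server name, volume name, and target path are placeholders): backing up through a client mount sees each file exactly once, whatever the replica count, because the replication happens below the mount.

mount -t glusterfs server1:/VOLNAME /mnt/gluster-backup-view
rsync -a /mnt/gluster-backup-view/ /backup/glusterfs/

Restoring by copying back into a client mount should likewise let glusterfs recreate the replicas and its internal extended attributes itself.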

James Burnash, Unix Engineering

-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Mohit Anchlia
Sent: Wednesday, March 09, 2011 1:39 PM
To: Sabuj Pattanayek; gluster-users@gluster.org
Subject: Re: [Gluster-users] Backup Strategy

One confusion I have is when maintaining multiple bricks on multiple
physical volume and a replica also how do we do effective backups?

So if I have 4 node glusterfs cluster and replica of 2 then how do I
effectively do the backups such that I don't backup multiple copies or
overwrite files when doing backups.

On Wed, Mar 9, 2011 at 10:34 AM, Sabuj Pattanayek  wrote:
> If you want to do backup to disk then I highly recommend backuppc .
>
> On Wed, Mar 9, 2011 at 12:16 PM, Mohit Anchlia  wrote:
>> I would like to hear from experienced users of glusterFS about how
>> they are currently doing backups. I am looking at solutions but not
>> sure which one to use and how to effectively use them. First thing
>> comes to my mind is using rsync.
>>
>> Please advice.
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Self heal doesn't seem to work when file is updated

2011-03-09 Thread Burnash, James
Just found the updated link for translators - you made me look :-)

http://www.gluster.com/community/documentation/index.php/Gluster_3.1:_Setting_Volume_Options

I would also like to know HOW I would know for sure that self heal worked ...

James Burnash, Unix Engineering

-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Mohit Anchlia
Sent: Wednesday, March 09, 2011 1:33 PM
To: Pranith Kumar. Karampuri; gluster-users@gluster.org
Subject: Re: [Gluster-users] Self heal doesn't seem to work when file is updated

Thanks!

How will I know if self-heal worked or not? What's the best way to
tell? I see there are 2 find commands, and it looks like in some cases
running one may not be sufficient. So how can we make sure that self-heal
worked? Currently I am testing with one file so it's easy to
verify, but with millions of files it may not be possible.

Also, is there any place where I can read about these translators? I
didn't even know there is something called a stat translator, and this
could be a problem too.

Thanks a lot for responding to my questions.
On Tue, Mar 8, 2011 at 7:39 PM, Pranith Kumar. Karampuri
 wrote:
> hi Mohit,
>       ls -laR does not trigger the self-heal when the stat-prefetch translator
> is loaded. The command to use for triggering self-heal is "find". Please see
> our documentation of the same.
> http://europe.gluster.org/community/documentation/index.php/Gluster_3.1:_Triggering_Self-Heal_on_Replicate
>
>
> I executed the same example on my machine and it works fine.
>
> root@pranith-laptop:/mnt/client# cat /tmp/2/a.txt
> sds
> root@pranith-laptop:/mnt/client# find .
> .
> ./a.txt
> root@pranith-laptop:/mnt/client# cat /tmp/2/a.txt
> sds
> DD
>
> Pranith
>
> - Original Message -
> From: "Pranith Kumar. Karampuri" 
> To: land...@scalableinformatics.com
> Cc: gluster-users@gluster.org
> Sent: Wednesday, March 9, 2011 8:42:37 AM
> Subject: Re: [Gluster-users] Self heal doesn't seem to work when file   is    
>   updated
>
> hi,
>     Glusterfs identifies files using a gfid. Same file on both the replicas 
> contain same gfid. What happens when you edit a text file is a new backup 
> file(different gfid) is created and the data is written to it and then it is 
> renamed to the original file thus changing the gfid on the bricks that are 
> up. When the old brick comes back up it finds that the gfid of the same file 
> has changed. This case is not handled in the code at the moment, I am working 
> on it.
>
> Pranith.
> - Original Message -
> From: "Joe Landman" 
> To: "Mohit Anchlia" 
> Cc: gluster-users@gluster.org
> Sent: Tuesday, March 8, 2011 11:43:37 PM
> Subject: Re: [Gluster-users] Self heal doesn't seem to work when file is      
>   updated
>
> On 03/08/2011 01:10 PM, Mohit Anchlia wrote:
>
> ok ...
>
>>> Turn up debugging on the server.  Then try your test.  See if it still
>>> fails.
>
> It fails with the gluster client but not the NFS client?
>
> Have you opened a bug report on the bugzilla?
>
> --
> Joseph Landman, Ph.D
> Founder and CEO
> Scalable Informatics Inc.
> email: land...@scalableinformatics.com
> web  : http://scalableinformatics.com
>        http://scalableinformatics.com/sicluster
> phone: +1 734 786 8423 x121
> fax  : +1 866 888 3112
> cell : +1 734 612 4615
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Glus

Re: [Gluster-users] Backup Strategy

2011-03-09 Thread Burnash, James
This probably doesn't help, but I only use GlusterFS to store copies of 
data already sourced and archived somewhere else - except for the read-write 
activity, which is really all considered scratch space - even though I'd have 
some pretty peeved users if they lost it all.

James Burnash, Unix Engineering
T. 201-239-2248 
jburn...@knight.com | www.knight.com

545 Washington Ave. | Jersey City, NJ


-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Mohit Anchlia
Sent: Wednesday, March 09, 2011 1:16 PM
To: gluster-users@gluster.org
Subject: [Gluster-users] Backup Strategy

I would like to hear from experienced users of glusterFS about how
they are currently doing backups. I am looking at solutions but not
sure which one to use and how to effectively use them. First thing
comes to my mind is using rsync.

Please advise.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Backup Strategy

2011-03-09 Thread Mohit Anchlia
One point of confusion I have: when maintaining multiple bricks on multiple
physical volumes, plus a replica, how do we do effective backups?

So if I have a 4-node glusterfs cluster with a replica count of 2, how do I
effectively do the backups such that I don't back up multiple copies or
overwrite files when doing backups?
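One hedged sketch of an answer (placeholder host, volume, and path names, not taken from this thread): back up through a glusterfs client mount rather than from the individual bricks. The client sees each file exactly once regardless of the replica count, so the backup never contains both copies of a replicated file.

# placeholder names: server1, myvolume, and the destination directory
mount -t glusterfs server1:/myvolume /mnt/gluster-backup-view
mkdir -p /backups/glusterfs/$(date +%F)
rsync -a /mnt/gluster-backup-view/ /backups/glusterfs/$(date +%F)/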

On Wed, Mar 9, 2011 at 10:34 AM, Sabuj Pattanayek  wrote:
> If you want to do backup to disk then I highly recommend backuppc .
>
> On Wed, Mar 9, 2011 at 12:16 PM, Mohit Anchlia  wrote:
>> I would like to hear from experienced users of glusterFS about how
>> they are currently doing backups. I am looking at solutions but not
>> sure which one to use and how to effectively use them. First thing
>> comes to my mind is using rsync.
>>
>> Please advise.
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Backup Strategy

2011-03-09 Thread Sabuj Pattanayek
If you want to do backup to disk then I highly recommend backuppc .

On Wed, Mar 9, 2011 at 12:16 PM, Mohit Anchlia  wrote:
> I would like to hear from experienced users of glusterFS about how
> they are currently doing backups. I am looking at solutions but not
> sure which one to use and how to effectively use them. First thing
> comes to my mind is using rsync.
>
> Please advise.
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Backup Strategy

2011-03-09 Thread Mohit Anchlia
I would like to hear from experienced users of glusterFS about how
they are currently doing backups. I am looking at solutions but not
sure which one to use or how to effectively use them. The first thing
that comes to my mind is using rsync.

Please advise.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Files per directory

2011-03-09 Thread Mohit Anchlia
Thanks! Is this on ext3 or ext4? Are all these files in the mount
directory, or are they in subdirectories? On glusterfs, does it matter
if all the files are placed in the same directory? Generally, from what
I've seen in the past, a larger number of subdirectories is recommended
to improve performance.
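Purely as an illustration of that last point (a generic layout, not something glusterFS itself requires): one common way to bound per-directory file counts is to bucket files into subdirectories derived from a hash of the file name.

# hypothetical two-level layout: bucket = first two hex chars of md5(name)
f="example-file-12345.dat"
bucket=$(printf '%s' "$f" | md5sum | cut -c1-2)
mkdir -p "/mnt/glusterfs/data/$bucket"
mv "$f" "/mnt/glusterfs/data/$bucket/$f"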

On Wed, Mar 9, 2011 at 9:59 AM, Burnash, James  wrote:
> I'm going through a rebalance operation now on my "small" Glusterfs storage 
> pool - 2 servers, 4 bricks, 30TB of total storage, 175 native Glusterfs 
> clients.
>
> Current files checked is at 3.5 million - a lot of those are in the 1-1.5GB 
> size range.
>
> Hopefully that is of some help - more stats to follow as I get a chance to 
> document them ...
>
> James Burnash, Unix Engineering
> T. 201-239-2248
> jburn...@knight.com | www.knight.com
>
> 545 Washington Ave. | Jersey City, NJ
>
>
> -Original Message-
> From: gluster-users-boun...@gluster.org 
> [mailto:gluster-users-boun...@gluster.org] On Behalf Of Mohit Anchlia
> Sent: Wednesday, March 09, 2011 12:47 PM
> To: gluster-users@gluster.org
> Subject: Re: [Gluster-users] Files per directory
>
> It will be good if I can get some suggestion from people who already
> have millions of files on glusterFS.
>
> On Mon, Mar 7, 2011 at 4:11 PM, Mohit Anchlia  wrote:
>> Is there any recommendation about how many files should be stored in
>> one directory in glusterFS? In my experience I've seen spreading files
>> across many directories helps, but I am not sure if it's the same with
>> glusterFS.
>>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Files per directory

2011-03-09 Thread Burnash, James
I'm going through a rebalance operation now on my "small" Glusterfs storage 
pool - 2 servers, 4 bricks, 30TB of total storage, 175 native Glusterfs clients.

Current files checked is at 3.5 million - a lot of those are in the 1-1.5GB 
size range.

Hopefully that is of some help - more stats to follow as I get a chance to 
document them ...

James Burnash, Unix Engineering
T. 201-239-2248 
jburn...@knight.com | www.knight.com

545 Washington Ave. | Jersey City, NJ


-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Mohit Anchlia
Sent: Wednesday, March 09, 2011 12:47 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] Files per directory

It will be good if I can get some suggestion from people who already
have millions of files on glusterFS.

On Mon, Mar 7, 2011 at 4:11 PM, Mohit Anchlia  wrote:
> Is there any recommendation about how many files should be stored in
> one directory in glusterFS? In my experience I've seen spreading files
> across many directories helps, but I am not sure if it's the same with
> glusterFS.
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Files per directory

2011-03-09 Thread Mohit Anchlia
It would be good to get some suggestions from people who already
have millions of files on glusterFS.

On Mon, Mar 7, 2011 at 4:11 PM, Mohit Anchlia  wrote:
> Is there any recommendation about how many files should be stored in
> one directory in glusterFS? In my experience I've seen spreading files
> across many directories helps, but I am not sure if it's the same with
> glusterFS.
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Strange behaviour glusterd 3.1

2011-03-09 Thread Vijay Bellur

On Wednesday 09 March 2011 04:00 PM, Daniel Müller wrote:

/mnt/glusterfs is the mount point of the client where the samba-vol 
(backend:/glusterfs/export) is mounted.
So it should work. And it did work until last week.

Can you please check by disabling quick-read translator in your setup 
via the following command:


#gluster volume set <volname> performance.quick-read off

You may be hitting bug 2027 with 3.1.0

Thanks,
Vijay
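A small usage sketch (the samba-vol name from Daniel's earlier mails is used purely as an example): after setting the option, gluster volume info should list it under "Options Reconfigured".

gluster volume set samba-vol performance.quick-read off
gluster volume info samba-vol
# expect a line "performance.quick-read: off" under "Options Reconfigured"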


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Seeking Feedback on Gluster Development Priorities/Roadmap

2011-03-09 Thread Mohit Anchlia
Hi Ben,

Thanks for asking!

Documentation for sure lacks some important details. I have several
questions, like:

1) How to effectively replicate to the other data center asynchronously.
2) Support for async replication in addition to sync replication. This
might be helpful, and self heal may not be required in that case.
3) More info on data center replication setup.
4) It would be good to get some performance information/experience
from those people who are already using it and post it on the gluster
site.
5) Some info on how to make gluster run as a web service. For example, I
currently don't know how to effectively create a web service to serve
browser clients.
6) Best practices info.

Mostly it's around getting more information out that we can read and understand!

Thanks!

> Following that, our internal priorities are:
>>
>> -Continuous Data Replication (over WAN)
>> -Improved User Interface
>> -CIFS/Active Directory Support
>> -Object storage  (unified file and object)
>> -Geo-replication to Amazon S3 (unify public and private cloud)
>> -Continuous Data Protection
>> -REST management API's
>> -Enhanced support for ISCSi SANs
>>
>> Are these the right priorities? How would you prioritize?
>>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Strange behaviour glusterd 3.1

2011-03-09 Thread Mohit Anchlia
BTW: I had a similar problem where self-healing didn't work for
files that were edited. It works for newly created files. I filed a
bug yesterday.

On Wed, Mar 9, 2011 at 2:30 AM, Daniel Müller  wrote:
> /mnt/glusterfs is the mount point of the client where the samba-vol 
> (backend:/glusterfs/export) is mounted.
> So it should work. And it did work until last week.
>
> Greetings
> Daniel
>
>
> ---
> EDV Daniel Müller
>
> Leitung EDV
> Tropenklinik Paul-Lechler-Krankenhaus
> Paul-Lechler-Str. 24
> 72076 Tübingen
>
> Tel.: 07071/206-463, Fax: 07071/206-499
> eMail: muel...@tropenklinik.de
> Internet: www.tropenklinik.de
> ---
>
> -Ursprüngliche Nachricht-
> Von: Pranith Kumar. Karampuri [mailto:prani...@gluster.com]
> Gesendet: Mittwoch, 9. März 2011 10:05
> An: muel...@tropenklinik.de
> Cc: gluster-users@gluster.org
> Betreff: Re: AW: [Gluster-users] Strange behaviour glusterd 3.1
>
> Directly editing the files on backend is not supported. Most of the editors 
> delete the original file and create a new-one when you edit a file.
> So the extended attributes that are stored on the old-file are gone.
>
> Pranith
> - Original Message -
> From: "Daniel Müller" 
> To: "Pranith Kumar. Karampuri" 
> Cc: gluster-users@gluster.org
> Sent: Wednesday, March 9, 2011 2:24:14 PM
> Subject: AW: [Gluster-users] Strange behaviour glusterd 3.1
>
> Both server running. I edit the file on one node1-server (in /mnt/glusterfs/) 
> changed the content, saved the file. Ssh to the node2-server there still old 
> content. Did " find /mnt/glusterfs -print0 | xargs --null stat >/dev/null", 
> still the same old content.
> Need to restart both servers the the new content is written to node2-server.
> And this equal behavior on both servers.
> A few days ago all was ok.
>
> ---
> EDV Daniel Müller
>
> Leitung EDV
> Tropenklinik Paul-Lechler-Krankenhaus
> Paul-Lechler-Str. 24
> 72076 Tübingen
>
> Tel.: 07071/206-463, Fax: 07071/206-499
> eMail: muel...@tropenklinik.de
> Internet: www.tropenklinik.de
> ---
>
> -Ursprüngliche Nachricht-
> Von: Pranith Kumar. Karampuri [mailto:prani...@gluster.com]
> Gesendet: Mittwoch, 9. März 2011 09:31
> An: muel...@tropenklinik.de
> Cc: gluster-users@gluster.org
> Betreff: Re: [Gluster-users] Strange behaviour glusterd 3.1
>
> "But after vim one.txt and changing the content of that file on one node.", 
> what exactly do you mean by this?. Did you edit the file on backend?. (or) 
> Brought one server down and after editing brought the second server back up?.
>
> Pranith
> - Original Message -
> From: "Daniel Müller" 
> To: gluster-users@gluster.org
> Sent: Wednesday, March 9, 2011 1:58:39 PM
> Subject: [Gluster-users] Strange behaviour glusterd 3.1
>
> Dear all,
>
> after some weeks of testing gluster the replication of my two nodes stopped
> working the way it used to be.
>
> My version:
> glusterfs --version
> glusterfs 3.1.0 built on Oct 13 2010 10:06:10
> Repository revision: v3.1.0
> Copyright (c) 2006-2010 Gluster Inc. 
> GlusterFS comes with ABSOLUTELY NO WARRANTY.
> You may redistribute copies of GlusterFS under the terms of the GNU Affero
> General Public License
> My Nodes:
>
> gluster peer status
> Number of Peers: 1
>
> Hostname: 192.168.132.56
> Uuid: 5ecf561e-f766-48b0-836f-17624586a39a
> State: Peer in Cluster (Connected)
>
> My replicating VOLS:
>
> gluster peer status
> Number of Peers: 1
>
> Hostname: 192.168.132.56
> Uuid: 5ecf561e-f766-48b0-836f-17624586a39a
> State: Peer in Cluster (Connected)
> [root@ctdb2 test]# gluster volume info
>
> Volume Name: samba-vol
> Type: Replicate
> Status: Started
> Number of Bricks: 2
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.132.56:/glusterfs/export
> Brick2: 192.168.132.57:/glusterfs/export
> Options Reconfigured:
> network.ping-timeout: 5
>
> Now my mount point for the client is /mnt/glusterfs in fstab:
> 192.168.132.57:/samba-vol  /mnt/glusterfs  glusterfs  defaults  0  0
>
> Commandline mount succeeds with:
>
> glusterfs#192.168.132.57:/samba-vol on /mnt/glusterfs type fuse
> (rw,allow_other,default_permissions,max_read=131072)
>
> Now glusterd worked for many weeks. But now when I create a file (ex:
> one.txt) in /mnt/glusterfs/ the file is replicated to the other node well.
> But after vim one.txt and changing the content of that file on one node. The
> changes are not replicated to the other node
> Until I restart both nodes.
> I did made a " find /mnt/glusterfs -print0 | xargs --null stat >/dev/null"
> with no success.
> Any Idea???
>
> Greetings Daniel
>
> ---
> EDV Daniel Müller
>
> Leitung EDV
> Tropenklinik Paul-Lechler-Krankenhaus
> Paul-Lechler-Str. 24
> 72076 Tübingen
>
> Tel.: 07071/206-463, Fax: 07071/206-499
> eMail: muel...@tropenklinik.de
> Internet: www.tropenklinik.de

Re: [Gluster-users] GlusterFS 3.1.1: "Permission denied" and "No such file or directory"

2011-03-09 Thread Daniel Zander

Dear all,

for a long time now, we've had the same problem:
A user wants to delete a directory and receives an error message 
("Permission denied"), but the directory is removed anyway. This problem 
has not been solved yet.


Today, a new facet of the same problem arose:
-
[...]
I often can't copy files to certain directories in /storage/cluster; the 
error is "permission denied". However, creating a new directory and 
using this as the destination for copying works.


Currently, the problem is that I can't rename a certain directory:
[ekpcms4] /storage/cluster/ott $ mv NovemberGridding_presel-antiele 
NovemberGridding_presel-antiele.old
mv: cannot move `NovemberGridding_presel-antiele' to 
`NovemberGridding_presel-antiele.old': Permission denied


However, after this command, the original directory is not listed in an 
"ls" anymore and the new directory (with ".old") appears. So despite the 
error message, it seems to have worked.


But I can't create a directory named after the old name, 
"NovemberGridding_presel-antiele":

mkdir NovemberGridding_presel-antiele
mkdir: cannot create directory `NovemberGridding_presel-antiele': File 
exists

---

The setup is still the same as before. Any help is very much appreciated.

Best Regards,
Daniel


On 02/01/2011 10:50 AM, Daniel Zander wrote:

Dear all,

what I tried:
On the client side, as a normal user:
mkdir /storage/cluster/zander/mailtest

drwxr-xr-x 2 zander belle 66 Feb 1 10:28 mailtest

Then I checked, where the directory is found:
Brick1: 192.168.101.249:/storage/4/cluster (xfs) yes!
Brick2: 192.168.101.248:/storage/5/cluster (xfs) yes!
Brick3: 192.168.101.250:/storage/6/cluster (xfs) yes!
Brick4: 192.168.101.247:/storage/7/cluster (xfs) no!
Brick5: 192.168.101.246:/storage/8/cluster (ext4) no!

This confused me, as I was under the impression that this directory
should be found on all servers. I then created some files in that
directory and checked, where they appeared. Again nothing found in
bricks 4 and 5.

I tried to check the logs. The directory /usr/local/var/log/glusterfs
exists, but is empty. /var/log/glusterfs/ contains some logfiles,
however. I zipped the directories from Bricks 3,4 and 5.

You can find them here:
http://www-ekp.physik.uni-karlsruhe.de/~zander/2011-02-01-fs6-logs.tar.gz
http://www-ekp.physik.uni-karlsruhe.de/~zander/2011-02-01-fs7-logs.tar.gz
http://www-ekp.physik.uni-karlsruhe.de/~zander/2011-02-01-fs8-logs.tar.gz

I performed the test at 10:45 on February 1st (which was a few minutes
ago).

The error always happens, i.e. whenever someone wants to delete a
directory. I am now sure it has to do with the fact that the directory
is not written on Bricks 4 and 5. Data is written onto these new bricks,
however, but it seems to me only into directories that already exist. The
problem has occurred since we switched from glusterFS 3.0 to 3.1.1.

Thanks for any help,
Daniel



On 01/29/2011 04:20 AM, Pranith Kumar. Karampuri wrote:

If you have never edited vol files in 3.1.1, that means they were generated
by glusterd, and glusterd generates the access-control xlator. So it is
definitely BUG 2296. The chown workaround will take care of it for now.
I need some data about the "delete problem": the circumstances under which
it happens, etc. I am not able to reproduce anything like what you are
stating. You said that the bricks you are using have both ext4 and
xfs. Do this test: create one directory, and do ls -l on that directory so
that we know the permissions/ownership etc. Check on which brick it is
created and find out whether that brick is xfs or ext4. Now delete the
directory. Do this until you get the error. Glusterfs stores the logs
under the directory /usr/local/var/log/glusterfs. Zip that directory
along with the "ls -l" outputs you collected and send them across. We
can take a look. It would be helpful if you could confirm whether the error
happens only on the xfs bricks, only on the ext4 bricks, or on both.

Pranith.
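A condensed sketch of that test (the paths are the ones quoted earlier in this thread, the directory name is made up, and both may need adjusting):

# on the client mount, as the affected user
mkdir /storage/cluster/zander/healtest
ls -ld /storage/cluster/zander/healtest      # record permissions/ownership
# on each brick server, see which backend actually received the directory
ls -ld /storage/4/cluster/zander/healtest    # repeat for /storage/5..8
# delete it from the client and repeat until the error appears
rmdir /storage/cluster/zander/healtest
# collect the logs Pranith asks for
tar czf gluster-logs.tar.gz /usr/local/var/log/glusterfs /var/log/glusterfs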
- Original Message -
From: "Daniel Zander"
To: "Pranith Kumar. Karampuri"
Cc: gluster-users@gluster.org
Sent: Friday, January 28, 2011 7:16:05 PM
Subject: Re: [Gluster-users] GlusterFS 3.1.1: "Permission denied" and
"No such file or directory"

Hi!


To confirm the "Permission Denied" problem you are facing is indeed
BUG 2296 following conditions should be met:
1) mount is fuse mount.

This I can confirm.


2) access-control translator should be present on top of the posix
translator in the brick volfile.

Unfortunately, neither I nor any of my colleagues has any idea what this
means. We have no real sysadmins at our institute, sorry. A wild guess:
as we are using glusterFS 3.1.1, I have never edited or even seen any
volfiles. Or is that something different?

As the workaround really seems to work, I think we'll stick with that
for the moment. Any more ideas about the "delete problem"?

Best Regards,
Daniel
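For readers stuck on the same volfile question, a heavily hedged illustration (reconstructed from memory of 3.1-era volfiles, so treat the exact translator name as an assumption rather than fact): a brick volfile is a stack of volume ... end-volume sections, and "on top of the posix translator" means a section whose subvolumes line points at the posix section, roughly like this:

volume myvol-posix
  type storage/posix
  option directory /storage/4/cluster
end-volume

volume myvol-access-control
  type features/access-control    # assumed name of the 3.1 access-control xlator
  subvolumes myvol-posix
end-volume

On 3.1 the generated brick volfiles normally live under the glusterd working directory on each server (typically /etc/glusterd/vols/<volname>/).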



you can see the reports of the following bugs for "Permission Denied
on Fuse mount":
http://bugs.gluster.com/cgi-bin/bugzi

Re: [Gluster-users] Seeking Feedback on Gluster Development Priorities/Roadmap

2011-03-09 Thread paul simpson
hi ben,

it's good to be consulted!  my take on priorities would be:
0/ as per stephen's email - basic syncing tools.  right now, i'm unsure (see
1) how often to do a "find sync" to "heal" the fs.  the baseline fs needs to
be more robust.  it feels very delicate and i often find myself needing to
restart a client/glusterd.
1/ documentation.  gluster looks great - but there's a lack of good solid
docs.  and lots of old legacy / conflicting documentation out there too.
 this is a barrier for new users (ie, me) to try it out.
2/ improved UI. by this, do you mean reporting/migration tools?  if so, yes.
3/ gNFS NLM locking

i'd concentrate on getting what's already there working better & faster than
adding new features.

looking forward to irc conference.

regards,

paul


Following that, our internal priorities are:
>
> -Continuous Data Replication (over WAN)
> -Improved User Interface
> -CIFS/Active Directory Support
> -Object storage  (unified file and object)
> -Geo-replication to Amazon S3 (unify public and private cloud)
> -Continuous Data Protection
> -REST management API's
> -Enhanced support for ISCSi SANs
>
> Are these the right priorities? How would you prioritize?
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Strange behaviour glusterd 3.1

2011-03-09 Thread Daniel Müller
/mnt/glusterfs is the mount point of the client where the samba-vol 
(backend:/glusterfs/export) is mounted.
So it should work. And it did work until last week.

Greetings
Daniel


---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---

-Ursprüngliche Nachricht-
Von: Pranith Kumar. Karampuri [mailto:prani...@gluster.com] 
Gesendet: Mittwoch, 9. März 2011 10:05
An: muel...@tropenklinik.de
Cc: gluster-users@gluster.org
Betreff: Re: AW: [Gluster-users] Strange behaviour glusterd 3.1

Directly editing the files on backend is not supported. Most of the editors 
delete the original file and create a new-one when you edit a file.
So the extended attributes that are stored on the old-file are gone.

Pranith
- Original Message -
From: "Daniel Müller" 
To: "Pranith Kumar. Karampuri" 
Cc: gluster-users@gluster.org
Sent: Wednesday, March 9, 2011 2:24:14 PM
Subject: AW: [Gluster-users] Strange behaviour glusterd 3.1

Both server running. I edit the file on one node1-server (in /mnt/glusterfs/) 
changed the content, saved the file. Ssh to the node2-server there still old 
content. Did " find /mnt/glusterfs -print0 | xargs --null stat >/dev/null", 
still the same old content.
Need to restart both servers the the new content is written to node2-server.
And this equal behavior on both servers.
A few days ago all was ok.

---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---

-Ursprüngliche Nachricht-
Von: Pranith Kumar. Karampuri [mailto:prani...@gluster.com] 
Gesendet: Mittwoch, 9. März 2011 09:31
An: muel...@tropenklinik.de
Cc: gluster-users@gluster.org
Betreff: Re: [Gluster-users] Strange behaviour glusterd 3.1

"But after vim one.txt and changing the content of that file on one node.", 
what exactly do you mean by this?. Did you edit the file on backend?. (or) 
Brought one server down and after editing brought the second server back up?.

Pranith
- Original Message -
From: "Daniel Müller" 
To: gluster-users@gluster.org
Sent: Wednesday, March 9, 2011 1:58:39 PM
Subject: [Gluster-users] Strange behaviour glusterd 3.1

Dear all,

after some weeks of testing gluster the replication of my two nodes stopped
working the way it used to be.

My version:
glusterfs --version
glusterfs 3.1.0 built on Oct 13 2010 10:06:10
Repository revision: v3.1.0
Copyright (c) 2006-2010 Gluster Inc. 
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU Affero
General Public License
My Nodes:

gluster peer status
Number of Peers: 1

Hostname: 192.168.132.56
Uuid: 5ecf561e-f766-48b0-836f-17624586a39a
State: Peer in Cluster (Connected)

My replicating VOLS:

gluster peer status
Number of Peers: 1

Hostname: 192.168.132.56
Uuid: 5ecf561e-f766-48b0-836f-17624586a39a
State: Peer in Cluster (Connected)
[root@ctdb2 test]# gluster volume info

Volume Name: samba-vol
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.132.56:/glusterfs/export
Brick2: 192.168.132.57:/glusterfs/export
Options Reconfigured:
network.ping-timeout: 5

Now my mount point for the client is /mnt/glusterfs in fstab:
192.168.132.57:/samba-vol  /mnt/glusterfs  glusterfs  defaults  0  0

Commandline mount succeeds with:

glusterfs#192.168.132.57:/samba-vol on /mnt/glusterfs type fuse
(rw,allow_other,default_permissions,max_read=131072)

Now glusterd worked for many weeks. But now when I create a file (ex:
one.txt) in /mnt/glusterfs/ the file is replicated to the other node well.
But after vim one.txt and changing the content of that file on one node. The
changes are not replicated to the other node
Until I restart both nodes.
I did made a " find /mnt/glusterfs -print0 | xargs --null stat >/dev/null"
with no success.
Any Idea???

Greetings Daniel

---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Strange behaviour glusterd 3.1

2011-03-09 Thread Pranith Kumar. Karampuri
Directly editing the files on the backend is not supported. Most editors 
delete the original file and create a new one when you edit a file,
so the extended attributes that were stored on the old file are gone.

Pranith
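A small illustration of that point (not from the original mail; whether it happens depends on the editor and its backup/rename settings):

stat -c %i /glusterfs/export/one.txt   # note the file's inode number on the brick
vim /glusterfs/export/one.txt          # edit and save directly on the backend
stat -c %i /glusterfs/export/one.txt   # a changed inode means a brand-new file,
                                       # so the old gluster xattrs went with it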
- Original Message -
From: "Daniel Müller" 
To: "Pranith Kumar. Karampuri" 
Cc: gluster-users@gluster.org
Sent: Wednesday, March 9, 2011 2:24:14 PM
Subject: AW: [Gluster-users] Strange behaviour glusterd 3.1

Both server running. I edit the file on one node1-server (in /mnt/glusterfs/) 
changed the content, saved the file. Ssh to the node2-server there still old 
content. Did " find /mnt/glusterfs -print0 | xargs --null stat >/dev/null", 
still the same old content.
Need to restart both servers the the new content is written to node2-server.
And this equal behavior on both servers.
A few days ago all was ok.

---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---

-Ursprüngliche Nachricht-
Von: Pranith Kumar. Karampuri [mailto:prani...@gluster.com] 
Gesendet: Mittwoch, 9. März 2011 09:31
An: muel...@tropenklinik.de
Cc: gluster-users@gluster.org
Betreff: Re: [Gluster-users] Strange behaviour glusterd 3.1

"But after vim one.txt and changing the content of that file on one node.", 
what exactly do you mean by this?. Did you edit the file on backend?. (or) 
Brought one server down and after editing brought the second server back up?.

Pranith
- Original Message -
From: "Daniel Müller" 
To: gluster-users@gluster.org
Sent: Wednesday, March 9, 2011 1:58:39 PM
Subject: [Gluster-users] Strange behaviour glusterd 3.1

Dear all,

after some weeks of testing gluster the replication of my two nodes stopped
working the way it used to be.

My version:
glusterfs --version
glusterfs 3.1.0 built on Oct 13 2010 10:06:10
Repository revision: v3.1.0
Copyright (c) 2006-2010 Gluster Inc. 
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU Affero
General Public License
My Nodes:

gluster peer status
Number of Peers: 1

Hostname: 192.168.132.56
Uuid: 5ecf561e-f766-48b0-836f-17624586a39a
State: Peer in Cluster (Connected)

My replicating VOLS:

gluster peer status
Number of Peers: 1

Hostname: 192.168.132.56
Uuid: 5ecf561e-f766-48b0-836f-17624586a39a
State: Peer in Cluster (Connected)
[root@ctdb2 test]# gluster volume info

Volume Name: samba-vol
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.132.56:/glusterfs/export
Brick2: 192.168.132.57:/glusterfs/export
Options Reconfigured:
network.ping-timeout: 5

Now my mount point for the client is /mnt/glusterfs in fstab:
192.168.132.57:/samba-vol  /mnt/glusterfs  glusterfs  defaults  0  0

Commandline mount succeeds with:

glusterfs#192.168.132.57:/samba-vol on /mnt/glusterfs type fuse
(rw,allow_other,default_permissions,max_read=131072)

Now glusterd worked for many weeks. But now when I create a file (ex:
one.txt) in /mnt/glusterfs/ the file is replicated to the other node well.
But after vim one.txt and changing the content of that file on one node. The
changes are not replicated to the other node
Until I restart both nodes.
I did made a " find /mnt/glusterfs -print0 | xargs --null stat >/dev/null"
with no success.
Any Idea???

Greetings Daniel

---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Strange behaviour glusterd 3.1

2011-03-09 Thread Daniel Müller
Both servers are running. I edited the file on the node1 server (in /mnt/glusterfs/), 
changed the content, and saved the file. After ssh-ing to the node2 server, there was 
still the old content. I did "find /mnt/glusterfs -print0 | xargs --null stat >/dev/null", 
still the same old content.
I need to restart both servers before the new content is written to the node2 server.
And the behavior is equal on both servers.
A few days ago all was ok.

---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---

-Ursprüngliche Nachricht-
Von: Pranith Kumar. Karampuri [mailto:prani...@gluster.com] 
Gesendet: Mittwoch, 9. März 2011 09:31
An: muel...@tropenklinik.de
Cc: gluster-users@gluster.org
Betreff: Re: [Gluster-users] Strange behaviour glusterd 3.1

"But after vim one.txt and changing the content of that file on one node.", 
what exactly do you mean by this?. Did you edit the file on backend?. (or) 
Brought one server down and after editing brought the second server back up?.

Pranith
- Original Message -
From: "Daniel Müller" 
To: gluster-users@gluster.org
Sent: Wednesday, March 9, 2011 1:58:39 PM
Subject: [Gluster-users] Strange behaviour glusterd 3.1

Dear all,

after some weeks of testing gluster the replication of my two nodes stopped
working the way it used to be.

My version:
glusterfs --version
glusterfs 3.1.0 built on Oct 13 2010 10:06:10
Repository revision: v3.1.0
Copyright (c) 2006-2010 Gluster Inc. 
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU Affero
General Public License
My Nodes:

gluster peer status
Number of Peers: 1

Hostname: 192.168.132.56
Uuid: 5ecf561e-f766-48b0-836f-17624586a39a
State: Peer in Cluster (Connected)

My replicating VOLS:

gluster peer status
Number of Peers: 1

Hostname: 192.168.132.56
Uuid: 5ecf561e-f766-48b0-836f-17624586a39a
State: Peer in Cluster (Connected)
[root@ctdb2 test]# gluster volume info

Volume Name: samba-vol
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.132.56:/glusterfs/export
Brick2: 192.168.132.57:/glusterfs/export
Options Reconfigured:
network.ping-timeout: 5

Now my mount point for the client is /mnt/glusterfs in fstab:
192.168.132.57:/samba-vol  /mnt/glusterfs  glusterfs  defaults  0  0

Commandline mount succeeds with:

glusterfs#192.168.132.57:/samba-vol on /mnt/glusterfs type fuse
(rw,allow_other,default_permissions,max_read=131072)

Now glusterd worked for many weeks. But now when I create a file (ex:
one.txt) in /mnt/glusterfs/ the file is replicated to the other node well.
But after vim one.txt and changing the content of that file on one node. The
changes are not replicated to the other node
Until I restart both nodes.
I did made a " find /mnt/glusterfs -print0 | xargs --null stat >/dev/null"
with no success.
Any Idea???

Greetings Daniel

---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Strange behaviour glusterd 3.1

2011-03-09 Thread Pranith Kumar. Karampuri
"But after vim one.txt and changing the content of that file on one node.", 
what exactly do you mean by this?. Did you edit the file on backend?. (or) 
Brought one server down and after editing brought the second server back up?.

Pranith
- Original Message -
From: "Daniel Müller" 
To: gluster-users@gluster.org
Sent: Wednesday, March 9, 2011 1:58:39 PM
Subject: [Gluster-users] Strange behaviour glusterd 3.1

Dear all,

after some weeks of testing gluster the replication of my two nodes stopped
working the way it used to be.

My version:
glusterfs --version
glusterfs 3.1.0 built on Oct 13 2010 10:06:10
Repository revision: v3.1.0
Copyright (c) 2006-2010 Gluster Inc. 
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU Affero
General Public License
My Nodes:

gluster peer status
Number of Peers: 1

Hostname: 192.168.132.56
Uuid: 5ecf561e-f766-48b0-836f-17624586a39a
State: Peer in Cluster (Connected)

My replicating VOLS:

gluster peer status
Number of Peers: 1

Hostname: 192.168.132.56
Uuid: 5ecf561e-f766-48b0-836f-17624586a39a
State: Peer in Cluster (Connected)
[root@ctdb2 test]# gluster volume info

Volume Name: samba-vol
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.132.56:/glusterfs/export
Brick2: 192.168.132.57:/glusterfs/export
Options Reconfigured:
network.ping-timeout: 5

Now my mount point for the client is /mnt/glusterfs in fstab:
192.168.132.57:/samba-vol  /mnt/glusterfs  glusterfs  defaults  0  0

Commandline mount succeeds with:

glusterfs#192.168.132.57:/samba-vol on /mnt/glusterfs type fuse
(rw,allow_other,default_permissions,max_read=131072)

Now glusterd worked for many weeks. But now when I create a file (ex:
one.txt) in /mnt/glusterfs/ the file is replicated to the other node well.
But after vim one.txt and changing the content of that file on one node. The
changes are not replicated to the other node
Until I restart both nodes.
I did made a " find /mnt/glusterfs -print0 | xargs --null stat >/dev/null"
with no success.
Any Idea???

Greetings Daniel

---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Strange behaviour glusterd 3.1

2011-03-09 Thread Daniel Müller
Dear all,

after some weeks of testing gluster the replication of my two nodes stopped
working the way it used to be.

My version:
glusterfs --version
glusterfs 3.1.0 built on Oct 13 2010 10:06:10
Repository revision: v3.1.0
Copyright (c) 2006-2010 Gluster Inc. 
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU Affero
General Public License
My Nodes:

gluster peer status
Number of Peers: 1

Hostname: 192.168.132.56
Uuid: 5ecf561e-f766-48b0-836f-17624586a39a
State: Peer in Cluster (Connected)

My replicating VOLS:

gluster peer status
Number of Peers: 1

Hostname: 192.168.132.56
Uuid: 5ecf561e-f766-48b0-836f-17624586a39a
State: Peer in Cluster (Connected)
[root@ctdb2 test]# gluster volume info

Volume Name: samba-vol
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.132.56:/glusterfs/export
Brick2: 192.168.132.57:/glusterfs/export
Options Reconfigured:
network.ping-timeout: 5

Now my mount point for the client is /mnt/glusterfs in fstab:
192.168.132.57:/samba-vol  /mnt/glusterfs  glusterfs  defaults  0  0

Commandline mount succeeds with:

glusterfs#192.168.132.57:/samba-vol on /mnt/glusterfs type fuse
(rw,allow_other,default_permissions,max_read=131072)

Now, glusterd worked for many weeks. But now, when I create a file (e.g.
one.txt) in /mnt/glusterfs/, the file is replicated to the other node fine.
But after running vim one.txt and changing the content of that file on one node,
the changes are not replicated to the other node
until I restart both nodes.
I did "find /mnt/glusterfs -print0 | xargs --null stat >/dev/null"
with no success.
Any idea???

Greetings Daniel

---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Single-thread read/write performance over TCP/IP?

2011-03-09 Thread Marcus Bointon
On 7 Mar 2011, at 23:04, Patrick J. LoPresti wrote:

> I need to be able to read 500 megabytes/second, sustained, from a
> single file using a single thread.  I have achieved such speeds and
> higher, reading 200+ gigabytes sequentially, using a fast RAID and XFS
> (i.e., local storage).  I would like to replace that design with
> something more networked and scalable.  But my individual clients
> still require 500+ megabyte/second reads.
> 
> If I tie together a half dozen fast GlusterFS servers with 10GigE,
> will I be able to serve another half dozen 10GigE clients at 500MB/sec
> each?  (Again, assuming single-file, single-thread on each client.
> Also assume each server can read/write its local store at ~1000 MB/sec
> sustained.)

10GigE will give you net throughput of around 1Gbyte/sec, so two clients going 
at 500M/sec on the same partition/interface (and assuming your switch will run 
at wire speed) will saturate it. Maybe you should have a separate 10G interface 
on each server for each client? You might run into trouble with bus bandwidth 
on the servers though.
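A rough back-of-envelope check of those numbers, assuming roughly 6% framing and protocol overhead on the link:

# 10 Gbit/s converted to usable MB/s (integer arithmetic in bash)
echo $(( 10 * 1000 * 1000 * 1000 / 8 * 94 / 100 / 1000 / 1000 ))   # ~1175
# two sustained 500 MB/s readers already demand 1000 MB/s of that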

From what I understand of gluster, you might get this kind of bandwidth if the 
target files happen to be on different servers, but I'm not sure how you'd 
make sure they were on different servers in order to increase capacity, unless 
you effectively raid-1 your data across all servers. Even then, you'd need 
some way of round-robining that's consistent across all clients.

I may be wrong here, mainly thinking out loud!

Marcus
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

