Re: [Gluster-users] Automatic Failover

2011-03-08 Thread Daniel Müller
What exactly are you trying to do?
Do you mean Apache serving GlusterFS over WebDAV?

---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---
-Original Message-
From: Mohit Anchlia [mailto:mohitanch...@gmail.com]
Sent: Tuesday, 8 March 2011 19:45
To: muel...@tropenklinik.de; gluster-users@gluster.org
Subject: Re: [Gluster-users] Automatic Failover

Thanks! I am still trying to figure out how it works with HTTP. In
some docs I see that GFS supports HTTP and I am not sure how that
works. Does anyone know how that works or what has to be done to make
it available over HTTP?

On Mon, Mar 7, 2011 at 11:14 PM, Daniel Müller 
wrote:
> If you use an HTTP server, Samba, etc. (e.g. Apache) you need ucarp. There is a
> little blog about it:
>
> http://www.misdivision.com/blog/setting-up-a-highly-available-storage-cluster-using-glusterfs-and-ucarp
>
> Ucarp gives you a virtual IP for your httpd. If one server is down, the second
> one answers on the same IP and keeps serving. GlusterFS serves
> the data in HA mode. Try it, it works like a charm.
>
> ---
> EDV Daniel Müller
>
> Leitung EDV
> Tropenklinik Paul-Lechler-Krankenhaus
> Paul-Lechler-Str. 24
> 72076 Tübingen
>
> Tel.: 07071/206-463, Fax: 07071/206-499
> eMail: muel...@tropenklinik.de
> Internet: www.tropenklinik.de
> ---
>
> -Original Message-
> From: Mohit Anchlia [mailto:mohitanch...@gmail.com]
> Sent: Monday, 7 March 2011 18:47
> To: muel...@tropenklinik.de; gluster-users@gluster.org
> Subject: Re: [Gluster-users] Automatic Failover
>
> Thanks! Can you please give me some scenarios where applications will
> need ucarp? Are you referring to NFS clients?
>
> Does glusterfs support HTTP also? I can't find any documentation.
>
> On Mon, Mar 7, 2011 at 12:34 AM, Daniel Müller 
> wrote:
>> Yes, normally the clients will follow the failover, but some
>> applications and services running on top
>> of glusterfs will not, and those will need this.
>>
>> ---
>> EDV Daniel Müller
>>
>> Leitung EDV
>> Tropenklinik Paul-Lechler-Krankenhaus
>> Paul-Lechler-Str. 24
>> 72076 Tübingen
>>
>> Tel.: 07071/206-463, Fax: 07071/206-499
>> eMail: muel...@tropenklinik.de
>> Internet: www.tropenklinik.de
>> ---
>>
>> -Original Message-
>> From: Mohit Anchlia [mailto:mohitanch...@gmail.com]
>> Sent: Friday, 4 March 2011 18:36
>> To: muel...@tropenklinik.de; gluster-users@gluster.org
>> Subject: Re: [Gluster-users] Automatic Failover
>>
>> Thanks! I thought failover is built in for native clients. Isn't that true?
>>
>> On Thu, Mar 3, 2011 at 11:46 PM, Daniel Müller 
>> wrote:
>>> I use ucarp to provide real failover of my gluster volumes
>>> for the clients.
>>> Example: in my ucarp config, vip-001.conf on both nodes (or on n nodes):
>>>
>>> vip-001.conf on node1
>>> #ID
>>> ID=001
>>> #Network Interface
>>> BIND_INTERFACE=eth0
>>> #Real IP
>>> SOURCE_ADDRESS=192.168.132.56
>>> #Virtual IP used by ucarp
>>> VIP_ADDRESS=192.168.132.58
>>> #Ucarp Password
>>> PASSWORD=Password
>>>
>>> On node2
>>>
>>> #ID
>>> ID=002
>>> #Network Interface
>>> BIND_INTERFACE=eth0
>>> #Real IP
>>> SOURCE_ADDRESS=192.168.132.57
>>> #Virtual IP used by ucarp
>>> VIP_ADDRESS=192.168.132.58
>>> #Ucarp Password
>>> PASSWORD=Password
>>>
>>> Then
>>> mount -t glusterfs 192.168.132.58:/samba-vol /mnt/glusterfs
>>> Set entries in fstab:
>>>
>>> 192.168.132.58:/samba-vol /mnt/glusterfs glusterfs defaults 0 0
>>>
>>> Now if one server fails, the clients still see the same service.
>>>
>>> ---
>>> EDV Daniel Müller
>>>
>>> Leitung EDV
>>> Tropenklinik Paul-Lechler-Krankenhaus
>>> Paul-Lechler-Str. 24
>>> 72076 Tübingen
>>>
>>> Tel.: 07071/206-463, Fax: 07071/206-499
>>> eMail: muel...@tropenklinik.de
>>> Internet: www.tropenklinik.de
>>> ---
>>> -Original Message-
>>> From: gluster-users-boun...@gluster.org
>>> [mailto:gluster-users-boun...@gluster.org] On Behalf Of Mohit Anchlia
>>> Sent: Friday, 4 March 2011 03:46
>>> To: gluster-users@gluster.org
>>> Subject: [Gluster-users] Automatic Failover
>>>
>>> I am actively trying to find out how automatic failover works, but I have not
>>> been able to find out how. Is it in the mount options, or where do we specify
>>> that if node 1 is down then writes go to node 2, or something like that? Can
>>> someone please explain how it works?
>>> ___
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>>
>>>
>>
>>
>
>
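For context, a vip-001.conf like the one quoted above is normally consumed by a small init script that launches ucarp with those values. A minimal sketch, assuming standard ucarp flags and illustrative up/down scripts that add and remove the VIP on eth0, would be:

ucarp -i eth0 -s 192.168.132.56 -a 192.168.132.58 -v 1 -p Password -B \
      --upscript=/etc/vip-up.sh --downscript=/etc/vip-down.sh

The ID from the config maps to ucarp's vhid (-v), and whichever node's ucarp instance currently holds the VIP answers on 192.168.132.58 until it fails.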


Re: [Gluster-users] Self heal doesn't seem to work when file is updated

2011-03-08 Thread Pranith Kumar. Karampuri
hi Mohit,
   ls -laR does not trigger the self-heal when the stat-prefetch translator
is loaded. The command to use for triggering self-heal is "find". Please see
our documentation of the same.
http://europe.gluster.org/community/documentation/index.php/Gluster_3.1:_Triggering_Self-Heal_on_Replicate


I executed the same example on my machine and it works fine.

root@pranith-laptop:/mnt/client# cat /tmp/2/a.txt 
sds
root@pranith-laptop:/mnt/client# find .
.
./a.txt
root@pranith-laptop:/mnt/client# cat /tmp/2/a.txt 
sds
DD
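For completeness, the page linked above walks the whole mount with find so that every entry is looked up and healed; a rough sketch of that trigger (the mount point is illustrative) is:

find /mnt/client -noleaf -print0 | xargs --null stat >/dev/null 2>/dev/null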

Pranith

- Original Message -
From: "Pranith Kumar. Karampuri" 
To: land...@scalableinformatics.com
Cc: gluster-users@gluster.org
Sent: Wednesday, March 9, 2011 8:42:37 AM
Subject: Re: [Gluster-users] Self heal doesn't seem to work when file is updated

hi,
 Glusterfs identifies files using a gfid; the same file on both replicas
carries the same gfid. When you edit a text file, the editor creates a new backup
file (with a different gfid), writes the data to it, and then renames it over the
original file, thus changing the gfid on the bricks that are up.
When the old brick comes back up it finds that the gfid of that file has
changed. This case is not handled in the code at the moment; I am working on it.
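Illustratively, a save in such an editor amounts to something like the following shell sketch (file names are made up):

cp a.txt .a.txt.new          # new file, so a new gfid on the bricks that are up
echo "DD" >> .a.txt.new      # the edit lands in the new file
mv .a.txt.new a.txt          # rename over the original; "a.txt" now has a different gfid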

Pranith.
- Original Message -
From: "Joe Landman" 
To: "Mohit Anchlia" 
Cc: gluster-users@gluster.org
Sent: Tuesday, March 8, 2011 11:43:37 PM
Subject: Re: [Gluster-users] Self heal doesn't seem to work when file is
updated

On 03/08/2011 01:10 PM, Mohit Anchlia wrote:

ok ...

>> Turn up debugging on the server.  Then try your test.  See if it still
>> fails.

It fails with the gluster client but not the NFS client?

Have you opened a bug report on the bugzilla?

-- 
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: land...@scalableinformatics.com
web  : http://scalableinformatics.com
http://scalableinformatics.com/sicluster
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Self heal doesn't seem to work when file is updated

2011-03-08 Thread Pranith Kumar. Karampuri
hi,
 Glusterfs identifies files using a gfid; the same file on both replicas
carries the same gfid. When you edit a text file, the editor creates a new backup
file (with a different gfid), writes the data to it, and then renames it over the
original file, thus changing the gfid on the bricks that are up.
When the old brick comes back up it finds that the gfid of that file has
changed. This case is not handled in the code at the moment; I am working on it.

Pranith.
- Original Message -
From: "Joe Landman" 
To: "Mohit Anchlia" 
Cc: gluster-users@gluster.org
Sent: Tuesday, March 8, 2011 11:43:37 PM
Subject: Re: [Gluster-users] Self heal doesn't seem to work when file is
updated

On 03/08/2011 01:10 PM, Mohit Anchlia wrote:

ok ...

>> Turn up debugging on the server.  Then try your test.  See if it still
>> fails.

It fails with the gluster client but not the NFS client?

Have you opened a bug report on the bugzilla?

-- 
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: land...@scalableinformatics.com
web  : http://scalableinformatics.com
http://scalableinformatics.com/sicluster
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Taking control of the elastic hash algorithm?

2011-03-08 Thread Patrick J. LoPresti
I am considering Gluster for my application.

In my application, I will have 8-10 servers with 10 GigE and 8 clients
also with 10 GigE.

At any time, each client will be doing a linear read of a 500-1000MB
file.  I know ahead of time which files will be read by which client,
and they are disjoint (i.e., some files will be read by client 1,
others by client 2, etc.)

To avoid contention, I would like to have some control over which file
lives on which server.

But I would also like to be able to increase the number of servers in
the future, and have their number not necessarily be a multiple of the
number of clients, which is why something like Gluster is of interest.

So, is the Gluster "elastic hash" algorithm pluggable?  As in, would
it be straightforward to implement my own?

This is an internal application, so I do control the relevant file names...

Thanks!

 - Pat
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Setup scenario for Cluster 4 node cluster.

2011-03-08 Thread Hareem Haque
Many thanks Jacob. I appreciate your help. Would the following be true with
respect to your statement?

Node1 (192.168.2.100) fails; we get the partition info set up and then issue
replace-brick:

gluster volume replace-brick test-volume Node3:/exp3 Node1:/exp1 start


Secondly, if this is correct, would I also have to commit the migration?

gluster volume replace-brick test-volume Node3:/exp3 Node1:/exp1 commit
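As a side note, between "start" and "commit" the data migration can usually be monitored with the matching status subcommand (same volume and brick names as above):

gluster volume replace-brick test-volume Node3:/exp3 Node1:/exp1 status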


Then I need to trigger the self-heal command.

The reason we are thinking of using ucarp with the GlusterFS native client is
that the client used Node1's IP to mount the glusterfs volume,

so if Node1 failed and the client had to remount, it would need another IP
address.

Using ucarp, the client can be remounted automatically against any of the 4 nodes.

Is the above a correct method of handling failures, or am I missing
something?

Best Regards
Hareem. Haque



On Tue, Mar 8, 2011 at 4:54 PM, Jacob Shucart  wrote:

> Hareem,
>
>
>
> As I mentioned in the call yesterday, rebalance is only when you are adding
> new nodes - not when you are replacing a failed node.  You need to use the
> replace-brick command.  Also, if you are using the Gluster native client to
> mount the filesystem then you do not need to use ucarp.  Ucarp is only
> needed for NFS access.  The Gluster client itself has failover capability
> built in to it.
>
>
>
> Regarding healing, please see the documentation at:
>
>
>
>
> http://www.gluster.com/community/documentation/index.php/Gluster_3.1:_Triggering_Self-Heal_on_Replicate
>
>
>
> Please let me know if you have any additional questions.  Thank you.
>
>
>
> Jacob Shucart | Gluster
>
> Systems Engineer
>
> E-Mail - ja...@gluster.com
>
> Direct - (408)770-1504
>
>
>
> *From:* Hareem Haque [mailto:hareem.ha...@gmail.com]
> *Sent:* Tuesday, March 08, 2011 1:17 PM
> *To:* prani...@gluster.com
> *Cc:* gluster-users@gluster.org; Rich Nave
> *Subject:* Setup scenario for Cluster 4 node cluster.
>
>
>
> Hello Pranithk
>
>
>
> Thanks for your help. I really appreciate it. The following is our proof of
> concept setup. Hopefully through this you can guide how best to work around
> disasters and node failures.
>
>
>
> 4 nodes distributed replication. All nodes run on 1 Gbps private network
> and have 1 TB sata HDD each
>
>
>
> 192.168.2.100
>
> 192.168.2.101
>
> 192.168.2.102
>
> 192.168.2.103
>
>
>
> A single access client
>
>
>
> 192.168.2.104
>
>
>
>
>
> Scenario
>
> On the Node1 (192.168.2.100) issued the peer probe command to the rest of
> the nodes. And 1 brick is created. Now the client (192.168.2.104) writes
> data over the cluster each nodes gets a replicated copy. All nodes run Ucarp
> for single ip address for the client to access. We use Glusterfs native
> client (FUSE)
>
>
>
>
>
> Now say around midnight Node1 fails (total failure -- disk dies --
> processor dies -- everything on this node dies -- no chance of data recovery
> on this node -- total node loss). Our staff add another node onto the
> private network this node is blank. We hardware spec as Node1. We load up
> the partition tables onto this new node.. its similar to the lost node
> except does not have the gluster data anymore. Now, what should i do to add
> this node into the cluster and get the cluster back to normal.
>
>
>
> Should the following be ok:
>
>
>
> 1. Run probe peer again on the re-gained Node1
>
>
>
> 2. Run rebalance command.
>
>
>
> 3 According to you Pranithk the system is self healing. So do the other
> nodes constantly ping back Node1 ip again and again until they get a
> response.
>
>
>
> 4. What are the exact steps we need to take in order to make sure that the
> data is not lost.. the way i see it.. raid 10 etc are not needed simply
> because there are so many replicas of the initial data that raid 10 feels
> like overkill. Personally, with our tests the 4 node cluster actually
> outperformed our old raid array.
>
>
>
> 5. we got the setup part properly. We do not know the proper procedure to
> bring back the cluster to its full strength. Now one can deploy gluster on
> an AMI or Vmware image but the underlying codebase is the same all the
> times. So what do we do to get this proof on concept  done.
>
>
>
>
>
>
>
>
> Best Regards
> Hareem. Haque
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Setup scenario for Cluster 4 node cluster.

2011-03-08 Thread Hareem Haque
Hello Pranithk

Thanks for your help. I really appreciate it. The following is our proof of
concept setup. Hopefully through this you can guide us on how best to work around
disasters and node failures.

4 nodes, distributed replication. All nodes run on a 1 Gbps private network and
have a 1 TB SATA HDD each.

192.168.2.100
192.168.2.101
192.168.2.102
192.168.2.103

A single access client

192.168.2.104


Scenario
On Node1 (192.168.2.100) we issued the peer probe command to the rest of
the nodes, and one brick is created. Now the client (192.168.2.104) writes
data to the cluster and each node gets a replicated copy. All nodes run ucarp
to provide a single IP address for the client to access. We use the GlusterFS native
client (FUSE).
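For reference, a 2x2 distributed-replicated volume over these four nodes would typically have been created with something along these lines (the volume name and brick paths are illustrative):

gluster volume create test-volume replica 2 transport tcp \
    192.168.2.100:/exp1 192.168.2.101:/exp2 \
    192.168.2.102:/exp3 192.168.2.103:/exp4
gluster volume start test-volume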


Now say around midnight Node1 fails (total failure -- the disk dies, the processor
dies, everything on this node dies, no chance of data recovery on this
node -- total node loss). Our staff add another node onto the private
network; this node is blank. We spec its hardware the same as Node1 and load up the
partition tables onto this new node, so it is similar to the lost node except that it
does not have the gluster data anymore. Now, what should I do to add this
node into the cluster and get the cluster back to normal?

Would the following be OK?

1. Run peer probe again on the regained Node1.

2. Run the rebalance command.

3. According to you, Pranithk, the system is self-healing. So do the other
nodes keep pinging Node1's IP again and again until they get a
response?

4. What are the exact steps we need to take in order to make sure that the
data is not lost? The way I see it, RAID 10 etc. are not needed, simply
because there are so many replicas of the initial data that RAID 10 feels
like overkill. Personally, in our tests the 4-node cluster actually
outperformed our old RAID array.

5. We got the setup part right, but we do not know the proper procedure to
bring the cluster back to its full strength. One can deploy gluster on
an AMI or a VMware image, but the underlying codebase is the same in all
cases. So what do we do to get this proof of concept done?




Best Regards
Hareem. Haque
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] future of gluster storage platform

2011-03-08 Thread Terry
On Mon, Mar 7, 2011 at 11:26 AM, Terry  wrote:
> Since it appears that GSP is going away, what other solutions are out
> there that have a similar model?  I haven't even deployed gluster yet
> but got demotivated when I came across that news.  I don't think the
> virtual appliance would work for me because I may want to use the
> white label storage for my VM backend.  Thoughts?
>

Since no one responded here, am I wrong in that GSP is going away?  If
so, anyone else concerned?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Automatic Failover

2011-03-08 Thread Mohit Anchlia
Thanks! I am still trying to figure out how it works with HTTP. In
some docs I see that GFS supports HTTP and I am not sure how that
works. Does anyone know how that works or what has to be done to make
it available over HTTP?

On Mon, Mar 7, 2011 at 11:14 PM, Daniel Müller  wrote:
> If you use an HTTP server, Samba, etc. (e.g. Apache) you need ucarp. There is a
> little blog about it:
> http://www.misdivision.com/blog/setting-up-a-highly-available-storage-cluster-using-glusterfs-and-ucarp
>
> Ucarp gives you a virtual IP for your httpd. If one server is down, the second
> one answers on the same IP and keeps serving. GlusterFS serves
> the data in HA mode. Try it, it works like a charm.
>
> ---
> EDV Daniel Müller
>
> Leitung EDV
> Tropenklinik Paul-Lechler-Krankenhaus
> Paul-Lechler-Str. 24
> 72076 Tübingen
>
> Tel.: 07071/206-463, Fax: 07071/206-499
> eMail: muel...@tropenklinik.de
> Internet: www.tropenklinik.de
> ---
>
> -Original Message-
> From: Mohit Anchlia [mailto:mohitanch...@gmail.com]
> Sent: Monday, 7 March 2011 18:47
> To: muel...@tropenklinik.de; gluster-users@gluster.org
> Subject: Re: [Gluster-users] Automatic Failover
>
> Thanks! Can you please give me some scenarios where applications will
> need ucarp? Are you referring to NFS clients?
>
> Does glusterfs support HTTP also? I can't find any documentation.
>
> On Mon, Mar 7, 2011 at 12:34 AM, Daniel Müller 
> wrote:
>> Yes, normally the clients will follow the failover, but some
>> applications and services running on top
>> of glusterfs will not, and those will need this.
>>
>> ---
>> EDV Daniel Müller
>>
>> Leitung EDV
>> Tropenklinik Paul-Lechler-Krankenhaus
>> Paul-Lechler-Str. 24
>> 72076 Tübingen
>>
>> Tel.: 07071/206-463, Fax: 07071/206-499
>> eMail: muel...@tropenklinik.de
>> Internet: www.tropenklinik.de
>> ---
>>
>> -Original Message-
>> From: Mohit Anchlia [mailto:mohitanch...@gmail.com]
>> Sent: Friday, 4 March 2011 18:36
>> To: muel...@tropenklinik.de; gluster-users@gluster.org
>> Subject: Re: [Gluster-users] Automatic Failover
>>
>> Thanks! I thought failover is built in for native clients. Isn't that true?
>>
>> On Thu, Mar 3, 2011 at 11:46 PM, Daniel Müller 
>> wrote:
>>> I use ucarp to provide real failover of my gluster volumes
>>> for the clients.
>>> Example: in my ucarp config, vip-001.conf on both nodes (or on n nodes):
>>>
>>> vip-001.conf on node1
>>> #ID
>>> ID=001
>>> #Network Interface
>>> BIND_INTERFACE=eth0
>>> #Real IP
>>> SOURCE_ADDRESS=192.168.132.56
>>> #Virtual IP used by ucarp
>>> VIP_ADDRESS=192.168.132.58
>>> #Ucarp Password
>>> PASSWORD=Password
>>>
>>> On node2
>>>
>>> #ID
>>> ID=002
>>> #Network Interface
>>> BIND_INTERFACE=eth0
>>> #Real IP
>>> SOURCE_ADDRESS=192.168.132.57
>>> #Virtual IP used by ucarp
>>> VIP_ADDRESS=192.168.132.58
>>> #Ucarp Password
>>> PASSWORD=Password
>>>
>>> Then
>>> mount -t glusterfs 192.168.132.58:/samba-vol /mnt/glusterfs
>>> Set entries in fstab:
>>>
>>> 192.168.132.58:/samba-vol /mnt/glusterfs glusterfs defaults 0 0
>>>
>>> Now if one server fails, the clients still see the same service.
>>>
>>> ---
>>> EDV Daniel Müller
>>>
>>> Leitung EDV
>>> Tropenklinik Paul-Lechler-Krankenhaus
>>> Paul-Lechler-Str. 24
>>> 72076 Tübingen
>>>
>>> Tel.: 07071/206-463, Fax: 07071/206-499
>>> eMail: muel...@tropenklinik.de
>>> Internet: www.tropenklinik.de
>>> ---
>>> -Original Message-
>>> From: gluster-users-boun...@gluster.org
>>> [mailto:gluster-users-boun...@gluster.org] On Behalf Of Mohit Anchlia
>>> Sent: Friday, 4 March 2011 03:46
>>> To: gluster-users@gluster.org
>>> Subject: [Gluster-users] Automatic Failover
>>>
>>> I am actively trying to find out how automatic failover works, but I have not
>>> been able to find out how. Is it in the mount options, or where do we specify
>>> that if node 1 is down then writes go to node 2, or something like that? Can
>>> someone please explain how it works?
>>> ___
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>>
>>>
>>
>>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Self heal doesn't seem to work when file is updated

2011-03-08 Thread Joe Landman

On 03/08/2011 01:10 PM, Mohit Anchlia wrote:

ok ...


Turn up debugging on the server.  Then try your test.  See if it still
fails.


It fails with the gluster client but not the NFS client?

Have you opened a bug report on the bugzilla?

--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: land...@scalableinformatics.com
web  : http://scalableinformatics.com
   http://scalableinformatics.com/sicluster
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Self heal doesn't seem to work when file is updated

2011-03-08 Thread Mohit Anchlia
These are the logs that I see:

mount FS logs

[2011-03-08 10:02:29.1920] I [afr-common.c:716:afr_lookup_done]
test-volume-replicate-0: background  entry self-heal triggered. path:
/
[2011-03-08 10:02:29.6111] I
[afr-self-heal-common.c:1526:afr_self_heal_completion_cbk]
test-volume-replicate-0: background  entry self-heal completed on /
[2011-03-08 10:02:37.374866] I [afr-common.c:716:afr_lookup_done]
test-volume-replicate-0: background  meta-data data self-heal
triggered. path: /a.tx
[2011-03-08 10:02:37.376012] I
[afr-self-heal-common.c:1526:afr_self_heal_completion_cbk]
test-volume-replicate-0: background  meta-data data self-heal
completed on /a.tx
[2011-03-08 10:02:45.554547] I [afr-common.c:716:afr_lookup_done]
test-volume-replicate-0: background  meta-data data self-heal
triggered. path: /a.tx
[2011-03-08 10:02:45.53] I
[afr-self-heal-common.c:1526:afr_self_heal_completion_cbk]
test-volume-replicate-0: background  meta-data data self-heal
completed on /a.tx

gluster log

[2011-03-08 10:01:54.316161] W [dict.c:1205:data_to_str] dict: @data=(nil)
[2011-03-08 10:01:54.316179] W [dict.c:1205:data_to_str] dict: @data=(nil)
[2011-03-08 10:01:54.317337] W [dict.c:1205:data_to_str] dict: @data=(nil)
[2011-03-08 10:01:54.317417] W [dict.c:1205:data_to_str] dict: @data=(nil)
[2011-03-08 10:01:54.318279] I
[glusterd3_1-mops.c:1233:glusterd3_1_stage_op] glusterd: Sent op req
to 0 peers
[2011-03-08 10:01:54.323578] I
[glusterd-utils.c:858:glusterd_service_stop] : Stopping gluster nfsd
running in pid: 11310
[2011-03-08 10:01:55.342835] I
[glusterd3_1-mops.c:1323:glusterd3_1_commit_op] glusterd: Sent op req
to 0 peers
[2011-03-08 10:01:55.342964] I
[glusterd3_1-mops.c:1145:glusterd3_1_cluster_unlock] glusterd: Sent
unlock req to 0 peers
[2011-03-08 10:01:55.342985] I
[glusterd-op-sm.c:4845:glusterd_op_txn_complete] glusterd: Cleared
local lock
[2011-03-08 10:02:04.886217] I
[glusterd-handler.c:715:glusterd_handle_cli_get_volume] glusterd:
Received get vol req
[2011-03-08 10:02:18.828788] I
[glusterd3_1-mops.c:172:glusterd3_1_friend_add_cbk] glusterd: Received
RJT from uuid: 34f59271-21b3-4533-9a89-2fd06523c729, host:
testefitarc01, port: 0
[2011-03-08 10:02:18.828818] I
[glusterd-utils.c:2066:glusterd_friend_find_by_uuid] glusterd: Friend
found.. state: Peer in Cluster


brick logs

[2011-03-08 10:02:29.3068] D [posix.c:265:posix_lstat_with_gfid]
test-volume-posix: failed to get gfid
[2011-03-08 10:02:29.4383] D [io-threads.c:2072:__iot_workers_scale]
test-volume-io-threads: scaled threads to 2 (queue_size=4/2)
[2011-03-08 10:02:29.4641] D [dict.c:331:dict_get]
(-->/usr/lib64/libglusterfs.so.0(call_resume+0xc5e) [0x2b46d2e1008e]
(-->/usr/lib64/glusterfs/3.1.2/xlator/performance/io-threads.so(iot_lookup_wrapper+0xda)
[0x2b80069a]
(-->/usr/lib64/glusterfs/3.1.2/xlator/features/locks.so(pl_lookup+0x78)
[0x2b5e9f48]))) dict: @this=(nil) key=glusterfs.entrylk-count
[2011-03-08 10:02:29.4663] D [dict.c:331:dict_get]
(-->/usr/lib64/libglusterfs.so.0(call_resume+0xc5e) [0x2b46d2e1008e]
(-->/usr/lib64/glusterfs/3.1.2/xlator/performance/io-threads.so(iot_lookup_wrapper+0xda)
[0x2b80069a]
(-->/usr/lib64/glusterfs/3.1.2/xlator/features/locks.so(pl_lookup+0x78)
[0x2b5e9f48]))) dict: @this=(nil) key=glusterfs.entrylk-count
[2011-03-08 10:02:29.4698] D [dict.c:331:dict_get]
(-->/usr/lib64/libglusterfs.so.0(call_resume+0xc5e) [0x2b46d2e1008e]
(-->/usr/lib64/glusterfs/3.1.2/xlator/performance/io-threads.so(iot_lookup_wrapper+0xda)
[0x2b80069a]
(-->/usr/lib64/glusterfs/3.1.2/xlator/features/locks.so(pl_lookup+0x92)
[0x2b5e9f62]))) dict: @this=(nil) key=glusterfs.inodelk-count
[2011-03-08 10:02:29.4723] D [dict.c:331:dict_get]
(-->/usr/lib64/libglusterfs.so.0(call_resume+0xc5e) [0x2b46d2e1008e]
(-->/usr/lib64/glusterfs/3.1.2/xlator/performance/io-threads.so(iot_lookup_wrapper+0xda)
[0x2b80069a]
(-->/usr/lib64/glusterfs/3.1.2/xlator/features/locks.so(pl_lookup+0x92)
[0x2b5e9f62]))) dict: @this=(nil) key=glusterfs.inodelk-count
[2011-03-08 10:02:29.4746] D [dict.c:331:dict_get]
(-->/usr/lib64/libglusterfs.so.0(call_resume+0xc5e) [0x2b46d2e1008e]
(-->/usr/lib64/glusterfs/3.1.2/xlator/performance/io-threads.so(iot_lookup_wrapper+0xda)
[0x2b80069a]
(-->/usr/lib64/glusterfs/3.1.2/xlator/features/locks.so(pl_lookup+0xad)
[0x2b5e9f7d]))) dict: @this=(nil) key=glusterfs.posixlk-count
[2011-03-08 10:02:29.4770] D [dict.c:331:dict_get]
(-->/usr/lib64/libglusterfs.so.0(call_resume+0xc5e) [0x2b46d2e1008e]
(-->/usr/lib64/glusterfs/3.1.2/xlator/performance/io-threads.so(iot_lookup_wrapper+0xda)
[0x2b80069a]
(-->/usr/lib64/glusterfs/3.1.2/xlator/features/locks.so(pl_lookup+0xad)
[0x2b5e9f7d]))) dict: @this=(nil) key=glusterfs.posixlk-count
[2011-03-08 10:02:29.4838] D [dict.c:331:dict_get]
(-->/usr/lib64/libglusterfs.so.0(call_resume+0xc5e) [0x2b46d2e1008e]
(-->/usr/lib64/glusterfs/3.1.2/xlator/performance/io-threads.so(iot_lookup_wrapper+0xda)
[0x2b80069a]
(-->/usr/lib64/glusterfs/3.1.2/xlato

Re: [Gluster-users] Self heal doesn't seem to work when file is updated

2011-03-08 Thread Joe Landman

On 03/08/2011 12:51 PM, Mohit Anchlia wrote:

They are all synced from the same NTP server, so they are exactly the same. I
don't really think it has anything to do with it, since stat, touch, and ls
-alR are quite different commands. Also, this
seems to be a problem only when updating existing files while a server is
down.



We've run into problems with a number of things when timing drifted by 
more than a few seconds.


Turn up debugging on the server.  Then try your test.  See if it still 
fails.
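In 3.1 terms, "turn up debugging" usually means raising the log levels. A sketch, with a placeholder volume and server name and option names to be checked against the installed version, would be:

# on the servers, raise the brick log level
gluster volume set test-volume diagnostics.brick-log-level DEBUG
# on the client, remount with a verbose log level
mount -t glusterfs -o log-level=DEBUG server:/test-volume /mnt/glusterfs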


--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: land...@scalableinformatics.com
web  : http://scalableinformatics.com
   http://scalableinformatics.com/sicluster
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Self heal doesn't seem to work when file is updated

2011-03-08 Thread Mohit Anchlia
They are all synced from the same NTP server, so they are exactly the same. I
don't really think it has anything to do with it, since stat, touch, and ls
-alR are quite different commands. Also, this
seems to be a problem only when updating existing files while a server is
down.

On Tue, Mar 8, 2011 at 9:46 AM, Joe Landman
 wrote:
> On 03/08/2011 12:45 PM, Mohit Anchlia wrote:
>>
>> Can someone please help? I am stuck here figuring out if this is a
>> bug? I've tried stat, touch, ls -alR but nothing works. If a file is
>> updated when one glusterfsd server is down then running any of the
>> mentioned command doesn't cause it to self heal.
>
> Dumb question.  How close are the system times/dates on the two bricks?
>
> --
> Joseph Landman, Ph.D
> Founder and CEO
> Scalable Informatics Inc.
> email: land...@scalableinformatics.com
> web  : http://scalableinformatics.com
>       http://scalableinformatics.com/sicluster
> phone: +1 734 786 8423 x121
> fax  : +1 866 888 3112
> cell : +1 734 612 4615
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Self heal doesn't seem to work when file is updated

2011-03-08 Thread Joe Landman

On 03/08/2011 12:45 PM, Mohit Anchlia wrote:

Can someone please help? I am stuck here figuring out if this is a
bug? I've tried stat, touch, ls -alR but nothing works. If a file is
updated when one glusterfsd server is down then running any of the
mentioned command doesn't cause it to self heal.


Dumb question.  How close are the system times/dates on the two bricks?

--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: land...@scalableinformatics.com
web  : http://scalableinformatics.com
   http://scalableinformatics.com/sicluster
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Self heal doesn't seem to work when file is updated

2011-03-08 Thread Mohit Anchlia
Can someone please help? I am stuck here trying to figure out whether this is a
bug. I've tried stat, touch, and ls -alR, but nothing works. If a file is
updated while one glusterfsd server is down, then running any of the
mentioned commands doesn't cause it to self-heal.

However, all newly created files get self healed.

On Mon, Mar 7, 2011 at 9:00 PM, Mohit Anchlia  wrote:
> I am using latest glusterfs that I downloaded 2 days back and mount
> type is glusterfs. mount -t glusterfs .. I believe it's called the native
> glusterfs client.
>
> On Mon, Mar 7, 2011 at 6:59 PM,   wrote:
>> hi Mohit,
>>   Could you please let us know the version and mount type you are using.
>>
>> Pranith
>>
>> - Original Message -
>> From: "Mohit Anchlia" 
>> To: gluster-users@gluster.org
>> Sent: Monday, March 7, 2011 11:20:57 PM
>> Subject: Re: [Gluster-users] Self heal doesn't seem to work when file is updated
>>
>> Can someone please help me with my previous post? Self healing doesn't
>> seem to work as I stated in my previous mail. Is this a bug or
>> something I am doing wrong?
>>
>> On Fri, Mar 4, 2011 at 5:52 PM, Mohit Anchlia  wrote:
>>> I have 2 servers. I crash one server1 and then write data on the mount
>>> say /mnt/ext/a.txt. Now I bring up server 1 and then do ls -alR
>>> /mnt/ext or even cat /mnt/ext. Even after running those commands I see
>>> that the contents of file on server1 are still old. It doesn't change
>>> until I do an update on /mnt/ext/a.txt, at that point it changes
>>> (which we know why).
>>>
>>> Could someone please tell me what am I doing wrong? Why is self
>>> healing not working?
>>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Glusterfs cluster recovery and healing setup.

2011-03-08 Thread pranithk
Why don't you give me a specific scenario? I will tell you what has to be done.

Pranith
- Original Message -
From: "James Burnash" 
To: "prani...@gluster.com" , "Hareem Haque" 

Cc: gluster-users@gluster.org
Sent: Tuesday, March 8, 2011 9:22:03 PM
Subject: RE: [Gluster-users] Glusterfs cluster recovery and healing setup.

Hi Pranith - correct me if I'm wrong, but if node1 suffered catastrophic 
failure and had to be completely rebuilt from scratch, then additional 
procedures would have to be completed - right?

James Burnash, Unix Engineering

-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of prani...@gluster.com
Sent: Tuesday, March 08, 2011 10:33 AM
To: Hareem Haque
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Glusterfs cluster recovery and healing setup.

hi Hareem,
 When NODE1 comes back up it will automatically be part of the cluster again, and
the self-heal of the files will happen whenever they are accessed. If you want 
to sync the files manually, follow the instructions in 
http://europe.gluster.org/community/documentation/index.php/Gluster_3.1:_Triggering_Self-Heal_on_Replicate

Pranith.
- Original Message -
From: "Hareem Haque" 
To: gluster-users@gluster.org
Sent: Tuesday, March 8, 2011 7:37:15 PM
Subject: [Gluster-users] Glusterfs cluster recovery and healing setup.

Can someone please kindly help in the following:

How best to deal with node failures and cluster repairs.


I have a problem. I setup 4 node distribute replicated cluster.

Node1-4

Node1: 192.168.1.100
Node2: 192.168.1.101
Node3: 192.168.1.102
Node4: 192.168.1.103

On Node1 I initiated the peer probe command and formed the storage pool
(Node1 being the hypothetical master).

I mounted the glusterfs native client on a machine, using Node1's address to
mount the glusterfs volume onto the client machine.


Now say NODE1 fails and goes offline. How do i add it back to the cluster
and get the cluster back to its 4 node strength.

Do i need to do rebalance or something else.

Your help is greatly appreciated.


Best Regards
Hareem. Haque

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] read access to replicate copies

2011-03-08 Thread Rosario Esposito


Hi,
is there any chance someone can help with this ?

Thanks!


Il 04/03/2011 16:27, Rosario Esposito ha scritto:

Hello,
first of all I would like to thank the gluster developers for the great
job they are doing.
I hope someone can give me more details about how replicate copies of
the same file in a distributed/replicated gluster volumes are accessed
by clients.
My specific question is: if I have a distributed/replicated volume and a
gluster native client needs to read a file, which server will be chosen ?

Let's say I have a 2-nodes cluster running the following gluster
configuration:

---
Volume Name: myvolume
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: host1:/brick
Brick2: host2:/brick
---

host1 and host2 are also gluster native clients, mounting "myvolume" in
/gluster

e.g.

[root@host1 ~]# mount | egrep "brick|gluster"
/dev/sda1 on /brick type ext3 (rw)
glusterfs#host1:/myvolume on /gluster type fuse
(rw,allow_other,default_permissions,max_read=131072)

[root@host2 ~]# mount | egrep "brick|gluster"
/dev/sda1 on /brick type ext3 (rw)
glusterfs#host2:/myvolume on /gluster type fuse
(rw,allow_other,default_permissions,max_read=131072)


If host1 needs to read the file /gluster/myfile will it use the local
copy from host1:/brick or the other copy from host2:/brick over the
network ?
Is there a way to force the client to read the local copy ?

Cheers, Rosario
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Glusterfs cluster recovery and healing setup.

2011-03-08 Thread Burnash, James
Hi Pranith - correct me if I'm wrong, but if node1 suffered catastrophic 
failure and had to be completely rebuilt from scratch, then additional 
procedures would have to be completed - right?

James Burnash, Unix Engineering

-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of prani...@gluster.com
Sent: Tuesday, March 08, 2011 10:33 AM
To: Hareem Haque
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Glusterfs cluster recovery and healing setup.

hi Hareem,
 When NODE1 comes back up it will automatically be part of the cluster again, and
the self-heal of the files will happen whenever they are accessed. If you want 
to sync the files manually, follow the instructions in 
http://europe.gluster.org/community/documentation/index.php/Gluster_3.1:_Triggering_Self-Heal_on_Replicate

Pranith.
- Original Message -
From: "Hareem Haque" 
To: gluster-users@gluster.org
Sent: Tuesday, March 8, 2011 7:37:15 PM
Subject: [Gluster-users] Glusterfs cluster recovery and healing setup.

Can someone please kindly help in the following:

How best to deal with node failures and cluster repairs.


I have a problem. I setup 4 node distribute replicated cluster.

Node1-4

Node1: 192.168.1.100
Node2: 192.168.1.101
Node3: 192.168.1.102
Node4: 192.168.1.103

On Node1 I initiated the peer probe command and formed the storage pool
(Node1 being the hypothetical master).

I mounted the glusterfs native client on a machine, using Node1's address to
mount the glusterfs volume onto the client machine.


Now say NODE1 fails and goes offline. How do i add it back to the cluster
and get the cluster back to its 4 node strength.

Do i need to do rebalance or something else.

Your help is greatly appreciated.


Best Regards
Hareem. Haque

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Glusterfs cluster recovery and healing setup.

2011-03-08 Thread pranithk
hi Hareem,
 When NODE1 comes back up it will automatically be part of the cluster again, and
the self-heal of the files will happen whenever they are accessed. If you want 
to sync the files manually, follow the instructions in 
http://europe.gluster.org/community/documentation/index.php/Gluster_3.1:_Triggering_Self-Heal_on_Replicate

Pranith.
- Original Message -
From: "Hareem Haque" 
To: gluster-users@gluster.org
Sent: Tuesday, March 8, 2011 7:37:15 PM
Subject: [Gluster-users] Glusterfs cluster recovery and healing setup.

Can someone please kindly help in the following:

How best to deal with node failures and cluster repairs.


I have a problem. I setup 4 node distribute replicated cluster.

Node1-4

Node1: 192.168.1.100
Node2: 192.168.1.101
Node3: 192.168.1.102
Node4: 192.168.1.103

On Node1 I initiated the peer probe command and formed the storage pool
(Node1 being the hypothetical master).

I mounted the glusterfs native client on a machine, using Node1's address to
mount the glusterfs volume onto the client machine.


Now say NODE1 fails and goes offline. How do i add it back to the cluster
and get the cluster back to its 4 node strength.

Do i need to do rebalance or something else.

Your help is greatly appreciated.


Best Regards
Hareem. Haque

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Roadmap

2011-03-08 Thread anthony garnier

 Ben,
For us the most important features to have will be:

0 - Continue to improve the gNFS server (and make it work on AIX and Solaris)

1 - Continuous Data Replication (over WAN)

2 - CIFS/Active Directory Support (native support)

And then : 

-Object storage  (unified file and object)

-Geo-replication to Amazon S3 (unify public and private cloud)

-Continuous Data Protection

-Improved User Interface

-REST management API's 

-Enhanced support for iSCSI SANs
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Glusterfs cluster recovery and healing setup.

2011-03-08 Thread Hareem Haque
Can someone please kindly help in the following:

How best to deal with node failures and cluster repairs.


I have a problem. I set up a 4-node distributed replicated cluster.

Node1-4

Node1: 192.168.1.100
Node2: 192.168.1.101
Node3: 192.168.1.102
Node4: 192.168.1.103

On Node1 I initiated the peer probe command and formed the storage pool
(Node1 being the hypothetical master).

I mounted the glusterfs native client on a machine, using Node1's address to
mount the glusterfs volume onto the client machine.
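That mount looks roughly like this, assuming a volume named test-volume:

mount -t glusterfs 192.168.1.100:/test-volume /mnt/glusterfs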


Now say NODE1 fails and goes offline. How do I add it back to the cluster
and get the cluster back to its 4-node strength?

Do I need to do a rebalance or something else?

Your help is greatly appreciated.


Best Regards
Hareem. Haque
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Best practice

2011-03-08 Thread Jojo Colina

I have 4 storage servers with 12 x 1 TB SATA disks each. Would it be better to
create a Linux software RAID 10 across the 12 disks and use each server as a single brick,
or to use each disk as a separate brick?

Thanks for any info.

--- 
Jojo Colina 
Managing Director/CTO 
ArcusIT.PH 
"Your IT, made CloudEASY" 

1007 Antel Global Corporate Center 
Julia Vargas cor. Meralco Avenue 
Pasig City, Philippines 1605 
Mobile: +639209198095 
Landline: +6326870422 ext. 103 


"entia non sunt multiplicanda praeter necessitatem" -- Ockhams razor 
We must not make things more complicated than necessary.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Seeking Feedback on Gluster Development Priorities/Roadmap

2011-03-08 Thread Count Zero

I think the number 1 feature that will propel Gluster to exponential growth is 
the first feature mentioned (Continuous replication over WAN). It will make it 
a ubiquitous filesystem, and will most probably get it distributed on most 
linux distributions within some time.


On Mar 8, 2011, at 7:01 AM, Ben Golub wrote:

> SEEKING FEEDBACK ON ROADMAP
> 
> Gluster is looking for feedback on its roadmap and priorities for 2011.
> 
> In Q1, we have placed a heavy emphasis on packaging Gluster as a virtual
> appliance, and have released a Gluster Virtual Storage Appliance for
> VMWare, a Gluster Amazon Machine Image, and a Gluster Rightscale template.
> 
> Our next point release will include a set of management and monitoring
> tools.
> 
> Following that, our internal priorities are:
> 
> -Continuous Data Replication (over WAN)
> 
> -Improved User Interface
> 
> -CIFS/Active Directory Support
> 
> -Object storage  (unified file and object)
> 
> -Geo-replication to Amazon S3 (unify public and private cloud)
> 
> -Continuous Data Protection
> 
> -REST management API's
> 
> -Enhanced support for ISCSi SANs
> 
> Are these the right priorities? How would you prioritize?
> 
> (We will be scheduling an IRC conference later this month).
> 
> -Ben Golub (Gluster CEO), Anand Babu (Gluster founder & CTO), and Hitesh
> Chellani (Gluster Founder & VP)
> 
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Seeking Feedback on Gluster Development Priorities/Roadmap

2011-03-08 Thread Burnash, James
Agreed. Everything that has a positive impact on performance and flexibility in 
configuring, and especially RECONFIGURING volumes would really make Glusterfs a 
star for flexible storage management.

A specific instance is this: I was migrating from GlusterFS 3.0.4 to 3.1.1
running Distributed-Replicate. Additionally, going forward I wanted to split 
the storage pool into dedicated read-write and read-only namespaces.

In order to do this, I split my storage pool of 6 servers in half, taking away 
the mirrored server of each pair.

This left me with a single server to configure as read-write, and two servers 
to configure as read-only (since this was the most heavily used functionality 
here).

Since I only had one server to setup as read-write, I had to set it up as 
Distributed, because Replicate required 2 bricks and I initially setup the 
read-write server with only one.

Now that I had all of the new servers up and running, I was able to take down 
the old servers (after a service outage to update all of the client machines to 
the new Native glusterfs client).

However, when I reconfigured the second read-write server, I had to have 
another production outage to delete the existing Replicate (only) volume and 
reconfigure it to be Distributed-Replicate. In addition, I had to restart the 
native Glusterfs client on all servers (all 175 of them) in order for them to 
be able to utilize the new configuration.

I really do understand that making all of this work without a service
interruption is probably hideously complex. That said, if GlusterFS COULD do 
all of this, it would really stand out in functionality over other distributed 
filesystems - even more so than it already does.

I must add - though I have spent my fair share of time screaming at my monitor 
and cursing the day I ever got involved with software of this complexity - the 
truth is, that pretty much defines my relationship with computers in general 
:-). It has little to do specifically with GlusterFS, which I truly believe is a
best-in-class product. Please keep focused on what it does best (distributing
large files for simultaneous access by multiple clients), and don't try to turn 
it into ZFS or some other general purpose file system.

Thanks!

James Burnash, Unix Engineering

-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Todd Daugherty
Sent: Tuesday, March 08, 2011 2:56 AM
To: Ben Golub
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Seeking Feedback on Gluster Development 
Priorities/Roadmap

Performance and Disk Pools. Those are number ONE for me.

Todd

On Tue, Mar 8, 2011 at 5:01 AM, Ben Golub  wrote:
> SEEKING FEEDBACK ON ROADMAP
>
> Gluster is looking for feedback on its roadmap and priorities for 2011.
>
> In Q1, we have placed a heavy emphasis on packaging Gluster as a virtual
> appliance, and have released a Gluster Virtual Storage Appliance for
> VMWare, a Gluster Amazon Machine Image, and a Gluster Rightscale template.
>
> Our next point release will include a set of management and monitoring
> tools.
>
> Following that, our internal priorities are:
>
> -Continuous Data Replication (over WAN)
>
> -Improved User Interface
>
> -CIFS/Active Directory Support
>
> -Object storage  (unified file and object)
>
> -Geo-replication to Amazon S3 (unify public and private cloud)
>
> -Continuous Data Protection
>
> -REST management API's
>
> -Enhanced support for ISCSi SANs
>
> Are these the right priorities? How would you prioritize?
>
> (We will be scheduling an IRC conference later this month).
>
> -Ben Golub (Gluster CEO), Anand Babu (Gluster founder & CTO), and Hitesh
> Chellani (Gluster Founder & VP)
>
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users



Re: [Gluster-users] Seeking Feedback on Gluster Development Priorities/Roadmap

2011-03-08 Thread Stephan von Krawczynski
How about the _basics_ of such a fs? Create an answer to the still unresolved
question: What files are currently not in-sync?
From the very first day of glusterfs there has been no answer to this fundamental
question for the user, and no way to monitor the real state of a replicating fs up
to the current day.

-- 
Regards,
Stephan
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] glusterfs 3.1.2 CIFS problem

2011-03-08 Thread Daniel Müller
Could it also be possible to mention MS Office files? So far there is no hint
that they cannot be managed.

---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---

-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of
schro...@iup.physik.uni-bremen.de
Sent: Tuesday, 8 March 2011 08:31
To: Amar Tumballi
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] glusterfs 3.1.2 CIFS problem

Quoting Amar Tumballi:

Hi Amar,

thanks for clearing that up.
Could it be possible to get it on the webpage as well ?

http://gluster.com/community/documentation/index.php/Gluster_3.1:_Using_CIFS_with_Gluster

To a "newbie" it sounds as though CIFS is built in (native), the same as NFS.

Thanks and Regards
Heiko


> Hi Heiko,
>
> GlusterFS doesn't support the CIFS protocol natively (as it does NFS); instead,
> you can export the glusterfs mount point as a Samba share and
> then mount it using the CIFS protocol.
>
> Regards,
> Amar
>
> On Mon, Mar 7, 2011 at 10:28 PM, 
wrote:
>
>> Hello,
>>
>> i'am testing glusterfs 3.1.2.
>>
>> AFAIK CIFS and NFS is incorporated in the glusterd.
>> But i could only get the NFS mount working.
>>
>> Running an "smbclient -L //glusterfs_server/..." will only produce
>> "Connection to cfstester failed (Error NT_STATUS_CONNECTION_REFUSED)".
>> So it looks like that there is no SMB/CIFS port to connect to.
>>
>> The docs are not very specific about how to use the CIFS interface.
>>
>> What has to be done from the CLI to get the CIFS protocol up and running
?
>> Is there anything special inside the vol configs to take care of ?
>>
>> Thanks and Regards
>> Heiko
>>
>> 
>> This message was sent using IMP, the Internet Messaging Program.
>>
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>
>




This message was sent using IMP, the Internet Messaging Program.


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] glusterfs 3.1.2 CIFS problem

2011-03-08 Thread Daniel Müller
Hello,
On CentOS 5.5 x64:
[root@ctdb1 ~]# smbclient  -L //192.168.132.56 <--- my glusterserver1
Enter root's password:
Domain=[LDAP.NET] OS=[Unix] Server=[Samba 3.5.6]

Sharename   Type  Comment
-     ---
share1  Disk
office  Disk
IPC$IPC   IPC Service (Samba 3.5.6)
rootDisk  Heimatverzeichnis root
Domain=[LDAP.NET] OS=[Unix] Server=[Samba 3.5.6]

Server   Comment
----
CTDB1Samba 3.5.6
CTDB2Samba 3.5.6

WorkgroupMaster
----
LDAP.NET CTDB1

[root@ctdb1 ~]# smbclient  -L //192.168.132.57<---my glusterserver2
Enter root's password:
Domain=[LDAP.NET] OS=[Unix] Server=[Samba 3.5.6]

Sharename   Type  Comment
-     ---
share1  Disk
IPC$IPC   IPC Service (Samba 3.5.6)
rootDisk  Heimatverzeichnis root
Domain=[LDAP.NET] OS=[Unix] Server=[Samba 3.5.6]

Server   Comment
----
CTDB1Samba 3.5.6
CTDB2Samba 3.5.6

WorkgroupMaster
----
LDAP.NET CTDB1

You need Samba running and a share pointing to a glusterfs volume.
But be aware of MS Office files: I never managed to find a setup that saves
and changes them
on a glusterfs share as it should.
This is the only reason why I do not use gluster in my production
environment.
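A minimal smb.conf stanza for such a share, assuming the glusterfs volume is already mounted at /mnt/glusterfs and with an illustrative share name, looks like:

[gluster-share]
   path = /mnt/glusterfs
   read only = no
   guest ok = no

Clients then reach it like any other Samba share, e.g. smbclient //192.168.132.56/gluster-share -U someuser.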

---
EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen

Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
---

-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of
schro...@iup.physik.uni-bremen.de
Sent: Monday, 7 March 2011 17:59
To: gluster-users@gluster.org
Subject: [Gluster-users] glusterfs 3.1.2 CIFS problem

Hello,

i'am testing glusterfs 3.1.2.

AFAIK CIFS and NFS are incorporated in glusterd,
but I could only get the NFS mount working.

Running an "smbclient -L //glusterfs_server/..." will only produce  
"Connection to cfstester failed (Error NT_STATUS_CONNECTION_REFUSED)".
So it looks like there is no SMB/CIFS port to connect to.

The docs are not very specific about how to use the CIFS interface.

What has to be done from the CLI to get the CIFS protocol up and running ?
Is there anything special inside the vol configs to take care of ?

Thanks and Regards
Heiko


This message was sent using IMP, the Internet Messaging Program.


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users