[Gluster-users] Existing Data and self mounts ?

2012-06-04 Thread Jacques du Rand
HI Guys
This all applies to Gluster3.3

I love gluster but I'm having  some difficulties understanding some things.

1. Replication (with existing data):
Two servers in simple single-brick replication, i.e. 1 volume (testvol)
-server1:/data/ && server2:/data/
-server1 has a few million files in the /data dir
-server2 has no files in the /data dir

So after I created testvol and started the volume:
QUESTION (1): Do I need to mount the volume on each of the servers like so? If
yes, why?
---> on server1: mount -t glusterfs 127.0.0.1:/testvol /mnt/gfstest
---> on server2: mount -t glusterfs 127.0.0.1:/testvol /mnt/gfstest

CLIENT:
Then I mount the client:
mount server-1-ip:/testvol /mnt/gfstest

Question (2):
Why do I only see the files from server2?

Question (3)
Whenever I'm writing/updating/working with the files on the SERVER, should I
ALWAYS do it via the local mount /mnt/gfstest, and never work with the files
directly in the brick's /data?

Question (4.1)
-What's the best practice for syncing the existing data?

Question (4.2)
-Is it safe to create a brick in a directory that already has files in it ?


Regards
Jacques
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] why is glusterfs sometimes listening on port 80?

2012-06-04 Thread Raghavendra Bhat

The GlusterFS client starts binding from port number 1023; if that port is not
available, it tries to bind to the next lower port (i.e. if 1023 is not
available because some other process is already using it, it decrements the
port to 1022 and tries to bind there). So in this case it tried all the ports
starting from 1023, found that they were not free (used by other processes),
found that port 80 was free, and bound to it.
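
That description amounts to a simple descending-bind loop. Below is a minimal
Python sketch (the helper name is mine, and real GlusterFS does this in C
starting from the privileged port 1023, which needs root; the sketch starts
higher so it can run unprivileged):

```python
import socket

def bind_descending(start_port, lowest=1024):
    """Try to bind a TCP socket to start_port; if it is taken, decrement
    the port and retry - the same fallback behavior described above."""
    for port in range(start_port, lowest - 1, -1):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            s.bind(("127.0.0.1", port))
            return s, port
        except OSError:
            s.close()
    raise RuntimeError("no free port at or below %d" % start_port)

# Occupy one high port, then watch the second call fall through lower.
blocker, p1 = bind_descending(50023)
sock, p2 = bind_descending(p1)   # p1 is now taken, so p2 < p1
```

With 1023 as the starting point and nothing reserved below it, a long run of
busy ports would eventually land the client on 80, exactly as described.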


Regards,
Raghavendra Bhat



- Original Message -
From: "Tomasz Chmielewski" 
To: "Gluster General Discussion List" 
Sent: Monday, June 4, 2012 7:02:55 AM
Subject: [Gluster-users] why is glusterfs sometimes listening on port 80?

I've noticed that I'm sometimes not able to start a webserver on machines 
running glusterfs. It turned out that glusterfs, when mounting, is sometimes 
starting to listen on port 80:

root@ca11:~# netstat -tpna | grep :80
tcp    0    0 192.168.20.31:80    192.168.20.34:24010    ESTABLISHED 13843/glusterfs


Why does glusterfs do it?

Killing glusterfs and mounting again usually fixes the problem.


-- 
Tomasz Chmielewski
http://www.ptraveler.com


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] why is glusterfs sometimes listening on port 80?

2012-06-04 Thread David Coulson
Is there a way to change this behavior? It's particularly frustrating having
Gluster mount a filesystem before the other services start up, only to find it
has stepped on the top end of the <1024 port range - IMAPS and POP3S, at 993
and 995, are typical victims.

Why does it not use ports within the
/proc/sys/net/ipv4/ip_local_port_range?
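
The sysctl mentioned above can be read directly. A small Python sketch (the
helper name is mine) that returns the kernel's ephemeral-port range, which is
what an ordinary unprivileged outgoing connection would normally draw from:

```python
def local_port_range(path="/proc/sys/net/ipv4/ip_local_port_range"):
    """Return (low, high) from the kernel's ephemeral-port-range sysctl.
    The file holds two whitespace-separated integers, e.g. "32768 60999"."""
    with open(path) as f:
        low, high = map(int, f.read().split())
    return low, high
```

GlusterFS deliberately binds below 1024 instead of using this range, so
anything else that needs a well-known low port can collide with it.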


On 6/4/12 5:46 AM, Raghavendra Bhat wrote:

The GlusterFS client starts binding from port number 1023; if that port is not
available, it tries to bind to the next lower port (i.e. if 1023 is not
available because some other process is already using it, it decrements the
port to 1022 and tries to bind there). So in this case it tried all the ports
starting from 1023, found that they were not free (used by other processes),
found that port 80 was free, and bound to it.


Regards,
Raghavendra Bhat



- Original Message -
From: "Tomasz Chmielewski"
To: "Gluster General Discussion List"
Sent: Monday, June 4, 2012 7:02:55 AM
Subject: [Gluster-users] why is glusterfs sometimes listening on port 80?

I've noticed that I'm sometimes not able to start a webserver on machines 
running glusterfs. It turned out that glusterfs, when mounting, is sometimes 
starting to listen on port 80:

root@ca11:~# netstat -tpna | grep :80
tcp    0    0 192.168.20.31:80    192.168.20.34:24010    ESTABLISHED 13843/glusterfs


Why does glusterfs do it?

Killing glusterfs and mounting again usually fixes the problem.



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] why is glusterfs sometimes listening on port 80?

2012-06-04 Thread Tomasz Chmielewski
On 06/04/2012 04:46 PM, Raghavendra Bhat wrote:
> 
> Glusterfs client starts binding from port number 1023 and if any port
> is not available, then tries to bind to the lower port. (i.e. suppose
> 1023 is not available because some other process is already using it,
> then it decrements the port to 1022 and tries to bind.) So in this
> case it has tried all the ports starting 1023 and found that those
> ports are not free (used by other processes) and found that port 80
> is free and just did bind on it.

No, it's impossible that so many ports, from 1023 down to 80, would all be in use.

OK, some more background on this.

1) actually it's not gluster listening; it's a connection from a remote gluster 
to this gluster, with local port 80, and it prevents the webserver from starting

2) reproducible when using "replace-brick" (I'm replacing a number of bricks, 
and could reproduce it with all of them so far):

gluster volume replace-brick sites ca4-int:/data/glusterfs ca3-int:/data/12 
start

3) when "replace-brick" is running, nginx accessing gluster "hangs" (its 
processes are in D state, waiting for IO)

4) the only cure is killing nginx and glusterfs

5) when doing mount -a (to mount gluster) and starting nginx after that, 
sometimes nginx wouldn't start, as there is some glusterfs local process with 
an open connection to port 80 (locally)


-- 
Tomasz Chmielewski
http://www.ptraveler.com
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Existing Data and self mounts ?

2012-06-04 Thread David Coulson



On 6/4/12 4:05 AM, Jacques du Rand wrote:

HI Guys
This all applies to Gluster3.3

I love gluster but I'm having  some difficulties understanding some 
things.


1.Replication(with existing data):
Two servers in simple single brick replication. ie 1 volume (testvol)
-server1:/data/ && server2:/data/
-server1 has a few million files in the /data dir
-server2 has no files in the /data dir

So after I created testvol and started the volume:
QUESTION (1): Do I need to mount the volume on each of the servers like so? If
yes, why?

---> on server1: mount -t glusterfs 127.0.0.1:/testvol /mnt/gfstest
---> on server2: mount -t glusterfs 127.0.0.1:/testvol /mnt/gfstest
Only if you want to access the files within the volume on the two 
servers which have the bricks on them.


CLIENT:
Then I mount the client:
mount server-1-ip:/testvol /mnt/gfstest

Question(2) :
I only see files from server2 ???

What you see is probably hit and miss, since your bricks are not consistent.


Question (3)
Whenever I'm writing/updating/working with the files on the SERVER, should I 
ALWAYS do it via the local mount /mnt/gfstest, and never work with the files 
directly in the brick's /data?
Correct - Gluster can't keep track of writes if you don't do them through 
the glusterfs mount point.


Question (4.1)
-What's the best practice for syncing the existing data?
You will need to force a manual self-heal and see if that copies all the 
data over to the other brick:


find /mnt/gfstest -noleaf -print0 | xargs --null stat >/dev/null
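
That one-liner works because a stat() on each path forces a lookup through the
glusterfs client, which is what gives it the chance to repair a stale replica.
A rough Python equivalent (hypothetical helper; same effect as the find | xargs
command):

```python
import os

def touch_all(mountpoint):
    """stat() every directory entry under `mountpoint`. On a replicated
    GlusterFS mount, each lookup lets the client notice and heal an
    out-of-sync replica - the pre-3.3 style of manual self-heal."""
    count = 0
    for root, dirs, files in os.walk(mountpoint):
        for name in dirs + files:
            os.stat(os.path.join(root, name))
            count += 1
    return count
```

Run it against the glusterfs mount point (e.g. /mnt/gfstest), never against the
brick directory itself.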




Question (4.2)
-Is it safe to create a brick in a directory that already has files in 
it ?


As long as you force a self-heal on it before you use it.









___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] why is glusterfs sometimes listening on port 80?

2012-06-04 Thread Raghavendra Bhat

A bug has already been logged for this issue and we are working on it.

https://bugzilla.redhat.com/show_bug.cgi?id=762989

Regards,
Raghavendra Bhat





- Original Message -
From: "David Coulson" 
To: "Raghavendra Bhat" 
Cc: "Gluster General Discussion List" 
Sent: Monday, June 4, 2012 3:46:16 PM
Subject: Re: [Gluster-users] why is glusterfs sometimes listening on port 80?

Is there a way to change this behavior? It's particularly frustrating having
Gluster mount a filesystem before the other services start up, only to find it
has stepped on the top end of the <1024 port range - IMAPS and POP3S, at 993
and 995, are typical victims.

Why does it not use ports within the
/proc/sys/net/ipv4/ip_local_port_range?

On 6/4/12 5:46 AM, Raghavendra Bhat wrote:
> The GlusterFS client starts binding from port number 1023; if that port is not 
> available, it tries to bind to the next lower port (i.e. if 1023 is not 
> available because some other process is already using it, it decrements the 
> port to 1022 and tries to bind there). So in this case it tried all the 
> ports starting from 1023, found that they were not free (used by other 
> processes), found that port 80 was free, and bound to it.
>
>
> Regards,
> Raghavendra Bhat
>
>
>
> - Original Message -
> From: "Tomasz Chmielewski"
> To: "Gluster General Discussion List"
> Sent: Monday, June 4, 2012 7:02:55 AM
> Subject: [Gluster-users] why is glusterfs sometimes listening on port 80?
>
> I've noticed that I'm sometimes not able to start a webserver on machines 
> running glusterfs. It turned out that glusterfs, when mounting, is sometimes 
> starting to listen on port 80:
>
> root@ca11:~# netstat -tpna|grep :80
> tcp0  0 192.168.20.31:80192.168.20.34:24010 
> ESTABLISHED 13843/glusterfs
>
>
> Why does glusterfs do it?
>
> Killing glusterfs and mounting again usually fixes the problem.
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Existing Data and self mounts ?

2012-06-04 Thread Tomasz Chmielewski

On 06/04/2012 05:21 PM, David Coulson wrote:



Question (4.2)
-Is it safe to create a brick in a directory that already has files in
it ?


As long as you force a self-heal on it before you use it.


Do you know if I'll be able to convert a distribute to 
distribute-replicate this way?


1) delete the distribute volume

2) create a distribute-replicate volume

3) run the self-heal, which hopefully results in the data being copied to the 
other brick, *not* removed?



--
Tomasz Chmielewski
http://www.ptraveler.com
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Existing Data and self mounts ?

2012-06-04 Thread Jacques du Rand
Hi David
Thanks for clearing it up
With regards to the  "self-heal":
find /mnt/gfstest -noleaf -print0 | xargs --null stat >/dev/null


a) Do I run this on server1, server2, or the client - or doesn't it matter?
Server1 is the one with the latest copy of the data.

b) Would the automatic self-heal I've been reading about in 3.3 not take care
of this over time?

c) If server1:/data has the latest copy of the data compared to
server2:/data (so whenever there is any question about which data to
replicate/replace/use, gluster MUST use the data from server1 until the volume
is fully in sync again):
does it matter in what order I list the servers at the create stage? (Keep in
mind I want to force all the data to come from server1:/data.)

"gluster volume create testvol replica 2 server1:/data server2:/data"


Best Regards
Jacques


On Mon, Jun 4, 2012 at 12:21 PM, David Coulson wrote:

>
>
> On 6/4/12 4:05 AM, Jacques du Rand wrote:
>
> HI Guys
> This all applies to Gluster3.3
>
> I love gluster but I'm having  some difficulties understanding some things.
>
>  1.Replication(with existing data):
> Two servers in simple single brick replication. ie 1 volume (testvol)
> -server1:/data/ && server2:/data/
> -server1 has a few million files in the /data dir
> -server2 has no files in the /data dir
>
>  So after i created the testvol and started the volume
> QUESTION (1): Do i  need to mount  each volume on the servers like so ? If
> yes why ?
> ---> on server1: mount -t gluster 127.0.0.1:/testvol /mnt/gfstest
> ---> on server2: mount -t gluster 127.0.0.1:/testvol /mnt/gfstest
>
> Only if you want to access the files within the volume on the two servers
> which have the bricks on them.
>
>
>  CLIENT:
> Then I mount the client:
> mount server-1-ip:/testvol /mnt/gfstest
>
>  Question(2) :
> I only see files from server2 ???
>
> Probably hit and miss what you see, since your bricks are not consistent.
>
>
>  Question (3)
> Whenever I'm writing/updating/working with the files on the SERVER i
> should ALWAYS do it via the (local mount )/mnt/gfstest. I should never work
> with files directly in the bricks /data ??
>
> Correct - Gluster can't keep track of writes if you don't do it through
> the glusterfs mount point.
>
>
>  Question (4.1)
> -Whats the best-practise to sync existing "data"  ?
>
> You will need to force a manual self-heal and see if that copies all the
> data over to the other brick.
>
> find /mnt/gfstest -noleaf -print0 | xargs --null stat >/dev/null
>
>
>
>  Question (4.2)
> -Is it safe to create a brick in a directory that already has files in it ?
>
>
> As long as you force a self-heal on it before you use it.
>
>
>
>
>
>
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] File renamed, but I get "overwrite?" prompt when I try to rename it back again to the original name

2012-06-04 Thread Sabyasachi Ruj
So here is the situation (this is not reproducible every time, but I
got it twice).

directory "sqlite3.org" has already been renamed from client2.
executing these commands from client1 gives this output:

# stat sqlite.org
stat: cannot stat `sqlite.org': No such file or directory

This proves that there is no sqlite.org. But when I execute the following:

# mv sqlite.org1 sqlite.org
mv: overwrite `sqlite.org'?

Any idea, why does it prompt me to overwrite sqlite.org when there is
no such directory?

-- 
Sabyasachi
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] A very special announcement from Gluster.org

2012-06-04 Thread Philip
Doesn't sound like the solution we need for a large cluster. We would like
to keep it simple and stupid. Squeeze has libssl version 0.9.8. Maybe you
can work with Toby Corkindale, since he managed to create a deb for squeeze?

2012/6/2 John Mark Walker 

> Philip -
>
> Gluster.org is only nominally a Red Hat operation. If you want better
> Debian support, you need to help us do it. Also, it's common for legacy
> distributions not to have support for brand new releases. You can expect
> Squeeze to support 3.2.x, but not necessarily 3.3.x. I agree that this can
> be frustrating at times, but in those cases, it's better to compile from
> source anyway.
>
> I think the package maintainer for Debian is on the list - perhaps he can
> shed some light. Which version of libssl is on Squeeze?
>
> And finally, when you compiled libssl from source, did you install the
> source .deb so that it registered with the package database? If you
> compiled a tarball, did you specify an install directory when running
> ./configure? Compiling from source will by default place libraries into
> /usr/local/lib, and you probably need to run ldconfig before it will
> satisfy the dependency.
>
> -JM
>
>
> --
>
> Installing libssl1.0.0 from source does not help; I am still getting the
> same error message. Come on Gluster/Redhat, it's kind of ridiculous if you
> only support an unstable operating system for your stable release.
> 2012/6/2 Philip 
>
>> I haven't, but I will give it a try! Maybe you should also reconsider the
>> way you are building the debs. Building debs for stable software on/for an
>> unstable operating system isn't smart, is it?
>>
>> 2012/6/2 Sachidananda URS 
>>
>>> Hi Philip,
>>>
>>> Did you try installing libssl from source to meet the dependency?
>>>
>>> -sac
>>>
>>> Sent from my iPhone
>>>
>>> On 02-Jun-2012, at 13:57, Philip  wrote:
>>>
>>> It is still not possible to install the 3.3 deb on a stable release of
>>> debian because squeeze has no libssl1.0.0.
>>>
>>> 2012/5/31 John Mark Walker 
>>>
 Today, we’re announcing the next generation of 
 GlusterFS,
 version 3.3. The release has been a year in the making and marks several
 firsts: the first post-acquisition release under Red Hat, our first major
 act as an openly-governed project and
 our first foray beyond NAS. We’ve also taken our first steps towards
 merging big data and unstructured data storage, giving users and developers
 new ways of managing their data scalability challenges.

 GlusterFS is an open source, fully distributed storage solution for the
 world’s ever-increasing volume of unstructured data. It is a software-only,
 highly available, scale-out, centrally managed storage pool that can be
 backed by POSIX filesystems that support extended attributes, such as
 Ext3/4, XFS, BTRFS and many more.

 This release provides many of the most commonly requested features
 including proactive self-healing, quorum enforcement, and granular locking
 for self-healing, as well as many additional bug fixes and enhancements.

 Some of the more noteworthy features include:

- Unified File and Object storage – Blending OpenStack’s Object
Storage API   with
GlusterFS provides simultaneous read and write access to data as files 
 or
as objects.
- HDFS compatibility – Gives Hadoop administrators the ability to
run MapReduce jobs on unstructured data on GlusterFS and access the data
with well-known tools and shell scripts.
- Proactive self-healing – GlusterFS volumes will now automatically
restore file integrity after a replica recovers from failure.
- Granular locking – Allows large files to be accessed even during
self-healing, a feature that is particularly important for VM images.
- Replication improvements – With quorum enforcement you can be
confident that  your data has been written in at least the configured
number of places before the file operation returns, allowing a
user-configurable adjustment to fault tolerance vs performance.

 Visit http://www.gluster.org to download.
 Packages are available for most distributions, including Fedora, Debian,
 RHEL, Ubuntu and CentOS.

 Get involved! Join us on #gluster on freenode, join our mailing 
 list,
 ‘like’ our Facebook page , follow us
 on Twitter , or check out our LinkedIn
 group .

 GlusterFS is an open source project sponsored by Red 
 Hat®,
 who uses it in its line of Red Hat Storage
>

Re: [Gluster-users] 'Transport endpoint not connected'

2012-06-04 Thread Brian Candler
On Fri, May 04, 2012 at 01:27:35PM +0530, Amar Tumballi wrote:
> Are you sure the clients are not automatically remounted within 10
> seconds of servers coming up? This was working fine from the time we
> had networking code written.
> 
> Internally, there is a timer thread which makes sure we
> automatically reconnect after 10seconds.
> 
> Please see if you can repeat the operations 2-3 times before doing a
> umount/mount, it should have gotten reconnected.
> 
> If not, please file a bug report with the glusterfs logs (of the
> client process).

OK this happened again, bug reported at
https://bugzilla.redhat.com/show_bug.cgi?id=828509

Nothing happened in the client log for each of the attempts to 'ls' the
affected directory; however, the client log does have evidence of what looks
like a client-side crash of some sort (sig 11)
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] A very special announcement from Gluster.org

2012-06-04 Thread John Mark Walker
- Original Message -

> Doesn't sound like the solution we need for a large cluster. We would
> like to keep it simple and stupid . Squeeze has libssl version
> 0.9.8. Maybe you can work with " Toby Corkindale" since he managed
> to create a deb for sq ueeze?

I hear what you're saying. I expect this to be resolved sooner, rather than 
later. 

Thanks, 
JM 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] 'Transport endpoint not connected'

2012-06-04 Thread Activepage Gmail


I have the same problem with 3.2.6; from time to time, on a random basis, some 
server gives me "Transport endpoint not connected".

I have to reboot the server to make it connect again.

I run Fedora 16 and Gluster 3.2.6-2

- Original Message - 
From: "Brian Candler" 

To: "Amar Tumballi" 
Cc: 
Sent: Monday, June 04, 2012 5:07 PM
Subject: Re: [Gluster-users] 'Transport endpoint not connected'



On Fri, May 04, 2012 at 01:27:35PM +0530, Amar Tumballi wrote:

Are you sure the clients are not automatically remounted within 10
seconds of servers coming up? This was working fine from the time we
had networking code written.

Internally, there is a timer thread which makes sure we
automatically reconnect after 10seconds.

Please see if you can repeat the operations 2-3 times before doing a
umount/mount, it should have gotten reconnected.

If not, please file a bug report with the glusterfs logs (of the
client process).


OK this happened again, bug reported at
https://bugzilla.redhat.com/show_bug.cgi?id=828509

Nothing happened in the client log for each of the attempts to 'ls' the
affected directory; however, the client log does have evidence of what looks
like a client-side crash of some sort (sig 11)
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] GlusterFS 3.3 not yet quite ready for Virtual Machines storage

2012-06-04 Thread Fernando Frediani (Qube)
Hi,

I have been reading about and trying to test (without much success) Gluster 3.3 
for Virtual Machines storage, and from what I could see it isn't yet quite ready 
for running virtual machines.

One great improvement, the granular locking that is essential for these types of 
environments, was achieved, but the other one still isn't: the ability to use 
striped+(distributed)+replicated volumes.
As it stands now, the natural choice would be Distributed + Replicated, but when 
storing a Virtual Machine image it resides on a single brick (replicated, of 
course), so the maximum write IOPS is the equivalent of a single brick's RAID 
controller and the disks underneath it. If striped+(distributed)+replicated were 
available, the IOPS for a large Virtual Machine image would be spread across all 
the bricks containing it, and therefore across multiple bricks and RAID 
controllers.
Also, if I understand correctly, the maximum size of a file would no longer be 
limited by the size of a brick, since, again, the file would be spread across 
multiple bricks.

This type of volume is said to be available in version 3.3, but the 
documentation says it is only for MapReduce workloads.

What is everybody's opinion about this, and has it been thought about or 
considered?

Regards,

Fernando
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] File renamed, but I get "overwrite?" prompt when I try to rename it back again to the original name

2012-06-04 Thread Anand Avati
You probably are treading within the narrow boundary of fuse's
entry/attribute timeout. Can you mount with --attribute-timeout=0 and
--entry-timeout=0 and see if it eliminates the behavior?

Avati

On Mon, Jun 4, 2012 at 6:26 AM, Sabyasachi Ruj  wrote:

> So here is the situation (this is not reproducible every time, but I
> got it twice).
>
> directory "sqlite3.org" has already been renamed from client2.
> executing these commands from client1 gives this output:
>
># stat sqlite.org
>stat: cannot stat `sqlite.org': No such file or directory
>
> This proves that there is no sqlite.org. But when I execute the following:
>
># mv sqlite.org1 sqlite.org
>mv: overwrite `sqlite.org'?
>
> Any idea, why does it prompt me to overwrite sqlite.org when there is
> no such directory?
>
> --
> Sabyasachi
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Striped replicated volumes in Gluster 3.3.0

2012-06-04 Thread Amar Tumballi

On 06/04/2012 11:21 AM, Amar Tumballi wrote:

On 06/01/2012 10:18 PM, Travis Rhoden wrote:

Did an answer to Christian's question pop up? I was going to write in
with the exact same one.

If I created a replicated striped volume, what would keep it from
working in a non-Hadoop environment? Does the NFS server refuse to
export such a volume? Does the FUSE client refuse to mount such a volume?

I read through the docs and found the following phrase: "In this
release, configuration of this volume type is supported only for Map
Reduce workloads."
What does that mean exactly? Hopefully not, that I'm unable to store
my KVM images on it?






Hi,

Striped-Replicated volumes can be created like any other volume type
with GlusterFS-3.3.0. They are not restricted in how they are exported: NFS
and FUSE both work, and they will still work in a non-Hadoop environment.

The statement in the documentation reflects what we have tested this volume
type with: compared with the other volume types, Striped-Replicated volumes
have so far been tested only with Hadoop workloads.

You can use a striped-replicated volume, but run a test for some time
before putting it into production. If it works for you, we can say the
feature is good; if there are issues, we will be glad to fix them and make
it more stable.

Regards,
Amar

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] local IO

2012-06-04 Thread Костырев Александр Алексеевич
hello! A question about IO:

I've set up glusterfs with distributed-replicated volumes, like this:

gluster volume create test-mail replica 2 transport tcp 10.0.1.132:/mnt/ld0 
10.0.1.133:/mnt/ld0 10.0.1.132:/mnt/ld1 10.0.1.133:/mnt/ld1

 

server1 is 10.0.1.132
server2 is 10.0.1.133

gluster volume info all

Volume Name: test-mail
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.0.1.132:/mnt/ld0
Brick2: 10.0.1.133:/mnt/ld0
Brick3: 10.0.1.132:/mnt/ld1
Brick4: 10.0.1.133:/mnt/ld1

and I mount glusterfs on each of the servers at /mnt/gfs_mail:

mount -t glusterfs 127.0.0.1:/vms /mnt/gluster

 

The question: when server1 reads mail from /mnt/gfs_mail, are all the IO 
operations LOCAL?
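
A sketch of how that brick list pairs up, assuming (as the volume info above
suggests) that consecutive bricks on the create command line form the replica
sets:

```python
def replica_sets(bricks, replica):
    """Group bricks into replica sets: consecutive groups of `replica`
    bricks on the create command line mirror each other."""
    return [bricks[i:i + replica] for i in range(0, len(bricks), replica)]

bricks = ["10.0.1.132:/mnt/ld0", "10.0.1.133:/mnt/ld0",
          "10.0.1.132:/mnt/ld1", "10.0.1.133:/mnt/ld1"]
sets = replica_sets(bricks, 2)
# -> [['10.0.1.132:/mnt/ld0', '10.0.1.133:/mnt/ld0'],
#     ['10.0.1.132:/mnt/ld1', '10.0.1.133:/mnt/ld1']]
# Each set spans both servers, so every file has a copy on each node;
# whether a read is actually served from the local copy is up to the client.
```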

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Striped replicated volumes in Gluster 3.3.0

2012-06-04 Thread Fernando Frediani (Qube)
I tried it for hosting Virtual Machine images and it didn't work at all. I was 
hoping to be able to spread the IOPS more through the cluster.
That's part of what I was trying to say in the email I sent earlier today.

Fernando

-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Amar Tumballi
Sent: 05 June 2012 03:51
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] Striped replicated volumes in Gluster 3.3.0

On 06/04/2012 11:21 AM, Amar Tumballi wrote:
> On 06/01/2012 10:18 PM, Travis Rhoden wrote:
>> Did an answer to Christian's question pop up? I was going to write in 
>> with the exact same one.
>>
>> If I created a replicated striped volume, what would keep it from 
>> working in a non-Hadoop environment? Does the NFS server refuse to 
>> export such a volume? Does the FUSE client refuse to mount such a volume?
>>
>> I read through the docs and found the following phrase: "In this 
>> release, configuration of this volume type is supported only for Map 
>> Reduce workloads."
>> What does that mean exactly? Hopefully not, that I'm unable to store 
>> my KVM images on it?
>>
>>
>

Hi,

Striped-Replicated volumes can be created like any other volume type with 
GlusterFS-3.3.0. They are not restricted in how they are exported: NFS and FUSE 
both work, and they will still work in a non-Hadoop environment.

The statement in the documentation reflects what we have tested this volume type 
with: compared with the other volume types, Striped-Replicated volumes have so 
far been tested only with Hadoop workloads.

You can use a striped-replicated volume, but run a test for some time before 
putting it into production. If it works for you, we can say the feature is good; 
if there are issues, we will be glad to fix them and make it more stable.

Regards,
Amar

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Striped replicated volumes in Gluster 3.3.0

2012-06-04 Thread Amar Tumballi

I tried it for hosting Virtual Machine images and it didn't work at all. I was 
hoping to be able to spread the IOPS more through the cluster.
  That's part of what I was trying to say in the email I sent earlier today.


I saw that mail, and I agree: the target of 3.3.0 was to make 
glusterfs more stable and to add the features that would make virtual 
machine hosting on GlusterFS a possibility in the 3.4.0 time-frame.


In this release the target was to make it functionally more complete by 
adding more features to the CLI, making scaling of volumes possible, and 
bringing in the basic design changes that make future compatibility 
possible.


Saying that, *not working at all* is not a good thing, and we want to 
fix it. Please help us by filing a bug report and adding log file 
information there.


Thanks,
Amar
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] File renamed, but I get "overwrite?" prompt when I try to rename it back again to the original name

2012-06-04 Thread Sabyasachi Ruj
Using --attribute-timeout=0 did not make any difference. Strangely, I am
noticing that this problem happens when I use tab-completion to complete
the name of "sqlite.org"!

On 5 June 2012 04:15, Anand Avati  wrote:
> You probably are treading within the narrow boundary of fuse's
> entry/attribute timeout. Can you mount with --attribute-timeout=0 and
> --entry-timeout=0 and see if it eliminates the behavior?
>
> Avati
>
> On Mon, Jun 4, 2012 at 6:26 AM, Sabyasachi Ruj  wrote:
>>
>> So here is the situation (this is not reproducible every time, but I
>> got it twice).
>>
>> directory "sqlite3.org" has already been renamed from client2.
>> executing these commands from client1 gives this output:
>>
>>    # stat sqlite.org
>>    stat: cannot stat `sqlite.org': No such file or directory
>>
>> This proves that there is no sqlite.org. But when I execute the following:
>>
>>    # mv sqlite.org1 sqlite.org
>>    mv: overwrite `sqlite.org'?
>>
>> Any idea, why does it prompt me to overwrite sqlite.org when there is
>> no such directory?
>>
>> --
>> Sabyasachi
>
>



-- 
Sabyasachi


Re: [Gluster-users] File renamed, but I get "overwrite?" prompt when I try to rename it back again to the original name

2012-06-04 Thread Anand Avati
Are you on 3.2.x? If so can you try 'gluster volume set  set
performance.stat-prefetch off' and try again?

Avati
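Taken together, the two workarounds suggested in this thread can be sketched as follows. The server host, volume name `testvol`, and mount point are placeholders, the stat-prefetch tuning applies to 3.2.x per the question above, and the documented `volume set` syntax takes the volume name between `set` and the option; verify both against `gluster volume set help` for your version:

```shell
# Mount with FUSE entry/attribute caching disabled (the first suggestion);
# "server" and "testvol" are placeholder names.
glusterfs --attribute-timeout=0 --entry-timeout=0 \
          --volfile-server=server --volfile-id=testvol /mnt/testvol

# On 3.2.x, disable stat-prefetch for the volume (sketch of the documented
# syntax; confirm the option name with `gluster volume set help`):
gluster volume set testvol performance.stat-prefetch off
```

Both commands require a running gluster cluster, so they are shown here only as a hedged sketch of the suggested workarounds.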

On Mon, Jun 4, 2012 at 10:58 PM, Sabyasachi Ruj  wrote:

> Did not make any difference using  --attribute-timeout=0. Strangely I
> am noticing that this problem happens if I use tab-completion to
> complete the name of "sqlite.org"!
>
> On 5 June 2012 04:15, Anand Avati  wrote:
> > You probably are treading within the narrow boundary of fuse's
> > entry/attribute timeout. Can you mount with --attribute-timeout=0 and
> > --entry-timeout=0 and see if it eliminates the behavior?
> >
> > Avati
> >
> > On Mon, Jun 4, 2012 at 6:26 AM, Sabyasachi Ruj 
> wrote:
> >>
> >> So here is the situation (this is not reproducible every time, but I
> >> got it twice).
> >>
> >> directory "sqlite3.org" has already been renamed from client2.
> >> executing these commands from client1 gives this output:
> >>
> >># stat sqlite.org
> >>stat: cannot stat `sqlite.org': No such file or directory
> >>
> >> This proves that there is no sqlite.org. But when I execute the
> following:
> >>
> >># mv sqlite.org1 sqlite.org
> >>mv: overwrite `sqlite.org'?
> >>
> >> Any idea, why does it prompt me to overwrite sqlite.org when there is
> >> no such directory?
> >>
> >> --
> >> Sabyasachi
> >
> >
>
>
>
> --
> Sabyasachi
>


Re: [Gluster-users] File renamed, but I get "overwrite?" prompt when I try to rename it back again to the original name

2012-06-04 Thread Sabyasachi Ruj
I am on: glusterfs 3.3.0 built on Jun  1 2012 12:08:38

On 5 June 2012 11:30, Anand Avati  wrote:
> Are you on 3.2.x? If so can you try 'gluster volume set  set
> performance.stat-prefetch off' and try again?
>
> Avati
>
>
> On Mon, Jun 4, 2012 at 10:58 PM, Sabyasachi Ruj  wrote:
>>
>> Did not make any difference using  --attribute-timeout=0. Strangely I
>> am noticing that this problem happens if I use tab-completion to
>> complete the name of "sqlite.org"!
>>
>> On 5 June 2012 04:15, Anand Avati  wrote:
>> > You probably are treading within the narrow boundary of fuse's
>> > entry/attribute timeout. Can you mount with --attribute-timeout=0 and
>> > --entry-timeout=0 and see if it eliminates the behavior?
>> >
>> > Avati
>> >
>> > On Mon, Jun 4, 2012 at 6:26 AM, Sabyasachi Ruj 
>> > wrote:
>> >>
>> >> So here is the situation (this is not reproducible every time, but I
>> >> got it twice).
>> >>
>> >> directory "sqlite3.org" has already been renamed from client2.
>> >> executing these commands from client1 gives this output:
>> >>
>> >>    # stat sqlite.org
>> >>    stat: cannot stat `sqlite.org': No such file or directory
>> >>
>> >> This proves that there is no sqlite.org. But when I execute the
>> >> following:
>> >>
>> >>    # mv sqlite.org1 sqlite.org
>> >>    mv: overwrite `sqlite.org'?
>> >>
>> >> Any idea, why does it prompt me to overwrite sqlite.org when there is
>> >> no such directory?
>> >>
>> >> --
>> >> Sabyasachi
>> >
>> >
>>
>>
>>
>> --
>> Sabyasachi
>
>



-- 
Sabyasachi


Re: [Gluster-users] local IO

2012-06-04 Thread Pranith Kumar Karampuri
hi,
   Not necessarily. When a file is first accessed, whichever brick responds 
fastest is the one that serves reads/stat etc.; writes/create/rm etc. 
happen on both bricks.

Pranith.
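If you want reads pinned to a particular replica rather than whichever brick responds first, AFR exposes a read-subvolume option. The exact option name and the client-xlator value format below are assumptions to verify with `gluster volume set help` on your version:

```shell
# Hypothetical sketch: prefer the first replica (client xlator
# "test-mail-client-0") for reads on volume "test-mail". Option name and
# value format are assumptions; verify before use.
gluster volume set test-mail cluster.read-subvolume test-mail-client-0
```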
- Original Message -
From: "Костырев Александр Алексеевич" 
To: gluster-users@gluster.org
Sent: Tuesday, June 5, 2012 8:52:11 AM
Subject: [Gluster-users] local IO





Hello! A question about IO:

I've fired up glusterfs with distributed-replicated volumes like this:

gluster volume create test-mail replica 2 transport tcp 10.0.1.132:/mnt/ld0 
10.0.1.133:/mnt/ld0 10.0.1.132:/mnt/ld1 10.0.1.133:/mnt/ld1

server1 is 10.0.1.132
server2 is 10.0.1.133



gluster volume info all

Volume Name: test-mail
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.0.1.132:/mnt/ld0
Brick2: 10.0.1.133:/mnt/ld0
Brick3: 10.0.1.132:/mnt/ld1
Brick4: 10.0.1.133:/mnt/ld1
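In a distributed-replicate volume, consecutive bricks in the create command form the replica sets, so each mirrored pair here spans both servers; a small sketch that prints the pairs:

```shell
# Replica sets of a "replica 2" create command are consecutive brick pairs;
# printing two bricks per line makes the mirroring pairs explicit.
bricks="10.0.1.132:/mnt/ld0 10.0.1.133:/mnt/ld0 10.0.1.132:/mnt/ld1 10.0.1.133:/mnt/ld1"
printf '%s\n' $bricks | xargs -n 2
# prints:
#   10.0.1.132:/mnt/ld0 10.0.1.133:/mnt/ld0
#   10.0.1.132:/mnt/ld1 10.0.1.133:/mnt/ld1
```

Because each pair spans both servers, a read of any given file may be served by either server, which is relevant to the local-IO question below.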





and mounted glusterfs on each of the servers at /mnt/gfs_mail:

mount -t glusterfs 127.0.0.1:/vms /mnt/gluster

The question: when server1 reads mail from /mnt/gfs_mail, are all IO 
operations LOCAL?


Re: [Gluster-users] Existing Data and self mounts ?

2012-06-04 Thread Tomasz Chmielewski

On 06/04/2012 07:15 PM, Amar Tumballi wrote:


Do you know if I'll be able to convert a distribute to
distribute-replicate this way?

1) delete the distribute volume

2) create a distribute-replicate volume

3) run the self-heal, which hopefully results in the data moved to the
other brick, *not* removed?


With the 3.3.0 release, all three steps can be achieved in a single step;
just do

bash# gluster volume add-brick  replica N BRICK1 BRICK2 .. BRICKn

where:

VOLNAME is the distribute volume name,

N is the target replica count (in this case 2, since one add-brick command
can increase the replica count only by 1), and

BRICK(1-n) are the gluster bricks to be added as pairs to the existing
bricks, in order.

Let the proactive self-heal daemon take care of syncing your data :-)
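A concrete sketch of this one-step conversion for a two-brick distribute volume; the volume name, hostnames, and paths are hypothetical:

```shell
# Existing distribute volume "myvol" with bricks
#   server1:/export/b1  server2:/export/b2
# Add one new brick per existing brick, raising the replica count to 2:
gluster volume add-brick myvol replica 2 server3:/export/b1 server4:/export/b2

# The proactive self-heal daemon then syncs existing files to the new
# bricks; progress can be checked (3.3 syntax) with:
gluster volume heal myvol info
```

These commands require a running cluster, so this is a sketch under the stated assumptions rather than a tested recipe.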


Hmm, but the steps you describe (gluster volume add-brick  
replica N ...), assuming I upgrade to 3.3:


1) my volume is "distribute" right now - will "gluster volume add-brick 
 replica N ..." work in that case?


2) I don't have any bricks to add; all are existing already

3) with 1) and 2) above - does it mean I have to delete the distribute 
volume, create it as distribute+replicate over the existing data, and 
hope it will mirror the data, not remove it? (I'm concerned that the 
xattrs will somehow confuse glusterfs.)



--
Tomasz Chmielewski
http://www.ptraveler.com
