Re: [Gluster-users] error in glusterfs volume creation

2010-11-01 Thread Rick King
Hello Kaushik, 

I am a newbie myself. What's the peer status?

gluster peer status

What command are you using to create the volume?
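In case it helps, the usual sequence on 3.1 looks roughly like this (hostnames and brick paths below are placeholders, not taken from your setup):

```shell
# On server1: check that the peer shows up as connected
gluster peer status

# If the peer is missing, probe it by a name the servers can resolve
gluster peer probe server2

# Then create and start a 2-way replicated volume
gluster volume create testvol replica 2 transport tcp \
    server1:/data/brick server2:/data/brick
gluster volume start testvol
```

If the probe succeeded but create still fails, the error message usually names the host that isn't in the peer list yet.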


My best to you, 

~~Rick




- Original Message -
From: "kaushik chatterjee" 
To: gluster-users@gluster.org
Sent: Monday, November 1, 2010 10:54:02 AM
Subject: [Gluster-users] error in glusterfs volume creation

Hi
I am a newbie in glusterfs. I have created 2 servers with glusterfs. Now,
from server one I can see the peer, but whenever I try to create a volume it
says *Creation of volume unsuccessful: host is not your friend*.

Any help please.



Thanks and Regards,

Kaushik

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
DISCLAIMER: This e-mail and any files transmitted with it ('Message') is 
intended only for the use of the recipient(s) named and may contain 
confidential information. Opinions, conclusions and other information in this 
message that do not relate to the official business of King7.


Re: [Gluster-users] glusterfs NFS dependencies (gentoo)

2010-11-01 Thread Shehjar Tikoo


Yes, you can start rpc.statd, but because we don't yet support NLM, you'll 
need to use the nolock option at mount time.
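Concretely, a mount along these lines avoids the lock manager (server and volume names are placeholders):

```shell
# NFSv3 mount of a Gluster volume without NLM locking
mount -t nfs -o vers=3,nolock,tcp server1:/testvol /mnt/gluster
```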


Jan Pisacka wrote:

Hi,

I am testing glusterfs-3.1.0 on Gentoo Linux, which is our preferred OS.
For correct NFS server functionality, some dependencies are required. I
understand that on supported distributions, all dependencies are already
installed by default. This is probably not the case for Gentoo. First I
found that the glusterfs NFS server still needs the portmapper:


[2010-10-22 16:05:20.469964] E
[rpcsvc.c:2630:nfs_rpcsvc_program_register_portmap] nfsrpc: Could not
register with portmap
[2010-10-22 16:05:20.470006] E
[rpcsvc.c:2715:nfs_rpcsvc_program_register] nfsrpc: portmap registration
of program failed
[2010-10-22 16:05:20.470038] E
[rpcsvc.c:2728:nfs_rpcsvc_program_register] nfsrpc: Program registration
failed: MOUNT3, Num: 15, Ver: 3, Port: 38465
[2010-10-22 16:05:20.470050] E [nfs.c:127:nfs_init_versions] nfs:
Program init failed
[2010-10-22 16:05:20.470061] C [nfs.c:608:notify] nfs: Failed to
initialize protocols
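One way to check the portmap side of these errors (assuming a sysvinit-style Gentoo box; service names may differ):

```shell
# List the RPC programs currently registered with portmap;
# this fails outright if portmap is not running at all
rpcinfo -p

# Start portmap, then restart the Gluster NFS server so it can register
/etc/init.d/portmap start
```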

Are there any further dependencies that need to be satisfied?
Replication and distribution work, but I still have issues when trying
to run the GNOME desktop on clients where /home is glusterfs-nfs mounted.
Conventional NFS works perfectly. The issues seem to be related to the
absent locking:

Oct 26 08:42:24 glc01 kernel: lockd: server gls13 not responding, still
trying

On the server side, I still have log records like this one:

[2010-10-25 11:00:55.823932] E [rpcsvc.c:1249:nfs_rpcsvc_program_actor]
nfsrpc: RPC program not available

Which RPC program should I install? rpc.statd? Anything else? Thanks a lot.

Jan Pisacka
Compass experiment
Institute of Plasma Physics AS CR
Praha, Czech Republic







Re: [Gluster-users] Possible to use gluster w/ email services + Tuning for fast replication

2010-11-01 Thread Rick King
Horacio / Ed, 

Thank you very much for your responses! I don't need to use the mount command on 
the 2nd node; got it. I see now that in replicated mode, when I access /mnt 
from the server (e.g. du -sh /mnt), that syncs the data to the other nodes. 

Horacio, thank you for the reminder to include glusterd in the startup sequence. 
Actually, glusterd was starting, but the fuse module wasn't loaded at first; 
I fixed that. 

Thanks again!

My best to you, 

~~Rick







- Original Message -
From: "Horacio Sanson" 
To: gluster-users@gluster.org
Sent: Monday, November 1, 2010 8:38:31 PM
Subject: Re: [Gluster-users] Possible to use gluster w/ email services +
Tuning for fast replication

On Tuesday 02 November 2010 10:04:27 Rick King wrote:
> Ed, thank you for your response!
> 
> >> Are you examining the second node directly, ie not by mounting it?
> 
> This is an interesting question. I am just examining the 2nd node directly,
> it wasn't obvious to me that the 2nd node needed to mount the data from
> the 1st node. I was just merely expecting the data to be replicate to the
> 2nd node. So my rationale is thinking I should run the following command
> on the 2nd node:
> 
> mount -t glusterfs hostnameA:/test /mnt
> 

You do not need to mount the data from the first node on the second.  As I 
understand it, GlusterFS replication works on the client side. What this means is 
that you must mount the volume on a client machine using either the glusterfs or 
native NFS drivers, and then when you add a file to the mounted volume it will be 
replicated to both nodes. 

Writing directly to one of the server nodes' storage (i.e. not through a mount 
point) will also replicate the file eventually, due to the GlusterFS self-heal 
mechanism, but this takes longer to take effect. You can always force the 
replication with the volume rebalance command, which is what you are seeing.

I had a similar problem with data not being replicated even when using a 
volume mount point, and the problem was that the glusterd daemon was not 
running on one of the nodes.  Make sure the daemon is started:

  /etc/init.d/glusterd start

Also make sure you configure it to start on system boot (it is not by default). 
You can check the manual for details:

http://www.gluster.com/community/documentation/index.php/Gluster_3.1:_Configuring_glusterd_to_Start_Automatically
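Putting these points together, a minimal end-to-end check might look like this (hostnames and brick paths are the ones from this thread, used as examples):

```shell
# On each server: make sure glusterd is running
/etc/init.d/glusterd start

# On a client: mount the volume via the native (FUSE) client
mount -t glusterfs hostnameA:/test /mnt/test

# Write through the mount point...
echo hello > /mnt/test/demo.txt

# ...and the file should appear in the brick directory on BOTH servers
ls -l /opt/demo.txt    # run this on hostnameA and on hostnameB
```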


> 
> The commands I used to create the volume from the server (hostnameA)
> 
> 1) gluster volume create test replica 2 transport tcp hostnameA:/opt
> hostnameB:/opt
> 
> 2) gluster volume start test
> 
> 3) mount -t glusterfs hostnameA:/test /mnt
> 
Again, this third step is not necessary. Make sure the glusterd daemons are 
running on all nodes and that you are accessing the volume through the 
glusterfs client or the native NFS client.

> Someone sent a message regarding a tutorial that I haven't read yet, so I
> am going to work through that tutorial, and see if I can answer some of my
> own questions. :)
> 
> Thank you again Ed for the tidbit regarding the latency issue, and your
> comment regarding HTPC applications.
> 
> ~~Rick
> - Original Message -
> From: "Ed W" 
> To: "Gluster Users" 
> Sent: Monday, November 1, 2010 2:29:57 PM
> Subject: Re: [Gluster-users] Possible to use gluster w/ email services +
> Tuning for fast replication
> 
> > Right now, I am testing out a 2 node setup, with one server replicating
> > data to another node. One thing I noticed was when I created a file or
> > directory on the server, the new data does not replicate to the other
> > node. The only time data is synced from server to the other node is when
> > I run "gluster volume rebalance test start". Is this normal? I had
> > envisioned gluster would constantly replicate changes from the server to
> > the other nodes, am I off base?
> 
> Are you examining the second node directly, ie not by mounting it?  I
> think the point is that replication only happens when you "observe" the
> second node?
> 
> Glusterfs is targeted at HPC applications where typically the nodes
> are all connected over high-performance interconnects.  It appears that
> performance degrades very quickly as the latency between nodes increases,
> and so whether the solution works for you is largely going to be
> determined by the latency between nodes on your network connection.
> 
> I'm not actually sure what some representative numbers should be?  I
> have two machines hooked up using bonded-rr intel gigabit cards
> (crossover to each other) and these ping at around 0.3ms.  However, I
> have one other machine on a gigabit connection, hooked up to a switch
> and that sometimes drops to around 0.15ms...  I believe infiniband will
> drop that latency to some few tens of microseconds?
> 
> So basically every file access on my system would suffer a 0.3ms access
> latency.  This is better than a spinning disk with no cache, which comes
> in more like 3-10ms, but obviously it's still not brilliant.
> 
> Please let us know how you get on?

Re: [Gluster-users] NFS export issue with GlusterFS 3.1

2010-11-01 Thread Shehjar Tikoo

Bernard Li wrote:

Hi Shehjar:

On Thu, Oct 28, 2010 at 12:34 AM, Shehjar Tikoo  wrote:


That's not recommended, but I can see why this is needed. The simplest way to
run the NFS server for the two replicas is to simply copy over the nfs
volume file from the current NFS server. It will work right away. The volume
file below will not.


Thanks, that worked.  I copied /etc/glusterd/nfs/nfs-server.vol to the
other server, started glusterfsd and I could mount the volume via NFS
on a client.


Performance will also drop because both your replicas are now another
network hop away. I guess the ideal situation would be to allow gnfs to run
even when there is already a server running. It's on the ToDo list.


That would be good.  However, would it also be possible for this other
server to join as a non-contributing peer (i.e. it's not sharing its
disk) but act only as the NFS server?  This way I wouldn't need to copy
over the volume file manually and could leave it to glusterd to set
everything up.  It would be a nice stop-gap workaround until you guys can
implement the above-mentioned feature.



"non-contributing peer"... I like the sound of that. ;-)

I think we'll be better off with a quick fix to the port problem. Let me 
see what I can do.


-Shehjar


Cheers,

Bernard


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Possible to use gluster w/ email services + Tuning for fast replication

2010-11-01 Thread Rick King
Ed, thank you for your response!

>> Are you examining the second node directly, ie not by mounting it?

This is an interesting question. I am just examining the 2nd node directly; it 
wasn't obvious to me that the 2nd node needed to mount the data from the 1st 
node. I was merely expecting the data to be replicated to the 2nd node. So my 
thinking is that I should run the following command on the 2nd node:

mount -t glusterfs hostnameA:/test /mnt


The commands I used to create the volume from the server (hostnameA)

1) gluster volume create test replica 2 transport tcp hostnameA:/opt 
hostnameB:/opt

2) gluster volume start test

3) mount -t glusterfs hostnameA:/test /mnt

Someone sent a message regarding a tutorial that I haven't read yet, so I am 
going to work through that tutorial, and see if I can answer some of my own 
questions. :)

Thank you again, Ed, for the tidbit regarding the latency issue, and your comment 
regarding HPC applications. 

~~Rick


- Original Message -
From: "Ed W" 
To: "Gluster Users" 
Sent: Monday, November 1, 2010 2:29:57 PM
Subject: Re: [Gluster-users] Possible to use gluster w/ email services + Tuning 
for fast replication


> Right now, I am testing out a 2 node setup, with one server replicating data 
> to another node. One thing I noticed was when I created a file or directory 
> on the server, the new data does not replicate to the other node. The only 
> time data is synced from server to the other node is when I run "gluster 
> volume rebalance test start". Is this normal? I had envisioned gluster would 
> constantly replicate changes from the server to the other nodes, am I off 
> base?

Are you examining the second node directly, ie not by mounting it?  I 
think the point is that replication only happens when you "observe" the 
second node?

Glusterfs is targeted at HPC applications where typically the nodes 
are all connected over high-performance interconnects.  It appears that 
performance degrades very quickly as the latency between nodes increases, 
and so whether the solution works for you is largely going to be 
determined by the latency between nodes on your network connection.

I'm not actually sure what some representative numbers should be?  I 
have two machines hooked up using bonded-rr intel gigabit cards 
(crossover to each other) and these ping at around 0.3ms.  However, I 
have one other machine on a gigabit connection, hooked up to a switch 
and that sometimes drops to around 0.15ms...  I believe infiniband will 
drop that latency to some few tens of microseconds?

So basically every file access on my system would suffer a 0.3ms access 
latency.  This is better than a spinning disk with no cache, which comes 
in more like 3-10ms, but obviously it's still not brilliant.

Please let us know how you get on?

Good luck

Ed W
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] error in glusterfs volume creation

2010-11-01 Thread kaushik chatterjee
Hi
I am a newbie in glusterfs. I have created 2 servers with glusterfs. Now,
from server one I can see the peer, but whenever I try to create a volume it
says *Creation of volume unsuccessful: host is not your friend*.

Any help please.



Thanks and Regards,

Kaushik


Re: [Gluster-users] Stripe volumes questions

2010-11-01 Thread Emmanuel Noobadmin
Thanks for the reply. The documentation did mention stripe is best for
HPC, but I couldn't really tell what exactly that meant. Your
explanation of "very large files that are written
once or only written to rarely" made it very clear striping won't
work for the situation I have in mind :)


On 11/2/10, Jacob Shucart  wrote:
> Emmanuel,
>
> Regular stripe is the same as distributed stripe.  In 3.1, you can expand
> a striped volume, but you have to expand it by the number of servers in
> the stripe configuration.  For example, if you have a stripe volume with 4
> servers, you can expand that volume if you add another 4 servers.  You
> cannot expand from 3 to 4.  Before using stripe, it is a good idea to
> understand where stripe will work and where it will not work.  It is
> intended for environments where you have very large files that are written
> once or only written to rarely, and those files are read by a large number
> of clients simultaneously.  The reason there is so much focus on write is
> because even small changes can cause a significant amount of re-writing in
> the data when in a stripe volume as the data needs to be restriped.
>
> -Jacob
>
> -Original Message-
> From: gluster-users-boun...@gluster.org
> [mailto:gluster-users-boun...@gluster.org] On Behalf Of Emmanuel Noobadmin
> Sent: Monday, November 01, 2010 12:56 PM
> To: Gluster General Discussion List
> Subject: [Gluster-users] Stripe volumes questions
>
> Gluster 3.0.x documentation doesn't mention stripe volume but the 3.1
> documentation (not linked but googled) mentions stripe only in
> conjunction with distributed stripe. Does this mean that plain stripe
> is no longer available as an option?
>
> Earlier 2.x documentation mentions that striped volume cannot be
> expanded after setup. Does this hold true for 3.x or is this why plain
> striped is no longer available and distributed stripe can get around
> this limitation?
>
> Fundamentally, my concern is that if I start with a 4 volume
> distributed or plain stripe to present a single large file (such as a
> virtual disk) to the client machine, would I be able to expand
> subsequently by adding more volumes/server?




Re: [Gluster-users] Possible to use gluster w/ email services + Tuning for fast replication

2010-11-01 Thread Ed W



Right now, I am testing out a 2 node setup, with one server replicating data to another 
node. One thing I noticed was when I created a file or directory on the server, the new 
data does not replicate to the other node. The only time data is synced from server to 
the other node is when I run "gluster volume rebalance test start". Is this 
normal? I had envisioned gluster would constantly replicate changes from the server to 
the other nodes, am I off base?


Are you examining the second node directly, ie not by mounting it?  I 
think the point is that replication only happens when you "observe" the 
second node?


Glusterfs is targeted at HPC applications where typically the nodes 
are all connected over high-performance interconnects.  It appears that 
performance degrades very quickly as the latency between nodes increases, 
and so whether the solution works for you is largely going to be 
determined by the latency between nodes on your network connection.


I'm not actually sure what some representative numbers should be?  I 
have two machines hooked up using bonded-rr intel gigabit cards 
(crossover to each other) and these ping at around 0.3ms.  However, I 
have one other machine on a gigabit connection, hooked up to a switch 
and that sometimes drops to around 0.15ms...  I believe infiniband will 
drop that latency to some few tens of microseconds?


So basically every file access on my system would suffer a 0.3ms access 
latency.  This is better than a spinning disk with no cache, which comes 
in more like 3-10ms, but obviously it's still not brilliant.


Please let us know how you get on?

Good luck

Ed W


Re: [Gluster-users] Stripe volumes questions

2010-11-01 Thread Jacob Shucart
Emmanuel,

Regular stripe is the same as distributed stripe.  In 3.1, you can expand
a striped volume, but you have to expand it by the number of servers in
the stripe configuration.  For example, if you have a stripe volume with 4
servers, you can expand that volume if you add another 4 servers.  You
cannot expand from 3 to 4.  Before using stripe, it is a good idea to
understand where stripe will work and where it will not work.  It is
intended for environments where you have very large files that are written
once or only written to rarely, and those files are read by a large number
of clients simultaneously.  The reason there is so much focus on write is
because even small changes can cause a significant amount of re-writing in
the data when in a stripe volume as the data needs to be restriped.
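To illustrate the constraint described above (expansion only in multiples of the stripe count), the CLI side would look roughly like this; volume and brick names are placeholders:

```shell
# A 4-way striped volume across 4 servers
gluster volume create stripevol stripe 4 transport tcp \
    s1:/exp s2:/exp s3:/exp s4:/exp
gluster volume start stripevol

# Expanding requires another full set of 4 bricks;
# adding fewer than 4 would be rejected
gluster volume add-brick stripevol s5:/exp s6:/exp s7:/exp s8:/exp
```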

-Jacob

-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Emmanuel Noobadmin
Sent: Monday, November 01, 2010 12:56 PM
To: Gluster General Discussion List
Subject: [Gluster-users] Stripe volumes questions

Gluster 3.0.x documentation doesn't mention stripe volume but the 3.1
documentation (not linked but googled) mentions stripe only in
conjunction with distributed stripe. Does this mean that plain stripe
is no longer available as an option?

Earlier 2.x documentation mentions that striped volume cannot be
expanded after setup. Does this hold true for 3.x or is this why plain
striped is no longer available and distributed stripe can get around
this limitation?

Fundamentally, my concern is that if I start with a 4 volume
distributed or plain stripe to present a single large file (such as a
virtual disk) to the client machine, would I be able to expand
subsequently by adding more volumes/server?


[Gluster-users] Stripe volumes questions

2010-11-01 Thread Emmanuel Noobadmin
Gluster 3.0.x documentation doesn't mention stripe volume but the 3.1
documentation (not linked but googled) mentions stripe only in
conjunction with distributed stripe. Does this mean that plain stripe
is no longer available as an option?

Earlier 2.x documentation mentions that striped volume cannot be
expanded after setup. Does this hold true for 3.x or is this why plain
striped is no longer available and distributed stripe can get around
this limitation?

Fundamentally, my concern is that if I start with a 4 volume
distributed or plain stripe to present a single large file (such as a
virtual disk) to the client machine, would I be able to expand
subsequently by adding more volumes/server?


Re: [Gluster-users] concurrent write and read to the same file

2010-11-01 Thread Lana Deere
Thanks for the pointer, James.

In some ways the symptom seems different, since I was seeing correct
output on other nodes but not on the node where it was being
generated; otherwise it might well be a match.  I may try turning
quickread off on my volumes to see if it helps.
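If it is the quick-read translator, disabling it should be a one-line change (option name as I understand the 3.1 volume options; "myvol" is a placeholder):

```shell
# Turn off the quick-read performance translator on the volume
gluster volume set myvol performance.quick-read off
```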

.. Lana (lana.de...@gmail.com)






On Mon, Nov 1, 2010 at 10:31 AM, Burnash, James  wrote:
> Hi Lana.
>
> Looks like you may have run into this bug as well:
>
> http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=2027
>
> James Burnash, Unix Engineering
>
> -Original Message-
> From: gluster-users-boun...@gluster.org 
> [mailto:gluster-users-boun...@gluster.org] On Behalf Of Lana Deere
> Sent: Friday, October 29, 2010 5:06 PM
> To: gluster-users@gluster.org
> Subject: [Gluster-users] concurrent write and read to the same file
>
> GlusterFS 3.1.0, distributed volumes, rdma transport, CentOS 5.4/5.5.
>
> I was doing "tail -f" on a log file while the program generating it
> was running and noticed some strange behavior.  I wasn't able to
> reproduce exactly the same strange behavior, but here is some which
> did reproduce for me reliably.  This is a small perl program producing
> a bit of output every few seconds:
>    $| = 1;
>    for ($i=0; $i < 100; ++$i) {
>        printf("This is line $i\n");
>        sleep(3);
>    }
>
> If I run it with output sent to a gluster volume,
>   perl foo.pl > bar
> it works fine: "bar" has the right content after the script
> terminates, or if I kill the script while it runs "bar" has a valid
> partial output.
>
> While the perl script is running, if I log into a different host which
> can see that volume and do "tail -f bar" that also works fine: every
> few seconds, the next line of output appears.
>
> However, while the perl script is running if I do "tail -f bar" from
> the same host as the script is running on, it will print me the
> current end of the file but then it will hang without producing any
> subsequent output.  This is true whether or not I have a tail running
> on a different host concurrently.
>
> The tail on the same host as the script will correctly notice that the
> file was reset if I kill and restart the perl script while leaving the
> tail running.  Then it prints the first line, but at that point it
> hangs again.
>
> Anyone else seeing this?  Any ideas what might be reasons why?
>
> Thanks!
>
>
> .. Lana (lana.de...@gmail.com)
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
>
> DISCLAIMER:
> This e-mail, and any attachments thereto, is intended only for use by the 
> addressee(s) named herein and may contain legally privileged and/or 
> confidential information. If you are not the intended recipient of this 
> e-mail, you are hereby notified that any dissemination, distribution or 
> copying of this e-mail, and any attachments thereto, is strictly prohibited. 
> If you have received this in error, please immediately notify me and 
> permanently delete the original and any copy of any e-mail and any printout 
> thereof. E-mail transmission cannot be guaranteed to be secure or error-free. 
> The sender therefore does not accept liability for any errors or omissions in 
> the contents of this message which arise as a result of e-mail transmission.
> NOTICE REGARDING PRIVACY AND CONFIDENTIALITY Knight Capital Group may, at its 
> discretion, monitor and review the content of all e-mail communications. 
> http://www.knight.com
>


Re: [Gluster-users] concurrent write and read to the same file

2010-11-01 Thread Burnash, James
Hi Lana.

Looks like you may have run into this bug as well:

http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=2027

James Burnash, Unix Engineering

-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Lana Deere
Sent: Friday, October 29, 2010 5:06 PM
To: gluster-users@gluster.org
Subject: [Gluster-users] concurrent write and read to the same file

GlusterFS 3.1.0, distributed volumes, rdma transport, CentOS 5.4/5.5.

I was doing "tail -f" on a log file while the program generating it
was running and noticed some strange behavior.  I wasn't able to
reproduce exactly the same strange behavior, but here is some which
did reproduce for me reliably.  This is a small perl program producing
a bit of output every few seconds:
    $| = 1;
    for ($i=0; $i < 100; ++$i) {
        printf("This is line $i\n");
        sleep(3);
    }

If I run it with output sent to a gluster volume,
   perl foo.pl > bar
it works fine: "bar" has the right content after the script
terminates, or if I kill the script while it runs "bar" has a valid
partial output.

While the perl script is running, if I log into a different host which
can see that volume and do "tail -f bar" that also works fine: every
few seconds, the next line of output appears.

However, while the perl script is running if I do "tail -f bar" from
the same host as the script is running on, it will print me the
current end of the file but then it will hang without producing any
subsequent output.  This is true whether or not I have a tail running
on a different host concurrently.

The tail on the same host as the script will correctly notice that the
file was reset if I kill and restart the perl script while leaving the
tail running.  Then it prints the first line, but at that point it
hangs again.

Anyone else seeing this?  Any ideas what might be reasons why?

Thanks!


.. Lana (lana.de...@gmail.com)
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users




Re: [Gluster-users] 3.1: Multiple networks and Client access

2010-11-01 Thread Udo Waechter
Hi and thanks for the answer.
I think there is a bug/problem with gluster.
First, some background:

- Our internal network does not resolve via DNS.
- The internal hosts are only resolvable via /etc/hosts.

Now, my idea would be this:

1. On the cluster nodes, use the internal (bonded) interfaces for communication.
2. The rest of the network should use the external interfaces to communicate 
with the storage cloud.

To achieve this, the idea was now to:

a) create the volume with the internal names
$ gluster volume create ... hostname1.internal:/exp hostname2.internal:/exp 
hostname3.internal:/exp

b) On all the (external) nodes that should reach the gluster cluster, add 
hostname1...X.internal entries to /etc/hosts, resolving to their external IP addresses.
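
As an illustration of steps (a) and (b), the split-horizon /etc/hosts entries
could look like this (all addresses here are made up for the example):

```shell
# /etc/hosts on the cluster nodes -- names resolve to the internal
# (bonding) addresses used for inter-server communication:
10.10.1.1    hostname1.internal
10.10.1.2    hostname2.internal
10.10.1.3    hostname3.internal

# /etc/hosts on the external client machines -- the *same* names
# resolve to the servers' external addresses:
192.168.1.1  hostname1.internal
192.168.1.2  hostname2.internal
192.168.1.3  hostname3.internal
```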


Now to all the problems:

Regarding point 1 above:
- when I add a peer by its internal name it works, except for the peer from 
which I add the other peers.
---hostname2.internal: $ gluster peer probe hostname1.internal
---hostname1.internal $ gluster peer status
Number of Peers: 1

Hostname: 10.10.33.142
Uuid: a19fc9d3-d00f-4440-b096-c974db1cd8c7
State: Peer in Cluster (Connected)

This should be hostname2.internal

When I do gluster peer probe hostname1.internal (on the host 
hostname1.internal) I get:

"hostname1.internal is already part of another cluster"
so here, ip/name resolution works...

This happens in all permutations: the peer from which I run "gluster peer probe 
..." never resolves to its internal name, only to its IP address.

As a result from all this, point a) can not succeed, since:

gluster volume create hostname... hostname... hostname... results in:

"Host hostnameX is not a friend", where hostnameX is the host where the volume 
creation was attempted.


I have also tried installing pdnsd for the internal network, but this does not 
solve the problem either.

As a last resort, I edited /etc/glusterd/peers/* and replaced the IP addresses 
by hand. Now "gluster peer status" gives me the names instead of the IP addresses, 
but "volume create" still tells me about the host (where I create the volume 
from) not being a friend.

Any help or solution is highly appreciated.
Thanks,
udo.

On 18.10.2010, at 07:10, Craig Carl wrote:

> Udo - 
> With 3.1 when you mount/create/change a volume those changes are 
> propagated via RPC to all of the other Gluster servers and clients. When you 
> created the volume using 10.10.x.x IP addresses, those IPs were what got sent 
> to the client. In previous versions you could have just edited the client 
> side configuration file and changed or added the 192. addresses, but not in 
> this version, due to DVM. There should be a way to make multiple networks 
> work, so I will file a bug.
>  In the meantime I think I have a workaround. If you use names instead of 
> IP addresses and then make sure DNS or host files are set up properly, it 
> should work, as Gluster exports via all interfaces. For example, if the 
> servers have these IPs - 
> 
> server1 - 10.10.1.1, 192.168.1.1
> server2 - 10.10.1.2, 192.168.1.2
> server3 - 10.10.1.3, 192.168.1.3
> 
> #gluster volume create test-ext stripe 3 server1:/ext server2:/ext 
> server3:/ext
> 
> You would just need to make sure that hosts on the 10.10.x.x network resolve 
> the server name to its 10. IP, and clients on the 192.x network resolve to the 
> 192 IP. Should be a simple change to the /etc/hosts files. 
> 
> Please let me know if this works so I can include that information in my bug 
> report. 
> 
> Thanks, 
> 
> Craig
> 
> --
> Craig Carl
> Senior Systems Engineer; Gluster, Inc. 
> Cell - (408) 829-9953 (California, USA)
> Office - (408) 770-1884
> Gtalk - craig.c...@gmail.com
> Twitter - @gluster
> Installing Gluster Storage Platform, the movie!
> http://rackerhacker.com/2010/08/11/one-month-with-glusterfs-in-production/
> 
> 
> From: "Udo Waechter" 
> To: gluster-users@gluster.org
> Sent: Sunday, October 17, 2010 12:57:18 AM
> Subject: Re: [Gluster-users] 3.1: Multiple networks and Client access
> 
> Hi,
> although I totally forgot about the firewall, this problem is not related to 
> it.
> 
> The ports you mentioned are open.
> 
> I have created another volume using the ip-adresses from the external 
> interfaces
> 
> $ gluster volume create test-ext stripe 3 192.168.x.x1:/ext 192.168.x.x2:/ext 
> 192.168.x.x3:/ext
> 
> and this can not only be mounted, it also works perfectly.
> 
> How could this work 
> $ gluster volume create test-ext stripe 3 10.10.x.x1:/ext 10.10.x.x2:/ext 
> 10.10.x.x3:/ext
> if the ip-addresses of the 10.10.x.x network are not reachable from the 
> 192.168.x.x network?
> 
> I have read somewhere in the docs that by default, glusterd listens on all 
> interfaces, but does it also by default export everything to all interfaces?
> 
> Thanks again,
> udo.
> 
> 
> On 16.10.2010, at 16:37, Jacob Shucart wrote:
> 
> > Udo,
> > 
> > It sounds to me like a firewall is blocking access to the Gluster system 
> > preventing some of the traffic from happening

Re: [Gluster-users] Booting a clean gluster cluster from VM images

2010-11-01 Thread Vijay Bellur
> Is there a quick and simple way to get gluster to reset its
> configuration
> back to nothing? 

You can wipe /etc/glusterd and that would force gluster to reset its 
configuration.
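
A minimal command sketch of that reset (the init-script name is an assumption
for a typical 3.1 install; note this destroys all peer and volume configuration
on the node):

```shell
/etc/init.d/glusterd stop     # or: service glusterd stop
rm -rf /etc/glusterd          # wipe peers, volume definitions, and the node UUID
/etc/init.d/glusterd start    # glusterd comes back up with a clean configuration
```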

Regards,
Vijay


[Gluster-users] Booting a clean gluster cluster from VM images

2010-11-01 Thread Joe Whitney
Hello,

I'm trying to set up an experimental gluster storage cluster using virtual
machines (VMs) on Xen.  I want the VMs to "forget" any previous gluster
configuration (i.e. the server to forget about all previously configured
peers, the peers to forget what storage volumes they are providing, etc).
 Is there a quick and simple way to get gluster to reset its configuration
back to nothing?  I am very new to gluster so I apologize for any confusion
of terminology here, I hope it is clear what I am asking.

Thanks in advance,

Joe Whitney


[Gluster-users] glusterfs NFS dependencies (gentoo)

2010-11-01 Thread Jan Pisacka
Hi,

I am testing glusterfs-3.1.0 on Gentoo Linux, which is our preferred OS.
For correct NFS server functionality, some dependencies are required. I
understand that on supported distributions, all dependencies are already
installed by default. This is probably not the case with Gentoo. First I
found that glusterfs's NFS server still needs the portmapper:

[2010-10-22 16:05:20.469964] E
[rpcsvc.c:2630:nfs_rpcsvc_program_register_portmap] nfsrpc: Could not
register with portmap
[2010-10-22 16:05:20.470006] E
[rpcsvc.c:2715:nfs_rpcsvc_program_register] nfsrpc: portmap registration
of program failed
[2010-10-22 16:05:20.470038] E
[rpcsvc.c:2728:nfs_rpcsvc_program_register] nfsrpc: Program registration
failed: MOUNT3, Num: 15, Ver: 3, Port: 38465
[2010-10-22 16:05:20.470050] E [nfs.c:127:nfs_init_versions] nfs:
Program init failed
[2010-10-22 16:05:20.470061] C [nfs.c:608:notify] nfs: Failed to
initialize protocols

Are there any further dependencies that need to be satisfied?
Replication and distribution work, but I still have issues when trying
to run a GNOME desktop on clients where /home is glusterfs-nfs mounted.
Conventional NFS works perfectly. The issues seem to be related to the
absent locking support:

Oct 26 08:42:24 glc01 kernel: lockd: server gls13 not responding, still
trying

On the server side, I still have log records like this one:

[2010-10-25 11:00:55.823932] E [rpcsvc.c:1249:nfs_rpcsvc_program_actor]
nfsrpc: RPC program not available

Which RPC program should I install? rpc.statd? Anything else? Thanks a lot.

Jan Pisacka
Compass experiment
Institute of Plasma Physics AS CR
Praha, Czech Republic
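
(Editorial note: per Shehjar's reply elsewhere in this thread, GlusterFS 3.1
does not yet support NLM, so rpc.statd/lockd cannot help; the usual remedy is
to run the portmapper and mount with the nolock option. A hedged sketch — the
service name and volume name are illustrative:)

```shell
# On the server: start portmap before glusterd so the NFS and MOUNT
# programs can register (addresses the "Could not register with portmap"
# errors quoted above):
/etc/init.d/portmap start

# On the client: NFSv3 mount over TCP without NLM locking:
mount -t nfs -o vers=3,nolock,tcp gls13:/volname /home
```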







[Gluster-users] gluster users in germany on this list?

2010-11-01 Thread Uwe Kastens
Hi, 

Is there anyone on the list from Germany who is willing to talk about their 
experience with glusterfs?

BR

Uwe
