Re: [Gluster-users] GlusterFS Cluster over Slow Lines?

2014-12-07 Thread Svenn Dhert
Hey,

I have a similar issue: over a slow line, GlusterFS seemed to cut write
speed roughly in half. So perhaps you might look into geo-replication? I
haven't found a solution yet.
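
In case it helps, this is roughly what a geo-replication setup looks like
on the versions I have tried (volume and host names below are placeholders,
not your actual names):

    # on the master cluster, assuming password-less root SSH to the slave host
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status

Geo-replication is asynchronous, so writes are acknowledged locally and
trickle over the slow line in the background, at the cost of the remote
copy lagging behind.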

*following this topic*

Svenn

2014-12-07 18:25 GMT+01:00 Christian Völker :

> Hi all,
>
> I have not found an answer to my questions regarding GlusterFS so I
> decided to ask here.
>
> We have two sites connected through a slow 10Mbit line, but I want users
> on both sites to access shared files with good performance.
>
> My first thought was to use DRBD for this, but then I would need a
> clustered file system (which would put additional load on the
> connection) and I would have slow write access on both sites.
>
> So I am currently testing GlusterFS - am I right about the following?
>
> - Reads will always be served from the local disk.
> - Writes go to the local disk first and are replicated afterwards. When
> does the client get confirmation of a successful write: once the data is
> fully replicated, or earlier? In the first case writes would be slow, as
> the nodes are connected only through a small 10Mbit line, correct?
> - How does GlusterFS handle a dual-head (split-brain) situation, e.g. if
> the connection between the two nodes goes down? What happens when the
> connection comes back?
> - I want access to my data on both sites (nodes), so a replicated volume
> would be the right choice for me?
> - The term "distributed" means the data is just spread across the
> bricks, right? So having a single brick protected by hardware RAID is
> fine, too?
>
> I am not sure how many bricks are recommended for which scenario.
>
> Well, loads of questions - thanks for your patience.
>
> Greetings
>
> Christian
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] A year's worth of Gluster

2014-12-07 Thread Nguyen Viet Cuong
Did you tweak any of the performance translator options, such as
io-thread-count? If not, try increasing it from the default of 16 to 64.
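
For example, something along these lines should do it (volume name is a
placeholder; on the versions I have used the option is exposed as
performance.io-thread-count):

    gluster volume set myvol performance.io-thread-count 64
    gluster volume info myvol   # the new value shows up under "Options Reconfigured"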

On Mon, Dec 8, 2014 at 12:10 PM, Andrew Smith 
wrote:

>
> QDR Infiniband has a theoretical maximum of 40 Gbit/s, or about 4 GB/s of
> usable bandwidth. My LSI RAID controllers typically deliver about 0.5-1.0
> GB/s for direct disk access.
>
> I have tested it many ways. I typically start jobs on many clients and
> measure the total network bandwidth on the servers by monitoring the
> totals in /proc/net/dev or just count the bytes on the clients. I can't
> get more than about 300MB/s from each server. With a single job on
> a single client, I can't get more than about 100-150MB/s.
>
> On Dec 7, 2014, at 9:15 PM, Franco Broi  wrote:
>
> >
> > Our theoretical peak throughput is about 4 GB/s (4 x 10 Gbit/s); you can
> > see from the graph that the maximum recorded is 3.6 GB/s. This was
> > probably during periods of large sequential IO.
> >
> > We have a small cluster of clients (10) with 10Gbit ethernet but the
> > majority of our machines (130) have gigabit. The throughput maximum for
> > the 10Gbit connected machines was just over 3GBytes/Sec with individual
> > machines recording about 800MB/Sec.
> >
> > We can easily saturate our 10Gbit links on the servers as each JBOD is
> > capable of better than 500MB/Sec but with mixed sequential/random access
> > it seems like a good compromise.
> >
> > We have another 2 server Gluster system with the same specs and we get
> > 1.8GB/Sec reads and 1.1GB/Sec writes.
> >
> > What are you using to measure your throughput?
> >
> > On Sun, 2014-12-07 at 20:52 -0500, Andrew Smith wrote:
> >> I have a similar system with 4 nodes and 2 bricks per node, where
> >> each brick is a single large filesystem (4TB x 24 RAID 6). The
> >> computers are all on QDR Infiniband with Gluster using IPoIB. I
> >> have a cluster of Infiniband clients that access the data on the
> >> servers. I can only get about 1.0 to 1.2 GB/s throughput with my
> >> system though. Can you tell us the peak throughput that you are
> >> getting? I just don't have a sense of what I should expect from
> >> my system. A similar Lustre setup could achieve 2-3 GB/s, which
> >> I attributed to the fact that it didn't use IPoIB, but instead used
> >> RDMA. I'd really like to know if I am wrong here and there is
> >> some configuration I can tweak to make things faster.
> >>
> >> Andy
> >>
> >> On Dec 7, 2014, at 8:43 PM, Franco Broi  wrote:
> >>
> >>> On Fri, 2014-12-05 at 14:22 +, Kiebzak, Jason M. wrote:
>  May I ask why you chose to go with 4 separate bricks per server
> rather than one large brick per server?
> >>>
> >>> Each brick is a JBOD with 16 disks running RAIDZ2. Just seemed more
> >>> logical to keep the bricks and ZFS filesystems confined to physical
> >>> hardware units, ie I could disconnect a brick and move it to another
> >>> server.
> >>>
> 
>  Thanks
>  Jason
> 
>  -Original Message-
>  From: gluster-users-boun...@gluster.org [mailto:
> gluster-users-boun...@gluster.org] On Behalf Of Franco Broi
>  Sent: Thursday, December 04, 2014 7:56 PM
>  To: gluster-users@gluster.org
>  Subject: [Gluster-users] A year's worth of Gluster
> 
> 
>  1 DHT volume comprising 16 50TB bricks spread across 4 servers. Each
> server has 10Gbit Ethernet.
> 
>  Each brick is a ZoL RAIDZ2 pool with a single filesystem.
> >>>
> >>>
> >>> ___
> >>> Gluster-users mailing list
> >>> Gluster-users@gluster.org
> >>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
> >>
> >
> >
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>



-- 
Nguyen Viet Cuong
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] A year's worth of Gluster

2014-12-07 Thread Franco Broi
On Sun, 2014-12-07 at 22:10 -0500, Andrew Smith wrote: 
> QDR Infiniband has a theoretical maximum of 40 Gbit/s, or about 4 GB/s of
> usable bandwidth. My LSI RAID controllers typically deliver about 0.5-1.0
> GB/s for direct disk access.
> 
> I have tested it many ways. I typically start jobs on many clients and 
> measure the total network bandwidth on the servers by monitoring the
> totals in /proc/net/dev or just count the bytes on the clients. I can’t
> get more than about 300MB/s from each server. With a single job on 
> a single client, I can’t get more than about 100-150MB/s.

That does seem slow.

If you get the same sort of performance from plain NFS then I would say
your IPoIB stack isn't performing very well, but I assume you've tested
that with something like iperf?
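
Something like this is what I had in mind, i.e. a plain iperf run over the
IPoIB interface with parallel streams (host name is a placeholder):

    # on one server
    iperf -s
    # on a client: 4 parallel streams for 30 seconds against the server's IPoIB address
    iperf -c server-ib0 -P 4 -t 30

If that tops out well below wire speed, the problem is below Gluster.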

>  
> 
> On Dec 7, 2014, at 9:15 PM, Franco Broi  wrote:
> 
> > 
> > Our theoretical peak throughput is about 4 GB/s (4 x 10 Gbit/s); you can
> > see from the graph that the maximum recorded is 3.6 GB/s. This was
> > probably during periods of large sequential IO.
> > 
> > We have a small cluster of clients (10) with 10Gbit ethernet but the
> > majority of our machines (130) have gigabit. The throughput maximum for
> > the 10Gbit connected machines was just over 3GBytes/Sec with individual
> > machines recording about 800MB/Sec.
> > 
> > We can easily saturate our 10Gbit links on the servers as each JBOD is
> > capable of better than 500MB/Sec but with mixed sequential/random access
> > it seems like a good compromise.
> > 
> > We have another 2 server Gluster system with the same specs and we get
> > 1.8GB/Sec reads and 1.1GB/Sec writes.
> > 
> > What are you using to measure your throughput?
> > 
> > On Sun, 2014-12-07 at 20:52 -0500, Andrew Smith wrote: 
> >> I have a similar system with 4 nodes and 2 bricks per node, where
> >> each brick is a single large filesystem (4TB x 24 RAID 6). The
> >> computers are all on QDR Infiniband with Gluster using IPoIB. I
> >> have a cluster of Infiniband clients that access the data on the
> >> servers. I can only get about 1.0 to 1.2 GB/s throughput with my
> >> system though. Can you tell us the peak throughput that you are
> >> getting? I just don't have a sense of what I should expect from
> >> my system. A similar Lustre setup could achieve 2-3 GB/s, which
> >> I attributed to the fact that it didn't use IPoIB, but instead used
> >> RDMA. I'd really like to know if I am wrong here and there is
> >> some configuration I can tweak to make things faster.
> >> 
> >> Andy
> >> 
> >> On Dec 7, 2014, at 8:43 PM, Franco Broi  wrote:
> >> 
> >>> On Fri, 2014-12-05 at 14:22 +, Kiebzak, Jason M. wrote: 
>  May I ask why you chose to go with 4 separate bricks per server rather 
>  than one large brick per server?
> >>> 
> >>> Each brick is a JBOD with 16 disks running RAIDZ2. Just seemed more
> >>> logical to keep the bricks and ZFS filesystems confined to physical
> >>> hardware units, ie I could disconnect a brick and move it to another
> >>> server.
> >>> 
>  
>  Thanks
>  Jason
>  
>  -Original Message-
>  From: gluster-users-boun...@gluster.org 
>  [mailto:gluster-users-boun...@gluster.org] On Behalf Of Franco Broi
>  Sent: Thursday, December 04, 2014 7:56 PM
>  To: gluster-users@gluster.org
>  Subject: [Gluster-users] A year's worth of Gluster
>  
>  
>  1 DHT volume comprising 16 50TB bricks spread across 4 servers. Each 
>  server has 10Gbit Ethernet.
>  
>  Each brick is a ZoL RAIDZ2 pool with a single filesystem.
> >>> 
> >>> 
> >>> ___
> >>> Gluster-users mailing list
> >>> Gluster-users@gluster.org
> >>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
> >> 
> > 
> > 
> 


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] A year's worth of Gluster

2014-12-07 Thread Andrew Smith

QDR Infiniband has a theoretical maximum of 40 Gbit/s, or about 4 GB/s of
usable bandwidth. My LSI RAID controllers typically deliver about 0.5-1.0
GB/s for direct disk access.

I have tested it many ways. I typically start jobs on many clients and 
measure the total network bandwidth on the servers by monitoring the
totals in /proc/net/dev or just count the bytes on the clients. I can’t
get more than about 300MB/s from each server. With a single job on 
a single client, I can’t get more than about 100-150MB/s. 
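
For reference, the sampling boils down to something like this (interface
name is a placeholder, and the awk field positions are an assumption - on
some kernels the byte counter can run into the interface name in
/proc/net/dev):

    IF=ib0
    R1=$(awk -v i="$IF:" '$1 == i {print $2}'  /proc/net/dev)
    T1=$(awk -v i="$IF:" '$1 == i {print $10}' /proc/net/dev)
    sleep 10
    R2=$(awk -v i="$IF:" '$1 == i {print $2}'  /proc/net/dev)
    T2=$(awk -v i="$IF:" '$1 == i {print $10}' /proc/net/dev)
    # average MB/s over the 10-second window
    echo "rx: $(( (R2 - R1) / 10 / 1048576 )) MB/s  tx: $(( (T2 - T1) / 10 / 1048576 )) MB/s"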

On Dec 7, 2014, at 9:15 PM, Franco Broi  wrote:

> 
> Our theoretical peak throughput is about 4 GB/s (4 x 10 Gbit/s); you can
> see from the graph that the maximum recorded is 3.6 GB/s. This was
> probably during periods of large sequential IO.
> 
> We have a small cluster of clients (10) with 10Gbit ethernet but the
> majority of our machines (130) have gigabit. The throughput maximum for
> the 10Gbit connected machines was just over 3GBytes/Sec with individual
> machines recording about 800MB/Sec.
> 
> We can easily saturate our 10Gbit links on the servers as each JBOD is
> capable of better than 500MB/Sec but with mixed sequential/random access
> it seems like a good compromise.
> 
> We have another 2 server Gluster system with the same specs and we get
> 1.8GB/Sec reads and 1.1GB/Sec writes.
> 
> What are you using to measure your throughput?
> 
> On Sun, 2014-12-07 at 20:52 -0500, Andrew Smith wrote: 
>> I have a similar system with 4 nodes and 2 bricks per node, where
>> each brick is a single large filesystem (4TB x 24 RAID 6). The
>> computers are all on QDR Infiniband with Gluster using IPoIB. I
>> have a cluster of Infiniband clients that access the data on the
>> servers. I can only get about 1.0 to 1.2 GB/s throughput with my
>> system though. Can you tell us the peak throughput that you are
>> getting? I just don't have a sense of what I should expect from
>> my system. A similar Lustre setup could achieve 2-3 GB/s, which
>> I attributed to the fact that it didn't use IPoIB, but instead used
>> RDMA. I'd really like to know if I am wrong here and there is
>> some configuration I can tweak to make things faster.
>> 
>> Andy
>> 
>> On Dec 7, 2014, at 8:43 PM, Franco Broi  wrote:
>> 
>>> On Fri, 2014-12-05 at 14:22 +, Kiebzak, Jason M. wrote: 
 May I ask why you chose to go with 4 separate bricks per server rather 
 than one large brick per server?
>>> 
>>> Each brick is a JBOD with 16 disks running RAIDZ2. Just seemed more
>>> logical to keep the bricks and ZFS filesystems confined to physical
>>> hardware units, ie I could disconnect a brick and move it to another
>>> server.
>>> 
 
 Thanks
 Jason
 
 -Original Message-
 From: gluster-users-boun...@gluster.org 
 [mailto:gluster-users-boun...@gluster.org] On Behalf Of Franco Broi
 Sent: Thursday, December 04, 2014 7:56 PM
 To: gluster-users@gluster.org
 Subject: [Gluster-users] A year's worth of Gluster
 
 
 1 DHT volume comprising 16 50TB bricks spread across 4 servers. Each 
 server has 10Gbit Ethernet.
 
 Each brick is a ZoL RAIDZ2 pool with a single filesystem.
>>> 
>>> 
>>> ___
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>> 
> 
> 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] A year's worth of Gluster

2014-12-07 Thread Franco Broi

Our theoretical peak throughput is about 4 GB/s (4 x 10 Gbit/s); you can
see from the graph that the maximum recorded is 3.6 GB/s. This was
probably during periods of large sequential IO.

We have a small cluster of clients (10) with 10Gbit Ethernet, but the
majority of our machines (130) have gigabit. The throughput maximum for
the 10Gbit-connected machines was just over 3 GB/s, with individual
machines recording about 800 MB/s.

We can easily saturate our 10Gbit links on the servers as each JBOD is
capable of better than 500MB/Sec but with mixed sequential/random access
it seems like a good compromise.

We have another 2 server Gluster system with the same specs and we get
1.8GB/Sec reads and 1.1GB/Sec writes.

What are you using to measure your throughput?

On Sun, 2014-12-07 at 20:52 -0500, Andrew Smith wrote: 
> I have a similar system with 4 nodes and 2 bricks per node, where
> each brick is a single large filesystem (4TB x 24 RAID 6). The
> computers are all on QDR Infiniband with Gluster using IPoIB. I
> have a cluster of Infiniband clients that access the data on the
> servers. I can only get about 1.0 to 1.2 GB/s throughput with my
> system though. Can you tell us the peak throughput that you are
> getting? I just don't have a sense of what I should expect from
> my system. A similar Lustre setup could achieve 2-3 GB/s, which
> I attributed to the fact that it didn't use IPoIB, but instead used
> RDMA. I'd really like to know if I am wrong here and there is
> some configuration I can tweak to make things faster.
> 
> Andy
> 
> On Dec 7, 2014, at 8:43 PM, Franco Broi  wrote:
> 
> > On Fri, 2014-12-05 at 14:22 +, Kiebzak, Jason M. wrote: 
> >> May I ask why you chose to go with 4 separate bricks per server rather 
> >> than one large brick per server?
> > 
> > Each brick is a JBOD with 16 disks running RAIDZ2. Just seemed more
> > logical to keep the bricks and ZFS filesystems confined to physical
> > hardware units, ie I could disconnect a brick and move it to another
> > server.
> > 
> >> 
> >> Thanks
> >> Jason
> >> 
> >> -Original Message-
> >> From: gluster-users-boun...@gluster.org 
> >> [mailto:gluster-users-boun...@gluster.org] On Behalf Of Franco Broi
> >> Sent: Thursday, December 04, 2014 7:56 PM
> >> To: gluster-users@gluster.org
> >> Subject: [Gluster-users] A year's worth of Gluster
> >> 
> >> 
> >> 1 DHT volume comprising 16 50TB bricks spread across 4 servers. Each 
> >> server has 10Gbit Ethernet.
> >> 
> >> Each brick is a ZoL RAIDZ2 pool with a single filesystem.
> > 
> > 
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://supercolony.gluster.org/mailman/listinfo/gluster-users
> 


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] A year's worth of Gluster

2014-12-07 Thread Andrew Smith

I have a similar system with 4 nodes and 2 bricks per node, where
each brick is a single large filesystem (4TB x 24 RAID 6). The
computers are all on QDR Infiniband with Gluster using IPoIB. I
have a cluster of Infiniband clients that access the data on the
servers. I can only get about 1.0 to 1.2 GB/s throughput with my
system though. Can you tell us the peak throughput that you are
getting? I just don't have a sense of what I should expect from
my system. A similar Lustre setup could achieve 2-3 GB/s, which
I attributed to the fact that it didn't use IPoIB, but instead used
RDMA. I'd really like to know if I am wrong here and there is
some configuration I can tweak to make things faster.
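
One thing I still need to double-check is whether the volume was even
created with the RDMA transport; as far as I understand, that is decided at
volume-creation time. A quick way to look (volume name is a placeholder):

    gluster volume info myvol | grep -i transport

If it only shows tcp, an RDMA mount isn't going to happen without
recreating (or reconfiguring) the volume with "transport tcp,rdma".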

Andy

On Dec 7, 2014, at 8:43 PM, Franco Broi  wrote:

> On Fri, 2014-12-05 at 14:22 +, Kiebzak, Jason M. wrote: 
>> May I ask why you chose to go with 4 separate bricks per server rather than 
>> one large brick per server?
> 
> Each brick is a JBOD with 16 disks running RAIDZ2. Just seemed more
> logical to keep the bricks and ZFS filesystems confined to physical
> hardware units, ie I could disconnect a brick and move it to another
> server.
> 
>> 
>> Thanks
>> Jason
>> 
>> -Original Message-
>> From: gluster-users-boun...@gluster.org 
>> [mailto:gluster-users-boun...@gluster.org] On Behalf Of Franco Broi
>> Sent: Thursday, December 04, 2014 7:56 PM
>> To: gluster-users@gluster.org
>> Subject: [Gluster-users] A year's worth of Gluster
>> 
>> 
>> 1 DHT volume comprising 16 50TB bricks spread across 4 servers. Each server 
>> has 10Gbit Ethernet.
>> 
>> Each brick is a ZoL RAIDZ2 pool with a single filesystem.
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] A year's worth of Gluster

2014-12-07 Thread Franco Broi
On Fri, 2014-12-05 at 14:22 +, Kiebzak, Jason M. wrote: 
> May I ask why you chose to go with 4 separate bricks per server rather than 
> one large brick per server?

Each brick is a JBOD with 16 disks running RAIDZ2. It just seemed more
logical to keep the bricks and ZFS filesystems confined to physical
hardware units, i.e. I could disconnect a brick and move it to another
server.
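
For anyone curious, each pool is along these lines (device names, pool name
and mountpoint below are illustrative rather than our exact layout):

    # one JBOD = one pool: a single 16-disk raidz2 vdev, mounted where the brick lives
    zpool create -m /data/brick1 brick1 raidz2 /dev/sd{b..q}
    zpool status brick1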

> 
> Thanks
> Jason
> 
> -Original Message-
> From: gluster-users-boun...@gluster.org 
> [mailto:gluster-users-boun...@gluster.org] On Behalf Of Franco Broi
> Sent: Thursday, December 04, 2014 7:56 PM
> To: gluster-users@gluster.org
> Subject: [Gluster-users] A year's worth of Gluster
> 
> 
> 1 DHT volume comprising 16 50TB bricks spread across 4 servers. Each server 
> has 10Gbit Ethernet.
> 
> Each brick is a ZoL RAIDZ2 pool with a single filesystem.


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] GlusterFS Cluster over Slow Lines?

2014-12-07 Thread Christian Völker
Hi all,

I have not found an answer to my questions regarding GlusterFS so I
decided to ask here.

We have two sites connected through a slow 10Mbit line, but I want users
on both sites to access shared files with good performance.

My first thought was to use DRBD for this, but then I would need a
clustered file system (which would put additional load on the
connection) and I would have slow write access on both sites.

So I am currently testing GlusterFS - am I right about the following?

- Reads will always be served from the local disk.
- Writes go to the local disk first and are replicated afterwards. When
does the client get confirmation of a successful write: once the data is
fully replicated, or earlier? In the first case writes would be slow, as
the nodes are connected only through a small 10Mbit line, correct?
- How does GlusterFS handle a dual-head (split-brain) situation, e.g. if
the connection between the two nodes goes down? What happens when the
connection comes back?
- I want access to my data on both sites (nodes), so a replicated volume
would be the right choice for me?
- The term "distributed" means the data is just spread across the
bricks, right? So having a single brick protected by hardware RAID is
fine, too?

I am not sure how many bricks are recommended for which scenario; a sketch
of what I have been testing follows below.
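
For reference, the kind of setup I have been testing looks roughly like
this (host names and brick paths below are placeholders, not my real ones):

    # run on site A, once glusterd is up on both nodes
    gluster peer probe site-b
    gluster volume create shared replica 2 site-a:/bricks/shared/brick site-b:/bricks/shared/brick
    gluster volume start shared
    # clients on either site mount their local node
    mount -t glusterfs localhost:/shared /mnt/shared

My worry is that with a plain replicated volume every write has to cross
the 10Mbit line before it is acknowledged - hence the questions above.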

Well, loads of questions - thanks for your patience.

Greetings

Christian


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Geo-Replication Issue

2014-12-07 Thread David Gibbons
Ok,

I was able to get geo-replication configured by changing
/usr/local/libexec/glusterfs/gverify.sh to use ssh to access the local
machine, instead of invoking bash -c directly. I then found that the hook
script for geo-replication was missing, so I copied it over manually. I now
have what appears to be a "configured" geo-rep setup:

> # gluster volume geo-replication shares gfs-a-bkp::bkpshares status
>
> MASTER NODE    MASTER VOL    MASTER BRICK                     SLAVE                   STATUS         CHECKPOINT STATUS    CRAWL STATUS
> -----------------------------------------------------------------------------------------------------------------------------------
> gfs-a-3        shares        /mnt/a-3-shares-brick-1/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
> gfs-a-3        shares        /mnt/a-3-shares-brick-2/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
> gfs-a-3        shares        /mnt/a-3-shares-brick-3/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
> gfs-a-3        shares        /mnt/a-3-shares-brick-4/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
> gfs-a-2        shares        /mnt/a-2-shares-brick-1/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
> gfs-a-2        shares        /mnt/a-2-shares-brick-2/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
> gfs-a-2        shares        /mnt/a-2-shares-brick-3/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
> gfs-a-2        shares        /mnt/a-2-shares-brick-4/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
> gfs-a-4        shares        /mnt/a-4-shares-brick-1/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
> gfs-a-4        shares        /mnt/a-4-shares-brick-2/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
> gfs-a-4        shares        /mnt/a-4-shares-brick-3/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
> gfs-a-4        shares        /mnt/a-4-shares-brick-4/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
> gfs-a-1        shares        /mnt/a-1-shares-brick-1/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
> gfs-a-1        shares        /mnt/a-1-shares-brick-2/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
> gfs-a-1        shares        /mnt/a-1-shares-brick-3/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
> gfs-a-1        shares        /mnt/a-1-shares-brick-4/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
>
So that's a step in the right direction (and I can upload a patch for
gverify to Bugzilla). However, gverify *should* have worked with bash -c,
and I was not able to figure out why it didn't, other than that it didn't
seem able to find some programs. I'm thinking that maybe the PATH variable
is wrong for Gluster, and that's why gverify didn't work out of the box.

When I attempt to start geo-rep now, I get the following in the geo-rep log:

> [2014-12-07 10:52:40.893594] E [syncdutils(monitor):218:log_raise_exception] : execution of "gluster" failed with ENOENT (No such file or directory)
> [2014-12-07 10:52:40.893886] I [syncdutils(monitor):192:finalize] : exiting.


This seems to confirm that gluster isn't running with the same PATH
variable as my console session. Is that possible? I know I'm grasping :).
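
In case it is useful, this is how I have been poking at the PATH theory
(the gluster-command-dir option name is from memory, so treat it as an
assumption; the symlink is the blunt fallback):

    # where does my shell find gluster?
    command -v gluster                      # /usr/local/sbin/gluster here
    # does a minimal environment still find it?
    env -i /bin/sh -c 'command -v gluster'  # empty output would match the ENOENT above
    # possible fix: point geo-rep at the right directory (assumed option name)...
    gluster volume geo-replication shares gfs-a-bkp::bkpshares config gluster-command-dir /usr/local/sbin/
    # ...or just make the binary visible on a default PATH
    ln -s /usr/local/sbin/gluster /usr/sbin/gluster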

Any nudge in the right direction would be very much appreciated!

Cheers,
Dave


On Sat, Dec 6, 2014 at 10:06 AM, David Gibbons 
wrote:

> Good Morning,
>
> I am having some trouble getting geo-replication started on a 3.5.3 volume.
>
> I have verified that password-less SSH is functional in both directions
> from the backup gluster server, and all nodes in the production gluster. I
> have verified that all nodes in production and backup cluster are running
> the same version of gluster, and that name resolution works in both
> directions.
>
> When I attempt to start geo-replication with this command:
>
>> gluster volume geo-replication shares gfs-a-bkp::bkpshares create push-pem
>>
>
> I end up with the following in the logs:
>
>> [2014-12-06 15:02:50.284426] E [glusterd-geo-rep.c:1889:glusterd_verify_slave] 0-: Not a valid slave
>> [2014-12-06 15:02:50.284495] E [glusterd-geo-rep.c:2106:glusterd_op_stage_gsync_create] 0-: gfs-a-bkp::bkpshares is not a valid slave volume. Error: Unable to fetch master volume details. Please check the master cluster and master volume.
>
> [

Re: [Gluster-users] Missing Hooks

2014-12-07 Thread David Gibbons
Thank you, Niels.

Bugzilla ID is 1171477
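
For anyone else hitting this in the meantime: the spec Niels mentions sits
at the top of the source tree (glusterfs.spec.in in the tarball I built
from), so you can see where the hook scripts are supposed to land with
something like:

    grep -n hook glusterfs.spec.in | less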

Dave

On Sun, Dec 7, 2014 at 10:12 AM, Niels de Vos  wrote:

> On Sun, Dec 07, 2014 at 09:55:11AM -0500, David Gibbons wrote:
> > Hi All,
> >
> > I am running into an issue where it appears that some hooks are missing
> > from /var/lib/glusterd/hooks
> >
> > I am running version 3.5.3 and recently did an upgrade to that version
> from
> > 3.4.2.
> >
> > I built from source with make && make install. Is there another make
> target
> > I need to use to get the hooks to install? Do I need to run make extras
> or
> > something to get them installed?
> >
> > I see them in the source folder, so I could certainly just "copy them
> over"
> > but I want to do this the right way if possible
>
> These are "copied over" by the .spec that is used to generate the RPMs.
> It looks as if the hook scripts are not installed by 'make install'. If
> you can file a bug for this, we won't forget about it and can send a
> fix.
>
> Thanks,
> Niels
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Missing Hooks

2014-12-07 Thread Niels de Vos
On Sun, Dec 07, 2014 at 09:55:11AM -0500, David Gibbons wrote:
> Hi All,
> 
> I am running into an issue where it appears that some hooks are missing
> from /var/lib/glusterd/hooks
> 
> I am running version 3.5.3 and recently did an upgrade to that version from
> 3.4.2.
> 
> I built from source with make && make install. Is there another make target
> I need to use to get the hooks to install? Do I need to run make extras or
> something to get them installed?
> 
> I see them in the source folder, so I could certainly just "copy them over"
> but I want to do this the right way if possible

These are "copied over" by the .spec that is used to generate the RPMs.
It looks as if the hook scripts are not installed by 'make install'. If
you can file a bug for this, we won't forget about it and can send a
fix.

Thanks,
Niels


pgpSXyVnoXv1e.pgp
Description: PGP signature
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Missing Hooks

2014-12-07 Thread David Gibbons
Hi All,

I am running into an issue where it appears that some hooks are missing
from /var/lib/glusterd/hooks

I am running version 3.5.3 and recently did an upgrade to that version from
3.4.2.

I built from source with make && make install. Is there another make target
I need to use to get the hooks to install? Do I need to run make extras or
something to get them installed?

I see them in the source folder, so I could certainly just "copy them over"
but I want to do this the right way if possible

Cheers,
Dave
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Sudden interference between 2 independent gluster volumes

2014-12-07 Thread Peter B.

Thanks, everyone, for your input!

Am Do, 4.12.2014, 17:31 schrieb Peter B.:
> On server "A":
> 1) gluster peer detach "B"
> 2) Re-add the local bricks on "A" (which were already part of the game,
> but ain't anymore)

It actually worked almost exactly like that!
*phew*

I wrote down what happened - and how I fixed it:
http://www.das-werkstatt.com/forum/werkstatt/viewtopic.php?f=7&t=2164
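
For the impatient, the rough shape of it in CLI terms was along these lines
(volume and host names are placeholders; the writeup above has the exact
steps and caveats):

    # on server "A"
    gluster peer detach B
    gluster volume add-brick myvolume A:/path/to/brick
    gluster volume info myvolume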


Thanks again,
Pb

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Sudden interference between 2 independent gluster volumes

2014-12-07 Thread Peter B.


Am Do, 4.12.2014, 18:41 schrieb Atin Mukherjee:
> How are you saying that volume information is overlapping each other? If
> these volumes were created at different clusters they wouldn't have any
> common data, would they?

Unfortunately, they do: yes, the volumes were created and handled
completely separately, but "B" was an asynchronous backup copy of "A",
hence the identical volume name.

But it's solved now. *phew*

Pb

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users