[Gluster-users] Something wrong with glusterfs

2014-12-04 Thread Alexey Shalin
Hello, again.
Something is wrong with my GlusterFS installation.

OS: Debian
# cat /etc/debian_version
7.6
Package: glusterfs-server
Version: 3.2.7-3+deb7u1

Description: I have 3 servers with bricks (192.168.1.1 - node1,
192.168.1.2 - node2, 192.168.1.3 - node3).
The volume was created with:
gluster volume create opennebula transport tcp node1:/data node2:/data node3:/data

192.168.1.4 - client

# volume info
gluster volume info

Volume Name: opennebula
Type: Replicate
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: node1:/data
Brick2: node2:/data
Brick3: node3:/data
Options Reconfigured:
server.allow-insecure: on


# peer info
gluster peer show
unrecognized word: show (position 1)
root@node1:/data# gluster peer status
Number of Peers: 2

Hostname: node3
Uuid: 355f676d-044c-453d-8e82-13b810c089bb
State: Peer in Cluster (Connected)

Hostname: node2
Uuid: bfed0b59-6b2f-474e-a3d7-18b0eb0b1c77
State: Peer in Cluster (Connected)


# On the client I mounted the volume with:
mount.glusterfs node1:/opennebula /var/lib/one/

ls -al /var/lib/one  shows the files at first, but after about a minute
ls -al /var/lib/one  hangs.


Log:


[2014-12-05 13:28:53.290981] I [fuse-bridge.c:3461:fuse_graph_setup] 0-fuse: 
switched to graph 0
[2014-12-05 13:28:53.291223] I [fuse-bridge.c:3049:fuse_init] 0-glusterfs-fuse: 
FUSE inited with protocol versions: glusterfs 7.13 kernel 7.17
[2014-12-05 13:28:53.291800] I 
[afr-common.c:1522:afr_set_root_inode_on_first_lookup] 
0-opennebula-replicate-0: added root inode
[2014-12-05 13:29:16.355469] C 
[client-handshake.c:121:rpc_client_ping_timer_expired] 0-opennebula-client-0: 
server 192.168.1.1:24009 has not responded in the last 42 seconds, 
disconnecting.
[2014-12-05 13:29:16.355684] E [rpc-clnt.c:341:saved_frames_unwind] 
(-->/usr/lib/libgfrpc.so.0(rpc_clnt_notify+0xb0) [0x7f7020ccec60] 
(-->/usr/lib/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7e) [0x7f7020cce8fe] 
(-->/usr/lib/libgfrpc.so.0(saved_frames_destroy+0xe) [0x7f7020cce85e]))) 
0-opennebula-client-0: forced unwinding frame type(GlusterFS 3.1) 
op(READDIRP(40)) called at 2014-12-05 13:27:10.345569
[2014-12-05 13:29:16.355754] E [client3_1-fops.c:1937:client3_1_readdirp_cbk] 
0-opennebula-client-0: remote operation failed: Transport endpoint is not 
connected
[2014-12-05 13:29:16.355772] I 
[afr-self-heal-entry.c:1846:afr_sh_entry_impunge_readdir_cbk] 
0-opennebula-replicate-0: readdir of / on subvolume opennebula-client-0 failed 
(Transport endpoint is not connected)
[2014-12-05 13:29:16.356073] I [socket.c:2275:socket_submit_request] 
0-opennebula-client-0: not connected (priv->connected = 0)
[2014-12-05 13:29:16.356091] W [rpc-clnt.c:1417:rpc_clnt_submit] 
0-opennebula-client-0: failed to submit rpc-request (XID: 0x112x Program: 
GlusterFS 3.1, ProgVers: 310, Proc: 33) to rpc-transport (opennebula-client-0)
[2014-12-05 13:29:16.356107] I 
[afr-self-heal-entry.c:129:afr_sh_entry_erase_pending_cbk] 
0-opennebula-replicate-0: /: failed to erase pending xattrs on 
opennebula-client-0 (Transport endpoint is not connected)
[2014-12-05 13:29:16.356209] E [rpc-clnt.c:341:saved_frames_unwind] 
(-->/usr/lib/libgfrpc.so.0(rpc_clnt_notify+0xb0) [0x7f7020ccec60] 
(-->/usr/lib/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7e) [0x7f7020cce8fe] 
(-->/usr/lib/libgfrpc.so.0(saved_frames_destroy+0xe) [0x7f7020cce85e]))) 
0-opennebula-client-0: forced unwinding frame type(GlusterFS Handshake) 
op(PING(3)) called at 2014-12-05 13:27:52.348889
[2014-12-05 13:29:16.356227] W [client-handshake.c:264:client_ping_cbk] 
0-opennebula-client-0: timer must have expired
[2014-12-05 13:29:16.356257] E [rpc-clnt.c:341:saved_frames_unwind] 
(-->/usr/lib/libgfrpc.so.0(rpc_clnt_notify+0xb0) [0x7f7020ccec60] 
(-->/usr/lib/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7e) [0x7f7020cce8fe] 
(-->/usr/lib/libgfrpc.so.0(saved_frames_destroy+0xe) [0x7f7020cce85e]))) 
0-opennebula-client-0: forced unwinding frame type(GlusterFS 3.1) 
op(STATFS(14)) called at 2014-12-05 13:28:20.214777
[2014-12-05 13:29:16.356274] I [client3_1-fops.c:637:client3_1_statfs_cbk] 
0-opennebula-client-0: remote operation failed: Transport endpoint is not 
connected
[2014-12-05 13:29:16.356304] I [client.c:1883:client_rpc_notify] 
0-opennebula-client-0: disconnected
[2014-12-05 13:29:16.356663] I 
[client-handshake.c:1090:select_server_supported_programs] 
0-opennebula-client-0: Using Program GlusterFS 3.2.7, Num (1298437), Version 
(310)
[2014-12-05 13:29:16.356966] W [rpc-common.c:64:xdr_to_generic] 
(-->/usr/lib/libgfrpc.so.0(rpc_clnt_notify+0x85) [0x7f7020ccec35] 
(-->/usr/lib/libgfrpc.so.0(rpc_clnt_handle_reply+0xa5) [0x7f7020cce295] 
(-->/usr/lib/glusterfs/3.2.7/xlator/protocol/client.so(client3_1_entrylk_cbk+0x52)
 [0x7f701da44122]))) 0-xdr: XDR decoding failed
[2014-12-05 13:29:16.356993] E [client3_1-fops.c:1292:client3_1_
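
The critical line above ("server 192.168.1.1:24009 has not responded in the last
42 seconds") is the client-side ping timeout firing, which points at the brick on
node1 being unreachable or unresponsive rather than at the volume definition. A
rough diagnostic sketch, assuming the commands are run from the client and on
node1, and assuming 3.2.x already exposes the network.ping-timeout option:

# from the client: can the brick port on node1 be reached at all?
nc -z -w 5 192.168.1.1 24009 && echo reachable || echo blocked

# on node1: is the brick process still listening and healthy?
netstat -tlnp | grep 24009
tail -n 50 /var/log/glusterfs/bricks/data.log

# only if short network/load spikes are the cause, the timeout can be raised:
gluster volume set opennebula network.ping-timeout 60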

Re: [Gluster-users] [Gluster-devel] Proposal for more sub-maintainers

2014-12-04 Thread Vijaikumar M


On Thursday 04 December 2014 08:32 PM, Niels de Vos wrote:

On Fri, Nov 28, 2014 at 01:08:29PM +0530, Vijay Bellur wrote:

Hi All,

To supplement our ongoing effort of better patch management, I am proposing
the addition of more sub-maintainers for various components. The rationale
behind this proposal & the responsibilities of maintainers continue to be
the same as discussed in these lists a while ago [1]. Here is the proposed
list:

Build - Kaleb Keithley & Niels de Vos

DHT   - Raghavendra Gowdappa & Shyam Ranganathan

docs  - Humble Chirammal & Lalatendu Mohanty

gfapi - Niels de Vos & Shyam Ranganathan

index & io-threads - Pranith Karampuri

posix - Pranith Karampuri & Raghavendra Bhat

I'm wondering if there are any volunteers for maintaining the FUSE
component?

And maybe rewrite it to use libgfapi and drop the mount.glusterfs
script?

I am interested.

Thanks,
Vijay



Niels


We intend to update Gerrit with this list by 8th of December. Please let us
know if you have objections, concerns or feedback on this process by then.

Thanks,
Vijay

[1] http://gluster.org/pipermail/gluster-devel/2014-April/025425.html


Re: [Gluster-users] [Gluster-devel] Proposal for more sub-maintainers

2014-12-04 Thread Raghavendra Bhat

On Thursday 04 December 2014 08:32 PM, Niels de Vos wrote:

On Fri, Nov 28, 2014 at 01:08:29PM +0530, Vijay Bellur wrote:

Hi All,

To supplement our ongoing effort of better patch management, I am proposing
the addition of more sub-maintainers for various components. The rationale
behind this proposal & the responsibilities of maintainers continue to be
the same as discussed in these lists a while ago [1]. Here is the proposed
list:

Build - Kaleb Keithley & Niels de Vos

DHT   - Raghavendra Gowdappa & Shyam Ranganathan

docs  - Humble Chirammal & Lalatendu Mohanty

gfapi - Niels de Vos & Shyam Ranganathan

index & io-threads - Pranith Karampuri

posix - Pranith Karampuri & Raghavendra Bhat

I'm wondering if there are any volunteers for maintaining the FUSE
component?

And maybe rewrite it to use libgfapi and drop the mount.glusterfs
script?

Niels


I am interested too. :)

Regards,
Raghavendra Bhat


We intend to update Gerrit with this list by 8th of December. Please let us
know if you have objections, concerns or feedback on this process by then.

Thanks,
Vijay

[1] http://gluster.org/pipermail/gluster-devel/2014-April/025425.html


Re: [Gluster-users] [Gluster-devel] Proposal for more sub-maintainers

2014-12-04 Thread Pranith Kumar Karampuri


On 12/04/2014 08:32 PM, Niels de Vos wrote:

On Fri, Nov 28, 2014 at 01:08:29PM +0530, Vijay Bellur wrote:

Hi All,

To supplement our ongoing effort of better patch management, I am proposing
the addition of more sub-maintainers for various components. The rationale
behind this proposal & the responsibilities of maintainers continue to be
the same as discussed in these lists a while ago [1]. Here is the proposed
list:

Build - Kaleb Keithley & Niels de Vos

DHT   - Raghavendra Gowdappa & Shyam Ranganathan

docs  - Humble Chirammal & Lalatendu Mohanty

gfapi - Niels de Vos & Shyam Ranganathan

index & io-threads - Pranith Karampuri

posix - Pranith Karampuri & Raghavendra Bhat

I'm wondering if there are any volunteers for maintaining the FUSE
component?

I am interested in this work if you can guide me.

Pranith


And maybe rewrite it to use libgfapi and drop the mount.glusterfs
script?

Niels


We intend to update Gerrit with this list by 8th of December. Please let us
know if you have objections, concerns or feedback on this process by then.

Thanks,
Vijay

[1] http://gluster.org/pipermail/gluster-devel/2014-April/025425.html


Re: [Gluster-users] Empty bricks .... :( df -h hangs

2014-12-04 Thread Alexey Shalin
Yeah. sorry :)


---
Senior System Administrator
Alexey Shalin
"Hoster kg" LLC - http://www.hoster.kg
123 Akhunbaeva St. (BGTS building)
h...@hoster.kg


Re: [Gluster-users] Empty bricks .... :( df -h hangs

2014-12-04 Thread Joe Julian


On 12/04/2014 10:13 PM, Alexey Shalin wrote:

> seems issue was in
> auth.allow: 176.126.164.0/22
>
> hmmm...  client  and  nodes  was  from  this  subnet..  any  way  .. I
> recreated volume and now everything looks good

The documentation states that auth.allow accepts a "Valid IP address which
includes wild card patterns including *, such as 192.168.1.*", not CIDR blocks.
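
As a worked example, the /22 above would have to be spelled out as wildcard
patterns, one per /24 it covers (a sketch; volume name taken from the earlier
thread, option syntax assumed unchanged on that release):

gluster volume set opennebula auth.allow 176.126.164.*,176.126.165.*,176.126.166.*,176.126.167.*
gluster volume info opennebula    # "Options Reconfigured" should now show the wildcard list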

Re: [Gluster-users] Empty bricks .... :( df -h hangs

2014-12-04 Thread Alexey Shalin
It seems the issue was in
auth.allow: 176.126.164.0/22

Hmm... the client and the nodes were in this subnet. Anyway, I
recreated the volume and now everything looks good.

thx

---
Senior System Administrator
Alexey Shalin
"Hoster kg" LLC - http://www.hoster.kg
123 Akhunbaeva St. (BGTS building)
h...@hoster.kg


[Gluster-users] Empty bricks .... :( df -h hangs

2014-12-04 Thread Alexey Shalin
Hello
I created the volume with the following command:
gluster volume create opennebula replica 3 transport tcp node1:/data node2:/data node3:/data

# gluster volume info

Volume Name: opennebula
Type: Replicate
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: node1:/data
Brick2: node2:/data
Brick3: node3:/data
Options Reconfigured:
auth.allow: 176.126.164.0/22


Then I mounted it from the client:
root@master:/# mount.glusterfs node1:/opennebula /var/lib/one/datastores/0
root@master:/#

root@master:/# df -h

and hangs :(

Also on node1:
root@node1:~# df -h
also hangs :(

But if I stop (kill) the hanging df -h on the client, I'm able to run df -h on the client again.

Can you please help me?

Thx 
---
Senior System Administrator
Alexey Shalin
"Hoster kg" LLC - http://www.hoster.kg
123 Akhunbaeva St. (BGTS building)
h...@hoster.kg


[Gluster-users] Hello, small simple question about creating volume

2014-12-04 Thread Alexey Shalin
Hello
I have 3 servers and 1 client.
The 3 servers each have a /data folder with 300 GB of space.

Can you tell me the best combination to create the volume?
gluster volume create my_volume replica 3 transport tcp node1:/data node2:/data node3:/data

Is this OK for best performance?
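
For reference, a minimal sketch of creating, starting and mounting such a volume
(host names and the /data path taken from the question above; the mount point and
everything else are assumptions, not recommendations from the list):

gluster volume create my_volume replica 3 transport tcp node1:/data node2:/data node3:/data
gluster volume start my_volume
gluster volume info my_volume

# on the client:
mount.glusterfs node1:/my_volume /mnt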

Thx 
---
Senior System Administrator
Alexey Shalin
"Hoster kg" LLC - http://www.hoster.kg
123 Akhunbaeva St. (BGTS building)
h...@hoster.kg


[Gluster-users] A year's worth of Gluster

2014-12-04 Thread Franco Broi

1 DHT volume comprising 16 50 TB bricks spread across 4 servers. Each
server has 10 Gbit Ethernet.

Each brick is a ZoL (ZFS on Linux) RAIDZ2 pool with a single filesystem.
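
To picture the layout, a plain distribute (DHT) volume of that shape would be
created roughly like this (a sketch only; server and pool names are invented, not
Franco's actual ones):

gluster volume create data transport tcp \
    server1:/pool1/brick server1:/pool2/brick server1:/pool3/brick server1:/pool4/brick \
    server2:/pool1/brick server2:/pool2/brick server2:/pool3/brick server2:/pool4/brick \
    server3:/pool1/brick server3:/pool2/brick server3:/pool3/brick server3:/pool4/brick \
    server4:/pool1/brick server4:/pool2/brick server4:/pool3/brick server4:/pool4/brick
gluster volume start data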

Re: [Gluster-users] replication and balancing issues

2014-12-04 Thread Kiebzak, Jason M.
As a follow up:

I created another replicated striped volume - a two-brick replica, striped
across two sets of servers (four servers in all) - same config as mentioned
below. I started pouring data into it, and here's my output from `du`:
peer1 - 47G
peer2 - 47G
peer3 - 47G
peer4 - 24G

Peer1 and Peer2 should be mirrored, and peer3 and peer4 should be mirrored.

This time, data seems more balanced - EXCEPT that peer4 continues to lag FAR 
behind - like in the example below.

Any suggestions would be appreciated.
Jason

From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Kiebzak, Jason M.
Sent: Thursday, December 04, 2014 10:08 AM
To: gluster-users@gluster.org
Subject: [Gluster-users] replication and balancing issues

It seems that I have two issues:


1)  Data is not balanced between all bricks

2)  one replication "pair" is not staying in sync

I have four servers/peers, each with one brick, all running 3.6.1. There are
two volumes, each set up as a distributed-replicated volume. Below, I've
included some info. All daemons are running. The four peers were all added at
the same time.

Problem 1) for volume1, the peer1/peer2 set has 236G, while peer3 has 3.9T.
Shouldn't it be split more evenly - close to 2T on each set of servers? A
similar issue is seen with volume2, but the total data set (and thus the diff)
is not as large.

Problem 2) peer3 and peer4 should be replicated to each other. Peer1 and peer2
have identical disk usage, whereas peer3 and peer4 are egregiously out of
sync. Data on both peer3 and peer4 continues to grow (I am actively migrating
50T to volume 1).


`gluster volume info` gives this:
Volume Name: volume1
Type: Distributed-Replicate
Volume ID: bf461760-c412-42df-9e1d-7db7f793d344
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: ip1:/data/volume1
Brick2: ip2:/data/volume1
Brick3: ip3:/data/volume1
Brick4: ip4:/data/volume1
Options Reconfigured:
features.quota: on
auth.allow: serverip

Volume Name: volume2
Type: Distributed-Replicate
Volume ID: 54a8dbee-387f-4a61-9f67-3e2accb83072
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: ip1:/data/volume2
Brick2: ip2:/data/volume2
Brick3: ip3:/data/volume2
Brick4: ip4:/data/volume2
Options Reconfigured:
auth.allow: serverip

If I do a `# du -h --max-depth=1` on each peer, I get this:
Peer1
236G/data/volume1
177G/data/volume2
Peer2
236G/data/volume1
177G/data/volume2
Peer3
3.9T/data/volume1
179G/data/volume2
Peer4
524G/data/volume1
102G/data/volume2
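
A couple of checks that usually narrow this down on 3.6.x (a sketch; volume name
taken from the output above):

# are the replica pairs actually out of sync, or just healing slowly?
gluster volume heal volume1 info
gluster volume heal volume1 info split-brain

# per-brick capacity and inode usage as gluster itself reports it:
gluster volume status volume1 detail

Note that du on the bricks can be misleading for judging distribution: DHT places
whole files by filename hash, so a handful of very large files landing on one
replica pair can account for a 236G vs 3.9T split on its own.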

Re: [Gluster-users] Sudden interference between 2 independent gluster volumes

2014-12-04 Thread Joe Julian
If the volumes had the same names, I have no idea what the result of 
that would be.


If they had different names, it sounds like the volume data only synced in
one direction. Theoretically, you can look in /var/lib/glusterd/vols and
ensure that both volume directories exist on both servers (I would do
this with glusterd stopped) without detaching them as peers.
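
A rough sketch of that inspection (standard glusterd paths; the service commands
assume a sysvinit-style setup, adjust as needed):

# on each server, with glusterd stopped:
service glusterd stop
ls /var/lib/glusterd/vols/           # one directory per volume this node knows about
cat /var/lib/glusterd/vols/*/info    # volume-id, type and brick list for each volume
ls /var/lib/glusterd/peers/          # one file per known peer
service glusterd start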


On 12/04/2014 08:31 AM, Peter B. wrote:

Status update:

Server "A" and "B" now consider themselves gluster peers: Each one lists
the other one as peer ($ gluster peer status).

However, "gluster volume info " only lists the bricks of "B".

To solve my problem and restore autonomy of "A", I think I could do the
following:

On server "A":
1) gluster peer detach "B"
2) Re-add the local bricks on "A" (which were already part of the game,
but ain't anymore)


Could this work?
Is there anything I must watch out for, before re-adding the bricks of "A"
back to volume "A" again?


Sorry for asking so many questions here, but I'm progressively
understanding the problem/situation better as we speak. Therefore the
changing questions :)
Sorry...


Thanks again,
Peter



Re: [Gluster-users] Sudden interference between 2 independent gluster volumes

2014-12-04 Thread Atin Mukherjee


On 12/04/2014 09:08 PM, Peter B. wrote:
> This is actually directly related to my problem is mentioned here on Monday:
> "Folder disappeared on volume, but exists on bricks."
> 
> I probed node "A" from server "B", which caused all this. My bad.
> :(
> 
> 
> No data is lost, but is there any way to recover the volume information in
> /var/lib/glusterd on server "A", to separate it from "B" again?
Why do you say that the volume information is overlapping? If
these volumes were created in different clusters they wouldn't have any
common data, would they?

~Atin
> 
> 
> Thank you very much in advance,
> Pb
> 


Re: [Gluster-users] Introducing gdash - A simple GlusterFS dashboard

2014-12-04 Thread Aravinda
Added a --gluster option to specify the path to the gluster binary if you are
using a source install.


If you already installed gdash, upgrade using `sudo pip install -U 
gdash`


Usage: 


sudo gdash --gluster /usr/local/sbin/gluster

Updated the same in blog: http://aravindavk.in/blog/introducing-gdash/
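
Under the hood this relies on the same remote query capability of the plain
gluster CLI, so the data gdash shows can also be pulled by hand (a sketch;
replace node1 with any reachable server of the cluster):

gluster --remote-host=node1 volume info
gluster --remote-host=node1 volume status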

--
regards
Aravinda
http://aravindavk.in

On Thu, Dec 4, 2014 at 7:33 AM, Aravinda  wrote:

Hi All,

I created a small local installable web app called gdash, a simple 
dashboard for GlusterFS.


"gdash is a super-young project, which shows GlusterFS volume 
information about local, remote clusters. This app is based on 
GlusterFS's capability of executing gluster volume info and gluster 
volume status commands for a remote server using --remote-host 
option."


It is very easy to install using pip or easy_install.

Check my blog post for more in detail(with screenshots).
http://aravindavk.in/blog/introducing-gdash/

Comments and Suggestions Welcome.

--
regards
Aravinda
http://aravindavk.in


Re: [Gluster-users] Problem starting glusterd on CentOS 6

2014-12-04 Thread Jan-Hendrik Zab
On 04/12/14 16:05 +0100, Jan-Hendrik Zab wrote:
> Here is also a complete debug log from starting glusterd:
> 
>   -jhz
> 
> http://www.l3s.de/~zab/glusterfs.log

Apparently we fixed that specific problem. We deactivated iptables and
glusterd can communicate with the other processes again. We didn't
change the rules, though. And glusterd had been working fine in the
meantime, as far as we could tell anyway.

-jhz
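
For anyone who hits the same symptom: rather than disabling iptables entirely,
the usual approach is to open the Gluster ports explicitly (a sketch; 24007/24008
are the management ports, and the 49152+ range covers the brick ports on 3.4 and
later - the original report below shows a brick on 49154; the upper bound depends
on how many bricks the node hosts):

iptables -I INPUT -p tcp --dport 24007:24008 -j ACCEPT    # glusterd management
iptables -I INPUT -p tcp --dport 49152:49251 -j ACCEPT    # brick processes
service iptables save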



Re: [Gluster-users] Sudden interference between 2 independent gluster volumes

2014-12-04 Thread Peter B.
Status update:

Server "A" and "B" now consider themselves gluster peers: Each one lists
the other one as peer ($ gluster peer status).

However, "gluster volume info " only lists the bricks of "B".

To solve my problem and restore autonomy of "A", I think I could do the
following:

On server "A":
1) gluster peer detach "B"
2) Re-add the local bricks on "A" (which were already part of the game,
but ain't anymore)


Could this work?
Is there anything I must watch out for, before re-adding the bricks of "A"
back to volume "A" again?


Sorry for asking so many questions here, but I'm progressively
understanding the problem/situation better as we speak. Therefore the
changing questions :)
Sorry...


Thanks again,
Peter



Re: [Gluster-users] Sudden interference between 2 independent gluster volumes

2014-12-04 Thread Peter B.
This is actually directly related to my problem mentioned here on Monday:
"Folder disappeared on volume, but exists on bricks."

I probed node "A" from server "B", which caused all this. My bad.
:(


No data is lost, but is there any way to recover the volume information in
/var/lib/glusterd on server "A", to separate it from "B" again?


Thank you very much in advance,
Pb



Re: [Gluster-users] Folder disappeared on volume, but exists on bricks.

2014-12-04 Thread Peter B.
I think I know now what happened:

- 2 gluster volumes on 2 different servers: A and B.
- A=production, B=testing
- On server "B" we will soon expand, so I read up on how to add new nodes.
- Therefore, on server "B" I ran "gluster probe A", assuming that "probe"
would only check *whether* A was available.

It seems that "probing" another node already writes configuration data
onto it.


That seems to be the reason why the data of the bricks didn't show up on
the volume mountpoint on "A", because it was now already pointing to "B".


Pb
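
For the record, what a probe writes is easy to see on disk, since glusterd keeps
cluster membership and volume definitions under /var/lib/glusterd (a sketch;
standard paths only, no volume names assumed):

cat /var/lib/glusterd/glusterd.info    # this node's own UUID
ls /var/lib/glusterd/peers/            # one file per probed peer (uuid, hostname, state)
ls /var/lib/glusterd/vols/             # volume definitions pulled in from the pool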



Re: [Gluster-users] [Gluster-devel] Proposal for more sub-maintainers

2014-12-04 Thread Niels de Vos
On Fri, Nov 28, 2014 at 01:08:29PM +0530, Vijay Bellur wrote:
> Hi All,
> 
> To supplement our ongoing effort of better patch management, I am proposing
> the addition of more sub-maintainers for various components. The rationale
> behind this proposal & the responsibilities of maintainers continue to be
> the same as discussed in these lists a while ago [1]. Here is the proposed
> list:
> 
> Build - Kaleb Keithley & Niels de Vos
> 
> DHT   - Raghavendra Gowdappa & Shyam Ranganathan
> 
> docs  - Humble Chirammal & Lalatendu Mohanty
> 
> gfapi - Niels de Vos & Shyam Ranganathan
> 
> index & io-threads - Pranith Karampuri
> 
> posix - Pranith Karampuri & Raghavendra Bhat

I'm wondering if there are any volunteers for maintaining the FUSE
component?

And maybe rewrite it to use libgfapi and drop the mount.glusterfs
script?

Niels

> 
> We intend to update Gerrit with this list by 8th of December. Please let us
> know if you have objections, concerns or feedback on this process by then.
> 
> Thanks,
> Vijay
> 
> [1] http://gluster.org/pipermail/gluster-devel/2014-April/025425.html
> 

[Gluster-users] replication and balancing issues

2014-12-04 Thread Kiebzak, Jason M.
It seems that I have two issues:


1)  Data is not balanced between all bricks

2)  one replication "pair" is not staying in sync

I have four servers/peers, each with one brick, all running 3.6.1. There are
two volumes, each set up as a distributed-replicated volume. Below, I've
included some info. All daemons are running. The four peers were all added at
the same time.

Problem 1) for volume1, the peer1/peer2 set has 236G, while peer3 has 3.9T.
Shouldn't it be split more evenly - close to 2T on each set of servers? A
similar issue is seen with volume2, but the total data set (and thus the diff)
is not as large.

Problem 2) peer3 and peer4 should be replicated to each other. Peer1 and peer2
have identical disk usage, whereas peer3 and peer4 are egregiously out of
sync. Data on both peer3 and peer4 continues to grow (I am actively migrating
50T to volume 1).


`gluster volume info` gives this:
Volume Name: volume1
Type: Distributed-Replicate
Volume ID: bf461760-c412-42df-9e1d-7db7f793d344
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: ip1:/data/volume1
Brick2: ip2:/data/volume1
Brick3: ip3:/data/volume1
Brick4: ip4:/data/volume1
Options Reconfigured:
features.quota: on
auth.allow: serverip

Volume Name: volume2
Type: Distributed-Replicate
Volume ID: 54a8dbee-387f-4a61-9f67-3e2accb83072
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: ip1:/data/volume2
Brick2: ip2:/data/volume2
Brick3: ip3:/data/volume2
Brick4: ip4:/data/volume2
Options Reconfigured:
auth.allow: serverip

If I do a `# du -h --max-depth=1` on each peer, I get this:
Peer1
236G/data/volume1
177G/data/volume2
Peer2
236G/data/volume1
177G/data/volume2
Peer3
3.9T/data/volume1
179G/data/volume2
Peer4
524G/data/volume1
102G/data/volume2

Re: [Gluster-users] Problem starting glusterd on CentOS 6

2014-12-04 Thread Jan-Hendrik Zab
Here is also a complete debug log from starting glusterd:

-jhz

http://www.l3s.de/~zab/glusterfs.log



[Gluster-users] Sudden interference between 2 independent gluster volumes

2014-12-04 Thread Peter B.
Hi all,

Since the strange "hiccup" I had on Monday (files disappearing from the
volume, although existing on the bricks), another very strange (and
horrible) thing happened:

Out of the blue, gluster "Volume-A" is now pointing to the bricks of
another, completely separate and independent "Volume-B".
Ouch! O.o

Running "gluster volume info volume-a" and the same for "volume-b", both
list the bricks of "volume-b". They even show the identical volume ID.


Regardless of "why" that happened, does anyone here have any idea how I
can properly fix this and separate the volumes again?


I'm very grateful for any help!
Thanks in advance,
Peter



[Gluster-users] Problem starting glusterd on CentOS 6

2014-12-04 Thread Jan-Hendrik Zab
Hello,

we have a GlusterFS 3.6.1 installation on CentOS 6 with two servers and
one brick each. We just saw some trouble with inodes not being able to
be read and decided to shut down one of our servers and run xfs_repair
on the filesystem. (This turned up nothing.)

After that we tried to start glusterd again, but we only got
a few error messages in the logs and I'm not sure what to do next. Any
help would be much appreciated.

Our systems are gluster1/2 and each has a brick named brick0. If you
need any further information please ask.

The log messages from gluster2 follow:

data-brick0-brick.log:

[2014-12-04 13:54:54.862104] I [MSGID: 100030] [glusterfsd.c:2018:main]
0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version
3.6.1 (args: /usr/sbin/glusterfsd -s gluster2 --volfile-id
data.gluster2.data-brick0-brick -p
/var/lib/glusterd/vols/data/run/gluster2-data-brick0-brick.pid -S
/var/run/75e81344cb7da77f1168ab31c31bfebd.socket --brick-name
/data/brick0/brick -l /var/log/glusterfs/bricks/data-brick0-brick.log
--xlator-option
*-posix.glusterd-uuid=1ad5077c-4314-43f1-84ef-e560ca62e155 --brick-port
49154 --xlator-option data-server.listen-port=49154)
[2014-12-04 13:55:57.868573] E [socket.c:2267:socket_connect_finish]
0-glusterfs: connection to 192.168.1.19:24007 failed (Connection timed
out)
[2014-12-04 13:55:57.868696] E [glusterfsd-mgmt.c:1811:mgmt_rpc_notify]
0-glusterfsd-mgmt: failed to connect with remote-host: gluster2
(Transport endpoint is not connected)
[2014-12-04 13:55:57.868722] I [glusterfsd-mgmt.c:1817:mgmt_rpc_notify]
0-glusterfsd-mgmt: Exhausted all volfile servers
[2014-12-04 13:55:57.869050] W [glusterfsd.c:1194:cleanup_and_exit] (-->
0-: received signum (1), shutting down
[2014-12-04 13:55:57.875266] W [rpc-clnt.c:1562:rpc_clnt_submit]
0-glusterfs: failed to submit rpc-request (XID: 0x1 Program: Gluster
Portmap, ProgVers: 1, Proc: 5) to rpc-transport (glusterfs)

etc-glusterfs-glusterd.vol.log (we get a lot of the following wiht
different UUIDs):

[2014-12-04 14:03:16.431380] W [socket.c:611:__socket_rwv] 0-management:
readv on /var/run/75e81344cb7da77f1168ab31c31bfebd.socket failed
(Invalid argument)


glustershd.log:
[2014-12-04 13:55:57.891652] E [socket.c:2267:socket_connect_finish]
0-data-client-1: connection to 192.168.1.19:24007 failed (Connection
timed out)
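
Both failures above are plain TCP connection timeouts to the management port, so
a first check is whether 24007 on 192.168.1.19 is reachable from the affected
node at all (a sketch; address and log names taken from the output above):

nc -z -w 5 192.168.1.19 24007 && echo open || echo filtered
iptables -L -n | grep 24007            # any rule dropping the management port?
tail -n 50 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log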



Re: [Gluster-users] Introducing gdash - A simple GlusterFS dashboard

2014-12-04 Thread Gene Liverman
Very nice! I see a small Puppet module and a Vagrant setup in my immediate
future for using this. Thanks for sharing!

--
Gene Liverman
Systems Administrator
Information Technology Services
University of West Georgia
glive...@westga.edu

ITS: Making Technology Work for You!


On Dec 3, 2014 9:03 PM, "Aravinda"  wrote:

> Hi All,
>
> I created a small local installable web app called gdash, a simple
> dashboard for GlusterFS.
>
> "gdash is a super-young project, which shows GlusterFS volume information
> about local, remote clusters. This app is based on GlusterFS's capability
> of executing gluster volume info and gluster volume status commands for a
> remote server using --remote-host option."
>
> It is very easy to install using pip or easy_install.
>
> Check my blog post for more in detail(with screenshots).
> http://aravindavk.in/blog/introducing-gdash/
>
> Comments and Suggestions Welcome.
>
> --
> regards
> Aravinda
> http://aravindavk.in
>
>

Re: [Gluster-users] Introducing gdash - A simple GlusterFS dashboard

2014-12-04 Thread Arman Khalatyan
Wow, cool, thanks.
It would be good to have it as an oVirt plugin.
A.
On Dec 4, 2014 3:03 AM, "Aravinda"  wrote:

> Hi All,
>
> I created a small local installable web app called gdash, a simple
> dashboard for GlusterFS.
>
> "gdash is a super-young project, which shows GlusterFS volume information
> about local, remote clusters. This app is based on GlusterFS's capability
> of executing gluster volume info and gluster volume status commands for a
> remote server using --remote-host option."
>
> It is very easy to install using pip or easy_install.
>
> Check my blog post for more in detail(with screenshots).
> http://aravindavk.in/blog/introducing-gdash/
>
> Comments and Suggestions Welcome.
>
> --
> regards
> Aravinda
> http://aravindavk.in
>
>