Re: [Gluster-users] Fwd: to mailing list gluster.org

2015-01-27 Thread Pranith Kumar Karampuri

Vijaikumar, Sachin will work on this quota issue.

Pranith
On 01/22/2015 12:46 AM, Pranith Kumar Karampuri wrote:

+ quota devs

Pranith
On 01/07/2015 07:37 PM, Bekk, Klaus (IKP) wrote:



Hi all,

we are running four servers with Ubuntu 14.04 as glusterfs servers, with 
3.5.2-ubuntu1~trusty1.




When we enable quota on any of the volumes, the brick becomes unavailable 
after some time. So we have to disable quota on that volume and stop and 
start the volume again.
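In practice that workaround amounts to something like the following (a rough 
sketch, using gltestn as the example volume):

gluster volume quota gltestn disable
gluster volume stop gltestn
gluster volume start gltestn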


We have the following configuration

root@ikpsrv01:/var/log/glusterfs# gluster volume info

Volume Name: gltestn

Type: Distribute

Volume ID: e50bd30d-dbc8-4279-ac9d-2e557328a643

Status: Started

Number of Bricks: 1

Transport-type: tcp

Bricks:

Brick1: ikpsrv01:/gltest2/gl

Volume Name: glwww1

Type: Replicate

Volume ID: 811e4c2a-4520-4e61-bb84-955afbdb9dff

Status: Started

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: ikpsrv03:/glwwwn/gl

Brick2: ikpsrv01:/glwwwn2/gl

Options Reconfigured:

features.quota: on

Volume Name: corsika

Type: Distribute

Volume ID: d6e07f3e-4b35-4736-9b71-afc3224f0f29

Status: Started

Number of Bricks: 4

Transport-type: tcp

Bricks:

Brick1: ikpsrv03:/glcors3-1/gl

Brick2: ikpsrv03:/glcors3-2/gl

Brick3: ikpsrv02:/glcors2-1/gl

Brick4: ikpsrv02:/glcors2-2/gl

Volume Name: gldata

Type: Distribute

Volume ID: 82485d03-9ef4-4151-a2cc-881072e871a1

Status: Started

Number of Bricks: 2

Transport-type: tcp

Bricks:

Brick1: ikpsrv02:/gldata2-1/gl

Brick2: ikpsrv01:/gldata1-2/gl

Options Reconfigured:

features.quota: on

Volume Name: gltest

Type: Distribute

Volume ID: 9e69afcc-d421-41c9-824b-7d9a31240072

Status: Started

Number of Bricks: 2

Transport-type: tcp

Bricks:

Brick1: ikpsrv01:/gltest3/gl

Brick2: ikpsrv01:/gltest4/gl

Options Reconfigured:

features.quota: off

Volume Name: glusersold

Type: Distribute

Volume ID: 91a79e1a-4ff3-4191-b0ae-73357105774c

Status: Started

Number of Bricks: 1

Transport-type: tcp

Bricks:

Brick1: ikpsrv02:/glusers2-2/gl

Volume Name: glusers

Type: Distribute

Volume ID: f48b1941-9c19-4311-bd9e-497660fbcc80

Status: Started

Number of Bricks: 1

Transport-type: tcp

Bricks:

Brick1: ikpsrv02:/glusers2-1/gl

Options Reconfigured:

diagnostics.client-log-level: CRITICAL

diagnostics.brick-log-level: CRITICAL

features.quota: off

When we enable quota on any of the volumes, the brick becomes unavailable 
after some time. So we have to disable quota on that volume and stop and 
start the volume again.


The command

gluster volume quota gltestn enable

gives the following output in quotad.log

[2015-01-07 12:01:55.59] W [glusterfsd.c:1095:cleanup_and_exit] 
(-->/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7fb455bd9fbd] 
(-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x8182) [0x7fb455ead182] 
(-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xd5) 
[0x7fb4569a7265]))) 0-: received signum (15), shutting down


[2015-01-07 12:01:56.534931] I [glusterfsd.c:1959:main] 
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 
3.5.2 (/usr/sbin/glusterfs -s localhost --volfile-id gluster/quotad 
-p /var/lib/glusterd/quotad/run/quotad.pid -l 
/var/log/glusterfs/quotad.log -S 
/var/run/fa869d5552bf340ec7506747e21b2841.socket --xlator-option 
*replicate*.data-self-heal=off --xlator-option 
*replicate*.metadata-self-heal=off --xlator-option 
*replicate*.entry-self-heal=off)


[2015-01-07 12:01:56.537061] I [socket.c:3561:socket_init] 
0-socket.glusterfsd: SSL support is NOT enabled


[2015-01-07 12:01:56.537134] I [socket.c:3576:socket_init] 
0-socket.glusterfsd: using system polling thread


[2015-01-07 12:01:56.537355] I [socket.c:3561:socket_init] 
0-glusterfs: SSL support is NOT enabled


[2015-01-07 12:01:56.537383] I [socket.c:3576:socket_init] 
0-glusterfs: using system polling thread


[2015-01-07 12:01:56.541480] I [graph.c:254:gf_add_cmdline_options] 
0-glwww1-replicate-0: adding option 'entry-self-heal' for volume 
'glwww1-replicate-0' with value 'off'


[2015-01-07 12:01:56.541516] I [graph.c:254:gf_add_cmdline_options] 
0-glwww1-replicate-0: adding option 'metadata-self-heal' for volume 
'glwww1-replicate-0' with value 'off'


[2015-01-07 12:01:56.541536] I [graph.c:254:gf_add_cmdline_options] 
0-glwww1-replicate-0: adding option 'data-self-heal' for volume 
'glwww1-replicate-0' with value 'off'


[2015-01-07 12:01:56.543122] I [socket.c:3561:socket_init] 
0-socket.quotad: SSL support is NOT enabled


[2015-01-07 12:01:56.543157] I [socket.c:3576:socket_init] 
0-socket.quotad: using system polling thread


[2015-01-07 12:01:56.543404] I [dht-shared.c:311:dht_init_regex] 
0-glwww1: using regex rsync-hash-regex = ^\.(.+)\.[^.]+$


[2015-01-07 12:01:56.547124] I [socket.c:3561:socket_init] 
0-glwww1-client-1: SSL support is NOT enabled


[2015-01-07 12:01:56.547158] I [socket.c:3576:socket_init] 
0-glwww1-client-1: using system polling thread


[2015-01-07 12:01:56.547728] I [socket.c:3561:socket_init] 
0-glwww1-client-0: SSL support is NOT enabled


[201

Re: [Gluster-users] Mount failed

2015-01-27 Thread Bartłomiej Syryjczyk
On 2015-01-28 at 00:45, Franco Broi wrote:
> Glad that you found the problem. I wouldn't have thought that glusterfs
> should return before the filesystem is properly mounted, you should file
> a bug.
>
> https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
Ugh! This is a known problem...
https://bugzilla.redhat.com/show_bug.cgi?id=1151696

-- 
Yours sincerely,

*Bartłomiej Syryjczyk*
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] [Gluster-devel] FIO read test, why net recv 91MB/s, read only 22MB/s

2015-01-27 Thread 党志强
Hi,
I would like to know why the network receives 91 MB/s while the read throughput 
is only 22 MB/s when doing an FIO read test.
When I set cache-size=2GB or disable open-behind, this phenomenon 
disappears.
Thanks.
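For reference, the two settings mentioned above would be applied per volume 
roughly like this (a sketch; the volume name testvol is a placeholder):

gluster volume set testvol performance.cache-size 2GB
gluster volume set testvol performance.open-behind off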




gluster 3.4.5 (default configuration)
centos 6.5
replica 2, 2 servers
client runs the FIO test


FIO command:
fio -directory=/mnt/dzq -direct=1 -ioengine=sync -rw=read -bs=32K -size=1G 
-numjobs=8 -runtime=60 -ramp_time=20 -exitall -group_reporting -name=glfs_test
output:
glfs_test: (g=0): rw=read, bs=32K-32K/32K-32K/32K-32K, ioengine=sync, iodepth=1
...
fio-2.1.13
Starting 8 processes
read : io=1327.3MB, bw=22647KB/s, iops=707, runt= 60012msec


# dstat -nN  eth1,total
--net/eth1---net/total-
 recv  send: recv  send
  91M 1977k:  91M 1977k
  91M 1969k:  91M 1969k
  91M 1986k:  91M 1986k
  90M 1972k:  90M 1972k
..

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Mount failed

2015-01-27 Thread Franco Broi
On Tue, 2015-01-27 at 14:09 +0100, Bartłomiej Syryjczyk wrote:

> OK, I removed the "2>/dev/null" part, and see:
> stat: cannot stat ‘/mnt/gluster’: Resource temporarily unavailable
> 
> So I decided to add a sleep just before line number 298 (the one with
> stat). And it works! Is that normal?
> 

Glad that you found the problem. I wouldn't have thought that glusterfs
should return before the filesystem is properly mounted, you should file
a bug.

https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
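Until that is fixed, a slightly more robust script-side workaround than a fixed
sleep would be to poll the mount point after mounting, e.g. (a sketch; server,
volume, mount point and timeout are placeholders):

mount -t glusterfs server1:/myvol /mnt/gluster
for i in $(seq 1 30); do
    stat /mnt/gluster >/dev/null 2>&1 && break
    sleep 1
done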

Cheers,


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] lockd: server not responding, timed out

2015-01-27 Thread Peter Auyeung
Hi Niels,

I see no kernel NFS service running on the gluster node.

Here is the output:

root@glusterprod001:~# rpcinfo
   program version netid     address                service    owner
    100000       4 tcp6      ::.0.111               portmapper superuser
    100000       3 tcp6      ::.0.111               portmapper superuser
    100000       4 udp6      ::.0.111               portmapper superuser
    100000       3 udp6      ::.0.111               portmapper superuser
    100000       4 tcp       0.0.0.0.0.111          portmapper superuser
    100000       3 tcp       0.0.0.0.0.111          portmapper superuser
    100000       2 tcp       0.0.0.0.0.111          portmapper superuser
    100000       4 udp       0.0.0.0.0.111          portmapper superuser
    100000       3 udp       0.0.0.0.0.111          portmapper superuser
    100000       2 udp       0.0.0.0.0.111          portmapper superuser
    100000       4 local     /run/rpcbind.sock      portmapper superuser
    100000       3 local     /run/rpcbind.sock      portmapper superuser
    100005       3 tcp       0.0.0.0.150.65         mountd     superuser
    100005       1 tcp       0.0.0.0.150.66         mountd     superuser
    100003       3 tcp       0.0.0.0.8.1            nfs        superuser
    100021       4 tcp       0.0.0.0.150.68         nlockmgr   superuser
    100227       3 tcp       0.0.0.0.8.1            -          superuser
    100021       1 udp       0.0.0.0.2.215          nlockmgr   superuser
    100021       1 tcp       0.0.0.0.2.217          nlockmgr   superuser
    100024       1 udp       0.0.0.0.136.211        status     105
    100024       1 tcp       0.0.0.0.170.60         status     105
    100024       1 udp6      ::.182.65              status     105
    100024       1 tcp6      ::.172.250             status     105
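To confirm whether the nfs/mountd/nlockmgr registrations above belong to
gluster's NFS server or to the kernel, it may help to check which processes (if
any) actually hold the corresponding ports; note that an address like
0.0.0.0.8.1 decodes to port 8*256+1 = 2049, and 0.0.0.0.150.65/66 to
38465/38466 (a sketch):

netstat -anp | egrep ':(2049|38465|38466) '
ps ax | egrep 'glusterfs|nfsd' | grep -v grep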

root@glusterprod001:~# ss
State      Recv-Q Send-Q      Local Address:Port          Peer Address:Port
ESTAB      0      0           10.101.165.61:1015         10.101.165.63:24007
ESTAB      0      0           10.101.165.61:936          10.101.165.61:49156
ESTAB      0      0           10.101.165.61:1012         10.101.165.66:24007
ESTAB      0      0           10.101.165.61:49157        10.101.165.64:987
ESTAB      0      0           10.101.165.61:999          10.101.165.62:49153
ESTAB      0      0           10.101.165.61:49157        10.101.165.66:988
ESTAB      0      0           10.101.165.61:49155        10.101.165.62:834
ESTAB      0      0           10.101.165.61:49156        10.101.165.63:998
ESTAB      0      0           10.101.165.61:912          10.101.165.65:49153
ESTAB      0      0               127.0.0.1:982              127.0.0.1:24007
ESTAB      0      0           10.101.165.61:49156        10.101.165.65:997
ESTAB      0      0           10.101.165.61:49155        10.101.165.61:850
ESTAB      0      0           10.101.165.61:922          10.101.165.62:49154
ESTAB      0      0           10.101.165.61:896          10.101.165.65:49154
ESTAB      0      0           10.101.165.61:1010         10.101.165.61:24007
ESTAB      0      0           10.101.165.61:imaps        10.101.165.61:49156
ESTAB      0      0           10.101.165.61:49155        10.101.165.63:981
ESTAB      0      0           10.101.165.61:930          10.101.165.64:49155
ESTAB      0      0           10.101.165.61:4379         10.101.165.65:44899
ESTAB      0      0           10.101.165.61:983          10.101.165.61:49157
ESTAB      0      0

Re: [Gluster-users] Getting past the basic setup

2015-01-27 Thread Jorge Garcia

Ben,

Thanks for the detailed answer! We will look at implementing it and will 
let you know how it goes. FYI, for our initial setup (and maybe in the 
final setup) we're using distribute only (pure DHT), not replicas. So I 
figured our bandwidth wouldn't be cut too much...


Jorge

On 01/27/15 11:29, Ben Turner wrote:

No problem!  With glusterfs, replication is done client side by writing to 
multiple bricks at the same time; this means that your bandwidth will be cut 
by whatever number of replicas you chose.  So in a replica 2 your bandwidth is 
cut in half, replica 3 into 1/3, and so on.  Be sure to remember this when 
looking at the numbers.  Also, writing in really small blocks / file sizes can 
slow things down quite a bit; I try to keep writes in at least 64KB 
blocks (1024k is the sweet spot in my testing).  I think your use case is a good 
fit for us and I would love to see y'all using gluster in production.  The only 
concern I have is smallfile performance, and we are actively working to address 
that.  I suggest trying out the multi-threaded epoll changes that are coming 
down if smallfile perf becomes a focus of yours.

Also, I wrote a script that will help you out with the configuration / tuning. It 
was developed specifically for RHEL but should work with CentOS or other 
RPM-based distros as well.  If you want to tear down your env and easily 
rebuild, try the following:

Use kernel 2.6.32-504.3.2.el6 or later if using thinp - there were issues with 
earlier versions

Before this script is run, the system should be wiped. To do so, on each 
gluster server run(be sure to unmount the clients first):

# service glusterd stop; if [ $? -ne 0 ]; then pkill -9 glusterd; fi;  umount ; vgremove  -y --force --force --force; pvremove  --force --force
# pgrep gluster; if [ $? -eq 0 ]; then pgrep gluster | xargs kill -9; fi
# for file in /var/lib/glusterd/*; do if ! echo $file | grep 'hooks' >/dev/null 2>&1; then rm -rf $file; fi; done; rm -rf "/bricks/*"

***NOTE*** REPLACE < items > with your actual path/device ***

The prerequisites for the installer script are:

* Network needs to be installed and configured.
* DNS (both ways) needs to be operational if using Host Names
* At least 2 block devices need to be present on servers. First should be the 
O/S device, all other devices will be used for brick storage.

Edit the installer script to select/define the correct RAID being used. For 
example, with RAID6 (the default for Red Hat Storage) we have:

stripesize=128k
stripe_elements=10   # number of data disks, not counting spares
dataalign=1280k

RAID 6 works better with large (disk image) files.
RAID 10 works better with many smaller files.

Running the command with no options will display the help information.
Usage:  rhs-system-init.sh [-h] [-u virtual|object]

General:
   -uvirtual - RHS is used for storing virtual machine images.
   object  - Object Access (HTTP) is the primary method to
 access the RHS volume.
   The workload helps decide what performance tuning profile
   to apply and other customizations.
   By default, the  general purpose workload is assumed.
   -d   Specify a preconfigured block device to use for a brick.
   This must be the full path and it should already have an
   XFS file system.  This is used when you want to manually
   tune / configure your brick.  Example:
   $ME -d /dev/myLV/myVG
   -s NODES=" "Specify a space-separated list of hostnames that will be used
   as glusterfs servers.  This will configure a volume with the
   information provided.
   -c CLIENTS=" "  Specify a space-separated list of hostnames that will be used
   as glusterfs clients.
   -t  Number of replicas.  Use 0 for a pure distribute volume.
   -o  Configure Samba on the servers and clients.
   -n  Dry run to show what devices will be used for brick creation.
   -r  Skip RHN Registration.
   -h  Display this help.

As an example:

# sh rhs-system-init.sh -s "node1.example.com node2.example.com node3.example.com" -c 
"client1.example.com" -r -t 2

Let me know if you find the script useful; if so I'll look at doing something 
centos / community specific so that others can use it.

-b

- Original Message -

From: "Jorge Garcia" 
To: "Ben Turner" 
Cc: gluster-users@gluster.org
Sent: Tuesday, January 27, 2015 1:40:19 PM
Subject: Re: [Gluster-users] Getting past the basic setup

Hi Ben,

Sorry I didn't reply earlier, one week of much needed vacation got in
the way...

Anyway, here's what we're trying to do with glusterfs for now. Not much,
but I figured it would be a test:

Supporting a group that uses genomics data, we have a WHOLE bunch of
data that our users keep piling on. We figured we needed a place to
backup all this data 

Re: [Gluster-users] Getting past the basic setup

2015-01-27 Thread Ben Turner
No problem!  With glusterfs, replication is done client side by writing to 
multiple bricks at the same time; this means that your bandwidth will be cut 
by whatever number of replicas you chose.  So in a replica 2 your bandwidth is 
cut in half, replica 3 into 1/3, and so on.  Be sure to remember this when 
looking at the numbers.  Also, writing in really small blocks / file sizes can 
slow things down quite a bit; I try to keep writes in at least 64KB 
blocks (1024k is the sweet spot in my testing).  I think your use case is a good 
fit for us and I would love to see y'all using gluster in production.  The only 
concern I have is smallfile performance, and we are actively working to address 
that.  I suggest trying out the multi-threaded epoll changes that are coming 
down if smallfile perf becomes a focus of yours.
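As a concrete illustration of the block-size point above, a simple
sequential-write test in 1024k blocks could look like this (a sketch; the mount
path, file name and size are placeholders):

dd if=/dev/zero of=/mnt/glustervol/ddtest bs=1024k count=4096 conv=fdatasync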

Also, I wrote a script that will help you out with the configuration / tuning. It 
was developed specifically for RHEL but should work with CentOS or other 
RPM-based distros as well.  If you want to tear down your env and easily 
rebuild, try the following:

Use kernel 2.6.32-504.3.2.el6 or later if using thinp - there were issues with 
earlier versions

Before this script is run, the system should be wiped. To do so, on each 
gluster server run(be sure to unmount the clients first):

# service glusterd stop; if [ $? -ne 0 ]; then pkill -9 glusterd; fi;  umount ; vgremove  -y --force --force --force; pvremove  --force --force
# pgrep gluster; if [ $? -eq 0 ]; then pgrep gluster | xargs kill -9; fi
# for file in /var/lib/glusterd/*; do if ! echo $file | grep 'hooks' >/dev/null 2>&1; then rm -rf $file; fi; done; rm -rf "/bricks/*"

***NOTE*** REPLACE < items > with your actual path/device ***

The prerequisites for the installer script are:

* Network needs to be installed and configured.
* DNS (both ways) needs to be operational if using Host Names
* At least 2 block devices need to be present on servers. First should be the 
O/S device, all other devices will be used for brick storage.

Edit the installer script to select/define the correct RAID being used. For 
example, with RAID6 (the default for Red Hat Storage) we have:

stripesize=128k
stripe_elements=10   # number of data disks, not counting spares
dataalign=1280k

RAID 6 works better with large (disk image) files.
RAID 10 works better with many smaller files.

Running the command with no options will display the help information.
Usage:  rhs-system-init.sh [-h] [-u virtual|object]

General:
  -uvirtual - RHS is used for storing virtual machine images.
  object  - Object Access (HTTP) is the primary method to
access the RHS volume.
  The workload helps decide what performance tuning profile
  to apply and other customizations.
  By default, the  general purpose workload is assumed.
  -d   Specify a preconfigured block device to use for a brick.
  This must be the full path and it should already have an
  XFS file system.  This is used when you want to manually
  tune / configure your brick.  Example:
  $ME -d /dev/myLV/myVG
  -s NODES=" "Specify a space-separated list of hostnames that will be used
  as glusterfs servers.  This will configure a volume with the 
  information provided.
  -c CLIENTS=" "  Specify a space-separated list of hostnames that will be used
  as glusterfs clients.
  -t  Number of replicas.  Use 0 for a pure distribute volume.
  -o  Configure Samba on the servers and clients.
  -n  Dry run to show what devices will be used for brick creation.
  -r  Skip RHN Registration.
  -h  Display this help.

As an example:

# sh rhs-system-init.sh -s "node1.example.com node2.example.com 
node3.example.com" -c "client1.example.com" -r -t 2

Let me know if you find the script useful; if so I'll look at doing something 
centos / community specific so that others can use it.

-b

- Original Message -
> From: "Jorge Garcia" 
> To: "Ben Turner" 
> Cc: gluster-users@gluster.org
> Sent: Tuesday, January 27, 2015 1:40:19 PM
> Subject: Re: [Gluster-users] Getting past the basic setup
> 
> Hi Ben,
> 
> Sorry I didn't reply earlier, one week of much needed vacation got in
> the way...
> 
> Anyway, here's what we're trying to do with glusterfs for now. Not much,
> but I figured it would be a test:
> 
> Supporting a group that uses genomics data, we have a WHOLE bunch of
> data that our users keep piling on. We figured we needed a place to
> backup all this data (we're talking 100s of TBs, quickly reaching PBs).
> In the past, we have been doing some quick backups for disaster recovery
> by using rsync to several different machines, separating the data by
> [project/group/user] depending on how much data they had. This becomes a
> problem because some machines r

Re: [Gluster-users] Getting past the basic setup

2015-01-27 Thread Jorge Garcia

Hi Ben,

Sorry I didn't reply earlier, one week of much needed vacation got in 
the way...


Anyway, here's what we're trying to do with glusterfs for now. Not much, 
but I figured it would be a test:


Supporting a group that uses genomics data, we have a WHOLE bunch of 
data that our users keep piling on. We figured we needed a place to 
backup all this data (we're talking 100s of TBs, quickly reaching PBs). 
In the past, we have been doing some quick backups for disaster recovery 
by using rsync to several different machines, separating the data by 
[project/group/user] depending on how much data they had. This becomes a 
problem because some machines run out of space while others have plenty 
of room. So we figured: "What if we make it all one big glusterfs 
filesystem, then we can add storage as we need it for everybody, remove 
old systems when they become obsolete, and keep it all under one 
namespace". We don't need blazing speed, as this is all just disaster 
recovery, backups run about every week. We figured the first backup 
would take a while, but after that it would just be incrementals using 
rsync, so it would be OK. The data is mixed, with some people keeping 
some really large files, some people keeping millions of small files, 
and some people somewhere in between.


We started our testing with a 2 node glusterfs system. Then we compared 
a copy of several TBs to that system vs. copying to an individual 
machine. The glusterfs copy was almost 2 orders of magnitude slower. 
Which led me to believe we had done something wrong, or we needed to do 
some performance tuning. That's when I found out that there's plenty of 
information about the basic setup of glusterfs, but not much when you're 
trying to get beyond that.


Thanks for your help, we will look at the pointers you gave us, and will 
probably report on our results as we get them.


Jorge

On 01/16/15 12:57, Ben Turner wrote:

- Original Message -

From: "Jorge Garcia" 
To: gluster-users@gluster.org
Sent: Thursday, January 15, 2015 1:24:35 PM
Subject: [Gluster-users] Getting past the basic setup


We're looking at using glusterfs for some of our storage needs. I have
read the "Quick start", was able to create a basic filesystem, and that
all worked. But wow, was it slow! I'm not sure if it was due to our very
simple setup, or if that's just the way it is. So, we're trying to look
deeper into options to improve performance, etc. In particular, I'm
interested in the NUFA scheduler. But after hours of googling around for
more information, I haven't gotten anywhere. There's a page with
"translator tutorials", but all the links are dead. Jeff Darcy seems to
be working at Red Hat these days, so maybe all his open-source stuff got
removed. I haven't found anything about how to even start setting up a
NUFA scheduled system. Any pointers of where to even start?


Hi Jorge.  Here are some of my favorite gluster tuning DOCs:

https://rhsummit.files.wordpress.com/2014/04/bengland_h_1100_rhs_performance.pdf
http://rhsummit.files.wordpress.com/2013/07/england_th_0450_rhs_perf_practices-4_neependra.pdf
http://rhsummit.files.wordpress.com/2012/03/england-rhs-performance.pdf

They are all pretty much the same just updated as new features were 
implemented.  I usually think of gluster perf for sequential workloads like:

writes = 1/2 * NIC bandwidth (when using replica 2) - 20% (with a fast enough 
back end to service this)

.5 * 1250 * .8 = ~500 MB / sec

reads = NIC bandwidth - 40%

1250 * .6 = ~750 MB / sec

If you aren't seeing between 400-500 MB / sec sequential writes and 600-750 MB 
/ sec sequential reads on 10G NICs with fast enough back end to service this 
then be sure to tune what is suggested in that DOC.  What is your HW like?  
What are your performance requirements?  Just to note, glusterFS performance 
really starts to degrade when writing in under 64KB block sizes.  What size of 
files and what record / block size are you writing in?

-b


Any help would be extremely appreciated!

Thanks,

Jorge


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] ... i was able to produce a split brain...

2015-01-27 Thread Joe Julian

No, there's not. I've been asking for this for years.
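For what it's worth, the usual manual workaround is to pick the brick whose
copy you want to throw away, delete the file and its .glusterfs gfid hard link
on that brick, and let self-heal copy it back from the other node. A rough
sketch (the gfid shown is a placeholder that has to be read from the getfattr
output, and the paths are taken from the heal info below):

# on the node whose copy you want to discard, e.g. ovirt-node04
BRICK=/raidvol/volb/brick
FILE=1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
getfattr -n trusted.gfid -e hex $BRICK/$FILE    # e.g. 0xaabbccdd...
rm -f $BRICK/$FILE
rm -f $BRICK/.glusterfs/aa/bb/aabbccdd-...      # path derived from the gfid
gluster volume heal RaidVolB full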

On 01/27/2015 10:09 AM, Ml Ml wrote:

Hello List,

I was able to produce a split brain:

[root@ovirt-node03 splitmount]# gluster volume heal RaidVolB info
Brick ovirt-node03.example.local:/raidvol/volb/brick/

Number of entries: 1

Brick ovirt-node04.example.local:/raidvol/volb/brick/
/1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
Number of entries: 1




I want to either take the file from node03 or node04; I really don't
mind. Can I not just tell gluster that it should use one node as the
"current" one?

Like with DRBD: drbdadm connect --discard-my-data 

Is there a similar way with gluster?



Thanks,
Mario

# rpm -qa | grep gluster
---
glusterfs-fuse-3.6.2-1.el6.x86_64
glusterfs-server-3.6.2-1.el6.x86_64
glusterfs-libs-3.6.2-1.el6.x86_64
glusterfs-3.6.2-1.el6.x86_64
glusterfs-cli-3.6.2-1.el6.x86_64
glusterfs-rdma-3.6.2-1.el6.x86_64
vdsm-gluster-4.14.6-0.el6.noarch
glusterfs-api-3.6.2-1.el6.x86_64
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] ... i was able to produce a split brain...

2015-01-27 Thread Ml Ml
Hello List,

I was able to produce a split brain:

[root@ovirt-node03 splitmount]# gluster volume heal RaidVolB info
Brick ovirt-node03.example.local:/raidvol/volb/brick/

Number of entries: 1

Brick ovirt-node04.example.local:/raidvol/volb/brick/
/1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids
Number of entries: 1




I want to either take the file from node03 or node04; I really don't
mind. Can I not just tell gluster that it should use one node as the
"current" one?

Like with DRBD: drbdadm connect --discard-my-data 

Is there a similar way with gluster?



Thanks,
Mario

# rpm -qa | grep gluster
---
glusterfs-fuse-3.6.2-1.el6.x86_64
glusterfs-server-3.6.2-1.el6.x86_64
glusterfs-libs-3.6.2-1.el6.x86_64
glusterfs-3.6.2-1.el6.x86_64
glusterfs-cli-3.6.2-1.el6.x86_64
glusterfs-rdma-3.6.2-1.el6.x86_64
vdsm-gluster-4.14.6-0.el6.noarch
glusterfs-api-3.6.2-1.el6.x86_64
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] High latency on FSYNC, INODELK, FINODELK

2015-01-27 Thread Paul E Stallworth
Hello,

I ran a gluster volume profile while doing a dd to read from /dev/zero and 
write to a file and noticed unusually high latency on the FSYNC, INODELK, and 
FINODELK operations.  This latency seems to correspond with very slow page 
loads of our website.  I'm looking for any help in finding where this latency 
might be coming from and how we might mitigate it. 
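For context, gluster's built-in profiler is typically driven with a sequence
roughly like the one below (a sketch; the volume name myvol and the dd target
are placeholders):

gluster volume profile myvol start
dd if=/dev/zero of=/mnt/myvol/testfile bs=1M count=1024 oflag=sync
gluster volume profile myvol info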

Here are the values from the profile

%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
- --- --- ---  
0.00 0.00 us 0.00 us 0.00 us 110601 FORGET
0.00 0.00 us 0.00 us 0.00 us 104872 RELEASE
0.00 0.00 us 0.00 us 0.00 us 269678 RELEASEDIR
0.00 50.00 us 45.00 us 55.00 us 2 OPEN
0.00 73.50 us 32.00 us 115.00 us 2 FLUSH
0.00 47.00 us 33.00 us 57.00 us 4 ENTRYLK
0.00 226.00 us 226.00 us 226.00 us 1 CREATE
0.00 83.75 us 42.00 us 136.00 us 4 STATFS
0.00 72.80 us 23.00 us 175.00 us 5 READDIR
0.00 54.90 us 15.00 us 97.00 us 10 GETXATTR
0.00 79.19 us 26.00 us 193.00 us 32 FXATTROP
0.00 45.36 us 14.00 us 271.00 us 206 STAT
0.01 66.26 us 2.00 us 586.00 us 352 OPENDIR
0.09 252262.00 us 252262.00 us 252262.00 us 1 UNLINK
1.02 139.54 us 29.00 us 10777.00 us 20263 WRITE
7.98 1843032.83 us 22261.00 us 16023044.00 us 12 FSYNC
8.19 1620932.64 us 27.00 us 17171453.00 us 14 INODELK
9.03 94.52 us 22.00 us 9533.00 us 264526 LOOKUP
73.67 563743.05 us 14.00 us 17173239.00 us 362 FINODELK

Thanks,
Paul

Paul Stallworth
Housing Information Technology
University of Colorado Boulder
Boulder, Colorado 80309
T 303.735.6623
M 303.735.6623



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] v3.6.2

2015-01-27 Thread David F. Robinson
Not elegant, but here is my short-term fix to prevent the issue after a 
reboot:


Added 'noauto' to the NFS mounts in /etc/fstab and put the actual mounts in /etc/rc.local:

/etc/fstab:
#... Note: Used 'noauto' for the NFS mounts and put the mounts in /etc/rc.local to ensure that
#... gluster has been started before attempting to mount using NFS. Otherwise, it hangs the
#... NFS ports during startup.
gfsib01bkp.corvidtec.com:/homegfs_bkp /backup_nfs/homegfs nfs vers=3,intr,bg,rsize=32768,wsize=32768,noauto 0 0
gfsib01a.corvidtec.com:/homegfs /homegfs_nfs nfs vers=3,intr,bg,rsize=32768,wsize=32768,noauto 0 0



/etc/rc.local:
/etc/init.d/glusterd restart
(sleep 20; mount -a; mount /backup_nfs/homegfs)&
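A slightly more robust variant of the same idea, which waits for gluster's NFS
service to register instead of sleeping a fixed 20 seconds, might look like
this (a sketch, untested):

/etc/init.d/glusterd restart
(
    for i in $(seq 1 60); do
        rpcinfo -p localhost 2>/dev/null | awk '{print $5}' | grep -qx nfs && break
        sleep 2
    done
    mount -a
    mount /backup_nfs/homegfs
) &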



-- Original Message --
From: "Xavier Hernandez" 
To: "David F. Robinson" ; "Kaushal M" 

Cc: "Gluster Users" ; "Gluster Devel" 


Sent: 1/27/2015 10:02:31 AM
Subject: Re: [Gluster-devel] [Gluster-users] v3.6.2


Hi,

I had a similar problem once. It happened after doing some unrelated 
tests with NFS. I thought it was a problem I generated doing weird 
things, so I didn't investigate the cause further.


To see if this is the same case, try this:

* Unmount all NFS mounts and stop all gluster volumes
* Check that there are no gluster processes running (ps ax | grep 
gluster), especially any glusterfs. glusterd is ok.

* Check that there are no NFS processes running (ps ax | grep nfs)
* Check with 'rpcinfo -p' that there's no nfs service registered

The output should be similar to this:

   program vers proto port service
100000 4 tcp 111 portmapper
100000 3 tcp 111 portmapper
100000 2 tcp 111 portmapper
100000 4 udp 111 portmapper
100000 3 udp 111 portmapper
100000 2 udp 111 portmapper
100024 1 udp 33482 status
100024 1 tcp 37034 status

If there are more services registered, you can directly delete them or 
check if they correspond to an active process. For example, if the 
output is this:


   program vers proto port service
100000 4 tcp 111 portmapper
100000 3 tcp 111 portmapper
100000 2 tcp 111 portmapper
100000 4 udp 111 portmapper
100000 3 udp 111 portmapper
100000 2 udp 111 portmapper
100021 3 udp 39618 nlockmgr
100021 3 tcp 41067 nlockmgr
100024 1 udp 33482 status
100024 1 tcp 37034 status

You can do a "netstat -anp | grep 39618" to see if there is some 
process really listening at the nlockmgr port. You can repeat this for 
port 41067. If there is some process, you should stop it. If there is 
no process listening on that port, you should remove it with a command 
like this:


rpcinfo -d 100021 3

You must execute this command for all stale ports for any services 
other than portmapper and status. Once done you should get the output 
shown before.


After that, you can try to start your volume and see if everything is 
registered (rpcinfo -p) and if gluster has started the nfs server 
(gluster volume status).


If everything is ok, you should be able to mount the volume using NFS.

Xavi

On 01/27/2015 03:18 PM, David F. Robinson wrote:

Turning off nfslock did not help. Also, still getting these messages
every 3-seconds:

[2015-01-27 14:16:12.921880] W [socket.c:611:__socket_rwv] 
0-management:

readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:15.922431] W [socket.c:611:__socket_rwv] 
0-management:

readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:18.923080] W [socket.c:611:__socket_rwv] 
0-management:

readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:21.923748] W [socket.c:611:__socket_rwv] 
0-management:

readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:24.924472] W [socket.c:611:__socket_rwv] 
0-management:

readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:27.925192] W [socket.c:611:__socket_rwv] 
0-management:

readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:30.925895] W [socket.c:611:__socket_rwv] 
0-management:

readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:33.926563] W [socket.c:611:__socket_rwv] 
0-management:

readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:36.927248] W [socket.c:611:__socket_rwv] 
0-management:

readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
-- Original Message --
From: "Kaushal M" mailto:kshlms...@gmail.com>>
To: "David F. Robinson" mailto:david.robin...@corvidtec.com>>
Cc: "Joe Julian" mailto:j...@julianfamily.org>>;
"Gluster Users" mailto:gluster-users@gluster.org>>; "Gluster Devel"
mailto:gluster-de...@gluster.org>>
Sent: 1/27/2015 1:49:56 AM
Subject: Re: Re[2]: [Gluster-devel] [Gluster-users] v3.6.2

Your nfs.log file has the following lines,
```
[2015-01-26 20:06:

[Gluster-users] Glusterfs -- My doubts and (few) answers

2015-01-27 Thread Marco Marino
Hi. I'm using glusterfs 3.4 in a replica 2 configuration. My main usage of
gluster is as an OpenStack storage backend (Glance and shared storage for
nova-compute), and I have been using gluster for about 1 year
(not really a production environment, so only a limited set of experiments is
possible). I have some doubts that I want to share with the users mailing
list:

1) Glusterfs client or NFS client? I think the glusterfs client is better,
but I would like more experienced users to give a precise answer.

2) How does glusterfs manage backup-volfile-servers (an example mount line is
shown after this list)? My answer: from the point of view of a client
(glusterfs client), if we are using this parameter the client is not affected
by a storage server in the replica pool going down. But the problem is that
if a client is writing a file when the problem happens, I think this file
will be damaged. How does a glusterfs client manage this case? It will drop
the tcp connection to "storage1" after a (configurable) time and open a new
tcp connection to "storage2". But does the client have some sort of "buffer"
for the data written during the time needed to close and open a new tcp
connection? Help please.

3) If one server goes down in a replica pool, how can I recover and
re-align the data on this server? I think I should use the healing
procedure, but also in this case, input from experienced users is welcome.
This is a particular case; the more general case is: how can I reboot a
server in a replica configuration?

4) Is filesystem storage or block storage better for OpenStack? This
question seems to be off topic, but glusterfs has an integration for
Cinder, so block storage versus file storage is an important decision and
also a real option (imho).

5) How can I monitor the state of the disks? Does glusterfs have a command
for this? Should I use "external" tools?

6) How can I manage the case where a brick is damaged/corrupted? I think
the behavior differs by glusterfs version; it seems that 3.5 can handle
this case. Please, some help here!

7) How can I add a new partition with XFS? Should I reboot the server? If I
reboot the server, I am back at point 3. This is an important question
because I don't know whether I will need a new partition tomorrow!
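For question 2, the parameter is normally passed on the client mount line, for
example (a sketch; hosts storage1/storage2 and volume myvol are placeholders,
and the exact option spelling varies a little between glusterfs versions):

mount -t glusterfs -o backup-volfile-servers=storage2 storage1:/myvol /mnt/gluster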

These are my doubts, but they are very common doubts among anyone who plans to
use glusterfs. So I think that if this mail gets useful responses, it
could be a reference for new glusterfs users.
Thank you.

MM
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] v3.6.2

2015-01-27 Thread David F. Robinson

In my /etc/fstab, I have the following:

  gfsib01bkp.corvidtec.com:/homegfs_bkp  /backup/homegfs
glusterfs   transport=tcp,_netdev 0 0
  gfsib01bkp.corvidtec.com:/Software_bkp /backup/Software   
glusterfs   transport=tcp,_netdev 0 0
  gfsib01bkp.corvidtec.com:/Source_bkp   /backup/Source 
glusterfs   transport=tcp,_netdev 0 0


  #... Setup NFS mounts as well
  gfsib01bkp.corvidtec.com:/homegfs_bkp /backup_nfs/homegfs nfs 
vers=3,intr,bg,rsize=32768,wsize=32768 0 0



It looks like it is trying to start the nfs mount before gluster has 
finished coming up and that this is hanging the nfs ports.  I have 
_netdev in the glusterfs mount point to make sure the network has come 
up (including infiniband) prior to starting gluster.  Shouldn't the 
gluster init scripts check for gluster startup prior to starting the nfs 
mount?  It doesn't look like this is working properly.


David



-- Original Message --
From: "Xavier Hernandez" 
To: "David F. Robinson" ; "Kaushal M" 

Cc: "Gluster Users" ; "Gluster Devel" 


Sent: 1/27/2015 10:02:31 AM
Subject: Re: [Gluster-devel] [Gluster-users] v3.6.2


Hi,

I had a similar problem once. It happened after doing some unrelated 
tests with NFS. I thought it was a problem I generated doing weird 
things, so I didn't investigate the cause further.


To see if this is the same case, try this:

* Unmount all NFS mounts and stop all gluster volumes
* Check that there are no gluster processes running (ps ax | grep 
gluster), especially any glusterfs. glusterd is ok.

* Check that there are no NFS processes running (ps ax | grep nfs)
* Check with 'rpcinfo -p' that there's no nfs service registered

The output should be similar to this:

   program vers proto port service
100000 4 tcp 111 portmapper
100000 3 tcp 111 portmapper
100000 2 tcp 111 portmapper
100000 4 udp 111 portmapper
100000 3 udp 111 portmapper
100000 2 udp 111 portmapper
100024 1 udp 33482 status
100024 1 tcp 37034 status

If there are more services registered, you can directly delete them or 
check if they correspond to an active process. For example, if the 
output is this:


   program vers proto port service
100000 4 tcp 111 portmapper
100000 3 tcp 111 portmapper
100000 2 tcp 111 portmapper
100000 4 udp 111 portmapper
100000 3 udp 111 portmapper
100000 2 udp 111 portmapper
100021 3 udp 39618 nlockmgr
100021 3 tcp 41067 nlockmgr
100024 1 udp 33482 status
100024 1 tcp 37034 status

You can do a "netstat -anp | grep 39618" to see if there is some 
process really listening at the nlockmgr port. You can repeat this for 
port 41067. If there is some process, you should stop it. If there is 
no process listening on that port, you should remove it with a command 
like this:


rpcinfo -d 100021 3

You must execute this command for all stale ports for any services 
other than portmapper and status. Once done you should get the output 
shown before.


After that, you can try to start your volume and see if everything is 
registered (rpcinfo -p) and if gluster has started the nfs server 
(gluster volume status).


If everything is ok, you should be able to mount the volume using NFS.

Xavi

On 01/27/2015 03:18 PM, David F. Robinson wrote:

Turning off nfslock did not help. Also, still getting these messages
every 3-seconds:

[2015-01-27 14:16:12.921880] W [socket.c:611:__socket_rwv] 
0-management:

readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:15.922431] W [socket.c:611:__socket_rwv] 
0-management:

readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:18.923080] W [socket.c:611:__socket_rwv] 
0-management:

readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:21.923748] W [socket.c:611:__socket_rwv] 
0-management:

readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:24.924472] W [socket.c:611:__socket_rwv] 
0-management:

readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:27.925192] W [socket.c:611:__socket_rwv] 
0-management:

readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:30.925895] W [socket.c:611:__socket_rwv] 
0-management:

readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:33.926563] W [socket.c:611:__socket_rwv] 
0-management:

readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:36.927248] W [socket.c:611:__socket_rwv] 
0-management:

readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
-- Original Message --
From: "Kaushal M" mailto:kshlms...@gmail.com>>
To: "David F. Robinson" mailto:david.robin...@corvidtec.com>>
Cc: "Joe Julian" mailto:j...@julianfamily.or

Re: [Gluster-users] [Gluster-devel] v3.6.2

2015-01-27 Thread David F. Robinson
I rebooted the machine to see if the problem would return and it does.  
Same issue after a reboot.

Any suggestions?

One other thing I tested was to comment out the NFS mounts in 
/etc/fstab:
# gfsib01bkp.corvidtec.com:/homegfs_bkp /backup_nfs/homegfs nfs 
vers=3,intr,bg,rsize=32768,wsize=32768 0 0
After the machine comes back up, I remove the comment and do a 'mount 
-a'.  The mount works fine.


It looks like it is a timing issue during startup.  Is it trying to do 
the NFS mount while glusterd is still starting up?


David


-- Original Message --
From: "Xavier Hernandez" 
To: "David F. Robinson" ; "Kaushal M" 

Cc: "Gluster Users" ; "Gluster Devel" 


Sent: 1/27/2015 10:02:31 AM
Subject: Re: [Gluster-devel] [Gluster-users] v3.6.2


Hi,

I had a similar problem once. It happened after doing some unrelated 
tests with NFS. I thought it was a problem I generated doing weird 
things, so I didn't investigate the cause further.


To see if this is the same case, try this:

* Unmount all NFS mounts and stop all gluster volumes
* Check that there are no gluster processes running (ps ax | grep 
gluster), especially any glusterfs. glusterd is ok.

* Check that there are no NFS processes running (ps ax | grep nfs)
* Check with 'rpcinfo -p' that there's no nfs service registered

The output should be similar to this:

   program vers proto port service
100000 4 tcp 111 portmapper
100000 3 tcp 111 portmapper
100000 2 tcp 111 portmapper
100000 4 udp 111 portmapper
100000 3 udp 111 portmapper
100000 2 udp 111 portmapper
100024 1 udp 33482 status
100024 1 tcp 37034 status

If there are more services registered, you can directly delete them or 
check if they correspond to an active process. For example, if the 
output is this:


   program vers proto port service
100000 4 tcp 111 portmapper
100000 3 tcp 111 portmapper
100000 2 tcp 111 portmapper
100000 4 udp 111 portmapper
100000 3 udp 111 portmapper
100000 2 udp 111 portmapper
100021 3 udp 39618 nlockmgr
100021 3 tcp 41067 nlockmgr
100024 1 udp 33482 status
100024 1 tcp 37034 status

You can do a "netstat -anp | grep 39618" to see if there is some 
process really listening at the nlockmgr port. You can repeat this for 
port 41067. If there is some process, you should stop it. If there is 
no process listening on that port, you should remove it with a command 
like this:


rpcinfo -d 100021 3

You must execute this command for all stale ports for any services 
other than portmapper and status. Once done you should get the output 
shown before.


After that, you can try to start your volume and see if everything is 
registered (rpcinfo -p) and if gluster has started the nfs server 
(gluster volume status).


If everything is ok, you should be able to mount the volume using NFS.

Xavi

On 01/27/2015 03:18 PM, David F. Robinson wrote:

Turning off nfslock did not help. Also, still getting these messages
every 3-seconds:

[2015-01-27 14:16:12.921880] W [socket.c:611:__socket_rwv] 
0-management:

readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:15.922431] W [socket.c:611:__socket_rwv] 
0-management:

readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:18.923080] W [socket.c:611:__socket_rwv] 
0-management:

readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:21.923748] W [socket.c:611:__socket_rwv] 
0-management:

readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:24.924472] W [socket.c:611:__socket_rwv] 
0-management:

readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:27.925192] W [socket.c:611:__socket_rwv] 
0-management:

readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:30.925895] W [socket.c:611:__socket_rwv] 
0-management:

readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:33.926563] W [socket.c:611:__socket_rwv] 
0-management:

readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:36.927248] W [socket.c:611:__socket_rwv] 
0-management:

readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
-- Original Message --
From: "Kaushal M" mailto:kshlms...@gmail.com>>
To: "David F. Robinson" mailto:david.robin...@corvidtec.com>>
Cc: "Joe Julian" mailto:j...@julianfamily.org>>;
"Gluster Users" mailto:gluster-users@gluster.org>>; "Gluster Devel"
mailto:gluster-de...@gluster.org>>
Sent: 1/27/2015 1:49:56 AM
Subject: Re: Re[2]: [Gluster-devel] [Gluster-users] v3.6.2

Your nfs.log file has the following lines,
```
[2015-01-26 20:06:58.298078] E
[rpcsvc.c:1303:rpcsvc_program_register_portmap] 0-rpc-service: Could
not register with portmap 100021 4 38468
[2015-01

Re: [Gluster-users] Fwd: to mailing list gluster.org

2015-01-27 Thread Wochele, Doris (IKP)
Hi, this reply has now gone to the list… gluster-users@gluster.org

 

From: Bekk, Klaus (IKP) 
Sent: Monday, 26 January 2015 12:38
To: Pranith Kumar Karampuri; gluster-users@gluster.org; Vijaikumar
Mallikarjuna; Sachin Pandit
Cc: Wochele, Doris (IKP)
Subject: RE: [Gluster-users] Fwd: to mailing list gluster.org

 

Dear Sir,

 

sorry, but I do not understand what you mean?

 

What information you need?

 

 

kind regards

Klaus Bekk

 

 

 

--

Karlsruhe Institute of Technology (KIT)

Institut für Kernphysik (IKP)

 

Dr. Klaus Bekk

 

Hermann-von-Helmholtz-Platz 1

Campus Nord Gebäude 425

76344 Eggenstein-Leopoldshafen

 

Phone: +49 721 608-23382

Fax: +49 721 608-23321

 

E-Mail: klaus.b...@kit.edu

http://www.kit.edu/

 

KIT – University of the State of Baden-Wuerttemberg and

National Research Center of the Helmholtz Association

--

 

From: Pranith Kumar Karampuri [mailto:pkara...@redhat.com] 
Sent: Wednesday, 21 January 2015 20:17
To: Bekk, Klaus (IKP); gluster-users@gluster.org; Vijaikumar Mallikarjuna;
Sachin Pandit
Cc: Wochele, Doris (IKP)
Subject: Re: [Gluster-users] Fwd: to mailing list gluster.org

 

+ quota devs

Pranith

On 01/07/2015 07:37 PM, Bekk, Klaus (IKP) wrote:

 

 

Hi all,

 

we are running four servers with Ubuntu 14.04 as glusterfs servers, with
3.5.2-ubuntu1~trusty1.

 

When we enable quota on any of the volumes, the brick becomes unavailable
after some time. So we have to disable quota on that volume and stop and
start the volume again.

 

 

We have the following configuration

 

root@ikpsrv01:/var/log/glusterfs# gluster volume info

 

Volume Name: gltestn

Type: Distribute

Volume ID: e50bd30d-dbc8-4279-ac9d-2e557328a643

Status: Started

Number of Bricks: 1

Transport-type: tcp

Bricks:

Brick1: ikpsrv01:/gltest2/gl

 

Volume Name: glwww1

Type: Replicate

Volume ID: 811e4c2a-4520-4e61-bb84-955afbdb9dff

Status: Started

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: ikpsrv03:/glwwwn/gl

Brick2: ikpsrv01:/glwwwn2/gl

Options Reconfigured:

features.quota: on

 

Volume Name: corsika

Type: Distribute

Volume ID: d6e07f3e-4b35-4736-9b71-afc3224f0f29

Status: Started

Number of Bricks: 4

Transport-type: tcp

Bricks:

Brick1: ikpsrv03:/glcors3-1/gl

Brick2: ikpsrv03:/glcors3-2/gl

Brick3: ikpsrv02:/glcors2-1/gl

Brick4: ikpsrv02:/glcors2-2/gl

 

Volume Name: gldata

Type: Distribute

Volume ID: 82485d03-9ef4-4151-a2cc-881072e871a1

Status: Started

Number of Bricks: 2

Transport-type: tcp

Bricks:

Brick1: ikpsrv02:/gldata2-1/gl

Brick2: ikpsrv01:/gldata1-2/gl

Options Reconfigured:

features.quota: on

 

Volume Name: gltest

Type: Distribute

Volume ID: 9e69afcc-d421-41c9-824b-7d9a31240072

Status: Started

Number of Bricks: 2

Transport-type: tcp

Bricks:

Brick1: ikpsrv01:/gltest3/gl

Brick2: ikpsrv01:/gltest4/gl

Options Reconfigured:

features.quota: off

 

Volume Name: glusersold

Type: Distribute

Volume ID: 91a79e1a-4ff3-4191-b0ae-73357105774c

Status: Started

Number of Bricks: 1

Transport-type: tcp

Bricks:

Brick1: ikpsrv02:/glusers2-2/gl

 

Volume Name: glusers

Type: Distribute

Volume ID: f48b1941-9c19-4311-bd9e-497660fbcc80

Status: Started

Number of Bricks: 1

Transport-type: tcp

Bricks:

Brick1: ikpsrv02:/glusers2-1/gl

Options Reconfigured:

diagnostics.client-log-level: CRITICAL

diagnostics.brick-log-level: CRITICAL

features.quota: off

 

When we enable quota on any of the volumes, the brick becomes unavailable
after some time. So we have to disable quota on that volume and stop and
start the volume again.

 

The command 

 

gluster volume quota gltestn enable 

gives the following output in quotad.log

 

 

 

 

[2015-01-07 12:01:55.59] W [glusterfsd.c:1095:cleanup_and_exit]
(-->/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7fb455bd9fbd]
(-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x8182) [0x7fb455ead182]
(-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xd5) [0x7fb4569a7265]))) 0-:
received signum (15), shutting down

[2015-01-07 12:01:56.534931] I [glusterfsd.c:1959:main]
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.5.2
(/usr/sbin/glusterfs -s localhost --volfile-id gluster/quotad -p
/var/lib/glusterd/quotad/run/quotad.pid -l /var/log/glusterfs/quotad.log -S
/var/run/fa869d5552bf340ec7506747e21b2841.socket --xlator-option
*replicate*.data-self-heal=off --xlator-option
*replicate*.metadata-self-heal=off --xlator-option
*replicate*.entry-self-heal=off)

[2015-01-07 12:01:56.537061] I [socket.c:3561:socket_init]
0-socket.glusterfsd: SSL support is NOT enabled

[2015-01-07 12:01:56.537134] I [socket.c:3576:socket_init]
0-socket.glusterfsd: using system polling thread

[2015-01-07 12:01:56.537355] I [socket.c:3561:socket_init] 0-glusterfs: SSL
support is NOT enabled

[2015-01-07 12:01:56.537383] I [socket.c:3576:socket_init] 0-glusterfs:
using system polling thread

[2015-01

[Gluster-users] replica position

2015-01-27 Thread yang . bin18
Hi,

Can the replica position be configured ?

Regards.

Bin.Yang




ZTE Information Security Notice: The information contained in this mail (and 
any attachment transmitted herewith) is privileged and confidential and is 
intended for the exclusive use of the addressee(s).  If you are not an intended 
recipient, any disclosure, reproduction, distribution or other dissemination or 
use of the information contained is strictly prohibited.  If you have received 
this mail in error, please delete it and notify us immediately.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] v3.6.2

2015-01-27 Thread David F. Robinson
After shutting down all NFS and gluster processes, there were still stale 
NFS registrations in the portmapper.


[root@gfs01bkp ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100005    3   tcp  38465  mountd
    100005    1   tcp  38466  mountd
    100003    3   tcp   2049  nfs
    100024    1   udp  34738  status
    100024    1   tcp  37269  status
[root@gfs01bkp ~]# netstat -anp | grep 2049
[root@gfs01bkp ~]# netstat -anp | grep 38465
[root@gfs01bkp ~]# netstat -anp | grep 38466

I removed the stale registrations using "rpcinfo -d":

[root@gfs01bkp ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  34738  status
    100024    1   tcp  37269  status

Then I restarted glusterd and did a 'mount -a'.  It worked perfectly, 
and the errors that were showing up in the logs every 3 seconds stopped.


Thanks for your help.  Greatly appreciated.

David




-- Original Message --
From: "Xavier Hernandez" 
To: "David F. Robinson" ; "Kaushal M" 

Cc: "Gluster Users" ; "Gluster Devel" 


Sent: 1/27/2015 10:02:31 AM
Subject: Re: [Gluster-devel] [Gluster-users] v3.6.2


Hi,

I had a similar problem once. It happened after doing some unrelated 
tests with NFS. I thought it was a problem I generated doing weird 
things, so I didn't investigate the cause further.


To see if this is the same case, try this:

* Unmount all NFS mounts and stop all gluster volumes
* Check that there are no gluster processes running (ps ax | grep 
gluster), especially any glusterfs. glusterd is ok.

* Check that there are no NFS processes running (ps ax | grep nfs)
* Check with 'rpcinfo -p' that there's no nfs service registered

The output should be similar to this:

   program vers proto port service
100000 4 tcp 111 portmapper
100000 3 tcp 111 portmapper
100000 2 tcp 111 portmapper
100000 4 udp 111 portmapper
100000 3 udp 111 portmapper
100000 2 udp 111 portmapper
100024 1 udp 33482 status
100024 1 tcp 37034 status

If there are more services registered, you can directly delete them or 
check if they correspond to an active process. For example, if the 
output is this:


   program vers proto port service
100000 4 tcp 111 portmapper
100000 3 tcp 111 portmapper
100000 2 tcp 111 portmapper
100000 4 udp 111 portmapper
100000 3 udp 111 portmapper
100000 2 udp 111 portmapper
100021 3 udp 39618 nlockmgr
100021 3 tcp 41067 nlockmgr
100024 1 udp 33482 status
100024 1 tcp 37034 status

You can do a "netstat -anp | grep 39618" to see if there is some 
process really listening at the nlockmgr port. You can repeat this for 
port 41067. If there is some process, you should stop it. If there is 
no process listening on that port, you should remove it with a command 
like this:


rpcinfo -d 100021 3

You must execute this command for all stale ports for any services 
other than portmapper and status. Once done you should get the output 
shown before.


After that, you can try to start your volume and see if everything is 
registered (rpcinfo -p) and if gluster has started the nfs server 
(gluster volume status).


If everything is ok, you should be able to mount the volume using NFS.

Xavi

On 01/27/2015 03:18 PM, David F. Robinson wrote:

Turning off nfslock did not help. Also, still getting these messages
every 3-seconds:

[2015-01-27 14:16:12.921880] W [socket.c:611:__socket_rwv] 
0-management:

readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:15.922431] W [socket.c:611:__socket_rwv] 
0-management:

readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:18.923080] W [socket.c:611:__socket_rwv] 
0-management:

readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:21.923748] W [socket.c:611:__socket_rwv] 
0-management:

readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:24.924472] W [socket.c:611:__socket_rwv] 
0-management:

readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:27.925192] W [socket.c:611:__socket_rwv] 
0-management:

readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:30.925895] W [socket.c:611:__socket_rwv] 
0-management:

readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:33.926563] W [socket.c:611:__socket_rwv] 
0-management:

Re: [Gluster-users] [Gluster-devel] v3.6.2

2015-01-27 Thread Xavier Hernandez

Hi,

I had a similar problem once. It happened after doing some unrelated 
tests with NFS. I thought it was a problem I generated doing weird 
things, so I didn't investigate the cause further.


To see if this is the same case, try this:

* Unmount all NFS mounts and stop all gluster volumes
* Check that there are no gluster processes running (ps ax | grep 
gluster), especially any glusterfs. glusterd is ok.

* Check that there are no NFS processes running (ps ax | grep nfs)
* Check with 'rpcinfo -p' that there's no nfs service registered

The output should be similar to this:

   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  33482  status
    100024    1   tcp  37034  status

If there are more services registered, you can directly delete them or 
check if they correspond to an active process. For example, if the 
output is this:


   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100021    3   udp  39618  nlockmgr
    100021    3   tcp  41067  nlockmgr
    100024    1   udp  33482  status
    100024    1   tcp  37034  status

You can do a "netstat -anp | grep 39618" to see if there is some process 
really listening at the nlockmgr port. You can repeat this for port 
41067. If there is some process, you should stop it. If there is no 
process listening on that port, you should remove it with a command like 
this:


rpcinfo -d 100021 3

You must execute this command for all stale ports for any services other 
than portmapper and status. Once done you should get the output shown 
before.
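
A compact version of that cleanup could look like this (39618/41067 and
100021/3 are just the values from the example output above; substitute
whatever your own rpcinfo -p shows):

# see whether anything actually listens on the stale nlockmgr ports
netstat -anp | grep -E ':39618|:41067'

# nothing listening -> drop the stale registration from the portmapper
# (repeat for every stale program/version pair except portmapper and status)
rpcinfo -d 100021 3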


After that, you can try to start your volume and see if everything is 
registered (rpcinfo -p) and if gluster has started the nfs server 
(gluster volume status).


If everything is ok, you should be able to mount the volume using NFS.

Xavi

On 01/27/2015 03:18 PM, David F. Robinson wrote:

Turning off nfslock did not help.  Also, still getting these messages
every 3 seconds:

[2015-01-27 14:16:12.921880] W [socket.c:611:__socket_rwv] 0-management:
readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:15.922431] W [socket.c:611:__socket_rwv] 0-management:
readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:18.923080] W [socket.c:611:__socket_rwv] 0-management:
readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:21.923748] W [socket.c:611:__socket_rwv] 0-management:
readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:24.924472] W [socket.c:611:__socket_rwv] 0-management:
readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:27.925192] W [socket.c:611:__socket_rwv] 0-management:
readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:30.925895] W [socket.c:611:__socket_rwv] 0-management:
readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:33.926563] W [socket.c:611:__socket_rwv] 0-management:
readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-27 14:16:36.927248] W [socket.c:611:__socket_rwv] 0-management:
readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
-- Original Message --
From: "Kaushal M" mailto:kshlms...@gmail.com>>
To: "David F. Robinson" mailto:david.robin...@corvidtec.com>>
Cc: "Joe Julian" mailto:j...@julianfamily.org>>;
"Gluster Users" mailto:gluster-users@gluster.org>>; "Gluster Devel"
mailto:gluster-de...@gluster.org>>
Sent: 1/27/2015 1:49:56 AM
Subject: Re: Re[2]: [Gluster-devel] [Gluster-users] v3.6.2

Your nfs.log file has the following lines,
```
[2015-01-26 20:06:58.298078] E
[rpcsvc.c:1303:rpcsvc_program_register_portmap] 0-rpc-service: Could
not register with portmap 100021 4 38468
[2015-01-26 20:06:58.298108] E [nfs.c:331:nfs_init_versions] 0-nfs:
Program  NLM4 registration failed
```

The Gluster NFS server has its own NLM (nlockmgr) implementation. You
said that you have the nfslock service on. Can you turn that off and
try again?
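
For reference, on an EL6-style init that would be something like the
following (on systemd-based distributions the unit name is typically
nfs-lock or rpc-statd, so adjust accordingly):

# stop the kernel NLM/statd wrapper and keep it from coming back at boot
service nfslock stop
chkconfig nfslock off
# systemd equivalent, roughly:
# systemctl stop nfs-lock && systemctl disable nfs-lock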

~kaushal

On Tue, Jan 27, 2015 at 11:21 AM, David F. Robinson
mailto:david.robin...@corvidtec.com>>
wrote:

A different system where gluster-NFS (not kernel-nfs) is working
properly shows the following:
[root@gfs01a glusterfs]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tc

Re: [Gluster-users] geo-replication

2015-01-27 Thread Rosemond, Sonny
I have established passwordless ssh with rsa keys and have tested that
successfully. I have also tried gluster system:: execute gsec_create and then
issued the create push-pem command, which fails as well. So I can never get to
the point where I can actually issue the start command.


gluster volume geo-replication volume1 gfs7::geo1 create push-pem

geo-replication command failed


tail -100 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log

[2015-01-27 14:42:39.283376] I 
[glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] 0-management: 
connect returned 0

[2015-01-27 14:42:39.283444] I 
[glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] 0-management: 
connect returned 0

[2015-01-27 14:42:39.283502] I [rpc-clnt.c:969:rpc_clnt_connection_init] 
0-management: setting frame-timeout to 600

[2015-01-27 14:42:39.288607] I [rpc-clnt.c:969:rpc_clnt_connection_init] 
0-management: setting frame-timeout to 600

[2015-01-27 14:42:39.292514] I [rpc-clnt.c:969:rpc_clnt_connection_init] 
0-management: setting frame-timeout to 600

[2015-01-27 14:42:39.296403] I [rpc-clnt.c:969:rpc_clnt_connection_init] 
0-management: setting frame-timeout to 600

[2015-01-27 14:42:39.300274] I [rpc-clnt.c:969:rpc_clnt_connection_init] 
0-management: setting frame-timeout to 600

[2015-01-27 14:42:39.304174] I 
[glusterd-store.c:3501:glusterd_store_retrieve_missed_snaps_list] 0-management: 
No missed snaps list.

[2015-01-27 14:42:39.305438] I [glusterd.c:146:glusterd_uuid_init] 
0-management: retrieved UUID: b5977829-dd93-46e7-bf85-26bd4426d28c

Final graph:

+--+

  1: volume management

  2: type mgmt/glusterd

  3: option rpc-auth.auth-glusterfs on

  4: option rpc-auth.auth-unix on

  5: option rpc-auth.auth-null on

  6: option transport.socket.listen-backlog 128

  7: option ping-timeout 30

  8: option transport.socket.read-fail-log off

  9: option transport.socket.keepalive-interval 2

 10: option transport.socket.keepalive-time 10

 11: option transport-type rdma

 12: option working-directory /var/lib/glusterd

 13: end-volume

 14:

+--+

[2015-01-27 14:42:39.306445] I [glusterd-pmap.c:227:pmap_registry_bind] 0-pmap: 
adding brick /brick12 on port 49153

[2015-01-27 14:42:39.309511] I 
[glusterd-handshake.c:1061:__glusterd_mgmt_hndsk_versions_ack] 0-management: 
using the op-version 30501

[2015-01-27 14:42:39.312927] I 
[glusterd-rpc-ops.c:436:__glusterd_friend_add_cbk] 0-glusterd: Received ACC 
from uuid: faa0d923-9541-4964-a7ce-9c0a9dcb1b37, host: gfs7, port: 0

[2015-01-27 14:42:39.314506] I [rpc-clnt.c:969:rpc_clnt_connection_init] 
0-management: setting frame-timeout to 600

[2015-01-27 14:42:40.318334] I 
[glusterd-utils.c:6267:glusterd_nfs_pmap_deregister] 0-: De-registered MOUNTV3 
successfully

[2015-01-27 14:42:40.318807] I 
[glusterd-utils.c:6272:glusterd_nfs_pmap_deregister] 0-: De-registered MOUNTV1 
successfully

[2015-01-27 14:42:40.319189] I 
[glusterd-utils.c:6277:glusterd_nfs_pmap_deregister] 0-: De-registered NFSV3 
successfully

[2015-01-27 14:42:40.319573] I 
[glusterd-utils.c:6282:glusterd_nfs_pmap_deregister] 0-: De-registered NLM v4 
successfully

[2015-01-27 14:42:40.319956] I 
[glusterd-utils.c:6287:glusterd_nfs_pmap_deregister] 0-: De-registered NLM v1 
successfully

[2015-01-27 14:42:40.320289] I 
[glusterd-utils.c:6292:glusterd_nfs_pmap_deregister] 0-: De-registered ACL v3 
successfully

[2015-01-27 14:42:40.323761] I [rpc-clnt.c:969:rpc_clnt_connection_init] 
0-management: setting frame-timeout to 600

[2015-01-27 14:42:40.323981] W [socket.c:2992:socket_connect] 0-management: 
Ignore failed connection attempt on , (No such file or directory)

[2015-01-27 14:42:41.328151] I [rpc-clnt.c:969:rpc_clnt_connection_init] 
0-management: setting frame-timeout to 600

[2015-01-27 14:42:41.328378] W [socket.c:2992:socket_connect] 0-management: 
Ignore failed connection attempt on , (No such file or directory)

[2015-01-27 14:42:41.330849] I 
[glusterd-handshake.c:1061:__glusterd_mgmt_hndsk_versions_ack] 0-management: 
using the op-version 30501

[2015-01-27 14:42:41.344200] I 
[glusterd-handler.c:2216:__glusterd_handle_incoming_friend_req] 0-glusterd: 
Received probe from uuid: e682b121-92f6-4b70-8131-cbda3d467bb6

[2015-01-27 14:42:41.344336] I 
[glusterd-handler.c:3334:glusterd_xfer_friend_add_resp] 0-glusterd: Responded 
to gfs10 (0), ret: 0

[2015-01-27 14:42:41.347457] I 
[glusterd-handshake.c:1061:__glusterd_mgmt_hndsk_versions_ack] 0-management: 
using the op-version 30501

[2015-01-27 14:42:41.348833] I 
[glusterd-rpc-ops.c:633:__glusterd_friend_update_cbk] 0-management: Received 
ACC from uuid: faa0d923-9541-4964-a7ce-9c0a9dcb1b37

[2015-01-27 14:42:41.348871] I 
[glusterd-rpc-ops.c:436:__glusterd_friend_add_cbk] 0-glusterd: Received ACC 
from uuid: 9c1306c9-e86c-452b-9d4f-d99a98b

Re: [Gluster-users] [Gluster-devel] v3.6.2

2015-01-27 Thread David F. Robinson
Turning off nfslock did not help.  Also, still getting these messages 
every 3 seconds:


[2015-01-27 14:16:12.921880] W [socket.c:611:__socket_rwv] 0-management: 
readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed 
(Invalid argument)
[2015-01-27 14:16:15.922431] W [socket.c:611:__socket_rwv] 0-management: 
readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed 
(Invalid argument)
[2015-01-27 14:16:18.923080] W [socket.c:611:__socket_rwv] 0-management: 
readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed 
(Invalid argument)
[2015-01-27 14:16:21.923748] W [socket.c:611:__socket_rwv] 0-management: 
readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed 
(Invalid argument)
[2015-01-27 14:16:24.924472] W [socket.c:611:__socket_rwv] 0-management: 
readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed 
(Invalid argument)
[2015-01-27 14:16:27.925192] W [socket.c:611:__socket_rwv] 0-management: 
readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed 
(Invalid argument)
[2015-01-27 14:16:30.925895] W [socket.c:611:__socket_rwv] 0-management: 
readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed 
(Invalid argument)
[2015-01-27 14:16:33.926563] W [socket.c:611:__socket_rwv] 0-management: 
readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed 
(Invalid argument)
[2015-01-27 14:16:36.927248] W [socket.c:611:__socket_rwv] 0-management: 
readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed 
(Invalid argument)




-- Original Message --
From: "Kaushal M" 
To: "David F. Robinson" 
Cc: "Joe Julian" ; "Gluster Users" 
; "Gluster Devel" 

Sent: 1/27/2015 1:49:56 AM
Subject: Re: Re[2]: [Gluster-devel] [Gluster-users] v3.6.2


Your nfs.log file has the following lines,
```
[2015-01-26 20:06:58.298078] E 
[rpcsvc.c:1303:rpcsvc_program_register_portmap] 0-rpc-service: Could 
not register with portmap 100021 4 38468
[2015-01-26 20:06:58.298108] E [nfs.c:331:nfs_init_versions] 0-nfs: 
Program  NLM4 registration failed

```

The Gluster NFS server has its own NLM (nlockmgr) implementation. You 
said that you have the nfslock service on. Can you turn that off and 
try again?


~kaushal

On Tue, Jan 27, 2015 at 11:21 AM, David F. Robinson 
 wrote:
A different system where gluster-NFS (not kernel-nfs) is working
properly shows the following:


[root@gfs01a glusterfs]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100005    3   tcp  38465  mountd
    100005    1   tcp  38466  mountd
    100003    3   tcp   2049  nfs
    100024    1   udp  42413  status
    100024    1   tcp  35424  status
    100021    4   tcp  38468  nlockmgr
    100021    1   udp    801  nlockmgr
    100227    3   tcp   2049  nfs_acl
    100021    1   tcp    804  nlockmgr

[root@gfs01a glusterfs]# /etc/init.d/nfs status
rpc.svcgssd is stopped
rpc.mountd is stopped
nfsd is stopped
rpc.rquotad is stopped



-- Original Message --
From: "Joe Julian" 
To: "Kaushal M" ; "David F. Robinson" 

Cc: "Gluster Users" ; "Gluster Devel" 


Sent: 1/27/2015 12:48:49 AM
Subject: Re: [Gluster-devel] [Gluster-users] v3.6.2

If that were true, wouldn't it not return "Connection refused", because the
kernel nfs would be listening?


On January 26, 2015 9:43:34 PM PST, Kaushal M  
wrote:
Seems like you have the kernel NFS server running. The `rpcinfo -p` 
output you provided shows that there are other mountd, nfs and 
nlockmgr services running on your system. The Gluster NFS server requires
that the kernel nfs services be disabled and not running.
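
A minimal way to do that on the EL-style hosts seen in this thread might be
(the exact service names are an assumption; check chkconfig --list or
systemctl list-unit-files):

# stop and disable the kernel NFS stack so it releases 2049, mountd and nlockmgr
service nfs stop
service nfslock stop
chkconfig nfs off
chkconfig nfslock off

# verify that no kernel nfs/mountd/nlockmgr registrations are left behind
rpcinfo -p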


~kaushal

On Tue, Jan 27, 2015 at 10:56 AM, David F. Robinson 
 wrote:

[root@gfs01bkp ~]# gluster volume status homegfs_bkp
Status of volume: homegfs_bkp
Gluster process                                              Port   Online  Pid
------------------------------------------------------------------------------
Brick gfsib01bkp.corvidtec.com:/data/brick01bkp/homegfs_bkp  49152  Y       4087
Brick gfsib01bkp.corvidtec.com:/data/brick02bkp/homegfs_bkp  49155  Y       4092
NFS Server on localhost                                      N/A    N       N/A


Task Status of Volume homegfs_bkp
--
Task : Rebalance
ID   : 6d4c6c4e-16da-48c9-9019-dccb7d2cfd66
Status   : completed
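
Once the kernel NFS services are stopped, the usual way to get the "NFS Server
on localhost" line out of N/A is to respawn the Gluster daemons, for example
(a sketch using the volume name shown above; restarting glusterd works too):

gluster volume start homegfs_bkp force   # 'start ... force' respawns missing NFS/brick daemons
gluster volume status homegfs_bkp        # the NFS Server line should now show a port and 'Y'
showmount -e localhost                   # exports served by the Gluster NFS server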





-- Original Message --
From: "Atin Mukherjee" 
To: "Pranith Kumar Karampuri" ; "Justin Clift" 
; "David F. Robinson" 

Cc: "Gluster Users" ; "Gluster Devel" 


Sent: 1/26/2015 11:51:13 PM
Subject: Re: [Gluster-devel] v3.6.2




On 01/27/2015 07:33 AM, Pranith Kumar Karampuri wrote:


 On 01/26/2015 

[Gluster-users] GlusterFS Office Hours at FOSDEM

2015-01-27 Thread Kaleb KEITHLEY

Hi,

Lalatendu Mohanty, Niels de Vos, and myself will be holding GlusterFS 
Office Hours at FOSDEM.


Look for us at the CentOS booth, from 16h00 to 17h00 on Saturday, 31 
January.


FOSDEM is taking place this weekend, 31 Jan and 1 Feb, at ULB Solbosch 
Campus, Brussels. FOSDEM is a free event, no registration is necessary. 
For more info see https://fosdem.org/2015/


Hope to see you there.

--

Kaleb
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Mounting issue, glusterfs vs mount.glusterfs, locale issue. I RTFM'd, but still a glitch

2015-01-27 Thread Jan-Hendrik Zab
Hey,
is the first command working without setting LC_NUMERIC?

Can you check if your problem might be related to the reported issue in
the "[Gluster-users] Mount failed" mailing list thread?

-jhz

On 26/01/15 23:17 +0100, Nicolas Ecarnot wrote:
> Hello,
> 
> Many people wrote here that the locale / numeric conversion problem when
> mounting would be solved by the 3.6.2 version.
> It is not.
> 
> [root@client etc]# rpm -qa|grep gluster
> glusterfs-libs-3.6.2-1.el7.x86_64
> glusterfs-3.6.2-1.el7.x86_64
> glusterfs-api-3.6.2-1.el7.x86_64
> glusterfs-fuse-3.6.2-1.el7.x86_64
> 
> [root@server1_out_of_3 glusterfs]# rpm -qa|grep gluster
> glusterfs-libs-3.6.2-1.el7.x86_64
> glusterfs-3.6.2-1.el7.x86_64
> glusterfs-geo-replication-3.6.2-1.el7.x86_64
> glusterfs-api-3.6.2-1.el7.x86_64
> glusterfs-fuse-3.6.2-1.el7.x86_64
> glusterfs-server-3.6.2-1.el7.x86_64
> glusterfs-cli-3.6.2-1.el7.x86_64
> 
> The tests below with the exact commands are still resulting in the same
> outcome...


pgpCshhjyRcA_.pgp
Description: PGP signature
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Minutes of todays Gluster Community Bug Triage meeting

2015-01-27 Thread Niels de Vos
On Tue, Jan 27, 2015 at 12:07:35PM +0100, Niels de Vos wrote:
> Hi all,
> 
> This meeting is scheduled for anyone that is interested in learning more
> about, or assisting with the Bug Triage.
> 
> Meeting details:
> - location: #gluster-meeting on Freenode IRC
> - date: every Tuesday
> - time: 12:00 UTC, 13:00 CET (run: date -d "12:00 UTC")
> - agenda: https://public.pad.fsfe.org/p/gluster-bug-triage
> 
> Currently the following items are listed:
> * Roll Call
> * Status of last weeks action items
> * Group Triage
> * Open Floor
> 
> The last two topics have space for additions. If you have a suitable bug
> or topic to discuss, please add it to the agenda.

Minutes: 
http://meetbot.fedoraproject.org/gluster-meeting/2015-01-27/gluster-meeting.2015-01-27-12.03.html
Minutes (text): 
http://meetbot.fedoraproject.org/gluster-meeting/2015-01-27/gluster-meeting.2015-01-27-12.03.txt
Log: 
http://meetbot.fedoraproject.org/gluster-meeting/2015-01-27/gluster-meeting.2015-01-27-12.03.log.html

Meeting started by ndevos at 12:03:35 UTC.

Meeting summary
---
* Roll Call  (ndevos, 12:04:07)

* Action items from last week  (ndevos, 12:06:26)

* hagarth will look for somebody that can act like a bug assigner
  manager kind of person  (ndevos, 12:06:31)

* hagarth should update the MAINTAINERS file, add current maintainers
  and new components like Snapshot  (ndevos, 12:08:12)

* lalatenduM will send a reminder to the users- and devel- ML about (and
  how to) fixing Coverity defects  (ndevos, 12:09:09)

* lalatenduM initiate a discussion with the RH Gluster team to triage
  their own bugs when they report them  (ndevos, 12:09:53)

* ndevos adds NetBSD queries to the bug triage wiki page  (ndevos,
  12:10:49)

* hagarth start/sync email on regular (nightly) automated tests
  (ndevos, 12:11:50)

* lalatenduM will look into using nightly builds for automated testing,
  and will report issues/success to the mailinglist  (ndevos, 12:12:42)

* Group Triage  (ndevos, 12:16:20)

* Open Floor  (ndevos, 12:31:24)

Meeting ended at 12:41:57 UTC.


People Present (lines said)
---
* ndevos (45)
* hagarth (18)
* kkeithley_ (13)
* hchiramm (10)
* gothos (7)
* zodbot (2)


pgplpcURDGYdq.pgp
Description: PGP signature
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Mount failed

2015-01-27 Thread Bartłomiej Syryjczyk
W dniu 2015-01-27 o 09:47, Bartłomiej Syryjczyk pisze:
> W dniu 2015-01-27 o 09:20, Franco Broi pisze:
>> Well I'm stumped, just seems like the mount.glusterfs script isn't
>> working. I'm still running 3.5.1 and the getinode bit of my script looks
>> like this:
>>
>> ...
>> Linux)
>> getinode="stat -c %i $i"
>>
>> ...
>> inode=$( ${getinode} $mount_point 2>/dev/null);
>>
>> # this is required if the stat returns error
>> if [ -z "$inode" ]; then
>> inode="0";
>> fi
>>
>> if [ $inode -ne 1 ]; then
>> err=1;
>> fi
> (My) script should check the return code, not the inode. There is a comment
> saying exactly that. Or maybe I don't understand the construction at line 298:
> ---
> [...]
>  49 Linux)
>  50 getinode="stat -c %i"
> [...]
> 298 inode=$( ${getinode} $mount_point 2>/dev/null);
> 299 # this is required if the stat returns error
> 300 if [ $? -ne 0 ]; then
> 301 warn "Mount failed. Please check the log file for more details."
> 302 umount $mount_point > /dev/null 2>&1;
> 303 exit 1;
> 304 fi
> ---
>
> When I paste something with a 0 exit code (e.g. echo $?) between lines 298
> and 300, it works.
>
> With the script from the 3.6.1 package there was the same problem.
OK, I removed the "2>/dev/null" part, and I see:
stat: cannot stat ‘/mnt/gluster’: Resource temporarily unavailable

So I decided to add a sleep just before line 298 (the one with stat). And it
works! Is that normal?
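
The "Resource temporarily unavailable" suggests the FUSE client simply is not
ready at the moment the script runs stat, so a short retry loop is probably a
safer workaround than a fixed sleep. A sketch only, not the stock
mount.glusterfs code (/mnt/gluster is the mount point from this thread):

# retry the inode check for a few seconds instead of sleeping blindly
mount_point=/mnt/gluster
inode=
for attempt in 1 2 3 4 5; do
    inode=$(stat -c %i "$mount_point" 2>/dev/null) && break
    sleep 1
done
if [ "${inode:-0}" -ne 1 ]; then
    echo "Mount failed. Please check the log file for more details." >&2
    umount "$mount_point" > /dev/null 2>&1
    exit 1
fi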

-- 
Z poważaniem,

*Bartłomiej Syryjczyk*
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] glusterfs-3.6.2 GA

2015-01-27 Thread Kaleb KEITHLEY


glusterfs-3.6.2 has been released.

The release source tar file and packages for Fedora {20,21,rawhide}, 
RHEL/CentOS {5,6,7}, Debian {wheezy,jessie}, Pidora2014, and Raspbian 
wheezy are available at 
http://download.gluster.org/pub/gluster/glusterfs/3.6/3.6.2/


(Ubuntu packages will be available soon.)

This release fixes the following bugs. Thanks to all who submitted bugs 
and patches and reviewed the changes.



 1184191 - Cluster/DHT : Fixed crash due to null deref
 1180404 - nfs server restarts when a snapshot is deactivated
 1180411 - CIFS:[USS]: glusterfsd OOM killed when 255 snapshots were
browsed at CIFS mount and Control+C is issued
 1180070 - [AFR] getfattr on fuse mount gives error : Software caused
connection abort
 1175753 - [readdir-ahead]: indicate EOF for readdirp
 1175752 - [USS]: On a successful lookup, snapd logs are filled with
Warnings "dict OR key (entry-point) is NULL"
 1175749 - glusterfs client crashed while migrating the fds
 1179658 - Add brick fails if parent dir of new brick and existing
brick is same and volume was accessed using libgfapi and smb.
 1146524 - glusterfs.spec.in - synch minor diffs with fedora dist-git
glusterfs.spec
 1175744 - [USS]: Unable to access .snaps after snapshot restore after
directories were deleted and recreated
 1175742 - [USS]: browsing .snaps directory with CIFS fails with
"Invalid argument"
 1175739 - [USS]: Non root user who has no access to a directory, from
NFS mount, is able to access the files under .snaps under that directory
 1175758 - [USS] : Rebalance process tries to connect to snapd and in
case when snapd crashes it might affect rebalance process
 1175765 - USS]: When snapd is crashed gluster volume stop/delete
operation fails making the cluster in inconsistent state
 1173528 - Change in volume heal info command output
 1166515 - [Tracker] RDMA support in glusterfs
 1166505 - mount fails for nfs protocol in rdma volumes
 1138385 - [DHT:REBALANCE]: Rebalance failures are seen with error
message " remote operation failed: File exists"
 1177418 - entry self-heal in 3.5 and 3.6 are not compatible
 1170954 - Fix mutex problems reported by coverity scan
 1177899 - nfs: ls shows "Permission denied" with root-squash
 1175738 - [USS]: data unavailability for a period of time when USS is
enabled/disabled
 1175736 - [USS]:After deactivating a snapshot trying to access the
remaining activated snapshots from NFS mount gives 'Invalid argument' error
 1175735 - [USS]: snapd process is not killed once the glusterd comes back
 1175733 - [USS]: If the snap name is same as snap-directory than cd to
virtual snap directory fails
 1175756 - [USS] : Snapd crashed while trying to access the snapshots
under .snaps directory
 1175755 - SNAPSHOT[USS]:gluster volume set for uss doesnot check any
boundaries
 1175732 - [SNAPSHOT]: nouuid is appended for every snapshoted brick
which causes duplication if the original brick has already nouuid
 1175730 - [USS]: creating file/directories under .snaps shows wrong
error message
 1175754 - [SNAPSHOT]: before the snap is marked to be deleted if the
node goes down than the snaps are propagated on other nodes and glusterd
hungs
 1159484 - ls -alR can not heal the disperse volume
 1138897 - NetBSD port
 1175728 - [USS]: All uss related logs are reported under
/var/log/glusterfs, it makes sense to move it into subfolder
 1170548 - [USS] : don't display the snapshots which are not activated
 1170921 - [SNAPSHOT]: snapshot should be deactivated by default when
created
 1175694 - [SNAPSHOT]: snapshoted volume is read only but it shows rw
attributes in mount
 1161885 - Possible file corruption on dispersed volumes
 1170959 - EC_MAX_NODES is defined incorrectly
 1175645 - [USS]: Typo error in the description for USS under "gluster
volume set help"
 1171259 - mount.glusterfs does not understand -n option

Regards,
Kaleb, on behalf of Raghavendra Bhat, who did all the work.



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] 3.6.1 and openstack

2015-01-27 Thread Xavier Hernandez

Hi Pedro,

On 01/27/2015 12:43 PM, Pedro Serotto wrote:

Hi Xavi,
what happens is that Nova cannot create the file structure which is
needed in order to create the virtual machine.
In //var/lib/nova/i//nstances/, I can find only one directory under a
casual name (/random/), for example
/ca095fd4-1c0c-4fb4-a2d8-a0d758//6c0ffa. /If I try to list the content,
the command fails and never comes back. The shell freezes.


This could be related with a rename and locking issue still pending to 
be merged [1]. If that's the real cause, 3.6.2 won't solve the problem 
yet, though it can alleviate some other issues.



By the way, I am trying to get some more verbose logs and to update the
software to 3.6.2. In this case, should I uninstall 3.6.1 and install it
again?


You can increase log verbosity using this command:

gluster volume set  client-log-level TRACE

This will add *a lot* of log messages from all internal calls.
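
For example (nova-vm is only a guess at the volume name, taken from the
0-nova-vm-* translator names in the log, and diagnostics.client-log-level is
the full option key; remember to put it back afterwards):

gluster volume set nova-vm diagnostics.client-log-level TRACE
# reproduce the problem, then collect /var/log/glusterfs/var-lib-nova-instances.log
gluster volume reset nova-vm diagnostics.client-log-level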

To upgrade from 3.6.1 to 3.6.2 (assuming you installed 3.6.1 from rpm), 
you can take the new rpm's [2] and they should do the upgrade 
automatically. To make things easier, this should be done with all 
gluster volumes unmounted and stopped.
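
On an rpm-based node that boils down to something like this (a sketch; the
exact package set depends on what is already installed):

gluster volume stop <volname>     # repeat per volume, after unmounting all clients
yum update 'glusterfs*'           # pulls the 3.6.2 packages from the repo in [2]
glusterfs --version               # confirm 3.6.2 on every node
service glusterd restart          # or: systemctl restart glusterd
gluster volume start <volname>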


Xavi

[1] http://review.gluster.org/9420/
[2] http://download.gluster.org/pub/gluster/glusterfs/3.6/3.6.2/
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] 3.6.1 and openstack

2015-01-27 Thread Pedro Serotto
Hi Xavi,
what happens is that Nova cannot create the file structure which is needed in
order to create the virtual machine. In /var/lib/nova/instances, I can find
only one directory under a casual name (random), for example
ca095fd4-1c0c-4fb4-a2d8-a0d7586c0ffa. If I try to list the content, the command
fails and never comes back. The shell freezes. By the way, I am trying to get
some more verbose logs and to update the software to 3.6.2. In this case,
should I uninstall 3.6.1 and install it again?
 

 El Martes 27 de enero de 2015 12:14, Xavier Hernandez 
 escribió:
   

 Hi Pedro,

do you get any specific error ?

 From logs, the only thing I see is a warning about a GFID not being 
available. This can cause troubles, but I'm not sure if this is the problem.

Anyway, version 3.6.2 has some fixes that improve on this. Is it 
possible for you to try the newer version ?

Xavi

On 01/27/2015 11:25 AM, Pedro Serotto wrote:
> Dear all,
> I try the disperse volume on 3 node cluster, but my instances never build.
>
> This is the log:
>
> root@node-1:/var/log/glusterfs# tail -f var-lib-nova-instances.log
> [2015-01-27 10:03:11.510702] I
> [client-handshake.c:1415:select_server_supported_programs]
> 0-nova-vm-client-1: Using Program GlusterFS 3.3, Num (1298437), Version
> (330)
> [2015-01-27 10:03:11.511052] I
> [client-handshake.c:1200:client_setvolume_cbk] 0-nova-vm-client-2:
> Connected to nova-vm-client-2, attached to remote volume '/data/nova'.
> [2015-01-27 10:03:11.511074] I
> [client-handshake.c:1212:client_setvolume_cbk] 0-nova-vm-client-2:
> Server and Client lk-version numbers are not same, reopening the fds
> [2015-01-27 10:03:11.511783] I
> [client-handshake.c:1200:client_setvolume_cbk] 0-nova-vm-client-1:
> Connected to nova-vm-client-1, attached to remote volume '/data/nova'.
> [2015-01-27 10:03:11.511806] I
> [client-handshake.c:1212:client_setvolume_cbk] 0-nova-vm-client-1:
> Server and Client lk-version numbers are not same, reopening the fds
> [2015-01-27 10:03:11.512464] I [ec.c:205:ec_up] 0-nova-vm-disperse-0:
> Going UP
> [2015-01-27 10:03:11.520391] I [fuse-bridge.c:5080:fuse_graph_setup]
> 0-fuse: switched to graph 0
> [2015-01-27 10:03:11.520660] I
> [client-handshake.c:188:client_set_lk_version_cbk] 0-nova-vm-client-1:
> Server lk version = 1
> [2015-01-27 10:03:11.520680] I
> [client-handshake.c:188:client_set_lk_version_cbk] 0-nova-vm-client-2:
> Server lk version = 1
> [2015-01-27 10:03:11.520941] I [fuse-bridge.c:4009:fuse_init]
> 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22
> kernel 7.17
>
>
>
>  nova boot --flavor 1 --image
> 4f31cf1b-3cff-470e-8533-0792c0e805d5 test-vm
>
>
>
>
> [2015-01-27 10:04:38.002739] I [dht-common.c:1822:dht_lookup_cbk]
> 0-nova-vm-dht: Entry /instance-0023 missing on subvol nova-vm-disperse-0
> [2015-01-27 10:04:38.009403] I [dht-common.c:1822:dht_lookup_cbk]
> 0-nova-vm-dht: Entry /instance-0023 missing on subvol nova-vm-disperse-0
> [2015-01-27 10:04:38.019634] I [dht-common.c:1822:dht_lookup_cbk]
> 0-nova-vm-dht: Entry /ca095fd4-1c0c-4fb4-a2d8-a0d7586c0ffa missing on
> subvol nova-vm-disperse-0
> [2015-01-27 10:04:38.025431] I [dht-common.c:1822:dht_lookup_cbk]
> 0-nova-vm-dht: Entry /ca095fd4-1c0c-4fb4-a2d8-a0d7586c0ffa missing on
> subvol nova-vm-disperse-0
> [2015-01-27 10:04:38.029552] I [dht-common.c:1822:dht_lookup_cbk]
> 0-nova-vm-dht: Entry /ca095fd4-1c0c-4fb4-a2d8-a0d7586c0ffa missing on
> subvol nova-vm-disperse-0
> [2015-01-27 10:04:38.040237] I
> [dht-selfheal.c:1065:dht_selfheal_layout_new_directory] 0-nova-vm-dht:
> chunk size = 0x / 20458 = 0x33414
> [2015-01-27 10:04:38.040343] I
> [dht-selfheal.c:1103:dht_selfheal_layout_new_directory] 0-nova-vm-dht:
> assigning range size 0xc648 to nova-vm-disperse-0
> [2015-01-27 10:04:38.045521] I [MSGID: 109036]
> [dht-common.c:6221:dht_log_new_layout_for_dir_selfheal] 0-nova-vm-dht:
> Setting layout of /ca095fd4-1c0c-4fb4-a2d8-a0d7586c0ffa with
> [Subvol_name: nova-vm-disperse-0, Err: -1 , Start: 0 , Stop: 4294967295 ],
> [2015-01-27 10:04:38.057391] I [dht-common.c:1822:dht_lookup_cbk]
> 0-nova-vm-dht: Entry /instance-0023 missing on subvol nova-vm-disperse-0
> [2015-01-27 10:04:38.062147] I [dht-common.c:1822:dht_lookup_cbk]
> 0-nova-vm-dht: Entry /instance-0023 missing on subvol nova-vm-disperse-0
> [2015-01-27 10:04:38.068511] I [dht-common.c:1822:dht_lookup_cbk]
> 0-nova-vm-dht: Entry /ca095fd4-1c0c-4fb4-a2d8-a0d7586c0ffa/console.log
> missing on subvol nova-vm-disperse-0
> [2015-01-27 10:04:38.072586] I [dht-common.c:1822:dht_lookup_cbk]
> 0-nova-vm-dht: Entry /instance-0023 missing on subvol nova-vm-disperse-0
> [2015-01-27 10:04:38.076411] I [dht-common.c:1822:dht_lookup_cbk]
> 0-nova-vm-dht: Entry /instance-0023 missing on subvol nova-vm-disperse-0
> [2015-01-27 10:04:38.082613] I [dht-common.c:1822:dht_lookup_cbk]
> 0-nova-vm-dht: Entry /ca095fd4-1c0c-4fb4-a2d8-a0d7586c0ffa/disk.config
> missing on subvol nova

Re: [Gluster-users] how to Set BackupVol for Libgfapi ??

2015-01-27 Thread Alessandro De Salvo
Hi,
three comments here:

1 - I don’t think it’s possible, as libvirt would not accept this, but most of
all it is not needed, as gluster should handle the failover correctly with
gfapi too; at least this is what happens in my case (gluster 3.6.2 at the
moment);

2 - If you want to be able to restart a VM with gfapi without addressing a
single server you can use a VIP; I do this with pacemaker (a minimal sketch
follows after this list);

3 - PLEASE do not use the IPs/hostnames you mentioned in your email in any 
thread, as they belong to my academic research infrastructure, and I do not 
know how you have got them.
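
A minimal pacemaker sketch for the VIP mentioned in point 2, assuming an
existing pacemaker/corosync cluster managed with pcs (the resource name and
address are examples only):

# floating IP that the gfapi VM definitions point at instead of a single server
pcs resource create gluster_vip ocf:heartbeat:IPaddr2 \
    ip=192.0.2.10 cidr_netmask=24 op monitor interval=10s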

Thanks,

Alessandro

> Il giorno 27/gen/2015, alle ore 07:55, Arash Shams  ha 
> scritto:
> 
> 
> is there anyone paying attention to my question?



smime.p7s
Description: S/MIME cryptographic signature
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] 3.6.1 and openstack

2015-01-27 Thread Xavier Hernandez

Hi Pedro,

do you get any specific error ?

From logs, the only thing I see is a warning about a GFID not being 
available. This can cause troubles, but I'm not sure if this is the problem.


Anyway, version 3.6.2 has some fixes that improve on this. Is it 
possible for you to try the newer version ?


Xavi

On 01/27/2015 11:25 AM, Pedro Serotto wrote:

Dear all,
I try the disperse volume on 3 node cluster, but my instances never build.

This is the log:

root@node-1:/var/log/glusterfs# tail -f var-lib-nova-instances.log
[2015-01-27 10:03:11.510702] I
[client-handshake.c:1415:select_server_supported_programs]
0-nova-vm-client-1: Using Program GlusterFS 3.3, Num (1298437), Version
(330)
[2015-01-27 10:03:11.511052] I
[client-handshake.c:1200:client_setvolume_cbk] 0-nova-vm-client-2:
Connected to nova-vm-client-2, attached to remote volume '/data/nova'.
[2015-01-27 10:03:11.511074] I
[client-handshake.c:1212:client_setvolume_cbk] 0-nova-vm-client-2:
Server and Client lk-version numbers are not same, reopening the fds
[2015-01-27 10:03:11.511783] I
[client-handshake.c:1200:client_setvolume_cbk] 0-nova-vm-client-1:
Connected to nova-vm-client-1, attached to remote volume '/data/nova'.
[2015-01-27 10:03:11.511806] I
[client-handshake.c:1212:client_setvolume_cbk] 0-nova-vm-client-1:
Server and Client lk-version numbers are not same, reopening the fds
[2015-01-27 10:03:11.512464] I [ec.c:205:ec_up] 0-nova-vm-disperse-0:
Going UP
[2015-01-27 10:03:11.520391] I [fuse-bridge.c:5080:fuse_graph_setup]
0-fuse: switched to graph 0
[2015-01-27 10:03:11.520660] I
[client-handshake.c:188:client_set_lk_version_cbk] 0-nova-vm-client-1:
Server lk version = 1
[2015-01-27 10:03:11.520680] I
[client-handshake.c:188:client_set_lk_version_cbk] 0-nova-vm-client-2:
Server lk version = 1
[2015-01-27 10:03:11.520941] I [fuse-bridge.c:4009:fuse_init]
0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22
kernel 7.17



 nova boot --flavor 1 --image
4f31cf1b-3cff-470e-8533-0792c0e805d5 test-vm




[2015-01-27 10:04:38.002739] I [dht-common.c:1822:dht_lookup_cbk]
0-nova-vm-dht: Entry /instance-0023 missing on subvol nova-vm-disperse-0
[2015-01-27 10:04:38.009403] I [dht-common.c:1822:dht_lookup_cbk]
0-nova-vm-dht: Entry /instance-0023 missing on subvol nova-vm-disperse-0
[2015-01-27 10:04:38.019634] I [dht-common.c:1822:dht_lookup_cbk]
0-nova-vm-dht: Entry /ca095fd4-1c0c-4fb4-a2d8-a0d7586c0ffa missing on
subvol nova-vm-disperse-0
[2015-01-27 10:04:38.025431] I [dht-common.c:1822:dht_lookup_cbk]
0-nova-vm-dht: Entry /ca095fd4-1c0c-4fb4-a2d8-a0d7586c0ffa missing on
subvol nova-vm-disperse-0
[2015-01-27 10:04:38.029552] I [dht-common.c:1822:dht_lookup_cbk]
0-nova-vm-dht: Entry /ca095fd4-1c0c-4fb4-a2d8-a0d7586c0ffa missing on
subvol nova-vm-disperse-0
[2015-01-27 10:04:38.040237] I
[dht-selfheal.c:1065:dht_selfheal_layout_new_directory] 0-nova-vm-dht:
chunk size = 0x / 20458 = 0x33414
[2015-01-27 10:04:38.040343] I
[dht-selfheal.c:1103:dht_selfheal_layout_new_directory] 0-nova-vm-dht:
assigning range size 0xc648 to nova-vm-disperse-0
[2015-01-27 10:04:38.045521] I [MSGID: 109036]
[dht-common.c:6221:dht_log_new_layout_for_dir_selfheal] 0-nova-vm-dht:
Setting layout of /ca095fd4-1c0c-4fb4-a2d8-a0d7586c0ffa with
[Subvol_name: nova-vm-disperse-0, Err: -1 , Start: 0 , Stop: 4294967295 ],
[2015-01-27 10:04:38.057391] I [dht-common.c:1822:dht_lookup_cbk]
0-nova-vm-dht: Entry /instance-0023 missing on subvol nova-vm-disperse-0
[2015-01-27 10:04:38.062147] I [dht-common.c:1822:dht_lookup_cbk]
0-nova-vm-dht: Entry /instance-0023 missing on subvol nova-vm-disperse-0
[2015-01-27 10:04:38.068511] I [dht-common.c:1822:dht_lookup_cbk]
0-nova-vm-dht: Entry /ca095fd4-1c0c-4fb4-a2d8-a0d7586c0ffa/console.log
missing on subvol nova-vm-disperse-0
[2015-01-27 10:04:38.072586] I [dht-common.c:1822:dht_lookup_cbk]
0-nova-vm-dht: Entry /instance-0023 missing on subvol nova-vm-disperse-0
[2015-01-27 10:04:38.076411] I [dht-common.c:1822:dht_lookup_cbk]
0-nova-vm-dht: Entry /instance-0023 missing on subvol nova-vm-disperse-0
[2015-01-27 10:04:38.082613] I [dht-common.c:1822:dht_lookup_cbk]
0-nova-vm-dht: Entry /ca095fd4-1c0c-4fb4-a2d8-a0d7586c0ffa/disk.config
missing on subvol nova-vm-disperse-0
[2015-01-27 10:04:38.086569] I [dht-common.c:1822:dht_lookup_cbk]
0-nova-vm-dht: Entry /instance-0023 missing on subvol nova-vm-disperse-0
[2015-01-27 10:04:38.089969] I [dht-common.c:1822:dht_lookup_cbk]
0-nova-vm-dht: Entry /instance-0023 missing on subvol nova-vm-disperse-0
[2015-01-27 10:04:38.093946] I [dht-common.c:1822:dht_lookup_cbk]
0-nova-vm-dht: Entry /ca095fd4-1c0c-4fb4-a2d8-a0d7586c0ffa/console.log
missing on subvol nova-vm-disperse-0
[2015-01-27 10:04:38.096379] W [ec-helpers.c:501:ec_loc_prepare]
0-nova-vm-disperse-0: GFID not available for inode
[2015-01-27 10:04:38.123688] I [dht-common.c:1822:dht_lookup_cbk]
0-nova-vm-dht: Entry /instance-0023 missing on subvol nova-vm-disperse-0
[

[Gluster-users] REMINDER: Gluster Community Bug Triage meeting starting in ~1 hour

2015-01-27 Thread Niels de Vos
Hi all,

This meeting is scheduled for anyone that is interested in learning more
about, or assisting with the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
- date: every Tuesday
- time: 12:00 UTC, 13:00 CET (run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last weeks action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Thanks,
Niels


pgpbSXjNsibZY.pgp
Description: PGP signature
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Mount failed

2015-01-27 Thread Bartłomiej Syryjczyk
W dniu 2015-01-27 o 09:53, Franco Broi pisze:
> What do you get is you do this?
>
> bash-4.1# stat -c %i /mnt/gluster
> 1
> -bash-4.1# echo $?
> 0
---
[root@apache2 ~]# /usr/sbin/glusterfs --volfile-server=apache1
--volfile-id=/testvol /mnt/gluster

[root@apache2 ~]# stat -c %i /mnt/gluster
1

[root@apache2 ~]# echo $?
0

[root@apache2 ~]# umount /mnt/gluster/

[root@apache2 ~]# stat -c %i /mnt/gluster
101729533

[root@apache2 ~]# echo $?
0
---

-- 
Z poważaniem,

*Bartłomiej Syryjczyk*
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Mount failed

2015-01-27 Thread Bartłomiej Syryjczyk
W dniu 2015-01-27 o 09:43, Franco Broi pisze:
> Could this be a case of Oracle Linux being evil?
Yes, I'm using OEL. Now 7.0, earlier 6.6

-- 
Z poważaniem,

*Bartłomiej Syryjczyk*
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] 3.6.1 and openstack

2015-01-27 Thread Pedro Serotto
Dear all, I try the disperse volume on 3 node cluster, but my instances never 
build.
This is the log:
root@node-1:/var/log/glusterfs# tail -f var-lib-nova-instances.log
[2015-01-27 10:03:11.510702] I [client-handshake.c:1415:select_server_supported_programs] 0-nova-vm-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2015-01-27 10:03:11.511052] I [client-handshake.c:1200:client_setvolume_cbk] 0-nova-vm-client-2: Connected to nova-vm-client-2, attached to remote volume '/data/nova'.
[2015-01-27 10:03:11.511074] I [client-handshake.c:1212:client_setvolume_cbk] 0-nova-vm-client-2: Server and Client lk-version numbers are not same, reopening the fds
[2015-01-27 10:03:11.511783] I [client-handshake.c:1200:client_setvolume_cbk] 0-nova-vm-client-1: Connected to nova-vm-client-1, attached to remote volume '/data/nova'.
[2015-01-27 10:03:11.511806] I [client-handshake.c:1212:client_setvolume_cbk] 0-nova-vm-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2015-01-27 10:03:11.512464] I [ec.c:205:ec_up] 0-nova-vm-disperse-0: Going UP
[2015-01-27 10:03:11.520391] I [fuse-bridge.c:5080:fuse_graph_setup] 0-fuse: switched to graph 0
[2015-01-27 10:03:11.520660] I [client-handshake.c:188:client_set_lk_version_cbk] 0-nova-vm-client-1: Server lk version = 1
[2015-01-27 10:03:11.520680] I [client-handshake.c:188:client_set_lk_version_cbk] 0-nova-vm-client-2: Server lk version = 1
[2015-01-27 10:03:11.520941] I [fuse-bridge.c:4009:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22 kernel 7.17


 nova boot --flavor 1 --image 
4f31cf1b-3cff-470e-8533-0792c0e805d5 test-vm



[2015-01-27 10:04:38.002739] I [dht-common.c:1822:dht_lookup_cbk] 0-nova-vm-dht: Entry /instance-0023 missing on subvol nova-vm-disperse-0
[2015-01-27 10:04:38.009403] I [dht-common.c:1822:dht_lookup_cbk] 0-nova-vm-dht: Entry /instance-0023 missing on subvol nova-vm-disperse-0
[2015-01-27 10:04:38.019634] I [dht-common.c:1822:dht_lookup_cbk] 0-nova-vm-dht: Entry /ca095fd4-1c0c-4fb4-a2d8-a0d7586c0ffa missing on subvol nova-vm-disperse-0
[2015-01-27 10:04:38.025431] I [dht-common.c:1822:dht_lookup_cbk] 0-nova-vm-dht: Entry /ca095fd4-1c0c-4fb4-a2d8-a0d7586c0ffa missing on subvol nova-vm-disperse-0
[2015-01-27 10:04:38.029552] I [dht-common.c:1822:dht_lookup_cbk] 0-nova-vm-dht: Entry /ca095fd4-1c0c-4fb4-a2d8-a0d7586c0ffa missing on subvol nova-vm-disperse-0
[2015-01-27 10:04:38.040237] I [dht-selfheal.c:1065:dht_selfheal_layout_new_directory] 0-nova-vm-dht: chunk size = 0x / 20458 = 0x33414
[2015-01-27 10:04:38.040343] I [dht-selfheal.c:1103:dht_selfheal_layout_new_directory] 0-nova-vm-dht: assigning range size 0xc648 to nova-vm-disperse-0
[2015-01-27 10:04:38.045521] I [MSGID: 109036] [dht-common.c:6221:dht_log_new_layout_for_dir_selfheal] 0-nova-vm-dht: Setting layout of /ca095fd4-1c0c-4fb4-a2d8-a0d7586c0ffa with [Subvol_name: nova-vm-disperse-0, Err: -1 , Start: 0 , Stop: 4294967295 ],
[2015-01-27 10:04:38.057391] I [dht-common.c:1822:dht_lookup_cbk] 0-nova-vm-dht: Entry /instance-0023 missing on subvol nova-vm-disperse-0
[2015-01-27 10:04:38.062147] I [dht-common.c:1822:dht_lookup_cbk] 0-nova-vm-dht: Entry /instance-0023 missing on subvol nova-vm-disperse-0
[2015-01-27 10:04:38.068511] I [dht-common.c:1822:dht_lookup_cbk] 0-nova-vm-dht: Entry /ca095fd4-1c0c-4fb4-a2d8-a0d7586c0ffa/console.log missing on subvol nova-vm-disperse-0
[2015-01-27 10:04:38.072586] I [dht-common.c:1822:dht_lookup_cbk] 0-nova-vm-dht: Entry /instance-0023 missing on subvol nova-vm-disperse-0
[2015-01-27 10:04:38.076411] I [dht-common.c:1822:dht_lookup_cbk] 0-nova-vm-dht: Entry /instance-0023 missing on subvol nova-vm-disperse-0
[2015-01-27 10:04:38.082613] I [dht-common.c:1822:dht_lookup_cbk] 0-nova-vm-dht: Entry /ca095fd4-1c0c-4fb4-a2d8-a0d7586c0ffa/disk.config missing on subvol nova-vm-disperse-0
[2015-01-27 10:04:38.086569] I [dht-common.c:1822:dht_lookup_cbk] 0-nova-vm-dht: Entry /instance-0023 missing on subvol nova-vm-disperse-0
[2015-01-27 10:04:38.089969] I [dht-common.c:1822:dht_lookup_cbk] 0-nova-vm-dht: Entry /instance-0023 missing on subvol nova-vm-disperse-0
[2015-01-27 10:04:38.093946] I [dht-common.c:1822:dht_lookup_cbk] 0-nova-vm-dht: Entry /ca095fd4-1c0c-4fb4-a2d8-a0d7586c0ffa/console.log missing on subvol nova-vm-disperse-0
[2015-01-27 10:04:38.096379] W [ec-helpers.c:501:ec_loc_prepare] 0-nova-vm-disperse-0: GFID not available for inode
[2015-01-27 10:04:38.123688] I [dht-common.c:1822:dht_lookup_cbk] 0-nova-vm-dht: Entry /instance-0023 missing on subvol nova-vm-disperse-0
[2015-01-27 10:04:38.127440] I [dht-common.c:1822:dht_lookup_cbk] 0-nova-vm-dht: Entry /instance-0023 missing on subvol nova-vm-disperse-0
[2015-01-27 10:04:38.131979] I [dht-common.c:1822:dht_lookup_cbk] 0-nova-vm-dht: Entry /ca095fd4-1c0c-4fb4-a2d8-a0d7586c0ffa/disk.info missing on subvol nova-vm-disperse-0[20

[Gluster-users] How to find out about new / current releases?

2015-01-27 Thread Alan Orth
Hi, all.

I'm curious how we're supposed to find out about new releases. There are
several sources, each of which gives either no information or different
information.

Let me attempt a list of places I've tried just now in an attempt to find
the latest version:

- On GitHub I see there is a v3.6.2 tag[0]
- On Twitter I don't see any release announcements in the @gluster
timeline[1]
- The gluster-users mailing list has no announcements for anything beyond
3.6.0 or so (despite users reporting issues, questions, etc with 3.6.1,
3.6.2-beta2, etc)
- The Gluster website doesn't show an obvious latest version[2], but lists
3.6.1 as the latest in the wiki[3]
- The download server has packages for 3.6.2 for at least several distros
on the repo[4]

Maybe we should have something on the Gluster homepage like on kernel.org,
where they clearly list the stable branches. Also, if someone wrote a blog
post on gluster.org, with at least a `git shortlog v3.6.1..v3.6.2`, then
all announcements on Twitter, mailing list, IRC channel topic, etc could
point to the one blog post.
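
Generating that summary is a one-liner from a glusterfs checkout, for example:

git clone https://github.com/gluster/glusterfs.git && cd glusterfs
git shortlog --no-merges v3.6.1..v3.6.2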

Regards!

Alan

[0] https://github.com/gluster/glusterfs/releases
[1] https://twitter.com/gluster
[2] http://www.gluster.org/
[3] http://www.gluster.org/community/documentation/#GlusterFS_3.6
[4]
http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/CentOS/epel-6Server/x86_64/
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] GSOC-2015 heads up

2015-01-27 Thread Kaushal M
Hi all,
As it happens every year, the processes for GSOC begin in February,
starting with the organization registration, which runs this year from
February 9th to 20th. Rest of the timeline can be viewed at [1].

Last year we participated in GSOC under the Fedora organization, we hadn't
registered as an organization in time. The Fedora organization graciously
gave us one of their slots, and allowed us to become a part of GSOC for the
first time. Vipul Nayyar implemented glusterfsiostat [2], under Krishnan
P's mentorship. He created a tool to obtain and visualize I/O stats on
GlusterFS clients, and also contributed improvements to GlusterFS code,
specifically to the io-stats xlator. He successfully completed this
project, and along the way became more involved with our community and is
currently helping maintain the Gluster community infrastructure.

This year we need to aim higher, and have multiple contributions coming in.
This first and foremost requires our community to register itself as an
organization, as we cannot continue to ask the Fedora community to lend us
their valuable slots. We can discuss the details regarding the registration
and further steps during the upcoming community meeting.

Following this single success from last year, let's aim for multiple
successes this year and grow our community.

Cheers!

~kaushal

[1]: https://www.google-melange.com/gsoc/homepage/google/gsoc2015
[2]: https://forge.gluster.org/glusterfsiostat
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] geo-replication

2015-01-27 Thread M S Vishwanath Bhat

On 26/01/15 21:35, Rosemond, Sonny wrote:
I have a RHEL7 testing environment consisting of 6 nodes total, all 
running Gluster 3.6.1. The master volume is distributed/replicated, 
and the slave volume is distributed. Firewalls and SELinux have been 
disabled for testing purposes. Passwordless SSH has been established 
and tested successfully, however when I try to start geo-replication, 
the process churns for a bit and then drops me back to the command 
prompt with the message, "geo-replication command failed".

What should I look for? What am I missing?
Have you created the geo-rep session between master and slave? If yes, I
assume you have run geo-rep create with push-pem? Before that you need to
collect the pem keys using "gluster system:: execute gsec_create". Another
thing to note is that the passwordless ssh needs to be established between
the master node where you run the "geo-rep create" command and the slave
node specified in that command.


If you have done all of the above properly but it still fails, please share
the glusterd log file of the node where you are running geo-rep start and of
the slave node specified in the geo-rep start command.
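
Putting the steps above together (run from the master node whose key was
pushed to the slave; volume and slave names are the ones from Sonny's mail),
the sequence would look roughly like:

gluster system:: execute gsec_create
gluster volume geo-replication volume1 gfs7::geo1 create push-pem
gluster volume geo-replication volume1 gfs7::geo1 start
gluster volume geo-replication volume1 gfs7::geo1 status

# if create still fails, grab the glusterd log on both ends
tail -n 200 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log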


Best Regards,
Vishwanath



~Sonny


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Mount failed

2015-01-27 Thread Franco Broi

What do you get is you do this?

bash-4.1# stat -c %i /mnt/gluster
1
-bash-4.1# echo $?
0


On Tue, 2015-01-27 at 09:47 +0100, Bartłomiej Syryjczyk wrote:
> W dniu 2015-01-27 o 09:20, Franco Broi pisze:
> > Well I'm stumped, just seems like the mount.glusterfs script isn't
> > working. I'm still running 3.5.1 and the getinode bit of my script looks
> > like this:
> >
> > ...
> > Linux)
> > getinode="stat -c %i $i"
> >
> > ...
> > inode=$( ${getinode} $mount_point 2>/dev/null);
> >
> > # this is required if the stat returns error
> > if [ -z "$inode" ]; then
> > inode="0";
> > fi
> >
> > if [ $inode -ne 1 ]; then
> > err=1;
> > fi
> (My) script should check the return code, not the inode. There is a comment
> saying exactly that. Or maybe I don't understand the construction at line 298:
> ---
> [...]
>  49 Linux)
>  50 getinode="stat -c %i"
> [...]
> 298 inode=$( ${getinode} $mount_point 2>/dev/null);
> 299 # this is required if the stat returns error
> 300 if [ $? -ne 0 ]; then
> 301 warn "Mount failed. Please check the log file for more details."
> 302 umount $mount_point > /dev/null 2>&1;
> 303 exit 1;
> 304 fi
> ---
> 
> When I paste something with a 0 exit code (e.g. echo $?) between lines 298
> and 300, it works.
> 
> With the script from the 3.6.1 package there was the same problem.
> 


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Mount failed

2015-01-27 Thread Bartłomiej Syryjczyk
W dniu 2015-01-27 o 09:20, Franco Broi pisze:
> Well I'm stumped, just seems like the mount.glusterfs script isn't
> working. I'm still running 3.5.1 and the getinode bit of my script looks
> like this:
>
> ...
> Linux)
> getinode="stat -c %i $i"
>
> ...
> inode=$( ${getinode} $mount_point 2>/dev/null);
>
> # this is required if the stat returns error
> if [ -z "$inode" ]; then
> inode="0";
> fi
>
> if [ $inode -ne 1 ]; then
> err=1;
> fi
(My) script should check the return code, not the inode. There is a comment
saying exactly that. Or maybe I don't understand the construction at line 298:
---
[...]
 49 Linux)
 50 getinode="stat -c %i"
[...]
298 inode=$( ${getinode} $mount_point 2>/dev/null);
299 # this is required if the stat returns error
300 if [ $? -ne 0 ]; then
301 warn "Mount failed. Please check the log file for more details."
302 umount $mount_point > /dev/null 2>&1;
303 exit 1;
304 fi
---

When I paste something with a 0 exit code (e.g. echo $?) between lines 298
and 300, it works.

With the script from the 3.6.1 package there was the same problem.

-- 
Z poważaniem,

*Bartłomiej Syryjczyk*
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Mount failed

2015-01-27 Thread Franco Broi

Could this be a case of Oracle Linux being evil?

On Tue, 2015-01-27 at 16:20 +0800, Franco Broi wrote:
> Well I'm stumped, just seems like the mount.glusterfs script isn't
> working. I'm still running 3.5.1 and the getinode bit of my script looks
> like this:
> 
> ...
> Linux)
> getinode="stat -c %i $i"
> 
> ...
> inode=$( ${getinode} $mount_point 2>/dev/null);
> 
> # this is required if the stat returns error
> if [ -z "$inode" ]; then
> inode="0";
> fi
> 
> if [ $inode -ne 1 ]; then
> err=1;
> fi
> 
> 
> 
> On Tue, 2015-01-27 at 09:12 +0100, Bartłomiej Syryjczyk wrote:
> > W dniu 2015-01-27 o 09:08, Franco Broi pisze:
> > > So what is the inode of your mounted gluster filesystem? And does
> > > running 'mount' show it as being fuse.glusterfs?
> > ---
> > [root@apache2 ~]# stat -c %i /mnt/gluster
> > 101729533
> > 
> > [root@apache2 ~]# /usr/sbin/glusterfs --volfile-server=apache1
> > --volfile-id=/testvol /mnt/gluster
> > 
> > [root@apache2 ~]# stat -c %i /mnt/gluster
> > 1
> > 
> > [root@apache2 ~]# mount|grep fuse
> > fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
> > apache1:/testvol on /mnt/gluster type fuse.glusterfs
> > (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
> > ---
> > 
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] qemu use gluster I/O error, dev vda , sector 36703333

2015-01-27 Thread 肖力



Yes!
There is enough space, thanks!
I use xfs. Does that have something to do with this?





At 2015-01-27 16:13:46, "Anatoly Pugachev"  wrote:

Does host system (kvm host) has enough space for this glusterfs guest ?



On Tue, Jan 27, 2015 at 8:51 AM, 肖力  wrote:

Hi
I test GlusterFS on CentOS 7.0 and use for kvm vm.

My vm can start ,but system report err:
  I/O error,dev  vda ,sector 3670
 I/O error,dev  vda ,sector 367034357
...
I/O Error Detected. Shutting down filesystem.

I have six storage node and one host, All is CentOS7 and kernel is 
3.10.0-123.13.2.el7.x86_64.
My GlusterFS ver is glusterfs 3.6.1 built on Nov  7 2014 15:16:40

My vm uses Libvirt; the disk xml was stripped by the list archive.

Can someone have same error ,Tks!~
 










___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Mount failed

2015-01-27 Thread Franco Broi

Well I'm stumped, just seems like the mount.glusterfs script isn't
working. I'm still running 3.5.1 and the getinode bit of my script looks
like this:

...
Linux)
getinode="stat -c %i $i"

...
inode=$( ${getinode} $mount_point 2>/dev/null);

# this is required if the stat returns error
if [ -z "$inode" ]; then
inode="0";
fi

if [ $inode -ne 1 ]; then
err=1;
fi



On Tue, 2015-01-27 at 09:12 +0100, Bartłomiej Syryjczyk wrote:
> W dniu 2015-01-27 o 09:08, Franco Broi pisze:
> > So what is the inode of your mounted gluster filesystem? And does
> > running 'mount' show it as being fuse.glusterfs?
> ---
> [root@apache2 ~]# stat -c %i /mnt/gluster
> 101729533
> 
> [root@apache2 ~]# /usr/sbin/glusterfs --volfile-server=apache1
> --volfile-id=/testvol /mnt/gluster
> 
> [root@apache2 ~]# stat -c %i /mnt/gluster
> 1
> 
> [root@apache2 ~]# mount|grep fuse
> fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
> apache1:/testvol on /mnt/gluster type fuse.glusterfs
> (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
> ---
> 


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] qemu use gluster I/O error, dev vda , sector 36703333

2015-01-27 Thread Anatoly Pugachev
Does host system (kvm host) has enough space for this glusterfs guest ?

On Tue, Jan 27, 2015 at 8:51 AM, 肖力  wrote:

> Hi
> I test GlusterFS on CentOS 7.0 and use for kvm vm.
>
> My vm can start ,but system report err:
>   I/O error,dev  vda ,sector 3670
>  I/O error,dev  vda ,sector 367034357
> ...
> I/O Error Detected. Shutting down filesystem.
>
> I have six storage node and one host, All is CentOS7 and kernel is
> 3.10.0-123.13.2.el7.x86_64.
> My GlusterFS ver is glusterfs 3.6.1 built on Nov  7 2014 15:16:40
>
> My vm uses Libvirt; the disk xml was stripped by the list archive.
>
> Can someone have same error ,Tks!~
>
>
>
>
>
>
>
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Mount failed

2015-01-27 Thread Bartłomiej Syryjczyk
W dniu 2015-01-27 o 09:08, Franco Broi pisze:
> So what is the inode of your mounted gluster filesystem? And does
> running 'mount' show it as being fuse.glusterfs?
---
[root@apache2 ~]# stat -c %i /mnt/gluster
101729533

[root@apache2 ~]# /usr/sbin/glusterfs --volfile-server=apache1
--volfile-id=/testvol /mnt/gluster

[root@apache2 ~]# stat -c %i /mnt/gluster
1

[root@apache2 ~]# mount|grep fuse
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
apache1:/testvol on /mnt/gluster type fuse.glusterfs
(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
---

-- 
Z poważaniem,

*Bartłomiej Syryjczyk*
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Mount failed

2015-01-27 Thread Franco Broi

So what is the inode of your mounted gluster filesystem? And does
running 'mount' show it as being fuse.glusterfs?

On Tue, 2015-01-27 at 09:05 +0100, Bartłomiej Syryjczyk wrote:
> W dniu 2015-01-27 o 09:00, Franco Broi pisze:
> > Your getinode isn't working...
> >
> > + '[' 0 -ne 0 ']'
> > ++ stat -c %i /mnt/gluster
> > + inode=
> > + '[' 1 -ne 0 ']'
> >
> > How old is your mount.glusterfs script?
> It's fresh (I think so). It's from official repo:
> ---
> [root@apache2 ~]# which mount.glusterfs
> /usr/sbin/mount.glusterfs
> 
> [root@apache2 ~]# ls -l /usr/sbin/mount.glusterfs
> -rwxr-xr-x 1 root root 16908 Jan 22 14:00 /usr/sbin/mount.glusterfs
> 
> [root@apache2 ~]# yum provides /usr/sbin/mount.glusterfs
> glusterfs-fuse-3.6.2-1.el7.x86_64 : Fuse client
> Repo: @glusterfs-epel
> Matched from:
> Filename: /usr/sbin/mount.glusterfs
> 
> [root@apache2 ~]# md5sum /usr/sbin/mount.glusterfs
> a302f984367c93fd94ab3ad73386e66a  /usr/sbin/mount.glusterfs
> ---
> 


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Mount failed

2015-01-27 Thread Bartłomiej Syryjczyk
W dniu 2015-01-27 o 09:00, Franco Broi pisze:
> Your getinode isn't working...
>
> + '[' 0 -ne 0 ']'
> ++ stat -c %i /mnt/gluster
> + inode=
> + '[' 1 -ne 0 ']'
>
> How old is your mount.glusterfs script?
It's fresh (I think so). It's from official repo:
---
[root@apache2 ~]# which mount.glusterfs
/usr/sbin/mount.glusterfs

[root@apache2 ~]# ls -l /usr/sbin/mount.glusterfs
-rwxr-xr-x 1 root root 16908 Jan 22 14:00 /usr/sbin/mount.glusterfs

[root@apache2 ~]# yum provides /usr/sbin/mount.glusterfs
glusterfs-fuse-3.6.2-1.el7.x86_64 : Fuse client
Repo: @glusterfs-epel
Matched from:
Filename: /usr/sbin/mount.glusterfs

[root@apache2 ~]# md5sum /usr/sbin/mount.glusterfs
a302f984367c93fd94ab3ad73386e66a  /usr/sbin/mount.glusterfs
---

-- 
Z poważaniem,

*Bartłomiej Syryjczyk*
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Mount failed

2015-01-27 Thread Franco Broi

Your getinode isn't working...

+ '[' 0 -ne 0 ']'
++ stat -c %i /mnt/gluster
+ inode=
+ '[' 1 -ne 0 ']'

How old is your mount.glusterfs script?

On Tue, 2015-01-27 at 08:52 +0100, Bartłomiej Syryjczyk wrote:
> W dniu 2015-01-27 o 08:46, Franco Broi pisze:
> > [franco@charlie4 ~]$ stat -c %i /data
> > 1
> > [franco@charlie4 ~]$ stat -c %i /data2
> > 1
> >
> > Can you check that running the mount manually actually did work, ie can
> > you list the files after mounting manually??
> >
> > /usr/sbin/glusterfs --volfile-server=apache1 --volfile-id=/testvol 
> > /mnt/gluster
> Works fine
> ---
> [root@apache2 ~]# /usr/sbin/glusterfs --volfile-server=apache1
> --volfile-id=/testvol /mnt/gluster
> 
> [root@apache2 ~]# ls -l /brick /mnt/gluster
> /brick:
> total 1355800
> -rw-r--r-- 2 root root   1152 Jan 26 08:04 passwd
> -rw-r--r-- 2 root root  104857600 Jan 26 11:49 zero-2.dd
> -rw-r--r-- 2 root root  104857600 Jan 26 11:50 zero-3.dd
> -rw-r--r-- 2 root root  104857600 Jan 26 11:52 zero-4.dd
> -rw-r--r-- 2 root root 1073741824 Jan 26 08:26 zero.dd
> 
> /mnt/gluster:
> total 1355778
> -rw-r--r-- 1 root root   1152 Jan 26 08:03 passwd
> -rw-r--r-- 1 root root  104857600 Jan 26 11:50 zero-2.dd
> -rw-r--r-- 1 root root  104857600 Jan 26 11:50 zero-3.dd
> -rw-r--r-- 1 root root  104857600 Jan 26 11:52 zero-4.dd
> -rw-r--r-- 1 root root 1073741824 Jan 26 08:25 zero.dd
> 
> [root@apache2 ~]# umount /mnt/gluster/
> 
> [root@apache2 ~]# ls -l /mnt/gluster
> total 0
> ---
> 


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users