Re: [Gluster-users] Debian package: how to make it

2011-02-04 Thread Giovanni Toraldo
2011/2/4 Cedric Lagneau cedric.lagn...@openwide.fr:
 I haven't found, in the git / wiki / doc / pub directories, a .deb source
 package or a way to build a .deb package from the glusterfs source git.

+1
I need it too.
However, Debian experimental already has a package for 3.1.2
(http://packages.debian.org/source/experimental/glusterfs).
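For anyone who wants to build it locally, a rough sketch of grabbing that
experimental source package and rebuilding it (this assumes a Debian box
with devscripts and the build tools installed; the exact .dsc path is an
assumption based on the pool layout and may have moved):

# fetch and unpack the 3.1.2-1 source package, then build unsigned .debs
apt-get install devscripts build-essential fakeroot
dget -x http://ftp.de.debian.org/debian/pool/main/g/glusterfs/glusterfs_3.1.2-1.dsc
cd glusterfs-3.1.2
apt-get build-dep glusterfs   # needs a deb-src line for experimental,
                              # otherwise install the Build-Depends from debian/control by hand
dpkg-buildpackage -us -uc -rfakeroot
# the resulting .deb files end up in the parent directory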

Bye.

-- 
Giovanni Toraldo
http://gionn.net/
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Debian package: how to make it

2011-02-04 Thread Cedric Lagneau
 However, Debian experimental already has a package for 3.1.2
Yes, the Debian packaging at
http://ftp.de.debian.org/debian/pool/main/g/glusterfs/glusterfs_3.1.2-1.debian.tar.gz
provides packages split up the Debian way (separate client, server and
library packages).

What I'm looking for is the same kind of debian/ source directory for
the all-in-one package provided by download.glusterfs, because that
would make upgrading easier whenever GlusterFS releases a newer
version.
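Until upstream ships such a directory, one workaround (only a sketch, not
an official procedure) is to take the Debian packaging tarball above and
rebase it onto the next upstream release with uupdate from devscripts;
the 3.1.3 tarball below is a hypothetical placeholder:

# unpack the current Debian source package...
dget -x http://ftp.de.debian.org/debian/pool/main/g/glusterfs/glusterfs_3.1.2-1.dsc
cd glusterfs-3.1.2
# ...and graft its debian/ directory onto a newer upstream tarball
uupdate ../glusterfs-3.1.3.tar.gz
cd ../glusterfs-3.1.3
dpkg-buildpackage -us -uc -rfakeroot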


thanks
 
-- 
Cédric L.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] 3.1.2 Debian - client_rpc_notify failed to get the port number for remote subvolume

2011-02-04 Thread Anand Avati
It is very likely that the brick process is failing to start. Please look at
the brick log on that server (in /var/log/glusterfs/bricks/*).
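
A quick way to spot such a failure (just a sketch; the log directory is
the one mentioned above, and brick log names follow the export path):

# on the server hosting the brick
grep ' E \[' /var/log/glusterfs/bricks/*.log | tail -n 20
tail -n 50 /var/log/glusterfs/bricks/mnt-data17.log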

Avati

On Fri, Feb 4, 2011 at 10:19 AM, phil cryer p...@cryer.us wrote:

 I have glusterfs 3.1.2 running on Debian. I'm able to start the volume
 and mount it via mount -t glusterfs, and I can see everything. However,
 I am still seeing the following errors in /var/log/glusterfs/nfs.log:

 [2011-02-04 13:09:16.404851] E
 [client-handshake.c:1079:client_query_portmap_cbk]
 bhl-volume-client-98: failed to get the port number for remote
 subvolume
 [2011-02-04 13:09:16.404909] I [client.c:1590:client_rpc_notify]
 bhl-volume-client-98: disconnected
 [2011-02-04 13:09:20.405843] E
 [client-handshake.c:1079:client_query_portmap_cbk]
 bhl-volume-client-98: failed to get the port number for remote
 subvolume
 [2011-02-04 13:09:20.405938] I [client.c:1590:client_rpc_notify]
 bhl-volume-client-98: disconnected
 [2011-02-04 13:09:24.406634] E
 [client-handshake.c:1079:client_query_portmap_cbk]
 bhl-volume-client-98: failed to get the port number for remote
 subvolume
 [2011-02-04 13:09:24.406711] I [client.c:1590:client_rpc_notify]
 bhl-volume-client-98: disconnected
 [2011-02-04 13:09:28.407249] E
 [client-handshake.c:1079:client_query_portmap_cbk]
 bhl-volume-client-98: failed to get the port number for remote
 subvolume
 [2011-02-04 13:09:28.407300] I [client.c:1590:client_rpc_notify]
 bhl-volume-client-98: disconnected

 However, if I do a gluster volume info I see that it's listed:
 # gluster volume info | grep 98
 Brick98: clustr-02:/mnt/data17

 I've gone to that host, unmounted the specific drive, run fsck.ext4 on
 it, and it came back clean. Remounting and then restarting gluster on
 all the nodes hasn't changed anything; I keep getting that error.
 Also, I don't understand why it can't get the port number, since it's
 working fine on the 23 other bricks (drives) on that server, which
 leads me to believe that it's not an accurate error.

 I searched the mailing lists and bug-tracker, and only found this similar
 bug:
 http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=1640

 Any idea what's going on? Is this just a benign error, since the
 cluster still seems to be working, or is it something more serious?

 Thanks

 P
 --
 http://philcryer.com
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] 3.1.2 Debian - client_rpc_notify failed to get the port number for remote subvolume

2011-02-04 Thread phil cryer
On Fri, Feb 4, 2011 at 12:33 PM, Anand Avati anand.av...@gmail.com wrote:
 It is very likely the brick process is failing to start. Please look at the
 brick log on that server. (in /var/log/glusterfs/bricks/* )
 Avati

Thanks. So if I'm reading it right, 'bhl-volume-client-98' is really
Brick98: clustr-02:/mnt/data17 - I'm inferring that from this:

 [2011-02-04 13:09:28.407300] I [client.c:1590:client_rpc_notify]
 bhl-volume-client-98: disconnected

 However, if I do a gluster volume info I see that it's listed:
 # gluster volume info | grep 98
 Brick98: clustr-02:/mnt/data17

But on that server I don't see any issues with that brick starting:

# head mnt-data17.log -n50
[2011-02-03 23:29:24.235648] W [graph.c:274:gf_add_cmdline_options]
bhl-volume-server: adding option 'listen-port' for volume
'bhl-volume-server' with value '24025'
[2011-02-03 23:29:24.236017] W
[rpc-transport.c:566:validate_volume_options] tcp.bhl-volume-server:
option 'listen-port' is deprecated, preferred is
'transport.socket.listen-port', continuing with correction
Given volfile:
+--+
  1: volume bhl-volume-posix
  2: type storage/posix
  3: option directory /mnt/data17
  4: end-volume
  5:
  6: volume bhl-volume-access-control
  7: type features/access-control
  8: subvolumes bhl-volume-posix
  9: end-volume
 10:
 11: volume bhl-volume-locks
 12: type features/locks
 13: subvolumes bhl-volume-access-control
 14: end-volume
 15:
 16: volume bhl-volume-io-threads
 17: type performance/io-threads
 18: subvolumes bhl-volume-locks
 19: end-volume
 20:
 21: volume /mnt/data17
 22: type debug/io-stats
 23: subvolumes bhl-volume-io-threads
 24: end-volume
 25:
 26: volume bhl-volume-server
 27: type protocol/server
 28: option transport-type tcp
 29: option auth.addr./mnt/data17.allow *
 30: subvolumes /mnt/data17
 31: end-volume

+--+
[2011-02-03 23:29:28.575630] I
[server-handshake.c:535:server_setvolume] bhl-volume-server: accepted
client from 128.128.164.219:724
[2011-02-03 23:29:28.583169] I
[server-handshake.c:535:server_setvolume] bhl-volume-server: accepted
client from 127.0.1.1:985
[2011-02-03 23:29:28.603357] I
[server-handshake.c:535:server_setvolume] bhl-volume-server: accepted
client from 128.128.164.218:726
[2011-02-03 23:29:28.605650] I
[server-handshake.c:535:server_setvolume] bhl-volume-server: accepted
client from 128.128.164.217:725
[2011-02-03 23:29:28.608033] I
[server-handshake.c:535:server_setvolume] bhl-volume-server: accepted
client from 128.128.164.215:725
[2011-02-03 23:29:31.161985] I
[server-handshake.c:535:server_setvolume] bhl-volume-server: accepted
client from 128.128.164.74:697
[2011-02-04 00:40:11.600314] I
[server-handshake.c:535:server_setvolume] bhl-volume-server: accepted
client from 128.128.164.74:805

Also, looking at the tail of this log, the brick is still working; the
latest messages (from 4 seconds ago) came in as I was moving some
things around on the cluster:

[2011-02-04 23:13:35.53685] W [server-resolve.c:565:server_resolve]
bhl-volume-server: pure path resolution for
/www/d/dasobstdertropen00schrrich (INODELK)
[2011-02-04 23:13:35.57107] W [server-resolve.c:565:server_resolve]
bhl-volume-server: pure path resolution for
/www/d/dasobstdertropen00schrrich (SETXATTR)
[2011-02-04 23:13:35.59699] W [server-resolve.c:565:server_resolve]
bhl-volume-server: pure path resolution for
/www/d/dasobstdertropen00schrrich (INODELK)
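
Given that the brick log shows the process up and accepting clients, a
couple of quick cross-checks may help (a sketch; port 24025 comes from
the 'listen-port' line at the top of the log and clustr-02 from
gluster volume info - adjust for your layout):

# on clustr-02: is the brick process for /mnt/data17 running, and is
# anything listening on its advertised port?
ps aux | grep '[g]lusterfsd' | grep data17
netstat -tlnp | grep 24025

# from the node logging the portmap error: can it reach the brick port
# and glusterd's portmapper (24007)?
telnet clustr-02 24025
telnet clustr-02 24007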

Thanks!

P





[Gluster-users] GlusterFS and MySQL Innodb Locking Issue

2011-02-04 Thread Ken S.
I'm having some problems getting two nodes to mount a shared gluster
volume where I have the MySQL data files stored.  The databases are
InnoDB.  Creating the volume on the master server works fine and it
mounts, and when I mount it on the first mysql node it works fine,
too.  However, when I try to mount it on the second node I get this
error:

InnoDB: Unable to lock ./ibdata1, error: 11
InnoDB: Check that you do not already have another mysqld process
InnoDB: using the same InnoDB data or log files.

This obviously is some sort of locking issue.

I've found a few posts where people said to change 'locks' to
'posix-locks' (plocks), which I have tried without success. I also saw
a post where someone said posix-locks is just an alias for locks.

My confusion is that I don't know if this is an issue with the way the
gluster volume is mounted or if it is a limitation with mysql.  If
anyone is successfully doing this, I would appreciate a gentle nudge
in the right direction.
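
Error 11 is EAGAIN, i.e. the lock on ibdata1 is already held by someone,
so before digging into the translator stack it may be worth confirming
from each machine what is actually holding the file (a rough sketch; the
server path matches the volfile below, the client mount point is only an
assumption):

# on the gluster server and on both mysql nodes
ps aux | grep '[m]ysqld'          # any stray mysqld still running?
lsof /export/mysql/ibdata1        # on the server: who has the data file open?
lsof /var/lib/mysql/ibdata1       # on a client, if the volume is mounted there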

Here are some details:

Ubuntu 10.10 (Rackspace Cloud virtual server)
root@node1:~# dpkg -l | grep -i gluster
ii  glusterfs-client  3.0.4-1         clustered file-system (client package)
ii  glusterfs-server  3.0.4-1         clustered file-system (server package)
ii  libglusterfs0     3.0.4-1         GlusterFS libraries and translator modules
root@node1:~# dpkg -l | grep -i fuse
ii  fuse-utils        2.8.4-1ubuntu1  Filesystem in USErspace (utilities)
ii  libfuse2          2.8.4-1ubuntu1  Filesystem in USErspace library
root@node1:~#

root@node1:~# cat /etc/glusterfs/glusterfsd.vol
volume posix
  type storage/posix   # POSIX FS translator
  option directory /export/apache   # Export this directory
end-volume
volume locks
  type features/locks
  subvolumes posix
end-volume
volume brick
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume
# Configuration for the mysql server volume
volume posix-mysql
  type storage/posix
  option directory /export/mysql
  option background-unlink yes
end-volume
volume locks-mysql
  type features/posix-locks
  #type features/locks
  # option mandatory-locks on  # [2011-01-28 20:47:12] W
[posix.c:1477:init] locks-mysql: mandatory locks not supported in this
minor release.
  subvolumes posix-mysql
end-volume
volume brick-mysql
  type performance/io-threads
  option thread-count 8
  subvolumes locks-mysql
end-volume

### Add network serving capability to above brick.
volume server
  type protocol/server
  option transport-type tcp
  subvolumes brick brick-mysql
  option auth.addr.brick.allow aa.bb.cc.190,aa.bb.cc.23 # Allow access
to brick volume
  option auth.addr.brick-mysql.allow aa.bb.cc.23,aa.bb.cc.51
  option auth.login.brick-mysql.allow user-mysql
  option auth.login.user-mysql.password *
end-volume
root@node1:~#

Thanks for any help you can give.
-ken
-- 
Have a nice day ... unless you've made other plans.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Removing bricks

2011-02-04 Thread Roberto Franchini
On Fri, Feb 4, 2011 at 3:28 PM, Thomas Wakefield tw...@cola.iges.org wrote:
 What's the best process for removing a brick from a gluster setup running 3.0 
  (possibly getting upgraded to 3.1 soon)?

 We have 32 bricks, over 8 servers, and need to start thinking about how we 
 will age out the smaller disks in favor of larger disk sizes.

We have a 6-server cluster in distribute/replicate running on 3.0.5.
Yesterday we dropped a node in favour of a new one.
We unmounted the clients and stopped all the gluster servers, then
modified the client vol file to point at the new server instead of the
old one, and then restarted/remounted the cluster.
At the beginning the new node was empty, so we ran an ls -lR on a
directory on the gluster storage to trigger self-heal and watch the new
node fill up with data.
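For reference, once the cluster is on 3.1.x the same swap can be driven
from the CLI instead of hand-editing vol files; a sketch with placeholder
volume and brick names (double-check the exact syntax against the 3.1
docs for your version):

# migrate a retiring brick to a new server
gluster peer probe newserver
gluster volume replace-brick myvol oldserver:/export/brick1 newserver:/export/brick1 start
gluster volume replace-brick myvol oldserver:/export/brick1 newserver:/export/brick1 status
gluster volume replace-brick myvol oldserver:/export/brick1 newserver:/export/brick1 commit

# or simply drop a brick that is being retired
gluster volume remove-brick myvol oldserver:/export/brick1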
Hope this helps, cheers,
RF
-- 
Roberto Franchini
http://www.celi.it
http://www.blogmeter.it
http://www.memesphere.it
Tel +39.011.562.71.15
jabber:ro.franch...@gmail.com skype:ro.franchini
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS and MySQL Innodb Locking Issue

2011-02-04 Thread Anand Avati
Locking is more robust in the 3.1.x releases. Please upgrade.

Avati
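
For what it's worth, in 3.1.x the hand-written vol files above give way
to the gluster CLI; a rough sketch of recreating the mysql export there
(hostnames, paths and the mount point are placeholders, not a drop-in
recipe):

# on the exporting server
gluster volume create mysql-vol server1:/export/mysql
gluster volume start mysql-vol

# on each mysql node
mount -t glusterfs server1:/mysql-vol /var/lib/mysql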

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users