[Gluster-users] Slow self-heal while increasing replica

2015-07-27 Thread s19n

Hello,

  I am in the process of increasing the replica count from 4x2 to 4x3
bricks on three physical nodes; even though the three hosts each have
4x1 Gb Ethernet interfaces connected to the same switch with
"balance-alb" bonding, I am seeing extreme slowness in replicating the
data to the third replica, somewhere around 40-50 Mbit/s.


  I have tested the bandwidth with netperf and verified that it can reach
800-900 Mbit/s per channel (that is, per source host, up to 4 hosts).


  Is there any GlusterFS option I should check to verify there are no 
bottlenecks?
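
 For reference, these are the kind of knobs I was planning to look at;
this is only a sketch based on the 3.4 option list, and the volume name
and values are placeholders I have not validated on this cluster:

- cut here -
# show which non-default options are currently set on the volume
gluster volume info storage

# self-heal tuning I intend to experiment with (placeholder values)
gluster volume set storage cluster.background-self-heal-count 16
gluster volume set storage cluster.self-heal-window-size 16
# "full" copies whole files instead of computing rolling checksums
gluster volume set storage cluster.data-self-heal-algorithm full
- cut here -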



  I am still using Gluster 3.4.7 because of the following:

- https://bugzilla.redhat.com/show_bug.cgi?id=1168897
"remove-brick" reports "Cluster op-version must atleast be 30600."

- https://bugzilla.redhat.com/show_bug.cgi?id=1113778
gluster volume heal info reports "Volume heal failed"

Regards



[Gluster-users] Expanding gluster by one server

2015-04-30 Thread s19n


Hello,

 I'll soon be trying to implement the following scenario: 
https://joejulian.name/blog/how-to-expand-glusterfs-replicated-clusters-by-one-server/


 Instead of the deprecated "replace-brick" command, can I temporarily
increase the replica count to 3, wait for the data to propagate, and
then go back to replica 2 with different brick sets? I think this could
be convenient because I currently have two servers with four bricks
*each* and replica 2.
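
 To make the question more concrete, this is a sketch of the sequence I
have in mind (server names and brick paths are placeholders, and I have
not yet verified that remove-brick accepts lowering the replica count
this way):

- cut here -
# add the new server's bricks while raising the replica count to 3
gluster volume add-brick myvol replica 3 \
    server3:/export/brick1 server3:/export/brick2 \
    server3:/export/brick3 server3:/export/brick4

# wait until the self-heal queue has drained
gluster volume heal myvol info

# then drop back to replica 2, removing one brick from each replica set
gluster volume remove-brick myvol replica 2 \
    server1:/export/brick1 server1:/export/brick2 \
    server2:/export/brick3 server2:/export/brick4 force
- cut here -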


 I have no spare space left on the servers, so I'm afraid the procedure 
described at 
http://www.gluster.org/pipermail/gluster-users/2013-September/014537.html 
cannot be used in my case.


Regards

--
https://s19n.net


Re: [Gluster-users] GlusterFS 3.4.0 and 3.3.2 released!

2013-07-22 Thread s19n
* Vijay Bellur  [2013 07 18, 00:00]:
> There are no known problems with ext4 and 3.4.0, even with newer kernels.

What about 3.3.2? Does the same apply there as well?

Regards



Re: [Gluster-users] gluster fs hangs on certain operations

2013-03-21 Thread s19n
* Scott Hazelhurst  [2013 03 15, 11:49]:
> We are running  gluster 3.3.1 on SL 6.3. The bricks are formatted ext3

http://joejulian.name/blog/glusterfs-bit-by-ext4-structure-change/ ?
Maybe the solution is "use xfs".
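
If you do go with XFS, the usual recommendation for gluster bricks is to
format with larger inodes so the extended attributes fit; a minimal
sketch (device and mount point are placeholders):

- cut here -
# 512-byte inodes leave room for the glusterfs xattrs
mkfs.xfs -i size=512 /dev/sdb1
mount -o noatime /dev/sdb1 /export/brick1
- cut here -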

Regards



Re: [Gluster-users] Gluster machines slowing down over time

2012-12-11 Thread s19n
> > On Tue, Dec 11, 2012 at 04:13:15PM +, Tom Hall wrote:
> > I notice when sshing into the machine takes ages and running remote commands
> > with capistrano takes longer and longer.

 That is commonly related to DNS unavailability or misconfiguration.
What makes you think the slowness is gluster-related?
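
 A couple of quick checks that usually tell DNS trouble apart from
anything gluster-related (just a sketch, the address is a placeholder):

- cut here -
# does a reverse lookup of a connecting client stall?
time getent hosts 192.0.2.10

# sshd performs a reverse lookup per connection unless told otherwise;
# "UseDNS no" in /etc/ssh/sshd_config avoids it
grep -i usedns /etc/ssh/sshd_config
- cut here -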

Regards



Re: [Gluster-users] worth upgrading from 3.2.7 to 3.3.1?

2012-12-03 Thread s19n
* Gerald Brandt  [2012 11 27, 09:36]:
> I have speed and stability increases with 3.3.0/3.3.1.

 Sorry if this seems like thread hijacking, but have you had success in
mixing the two versions (3.3.0 and 3.3.1) in the same cluster? Is it
then possible to perform a rolling upgrade with no downtime?

 I already posted this very same question on Oct 22nd; perhaps it seems
so obvious that nobody feels the need to answer, but I cannot find the
answer anywhere in the documentation.

Best regards



Re: [Gluster-users] upgrade from 3.3.0 to 3.3.1

2012-10-22 Thread s19n
* Patrick Irvine  [2012 10 16, 22:03]:
> Can I do a simple rolling upgrade from 3.3.0 to 3.3.1?  Or are
> there some gotchas?

 I haven't seen any answer to this enquiry, though I think the answer is
both simple and important to know. Can 3.3.0 and 3.3.1 bricks coexist?
And what about clients: can 3.3.1 clients connect to 3.3.0 bricks?

Thank you very much in advance for your answers,
Best regards



Re: [Gluster-users] rdma transport on 3.3?

2012-10-19 Thread s19n
* samuel  [2012 10 19, 11:44]:
> I've tried 3.3.1, 3.4-rc1,3.2.7 and only been able to mount the volume
> with 3.2.7. With the other gluster versions, the following error appears
> on the gluster logs:

 In the Admin Guide the following note can be found (taken from the
3.3.1 changelog):

Command Reference - The Gluster Console Manager
"NOTE: with 3.3.0 release, transport type 'rdma' and 'tcp,rdma' are not
fully supported"
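
 Until that changes, explicitly creating and mounting the volume with
the tcp transport is probably the safest option; a sketch with
placeholder names:

- cut here -
gluster volume create testvol replica 2 transport tcp \
    server1:/export/brick1 server2:/export/brick1
mount -t glusterfs -o transport=tcp server1:/testvol /mnt/testvol
- cut here -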

Regards



Re: [Gluster-users] Clearing the heal-failed and split-brain status messages

2012-10-10 Thread s19n
* Robert van Leeuwen  [2012 10 10, 13:56]:
> > From: gluster-users-boun...@gluster.org
> > If this were a "wishlist" bug on the bugzilla tracker, I'd subscribe to
> > it (which seems the common practice to show interest and say "+1").
> 
> I've created an issue on the bugtracker:
> https://bugzilla.redhat.com/show_bug.cgi?id=864963

Thanks, subscribed.

Regards



Re: [Gluster-users] Clearing the heal-failed and split-brain status messages

2012-10-10 Thread s19n
* Robert van Leeuwen  [2012 10 10, 08:39]:
> Is it possible to clear the heal-failed and split-brain status in a nice way?
> I would personally like if gluster would automatically remove failed states
> when they are resolved ( if future reference is needed you can always look at
> the logs)

 If this were a "wishlist" bug on the bugzilla tracker, I'd subscribe to
it (which seems to be the common practice for showing interest and
saying "+1").

Regards



Re: [Gluster-users] Tuning/Configuration: Specifying the TCP port opened by glusterfsd for each brick?

2012-09-26 Thread s19n
* Eric  [2012 09 24, 20:08]:
> Anybody?
> It's been three weeks and nobody's even made a suggestion!

Have you tried searching the mailing list archives during these three
weeks?

http://gluster.org/pipermail/gluster-users/2011-February/006709.html

"Currently, there is no option to change the port through options, hence
you need to change the code."


Regards



Re: [Gluster-users] New release of Gluster?

2012-09-19 Thread s19n
* Gerald Brandt  [2012 09 18, 11:16]:
> Are there any proposed dates for a new release of Gluster?
> I'm currently running 3.3, and the gluster heal info commands
> all segfault.

 It happened to me as well. I'd support the request for a new release,
which could also address the known problems with ext4...

 As for the segfaults:

https://bugzilla.redhat.com/show_bug.cgi?id=858415
https://bugzilla.redhat.com/show_bug.cgi?id=829657

Regards



Re: [Gluster-users] XFS and MD RAID

2012-08-29 Thread s19n
* Brian Candler  [2012 08 29, 08:48]:
> In a test setup (Ubuntu 12.04, gluster 3.3.0, 24 x SATA HD on LSI Megaraid
> controllers, MD RAID) I can cause XFS corruption just by throwing some
> bonnie++ load at the array - locally without gluster.

Randomly found on Google:

http://www.jive.nl/nexpres/doku.php?id=nexpres:nexpres_wp8#tests_on_xfs_file_system

"It is our opinion that the normalization of XFS behavior on a 24 disks
array is due to some proprietary round-robin algorithm on the raid card
that caused during the tests on a 12 disks array a 'missing disk' signal
that slowed down the pace, even though some downfalls on the 24 disks
array still happen every 18/20 files written. We ought to say that the
downfall pattern is not related to time delays or file sizes, but it is
instead a peculiarity of the XFS file system."

 Now I'd _really_ like to know whether you are using a MegaRAID or, as
you say at the end, an mpt2sas controller/driver, because I am going to
set up a new gluster volume with them, and considering this issue and
the ext4 one I don't really know what to choose...


Regards



Re: [Gluster-users] [Gluster-devel] FeedBack Requested : Changes to CLI output of 'peer status'

2012-08-28 Thread s19n
* Stephan von Krawczynski  [2012 08 28, 15:31]:
> You are not seriously talking about 80 char terminals for an output
> that is commonly used by scripts and stuff like nagios, are you?
>
> > -
> > Hostname: 10.70.36.7
> > Uuid: c7283ee7-0e8d-4cb8-8552-a63ab05deaa7
> > State: Peer in Cluster (Connected)
> > -

 Given the '80 char terminals' and the plan to add more fields, I would
suggest a kind of key-value output which should be as easy to parse:

- cut here -
c7283ee7-0e8d-4cb8-8552-a63ab05deaa7: Hostname: 10.70.36.7
c7283ee7-0e8d-4cb8-8552-a63ab05deaa7: State: Connected
- cut here -

 This way you should be able to line-filter the output as if it were all
on one line.
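
 For instance, with such a prefixed format a plain grep or awk would be
enough to pick out one peer's state (hypothetical, since the format
above is only a proposal):

- cut here -
# state of a single peer, selected by uuid
gluster peer status | grep '^c7283ee7-0e8d-4cb8-8552-a63ab05deaa7: State:'

# or one "uuid state" line per peer, for a nagios-style check
gluster peer status | awk -F': ' '$2 == "State" {print $1, $3}'
- cut here -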



Re: [Gluster-users] Failed to get names of volumes

2012-08-28 Thread s19n
* s19n  [2012 08 27, 17:56]:
>  Additional update: seems that the command (and many others) is not
> working because of a 'local lock' being held.

I think I am hitting the following, so I'll be looking forward to the fix:

https://bugzilla.redhat.com/show_bug.cgi?id=843003



Re: [Gluster-users] Failed to get names of volumes

2012-08-27 Thread s19n
* s19n  [2012 08 27, 11:25]:
> - cut here -
> # gluster volume status all
> operation failed
>  
> Failed to get names of volumes
> - cut here -

 Additional update: it seems that the command (and many others) is not
working because a 'local lock' is being held. I have tried running the
command on the host holding the lock as well as on other hosts, with no
success. An extract from etc-glusterfs-glusterd.vol.log follows:

- cut here -
[2012-08-27 17:41:17.842441] I
  [glusterd-volume-ops.c:583:glusterd_handle_cli_statedump_volume]
  0-glusterd: Received statedump request for volume storage with options 

[2012-08-27 17:41:17.842510] E [glusterd-utils.c:277:glusterd_lock]
  0-glusterd: Unable to get lock for uuid:
  34b665ea-d315-489b-bd0f-172bb6b85ee1, lock held by:
  34b665ea-d315-489b-bd0f-172bb6b85ee1

[2012-08-27 17:41:17.842527] E
  [glusterd-handler.c:453:glusterd_op_txn_begin] 0-management: Unable to
  acquire local lock, ret: -1
- cut here -

 Could you please describe why these locks are being held and, if
possible, how to clear them?
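
 (The only workaround I can think of is restarting the management daemon
on the node whose uuid is reported as holding the lock; I am assuming
this leaves the glusterfsd brick processes untouched, but please correct
me if that is wrong.)

- cut here -
# on the node reported as the lock holder
service glusterd restart
- cut here -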

Thank you very much for your kind attention,
Best regards



Re: [Gluster-users] Failed to get names of volumes

2012-08-27 Thread s19n
* Tao Lin  [2012 08 27, 17:54]:
> There are issues with gluster on ext4,you have to use other file
> systems(eg. xfs, ext3) instead of ext4.

If you are referring to

http://joejulian.name/blog/glusterfs-bit-by-ext4-structure-change/

 I don't think I was experiencing that problem, since I shouldn't be
running an affected kernel version.

 Update: now I am able to issue the command, and indeed noticed that one
of the nodes was offline. In the brick log I found:

- cut here -
patchset: git://git.gluster.com/glusterfs.git
signal received: 7
time of crash: 2012-08-24 23:22:30
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.3.0
/lib/libc.so.6(+0x33af0)[0x7f024da35af0]
/usr/lib/libglusterfs.so.0(__dentry_grep+0x8e)[0x7f024e7879de]
/usr/lib/libglusterfs.so.0(inode_grep+0x66)[0x7f024e787c56]
/usr/lib/glusterfs/3.3.0/xlator/protocol/server.so(resolve_entry_simple+0x91)[0x7f02491eb641]
/usr/lib/glusterfs/3.3.0/xlator/protocol/server.so(server_resolve_entry+0x24)[0x7f02491ebd14]
/usr/lib/glusterfs/3.3.0/xlator/protocol/server.so(server_resolve+0x98)[0x7f02491ebb88]
/usr/lib/glusterfs/3.3.0/xlator/protocol/server.so(server_resolve_all+0x9e)[0x7f02491ebcbe]
/usr/lib/glusterfs/3.3.0/xlator/protocol/server.so(resolve_and_resume+0x14)[0x7f02491ebd84]
/usr/lib/glusterfs/3.3.0/xlator/protocol/server.so(server_lookup+0x18f)[0x7f024920525f]
/usr/lib/libgfrpc.so.0(rpcsvc_handle_rpc_call+0x293)[0x7f024e550ce3]
/usr/lib/libgfrpc.so.0(rpcsvc_notify+0x93)[0x7f024e550e53]
/usr/lib/libgfrpc.so.0(rpc_transport_notify+0x28)[0x7f024e5518b8]
/usr/lib/glusterfs/3.3.0/rpc-transport/socket.so(socket_event_poll_in+0x34)[0x7f024acd0734]
/usr/lib/glusterfs/3.3.0/rpc-transport/socket.so(socket_event_handler+0xc7)[0x7f024acd0817]
/usr/lib/libglusterfs.so.0(+0x3e394)[0x7f024e79b394]
/usr/sbin/glusterfsd(main+0x58a)[0x407aaa]
/lib/libc.so.6(__libc_start_main+0xfd)[0x7f024da20c4d]
/usr/sbin/glusterfsd[0x404a59]
- cut here -

Regards



[Gluster-users] Failed to get names of volumes

2012-08-27 Thread s19n

Hello,

 Gluster 3.3.0 distributed replicated, ext4 bricks. Since this morning I
am unable to check the status of the filesystem:

- cut here -
# gluster volume status all
operation failed
 
Failed to get names of volumes
- cut here -

Extract from cli.log:

- cut here -
[2012-08-27 11:10:00.341089] W [rpc-transport.c:174:rpc_transport_load]
  0-rpc-transport: missing 'option transport-type'. defaulting to "socket"
[2012-08-27 11:10:00.439743] E
  [cli-rpc-ops.c:5657:gf_cli_status_volume_all] 0-cli: status all failed
[2012-08-27 11:10:00.439791] I [input.c:46:cli_batch] 0-: Exiting with:
  -22
- cut here -

"gluster peer" reports all nodes as connected, but that check is
reportedly not entirely reliable.

Any suggestion will be really appreciated.
Best regards



Re: [Gluster-users] files on gluster brick that have '---------T' designation.

2012-08-21 Thread s19n
* Shishir Gowda  [2012 08 20, 22:09]:
> These are valid files in glusterfs-dht xlator configured volumes.
> These are known as link files, which dht uses to maintain files on
> the hashed subvol, when the actual data resides in non hashed
> subvolumes(rename can lead to these).

Hello,

 since I have also noticed those files, could you please clarify (or
point me to the relevant documentation) what 'hashed subvolumes' and
'non-hashed subvolumes' are?
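
 For what it's worth, here is a sketch of how they can be spotted on a
brick, assuming the linkto xattr name is unchanged in 3.3 (the brick
path is a placeholder):

- cut here -
# link files are zero-byte, mode ---------T, and carry a dht.linkto xattr
find /export/brick1 -type f -perm 1000 -size 0 \
    -exec getfattr -n trusted.glusterfs.dht.linkto {} \;
- cut here -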

> The cleanup of these files will be taken care of by running
> rebalance.

 When you say rebalance, do you mean a fix-layout or a full rebalance
with data migration? Should I plan to issue a rebalance periodically to
get rid of those files?
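
 Just to be sure we are talking about the same commands, I mean the
difference between these two (the volume name is a placeholder):

- cut here -
# fix-layout only recomputes the directory layouts, no data is moved
gluster volume rebalance myvol fix-layout start

# full rebalance: layouts plus migration of misplaced files
gluster volume rebalance myvol start
gluster volume rebalance myvol status
- cut here -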


Thank you very much in advance for your attention,
Best regards



[Gluster-users] gfid mismatch on 3.3.0

2012-08-17 Thread s19n

Hello,

 I am seeing the following on a gluster 3.3.0 setup (5x2 bricks
distributed replicated):

- cut here -
 [2012-08-17 07:52:21.011448] W [dht-common.c:416:dht_lookup_dir_cbk]
0-storage-dht: /index/it/201208/20120812/data: gfid different 
on storage-replicate-0
- cut here -

 Is it possible that the problem addressed at
http://community.gluster.org/a/alert-glusterfs-release-for-gfid-mismatch/
is still present in 3.3.0? This is a new setup, not an upgraded one.
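
 In case it is useful, the gfids can be compared directly on the two
bricks of storage-replicate-0, along these lines (the brick path is a
placeholder):

- cut here -
# run on both bricks of the replica pair and compare the values
getfattr -n trusted.gfid -e hex \
    /export/brick1/index/it/201208/20120812/data
- cut here -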


Thank you very much in advance for your attention,
Best regards



[Gluster-users] Replica with non-uniform bricks

2012-08-13 Thread s19n

Hello,

 as per http://gluster.org/pipermail/gluster-users/2011-May/007788.html
I am advised to use bricks which are uniform in size, but for a number
of reasons this is not possible at the moment in my current setup.

 So the question is: what is the expected behaviour when two bricks of
different sizes are paired to form a replica? Will the larger brick
keep accepting writes even after the smaller brick has filled up?
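
 The only related knob I am aware of is cluster.min-free-disk, which
should make DHT avoid placing new files on bricks below a free-space
threshold; a sketch (volume name and value are placeholders, and I am
not sure how it interacts with replication):

- cut here -
gluster volume set myvol cluster.min-free-disk 10%
- cut here -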

 In this scenario, what conditions become relevant when rebalancing the
volume? Does the first brick in a replica (as listed in the volume
definition) take precedence?


Thank you very much in advance for your attention,
Best regards
