[Gluster-users] Rev Your (RDMA) Engines for the RDMA GlusterFest

2013-06-20 Thread John Mark Walker
It's that time again: we want to test the GlusterFS 3.4 beta before
we unleash it on the world. Like our last test fest, we want you to
put the latest GlusterFS beta through real-world usage scenarios and
see how it compares to previous releases. Unlike last time, we want to
focus this round of testing on InfiniBand and RDMA hardware.

For a description of how to do this, see the GlusterFest page on the
community wiki:

http://www.gluster.org/community/documentation/index.php/GlusterFest

Run the tests, report the results, and file a bug for any problems you find along the way.
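
If you want a quick starting point, here is a minimal sketch of bringing up
an RDMA-transport test volume (the volume name, hostnames, brick paths and
mount point are placeholders; the wiki page above is the authoritative
procedure):

  gluster volume create rdmatest transport rdma server1:/export/brick1 server2:/export/brick1
  gluster volume start rdmatest
  mount -t glusterfs -o transport=rdma server1:/rdmatest /mnt/rdmatest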


As an added bonus, use this mailing list as another outlet for your
testing. After reporting the results on the GlusterFest page, report
them here, too, and other users can confirm – or counter – your
results.

Find a new bug that is confirmed by the Gluster QE team, and I'll send
you a free t-shirt.

Testing starts in a matter of minutes – 00:00 UTC on Friday, June 21
(that's 5pm PT/8pm ET today) – and wraps up 24 hours later.

Happy testing,

JM
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] 40 gig ethernet

2013-06-20 Thread Bryan Whitehead
Weird, I have a bunch of servers with Areca ARC-1680 8-ports and they
have never given me a problem.

The first thing I did was update the firmware to the latest version: my
brand new cards shipped with firmware that was two years old and didn't
recognize disks larger than 1 TB.

On Thu, Jun 20, 2013 at 7:11 AM, Shawn Nock  wrote:
> Justin Clift  writes:
>>> The other issue I have is with hardware RAID. I'm not sure if folks
>>> are using that with gluster or if they're using software RAID, but
>>> the closed source nature and crappy proprietary tools annoy all the
>>> devops guys I know. What are you all doing for your gluster setups?
>>> Is there some magical RAID controller that has Free tools, or are
>>> people using mdadm, or are people just unhappy or ?
>>
>> Before joining Red Hat I was using Areca hardware.  But Areca (the
>> company) was weird/dishonest when I tried to RMA a card that went bad.
>>
>> So, I advise people to keep away from that crowd.  Haven't tried any
>> others in depth since. :/
>
> I second the thoughts on Areca. They are a terrible company; avoid at
> all costs. I've RMA'd every card I've installed of theirs that had been
> in service for more than 6 months; some servers have had RMA returns
> fail within months.
>
> Their only US support option is "we'll ship it to Taiwan for repair and
> return it in 6-8 weeks". There is no option to pay for advance
> replacement.
>
> I had to keep a stock of spares in-house until I migrated to 3ware (now
> LSI). I haven't had any trouble with these cards in several years (and
> haven't needed to RMA or contact support).
>
> --
> Shawn Nock (OpenPGP: 0x65118FA5)
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Installation problem

2013-06-20 Thread Umar Draz
Hi,

I have installed GlusterFS on my Ubuntu 13.04 system. Now I want to configure
QEMU with --enable-glusterfs, but it is not working. Here is the error:

ERROR
ERROR: User requested feature GlusterFS backend support
ERROR: configure was not able to find it
ERROR


Could you please tell me which Gluster package I need to install so that
QEMU's configure can find GlusterFS?
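
As far as I can tell, QEMU's configure looks for the glusterfs-api
pkg-config module (libgfapi), which only ships with GlusterFS 3.4 or newer.
Would something along these lines be the right direction? (The package and
PPA names below are my guesses.)

  pkg-config --exists glusterfs-api && echo found   # what configure appears to test
  sudo add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.4   # guess: a 3.4 beta PPA
  sudo apt-get update && sudo apt-get install glusterfs-common
  ./configure --enable-glusterfs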

Br.

Umar
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] 40 gig ethernet

2013-06-20 Thread Shawn Nock
Justin Clift  writes:
>> The other issue I have is with hardware RAID. I'm not sure if folks
>> are using that with gluster or if they're using software RAID, but
>> the closed source nature and crappy proprietary tools annoy all the
>> devops guys I know. What are you all doing for your gluster setups?
>> Is there some magical RAID controller that has Free tools, or are
>> people using mdadm, or are people just unhappy or ?
>
> Before joining Red Hat I was using Areca hardware.  But Areca (the
> company) was weird/dishonest when I tried to RMA a card that went bad.
>
> So, I advise people to keep away from that crowd.  Haven't tried any
> others in depth since. :/

I second the thoughts on Areca. They are a terrible company; avoid at
all costs. I've RMA'd every card I've installed of theirs that had been
in service for more than 6 months; some servers have had RMA returns
fail within months.

Their only US support option is "we'll ship it to Taiwan for repair and
return it in 6-8 weeks". There is no option to pay for advance
replacement.

I had to keep a stock of spares in-house until I migrated to 3ware (now
LSI). I haven't had any trouble with these cards in several years (and
haven't needed to RMA or contact support).

-- 
Shawn Nock (OpenPGP: 0x65118FA5)


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Fwd: Unable to remove / replace faulty bricks

2013-06-20 Thread elvinas.piliponis
Hello, 

> All volfiles are autogenerated based on the info available in the other files
> in /var/lib/glusterd/vols// (like ./info, ./bricks/*). So to manually 
> fix
> your "situation", please make sure the contents in the files ./info,
> ./node_state.info ./rbstate ./bricks/* are "proper" (you can either share
> them with me offline, or compare them with another volume which is
I will consult with management about sharing. However, I have found no obvious
differences in the .vol files. Bricks on the faulty servers are explicitly
defined in them the same way as the healthy bricks.
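
(For reference, a quick way to see where a given brick is referenced, using
the volume name and brick ID from further down in this mail:

  grep -r 00031 /var/lib/glusterd/vols/glustervmstore/info \
                /var/lib/glusterd/vols/glustervmstore/bricks/
)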

> good), and issue a "gluster volume reset " to re-write fresh
> volfiles.
Is this really the correct command? According to the help output it only resets
reconfigured options:
volume reset  [option] [force] - reset all the reconfigured options

> It is also a good idea to double check the contents of
> /var/lib/glusterd/peers/* is proper too.
Only the dead server 00022 is missing, as peer probe is not able to reach it.
00031 was reattached (but is still not recognized as part of the volume).

> Doing these manual steps and restarting all processes should recover you
> from pretty much any situation.
Yeah, I thought so. However, after any such edit the volume refused to start
(glusterfs-server failed to start), complaining about unknown keys and listing
brick numbers from the info file.

 
> Back to the cause of the problem - it appears to be the case that the ongoing
> replace-brick got messed up when yet another server died.
I believe it got messed up (or finally messed up) when I was desperately
attempting to remove the bricks and issued
gluster peer detach 00022 force
gluster peer detach 00031 force
hoping that this would let me break the migration in progress and then
remove/replace those servers.

> A different way of achieving what you want, is to use add-brick + remove-
> brick for decommissioning servers (i.e, add-brick the new server
> - 00028, and "remove-brick start" the old one - 00031, and "remove-brick
> commit" once all the data has drained out). Moving forward this will be the
> recommended way to decommission servers. Use replace-brick to only
> replace an already dead server - 00022 with its replacement).

I am using a distributed-replicated volume, so I can only add/remove servers in
replica pairs. Also, the remove-brick command has issues with open files, at
least with the semiosis package. The COW files (base disk + diff) of any active
KVM instance get corrupted as soon as either of the following commands touches
the data:
remove-brick
rebalance
What I have observed is that the corruption hits even the base disk file, which
in OpenStack is shared by a number of instances, so a single corrupted file
causes faults on multiple VMs. Recovery can be impossible, because the instances
will have written corrupt data into their diff files, and replacing the base
file with a proper one will not help them. I have tested this several times and
found that replace-brick, for some reason, works properly and does not cause
issues for open files.
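
In other words, with a replica-2 volume the suggested add-brick/remove-brick
route has to be done in whole pairs, roughly like this (a sketch only; brick
paths are from my setup, and the second new host is a placeholder):

  gluster volume add-brick glustervmstore 00028:/mnt/vmstore/brick NEWHOST:/mnt/vmstore/brick
  gluster volume remove-brick glustervmstore 00031:/mnt/vmstore/brick 00036:/mnt/vmstore/brick start
  gluster volume remove-brick glustervmstore 00031:/mnt/vmstore/brick 00036:/mnt/vmstore/brick status
  gluster volume remove-brick glustervmstore 00031:/mnt/vmstore/brick 00036:/mnt/vmstore/brick commit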

> 
> > I am using Semiosis 3.3.1 package on Ubuntu 12.04:
> > dpkg -l | grep gluster
> > rc  glusterfs         3.3.0-1                 clustered file-system
> > ii  glusterfs-client  3.3.1-ubuntu1~precise8  clustered file-system (client package)
> > ii  glusterfs-common  3.3.1-ubuntu1~precise8  GlusterFS common libraries and translator modules
> > ii  glusterfs-server  3.3.1-ubuntu1~precise8  clustered file-system (server package)

I have run glusterfs-server in debug mode and saw the following when I
attempted to replace a brick with force. It seems that Gluster refuses a volume
change command if one of the nodes does not respond, even when force is given.
I would expect "force" to ignore such issues, especially when the change does
not touch the replica set whose node is unresponsive.
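
(By "debug mode" I mean running the daemon in the foreground with debug
logging, roughly:
  service glusterfs-server stop
  glusterd --debug
)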

However, in the end I receive the following error, which does not seem to
relate to the log:
brick: 00031:/mnt/vmstore/brick does not exist in volume: glustervmstore

In my case the replica pairs are:
00031 -- 00036 (I am replacing 00031 with the spare 00028)
00022 -- 00024 (00022 has had a disk failure; the system is offline and unable
to respond)
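
The replace command was along these lines (the exact force variant may have
differed, but the bricks and volume are as in the error above):

  gluster volume replace-brick glustervmstore 00031:/mnt/vmstore/brick 00028:/mnt/vmstore/brick commit force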

[2013-06-19 09:56:21.520991] D [glusterd-utils.c:941:glusterd_volinfo_find] 0-: 
Volume glustervmstore found
[2013-06-19 09:56:21.521014] D [glusterd-utils.c:949:glusterd_volinfo_find] 0-: 
Returning 0
[2013-06-19 09:56:21.521060] D [glusterd-utils.c:727:glusterd_brickinfo_new] 
0-: Returning 0
[2013-06-19 09:56:21.521095] D 
[glusterd-utils.c:783:glusterd_brickinfo_from_brick] 0-: Returning 0
[2013-06-19 09:56:21.521126] D [glusterd-utils.c:585:glusterd_volinfo_new] 0-: 
Returning 0
[2013-06-19 09:56:21.521170] D 
[glusterd-utils.c:672:glusterd_volume_brickinfos_delete] 0-: Returning 0
[2013-06-19 09:56:21.521201] D [glusterd-utils.c:701:glusterd_volinfo_d