Ryan,
10 (storage) nodes. I did some tests with 1 brick per node, and another round with
4 per node. Each is FDR-connected, but all on the same switch.
I'd love to hear about your setup, Gluster version, OFED stack, etc.
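(For reference, a minimal sketch of the two layouts I tested; the volume name,
hostnames, and brick paths are placeholders, and the rdma transport is an
assumption based on what this cluster is for. Brace expansion is just bash:)

# round 1: 10 nodes, 1 brick each
gluster volume create testvol transport rdma node{01..10}:/bricks/b1/testvol

# round 2: 10 nodes, 4 bricks each (40 bricks total)
gluster volume create testvol transport rdma node{01..10}:/bricks/b{1..4}/testvol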
--
Matthew Nicholson
Research Computing Specialist
Harvard FAS Research Computing
Justin,
yeah, this fabric is all brand-new Mellanox, and all nodes are running their
v2 stack.
Oh, for a bug report, sure thing. I was thinking I would tack on a comment
here:
https://bugzilla.redhat.com/show_bug.cgi?id=982757
since that's about the silent failure.
--
Matthew Nicholson
Research Computing Specialist
lume.
When it IS ready and in 3.4.1 (hopefully!), having good docs around it, and
maybe even a simple printf for the TCP failover would be huge for us.
--
Matthew Nicholson
Research Computing Specialist
Harvard FAS Research Computing
matthew_nichol...@harvard.edu
On Wed, Jul 10, 2013 at 3:18 AM, J
ta4, on both the clients and storage nodes.
So, I guess my questions are:
Is this expected/normal?
Is peering/volfile fetching always TCP-based?
How should one peer nodes in an RDMA setup?
Should this be tried with only RDMA as a transport on the volume?
Are there more detailed docs for RDMA gluster
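(As far as I can tell, the management path, i.e. peer probe and volfile fetch,
runs over TCP on port 24007 by default even when the volume's data transport is
rdma, so a "pure RDMA" setup still needs working IP connectivity between peers.
A minimal sketch of what I mean, with placeholder names:)

# peering happens over the management (TCP) connection
gluster peer probe node02

# rdma as the only data transport for the volume
gluster volume create rdmavol transport rdma node01:/bricks/b1 node02:/bricks/b1
gluster volume start rdmavol

# on a client: a plain mount should pick up the rdma volfile for a pure-rdma
# volume; for a tcp,rdma volume the '.rdma' suffix selects the rdma transport
mount -t glusterfs node01:/rdmavol /mnt/rdmavol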
Yes please! Busy day 'round here, but we do have a 3.4 beta3 RDMA cluster
up, just need to get to the tests...
--
Matthew Nicholson
Research Computing Specialist
Harvard FAS Research Computing
matthew_nichol...@harvard.edu
On Fri, Jun 21, 2013 at 11:02 AM, John Mark Walker wrote:
didn't manifest right away
however....
--
Matthew Nicholson
Research Computing Specialist
Harvard FAS Research Computing
matthew_nichol...@harvard.edu
On Tue, Jun 4, 2013 at 2:39 PM, Vijay Bellur wrote:
> On 06/04/2013 10:29 PM, Matthew Nicholson wrote:
>
>> So it sees somethi
e same (the node I'm checking the status from is
holding a lock, gets rejected, and never gets any info back).
--
Matthew Nicholson
Research Computing Specialist
Harvard FAS Research Computing
matthew_nichol...@harvard.edu
On Tue, Jun 4, 2013 at 12:21 PM, Matthew Nicholson <
matthew_ni
;)}, [16]) = 0
futex(0x63b7a4, FUTEX_CMP_REQUEUE_PRIVATE, 1, 2147483647, 0x63b760, 2) = 1
futex(0x63b760, FUTEX_WAKE_PRIVATE, 1) = 1
epoll_ctl(3, EPOLL_CTL_MOD, 5, {EPOLLIN|EPOLLPRI, {u32=5, u64=5}}) = 0
epoll_wait(3,
so talking to localhost on 964
All nodes do that, but with different ports.
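(If anyone wants to compare without strace, a quick way to see which ports
glusterd actually has open on a node is simply:)

netstat -anp | grep glusterd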
glusterd is shut down) have all been restarted. Actually, we just
went so far as to bounce one replica then another (reboot).
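(For completeness, by "restarted" I mean roughly this, run on one replica node
and then the other, assuming the stock init script name:)

service glusterd stop
service glusterd start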
--
Matthew Nicholson
Research Computing Specialist
Harvard FAS Research Computing
matthew_nichol...@harvard.edu
On Tue, Jun 4, 2013 at 10:30 AM, Vijay Bellur wrote:
>
, the UUID holding the lock and the UUID requesting the lock
are the same. So it seems like a lock was "forgotten" about?
Any thoughts on clearing this?
--
Matthew Nicholson
Research Computing Specialist
Harvard FAS Research Computing
matthew_nichol...@harvard.edu
to the client mount to get it onto the other 9 nodes
(distributed), then wipe/re-add the 2 bricks I just removed and rebalance?
--
Matthew Nicholson
Research Computing Specialist
Harvard FAS Research Computing
matthew_nichol...@harvard.edu
count?
Option 2, as far as I know, should work, but I'm trying to gauge other
options over doing brick-by-brick replacements for this much data.
Anyone out there done something sort of like this?
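(To be concrete about the drain/re-add steps, and hedging that I haven't run
this here yet, the CLI flow I have in mind looks like this; the volume and brick
names are placeholders. The remove-brick start/commit pair is the supported way
to drain bricks first, as opposed to copying through the client mount:)

# drain and detach the two bricks being rebuilt
gluster volume remove-brick VOLNAME node09:/brick node10:/brick start
gluster volume remove-brick VOLNAME node09:/brick node10:/brick status
gluster volume remove-brick VOLNAME node09:/brick node10:/brick commit

# after wiping them, add them back and spread data out again
gluster volume add-brick VOLNAME node09:/brick node10:/brick
gluster volume rebalance VOLNAME start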
--
Matthew Nicholson
Research Computing Specialist
long-running HPC jobs, so remounting when we provision new
space on this isn't an option. Again, IF this is the case, is there any way to
get clients to pick up the new quotas without an unmount/remount? Some
sort of volfile refresh?
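(For context, these are just the standard directory quotas; the path and size
below are made-up examples of the sort of change we'd be making:)

gluster volume quota VOLNAME enable
gluster volume quota VOLNAME limit-usage /some_lab 10TB
gluster volume quota VOLNAME list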
--
Matthew Nicholson
Research Computing Specialist
Harvard FAS Research Computing
directories.
--
Matthew Nicholson
Research Computing Specialist
Harvard FAS Research Computing
matthew_nichol...@harvard.edu
specify
the new replica count when I add them? What about deleting the volume,
and recreating it and letting gluster heal/balance to the new "empty" set
of nodes?
Any insight, especially from experience, anyone has with either of
these will be a huge help!
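(The first option I'm describing is the add-brick form that takes a new replica
count; the numbers and brick names below are only an example, and I haven't
tried it on this volume:)

# e.g. going from replica 2 to replica 3: one new brick per existing replica set
gluster volume add-brick VOLNAME replica 3 newnode:/brick
# then let self-heal populate the new bricks
gluster volume heal VOLNAME full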
Thanks!
--
Matthew Nicholson
matthew_nichol...@harvard.edu
this happened.
>
> Thanks,
> Vijaykumar
>
>
> ----- Original Message -----
>> From: "Matthew Nicholson"
>> To: "Matthew Nicholson"
>> Cc: gluster-users@gluster.org
>> Sent: Saturday, January 5, 2013 12:48:33 AM
>> Subject: Re: [Gluster-u
/geo-replication/gstore/gluster%3A%2F%2F10.242.64.121%3Agstore-rep.log
gluster_params: xlator-option=*-dht.assert-no-child-down=true
Is there really no simple way to turn this off and start fresh?
On Fri, Jan 4, 2013 at 10:39 AM, Matthew Nicholson
wrote:
> Oh, furthermore, the slave as listed in the in
ng off
> geo-replication.indexing cannot be disabled while geo-replication sessions
> exist
> Set volume unsuccessful
>
> so, my question is, how do I stop these outright?
>
> I've been pinging IRC, but it's pretty dead in there, and attempts to
> subscribe to t
Set volume unsuccessful
so, my question is, how do I stop these outright?
I've been pinging IRC, but it's pretty dead in there, and attempts to
subscribe to this list have been failing too.
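(For concreteness, the commands involved; if I'm reading the log path in the
earlier message right, the master volume is gstore and the slave is
gluster://10.242.64.121:gstore-rep. The error above suggests the session has to
be stopped before indexing can be disabled:)

gluster volume geo-replication gstore gluster://10.242.64.121:gstore-rep stop
gluster volume set gstore geo-replication.indexing off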
--
Matthew Nicholson
matthew_nichol...@harvard.edu
Research Computing Specialist
FAS Research Computing
Har