all zeros) but the speed at which they transfer exceeds (by
quite a bit) the theoretical max of a 1 Gb network.
Does gluster (or anything else) do transparent compression? What else
would explain this oddity?
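A quick way to test that on the client, assuming the volume is
mounted at /mnt/gl (a made-up path for illustration): compare a
zero-filled file against an incompressible one, and check whether the
zeros are being stored sparse rather than compressed.
---
# 1 GB of zeros vs 1 GB of random (incompressible) data
dd if=/dev/zero    of=/tmp/zeros.dat  bs=1M count=1024
dd if=/dev/urandom of=/tmp/random.dat bs=1M count=1024

# time both copies onto the gluster mount; if only the zeros beat
# wire speed, something is short-circuiting the byte transfer
time cp /tmp/zeros.dat  /mnt/gl/
time cp /tmp/random.dat /mnt/gl/

# logical size vs blocks actually allocated: a large gap means
# sparse storage rather than compression
du -h --apparent-size /mnt/gl/zeros.dat
du -h                 /mnt/gl/zeros.dat
---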
--
Harry Mangalam - Research Computing, OIT, Rm 225 MSTB, UC Irvine
[ZOT 2225] / 92697 Google Voice Multiplexer: (949) 478-4487
MSTB Lat/Long: (33.642025,-117.844414) (paste into Google Maps)
thanks.
hjm
On Thursday 22 March 2012 14:05:25 Brian Candler wrote:
> On Thu, Mar 22, 2012 at 01:56:49PM -0700, Harry Mangalam wrote:
> >Previous email had a typo in Subject line.
>
> What do you mean by "destroying" the mountpoint?
>
> I have
next time.
hjm
On Thursday 22 March 2012 14:19:41 Haris Zukanovic wrote:
> I think I have seen these when the Gluster volume is stopped but
> still mounted.
>
> On 22/03/12 22.05, Brian Candler wrote:
> > On Thu, Mar 22, 2012 at 01:56:49PM -0700, Harry Mangalam wrote:
> >>
fect?
I assume not, but what could lead to it? Or rather, why should trying
to mount a gluster volume lead to that effect?
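If it helps narrow it down, this is roughly how I'd poke at the
aftermath of a failed mount (the mountpoint /mnt/g6 is hypothetical):
---
mount | grep gluster     # is a stale glusterfs mount still listed?
sudo umount -l /mnt/g6   # lazy-unmount whatever is left, if anything
ls -ld /mnt/g6           # does the underlying directory itself survive?
---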
hjm
On Thursday 22 March 2012 14:05:25 Brian Candler wrote:
> On Thu, Mar 22, 2012 at 01:56:49PM -0700, Harry Mangalam wrote:
> >Previous email had a typo in Subject line.
Previous email had a typo in Subject line.
--
Harry Mangalam - Research Computing, OIT, Rm 225 MSTB, UC Irvine
[ZOT 2225] / 92697 Google Voice Multiplexer: (949) 478-4487
415 South Circle View Dr, Irvine, CA, 92697 [shipping]
MSTB Lat/Long: (33.642025,-117.844414) (paste into Google Maps)
[cli-rpc-ops.c:606:gf_cli3_1_get_volume_cbk] 0-: Returning: 0
[2012-03-22 12:56:51.462537] I [cli-rpc-ops.c:413:gf_cli3_1_get_volume_cbk] 0-cli: Received resp to get vol: 0
[2012-03-22 12:56:51.462649] I [cli-rpc-ops.c:606:gf_cli3_1_get_volume_cbk] 0-: Returning: 0
[2012-03-22 12:56:51.462663] I [in
ess :)
Yes, as implied in the post - I'd be happy to.
> Have you tried out any of the new QA builds to test?
I'm running 3.3b1 and will be upgrading to 3.3b2 later this week
if there are no other disasters taking precedence. If there's a
place to get more recent versions
--
Harry Mangalam - Research Computing, OIT, Rm 225 MSTB, UC Irvine
[ZOT 2225] / 92697 Google Voice Multiplexer: (949) 478-4487
415 South Circle View Dr, Irvine, CA, 92697 [shipping]
> with which a remove-brick will take care of migrating data out of
> the brick.
>
> This feature is not part of any of current 3.2.x (or earlier)
> releases.
>
> If you are in testing/validating phase, 3.3.0qa15 should have this
> feature for you.
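If I read that right, the 3.3-style sequence would be something like
the following (volume/brick names are from my own transcript; I
haven't verified the exact syntax):
---
# start draining the brick; its data migrates to the remaining bricks
gluster volume remove-brick glrdma pbs2:/data2 start
# poll until the migration reports complete
gluster volume remove-brick glrdma pbs2:/data2 status
# then detach the now-empty brick from the volume
gluster volume remove-brick glrdma pbs2:/data2 commit
---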
--
Harry Mangalam - Research Computing, OIT, Rm 225 MSTB, UC Irvine
> Jeff White - Linux/Unix Systems Engineer
> University of Pittsburgh - CSSD
>
> On 12/15/2011 01:48 PM, Harry Mangalam wrote:
> > The use case is that we have a multiTB data partition that we
> > would like to glusterize. Could we add that store to a gluster
> > volume and have i
gluster to owners of
large existing data stores.
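The naive recipe would presumably be the sketch below (names are
hypothetical, and I have not tested whether pre-existing files get
picked up cleanly):
---
# create a single-brick volume directly on top of the existing store
gluster volume create bigvol pbs1:/multiTB/store
gluster volume start bigvol
mount -t glusterfs pbs1:/bigvol /mnt/bigvol
ls /mnt/bigvol   # do the pre-existing files show up through gluster?
---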
--
Harry Mangalam - Research Computing, OIT, Rm 225 MSTB, UC Irvine
[ZOT 2225] / 92697 Google Voice Multiplexer: (949) 478-4487
MSTB Lat/Long: (33.642025,-117.844414) (paste into Google Maps)
--
This signature has been OCCUPIED
11:19 AM, John Mark Walker wrote:
> > - Original Message -
> >
> >> Ah - Ok thanks, Jeff. I was looking for the Swift and REST API
> >> docs. I assumed there were API interfaces in 3.2.5. I'll go
> >> dig up some roadmap info.
> >
> > I
.702130] E [rdma.c:4417:tcp_connect_finish] 0-glrdma-client-5: tcp connect to failed (Connection refused)
cli.log has many of these lines:
[2011-12-13 10:34:55.142428] W [rpc-transport.c:606:rpc_transport_load] 0-rpc-transport: missing 'option transport-type'. defaulting to "socket"
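In case it matters, the transport actually configured can be read
straight out of the generated volfiles (the path below is the 3.2-era
default, if I remember right):
  grep -r 'transport-type' /etc/glusterd/vols/glrdma/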
---
Thu Dec 08 11:52:12 [0.00 0.01 0.00] root@pbs3:~
524 $ gluster volume replace-brick glrdma pbs2:/data2 pbs4:/data start
Brick: pbs4:/data already in use
---
Is there a process whereby I can clear a brick by forcing the files to
migrate to the other bricks?
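Assuming the complaint is only that pbs4:/data already belongs to the
volume, pointing replace-brick at a fresh directory (pbs4:/data-new
below is hypothetical) should at least get the migration started:
---
gluster volume replace-brick glrdma pbs2:/data2 pbs4:/data-new start
gluster volume replace-brick glrdma pbs2:/data2 pbs4:/data-new status
gluster volume replace-brick glrdma pbs2:/data2 pbs4:/data-new commit
---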
on, NZ
>
> 0064 4 463 6272
--
Harry Mangalam - Research Computing, OIT, Rm 225 MSTB, UC Irvine
[ZOT 2225] / 92697 Google Voice Multiplexer: (949) 478-4487
GUIDs: 00066a0098006e5f 00066a00a0006e5f 00066a0098006e5f
Board ID: j (MT_023002)
VSD: j
PSID: MT_023002
===
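For the record, that dump came from the usual firmware tools,
something along these lines (the mst device name is a guess):
---
ibstat                                    # port/firmware summary
flint -d /dev/mst/mt25204_pci_cr0 query   # board ID, PSID, GUIDs
---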
so with that, the
>
> Good Luck.
> Dan
>
> From: gluster-users-bounces@gluster.org
ly accepted.
Harry
--
Harry Mangalam - Research Computing, OIT, Rm 225 MSTB, UC Irvine
[ZOT 2225] / 92697 Google Voice Multiplexer: (949) 478-4487
MSTB Lat/Long: (33.642025,-117.844414) (paste into Google Maps)
--
This signature has been OCCUPIED!
a,ib_mthca
>
> ib_core 64935 6 rdma_cm,ib_cm,iw_cm,ib_sa,ib_mthca,ib_mad
>
> so the card seems to be recognized and the driver is loaded (google
> says this card is suboptimal for such an infrastructure but I
> inherited this testbed so will have to deal with it for now).
>
>
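Two quick sanity checks on the IB stack itself (standard tools,
nothing gluster-specific):
---
lsmod | egrep 'ib_|rdma'   # are the RDMA/IB modules actually loaded?
ibstat                     # port State should be Active, not Init/Down
---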
deal with it for now).
hjm
On Friday 04 November 2011 18:32:36 Joe Landman wrote:
> On 11/04/2011 09:06 PM, Harry Mangalam wrote:
> > OK - finished some tests over tcp and ironed out a lot of
> > problems. rdma is next; should be snap now
> >
> > [I must admit th
mount -t glusterfs pbs3:/glrdma -o transport=rdma /mnt
Usage: mount.glusterfs <volumeserver>:<volumeid/volumeport> -o <options> <mountpoint>
Options:
man 8 mount.glusterfs
To display the version number of the mount helper:
mount.glusterfs --version
==
So, what is the rdma magic that will let me do this?
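For the record, the two forms I've seen suggested (I haven't verified
which of them this release accepts) either put the -o before the
volume or tack an .rdma suffix onto the volume name:
---
mount -t glusterfs -o transport=rdma pbs3:/glrdma /mnt
# or, on releases that understand the suffix notation:
mount -t glusterfs pbs3:/glrdma.rdma /mnt
---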
hjm
On Tuesday 01 November 2011 16:53:56 Joe Landman wrote:
> On 11/01/2011 06:09 PM, Harry Mangalam wrote:
> > What else sets the authentication / permission correctly?
>
> gluster volume set g6 auth.allow 192.168.*,128.*
That doesn't cause an error and it does cause itself to be i
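One way to confirm the option actually registered is to re-read the
volume info:
  gluster volume info g6   # auth.allow should show under 'Options Reconfigured'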
rectly to each other with the IP# used for the mounts.
--
Harry Mangalam - Research Computing, OIT, Rm 225 MSTB, UC Irvine
[ZOT 2225] / 92697 Google Voice Multiplexer: (949) 478-4487
MSTB Lat/Long: (33.642025,-117.844414) (paste into Google Maps)
--
This signature has been OCCUPIED!
9] W [client-handshake.c:862:client_setvolume_cbk] 0-g6-client-1: failed to get 'process-uuid' from reply dict
What else sets the authentication / permission correctly?
--
Harry Mangalam - Research Computing, OIT, Rm 225 MSTB, UC Irvine
[ZOT 2225] / 92697 Google Voice Multiplexer: (949) 478-4487
d the
same way as a successful mount and showed up as one in a 'df'
listing. A user would only know it had failed when he tried to write,
or when he compared the df value against that of a working mount.
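Something like this is probably worth scripting into the mount
procedure so it fails loudly (mountpoint name hypothetical):
---
mountpoint -q /mnt/g6 || echo "/mnt/g6 is NOT mounted"
touch /mnt/g6/.wtest && rm /mnt/g6/.wtest || echo "/mnt/g6 is broken"
---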
So, onwards!
Harry
On Friday 28 October 2011 14:00:54 Harry Mangalam wrote:
The nodes that are having the problem have never had a gluster fs
mounted on them before.
--
Harry Mangalam - Research Computing, OIT, Rm 225 MSTB, UC Irvine
[ZOT 2225] / 92697 Google Voice Multiplexer: (949) 478-4487
MSTB Lat/Long: (33.642025,-117.844414) (paste into Google Maps)
h would hash to that brick (effectively a
> random 1/N) also fail. That part, at least, is fixable. With
> replication, the single-brick failure would effectively be
> invisible to the distribution layer so even this glitch wouldn't
> occur.
--
Harry Mangalam - Research Computing, OIT, Rm 225 MSTB, UC Irvine
en't tested this).
- can you intermix distributed and mirrored volumes? This is of
particular interest since some of our users want to have replicated
data and some don't care.
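To make the question concrete, this is the sort of mixed setup I have
in mind (all names hypothetical):
---
# distributed-replicated: files spread across mirrored brick pairs
gluster volume create repvol replica 2 \
    srv1:/brick1 srv2:/brick1 srv1:/brick2 srv2:/brick2
# plus a plain distributed volume for the users who don't care
gluster volume create distvol srv3:/brick1 srv4:/brick1
---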
Many thanks
hjm
--
Harry Mangalam - Research Computing, OIT, Rm 225 MSTB, UC Irvine
[ZOT 2225] / 92697 Google Voice Multiplexer: (949) 478-4487