On 07.07.2016 10:32, Lindsay Mathieson wrote:
> Is there an ETA for 3.7.13?
If the gfapi bug is fixed, then 3.7.13 should be released, or at least
new packages of 3.7.12 provided with the patch, I guess.
Right?
On 7 July 2016 at 16:19, Krutika Dhananjay wrote:
> So if 3.7.12 works well for you with FUSE, sticking to 3.7.12 would be
> a safer bet until 3.7.13 is released.
I didn't test comprehensively - was in too much of a hurry, but I had a
couple of VMs that refused to work on FUSE due to access issues
Hmm... but in any case, a lot of VM pause issues were uncovered in 3.7.11
and fixed in 3.7.12. Worse yet, none of them had workarounds. So if 3.7.12
works well for you with FUSE, sticking to 3.7.12 would be a safer bet
until 3.7.13 is released.
-Krutika
On Thu, Jul 7, 2016 at 11:21 AM, Lindsay Mathieson wrote:
On 7 July 2016 at 15:42, Krutika Dhananjay wrote:
> could you please share the glusterfs client logs?
Alas, with qemu/libgfapi there aren't any client logs :(
I'll shut down the VM and restart it from the cmd line tonight with a
stdout redirect, then wait for it to freeze again. Might take a while.
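For reference, a rough sketch of that kind of invocation (the VM, volume
and path names are illustrative, not from the thread); the gfapi driver's
messages go to stderr, so redirecting both streams should capture them:

  # start the guest directly against the gluster volume, logging all output
  qemu-system-x86_64 -enable-kvm -m 4096 \
    -drive file=gluster://gluster1/datastore/images/vm1.qcow2,if=virtio \
    > /var/log/vm1-gfapi.log 2>&1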
Hi,
Please pass on the rebalance log from the 1st server for more analysis; it
can be found under /var/log/glusterfs/$VOL-rebalance.log.
We also need the current layout xattrs from both bricks, which can be
extracted with the following command:
getfattr -m . -de hex <$BRICK_PATH>
Yes, could you please share the glusterfs client logs?
-Krutika
On Thu, Jul 7, 2016 at 5:12 AM, Lindsay Mathieson
<lindsay.mathie...@gmail.com> wrote:
> Becoming a serious problem. Since my misadventure with 3.7.12 and
> downgrading back to 3.7.11 I have daily freezes of VMs where they *appear*
Becoming a serious problem. Since my misadventure with 3.7.12 and
downgrading back to 3.7.11 I have daily freezes of VMs where they
*appear* to be unable to write to disk. It seems to be localised to
just a few VMs, one of which unfortunately is our AD server. The only
fix is a hard reset of the VM.
As some of you might already have noticed, GlusterD has been notably
insecure ever since it was written. Unlike our I/O path, which does check
access control on each request, anyone who can craft a CLI RPC request and
send it to GlusterD's well-known TCP port can do anything that the CLI
itself can do.
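(Not part of the original post, but as a stopgap you can at least firewall
the management port, 24007/tcp, so only trusted peers reach GlusterD; the
subnet below is illustrative:)

  # allow the trusted storage network, drop everyone else
  iptables -A INPUT -p tcp --dport 24007 -s 10.0.0.0/24 -j ACCEPT
  iptables -A INPUT -p tcp --dport 24007 -j DROP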
Hi All,
I am trying to do some gluster testing for my customer.
I am experiencing the same issue as described here:
http://serverfault.com/questions/782602/glusterfs-rebalancing-volume-failed
Except: I have a distributed-dispersed volume,
and I only let the fix-layout run for a few hours before
Let's assume a distributed replicated (replica 3) volume with sharding
enabled. Can I add one node at a time, or do I have to add 3 nodes every
time?
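For a replica 3 volume, bricks have to be added in complete replica sets,
i.e. in multiples of three; a sketch with illustrative host and brick
names:

  # adds one new 3-way replica set to the existing volume
  gluster volume add-brick myvol \
    node4:/data/brick node5:/data/brick node6:/data/brick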
Hi everyone,
I am trying to configure a dispersed volume following this documentation
page:
http://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/#creating-dispersed-volumes
I have a set of 8 storage nodes, and I want the erasure coding settings
to be (k=4)+(r=
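A sketch of the general command shape; the redundancy value (r=2) and the
hostnames below are assumptions for illustration, not the poster's actual
settings (gluster requires the brick count to exceed twice the redundancy):

  gluster volume create ecvol disperse-data 4 redundancy 2 \
    node1:/bricks/ec node2:/bricks/ec node3:/bricks/ec \
    node4:/bricks/ec node5:/bricks/ec node6:/bricks/ec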
Today's meeting didn't go according to the agenda, as we initially had
low attendance. Attendance overall was low as well owing to a holiday
in Bangalore.
The minutes and logs for the meeting are available at the links below:
Minutes:
https://meetbot.fedoraproject.org/gluster-meeting/2016
Hi all,
I'm doing some testing with glusterfs in a virtualized environment,
running a 3 x (8 + 4) distributed-dispersed volume simulating a 3-node
cluster with 12 drives per node. The system versions are:
OS: Debian jessie, kernel 3.16
Gluster: 3.8.0-2, installed from the gluster repositories
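For reference, a sketch of how such a volume might be created (hostnames
and brick paths are illustrative); the bricks are ordered so that each
12-brick disperse set gets 4 bricks per node, keeping a whole-node failure
within the redundancy of 4:

  # build the brick list interleaved across the three nodes
  bricks=""
  for d in {1..12}; do
    for n in 1 2 3; do
      bricks="$bricks node$n:/bricks/disk$d/brick"
    done
  done
  # gluster warns when a disperse set has multiple bricks on one host,
  # so this needs confirmation (or force)
  gluster volume create testvol disperse-data 8 redundancy 4 $bricks force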
On Wed, Jul 6, 2016 at 12:24 AM, Shyam wrote:
> On 07/01/2016 01:45 AM, B.K.Raghuram wrote:
>
>> I have not gone through this implementation nor the new iscsi
>> implementation being worked on for 3.9 but I thought I'd share the
>> design behind a distributed iscsi implementation that we'd worked on.