On Thu, Jul 6, 2017 at 3:47 AM, Gianluca Cecchi wrote:
> On Wed, Jul 5, 2017 at 6:39 PM, Atin Mukherjee wrote:
>
>> OK, so the log just hints to the following:
>>
>> [2017-07-05 15:04:07.178204] E [MSGID: 106123]
>> [glusterd-mgmt.c:1532:glusterd_mgmt_v3_commit] 0-management: Commit
>> failed
On Wednesday, July 5, 2017, Gandalf Corvotempesta <gandalf.corvotempe...@gmail.com> wrote:
> On 5 Jul 2017 at 11:31 AM, "Kaushal M" wrote:
>
> - Preliminary support for volume expansion has been added. (Note that
> rebalancing is not available yet)
>
>
> What do you mean with this?
> Any dif
What if you disabled eager lock and ran your test again on the sharded
configuration, and shared the profile output?
# gluster volume set <volname> cluster.eager-lock off
-Krutika
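For reference, the suggested test loop can be sketched as follows (the volume name `myvol` is hypothetical; substitute your sharded volume):

```shell
# Disable eager locking on the sharded volume (volume name is hypothetical)
gluster volume set myvol cluster.eager-lock off

# Start collecting per-brick latency/FOP statistics
gluster volume profile myvol start

# ... re-run the workload under test here ...

# Dump the accumulated profile, then stop collection
gluster volume profile myvol info
gluster volume profile myvol stop
```

The `profile info` output gives per-FOP latency and call counts per brick, which is what was asked for alongside the eager-lock change.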
On Tue, Jul 4, 2017 at 9:03 PM, Krutika Dhananjay wrote:
> Thanks. I think reusing the same volume was the cause of lack of
On Wed, Jul 5, 2017 at 6:39 PM, Atin Mukherjee wrote:
> OK, so the log just hints to the following:
>
> [2017-07-05 15:04:07.178204] E [MSGID: 106123]
> [glusterd-mgmt.c:1532:glusterd_mgmt_v3_commit]
> 0-management: Commit failed for operation Reset Brick on local node
> [2017-07-05 15:04:07.178
OK, so the log just hints to the following:
[2017-07-05 15:04:07.178204] E [MSGID: 106123]
[glusterd-mgmt.c:1532:glusterd_mgmt_v3_commit] 0-management: Commit failed
for operation Reset Brick on local node
[2017-07-05 15:04:07.178214] E [MSGID: 106123]
[glusterd-replace-brick.c:649:glusterd_mgmt_v
On Wed, Jul 5, 2017 at 5:22 PM, Atin Mukherjee wrote:
> And what does glusterd log indicate for these failures?
>
See here in gzip format
https://drive.google.com/file/d/0BwoPbcrMv8mvYmlRLUgyV0pFN0k/view?usp=sharing
It seems that on each host the peer files have been updated with a new entry
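For reference, the peer entries glusterd keeps can be inspected directly; a minimal sketch (the paths are the glusterd defaults, the UUID filename is a placeholder):

```shell
# Each known peer is stored as one file named by that peer's UUID
ls /var/lib/glusterd/peers

# A peer file records the peer's UUID, state, and known hostnames/addresses
cat /var/lib/glusterd/peers/<peer-uuid>

# Cross-check against what glusterd currently reports
gluster peer status
```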
Hi Soumya,
(1) In http://mseas.mit.edu/download/phaley/GlusterUsers/TestNFSmount/
I've placed the following 2 log files:
etc-glusterfs-glusterd.vol.log
gdata.log
The first has repeated messages about NFS disconnects. The second had
the .log name (but not much information).
(2) About the g
And what does glusterd log indicate for these failures?
On Wed, Jul 5, 2017 at 8:43 PM, Gianluca Cecchi wrote:
>
>
> On Wed, Jul 5, 2017 at 5:02 PM, Sahina Bose wrote:
>
>>
>>
>> On Wed, Jul 5, 2017 at 8:16 PM, Gianluca Cecchi <gianluca.cec...@gmail.com> wrote:
>>
>>>
>>>
>>> On Wed, Jul 5,
On Wed, Jul 5, 2017 at 5:02 PM, Sahina Bose wrote:
>
>
> On Wed, Jul 5, 2017 at 8:16 PM, Gianluca Cecchi wrote:
>
>>
>>
>> On Wed, Jul 5, 2017 at 7:42 AM, Sahina Bose wrote:
>>
>>>
>>>
...
then the commands I need to run would be:
gluster volume reset-brick export
>>
On Wed, Jul 5, 2017 at 8:32 PM, Sahina Bose wrote:
>
>
> On Wed, Jul 5, 2017 at 8:16 PM, Gianluca Cecchi wrote:
>
>>
>>
>> On Wed, Jul 5, 2017 at 7:42 AM, Sahina Bose wrote:
>>
>>>
>>>
...
then the commands I need to run would be:
gluster volume reset-brick export
>>
Hi everyone,
I have a question about using encryption in GlusterFS.
1. Created a file (file size is smaller than 1 KB) at the volume's mount point.
2. Read the file back, and got garbled content.
I found that the content I got came from cache and no decryption was performed on
the file content, so the garbled content
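If cached (still-encrypted) data is the suspect, the usual advice for the encryption translator is to turn off the client-side performance translators that can serve reads from cache; a sketch, assuming a volume named `encvol`:

```shell
# These performance translators can return cached data without going
# through the decryption path; disabling them is commonly recommended
# when features.encryption is enabled (volume name is hypothetical)
gluster volume set encvol performance.quick-read off
gluster volume set encvol performance.write-behind off
gluster volume set encvol performance.open-behind off
```

After changing these, remount the client and retry the read to rule out already-cached pages.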
On Wed, Jul 5, 2017 at 8:16 PM, Gianluca Cecchi wrote:
>
>
> On Wed, Jul 5, 2017 at 7:42 AM, Sahina Bose wrote:
>
>>
>>
>>> ...
>>>
>>> then the commands I need to run would be:
>>>
>>> gluster volume reset-brick export
>>> ovirt01.localdomain.local:/gluster/brick3/export
>>> start
>>> gluster
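For context, `reset-brick` is a two-step operation (start, then commit); a sketch of the full sequence using the brick path quoted above, assuming the same host:path is reused on commit, which is the typical case when only resetting the brick in place:

```shell
# Step 1: take the brick offline so it can be reconfigured
gluster volume reset-brick export \
    ovirt01.localdomain.local:/gluster/brick3/export start

# Step 2: bring the brick back (here with the same host:path) and force the commit
gluster volume reset-brick export \
    ovirt01.localdomain.local:/gluster/brick3/export \
    ovirt01.localdomain.local:/gluster/brick3/export commit force
```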
You are confusing volume with brick.
You do not have a "Replicate Brick"; you have one 1x3 volume, composed of 3
bricks, and one 1x2 volume made up of 2 bricks. You need to understand
the difference between a volume and a brick.
Also, you need to be aware of the differences between server quorum and
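To make the distinction concrete, a hedged sketch (hostnames and paths are hypothetical): a volume is the logical unit clients mount; bricks are the per-server directories it is built from:

```shell
# One 1x3 replicated VOLUME built from three BRICKS, one per server
gluster volume create vol_replica3 replica 3 \
    host1:/gluster/brick1/data \
    host2:/gluster/brick1/data \
    host3:/gluster/brick1/data

# 'volume info' lists each volume together with its constituent bricks
gluster volume info vol_replica3
```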
On 5 Jul 2017 at 11:31 AM, "Kaushal M" wrote:
- Preliminary support for volume expansion has been added. (Note that
rebalancing is not available yet)
What do you mean with this?
Any differences in volume expansion from the current architecture?
After nearly 3 months, we have another preview release for GlusterD-2.0.
The highlights for this release are:
- GD2 now uses an auto scaling etcd cluster, which automatically
selects and maintains the required number of etcd servers in the
cluster.
- Preliminary support for volume expansion has be