Hi,
It can be that the LIO service starts before /mnt gets mounted. In the absence
of the backend file, LIO created a new one on the root filesystem (in the /mnt
directory). Then the gluster volume was mounted over it, but as the backend
file was kept open by LIO, it was still being used instead of the right one on
the gluster volume.
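One way to avoid that race, assuming LIO is restored by systemd's target.service and the gluster volume is mounted at /mnt (adjust both names to the actual setup), is a drop-in that orders the service after the mount:

# mkdir -p /etc/systemd/system/target.service.d
# cat > /etc/systemd/system/target.service.d/wait-for-gluster.conf <<'EOF'
[Unit]
RequiresMountsFor=/mnt
EOF
# systemctl daemon-reload

RequiresMountsFor= pulls in and orders the service after the mount unit for that path, so LIO only starts once the gluster mount is actually in place.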
If it's writing to the root partition then the mount went away. Any
clues in the gluster client log?
After Node 1 goes DOWN, LIO on Node 2 (the iSCSI target) is no longer writing
to the local Gluster mount, but to the root partition, even though "df -h"
shows the Gluster brick mounted:
/dev/mapper/centos-root  3,1G  3,1G   20K 100% /
...
/dev/xvdb 61G 61G 956M 99% /bricks/brick1
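A quick way to confirm where the backstore file really lives (the path below is only an example) is to ask which mount backs it, instead of trusting the mount table alone:

# findmnt -T /mnt/iscsi/lun0.img
# df -h /mnt/iscsi/lun0.img

If these report / (or /dev/mapper/centos-root) rather than the gluster mount, LIO is holding a file on the root filesystem that has since been shadowed by the mount; a bind mount of / (e.g. mount --bind / /tmp/root) lets you see what is hidden underneath the mountpoint.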
Yes, I only did it when the previous heal info result showed "Number
of entries: 0". But same result: as soon as the second node goes
offline (after they were both working/back online), everything is
corrupted.
To recap:
* Node 1 UP Node 2 UP -> OK
* Node 1 UP Node 2 DOWN -> OK (just a small lag)
Okay, got it attached :)
Assuming you're using FUSE, if your gluster volume is mounted at /some/dir,
for example, then its corresponding logs will be at
/var/log/glusterfs/some-dir.log.
-Krutika
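As an illustration (the mountpoint here is made up), a volume mounted with

# mount -t glusterfs 10.0.0.1:/gv0 /mnt/gv0

logs to /var/log/glusterfs/mnt-gv0.log, i.e. the mountpoint path with the slashes replaced by dashes.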
Hi David,
What are the exact commands to be sure it's fine?
Right now I got:
# gluster volume heal gv0 info
Brick 10.0.0.1:/bricks/brick1/gv0
Status: Connected
Number of entries: 0
Brick 10.0.0.2:/bricks/brick1/gv0
Status: Connected
Number of entries: 0
Brick 10.0.0.3:/bricks/brick1/gv0
Status: Connected
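For the question above, a reasonable checklist before taking a node down (volume name as in this thread) would be along these lines:

# gluster volume heal gv0 info
# gluster volume heal gv0 info split-brain
# gluster volume status gv0
# gluster peer status

i.e. every brick shows "Number of entries: 0", nothing is listed as in split-brain, all brick processes are online, and all peers are in "Peer in Cluster (Connected)" state.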
Attached are the brick logs. Where can I find the fuse client log?
Could you attach the fuse client and brick logs?
-Krutika
Okay, I used the exact same config you provided, and added an arbiter
node (node3).
After halting node2, the VM continues to work after a small "lag"/freeze.
I restarted node2 and it was back online: OK
Then, after waiting a few minutes, I halted node1. And **just** at this
moment, the VM is corrupted (se
It's planned to have an arbiter soon :) It was just preliminary tests.
Thanks for the settings, I'll test this soon and I'll come back to you!
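The settings themselves are not part of this excerpt. For reference only, the options usually recommended for VM images on a replicated volume (gluster's "virt" profile) look roughly like:

# gluster volume set gv0 group virt

or, individually:

# gluster volume set gv0 cluster.quorum-type auto
# gluster volume set gv0 cluster.server-quorum-type server
# gluster volume set gv0 network.remote-dio enable
# gluster volume set gv0 performance.quick-read off
# gluster volume set gv0 performance.io-cache off
# gluster volume set gv0 performance.stat-prefetch off
# gluster volume set gv0 features.shard on

The quorum options are the ones that control whether writes are still accepted while a replica is down.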
Sure:
# gluster volume info gv0
Volume Name: gv0
Type: Replicate
Volume ID: 2f8658ed-0d9d-4a6f-a00b-96e9d3470b53
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.0.0.1:/bricks/brick1/gv0
Brick2: 10.0.0.2:/bricks/brick1/gv0
Options Reconfigured:
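For the arbiter mentioned earlier, converting this 1 x 2 volume once node3 is peered would look roughly like this (the brick path is taken from the heal info output above):

# gluster peer probe 10.0.0.3
# gluster volume add-brick gv0 replica 3 arbiter 1 10.0.0.3:/bricks/brick1/gv0

After that, volume info reports "Number of Bricks: 1 x (2 + 1) = 3", with the third brick storing only metadata and acting as a tie-breaker.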
On 18/11/2016 6:00 AM, Olivier Lambert wrote:
First off, thanks for this great product:)
I have a corruption issue when using Glusterfs with LIO iSCSI target:
Could you post the results of:
gluster volume info
gluster volume status
thanks
--
Lindsay Mathieson