Re: [Gluster-users] Fwd: vm paused unknown storage error one node out of 3 only

2018-06-08 Thread Dan Lavu
Krutika,

Are the following messages also normal?

[2018-06-07 06:36:22.008492] E [MSGID: 113020] [posix.c:1395:posix_mknod]
0-rhev_vms-posix: setting gfid on
/gluster/brick/rhev_vms/.shard/0ab3a16c-1d07-4153-8d01-b9b0ffd9d19b.16158
failed
[2018-06-07 06:36:22.319735] E [MSGID: 113020] [posix.c:1395:posix_mknod]
0-rhev_vms-posix: setting gfid on
/gluster/brick/rhev_vms/.shard/0ab3a16c-1d07-4153-8d01-b9b0ffd9d19b.16160
failed
[2018-06-07 06:36:24.711800] E [MSGID: 113002] [posix.c:267:posix_lookup]
0-rhev_vms-posix: buf->ia_gfid is null for
/gluster/brick/rhev_vms/.shard/0ab3a16c-1d07-4153-8d01-b9b0ffd9d19b.16177
[No data available]
[2018-06-07 06:36:24.711839] E [MSGID: 115050]
[server-rpc-fops.c:170:server_lookup_cbk] 0-rhev_vms-server: 32334131:
LOOKUP /.shard/0ab3a16c-1d07-4153-8d01-b9b0ffd9d19b.16177
(be318638-e8a0-4c6d-977d-7a937aa84806/0ab3a16c-1d07-4153-8d01-b9b0ffd9d19b.16177)
==> (No data available) [No data available]

If so, what do they mean?

Dan

On Tue, Aug 16, 2016 at 1:21 AM, Krutika Dhananjay 
wrote:

> Thanks, I just sent http://review.gluster.org/#/c/15161/1 to reduce the
> log-level to DEBUG. Let's see what the maintainers have to say. :)
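>
> (For context, such a demotion would presumably look something like the
> following inside posix_mknod() in posix.c -- a hedged sketch only, not
> the contents of the review linked above. It assumes the EEXIST case is
> the one being downgraded, and reuses the gf_msg()/P_MSG_MKNOD_FAILED
> logging that appears as MSGID 113022 in the brick logs below.)
>
>     if (op_ret == -1) {
>         op_errno = errno;
>         /* Sketch: a create/create race on the same shard is benign,
>          * so log EEXIST at DEBUG and keep real failures at ERROR. */
>         gf_msg (this->name,
>                 (op_errno == EEXIST) ? GF_LOG_DEBUG : GF_LOG_ERROR,
>                 op_errno, P_MSG_MKNOD_FAILED,
>                 "mknod on %s failed", real_path);
>         goto out;
>     }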
>
> -Krutika
>
> On Tue, Aug 16, 2016 at 5:50 AM, David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>> On Mon, Aug 15, 2016 at 6:24 PM, Krutika Dhananjay 
>> wrote:
>>
>>> No. The EEXIST errors are normal and can be ignored. This can happen
>>> when multiple threads try to create the same
>>> shard in parallel. Nothing wrong with that.
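>>>
>>> To make the race concrete, here is a minimal standalone C sketch (the
>>> demo path is hypothetical, standing in for a shard like
>>> .shard/<gfid>.584): two threads race to mknod() the same path, exactly
>>> one wins, and the loser gets the same benign [File exists] that shows
>>> up in the brick logs.
>>>
>>>     #include <errno.h>
>>>     #include <pthread.h>
>>>     #include <stdio.h>
>>>     #include <sys/stat.h>
>>>     #include <sys/types.h>
>>>     #include <unistd.h>
>>>
>>>     /* Hypothetical stand-in for a shard path on a brick */
>>>     static char path[] = "/tmp/demo-shard.584";
>>>
>>>     static void *create_shard(void *arg)
>>>     {
>>>         (void)arg;
>>>         if (mknod(path, S_IFREG | 0644, 0) == 0)
>>>             printf("created %s\n", path);
>>>         else if (errno == EEXIST)
>>>             printf("mknod on %s failed [File exists] -- benign\n", path);
>>>         else
>>>             perror("mknod");
>>>         return NULL;
>>>     }
>>>
>>>     int main(void)
>>>     {
>>>         pthread_t t1, t2;
>>>         pthread_create(&t1, NULL, create_shard, NULL);
>>>         pthread_create(&t2, NULL, create_shard, NULL);
>>>         pthread_join(t1, NULL);
>>>         pthread_join(t2, NULL);
>>>         unlink(path);
>>>         return 0;
>>>     }
>>>
>>> Build with gcc -pthread: whichever thread loses the race reports
>>> EEXIST, which is exactly the condition these E-level messages log.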
>>>
>>>
>> Other than that they pop up as E-level errors, making a user worry, hehe.
>>
>> Is there a known bug filed against that, or should I create one to see
>> if we can get that demoted to an informational level?
>>
>>
>>
>>> -Krutika
>>>
>>> On Tue, Aug 16, 2016 at 1:02 AM, David Gossage <
>>> dgoss...@carouselchecks.com> wrote:
>>>
 On Sat, Aug 13, 2016 at 6:37 AM, David Gossage <
 dgoss...@carouselchecks.com> wrote:

> Here is my reply again, just in case. I got a quarantine message, so I'm
> not sure if the first went through or will anytime soon. The brick logs
> weren't large, so I'll just include them as text files this time.
>

 Did maintenance over the weekend, updating oVirt from 3.6.6 -> 3.6.7, and
 after restarting the complaining oVirt node I was able to migrate the 2
 VMs with issues. So I'm not sure why the mount went stale, but I imagine
 that one node couldn't see the new image files after that had occurred?

 Still getting a few sporadic errors, but they seem much fewer than before,
 and I never get any corresponding notices in any other log files:

 [2016-08-15 13:40:31.510798] E [MSGID: 113022]
 [posix.c:1245:posix_mknod] 0-GLUSTER1-posix: mknod on
 /gluster1/BRICK1/1/.shard/0e5ad95d-722d-4374-88fb-66fca0b14341.584
 failed [File exists]
 [2016-08-15 13:40:31.522067] E [MSGID: 113022]
 [posix.c:1245:posix_mknod] 0-GLUSTER1-posix: mknod on
 /gluster1/BRICK1/1/.shard/0e5ad95d-722d-4374-88fb-66fca0b14341.584
 failed [File exists]
 [2016-08-15 17:47:06.375708] E [MSGID: 113022]
 [posix.c:1245:posix_mknod] 0-GLUSTER1-posix: mknod on
 /gluster1/BRICK1/1/.shard/d5a328be-03d0-42f7-a443-248290849e7d.722
 failed [File exists]
 [2016-08-15 17:47:26.435198] E [MSGID: 113022]
 [posix.c:1245:posix_mknod] 0-GLUSTER1-posix: mknod on
 /gluster1/BRICK1/1/.shard/d5a328be-03d0-42f7-a443-248290849e7d.723
 failed [File exists]
 [2016-08-15 17:47:06.405481] E [MSGID: 113022]
 [posix.c:1245:posix_mknod] 0-GLUSTER1-posix: mknod on
 /gluster1/BRICK1/1/.shard/d5a328be-03d0-42f7-a443-248290849e7d.722
 failed [File exists]
 [2016-08-15 17:47:26.464542] E [MSGID: 113022]
 [posix.c:1245:posix_mknod] 0-GLUSTER1-posix: mknod on
 /gluster1/BRICK1/1/.shard/d5a328be-03d0-42f7-a443-248290849e7d.723
 failed [File exists]
 [2016-08-15 18:46:47.187967] E [MSGID: 113022]
 [posix.c:1245:posix_mknod] 0-GLUSTER1-posix: mknod on
 /gluster1/BRICK1/1/.shard/f9a7f3c5-4c13-4020-b560-1f4f7b1e3c42.739
 failed [File exists]
 [2016-08-15 18:47:41.414312] E [MSGID: 113022]
 [posix.c:1245:posix_mknod] 0-GLUSTER1-posix: mknod on
 /gluster1/BRICK1/1/.shard/f9a7f3c5-4c13-4020-b560-1f4f7b1e3c42.779
 failed [File exists]
 [2016-08-15 18:47:41.450470] E [MSGID: 113022]
 [posix.c:1245:posix_mknod] 0-GLUSTER1-posix: mknod on
 /gluster1/BRICK1/1/.shard/f9a7f3c5-4c13-4020-b560-1f4f7b1e3c42.779
 failed [File exists]

 The attached file bricks.zip you sent to ; -us...@gluster.org> on
> 8/13/2016 7:17:35 AM was quarantined. As a safety precaution, the
> University of South Carolina quarantines .zip and .docm files sent via
> email. If this is a legitimate attachment, <kdhan...@redhat.com> may
> contact the Service Desk at 803-777-1800 (serviced...@sc.edu) and the
> attachment file will be released from quarantine and delivered.

[Gluster-users] 1 week to go on mountpoint CFP! June 15th close

2018-06-08 Thread Amye Scavarda
Hi all!
Our CFP for mountpoint closes June 15th; this is your one-week warning!
See https://mountpoint.io/ for more details!
- amye

-- 
Amye Scavarda | a...@redhat.com | Gluster Community Lead