[Gluster-users] Troubles with sharding

2020-01-29 Thread Stefan
Hi, 

I am having a lot of trouble with sharding. First, all the shards are stored in 
the same directory, which can make GlusterFS quite slow if it sits on a large HDD 
and the shard size is small (e.g. 64MB). A single large file can easily lead to 
>100k shard files, which are slow to run stats on, so GlusterFS becomes very 
slow to operate. 

Second, the size reported on the FUSE mount is often wrong, for example here: 
=== 
# ls -la 
total 9223370277421359838 
drwx------ 1 root root 196 Jan 29 19:37 . 
drwx------ 1 root root 278 Jan 29 19:37 .. 
-rw-r----- 1 root root 1075 Jan 29 19:37 coverblacklist.frm 
-rw-r----- 1 root root 589824 Jan 29 14:20 coverblacklist.ibd 
-rw-r----- 1 root root 1564 Jan 29 19:37 cover.frm 
-rw-r----- 1 root root 879956852736 Jan 29 19:36 cover.ibd 
-rw-r----- 1 root root 1869 Jan 29 19:37 coverproperties.frm 
-rw-r----- 1 root root 536870912 Jan 29 14:21 coverproperties.ibd 
-rw-r----- 1 root root 61 Jan 29 19:37 db.opt 
=== 
Note the total size (about 8 EiB?), and that coverproperties.ibd is reported 
as 512MB when it is actually 2.89GiB; I verified that with pv: 

# pv < coverproperties.ibd > /dev/null 
2,89GiB 0:00:36 [80,3MiB/s] [=================================>] 577%

What could be the problem here, and what is the solution? 

Thanks, 

Stefan


Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] interpreting heal info and reported entries

2020-01-29 Thread Ravishankar N



On 30/01/20 11:41 am, Ravishankar N wrote:
I think for some reason setting of AFR xattrs on the parent dir did 
not happen, which is why the files are stuck in split-brain (instead 
of getting recreated on repo2 using the files from repo0 or 1). 


Can you provide the getfattr output of the parent dir of one of the 
.meta files? Maybe 
'/372501f5-062c-4790-afdb-dd7e761828ac/images/968daf61-6858-454a-9ed4-3d3db2ae1805'. 
Please share it from all three bricks.


Also, after you brought repo2 back up, can you check the glustershd.log 
of repo0 and repo1 to see if they contain entry-selfheal messages for 
the directory, something like:


"[2020-01-27 08:53:47.456205] I [MSGID: 108026] 
[afr-self-heal-common.c:1750:afr_log_selfheal] 0-engine-replicate-0: 
Completed entry selfheal on <$gfid-of-directory>. sources=[0] 1  sinks=2"


The timestamps should be after repo2 came back online after the upgrade.

-Ravi





Re: [Gluster-users] interpreting heal info and reported entries

2020-01-29 Thread Ravishankar N


On 29/01/20 9:56 pm, Cox, Jason wrote:


I have glusterfs (v6.6) deployed with 3-way replication used by ovirt 
(v4.3).


I recently updated 1 of the nodes (now at gluster v6.7) and rebooted. 
When it came back online, glusterfs reported there were entries to be 
healed under the 2 nodes that had stayed online.


After 2+ days, the 2 nodes still show entries that need healing, so 
I’m trying to determine what the issue is.


The files shown in the heal info output are small so healing should 
not take long. Also, 'gluster v heal <volname>' and 'gluster v heal 
<volname> full' both return successful, but the entries persist.


So first off, I’m a little confused by what 'gluster volume heal <volname> 
info' is reporting.


The following is what I see from heal info:

# gluster v heal engine info

Brick repo0:/gluster_bricks/engine/engine

/372501f5-062c-4790-afdb-dd7e761828ac/images/968daf61-6858-454a-9ed4-3d3db2ae1805/4317dd0d-fd35-4176-9353-7ff69e3a8dc3.meta 



/372501f5-062c-4790-afdb-dd7e761828ac/images/4e3e8ca5-0edf-42ae-ac7b-e9a51ad85922/ceb42742-eaaa-4867-aa54-da525629aae4.meta 



Status: Connected

Number of entries: 2

Brick repo1:/gluster_bricks/engine/engine

/372501f5-062c-4790-afdb-dd7e761828ac/images/968daf61-6858-454a-9ed4-3d3db2ae1805/4317dd0d-fd35-4176-9353-7ff69e3a8dc3.meta 



/372501f5-062c-4790-afdb-dd7e761828ac/images/4e3e8ca5-0edf-42ae-ac7b-e9a51ad85922/ceb42742-eaaa-4867-aa54-da525629aae4.meta 



Status: Connected

Number of entries: 2

Brick repo2:/gluster_bricks/engine/engine

Status: Connected

Number of entries: 0

Repo0 and repo1 were not rebooted, but repo2 was.

Since repo2 went offline I would expect when it came back online it 
would have entries that need healing, but based on the heal info 
output that’s not what it looks like, so I’m thinking maybe heal info 
isn’t reporting what I think it is reporting.


*When 'gluster volume heal <volname> info' reports entries as above, what is 
it saying?


In heal info output, it is usually the nodes that stayed up that display 
the list of files that need heal. So the way to interpret it is: while 
repo2 was down, repo0 and repo1 witnessed some modification to the files 
and therefore recorded them as needing heal; that list is what the CLI 
displays.
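That accounting can be sketched mechanically. Each brick keeps a trusted.afr.<volume>-client-<N> xattr counting operations it witnessed that brick N may have missed; a brick blamed by the others while blaming nobody itself is the sink self-heal must rewrite. A deliberately simplified illustration (a hypothetical helper, not AFR's actual code; real AFR also consults the dirty xattr and treats mutual accusation as split-brain):

```python
def find_sinks(blame):
    """blame[i][j] is nonzero if brick i accuses brick j of pending
    operations (taken from brick i's trusted.afr.*-client-j xattr).

    Returns bricks accused by someone else while accusing no one
    themselves - the copies that self-heal must rewrite."""
    n = len(blame)
    accused = {j for i in range(n) for j in range(n) if i != j and blame[i][j]}
    accusers = {i for i in range(n) for j in range(n) if i != j and blame[i][j]}
    return sorted(accused - accusers)

# repo0 and repo1 (indices 0, 1) both blame repo2 (index 2), which
# blames nobody - so repo2 is the heal sink, even though the file
# list is displayed under repo0 and repo1:
print(find_sinks([[0, 0, 2],
                  [0, 0, 2],
                  [0, 0, 0]]))  # [2]
```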


From the above output, I was reading it as repo0 has 2 entries that 
need to be healed from the other bricks and repo1 has 2 entries that 
need healing from the other bricks. However, that doesn’t make sense 
since repo2 was the one that was rebooted and a ‘stat’ on the files in 
the bricks show repo2 is the older version (checksums also show repo0 
and repo1 match).  Trying to access the file through the FUSE mount on 
any node gives input/output errors.


Getfattr output:

repo0 glusterfs]# getfattr -d -m. -e hex 
/gluster_bricks/engine/engine/372501f5-062c-4790-afdb-dd7e761828ac/images/4e3e8ca5-0edf-42ae-ac7b-e9a51ad85922/ceb42742-eaaa-4867-aa54-da525629aae4.meta


getfattr: Removing leading '/' from absolute path names

# file: 
gluster_bricks/engine/engine/372501f5-062c-4790-afdb-dd7e761828ac/images/4e3e8ca5-0edf-42ae-ac7b-e9a51ad85922/ceb42742-eaaa-4867-aa54-da525629aae4.meta


security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000

trusted.afr.dirty=0x

trusted.afr.engine-client-2=0x00020002

trusted.bit-rot.signature=0x0102009338ff61a57fcb452b92ae816b8e5ff672be6d340e7da0a0dcfa34e26b26933b

trusted.bit-rot.version=0x02005e09d54f000ef84f

trusted.gfid=0xb85edc187d594872a594c25419154d05

trusted.gfid2path.ff2d749198341aff=0x32393564303861372d386437352d343638392d393239332d3339336434346362656233342f63656234323734322d656161612d343836372d616135342d6461353235363239616165342e6d657461

trusted.glusterfs.mdata=0x015e2f88ce2d1fe6135e2f88ce2d10ae535e2f88ce2d067b3b

trusted.glusterfs.shard.block-size=0x0400

trusted.glusterfs.shard.file-size=0x01ad0001

trusted.pgfid.295d08a7-8d75-4689-9293-393d44cbeb34=0x0001

repo1 glusterfs]# getfattr -d -m. -e hex 
/gluster_bricks/engine/engine/372501f5-062c-4790-afdb-dd7e761828ac/images/4e3e8ca5-0edf-42ae-ac7b-e9a51ad85922/ceb42742-eaaa-4867-aa54-da525629aae4.meta


getfattr: Removing leading '/' from absolute path names

# file: 
gluster_bricks/engine/engine/372501f5-062c-4790-afdb-dd7e761828ac/images/4e3e8ca5-0edf-42ae-ac7b-e9a51ad85922/ceb42742-eaaa-4867-aa54-da525629aae4.meta


security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000

trusted.afr.dirty=0x

trusted.afr.engine-client-2=0x00020002

trusted.bit-rot.signature=0x0102009338ff61a57fcb452b92ae816b8e5ff672be6d340e7da0a0dcfa34e26b26933b

trusted.bit-rot.version=0x02005e09db58709b

trusted.gfid=0xb85edc187d594872a594c25419154d05


Re: [Gluster-users] Free space reported not consistent with bricks

2020-01-29 Thread Aravinda VK
The Gluster mount shows: actual free size minus the reserve size.

See the issue related to reserve size: 
https://github.com/gluster/glusterfs/issues/236
Blog related to the same topic: 
https://aravindavk.in/blog/gluster-volume-utilization-multiple-approaches
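As a rough illustration of that formula (hypothetical helper; 1% is the default storage.reserve, and on a replicated volume the client also reports against the whole replica set, so the client number need not match any single brick exactly):

```python
def reported_free_mib(brick_free_mib, brick_size_mib, reserve_pct=1):
    """Approximate free space a Gluster mount reports for one brick:
    the brick's free space minus the storage.reserve slice (default 1%)
    that GlusterFS keeps back so bricks never fill up completely."""
    reserve = brick_size_mib * reserve_pct // 100
    return max(0, brick_free_mib - reserve)

# Using host-01's df -m numbers from the question below (MiB):
print(reported_free_mib(2722008, 7630880))  # 2645700
```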

—
Aravinda Vishwanathapura
https://kadalu.io 


> On 27-Jan-2020, at 1:20 PM, Stefan  wrote:
> 
> Hi,
> 
> why is it that the free space on the volume mount (via FUSE) is never 
> equivalent to any of the bricks' free space in a replicated volume?
> For example here:
> $ df -m on host-01:
> > /dev/mapper/gluster-brick01   7630880 4890922   2722008  65% 
> > /data/glusterfs/backupv01/brick01
> $ df -m on host-06:
> > /dev/mapper/gluster-brick06   7630880 4717589   2897753  62% 
> > /data/glusterfs/backupv01/brick06
> $ df -m on client:
> > host-01:backupv01 7630880 4966615   2664266  66% /mnt/backup01
> 
> Stefan
> 





[Gluster-users] NFS clients show missing files while gluster volume rebalanced

2020-01-29 Thread Erik Jacobson
We are using gluster 4.1.6. We are using gluster NFS (not ganesha).

Distributed/replicated with subvolume size 3 (6 total servers, 2
subvols).

The NFS clients use this for their root filesystem.

When I add 3 more gluster servers to add one more subvolume to the
storage volume (so now subvolume size 3, 9 total servers, 3 total
subvolumes), I start the process:

ssh leader1 gluster volume add-brick cm_shared 
172.23.0.9://data/brick_cm_shared 172.23.0.10://data/brick_cm_shared 
172.23.0.11://data/brick_cm_shared

then

ssh leader1 gluster volume rebalance cm_shared start

The re-balance works. 'gluster volume status' shows re-balance in
progress.

However, existing gluster-NFS clients now show missing files and I can
no longer log into them (since NFS is their root). If you are already logged
in, you find that libraries are missing and there is general unhappiness,
with random files now gone.

Is accessing a volume that is in the process of being re-balanced not
supported from a gluster NFS client? Or have I made an error?

Thank you for any help,

Erik




[Gluster-users] interpreting heal info and reported entries

2020-01-29 Thread Cox, Jason

I have glusterfs (v6.6) deployed with 3-way replication used by ovirt (v4.3).
I recently updated 1 of the nodes (now at gluster v6.7) and rebooted. When it 
came back online, glusterfs reported there were entries to be healed under the 
2 nodes that had stayed online.
After 2+ days, the 2 nodes still show entries that need healing, so I'm trying 
to determine what the issue is.
The files shown in the heal info output are small so healing should not take 
long. Also, 'gluster v heal <volname>' and 'gluster v heal <volname> full' both 
return successful, but the entries persist.


So first off, I'm a little confused by what 'gluster volume heal <volname> info' 
is reporting.
The following is what I see from heal info:

# gluster v heal engine info
Brick repo0:/gluster_bricks/engine/engine
/372501f5-062c-4790-afdb-dd7e761828ac/images/968daf61-6858-454a-9ed4-3d3db2ae1805/4317dd0d-fd35-4176-9353-7ff69e3a8dc3.meta
/372501f5-062c-4790-afdb-dd7e761828ac/images/4e3e8ca5-0edf-42ae-ac7b-e9a51ad85922/ceb42742-eaaa-4867-aa54-da525629aae4.meta
Status: Connected
Number of entries: 2

Brick repo1:/gluster_bricks/engine/engine
/372501f5-062c-4790-afdb-dd7e761828ac/images/968daf61-6858-454a-9ed4-3d3db2ae1805/4317dd0d-fd35-4176-9353-7ff69e3a8dc3.meta
/372501f5-062c-4790-afdb-dd7e761828ac/images/4e3e8ca5-0edf-42ae-ac7b-e9a51ad85922/ceb42742-eaaa-4867-aa54-da525629aae4.meta
Status: Connected
Number of entries: 2

Brick repo2:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0


Repo0 and repo1 were not rebooted, but repo2 was.
Since repo2 went offline I would expect when it came back online it would have 
entries that need healing, but based on the heal info output that's not what it 
looks like, so I'm thinking maybe heal info isn't reporting what I think it is 
reporting.

*When 'gluster volume heal <volname> info' reports entries as above, what is it 
saying?

From the above output, I was reading it as repo0 has 2 entries that need to be 
healed from the other bricks and repo1 has 2 entries that need healing from 
the other bricks. However, that doesn't make sense since repo2 was the one 
that was rebooted, and a 'stat' on the files in the bricks shows repo2 has the 
older version (checksums also show repo0 and repo1 match). Trying to access 
the file through the FUSE mount on any node gives input/output errors.



Getfattr output:

repo0 glusterfs]# getfattr -d -m. -e hex 
/gluster_bricks/engine/engine/372501f5-062c-4790-afdb-dd7e761828ac/images/4e3e8ca5-0edf-42ae-ac7b-e9a51ad85922/ceb42742-eaaa-4867-aa54-da525629aae4.meta
getfattr: Removing leading '/' from absolute path names
# file: 
gluster_bricks/engine/engine/372501f5-062c-4790-afdb-dd7e761828ac/images/4e3e8ca5-0edf-42ae-ac7b-e9a51ad85922/ceb42742-eaaa-4867-aa54-da525629aae4.meta
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.afr.dirty=0x
trusted.afr.engine-client-2=0x00020002
trusted.bit-rot.signature=0x0102009338ff61a57fcb452b92ae816b8e5ff672be6d340e7da0a0dcfa34e26b26933b
trusted.bit-rot.version=0x02005e09d54f000ef84f
trusted.gfid=0xb85edc187d594872a594c25419154d05
trusted.gfid2path.ff2d749198341aff=0x32393564303861372d386437352d343638392d393239332d3339336434346362656233342f63656234323734322d656161612d343836372d616135342d6461353235363239616165342e6d657461
trusted.glusterfs.mdata=0x015e2f88ce2d1fe6135e2f88ce2d10ae535e2f88ce2d067b3b
trusted.glusterfs.shard.block-size=0x0400
trusted.glusterfs.shard.file-size=0x01ad0001
trusted.pgfid.295d08a7-8d75-4689-9293-393d44cbeb34=0x0001
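Two of the xattrs above can be decoded directly: trusted.gfid2path.* is just hex-encoded '<parent-gfid>/<filename>', and trusted.afr.<volume>-client-<N> holds three big-endian 32-bit counters (data, metadata, entry operations pending for brick N). Note the digest rendering above has collapsed runs of zeros in the printed values, so the AFR value used below is a hypothetical full-length reconstruction, not the literal on-disk bytes:

```python
# trusted.gfid2path.* value from the repo0 getfattr output above,
# split across lines for readability:
GFID2PATH_HEX = (
    "32393564303861372d386437352d343638392d393239332d3339"
    "336434346362656233342f63656234323734322d656161612d34"
    "3836372d616135342d6461353235363239616165342e6d657461"
)

def decode_gfid2path(hexval: str) -> str:
    """gfid2path values are hex-encoded '<parent-gfid>/<filename>'."""
    return bytes.fromhex(hexval).decode()

def decode_afr_pending(hexval: str):
    """trusted.afr.<volume>-client-<N> is three big-endian uint32
    counters: (data, metadata, entry) operations pending for brick N."""
    raw = bytes.fromhex(hexval)
    return tuple(int.from_bytes(raw[i:i + 4], "big") for i in (0, 4, 8))

print(decode_gfid2path(GFID2PATH_HEX))
# 295d08a7-8d75-4689-9293-393d44cbeb34/ceb42742-eaaa-4867-aa54-da525629aae4.meta

# Hypothetical full-length reconstruction of the collapsed
# trusted.afr.engine-client-2 value (2 data ops, 2 entry ops pending):
print(decode_afr_pending("000000020000000000000002"))  # (2, 0, 2)
```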

repo1 glusterfs]# getfattr -d -m. -e hex 
/gluster_bricks/engine/engine/372501f5-062c-4790-afdb-dd7e761828ac/images/4e3e8ca5-0edf-42ae-ac7b-e9a51ad85922/ceb42742-eaaa-4867-aa54-da525629aae4.meta
getfattr: Removing leading '/' from absolute path names
# file: 
gluster_bricks/engine/engine/372501f5-062c-4790-afdb-dd7e761828ac/images/4e3e8ca5-0edf-42ae-ac7b-e9a51ad85922/ceb42742-eaaa-4867-aa54-da525629aae4.meta
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.afr.dirty=0x
trusted.afr.engine-client-2=0x00020002
trusted.bit-rot.signature=0x0102009338ff61a57fcb452b92ae816b8e5ff672be6d340e7da0a0dcfa34e26b26933b
trusted.bit-rot.version=0x02005e09db58709b
trusted.gfid=0xb85edc187d594872a594c25419154d05
trusted.gfid2path.ff2d749198341aff=0x32393564303861372d386437352d343638392d393239332d3339336434346362656233342f63656234323734322d656161612d343836372d616135342d6461353235363239616165342e6d657461
trusted.glusterfs.mdata=0x015e2f88ce2d1fe6135e2f88ce2d10ae535e2f88ce2d067b3b
trusted.glusterfs.shard.block-size=0x0400

Re: [Gluster-users] [Gluster-devel] Community Meeting: Make it more reachable

2020-01-29 Thread Sunny Kumar
On Wed, Jan 29, 2020 at 9:50 AM Sankarshan Mukhopadhyay
 wrote:
>
> On Wed, 29 Jan 2020 at 15:03, Sunny Kumar  wrote:
> >
> > Hello folks,
> >
> > I would like to propose moving community meeting to a time which is
> > more suitable for EMEA/NA zone, that is merging both of the zone
> > specific meetings into a single one.
> >
>
> I am aware that there are 2 sets of meetings now - APAC and EMEA/NA.
> This came about to ensure that users and community at these TZs have
> the opportunity to participate in time slices that are more
> comfortable. I have never managed to attend a NA/EMEA instance - is
> that not seeing enough participation? There is only a very thin time

I usually join, but no one turns up in the meeting. I guess there is some
sort of problem; most likely no one is aware, no one is hosting, or there
is a communication gap.

> slice that overlaps APAC, EMEA and NA. If we want to do this, we would
> need to have a doodle/whenisgood set of options to see how this pans
> out.

Yes, we have to come up with a time which can cover most time zones.
>
> > It will be really helpful for people from other time zones who wish to
> > join these meetings, and will help users collaborate with developers in
> > the APAC region.
> >
> > Please share your thoughts.
> >
> > /sunny
> >
>





Re: [Gluster-users] [Errno 107] Transport endpoint is not connected

2020-01-29 Thread Olaf Buitelaar
Hi Strahil,

Thank you for your reply. I found the issue: the "not connected" errors seem
to come from the ACL layer. Somehow it received a permission-denied error,
which was translated into a "not connected" error. While the file permissions
were listed as owner=vdsm and group=kvm, the ACL layer somehow saw this
differently. I ran "chown -R vdsm.kvm
/rhev/data-center/mnt/glusterSD/10.201.0.11\:_ovirt-mon-2/" on the mount,
and suddenly things started working again.
I indeed have (or now had, since the restore procedure required an empty
domain) one other VM on the HostedEngine domain; this VM ran other critical
services like VPN. Since I see the HostedEngine domain as one of the most
reliable domains, I used it for critical services.
All other VMs have their own domains.

I'm a bit surprised by your comment about brick multiplexing; I understood
it should actually improve performance by sharing resources? Would you have
some extra information about this?

To answer your questions;

We currently have 15 physical hosts.

1) there are no pending heals
2) yes i'm able to connect to the ports
3) all peers report as connected
4) Actually I had a setup like this before, with multiple smaller qcow
disks in a raid0 with LVM. But that appeared not to be reliable, so I
switched to 1 single large disk. Would you know if there is some
documentation about this?
5) I'm running about the latest and greatest stable, 4.3.7.2-1.el7. I only
had trouble with the restore, because the cluster was still in
compatibility mode 4.2 and there were 2 older VMs which had snapshots from
prior versions, while the leaf was in compatibility level 4.2. Note: the
backup was taken on the engine running 4.3.

Thanks Olaf



On Tue, 28 Jan 2020 at 17:31, Strahil Nikolov  wrote:

> On January 27, 2020 11:49:08 PM GMT+02:00, Olaf Buitelaar <
> olaf.buitel...@gmail.com> wrote:
> >Dear Gluster users,
> >
> >i'm a bit at a los here, and any help would be appreciated.
> >
> >I've lost a couple of virtual machines, since their disks suffered from
> >severe XFS errors, and some won't boot because they can't resolve the
> >size of the image as reported by vdsm:
> >"VM kube-large-01 is down with error. Exit message: Unable to get
> >volume
> >size for domain 5f17d41f-d617-48b8-8881-a53460b02829 volume
> >f16492a6-2d0e-4657-88e3-a9f4d8e48e74."
> >
> >which is also reported by the vdsm-client;  vdsm-client Volume getSize
> >storagepoolID=59cd53a9-0003-02d7-00eb-01e3
> >storagedomainID=5f17d41f-d617-48b8-8881-a53460b02829
> >imageID=2f96fd46-1851-49c8-9f48-78bb50dbdffd
> >volumeID=f16492a6-2d0e-4657-88e3-a9f4d8e48e74
> >vdsm-client: Command Volume.getSize with args {'storagepoolID':
> >'59cd53a9-0003-02d7-00eb-01e3', 'storagedomainID':
> >'5f17d41f-d617-48b8-8881-a53460b02829', 'volumeID':
> >'f16492a6-2d0e-4657-88e3-a9f4d8e48e74', 'imageID':
> >'2f96fd46-1851-49c8-9f48-78bb50dbdffd'} failed:
> >(code=100, message=[Errno 107] Transport endpoint is not connected)
> >
> >with corresponding gluster mount log;
> >[2020-01-27 19:42:22.678793] W [MSGID: 114031]
> >[client-rpc-fops_v2.c:2634:client4_0_lookup_cbk]
> >0-ovirt-data-client-14:
> >remote operation failed. Path:
>
> >/5f17d41f-d617-48b8-8881-a53460b02829/images/2f96fd46-1851-49c8-9f48-78bb50dbdffd/f16492a6-2d0e-4657-88e3-a9f4d8e48e74
> >(a19abb2f-8e7e-42f0-a3c1-dad1eeb3a851) [Permission denied]
> >[2020-01-27 19:42:22.678828] W [MSGID: 114031]
> >[client-rpc-fops_v2.c:2634:client4_0_lookup_cbk]
> >0-ovirt-data-client-13:
> >remote operation failed. Path:
>
> >/5f17d41f-d617-48b8-8881-a53460b02829/images/2f96fd46-1851-49c8-9f48-78bb50dbdffd/f16492a6-2d0e-4657-88e3-a9f4d8e48e74
> >(a19abb2f-8e7e-42f0-a3c1-dad1eeb3a851) [Permission denied]
> >[2020-01-27 19:42:22.679806] W [MSGID: 114031]
> >[client-rpc-fops_v2.c:2634:client4_0_lookup_cbk]
> >0-ovirt-data-client-14:
> >remote operation failed. Path: (null)
> >(----) [Permission denied]
> >[2020-01-27 19:42:22.679862] W [MSGID: 114031]
> >[client-rpc-fops_v2.c:2634:client4_0_lookup_cbk]
> >0-ovirt-data-client-13:
> >remote operation failed. Path: (null)
> >(----) [Permission denied]
> >[2020-01-27 19:42:22.679981] W [MSGID: 108027]
> >[afr-common.c:2274:afr_attempt_readsubvol_set]
> >0-ovirt-data-replicate-3: no
> >read subvols for
>
> >/5f17d41f-d617-48b8-8881-a53460b02829/images/2f96fd46-1851-49c8-9f48-78bb50dbdffd/f16492a6-2d0e-4657-88e3-a9f4d8e48e74
> >[2020-01-27 19:42:22.680606] W [MSGID: 114031]
> >[client-rpc-fops_v2.c:2634:client4_0_lookup_cbk]
> >0-ovirt-data-client-14:
> >remote operation failed. Path:
>
> >/5f17d41f-d617-48b8-8881-a53460b02829/images/2f96fd46-1851-49c8-9f48-78bb50dbdffd/f16492a6-2d0e-4657-88e3-a9f4d8e48e74
> >(----) [Permission denied]
> >[2020-01-27 19:42:22.680622] W [MSGID: 114031]
> >[client-rpc-fops_v2.c:2634:client4_0_lookup_cbk]
> >0-ovirt-data-client-13:
> >remote operation failed. Path:

[Gluster-users] Community Meeting: Make it more reachable

2020-01-29 Thread Sunny Kumar
Hello folks,

I would like to propose moving the community meeting to a time which is
more suitable for the EMEA/NA zones, that is, merging both of the
zone-specific meetings into a single one.

It will be really helpful for people from other time zones who wish to
join these meetings, and will help users collaborate with developers in
the APAC region.

Please share your thoughts.

/sunny


