Re: [Gluster-users] nfs-ganesha logs

2017-03-01 Thread Nithya Balachandran
On 1 March 2017 at 18:25, Soumya Koduri  wrote:

> I am not sure if there are any outstanding issues with exposing a sharded
> volume via gfapi. CCing Krutika.
>
> On 02/28/2017 01:29 PM, Mahdi Adnan wrote:
>
>> Hi,
>>
>>
>> We have a Gluster volume hosting VMs for ESXi exported via Ganesha.
>>
>> I'm getting the following messages in ganesha-gfapi.log and ganesha.log:
>>
>>
>>
>> =
>>
>> [2017-02-28 07:44:55.194621] E [MSGID: 109040]
>> [dht-helper.c:1198:dht_migration_complete_check_task] 0-vmware2-dht:
>> : failed to lookup the file
>> on vmware2-dht [Stale file handle]
>>
>
> This "Stale file handle" error suggests that the file may have just got
> removed at the back-end. Probably someone more familiar with dht (cc'ed
> Nithya) can confirm if there are other possibilities.


That is one possibility. If a FOP returns ENOENT/ESTALE because the file was
deleted before the FOP could go through, DHT checks whether the file was
migrated to another brick. Since the file is no longer present anywhere on
the volume, that check also fails and you see the
dht_migration_complete_check_task message above.

You might want to check if the file in question still exists. There should
also be messages in the log indicating which fop has failed.
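
One way to check on the bricks (a sketch only; the brick path below is
hypothetical, the gfid is taken from the log above) is to look up the gfid
link kept under each brick's .glusterfs directory:

# Run on each server; /bricks/vmware2/brick is a hypothetical brick path.
GFID=ec846aeb-50f9-4b39-b0c9-24a8b833afe6

# Gluster keeps a hard link for every regular file's gfid under
# <brick>/.glusterfs/<first 2 hex chars>/<next 2 hex chars>/<gfid>.
ls -l /bricks/vmware2/brick/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID

# A hard-link count of 1 here usually means the named file is gone from
# the brick and only the gfid link remains.
stat -c '%h %n' /bricks/vmware2/brick/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID

If the gfid link is missing on all bricks, the file has indeed been removed
from the volume.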



>
>
> [2017-02-28 07:44:55.194660] E [MSGID: 133014]
>> [shard.c:1129:shard_common_stat_cbk] 0-vmware2-shard: stat failed:
>> ec846aeb-50f9-4b39-b0c9-24a8b833afe6 [Stale file handle]
>> [2017-02-28 07:44:55.207154] W [MSGID: 108008]
>> [afr-read-txn.c:228:afr_read_txn] 0-vmware2-replicate-5: Unreadable
>> subvolume -1 found with event generation 8 for gfid
>> 4a50127e-4403-49a5-9886-80541a76299c. (Possible split-brain)
>> [2017-02-28 07:44:55.209205] E [MSGID: 109040]
>> [dht-helper.c:1198:dht_migration_complete_check_task] 0-vmware2-dht:
>> : failed to lookup the file
>> on vmware2-dht [Stale file handle]
>> [2017-02-28 07:44:55.209265] E [MSGID: 133014]
>> [shard.c:1129:shard_common_stat_cbk] 0-vmware2-shard: stat failed:
>> 4a50127e-4403-49a5-9886-80541a76299c [Stale file handle]
>> [2017-02-28 07:44:55.212556] W [MSGID: 108008]
>> [afr-read-txn.c:228:afr_read_txn] 0-vmware2-replicate-4: Unreadable
>> subvolume -1 found with event generation 2 for gfid
>> cec80035-1f51-434a-9dbf-8bcdd5f4a8f7. (Possible split-brain)
>> [2017-02-28 07:44:55.214702] E [MSGID: 109040]
>> [dht-helper.c:1198:dht_migration_complete_check_task] 0-vmware2-dht:
>> : failed to lookup the file
>> on vmware2-dht [Stale file handle]
>> [2017-02-28 07:44:55.214741] E [MSGID: 133014]
>> [shard.c:1129:shard_common_stat_cbk] 0-vmware2-shard: stat failed:
>> cec80035-1f51-434a-9dbf-8bcdd5f4a8f7 [Stale file handle]
>> [2017-02-28 07:44:55.259729] I [MSGID: 108031]
>> [afr-common.c:2154:afr_local_discovery_cbk] 0-vmware2-replicate-0:
>> selecting local read_child vmware2-client-0
>> [2017-02-28 07:44:55.259937] I [MSGID: 108031]
>> [afr-common.c:2154:afr_local_discovery_cbk] 0-vmware2-replicate-4:
>> selecting local read_child vmware2-client-8
>>
>> =
>>
>> 28/02/2017 06:27:54 : epoch 58b05af4 : gluster01 :
>> ganesha.nfsd-2015[dbus_heartbeat] dbus_heartbeat_cb :DBUS :WARN :Health
>> status is unhealthy.  Not sending heartbeat
>> 28/02/2017 06:33:36 : epoch 58b05af4 : gluster01 :
>> ganesha.nfsd-2015[work-9] cache_inode_avl_qp_insert :INODE :CRIT
>> :cache_inode_avl_qp_insert_s: name conflict (access, access)
>> =
>>
>>
>> The volume is hosting a few VMs without any noticeable workload, and all
>> bricks are SSDs.
>>
>> I'm concerned about these log messages because I have another cluster and
>> Ganesha keeps crashing every few days with the following messages
>> spamming the log:
>>
>>
> Do you happen to have a core file? If yes, could you please check the
> backtrace (bt). The messages below are just heartbeat warnings, typically
> thrown when the outstanding request queue is above a certain threshold and
> the nfs-ganesha server is taking a while to process them. Also, you seem to
> be using nfs-ganesha 2.3.x, which is no longer actively maintained. There
> are many improvements and fixes in nfs-ganesha 2.4.x; I suggest trying out
> that version if possible.
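
For reference, pulling a backtrace out of a core looks roughly like this (a
sketch; the binary path, core file location and debuginfo package names are
assumptions and vary by distribution):

# Debug symbols make the backtrace readable, e.g. (package names may differ):
#   yum install nfs-ganesha-debuginfo glusterfs-debuginfo

# Open the core against the ganesha binary; both paths are hypothetical.
gdb /usr/bin/ganesha.nfsd /var/crash/core.ganesha.nfsd.31929

# Inside gdb:
#   (gdb) bt                   # backtrace of the crashing thread
#   (gdb) thread apply all bt  # backtraces of all threads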
>
>
>  >
>
>> 28/02/2017 08:02:45 : epoch 58b1e2f5 : gfs01 :
>> ganesha.nfsd-31929[dbus_heartbeat] dbus_heartbeat_cb :DBUS :WARN :Health
>> status is unhealthy.  Not sending heartbeat
>> 28/02/2017 08:41:08 : epoch 58b1e2f5 : gfs01 :
>> ganesha.nfsd-31929[dbus_heartbeat] dbus_heartbeat_cb :DBUS :WARN :Health
>> status is unhealthy.  Not sending heartbeat
>> 28/02/2017 08:48:38 : epoch 58b1e2f5 : gfs01 :
>> ganesha.nfsd-31929[dbus_heartbeat] dbus_heartbeat_cb :DBUS :WARN :Health
>> status is unhealthy.  Not sending heartbeat
>> 28/02/2017 08:48:52 : epoch 58b1e2f5 : gfs01 :
>> ganesha.nfsd-31929[dbus_heartbeat] dbus_heartbeat_cb :DBUS :WARN :Health
>> status is unhealthy.  Not sending heartbeat
>> 28/02/2017 09:16:27 : epoch 58b1e2f5 : gfs01 :
>> ganesha.nfsd-31929[dbus_heartbeat] dbus_heartbeat_cb :DBUS :WARN :Health
>> status is unhealthy.  Not sending heartbeat
>> 

Re: [Gluster-users] Turning off readdirp in the entire stack on fuse mount

2017-03-01 Thread Raghavendra Gowdappa


- Original Message -
> From: "Raghavendra Gowdappa" 
> To: "Gluster Devel" , "gluster-users" 
> 
> Sent: Thursday, March 2, 2017 9:48:08 AM
> Subject: Turning off readdirp in the entire stack on fuse mount
> 
> Hi all,
> 
> One of the grey areas (in terms of documentation) is how to turn off
> readdirplus (aka readdirp) in the entire glusterfs stack. The confusion
> arises because there are a couple of xlators which, by default, convert a
> readdir into a readdirp. All of the following steps need to be done to turn
> off readdirp entirely.
> 
> 1. mount glusterfs with option --use-readdirp=no (disables readdirp between
> fuse kernel module and glusterfs)
> 
> [root@unused glusterfs]# mount -t glusterfs booradley:/testvol -o
> use-readdirp=no /mnt/glusterfs
> 
> [root@unused glusterfs]# ps ax | grep -i mnt
> 26096 ?Ssl0:00 /usr/local/sbin/glusterfs --use-readdirp=no
> --volfile-server=booradley --volfile-id=/testvol /mnt/glusterfs
> 
> 2. set performance.force-readdirp to false (prevents md-cache from converting
> readdir calls to readdirp)
> [root@unused glusterfs]# gluster volume set testvol
> performance.force-readdirp off
> volume set: success
> 
> 3. set dht.force-readdirp to false (prevents dht from converting readdir
> into readdirp)
> [root@unused glusterfs]# gluster volume set testvol dht.force-readdirp off
> volume set: success
> 
> [root@unused glusterfs]# gluster volume info testvol
>  
> Volume Name: testvol
> Type: Distribute
> Volume ID: 007edfec-0e54-4d80-bef4-9aec5bcc1108
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> Brick1: booradley:/home/export/testvol
> Options Reconfigured:
> dht.force-readdirp: off
> performance.force-readdirp: off
> transport.address-family: inet
> nfs.disable: on
> 
> Note that if readdir-ahead is turned on, by default it does prefetch
> directory entries to fill its cache using readdirp. However, it doesn't
> implement readdirp fop. 

s/readdirp fop/readdir fop/

> Hence those prefetched entries are never consumed in
> the above configuration. If option performance.readdir-ahead is set to off,
> we would not witness a readdirp fop in the entire glusterfs stack (from
> fuse to the bricks).
> 
> regards,
> Raghavendra
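
To confirm that readdirp really is gone end to end after the three steps
above, one option is to profile the volume while listing a directory and
check that only READDIR (and no READDIRP) shows up in the per-brick fop
counts. A sketch, reusing the volume and mount point from the example:

# Enable profiling on the volume.
gluster volume profile testvol start

# Generate some directory listings on the mount.
ls -lR /mnt/glusterfs > /dev/null

# With readdirp fully disabled you would expect READDIR entries here
# and no READDIRP entries.
gluster volume profile testvol info | grep -i readdir

# Turn profiling off again when done.
gluster volume profile testvol stop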
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Is it safe to run RAID0 on a replicate cluster?

2017-03-01 Thread Lindsay Mathieson

On 2/03/2017 7:52 AM, Ernie Dunbar wrote:
> nevermind the tiny detail of what happens when any set of mailboxes
> outgrows the brick. This makes it look like this more efficient scheme
> would be highly impractical to us.


Sharding would distribute your data evenly over the disks. I have no 
idea how well it would perform with that number of files though.
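
For anyone trying it, sharding is enabled per volume. A minimal sketch with
a hypothetical volume name (the block size is only an example; sharding only
applies to files created after it is turned on, and it is mainly aimed at
large-file/VM workloads rather than many small files):

# Enable sharding on a hypothetical volume called mailvol.
gluster volume set mailvol features.shard on
gluster volume set mailvol features.shard-block-size 64MB

# Verify the options took effect.
gluster volume info mailvol | grep -i shard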


--
Lindsay Mathieson

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Is it safe to run RAID0 on a replicate cluster?

2017-03-01 Thread Ernie Dunbar

  
  
On 2017-02-28 04:01 PM, Lindsay Mathieson wrote:

> On 1 March 2017 at 09:20, Ernie Dunbar wrote:
>> Every node in the Gluster array has their RAID array configured as
>> RAID5, so I'd like to improve the performance on each node by
>> changing that to RAID0 instead.
>
> Hi Ernie, sorry saw your question before and meant to reply but
> "stuff" kept happening ... :)
>
> Presuming you're running Replica 3, I don't see any issues with
> converting from RAID5 to RAID0. There should be quite a local
> performance boost, and I would think it's actually safer - the rebuild
> times for RAID5 are horrendous and a performance killer to boot. With
> RAID0 you'll lose the whole brick if you lose a disk, but depending on
> your network, healing from the other nodes would probably be quicker.
>
> nb. What is your RAID controller? Network setup?
>
> Alternatively, I believe the general recommendation is to actually run
> all your disks in JBOD mode and create a brick per disk; that way
> individual disk failures won't affect the other bricks on the node.
> However, that would require the same number of disks per node.
>
> For myself, I actually run 4 disks per node, set up as RAID10 with
> ZFS - one ZFS pool and brick per node. I use it for VM hosting though,
> which is quite a different use case: a few very large files.

We're running Gluster on 3 Dell 2950s, using the PERC6i controller.
There's only one brick so far, but I think I'm going to have to keep it
that way, although some of our data on that brick isn't mail - VM
hosting is something that we'll be doing with this very soon.

Considering that this is our mail store, I don't think that setting up
JBOD and a brick per disk is really reasonable, or we'd have to go
around creating new e-mail accounts on random gluster shares, nevermind
the tiny detail of what happens when any set of mailboxes outgrows the
brick. This makes it look like this more efficient scheme would be
highly impractical to us.
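
As a reference for the "lose the whole brick" case above, the usual recovery
is to swap in a fresh brick and let AFR heal it from the surviving replicas.
A sketch only, with hypothetical volume name, host and brick paths:

# After replacing the disk, rebuilding the RAID0 set and recreating an
# empty brick directory on the affected node:
gluster volume replace-brick myvol \
    gfs01:/bricks/brick1 gfs01:/bricks/brick1-new commit force

# Kick off a full heal from the other replicas and watch its progress.
gluster volume heal myvol full
gluster volume heal myvol info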
  

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Unexplicable GlusterFS load and poor performance

2017-03-01 Thread Jelle Van Hees
Dear gluster community

We are running a GlusterFS replicated volume across 2 server nodes, with 2
clients; the self-heal daemon is enabled. Each client is connected to a
different server using the Gluster FUSE client. We have a lot of very small
files on the Gluster volume.

We use GlusterFS 3.9.1 from the official GlusterFS APT Repositories on a
Debian Jessie system.

The issue we are facing is that one of the servers sits at a load average of
0.1-0.5 while the other is at 200. The server with the high load also has a
huge amount of data being streamed constantly to both client nodes. This
data stream continues even when the clients are not reading or writing
data.

The following values are measured by nload and iftop :

*server:* outgoing 35-40 MB/s

*Two clients:* incoming 17-20 MB/s

Our performance on the Gluster client is very poor. An ls can take up to 10
seconds to complete, and our application runs extremely slowly.

Servers and clients are connected over an internal data-center network that
can handle much more bandwidth, so the network is not the limiting factor.
My two main questions are:

*1:* Are these differences in server load normal behavior for GlusterFS and
what causes this?

*2:* Why is there such a high, constant data stream to the clients from one
of the servers?

I cannot seem to find any information concerning this in the Gluster
documentation.

Link to the Server Fault question:

http://serverfault.com/questions/835343/unexplicable-glusterfs-load-and-poor-performance
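
In case it helps narrow this down, a few first-pass checks (a sketch with a
hypothetical volume name); pending self-heals are a common cause of
sustained background traffic in a replica pair:

# Files still queued for self-heal? A large or ever-growing list here
# would explain constant inter-node and client traffic.
gluster volume heal myvol info
gluster volume heal myvol statistics heal-count

# Which clients are connected to which bricks, and how much data each
# has transferred.
gluster volume status myvol clients

# Per-brick fop counts and latencies while the slowness is happening.
gluster volume profile myvol start
gluster volume profile myvol info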


Kind regards
*JELLE VAN HEES*
*LINUX SYSTEM ENGINEER*

AVIOVISION
Jaarbeurslaan 19 box 11, 3600 Genk, Belgium

T +32 (0)89 32 34 80 F +32 (0)89 36 37 95


  



___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Milind Changire has shared a recording of meeting titled: milin's Meeting

2017-03-01 Thread Blue Jeans Network


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] something changes 3.8.9 in respect of

2017-03-01 Thread lejeczek

hi
there is a vol with:

Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
performance.readdir-ahead: on
storage.owner-uid: 107
storage.owner-gid: 107

As you can see it is for libvirt/qemu, which was all 
working fine. I cannot think of anything that changed 
other than the update from 3.8.8 to 3.8.9 (CentOS 7.x).

Now libvirt refuses to start guests with:

Cannot access storage file '/work6-win10Ent.qcow2' (as 
uid:107, gid:107): Input/output error
internal error: Failed to autostart VM 'work6-win10Ent': 
Cannot access storage file '/work6-win10Ent.qcow2' (as 
uid:107, gid:107): Input/output error


Also when I mount the vol and go there, this happens:

]$ chmod g+r rhel-work1.qcow2
chmod: changing permissions of ‘rhel-work1.qcow2’: 
Input/output error


I've restarted the vol, and the gluster daemons too, but 
the error persists. Other vols in gluster don't give the 
above error.

What could it be?

many thanks,
L.
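
Input/output errors on a Gluster mount are often worth checking against
split-brain and the client log first. A hedged sketch, with a hypothetical
volume name, brick path and log file name:

# If the volume is replicated, check for split-brain entries.
gluster volume heal myvol info split-brain

# Inspect the xattrs of the affected file directly on a brick
# (brick path is hypothetical).
getfattr -d -m . -e hex /bricks/myvol/brick/work6-win10Ent.qcow2

# The FUSE client log usually names the xlator returning EIO; the log
# file is named after the mount point, e.g.:
tail -n 100 /var/log/glusterfs/mnt-myvol.log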
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster RPC Internals - Lecture #1 - recording

2017-03-01 Thread Milind Changire

okay ... so there's a (stupid) trick to download an mp4 version of the
recording

Below the main Flash video display area, you can find a thumbnail of the
recording, most probably a solid black rectangle with a typical play
icon at the centre. When you mouse-over that area, you *should* see a
small icon appear in the bottom-right corner of the highlight rectangle
around the thumbnail. Clicking on that little icon will start your mp4
download.

I don't know why they need to hide that download icon, or what they gain
by hiding it. The icon looks like a rectangular tray open at the top with
an arrow pointing downwards into the tray.

HTH

Milind

On 03/01/2017 03:22 PM, Bipin Kunal wrote:

Milind,

Please allow download of recording.

Thanks,
Bipin Kunal


On Wed, Mar 1, 2017 at 3:19 PM, Pavel Szalbot  wrote:

Hi Milind,

is there a non-Flash version available?

-ps

On Wed, Mar 1, 2017 at 9:28 AM, Milind Changire  wrote:


https://bluejeans.com/s/e59Wh/

--
Milind

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] nfs-ganesha logs

2017-03-01 Thread Soumya Koduri
I am not sure if there are any outstanding issues with exposing a sharded 
volume via gfapi. CCing Krutika.


On 02/28/2017 01:29 PM, Mahdi Adnan wrote:

Hi,


We have a Gluster volume hosting VMs for ESXi exported via Ganesha.

I'm getting the following messages in ganesha-gfapi.log and ganesha.log:



=

[2017-02-28 07:44:55.194621] E [MSGID: 109040]
[dht-helper.c:1198:dht_migration_complete_check_task] 0-vmware2-dht:
: failed to lookup the file
on vmware2-dht [Stale file handle]


This "Stale file handle" error suggests that the file may have just got 
removed at the back-end. Probably someone more familiar with dht (cc'ed 
Nithya) can confirm if there are other possibilities.



[2017-02-28 07:44:55.194660] E [MSGID: 133014]
[shard.c:1129:shard_common_stat_cbk] 0-vmware2-shard: stat failed:
ec846aeb-50f9-4b39-b0c9-24a8b833afe6 [Stale file handle]
[2017-02-28 07:44:55.207154] W [MSGID: 108008]
[afr-read-txn.c:228:afr_read_txn] 0-vmware2-replicate-5: Unreadable
subvolume -1 found with event generation 8 for gfid
4a50127e-4403-49a5-9886-80541a76299c. (Possible split-brain)
[2017-02-28 07:44:55.209205] E [MSGID: 109040]
[dht-helper.c:1198:dht_migration_complete_check_task] 0-vmware2-dht:
: failed to lookup the file
on vmware2-dht [Stale file handle]
[2017-02-28 07:44:55.209265] E [MSGID: 133014]
[shard.c:1129:shard_common_stat_cbk] 0-vmware2-shard: stat failed:
4a50127e-4403-49a5-9886-80541a76299c [Stale file handle]
[2017-02-28 07:44:55.212556] W [MSGID: 108008]
[afr-read-txn.c:228:afr_read_txn] 0-vmware2-replicate-4: Unreadable
subvolume -1 found with event generation 2 for gfid
cec80035-1f51-434a-9dbf-8bcdd5f4a8f7. (Possible split-brain)
[2017-02-28 07:44:55.214702] E [MSGID: 109040]
[dht-helper.c:1198:dht_migration_complete_check_task] 0-vmware2-dht:
: failed to lookup the file
on vmware2-dht [Stale file handle]
[2017-02-28 07:44:55.214741] E [MSGID: 133014]
[shard.c:1129:shard_common_stat_cbk] 0-vmware2-shard: stat failed:
cec80035-1f51-434a-9dbf-8bcdd5f4a8f7 [Stale file handle]
[2017-02-28 07:44:55.259729] I [MSGID: 108031]
[afr-common.c:2154:afr_local_discovery_cbk] 0-vmware2-replicate-0:
selecting local read_child vmware2-client-0
[2017-02-28 07:44:55.259937] I [MSGID: 108031]
[afr-common.c:2154:afr_local_discovery_cbk] 0-vmware2-replicate-4:
selecting local read_child vmware2-client-8

=

28/02/2017 06:27:54 : epoch 58b05af4 : gluster01 :
ganesha.nfsd-2015[dbus_heartbeat] dbus_heartbeat_cb :DBUS :WARN :Health
status is unhealthy.  Not sending heartbeat
28/02/2017 06:33:36 : epoch 58b05af4 : gluster01 :
ganesha.nfsd-2015[work-9] cache_inode_avl_qp_insert :INODE :CRIT
:cache_inode_avl_qp_insert_s: name conflict (access, access)
=


The volume is hosting a few VMs without any noticeable workload, and all
bricks are SSDs.

I'm concerned about these log messages because I have another cluster and
Ganesha keeps crashing every few days with the following messages
spamming the log:



Do you happen to have a core file? If yes, could you please check the 
backtrace (bt). The messages below are just heartbeat warnings, typically 
thrown when the outstanding request queue is above a certain threshold and 
the nfs-ganesha server is taking a while to process them. Also, you seem to 
be using nfs-ganesha 2.3.x, which is no longer actively maintained. There 
are many improvements and fixes in nfs-ganesha 2.4.x; I suggest trying out 
that version if possible.


 >

28/02/2017 08:02:45 : epoch 58b1e2f5 : gfs01 :
ganesha.nfsd-31929[dbus_heartbeat] dbus_heartbeat_cb :DBUS :WARN :Health
status is unhealthy.  Not sending heartbeat
28/02/2017 08:41:08 : epoch 58b1e2f5 : gfs01 :
ganesha.nfsd-31929[dbus_heartbeat] dbus_heartbeat_cb :DBUS :WARN :Health
status is unhealthy.  Not sending heartbeat
28/02/2017 08:48:38 : epoch 58b1e2f5 : gfs01 :
ganesha.nfsd-31929[dbus_heartbeat] dbus_heartbeat_cb :DBUS :WARN :Health
status is unhealthy.  Not sending heartbeat
28/02/2017 08:48:52 : epoch 58b1e2f5 : gfs01 :
ganesha.nfsd-31929[dbus_heartbeat] dbus_heartbeat_cb :DBUS :WARN :Health
status is unhealthy.  Not sending heartbeat
28/02/2017 09:16:27 : epoch 58b1e2f5 : gfs01 :
ganesha.nfsd-31929[dbus_heartbeat] dbus_heartbeat_cb :DBUS :WARN :Health
status is unhealthy.  Not sending heartbeat
28/02/2017 09:46:54 : epoch 58b1e2f5 : gfs01 :
ganesha.nfsd-31929[dbus_heartbeat] dbus_heartbeat_cb :DBUS :WARN :Health
status is unhealthy.  Not sending heartbeat
28/02/2017 09:50:02 : epoch 58b1e2f5 : gfs01 :
ganesha.nfsd-31929[dbus_heartbeat] dbus_heartbeat_cb :DBUS :WARN :Health
status is unhealthy.  Not sending heartbeat
28/02/2017 09:57:03 : epoch 58b1e2f5 : gfs01 :
ganesha.nfsd-31929[dbus_heartbeat] dbus_heartbeat_cb :DBUS :WARN :Health
status is unhealthy.  Not sending heartbeat
28/02/2017 09:57:14 : epoch 58b1e2f5 : gfs01 :
ganesha.nfsd-31929[dbus_heartbeat] dbus_heartbeat_cb :DBUS :WARN :Health
status is unhealthy.  Not sending heartbeat
28/02/2017 10:48:41 : epoch 58b1e2f5 : gfs01 :
ganesha.nfsd-31929[dbus_heartbeat] dbus_heartbeat_cb :DBUS :WARN :Health

Re: [Gluster-users] Question about trusted.afr

2017-03-01 Thread Karthik Subrahmanya
Hey Tamal,

Sorry for the delay. See my comments inline.

On Tue, Feb 28, 2017 at 10:29 AM, Tamal Saha  wrote:

> Hi,
> I am running a GlusterFS cluster in Kubernetes. This has a single 1x2
> volume. I am dealing with a split-brain situation. During debugging I
> noticed that the files on the backend bricks do not have the proper
> trusted.afr xattrs. Given this volume has 2 bricks, I am guessing I should
> see files with the following 2 afr xattrs:
>
> trusted.afr.vol-client-0
> trusted.afr.vol-client-1
>
> But I see files with xattrs below:
>
> trusted.afr.vol-client-2
> trusted.afr.vol-client-3
> trusted.afr.vol-client-5
>
> As I run the bricks as pods in Kubernetes, I have from time to time added
> and removed bricks from this volume. Basically the pod IPs change when the
> pods get restarted. My questions are:
>
> 1. Am I seeing wrong trusted.afr?
>
AFAIK, the numbers 2, 3, and 5 you are seeing in trusted.afr are there
because you added bricks to your volume. This is because of the change [1].
I suspect that you are getting 3 brick IDs in the changelog, since it was a
1x3 volume after adding the brick to the volume.
You can check the brick IDs/names of the trusted.afr attributes in the
vol file ".tcp-fuse.vol" by grepping for "afr-pending-xattr".
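
Concretely, that check might look like this (a sketch; the brick path is
hypothetical and the volfile path follows the usual /var/lib/glusterd
layout, with "vol" standing in for the volume name):

# List only the AFR changelog xattrs of a file directly on a brick.
getfattr -d -m trusted.afr -e hex /path/to/brick/some/file

# Map the trusted.afr.vol-client-N names to bricks: the client volfile
# lists them as a comma-separated afr-pending-xattr option.
grep afr-pending-xattr /var/lib/glusterd/vols/vol/trusted-vol.tcp-fuse.vol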

> 2. When I remove-brick and add a new brick and then run heal full, does it
> reapply the trusted.afrs properly?
>
What do you mean by reapplying trusted.afr? Are you referring to changing
the values of attributes or changing the brick IDs?

[1]
https://github.com/gluster/glusterfs-specs/blob/master/done/GlusterFS%203.6/Persistent%20AFR%20Changelog%20xattributes.md

>
> Thanks,
> -Tamal
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster RPC Internals - Lecture #1 - recording

2017-03-01 Thread Bipin Kunal
Milind,

Please allow download of recording.

Thanks,
Bipin Kunal


On Wed, Mar 1, 2017 at 3:19 PM, Pavel Szalbot  wrote:
> Hi Milind,
>
> is there a non-Flash version available?
>
> -ps
>
> On Wed, Mar 1, 2017 at 9:28 AM, Milind Changire  wrote:
>>
>> https://bluejeans.com/s/e59Wh/
>>
>> --
>> Milind
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users