Re: [Gluster-devel] [Gluster-users] Integration of GPU with glusterfs

2018-01-11 Thread Lindsay Mathieson

On 12/01/2018 3:14 AM, Darrell Budic wrote:
It would also add physical resource requirements to future client 
deploys, requiring more than 1U for the server (most likely), and I’m 
not likely to want to do this if I’m trying to optimize for client 
density, especially with the cost of GPUs today.


Nvidia has now banned their GPUs from being used in data centers too; I 
imagine they are planning to add a licensing fee.


--
Lindsay Mathieson


Re: [Gluster-devel] [Gluster-users] Gluster 4.0: Update

2017-08-24 Thread Lindsay Mathieson
>
> This feature (and the patent) is from facebook folks.
>
Does that mean it's not a problem?



-- 
Lindsay

Re: [Gluster-devel] [Gluster-users] Gluster 4.0: Update

2017-08-24 Thread Lindsay Mathieson
I did a quick google to see what Halo Replication was - nice feature, very
useful.

Unfortunately I also found this:
https://www.google.com/patents/US20160028806

>Halo based file system replication
>US 20160028806 A1


Is this an issue?





On 25 August 2017 at 10:33, Amar Tumballi  wrote:

> Hello Everyone,
>
> 3 weeks back we (most of the maintainers of the Gluster projects) had a
> meeting, and we discussed the features required for Gluster 4.0 and also
> the possible dates.
>
> 
> Summary:
>
>- It was agreed unanimously that Gluster 4.0 should be a feature-based
>  release, and not just a time-based one.
>- The discussion focused on which features block the 4.0 release;
>  below are the features we are considering blockers.
>   - *glusterd2*:
>     The scalable management interface, which will come with a REST API
>     and other admin-friendly options. This will be the ONLY major
>     blocker for the release.
>   - *Thin clients* (code: gfproxy):
>     Not a blocker, as it is slated to get into the master branch before
>     the next release cut on the 3.x series too. But we are calling it a
>     major feature of 4.0 as it brings many benefits and also changes
>     some of the assumptions and current design principles of the
>     glusterfs project.
>   - *Protocol/XDR changes*:
>     Technically, Gluster's RPC is implemented in a way that supports
>     changing protocols even between minor releases, but major changes
>     like adding another fop (like statx(), fadvise() or similar) will
>     need some protocol changes. Also under discussion are changes to
>     the dictionary structure on the wire. We will continue to work on
>     each of these individually, and plan to work towards meeting the
>     release timeline.
>- *Dates*: The plan is to have the release around 2 months from the
>  date the final blocker patch gets into the master branch.
>   - Tentative: October 20th for the last patch, and Jan 1st for the
>     release.
>   - Warning: the above is literally 'tentative', and not to be assumed
>     as confirmed dates.
>
>
> Other features users can anticipate:
>
>- ‘RIO volume type’
>- Extension of snapshot support for zfs / btrfs
>- Better support and documentation for monitoring
>- Halo replication:
>   - Initial patches have made it to 3.11
>   - Pending work is planned to be done by 4.0
>- gNFS (Gluster's native NFS) is expected to no longer be packaged
>  for distributions.
>   - It is currently an optional package for Fedora and the CentOS
>     Storage SIG, but we plan to stop providing it entirely.
>   - The code stays in the repository for now, but it may get dropped
>     completely later on.
>   - NFS-Ganesha is the answer for NFS protocol access starting with
>     Gluster 4.0.
>
>
> 
> FAQ:
>
>1. Will we have 3.13 to continue the trend of a 3-month release
>   calendar?
>   Answer: Yes, but it will not be an LTS release.
>2. Would 4.0 be an LTS release?
>   Answer: No.
>
> More information on this topic can be found at:
> https://hackmd.io/s/rJaLCP38b#
>
> Further discussions on this will happen at Gluster Summit 2017 in Prague,
> CZ. Try to attend in person if you can!
>
> Feedback welcome.
>
> Regards,
> Gluster team.
>
> --
> Amar Tumballi (amarts)
>



-- 
Lindsay

Re: [Gluster-devel] GlusterD2 v4.0dev-5

2017-02-04 Thread Lindsay Mathieson

Thanks

On 4/02/2017 8:25 PM, Atin Mukherjee wrote:


On Sat, 4 Feb 2017 at 10:27, Lindsay Mathieson
<lindsay.mathie...@gmail.com> wrote:


Some dumb questions re this :)

- Is this just a part of Gluster 4.0 or the entire thing?


It is built up separately but will be production ready along with 
other features as part of 4.0.




- Is it usable for doing some testing with setup & data (VM Images)? I
got most of the way towards building it on Ubuntu :)


I'd say it hasn't matured enough yet as you'd need different volume 
tunables and that implementation is still missing.




- No cmd-line tools at this stage? All config is via the REST API?


As of now, no, but there is a plan to come up with a CLI too.
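
For anyone wanting to poke at it in the meantime, the ReST interface can be
exercised with curl. A minimal sketch, assuming GD2 listens on port 24007
and exposes a volume-list route such as /v1/volumes (the port and endpoint
names here are assumptions - check the GD2 wiki for the actual routes):

   # list all volumes known to GD2 (endpoint assumed)
   curl -s http://localhost:24007/v1/volumes

   # fetch one volume's details (hypothetical volume name "testvol")
   curl -s http://localhost:24007/v1/volumes/testvol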




On 1/02/2017 9:27 PM, Kaushal M wrote:
> We have a new development release of GD2.
>
> GD2 now supports volfile fetch and portmap requests, so clients are
> finally able to mount volumes using the mount command. Portmap doesn't
> work reliably yet, so there might be failures.
>
> GD2 was refactored to clean up the main function and standardize the
> various servers it runs.
>
> More details about the release and downloads can be found at [1].
>
> We also have a docker image ready for testing [2]. More information on
> how to use this image can be found at [3].
>
> Cheers!
>
> ~kaushal
>
> [1]: https://github.com/gluster/glusterd2/releases/tag/v4.0dev-5
> [2]: https://hub.docker.com/r/gluster/glusterd2-test/builds/bqecolrgfsx8damioi3uyas/
> [3]: https://github.com/gluster/glusterd2/wiki/Testing-releases


--
Lindsay Mathieson


--
--Atin



--
Lindsay Mathieson


Re: [Gluster-devel] GlusterD2 v4.0dev-5

2017-02-03 Thread Lindsay Mathieson

Some dumb questions re this :)

- Is this just a part of Gluster 4.0 or the entire thing?

- Is it usable for doing some testing with setup & data (VM Images)? I 
got most of the way towards building it on Ubuntu :)


- No cmd-line tools at this stage? All config is via the REST API?


On 1/02/2017 9:27 PM, Kaushal M wrote:

We have a new development release of GD2.

GD2 now supports volfile fetch and portmap requests, so clients are
finally able to mount volumes using the mount command. Portmap doesn't
work reliably yet, so there might be failures.

GD2 was refactored to clean up the main function and standardize the
various servers it runs.

More details about the release and downloads can be found at [1].

We also have a docker image ready for testing [2]. More information on
how to use this image can be found at [3].

Cheers!

~kaushal

[1]: https://github.com/gluster/glusterd2/releases/tag/v4.0dev-5
[2]: https://hub.docker.com/r/gluster/glusterd2-test/builds/bqecolrgfsx8damioi3uyas/
[3]: https://github.com/gluster/glusterd2/wiki/Testing-releases



--
Lindsay Mathieson



Re: [Gluster-devel] GlusterFS 3.7.19 released

2017-01-11 Thread Lindsay Mathieson

On 11/01/2017 8:13 PM, Kaushal M wrote:

[1]: https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.18.md


Is there a reason that "performance.strict-o-direct=on" needs to be set 
for VM Hosting?
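
For context, the option is toggled per volume. A minimal sketch, assuming a
volume named datastore1:

   # check the current value
   gluster volume get datastore1 performance.strict-o-direct
   # enable it for VM hosting, as the release notes suggest
   gluster volume set datastore1 performance.strict-o-direct on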


--
Lindsay Mathieson



Re: [Gluster-devel] glusterfsd/glusterfs process taking CPU load higher than usual

2016-12-05 Thread Lindsay Mathieson

On 29/11/2016 11:26 PM, ABHISHEK PALIWAL wrote:

Is there any way to identify the cause of, or to reduce, this high load?


 * Is this load constant or spiky?
 * What does a "gluster v heal <volname> info" show?
 * What does the "fpipt_main_thre" thread do?
 * Are there any processes doing a lot of IO?
 * What does "iotop" show?
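
A rough sketch of the corresponding commands (the volume name is a
placeholder):

   # pending heals per brick
   gluster volume heal <volname> info
   # per-thread CPU usage of a brick process (first glusterfsd found)
   top -H -p $(pgrep -f glusterfsd | head -1)
   # show only processes currently doing IO (needs root)
   iotop -o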



--
Lindsay Mathieson


Re: [Gluster-devel] Custom Transport layers

2016-10-31 Thread Lindsay Mathieson

On 31/10/2016 5:56 PM, Gandalf Corvotempesta wrote:


> I'd like to experiment with broadcast udp to see if it's feasible in
local networks. It would be amazing if we could write at 1GB speeds
simultaneously to all nodes.


If you have replica 3 and set up a 3-nic bonded interface with
balance-alb on the gluster client, you are able to use the 3 nics
simultaneously, writing at 1Gb to each node.




But you can broadcast with UDP - one packet of data through one nic to 
all nodes, so in theory you could broadcast 1GB *per nic* or 3GB via 
three nics. Minus overhead for acks, nacks and ordering :)



But I'm not sure it would work at all in practice through a switch.
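
A quick way to sanity-check UDP broadcast through a switch is socat. A
sketch, assuming port 9999 is free (run the receiver on every other node):

   # on each receiver: print any datagrams arriving on UDP port 9999
   socat -u UDP-RECV:9999 -
   # on the sender: broadcast one test message to the whole subnet
   echo "hello" | socat - UDP-DATAGRAM:255.255.255.255:9999,broadcast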

--
Lindsay Mathieson



Re: [Gluster-devel] Custom Transport layers

2016-10-31 Thread Lindsay Mathieson

On 31/10/2016 5:56 PM, Gandalf Corvotempesta wrote:
If you have replica 3 and set up a 3-nic bonded interface with
balance-alb on the gluster client, you are able to use the 3 nics
simultaneously, writing at 1Gb to each node.


Actually all you need is two nics, so each node can use 2 nics for the 
other two nodes.



I actually have three nics per node, currently two bonded with
balance-alb, and I do indeed max out a 1G connection with jumbo
frames. A VM tops out at 120MB/s in sequential writes.


I did experiment with 3 nics bonded with balance-rr and managed to get
2.4Gbs throughput; balance-rr doesn't do too well with bonds bigger than 2.


Unfortunately I need a private IP for gluster and a bridge for the VMs,
and I could only get Open vSwitch to bond three nics to a bridge and an
extra IP, and OVS doesn't support balance-alb.
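
For reference, the balance-alb bond described above looks something like
this in /etc/network/interfaces on Debian (interface names and addressing
are assumptions):

   auto bond0
   iface bond0 inet static
       address 10.10.10.1
       netmask 255.255.255.0
       bond-slaves eth0 eth1
       bond-mode balance-alb
       bond-miimon 100
       # mtu 9000 gives the jumbo frames mentioned above
       mtu 9000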


--
Lindsay Mathieson



Re: [Gluster-devel] Custom Transport layers

2016-10-28 Thread Lindsay Mathieson

On 29/10/2016 12:46 AM, Jeff Darcy wrote:

In a modern switched network, the
savings are only on the sender side; the switch has to copy the
packet to N receiver ports anyway.


Hmmm, I never considered that side of things. I guess I had a somewhat
naive vision of packets floating through the ethernet, visible to all
interfaces, but switch-based networks are basically a star topology.
Are you saying the switch would likely be the choke point here?


--
Lindsay Mathieson



[Gluster-devel] Custom Transport layers

2016-10-28 Thread Lindsay Mathieson
Is it possible to write custom transport layers for gluster? For data
transfer, that is, not the management protocols. Pointers to the existing
code and/or docs would be helpful :)



I'd like to experiment with broadcast udp to see if it's feasible in
local networks. It would be amazing if we could write at 1GB speeds
simultaneously to all nodes.



Alternatively let me know if this has been tried and discarded as a bad 
idea ...


thanks,

--
Lindsay Mathieson



Re: [Gluster-devel] [Gluster-users] release-3.6 end of life

2016-08-18 Thread Lindsay Mathieson

On 19/08/2016 3:45 AM, Diego Remolina wrote:

The one thing that still remains a mystery to me is how to downgrade
glusterfs packages in Ubuntu. I have never been able to do that. There
was also a post from someone about it recently on the list and I do
not think it got any replies.


I would have assumed something like:


1. stop volume(s)

2. if needed: reset gluster options not available in the older version

3. if needed: downgrade op-version

4. stop all gluster daemons :)

5. sudo apt-get purge gluster*

6. sudo ppa-purge 

7. Install older ppa.

8. install the older gluster packages

9. Start services

10. check peers and status

11. start volume

12. test


If 2) and 3) can't be done then I presume you can't downgrade.
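
Roughly, as a shell sketch (untested - the volume name, option name and PPA
are placeholders, and note that glusterd generally refuses to *lower*
cluster.op-version, which is why step 3 may be the show-stopper):

   gluster volume stop myvol                     # 1. stop volume(s)
   gluster volume reset myvol <newer-only-opt>   # 2. reset newer-only options
   systemctl stop glusterd                       # 4. stop gluster daemons
   sudo apt-get purge 'glusterfs*'               # 5. remove packages
   sudo ppa-purge ppa:gluster/glusterfs-3.7      # 6./7. revert to older PPA
   sudo apt-get install glusterfs-server         # 8. install older packages
   systemctl start glusterd                      # 9. start services
   gluster peer status && gluster volume status  # 10. check peers and status
   gluster volume start myvol                    # 11./12. start volume, test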

--
Lindsay Mathieson



Re: [Gluster-devel] [Gluster-users] 3.7.13 & proxmox/qemu

2016-08-03 Thread Lindsay Mathieson

On 3/08/2016 10:45 PM, Lindsay Mathieson wrote:

On 3/08/2016 2:26 PM, Krutika Dhananjay wrote:
Once I deleted old content from test volume it mounted to oVirt via 
storage add when previously it would error out.  I am now creating a 
test VM with default disk caching settings (pretty sure oVirt is 
defaulting to none rather than writeback/through).  So far all shards 
are being created properly.


I can confirm that it works with Proxmox VMs in direct (no cache)
mode as well.


Also, Gluster 3.8.1 is good too.

--
Lindsay Mathieson


Re: [Gluster-devel] [Gluster-users] 3.7.13 & proxmox/qemu

2016-08-03 Thread Lindsay Mathieson

On 3/08/2016 2:26 PM, Krutika Dhananjay wrote:
Once I deleted old content from test volume it mounted to oVirt via 
storage add when previously it would error out.  I am now creating a 
test VM with default disk caching settings (pretty sure oVirt is 
defaulting to none rather than writeback/through).  So far all shards 
are being created properly.


I can confirm that it works with Proxmox VMs in direct (no cache) mode
as well.




Load is sky-rocketing, but I have all 3 gluster bricks running off 1
hard drive on the test box, so I would expect horrible IO/load issues
with that.



Ha! Same config for my test Host :)


--
Lindsay Mathieson



Re: [Gluster-devel] [Gluster-users] 3.7.13 & proxmox/qemu

2016-07-21 Thread Lindsay Mathieson

On 22/07/2016 6:14 AM, David Gossage wrote:

https://github.com/zfsonlinux/zfs/releases/tag/zfs-0.6.4

  * New asynchronous I/O (AIO) support.



Only for ZVOLs I think, not datasets.

--
Lindsay Mathieson


Re: [Gluster-devel] [Gluster-users] 3.7.13 & proxmox/qemu

2016-07-21 Thread Lindsay Mathieson

On 22/07/2016 4:00 AM, David Gossage wrote:
May be anecdotal with a small sample size, but the few people who have
had issues all seemed to have ZFS-backed gluster volumes.


Good point - all my volumes are backed by ZFS, and when using it
directly for virt storage I have to enable caching due to the lack of
O_DIRECT support.



Note: AIO support was just a theory on my part


--
Lindsay Mathieson



[Gluster-devel] 3.7.13 & proxmox/qemu

2016-07-09 Thread Lindsay Mathieson

Did a quick test this morning - 3.7.13 is now working with libgfapi - yay!


However, I do have to enable write-back or write-through caching in qemu
before the VMs will start; I believe this is to do with AIO support.
Not a problem for me.


I see there are settings for storage.linux-aio and storage.bd-aio - not
sure whether they are relevant or which ones to play with.
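
For anyone wanting to poke at the same knobs, a sketch (the volume name and
qemu invocation details are placeholders):

   # inspect the aio-related options
   gluster volume get datastore1 storage.linux-aio
   gluster volume get datastore1 storage.bd-aio

   # qemu side: a libgfapi disk with write-back cache, as described above
   qemu-system-x86_64 ... \
     -drive file=gluster://node1/datastore1/vm.qcow2,cache=writeback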


thanks,

--
Lindsay Mathieson



Re: [Gluster-devel] [Gluster-users] 3.7.12/3.8.qemu/proxmox testing

2016-07-04 Thread Lindsay Mathieson

On 4/07/2016 11:06 PM, Poornima Gurusiddaiah wrote:
Found the RCA for the issue; an explanation of the same can be found at
https://bugzilla.redhat.com/show_bug.cgi?id=1352482#c8
The patch for this will follow shortly, and we hope to include it in 3.7.13.


Brilliant, thanks all.

--
Lindsay Mathieson



Re: [Gluster-devel] [Gluster-users] 3.7.12/3.8.qemu/proxmox testing

2016-07-04 Thread Lindsay Mathieson

On 4/07/2016 7:16 PM, Kaushal M wrote:

An update on this, we are tracking this issue on bugzilla [1].
I've added some of the observations made till now in the bug. Copying
the same here.


Thanks Kaushal, appreciate the updates.


--
Lindsay Mathieson



Re: [Gluster-devel] [Gluster-users] GlusterFS-3.7.12 released

2016-06-28 Thread Lindsay Mathieson
On 28 June 2016 at 22:04, Kaushal M  wrote:
>
> I'm pleased to announce the release of GlusterFS-v3.7.12. This release
> includes a lot of bug fixes that have been merged since 3.7.11.


Just did a rolling upgrade of my 3 node/rep 3 debian jessie cluster
(proxmox). The process was quite painless and nobody noticed it
happening. I presume that I won't get the full benefit until I
restart the VM clients (gfapi access).

P.S. Also noted that the op-version is now up to 30712.
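
For anyone checking their own cluster, a sketch - the running op-version is
recorded in glusterd.info, and bumping it once all nodes are upgraded is one
command:

   # current operating version of this node
   grep operating-version /var/lib/glusterd/glusterd.info
   # raise the cluster op-version once every node runs 3.7.12
   gluster volume set all cluster.op-version 30712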

Cheers,


-- 
Lindsay


Re: [Gluster-devel] [Gluster-users] GlusterFS-3.7.12 released

2016-06-28 Thread Lindsay Mathieson

On 28/06/2016 10:04 PM, Kaushal M wrote:

I'm pleased to announce the release of GlusterFS-v3.7.12. This release
includes a lot of bug fixes that have been merged since 3.7.11.


Brilliant, thanks everyone.

--
Lindsay Mathieson



Re: [Gluster-devel] [Gluster-users] Gluster Volume mounted but not able to show the files from mount point

2016-05-26 Thread Lindsay Mathieson
On 25 May 2016 at 20:25, ABHISHEK PALIWAL  wrote:
> [2016-05-24 12:10:20.091267] E [MSGID: 113039] [posix.c:2570:posix_open]
> 0-c_glusterfs-posix: open on
> /opt/lvmdir/c2/brick/.glusterfs/fb/14/fb147cca-ec09-4259-9dfe-df883219e6a6,
> flags: 2 [No such file or directory]
> [2016-05-24 12:13:17.305773] E [MSGID: 113039] [posix.c:2570:posix_open]
> 0-c_glusterfs-posix: open on
> /opt/lvmdir/c2/brick/.glusterfs/fb/14/fb147cca-ec09-4259-9dfe-df883219e6a6,
> flags: 2 [No such file or directory]

Does /opt/lvmdir/c2/brick contain anything? Does it have a .glusterfs dir?

Could the underlying file system mount for that brick have failed?
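
A quick sketch of how I'd check (paths taken from the log above):

   # is the brick filesystem actually mounted?
   mount | grep /opt/lvmdir/c2/brick
   # does the brick have its .glusterfs metadata directory?
   ls -a /opt/lvmdir/c2/brick/.glusterfs | head
   # does the gfid path from the log exist?
   stat /opt/lvmdir/c2/brick/.glusterfs/fb/14/fb147cca-ec09-4259-9dfe-df883219e6a6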


-- 
Lindsay


Re: [Gluster-devel] [Gluster-users] Announcing GlusterFS-3.7.11

2016-04-19 Thread Lindsay Mathieson
On 19 April 2016 at 16:50, Kaushal M  wrote:
> I'm pleased to announce the release of GlusterFS version 3.7.11.


Installed and running quite smoothly here, thanks.

-- 
Lindsay


Re: [Gluster-devel] [Gluster-users] Announcing GlusterFS-3.7.11

2016-04-19 Thread Lindsay Mathieson
Cool - thanks!

On 19 April 2016 at 16:50, Kaushal M  wrote:
> Packages for Debian Stretch, Jessie and Wheezy are available on
> download.gluster.org.


I think
   http://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/

is still pointing to 3.7.10

-- 
Lindsay


Re: [Gluster-devel] [Gluster-users] Update on GlusterFS-3.7.10

2016-04-12 Thread Lindsay Mathieson
On 13 April 2016 at 13:57, Atin Mukherjee  wrote:
> Yes, we have a patch which is awaiting regression to pass. Post that
> Kaushal should be able to tag 3.7.11. Thanks for your patience :)


No worries, thanks; it'll be ready when it's ready.

-- 
Lindsay


Re: [Gluster-devel] [Gluster-users] Update on GlusterFS-3.7.10

2016-04-12 Thread Lindsay Mathieson
On 7 April 2016 at 15:42, Kaushal M  wrote:
> This regression was decided as a blocker, and a decision was made to
> do a quick GlusterFS-3.7.11 release solving it. The 3.7.11 release
> should be available within the next 2 days.


Is this still happening?

-- 
Lindsay


Re: [Gluster-devel] [Gluster-users] GlusterFS 3.7.9 released

2016-03-21 Thread Lindsay Mathieson
On 22 March 2016 at 13:19, Vijay Bellur  wrote:

> [2]
> https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.9.md
>

Sorry - not there :(


-- 
Lindsay

Re: [Gluster-devel] Sharding - what next?

2015-12-16 Thread Lindsay Mathieson

On 16/12/15 22:59, Krutika Dhananjay wrote:
I guess I did not make myself clear. Apologies. I meant to say that
printing a single list of counts aggregated from all bricks can be
tricky and is susceptible to the possibility of the same entry getting
counted multiple times if the inode needs a heal on multiple bricks.
Eliminating such duplicates would be rather difficult.


Or, we could have a sub-command of heal-info dump all the file 
paths/gfids that need heal from all bricks and
you could pipe the output to 'sort | uniq | wc -l' to eliminate 
duplicates. Would that be OK? :)



Sorry, my fault - I did understand that. Aggregate counts per brick 
would be fine, I have no desire to complicate things for the devs :)
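
For reference, the pipeline Krutika suggests would look something like this
(a sketch - the grep filters assume heal-info prints one path or <gfid:...>
entry per line, plus Brick/Status/Number-of-entries headers):

   gluster volume heal datastore1 info \
     | grep -Ev '^(Brick|Status|Number of entries)' \
     | grep -v '^$' \
     | sort -u | wc -l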




--
Lindsay Mathieson



Re: [Gluster-devel] Sharding - what next?

2015-12-14 Thread Lindsay Mathieson
Hi guys, sorry for the late reply; my attention tends to be somewhat
sporadic due to work and the large number of rescue dogs/cats I care for :)


On 3/12/2015 8:34 PM, Krutika Dhananjay wrote:
We would love to hear from you on what you think of the feature and 
where it could be improved.

Specifically, the following are the questions we are seeking feedback on:
a) your experience testing sharding with VM store use-case - any bugs 
you ran into, any performance issues, etc


Testing was initially somewhat stressful, as I regularly encountered file
corruption. However, I don't think that was due to bugs, but rather to
incorrect settings for the VM use case. Once I got that sorted out it has
been very stable - I have really stressed the failure modes we run into at
work: nodes going down while heavy writes were happening, live migrations
during heals, gluster software being killed while VMs were running on the
host. So far it's held up without a hitch.


To that end, one thing I think should be made more obvious is the 
settings required for VM Hosting:


   quick-read=off
   read-ahead=off
   io-cache=off
   stat-prefetch=off
   eager-lock=enable
   remote-dio=enable
   quorum-type=auto
   server-quorum-type=server

They are quite crucial and very easy to miss in the online docs. And they
are merely 'recommended', with no mention that you will corrupt KVM VMs if
you live-migrate them between gluster nodes without these set. Also, the
virt group is missing from the Debian packages.
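
For the record, these can be applied per volume with gluster volume set. A
sketch, assuming a volume named datastore1 (where the group file is
packaged, the whole set can also be applied in one go):

   gluster volume set datastore1 group virt   # one-shot, if the group file is present
   # or individually:
   gluster volume set datastore1 performance.quick-read off
   gluster volume set datastore1 performance.read-ahead off
   gluster volume set datastore1 performance.io-cache off
   gluster volume set datastore1 performance.stat-prefetch off
   gluster volume set datastore1 cluster.eager-lock enable
   gluster volume set datastore1 network.remote-dio enable
   gluster volume set datastore1 cluster.quorum-type auto
   gluster volume set datastore1 cluster.server-quorum-type server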


Setting them does seem to have slowed sequential writes by about 10% but 
I need to test that more.



Something related - sharding is useful because it makes heals much more
granular and hence faster. To that end it would be really useful if
there was a heal-info variant that gave an overview of the process -
rather than listing the shards being healed, just an aggregate
total, e.g.:


$ gluster volume heal datastore1 status
volume datastore1
- split brain: 0
- wounded: 65
- healing: 4

It gives one an easy feeling of progress - heals aren't happening any
faster, but it would feel that way :)



Also, it would be great if the heal info command could return faster;
sometimes it takes over a minute.


Thanks for the great work,

Lindsay