On 29/10/2016 12:46 AM, Jeff Darcy wrote:
In a modern switched network, the
savings are only on the sender side; the switch has to copy the
packet to N receiver ports anyway.
Hmmm, I never considered that side of things. I guess I had a somewhat
naive vision of packets floating through the
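Jeff's point about sender-side savings can be made concrete with a little arithmetic: unicast replication costs the sender one transmitted copy per receiver, while broadcast costs exactly one, even though the switch still delivers a full copy to every receiver port. A minimal sketch (payload size and peer list purely illustrative):

```python
PAYLOAD = b"x" * 1024
PEERS = ["10.0.0.2", "10.0.0.3", "10.0.0.4"]  # illustrative receiver addresses

def unicast_bytes_sent(payload, peers):
    # Unicast replication: the sender transmits one copy per receiver.
    return len(payload) * len(peers)

def broadcast_bytes_sent(payload):
    # Broadcast: the sender transmits a single copy; the switch still
    # copies the packet to every receiver port, so only the sender saves.
    return len(payload)

print(unicast_bytes_sent(PAYLOAD, PEERS))  # 3072
print(broadcast_bytes_sent(PAYLOAD))       # 1024
```

So the win is real, but it is confined to the sender's NIC and uplink; aggregate traffic across the switch fabric is unchanged.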
> Is it possible to write custom transport layers for Gluster? I mean data
> transfer, not the management protocols. Pointers to the existing code
> and/or docs :) would be helpful.
Is it *possible*? Yes. Is it easy or well documented? Definitely no.
The two transports we have - TCP/UNIX-domain
On 10/28/2016 06:47 AM, Kaushal M wrote:
Jeff & Shyam,
We need your opinions on this as these are your components.
3.9 is still building and shipping experimental features. The packages
being built currently include these. We shouldn't be doing this.
I have 2 changes under review [1] & [2],
Is it possible to write custom transport layers for Gluster? I mean data
transfer, not the management protocols. Pointers to the existing code
and/or docs :) would be helpful.
I'd like to experiment with broadcast UDP to see if it's feasible in
local networks. It would be amazing if we could write
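As a starting point for such an experiment, the socket-level setup for UDP broadcast is small. A minimal Python sketch (the port number is an arbitrary choice for illustration, not a Gluster port):

```python
import socket

PORT = 50000  # arbitrary port for this sketch

def make_broadcast_sender():
    # UDP sockets must opt in to broadcast explicitly via SO_BROADCAST.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    return s

def make_receiver(port=PORT):
    # Each receiver binds the same port on the wildcard address; the
    # network delivers a copy of every broadcast datagram to all of them.
    r = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    r.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    r.bind(("", port))
    return r

# Sender usage, e.g. on a local segment:
#   make_broadcast_sender().sendto(b"chunk", ("255.255.255.255", PORT))
```

Note this says nothing about reliability: UDP broadcast has no retransmission or ordering, so any real transport built on it would need its own acknowledgement scheme.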
> We need your opinions on this as these are your components.
>
> 3.9 is still building and shipping experimental features. The packages
> being built currently include these. We shouldn't be doing this.
>
> I have 2 changes under review [1] & [2], which disable and delete
> these respectively.
>
I've finished my testing of GlusterD, and everything is working as
expected. I'm giving an ACK for GlusterD.
I've mainly tested the core of GlusterD and the CLI. I haven't tested
features like snapshots, tier, bit-rot, quota, ganesha, etc.
On Fri, Oct 28, 2016 at 1:45 PM, Kaushal M
On Fri, Oct 28, 2016 at 4:33 PM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> 2016-10-28 12:32 GMT+02:00 Pranith Kumar Karampuri:
> > No it is not completely valid. We will update it and announce the release
> > sometime soon.
>
> Thank you.
> Could you
2016-10-28 12:32 GMT+02:00 Pranith Kumar Karampuri:
> No it is not completely valid. We will update it and announce the release
> sometime soon.
Thank you.
Could you also update the other roadmaps with the features that are
confirmed and what is being worked on?
There is a little bit
Jeff & Shyam,
We need your opinions on this as these are your components.
3.9 is still building and shipping experimental features. The packages
being built currently include these. We shouldn't be doing this.
I have 2 changes under review [1] & [2], which disable and delete
these respectively.
On Fri, Oct 28, 2016 at 12:35 PM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> On 25 Oct 2016 12:42, "Aravinda" wrote:
> >
> > Hi,
> >
> > Since the automated test framework for Gluster is in progress, we need help
> from maintainers and developers to
I'm continuing testing GlusterD for 3.9.0rc2. I wasted a lot of my
time earlier this morning testing 3.8.5 because of an oversight.
I have one issue so far: cluster.op-version defaults to 4. This isn't
how it's supposed to be; it needs to be set to 39000 for 3.9.0.
I'll send
I did one random read test (~10k shards in one replicate group) but so
far no errors were reported; I will try to do a few more tests over the
weekend to confirm this.
Just a quick question: does the full heal process heal files in sequence
according to sorted file name?
Thanks.
Cwtan
On Thu, Oct 27, 2016 at
On 25 Oct 2016 12:42, "Aravinda" wrote:
>
> Hi,
>
> Since the automated test framework for Gluster is in progress, we need help
from maintainers and developers to test the features and bug fixes so we
can release Gluster 3.9.
>
Is the following roadmap still valid, or are there any changes
Hi George,
It would help if we can identify the bare minimum xlators which are
contributing to the issue, as Raghavendra mentioned earlier. We were
wondering if you could help us identify the issue by running the
workload on a modified setup. We can suggest
Thanks to "Tirumala Satya Prasad Desala", we were able to run tests for
Plain distribute and didn't see any failures.
Ack for Plain distribute.
- Original Message -
> From: "Kaleb S. KEITHLEY"
> To: "Aravinda", "Gluster Devel"