Re: [Gluster-devel] XDR RPC Spec

2016-05-20 Thread Jeff Darcy
> As a community, if we would like to increase adoption of Gluster, I think
> it would be good if other languages could produce bindings in their native
> language without needing to use the C libraries.  If I'm writing code in
> Java or Rust or (name your language here) and I see a Gluster library that
> is pure, I'll take that over a library that is just C bindings.

The question is: bindings to what?  At what semantic level?  It would
certainly be possible to change the XDR definitions, but it's not clear
how much value we'd really gain.  For one thing, as I've mentioned
recently in the xdata thread, XDR is a poor choice for us anyway.  I'd
really prefer to replace it for our internal use, though of course it
would still be in play out in Ganesha-land.

More importantly, even a perfect wire-format definition would be of
little value to most people.  Our use of these low-level primitives is
invariably governed by protocols, from the initial connection handshake
to the complex dances done by translators like DHT and AFR.  Raw RPC
calls not in conformance with those protocols will often result in
errors, including operations being prohibited because to do otherwise
could result in data (or metadata) corruption.  The bindings that
matter, that will actually enable developers to do something other than
drive us and them insane, are the ones that either inject higher-level
operations ("fops") into the translator stack or intercept them once
there.  Those bindings do exist, for Python and to some extent for Go.
Any other language with a decent foreign-function interface should be
able to wrap the C calls as those do.  It's about a hundred times easier
to add interesting new functionality that way than by trying to
reverse-engineer what we've done at the RPC level.

Perhaps it would be useful for you to give a bit more detail on what
you'd like to use these Rust bindings *for*.  Then we can probably have
a good discussion on what the best approaches are, instead of jumping
into what's probably the most painful choice.


Re: [Gluster-devel] netbsd smoke tests fail when code patches are backported to release-3.6

2016-05-20 Thread Emmanuel Dreyfus
On Fri, May 20, 2016 at 05:43:07PM +0300, Angelos SAKELLAROPOULOS wrote:
> May I ask why the following review requests are not submitted to release-3.6?
> It seems that they fail in the netbsd and freebsd smoke tests, which are not
> related to the code changes.

There are build errors. I am not sure how you could have inherited 
them from the git checkout, since previous changes were supposed to 
pass smoke too. If you are sure the errors are not yours, you
can try to rebase.

-- 
Emmanuel Dreyfus
m...@netbsd.org


Re: [Gluster-devel] netbsd smoke tests fail when code patches are backported to release-3.6

2016-05-20 Thread Ravishankar N

On 05/20/2016 08:13 PM, Angelos SAKELLAROPOULOS wrote:

> Hi,
>
> May I ask why the following review requests are not submitted to
> release-3.6?
> It seems that they fail in the netbsd and freebsd smoke tests, which
> are not related to the code changes.

You'd have to re-trigger the tests if they are spurious failures. Type 
"recheck smoke" as a comment on the patch to re-trigger the smoke test. 
Likewise "recheck netbsd" and "recheck centos" for the other 
regressions. Patches get merged only after all regressions pass and they 
have +1 from the reviewers.


-Ravi

> http://review.gluster.org/#/c/14007/
> http://review.gluster.org/#/c/14403/
> http://review.gluster.org/#/c/14418/
>
> Regards,
> Angelos



Re: [Gluster-devel] Idea: Alternate Release process

2016-05-20 Thread Shyam

On 05/19/2016 10:25 PM, Pranith Kumar Karampuri wrote:

> Once every 3 months, i.e. option 3, sounds good to me.

+1 from my end.

Every 2 months seems a bit too frequent; 4 months is still fine, but 
with a yearly LTS that makes 1 in every 3 releases the LTS. I like 1:4 
odds better for the LTS, hence the 3 months (or 'alternative 2').




Pranith

On Fri, May 13, 2016 at 1:46 PM, Aravinda wrote:

Hi,

Based on the discussion in the last community meeting and previous
discussions:

1. Too frequent releases are difficult to manage (without a dedicated
release manager).
2. Users want to see features early for testing or POC.
3. Backporting patches to more than two release branches is a pain.

Enclosed are visualizations to help understand the existing release and
support cycle and the proposed alternatives.

- Each grid interval is 6 months
- Green rectangle shows a supported release or LTS
- Black dots are monthly minor releases while the release is supported
- Orange rectangle is a non-LTS release with minor releases (support
ends when the next version is released)

Enclosed are the following images:
1. Existing release cycle and support plan (6-month release cycle, 3
releases supported at all times)
2. Proposed alternative 1 - one LTS every year and a non-LTS stable
release once every 2 months
3. Proposed alternative 2 - one LTS every year and a non-LTS stable
release once every 3 months
4. Proposed alternative 3 - one LTS every year and a non-LTS stable
release once every 4 months
5. Proposed alternative 4 - one LTS every year and a non-LTS stable
release once every 6 months (similar to the existing plan, but only
every alternate release becomes LTS)

Please do vote for one of the proposed alternatives for the release
intervals and LTS releases. You can also vote for the existing plan.

Do let me know if I missed anything.

regards
Aravinda

On 05/11/2016 12:01 AM, Aravinda wrote:


I couldn't find any solution for the backward-incompatible
changes. As you mentioned, this model will not work for LTS.

How about adopting this only for non-LTS releases? We will not
have the backward-incompatibility problem, since we need not release
minor updates to non-LTS releases.

regards
Aravinda
On 05/05/2016 04:46 PM, Aravinda wrote:


regards
Aravinda

On 05/05/2016 03:54 PM, Kaushal M wrote:

On Thu, May 5, 2016 at 11:48 AM, Aravinda wrote:

Hi,

Sharing an idea to manage multiple releases without maintaining
multiple release branches and backports.

This idea is heavily inspired by the Rust release model (you may feel
it is exactly the same, except for the LTS part). I think Chrome and
Firefox also follow the same model.

http://blog.rust-lang.org/2014/10/30/Stability.html

Feature Flag:
-------------
A compile-time variable to prevent compiling feature-related code
when the feature is disabled. (For example, ./configure
--disable-geo-replication or ./configure --disable-xml, etc.)
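
(As a side-by-side illustration of the Rust model this borrows from --
a minimal sketch of compile-time feature gating using Cargo-style cfg
attributes; the feature name "geo-replication" here is only an example,
not a real Cargo feature of any crate.)

    // In Cargo.toml one would declare:
    // [features]
    // geo-replication = []

    // Code behind the flag is compiled only when the feature is
    // enabled, much like ./configure --disable-<feature> excludes it.
    #[cfg(feature = "geo-replication")]
    mod geo_replication {
        pub fn start() { println!("geo-replication compiled in"); }
    }

    fn main() {
        #[cfg(feature = "geo-replication")]
        geo_replication::start();

        #[cfg(not(feature = "geo-replication"))]
        println!("built without geo-replication");
    }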

Plan
----
- Nightly build with all the features enabled (./build --nightly)

- All new patches will land in master; if a patch belongs to an
  existing feature then it should be written behind that feature flag.

- If a feature is still work in progress then it will be enabled only
  in the nightly build, and not in beta or stable builds.
  Once the maintainer thinks the feature is ready for testing, that
  feature will be enabled in the beta build.

- Every 6 weeks, a beta branch will be created by enabling all the
  features which maintainers think are stable, and the previous beta
  branch will be promoted to stable.
  All the previous beta features will be enabled in stable unless
  marked as unstable during beta testing.

- LTS builds are the same as stable builds but without enabling all the
  features. If we decide the last stable build will become the LTS
  release, then the feature list from the last stable build will be
  saved as `features-release-<version>.yaml`, for example
  `features-release-3.9.yaml`.
  The same feature list will be used while building minor releases for
  the LTS. For example, `./build --stable --features
  features-release-3.8.yaml`

- Three branches: nightly/master, testing/beta, stable

To summarize:
- One stable release once every 6 weeks
- One beta release once every 6 weeks
- Nightly builds every day
- LTS release once every 6 months or 1 year, with minor releases once
  every 6 weeks

Advantages:
-----------
1. No more backports required to different release branches (only
   exceptional backports, discussed below).
2. Non-feature bug fixes will never get missed in releases.
3. The release process can be automated.
4. The Bugzilla process can be simplified.


Re: [Gluster-devel] Query!

2016-05-20 Thread ABHISHEK PALIWAL
I am not getting any failure, and after restarting glusterd, when I run
the volume info command it creates the brick directory
as well as .glusterfs (xattrs).

But sometimes, even after restarting glusterd, the volume info command
shows no volume present.

Could you please tell me why this unpredictable problem is occurring?

Regards,
Abhishek

On Fri, May 20, 2016 at 3:50 PM, Kaushal M  wrote:

> This would erase the xattrs set on the brick root (volume-id), which
> identify it as a brick. Brick processes will fail to start when this
> xattr isn't present.
>
>
> On Fri, May 20, 2016 at 3:42 PM, ABHISHEK PALIWAL
>  wrote:
> > Hi
> >
> > What will happen if we format the volume where the bricks of a
> > replicated gluster volume are created and restart glusterd on both
> > nodes?
> >
> > Will it work fine, or do we in this case need to remove the
> > /var/lib/glusterd directory as well?
> >
> > --
> > Regards
> > Abhishek Paliwal
> >



-- 
Regards
Abhishek Paliwal

Re: [Gluster-devel] Query!

2016-05-20 Thread Kaushal M
This would erase the xattrs set on the brick root (volume-id), which
identify it as a brick. Brick processes will fail to start when this
xattr isn't present.
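
(A hedged sketch of checking for that xattr from code, using the libc
crate's getxattr binding; the brick path below is hypothetical.)

    use std::ffi::CString;

    fn main() {
        // Hypothetical brick root; the volume-id xattr is a 16-byte UUID.
        let path = CString::new("/bricks/brick1").unwrap();
        let name = CString::new("trusted.glusterfs.volume-id").unwrap();
        let mut buf = [0u8; 16];
        let n = unsafe {
            libc::getxattr(path.as_ptr(), name.as_ptr(),
                           buf.as_mut_ptr() as *mut libc::c_void, buf.len())
        };
        if n == buf.len() as isize {
            println!("volume-id present: {:x?}", buf);
        } else {
            // Reformatting the filesystem erases this xattr, and the
            // brick process then refuses to start.
            println!("volume-id missing; brick will not start");
        }
    }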


On Fri, May 20, 2016 at 3:42 PM, ABHISHEK PALIWAL
 wrote:
> Hi
>
> What will happen if we format the volume where the bricks of a
> replicated gluster volume are created and restart glusterd on both
> nodes?
>
> Will it work fine, or do we in this case need to remove the
> /var/lib/glusterd directory as well?
>
> --
> Regards
> Abhishek Paliwal


[Gluster-devel] Query!

2016-05-20 Thread ABHISHEK PALIWAL
Hi,

What will happen if we format the volume where the bricks of a
replicated gluster volume are created and then restart glusterd on both
nodes?

Will it work fine, or do we in this case need to remove the
/var/lib/glusterd directory as well?

-- 
Regards
Abhishek Paliwal