Re: [Gluster-devel] Request for regression test suite details

2015-10-15 Thread Vijay Bellur

Hi Amogha,

You can find pre-commit regression tests in the tests/ directory of the 
gluster source tree. These tests are run before any commit gets merged 
into the repository.
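
For example, a minimal sketch of running those tests from a source
checkout (run-tests.sh and prove are the usual entry points, but please
check tests/README in your checkout, as details vary by release):

  git clone https://github.com/gluster/glusterfs.git
  cd glusterfs
  # Run the entire pre-commit suite (needs root; it creates and mounts
  # throwaway volumes on the test machine):
  sudo ./run-tests.sh
  # Or run a single TAP test script in isolation:
  sudo prove -vf tests/basic/mount.t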


We are in the process of populating distaf [1] with post-commit 
regression tests. Details on performance tests can be found at [2] and 
[3]. There is also an ongoing effort to publish the tests that are run 
before a release on gluster.org. Please stay tuned for that.
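
If it helps, here is a rough sketch of exercising the perf script from
[3]. The invocation below is my assumption, not documented usage; the
script itself is the authority on how it should be run:

  git clone https://github.com/avati/perf-test.git
  # Assumption: run the script from a directory on a mounted Gluster
  # volume, so the workload lands on the volume under test.
  cd /mnt/glustervol
  sh ~/perf-test/perf-test.sh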


Can you provide more details about your application and the nature of 
the tests that you intend to run?


Regards,
Vijay

[1] https://github.com/gluster/distaf

[2] 
http://www.gluster.org/community/documentation/index.php/Performance_Testing


[3] https://github.com/avati/perf-test/blob/master/perf-test.sh

On Wednesday 14 October 2015 05:33 PM, Amogha V wrote:

Hi,
  We are building an application that uses open-source DFS products like
Ceph and GlusterFS.
While going through the web materials for DFS I learnt that regression
testing is run on GlusterFS. I want to understand the tests run by the QA
team before GlusterFS is released to the outside world (more specifically,
I need the following):
* configuration details of the test setup at GlusterFS
* test plan document
* individual test cases run before GlusterFS is released to the outside world

This would help me avoid duplicate testing and add new scenarios when
testing our application.


Thanks,
Amogha


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] NETBSD Regression error - mount-nfs-auth.t

2015-10-15 Thread Saravanakumar Arumugam

Hi,
I am facing NetBSD regression errors in mount-nfs-auth.t:

https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/10899/consoleFull
for patch http://review.gluster.org/#/c/12326/
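
For anyone triaging, the failing test can also be run in isolation from
a glusterfs source checkout (the usual way to run a single regression
test; needs root):

  sudo prove -vf tests/basic/mount-nfs-auth.t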

Kindly help.

--
Saravana

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] NSR design document

2015-10-15 Thread Manoj Pillai


- Original Message -
> October 14 2015 3:11 PM, "Manoj Pillai"  wrote:
> > E.g. 3x number of bricks could be a problem if workload has
> > operations that don't scale well with brick count.
> 
> Fortunately we have DHT2 to address that.
> 
> > Plus the brick
> > configuration guidelines would not exactly be elegant.
> 
> And we have Heketi to address that.
> 
> > FWIW, if I look at the performance and perf regression tests
> > that are run at my place of work (as these tests stand today), I'd
> > expect AFR to significantly outperform this design on reads.
> 
> Reads tend to be absorbed by caches above us, *especially* in read-only
> workloads.  See Rosenblum and Ousterhout's 1992 log-structured file
> system paper, and about a bazillion others ever since.  

Yes, their point was that read absorption means the request 
stream at the secondary storage is dominated by writes, so you 
optimize for that. Plus, the non-overwrite mode of update has 
additional benefits, like easier implementation of snapshots 
or versioning, and better recovery guarantees. I think these 
additional benefits still hold true today, which is why there 
is continued interest in similar solutions. But a lot of data has 
flowed over the wires since 1992, and with the explosion in 
data set sizes, read performance at the lower storage layers 
continues, I would stress, to be the determinant of overall 
performance for many use cases, particularly among those shopping 
for a scale-out storage solution to fit their large data 
sets and modern workloads. Update-in-place file systems 
like XFS have endured quite well. 

> We need to be
> concerned at least as much about write performance, and NSR's write
> performance will *far* exceed AFR's because AFR uses neither networks
> nor disks efficiently.  It splits client bandwidth between N replicas,
> and it sprays writes all over the disk (data blocks plus inode plus
> index).  Most other storage systems designed in the last ten years can
> turn that into nice sequential journal writes, which can even be on a
> separate SSD or NVMe device (something AFR can't leverage at all).
> Before work on NSR ever started, I had already compared AFR to other
> file systems using these same methods and data flows (e.g. Ceph and
> MooseFS) many times.  Consistently, I'd see that the difference was
> quite a bit more than theoretical.  Despite all of the optimization work
> we've done on it, AFR's write behavior is still a huge millstone around
> our necks.
> 
> OK, let's bring some of these thoughts together.  If you've read
> Hennessy and Patterson, you've probably seen this formula before.
> 
> value (of an optimization) =
> benefit_when_applicable * probability -
> penalty_when_inapplicable * (1 - probability)
> 
> If NSR's write performance is significantly better than AFR's, and write
> performance is either dominant or at least highly relevant for most real
> workloads, what does that mean for performance overall?  As prototyping
> showed long ago, it means a significant improvement.  Is it *possible*
> to construct a read-dominant workload that shows something different?
> Of course it is.  It's even possible that write performance will degrade
> in certain (increasingly rare) physical configurations.  No design is
> best for every configuration and workload.  Some people tried to focus
> on the outliers when NSR was first proposed.  Our competitors will be
> glad to do the same, for the same reason - to keep their own pet designs
> from looking too bad.  The important question is whether performance
> improves for *most* real-world configurations and workloads.  NSR is
> quite deliberately somewhat write-optimized, because it's where we were
> the furthest behind and because it's the harder problem to solve.
> Optimizing for read-only workloads leaves users with any other kind of
> workload in a permanent hole.
> 
> Also, even for read-heavy workloads where we might see a deficit, we
> have not one but two workarounds.  One (brick splitting) we've just
> discussed, and it is quite deliberately being paired with other
> technologies in 4.0 to make it more effective.  The other (read from
> non-leaders) is also perfectly viable.  It's not the default because it
> reduces consistency to AFR levels, which I don't think serves our users
> very well.  However, if somebody's determined to make AFR comparisons,
> then it's only fair to compare at the same consistency level.  Giving
> users the ability to decide on such tradeoffs, instead of forcing one
> choice on everyone, has been part of NSR's design since day one.

And if there are improvements that can make the non-default option 
(reading from non-leaders as well) more palatable, they would be really 
good to have, I think. 
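
To make the value formula quoted above concrete, with purely
illustrative numbers: if writes are 60% of the request stream and the
optimization doubles write throughput (benefit 1.0 in relative units),
while reads on the remaining 40% take a 25% penalty, then
value = 1.0 * 0.6 - 0.25 * 0.4 = 0.5, a clear net win. Flip the mix to
80% reads and the same arithmetic gives 1.0 * 0.2 - 0.25 * 0.8 = 0.0,
which is exactly the read-dominant corner case we are debating.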

> 
> I'm not saying your concern is invalid, but NSR's leader-based approach
> is *essential* to improving write performance - and thus performance
> overall 

Re: [Gluster-devel] Backup support for GlusterFS

2015-10-15 Thread Pranith Kumar Karampuri

Probably a good question on gluster-users (CCed)

Pranith

On 10/14/2015 03:57 AM, Brian Lahoue wrote:
Has anyone tested backing up a fairly large Gluster implementation 
with Amanda/ZManda recently?

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Required - Details of individual tests of regression test suite

2015-10-15 Thread Amogha V
Hi,
We are building an application that uses open-source DFS products like
Ceph and GlusterFS.
While going through the web materials for DFS I learnt that regression
testing is run on GlusterFS. I want to understand the tests run by the QA
team before GlusterFS is released to the outside world (more specifically,
I need the following):
* configuration details of the test setup at GlusterFS
* test plan document
* individual test cases run before GlusterFS is released to the outside world

This would help me avoid duplicate testing and add new scenarios when
testing our application.


Thanks,
Amogha
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Gluster 4.0 - upgrades & backward compatibility strategy

2015-10-15 Thread Mauro M.
To date my experience with upgrades has been a disaster: in two cases I
was unable to start my volume and eventually had to downgrade.

What I want to recommend is that there be EXTENSIVE REGRESSION TESTING.
The most important goal is that NOTHING that works with the previous
release should break in the new release.

I recommend testing in particular with multi-homed bricks: it is to be
expected that administrators create fast (InfiniBand) LANs dedicated to
gluster, with their own separate IPs and names, physically separated from
the LAN interfaces that carry the canonical host name.
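
For concreteness, a minimal sketch of such a multi-homed setup (host
names, addresses and brick paths below are invented for illustration):

  # /etc/hosts on every node: dedicated names on the storage LAN
  192.168.10.11  gfs1-ib
  192.168.10.12  gfs2-ib
  # Peer and create the volume using the storage-LAN names only:
  gluster peer probe gfs2-ib
  gluster volume create vol0 replica 2 gfs1-ib:/bricks/b0 gfs2-ib:/bricks/b0
  gluster volume start vol0

An upgrade must keep resolving and using the gfs*-ib names here, and
never silently fall back to the canonical host names.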

Make sure as well that file system attributes or configuration files
aren't changed during the upgrade to a point that prevents a safe
downgrade.
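
A cheap safety net along those lines, before attempting the upgrade
(the brick path is illustrative):

  # Record brick xattrs and the glusterd configuration beforehand, so a
  # post-upgrade diff shows exactly what changed:
  getfattr -d -m . -e hex /bricks/b0 > /root/brick-xattrs.before
  cp -a /var/lib/glusterd /root/glusterd-config.before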

Mauro

On Thu, October 15, 2015 04:14, Atin Mukherjee wrote:
>
>
> On 10/14/2015 05:50 PM, Roman wrote:
>> Hi,
>>
>> It's hard to comment on plans and things like these, but I suppose
>> everyone will be happy to have the possibility to upgrade from 3 to 4
>> without a new installation; an offline upgrade (shut down volumes and
>> upgrade) is also OK. And I'm somehow pretty sure that this upgrade
>> process should be pretty flawless, so that no one under any
>> circumstances would need any kind of rollback, so there should not be
>> any IFs :)
> Just to clarify: there will be, and has to be, an upgrade path. That's
> what I mentioned in point 4 of my mail. The only limitation here is that
> there is no rolling upgrade support.
>>
>> 2015-10-07 8:32 GMT+03:00 Atin Mukherjee:
>>
>> Hi All,
>>
>> Over the course of the design discussion, we got a chance to discuss
>> about the upgrades and backward compatibility strategy for Gluster
>> 4.0
>> and here is what we came up with:
>>
>> 1. 4.0 cluster would be separate from 3.x clusters. Heterogeneous
>> support won't be available.
>>
>> 2. All CLI interfaces exposed in 3.x would continue to work with
>> 4.x.
>>
>> 3. ReSTful APIs for all old & new management actions.
>>
>> 4. Upgrade path from 3.x to 4.x would be necessary. We need not
>> support
>> rolling upgrades, however all data layouts from 3.x would need to be
>> honored. Our upgrade path from 3.x to 4.x should not be cumbersome.
>>
>>
>> Initiative-wise upgrade strategy details:
>>
>> GlusterD 2.0
>> ------------
>>
>> - No rolling upgrade, service disruption is expected
>> - Smooth upgrade from 3.x to 4.x (migration script)
>> - Rollback - If upgrade fails, revert back to 3.x, old configuration
>> data shouldn't be wiped off.
>>
>>
>> DHT 2.0
>> ---
>> - No in place upgrade to DHT2
>> - Needs migration of data
>> - Backward compat, hence does not exist
>>
>> NSR
>> ---
>> - volume migration from AFR to NSR is possible with an offline
>> upgrade
>>
>> We would like to hear from the community about your opinion on this
>> strategy.
>>
>> Thanks,
>> Atin
>> ___
>> Gluster-users mailing list
>> gluster-us...@gluster.org 
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>>
>>
>>
>> --
>> Best regards,
>> Roman.
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>


-- 
Mauro Mozzarelli
Phone: +44 7941 727378
eMail: ma...@ezplanet.net

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Gluster 4.0 - upgrades & backward compatibility strategy

2015-10-15 Thread Mauro M.
One feature I would like to see in 4.0 is the ability to have a volume
started with only ONE brick up and running, at least as a configurable
option if not the default.

This was possible in 3.5, perhaps more by accident than by design, but it
disappeared in 3.6, and it is a major issue if I want to run a second
brick as standby only.

Right now I can do it in 3.7.4, but if I reboot the single brick I have
to stop and start the volume again before I can mount it; that is the
workaround I am using.
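
For reference, the workaround amounts to the following (volume and host
names are illustrative):

  gluster volume stop vol0
  gluster volume start vol0
  mount -t glusterfs gfs1:/vol0 /mnt/vol0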

Mauro

On Thu, October 15, 2015 04:14, Atin Mukherjee wrote:
>
>
> On 10/14/2015 05:50 PM, Roman wrote:
>> Hi,
>>
>> It's hard to comment on plans and things like these, but I suppose
>> everyone will be happy to have the possibility to upgrade from 3 to 4
>> without a new installation; an offline upgrade (shut down volumes and
>> upgrade) is also OK. And I'm somehow pretty sure that this upgrade
>> process should be pretty flawless, so that no one under any
>> circumstances would need any kind of rollback, so there should not be
>> any IFs :)
> Just to clarify: there will be, and has to be, an upgrade path. That's
> what I mentioned in point 4 of my mail. The only limitation here is that
> there is no rolling upgrade support.
>>
>> 2015-10-07 8:32 GMT+03:00 Atin Mukherjee:
>>
>> Hi All,
>>
>> Over the course of the design discussion, we got a chance to discuss
>> about the upgrades and backward compatibility strategy for Gluster
>> 4.0
>> and here is what we came up with:
>>
>> 1. 4.0 cluster would be separate from 3.x clusters. Heterogeneous
>> support won't be available.
>>
>> 2. All CLI interfaces exposed in 3.x would continue to work with
>> 4.x.
>>
>> 3. ReSTful APIs for all old & new management actions.
>>
>> 4. Upgrade path from 3.x to 4.x would be necessary. We need not
>> support
>> rolling upgrades, however all data layouts from 3.x would need to be
>> honored. Our upgrade path from 3.x to 4.x should not be cumbersome.
>>
>>
>> Initiative-wise upgrade strategy details:
>>
>> GlusterD 2.0
>> ------------
>>
>> - No rolling upgrade, service disruption is expected
>> - Smooth upgrade from 3.x to 4.x (migration script)
>> - Rollback - If upgrade fails, revert back to 3.x, old configuration
>> data shouldn't be wiped off.
>>
>>
>> DHT 2.0
>> ---
>> - No in place upgrade to DHT2
>> - Needs migration of data
>> - Backward compat, hence does not exist
>>
>> NSR
>> ---
>> - volume migration from AFR to NSR is possible with an offline
>> upgrade
>>
>> We would like to hear from the community about your opinion on this
>> strategy.
>>
>> Thanks,
>> Atin
>> ___
>> Gluster-users mailing list
>> gluster-us...@gluster.org 
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>>
>>
>>
>> --
>> Best regards,
>> Roman.
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>


-- 
Mauro Mozzarelli
Phone: +44 7941 727378
eMail: ma...@ezplanet.net

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Request for regression test plan

2015-10-15 Thread Amogha V
Hi,
We are evaluating GlusterFS for deployment with our product.

While going through the web materials for DFS I learnt that regression
testing is run on GlusterFS. I want to understand the tests run by the QA
team before GlusterFS is released to the outside world (more specifically,
I need the following):
* configuration details of the test setup at GlusterFS
* test plan document
* individual test cases run before GlusterFS is released to the outside world

This would help me avoid duplicate testing and add new scenarios when
testing our application.


Thanks,
Amogha
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Backup support for GlusterFS

2015-10-15 Thread Brian Lahoue
Has anyone tested backing up a fairly large Gluster implementation with 
Amanda/ZManda recently?






___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel
