Re: [Gluster-devel] What's latest on Glusto + GD2 integration?

2018-11-04 Thread Sankarshan Mukhopadhyay
On Mon, Nov 5, 2018 at 8:35 AM Atin Mukherjee  wrote:
>
> Thank you Rahul for the report. This does help to keep the community up to
> date on the effort being put in here and to understand where things stand.
> Some comments inline.
>
> On Sun, Nov 4, 2018 at 8:01 PM Rahul Hinduja  wrote:
>>
>> Hello,
>>
>> Over the past few weeks, a few folks have been engaged in integrating GD2
>> with the existing Glusto infrastructure/cases. This email is an attempt to
>> provide a high-level view of the work that's done so far and what's next.
>>
>> What's Done.
>>
>> Libraries incorporated / under review:
>>
>> Gluster Base Class and setup.py file required to read the config file and
>> install all the packages
>> Exception and lib-utils files required for all basic test cases
>> Common REST methods (POST, GET, DELETE) to handle REST APIs
>> Peer management libraries
>> Basic Volume management libraries
>> Basic Snapshot libraries
>> Self-heal libraries
>> Glusterd init
>> Mount operations
>> Device operations
>>
>> Note: I request you all to provide review comments on the libraries that
>> have been submitted. Over this week, Akarsha and Vaibhavi will try to
>> incorporate the review comments and bring these libraries to closure.
>>
>> Where is the repo?
>>
>> [1] https://review.gluster.org/#/q/project:glusto-libs
>>
>> Are we able to consume gd1 cases into gd2?
>>
>> We tried a PoC to run the glusterd and snapshot test cases (one by one) via
>> the modified automation and libraries. Following are the highlights:
>>
>> We were able to run 20 gd1 cases out of which 8 passed and 12 failed.
>> We were able to run 11 snapshot cases out of which 7 passed and 4 failed.
>>
>> Reasons for failures:
>>
>> Different volume options between GD1 and GD2
>
> Just to clarify here, we have an open GD2 issue,
> https://github.com/gluster/glusterd2/issues/739, which is being worked on and
> which should help us achieve this backward compatibility.
>>
>> Different error/output formats between GD1 and GD2
>
>
> We need to move towards parsing error codes rather than error messages. I'm
> aware that such infra was missing with GD1/CLI, but now that GD2 offers
> specific error codes, all command failures should be parsed via error/return
> codes in GD2. I believe the libraries/tests need to be modified accordingly
> to handle both GD1- and GD2-based failures.
>
>> For more detail on which test cases passed or failed, and the reasons for
>> the failures, see [2].
>>
>> [2] 
>> https://docs.google.com/spreadsheets/d/1O9JXQ2IgRIg5uZjCacybk3BMIjMmMeZsiv3-x_RTHWg/edit?usp=sharing
>>

Do these failures require bugs or issues to track their resolution?

>> For more information/collaboration, please reach out to:
>>
>> Shrivaibavi Raghaventhiran (sragh...@redhat.com)
>> Akarsha Rai (ak...@redhat.com)
>> Rahul Hinduja (rhind...@redhat.com)
>>

Should we not be using  for these conversations as well?
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] What's latest on Glusto + GD2 integration?

2018-11-04 Thread Atin Mukherjee
Thank you Rahul for the report. This does help to keep the community up to date
on the effort being put in here and to understand where things stand. Some
comments inline.

On Sun, Nov 4, 2018 at 8:01 PM Rahul Hinduja  wrote:

> Hello,
>
> Over the past few weeks, a few folks have been engaged in integrating GD2
> with the existing Glusto infrastructure/cases. This email is an attempt to
> provide a high-level view of the work that's done so far and what's next.
>
>
> *What's Done.*
>
>- Libraries incorporated / under review:
>   - Gluster Base Class and setup.py file required to read the config file
>   and install all the packages
>   - Exception and lib-utils files required for all basic test cases
>   - Common REST methods (POST, GET, DELETE) to handle REST APIs
>   - Peer management libraries
>   - Basic Volume management libraries
>   - Basic Snapshot libraries
>   - Self-heal libraries
>   - Glusterd init
>   - Mount operations
>   - Device operations
>
> *Note:* I request you all to provide review comments on the libraries that
> have been submitted. Over this week, Akarsha and Vaibhavi will try to
> incorporate the review comments and bring these libraries to closure.
>
>- Where is the repo?
>
> [1] https://review.gluster.org/#/q/project:glusto-libs
>
>- Are we able to consume gd1 cases into gd2?
>   - We tried a PoC to run the glusterd and snapshot test cases (one by one)
>   via the modified automation and libraries. Following are the highlights:
>  - We were able to run 20 gd1 cases out of which 8 passed and 12
>  failed.
>  - We were able to run 11 snapshot cases out of which 7 passed
>  and 4 failed.
>   - Reasons for failures:
>  - Different volume options between GD1 and GD2
>
Just to clarify here, we have an open GD2 issue,
https://github.com/gluster/glusterd2/issues/739, which is being worked on and
which should help us achieve this backward compatibility.
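Until that issue lands, a stop-gap in the test libraries could be a small
option-name shim so the same case can drive both daemons. The sketch below is
purely illustrative under that assumption: the mapping entry and the helper
name are hypothetical and not part of glusto-libs.

    # Hypothetical shim: normalize volume option names before a volume-set,
    # so one test case can drive both GD1 and GD2 until glusterd2#739
    # restores backward compatibility. The mapping entry is an example only.
    GD1_TO_GD2_OPTION_MAP = {
        "cluster.self-heal-daemon": "replicate.self-heal-daemon",  # example
    }

    def normalize_volume_option(option, use_gd2):
        """Return the option name for the management daemon in use."""
        if use_gd2:
            return GD1_TO_GD2_OPTION_MAP.get(option, option)
        return option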

>
>  - Different error/output formats between GD1 and GD2
>
>
We need to move towards parsing error codes rather than error messages. I'm
aware that such infra was missing with GD1/CLI, but now that GD2 offers
specific error codes, all command failures should be parsed via error/return
codes in GD2. I believe the libraries/tests need to be modified accordingly
to handle both GD1- and GD2-based failures.
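To make this concrete, here is a minimal sketch of code-based failure handling,
assuming the GD2 REST error response is a requests-style object carrying a JSON
body with a numeric code; the JSON field names and helper functions below are
assumptions for illustration, not the actual glusto-libs API.

    # Illustrative only: judge GD2 failures by HTTP status and error code,
    # and GD1/CLI failures by the process return code, never by grepping
    # error message text. The JSON field names are assumed.
    def gd2_failed_with(response, expected_code):
        """Check a GD2 REST failure by its error code."""
        if response.status_code < 400:
            return False
        body = response.json()
        return body.get("error", {}).get("code") == expected_code

    def gd1_failed(ret):
        """For GD1/CLI, rely on the non-zero return code only."""
        return ret != 0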


>- For more detail on which test cases passed or failed, and the reasons
>  for the failures, see [2]
> - [2]
> 
> https://docs.google.com/spreadsheets/d/1O9JXQ2IgRIg5uZjCacybk3BMIjMmMeZsiv3-x_RTHWg/edit?usp=sharing
>
>
> *What's next?*
>
>- We have identified a few gaps when we triggered the glusterd and snapshot
>cases. Details are in column C of [2]. We are in the process of closing those
>gaps so that we don't have to hard-code or skip any functions in the test
>cases.
>- Develop additional libraries / modify existing ones for the cases that got
>skipped.
>- Need to check on the volume options and error message/output formats. This
>is being brought up in the GD2 standup to freeze on the parity and to rework
>at the functional code level or the automation code level.
>- I am aiming to provide a bi-weekly report on this integration work to the
>mailing list.
>
>
>
> *For more information/collaboration, please reach out to:*
>
>- Shrivaibavi Raghaventhiran (sragh...@redhat.com)
>- Akarsha Rai (ak...@redhat.com)
>- Rahul Hinduja (rhind...@redhat.com)
>
> Regards,
> Rahul Hinduja
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Weekly Untriaged Bugs

2018-11-04 Thread jenkins
[...truncated 6 lines...]
https://bugzilla.redhat.com/1644758 / core: CVE-2018-14660 glusterfs: Repeat 
use of "GF_META_LOCK_KEY" xattr allows for memory exhaustion [fedora-all]
https://bugzilla.redhat.com/1641969 / core: Mounted Dir Gets Error in GlusterFS 
Storage Cluster with SSL/TLS Encryption as Doing add-brick and remove-brick 
Repeatly
https://bugzilla.redhat.com/1642804 / core: remove 'decompounder' xlator and 
compound fops from glusterfs codebase
https://bugzilla.redhat.com/1636965 / fuse: key sync-to-mount, string type 
asked, has pointer type [Invalid argument]
https://bugzilla.redhat.com/1644322 / geo-replication: flooding log with 
"glusterfs-fuse: read from /dev/fuse returned -1 (Operation not permitted)"
https://bugzilla.redhat.com/1643716 / geo-replication: "OSError: [Errno 40] Too 
many levels of symbolic links" when syncing deletion of directory hierarchy
https://bugzilla.redhat.com/1637743 / glusterd: Glusterd seems to be attempting 
to start the same brick process twice
https://bugzilla.redhat.com/1640109 / md-cache: Default ACL cannot be removed
https://bugzilla.redhat.com/1644246 / project-infrastructure: Add github users 
to gluster.org and gluster-prometheus project
https://bugzilla.redhat.com/1645776 / project-infrastructure: Consider 
gd2-smoke as part of main vote
https://bugzilla.redhat.com/163 / project-infrastructure: GD2 containers 
build do not seems to use a up to date container
https://bugzilla.redhat.com/1641021 / project-infrastructure: gd2-smoke do not 
seems to clean itself after a crash
https://bugzilla.redhat.com/1643256 / project-infrastructure: Suse-packaging@ 
is bouncing
https://bugzilla.redhat.com/1638883 / replicate: gluster heal problem
https://bugzilla.redhat.com/1637119 / sharding: sharding: ABRT report for 
package glusterfs has reached 100 occurrences
https://bugzilla.redhat.com/1642168 / unclassified: changes to cloudsync xlator
[...truncated 2 lines...]

build.log
Description: Binary data
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] What's latest on Glusto + GD2 integration?

2018-11-04 Thread Rahul Hinduja
Hello,

Over the past few weeks, a few folks have been engaged in integrating GD2 with
the existing Glusto infrastructure/cases. This email is an attempt to provide a
high-level view of the work that's done so far and what's next.


*What's Done.*

   - Libraries incorporated / under review:
  - Gluster Base Class and setup.py file required to read the config file
  and install all the packages
  - Exception and lib-utils files required for all basic test cases
  - Common REST methods (POST, GET, DELETE) to handle REST APIs (see the
  sketch after this list)
  - Peer management libraries
  - Basic Volume management libraries
  - Basic Snapshot libraries
  - Self-heal libraries
  - Glusterd init
  - Mount operations
  - Device operations
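As a rough illustration of the "common REST methods" item above, a thin wrapper
over the requests library could look like the sketch below; the class name,
default port, and example endpoint are assumptions for illustration, not the
actual glusto-libs interface.

    # Minimal sketch of common REST helpers (illustrative; the class name,
    # default port, and example endpoint are assumptions, not glusto-libs API).
    import requests

    class RestClient:
        def __init__(self, host, port=24007):
            self.base_url = "http://{}:{}".format(host, port)

        def get(self, path):
            return requests.get(self.base_url + path)

        def post(self, path, payload=None):
            return requests.post(self.base_url + path, json=payload)

        def delete(self, path):
            return requests.delete(self.base_url + path)

    # Example usage against a hypothetical peers endpoint:
    # client = RestClient("server1.example.com")
    # resp = client.get("/v1/peers")
    # assert resp.status_code == 200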

*Note:* I request you all to provide review comments on the libraries that
have been submitted. Over this week, Akarsha and Vaibhavi will try to
incorporate the review comments and bring these libraries to closure.

   - Where is the repo?

[1] https://review.gluster.org/#/q/project:glusto-libs

   - Are we able to consume gd1 cases into gd2?
  - We tried a PoC to run the glusterd and snapshot test cases (one by one)
  via the modified automation and libraries. Following are the highlights:
 - We were able to run 20 gd1 cases out of which 8 passed and 12
 failed.
 - We were able to run 11 snapshot cases out of which 7 passed and
 4 failed.
  - Reasons for failures:
 - Different volume options between GD1 and GD2
 - Different error/output formats between GD1 and GD2
 - For more detail on which test cases passed or failed, and the
 reasons for the failures, see [2]
- [2]

https://docs.google.com/spreadsheets/d/1O9JXQ2IgRIg5uZjCacybk3BMIjMmMeZsiv3-x_RTHWg/edit?usp=sharing


*What's next?*

   - We have identified a few gaps when we triggered the glusterd and snapshot
   cases. Details are in column C of [2]. We are in the process of closing
   those gaps so that we don't have to hard-code or skip any functions in the
   test cases.
   - Develop additional libraries / modify existing ones for the cases that
   got skipped.
   - Need to check on the volume options and error message/output formats.
   This is being brought up in the GD2 standup to freeze on the parity and to
   rework at the functional code level or the automation code level.
   - I am aiming to provide a bi-weekly report on this integration work to
   the mailing list.

*For more information/collaboration, please reach out to:*

   - Shrivaibavi Raghaventhiran (sragh...@redhat.com)
   - Akarsha Rai (ak...@redhat.com)
   - Rahul Hinduja (rhind...@redhat.com)

Regards,
Rahul Hinduja
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel