Re: [Gluster-devel] break glusterd into small parts (Re: good job on fixing heavy hitters in spurious regressions)

2015-05-09 Thread Atin Mukherjee


On 05/09/2015 01:36 PM, Pranith Kumar Karampuri wrote:
 
 On 05/09/2015 11:08 AM, Krishnan Parthasarathi wrote:
 Ah! now I understood the confusion. I never said the maintainer should fix
 all the bugs in tests. I am only saying that they maintain tests, just
 like we maintain code. Whether you personally work on it or not, you at
 least have an idea of what the problem is and what the solution is, so
 someone can come and ask you and you know the status of it. The
 expectation is not that the maintainer alone fixes every test failure
 that comes his/her way. But he/she would know about the problem/solution
 because he/she at least reviews and merges it. We want to make sure that
 the tests are of good quality as well, just like we make sure the code is
 of good quality. Core is a special case. We will handle it separately.
 Glusterd is also a 'special' case. As a glusterd maintainer, I am _not_
 maintaining insert-your-favourite-gluster-command-here's implementation.
 So, I don't 'know'/'understand' how it has been implemented and by
 extension I wouldn't be able to fix it (forget maintaining it :-) ).
 Given the no. of gluster commands, I won't be surprised if I didn't have
 an inkling of how your-favourite-gluster-command worked ;-)
 I hope this encourages other contributors, i.e., any gluster (feature)
 contributor, to join Kaushal and me in maintaining glusterd.
 I understand the frustration, kp :-). The human brain can only take so
 much. I think we are solving the wrong problem by putting more people on
 the code. Why not break glusterd into small parts and distribute the load
 to different people? Did you guys plan anything for 4.0 for breaking
 glusterd? It is going to be maintenance hell if we don't break it sooner.
 
 Glusterd does a lot of things; let's see how we can break things up one
 thing at a time. I would love to spend some quality time thinking about
 this problem once I am done with ec work, but this is a rough idea I
 have for glusterd.
 
 1) CLI handling:
 Glusterd-cli-xlator should act something like fuse does in the fs. It
 just gets the commands and passes them down, just like fuse gets the fops
 and passes them down. In the glusterd process there should be
 snapshot.so, afr-cli.so, ec-cli.so, dht-cli.so loaded as management
 xlators. Just like we have fops, let's have mops (management operations):
 LOCK/STAGE/BRICK-OP/COMMIT-OP; if there are more, add them as well. Every
 time the top xlator in glusterd receives a command from the cli, it
 converts the params into the arguments (req, op, dict etc.) which are
 needed to carry out the cli. Now it winds the mop to all its children.
 One of the children is going to handle it locally, while the other child
 will send the cli to the different glusterds that are in the cluster. The
 second child of glusterd-cli-xlator (give it a better name, but for now
 let's call it: mgmtcluster) will collate the responses and give the list
 of responses to glusterd-cli-xlator, which will call a COLLATE mop on the
 first child (let's call it local-handler) to collate the responses, i.e.
 the logic for collating responses should also be in snapshot.so,
 afr-cli.so, dht-cli.so etc. Once the top translator has done LOCK, STAGE,
 BRICK-OP and COMMIT-OP, it sends the response to the CLI.
 
 2) Volinfo should become more like inode_t in the fs, where each *-cli
 xlator can store its own ctx: snapshot-cli can store all snapshot-related
 info for that volume in that context and afr can store afr-related info
 in the ctx. The volinfo data structure should hold very minimal
 information: maybe name, bricks etc.
 
 3) Daemon handling:
  A daemon-manager xlator should have mops like START/STOP/INFO, and
 this xlator should be accessible to all the *-cli xlators which want to
 do their own management of the daemons, i.e. ec-cli/afr-cli should do
 self-heal-daemon handling, dht should do rebalance-process handling, etc.
 To give an example: while winding the START mop, it has to specify the
 daemon as self-heal-daemon and give enough info etc.
 
 4) Peer handling:
 mgmtcluster (the second child of the top xlator) should have mops like
 PEER_ADD/PEER_DEL/PEER_UPDATE etc. to do the needful. The top xlator is
 going to wind these operations to this xlator based on the peer cli
 commands.
 
 5) volgen:
 The top xlator is going to wind a mop called GET_NODE_LINKS, which
 takes the type of volfile (i.e. mount/nfs/shd/brick etc.), on which each
 *-cli will construct its node(s), stuff in options and name the parent xl
 to which it needs to be linked. The top xlator is going to just link the
 nodes to construct the graph and do graph_print to generate the volfile.
 
 I am pretty sure I forgot some more aspects of what glusterd does, but
 you get the picture, right? Break each aspect into a different xlator and
 have mops to solve them.
Sounds interesting, but it needs to be thought out in detail. For 4.0, we
do have a plan to make the core glusterd algorithms work as a glusterd
engine, and other features will have interfaces to connect to it. Your
proposal looks like another alternative. I would like to hear from

Re: [Gluster-devel] break glusterd into small parts (Re: good job on fixing heavy hitters in spurious regressions)

2015-05-09 Thread Pranith Kumar Karampuri


On 05/09/2015 03:19 PM, Krishnan Parthasarathi wrote:

Why not break glusterd into small parts and distribute the load to
different people? Did you guys plan anything for 4.0 for breaking glusterd?
It is going to be maintenance hell if we don't break it sooner.

Good idea. We have thought about it. Just re-architecting glusterd doesn't
(and will not) solve the division-of-responsibility issue that is being
discussed here. It's already difficult to maintain glusterd. I have
already explained the reasons in the previous thread.
I was thinking the *-cli xlators could be maintained by the respective fs
team itself. It is easier to maintain them this way because each of those
xls can be put in xlators/cluster/afr/cli, xlators/cluster/dht/cli, etc.
My feeling is that this way there will be a clear demarcation of who owns
what. Even the tests can be organized into tests/afr-cli, tests/dht-cli, etc.

We have some initial ideas on what glusterd for 4.0 would look like. We
won't be continuing with the 'glusterd is also a translator' model. The
above model would work well only if we stuck with the stack-of-translators
approach.
Oh nice, I might have missed the mails. Do you mind sharing the plan for
4.0? Any reason why you guys do not want to continue with glusterd as a
translator model?


Pranith


Re: [Gluster-devel] break glusterd into small parts (Re: good job on fixing heavy hitters in spurious regressions)

2015-05-09 Thread Krishnan Parthasarathi

 Oh nice, I might have missed the mails. Do you mind sharing the plan for
 4.0? Any reason why you guys do not want to continue with glusterd as a
 translator model?

I don't understand why we are using the translator model in the first place.
I guess it was to reuse the rpc code. You should be able to shed more light
here. A quick Google search for "glusterd 2.0 gluster-users" gave me this:
http://www.gluster.org/pipermail/gluster-users/2014-September/018639.html.
Interestingly, you asked us to consider AFR/NSR for distributed configuration
management, which led to
http://www.gluster.org/pipermail/gluster-devel/2014-November/042944.html.
That proposal didn't go in the expected direction.

I don't want to get into why not to use translators now. We are currently
heading in the direction visible in the above threads. If glusterd can't
be a translator anymore, so be it.


[Gluster-devel] break glusterd into small parts (Re: good job on fixing heavy hitters in spurious regressions)

2015-05-09 Thread Pranith Kumar Karampuri


On 05/09/2015 11:08 AM, Krishnan Parthasarathi wrote:

Ah! now I understood the confusion. I never said the maintainer should fix
all the bugs in tests. I am only saying that they maintain tests, just
like we maintain code. Whether you personally work on it or not, you at
least have an idea of what the problem is and what the solution is, so
someone can come and ask you and you know the status of it. The
expectation is not that the maintainer alone fixes every test failure
that comes his/her way. But he/she would know about the problem/solution
because he/she at least reviews and merges it. We want to make sure that
the tests are of good quality as well, just like we make sure the code is
of good quality. Core is a special case. We will handle it separately.

Glusterd is also a 'special' case. As a glusterd maintainer, I am _not_
maintaining insert-your-favourite-gluster-command-here's implementation.
So, I don't 'know'/'understand' how it has been implemented and by
extension I wouldn't be able to fix it (forget maintaining it :-) ).
Given the no. of gluster commands, I won't be surprised if I didn't have
an inkling of how your-favourite-gluster-command worked ;-)
I hope this encourages other contributors, i.e., any gluster (feature)
contributor, to join Kaushal and me in maintaining glusterd.
I understand the frustration, kp :-). The human brain can only take so
much. I think we are solving the wrong problem by putting more people on
the code. Why not break glusterd into small parts and distribute the load
to different people? Did you guys plan anything for 4.0 for breaking
glusterd?

It is going to be maintenance hell if we don't break it sooner.

Glusterd does a lot of things; let's see how we can break things up one
thing at a time. I would love to spend some quality time thinking about
this problem once I am done with ec work, but this is a rough idea I
have for glusterd.


1) CLI handling:
Glusterd-cli-xlator should act something like fuse does in the fs. It just
gets the commands and passes them down, just like fuse gets the fops and
passes them down. In the glusterd process there should be snapshot.so,
afr-cli.so, ec-cli.so, dht-cli.so loaded as management xlators.
Just like we have fops, let's have mops (management operations):
LOCK/STAGE/BRICK-OP/COMMIT-OP; if there are more, add them as well. Every
time the top xlator in glusterd receives a command from the cli, it
converts the params into the arguments (req, op, dict etc.) which are
needed to carry out the cli. Now it winds the mop to all its children. One
of the children is going to handle it locally, while the other child will
send the cli to the different glusterds that are in the cluster. The
second child of glusterd-cli-xlator (give it a better name, but for now
let's call it: mgmtcluster) will collate the responses and give the list
of responses to glusterd-cli-xlator, which will call a COLLATE mop on the
first child (let's call it local-handler) to collate the responses, i.e.
the logic for collating responses should also be in snapshot.so,
afr-cli.so, dht-cli.so etc. Once the top translator has done LOCK, STAGE,
BRICK-OP and COMMIT-OP, it sends the response to the CLI (see the
mops-table sketch below).


2) Volinfo should become more like inode_t in the fs, where each *-cli
xlator can store its own ctx: snapshot-cli can store all snapshot-related
info for that volume in that context and afr can store afr-related info in
the ctx. The volinfo data structure should hold very minimal information:
maybe name, bricks etc. (see the volinfo sketch below).


3) Daemon handling:
 A daemon-manager xlator should have mops like START/STOP/INFO, and
this xlator should be accessible to all the *-cli xlators which want to do
their own management of the daemons, i.e. ec-cli/afr-cli should do
self-heal-daemon handling, dht should do rebalance-process handling, etc.
To give an example: while winding the START mop, it has to specify the
daemon as self-heal-daemon and give enough info etc.


4) Peer handling:
mgmtcluster (the second child of the top xlator) should have mops like
PEER_ADD/PEER_DEL/PEER_UPDATE etc. to do the needful. The top xlator is
going to wind these operations to this xlator based on the peer cli
commands.


5) volgen:
 The top xlator is going to wind a mop called GET_NODE_LINKS, which
takes the type of volfile (i.e. mount/nfs/shd/brick etc.), on which each
*-cli will construct its node(s), stuff in options and name the parent xl
to which it needs to be linked. The top xlator is going to just link the
nodes to construct the graph and do graph_print to generate the volfile
(see the GET_NODE_LINKS sketch below).


I am pretty sure I forgot some more aspects of what glusterd does, but you
get the picture, right? Break each aspect into a different xlator and have
mops to solve them.

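Similarly, for 2), a sketch of a volinfo that keeps only core fields plus
per-xlator context slots, in the spirit of inode_ctx_put()/inode_ctx_get()
on the fs side. Again, every identifier here is hypothetical:

#include <stddef.h>

#define VOLINFO_CTX_SLOTS 16

struct xlator;                        /* opaque stand-in for xlator_t  */

struct volinfo_ctx {
    struct xlator *key;               /* which *-cli xlator owns value */
    void          *value;             /* e.g. snapshot list, afr info  */
};

struct volinfo {
    char               name[256];     /* minimal core: name, bricks... */
    int                brick_count;
    struct volinfo_ctx ctx[VOLINFO_CTX_SLOTS];
};

/* Park a *-cli xlator's private data on the volinfo, keyed by the
 * xlator's address, reusing an existing slot if the key matches. */
static int
volinfo_ctx_set(struct volinfo *v, struct xlator *xl, void *value)
{
    for (size_t i = 0; i < VOLINFO_CTX_SLOTS; i++) {
        if (v->ctx[i].key == NULL || v->ctx[i].key == xl) {
            v->ctx[i].key   = xl;
            v->ctx[i].value = value;
            return 0;
        }
    }
    return -1;                        /* all slots taken */
}

/* Fetch the private data a *-cli xlator stored earlier, or NULL. */
static void *
volinfo_ctx_get(struct volinfo *v, struct xlator *xl)
{
    for (size_t i = 0; i < VOLINFO_CTX_SLOTS; i++)
        if (v->ctx[i].key == xl)
            return v->ctx[i].value;
    return NULL;
}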

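And for 5), one possible shape for the GET_NODE_LINKS contract: each *-cli
hands back the node(s) it contributes plus the name of the parent it wants
to hang under, and the top xlator only links nodes and prints the graph.
Hypothetical, like the sketches above:

/* What each *-cli could return when GET_NODE_LINKS is wound. */
struct graph_node {
    const char *name;     /* e.g. "myvol-replicate-0"             */
    const char *type;     /* e.g. "cluster/replicate"             */
    const char *parent;   /* xl-name to link under; NULL for root */
    const char *options;  /* flattened "key value" option pairs   */
};

/* Each *-cli fills 'out' with its nodes for the given volfile type
 * (mount/nfs/shd/brick) and returns how many it wrote; the top xlator
 * resolves the parent links and runs the equivalent of graph_print. */
typedef int (*get_node_links_fn_t)(const char *volfile_type,
                                   struct graph_node *out, int max_out);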
Pranith


Re: [Gluster-devel] break glusterd into small parts (Re: good job on fixing heavy hitters in spurious regressions)

2015-05-09 Thread Krishnan Parthasarathi

 Why not break glusterd into small parts and distribute the load to
 different people? Did you guys plan anything for 4.0 for breaking glusterd?
 It is going to be maintenance hell if we don't break it sooner.

Good idea. We have thought about it. Just re-architecting glusterd doesn't
(and will not) solve the division-of-responsibility issue that is being
discussed here. It's already difficult to maintain glusterd. I have
already explained the reasons in the previous thread.

We have some initial ideas on what glusterd for 4.0 would look like. We
won't be continuing with the 'glusterd is also a translator' model. The
above model would work well only if we stuck with the stack-of-translators
approach.


Re: [Gluster-devel] break glusterd into small parts (Re: good job on fixing heavy hitters in spurious regressions)

2015-05-09 Thread Pranith Kumar Karampuri


On 05/09/2015 02:21 PM, Atin Mukherjee wrote:

Sounds interesting, but it needs to be thought out in detail. For 4.0, we
do have a plan to make the core glusterd algorithms work as a glusterd
engine, and other features will have interfaces to connect to it. Your
proposal looks like another alternative. I would like to hear from

Re: [Gluster-devel] break glusterd into small parts (Re: good job on fixing heavy hitters in spurious regressions)

2015-05-09 Thread Kaushal M
Modularising GlusterD is something we plan to do. As of now, it's just
that: a plan. We don't have a design to achieve it yet.

What Atin mentioned and what you've mentioned seem to be the same at a
high level. The core of GlusterD will be a co-ordinating engine, which
defines an interface for commands to use to do their work. The commands
will each be a separate module implementing this interface. Depending on
how we implement it, the actual names will be different.
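
As a strawman for what such a command interface might look like in C (no
design exists yet, so every name below is made up for illustration):

/* Hypothetical "commands as modules" interface for a co-ordinating
 * engine. None of these names come from an actual GlusterD design. */
typedef struct dict dict_t;           /* stand-in for glusterfs's dict_t */

struct gd_command_ops {
    const char *name;                         /* e.g. "volume create"  */
    int (*validate)(dict_t *args);            /* per-node sanity check */
    int (*commit)(dict_t *args, dict_t *rsp); /* apply on this node    */
    int (*rollback)(dict_t *args);            /* undo after a failure  */
};

/* Each command module would register itself with the engine, which
 * then co-ordinates validate/commit/rollback across the cluster. */
int gd_engine_register(const struct gd_command_ops *ops);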

Re: [Gluster-devel] break glusterd into small parts (Re: good job on fixing heavy hitters in spurious regressions)

2015-05-09 Thread Pranith Kumar Karampuri


On 05/09/2015 03:04 PM, Kaushal M wrote:

Modularising GlusterD is something we plan to do. As of now, it's just
that: a plan. We don't have a design to achieve it yet.

What Atin mentioned and what you've mentioned seem to be the same at a
high level. The core of GlusterD will be a co-ordinating engine, which
defines an interface for commands to use to do their work. The commands
will each be a separate module implementing this interface. Depending on
how we implement it, the actual names will be different.
Yes, this is a nice approach. It would be nice if there is a clear
demarcation for the code as well, so there won't be any dependency between
merging dht changes and, say, afr changes in the cli. That is why I was
suggesting an xlator-based solution. But other ways of doing it where
there is a clear demarcation are welcome as well. Would love to know more
about the other approaches :-).


Pranith

Re: [Gluster-devel] break glusterd into small parts (Re: good job on fixing heavy hitters in spurious regressions)

2015-05-09 Thread Pranith Kumar Karampuri


On 05/09/2015 04:23 PM, Krishnan Parthasarathi wrote:

Oh nice, I might have missed the mails. Do you mind sharing the plan for
4.0? Any reason why you guys do not want to continue with glusterd as a
translator model?

I don't understand why we are using the translator model in the first place.
I guess it was to reuse the rpc code. You should be able to shed more light
here.

Even I am not sure :-). It was already a translator by the time I got in.

A quick Google search for "glusterd 2.0 gluster-users" gave me this:
http://www.gluster.org/pipermail/gluster-users/2014-September/018639.html.
Interestingly, you asked us to consider AFR/NSR for distributed configuration
management, which led to
http://www.gluster.org/pipermail/gluster-devel/2014-November/042944.html.
That proposal didn't go in the expected direction.

I don't want to get into why not to use translators now. We are currently
heading in the direction visible in the above threads. If glusterd can't
be a translator anymore, so be it.
Kaushal's response gave me the answers I was looking for. We should
probably discuss it more once you guys come up with the interface that the
CLI-handling code needs to follow. I was thinking it would be great if you
came up with a model where the handler code is separate from the core
glusterd code, which is what you guys seem to be targeting. The translator
model is one way of achieving it; I personally love it on the fs side,
which is why I was curious why it was not used. But any other way that
meets the above requirements is welcome.

Really excited to see what will come up :-).

Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel