Re: [openstack-dev] [Octavia] Using Nova Scheduling Affinity and AntiAffinity

2014-09-02 Thread Susanne Balle
Just wanted to let you know that sahara has moved to server groups for
anti-affinity. This is IMHO the way we should do it as well.

Susanne

Jenkins has posted comments on this change.

Change subject: Switched anti-affinity feature to server groups
......................................................................


Patch Set 15: Verified+1

Build succeeded.

- gate-sahara-pep8 http://logs.openstack.org/59/112159/15/check/gate-sahara-pep8/15869b3 : SUCCESS in 3m 46s
- gate-sahara-docs http://docs-draft.openstack.org/59/112159/15/check/gate-sahara-docs/dd9eecd/doc/build/html/ : SUCCESS in 4m 20s
- gate-sahara-python26 http://logs.openstack.org/59/112159/15/check/gate-sahara-python26/027c775 : SUCCESS in 4m 53s
- gate-sahara-python27 http://logs.openstack.org/59/112159/15/check/gate-sahara-python27/08f492a : SUCCESS in 3m 36s
- check-tempest-dsvm-full http://logs.openstack.org/59/112159/15/check/check-tempest-dsvm-full/e30530a : SUCCESS in 59m 21s
- check-tempest-dsvm-postgres-full http://logs.openstack.org/59/112159/15/check/check-tempest-dsvm-postgres-full/9e90341 : SUCCESS in 1h 19m 32s
- check-tempest-dsvm-neutron-heat-slow http://logs.openstack.org/59/112159/15/check/check-tempest-dsvm-neutron-heat-slow/70b1955 : SUCCESS in 21m 30s
- gate-sahara-pylint http://logs.openstack.org/59/112159/15/check/gate-sahara-pylint/55250e1 : SUCCESS in 5m 18s (non-voting)

--
To view, visit https://review.openstack.org/112159
To unsubscribe, visit https://review.openstack.org/settings

Gerrit-MessageType: comment
Gerrit-Change-Id: I501438d84f3a486dad30081b05933f59ebab4858
Gerrit-PatchSet: 15
Gerrit-Project: openstack/sahara
Gerrit-Branch: master
Gerrit-Owner: Andrew Lazarev 
Gerrit-Reviewer: Alexander Ignatov 
Gerrit-Reviewer: Andrew Lazarev 
Gerrit-Reviewer: Dmitry Mescheryakov 
Gerrit-Reviewer: Jenkins
Gerrit-Reviewer: Michael McCune 
Gerrit-Reviewer: Sahara Hadoop Cluster CI 
Gerrit-Reviewer: Sergey Lukjanov 
Gerrit-Reviewer: Sergey Reshetnyak 
Gerrit-Reviewer: Trevor McKay 
Gerrit-Reviewer: Vitaly Gridnev 
Gerrit-HasComments: No


On Thu, Aug 28, 2014 at 1:26 AM, Brandon Logan 
wrote:

> Nova scheduler has ServerGroupAffinityFilter and
> ServerGroupAntiAffinityFilter, which handle colocation and apolocation
> for VMs.  I think this is something we've discussed before about taking
> advantage of nova's scheduling.  I need to verify that this will work
> with what we (RAX) plan to do, but I'd like to get everyone else's
> thoughts.  Also, if we do decide this works for everyone involved,
> should we make it mandatory that the nova-compute services are running
> these two filters?  I'm also trying to see if we can use this to also do
> our own colocation and apolocation on load balancers, but it looks like
> it will be a bit complex if it can even work.  Hopefully, I can have
> something definitive on that soon.
>
> Thanks,
> Brandon
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [Octavia] Using Nova Scheduling Affinity and AntiAffinity

2014-08-28 Thread Brandon Logan
We may have to build quotas or something similar into Octavia.  Since we
are keeping the concept of tenant in Octavia, this may as well be done.
That probably doesn't totally solve the problem, though.

ServerGroups will be great to use for keeping VMs off the same host for
HA.  It will be tough to use them for colocation and apolocation of
loadbalancers, though not impossible from my shallow research of it.
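For context, the anti-affinity policy a server group enforces reduces to a
simple host-membership check. The following is a minimal stand-in sketch of
that check (illustrative only, with a hypothetical helper name; it is not
nova's actual filter code):

```python
# Sketch of what ServerGroupAntiAffinityFilter effectively enforces for a
# server group with the "anti-affinity" policy: no two members of the group
# may land on the same hypervisor host.

def host_passes_anti_affinity(candidate_host, group_member_hosts):
    """True if candidate_host may receive another member of the group."""
    return candidate_host not in group_member_hosts

# Example: a load balancer's VMs already sit on hosts I and II, so host II
# is rejected for the next member while host III is accepted.
print(host_passes_anti_affinity("II", {"I", "II"}))   # -> False
print(host_passes_anti_affinity("III", {"I", "II"}))  # -> True
```

In the real API, the group is created with an anti-affinity policy and each
boot request passes the group id as a scheduler hint; the filter then applies
a check like the one above per candidate host.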

P.S. I feel like I wrote this before so sorry if this is a duplicate,
but I must be losing my mind.

Thanks,
Brandon

On Thu, 2014-08-28 at 17:36 -0400, Susanne Balle wrote:
> We need to be careful. I believe that a user can use these filters to
> keep requesting VMs until, in the case of nova, they reach the size of
> your cloud.
> 
> 
> Also, given that nova now has ServerGroups, let's not make a quick
> decision on using something that is being replaced with something
> better. I suggest we investigate ServerGroups a little more before we
> discard them.
> 
> The operator should really decide how he/she wants Anti-affinity by
> setting the right filters in nova.
> 
> 
> Susanne
> 
> 
> On Thu, Aug 28, 2014 at 5:12 PM, Brandon Logan
>  wrote:
> Trevor and I just worked through some scenarios to make sure it can
> handle colocation and apolocation.  It looks like it does; however, not
> everything will be so simple, especially when we introduce horizontal
> scaling.  Trevor's going to write up an email about some of the caveats,
> but so far just using a table to track what LB has what VMs and on what
> hosts will be sufficient.
> 
> Thanks,
> Brandon
> 
> On Thu, 2014-08-28 at 13:49 -0700, Stephen Balukoff wrote:
> > I'm trying to think of a use case that wouldn't be satisfied using
> > those filters and am not coming up with anything. As such, I don't
> > see a problem using them to fulfill our requirements around
> > colocation and apolocation.
> >
> > Stephen
> >
> > On Thu, Aug 28, 2014 at 1:13 PM, Brandon Logan wrote:
> > > Yeah, we were looking at the SameHost and DifferentHost filters,
> > > and they will probably do what we need.  Though I was hoping we
> > > could do a combination of both, we can make it work with those
> > > filters, I believe.
> > >
> > > Thanks,
> > > Brandon
> > >
> > > On Thu, 2014-08-28 at 14:56 -0400, Susanne Balle wrote:
> > > > Brandon,
> > > >
> > > > I am not sure how ready that nova feature is for general use and
> > > > have asked our nova lead about that. He is on vacation but should
> > > > be back by the start of next week. I believe this is the right
> > > > approach for us moving forward.
> > > >
> > > > We cannot make it mandatory to run the 2 filters, but we can say
> > > > in the documentation that if these two filters aren't set, we
> > > > cannot guarantee anti-affinity or affinity.
> > > >
> > > > The other way we can implement this is by using availability
> > > > zones and host aggregates. This is one technique we use to make
> > > > sure we deploy our in-cloud services in an HA model. This also
> > > > would assume that the operator is setting up availability zones,
> > > > which we can't.
> > > >
> > > > http://blog.russellbryant.net/2013/05/21/availability-zones-and-host-aggregates-in-openstack-compute-nova/
> > > >
> > > > Sahara is currently using the following filters to support host
> > > > affinity, which is probably because they did the work before
> > > > ServerGroups. I am not advocating the use of those filters, but
> > > > just showing you that we can document the feature, and it will be
> > > > up to the operator to set it up to get the right behavior.

Re: [openstack-dev] [Octavia] Using Nova Scheduling Affinity and AntiAffinity

2014-08-28 Thread Trevor Vardeman
Hello all,

TL;DR
Using the SameHostFilter and DifferentHostFilter will work functionally
for what Octavia needs for colocation, apolocation, and HA
anti-affinity.  There are a couple of topics that need to be discussed:

How should VMs be allocated per host when evaluating colocation, if each
load balancer has a minimum of 2 VMs?  (Active-Active or Active-Passive)

How would a spare node pool handle affinity (i.e. will every host have a
separate spare node pool)?



Brandon and I spent a little time white-boarding our thoughts on this
affinity/anti-affinity problem.  Basically we came up with a couple
tables we'll need in the DB, and one table representing information
retrieved from nova, as follows:
Note:  The tables were written with "fixed width" text.  Looks really
bad in HTML.


LB Table
+-------+----------+-----------+
| LB_ID | colocate | apolocate |
+-------+----------+-----------+
|   1   |          |           |
+-------+----------+-----------+
|   2   |          |     1     |
+-------+----------+-----------+
|   3   |    2     |           |
+-------+----------+-----------+
|   4   |    1     |     3     |
+-------+----------+-----------+

DB Association Table
+-------+-------+---------+
| LB_ID | VM_ID | HOST_ID |
+-------+-------+---------+
|   1   |   A   |    I    |
+-------+-------+---------+
|   1   |   B   |   II    |
+-------+-------+---------+
|   2   |   C   |   III   |
+-------+-------+---------+
|   2   |   D   |   IV    |
+-------+-------+---------+
|   3   |   E   |   III   |
+-------+-------+---------+
|   3   |   F   |   IV    |
+-------+-------+---------+
|   4   |   G   |    I    |
+-------+-------+---------+
|   4   |   H   |   II    |
+-------+-------+---------+

Nova Information Table
+-------+----------------+---------------------+---------+
| VM_ID | SameHostFilter | DifferentHostFilter | HOST_ID |
+-------+----------------+---------------------+---------+
|   A   |                |                     |    I    |
+-------+----------------+---------------------+---------+
|   B   |                |          A          |   II    |
+-------+----------------+---------------------+---------+
|   C   |                |         A B         |   III   |
+-------+----------------+---------------------+---------+
|   D   |                |        A B C        |   IV    |
+-------+----------------+---------------------+---------+
|   E   |      C D       |                     |   III   |
+-------+----------------+---------------------+---------+
|   F   |      C D       |          E          |   IV    |
+-------+----------------+---------------------+---------+
|   G   |      A B       |         E F         |    I    |
+-------+----------------+---------------------+---------+
|   H   |      A B       |        E F G        |   II    |
+-------+----------------+---------------------+---------+

The first thing we discussed was an Active-Active setup.  Above you can
see I enforce that the first VM will not be on the same host as the
second.  In the first table, I've given some ideas about what LB will
colocate/apolocate with another, and configured them in the association
table appropriately.  Can you see any configuration combination we might
have overlooked?

As for scaling, we considered adding of VMs in an Active-Active setup to
be just as trivial as the initial creation.  Just include another VM id
in the list for DifferentHostFilter and it'll guarantee a different host
assignment.
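That bookkeeping can be sketched roughly as follows (the helper name is
hypothetical; the scheduler-hint keys consumed by SameHostFilter and
DifferentHostFilter are `same_host` and `different_host`, each a list of
instance ids):

```python
# Sketch: build the scheduler hints for the next VM of a load balancer,
# given the VM ids already placed for the LBs it must colocate or
# apolocate with. This mirrors the Nova Information Table above.

def build_scheduler_hints(own_vms, colocate_vms, apolocate_vms):
    """Return the scheduler_hints dict to pass along with the boot request."""
    hints = {}
    if colocate_vms:
        # SameHostFilter: land on the same host as a colocated LB's VMs.
        hints["same_host"] = list(colocate_vms)
    # DifferentHostFilter: avoid apolocated LBs' VMs, and our own VMs
    # so that the LB's members stay on different hosts for HA.
    different = list(apolocate_vms) + list(own_vms)
    if different:
        hints["different_host"] = different
    return hints

# Example, matching VM H in the table: LB 4 colocates with LB 1 (VMs A, B),
# apolocates from LB 3 (VMs E, F), and already has its own VM G placed.
print(build_scheduler_hints(own_vms=["G"], colocate_vms=["A", "B"],
                            apolocate_vms=["E", "F"]))
```

This reproduces the H row of the Nova Information Table: same_host "A B",
different_host "E F G".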

The second discussion was for Active-Passive, and we decided it would be
very similar to Active-Active with regard to appending to a list for
filtering.  For each additional Active node created for scaling, standing
up another Passive node would happen with just another VM id specified in
the filter.  This keeps all the Actives and Passives on different hosts.
One could just as easily write some logic to keep all the Passives on one
host and all the Actives on another, though this would potentially cause
other problems.

One thing that just popped into my head would be scaling on different
hosts to different degrees.  Example: I already have two load balancers,
each with 1 active and 1 passive VM (so 4 VMs total right now).  My
scaling solution could call for another 4 VMs to be stood up in the same
fashion, but with the hosts matching up like the following table:

+-------+-------+---------+--------+
| LB_ID | VM_ID | HOST_ID | ACTIVE |
+-------+-------+---------+--------+
|   1   |   A   |    I    |   1    |
+-------+-------+---------+--------+
|   1   |   B   |   II    |   0    |
+-------+-------+---------+--------+
|   2   |   C   |   III   |   1    |
+-------+-------+---------+--------+
|   2   |   D   |   IV    |   0    |
+-------+-------+---------+--------+
|   1   |   E   |    I    |   1    |
+-------+-------+---------+--------+
|   1   |   F   |   II    |   0    |
+-------+-------+---------+--------+
|   2   |   G   |   III   |   1    |
+-------+-------+---------+--------+
|   2   |   H   |   IV    |   0    |
+-------+-------+---------+--------+

Re: [openstack-dev] [Octavia] Using Nova Scheduling Affinity and AntiAffinity

2014-08-28 Thread Susanne Balle
We need to be careful. I believe that a user can use these filters to keep
requesting VMs until, in the case of nova, they reach the size of your cloud.

Also, given that nova now has ServerGroups, let's not make a quick decision
on using something that is being replaced with something better. I suggest
we investigate ServerGroups a little more before we discard them.

The operator should really decide how he/she wants Anti-affinity by setting
the right filters in nova.

Susanne


On Thu, Aug 28, 2014 at 5:12 PM, Brandon Logan 
wrote:

> Trevor and I just worked through some scenarios to make sure it can
> handle colocation and apolocation.  It looks like it does, however not
> everything will be so simple, especially when we introduce horizontal
> scaling.  Trevor's going to write up an email about some of the caveats
> but so far just using a table to track what LB has what VMs and on what
> hosts will be sufficient.
>
> Thanks,
> Brandon
>
> On Thu, 2014-08-28 at 13:49 -0700, Stephen Balukoff wrote:
> > I'm trying to think of a use case that wouldn't be satisfied using
> > those filters and am not coming up with anything. As such, I don't see
> > a problem using them to fulfill our requirements around colocation and
> > apolocation.
> >
> >
> > Stephen
> >
> >
> > On Thu, Aug 28, 2014 at 1:13 PM, Brandon Logan wrote:
> > > Yeah, we were looking at the SameHost and DifferentHost filters,
> > > and they will probably do what we need.  Though I was hoping we
> > > could do a combination of both, we can make it work with those
> > > filters, I believe.
> > >
> > > Thanks,
> > > Brandon
> > >
> > > On Thu, 2014-08-28 at 14:56 -0400, Susanne Balle wrote:
> > > > Brandon,
> > > >
> > > > I am not sure how ready that nova feature is for general use and
> > > > have asked our nova lead about that. He is on vacation but should
> > > > be back by the start of next week. I believe this is the right
> > > > approach for us moving forward.
> > > >
> > > > We cannot make it mandatory to run the 2 filters, but we can say
> > > > in the documentation that if these two filters aren't set, we
> > > > cannot guarantee anti-affinity or affinity.
> > > >
> > > > The other way we can implement this is by using availability
> > > > zones and host aggregates. This is one technique we use to make
> > > > sure we deploy our in-cloud services in an HA model. This also
> > > > would assume that the operator is setting up availability zones,
> > > > which we can't.
> > > >
> > > > http://blog.russellbryant.net/2013/05/21/availability-zones-and-host-aggregates-in-openstack-compute-nova/
> > > >
> > > > Sahara is currently using the following filters to support host
> > > > affinity, which is probably because they did the work before
> > > > ServerGroups. I am not advocating the use of those filters, but
> > > > just showing you that we can document the feature, and it will be
> > > > up to the operator to set it up to get the right behavior.
> > > >
> > > > Regards,
> > > >
> > > > Susanne
> > > >
> > > > Anti-affinity
> > > > One of the problems with Hadoop running on OpenStack is that
> > > > there is no ability to control where a machine is actually
> > > > running. We cannot be sure that two new virtual machines are
> > > > started on different physical machines. As a result, any
> > > > replication within a cluster is not reliable because all replicas
> > > > may turn up on one physical machine.
> > > > The anti-affinity feature provides an ability to explicitly tell
> > > > Sahara to run specified processes on different compute nodes.
> > > > This is especially useful for the Hadoop datanode process, to
> > > > make HDFS replicas reliable.
> > > > The anti-affinity feature requires certain scheduler filters to
> > > > be enabled on Nova. Edit your /etc/nova/nova.conf in the
> > > > following way:
> > > >
> > > > [DEFAULT]
> > > > ...
> > > > scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
> > > > scheduler_default_filters=DifferentHostFilter,SameHostFilter
> > > > This feature is supported by all plugins out of the box.
> > > >
> > > > http://docs.openstack.org/developer/sahara/userdoc/features.html


Re: [openstack-dev] [Octavia] Using Nova Scheduling Affinity and AntiAffinity

2014-08-28 Thread Brandon Logan
Trevor and I just worked through some scenarios to make sure it can
handle colocation and apolocation.  It looks like it does, however not
everything will be so simple, especially when we introduce horizontal
scaling.  Trevor's going to write up an email about some of the caveats
but so far just using a table to track what LB has what VMs and on what
hosts will be sufficient.

Thanks,
Brandon

On Thu, 2014-08-28 at 13:49 -0700, Stephen Balukoff wrote:
> I'm trying to think of a use case that wouldn't be satisfied using
> those filters and am not coming up with anything. As such, I don't see
> a problem using them to fulfill our requirements around colocation and
> apolocation.
> 
> 
> Stephen
> 
> 
> On Thu, Aug 28, 2014 at 1:13 PM, Brandon Logan wrote:
> > Yeah, we were looking at the SameHost and DifferentHost filters, and
> > they will probably do what we need.  Though I was hoping we could do
> > a combination of both, we can make it work with those filters, I
> > believe.
> >
> > Thanks,
> > Brandon
> >
> > On Thu, 2014-08-28 at 14:56 -0400, Susanne Balle wrote:
> > > Brandon,
> > >
> > > I am not sure how ready that nova feature is for general use and
> > > have asked our nova lead about that. He is on vacation but should
> > > be back by the start of next week. I believe this is the right
> > > approach for us moving forward.
> > >
> > > We cannot make it mandatory to run the 2 filters, but we can say
> > > in the documentation that if these two filters aren't set, we
> > > cannot guarantee anti-affinity or affinity.
> > >
> > > The other way we can implement this is by using availability zones
> > > and host aggregates. This is one technique we use to make sure we
> > > deploy our in-cloud services in an HA model. This also would
> > > assume that the operator is setting up availability zones, which
> > > we can't.
> > >
> > > http://blog.russellbryant.net/2013/05/21/availability-zones-and-host-aggregates-in-openstack-compute-nova/
> > >
> > > Sahara is currently using the following filters to support host
> > > affinity, which is probably because they did the work before
> > > ServerGroups. I am not advocating the use of those filters, but
> > > just showing you that we can document the feature, and it will be
> > > up to the operator to set it up to get the right behavior.
> > >
> > > Regards,
> > >
> > > Susanne
> > >
> > > Anti-affinity
> > > One of the problems with Hadoop running on OpenStack is that there
> > > is no ability to control where a machine is actually running. We
> > > cannot be sure that two new virtual machines are started on
> > > different physical machines. As a result, any replication within a
> > > cluster is not reliable because all replicas may turn up on one
> > > physical machine.
> > > The anti-affinity feature provides an ability to explicitly tell
> > > Sahara to run specified processes on different compute nodes. This
> > > is especially useful for the Hadoop datanode process, to make HDFS
> > > replicas reliable.
> > > The anti-affinity feature requires certain scheduler filters to be
> > > enabled on Nova. Edit your /etc/nova/nova.conf in the following
> > > way:
> > >
> > > [DEFAULT]
> > > ...
> > > scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
> > > scheduler_default_filters=DifferentHostFilter,SameHostFilter
> > > This feature is supported by all plugins out of the box.
> > >
> > > http://docs.openstack.org/developer/sahara/userdoc/features.html
> > >
> > > On Thu, Aug 28, 2014 at 1:26 AM, Brandon Logan wrote:
> > > > Nova scheduler has ServerGroupAffinityFilter and
> > > > ServerGroupAntiAffinityFilter, which handle colocation and
> > > > apolocation for VMs.  I think this is something we've discussed
> > > > before about taking advantage of nova's scheduling.  I need to
> > > > verify that this will work with what we (RAX) plan to do, but
> > > > I'd like to get everyone else's thoughts.  Also, if we do decide
> > > > this works for everyone

Re: [openstack-dev] [Octavia] Using Nova Scheduling Affinity and AntiAffinity

2014-08-28 Thread Stephen Balukoff
I'm trying to think of a use case that wouldn't be satisfied using those
filters and am not coming up with anything. As such, I don't see a problem
using them to fulfill our requirements around colocation and apolocation.

Stephen


On Thu, Aug 28, 2014 at 1:13 PM, Brandon Logan 
wrote:

> Yeah we were looking at the SameHost and DifferentHost filters and that
> will probably do what we need.  Though I was hoping we could do a
> combination of both but we can make it work with those filters I
> believe.
>
> Thanks,
> Brandon
>
> On Thu, 2014-08-28 at 14:56 -0400, Susanne Balle wrote:
> > Brandon
> >
> >
> > I am not sure how ready that nova feature is for general use and have
> > asked our nova lead about that. He is on vacation but should be back
> > by the start of next week. I believe this is the right approach for us
> > moving forward.
> >
> >
> >
> > We cannot make it mandatory to run the 2 filters but we can say in the
> > documentation that if these two filters aren't set, we cannot
> > guarantee anti-affinity or affinity.
> >
> >
> > The other way we can implement this is by using availability zones and
> > host aggregates. This is one technique we use to make sure we deploy
> > our in-cloud services in an HA model. This also would assume that the
> > operator is setting up availability zones, which we can't.
> >
> >
> >
> http://blog.russellbryant.net/2013/05/21/availability-zones-and-host-aggregates-in-openstack-compute-nova/
> >
> >
> >
> > Sahara is currently using the following filters to support host
> > affinity which is probably due to the fact that they did the work
> > before ServerGroups. I am not advocating the use of those filters but
> > just showing you that we can document the feature and it will be up to
> > the operator to set it up to get the right behavior.
> >
> >
> > Regards
> >
> >
> > Susanne
> >
> >
> >
> > Anti-affinity
> > One of the problems with Hadoop running on OpenStack is that there is
> > no ability to control where a machine is actually running. We cannot
> > be sure that two new virtual machines are started on different
> > physical machines. As a result, any replication within a cluster is
> > not reliable because all replicas may turn up on one physical machine.
> > The anti-affinity feature provides an ability to explicitly tell
> > Sahara to run specified processes on different compute nodes. This is
> > especially useful for the Hadoop datanode process, to make HDFS
> > replicas reliable.
> > The anti-affinity feature requires certain scheduler filters to be
> > enabled on Nova. Edit your /etc/nova/nova.conf in the following way:
> >
> > [DEFAULT]
> >
> > ...
> >
> > scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
> > scheduler_default_filters=DifferentHostFilter,SameHostFilter
> > This feature is supported by all plugins out of the box.
> >
> >
> > http://docs.openstack.org/developer/sahara/userdoc/features.html
> >
> >
> >
> >
> >
> > On Thu, Aug 28, 2014 at 1:26 AM, Brandon Logan
> >  wrote:
> > Nova scheduler has ServerGroupAffinityFilter and
> > ServerGroupAntiAffinityFilter, which handle colocation and apolocation
> > for VMs.  I think this is something we've discussed before
> > about taking
> > advantage of nova's scheduling.  I need to verify that this
> > will work
> > with what we (RAX) plan to do, but I'd like to get everyone
> > else's
> > thoughts.  Also, if we do decide this works for everyone
> > involved,
> > should we make it mandatory that the nova-compute services are
> > running
> > these two filters?  I'm also trying to see if we can use this
> > to also do
> > our own colocation and apolocation on load balancers, but it
> > looks like
> > it will be a bit complex if it can even work.  Hopefully, I
> > can have
> > something definitive on that soon.
> >
> > Thanks,
> > Brandon



-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807


Re: [openstack-dev] [Octavia] Using Nova Scheduling Affinity and AntiAffinity

2014-08-28 Thread Brandon Logan
Yeah we were looking at the SameHost and DifferentHost filters and that
will probably do what we need.  Though I was hoping we could do a
combination of both but we can make it work with those filters I
believe.

Thanks,
Brandon

On Thu, 2014-08-28 at 14:56 -0400, Susanne Balle wrote:
> Brandon
> 
> 
> I am not sure how ready that nova feature is for general use and have
> asked our nova lead about that. He is on vacation but should be back
> by the start of next week. I believe this is the right approach for us
> moving forward.
> 
> 
> 
> We cannot make it mandatory to run the 2 filters, but we can say in the
> documentation that if these two filters aren't set, we cannot
> guarantee anti-affinity or affinity.
> 
> 
> The other way we can implement this is by using availability zones and
> host aggregates. This is one technique we use to make sure we deploy
> our in-cloud services in an HA model. This also would assume that the
> operator is setting up availability zones, which we can't.
> 
> 
> http://blog.russellbryant.net/2013/05/21/availability-zones-and-host-aggregates-in-openstack-compute-nova/
> 
> 
> 
> Sahara is currently using the following filters to support host
> affinity which is probably due to the fact that they did the work
> before ServerGroups. I am not advocating the use of those filters but
> just showing you that we can document the feature and it will be up to
> the operator to set it up to get the right behavior.
> 
> 
> Regards
> 
> 
> Susanne 
> 
> 
> 
> Anti-affinity
> One of the problems with Hadoop running on OpenStack is that there is no
> ability to control where a machine is actually running. We cannot be
> sure that two new virtual machines are started on different physical
> machines. As a result, any replication within a cluster is not reliable
> because all replicas may turn up on one physical machine.
> The anti-affinity feature provides an ability to explicitly tell Sahara
> to run specified processes on different compute nodes. This is
> especially useful for the Hadoop datanode process, to make HDFS replicas
> reliable.
> The anti-affinity feature requires certain scheduler filters to be
> enabled on Nova. Edit your /etc/nova/nova.conf in the following way:
> 
> [DEFAULT]
> 
> ...
> 
> scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
> scheduler_default_filters=DifferentHostFilter,SameHostFilter
> This feature is supported by all plugins out of the box.
> 
> 
> http://docs.openstack.org/developer/sahara/userdoc/features.html
> 
> 
> 
> 
> 
> On Thu, Aug 28, 2014 at 1:26 AM, Brandon Logan
>  wrote:
> Nova scheduler has ServerGroupAffinityFilter and
> ServerGroupAntiAffinityFilter, which handle colocation and apolocation
> for VMs.  I think this is something we've discussed before
> about taking
> advantage of nova's scheduling.  I need to verify that this
> will work
> with what we (RAX) plan to do, but I'd like to get everyone
> else's
> thoughts.  Also, if we do decide this works for everyone
> involved,
> should we make it mandatory that the nova-compute services are
> running
> these two filters?  I'm also trying to see if we can use this
> to also do
> our own colocation and apolocation on load balancers, but it
> looks like
> it will be a bit complex if it can even work.  Hopefully, I
> can have
> something definitive on that soon.
> 
> Thanks,
> Brandon


Re: [openstack-dev] [Octavia] Using Nova Scheduling Affinity and AntiAffinity

2014-08-28 Thread Susanne Balle
Brandon

I am not sure how ready that nova feature is for general use and have asked
our nova lead about that. He is on vacation but should be back by the start
of next week. I believe this is the right approach for us moving forward.

We cannot make it mandatory to run the 2 filters, but we can say in the
documentation that if these two filters aren't set, we cannot guarantee
anti-affinity or affinity.

The other way we can implement this is by using availability zones and host
aggregates. This is one technique we use to make sure we deploy our
in-cloud services in an HA model. This also would assume that the operator
is setting up availability zones, which we can't.

http://blog.russellbryant.net/2013/05/21/availability-zones-and-host-aggregates-in-openstack-compute-nova/

Sahara is currently using the following filters to support host affinity
which is probably due to the fact that they did the work before
ServerGroups. I am not advocating the use of those filters but just showing
you that we can document the feature and it will be up to the operator to
set it up to get the right behavior.

Regards

Susanne

Anti-affinity
One of the problems with Hadoop running on OpenStack is that there is no
ability to control where a machine is actually running. We cannot be sure
that two new virtual machines are started on different physical machines.
As a result, any replication within a cluster is not reliable because all
replicas may turn up on one physical machine. The anti-affinity feature
provides an ability to explicitly tell Sahara to run specified processes
on different compute nodes. This is especially useful for the Hadoop
datanode process, to make HDFS replicas reliable.

The anti-affinity feature requires certain scheduler filters to be enabled
on Nova. Edit your /etc/nova/nova.conf in the following way:

[DEFAULT]

...

scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
scheduler_default_filters=DifferentHostFilter,SameHostFilter

This feature is supported by all plugins out of the box.
http://docs.openstack.org/developer/sahara/userdoc/features.html



On Thu, Aug 28, 2014 at 1:26 AM, Brandon Logan 
wrote:

> Nova scheduler has ServerGroupAffinityFilter and
> ServerGroupAntiAffinityFilter, which handle colocation and apolocation
> for VMs.  I think this is something we've discussed before about taking
> advantage of nova's scheduling.  I need to verify that this will work
> with what we (RAX) plan to do, but I'd like to get everyone else's
> thoughts.  Also, if we do decide this works for everyone involved,
> should we make it mandatory that the nova-compute services are running
> these two filters?  I'm also trying to see if we can use this to also do
> our own colocation and apolocation on load balancers, but it looks like
> it will be a bit complex if it can even work.  Hopefully, I can have
> something definitive on that soon.
>
> Thanks,
> Brandon