Re: [openstack-dev] [Neutron][LBaaS] Improvements to current reviews

2014-08-10 Thread Avishay Balderman
" I think you should update the current reviews (new patch set, not additional 
review.)"
+1

I like those changes: +2

-Original Message-
From: Doug Wiegley [mailto:do...@a10networks.com] 
Sent: Sunday, August 10, 2014 12:51 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Improvements to current reviews

I think you should update the current reviews (new patch set, not additional 
review.)

Doug


> On Aug 9, 2014, at 3:34 PM, "Brandon Logan"  
> wrote:
> 
> So I've done some work on improving the code on the current pending 
> reviews, and I'd like to get people's opinions on whether I should 
> add another patch set to those reviews, or add the changes as a 
> review dependent on the pending ones.
> 
> To be clear, no matter what, the first review in the chain will not
> change:
> https://review.openstack.org/#/c/105331/
> 
> However, if adding another patch set is preferable, then the plugin and db 
> implementation review would get another patch set, and then obviously 
> anything depending on it.
> 
> https://review.openstack.org/#/c/105609/
> 
> My opinion is that I'd like to get both of these in as a new patch set.
> My reasoning is that the reviews don't have any +2's and there is 
> uncertainty because of the GBP discussion.  So, I don't think it'd be 
> a major issue if a new patch set was created.  Speak up if you think 
> otherwise.  I'd like to get as many people's thoughts as possible.
> 
> The changes are:
> 
> 1) Added data models, which are just plain Python objects mimicking the 
> SQLAlchemy models, but without the overhead or dynamic nature of 
> being SQLAlchemy models.  These data models are now returned by the 
> database methods, instead of the SQLAlchemy objects.  Also, I moved 
> the definition of the SQLAlchemy models into its own module.  I've 
> been wanting to do this, but since I thought I was running out of time 
> I left it for later.
> 
> These shouldn't cause many merge/rebase conflicts, but they will probably 
> cause a few because the SQLAlchemy models were moved to a different module.
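
As a rough illustration of the data-model pattern described above (with
hypothetical names, not the actual review's code), the plain object copies
column values out of the SQLAlchemy model eagerly, so callers never touch a
live session:

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class LoadBalancerSA(Base):
        """SQLAlchemy model, now defined in its own module."""
        __tablename__ = 'lbaas_loadbalancers'
        id = sa.Column(sa.String(36), primary_key=True)
        name = sa.Column(sa.String(255))
        vip_address = sa.Column(sa.String(64))

    class LoadBalancer(object):
        """Plain data model returned by the database methods."""
        def __init__(self, id=None, name=None, vip_address=None):
            self.id = id
            self.name = name
            self.vip_address = vip_address

        @classmethod
        def from_sqlalchemy_model(cls, sa_model):
            # Eager copy: no lazy loading, no session attached.
            return cls(id=sa_model.id, name=sa_model.name,
                       vip_address=sa_model.vip_address)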
> 
> 
> 2) The LoadBalancerPluginv2 no longer inherits from the 
> LoadBalancerPluginDbv2.  The database is now a composite attribute of 
> the plugin (i.e. plugin.db.get_loadbalancer()).  This cleans up the 
> code a bit and removes the necessity for _delete_db_entity methods 
> that the drivers needed because now they can actually call the database 
> methods.
> Also, this makes testing more clear, though I have not added any tests 
> for this because the previous tests are sufficient for now.  Adding 
> the appropriate tests would add 1k or 2k lines most likely.
> 
> This will likely cause more conflicts because the _db_delete_entity 
> methods have been removed.  However, the new driver interface/mixins 
> defined a db_method for all drivers to use, so if that is being used 
> there shouldn't be any problems.
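
A minimal sketch of the composition change described in 2) (hypothetical
names again): the plugin holds the db layer as an attribute instead of
inheriting from it, so drivers can call the database methods directly:

    class LoadBalancerPluginDbv2(object):
        def __init__(self):
            self._store = {}  # stands in for real SQLAlchemy queries

        def create_loadbalancer(self, context, lb):
            self._store[lb.id] = lb
            return lb

        def delete_loadbalancer(self, context, lb_id):
            del self._store[lb_id]

    class LoadBalancerPluginv2(object):
        def __init__(self):
            # Composition, not inheritance: drivers can call
            # plugin.db.delete_loadbalancer() themselves, which is what
            # removes the need for the _delete_db_entity helpers.
            self.db = LoadBalancerPluginDbv2()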
> 
> Thanks,
> Brandon
> 
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Improvements to current reviews

2014-08-10 Thread Vijay Venkatachalam
Thanks, Brandon, for the constant improvements.

I agree with Doug. Please update the current reviews. We already have a large 
number of reviews :-). It will be difficult to manage if we add more.

Thanks,
Vijay

Sent using CloudMagic

On Sun, Aug 10, 2014 at 3:23 AM, Doug Wiegley <do...@a10networks.com> wrote:

> I think you should update the current reviews (new patch set, not additional 
> review.)
>
> Doug

[remainder of quoted thread trimmed; see the messages above]

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Improvements to current reviews

2014-08-10 Thread Gary Kotton
Hi,
I took a look at https://review.openstack.org/#/c/105331/ and had one minor 
issue, which I think can be addressed. Prior to approving, we need to 
understand what the state of the V2 API will be.
Thanks
Gary

From: Vijay Venkatachalam <vijay.venkatacha...@citrix.com>
Reply-To: OpenStack List <openstack-dev@lists.openstack.org>
Date: Sunday, August 10, 2014 at 2:57 PM
To: OpenStack List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Improvements to current reviews


Thanks, Brandon, for the constant improvements.

I agree with Doug. Please update the current reviews. We already have a large 
number of reviews :-). It will be difficult to manage if we add more.

Thanks,
Vijay

[remainder of quoted thread trimmed; see the earlier messages in this thread]

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db]A proposal for DB read/write separation

2014-08-10 Thread Amrith Kumar
Li Ma, Mike [Wilson | Bayer], and Roman Podoliaka,

 

A similar topic came up in Atlanta at a database panel I participated in. Jay 
Pipes had organized it as part of the ops track, and Peter Boros (of Percona) 
and I were on the panel. The issue discussed was what to do about the database 
under OpenStack in the face of high load from such components as, for example, 
Ceilometer.

 

Splitting reads and writes is a solution that is fraught with challenges, as it 
requires the application to know where it wrote, where it should read from, 
what the replication latency is, and all of that. At the heart of the issue is 
that you want to scale the database.

 

I had suggested at this panel that those who want to try and solve this problem 
should try the Database Virtualization Engine[1] product from Tesora. In the 
interest of full disclosure, I work for Tesora. 

 

The solution is a simple way to horizontally scale a MySQL (or Percona or 
MariaDB) database across a collection of database servers. It exposes a MySQL 
compatible interface and takes care of all the minutiae of where to store data, 
partitioning it across the various database servers, and executing queries on 
behalf of an application irrespective of the location of the data. It natively 
provides such capabilities as distributed joins, aggregation and sorting. 
Architecturally it is a traditional parallel database built from a collection 
of unmodified MySQL (or variant) databases. 

 

It is open source, and available for free download.[2] 

 

Percona recently tested[3] the DVE product and confirmed that the solution 
provided horizontal scalability and linear (and in some cases better than 
linear) performance improvements[4] with scale. You can get a copy of their 
full test report here.[5] 

 

Ingesting data at very high volume is often an area of considerable pain for 
large systems, and in one demonstration of our product we were required to 
ingest 1 million CDR-style records per second. We demonstrated that with just 
15 Amazon RDS servers (m1.xlarge, standard EBS volumes, no provisioned IOPS) 
and two c1.xlarge servers to run the Tesora DVE software, we could in fact 
ingest a sustained stream of over 1 million CDRs a second![6]

 

To Mike Wilson and Roman’s point, the solution I’m proposing would be entirely 
transparent to the developer and would be something that would be both elastic 
and scalable with the workload placed on it. In addition, standard SQL queries 
will continue to work unmodified, irrespective of which database server 
physically holds a row of data.

 

To Mike Bayer’s point about data distribution and transaction management; yes, 
we handle all the details relating to handling data consistency and providing 
atomic transactions during Insert/Update/Delete operations.

 

As a company, we at Tesora are committed to OpenStack and are significant 
participants in Trove (the database-as-a-service project for OpenStack). You 
can verify this yourself on Stackalytics [7] or [8]. If you would like to 
consider it as a part of your solution to oslo.db, we’d be thrilled to work 
with the OpenStack community to make this work, both from a technical and a 
business/licensing perspective. You can catch most of our dev team on either 
#openstack-trove or #tesora.

 

Some of us from Tesora, Percona and Mirantis are planning an ops panel similar 
to the one at Atlanta, for the Summit in Paris. I would definitely like to meet 
with more of you in Paris and discuss how we address issues of scale in the 
database that powers an OpenStack implementation.

 

Thanks,

 

-amrith

 

--

 

Amrith Kumar, CTO Tesora (www.tesora.com)

 

Twitter: @amrithkumar  

IRC: amrith @freenode 

 

 

[1] http://www.tesora.com/solutions/database-virtualization-engine

[2] http://www.tesora.com/solutions/downloads/products

[3] 
http://www.mysqlperformanceblog.com/2014/06/24/benchmarking-tesoras-database-virtualisation-engine-sysbench/
 

[4] 
http://www.tesora.com/blog/perconas-evaluation-our-database-virtualization-engine

[5] http://resources.tesora.com/site/download/percona-benchmark-whitepaper 

[6] 
http://www.tesora.com/blog/ingesting-over-100-rows-second-mysql-aws-cloud 

[7] http://stackalytics.com/?module=trove-group&metric=commits

[8] http://stackalytics.com/?module=trove-group&metric=marks

 

 

 

 

 

From: Mike Wilson [mailto:geekinu...@gmail.com] 
Sent: Friday, August 08, 2014 7:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [oslo.db]A proposal for DB read/write separation

 

Li Ma,

 

This is interesting. In general I am in favor of expanding the scope of any 
read/write separation capabilities that we have. I'm not clear what exactly you 
are proposing, hopefully you can answer some of my questions inline. The thing 
I had thought of immediately was detection of whether an operation is read or 
write and integrating that into oslo.db or sqlalchemy. Mike Bayer has some 
thoughts on that[1] and there are other approaches around that can be 
copied/learned from. These sorts of things are clear to me and while moving 
towards more transparency for the developer, still require context. Please, 
share with us more details on your proposal.

-Mike

[1] http://www.percona.com/doc/percona-xtradb-cluster/5.5/wsrep-system-index.html
[2] http://techspot.zzzeek.org/2012/01/11/django-style-database-routers-in-sqlalchemy/

Re: [openstack-dev] [oslo.db]A proposal for DB read/write separation

2014-08-10 Thread Mike Bayer

On Aug 10, 2014, at 9:59 AM, Amrith Kumar  wrote:

>  
> To Mike Bayer’s point about data distribution and transaction management; 
> yes, we handle all the details relating to handling data consistency and 
> providing atomic transactions during Insert/Update/Delete operations.
>  
> As a company, we at Tesora are committed to OpenStack and are significant 
> participants in Trove (the database-as-a-service project for OpenStack). You 
> can verify this yourself on Stackalytics [7] or [8]. If you would like to 
> consider it as a part of your solution to oslo.db, we’d be thrilled to work 
> with the OpenStack community to make this work, both from a technical and a 
> business/licensing perspective. You can catch most of our dev team on either 
> #openstack-trove or #tesora.
>  
> Some of us from Tesora, Percona and Mirantis are planning an ops panel 
> similar to the one at Atlanta, for the Summit in Paris. I would definitely 
> like to meet with more of you in Paris and discuss how we address issues of 
> scale in the database that powers an OpenStack implementation.


OK well just to be clear, oslo.db is Python code that basically provides 
in-application helpers and patterns to work with databases, primarily through 
SQLAlchemy.   So it’s essentially openstack-specific patterns and recipes on 
top of SQLAlchemy. It has very little to do with the use of special 
database backends that know how to partition among shards and/or master/slaves 
(I thought the original proposal was for master/slave).So the Tesora 
product would be 99% “drop in”, with at most some configurational flags set up 
on the Python side, and everything else being configurational. Since the 
proposal here is for “transparent”, which is taken to mean, “no app changes are 
needed”.   My only point was that, an application-layer reader/writer 
distribution approach would need to work at the level of transactions, not 
statements, and therefore would need to know at transaction start time what the 
nature of the transaction would be (and thus requires some small declaration at 
the top, hence code changes…code changes that I think are a good thing as 
explicit declaration of reader/writer methods up top can be handy in other ways 
too).


> [quoted signature and earlier messages trimmed]
>  
> 
> On Thu, Aug 7, 2014 at 10:03 PM, Li Ma  wrote:
> Getting a massive amount of information from data storage to be displayed is
> where most of the activity happens in OpenStack. The two activities of reading
> data and writing (creating, updating and deleting) data are fundamentally
> different.
> 
> The optimization for these two opposite database activities can be done by
> physically separating the databases that service these two different
> activities. All the writes go to database servers, which then replicates the
> written data to the database server(s) dedicated to servicing the reads.
> 
> Currently, AFAIK, many OpenStack deployment in production try to take
> advantage of MySQL (includes Percona or MariaDB) multi-master Galera cluster.
> It is possible to design and implement a read/write 

Re: [openstack-dev] [oslo.db]A proposal for DB read/write separation

2014-08-10 Thread Li Ma
Thanks for all the detailed analysis, Mike W, Mike B, and Roman.
 
For a production-ready database system, I think replication is a must. So the 
questions are which replication mode is suitable for OpenStack, and how 
OpenStack can use it to improve the performance and scalability of DB access.

In the current implementation of the database API in OpenStack, a master/slave 
connection is defined for optimizing performance. Developers of each OpenStack 
component take responsibility for making use of it in the application context, 
and others take responsibility for architecting the database system to meet 
the requirements of various production environments. There is no general 
guideline for it. Actually, it is not that easy to determine which transactions 
can be handled by a slave, due to data consistency and business logic that 
differ across OpenStack components.

The current status is that the master/slave configuration is not widely used; 
only Nova uses a slave connection, in its periodic tasks, which are not 
sensitive to the status of replication. Due to the nature of asynchronous 
replication, queries to the DB are not stable, so the risks of using slaves 
are apparent.

How about a Galera multi-master cluster? As Mike Bayer said, it is virtually 
synchronous by default. It is still possible that outdated rows are queried, 
which makes results unstable.

When using such eventual consistency methods, you have to carefully design 
which transactions are tolerant of old data. AFAIK, no matter the component 
(Nova, Cinder or Neutron), most of the transactions are not that 'tolerant'. 
As Mike Bayer said, a consistent relational dataset is very important, and 
that holds for OpenStack components. This is why only non-sensitive periodic 
tasks use slaves in Nova.

Let's move forward to synchronous replication, like Galera with causal-reads 
on. The dominant advantage is that it has consistent relational dataset 
support. The disadvantages are that it uses optimistic locking and its 
performance suffers (also said by Mike Bayer :-). The optimistic locking 
problem can be dealt with by retry-on-deadlock; it's not the topic here.
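
For reference, causal reads can be enabled per connection; a minimal 
SQLAlchemy sketch (assuming a MySQL driver pointed at a Galera node, and the 
wsrep_causal_reads session variable that Galera exposes):

    from sqlalchemy import create_engine, event

    engine = create_engine('mysql://user:password@galera-node/nova')

    @event.listens_for(engine, 'connect')
    def set_causal_reads(dbapi_conn, connection_record):
        # Reads on this connection wait until the node has applied all
        # cluster writesets, trading latency for consistency.
        cursor = dbapi_conn.cursor()
        cursor.execute('SET SESSION wsrep_causal_reads = ON')
        cursor.close()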

If we first ignore the performance problem, a multi-master cluster with 
synchronous replication is perfect for OpenStack, with any combination of 
masters and slaves enabled, and it can truly scale out.

So, transparent read/write separation depends on such an environment. The 
SQLAlchemy documentation provides a code sample for it [1], and Mike Bayer 
also provides a blog post about it [2].

What I did is re-implement it in the OpenStack DB API modules in my development 
environment, using a Galera cluster (causal-reads on). It has been running 
perfectly for more than a week. The routing session manager works well while 
maintaining data consistency.

Back to the performance problem: in theory, causal-reads will affect the 
overall performance of concurrent DB reads, but I cannot find any report 
(official or unofficial) on causal-reads performance degradation. In the 
production system of my company, Galera performance is tuned via network 
round-trip time, network throughput, number of slave threads, keep-alive and 
wsrep flow control parameters.

All in all, firstly, transparent read/write separation is feasible using a 
synchronous replication method. Secondly, it may help scale out large 
deployments without any code modification. It does need fine-tuning (of 
course, every production system needs that :-). Finally, I think that if we 
can integrate it into oslo.db, it is a perfect plus for those who would like 
to deploy Galera (or a similar technology) as the DB backend.

[1] 
http://docs.sqlalchemy.org/en/rel_0_9/orm/session.html#custom-vertical-partitioning
[2] 
http://techspot.zzzeek.org/2012/01/11/django-style-database-routers-in-sqlalchemy/
[3] Galera replication method: http://galeracluster.com/products/technology/
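
The routing approach from [1] and [2] boils down to overriding 
Session.get_bind(); a minimal sketch of the kind of thing I re-implemented 
(simplified, with hypothetical connection strings):

    from sqlalchemy import create_engine
    from sqlalchemy.orm import Session, sessionmaker

    engines = {
        'master': create_engine('mysql://user:pw@master-node/nova'),
        'slave': create_engine('mysql://user:pw@slave-node/nova'),
    }

    class RoutingSession(Session):
        def get_bind(self, mapper=None, clause=None):
            # Flushes (writes) go to the master; plain reads may go to
            # a slave, which is only safe with causal reads enabled.
            if self._flushing:
                return engines['master']
            return engines['slave']

    DBSession = sessionmaker(class_=RoutingSession)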


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db]A proposal for DB read/write separation

2014-08-10 Thread Mike Bayer

On Aug 10, 2014, at 11:17 AM, Li Ma  wrote:

> 
> How about Galera multi-master cluster? As Mike Bayer said, it is virtually 
> synchronous by default. It is still possible that outdated rows are queried 
> that make results not stable.

not sure if I said that :).  I know extremely little about galera.


> 
> 
> Let's move forward to synchronous replication, like Galera with causal-reads 
> on. The dominant advantage is that it has consistent relational dataset 
> support. The disadvantage are that it uses optimistic locking and its 
> performance sucks (also said by Mike Bayer :-). For optimistic locking 
> problem, I think it can be dealt with by retry-on-deadlock. It's not the 
> topic here.

I *really* don’t think I said that, because I like optimistic locking, and I’ve 
never used Galera ;).

Where I am ignorant here is of what exactly occurs if you write some rows 
within a transaction with Galera, then do some reads in that same transaction.  
 I’d totally guess that Galera would need to first have SELECTs come from a 
slave node, then the moment it sees any kind of DML / writing, it transparently 
switches the rest of the transaction over to a writer node.   No idea, but it 
has to be something like that?   


> 
> 
> So, the transparent read/write separation is dependent on such an 
> environment. SQLalchemy tutorial provides code sample for it [1]. Besides, 
> Mike Bayer also provides a blog post for it [2].

So this thing with the “django-style routers”, the way that example is, it 
actually would work poorly with a Session that is not in “autocommit” mode, 
assuming you’re working with regular old databases that are doing some simple 
behind-the-scenes replication.   Because again, if you do a flush, those rows 
go to the master, if the transaction is still open, then reading from the 
slaves you won’t see the rows you just inserted.So in reality, that example 
is kind of crappy, if you’re in a transaction (which we are) you’d really need 
to be doing session.using_bind(“master”) all over the place, and that is 
already way too verbose and hardcoded.   I’m wondering why I didn’t make a huge 
note of that in the post.  The point of that article was more to show that hey, 
you *can* control it at this level if you want to but you need to know what 
you’re doing.

Just to put it out there, this is what I think good high-level master/slave 
separation at the app level (reiterating: *if we want it in the app level at 
all*) should approximately look like:

@transaction.writer
def read_and_write_something(arg1, arg2, …):
# …

@transaction.reader
def only_read_something(arg1, arg2, …):
# …

that way there is no awareness of master/slave anything, the underlying system 
can decide what “reader” and “writer” means.   Do in-app switching between two 
databases, send out some magic signals to some commercial clustering service, 
have the “readers” work in “autocommit” mode, or do nothing, whatever.  The 
code doesn’t decide this imperatively.But it isn’t 100% “transparent”, this 
small amount of declaration per-method is needed.
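
One way such decorators could be wired up underneath (a sketch only, assuming 
a thread-local flag that a routing session's get_bind() would consult; none of 
this exists in oslo.db today):

    import functools
    import threading

    _ctx = threading.local()

    def _make_decorator(mode):
        def decorate(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                _ctx.mode = mode
                try:
                    return fn(*args, **kwargs)
                finally:
                    _ctx.mode = None
            return wrapper
        return decorate

    writer = _make_decorator('writer')
    reader = _make_decorator('reader')

    def current_mode():
        # A routing session's get_bind() could call this to pick an
        # engine, defaulting to the writer when nothing was declared.
        return getattr(_ctx, 'mode', None) or 'writer'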


> 
> What I did is to re-implement it in OpenStack DB API modules in my 
> development environment, using Galera cluster(causal-reads on). It has been 
> running perfectly for more than a week. The routing session manager works 
> well while maintaining data consistency.

OK so Galera would perhaps have some way to make this happen, and that’s great. 
   My understanding is that people are running Openstack already with Galera, 
that’s why we’re hitting issues with some of those SELECT..FOR UPDATEs that are 
being replaced with optimistic approaches as you mention. But beyond that 
this isn’t any kind of “change” to oslo.db or anything else.   Run Openstack 
with whatever database backend you want, ideally (that is my primary agenda, 
sorry MySQL vendors!).


> Finally, I think if we can integrate it into oslo.db, it is a perfect plus 
> for those who would like to deploy Galera (or other similar technology) as DB 
> backend.

this (the word “integrate”, and what does that mean) is really the only thing 
making me nervous.  If the integration here is the django blog post I have, 
it’s not going to work with transactions.   Either the system is magical enough 
that a single transaction can read/write from both sources midway and there is 
no “integration” needed, or the transaction has to be declared up front as 
reader or writer.  Or you don’t use transactions except for writers, which is 
essentially the same as “declaration up front”.

> 
> [1] 
> http://docs.sqlalchemy.org/en/rel_0_9/orm/session.html#custom-vertical-partitioning
> [2] 
> http://techspot.zzzeek.org/2012/01/11/django-style-database-routers-in-sqlalchemy/
> [3] Galera replication method: http://galeracluster.com/products/technology/
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] Failed to create a stack with heat (SoftwareDeployment resource is always in progress)

2014-08-10 Thread david ferahi
Hi guys,

I'm trying to create a simple stack with Heat.
The template contains SoftwareConfig, SoftwareDeployment and a single
server resource.

The problem is that the SoftwareDeployment resource is always in progress!

After waiting for more than an hour the stack deployment failed and I got
this error:

 TRACE heat.engine.resource HTTPUnauthorized: ERROR: Authentication failed.
Please try again with option --include-password or export
HEAT_INCLUDE_PASSWORD=1
TRACE heat.engine.resource Authentication required

When I checked the log file (/var/log/heat/heat-engine.log), it shows the
following message (every second):

2014-08-10 19:41:09.622 2391 INFO urllib3.connectionpool [-] Starting new
HTTP connection (1): 192.168.122.10
2014-08-10 19:41:10.648 2391 INFO urllib3.connectionpool [-] Starting new
HTTP connection (1): 192.168.122.10
2014-08-10 19:41:11.671 2391 INFO urllib3.connectionpool [-] Starting new
HTTP connection (1): 192.168.122.10
2014-08-10 19:41:12.690 2391 INFO urllib3.connectionpool [-] Starting new
HTTP connection (1): 192.168.122.10

Here the template I am using :
https://github.com/openstack/heat-templates/blob/master/hot/software-config/example-templates/wordpress/WordPress_software-config_1-instance.yaml

Please help !

Kind Regards,

David
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] API Reference page - proposed change

2014-08-10 Thread Anne Gentle
Hi all,
I want to get more eyes on a design change being proposed for the API
Reference Listing page, http://developer.openstack.org/api-ref.html.

To see the new output, look at this screenshot:
http://aa4698cc2bf4ab7e5907-ed3df21bb39de4e57eec9a20aa0b8711.r41.cf2.rackcdn.com/Screen%20Shot%202014-07-17%20at%2011.57.44%20AM.png

Or, checkout a built prototype at:
http://dec7ddbac72e46ecedbc-e72893bdbe528eca36a8c6e2c241a503.r71.cf2.rackcdn.com/api-ref-compute-v2.html

Basically it takes the very short description of an API call, emboldens it,
and adds more info about the call below the short description. Is this
useful? No new content has to be added to get this output.

Please let us know either on the mailing list or on the review itself:
https://review.openstack.org/#/c/107768/

Also, be aware we are seeking WADL replacements. Let me know of your
interest and any resources you're willing to lend to the API docs effort.

Thanks,
Anne
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][heat] a small experiment with Ansible in TripleO

2014-08-10 Thread Steve Baker
On 02/08/14 04:07, Allison Randal wrote:
> A few of us have been independently experimenting with Ansible as a
> backend for TripleO, and have just decided to try experimenting
> together. I've chatted with Robert, and he says that TripleO was always
> intended to have pluggable backends (CM layer), and just never had
> anyone interested in working on them. (I see it now, even in the early
> docs and talks, I guess I just couldn't see the forest for the trees.)
> So, the work is in line with the overall goals of the TripleO project.
>
> We're starting with a tiny scope, focused only on updating a running
> TripleO deployment, so our first work is in:
>
> - Create an Ansible Dynamic Inventory plugin to extract metadata from Heat
> - Improve/extend the Ansible nova_compute Cloud Module (or create a new
> one), for Nova rebuild
> - Develop a minimal handoff from Heat to Ansible, particularly focused
> on the interactions between os-collect-config and Ansible
>
> We're merging our work in this repo, until we figure out where it should
> live:
>
> https://github.com/allisonrandal/tripleo-ansible
>
> We've set ourselves one week as the first sanity-check to see whether
> this idea is going anywhere, and we may scrap it all at that point. But,
> it seems best to be totally transparent about the idea from the start,
> so no-one is surprised later.
>
Having pluggable backends for configuration seems like a good idea, and
Ansible is a great choice for the first alternative backend.

However what this repo seems to be doing at the moment is bypassing heat
to do a stack update, and I can only assume there is an eventual goal to
not use heat at all for stack orchestration too.

Granted, until blueprint update-failure-recovery lands[1], doing a
stack-update is about as much fun as Russian roulette. But this effort
is tactical rather than strategic, especially given TripleO's mission
statement.

If I were to use Ansible for TripleO configuration I would start with
something like the following:
* Install an ansible software-config hook onto the image to be triggered
by os-refresh-config[2][3]
* Incrementally replace StructuredConfig resources in
tripleo-heat-templates with SoftwareConfig resources that include the
ansible playbooks via get_file
* The above can start in a fork of tripleo-heat-templates, but can
eventually be structured using resource providers so that the deployer
chooses what configuration backend to use by selecting the environment
file that contains the appropriate config resources

Now you have a cloud orchestrated by heat and configured by Ansible. If
it is still deemed necessary to do an out-of-band update to the stack
then you're in a much better position to do an ansible push, since you
can use the same playbook files that heat used to bring up the stack.

[1] https://review.openstack.org/#/c/112938/
[2] https://review.openstack.org/#/c/95937/
[3]
http://git.openstack.org/cgit/openstack/heat-templates/tree/hot/software-config/elements/heat-config
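
For reference, the dynamic-inventory piece mentioned above is just an 
executable that prints JSON; a minimal sketch fed from Heat (assuming OS_* 
auth variables in the environment plus python-heatclient; the stack name and 
output key here are hypothetical):

    #!/usr/bin/env python
    import json
    import os

    from heatclient import client as heat_client
    from keystoneclient.v2_0 import client as ks_client

    ks = ks_client.Client(
        username=os.environ['OS_USERNAME'],
        password=os.environ['OS_PASSWORD'],
        tenant_name=os.environ['OS_TENANT_NAME'],
        auth_url=os.environ['OS_AUTH_URL'])
    heat_url = ks.service_catalog.url_for(service_type='orchestration')
    heat = heat_client.Client('1', endpoint=heat_url, token=ks.auth_token)

    # Group all servers listed in a (hypothetical) 'server_ips' stack
    # output under one Ansible group.
    inventory = {'overcloud': {'hosts': []}}
    stack = heat.stacks.get('overcloud')
    for output in stack.outputs or []:
        if output['output_key'] == 'server_ips':
            inventory['overcloud']['hosts'] = output['output_value']

    print(json.dumps(inventory))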

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS][Tempest] LBaaS API Tempest testing status update

2014-08-10 Thread Miguel Lavalle
Hi,

I have concluded the API testing of LBaaS v2. All the operations on all the
resources pass the API tests, with both the JSON and XML interfaces. I will
now move on to scenario testing.

The extensions to the neutron clients in tempest are verified to work and
good to merge. Please review the patchset here
https://review.openstack.org/#/c/106089/

Cheers
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-08-10 Thread Michael Still
On Sun, Aug 10, 2014 at 2:33 AM, Jeremy Stanley  wrote:
> On 2014-08-08 09:06:29 -0400 (-0400), Russell Bryant wrote:
> [...]
>> We've seen several times that building and maintaining 3rd party
>> CI is a *lot* of work.
>
> Building and maintaining *any* CI is a *lot* of work, not the least
> of which is the official OpenStack project CI (I believe Monty
> mentioned in #openstack-infra last night that our CI is about twice
> the size of Travis-CI now, not sure what metric he's comparing there
> though).
>
>> Like you said in [1], doing this in infra's CI would be ideal. I
>> think 3rd party should be reserved for when running it in the
>> project's infrastructure is not an option for some reason
>> (requires proprietary hw or sw, for example).
>
> Add to the "not an option for some reason" list, software which is
> not easily obtainable through typical installation channels (PyPI,
> Linux distro-managed package repositories for their LTS/server
> releases, et cetera) or which requires gyrations which destabilize
> or significantly complicate maintenance of the overall system as
> well as reproducibility for developers. It may be possible to work
> around some of these concerns via access from multiple locations
> coupled with heavy caching, but adding that in for a one-off source
> is hard to justify the additional complexity too.

My understanding is that Fedora has a PPA equivalent which ships a
"latest and greatest" libvirt. So, it would be packages if we went the
Fedora route, which should be less work.

Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-10 Thread Michael Still
On Fri, Aug 8, 2014 at 7:06 PM, Thierry Carrez  wrote:
> Michael Still wrote:
>> [...] I think an implied side effect of
>> the runway system is that nova-drivers would -2 blueprint reviews
>> which were not occupying a slot.
>>
>> (If we start doing more -2's I think we will need to explore how to
>> not block on someone with -2's taking a vacation. Some sort of role
>> account perhaps).
>
> Ideally CodeReview-2s should be kept for blocking code reviews on
> technical grounds, not procedural grounds. For example it always feels
> weird to CodeReview-2 all feature patch reviews on Feature Freeze day --
> that CodeReview-2 really doesn't have the same meaning as a traditional
> CodeReview-2.
>
> For those "procedural blocks" (feature freeze, waiting for runway
> room...), it might be interesting to introduce a specific score
> (Workflow-2 perhaps) that drivers could set. That would not prevent code
> review from happening, that would just clearly express that this is not
> ready to land for release cycle / organizational reasons.
>
> Thoughts?

Agreed, especially if any member of a group can manipulate that value.
I don't like pinging people on vacation to remove procedural -2s.

Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] oslo.concurrency repo review

2014-08-10 Thread Joshua Harlow

One question from me:

Will there be later fixes to remove oslo.config dependency/usage from 
oslo.concurrency?


I still don't understand how oslo.concurrency can be used as a library 
with the configuration being set in a static manner via oslo.config 
(let's use the example of `lock_path` @ 
https://github.com/YorikSar/oslo.concurrency/blob/master/oslo/concurrency/lockutils.py#L41). 
For example:


Library X inside application Z uses lockutils (via the nice 
oslo.concurrency library) and sets the configuration `lock_path` to its 
desired value; then library Y (also a user of oslo.concurrency) inside 
the same application Z sets the configuration `lock_path` to *its* 
desired value. Now both have set some unknown mix of configuration, and 
when library X (or Y) continues to use lockutils it will be using some 
mish-mash of the settings set by X and Y; perhaps a `lock_path` that 
neither actually wants to write to...


This doesn't seem like it will end well, and will just cause headaches 
during debug sessions, testing, integration and more...


The same question can be asked about the `set_defaults()` function: how 
are library Y and X expected to use it (are they?)?
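
A tiny sketch of the collision being described (the library names are made 
up; the calls are as I read them from the linked lockutils module):

    from oslo.concurrency import lockutils

    # Library X configures the process-wide default...
    lockutils.set_defaults(lock_path='/var/lib/libx/locks')

    # ...and library Y later stomps on it, for the whole process.
    lockutils.set_defaults(lock_path='/var/lib/liby/locks')

    # From here on, X's external locks land under Y's path, because
    # both libraries share the single global oslo.config object.
    @lockutils.synchronized('some-resource', external=True)
    def do_something():
        pass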


I hope one of the later changes is to remove/fix this??

Thoughts?

-Josh

On 08/07/2014 01:58 PM, Yuriy Taraday wrote:
> Hello, oslo cores.
> 
> I've finished polishing up oslo.concurrency repo at [0] - please take a
> look at it. I used my new version of graduate.sh [1] to generate it, so
> history looks a bit different from what you might be used to.
> 
> I've made as little changes as possible, so there're still some steps left
> that should be done after new repo is created:
> - fix PEP8 errors H405 and E126;
> - use strutils from oslo.utils;
> - remove eventlet dependency (along with random sleeps), but proper testing
> with eventlet should remain;
> - fix for bug [2] should be applied from [3] (although it needs some
> improvements);
> - oh, there's really no limit for this...
> 
> I'll finalize and publish relevant change request to openstack-infra/config
> soon.
> 
> Looking forward to any feedback!
> 
> [0] https://github.com/YorikSar/oslo.concurrency
> [1] https://review.openstack.org/109779
> [2] https://bugs.launchpad.net/oslo/+bug/1327946
> [3] https://review.openstack.org/108954






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack][InstanceGroup] metadetails and metadata in instance_group.py

2014-08-10 Thread Jay Lau
Hi,

Does anyone know why, in instance_group.py, we have the following
logic for transferring metadetails to metadata? Why not transfer metadata
directly from the client?

https://github.com/openstack/nova/blob/master/nova/objects/instance_group.py#L99-L101

-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] [ft] Improving ceil.objectstore.swift_middleware

2014-08-10 Thread Osanai, Hisashi

On Friday, August 08, 2014 9:20 PM, Chris Dent wrote:

> These may not be directly what you want, but are something worth
> tracking as you explore and think.

Thank you for your help.

I will brush up my thought (shift to pollster) with the fixes which 
you pointed out.

Thanks again!
Hisashi Osanai

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][InstanceGroup] metadetails and metadata in instance_group.py

2014-08-10 Thread Jay Lau
I was asking this because I got a "-2" for
https://review.openstack.org/109505 ; I just want to know why this new term
"metadetails" was invented when we already have "details", "metadata",
"system_metadata", "instance_metadata", and "properties" (on images and
volumes).

Thanks!


2014-08-11 10:09 GMT+08:00 Jay Lau :

> Hi,
>
> Does anyone know why in instance_group.py, why do we have the following
> logic for transferring metadetails to metadata? Why not transfer metadata
> directly from client?
>
>
> https://github.com/openstack/nova/blob/master/nova/objects/instance_group.py#L99-L101
>
> --
> Thanks,
>
> Jay
>



-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron][Technical Committee] nova-network -> Neutron. Throwing a wrench in the Neutron gap analysis

2014-08-10 Thread Tom Fifield

On 09/08/14 05:09, Russell Bryant wrote:

On 08/06/2014 01:41 PM, Jay Pipes wrote:

On 08/06/2014 01:40 AM, Tom Fifield wrote:

On 06/08/14 13:30, Robert Collins wrote:

On 6 August 2014 17:27, Tom Fifield  wrote:

On 06/08/14 13:24, Robert Collins wrote:



What happened to your DB migrations then? :)



Sorry if I misunderstood, I thought we were talking about running VM
downtime here?


While DB migrations are running things like the nova metadata service
can/will misbehave - and user code within instances will be affected.
Thats arguably VM downtime.

OTOH you could define it more narrowly as 'VMs are not powered off' or
'VMs are not stalled for more than 2s without a time slice' etc etc -
my sense is that most users are going to be particularly concerned
about things for which they have to *do something* - e.g. VMs being
powered off or rebooted - but having no network for a short period
while vifs are replugged and the overlay network re-establishes itself
would be much less concerning.


I think you've got it there, Rob - nicely put :)

In many cases the users I've spoken to who are looking for a live path
out of nova-network on to neutron are actually completely OK with some
"API service" downtime (metadata service is an API service by their
definition). A little 'glitch' in the network is also OK for many of
them.

Contrast that with the original proposal in this thread ("snapshot VMs
in old nova-network deployment, store in Swift or something, then launch
VM from a snapshot in new Neutron deployment") - it is completely
unacceptable and is not considered a migration path for these users.


Who are these users? Can we speak with them? Would they be interested in
participating in the documentation and migration feature process?


Yes, I'd really like to see some participation in the development of a
solution if it's an important requirement.  Until then, it feels like a
case of an open question of "what do you want".  Of course the answer is
"a pony".



... and this is exactly why """...raising this concept only on a 
development mailing list is a bad idea


If anyone is serious about not providing a proper migration path for 
these users that need it, there is a need to be yelling this for 
probably a few summits in a row and at every OpenStack event we have in 
between, as well as across the full gamut of periodic surveys, blogs, twitters, 
weibos, linkedins, facebooks etc,"""


So, get cracking :)


Regards,


Tom


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron][Technical Committee] nova-network -> Neutron. Throwing a wrench in the Neutron gap analysis

2014-08-10 Thread Jian Wen
Our use case is like the Yahoo one.

2014-08-07 7:10 GMT+08:00 Ed Hall :

>
>
> tl;dr: we’re willing to be a use case, but our internal timeline is such
> that in all likelihood
> this will be as a post-mortem.
>
> We (Yahoo) have thousands of pets that need to be migrated, as well as an
> unspecified
> number of cattle. A “live" strategy is strongly preferred (I’m not saying
> “live" migration
> since in our case it needs to be an in-place operation, not shuffling
> instances around).
> But several seconds of network outage? No problem. Disabling VM
> creation/deletion,
> or even the entire Nova API for a few hours? Well take the grumbling from
> our internal
> teams. A suspend/snapshot/cold-migrate would be an absolute last resort,
> and frankly
> could push back our aggressive migration timeline significantly.
>

>
+1
 We (letv.com) have thousands of cattle that need to be migrated, as well
 as an unspecified number of pets. The zoo is growing really fast.


> As an alternative, we’re looking at DB-to-DB translation, with a one-shot
> script run on
> the compute nodes to move network taps. We’d actually worked this out back
> in the
> Quantum/Folsom era but backed off due to OVS/device driver issues (don’t
> ask -- I still
> get nightmares). This, of course, would require an API outage, and is a
> "big bang"
> approach (one of the attractions of Oleg’s approach is that we can migrate
> a few low-
> value instances and then examine results carefully before proceeding). But
> once again,
> our solution is likely to be of limited interest -- flat network without
> DHCP, no routers or
> floating IPs, unconventional (for OpenStack) use of VLANs -- though we’d
> be happy
> to share once the dust settles.
>
+1
Except that DHCP is used in one of our flat networks.

Our OpenStack distribution is based on the latest OpenStack Havana
release.


> -Ed Hall
> edh...@yahoo-inc.com
>
>
> On Aug 5, 2014, at 7:11 PM, "Joe Gordon"  wrote:
>
>
> On Aug 5, 2014 12:57 PM, "Jay Pipes"  wrote:
> >
> > On 08/05/2014 03:23 PM, Collins, Sean wrote:
> >>
> >> On Tue, Aug 05, 2014 at 12:50:45PM EDT, Monty Taylor wrote:
> >>>
> >>> However, I think the cost to providing that path far outweighs
> >>> the benefit in the face of other things on our plate.
> >>
> >>
> >> Perhaps those large operators that are hoping for a
> >> Nova-Network->Neutron zero-downtime live migration, could dedicate
> >> resources to this requirement? It is my direct experience that features
> >> that are important to a large organization will require resources
> >> from that very organization to be completed.
> >
> >
> > Indeed, that's partly why I called out Metacloud in the original post,
> as they were brought up as a deployer with this potential need. Please, if
> there are any other shops that:
>
> Perhaps I am not remembering all the details discussed at the nova
> mid-cycle, but Metacloud was brought up as an example company uses nova
> network and not neutron, not as a company that needs live migration. And
> that getting them to move to neutron would be a good litmus test for
> nova-network performance parity, something that is very hard to do in the
> gate.   But that was all said without any folks from Metacloud in the room,
> so we may both be wrong.
>
> >
> > * Currently deploy nova-network
> > * Need to move to Neutron
> > * Their tenants cannot tolerate any downtime due to a cold migration
> >
> > Please do comment on this thread and speak up.
> >
> > Best,
> > -jay
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>  ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>  ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best,

Jian
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db]A proposal for DB read/write separation

2014-08-10 Thread Li Ma
> not sure if I said that :).  I know extremely little about galera.

Hi Mike Bayer, I'm so sorry I mistook you for Mike Wilson in the last post. 
:-) Apologies to Mike Wilson as well.

> I’d totally guess that Galera would need to first have SELECTs come from a 
> slave node, then the moment it sees any kind of DML / writing, it 
> transparently switches the rest of the transaction over to a writer node.

You are totally right.

> 
> @transaction.writer
> def read_and_write_something(arg1, arg2, …):
> # …
> 
> @transaction.reader
> def only_read_something(arg1, arg2, …):
> # …

The first approach I had in mind was a decorator-based method to separate 
read/write ops, like what you said. To some degree it is almost the same 
app-level approach as the master/slave configuration, in terms of transparency 
to developers. However, as I stated before, the current approach is barely 
used in OpenStack. A decorator is friendlier than use_slave_flag or something 
like that. Even if ideal transparency cannot be achieved, decorator-based 
app-level switching is, to say the least, a great improvement over the 
current implementation.

> OK so Galera would perhaps have some way to make this happen, and that's 
> great.

If any Galera expert here, please correct me. At least in my experiment, 
transactions work in that way.

> this (the word “integrate”, and what does that mean) is really the only thing 
> making me nervous.

Mike, no need to be nervous. What I'd like to do is add a django-style routing 
method as a plus in oslo.db, like:

[database]
# Original master/slave configuration
master_connection = 
slave_connection = 

# Only Support Synchronous Replication
enable_auto_routing = True

[db_cluster]
master_connection = 
master_connection = 
...
slave_connection = 
slave_connection = 
...
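
If it helps make the proposal concrete, the new options could be registered 
like any other oslo.config options (a sketch; none of these options exist in 
oslo.db today):

    from oslo.config import cfg

    database_opts = [
        cfg.BoolOpt('enable_auto_routing', default=False,
                    help='Route reads and writes automatically; only safe '
                         'with synchronous replication.'),
    ]
    db_cluster_opts = [
        cfg.MultiStrOpt('master_connection', help='Writer node DSNs.'),
        cfg.MultiStrOpt('slave_connection', help='Reader node DSNs.'),
    ]

    cfg.CONF.register_opts(database_opts, group='database')
    cfg.CONF.register_opts(db_cluster_opts, group='db_cluster')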

HOWEVER, I think it needs more investigation, which is why I'd like to bring 
it to the mailing list at this early stage, to raise some in-depth discussion. 
I'm not a Galera expert. I really appreciate any challenges here.

Thanks,
Li Ma


- Original Message -
From: "Mike Bayer" 
To: "OpenStack Development Mailing List (not for usage questions)" 
Sent: Sunday, August 10, 2014, 11:57:47 PM
Subject: Re: [openstack-dev] [oslo.db]A proposal for DB read/write separation


[quoted message trimmed; see Mike Bayer's reply earlier in this thread]

Re: [openstack-dev] Retrigger turbo-hipster

2014-08-10 Thread Joshua Hesketh

Hi John,

Sorry for the slow reply*. We ran into some trouble with our CI system 
after an upgrade to nova's requirements broke it.


Everything is back up and running now and we've caught up on missed jobs 
(including your recheck). My apologies for the hassle. Let me know if 
you have any further problems.


Cheers,
Josh

* We had replied to your email to rcbau; let me know if you didn't 
receive that, in case there is something wrong with our emails.


Rackspace Australia

On 8/9/14 2:10 PM, Anita Kuno wrote:

On 08/08/2014 02:53 PM, jswar...@linux.vnet.ibm.com wrote:

I'm unable to retrigger the turbo-hipster verification job on a change
("recheck migrations" comment retriggers Jenkins but not turbo-hipster)
and I sent an e-mail to rc...@rcbops.com two days ago and still have not
received a reply.  Has anyone else run into this problem and found a way
to resolve it?

Thanks,

John


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Hi John:

I've included Josh on this email and also added the openstack-infra
mailing list (where the majority of third-party ci questions get posted).

Since it is the weekend in Australia it might be a day or so before we
get a response but I am confident we will get one.

Thanks John,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev