Re: [openstack-dev] [Trove] Core reviewer update

2015-02-05 Thread Tim Simpson
+1

Alas, my days of offering substantial activity on Trove are over. Best of luck 
in all of your adventures!



On Feb 5, 2015, at 10:26 AM, Nikhil Manchanda slick...@gmail.com wrote:

Hello Trove folks:

Keeping in line with other OpenStack projects, and attempting to keep
the momentum of reviews in Trove going, we need to keep our core team up
to date -- folks who are regularly doing good reviews on the code should
be brought into core, and folks whose involvement is dropping off should
be considered for removal, since they lose context over time as they
become less involved.

For this update I'm proposing the following changes:
- Adding Peter Stachowski (peterstac) to trove-core
- Adding Victoria Martinez De La Cruz (vkmc) to trove-core
- Adding Edmond Kotowski (edmondk) to trove-core
- Removing Michael Basnight (hub_cap) from trove-core
- Removing Tim Simpson (grapex) from trove-core

For context on Trove reviews and who has been active, please see
Russell's stats for Trove at:
- http://russellbryant.net/openstack-stats/trove-reviewers-30.txt
- http://russellbryant.net/openstack-stats/trove-reviewers-90.txt

Trove-core members -- please reply with your vote on each of these
proposed changes to the core team. Peter, Victoria and Eddie -- please
let me know of your willingness to be in trove-core. Michael and Tim --
if you are planning on being substantially active on Trove in the near
term, please also let me know.

Thanks,
Nikhil
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Trove] Guest RPC API improvements. Messages, topics, queues, consumption.

2014-10-31 Thread Tim Simpson
Hi Denis,

It seems like the issue you're trying to solve is that these 'prepare' messages 
can't be consumed by the guest.
So, if the guest never actually comes online and therefore can't consume the 
prepare call, then you'll be left with the message in the queue forever.

If you use a ping-pong message, you'll still be left with a stray message in 
the queue if it fails.

I think the best fix is to delete the queue when deleting an instance. This 
way you'll never have more queues in RabbitMQ than are needed.
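A minimal sketch of the idea, using an in-memory stand-in for RabbitMQ (the real cleanup would go through the broker's queue-delete operation; FakeBroker, delete_instance, and the queue naming scheme are illustrative assumptions, not Trove code):

```python
from collections import defaultdict

class FakeBroker:
    """In-memory stand-in for RabbitMQ: named queues holding messages."""
    def __init__(self):
        self.queues = defaultdict(list)

    def publish(self, queue, message):
        self.queues[queue].append(message)

    def delete_queue(self, queue):
        # Deleting the queue discards any unconsumed messages along with
        # it -- e.g. a 'prepare' call a guest never picked up.
        self.queues.pop(queue, None)

def delete_instance(broker, instance_id):
    """Sketch of instance teardown that also removes the guest's RPC queue."""
    broker.delete_queue("guestagent.%s" % instance_id)  # hypothetical naming

broker = FakeBroker()
broker.publish("guestagent.abc-123", {"method": "prepare"})  # guest never consumed it
delete_instance(broker, "abc-123")
print("guestagent.abc-123" in broker.queues)  # → False
```

With a real broker the same delete would also drop the stray 'prepare' message, so nothing lingers after the instance is gone.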

Thanks,

Tim



From: Denis Makogon [dmako...@mirantis.com]
Sent: Friday, October 31, 2014 4:32 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Trove] Guest RPC API improvements. Messages, topics, 
queues, consumption.


Hello, Stackers/Trovers.



I’d like to start a discussion about how we use the guestagent API; this will 
eventually be evaluated as a spec. Those of you who are familiar with Trove’s 
codebase know how Trove acts when provisioning a new instance.

I’d like to point out the following:

  1.  When we provision a new instance, we expect the guest to create its own 
topic/queue for RPC messaging.

  2.  The taskmanager doesn’t validate that the guest is really up before 
sending the ‘prepare’ call.

And here comes the problem: what if the guest wasn’t able to start properly and 
consume the ‘prepare’ message due to certain circumstances? In this case the 
‘prepare’ message would never be consumed.


Sergey Gotliv and I were looking for a proper solution for this case, and we 
ended up with the following requirements for the provisioning workflow:

  1.  We must be sure that the ‘prepare’ message will be consumed by the guest.

  2.  The taskmanager should handle topic/queue management for the guest.

  3.  The guest just needs to consume incoming messages from an already 
existing topic/queue.

As a concrete proposal (or at least a topic for discussion), I’d like to 
discuss the following improvements:

We need to add a new guest RPC API call that represents a “ping-pong” action, 
so that before sending any cast- or call-type messages we can make sure that 
the guest is really running.


Pros/Cons for such solution:

  1.  The guest will do only consuming.

  2.  The guest would not manage its topics/queues.

  3.  We’ll be 100% sure that no messages are lost.

  4.  Fast failure during provisioning.

  5.  Other minor/major improvements.
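A rough sketch of what the ping-pong check could look like on the taskmanager side (the rpc_call callable, the timeout values, and GuestTimeout are assumptions for illustration; a real implementation would go through Trove's RPC layer):

```python
import time

class GuestTimeout(Exception):
    """Raised when the guest never answers the ping: fail fast."""

def wait_for_guest(rpc_call, timeout=60, interval=2):
    """Issue a lightweight call-type 'ping' until the guest answers 'pong'.

    Only after this succeeds would the taskmanager send 'prepare', so a
    guest that never came up is detected before any message is lost.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            if rpc_call("ping") == "pong":
                return
        except Exception:
            pass  # guest not consuming yet; retry until the deadline
        time.sleep(interval)
    raise GuestTimeout("guest never answered ping")

# Simulated guest that answers immediately:
wait_for_guest(lambda method: "pong", timeout=5, interval=0.1)
print("guest is up")
```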



Thoughts?


P.S.: I’d like to discuss this topic during the upcoming Paris summit (during 
the contributor meetup on Friday).



Best regards,

Denis Makogon

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Proposal to add Iccha Sethi to trove-core

2014-10-30 Thread Tim Simpson
+1 


From: Nikhil Manchanda [nik...@manchanda.me]
Sent: Thursday, October 30, 2014 3:47 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Trove] Proposal to add Iccha Sethi to trove-core

Hello folks:

I'm proposing to add Iccha Sethi (iccha on IRC) to trove-core.

Iccha has been working with Trove for a while now. She has been a
very active reviewer, and has provided insightful comments on
numerous reviews. She has submitted quality code for multiple bug-fixes
in Trove, and most recently drove the per datastore volume support BP in
Juno. She was also a crucial part of the team that implemented
replication in Juno, and helped close out multiple replication related
issues during Juno-3.

https://review.openstack.org/#/q/reviewer:iccha,n,z
https://review.openstack.org/#/q/owner:iccha,n,z

Please respond with +1/-1, or any further comments.

Thanks,
Nikhil

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [Trove] Cluster implementation is grabbing instance's guts

2014-09-11 Thread Tim Simpson
Hi everyone,

I was looking through the clustering code today and noticed a lot of it is 
grabbing what I'd call the guts of the instance models code.

The best example is here: 
https://github.com/openstack/trove/commit/06196fcf67b27f0308381da192da5cc8ae65b157#diff-a4d09d28bd2b650c2327f5d8d81be3a9R89

In the _all_instances_ready function, I would have expected 
trove.instance.models.load_any_instance to be called for each instance ID and 
its status to be checked.

Instead, the service_status is being called directly. That is a big mistake. 
For now it works, but in general it leaks the concern of "what is an instance 
status?" to code outside of the instance class itself.

For an example of why this is bad, look at the method 
_instance_ids_with_failures. The code is checking for failures by seeing if 
the service status is failed. What if the Nova server or Cinder volume has 
tanked instead? The code won't work as expected.

It could be we need to introduce another status besides BUILD to instance 
statuses, or we need to introduce a new internal property to the SimpleInstance 
base class we can check. But whatever we do we should add this extra logic to 
the instance class itself rather than put it in the clustering models code.

This is a minor nitpick but I think we should fix it before too much time 
passes.

Thanks,

Tim
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Proposal to add Amrith Kumar to trove-core

2014-08-26 Thread Tim Simpson
+1


From: Sergey Gotliv [sgot...@redhat.com]
Sent: Tuesday, August 26, 2014 8:11 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Trove] Proposal to add Amrith Kumar to trove-core

Strong +1 from me!


 -Original Message-
 From: Nikhil Manchanda [mailto:nik...@manchanda.me]
 Sent: August-26-14 3:48 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Trove] Proposal to add Amrith Kumar to trove-core

 Hello folks:

 I'm proposing to add Amrith Kumar (amrith on IRC) to trove-core.

 Amrith has been working with Trove for a while now. He has been a
 consistently active reviewer, and has provided insightful comments on
 numerous reviews. He has submitted quality code for multiple bug-fixes in
 Trove, and most recently drove the audit and clean-up of log messages across
 all Trove components.

 https://review.openstack.org/#/q/reviewer:amrith,n,z
 https://review.openstack.org/#/q/owner:amrith,n,z

 Please respond with +1/-1, or any further comments.

 Thanks,
 Nikhil

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Glance][Trove] Metadata Catalog

2014-07-24 Thread Tim Simpson
I agree as well.

I think we should spend less time worrying about what other projects in 
OpenStack might do in the future and spend more time on adding the features we 
need today to Trove. I understand that it's better to work together but too 
often we stop progress on something in Trove to wait on a feature in another 
project that is either incomplete or merely being planned.

While this stems from our strong desire to be part of the community, which is a 
good thing, it hasn't actually led many of us to do work for these other 
projects. At the same time, it has negatively impacted Trove. I also think it 
leads us to over-design or incorrectly design features as we plan for 
functionality in other projects that may never materialize in the forms we 
expect.

So my vote is we merge our own metadata feature and not fret over how metadata 
may end up working in Glance.

Thanks,

Tim


From: Iccha Sethi [iccha.se...@rackspace.com]
Sent: Thursday, July 24, 2014 4:02 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance][Trove] Metadata Catalog

+1

We are unsure when these changes will get into glance.
IMO we should go ahead with our instance metadata patch for now, and when things 
are ready in Glance land we can consider migrating to using it as a generic 
metadata repository.

Thanks,
Iccha

From: Craig Vyvial cp16...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Thursday, July 24, 2014 at 3:04 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Glance][Trove] Metadata Catalog

Denis,

The scope of the metadata API goes beyond just using the Glance metadata. The 
metadata can be used for instances and other objects to add extra data like 
tags or anything else that a UI might want to use. We need this feature 
either way.

-Craig


On Thu, Jul 24, 2014 at 12:17 PM, Amrith Kumar amr...@tesora.com wrote:
Speaking as a ‘database guy’ and a ‘Trove guy’, I’ll say this: “metadata” is a 
very generic term, and the meaning of “metadata” in a database context is very 
different from the meaning of “metadata” in the context that Glance is 
providing.

Furthermore the usage and access pattern for this metadata, the frequency of 
change, and above all the frequency of access are fundamentally different 
between Trove and what Glance appears to be offering, and we should probably 
not get too caught up in the project “title”.

We would not be “reinventing the wheel” if we implemented an independent 
metadata scheme for Trove; we would be implementing the right kind of wheel for 
the vehicle that we are operating. Therefore I do not agree with your 
characterization that concludes that:

 given goals at [1] are out of scope of Database program, etc

Just to be clear, when you write:

 Unfortunately, we’re(Trove devs) are on half way to metadata …

it is vital to understand that our view of “metadata” is very different from 
(for example, a file system’s view of metadata, or potentially Glance’s view of 
metadata). For that reason, I believe that your comments on 
https://review.openstack.org/#/c/82123/16 are also somewhat extreme.

Before postulating a solution (or “delegating development to Glance devs”), it 
would be more useful to fully describe the problem being solved by Glance and 
the problem(s) we are looking to solve in Trove, and then we could have a 
meaningful discussion about the right solution.

I submit to you that we will come away concluding that there is a round peg, 
and a square hole. Yes, one will fit in the other but the final product will 
leave neither party particularly happy with the end result.

-amrith

From: Denis Makogon [mailto:dmako...@mirantis.com]
Sent: Thursday, July 24, 2014 9:33 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Glance][Trove] Metadata Catalog


Hello, Stackers.

 I’d like to discuss the future of the Trove metadata API. But first, some 
history (mostly taken from the Trove metadata spec, see [1]):
Instance metadata is a feature that has been requested frequently by our users. 
They need a way to store critical information for their instances and have that 
be associated with the instance so that it is displayed whenever that instance 
is listed via the API. This also becomes very usable from a testing perspective 
when doing integration/ci. We can utilize the metadata to store things like 
what process created the instance, what the instance is being used for, etc... 
The design for this feature is modeled heavily on the Nova metadata API with a 
few tweaks in how it works internally.

And here comes conflict. Glance devs are working on “Glance 

Re: [openstack-dev] [Trove] Guest prepare call polling mechanism issue

2014-07-23 Thread Tim Simpson
To summarize, this is a conversation about the following LaunchPad bug: 
https://launchpad.net/bugs/1325512
and Gerrit review: https://review.openstack.org/#/c/97194/6

You are saying that the function _service_is_active, in addition to polling the 
datastore service status, also polls the status of the Nova resource. At first I 
thought this wasn't the case; however, looking at your pull request I was 
surprised to see that line 320 
(https://review.openstack.org/#/c/97194/6/trove/taskmanager/models.py) polls 
Nova using the "get" method (which I wish was called "refresh", as to me "get" 
sounds like a lazy-loader or something, despite making a full GET request each 
time).
So moving this polling out of there into the two respective create_server 
methods, as you have done, is not only going to be useful for Heat and avoid 
the issue of calling Nova 99 times that you describe, but it will actually help 
operations teams see more clearly that the issue was with a server that didn't 
provision. We actually had an issue in staging the other day that took us 
forever to figure out: the server wasn't provisioning, but before anything 
checked that it was ACTIVE, the DNS code detected that the server had no IP 
address (never mind that it was in a FAILED state), so the logs surfaced this 
as a DNS error. This change should help us avoid such issues.

Thanks,

Tim



From: Denis Makogon [dmako...@mirantis.com]
Sent: Wednesday, July 23, 2014 7:30 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Trove] Guest prepare call polling mechanism issue


Hello, Stackers.


I’d like to discuss guestagent prepare call polling mechanism issue (see [1]).


Let me first describe why this is actually an issue and why it should be fixed. 
Those of you who are familiar with Trove know that it can provision instances 
through the Nova API and the Heat API (see [2] and [3]).



What’s the difference between these two ways (in general)? The answer is 
simple:

- The Heat-based provisioning method has a polling mechanism that verifies that 
stack provisioning completed successfully (see [4]), which means that all stack 
resources are in ACTIVE state.

- The Nova-based provisioning method doesn’t do any polling. This is wrong, 
since the instance can’t fail as fast as possible: the Trove taskmanager 
service doesn’t verify that the launched server has reached ACTIVE state. 
That’s issue #1 - the compute instance state is unknown, whereas with Heat the 
delivered resources are already in ACTIVE state.


Once method [2] or [3] finishes, the taskmanager prepares data for the guest 
(see [5]) and then tries to send the prepare call to the guest (see [6]). Here 
comes issue #2 - the polling mechanism makes at least 100 API calls to Nova to 
determine the compute instance status.

The taskmanager also makes almost the same number of calls to the Trove backend 
to discover the guest status, which is totally normal.


So here comes the question: why should I call Nova 99 more times for the same 
value if the value returned the first time was completely acceptable?



There’s only one way to fix it. Since Heat-based provisioning delivers an 
instance with a status validation procedure, the same thing should be done for 
Nova-based provisioning: we should extract compute instance status polling from 
the guest prepare polling mechanism and integrate it into [2], leaving only 
guest status discovery in the guest prepare polling mechanism.





Benefits? The proposed fix gives the ability to fail fast for corrupted 
instances, and it reduces the number of redundant Nova API calls made while 
attempting to discover the guest status.
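A sketch of the proposed split, with toy client objects in place of the Nova client and the Trove backend (poll_until loosely mirrors the helper in trove.common.utils; the fake clients and status values are illustrative assumptions):

```python
import time

def poll_until(retriever, condition, sleep_time=0.01, time_out=1.0):
    """Minimal imitation of a generic polling helper."""
    deadline = time.monotonic() + time_out
    while time.monotonic() < deadline:
        value = retriever()
        if condition(value):
            return value
        time.sleep(sleep_time)
    raise RuntimeError("timed out waiting for condition")

def create_server_and_wait(nova_client):
    """Nova-based provisioning now fails fast, like the Heat path does."""
    server = nova_client.create()
    # Poll Nova *here*, during provisioning, instead of ~100 times from
    # the guest prepare loop.
    poll_until(lambda: nova_client.status(server), lambda s: s == "ACTIVE")
    return server

def wait_for_guest(backend):
    """The prepare polling loop now only asks about the guest status."""
    poll_until(backend.guest_status, lambda s: s == "RUNNING")

class FakeNova:
    def create(self):
        return "server-1"
    def status(self, server):
        return "ACTIVE"

class FakeBackend:
    def guest_status(self):
        return "RUNNING"

create_server_and_wait(FakeNova())
wait_for_guest(FakeBackend())
print("provisioned")
```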



Proposed fix for this issue - [7].


[1] - https://launchpad.net/bugs/1325512

[2] - 
https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L198-L215

[3] - 
https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L190-L197

[4] - 
https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L420-L429

[5] - 
https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L217-L256

[6] - 
https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L254-L266

[7] - https://review.openstack.org/#/c/97194/



Thoughts?


Best regards,

Denis Makogon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Proposal to add Craig Vyvial to trove-core

2014-05-06 Thread Tim Simpson
+1

From: Peter Stachowski [pe...@tesora.com]
Sent: Tuesday, May 06, 2014 9:06 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Trove] Proposal to add Craig Vyvial to trove-core

+1

-Original Message-
From: Nikhil Manchanda [mailto:nik...@manchanda.me]
Sent: May-06-14 5:32 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Trove] Proposal to add Craig Vyvial to trove-core


Hello folks:

I'm proposing to add Craig Vyvial (cp16net) to trove-core.

Craig has been working with Trove for a while now. He has been a consistently 
active reviewer, and has provided insightful comments on numerous reviews. He 
has submitted quality code to multiple features in Trove, and most recently 
drove the implementation of configuration groups in Icehouse.

https://review.openstack.org/#/q/reviewer:%22Craig+Vyvial%22,n,z
https://review.openstack.org/#/q/owner:%22Craig+Vyvial%22,n,z

Please respond with +1/-1, or any further comments.

Thanks,
Nikhil

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [trove] scheduled tasks redux

2014-01-24 Thread Tim Simpson
  Would it make more sense for an operator to configure a time window, and 
 then let users choose a slot within a time window (and say there are a 
 finite number of slots in a time window). The slotting would be done behind 
 the scenes and a user would only be able to select a window, and if the 
 slots are all taken, it wont be shown in the get available time windows. 
 the available time windows could be smart, in that, your avail time 
 window _could be_ based on the location of the hardware your vm is sitting 
 on (or some other rule…). Think network saturation if everyone on host A is 
 doing a backup to swift.

Allowing operators to define time windows seems preferable to me; I think a 
cron like system might be too granular. Having windows seems easier to schedule 
and would enable an operator to change things in a pinch.
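For illustration, the operator-defined window idea could be modeled roughly like this (Window, the capacity numbers, and the function names are all hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Window:
    """An operator-defined backup window with a finite number of slots."""
    name: str
    capacity: int
    booked: int = 0

    @property
    def full(self):
        return self.booked >= self.capacity

def available_windows(windows):
    """Only windows with free slots are shown to the user."""
    return [w for w in windows if not w.full]

def book_slot(window):
    """Behind-the-scenes slotting; the user only ever picks a window."""
    if window.full:
        raise ValueError("window %s is full" % window.name)
    window.booked += 1

windows = [Window("02:00-04:00", capacity=2), Window("04:00-06:00", capacity=1)]
book_slot(windows[1])  # takes the last slot in the second window
print([w.name for w in available_windows(windows)])  # → ['02:00-04:00']
```

A smarter scheduler could additionally filter `available_windows` by host placement, as Michael suggests, without changing what the user sees.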

From: Michael Basnight [mbasni...@gmail.com]
Sent: Thursday, January 23, 2014 3:41 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [trove] scheduled tasks redux

On Jan 23, 2014, at 12:20 PM, Greg Hill wrote:

 The blueprint is here:

 https://wiki.openstack.org/wiki/Trove/scheduled-tasks

 So I have basically two questions:

 1. Does anyone see a problem with defining the repeating options as a single 
 field rather than multiple fields?

Im fine w/ a single field, but more explanation below.

 2. Should we use the crontab format for this or is that too terse?  We could 
 go with a more fluid style like Every Wednesday/Friday/Sunday at 12:00PM 
 but that's English-centric and much more difficult to parse programmatically. 
  I'd welcome alternate suggestions.

Will we be doing more complex things than "every day at some time"? i.e., does 
the user base see value in configuring backups every 12th day of every other 
month? I think the schedule code is easy to write, but I fear that it will 
be hard to build a smarter scheduler that would only allow X tasks in a given 
hour for a window. If we limit to daily at X time, it seems easier to estimate 
how a given window for backup will look now and into the future, given a 
constant user base :P Please note, I think it's viable to schedule more than one 
per day, in cron "* 0,12" or "* */12".

Will we be using this as a single-task service as well? If we assume the 
first paragraph is true, that tasks are scheduled daily, single-task services 
would be scheduled once, and could use the same crontab fields. But at this 
point, we only really care about the minute, hour, and _frequency_, which is 
daily or once. Feel free to add 12 scheduled tasks for every 2 hours if you 
want to back up that often, or a single task as "* 0/2". From the backend, I 
see that as 12 tasks created, one for each 2 hours.

But this doesn't take windows into account; when you say you want a cron-style 
2pm backup, that's really just during some available window. Would it make more 
sense for an operator to configure a time window, and then let users choose a 
slot within a time window (say there are a finite number of slots in a time 
window)? The slotting would be done behind the scenes and a user would only be 
able to select a window, and if the slots are all taken, it won't be shown in 
the "get available time windows". The available time windows could be smart, 
in that your available time window _could be_ based on the location of the 
hardware your VM is sitting on (or some other rule…). Think network saturation 
if everyone on host A is doing a backup to swift.

/me puts down wrench

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] Dropping connectivity from guestagent to Trove back-end

2013-12-20 Thread Tim Simpson
Hi Denis,

The plan from the start with Conductor has been to remove any guest connections 
to the database. So any lingering ones are omissions which should be dealt with.

 Since not each database have root entity (not even ACL at all) it would be 
 incorrect to report about root enabling on server-side because 
 server-side(trove-taskmanager) should stay common as it possible.

I agree that in the case of the root call Conductor should have another RPC 
method that gets called by the guest to inform it that the root entity was set.

I also agree that any code that can stay as common as possible between 
datastores should. However I don't agree that trove-taskmanager (by which I 
assume you mean the daemon) has to only be for common functionality.
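As a hypothetical sketch of the extra Conductor RPC method (the method name report_root, the in-memory store, and the direct call standing in for an MQ cast are all assumptions for illustration; real code would go through Trove's RPC layer):

```python
class RootHistoryStore:
    """Stand-in for the Trove back-end table recording root enablement."""
    def __init__(self):
        self.rows = {}

    def record(self, instance_id, user):
        self.rows[instance_id] = user

class ConductorManager:
    """Server side: the only component that touches the back-end."""
    def __init__(self, store):
        self.store = store

    def report_root(self, instance_id, user):
        self.store.record(instance_id, user)

class GuestConductorAPI:
    """Guest-side proxy: fire-and-forget message over MQ, no DB connection."""
    def __init__(self, manager):
        # In a real deployment this would be an RPC cast, not a direct call.
        self.manager = manager

    def report_root(self, instance_id, user):
        self.manager.report_root(instance_id, user)

store = RootHistoryStore()
guest_api = GuestConductorAPI(ConductorManager(store))
guest_api.report_root("inst-1", "root")
print(store.rows)  # → {'inst-1': 'root'}
```

The guest only ever holds the proxy, so dropping its database connectivity becomes purely a matter of routing this one remaining call through Conductor.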

Thanks,

Tim


From: Denis Makogon [dmako...@mirantis.com]
Sent: Friday, December 20, 2013 7:04 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [trove] Dropping connectivity from guestagent to Trove 
back-end


Good day, OpenStack DBaaS community.



I'd like to start a conversation about dropping connectivity between the in-VM 
guestagent and the Trove back-end.

Since Trove has a conductor service which interacts with agents via the MQ 
service, we could let it deal with any back-end operations initiated by the 
guestagent.

The conductor now deals with instance status notifications and backup status 
notifications. But the guest still has one more operation which requires 
back-end connectivity – database root-enabled reporting [1]. After dealing with 
it we could finally drop the connectivity [2].

Since not every database has a root entity (some have no ACL at all), it would 
be incorrect to report root enablement on the server side, because the server 
side (trove-taskmanager) should stay as common as possible.

My first suggestion was to extend the conductor API [3] to let the conductor 
write the report to the Trove back-end. Until Trove reaches the state where it 
supports multiple datastore (database) types, the current patch works fine [4]. 
But when Trove delivers, say, a database without ACLs, it would be confusing if 
after instance provisioning the user found out that root had somehow been 
enabled even though the database doesn't have any ACLs at all.

My point is that the Trove conductor must handle every database-specific 
(datastore-specific, in terms of Trove) operation that requires a back-end 
connection. And the Trove server side (taskmanager) must stay generic and 
perform preparation tasks which are independent of the datastore type.


[1] https://github.com/openstack/trove/blob/master/bin/trove-guestagent#L52

[2] https://bugs.launchpad.net/trove/+bug/1257489

[3] https://review.openstack.org/#/c/59410/5

[4] https://review.openstack.org/#/c/59410/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] Dropping connectivity from guestagent to Trove back-end

2013-12-20 Thread Tim Simpson
 whose proposed future phases include turning conductor into a source of 
 truth for trove to ask about instances, and then using its own datastore 
 separate from the host db anyway.

IIRC this was to support such ideas as storing the heartbeat or service status 
somewhere besides the Trove database. So let's say that instead of having to 
constantly update the heartbeat table from the guest, it was possible to ask 
Rabbit when the guest last tried to receive a message and use that as the 
heartbeat timestamp instead. This is what Conductor was meant to support - the 
ability to not force a guest to send back heartbeat info to a database if there 
was an RPC-technology-dependent way to get that info which Conductor knew 
about.

I don't agree with the idea that all information on a guest should live only in 
Conductor. Under this logic we'd have no backup information in the Trove 
database we could use when listing backups and would have to call Conductor 
instead.  I don't see what that buys us.

Similarly with the RootHistory object, it lives in the database right now which 
works fine because anytime Root is enabled it's done by Trove code which has 
access to that database anyway. Moving root history to Conductor will 
complicate things without giving us any benefit.

Thanks,

Tim


From: Ed Cranford [ed.cranf...@gmail.com]
Sent: Friday, December 20, 2013 10:13 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [trove] Dropping connectivity from guesagent to 
Trove back-end

Conductor was the first phase of 
https://wiki.openstack.org/wiki/Trove/guest_agent_communication whose proposed 
future phases include turning conductor into a source of truth for trove to ask 
about instances, and then using its own datastore separate from the host db 
anyway.
The purpose of the root history table is to keep information in a place even an 
instance with root cannot reach, so we essentially have a warranty seal on the 
instance. The thinking was that if that status was kept on the instance, 
intrepid users could potentially enable root, muck about, and then manually 
remove root. By putting that row in a table outside the instance there's no 
question.

Phase 2 of the document above is to make conductor the source of truth for 
information about an instance, so taskman will start asking conductor instead 
of fetching the database information directly. So I think the next step for 
removing this is to give conductor a method taskman can call to get the root 
status from the extant table.

Phase 3 then seeks to give conductor its own datastore away from the original 
database; I think that's the right time to migrate the root history table, too.


On Fri, Dec 20, 2013 at 9:44 AM, Denis Makogon dmako...@mirantis.com wrote:
Unfortunately, Trove cannot manage its own extensions, so if, say, I 
provisioned a Cassandra instance, it would still be possible to check whether 
root is enabled.
Proof: 
https://github.com/openstack/trove/blob/master/trove/extensions/mysql/service.py
There are no checks for datastore_type; the service just loads the root model 
and that's it. Since my patch creates the root model, the next API call (root 
check) will load this model.



2013/12/20 Tim Simpson tim.simp...@rackspace.com
Because the ability to check if root is enabled is in an extension which would 
not be in effect for a datastore with no ACL support, the user would not be 
able to see that the marker for root enabled was set in the Trove 
infrastructure database either way.

By the way- I double checked the code, and I was wrong- the guest agent was 
*not* telling the database to update the root enabled flag. Instead, the API 
extension had been updating the database all along after contacting the guest. 
Sorry for making this thread more confusing.

It seems like if you follow my one (hopefully last) suggestion on this pull 
request, it will solve the issue you're tackling: 
https://review.openstack.org/#/c/59410/5

Thanks,

Tim


From: Denis Makogon [dmako...@mirantis.com]
Sent: Friday, December 20, 2013 8:58 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [trove] Dropping connectivity from guestagent to 
Trove back-end

Thanks for response, Tim.

As I said, it would be a confusing situation if a database which has no ACL 
were deployed by Trove with root enabled - this looks very strange, since the 
user is allowed to check whether root is enabled. I think in this case 
Conductor should be _the_ place which contains the datastore-specific logic 
that requires back-end connectivity.

It would be nice to have consistent instance states for each datastore type 
and version.

Are there any objections to letting Conductor deal with it?



Best regards,
Denis Makogon


2013/12/20

Re: [openstack-dev] [Heat] [Trove] [Savanna] [Oslo] Unified Agents - what is the actual problem?

2013-12-19 Thread Tim Simpson
 I agree that enabling communication between guest and cloud service is a 
 common problem for most agent designs. The only exception is an agent based on 
 hypervisor-provided transport. But as far as I understand many people are 
 interested in a network-based agent, so indeed we can start a thread (or 
 continue discussion in this one) on the problem.

Can't they co-exist?

Let's say the interface to talk to an agent is simply some class loaded from a 
config file, the way it is in Trove. So we have a class which has the methods 
add_user, get_filesystem_stats. 

The first, and let's say default, implementation sends a message over Rabbit 
using oslo.rpc or something like it. All the arguments turn into a JSON object 
and are deserialized on the agent side using oslo.rpc or some C++ code capable 
of reading JSON.

If someone wants to add a hypervisor provided transport, they could do so by 
instead changing this API class to one which contacts a service on the 
hypervisor node (using oslo.rpc) with arguments that include the guest agent ID 
and args, which is just a dictionary of the original arguments. This service 
would then shell out to execute some hypervisor specific command to talk to the 
given guest.

That's what I meant when I said I liked how Trove handled this now- because it 
uses a simple, non-prescriptive interface, it's easy to swap out yet still easy 
to use.

That would mean the job of a unified agent framework would be to offer up 
libraries to ease the creation of the API class by offering Python code to 
send messages in various styles / formats, as well as Python or C++ code to 
read and interpret those messages. 

Of course, we'd still settle on one default (probably network based) which 
would become the standard way of sending messages to guests so that package 
maintainers, the Infra team, and newbies to OpenStack wouldn't have to deal 
with dozens of different ways of doing things, but the important thing is that 
other methods of communication would still be possible.
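The swappable API class described above could be sketched as follows. This is a hedged illustration of the design, not Trove's actual `trove.guestagent.api.API`; the names `RabbitGuestAPI` and `load_guest_api`, and the in-memory `sent` list standing in for a message bus, are all assumptions.

```python
# Sketch: the transport class is named by a dotted path in config and
# loaded dynamically, so the default RabbitMQ-style implementation can be
# swapped for a hypervisor-channel one without touching callers.
import importlib
import json


class RabbitGuestAPI(object):
    """Default transport: serialize the call as JSON and publish it.

    Here the 'bus' is just a list so the sketch is self-contained.
    """

    def __init__(self, guest_id):
        self.guest_id = guest_id
        self.sent = []  # stand-in for the message broker

    def _cast(self, method, **kwargs):
        # All arguments turn into a JSON object, as described above.
        self.sent.append(json.dumps(
            {"guest": self.guest_id, "method": method, "args": kwargs}))

    def add_user(self, name, password):
        self._cast("add_user", name=name, password=password)

    def get_filesystem_stats(self, mount_point):
        self._cast("get_filesystem_stats", mount_point=mount_point)


def load_guest_api(dotted_path, guest_id):
    """Load whichever transport class the config file names."""
    module_name, _, class_name = dotted_path.rpartition(".")
    cls = getattr(importlib.import_module(module_name), class_name)
    return cls(guest_id)
```

A hypervisor-transport implementation would expose the same `add_user` / `get_filesystem_stats` methods but deliver the serialized arguments over its own channel.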

Thanks,

Tim


From: Dmitry Mescheryakov [mailto:dmescherya...@mirantis.com] 
Sent: Thursday, December 19, 2013 7:15 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Heat] [Trove] [Savanna] [Oslo] Unified Agents - 
what is the actual problem?

I agree that enabling communication between guest and cloud service is a common 
problem for most agent designs. The only exception is an agent based on hypervisor-
provided transport. But as far as I understand many people are interested in a 
network-based agent, so indeed we can start a thread (or continue discussion in 
this one) on the problem.

Dmitry

2013/12/19 Clint Byrum cl...@fewbar.com
So I've seen a lot of really great discussion of the unified agents, and
it has made me think a lot about the problem that we're trying to solve.

I just wanted to reiterate that we should be trying to solve real problems
and not get distracted by doing things right or even better.

I actually think there are three problems to solve.

* Private network guest to cloud service communication.
* Narrow scope highly responsive lean guest agents (Trove, Savanna).
* General purpose in-instance management agent (Heat).

Since the private network guests problem is the only one they all share,
perhaps this is where the three projects should collaborate, and the
other pieces should be left to another discussion.

Thoughts?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] datastore migration issues

2013-12-19 Thread Tim Simpson
I second Rob and Greg: we should not allow the instance table to have nulls 
for the datastore version ID. I can't imagine that, as Trove grows and evolves, 
that edge case is something we'll always remember to code and test for, so 
let's cauterize things now by no longer allowing it at all.

The fact that the migration scripts can't, to my knowledge, accept parameters 
for what the dummy datastore name and version should be isn't great, but I 
think it would be acceptable to make the provided default values 
sensible and ask operators who don't like them to manually update the database.
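The backfill being discussed could be sketched like this. Real Trove migrations use sqlalchemy-migrate against MySQL; this sketch uses stdlib sqlite3 purely for illustration, and the table layout, column names, and default values are simplified assumptions.

```python
# Sketch: give pre-datastore instances a sensible default datastore
# version during migration. Schema and defaults are illustrative only.
import sqlite3

DEFAULT_DATASTORE = "mysql"
DEFAULT_VERSION = "5.5"


def backfill_datastore(conn):
    cur = conn.cursor()
    # Ensure a default datastore-version row exists...
    cur.execute(
        "INSERT INTO datastore_versions (id, name) VALUES (?, ?)",
        ("v-default", "%s-%s" % (DEFAULT_DATASTORE, DEFAULT_VERSION)))
    # ...then point every legacy instance (NULL version) at it.
    cur.execute(
        "UPDATE instances SET datastore_version_id = ? "
        "WHERE datastore_version_id IS NULL",
        ("v-default",))
    conn.commit()
```

An operator who disagrees with the defaults can still update the affected rows by hand afterwards, which is the trade-off discussed above.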

- Tim




From: Robert Myers [myer0...@gmail.com]
Sent: Thursday, December 19, 2013 9:59 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [trove] datastore migration issues

I think that we need to be good citizens and at least add dummy data. Because 
it is impossible to know who all is using this, the list you have is probably 
incomplete. Trove has been available for quite some time, and all these users 
will not be listening on this thread. Basically, anytime you have a database 
migration that adds a required field, you *have* to alter the existing rows. If 
we don't, we're basically telling everyone who upgrades that we, the 'Database as 
a Service' team, don't care about data integrity in our own product :)

Robert


On Thu, Dec 19, 2013 at 9:25 AM, Greg Hill 
greg.h...@rackspace.com wrote:
We did consider doing that, but decided it wasn't really any different from the 
other options, as it required the deployer to know to alter that data. That 
would require the fewest code changes, though. It was also my understanding 
that mysql variants were a possibility as well (percona and mariadb), which is 
what brought on the objection to just defaulting in code. Also, we can't 
derive the version being used, so we *could* fill it with a dummy version and 
assume mysql, but I don't feel like that solves the problem or the objections 
to the earlier solutions. And then we also have bogus data in the database.

Since there's no perfect solution, I'm really just hoping to gather consensus 
among people who are running existing trove installations and have yet to 
upgrade to the newer code about what would be easiest for them.  My 
understanding is that list is basically HP and Rackspace, and maybe Ebay?, but 
the hope was that bringing the issue up on the list might confirm or refute 
that assumption and drive the conversation to a suitable workaround for those 
affected, which hopefully isn't that many organizations at this point.

The options are basically:

1. Put the onus on the deployer to correct existing records in the database.
2. Have the migration script put dummy data in the database which you then have 
to correct.
3. Put the onus on the deployer to fill out values in the config file.

Greg

On Dec 18, 2013, at 8:46 PM, Robert Myers 
myer0...@gmail.com wrote:


There is the database migration for datastores. We should add a function to 
backfill the existing data with either dummy data or set it to 'mysql', as 
that was the only possibility before datastores.

On Dec 18, 2013 3:23 PM, Greg Hill 
greg.h...@rackspace.com wrote:
I've been working on fixing a bug related to migrating existing installations 
to the new datastore code:

https://bugs.launchpad.net/trove/+bug/1259642

The basic gist is that existing instances won't have any data in the 
datastore_version_id field in the database unless we somehow populate that data 
during migration, and not having that data populated breaks a lot of things 
(including the ability to list instances or delete or resize old instances).  
It's impossible to populate that data in an automatic, generic way, since it's 
highly vendor-dependent on what database and version they currently support, 
and there's not enough data in the older schema to populate the new tables 
automatically.

So far, we've come up with some non-optimal solutions:

1. The first iteration was to assume 'mysql' as the database manager on 
instances without a datastore set.
2. The next iteration was to make the default value be configurable in 
trove.conf, but default to 'mysql' if it wasn't set.
3. It was then proposed that we could just use the 'default_datastore' value 
from the config, which may or may not be set by the operator.

My problem with any of these approaches beyond the first is that requiring 
people to populate config values in order to successfully migrate to the newer 
code is really no different than requiring them to populate the new database 
tables with appropriate data and updating the existing instances with the 
appropriate values.  Either way, it's now highly dependent on people deploying 
the upgrade to know about this change and react accordingly.

Does anyone have a better solution that we aren't 

[openstack-dev] [trove] My thoughts on the Unified Guest Agent

2013-12-18 Thread Tim Simpson
I've been following the Unified Agent mailing list thread for a while now and, 
as someone who has written a fair amount of code for both of the two existing 
Trove agents, thought I should give my opinion about it. I like the idea of a 
unified agent, but believe that forcing Trove to adopt this agent as 
its default will stifle innovation and harm the project.

There are reasons Trove has more than one agent currently. While everyone knows 
about the Reference Agent written in Python, Rackspace uses a different agent 
written in C++ because it takes up less memory. The concerns which led to the 
C++ agent would not be addressed by a unified agent, which if anything would be 
larger than the Reference Agent is currently.

I also believe a unified agent represents the wrong approach philosophically. 
An agent by design needs to be lightweight, capable of doing exactly what it 
needs to and no more. This is especially true for a project like Trove, whose 
goal is not to provide overly general PAAS capabilities but simply 
installation and maintenance of different datastores. Currently, the Trove 
daemons handle most logic and leave the agents themselves to do relatively 
little. This takes some effort, as many of the first iterations of Trove 
features have too much logic put into the guest agents. However, through 
perseverance the subsequent designs are usually cleaner and simpler to follow. 
A community-approved, do-everything agent would endorse the wrong balance and 
lead to developers piling up logic on the guest side. Over time, features would 
become dependent on the Unified Agent, making it impossible to run or even 
contemplate light-weight agents.

Trove's interface to agents today is fairly loose and could stand to be made 
stricter. However, it is flexible and works well enough. Essentially, the duck 
typed interface of the trove.guestagent.api.API class is used to send messages, 
and Trove conductor is used to receive them at which point it updates the 
database. Because both of these components can be swapped out if necessary, the 
code could support the Unified Agent when it appears as well as future agents.

It would be a mistake however to alter Trove's standard method of communication 
to please the new Unified Agent. In general, we should try to keep Trove 
speaking to guest agents in Trove's terms alone to prevent bloat.

Thanks,

Tim
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] My thoughts on the Unified Guest Agent

2013-12-18 Thread Tim Simpson
 Please provide proof of that assumption or at least a general hypothesis 
 that we can test.

I can't prove that the new agent will be larger as it doesn't exist yet. 

 Since nothing was agreed upon anyway, I don't know how you came to that 
 conclusion.  I would suggest that any agent framework be held to an 
 extremely high standard for footprint for this very reason.

Sorry, I formed a conclusion based on what I'd read so far. There has been talk 
of adding Salt to this Unified Agent along with several other things. So I think 
it's a valid concern to state that making this thing small is not as high on the 
list of priorities as adding extra functionality. 

The C++ agent is just over 3 megabytes of real memory and takes up less than 30 
megabytes of virtual memory. I don't think an agent has to be *that* small. 
However, it won't get near that small unless making it tiny is made a priority, 
and I'm skeptical that's possible while also deciding an agent will be capable 
of interacting with all major OpenStack projects as well as Salt.

 Nobody has suggested writing an agent that does everything.

Steven Dake just said:

A unified agent addresses the downstream viewpoint well, which is 'There is 
only one agent to package and maintain, and it supports all the integrated 
OpenStack Program projects'.

So it sounds like some people are saying there will only be one. Or that it is 
at least an idea.

 If Trove's communication method is in fact superior to all others, then 
 perhaps we should discuss using that in the unified agent framework.

My point is that every project should communicate with an agent through its own 
interface, which can be swapped out for whatever implementations people need.

  In fact I've specifically been arguing to keep it focused on facilitating 
 guest-service communication and limiting its in-guest capabilities to 
 narrowly focused tasks.

I like this idea better than creating one agent to rule them all, but I would 
like to avoid forcing a single method of communicating between agents.

 Also I'd certainly be interested in hearing about whether or not you think 
 the C++ agent could be made generic enough for any project to use.

I certainly believe much of the code could be reused for other projects. Right 
now it communicates over RabbitMQ, Oslo RPC style, so I'm not sure how much it 
will fall in line with what the Unified Agent group wants. However, I would 
love to talk more about this. So far my experience has been that no one wants 
to pursue using / developing an agent that was written in C++.

Thanks,

Tim

From: Clint Byrum [cl...@fewbar.com]
Sent: Wednesday, December 18, 2013 11:36 AM
To: openstack-dev
Subject: Re: [openstack-dev] [trove] My thoughts on the Unified Guest Agent

Excerpts from Tim Simpson's message of 2013-12-18 07:34:14 -0800:
 I've been following the Unified Agent mailing list thread for a while
 now and, as someone who has written a fair amount of code for both of
 the two existing Trove agents, thought I should give my opinion about
 it. I like the idea of a unified agent, but believe that forcing Trove
 to adopt this agent as its default will stifle innovation
 and harm the project.


Them's fightin words. ;)

That is a very strong position to take. So I am going to hold your
statements of facts and assumptions to a very high standard below.

 There are reasons Trove has more than one agent currently. While
 everyone knows about the Reference Agent written in Python, Rackspace
 uses a different agent written in C++ because it takes up less memory. The
 concerns which led to the C++ agent would not be addressed by a unified
 agent, which if anything would be larger than the Reference Agent is
 currently.


Would be larger... - Please provide proof of that assumption or at least
a general hypothesis that we can test. Since nothing was agreed upon
anyway, I don't know how you came to that conclusion. I would suggest
that any agent framework be held to an extremely high standard for
footprint for this very reason.

 I also believe a unified agent represents the wrong approach
 philosophically. An agent by design needs to be lightweight, capable
 of doing exactly what it needs to and no more. This is especially true
 for a project like Trove whose goal is not to provide overly general
 PAAS capabilities but simply installation and maintenance of different
 datastores. Currently, the Trove daemons handle most logic and leave
 the agents themselves to do relatively little. This takes some effort
 as many of the first iterations of Trove features have too much logic
 put into the guest agents. However through perseverance the subsequent
 designs are usually cleaner and simpler to follow. A community approved,
 do everything agent would endorse the wrong balance and lead to
 developers piling up logic on the guest side. Over time, features would
 become dependent on the Unified Agent, making it impossible to run or

Re: [openstack-dev] [trove] My thoughts on the Unified Guest Agent

2013-12-18 Thread Tim Simpson
Thanks for the summary Dmitry. I'm ok with these ideas, and while I still 
disagree with having a single, forced standard for RPC communication, I should 
probably let things pan out a bit before being too concerned.

- Tim



From: Dmitry Mescheryakov [dmescherya...@mirantis.com]
Sent: Wednesday, December 18, 2013 11:51 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [trove] My thoughts on the Unified Guest Agent

Tim,

The unified agent we are proposing is based on the following ideas:
  * the core agent has _no_ functionality at all. It is a pure RPC mechanism 
with the ability to add whichever API is needed on top of it.
  * the API is organized into modules which can be reused across different 
projects.
  * there will be no single package: each project (Trove/Savanna/Others) 
assembles its own agent based on the project's API needs.
I hope that covers your concerns.
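The modular design outlined above can be sketched in a few lines of Python. This is only an interpretation of the proposal, not its actual API; `CoreAgent`, `register_module`, and the module dictionaries are illustrative names.

```python
# Sketch: a core agent that is pure dispatch, with per-project API
# modules registered on top. Each project assembles only the modules
# it needs. All names are illustrative, not the real proposal's API.


class CoreAgent(object):
    """Core agent: no functionality of its own, only a dispatch table."""

    def __init__(self):
        self._api = {}

    def register_module(self, module):
        # A module here is just a mapping of method name -> callable.
        self._api.update(module)

    def dispatch(self, method, **kwargs):
        if method not in self._api:
            raise KeyError("unsupported method: %s" % method)
        return self._api[method](**kwargs)


# A Trove-ish module and a Savanna-ish module, assembled per project:
trove_module = {"restart_db": lambda: "db restarted"}
savanna_module = {"run_job": lambda name: "job %s started" % name}

agent = CoreAgent()
agent.register_module(trove_module)
```

A Trove build would register only `trove_module`; a Savanna build would register its own, so no single "do everything" package is needed.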

Dmitry


2013/12/18 Tim Simpson 
tim.simp...@rackspace.com
I've been following the Unified Agent mailing list thread for a while now and, 
as someone who has written a fair amount of code for both of the two existing 
Trove agents, thought I should give my opinion about it. I like the idea of a 
unified agent, but believe that forcing Trove to adopt this agent as 
its default will stifle innovation and harm the project.

There are reasons Trove has more than one agent currently. While everyone knows 
about the Reference Agent written in Python, Rackspace uses a different agent 
written in C++ because it takes up less memory. The concerns which led to the 
C++ agent would not be addressed by a unified agent, which if anything would be 
larger than the Reference Agent is currently.

I also believe a unified agent represents the wrong approach philosophically. 
An agent by design needs to be lightweight, capable of doing exactly what it 
needs to and no more. This is especially true for a project like Trove, whose 
goal is not to provide overly general PAAS capabilities but simply 
installation and maintenance of different datastores. Currently, the Trove 
daemons handle most logic and leave the agents themselves to do relatively 
little. This takes some effort, as many of the first iterations of Trove 
features have too much logic put into the guest agents. However, through 
perseverance the subsequent designs are usually cleaner and simpler to follow. 
A community-approved, do-everything agent would endorse the wrong balance and 
lead to developers piling up logic on the guest side. Over time, features would 
become dependent on the Unified Agent, making it impossible to run or even 
contemplate light-weight agents.

Trove's interface to agents today is fairly loose and could stand to be made 
stricter. However, it is flexible and works well enough. Essentially, the duck 
typed interface of the trove.guestagent.api.API class is used to send messages, 
and Trove conductor is used to receive them at which point it updates the 
database. Because both of these components can be swapped out if necessary, the 
code could support the Unified Agent when it appears as well as future agents.

It would be a mistake however to alter Trove's standard method of communication 
to please the new Unified Agent. In general, we should try to keep Trove 
speaking to guest agents in Trove's terms alone to prevent bloat.

Thanks,

Tim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] My thoughts on the Unified Guest Agent

2013-12-18 Thread Tim Simpson
 python:
 consumes 12.5MB of virt memory and 4.3MB of resident memory.

Very few Python-only processes will take up such a small amount of virtual 
memory unless the authors are disciplined about not pulling in any dependencies 
at all and writing code in a way that isn't necessarily idiomatic. For an 
example of a project where code is simply written the obvious way, take the 
Trove reference guest, which uses just shy of 200MB of virtual memory and 39MB 
of resident. In defense of the reference guest, there are some things we can 
do there to make that figure better, I'm certain. I just want to use it as an 
example of how large a Python process can get when the authors proceed to do 
things the way they normally would. 
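For anyone wanting to reproduce the virtual/resident figures quoted in this thread, a small sketch of reading them from /proc on Linux (the function name and the non-Linux fallback are my own additions):

```python
# Sketch: read the VmSize (virtual) and VmRSS (resident) figures for a
# process from /proc, the same numbers quoted in this thread. Linux-only;
# returns (0, 0) elsewhere so the sketch stays portable.
import os


def memory_kb(pid="self"):
    """Return (virtual_kb, resident_kb) for a process, or (0, 0) if no /proc."""
    path = "/proc/%s/status" % pid
    if not os.path.exists(path):
        return (0, 0)
    sizes = {}
    with open(path) as status:
        for line in status:
            if line.startswith(("VmSize:", "VmRSS:")):
                key, value = line.split(":", 1)
                sizes[key] = int(value.split()[0])  # value looks like "12500 kB"
    return (sizes.get("VmSize", 0), sizes.get("VmRSS", 0))
```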

 C:
 4MB of virt memory and 328k of resident memory

 C++:
 12.5MB of virt memory and 784k of resident memory

Much of the space you're seeing is from the C++ standard library. Building a 
process normally, I get similar results. However it is also possible to 
statically link the standard libraries and knock off roughly half of that, to 
6.42MB virtual and 400kb resident.

Additionally, the C++ standard library can be omitted if necessary. At this 
point, you might argue that you'd just be writing C code, but even then you'd 
have the advantages of template metaprogramming and other features not present 
in plain C. Even without those features, there's no shame in writing C-style 
code assuming you *have* to; C++ was designed to be compatible with C to take 
advantage of its strengths. The only loss would be some of the C99 stuff like 
named initializers.

Additionally, in a vast number of contexts the virtual memory used for the 
standard library is not going to matter as other processes will be including 
that code anyway.

Going back to the Trove C++ Agent, it takes 4MB of resident and 28MB of virtual 
memory. This is with some fairly non-trivial dependencies, such as Curl, libz, 
the MySQL and Rabbit client libraries. No special effort was expended making 
sure we kept the process small as in C++ things naturally stay tiny.

 C++ is full of fail in a variety of ways and offers no useful advantage for 
 something as small as an agent ;-)

If you haven't recently, I recommend you read up on modern C++. The language, 
and how it's written and explained, has changed a lot over the past ten years.

Thanks,

Tim


-Original Message-
From: Steven Dake [mailto:sd...@redhat.com] 
Sent: Wednesday, December 18, 2013 4:15 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [trove] My thoughts on the Unified Guest Agent

On 12/18/2013 12:27 PM, Tim Simpson wrote:
 Please provide proof of that assumption or at least a general hypothesis 
 that we can test.
 I can't prove that the new agent will be larger as it doesn't exist yet.

 Since nothing was agreed upon anyway, I don't know how you came to that 
 conclusion.  I would suggest that any agent framework be held to an 
 extremely high standard for footprint for this very reason.
 Sorry, I formed a conclusion based on what I'd read so far. There has been 
 talk of adding Salt to this Unified Agent along with several other things. So I 
 think it's a valid concern to state that making this thing small is not as 
 high on the list of priorities as adding extra functionality.

 The C++ agent is just over 3 megabytes of real memory and takes up less than 
 30 megabytes  of virtual memory. I don't think an agent has to be *that* 
 small. However it won't get near that small unless making it tiny is made a 
 priority, and I'm skeptical that's possible while also deciding an agent will 
 be capable of interacting with all major OpenStack projects as well as Salt.

 Nobody has suggested writing an agent that does everything.
 Steven Dake just said:

 A unified agent addresses the downstream viewpoint well, which is 'There is 
 only one agent to package and maintain, and it supports all the integrated 
 OpenStack Program projects'.

 So it sounds like some people are saying there will only be one. Or that it 
 is at least an idea.

 If Trove's communication method is in fact superior to all others, then 
 perhaps we should discuss using that in the unified agent framework.
 My point is every project should communicate to an agent in its own 
 interface, which can be swapped out for whatever implementations people need.

   In fact I've specifically been arguing to keep it focused on facilitating 
 guest-service communication and limiting its in-guest capabilities to 
 narrowly focused tasks.
 I like this idea better than creating one agent to rule them all, but I would 
 like to avoid forcing a single method of communicating between agents.

 Also I'd certainly be interested in hearing about whether or not you think 
 the C++ agent could be made generic enough for any project to use.
 I certainly believe much of the code could be reused for other projects. 
 Right now it communicates over RabbitMQ, Oslo

Re: [openstack-dev] [Trove] How users should specify a datastore type when creating an instance

2013-10-28 Thread Tim Simpson
6. trove-cli instance create --datastore_type redis --datastore_version 2.6.16
We have both datastore_type and datastore_version, and that uniquely
identifies redis 2.6.16. No further disambiguation is needed.

7. trove-cli instance create --datastore_type cassandra --version 2.0.0,
or trove-cli instance create --datastore_id g
Here, we are attempting to deploy a datastore which is _NOT_ active and
this call should fail with an appropriate error message.

Cheers,
-Nikhil


Andrey Shestakov writes:

 2. It can be confusing because it's not clear which type a version belongs
 to (possibly add a type field to version).
 Also, if you have a default type, then a specified version is recognized as
 a version of the default type (no lookup of version.datastore_type_id),
 but I think we can do a lookup of version.datastore_type_id before picking
 the default.

 4. If a default version is needed, it should be specified in the DB, because
 switching between versions can be frequent, and restarting the service to
 reload the config every time is not good.

 On 10/21/2013 05:12 PM, Tim Simpson wrote:
 Thanks for the feedback Andrey.

  2. Got this case in IRC, and decided to pass type and version
 together to avoid confusion.
 I don't understand how allowing the user to only pass the version
 would confuse anyone. Could you elaborate?

  3. Names of types and maybe versions can be good, but this case was
 rejected in an IRC conversation; I can't
 remember the exact reason.
 Hmm. Does anyone remember the reason for this?

  4. Actually, the 'active' field in a version marks it as the default in its
 type. Specifying the default version in the config can be useful if you have
 more than one active version in the default type.
 If 'active' is allowed to be set for multiple rows of the
 'datastore_versions' table then it isn't a good substitute for the
 functionality I'm seeking, which is to allow operators to specify a
 *single* default version for each datastore_type in the database. I
 still think we should add a 'default_version_id' field to the
 'datastore_types' table.

 Thanks,

 Tim

 
 *From:* Andrey Shestakov 
 [ashesta...@mirantis.com]
 *Sent:* Monday, October 21, 2013 7:15 AM
 *To:* OpenStack Development Mailing List
 *Subject:* Re: [openstack-dev] [Trove] How users should specify a
 datastore type when creating an instance

 1. Good point.
 2. Got this case in IRC, and decided to pass type and version together
 to avoid confusion.
 3. Names of types and maybe versions can be good, but this case was
 rejected in an IRC conversation; I can't remember the exact reason.
 4. Actually, the 'active' field in a version marks it as the default in its
 type. Specifying the default version in the config can be useful if you have
 more than one active version in the default type.
 But how many active versions a type has depends on the operator's
 configuration. And what if the default version in the config is marked as
 inactive?

 On 10/18/2013 10:30 PM, Tim Simpson wrote:
 Hello fellow Trovians,

 There has been some good work recently to figure out a way to specify
 a specific datastore when using Trove. This is essential to
 supporting multiple datastores from the same install of Trove.

 I have an issue with some elements of the proposed solution though,
 so I decided I'd start a thread here so we could talk about it.

 As a quick refresher, here is the blueprint for this work (there are
 some gists amended to the end, but I figured the mailing list would
 be an easier venue for discussion):
 https://wiki.openstack.org/wiki/Trove/trove-versions-types

 One issue I have is with the way the instance create call will change
 to support different data stores. For example, here is the post call:

 
 {
   "instance": {
     "flavorRef": 2,
     "name": "as",
     "datastore_type": "e60153d4-8ac4-414a-ad58-fe2e0035704a",
     "datastore_version": "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b",
     "volume": { "size": 1 }
   }
 }
 

 1. I think since we have two fields in the instance object we should
 make a new object for datastore and avoid the name prefixing, like this:

 
 {
   "instance": {
     "flavorRef": 2,
     "name": "as",
     "datastore": {
       "type": "e60153d4-8ac4-414a-ad58-fe2e0035704a",
       "version": "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b"
     },
     "volume": { "size": 1 }
   }
 }
 

 2. I also think a datastore_version alone should be sufficient since
 the associated datastore type will be implied:

 
 {
   "instance": {
     "flavorRef": 2,
     "name": "as",
     "datastore": {
       "version": "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b"
     },
     "volume": { "size": 1 }
   }
 }
 

 3. Additionally, while a datastore_type should have an ID in the
 Trove infrastructure database, it should also be possible to pass just
 the name of the datastore type to the instance call, such as "mysql"
 or "mongo". Maybe we could allow this in addition to the ID? I think
 this form should actually use the argument "type", and the id should

Re: [openstack-dev] [Trove] How users should specify a datastore type when creating an instance

2013-10-24 Thread Tim Simpson
So if we decide to support any number of config options for each datastore 
version, eventually we'll have large config files that will be hard 
to manage.

What about storing the extra config info for each datastore version in its own 
independent config file? So rather than having one increasingly bloated config 
file used by everything, you could optionally specify a file in the 
datastore_versions table of the database that would be looked up similar to how 
we load template files on demand.
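The per-version config-file lookup could work something like the sketch below. This is a hedged illustration of the suggestion, not Trove's code: the `DATASTORE_VERSIONS` mapping stands in for the database table, and `load_version_config` plus its `read_text` hook are invented names.

```python
# Sketch: store an optional config-file path per row of the
# datastore_versions table and merge it over the global defaults on
# demand, instead of growing one bloated config file. All names are
# illustrative assumptions.
from configparser import ConfigParser

# Stand-in for datastore_versions rows; 'config_file' may be None.
DATASTORE_VERSIONS = {
    "mysql-5.5": {"config_file": None},
    "monitored-mysql-5.5": {"config_file": "/etc/trove/monitored_mysql.cfg"},
}


def load_version_config(version_name, defaults=None,
                        read_text=lambda path: open(path).read()):
    """Merge global defaults with the version's own config file, if any."""
    merged = dict(defaults or {})
    path = DATASTORE_VERSIONS[version_name]["config_file"]
    if path:
        parser = ConfigParser()
        parser.read_string(read_text(path))
        # Options under a [datastore] section override the defaults.
        merged.update(parser.items("datastore"))
    return merged
```

Versions with no file of their own simply fall back to the global defaults, which keeps the common case unchanged.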

- Tim

From: Ilya Sviridov [isviri...@mirantis.com]
Sent: Thursday, October 24, 2013 7:40 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Trove] How users should specify a datastore type 
when creating an instance

So, we have two places for configuration management: the database and the config 
file.

The config file is for tuning all datastore type behavior during installation, and 
the database is for all changeable configuration during usage and administration of 
a Trove installation.

Database usecases:
- update/custom image
- update/custom packages
- activating/deactivating datastore_type

Config file usecases:
- security group policy
- provisioning mechanism
- guest configuration parameters per database engine
- provisioning  parameters, templates
- manager class
...

If I need to register one more MySQL installation with the following
customizations:
- custom heat template
- custom packages and an additional monitoring tool package
- a specific port opened on the instance for my monitoring tool

According to the current concept, should I add one more section in addition to
the existing [mysql] one, like below?

[monitored_mysql]
mount_point=/var/lib/mysql

# 8080 is the port of my monitoring tool
trove_security_group_rule_ports = 3306, 8080
heat_template=/etc/trove/heat_templates/monitored_mysql.yaml
...

and put the additional packages into the database configuration?

With best regards,
Ilya Sviridov

http://www.mirantis.ru/


On Wed, Oct 23, 2013 at 9:37 PM, Michael Basnight 
mbasni...@gmail.com wrote:

On Oct 23, 2013, at 10:54 AM, Ilya Sviridov wrote:

 Besides the strategy of selecting the default behavior.

 Let me share with you my ideas of configuration management in Trove and how 
 the datastore concept can help with that.

 Initially there was only one database, and all configuration was in one config
 file.
 With the addition of new databases and the heat provisioning mechanism, we are
 introducing more options.

 Not only assigning a specific image_id, but custom packages, heat templates,
 and probably specific strategies for working with security groups.
 Such needs already exist, because we have a lot of optional things in the config,
 and any new feature is implemented with an eye back to already existing legacy
 installations of Trove.

 What actually is datastore_type + datastore_version?

 It is the model which glues all the bricks together, so let us use it for all
 the variable parts of *service type* configuration.

 From the current config file:

 # Trove DNS
 trove_dns_support = False

 # Trove Security Groups for Instances
 trove_security_groups_support = True
 trove_security_groups_rules_support = False
 trove_security_group_rule_protocol = tcp
 trove_security_group_rule_port = 3306
 trove_security_group_rule_cidr = 0.0.0.0/0

 #guest_config = $pybasedir/etc/trove/trove-guestagent.conf.sample
 #cloudinit_location = /etc/trove/cloudinit

 block_device_mapping = vdb
 device_path = /dev/vdb
 mount_point = /var/lib/mysql

 All of that configuration can be moved to the datastore (some of it defined in
 heat templates) and be manageable by the operator in case any default behavior
 should be changed.

 The Trove config then becomes specific to core functionality only.

It's fine for it to be in the config or the heat templates… I'm not sure it
matters. What I would like to see is that things specific to each service be in
their own config group in the configuration.

[mysql]
mount_point=/var/lib/mysql
…
[redis]
volume_support=False
…..

and so on.
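Per-service option groups like these map naturally onto INI sections. A small illustrative sketch using stdlib configparser (Trove itself uses oslo.config, whose option groups serve the same purpose; the helper below is hypothetical):

```python
# Sketch of reading per-datastore config groups such as [mysql] / [redis].
# Uses stdlib configparser for illustration only; Trove uses oslo.config,
# whose OptGroup / register_opts API plays the same role.

from configparser import ConfigParser

SAMPLE = """
[mysql]
mount_point = /var/lib/mysql

[redis]
volume_support = False
"""

cfg = ConfigParser()
cfg.read_string(SAMPLE)

def datastore_opt(datastore, option, fallback=None):
    """Look up an option in the given datastore's own section."""
    return cfg.get(datastore, option, fallback=fallback)
```

Each manager then reads only its own section, so adding a new service type means adding a new group rather than more top-level options.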

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] How users should specify a datastore type when creating an instance

2013-10-22 Thread Tim Simpson
 Are you saying you must have a default version defined to have > 1 active
 versions?
No, my point was that using a default version field in the db rather than also
picking from active versions may be confusing.


From: Michael Basnight [mbasni...@gmail.com]
Sent: Monday, October 21, 2013 4:04 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Trove] How users should specify a datastore   
type when creating an instance

On Oct 21, 2013, at 1:40 PM, Tim Simpson wrote:

 2. I also think a datastore_version alone should be sufficient since the 
 associated datastore type will be implied:

 When i brought this up it was generally discussed as being confusing. Id 
 like to use type and rely on having a default (or active) version behind the 
 scenes.

 Can't we do both? If a user wants a specific version, most likely they had to 
 enumerate all datastore_versions, spot it in a list, and grab the guid. Why 
 force them to also specify the datastore_type when we can easily determine 
 what that is?

Fair enough.


 4. Additionally, in the current pull request to implement this it is 
 possible to avoid passing a version, but only if no more than one version 
 of the datastore_type exists in the database.

 I think instead the datastore_type row in the database should also have a 
 default_version_id property, that an operator could update to the most 
 recent version or whatever other criteria they wish to use, meaning the 
 call could become this simple:

 Since we have determined from this email thread that we have an active
 status, and that > 1 version can be active, we have to think about the
 precedence of active vs default. My question would be: if we have a
 default_version_id and an active version, what do we choose on behalf of the
 user? If there is > 1 active version and a user does not specify the
 version, the api will error out, unless a default is defined. We also need a
 default_type in the config so the existing APIs can maintain compatibility.
 We can re-discuss this for v2 of the API.

 Imagine that an operator sets up Trove and only has one active version. They 
 then somehow fumble setting up the default_version, but think they succeeded 
 as the API works for users the way they expect anyway. Then they go to add 
 another active version and suddenly their users get error messages.

 Using only the default_version field of the datastore_type to define a
 default would honor the principle of least surprise.

Are you saying you must have a default version defined to have > 1 active
versions?
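The precedence question being debated can be made concrete with a small resolution sketch. The field names (`active`, `default_version_id`) follow the thread; the function itself is hypothetical, not Trove code:

```python
# Hypothetical sketch of the version-resolution precedence discussed above.
# The data model is simplified to plain dicts; field names follow the thread.

def resolve_version(datastore, versions, requested_version_id=None):
    """Pick a datastore version for an instance-create call."""
    if requested_version_id is not None:
        # The user asked for a specific version; honor it if it exists.
        for v in versions:
            if v["id"] == requested_version_id:
                return v
        raise ValueError("no such version: %s" % requested_version_id)

    active = [v for v in versions if v["active"]]
    default_id = datastore.get("default_version_id")
    if default_id is not None:
        # An operator-defined default wins, but only while it is active.
        for v in active:
            if v["id"] == default_id:
                return v
    if len(active) == 1:
        # Unambiguous: fall back to the single active version.
        return active[0]
    # > 1 active version and no usable default: the API must error out.
    raise ValueError("version must be specified")
```

With this ordering, an operator-set default wins only while it stays active, and a single active version still works with no default defined, matching the least-surprise behavior discussed in the thread.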

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] How users should specify a datastore type when creating an instance

2013-10-22 Thread Tim Simpson
 It is also true that we don't want to define the _need_ to have custom
 images for the datastores. You can, quite easily, deploy mysql or redis on a
 vanilla image.

Additionally, there could be server code at some point soon that will need to
know what datastore type is associated with an instance to determine what db
engine is in use. So for example, if a call such as "users" isn't supported by
a certain datastore used by an instance, the server side code will be able to
determine that and return something such as a bad request or not found status code.


From: Michael Basnight [mbasni...@gmail.com]
Sent: Monday, October 21, 2013 4:05 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Trove] How users should specify a datastore   
type when creating an instance

On Oct 21, 2013, at 1:57 PM, Nikhil Manchanda wrote:


 The image approach works fine if Trove only supports deploying a single
 datastore type (mysql in your case). As soon as we support
 deploying more than 1 datastore type, Trove needs to have some knowledge
 of which guestagent manager classes to load. Hence the need
 for having a datastore type API.

 The argument for needing to keep track of the version is
 similar. Potentially a version increment -- especially of the major
 version -- may require for a different guestagent manager. And Trove
 needs to have this information.

It is also true that we don't want to define the _need_ to have custom images
for the datastores. You can, quite easily, deploy mysql or redis on a vanilla
image.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] How users should specify a datastore type when creating an instance

2013-10-22 Thread Tim Simpson
 It's not intuitive to the User if they are specifying a version alone. You
 don't boot a 'version' of something without specifying what that something is.
 I would rather they only specified the datastore_type alone, and not have
 them specify a version at all.

I agree that for most users just selecting the datastore_type would be most intuitive.

However, when they specify a version it's going to be a GUID, which they could
only possibly know if they have recently enumerated all versions and thus
*know* the version is for the given type they want. In that case I don't think
most users would appreciate having to also pass the type -- it would just be
redundant. So in that case why not make it optional?
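Since each row in datastore_versions would reference its type, deriving the type from a version GUID is a single lookup. A hedged sketch (the in-memory "table" and its `datastore_type_id` column are assumptions of this sketch):

```python
# Hypothetical sketch: derive the datastore type from a version GUID alone,
# as argued above. The dict stands in for Trove's datastore_versions table,
# assumed here to carry a datastore_type_id column.

DATASTORE_VERSIONS = {
    "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b": {
        "datastore_type_id": "e60153d4-8ac4-414a-ad58-fe2e0035704a",
    },
}

def type_for_version(version_id):
    """Return the datastore type id implied by a version GUID."""
    try:
        return DATASTORE_VERSIONS[version_id]["datastore_type_id"]
    except KeyError:
        raise ValueError("unknown datastore version: %s" % version_id)
```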


From: Vipul Sabhaya [vip...@gmail.com]
Sent: Monday, October 21, 2013 5:09 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Trove] How users should specify a datastore type 
when creating an instance




On Mon, Oct 21, 2013 at 2:04 PM, Michael Basnight 
mbasni...@gmail.com wrote:

On Oct 21, 2013, at 1:40 PM, Tim Simpson wrote:

 2. I also think a datastore_version alone should be sufficient since the 
 associated datastore type will be implied:

 When i brought this up it was generally discussed as being confusing. Id 
 like to use type and rely on having a default (or active) version behind the 
 scenes.

 Can't we do both? If a user wants a specific version, most likely they had to 
 enumerate all datastore_versions, spot it in a list, and grab the guid. Why 
 force them to also specify the datastore_type when we can easily determine 
 what that is?

Fair enough.


It's not intuitive to the User if they are specifying a version alone. You
don't boot a 'version' of something without specifying what that something is.
I would rather they only specified the datastore_type alone, and not have them
specify a version at all.


 4. Additionally, in the current pull request to implement this it is 
 possible to avoid passing a version, but only if no more than one version 
 of the datastore_type exists in the database.

 I think instead the datastore_type row in the database should also have a 
 default_version_id property, that an operator could update to the most 
 recent version or whatever other criteria they wish to use, meaning the 
 call could become this simple:

 Since we have determined from this email thread that we have an active
 status, and that > 1 version can be active, we have to think about the
 precedence of active vs default. My question would be: if we have a
 default_version_id and an active version, what do we choose on behalf of the
 user? If there is > 1 active version and a user does not specify the
 version, the api will error out, unless a default is defined. We also need a
 default_type in the config so the existing APIs can maintain compatibility.
 We can re-discuss this for v2 of the API.

 Imagine that an operator sets up Trove and only has one active version. They 
 then somehow fumble setting up the default_version, but think they succeeded 
 as the API works for users the way they expect anyway. Then they go to add 
 another active version and suddenly their users get error messages.

 Using only the default_version field of the datastore_type to define a
 default would honor the principle of least surprise.

Are you saying you must have a default version defined to have > 1 active
versions?


I think it makes sense to have an 'Active' flag on every version -- and a
default flag for the version that should be used as the default in the event the
user doesn't specify one. It also makes sense to require the deployer to set this
accurately, and if one doesn't exist, instance provisioning errors out.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] How users should specify a datastore type when creating an instance

2013-10-21 Thread Tim Simpson
Thanks for the feedback Andrey.

 2. Got this case in irc, and decided to pass type and version together to
 avoid confusion.
I don't understand how allowing the user to only pass the version would confuse 
anyone. Could you elaborate?

 3. Names of types and maybe versions can be good, but this case was rejected
 in an irc conversation; I can't remember the exact reason.
Hmm. Does anyone remember the reason for this?

 4. Actually, the active field in a version marks it as the default in its type.
 Specifying a default version in the config can be useful if you have more than
 one active version in the default type.
If 'active' is allowed to be set for multiple rows of the 'datastore_versions'
table then it isn't a good substitute for the functionality I'm seeking, which
is to allow operators to specify a *single* default version for each
datastore_type in the database. I still think we should add a
'default_version_id' field to the 'datastore_types' table.
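The proposed schema could be sketched as follows. Table and column names are taken from the thread, but the DDL itself is hypothetical, shown against an in-memory SQLite database purely for illustration:

```python
# Hypothetical sketch of the proposed schema: datastore_types gains a
# default_version_id column referencing datastore_versions.id. SQLite is
# used only for illustration; Trove's real migrations differ.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE datastore_versions (
    id TEXT PRIMARY KEY,
    datastore_type_id TEXT NOT NULL,
    active INTEGER NOT NULL DEFAULT 0
);
CREATE TABLE datastore_types (
    id TEXT PRIMARY KEY,
    name TEXT NOT NULL,
    default_version_id TEXT REFERENCES datastore_versions (id)
);
""")

def default_version(type_name):
    """Return the operator-chosen default version id for a type, or None."""
    row = conn.execute(
        "SELECT default_version_id FROM datastore_types WHERE name = ?",
        (type_name,)).fetchone()
    return row[0] if row else None
```

An operator would then point default_version_id at whichever version they consider current, rather than relying on there being exactly one active row.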

Thanks,

Tim


From: Andrey Shestakov [ashesta...@mirantis.com]
Sent: Monday, October 21, 2013 7:15 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Trove] How users should specify a datastore type 
when creating an instance

1. Good point
2. Got this case in irc, and decided to pass type and version together to avoid
confusion.
3. Names of types and maybe versions can be good, but this case was rejected in
an irc conversation; I can't remember the exact reason.
4. Actually, the active field in a version marks it as the default in its type.
Specifying a default version in the config can be useful if you have more than
one active version in the default type.
But how many versions in a type are active depends on the operator's
configuration. And what if the default version in the config is marked as
inactive?

On 10/18/2013 10:30 PM, Tim Simpson wrote:
Hello fellow Trovians,

There has been some good work recently to figure out a way to specify a 
specific datastore  when using Trove. This is essential to supporting multiple 
datastores from the same install of Trove.

I have an issue with some elements of the proposed solution though, so I 
decided I'd start a thread here so we could talk about it.

As a quick refresher, here is the blueprint for this work (there are some
gists appended to the end, but I figured the mailing list would be an easier
venue for discussion):
https://wiki.openstack.org/wiki/Trove/trove-versions-types

One issue I have is with the way the instance create call will change to 
support different data stores. For example, here is the post call:


{
  "instance": {
    "flavorRef": "2",
    "name": "as",
    "datastore_type": "e60153d4-8ac4-414a-ad58-fe2e0035704a",
    "datastore_version": "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b",
    "volume": { "size": 1 }
  }
}


1. I think since we have two fields in the instance object we should make a new 
object for datastore and avoid the name prefixing, like this:


{
  "instance": {
    "flavorRef": "2",
    "name": "as",
    "datastore": {
      "type": "e60153d4-8ac4-414a-ad58-fe2e0035704a",
      "version": "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b"
    },
    "volume": { "size": 1 }
  }
}


2. I also think a datastore_version alone should be sufficient since the 
associated datastore type will be implied:


{
  "instance": {
    "flavorRef": "2",
    "name": "as",
    "datastore": {
      "version": "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b"
    },
    "volume": { "size": 1 }
  }
}


3. Additionally, while a datastore_type should have an ID in the Trove
infrastructure database, it should also be possible to pass just the name of the
datastore type to the instance call, such as "mysql" or "mongo". Maybe we could
allow this in addition to the ID? I think this form should actually use the
argument "type", and the id should then be passed as "type_id" instead.


{
  "instance": {
    "flavorRef": "2",
    "name": "as",
    "datastore": {
      "type": "mysql",
      "version": "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b"
    },
    "volume": { "size": 1 }
  }
}



4. Additionally, in the current pull request to implement this it is possible 
to avoid passing a version, but only if no more than one version of the 
datastore_type exists in the database.

I think instead the datastore_type row in the database should also have a 
default_version_id property, that an operator could update to the most recent 
version or whatever other criteria they wish to use, meaning the call could 
become this simple:


{
  "instance": {
    "flavorRef": "2",
    "name": "as",
    "datastore": {
      "type": "mysql"
    },
    "volume": { "size": 1 }
  }
}


Thoughts?

Thanks,

Tim



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [Trove] Testing of new service types support

2013-10-21 Thread Tim Simpson
Hi Illia,

You're correct; until the work on establishing datastore types and versions as 
a first class Trove concept is finished, which will hopefully be soon (see 
Andrey Shestakov's pull request), testing non-MySQL datastore types will be 
problematic.

A short term, fake-mode only solution could be accomplished fairly quickly as 
follows: run the fake mode tests a third time in Tox with a new configuration 
which allows for MongoDB.

If you look at tox.ini, you'll see that the integration tests run in fake mode 
twice already:

 {envpython} run_tests.py
 {envpython} run_tests.py --test-config=etc/tests/xml.localhost.test.conf

The second invocation causes the trove-client to be used in XML mode, 
effectively testing the XML client.

(Tangent: currently running the tests twice takes some time, even in fake mode- 
however it will cost far less time once the following pull request is merged: 
https://review.openstack.org/#/c/52490/)

If you look at run_tests.py, you'll see that on line 104 it accepts a trove
config file. If the run_tests.py script is updated to allow this value to be
specified optionally via the command line, you could create a variation on
etc/trove/trove.conf.test which specifies MongoDB. You'd then invoke
run_tests.py with a --group= argument to run some subset of the tests supported
by the current MongoDB code in fake mode.
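The suggested change could look roughly like this; the `--trove-conf` flag name and its default are assumptions of this sketch, not the actual run_tests.py code:

```python
# Hypothetical sketch of making the Trove config file selectable from the
# command line, so a MongoDB-specific variant of trove.conf.test can be used.
# The --trove-conf flag and its default are assumptions of this sketch.

import argparse

def parse_args(argv=None):
    parser = argparse.ArgumentParser(description="run fake-mode tests")
    parser.add_argument("--trove-conf",
                        default="etc/trove/trove.conf.test",
                        help="Trove config file to load before the tests")
    parser.add_argument("--group", action="append", default=[],
                        help="restrict the run to these test groups")
    return parser.parse_args(argv)
```

A Mongo run could then be something like `run_tests.py --trove-conf etc/trove/trove.conf.mongo.test --group dbaas.guest.shutdown` (both values hypothetical).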

Of course, this will do nothing to test the guest agent changes or confirm that 
the end to end system actually works, but it could help test a lot of 
incidental API and infrastructure database code.

As for real mode tests, I think we should wait until the datastore type / 
version code is finished, at which point I know we'll all be eager to add 
additional tests for these new datastores. Of course in the short term it 
should be possible for you to change the code locally to build a Mongo DB image 
as well as a Trove config file to support this and then just run some subset of 
tests that works with Mongo.

Thanks,

Tim



From: Illia Khudoshyn [ikhudos...@mirantis.com]
Sent: Monday, October 21, 2013 9:42 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Trove] Testing of new service types support

Hi all,

I'm done implementing the very first bits of MongoDB support in Trove
along with unit tests, and have faced an issue with properly testing it.

It is well known that right now only one service type per installation is
supported by Trove (it is set in the config). All testing infrastructure,
including the Trove-integration codebase and jenkins jobs, seems to rely on that
service type as well. So it seems to be impossible to run all existing tests AND
some additional tests for the MongoDB service type in one pass, at least until
the Trove client allows passing a service type (I know that there is ongoing
work in this area).

Please note that all of the above is about functional and integration testing
-- there are no issues with unit tests.

So the question is: should I first submit the code to Trove and then proceed
with updating Trove-integration, or just put aside all that MongoDB stuff until
the client (and -integration) are ready?

PS AFAIK, there is some work on adding Cassandra and Riak (or Redis?) support 
to Trove. These guys will likely face this issue as well.

--

Best regards,

Illia Khudoshyn,
Software Engineer, Mirantis, Inc.



38, Lenina ave. Kharkov, Ukraine

www.mirantis.com

www.mirantis.ru



Skype: gluke_work

ikhudos...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Testing of new service types support

2013-10-21 Thread Tim Simpson
Can't we say that about nearly any feature though? In theory we could put a
hold on any tests for feature work, saying they
will need to be redone when Tempest integration is finished.

Keep in mind what I'm suggesting here is a fairly trivial change to get some 
validation via the existing fake mode / integration tests at a fairly small 
cost.


From: Michael Basnight [mbasni...@gmail.com]
Sent: Monday, October 21, 2013 11:45 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Trove] Testing of new service types support

Top posting…

I'd like to see these in the tempest tests. I'm just getting started integrating
trove into tempest for testing, and there are some prerequisites that I'm
working through with the infra team. Progress is being made, though. I'd rather
not see them go into 2 different test suites if we can just get them into the
tempest tests. Let's hope the stars line up so that you can start testing in
tempest. :)

On Oct 21, 2013, at 9:25 AM, Illia Khudoshyn wrote:

 Hi Tim,

 Thanks for a quick reply. I'll go with updating run_tests.py for now. Hope, 
 Andrey Shestakov's changes arrive soon.

 Best wishes.



 On Mon, Oct 21, 2013 at 7:01 PM, Tim Simpson tim.simp...@rackspace.com 
 wrote:
 Hi Illia,

 You're correct; until the work on establishing datastore types and versions 
 as a first class Trove concept is finished, which will hopefully be soon (see 
 Andrey Shestakov's pull request), testing non-MySQL datastore types will be 
 problematic.

 A short term, fake-mode only solution could be accomplished fairly quickly as 
 follows: run the fake mode tests a third time in Tox with a new configuration 
 which allows for MongoDB.

 If you look at tox.ini, you'll see that the integration tests run in fake 
 mode twice already:

  {envpython} run_tests.py
  {envpython} run_tests.py --test-config=etc/tests/xml.localhost.test.conf

 The second invocation causes the trove-client to be used in XML mode, 
 effectively testing the XML client.

 (Tangent: currently running the tests twice takes some time, even in fake 
 mode- however it will cost far less time once the following pull request is 
 merged: https://review.openstack.org/#/c/52490/)

 If you look at run_tests.py, you'll see that on line 104 it accepts a trove 
 config file. If the run_tests.py script is updated to allow this value to be 
 specified optionally via the command line, you could create a variation on 
 etc/trove/trove.conf.test which specifies MongoDB. You'd then invoke
 run_tests.py with a --group= argument to run some subset of the tests
 supported by the current MongoDB code in fake mode.

 Of course, this will do nothing to test the guest agent changes or confirm 
 that the end to end system actually works, but it could help test a lot of 
 incidental API and infrastructure database code.

 As for real mode tests, I think we should wait until the datastore type / 
 version code is finished, at which point I know we'll all be eager to add 
 additional tests for these new datastores. Of course in the short term it 
 should be possible for you to  change the code locally to build a Mongo DB 
 image as well as a Trove config file to support this and then just run some 
 subset of tests that works with Mongo.

 Thanks,

 Tim


 From: Illia Khudoshyn [ikhudos...@mirantis.com]
 Sent: Monday, October 21, 2013 9:42 AM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [Trove] Testing of new service types support

 Hi all,

 I've done with implementing the very first bits of MongoDB support in Trove 
 along with unit tests and faced an issue with proper testing of it.

 It is well known that right now only one service type per installation is 
 supported by Trove (it is set in config). All testing infrastructure, 
 including Trove-integration codebase and jenkins jobs, seem to rely on that 
 service type as well. So it seems to be impossible to run all existing tests 
 AND some additional tests for MongoDB service type in one pass, at least 
 until Trove client will allow to pass service type (I know that there is 
 ongoing work in this area).

 Please note, that all of the above is about functional and intergation 
 testing -- there is no issues with unit tests.

 So the question is, should I first submit the code to Trove and then proceed 
 with updating Trove-integration or just put aside all that MongoDB stuff 
 until client (and -integration) will be ready?

 PS AFAIK, there is some work on adding Cassandra and Riak (or Redis?) support 
 to Trove. These guys will likely face this issue as well.

 --
 Best regards,
 Illia Khudoshyn,
 Software Engineer, Mirantis, Inc.

 38, Lenina ave. Kharkov, Ukraine
 www.mirantis.com
 www.mirantis.ru

 Skype: gluke_work
 ikhudos...@mirantis.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Trove] Testing of new service types support

2013-10-21 Thread Tim Simpson
 For the api stuff, sure thats fine. i just think the overall coverage of the 
 review will be quite low if we are only testing the API via fake code.

We're in agreement here, I think. I will say though that if the people working 
on Mongo want to test it early, and go beyond simply using the client to 
manually confirm stuff, it should be possible to run the existing tests by 
building a different image and running a subset, such as 
--group=dbaas.guest.shutdown. IIRC those tests don't do much other than make 
an instance, see it turn to ACTIVE, and delete it. It would be a worthwhile 
spot test to see if it adheres to the bare-minimum Trove API.


From: Michael Basnight [mbasni...@gmail.com]
Sent: Monday, October 21, 2013 12:19 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Trove] Testing of new service types support

On Oct 21, 2013, at 10:02 AM, Tim Simpson wrote:

 Can't we say that about nearly any feature though? In theory we could put a 
 hold on any tests for feature work saying it
 will need to be redone when Tempest integrated is finished.

 Keep in mind what I'm suggesting here is a fairly trivial change to get some 
 validation via the existing fake mode / integration tests at a fairly small 
 cost.

Of course we can do the old tests. And for this it might be the best thing. The
problem I see is that we can't do real integration tests without this work, and
I don't want to integrate a bunch of different service_types without tests that
actually spin them up and run the guest, which is where 80% of the new code
lives for a new service_type. Otherwise we are running fake-guest stuff that is
not a good representation.

For the api stuff, sure thats fine. i just think the overall coverage of the 
review will be quite low if we are only testing the API via fake code.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] How users should specify a datastore type when creating an instance

2013-10-18 Thread Tim Simpson
Hello fellow Trovians,

There has been some good work recently to figure out a way to specify a 
specific datastore  when using Trove. This is essential to supporting multiple 
datastores from the same install of Trove.

I have an issue with some elements of the proposed solution though, so I 
decided I'd start a thread here so we could talk about it.

As a quick refresher, here is the blueprint for this work (there are some
gists appended to the end, but I figured the mailing list would be an easier
venue for discussion):
https://wiki.openstack.org/wiki/Trove/trove-versions-types

One issue I have is with the way the instance create call will change to 
support different data stores. For example, here is the post call:


{
  "instance": {
    "flavorRef": "2",
    "name": "as",
    "datastore_type": "e60153d4-8ac4-414a-ad58-fe2e0035704a",
    "datastore_version": "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b",
    "volume": { "size": 1 }
  }
}


1. I think since we have two fields in the instance object we should make a new 
object for datastore and avoid the name prefixing, like this:


{
  "instance": {
    "flavorRef": "2",
    "name": "as",
    "datastore": {
      "type": "e60153d4-8ac4-414a-ad58-fe2e0035704a",
      "version": "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b"
    },
    "volume": { "size": 1 }
  }
}


2. I also think a datastore_version alone should be sufficient since the 
associated datastore type will be implied:


{
  "instance": {
    "flavorRef": "2",
    "name": "as",
    "datastore": {
      "version": "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b"
    },
    "volume": { "size": 1 }
  }
}


3. Additionally, while a datastore_type should have an ID in the Trove
infrastructure database, it should also be possible to pass just the name of the
datastore type to the instance call, such as "mysql" or "mongo". Maybe we could
allow this in addition to the ID? I think this form should actually use the
argument "type", and the id should then be passed as "type_id" instead.


{
  "instance": {
    "flavorRef": "2",
    "name": "as",
    "datastore": {
      "type": "mysql",
      "version": "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b"
    },
    "volume": { "size": 1 }
  }
}



4. Additionally, in the current pull request to implement this it is possible 
to avoid passing a version, but only if no more than one version of the 
datastore_type exists in the database.

I think instead the datastore_type row in the database should also have a 
default_version_id property, that an operator could update to the most recent 
version or whatever other criteria they wish to use, meaning the call could 
become this simple:


{
  "instance": {
    "flavorRef": "2",
    "name": "as",
    "datastore": {
      "type": "mysql"
    },
    "volume": { "size": 1 }
  }
}


Thoughts?

Thanks,

Tim
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] How users should specify a datastore type when creating an instance

2013-10-18 Thread Tim Simpson
Hi Josh,

 Given that Trove currently only supports a single datastore deployment per 
 control system, does the current work also allow for a default type/version 
 to be defined so that operators of Trove can set this as a property to 
 maintain the current API compatibility/behavior?

Yes, the current pull request to support this allows for a default type, which, 
if there is only a single version for that type in the Trove infrastructure 
database, means that the existing behavior would be preserved. However as soon 
as an operator adds more than one datastore version of the default type then 
API users would need to always include the version ID. This would be fixed by 
recommendation #4 in my original message.

Thanks,

Tim



From: Josh Odom [josh.o...@rackspace.com]
Sent: Friday, October 18, 2013 3:16 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Trove] How users should specify a datastore type 
when creating an instance

Hi Tim,
I do think your recommendations in 3 & 4 make a lot of sense and improve the
usability of the API. Given that Trove currently only supports a single
datastore deployment per control system, does the current work also allow for a
default type/version to be defined so that operators of Trove can set this as a
property to maintain the current API compatibility/behavior?

Josh


From: Tim Simpson tim.simp...@rackspace.com
Reply-To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org
Date: Friday, October 18, 2013 2:30 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Trove] How users should specify a datastore type when 
creating an instance

Hello fellow Trovians,

There has been some good work recently to figure out a way to specify a 
specific datastore when using Trove. This is essential to supporting multiple 
datastores from the same install of Trove.

I have an issue with some elements of the proposed solution though, so I 
decided I'd start a thread here so we could talk about it.

As a quick refresher, here is the blueprint for this work (there are some 
gists appended to the end, but I figured the mailing list would be an easier 
venue for discussion):
https://wiki.openstack.org/wiki/Trove/trove-versions-types

One issue I have is with the way the instance create call will change to 
support different datastores. For example, here is the POST call:


{
  "instance": {
    "flavorRef": 2,
    "name": "as",
    "datastore_type": "e60153d4-8ac4-414a-ad58-fe2e0035704a",
    "datastore_version": "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b",
    "volume": { "size": 1 }
  }
}


1. I think, since we have two fields in the instance object, we should make a 
new object for datastore and avoid the name prefixing, like this:


{
  "instance": {
    "flavorRef": 2,
    "name": "as",
    "datastore": {
      "type": "e60153d4-8ac4-414a-ad58-fe2e0035704a",
      "version": "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b"
    },
    "volume": { "size": 1 }
  }
}


2. I also think a datastore_version alone should be sufficient, since the 
associated datastore type will be implied:


{
  "instance": {
    "flavorRef": 2,
    "name": "as",
    "datastore": {
      "version": "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b"
    },
    "volume": { "size": 1 }
  }
}
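As a sketch of how the API layer could honor this, the helper below infers the type from the version's row in the infrastructure database. The function and table names here are hypothetical, not the actual Trove code:

```python
# Hypothetical lookup table standing in for the Trove infrastructure
# database; maps a datastore_version id to its parent datastore_type id.
VERSION_TO_TYPE = {
    "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b": "e60153d4-8ac4-414a-ad58-fe2e0035704a",
}


def resolve_datastore(datastore):
    """Given the 'datastore' object from the request body, return a
    (type_id, version_id) pair, inferring the type from the version
    when only a version is supplied."""
    version = datastore.get("version")
    type_id = datastore.get("type")
    if version and not type_id:
        # The type is implied by the version's parent row.
        type_id = VERSION_TO_TYPE[version]
    return type_id, version
```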


3. Additionally, while a datastore_type should have an ID in the Trove 
infrastructure database, it should also be possible to pass just the name of 
the datastore type to the instance call, such as "mysql" or "mongo". Maybe we 
could allow this in addition to the ID? I think this form should actually use 
the argument "type", and the ID should then be passed as "type_id" instead.


{
  "instance": {
    "flavorRef": 2,
    "name": "as",
    "datastore": {
      "type": "mysql",
      "version": "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b"
    },
    "volume": { "size": 1 }
  }
}
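One way the API could accept either a name like "mysql" or a UUID for the type is sketched below. Again, the helper and mapping are made up for illustration, not the real implementation:

```python
import re

# Hypothetical name -> id mapping standing in for the infrastructure DB.
TYPE_NAMES = {"mysql": "e60153d4-8ac4-414a-ad58-fe2e0035704a"}

# Standard 8-4-4-4-12 hex UUID shape.
UUID_RE = re.compile(
    r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$")


def resolve_type(value):
    """Return a datastore_type id, accepting either a UUID or a name."""
    if UUID_RE.match(value):
        return value           # already an id; pass it through
    return TYPE_NAMES[value]   # otherwise look the name up
```

Splitting the field into "type" and "type_id" as proposed above would avoid the need to sniff the value's shape like this.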



4. Additionally, in the current pull request to implement this, it is possible 
to omit the version, but only if no more than one version of the 
datastore_type exists in the database.

I think instead the datastore_type row in the database should also have a 
default_version_id property, which an operator could update to the most recent 
version (or by whatever other criteria they wish), meaning the call could 
become this simple:


{
  "instance": {
    "flavorRef": 2,
    "name": "as",
    "datastore": {
      "type": "mysql"
    },
    "volume": { "size": 1 }
  }
}
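The fallback logic this implies is simple; a sketch with hypothetical row and field names (only default_version_id comes from the proposal itself):

```python
# Hypothetical rows from a datastore_types table; each row carries a
# default_version_id that an operator can point at the preferred version.
DATASTORE_TYPES = {
    "mysql": {
        "id": "e60153d4-8ac4-414a-ad58-fe2e0035704a",
        "default_version_id": "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b",
    },
}


def pick_version(type_name, version_id=None):
    """Use the explicit version if the caller gave one, otherwise fall
    back to the type's operator-configured default."""
    if version_id is not None:
        return version_id
    return DATASTORE_TYPES[type_name]["default_version_id"]
```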


Thoughts?

Thanks,

Tim


Re: [openstack-dev] [TROVE] Thoughts on DNS refactoring, Designate integration.

2013-10-01 Thread Tim Simpson
Ilya, you make a good point. We shouldn't spend time massively changing the 
DNS code if we'll just have to do it again so that HEAT can do everything for 
us.

I echo Mike's comments, though, that if for some reason someone wants 
Designate support before we get HEAT integrated, they should be able to add a 
new DNS driver. As I said before, I think that should be possible without 
major changes to the existing DNS code.
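To make that driver seam concrete, here is a rough sketch of the kind of interface involved. The class and method names are illustrative only; they are not the actual Trove driver API, and the client calls are placeholders, not real python-designateclient methods:

```python
class DnsDriver(object):
    """Abstract interface that each DNS backend implements."""

    def create_entry(self, entry):
        raise NotImplementedError

    def delete_entry(self, name, entry_type):
        raise NotImplementedError


class DesignateDriver(DnsDriver):
    """Skeleton of a Designate-backed driver. The methods delegate to
    whatever record calls the injected client exposes (placeholders)."""

    def __init__(self, client):
        self.client = client

    def create_entry(self, entry):
        self.client.create_record(entry)

    def delete_entry(self, name, entry_type):
        self.client.delete_record(name, entry_type)
```

Under this shape, adding Designate support is a matter of writing one new subclass, leaving the existing driver untouched.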

Thanks,

Tim
___
From: Michael Basnight [mailto:mbasni...@gmail.com] 
Sent: Tuesday, October 01, 2013 5:37 PM
To: OpenStack Development Mailing List
Cc: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [TROVE] Thoughts on DNS refactoring, Designate 
integration.

On Oct 1, 2013, at 3:06 PM, Ilya Sviridov isviri...@mirantis.com wrote:

On Tue, Oct 1, 2013 at 6:45 PM, Tim Simpson tim.simp...@rackspace.com wrote:
Hi fellow Trove devs,

With the Designate project ramping up, it's time to refactor the ancient DNS 
code that's in Trove to work with Designate.

The good news is since the beginning, it has been possible to add new drivers 
for DNS in order to use different services. Right now we only have a driver for 
the Rackspace DNS API, but it should be possible to write one for Designate as 
well.

How does this correlate with Trove's direction to use HEAT for provisioning 
and managing all cloud resources?
There are BPs for a Designate resource 
(https://blueprints.launchpad.net/heat/+spec/designate-resource) and for Rackspace 
DNS (https://blueprints.launchpad.net/heat/+spec/rax-dns-resource) as well, and 
it seems logical to use HEAT for that.
Currently Trove has its own logic for provisioning instances, a DNS driver, 
and security group creation; by switching to the HEAT approach we end up with 
duplicate functionality that we then have to support.

+1 to using heat for this. However, as people are working on heat support right 
now to make it more sound, if there is a group that wants/needs DNS refactoring 
now, I'd say let's add it in. If no one needs to change what's existing 
until we get better heat support, then we should just abandon the review and 
leave the existing DNS code as is.

I would prefer, if no one needs it, to abandon the existing review and 
add it to heat support.


 

However, there's a bigger topic here. In a gist Dennis M. sent me recently 
with his thoughts on how this work should proceed, he included the comment 
that Trove should *only* support Designate: 
https://gist.github.com/crazymac/6705456/raw/2a16c7a249e73b3e42d98f5319db167f8d09abe0/gistfile1.txt

I disagree. I have been waiting for a canonical DNS solution such as Designate 
to enter the OpenStack umbrella for years now, and am looking forward to having 
Trove consume it. However, changing all the code so that nothing else works is 
premature.

All non-mainstream resources, like cloud-provider-specific ones, can be 
implemented as HEAT plugins (https://wiki.openstack.org/wiki/Heat/Plugins)
 

Instead, let's start work to play well with Designate now, using the open 
interface that has always existed. In the future, after Designate reaches 
integrated status, we can make the code closed and support only Designate.

Do we really need to integrate with Designate now and then replace that work? 
I expect the Designate resource will arrive together with Designate itself, or 
even earlier.

With best regards,
Ilya Sviridov

Denis also had some other comments about the DNS code, such as not passing a 
single object as a parameter because it could be None. I think this is in 
reference to passing around a DNS entry which gets formed by the DNS instance 
entry factory. I see how someone might think this is brittle, but in actuality 
it has worked for several years, so if anything changing it would introduce 
bugs. The interface was also written to use a full object in order to be 
flexible; a full object should make it easier to work with different types of 
DnsDriver implementations, as well as allowing more options to be set from the 
DnsInstanceEntryFactory. This latter class creates a DnsEntry from an 
instance_id. It is possible that two deployments of Trove, even if they are 
both using Designate, might opt for different DnsInstanceEntryFactory 
implementations in order to give the DNS entries associated with databases 
different patterns. If the DNS entry is created at this point, it's easier to 
further customize and tailor it. This will hold true even when Designate is 
ready to become the only DNS option we support (if such a thing is desirable).
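A sketch of that factory pattern (class names follow the ones mentioned above, but the fields, domain, and hostname pattern are invented for illustration): a deployment-specific factory builds the full DnsEntry, which any driver then consumes unchanged.

```python
class DnsEntry(object):
    """Full DNS entry object handed to the driver, carrying every
    option the driver might need rather than a bare hostname."""

    def __init__(self, name, content, entry_type="A", ttl=3600):
        self.name = name
        self.content = content
        self.type = entry_type
        self.ttl = ttl


class DnsInstanceEntryFactory(object):
    """Builds a DnsEntry from an instance_id. Deployments subclass this
    to control the hostname pattern their entries follow."""

    domain = "db.example.com"  # illustrative domain

    def create_entry(self, instance_id):
        # Example pattern: first 8 chars of the instance id as the label.
        return DnsEntry(name="%s.%s" % (instance_id[:8], self.domain),
                        content=None)
```

Because the driver only ever sees the finished DnsEntry, swapping the hostname scheme means swapping the factory, not touching the driver.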

Thanks,

Tim


