Re: [Openstack] [QA] Aligning "smoke" / "acceptance" / "promotion" test efforts

2012-05-08 Thread Thompson Lee
We're playing with Cucumber through fog: https://github.com/MorphGlobal/foghorn. 
We were going to write a similar email, since the smoketests use Boto, which is 
the Amazon API, not OpenStack.  We added the OpenStack Essex APIs to fog, and 
while foghorn comes in through Ruby, it hits the native APIs and is very easy to 
set up and run.  We won't get around to playing with it further until next week, 
so it's an idea in the rough at the moment.

We love Cucumber...

Lee

On May 8, 2012, at 4:00 PM, Tim Simpson wrote:

> Hi Daryl, 
> 
> I understand what you're trying to accomplish by creating a new client for 
> the tests, but I'm not sure why the interface has to be completely different 
> from the official set of Python client bindings. If it used the same 
> interface, everyone in the community could then benefit from the extra 
> features you're adding to the Tempest client by swapping it in should the 
> need arise. It would also be easier for Tempest newcomers to create and 
> contribute new tests.
> 
> I understand that in some cases the current interface is too nice, but 
> wouldn't the existing one work fine for the majority of the tests? If so, why 
> not just write a few extra methods that send HTTP requests directly, for the 
> cases that need it?
> 
> Additionally, I've heard from some in Rackspace QA that the custom client 
> allows them to see the HTTP codes, which is more illustrative. I feel like 
> this problem could be solved with helper methods like this:
> 
> def assert_response(http_code, func, *args, **kwargs):
>     try:
>         result = func(*args, **kwargs)
>         assert_equal(http_code, 200)
>         return result
>     except ClientException as ce:
>         assert_equal(http_code, ce.code)
> 
> Then you'd write tests like this:
> 
> server = assert_response(200, servers.get, "some_id")
> 
> You could of course have additional methods if the success case indicated a 
> different HTTP code. If more than one HTTP code could possibly lead to the 
> same return value then maybe that indicates the official bindings should be 
> changed. In this case it would be another win, as Tempest writers would be 
> pushing to ensure the Python client interface was as useful as possible.
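
Filled out end to end, the helper pattern Tim describes might look like the
sketch below. It is a minimal illustration only: `ClientException` and
`get_server` are stand-ins invented here for the real Python client bindings,
and the helper returns the client's result so the one-line usage above works.

```python
class ClientException(Exception):
    """Stand-in for the exception the official client bindings raise."""
    def __init__(self, code):
        super().__init__("HTTP %d" % code)
        self.code = code

def assert_response(http_code, func, *args, **kwargs):
    """Call a client method and assert on the HTTP code it produced.

    Success is assumed to mean 200 here; sibling helpers could cover
    202/204-style success codes, as suggested above.
    """
    try:
        result = func(*args, **kwargs)
    except ClientException as ce:
        assert http_code == ce.code, "expected %d, got %d" % (http_code, ce.code)
        return None
    assert http_code == 200, "expected %d, got 200" % http_code
    return result

# A fake "servers.get" standing in for the real bindings:
def get_server(server_id):
    if server_id == "some_id":
        return {"id": server_id, "status": "ACTIVE"}
    raise ClientException(404)

server = assert_response(200, get_server, "some_id")    # success path
missing = assert_response(404, get_server, "other_id")  # expected-error path
```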
> 
> Tim
> From: openstack-bounces+tim.simpson=rackspace@lists.launchpad.net 
> [openstack-bounces+tim.simpson=rackspace@lists.launchpad.net] on behalf 
> of Daryl Walleck [daryl.wall...@rackspace.com]
> Sent: Friday, May 04, 2012 12:03 AM
> To: Maru Newby
> Cc: Rick Lopez; openstack-qa-t...@lists.launchpad.net; 
> 
> Subject: Re: [Openstack] [QA] Aligning "smoke" / "acceptance" / "promotion" 
> test efforts
> 
> Perhaps it's just me, but if I were developing in a different language, I 
> would not want to use a command-line tool to interact with my application. 
> What is the point of developing RESTful APIs if the primary client is not 
> the API itself, but these command-line tools instead?
> 
> While it may appear that the approach I'm advocating is extra work, let me 
> try to explain its purpose. If these aren't clear, I'd be more than glad to 
> give some concrete examples where these techniques have been useful. A few 
> benefits that come to mind are:
> 
> - Testing of the XML implementations of the API. While this could be built 
>   into the clients, I don't see many folks rushing to that cause.
> - Direct access to the response. The clients hide header and response-code 
>   information from the caller, which can be very important. For example, the 
>   response headers for Nova contain a token that can be captured and used 
>   when troubleshooting issues.
> - Avoiding the user-friendliness of the clients. While retries and friendly 
>   exception handling are great for clients, they are not what I want as a 
>   tester. I do want a minimal layer of abstraction between my AUT and my 
>   code so that I'm not making HTTP requests directly from my tests, but from 
>   what I've seen the clients can make more efforts to be helpful than I'd 
>   prefer.
> - The ability to inject other metrics gathering into my clients for the 
>   purpose of troubleshooting/logging/information handling.
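
The "direct access to the response" point above can be sketched as a thin call
layer that hands back the status code, headers, and body untouched, with no
retries or exception translation. This is a minimal sketch: the local stand-in
server and the X-Compute-Request-Id value are invented for the illustration,
not taken from a real Nova deployment.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

def rest_call(host, port, method, path, body=None, headers=None):
    """One HTTP request, returned raw: (status, headers, body)."""
    conn = http.client.HTTPConnection(host, port)
    conn.request(method, path, body=body, headers=headers or {})
    resp = conn.getresponse()
    data = resp.read()
    conn.close()
    return resp.status, dict(resp.getheaders()), data

# Tiny stand-in server so the sketch runs without a cloud behind it.
class FakeNova(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("X-Compute-Request-Id", "req-123")  # illustrative token
        self.end_headers()
        self.wfile.write(b'{"servers": []}')
    def log_message(self, *args):  # keep the sketch quiet
        pass

server = HTTPServer(("127.0.0.1", 0), FakeNova)
threading.Thread(target=server.serve_forever, daemon=True).start()

status, resp_headers, body = rest_call("127.0.0.1", server.server_port,
                                       "GET", "/v2/servers")
server.shutdown()
```

Because nothing is wrapped, a test can assert directly on `status` or pull a
troubleshooting token out of `resp_headers`.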
> 
> While perhaps the idea is that only the smoke tests would use this approach, 
> I'm hesitant about developing tests using multiple approaches simply for the 
> sake of using the clients for certain tests. I'm assuming these were 
> things that were talked about during the CLI portions of OpenStack summit, 
> which I wasn't able to attend. I wasn't aware of this or even some of the new 
> parallel testing efforts which somehow did not come up during the QA track. 
> The purpose of Tempest in the first place was to unify the functional and 
> integration testing efforts for OpenStack projects, and I'm dedicated to 
> doing everything I can to make that happen. If everyone is in agreement on 
> the other side, I certainly don't want to be the one in the way against the 
> majority. However, I just wanted to state my concerns before we ta

[Openstack] [Openstack] smoketests in Ubuntu pkgs?

2012-04-30 Thread Thompson Lee
Are the nova/smoketests included in any of the Ubuntu Precise packages?  I'm 
pulling and running them from git at the moment.
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] wsgi code duplication

2012-04-24 Thread Thompson Lee

On Apr 24, 2012, at 9:28 AM, Ghe Rivero wrote:

> 
> 
> On Tue, Apr 24, 2012 at 2:40 PM, Mark McLoughlin  wrote:
> Hi Ghe,
> 
> On Tue, 2012-04-24 at 12:15 +0200, Ghe Rivero wrote:
> > Hi Everyone,
> > I've been looking through the wsgi code, and I have found a lot of
> > duplicated code between all the projects.
> 
> Thanks for looking into this. It sounds quite daunting.
> 
> I wonder, could we do this iteratively by extracting the code which is most
> common into openstack-common, moving the projects over to that, and then
> starting again on the next layer?
> 
> Cheers,
> Mark.
> 
> I have plans to move as much as possible into openstack-common. I will 
> start with nova as a test bed and see where we get from there. My future 
> plans include db code and tests (in the case of quantum, the plugin tests 
> also have a lot of duplicated code).
> I registered a blueprint for the wsgi issue: 
> https://blueprints.launchpad.net/openstack-common/+spec/wsgi-common
> 
> Ghe Rivero

Is there a code-metrics site that continually reports on metrics like 
duplication?  Adding Ghe's report to a metrics site would be a good first step.  
That has always been a starting point, as it gives code reviewers quick 
evaluation criteria to stop duplication before it ends up in trunk.  Going at 
it directly fixes things looking backward, but the duplication ends up back in 
the code eventually.  The reports help fix the issue going forward.


Re: [Openstack] glance-registry fails to start with latest ubuntu pkgs

2012-04-13 Thread Thompson Lee
Works!

On Apr 13, 2012, at 5:59 PM, David Kranz wrote:

> I ran into this a few hours ago. It seems you have to do
> 
> glance-manage version_control 0
> glance-manage db_sync
> 
> before restarting glance-registry. After I did that all was well.
> 
> -David
> 
> On 4/13/2012 5:03 PM, Lee Thompson wrote:
>> Fresh install or upgraded install, glance-registry fails.  Dropping the 
>> glance DB (PostgreSQL) and recreating it doesn't help.  This is the error...
>> 
>> 
>> # cat registry.log
>> 2012-04-13 20:58:13 20717 INFO [sqlalchemy.engine.base.Engine] SELECT 
>> images.created_at AS images_created_at, images.updated_at AS 
>> images_updated_at, images.deleted_at AS images_deleted_at, images.deleted AS 
>> images_deleted, images.id AS images_id, images.name AS images_name, images.disk_format AS 
>> images_disk_format, images.container_format AS images_container_format, 
>> images.size AS images_size, images.status AS images_status, images.is_public 
>> AS images_is_public, images.location AS images_location, images.checksum AS 
>> images_checksum, images.min_disk AS images_min_disk, images.min_ram AS 
>> images_min_ram, images.owner AS images_owner, images.protected AS 
>> images_protected
>> FROM images
>> LIMIT %(param_1)s
>> 2012-04-13 20:58:13 20717 INFO [sqlalchemy.engine.base.Engine] 
>> {'param_1': 1}
>> 2012-04-13 20:58:13 20717 INFO [sqlalchemy.engine.base.Engine] ROLLBACK
>> 2012-04-13 20:58:13 20717 ERROR [glance.registry.db.api] 
>> (ProgrammingError) relation "images" does not exist
>> LINE 2: FROM images
>> ^
>> 'SELECT images.created_at AS images_created_at, images.updated_at AS 
>> images_updated_at, images.deleted_at AS images_deleted_at, images.deleted AS 
>> images_deleted, images.id AS images_id, images.name AS images_name, images.disk_format AS 
>> images_disk_format, images.container_format AS images_container_format, 
>> images.size AS images_size, images.status AS images_status, images.is_public 
>> AS images_is_public, images.location AS images_location, images.checksum AS 
>> images_checksum, images.min_disk AS images_min_disk, images.min_ram AS 
>> images_min_ram, images.owner AS images_owner, images.protected AS 
>> images_protected \nFROM images \n LIMIT %(param_1)s' {'param_1': 1}
>> 2012-04-13 20:58:13 20717 ERROR [glance.registry.db.api] Could not ensure 
>> database connection and consistency. Ensure database configuration and 
>> permissions are correct and database has been migrated since last upgrade by 
>> running 'glance-manage db_sync'
>> 

