[openstack-dev] [nova] [bug?] possible postgres/mysql incompatibility in InstanceGroup.get_hosts()

2014-03-15 Thread Chris Friesen

Hi,

I'm trying to run InstanceGroup.get_hosts() on a havana installation 
that uses postgres.  When I run the code, I get the following error:



RemoteError: Remote error: ProgrammingError (ProgrammingError) operator 
does not exist: timestamp without time zone ~ unknown
2014-03-14 09:58:57.193 8164 TRACE nova.compute.manager [instance: 
83439206-3a88-495b-b6c7-6aea1287109f] LINE 3: uuid != instances.uuid 
AND (instances.deleted_at ~ 'None') ...
2014-03-14 09:58:57.193 8164 TRACE nova.compute.manager [instance: 
83439206-3a88-495b-b6c7-6aea1287109f]^
2014-03-14 09:58:57.193 8164 TRACE nova.compute.manager [instance: 
83439206-3a88-495b-b6c7-6aea1287109f] HINT:  No operator matches the 
given name and argument type(s). You might need to add explicit type casts.



I'm not a database expert, but after doing some digging, it seems that 
the problem is this line in get_hosts():


filters = {'uuid': filter_uuids, 'deleted_at': None}

It seems that current postgres doesn't allow implicit casts.  If I 
change the line to:


filters = {'uuid': filter_uuids, 'deleted': 0}


Then it seems to work.  Is this change valid?
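For illustration, here is a minimal SQLAlchemy sketch (a simplified model, not Nova's actual code) of why the two filters behave differently: a Python `None` handled by SQLAlchemy's operator overloading compiles to `IS NULL`, which every backend accepts, while the buggy path ended up comparing the timestamp column against a string, which PostgreSQL rejects because it refuses the implicit cast.

```python
from sqlalchemy import Column, DateTime, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Instance(Base):
    """Simplified stand-in for Nova's instances table."""
    __tablename__ = 'instances'
    id = Column(Integer, primary_key=True)
    uuid = Column(String(36))
    deleted = Column(Integer, default=0)       # soft-delete flag
    deleted_at = Column(DateTime, nullable=True)

# Comparing a column to None compiles to IS NULL -- portable across backends:
print(Instance.deleted_at == None)   # instances.deleted_at IS NULL

# The workaround filters on the integer 'deleted' column instead, which needs
# no cast at all:
print(Instance.deleted == 0)         # instances.deleted = :deleted_1
```

The failure in the trace corresponds to the `None` value surviving as the literal string `'None'` and being compared against a `timestamp without time zone` column, which PostgreSQL's strict typing rejects while MySQL silently coerces.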


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [db][all] (Proposal) Restorable & Delayed deletion of OS Resources

2014-03-15 Thread Tim Bell
+1 .. looks like it would cover the accidental use case. Something could then 
be included into oslo for standardisation.

Tim

From: Boris Pavlovic [mailto:bpavlo...@mirantis.com]
Sent: 13 March 2014 20:42
To: OpenStack Development Mailing List
Subject: [openstack-dev] [db][all] (Proposal) Restorable & Delayed deletion of 
OS Resources

Hi stackers,

As a result of discussion:
[openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step 
by step)
http://osdir.com/ml/openstack-dev/2014-03/msg00947.html

I understood that there should be another proposal, about how we should 
implement Restorable & Delayed Deletion of OpenStack Resources in a common way, 
without these hacks with soft deletion in the DB. It is actually very simple; take 
a look at this document:

https://docs.google.com/document/d/1WGrIgMtWJqPDyT6PkPeZhNpej2Q9Mwimula8S8lYGV4/edit?usp=sharing


Best regards,
Boris Pavlovic


Re: [openstack-dev] Moving swift3 to stackforge (was: Re: [OpenStack-Infra] Intermittent failures cloning noVNC from github.com/kanaka)

2014-03-15 Thread Chmouel Boudjnah
On Sat, Mar 15, 2014 at 4:28 AM, Pete Zaitcev zait...@redhat.com wrote:

 I think we should've not kicked it out. Maybe just re-fold it
 back into Swift?


we probably would need to have a vote/chat of some sort first.

Chmouel


Re: [openstack-dev] [nova] [bug?] possible postgres/mysql incompatibility in InstanceGroup.get_hosts()

2014-03-15 Thread Sean Dague
On 03/15/2014 02:49 AM, Chris Friesen wrote:
 Hi,
 
 I'm trying to run InstanceGroup.get_hosts() on a havana installation
 that uses postgres.  When I run the code, I get the following error:
 
 
 RemoteError: Remote error: ProgrammingError (ProgrammingError) operator
 does not exist: timestamp without time zone ~ unknown
 2014-03-14 09:58:57.193 8164 TRACE nova.compute.manager [instance:
 83439206-3a88-495b-b6c7-6aea1287109f] LINE 3: uuid != instances.uuid
 AND (instances.deleted_at ~ 'None') ...
 2014-03-14 09:58:57.193 8164 TRACE nova.compute.manager [instance:
 83439206-3a88-495b-b6c7-6aea1287109f]^
 2014-03-14 09:58:57.193 8164 TRACE nova.compute.manager [instance:
 83439206-3a88-495b-b6c7-6aea1287109f] HINT:  No operator matches the
 given name and argument type(s). You might need to add explicit type casts.
 
 
 I'm not a database expert, but after doing some digging, it seems that
 the problem is this line in get_hosts():
 
 filters = {'uuid': filter_uuids, 'deleted_at': None}
 
 It seems that current postgres doesn't allow implicit casts.  If I
 change the line to:
 
 filters = {'uuid': filter_uuids, 'deleted': 0}
 
 
 Then it seems to work.  Is this change valid?

Yes, postgresql is strongly typed with its data. That's a valid bug you
found, fixes appreciated!

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net





Re: [openstack-dev] [nova] [bug?] possible postgres/mysql incompatibility in InstanceGroup.get_hosts()

2014-03-15 Thread Russell Bryant
On 03/14/2014 06:53 PM, Chris Friesen wrote:
 
 Hi,
 
 I'm trying to run InstanceGroup.get_hosts() on a havana installation
 that uses postgres.  When I run the code, I get the following error:
 
 
 RemoteError: Remote error: ProgrammingError (ProgrammingError) operator
 does not exist: timestamp without time zone ~ unknown
 2014-03-14 09:58:57.193 8164 TRACE nova.compute.manager [instance:
 83439206-3a88-495b-b6c7-6aea1287109f] LINE 3: uuid != instances.uuid
 AND (instances.deleted_at ~ 'None') ...
 2014-03-14 09:58:57.193 8164 TRACE nova.compute.manager [instance:
 83439206-3a88-495b-b6c7-6aea1287109f]^
 2014-03-14 09:58:57.193 8164 TRACE nova.compute.manager [instance:
 83439206-3a88-495b-b6c7-6aea1287109f] HINT:  No operator matches the
 given name and argument type(s). You might need to add explicit type casts.
 
 
 I'm not a database expert, but after doing some digging, it seems that
 the problem is this line in get_hosts():
 
 filters = {'uuid': filter_uuids, 'deleted_at': None}
 
 It seems that current postgres doesn't allow implicit casts.  If I
 change the line to:
 
 filters = {'uuid': filter_uuids, 'deleted': 0}
 
 
 Then it seems to work.

Looks right.  Thanks!

Can you please open a bug and submit a patch?  We should target the fix
to icehouse-rc1.

-- 
Russell Bryant



Re: [openstack-dev] [neutron][policy] Integrating network policies and network services

2014-03-15 Thread Kanzhe Jiang
Hi Mohammad,

Please see my comments inline.

Thanks,
Kanzhe


On Fri, Mar 14, 2014 at 3:18 PM, Mohammad Banikazemi m...@us.ibm.com wrote:

 We have started looking at how the Neutron advanced services being defined
 and developed right now can be used within the Neutron policy framework we
 are building. Furthermore, we have been looking at a new model for the
 policy framework as of the past couple of weeks. So, I have been trying to
 see how the services will fit in (or can be utilized by) the policy work in
 general and with the new contract-based model we are considering in
 particular. Some of the questions I would like to discuss here are specific to
 the use of service chains with the group policy work, but some are generic and
 related to service chaining itself.

 If I understand it correctly, the proposed service chaining model requires
 the creation of the services in the chain without specifying their
 insertion contexts. Then, the service chain is created with specifying the
 services in the chain, a particular provider (which is specific to the
 chain being built) and possibly source and destination insertion contexts.

 Yes, your understanding is mostly correct. The step of instantiating each
service in the chain is only for the DB resources. The service chain
provider orchestrates the actual service creation when the service chain is
created. The source/dest insertion contexts are required to create the
service chain.

 1- This fits ok with the policy model we had developed earlier where the
 policy would get defined between a source and a destination policy endpoint
 group. The chain could be instantiated at the time the policy gets defined.
 (More questions on the instantiation below marked as 1.a and 1.b.) How
 would that work in a contract based model for policy? At the time a
 contract is defined, its producers and consumers are not defined yet.
 Would we postpone the instantiation of the service chain to the time a
 contract gets a producer and at least a consumer?

In a contract based model, we can add a state attribute to the service
chain. Once a contract is defined, a corresponding chain could be defined
without insertion contexts. The chain state is pending. I assume the
producer and consumers can be used to derive the source and destination
insertion contexts for the chain. Once a contract gets a producer and a
consumer, the chain can then be instantiated. When new consumers are added,
the chain would verify if the new context can be supported before updating
the existing contexts. If all producer and consumers are removed from a
contract, the chain provider deletes all service instances in the chain.
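The lifecycle described above could be modeled roughly like this sketch (a hypothetical class for illustration, not the actual Neutron API): a chain defined from a contract starts pending, becomes active once the contract has both a producer and at least one consumer, and is torn down when they are all removed.

```python
class ServiceChain:
    """Toy state machine for a contract-driven service chain."""

    def __init__(self):
        self.state = 'PENDING'       # defined, but no insertion contexts yet
        self.producer = None
        self.consumers = set()

    def _update(self):
        if self.producer and self.consumers:
            # Source/destination insertion contexts are now derivable from
            # the producer and consumers, so the chain can be instantiated.
            self.state = 'ACTIVE'
        elif not self.producer and not self.consumers:
            # All parties removed: tear down service instances if the chain
            # had been instantiated, otherwise it simply stays pending.
            self.state = 'DELETED' if self.state == 'ACTIVE' else 'PENDING'

    def set_producer(self, producer):
        self.producer = producer
        self._update()

    def add_consumer(self, consumer):
        self.consumers.add(consumer)
        self._update()

    def remove_all(self):
        self.producer = None
        self.consumers.clear()
        self._update()

chain = ServiceChain()
print(chain.state)                   # PENDING
chain.set_producer('web-tier')
chain.add_consumer('app-tier')
print(chain.state)                   # ACTIVE
chain.remove_all()
print(chain.state)                   # DELETED
```

A real implementation would also validate, on each new consumer, that the derived insertion context can be supported before updating the existing ones, as described above.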



 1.a- It seems to me, it would be helpful if not necessary to be able to
 define a chain without instantiating the chain. If I understand it
 correctly, in the current service chaining model, when the chain is
 created, the source/destination contexts are used (whether they are
 specified explicitly or implicitly) and the chain of services become
 operational. We may want to be able to define the chain and postpone its
 creation to a later point in time.



 1.b-Is it really possible to stand up a service without knowing its
 insertion context (explicitly defined or implicitly defined) in all cases?
 For certain cases this will be ok but for others, depending on the
 insertion context or other factors such as the requirements of other
 services in the chain we may need to for example instantiate the service
 (e.g. create a VM) at a specific location that is not known when the
 service is created. If that may be the case, would it make sense to not
 instantiate the services of a chain at any level (rather than instantiating
 them and mark them as not operational or not routing traffic to them)
 before the chain is created? (This leads to question 3 below.)

2- With one producer and multiple consumers, do we instantiate a chain
 (meaning the chain and the services in the chain become operational) for
 each consumer? If not, how do we deal with using the same
 source/destination insertion context pair for the provider and all of the
 consumers?

The intent of the SC design is to support one provider and multiple
consumers via destination and source insertion contexts, respectively. The
addition of a new consumer requires an update to the source insertion
context of the chain. Of course, the chain provider will verify the
contexts to make sure they can be supported.



 3- For the service chain creation, I am sure there are good reasons for
 requiring a specific provider for a given chain of services but wouldn't it
 be possible to have a generic chain provider which would instantiate each
 service in the chain using the required provider for each service (e.g.,
 firewall or loadbalancer service) and with setting the insertion contexts
 for each service such that the chain gets constructed as well? I am sure I
 am ignoring some practical requirements but is it worth rethinking the
 current 

Re: [openstack-dev] [db][all] (Proposal) Restorable & Delayed deletion of OS Resources

2014-03-15 Thread Clint Byrum
Excerpts from Joshua Harlow's message of 2014-03-13 16:16:50 -0700:
 Seems ok to me, and likely a good start, although I'm still not very 
 comfortable with the effects of soft_deletion (unless its done by admins 
 only), to me it complicates scheduling (can u schedule to something that has 
 been soft_deleted, likely not). It also creates a  pool of resources that 
 can't be used but can't be deleted either, that sounds a little bad and 
 wastes companies $$ and it reinforces non-cloudly concepts. It also seems 
 very complex, especially when your start connecting more and more resources 
 together via heat or other system (the whole graph of resources now must be 
 soft_deleted, wasting more $$, and how does one restore the graph of 
 resources if some of them were also hard_deleted).
 

I think you stay clear of scheduling issues if you treat it as a stopped
resource. It is still costing you to be there: even if it isn't using the CPU
and RAM, it is a form of reservation.

The pool of unusable resources must be built into the price for
undeletable resources. How long to keep things around is a business
decision. I could see an evolution of the feature that includes undelete
period in the flavor definition.

The fact that one would need to be able to undelete whole applications
is just a missing feature. In the case of Heat, it would definitely get
complicated if you went out of band and accidentally deleted things but
it would be uncomplicated as long as you undeleted it before Heat tried
to do an in-place operation like a Metadata change + waitcondition or a
rebuild/resize.

I think though, that we already have this feature for most things in the
form of stop versus delete. The way I would implement undelete is at
the policy level.

Deny delete to users, and provide a cron-like functionality for
deleting. Let them decide how long they'd like to keep things around for,
and then let the cron-like thing do the actual deleting. I believe a few
of these cron-as-a-service things already exist.
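The deny-delete-plus-cron policy sketched above can be illustrated with a toy implementation (all names hypothetical, not an existing service): a user's "delete" only marks the resource with a due time, an undelete simply unmarks it, and a cron-like sweep performs the real deletion once the grace period expires.

```python
import time

class DelayedDeleter:
    """Toy delayed-deletion queue: delete requests age before taking effect."""

    def __init__(self, grace_seconds):
        self.grace = grace_seconds
        self.pending = {}                       # resource_id -> due timestamp

    def request_delete(self, resource_id, now=None):
        now = time.time() if now is None else now
        self.pending[resource_id] = now + self.grace

    def undelete(self, resource_id):
        # Returns True if the resource was still pending and is now restored.
        return self.pending.pop(resource_id, None) is not None

    def sweep(self, now=None):
        """The cron job: hard-delete everything past its due time."""
        now = time.time() if now is None else now
        due = [r for r, t in self.pending.items() if t <= now]
        for r in due:
            del self.pending[r]                 # the actual (hard) delete
        return due

d = DelayedDeleter(grace_seconds=3600)
d.request_delete('vm-1', now=0)
print(d.sweep(now=100))      # [] -- still inside the grace period
print(d.undelete('vm-1'))    # True -- user changed their mind in time
d.request_delete('vm-2', now=0)
print(d.sweep(now=7200))     # ['vm-2'] -- past due, actually deleted
```

How long the grace period should be is exactly the business decision mentioned above; here it is just a constructor argument.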



[openstack-dev] [nova] server.name raise AttributeError

2014-03-15 Thread Gareth
Hey, Nova guys

  File "/opt/stack/python-novaclient/novaclient/v1_1/servers.py", line 39, in __repr__
    return "Server: %s" % self.name
  File "/opt/stack/python-novaclient/novaclient/openstack/common/apiclient/base.py", line 461, in __getattr__
    self.get()
  File "/opt/stack/python-novaclient/novaclient/openstack/common/apiclient/base.py", line 479, in get
    new = self.manager.get(self.id)
  File "/opt/stack/python-novaclient/novaclient/v1_1/servers.py", line 543, in get
    return self._get("/servers/%s" % base.getid(server), server)
  File "/opt/stack/python-novaclient/novaclient/base.py", line 147, in _get
    _resp, body = self.api.client.get(url)
  File "/opt/stack/python-novaclient/novaclient/client.py", line 283, in get
    return self._cs_request(url, 'GET', **kwargs)
  File "/opt/stack/python-novaclient/novaclient/client.py", line 260, in _cs_request
    **kwargs)
  File "/opt/stack/python-novaclient/novaclient/client.py", line 242, in _time_request
    resp, body = self.request(url, method, **kwargs)
  File "/opt/stack/python-novaclient/novaclient/client.py", line 236, in request
    raise exceptions.from_response(resp, body, url, method)
NotFound: Instance could not be found (HTTP 404) (Request-ID:
req-e6208ee5-5fb3-4195-983f-9847124a9982)


Looking at this stack trace, we can see that I have a *server* object, and
when server.name was accessed, the *server* went to lazy loading. That means
this *server* object didn't have a name attribute from initialization. All
attributes of *server* come from the nova API response, so the response from
the nova get-server API didn't contain the name keyword. What case would make
this happen?

The following 404 error is understandable: at that point my code was trying to
delete the server, so by the time the lazy load ran, the server had already
been deleted, which results in the 404 error.

My question is about that self.name. Why would such a server object be missing
the name attribute?
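The lazy-load path visible in the traceback can be sketched roughly as follows (class names and the FakeManager are hypothetical stand-ins, not novaclient's actual code): attribute access falls through to __getattr__, which re-fetches the resource from the API, and if the server was deleted in the meantime the re-fetch raises NotFound instead of a plain AttributeError.

```python
class NotFound(Exception):
    """Stand-in for novaclient's HTTP 404 exception."""

class Resource:
    def __init__(self, manager, info):
        self.manager = manager
        self._info = dict(info)
        self.__dict__.update(info)       # only the fields the API returned

    def __getattr__(self, name):
        # Called only when normal lookup fails, i.e. the original API
        # response did not include this field: do a full GET and retry.
        self.get()
        if name in self.__dict__:
            return self.__dict__[name]
        raise AttributeError(name)

    def get(self):
        new = self.manager.get(self.id)
        self.__dict__.update(new._info)

class FakeManager:
    def __init__(self, store):
        self.store = store               # server id -> attribute dict

    def get(self, server_id):
        if server_id not in self.store:
            raise NotFound('Instance could not be found (HTTP 404)')
        return Resource(self, self.store[server_id])

mgr = FakeManager({'s1': {'id': 's1', 'name': 'vm-1'}})
partial = Resource(mgr, {'id': 's1'})    # a response that lacked 'name'
print(partial.name)                      # lazy load fills it in -> vm-1
del mgr.store['s1']                      # server deleted concurrently
stale = Resource(mgr, {'id': 's1'})
try:
    stale.name                           # lazy load now hits the 404
except NotFound as exc:
    print(exc)
```

So the name attribute was never missing in a broken way: the original response simply didn't include it, and the race with the concurrent delete turned the transparent re-fetch into the 404.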

-- 
Gareth

*Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball*
*OpenStack contributor, kun_huang@freenode*
*My promise: if you find any spelling or grammar mistakes in my email from
Mar 1 2013, notify me *
*and I'll donate $1 or ¥1 to an open organization you specify.*


Re: [openstack-dev] [Oslo] Improving oslo-incubator update.py

2014-03-15 Thread Solly Ross
 --nodeps will only sync the modules specified on the command line: 
 https://wiki.openstack.org/wiki/Oslo#Syncing_Code_from_Incubator

Heh, whoops.  Must have missed that.  Is it in the README/info at the top of 
the update.py script?

 That said, it's not always safe to do that.  You might sync a change in 
 one module that depends on a change in another module and end up 
 breaking something.  It might not be caught in the sync either because 
 the Oslo unit tests don't get synced across.

Hmm... I suppose this is why we have libraries with dependencies (not meant to 
sound snarky). 
Although in the case of updating a library that you wrote, it's less likely to 
break things.

Best Regards,
Solly Ross

- Original Message -
From: Ben Nemec openst...@nemebean.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Cc: Solly Ross sr...@redhat.com
Sent: Friday, March 14, 2014 4:36:03 PM
Subject: Re: [openstack-dev] [Oslo] Improving oslo-incubator update.py

On 2014-03-14 14:49, Solly Ross wrote:
 It would also be great if there was a way to only sync one package.

There is. :-)

--nodeps will only sync the modules specified on the command line: 
https://wiki.openstack.org/wiki/Oslo#Syncing_Code_from_Incubator

That said, it's not always safe to do that.  You might sync a change in 
one module that depends on a change in another module and end up 
breaking something.  It might not be caught in the sync either because 
the Oslo unit tests don't get synced across.

 When adding a new library
 to a project (e.g. openstack.common.report to Nova), one would want to
 only sync the openstack.common.report
 parts, and not any changes from the rest of openstack.common.  My
 process has been
 
 1. Edit openstack-common.conf to only contain the packages I want
 2. Run the update
 3. Make sure there wasn't code that didn't get changed from
 'openstack.common.xyz' to 'nova.openstack.common.xyz' (hint: this
 happens some times)
 4. git checkout openstack-common.conf to revert the changes to
 openstack-common.conf
 
 IMHO, update.py needs a bit of work (well, I think the whole code
 copying thing needs a bit of work, but that's a different story).
 
 Best Regards,
 Solly Ross
 
 - Original Message -
 From: Jay S Bryant jsbry...@us.ibm.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Sent: Friday, March 14, 2014 3:36:49 PM
 Subject: Re: [openstack-dev] [Oslo] Improving oslo-incubator update.py
 
 
 
 
 From: Brant Knudson b...@acm.org
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date: 03/14/2014 02:21 PM
 Subject: Re: [openstack-dev] [Oslo] Improving oslo-incubator update.py
 
 
 
 
 
 
 
 On Fri, Mar 14, 2014 at 2:05 PM, Jay S Bryant  jsbry...@us.ibm.com  
 wrote:
 
 It would be great if we could get the process for this automated. In 
 the
 mean time, those of us doing the syncs will just have to slog through 
 the
 process.
 
 Jay
 
 
 
 
 
 What's the process? How do I generate the list of changes?
 
 Brant
 
 
 Brant,
 
 My process thus far has been the following:
 
 
 1. Do the sync to see what files are changed.
 2. Take a look at the last commit sync'd to what is currently in
 master for a file.
 3. Document all the commits that have come in on that file since.
 4. Repeat process for all the relevant files if there is more than 
 one.
 5. If there are multiple files, I organize the commits with a list of
 the files touched by that commit.
 6. Document the master level of Oslo when the sync was done for 
 reference.
 
 Process may not be perfect, but it gets the job done. Here is an
 example of the format I use: https://review.openstack.org/#/c/75740/
 
 Jay
 
 



Re: [openstack-dev] [Neutron] Advanced Services Common requirements IRC meeting

2014-03-15 Thread Eugene Nikanorov
I'd like to clarify on providers/vendors:

The idea is that the provider/vendor attribute (technically
ProviderResourceAssociation) is still used for dispatching calls to the
drivers.
We may or may not show this attribute to a tenant. And it's not an
attribute that can be provided by an admin or tenant; it's always a result of
scheduling, i.e., read-only.

Thanks,
Eugene.




On Sat, Mar 15, 2014 at 12:25 AM, Sumit Naiksatam
sumitnaiksa...@gmail.com wrote:

 Here is a summary from yesterday's meeting:

 ** Flavors (topic lead: Eugene Nikanorov)
 * Hide the provider implementation details from the user
   - The admin API would still need to be able to configure the
 provider artifacts
   - Suggestion to leverage existing provider model for this
   - Also suggestion to optionally expose vendor in the tenant-facing API
 * It should have generic application to Neutron services
 * It should be able to represent different flavors of the same
 service (supported by the same or different backend providers)
 * Should the tenant facing abstraction support exact match and/or
 loose semantics for flavor specifications?
   - This does not necessarily have to be mutually exclusive. We could
 converge on a base set of attributes as a part of the generic and
 common definition across services. There could be additional
 (extended) attributes that can be exposed per backend provider (and
 might end up being specific to that deployment)
 * Name of this abstraction, we did not discuss this

 ** Service Insertion/Chaining (topic lead: Sumit Naiksatam)
 * Service context
   - general agreement on what is being introduced in:
 https://review.openstack.org/#/c/62599
 * Service chain
    - Can a flavor capture the definition of a service chain? Current
 thinking is yes.
   - If so, we need to discuss more on the feasibility of tenant
 created service chains
   - The current approach specified in:

 https://blueprints.launchpad.net/neutron/+spec/neutron-services-insertion-chaining-steering
 does not preclude this.

 ** Vendor plugins for L3 services (topic lead: Paul Michali)
 * how to handle/locate vendor config files
 * way to do vendor validation (e.g. validate, commit, apply ~ to
 precommit/postcommit)
 * How to tell client what vendor capabilities are
 * How to report to plugin status, when there are problems
 * I've seen a bunch of these issues with VPN development and imagine
  other svcs do too.
 * Should we setup a separate IRC to discuss some ideas on this?

 ** Requirements for group policy framework
 * We could not cover this

 ** Logistics
 * The feedback was to continue this meeting on a weekly basis (since
 lots more discussions are needed on these and other topics), and
 change the day/time to Wednesdays at 1700 UTC on #openstack-meeting-3

 Meeting wiki page and logs can be found here:
 https://wiki.openstack.org/wiki/Meetings/AdvancedServices

 Thanks,
 ~Sumit.


 On Wed, Mar 12, 2014 at 9:20 PM, Sumit Naiksatam
 sumitnaiksa...@gmail.com wrote:
  Hi,
 
  This is a reminder - we will be having this meeting in
  #openstack-meeting-3 on March 13th (Thursday) at 18:00 UTC. The
  proposed agenda is as follows:
 
  * Flavors/service-type framework
  * Service insertion/chaining
  * Group policy requirements
  * Vendor plugins for L3 services
 
  We can also decide the time/day/frequency of future meetings.
 
  Meeting wiki: https://wiki.openstack.org/wiki/Meetings/AdvancedServices
 
  Thanks,
  ~Sumit.




[openstack-dev] [nova] question about e41fb84 fix anti-affinity race condition on boot

2014-03-15 Thread Chris Friesen

Hi,

I'm curious why the specified git commit chose to fix the anti-affinity 
race condition by aborting the boot and triggering a reschedule.


It seems to me that it would have been more elegant for the scheduler to 
do a database transaction that would atomically check that the chosen 
host was not already part of the group, and then add the instance (with 
the chosen host) to the group.  If the check fails then the scheduler 
could update the group_hosts list and reschedule.  This would prevent 
the race condition in the first place rather than detecting it later and 
trying to work around it.


This would require setting the host field in the instance at the time 
of scheduling rather than the time of instance creation, but that seems 
like it should work okay.  Maybe I'm missing something though...
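The atomic check-and-add being proposed can be sketched as a single database transaction (a hypothetical simplified schema here, not Nova's real tables): inside one write transaction, verify the chosen host is not already used by the group, and only then record it, so two concurrent schedules of the same group can never both claim the same host.

```python
import sqlite3

# In-memory DB in autocommit mode so we control transactions explicitly.
conn = sqlite3.connect(':memory:', isolation_level=None)
conn.execute(
    'CREATE TABLE group_members (group_id TEXT, instance TEXT, host TEXT)')

def claim_host(group_id, instance, host):
    """Atomically claim `host` for `instance` unless the group already uses it."""
    cur = conn.cursor()
    cur.execute('BEGIN IMMEDIATE')            # take the write lock up front
    try:
        cur.execute(
            'SELECT 1 FROM group_members WHERE group_id=? AND host=?',
            (group_id, host))
        if cur.fetchone():
            cur.execute('ROLLBACK')
            return False                      # anti-affinity violated -> reschedule
        cur.execute('INSERT INTO group_members VALUES (?, ?, ?)',
                    (group_id, instance, host))
        cur.execute('COMMIT')
        return True
    except Exception:
        cur.execute('ROLLBACK')
        raise

print(claim_host('g1', 'vm1', 'hostA'))   # True  -- first claim wins
print(claim_host('g1', 'vm2', 'hostA'))   # False -- host already in the group
print(claim_host('g1', 'vm2', 'hostB'))   # True  -- different host is fine
```

With this shape, the losing scheduler learns about the conflict before the boot starts, rather than aborting and rescheduling after the fact.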


Thanks,
Chris



Re: [openstack-dev] OpenStack/GSoC

2014-03-15 Thread Victoria Martínez de la Cruz
Hi all,

I asked about connecting to the organization through Melange in #gsoc, and
it looks like that's not a student option; it's just for mentors. Students
only have to upload their proof of enrollment and their proposal.

Also, I saw that #openstack-gsoc has been registered (that's cool!), but it has
the m flag on (moderated), so only opped or voiced users can talk. Not sure
if that was intended, but if it wasn't, please turn it off.

Have a great weekend,

Victoria



2014-03-13 10:55 GMT-03:00 Davanum Srinivas dava...@gmail.com:

 FYI, here's the log for the OpenStack GSoC meeting we just wrapped up

 http://paste.openstack.org/show/73389/



 On Tue, Mar 11, 2014 at 1:29 PM, Davanum Srinivas dava...@gmail.com
 wrote:
  Hi,
 
  Mentors:
  * Please click on My Dashboard then Connect with organizations and
  request a connection as a mentor (on the GSoC web site -
  http://www.google-melange.com/)
 
  Students:
  * Please see the Application template you will need to fill in on the
 GSoC site.
http://www.google-melange.com/gsoc/org2/google/gsoc2014/openstack
  * Please click on My Dashboard then Connect with organizations and
  request a connection
 
  Both Mentors and Students:
  Let's meet on #openstack-gsoc channel on Thursday 9:00 AM EDT / 13:00
  UTC for about 30 mins to meet and greet, since the application deadline
  is next week. If this time is not convenient, please send me a note
  and I'll arrange for another time, say on Friday, as well.
 
 http://www.timeanddate.com/worldclock/fixedtime.html?iso=20140313T09p1=43am=30
 
  We need to get an idea of how many slots we need to apply for based on
  really strong applications with properly fleshed out project ideas and
  mentor support. Hoping the meeting on IRC will nudge the students and
  mentors work towards that goal.
 
  Thanks,
  dims



 --
 Davanum Srinivas :: http://davanum.wordpress.com




Re: [openstack-dev] [nova] [bug?] possible postgres/mysql incompatibility in InstanceGroup.get_hosts()

2014-03-15 Thread Chris Friesen

On 03/15/2014 04:29 AM, Sean Dague wrote:

On 03/15/2014 02:49 AM, Chris Friesen wrote:

Hi,

I'm trying to run InstanceGroup.get_hosts() on a havana installation
that uses postgres.  When I run the code, I get the following error:


RemoteError: Remote error: ProgrammingError (ProgrammingError) operator
does not exist: timestamp without time zone ~ unknown
2014-03-14 09:58:57.193 8164 TRACE nova.compute.manager [instance:
83439206-3a88-495b-b6c7-6aea1287109f] LINE 3: uuid != instances.uuid
AND (instances.deleted_at ~ 'None') ...
2014-03-14 09:58:57.193 8164 TRACE nova.compute.manager [instance:
83439206-3a88-495b-b6c7-6aea1287109f]^
2014-03-14 09:58:57.193 8164 TRACE nova.compute.manager [instance:
83439206-3a88-495b-b6c7-6aea1287109f] HINT:  No operator matches the
given name and argument type(s). You might need to add explicit type casts.


I'm not a database expert, but after doing some digging, it seems that
the problem is this line in get_hosts():

filters = {'uuid': filter_uuids, 'deleted_at': None}

It seems that current postgres doesn't allow implicit casts.  If I
change the line to:

filters = {'uuid': filter_uuids, 'deleted': 0}


Then it seems to work.  Is this change valid?


Yes, postgresql is strongly typed with its data. That's a valid bug you
found, fixes appreciated!


Bug report is open at https://bugs.launchpad.net/nova/+bug/1292963

Patch is up for review at https://review.openstack.org/80808, comments 
welcome.


Chris



Re: [openstack-dev] [Congress] Policy types

2014-03-15 Thread prabhakar Kudva
Hi Tim,
 
Here is a small change I wanted to try in runtime.py
It may already exist in MaterializedViewTheory, but it wasn't clear to me.
Checking to see if this is something that:
1. makes sense
2. already exists
3. is worth implementing
in that order.
 
Let's take the example from private_public_network.classify
 
error(vm) :- nova:virtual_machine(vm), nova:network(vm, network),
 not neutron:public_network(network),
 neutron:owner(network, netowner), nova:owner(vm, vmowner), not 
same_group(netowner, vmowner)

same_group(user1, user2) :- cms:group(user1, group), cms:group(user2, group)

nova:virtual_machine(vm1)
nova:virtual_machine(vm2)
nova:virtual_machine(vm3)
nova:network(vm1, net_private)
nova:network(vm2, net_public)
neutron:public_network(net_public)
nova:owner(vm1, tim)
nova:owner(vm2, pete)
nova:owner(vm3, pierre)
neutron:owner(net_private, martin)

 
In this example, if as in Scenario 1:
 
Cloud services at our disposal:
nova:virtual_machine(vm)
nova:network(vm, network)
nova:owner(vm, owner)
neutron:public_network(network)
neutron:owner(network, owner)
cms:group(user, group)

are all python functions called through some nova/neutron API, then we just
execute them to get a true/false value in runtime.py. They should first be
checked to make sure they are python functions and not condition primitives,
using 'callable' and dir() or some such combination.
 
If not, and they are assertions made in the file, not directly related to OS
state, then in Scenario 2
 
nova:owner(vm1, tim)
nova:owner(vm2, pete)
nova:owner(vm3, pierre)

Are assertions made in the file. In a dynamic environment, 
a python function could query an OS client to actually find the current owner, 
since some other OS command could have been used to change the owner 
without an entry being made in this file, i.e., without explicitly informing 
Congress. 
This may not occur currently with vms, but may be implemented 
in a future release. Similar other examples are possible
https://ask.openstack.org/en/question/5582/how-to-change-ownership-between-tenants-of-volume/
https://blueprints.launchpad.net/cinder/+spec/volume-transfer
 
 
So, I was thinking that python_nova_owner(vm1) is first checked as a python
function which calls the appropriate OS client to check the current owner.
As in nova:owner(vm1, python_nova_owner(vm1))
 
So, in runtime.py, in either scenario, condition primitives are first checked 
to see if they are callable python functions (using python's callable()). 
If so, the function is executed to get the name of the owner. All non-callable 
primitives are assumed to be congress and/or datalog primitives, and are 
unified through the MaterializedViewTheory.
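A minimal sketch of that callable-vs-fact dispatch. The fact store and the executable predicate here are illustrative stand-ins, not the MaterializedViewTheory internals:

```python
# Static Datalog-style facts, stored as tuples.
facts = {('nova:owner', 'vm2', 'pete'), ('nova:owner', 'vm3', 'pierre')}

def nova_owner_vm1(*args):
    # Executable primitive standing in for a client-backed lookup.
    return args == ('vm1', 'tim')

def evaluate(primitive, *args):
    if callable(primitive):
        # Callable python function: execute it at query time.
        return primitive(*args)
    # Non-callable: treat it as a datalog primitive and look it up
    # among the asserted facts.
    return (primitive,) + args in facts

print(evaluate(nova_owner_vm1, 'vm1', 'tim'))   # True
print(evaluate('nova:owner', 'vm2', 'pete'))    # True
print(evaluate('nova:owner', 'vm2', 'tim'))     # False
```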
 
 
Thanks,
 
 Date: Thu, 13 Mar 2014 08:55:24 -0700
 From: thinri...@vmware.com
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Congress] Policy types
 
 Hi Prabhakar,
 
 I'm not sure the functionality is split between 'policy' and 'server' as 
 cleanly as you describe.
 
 The 'policy' directory contains the Policy Engine.  At its core, the policy 
 engine has a generic Datalog implementation that could feasibly be used by 
 other OS components.  (I don't want to think about pulling it out into Oslo 
 though.  There are just too many other things going on and no demand yet.)  
 But there are also Congress-specific things in that directory, e.g. the class 
 Runtime in policy/runtime.py will be the one that we hook up external API 
 calls to.
 
 The 'server' directory contains the code for the API web server that calls 
 into the Runtime class.
 
 So if you're digging through code, I'd suggest focusing on the 'policy' 
 directory and looking at compile.py (responsible for converting Datalog rules 
 written as strings into an internal representation) and runtime.py 
 (responsible for everything else).  The docs I mentioned in the IRC should 
 have a decent explanation of the functions in Runtime that the API web server 
 will hook into.  
 
 Be warned though that unless someone raises some serious objections to the 
 proposal that started this thread, we'll be removing some of the more 
 complicated functions from Runtime.  The compile.py code won't change (much). 
  All of the 3 new theories will be instances of MaterializedViewTheory.  
 That's also the code that must change to add in the Python functions we 
 talked about (more specifically see MaterializedViewTheory::propagate_rule(), 
 which calls TopDownTheory::top_down_evaluation(), which is what will need 
 modification).
 
 Tim
  
 
 
 
 - Original Message -
 | From: prabhakar Kudva nandava...@hotmail.com
 | To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 | Sent: Wednesday, March 12, 2014 1:38:55 PM
 | Subject: Re: [openstack-dev] [Congress] Policy types
 | 
 | 
 | 
 | 
 | Hi Tim,
 | 
 | Thanks for your comments.
 | Would be happy to contribute to the propsal and code.
 | 
 | The existing code 

Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-15 Thread Clint Byrum
Excerpts from Tim Bell's message of 2014-03-14 13:54:32 -0700:
 
 I think we need to split the scenarios and focus on the end user experience 
 with the cloud 
 
  a few come to my mind from the CERN experience (but this may not be all):
 
 1. Accidental deletion of an object (including meta data)
 2. Multi-level consistency (such as between Cell API and child instances)
 3. Auditing
 
 CERN has the scenario 1 at a reasonable frequency. Ultimately, it is due to 
 error by
 --
 A - the openstack administrators themselves
 B - the delegated project administrators
 C - users with a non-optimised scope for administrative action
 D - users who make mistakes
 
 It seems that we should handle these as different cases
 
 3 - make sure there is a log entry (ideally off the box) for all operations
 2 - up to the component implementers but with the aim to expire deleted 
 entries as soon as reasonable consistency is achieved
 1[A-D] - how can we recover from operator/project admin/user error ?
 
 I understand that there are differing perspectives from cloud to server 
 consolidation but my cloud users expect that if they create a one-off virtual 
 desktop running Windows for software testing and install a set of software, I 
 don't tell them it was accidentally deleted due to operator error (1A or 1B), 
 you need to re-create it.
 


Totally agree with all of your points.

I think you can achieve this level of protection simply by denying
interactive users the rights to delete individual things directly, and
using stop instead of delete. Then have something else (cron?) clean up
stopped instances after a safety period has been reached.
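A hedged sketch of that cron-style reaper. The tuple layout, the SHUTOFF status string, and the seven-day safety period are assumptions for illustration; in a real job `delete` would be a client call:

```python
import time

SAFETY_PERIOD = 7 * 86400  # assumed: keep stopped instances for 7 days

def reap_stopped_instances(instances, delete, now=None):
    """Delete instances that have been stopped longer than the safety
    period.  `instances` is an iterable of (id, status, stopped_at)
    tuples with epoch timestamps; `delete` is whatever actually
    removes the instance."""
    now = time.time() if now is None else now
    reaped = []
    for inst_id, status, stopped_at in instances:
        if status == 'SHUTOFF' and now - stopped_at > SAFETY_PERIOD:
            delete(inst_id)
            reaped.append(inst_id)
    return reaped

# Example run with a fixed clock and a no-op delete:
now = 10 * 86400
instances = [
    ('a', 'SHUTOFF', now - 8 * 86400),  # past the safety period
    ('b', 'SHUTOFF', now - 1 * 86400),  # still within it
    ('c', 'ACTIVE',  now - 9 * 86400),  # running, never touched
]
print(reap_stopped_instances(instances, delete=lambda i: None, now=now))
# ['a']
```

The user-visible operation stays reversible (stop, then restore within the window), while the irreversible delete is deferred to this background pass.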

This is an interesting counter to the opposite more fluid tactic which
is to delete instances that have been up for too long, assuming that
long lived == wrong and costly.
