Re: [Openstack] [Metering] Meeting agenda for Thursday at 16:00 UTC (May 17th, 2012)

2012-05-18 Thread Nick Barcet
On 05/16/2012 08:44 PM, Nick Barcet wrote:
 Hi,
 
 The metering project team holds a weekly meeting in #openstack-meeting,
 Thursdays at 1600 UTC
 http://www.timeanddate.com/worldclock/fixedtime.html?hour=16min=0sec=0.
 Everyone is welcome.
 
 Since we were not able to conclude the API discussion last week, we
 continued our discussions on the list and on the #openstack-metering
 channel, and are now coming back with a better proposal (or at least we
 hope it is better).
 
 http://wiki.openstack.org/Meetings/MeteringAgenda
 Topic: external API definition (part 2)
 
   * Agree on update to the schema to include JSON formatted metadata as
 described at [1]
   * Agree on the API proposal described at [2]
   * Agree on the format for date_time. The suggestion is to use ISO, but
 we are seeking validation against REST best practice.
   * Agree on the use of a transparent cache for aggregation
   * Open discussion (if we have any time left)
 
 [1] http://wiki.openstack.org/EfficientMetering#Storage
 [2] http://wiki.openstack.org/EfficientMetering/APIProposalv1

The meeting took place yesterday, a summary follows.

==========================
#openstack-meeting Meeting
==========================

Meeting started by nijaba at 16:01:26 UTC.  The full logs are available
at
http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-05-17-16.01.log.html

Meeting summary
---

* agenda http://wiki.openstack.org/Meetings/MeteringAgenda  (nijaba,
  16:01:26)

* actions from previous meetings  (nijaba, 16:02:14)
  * dachary: add info to the wiki on the topic of poll versus push
(nijaba, 16:02:26)
  * dhellmann: reformulate the API proposal as a starting point for the
discussion on the ML  (nijaba, 16:03:25)

* Agree on update to schema to include JSON formatted metadata  (nijaba,
  16:05:55)
  * LINK: http://wiki.openstack.org/EfficientMetering#Storage  (nijaba,
16:05:55)
  * AGREED: to update the schema to include JSON formatted metadata
(nijaba, 16:10:40)

* Agree on API proposal  (nijaba, 16:10:56)
  * LINK: http://wiki.openstack.org/EfficientMetering/APIProposalv1
(nijaba, 16:10:56)
  * AGREED: on API proposal
http://wiki.openstack.org/EfficientMetering/APIProposalv1  (nijaba,
16:15:01)
  * ACTION: flacoste to follow up on the discussion about a bus-only
implementation  (nijaba, 16:17:43)

* Agree on format for date_time  (nijaba, 16:15:22)
  * Suggestion is to use ISO but seeking validation for best practice
for REST  (nijaba, 16:15:22)
  * ACTION: nijaba to add the use of UTC for datetime  (nijaba,
16:17:03)
  * AGREED: to use ISO for datetime  (nijaba, 16:18:54)

* Agree on the use of a transparent cache for aggregation  (nijaba,
  16:19:11)
  * AGREED: caching is an implementation detail  (nijaba, 16:23:58)

* Open discussion  (nijaba, 16:24:28)
  * ACTION: dhellmann to document a counter for discrete event as an
example  (nijaba, 16:29:02)

Meeting ended at 16:31:15 UTC.

Action items, by person
---

* dhellmann
  * dhellmann to document a counter for discrete event as an example
* flacoste
  * flacoste to follow up on the discussion about a bus-only implementation
* nijaba
  * nijaba to add the use of UTC for datetime

Next week's meeting will cover the choice of a messaging queue for the
project.
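The datetime agreements above (ISO format, plus the action item to add the use of UTC) can be illustrated with a short Python sketch; this is only an illustration of the convention, not project code:

```python
from datetime import datetime, timezone

def utc_iso_now() -> str:
    # Emit an ISO 8601 timestamp pinned to UTC, as agreed in the meeting.
    # timespec="seconds" keeps the value compact and unambiguous.
    return datetime.now(timezone.utc).isoformat(timespec="seconds")

# Example shape of the output: '2012-05-17T16:18:54+00:00'
```

A reader of such a timestamp can parse it back with `datetime.fromisoformat`.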

Cheers,
--
Nick Barcet nick.bar...@canonical.com
aka: nijaba, nicolas



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [metering] Choice of a messaging queue

2012-05-18 Thread Nick Barcet
Hello everyone,

The goal of next week's IRC meeting is to choose a reference messaging
queue service for the ceilometer project.  For that meeting to be
successful, a discussion of the choices we have to make needs to happen
first, right here.

To open the discussion, here are a few requirements that I would consider
important for the queue to support:

a) the queue must guarantee the delivery of messages.
Unlike monitoring, the loss of events may have significant billing
impact; it is therefore not acceptable for messages to be lost.

b) the client should be able to store and forward.
As system load or traffic increases, or if the client is temporarily
disconnected, the client side of the queue should be able to hold
messages in a local queue, to be emitted as soon as conditions permit.

c) clients must authenticate.
Only clients that hold a shared private key should be able to send
messages on the queue.

d) the queue may support client signing of individual messages.
Each message should be individually signed by the agent that emits it in
order to guarantee non-repudiation.  This can be done by the queue
client or by the agent prior to enqueuing messages.

e) the queue must be highly available.
The queue servers must be able to support multiple instances running in
parallel, in order to continue operating after the loss of one server.
This should be achievable without complex failover systems or shared
storage.

f) the queue should be horizontally scalable.
The scalability of the queue servers should be achievable by increasing
the number of servers.
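Requirements (c) and (d) can be sketched in Python. This is a minimal illustration using a shared key (HMAC-SHA256), which covers authentication; note that true non-repudiation as in (d) would need per-agent asymmetric signatures, and the envelope layout and field names here are purely assumptions for illustration:

```python
import hashlib
import hmac
import json

def sign_message(message: dict, shared_key: bytes) -> dict:
    # Serialize deterministically so signer and verifier agree on the bytes.
    payload = json.dumps(message, sort_keys=True).encode("utf-8")
    digest = hmac.new(shared_key, payload, hashlib.sha256).hexdigest()
    return {"payload": message, "signature": digest}

def verify_message(envelope: dict, shared_key: bytes) -> bool:
    payload = json.dumps(envelope["payload"], sort_keys=True).encode("utf-8")
    expected = hmac.new(shared_key, payload, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking the digest via timing.
    return hmac.compare_digest(expected, envelope["signature"])
```

Verification fails on any tampering with the payload, which is the property requirement (d) is after.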

Not sure this list is exhaustive or viable, feel free to comment on it,
but the real question is: which queue should we be using here?

Cheers,
--
Nick Barcet nick.bar...@canonical.com
aka: nijaba, nicolas






Re: [Openstack] [metering] example of discrete counter

2012-05-18 Thread Nick Barcet
On 05/17/2012 10:48 PM, Doug Hellmann wrote:
 I have added a row to the list of counters for discrete events such as
 uploading an image to glance [1]. Please let me know if you think I need
 more exposition to explain discrete counters.
 
 Doug
 
 [1] http://wiki.openstack.org/EfficientMetering?action=diffrev2=89rev1=87

Thanks Doug, it looks good to me.

Cheers,
Nick






Re: [Openstack] [metering] Do we need an API and storage?

2012-05-18 Thread Thierry Carrez
Doug Hellmann wrote:
 On Thu, May 17, 2012 at 5:47 AM, Nick Barcet nick.bar...@canonical.com
 mailto:nick.bar...@canonical.com wrote:
 
 On 05/17/2012 11:13 AM, Loic Dachary wrote:
  On 05/16/2012 11:00 PM, Francis J. Lacoste wrote:
 
  I'm now of the opinion that we exclude storage and API from the
  metering project scope. Let's just focus on defining a metering
  message format, bus, and maybe a client-library to make it easy to
  write metering consumers.
 
 
 The plan, as I understand it, is to ensure that all metering messages
 appear on a common bus using a documented format. Deployers who do not
 want the storage system and REST API will not need to use it, and can
 set up their own clients to listen on that bus. I'm not sure how much of
 a client library is needed, since the bus is AMQP and the messages are
 JSON, both of which have standard libraries in most common languages.
 [...]

You can certainly architect it in a way so that storage and API are
optional: expose metering messages on the bus, and provide an
optionally-run aggregation component that exposes a REST API (and that
would use the metering-consumer client library). That would give
deployers the option to poll via REST or implement their own alternate
aggregation using the metering-consumer client lib, depending on the
system they need to integrate with.
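The "metering-consumer client library" idea can be sketched as a tiny dispatcher that deployers would feed raw JSON messages taken off the bus (the AMQP wiring is omitted, and the counter_name/counter_volume field names are assumptions, not the agreed schema):

```python
import json
from typing import Callable, Dict

class MeteringConsumer:
    """Register a handler per counter name, then feed raw bus messages in."""

    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[dict], None]] = {}

    def on_counter(self, name: str, handler: Callable[[dict], None]) -> None:
        self._handlers[name] = handler

    def dispatch(self, raw: bytes) -> bool:
        # Messages on the bus are JSON, per the discussion above.
        msg = json.loads(raw)
        handler = self._handlers.get(msg.get("counter_name", ""))
        if handler is None:
            return False  # no subscriber registered for this counter
        handler(msg)
        return True
```

An alternate aggregator, in this sketch, would simply be another handler registered on the counters it cares about.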

Having the aggregation component clearly separate and optional will
serve as a great example of how it could be done (and what are the
responsibilities of the aggregation component). I would still do a
(minimal) client library to facilitate integration, but maybe that's
just me :)

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack



Re: [Openstack] [Nova] Blueprint and core cleanup

2012-05-18 Thread Thierry Carrez
Vishvananda Ishaya wrote:
 *Core Cleanup*
 [...]
 If a former core member has time to start participating in reviews
 again, I think he should be able to review for a couple of weeks and
 then send an email to the list saying, Hey, I've got time to review
 again, can I be added back in.  If we don't hear any -1 votes from other
 core members within three days, we will bring them back.  In other words,
 former members can be fast-tracked back into core.  Sound reasonable?

Yes.

 *Blueprint Cleanup*
 
 As I mentioned in my previous email, I've now obsoleted all blueprints
 not targeted to folsom. The blueprint system has been used for feature
 requests, and I don't think it is working because there is no one
 grabbing unassigned blueprints. I think it has to be up to the drafter
 of the blueprint to find a person/team to actually implement the
 blueprint or it will just sit there. Therefore I've removed all of the
 good idea blueprints. This was kind of sad, because there were some
 really good ideas there.

We discussed for quite some time the idea that wishlist bugs that don't
get worked on should be closed as Opinion/Wishlist... and then use
that search to get a nice list of things that sound like a good
idea but that nobody has had time to work on. Maybe we should create
wishlist bugs for stuff on obsoleted blueprints, so that we have a
single place to look for abandoned good ideas?

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack



[Openstack] Multiple NOVA-INST-DIR/instances

2012-05-18 Thread Sergio Ariel de la Campa Saiz
Hi:

Some days ago I posted a question about nova, but I phrased it badly.
Thanks to Igor, who pointed this out, here is the question rewritten:

Is it possible to configure more than one --instances_path in nova.conf in
order to get more than one NOVA-INST-DIR/instances?
I have two NFS servers and more than one compute node. I need each node
to be able to launch its instances on either NFS server, so that some
instances are launched on NFS-server-1 and others on NFS-server-2, while
it remains possible to migrate instances between nodes.
Is it possible?

Thanks a lot.

Sergio Ariel
de la Campa Saiz
GMV-SES Infraestructura /
GMV-SES Infrastructure





GMV
Isaac Newton, 11
P.T.M. Tres Cantos
E-28760 Madrid
Tel.
+34 91 807 21 00
Fax
+34 91 807 21 99
 www.gmv.com









From: Igor Laskovy [igor.lask...@gmail.com]
Sent: Wednesday, 16 May 2012 20:05
To: Sergio Ariel de la Campa Saiz
CC: Emilien Macchi; openstack-operat...@lists.openstack.org;
openstack@lists.launchpad.net
Subject: Re: [Openstack-operators] nova and multiple storages

ccing openstack@lists.launchpad.net

On Wed, May 16, 2012 at 10:19 AM, Sergio Ariel de la Campa Saiz
saca...@gmv.com wrote:
Sorry to everyone if I didn't say exactly what I need, but Igor is right... I
need to configure more than one NOVA-INST-DIR/instances/ via NFS.

Thanks to Igor :-) :-)
Thanks to Emilien too :-) :-)

From: openstack-operators-boun...@lists.openstack.org
[openstack-operators-boun...@lists.openstack.org] On behalf of Igor
Laskovy [igor.lask...@gmail.com]
Sent: Monday, 14 May 2012 16:30
To: Emilien Macchi
CC: Sergio Ariel de la Campa Saiz;
openstack-operat...@lists.openstack.org
Subject: Re: [Openstack-operators] nova and multiple storages


I guess Sergio was asking about more than one NOVA-INST-DIR/instances/

On May 14, 2012 3:11 PM, Emilien Macchi
emilien.mac...@stackops.com wrote:
Hi Sergio,


When you talk about more than one storage, do you mean Multi-Pathing?

You can use multiple storage nodes for VMs, and you can use Nexenta software,
which has a specific driver for OpenStack modules. In this case, you use iSCSI
as a SAN and get highly available storage with a lot of nice features.

You can read more about how to use it here:
http://docs.openstack.org/trunk/openstack-compute/admin/content/nexenta-driver.html


Good luck !



On Mon, May 14, 2012 at 1:46 PM, Sergio Ariel de la Campa Saiz
saca...@gmv.com wrote:
Hi:

I was wondering if it is possible to configure each nova node to use more
than one storage; I mean, can each node store its instances on more than
one NFS server?

How do I configure this?

Thanks a lot...

Re: [Openstack] [Openstack-operators] Multiple NOVA-INST-DIR/instances

2012-05-18 Thread Igor Laskovy
Hi Sergio,

Maybe you can mount a dedicated NFS share for each VM inside the
NOVA-INST-DIR/instances directory? Then you can control where each VM
will reside.

Igor Laskovy
Kiev, Ukraine

[Openstack] BadRequest: Can not find requested image (HTTP 400)

2012-05-18 Thread Milind
Hi,

I am getting an error while executing the following command.

root@ucmaster:/home/milindx/mil# nova boot --image tty-linux --flavor
m1.small --key_name test my-first-server
ERROR: Can not find requested image (HTTP 400)

The following information may help:

root@ucmaster:/home/milindx/mil# cat /etc/nova/nova.conf | grep glance
--image_service=nova.image.glance.GlanceImageService
--glance_api_servers=10.253.59.152:9292

root@ucmaster:/home/milindx/mil# nova image-list
+--------------------------------------+------------------+--------+--------+
|                  ID                  |       Name       | Status | Server |
+--------------------------------------+------------------+--------+--------+
| 057f5695-7af5-4d42-ab17-2f0f36f99ee2 | tty-linux        | ACTIVE |        |
| 1cfd638b-1187-4f35-a0dd-76352742b762 | tty-kernel       | ACTIVE |        |
| 40c6e4ec-9b49-4f5c-989d-57cd2691fa12 | tty-linuxkernel  | ACTIVE |        |
| a726d350-c187-401b-823c-7bb1527aaa1d | tty-linuxramdisk | ACTIVE |        |
| b1c79ba2-aef0-4dfc-8e0f-f1f223bea1f1 | tty              | ACTIVE |        |
| b3240f79-5b6f-450c-beff-5c8ecdabcf00 | tty-ramdisk      | ACTIVE |        |
+--------------------------------------+------------------+--------+--------+
root@ucmaster:/home/milindx/mil#

root@ucmaster:/home/milindx/mil# nova-manage service list
2012-05-18 17:15:30 DEBUG nova.utils
[req-0997c625-ef9c-45ab-96a9-77a211d87459 None None] backend module
'nova.db.sqlalchemy.api' from
'/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.pyc' from
(pid=15253) __get_backend /usr/lib/python2.7/dist-packages/nova/utils.py:658
Binary            Host      Zone  Status   State  Updated_At
nova-scheduler    ucmaster  nova  enabled  :-)    2012-05-18 11:45:25
nova-volume       ucmaster  nova  enabled  :-)    2012-05-18 11:45:25
nova-compute      ucmaster  nova  enabled  :-)    2012-05-18 11:45:21
nova-cert         ucmaster  nova  enabled  :-)    2012-05-18 11:45:25
nova-consoleauth  ucmaster  nova  enabled  :-)    2012-05-18 11:45:25
nova-network      ucmaster  nova  enabled  :-)    2012-05-18 11:45:25

I have deleted and added images using glance but the problem persists;
does anyone know what the problem could be?

Regards,
Milind


Re: [Openstack] BadRequest: Can not find requested image (HTTP 400)

2012-05-18 Thread Razique Mahroua
Hi, it's weird, I remember someone having the same issue.
Edit: see here: https://lists.launchpad.net/openstack/msg11742.html
It may be related to that bug?

--
Nuage & Co - Razique Mahroua
razique.mahr...@gmail.com


[Openstack] How to access an instance from Dashboard using VNC (password)?

2012-05-18 Thread Jorge Luiz Correa
Hi all,

how can we access an instance using the dashboard and the VNC console it
provides? For example, if a regular user creates an instance and clicks on
the VNC tab, he will see a VNC console and a login screen. How can he know
the password? If the instance is created on the command line with nova boot,
there is output that shows the root password. But from inside the
dashboard, I couldn't find where to get similar information.

Thanks!

-- 
- MSc. Correa, J.L.


Re: [Openstack] BadRequest: Can not find requested image (HTTP 400)

2012-05-18 Thread Milind
Is there any way to find out whether an image exists using the glance
command?
I checked everything; all parameters are properly configured in the conf
files.



Re: [Openstack] [Dashboard] Can't access images/snapshots

2012-05-18 Thread Leander Bessa Beernaert
Ok, I've removed swift from the endpoints and services. Nova volume is
running with a 2 GB file on disk as the volume, and the log files seem ok.
However, I still keep getting this error for volume-list
(http://paste.openstack.org/show/17991/) and this error for snapshot-list
(http://paste.openstack.org/show/17992/).

On Thu, May 17, 2012 at 7:39 PM, Gabriel Hurley
gabriel.hur...@nebula.comwrote:

Two points:

Nova Volume is a required service for Essex Horizon. That's documented,
and there are plans to make it optional for Folsom. However, not having it
should yield a pretty error message in the dashboard, not a KeyError in
novaclient, which leads me to my second point...

It sounds like your Keystone service catalog is misconfigured. If you're
seeing Swift (AKA Object Store) in the dashboard, that means it's in your
keystone service catalog. Swift is a completely optional component and is
triggered on/off by the presence of an "object-store" endpoint returned by
Keystone.

I'd check and make sure the services listed in Keystone's catalog are
correct for what's actually running in your environment.

All the best,

- Gabriel
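Gabriel's point about Swift being toggled by the catalog can be expressed as a small check against a Keystone-style service catalog (the entry layout below is an assumption sketched from the token response, not an exact Essex schema):

```python
def has_object_store(service_catalog: list) -> bool:
    # Horizon-style check: Swift panels appear only when an entry
    # typed "object-store" is present in the service catalog.
    return any(entry.get("type") == "object-store" for entry in service_catalog)
```

If this returns True but Swift is not actually deployed, the fix is to remove the stale service/endpoint from Keystone.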

*From:* openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net
[mailto:openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net] *On
Behalf Of *Leander Bessa Beernaert
*Sent:* Thursday, May 17, 2012 8:45 AM
*To:* Sébastien Han
*Cc:* openstack@lists.launchpad.net
*Subject:* Re: [Openstack] [Dashboard] Can't access images/snapshots

Now I made sure nova-volume is installed and running. I still keep running
into the same problem. It also happens from the command line tool. This is
the output produced: http://paste.openstack.org/show/17929/

On Thu, May 17, 2012 at 11:17 AM, Leander Bessa Beernaert
leande...@gmail.com wrote:

I have no trouble from the command line. One thing I find peculiar is that
I haven't installed swift and nova-volume yet, and they show up as enabled
services in the dashboard. Is that normal?

On Wed, May 16, 2012 at 11:39 PM, Sébastien Han han.sebast...@gmail.com
wrote:

Hi,

Do you also have an error when retrieving from the command line?

~Cheers!
 On Wed, May 16, 2012 at 5:38 PM, Leander Bessa Beernaert 
 leande...@gmail.com wrote:

Hello,

I keep running into this error when I try to list the images/snapshots in
the dashboard: http://paste.openstack.org/show/17820/

This is my local_settings.py file: http://paste.openstack.org/show/17822/ ;
am I missing something?

Regards,

Leander



Re: [Openstack] BadRequest: Can not find requested image (HTTP 400)

2012-05-18 Thread Vaze, Mandar
Milind,

Nova boot command takes Image and Flavor IDs, not names
Can you try nova boot -image  057f5695-7af5-4d42-ab17-2f0f36f99ee2  instead ?
Similarly, you may need to use -flavor 2 instead of --flavor m1.small

If you are interested in looking at the code, see def do_boot in 
novaclient/v1_1/shell.py in the python-novaclient repo.
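The name-vs-ID resolution that `nova boot` relies on can be sketched in a few lines: the compute API wants an image ID, so a plain name must first be matched against the image list (roughly what do_boot does before calling the API). The helper below is illustrative, not novaclient code; the sample entries come from the `nova image-list` output later in this thread.

```python
# Hypothetical sketch of resolving an image name or ID to an image ID.
# `find_image_id` is illustrative, not novaclient code; the entries are
# taken from the `nova image-list` output in this thread.

IMAGES = [
    {"id": "057f5695-7af5-4d42-ab17-2f0f36f99ee2", "name": "tty-linux"},
    {"id": "1cfd638b-1187-4f35-a0dd-76352742b762", "name": "tty-kernel"},
]

def find_image_id(name_or_id):
    """Return the image ID for an exact ID or name match."""
    for image in IMAGES:
        if name_or_id in (image["id"], image["name"]):
            return image["id"]
    # Mirrors the server's behavior: an unknown reference is an error.
    raise LookupError("Can not find requested image: %s" % name_or_id)

print(find_image_id("tty-linux"))
```

If the name resolves fine locally but the server still returns HTTP 400, passing the ID directly, as suggested above, bypasses the resolution step entirely.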

nova help boot
usage: nova boot [--flavor flavor] [--image image] [--meta key=value]
 [--file dst-path=src-path] [--key_name key_name]
 [--user_data user-data]
 [--availability_zone availability-zone]
 [--security_groups security_groups]
 [--block_device_mapping dev_name=mapping]
 [--hint key=value]
 [--nic net-id=net-uuid,v4-fixed-ip=ip-addr]
 [--config-drive value] [--poll]
 name

Boot a new server.

Positional arguments:
  nameName for the new server

Optional arguments:
  --flavor flavor Flavor ID (see 'nova flavor-list').
  --image image   Image ID (see 'nova image-list').

Hope this helps.

-Mandar

From: openstack-bounces+mandar.vaze=nttdata@lists.launchpad.net
On Behalf Of Milind
Sent: Friday, May 18, 2012 5:16 PM
To: openstack@lists.launchpad.net
Subject: [Openstack] BadRequest: Can not find requested image (HTTP 400)

Hi,

I am getting an error while executing the following command.

root@ucmaster:/home/milindx/mil# nova boot --image tty-linux --flavor m1.small 
--key_name test my-first-server
ERROR: Can not find requested image (HTTP 400)

The following information may help:

root@ucmaster:/home/milindx/mil# cat /etc/nova/nova.conf | grep glance
--image_service=nova.image.glance.GlanceImageService
--glance_api_servers=10.253.59.152:9292

root@ucmaster:/home/milindx/mil# nova image-list
+--------------------------------------+------------------+--------+--------+
|                  ID                  |       Name       | Status | Server |
+--------------------------------------+------------------+--------+--------+
| 057f5695-7af5-4d42-ab17-2f0f36f99ee2 | tty-linux        | ACTIVE |        |
| 1cfd638b-1187-4f35-a0dd-76352742b762 | tty-kernel       | ACTIVE |        |
| 40c6e4ec-9b49-4f5c-989d-57cd2691fa12 | tty-linuxkernel  | ACTIVE |        |
| a726d350-c187-401b-823c-7bb1527aaa1d | tty-linuxramdisk | ACTIVE |        |
| b1c79ba2-aef0-4dfc-8e0f-f1f223bea1f1 | tty              | ACTIVE |        |
| b3240f79-5b6f-450c-beff-5c8ecdabcf00 | tty-ramdisk      | ACTIVE |        |
+--------------------------------------+------------------+--------+--------+
root@ucmaster:/home/milindx/mil#

root@ucmaster:/home/milindx/mil# nova-manage service list
2012-05-18 17:15:30 DEBUG nova.utils [req-0997c625-ef9c-45ab-96a9-77a211d87459 
None None] backend module 'nova.db.sqlalchemy.api' from 
'/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.pyc' from (pid=15253) 
__get_backend /usr/lib/python2.7/dist-packages/nova/utils.py:658
Binary            Host      Zone  Status   State  Updated_At
nova-scheduler    ucmaster  nova  enabled  :-)    2012-05-18 11:45:25
nova-volume       ucmaster  nova  enabled  :-)    2012-05-18 11:45:25
nova-compute      ucmaster  nova  enabled  :-)    2012-05-18 11:45:21
nova-cert         ucmaster  nova  enabled  :-)    2012-05-18 11:45:25
nova-consoleauth  ucmaster  nova  enabled  :-)    2012-05-18 11:45:25
nova-network      ucmaster  nova  enabled  :-)    2012-05-18 11:45:25

I have deleted and re-added the images using glance, but the problem persists.
Does anyone know what the problem could be?

Regards,
Milind


__
Disclaimer:This email and any attachments are sent in strictest confidence for 
the sole use of the addressee and may contain legally privileged, confidential, 
and proprietary data.  If you are not the intended recipient, please advise the 
sender by replying promptly to this email and then delete and destroy this 
email and any attachments without any further use, copying or forwarding.


Re: [Openstack] How to access an instance from Dashboard using VNC (password)?

2012-05-18 Thread Vaze, Mandar
> But, from inside the dashboard, I couldn't find where to get similar
> information.

From the Dashboard, click the Edit Instance button on the far right, then click
on View Log. Scroll to the end and you'll see the password.
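For scripted access, the same information can be pulled from the console log and grepped. A minimal sketch follows; the log-line format varies by image, so the "root password:" pattern and sample log are assumptions, not a fixed OpenStack format.

```python
import re

# Sample console-log tail; the "root password:" line is an assumption --
# different images print the generated password differently.
SAMPLE_LOG = """\
cloud-init boot finished
root password: s3cr3tPass
my-instance login:
"""

def extract_password(console_log):
    """Return the generated password from a console log, or None."""
    match = re.search(r"root password:\s*(\S+)", console_log)
    return match.group(1) if match else None

print(extract_password(SAMPLE_LOG))
```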

-Mandar




Re: [Openstack] BadRequest: Can not find requested image (HTTP 400)

2012-05-18 Thread Vaze, Mandar
> Is there any way to find out if an image exists with the help of a glance
> command?
glance image-list

See my other email for your original problem

-Mandar



Re: [Openstack] About openstack common client

2012-05-18 Thread Andrew Bogott

On 5/18/12 12:16 AM, Yong Sheng Gong wrote:


Hi,
I just want to ask about the relationship among openstackclient 
https://launchpad.net/python-openstackclientand other clients.
Will openstackclient replace other clients ( such as quantum client, 
keystone client, nova client, xx) or just a supplement?
My understanding (and hope) is that ultimately there will be a 
separation between shell interfaces and REST interfaces.  
Openstackclient will implement the commandline, and the other clients 
(python-novaclient, python-glanceclient, etc.) will provide python APIs 
for REST clients, of which openstackclient is one.


Right now, openstackclient calls code from the other clients, so it 
seems it is just another client wrapper. In this case, we will have to 
implement two sets of front-end code to call a specific client: one will be 
in openstackclient, and one will be in the separate client itself.
I expect non-common shell clients to be deprecated and eventually ripped 
out.  We're probably a bit too early in the game to explicitly 
discourage development on those shell commands though.


-Andrew


Re: [Openstack] nova network external gateway

2012-05-18 Thread George Mihaiescu
You can try the solution proposed by Vish on February 23, 2012.

Put the flag and the config file on the nodes with nova-network, as they have 
dnsmasq running. Then you can use

--dnsmasq_config_file=/path/to/config

In that config file you can use:

dhcp-option=3,<ip of router>

to force VMs to use your router as their gateway.
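Putting the pieces together, a minimal sketch; the file path and the router address 192.168.1.1 are placeholders, and note that dnsmasq itself spells the option dhcp-option:

```
# fragment for /etc/nova/nova.conf on the nova-network nodes (assumed path)
--dnsmasq_config_file=/etc/nova/dnsmasq-nova.conf

# /etc/nova/dnsmasq-nova.conf -- 192.168.1.1 is a placeholder router IP
dhcp-option=3,192.168.1.1
```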

 

 

 



From: openstack-bounces+george.mihaiescu=q9@lists.launchpad.net
On Behalf Of Sergio Ariel de la Campa Saiz
Sent: Friday, May 18, 2012 8:49 AM
To: openstack@lists.launchpad.net; openstack-operat...@lists.openstack.org
Subject: [Openstack] nova network external gateway

 

Hi:

 

I have installed Essex and I'm using VLAN networking. All virtual machines use 
the same VLAN, but I want all of them to use an external gateway, I mean, a real 
router on my network instead of a nova node.
I have read about it and I have found a parameter named dhcp-option=3,<gateway 
ip>, but I don't know where to put it. I know that dnsmasq loads it, but from 
where? I did not find any dnsmasq.conf on my system and, on the other hand, I 
put it in my nova.conf file and nothing happened.

 

Thanks in advance

 

Good luck...

Sergio Ariel de la Campa Saiz
GMV-SES Infraestructura / GMV-SES Infrastructure

GMV
Isaac Newton, 11
P.T.M. Tres Cantos
E-28760 Madrid
Tel. +34 91 807 21 00
Fax +34 91 807 21 99
www.gmv.com



This message including any attachments may contain confidential information, 
according to our Information Security Management System, and intended solely 
for a specific individual to whom they are addressed. Any unauthorised copy, 
disclosure or distribution of this message is strictly forbidden. If you have 
received this transmission in error, please notify the sender immediately and 
delete it.



Re: [Openstack] About openstack common client

2012-05-18 Thread Doug Hellmann
On Fri, May 18, 2012 at 10:09 AM, Andrew Bogott abog...@wikimedia.orgwrote:

  On 5/18/12 12:16 AM, Yong Sheng Gong wrote:


 Hi,
 I just want to ask about the relationship among openstackclient
 https://launchpad.net/python-openstackclient and other clients.
 Will openstackclient replace other clients ( such as quantum client,
 keystone client, nova client, xx) or just a supplement?

 My understanding (and hope) is that ultimately there will be a separation
 between shell interfaces and REST interfaces.  Openstackclient will
 implement the commandline, and the other clients (python-novaclient,
 python-glanceclient, etc.) will provide python APIs for REST clients, of
 which openstackclient is one.


That is also my understanding of The Plan.


  Right now, openstackclient calls code from the other clients, so it
  seems it is just another client wrapper. In this case, we will have to
  implement two sets of front-end code to call a specific client: one in
  openstackclient, and one in the separate client itself.

 I expect non-common shell clients to be deprecated and eventually ripped
 out.  We're probably a bit too early in the game to explicitly discourage
 development on those shell commands though.


I'm waffling on agreeing with you here. It is true that (AFAIK) we aren't
set up for packaging builds yet for semi-official installations (i.e., not
using devstack), but I would like to have people who are more familiar with
the other command line programs contributing to the common client, too.

Doug


Re: [Openstack] nova state machine simplification and clarification

2012-05-18 Thread Mark Washenberger
Hi Yun,

This proposal looks very good to me. I am glad you included in it the 
requirement that hard deletes can take place in any vm/task/power state. 

I however feel that a similar requirement exists for revert resize. It should 
be possible to issue a RevertResize command for any task_state (assuming that a 
resize is happening or has recently happened and is not yet confirmed). The 
code to support this capability doesn't exist yet, but I want to ask you: is it 
compatible with your proposal to allow RevertResize in any task state?

Yun Mao yun...@gmail.com said:

 Hi,
 
 There are vm_states, task_states, and power_states for each VM. Their
 use is complicated. Some states are confusing, and sometimes
 ambiguous. There is also no guideline for extending or adding new states. This
 proposal aims to simplify things, explain and define precisely what
 they mean, and why we need them. A new user-friendly behavior for
 deleting a VM is also discussed.
 
 A TL;DR summary:
 * power_state is the hypervisor state, loaded “bottom-up” from compute
 worker;
 * vm_state reflects the stable state based on API calls, matching user
 expectation, revised “top-down” within API implementation.
 * task_state reflects the transition state introduced by in-progress API 
 calls.
 * “hard” delete of a VM should always succeed no matter what.
 * power_state and vm_state may conflict with each other, which needs
 to be resolved case-by-case.
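A toy sketch of how this split might behave, based only on the summary above (the state names and helper are illustrative, not Nova code):

```python
# Illustrative sketch of the proposal's task_state rules: a "hard" delete
# always succeeds, while other API calls must wait for the in-progress
# task to finish.  State names follow the TL;DR summary; this is a
# hypothetical helper, not Nova's actual implementation.

def begin_task(vm_state, task_state, requested):
    """Decide whether an API call may install a new task_state."""
    if requested == "deleting":
        return "deleting"            # hard delete wins in any state
    if task_state is not None:
        raise RuntimeError("task %r already in progress" % task_state)
    if vm_state == "deleted":
        raise RuntimeError("instance is gone")
    return requested

print(begin_task("active", "resizing", "deleting"))
```

Mark's RevertResize question above maps onto the same shape: allowing it in any task state would amount to a second early-return branch like the one for deleting.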
 
 It's not a definite guide yet and is up for discussion. I'd like to
 thank vishy and comstud for the early input. comstud: the task_state
 is different from when you looked at it. It's a lot closer to what's
 in the current code.
 
 The full text is here and is editable by anyone like etherpad.
 
 https://docs.google.com/document/d/1nlKmYld3xxpTv6Xx0Iky6L46smbEqg7-SWPu_o6VJws/edit?pli=1
 
 Thanks,
 
 Yun
 
 





Re: [Openstack] [metering] Choice of a messaging queue

2012-05-18 Thread Doug Hellmann
On Fri, May 18, 2012 at 4:42 AM, Nick Barcet nick.bar...@canonical.comwrote:

 Hello everyone,

 Next week's IRC meeting will aim to choose a reference
 messaging queue service for the ceilometer project. For this meeting to
 be successful, a discussion of the choices we have to
 make needs to occur first, right here.

 To open the discussion here are a few requirements that I would consider
 important for the queue to support:

 a) the queue must guarantee the delivery of messages.
 Unlike monitoring, loss of events may have important billing
 impacts, so it is not acceptable for messages to be lost.

 b) the client should be able to store and forward.
 As the load on the system or traffic increases, or if the client is
 temporarily disconnected, the client element of the queue should be able to
 hold messages in a local queue to be emitted as soon as conditions permit.

 c) clients must authenticate
 Only clients which hold a shared private key should be able to send
 messages on the queue.


Does the username/password authentication of rabbitmq meet this requirement?



 d) the queue may support client signing of individual messages
 Each message should be individually signed by the agent that emits it, in
 order to guarantee non-repudiability. This function can be done by the
 queue client or by the agent prior to enqueuing of messages.


We can embed the message signature in the message, so this requirement
shouldn't have any bearing on the bus itself. Unless I'm missing something?
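A minimal sketch of that embedding: an HMAC over the canonicalized JSON body, carried in a field of the message itself so the bus stays unaware of it. The field names and the shared key are assumptions for illustration.

```python
import hashlib
import hmac
import json

SHARED_KEY = b"metering-secret"  # hypothetical shared key

def sign(message):
    """Embed an HMAC signature inside the message dict."""
    payload = json.dumps(message, sort_keys=True).encode()
    message["signature"] = hmac.new(SHARED_KEY, payload,
                                    hashlib.sha256).hexdigest()
    return message

def verify(message):
    """Check (and remove) the embedded signature."""
    claimed = message.pop("signature")
    payload = json.dumps(message, sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)

msg = sign({"counter": "instance", "volume": 1})
print(verify(msg))
```

Since signing and verification only touch the message body, any AMQP (or other) transport can carry it unchanged.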



 e) the queue must be highly available
 The queue servers must be able to support multiple instances running in
 parallel, in order to support continuation of operations after the loss of
 one server. This should be achievable without the need for complex
 failover systems and shared storage.

 f) the queue should be horizontally scalable
 The scalability of the queue servers should be achievable by increasing the
 number of servers.
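The store-and-forward requirement (b) above can be sketched with a small client-side buffer; this is illustrative only, and `send` stands in for whatever transport the real client uses.

```python
import collections

class StoreAndForward:
    """Buffer messages locally while the bus is unreachable."""

    def __init__(self, send):
        self._send = send
        self._pending = collections.deque()

    def publish(self, message):
        self._pending.append(message)
        self.flush()

    def flush(self):
        while self._pending:
            try:
                self._send(self._pending[0])
            except ConnectionError:
                return  # bus down: keep messages locally, retry later
            self._pending.popleft()

# Simulated outage: sends fail at first, then succeed.
delivered, down = [], [True]
def send(msg):
    if down[0]:
        raise ConnectionError
    delivered.append(msg)

q = StoreAndForward(send)
q.publish("event-1")        # buffered while the bus is down
down[0] = False
q.publish("event-2")        # flushes both, in order
print(delivered)
```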

 Not sure this list is exhaustive or viable, feel free to comment on it,
 but the real question is: which queue should we be using here?


While I see the benefit of discussing requirements for the message bus
platform in general, I'm not sure we need to dictate a specific
implementation. If we say we are going to use the nova RPC library to
communicate with the bus for sending and receiving messages, then we can
use all of the tools for which there are drivers in nova -- rabbit, qpid,
zeromq (assuming someone releases a driver, which I think is being worked
on somewhere), etc. This leaves the decision of which bus to use up to the
deployer, where the decision belongs. It also means we won't end up
choosing a tool for which the other projects have no driver, leading us to
have to create one and add a new dependency to the project.

Doug


Re: [Openstack] [metering] Do we need an API and storage?

2012-05-18 Thread Doug Hellmann
On Fri, May 18, 2012 at 5:27 AM, Thierry Carrez thie...@openstack.orgwrote:

 Doug Hellmann wrote:
  On Thu, May 17, 2012 at 5:47 AM, Nick Barcet nick.bar...@canonical.com
  mailto:nick.bar...@canonical.com wrote:
 
  On 05/17/2012 11:13 AM, Loic Dachary wrote:
   On 05/16/2012 11:00 PM, Francis J. Lacoste wrote:
  
   I'm now of the opinion that we exclude storage and API from the
   metering project scope. Let's just focus on defining a metering
   message format, bus, and maybe a client-library to make it easy to
   write metering consumers.
 
 
  The plan, as I understand it, is to ensure that all metering messages
  appear on a common bus using a documented format. Deployers who do not
  want the storage system and REST API will not need to use it, and can
  set up their own clients to listen on that bus. I'm not sure how much of
  a client library is needed, since the bus is AMQP and the messages are
  JSON, both of which have standard libraries in most common languages.
  [...]

 You can certainly architect it in a way so that storage and API are
 optional: expose metering messages on the bus, and provide an
 optionally-run aggregation component that exposes a REST API (and that
 would use the metering-consumer client library). That would give
 deployers the option to poll via REST or implement their own alternate
 aggregation using the metering-consumer client lib, depending on the
 system they need to integrate with.

 Having the aggregation component clearly separate and optional will
 serve as a great example of how it could be done (and what are the
 responsibilities of the aggregation component). I would still do a
 (minimal) client library to facilitate integration, but maybe that's
 just me :)


I can see some benefit to that, especially when it comes to validating the
message signatures.

Doug


[Openstack] error in documentation regarding novnc configuration on newly added compute node

2012-05-18 Thread Staicu Gabriel
Hi all,

I observed an error in the documentation regarding the configuration that needs 
to be done on additional compute nodes in order to allow VNC access to instances.
From what I understood from the doc, you just have to copy nova.conf to the 
additional nodes.
However, there are some changes to be made in the nova.conf of the node you want 
to add to the cloud.
The parameters to be changed 
are: --vncserver_proxyclient_address and --vncserver_listen

If on the cloud master they have the values:

--vncserver_proxyclient_address=$ip_cloud_master
--vncserver_listen=$ip_cloud_master

On the newly added compute node they have to be:
--vncserver_proxyclient_address=$ip_compute_node
--vncserver_listen=$ip_compute_node
From my testing and small understanding :) these values work as follows:
--vncserver_listen is the address that you will find in the libvirt.xml 
corresponding to the instance started on this server. If you put a value 
different from the IP addresses on this server, the instances won't come up.
--vncserver_proxyclient_address is the address that nova-consoleauth will 
associate with the requested token.

I don't know if I explained clearly enough, so I will give an example.
I have 2 servers in my cloud: node01 (master_cloud) and node02 (compute_node). If 
on node02 the value of vncserver_proxyclient_address is $ip_cloud_master, then 
after I start an instance on node02, VNC will direct me to $ip_cloud_master:5900, 
which is an old instance created on node01. If on node02 the value of 
vncserver_proxyclient_address is $ip_compute_node, then after I start an instance 
on node02, VNC will direct me to $ip_compute_node:5900, which is what I am 
looking for. BINGO!!

Regards,
Gabriel


Re: [Openstack] BadRequest: Can not find requested image (HTTP 400)

2012-05-18 Thread Razique Mahroua
You can run a $ glance index, but there are high chances you
would see them...
-- 
Nuage & Co - Razique Mahroua
razique.mahr...@gmail.com


Re: [Openstack] [metering] Choice of a messaging queue

2012-05-18 Thread Adam Young
I think you want to write to the AMQP spec as much as possible and allow 
multiple implementations. OpenStack supports RabbitMQ and Qpid, and you 
should be able to use either.




On 05/18/2012 04:42 AM, Nick Barcet wrote:

Hello everyone,

Next week's IRC meeting will aim to choose a reference
messaging queue service for the ceilometer project. For this meeting to
be successful, a discussion of the choices we have to
make needs to occur first, right here.

To open the discussion here are a few requirements that I would consider
important for the queue to support:

a) the queue must guarantee the delivery of messages.
Unlike monitoring, loss of events may have important billing
impacts, so it is not acceptable for messages to be lost.

b) the client should be able to store and forward.
As the load on the system or traffic increases, or if the client is
temporarily disconnected, the client element of the queue should be able to
hold messages in a local queue to be emitted as soon as conditions permit.

c) clients must authenticate
Only clients which hold a shared private key should be able to send
messages on the queue.

d) the queue may support client signing of individual messages
Each message should be individually signed by the agent that emits it, in
order to guarantee non-repudiability. This function can be done by the
queue client or by the agent prior to enqueuing of messages.

e) the queue must be highly available
The queue servers must be able to support multiple instances running in
parallel, in order to support continuation of operations after the loss of
one server. This should be achievable without the need for complex
failover systems and shared storage.

f) the queue should be horizontally scalable
The scalability of the queue servers should be achievable by increasing the
number of servers.

Not sure this list is exhaustive or viable, feel free to comment on it,
but the real question is: which queue should we be using here?

Cheers,
--
Nick Barcet nick.bar...@canonical.com
aka: nijaba, nicolas








Re: [Openstack] Understanding Integration Bridge and MACs

2012-05-18 Thread Salman Malik

Thanks Yamahata. I have tried the all-in-one configuration and it worked 
without any problems.
I will soon try the multi-node setup and will let you know of any 
problems/questions.

I appreciate all the hard work that openstack community is putting in the 
project.

Thanks,
Salman

 Date: Thu, 17 May 2012 21:04:23 +0900
 From: yamah...@valinux.co.jp
 To: salma...@live.com
 CC: openstack@lists.launchpad.net
 Subject: Re: [Openstack] Understanding Integration Bridge and MACs
 
 Hi. Sorry for the delayed reply.
 
 A pre-created VM image is now available for easy evaluation.
 It was announced on ryu-de...@lists.sourceforge.net;
 I cite it here for reference. I hope you get a successful result.
 
  From: FUJITA Tomonori fujita.tomonori@...
  Subject: pre-configured VM image file for OpenStack environment with Ryu
 
  Hi,
  
  We created a VM image file that enables you to easily set up
  multi-node Nova environment with Ryu in your desktop machine:
  
  https://github.com/osrg/ryu/wiki/RYU-OpenStack-environment-VM-image-file-HOWTO
  
  Enjoy!
 
 On Sun, May 13, 2012 at 07:42:14PM -0500, Salman Malik wrote:
  Hi Dan and Others,
  
  I am trying to understand the actions taken by Ryu when the new instance
  sends a DHCP discover message to dnsmasq. When I launch a new instance, it
  keeps sending discover messages and the controller keeps dropping them. But
  looking at the traffic, I couldn't exactly map which MAC address belonged to
  which entity. Can someone help me with my understanding of the MAC addresses?
  Using ifconfig, ovs-ofctl show br-int, and ovs-ofctl snoop br-int (output
  shown after the MAC addresses), I know exactly what some MAC addresses are
  and can't figure out the others:
  
  Interface       | HW Address        | IP address / notes
  ----------------+-------------------+--------------------------------------------
  eth0            | 08:00:27:7a:ff:65 | 10.0.3.15
  eth1            | 08:00:27:16:d5:09 | 10.0.0.10, plugged into br-int
  gw-82bd3a73-dc  | fa:16:3e:49:57:1b | 10.0.0.1, plugged into br-int (this is
                  |                   | the --listen-address of my two dnsmasqs)
  br-int          | 08:00:27:16:d5:09 | why doesn't the bridge have an IP?
  new-instance    | 02:d8:47:48:35:26 | MAC of the newly launched instance?
                  |                   | (see output below)
  Unknown         | fa:16:3e:5e:02:17 | seemingly unknown MAC (related to the
                  |                   | new instance?)
  Unknown         | 33:33:00:00:00:16 | MAC address related to multicast?
  
  Questions:
  
  1. What is the gw-82bd3a73-dc interface?
  2. I am kind of unsure why br-int is so useful?
  3. Why doesn't br-int have any IP address?
  4. Why do we need to plug a compute node's interface into br-int? (so that
  guest instances on remote hosts can communicate with each other?)
  5. What is the relationship b/w 02:d8:47:48:35:26 and fa:16:3e:5e:02:17 MAC
  addresses in the following output?
  
  =
  Output of : ovs-ofctl snoop br-int
  =
  OFPT_ECHO_REQUEST (xid=0x0): 0 bytes of payload
  OFPT_ECHO_REPLY (xid=0x0): 0 bytes of payload
  OFPT_PORT_STATUS (xid=0x0): ADD: 7(tap76127847-b1): addr:02:d8:47:48:35:26
   config: 0
   state:  LINK_DOWN
   current:10MB-FD COPPER
  OFPT_FLOW_MOD (xid=0x491662da): DEL priority=0 buf:0x0 actions=drop
  OFPT_BARRIER_REQUEST (xid=0x491662db):
  OFPT_BARRIER_REPLY (xid=0x491662db):
  OFPT_PORT_STATUS (xid=0x0): MOD: 7(tap76127847-b1): addr:02:d8:47:48:35:26
   config: 0
   state:  0
   current:10MB-FD COPPER
  OFPT_ECHO_REQUEST (xid=0x0): 0 bytes of payload
  OFPT_ECHO_REPLY (xid=0x0): 0 bytes of payload
  OFPT_PACKET_IN (xid=0x0): total_len=90 in_port=7 data_len=90 
  buffer=0x0167
  tunnel0:in_port0007:tci(0) macfa:16:3e:5e:02:17-33:33:00:00:00:16 type86dd
  proto58 tos0 ipv6::-ff02::16 port143-0
  fa:16:3e:5e:02:17  33:33:00:00:00:16, ethertype IPv6 (0x86dd), length 90: 
  :: 
  ff02::16: HBH ICMP6, multicast listener report v2, 1 group record(s), 
  length 28
  OFPT_PACKET_OUT (xid=0x491662dc): in_port=7 actions_len=0 actions=drop 
  buffer=
  0x0167
  OFPT_PACKET_IN (xid=0x0): total_len=322 in_port=7 data_len=128 buffer=
  0x0168
  tunnel0:in_port0007:tci(0) macfa:16:3e:5e:02:17-ff:ff:ff:ff:ff:ff type0800
  proto17 tos0 ip0.0.0.0-255.255.255.255 port68-67
  fa:16:3e:5e:02:17  ff:ff:ff:ff:ff:ff, ethertype IPv4 (0x0800), length 128:
  truncated-ip - 194 bytes missing! 0.0.0.0.68  255.255.255.255.67: 
  BOOTP/DHCP,
  Request from fa:16:3e:5e:02:17, length 280
  OFPT_PACKET_OUT (xid=0x491662dd): in_port=7 actions_len=0 actions=drop 
  buffer=
  0x0168
  OFPT_PACKET_IN (xid=0x0): total_len=78 in_port=7 data_len=78 
  buffer=0x0169
  fa:16:3e:5e:02:17  

[Openstack] Instances don't get an IP from DHCP (Quantum, OVS, multi-node computes)

2012-05-18 Thread Emilien Macchi
Hi,

For 2 weeks, I've been looking for a solution to a Quantum + OVS issue.

The situation :

2 servers :

Essex-1 - Eth0 : 10.68.1.40 - ETH1 : connected to br-int OVS bridge
- Glance, Nova-*, Keystone, Horizon, Quantum-Server, KVM, OVS,
Quantum-Agent
- nova.conf :
https://github.com/EmilienM/doc-openstack/blob/master/Configuration%20Files/Essex-1/nova.conf

Essex-2 - Eth0 : 10.68.1.45 - ETH1 : connected to br-int OVS bridge
- nova-compute, KVM, Quantum-Agent
- nova.conf :
https://github.com/EmilienM/doc-openstack/blob/master/Configuration%20Files/Essex-1/nova.conf

I've followed http://openvswitch.org/openstack/documentation/ and
http://docs.openstack.org/trunk/openstack-network/admin/content/

I've created the network with:
nova-manage network create --label=mysql
--fixed_range_v4=192.168.113.0/24 --project_id=d2f0dc48a8944c6e96cb88c772376f06
--bridge=br-int
--bridge_interface=eth1

What's not working :
- When I create an instance from dashboard, the VM does not get an IP from
DHCP server (hosted on ESSEX-1).
You can see the logs here : http://paste.openstack.org/show/17997/

What I did to investigate :
- dhcpdump -i br-int : I can see DHCPDISCOVER on both servers (without
answers)
- ps -ef | grep dnsmasq :
nobody 6564 1 0 14:12 ? 00:00:00 /usr/sbin/dnsmasq --strict-order
--bind-interfaces --conf-file= --domain=novalocal
--pid-file=/var/lib/nova/networks/nova-gw-0f427a46-3f.pid
--listen-address=192.168.113.1 --except-interface=lo
--dhcp-range=192.168.113.2,static,120s --dhcp-lease-max=256
--dhcp-hostsfile=/var/lib/nova/networks/nova-gw-0f427a46-3f.conf
--dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro
root 6565 6564 0 14:12 ? 00:00:00 /usr/sbin/dnsmasq --strict-order
--bind-interfaces --conf-file= --domain=novalocal
--pid-file=/var/lib/nova/networks/nova-gw-0f427a46-3f.pid
--listen-address=192.168.113.1 --except-interface=lo
--dhcp-range=192.168.113.2,static,120s --dhcp-lease-max=256
--dhcp-hostsfile=/var/lib/nova/networks/nova-gw-0f427a46-3f.conf
--dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro
root 16536 6192 0 14:40 pts/14 00:00:00 grep --color=auto dnsm

Is my nova.conf correct ?
What's wrong with my configuration ?
Is there a problem with DNSMASQ ?
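One way to cross-check the configuration (a sketch, not a fix): parse the dnsmasq command line from the ps output above and confirm the listen address and the start of the DHCP range fall inside the created fixed_range. The command line below is copied from this mail; nothing here calls nova.

```python
# Sanity-check the dnsmasq flags against fixed_range=192.168.113.0/24.
import ipaddress

cmdline = ("/usr/sbin/dnsmasq --strict-order --bind-interfaces "
           "--listen-address=192.168.113.1 --except-interface=lo "
           "--dhcp-range=192.168.113.2,static,120s --dhcp-lease-max=256")

def flag(name):
    """Return the value of a --name=value argument, if present."""
    prefix = "--%s=" % name
    for arg in cmdline.split():
        if arg.startswith(prefix):
            return arg[len(prefix):]
    return None

net = ipaddress.ip_network("192.168.113.0/24")
listen = ipaddress.ip_address(flag("listen-address"))
start = ipaddress.ip_address(flag("dhcp-range").split(",")[0])

# Both should be inside the fixed range; if not, nova.conf and the
# created network disagree.
print(listen in net, start in net)  # True True
```

If both checks pass, the dnsmasq side is at least consistent, and the problem is more likely in the path between the VM's tap port and br-int.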

I would appreciate any ideas!

Regards

-- 
Emilien Macchi
*SysAdmin (Intern)*
*www.stackops.com* | emilien.mac...@stackops.com | skype:memilien69

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] About openstack common client

2012-05-18 Thread Dean Troyer
On Fri, May 18, 2012 at 12:16 AM, Yong Sheng Gong gong...@cn.ibm.com wrote:
 I just want to ask about the relationship among openstackclient
 https://launchpad.net/python-openstackclient and the other clients.
 Will openstackclient replace the other clients (such as the quantum client,
 keystone client, nova client, etc.) or is it just a supplement?

It will be a fully-functional-and-then-some replacement.

 Right now, openstackclient calls code from the other clients, so it seems
 it is just another client wrapper. In that case, we will have to implement
 two sets of front-end code to call a specific client: one in
 openstackclient, and one in the separate client itself.

I'm not sure what you are implementing here; if it is an interface
that uses the CLI to drive OpenStack then you only need to talk to one
CLI (openstackclient) or to the existing CLIs (*client).
openstackclient has an advantage for scripted use in that it has an
option for easily parsable output.
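For example, a script consuming such parsable output could use the csv module rather than scraping human-readable tables. The sample text and the idea of a CSV output mode are assumptions for illustration, not a documented openstackclient option:

```python
# Parse hypothetical machine-readable CLI output instead of scraping
# the human-oriented table format.
import csv
import io

sample = "ID,Name,Status\n42,web-1,ACTIVE\n43,db-1,BUILD\n"
rows = list(csv.DictReader(io.StringIO(sample)))

# Filter on a column by name, which is robust to column reordering.
active = [r["Name"] for r in rows if r["Status"] == "ACTIVE"]
print(active)  # ['web-1']
```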

dt

-- 

Dean Troyer
dtro...@gmail.com



Re: [Openstack] About openstack common client

2012-05-18 Thread Dean Troyer
On Fri, May 18, 2012 at 9:17 AM, Doug Hellmann
doug.hellm...@dreamhost.com wrote:
 I expect non-common shell clients to be deprecated and eventually ripped
 out.  We're probably a bit too early in the game to explicitly discourage
 development on those shell commands though.

 I'm waffling on agreeing with you here. It is true that (AFAIK) we aren't
 set up for packaging builds yet for semi-official installations (i.e., not
 using devstack), but I would like to have people who are more familiar with
 the other command line programs contributing to the common client, too.

I expect the existing project-specific CLIs to be around for some
time.  The bit that openstackclient replaces is actually pretty small
code-wise, so maintaining them isn't as much duplicated work as it
might seem.

dt

-- 

Dean Troyer
dtro...@gmail.com



Re: [Openstack] [metering] Choice of a messaging queue

2012-05-18 Thread Julien Danjou
On Fri, May 18 2012, Doug Hellmann wrote:

 While I see the benefit of discussing requirements for the message bus
 platform in general, I'm not sure we need to dictate a specific
 implementation. If we say we are going to use the nova RPC library to
 communicate with the bus for sending and receiving messages, then we can
 use all of the tools for which there are drivers in nova -- rabbit, qpid,
 zeromq (assuming someone releases a driver, which I think is being worked
 on somewhere), etc. This leaves the decision of which bus to use up to the
 deployer, where the decision belongs. It also means we won't end up
 choosing a tool for which the other projects have no driver, leading us to
 have to create one and add a new dependency to the project.

+1

-- 
Julien Danjou
// eNovance  http://enovance.com
// ✉ julien.dan...@enovance.com  ☎ +33 1 49 70 99 81



Re: [Openstack] [metering] Choice of a messaging queue

2012-05-18 Thread Eric Windisch
The nova rpc implementation is moving into openstack common, I agree with using 
this abstraction.

As per ZeroMQ, I'm the author of that plugin. There is a downloadable plugin 
for Essex and I'm preparing to make a Folsom merge prop within the next week or 
so, if all goes well.

Sent from my iPad

On May 18, 2012, at 7:26, Doug Hellmann doug.hellm...@dreamhost.com wrote:

 
 
 On Fri, May 18, 2012 at 4:42 AM, Nick Barcet nick.bar...@canonical.com 
 wrote:
 Hello everyone,
 
 Next week's IRC meeting will have as its goal choosing a reference
 messaging queue service for the ceilometer project.  For this meeting
 to be successful, a discussion of the choices we have to make needs to
 occur first, right here.
 
 To open the discussion here are a few requirements that I would consider
 important for the queue to support:
 
 a) the queue must guarantee the delivery of messages.
 In contrast to monitoring, the loss of events may have important billing
 impacts; it is therefore not acceptable for messages to be lost.
 
 b) the client should be able to store and forward.
 As system load or traffic increases, or if the client is
 temporarily disconnected, the client element of the queue should be able to
 hold messages in a local queue to be emitted as soon as conditions permit.
 
 c) clients must authenticate
 Only clients which hold a shared private key should be able to send
 messages on the queue.
 
 Does the username/password authentication of rabbitmq meet this requirement?
  
 
 d) the queue may support client signing of individual messages
 Each message should be individually signed by the agent that emits it in
 order to guarantee non-repudiation.  This function can be done by the
 queue client or by the agent prior to en-queuing of messages.
 
 We can embed the message signature in the message, so this requirement 
 shouldn't have any bearing on the bus itself. Unless I'm missing something?
  
 
 d) the queue must be highly available
 The queue servers must be able to support multiple instances running in
 parallel in order to support continuation of operations after the loss of
 one server.  This should be achievable without the need for complex
 failover systems or shared storage.
 
 e) the queue should be horizontally scalable
 Scalability of the queue servers should be achievable by increasing the
 number of servers.
 
 Not sure this list is exhaustive or viable, feel free to comment on it,
 but the real question is: which queue should we be using here?
 
 While I see the benefit of discussing requirements for the message bus 
 platform in general, I'm not sure we need to dictate a specific 
 implementation. If we say we are going to use the nova RPC library to 
 communicate with the bus for sending and receiving messages, then we can use 
 all of the tools for which there are drivers in nova -- rabbit, qpid, zeromq 
 (assuming someone releases a driver, which I think is being worked on 
 somewhere), etc. This leaves the decision of which bus to use up to the 
 deployer, where the decision belongs. It also means we won't end up choosing 
 a tool for which the other projects have no driver, leading us to have to 
 create one and add a new dependency to the project.
 
 Doug
 


Re: [Openstack] [metering] Choice of a messaging queue

2012-05-18 Thread Doug Hellmann
On Fri, May 18, 2012 at 11:02 AM, Eric Windisch e...@cloudscaling.comwrote:

 The nova rpc implementation is moving into openstack common, I agree with
 using this abstraction.


That's a good point, I forgot about that.



 As per ZeroMQ, I'm the author of that plugin. There is a downloadable
 plugin for Essex and I'm preparing to make a Folsom merge prop within the
 next week or so, if all goes well.


Excellent!



 Sent from my iPad

 On May 18, 2012, at 7:26, Doug Hellmann doug.hellm...@dreamhost.com
 wrote:



 On Fri, May 18, 2012 at 4:42 AM, Nick Barcet nick.bar...@canonical.comwrote:

 Hello everyone,

 Next week's IRC meeting will have as its goal choosing a reference
 messaging queue service for the ceilometer project.  For this meeting
 to be successful, a discussion of the choices we have to make needs to
 occur first, right here.

 To open the discussion here are a few requirements that I would consider
 important for the queue to support:

 a) the queue must guarantee the delivery of messages.
 In contrast to monitoring, the loss of events may have important billing
 impacts; it is therefore not acceptable for messages to be lost.

 b) the client should be able to store and forward.
 As system load or traffic increases, or if the client is
 temporarily disconnected, the client element of the queue should be able to
 hold messages in a local queue to be emitted as soon as conditions permit.

 c) clients must authenticate
 Only clients which hold a shared private key should be able to send
 messages on the queue.


 Does the username/password authentication of rabbitmq meet this
 requirement?



 d) the queue may support client signing of individual messages
 Each message should be individually signed by the agent that emits it in
 order to guarantee non-repudiation.  This function can be done by the
 queue client or by the agent prior to en-queuing of messages.


 We can embed the message signature in the message, so this requirement
 shouldn't have any bearing on the bus itself. Unless I'm missing something?



 d) the queue must be highly available
 The queue servers must be able to support multiple instances running in
 parallel in order to support continuation of operations after the loss of
 one server.  This should be achievable without the need for complex
 failover systems or shared storage.

 e) the queue should be horizontally scalable
 Scalability of the queue servers should be achievable by increasing the
 number of servers.

 Not sure this list is exhaustive or viable, feel free to comment on it,
 but the real question is: which queue should we be using here?


 While I see the benefit of discussing requirements for the message bus
 platform in general, I'm not sure we need to dictate a specific
 implementation. If we say we are going to use the nova RPC library to
 communicate with the bus for sending and receiving messages, then we can
 use all of the tools for which there are drivers in nova -- rabbit, qpid,
 zeromq (assuming someone releases a driver, which I think is being worked
 on somewhere), etc. This leaves the decision of which bus to use up to the
 deployer, where the decision belongs. It also means we won't end up
 choosing a tool for which the other projects have no driver, leading us to
 have to create one and add a new dependency to the project.

 Doug



Re: [Openstack] [metering] Choice of a messaging queue

2012-05-18 Thread Eric Windisch

 
 
 a) the queue must guarantee the delivery of messages.
 In contrast to monitoring, the loss of events may have important billing
 impacts; it is therefore not acceptable for messages to be lost.

Losing messages should always be an option in extreme cases. If a message 
is undeliverable for an excessive amount of time, it should be dropped. 
Otherwise, you'll need the queuing equivalent of a DBA doing periodic cleanup, 
which isn't very cloudy (or scalable).

I agree that the failure cases here are different from those we'd normally see 
with Nova. Timeouts on messages would need to be much higher, and potentially 
able to be disabled (though I do insist that a timeout, even if high, should be used).

 
 b) the client should be able to store and forward.
 As system load or traffic increases, or if the client is
 temporarily disconnected, the client element of the queue should be able to
 hold messages in a local queue to be emitted as soon as conditions permit.

The zeromq driver definitely does this (kind of). It will try to send all 
messages at once via green threads, which is effectively the same thing. The 
nice thing is that with 0mq, when a message is sent, delivery to a peer is 
confirmed. 

I think, but may be wrong, that rabbit and qpid essentially do the same for 
store and forward, blocking their green threads until they hit a successful 
connection to the queue, or a timeout. With the amqp drivers, the sender only 
has a confirmation of delivery to the queuing server, not to the destination.
 
One thing the zeromq driver doesn't do is resume sending attempts across a 
service restart; messages aren't durable in that fashion. This is largely 
because the timeout in Nova does not need to be very large, so there would be 
very little benefit. This goes back to your point in 'a'. Adding this feature 
would be relatively minor; it just wasn't needed in Nova. This limitation 
is presumably true of rabbit and qpid as well in the store-and-forward case.

 c) clients must authenticate
 Only clients which hold a shared private key should be able to send
 messages on the queue.
 d) the queue may support client signing of individual messages
 Each message should be individually signed by the agent that emits it in
 order to guarantee non-repudiation.  This function can be done by the
 queue client or by the agent prior to en-queuing of messages.


There is a Folsom blueprint to add signing and/or encryption to the rpc layer.
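Embedding a per-message signature in the payload itself, independent of the bus, could look roughly like this. The field names and the choice of HMAC-SHA256 with a shared key are assumptions for illustration:

```python
# Per-message signing with a shared key: the bus never needs to know.
import hashlib
import hmac
import json

KEY = b"shared-private-key"  # hypothetical shared secret

def sign(payload):
    # Canonical serialization so sender and verifier hash the same bytes.
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}

def verify(msg):
    expected = sign(msg["payload"])["signature"]
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, msg["signature"])

msg = sign({"counter": "cpu", "volume": 42})
print(verify(msg))               # True
msg["payload"]["volume"] = 9000
print(verify(msg))               # False: tampering detected
```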

 d) the queue must be highly available
 The queue servers must be able to support multiple instances running in
 parallel in order to support continuation of operations after the loss of
 one server.  This should be achievable without the need for complex
 failover systems or shared storage.


 e) the queue should be horizontally scalable
 Scalability of the queue servers should be achievable by increasing the
 number of servers.

d/e are NOT properties of the rabbit (and qpid?) driver today in Nova, but it 
could (should) be made to work this way. You get this with the zeromq driver, 
of course ;)

 
 Not sure this list is exhaustive or viable, feel free to comment on it,
 but the real question is: which queue should we be using here?

The OpenStack common rpc mechanism, for sure. I'm biased, but I believe that 
while the zeromq driver is the newest, it is the only driver that meets all of 
the above requirements, subject to the exceptions noted above.

Improving the other implementations should be done, but I don't know of anyone 
committed to that work.

Regards,
Eric Windisch


Re: [Openstack] nova state machine simplification and clarification

2012-05-18 Thread Gabe Westmaas
I agree, this looks much more clear compared to where we are now.

I'd like to understand the difference between a soft and hard delete.
Does an API user have to specify that in some way?  I definitely agree
that you should be able to delete in any state, I would rather it not be a
requirement that the user interact differently when they want to delete a
server that has a task state already set.

Gabe

What exactly is a hard delete from the standpoint of a customer?  Is this
just a delete

On 5/18/12 10:20 AM, Mark Washenberger mark.washenber...@rackspace.com
wrote:

Hi Yun,

This proposal looks very good to me. I am glad you included in it the
requirement that hard deletes can take place in any vm/task/power state.

I however feel that a similar requirement exists for revert resize. It
should be possible to issue a RevertResize command for any task_state
(assuming that a resize is happening or has recently happened and is not
yet confirmed). The code to support this capability doesn't exist yet,
but I want to ask you: is it compatible with your proposal to allow
RevertResize in any task state?

Yun Mao yun...@gmail.com said:

 Hi,
 
 There are vm_states, task_states, and power_states for each VM. Their
 use is complicated. Some states are confusing and sometimes ambiguous,
 and there is no guideline for extending or adding new states. This
 proposal aims to simplify things: explain and define precisely what
 they mean, and why we need them. A new, user-friendly behavior for
 deleting a VM is also discussed.
 
 A TL;DR summary:
 * power_state is the hypervisor state, loaded "bottom-up" from the compute
 worker;
 * vm_state reflects the stable state based on API calls, matching user
 expectation, revised "top-down" within the API implementation.
 * task_state reflects the transition state introduced by in-progress
 API calls.
 * a "hard" delete of a VM should always succeed no matter what.
 * power_state and vm_state may conflict with each other, which needs
 to be resolved case-by-case.
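A toy model of the summary above — illustrative only, not nova's actual code — in which task_state guards in-progress API calls but a "hard" delete is accepted in any state:

```python
# Minimal model: task_state blocks concurrent actions, hard delete never blocks.
class Instance:
    def __init__(self):
        self.vm_state = "active"        # stable, user-visible state
        self.power_state = "running"    # hypervisor state, reported bottom-up
        self.task_state = None          # None => no API call in progress

    def start_task(self, task):
        if self.task_state is not None:
            raise RuntimeError("task in progress: %s" % self.task_state)
        self.task_state = task

    def hard_delete(self):
        # Must succeed regardless of current vm/task/power state.
        self.vm_state, self.task_state = "deleted", None
        return True

i = Instance()
i.start_task("snapshotting")
print(i.hard_delete(), i.vm_state)  # True deleted -- even mid-snapshot
```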
 
 It's not a definite guide yet and is up for discussion. I'd like to
 thank vishy and comstud for the early input. comstud: the task_state
 is different from when you looked at it. It's a lot closer to what's
 in the current code.
 
 The full text is here and is editable by anyone like etherpad.
 
 
https://docs.google.com/document/d/1nlKmYld3xxpTv6Xx0Iky6L46smbEqg7-SWPu_o6VJws/edit?pli=1
 
 Thanks,
 
 Yun
 
 





Re: [Openstack] RFC: Plugin framework draft

2012-05-18 Thread Andrew Bogott

On 5/17/12 4:38 PM, Doug Hellmann wrote:


snip

In the wikistatus plugin I notice that you modify the global FLAGS 
when wikistatus.py is imported. Is that the right time to do that, or 
should it happen during the on-load handler in the plugin class? Or 
maybe even in the plugin manager, which could ask the plugin for the 
new options and then modify FLAGS itself. It seems like lots of Nova 
code modifies FLAGS during import, but having an explicit trigger for 
that (rather than depending on import order) seems safer to me.
I don't feel strongly about this -- I'm just following the example set 
by existing Nova code.  Can you think of a corner case where loading it 
at import time would cause problems?  Alternatively, can you think of a 
corner case where we would want a flag to be defined during the magic 
moment /between/ import time and the load callback?


I can't think of a case for either, although I have the vague feeling 
that the latter is slightly more possible (if still improbable).




If the entry point for each plugin is given a unique name (instead of 
being called "plugin", as the sharedfs plugin's is) we would be able to 
log "loading plugin X" as well as provide options to control which 
plugins are activated. I don't know if that latter idea is a goal or not.
If leaving in that option is free, then I'm all for it.  I'm still a bit 
new to entry points... is the entry-point name a totally arbitrary string?


Also, is supporting unique names the same thing as /requiring/ unique 
names?  Would this ultimately result in us needing a governed, 
hierarchical namespace?
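Conceptually, entry points form a mapping of (group, name) -> object, where names are arbitrary strings that need to be unique only within a group. A registry sketch (plain Python, not setuptools itself) showing why unique names enable "loading plugin X" logging and selective activation:

```python
# Toy (group, name) -> object registry mimicking entry-point dispatch.
registry = {}

def register(group, name, obj):
    key = (group, name)
    if key in registry:
        # Unique names within a group are required for unambiguous lookup.
        raise ValueError("duplicate plugin name %r in group %r" % (name, group))
    registry[key] = obj

def load(group, enabled=None):
    """Load plugins in a group, optionally filtered by an enabled set."""
    loaded = []
    for (g, name), obj in sorted(registry.items()):
        if g == group and (enabled is None or name in enabled):
            print("loading plugin %s" % name)
            loaded.append(obj)
    return loaded

register("nova.plugins", "wikistatus", object())
register("nova.plugins", "sharedfs", object())
plugins = load("nova.plugins", enabled={"sharedfs"})
print(len(plugins))  # 1
```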




- Two different loading pathways -- is that useful or just confusing?



One and only one obvious way to do it.


OK, I'm convinced.  Outside of the common client, are entrypoints 
already a hard dependency elsewhere in OpenStack such that we don't lose 
anything by requiring them?



- Should the plugin base class interpose itself between the plugin
and python-openstackclient in order to enforce interface
versioning?  Is that even possible?


We could probably figure out a way to do that, but I don't know why 
you would want to. What did you have in mind? Which interface are you 
worried about versioning, the CLI itself?


I'm not sure I do want to, but here's my concern:  Right now the common 
client's API for extending the commandline is entirely internal to the 
common client itself.  When I write the sharedfs plugin to make use of 
that same API, I'm treating that internal API as external... and I don't 
like being the only person in the world doing that.


Of course, if the expectation is that that common client API will soon 
become public/documented/frozen anyway, then there's no problem.


-Andrew



Re: [Openstack] Instances don't get an IP from DHCP (Quantum, OVS, multi-node computes)

2012-05-18 Thread Dan Wendlandt
discussion is being tracked here:
https://answers.launchpad.net/quantum/+question/197701

dan

On Fri, May 18, 2012 at 7:46 AM, Emilien Macchi emilien.mac...@stackops.com
 wrote:

 Hi,

 For the past two weeks, I've been looking for a solution to a Quantum + OVS issue.

 The situation :

 2 servers :

 Essex-1 - Eth0 : 10.68.1.40 - ETH1 : connected to br-int OVS bridge
 - Glance, Nova-*, Keystone, Horizon, Quantum-Server, KVM, OVS,
 Quantum-Agent
 - nova.conf :
 https://github.com/EmilienM/doc-openstack/blob/master/Configuration%20Files/Essex-1/nova.conf

 Essex-2 - Eth0 : 10.68.1.45 - ETH1 : connected to br-int OVS bridge
 - nova-compute, KVM, Quantum-Agent
 - nova.conf :
 https://github.com/EmilienM/doc-openstack/blob/master/Configuration%20Files/Essex-1/nova.conf

 I've followed http://openvswitch.org/openstack/documentation/ and
 http://docs.openstack.org/trunk/openstack-network/admin/content/

 I've created the network with:
 nova-manage network create --label=mysql 
 --fixed_range_v4=192.168.113.0/24 --project_id=d2f0dc48a8944c6e96cb88c772376f06
  --bridge=br-int
 --bridge_interface=eth1

 What's not working :
 - When I create an instance from dashboard, the VM does not get an IP
 from DHCP server (hosted on ESSEX-1).
 You can see the logs here : http://paste.openstack.org/show/17997/

 What I did to investigate :
 - dhcpdump -i br-int : I can see DHCPDISCOVER on both servers (without
 answers)
 - ps -ef | grep dnsmasq :
 nobody 6564 1 0 14:12 ? 00:00:00 /usr/sbin/dnsmasq --strict-order
 --bind-interfaces --conf-file= --domain=novalocal
 --pid-file=/var/lib/nova/networks/nova-gw-0f427a46-3f.pid
 --listen-address=192.168.113.1 --except-interface=lo
 --dhcp-range=192.168.113.2,static,120s --dhcp-lease-max=256
 --dhcp-hostsfile=/var/lib/nova/networks/nova-gw-0f427a46-3f.conf
 --dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro
 root 6565 6564 0 14:12 ? 00:00:00 /usr/sbin/dnsmasq --strict-order
 --bind-interfaces --conf-file= --domain=novalocal
 --pid-file=/var/lib/nova/networks/nova-gw-0f427a46-3f.pid
 --listen-address=192.168.113.1 --except-interface=lo
 --dhcp-range=192.168.113.2,static,120s --dhcp-lease-max=256
 --dhcp-hostsfile=/var/lib/nova/networks/nova-gw-0f427a46-3f.conf
 --dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro
 root 16536 6192 0 14:40 pts/14 00:00:00 grep --color=auto dnsm

 Is my nova.conf correct ?
 What's wrong with my configuration ?
 Is there a problem with DNSMASQ ?

 I would appreciate any ideas!

 Regards

 --
 Emilien Macchi
 *SysAdmin (Intern)*
 *www.stackops.com* | emilien.mac...@stackops.com | skype:memilien69




Re: [Openstack] nova state machine simplification and clarification

2012-05-18 Thread Gabe Westmaas
Also, with this proposal I'd be a lot more interested in exposing task
state as a part of the API eventually.  This is helpful to communicate
whether or not other actions would be allowed in certain states.  For
example, right now we don't allow other actions when a server is
snapshotting, but while the server is being snapshotted, the state is set
to ACTIVE.  With these well thought out states, I think we could more
safely expose those task states, and we would just have to be vigilant
about adding new ones to make sure they make sense to expose to end users.

Gabe

On 5/18/12 10:20 AM, Mark Washenberger mark.washenber...@rackspace.com
wrote:

Hi Yun,

This proposal looks very good to me. I am glad you included in it the
requirement that hard deletes can take place in any vm/task/power state.

I however feel that a similar requirement exists for revert resize. It
should be possible to issue a RevertResize command for any task_state
(assuming that a resize is happening or has recently happened and is not
yet confirmed). The code to support this capability doesn't exist yet,
but I want to ask you: is it compatible with your proposal to allow
RevertResize in any task state?

Yun Mao yun...@gmail.com said:

 Hi,
 
 There are vm_states, task_states, and power_states for each VM. Their
 use is complicated. Some states are confusing and sometimes ambiguous,
 and there is no guideline for extending or adding new states. This
 proposal aims to simplify things: explain and define precisely what
 they mean, and why we need them. A new, user-friendly behavior for
 deleting a VM is also discussed.
 
 A TL;DR summary:
 * power_state is the hypervisor state, loaded "bottom-up" from the compute
 worker;
 * vm_state reflects the stable state based on API calls, matching user
 expectation, revised "top-down" within the API implementation.
 * task_state reflects the transition state introduced by in-progress
 API calls.
 * a "hard" delete of a VM should always succeed no matter what.
 * power_state and vm_state may conflict with each other, which needs
 to be resolved case-by-case.
 
 It's not a definite guide yet and is up for discussion. I'd like to
 thank vishy and comstud for the early input. comstud: the task_state
 is different from when you looked at it. It's a lot closer to what's
 in the current code.
 
 The full text is here and is editable by anyone like etherpad.
 
 
https://docs.google.com/document/d/1nlKmYld3xxpTv6Xx0Iky6L46smbEqg7-SWPu_
o6VJws/edit?pli=1
 
 Thanks,
 
 Yun
 
 







[Openstack] Understanding Network Setup - nova-network

2012-05-18 Thread Alisson Soares Limeira Pontes
Hello,

I have a setup with a Controller node and two Compute nodes. The topology
is as in the figure attached. I followed the hastexo manual 
http://www.hastexo.com/resources/docs/installing-openstack-essex-20121-ubuntu-1204-precise-pangolin,
but installing just the modules as described in the figure (attached).
The private network is the nova fixed_range=192.168.22.32/27
The public network is the nova floating_range=10.4.5.32/27
I can launch VMs and access them with no problem.
But from a VM inside the cloud I cannot access any machine from a range
other than the fixed_range (192.168.22.32/27).

Does anyone know what it could be?
How can the controller NAT traffic coming from a node outside the cloud,
but not traffic coming from a VM inside the cloud?


Also, I would like to understand the nova-network setup using FlatDHCP.

br100 on the controller assumes the IP 192.168.22.33 (from DHCP, I guess;
different from eth1, which is 192.168.22.1).
On the compute nodes, br100 assumes the same IP as eth1.
All nodes have a virbr0 with the same IP, 192.168.122.1. Why '.122.'?


br100 bridges all the VMs inside a physical server and virbr0 bridges all
the br100s, is that right?

# network specific settings of nova.conf
--network_manager=nova.network.manager.FlatDHCPManager
--public_interface=eth0
--flat_interface=eth1
--flat_network_bridge=br100
--fixed_range=192.168.22.32/27
--floating_range=10.4.5.32/27
--network_size=32
--flat_network_dhcp_start=192.168.22.33
--flat_injected=False
--force_dhcp_release
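As a sanity check on the flag values above, Python's ipaddress module can confirm how the ranges relate. The 192.168.122.1 address seen on virbr0 belongs to libvirt's default network, which is unrelated to nova's ranges. This is only an illustrative script, not nova code:

```python
import ipaddress

# Ranges taken from the nova.conf excerpt above.
fixed_range = ipaddress.ip_network("192.168.22.32/27")
floating_range = ipaddress.ip_network("10.4.5.32/27")

# A /27 holds 32 addresses, matching --network_size=32.
assert fixed_range.num_addresses == 32

# .32 is the network address, so the first usable host is .33,
# matching --flat_network_dhcp_start=192.168.22.33.
hosts = list(fixed_range.hosts())
print(hosts[0])                        # 192.168.22.33
print(fixed_range.broadcast_address)   # 192.168.22.63

# libvirt's default network explains the 192.168.122.1 on virbr0;
# it does not overlap any range configured here.
libvirt_default = ipaddress.ip_network("192.168.122.0/24")
assert not libvirt_default.overlaps(fixed_range)
assert not libvirt_default.overlaps(floating_range)
```

So a VM can only reach addresses inside fixed_range directly; anything else must be NATed by the controller.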

Ah, I actually do not know if this is the best topology for the setup.

I appreciate any help.

Thanks.

-- 
*Alisson*


Re: [Openstack] How to access an instance from Dashboard using VNC (password)?

2012-05-18 Thread Jorge Luiz Correa
Here is the instance full log:

http://pastebin.com/SfmzZ4ET

It is an ubuntu precise cloud image. It doesn't output the password.

Curious...

On Fri, May 18, 2012 at 10:12 AM, Vaze, Mandar mandar.v...@nttdata.comwrote:

   But, from inside dashboard, I couldn't find where to get
  similar information.

  From Dashboard, click the “Edit Instance” button on the far right, and click
  “View Log”.

  Scroll to the end; you’ll see the password.

  -Mandar





-- 
- MSc. Correa, J.L.


Re: [Openstack] [metering] Choice of a messaging queue

2012-05-18 Thread Caitlin Bestler
Nick Barcet wrote:


 a) the queue must guarantee the delivery of messages.
 Unlike monitoring, loss of events may have important billing
 impacts; it therefore cannot be acceptable for messages to be lost.

I don't think absolute reliability is desirable for this application. SCTP 
partial reliability may be a better model.

Ultimately, full reliability requires the ability to block new messages from 
being produced. In the context of billing, that would mean that
a failure of the billing system to consume its messages would result in 
stopping new services from being provided.

Obviously you want to avoid that, but if the billing system fails, do you want
to stop providing services as well? You get no billable time in
either case, but providing services without metering will at least
keep your customers, so that you can bill them for their usage
after you have restarted the billing system.

A partial-reliability system would provide a finite amount of storage for
in-flight messages, which probably should be configured for something
like 3x the longest anticipated failure of the consuming entities to drain the
queue. And it should guarantee that there will be no loss of
messages without an explicit exception being produced (something like
EMERGENCY: 2000 billing messages just erased. Interim storage
is fully committed. Where is the consumer?)
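The partial-reliability behavior described above can be sketched as a bounded spool that only ever drops a message together with a loud, explicit exception. The class and names here are invented for illustration; no real queue product is implied:

```python
from collections import deque


class QueueOverflow(Exception):
    """Raised so that message loss is explicit, never silent."""


class BoundedSpool:
    """Hold in-flight metering messages up to a fixed capacity.

    Capacity would be sized to roughly 3x the longest anticipated
    consumer outage, per the discussion above.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self._messages = deque()
        self.dropped = 0

    def put(self, msg):
        if len(self._messages) >= self.capacity:
            # Drop the oldest message, but raise so operators notice.
            self._messages.popleft()
            self.dropped += 1
            self._messages.append(msg)
            raise QueueOverflow(
                "EMERGENCY: %d metering messages dropped; interim "
                "storage is fully committed. Where is the consumer?"
                % self.dropped)
        self._messages.append(msg)

    def drain(self):
        """Yield spooled messages once the consumer is back."""
        while self._messages:
            yield self._messages.popleft()
```

The key design point is that new services keep running (puts never block); only the oldest metering records are sacrificed, and only with an alarm.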





[Openstack] Keystone service catalogue has non-core services?

2012-05-18 Thread Nguyen, Liem Manh
Hi Stackers,

I ran the sample_data.sh script in Keystone and saw that we have populated a 
few more services, such as ec2, dashboard and nova-volume.  Are these meant to 
be core services or extension services?  The definition of core services is 
defined here:

https://github.com/openstack/identity-api/blob/3d2e8a470733979b792d04bcfe3745731befbe8d/openstack-identity-api/src/docbkx/common/xsd/services.xsd

Extension services should be in the format of extension prefix:service type

Thanks,
Liem




Re: [Openstack] Openstack Essex - Guide for Ubuntu 12.04

2012-05-18 Thread Yi Sun
On 05/18/2012 03:44 AM, Emilien Macchi wrote:
 Yi,

 On Fri, May 18, 2012 at 7:47 AM, Yi Sun beyo...@gmail.com
 mailto:beyo...@gmail.com wrote:

 Emilien,
 I'm using your document to setup an environment, so far I have two
 issues, hope you could help out.
 1. the Linux bridge driver is install/loaded during the install
 process. I tried to remove it by add it into modprobe blacklist,
 but it is still installed after reboot.


 We don't care about it, we use OVS bridging.
In the INSTALL.Linux file that comes with the OVS source code, it has the
following comment:
**
 The Open vSwitch datapath requires bridging support
  (CONFIG_BRIDGE) to be built as a kernel module.  (This is common
  in kernels provided by Linux distributions.)  The bridge module
  must not be loaded or in use.  If the bridge module is running
  (check with lsmod | grep bridge), you must remove it (rmmod
  bridge) before starting the datapath.

Anyway, I have found out how to remove bridge and stp. Just use the following
commands:

# virsh net-destroy default
# virsh net-autostart --disable default

 

  2. I can start quantum-server manually without any problem, but it
  will fail if it is auto-started during system boot.


 Do you have logs files ?

Do you know how to turn on the log for quantum-server? I did not find
anything in /var/log/quantum.

Thx
Yi


Re: [Openstack] [metering] Choice of a messaging queue

2012-05-18 Thread Doug Hellmann
On Fri, May 18, 2012 at 11:40 AM, Eric Windisch e...@cloudscaling.comwrote:


 
 
   a) the queue must guarantee the delivery of messages.
   Unlike monitoring, loss of events may have important billing
   impacts; it therefore cannot be acceptable for messages to be lost.

 Losing messages should always be an option, in the extreme cases. If a
 message is undeliverable for an excessive amount of time, it should be
 dropped. Otherwise, you'll need the queuing equivalent of a DBA doing
 periodic cleanup, which isn't very cloudy (or scalable).

 I agree that the failure cases here are different than we'd normally see
 with Nova. Timeouts on messages would need to be much higher, and
 potentially able to be disabled (but I do insist that a timeout, even if
 high, should be used).

 
   b) the client should be able to store and forward.
   As the load of the system or traffic increases, or if the client is
   temporarily disconnected, the client element of the queue should be able to
   hold messages in a local queue to be emitted as soon as conditions permit.

 The zeromq driver definitely does this (kind of). It will try and send all
 messages at once via green threads, which is effectively the same thing.
 The nice thing is that with 0mq, when a message is sent, delivery to a peer
 is confirmed.

 I think, but may be wrong, that rabbit and qpid essentially do the same
 for store and forward, blocking their green threads until they hit a
 successful connection to the queue, or a timeout. With the amqp drivers,
 the sender only has a confirmation of delivery to the queuing server, not
 to the destination.

 One thing the zeromq driver doesn't do is resume sending attempts across a
 service restart. Messages aren't durable in that fashion. This is largely
 because the timeout in Nova does not need to be very large, so there would
 be very little benefit. This goes back to your point in 'a'. Adding this
 feature would be relatively minor, it just wasn't needed in Nova. Actually,
 this limitation would be presumably true of rabbit and qpid as well, in the
 store and forward case.
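The timeout-bounded store-and-forward behavior discussed for points (a) and (b) might be sketched as below. StoreAndForwardSender and its transport callable are invented names for illustration, not the zeromq or amqp driver API:

```python
import time


class StoreAndForwardSender:
    """Spool messages locally while the peer is unreachable, but give
    every message a deadline so undeliverable traffic is eventually
    dropped instead of accumulating forever.

    `transport` is any callable that raises on delivery failure; both
    it and the timeout value are placeholders, not a real driver API.
    """

    def __init__(self, transport, timeout=3600.0, clock=time.monotonic):
        self.transport = transport
        self.timeout = timeout
        self.clock = clock
        self.pending = []   # list of (enqueue_time, message)
        self.expired = []   # messages given up on after the timeout

    def send(self, message):
        self.pending.append((self.clock(), message))
        self.flush()

    def flush(self):
        """Retry spooled messages; expire those past their deadline."""
        still_pending = []
        for ts, msg in self.pending:
            if self.clock() - ts > self.timeout:
                self.expired.append(msg)   # drop, but loudly loggable
                continue
            try:
                self.transport(msg)
            except Exception:
                still_pending.append((ts, msg))  # retry on next flush
        self.pending = still_pending
```

For metering, the timeout would be set very high (or effectively disabled); for Nova-style RPC it can stay short, which is Eric's point about the two use cases differing only in configuration.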

   c) client must authenticate
   Only clients which hold a shared private key should be able to send
   messages on the queue.
   d) queue may support client signing of individual messages
   Each message should be individually signed by the agent that emits it in
   order to guarantee non-repudiability.  This function can be done by the
   queue client or by the agent prior to enqueuing of messages.


 There is a Folsom blueprint to add signing and/or encryption to the rpc
 layer.


See https://review.stackforge.org/#/c/39/ for a simple implementation.



  d) queue must be highly available
  the queue servers must be able to support multiple instances running in
  parallel in order to support continuation of operations with the loss of
  one server.  This should be achievable without the need to use complex
  fail over systems and shared storage.


  e) queue should be horizontally scalable
  The scalability of queue servers should be achievable by increasing the
  number of servers.

 d/e are NOT properties of the rabbit (and qpid?) driver today in Nova, but
 it could (should) be made to work this way. You get this with the zeromq
 driver, of course ;)

 
  Not sure this list is exhaustive or viable, feel free to comment on it,
  but the real question is: which queue should we be using here?

 The OpenStack common rpc mechanism, for sure. I'm biased, but I believe
 that while the zeromq driver is the newest, it is the only driver that
 meets all of the above requirements, subject to the exceptions noted above.

 Improving the other implementations should be done, but I don't know of
 anyone committed to that work.

 Regards,
 Eric Windisch



Re: [Openstack] RFC: Plugin framework draft

2012-05-18 Thread Doug Hellmann
On Fri, May 18, 2012 at 12:16 PM, Andrew Bogott abog...@wikimedia.orgwrote:

  On 5/17/12 4:38 PM, Doug Hellmann wrote:


 snip


 In the wikistatus plugin I notice that you modify the global FLAGS when
 wikistatus.py is imported. Is that the right time to do that, or should it
 happen during the on-load handler in the plugin class? Or maybe even in the
 plugin manager, which could ask the plugin for the new options and then
 modify FLAGS itself. It seems like lots of Nova code modifies FLAGS during
 import, but having an explicit trigger for that (rather than depending on
 import order) seems safer to me.

 I don't feel strongly about this -- I'm just following the example set by
 existing Nova code.  Can you think of a corner case where loading it at
 import time would cause problems?  Alternatively, can you think of a corner
 case where we would want a flag to be defined during the magic moment
 /between/ import time and the load callback?


I don't know enough about how the flags code works or whether the order of
flag declarations matters. I based my comments on the fact that I have been
burned in the (distant) past by library code that modified global state on
import (Zope) so I avoid the pattern and stick with explicit triggers. If
the flags module supports modification in an indeterminate order, which the
existing convention implies, then what you have should be fine.



  I can't think of a case for either, although I have the vague feeling that
  the latter is slightly more possible (if still improbable).



  If the entry point for each plugin is given a unique name (instead of
 being called plugin, as the sharedfs plugin is) we would be able to log
 loading plugin X as well as provide options to control which plugins are
 activated. I don't know if that latter idea is a goal or not.

 If leaving in that option is free, then I'm all for it.  I'm still a bit
 new to entry points... is the entry-point name a totally arbitrary string?


The names need to work as variable names for ConfigParser. I don't think
they can include whitespace or '.' but I'm not sure about other
restrictions.

Also, is supporting unique names the same thing as /requiring/ unique
 names?  Would this ultimately result in us needing a governed, hierarchical
 namespace?


You cannot have two entry points with the same name in the same setup.py,
but you can have duplicate names from different setup.py files. Whether you
want to do that depends on how you are using the plugins. In this case, it
shouldn't matter if there are duplicates *unless* we provide an option to
enable/disable plugins.
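The name-keyed loading with an optional enable list that Doug describes might look like the sketch below. The group name "nova.plugin" and the function are illustrative, not what the plugin branch actually registers:

```python
import logging

import pkg_resources  # provided by setuptools/distribute

LOG = logging.getLogger(__name__)


def load_plugins(group="nova.plugin", enabled=None):
    """Load every entry point in `group`, keyed by its entry-point name.

    `enabled`, if given, is a whitelist of entry-point names -- the kind
    of on/off option discussed above. Duplicate names (possible across
    different setup.py files) are logged and skipped rather than loaded
    twice.
    """
    plugins = {}
    for ep in pkg_resources.iter_entry_points(group):
        if enabled is not None and ep.name not in enabled:
            continue
        if ep.name in plugins:
            LOG.warning("duplicate plugin name %r; keeping first", ep.name)
            continue
        LOG.info("loading plugin %s from %s", ep.name, ep.dist)
        plugins[ep.name] = ep.load()()  # load and instantiate the class
    return plugins
```

With uniquely named entry points, "loading plugin X" logging and per-plugin enable/disable both fall out of the same loop.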





  - Two different loading pathways -- is that useful or just confusing?



  One and only one obvious way to do it.


 OK, I'm convinced.  Outside of the common client, are entrypoints already
 a hard dependency elsewhere in OpenStack such that we don't lose anything
 by requiring them?


From what I have seen all of the projects use setuptools/distribute for
packaging, so using entry points will not add any new dependencies.






 - Should the plugin base class interpose itself between the plugin and
 python-openstackclient in order to enforce interface versioning?  Is that
 even possible?


  We could probably figure out a way to do that, but I don't know why you
 would want to. What did you have in mind? Which interface are you worried
 about versioning, the CLI itself?


 I'm not sure I do want to, but here's my concern:  Right now the common
 client's API for extending the commandline is entirely internal to the
 common client itself.  When I write the sharedfs plugin to make use of that
 same API, I'm treating that internal API as external... and I don't like
 being the only person in the world doing that.


The command plugin API for the common CLI is intended to be public and will
be documented. I thought we were going to put the command implementations
in project-specific packages (so that you only got the quantum commands if
you installed the python-quantumclient package, for example). Dean
convinced me we should just put the core stuff into one package, so we went
that route. Extensions can plug directly in. We will document the base
classes within the openstackclient library, but extensions can also write
directly against the cliff framework classes if they do not need any of the
features specific to the unified CLI.

Of course, if the expectation is that that common client API will soon
 become public/documented/frozen anyway, then there's no problem.

 -Andrew




Re: [Openstack] Using Nova APIs from Javascript: possible?

2012-05-18 Thread javier cerviño
Due to the problems people are facing with CORS, we've already included further
description and a video of how the JavaScript portal can be used. We'll
work with the fantastic people from StackOps on the implementation of a
basic HTTP proxy which could be used until we find a solution to implement
CORS in OpenStack components.

In the meantime you can see video, description and code in here:
http://ging.github.com/horizon-js/
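For readers following along, a CORS filter of the kind this thread describes can be sketched as plain WSGI middleware. This is a minimal illustration, not Adrian's actual Swift patch; a real deployment would restrict the allowed origins instead of using "*":

```python
class CORSMiddleware(object):
    """Minimal WSGI filter that adds CORS headers to every response and
    short-circuits preflight OPTIONS requests before they reach the API.
    """

    HEADERS = [
        ("Access-Control-Allow-Origin", "*"),
        ("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS"),
        ("Access-Control-Allow-Headers", "X-Auth-Token, Content-Type"),
    ]

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        if environ.get("REQUEST_METHOD") == "OPTIONS":
            # Preflight: reply directly without touching the wrapped API.
            start_response("200 OK", list(self.HEADERS))
            return [b""]

        def cors_start_response(status, headers, exc_info=None):
            # Append the CORS headers to whatever the API returned.
            return start_response(status, list(headers) + self.HEADERS,
                                  exc_info)

        return self.app(environ, cors_start_response)
```

Wired in as a paste filter in front of the Nova or Keystone API, a browser-based client such as jStack could then make cross-origin calls without the HTTP proxy workaround.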

On 30 April 2012 13:56, Nick Lothian nick.loth...@gmail.com wrote:

 I'm testing out the existing  JStack code at the moment.

 It's been an enjoyable process so far.
 On Apr 30, 2012 7:30 PM, javier cerviño jcerv...@dit.upm.es wrote:

 Hi Adrian,

 I've just seen you submitted your Swift-based CORS implementation to
 Gerrit. Would you mind if we do the same for Nova, Keystone and Glance? On
 the other hand, it could be better to wait for its approval because we
 could apply changes proposed by the reviewers to the rest of components.

  We've just started to implement Glance API support in jStack, and then I
  will start with Swift. Is there anybody out there who wants to join this
  challenge? You're welcome to propose changes, write code, and so on. The
  idea is to develop the full OpenStack API in JavaScript, so that the
  community can start working with it.

 Cheers,
 Javier.

 2012/4/27 javier cerviño jcerv...@dit.upm.es

 Hi!

 We have just published the code of the portal in Github. You can find it
 in https://github.com/ging/horizon-js. It will only work with Keystone
 and Nova if they have CORS implemented.

 Adrian, we didn't make big changes in your code, only logger classes and
 a little problem we found with PUT requests in some cases (I have to take a
 deeper look into this problem, anyway). We've made tests from  iPhone,
 iPad, Safari, Firefox and Chrome and we didn't have any problems. But on
 the other hand CORS doesn't work in IE9 with PUT and DELETE methods. Next
 week I will test it with Android and Opera browsers.

 Sure! It will be very interesting to submit your code to gerrit!!

 Diego, I will talk with Joaquin to check if we can show you a demo in
 two weeks!!

 Cheers,
 Javier.

 2012/4/27 Adrian Smith adrian_f_sm...@dell.com

 I'd be really interested to hear how you go on with the CORS middleware 
 Javier.
 Did it work as-is or did you have to modify it? Was there much effort
 involved in using it with Nova?

 From your experience it sounds like there's decent CORS support in
 browsers now so it's probably time to submit this change to gerrit.

 Adrian


 2012/4/27 Diego Parrilla Santamaría 
 diego.parrilla.santama...@gmail.com

 Awesome Javier

 Anxiously waiting for a meeting with you guys to see your progress!

 Cheers
 Diego
  --
  Diego Parrilla
  CEO
  www.stackops.com | diego.parri...@stackops.com | +34 649 94 43 29 |
  skype:diegoparrilla
  http://www.stackops.com/



 On Thu, Apr 26, 2012 at 9:50 AM, javier cerviño 
 jcerv...@dit.upm.eswrote:

 Hi all,

 I'm glad to hear that there's a lot of interest in the implementation
 of Openstack JavaScript clients. Actually, in my group we're
 developing a single-page application written entirely in
 JavaScript that widely supports the Nova and Keystone APIs.  This work is
 part of a European Project called FI-Ware (http://www.fi-ware.eu/),
 in
 which we are currently using Openstack APIs.

 We've modified Nova and Keystone installations by adding CORS support.
 We did it by implementing a kind of filter on their APIs. For doing
 this we used Adrian's implementation
 (https://github.com/adrian/swift/tree/cors), and we adapted it to
 Nova
 and Keystone components. We also developed a JS library
 (http://ging.github.com/jstack/) that can be used by both web and
 Node.js applications, for example. This library aims to provide same
 functionalities as python-novaclient, adding support for Keystone API.

 And finally we are copying Openstack horizon functionality, using JS
 library and other frameworks such as jQuery and Backbone.js to
 implement the web application. This web application is an
 early-stage work, but we will probably publish it by the end of this
 week. I will let you know the github link.

 We didn't find many problems with CORS implementation and support in
 browsers.  For the time being, according to our experiments, the only
 web browser that is not usable at all with this technology is Internet
 Explorer, but we have tried it in Google Chrome, Safari and Firefox as
 well and we didn't have any problems.

 Cheers,
 Javier Cerviño.

 On 26 April 2012 06:28, Nick Lothian nick.loth...@gmail.com wrote:
 
 
  On Thu, Apr 26, 2012 at 5:49 AM, Adam Young ayo...@redhat.com
 wrote:
 
  Let me try to summarize:
 
  1.  If you are running from a web browser,  post requests to hosts
 or
  ports other than the origin are allowed,  but the headers cannot be
  modified.  This prevents the addition of the token from Keystone
 to provide
  single sign on.
 
  2.  There are various browser 

Re: [Openstack] Openstack Essex - Guide for Ubuntu 12.04

2012-05-18 Thread Emilien Macchi
Hi,

On Fri, May 18, 2012 at 7:13 PM, Yi Sun beyo...@gmail.com wrote:

  On 05/18/2012 03:44 AM, Emilien Macchi wrote:

 Yi,

 On Fri, May 18, 2012 at 7:47 AM, Yi Sun beyo...@gmail.com wrote:

 Emilien,
 I'm using your document to setup an environment, so far I have two
 issues, hope you could help out.
 1. the Linux bridge driver is install/loaded during the install process.
 I tried to remove it by add it into modprobe blacklist, but it is still
 installed after reboot.


 We don't care about it, we use OVS bridging.

 In the INSTALL.Linux file that comes with OVS source code, it has
 following comments:

 **
  The Open vSwitch datapath requires bridging support
   (CONFIG_BRIDGE) to be built as a kernel module.  (This is common
   in kernels provided by Linux distributions.)  The bridge module
   must not be loaded or in use.  If the bridge module is running
   (check with lsmod | grep bridge), you must remove it (rmmod
   bridge) before starting the datapath.

 
 Anyway, I have find out how to remove bridge and stp. Just use following
 commands:

 # virsh net-destroy default
 # virsh net-autostart --disable default

 Ok. Thanks for the information.





    2. I can start quantum-server manually without any problem, but it will
 fail if it is auto-started during system boot.


You should modify the init script with the
--logfile=/var/log/quantum/quantum-server.log
flag.

Regards




 Do you have logs files ?

   Do you know how to turn on the log for quantum-server? I did not find
 anything in /var/log/quantum.

 Thx
 Yi




-- 
Emilien Macchi
*SysAdmin (Intern)*
www.stackops.com | emilien.mac...@stackops.com | skype:memilien69



Re: [Openstack] Third Party APIs

2012-05-18 Thread Doug Davis
Vish wrote on 05/17/2012 02:18:19 PM:
...
 3 Feature Branch in Core
 
 We are doing some work to support Feature and Subsystem branches in 
 our CI system. 3rd party apis could live in a feature branch so that
 they can be tested using our CI infrastructure. This is very similar
 to the above solution, and gives us a temporary place to do 
 development until the internal apis are more stable. Changes to 
 internal apis and 3rd party apis could be done concurrently in the 
 branch and tested. 

can you elaborate on this last sentence?  When you say changes to
internal apis, do you mean in general, or only in the context of those
3rd party APIs needing a change?  I can't see the core developers wanting
to do internal API changes in a 3rd party api branch.  I would expect
3rd party api branches to mainly include just stuff that sits on top of
the internal APIs and (hopefully very few) internal API tweaks.
Which to me means that these 3rd party API branches should be continually 
rebased off of the trunk to catch breaking changes immediately.

If I understand it correctly, of those options, I like option 3, because
then the CI stuff will detect breakages in the 3rd party APIs right away,
rather than at some later date when it'll be harder to fix (or undo) those
internal API changes.

-Doug Davis
d...@us.ibm.com


Re: [Openstack] RFC - dynamically loading virt drivers

2012-05-18 Thread Jay Pipes

On 05/17/2012 06:38 PM, Vishvananda Ishaya wrote:

On May 17, 2012, at 1:52 PM, Sean Dague wrote:


What I'm mostly looking for is comments on approach. Is importutils the
preferred way to go about this (which is the nova.volume approach) now, or
should this be using utils.LazyPluggable as in nova.db.api, or some other
approach entirely? Comments, redirections, appreciated.


-1 to LazyPluggable

So we already have pluggability by just specifying a different compute_driver
config option.  I don't like that we defer another level in compute and call 
get_connection.  IMO the best cleanup would be to remove the get_connection 
altogether and just construct the driver directly based on compute_driver.

The main issue with changing this is breaking existing installs.

So I guess this would be my strategy:

a) remove get_connection from the drivers (and just have it construct the 
'connection' class directly)
b) modify the global get_connection to construct the drivers for backwards
compatibility
c) modify the documentation to suggest changing drivers by specifying the full 
path to the driver instead of connection_type
d) rename the connection classes to something reasonable representing drivers 
(libvirt.driver:LibvirtDriver() vs libvirt.connection.LibvirtConnection)
e) bonus points if it could be done with a short path for ease of use 
(compute_driver=libvirt.LibvirtDriver vs 
compute_driver=nova.virt.libvirt.driver.LibvirtDriver)


+1

-jay
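Vish's steps (a) through (e) amount to constructing the driver class directly from a dotted path, with a small shim so legacy connection_type values keep working. A sketch, with illustrative names and driver paths only:

```python
import importlib


def import_class(import_str):
    """Import a class from a fully qualified dotted path, roughly the
    way an importutils-style helper would resolve a compute_driver flag.
    """
    module_name, _, class_name = import_str.rpartition(".")
    module = importlib.import_module(module_name)
    return getattr(module, class_name)


# Legacy connection_type values mapped onto full driver paths, so old
# configs keep working while the docs recommend the explicit form
# (steps b/c above). The paths here are illustrative.
LEGACY_DRIVERS = {
    "libvirt": "nova.virt.libvirt.driver.LibvirtDriver",
    "xenapi": "nova.virt.xenapi.driver.XenAPIDriver",
}


def load_compute_driver(compute_driver=None, connection_type=None):
    """Prefer the explicit dotted path; fall back to the legacy name."""
    if compute_driver is None and connection_type is not None:
        compute_driver = LEGACY_DRIVERS[connection_type]
    return import_class(compute_driver)()
```

The short-path bonus in (e) would just be one more lookup table in front of import_class, mapping "libvirt.LibvirtDriver" to its full module path.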



Re: [Openstack] Openstack Essex - Guide for Ubuntu 12.04

2012-05-18 Thread Yi Sun
BTW-- now, I got a new problem:
1. the network is up
2. VM is up.

But I cannot access the VM through anything (no console, VNC, or
network). I saw you have sent another e-mail about the interface issue
inside the VM.  How did you access the VM console?
Thanks
Yi


On 05/18/2012 10:51 AM, Emilien Macchi wrote:

  

  2. I can start quantum-server manually without any problem,
  but it will fail if it is auto-started during system boot.


  You should modify the init script with the
  --logfile=/var/log/quantum/quantum-server.log flag.



Re: [Openstack] nova state machine simplification and clarification

2012-05-18 Thread Yun Mao
Hi Mark,

I haven't looked at the resize-related API calls very closely, but what
you are saying makes sense. revert_resize() should be able to preempt
an existing resize() call, which might get stuck. I'm not yet clear on how
the leftovers will be garbage collected.

Yun

On Fri, May 18, 2012 at 10:20 AM, Mark Washenberger
mark.washenber...@rackspace.com wrote:
 Hi Yun,

 This proposal looks very good to me. I am glad you included in it the 
 requirement that hard deletes can take place in any vm/task/power state.

 I however feel that a similar requirement exists for revert resize. It should 
 be possible to issue a RevertResize command for any task_state (assuming that 
 a resize is happening or has recently happened and is not yet confirmed). The 
 code to support this capability doesn't exist yet, but I want to ask you: is 
 it compatible with your proposal to allow RevertResize in any task state?

 Yun Mao yun...@gmail.com said:

 Hi,

 There are vm_states, task_states, and power_states for each VM. The
 use of them is complicated. Some states are confusing, and sometimes
 ambiguous. There also lacks a guideline to extend/add new state. This
 proposal aims to simplify things, explain and define precisely what
 they mean, and why we need them. A new user-friendly behavior of
 deleting a VM is also discussed.

 A TL;DR summary:
 * power_state is the hypervisor state, loaded “bottom-up” from compute
 worker;
 * vm_state reflects the stable state based on API calls, matching user
 expectation, revised “top-down” within API implementation.
 * task_state reflects the transition state introduced by in-progress API 
 calls.
 * “hard” delete of a VM should always succeed no matter what.
 * power_state and vm_state may conflict with each other, which needs
 to be resolved case-by-case.
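The three-level state model in the TL;DR can be sketched as plain data plus a small conflict-resolution hook. This is a hypothetical illustration of the proposal, not nova's actual code; all state names beyond those listed above are assumed:

```python
# Hypothetical sketch of the proposed three-level state model.

# vm_state: stable, user-visible state set "top-down" by the API layer.
VM_STATES = {"building", "active", "stopped", "deleted", "error"}

# task_state: transient state set while an API call is in progress.
TASK_STATES = {None, "spawning", "rebooting", "resizing", "deleting"}

# power_state: reported "bottom-up" by the compute worker / hypervisor.
POWER_STATES = {"running", "shutdown", "crashed", "nostate"}

def resolve_conflict(vm_state, power_state):
    """Resolve a vm_state/power_state disagreement case-by-case,
    as the proposal suggests. Returns the corrected vm_state."""
    if vm_state == "active" and power_state in ("shutdown", "crashed"):
        return "error"   # user expects a running VM; it is not
    if vm_state == "stopped" and power_state == "running":
        return "active"  # hypervisor says it is up after all
    return vm_state      # no conflict: keep the stable state

print(resolve_conflict("active", "crashed"))  # error
```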

 It's not a definite guide yet and is up for discussion. I'd like to
 thank vishy and comstud for the early input. comstud: the task_state
 is different from when you looked at it. It's a lot closer to what's
 in the current code.

 The full text is here and is editable by anyone like etherpad.

 https://docs.google.com/document/d/1nlKmYld3xxpTv6Xx0Iky6L46smbEqg7-SWPu_o6VJws/edit?pli=1

 Thanks,

 Yun


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] nova state machine simplification and clarification

2012-05-18 Thread Yun Mao
Gabe,

There is a flag, reclaim_instance_interval, on the API. If it's set to 0
(the default), everything is hard-deleted. Otherwise, deletes are soft
deletes, and the instance is automatically hard-deleted after the
configured interval. There is also an API extension known as
force_delete, which hard-deletes no matter what.
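The delete paths Yun describes reduce to a small decision function. A hypothetical sketch of that behaviour, not nova's actual code:

```python
def delete_mode(reclaim_instance_interval, force_delete=False):
    """Pick the delete path for an instance (sketch only):
    - force_delete (the API extension) always hard-deletes;
    - reclaim_instance_interval == 0 (the default) hard-deletes;
    - otherwise the instance is soft-deleted and reclaimed after
      the configured interval.
    """
    if force_delete:
        return "hard"
    if reclaim_instance_interval == 0:
        return "hard"
    return "soft"  # hard-deleted automatically after the interval

print(delete_mode(0))           # hard
print(delete_mode(3600))        # soft
print(delete_mode(3600, True))  # hard
```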

Right now I *think* task_state is already exposed via some API
(extension?). Otherwise the dashboard won't be able to see it.

Thanks,

Yun


On Fri, May 18, 2012 at 12:26 PM, Gabe Westmaas
gabe.westm...@rackspace.com wrote:
 Also, with this proposal I'd be a lot more interested in exposing task
 state as a part of the API eventually.  This is helpful to communicate
 whether or not other actions would be allowed in certain states.  For
 example, right now we don't allow other actions when a server is
 snapshotting, but while the server is being snapshotted, the state is set
 to ACTIVE.  With these well thought out states, I think we could more
 safely expose those task states, and we would just have to be vigilant
 about adding new ones to make sure they make sense to expose to end users.

 Gabe

 On 5/18/12 10:20 AM, Mark Washenberger mark.washenber...@rackspace.com
 wrote:

Hi Yun,

This proposal looks very good to me. I am glad you included in it the
requirement that hard deletes can take place in any vm/task/power state.

I however feel that a similar requirement exists for revert resize. It
should be possible to issue a RevertResize command for any task_state
(assuming that a resize is happening or has recently happened and is not
yet confirmed). The code to support this capability doesn't exist yet,
but I want to ask you: is it compatible with your proposal to allow
RevertResize in any task state?

Yun Mao yun...@gmail.com said:

 Hi,

 There are vm_states, task_states, and power_states for each VM. The
 use of them is complicated. Some states are confusing, and sometimes
 ambiguous. There is also no guideline for extending or adding new
 states. This
 proposal aims to simplify things, explain and define precisely what
 they mean, and why we need them. A new user-friendly behavior of
 deleting a VM is also discussed.

 A TL;DR summary:
 * power_state is the hypervisor state, loaded "bottom-up" from compute
 worker;
 * vm_state reflects the stable state based on API calls, matching user
 expectation, revised "top-down" within API implementation.
 * task_state reflects the transition state introduced by in-progress
API calls.
 * "hard" delete of a VM should always succeed no matter what.
 * power_state and vm_state may conflict with each other, which needs
 to be resolved case-by-case.

 It's not a definite guide yet and is up for discussion. I'd like to
 thank vishy and comstud for the early input. comstud: the task_state
 is different from when you looked at it. It's a lot closer to what's
 in the current code.

 The full text is here and is editable by anyone like etherpad.


https://docs.google.com/document/d/1nlKmYld3xxpTv6Xx0Iky6L46smbEqg7-SWPu_
o6VJws/edit?pli=1

 Thanks,

 Yun


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] RFC - dynamically loading virt drivers

2012-05-18 Thread Sean Dague

On 05/17/2012 06:38 PM, Vishvananda Ishaya wrote:


So we already have pluggability just by specifying a different compute_driver 
config option.  I don't like that we defer another level in compute and call 
get_connection.  IMO the best cleanup would be to remove get_connection 
altogether and just construct the driver directly based on compute_driver.

The main issue with changing this is breaking existing installs.

So I guess this would be my strategy:

a) remove get_connection from the drivers (and just have it construct the 
'connection' class directly)
b) modify the global get_connection to construct the drivers for backwards 
compatibility
c) modify the documentation to suggest changing drivers by specifying the full 
path to the driver instead of connection_type
d) rename the connection classes to something reasonable representing drivers 
(libvirt.driver:LibvirtDriver() vs libvirt.connection.LibvirtConnection)
e) bonus points if it could be done with a short path for ease of use 
(compute_driver=libvirt.LibvirtDriver vs 
compute_driver=nova.virt.libvirt.driver.LibvirtDriver)


On point c), is the long term view that .conf options are going to 
specify full class names? It seems like this actually gets kind of 
confusing to admins.



What are your thoughts on the following approach, which is related, but 
a little different?


a) have compute_driver take a module name in nova.virt. which is loaded 
with some standard construction method that all drivers would implement 
in their __init__.py. Match all existing module names to connection_type 
names currently in use. Basically just jump to e, but also make all 
drivers conform to some factory interface so libvirt is actually enough 
to get you nova.virt.libvirt.connect()


b) if compute_driver is not specified, use connection_type, but spit out 
a deprecation warning that the option is going away. (Removed fully in 
G). Because compute_drivers map to existing connection_types this just 
works with only a little refactoring in the drivers.


c) remove nova/virt/connection.py

The end result is that every driver is a self contained subdir in 
nova/virt/DRIVERNAME/.
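Either variant boils down to importing a driver class from a dotted path given in configuration. A minimal sketch of that mechanism (demonstrated with a stdlib class, since nova itself is not importable here; nova's own helper for this lives in its utils, and the names below are illustrative):

```python
import importlib

def import_class(dotted_path):
    """Load a class from a full dotted path such as
    'nova.virt.libvirt.driver.LibvirtDriver'.  Sketch of what a
    compute_driver-style config option needs to resolve."""
    module_name, _, class_name = dotted_path.rpartition(".")
    module = importlib.import_module(module_name)
    return getattr(module, class_name)

# Any importable class works the same way:
cls = import_class("json.decoder.JSONDecoder")
print(cls.__name__)  # JSONDecoder
```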



* one test fails for Fake in test_virt_drivers, but only when it's run as the 
full unit test, not when run on its own. It looks like it has to do with 
FakeConnection.instance() caching, which actually confuses me a bit, as I would 
have assumed one unit test file couldn't affect another (i.e. they started a 
clean env each time).


Generally breakage like this is due to some global state that is not cleaned 
up, so if FakeConnection is caching globally, then this could happen.


It is keeping global state, I'll look at fixing that independently.

-Sean

--
Sean Dague
IBM Linux Technology Center
email: sda...@linux.vnet.ibm.com
alt-email: slda...@us.ibm.com


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Nova subsystem branches and feature branches

2012-05-18 Thread Mark McLoughlin
Hey,

On Mon, 2012-05-14 at 14:51 +0200, Thierry Carrez wrote:
 James E. Blair wrote:
  Vish, Thierry, and I spent some time together this week at UDS trying to
  reconcile their needs and your suggestions.  I believe Thierry is going
  to write that up and send it to the list soon.
 
 While at UDS we took some time to discuss a subsystem branch model that
 would work for Vish (PTL/developer), Jim (CI/infra) and me (release
 time). We investigated various models and came up with Vish's
 supercombo subsystem model (see model 5 at the bottom of
 http://wiki.openstack.org/SubsystemsBranchModel for a graph).

It's great you guys got to dig into this in detail, see inline for
thoughts

 In this model:
 
 * You have a master branch which contains release-ready features and
 bugfixes. Milestones are cut directly from it.
 
 * Subsystem branches with area experts are used wherever possible. The
 subsystem maintainer should maintain this branch so that it can directly
 be merged into master when ready (all or nothing). Subsystem
 maintainers are allowed to propose a merge commit to master.

Yep, except I still think having the subsystem maintainer create and
propose merge commits will be a scaling issue in this process.

To walk through an example, you've got some work on a subsystem branch:

  o--o--o  - sub
 /
 o--o--o--o--o--o  - master

The subsystem maintainer is ready for this to be merged into master, so
creates a merge commit:

  o--o--o-o  - sub
 /   /
 o--o--o--o--o--o  - master

and proposes that. However, there's another subsystem branch merge
already queued which gets merged first. This merge isn't a fast-forward,
but Gerrit is able to do it since there are no merge conflicts:

  o--o--o-o  - sub
 /   /
 o--o--o--o--o--o--o  - master
  \\  /
   o--o--o--o+  - sub2

Now, the sub merge commit proposal is accepted but fails to merge
because of merge conflicts. The maintainer creates another merge and
proposes that:

  o--o--o-o--o  - sub
 /   /  /
 o--o--o--o--o--o--o  - master
  \\  /
   o--o--o--o+  - sub2

and that gets approved quickly and merged as a fast-forward:

  o--o--o-o-+  
 /   /   \
 o--o--o--o--o--o--o--o  - sub  - master
  \\  /
   o--o--o--o+  - sub2


The alternative is that the maintainer of the master branch is creating
the merge commits and resolving conflicts (like Linus):

  o--o--o  - sub
 /
 o--o--o--o--o--o  - master
  \
   o--o--o  - sub2

The two subsystem maintainers send their pull requests and the master
maintainer merges them in the same order:

  o--o--o---+  - sub
 /   \
 o--o--o--o--o--o--o--o  - master
  \   /
   o--o--o---+  - sub2

If you compare to the result when gerrit is merging, we have two fewer
merge commits with this model and one fewer subsystem maintainer
roundtrip.

 * Bugfixes get proposed directly to master

I think bugfixes relevant to a particular subsystem need to go through
the subsystem branch, since that's where the domain experts are doing
reviews.

 * Features can be directly proposed to master, although they should be
 redirected to a subsystem branch when one exists for that area
 
 * Only features that are release-ready should be accepted into master.
 Final approval of merges on master will therefore be limited to the
 corresponding project release team.

Woah, I didn't see that one coming :-)

Deciding what is ready for merging is the project leader's job IMHO.

 * Milestones are directly cut from master. A couple of days before each
 milestone, we will have a soft freeze during which only bugfixes are merged
 
 * Between the last milestone and RC1, a soft freeze is enforced during
 which only bugfixes are merged (can last for a couple weeks)
 
 * In order to limit the master freeze, at RC1 and until release, a
 specific release branch is cut from master. That specific release branch
 directly gets release-critical (targeted) bugfixes, and gets merged back
 into master periodically.

This release branch thing (to avoid cherry-picking) is probably workable
since the time between RC1 and release is relatively short, but I think
we'll eventually want to get to a point where the subsystem branches
alleviate much of the pain involved with locking down master between the
last milestone and release.

 Benefits of this model:
 * We enable subsystems, which for larger projects let people specialize
 on an area of the code and avoids having to be an expert in all things
 * Subsystems become a staging ground for features that are not release-ready

This sets off loud alarm bells for me.

A subsystem maintainer should never merge anything which could not be
merged into master without more work. Without that rule, the subsystem
branch gets blocked from merging into master until a feature developer
finishes their work.

For sure, preparation work for features (or 

Re: [Openstack] [metering] Do we need an API and storage?

2012-05-18 Thread Francis J. Lacoste
On 12-05-17 08:14 AM, Doug Hellmann wrote:
 
 
 On Thu, May 17, 2012 at 5:47 AM, Nick Barcet nick.bar...@canonical.com
 mailto:nick.bar...@canonical.com wrote:
 
 On 05/17/2012 11:13 AM, Loic Dachary wrote:
  On 05/16/2012 11:00 PM, Francis J. Lacoste wrote:
 
  I'm now of the opinion that we exclude storage and API from the
  metering project scope. Let's just focus on defining a metering
  message format, bus, and maybe a client-library to make it easy to
  write metering consumers.
 
 
 The plan, as I understand it, is to ensure that all metering messages
 appear on a common bus using a documented format. Deployers who do not
 want the storage system and REST API will not need to use it, and can
 set up their own clients to listen on that bus. I'm not sure how much of
 a client library is needed, since the bus is AMQP and the messages are
 JSON, both of which have standard libraries in most common languages.

Like Thierry Carrez mentioned, the main use for a library was to handle
validation of message signatures in a handy fashion. But I agree that
this client library would just be a thin convenience wrapper around the
bus protocol.
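The signature-validation helper under discussion could be a thin wrapper over stdlib primitives. A hypothetical sketch (key name, field layout, and helper names are all assumptions, not the project's actual design):

```python
import hashlib
import hmac
import json

SECRET = b"shared-metering-secret"  # hypothetical shared key

def sign_message(payload, secret=SECRET):
    """Attach an HMAC-SHA256 signature to a metering message
    (sketch of the kind of helper a client library might offer)."""
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return payload

def verify_message(message, secret=SECRET):
    """Recompute the signature over everything but the signature
    field; consumers would call this on each received message."""
    message = dict(message)
    claimed = message.pop("signature", "")
    body = json.dumps(message, sort_keys=True).encode()
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)

msg = sign_message({"resource": "instance-1", "counter": "cpu", "volume": 42})
print(verify_message(msg))  # True
```

A consumer that only needs the bus could verify and decode messages with just this plus an AMQP client, which is the "thin convenience wrapper" argument above.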


 
 
  Getting rid of the storage imposes a constraint on the billing system
  : it must make 100% sure that once a message is consumed it will be
  reliably archived. It also adds a constraint on the chosen bus : it
  must be able to retain all messages for as long as a consumer needs,
  which may be days or weeks. Or it adds a constraint on the billing
  system which must make 100% sure it will consume all relevant
  messages from the bus at all times before they expire.
 
 
 That's exactly right. It will be easier for me to bridge between our two
 systems by pulling a day's worth of details from the ceilometer API and
 storing them in the billing system using a batch job, rather than trying
 to ensure that the billing database performs well enough to record the
 information in real time. My goal is to not have to change the billing
 system at all.

That's good information to have. Which means that a REST API + storage
component definitely has some value for some integration cases.

-- 
Francis J. Lacoste
francis.laco...@canonical.com



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] RFC - dynamically loading virt drivers

2012-05-18 Thread Doug Hellmann
On Fri, May 18, 2012 at 3:08 PM, Sean Dague sda...@linux.vnet.ibm.comwrote:

 On 05/17/2012 06:38 PM, Vishvananda Ishaya wrote:
 

 So we already have pluggability just by specifying a different
 compute_driver config option.  I don't like that we defer another level in
 compute and call get_connection.  IMO the best cleanup would be to remove
 the get_connection altogether and just construct the driver directly based
 on compute_driver.

 The main issue with changing this is breaking existing installs.

 So I guess this would be my strategy:

 a) remove get_connection from the drivers (and just have it construct the
 'connection' class directly)
 b) modify the global get_connection to construct the drivers for
 backwards compatibility
 c) modify the documentation to suggest changing drivers by specifying the
 full path to the driver instead of connection_type
 d) rename the connection classes to something reasonable representing
 drivers (libvirt.driver:LibvirtDriver() vs
 libvirt.connection.LibvirtConnection)
 e) bonus points if it could be done with a short path for ease of use
 (compute_driver=libvirt.LibvirtDriver vs
 compute_driver=nova.virt.libvirt.driver.LibvirtDriver)


 On point c), is the long term view that .conf options are going to specify
 full class names? It seems like this actually gets kind of confusing to
 admins.


 What are your thoughts on the following approach, which is related, but a
 little different?

 a) have compute_driver take a module name in nova.virt. which is loaded
 with some standard construction method that all drivers would implement in
 their __init__.py. Match all existing module names to connection_type names
 currently in use. Basically just jump to e, but also make all drivers conform
 to some factory interface so libvirt is actually enough to get you
 nova.virt.libvirt.connect()


Andrew Bogott is working on a common plugin architecture. Under that system
plugins will have well-known, but short names and be loaded using
setuptools entry points (allowing them to be named independently of their
code/filesystem layout and packaged and installed separately from
nova). Could the drivers be loaded from these plugins?
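Entry-point-based loading as Doug describes could coexist with dotted-path fallback roughly like this. The group name is hypothetical, and since no such plugins are installed here the example exercises the fallback path:

```python
from importlib import import_module, metadata

DRIVER_GROUP = "nova.virt.drivers"  # hypothetical entry-point group

def load_driver(name):
    """Resolve a driver by short entry-point name, falling back to
    treating the name as a full dotted path.  Sketch of how plugin
    names and the existing compute_driver option could coexist."""
    try:
        eps = metadata.entry_points(group=DRIVER_GROUP)       # 3.10+
    except TypeError:
        eps = metadata.entry_points().get(DRIVER_GROUP, [])   # 3.8/3.9
    for ep in eps:
        if ep.name == name:
            return ep.load()
    # Fallback: treat the name as a dotted path.
    module_name, _, class_name = name.rpartition(".")
    return getattr(import_module(module_name), class_name)

# No plugins are registered here, so the dotted-path fallback fires:
print(load_driver("json.decoder.JSONDecoder").__name__)  # JSONDecoder
```

A packaged driver would then only need to declare an entry point in its own setup metadata, independent of nova's filesystem layout.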



 b) if compute_driver is not specified, use connection_type, but spit out a
 deprecation warning that the option is going away. (Removed fully in G).
 Because compute_drivers map to existing connection_types this just works
 with only a little refactoring in the drivers.

 c) remove nova/virt/connection.py

 The end result is that every driver is a self contained subdir in
 nova/virt/DRIVERNAME/.


  * one test fails for Fake in test_virt_drivers, but only when it's run as
 the full unit test, not when run on it's own. It looks like it has to do
 with FakeConnection.instance() caching, which actually confuses me a bit,
 as I would have assumed one unit test file couldn't affect another (i.e.
 they started a clean env each time).


 Generally breakage like this is due to some global state that is not
 cleaned up, so if FakeConnection is caching globally, then this could
 happen.


 It is keeping global state, I'll look at fixing that independently.


-Sean

 --
 Sean Dague
 IBM Linux Technology Center
 email: sda...@linux.vnet.ibm.com
 alt-email: slda...@us.ibm.com



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [metering] Do we need an API and storage?

2012-05-18 Thread Francis J. Lacoste
On 12-05-18 05:27 AM, Thierry Carrez wrote:
 You can certainly architect it in a way so that storage and API are
 optional: expose metering messages on the bus, and provide an
 optionally-run aggregation component that exposes a REST API (and that
 would use the metering-consumer client library). That would give
 deployers the option to poll via REST or implement their own alternate
 aggregation using the metering-consumer client lib, depending on the
 system they need to integrate with.
 
 Having the aggregation component clearly separate and optional will
 serve as a great example of how it could be done (and what are the
 responsibilities of the aggregation component). I would still do a
 (minimal) client library to facilitate integration, but maybe that's
 just me :)

Right, I like this approach very much.

The main thing I'm worried about is that we are building a system that
has no use in _itself_. It's all about enabling integration with
third-party billing systems, but we aren't building such an integration
as part of this project.

We could easily implement something that lacks focus. Maybe that's
an argument for building a simple billing app as part of OpenStack as a
proof of concept that the metering system can be integrated.

Sure, interested parties will try to integrate it once we have early
versions of it, but that still increases the feedback cycle on our
API/architecture hypotheses.

Cheers

-- 
Francis J. Lacoste
francis.laco...@canonical.com



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [metering] Do we need an API and storage?

2012-05-18 Thread Francis J. Lacoste
On 12-05-18 04:49 PM, Francis J. Lacoste wrote:
  On 12-05-17 08:14 AM, Doug Hellmann wrote:
  The plan, as I understand it, is to ensure that all metering messages
  appear on a common bus using a documented format. Deployers who do not
  want the storage system and REST API will not need to use it, and can
  set up their own clients to listen on that bus. I'm not sure how much of
  a client library is needed, since the bus is AMQP and the messages are
  JSON, both of which have standard libraries in most common languages.

 Like Thierry Carrez mentioned, the main use for a library was to handle
  validation of message signatures in a handy fashion. But I agree that
 this client library would just be a thin convenience wrapper around the
 bus protocol.
 

Of course, it was you in your reply to Thierry that mentioned this :-)


-- 
Francis J. Lacoste
francis.laco...@canonical.com



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [metering] Do we need an API and storage?

2012-05-18 Thread Doug Hellmann
On Fri, May 18, 2012 at 4:53 PM, Francis J. Lacoste 
francis.laco...@canonical.com wrote:

 On 12-05-18 05:27 AM, Thierry Carrez wrote:
  You can certainly architect it in a way so that storage and API are
  optional: expose metering messages on the bus, and provide an
  optionally-run aggregation component that exposes a REST API (and that
  would use the metering-consumer client library). That would give
  deployers the option to poll via REST or implement their own alternate
  aggregation using the metering-consumer client lib, depending on the
  system they need to integrate with.
 
  Having the aggregation component clearly separate and optional will
  serve as a great example of how it could be done (and what are the
  responsibilities of the aggregation component). I would still do a
  (minimal) client library to facilitate integration, but maybe that's
  just me :)

 Right, I like this approach very much.

 The main thing I'm worried about is that we are building a system that
 has no use in _itself_. It's all about enabling integration in
 third-party billing system, but we aren't building such an integration
 as part of this project.


Well, several of us actually *are* building such integration systems at the
same time that we are building ceilometer. That's where these requirements
are coming from! :-) I don't expect to be releasing all of the code for
that integration, but I will release what I can and I am happy to talk
about the general requirements and constraints for the rest on the list to
help with the design of ceilometer.


 We could easily implement something that lacks our focus. Maybe, that's
 an argument for building a simple billing app as part of OpenStack as a
 proof of concept that the metering system can be integrated.


I would not object if you wanted to do that, but I have a high degree of
confidence that we can produce something usable and useful without going
that far.



 Sure, interested parties will try to integrate it once we have early
  versions of it, but that still increases the feedback cycle on our
 API/architecture hypotheses.


I could shorten that feedback cycle if folks would do code reviews for the
outstanding items at
https://review.stackforge.org/#/q/status:open+project:stackforge/ceilometer,n,z
so I could stand up a copy of what has already been implemented. ;-)

In all seriousness, it seems reasonable for us to concentrate on the
front-end pieces (collecting and storing) for this release, and build a
good enough API service to retrieve data. Even if that means I end up
having to retrieve all of the raw records and process them myself, I can
still get my project done as a proof of concept and we can refine the API
as we go along using the experience I (and others) gain this time around.

Doug
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Dashboard] Can't access images/snapshots

2012-05-18 Thread Vishvananda Ishaya
i think you need to update your endpoint to:

http://192.168.111.202:8776/v1/%(tenant_id)s

note that the volume endpoint should be v1 not v2
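The %(tenant_id)s placeholder in a Keystone endpoint template is expanded with ordinary Python %-formatting. A quick sanity check of the URL above (the tenant id below is made up for illustration):

```python
# Keystone endpoint templates use Python %-style substitution for
# per-tenant fields; address/port are taken from the thread.
template = "http://192.168.111.202:8776/v1/%(tenant_id)s"
url = template % {"tenant_id": "b5f3945f"}  # hypothetical tenant id
print(url)  # http://192.168.111.202:8776/v1/b5f3945f
```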

Vish

On May 18, 2012, at 6:01 AM, Leander Bessa Beernaert wrote:

 Ok, I've removed swift from the endpoints and services. Nova volumes is 
 running with a 2GB file as volume on disk and the log files seem ok. However, 
 I still keep getting this error for volume-list 
 (http://paste.openstack.org/show/17991/) and this error for snapshot-list 
 (http://paste.openstack.org/show/17992/).
 
 On Thu, May 17, 2012 at 7:39 PM, Gabriel Hurley gabriel.hur...@nebula.com 
 wrote:
 Two points:
 
  
 
 Nova Volume is a required service for Essex Horizon. That’s documented, and 
 there are plans to make it optional for Folsom. However, not having it should 
 yield a pretty error message in the dashboard, not a KeyError in novaclient, 
 which leads me to my second point…
 
  
 
 It sounds like your Keystone service catalog is misconfigured. If you’re 
 seeing Swift (AKA Object Store) in the dashboard, that means it’s in your 
 keystone service catalog. Swift is a completely optional component and is 
 triggered on/off by the presence of an “object-store” endpoint returned by 
 Keystone.
 
  
 
 I’d check and make sure the services listed in Keystone’s catalog are correct 
 for what’s actually running in your environment.
 
  
 
 All the best,
 
  
 
 -  Gabriel
 
  
 
 From: openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net 
 [mailto:openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net] On 
 Behalf Of Leander Bessa Beernaert
 Sent: Thursday, May 17, 2012 8:45 AM
 To: Sébastien Han
 Cc: openstack@lists.launchpad.net
 Subject: Re: [Openstack] [Dashboard] Can't access images/snapshots
 
  
 
 Now I made sure nova-volume is installed and running. I still keep running 
 into the same problem. It also happens from the command line tool. This is 
 the output produced: http://paste.openstack.org/show/17929/
 
 On Thu, May 17, 2012 at 11:17 AM, Leander Bessa Beernaert 
 leande...@gmail.com wrote:
 
 I have no trouble from the command line. One thing I find peculiar is that I 
 haven't installed swift and nova-volume yet and they show up as enabled 
 services in the dashboard. Is that normal?
 
  
 
 On Wed, May 16, 2012 at 11:39 PM, Sébastien Han han.sebast...@gmail.com 
 wrote:
 
 Hi,
 
  
 
 Do you also have an error when retrieving from the command line?
 
 
 
 ~Cheers!
 
 
 
 
 On Wed, May 16, 2012 at 5:38 PM, Leander Bessa Beernaert 
 leande...@gmail.com wrote:
 
 Hello,
 
  
 
 I keep running into this error when I try to list the images/snapshots in 
 dashboard: http://paste.openstack.org/show/17820/
 
  
 
 This is my local_settings.py file: http://paste.openstack.org/show/17822/ , 
 am I missing something?
 
  
 
 Regards,
 
  
 
 Leander 
 
  
 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] rant on the resize API implementation

2012-05-18 Thread Yun Mao
According to 
http://docs.openstack.org/api/openstack-compute/2/content/Resize_Server-d1e3707.html

The resize operation converts an existing server to a different
flavor, in essence, scaling the server up or down. The original server
is saved for a period of time to allow rollback if there is a problem.
All resizes should be tested and explicitly confirmed, at which time
the original server is removed. All resizes are automatically
confirmed after 24 hours if they are not explicitly confirmed or
reverted.

Whether this feature is useful in the cloud is outside the scope of this
thread; I'd like to discuss the implementation. In the current
implementation, it first casts to the scheduler to decide the
destination host, then shuts down the VM, copies the disk image to the
dest, starts the new VM, and waits for a user confirmation, then either
deletes the old VM image on confirmation or deletes the new VM on
revert.

Problem 1: the image is copied from source to destination via scp/ssh.
This probably means that you will need a password-less ssh private key
setup among all compute nodes. It seems like a security problem.

Problem 2: resize needs to boot up VMs too, once at the destination,
once at the source in case of revert. They have their own
implementation, and look slightly different from spawn which is the
default create instance call.

Problem 3: it's not clear what the semantics is when there are volumes
attached to the VM before resize. What should happen to the VM?

Without the resize API, a user can still do this by first making a
snapshot of a VM, then starting a new VM from that snapshot. It's not
that much of a difference. If getting rid of resize is not an option,
I wonder if it makes more sense to implement the resize function by
calling the snapshot and create compute APIs instead of doing it in
the driver.
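Yun's suggestion amounts to a thin orchestration over existing APIs. A hypothetical sketch with a stand-in API object (none of these method names are nova's real interface):

```python
def resize_via_snapshot(api, server_id, new_flavor):
    """Sketch of implementing resize on top of the existing snapshot
    and create compute APIs instead of inside the virt driver.
    `api` is any object exposing snapshot/create; all names here are
    hypothetical.  Keeping both ids allows a later revert/confirm."""
    image_id = api.snapshot(server_id)         # preserve current disk
    new_id = api.create(image_id, new_flavor)  # boot at the new size
    return new_id, image_id

class FakeAPI:
    """Stand-in that records calls, for illustration only."""
    def __init__(self):
        self.calls = []
    def snapshot(self, server_id):
        self.calls.append(("snapshot", server_id))
        return "image-1"
    def create(self, image_id, flavor):
        self.calls.append(("create", image_id, flavor))
        return "server-2"

api = FakeAPI()
print(resize_via_snapshot(api, "server-1", "m1.large"))
# ('server-2', 'image-1')
```

This sidesteps problem 1 entirely (the image moves through glance rather than scp/ssh between compute nodes), at the cost of a full snapshot round-trip.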

Thanks,

Yun

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Third Party APIs

2012-05-18 Thread Vishvananda Ishaya

On May 18, 2012, at 11:00 AM, Doug Davis wrote:

 
 Vish wrote on 05/17/2012 02:18:19 PM:
 ... 
  3 Feature Branch in Core 
  
  We are doing some work to support Feature and Subsystem branches in 
  our CI system. 3rd party apis could live in a feature branch so that
  they can be tested using our CI infrastructure. This is very similar
  to the above solution, and gives us a temporary place to do 
  development until the internal apis are more stable. Changes to 
  internal apis and 3rd party apis could be done concurrently in the 
  branch and tested. 
 
 can you elaborate on this last sentence?  When you say changes to internal 
 apis do you mean in general or only when in the context of those 
 3rd party APIs needing a change?  I can't see the core developers wanting 
 to do internal API changes in a 3rd party api branch.  I would expect 
 3rd party api branches to mainly include just stuff that sits on top of 
 the internal APIs and (hopefully very few) internal API tweaks. 
 Which to me means that these 3rd party API branches should be continually 
 rebased off of the trunk to catch breaking changes immediately.


I agree.  I was suggesting that, initially, internal api changes could be made in
the feature branch to enable the new top-level apis, tested, and then
proposed for merging back into core.  This is generally easier than trying to
make changes in two separate repositories to support a feature (as we have to
do frequently in openstack).

 
 If I understand it correctly, of those options I like option 3, because 
 then the CI stuff will detect breakages in the 3rd party APIs right away 
 rather than at some later date, when it'll be harder to fix (or undo) those 
 internal API changes.

Well, it won't automatically do so, but it should allow an easy way for the 
third-party developers to run CI tests without setting up their own 
infrastructure.

Vish



[Openstack] [Openstack-qa-team] Summary of QA Weekly Status meeting held on 5/17

2012-05-18 Thread Venkatesan, Ravikumar
The QA team had its weekly IRC meeting today. Here is a summary of the progress 
 and decisions coming out of the meeting.



* Progress

  Tempest tests for Swift are in review.





* Decisions made (Jay is out, Daryl could not make it and David is on vacation)

   None



* Outstanding Items/Issues

Finalize getting the smoke test branch into Gerrit

Rohit to check in JMeter performance tests.

Swift tests to be reviewed and merged.

Tempest concurrent runs enhancement

Limited number of core reviewers/approvers slows down the review process

Covering tests for completed Folsom blueprints

Multi-node test environment for Folsom testing (including stability).





* Outstanding Reviews

  Community, please feel free to provide code reviews on outstanding Tempest 
merge proposals:



   Swift tests: https://review.openstack.org/#/c/7465/
   Refactoring of base test case classes: https://review.openstack.org/#/c/7069/2









Meeting logs :

Summary (html format): http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-05-17-17.04.html

Summary (text format): http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-05-17-17.04.txt

Detailed logs: http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-05-17-17.04.log.html





Regards,

Ravi



--

Mailing list: https://launchpad.net/~openstack-qa-team

Post to : openstack-qa-t...@lists.launchpad.net

Unsubscribe : https://launchpad.net/~openstack-qa-team

More help   : https://help.launchpad.net/ListHelp





Re: [Openstack] [Netstack] nova+quantum vm does bind ip

2012-05-18 Thread Yi Sun
Ok, that helps.  Thanks.
Now I can tell that my issue is a totally different one. For me, the DHCP
query is not sent by the Linux running in the VM; it is the iPXE in the
VM sending the query. My VM did not really boot up at all: it cannot
find a valid bootable device, so I will need to find a way to fix up
the image first.
thx

Yi

On 05/18/2012 05:14 PM, Yong Sheng Gong wrote:
 Can you use virt-viewer to see vnc console?



 -netstack-bounces+gongysh=cn.ibm@lists.launchpad.net wrote: -
 To: openstack@lists.launchpad.net, netst...@lists.launchpad.net
 From: Yi Sun
 Sent by: netstack-bounces+gongysh=cn.ibm@lists.launchpad.net
 Date: 05/19/2012 07:52AM
 Subject: [Netstack] nova+quantum vm does bind ip

 All,
 My issue is close to what was reported by Emilien and others in
 https://answers.launchpad.net/quantum/+question/197701. The only
 difference is that, instead of not receiving a DHCP reply, I do see the
 DHCP reply from the GW interface, but I still cannot ping/ssh to the VM.
 My server configuration is the same as what Emilien has described in his
 document here https://github.com/EmilienM; the only difference is that
 I'm using a third interface (eth2) to connect the quantum networks on the
 two servers, and eth1 is used for management traffic.

 My network configuration is
 id: 1
 IPv4: 10.0.0.0/24
 IPv6: None
 start address: 10.0.0.2
 DNS1: 8.8.4.4
 DNS2: None
 VlanID: None
 project: 61868ec0fd63486db3dbf1740e7111e9
 uuid: 6f765203-7fd8-425e-92b8-cf72b5c1c6cd


 I launched VM in network 1 and it is running on the nova-compute
 server (essex-2), I can see following tcpdump message from the tap
 interface for the VM

 16:19:06.183240 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:7f:f1:69 (oui Unknown), length 395
 16:19:06.184146 IP 10.0.0.1.bootps > 10.0.0.4.bootpc: BOOTP/DHCP, Reply, length 312
 16:19:07.165572 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:7f:f1:69 (oui Unknown), length 395
 16:19:07.165955 IP 10.0.0.1.bootps > 10.0.0.4.bootpc: BOOTP/DHCP, Reply, length 312
 16:19:08.299928 IP6 fe80::2429:e5ff:fed4:8ad5 > ip6-allrouters: ICMP6, router solicitation, length 16
 16:19:08.411921 IP6 fe80::2429:e5ff:fed4:8ad5 > ff02::16: HBH ICMP6, multicast listener report v2, 1 group record(s), length 28
 16:19:09.143041 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:7f:f1:69 (oui Unknown), length 407
 16:19:09.143507 IP 10.0.0.1.bootps > 10.0.0.4.bootpc: BOOTP/DHCP, Reply, length 312
 16:19:14.143971 ARP, Request who-has 10.0.0.4 tell 10.0.0.1, length 42
 16:19:15.143784 ARP, Request who-has 10.0.0.4 tell 10.0.0.1, length 42
 16:19:16.143843 ARP, Request who-has 10.0.0.4 tell 10.0.0.1, length 42
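A capture like the one above can be scanned mechanically to confirm the
symptom (DHCP answered, ARP unanswered). A minimal sketch, assuming tcpdump
one-liners in the format shown:

```python
import re

def unanswered_arp_targets(capture_lines):
    """Return IPs that were ARP-queried (who-has) but never replied (is-at)."""
    asked, answered = set(), set()
    for line in capture_lines:
        m = re.search(r"ARP, Request who-has (\S+) tell", line)
        if m:
            asked.add(m.group(1))  # someone is looking for this IP
        m = re.search(r"ARP, Reply (\S+) is-at", line)
        if m:
            answered.add(m.group(1))  # this IP did respond
    return asked - answered
```

Run on the trace above, this would flag 10.0.0.4 (the VM) as queried but
silent, pointing the debugging at the guest side of the tap interface.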


 No ARP response from the VM.  I tried to log in to the VM console with
 virsh console instance-0003 so that I could run tcpdump inside the
 VM, but I got the following error message:

 root@openstack-2:/etc/libvirt/qemu/networks# virsh console
 instance-0003
 Connected to domain instance-0003
 Escape character is ^]
 error: internal error
 /var/lib/nova/instances/instance-0003/console.log: Cannot request
 read and write flags together


 Could someone suggest how to debug this issue?
 Thanks
 Yi
 PS: my nova.conf files are attached.

 -- 
 Mailing list: https://launchpad.net/~netstack
 https://launchpad.net/%7Enetstack
 Post to : netst...@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~netstack
 https://launchpad.net/%7Enetstack
 More help   : https://help.launchpad.net/ListHelp


 [attachment nova.conf.server removed by Yong Sheng Gong/China/IBM]
 [attachment nova.conf.compute removed by Yong Sheng Gong/China/IBM]



Re: [Openstack] [Netstack] nova+quantum vm does bind ip

2012-05-18 Thread Yong Sheng Gong
Can you use virt-viewer to see vnc console?

-----netstack-bounces+gongysh=cn.ibm@lists.launchpad.net wrote: -----
To: openstack@lists.launchpad.net, netst...@lists.launchpad.net
From: Yi Sun
Sent by: netstack-bounces+gongysh=cn.ibm@lists.launchpad.net
Date: 05/19/2012 07:52AM
Subject: [Netstack] nova+quantum vm does bind ip



Re: [Openstack] Essex horizon dashboard - volume snapshot issue

2012-05-18 Thread John Griffith
On Fri, May 18, 2012 at 7:54 PM, Vijay vija...@yahoo.com wrote:
 Hello,
 On the Essex Dashboard, I am able to create a snapshot of a volume successfully.
 However, when I click on the volume snapshot to look at the details, I get
 Error: Unable to retrieve volume details. This error occurs only when
 retrieving the details of volume snapshots; the details of a volume
 created from scratch show up correctly.

 Also, on the horizon dashboard there is no option to attach the volume
 snapshot to any running instance; I see only a delete snapshot option. If this
 is by design, then how is a volume snapshot going to be used?

 Any help is appreciated.

 Thanks,
 -vj

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to     : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

Hi VJ,

I believe you may have discovered a bug in the current version of the
code.  Are you using the latest Folsom version (i.e. devstack)?  Check
your Horizon logs and you'll notice a number of errors when trying to
perform this operation.  Perhaps some Horizon folks here have some
knowledge of this; otherwise I can look into it further next week and
file a bug if necessary.

Meanwhile, you can use python-novaclient to run 'nova
volume-snapshot-show', which reports the same information the Horizon
details page you're trying to retrieve would show.

Thanks,
John
