[openstack-dev] [Heat] Kilo Summit Topic Proposals

2014-10-19 Thread Angus Salkeld
Hi all

I'd like to use this week's meeting to discuss and prioritise
the summit sessions (these need to be published before the 28th).

There are a lot of proposals that need more info; if you want these to succeed,
you need to put some work into them.

-Angus

https://wiki.openstack.org/wiki/Meetings/HeatAgenda
https://etherpad.openstack.org/p/kilo-heat-summit-topics


Re: [openstack-dev] can't launch instance

2014-10-19 Thread Qiming Teng
On Mon, Oct 20, 2014 at 02:29:10AM +0530, shailendra acharya wrote:
> Hello folks,
> I am trying to install OpenStack multinode on my laptop. I cannot launch an
> instance, and it cannot even update nova-compute. Please help; I am very
> confused.

Please post to openst...@lists.openstack.org; this is a list for
developers.  Thanks.

Qiming


Re: [openstack-dev] [Nova] Turbo hipster problems

2014-10-19 Thread Joshua Hesketh

Hi Gary,

Sorry, we had a mishap over the weekend. The Database CI should be back
up and running now. Let me know if you see any more problems.


Cheers,
Josh

Rackspace Australia

On 10/18/14 1:55 AM, Gary Kotton wrote:

Hi,
Anyone aware why Turbo hipster is failing with:

real-db-upgrade_nova_percona_user_002:th-percona  
Exception: [Errno 2] No such file or directory: 
'/var/lib/turbo-hipster/datasets_user_002' in 0s


Thanks
Gary




[openstack-dev] can't launch instance

2014-10-19 Thread shailendra acharya
Hello folks,
I am trying to install OpenStack multinode on my laptop. I cannot launch an
instance, and it cannot even update nova-compute. Please help; I am very
confused.


Re: [openstack-dev] [keystone] Support for external authentication (i.e. REMOTE_USER) in Havana

2014-10-19 Thread Lohit Valleru
Thank you Nathan,

> Do you have more details on what your mapping is configured like?  There
> have been some changes around this area in Juno, but it's still possible
> that there is some sort of bug here.

Here are my mapping details:

# Search base for users. (string value)
user_tree_dn=ou=People,dc=example,dc=com

# LDAP search filter for users. (string value)
#user_filter=(&(objectClass=posixAccount))

# LDAP objectClass for users. (string value)
user_objectclass = posixAccount

# LDAP attribute mapped to user id. (string value)
user_id_attribute = uidNumber

# LDAP attribute mapped to user name. (string value)
user_name_attribute = uid

# LDAP attribute mapped to user email. (string value)
user_mail_attribute = mail

However, I see that Keystone does not make use of the "user_id_attribute"
when checking authorization for a user; it defaults to "uid".

Also, when I run:

keystone user-role-add --user-id="lohit.valleru" --tenant-id="xxx"
--role-id="xxx"

(I see in the logs that it uses the configuration value I set above, i.e.
uidNumber.)

Now when I run:

keystone user-list, or keystone user-get lohit.valleru

(I see that it defaults to picking up "uid" values in place of uidNumber,
as above.)

So, since it stores "uid" as the user_id_attribute and searches for
"uidNumber" when I run user-role-add, it will always fail unless uidNumber
= uid, which is impractical.

In addition, I am confused about why the user_id_attribute is being
defaulted to "uid". Isn't user_id_attribute supposed to default to
uidNumber (numerical)?

Why is the user_id_attribute being used to search, rather than
user_name_attribute? As far as I understand, it is user_name_attribute
that is stored in the MySQL database.

I would rather expect the logic to behave as follows:

As soon as I authenticate using my Kerberos principal
"lohit.vall...@example.com", Keystone should use "lohit.valleru" to search
against "user_name_attribute", not "user_id_attribute".

On Swift or other object storage, it should use "user_id_attribute" to stay
in sync with legacy file systems, so "user_id_attribute" should be similar
to a POSIX uidNumber.

Since there is no way I can list groups using Keystone, I cannot verify
whether it is mapping group information correctly.

Thank you for the Kerberos information. I can try testing the same, but I
might not be able to get very far until the above issue is resolved.

Lohit

On Sat, Oct 18, 2014 at 10:13 PM, Nathan Kinder  wrote:

>
>
> On 10/18/2014 08:43 AM, lohit.valleru wrote:
> > Hello,
> >
> > Thank you for posting this issue to openstack-dev. I had posted this on
> > the openstack general user list and was waiting for a response.
> >
> > May I know if there has been any progress regarding this issue.
> >
> > I am trying to use external HTTPD authentication with Kerberos and the
> > LDAP identity backend, in Havana.
> >
> > I think a few things have changed with the OpenStack Icehouse release and
> > Keystone 0.9.0 on CentOS 6.5.
> >
> > Currently I face a similar issue to yours: I get a full username with
> > domain as REMOTE_USER from Apache, and Keystone tries to search LDAP
> > along with my domain name. (I have not mentioned any domain information
> > to Keystone; I assume it is called 'default', while my domain is
> > example.com.)
> >
> > I see that External Default and External Domain are no longer supported
> > by Keystone; instead,
> >
> > keystone.auth.plugins.external.DefaultDomain or
> > external=keystone.auth.plugins.external.Domain are valid as of now.
> >
> > I also tried using keystone.auth.plugins.external.kerberos after checking
> > the code, but it does not make any difference.
> >
> > For example:
> >
> > If I authenticate using Kerberos with lohit.vall...@example.com, I see
> > the following in the logs:
> >
> > DEBUG keystone.common.ldap.core [-] LDAP search:
> > dn=ou=People,dc=example,dc=come, scope=1,
> > query=(&(uid=lohit.vall...@example.com)(objectClass=posixAccount)),
> > attrs=['mail', 'userPassword', 'enabled', 'uid'] search_s
> > /usr/lib/python2.6/site-packages/keystone/common/ldap/core.py:807
> > 2014-10-18 02:34:36.459 5592 DEBUG keystone.common.ldap.core [-] LDAP unbind
> > unbind_s /usr/lib/python2.6/site-packages/keystone/common/ldap/core.py:777
> > 2014-10-18 02:34:36.460 5592 WARNING keystone.common.wsgi [-] Authorization
> > failed. Unable to lookup user lohit.vall...@example.com from 172.31.41.104
> >
> > Also, I see that Keystone always searches with "uid", no matter what I
> > enter as the mapping value for userid/username in keystone.conf. I do not
> > understand whether this is a bug or a limitation. (The above logs show
> > that it cannot find a uid matching lohit.vall...@example.com, since LDAP
> > contains the uid without the domain name.)
>
> Do you have more details on what your mapping is configured like?  There
> have been some changes around this area in Juno, but it's still possible
> that t

Re: [openstack-dev] [cinder][nova] Are disk-intensive operations managed ... or not?

2014-10-19 Thread Preston L. Bannister
Jay,

Thanks very much for the insight and links. In fact, I had already visited
*almost* all of the places mentioned. Added clarity is good. :)

Also, to your earlier comment (in an earlier thread) about backup not
really belonging in Nova - in the main I agree. The "backup" API belongs in
Nova (as this maps cleanly to the equivalent in AWS), but the bulk of the
implementation can and should be distinct (in my opinion).

My current work is at:
https://github.com/dreadedhill-work/stack-backup

I also have matching changes to Nova and the Nova client under the same
Github account.

Please note this is very much a work in progress (as you might guess from
my prior comments). This needs a longer, proper write-up and a cleaner Git
history. The code is a fair way along, but should be considered more of a
rough draft than a final version.

For the next few weeks, I am enormously crunched for time, as I have
promised a PoC at a site with a very large OpenStack deployment.

Noted your suggestion about the Rally team. It might be a bit before I can
pursue it. :)

Again, Thanks.





On Sun, Oct 19, 2014 at 10:13 AM, Jay Pipes  wrote:

> Hi Preston, some great questions in here. Some comments inline, but tl;dr
> my answer is "yes, we need to be doing a much better job thinking about how
> I/O intensive operations affect other things running on providers of
> compute and block storage resources"
>
> On 10/19/2014 06:41 AM, Preston L. Bannister wrote:
>
>> OK, I am fairly new here (to OpenStack). Maybe I am missing something.
>> Or not.
>>
>> Have a DevStack, running in a VM (VirtualBox), backed by a single flash
>> drive (on my current generation MacBook). Could be I have something off
>> in my setup.
>>
>> Testing nova backup - first the existing implementation, then my (much
>> changed) replacement.
>>
>> Simple scripts for testing. Create images. Create instances (five). Run
>> backup on all instances.
>>
>> Currently found in:
>> https://github.com/dreadedhill-work/stack-backup/
>> tree/master/backup-scripts
>>
>> First time I started backups of all (five) instances, load on the
>> Devstack VM went insane, and all but one backup failed. Seems that all
>> of the backups were performed immediately (or attempted), without any
>> sort of queuing or load management. Huh. Well, maybe just the backup
>> implementation is naive...
>>
>
> Yes, you are exactly correct. There is no queuing behaviour for any of the
> "backup" operations (I put "backup" operations in quotes because IMO it is
> silly to refer to them as backup operations, since all they are doing
> really is a snapshot action against the instance/volume -- and then
> attempting to be a poor man's cloud cron).
>
> The backup is initiated from the admin_actions API extension here:
>
> https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/contrib/admin_actions.py#L297
>
> which calls the nova.compute.api.API.backup() method here:
>
> https://github.com/openstack/nova/blob/master/nova/compute/api.py#L2031
>
> which, after creating some image metadata in Glance for the snapshot,
> calls the compute RPC API here:
>
> https://github.com/openstack/nova/blob/master/nova/compute/rpcapi.py#L759
>
> Which sends an RPC asynchronous message to the compute node to execute the
> instance snapshot and "rotate backups":
>
> https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2969
>
> That method eventually calls the blocking snapshot() operation on the virt
> driver:
>
> https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L3041
>
> And it is the nova.virt.libvirt.Driver.snapshot() method that is quite
> "icky", with lots of logic to determine the type of snapshot to do and how
> to do it:
>
> https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L1607
>
> The gist of the driver's snapshot() method is a call to ImageBackend.snapshot(),
> which is responsible for doing the actual snapshot of the instance:
>
> https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L1685
>
> and then once the snapshot is done, the method calls to the Glance API to
> upload the snapshotted disk image to Glance:
>
> https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L1730-L1734
>
> All of which is I/O intensive and AFAICT, mostly done in a blocking
> manner, with no queuing or traffic control measures, so as you correctly
> point out, if the compute node daemon receives 5 backup requests, it will
> go ahead and do 5 snapshot operations and 5 uploads to Glance all as fast
> as it can. It will do it in 5 different eventlet greenthreads, but there
> are no designs in place to prioritize the snapshotting I/O lower than
> active VM I/O.
>
>  I will write on this at greater length, but backup should interfere as
>> little as possible with foreground processing. Overloading a host is
>> entirely unacceptable.
>>
>
> Agree with you completely.
>
>  Replaced the backup implementatio

Re: [openstack-dev] [cinder][nova] Are disk-intensive operations managed ... or not?

2014-10-19 Thread Jay Pipes
Hi Preston, some great questions in here. Some comments inline, but 
tl;dr my answer is "yes, we need to be doing a much better job thinking 
about how I/O intensive operations affect other things running on 
providers of compute and block storage resources"


On 10/19/2014 06:41 AM, Preston L. Bannister wrote:

OK, I am fairly new here (to OpenStack). Maybe I am missing something.
Or not.

Have a DevStack, running in a VM (VirtualBox), backed by a single flash
drive (on my current generation MacBook). Could be I have something off
in my setup.

Testing nova backup - first the existing implementation, then my (much
changed) replacement.

Simple scripts for testing. Create images. Create instances (five). Run
backup on all instances.

Currently found in:
https://github.com/dreadedhill-work/stack-backup/tree/master/backup-scripts

First time I started backups of all (five) instances, load on the
Devstack VM went insane, and all but one backup failed. Seems that all
of the backups were performed immediately (or attempted), without any
sort of queuing or load management. Huh. Well, maybe just the backup
implementation is naive...


Yes, you are exactly correct. There is no queuing behaviour for any of 
the "backup" operations (I put "backup" operations in quotes because IMO 
it is silly to refer to them as backup operations, since all they are 
doing really is a snapshot action against the instance/volume -- and 
then attempting to be a poor man's cloud cron).


The backup is initiated from the admin_actions API extension here:

https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/contrib/admin_actions.py#L297

which calls the nova.compute.api.API.backup() method here:

https://github.com/openstack/nova/blob/master/nova/compute/api.py#L2031

which, after creating some image metadata in Glance for the snapshot, 
calls the compute RPC API here:


https://github.com/openstack/nova/blob/master/nova/compute/rpcapi.py#L759

Which sends an RPC asynchronous message to the compute node to execute 
the instance snapshot and "rotate backups":


https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2969

That method eventually calls the blocking snapshot() operation on the 
virt driver:


https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L3041

And it is the nova.virt.libvirt.Driver.snapshot() method that is quite 
"icky", with lots of logic to determine the type of snapshot to do and 
how to do it:


https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L1607

The gist of the driver's snapshot() method is a call to
ImageBackend.snapshot(), which is responsible for doing the actual
snapshot of the instance:


https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L1685

and then once the snapshot is done, the method calls to the Glance API 
to upload the snapshotted disk image to Glance:


https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L1730-L1734

All of which is I/O intensive and AFAICT, mostly done in a blocking 
manner, with no queuing or traffic control measures, so as you correctly 
point out, if the compute node daemon receives 5 backup requests, it 
will go ahead and do 5 snapshot operations and 5 uploads to Glance all 
as fast as it can. It will do it in 5 different eventlet greenthreads, 
but there are no designs in place to prioritize the snapshotting I/O 
lower than active VM I/O.
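
To make the point concrete, here is a minimal sketch (not the Nova
implementation; the names and the limit of two are hypothetical) of how the
snapshot/upload work could be bounded on a compute node with an eventlet
semaphore, so that five simultaneous backup requests do not all hit the disk
at once:

from eventlet import semaphore

# Allow at most two concurrent snapshot/upload operations on this node.
_SNAPSHOT_SLOTS = semaphore.Semaphore(2)

def throttled_snapshot(do_snapshot, *args, **kwargs):
    """Run a heavy snapshot/upload callable under a concurrency limit."""
    with _SNAPSHOT_SLOTS:
        return do_snapshot(*args, **kwargs)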



I will write on this at greater length, but backup should interfere as
little as possible with foreground processing. Overloading a host is
entirely unacceptable.


Agree with you completely.


Replaced the backup implementation so it does proper queuing (among
other things). Iterating forward - implementing and testing.


Is this code up somewhere we can take a look at?


Fired off snapshots on five Cinder volumes (attached to five instances).
Again the load shot very high. Huh. Well, in a full-scale OpenStack
setup, maybe storage can handle that much I/O more gracefully ... or
not. Again, should taking snapshots interfere with foreground activity?
I would say, most often not. Queuing and serializing snapshots would
strictly limit the interference with foreground. Also, very high end
storage can perform snapshots *very* quickly, so serialized snapshots
will not be slow. My take is that the default behavior should be to
queue and serialize all heavy I/O operations, with non-default
allowances for limited concurrency.

Cleaned up (which required reboot/unstack/stack and more). Tried again.

Ran two test backups (which in the current iteration create Cinder
volume snapshots). Asked Cinder to delete the snapshots. Again, very
high load factors, and in "top" I can see two long-running "dd"
processes. (Given I have a single disk, more than one "dd" is not good.)

Running too many heavyweight operations against storage can lead to
thrashing. Queuing can strictly limit that load, and insure better and
reliable performance

Re: [openstack-dev] [cinder][nova] Are disk-intensive operations managed ... or not?

2014-10-19 Thread Preston L. Bannister
Avishay,

Thanks for the tip on [cinder.conf] volume_clear. The corresponding option
in devstack is CINDER_SECURE_DELETE=False.
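
For reference, a hypothetical DevStack local.conf/localrc excerpt with that
option set (section placement follows the usual local.conf layout and may
vary by DevStack version):

[[local|localrc]]
CINDER_SECURE_DELETE=False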

Also I *may* have been bitten by the related bug:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1023755

(All I know at this point is that the DevStack VM became unresponsive; I have
not yet identified the cause. But the symptoms fit.)

I am not sure whether there are spikes on Cinder snapshot creation. Perhaps
not. (Too many different failures and oddities; I have not sorted them all
yet.)

I am of the opinion that CINDER_SECURE_DELETE=False should be the default for
DevStack, especially as the current default invokes bug-like behavior.

Also, unbounded concurrent "dd" operations are not a good idea. (Which is
generally what you meant, I believe.)

Onwards



On Sun, Oct 19, 2014 at 8:33 AM, Avishay Traeger 
wrote:

> Hi Preston,
> Replies to some of your cinder-related questions:
> 1. Creating a snapshot isn't usually an I/O intensive operation.  Are you
> seeing I/O spike or CPU?  If you're seeing CPU load, I've seen the CPU
> usage of cinder-api spike sometimes - not sure why.
> 2. The 'dd' processes that you see are Cinder wiping the volumes during
> deletion.  You can either disable this in cinder.conf, or you can use a
> relatively new option to manage the bandwidth used for this.
>
> IMHO, deployments should be optimized to not do very long/intensive
> management operations - for example, use backends with efficient snapshots,
> use CoW operations wherever possible rather than copying full
> volumes/images, disabling wipe on delete, etc.
>
> Thanks,
> Avishay
>
> On Sun, Oct 19, 2014 at 1:41 PM, Preston L. Bannister <
> pres...@bannister.us> wrote:
>
>> OK, I am fairly new here (to OpenStack). Maybe I am missing something. Or
>> not.
>>
>> Have a DevStack, running in a VM (VirtualBox), backed by a single flash
>> drive (on my current generation MacBook). Could be I have something off in
>> my setup.
>>
>> Testing nova backup - first the existing implementation, then my (much
>> changed) replacement.
>>
>> Simple scripts for testing. Create images. Create instances (five). Run
>> backup on all instances.
>>
>> Currently found in:
>>
>> https://github.com/dreadedhill-work/stack-backup/tree/master/backup-scripts
>>
>> First time I started backups of all (five) instances, load on the
>> Devstack VM went insane, and all but one backup failed. Seems that all of
>> the backups were performed immediately (or attempted), without any sort of
>> queuing or load management. Huh. Well, maybe just the backup implementation
>> is naive...
>>
>> I will write on this at greater length, but backup should interfere as
>> little as possible with foreground processing. Overloading a host is
>> entirely unacceptable.
>>
>> Replaced the backup implementation so it does proper queuing (among other
>> things). Iterating forward - implementing and testing.
>>
>> Fired off snapshots on five Cinder volumes (attached to five instances).
>> Again the load shot very high. Huh. Well, in a full-scale OpenStack setup,
>> maybe storage can handle that much I/O more gracefully ... or not. Again,
>> should taking snapshots interfere with foreground activity? I would say,
>> most often not. Queuing and serializing snapshots would strictly limit the
>> interference with foreground. Also, very high end storage can perform
>> snapshots *very* quickly, so serialized snapshots will not be slow. My take
>> is that the default behavior should be to queue and serialize all heavy I/O
>> operations, with non-default allowances for limited concurrency.
>>
>> Cleaned up (which required reboot/unstack/stack and more). Tried again.
>>
>> Ran two test backups (which in the current iteration create Cinder volume
>> snapshots). Asked Cinder to delete the snapshots. Again, very high load
>> factors, and in "top" I can see two long-running "dd" processes. (Given I
>> have a single disk, more than one "dd" is not good.)
>>
>> Running too many heavyweight operations against storage can lead to
>> thrashing. Queuing can strictly limit that load, and insure better and
>> reliable performance. I am not seeing evidence of this thought in my
>> OpenStack testing.
>>
>> So far it looks like there is no thought to managing the impact of disk
>> intensive management operations. Am I missing something?
>>
>>
>>
>>
>>


Re: [openstack-dev] [cinder][nova] Are disk-intensive operations managed ... or not?

2014-10-19 Thread Avishay Traeger
Hi Preston,
Replies to some of your cinder-related questions:
1. Creating a snapshot isn't usually an I/O-intensive operation.  Are you
seeing an I/O spike or CPU load? If it's CPU, I've seen the CPU usage of
cinder-api spike sometimes - not sure why.
2. The 'dd' processes that you see are Cinder wiping the volumes during
deletion.  You can either disable this in cinder.conf, or you can use a
relatively new option to manage the bandwidth used for this.
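
For illustration, a hedged cinder.conf sketch of the wipe-on-delete knobs
(option names as in the Juno-era configuration; the newer bandwidth-limiting
option mentioned above is not shown here):

[DEFAULT]
# Skip wiping volumes on delete entirely:
volume_clear = none
# ...or keep wiping but cap how much of each volume is zeroed (in MiB,
# where 0 means wipe the whole volume):
# volume_clear = zero
# volume_clear_size = 100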

IMHO, deployments should be optimized to avoid very long/intensive
management operations - for example, use backends with efficient snapshots,
use CoW operations wherever possible rather than copying full
volumes/images, disable wipe on delete, etc.

Thanks,
Avishay

On Sun, Oct 19, 2014 at 1:41 PM, Preston L. Bannister 
wrote:

> OK, I am fairly new here (to OpenStack). Maybe I am missing something. Or
> not.
>
> Have a DevStack, running in a VM (VirtualBox), backed by a single flash
> drive (on my current generation MacBook). Could be I have something off in
> my setup.
>
> Testing nova backup - first the existing implementation, then my (much
> changed) replacement.
>
> Simple scripts for testing. Create images. Create instances (five). Run
> backup on all instances.
>
> Currently found in:
> https://github.com/dreadedhill-work/stack-backup/tree/master/backup-scripts
>
> First time I started backups of all (five) instances, load on the Devstack
> VM went insane, and all but one backup failed. Seems that all of the
> backups were performed immediately (or attempted), without any sort of
> queuing or load management. Huh. Well, maybe just the backup implementation
> is naive...
>
> I will write on this at greater length, but backup should interfere as
> little as possible with foreground processing. Overloading a host is
> entirely unacceptable.
>
> Replaced the backup implementation so it does proper queuing (among other
> things). Iterating forward - implementing and testing.
>
> Fired off snapshots on five Cinder volumes (attached to five instances).
> Again the load shot very high. Huh. Well, in a full-scale OpenStack setup,
> maybe storage can handle that much I/O more gracefully ... or not. Again,
> should taking snapshots interfere with foreground activity? I would say,
> most often not. Queuing and serializing snapshots would strictly limit the
> interference with foreground. Also, very high end storage can perform
> snapshots *very* quickly, so serialized snapshots will not be slow. My take
> is that the default behavior should be to queue and serialize all heavy I/O
> operations, with non-default allowances for limited concurrency.
>
> Cleaned up (which required reboot/unstack/stack and more). Tried again.
>
> Ran two test backups (which in the current iteration create Cinder volume
> snapshots). Asked Cinder to delete the snapshots. Again, very high load
> factors, and in "top" I can see two long-running "dd" processes. (Given I
> have a single disk, more than one "dd" is not good.)
>
> Running too many heavyweight operations against storage can lead to
> thrashing. Queuing can strictly limit that load, and insure better and
> reliable performance. I am not seeing evidence of this thought in my
> OpenStack testing.
>
> So far it looks like there is no thought to managing the impact of disk
> intensive management operations. Am I missing something?
>
>
>
>
>


[openstack-dev] [horizon] Passing multiple values in table.Column in Horizon DataTable

2014-10-19 Thread Rajdeep Dua
Hi,
I need to pass two values in the link generated from a tables.Column, as
shown below.


class DatasourcesTablesTable(tables.DataTable):
data_source = tables.Column("column1", verbose_name=_("Column1"))
id = tables.Column("id", verbose_name=_("ID"),
   link="horizon:admin:path1:path2:rows_table" )

The existing link gets the value of "id" in kwargs.
I also need to pass the value of "column1".

Any pointers on how this can be done would be helpful.

Thanks
Rajdeep
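
One possible approach (a sketch only, not tested against a specific Horizon
release): tables.Column accepts a callable for "link", and that callable is
passed the row's datum, so both values can be read from the object and
reversed into a single URL. The attribute names and URL kwargs below are
assumptions about the actual data object and URL pattern.

from django.core.urlresolvers import reverse
from django.utils.translation import ugettext_lazy as _

from horizon import tables


def rows_table_link(datum):
    # Build the URL from both values on the row object; the kwargs must
    # match the named groups of the target URL pattern (assumed here).
    return reverse("horizon:admin:path1:path2:rows_table",
                   kwargs={"id": datum.id, "column1": datum.column1})


class DatasourcesTablesTable(tables.DataTable):
    data_source = tables.Column("column1", verbose_name=_("Column1"))
    id = tables.Column("id", verbose_name=_("ID"), link=rows_table_link)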


[openstack-dev] [cinder][nova] Are disk-intensive operations managed ... or not?

2014-10-19 Thread Preston L. Bannister
OK, I am fairly new here (to OpenStack). Maybe I am missing something. Or
not.

Have a DevStack, running in a VM (VirtualBox), backed by a single flash
drive (on my current generation MacBook). Could be I have something off in
my setup.

Testing nova backup - first the existing implementation, then my (much
changed) replacement.

Simple scripts for testing. Create images. Create instances (five). Run
backup on all instances.

Currently found in:
https://github.com/dreadedhill-work/stack-backup/tree/master/backup-scripts

First time I started backups of all (five) instances, load on the Devstack
VM went insane, and all but one backup failed. Seems that all of the
backups were performed immediately (or attempted), without any sort of
queuing or load management. Huh. Well, maybe just the backup implementation
is naive...

I will write on this at greater length, but backup should interfere as
little as possible with foreground processing. Overloading a host is
entirely unacceptable.

Replaced the backup implementation so it does proper queuing (among other
things). Iterating forward - implementing and testing.

Fired off snapshots on five Cinder volumes (attached to five instances).
Again the load shot very high. Huh. Well, in a full-scale OpenStack setup,
maybe storage can handle that much I/O more gracefully ... or not. Again,
should taking snapshots interfere with foreground activity? I would say,
most often not. Queuing and serializing snapshots would strictly limit the
interference with foreground. Also, very high end storage can perform
snapshots *very* quickly, so serialized snapshots will not be slow. My take
is that the default behavior should be to queue and serialize all heavy I/O
operations, with non-default allowances for limited concurrency.

Cleaned up (which required reboot/unstack/stack and more). Tried again.

Ran two test backups (which in the current iteration create Cinder volume
snapshots). Asked Cinder to delete the snapshots. Again, very high load
factors, and in "top" I can see two long-running "dd" processes. (Given I
have a single disk, more than one "dd" is not good.)

Running too many heavyweight operations against storage can lead to
thrashing. Queuing can strictly limit that load and ensure better, more
reliable performance. I am not seeing evidence of this thought in my
OpenStack testing.
OpenStack testing.

So far it looks like there is no thought to managing the impact of disk
intensive management operations. Am I missing something?
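
As a minimal sketch of the "queue and serialize, with a non-default allowance
for limited concurrency" default argued for above (hypothetical names; this is
not the replacement implementation mentioned earlier):

from eventlet import greenpool


class HeavyIOQueue(object):
    """Funnel heavy disk operations through a bounded pool (serial by default)."""

    def __init__(self, concurrency=1):
        self._pool = greenpool.GreenPool(size=concurrency)

    def submit(self, func, *args, **kwargs):
        # Returns a GreenThread; callers can .wait() on it for the result.
        return self._pool.spawn(func, *args, **kwargs)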


[openstack-dev] [neutron] Change of ownership in Traffic Steering blueprint

2014-10-19 Thread Carlos Gonçalves
Hi all,

As the original author of the Traffic Steering blueprint [1] while I worked
for Instituto de Telecomunicacoes, I would like to appoint Igor Cardoso (in
CC) as the new owner, if no one opposes it. Igor will carry the blueprint
proposal and its development forward, targeting the Kilo release [2]. I will
still be working closely with him, but now as a reviewer.

Igor is about to finish his Master's thesis on the topic "Network
Infraestructure Control for Virtual Campuses" at the University of Aveiro,
Portugal. Over the past months, Igor has gained much experience in OpenStack
deployment and development, focusing on extending Neutron as part of his
studies. As a Researcher at Instituto de Telecomunicacoes, he will continue
taking the traffic steering work further.

Igor and I will be attending the OpenStack Summit in Paris. If you have any
questions or suggestions, please feel free to poke us. In the meantime you
can also reach us by email or IRC (nicknames igordcard and cgoncalves).


Cheers,
Carlos Goncalves

[1]
https://review.openstack.org/#/q/topic:bp/traffic-steering-abstraction,n,z
[2] https://wiki.openstack.org/wiki/Neutron/AdvancedServices/JunoPlan


[openstack-dev] [Fuel] Pluggable framework in Fuel: first prototype ready

2014-10-19 Thread Mike Scherbakov
Hi all,
I moved this conversation to openstack-dev to get a broader audience, since
we started to discuss technical details.

Raw notes from demo session:
https://etherpad.openstack.org/p/cinder-neutron-plugins-second-demo.

Let me start by answering a few of the questions below from Roman & Nathan.

> How are we planning to distribute fuel plugin builder and its updates?
> Ideally, it should be available externally (outside of master node). I
> don't want us to repeat the same mistake as we did with Fuel client, which
> doesn't seem to be usable as an external dependency.

The plan was to have Fuel Plugin Builder (fpb) on PyPI. Ideally it should
be backward compatible with older Fuel releases, i.e. when Fuel 7.0 is out,
you should still be able to create a plugin for Fuel 6.0. If that turns out
to be overcomplicated, I suggested producing an fpb for every Fuel release
and naming it like fpb60, fpb61, fpb70, etc. Then it becomes easier to
support and maintain plugin builders for specific versions of Fuel.
Speaking about Fuel Client - there is no mistake. It has been discussed
dozens of times; it is just a lack of resources to get it on PyPI and to
fix a few other things. I hope it can be done as part of the efforts from
[2].
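
To make the intended flow concrete, a hypothetical end-to-end example that
combines the PyPI plan above with the build/install commands proposed later
in this thread (package name, flags and the archive name are illustrative and
may change):

pip install fpb61                          # builder matching a given Fuel release
fpb --build fuel-awesome-plugin/           # produces fuel-awesome-plugin-1.2.3.tar
scp fuel-awesome-plugin-1.2.3.tar root@fuel-master:
fuel --install-plugin fuel-awesome-plugin-1.2.3.tar   # run on the master node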

> - Perhaps we have a separate settings tab just for plug-ins? For some
> complex plug-ins, they might require a dedicated tab. If we have too many
> tabs, it could get messy.



> Shall we consider a separate place in UI (tab) for plugins? Settings tab
> seems to be overloaded.


This is certainly under planning and discussion for future releases. See
[1], for example. For 6.0, we agreed that we can just extend the existing
Settings tab with plugin-related fields.

One minor thing from me, which I forgot to mention during the demo: the
verbosity of the fpb run. I understand it might sound like bikeshedding
now, but I believe that if we get it right from the very beginning, we can
save some time later. So I would suggest normal, short INFO output by
default, and verbose output with --debug.

Thanks for the feedback, folks!!!

[1]
https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg37196.html
[2]
https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg37001.html


-- Forwarded message --
From: Nathan Trueblood 
Date: Sat, Oct 18, 2014 at 3:24 AM
Subject: Re: plugins

Agreed - I thought this initial PoC was great.

A few initial thoughts about settings in the UI and plug-in in general:

- Perhaps we have a separate settings tab just for plug-ins? For some
complex plug-ins, they might require a dedicated tab. If we have too many
tabs, it could get messy.
- It seems like we should consider how we handle the VMware settings in
light of plug-ins as well, since with VMware we have a lot of settings to
configure and settings validation.
- Do we offer any kind of validation for settings on plug-ins? Or some way
for the developer to ensure that settings that cannot be defaulted or
computed get requested for the plug-in?

- We need to think carefully about both the plug-in developer experience
(how hard it is to test, get error messages, etc.) and the experience for
the user who deploys the plug-in into an environment.


-Nathan

On Fri, Oct 17, 2014 at 4:13 PM, Roman Alekseenkov <
ralekseen...@mirantis.com> wrote:
>
>
> I watched both videos (creating a file with the text from UI && installing
> and starting a service).
>
> It looks pretty good!! Some initial feedback/questions:
>
>1. I like the fact that fuel plugin builder appends version to the
>name and makes it "fuel-awesome-plugin-1.2.3.tar". The approach is similar
>to Java/Maven and is a good one.
>2. I feel like we should not require user to unpack the plugin before
>installing it. Moreover, we may chose to distribute plugins in our own
>format, which we may potentially change later. E.g. "lbaas-v2.0.fp". I'd
>rather stick with two actions:
>   - Assembly (externally): fpb --build 
>   - Installation (on master node): fuel --install-plugin 
>3. How are we planning to distribute fuel plugin builder and its
>updates? Ideally, it should be available externally (outside of master
>node). I don't want us to repeat the same mistake as we did with Fuel
>client, which doesn't seem to be usable as an external dependency.
>4. How do we handle errors?
>   - What happens if an error occurs during plugin installation?
>   - What happens if an error occurs during plugin execution? Does it
>   (should it?) fail the deployment? Will we show user an error message 
> with
>   the name of plugin that failed?
>   5. Shall we consider a separate place in UI (tab) for plugins?
>Settings tab seems to be overloaded.
>6. When are we planning to focus on the 2 plugins which were
>identified as must-haves for 6.0? Cinder & LBaaS
>
> Once again, great job guys!
>
> Thanks,
> Roman
>
> On Fri, Oct 17, 2014 at 9:32 AM, Mike Scherbakov  > wrote:
>
>> Thanks, Evgeny,