Re: [openstack-dev] [Neutron][Climate] bp:configurable-ip-allocation

2013-09-26 Thread Cristian Tomoiaga
Hello Nikolay,

Looking at this bp, it seems it has been targeted for icehouse-1 :(
I was waiting for this too (for some time now).

Mark, I may be able to help if needed (will this use the same logic as the
abandoned code?).

I am working on something similar to floating IPs but for "normal IPs". I
need to be able to allow project owners to "reserve" specific IPs and
allocate them to VMs as needed (targeted at flat, provider networks where
project owners need to keep IP addresses).

-- 
Regards,
Cristian Tomoiaga
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [elastic-recheck] Announcing elastic-recheck

2013-09-26 Thread Michael Still
On Thu, Sep 26, 2013 at 2:49 PM, Clint Byrum  wrote:
> Excerpts from Joe Gordon's message of 2013-09-25 17:56:15 -0700:
>> Hi All,
>>
>> TL;DR: We will be automatically identifying your flaky tempest runs, so you
>> just have to confirm that you hit bug x, not identify which bug you hit.
>
> \o/

I agree with stick figure man. This is very exciting.

Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Libvirt] Virtio-Serial support for Nova libvirt driver

2013-09-26 Thread Daniel P. Berrange
On Thu, Sep 26, 2013 at 03:05:16AM +, P Balaji-B37839 wrote:
> Hi Ravi,
> 
> We did this as part of a PoC a few months back.
> 
> Daniel can give us more comments on this as he is the lead for Libvirt
> support in Nova.

Just adding the ability to expose virtio-serial devices to the guest
doesn't do much. You need to have a credible story for what connects
and deals with the host side of the device in Nova. For the QEMU guest
agent, libvirt will own the host side and use it for various APIs it
supports. For the SPICE agent, QEMU owns the host side and uses it to
support functionality used by the SPICE client.
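
To make that concrete, here is an illustrative sketch (not the Nova
driver's actual code) of the kind of libvirt <channel> element a guest
needs for the QEMU guest agent case, where libvirt owns the host side of
the virtio-serial device. The socket path below is just an assumed example.

  # Python, standard library only; builds an example virtio-serial
  # channel definition for the QEMU guest agent.
  import xml.etree.ElementTree as ET

  channel = ET.Element('channel', type='unix')
  ET.SubElement(channel, 'source', mode='bind',
                path='/var/lib/libvirt/qemu/org.qemu.guest_agent.0.instance-1.sock')
  ET.SubElement(channel, 'target', type='virtio',
                name='org.qemu.guest_agent.0')
  print(ET.tostring(channel, encoding='unicode'))

The point stands: unless something like libvirt or QEMU consumes the host
end of that socket, exposing the device to the guest achieves little.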

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Does quantum work with vmware esxi ?

2013-09-26 Thread Konglingxian
Does it mean that when using Open vSwitch as the Neutron plugin, we cannot use
both KVM and VMware as the underlying hypervisors at the same time?

It would be much appreciated if folks from VMware could answer this question.

I apologize if this question was already covered and I missed it.


Lingxian Kong
Huawei Technologies Co.,LTD.
IT Product Line CloudOS PDU
China, Xi'an
Mobile: +86-18602962792
Email: konglingx...@huawei.com

From: Dan Wendlandt [mailto:d...@nicira.com]
Sent: Tuesday, September 24, 2013 9:06 AM
To: OpenStack Development Mailing List
Cc: openst...@lists.openstack.org
Subject: Re: [openstack-dev] Does quantum work with vmware esxi ?

Compatibility of various Quantum/Neutron plugins with various Nova hypervisors 
is documented here:
http://docs.openstack.org/grizzly/openstack-network/admin/content/flexibility.html

Dan




On Mon, Sep 23, 2013 at 4:00 PM, openstack learner
<openstacklea...@gmail.com> wrote:
Hi all,
I am thinking about using quantum to do some network setup for the VMs on an
esxi host, but I am not sure whether it will work, because the VMwareVCDriver
is listed as a compute driver.

Last time, when I enabled the quantum service in my devstack installation,
there was a failure when I tried to boot an instance. I don't know if the
failure was caused by my devstack settings or if it arose just because quantum
does not work with esxi. Does anyone know if quantum works with esxi?


Thank you
xin

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [elastic-recheck] Announcing elastic-recheck

2013-09-26 Thread Gary Kotton
Yay!!

On 9/26/13 10:47 AM, "Michael Still"  wrote:

>On Thu, Sep 26, 2013 at 2:49 PM, Clint Byrum  wrote:
>> Excerpts from Joe Gordon's message of 2013-09-25 17:56:15 -0700:
>>> Hi All,
>>>
>>> TL;DR: We will be automatically identifying your flaky tempest runs,
>>>so you
>>> just have to confirm that you hit bug x, not identify which bug you
>>>hit.
>>
>> \o/
>
>I agree with stick figure man. This is very exciting.
>
>Michael
>
>-- 
>Rackspace Australia
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] bug: When compute service is down, performing action on instance will leave task_state in intermediate state

2013-09-26 Thread Chang Bo Guo
Hi All,
When the compute service is down, performing an action on an instance will
leave the instance's task_state in an intermediate state like
'powering-off'/'powering-on' until the compute service is available again.

For details please see https://bugs.launchpad.net/nova/+bug/1228804; I have
also posted a patch for this: https://review.openstack.org/#/c/47733/

But I think the solution needs more discussion. Any suggestions?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][IceHouse] Ceilometer + Kibana + ElasticSearch Integration

2013-09-26 Thread Julien Danjou
On Tue, Sep 24 2013, Steven Gonzales wrote:

Hi Steven,

[…]

> We would love to discuss a way our projects could work together on some of
> these common goals and possibly collaborate. Would it be possible to set up
> a time for us talk briefly?

As Thomas said, feel free to join our meeting next week, and even add an
item on our agenda:

  https://wiki.openstack.org/wiki/Meetings/MeteringAgenda


-- 
Julien Danjou
// Free Software hacker / independent consultant
// http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [elastic-recheck] Announcing elastic-recheck

2013-09-26 Thread Julien Danjou
On Thu, Sep 26 2013, Joe Gordon wrote:

> TL;DR: We will be automatically identifying your flaky tempest runs, so you
> just have to confirm that you hit bug x, not identify which bug you hit.

I love you guys. It's really painful to work these days due to the high
failure rate.

I imagine the comment will indicate what should be done to have a
recheck? I saw Matthew acting like a bot in comments identifying bugs
(and now I understand he's a bot ;-), so should we just use the bug
number we're told to do a recheck, or will the procedure evolve?

-- 
Julien Danjou
-- Free Software hacker - independent consultant
-- http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] PTL nomination

2013-09-26 Thread Lucas Alvares Gomes
no doubt, +1

On Tue, Sep 24, 2013 at 7:04 PM, Devananda van der Veen
 wrote:
> Hi!
>
> I would like to nominate myself for the OpenStack Bare Metal Provisioning
> (Ironic) PTL position.
>
> I have been working with OpenStack for over 18 months, and was a scalability
> and performance consultant at Percona for four years prior. Since '99, I
> have worked as a developer, team lead, database admin, and linux systems
> architect for a variety of companies.
>
> I am the current PTL of the Bare Metal Provisioning (Ironic) program, which
> began incubation during Havana. In collaboration with many fine folks from
> HP, NTT Docomo, USC/ISI, and VirtualTech, I worked extensively on the Nova
> Baremetal driver during the Grizzly cycle. I also helped start the TripleO
> program, which relies heavily on the baremetal driver to achieve its goals.
> During the Folsom cycle, I led the effort to improve Nova's DB API layer and
> added devstack support for the OpenVZ driver. Through that work, I became a
> member of nova-core for a time, though my attention has shifted away from
> Nova more recently.
>
> Once I had seen nova-baremetal and TripleO running in our test environment
> and began to assess our longer-term goals (eg, HA, scalability, integration
> with other OpenStack services), I felt very strongly that bare metal
> provisioning was a separate problem domain from Nova and would be best
> served with a distinct API service and a different HA framework than what is
> provided by Nova. I circulated this idea during the last summit, and then
> proposed it to the TC shortly thereafter.
>
> During this development cycle, I feel that Ironic has made significant
> progress. Starting from the initial "git bisect" to retain the history of
> the baremetal driver, I added an initial service and RPC framework,
> implemented some architectural pieces, and left a lot of #TODO's. Today,
> with commits from 10 companies during Havana (*) and integration already
> underway with devstack, tempest, and diskimage-builder, I believe we will
> have a functional release within the Icehouse time frame.
>
> I feel that a large part of my role as PTL has been - and continues to be -
> to gather ideas from a wide array of individuals and companies interested in
> bare metal provisioning, then translate those ideas into a direction for the
> program that fits within the OpenStack ecosystem. Additionally, I am often
> guiding compromise between the long-term goals, such as firmware management,
> and the short-term needs of getting the project to a fully-functional state.
> To that end, here is a brief summary of my goals for the project in the
> Icehouse cycle.
>
> * API service and client library (likely finished before the summit)
> * Nova driver (blocked, depends on ironic client library)
> * Finish RPC bindings for power and deploy management
> * Finish merging bm-deploy-helper with Ironic's PXE driver
> * PXE boot integration with Neutron
> * Integrate with TripleO / TOCI for automated testing
> * Migration script for existing deployments to move off the nova-baremetal
> driver
> * Fault tolerance of the ironic-conductor nodes
> * Translation support
> * Docs, docs, docs!
>
> Beyond this, there are many long-term goals which I would very much like to
> facilitate, such as:
>
> * hardware discovery
> * better integration with SDN capable hardware
> * pre-provisioning tools, eg. management of bios, firmware, and raid config,
> hardware burn-in, etc.
> * post-provisioning tools, eg. secure-erase
> * boot from network volume
> * secure boot (protect deployment against MITM attacks)
> * validation of signed firmware (protect tenant against prior tenant)
>
> Overall, I feel honored to be working with so many talented individuals
> across the OpenStack community, and know that there is much more to learn as
> a developer, and as a program lead.
>
> (*)
> http://www.stackalytics.com/?release=havana&metric=commits&project_type=All&module=ironic
>
> http://russellbryant.net/openstack-stats/ironic-reviewers-30.txt
> http://russellbryant.net/openstack-stats/ironic-reviewers-180.txt
>
> --
> Devananda
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Case sensitivity & backend databases

2013-09-26 Thread Ralf Haferkamp
On Wed, Sep 25, 2013 at 09:45:32AM +0100, Henry Nash wrote:
> Hi
> 
> Do we specify somewhere whether text field matching in the API is case
> sensitive or in-sensitive?  I'm thinking about filters, as well as user and
> domain names in authentication.  I think our current implementation will
> always be case sensitive for filters  (since we do that in python and do not,
> yet, pass the filters to the backends), while authentication will reflect the
> "case sensitivity or lack thereof" of the underlying database.  I believe
> that MySQL is case in-sensitive by default, while Postgres, sqllite and
> others are case-sensitive by default. 
> If using an LDAP backend, then I think this is case-sensitive.
That heavily depends on which LDAP attributes you use. The usual suspects used
for authentication ("uid", "cn", and keystone's default "sn") are all defined
to be case-insensitive.
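
For example (a hedged sketch using python-ldap; the server URI and DIT layout
are assumptions), a search against an attribute that uses a case-insensitive
matching rule, such as uid, already matches regardless of capitalization:

  import ldap

  conn = ldap.initialize('ldap://ldap.example.com')
  # 'uid' uses caseIgnoreMatch in the standard schema, so this filter
  # also matches an entry whose uid is stored as 'jdoe'.
  results = conn.search_s('ou=users,dc=example,dc=com',
                          ldap.SCOPE_SUBTREE, '(uid=JDoe)')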
 
> The above seems to be inconsistent.  It might become even more so when we
> pass the filters to the backend.  Given that other projects already pass
> filters to the backend, we may also have inter-project inconsistencies that
> bleed through to the user experience.  Should we make at least a
> recommendation that the backend should case-sensitive (you can configure
> MySQL to be so)?  Insist on it? Ignore it and keep things as they are?
For LDAP it will be pretty hard to enforce any particular matching. Especially
if we want to support people using their existing LDAP servers as a backend for
keystone. You could of course enforce this on the LDAP client side (i.e. inside
keystone's LDAP backend), but I am not sure if that is really a good idea. It
might have a negative impact once filtering is implemented in the LDAP backend.

As Dolph already suggested, we should not allow usernames that just differ in
capitalization ("JDoe" vs. "jdoe") to co-exist (which could be an argument for
handling usernames case-insensitively in general).

-- 
Ralf

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [elastic-recheck] Announcing elastic-recheck

2013-09-26 Thread Flavio Percoco

On 25/09/13 17:56 -0700, Joe Gordon wrote:

Hi All,

TL;DR: We will be automatically identifying your flaky tempest runs, so you
just have to confirm that you hit bug x, not identify which bug you hit.



AWESOME!! 



--
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [elastic-recheck] Announcing elastic-recheck

2013-09-26 Thread Christopher Yeoh
On Thu, Sep 26, 2013 at 10:26 AM, Joe Gordon  wrote:

> Hi All,
>
> TL;DR: We will be automatically identifying your flaky tempest runs, so
> you just have to confirm that you hit bug x, not identify which bug you hit.
>
>
This is great!

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Review request for performance bug fix

2013-09-26 Thread Day, Phil
Hi Folks,

Could I get a review of the following change please:   
https://review.openstack.org/#/c/47651/
It fixes a problem where users with the admin role in Neutron can't get a list 
of servers.

It may also address this long standing High Importance issue, 
https://bugs.launchpad.net/nova/+bug/1176446

Thanks
Phil
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse (now featuring software orchestration)

2013-09-26 Thread Steven Hardy
On Wed, Sep 25, 2013 at 11:04:54PM +0200, Thomas Spatzier wrote:
> Excerpt from Clint's mail on 25.09.2013 22:23:07:
> 
> >
> > I think we already have some summit suggestions for discussing HOT,
> > it would be good to come prepared with some visions for the future
> > of HOT so that we can hash these things out, so I'd like to see this
> > discussion continue.
> 
> Absolutely! Can those involved in the discussion check if this seems to be
> covered in one of the session proposals me or others posted recently, and if
> not raise another proposal? This is a good one to have.

There is already a general "HOT Discussion" proposal:

http://summit.openstack.org/cfp/details/78

I'd encourage everyone with HOT functionality they'd like to discuss to
raise a blueprint, with a linked wiki page (or etherpad), then link the BP
as a comment to that session proposal.

That way we can hopefully focus the session when discussing the HOT roadmap
and plans for Icehouse.

As in Portland, I expect we'll need breakout sessions in addition to this,
but we can organize that with those interested during the summit.

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [trove] MySQL HA BP

2013-09-26 Thread Ilya Sviridov
The blueprint https://blueprints.launchpad.net/trove/+spec/mysql-ha

In order to become a production-ready DBaaS, Trove should provide the ability
to deploy and manage highly available databases.

There are several approaches to achieving HA in MySQL: high-availability
resource managers like Pacemaker [1], master-master replication, Percona
XtraDB Cluster [2] based on the Galera library [3], and so on.

But since Trove DB instances run in a cloud environment, the general
approaches are not always the most suitable option and should be discussed.

--
[1] http://clusterlabs.org/
[2] http://www.percona.com/software/percona-xtradb-cluster
[3] https://launchpad.net/galera


With best regards,
Ilya Sviridov


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican] re: Is barbican suitable/ready for production deployment?

2013-09-26 Thread Jarret Raim
A little more detail. We cut our feature complete release along with everyone 
else for Havana M3. We are currently standing up the production environments 
for the code. So, we believe that the codebase is pretty stable, but it has not 
been run in production yet.

As John said, we're happy to help get things deployed and we'd love to have 
someone else beating on the code to find any bugs. I also end up in SF pretty 
frequently (assuming you are located there), so if you want to talk in person, 
just let me know.


Thanks,
Jarret


From: John Wood <john.w...@rackspace.com>
Reply-To: OpenStack List <openstack-dev@lists.openstack.org>
Date: Wednesday, September 25, 2013 10:10 PM
To: OpenStack List <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [barbican] re: Is barbican suitable/ready for 
production deployment?

Hello Pathik,

We are preparing the application for production usage, but it is not yet ready 
to go. We could speak further with you about your production needs if that 
would be of interest.

For evaluation purposes, we have stood up an integration environment. You
could also stand up a local instance of Barbican. The PKCS based HSM plugin
may be used with a SafeNet HSM as well.

Thanks,
John
-
john.w...@rackspace.com


From: Pathik Solanki [psola...@salesforce.com]
Sent: Wednesday, September 25, 2013 6:03 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [barbican] Is barbican suitable/ready for production 
deployment?

Hi Barbican Team,
My team here at salesforce.com is evaluating Barbican 
for our use case of managing secrets. The git repository indicates that 
Barbican is still in development and not ready for production deployment. I 
vaguely remember from the presentation at OpenStack Summit that 
cloudkeep/barbican has production ready code too. Please correct me if I am 
wrong and if there is some production ready instance then please point me to it.

Thanks,
Pathik
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Help us reduce gate resets by being specific about bugs being found

2013-09-26 Thread Sean Dague

As many folks know, gerrit takes comments of either the form

recheck bug #X
or
recheck no bug

To kick off the check queue jobs again to handle flakey tests.

The problem is that we're getting a lot more "no bug" than bugs at this 
point. If a failure happens in the OpenStack gate, it's usually an 
actual OpenStack race somewhere. Figuring out what the top races are is 
*really* important to actually fixing those races, as it gives us focus 
on what the top issues are in OpenStack that we need to fix. That makes 
the gate good for everyone, and means less time babysitting your patches 
through merge.


Now that Matt, Joe, and Clark have built the elastic-recheck bot, you 
will often be given a hint in your review about the most probable race
that was found. Please confirm the bug looks right before rechecking
with it, but it should help expedite finding the right issue. 
http://status.openstack.org/rechecks/ is also helpful in seeing what's 
most recently been causing issues.


Here's the score card of how we are doing now at the project level 
(percentage is the percentage of rechecks with a bug, and the fraction 
shows number with a bug / total rechecks issued)


Project Rechecks percentages (last 3000 gerrit reviews)
openstack/requirements   100% (1 / 1)
openstack/cinder  78% (25 / 32)
openstack/heat66% (8 / 12)
openstack/keystone66% (4 / 6)
openstack/python-keystoneclient   66% (4 / 6)
openstack/swift   52% (10 / 19)
openstack-infra/devstack-gate 50% (2 / 4)
openstack/horizon 50% (5 / 10)
openstack/python-ceilometerclient 44% (4 / 9)
openstack/python-cinderclient 42% (3 / 7)
openstack/glance  40% (2 / 5)
openstack-dev/devstack38% (7 / 18)
openstack/neutron 33% (6 / 18)
openstack/python-novaclient   33% (1 / 3)
openstack/nova25% (34 / 134)
openstack/tempest 17% (12 / 69)
openstack/ceilometer  11% (4 / 34)
stackforge/taskflow0% (0 / 1)
openstack/ironic   0% (0 / 5)
openstack/oslo-incubator   0% (0 / 3)
openstack-dev/hacking  0% (0 / 1)
openstack/trove0% (0 / 8)
stackforge/savanna 0% (0 / 1)
openstack-infra/config 0% (0 / 1)
openstack/python-neutronclient 0% (0 / 1)
stackforge/rally   0% (0 / 3)


-Sean

--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] 0.2.2 release!

2013-09-26 Thread Sergey Lukjanov
Hello everyone,

I'm glad to announce the 0.2.2 release of Savanna. This release contains 4 
components: Savanna core, plugin for OpenStack Dashboard, diskimage-builder 
elements and alpha version of python bindings.

Release Notes (https://wiki.openstack.org/wiki/Savanna/ReleaseNotes/0.2.2): 

Features implemented:
* Hortonworks Data Platform plugin added with cluster scaling support and docs.

Bug fixes:
* documentation on scaling improved;
* UI improvements and minor bug-fixes;
* improvements and minor bug-fixes.

Savanna wiki: https://wiki.openstack.org/wiki/Savanna
Launchpad project: https://launchpad.net/savanna
Savanna docs: https://savanna.readthedocs.org/en/0.2.2/index.html (quickstart 
and installation, user and dev guides)

Enjoy!

P.S. It's the last release of Savanna for previous 0.2 version.

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] New Pycharm License

2013-09-26 Thread Andrew Melton

Hey Devs,
 
It's almost been a year since I sent out the first email and I've been getting 
a few emails lately about alerts that the current license is about to expire. 
Well, I've got a hold of our new license, good for another year. This'll give 
you access to the new Pro edition of Pycharm and any updates for a year.
 
As this list is public, I can't email the license out to everyone, so please 
reply to this email and I'll get you the license.
 
Also, please note that if your current license expires, Pycharm will continue 
to work. You will just stop receiving updates until you've entered this new 
license.
 
Thanks,
Andrew Melton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] MySQL HA BP

2013-09-26 Thread Michael Basnight
On Sep 26, 2013, at 6:07 AM, Ilya Sviridov wrote:

> The blueprint https://blueprints.launchpad.net/trove/+spec/mysql-ha
> 
> In order to become production ready DBaaS, Trove should provide ability to 
> deploy and manage high available database.
> 
> There are several approaches to achive HA in MySQL: driven by high 
> availability resource managers like Peacemaker [1] ,master-master 
> replication, Percona XTraDB Cluster [2] based on Galera library [3] so on.
> 
> But, as far as Trove DB instances are running in cloud environment, general 
> approach can be not always the best suitable option and should be discussed
> 
> --
> [1] http://clusterlabs.org/
> [2] http://www.percona.com/software/percona-xtradb-cluster
> [3] https://launchpad.net/galera

This, to me, is a perfect fit for our (work in progress) clustering API. The
Codership (galera) team and the Continuent (tungsten) team have both expressed
interest in helping to guide the Trove team toward building an awesome
clustering product. I think it'll be up to the people who want to contribute
to define which MySQL clustering implementation will be the best fit in Trove.
Each of the clustering products has pros/cons, so it's hard to say Trove will
support only X for clustering. I'm personally a fan of both of the
aforementioned clustering products because they are different. Galera is
synchronous multi-master, and tungsten is async master/slave. Our first
implementation is, of course, basic master/slave builtin replication, and it
too solves a problem.

So to answer your question, I think we can build a MySQL cluster using
different technologies and let operators choose what they want to support. And
better yet, if operators choose to support > 1 cluster type, we can let
customers choose what they feel is the right choice for their data.


signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] PTL candidacy

2013-09-26 Thread Duncan Thomas
I would like to run for election as Cinder PTL for the upcoming
Icehouse release.

I've been involved with OpenStack for more than 2 years. I've been an
active and vocal member of the Cinder core team since Cinder was
formed and have contributed variously to debates, reviews, designs
and code. Before that I was involved with high-performance compute
clusters and networking, both as a developer and from an ops
perspective.

I think Cinder is a strong and healthy project, and I'd like to
continue to build on the great work John Griffith has been doing as
PTL. We have at least 16 different back-ends supported, and have been
very successful in allowing many levels of contribution and
involvement.

If elected, my main drives for the Icehouse release will be:

- Cross project coordination - several features have suffered somewhat
from the fact that coordination is needed between cinder and other
projects, particularly nova and horizon. I'd like to work with the PTL
and core team of those projects to see what we can do to better align
expectations and synchronisation between projects, so that features
like volume encryption, read-only volumes, ACLs etc. can be landed
more smoothly

- Deployment issues - several large companies now deploy code from
trunk between releases, and perform regular rolling releases. I'd like
to focus on what makes that difficult and what we can do in terms of
reviews, testing and design to make that a smoother process. This
includes tying into OSLO and other projects that are working on this.
Task-flow is a good example of a project that made significant useful
progress by working with cinder as a first user before moving out to
other projects.

- Grow the cinder community, and encourage new contributors in the form of
testing and validation as well as new features. Generally keep the
fantastic inclusive nature of the cinder project going, and encourage
the healthy debates that have allowed us to come up with great
solutions.

- Blueprint management - Many blueprints are currently very thin
indeed, often no more than a sentence or two. I'd like to see more
push-back on blueprints that do not provide a reasonable amount of detail
before the code comes along, in order to allow discussion and debate
earlier in the development cycle.

There are many other sub-projects within cinder, such as driver
validation, that I support and intend to do my best to see succeed.



-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] Configuration API BP

2013-09-26 Thread Michael Basnight
On Sep 25, 2013, at 7:16 PM, Craig Vyvial wrote:

> So we have a blueprint for this and there are a couple things to point out 
> that have changed since the inception of this BP.
> 
> https://blueprints.launchpad.net/trove/+spec/configuration-management
> 
> This is an overview of the API calls for 
> 
> POST /configurations - create config
> GET  /configurations - list all configs
> PUT  /configurations/{id} - update all the parameters
> 
> GET  /configurations/{id} - get details on a single config
> GET  /configurations/{id}/{key} - get single parameter value that was set for 
> the configuration
> 
> PUT  /configurations/{id}/{key} - update/insert a single parameter
> DELETE  /configurations/{id}/{key} - delete a single parameter
> 
> GET  /configurations/{id}/instances - list of instances the config is 
> assigned to
> GET  /configurations/parameters - list of all configuration parameters
> 
> GET  /configurations/parameters/{key} - get details on a configuration 
> parameter
> 
> There has been talk about using PATCH http action instead of PUT action for 
> the update of individual parameter(s).
> 
> PUT  /configurations/{id}/{key} - update/insert a single parameter
> and/or
> PATCH  /configurations/{id} - update/insert parameter(s)
> 
> 
> I am not sold on the idea of using PATCH unless its widely used in other 
> projects across Openstack. What does everyone think about this?
> 
> If there are any concerns around this please let me know.

I'm a fan of PATCH. I'd rather have a different verb on the same resource than
create a new sub-resource just to do the job of what PATCH defines. I'm not
sure [1] gives us any value, and I think it's only around because of [2]. I
can see PATCH removing the need for [1], simplifying the API, and of course
removing the need for [2], since it _is_ the updating of a single kv pair. And
I know keystone and glance use PATCH for "updates" in their API as well.

[1]  GET /configurations/{id}/{key} 
[2] PUT  /configurations/{id}/{key} 
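
As a rough sketch of the difference being discussed (hypothetical endpoint and
payload shape, not the final Trove API), PUT replaces the whole set of
parameters while PATCH merges in only the keys supplied:

  import requests

  # Assumed URL and body layout, purely for illustration.
  url = 'http://trove.example.com/v1.0/1234/configurations/abcd'

  # PUT: the body is the complete new set of parameters.
  requests.put(url, json={'configuration': {
      'values': {'max_connections': 500, 'wait_timeout': 120}}})

  # PATCH: only the parameters present in the body are updated/inserted.
  requests.patch(url, json={'configuration': {
      'values': {'max_connections': 600}}})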


signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] New Pycharm License

2013-09-26 Thread Maxime Vidori
Hi!

I'm sending this email to get my PyCharm license.

Thank you :)


- Original Message -
From: "Andrew Melton" 
To: openstack-dev@lists.openstack.org
Sent: Thursday, September 26, 2013 5:41:17 PM
Subject: [openstack-dev] New Pycharm License



Hey Devs, 



It's almost been a year since I sent out the first email and I've been getting 
a few emails lately about alerts that the current license is about to expire. 
Well, I've got a hold of our new license, good for another year. This'll give 
you access to the new Pro edition of Pycharm and any updates for a year. 



As this list is public, I can't email the license out to everyone, so please 
reply to this email and I'll get you the license. 



Also, please note that if your current license expires, Pycharm will continue 
to work. You will just stop receiving updates until you've entered this new 
license. 



Thanks, 

Andrew Melton 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Case sensitivity & backend databases

2013-09-26 Thread Brant Knudson
On Thu, Sep 26, 2013 at 4:44 AM, Ralf Haferkamp  wrote:

>
> As Dolph already suggested we should not allow usernames that just differ
> in
> capitalization  ("JDoe" vs. "jdoe") to co-exist. (Which could be an
> argument
> for handling users case-insensitive in general)
>

This enforcement should be handled by the LDAP server if the organization
thinks it's important to have users with names unique without respect for
capitalization. LDAP servers can also enforce normal security enhancers
like password strength, expiration, and locking out users after invalid
logins that the SQL backend doesn't support.

My recommendation is that Keystone should get away from dealing with
creating/updating users to avoid reinventing the wheel (and making a wheel
that's missing bells and whistles). If comparing user names is a problem,
let's limit it to our custom SQL backend and not let it spread to other
more featureful backends.

- Brant
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Case sensitivity & backend databases

2013-09-26 Thread Dolph Mathews
On Thu, Sep 26, 2013 at 11:02 AM, Brant Knudson  wrote:

>
> On Thu, Sep 26, 2013 at 4:44 AM, Ralf Haferkamp  wrote:
>
>>
>> As Dolph already suggested we should not allow usernames that just differ
>> in
>> capitalization  ("JDoe" vs. "jdoe") to co-exist. (Which could be an
>> argument
>> for handling users case-insensitive in general)
>>
>
> This enforcement should be handled by the LDAP server if the organization
> thinks it's important to have users with names unique without respect for
> capitalization. LDAP servers can also enforce normal security enhancers
> like password strength, expiration, and locking out users after invalid
> logins that the SQL backend doesn't support.
>
> My recommendation is that Keystone should get away from dealing with
> creating/updating users to avoid reinventing the wheel (and making a wheel
> that's missing bells and whistles). If comparing user names is a problem,
> let's limit it to our custom SQL backend and not let it spread to other
> more featureful backends.
>

++; this confusion specifically stems from keystone's implementation
against SQL, where keystone manages users directly


>
>
> - Brant
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] Configuration API BP

2013-09-26 Thread Craig Vyvial
I see PATCH used all over the keystone v3 API. It's not used at all in the
older versions. I take that to mean that they did not want to add confusion or
too many changes in the current version of the API. [1]

Although, since the Configuration API is technically a new API being added to
the core of Trove, we should consider whether to enhance the API now or keep
it on par with the way the rest of the API looks.

After looking over the docs[1] I am on the fence and I would like others to
weigh in.

[1] http://api.openstack.org/api-ref-identity.html


On Thu, Sep 26, 2013 at 10:49 AM, Michael Basnight wrote:

> On Sep 25, 2013, at 7:16 PM, Craig Vyvial wrote:
>
> > So we have a blueprint for this and there are a couple things to point
> out that have changed since the inception of this BP.
> >
> > https://blueprints.launchpad.net/trove/+spec/configuration-management
> >
> > This is an overview of the API calls for
> >
> > POST /configurations - create config
> > GET  /configurations - list all configs
> > PUT  /configurations/{id} - update all the parameters
> >
> > GET  /configurations/{id} - get details on a single config
> > GET  /configurations/{id}/{key} - get single parameter value that was
> set for the configuration
> >
> > PUT  /configurations/{id}/{key} - update/insert a single parameter
> > DELETE  /configurations/{id}/{key} - delete a single parameter
> >
> > GET  /configurations/{id}/instances - list of instances the config is
> assigned to
> > GET  /configurations/parameters - list of all configuration parameters
> >
> > GET  /configurations/parameters/{key} - get details on a configuration
> parameter
> >
> > There has been talk about using PATCH http action instead of PUT action
> for the update of individual parameter(s).
> >
> > PUT  /configurations/{id}/{key} - update/insert a single parameter
> > and/or
> > PATCH  /configurations/{id} - update/insert parameter(s)
> >
> >
> > I am not sold on the idea of using PATCH unless its widely used in other
> projects across Openstack. What does everyone think about this?
> >
> > If there are any concerns around this please let me know.
>
> Im a fan of PATCH. Id rather have a different verb on the same resource
> than creating a new sub-resource just to do the job of what PATCH defines.
> Im not sure the [1] gives us any value, and i think its only around because
> of [2]. I can see PATCH removing the need for [1], simplifying the API. And
> of course removing the need for [2] since it _is_ the updating of a single
> kv pair. And i know keystone and glance use PATCH for "updates" in their
> API as well.
>
> [1]  GET /configurations/{id}/{key}
> [2] PUT  /configurations/{id}/{key}
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Case sensitivity & backend databases

2013-09-26 Thread Clint Byrum
Excerpts from Henry Nash's message of 2013-09-25 01:45:32 -0700:
> Hi
> 
> Do we specify somewhere whether text field matching in the API is case 
> sensitive or in-sensitive?  I'm thinking about filters, as well as user and 
> domain names in authentication.  I think our current implementation will 
> always be case sensitive for filters  (since we do that in python and do not, 
> yet, pass the filters to the backends), while authentication will reflect the 
> "case sensitivity or lack thereof" of the underlying database.  I believe 
> that MySQL is case in-sensitive by default, while Postgres, sqllite and 
> others are case-sensitive by default.  If using an LDAP backend, then I think 
> this is case-sensitive.
> 
> The above seems to be inconsistent.  It might become even more so when we 
> pass the filters to the backend.  Given that other projects already pass 
> filters to the backend, we may also have inter-project inconsistencies that 
> bleed through to the user experience.  Should we make at least a 
> recommendation that the backend should case-sensitive (you can configure 
> MySQL to be so)?  Insist on it? Ignore it and keep things as they are?

The collation controls case sensitivity. The default collations in MySQL
are all case-insensitive, and this is a good thing for many reasons. I
don't want a user "spamaps" on the same domain where I have "SpamapS".

If you want to force it, you can do so at many levels, from the server
down to the column.  utf8_bin would be the case-sensitive collation to
use. But that is not actually what you want.

The problem and the part where programmers get surprised by this is
that we are calling fields VARCHAR when we mean VARBINARY. Strings that
always need to be 100% identical and are not for human consumption,
should be _BINARY_.

In the keystone user table, id, extra, password, and domain_id are all
"varchar". This is a common mistake in SQL column design. Since user
is a utf8 table, the index on id varchar(64) is 64*3 bytes wide. This
is because we have to reserve enough space for 64 UTF-8 characters (and
MySQL only does up to 3-byte UTF8, really obscure UTF-8 can be as long
as 6 bytes!). We don't intend to interpret this as a human ever. So,
this should be VARBINARY(64) everywhere it is used.

The change would have several effects:

1) Indexes that mention the field would shrink by 128 bytes per key.

2) One could have two rows with identical hex values in id that varied
only by case. -- This is not an actual problem, but an effect of the
change.

3) Sorting by this field will now just use the binary value of each
character, not the language collation. When do you sort by a hex id?

Anyway, doing this on all ID fields and obviously-not-utf8-containing
fields will have a net effect of making the database leaner, so I think
it is worth a wide spread effort not just in keystone but in all of
OpenStack.
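
As a minimal sketch of the distinction (illustrative table and column names,
not keystone's actual model), here is how the two choices look in SQLAlchemy,
which keystone uses for its SQL backend:

  import sqlalchemy as sa
  from sqlalchemy.dialects.mysql import VARBINARY

  metadata = sa.MetaData()

  user_example = sa.Table(
      'user_example', metadata,
      # Compared byte-for-byte and never shown to a human: the index is
      # 64 bytes wide and 'abc' != 'ABC'.
      sa.Column('id', VARBINARY(64), primary_key=True),
      # Human-readable text, so a utf8 VARCHAR with the server's collation
      # is right; with a *_ci collation 'JDoe' and 'jdoe' compare equal.
      sa.Column('name', sa.String(255), nullable=False),
  )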

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Introducing the NNFI scheduler for Zuul

2013-09-26 Thread James E. Blair
We recently made a change to Zuul's scheduling algorithm (how it
determines which changes to combine together and run tests).  Now when a
change fails tests (or has a merge conflict), Zuul will move it out of
the series of changes that it is stacking together to be tested, but it
will still keep that change's position in the queue.  Jobs for changes
behind it will be restarted without the failed change in their proposed
repo states.  And if something later fails ahead of it, Zuul will once
again put it back into the stream of changes it's testing and give it
another chance.

To visualize this, we've updated the status screen to include a tree
view:

  http://status.openstack.org/zuul/

(If you already have that loaded, be sure to hit reload.)

In Zuul, this is called the Nearest Non-Failing Item (NNFI) algorithm
because in short, each item in a queue is at all times being tested
based on the nearest non-failing item ahead of it in the queue.
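
As a toy illustration of that rule (this is not Zuul's code), picking the base
each queued change should be tested on top of looks roughly like:

  def nnfi_bases(queue, failed):
      """For each change, return the nearest non-failing change ahead of it."""
      bases, last_ok = {}, None
      for change in queue:            # queue is ordered, head of queue first
          bases[change] = last_ok     # test on top of nearest non-failing item
          if change not in failed:    # failing changes keep their position,
              last_ok = change        # but are skipped as a base for others
      return bases

  # With C failing, D is rebuilt on top of B instead of C:
  print(nnfi_bases(['A', 'B', 'C', 'D'], failed={'C'}))
  # -> {'A': None, 'B': 'A', 'C': 'B', 'D': 'B'}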

On the infrastructure side, this is going to drive our use of cloud
resources even more, as Zuul will now try to run as many jobs as it can,
continuously.  Every time a change fails, all of the jobs for changes
behind it will be aborted and restarted with a new proposed future
state.

For developers, this means that changes should land faster, and more
throughput overall, as Zuul won't be waiting as long to re-test changes
after a job has failed.  And that's what this is ultimately about --
virtual machines are cheap compared to developer time, so the more
velocity our automated tests can sustain, the more velocity our project
can achieve.

-Jim


(PS: There is a known problem with the status page not being able to
display the tree correctly while Zuul is in the middle of recalculating
the change graph.  That should be fixed by next week, but in the mean
time, just enjoy the show.)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Introducing the NNFI scheduler for Zuul

2013-09-26 Thread Jay Pipes

On 09/26/2013 01:10 PM, James E. Blair wrote:

We recently made a change to Zuul's scheduling algorithm (how it
determines which changes to combine together and run tests).  Now when a
change fails tests (or has a merge conflict), Zuul will move it out of
the series of changes that it is stacking together to be tested, but it
will still keep that change's position in the queue.  Jobs for changes
behind it will be restarted without the failed change in their proposed
repo states.  And if something later fails ahead of it, Zuul will once
again put it back into the stream of changes it's testing and give it
another chance.

To visualize this, we've updated the status screen to include a tree
view:

   http://status.openstack.org/zuul/

(If you already have that loaded, be sure to hit reload.)

In Zuul, this is called the Nearest Non-Failing Item (NNFI) algorithm
because in short, each item in a queue is at all times being tested
based on the nearest non-failing item ahead of it in the queue.

On the infrastructure side, this is going to drive our use of cloud
resources even more, as Zuul will now try to run as many jobs as it can,
continuously.  Every time a change fails, all of the jobs for changes
behind it will be aborted and restarted with a new proposed future
state.

For developers, this means that changes should land faster, and more
throughput overall, as Zuul won't be waiting as long to re-test changes
after a job has failed.  And that's what this is ultimately about --
virtual machines are cheap compared to developer time, so the more
velocity our automated tests can sustain, the more velocity our project
can achieve.

-Jim


Just wanted to say great work on this to all involved, and thank you!

-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Introducing the NNFI scheduler for Zuul

2013-09-26 Thread XINYU ZHAO
++  \m/


On Thu, Sep 26, 2013 at 10:10 AM, James E. Blair wrote:

> We recently made a change to Zuul's scheduling algorithm (how it
> determines which changes to combine together and run tests).  Now when a
> change fails tests (or has a merge conflict), Zuul will move it out of
> the series of changes that it is stacking together to be tested, but it
> will still keep that change's position in the queue.  Jobs for changes
> behind it will be restarted without the failed change in their proposed
> repo states.  And if something later fails ahead of it, Zuul will once
> again put it back into the stream of changes it's testing and give it
> another chance.
>
> To visualize this, we've updated the status screen to include a tree
> view:
>
>   http://status.openstack.org/zuul/
>
> (If you already have that loaded, be sure to hit reload.)
>
> In Zuul, this is called the Nearest Non-Failing Item (NNFI) algorithm
> because in short, each item in a queue is at all times being tested
> based on the nearest non-failing item ahead of it in the queue.
>
> On the infrastructure side, this is going to drive our use of cloud
> resources even more, as Zuul will now try to run as many jobs as it can,
> continuously.  Every time a change fails, all of the jobs for changes
> behind it will be aborted and restarted with a new proposed future
> state.
>
> For developers, this means that changes should land faster, and more
> throughput overall, as Zuul won't be waiting as long to re-test changes
> after a job has failed.  And that's what this is ultimately about --
> virtual machines are cheap compared to developer time, so the more
> velocity our automated tests can sustain, the more velocity our project
> can achieve.
>
> -Jim
>
>
> (PS: There is a known problem with the status page not being able to
> display the tree correctly while Zuul is in the middle of recalculating
> the change graph.  That should be fixed by next week, but in the mean
> time, just enjoy the show.)
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Help us reduce gate resets by being specific about bugs being found

2013-09-26 Thread Joe Gordon
On Thu, Sep 26, 2013 at 7:49 AM, Sean Dague  wrote:

> As many folks know, gerrit takes comments of either the form
>
> recheck bug #X
> or
> recheck no bug
>
> To kick off the check queue jobs again to handle flakey tests.
>
> The problem is that we're getting a lot more "no bug" than bugs at this
> point. If a failure happens in the OpenStack gate, it's usually an actual
> OpenStack race somewhere. Figuring out what the top races are is *really*
> important to actually fixing those races, as it gives us focus on what the
> top issues are in OpenStack that we need to fix. That makes the gate good
> for everyone, and means less time babysitting your patches through merge.
>

++


>
> Now that Matt, Joe, and Clark have built the elastic-recheck bot, you will
> often be given a hint in your review about the most probably race that it
> was found. Please confirm the bug looks right before rechecking with it,
> but it should help expedite finding the right issue.
> http://status.openstack.org/rechecks/ is also helpful in seeing what's most
> recently been causing issues.
>
> Here's the score card of how we are doing now at the project level
> (percentage is the percentage of rechecks with a bug, and the fraction
> shows number with a bug / total rechecks issued)
>
> Project Rechecks percentages (last 3000 gerrit reviews)
> openstack/requirements   100% (1 / 1)
> openstack/cinder  78% (25 / 32)
> openstack/heat66% (8 / 12)
> openstack/keystone66% (4 / 6)
> openstack/python-keystoneclient   66% (4 / 6)
> openstack/swift   52% (10 / 19)
> openstack-infra/devstack-gate 50% (2 / 4)
> openstack/horizon 50% (5 / 10)
> openstack/python-ceilometerclient 44% (4 / 9)
> openstack/python-cinderclient 42% (3 / 7)
> openstack/glance  40% (2 / 5)
> openstack-dev/devstack38% (7 / 18)
> openstack/neutron 33% (6 / 18)
> openstack/python-novaclient   33% (1 / 3)
> openstack/nova25% (34 / 134)
> openstack/tempest 17% (12 / 69)
> openstack/ceilometer  11% (4 / 34)
> stackforge/taskflow0% (0 / 1)
> openstack/ironic   0% (0 / 5)
> openstack/oslo-incubator   0% (0 / 3)
> openstack-dev/hacking  0% (0 / 1)
> openstack/trove0% (0 / 8)
> stackforge/savanna 0% (0 / 1)
> openstack-infra/config 0% (0 / 1)
> openstack/python-neutronclient 0% (0 / 1)
> stackforge/rally   0% (0 / 3)
>
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [elastic-recheck] Announcing elastic-recheck

2013-09-26 Thread Joe Gordon
On Thu, Sep 26, 2013 at 1:30 AM, Julien Danjou  wrote:

> On Thu, Sep 26 2013, Joe Gordon wrote:
>
> > TL;DR: We will be automatically identifying your flaky tempest runs, so
> you
> > just have to confirm that you hit bug x, not identify which bug you hit.
>
> I love you guys. It's really painful to work these days due to the high
> failure rate.
>
> I imagine the comment will indicate what should be done to have a
> recheck? I saw Matthew acting like a bot in comments identifying bug
> (and now I undertand he's a bot ;-), so should we just use the bug
> number told to do a recheck, or will the procedure evolve?
>

We don't want to remove the developer from the loop entirely.  Our
classification won't be perfect and we want the developer to spot check to
confirm they hit the bug we identified.  But it should be an order of
magnitude easier to spot check if we classified your failure correctly vs
classifying the failure on your own.


>
> --
> Julien Danjou
> -- Free Software hacker - independent consultant
> -- http://julien.danjou.info
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] gerrit URL queries I've been finding useful

2013-09-26 Thread Monty Taylor
Just in case these help anyone, I've made a couple of bookmarks to a
couple of queries that have really been helping me deal with both my
patches, and things I need to review.

First of all:

https://review.openstack.org/#/q/status:open+owner:mordred%2540inaugust.com+label:CodeReview%253C%253D-1,n,z

(replace mordred%2540inaugust.com with your own url-quoted email address)

Gives me the list of all patches I've uploaded that have a negative code
review comment. That is - things where someone is expecting me to do
something.

Next - review queue always sucks:

https://review.openstack.org/#/q/watchedby:mordred%2540inaugust.com+-label:CodeReview%253C%253D-1+-label:Verified%253C%253D-1+-label:Approved%253E%253D1++-status:workinprogress+-status:draft+-is:starred+-owner:mordred%2540inaugust.com,n,z

So I use that one to show me a list of things that are passing tests,
are not already starred, and do not have any negative code reviews yet.
These are things I should probably go look at right now. On that page, I
go through and star everything that I'd like to review. Then, I can make
a pass through that list with this:

https://review.openstack.org/#/q/is:starred+-label:CodeReview%253C%253D-1+-label:Verified%253C%253D-1,n,z

Unstarring as I finish reviewing it. That way I've got a list I can work
down to zero, then go back to the list of unstarred things to make
myself a new list.

It's not perfect, and there's still a ton of review work to do - but I
do believe it's been helping me keep up with the review load better.

Monty

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] gerrit URL queries I've been finding useful

2013-09-26 Thread Tiago Mello
Awesome tip!

On Thu, Sep 26, 2013 at 02:11:58PM -0400, Monty Taylor wrote:
> Just in case these help anyone, I've make a couple of bookmarks to a
> couple of queries that have really been helping me deal with both my
> patches, and things I need to review.
> 
> First of all:
> 
> https://review.openstack.org/#/q/status:open+owner:mordred%2540inaugust.com+label:CodeReview%253C%253D-1,n,z
> 
> (replace mordred%2540inaugust.com with your own url-quoted email address)

It seems the full email is not needed. You can use your username which
matches the launchpad id.

Thanks!

-- 

timello

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Get Credential Of Openstack Login

2013-09-26 Thread Dean Troyer
On Thu, Sep 26, 2013 at 2:07 PM, joseph assiga wrote:

> I just installed OpenStack with the devstack scripts. But at the end I want
> to log in to the dashboard, and I did not see where to find the credentials
> for it.
>

The credentials are displayed at the end of stack.sh.  The password will be
whatever you set.  By default, the users admin and demo have matching
projects of the same name (project == tenant).

I would have sworn that this was in the docs but jeepers it isn't there.
 Now I have one more thing to do this afternoon...  In the window that you
ran stack.sh, you can source openrc to set the environment variables that
the CLI uses.  Look at those to get the same for Horizon's login:

  source openrc
  set | grep OS_
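
If you only want the handful of values Horizon's login form cares about,
a small Python one-off works too (assuming the usual OS_* variables that
devstack's openrc exports):

    import os

    # Horizon wants the user name and password; the project/tenant is picked
    # after login, and OS_AUTH_URL tells you which Keystone Horizon talks to.
    for var in ('OS_USERNAME', 'OS_PASSWORD', 'OS_TENANT_NAME', 'OS_AUTH_URL'):
        print('%s=%s' % (var, os.environ.get(var, '<not set>')))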

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Get Credential Of Openstack Login

2013-09-26 Thread joseph assiga
Hi,

I just installed OpenStack with the devstack scripts. But at the end I want to
log in to the dashboard, and I did not see where to find the credentials for it.

Could you please help me get these credentials?

Thanks.

Sincerely,

-- 



Joseph  ASSIGA
+33(0)6 15 73 44 09
josephassiga.com
joecloud.blogspot.fr 
Master Informatique (Master Degree in Computer science)
Faculté Jean Perrin de Lens (Faculty of Science Jean Perrin of France)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] team meeting minutes September 26

2013-09-26 Thread Sergey Lukjanov
Thanks everyone who have joined Savanna meeting.

Here are the logs from the meeting:

Minutes: 
http://eavesdrop.openstack.org/meetings/savanna/2013/savanna.2013-09-26-18.03.html
Minutes (text): 
http://eavesdrop.openstack.org/meetings/savanna/2013/savanna.2013-09-26-18.03.txt
Log: 
http://eavesdrop.openstack.org/meetings/savanna/2013/savanna.2013-09-26-18.03.log.html

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] How to enable quantum Nicira NVP plugin in devstack

2013-09-26 Thread openstack learner
Hi,

From the link
http://docs.openstack.org/grizzly/openstack-network/admin/content/flexibility.html
I can see there is a Nicira NVP plugin for VMware. I am using devstack
to install OpenStack; do you know how to enable the plugin in the
localrc file? From the link here
https://wiki.openstack.org/wiki/NeutronDevstack  I did not find something
like a "q-nvc" that I can set in the localrc file.  Any info about how to
enable the quantum Nicira NVP plugin would be helpful.

Thanks

xin
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] A simple way to improve nova scheduler

2013-09-26 Thread Soren Hansen
Hey, sorry for necroposting. I completely missed this thread when it was
active, but Russell just pointed it out to me on Twitter earlier today and
I couldn't help myself.


2013/7/19 Sandy Walsh :
> On 07/19/2013 05:01 PM, Boris Pavlovic wrote:
> Sorry, I was commenting on Soren's suggestion from way back (essentially
> listening on a separate exchange for each unique flavor ... so no
> scheduler was needed at all). It was a great idea, but fell apart rather
> quickly.

I don't recall we ever really had the discussion, but it's been a while :)

Yes, when moving beyond simple flavours, the idea as initially proposed
falls apart.  I see two ways to fix that:

 * Don't move beyond simple flavours. Seriously. Amazon have been pretty
   darn successful with just their simple instance types.

 * If you must make things complicated, use fanout to send a reservation
   request:

   - Send out reservation requests to everyone listening (*)

   - Compute nodes able to accommodate the request reserve the resources
in question and respond directly to the requestor. Those unable to
 accommodate the request do nothing.

   - Requestor (scheduler, API server, whatever) picks a winner amongst
 the respondents and broadcasts a message announcing the winner of
 the request.

   - The winning node acknowledges acceptance of the task to the
 requestor and gets to work.

   - Every other node that responded also sees the broadcast and cancels
 the reservation.

   - Reservations time out after 5 seconds, so a lost broadcast doesn't
 result in reserved-but-never-used resources.

   - If no one has volunteered to accept the reservation request within a
couple of seconds, broadcast wider.

(*) "Everyone listening" isn't necessarily every node. Maybe you have
topics for nodes that are at less than 10% utilisation, one for less
than 25% utilisation, etc. First broadcast to those at 10% or less, move
on to 25%, etc.

This is just off the top of my head. I'm sure it can be improved upon. A
lot. My point is just that there's plenty of alternatives to the
omniscient schedulers that we've been used to for 3 years now.
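
To make that concrete, here's a rough Python sketch of the reservation flow
described above (all names invented for illustration; this is not Nova code,
and the messaging layer is hand-waved away as direct method calls):

    import time
    import uuid

    RESERVATION_TTL = 5  # seconds; a lost broadcast never strands resources

    class ComputeNode(object):
        def __init__(self, name, free_ram_mb):
            self.name = name
            self.free_ram_mb = free_ram_mb
            self.reservations = {}  # request_id -> (ram_mb, expires_at)

        def on_reservation_request(self, request_id, ram_mb):
            # Reserve locally and volunteer only if the request fits.
            self._expire()
            if self.free_ram_mb - self._reserved() >= ram_mb:
                expires = time.time() + RESERVATION_TTL
                self.reservations[request_id] = (ram_mb, expires)
                return self.name
            return None  # stay silent

        def on_winner_announced(self, request_id, winner):
            ram_mb, _expires = self.reservations.pop(request_id, (0, 0))
            if winner == self.name:
                self.free_ram_mb -= ram_mb  # accept the work for real

        def _reserved(self):
            return sum(r for r, _e in self.reservations.values())

        def _expire(self):
            now = time.time()
            self.reservations = dict(
                (k, v) for k, v in self.reservations.items() if v[1] > now)

    def schedule(nodes, ram_mb):
        # Requestor side: fan out, collect volunteers, pick one, announce.
        request_id = str(uuid.uuid4())
        volunteers = [n.on_reservation_request(request_id, ram_mb)
                      for n in nodes]
        volunteers = [v for v in volunteers if v]
        winner = volunteers[0] if volunteers else None  # any policy works
        for node in nodes:  # losers cancel, the winner commits
            node.on_winner_announced(request_id, winner)
        return winner

    nodes = [ComputeNode('node-1', 4096), ComputeNode('node-2', 8192)]
    print(schedule(nodes, ram_mb=6144))  # -> node-2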

-- 
Soren Hansen | http://linux2go.dk/
Ubuntu Developer | http://www.ubuntu.com/
OpenStack Developer  | http://www.openstack.org/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to enable quantum Nicira NVP plugin in devstack

2013-09-26 Thread Aaron Rosen
Hi Xin,

In order to use the NVP plugin you need to have NVP which the plugin talks
to. Do you have access to NVP?

Best,

Aaron


On Thu, Sep 26, 2013 at 1:47 PM, openstack learner <
openstacklea...@gmail.com> wrote:

> Hi,
>
> From the link
> http://docs.openstack.org/grizzly/openstack-network/admin/content/flexibility.html
> I can see there is a Nicira NVP plugin for vmware. I am using the devstack
> to install the openstack, do you know how to enable the plugin in the
> localrc file? From the link here
> https://wiki.openstack.org/wiki/NeutronDevstack  i did not find something
> like a "q-nvc" that I can set in localrc file.  Any info about how to
> enable the quantum Nicira NVP plugin will be helpful.
>
> Thanks
>
> xin
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] gerrit URL queries I've been finding useful

2013-09-26 Thread Russell Bryant
On 09/26/2013 02:11 PM, Monty Taylor wrote:
> Just in case these help anyone, I've make a couple of bookmarks to a
> couple of queries that have really been helping me deal with both my
> patches, and things I need to review.

Thanks for sharing!  I saved off a reference to this post on:

https://wiki.openstack.org/wiki/ReviewWorkflowTips

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] A simple way to improve nova scheduler

2013-09-26 Thread Joe Gordon
On Thu, Sep 26, 2013 at 1:53 PM, Soren Hansen  wrote:

> Hey, sorry for necroposting. I completely missed this thread when it was
> active, but Russel just pointed it out to me on Twitter earlier today and
> I couldn't help myself.
>
>
> 2013/7/19 Sandy Walsh :
> > On 07/19/2013 05:01 PM, Boris Pavlovic wrote:
> > Sorry, I was commenting on Soren's suggestion from way back (essentially
> > listening on a separate exchange for each unique flavor ... so no
> > scheduler was needed at all). It was a great idea, but fell apart rather
> > quickly.
>
> I don't recall we ever really had the discussion, but it's been a while :)
>
> Yes, when moving beyond simple flavours, the idea as initially proposed
> falls apart.  I see two ways to fix that:
>
>  * Don't move beyond simple flavours. Seriously. Amazon have been pretty
>darn succesful with just their simple instance types.
>

Who says we have to support one scheduler model?  I can see room for
several scheduler models that have different tradeoffs, such as
performance/scale vs. features.


>
>  * If you must make things complicated, use fanout to send a reservation
>request:
>
>- Send out reservation requests to everyone listening (*)
>
>- Compute nodes able to accommodate the request reserve the resources
> in question and respond directly to the requestor. Those unable to
>  accommodate the request do nothing.
>
>- Requestor (scheduler, API server, whatever) picks a winner amongst
> the repondants and broadcasts a message announcing the winner of
>  the request.
>
>- The winning node acknowledges acceptance of the task to the
>  requestor and gets to work.
>
>- Every other node that responded also sees the broadcast and cancels
>  the reservation.
>
>- Reservations time out after 5 seconds, so a lost broadcast doesn't
>  result in reserved-but-never-used resources.
>
>- If noone has volunteered to accept the reservation request within a
> couple of seconds, broadcast wider.
>
> (*) "Everyone listening" isn't necessarily every node. Maybe you have
> topics for nodes that are at less than 10% utilisation, one for less
> than 25% utilisation, etc. First broadcast to those at 10% or less, move
> on to 20%, etc.
>
> This is just off the top of my head. I'm sure it can be improved upon. A
> lot. My point is just that there's plenty of alternatives to the
> omniscient schedulers that we've been used to for 3 years now.
>
> --
> Soren Hansen | http://linux2go.dk/
> Ubuntu Developer | http://www.ubuntu.com/
> OpenStack Developer  | http://www.openstack.org/
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Should file injection work for boot from volume images?

2013-09-26 Thread Michael Davies
On Mon, Sep 23, 2013 at 6:20 PM, Thierry Carrez  wrote:
> Monty Taylor wrote:
> > On 09/20/2013 02:47 PM, Michael Still wrote:
> >> Before https://review.openstack.org/#/c/46867/ if file injection of a
> >> mandatory file fails, nova just silently ignores the failure, which is
> >> clearly wrong. However, that review now can't land because its
> >> revealed another failure in the file injection code via tempest, which
> >> is...
> >>
> >> Should file injection work for instances which are boot from volume?
> >> Now that we actually notice injection failures we're now failing to
> >> boot such instances as file injection for them doesn't work.
> >>
> >> I'm undecided though -- should file injection work for boot from
> >> volume at all? Or should we just skip file injection for instances
> >> like this? I'd prefer to see us just support config drive and metadata
> >> server for these instances, but perhaps I am missing something really
> >> important.
> >
> > Well, first of all, I think file injection should DIAF everywhere.
>
> +1
>
> > That said, it may be no surprise that I think boot-from-volume should
> > just do config drive and metadata.
>
> That sounds like the simplest way to preserve behavior. From what you
> said the current behavior is "try, fail and ignore failure" -- having
> noop instead is probably the right thing to do for havana.

This behaviour is what is causing https://bugs.launchpad.net/nova/+bug/1188543

I've submitted a patch (https://review.openstack.org/#/c/48533/) that
addresses the issue.

It appears that:
1) File injection for instances which are boot from volume doesn't
appear to have ever worked.
2) Attempting file injection just fails fairly quietly and slows down
instance spawning.
3) The code needed to do this properly isn't trivial and probably
wouldn't land in Havana so late in the cycle.

Instead of attempting file injection on a boot volume, my patch simply
LOG.warns the user. I think that's the best solution for now. However
I think we should address file injection in Icehouse as discussed in
this thread.
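
For anyone curious, the shape of the change is roughly this (a toy sketch
with invented names, not the actual patch -- see the review above for the
real thing):

    import logging

    LOG = logging.getLogger(__name__)

    def maybe_inject_files(instance_name, booted_from_volume, injected_files,
                           inject_fn):
        # Skip (and warn about) injection for boot-from-volume instances
        # instead of failing the whole boot.
        if booted_from_volume:
            if injected_files:
                LOG.warning('File injection into a boot-from-volume '
                            'instance (%s) is not supported; skipping.',
                            instance_name)
            return False
        inject_fn(instance_name, injected_files)
        return True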

Thanks in advance,

Michael...
-- 
Michael Davies   mich...@the-davies.net
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Should file injection work for boot from volume images?

2013-09-26 Thread Joe Gordon
On Thu, Sep 26, 2013 at 3:12 PM, Michael Davies wrote:

> On Mon, Sep 23, 2013 at 6:20 PM, Thierry Carrez 
> wrote:
> > Monty Taylor wrote:
> > > On 09/20/2013 02:47 PM, Michael Still wrote:
> > >> Before https://review.openstack.org/#/c/46867/ if file injection of a
> > >> mandatory file fails, nova just silently ignores the failure, which is
> > >> clearly wrong. However, that review now can't land because its
> > >> revealed another failure in the file injection code via tempest, which
> > >> is...
> > >>
> > >> Should file injection work for instances which are boot from volume?
> > >> Now that we actually notice injection failures we're now failing to
> > >> boot such instances as file injection for them doesn't work.
> > >>
> > >> I'm undecided though -- should file injection work for boot from
> > >> volume at all? Or should we just skip file injection for instances
> > >> like this? I'd prefer to see us just support config drive and metadata
> > >> server for these instances, but perhaps I am missing something really
> > >> important.
> > >
> > > Well, first of all, I think file injection should DIAF everywhere.
> >
> > +1
> >
> > > That said, it may be no surprise that I think boot-from-volume should
> > > just do config drive and metadata.
> >
> > That sounds like the simplest way to preserve behavior. From what you
> > said the current behavior is "try, fail and ignore failure" -- having
> > noop instead is probably the right thing to do for havana.
>
> This behaviour is what is causing
> https://bugs.launchpad.net/nova/+bug/1188543
>
> I've submitted a patch (https://review.openstack.org/#/c/48533/) that
> addresses the issue.
>
> It appears that:
> 1) File injection for instances which are boot from volume doesn't
> appear to have ever worked.
> 2) Attempting file injection just fails quietlyish and causes instance
> spawning slowdown
> 3) The code needed to do this properly isn't trivial and probably
> wouldn't land in Havana so late in the cycle.
>
> Instead of attempting file injection on a boot volume, my patch simply
> LOG.warns the user. I think that's the best solution for now. However
> I think we should address file injection in Icehouse as discussed in
> this thread.
>

++


>
> Thanks in advance,
>
> Michael...
> --
> Michael Davies   mich...@the-davies.net
> Rackspace Australia
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Should file injection work for boot from volume images?

2013-09-26 Thread Sean Dague
Looks good, my only question is on how we signal the translation team
that we're breaking string freeze for this, or even if we have a
mechanism for that.

On Thu, Sep 26, 2013 at 6:29 PM, Joe Gordon  wrote:
>
>
>
> On Thu, Sep 26, 2013 at 3:12 PM, Michael Davies 
> wrote:
>>
>> On Mon, Sep 23, 2013 at 6:20 PM, Thierry Carrez 
>> wrote:
>> > Monty Taylor wrote:
>> > > On 09/20/2013 02:47 PM, Michael Still wrote:
>> > >> Before https://review.openstack.org/#/c/46867/ if file injection of a
>> > >> mandatory file fails, nova just silently ignores the failure, which
>> > >> is
>> > >> clearly wrong. However, that review now can't land because its
>> > >> revealed another failure in the file injection code via tempest,
>> > >> which
>> > >> is...
>> > >>
>> > >> Should file injection work for instances which are boot from volume?
>> > >> Now that we actually notice injection failures we're now failing to
>> > >> boot such instances as file injection for them doesn't work.
>> > >>
>> > >> I'm undecided though -- should file injection work for boot from
>> > >> volume at all? Or should we just skip file injection for instances
>> > >> like this? I'd prefer to see us just support config drive and
>> > >> metadata
>> > >> server for these instances, but perhaps I am missing something really
>> > >> important.
>> > >
>> > > Well, first of all, I think file injection should DIAF everywhere.
>> >
>> > +1
>> >
>> > > That said, it may be no surprise that I think boot-from-volume should
>> > > just do config drive and metadata.
>> >
>> > That sounds like the simplest way to preserve behavior. From what you
>> > said the current behavior is "try, fail and ignore failure" -- having
>> > noop instead is probably the right thing to do for havana.
>>
>> This behaviour is what is causing
>> https://bugs.launchpad.net/nova/+bug/1188543
>>
>> I've submitted a patch (https://review.openstack.org/#/c/48533/) that
>> addresses the issue.
>>
>> It appears that:
>> 1) File injection for instances which are boot from volume doesn't
>> appear to have ever worked.
>> 2) Attempting file injection just fails quietlyish and causes instance
>> spawning slowdown
>> 3) The code needed to do this properly isn't trivial and probably
>> wouldn't land in Havana so late in the cycle.
>>
>> Instead of attempting file injection on a boot volume, my patch simply
>> LOG.warns the user. I think that's the best solution for now. However
>> I think we should address file injection in Icehouse as discussed in
>> this thread.
>
>
> ++
>
>>
>>
>> Thanks in advance,
>>
>> Michael...
>> --
>> Michael Davies   mich...@the-davies.net
>> Rackspace Australia
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Introducing the NNFI scheduler for Zuul

2013-09-26 Thread Joshua Hesketh

Awesome work Jim! Love the visualisation :-)

Cheers,
Josh

--
Rackspace Australia

On 9/27/13 3:10 AM, James E. Blair wrote:

We recently made a change to Zuul's scheduling algorithm (how it
determines which changes to combine together and test).  Now when a
change fails tests (or has a merge conflict), Zuul will move it out of
the series of changes that it is stacking together to be tested, but it
will still keep that change's position in the queue.  Jobs for changes
behind it will be restarted without the failed change in their proposed
repo states.  And if something later fails ahead of it, Zuul will once
again put it back into the stream of changes it's testing and give it
another chance.

To visualize this, we've updated the status screen to include a tree
view:

   http://status.openstack.org/zuul/

(If you already have that loaded, be sure to hit reload.)

In Zuul, this is called the Nearest Non-Failing Item (NNFI) algorithm
because in short, each item in a queue is at all times being tested
based on the nearest non-failing item ahead of it in the queue.
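
In Python, the heart of the idea looks roughly like this (an illustration
only, not Zuul's actual implementation):

    def nnfi_parents(queue, failing):
        # queue: change ids in order; failing: ids currently failing or in
        # merge conflict.  Returns, for each change, the change its jobs
        # should be built on top of (None means the tip of the target branch).
        parents = {}
        nearest_ok = None
        for change in queue:
            parents[change] = nearest_ok
            if change not in failing:
                nearest_ok = change  # failing changes are skipped over
        return parents

    print(nnfi_parents(['A', 'B', 'C', 'D'], failing={'B'}))
    # {'A': None, 'B': 'A', 'C': 'A', 'D': 'C'}
    # B keeps its place in the queue; if something ahead of it fails later,
    # the graph is recomputed and B gets another chance.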

On the infrastructure side, this is going to drive our use of cloud
resources even more, as Zuul will now try to run as many jobs as it can,
continuously.  Every time a change fails, all of the jobs for changes
behind it will be aborted and restarted with a new proposed future
state.

For developers, this means that changes should land faster, with more
throughput overall, as Zuul won't be waiting as long to re-test changes
after a job has failed.  And that's what this is ultimately about --
virtual machines are cheap compared to developer time, so the more
velocity our automated tests can sustain, the more velocity our project
can achieve.

-Jim


(PS: There is a known problem with the status page not being able to
display the tree correctly while Zuul is in the middle of recalculating
the change graph.  That should be fixed by next week, but in the mean
time, just enjoy the show.)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to create vmdk for openstack usage

2013-09-26 Thread Jason Zhang

Dear Vui,

Thank you very much for your information.

> After obtaining a sparse ide vmdk from "qemu-img convert", due to a bug in the
> VMware nova driver, you need to convert the vmdk to a thin or preallocated disk.


We tested it based on your information by using the ide.
The preallocated option works. Thin didn't work, any idea?

> You can do this one of the following tools:
> - vmkfstools.pl referenced in the DeveloperGuide Appendix
> - vmkfstools directly if you can ssh into an ESX machine

The above 2 didn't work.

> - vmware-vdiskmanager (comes bundled with VMware Fusion or VMware Workstation)
> (e.g. '/Applications/VMware Fusion.app/Contents/Library/vmware-vdiskmanager' -r our_sparse_ide.vmdk -t 4 converted.vmdk)


This works for pre-allocated.

Is there an option to reduce the disk size of the converted vmdk?

We were using vmware-vdiskmanager -r  -t 0  to
reduce the disk size, but it didn't work; the OS couldn't be booted.


Thanks in advance!

Best regards,

Jason


On 9/16/13 1:13 AM, Vui Chiap Lam wrote:

Hi Jason,

What happens if you forgo the converting to lsilogic, and instead 
upload the disk to glance as an ide disk (using --property 
vmware_adaptertype=ide)?


Also, just reiterating the docs and Dan's comments,

After obtaining a sparse ide vmdk from "qemu-img convert", due to a 
bug in the VMware nova driver, you need to convert the vmdk to a thin 
or preallocated disk.

You can do this with one of the following tools:
- vmkfstools.pl referenced in the DeveloperGuide Appendix
- vmkfstools directly if you can ssh into an ESX machine
- vmware-vdiskmanager (comes bundled with VMware Fusion or VMware 
Workstation)
(e.g. '/Applications/VMware 
Fusion.app/Contents/Library/vmware-vdiskmanager' -r 
our_sparse_ide.vmdk -t 4 converted.vmdk
After this step you should have a converted.vmdk and a converted-flat.vmdk.
At this point converted-flat.vmdk (not the descriptor file converted.vmdk)
can be uploaded with --property vmware_adaptertype=ide as an ide image.


If this works, we can worry about converting the disk to SCSI next.

Regards,
Vui





*From: *"Jason Zhang" 
*To: *"OpenStack Development Mailing List"

*Sent: *Friday, September 13, 2013 4:09:47 PM
*Subject: *Re: [openstack-dev] How to create vmdk for openstack usage

Hi Dan,

Thank you very much for your reply.

We tested again and it still does not work, can you give more
information about how the vmdk's were created?
I.e. the tool used to create the debian and trend vmdk's listed here

https://wiki.openstack.org/wiki/NovaVMware/DeveloperGuide#Glance_Initial_Setup

Using qemu-img convert to convert a qcow2 or raw image to vmdk
doesn't seem to work, for example, by using
qemu-img convert -f qcow2 -O vmdk 

The command always converts to a vmdk which is of adapter type
'ide' other than lsilogic.

We modified the adapter type to lsilogic and uploading it to
glance by using,

glance image-create --name= --disk-format=vmdk
--container-format=bare --is-public=true --property
vmware_adaptertype=lsiLogic --property vmware_disktype=thin
--property vmware_ostype=ubuntu64Guest < output-file.vmdk
or
glance image-create --name= --disk-format=vmdk
--container-format=bare --is-public=true --property
vmware_adaptertype=lsiLogic  --property
vmware-disktype="preallocated" --property vmware_disktype=thin
--property vmware_ostype=ubuntu64Guest < output-file.vmdk

doesn't seem to work. Even after trying the steps under
https://wiki.openstack.org/wiki/NovaVMware/DeveloperGuide#Appendix

It seems the patch to convert into a scsi disk has not been
merged yet: https://bugs.launchpad.net/qemu/+bug/545089
Thanks in advance!

Best regards,

Jason


On 9/12/13 12:48 PM, Dan Wendlandt wrote:

Hi Jason,

The best place to look is the official openstack compute
documentation that covers vSphere in Nova:

http://docs.openstack.org/trunk/openstack-compute/admin/content/vmware.html

In particular, check out the section titled "Images with
VMware vSphere" (pasted below).  As that text suggests, the
most likely issue with your VMDK not booting is that you may
have passed the wrong vmware_adaptertype to glance when
creating the image.  Also note the statement indicating that
all VMDK images must be "flat" (i.e., single file), otherwise
Glance will be confused.

Dan


  Images with VMware vSphere

When using either VMware driver, images should be uploaded to
the OpenStack Image Service in the VMDK format. Both thick and
thin images are currently supported and all images must be
flat (i.e. contained within 1 file). For example

To load a thick image with a SCSI adaptor:

  

Re: [openstack-dev] How to create vmdk for openstack usage

2013-09-26 Thread Vui Chiap Lam
Hi Jason, 

Comments inlined. If it helps, I can also work with you off-list via email to 
help resolve the issues you have. 

Thanks, 
Vui 

- Original Message -

| From: "Jason Zhang" 
| To: "OpenStack Development Mailing List" 
| Cc: "Vui Chiap Lam" 
| Sent: Thursday, September 26, 2013 5:48:10 PM
| Subject: Re: [openstack-dev] How to create vmdk for openstack usage

| Dear Vui,

| Thank you very much for your information.

| > After obtaining a sparse ide vmdk from "qemu-img convert", due to a bug in
| > the
| > VMware nova driver, you need to convert the vmdk to a thin or preallocated
| > disk.

| We tested it based on your information by using the ide.
| The preallocated option works. Thin didn't work, any idea?

Can you give more details about what you did and what did not work? Was it the 
conversion to a thin disk that failed, or the usage of the converted disk? 

| > You can do this one of the following tools:
| > - vmkfstools.pl referenced in the DeveloperGuide Appendix
| > - vmkfstools directly if you can ssh into an ESX machine

| The above 2 didn't work.

Same here. Can you provide more details? Were you not able to set up or get to 
an environment to run these tools, or did they not produce a converted disk, 
or did not produce one that works with nova? 

| > - vmware-vdiskmanager (comes bundled with VMware Fusion or VMware
| > Workstation)
| > (e.g. '/Applications/VMware
| > Fusion.app/Contents/Library/vmware-vdiskmanager' -r > our_sparse_ide.vmdk
| > -t 4 converted.vmdk

| This works for pre-allocated.

| Is there an option to reduce the disk size of the converted vmdk?

| We were using vmware-vdiskmanager -r  -t 0  to
| reduced the disk size but it didn't work, OS couldn't be booted.

-t 0 produces a monosparse format that is not compatible with ESX. 

As for the reduction of disk size, not at the moment, although currently two 
issues related to https://bugs.launchpad.net/nova/+bug/1215146 are being looked 
into: 
1. reduce disk usage on ESX datastore by restoring the thin-provisioned-ness of 
a thin disk. 
2. minimize glance <-> nova network traffic when transferring thin provisioned 
disk. 

that hopefully will address this issue soon. 

| Thanks in advance!

| Best regards,

| Jason

| On 9/16/13 1:13 AM, Vui Chiap Lam wrote:

| | Hi Jason,
| 

| | What happens if you forgo the converting to lsilogic, and instead upload
| | the
| | disk to glance as an ide disk (using --property vmware_adaptertype=ide)?
| 

| | Also, just reiterating the docs and Dan's comments,
| 

| | After obtaining a sparse ide vmdk from "qemu-img convert", due to a bug in
| | the VMware nova driver, you need to convert the vmdk to a thin or
| | preallocated disk.
| 
| | You can do this one of the following tools:
| 
| | - vmkfstools.pl referenced in the DeveloperGuide Appendix
| 
| | - vmkfstools directly if you can ssh into an ESX machine
| 
| | - vmware-vdiskmanager (comes bundled with VMware Fusion or VMware
| | Workstation)
| 
| | (e.g. '/Applications/VMware
| | Fusion.app/Contents/Library/vmware-vdiskmanager'
| | -r our_sparse_ide.vmdk -t 4 converted.vmdk
| 
| | After this step you should have a converted .vmdk and a converted
| | -flat.vmdk.
| 
| | At this point converted -flat.vmdk (not the descriptor file converted
| | .vmdk)
| | can be uploaded to with --property vmware_adaptertype=ide as an ide image
| 

| | If this works, we can worry about converting the disk to SCSI next.
| 

| | Regards,
| 
| | Vui
| 

| | - Original Message -
| 

| | | From: "Jason Zhang" 
| | 
| 
| | | To: "OpenStack Development Mailing List"
| | | 
| | 
| 
| | | Sent: Friday, September 13, 2013 4:09:47 PM
| | 
| 
| | | Subject: Re: [openstack-dev] How to create vmdk for openstack usage
| | 
| 

| | | Hi Dan,
| | 
| 

| | | Thank you very much for your reply.
| | 
| 

| | | We tested again and it still does not work, can you give more information
| | | about how the vmdk's were created?
| | 
| 
| | | I.e the tool used to create the debian and trend vmdk's listed here
| | 
| 
| | | 
https://wiki.openstack.org/wiki/NovaVMware/DeveloperGuide#Glance_Initial_Setup
| | 
| 

| | | Using qemu-img convert to convert a qcow2 or raw image to vmdk doesn't
| | | seem
| | | to work, for example, by using
| | 
| 
| | | qemu-img convert -f qcow2 -O vmdk  
| | 
| 
| | | The command always converts to a vmdk which is of adapter type 'ide'
| | | other
| | | than lsilogic.
| | 
| 

| | | We modified the adapter type to lsilogic and uploading it to glance by
| | | using,
| | 
| 

| | | glance image-create --name= --disk-format=vmdk
| | | --container-format=bare
| | | --is-public=true --property vmware_adaptertype=lsiLogic --property
| | | vmware_disktype=thin --property vmware_ostype=ubuntu64Guest <
| | | output-file.vmdk
| | 
| 
| | | or
| | 
| 
| | | glance image-create --name= --disk-format=vmdk
| | | --container-format=bare
| | | --is-public=true --property vmware_adaptertype=lsiLogic --property
| | | vmware-disktype="preallocat

[openstack-dev] flaky tempest -- Top Offenders

2013-09-26 Thread Joe Gordon
Hi All,

As many of you may have suspected the gate has gotten less stable in the
past few days.  Turns out we have the numbers to prove it too!

http://graphite.openstack.org/graphlot/?width=586&from=00%3A00_20130919&_salt=1380244287.508&height=308&target=summarize(stats_counts.zuul.pipeline.gate.job.gate-tempest-devstack-vm-neutron.FAILURE%2C%2224h%22)&target=summarize(stats_counts.zuul.pipeline.gate.job.gate-tempest-devstack-vm-neutron.SUCCESS%2C%2224h%22)&until=23%3A59_20130926&lineMode=staircase

So tempest started failing more right around the 24th, even though we are
in FeatureFreeze.

"FF ensures that sufficient share of the
ReleaseCycle
 is dedicated to QA, until we produce the first release candidates.
Limiting the changes that affect the behavior of the software allow for
consistent testing and efficient bugfixing."

https://wiki.openstack.org/wiki/FeatureFreeze

Thanks to the work we have been doing with logstash and elastic-recheck we
have very good numbers on the top offenders and when they began. The good
news is that there are two bugs we are hitting the most, so the top
offenders list has just two bugs. But there are still other unknown bugs
and lower-priority ones out there too!


https://bugs.launchpad.net/tempest/+bug/1226337 -- Launchpad bug 1226337 in
tempest "tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern
flake failure" [High,Triaged]

Started on 9-23 with  408 hits! in the last 24 hours alone

http://logstash.openstack.org/#eyJzZWFyY2giOiJAbWVzc2FnZTpcIk5vdmFFeGNlcHRpb246IGlTQ1NJIGRldmljZSBub3QgZm91bmQgYXRcIiBBTkQgQGZpZWxkcy5idWlsZF9zdGF0dXM6XCJGQUlMVVJFXCIgQU5EIEBmaWVsZHMuZmlsZW5hbWU6XCJsb2dzL3NjcmVlbi1uLWNwdS50eHRcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM4MDI0NDY2ODQ5Nn0=


https://bugs.launchpad.net/tempest/+bug/1230407  -- Launchpad bug 1230407
in neutron "State change timeout exceeded" [Undecided,Confirmed]

Started on 9-25 with 66 hits in the last 24 hours alone

http://logstash.openstack.org/#eyJzZWFyY2giOiIgQG1lc3NhZ2U6XCJBc3NlcnRpb25FcnJvcjogU3RhdGUgY2hhbmdlIHRpbWVvdXQgZXhjZWVkZWQhXCIgQU5EIEBmaWVsZHMuYnVpbGRfc3RhdHVzOlwiRkFJTFVSRVwiIEFORCBAZmllbGRzLmZpbGVuYW1lOlwiY29uc29sZS5odG1sXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzODAyNDQ0MzM2NzZ9


Hopefully we can get both of these fixed very soon, so we can stabilize
gate again.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] flaky tempest -- Top Offenders

2013-09-26 Thread Joe Gordon
On Thu, Sep 26, 2013 at 6:41 PM, Joe Gordon  wrote:

> Hi All,
>
> As many of you may have suspected the gate has gotten less stable in the
> past few days.  Turns out we have the numbers to prove it too!
>
>
> http://graphite.openstack.org/graphlot/?width=586&from=00%3A00_20130919&_salt=1380244287.508&height=308&target=summarize(stats_counts.zuul.pipeline.gate.job.gate-tempest-devstack-vm-neutron.FAILURE%2C%2224h%22)&target=summarize(stats_counts.zuul.pipeline.gate.job.gate-tempest-devstack-vm-neutron.SUCCESS%2C%2224h%22)&until=23%3A59_20130926&lineMode=staircase
>
> So tempest started failing more right around the 24th, even though we are
> in FeatureFreeze.
>
> "FF ensures that sufficient share of the 
> ReleaseCycle
>  is dedicated to QA, until we produce the first release candidates.
> Limiting the changes that affect the behavior of the software allow for
> consistent testing and efficient bugfixing."
>
> https://wiki.openstack.org/wiki/FeatureFreeze
>
> Thanks to the work we have been doing with logstash and elastic-recheck we
> have very good numbers on the top offenders and when they began, the good
> news is there are two bugs which we are hitting the most, so the top
> offenders list has just two bugs. But there are still other unknown bugs
> and lower priority ones out there too!
>
>
> https://bugs.launchpad.net/tempest/+bug/1226337 -- Launchpad bug 1226337
> in tempest "tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern
> flake failure" [High,Triaged]
>
> Started on 9-23 with  408 hits! in the last 24 hours alone
>

That should have read 130 hits in the last 24 hours and over 408 in the last
7 days.

>
>
> http://logstash.openstack.org/#eyJzZWFyY2giOiJAbWVzc2FnZTpcIk5vdmFFeGNlcHRpb246IGlTQ1NJIGRldmljZSBub3QgZm91bmQgYXRcIiBBTkQgQGZpZWxkcy5idWlsZF9zdGF0dXM6XCJGQUlMVVJFXCIgQU5EIEBmaWVsZHMuZmlsZW5hbWU6XCJsb2dzL3NjcmVlbi1uLWNwdS50eHRcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM4MDI0NDY2ODQ5Nn0=
>
>
> https://bugs.launchpad.net/tempest/+bug/1230407  -- Launchpad bug 1230407
> in neutron "State change timeout exceeded" [Undecided,Confirmed]
>
> Started on 9-25 with 66 hits in the last 24 hours alone
>
>
> http://logstash.openstack.org/#eyJzZWFyY2giOiIgQG1lc3NhZ2U6XCJBc3NlcnRpb25FcnJvcjogU3RhdGUgY2hhbmdlIHRpbWVvdXQgZXhjZWVkZWQhXCIgQU5EIEBmaWVsZHMuYnVpbGRfc3RhdHVzOlwiRkFJTFVSRVwiIEFORCBAZmllbGRzLmZpbGVuYW1lOlwiY29uc29sZS5odG1sXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzODAyNDQ0MzM2NzZ9
>
>
> Hopefully we can get both of these fixed very soon, so we can stabilize
> gate again.
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev