Re: [Openstack-operators] Way to check compute - rabbitmq connectivity

2015-01-19 Thread Gustavo Randich
In the meantime, I'm using this horrendous script inside compute nodes to
check for rabbitmq connectivity. It uses the 'set_host_enabled' rpc call,
which in my case is innocuous.

#!/bin/bash
UUID=$(cat /proc/sys/kernel/random/uuid)
RABBIT=$(grep -Po '(?<=rabbit_host = ).+' /etc/nova/nova.conf)
HOSTX=$(hostname)
python -c "
import pika
connection = pika.BlockingConnection(pika.ConnectionParameters(\"$RABBIT\"))
channel = connection.channel()
channel.basic_publish(exchange='nova', routing_key=\"compute.$HOSTX\",
properties=pika.BasicProperties(content_type = 'application/json'),
body = '{ \"version\": \"3.0\", \"_context_request_id\": \"$UUID\", \\
  \"_context_roles\": [\"KeystoneAdmin\", \"KeystoneServiceAdmin\", \"admin\"], \\
  \"_context_user_id\": \"XXX\", \\
  \"_context_project_id\": \"XXX\", \\
  \"method\": \"set_host_enabled\", \\
  \"args\": {\"enabled\": true} \\
}'
)
connection.close()
"
sleep 2
tail -1000 /var/log/nova/nova-compute.log | grep -q $UUID || { echo \
  "WARNING: nova-compute not consuming RabbitMQ messages. Last message: $UUID"; exit 1; }
echo OK


On Thu, Jan 15, 2015 at 9:48 PM, Sam Morrison sorri...@gmail.com wrote:

 We've had a lot of issues with Icehouse related to rabbitMQ. Basically the
 change from openstack.rpc to oslo.messaging broke things. These things are
 now fixed in oslo.messaging version 1.5.1; there is still an issue with
 heartbeats, and that patch is making its way through the review process now.

 https://review.openstack.org/#/c/146047/

 Cheers,
 Sam


 On 16 Jan 2015, at 10:55 am, sridhar basam sridhar.ba...@gmail.com
 wrote:


 If you are using HA queues, use a version of rabbitmq >= 3.3.0. There was a
 change in that version where consumption on queues was automatically
 enabled when a master election for a queue happened. Previous versions only
 informed clients that they had to reconsume on a queue; it was the client's
 responsibility to start consumption on a queue.

 Make sure you set TCP keepalives to a low enough value in case you have
 a firewall device in between your rabbit server and its consumers.

 Monitor consumers on your rabbit infrastructure using 'rabbitmqctl
 list_queues name messages consumers'. The number of consumers on fanout
 queues will depend on the number of services of each type you have in your
 environment.
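
 A rough way to script that check (a sketch, not from the thread; it assumes
 rabbitmqctl is runnable on the broker host and simply flags queues that have
 a backlog but zero consumers):

 import subprocess

 def queues_without_consumers():
     out = subprocess.check_output(
         ['rabbitmqctl', 'list_queues', 'name', 'messages', 'consumers'])
     stalled = []
     for line in out.decode().splitlines():
         parts = line.split()
         # skip the "Listing queues ..." banner and any malformed lines
         if len(parts) != 3 or not (parts[1].isdigit() and parts[2].isdigit()):
             continue
         name, messages, consumers = parts[0], int(parts[1]), int(parts[2])
         if consumers == 0 and messages > 0:
             stalled.append((name, messages))
     return stalled

 for name, backlog in queues_without_consumers():
     print('WARNING: %s has %d messages and no consumers' % (name, backlog))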

 Sri
  On Jan 15, 2015 6:27 PM, Michael Dorman mdor...@godaddy.com wrote:

   Here is the bug I've been tracking related to this for a while.  I
 haven't really kept up to speed with it, so I don't know the current status.

  https://bugs.launchpad.net/nova/+bug/856764


   From: Kris Lindgren klindg...@godaddy.com
 Date: Thursday, January 15, 2015 at 12:10 PM
 To: Gustavo Randich gustavo.rand...@gmail.com, OpenStack Operators 
 openstack-operators@lists.openstack.org
 Subject: Re: [Openstack-operators] Way to check compute - rabbitmq
 connectivity

   During the Atlanta ops meeting this topic came up and I specifically
 mentioned adding a no-op or healthcheck ping to the rabbitmq stuff
 in both nova & neutron.  The devs in the room looked at me like I was
 crazy, but it was so that we could catch exactly the issues you described.
 I am also interested if anyone knows of a lightweight call that could be
 used to verify/confirm rabbitmq connectivity as well.  I haven't been able
 to devote time to dig into it.  Mainly because if one client is having
 issues you will notice other clients having similar/silent errors, and a
 restart of all the things is the easiest way to fix it, for us at least.
  

 Kris Lindgren
 Senior Linux Systems Engineer
 GoDaddy, LLC.


   From: Gustavo Randich gustavo.rand...@gmail.com
 Date: Thursday, January 15, 2015 at 11:53 AM
 To: openstack-operators@lists.openstack.org 
 openstack-operators@lists.openstack.org
 Subject: Re: [Openstack-operators] Way to check compute - rabbitmq
 connectivity

Just to add one more background scenario, we also had similar
 problems trying to load balance rabbitmq via F5 Big IP LTM. For that reason
 we don't use it now. Our installation is a single rabbitmq instance with no
 intermediaries (apart from network switches). We use Folsom and Icehouse, the
 problem being perceived more on Icehouse nodes.

  We are already monitoring message queue size, but we would like to
 pinpoint in semi-realtime the specific hosts/racks/network paths
 experiencing the stale connection before a user complains about an
 operation being stuck, or even hosts with no such pending operations but
 already disconnected -- we could also diagnose possible network causes
 and avoid mass service restarts.

  So, for now, if someone knows about a cheap and quick openstack
 operation that triggers a message interchange between rabbitmq and
 nova-compute, and a way of checking the result, that would be great.




 On Thu, Jan 15, 2015 at 1:45 PM, Kris G. Lindgren klindg...@godaddy.com
 wrote:

  We did have an issue using celery  on an internal application that we
 wrote 

Re: [openstack-dev] [neutron][lbaas] Pool member status 'ACTIVE' even on health check failure

2015-01-19 Thread Brandon Logan
Hi Varun,

Could you tell me which driver you are using? If you're running the
HaproxyOnHostPluginDriver then that should do a check every 6 seconds
for members being down.  However, other drivers may not do this.  It's
up to the driver.

As for providing health monitor stats, those currently are not being
provided.  There haven't been any plans for that yet because everyone
has been focused on getting the v2 API out, which is almost complete
and planned to be completed for Kilo-3.  If you'd like to be able
to retrieve some health stats, please list them and let us know.  We'll
hopefully be able to get them in after v2 has been completed.

Thanks,
Brandon

On Mon, 2015-01-19 at 14:42 -0800, Varun Lodaya wrote:
 Hi All,
 
 
 I am trying to get LBaaS running on stable Juno. I can get all the
 LBaaS components correctly installed and working as expected. But I am
 facing some issues with the health-monitor. I am not quite sure if
 it’s working as expected.
 
 
 I have 2 ubuntu servers as members of http-pool and I have stopped
 apache process on 1 of the servers. I have HTTP health-monitor
 configured on the pool which runs every 1 min and checks for 200
 response code on HTTP GET. I was expecting it to FAIL after 3 retries
 and make the status “INACTIVE” for the member where apache is not
 running. But for some reason, it’s always ACTIVE. 
 
 
 Can somebody help me understand how it is supposed to work, and whether it's a bug?
 
 
 Also, currently I don’t see any health monitor stats with neutron. Is
 there any plan to get health monitor stats in future releases?
 
 
 Thanks,
 Varun
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Optional Properties in an Entity

2015-01-19 Thread Dean Troyer
On Mon, Jan 19, 2015 at 3:54 PM, Kevin L. Mitchell 
kevin.mitch...@rackspace.com wrote:

 When we look at consistency, we look at everything else in OpenStack.
 From the standpoint of the nova API (with which I am the most familiar),
 I am not aware of any property that is ever omitted from any payload
 without versioning coming in to the picture, even if its value is null.
 Thus, I would argue that we should encourage the first situation, where
 all properties are included, even if their value is null.


Independent of actual implementations in OpenStack, I prefer always
including null/empty properties here because it is slightly more
self-documenting.  Having spent the morning chasing down attributes for an
API to be named at a later date by looking at server code, we do not help
ourselves or the users of our APIs by omitting this sort of thing.

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Changes to the core team

2015-01-19 Thread Aaron Rosen
+1

On Fri, Jan 16, 2015 at 12:03 PM, Carl Baldwin c...@ecbaldwin.net wrote:

 +1

 On Thu, Jan 15, 2015 at 3:31 PM, Kyle Mestery mest...@mestery.com wrote:
  The last time we looked at core reviewer stats was in December [1]. In
  looking at the current stats, I'm going to propose some changes to the
 core
  team. Reviews are the most important part of being a core reviewer, so we
  need to ensure cores are doing reviews. The stats for the 90 day period
 [2]
  indicate some changes are needed for core reviewers who are no longer
  reviewing on pace with the other core reviewers.
 
  First of all, I'm removing Sumit Naiksatam from neutron-core. Sumit has
 been
  a core reviewer for a long time, and his past contributions are very much
  thanked by the entire OpenStack Neutron team. If Sumit jumps back in with
  thoughtful reviews in the future, we can look at getting him back as a
  Neutron core reviewer. But for now, his stats indicate he's not
 reviewing at
  a level consistent with the rest of the Neutron core reviewers.
 
  As part of the change, I'd like to propose Doug Wiegley as a new Neutron
  core reviewer. Doug has been actively reviewing code across not only all
 the
  Neutron projects, but also other projects such as infra. His help and
 work
  in the services split in December were the reason we were so successful
 in
  making that happen. Doug has also been instrumental in the Neutron LBaaS
 V2
  rollout, as well as helping to merge code in the other neutron service
  repositories.
 
  I'd also like to take this time to remind everyone that reviewing code
 is a
  responsibility, in Neutron the same as other projects. And core reviewers
  are especially beholden to this responsibility. I'd also like to point
 out
  that +1/-1 reviews are very useful, and I encourage everyone to continue
  reviewing code even if you are not a core reviewer.
 
  Existing neutron cores, please vote +1/-1 for the addition of Doug to the
  core team.
 
  Thanks!
  Kyle
 
  [1]
 
 http://lists.openstack.org/pipermail/openstack-dev/2014-December/051986.html
  [2] http://russellbryant.net/openstack-stats/neutron-reviewers-90.txt
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Final steps toward a Convergence design

2015-01-19 Thread Zane Bitter

Hi folks,
I'd like to come to agreement on the last major questions of the 
convergence design. I well aware that I am the current bottleneck as I 
have been struggling to find enough time to make progress on it, but I 
think we are now actually very close.


I believe the last remaining issue to be addressed is the question of 
what to do when we want to update a resource that is still IN_PROGRESS 
as the result of a previous (now cancelled, obviously) update.


There are, of course, a couple of trivial and wrong ways to handle it:

1) Throw UpdateReplace and make a new one
 - This is obviously a terrible solution for the user

2) Poll the DB in a loop until the previous update finishes
 - This is obviously horribly inefficient

So the preferred solution here needs to involve retriggering the 
resource's task in the current update once the one from the previous 
update is complete.



I've implemented some changes in the simulator - although note that 
unlike stuff I implemented previously, this is extremely poorly tested 
(if at all) since the simulator runs the tasks serially and therefore 
never hits this case. So code review would be appreciated. I committed 
the changes on a new branch, resumable:


https://github.com/zaneb/heat-convergence-prototype/commits/resumable

Here is a brief summary:
- The SyncPoints are now:
  * created for every resource, regardless of how many dependencies it has.
  * created at the beginning of an update and deleted before beginning 
another update.
  * contain only the list of satisfied dependencies (and their RefId 
and attribute values).
- The graph is now stored in the Stack table, rather than passed through 
the chain of trigger notifications.
- We'll use locks in the Resource table to ensure that only one action 
at a time can happen on a Resource.
- When a trigger is received for a resource that is locked (i.e. status 
is IN_PROGRESS and the engine owning it is still alive), the trigger is 
ignored.
- When processing of a resource completes, a failure to find any of the 
sync points that are to be notified (every resource has at least one, 
since the last resource in each chain must notify the stack that it is 
complete) indicates that the current update has been cancelled and 
triggers a new check on the resource with the data for the current 
update (retrieved from the Stack table) if it is ready (as indicated by 
its SyncPoint entry).
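
To make the retrigger behaviour described above concrete, here is a rough,
untested sketch of the flow in Python. It is only an illustration of the
bullets; helper names like try_lock, notify and load_current_traversal are
invented for the example and are not code from the prototype:

def check_resource(resource, traversal, data):
    # A trigger for a locked resource (IN_PROGRESS, engine alive) is ignored.
    if not try_lock(resource, traversal):
        return
    try:
        do_check(resource, data)
    finally:
        unlock(resource)

    for sync_point in sync_points_to_notify(resource, traversal):
        if not notify(sync_point, resource):
            # The sync point is gone, so this traversal was cancelled.
            # Re-read the graph/data for the current update from the Stack
            # table and retrigger the check if the resource is ready there.
            current = load_current_traversal(resource.stack)
            if resource_ready(resource, current):
                check_resource(resource, current,
                               current.data_for(resource))
            return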


I'm not 100% happy with the amount of extra load this puts on the 
database, but I can't see a way to do significantly better and still 
solve this locking issue. Suggestions are welcome. At least the common 
case is considerably better than the worst case.


There are two races here that we need to satisfy ourselves we have 
answers for (I think we do):
1) Since old SyncPoints are deleted before a new transition begins and 
we only look for them after unlocking the resource being processed, I 
don't believe that both the previous and the new update can fail to 
trigger the check on the resource in the new update's traversal. (If 
there are any DB experts out there, I'd be interested in their input on 
this one.)
2) When both the previous and the new update end up triggering a check 
on the resource in the new update's traversal, we'll only perform one 
because one will succeed in locking the resource and the other will just 
be ignored after it fails to acquire the lock. (This one is watertight, 
since both processes are acting on the same lock.)



I believe that this model is very close to what Anant and his team are 
proposing. Arguably this means I've been wasting everyone's time, but a 
happier way to look at it is that two mostly independent design efforts 
converging on a similar solution is something we can take a lot of 
confidence from ;)


My next task is to start breaking this down into blueprints that folks 
can start implementing. In the meantime, it would be great if we could 
identify any remaining discrepancies between the two designs and 
completely close those last gaps.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] what code in cinder volume driver supports volume migration between two backends of same type but having different volume types?

2015-01-19 Thread Mike Perez
On 00:31 Tue 20 Jan , Nikesh Kumar Mahalka wrote:
 Does cinder retype (v2) work for LVM?
 How do I use cinder retype?

In the future, please have your subject prefixed with [cinder].

 I tried volume migration from one volume-type LVM backend to
 another volume-type LVM backend, but it failed.
 How can I achieve this?

Please provide your cinder-api, cinder-vol, and cinder-scheduler service logs.
You can paste things to http://paste.openstack.org

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][lbaas] Pool member status 'ACTIVE' even on health check failure

2015-01-19 Thread Varun Lodaya
Hi All,

I am trying to get LBaaS running on stable Juno. I can get all the LBaaS 
components correctly installed and working as expected. But I am facing some 
issues with the health-monitor. I am not quite sure if it’s working as expected.

I have 2 ubuntu servers as members of http-pool and I have stopped apache 
process on 1 of the servers. I have HTTP health-monitor configured on the pool 
which runs every 1 min and checks for 200 response code on HTTP GET. I was 
expecting it to FAIL after 3 retries and make the status “INACTIVE” for the 
member where apache is not running. But for some reason, it’s always ACTIVE.

Can somebody help me understand how it is supposed to work, and whether it's a bug?

Also, currently I don’t see any health monitor stats with neutron. Is there any 
plan to get health monitor stats in future releases?

Thanks,
Varun
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Vancouver Design Summit format changes

2015-01-19 Thread Lana Brindley
I think that how Docs handles these changes depends largely on whether 
or not we're given a track. I'm aware that we didn't get one in Paris, 
and as a consequence a lot of my team felt it was difficult to get any 
real work done.


Like Sean, I appreciate that it's a difficult decision, but am looking 
forward to hearing how the TC plan to make this choice.


Lana

On 10/01/15 03:06, sean roberts wrote:

I like it. Thank you for coming up with improvements to the
summit planning. One caveat on the definition of project for summit
space. Which projects get considered for space is always difficult. Who
is going to fill the rooms they request or are they going to have them
mostly empty? I'm sure the TC can figure it out by looking at the number
of contributors or something like that. I would, however, like to know a
bit more about your plan for this specific part of the proposal sooner
rather than later.

On Friday, January 9, 2015, Thierry Carrez thie...@openstack.org wrote:

Hi everyone,

The OpenStack Foundation staff is considering a number of changes to the
Design Summit format for Vancouver, changes on which we'd very much like
to hear your feedback.

The problems we are trying to solve are the following:
- Accommodate the needs of more OpenStack projects
- Reduce separation and perceived differences between the Ops Summit and
the Design/Dev Summit
- Create calm and less-crowded spaces for teams to gather and get more
work done

While some sessions benefit from large exposure, loads of feedback and
large rooms, some others are just workgroup-oriented work sessions that
benefit from smaller rooms, less exposure and more whiteboards. Smaller
rooms are also cheaper space-wise, so they allow us to scale more easily
to a higher number of OpenStack projects.

My proposal is the following. Each project team would have a track at
the Design Summit. Ops feedback is in my opinion part of the design of
OpenStack, so the Ops Summit would become a track within the
forward-looking Design Summit. Tracks may use two separate types of
sessions:

* Fishbowl sessions
Those sessions are for open discussions where a lot of participation and
feedback is desirable. Those would happen in large rooms (100 to 300
people, organized in fishbowl style with a projector). Those would have
catchy titles and appear on the general Design Summit schedule. We would
have space for 6 or 7 of those in parallel during the first 3 days of
the Design Summit (we would not run them on Friday, to reproduce the
successful Friday format we had in Paris).

* Working sessions
Those sessions are for a smaller group of contributors to get specific
work done or prioritized. Those would happen in smaller rooms (20 to 40
people, organized in boardroom style with loads of whiteboards). Those
would have a blanket title (like "infra team working session") and
redirect to an etherpad for more precise and current content, which
should limit out-of-team participation. Those would replace project
pods. We would have space for 10 to 12 of those in parallel for the
first 3 days, and 18 to 20 of those in parallel on the Friday (by
reusing fishbowl rooms).

Each project track would request some mix of sessions ("We'd like 4
fishbowl sessions, 8 working sessions on Tue-Thu + half a day on
Friday") and the TC would arbitrate how to allocate the limited
resources. Agenda for the fishbowl sessions would need to be published
in advance, but agenda for the working sessions could be decided
dynamically from an etherpad agenda.

By making larger use of smaller spaces, we expect that setup to let us
accommodate the needs of more projects. By merging the two separate Ops
Summit and Design Summit events, it should make the Ops feedback an
integral part of the Design process rather than a second-class citizen.
By creating separate working session rooms, we hope to evolve the pod
concept into something where it's easier for teams to get work done
(less noise, more whiteboards, clearer agenda).

What do you think ? Could that work ? If not, do you have alternate
suggestions ?

--
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
~sean


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [stable] Stable check of openstack/nova failed - EnvironmentError: mysql_config not found

2015-01-19 Thread Alan Pevec
 - periodic-nova-docs-icehouse 
 http://logs.openstack.org/periodic-stableperiodic-nova-docs-icehouse/a3d88ed/ 
 : FAILURE in 1m 15s

Same symptom as https://bugs.launchpad.net/openstack-ci/+bug/1336161
which is marked as Fix Released; could the infra team check if all images
are alright?
This showed up in 3 periodic icehouse jobs over the weekend, all on
bare-precise-hpcloud-b2 nodes; I've listed them in
https://etherpad.openstack.org/p/stable-tracker


Cheers,
Alan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [nailgun] [UI] network_check_status fleild for environments

2015-01-19 Thread Alexander Kislitsky
Guys, definitely we shouldn't delete tasks.

+1 for warning.

On Fri, Jan 16, 2015 at 3:58 PM, Evgeniy L e...@mirantis.com wrote:

 Hi,

 1) +1 for warning

 2) I don't think that we should delete tasks; they are history which can be
 useful, for example for the stats feature, and also for debugging. But each
 task should have created_at and updated_at fields, and from the API you will
 be able to get the latest tasks for a specific env.

 Thanks,

 On Thu, Jan 15, 2015 at 7:20 PM, Vitaly Kramskikh vkramsk...@mirantis.com
  wrote:

 Folks,

 I want to discuss possibility to add network verification status field
 for environments. There are 2 reasons for this:

 1) One of the most frequent reasons for deployment failure is wrong
 network configuration. In the current UI network verification is completely
 optional and sometimes users are even unaware that this feature exists. We
 can warn the user before the start of deployment if the network check failed
 or wasn't performed.

 2) Currently network verification status is partially tracked by status
 of the last network verification task. Sometimes its results become stale,
 and the UI removes the task. There are a few cases when the UI does this,
 like changing network settings, adding a new node, etc (you can grep
 removeFinishedNetworkTasks to see all the cases). This definitely should
 be done on the backend.

 What is your opinion on this?

 --
 Vitaly Kramskikh,
 Software Engineer,
 Mirantis, Inc.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] static files handling, bower/

2015-01-19 Thread Radomir Dopieralski
On 16/01/15 18:55, Matthew Farina wrote:
 Doug, there still is one open question. Distributing JavaScript
 libraries via system packages is unusual. Because of that, most of the
 JavaScript libraries used by horizon don't have existing packages. Who
 will create and maintain the packages for these JavaScript libraries for
 production? For example, most of the libraries aren't available as
 debian or ubuntu packages.

You are mistaken here. It's actually the other way around. Fedora and
Debian packagers used to do heroic work with previous releases to
unbundle the static files from Horizon and link to the system-wide
JavaScript libraries installed from packages, because their packaging
policies require that. The introduction of XStatic was supposed to
merely simplify and formalize that process, but now it turns out that it
is redundant, and we can cut a corner and save the packagers having to
create all those dummy XStatic shims.

-- 
Radomir Dopieralski


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [OpenStack-Infra] Hi all, a question about using of CI account

2015-01-19 Thread Wang, Yalei
Hi Doug,

As described on the page https://wiki.openstack.org/wiki/NeutronThirdPartyTesting

Does that mean using the zuul layout .yaml file to specify the files listed below?




=
* Run against the following changes:
* All changes made to your own driver/plugin
* neutron/agents/.* (at least the agents you use)
* neutron/openstack/common/.*
* neutron/notifiers/.* (if your drivers report vif plugging events to nova)
* neutron/db/.*
* neutron/services/ (at least the services you use)




/Yalei

From: Doug Wiegley [mailto:do...@a10networks.com]
Sent: Monday, January 19, 2015 10:37 AM
To: Wang, Yalei
Cc: openstack-infra@lists.openstack.org
Subject: Re: [OpenStack-Infra] Hi all, a question about using of CI account

Yes. If using zuul, that's controlled by the layout file.
Doug


On Jan 18, 2015, at 7:28 PM, Wang, Yalei
yalei.w...@intel.com wrote:
Hi all,

Could a user apply for one CI account and monitor two kinds of changes,
e.g. use an XXX Neutron CI account to monitor both firewall and DVR patches in
neutron's repo?



/Yalei

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [openstack-dev] [neutron] Changes to the core team

2015-01-19 Thread Joshua Zhang
+1

On Tue, Jan 20, 2015 at 12:59 PM, Aaron Rosen aaronoro...@gmail.com wrote:

 +1

 On Fri, Jan 16, 2015 at 12:03 PM, Carl Baldwin c...@ecbaldwin.net wrote:

 +1

 On Thu, Jan 15, 2015 at 3:31 PM, Kyle Mestery mest...@mestery.com
 wrote:
  The last time we looked at core reviewer stats was in December [1]. In
  looking at the current stats, I'm going to propose some changes to the
 core
  team. Reviews are the most important part of being a core reviewer, so
 we
  need to ensure cores are doing reviews. The stats for the 90 day period
 [2]
  indicate some changes are needed for core reviewers who are no longer
  reviewing on pace with the other core reviewers.
 
  First of all, I'm removing Sumit Naiksatam from neutron-core. Sumit has
 been
  a core reviewer for a long time, and his past contributions are very
 much
  thanked by the entire OpenStack Neutron team. If Sumit jumps back in
 with
  thoughtful reviews in the future, we can look at getting him back as a
  Neutron core reviewer. But for now, his stats indicate he's not
 reviewing at
  a level consistent with the rest of the Neutron core reviewers.
 
  As part of the change, I'd like to propose Doug Wiegley as a new Neutron
  core reviewer. Doug has been actively reviewing code across not only
 all the
  Neutron projects, but also other projects such as infra. His help and
 work
  in the services split in December were the reason we were so successful
 in
  making that happen. Doug has also been instrumental in the Neutron
 LBaaS V2
  rollout, as well as helping to merge code in the other neutron service
  repositories.
 
  I'd also like to take this time to remind everyone that reviewing code
 is a
  responsibility, in Neutron the same as other projects. And core
 reviewers
  are especially beholden to this responsibility. I'd also like to point
 out
  that +1/-1 reviews are very useful, and I encourage everyone to continue
  reviewing code even if you are not a core reviewer.
 
  Existing neutron cores, please vote +1/-1 for the addition of Doug to
 the
  core team.
 
  Thanks!
  Kyle
 
  [1]
 
 http://lists.openstack.org/pipermail/openstack-dev/2014-December/051986.html
  [2] http://russellbryant.net/openstack-stats/neutron-reviewers-90.txt
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Best Regards
Zhang Hua(张华)
Software Engineer | Canonical
IRC:  zhhuabj
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] (no subject)

2015-01-19 Thread Nikesh Kumar Mahalka
Actually swift was disabled in my local.conf of devstack, but the below
entries were uncommented:
SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5

SWIFT_REPLICAS=1

SWIFT_DATA_DIR=$DEST/data


So after unstack and clean, I commented out these entries, stacked again,
re-ran the tempest tests, and now there is no error.


Regards
Nikesh

On Tue, Jan 20, 2015 at 12:04 AM, Anne Gentle a...@openstack.org wrote:
 Is it related to needing to cap the boto version below 2.35.0?

 Read through this for more:

 https://bugs.launchpad.net/nova/+bug/1408987

 On Mon, Jan 19, 2015 at 12:11 PM, Nikesh Kumar Mahalka
 nikeshmaha...@vedams.com wrote:

 The below test case is failing on LVM in a Kilo devstack:

 ==
 FAIL: tearDownClass
 (tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest)
 --
 Traceback (most recent call last):
 _StringException: Traceback (most recent call last):
   File /opt/stack/tempest/tempest/test.py, line 301, in tearDownClass
 teardown()
   File /opt/stack/tempest/tempest/thirdparty/boto/test.py, line 272, in
 resource_cleanup
 raise exceptions.TearDownException(num=fail_count)
 TearDownException: 1 cleanUp operation failed



  Did anyone face this?




 Regards
 nikesh

 ___
 Mailing list:
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openstack@lists.openstack.org
 Unsubscribe :
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack



___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [OpenStack-Infra] [Third-party-announce] CloudByte CI faililng to merge on all cinder patches

2015-01-19 Thread Punith S
Hi stackers,

Since we are not able to read any gerrit events, could you please re-enable
the CloudByte CI account so that we can debug the
merge failure issue on the sandbox project? ref -
http://paste.openstack.org/show/158218/

I will redirect the layout.yaml to the sandbox project instead of cinder.

We are also planning to verify on a new CI setup and shall post the logs to
the community before setting up the CI for cinder.

thanks

On Fri, Jan 16, 2015 at 7:20 PM, Edgar Magana edgar.mag...@workday.com
wrote:

  Punith,

  Thank you so much for the update!

  Edgar

   From: Punith S punit...@cloudbyte.com
 Reply-To: Announcements for third party CI operators. 
 third-party-annou...@lists.openstack.org
 Date: Thursday, January 15, 2015 at 10:56 PM

 To: Announcements for third party CI operators. 
 third-party-annou...@lists.openstack.org
 Subject: Re: [Third-party-announce] CloudByte CI faililng to merge on all
 cinder patches

   Hi Stackers,

  Apologies for the inconvenience caused by our CI,

  we were having a problem with the zuul merger and the git python in our
 CI master. ref - http://paste.openstack.org/show/158218/

  Since this problem was happening inconsistently with some of the
 patches, we assumed it was a problem with the cinder patches themselves.
 We are also proactively discussing this issue with the community.

  We will test our CI thoroughly on the sandbox project and will update the
 status.

  thanks

 On Fri, Jan 16, 2015 at 12:35 AM, Duncan Thomas duncan.tho...@gmail.com
 wrote:

 Thanks for the confirmation Clark

 On 15 January 2015 at 20:48, Clark Boylan cboy...@sapwetik.org wrote:

 According to my logs I disabled the account around 2015-01-14 23:53UTC.
 The email below arrived before then. We should be good now.

 Clark

 On Thu, Jan 15, 2015, at 10:29 AM, Duncan Thomas wrote:
  Most recent one I can see is:
 
  Delivered-To: duncan.tho...@gmail.com
  Received: by 10.114.200.234 with SMTP id jv10csp1839724ldc;
  Wed, 14 Jan 2015 23:35:03 -0800 (PST)
  X-Received: by 10.60.52.132 with SMTP id
 t4mr5012783oeo.11.1421307302246;
  Wed, 14 Jan 2015 23:35:02 -0800 (PST)
  Return-Path: rev...@openstack.org
  Received: from review.openstack.org (review.openstack.org.
  [23.253.232.87])
  by mx.google.com with ESMTPS id
  j134si354105oig.130.2015.01.14.23.35.01
  for duncan.tho...@gmail.com
  (version=TLSv1 cipher=RC4-SHA bits=128/128);
  Wed, 14 Jan 2015 23:35:02 -0800 (PST)
  Received-SPF: pass (google.com: domain of rev...@openstack.org
  designates 23.253.232.87 as permitted sender) client-ip=23.253.232.87;
  Authentication-Results: mx.google.com;
 spf=pass (google.com: domain of rev...@openstack.org designates
  23.253.232.87 as permitted sender) smtp.mail=rev...@openstack.org
  Received: from localhost ([127.0.0.1] helo=127.0.0.1)
by review.openstack.org with esmtp (Exim 4.76)
(envelope-from rev...@openstack.org)
id 1YBexJ-0004Xd-8M
for duncan.tho...@gmail.com; Thu, 15 Jan 2015 07:35:01 +
  Date: Thu, 15 Jan 2015 07:35:01 +
  From: CloudByte CI (Code Review) rev...@openstack.org
  To: Duncan Thomas duncan.tho...@gmail.com
  X-Gerrit-MessageType: comment
  List-Id: gerrit-openstack-cinder.review.openstack.org
  List-Unsubscribe: https://review.openstack.org/settings
  Subject: Change in openstack/cinder[master]: Fix LVM thin pool creation
  race
  X-Gerrit-Change-Id: I006970736ba0e62df383bacc79b5754dea2e9a3e
  X-Gerrit-ChangeURL: https://review.openstack.org/146917
  X-Gerrit-Commit: 2ba8963675d0d193e6a1f10015345265c945c707
  In-Reply-To:
  
 gerrit.1421167415000.i006970736ba0e62df383bacc79b5754dea2e9...@review.openstack.org
 
  References:
  
 gerrit.1421167415000.i006970736ba0e62df383bacc79b5754dea2e9...@review.openstack.org
 
  MIME-Version: 1.0
  Content-Type: text/plain; charset=UTF-8
  Content-Transfer-Encoding: 8bit
  Content-Disposition: inline
  User-Agent: Gerrit/2.8.4-15-g6dc8444
  Message-Id: e1ybexj-0004xd...@review.openstack.org
 
  CloudByte CI has posted comments on this change.
 
 
 
  On 15 January 2015 at 20:16, Duncan Thomas duncan.tho...@gmail.com
  wrote:
 
   Erm, I definitely get mail from it normally, e.g.:
  
   Delivered-To: duncan.tho...@gmail.com
   Received: by 10.114.200.234 with SMTP id jv10csp1377820ldc;
   Tue, 13 Jan 2015 08:44:06 -0800 (PST)
   X-Received: by 10.182.231.230 with SMTP id
 tj6mr21764320obc.58.1421167445699;
   Tue, 13 Jan 2015 08:44:05 -0800 (PST)
   Return-Path: rev...@openstack.org
   Received: from review.openstack.org (review.openstack.org.
 [23.253.232.87])
   by mx.google.com with ESMTPS id
 m3si10101533oic.33.2015.01.13.08.44.05
   (version=TLSv1 cipher=RC4-SHA bits=128/128);
   Tue, 13 Jan 2015 08:44:05 -0800 (PST)
   Received-SPF: pass (google.com: domain of rev...@openstack.org
 designates 23.253.232.87 as permitted sender) client-ip=23.253.232.87;
   Authentication-Results: mx.google.com;
  

[Openstack] [Neutron][OpenStack]How To Debug L3 Agent in DevStack

2015-01-19 Thread Wilence Yao
Hi all,

As an OpenStack developer, I want to understand the working mechanism of the
L3 agent in neutron. I have already read some materials in the wiki and the
docs on openstack.org, but when I jump into the code, the neutron code is so
complex that I can't make it clear. According to my previous experience,
debugging the code step by step may help.
Are there any docs or wiki pages that focus on debugging neutron?
The development environment is devstack stable/juno on one node.

Thanks for any help.
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[openstack-dev] [gantt] Scheduler sub-group meeting agenda 1/20

2015-01-19 Thread Dugger, Donald D
Meeting on #openstack-meeting at 1500 UTC (8:00AM MST)


1)  Remove direct nova DB/API access by Scheduler Filters - 
https://review.openstack.org/138444/

2)  Status on cleanup work - https://wiki.openstack.org/wiki/Gantt/kilo

3)  Topics for mid-cycle meetup

Note, I expect we'll spend most of the time talking about 1.  If we can come to 
agreement on that BP I'll be ecstatic.

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Cutoff deadlines for cinder drivers

2015-01-19 Thread Deepak Shetty
Just so that people following this thread know about the final decision,
Per https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#Deadlines
the deadline for CI is extended to Mar. 3, 2015 for all volume drivers.

snip
Deadlines

All volume drivers
https://github.com/openstack/cinder/tree/master/cinder/volume/drivers
need to have a CI by the end of K-3, March 19th 2015. Failure will result in
removal in the Kilo release. Discussion regarding this was in the
#openstack-meeting IRC room during the Cinder meeting. Read the discussion logs:
http://eavesdrop.openstack.org/meetings/cinder/2015/cinder.2015-01-14-16.00.log.html#l-21

/snip

On Tue, Jan 13, 2015 at 3:55 AM, Mike Perez thin...@gmail.com wrote:

 On 09:03 Mon 12 Jan , Erlon Cruz wrote:
  Hi guys,
 
  Thanks for answering my questions. I have 2 points:
 
  1 - This (removing drivers without CI) is a far-reaching change to be
  implemented without exhaustive notification and discussion on the mailing
  list. I myself was in the meeting but this decision wasn't crystal clear.
  There must be other driver maintainers completely unaware of this.

 I agree that the mailing list has not been exhausted, however, just
 reaching
 out to the mailing list is not good enough. My instructions back in
 November
 19th [1][2] were that we need to email individual maintainers and the
 openstack-dev list. That was not done. As far as I'm concerned, we can't
 stick
 to the current deadline for existing drivers. I will bring this up in the
 next
 Cinder meeting.

  2 - Building a CI infrastructure and having people to maintain the CI for a
  new driver in a 5-week time frame. Not all companies have the knowledge and
  resources necessary to do this in such a short period. We should consider a
  grace release period, i.e. drivers entering in K have until L to implement
  their CIs.

 New driver maintainers have until March 19th. [3] That's around 17 weeks
 since
 we discussed this in November [2]. This is part of the documentation for how
 to
 contribute a driver [4], which links to the third party requirement
 deadline
 [3].

 [1] -
 http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-11-19-16.00.html
 [2] -
 http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-11-19-16.00.log.html#l-34
 [3] -
 https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#Deadlines
 [4] - https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver

 --
 Mike Perez

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Cutoff deadlines for cinder drivers

2015-01-19 Thread Deepak Shetty
Yuck! It's Mar. 19, 2015 (bad copy-paste before).

On Tue, Jan 20, 2015 at 12:16 PM, Deepak Shetty dpkshe...@gmail.com wrote:

 Just so that people following this thread know about the final decision,
 Per
 https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#Deadlines
 the deadline for CI is extended to Mar. 3, 2015 for all volume drivers.

 snip
 Deadlines

 All volume drivers
 https://github.com/openstack/cinder/tree/master/cinder/volume/drivers
  need to have a CI by the end of K-3, March 19th 2015. Failure will result
  in removal in the Kilo release. Discussion regarding this was in the
  #openstack-meeting IRC room during the Cinder meeting. Read the discussion
  logs:
 http://eavesdrop.openstack.org/meetings/cinder/2015/cinder.2015-01-14-16.00.log.html#l-21

 /snip

 On Tue, Jan 13, 2015 at 3:55 AM, Mike Perez thin...@gmail.com wrote:

 On 09:03 Mon 12 Jan , Erlon Cruz wrote:
  Hi guys,
 
  Thanks for answering my questions. I have 2 points:
 
  1 - This (remove drivers without CI) is a way impacting change to be
  implemented without exhausting notification and discussion on the
 mailing
  list. I myself was in the meeting but this decision wasn't crystal
 clear.
  There must be other driver maintainers completely unaware of this.

 I agree that the mailing list has not been exhausted, however, just
 reaching
 out to the mailing list is not good enough. My instructions back in
 November
 19th [1][2] were that we need to email individual maintainers and the
 openstack-dev list. That was not done. As far as I'm concerned, we can't
 stick
 to the current deadline for existing drivers. I will bring this up in the
 next
 Cinder meeting.

  2 - Build a CI infrastructure and have people to maintain a the CI for a
  new driver in a 5 weeks frame. Not all companies has the knowledge and
  resources necessary to this in such sort period. We should consider a
 grace
  release period, i.e. drivers entering on K, have until L to implement
  theirs CIs.

 New driver maintainers have until March 19th. [3] That's around 17 weeks
 since
 we discussed this in November [2]. This is part the documentation for how
 to
 contribute a driver [4], which links to the third party requirement
 deadline
 [3].

 [1] -
 http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-11-19-16.00.html
 [2] -
 http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-11-19-16.00.log.html#l-34
 [3] -
 https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#Deadlines
 [4] - https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver

 --
 Mike Perez

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas] Pool member status 'ACTIVE' even on health check failure

2015-01-19 Thread Varun Lodaya
Hey Brandon,

Thanks for the response. My bad. Seems there is a small bug in horizon.
The moment you configure a health monitor, it shows up in the pool. I
thought it automatically got associated. But when I checked via CLI, it
was not. After associating it via CLI (I was not able to associate it via
horizon; the drop-down for health-monitors doesn't seem to work), it seems
to work fine :).

As for stats, ideally it's good to get counters like:
ICMP successful requests: x
ICMP response timeouts: y
ICMP response failures: z

HTTP successful responses: a
HTTP timeouts: b
.
.
.


Just an initial thought: this sort of verifies that the monitors are working
as expected. In the current situation, I had to manually log in to the
server to see if it was catering to any health-monitoring requests.

Even getting haproxy stats is not very straightforward, as you need to
open a unix socket in the haproxy cfg and restart the haproxy instance, which
might not be possible in production sometimes.
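
For reference, once the stats socket is enabled (e.g. a line such as
stats socket /var/run/haproxy.sock in the haproxy config, which as noted
requires a reload), the counters can be pulled with a few lines of Python.
This is only a sketch and the socket path is an assumption:

import socket

def haproxy_stats(path='/var/run/haproxy.sock'):
    # Ask haproxy for its CSV statistics over the admin socket.
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(path)
    sock.sendall(b'show stat\n')
    chunks = []
    while True:
        data = sock.recv(4096)
        if not data:
            break
        chunks.append(data)
    sock.close()
    # First line is the CSV header (prefixed with "# "), then one row per
    # frontend/backend/server with session and health-check counters.
    return b''.join(chunks).decode().splitlines()

for row in haproxy_stats():
    print(row)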

Thanks,
Varun



On 1/19/15, 8:21 PM, Brandon Logan brandon.lo...@rackspace.com wrote:

Hi Varun,

Could you tell me which driver you are using? If you're running the
HaproxyOnHostPluginDriver then that should do a check every 6 seconds
for members being down.  However, other drivers may not do this.  It's
up to the driver.

As for providing health monitor stats, those currently are not being
provided.  There haven't been any plans for that yet because everyone
has been focused on getting the v2 API out.  Which is almost complete
and plan for that to be completed for Kilo-3.  If you'd like to be able
to retrieve some health stats, please list them and let us know.  We'll
hopefully be able to get them in after v2 has completed.

Thanks,
Brandon

On Mon, 2015-01-19 at 14:42 -0800, Varun Lodaya wrote:
 Hi All,
 
 
 I am trying to get LBaaS running on stable Juno. I can get all the
 LBaaS components correctly installed and working as expected. But I am
 facing some issues with the health-monitor. I am not quite sure if
 it's working as expected.
 
 
 I have 2 ubuntu servers as members of http-pool and I have stopped
 apache process on 1 of the servers. I have HTTP health-monitor
 configured on the pool which runs every 1 min and checks for 200
 response code on HTTP GET. I was expecting it to FAIL after 3 retries
 and make the status "INACTIVE" for the member where apache is not
 running. But for some reason, it's always ACTIVE.
 
 
 Can somebody help me understand how it is supposed to work, and whether it's a bug?
 
 
 Also, currently I don't see any health monitor stats with neutron. Is
 there any plan to get health monitor stats in future releases?
 
 
 Thanks,
 Varun
 
_
_
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Notification Schemas ...

2015-01-19 Thread Eddie Sheffield
Thanks for the comments Jay.

Haven't acted on the comments yet, but I've just pushed an update with some 
simple validation capability. Fairly quick and dirty stuff, just wrapping the 
schema validation capability of python-jsonschema. You need jsonschema in your 
python env to run it.
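
For anyone who wants to try the schemas without the wrapper, the check boils
down to a couple of jsonschema calls. A minimal sketch (the schema path comes
from the repo above; the payload file name is a placeholder):

import json
import jsonschema

# Load one of the schema files from the notification-schemas repo.
with open('nova/compute-instance-create-end.json') as f:
    schema = json.load(f)

# Load a captured notification payload to check (placeholder file name).
with open('sample-notification.json') as f:
    payload = json.load(f)

try:
    jsonschema.validate(payload, schema)
    print('payload matches the schema')
except jsonschema.ValidationError as exc:
    print('validation failed: %s' % exc.message)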

Eddie

From: Sandy Walsh [sandy.wa...@rackspace.com]
Sent: Monday, January 19, 2015 7:53 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Notification Schemas ...

Thanks Jay, good feedback.

Comments inline ...

 
 From: Jay Pipes [jaypi...@gmail.com]
 Sent: Sunday, January 18, 2015 10:47 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Notification Schemas ...

 On 01/18/2015 04:39 PM, Sandy Walsh wrote:

  Eddie Sheffield has pulled together a strawman set of notification
  schemas for Nova and Glance.
 
  https://github.com/StackTach/notification-schemas

 Some important things that I see are missing, so far... please let me
 know what your thoughts are regarding these.

 1) There needs to be some method of listing the notification codes. By
 code, I mean compute.instance_create.start, or possibly the CADF
 event codes, which I believe I recommended way back a-when the original
 ML thread started.

The title contains the event name
https://github.com/StackTach/notification-schemas/blob/master/nova/compute-instance-create-end.json#L4
but perhaps a wider CADF-like category would be good. And
we should keep a close eye on ensuring we're capturing the broadest
CADF attributes now.

That said, CADF and schemas are complementary. Once we have the
existing data mapped we should be able to determine which parts fit in
the CADF envelope directly and which parts need to go in the
attachment part (which will still need a separate schema definition). These
attachments could lead the way to changes to the core CADF spec. The
CADF team has stated they're receptive to that.

The notification driver can output the desired wire protocol.

 2) Each notification message payload must contain a version in it.
 We need some ability to evolve the notification schemas over time,
 and a version in the payload is a pre-requisite for that.

Agreed. I suspect we'll be adding a common object to
https://github.com/StackTach/notification-schemas/blob/master/nova/objects.json
which will contain all that version stuff. Next round for sure.
This was just to capture what we have now.

 All the best,
 -jay

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Optional Properties in an Entity

2015-01-19 Thread Kevin L. Mitchell
On Mon, 2015-01-19 at 19:55 +, Douglas Mendizabal wrote:
 I’m curious about something that came up during a bug discussion in
 one of the Barbican weekly meetings.  The question is about optional
 properties in an entity.  e.g. We have a Secret entity that has some
 properties that are optional, such as the Secret’s name.  We were
 split on what the best approach for returning the secret
 representation would be when an optional property is not set.
 
 In one camp, some developers would like to see the properties returned
 no matter what.  That is to say, the Secret dictionary would include a
 key for “name” set to null every single time.  i.e.
[snip]
 On the other camp, some developers would like to see optional
 properties omitted if they were not set by the user.
 
 The advantage of always returning the property is that the response is
 easier to parse, since you don’t have to check for the existence of
 the optional keys.  The argument against it is that it makes the API
 more rigid, and clients more fragile.

I keep trying to come up with technical arguments for or against, and
the only one I can come up with that has any true meaning is that
omitting properties reduces bandwidth usage a little…but I don't really
think that's something we're particularly concerned about.  Thus, from a
technical perspective, either way is perfectly fine, and we just have to
consider consistency.

When we look at consistency, we look at everything else in OpenStack.
From the standpoint of the nova API (with which I am the most familiar),
I am not aware of any property that is ever omitted from any payload
without versioning coming in to the picture, even if its value is null.
Thus, I would argue that we should encourage the first situation, where
all properties are included, even if their value is null.
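
Concretely, using the Secret example from the original mail, the two styles
under discussion look like this (the values are made up for illustration):

# Camp 1: always include optional properties, even when unset.
secret_always = {'secret_ref': 'http://.../secrets/abc123',
                 'name': None,
                 'algorithm': 'aes'}

# Camp 2: omit optional properties that the user never set.
secret_omitted = {'secret_ref': 'http://.../secrets/abc123',
                  'algorithm': 'aes'}
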
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com
Rackspace


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [Scale] [UI] Improvements to handle 200+ nodes

2015-01-19 Thread Andrey Danin
Definitely, it should be a form-based filter. It's much simpler than a
pure query.
Also, you can translate a user selection to a query and add to a location
string (like it's done now for the Logs tab [1], for instance). It would
allow a user to use a full power of queries.

[1]
http://demo.fuel-infra.org:8000/#cluster/874/logs/type:local;source:api;level:info
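
As a rough illustration of that translation (Python here just to show the
idea; the actual UI code is JavaScript), a set of selected filters could be
serialized into the same key:value;key:value form the Logs tab already uses:

def filter_to_query(filters):
    # Turn {'status': 'error', 'roles': ['controller', 'compute']} into
    # 'roles:controller,compute;status:error' for the location hash.
    parts = []
    for key, values in sorted(filters.items()):
        if isinstance(values, (list, tuple)):
            values = ','.join(str(v) for v in values)
        parts.append('%s:%s' % (key, values))
    return ';'.join(parts)

print(filter_to_query({'status': 'error', 'roles': ['controller', 'compute']}))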

On Fri, Jan 16, 2015 at 3:50 PM, Nikolay Markov nmar...@mirantis.com
wrote:

  It should also be mentioned that there are several changes to make on the
  backend in order for the UI to work faster, not on the UI itself. For
  example, these are:

 - Custom filters, as Vitaly mentioned
 - Pagination of collections
 - PATCH requests support
 - Probably both short and /full representations for some entities

 On Fri, Jan 16, 2015 at 8:48 AM, Vitaly Kramskikh
 vkramsk...@mirantis.com wrote:
  Folks,
 
  Currently Fuel UI can handle large amounts of nodes due to a recent
  refactoring - rendering and operations with nodes became much faster. But
  that large amount of nodes also requires UX improvement, I'd love to hear
  your ideas and opinions on these proposals:
 
  Introduce compact node representation and let users switch between
 standard
  and compact view. Compact view will display only node name and status and
  will allow displaying 4-8 nodes in a row instead of only one.
  Currently it is only possible to filter nodes by name. The filtering feature
  could be extended to allow filtering by other parameters: status, roles,
  manufacturer, RAM, disk space. There are 2 options (I'd like to hear
 which
  one you prefer):
 
  Form-based filter (beside a single input for name there will be controls
 for
  other parameters)
  Query language-based filter (like one used in Gerrit)
 
  Add ability to add arbitrary tags with values to nodes and also allow
  filtering by them.
 
 
  --
  Vitaly Kramskikh,
  Software Engineer,
  Mirantis, Inc.
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Best regards,
 Nick Markov

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Andrey Danin
ada...@mirantis.com
skype: gcon.monolake
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gantt (Scheduler) meeting agenda

2015-01-19 Thread Ed Leafe
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 01/19/2015 02:21 PM, Sylvain Bauza wrote:
 Sounds like there is a misunderstanding of my opinion.
 It's unfortunate that even though we had a Google Hangout, we have to
 discuss again what we agreed on.
 But OK, let's revisit what we said, and let me try to provide once more
 a quick explanation of my view here...

OK, thanks for clarifying!

- -- 

- -- Ed Leafe
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.14 (GNU/Linux)

iQIcBAEBAgAGBQJUvYGRAAoJEKMgtcocwZqLugoP/ij8CV3btpkm25mj5GoIGAKi
+kC5zNf0ecaFbS0snuF6FnmHbaNme/n8nXDIqx2xtC3KK+OyWUgGPVCr9sXr8VUx
VdoMq9e/OD0ndYR1g2oGsSZ0Wl2K/Zc0mpTzMiXVREgz2dkNVowOJL1XOx/OotvS
QuG/P+nIECwBXPfruXwsHos/OEuZHvtdrVbT60dDCfn49JevCV3oulVB7MpJH1hx
Y8DcPOjV9MufLEKLh0mCYC7nFLV2mjqx3PxQzY55bDydPJFPVhwxGGy+5MUMhoSH
wcIFoEXlu1IeduM8bs2JsShhrMyDLMUaLO2DuuA6sn2uUoobYcaXy5PDtmIz5++M
wgmDhBDkqSs6vSjlUydQqCiUxS2sEN/2H0ZifuExjjPA0XEKKTlvQPpYTNEBeqa7
ElWqDHX4/sMNeGVAV+1FlexNRxdD1mc2IWsmncc5dYEkF0PybPIH69d6G1OMhuGY
GP2C7kai/yQSakD0zk7kz0kvQG2bBwjxtpGu2RTQyXtY9nw5Y/uxIBeHm6w9fAAr
Eg9NjXplz4nsazUackNouxP9Ra9fVP0dEY+8fC/cNOB7sIoFYl40Bx+u6CdCgFk4
/w1Nab0C8pTblLp9MbH6/6nyf9myoOjp8XlpkIDBGp0L0kTwtn2b/75Pu2ZZXZA/
FHDjWVMbkG9/l/iB/sWJ
=g0gO
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Fuel and node config checks while/before provisioning?

2015-01-19 Thread Aleksandr Savatieiev
Hi Everyone,

I’ll be straight to the point here.
We have a cloud with Fuel operating it and provisioning CentOS nodes. You
might know that Fuel, by default, creates mdraid if there is more than
three disks on server (3+). In the debug process we are commented out some
rows in one of the nailgun scripts, and not removed that ‘commenting’
symbols after the experiments. As a result, this brought us to the
situation when disk was not accepted and whole node provisioning was broken.

So, here is the question to discuss:
Can we implement some mechanism that checks for this kind of misconfiguration
and warns the user if he/she has configured the system (nailgun) incorrectly?
OSTF is not the correct place for this, because it tests already provisioned nodes.

Thanks to everyone for reading this.

-- 

Alex Savatieiev | QA manager | DevOps | OpenStack Services | Mirantis

http://www.mirantis.com | cell: +380669077482 | skype: savexx
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Support status of Heat resource types

2015-01-19 Thread Qiming Teng
On Mon, Jan 19, 2015 at 09:49:08AM +, Steven Hardy wrote:
 On Sun, Jan 18, 2015 at 08:41:46PM +0800, Qiming Teng wrote:
  Dear all,
  One question we constantly get from Heat users is about the support
  status of resource types. Some users are not well informed of this
  information so that is something we can improve.
  
  Though some resource types are already labelled with support status,
  there are quite a few of them not yet identified. Help is needed to
  complete the list.
 
 Identifying any resources which were added in either Juno or Kilo which
 aren't tagged appropriately seems worthwhile, but all other resources
  should exist in any non-EOL version of heat, so is the effort justified in,
 for example, saying supported since grizzly?

Honestly speaking, I don't think we need to backtrace all the way to the
day when World War II ended. The questions we got are mostly about the
support status of resource types in Icehouse and Juno. What have been
added? Which are deprecating/deprecated? So, I tend to agree that saying
just 'supported since 2013.2' would suffice.

A more important omission IMO was that we didn't expose the properties
 schema tags for new properties added to existing resources - I fixed that
 on Friday:
 
 https://review.openstack.org/#/c/147434/1

Documenting the support status is important for sure, but I'm concerned
that most users are just brave/confident enough to start with trying the
command line. They don't even know there are docs. They start with
simple templates and then experiment with each resource type they feel
interested in.

 Can you clarify the nature of the questions you're constantly getting from
 users, so we can ensure we're fixing the right thing to solve their
 problem?
 
 Steve
 
  
  +--+++
  | name | support_status | since  |
  +--+++
  | AWS::AutoScaling::AutoScalingGroup   || 2014.1 |
  | AWS::AutoScaling::LaunchConfiguration|||
  | AWS::AutoScaling::ScalingPolicy  |||
  | AWS::CloudFormation::Stack   |||
  | AWS::CloudFormation::WaitCondition   || 2014.1 |
  | AWS::CloudFormation::WaitConditionHandle || 2014.1 |
  | AWS::CloudWatch::Alarm   |||
  | AWS::EC2::EIP|||
  | AWS::EC2::EIPAssociation |||
  | AWS::EC2::Instance   |||
  | AWS::EC2::InternetGateway|||
  | AWS::EC2::NetworkInterface   |||
  | AWS::EC2::RouteTable || 2014.1 |
  | AWS::EC2::SecurityGroup  |||
  | AWS::EC2::Subnet |||
  | AWS::EC2::SubnetRouteTableAssociation|||
  | AWS::EC2::VPC|||
  | AWS::EC2::VPCGatewayAttachment   |||
  | AWS::EC2::Volume |||
  | AWS::EC2::VolumeAttachment   |||
  | AWS::ElasticLoadBalancing::LoadBalancer  |||
  | AWS::IAM::AccessKey  |||
  | AWS::IAM::User   |||
  | AWS::RDS::DBInstance |||
  | AWS::S3::Bucket  |||
  | My::TestResource |||
  | OS::Ceilometer::Alarm|||
  | OS::Ceilometer::CombinationAlarm || 2014.1 |
  | OS::Cinder::Volume   |||
  | OS::Cinder::VolumeAttachment |||
  | OS::Glance::Image|| 2014.2 |
  | OS::Heat::AccessPolicy   |||
  | OS::Heat::AutoScalingGroup   || 2014.1 |
  | OS::Heat::CloudConfig|| 2014.1 |
  | OS::Heat::HARestarter| DEPRECATED ||
  | OS::Heat::InstanceGroup  |||
  | OS::Heat::MultipartMime  || 2014.1 |
  | OS::Heat::RandomString   || 2014.1 |
  | OS::Heat::ResourceGroup  || 2014.1 |
  | OS::Heat::ScalingPolicy  |||
  | OS::Heat::SoftwareComponent  || 2014.2 |
  | 

Re: [openstack-dev] [Heat] Support status of Heat resource types

2015-01-19 Thread Steven Hardy
On Mon, Jan 19, 2015 at 07:29:42PM +0800, Qiming Teng wrote:
 On Mon, Jan 19, 2015 at 09:49:08AM +, Steven Hardy wrote:
  On Sun, Jan 18, 2015 at 08:41:46PM +0800, Qiming Teng wrote:
   Dear all,
   One question we constantly get from Heat users is about the support
   status of resource types. Some users are not well informed of this
   information so that is something we can improve.
   
   Though some resource types are already labelled with support status,
   there are quite a few of them not yet identified. Help is needed to
   complete the list.
  
  Identifying any resources which were added in either Juno or Kilo which
  aren't tagged appropriately seems worthwhile, but all other resources
   should exist in any non-EOL version of heat, so is the effort justified in,
  for example, saying supported since grizzly?
 
 Honestly speaking, I don't think we need to backtrace all the way to the
 day when World War II ended. The questions we got are mostly about the
 support status of resource types in Icehouse and Juno. What have been
 added? Which are deprecating/deprecated? So, I tend to agree that saying
 just 'supported since 2013.2' would suffice.

Ok, I'm not clear what there is to do here then, as AFAIK all resources
added during Juno and Kilo should be tagged already (if they're not, please
raise a bug).

  A more important omission IMO was that we didn't expose the properties
  schema tags for new properties added to existing resources - I fixed that
  on Friday:
  
  https://review.openstack.org/#/c/147434/1
 
 Documenting the support status is important for sure, but I'm concerned
 that most users are just brave/confident enough to start with trying the
 command line. They don't even know there are docs. They start with
 simple templates and then experiment with each resource type they feel
 interested in.

I find this comment a little confusing given that the whole reason for this
thread is documenting support status ;)

That said, if users don't know there are docs, that's a serious problem, we
should add links somewhere obvious, like heat-templates in a README, if
that's where their simple templates are coming from, or maybe even add it
to the error response they see when we reject a template due to an unknown
resource type.

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TEMPEST] doubts regarding tempest

2015-01-19 Thread Inessa Vasilevskaya
Hi,
I can't claim to have dug deeply into tempest, but at least I can give you some
info on question 1.
You can run a single test or a series of tests stopping at import pdb;
pdb.set_trace() if you use testr:

testr list-tests test_name_regex > my-list
python -m testtools.run discover --load-list my-list

Mind that you need to activate your venv first. More info can be found
here: https://wiki.openstack.org/wiki/Testr

By the way, I personally recommend ipdb - it has the same power, but is a lot
more user friendly :)
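
Putting it together, a minimal end-to-end example might look like this (the
venv path and test name are illustrative assumptions, not taken from the thread):

# from the root of a tempest checkout; the venv path is an assumption
source .venv/bin/activate
# add import pdb; pdb.set_trace() (or ipdb) inside the test method to inspect
testr list-tests test_create_server > my-list
# run the listed tests in the foreground so the debugger prompt is usable
python -m testtools.run discover --load-list my-list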

Hope it helps,

Inessa Vasilevskaya

On Sun, Jan 18, 2015 at 11:50 AM, Abhishek Talwar/HYD/TCS 
abhishek.tal...@tcs.com wrote:

 Hi,

 I have some doubts regarding tempest.

 Q1. How can we debug test cases in OpenStack? (We can debug the code
 through import pdb, but the same doesn't work for test cases and we get a
 BdbQuit error)

 Q2. Suppose I need to add a test case in the tempest suite how can I do
 that ?

 Q3. Whenever we run the run_tests.sh script, where does it first hit and how
 do the test cases start running?

 Q4. As it is said that tempest interacts only with the OpenStack REST
 APIs, then how are the test cases for the various clients run? By clients
 here I mean python-novaclient, keystoneclient, etc.


 --
 Thanks and Regards
 Abhishek Talwar
 Employee ID : 770072
 Assistant System Engineer
 Tata Consultancy Services,Gurgaon
 India
 Contact Details : +918377882003

 =-=-=
 Notice: The information contained in this e-mail
 message and/or attachments to it may contain
 confidential or privileged information. If you are
 not the intended recipient, any dissemination, use,
 review, distribution, printing or copying of the
 information contained in this e-mail message
 and/or attachments to it are strictly prohibited. If
 you have received this communication in error,
 please notify us by reply e-mail or telephone and
 immediately and permanently delete the message
 and any attachments. Thank you


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][client] Keeping Nailgun's log after running tests

2015-01-19 Thread Roman Prykhodchenko
Hi folks,

at the moment the run_test.sh script removes Nailgun's log file after running.
The question is whether we should add an option to keep it.
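
For discussion, such an option could look roughly like the sketch below (the
flag name and log path are made up, not the actual run_test.sh code):

KEEP_LOGS=0
NAILGUN_LOG=/var/log/nailgun/app.log   # placeholder path
for arg in "$@"; do
  case "$arg" in
    -k|--keep-logs) KEEP_LOGS=1 ;;
  esac
done
# ... run the tests ...
if [ "$KEEP_LOGS" -eq 0 ]; then
  rm -f "$NAILGUN_LOG"                 # current behaviour
else
  echo "Nailgun log kept at $NAILGUN_LOG"
fi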


- romcheg


signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] tooz 0.11 released

2015-01-19 Thread Julien Danjou
Hi,

The Oslo team is pleased to announce the release of tooz 0.11

This release includes several bug fixes as well as many other changes:

405dfec Add a file based driver
26c39ac Upgrade to hacking 0.10
886aa62 Update sentinel support to allow multiple sentinel hosts
ae65ae5 Allow to pass arguments to retry()
e0e8519 IPC simplification

For more details, please see the git log history and
  https://launchpad.net/python-tooz/+milestone/0.11

Please report issues through launchpad: https://launchpad.net/python-tooz

Cheers,
-- 
Julien Danjou
# Free Software hacker
# http://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Notification Schemas ...

2015-01-19 Thread Sandy Walsh
Thanks Jay, good feedback. 

Comments inline ...

 
 From: Jay Pipes [jaypi...@gmail.com]
 Sent: Sunday, January 18, 2015 10:47 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Notification Schemas ...

 On 01/18/2015 04:39 PM, Sandy Walsh wrote:

  Eddie Sheffield has pulled together a strawman set of notification
  schemas for Nova and Glance. 
 
  https://github.com/StackTach/notification-schemas

 Some important things that I see are missing, so far... please let me
 know what your thoughts are regarding these.

 1) There needs to be some method of listing the notification codes. By
 code, I mean compute.instance_create.start, or possibly the CADF
 event codes, which I believe I recommended way back when the original
 ML thread started.

The title contains the event name:
https://github.com/StackTach/notification-schemas/blob/master/nova/compute-instance-create-end.json#L4

but perhaps a wider CADF-like category would be good. And
we should keep a close eye on ensuring we're capturing the broadest
CADF attributes now.

That said, CADF and schemas are complementary. Once we have the
existing data mapped we should be able to determine which parts fit in
the CADF envelope directly and which parts need to go in the
attachment part (which will still need a separate schema definition). These
attachments could lead the way to changes to the core CADF spec. The
CADF team has stated they're receptive to that.

The notification driver can output the desired wire protocol.

 2) Each notification message payload must contain a version in it.
 We need some ability to evolve the notification schemas over time,
 and a version in the payload is a pre-requisite for that.

Agreed. I suspect we'll be adding a common object to
https://github.com/StackTach/notification-schemas/blob/master/nova/objects.json
which will contain all that version stuff. Next round for sure. 
This was just to capture what we have now.

 All the best,
 -jay

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] ironic-discoverd status update: 1.0.0 feature freeze and testing

2015-01-19 Thread Dmitry Tantsur

Hi all!

This is a purely informational email about discoverd; feel free to skip it if
you're not interested.


For those interested, I'm glad to announce that ironic-discoverd 1.0.0 is
feature complete and is scheduled for release on Feb 5 with the Kilo-2
milestone. The master branch is under feature freeze now and will only
receive bug fixes and documentation updates until the release. This is 
the version intended to work with my in-band inspection spec 
http://specs.openstack.org/openstack/ironic-specs/specs/kilo/inband-properties-discovery.html


Preliminary release notes: 
https://github.com/stackforge/ironic-discoverd#10-series
Release tracking page: 
https://bugs.launchpad.net/ironic-discoverd/+milestone/1.0.0
Installation notes: 
https://github.com/stackforge/ironic-discoverd#installation (might be 
slightly outdated, but should be correct)


I'm not providing a release candidate tarball, but you can treat git 
master at https://github.com/stackforge/ironic-discoverd as such. Users 
of RPM-based distros can use my repo: 
https://copr.fedoraproject.org/coprs/divius/ironic-discoverd/ but beware
that it's kind of experimental, and it will be receiving updates from
git master after the release is pushed to PyPI.


Lastly, I do not expect this release to be a long-term supported one.
The next feature release, 1.1.0, is expected to arrive around Kilo RC and will
be supported for a longer time.


Looking forward to your comments/suggestions/bug reports.
Dmitry.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [OpenStack-Infra] Hi all, a question about using of CI account

2015-01-19 Thread Anita Kuno
On 01/18/2015 08:24 PM, Wang, Yalei wrote:
 Hi all,
 
 Could a user apply for one CI account and monitor two kinds of changes,
 e.g. use an XXX Neutron CI account to monitor both firewall and DVR patches in
 neutron's repo?
 
 
 
 /Yalei
 
 
 
 
 ___
 OpenStack-Infra mailing list
 OpenStack-Infra@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
 
Hi Yalei:

Please read the rules for naming an account again:
http://ci.openstack.org/third_party.html#creating-a-service-account

XXX Neutron CI does not comply with the outlined expectations for names;
accounts not in compliance risk being disabled until they are brought into compliance.

Thank you,
Anita.

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [openstack-dev] [glance] Replication on image create

2015-01-19 Thread Boden Russell

 On 1/15/15 12:59 AM, Flavio Percoco wrote:
 All that being said, it'd be very nice if you could open a spec on
 this topic so we can discuss over the spec review and one of us (or
 you) can implement it if we reach consensus.


I'll create a BP + spec; doing a little homework now...

W / R / T the async task plugins -- are there plans to make the task
definitions more pluggable? Based on what I see right now it appears the
definitions are static[1].

I was expecting the list of applicable tasks to be defined via
stevedore entry points or via conf file properties.

e.g.
[entry_points]
glance.async.tasks =
import = glance.common.scripts.image_import
replicate = glance.common.scripts.image_repl
# etc...


Perhaps there's already a BP on this topic, but I didn't find one.

Thanks

[1]
https://github.com/openstack/glance/blob/master/glance/domain/__init__.py#L316

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack] MTU/Fragmentation/Retransmission Problem in Juno using GRE

2015-01-19 Thread Eren Türkay
Hello,

This will be a long e-mail in which I present my findings on the
$subject. I have been debugging this problem for 6 days and I have
pinpointed where the problem lies, but I haven't been able to fix it. I
would really appreciate it if you could read it. I hope this e-mail will serve
as a reference on the mailing list for other people experiencing the
same or a similar problem.

Unfortunately, I am stuck at fixing the problem. Any help is appreciated.


=== TL;DR ===
ICMP packets from the VM are fragmented normally and are seen as-is on the
tap interface, but they are reconstructed on the interfaces above tap (qbr, qvb,
qvo). They don't make their way out of the compute node through the GRE tunnel.

echo 0 > /proc/sys/net/bridge/bridge-nf-call-iptables

on the compute node causes fragmented packets to pass as-is, not reconstructed
on the interfaces above (qbr, qvb, etc). They make their way out of the GRE
tunnel and reach the router's network namespace; the namespace attempts to
reply, but the reply doesn't reach the GRE tunnel on the network node.

Lowering the MTU value on every interface in the router network namespace to
1454 (same as the VM) fixes the ICMP problem. ICMP packets of any length
reach from the VM to the router and vice versa.
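For reference, lowering the MTU inside the namespace amounts to something like
the following (the namespace and interface names are placeholders):

ip netns exec qrouter-<router-uuid> ip link show
ip netns exec qrouter-<router-uuid> ip link set dev qr-xxxxxxxx-xx mtu 1454
ip netns exec qrouter-<router-uuid> ip link set dev qg-xxxxxxxx-xx mtu 1454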

However, regular TCP connections do not work. I see a lot of
retransmissions. Even a simple nc connection is unreliable from the VM to the
router namespace. Iperf shows a speed of 23 Kbit/s.

Here is the detailed problem description.

=== Symptoms ===

These are the symptoms I experience.

1- Ping works but cannot SSH into VM
2- I cannot download anything inside VM, the connection is too slow.
3- A lot of TCP retransmission occurs
4- VM cannot communicate with metadata server (maybe related with 2/3?)


=== Install and Infrastructure Information ==

I have followed the official juno document step-by-step (double checked
to see if I mis-configured anything). Neutron is configured using ml2,
Openvswitch, and gre. Just as suggested in the documentation.

I have 3 physical machines with 2 NICs (controller, network, and
compute). em1 interface is my management and data network and it has a
separate switch (10.20.0.0/24). Network and Compute node communicates
using GRE on this address. em2 is connected to other switch which act as
an outside network (192.168.88.0/24). So, external and internal network
is physically separate.

Hosts run Ubuntu Server 14.04. They have OpenStack Juno. Kernel version
is 3.13.0-32-generic. Openvswitch version number is
2.0.1+git20140120-0ubuntu2. KVM is used as a hypervisor.

VMs have MTU 1454 configured in dnsmasq.conf as written in official Juno
documentation. (checked this inside VM as well)
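
(For reference, that is the usual dnsmasq override from the Juno guide - shown
here as a sketch; file paths may vary per distribution:)

# /etc/neutron/dnsmasq-neutron.conf - push MTU 1454 to instances via DHCP option 26
dhcp-option-force=26,1454

# referenced from /etc/neutron/dhcp_agent.ini, [DEFAULT] section:
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf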

VM network is 10.0.0.0/24. I have 1 VM for testing and it has IP address
of 10.0.0.8. Its router IP address is 10.0.0.1. This router has a
gateway address of 192.168.88.1

All the bridges (created by neutron, agents, etc), and network
interfaces in physical hosts have MTU of 1500.

GRO and TSO are off on the network and compute nodes (ethtool -K em1 gro/tso off)


=== Findings ===

I will omit what I tried to get here - it is a long story :( - and will
present the issue directly. I realized that ping -s 1430 10.0.0.1
inside the VM works, but ping -s 1431 10.0.0.1 does not work. I checked to
see if the same holds the other way around, inside the network namespace for
this network on the network node. Running ip netns exec qrouter ping
-s 1430 10.0.0.8 works, but -s 1431 does not.

1- Now, looking at the problem from the VM side, I ran tcpdump nearly
everywhere. It appears that the problem lies in the qvo/qbr/qvb bridges
as explained in [0].

When sending ICMP packets, they are fragmented as expected on the tap
interface. However, the fragmented packets are reconstructed just after the
tap interface, in qbrxxx, and carried reconstructed all the way
to qvbxxx/qvoxxx. So, the packets never get out on the GRE tunnel.


2- Looking from the network namespace on the network node, I checked the MTU
values of the interfaces. qrxx and qgxx have an MTU of 1500. Then I ran
tcpdump on qrxxx, where the packets to/from the VM should be seen. In
addition, I ran tcpdump on em1 (the management interface where GRE packets
should be seen) on both the network and compute nodes.

With ping -s 1431 10.0.0.8, I saw that packets were fragmented inside the
network namespace. However, the first packet wasn't seen on GRE (em1),
but the second fragmented packet was seen; it made its way out. Since it
was not a full packet, ping failed from the network namespace to the VM.


=== Attempts to Solve The Problem ===

I searched for the bridge fragmentation problem and it was suggested to
disable bridge-nf-call-iptables. I ran the following command on the
compute node:

echo 0 > /proc/sys/net/bridge/bridge-nf-call-iptables

With this setting, fragmented packets from the VM go as-is through tap and
the other qvo/qvb interfaces. They are not reconstructed. They make their
way out of the GRE tunnel and reach the router namespace. The router namespace
attempts to reply but fails. The reply packet never goes
out on the GRE tunnel. This tcpdump is attached as

Re: [openstack-dev] [Heat] Support status of Heat resource types

2015-01-19 Thread Qiming Teng
On Mon, Jan 19, 2015 at 01:24:19PM +, Steven Hardy wrote:
 On Mon, Jan 19, 2015 at 07:29:42PM +0800, Qiming Teng wrote:
  On Mon, Jan 19, 2015 at 09:49:08AM +, Steven Hardy wrote:
   On Sun, Jan 18, 2015 at 08:41:46PM +0800, Qiming Teng wrote:
Dear all,
One question we constantly get from Heat users is about the support
status of resource types. Some users are not well informed of this
information so that is something we can improve.

Though some resource types are already labelled with support status,
there are quite a few of them not yet identified. Help is needed to
complete the list.
   
   Identifying any resources which were added in either Juno or Kilo which
   aren't tagged appropriately seems worthwhile, but all other resources
    should exist in any non-EOL version of heat, so is the effort justified in,
   for example, saying supported since grizzly?
  
  Honestly speaking, I don't think we need to backtrace all the way to the
  day when World War II ended. The questions we got are mostly about the
  support status of resource types in Icehouse and Juno. What have been
  added? Which are deprecating/deprecated? So, I tend to agree that saying
  just 'supported since 2013.2' would suffice.
 
 Ok, I'm not clear what there is to do here then, as AFAIK all resources
 added during Juno and Kilo should be tagged already (if they're not, please
 raise a bug).

The question is not about those resource types which have got version
tags. The question is about whether or how to tag all other resource
types. Sorry if I didn't make the question clear.

   A more important omission IMO was that we didn't expose the properties
   schema tags for new properties added to existing resources - I fixed that
   on Friday:
   
   https://review.openstack.org/#/c/147434/1
  
  Documenting the support status is important for sure, but I'm concerned
  that most users are just brave/confident enough to start with trying the
  command line. They don't even know there are docs. They start with
  simple templates and then experiment with each resource type they feel
  interested in.
 
 I find this comment a little confusing given that the whole reason for this
 thread is documenting support status ;)

Right, documenting support status is important. The only difference I
see is in the 'how'. The reason we provide the resource-type-show and
resource-type-template commands is to help users get
themselves familiarized with the resource types. My understanding is
that command-line help has its advantages. Maybe I am misunderstanding
something here.
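(For example, something along the lines of the following, where the exact
output format depends on the client version:)

heat resource-type-show OS::Heat::WaitCondition
heat resource-type-template OS::Heat::WaitCondition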

 That said, if users don't know there are docs, that's a serious problem, we
 should add links somewhere obvious, like heat-templates in a README, if
 that's where their simple templates are coming from, or maybe even add it
 to the error response they see when we reject a template due to an unknown
 resource type.

Well, at one extreme we are expecting users to read all docs (including
READMEs) before using the tool; at the other we are encouraging
'trial-and-error' exercises. One example that comes to my mind is the README
file we placed there for users to understand how to build an image to
use softwareconfig/softwaredeployment. It is a good document that people
always neglect.

Regards,
  Qiming

 Steve
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] Bug fix day Jan 26

2015-01-19 Thread Sergey Lukjanov
Hi Sahara folks,

we'll have a bug fixing day at Jan 26 starting approximately at 14:00 UTC.
Let's meet in the #openstack-sahara.

Thanks.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] multiple cinder backend with emc vnx and NFS backend

2015-01-19 Thread John Griffith
On Sun, Jan 18, 2015 at 11:41 PM, Amit Das amit@cloudbyte.com wrote:
 Hi John,


 Otherwise you can move to multibackend but you will need to update the
 hosts column on your existing volumes.


 For the above statement, did you mean a unique backend on separate volume
 nodes?

 Will there be any issues if enabled_backends is used with each backend
 tied to a particular volume type? This configuration is repeated on all
 volume nodes. Do we need to be concerned about the host entry?


 Regards,
 Amit
 CloudByte Inc.

 On Mon, Jan 19, 2015 at 4:14 AM, John Griffith john.griff...@solidfire.com
 wrote:


 On Jan 16, 2015 9:03 PM, mad Engineer themadengin...@gmail.com wrote:
 
   Hello All,
   I am working on integrating VNX with cinder. I have a plan
   to add another NFS storage backend in the future, without removing VNX.
  
   Can I add another backend while the first backend is running, without
   causing problems for existing volumes?
   I heard that multiple backends are supported.
  
   Thanks for any help
 
  ___
  Mailing list:
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
  Post to : openstack@lists.openstack.org
  Unsubscribe :
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

 So as long as you used the enabled_backends format in your existing
 config, you should be able to just add another backend without impacting
 your existing setup (I've never tried this with NFS/VNX myself though).

 If you're not using the enabled_backends directive you can deploy a new
 cinder-volume node and just add your new driver that way.

 Otherwise you can move to multibackend but you will need to update the
 hosts column on your existing volumes.


 ___
 Mailing list:
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openstack@lists.openstack.org
 Unsubscribe :
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack



Hi Amit,

My point was that the way multi-backend works is by the addition of
the enabled_backends parameter in the cinder.conf file, along with a
driver section:

enabled_backends = lvm1,lvm2
[lvm1]
driver settings
[lvm2]
driver settings

This will cause your host entry to be of the form:
cinder-vol-node-name@backend-name

In this scenario you can simply add another entry to enabled_backends
and its corresponding driver info section.

If you do NOT have a multi-backend setup, your host entry will just be
cinder-vol-node-name, and it's a bit more difficult to convert to
multi-backend.  You have two options:
1. Just deploy another cinder-volume node (skip multi-backend)
2. Convert existing setup to multi-backend (this will require
modification/update of the host entry of your existing volumes)
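
To tie each backend to a particular volume type (the scenario Amit describes),
the usual pattern is roughly the following - names here are purely illustrative:

cinder type-create vnx-gold
cinder type-key vnx-gold set volume_backend_name=VNX_1

# and the matching backend section in cinder.conf carries the same name:
[vnx1]
volume_backend_name = VNX_1
... driver settings ...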

This all might be a bit more clear if you try it yourself in a
devstack deployment.  Give us a shout on IRC at openstack-cinder if
you get hung up.

Thanks,
John

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [Keystone][Horizon] User self registration and management

2015-01-19 Thread Enrique Garcia
Hi everyone,

Enrique, if you have a github repo or some project pages you can point
 me to that would be wonderful. I'm currently in the very early stages of
 our proof of concept/prototype, so it would be great to see some work
 others have done to solve similar issues. If I can find something that
 works for a few of our use cases it might be a better starting point or
 good to see what an approach others might find useful is.
 I'd much rather not duplicate work, nor build something only useful for
 our use cases, so collaboration towards a community variant would be ideal.


​Adrian, first of all we are currently working in this functionality so we
don't have a final version yet; that's why we are also interested in
joining efforts and collaborating on a community variant. Anyway,
our first prototype was to do it all in Horizon, implementing a django app
similar to what you can find in django-registration
https://django-registration.readthedocs.org/en/latest/. Currently
I am working on moving all the backend logic to a keystone extension and
keeping the views and form handling in a django app, to make something
similar to the current authentication system
https://github.com/openstack/django_openstack_auth.

You can check our current keystone extension here, if that helps you:
https://github.com/ging/keystone/tree/registration/keystone/contrib/user_registration

Getting into the details, we went for a slightly different approach to the
one you propose. Our idea is to have a service in keystone that exposes an
API to register and activate users, as well as other common functionality
like password reset, etc. This API is admin only, so Horizon (or whoever
wants to register users) needs to have its own admin credentials to use it.
If I understand correctly, what you suggest is that the service would be the
one holding the credentials, so we differ here. I see some benefits and
disadvantages in both approaches; we can discuss them if you want.

Secondly, the way we handle temporary user data is by setting the enabled
attribute to False until the user gets activated using a key provided during
registration. In other words, our extension is a 'delayed user-create API
for admins' with some extra functionality like password reset. What do you
think about this approach? How do you plan to store this temporary data?

It would be great if you can provide any feedback on all of this, like how
well do you think it integrates with the current ecosystem and how would
you do things differently.

David, is this approach somewhat redundant with the federated Keystone code
you are working on? I feel like they address different use cases but I
might be looking at it the wrong way.

​regards,
Enrique Garcia Navalon



On 16 January 2015 at 15:12, David Chadwick d.w.chadw...@kent.ac.uk wrote:

 The VO code exists already, as does a public demo (see my original email
 for details). I gave demos to the Keystone core in Paris last November.
 How soon this gets incorporated into core depends upon public/user
 demand. So far, it seems that few people have recognised the value of
 this service, probably because they are not using federated Keystone
 seriously. Once they do, I believe that the need for user self
 registration to roles and privileges will become immediately apparent

 regards

 David

 On 15/01/2015 23:28, Adrian Turjak wrote:
  Typo fix, see below.
 
  On 16/01/15 12:26, Adrian Turjak wrote:
  Hello David,
 
  We are definitely assessing the option, although even switching Keystone
  to be backed by an LDAP service might also work, and not be a switch to
  a fully federated system. I believe Keystone has had LDAP support since
  Havana, and that was an option we had looked at. It also might be a
  terrible option, we don't know yet, and would still mean dealing with
  LDAP directly anyway for user management.
 
  What seems weird with a federated Keystone system, though, is that
  you still have to store role and group information in Keystone, so that
  has to be propagated through somehow, or still be automated in some way
  via an external service.
 
  We don't at present have any concrete plans, and until now a pressing
  need to do so hasn't come up, although there were always plans to.
 
  As for the VO roles blueprint, how likely is that to be merged into
  master, and are there any implementations of it? I assume the VO
  entities mentioned in the proposal would be in Keystone, yes?
 
  We need a solution in the next few months (ideally sooner), but while
  building a quick hack of a service alongside Keystone could likely be done
  quickly, that wouldn't be a good long-term solution.
 
 
  Cheers,
  -Adrian
 
  On 15/01/15 21:20, David Chadwick wrote:
  Hi Adrian
 
  Morgan is right in saying that an external IdP would solve many of your
  user management problems, but then of course, you will be using
  federated keystone, which you seem reluctant to do :-) However, if you
  have an IdP under your full 

[openstack-dev] [sahara] Bug triage day - today, Jan 19

2015-01-19 Thread Sergey Lukjanov
Hey sahara folks,

today (Jan 19) is the bug triage day, starting approximately from 14:00
UTC. Let's meet in the #openstack-sahara channel.

Thanks.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Nominating Brad Topol for Keystone-Spec core

2015-01-19 Thread Dolph Mathews
+1

On Sun, Jan 18, 2015 at 1:11 PM, Morgan Fainberg morgan.fainb...@gmail.com
wrote:

 Hello all,

 I would like to nominate Brad Topol for Keystone Spec core (core reviewer
 for Keystone specifications and API-Specification only:
 https://git.openstack.org/cgit/openstack/keystone-specs ). Brad has been
 a consistent voice advocating for well defined specifications, use of
 existing standards/technology, and ensuring the UX of all projects under
 the Keystone umbrella continue to improve. Brad brings to the table a
 significant amount of insight to the needs of the many types and sizes of
 OpenStack deployments, especially what real-world customers are demanding
 when integrating with the services. Brad is a core contributor on pycadf
 (also under the Keystone umbrella) and has consistently contributed code
 and reviews to the Keystone projects since the Grizzly release.

 Please vote with +1/-1 on adding Brad as core to the Keystone Spec
 repo. Voting will remain open until Friday Jan 23.

 Cheers,
 Morgan Fainberg


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Nominating Brad Topol for Keystone-Spec core

2015-01-19 Thread Brant Knudson
+1

- Brant

On Sun, Jan 18, 2015 at 1:11 PM, Morgan Fainberg morgan.fainb...@gmail.com
wrote:

 Hello all,

 I would like to nominate Brad Topol for Keystone Spec core (core reviewer
 for Keystone specifications and API-Specification only:
 https://git.openstack.org/cgit/openstack/keystone-specs ). Brad has been
 a consistent voice advocating for well defined specifications, use of
 existing standards/technology, and ensuring the UX of all projects under
 the Keystone umbrella continue to improve. Brad brings to the table a
 significant amount of insight to the needs of the many types and sizes of
 OpenStack deployments, especially what real-world customers are demanding
 when integrating with the services. Brad is a core contributor on pycadf
 (also under the Keystone umbrella) and has consistently contributed code
 and reviews to the Keystone projects since the Grizzly release.

 Please vote with +1/-1 on adding Brad as core to the Keystone Spec
 repo. Voting will remain open until Friday Jan 23.

 Cheers,
 Morgan Fainberg


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] OpenStack Rating-Charging-Billing first micro-service public release

2015-01-19 Thread Piyush Harsh
Dear All,

It is my pleasure to announce that the ICCLab at Zurich University of
Applied Sciences is releasing the first micro-service (UDR - Usage
Data Record) of its rating-charging-billing platform for cloud
providers under the Apache 2.0 License.

The salient features of this (UDR micro-service) release are -
  * Usage data records generation using OpenStack Telemetry
  * Persistence in time-series database Influxdb
  * Usage visualization with Grafana
  * Application self-monitoring and alerting through Sensu and Uchiwa
  * RESTful APIs to send in external non-openstack usage data (think:
from PaaS and SaaS platforms)

Very soon, we will add proper authentication and authorization using
OAuth protocol support in OpenAM. Support for other popular
authorization platforms will be added.

Other micro-services will provide support for dynamic rating,
support for complex rules through a rule engine, adaptive billing, etc.

For complete details of this release and timelines for other
micro-services, please see http://icclab.github.io/cyclops/

We at ICCLab are looking for community contributors for this project;
if you are interested, please contact Srikanta Patanjali (p...@zhaw.ch)
or Piyush Harsh (h...@zhaw.ch).

Constructive comments are most welcome.

Kind regards,
Piyush.

___
Dr. Piyush Harsh, Ph.D.
Researcher, InIT Cloud Computing Lab
Zurich University of Applied Sciences (ZHAW)
[Site] http://piyush-harsh.info
[Research Lab] blog.zhaw.ch/icclab
Fax: +41(0)58.935.7403 GPG Keyid: 9C5A8838

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] HEAT

2015-01-19 Thread Steven Hardy
On Sun, Jan 18, 2015 at 03:11:04PM +, Jesus arteche wrote:
0.2.10

This is the version of python-heatclient; you need to check the version of
the heat service, which will be e.g. Icehouse (2014.1) or Juno (2014.2).

The OS::Heat::WaitCondition resource you're trying to use was only added in
Juno, as documented here:

http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::WaitCondition

Available since 2014.2 (Juno).

This means it's likely you're using some older version of the heat service
which doesn't contain the resource implementation.

You can check with heat resource-type-list | grep Wait - if you don't see
OS::Heat::WaitCondition there, your stack create will fail with an error
such as the one you are seeing.
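
For example (output abridged and illustrative; the exact set of rows depends on
your release):

heat resource-type-list | grep Wait
| AWS::CloudFormation::WaitCondition       |
| AWS::CloudFormation::WaitConditionHandle |
| OS::Heat::WaitCondition                  |   <- only listed on Juno (2014.2) or newer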

Steve

On Sun, Jan 18, 2015 at 3:37 AM, Qiming Teng teng...@linux.vnet.ibm.com
wrote:
 
  On Fri, Jan 16, 2015 at 11:53:02AM +, Jesus arteche wrote:
   hey guys,
  
   I'm new with heat...I managed to do a deployment from a simple
  template...
   now I'm trying this example :
  
  
 http://git.openstack.org/cgit/openstack/heat-templates/plain/hot/Windows/IIS_Drupal/IIS_Drupal.yaml
  
  
   But I'm getting this error:
  
   HTTP/1.1 400 Bad Request
   date: Fri, 16 Jan 2015 11:47:54 GMT
   content-length: 300
   content-type: application/json; charset=UTF-8
  
   {explanation: The server could not comply with the request since it
  is
   either malformed or otherwise incorrect., code: 400, error:
   {message: HT-EA96E26 HT-5E6421D Unknown resource Type :
   OS::Heat::WaitCondition, traceback: null, type:
   StackValidationFailed}, title: Bad Request}
 
  400 looks like a client side error, not a bug from server side.
  The message says that resource type OS::Heat::WaitCondition is not
  recognized and the template validation failed. So what is your Heat
  deployment version?
   DEBUG (http:121)
   HTTP/1.1 400 Bad Request
   date: Fri, 16 Jan 2015 11:47:54 GMT
   content-length: 300
   content-type: application/json; charset=UTF-8
  
   {explanation: The server could not comply with the request since it
  is
   either malformed or otherwise incorrect., code: 400, error:
   {message: HT-EA96E26 HT-5E6421D Unknown resource Type :
   OS::Heat::WaitCondition, traceback: null, type:
   StackValidationFailed}, title: Bad Request}
  
  
   Any idea?
 
   ___
   Mailing list:
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
   Post to : openstack@lists.openstack.org
   Unsubscribe :
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 
  ___
  Mailing list:
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
  Post to : openstack@lists.openstack.org
  Unsubscribe :
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

 ___
 Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openstack@lists.openstack.org
 Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


-- 
Steve Hardy
Red Hat Engineering, Cloud

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [Heat] Support status of Heat resource types

2015-01-19 Thread Steven Hardy
On Sun, Jan 18, 2015 at 08:41:46PM +0800, Qiming Teng wrote:
 Dear all,
 One question we constantly get from Heat users is about the support
 status of resource types. Some users are not well informed of this
 information so that is something we can improve.
 
 Though some resource types are already labelled with support status,
 there are quite a few of them not yet identified. Help is needed to
 complete the list.

Identifying any resources which were added in either Juno or Kilo which
aren't tagged appropriately seems worthwhile, but all other resources
should exist in any non-EOL version of heat, so is the effort justified in,
for example, saying supported since grizzly?

A more important omission IMO was that we didn't expose the properties
schema tags for new properties added to existing resources - I fixed that
on Friday:

https://review.openstack.org/#/c/147434/1

Can you clarify the nature of the questions you're constantly getting from
users, so we can ensure we're fixing the right thing to solve their
problem?

Steve

 
 +--+++
 | name | support_status | since  |
 +--+++
 | AWS::AutoScaling::AutoScalingGroup   || 2014.1 |
 | AWS::AutoScaling::LaunchConfiguration|||
 | AWS::AutoScaling::ScalingPolicy  |||
 | AWS::CloudFormation::Stack   |||
 | AWS::CloudFormation::WaitCondition   || 2014.1 |
 | AWS::CloudFormation::WaitConditionHandle || 2014.1 |
 | AWS::CloudWatch::Alarm   |||
 | AWS::EC2::EIP|||
 | AWS::EC2::EIPAssociation |||
 | AWS::EC2::Instance   |||
 | AWS::EC2::InternetGateway|||
 | AWS::EC2::NetworkInterface   |||
 | AWS::EC2::RouteTable || 2014.1 |
 | AWS::EC2::SecurityGroup  |||
 | AWS::EC2::Subnet |||
 | AWS::EC2::SubnetRouteTableAssociation|||
 | AWS::EC2::VPC|||
 | AWS::EC2::VPCGatewayAttachment   |||
 | AWS::EC2::Volume |||
 | AWS::EC2::VolumeAttachment   |||
 | AWS::ElasticLoadBalancing::LoadBalancer  |||
 | AWS::IAM::AccessKey  |||
 | AWS::IAM::User   |||
 | AWS::RDS::DBInstance |||
 | AWS::S3::Bucket  |||
 | My::TestResource |||
 | OS::Ceilometer::Alarm|||
 | OS::Ceilometer::CombinationAlarm || 2014.1 |
 | OS::Cinder::Volume   |||
 | OS::Cinder::VolumeAttachment |||
 | OS::Glance::Image|| 2014.2 |
 | OS::Heat::AccessPolicy   |||
 | OS::Heat::AutoScalingGroup   || 2014.1 |
 | OS::Heat::CloudConfig|| 2014.1 |
 | OS::Heat::HARestarter| DEPRECATED ||
 | OS::Heat::InstanceGroup  |||
 | OS::Heat::MultipartMime  || 2014.1 |
 | OS::Heat::RandomString   || 2014.1 |
 | OS::Heat::ResourceGroup  || 2014.1 |
 | OS::Heat::ScalingPolicy  |||
 | OS::Heat::SoftwareComponent  || 2014.2 |
 | OS::Heat::SoftwareConfig || 2014.1 |
 | OS::Heat::SoftwareDeployment || 2014.1 |
 | OS::Heat::SoftwareDeployments|| 2014.2 |
 | OS::Heat::Stack  |||
 | OS::Heat::StructuredConfig   || 2014.1 |
 | OS::Heat::StructuredDeployment   || 2014.1 |
 | OS::Heat::StructuredDeployments  || 2014.2 |
 | OS::Heat::SwiftSignal|| 2014.2 |
 | OS::Heat::SwiftSignalHandle  || 2014.2 |
 | OS::Heat::UpdateWaitConditionHandle  || 2014.1 |
 | OS::Heat::WaitCondition  || 

Re: [openstack-dev] [Neutron] Project Idea: IDS integration.

2015-01-19 Thread Miguel Ángel Ajo
Hi Mario,

   Salvatore and Kevin perfectly expressed what I think.

   I'd follow their advice, and look at how the advanced services [1] [2]
integrate with neutron, and build a POC. If the POC looks good it could be a
good starting point to build a community around and go on with the development.

[1] https://github.com/openstack/neutron-lbaas
[2] https://github.com/openstack/neutron-fwaas


Miguel Ángel Ajo


On Sunday, 18 de January de 2015 at 13:42, Salvatore Orlando wrote:

 Hello Mario,
  
 IDS surely is an interesting topic for OpenStack integration. I think there 
  might be users out there who could be interested in having this capability 
 in OpenStack networks.
 As Kevin said, we are moving towards a model where it becomes easier for 
 developers to add such capabilities in the form of service plugins - you 
 should be able to develop everything you need in a separate repository and 
 still integrate it with Neutron.
  
 According to what you wrote you have just a bit more than 100 hours to spend 
 on this project. What can be achieved in this timeframe really depends on 
 one's skills, but I believe it could be enough to provide some sort of 
 Proof-of-Concept. However, this time won't be enough at all if you also aim 
 to seek feedback on your proposal, build a consensus and a developer 
 community around it. Unsurprisingly these aspects, albeit not technically 
 challenging, take an awful lot more time than coding!
  
  Therefore the only advice I have here is that you should focus on achieving 
  your real goal, which is to graduate with the highest possible marks! Then, if 
  there is something in your thesis for the OpenStack community to gain, 
  that would be awesome. With a PoC implementation and perhaps some time on 
  your hands, you will then be able to work with the community to transform your 
  master's project into an OpenStack project and avoid it becoming a bit-rotting, 
  shelved piece of code.
  
 Salvatore
  
 On 18 January 2015 at 10:45, Kevin Benton blak...@gmail.com 
 (mailto:blak...@gmail.com) wrote:
  Hi Mario,
   
  There is currently a large backlog of network-related features that many 
  people want to develop for Neutron. The model of adding them all to the 
  main neutron codebase has failed to keep up. Due to this, all of the 
  advanced services (LBaaS, FWaaS, etc) are being separated into their own 
  repositories. The main Neutron repo will only be for establishing L2/L3 
  connectivity and providing a framework for other networking services to 
  build on. You can read more about it in the advanced services split 
  blueprint.[1]
   
  Based on what you've described, it sounds like you would be developing an 
  IDS service plugin with a driver/plugin framework for different vendors. 
  For an initial proof of concept, you could do it in github to get started 
  quickly or you can also request a new stackforge repo for it. The benefit 
  of stackforge is that you get the OpenStack testing infrastructure and 
  integration with its gerrit system so other OpenStack developers don't have 
  to switch code review workflows to contribute.
   
  To gauge interest, I would try emailing the OpenStack users list. It 
  doesn't matter if developers are interested if nobody ever wants to 
  actually try it out.  
   
  1. https://blueprints.launchpad.net/neutron/+spec/services-split
   
  Cheers,
  Kevin Benton
   
   
  On Fri, Jan 16, 2015 at 2:32 PM, Mario Tejedor González 
  m.tejedor-gonza...@mycit.ie (mailto:m.tejedor-gonza...@mycit.ie) wrote:
   Hello, Neutron developers.

   My name is Mario and I am a Masters student in Networking and Security.

   I am considering the possibility of integrating IDS technology to Neutron 
   as part of my Masters project.
   As there are many flavors of open ID[P]S out there and those might follow 
   different philosophies, my approach would be developing a Neutron plugin 
   that might cover IDS integration as a service and also a driver (or more, 
   depending on time constraints) to cover the specifics of an IDS. 
   Following the nature of Neutron and OpenStack projects these drivers 
   would be developed for Free and Open Software IDSs and the plugin would 
   be as vendor-agnostic as possible. In order to achieve that the plugin 
   would have to deal with the need for logging and alerting.

   The time window I have for the development of this project goes from 
   February to the end of June and I would be able to allocate around 5h a 
   week to it.

   Now, I would like to know your opinion on this idea, given that you know 
   the project inside out and you are the ones making it happen day after 
   day.
    Do you think there is value in bringing that functionality into 
    the Neutron project (as a plugin)? I'd prefer to do something that 
    contributes to it rather than a one-shot piece of software that will be 
   stored on a shelf.  

   I'd like to know if you think that what I am proposing is 

Re: [openstack-dev] [heat] Remove deprecation properties

2015-01-19 Thread Sergey Kraynev
On 19 January 2015 at 05:13, Angus Salkeld asalk...@mirantis.com wrote:

 On Fri, Jan 16, 2015 at 11:10 PM, Sergey Kraynev skray...@mirantis.com
 wrote:

 Steve, Thanks for the feedback.

 On 16 January 2015 at 15:09, Steven Hardy sha...@redhat.com wrote:

 On Thu, Dec 25, 2014 at 01:52:43PM +0400, Sergey Kraynev wrote:
 Hi all.
 Recently we have got several patches on review which remove old
 deprecated properties [1], and one of mine [2].
 The aim is to delete deprecated code and redundant tests. It looks simple,
 but the main problem we met is backward compatibility.
 E.g. a user has created a resource (FIP) with the old property schema, i.e.
 using SUBNET_ID instead of SUBNET. At first glance nothing bad will happen,
 because:

 FWIW I think it's too soon to remove the Neutron subnet_id/network_id
 properties, they were only deprecated in Juno [1], and it's evident that
 users are still using them on icehouse [2]

 I thought the normal deprecation cycle was at least two releases, but I
 can't recall where I read that.  Considering the overhead of maintaining
 these is small, I'd favour leaning towards giving more time for users to
 update their templates, rather than breaking them via very aggressive
 removal of deprecated interfaces.


 Honestly, I thought that we use a one-release cycle, but I have no
 objections to doing it after two releases.
 I will be glad to know what the desired deprecation period is.



 I'd suggest some or all of the following:

  - Add a "planned for removal in $release" note to the SupportStatus string
    associated with the deprecation, so we can document the planned removal.
  - Wait for at least two releases between deprecation and removal, and
    announce the interfaces which will be removed in the release notes for
    the release before removal, e.g.:
    - Deprecated in Juno
    - Announce planned removal in Kilo release notes
    - Remove in L (the release after Kilo)


 I like this idea; IMO it will make our deprecation process clearer.





 [1] https://review.openstack.org/#/c/82853/
 [2]
 http://lists.openstack.org/pipermail/openstack/2015-January/011156.html

 1. handle_delete uses resource_id, and any changes in the property schema
 do not affect other actions.
 2. If a user wants to use an old template, he will get an adequate error
 message that this property is not present in the schema. After that he
 just should switch to the new property and update the stack using it.
 At the same time we have one big issue with shadow dependencies, which is
 relevant for neutron resources. The simplest approach will not work
 [3], because the old properties were deleted from the property schema.
 Why is it bad?
 - we will get back all the bugs related to such dependencies.
 - how to reproduce:
     - create a stack with the old property (my template [4])
     - open horizon and look at the topology
     - download patch [2] and restart the engine
     - reload the horizon page with the topology
     - as a result it will be different

 I have some ideas about how to solve this, but none of them is good
 enough for me:
 - getting such information from self.properties.data is bad, because we
 will skip all validations mentioned in properties.__getitem__
 - renaming the old key in data to the new one, or creating a copy with the
 new key, is not correct for me, because in this case we actually change the
 properties (resource representation) invisibly to the user.
 - possibly we may leave the old deprecated property and mark it as something
 like (removed), which would behave similarly to implemented=False. I do not
 like it, because it means that we never remove this support code, because we
 want to stay compatible with old resources. (The user may not be too lazy to
 do a simple update or something else ...)
 - the last way, which I have not tried yet, is using _stored_properties_data
 to extract the necessary information.
 So now I have the questions:
 Should we support such a case with backward compatibility?
 If yes, how is it best to do it for us and for the user?
 Maybe we should create some strategy for removing deprecated properties?

 Yeah, other than the process issues I mentioned above, Angus has pointed
 out some technical challenges which may mean property removal breaks
 existing stacks.  IMHO this is something we *cannot* do - folks must be
 able to upgrade heat over multiple versions without breaking their
 stacks.

 As you say, delete may work, but it's likely several scenarios around
 update will break if the stored stack definition doesn't match the schema
 of the resource, and maintaining the internal references to removed or
 obsolete properties doesn't seem like a good plan long term.

 Could we provide some sort of migration tool, which re-writes the
 definition of existing stacks (via a special patch stack update maybe?)
 

Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI

2015-01-19 Thread Eduard Matei
Hi Ramy,
That didn't fix it, zuul-server still gets stuck Looking for lost builds,
but zuul user can read gerrit event-stream.

Any other ideas?

Thanks,
Eduard

On Mon, Jan 19, 2015 at 9:09 AM, Eduard Matei 
eduard.ma...@cloudfounders.com wrote:

 Hi Ramy, indeed user zuul could not read the event-stream (permission
 denied).
 The question is then how it could start zuul-server and read some events?
 Anyway, I copied over .ssh from user jenkins, and now user zuul can run
 that command.
 I restarted zuul-server and will keep an eye on it.

 Thanks,
 Eduard

 On Fri, Jan 16, 2015 at 8:32 PM, Asselin, Ramy ramy.asse...@hp.com
 wrote:

  Hi Eduard,



 Looking at the zuul code, it seems that it is just a periodic task:
 https://github.com/openstack-infra/zuul/blob/master/zuul/launcher/gearman.py#L50



 So the issue is not likely those log messages, but rather the lack of
 other log messages.

 It seems somehow zuul lost its connection to the gerrit event stream… those are
 the obvious log messages that are missing.

 And without that, no jobs will trigger a run, so I’d look there.



 Zuul Manual is here: http://ci.openstack.org/zuul/

 Zuul conf files is documented here:
 http://ci.openstack.org/zuul/zuul.html#zuul-conf

 And the gerrit configurations are here:
 http://ci.openstack.org/zuul/zuul.html#gerrit



 Double check you can manually read the event stream as the zuul user
 (sudo su - zuul) using those settings and this step:

 http://ci.openstack.org/third_party.html#reading-the-event-stream
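
If it helps to script that check, a rough sketch along these lines should
work (the account name is a placeholder and the host/port are the usual
upstream Gerrit values; adjust them to whatever your zuul.conf actually uses):

# Hypothetical helper: returns True if the given account can read the
# Gerrit event stream over SSH.
import subprocess

def can_read_event_stream(user, host='review.openstack.org', port=29418):
    proc = subprocess.Popen(
        ['ssh', '-p', str(port), '%s@%s' % (user, host),
         'gerrit', 'stream-events'],
        stdout=subprocess.PIPE)
    try:
        # Blocks until the first event arrives; any output means access works.
        return bool(proc.stdout.readline())
    finally:
        proc.terminate()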



 Ramy









 *From:* Eduard Matei [mailto:eduard.ma...@cloudfounders.com]
 *Sent:* Friday, January 16, 2015 6:57 AM

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need
 help setting up CI



 Hi Punith,

 That's the whole log :) Not a lot happening after restart, just default
 initialization.

 Zuul-merger is not restarted.

 Layout.yaml is default.

 Gearman plugin tested in Jenkins, reports success.



 I have now disabled the restart job to see how long it will keep 'Looking
 for lost builds'.



 Have a nice weekend,



 Eduard



 On Fri, Jan 16, 2015 at 1:15 PM, Punith S punit...@cloudbyte.com wrote:

   Hi eduard,



 can you post the whole zuul.log or debug.log after the zuul and
 zuul-merger restart along with your layout.yaml

 did you test the connection of the gearman plugin in jenkins?



 thanks



 On Fri, Jan 16, 2015 at 4:20 PM, Eduard Matei 
 eduard.ma...@cloudfounders.com wrote:

  Hi Ramy,

 Still couldn't get my custom code to execute between installing devstack
 and starting tests... I'll try with some custom scripts and skip devstack-*
 scripts.



 Meanwhile I see another issue:

 2015-01-16 11:02:26,283 DEBUG zuul.IndependentPipelineManager: Finished
 queue processor: patch (changed: False)

 2015-01-16 11:02:26,283 DEBUG zuul.Scheduler: Run handler sleeping

 2015-01-16 11:06:06,873 DEBUG zuul.Gearman: Looking for lost builds

 2015-01-16 11:11:06,873 DEBUG zuul.Gearman: Looking for lost builds

 2015-01-16 11:16:06,874 DEBUG zuul.Gearman: Looking for lost builds

 2015-01-16 11:21:06,874 DEBUG zuul.Gearman: Looking for lost builds

 2015-01-16 11:26:06,875 DEBUG zuul.Gearman: Looking for lost builds

 2015-01-16 11:31:06,875 DEBUG zuul.Gearman: Looking for lost builds

 2015-01-16 11:36:06,876 DEBUG zuul.Gearman: Looking for lost builds

 2015-01-16 11:41:06,876 DEBUG zuul.Gearman: Looking for lost builds

 2015-01-16 11:46:06,877 DEBUG zuul.Gearman: Looking for lost builds



 Zuul is stuck in Looking for lost builds and it misses comments so it
 doesn't trigger jobs on patches.

 Any idea how to fix this? (other than restart it every 30 mins, in which
 case it misses the results of running jobs so it doesn't post the results).



 Thanks,

 Eduard



 On Fri, Jan 16, 2015 at 1:43 AM, Asselin, Ramy ramy.asse...@hp.com
 wrote:

  Hi Eduard,



 Glad you’re making progress.



 $BASE/new/devstack/ is available at the time pre_test_hook is called, so
 you should be able to make all the changes you need there.



 The sample shows how to configure the driver using local.conf devstack
 hooks.

 See here for more details: [1] [2]



 Regarding test, you can do both.

 Cinder requires you run tempest.api.volume[3]



 And you can setup a 2nd job that runs your internal functional tests as
 well.



 Ramy



 [1] http://docs.openstack.org/developer/devstack/configuration.html

 [2] http://docs.openstack.org/developer/devstack/plugins.html

 [3]
 https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#Third_Party_CI_Requirements











 *From:* Eduard Matei [mailto:eduard.ma...@cloudfounders.com]
 *Sent:* Thursday, January 15, 2015 4:57 AM


 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need
 help setting up CI



 Hi Ramy,



 The issue with disconnect/abort no longer happens, so I guess it was some
 issue with networking.



 

Re: [Openstack] multiple cinder backend with emc vnx and NFS backend

2015-01-19 Thread Amit Das
Thanks John for the detailed information.

I will do the experiments and report the same.
On 19 Jan 2015 21:39, John Griffith john.griff...@solidfire.com wrote:

 On Sun, Jan 18, 2015 at 11:41 PM, Amit Das amit@cloudbyte.com wrote:
  Hi John,
 
 
  Otherwise you can move to multibackend but you will need to update the
  hosts column on your existing volumes.
 
 
  For above statement, did you mean a unique backend on separate volume
 nodes
  ?
 
  Will there be any issues, if the enabled_backends are used with each
 backend
  tied to particular volume type. Now this configuration is repeated for
 all
  volume nodes. Do we need to be concerned about the host entry ?
 
 
  Regards,
  Amit
  CloudByte Inc.
 
  On Mon, Jan 19, 2015 at 4:14 AM, John Griffith 
 john.griff...@solidfire.com
  wrote:
 
 
  On Jan 16, 2015 9:03 PM, mad Engineer themadengin...@gmail.com
 wrote:
  
   Hello All,
  I am working on integrating VNX with cinder; I have a plan
    to add another NFS storage backend in the future, without removing VNX.

    Can I add another backend while the first backend is running, without
    causing problems for running volumes?
    I heard that multiple backends are supported,
  
   thanks for any help
  
   ___
   Mailing list:
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
   Post to : openstack@lists.openstack.org
   Unsubscribe :
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 
  So as long as you used the enabled backend format in your existing
  config you should be able to just add another backend without
 impacting
  your existing setup (I've never tried this with NFS/VNX myself though).
 
  If you're not using the enabled backends directive you can deploy a new
  cinder - volume node and just add your new driver that way.
 
  Otherwise you can move to multibackend but you will need to update the
  hosts column on your existing volumes.
 
 
  ___
  Mailing list:
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
  Post to : openstack@lists.openstack.org
  Unsubscribe :
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 
 

 Hi Amit,

 My point was that the way multi-backend works is by the addition of
 the enabled_backends parameter in the cinder.conf file, along with a
 driver section:

 enabled_backends = lvm1,lvm2
 [lvm1]
 driver settings
 [lvm2]
 driver settings

 This will cause your host entry to be of the form:
 cinder-vol-node-name@backend-name

 In this scenario you can simply add another entry for enabled_backends
 and its corresponding driver info entry.

 If you do NOT have multi backend setup your host entry will just be:
 cinder-vol-node-name and it's a bit more difficult to convert to
 multi-backend.  You have two options:
 1. Just deploy another cinder-volume node (skip multi-backend)
 2. Convert existing setup to multi-backend (this will require
 modification/update of the host entry of your existing volumes)

 This all might be a bit more clear if you try it yourself in a
 devstack deployment.  Give us a shout on IRC at openstack-cinder if
 you get hung up.

 Thanks,
 John

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[openstack-dev] [mistral] Team meeting minutes/log - 01/19/2015

2015-01-19 Thread Renat Akhmerov
Thanks for joining our meeting today!

Meeting minutes: 
http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-01-19-16.00.html
 
http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-01-19-16.00.html
 
Full log: 
http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-01-19-16.00.log.html
 
http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-01-19-16.00.log.html

The next meeting is scheduled on Jan 26.

Renat Akhmerov
@ Mirantis Inc.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] open contrail for nova-network

2015-01-19 Thread Daneyon Hansen (danehans)
All,

I came across this open contrail BP for nova-network:

https://blueprints.launchpad.net/nova/+spec/opencontrail-nova-vif-driver-plugin

I know we have been doing great things in Neutron. I also understand many 
operators are still using nova-network. Any thoughts on contributing to 
nova-network while we and the rest of the community bring Neutron up-to-speed? 
It would be unfortunate to see Juniper develop key relationships with operators 
through their nova-network development efforts.

Regards,
Daneyon Hansen
Software Engineer
Email: daneh...@cisco.com
Phone: 303-718-0400
http://about.me/daneyon_hansen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] upstream f21 devstack test

2015-01-19 Thread Attila Fazekas
Per request moving this thread to the openstack-dev list.

I was not able to reproduce the issue so far, either on the
vm you pointed me to or in any of my VMs.

Several things I observed on `your` machine:
1. The installed kernel is newer than the one actually in use (no known related issue).
2. On the first tempest run (logs are collected [0]) lp#1353939 was triggered,
   but it is not related.
3. After trying to reproduce the issue many, many times I hit lp#1411525; the
   patch which introduced it is already reverted.
4. Once I saw 'Returning 400 to user: No nw_info cache associated with
   instance', which I haven't seen with nova-network for a long time
   (once in 100 runs).
5. I see a lot of annoying iscsi-related logging. It is also not related to the
   connection issue; IMHO tgtadm can be considered a DEPRECATED thing, and we
   should switch to lioadm.

So far, no log entry related to the connection issue has been found that
would be worth searching for on logstash.

The nova-network log is not sufficient to figure out the actual netfilter state
at any moment.
According to the log it should have updated the chains with something, but who
knows..

With ssh connection issues you can do very little in the way of post-mortem
analysis.
Tempest normally deletes the related resources, so less evidence remains.
If the issue is reproducible, in some cases it is enough to alter the test so
that it does not destroy the evidence,
but very frequently some kind of real debugger is required.

Several suspected things:
* The vm was able to acquire an address via dhcp - successful boot, has L2
connectivity.
* No evidence found for a dead qemu, no special libvirt operation requested 
before failure.
* nnet claims it added the floating ip to the br100
* L3 issue / security group rules ?..

The basic network debug was removed from tempest [1]. I would like to recommend
reverting that change
in order to have at least an idea whether the interfaces and netfilter were in
good shape [1].

I also created a vm with firewalld enabled (normally it is not in my devstack
setups); the 3
mentioned test cases work fine even after running these tests for hours.
However, '/var/log/firewalld' contains COMMAD_FAILURES as on `your` vm.

I will try running more full tempest+nnet@F21 in my env to have more samples
for the success rate.

So far I have reproduced 0 ssh failures,
so I will scan the logs [0] again more carefully on `your` machine;
maybe I missed something, maybe those tests interfered with something less
obvious.

I'll check the other gate f21 logs (~100 jobs/week)
 to see whether anything happened when the issue started and/or whether the
issue still exists.


So, I have nothing useful at the moment, but I have not given up.

[0] 
http://logs.openstack.org/87/139287/14/check/check-tempest-dsvm-f21/5f3d210/console.html.gz
[1] https://review.openstack.org/#/c/140531/


PS.:
F21's HAProxy is more sensitive to services which stop listening,
and it will not be evenly balanced.
For a working F21 neutron job a better listener is required:
https://review.openstack.org/#/c/146039/ .
 


- Original Message -
 From: Ian Wienand iwien...@redhat.com
 To: Attila Fazekas afaze...@redhat.com
 Cc: Alvaro Lopez Ortega aort...@redhat.com, Jeremy Stanley 
 fu...@yuggoth.org, Sean Dague s...@dague.net,
 dean Troyer dtro...@gmail.com
 Sent: Friday, January 16, 2015 5:24:38 AM
 Subject: upstream f21 devstack test
 
 Hi Attila,
 
 I don't know if you've seen, but upstream f21 testing is happening for
 devstack jobs.  As an experimental job I was getting good runs, but in
 the last day and a bit, all runs have started failing.
 
 The failing tests are varied; a small sample I pulled:
 
 [1]
 tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_compute_with_volumes
 [2]
 tempest.scenario.test_snapshot_pattern.TestSnapshotPattern.test_snapshot_pattern[compute,image,network]
 [3]
 tempest.scenario.test_shelve_instance.TestShelveInstance.test_shelve_instance[compute,image,network]
 
 The common thread is that they can't ssh to the cirros instance
 started up.
 
 So far I can not replicate this locally.  I know there were some
 firewalld/neutron issues, but this is not a neutron job.
 
 Unfortunately, I'm about to head out the door on PTO until 2015-01-27.
 I don't like the idea of this being broken while I don't have time to
 look at it, so I'm hoping you can help out.
 
 There is a failing f21 machine on hold at
 
  jenk...@xx.yy.zz.qq
Sanitized.
 
 I've attached a private key that should let you log in.  This
 particular run failed in [4]:
 
  
 tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_compute_with_volumes
  
 tempest.scenario.test_minimum_basic.TestMinimumBasicScenario.test_minimum_basic_scenario[compute,image,network,volume]
 
 Sorry I haven't got very far in debugging this.  Nothing obviously
 jumped out at me in the logs, but I only had a brief look.  I'm hoping
 as the best tempest guy I know you can find some time to take a look
 at this in my absence :)
 
 Thanks,
 
 -i
 
 

[openstack-dev] [Fuel][Fuel-Library] MVP implementation of Granular Deployment merged into Fuel master branch

2015-01-19 Thread Vladimir Kuklin
Hi, Fuelers and Stackers

I am glad to announce that we merged initial support for granular
deployment feature which is described here:

https://blueprints.launchpad.net/fuel/+spec/granular-deployment-based-on-tasks

This is an important milestone for our overall deployment and operations
architecture as well as it is going to significantly improve our testing
and engineering process.

Starting from now we can start merging code for:

https://blueprints.launchpad.net/fuel/+spec/fuel-library-modularization
https://blueprints.launchpad.net/fuel/+spec/fuel-library-modular-testing

We are still working on documentation and QA stuff, but it should be pretty
simple for you to start trying it out. We would really appreciate your
feedback.

Existing issues are the following:

1) pre and post deployment hooks are still out of the scope of the main
deployment graph
2) currently only the puppet task provider works reliably
3) no developer documentation has been published yet
4) acyclic graph testing is not yet injected into CI
5) there is currently no way to execute a particular task - only the
whole deployment (code is being reviewed right now)

-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com http://www.mirantis.ru/
www.mirantis.ru
vkuk...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][oslo] plan till end-of-Kilo

2015-01-19 Thread Ihar Hrachyshka

Hi Kyle/all,

(we were going to walk thru that on Mon, but since US is on vacation 
today, sending it via email to openstack-dev@.)


So I've talked to Doug Hellmann from oslo, and here is what we have in 
our oslo queue to consider:


1. minor oslo.concurrency cleanup for *aas repos (we need to drop 
lockutils-wrapper usage now that base test class sets lockutils fixture);
2. migration to namespace-less oslo libraries (this is blocked by the 
pending oslo.messaging release scheduled this week; I will revive patches 
for all four branches at the end of the week; see the import sketch below) [1];

3. oslo/kilo-2: graduation of oslo.policy;
4. oslo/kilo-3: graduation of oslo.cache, oslo.versionedobjects.
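
For reference, item 2 boils down to switching imports from the old oslo
namespace packages to the new top-level module names, roughly like this
(the modules shown are just common examples):

# Before: namespace-package imports
#   from oslo.config import cfg
#   from oslo.messaging import transport
# After: namespace-less imports
from oslo_config import cfg
from oslo_messaging import transport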

I believe 1. and 2. should be handled on the neutron side in Kilo-2. Part 2 
will introduce some potential friction in the gate due to merge 
conflicts and the new hacking rule applied, so we may want to synchronize it 
with other refactoring activities.


For 3., I'm not sure we want to go with such a change this cycle. On the 
other hand, while that is potentially unsafe, it may free us from later 
patching our local policy module copy due to security issues that could 
be revealed later in the incubator module. Taking into account that we 
claim support for 15 months for all stable branches, and who knows where 
that will lead later, reducing our area of responsibility earlier can be a 
good thing.


For 4., this will definitely need to wait for L. The oslo.cache 
migration can easily go in L-1 (the module is used in a single place 
only - the metadata agent); as for oslo.versionedobjects, this will need to 
follow a proper spec process (we had someone willing to post a spec for 
that, but I don't remember his/her name).


Does the plan sound ok?

[1]: 
https://review.openstack.org/#/q/If0dce29a0980206ace9866112be529436194d47e,n,z


/Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] open contrail for nova-network

2015-01-19 Thread Russell Bryant
On 01/19/2015 12:02 PM, Daneyon Hansen (danehans) wrote:
 All,
 
 I came across this open contrail BP for nova-network:
 
 https://blueprints.launchpad.net/nova/+spec/opencontrail-nova-vif-driver-plugin
 
 I know we have been doing great things in Neutron. I also understand
 many operators are still using nova-network. Any thoughts on
 contributing to nova-network while we and the rest of the community
 bring Neutron up-to-speed? It would be unfortunate to see Juniper
 develop key relationships with operators through their nova-network
 development efforts.

The blueprint is for a VIF driver, which is relevant for use with Neutron.

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Replication on image create

2015-01-19 Thread Nikhil Komawar
Hi Boden,

It would be great to open up a BP/spec on this.

As far as the flexibility of adding scripts to tasks is concerned, there are 2 
points to be considered:

1. Glance will maintain a _published_ list of acceptable tasks. Meaning, it 
would be documented somewhere that a task of type A would work with glance 
and how to _extend_ it as per deployer use-case. Currently, 3 types namely, 
import, export and clone tasks have been approved. Clone could fit into your 
usecase depending on the location where you need the data copied.
2. We are planning to merge TaskFlow executor [0] in k2 (or k3) that will 
enable extending the flows of the tasks.

It would be best if we can discuss this on the spec to keep stuff documented.

[0] https://review.openstack.org/#/c/130076

Thanks,
-Nikhil


From: Boden Russell [boden...@gmail.com]
Sent: Monday, January 19, 2015 9:39 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [glance] Replication on image create

 On 1/15/15 12:59 AM, Flavio Percoco wrote:
 All that being said, it'd be very nice if you could open a spec on
 this topic so we can discuss over the spec review and one of us (or
 you) can implement it if we reach consensus.


I'll create a BP + spec; doing a little homework now...

W / R / T the async task plugins -- are there plans to make the task
definitions more pluggable? Based on what I see right now it appears the
definitions are static[1].

I was expecting to see the list of applicable tasks to be defined via
stevedore entry points or via conf file properties.

e.g.
[entry_points]
glance.async.tasks =
    import = glance.common.scripts.image_import
    replicate = glance.common.scripts.image_repl
    # etc...
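
For illustration, a task script registered that way could then be looked up
through stevedore roughly as follows (a sketch only; the namespace and task
names mirror the hypothetical entry points above and are not an existing
Glance interface):

# Hypothetical lookup of a task script by task type.
from stevedore import driver

def load_task_script(task_type):
    mgr = driver.DriverManager(
        namespace='glance.async.tasks',
        name=task_type,          # e.g. 'import' or 'replicate'
        invoke_on_load=False)
    return mgr.driver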


Perhaps there's already a BP on this topic, but I didn't find one.

Thanks

[1]
https://github.com/openstack/glance/blob/master/glance/domain/__init__.py#L316

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Gantt (Scheduler) meeting agenda

2015-01-19 Thread Ed Leafe
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi all,

I want to make sure that everyone is present and prepared to discuss the
one outstanding spec for Kilo: https://review.openstack.org/#/c/138444/

In the words of Jay Pipes, we are at an impasse: Jay and I prefer an
approach in which the scheduler loads up the information about the
compute nodes when it starts up, and then relies on the compute nodes to
update their status whenever an instance is created/destroyed/resized.
Sylvain prefers instead to have the hosts query that information for
every call to _get_all_host_states(), adding the instance information to
the Host object as an InstanceList attribute. I might be a little off in
my summary of the two positions, but they largely reflect the preference
for solving this issue.

IMO, the former approach is a lot closer to the ideal end result for an
independent scheduler service, whereas the latter is closer to the
current design, and would be less disruptive code-wise. The former *may*
increase the probability of race conditions in which two schedulers
simultaneously try to consume resources on the same host, but there are
several possible ways we can reduce that probability.

So please read up on that spec, and come to the meeting tomorrow
prepared to discuss it.

BTW, the latter approach is very similar to an earlier version of the
spec: https://review.openstack.org/#/c/138444/8/ . We seem to be going
in circles!

- -- 

- -- Ed Leafe
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.14 (GNU/Linux)

iQIcBAEBAgAGBQJUvVoNAAoJEKMgtcocwZqLtIwP/A8MCveYF5/Q1aiEucDCOEns
kkBtTWMFa/0CGWzl4MTHzw6545gdrBxsDPX2nZBnNHQNObTt/Hq6CAIg3gm3EIDE
fTws9OjX7Ihf4E8IhdB1guH6s2eqRf4jkIIfUjnmp1nk+UkZ6q35bI3Emk1Sta1j
qR2NFmvhWzHK3hSTKHqjas30SVydL/QnCMpVnni0mNP/8uXNdGI2fivPSA7a0LE1
0ssMNFa2Us91v7258bXNhK6B5hbeI2PPhK0r19fFUl5CcsYtYShF0HJQLEd3dG8I
+znvqYZDPRPqZrKC0xWzNp/wpMWAV6oyv0fhVSyUkjfjH/vB5wASK2iGogbqmW07
rKiFcb8xSiMoZbydw9SV0Jya3do/+5tiBrjchzxgUQdRfG72nzGfTssbE/tn/aiw
2BS1ihe6ero20+0lBwxOirdEBsOQ6jmn4rcGuVpRr5lwcealkPe0j6YzFIT6Gqla
Cpj4z0exnMMaUtD/9v2wYE2N3BscWcDoJZ/jBQDibsS6Y5R/1JKvkqF/dpLmw+6s
VAMLy+rYJ6Cx8Z+WJ53uPw2sjZdprfj+qSTu9sHoqV9ycz+ID0FfvwyjJCf36NaD
8j+Z8avM0KJb0gabz9jT3/b2Y0S7dbwdsS+rPMkUSyBYdm+VUe0gCn6i09Erh7ED
Cu/WryIeoy+H2oH5UrzO
=hp2w
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] pyCADF 0.7.0 released

2015-01-19 Thread gordon chung
pyCADF is the python implementation of the DMTF Cloud Auditing Data Federation 
Working Group (CADF) specification.  pyCADF 0.7.0 has been tagged and should be 
available on PyPI and our mirror shortly.
this release includes:
* deprecation of audit middleware (replaced by audit middleware in
keystonemiddleware).
* removal of oslo.messaging requirement.
* various requirements and oslo synchronisations.
please report any problems here: https://bugs.launchpad.net/pycadf
cheers,
gord
  __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Next week's meeting is cancelled (and some other notes)

2015-01-19 Thread Sean M. Collins
Thanks Doug!

It is a big link but I'd rather see the full URL than trust opening a
URL shortener link. I've been rickrolled too many times to count. :)

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Nominating Brad Topol for Keystone-Spec core

2015-01-19 Thread Juan Antonio Osorio
+1

On Mon, Jan 19, 2015 at 8:20 PM, Erhan Ekici erhan.ek...@gmail.com wrote:

 +1
 On 18 Jan 2015 21:16, Morgan Fainberg morgan.fainb...@gmail.com wrote:

 Hello all,

 I would like to nominate Brad Topol for Keystone Spec core (core reviewer
 for Keystone specifications and API-Specification only:
 https://git.openstack.org/cgit/openstack/keystone-specs ). Brad has been
 a consistent voice advocating for well defined specifications, use of
 existing standards/technology, and ensuring the UX of all projects under
 the Keystone umbrella continue to improve. Brad brings to the table a
 significant amount of insight to the needs of the many types and sizes of
 OpenStack deployments, especially what real-world customers are demanding
 when integrating with the services. Brad is a core contributor on pycadf
 (also under the Keystone umbrella) and has consistently contributed code
 and reviews to the Keystone projects since the Grizzly release.

 Please vote with +1/-1 on adding Brad to as core to the Keystone Spec
 repo. Voting will remain open until Friday Jan 23.

 Cheers,
 Morgan Fainberg


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Juan Antonio Osorio R.
e-mail: jaosor...@gmail.com

All truly great thoughts are conceived by walking.
- F.N.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [api] Optional Properties in an Entity

2015-01-19 Thread Douglas Mendizabal
Hi API WG,

I’m curious about something that came up during a bug discussion in one of the 
Barbican weekly meetings.  The question is about optional properties in an 
entity.  e.g. We have a Secret entity that has some properties that are 
optional, such as the Secret’s name.  We were split on what the best approach 
for returning the secret representation would be when an optional property is 
not set.

In one camp, some developers would like to see the properties returned no 
matter what.  That is to say, the Secret dictionary would include a key for 
“name” set to null every single time.  i.e.

{
  ...
  "secret": {
    "name": null,
    ...
  }
  ...
}

In the other camp, some developers would like to see optional properties 
omitted if they were not set by the user.

The advantage of always returning the property is that the response is easier 
to parse, since you don’t have to check for the existence of the optional keys. 
 The argument against it is that it makes the API more rigid, and clients more 
fragile.
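
To make the trade-off concrete, here is a minimal sketch of what clients end
up doing in each case (the field values are illustrative, not Barbican's
actual schema):

# Camp 1: the key is always present, possibly null, so direct access works.
always = {"secret": {"name": None, "algorithm": "aes"}}
name = always["secret"]["name"]           # None when unset

# Camp 2: the key may be absent, so the client must supply a default.
omitted = {"secret": {"algorithm": "aes"}}
name = omitted["secret"].get("name")      # None when unset, but only if the
                                          # client remembers to use get()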

I was wondering what the API Working Group’s thoughts are on this?

Thanks,
Douglas Mendizábal


Douglas Mendizábal
IRC: redrobot
PGP Key: 245C 7B6F 70E9 D8F3 F5D5  0CC9 AD14 1F30 2D58 923C

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Nominating Brad Topol for Keystone-Spec core

2015-01-19 Thread Priti Desai
+1

Cheers
Priti

From: Steve Martinelli steve...@ca.ibm.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Sunday, January 18, 2015 at 9:56 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Keystone] Nominating Brad Topol for Keystone-Spec 
core

+1

Steve

Morgan Fainberg morgan.fainb...@gmail.com 
wrote on 01/18/2015 02:11:02 PM:

 From: Morgan Fainberg 
 morgan.fainb...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 01/18/2015 02:15 PM
 Subject: [openstack-dev] [Keystone] Nominating Brad Topol for
 Keystone-Spec core

 Hello all,

 I would like to nominate Brad Topol for Keystone Spec core (core
 reviewer for Keystone specifications and API-Specification only:
 https://git.openstack.org/cgit/openstack/keystone-specs ). Brad has
 been a consistent voice advocating for well defined specifications,
 use of existing standards/technology, and ensuring the UX of all
 projects under the Keystone umbrella continue to improve. Brad
 brings to the table a significant amount of insight to the needs of
 the many types and sizes of OpenStack deployments, especially what
 real-world customers are demanding when integrating with the
 services. Brad is a core contributor on pycadf (also under the
 Keystone umbrella) and has consistently contributed code and reviews
 to the Keystone projects since the Grizzly release.

 Please vote with +1/-1 on adding Brad to as core to the Keystone
 Spec repo. Voting will remain open until Friday Jan 23.

 Cheers,
 Morgan Fainberg
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Horizon] User self registration and management

2015-01-19 Thread David Chadwick
Hi Enrique

You are right in that we have been addressing different problems. There
are three aspects to managing users: registration, assigning
authentication credentials, and assigning authorisation credentials. You
appear to be primarily concerned with the first two. I have only
concentrated on the latter, assuming that the users have already been
registered somewhere (with an identity provider) and already have their
authn tokens. In a federated infrastructure the authn and authz are
split between the IdP and SP, so I have only concentrated on the authz
aspects, assuming the authn is already sorted out.

If you are interested in a centralised Keystone system, there is no need
to split the functionality up, as Keystone can register users and assign
their passwords and their roles. The only place our work would overlap
with yours, is in the assignment of roles to users. Our solution, though
designed for a federated keystone, can equally well be used with a
centralised keystone, since once the user is authenticated, he can then
request to join a VO role regardless of who authenticated him (and we
have demonstrated that local login works just as well as federated login
in our prototype). So you may wish to use our work, once you have sorted
out user registration and the assignment of authn credentials

regards

David


On 19/01/2015 15:15, Enrique Garcia wrote:
 Hi everyone,
 
 Enrique, if you have a github repo or some project pages you can point
 me to that would be wonderful. I'm currently in the very early stages of
 our proof of concept/prototype, so it would be great to see some work
 others have done to solve similar issues. If I can find something that
 works for a few of our use cases it might be a better starting point or
 good to see what an approach others might find useful is.
 I'd much rather not duplicate work, nor build something only useful for
 our use cases, so collaboration towards a community variant would be
 ideal.
 
 
 Adrian, first of all we are currently working on this functionality, so
  we don't have a final version yet; that's why we are also interested in
  joining efforts and collaborating on a community variant. Anyway,
  our first prototype was to do it all in Horizon, implementing a django
  app similar to what you can find in django registration
  https://django-registration.readthedocs.org/en/latest/. Currently
  I am working on moving all the backend logic to a keystone extension and
  keeping the views and form handling in a django app, to make something
  similar to the current authentication system
  https://github.com/openstack/django_openstack_auth
  You can check our current keystone extension here
  https://github.com/ging/keystone/tree/registration/keystone/contrib/user_registration
  if that helps you.
 
 Getting into the details, we went for a slightly different approach to
 the one you propose. Our idea is to have a service in keystone that
 exposes and API to register and activate users, as well as other common
 functionality like password reset, etc. This API is admin only, so
 Horizon(or whoever wants to register users) needs to have its own admin
 credentials to use it. If I understand correctly, what you suggest is
 that is the service the one that would have the credentials, so we
 differ here. I see some benefits and disadvantages in both approaches,
 we can discuss them if you want. 
 
 Secondly, the way we handle temporary user data is setting the enabled
 attribute to False until they get activated using a key provided during
 registration. In other words, our extension is a 'delayed user-create
 API for admins' with some extra functionality like password reset. What
 do you think about this approach? How do you plan to store this temporary
 data?
 
 It would be great if you can provide any feedback on all of this, like
 how well do you think it integrates with the current ecosystem and how
 would you do things differently.
 
 David, is this approach somewhat redundant with the federated Keystone
 code you are working on? I feel like they address different use cases
 but I might be looking at it the wrong way.
 
 regards,
 Enrique Garcia Navalon
 
 
 
 On 16 January 2015 at 15:12, David Chadwick d.w.chadw...@kent.ac.uk
 mailto:d.w.chadw...@kent.ac.uk wrote:
 
 The VO code exists already, as does a public demo (see my original email
 for details). I gave demos to the Keystone core in Paris last November.
 How soon this gets incorporated into core depends upon public/user
 demand. So far, it seems that few people have recognised the value of
 this service, probably because they are not using federated Keystone
 seriously. Once they do, I believe that the need for user self
 registration to roles and privileges will become immediately apparent
 
 regards
 
 David
 
 On 15/01/2015 23:28, Adrian Turjak wrote:
  Typo fix, see below.
 
  On 16/01/15 12:26, Adrian 

[openstack-dev] [Infra] Meeting Tuesday January 20th at 19:00 UTC

2015-01-19 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is resuming our weekly
meetings on Tuesday January 20th, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

And in case you missed them, meeting logs and minutes from the last
couple of meetings are available here:

January 6th:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-01-06-19.02.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-01-06-19.02.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-01-06-19.02.log.html

January 13th:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-01-13-19.02.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-01-13-19.02.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-01-13-19.02.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack] Resending - Network allocation issues during spawn

2015-01-19 Thread Shyam Nadiminti
I issued a nova boot command and the spawn failed while obtaining
network_info.  Here is the stack trace:

2015-01-19 21:17:56.350 17307 ERROR nova.compute.manager [-] Instance failed
network setup after 1 attempt(s)
2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager Traceback (most
recent call last):
2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager   File
/usr/local/lib/python2.7/site-packages/nova/compute/manager.py, line
1682, in _allocate_network_async
2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager
dhcp_options=dhcp_options)
2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager   File
/usr/local/lib/python2.7/site-packages/nova/network/api.py, line 47, in
wrapped
2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager return
func(self, context, *args, **kwargs)
2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager   File
/usr/local/lib/python2.7/site-packages/nova/network/base_api.py, line 64,
in wrapper
2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager res = f(self,
context, *args, **kwargs)
2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager   File
/usr/local/lib/python2.7/site-packages/nova/network/api.py, line 277, in
allocate_for_instance
2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager nw_info =
self.network_rpcapi.allocate_for_instance(context, **args)
2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager   File
/usr/local/lib/python2.7/site-packages/nova/network/rpcapi.py, line 188,
in allocate_for_instance
2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager
macs=jsonutils.to_primitive(macs))
2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager   File
/usr/local/lib/python2.7/site-packages/oslo/messaging/rpc/client.py, line
159, in call
2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager
retry=self.retry)
2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager   File
/usr/local/lib/python2.7/site-packages/oslo/messaging/transport.py, line
90, in _send
2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager
timeout=timeout, retry=retry)
2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager   File
/usr/local/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py,
line 408, in send
2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager retry=retry)
2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager   File
/usr/local/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py,
line 397, in _send
2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager result =
self._waiter.wait(msg_id, timeout)
2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager   File
/usr/local/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py,
line 298, in wait
2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager reply, ending,
trylock = self._poll_queue(msg_id, timeout)
2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager   File
/usr/local/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py,
line 238, in _poll_queue
2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager message =
self.waiters.get(msg_id, timeout)
2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager   File
/usr/local/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py,
line 144, in get
2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager 'to message ID
%s' % msg_id)
2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager MessagingTimeout:
Timed out waiting for a reply to message ID 56262582185f4a5cb0d11a7f85239c3f
2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager


Any help is greatly appreciated.

Thanks,
Shyam
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] (no subject)

2015-01-19 Thread Nikesh Kumar Mahalka
The below test case is failing on LVM in a Kilo devstack.

==
FAIL: tearDownClass
(tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest)
--
Traceback (most recent call last):
_StringException: Traceback (most recent call last):
  File /opt/stack/tempest/tempest/test.py, line 301, in tearDownClass
teardown()
  File /opt/stack/tempest/tempest/thirdparty/boto/test.py, line 272,
in resource_cleanup
raise exceptions.TearDownException(num=fail_count)
TearDownException: 1 cleanUp operation failed



Did anyone face this?




Regards
nikesh
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[openstack-dev] what code in cinder volume driver supports volume migration between two backends of same type but having different volume types?

2015-01-19 Thread Nikesh Kumar Mahalka
Does cinder retype (v2) work for LVM?
How do I use cinder retype?

I tried volume migration from one volume-type LVM backend to
another volume-type LVM backend, but it failed.
How can I achieve this?

Similarly, I am writing a cinder volume driver for my array and want to
migrate volumes from one volume type to another volume type for my
array backends.
So I want to know how I can achieve this in my cinder driver.



Regards
Nikesh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Nominating Brad Topol for Keystone-Spec core

2015-01-19 Thread Erhan Ekici
+1
On 18 Jan 2015 21:16, Morgan Fainberg morgan.fainb...@gmail.com wrote:

 Hello all,

 I would like to nominate Brad Topol for Keystone Spec core (core reviewer
 for Keystone specifications and API-Specification only:
 https://git.openstack.org/cgit/openstack/keystone-specs ). Brad has been
 a consistent voice advocating for well defined specifications, use of
 existing standards/technology, and ensuring the UX of all projects under
 the Keystone umbrella continue to improve. Brad brings to the table a
 significant amount of insight to the needs of the many types and sizes of
 OpenStack deployments, especially what real-world customers are demanding
 when integrating with the services. Brad is a core contributor on pycadf
 (also under the Keystone umbrella) and has consistently contributed code
 and reviews to the Keystone projects since the Grizzly release.

 Please vote with +1/-1 on adding Brad to as core to the Keystone Spec
 repo. Voting will remain open until Friday Jan 23.

 Cheers,
 Morgan Fainberg


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-operators][qa][Rally] Thoughts on removing half of Rally benchmark scenarios

2015-01-19 Thread Boris Pavlovic
Mike,


I understand your concern about keeping the number of different benchmark
 scenarios in Rally not too big so that users don't get confused. But what I
 really like now about benchmark scenario names in Rally is that they are
 highly declarative, i.e. you read them and you have a clear idea of what's
 going on inside those scenarios. You see boot_and_delete_server = you
 know that Rally will boot and then delete a server, boot_server = only
 boot a server.



do_delete is quite clear for understanding as well.

This will solve 2 more issues:

1) Now we have benchmarks that create a lot of resources, and it's
not clear whether we are deleting everything or just a single resource.
2) Inconsistency in naming (some benchmarks delete resources but don't
have delete in the name).


In any case this reduces a huge duplication of code, which should be nice IMHO.
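
For what it's worth, a standalone sketch of the merged scenario (Rally's real
base classes, decorators and atomic-action helpers are left out; the helper
methods below are only stand-ins):

# Minimal stand-in for the proposed single scenario with a do_delete flag.
class NovaServersSketch(object):

    def _boot_server(self, image, flavor, **kwargs):
        print('booting a server from %s (%s)' % (image, flavor))
        return 'server-id'

    def _delete_server(self, server):
        print('deleting %s' % server)

    def boot_server(self, image, flavor, do_delete=False, **kwargs):
        """Covers both old scenarios: do_delete=True gives the old
        boot_and_delete_server behaviour."""
        server = self._boot_server(image, flavor, **kwargs)
        if do_delete:
            self._delete_server(server)


NovaServersSketch().boot_server('cirros-0.3.2', 'm1.tiny', do_delete=True)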

Best regards,
Boris Pavlovic

On Sun, Jan 18, 2015 at 7:18 PM, Mikhail Dubov mdu...@mirantis.com wrote:

 Hi Boris,

 I understand your concern about keeping the number of different benchmark
 scenarios in Rally not too big so that users don't get confused. But what I
 really like now about benchmark scenario names in Rally is that they are
 highly declarative, i.e. you read them and you have a clear idea of what's
 going on inside those scenarios. You see boot_and_delete_server = you
 know that Rally will boot and then delete a server, boot_server = only
 boot a server.

 That's very convenient e.g. when you navigate through Rally report pages:
 you see the scenario names in the left panel and you know what to expect
 from their results. It seems to me that, if we merge scenarios like 
 boot_server
 and boot_and_delete_server together, we will lose a bit in clarity.

 Besides, as you pointed out, Nova.boot_server and 
 Nova.boot_and_delete_server
 are used for two different purposes - seems to be indeed a strong reason
 for keeping them separated.

 Best regards,
 Mikhail Dubov

 Engineering OPS
 Mirantis, Inc.
 E-Mail: mdu...@mirantis.com
 Skype: msdubov

 On Sat, Jan 17, 2015 at 8:47 PM, Boris Pavlovic bo...@pavlovic.me wrote:

 Hi stackers,

 I have an idea about removing almost half of rally scenarios and keep all
 functionality.

 Currently you can see a lot of similar benchmarks like:

 NovaServers.boot_server  # boot server with passed
 arguments
 NovaServers.boot_and_delete_server  # boot server with passed arguments
 and delete

 The reason of having this 2 benchmarks are various purpose of them:

 1) Nova.boot_server is used for *volume/scale testing*.
 Where we would like to see how N active VM works and affects OpenStack
 API and booting next VMs.

 2) Nova.boot_and_delete_server is used for *performance/load* testing.
 We are interested how booting and deleting VM perform in case on various
 load (what is different in duration of booting 1 VM when we have 1, 2, M
 simultaneously VM boot actions)


 *The idea is to keep only 1 boot_server and add arguments do_delete
 with by default False. *

 It means that:

  # this is equal to old Nova.boot_server
 NovaServers.boot_server: [{args: {...} }]

 # this is equal to old Nova.boot_and_delete_server
 NovaServers.boot_server: [{args: {..., do_delete: True}}]


 Thoughts?


 Best regards,
 Boris Pavlovic

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Raw vs Qcow2 images_type in nova/libvirt

2015-01-19 Thread Dmitry Guryanov

Hello,

Do I understand correctly, that both Qcow2 and Raw classes in 
libvirt/imagebackend.py can work with images in qcow2 format, but Raw 
copies the whole base image from cache to the instance's dir and Qcow2 
only creates a delta (and use base image from cache)?


--
Dmitry Guryanov


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] instance_info_caches table, nova not populating for some instances

2015-01-19 Thread Don Waterloo
On 3 December 2014 at 11:58, Don Waterloo don.water...@gmail.com wrote:

 I am having a problem that I hope someone can comment on.

 Periodically, an instance ends up w/ 0 rows in 'instance_info_caches' in
 the nova database.

 as a consequence, when i do 'nova list', it ends up without knowing
 anything about the networks. The instance is allocated an IP, has booted,
 is able to use that IP. Neutron owns the port for it, all is good from that
 standpoint, its just nova knows nothing about it.

 Is 'info_caches' something that is truly a cache? it seems the only known
 repository.



Sorry to follow up my own email, but... is anyone else hitting this? I'm
getting more than just a 'no ip in nova list' symptom; once in a while some
instance ends up w/ 0 bridges in its virsh xml file. What happens is it
comes up normally, all is good and happy, but then some number of days(?)
later, it ends up with no source bridges, no interfaces, and a [] for the
'network_info' field in the instance_info_caches table.

Any idea how this could happen? It's Juno on Ubuntu. In ~10K instances
started/stopped since ~Jan 1, I now have 15 in this state, so it's not super
common. This symptom is more severe, so I cannot live with it.

A reboot does not solve, nor does rebuild (it rebuilds from this info).
Neutron still says the instance is connected, but nova gets it wrong.

 select * from instance_info_caches where network_info = '[]' and deleted =
0;
+-+-++--+--+--+-+
| created_at  | updated_at  | deleted_at | id   |
network_info | instance_uuid| deleted |
+-+-++--+--+--+-+
| 2014-11-03 21:47:44 | 2014-11-03 21:48:05 | NULL   | 4762 | []
| 6996aa1c-7c05-4e36-a86e-d45f7af14352 |   0 |
 ...
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [Nova] Raw vs Qcow2 images_type in nova/libvirt

2015-01-19 Thread Pádraig Brady
On 19/01/15 20:41, Michael Still wrote:
 Mostly.
 
 qcow2 can do a copy on write layer, although it can be disabled IIRC.
 So if COW is turned on, you get only the delta in the instance
 directory when using qcow2.
 
 Cheers,
 Michael
 
 On Tue, Jan 20, 2015 at 7:40 AM, Dmitry Guryanov
 dgurya...@parallels.com wrote:
 Hello,

 Do I understand correctly, that both Qcow2 and Raw classes in
 libvirt/imagebackend.py can work with images in qcow2 format, but Raw copies
 the whole base image from cache to the instance's dir and Qcow2 only creates
 a delta (and uses the base image from cache)?

Correct.  That Raw class should be renamed to Copy,
to clarify/distinguish from CopyOnWrite.
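
For anyone who wants to see the difference on disk, the two behaviours boil
down to roughly this (a sketch, not nova's actual imagebackend code):

# Raw backend: full copy of the cached base image into the instance dir.
# Qcow2 backend: a small qcow2 overlay whose backing file is the cached base.
import shutil
import subprocess

def raw_style(cached_base, instance_disk):
    shutil.copyfile(cached_base, instance_disk)

def qcow2_style(cached_base, instance_disk):
    subprocess.check_call(['qemu-img', 'create', '-f', 'qcow2',
                           '-b', cached_base, instance_disk])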

BTW there are some notes on these settings at:
http://www.pixelbeat.org/docs/openstack_libvirt_images/

Pádraig

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Next week's meeting is cancelled (and some other notes)

2015-01-19 Thread Anita Kuno
On 01/19/2015 01:47 PM, Sean M. Collins wrote:
 Thanks Doug!
 
 It is a big link but I'd rather see the full URL than trust opening a
 URL shortener link. I've been rickrolled too many times to count. :)
 
I like to add something like:

label:Code-Review=0,self

to a URL like this so that after I post a vote to a patch and refresh my
list, I am only shown patches I haven't yet voted on.

Nice work Doug,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gantt (Scheduler) meeting agenda

2015-01-19 Thread Sylvain Bauza


Le 19/01/2015 20:25, Ed Leafe a écrit :

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi all,

I want to make sure that everyone is present and prepared to discuss the
one outstanding spec for Kilo: https://review.openstack.org/#/c/138444/

In the words of Jay Pipes, we are at an impasse: Jay and I prefer an
approach in which the scheduler loads up the information about the
compute nodes when it starts up, and then relies on the compute nodes to
update their status whenever an instance is create/destroyed/resized.
Sylvain prefers instead to have the hosts query that information for
every call to _get_all_host_states(), adding the instance information to
the Host object as an InstanceList attribute. I might be a little off in
my summary of the two positions, but I think it captures each side's
preferred approach to solving this issue.


It sounds like my opinion has been misunderstood.
It's unfortunate that, even after a Google Hangout, we have to discuss
again what we agreed on.
But OK, let's go over it again; let me try once more to give a quick
explanation of my view here...


So, as I said during the Hangout, the scheduler does not create a
HostState manager when the scheduler service starts; instead one is
created each time a query comes in.
That means that if you want to persist any information, it has to be
written to the compute_nodes DB table, so that the HostState objects the
filters consume are instantiated accordingly.


I think we all agree that querying instance status should be done by
looking at HostState instead of querying the DB directly; that's a good
point.
Having said that, the discussion is about how to instantiate HostState
and how to deal with the potential race conditions that an asynchronous
call would introduce.


When I mentioned a call in _get_all_host_states(), I was just saying
that this is currently the only way to add extra details to HostState.


A scheduler service that persists HostState totally has my +1. But are
we sure it should be done in that spec? I'm not sure at all.






IMO, the former approach is a lot closer to the ideal end result for an
independent scheduler service, whereas the latter is closer to the
current design, and would be less disruptive code-wise. The former *may*
increase the probability of race conditions in which two schedulers
simultaneously try to consume resources on the same host, but there are
several possible ways we can reduce that probability.
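
To make the former/latter distinction concrete, here is a rough, purely
illustrative sketch (class and method names are hypothetical, not nova's
actual scheduler code) of the two shapes being discussed:

# Illustrative sketch only; hypothetical names, not the real nova classes.

class PerQueryHostState(object):
    """Latter approach: rebuild host state on every scheduling request."""

    def __init__(self, db):
        self.db = db

    def get_all_host_states(self):
        # One pass over the compute_nodes table per scheduling query;
        # nothing persists between calls.
        return dict((cn['host'], dict(cn))
                    for cn in self.db.compute_node_get_all())


class PersistentHostState(object):
    """Former approach: load host state once at service start, then apply
    updates pushed by compute nodes on instance create/destroy/resize."""

    def __init__(self, db):
        self.hosts = dict((cn['host'], dict(cn))
                          for cn in db.compute_node_get_all())

    def host_updated(self, host, usage):
        # Called from a compute-node notification; no DB round trip.
        self.hosts.setdefault(host, {}).update(usage)

    def get_all_host_states(self):
        return self.hosts

The persistent variant avoids a DB pass per scheduling request, but two
scheduler processes each holding their own copy can briefly disagree, which
is the race-condition concern mentioned above.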


As I said, the former approach requires a persistent HostState manager
that we don't have today. That sounds interesting and it has my vote,
but it should not be handled in the spec you are mentioning and
requesting a spec freeze exception for.


Honestly, are we talking about code for Kilo? If so, I don't think the
former approach is doable for Kilo, in particular since no code has been
written yet.
If we're talking about what the scheduler should look like in the
future, then yes, I'm 100% with you.



So please read up on that spec, and come to the meeting tomorrow
prepared to discuss it.

BTW, the latter approach is very similar to an earlier version of the
spec: https://review.openstack.org/#/c/138444/8/ . We seem to be going
in circles!


Are you sure that the patchset you are quoting is the proposal I'm
referring to?


Keep in mind that I'm trying to find a common approach with the same
paradigm that was already approved here:
http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/isolate-scheduler-db-aggregates.html



-Sylvain


-- 
Ed Leafe

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Nova] Raw vs Qcow2 images_type in nova/libvirt

2015-01-19 Thread Michael Still
Mostly.

qcow2 can do a copy on write layer, although it can be disabled IIRC.
So if COW is turned on, you get only the delta in the instance
directory when using qcow2.
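
To illustrate the delta behaviour (a sketch only; the paths and file names
below are made up, not the exact ones nova uses), the qcow2 case is roughly
equivalent to:

# thin overlay whose backing file is the cached base image
qemu-img create -f qcow2 \
  -o backing_file=/var/lib/nova/instances/_base/BASE_IMAGE \
  /var/lib/nova/instances/INSTANCE_UUID/disk

whereas in the raw case the whole base image is copied (and converted if
necessary) into the instance directory.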

Cheers,
Michael

On Tue, Jan 20, 2015 at 7:40 AM, Dmitry Guryanov
dgurya...@parallels.com wrote:
 Hello,

 Do I understand correctly, that both Qcow2 and Raw classes in
 libvirt/imagebackend.py can work with images in qcow2 format, but Raw copies
 the whole base image from cache to the instance's dir and Qcow2 only creates
 a delta (and uses the base image from the cache)?

 --
 Dmitry Guryanov


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Rackspace Australia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] instance_info_caches table, nova not populating for some instances

2015-01-19 Thread Michael Still
I've never heard of anything like this.

What release of OpenStack are you running? What hypervisor driver?

Thanks,
Michael

On Tue, Jan 20, 2015 at 7:46 AM, Don Waterloo don.water...@gmail.com wrote:


 On 3 December 2014 at 11:58, Don Waterloo don.water...@gmail.com wrote:

 I am having a problem that I hope someone can comment on.

 Periodically, an instance ends up w/ 0 rows in 'instance_info_caches' in
 the nova database.

 As a consequence, when I do 'nova list', it ends up without knowing
 anything about the networks. The instance is allocated an IP, has booted,
 and is able to use that IP. Neutron owns the port for it, all is good from
 that standpoint; it's just that nova knows nothing about it.

 Is 'info_caches' something that is truly a cache? It seems to be the only
 known repository.



 Sorry to follow up my own email, but... is anyone else hitting this? I'm
 getting more than just a 'no IP in nova list' symptom; once in a while an
 instance ends up w/ 0 bridges in its virsh XML file. What happens is it
 comes up normally, all is good and happy. But then some number of days(?)
 later, it ends up with no source bridges, no interfaces, and a [] for the
 'network_info' field in the instance_info_caches table.

 Any idea how this could happen? It's Juno on Ubuntu. In ~10K instances
 started/stopped since ~Jan 1, I now have 15 in this state, so it's not super
 common. This symptom is more severe, so I cannot live with it.

 A reboot does not solve it, nor does a rebuild (the rebuild works from this
 cached info). Neutron still says the instance is connected, but nova gets it
 wrong.

 select * from instance_info_caches where network_info = '[]' and deleted = 0;
 +---------------------+---------------------+------------+------+--------------+--------------------------------------+---------+
 | created_at          | updated_at          | deleted_at | id   | network_info | instance_uuid                        | deleted |
 +---------------------+---------------------+------------+------+--------------+--------------------------------------+---------+
 | 2014-11-03 21:47:44 | 2014-11-03 21:48:05 | NULL       | 4762 | []           | 6996aa1c-7c05-4e36-a86e-d45f7af14352 |       0 |
 ...



 ___
 Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openstack@lists.openstack.org
 Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack




-- 
Rackspace Australia

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack