Re: [Openstack-operators] [openstack] [tools] OpenStack client in a Docker container

2016-06-27 Thread Gerard Braad
> Usage is as easy as:
> $ docker pull gbraad/openstack-client:centos

I have just added an Alpine-based image as well:

$ docker pull gbraad/openstack-client:alpine
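
(Assuming the Alpine image exposes the same 'stack' entrypoint as the CentOS
one, usage should only differ in the image tag:)

$ alias stack='docker run -it --rm -v ~/.stack:/root/.stack gbraad/openstack-client:alpine stack'
$ stack trystack openstack server list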

Hope this is also useful to you.


-- 

   Gerard Braad | http://gbraad.nl
   [ Doing Open Source Matters ]

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [neutron] Cross region traffic and Neutron

2016-06-27 Thread Sergio Morales Acuña
Hi all.

I'm looking for documentation or articles about cross-region communication
at the tenant level (if possible).

Can someone share links about this and similar topics?

Thank you.
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [openstack] [tools] OpenStack client in a Docker container

2016-06-27 Thread Gerard Braad
Hi all,


When you reinstall workstations or test environments as often as I do,
you want to automate everything... or containerize it. So I packaged the
OpenStack client into Docker images based on Ubuntu and CentOS. To make it
more convenient, I also added Lars's 'stack' helper tool. Just have a look
at the registry [1] or the source [2].

Usage is as easy as:

Store your stackrc in ~/.stack, named after an endpoint; e.g. ~/.stack/trystack
$ docker pull gbraad/openstack-client:centos
$ alias stack='docker run -it --rm -v ~/.stack:/root/.stack gbraad/openstack-client:centos stack'
$ stack trystack openstack server list
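
(A minimal sketch of what such a stackrc might contain, assuming the 'stack'
helper sources it like a normal openrc file; the values below are placeholders
for your own cloud:)

export OS_AUTH_URL=https://identity.example.com:5000/v2.0
export OS_USERNAME=myuser
export OS_PASSWORD=secret
export OS_TENANT_NAME=myproject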

Comments welcome...

regards,


Gerard

[1] https://hub.docker.com/r/gbraad/openstack-client/
[2] https://github.com/gbraad/docker-openstack-client/

-- 

   Gerard Braad | http://gbraad.nl
   [ Doing Open Source Matters ]

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [craton] Midcycle meetup to discuss fleet management

2016-06-27 Thread Jim Baker
The Craton fleet management midcycle meetup will be held in New York City,
August 23-26, using the following two events to provide structure/meeting
support:

   - OpenStack East - developer-focused discussion on Craton core, along
   with further development of integration points;
   http://www.openstackeast.com/
   - Operators midcycle hosted by Bloomberg - gather feedback from operators
   and make course corrections;
   
http://lists.openstack.org/pipermail/openstack-operators/2016-June/010788.html

Please join us! We also meet on #openstack-meeting-4 (1500 UTC Monday) and
hang out on #craton (Freenode).

Craton is a new open source project that we plan to propose for OpenStack
inclusion ("big tent"). Craton supports deploying and operating OpenStack
clouds by providing scalable fleet management. It provides inventory;
scalable workflows for audit and remediation (in progress); and
corresponding REST APIs/CLI/Python client. We are targeting OpenStack-Ansible
for the reference workflows.

Craton is based on how Rackspace currently operates its public cloud at
scale, but it is also intended to support private clouds, from small to
large. We are building it on SQLAlchemy, TaskFlow, and Flask; along with
*optional* undercloud integration with Keystone, Barbican, and Horizon.
https://github.com/rackerlabs/craton
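
For anyone who wants to poke at the code before the midcycle, a rough way to
get a local checkout (a sketch only, assuming the usual setup.py-based layout;
see the repository README for the actual setup instructions):

$ git clone https://github.com/rackerlabs/craton.git
$ cd craton
$ pip install -e .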

- Jim
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Reconstruction of Erasure coded ring is taking more and more time

2016-06-27 Thread Jeremy Stanley
On 2016-06-27 07:30:40 -0700 (-0700), OpenStack Mailing List Archive wrote:
> Link: https://openstack.nimeyo.com/88829/?show=88829#q88829
[...]

I have contacted nimeyo.com again via their feedback form and
re-requested they remove our mailing lists from their service (and
asked that they cease using our trademarked logo), so that we
hopefully don't need to resort to blocking their MTAs and reporting
them to spam blacklists.
-- 
Jeremy Stanley

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Reconstruction of Erasure coded ring is taking more and more time

2016-06-27 Thread OpenStack Mailing List Archive

Link: https://openstack.nimeyo.com/88829/?show=88829#q88829
From: Pompon 

Hello all,

I'm investigating the logs for my erasure-coded ring. Last week, a full reconstruction pass completed in about one hour, with around 30 partitions per second processed. Here is an extract from the object-reconstructor logs:

Jun 18 06:26:51 STACO1 object-reconstructor: 102414/113992 (89.84%) partitions of 7/7 (100.00%) devices reconstructed in 3600.01s (28.45/sec, 6m remaining)


Then I started putting data into the ring (I have already sent 5 TB into it), but the more data I inject into my erasure-coded ring, the longer a reconstruction pass takes; today's figures are 70 hours for a full pass, with only 5 partitions per second processed.

Jun 27 06:33:08 STACO1 object-reconstructor: 818326/818326 (100.00%) partitions of 7/7 (100.00%) devices reconstructed in 159563.78s (5.13/sec, 0s remaining)
Jun 27 06:38:39 STACO1 object-reconstructor: 955/116211 (0.82%) partitions of 1/7 (14.29%) devices reconstructed in 300.03s (3.18/sec, 70h remaining)


Also, what do the "partitions" values given here mean? Last week I had only 113992 partitions, and now I have 818326? However, my ring only contains 262144 partitions.

Between last week and now, I injected 5 TB of data into the ring and added a 28th device to it. The current state of the ring builder is shown below.
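
(For reference, adding a device and rebalancing is typically done with commands
like the following; the IP, port, zone and weight here are placeholders, not my
real values:)

# swift-ring-builder object-1.builder add r1z4-192.168.0.28:6000/sdb 100
# swift-ring-builder object-1.builder rebalance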

# swift-ring-builder object-1.builder
object-1.builder, build version 66
262144 partitions, 12.00 replicas, 1 regions, 4 zones, 28 devices, 0.82 balance, 31.02 dispersion
The minimum number of hours before a partition can be reassigned is 1 (0:00:00 remaining)
The overload factor is 0.00% (0.00)


So, should I worry about this long reconstruction time or not? And if so, what can I do to improve it?

Thanks.



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Kilo ->-> Mitaka anyone have notes :)

2016-06-27 Thread Andrew Laski


On Sun, Jun 26, 2016, at 07:58 PM, Sam Morrison wrote:
> I’ve done kilo -> mitaka with Keystone and all worked fine. Nothing
> special I needed to do.
> 
> If you’re wanting to do live upgrades with nova you can’t skip a version
> from my understanding.

Correct. Not without doing the work yourself to make it possible, and
realizing that it's untested/unsupported and likely will bring out
gremlins.

In order to make db migrations less painful, i.e. faster, Nova moved to
a system where only schema changes occur in the sqlalchemy-migrate
migrations and all data is migrated by another nova-manage command which
can be run while the system is in use. But the side effect of this is
that the data migration needs to take place for each release, and there
are checks in place to ensure this happens. Unless you have an empty
database, "nova-manage db sync" won't even complete successfully for
Kilo->Mitaka, since there's a check in place to ensure the Liberty data
migration took place.
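
So in practice the database has to be walked through each release's migrations
anyway. A rough sketch of the per-release sequence (treat this as an outline,
not a recipe; exact command names vary by release, so check "nova-manage db"
help for the version you are running):

Kilo -> Liberty:
$ nova-manage db sync
$ nova-manage db migrate_flavor_data

Liberty -> Mitaka:
$ nova-manage db sync
$ nova-manage db online_data_migrations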

> 
> Sam
> 
> 
> > On 25 Jun 2016, at 4:16 AM, Jonathan Proulx  wrote:
> > 
> > Hi All,
> > 
> > I'm about to start testing for our Kilo->Mitaka migration.
> > 
> > I seem to recall many (well a few at least) people who were looking to
> > do a direct Kilo to Mitaka upgrade (skipping Liberty).
> > 
> > Blue Box apparently just did, and I read Stefano's blog[1] about it;
> > while it gives me hope that my plan is possible, it's not really a
> > technical piece.
> > 
> > I'm on my 7th version of OpenStack for this cloud now, so not my first
> > rodeo as they say, but other than reading the Liberty and Mitaka release
> > notes carefully and testing like crazy, I wonder if anyone has seen specific
> > issues or has specific advice for skipping a step here?
> > 
> > Thanks,
> > -Jon
> > 
> > -- 
> > [1] 
> > https://www.dreamhost.com/blog/2016/06/21/dreamcompute-goes-m-for-mitaka-without-unleashing-the-dragons/
> > 
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [neutron] Packet loss with DVR and IPv6

2016-06-27 Thread Tomas Vondra
Tomas Vondra  writes:

> The setup has 3 network nodes and 1 compute node currently hosting a virtual
> network (GRE based). DVR is enabled. I have just added IPv6 to this network
> and to the external network (VLAN based). The virtual network is set to SLAAC.
> 
> However, the link-local router address and associated MAC address are the
> same in all 4 qr namespaces. About 16% of packets get lost in randomly
> occurring bursts. Open vSwitch forwarding tables are flapping, and I think that
> the packet loss occurs at the moment when all 4 switches learn the MAC address
> through a GRE tunnel simultaneously.
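
(A rough way to see the duplicated link-local address and MAC for yourself,
run on each node hosting the distributed router; the qrouter UUID below is a
placeholder:)

$ ip netns | grep qrouter
$ ip netns exec qrouter-<uuid> ip -6 addr show
$ ip netns exec qrouter-<uuid> ip link show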

I'm convinced this is a bug that has probably not been fixed yet. I am filing a bug report:
https://bugs.launchpad.net/neutron/+bug/1596473
Tomas




___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [cloud] Keystone's DB_SYNC from Kilo to Liberty

2016-06-27 Thread Alvise Dorigo



On 27/06/2016 01:54, Sam Morrison wrote:

That usually means your DB is at version 86 (you can check the DB table to see, 
the table is called migration_version or something)
BUT your keystone version is older and doesn’t know about version 86.

Is it possible that the keystone version you're running is older and doesn't know
about version 86?
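
(For reference, a quick way to check which version the database itself is at,
assuming a MySQL backend and default naming; the table sqlalchemy-migrate uses
is migrate_version, so adjust the database name and credentials for your install:)

$ mysql keystone -e 'SELECT version FROM migrate_version;'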


Hi Sam,
yes, it is possible. Actually, there was an error in my procedure, and a
complete restore of the Kilo database solved the problem.


thanks for the support and sorry for the noise.

A.

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [HA] RFC: user story including hypervisor reservation / host maintenance / storage AZs / event history (fwd)

2016-06-27 Thread Juvonen, Tomi (Nokia - FI/Espoo)
Thank you very much for the interest. I need to look over the other
discussion and perhaps have a session in Barcelona to look at the
way forward after the change in Nova.
> -Original Message-
> From: Adam Spiers [mailto:aspi...@suse.com]
> Sent: Monday, June 20, 2016 4:43 PM
> To: Juvonen, Tomi (Nokia - FI/Espoo) 
> Cc: openstack-operators mailing list <openstack-operat...@lists.openstack.org>
> Subject: Re: [Openstack-operators] [HA] RFC: user story including
> hypervisor reservation / host maintenance / storage AZs / event history
> (fwd)
> 
> Hi Tomi,
> 
> Juvonen, Tomi (Nokia - FI/Espoo)  wrote:
> > I'm working in the OPNFV Doctor project that is about fault
> > management and maintenance (NFV). The goal of the project is to
> > build fault management and maintenance framework for high
> > availability of Network Services on top of virtualized
> > infrastructure.
> >
> > https://wiki.opnfv.org/display/doctor
> >
> > Currently there is already landed effort to OpenStack to have
> > ability to detect failures fast, change states in OpenStack (Nova),
> > add state information that was missing and also to expose that to
> > owner of a VM. Also alarm is triggered. By all this one can now rely
> > the states and get notice about faults in a split second. Surely
> > with system configured monitor different faults and make actions
> > based configured policies, or leave some actions for consumers of
> > the alarms risen.
> 
> Sounds very interesting - thanks.  Does this really have to be limited
> to OPNFV though?  It sounds like it would be very useful within
> OpenStack generally.
Surely not just for OPNFV, but for all operators. If we play with the idea
of having a link to some external tool that offers more than
"host_maintenance_reason", it now seems it would be some more generic
"host_details", where one could have an external REST API to call for any
wanted host-specific details that one would also like to expose to the
tenant/owner of a server. If we have that tool, it could also implement
maintenance- or host-failure-specific scenarios. The admin could do things
manually, or the tool could be configured per VNF / instance to take some
actions. The OPNFV use case here is just the more specific maintenance state
to begin with, but who knows what one might want to implement there in the
end. Auto-evacuate...? That is anyhow far off in the next steps, as it is
complex to build. It is even case specific what to do in different scenarios:
- Manually perform any action (by the admin).
- Automatically move the VM (maybe not if the problem has a bigger scale).
- Let it stay on the host over the maintenance (if it is not a busy hour for the service).
- Let the VM owner remove/add the VM (to a host that has already gone through maintenance).
...
> 
> > For maintenance I had a session in Austin to talk with Ops and Nova
> > core about the maintenance part. There it was seen that Nova didn't
> > want more specific information about host maintenance (maintenance
> > state, maintenance window...), so as a result of the discussion
> > there is a spec that was now transferred to Ocata:
> >
> > https://review.openstack.org/310510/
> 
> That's great - thanks a lot for highlighting, as it certainly seems to
> overlap a lot with the functionality which NTT proposed and is now
> described here:
> 
>   http://specs.openstack.org/openstack/openstack-user-stories/user-
> stories/proposed/ha_vm.html

Thanks, I need to familiarize myself with this as well as with other requests
in the field.
> 
> > The spec proposes a link to Nova external tool to provide more
> > specific information about host (compute) maintenance and by latest
> > comments it could have any host specific extra information to the
> > same place (for example you have mentioned event history). Still if
> > looking this kind of tool, why not make it configurable for anything
> > convenient for different operator scenario like automatic operations
> > if so wanted.
> 
> Yes, that definitely makes sense to me.
> 
> > Anyhow project like Nova do not want big new functionalities, so all
> > "more complex flows" should reside somewhere outside.
> 
> Right.  I can certainly understand that desire, but I'm a bit confused
> why the spec is proposing both extending Nova's API / DB schema *and*
> adding an external tool.
I understand this point, as just the text field on its own is also usable. The
external tool is kind of out of scope for the spec. Anyhow, I would mention it
so that it is understood that the aim is to build more functionality into
OpenStack in the future, and not to be limited to what a single string can offer.

Br,
Tomi


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Ops Meetups Team - Meeting Tuesday 1400 UTC

2016-06-27 Thread Tom Fifield


Hi all,

We're having a meeting:

Tuesday, 28 June at 1400 UTC [1]

to continue organising the NYC Ops meetup, and to start thinking about 
Barcelona and beyond.


See you in IRC[2], in the #openstack-operators channel.


Details about the group, and the link to the agenda etherpad, can be 
found at:


https://wiki.openstack.org/wiki/Ops_Meetups_Team#Meeting_Information



Regards,


Tom


[1] To see this in your local time - check: 
http://www.timeanddate.com/worldclock/fixedtime.html?msg=Ops+Meetups+Team&iso=20160628T14


[2] If you're new to IRC, there's a great guide here: 
http://docs.openstack.org/upstream-training/irc.html


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators