Re: [openstack-dev] Gnocchi sizing on production

2016-12-27 Thread Sam Huracan
Hi guys,

I'm using gnocchi version 3.0:
https://github.com/openstack/gnocchi/tree/stable/3.0 + OpenStack Mitaka,
storing gnocchi metrics on Ceph Storage
All the metrics I need to collect are VM resources: CPU, memory, disk, and
network. We're going to poll those metrics every minute for threshold
alarming, and keep them for one month for billing (medium archive policy).

I recently set up a lab VM for the Gnocchi server with 4 vCPUs and 8192 MB of
RAM. After configuring 10 workers, the backlog of Gnocchi measures (gnocchi
status) decreased significantly, but it drained much more CPU and RAM, and
the VM froze after 30 minutes of running. :)

Has anyone deployed Gnocchi in production? Could you share your experiences?

Thanks and regards



2016-12-28 5:22 GMT+07:00 Mike Perez :

> On 16:07 Dec 27, Sam Huracan wrote:
> > Hi,
> >
> > I'm testing Gnocchi in a lab and noticing that it drains CPU and RAM heavily.
> >
> > Do you have a recommendation for sizing the Gnocchi server configuration?
> >
> > Thanks and regards,
>
> Hi Sam,
>
> I recommend asking the OpenStack operators mailing list [1] for
> configuration
> help. It's likely someone there has knowledge of running Gnocchi in
> production, as this list is mostly about development discussions. Thanks!
>
> [1] - http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> --
> Mike Perez
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kolla-kubernetes] Call for Kolla-kubernetes use cases contribution

2016-12-27 Thread Takashi Sogabe
Hello,

In my organization, we evaluated a PoC of OpenStack on Kubernetes [01].
As the next step, we are looking to adopt an open solution.

* How do you expect operations on day one to be performed?

Currently, we integrate our own DBaaS as the backend database for OpenStack.
In terms of UX, it would be great if such a configuration could be deployed
with operations similar to the all-in-one configuration.

I guess the Open Service Broker API [02] might be the right way to do that.

* How much do you need or want to be able to modify the output we produce to 
meet your organization's specific requirements?

SDN is going to be a key component for expanding our cluster. We are looking
to improve scalability by using ODL, OVN, Calico, Contrail, or similar.

* How do you expect to interact with your deployment, and perform day 2 
operations.

The technical specification [7] seems to fit our environment so far.

[01] http://blog.kubernetes.io/2016/10/kubernetes-and-openstack-at-yahoo-japan.html
[02] https://www.openservicebrokerapi.org

Regards,
--
Takashi Sogabe

-Original Message-
From: Pete Birley [mailto:pete@port.direct] 
Sent: Thursday, December 15, 2016 1:25 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [kolla][kolla-kubernetes] Call for 
Kolla-kubernetes use cases contribution

duonghq,

Thanks for kicking this process off; it really fits in with the point I was
raising about us needing to define the user experience that we are looking to
get with Kolla-Kubernetes. However, before we jump straight into composing a
document on Gerrit, I think it may make more sense to have the initial
discussion here.

From my perspective, the project has really taken off on the development front
since Barcelona, with several new contributors, including myself, having
jumped on board. In my opinion, this project has a huge amount of potential
for OpenStack and Kubernetes alike, as we aim to provide the building blocks
that make it easy to consume OpenStack services in harmony with the
Kubernetes ecosystem.

I believe that Kolla was originally intended to use Kubernetes to provide 
OpenStack services, but instead, moved towards a more operator-friendly Ansible 
model as at the time the feature set that Kubernetes provided was not 
sufficient to enable OpenStack operation without serious compromise. Since
then, Kubernetes has come on in leaps and bounds, and in addition to the
Kolla-Kubernetes effort several other projects have appeared, including
Stackinetes[1], SAPCC
OpenStack Helm[2], AIC-Helm[3], and my own much smaller Harbor[4]. Each of 
these has implemented OpenStack on top of Kubernetes in different ways, each
with its own merits and drawbacks, to meet the needs of its operators and
users. It's also worth noting the Hypernetes[5] project, which took a vastly
different tack and built a Kubernetes distribution that utilized OpenStack
services to provide multi-tenancy.

In Barcelona it was decided that we (Kolla-Kubernetes) would move from the
original home-grown Jinja2 templating engine to adopt the Helm[6] tooling
from Kubernetes to package OpenStack services. Helm is also being used by
both SAPCC OpenStack Helm and AIC-Helm to meet their organizations' needs.

There has been great progress on making this transition, taking the original 
templates and converting them into individual Helm charts[6], using a build 
methodology that we developed to facilitate this conversion. This has meant 
that we have been able to make the transition with no loss of service
continuity, but it means that we are probably not taking full advantage of
the tooling that is now available from our upstream vendor (Kubernetes).

This last point has at times become a point of contention within the
development team as we charge full tilt into developing technical solutions
following a technical specification[7] written to solve a problem that, to me
at least, is not as well defined as it could be. This has led to differing
opinions on the direction we should move in, though we all broadly share the
same goals. As a result, I would really appreciate it if we
could spend a short amount of time, most likely via this thread in the mailing 
list, trying to define the experience we want our users to have (UX). For
this activity to be worthwhile, I feel it is really important that we step
back from discussing technical detail for a minute and focus on that user
experience.

It is also vital that we look to build bridges with the wider OpenStack and
Kubernetes communities for this project to be successful, as we are so
heavily invested in the tools they provide us, and we can mutually benefit
from the experience we both have in infrastructure lifecycle management.

I am also aware that there have been efforts in the past for groups to get 
involved in Kolla-Kubernetes 

[openstack-dev] [neutron-lbaas][octavia] Error when creating load balancer

2016-12-27 Thread Yipei Niu
Hi, All,

I failed creating a load balancer on a subnet. The detailed info of
o-cw.log is pasted in the link http://paste.openstack.org/show/593492/.

Look forward to your valuable comments. Thank you.

Best regards,
Yipei
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Gone (HTTP 410) when use "ceilometer alarm-list"

2016-12-27 Thread Jace.Liang
Hi,


I have trouble using Ceilometer alarms. When I type "ceilometer alarm-list",
it returns Gone (HTTP 410) (Request-ID:
req-a6d1d333-ac49-48c5-ad94-ae8796b13e26).

Even creating an alarm returns the same result.

I'm using devstack stable/mitaka on Ubuntu 14.04

Here is my local.conf

[[local|localrc]]
HOST_IP=10.214.1.16
SERVICE_HOST=$HOST_IP
MYSQL_HOST=$HOST_IP
RABBIT_HOST=$HOST_IP

GLANCE_HOSTPORT=10.214.1.16:9292
ADMIN_PASSWORD=password
DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password

GIT_BASE=${GIT_BASE:-https://git.openstack.org}
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service ceilometer-acompute ceilometer-acentral ceilometer-anotification ceilometer-collector ceilometer-api
enable_service ceilometer-alarm-notifier ceilometer-alarm-evaluator

Q_USE_SECGROUP=True
FLOATING_RANGE="10.214.0.0/16"
FIXED_RANGE="10.0.0.0/24"
Q_FLOATING_ALLOCATION_POOL=start=10.214.180.101,end=10.214.180.250
PUBLIC_NETWORK_GATEWAY="10.214.0.254"
Q_L3_ENABLED=True
PUBLIC_INTERFACE=p6p1

Q_USE_PROVIDERNET_FOR_PUBLIC=True
OVS_PHYSICAL_BRIDGE=br-ex
PUBLIC_BRIDGE=br-ex
OVS_BRIDGE_MAPPINGS=public:br-ex

enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer stable/mitaka

How do I solve this problem? Or is something wrong in my local.conf?
Thank you.
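For what it's worth, in Mitaka the alarm endpoints were removed from the
Ceilometer API in favor of the separate Aodh service, which is why the API
answers 410 Gone. A minimal local.conf sketch of the change (the plugin URL
and branch here are assumptions; match them to your stable/mitaka setup):

```ini
# Replace the retired ceilometer-alarm-* services with the Aodh plugin:
disable_service ceilometer-alarm-notifier ceilometer-alarm-evaluator
enable_plugin aodh https://git.openstack.org/openstack/aodh stable/mitaka
```

After restacking, alarms are managed through Aodh (e.g. "aodh alarm list"
with the aodh client, if available) rather than the removed Ceilometer
alarm API.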


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Allowing Teams Based on Vendor-specific Drivers

2016-12-27 Thread Mike Perez
On 13:33 Nov 28, Doug Hellmann wrote:
> The OpenStack community wants to encourage collaboration by emphasizing
> contributions to projects that abstract differences between
> vendor-specific products, while still empowering vendors to integrate
> their products with OpenStack through drivers that can be consumed
> by the abstraction layers.
> 
> Some teams for projects that use drivers may not want to manage
> some or all of the drivers that can be consumed by their project
> because they have little insight into testing or debugging the code,
> or do not have the resources needed to manage centrally a large
> number of separate drivers. Vendors are of course free to produce
> drivers to integrate with OpenStack completely outside of the
> community, but because we value having the drivers as well as the
> more general support of vendor companies, we want to encourage a
> higher level of engagement by welcoming vendor-specific teams to
> be a part of our community governance.
> 
> Our Requirements for New Projects list [0] includes a statement
> about establishing a "level and open collaboration playing field"
> 
>   The project shall provide a level and open collaboration playing
>   field for all contributors. The project shall not benefit a single
>   vendor, or a single vendor's product offerings; nor advantage
>   contributors from a single vendor organization due to access to
>   source code, hardware, resources or other proprietary technology
>   available only to those contributors.
> 
> This requirement makes it difficult to support having teams focused
> on producing a deliverable that primarily benefits a single vendor.
> So, we have some tension between wanting to collaborate and have a
> level playing field, while wanting to control the amount of driver
> code that projects have to manage.
> 
> I'm raising this as an issue because it's not just a hypothetical
> problem. The Cisco networking driver team, having been removed from
> the Neutron stadium, is asking for status as a separate official
> team [1]. I would very much like to find a way to say "yes, welcome
> (back)!"



Sorry for bringing this thread back up; I was gone on paternity leave and
have been looking into this a bit.

Someone at Cisco reached out to me during the Ocata summit who was interested
in a Cisco driver being more recognized and official. I think a way forward
for marking which drivers are validated and tested to work in Neutron is
knowing which tests need to be run by each driver type [1], if we can provide
Armando with more support. I have provided some more detailed information on
the reviews [1][2], and I think this will help with what some are seeking for
their drivers.

[1] - https://review.openstack.org/#/c/391594
[2] - https://review.openstack.org/#/c/363709

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gnocchi sizing on production

2016-12-27 Thread Mike Perez
On 16:07 Dec 27, Sam Huracan wrote:
> Hi,
> 
> I'm testing Gnocchi in a lab and noticing that it drains CPU and RAM heavily.
>
> Do you have a recommendation for sizing the Gnocchi server configuration?
> 
> Thanks and regards,

Hi Sam,

I recommend asking the OpenStack operators mailing list [1] for configuration
help. It's likely someone there has knowledge of running Gnocchi in
production, as this list is mostly about development discussions. Thanks!

[1] - http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Using \ for multiline statements

2016-12-27 Thread Ian Cordasco
On Tue, Dec 27, 2016 at 12:00 PM, Ed Leafe  wrote:
> On Dec 27, 2016, at 11:33 AM, Sean McGinnis  wrote:
>>
>>> There is a problem with the use of backslash at the end of the line, where
>>> if you also put a space after it, it no longer does what it seemingly
>>> should.
>>
>> Oooh, nice. This is the good technical reason I was looking for. So there
>> really is a valid reason to avoid this, other than just a preference in
>> readability and coding style. Thanks!
>
> True, but we also have a pep8 check for spaces at the end of lines.
>
> -- Ed Leafe

Worth noting, though, that if you are using multiple context managers
in a single 'with' statement that you can't use ()s to split them
across multiple lines, you *must* use \s, otherwise it is
syntactically invalid. (Weird, yes, I know. Guido has refused to make
this consistent on python-dev several times iirc.)
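Ian's exception is worth a concrete illustration. A minimal sketch using
io.StringIO objects as stand-in context managers (note: parenthesized
context managers in a `with` statement only became officially supported much
later, in Python 3.10):

```python
import io

# Splitting one `with` statement over two lines: wrapping the pair of
# context managers in () was a SyntaxError on the Pythons of this era,
# so backslash continuation is required.  The backslash must be the
# very last character on the line.
with io.StringIO("first") as src, \
     io.StringIO() as dst:
    dst.write(src.read())
    result = dst.getvalue()

print(result)  # -> first
```

Both StringIO objects are closed on exit, so the value is captured inside
the block before the statement ends.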

-- 
Ian Cordasco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] No meeting this week, and most likely none next week

2016-12-27 Thread Richard Jones
Hi folks,

I'll be around but since most everyone else won't be, let's skip the
meeting this week.

I won't be around next week, so I won't be able to run the meeting then either.


Richard

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Using \ for multiline statements

2016-12-27 Thread Ed Leafe
On Dec 27, 2016, at 11:33 AM, Sean McGinnis  wrote:
> 
>> There is a problem with the use of backslash at the end of the line, where
>> if you also put a space after it, it no longer does what it seemingly
>> should. 
> 
> Oooh, nice. This is the good technical reason I was looking for. So there
> really is a valid reason to avoid this, other than just a preference in
> readability and coding style. Thanks!

True, but we also have a pep8 check for spaces at the end of lines.

-- Ed Leafe



[openstack-dev] [Cinder] Reminder - no meeting this week

2016-12-27 Thread Sean McGinnis
Just a friendly reminder that there will be no Cinder meeting this week.
Regular weekly meetings will resume in January.

Thanks!

Sean (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Using \ for multiline statements

2016-12-27 Thread Sean McGinnis
On Fri, Dec 23, 2016 at 03:00:02PM +0100, Radomir Dopieralski wrote:
> There is a problem with the use of backslash at the end of the line, where
> if you also put a space after it, it no longer does what it seemingly
> should. 

Oooh, nice. This is the good technical reason I was looking for. So there
really is a valid reason to avoid this, other than just a preference in
readability and coding style. Thanks!

> Together with the fact that you can achieve the same thing using
> () in pretty much all instances (it used to not be true for imports, but
> that has been fixed), this suggests it should be avoided. However, it
> does help to format certain Java-esque constructs (mostly with mox) in a
> more readable way, so sometimes it's justified.
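The trailing-space gotcha Radomir describes is easy to demonstrate with
compile(), which turns the invisible difference into a visible error (a
minimal sketch):

```python
# Identical code except for one trailing space after the backslash in
# `bad`: the line continuation no longer applies and compilation fails.
good = "x = 1 + \\\n    2\n"   # backslash, then newline: continues the line
bad = "x = 1 + \\ \n    2\n"   # backslash, space, newline: SyntaxError

compile(good, "<good>", "exec")  # compiles fine

try:
    compile(bad, "<bad>", "exec")
except SyntaxError as exc:
    print("SyntaxError:", exc.msg)
```

Since most editors don't show trailing whitespace, the broken version looks
identical to the working one on screen, which is what makes this failure
mode so unpleasant.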

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [zun][nova-docker] Time to retire nova-docker?

2016-12-27 Thread Hongbin Lu
Removed "[nova]" and added "[zun]" to the subject.

Esra,

In short, I would consider Zun a beta-stage project. Zun was founded about
six months ago and is currently under rapid development. Simply speaking,
what Zun offers is a container-oriented API, and it is up to drivers to
decide how to implement that API. Currently, we have a first driver that is
based on Nova, and I expect there will be a second driver based on
Kubernetes.

It seems you are looking for nova-docker-equivalent functionality. If so, I
think the first driver will fit your use cases better. We bootstrapped it by
forking nova-docker as a starting point; however, we are heading in a
different direction from nova-docker. You can find more details in this spec
[1].

Please feel free to contact the Zun team if you have any inquiry. We are
happy to help.

[1] https://github.com/openstack/zun/blob/master/specs/container-sandbox.rst

Best regards,
Hongbin

On Mon, Dec 26, 2016 at 1:38 PM, Esra Celik 
wrote:

>
> Hi Jay, I was asking because our discussions to contribute to nova-docker
> project ran across the discussions here to retire the project :)
>
> Hongbin, that is exactly what I meant. nova-docker deploys containers
> directly to physical machines, not virtual machines.
> Using the Ironic driver with Magnum is a solution, but I guess every time a
> cluster is created with Magnum it will redeploy the operating system on the
> selected physical machine, which is not necessary.
> I will investigate the Zun project more, thank you very much. What would
> you say about its current maturity level?
>
>
>
> --
>
> *From: *"Hongbin Lu" 
> *To: *"OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> *Sent: *Monday, 26 December 2016 17:53:00
> *Subject: *Re: [openstack-dev] [nova][nova-docker] Time to retire
> nova-docker?
>
> I guess "extra virtualization layer" means Magnum provisions a Container
> Orchestration Engine (COE) on top of nova instances. If the nova instances
> are virtual machines, there is an "extra virtualization layer".
>
> I think you could consider using Magnum with Ironic driver. If the driver
> is Ironic, COEs are deployed to nova instances that are physical machines
> provided by Ironic. Zun project [1] could be another option for your use
> case. Zun is similar to nova-docker, which enables running containers on
> compute hosts. You could find a thoughtful introduction here [2].
>
> [1] https://wiki.openstack.org/wiki/Zun
> [2] http://www.slideshare.net/hongbin034/zun-presentation-openstack-barcelona-summit
>
> Best regards,
> Hongbin
>
> On Mon, Dec 26, 2016 at 8:23 AM, Jay Pipes  wrote:
>
>> On 12/26/2016 08:23 AM, Esra Celik wrote:
>>
>>> Hi All,
>>>
>>> It is very sad to hear of nova-docker's retirement. My team of three and
>>> I work at a cloud computing laboratory, and we were very keen on working
>>> with nova-docker.
>>> After some research about its current state, I came across these mails.
>>> Will you actually propose another equivalent to nova-docker, or is it
>>> just a lack of contributors to the project?
>>> Some of the contributors previously advised us to use the Magnum project
>>> instead of nova-docker, but it does not satisfy our needs because of the
>>> additional virtualization layer it needs.
>>> If the main problem is a lack of contributors, we may participate in
>>> this project.
>>>
>> There's never any need to ask permission to contribute to a project :) If
>> nova-docker driver is something you cannot do without, feel free to
>> contribute to it.
>>
>> That said, Magnum does seem to be where most of the docker-related
>> contributions to the compute landscape have moved. So, it's more likely you
>> will find company in that project and perhaps be able to make more
>> effective contributions there. Can I ask what is the "extra virtualization
>> layer" that you are referring to in Magnum?
>>
>> Best,
>> -jay
>>
>>

Re: [openstack-dev] Gnocchi sizing on production

2016-12-27 Thread Alex Krzos
Hi Sam,

I have also been testing Gnocchi and found high resource utilization
depending on the Gnocchi configuration and archive policy. Processing metrics
is CPU-heavy and storing them is limited by disk I/O, so results will depend
heavily on your hardware and worker counts. Could you elaborate on the
configuration and version you are using, along with the number of resources
you are collecting metrics on? The more data points we can share with the
community, the better the defaults and scale/capacity guidelines we can
provide. Also, if you could elaborate on which aggregations you truly need,
we can take that into account for the default policies.

I can also recommend moving the Gnocchi API (hosted as a WSGI app in httpd)
and Gnocchi metricd to separate hardware, if you are not doing so already.
This will prevent resource contention between the other OpenStack services
and the Telemetry services.

Alex Krzos | Performance Engineering
Red Hat
Desk: 919-754-4280
Mobile: 919-909-6266

On Tue, Dec 27, 2016 at 4:07 AM, Sam Huracan  wrote:
> Hi,
>
> I'm testing Gnocchi in a lab and noticing that it drains CPU and RAM heavily.
>
> Do you have a recommendation for sizing the Gnocchi server configuration?
>
> Thanks and regards,
>
> Sam
>


Re: [openstack-dev] [Openstack] Gnocchi sizing on production

2016-12-27 Thread minmin ren
Hi, Sam
What's the version of gnocchi you're testing?
I am also experiencing the same problem with the Gnocchi 2.2.0 release. I
find that the gnocchi-metricd processes keep showing high CPU utilization.



On Tuesday, 27 December 2016, Sam Huracan wrote:

> Hi,
>
> I'm testing Gnocchi in a lab and noticing that it drains CPU and RAM heavily.
>
> Do you have a recommendation for sizing the Gnocchi server configuration?
>
> Thanks and regards,
>
> Sam
>


-- 



Thanks & Best Regards,
Min Min Ren
Software Developer
Email: rmm0...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] reminder: no meeting today

2016-12-27 Thread Steve Martinelli
see everyone next week, happy holidays!

stevemar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Time to retire nova-docker?

2016-12-27 Thread Csatari, Gergely (Nokia - HU/Budapest)
Hi,

Zun provides a lifecycle-management interface for containers started in the
COE deployed with Magnum. In other words, a COE is still needed for Zun. For
more information, see the Zun wiki.
Internally, to separate the started containers into different sandboxes, Zun
uses a fork of the nova-docker driver that was developed further to fulfill
Zun's specific needs.
For supporting containers on Nova without an additional COE, Zun provides no
solution.
I've added Hongbin to the mail to correct me if I'm wrong about Zun.

Br,
Gerg0


From: Esra Celik [mailto:celik.e...@tubitak.gov.tr]
Sent: Monday, December 26, 2016 7:38 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [nova][nova-docker] Time to retire nova-docker?


Hi Jay, I was asking because our discussions to contribute to nova-docker 
project ran across the discussions here to retire the project :)

Hongbin, that is exactly what I meant. nova-docker deploys containers
directly to physical machines, not virtual machines.
Using the Ironic driver with Magnum is a solution, but I guess every time a
cluster is created with Magnum it will redeploy the operating system on the
selected physical machine, which is not necessary.
I will investigate the Zun project more, thank you very much. What would you
say about its current maturity level?




From: "Hongbin Lu" >
To: "OpenStack Development Mailing List (not for usage questions)"
>
Sent: Monday, 26 December 2016 17:53:00
Subject: Re: [openstack-dev] [nova][nova-docker] Time to retire nova-docker?

I guess "extra virtualization layer" means Magnum provisions a Container
Orchestration Engine (COE) on top of nova instances. If the nova instances
are virtual machines, there is an "extra virtualization layer".

I think you could consider using Magnum with Ironic driver. If the driver is 
Ironic, COEs are deployed to nova instances that are physical machines provided 
by Ironic. Zun project [1] could be another option for your use case. Zun is 
similar to nova-docker, which enables running containers on compute hosts. You 
could find a thoughtful introduction here [2].

[1] https://wiki.openstack.org/wiki/Zun
[2] http://www.slideshare.net/hongbin034/zun-presentation-openstack-barcelona-summit

Best regards,
Hongbin

On Mon, Dec 26, 2016 at 8:23 AM, Jay Pipes 
> wrote:

On 12/26/2016 08:23 AM, Esra Celik wrote:

Hi All,

It is very sad to hear of nova-docker's retirement. My team of three and I
work at a cloud computing laboratory, and we were very keen on working with
nova-docker.
After some research about its current state, I came across these mails. Will
you actually propose another equivalent to nova-docker, or is it just a lack
of contributors to the project?
Some of the contributors previously advised us to use the Magnum project
instead of nova-docker, but it does not satisfy our needs because of the
additional virtualization layer it needs.
If the main problem is a lack of contributors, we may participate in this
project.
There's never any need to ask permission to contribute to a project :) If 
nova-docker driver is something you cannot do without, feel free to contribute 
to it.

That said, Magnum does seem to be where most of the docker-related 
contributions to the compute landscape have moved. So, it's more likely you 
will find company in that project and perhaps be able to make more effective 
contributions there. Can I ask what is the "extra virtualization layer" that 
you are referring to in Magnum?

Best,
-jay


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gnocchi sizing on production

2016-12-27 Thread Julien Danjou
On Tue, Dec 27 2016, Sam Huracan wrote:

Hi Sam,

> I'm testing Gnocchi in a lab and noticing that it drains CPU and RAM heavily.
>
> Do you have a recommendation for sizing the Gnocchi server configuration?

Which version are you testing? What's the RAM consumption?
You can reduce the default archive policies and increase processing
latency to decrease CPU usage.
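To make those two knobs concrete, here is an illustrative gnocchi.conf
fragment. The option names follow Gnocchi 3.x and the values are placeholder
suggestions; verify both against your installed version:

```ini
[metricd]
# Fewer metricd workers than vCPUs leaves headroom for the API and the OS.
workers = 2
# Seconds between processing passes: raising this batches more measures per
# pass, trading metric freshness for lower steady-state CPU usage.
metric_processing_delay = 60
```

Similarly, a leaner archive policy (for instance a single one-minute
granularity kept only for the one-month billing window, instead of the
multi-granularity medium policy) reduces the aggregation work done per
measure.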

-- 
Julien Danjou
// Free Software hacker
// https://julien.danjou.info




Re: [openstack-dev] [Neutron][networking-*] Attention for upcoming refactoring

2016-12-27 Thread Anna Taraday
Hello everyone!

Please note that all the changes to Neutron have merged.

Changes that need to be merged in external repos:
segments db refactor -
https://review.openstack.org/#/q/status:open+branch:master+topic:segmentsdb
ml2 db refactor -
https://review.openstack.org/#/q/status:open+branch:master+topic:refactor_ml2db

Happy holidays to everyone!


On Thu, Dec 22, 2016 at 7:36 AM Russell Bryant  wrote:

>
> On Wed, Dec 21, 2016 at 10:50 AM, Anna Taraday  > wrote:
>
> Hello everyone!
>
> I've got two changes with refactor of TypeDriver [1] and segments db [2]
> which is needed for implementation new engine facade [3].
>
> Reviewers of networking-cisco, networking-arista, networking-nec,
> networking-midonet, networking-edge-vpn, networking-bagpipe, tricircle, and
> group-based-policy: pay attention to [4].
>
> Also recently merged: the refactor of ml2/db.py [5]. Fixes for
> networking-cisco are on review [6].
>
> [1] - https://review.openstack.org/#/c/398873/
> [2] - https://review.openstack.org/#/c/406275/
> [3] - https://blueprints.launchpad.net/neutron/+spec/enginefacade-switch
> [4] - https://review.openstack.org/#/q/topic:segmentsdb
> [5] - https://review.openstack.org/#/c/404714/
> [6] -
> https://review.openstack.org/#/q/status:open++branch:master+topic:refactor_ml2db
>
>
> Thanks a lot for looking out for the various networking-* projects when
> working on changes like this. It's really great to see.
>
> --
> Russell Bryant
-- 
Regards,
Ann Taraday
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Feedback for upcoming user survey questionnaire

2016-12-27 Thread Rui Chen
I have one for users:

- Have you needed to add customized features on top of upstream Nova code to
meet your special needs, or does Nova work out of the box for you?

Thanks.

2016-12-27 7:18 GMT+08:00 Jay Pipes :

> On 12/26/2016 06:08 PM, Matt Riedemann wrote:
>
>> We have the opportunity to again [1] ask a question in the upcoming user
>> survey which will be conducted in February. We can ask one question and
>> have it directed to either *users* of Nova, people *testing* nova, or
>> people *interested* in using/adopting nova. Given the existing adoption
>> of Nova in OpenStack deployments (98% as of October 2016) I think that
>> sliding scale really only makes sense to direct a question at existing
>> users of the project. It's also suggested that for projects with over
>> 50% adoption to make the question quantitative rather than qualitative.
>>
>> We have until January 9th to submit a question. If you have any
>> quantitative questions for Nova users, please reply to this thread
>> before then.
>>
>> Personally I tend to be interested in feedback on recent development, so
>> I'd like to ask questions about cells v2 or the placement API, i.e. they
>> were optional in Newton but how many deployments that have upgraded to
>> Newton are deploying those features (maybe also noting they will be
>> required to upgrade to Ocata)? However, the other side of me knows that
>> most major production deployments are also lagging behind by a few
>> releases, and may only now be upgrading, or planning to upgrade, to
>> Mitaka since we've recently end-of-life'd the Liberty release. So asking
>> questions about cells v2 or the placement service is probably premature.
>> It might be better to ask about microversion adoption, i.e. if you're
>> monitoring API request traffic to your cloud, what % of compute API
>> requests are using a microversion > 2.1.
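As a concrete illustration of that suggested measurement, here is a hedged sketch of computing the percentage from access logs (the log format below is invented for illustration; a real deployment would adapt the parsing to its own access-log format):

```python
# Hedged sketch: estimate what fraction of compute API requests negotiated a
# microversion above 2.1, from request logs. Sample lines are made up.
import re

sample_log = """\
GET /servers/detail openstack-api-version: compute 2.1
GET /servers/detail openstack-api-version: compute 2.37
POST /servers openstack-api-version: compute 2.1
GET /flavors openstack-api-version: compute 2.10
"""

# Match the negotiated microversion; compare as an (int, int) tuple so that
# 2.10 correctly sorts above 2.1.
pattern = re.compile(r"openstack-api-version:\s*compute\s+(\d+)\.(\d+)", re.I)

total = above_2_1 = 0
for line in sample_log.splitlines():
    m = pattern.search(line)
    if not m:
        continue
    total += 1
    if (int(m.group(1)), int(m.group(2))) > (2, 1):
        above_2_1 += 1

pct = 100.0 * above_2_1 / total if total else 0.0
print(f"{above_2_1}/{total} requests above 2.1 ({pct:.0f}%)")
```

On the sample lines above this reports 2 of 4 requests (50%) above 2.1; note the tuple comparison, since a plain float comparison would mis-rank 2.10.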
>>
>
> My vote would be to ask the following question:
>
> Have you considered using (or already chosen) an alternative to OpenStack
> Nova for launching your software workloads? If you have, please list one to
> three reasons why you chose this alternative.
>
> Thanks,
> -jay
>
>


[openstack-dev] Gnocchi sizing on production

2016-12-27 Thread Sam Huracan
Hi,

I'm testing gnocchi in a lab and noticing that it drains too much CPU and RAM.

Do you have a recommendation for sizing the Gnocchi server configuration?
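For context, a minimal sketch of the gnocchi.conf knobs usually tuned first for metricd load; the option names follow the 3.x documentation, and the values are illustrative only, not a sizing recommendation:

```ini
[metricd]
# Number of metric processing workers; typically sized to available cores.
# Too many workers on a small VM can exhaust CPU and RAM.
workers = 4
# Seconds between processing passes (illustrative value; higher values
# trade measure freshness for lower load).
metric_processing_delay = 60
```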

Thanks and regards,

Sam


Re: [openstack-dev] [nova] Help: How to make Nova official support AArch64?

2016-12-27 Thread Kevin Zhao
Hi Jay && Matt,
 Thanks for your valuable help~
 I will continue working on the 3rd-party CI now.

Best Regards,
Kevin Zhao

On 21 December 2016 at 10:34, Matt Riedemann 
wrote:

> On 12/20/2016 7:27 AM, Jay Pipes wrote:
>
>> On 12/20/2016 12:45 AM, Kevin Zhao wrote:
>>
>>> Hello Nova,
>>>   Greetings from ARM and Linaro :D
>>>   This is Kevin Zhao from ARM. Linaro and other contributors have
>>> submitted patches to the Nova community, fixing bugs for Nova running on
>>> AArch64. We can now successfully run an OpenStack cluster on AArch64 and
>>> pass much of the Tempest suite. The virtualization hypervisor is based
>>> on Libvirt+KVM.
>>>
>>>   Actually, in the Barcelona Interoperability Challenge keynote (at
>>> 10:30), one of the OpenStack interoperability clouds was from Linaro,
>>> all based on AArch64 machines.
>>>I see there is a Nova support matrix. So my question is:
>>> what do we need to do so that Nova can officially support the AArch64
>>> architecture? For example, add AArch64 to this support matrix?
>>>
>>>Sincere thanks for your help. Any response will be really
>>> appreciated.
>>>
>>
>> Hi Kevin,
>>
>> I think the first thing you would need to do is set up some sort of
>> continuous integration system that can be notified by upstream patch
>> changes/pushes and report results of Tempest testing against a build of
>> OpenStack on AArch64 machines.
>>
>> You can read about setting up third-party CI for this here:
>>
>> http://docs.openstack.org/infra/system-config/
>>
>> Best,
>> -jay
>>
>> 
>>
> Yeah, as Jay said, it's probably third-party CI that's needed to recognize
> it in the support matrix.
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
>


Re: [openstack-dev] [Neutron] Using neutron lib plugin constants

2016-12-27 Thread Gary Kotton
The following decomposed plugins are not gating:

-  openstack/networking-brocade (please see https://review.openstack.org/415028)

-  openstack/networking-huawei (please see https://review.openstack.org/415029)

-  openstack/astara-neutron (please see https://review.openstack.org/405388)

Can the relevant maintainers of those projects please address this if 
possible.

The openstack/networking-cisco project does not consume the correct 
neutron-lib version, so can the maintainers also please take a look at that.
Happy holidays
Gary


From: Gary Kotton 
Reply-To: OpenStack List 
Date: Monday, December 26, 2016 at 3:37 PM
To: OpenStack List 
Subject: [openstack-dev] [Neutron] Using neutron lib plugin constants

Hi,
Please note the following two patches:

1.  https://review.openstack.org/414902 - use CORE from neutron lib

2.  https://review.openstack.org/394164 - use L3 (previously known as 
L3_ROUTER_NAT)

Please note that the above will be removed from 
neutron/plugins/common/constants.py and neutron-lib will be used instead.
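For affected drivers, the transition is typically bridged with an import shim; a hedged sketch is below. The neutron_lib module path and constant names are assumptions based on this thread, not a verified API, and the final fallback literal is only there so the sketch runs outside an OpenStack environment:

```python
# Hedged sketch of a compatibility shim a networking-* project might carry
# while the CORE/L3 plugin constants move from neutron to neutron-lib.
try:
    # Assumed new home of the constants in neutron-lib.
    from neutron_lib.plugins import constants as p_const
    L3 = p_const.L3
except ImportError:
    try:
        # Old home, being removed per this thread.
        from neutron.plugins.common import constants as p_const
        L3 = p_const.L3_ROUTER_NAT
    except ImportError:
        # Neither package installed (e.g. outside an OpenStack env);
        # fall back to the historical literal, for illustration only.
        L3 = 'L3_ROUTER_NAT'

print(L3)
```

With a shim like this in one place, the rest of the driver refers only to the local `L3` name, so the patches listed below can land before or after the neutron-side removal.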

For the core change I have posted:

1.  VPNaaS - https://review.openstack.org/#/c/414915/

For the L3 change, things are a little more complicated 
(http://codesearch.openstack.org/?q=L3_ROUTER_NAT):

1.  networking-cisco – https://review.openstack.org/414977

2.  group-based-policy – https://review.openstack.org/414976

3.  big switch - https://review.openstack.org/414956

4.  brocade - https://review.openstack.org/414960

5.  dragonflow - https://review.openstack.org/414970

6.  networking-huawei - https://review.openstack.org/414971

7.  networking-odl = https://review.openstack.org/414972

8.  astara-neutron - https://review.openstack.org/414973

9.  networking-arista - https://review.openstack.org/414974

10.  networking-fortinet - https://review.openstack.org/414980

11.  networking-midonet - https://review.openstack.org/414981

12.  networking-nec - https://review.openstack.org/414982

13.  networking-onos - https://review.openstack.org/414983

Please note that I have not added a ‘Depends-On’, as these patches can and should 
land first.
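For reference, such a cross-repo dependency would be declared as a footer in the commit message; a sketch with a hypothetical change-id:

```
Depends-On: I0123456789abcdef0123456789abcdef01234567
```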
Thanks and happy new year
Gary
