I am pretty sure I am using the right puppet-keystone branch for stable
juno ...
[root@puppet-centos-001 keystone]# pwd
/etc/puppetlabs/puppet/environments/production/modules/keystone
[root@puppet-centos-001 keystone]# git status
# On branch stable/juno
# Changes not staged for commit:
#
Maybe this could be the "answer":
https://bugs.launchpad.net/cloud-archive/+bug/1486947
Cris
On Tue, Nov 24, 2015 at 11:38 PM, Emilien Macchi wrote:
>
>
> On 11/24/2015 11:21 PM, Russell Cecala wrote:
> > I am trying to use the OpenStack community puppet modules. Here's
Obed, Tim,
Here is the chain of patches related to VM workloads that should cover the
second point:
https://review.openstack.org/#/q/status:open+project:openstack/rally+branch:master+topic:bp/vm-workloads-framework,n,z
Best regards,
Boris Pavlovic
On Tue, Nov 24, 2015 at 2:56 PM, Boris Pavlovic
On Tue, Nov 24, 2015 at 5:26 PM, Matt Jarvis
wrote:
> Nope, the HA flag is definitely set to false. Here's another example :
>
> root@osnet0:~# neutron l3-agent-list-hosting-router
> be651d53-1dd2-46eb-8d57-7e2aafd6ff57
>
>
On 11/24/2015 11:21 PM, Russell Cecala wrote:
> I am trying to use the OpenStack community puppet modules. Here's the
> keystone module I am using: https://github.com/openstack/puppet-keystone
> I am using the stable juno branch. I have in my puppet manifest for my
> controller nodes this
Wonderful Emilien. I was feeling pretty alone there ;)
I believe these are the OpenStack client RPMs I am using:
[root@mgmt-centos-001 ~]# rpm -qa | grep openstack
openstack-keystone-2014.2.2-1.el7.noarch
python-openstackclient-1.0.1-1.el7.centos.noarch
On Tue, Nov 24, 2015 at 2:38 PM,
Obed,
Rally team is working on supporting point 2.
We will allow you to run distributed loads in the cloud, like iPerf, SPEC, HPCC,
and so on.
However, we are moving very slowly, and for now you can use
https://github.com/openstack/shaker
Best regards,
Boris Pavlovic
On Tue, Nov 24, 2015 at 9:27
On 11/25/2015 12:00 AM, Russell Cecala wrote:
> Wonderful Emilien. I was feeling pretty alone there ;)
>
> I believe these are the OpenStack client RPMs I am using:
>
> [root@mgmt-centos-001 ~]# rpm -qa | grep openstack
>
> openstack-keystone-2014.2.2-1.el7.noarch
>
>
I am trying to use the OpenStack community puppet modules. Here's the
keystone module I am using: https://github.com/openstack/puppet-keystone
I am using the stable juno branch. I have in my puppet manifest for my
controller nodes this resource definition:
class {
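The resource declaration above is cut off in the digest; for readers unfamiliar with the module, a typical stable/juno Keystone class declaration looks roughly like the following sketch. All parameter values here are illustrative placeholders, not the poster's actual manifest, and parameter names may differ by module version; check the module's README.

```puppet
# Hypothetical sketch of a puppet-keystone class declaration.
# Values are placeholders; consult the stable/juno README for the
# parameters your deployment actually needs.
class { '::keystone':
  verbose     => true,
  admin_token => 'ADMIN_TOKEN_PLACEHOLDER',
}
```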
Nope, the HA flag is definitely set to false. Here's another example :
root@osnet0:~# neutron l3-agent-list-hosting-router
be651d53-1dd2-46eb-8d57-7e2aafd6ff57
+----+------+
| id | host |
For the consultants reading this (I've pretty much exhausted my Rolodex): I
have a remote engineering opportunity if someone wants a little side work
this week - maybe next week - assisting us with an OpenStack cloud project
for a large ISP with unique requirements. Please ping me if you're a
Tim, thanks for pointing out the User Committee. You are the second (or
third) one to mention that to me in this context, and it is another great way
to influence the direction OpenStack takes (and goes hand in hand with this
operators' list, the midcycle, and the operators' summit meetings.)
-d
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512
On 11/24/15 2:41 PM, JJ Asghar wrote
>
> I think this brings in the conversation, where _should_ these types
> of emails be sent?
>
> Maybe we should create a [openstack-jobs] list? So there is a one
> stop shop that we can point these types of
On 11/24/15 2:07 PM, Adam Lawson wrote:
> For the consultants reading this (I've pretty much exhausted my
> Rolodex): I have a remote engineering opportunity if someone wants
> a little side work this week - maybe next week - assisting us with
> an
Adam,
This really isn't the right list for this. And I think we'll be pretty
aggressive about enforcing that if this becomes a thing. If you want to
reach out to the community regarding available opportunities there is an
existing mechanism for that in the community job board:
The HA routers feature was merged in Juno, and those routers are scheduled to
multiple agents.
If you source admin credentials and 'neutron router-show
951c8ec9-9a6c-4c6d-9d6d-049b3dee7f6f'
is the 'HA' flag set to True? Otherwise, this is a really weird bug
which I've never seen before.
On Tue, Nov 24,
Hi Operators,
I'd love to hear from anyone running multiple physical Neutron L3 Agent
nodes...
Specifically: How many routers do you schedule per Neutron L3 Agent and
how much CPU/Memory do you put into an L3 Agent physical host?
Because of a couple of bugs in Linux[1][2], we have a pretty
On 11/24/15 3:13 PM, Adam Lawson wrote:
> Yep. I knew I was walking a gray line. If I had time to let folks
> know about an opportunity and wait for folks to visit and reply I
> totally would. Otherwise, I would definitely echo a job-related
>
Yep. I knew I was walking a gray line. If I had time to let folks know
about an opportunity and wait for folks to visit and reply I totally would.
Otherwise, I would definitely echo a job-related mailing list if that could
be set up.
//adam
*Adam Lawson*
AQORN, Inc.
427 North Tatnall Street
On 11/24/15 2:14 PM, David Medberry wrote:
> Tim, thanks for pointing out the User Committee. You are the second (or
> third) one to mention that to me in this context, and it is another great
> way to influence the direction OpenStack takes (and goes
Getting ready to eat lots of turkey
--Rocky
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
Hi Matt,
> It's also weird that we've only seen this when the environment has been
> built using terraform. This particular customer re-creates the issue every
> time they rebuild.
>
I work on the OpenStack support for Terraform, so I might be able to help
with this. Could you provide an example
In addition, an etherpad is available at
https://etherpad.openstack.org/p/arch-guide-reorg with the current structure
and the proposed structure. Have a look and add any changes you would like to see.
Thanks,
Darren
On Wednesday, 25 November 2015, 0:43, Shilla Saebi
Hi,
We have a deployment where Keystone sits behind an HAProxy node, so
authentication requests are made to a VIP. The problem is that when there is an
authentication failure, we cannot track the remote IP that failed the login: all
authentication failures show the VIP's IP, since HAProxy forwards the request to a
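One common way around this, sketched below, is to have HAProxy append an X-Forwarded-For header carrying the original client address; this assumes HAProxy runs in HTTP mode and that the Keystone/Apache side is configured to log that header. Names and addresses below are placeholders, not the poster's actual configuration.

```
# haproxy.cfg fragment (illustrative)
frontend keystone_public
    bind 192.0.2.10:5000
    mode http
    option forwardfor        # append X-Forwarded-For with the client IP
    default_backend keystone_api

backend keystone_api
    mode http
    server keystone1 10.0.0.11:5000 check
```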
Kris,
You’re right, this system worked well before. This time I restarted
rabbitmq-server, nova-api, nova-conductor, nova-console, nova-novncproxy, and
nova-scheduler together in a single line on the controller node. Now it’s back
to normal. So, lesson learned: if restarting a single service does not
Hi folks,
I was just wondering if there’s a good way or framework for measuring how fast
an OpenStack cloud is.
As far as I can see, we could divide measurements into two perspectives:
1. Operator's Perspective
* I think Rally may help with that. https://wiki.openstack.org/wiki/Rally
*
"How fast" can be measured in a variety of ways:
- How quickly can a VM be spawned and become available?
- How quickly does it run once it is available? How many X workunits/second can
be achieved with N cores?
Rally will help with the first case. For the second case, it is important to
choose an
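The first measurement above is essentially wall-clock timing of a provisioning request until the resource becomes usable. A minimal, hypothetical harness (plain Python, not Rally; the callables stand in for real cloud API calls) just starts the operation and polls a readiness check:

```python
import time

def measure_until_ready(start_operation, is_ready, poll_interval=0.01, timeout=10.0):
    """Time how long an operation takes until a readiness check passes.

    start_operation: callable that kicks off the work (e.g. a VM boot request).
    is_ready: callable returning True once the resource is usable.
    Returns elapsed seconds, or raises TimeoutError.
    """
    started = time.monotonic()
    start_operation()
    while time.monotonic() - started < timeout:
        if is_ready():
            return time.monotonic() - started
        time.sleep(poll_interval)
    raise TimeoutError("resource did not become ready in time")

# Stand-in for a real cloud call: "ready" 50 ms after the operation starts.
state = {}
elapsed = measure_until_ready(
    lambda: state.setdefault("t0", time.monotonic()),
    lambda: time.monotonic() - state["t0"] >= 0.05,
)
print(f"became available after {elapsed:.3f}s")
```

Rally wraps the same idea in repeatable scenarios with concurrency and reporting; for a one-off check, timing the boot-to-ACTIVE interval like this is often enough.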
Are all of your instance_types tables identical across the cell/API DBs?
And also marked is_public?
-AH
On Mon, Nov 23, 2015 at 6:25 PM, Belmiro Moreira <
moreira.belmiro.email.li...@gmail.com> wrote:
> Hi Matt,
> I'm rolling kilo 2015.1.1
>
> After resizing an instance (state: verify_resize)
Hi All
In the last week or so we've seen a couple of customer issues where a
router is associated with more than one l3 agent, which obviously causes
significant connectivity weirdness.
❯ neutron l3-agent-list-hosting-router 951c8ec9-9a6c-4c6d-9d6d-049b3dee7f6f
Edgar Magana wrote:
> Is the Foundation aware of that? I mean your comment about "It's the
> cumulative voting system that is broken"
>
> Maybe this is a good opportunity for fixing it!
Oh sure, it's a well-known issue, which (as far as I know) is
periodically discussed by the Board of
> -----Original Message-----
> From: Thierry Carrez [mailto:thie...@openstack.org]
> Sent: 24 November 2015 17:56
> To: openstack-operators@lists.openstack.org
> Subject: Re: [Openstack-operators] Operational Director?
>
> Edgar Magana wrote:
> > Is the Foundation aware of that? I mean you
Hello Xav,
What version of OpenStack are you running?
thank you
Saverio
2015-11-23 20:04 GMT+01:00 Xav Paice :
> Hi,
>
> Over the last few months we've had a few incidents where the process to
> create network namespaces (Neutron, OVS) on the network nodes gets 'stuck'
>
Hello Xav,
We also had problems with namespaces in Juno, maybe a little different
from what you describe.
We are running about 250 namespaces on our network node. When we
reboot the network node, we observe that some namespaces have their qr-* and
qg-* interfaces missing.
We believe that is because
Hello,
If you're involved in Puppet OpenStack or if you want to be involved,
please look at this poll: http://goo.gl/forms/lsBf55Ru8L
Thanks a lot for your time,
--
Emilien Macchi
Hi,
I keep getting this error (as in the subject) when trying to boot an instance
from an image, no matter whether from the web dashboard or the CLI. First it
shows the status 'scheduling', and after a while it's 'Error'. I checked the
status of all services; nothing has failed. Below are the logs for nova-api and
matt wrote:
> it's a voted position. there just aren't that many operators who vote
> compared to devs and other contributors. to say nothing of
> non-contributors who sign up for accounts to vote for their co-workers.
It wouldn't be as much of an issue if we used Condorcet or STV -- I
would
Hello there,
we were finally able to backport the patch to Juno:
https://github.com/zioproto/cinder/tree/backport-ceph-object-map
we are testing this version. Everything good so far.
this will require in your ceph.conf
rbd default format = 2
rbd default features = 13
if anyone is willing to
Worth trying for sure.
On Tue, Nov 24, 2015 at 6:06 AM, Thierry Carrez
wrote:
> matt wrote:
> > it's a voted position. there just aren't that many operators who vote
> > compared to devs and other contributors. to say nothing of
> > non-contributors who sign up for
We haven’t seen the bad namespaces issue, but we have experienced an issue
where our node eventually started to see soft lockups like these:
kernel: BUG: soft lockup - CPU#0 stuck for 22s!
We noticed it once we hit a high amount of namespaces. It was definitely over
400, as we didn’t realize
Hello Operators,
The ops guide specialty team is looking for operators to help with a 2 day
virtual swarm for the OpenStack Architecture guide. If you are interested,
please sign up and let us know which days work best for you via the doodle
calendar.
http://doodle.com/poll/cakfn9g9bde94yq4