The error info is:
CRITICAL nova [None req-a84d278b-43db-4c94-864b-7a9733aa772c None None]
Unhandled error: IOError: [Errno 13] Permission denied: '/etc/nova/policy.json'
ERROR nova Traceback (most recent call last):
ERROR nova   File "/usr/bin/nova-compute", line 10, in <module>
ERROR nova sys.exit(
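A permission error like this usually just means the file isn't readable by
the user the service runs as. A minimal way to check and fix it, assuming the
nova services run as the nova user (the ownership and mode shown are
illustrative, not prescriptive):

    # check current ownership and mode
    ls -l /etc/nova/policy.json
    # make the file readable by the nova group
    chown root:nova /etc/nova/policy.json
    chmod 640 /etc/nova/policy.json

After that, restart nova-compute and see if the traceback goes away.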
Hey Allison,
I have a few comments below about the Cinder drivers. Would love to hear
everyone's input too.
On Thu, Mar 29, 2018 at 12:22:05PM -0500, Allison Price wrote:
> Hi everyone,
>
> We are opening the OpenStack User Survey submission process next month and
> wanted to collect operator
In hindsight, it would have been much more fun if the R release had been named Ramm :P
On Fri, Mar 30, 2018 at 3:10 AM, Paul Belanger
wrote:
> Hi everybody!
>
> As the subject reads, the "S" release of OpenStack is officially "Stein".
> As
> has been the case with previous elections, this wasn't the first choice; that
> was "Solar".
Allison,
In the past, there has been some confusion on the ML2 driver since many of the
drivers are both ML2-based and have specific drivers. Did you have an approach
in mind for this time?
It does mean that the results won't be directly comparable, but cleaning up
this confusion would seem worth it.
Hi everybody!
As the subject reads, the "S" release of OpenStack is officially "Stein". As
has been the case with previous elections, this wasn't the first choice; that
was "Solar".
Solar was judged to have legal risk, so as per our name selection process, we
moved to the next name on the list.
Thanks to ever
On Thu, 29 Mar 2018, iain MacDonnell wrote:
If I'm reading
http://modwsgi.readthedocs.io/en/develop/user-guides/processes-and-threading.html
right, it seems that the MPM is not pertinent when using WSGIDaemonProcess.
It doesn't impact the number of WSGI processes that will exist or how
they are
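For illustration: in daemon mode it's the processes/threads on
WSGIDaemonProcess that set the request concurrency, not the MPM. A minimal
sketch (the group name, the counts, and the script path are placeholders; the
path varies by distro):

    # daemon mode: 4 processes x 8 threads serve placement requests
    WSGIDaemonProcess placement user=nova group=nova processes=4 threads=8
    WSGIScriptAlias /placement /usr/bin/nova-placement-api
    <Location /placement>
        # route requests to the daemon process group above
        WSGIProcessGroup placement
        WSGIApplicationGroup %{GLOBAL}
    </Location>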
On 03/29/2018 04:24 AM, Chris Dent wrote:
On Thu, 29 Mar 2018, Belmiro Moreira wrote:
[lots of great advice snipped]
- Change apache mpm default from prefork to event/worker.
- Increase the number of WSGI processes/threads, considering where placement
is running.
If I'm reading
http://mo
On 3/29/2018 12:05 PM, Chris Dent wrote:
Other suggestions? I'm looking at things like turning off
scheduler_tracks_instance_changes, since affinity scheduling is not
needed (at least so far), but I'm not sure that will help with
placement load (it seems like it might, though?)
This won't impac
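For reference, turning that option off is a one-line nova.conf change. A
sketch for Ocata, where it still lives in [DEFAULT] (newer releases renamed
it to track_instance_changes under [filter_scheduler]):

    [DEFAULT]
    # stop compute nodes from pushing instance info to the scheduler
    scheduler_tracks_instance_changes = False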
On 3/29/2018 3:36 AM, Tony Breeds wrote:
Hi all,
At Sydney we started the process of change on the stable branches.
Recently we merged a TC resolution[1] to alter the EOL process. The
next step is refining the stable policy itself.
I've created a review to do that. I think it covers mos
Hi everyone,
We are opening the OpenStack User Survey submission process next month and
wanted to collect operator feedback on the answer choices for three particular
questions: Identity Service (Keystone) drivers, Network (Neutron) drivers and
Block Storage (Cinder) drivers. We want to make su
On Thu, 29 Mar 2018, iain MacDonnell wrote:
placement python stack and kicks out the 401. So this mostly
indicates that socket accept is taking forever.
Well, this test connects and gets a 400 immediately:
echo | nc -v apihost 8778
so I don't think it's at the socket level, but, I assume,
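One way to separate connect time from total request time is a curl probe
against the same endpoint as the nc test above (the 400/401 response itself
doesn't matter here):

    # -w prints timing after the request completes
    curl -s -o /dev/null \
         -w 'connect: %{time_connect}s  total: %{time_total}s\n' \
         http://apihost:8778/

If connect is fast but total is slow, the time is going into the WSGI stack
rather than socket accept.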
On 03/29/2018 01:19 AM, Chris Dent wrote:
On Wed, 28 Mar 2018, iain MacDonnell wrote:
Looking for recommendations on tuning of nova-placement-api. I have a
few moderately-sized deployments (~200 nodes, ~4k instances),
currently on Ocata, and instance creation is getting very slow as they
fi
On Thu, 29 Mar 2018, Belmiro Moreira wrote:
[lots of great advice snipped]
- Change apache mpm default from prefork to event/worker.
- Increase the number of WSGI processes/threads, considering where placement
is running.
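On Debian/Ubuntu-style Apache packaging, for example, the MPM change above is
roughly this (module names and commands vary by distro):

    # replace the prefork MPM with the event MPM
    a2dismod mpm_prefork
    a2enmod mpm_event
    systemctl restart apache2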
Another option is to switch to nginx and uwsgi. In situations where
the
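As a sketch of that alternative, a minimal uwsgi config (the WSGI script path
and the counts are illustrative; nginx would then proxy to this port):

    [uwsgi]
    # the placement WSGI entry point; path varies by distro
    wsgi-file = /usr/bin/nova-placement-api
    processes = 4
    threads = 10
    # serve HTTP directly on the placement port
    http-socket = :8778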
Hi all,
I want to use the SPICE console in place of noVNC for instances, but the
OpenStack documentation is a bit sparse on what configuration parameters to
enable for SPICE console access. The result is that the nova-compute and
nova-consoleauth services failed, and the log tells me the "
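For reference, a minimal nova.conf sketch for SPICE on a compute node (the
addresses are placeholders, and the option names should be confirmed against
the docs for your release; nova-spicehtml5proxy also has to be running on the
controller):

    [vnc]
    # disable noVNC so SPICE becomes the console
    enabled = false

    [spice]
    enabled = true
    agent_enabled = true
    # where the browser reaches the HTML5 proxy
    html5proxy_base_url = http://CONTROLLER_IP:6082/spice_auto.html
    # address qemu listens on / address the proxy connects to
    server_listen = 0.0.0.0
    server_proxyclient_address = COMPUTE_IP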
Hi,
with the Ocata upgrade we decided to run local placements (one service per
cellV1) because we were nervous about possible scalability issues, but
especially about the increase in scheduling time. Fortunately, this has now
been addressed with the placement-req-filter work.
We started slowly to aggregate our
On Wed, 28 Mar 2018, iain MacDonnell wrote:
Looking for recommendations on tuning of nova-placement-api. I have a few
moderately-sized deployments (~200 nodes, ~4k instances), currently on Ocata,
and instance creation is getting very slow as they fill up.
This should be well within the capabi