Re: [Openstack] [Keystone] Keystone Middleware Deprecate in-process token cache

2016-04-19 Thread Kuo Hugo
Thanks Adam & Morgan.

Hugo

2016-04-20 5:03 GMT+08:00 Morgan Fainberg :

>
>
> On Tue, Apr 19, 2016 at 1:25 PM, Adam Young  wrote:
>
>> On 04/19/2016 01:55 AM, Kuo Hugo wrote:
>>
>> Hi Keystone Team,
>>
>> We are aware of this deprecation notice in keystone middleware, and I have a
>> couple of questions.
>>
>>
>> https://github.com/openstack/keystonemiddleware/blob/6e58f8620ae60eb4f26984258d15a9823345c310/releasenotes/notes/deprecate-caching-tokens-in-process-a412b0f1dea84cb9.yaml
>>
>> We may need to update the document for Swift.
>>
>> I want to clarify whether the Keystone team plans to deprecate support for
>> the EnvCache (!?), or just the in-memory non-memcache option that is used
>> when you set *neither* memcached_servers *nor* env_cache.
>>
>> Thanks // Hugo
>>
>>
>> ___
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>> Doing proper tagging on subject.
>>
>> just in-memory is deprecated.  Memcache is sticking around.
>>
>> ___
>> Mailing list:
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe :
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>>
> The in-process cache will be going away, but caching is sticking around, as
> will the ENVCache support (provided we can make it jive with oslo.cache).
>
> We want to move to oslo.cache and avoid per-process default caching, which
> produces inconsistent results depending on which process you hit for a given
> endpoint (shared caches only).
>
> --Morgan
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Ring rebuild, multiple copies of ringbuilder file, wasRe: swift ringbuilder and disk size/capacity relationship

2016-04-19 Thread Mark Kirkwood
The proxies and the storage nodes all have a copy of the ring
structure(s), e.g.:


$ ls -l /etc/swift/*.ring.gz
-rw-r--r-- 1 root  nagios 1316 Apr 20 00:31 account.ring.gz
-rw-r--r-- 1 root  nagios 1299 Apr 20 00:31 container.ring.gz
-rw-r--r-- 1 root  nagios 1287 Apr 20 00:31 object.ring.gz

But yeah, suppose you make changes to the ring on (say) one of the
proxies, go make a coffee, then distribute the new rings to the various
machines. There is a period of time when the rings are different on some
machines than on others.


So it is possible that a request for an object initiated by the proxy
where you modified the rings may look for the object on the newly added
device (which does not have anything yet) - it will be served from a
handoff or replica instead (you might get a not-found if you have num
replicas = 1... haven't tried that out though).


I *think* if you modify the ring on a proxy then it won't be able to
force the storage nodes to move an object somewhere it can't be
found (they will look at their own ring version). However, once you have
all the new rings distributed, subsequent replication runs (where storage
servers chatter to their next and previous neighbours) will reorganise
anything that did get moved incorrectly.
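
(For concreteness, the usual add/rebalance/distribute cycle is roughly the
following - a sketch only, with illustrative hostnames and device parameters:)

$ cd /etc/swift
$ swift-ring-builder object.builder add --region 1 --zone 2 \
      --ip 192.168.122.70 --port 6000 --device vdb --weight 100
$ swift-ring-builder object.builder rebalance
$ # then push the regenerated object.ring.gz to every proxy and storage node
$ for h in proxy1 obj1 obj2 obj3 obj4; do scp object.ring.gz $h:/etc/swift/; done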


John can hopefully give you fuller details (I haven't read up on or
tried out all the various scenarios you can clearly dream up). However, I did
do some pretty horrific things (on purpose):


- changing the number of partitions and installing this everywhere (ahem 
- do not do this in a cluster you care about)

- checking that it utterly breaks everything :-(
- copying back the old rings (do back these up)!
- checking that the cluster is working again :-)

So in general, seems pretty robust!

Also, our friend swift-recon can alert you about any problems with
non-matching rings:


markir@proxy1:~$ swift-recon --md5
===
--> Starting reconnaissance on 4 hosts
===
[2016-04-20 16:35:10] Checking ring md5sums
4/4 hosts matched, 0 error[s] while checking hosts.
===
[2016-04-20 16:35:10] Checking swift.conf md5sum
4/4 hosts matched, 0 error[s] while checking hosts.
===

Cheers

Mark



On 20/04/16 02:37, Peter Brouwer wrote:


Hello All

Follow-up question.
Assume a swift cluster with a number of swift proxy nodes; each node
needs to hold a copy of the ring structure, right?
What happens when a disk is added to the ring? After the change is made
on the first proxy node, the ring config files need to be copied to the
other proxy nodes, right?
Is there a risk, during the period that the new ring builder files are
being copied, that a file could be stored using the new structure on one
proxy node and retrieved via another node that still holds the old
structure, returning an object-not-found error? Or the odd chance that an
object has already been moved by the rebalance process while being accessed
by a proxy that still has the old ring structure?




___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] Updating pg_num and pgp_num

2016-04-19 Thread Shinobu Kinjo
Hi Stacker,

Just to make sure before doing some work.

Is it possible to dynamically update pg_num and pgp_num in ceph.conf
using a Heat template?
AFAIK it was not possible before.

Cheers,
Shinobu

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [Keystone] Keystone Middleware Deprecate in-process token cache

2016-04-19 Thread Morgan Fainberg
On Tue, Apr 19, 2016 at 1:25 PM, Adam Young  wrote:

> On 04/19/2016 01:55 AM, Kuo Hugo wrote:
>
> Hi Keystone Team,
>
> We are aware of this deprecation notice in keystone middleware, and I have a
> couple of questions.
>
>
> https://github.com/openstack/keystonemiddleware/blob/6e58f8620ae60eb4f26984258d15a9823345c310/releasenotes/notes/deprecate-caching-tokens-in-process-a412b0f1dea84cb9.yaml
>
> We may need to update the document for Swift.
>
> I want to clarify whether the Keystone team plans to deprecate support for the
> EnvCache (!?), or just the in-memory non-memcache option that is used when you
> set *neither* memcached_servers *nor* env_cache.
>
> Thanks // Hugo
>
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
> Doing proper tagging on subject.
>
> just in-memory is deprecated.  Memcache is sticking around.
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
The in-process cache will be going away, but caching is sticking around, as
will the ENVCache support (provided we can make it jive with oslo.cache).

We want to move to oslo.cache and avoid per-process default caching, which
produces inconsistent results depending on which process you hit for a given
endpoint (shared caches only).
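
(For the Swift docs specifically: the usual way to get a shared cache is to
point auth_token at memcache, or at the cache already in the WSGI environment.
A minimal sketch of the relevant proxy-server.conf bits, with illustrative
addresses:)

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
# either reuse Swift's own memcache client from the WSGI environment...
cache = swift.cache
# ...or point auth_token at memcache directly
memcached_servers = 192.168.122.10:11211,192.168.122.11:11211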

--Morgan
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [Keystone] Keystone Middleware Deprecate in-process token cache

2016-04-19 Thread Adam Young

On 04/19/2016 01:55 AM, Kuo Hugo wrote:

Hi Keystone Team,

We are aware of this deprecation notice in keystone middleware, and I have a
couple of questions.


https://github.com/openstack/keystonemiddleware/blob/6e58f8620ae60eb4f26984258d15a9823345c310/releasenotes/notes/deprecate-caching-tokens-in-process-a412b0f1dea84cb9.yaml

We may need to update the document for Swift.

I want to clarify whether the Keystone team plans to deprecate support for
the EnvCache (!?), or just the in-memory non-memcache option that is used
when you set *neither* memcached_servers *nor* env_cache.


Thanks // Hugo


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Doing proper tagging on subject.

just in-memory is deprecated.  Memcache is sticking around.
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] Manila manage share problem

2016-04-19 Thread Grigoriy Roghkov
Hello,
I am trying to use OpenStack Manila to manage a custom standalone shared file
system that supports the NFS protocol.

Is there any way to use Manila Generic Driver with standalone NFS server?

I tried to execute

manila manage devstack@generic1#GENERIC1 nfs :/folder
--name my_share --description "We manage share." --share_type
my_share_type

but I got an error

2016-04-19 19:46:46.544 ERROR manila.scheduler.manager
[req-e16952a0-084e-454f-a7d2-5d63091b3be4 4d0d3d65f55e4c86a9b92f28cccb7874
a3277de126db4748b5a1bf6153875fb1] Failed to schedule manage_share: No v
alid host was found. Cannot place share
48fa7b52-9769-4d85-a62e-a1109a5c9ee3 on devstack@generic1#GENERIC1.
2016-04-19 19:46:46.611 ERROR oslo_messaging.rpc.dispatcher
[req-e16952a0-084e-454f-a7d2-5d63091b3be4 4d0d3d65f55e4c86a9b92f28cccb7874
a3277de126db4748b5a1bf6153875fb1] Exception during message handlin
g: No valid host was found. Cannot place share
48fa7b52-9769-4d85-a62e-a1109a5c9ee3 on devstack@generic1#GENERIC1.
2016-04-19 19:46:46.611 TRACE oslo_messaging.rpc.dispatcher Traceback (most
recent call last):
2016-04-19 19:46:46.611 TRACE oslo_messaging.rpc.dispatcher   File
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py",
line 138, in _dispatch_and_reply
2016-04-19 19:46:46.611 TRACE oslo_messaging.rpc.dispatcher
incoming.message))
2016-04-19 19:46:46.611 TRACE oslo_messaging.rpc.dispatcher   File
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py",
line 185, in _dispatch
2016-04-19 19:46:46.611 TRACE oslo_messaging.rpc.dispatcher return
self._do_dispatch(endpoint, method, ctxt, args)
2016-04-19 19:46:46.611 TRACE oslo_messaging.rpc.dispatcher   File
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py",
line 127, in _do_dispatch
2016-04-19 19:46:46.611 TRACE oslo_messaging.rpc.dispatcher result =
func(ctxt, **new_args)
2016-04-19 19:46:46.611 TRACE oslo_messaging.rpc.dispatcher   File
"/opt/stack/manila/manila/scheduler/manager.py", line 141, in manage_share
2016-04-19 19:46:46.611 TRACE oslo_messaging.rpc.dispatcher
_manage_share_set_error(self, context, ex, request_spec)
2016-04-19 19:46:46.611 TRACE oslo_messaging.rpc.dispatcher   File
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220,
in __exit__
2016-04-19 19:46:46.611 TRACE oslo_messaging.rpc.dispatcher
self.force_reraise()
2016-04-19 19:46:46.611 TRACE oslo_messaging.rpc.dispatcher   File
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196,
in force_reraise
2016-04-19 19:46:46.611 TRACE oslo_messaging.rpc.dispatcher
six.reraise(self.type_, self.value, self.tb)
2016-04-19 19:46:46.611 TRACE oslo_messaging.rpc.dispatcher   File
"/opt/stack/manila/manila/scheduler/manager.py", line 138, in manage_share
2016-04-19 19:46:46.611 TRACE oslo_messaging.rpc.dispatcher context,
share_ref['host'], request_spec, filter_properties)
2016-04-19 19:46:46.611 TRACE oslo_messaging.rpc.dispatcher   File
"/opt/stack/manila/manila/scheduler/drivers/filter.py", line 458, in
host_passes_filters
2016-04-19 19:46:46.611 TRACE oslo_messaging.rpc.dispatcher raise
exception.NoValidHost(reason=msg)
2016-04-19 19:46:46.611 TRACE oslo_messaging.rpc.dispatcher NoValidHost: No
valid host was found. Cannot place share
48fa7b52-9769-4d85-a62e-a1109a5c9ee3 on devstack@generic1#GENERIC1.
2016-04-19 19:46:46.611 TRACE oslo_messaging.rpc.dispatcher


What value should I choose for devstack@generic1#GENERIC1?
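
(For what it's worth, the host#pool values the scheduler actually knows about
can usually be listed with the admin commands below - just a sketch, assuming a
reasonably recent python-manilaclient:)

manila pool-list      # lists the host@backend#pool names the scheduler can place shares on
manila service-list   # shows whether the manila-share service for that backend is up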

Thanks,
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] OpenStack Python SDK

2016-04-19 Thread Jeremy Stanley
On 2016-04-19 14:30:03 + (+), CHOW Anthony wrote:
> List of OpenStack python clients can be found here:
> 
> https://wiki.openstack.org/wiki/OpenStackClients
[...]

Also see http://developer.openstack.org/ for a broader SDK portal.
-- 
Jeremy Stanley

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Xenial / Mitaka = Instance failed network setup / nova.compute.manager Unauthorized

2016-04-19 Thread Paras pradhan
Hi Eugen,

Thanks. The log says it's an error. Here is the full log:
http://pastebin.com/K1f4pJhB

-Paras.

On Tue, Apr 19, 2016 at 2:05 AM, Eugen Block  wrote:

> Hi Paras,
>
> the option auth_plugin is deprecated (from nova.conf):
>
> ---cut here---
> # Authentication type to load (unknown value)
> # Deprecated group/name - [DEFAULT]/auth_plugin
> auth_type = password
> ---cut here---
>
> But as far as I can tell, you should only get a warning, not an error.
> I've seen some of these warnings in my logs, but it works (I work with
> openSUSE). To get Mitaka working at all, I simply tried to set the same
> options as in my working Liberty configs, and then I searched for
> deprecation warnings and additional options mentioned in the Mitaka guide.
>
> Hope this helps!
>
> Regards,
> Eugen
>
>
> Zitat von Paras pradhan :
>
>
> Can somebody share the nova.conf and neutron.conf from a working Mitaka? I am
>> also following the same guide and ran into a problem.
>>
>> 2016-04-18 16:51:07.982 2447 ERROR nova.api.openstack.extensions
>> NoSuchOptError: no such option in group neutron: auth_plugin
>>
>> Not sure what I did wrong. It happened while launching an instance.
>>
>>
>> Thanks
>>
>> Paras.
>>
>> On Mon, Apr 18, 2016 at 2:46 AM, Nasir Mahmood 
>> wrote:
>>
>> Martinx,
>>>
>>> Glad to see that you were able to dig into the typo issue. I remember I
>>> had to completely clean-reinstall my virtual OpenStack setup for a
>>> POC back in 2015, just because I had misconfigured the MySQL DB connection
>>> information in my neutron.conf.
>>>
>>> Cheers!
>>>
>>>
>>> Regards,
>>> Nasir Mahmood
>>>
>>> On Mon, Apr 18, 2016 at 7:02 AM, Martinx - ジェームズ <
>>> thiagocmarti...@gmail.com> wrote:
>>>
>>> FIXED!! I knew it was a typo somewhere!   LOL



 https://github.com/tmartinx/svauto/commit/40ce6566cd0e6435cf75bb34116b6c3bacbeaf02

 Thank you guys!

 Sorry about the buzz on TWO mail lists...

 At least now we know that Nova fails silently on start-up if
 some things aren't configured accordingly... And there are no verification
 steps to test the communication between Nova and Neutron.

  Mitaka is working now on Xenial! YAY!!

 I'm about to commit changes to enable OpenvSwitch with DPDK and
 multi-node deployments, fully automated!

 Cheers!
 Thiago

 On 17 April 2016 at 21:26, Martinx - ジェームズ 
 wrote:

 On 17 April 2016 at 17:39, Martinx - ジェームズ 
> wrote:
>
> Guys,
>>
>>  I am trying to deploy Mitaka, on top of Ubuntu 16.04, by using the
>> following document:
>>
>>  http://docs.openstack.org/mitaka/install-guide-ubuntu
>>
>>  Yes, I know, the above document is for installing Mitaka on top of
>> Ubuntu 14.04 but, from what I understand, the only difference is that
>> on
>> Xenial, there is no need to add the Ubuntu Mitaka Cloud Archive, since
>> Mitaka is the default on Xenial, so, I can follow that document,
>> right?
>>  =)
>>
>>  At first, OpenStack installation goes okay, without any errors; all
>> services come up, etc... However, it is not possible to launch an
>> Instance.
>>
>>  *** Errors on launching the Instance:
>>
>>  - Right after launching it:
>>
>>  https://paste.ubuntu.com/15902503/
>>
>>  - Spawning it, after Glance finishes the download, similar error a
>> second time:
>>
>>  https://paste.ubuntu.com/15902556/
>>
>>  What am I missing?
>>
>>  Apparently, Nova is not authorized to talk with Neutron but, I am
>> following the docs (maybe it is just a typo somewhere?)...
>>
>>  Also, I have an Ansible automation to deploy it, so it is less error
>> prone. And whoever wants to help me can see how I am deploying it.
>>
>>  I see no error on Admin Dashboard, all services are up.
>>
>>  NOTE: My Ansible playbooks are, in a sense,
>> "docs.openstack.org/mitaka/install-guide-ubuntu fully automated"; they
>> follow it very closely, step by step.
>>
>>  How can I debug this? I mean, how can I try to do what Nova is doing
>> (its connection with Neutron) to make sure that the settings are in place
>> correctly?
>>
>>  Here is how I am installing Mitaka on Xenial:
>>
>> ---
>>  1- Install Ubuntu 16.04 server 64-bit on bare-metal;
>>
>>* Configure your /etc/hostname and /etc/hosts.
>>
>>DETAILS:
>> https://github.com/tmartinx/svauto/blob/dev/README.OpenStack.md
>>
>>
>>  2- Clone the automation:
>>
>> cd ~
>> git clone https://github.com/tmartinx/svauto.git
>>
>>
>>  3- Run the automation to install OpenStack (all-in-one)
>>
>> cd ~/svauto
>>
>>./os-deploy.sh --br-mode=LBR --use-dummies --base-os=ubuntu16
>> --base-os-upgrade=yes --openstack-release=mitaka
>> --openstack-installation
>> --dr

Re: [Openstack] OpenStack Python SDK

2016-04-19 Thread CHOW Anthony
List of OpenStack python clients can be found here:

https://wiki.openstack.org/wiki/OpenStackClients

Happy stacking.

Anthony.

From: Tobias Urdin [mailto:tobias.ur...@crystone.com]
Sent: Tuesday, April 19, 2016 7:06 AM
To: Jean-Pierre Ribeauville
Cc: openstack@lists.openstack.org
Subject: Re: [Openstack] OpenStack Python SDK

Hello Jean-Pierre,

All aspects of OpenStack are fronted by an API service.
Yes, there are Python clients/bindings for all projects.

Best regards
Tobias
On 04/19/2016 11:57 AM, Jean-Pierre Ribeauville wrote:
Hi

Is there a functional equivalent to the oVirt Python SDK for OpenStack?


Thx for help.

Regards,

Jean-Pierre RIBEAUVILLE

+33 1 4717 2049



___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] Ring rebuild, multiple copies of ringbuilder file, wasRe: swift ringbuilder and disk size/capacity relationship

2016-04-19 Thread Peter Brouwer


Hello All

Follow-up question.
Assume a swift cluster with a number of swift proxy nodes; each node
needs to hold a copy of the ring structure, right?
What happens when a disk is added to the ring? After the change is made
on the first proxy node, the ring config files need to be copied to the
other proxy nodes, right?
Is there a risk, during the period that the new ring builder files are
being copied, that a file could be stored using the new structure on one
proxy node and retrieved via another node that still holds the old
structure, returning an object-not-found error? Or the odd chance that an
object has already been moved by the rebalance process while being accessed
by a proxy that still has the old ring structure?



Regards
Peter
On 16/03/2016 00:23, Mark Kirkwood wrote:

On 16/03/16 00:51, Peter Brouwer wrote:


Ah, good info. Follow-up question: assume the worst case (just to emphasize
the situation), one copy (replication = 1), a disk approaching its max
capacity.
How can you monitor this situation, i.e. to avoid the disk-full scenario, and
if the disk is full, what type of error is returned?



Let's do an example: 4 storage nodes (obj1...obj4), each with 1 disk
(vdb) added to the ring. Replication set to 1.


Firstly, write a 1G object (to see where it is going to go)... it lands on
host obj1, disk vdb, partition 1003:


obj1 $ ls -l 
/srv/node/vdb/objects/1003/d31/fae796287c852f0833316a3dadfb3d31/

total 1048580
-rw--- 1 swift swift 1073741824 Mar 16 10:15 1458079557.01198.data


Then remove it

obj1 $ ls -l 
/srv/node/vdb/objects/1003/d31/fae796287c852f0833316a3dadfb3d31/

total 4
-rw--- 1 swift swift 0 Mar 16 10:47 1458078463.80396.ts


...and use up space on obj1/vdb (dd a 29G file into /srv/node/vdb 
somewhere)


obj1 $ df -m|grep vdb
/dev/vdb   30705 29729   977  97% /srv/node/vdb


Add object again (ends up on obj4 instead...handoff node)

obj4 $ ls -l 
/srv/node/vdb/objects/1003/d31/fae796287c852f0833316a3dadfb3d31/

total 1048580
-rw--- 1 swift swift 1073741824 Mar 16 11:06 1458079557.01198.data


So swift is coping with the obj1/vdb disk being too full. Remove again 
and exhaust space on all disks (dd again):


@obj[1-4] $ df -h|grep vdb
/dev/vdb 30G   30G  977M  97% /srv/node/vdb


Now attempt to write 1G object again

swiftclient.exceptions.ClientException:
Object PUT failed:
http://192.168.122.61:8080/v1/AUTH_9a428d5a6f134f829b2a5e4420f512e7/con0/obj0 
503 Service Unavailable



So we get an http 503 to show that the put has failed.


Now, regarding monitoring: out of the box, swift-recon covers this:

proxy1 $ swift-recon -dv
===
--> Starting reconnaissance on 4 hosts
===
[2016-03-16 13:16:54] Checking disk usage now
-> http://192.168.122.63:6000/recon/diskusage: [{u'device': u'vdc', 
u'avail': 32162807808, u'mounted': True, u'used': 33718272, u'size': 
32196526080}, {u'device': u'vdb', u'avail': 1024225280, u'mounted': 
True, u'used': 31172300800, u'size': 32196526080}]
-> http://192.168.122.64:6000/recon/diskusage: [{u'device': u'vdc', 
u'avail': 32162807808, u'mounted': True, u'used': 33718272, u'size': 
32196526080}, {u'device': u'vdb', u'avail': 1024274432, u'mounted': 
True, u'used': 31172251648, u'size': 32196526080}]
-> http://192.168.122.62:6000/recon/diskusage: [{u'device': u'vdc', 
u'avail': 32162807808, u'mounted': True, u'used': 33718272, u'size': 
32196526080}, {u'device': u'vdb', u'avail': 1024237568, u'mounted': 
True, u'used': 31172288512, u'size': 32196526080}]
-> http://192.168.122.65:6000/recon/diskusage: [{u'device': u'vdc', 
u'avail': 32162807808, u'mounted': True, u'used': 33718272, u'size': 
32196526080}, {u'device': u'vdb', u'avail': 1024221184, u'mounted': 
True, u'used': 31172304896, u'size': 32196526080}]

Distribution Graph:
  0%    4 *
 96%    4 *

Disk usage: space used: 124824018944 of 257572208640
Disk usage: space free: 132748189696 of 257572208640
Disk usage: lowest: 0.1%, highest: 96.82%, avg: 48.4617574245%
===

So integrating swift-recon into regular monitoring/alerting
(collectd/nagios or whatever) is one approach (mind you, most folk
already monitor disk usage data... and there is nothing overly special
about ensuring you don't run out of space)!
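
(A minimal sketch of wrapping that for alerting - the parsing and the 90%
threshold below are just an illustration based on the output above:)

#!/bin/sh
# crude check: warn when the fullest disk reported by swift-recon exceeds 90% used
swift-recon -d | sed -n 's/.*highest: \([0-9.]*\)%.*/\1/p' | \
    awk '$1 > 90 { print "WARNING: fullest swift disk is at " $1 "% used" }'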




BTW, thanks for the patience for sticking with me in this.


No worries - a good question (once I finally understood it).

regards

Mark


--
Regards,

Peter Brouwer, Principal Software Engineer,
Oracle Application Integration Engineering.
Phone:  +44 1506 672767, Mobile +44 7720 598 226
E-Mail: peter.brou...@oracle.com


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Re: [Openstack] OpenStack Python SDK

2016-04-19 Thread Tobias Urdin
Hello Jean-Pierre,

All aspects of OpenStack are fronted by an API service.
Yes, there are Python clients/bindings for all projects.

Best regards
Tobias

On 04/19/2016 11:57 AM, Jean-Pierre Ribeauville wrote:
Hi

Is there a functional equivalent to the oVirt Python SDK for OpenStack?


Thx for help.

Regards,

Jean-Pierre RIBEAUVILLE

+33 1 4717 2049



___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] OpenStack Python SDK

2016-04-19 Thread Jean-Pierre Ribeauville
Hi

Is there a functional equivalent to the oVirt Python SDK for OpenStack?


Thx for help.

Regards,

Jean-Pierre RIBEAUVILLE

+33 1 4717 2049


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Xenial / Mitaka = Instance failed network setup / nova.compute.manager Unauthorized

2016-04-19 Thread Eugen Block

Hi Paras,

the option auth_plugin is deprecated (from nova.conf):

---cut here---
# Authentication type to load (unknown value)
# Deprecated group/name - [DEFAULT]/auth_plugin
auth_type = password
---cut here---

But as far as I can tell, you should only get a warning, not an error.
I've seen some of these warnings in my logs, but it works (I work with
openSUSE). To get Mitaka working at all, I simply tried to set the same
options as in my working Liberty configs, and then I searched for
deprecation warnings and additional options mentioned in the Mitaka guide.
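
For comparison, the [neutron] auth options in a Mitaka-era nova.conf look
roughly like this (a sketch with placeholder values - the install guide has
the authoritative list):

---cut here---
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
---cut here---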


Hope this helps!

Regards,
Eugen


Zitat von Paras pradhan :


Can somebody share the nova.conf and neutron.conf from a working Mitaka? I am
also following the same guide and ran into a problem.

2016-04-18 16:51:07.982 2447 ERROR nova.api.openstack.extensions
NoSuchOptError: no such option in group neutron: auth_plugin

Not sure what I did wrong. It happened while launching an instance.


Thanks

Paras.

On Mon, Apr 18, 2016 at 2:46 AM, Nasir Mahmood 
wrote:


Martinx,

Glad to see that you were able to dig into the typo issue. I remember I
had to completely clean-reinstall my virtual OpenStack setup for a
POC back in 2015, just because I had misconfigured the MySQL DB connection
information in my neutron.conf.

Cheers!


Regards,
Nasir Mahmood

On Mon, Apr 18, 2016 at 7:02 AM, Martinx - ジェームズ <
thiagocmarti...@gmail.com> wrote:


FIXED!! I knew it was a typo somewhere!   LOL


https://github.com/tmartinx/svauto/commit/40ce6566cd0e6435cf75bb34116b6c3bacbeaf02

Thank you guys!

Sorry about the buzz on TWO mail lists...

At least now we know that Nova fails silently on start-up if
some things aren't configured accordingly... And there are no verification
steps to test the communication between Nova and Neutron.

 Mitaka is working now on Xenial! YAY!!

I'm about to commit changes to enable OpenvSwitch with DPDK and
multi-node deployments, fully automated!

Cheers!
Thiago

On 17 April 2016 at 21:26, Martinx - ジェームズ 
wrote:


On 17 April 2016 at 17:39, Martinx - ジェームズ 
wrote:


Guys,

 I am trying to deploy Mitaka, on top of Ubuntu 16.04, by using the
following document:

 http://docs.openstack.org/mitaka/install-guide-ubuntu

 Yes, I know, the above document is for installing Mitaka on top of
Ubuntu 14.04 but, from what I understand, the only difference is that on
Xenial, there is no need to add the Ubuntu Mitaka Cloud Archive, since
Mitaka is the default on Xenial, so, I can follow that document, right?
 =)

 At first, OpenStack installation goes okay, without any errors; all
services come up, etc... However, it is not possible to launch
an Instance.


 *** Errors on launching the Instance:

 - Right after launching it:

 https://paste.ubuntu.com/15902503/

 - Spawning it, after Glance finishes the download, similar error a
second time:

 https://paste.ubuntu.com/15902556/

 What am I missing?

 Apparently, Nova is not authorized to talk with Neutron but, I am
following the docs (maybe it is just a typo somewhere?)...

 Also, I have an Ansible automation to deploy it, so it is less error
prone. And whoever wants to help me can see how I am deploying it.

 I see no error on Admin Dashboard, all services are up.

 NOTE: My Ansible playbooks are, in a sense,
"docs.openstack.org/mitaka/install-guide-ubuntu fully automated"; they follow
it very closely, step by step.

 How can I debug this? I mean, how can I try to do what Nova is doing
(its connection with Neutron) to make sure that the settings are in place
correctly?

 Here is how I am installing Mitaka on Xenial:

---
 1- Install Ubuntu 16.04 server 64-bit on bare-metal;

   * Configure your /etc/hostname and /etc/hosts.

   DETAILS:
https://github.com/tmartinx/svauto/blob/dev/README.OpenStack.md


 2- Clone the automation:

cd ~
git clone https://github.com/tmartinx/svauto.git


 3- Run the automation to install OpenStack (all-in-one)

cd ~/svauto

   ./os-deploy.sh --br-mode=LBR --use-dummies --base-os=ubuntu16
--base-os-upgrade=yes --openstack-release=mitaka --openstack-installation
--dry-run

   ansible-playbook -c local site-openstack.yml --extra-vars
"openstack_installation=yes"
---

 NOTE: If you don't use "--dry-run" option, Ansible will be executed
automatically by "os-deploy.sh".

 I am sharing the Ansible playbooks, because it will be easier to see
what I am doing.

 About the relevant configuration blocks, I believe that I have it
properly configured (I followed Mitaka docs), like this:

 * neutron.conf:
-

https://github.com/tmartinx/svauto/blob/dev/ansible/roles/os_neutron_aio/templates/mitaka/neutron.conf
-

 * nova.conf:
-

https://github.com/tmartinx/svauto/blob/dev/ansible/roles/os_nova_aio/templates/mitaka/nova.conf
-

 I have already installed OpenStack many, many times since the Havana
release, so I'm confident that I am doing it right, but of course maybe I did
something wrong this time...   =P

 I appreciate any help!

Thanks!
Thiago



Hey guys,

 I a