[Openstack-operators] Install Guide for Ironic driver

2014-09-16 Thread Alvise Dorigo

Hello,
can someone point me to a documentation URL for installing the Ironic driver
in an IceHouse release?


I can't find anything here: 
http://docs.openstack.org/icehouse/install-guide/install/yum/content/


thank you,

Alvise
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Install Guide for Ironic driver

2014-09-17 Thread Alvise Dorigo

Thank you Anne,
I have a question about that doc. In the middle it says:

"Configure Compute Service to use the Bare Metal Service
The Compute Service needs to be configured to use the Bare Metal 
Service’s driver. The configuration file for the Compute Service is 
typically located at /etc/nova/nova.conf. This configuration file must 
be modified on the Compute Service’s controller nodes and compute nodes."


I'm puzzled: if I configure the controller node's nova.conf with

compute_driver=nova.virt.ironic.IronicDriver

I presume that I can no longer instantiate 'regular' virtual machines,
only real nodes. Am I wrong?


Can the two methods (virtual and bare metal) be mixed in the same cloud?
I mean: instantiate virtual machines on some compute nodes, and bare metal
using other compute nodes?


thank you,

Alvise
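[For reference: mixing the two is usually done by dedicating compute hosts to one driver or the other, since compute_driver is set per nova-compute host, not globally; the scheduler can then be steered to the right hosts, e.g. with host aggregates and flavor extra_specs. A hedged sketch, not taken from the cited guide:

```ini
# nova.conf on a compute node dedicated to bare metal (sketch):
[DEFAULT]
compute_driver = nova.virt.ironic.IronicDriver

# nova.conf on the other compute nodes keeps the virtualization driver:
# compute_driver = nova.virt.libvirt.LibvirtDriver
```
]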


On 09/16/2014 03:40 PM, Anne Gentle wrote:

Hi Alvise,

You want a more specific guide, 
http://docs.openstack.org/developer/ironic/deploy/install-guide.html 
is for you.


Anne Gentle
Content Stacker
a...@openstack.org


On Sep 16, 2014, at 2:53 AM, Alvise Dorigo <alvise.dor...@pd.infn.it> wrote:



Hello,
can someone point me to a documentation URL to install the Ironic 
driver in a IceHouse release ?


I can't find anything here: 
http://docs.openstack.org/icehouse/install-guide/install/yum/content/


thank you,

Alvise




[Openstack-operators] Problem creating resizable CentOS 6.5 image

2014-10-03 Thread Alvise Dorigo

Hi,
I'm creating a CentOS 6.5 image with Oz, following the guide here:

http://docs.openstack.org/image-guide/content/ch_openstack_images.html

In particular I made sure that the kickstart creates only one partition
("/") which fills all the available initial image space. Then I made
sure to install the three packages


cloud-init
cloud-utils
cloud-utils-growpart

as clearly mentioned in the web page linked above.

When I launch the image with the small flavor (20 GB disk size), its "/"
file system is only 2 GB.


What am I doing wrong?

thanks,

Alvise
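[One piece that is easy to miss on CentOS 6 (an assumption on my part, not confirmed in the thread): on RHEL 6 kernels the root partition can only be grown before "/" is mounted, which requires the dracut-modules-growroot package (from EPEL) in addition to cloud-utils-growpart, so that the initramfs resizes the partition at boot. The cloud-init side would then look roughly like:

```yaml
# /etc/cloud/cloud.cfg fragment (sketch of the relevant keys):
growpart:
  mode: auto
  devices: ['/']
resize_rootfs: true
```
]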


P.S. In the following the kickstart and the oz-template files:

= KICKSTART =
install
url --url http://mirror3.mirror.garr.it/mirrors/CentOS/6/os/x86_64/
text
key --skip
keyboard it
lang en_US.UTF-8
skipx
network --bootproto dhcp
rootpw --plaintext XXX

# authconfig
authconfig --enableshadow --enablemd5

selinux --disabled
#service --enabled=ssh
timezone --utc Europe/Rome

bootloader --location=mbr --append="console=tty0 console=ttyS0,115200"
zerombr yes
clearpart --all --initlabel

part / --size=200 --grow
#part / --size=1 --grow

reboot

%packages
@core
@base

= OZ TEMPLATE =

<template>
  <name>centos65_x86_64</name>
  <description>CentOS Linux 6.5 x86_64 template</description>
  <disk>
    <size>2G</size>
  </disk>
  <os>
    <name>CentOS-6</name>
    <version>5</version>
    <arch>x86_64</arch>
    <install type='iso'>
      <iso>file:///Images/CentOS_Mirror/CentOS-6.4-x86_64-minimal.iso</iso>
    </install>
  </os>
  <files>
    <file name='/etc/sysconfig/network'>
NETWORKING=yes
NOZEROCONF=yes
    </file>
    <file name='/etc/sysconfig/network-scripts/ifcfg-eth0'>
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
TYPE=Ethernet
    </file>
  </files>
  <repositories>
    <repository name='epel'>
      <url>http://vesta.informatik.rwth-aachen.de/ftp/pub/Linux/fedora-epel/6/$basearch</url>
      <signed>False</signed>
    </repository>
  </repositories>
  <packages>
  </packages>
  <commands>
    <command name='clean-persistent-net'>
echo -n > /etc/udev/rules.d/70-persistent-net.rules
echo -n > /lib/udev/rules.d/75-persistent-net-generator.rules
    </command>
    <command name='epel-release'>
rpm --import http://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL
rpm --import http://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-6
rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
    </command>
    <command name='ec2-user'>
adduser ec2-user -G adm,wheel
    </command>
    <command name='sudoers'>
echo "%wheel ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
    </command>
    <command name='root-password'>
/usr/bin/passwd -d root || :
/usr/bin/passwd -l root || :
    </command>
    <command name='iptables'>
iptables -F
echo -n > /etc/sysconfig/iptables
    </command>
  </commands>
</template>









[Openstack-operators] multiple subnets in a single network

2014-10-29 Thread Alvise Dorigo
Hi,
I’ve tried to put two different subnets (10.0.0.0/24 and 11.0.0.0/24) in the
same network created with Neutron. The system lets me do that. But when I
create a VM and attach it to the network with the 2 subnets, the DHCP apparently
always assigns an IP from the second subnet, and the VM is unable to
contact the metadata IP (169.254.169.254).

I’m not a great networking expert, so I’m wondering whether it makes any
sense to create two subnets like this, why the system doesn’t complain about it,
and why, after allowing me to do that, the metadata server doesn’t work.

Can anybody explain this to me?

thanks a lot!

   Alvise
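[One knob that sometimes matters here (a hedged suggestion; it mainly applies when a subnet has no route to the metadata service via a router): with isolated metadata enabled, the DHCP agent itself answers on 169.254.169.254 and pushes a host route for it to the instances:

```ini
# /etc/neutron/dhcp_agent.ini (sketch)
enable_isolated_metadata = True
```
]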


[Openstack-operators] Glance stopped working

2014-11-13 Thread Alvise Dorigo

Hi,
I'm installing Havana OpenStack on SL 6.5 (without updating to 6.6, and
with yum-autoupdate disabled). Until some months ago there was no problem. Today
glance-api, installed from scratch, is crashing with this error:


2014-11-13 13:25:21.443 7833 CRITICAL glance [-] Expecting , delimiter: line 5 column 5 (char 103)

2014-11-13 13:25:21.443 7833 TRACE glance Traceback (most recent call last):
2014-11-13 13:25:21.443 7833 TRACE glance   File "/usr/bin/glance-api", line 10, in <module>
2014-11-13 13:25:21.443 7833 TRACE glance     sys.exit(main())
2014-11-13 13:25:21.443 7833 TRACE glance   File "/usr/lib/python2.6/site-packages/glance/cmd/api.py", line 60, in main
2014-11-13 13:25:21.443 7833 TRACE glance     glance.store.verify_default_store()
2014-11-13 13:25:21.443 7833 TRACE glance   File "/usr/lib/python2.6/site-packages/glance/store/__init__.py", line 197, in verify_default_store
2014-11-13 13:25:21.443 7833 TRACE glance     context = glance.context.RequestContext()
2014-11-13 13:25:21.443 7833 TRACE glance   File "/usr/lib/python2.6/site-packages/glance/context.py", line 46, in __init__
2014-11-13 13:25:21.443 7833 TRACE glance     self.policy_enforcer.check_is_admin(self)
2014-11-13 13:25:21.443 7833 TRACE glance   File "/usr/lib/python2.6/site-packages/glance/api/policy.py", line 155, in check_is_admin
2014-11-13 13:25:21.443 7833 TRACE glance     return self.check(context, 'context_is_admin', target)
2014-11-13 13:25:21.443 7833 TRACE glance   File "/usr/lib/python2.6/site-packages/glance/api/policy.py", line 145, in check
2014-11-13 13:25:21.443 7833 TRACE glance     return self._check(context, action, target)
2014-11-13 13:25:21.443 7833 TRACE glance   File "/usr/lib/python2.6/site-packages/glance/api/policy.py", line 115, in _check
2014-11-13 13:25:21.443 7833 TRACE glance     self.load_rules()
2014-11-13 13:25:21.443 7833 TRACE glance   File "/usr/lib/python2.6/site-packages/glance/api/policy.py", line 67, in load_rules
2014-11-13 13:25:21.443 7833 TRACE glance     rules = self._read_policy_file()
2014-11-13 13:25:21.443 7833 TRACE glance   File "/usr/lib/python2.6/site-packages/glance/api/policy.py", line 99, in _read_policy_file
2014-11-13 13:25:21.443 7833 TRACE glance     rules_dict = json.loads(raw_contents)
2014-11-13 13:25:21.443 7833 TRACE glance   File "/usr/lib64/python2.6/json/__init__.py", line 307, in loads
2014-11-13 13:25:21.443 7833 TRACE glance     return _default_decoder.decode(s)
2014-11-13 13:25:21.443 7833 TRACE glance   File "/usr/lib64/python2.6/json/decoder.py", line 319, in decode
2014-11-13 13:25:21.443 7833 TRACE glance     obj, end = self.raw_decode(s, idx=_w(s, 0).end())
2014-11-13 13:25:21.443 7833 TRACE glance   File "/usr/lib64/python2.6/json/decoder.py", line 336, in raw_decode
2014-11-13 13:25:21.443 7833 TRACE glance     obj, end = self._scanner.iterscan(s, **kw).next()
2014-11-13 13:25:21.443 7833 TRACE glance   File "/usr/lib64/python2.6/json/scanner.py", line 55, in iterscan
2014-11-13 13:25:21.443 7833 TRACE glance     rval, next_pos = action(m, context)
2014-11-13 13:25:21.443 7833 TRACE glance   File "/usr/lib64/python2.6/json/decoder.py", line 193, in JSONObject
2014-11-13 13:25:21.443 7833 TRACE glance     raise ValueError(errmsg("Expecting , delimiter", s, end - 1))
2014-11-13 13:25:21.443 7833 TRACE glance ValueError: Expecting , delimiter: line 5 column 5 (char 103)
2014-11-13 13:25:21.443 7833 TRACE glance

Any idea?

Alvise
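[For what it's worth, that "Expecting , delimiter" message comes from json.loads, so the usual culprit is a hand-edited /etc/glance/policy.json with a missing comma near line 5. A minimal sketch of how to pinpoint it; the broken_policy content below is hypothetical, not the actual file:

```python
import json

# Hypothetical policy file with the same kind of defect Glance hit:
# a comma is missing after the "default" entry.
broken_policy = """{
    "context_is_admin": "role:admin",
    "default": ""
    "add_image": ""
}"""

def validate_policy(text):
    """Return None if the JSON parses, otherwise the ValueError raised."""
    try:
        json.loads(text)
        return None
    except ValueError as err:
        return err

error = validate_policy(broken_policy)
if error is not None:
    # In practice you would read /etc/glance/policy.json instead.
    print("policy file is invalid:", error)
```
]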



[Openstack-operators] Problem with migration from Havana to IceHouse

2015-01-21 Thread Alvise Dorigo

Hi,
I have a Havana IaaS composed of:

1 controller node
1 network node
1 compute node

using Neutron for networking and RabbitMQ as the AMQP broker.

The network node runs the agents (DHCP, L3, openvswitch, metadata). The
controller runs Keystone, Glance, the Nova APIs, and the Neutron server.


The compute node runs Nova Compute.

I've followed this guide: 
http://docs.openstack.org/openstack-ops/content/upgrades_havana-icehouse-rhel.html 
to perform a migration from Havana to IceHouse.


After the migration (without any apparent error) I get a lot of
AMQP-related errors from several services.


For example in the neutron server's log I see:

http://pastebin.com/2pSpP4mu

In the nova api's log I see:

http://pastebin.com/uCzyhxb6

and a similar one for Nova conductor:

http://pastebin.com/3B0jvT6i

Did I miss something? I didn't see anything directly related to AMQP in
the guide cited above.


thanks,

Alvise


[Openstack-operators] Error migrating Neutron from Havana to IceHouse

2015-02-02 Thread Alvise Dorigo

Hi,
I have a Havana installation of OpenStack with Neutron.

I've tried the migration from Havana to IceHouse by following this guide:

http://docs.openstack.org/openstack-ops/content/upgrades_havana-icehouse-rhel.html

Keystone, Glance and Nova migrated correctly (the services are
running and respond to command-line queries).


For neutron I got a problem at the very last step:

neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file 
/etc/neutron/plugins/ml2/ml2_conf.ini upgrade icehouse

[...]
sqlalchemy.exc.ProgrammingError: (ProgrammingError) (1146, "Table 
'neutron.ml2_port_bindings' doesn't exist") "ALTER TABLE 
ml2_port_bindings ADD COLUMN vnic_type VARCHAR(64) DEFAULT 'normal' NOT 
NULL" ()


Did I miss something that would have created that missing table?
Can someone provide some help?

thanks,

   Alvise


Re: [Openstack-operators] Error migrating Neutron from Havana to IceHouse

2015-02-02 Thread Alvise Dorigo

> On 02 Feb 2015, at 20:46, Jesse Keating  wrote:
> 
> On 2/2/15 5:56 AM, Alvise Dorigo wrote:
>> For neutron I got a problem at the very last step:
>> 
>> neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file
>> /etc/neutron/plugins/ml2/ml2_conf.ini upgrade icehouse
>> [...]
>> sqlalchemy.exc.ProgrammingError: (ProgrammingError) (1146, "Table
>> 'neutron.ml2_port_bindings' doesn't exist") "ALTER TABLE
>> ml2_port_bindings ADD COLUMN vnic_type VARCHAR(64) DEFAULT 'normal' NOT
>> NULL" ()
>> 
>> Did I miss something which would have created that missing table ?
>> Can someone provide some help ?
> 
> Were you already using ml2 plugin before upgrading?

No, I wasn’t.

Alvise

> 
> -- 
> -jlk
> 




Re: [Openstack-operators] Error migrating Neutron from Havana to IceHouse

2015-02-02 Thread Alvise Dorigo

> On 02 Feb 2015, at 20:54, Kris G. Lindgren  wrote:
> 
> That's what I was thinking as well.  I looked at the install guide and it
> says to set the stamp using your existing config, then run the db upgrade
> using the new config

Hmm, this is probably what I did wrong!
I ran the command "… stamp havana" with the new conf file.

Thank you,

   Alvise

> - then do a db migration after db upgrade.  I thought
> I had read some where that you had to migrate first then upgrade.  We were
> already running ml2 - so I didn't have to do that step.
> 
> 
> Kris Lindgren
> Senior Linux Systems Engineer
> GoDaddy, LLC.
> 
> 
> 
> On 2/2/15, 12:46 PM, "Jesse Keating"  wrote:
> 
>> On 2/2/15 5:56 AM, Alvise Dorigo wrote:
>>> For neutron I got a problem at the very last step:
>>> 
>>> neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file
>>> /etc/neutron/plugins/ml2/ml2_conf.ini upgrade icehouse
>>> [...]
>>> sqlalchemy.exc.ProgrammingError: (ProgrammingError) (1146, "Table
>>> 'neutron.ml2_port_bindings' doesn't exist") "ALTER TABLE
>>> ml2_port_bindings ADD COLUMN vnic_type VARCHAR(64) DEFAULT 'normal' NOT
>>> NULL" ()
>>> 
>>> Did I miss something which would have created that missing table ?
>>> Can someone provide some help ?
>> 
>> Were you already using ml2 plugin before upgrading?
>> 
>> -- 
>> -jlk
>> 
> 
> 




Re: [Openstack-operators] Error migrating Neutron from Havana to IceHouse

2015-02-04 Thread Alvise Dorigo

Thank you guys,
in fact I was following the procedure incorrectly: I had to stamp with the
old conf file, but I was using the new one.

Now it works well!

Alvise
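[For the archives, the working order sketched out; the openvswitch plugin path below is an assumption for the pre-upgrade config, so adjust it to whatever plugin was actually in use under Havana:

```shell
# 1) Stamp the database as Havana using the OLD (pre-upgrade) plugin config:
neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini \
  stamp havana

# 2) Then upgrade to Icehouse using the NEW (ml2) config:
neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini \
  upgrade icehouse
```
]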

On 02/02/2015 08:54 PM, Kris G. Lindgren wrote:

That's what I was thinking as well.  I looked at the install guide and it
says to set the stamp using your existing config, then run the db upgrade
using the new config - then do a db migration after db upgrade.  I thought
I had read some where that you had to migrate first then upgrade.  We were
already running ml2 - so I didn't have to do that step.

  
Kris Lindgren

Senior Linux Systems Engineer
GoDaddy, LLC.



On 2/2/15, 12:46 PM, "Jesse Keating"  wrote:


On 2/2/15 5:56 AM, Alvise Dorigo wrote:

For neutron I got a problem at the very last step:

neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file
/etc/neutron/plugins/ml2/ml2_conf.ini upgrade icehouse
[...]
sqlalchemy.exc.ProgrammingError: (ProgrammingError) (1146, "Table
'neutron.ml2_port_bindings' doesn't exist") "ALTER TABLE
ml2_port_bindings ADD COLUMN vnic_type VARCHAR(64) DEFAULT 'normal' NOT
NULL" ()

Did I miss something which would have created that missing table ?
Can someone provide some help ?

Were you already using ml2 plugin before upgrading?

--
-jlk








[Openstack-operators] How to force Heat to use v2.0 Keystone

2015-02-17 Thread Alvise Dorigo

Hi,
I have an IceHouse installation with v2.0 Keystone. All services run
correctly except Heat, which tries to authenticate against the non-existent
endpoint https://cloud-areapd-test.pd.infn.it:5000/v3/auth/tokens. In
fact only v2 is configured (and we cannot reconfigure the whole OpenStack
installation in the short term):


[dorigoa@lxadorigo ~]$ cat keystone_admin.sh
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=
export OS_AUTH_URL=https://:5000/v2.0/
export OS_CACERT=/etc/grid-security/certificates/INFN-CA-2006.pem


[dorigoa@lxadorigo ~]$ heat  -k stack-create -f test-stack.yml   -P 
"ImageID=cirros;NetID=$NET_ID" testStac
ERROR: Property error : server1: image Authorization failed: SSL 
exception connecting to 
https://cloud-areapd-test.pd.infn.it:5000/v3/auth/tokens



A strange thing (at least to me) is that another heat command, heat
-k stack-list, doesn't trigger the problem.


Any idea?

BTW: why do I have to force the use of "-k" even though the OS_CACERT env
var is set (and works correctly for nova/glance/neutron/cinder)?


thanks,

Alvise





Re: [Openstack-operators] How to force Heat to use v2.0 Keystone

2015-02-17 Thread Alvise Dorigo

Hi Christian; here the info you've requested:

[root@controller-01 ~]# grep auth_uri /etc/heat/heat.conf 
/etc/glance/glance-api.conf
/etc/heat/heat.conf:# Allowed keystone endpoints for auth_uri when 
multi_cloud is

/etc/heat/heat.conf:#allowed_auth_uris=
/etc/heat/heat.conf:auth_uri = 
https://cloud-areapd-test.pd.infn.it:5000/v2.0

/etc/heat/heat.conf:#auth_uri=http://127.0.0.1:5000/v2.0
/etc/heat/heat.conf:# Allowed keystone endpoints for auth_uri when 
multi_cloud is

/etc/heat/heat.conf:#allowed_auth_uris=
/etc/heat/heat.conf:auth_uri = 
https://cloud-areapd-test.pd.infn.it:5000/v2.0

/etc/heat/heat.conf:#auth_uri=
/etc/glance/glance-api.conf:auth_uri = 
https://cloud-areapd-test.pd.infn.it:35357/v2.0


Is the problem the TCP port mismatch?

A.
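[As a reference point (general Keystone convention, not specific to this deployment): 5000 is the public API port and 35357 the admin port, and the auth_token middleware normally points auth_uri at the public endpoint, so the glance-api.conf entry above is the odd one out:

```ini
# /etc/glance/glance-api.conf (sketch): use the public port for auth_uri
[keystone_authtoken]
auth_uri = https://cloud-areapd-test.pd.infn.it:5000/v2.0
```
]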

On 02/17/2015 04:38 PM, Christian Berendt wrote:

On 02/17/2015 04:11 PM, Alvise Dorigo wrote:

[dorigoa@lxadorigo ~]$ heat  -k stack-create -f test-stack.yml   -P
"ImageID=cirros;NetID=$NET_ID" testStac
ERROR: Property error : server1: image Authorization failed: SSL
exception connecting to
https://cloud-areapd-test.pd.infn.it:5000/v3/auth/tokens

What value do you have for the auth_uri parameter in the
/etc/heat/heat.conf file? Same for the /etc/glance/glance-api.conf file.

Christian.






Re: [Openstack-operators] How to force Heat to use v2.0 Keystone

2015-02-18 Thread Alvise Dorigo

Hi Chris,
granted that I'm not a great Python expert, I tried to install it
without success:


[root@controller-01 heat]# ls -l
total 16
drwxr-xr-x 2 root root 4096 Feb 18 09:00 heat_keystoneclient_v2
-rw-r--r-- 1 root root  801 Feb 18 08:58 README.md
-rw-r--r-- 1 root root  789 Feb 18 08:59 setup.cfg
-rw-r--r-- 1 root root 1045 Feb 18 08:59 setup.py
[root@controller-01 heat]# python ./setup.py install
error in setup command: Error parsing /root/heat/setup.cfg: Exception: 
Versioning for this project requires either an sdist tarball, or access 
to an upstream git repository. Are you sure that git is installed?

[root@controller-01 heat]# git --version
git version 1.7.1


Do you suggest simply substituting the "stock" heat file with the new
heat_keystoneclient_v2/client.py?

The stock file is
/usr/lib/python2.6/site-packages/heat/common/heat_keystoneclient.py,
isn't it?


thanks,

Alvise


On 02/17/2015 07:46 PM, Chris Buccella wrote:

For Icehouse and Juno, you'll want to use the Keystone v2 plugin for heat:

https://git.openstack.org/cgit/openstack/heat/tree/contrib/heat_keystoneclient_v2


-Chris

On Tue, Feb 17, 2015 at 10:51 AM, Alvise Dorigo
<alvise.dor...@pd.infn.it> wrote:


Hi Christian; here the info you've requested:

[root@controller-01 ~]# grep auth_uri /etc/heat/heat.conf
/etc/glance/glance-api.conf
/etc/heat/heat.conf:# Allowed keystone endpoints for auth_uri when
multi_cloud is
/etc/heat/heat.conf:#allowed_auth_uris=
/etc/heat/heat.conf:auth_uri =
https://cloud-areapd-test.pd.infn.it:5000/v2.0
/etc/heat/heat.conf:#auth_uri=http://127.0.0.1:5000/v2.0
/etc/heat/heat.conf:# Allowed keystone endpoints for auth_uri when
multi_cloud is
/etc/heat/heat.conf:#allowed_auth_uris=
/etc/heat/heat.conf:auth_uri =
https://cloud-areapd-test.pd.infn.it:5000/v2.0
/etc/heat/heat.conf:#auth_uri=
/etc/glance/glance-api.conf:auth_uri =
https://cloud-areapd-test.pd.infn.it:35357/v2.0

is the problem in the TCP port mismatch ?

A.


On 02/17/2015 04:38 PM, Christian Berendt wrote:

    On 02/17/2015 04:11 PM, Alvise Dorigo wrote:

[dorigoa@lxadorigo ~]$ heat  -k stack-create -f
test-stack.yml   -P
"ImageID=cirros;NetID=$NET_ID" testStac
ERROR: Property error : server1: image Authorization
failed: SSL
exception connecting to
https://cloud-areapd-test.pd.infn.it:5000/v3/auth/tokens

What value do you have for the auth_uri parameter in the
/etc/heat/heat.conf file? Same for the
/etc/glance/glance-api.conf file.

Christian.










Re: [Openstack-operators] How to force Heat to use v2.0 Keystone

2015-02-18 Thread Alvise Dorigo

I had an error in the repository clone.
Now I've successfully installed the plugin (python setup.py install).

The backend is correctly configured in the heat.conf:

[root@controller-02 ~]# grep keystone_backend /etc/heat/heat.conf
#keystone_backend=heat.common.heat_keystoneclient.KeystoneClientV3
keystone_backend=heat.engine.plugins.heat_keystoneclient_v2.client.KeystoneClientV2

as described in the README.md

But the heat engine dies right after starting:

[root@controller-02 ~]# tail /var/log/heat/heat-engine.log
2015-02-18 10:23:30.513 27308 WARNING heat.common.config [-] The 
"instance_user" option in heat.conf is deprecated and will be removed in 
the Juno release.
2015-02-18 10:23:31.018 27308 ERROR heat.common.plugin_loader [-] Failed 
to import module heat.engine.plugins.heat_keystoneclient_v2.client
2015-02-18 10:23:31.019 27308 CRITICAL heat [-] ImportError: No module 
named utils


Any idea?

Alvise

On 02/18/2015 09:06 AM, Alvise Dorigo wrote:

Hi Chris,
provided that I'm not a great expert of python, I tried to install it 
without success:


[root@controller-01 heat]# ls -l
total 16
drwxr-xr-x 2 root root 4096 Feb 18 09:00 heat_keystoneclient_v2
-rw-r--r-- 1 root root  801 Feb 18 08:58 README.md
-rw-r--r-- 1 root root  789 Feb 18 08:59 setup.cfg
-rw-r--r-- 1 root root 1045 Feb 18 08:59 setup.py
[root@controller-01 heat]# python ./setup.py install
error in setup command: Error parsing /root/heat/setup.cfg: Exception: 
Versioning for this project requires either an sdist tarball, or 
access to an upstream git repository. Are you sure that git is installed?

[root@controller-01 heat]# git --version
git version 1.7.1


Do you suggest to simply substitute the "stock" heat file with the new 
one heat_keystoneclient_v2/client.py ?


Is this, 
/usr/lib/python2.6/site-packages/heat/common/heat_keystoneclient.py, 
the stock file, isn't it ?


thanks,

Alvise


On 02/17/2015 07:46 PM, Chris Buccella wrote:
For Icehouse and Juno, you'll want to use the Keystone v2 plugin for 
heat:


https://git.openstack.org/cgit/openstack/heat/tree/contrib/heat_keystoneclient_v2


-Chris

On Tue, Feb 17, 2015 at 10:51 AM, Alvise Dorigo
<alvise.dor...@pd.infn.it> wrote:


Hi Christian; here the info you've requested:

[root@controller-01 ~]# grep auth_uri /etc/heat/heat.conf
/etc/glance/glance-api.conf
/etc/heat/heat.conf:# Allowed keystone endpoints for auth_uri
when multi_cloud is
/etc/heat/heat.conf:#allowed_auth_uris=
/etc/heat/heat.conf:auth_uri =
https://cloud-areapd-test.pd.infn.it:5000/v2.0
/etc/heat/heat.conf:#auth_uri=http://127.0.0.1:5000/v2.0
/etc/heat/heat.conf:# Allowed keystone endpoints for auth_uri
when multi_cloud is
/etc/heat/heat.conf:#allowed_auth_uris=
/etc/heat/heat.conf:auth_uri =
https://cloud-areapd-test.pd.infn.it:5000/v2.0
/etc/heat/heat.conf:#auth_uri=
/etc/glance/glance-api.conf:auth_uri =
https://cloud-areapd-test.pd.infn.it:35357/v2.0

is the problem in the TCP port mismatch ?

A.


On 02/17/2015 04:38 PM, Christian Berendt wrote:

On 02/17/2015 04:11 PM, Alvise Dorigo wrote:

[dorigoa@lxadorigo ~]$ heat  -k stack-create -f
test-stack.yml   -P
"ImageID=cirros;NetID=$NET_ID" testStac
ERROR: Property error : server1: image Authorization
failed: SSL
exception connecting to
https://cloud-areapd-test.pd.infn.it:5000/v3/auth/tokens

What value do you have for the auth_uri parameter in the
/etc/heat/heat.conf file? Same for the
/etc/glance/glance-api.conf file.

Christian.














Re: [Openstack-operators] How to force Heat to use v2.0 Keystone

2015-02-18 Thread Alvise Dorigo

Hi Jens,
If the correct name is oslo_utils (instead of oslo.utils), then the
related package is not found:


[root@controller-02 ~]# tail /var/log/heat/heat-engine.log
[...]
2015-02-18 14:56:29.978 5282 CRITICAL heat [-] ImportError: No module 
named oslo_utils


[root@controller-02 ~]# rpm -qa|grep oslo
python-oslo-messaging-1.3.0.2-4.el6.noarch
python-oslo-config-1.2.1-1.el6.noarch
python-oslo-rootwrap-1.0.0-1.el6.noarch
[root@controller-02 ~]# grep oslo 
/usr/lib/heat/heat_keystoneclient_v2/client.py

from oslo.config import cfg
from oslo_utils import importutils
from oslo_log import log as logging


Alvise


On 02/18/2015 02:15 PM, Dr. Jens Rosenboom wrote:

On 18/02/15 at 10:27, Alvise Dorigo wrote:

I had an error in the repository clone.
Now I've successfully installed the plugin (python setup.py install).

The backend is correctly configured in the heat.conf:

[root@controller-02 ~]# grep keystone_backend /etc/heat/heat.conf
#keystone_backend=heat.common.heat_keystoneclient.KeystoneClientV3
keystone_backend=heat.engine.plugins.heat_keystoneclient_v2.client.KeystoneClientV2 




as descibed in the README.md

But heat engine dies just after start:

[root@controller-02 ~]# tail /var/log/heat/heat-engine.log
2015-02-18 10:23:30.513 27308 WARNING heat.common.config [-] The
"instance_user" option in heat.conf is deprecated and will be removed in
the Juno release.
2015-02-18 10:23:31.018 27308 ERROR heat.common.plugin_loader [-] Failed
to import module heat.engine.plugins.heat_keystoneclient_v2.client
2015-02-18 10:23:31.019 27308 CRITICAL heat [-] ImportError: No module
named utils

Any idea ?


This looks to be related to the recent namespace changes, can you try 
this patch?


diff --git 
a/contrib/heat_keystoneclient_v2/heat_keystoneclient_v2/client.py 
b/contrib/heat_keystoneclient_v2/heat_keystoneclient_v2/client.py

index 783231b..ad128ff 100644
--- a/contrib/heat_keystoneclient_v2/heat_keystoneclient_v2/client.py
+++ b/contrib/heat_keystoneclient_v2/heat_keystoneclient_v2/client.py
@@ -15,7 +15,7 @@

 from keystoneclient.v2_0 import client as kc
 from oslo.config import cfg
-from oslo.utils import importutils
+from oslo_utils import importutils
 from oslo_log import log as logging

 from heat.common import exception






Re: [Openstack-operators] How to force Heat to use v2.0 Keystone

2015-02-19 Thread Alvise Dorigo

Hi Chris,
I cannot find it in the SL 6.6 repos or in EPEL-6:

[root@controller-02 ~]# yum search python-oslo.utils
Loaded plugins: security
Warning: No matches found for: python-oslo.utils
No Matches found
[root@controller-02 ~]# cat /etc/issue
Scientific Linux release 6.6 (Carbon)
Kernel \r on an \m

I could get it from https://pypi.python.org/pypi/oslo.utils, but I would
prefer to keep track of everything installed by means of the usual RPMs.

Is there a place where I can get a packaged python-oslo.utils for RHEL 6?

Thanks,

Alvise

On 02/18/2015 06:06 PM, Chris Buccella wrote:

Do you have python-oslo.utils installed?

On Wed, Feb 18, 2015 at 9:02 AM, Alvise Dorigo
<alvise.dor...@pd.infn.it> wrote:


Hi Jens,
If the correct one is oslo_utils (instead of oslo.utils) then the
related package is not found:

[root@controller-02 ~]# tail /var/log/heat/heat-engine.log
[...]
2015-02-18 14:56:29.978 5282 CRITICAL heat [-] ImportError: No
module named oslo_utils

[root@controller-02 ~]# rpm -qa|grep oslo
python-oslo-messaging-1.3.0.2-4.el6.noarch
python-oslo-config-1.2.1-1.el6.noarch
python-oslo-rootwrap-1.0.0-1.el6.noarch
[root@controller-02 ~]# grep oslo
/usr/lib/heat/heat_keystoneclient_v2/client.py
from oslo.config import cfg
from oslo_utils import importutils
from oslo_log import log as logging


Alvise



On 02/18/2015 02:15 PM, Dr. Jens Rosenboom wrote:

On 18/02/15 at 10:27, Alvise Dorigo wrote:

I had an error in the repository clone.
Now I've successfully installed the plugin (python
setup.py install).

The backend is correctly configured in the heat.conf:

[root@controller-02 ~]# grep keystone_backend
/etc/heat/heat.conf
#keystone_backend=heat.common.heat_keystoneclient.KeystoneClientV3

keystone_backend=heat.engine.plugins.heat_keystoneclient_v2.client.KeystoneClientV2



as descibed in the README.md

But heat engine dies just after start:

[root@controller-02 ~]# tail /var/log/heat/heat-engine.log
2015-02-18 10:23:30.513 27308 WARNING heat.common.config
[-] The
"instance_user" option in heat.conf is deprecated and will
be removed in
the Juno release.
2015-02-18 10:23:31.018 27308 ERROR
heat.common.plugin_loader [-] Failed
to import module
heat.engine.plugins.heat_keystoneclient_v2.client
2015-02-18 10:23:31.019 27308 CRITICAL heat [-]
ImportError: No module
named utils

Any idea ?


This looks to be related to the recent namespace changes, can
you try this patch?

diff --git
a/contrib/heat_keystoneclient_v2/heat_keystoneclient_v2/client.py
b/contrib/heat_keystoneclient_v2/heat_keystoneclient_v2/client.py
index 783231b..ad128ff 100644
---
a/contrib/heat_keystoneclient_v2/heat_keystoneclient_v2/client.py
+++
b/contrib/heat_keystoneclient_v2/heat_keystoneclient_v2/client.py
@@ -15,7 +15,7 @@

 from keystoneclient.v2_0 import client as kc
 from oslo.config import cfg
-from oslo.utils import importutils
+from oslo_utils import importutils
 from oslo_log import log as logging

 from heat.common import exception
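
A more defensive variant of the import, purely illustrative and not part of the proposed patch, is to try each namespace in turn so the plugin stays importable on hosts with either oslo layout (the helper name below is mine, not heat's):

```python
import importlib


def import_first_available(*names):
    """Return the first module in *names* that can be imported.

    For the case above one would call:
        import_first_available("oslo_utils.importutils",
                               "oslo.utils.importutils")
    preferring the new oslo_utils namespace and falling back to the
    old oslo.utils one on hosts that only ship the older package.
    """
    for name in names:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    raise ImportError("none of %r could be imported" % (names,))
```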



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
<mailto:OpenStack-operators@lists.openstack.org>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators






Re: [Openstack-operators] How to force Heat to use v2.0 Keystone

2015-02-19 Thread Alvise Dorigo
Simple answer: it seems to require Python 2.7, and I cannot install it 
because I'm still on SL/CentOS 6.6...

So, no way to run Heat unfortunately :-(

A.

On 02/19/2015 09:24 AM, Alvise Dorigo wrote:

Hi Chris,
I cannot find it in the SL6.6 repos nor in the EPEL-6:

[root@controller-02 ~]# yum search python-oslo.utils
Loaded plugins: security
Warning: No matches found for: python-oslo.utils
No Matches found
[root@controller-02 ~]# cat /etc/issue
Scientific Linux release 6.6 (Carbon)
Kernel \r on an \m

I could get it from here https://pypi.python.org/pypi/oslo.utils, but 
I would prefer to keep track of everything that is installed by means 
of the usual RPM.
Is there a place where I can get the packaged python-oslo.utils for 
RHEL6 ?


Thanks,

Alvise

On 02/18/2015 06:06 PM, Chris Buccella wrote:

Do you have python-oslo.utils installed?

On Wed, Feb 18, 2015 at 9:02 AM, Alvise Dorigo 
mailto:alvise.dor...@pd.infn.it>> wrote:


Hi Jens,
If the correct one is oslo_utils (instead of oslo.utils) then the
related package is not found:

[root@controller-02 ~]# tail /var/log/heat/heat-engine.log
[...]
2015-02-18 14:56:29.978 5282 CRITICAL heat [-] ImportError: No
module named oslo_utils

[root@controller-02 ~]# rpm -qa|grep oslo
python-oslo-messaging-1.3.0.2-4.el6.noarch
python-oslo-config-1.2.1-1.el6.noarch
python-oslo-rootwrap-1.0.0-1.el6.noarch
[root@controller-02 ~]# grep oslo
/usr/lib/heat/heat_keystoneclient_v2/client.py
from oslo.config import cfg
from oslo_utils import importutils
from oslo_log import log as logging


Alvise



On 02/18/2015 02:15 PM, Dr. Jens Rosenboom wrote:

Am 18/02/15 um 10:27 schrieb Alvise Dorigo:

I had an error in the repository clone.
Now I've successfully installed the plugin (python
setup.py install).

The backend is correctly configured in the heat.conf:

[root@controller-02 ~]# grep keystone_backend
/etc/heat/heat.conf
#keystone_backend=heat.common.heat_keystoneclient.KeystoneClientV3

keystone_backend=heat.engine.plugins.heat_keystoneclient_v2.client.KeystoneClientV2



as described in the README.md

But heat engine dies just after start:

[root@controller-02 ~]# tail /var/log/heat/heat-engine.log
2015-02-18 10:23:30.513 27308 WARNING heat.common.config
[-] The
"instance_user" option in heat.conf is deprecated and
will be removed in
the Juno release.
2015-02-18 10:23:31.018 27308 ERROR
heat.common.plugin_loader [-] Failed
to import module
heat.engine.plugins.heat_keystoneclient_v2.client
2015-02-18 10:23:31.019 27308 CRITICAL heat [-]
ImportError: No module
named utils

Any idea ?


This looks to be related to the recent namespace changes, can
you try this patch?

diff --git
a/contrib/heat_keystoneclient_v2/heat_keystoneclient_v2/client.py
b/contrib/heat_keystoneclient_v2/heat_keystoneclient_v2/client.py
index 783231b..ad128ff 100644
---
a/contrib/heat_keystoneclient_v2/heat_keystoneclient_v2/client.py
+++
b/contrib/heat_keystoneclient_v2/heat_keystoneclient_v2/client.py
@@ -15,7 +15,7 @@

 from keystoneclient.v2_0 import client as kc
 from oslo.config import cfg
-from oslo.utils import importutils
+from oslo_utils import importutils
 from oslo_log import log as logging

 from heat.common import exception













Re: [Openstack-operators] How to force Heat to use v2.0 Keystone

2015-02-20 Thread Alvise Dorigo

that's a good advice, thanks; I'll give it a try.

A.

On 02/20/2015 05:23 AM, Chris Buccella wrote:
Heat works by calling other OpenStack services. So if you want, you 
could try running Heat separately in a VM (with a modern distro). That 
should work as long as it has network access to the other service 
endpoints (keystone, nova, etc.).




On Thu, Feb 19, 2015 at 3:44 AM, Alvise Dorigo 
mailto:alvise.dor...@pd.infn.it>> wrote:


Simple answer: it seems to require Python 2.7, and I cannot
install it because I'm still on SL/CentOS 6.6...
So, no way to run Heat unfortunately :-(

A.


On 02/19/2015 09:24 AM, Alvise Dorigo wrote:

Hi Chris,
I cannot find it in the SL6.6 repos nor in the EPEL-6:

[root@controller-02 ~]# yum search python-oslo.utils
Loaded plugins: security
Warning: No matches found for: python-oslo.utils
No Matches found
[root@controller-02 ~]# cat /etc/issue
Scientific Linux release 6.6 (Carbon)
Kernel \r on an \m

I could get it from here https://pypi.python.org/pypi/oslo.utils,
but I would prefer to keep track of everything that is installed by
means of the usual RPM.
Is there a place where I can get the packaged python-oslo.utils
for RHEL6 ?

Thanks,

Alvise

On 02/18/2015 06:06 PM, Chris Buccella wrote:

Do you have python-oslo.utils installed?

On Wed, Feb 18, 2015 at 9:02 AM, Alvise Dorigo
mailto:alvise.dor...@pd.infn.it>> wrote:

Hi Jens,
If the correct one is oslo_utils (instead of oslo.utils)
then the related package is not found:

[root@controller-02 ~]# tail /var/log/heat/heat-engine.log
[...]
2015-02-18 14:56:29.978 5282 CRITICAL heat [-] ImportError:
No module named oslo_utils

[root@controller-02 ~]# rpm -qa|grep oslo
python-oslo-messaging-1.3.0.2-4.el6.noarch
python-oslo-config-1.2.1-1.el6.noarch
python-oslo-rootwrap-1.0.0-1.el6.noarch
[root@controller-02 ~]# grep oslo
/usr/lib/heat/heat_keystoneclient_v2/client.py
from oslo.config import cfg
from oslo_utils import importutils
from oslo_log import log as logging


Alvise



On 02/18/2015 02:15 PM, Dr. Jens Rosenboom wrote:

Am 18/02/15 um 10:27 schrieb Alvise Dorigo:

I had an error in the repository clone.
Now I've successfully installed the plugin (python
setup.py install).

The backend is correctly configured in the heat.conf:

[root@controller-02 ~]# grep keystone_backend
/etc/heat/heat.conf

#keystone_backend=heat.common.heat_keystoneclient.KeystoneClientV3

keystone_backend=heat.engine.plugins.heat_keystoneclient_v2.client.KeystoneClientV2



as described in the README.md

But heat engine dies just after start:

[root@controller-02 ~]# tail
/var/log/heat/heat-engine.log
2015-02-18 10:23:30.513 27308 WARNING
heat.common.config [-] The
"instance_user" option in heat.conf is deprecated
and will be removed in
the Juno release.
2015-02-18 10:23:31.018 27308 ERROR
heat.common.plugin_loader [-] Failed
to import module
heat.engine.plugins.heat_keystoneclient_v2.client
2015-02-18 10:23:31.019 27308 CRITICAL heat [-]
ImportError: No module
named utils

Any idea ?


This looks to be related to the recent namespace
changes, can you try this patch?

diff --git
a/contrib/heat_keystoneclient_v2/heat_keystoneclient_v2/client.py
b/contrib/heat_keystoneclient_v2/heat_keystoneclient_v2/client.py
index 783231b..ad128ff 100644
---
a/contrib/heat_keystoneclient_v2/heat_keystoneclient_v2/client.py
+++
b/contrib/heat_keystoneclient_v2/heat_keystoneclient_v2/client.py
@@ -15,7 +15,7 @@

 from keystoneclient.v2_0 import client as kc
 from oslo.config import cfg
-from oslo.utils import importutils
+from oslo_utils import importutils
 from oslo_log import log as logging

 from heat.common import exception










[Openstack-operators] Locking on images in _base with shared filesystem

2015-05-21 Thread Alvise Dorigo

Hi,
I have several compute nodes that share a distributed filesystem 
(Gluster) mounted at /var/lib/nova/instances.


I wonder whether multiple nova-compute processes (on different compute 
nodes) lock, in some way, the image download from Glance into 
/var/lib/nova/instances/_base...


Consider for example that a glance image has never been launched. And 
suppose that, the first time it is launched, 2 instances are spawned by 
the scheduler on two different compute nodes (because, of course, the 
user actually asked for 2 instances of the same image).


What happens? My knowledge of the entire procedure is not complete yet: 
I presume that both nova-compute processes see that the image is not 
(yet) in /var/lib/nova/instances and start, independently, to download 
it from Glance. But then they write the same file in the very same 
place, and this causes file corruption unless some file locking 
mechanism operates between the two nova-compute processes running on 
the two compute nodes.
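
For context, nova serializes work on each _base entry with external file locks; whether that protects you here depends on the locks living on the shared filesystem. A minimal sketch of the idea (not nova's actual code):

```python
import fcntl
import os


def with_image_lock(lock_dir, image_id, download):
    """Run download() while holding an exclusive per-image file lock.

    Illustrative only: two processes (even on different hosts, if
    lock_dir is on the shared filesystem and the mount supports POSIX
    locks) serialize here instead of writing the same _base file
    concurrently. flock() semantics over GlusterFS/NFS depend on the
    mount options, so verify them before relying on this.
    """
    os.makedirs(lock_dir, exist_ok=True)
    lock_path = os.path.join(lock_dir, image_id + ".lock")
    with open(lock_path, "w") as lock_file:
        fcntl.flock(lock_file, fcntl.LOCK_EX)  # blocks until the peer releases
        try:
            return download()
        finally:
            fcntl.flock(lock_file, fcntl.LOCK_UN)
```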


Can someone clarify ?

thanks,

Alvise



[Openstack-operators] Cannot delete a cinder volume; alternative manual procedure ?

2015-06-09 Thread Alvise Dorigo

Hi,
I have a Cinder volume that is permanently stuck in the "deleting" 
state. I cannot retrace the full history of actions that led to this 
scenario. What I can report is that:


1. Cinder is backed by GlusterFS mounted via the Gluster FUSE client,
2. a "cinder delete" (or "cinder force-delete" as admin) doesn't produce 
any effect,
3. in api.log, scheduler.log and volume.log I do not see any useful 
information except this:


api.log-20150607:2015-06-05 14:45:20.956 17383 INFO eventlet.wsgi.server 
[req-bed01706-77ff-48bb-b12c-a0ccdcfd2e25 
d7b3d4f7d20444adb7fda140553c25bf 3beba6dd3f2648378263bc04d9c205fa - - -] 
192.135.16.31,192.168.60.21 - - [05/Jun/2015 14:45:20] "DELETE 
/v1/3beba6dd3f2648378263bc04d9c205fa/volumes/b056b016-8337-4e64-9069-6c84ab242152 
HTTP/1.1" 400 338 0.053790



(the verbosity and debug are ON of course).

This is not a single event... unfortunately more than one volume cannot 
be deleted, and so far we're unable to understand why.


My question is whether a manual procedure exists (operating on the 
relevant Cinder tables) that gracefully cleans things up, rather than a 
trivial "delete from volume where id='...'".
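
For what it's worth, the approach operators usually describe is not deleting rows but resetting the stuck volume's status so a normal delete can be retried through the API. A hedged sketch of that idea, using stdlib sqlite3 as a stand-in for the real MySQL database; the volumes table and column names are assumptions from an IceHouse-era schema, so verify them against your own database first:

```python
import sqlite3


def reset_stuck_volume(db_path, volume_id):
    """Flip a volume stuck in 'deleting' back to 'error'.

    After this, "cinder delete" / "cinder force-delete" can be retried
    via the API instead of removing rows by hand. Sketch only:
    production cinder uses MySQL, and the column names used here
    (status, attach_status) are assumptions to check first.
    """
    conn = sqlite3.connect(db_path)
    with conn:  # commits on success, rolls back on error
        conn.execute(
            "UPDATE volumes SET status='error', attach_status='detached' "
            "WHERE id=? AND status='deleting'",
            (volume_id,),
        )
    conn.close()
```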


Thanks,

Alvise



[Openstack-operators] Device {UUID}c not defined on plugin

2015-06-16 Thread Alvise Dorigo

Hi,
after a migration from Havana to IceHouse (with controller and network 
services/agents on the same physical node, and using OVS/GRE) we started 
facing some network-related problems (the internal tag of the port 
shown by "ovs-vsctl show" was set to 4095, which is wrong AFAIK). At 
first the problems could be solved by just restarting the 
openvswitch-related agents (and openvswitch itself), or by changing the 
tag by hand; but now the networking has definitely stopped working.


When we add a new router interface connected to a tenant LAN, it is 
created in the "DOWN" state. Then in the openvswitch-agent.log we see 
this error message:


2015-06-16 15:07:43.275 40708 WARNING 
neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Device 
ba295e45-9a73-48c1-8864-a59edd5855dc not defined on plugin


and nothing more.

Any suggestion ?

thanks,

Alvise




Re: [Openstack-operators] [openstack-dev] Device {UUID}c not defined on plugin

2015-06-16 Thread Alvise Dorigo

Hi,
I forgot to attach some relevant config files:

/etc/neutron/plugins/ml2/ml2_conf.ini :

[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ml2_type_vxlan]
[securitygroup]
firewall_driver = 
neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

enable_security_group = True
[ovs]
local_ip = 192.168.61.106
tunnel_type = gre
enable_tunneling = True

/etc/neutron/neutron.conf :

[DEFAULT]
nova_ca_certificates_file = /etc/grid-security/certificates/INFN-CA-2006.pem
auth_strategy = keystone
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_hosts = 192.168.60.105:5672,192.168.60.106:5672
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = https://cloud-areapd.pd.infn.it:8774/v2
nova_admin_username = nova
nova_admin_tenant_id = 1b2caeedb3e2497b935723dc6e142ec9
nova_admin_password = X
nova_admin_auth_url = https://cloud-areapd.pd.infn.it:35357/v2.0
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
verbose = True
debug = False
rabbit_ha_queues = True
dhcp_agents_per_network = 2
[quotas]
[agent]
[keystone_authtoken]
auth_uri = https://cloud-areapd.pd.infn.it:35357/v2.0
auth_url = https://cloud-areapd.pd.infn.it:35357/v2.0
auth_host = cloud-areapd.pd.infn.it
auth_protocol = https
auth_port = 35357
admin_tenant_name = services
admin_user = neutron
admin_password = X
cafile = /etc/grid-security/certificates/INFN-CA-2006.pem
[database]
connection = mysql://neutron_prod:XX@192.168.60.10/neutron_prod
[service_providers]
service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default



And here

http://pastebin.com/P977162t

the output of "ovs-vsctl show".

Alvise



On 16/06/2015 15:30, Alvise Dorigo wrote:

Hi,
after a migration from Havana to IceHouse (with controller and network 
services/agents on the same physical node, and using OVS/GRE) we 
started facing some network-related problems (the internal tag of the 
port shown by "ovs-vsctl show" was set to 4095, which is wrong 
AFAIK). At first the problems could be solved by just restarting the 
openvswitch-related agents (and openvswitch itself), or by changing 
the tag by hand; but now the networking has definitely stopped 
working.


When we add a new router interface connected to a tenant LAN, it is 
created in the "DOWN" state. Then in the openvswitch-agent.log we see 
this error message:


2015-06-16 15:07:43.275 40708 WARNING 
neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Device 
ba295e45-9a73-48c1-8864-a59edd5855dc not defined on plugin


and nothing more.

Any suggestion ?

thanks,

Alvise


__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [Openstack-operators] Device {UUID}c not defined on plugin

2015-06-16 Thread Alvise Dorigo

Hi,
I forgot to attach some relevant config files:

/etc/neutron/plugins/ml2/ml2_conf.ini :

[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ml2_type_vxlan]
[securitygroup]
firewall_driver = 
neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

enable_security_group = True
[ovs]
local_ip = 192.168.61.106
tunnel_type = gre
enable_tunneling = True

/etc/neutron/neutron.conf :

[DEFAULT]
nova_ca_certificates_file = /etc/grid-security/certificates/INFN-CA-2006.pem
auth_strategy = keystone
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_hosts = 192.168.60.105:5672,192.168.60.106:5672
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = https://cloud-areapd.pd.infn.it:8774/v2
nova_admin_username = nova
nova_admin_tenant_id = 1b2caeedb3e2497b935723dc6e142ec9
nova_admin_password = X
nova_admin_auth_url = https://cloud-areapd.pd.infn.it:35357/v2.0
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
verbose = True
debug = False
rabbit_ha_queues = True
dhcp_agents_per_network = 2
[quotas]
[agent]
[keystone_authtoken]
auth_uri = https://cloud-areapd.pd.infn.it:35357/v2.0
auth_url = https://cloud-areapd.pd.infn.it:35357/v2.0
auth_host = cloud-areapd.pd.infn.it
auth_protocol = https
auth_port = 35357
admin_tenant_name = services
admin_user = neutron
admin_password = X
cafile = /etc/grid-security/certificates/INFN-CA-2006.pem
[database]
connection = mysql://neutron_prod:XX@192.168.60.10/neutron_prod
[service_providers]
service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default



And here

http://pastebin.com/P977162t

the output of "ovs-vsctl show".

Alvise

On 16/06/2015 15:30, Alvise Dorigo wrote:

Hi,
after a migration from Havana to IceHouse (with controller and network 
services/agents on the same physical node, and using OVS/GRE) we 
started facing some network-related problems (the internal tag of the 
port shown by "ovs-vsctl show" was set to 4095, which is wrong 
AFAIK). At first the problems could be solved by just restarting the 
openvswitch-related agents (and openvswitch itself), or by changing 
the tag by hand; but now the networking has definitely stopped 
working.


When we add a new router interface connected to a tenant LAN, it is 
created in the "DOWN" state. Then in the openvswitch-agent.log we see 
this error message:


2015-06-16 15:07:43.275 40708 WARNING 
neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Device 
ba295e45-9a73-48c1-8864-a59edd5855dc not defined on plugin


and nothing more.

Any suggestion ?

thanks,

Alvise







[Openstack-operators] Ceilometer client uses the wrong URL when contacting service

2015-07-14 Thread Alvise Dorigo

Hi,
I've set up an OpenStack IceHouse deployment with SSL.

The Ceilometer service is registered in Keystone with the https endpoints:

[root@controller-01 ~]# keystone endpoint-list|grep 8777
| 8c12e36a75454c5da92ac146630a7022 | regionOne | 
https://cloud-areapd-test.pd.infn.it:8777  | 
https://cloud-areapd-test.pd.infn.it:8777  | 
https://cloud-areapd-test.pd.infn.it:8777  | 
8f765dc84a884786b0e95076a20f1c4c |


When I select the "Resource usage" menu on the dashboard, it hangs, and 
in the horizon.log file I see this error:


2015-07-14 14:27:03,899 9751 DEBUG ceilometerclient.common.http curl -i 
-X GET -H 'X-Auth-Token: 46778be5fbe2c753766b501314e6effa' -H 
'Content-Type: application/json' -H 'Accept: application/json' -H 
'User-Agent: python-ceilometerclient' http://90.147.77.250:8777/v2/meters



Why (and from where) is the ceilometerclient getting the wrong non-SSL 
endpoint http://90.147.77.250:8777/v2/meters ?
I thought it would take that URL from Keystone's endpoint catalog 
(which contains the correct https URLs), but apparently that is not the 
case.
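
For reference, a catalog-honouring client would resolve the URL roughly like this; the dict shape below mirrors a Keystone v2-style catalog, and the helper is illustrative, not ceilometerclient's actual code:

```python
def pick_endpoint(catalog, service_type, url_kind="publicURL"):
    """Return the first matching endpoint URL from a v2-style catalog.

    Sketch only: 'catalog' is assumed to be a list of services, each a
    dict with a 'type' and an 'endpoints' list of
    {'publicURL': ..., 'internalURL': ..., 'adminURL': ...} entries,
    as Keystone v2 returns them.
    """
    for service in catalog:
        if service.get("type") == service_type:
            for endpoint in service.get("endpoints", []):
                if url_kind in endpoint:
                    return endpoint[url_kind]
    raise LookupError("no %s endpoint for service %s" % (url_kind, service_type))
```

If the client instead falls back to a configured or guessed URL, the catalog's https endpoints never come into play, which would match the symptom above.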


Could someone explain and help me to set it up correctly ?

thanks,

Alvise



[Openstack-operators] Managing security incidents: how to find the guilty VM ?

2015-07-23 Thread Alvise Dorigo

Dear all

Let's suppose that a user of an OpenStack-based cloud does something 
wrong/illegal on the internet, or that a VM gets compromised and 
something wrong/illegal is done from that machine.



In this case the local security contact persons could be notified after 
a while (days, weeks, even some months, when that VM probably doesn't 
exist anymore) that a "malicious operation" affecting some IP 
addresses/ports was performed on date X from a machine with IP Y.


The local security contact persons then have to find out who created 
that VM, at least to prevent that.


If the VM doesn't have a floating IP, the IP address Y that is exposed 
on the internet (and therefore the one that will be communicated to the 
security people) is that of the OpenStack router.


Given the private IP of the machine, we are able to find the UUID of 
the VM (even if it was already deleted) and then the ID of the user who 
created it.

But the problem is how to find this private IP address.


How can this issue be managed ?

thanks.

Alvise



[Openstack-operators] Which is the correct way to set ha queues in RabbitMQ

2015-07-28 Thread Alvise Dorigo

Hi,
I read these two documents:

http://docs.openstack.org/high-availability-guide/content/_configure_rabbitmq.html

https://www.rdoproject.org/RabbitMQ

To configure the queues in HA mode, the two docs suggest two slightly 
different commands.


The first one says:

rabbitmqctl set_policy ha-all '^(?!amq\.).*' '{"ha-mode": "all"}'


while the second one says:

rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-mode": "all"}'


"ha-all" vs. "HA".

which one is correct ?

thanks,

Alvise
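
Whichever name is used, the pattern argument is identical in both commands; a quick check (Python sketch) of which queue names it would mirror:

```python
import re

# The pattern from both commands: apply the HA policy to every queue
# except RabbitMQ's built-in amq.* ones.
pattern = re.compile(r'^(?!amq\.).*')

assert pattern.match('cinder-scheduler') is not None    # mirrored
assert pattern.match('notifications.info') is not None  # mirrored
assert pattern.match('amqueue') is not None   # only the literal "amq." prefix is excluded
assert pattern.match('amq.gen-JzTY20BRg') is None       # built-in queue: not mirrored
```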


Re: [Openstack-operators] [openstack-dev] Which is the correct way to set ha queues in RabbitMQ

2015-07-28 Thread Alvise Dorigo

thank you very much Vishal.

A.

On 28/07/2015 09:41, vishal yadav wrote:

>> "ha-all" vs. "HA".
>> which one is correct ?

That's the policy name; you can name it anything...

Excerpt from 'man rabbitmqctl'
...
set_policy [-p vhostpath] {name} {pattern} {definition} [priority]
   Sets a policy.

   name
   The name of the policy.

   pattern
   The regular expression, which when matches on a given 
resources causes the policy to apply.


   definition
   The definition of the policy, as a JSON term. In most 
shells you are very likely to need to quote this.


   priority
   The priority of the policy as an integer, defaulting to 
0. Higher numbers indicate greater precedence.

...

Regards,
Vishal


On Tue, Jul 28, 2015 at 12:47 PM, Alvise Dorigo 
mailto:alvise.dor...@pd.infn.it>> wrote:


Hi,
I read these two documents:


http://docs.openstack.org/high-availability-guide/content/_configure_rabbitmq.html

https://www.rdoproject.org/RabbitMQ

To configure the queues in HA mode, the two docs suggest two
slightly different commands.

The first one says:

rabbitmqctl set_policy ha-all '^(?!amq\.).*' '{"ha-mode": "all"}'


while the second one says:

rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-mode": "all"}'


"ha-all" vs. "HA".

which one is correct ?

thanks,

Alvise







Re: [Openstack-operators] [openstack-dev] Which is the correct way to set ha queues in RabbitMQ

2015-07-28 Thread Alvise Dorigo

Hi Vishal,
do you have an effective recipe to test whether RabbitMQ's HA works?
I've three instances of it; I've also configured nova, cinder and 
neutron with rabbit_ha_queues = true.


Just restarting a rabbit instance doesn't seem sufficient to test a 
real-world scenario, does it?


any advice ?

thanks,

Alvise

On 28/07/2015 09:46, vishal yadav wrote:

You're welcome :)

On Tue, Jul 28, 2015 at 1:15 PM, Alvise Dorigo 
mailto:alvise.dor...@pd.infn.it>> wrote:


thank you very much Vishal.

A.


On 28/07/2015 09:41, vishal yadav wrote:

>> "ha-all" vs. "HA".
>> which one is correct ?

That's the policy name; you can name it anything...

Excerpt from 'man rabbitmqctl'
...
set_policy [-p vhostpath] {name} {pattern} {definition} [priority]
   Sets a policy.

   name
   The name of the policy.

   pattern
   The regular expression, which when matches on a
given resources causes the policy to apply.

   definition
   The definition of the policy, as a JSON term. In
most shells you are very likely to need to quote this.

   priority
   The priority of the policy as an integer,
defaulting to 0. Higher numbers indicate greater precedence.
...

Regards,
Vishal


On Tue, Jul 28, 2015 at 12:47 PM, Alvise Dorigo
mailto:alvise.dor...@pd.infn.it>> wrote:

Hi,
I read these two documents:


http://docs.openstack.org/high-availability-guide/content/_configure_rabbitmq.html

https://www.rdoproject.org/RabbitMQ

To configure the queues in HA mode, the two docs suggest two
slightly different commands.

The first one says:

rabbitmqctl set_policy ha-all '^(?!amq\.).*' '{"ha-mode": "all"}'


while the second one says:

rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-mode": "all"}'


"ha-all" vs. "HA".

which one is correct ?

thanks,

Alvise











[Openstack-operators] Problem in Ceilometer's alarm-evaluator

2015-07-30 Thread Alvise Dorigo

Hi,
I've just installed Ceilometer on a test infrastructure with a 
controller node (which is also the network node) and a compute node.
Just after starting the Ceilometer services, I see an error in 
alarm-evaluator.log:


2015-07-30 14:59:54.113 4061 ERROR ceilometer.alarm.service [-] alarm 
evaluation cycle failed
2015-07-30 14:59:54.113 4061 TRACE ceilometer.alarm.service Traceback 
(most recent call last):
2015-07-30 14:59:54.113 4061 TRACE ceilometer.alarm.service   File 
"/usr/lib/python2.6/site-packages/ceilometer/alarm/service.py", line 91, 
in _evaluate_assigned_alarms
2015-07-30 14:59:54.113 4061 TRACE ceilometer.alarm.service alarms = 
self._assigned_alarms()
2015-07-30 14:59:54.113 4061 TRACE ceilometer.alarm.service   File 
"/usr/lib/python2.6/site-packages/ceilometer/alarm/service.py", line 
134, in _assigned_alarms

2015-07-30 14:59:54.113 4061 TRACE ceilometer.alarm.service 'value': True}])
2015-07-30 14:59:54.113 4061 TRACE ceilometer.alarm.service   File 
"/usr/lib/python2.6/site-packages/ceilometerclient/v2/alarms.py", line 
71, in list
2015-07-30 14:59:54.113 4061 TRACE ceilometer.alarm.service return 
self._list(options.build_url(self._path(), q))
2015-07-30 14:59:54.113 4061 TRACE ceilometer.alarm.service   File 
"/usr/lib/python2.6/site-packages/ceilometerclient/common/base.py", line 
58, in _list
2015-07-30 14:59:54.113 4061 TRACE ceilometer.alarm.service resp, body = 
self.api.json_request('GET', url)
2015-07-30 14:59:54.113 4061 TRACE ceilometer.alarm.service   File 
"/usr/lib/python2.6/site-packages/ceilometerclient/common/http.py", line 
191, in json_request
2015-07-30 14:59:54.113 4061 TRACE ceilometer.alarm.service resp, 
body_iter = self._http_request(url, method, **kwargs)
2015-07-30 14:59:54.113 4061 TRACE ceilometer.alarm.service   File 
"/usr/lib/python2.6/site-packages/ceilometerclient/common/http.py", line 
151, in _http_request
2015-07-30 14:59:54.113 4061 TRACE ceilometer.alarm.service resp = 
conn.getresponse()
2015-07-30 14:59:54.113 4061 TRACE ceilometer.alarm.service   File 
"/usr/lib64/python2.6/httplib.py", line 990, in getresponse

2015-07-30 14:59:54.113 4061 TRACE ceilometer.alarm.service response.begin()
2015-07-30 14:59:54.113 4061 TRACE ceilometer.alarm.service   File 
"/usr/lib64/python2.6/httplib.py", line 391, in begin
2015-07-30 14:59:54.113 4061 TRACE ceilometer.alarm.service version, 
status, reason = self._read_status()
2015-07-30 14:59:54.113 4061 TRACE ceilometer.alarm.service   File 
"/usr/lib64/python2.6/httplib.py", line 355, in _read_status
2015-07-30 14:59:54.113 4061 TRACE ceilometer.alarm.service raise 
BadStatusLine(line)

2015-07-30 14:59:54.113 4061 TRACE ceilometer.alarm.service BadStatusLine
2015-07-30 14:59:54.113 4061 TRACE ceilometer.alarm.service

which is quite useless, unless one knows the code well.
All the other log files are OK.

Here you can find the ceilometer.conf's content on the controller node: 
http://pastebin.com/WkSJmwwZ


And here the ceilometer.conf in the compute node: 
http://pastebin.com/Vzd3ZW0g


Any idea about the cause of that error, or something I could do to 
obtain a more helpful error message ?


thanks,

A.




[Openstack-operators] Juno nova-consoleauth stays down

2015-09-18 Thread Alvise Dorigo

Hi,
I've installed a Juno controller (which is also a network node). After 
configuring all services, I launched all of them, and I haven't found 
any errors in the files /var/log/nova/*.
Despite this, nova-consoleauth seems to be "down", as reported by "nova 
service-list":


[root@controller-01 nova]# nova service-list|grep console
| 12 | nova-consoleauth |  | internal | enabled | down  | 
2015-09-16T12:32:30.00 | -   |


Its log just shows:

2015-09-18 09:40:06.697 5358 AUDIT nova.service [-] Starting consoleauth 
node (version 2014.2.2-1.el7)
2015-09-18 09:40:07.833 5358 INFO oslo.messaging._drivers.impl_rabbit 
[req-0ed5c7c0-a499-4afd-a005-a28367342be9 ] Connecting to AMQP server on 
x.y.z.w:5672
2015-09-18 09:40:07.855 5358 INFO oslo.messaging._drivers.impl_rabbit 
[req-0ed5c7c0-a499-4afd-a005-a28367342be9 ] Connected to AMQP server on 
x.y.z.w:5672


(verbose in nova.conf is already set to True).

The command "nova get-vnc-console test novnc" hangs for a long time, 
and when it comes back:


DEBUG (shell:803) ('Connection aborted.', BadStatusLine("''",))
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/novaclient/shell.py", line 800, in main
    OpenStackComputeShell().main(argv)
  File "/usr/lib/python2.7/site-packages/novaclient/shell.py", line 730, in main
    args.func(self.cs, args)
  File "/usr/lib/python2.7/site-packages/novaclient/v1_1/shell.py", line 1999, in do_get_vnc_console
    data = server.get_vnc_console(args.console_type)
  File "/usr/lib/python2.7/site-packages/novaclient/v1_1/servers.py", line 71, in get_vnc_console
    return self.manager.get_vnc_console(self, console_type)
  File "/usr/lib/python2.7/site-packages/novaclient/v1_1/servers.py", line 662, in get_vnc_console
    {'type': console_type})[1]
  File "/usr/lib/python2.7/site-packages/novaclient/v1_1/servers.py", line 1240, in _action
    return self.api.client.post(url, body=body)
  File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 490, in post
    return self._cs_request(url, 'POST', **kwargs)
  File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 465, in _cs_request
    resp, body = self._time_request(url, method, **kwargs)
  File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 439, in _time_request
    resp, body = self.request(url, method, **kwargs)
  File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 410, in request
    **kwargs)
  File "/usr/lib/python2.7/site-packages/requests/api.py", line 50, in request
    response = session.request(method=method, url=url, **kwargs)
  File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 464, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 576, in send
    r = adapter.send(request, **kwargs)
  File "/usr/lib/python2.7/site-packages/requests/adapters.py", line 415, in send
    raise ConnectionError(err, request=request)
ConnectionError: ('Connection aborted.', BadStatusLine("''",))
ERROR (ConnectionError): ('Connection aborted.', BadStatusLine("''",))

Could someone suggest further debugging I could perform?
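For context on the "down" state: nova marks a service down when its last heartbeat (the updated_at column in nova's services table, shown by nova service-list) is older than service_down_time (default 60 s; services heartbeat every report_interval, default 10 s). A sketch of that check in POSIX shell (GNU date assumed; timestamps are illustrative):

```shell
# Decide up/down the way nova's servicegroup DB driver does:
# a service is up if (now - updated_at) <= service_down_time.
SERVICE_DOWN_TIME=60

service_state() {
  updated_at="$1"; now="$2"
  age=$(( $(date -ud "$now" +%s) - $(date -ud "$updated_at" +%s) ))
  if [ "$age" -le "$SERVICE_DOWN_TIME" ]; then echo up; else echo down; fi
}

service_state "2015-09-16T12:32:30" "2015-09-16T12:33:00"   # prints: up
service_state "2015-09-16T12:32:30" "2015-09-18T09:40:00"   # prints: down
```

If updated_at stays frozen while the process is running, the heartbeats are not reaching the database: worth checking are the AMQP connection shown in the log above, and clock skew between the controller and the DB host.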

thanks,

   Alvise



[Openstack-operators] Help for nova-docker

2015-10-06 Thread Alvise Dorigo

Hi,
I have a Kilo installation with one controller/network node and one compute node.
I've installed the docker-engine RPM on both nodes (as described here: 
https://docs.docker.com/installation/rhel/) and the nova-docker plugin 
on the compute node only (following the instructions here: 
https://github.com/stackforge/nova-docker).


The upload of images into glance worked.
Instantiation of a docker image ends with an error in nova-compute.log, 
which I've pasted here: http://pastebin.com/C7EQnSJt


Does anybody have any suggestions or ideas?

thank you,

A.



Re: [Openstack-operators] Help for nova-docker

2015-10-06 Thread Alvise Dorigo

thank you all, that seems to have solved it!

Alvise

On 06/10/2015 15:09, Anne Gentle wrote:


On Tue, Oct 6, 2015 at 6:42 AM, Alvise Dorigo <alvise.dor...@pd.infn.it> wrote:


Hi,
I have a Kilo installation with one controller/network node and one
compute node.
I've installed on both nodes the docker-engine RPM (as described
here: https://docs.docker.com/installation/rhel/) and the
nova-docker plugin on the compute node only (following
instructions here: https://github.com/stackforge/nova-docker).

The upload of images into glance has worked.
Instantiation of a docker image ends with an error in the
nova-compute.log, that I've pasted here: http://pastebin.com/C7EQnSJt

Does anybody have any suggestion/idea to give me ?


Take a look at 
http://docs.openstack.org/admin-guide-cloud/compute-root-wrap-reference.html 
and see if the filters are set correctly for rootwrap.
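For reference, nova-docker's rootwrap filters live in a filters file under /etc/nova/rootwrap.d/; a sketch of what it typically contains (path and entries are from memory, so verify against the nova-docker tree):

```ini
# /etc/nova/rootwrap.d/docker.filters  (hypothetical path)
[Filters]
# nova-docker uses `ln` to expose the container's network namespace
# under /var/run/netns; without this filter, spawn fails at rootwrap
ln: CommandFilter, /bin/ln, root
```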


Anne


thank you,

A.





--
Anne Gentle
Rackspace
Principal Engineer
www.justwriteclick.com <http://www.justwriteclick.com>




[Openstack-operators] [nova-docker] Is volume attach/detach supported for container instances ?

2015-10-08 Thread Alvise Dorigo

Hi,
I have an OpenStack Kilo installation on CentOS7 and more or less the 
latest version of nova-docker (cloned and installed a week ago from the 
stackforge repo 
https://github.com/stackforge/nova-docker/tree/stable/kilo), installed on 
two of the four compute nodes.


Rather than trying blindly and then wondering whether a failure is down 
to a configuration error, I would prefer to ask first: does the current 
nova-docker code support cinder volume attach/detach on docker instances 
(aka containers)?


thank you,

Alvise



[Openstack-operators] [nova-docker] Updated docker images are not copied into the hypervisor

2015-10-12 Thread Alvise Dorigo

Hi,
I've just run into a bug (or possibly a feature?) of nova-docker.

I customized a docker image fedora/ssh and uploaded into glance.
I've successfully instantiated a container from that image.
Then I've terminated the container and removed the image from glance.
I've also removed the docker image from the local registry (docker rmi
fedora/ssh) in the controller node where the build took place and where glance 
is running on.

Then, I've built a new docker image from fedora/ssh (adding software).
I've uploaded this new image into glance.
I've instantiated it.
When I logged into the container I didn't find the newly added
software.

To work around this bug (or feature) I had to remove the old fedora/ssh
image from the hypervisor's local docker registry.

nova-docker seems to be "bound" to the image's name instead of the UUID
generated by glance.

Is this the intended behaviour, or am I doing something wrong?
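If that cache-by-name hypothesis is right, the behaviour can be pictured with a small sketch (pure shell with a simulated cache; on a real hypervisor the equivalent of evict is running `docker rmi fedora/ssh` on each compute node):

```shell
# Simulated local docker cache: "name id" pairs, one per line.
CACHE="fedora/ssh aaa111"

# Resolve by name, as nova-docker effectively does: a cache hit
# short-circuits the pull, even if glance now holds a newer image.
resolve() {
  name="$1"
  echo "$CACHE" | awk -v n="$name" '$1 == n { print $2 }'
}

# Workaround: drop the stale name so the next boot re-pulls from glance.
evict() {
  name="$1"
  CACHE=$(echo "$CACHE" | awk -v n="$name" '$1 != n')
}

resolve fedora/ssh   # prints: aaa111 (the stale cached image)
```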

thanks,

Alvise




[Openstack-operators] Problems with https endpoints with IceHouse-->Juno-->Kilo migration

2015-10-27 Thread Alvise Dorigo
I have an IceHouse OpenStack installation where the endpoints use https 
as the protocol (i.e. https is specified in the keystone.endpoint table).


Now, I want to migrate this installation to Kilo. For this purpose I 
followed these steps:


- I scratched the controller/network node (the DB was untouched; it 
resides on different machines) and re-installed it with CentOS7

- I installed the Juno rpms (without configuring Juno services)
- I synced the keystone DB to the Juno version using the usual "db_sync" 
command:


su -s /bin/sh -c "keystone-manage db_sync" keystone

- Then I scratched the controller/network node, re-installed it again 
with CentOS7, and installed all the Kilo RPMs required to sync the DB to 
the Kilo version.
With all the Kilo RPMs installed, I went on to configure the Kilo 
Keystone service as described in the official guide on 
docs.openstack.org.


That installation configures Keystone to expose the v3 API, which can be 
used only with the openstack client (and not with the legacy keystone 
one). But there seems to be a problem with the https endpoints.


After setting the following env vars

export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=
export OS_AUTH_URL=https://cloud-areapd-test.pd.infn.it:35357/v3
export OS_CACERT=/etc/grid-security/certificates/INFN-CA-2006.pem

openstack fails with the following error:

[root@controller-01 ~]# openstack user list
/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
  InsecurePlatformWarning
ERROR: openstack Unable to establish connection to http://cloud-areapd-test.pd.infn.it:35357/v3/auth/tokens



Digging deeper, I see that the Keystone service returns an "http" 
endpoint despite the fact that https is stored in the backend database:


[root@controller-01 ~]# curl -g -i --cacert "/etc/grid-security/certificates/INFN-CA-2006.pem" -X GET https://cloud-areapd-test.pd.infn.it:35357/v3 -H "Accept: application/json" -H "User-Agent: python-keystoneclient"
HTTP/1.1 200 OK
Vary: X-Auth-Token
Content-Type: application/json
Content-Length: 268
X-Openstack-Request-Id: req-a47a2873-f81b-490a-b249-7f970754914b
Date: Tue, 27 Oct 2015 10:32:20 GMT
Connection: close

{"version": {"status": "stable", "updated": "2015-03-30T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.4", "links": [{"href": "http://cloud-areapd-test.pd.infn.it:35357/v3/", "rel": "self"}]}}


The curl command above was grabbed from the output of "openstack --debug 
user list".


If I switch OS_AUTH_URL back to the v2.0 API, the keystone client works 
correctly (and openstack stops working) and shows me the users, tenants, 
etc.:


[root@controller-01 ~]# export OS_AUTH_URL=https://cloud-areapd-test.pd.infn.it:35357/v2.0
[root@controller-01 ~]# keystone user-list
/usr/lib/python2.7/site-packages/keystoneclient/shell.py:65: DeprecationWarning: The keystone CLI is deprecated in favor of python-openstackclient. For a Python library, continue using python-keystoneclient.
  'python-keystoneclient.', DeprecationWarning)
/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
  InsecurePlatformWarning
+----------------------------------+----------+---------+---------------------+
|                id                |   name   | enabled |        email        |
+----------------------------------+----------+---------+---------------------+
| 62e64ee442cc42e7b07c0209010148c3 |  admin   |   True  |     ADMIN_EMAIL     |
| 96ab92677d43476a820428e281d229f2 |  cinder  |   True  |  cin...@example.co  |
| e737d7af46ab46838bbef6c5d16aff7e |  glance  |   True  |  gla...@example.com |
| 84546c19c2b242738235022f73b2e9c2 | neutron  |   True  | neut...@example.com |
| b99c5365b6c448d4956fdae02fe0ef11 |   nova   |   True  |   n...@example.com  |
| 3c2bde47975b4f738b316d87f3727ec3 | sgaravat |   True  |                     |
+----------------------------------+----------+---------+---------------------+

Re: [Openstack-operators] Problems with https endpoints with IceHouse-->Juno-->Kilo migration

2015-10-28 Thread Alvise Dorigo

Hi Matt, thank you for your reply.
I think I've resolved my problem by setting 'admin_endpoint' and 
'public_endpoint' in the [DEFAULT] section of keystone.conf (they are not 
mentioned in the installation guide, but they are in this thread: 
https://goo.gl/3JAOHb):


admin_endpoint = http://controller_mgmt_private_ip:35357
public_endpoint = https://public_ip:5000
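For completeness, in keystone.conf those two options live in the [DEFAULT] section (host names below are placeholders); they override the scheme and host Keystone would otherwise echo back in its version-discovery links:

```ini
[DEFAULT]
# Without these, Keystone builds the "links" in its version document
# from what the WSGI layer saw, which can be plain http behind TLS.
admin_endpoint = http://controller_mgmt_private_ip:35357
public_endpoint = https://public_ip:5000
```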

and everything is now working.

Thank you and sorry for the noise,

Alvise


On 27/10/2015 21:18, Matt Fischer wrote:
What's your output from keystone endpoint-list or keystone catalog (or 
the DB table)? Is it possible the admin URL is simply listed as http?


On Tue, Oct 27, 2015 at 9:32 PM, Alvise Dorigo <alvise.dor...@pd.infn.it> wrote:


I have an IceHouse OpenStack installation, where the endpoints are
using https as protocol (i.e. in the keystone.endpoint table  the
https protocol is specified).

Now, I want to migrate this installation to Kilo. For this purpose
I followed these steps:

- I scratched the controller/network node, but the DB was
untouched (it resides on different machines), and re-installed
with CentOS7
- I installed the Juno rpms (without configuring Juno services)
- I synced the keystone DB to the Juno version using the usual
"db_sync" command:

su -s /bin/sh -c "keystone-manage db_sync" keystone

- Then, I scratched the controller/network node, re-installed
again with CentOS7 and installed all the Kilo RPMs required to
sync the DB to the Kilo version.
With all the Kilo's RPM installed, I started from there to
configure the Kilo Keystone service as described in the official
guide docs.openstack.org <http://docs.openstack.org>.

That installation configures Keystone exposing v3 API, which can
be used only with the openstackclient (and not by the legacy
keystone one). But it seems there's a problem with the https
endpoints.

After setting the following env vars

export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=
export OS_AUTH_URL=https://cloud-areapd-test.pd.infn.it:35357/v3
export OS_CACERT=/etc/grid-security/certificates/INFN-CA-2006.pem

openstack fires out the following error:

[root@controller-01 ~]# openstack user list
/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/ssl_.py:90:
InsecurePlatformWarning: A true SSLContext object is not
available. This prevents urllib3 from configuring SSL
appropriately and may cause certain SSL connections to fail. For
more information, see

https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.

  InsecurePlatformWarning
ERROR: openstack Unable to establish connection to
http://cloud-areapd-test.pd.infn.it:35357/v3/auth/tokens


With a deeper investigation I see that the Keystone service
returns an "http" protocol for the endpoint despite the fact that
there's https in the backend database:

[root@controller-01 ~]# curl -g -i --cacert
"/etc/grid-security/certificates/INFN-CA-2006.pem" -X GET
https://cloud-areapd-test.pd.infn.it:35357/v3 -H "Accept:
application/json" -H "User-Agent: python-keystoneclient"
HTTP/1.1 200 OK
Vary: X-Auth-Token
Content-Type: application/json
Content-Length: 268
X-Openstack-Request-Id: req-a47a2873-f81b-490a-b249-7f970754914b
Date: Tue, 27 Oct 2015 10:32:20 GMT
Connection: close

{"version": {"status": "stable", "updated":
"2015-03-30T00:00:00Z", "media-types": [{"base":
"application/json", "type":
"application/vnd.openstack.identity-v3+json"}], "id": "v3.4",
"links": [{"href":
"http://cloud-areapd-test.pd.infn.it:35357/v3/";, "rel": "self"}]}}

The above curl command is grabbed from the output of "openstack
--debug user list".

If I switch back to v2.0 API in env var OS_AUTH_URL, keystone
client works correctly (and openstack stops working) and shows me
the users, tenants, etc.:

[root@controller-01 ~]# export
OS_AUTH_URL=https://cloud-areapd-test.pd.infn.it:35357/v2.0
[root@controller-01 ~]# keystone user-list
/usr/lib/python2.7/site-packages/keystoneclient/shell.py:65:
DeprecationWarning: The keystone CLI is deprecated in favor of
python-openstackclient. For a Python library, continue using
python-keystoneclient.
  'python-keystoneclient.', DeprecationWarning)
/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/ssl_.py:90:
InsecurePlatformWar

[Openstack-operators] neutron metadata-agent HA

2015-12-10 Thread Alvise Dorigo

Hi,
I've installed the Kilo release of OpenStack. An interesting feature for 
us is the new highly available Neutron L3 agent which, as far as I 
understand, can be set to active/active mode.
But I verified what is reported in the HA Guide 
(http://docs.openstack.org/ha-guide/networking-ha-metadata.html): the 
metadata agent cannot be configured in high-availability active/active mode.


Below that sentence, I read:

"[TODO: Update this information. Can this service now be made HA in 
active/active mode or do we need to pull in the instructions to run this 
service in active/passive mode?]"


So my question is: is there any progress on this topic? Is there a way 
(something like a cron job script) to make the metadata agent redundant 
without involving clustering software such as Pacemaker/Corosync?
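Short of Pacemaker/Corosync, one stopgap along the cron-job line is a watchdog that restarts the agent when it is not active; a hedged sketch (unit name and paths may differ per distro; note this only revives a dead process, it is not active/active HA):

```shell
# Cron watchdog sketch for neutron-metadata-agent.
# decide() is kept pure so the logic is easy to test.
decide() {  # $1 = output of `systemctl is-active neutron-metadata-agent`
  [ "$1" = "active" ] && echo keep || echo restart
}

decide active    # prints: keep
decide failed    # prints: restart
decide inactive  # prints: restart

# Wire-up, run from cron every minute (assumed unit name):
#   state=$(systemctl is-active neutron-metadata-agent)
#   [ "$(decide "$state")" = restart ] && systemctl restart neutron-metadata-agent
```

During a restart, metadata requests on that node still fail briefly; for real redundancy this wants to be paired with the HA L3 agents mentioned above.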


Thanks,

Alvise



[Openstack-operators] Mitaka's install doc doesn't tell anymore about GRE

2016-05-10 Thread Alvise Dorigo

Hi,
I'm reading the Mitaka installation guide and noticed that the GRE 
tunneling mechanism is no longer mentioned (it was, at least until Kilo).

What happened to GRE? Is it no longer usable in Mitaka?

I have a Kilo production IaaS that uses GRE and I need to migrate it to 
Mitaka; do I have any chance of keeping GRE?


thanks,

Alvise



[Openstack-operators] Keystone's DB_SYNC from Kilo to Liberty

2016-06-23 Thread Alvise Dorigo

Hi,
I have a Kilo installation which I want to migrate to Liberty.
I've installed the Liberty Keystone RPMs and configured the minimum 
needed to upgrade the DB schema (the "connection" parameter in the 
[database] section of keystone.conf).

Then, I've tried to run

su -s /bin/sh -c "keystone-manage db_sync" keystone

but it failed with the following error:

2016-06-23 13:20:50.191 22423 CRITICAL keystone [-] KeyError: 



which is quite useless.
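A bare KeyError like this usually means sqlalchemy-migrate read a schema version from the database that the installed Keystone code has no migration for. Checking what version the DB claims can narrow it down (a sketch; the table name below is the one Kilo/Liberty-era Keystone uses, so verify it on your deployment):

```sql
-- Run against the keystone database on the DB host
SELECT repository_id, version FROM migrate_version;
```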

Any suggestions?

many thanks,

Alvise



Re: [Openstack-operators] [cloud] Keystone's DB_SYNC from Kilo to Liberty

2016-06-27 Thread Alvise Dorigo



On 27/06/2016 01:54, Sam Morrison wrote:

That usually means your DB is at version 86 (you can check the DB table to see; 
the table is called migration_version or something)
BUT your keystone version is older and doesn't know about version 86.

Is it possible that the keystone version you're running is older and doesn't 
know about version 86?


Hi Sam,
yes, it is possible. Actually, there was an error in my procedure, and a 
complete restore of the Kilo database solved the problem.


thanks for the support and sorry for the noise.

A.



[Openstack-operators] How setup multiple memcache servers for Dashboard

2016-09-27 Thread Alvise Dorigo

Hi,

is there a way to make the dashboard use more than one memcached
server, as other components do (e.g. nova, neutron, and cinder with the
memcached_servers parameter)?

thank you,


 Alvise
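One answer that has worked elsewhere: Horizon's cache is plain Django CACHES configuration, and the memcached backend accepts a list of server locations. A sketch for /etc/openstack-dashboard/local_settings (host names are placeholders; Django shards keys across the listed servers rather than replicating them):

```python
# Django-level cache config used by Horizon; unlike the
# memcached_servers option in nova/neutron/cinder, this takes
# a list of LOCATION entries.
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': [
            'controller-01:11211',  # placeholder hosts
            'controller-02:11211',
        ],
    },
}
```

For sessions to survive the loss of one memcached, the SESSION_ENGINE choice matters too; the cache backend alone gives sharding, not redundancy.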




Re: [Openstack-operators] How to tune scheduling for "Insufficient compute resources" (race conditions ?)

2016-11-30 Thread Alvise Dorigo



On 11/30/2016 04:18 PM, Belmiro Moreira wrote:

How many nova-schedulers are you running?
You can hit this issue when multiple nova-schedulers select the same 
compute node for different instances.




we're running two nova-scheduler processes. Could you explain in more 
detail, please?


many thanks,

Alvise
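The race Belmiro refers to: each scheduler filters and weighs hosts against its own snapshot of host state, so two schedulers working from the same stale snapshot can select the same "best" host for different instances; the loser's resource claim then fails on the compute node with "Insufficient compute resources" and the request is retried. A toy illustration (host names and free-RAM numbers invented):

```shell
# Two schedulers weigh hosts from the same stale snapshot of free RAM
# and independently pick the "best" one: same host, double booking.
SNAPSHOT="compute-01 4096
compute-02 8192
compute-03 2048"

pick_host() {  # highest free-RAM host from the snapshot
  echo "$SNAPSHOT" | sort -k2 -rn | head -1 | awk '{print $1}'
}

pick_host   # scheduler A prints: compute-02
pick_host   # scheduler B, same snapshot, prints: compute-02 again
```

Common mitigations are running a single scheduler, leaving headroom in the allocation ratios, or simply relying on the retry mechanism to reschedule the losing request elsewhere.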