Re: [Openstack] running HA cluster of guests within openstack

2012-04-16 Thread ikke
On Fri, Apr 13, 2012 at 5:45 PM, Jason Kölker jkoel...@rackspace.com wrote:
 On Fri, 2012-04-13 at 12:31 +0300, ikke wrote:

 1. Private networks between guests
   - Doable now using Quantum
 1.1. Defining VLANs visible to guest machines to separate the cluster's
 internal traffic;
        VLAN tags should not be stripped by the host (QinQ)

 VLANs and Quantum private networks are pretty much the same thing, why
 would you want both?

For legacy reasons. The cluster currently handles its internal network
with VLANs, and for that the cloud layer should simply virtualize the
HW functionality. It would need to provide the VLAN layer for the
guests for the time being, until the guests can be modified not to
require it and to handle VLAN network configuration via OpenStack
interfaces instead.

Some of the questions stem from this legacy need. OpenStack offers
similar functionality, but if you intend to bring legacy apps into the
cloud as such, plenty of modifications are needed to adapt the legacy
SW to cloud concepts. Adaptation takes time, and in some cases it
might be cheaper & faster to adapt the cloud layer to provide the
legacy HW as a virtualized HW abstraction layer.

By legacy SW I mean a HUGE amount of code written over decades, which
is not easily modifiable.

 1.2. Set pre-defined MAC addresses for the guests, needed by non-IP
        traffic within the guest cluster (layer2 addressing)
 If you send the mac address to Melange when you create the interface it
 will record it for that instance:

 http://melange.readthedocs.org/en/latest/apidoc.html#interfaces

Thanks for the link, it is exactly what I was looking for!
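For anyone else reading the archive: a rough sketch of what that interface-create call could look like. The URL, port, and body field names (e.g. "mac_address") are my assumptions from the apidoc linked above, not verified against a running Melange, so check them before relying on this.

```python
import json

# Assumed Melange endpoint; port and path are guesses from the apidoc.
MELANGE_INTERFACES = "http://localhost:9898/v0.1/ipam/interfaces"

def interface_body(tenant_id, mac):
    # Pre-defined MAC so the guest keeps its layer-2 identity
    return {"interface": {"tenant_id": tenant_id, "mac_address": mac}}

def create_interface(tenant_id, mac):
    # POST the interface description; Melange records the MAC for the
    # instance's interface (per the apidoc above).
    import urllib.request
    req = urllib.request.Request(
        MELANGE_INTERFACES,
        json.dumps(interface_body(tenant_id, mac)).encode(),
        {"Content-Type": "application/json"})
    return json.load(urllib.request.urlopen(req))
```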

 -it

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] running HA cluster of guests within openstack

2012-04-16 Thread ikke
On Fri, Apr 13, 2012 at 2:53 PM, Pádraig Brady p...@draigbrady.com wrote:
 On 04/13/2012 10:31 AM, ikke wrote:
 I'll just point out two early stage projects
 that, used in combination, can provide an HA solution.

 http://wiki.openstack.org/Heat
 http://wiki.openstack.org/ResourceMonitorAlertsandNotifications
 cheers,
 Pádraig.

Thanks for the links, I'll look into them. Having a pluggable
monitoring interface looks good. From a quick look I don't see how the
local driver connects to libvirt, or whether the alert is delivered
promptly or via periodic polling. I need to take a further look into it.

Hopefully a local HW watchdog emulated in Qemu could somehow be
connected to the plugin framework to allow fast reaction times when a
guest gets stuck.

Also, it would make sense to make some kind of local decision about
rebooting a stuck guest immediately, instead of taking the time to
report it centrally and waiting for the central manager's decision.
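On the watchdog point: libvirt/Qemu can already emulate a HW watchdog locally, so a stuck guest gets reset on the host without any central round-trip. A minimal domain XML fragment (the model and action values are the standard libvirt ones; wiring the event into the monitoring plugin framework is the open part):

```
<!-- inside the domain's <devices> section -->
<!-- emulated Intel i6300ESB watchdog: the guest runs a watchdog daemon
     that pets /dev/watchdog; if it stops, qemu performs the action -->
<watchdog model='i6300esb' action='reset'/>
```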

cheers,
Ilkka



Re: [Openstack] running HA cluster of guests within openstack

2012-04-16 Thread ikke
One more item for the HA features: hot plugging.

2.8. Hot plug pre-warning events.
- Nova should tell the registered client that a node/guest is going to
be shut off,
  and the remote entity would be given time to ack that.



[Openstack] running HA cluster of guests within openstack

2012-04-13 Thread ikke
I'm likely not the first one to ask this, but since I didn't find a
thread about it, I'll start one.

Is there any shared experience available on OpenStack's capabilities
for running a cluster of guests in the cloud? Do you have experience
with the following questions, or links to more info? The questions
relate to running a legacy HA cluster in a virtual env, and moving it
into the cloud...

1. Private networks between guests
  - Doable now using Quantum
1.1. Defining VLANs visible to guest machines to separate the cluster's
internal traffic;
   VLAN tags should not be stripped by the host (QinQ)
1.2. Set pre-defined MAC addresses for the guests, needed by non-IP
   traffic within the guest cluster (layer2 addressing)
  - will Melange do this? According to the docs it's not planned.
2. HA capabilities
2.1. Failure notification times need to be fast, i.e. no TCP timeout allowed
  - there seems to be some activity to integrate pacemaker
2.2. Failure notification of both guests and hosts needs to be included
2.3. The guest cluster controller should be able to monitor the states
  and get fast notifications of the events.
  - rather in milliseconds than in seconds
  - basically the host should have the parent of the guest pid notify
    of a child process failure
  - the host should have a virtual watchdog noticing when a guest is stuck
2.4. Failure recovery time: how fast can OpenStack bring up a failed guest?
  - any measurements of the time from a failure to noticing it,
    and until the guest is restarted and back up?
2.5. virtual HW manager (guest isolation)
  - Any plans to integrate a piece from which the state of a guest
    could be reliably queried? E.g. guaranteeing that if I ask to
    power off another guest, it gets done within a given time
    (millisecs), rather than pending on e.g. some TCP timeout and thus
    leading to a split-brain case of running two similar guests
    simultaneously: starting another guest to replace a shut-down one,
    but due to some communications error the first one didn't really
    shut down before the new one was already up.
 - it should be possible to reliably cut off a guest's network and disk
   access to guarantee the above case
2.6. Shared disks
 - Could there be a shared SCSI device concept for the legacy HW
   abstraction?
 - Qemu/KVM supports this; what would it take to make OpenStack
   understand such disk devices?
2.7. Isolation of redundant nodes
 - In some cases nodes need to back each other up (2N, N+1); there
   should be a way to make sure they run on different hosts.
 - This project might be aiming for that?
http://wiki.openstack.org/DistributedScheduler
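On 2.7, a hedged sketch of what the scheduler side could look like: the filter scheduler work includes a DifferentHostFilter, and the client can pass scheduler hints at boot time. The hint key and the novaclient call are from my reading of the Essex-era docs, so treat the names as assumptions:

```python
# Sketch: build the anti-affinity scheduler hint understood by the
# DifferentHostFilter, so a standby node lands on a different host
# than its active peer. The "different_host" key is an assumption
# from Essex-era documentation; verify against your release.

def different_host_hint(peer_instance_ids):
    # list of instance UUIDs whose hosts must be avoided
    return {"different_host": list(peer_instance_ids)}

# usage with python-novaclient (nova is an authenticated Client):
#   nova.servers.create("standby", image, flavor,
#                       scheduler_hints=different_host_hint([active_id]))
```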

This was something off the top of my head; it would be interesting to
hear your thoughts about these issues. The need is coming from the
telco world, which would need a telco-cloud with such more real-time
features in it. Certainly the same applies to many other legacy
environments too.

BR,

 Ilkka Tengvall



[Openstack] upgrade path from oneiric + managedit diablo to precise+essex?

2012-04-12 Thread ikke
Hi,

is there a documented upgrade path yet from using the managedit repo
for diablo on oneiric to precise with essex?

I installed my cluster according to the instructions from
docs.openstack.org for a basic setup with dashboard. Some things still
don't work (errors), and I think it might be better to upgrade instead
of fixing the old setup. But do you have experience with what exactly
breaks, or is it as straightforward as just upgrading the packages and
restarting (it never is :) )?

 -i



[Openstack] authentication help needed, added keystone to system

2012-01-26 Thread ikke
could anyone please explain the relation between zones in nova-manage
and regions in keystone-manage? And help me get auth working again.

My fedora host test system got messed up after installing keystone.
Now I suspect the region/zone mismatch could be the reason for the
authentication failure. Should they be the same?

I got to this point by too much copy-pasting of the instructions
without fully understanding the details... :( The system worked before
keystone.


---
# nova-manage host list
host              zone
blade5            nova
blade6            nova
blade7            nova
blade8            nova
---


---
# keystone-manage  endpointTemplates list
All EndpointTemplates
service region  Public URL
---
novaRegionOne   http://10.20.106.105:8774/v1.1/%tenant_id%
glance  RegionOne   http://10.20.106.105:9292/v1
swift   RegionOne   http://10.20.106.105:8080/v1/AUTH_%tenant_id%
keystoneRegionOne   http://10.20.106.105:5000/v2.0
nova_compat RegionOne   http://10.20.106.105:8774/v1.0/
---

this works for admin:

---
$ curl -d '{"auth":{"passwordCredentials":{"username": "admin",
"password": "secret"}}}' -H "Content-type: application/json" \
  http://node1:35357/v2.0/tokens
{"access": {"token": {"expires": "2015-02-05T00:00:00", "id":
    "999888777666", "tenant": {"id": "2", "name": "admin"}},
  "serviceCatalog": [
    {"endpoints": [{"adminURL": "http://10.0.0.1:8774/v1.1/2",
        "region": "RegionOne", "internalURL": "http://10.0.0.1:8774/v1.1/2",
        "publicURL": "http://10.20.106.105:8774/v1.1/2"}],
      "type": "compute", "name": "nova"},
    {"endpoints": [{"adminURL": "http://10.0.0.1:9292/v1",
        "region": "RegionOne", "internalURL": "http://10.0.0.1:9292/v1",
        "publicURL": "http://10.20.106.105:9292/v1"}],
      "type": "image", "name": "glance"},
    {"endpoints": [{"adminURL": "http://10.0.0.1:8080/v1.0/",
        "region": "RegionOne", "internalURL": "http://10.0.0.1:8080/v1/AUTH_2",
        "publicURL": "http://10.20.106.105:8080/v1/AUTH_2"}],
      "type": "storage", "name": "swift"},
    {"endpoints": [{"adminURL": "http://10.0.0.1:35357/v2.0",
        "region": "RegionOne", "internalURL": "http://10.0.0.1:5000/v2.0",
        "publicURL": "http://10.20.106.105:5000/v2.0"}],
      "type": "identity", "name": "keystone"},
    {"endpoints": [{"adminURL": "http://10.0.0.1:8774/v1.0",
        "region": "RegionOne", "internalURL": "http://10.0.0.1:8774/v1.0",
        "publicURL": "http://10.20.106.105:8774/v1.0/"}],
      "type": "compute", "name": "nova_compat"}],
  "user": {"id": "2", "roles": [{"id": "4", "name": "Admin"},
      {"id": "4", "name": "Admin"}, {"id": "4", "name": "Admin"},
      {"id": "6", "name": "KeystoneServiceAdmin"}], "name": "admin"}}}
---

but as a user it always gives an access error:

---
$ curl -d '{"auth":{"passwordCredentials":{"username": "demo",
"password": "guest"}}}' -H "Content-type: application/json" \
  http://node1:8774/v1.1/tokens
<html>
 <head>
  <title>401 Unauthorized</title>
 </head>
 <body>
  <h1>401 Unauthorized</h1>
  This server could not verify that you are authorized to access the
document you requested. Either you supplied the wrong credentials
(e.g., bad password), or your browser does not understand how to
supply the credentials required.<br /><br />
Authentication required
 </body>
</html>
---

What could possibly cause this?

---
# tail  -1 /var/log/keystone/admin.log
2012-01-26 16:11:01  WARNING [eventlet.wsgi.server] 10.0.0.1 - -
[26/Jan/2012 16:11:01] "POST /v2.0/tokens HTTP/1.1" 200 1519 0.084546
---
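One thing worth double-checking: the working admin request above goes to keystone (:35357), while the failing one posts to nova's port (:8774/v1.1/tokens), which is not a token endpoint. A small sketch of the same token request aimed at the keystone service endpoint from the catalog above (port 5000); the body shape mirrors the working curl:

```python
import json
import urllib.request

# Token requests belong to the keystone service endpoint (port 5000 in
# the catalog above), not nova's 8774.
KEYSTONE_TOKENS = "http://node1:5000/v2.0/tokens"

def token_request_body(username, password):
    # Same body shape as the working admin curl above
    return {"auth": {"passwordCredentials": {"username": username,
                                             "password": password}}}

def get_token(username, password):
    req = urllib.request.Request(
        KEYSTONE_TOKENS,
        json.dumps(token_request_body(username, password)).encode(),
        {"Content-Type": "application/json"})
    return json.load(urllib.request.urlopen(req))
```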



versions:

$ rpm -qa 'openstack*'
openstack-nova-doc-2011.3-18.fc17.noarch
openstack-glance-doc-2011.3-2.fc16.noarch
openstack-glance-2011.3-2.fc16.noarch
openstack-swift-doc-1.4.4-1.fc17.noarch
openstack-nova-2011.3-18.fc17.noarch
openstack-keystone-2011.3.1-2.fc17.noarch



Re: [Openstack] Proposal for new devstack (v2?)

2012-01-23 Thread ikke
On Sat, Jan 21, 2012 at 5:47 AM, Joshua Harlow harlo...@yahoo-inc.com wrote:

 Note rhel6 isn’t fully there yet. But in progress ;)


Is anyone working on a fedora version of it? Any known major issues
preventing it? I quickly added fedora labels next to the RHEL6 ones in
the code, and added the db.py stuff. In a quick test it does the nova
mysql config, and then stops at the rabbitmq password change with the
command returning exit code 2.
diff --git a/conf/pkgs/db.json b/conf/pkgs/db.json
index b044d10..d684551 100644
--- a/conf/pkgs/db.json
+++ b/conf/pkgs/db.json
@@ -66,5 +66,35 @@
 }
 ]
 }
+},
+"fedora-16": {
+    "mysql": {
+        "version": "5.5.18-1.fc16",
+        "allowed": "=",
+        "removable": true
+    },
+    "mysql-server": {
+        "version": "5.5.18-1.fc16",
+        "allowed": "=",
+        "removable": true,
+        "post-install": [
+            {
+                # Make sure it'll start on reboot
+                "run_as_root": true,
+                "cmd": ["chkconfig", "mysqld", "on"]
+            },
+            {
+                # Start the mysql service
+                "run_as_root": true,
+                "cmd": ["service", "mysqld", "start"]
+            },
+            {
+                # Set the root password
+                "run_as_root": true,
+                "cmd": ["mysqladmin", "-u", "root",
+                        "password", "%PASSWORD%"]
+            }
+        ]
+    }
 }
 }
diff --git a/devstack/components/db.py b/devstack/components/db.py
index b694397..9895e0b 100644
--- a/devstack/components/db.py
+++ b/devstack/components/db.py
@@ -28,10 +28,10 @@ MYSQL = 'mysql'
 DB_ACTIONS = {
 MYSQL: {
 #hopefully these are distro independent, these should be since they are invoking system init scripts
-        'start': ["service", "mysql", 'start'],
-        'stop': ["service", 'mysql', "stop"],
-        'status': ["service", 'mysql', "status"],
-        'restart': ["service", 'mysql', "status"],
+        'start': ["service", "mysqld", 'start'],
+        'stop': ["service", 'mysqld', "stop"],
+        'status': ["service", 'mysqld', "status"],
+        'restart': ["service", 'mysqld', "restart"],
 #
 'create_db': ['mysql', '--user=%USER%', '--password=%PASSWORD%', '-e', 'CREATE DATABASE %DB%;'],
 'drop_db': ['mysql', '--user=%USER%', '--password=%PASSWORD%', '-e', 'DROP DATABASE IF EXISTS %DB%;'],
diff --git a/devstack/progs/actions.py b/devstack/progs/actions.py
index 7478a52..9bf17ff 100644
--- a/devstack/progs/actions.py
+++ b/devstack/progs/actions.py
@@ -43,6 +43,7 @@ LOG = logging.getLogger("devstack.progs.actions")
 _PKGR_MAP = {
 settings.UBUNTU11: apt.AptPackager,
 settings.RHEL6: yum.YumPackager,
+settings.FEDORA16: yum.YumPackager,
 }
 
 # This is used to map an action to a useful string for
diff --git a/devstack/settings.py b/devstack/settings.py
index 305ad55..534b6dd 100644
--- a/devstack/settings.py
+++ b/devstack/settings.py
@@ -25,6 +25,7 @@ LOG = logging.getLogger("devstack.settings")
 # ie in the pkg/pip listings so update there also!
 UBUNTU11 = "ubuntu-oneiric"
 RHEL6 = "rhel-6"
+FEDORA16 = "fedora-16"
 
 # What this program is called
 PROG_NICE_NAME = "DEVSTACK"
@@ -36,7 +37,8 @@ POST_INSTALL = 'post-install'
 # Default interfaces for network ip detection
 IPV4 = 'IPv4'
 IPV6 = 'IPv6'
-DEFAULT_NET_INTERFACE = 'eth0'
+#DEFAULT_NET_INTERFACE = 'eth0'
+DEFAULT_NET_INTERFACE = 'br_iscsi'
 DEFAULT_NET_INTERFACE_IP_VERSION = IPV4
 
 # Component name mappings
@@ -120,6 +122,7 @@ STACK_CONFIG_LOCATION = os.path.join(STACK_CONFIG_DIR, "stack.ini")
 KNOWN_DISTROS = {
 UBUNTU11: re.compile('Ubuntu(.*)oneiric', re.IGNORECASE),
 RHEL6: re.compile('redhat-6\.(\d+)', re.IGNORECASE),
+FEDORA16: re.compile('fedora-16(.*)verne', re.IGNORECASE),
 }
 
 


[Openstack] how to verify nova uses openstackx extensions?

2012-01-18 Thread ikke
Hi,

I'm struggling with my dashboard setup on fedora, and I would like to verify
that nova sees the openstackx extensions. I don't see anything in the logs
about it:

grep -ri openstackx /var/log/{nova,keystone,glance}/*

returns nothing. The system is fedora 16 + openstack from rawhide, and
horizon from git using the diablo branch.

The only thing related to the keyword admin (in the openstackx extensions)
is this log, which doesn't necessarily relate to the issue:


2012-01-11 13:48:25,026 AUDIT extensions [-] Loading extension file:
admin_only.py
2012-01-11 13:48:25,026 WARNING extensions [-] Did not find expected name
Admin_only in
/usr/lib/python2.7/site-packages/nova/api/openstack/contrib/admin_only.py


I have this on my nova.conf:


--osapi_extensions_path=/home/user/src/openstackx/extensions
--osapi_extension=nova.api.openstack.v2.contrib.standard_extensions
--osapi_compute_extension=extensions.admin.Admin
--osapi_extension=extensions.admin.Admin
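A side note on the "Did not find expected name Admin_only" warning above: the loader seems to derive the expected class name from the file name (stem with the first letter upper-cased). A quick local check along those lines; this mirrors my reading of the warning, not nova's exact code:

```python
import os

def expected_extension_class(path):
    # e.g. admin_only.py -> "Admin_only": the loader warns when no
    # attribute with this name exists in the module
    stem = os.path.splitext(os.path.basename(path))[0]
    return stem.capitalize()

def check_extensions_dir(ext_dir):
    # list (file, expected class name) for each extension module, so
    # you can eyeball which files the loader would reject
    return [(f, expected_extension_class(f))
            for f in sorted(os.listdir(ext_dir)) if f.endswith(".py")]
```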


I know openstackx is deprecated, but so far I have failed to find a
substitute...

BR,

 -ikke