Re: [Openstack] HPC with Openstack?

2011-12-06 Thread Daniel P. Berrange
On Mon, Dec 05, 2011 at 09:07:06PM -0500, Lorin Hochstein wrote:
 
 
 On Dec 4, 2011, at 7:46 AM, Soren Hansen wrote:
 
  2011/12/4 Lorin Hochstein lo...@isi.edu:
  Some of the LXC-related issues we've run into:
  
  - The CPU affinity issue on LXC you mention. Running LXC with OpenStack,
  you don't get proper space sharing out of the box; each instance actually
  sees all of the available CPUs. It's possible to restrict this, but that
  functionality doesn't seem to be exposed through libvirt, so it would
  have to be implemented in nova.

I recently added support for CPU affinity to the libvirt LXC driver. It will
be in libvirt 0.9.8. I also wired up various other cgroups tunables including
NUMA memory binding, block I/O tuning and CPU quota/period caps.
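To make the space-sharing idea concrete, here is a rough sketch of how a compute driver might carve an exclusive cpuset out of the host for each instance. The function names and placement policy are illustrative assumptions, not nova or libvirt API; the resulting string is the sort of value that would end up in a domain's `<vcpu cpuset='...'>` attribute or the cgroup's cpuset.cpus file.

```python
# Illustrative sketch only: compute an exclusive cpuset per instance.
# Nothing here is real nova/libvirt API; the names are assumptions.

def cpuset_syntax(cpus):
    """Collapse a sorted CPU list into cpuset syntax, e.g. [0,1,2,5] -> "0-2,5"."""
    out = []
    start = prev = cpus[0]
    for c in cpus[1:]:
        if c == prev + 1:
            prev = c
            continue
        out.append("%d-%d" % (start, prev) if start != prev else "%d" % start)
        start = prev = c
    out.append("%d-%d" % (start, prev) if start != prev else "%d" % start)
    return ",".join(out)

def allocate_cpuset(host_cpus, used, vcpus):
    """Pick `vcpus` host CPUs not already pinned to another instance."""
    free = [c for c in range(host_cpus) if c not in used]
    if len(free) < vcpus:
        raise ValueError("not enough free CPUs for exclusive placement")
    return cpuset_syntax(free[:vcpus])

# Example: on a 16-CPU host with CPUs 0-3 already taken, a 4-vCPU
# instance gets "4-7", which could then be applied via the cputune/cpuset
# tunables rather than letting the container see every host CPU.
```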

  - LXC doesn't currently support volume attachment through libvirt. We were
  able to implement a workaround by invoking lxc-attach inside of OpenStack
  instead (e.g., see
  https://github.com/usc-isi/nova/blob/hpc-testing/nova/virt/libvirt/connection.py#L482).
  But to be able to use lxc-attach, we had to upgrade the Linux kernel in
  RHEL6.1 from 2.6.32 to 2.6.38. This kernel isn't supported by SGI, which
  means that we aren't able to load the SGI NUMA-related kernel modules.

Can you clarify what you mean by volume attachment?

Are you talking about passing through host block devices, or hotplug of
further filesystems for the container ?

  Why not address these couple of issues in libvirt itself?

If you let me know what issues you have with libvirt + LXC in OpenStack,
I'll put them on my todo list.

Regards,
Daniel
-- 
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] HPC with Openstack?

2011-12-05 Thread Lorin Hochstein


On Dec 4, 2011, at 7:46 AM, Soren Hansen wrote:

 2011/12/4 Lorin Hochstein lo...@isi.edu:
 Some of the LXC-related issues we've run into:
 
 - The CPU affinity issue on LXC you mention. Running LXC with OpenStack, you
 don't get proper space sharing out of the box; each instance actually sees
 all of the available CPUs. It's possible to restrict this, but that
 functionality doesn't seem to be exposed through libvirt, so it would have
 to be implemented in nova.
 
 - LXC doesn't currently support volume attachment through libvirt. We were
 able to implement a workaround by invoking lxc-attach inside of OpenStack
 instead (e.g., see
 https://github.com/usc-isi/nova/blob/hpc-testing/nova/virt/libvirt/connection.py#L482).
 But to be able to use lxc-attach, we had to upgrade the Linux kernel in
 RHEL6.1 from 2.6.32 to 2.6.38. This kernel isn't supported by SGI, which
 means that we aren't able to load the SGI NUMA-related kernel modules.
 
 Why not address these couple of issues in libvirt itself?


I agree that adding support for this stuff within libvirt would be a better way 
to do it than coding around it in OpenStack. In our case, it was quicker to get 
something up and running this way than to add the functionality to libvirt.

Lorin
--
Lorin Hochstein, Computer Scientist
USC Information Sciences Institute
703.812.3710
http://www.east.isi.edu/~lorin




Re: [Openstack] HPC with Openstack?

2011-12-04 Thread Soren Hansen
2011/12/4 Lorin Hochstein lo...@isi.edu:
 Some of the LXC-related issues we've run into:

 - The CPU affinity issue on LXC you mention. Running LXC with OpenStack, you
 don't get proper space sharing out of the box; each instance actually sees
 all of the available CPUs. It's possible to restrict this, but that
 functionality doesn't seem to be exposed through libvirt, so it would have
 to be implemented in nova.

 - LXC doesn't currently support volume attachment through libvirt. We were
 able to implement a workaround by invoking lxc-attach inside of OpenStack
 instead (e.g., see
 https://github.com/usc-isi/nova/blob/hpc-testing/nova/virt/libvirt/connection.py#L482).
 But to be able to use lxc-attach, we had to upgrade the Linux kernel in
 RHEL6.1 from 2.6.32 to 2.6.38. This kernel isn't supported by SGI, which
 means that we aren't able to load the SGI NUMA-related kernel modules.

Why not address these couple of issues in libvirt itself?

-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/



Re: [Openstack] HPC with Openstack?

2011-12-03 Thread Muriel
On Fri, Dec 2, 2011 at 1:17 PM, Sandy Walsh sandy.wa...@rackspace.com wrote:
 I've recently had inquiries about High Performance Computing (HPC) on 
 Openstack. As opposed to the Service Provider (SP) model, HPC is interested 
 in fast provisioning, potentially short lifetime instances with precision 
 metrics and scheduling. Real-time vs. Eventually.

 Anyone planning on using Openstack in that way?

 If so, I'll direct those inquiries to this thread.

 Thanks in advance,
 Sandy


We are working on a cluster where the nodes are allocated dynamically
to the OpenStack environment. We plan to distribute the images with a
distributed filesystem like GPFS or Lustre, or via BitTorrent. We
would also like to add to OpenStack advance reservation of resources
and integration with the current cluster queue managers (Grid Engine
and LSF).

I'm not sure that all of these will be realized, but this is the idea.

Cheers,
Muriel



Re: [Openstack] HPC with Openstack?

2011-12-03 Thread Cole
First and foremost:
http://wiki.openstack.org/HeterogeneousSgiUltraVioletSupport

With NUMA and lightweight container technology (LXC / OpenVZ) you can
get very close to real hardware performance for certain HPC
applications.  The problem with technologies like LXC is that there
isn't much logic to address CPU affinity, which other hypervisors offer
(though those generally wouldn't be ideal for HPC).

On the interconnect side, there are plenty of Open-MX
(http://open-mx.gforge.inria.fr/) HPC applications running on everything
from single-channel 1 GbE to bonded 10 GbE.

This is an area I'm personally interested in; I've done some testing and
will be doing more.  If you are going to try HPC over Ethernet, Arista
makes the lowest-latency switches in the business.

Cole
Nebula

On Sat, Dec 3, 2011 at 11:11 AM, Tim Bell tim.b...@cern.ch wrote:

 At CERN, we are also faced with similar thoughts as we look to the cloud
 on how to match the VM creation performance (typically O(minutes)) with the
 required batch job system rates for a single program (O(sub-second)).

 Data locality (aiming to run a job close to its source data) makes this
 more difficult, along with fair share (aligning job priorities to achieve
 the agreed quotas between competing requests for a limited, shared
 resource).  The classic IaaS model of 'have credit card, will compute' does
 not apply for some private cloud use cases/users.

 We would be interested to discuss further with other sites.  There is
 further background from OpenStack Boston at http://vimeo.com/31678577.

 Tim
 tim.b...@cern.ch





Re: [Openstack] HPC with Openstack?

2011-12-03 Thread Lorin Hochstein
Hi Cole:

That link you posted refers to our work at ISI. We're currently running LXC as 
the hypervisor on our SGI UV. Other than performance, one of the issues with 
KVM is that it currently has a hard-coded limit on how many vCPUs you can run 
in a single instance, so we can't run, say, a 256-vCPU instance.

Some of the LXC-related issues we've run into:

- The CPU affinity issue on LXC you mention. Running LXC with OpenStack, you 
don't get proper space sharing out of the box; each instance actually sees 
all of the available CPUs. It's possible to restrict this, but that 
functionality doesn't seem to be exposed through libvirt, so it would have to 
be implemented in nova.

- LXC doesn't currently support volume attachment through libvirt. We were able 
to implement a workaround by invoking lxc-attach inside of OpenStack instead 
(e.g., see 
https://github.com/usc-isi/nova/blob/hpc-testing/nova/virt/libvirt/connection.py#L482).
But to be able to use lxc-attach, we had to upgrade the Linux kernel in 
RHEL6.1 from 2.6.32 to 2.6.38. This kernel isn't supported by SGI, which means 
that we aren't able to load the SGI NUMA-related kernel modules.
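For readers curious what that workaround looks like, the sketch below shows the general shape: use lxc-attach to run mknod inside the container's namespaces so an attached volume's device node becomes visible to the guest. The helper names and device numbers are illustrative, not the actual nova code; see the GitHub link for the real implementation.

```python
import subprocess

def lxc_attach_mknod_cmd(container, dev_path, major, minor):
    """Build an lxc-attach command that creates a block-device node inside
    the container so an attached volume becomes visible to the guest."""
    return ["lxc-attach", "-n", container, "--",
            "mknod", "-m", "660", dev_path, "b", str(major), str(minor)]

def expose_volume(container, dev_path, major, minor):
    # lxc-attach needs a recent (>= 2.6.38) kernel to enter the container's
    # namespaces, hence the RHEL 6.1 kernel upgrade described in this thread.
    subprocess.check_call(lxc_attach_mknod_cmd(container, dev_path,
                                               major, minor))
```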

Take care,

Lorin
--
Lorin Hochstein, Computer Scientist
USC Information Sciences Institute
703.812.3710
http://www.east.isi.edu/~lorin




On Dec 3, 2011, at 5:08 PM, Cole wrote:

 First and foremost: 
 http://wiki.openstack.org/HeterogeneousSgiUltraVioletSupport
 
 With NUMA and lightweight container technology (LXC / OpenVZ) you can get 
 very close to real hardware performance for certain HPC applications.  The 
 problem with technologies like LXC is that there isn't much logic to address 
 CPU affinity, which other hypervisors offer (though those generally wouldn't 
 be ideal for HPC).
 
 On the interconnect side, there are plenty of Open-MX 
 (http://open-mx.gforge.inria.fr/) HPC applications running on everything 
 from single-channel 1 GbE to bonded 10 GbE.
 
 This is an area I'm personally interested in; I've done some testing and 
 will be doing more.  If you are going to try HPC over Ethernet, Arista makes 
 the lowest-latency switches in the business.
 
 Cole
 Nebula
 
 On Sat, Dec 3, 2011 at 11:11 AM, Tim Bell tim.b...@cern.ch wrote:
 At CERN, we are also faced with similar thoughts as we look to the cloud on 
 how to match the VM creation performance (typically O(minutes)) with the 
 required batch job system rates for a single program (O(sub-second)).
 
 Data locality (aiming to run a job close to its source data) makes this 
 more difficult, along with fair share (aligning job priorities to achieve 
 the agreed quotas between competing requests for a limited, shared 
 resource).  The classic IaaS model of 'have credit card, will compute' does 
 not apply for some private cloud use cases/users.
 
 We would be interested to discuss further with other sites.  There is further 
 background from OpenStack Boston at http://vimeo.com/31678577.
 
 Tim
 tim.b...@cern.ch
 
 
 


Re: [Openstack] HPC with Openstack?

2011-12-02 Thread David Busby
It may be worth looking at RightScale's grid edition: 
http://www.rightscale.com/products/plans-pricing/grid-edition.php
The article there only cites EC2 usage, but their APIs support the 
Rackspace cloud, which is Nova: 
http://support.rightscale.com/12-Guides/RightScale_API

Cheers

David



On 2 Dec 2011, at 12:17, Sandy Walsh wrote:

 I've recently had inquiries about High Performance Computing (HPC) on 
 Openstack. As opposed to the Service Provider (SP) model, HPC is interested 
 in fast provisioning, potentially short lifetime instances with precision 
 metrics and scheduling. Real-time vs. Eventually.
 
 Anyone planning on using Openstack in that way?
 
 If so, I'll direct those inquiries to this thread.
 
 Thanks in advance,
 Sandy
 


Re: [Openstack] HPC with Openstack?

2011-12-02 Thread Lorin Hochstein
As a side note, HPC means very different things to different people. In the 
circles I move in, HPC is interested in running compute jobs that are 
CPU-intensive, require large amounts of memory, and need 
low-latency/high-bandwidth interconnects to allow the user to break up a 
tightly coupled compute job across multiple nodes. A particular compute job 
will run for hours to days, so fast provisioning isn't necessarily critical 
(the traditional HPC model is to have your job wait in a batch queue until the 
resources are available).

Lorin
--
Lorin Hochstein, Computer Scientist
USC Information Sciences Institute
703.812.3710
http://www.east.isi.edu/~lorin




On Dec 2, 2011, at 7:17 AM, Sandy Walsh wrote:

 I've recently had inquiries about High Performance Computing (HPC) on 
 Openstack. As opposed to the Service Provider (SP) model, HPC is interested 
 in fast provisioning, potentially short lifetime instances with precision 
 metrics and scheduling. Real-time vs. Eventually.
 
 Anyone planning on using Openstack in that way?
 
 If so, I'll direct those inquiries to this thread.
 
 Thanks in advance,
 Sandy
 


Re: [Openstack] HPC with Openstack?

2011-12-02 Thread Oleg Gelbukh
Hello,

Here at Mirantis we are working on a deployment of OpenStack that is
eventually intended to manage an HPC cluster. There are a few features that
we are going to incorporate, and we are still researching. The general idea
is to use LXC as a lightweight virtualization engine, and to use a faster
I/O path than one based on a disk image file.

--
Oleg Gelbukh,
Sr. IT Engineer
Mirantis Inc.

On Fri, Dec 2, 2011 at 4:17 PM, Sandy Walsh sandy.wa...@rackspace.com wrote:

 I've recently had inquiries about High Performance Computing (HPC) on
 Openstack. As opposed to the Service Provider (SP) model, HPC is interested
 in fast provisioning, potentially short lifetime instances with precision
 metrics and scheduling. Real-time vs. Eventually.

 Anyone planning on using Openstack in that way?

 If so, I'll direct those inquiries to this thread.

 Thanks in advance,
 Sandy



Re: [Openstack] HPC with Openstack?

2011-12-02 Thread Sandy Walsh
Good point ... thanks for the clarification.

-S


From: Lorin Hochstein [lo...@isi.edu]
Sent: Friday, December 02, 2011 9:47 AM
To: Sandy Walsh
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] HPC with Openstack?

As a side note, HPC means very different things to different people. In the 
circles I move in, HPC is interested in running compute jobs that are 
CPU-intensive, require large amounts of memory, and need 
low-latency/high-bandwidth interconnects to allow the user to break up a 
tightly coupled compute job across multiple nodes. A particular compute job 
will run for hours to days, so fast provisioning isn't necessarily critical 
(the traditional HPC model is to have your job wait in a batch queue until the 
resources are available).

Lorin
--
Lorin Hochstein, Computer Scientist
USC Information Sciences Institute
703.812.3710
http://www.east.isi.edu/~lorin




On Dec 2, 2011, at 7:17 AM, Sandy Walsh wrote:

I've recently had inquiries about High Performance Computing (HPC) on 
Openstack. As opposed to the Service Provider (SP) model, HPC is interested in 
fast provisioning, potentially short lifetime instances with precision metrics 
and scheduling. Real-time vs. Eventually.

Anyone planning on using Openstack in that way?

If so, I'll direct those inquiries to this thread.

Thanks in advance,
Sandy



Re: [Openstack] HPC with Openstack?

2011-12-02 Thread Brian Schott
Did you see that Amazon hit #42 on the Top500 supercomputing list? It is 
somewhat of a stunt, but the point is that access to a supercomputer is a 
credit-card swipe away and rentable by the hour.  There was a lot of buzz at 
SC11 a few weeks ago.

There are several HPC groups in the OpenStack community:
- The DOE Magellan OpenStack system is intended for mid-range HPC workloads.
- My former group at USC/ISI joined OpenStack during Bexar to deploy on large 
shared-memory HPC systems (SGI UltraViolet with 1TB of main memory), 
heterogeneous cluster computing (GPU accelerators, many-core processor 
architectures like Tilera), and tightly coupled cluster applications over 
InfiniBand and/or 10GbE.  The USC-ISI team is still carrying on that work.
- At Nimbis, I'm focused on technical computing workloads for companies that 
lack access to HPC.  We work with traditional HPC centers like NCSA, OSC, and 
R-Systems, but many of the configuration management and tenant isolation issues 
we encounter dealing with small users in traditional PBS/Moab batch systems 
would be easier if these centers ran OpenStack.

The challenges for virtualization on HPC are mostly focused on the I/O 
subsystem: there is a lot of highly tuned hardware for high-end networking, 
disk array subsystems, and hardware accelerators, and that hardware generally 
doesn't know about virtual machines.  If you have an MPI offload engine 
running in your network card, it expects to pair with a single kernel, not a 
host and a guest.  Exposing these devices through Xen or KVM can be difficult 
even if you don't try to share the devices across VMs.  LXC is a reasonable 
approach, but you lose some of the flexibility and isolation of true VMs.

The things that OpenStack can focus on are things we've created blueprints 
for:
- alternative VM types like LXC, selectable from the scheduler
- consideration for bare-metal provisioning, where VLAN management moves into 
the switch
- cluster-level schedulers that take account of network topology 
requirements: bandwidth, latency, hops
- scheduler support for non-x86 and x86-plus-extra-hardware platforms
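As a toy illustration of the cluster-level scheduling point above: a topology-aware scheduler needs a cost model for placing a tightly coupled job. The sketch below assumes a simple two-level rack/core-switch tree; it is not nova's scheduler API, just the shape of the weighting logic such a blueprint implies.

```python
import itertools

def hops(a, b, rack_of):
    """Switch hops between two hosts in a two-level tree: same rack means
    1 hop through the top-of-rack switch, otherwise 3 via the core switch."""
    return 1 if rack_of[a] == rack_of[b] else 3

def place_pair(hosts, rack_of):
    """Choose the pair of candidate hosts minimizing interconnect hops --
    a stand-in for a topology-aware scheduler weigher."""
    return min(itertools.combinations(sorted(hosts), 2),
               key=lambda p: hops(p[0], p[1], rack_of))

# With h1 and h3 in rack r1 and h2 in rack r2, a 2-node tightly coupled
# job lands on (h1, h3), the pair sharing a top-of-rack switch.
```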

Having said that, the OpenStack architecture is ideal for folks that want to 
bridge the gap between cloud and HPC.  The community is vibrant and moving fast 
and the architecture is flexible enough to allow many different use cases by 
design.  It's a meritocracy where code wins, which is why I like it.   I spent 
a lot of time at SC11 talking to HPC folks about OpenStack.  

Brian

-
Brian Schott, CTO
Nimbis Services, Inc.
brian.sch...@nimbisservices.com
ph: 443-274-6064  fx: 443-274-6060







On Dec 2, 2011, at 9:18 AM, Oleg Gelbukh wrote:

 Hello,
 
 Here at Mirantis we are working on a deployment of OpenStack that is 
 eventually intended to manage an HPC cluster. There are a few features that 
 we are going to incorporate, and we are still researching. The general idea 
 is to use LXC as a lightweight virtualization engine, and to use a faster 
 I/O path than one based on a disk image file.
 
 --
 Oleg Gelbukh,
 Sr. IT Engineer
 Mirantis Inc.
 
 On Fri, Dec 2, 2011 at 4:17 PM, Sandy Walsh sandy.wa...@rackspace.com wrote:
 I've recently had inquiries about High Performance Computing (HPC) on 
 Openstack. As opposed to the Service Provider (SP) model, HPC is interested 
 in fast provisioning, potentially short lifetime instances with precision 
 metrics and scheduling. Real-time vs. Eventually.
 
 Anyone planning on using Openstack in that way?
 
 If so, I'll direct those inquiries to this thread.
 
 Thanks in advance,
 Sandy
 


Re: [Openstack] HPC with Openstack?

2011-12-02 Thread Oliver Baltzer
 As a side note, HPC means very different things to different people. In
 the circles I move in, HPC is interested in running compute jobs that are
 CPU-intensive, require large amounts of memory, and need
 low-latency/high-bandwidth interconnects to allow the user to break up a
 tightly coupled compute job across multiple nodes.  A particular compute
 job will run for hours to days, so fast provisioning isn't necessarily
 critical (the traditional HPC model is to have your job wait in a batch
 queue until the resources are available).

I am interested in a model that supports all of the above, but individual
jobs have a very short lifespan (a few minutes) and are time critical
(every minute counts). Also, there is not necessarily a steady stream of
jobs, such that there are demand peaks (several times a day). 

In that model I do not want to wait minutes to provision compute nodes for
a job that runs 5 minutes. Neither do I want to run a cluster permanently
that has 100% utilization for maybe 2 or 3 hours in total per day. So a
cloud model would be quite attractive, if it could deliver the performance,
provision fast enough, and charge in minute intervals rather than hours.
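To put numbers on the billing point (the rate here is hypothetical): rounding a 5-minute job up to a full hour costs 12x what per-minute charging would.

```python
import math

def cost(runtime_min, rate_per_hour, granularity_min):
    """Charge for runtime rounded up to the billing granularity."""
    billed_min = math.ceil(runtime_min / granularity_min) * granularity_min
    return billed_min / 60.0 * rate_per_hour

# A 5-minute job at a hypothetical $0.60/hour rate:
#   hourly billing:     cost(5, 0.60, 60) -> $0.60 (billed as a full hour)
#   per-minute billing: cost(5, 0.60, 1)  -> $0.05
```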

Cheers,
Oliver



Re: [Openstack] HPC with Openstack?

2011-12-02 Thread Leandro Reox
One HPC-like use of OpenStack at MercadoLibre is, for example, running
integration and regression tests on production, both pre- and post-deploy.
Jenkins servers are spun up within a minute to absorb the test load and
then destroy themselves.
On Dec 2, 2011 3:55 PM, Oliver Baltzer oli...@hytek.org wrote:

  As a side note, HPC means very different things to different people. In
  the circles I move in, HPC is interested in running compute jobs that are
  CPU-intensive, require large amounts of memory, and need
  low-latency/high-bandwidth interconnects to allow the user to break up a
  tightly coupled compute job across multiple nodes.  A particular compute
  job will run for hours to days, so fast provisioning isn't necessarily
  critical (the traditional HPC model is to have your job wait in a batch
  queue until the resources are available).

 I am interested in a model that supports all of the above, but individual
 jobs have a very short lifespan (a few minutes) and are time critical
 (every minute counts). Also, there is not necessarily a steady stream of
 jobs, such that there are demand peaks (several times a day).

 In that model I do not want to wait minutes to provision compute nodes for
 a job that runs 5 minutes. Neither do I want to run a cluster permanently
 that has 100% utilization for maybe 2 or 3 hours in total per day. So a
 cloud model would be quite attractive, if it could deliver the performance,
 provision fast enough, and charge in minute intervals rather than hours.

 Cheers,
 Oliver
