Re: [Openstack] [GLANCE] Proposal: Combine the container_format and disk_format fields in 2.0 Images API

2011-12-02 Thread Soren Hansen
2011/12/1 Jay Pipes jaypi...@gmail.com:
 structure tar'd up. However, I think this can be more easily
 accomplished by consolidating the disk and container formats in the
 2.0 API to just a single format field with the possible values:

    ova - This indicates the data stored in Glance is an OVF container
 that may actually contain multiple virtual appliances that has been
 tar'd into the single-file OVA format
    raw - This is an unstructured disk image format
    vhd - This is the VHD disk format, a common disk format used by
 virtual machine monitors from VMWare, Xen, Microsoft, VirtualBox, and
 others
    vmdk - Another common disk format supported by many common virtual
 machine monitors
    vdi - A disk format supported by VirtualBox virtual machine
 monitor and the QEMU emulator
    iso - An archive format for the data contents of an optical disc
 (e.g. CDROM).
    qcow2 - A disk format supported by the QEMU emulator that can
 expand dynamically and supports Copy on Write
    aki - This indicates what is stored in Glance is an Amazon kernel image
    ari - This indicates what is stored in Glance is an Amazon ramdisk image
    ami - This indicates what is stored in Glance is an Amazon machine image

 What do people think of this proposal to combine the two into a single
 format field?

I agree the current disk_format/container_format tuple isn't ideal.
There's overlap between the two, and at the same time there are things
that can't be expressed with the current selection of valid settings. I
do think we should keep two separate fields describing the contents, though.

There are basically two things that are relevant: The image type and the
container format.

The image type can be either of kernel, ramdisk, filesystem, iso9660,
disk, or other.

The container type can be: raw, cow, qcow, qcow2, vhd, vmdk, vdi or qed
(and probably others I've forgotten).

Container type is essential in deciding whether the hypervisor in
question will be able to take the image and read its contents (i.e. map
a block of data in the container to a block of data in the contained
image). Image type is essential in deciding what to do with it:
*don't* try to attach a kernel as a filesystem, *don't* try to use an
iso9660 image as your kernel, *do* attach iso9660 images as CDs, not as
hard drives, *do* accept booting a VM with only a disk image attached,
*do* require a kernel if you have a filesystem image rather than a disk
image, etc. At the moment, we try to guess the user's intent (if they
don't pass a kernel, we just boot the image and hope for the best). This
is error prone.
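
To make that concrete, here is a minimal sketch of the kind of policy
this separation enables (hypothetical Python, not nova code; the image
types are the ones listed above):

def plan_boot(image_type, kernel_id=None):
    # Decide what to do with an image based on its image type alone;
    # the container type only matters for reading the bytes out of it.
    if image_type in ('kernel', 'ramdisk'):
        raise ValueError('cannot boot a %s image directly' % image_type)
    if image_type == 'iso9660':
        return 'attach as a CD, not as a hard drive'
    if image_type == 'filesystem':
        if kernel_id is None:
            # Don't guess the user's intent: a filesystem image
            # requires an explicit kernel.
            raise ValueError('a filesystem image requires a kernel')
        return 'boot kernel %s with the filesystem as root' % kernel_id
    if image_type == 'disk':
        return 'boot the disk image as-is'
    raise ValueError('unknown image type: %r' % image_type)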

aki, ari, and ami have always struck me as odd.  If you upload an
aki to OpenStack, by the time it actually reaches Glance, it's not an
aki anymore. Its image type is kernel and its container format is
raw. It's indistinguishable from a raw kernel image uploaded by some
other mechanism. Same for ari (ramdisk/raw) and ami (filesystem/raw). If
anything, aki/ari/ami might be considered a (single) transport format.
Uploading an image to EC2 involves a bundling process where the image in
question is split up, signed (and encrypted?), uploaded to S3 along with
a manifest and then registered. Upon registration, the signature is
verified, the image is decrypted(?), and stitched back together to form
a kernel image (or ramdisk or machine image). At this point, any
remnants of the manifest and the rest of the bundle are gone.
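
To make that concrete: under the two-field scheme, an uploaded EC2
kernel would simply end up recorded like this (a hypothetical sketch of
the resulting metadata; nothing EC2-specific survives to be recorded):

image = {
    'name': 'my-kernel',
    'image_type': 'kernel',    # what the data is
    'container_type': 'raw',   # how the bytes are laid out
}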

-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [GLANCE] Proposal: Combine the container_format and disk_format fields in 2.0 Images API

2011-12-02 Thread Donal Lafferty
During October I noticed that Microsoft's vhdtool.exe creates VHDs that
XenServer can't understand. Boy, was that painful. The underlying problem is
that some VHDs should be described as VM-specific.

Does this suggest we adopt MIME-like syntax for category specialization? E.g.,
in the same way we see video/x-ms-wmv, we might see vhd/x-ms-tools. In this
case, it's easy to parse out the vhd category.
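
(If we did, parsing it would be trivial; a hypothetical Python sketch
reusing the vhd/x-ms-tools example above:)

def split_format(fmt):
    # 'vhd/x-ms-tools' -> ('vhd', 'x-ms-tools'); 'vhd' -> ('vhd', None)
    category, _, specialization = fmt.partition('/')
    return category, specialization or None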

 
 
 -Original Message-
 From: openstack-bounces+donal.lafferty=citrix@lists.launchpad.net
 [mailto:openstack-bounces+donal.lafferty=citrix@lists.launchpad.net] On
 Behalf Of Jay Pipes
 Sent: 01 December 2011 15:53
 To: openstack@lists.launchpad.net
 Subject: [Openstack] [GLANCE] Proposal: Combine the container_format and
 disk_format fields in 2.0 Images API
 
 Hey all,
 
 OK, so I'm almost done with Draft 3 of the OpenStack Images API 2.0 Proposal.
 While doing this, however, I have come to the conclusion that the
 container_format we added in the Cactus timeframe just makes things more
 confusing and should probably be removed.
 
 We have two fields in the current API that store information about the disk
 file format and any container/package format for an image.
 
 http://glance.openstack.org/formats.html
 
 The disk_format field currently allows the following:
 
 raw - This is an unstructured disk image format
 vhd - This is the VHD disk format, a common disk format used by virtual
 machine monitors from VMWare, Xen, Microsoft, VirtualBox, and others
 vmdk - Another common disk format supported by many common virtual
 machine monitors
 vdi - A disk format supported by VirtualBox virtual machine monitor and the
 QEMU emulator
 iso - An archive format for the data contents of an optical disc (e.g. 
 CDROM).
 qcow2 - A disk format supported by the QEMU emulator that can expand
 dynamically and supports Copy on Write
 aki - This indicates what is stored in Glance is an Amazon kernel image
 ari - This indicates what is stored in Glance is an Amazon ramdisk image
ami - This indicates what is stored in Glance is an Amazon machine image
 
 For container formats, we currently allow:
 
 ovf - This is the OVF container format
 bare - This indicates there is no container or metadata envelope for the
 image
 aki - This indicates what is stored in Glance is an Amazon kernel image
 ari - This indicates what is stored in Glance is an Amazon ramdisk image
 ami - This indicates what is stored in Glance is an Amazon machine image
 
 The problem I see is that OVF is really the only true container format, and I'm
 just not sure it's useful to have users set a container format. The goal was to
 allow Glance to report that the image file stored in Glance is an OVA file and
 not an image file itself. An OVA file is a single file that contains the OVF
 directory structure tar'd up. However, I think this can be more easily
 accomplished by consolidating the disk and container formats in the
 2.0 API to just a single format field with the possible values:
 
 ova - This indicates the data stored in Glance is an OVF container that may
 actually contain multiple virtual appliances that has been tar'd into the
 single-file OVA format
 raw - This is an unstructured disk image format
 vhd - This is the VHD disk format, a common disk format used by virtual
 machine monitors from VMWare, Xen, Microsoft, VirtualBox, and others
 vmdk - Another common disk format supported by many common virtual
 machine monitors
 vdi - A disk format supported by VirtualBox virtual machine monitor and the
 QEMU emulator
 iso - An archive format for the data contents of an optical disc (e.g. 
 CDROM).
 qcow2 - A disk format supported by the QEMU emulator that can expand
 dynamically and supports Copy on Write
 aki - This indicates what is stored in Glance is an Amazon kernel image
 ari - This indicates what is stored in Glance is an Amazon ramdisk image
 ami - This indicates what is stored in Glance is an Amazon machine image
 
 What do people think of this proposal to combine the two into a single
 format field?
 
 Thanks in advance for your feedback,
 -jay
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [GLANCE] Proposal: Combine the container_format and disk_format fields in 2.0 Images API

2011-12-02 Thread Soren Hansen
2011/12/2 Donal Lafferty donal.laffe...@citrix.com:
 During October I noticed that Microsoft's vhdtool.exe creates VHDs that
 XenServer can't understand. Boy, was that painful.
 The underlying problem is that some VHDs should be described as VM-specific.

Can you elaborate on this, please? I don't think I understand what
'VM-specific' means.

-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] boot from ISO

2011-12-02 Thread Donal Lafferty
The background is that, having been given a sample implementation, developers
targeting libvirt would be able to offer the same functionality.

DL

From: Michaël Van de Borne [mailto:michael.vandebo...@cetic.be]
Sent: 01 December 2011 16:34
To: Anne Gentle
Cc: Donal Lafferty; openstack@lists.launchpad.net
Subject: Re: [Openstack] boot from ISO

That's right, it's a XenServer-only feature. I insist on XenServer because it's
been implemented only inside the xenapi driver.
If you wish to manage VMs using KVM or the Xen hypervisor (the community
hypervisor packaged in Linux distributions), this will utilize the libvirt API,
and not the XenAPI.

So, one needs to use XenServer in order for the XenAPI to be used (by the way,
OpenStack works great with XenServer 6.0 even if the documentation claims that
the supported release is 5.5:
http://docs.openstack.org/diablo/openstack-compute/admin/content/hypervisors.html).

I made use of this documentation in order to set up the environment:
http://wiki.openstack.org/XenServerDevelopment
What is missing is that, in order to activate the Boot From ISO feature
(http://wiki.openstack.org/bootFromISO), the SR elements on the XenServer host
must be configured as follows:

1. Create an ISO-typed SR, such as an NFS ISO library, for instance. For this,
using XenCenter is pretty easy. You need to export an NFS volume from a remote
NFS server. Make sure it is exported in RW mode.
2. On the host, find the uuid of the host:
# xe host-list
Write the uuid down.
3. Locate the uuid of the NFS ISO library:
# xe sr-list content-type=iso
4. Set the i18n-key on that SR:
# xe sr-param-set uuid=<ISO SR uuid> other-config:i18n-key=local-storage-iso
Even if an NFS mount point isn't local storage, you must specify
local-storage-iso.
5. # xe pbd-list sr-uuid=<ISO SR uuid>
Make sure the host-uuid from xe pbd-list equals the uuid of the host you
found at step 2.

Then apply the rest of the tutorial
(http://wiki.openstack.org/XenServerDevelopment#Configure_SR_storage) and
publish an ISO image this way:

glance add name=fedora_iso disk_format=iso container_format=bare < Fedora-16-x86_64-netinst.iso
nova boot test_iso --flavor <flavor ID> --image <image ID>
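
(Steps 3 and 4 can also be done programmatically; a rough sketch using
the XenAPI Python bindings that ship with XenServer. The host URL and
credentials are placeholders, and it assumes the ISO SR already exists:)

import XenAPI

session = XenAPI.Session('http://xenserver-host')
session.login_with_password('root', 'password')
try:
    for sr_ref, rec in session.xenapi.SR.get_all_records().items():
        if rec['content_type'] == 'iso':
            # Equivalent of step 4's xe sr-param-set; note this raises
            # if the key is already present in other-config.
            session.xenapi.SR.add_to_other_config(
                sr_ref, 'i18n-key', 'local-storage-iso')
finally:
    session.xenapi.session.logout()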

I've posted this in the bug you filed, Anne.

By the way, I'm going to work on porting this feature to the libvirt API and
VMware API (if nobody is working on it yet).

Is the config drive available for Diablo yet?

cheers,

michaël



Michaël Van de Borne

RD Engineer, SOA team, CETIC

Phone: +32 (0)71 49 07 45 Mobile: +32 (0)472 69 57 16, Skype: mikemowgli

www.cetic.be, rue des Frères Wright, 29/3, B-6041 Charleroi

On 01/12/11 16:29, Anne Gentle wrote:

Thanks for the info! I've logged bug 898682 [1] to ensure it gets added
to the documentation. Based on this note, is this a solution for Xen
only?

Is this the same as using a config drive? I had heard a config drive
works on KVM but not Xen.

If someone who's familiar with this area could work on the docs that
would be great.

Thanks,
Anne

[1] https://bugs.launchpad.net/openstack-manuals/+bug/898682



On Thu, Dec 1, 2011 at 9:14 AM, Michaël Van de Borne
michael.vandebo...@cetic.be wrote:

It finally works. The problem was the flag checks while looking for the
ISO SR.

Inside the find_iso_sr method (in nova/virt/xenapi/vm_utils.py), I found
that the ISO SR must have these settings:

content type: iso
other-config:i18n-key=local-storage-iso

As far as I know, this wasn't documented anywhere. Hope this can be
useful for people from the future.
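
(For reference, the check amounts to something like the sketch below;
this is not the actual vm_utils.py code, and the field names follow the
XenAPI SR record:)

def looks_like_nova_iso_sr(sr_record):
    # Both settings above must be present or find_iso_sr skips the SR.
    return (sr_record.get('content_type') == 'iso' and
            sr_record.get('other_config', {}).get('i18n-key') ==
            'local-storage-iso')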



cheers,

michaël

Michaël Van de Borne
RD Engineer, SOA team, CETIC
Phone: +32 (0)71 49 07 45 Mobile: +32 (0)472 69 57 16, Skype: mikemowgli
www.cetic.be, rue des Frères Wright, 29/3, B-6041 Charleroi





On 29/11/11 23:10, Donal Lafferty wrote:

Off the top of my head, I'd look to see if the compute node can see that
ISO SR.

DL

From: Michaël Van de Borne [mailto:michael.vandebo...@cetic.be]
Sent: 29 November 2011 18:15
To: Donal Lafferty; openstack@lists.launchpad.net
Subject: Re: [Openstack] boot from ISO

Hi Donal, hi all,

I'm trying to test the Boot From ISO feature. So I've set up a XenServer
host and installed an Ubuntu 11.10 PV DomU in it.

Then I used the following commands but, as you can see in the attached
nova-compute log excerpt, there was a problem.

glance add name=fedora_iso disk_format=iso < ../Fedora-16-x86_64-Live-LXDE.iso
ID: 4
nova boot test_iso --flavor 2 --image 4

I can see the ISO images using nova list but not using glance index.

The error seems to be: 'Cannot find SR of content-type ISO'. However,
I've set up an NFS ISO library using XenCenter, so that there is an
actual ISO content-typed SR. How do I tell OpenStack to use this SR for
the ISO images I post using glance?

Any clue? I feel I'm rather close to making it work.

thanks,

michaël

Michaël Van de Borne
RD Engineer, SOA team, CETIC
Phone: +32 

Re: [Openstack] [GLANCE] Proposal: Combine the container_format and disk_format fields in 2.0 Images API

2011-12-02 Thread Soren Hansen
2011/12/2 Donal Lafferty donal.laffe...@citrix.com:
 The key question in my email was whether MIME-like specialisations are
 appropriate for combining the characteristics of an image into a
 single property.

 E.g. container_type/image_type. The example I provided was
 image_type/vendor-specific-format.

 That second example came from observing that VHDTOOL.exe, as posted on
 MSDN, produced a file that could not be understood by XenServer. In
 contrast, Ken Bell's 'DiscUtils', as posted on CodePlex, produced a VHD
 that worked fine. When I spoke to Ken, he mentioned he'd noticed that
 VHDTOOL.exe generated a slightly different format. Now, I doubt
 Microsoft would host a tool that didn't support their format.
 Therefore, there seems to be a difference of opinion as to what
 constitutes a VHD.

I understand there might be differences in implementations of the
various formats. Sometimes this is due to bugs (common if the format
was reverse-engineered), or perhaps to different (incompatible) versions
of the same format. I don't think the correct way to encode these
differences is to make the type vendor-specific.

As an example, vmdk's generated by QEMU are different from vmdk's
generated by VirtualBox, and both of those are different from vmdk's
generated by VMware (which again generates different vmdk's depending on
its version), and the compatibility matrix is complicated. I think all
vmdk's from QEMU will work in VirtualBox and VMware, but VMware and
VirtualBox can certainly both generate vmdk's that QEMU doesn't
understand. Some of these differences are due to different versions of
the vmdk format being used, and some are due to incomplete
implementations of the formats.
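
(In other words, compatibility is a property of the producer/consumer
pair, and a flat vendor suffix collapses a two-dimensional matrix into a
single tag. An illustrative Python sketch, with truth values made up to
match the description above:)

# (producer, consumer) -> can the consumer read the producer's vmdk?
VMDK_COMPAT = {
    ('qemu', 'virtualbox'): True,
    ('qemu', 'vmware'): True,
    ('virtualbox', 'qemu'): False,  # some VirtualBox variants fail
    ('vmware', 'qemu'): False,      # also depends on the vmdk version
}

def can_read(producer, consumer):
    # Unknown pairs are not assumed to be compatible.
    return VMDK_COMPAT.get((producer, consumer), False)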

I simply don't think adding a vendor part to the container type string
is going to be a very good way to encode this.

-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] HPC with Openstack?

2011-12-02 Thread Sandy Walsh
I've recently had inquiries about High Performance Computing (HPC) on 
Openstack. As opposed to the Service Provider (SP) model, HPC is interested in 
fast provisioning, potentially short lifetime instances with precision metrics 
and scheduling. Real-time vs. Eventually.

Anyone planning on using Openstack in that way?

If so, I'll direct those inquiries to this thread.

Thanks in advance,
Sandy

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] HPC with Openstack?

2011-12-02 Thread David Busby
May be worth looking at RightScale:
http://www.rightscale.com/products/plans-pricing/grid-edition.php
The article there only cites EC2 usage, but their APIs support the
Rackspace cloud, which is Nova:
http://support.rightscale.com/12-Guides/RightScale_API

Cheers

David



On 2 Dec 2011, at 12:17, Sandy Walsh wrote:

 I've recently had inquiries about High Performance Computing (HPC) on 
 Openstack. As opposed to the Service Provider (SP) model, HPC is interested 
 in fast provisioning, potentially short lifetime instances with precision 
 metrics and scheduling. Real-time vs. Eventually.
 
 Anyone planning on using Openstack in that way?
 
 If so, I'll direct those inquiries to this thread.
 
 Thanks in advance,
 Sandy
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] HPC with Openstack?

2011-12-02 Thread Lorin Hochstein
As a side note, HPC means very different things to different people. In the 
circles I move in, HPC is interested in running compute jobs that are 
CPU-intensive, require large amounts of memory, and need 
low-latency/high-bandwidth interconnects to allow the user to break up a 
tightly coupled compute job across multiple nodes. A particular compute job 
will run for hours to days, so fast provisioning isn't necessarily critical 
(the traditional HPC model is to have your job wait in a batch queue until the 
resources are available).

Lorin
--
Lorin Hochstein, Computer Scientist
USC Information Sciences Institute
703.812.3710
http://www.east.isi.edu/~lorin




On Dec 2, 2011, at 7:17 AM, Sandy Walsh wrote:

 I've recently had inquiries about High Performance Computing (HPC) on 
 Openstack. As opposed to the Service Provider (SP) model, HPC is interested 
 in fast provisioning, potentially short lifetime instances with precision 
 metrics and scheduling. Real-time vs. Eventually.
 
 Anyone planning on using Openstack in that way?
 
 If so, I'll direct those inquiries to this thread.
 
 Thanks in advance,
 Sandy
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] HPC with Openstack?

2011-12-02 Thread Oleg Gelbukh
Hello,

Here at Mirantis we are working on a deployment of OpenStack that is
intended to eventually manage an HPC cluster. There are a few features
that we are going to incorporate, and we are still researching. The
general idea is to use LXC as a lightweight virtualization engine, and
to make use of a faster I/O system than one based on a disk image file.
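
(For reference, selecting LXC in nova's libvirt driver is a flag-file
setting, something like the following sketch:)

--connection_type=libvirt
--libvirt_type=lxc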

--
Oleg Gelbukh,
Sr. IT Engineer
Mirantis Inc.

On Fri, Dec 2, 2011 at 4:17 PM, Sandy Walsh sandy.wa...@rackspace.com wrote:

 I've recently had inquiries about High Performance Computing (HPC) on
 Openstack. As opposed to the Service Provider (SP) model, HPC is interested
 in fast provisioning, potentially short lifetime instances with precision
 metrics and scheduling. Real-time vs. Eventually.

 Anyone planning on using Openstack in that way?

 If so, I'll direct those inquiries to this thread.

 Thanks in advance,
 Sandy

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] HPC with Openstack?

2011-12-02 Thread Sandy Walsh
Good point ... thanks for the clarification.

-S


From: Lorin Hochstein [lo...@isi.edu]
Sent: Friday, December 02, 2011 9:47 AM
To: Sandy Walsh
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] HPC with Openstack?

As a side note, HPC means very different things to different people. In the 
circles I move in, HPC is interested in running compute jobs that are 
CPU-intensive, require large amounts of memory, and need 
low-latency/high-bandwidth interconnects to allow the user to break up a 
tightly coupled compute job across multiple nodes. A particular compute job 
will run for hours to days, so fast provisioning isn't necessarily critical 
(the traditional HPC model is to have your job wait in a batch queue until the 
resources are available).

Lorin
--
Lorin Hochstein, Computer Scientist
USC Information Sciences Institute
703.812.3710
http://www.east.isi.edu/~lorin




On Dec 2, 2011, at 7:17 AM, Sandy Walsh wrote:

I've recently had inquiries about High Performance Computing (HPC) on 
Openstack. As opposed to the Service Provider (SP) model, HPC is interested in 
fast provisioning, potentially short lifetime instances with precision metrics 
and scheduling. Real-time vs. Eventually.

Anyone planning on using Openstack in that way?

If so, I'll direct those inquiries to this thread.

Thanks in advance,
Sandy

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] HPC with Openstack?

2011-12-02 Thread Brian Schott
You did see that Amazon hit #42 on the Top 500 supercomputing list? It is 
somewhat of a stunt, but the point is that access to a supercomputer is a 
credit-card swipe away and rentable by the hour.  There was a lot of buzz at 
SC11 a few weeks ago.

There are several HPC groups in the OpenStack community:
- The DOE Magellan OpenStack system is intended for mid-range HPC workloads.
- My former group at USC/ISI joined OpenStack during Bexar to deploy on large
shared-memory HPC systems (SGI UltraViolet with 1TB of main memory),
heterogeneous cluster computing (GPU accelerators, many-core processor
architectures like Tilera), and tightly coupled cluster applications over
InfiniBand and/or 10GbE. The USC-ISI team is still carrying on that work.
- At Nimbis, I'm focused on technical computing workloads for companies that 
lack access to HPC.  We work with traditional HPC centers like NCSA, OSC, and 
R-Systems, but many of the configuration management and tenant isolation issues 
we encounter dealing with small users in traditional PBS/Moab batch systems 
would be easier if these centers ran OpenStack.

The challenges for virtualization in HPC are mostly focused on the I/O
subsystem, because there is a lot of highly tuned hardware for high-end
networking, disk array subsystems, and hardware accelerators, and that
hardware generally doesn't know about virtual machines. If you have an MPI
offload engine running in your network card, it expects to pair with a single
kernel, not a host and a guest. Exposing these devices through Xen or KVM can
be difficult even if you don't try to share the devices across VMs. LXC is a
reasonable approach, but you lose some of the flexibility and isolation of
true VMs.

The things that OpenStack can focus on are things that we've created blueprints 
for:
- alternative VMs like LXC from the scheduler
- consideration for bare-metal provisioning where you move vlan management into 
the switch
- cluster-level schedulers that take account of network topology requirements, 
bandwidth, latency, hops
- scheduler support for non-x86 and x86+extra hardware

Having said that, the OpenStack architecture is ideal for folks that want to 
bridge the gap between cloud and HPC.  The community is vibrant and moving fast 
and the architecture is flexible enough to allow many different use cases by 
design.  It's a meritocracy where code wins, which is why I like it.   I spent 
a lot of time at SC11 talking to HPC folks about OpenStack.  

Brian

-
Brian Schott, CTO
Nimbis Services, Inc.
brian.sch...@nimbisservices.com
ph: 443-274-6064  fx: 443-274-6060







On Dec 2, 2011, at 9:18 AM, Oleg Gelbukh wrote:

 Hello,
 
 Here at Mirantis we are working on a deployment of OpenStack that is intended
 to eventually manage an HPC cluster. There are a few features that we are
 going to incorporate, and we are still researching. The general idea is to use
 LXC as a lightweight virtualization engine, and to make use of a faster I/O
 system than one based on a disk image file.
 
 --
 Oleg Gelbukh,
 Sr. IT Engineer
 Mirantis Inc.
 
 On Fri, Dec 2, 2011 at 4:17 PM, Sandy Walsh sandy.wa...@rackspace.com wrote:
 I've recently had inquiries about High Performance Computing (HPC) on 
 Openstack. As opposed to the Service Provider (SP) model, HPC is interested 
 in fast provisioning, potentially short lifetime instances with precision 
 metrics and scheduling. Real-time vs. Eventually.
 
 Anyone planning on using Openstack in that way?
 
 If so, I'll direct those inquiries to this thread.
 
 Thanks in advance,
 Sandy
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Keystone & Swift: swiftauth tenant namespace collisions?

2011-12-02 Thread Ziad Sawalha
Great. BTW, Dolph just started work on this, so we've updated the status of the 
blueprint.

Z

From: Judd Maltin openst...@newgoliath.com
Date: Fri, 2 Dec 2011 11:27:57 -0500
To: Ziad Sawalha ziad.sawa...@rackspace.com
Cc: openstack@lists.launchpad.net, Rouault, Jason (Cloud Services)
jason.roua...@hp.com, John Dickinson m...@not.mn
Subject: Re: [Openstack] Keystone & Swift: swiftauth tenant namespace
collisions?


Ziad!

Just knowing that your team has these issues in mind is a huge help.

-judd

On Dec 1, 2011 6:00 PM, Ziad Sawalha ziad.sawa...@rackspace.com wrote:
OK, that helps.

We have a blueprint to use a string ID instead of the integer in the database: 
https://blueprints.launchpad.net/keystone/+spec/portable-identifiers

I think that will address your needs (eventually).

We intend to deliver that without any API changes (the API supports string IDs) 
and with full migration support from stable/diablo.

To summarize the intent:

  *   we add a string UID to the database schema
  *   For deployments with the integer ID, we copy that into the UID field
  *   For deployments where the ID is a string (cactus and pre-Diablo) we copy 
that into the UID field
  *   We use the UID field in the URLs displayed by Keystone

That will allow migrations into Keystone and you can decide in your data import 
what value to make the ID that shows up as the REST URL.
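
(A rough Python sketch of that backfill; the table name, column names,
and DB URL are assumptions for illustration, not Keystone's actual
migration code:)

from sqlalchemy import create_engine

engine = create_engine('sqlite:///keystone.db')
conn = engine.connect()
# Add the new string UID column alongside the existing id...
conn.execute('ALTER TABLE tenants ADD COLUMN uid VARCHAR(64)')
# ...and copy the existing ID (integer or string) into it.
conn.execute('UPDATE tenants SET uid = CAST(id AS VARCHAR)')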

This is a future answer to your need. We plan on doing this very soon (maybe by 
E2). But for the current Keystone schema I don't have any alternative 
suggestions unfortunately.

Does this help?


From: Judd Maltin openst...@newgoliath.com
Date: Thu, 1 Dec 2011 16:32:00 -0500
To: Ziad Sawalha ziad.sawa...@rackspace.com
Subject: Re: [Openstack] Keystone & Swift: swiftauth tenant namespace
collisions?

Hi Ziad,

The current authentication systems for Swift use a hash as the tenant_id. I
saw that Keystone is using a sequential integer from the DB as the tenant_id.
This doesn't allow Keystone to match an existing Swift tenant_id (called an
account in Swift). This prevents Keystone from just taking over for swauth
or tempauth.

If the definition of tenant_id in Keystone is changed to be configurable by
the administrator, or at least NOT to be a sequence from the DB, then
migration from swauth to Keystone is possible, and may even be automated.

Looking forward to your thoughts,
-judd

On Sun, Nov 27, 2011 at 12:51 AM, Ziad Sawalha ziad.sawa...@rackspace.com wrote:
Hi Judd –

Account in swift is the same thing as tenant in Keystone.

Is the problem that you are specifying account 'name' instead of the ID?

I'm asking because we have had a number of users having problems migrating into 
Keystone after we switched to ID/Name for tenants and users and we are 
considering a schema change that would allow for simpler migration into 
Keystone and support tenant ID and name being the same.

I'm not sure that would help you, but if it would we would like to get your 
input on the design we are considering.

From: Judd Maltin openst...@newgoliath.com
Date: Fri, 25 Nov 2011 11:31:50 -0500
To: Rouault, Jason (Cloud Services) jason.roua...@hp.com
Cc: John Dickinson m...@not.mn, Ziad Sawalha ziad.sawa...@rackspace.com,
openstack@lists.launchpad.net

Subject: Re: [Openstack] Keystone & Swift: swiftauth tenant namespace
collisions?

Thanks Jason,

I am indeed working off stable/diablo. It looks like I'm going to have to use
mod_proxy and mod_rewrite to migrate my users from AUTH_account_name to
AUTH_tenant_id. Any other ideas for this sort of migration?

-judd




On Mon, Nov 21, 2011 at 9:42 AM, Rouault, Jason (Cloud Services)
jason.roua...@hp.com wrote:
Yes, I am aware of the new swift code for Keystone, but the question came
from Judd who may be working off of Diablo-stable.

-Original Message-
From: John Dickinson [mailto:m...@not.mn]
Sent: Sunday, November 20, 2011 8:59 AM
To: Rouault, Jason (Cloud Services)
Cc: Ziad Sawalha; Judd Maltin; openstack@lists.launchpad.net
Subject: Re: [Openstack] Keystone & Swift: swiftauth tenant namespace
collisions?

I don't think that is exactly right, but my understanding of tenants vs
accounts vs users may be lacking. Nonetheless, auth v2.0 support was added
to the swift cli tool by Chmouel recently. Have you tried with the code in
swift's trunk (also the 1.4.4 release scheduled for 

Re: [Openstack] API specifications

2011-12-02 Thread Nachi Ueno
Hi Brian

Thank you for your response.
How about the params which are missing from the docs?

accessIPv4
accessIPv6
adminPass
config_drive
security_groups
networks
blob
keyname
availability_zone
reservation_id
min_count
max_count
2011/12/1 Brian Waldon brian.wal...@rackspace.com:
 Our consoles resource is not a part of the 1.1 (2.0) API. You are right in 
 thinking it should be in the contrib directory. Additionally, it needs to be 
 modified to act as an extension.

 Our current level of documentation of extensions is extremely lacking, so 
 hopefully before Essex we can do a much better job.

 Brian Waldon


 On Dec 1, 2011, at 1:37 PM, Nachi Ueno wrote:

 Hi Nova-cores

 Is the Console function in OS API 1.1 specs?
 (See https://bugs.launchpad.net/nova/+bug/898266)

 The implementation is not in the contrib directory, so it didn't look like
 an extension.
 But as 898266 mentioned, it is not described in the API docs.

 Also, I checked the API specs from the code. (I know this is the reverse way. :))

 There are another example,
 Create server could get,

 name (*)
 imageRef (*)
 flavorRef (*)
 accessIPv4
 accessIPv6
 adminPass
 config_drive
 security_groups
 networks
 blob
 keyname
 availability_zone
 reservation_id
 min_count
 max_count
 metadata (*)
 personality (*)

 And only * one is documented on API.
 http://docs.openstack.org/api/openstack-compute/1.1/content/CreateServers.html

 Doc-Team can not decide specs, so I suppose Nova-core are responsible
 to define these specs.

 Cheers
 Nachi

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to     : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] API specifications

2011-12-02 Thread Nachi Ueno
Hi Chris

Sorry, I missed that.
Would you give me the URL?

For example, console.py is not in the contrib directory.

2011/12/2 Christopher MacGown ch...@pistoncloud.com:
 Hi Nachi,

 At least for config_drive, it has been documented as an extension.

 - chris

 On Dec 2, 2011, at 10:07, Nachi Ueno ueno.na...@nttdata-agilenet.com wrote:

 Hi Brian

 Thank you for your response.
 How about the params which are missing from the docs?

 accessIPv4
 accessIPv6
 adminPass
 config_drive
 security_groups
 networks
 blob
 keyname
 availability_zone
 reservation_id
 min_count
 max_count
 2011/12/1 Brian Waldon brian.wal...@rackspace.com:
 Our consoles resource is not a part of the 1.1 (2.0) API. You are right in 
 thinking it should be in the contrib directory. Additionally, it needs to 
 be modified to act as an extension.

 Our current level of documentation of extensions is extremely lacking, so 
 hopefully before Essex we can do a much better job.

 Brian Waldon


 On Dec 1, 2011, at 1:37 PM, Nachi Ueno wrote:

 Hi Nova-cores

 Is the Console function in OS API 1.1 specs?
 (See https://bugs.launchpad.net/nova/+bug/898266)

 The implementation is not in the contrib directory, so it didn't look like
 an extension.
 But as 898266 mentioned, it is not described in the API docs.

 Also, I checked the API specs from the code. (I know this is the reverse way. :))

 There are another example,
 Create server could get,

 name (*)
 imageRef (*)
 flavorRef (*)
 accessIPv4
 accessIPv6
 adminPass
 config_drive
 security_groups
 networks
 blob
 keyname
 availability_zone
 reservation_id
 min_count
 max_count
 metadata (*)
 personality (*)

 And only * one is documented on API.
 http://docs.openstack.org/api/openstack-compute/1.1/content/CreateServers.html

 Doc-Team can not decide specs, so I suppose Nova-core are responsible
 to define these specs.

 Cheers
 Nachi

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to     : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


 ___
 Mailing list: https://launchpad.net/~openstack
 Post to     : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] API specifications

2011-12-02 Thread Nachi Ueno
Hi Gabe

I got your point.
However, I want to know whether it is an extension or not.

Cheers
Nati

2011/12/2 Gabe Westmaas gabe.westm...@rackspace.com:
 Hi Nachi,

 The reason for excluding those from being required in the create response
 is to allow us to make those creates as asynchronous as possible.

 Gabe

 On 12/2/11 12:01 PM, Nachi Ueno ueno.na...@nttdata-agilenet.com wrote:

Hi Brian

Thank you for your response.
How about the params which are missing from the docs?

accessIPv4
accessIPv6
adminPass
config_drive
security_groups
networks
blob
keyname
availability_zone
reservation_id
min_count
max_count
2011/12/1 Brian Waldon brian.wal...@rackspace.com:
 Our consoles resource is not a part of the 1.1 (2.0) API. You are right
in thinking it should be in the contrib directory. Additionally, it
needs to be modified to act as an extension.

 Our current level of documentation of extensions is extremely lacking,
so hopefully before Essex we can do a much better job.

 Brian Waldon


 On Dec 1, 2011, at 1:37 PM, Nachi Ueno wrote:

 Hi Nova-cores

 Is the Console function in OS API 1.1 specs?
 (See https://bugs.launchpad.net/nova/+bug/898266)

 The implementation is not in the contrib directory, so it didn't look like
an extension.
 But as 898266 mentioned, it is not described in the API docs.

 Also, I checked the API specs from the code. (I know this is the reverse
way. :))

 There are another example,
 Create server could get,

 name (*)
 imageRef (*)
 flavorRef (*)
 accessIPv4
 accessIPv6
 adminPass
 config_drive
 security_groups
 networks
 blob
 keyname
 availability_zone
 reservation_id
 min_count
 max_count
 metadata (*)
 personality (*)

 And only * one is documented on API.

http://docs.openstack.org/api/openstack-compute/1.1/content/CreateServers.html

 Doc-Team can not decide specs, so I suppose Nova-core are responsible
 to define these specs.

 Cheers
 Nachi

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to     : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to     : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] HPC with Openstack?

2011-12-02 Thread Oliver Baltzer
 As a side note, HPC means very different things to different people. In
 the circles I move in, HPC is interested in running compute jobs that are
 CPU-intensive, require large amounts of memory, and need
 low-latency/high-bandwidth interconnects to allow the user to break up a
 tightly coupled compute job across multiple nodes.  A particular compute
 job will run for hours to days, so fast provisioning isn't necessarily
 critical (the traditional HPC model is to have your job wait in a batch
 queue until the resources are available).

I am interested in a model that supports all of the above, but individual
jobs have a very short lifespan (a few minutes) and are time critical
(every minute counts). Also, there is not necessarily a steady stream of
jobs; rather, there are demand peaks (several times a day).

In that model I do not want to wait minutes to provision compute nodes for
a job that runs 5 minutes. Neither do I want to run a cluster permanently
that has 100% utilization for maybe 2 or 3 hours in total per day. So a
cloud model would be quite attractive, if it could deliver the performance,
provision fast enough, and charge in minute intervals rather than hours.

Cheers,
Oliver

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] API specifications

2011-12-02 Thread Brian Waldon
accessIPv4 and accessIPv6 are both core instance attributes. The rest are all 
attributes owned by existing extensions. Keep in mind that the spec doesn't 
require all attributes to be returned in a POST response.
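
(For example, a create-server request needs only the core attributes; a
sketch with placeholder values, written as a Python dict. Extension-owned
attributes such as security_groups or config_drive are not part of this
core body:)

server_request = {
    'server': {
        'name': 'test-server',                # core, required
        'imageRef': 'http://.../images/4',    # core, required
        'flavorRef': 'http://.../flavors/1',  # core, required
        'metadata': {'purpose': 'demo'},      # core, optional
        'accessIPv4': '10.0.0.12',            # core instance attribute
    }
}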

Chris - I don't think config_drive is documented as an extension. This bug is 
still not fixed: https://bugs.launchpad.net/nova/+bug/81

Waldon

On Dec 2, 2011, at 1:13 PM, Christopher MacGown wrote:

 Hi Nachi,
 
 At least for config_drive, it has been documented as an extension.
 
 - chris
 
 On Dec 2, 2011, at 10:07, Nachi Ueno ueno.na...@nttdata-agilenet.com wrote:
 
 Hi Brian
 
 Thank you for your response.
 How about the params which are missing from the docs?
 
 accessIPv4
 accessIPv6
 adminPass
 config_drive
 security_groups
 networks
 blob
 keyname
 availability_zone
 reservation_id
 min_count
 max_count
 2011/12/1 Brian Waldon brian.wal...@rackspace.com:
 Our consoles resource is not a part of the 1.1 (2.0) API. You are right in 
 thinking it should be in the contrib directory. Additionally, it needs to 
 be modified to act as an extension.
 
 Our current level of documentation of extensions is extremely lacking, so 
 hopefully before Essex we can do a much better job.
 
 Brian Waldon
 
 
 On Dec 1, 2011, at 1:37 PM, Nachi Ueno wrote:
 
 Hi Nova-cores
 
 Is the Console function in OS API 1.1 specs?
 (See https://bugs.launchpad.net/nova/+bug/898266)
 
 The implementation is not in the contrib directory, so it didn't look like
 an extension.
 But as 898266 mentioned, it is not described in the API docs.

 Also, I checked the API specs from the code. (I know this is the reverse way. :))
 
 There are another example,
 Create server could get,
 
 name (*)
 imageRef (*)
 flavorRef (*)
 accessIPv4
 accessIPv6
 adminPass
 config_drive
 security_groups
 networks
 blob
 keyname
 availability_zone
 reservation_id
 min_count
 max_count
 metadata (*)
 personality (*)
 
 And only * one is documented on API.
 http://docs.openstack.org/api/openstack-compute/1.1/content/CreateServers.html
 
 Doc-Team can not decide specs, so I suppose Nova-core are responsible
 to define these specs.
 
 Cheers
 Nachi
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] API specifications

2011-12-02 Thread Nachi Ueno
Hi folks

Anne,
Sorry, I remember now. I got the extension docs from you.


2011/12/2 Brian Waldon brian.wal...@rackspace.com:
 accessIPv4 and accessIPv6 are both core instance attributes. The rest are all 
 attributes owned by existing extensions. Keep in mind that the spec doesn't 
 require all attributes to be returned in a POST response.

I got it. So accessIPv4 and accessIPv6 are in core. The rest are extensions.

 Chris - I don't think config_drive is documented as an extension. This bug is 
 still not fixed: https://bugs.launchpad.net/nova/+bug/81

 Waldon

 On Dec 2, 2011, at 1:13 PM, Christopher MacGown wrote:

 Hi Nachi,

 At least for config_drive, it has been documented as an extension.

 - chris

 On Dec 2, 2011, at 10:07, Nachi Ueno ueno.na...@nttdata-agilenet.com wrote:

 Hi Brian

 Thank you for your response.
 How about the params which are missing from the docs?

 accessIPv4
 accessIPv6
 adminPass
 config_drive
 security_groups
 networks
 blob
 keyname
 availability_zone
 reservation_id
 min_count
 max_count
 2011/12/1 Brian Waldon brian.wal...@rackspace.com:
 Our consoles resource is not a part of the 1.1 (2.0) API. You are right in 
 thinking it should be in the contrib directory. Additionally, it needs to 
 be modified to act as an extension.

 Our current level of documentation of extensions is extremely lacking, so 
 hopefully before Essex we can do a much better job.

 Brian Waldon


 On Dec 1, 2011, at 1:37 PM, Nachi Ueno wrote:

 Hi Nova-cores

 Is the Console function in OS API 1.1 specs?
 (See https://bugs.launchpad.net/nova/+bug/898266)

 The implementation is not in the contrib directory, so it didn't look like
 an extension.
 But as 898266 mentioned, it is not described in the API docs.

 Also, I checked the API specs from the code. (I know this is the reverse way. :))

 There are another example,
 Create server could get,

 name (*)
 imageRef (*)
 flavorRef (*)
 accessIPv4
 accessIPv6
 adminPass
 config_drive
 security_groups
 networks
 blob
 keyname
 availability_zone
 reservation_id
 min_count
 max_count
 metadata (*)
 personality (*)

 And only * one is documented on API.
 http://docs.openstack.org/api/openstack-compute/1.1/content/CreateServers.html

 Doc-Team can not decide specs, so I suppose Nova-core are responsible
 to define these specs.

 Cheers
 Nachi

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to     : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


 ___
 Mailing list: https://launchpad.net/~openstack
 Post to     : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] OpenStack Community Newsletter – December 1, 2011

2011-12-02 Thread Stefano Maffulli
OpenStack Community Newsletter – December 1, 2011

HIGHLIGHTS


  * Help building the list of events worldwide where OpenStack
should be represented
http://etherpad.openstack.org/OpenStackEvents2012
  * Zadara Storage won the Innovation Showdown Contest at Cloudbeat
2011
http://venturebeat.com/2011/12/01/innovation-showdown-contestants/
  * OpenStack Guide available in epub format
http://www.openstack.org/blog/2011/11/hacking-on-ebooks/
  * Three post series on Improving Nova privilege escalation model
part 1, part 2, part 3
  * OpenStack Essex-1
milestone 
http://fnords.wordpress.com/2011/11/14/openstack-essex-1-milestone/


EVENTS


  * OpenStack Swift Bay Area Meetup Dec 07, 2011 – Silicon Valley
CloudCenter http://www.meetup.com/openstack/events/42120012/
  * OpenStack Swift Training Dec 07, 2011 – Silicon Valley
CloudCenter http://www.meetup.com/openstack/events/42292592/
  * Australian OpenStack Users Group Inaugural Meetup Dec 13, 2011 –
Sydney http://aosug.openstack.org.au/
  * Meet & Drink: OpenStack in Production Dec 14, 2011 – Silicon
Valley CloudCenter
http://www.meetup.com/openstack/events/41423082/
  * OpenStack Presentation in Slovenia Dec 14, 2011 -  Details to be
announced soon on http://openstack.org/community/events


OTHER NEWS


  * OpenStack Dev Tip — Easily Pull a Review Branch
http://www.joinfu.com/2011/11/openstack-easily-pull-review-branch/
  * OpenStack Wiki Recent Changes –
http://wiki.openstack.org/RecentChanges 
  * Quantum service insertion
http://wiki.openstack.org/QuantumServicesInsertion
  * Essex Scheduler and Scaling Improvements
http://wiki.openstack.org/EssexSchedulerImprovements
  * Fast Cloning For XenServer
http://wiki.openstack.org/FastCloningForXenServer
  * Team meeting summary

http://eavesdrop.openstack.org/meetings/openstack-meeting/2011/openstack-meeting.2011-11-29-21.03.html


COMMUNITY STATISTICS


  * Activity on the OpenStack repositories, lines of code added and
removed by the developers during the past week.
  * Top 10 monthly committers to the repositories (by number of
commits)

[Charts: top 10 monthly committers and week-47 activity for glance,
horizon, keystone, manuals, nova, quantum, and swift]

This weekly newsletter is a way for the community to learn about all the
various activities occurring on a weekly basis. If you would like to add
content to a weekly update or have an idea about this newsletter, please
leave a comment.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] HPC with Openstack?

2011-12-02 Thread Leandro Reox
One HPC-like use of OpenStack at MercadoLibre, for example, is running
integration and regression tests on production, both pre- and post-deploy.
Jenkins servers are spun up in a minute to support the test load and then
destroy themselves.
On Dec 2, 2011 3:55 PM, Oliver Baltzer oli...@hytek.org wrote:

  As a side note, HPC means very different things to different people. In
  the circles I move in, HPC is interested in running compute jobs that are
  CPU-intensive, require large amounts of memory, and need
  low-latency/high-bandwidth interconnects to allow the user to break up a
  tightly coupled compute job across multiple nodes.  A particular compute
  job will run for hours to days, so fast provisioning isn't necessarily
  critical (the traditional HPC model is to have your job wait in a batch
  queue until the resources are available).

 I am interested in a model that supports all of the above, but individual
 jobs have a very short lifespan (a few minutes) and are time critical
 (every minute counts). Also, there is not necessarily a steady stream of
 jobs; rather, there are demand peaks (several times a day).

 In that model I do not want to wait minutes to provision compute nodes for
 a job that runs 5 minutes. Neither do I want to run a cluster permanently
 that has 100% utilization for maybe 2 or 3 hours in total per day. So a
 cloud model would be quite attractive, if it could deliver the performance,
 provision fast enough, and charge in minute intervals rather than hours.

 Cheers,
 Oliver

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp