Re: [openstack-dev] [ceph] DevStack plugin for Ceph required for Mitaka-1 and beyond?

2015-11-27 Thread Sebastien Han
The code is here: https://github.com/openstack/devstack-plugin-ceph
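
For anyone who wants to try it, a plugin like this is pulled in from DevStack's
local.conf rather than from extras.d; something along these lines should work once
the repo settles (untested sketch, the branch argument is an assumption):

  [[local|localrc]]
  # Fetch and enable the Ceph plugin from the new namespace;
  # "master" is assumed here, pin another branch if you need one.
  enable_plugin devstack-plugin-ceph https://github.com/openstack/devstack-plugin-ceph master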

> On 25 Nov 2015, at 13:49, Sebastien Han <s...@redhat.com> wrote:
> 
> The patch just landed; as soon as I have the repo I’ll move the code and do 
> some testing.
> 
>> On 24 Nov 2015, at 16:20, Sebastien Han <s...@redhat.com> wrote:
>> 
>> Hi Ramana,
>> 
>> I’ll resurrect the infra patch and put the project under the right namespace.
>> There is no plugin at the moment.
>> I’ve figured out that this is quite urgent and we need to solve this asap 
>> since devstack-ceph is used by the gate as well :-/
>> 
>> I don’t think there are many changes to make on the plugin itself.
>> Let’s see if we can make all of this happen before Mitaka-1… I highly doubt it, 
>> but we’ll see…
>> 
>>> On 24 Nov 2015, at 15:31, Ramana Raja <rr...@redhat.com> wrote:
>>> 
>>> Hi,
>>> 
>>> I was trying to figure out the state of DevStack plugin
>>> for Ceph, but couldn't find its source code and ran into
>>> the following doubt. At Mitaka 1, i.e. next week, wouldn't
>>> Ceph related Jenkins gates (e.g. Cinder's gate-tempest-dsvm-full-ceph)
>>> that still use extras.d's hook script instead of a plugin, stop working?
>>> For reference,
>>> https://github.com/openstack-dev/devstack/commit/1de9e330de9fd509fcdbe04c4722951b3acf199c
>>> [Deepak, thanks for reminding me about the deprecation of extras.d.]
>>> 
>>> The patch that seeks to integrate Ceph DevStack plugin with Jenkins
>>> gates is under review,
>>> https://review.openstack.org/#/c/188768/
>>> It's outdated, as the devstack-ceph-plugin it seeks to integrate
>>> seems to be in the now obsolete namespace, 'stackforge/', and hasn't seen
>>> activity for quite some time.
>>> 
>>> Even if I'm mistaken about all of this, can someone please point me to
>>> the Ceph DevStack plugin's source code? I'm interested to know whether
>>> the plugin would be identical to the current Ceph hook script,
>>> extras.d/60-ceph.sh ?
>>> 
>>> Thanks,
>>> Ramana
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
>> Cheers.
>> 
>> Sébastien Han
>> Senior Cloud Architect
>> 
>> "Always give 100%. Unless you're giving blood."
>> 
>> Mail: s...@redhat.com
>> Address: 11 bis, rue Roquépine - 75008 Paris
>> 
> 
> 
> Cheers.
> 
> Sébastien Han
> Senior Cloud Architect
> 
> "Always give 100%. Unless you're giving blood."
> 
> Mail: s...@redhat.com
> Address: 11 bis, rue Roquépine - 75008 Paris
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Cheers.

Sébastien Han
Senior Cloud Architect

"Always give 100%. Unless you're giving blood."

Mail: s...@redhat.com
Address: 11 bis, rue Roquépine - 75008 Paris



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceph] DevStack plugin for Ceph required for Mitaka-1 and beyond?

2015-11-25 Thread Sebastien Han
The patch just landed; as soon as I have the repo I’ll move the code and do 
some testing.

> On 24 Nov 2015, at 16:20, Sebastien Han <s...@redhat.com> wrote:
> 
> Hi Ramana,
> 
> I’ll resurrect the infra patch and put the project under the right namespace.
> There is no plugin at the moment.
> I’ve figured out that this is quite urgent and we need to solve this asap 
> since devstack-ceph is used by the gate as well :-/
> 
> I don’t think there are many changes to make on the plugin itself.
> Let’s see if we can make all of this happen before Mitaka-1… I highly doubt it, 
> but we’ll see…
> 
>> On 24 Nov 2015, at 15:31, Ramana Raja <rr...@redhat.com> wrote:
>> 
>> Hi,
>> 
>> I was trying to figure out the state of DevStack plugin
>> for Ceph, but couldn't find its source code and ran into
>> the following doubt. At Mitaka 1, i.e. next week, wouldn't
>> Ceph related Jenkins gates (e.g. Cinder's gate-tempest-dsvm-full-ceph)
>> that still use extras.d's hook script instead of a plugin, stop working?
>> For reference,
>> https://github.com/openstack-dev/devstack/commit/1de9e330de9fd509fcdbe04c4722951b3acf199c
>> [Deepak, thanks for reminding me about the deprecation of extras.d.]
>> 
>> The patch that seeks to integrate Ceph DevStack plugin with Jenkins
>> gates is under review,
>> https://review.openstack.org/#/c/188768/
>> It's outdated, as the devstack-ceph-plugin it seeks to integrate
>> seems to be in the now obsolete namespace, 'stackforge/', and hasn't seen
>> activity for quite some time.
>> 
>> Even if I'm mistaken about all of this, can someone please point me to
>> the Ceph DevStack plugin's source code? I'm interested to know whether
>> the plugin would be identical to the current Ceph hook script,
>> extras.d/60-ceph.sh ?
>> 
>> Thanks,
>> Ramana
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> Cheers.
> 
> Sébastien Han
> Senior Cloud Architect
> 
> "Always give 100%. Unless you're giving blood."
> 
> Mail: s...@redhat.com
> Address: 11 bis, rue Roquépine - 75008 Paris
> 


Cheers.

Sébastien Han
Senior Cloud Architect

"Always give 100%. Unless you're giving blood."

Mail: s...@redhat.com
Address: 11 bis, rue Roquépine - 75008 Paris



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceph] DevStack plugin for Ceph required for Mitaka-1 and beyond?

2015-11-24 Thread Sebastien Han
Hi Ramana,

I’ll resurrect the infra patch and put the project under the right namespace.
There is no plugin at the moment.
I’ve figured out that this is quite urgent and we need to solve this asap since 
devstack-ceph is used by the gate as well :-/

I don’t think there are many changes to make on the plugin itself.
Let’s see if we can make all of this happen before Mitaka-1… I highly doubt it, but 
we’ll see…
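
For what it’s worth, the port from the extras.d hook to the plugin interface is mostly
mechanical: the repo just needs a devstack/settings file plus a devstack/plugin.sh that
DevStack sources at the same phases the hook already handles. Roughly this shape
(sketch only; the helper names are made up, not the actual repo content):

  # devstack/plugin.sh
  if [[ "$1" == "stack" && "$2" == "pre-install" ]]; then
      install_ceph        # hypothetical helper: install packages, bootstrap the cluster
  elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
      configure_ceph      # hypothetical helper: point Glance/Cinder/Nova at RBD
  elif [[ "$1" == "unstack" || "$1" == "clean" ]]; then
      cleanup_ceph        # hypothetical helper: tear the cluster back down
  fi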

> On 24 Nov 2015, at 15:31, Ramana Raja <rr...@redhat.com> wrote:
> 
> Hi,
> 
> I was trying to figure out the state of DevStack plugin
> for Ceph, but couldn't find its source code and ran into
> the following doubt. At Mitaka 1, i.e. next week, wouldn't
> Ceph related Jenkins gates (e.g. Cinder's gate-tempest-dsvm-full-ceph)
> that still use extras.d's hook script instead of a plugin, stop working?
> For reference,
> https://github.com/openstack-dev/devstack/commit/1de9e330de9fd509fcdbe04c4722951b3acf199c
> [Deepak, thanks for reminding me about the deprecation of extras.d.]
> 
> The patch that seeks to integrate Ceph DevStack plugin with Jenkins
> gates is under review,
> https://review.openstack.org/#/c/188768/
> It's outdated, as the devstack-ceph-plugin it seeks to integrate
> seems to be in the now obsolete namespace, 'stackforge/', and hasn't seen
> activity for quite some time.
> 
> Even if I'm mistaken about all of this, can someone please point me to
> the Ceph DevStack plugin's source code? I'm interested to know whether
> the plugin would be identical to the current Ceph hook script,
> extras.d/60-ceph.sh ?
> 
> Thanks,
> Ramana
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Cheers.

Sébastien Han
Senior Cloud Architect

"Always give 100%. Unless you're giving blood."

Mail: s...@redhat.com
Address: 11 bis, rue Roquépine - 75008 Paris



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ClusterLabs] [HA] RFC: moving Pacemaker openstack-resource-agents to stackforge

2015-06-24 Thread Sebastien Han
I like this idea too; we should have done that long ago. :)

 On 24 Jun 2015, at 02:17, Adam Spiers <aspi...@suse.com> wrote:
 
 Digimer <li...@alteeve.ca> wrote:
 Resending to the Cluster Labs mailing list, this list is deprecated
 
 Thanks, I only realised that after getting a deprecation warning :-(
 
 On 23/06/15 06:27 AM, Adam Spiers wrote:
 [cross-posting to openstack-dev and pacemaker lists; please consider
 trimming the recipients list if your reply is not relevant to both
 communities]
 
 Hi all,
 
 https://github.com/madkiss/openstack-resource-agents/ is a nice
 repository of Pacemaker High Availability resource agents (RAs) for
 OpenStack, usage of which has been officially recommended in the
 OpenStack High Availability guide.  Here is one of several examples:
 

 http://docs.openstack.org/high-availability-guide/content/_add_openstack_identity_resource_to_pacemaker.html
 
 Martin Loschwitz, who owns this repository, has since moved away from
 OpenStack, and no longer maintains it.  I recently proposed moving the
 repository to StackForge, and he gave his consent and in fact said
 that he had the same intention but hadn't got round to it:
 

 https://github.com/madkiss/openstack-resource-agents/issues/22#issuecomment-113386505
 
 You can see from that same github issue that several key members of
 the OpenStack Pacemaker sub-community are all in favour.  Therefore
 I am volunteering to do the move to StackForge.
 
 There is a ClusterLabs group on GitHub that most of the HA cluster
 projects have or are moving under. Why not use that?
 
 This question was asked and answered in the github issue:
 
 https://github.com/madkiss/openstack-resource-agents/issues/22#issuecomment-114147300
 
 Cheers,
 Adam
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Cheers.
 
Sébastien Han 
Senior Cloud Architect 

Always give 100%. Unless you're giving blood.

Mail: s...@redhat.com 
Address: 11 bis, rue Roquépine - 75008 Paris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceph] Is it necessary to flatten Copy-on-write cloning for RBD-backed disks?

2015-06-09 Thread Sebastien Han
Hi Kun,

 On 09 Jun 2015, at 05:34, Kun Feng <fengku...@gmail.com> wrote:
 
 Hi all,
 
 I'm using Ceph as the storage backend for Nova and Glance, and merged the 
 rbd-ephemeral-clone patch into Nova. As VM disks are copy-on-write clones of 
 an image, I have some concerns about this:
 
 1. Since hundreds of VM disks are based on one base file, are there any 
 performance problems from the I/Os being concentrated on this one particular base file?

This may be an issue, but as Clint mentioned you’ll get reads served from OSD 
memory anyway, so this should be acceptable.

 2. Is it possible that the data of the base file is damaged, or that a PG/OSD containing 
 data of this base file goes out of service, resulting in all the VMs based on that 
 base file malfunctioning?

This assumption is correct: if the parent gets a corrupted block that hasn’t 
been written over in the clone yet, your clone will get the corrupted block on a 
write request.

 
 If so, flattening the copy-on-write clones may help. Is it necessary to 
 do it?

People have expressed the concern, but no one has ever seen such a thing 
happen (as far as I know), so it’s really up to you whether to flatten the clones.
There is a patch that needs to be reworked that will allow Nova to flatten all the 
clones right after their creation. Hopefully this will get into Liberty.
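
If you do decide to flatten by hand in the meantime, it comes down to the standard
rbd commands; roughly like this (pool, image and snapshot names are only placeholders
for your own layout):

  # List the clones still attached to the protected snapshot of the base image.
  rbd children images/IMAGE_ID@snap
  # Detach one clone from its parent by copying up every block (I/O heavy, but online).
  rbd flatten vms/INSTANCE_UUID_disk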

 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Cheers.
 
Sébastien Han 
Senior Cloud Architect 

Always give 100%. Unless you're giving blood.

Mail: s...@redhat.com 
Address: 11 bis, rue Roquépine - 75008 Paris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] How can I make my changes of ceph.conf to take effect in Cinder?

2015-05-07 Thread Sebastien Han
You must make sure that this path is writable by QEMU and allowed by SELinux or 
AppArmor.
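
In practice that usually means something like the following on the compute node
(the ownership and the AppArmor abstraction path are assumptions that vary per distro):

  # Let the QEMU process (which links librbd) write the client log;
  # on Ubuntu the user is typically libvirt-qemu, on RHEL/CentOS it is qemu.
  chown qemu:qemu /var/log/ceph
  chmod 755 /var/log/ceph
  # On Ubuntu, also allow the path in the libvirt AppArmor abstraction, e.g. add
  # "/var/log/ceph/** rw," to /etc/apparmor.d/abstractions/libvirt-qemu, then:
  service apparmor reload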

 On 07 May 2015, at 10:23, killingwolf <killingw...@qq.com> wrote:
 
 
   You should make sure the two files are writable.
 
 -- Original --
 From: 李沛伦 <lpl6338...@gmail.com>
 Date:  Thu, May 7, 2015 03:55 PM
 To: openstack@lists.openstack.org
 Subject:  [Openstack] How can I make my changes of ceph.conf to take effect in 
 Cinder?
 
 Hello! I'm using Ceph as the backend storage system for Cinder, and I want to 
 change some settings of the Ceph client, so I added the following to 
 /etc/ceph/ceph.conf on my compute node A:
 
 [global]
   debug ms = 10
   log file = /var/log/ceph/global.log
 
 [client]
   debug ms = 10
   debug rados = 10
   log file = /var/log/ceph/client.log
 
 but after restarting nova-compute and all the VMs on node A, and restarting 
 cinder-volume, cinder-api and cinder-scheduler on the Cinder controller node B, I 
 created a new volume and attached it to a VM on node A, but there is still no 
 client.log or global.log after I read or write this new volume! Could anyone 
 tell me how my modification of ceph.conf can take effect? I made sure 
 that rbd_ceph_conf=/etc/ceph/ceph.conf is set in cinder.conf on node B, and the data 
 is written into the Ceph cluster.
 
 PS: The [global] section is there just to check whether ceph.conf is 
 being picked up at all.
 
 Thanks!
 
 --
 Li Peilun (李沛伦)
 Yao Class J10
 Institute for Interdisciplinary Information Sciences
 Tsinghua University
 Beijing,100084
 P.R.China
 Tel:86-18810671857
 E-mail: lpl6338...@gmail.com
 ___
 Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openstack@lists.openstack.org
 Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Cheers.

Sébastien Han
Cloud Architect

Always give 100%. Unless you're giving blood.

Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 11 bis, rue Roquépine - 75008 Paris
Web : www.enovance.com - Twitter : @enovance



___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Live Migration + Ceph + ConfigDrive

2015-05-07 Thread Sebastien Han
Actually the issue is that the config drive is stored as a file on the filesystem under 
/var/lib/nova/instances/$uuid/config.drive
AFAIR the other problem is that the format of that file is not supported by 
libvirt for live migration.

I think you have to apply this patch: https://review.openstack.org/#/c/123073/

I’ve heard from sileht (in cc) that this might work in Kilo (using the vfat format of 
the config drive for live migration).
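
For reference, once the Kilo bits are in place the knob should simply be the config
drive format in nova.conf on the compute nodes; a rough sketch (double-check the
option against your release):

  [DEFAULT]
  # Build config drives as vfat images instead of iso9660; the iso9660 ones are
  # what libvirt refuses to live-migrate without shared instance storage.
  config_drive_format = vfat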

 On 07 May 2015, at 04:04, killingwolf <killingw...@qq.com> wrote:
 
 Hi Wilson,
 We are seeing this issue in Icehouse too, and we fixed it by modifying the 
 nova code. I think you can do it in Juno too.
 
 
 -- Original --
 From: Tyler Wilson <k...@linuxdigital.net>
 Date:  Thu, May 7, 2015 09:16 AM
 To: Openstack@lists.openstack.org
 Subject:  [Openstack] Live Migration + Ceph + ConfigDrive
 
 Hello All,
 
 Currently in Juno we are seeing an issue with attempting to live-migrate an 
 instance with a configdrive attached.
 NoLiveMigrationForConfigDriveInLibVirt: Live migration of instances with 
 config drives is not supported in libvirt unless libvirt instance path and 
 drive data is shared across compute nodes.
  I have found this bug https://bugs.launchpad.net/nova/+bug/1351002, however 
 it is geared toward NFS. Has any work taken place with regard to getting this 
 fixed for Ceph users?
 
 Thanks!
 ___
 Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openstack@lists.openstack.org
 Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Cheers.

Sébastien Han
Cloud Architect

Always give 100%. Unless you're giving blood.

Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 11 bis, rue Roquépine - 75008 Paris
Web : www.enovance.com - Twitter : @enovance



___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack-operators] OpenStack Dockerizing on CoreOS

2015-04-29 Thread Sebastien Han
Hey,

Did you have a look at Kolla (https://github.com/stackforge/kolla)?
Trying to avoid duplicating work :)

 On 29 Apr 2015, at 14:08, CoreOS <jwlee.phob...@gmail.com> wrote:
 
 Hello,
 
 I'm trying to develop a fault-tolerant OpenStack running on Docker/CoreOS. 
 I think this kind of approach brings the following advantages:
   - Easy to Deploy
   - Easy to Test
   - Easy to Scale-out
   - Fault Tolerance
 
 If you are interested in non-stop operation and easy extension of 
 OpenStack, please see the following link: 
 https://github.com/ContinUSE/openstack-on-coreos.
 
 This work is currently at an early stage. Please contact me if you have any 
 comments or questions.
 
 Thanks,
 JW
 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Cheers.

Sébastien Han
Cloud Architect

Always give 100%. Unless you're giving blood.

Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 11 bis, rue Roquépine - 75008 Paris
Web : www.enovance.com - Twitter : @enovance



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [Manila] Manila driver for CephFS

2015-02-02 Thread Sebastien Han
I believe this will start sometime after Kilo.

 On 28 Jan 2015, at 22:59, Valeriy Ponomaryov <vponomar...@mirantis.com> wrote:
 
 Hello Jake,
 
 The main thing that should be mentioned is that the blueprint has no assignee. 
 Also, it was created a long time ago and has seen no activity since.
 I have not heard of any intentions about it, nor have I seen so much as a 
 draft.
 
 So, I guess, it is open for volunteers.
 
 Regards,
 Valeriy Ponomaryov
 
 On Wed, Jan 28, 2015 at 11:30 PM, Jake Kugel <jku...@us.ibm.com> wrote:
 Hi,
 
 I see there is a blueprint for a Manila driver for CephFS here [1].  It
 looks like it was opened back in 2013 but is still in the Drafting state.  Does
 anyone know more about the status of this one?
 
 Thank you,
 -Jake
 
 [1]  https://blueprints.launchpad.net/manila/+spec/cephfs-driver
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Cheers.

Sébastien Han
Cloud Architect

Always give 100%. Unless you're giving blood.

Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 11 bis, rue Roquépine - 75008 Paris
Web : www.enovance.com - Twitter : @enovance



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] cinder-snapshot -vs- cinder-backup?

2015-01-30 Thread Sebastien Han
One of the extra benefits could be that you’re doing the backup on another Ceph 
cluster (potentially in another location).
However this will never protect you from corruption, since if corruption has 
already occurred it will simply be replicated.

Like Erik mentioned, a catastrophic situation where you lose all the monitors and 
get a corrupted fs/leveldb store (it just happened to someone on the ceph ML) would 
be a disaster.
Having another healthy cluster can be useful.
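
For the two-cluster case, cinder-backup only needs to be pointed at the second
cluster’s conf and keyring; a rough cinder.conf sketch (the paths, pool and user
names are just examples):

  [DEFAULT]
  backup_driver = cinder.backup.drivers.ceph
  # Point the backup service at the second Ceph cluster, not the one holding the volumes.
  backup_ceph_conf = /etc/ceph/backup-cluster.conf
  backup_ceph_user = cinder-backup
  backup_ceph_pool = backups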

 On 30 Jan 2015, at 20:35, Erik McCormick <emccorm...@cirrusseven.com> wrote:
 
 It doesn't buy you a lot in that sort of setup, but it would put it in a 
 different pool and thus different placement groups and OSDs. It could 
 potentially protect you from data loss in some catastrophic situation.
 
 -Erik
 
 On Fri, Jan 30, 2015 at 2:08 PM, Jonathan Proulx <j...@jonproulx.com> wrote:
 Hi All,
 
 I can see the obvious distinction between cinder-snapshot and
 cinder-backup being that snapshots would live on the same storage back
 end as the active volume (using whatever snapshotting that back end provides),
 whereas the backup would go to different storage.
 
 We're using Ceph for volume and object storage, so it seems like
 running cinder-backup in that case (where active, snap, and backup
 would all be in essentially the same backend) would not make a whole
 lot of sense.
 
 Is my thinking right on this or are there advantages of 'backup' over
 'snapshot' that I'm not considering?
 
 -Jon
 
 ___
 Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openstack@lists.openstack.org
 Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 
 ___
 Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openstack@lists.openstack.org
 Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Cheers.

Sébastien Han
Cloud Architect

Always give 100%. Unless you're giving blood.

Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 11 bis, rue Roquépine - 75008 Paris
Web : www.enovance.com - Twitter : @enovance



___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

2014-03-06 Thread Sebastien Han
Big +1 on this.
Missing such support would make the implementation useless.

 
Sébastien Han 
Cloud Engineer 

"Always give 100%. Unless you're giving blood."

Phone: +33 (0)1 49 70 99 72 
Mail: sebastien@enovance.com 
Address : 11 bis, rue Roquépine - 75008 Paris
Web : www.enovance.com - Twitter : @enovance 

On 06 Mar 2014, at 11:44, Zhi Yan Liu <lzy@gmail.com> wrote:

 +1! Given the low risk and the usefulness for real cloud 
 deployments.
 
 zhiyan
 
 On Thu, Mar 6, 2014 at 4:20 PM, Andrew Woodward <xar...@gmail.com> wrote:
 I'd like to request an FFE for the remaining patches in the Ephemeral
 RBD image support chain:
 
 https://review.openstack.org/#/c/59148/
 https://review.openstack.org/#/c/59149/
 
 are still open after their dependency
 https://review.openstack.org/#/c/33409/ was merged.
 
 These should be low risk as:
 1. We have been testing with this code in place.
 2. It's nearly all contained within the RBD driver.
 
 This is needed as it implements essential functionality that has
 been missing from the RBD driver, and this will be the second release
 into which we have attempted to merge it.
 
 Andrew
 Mirantis
 Ceph Community
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev