Re: [Openstack] Storage Multi Tenancy

2014-05-22 Thread Dawei Ding
To achieve that, you will need multiple backends and a proper way to schedule
volumes to a certain backend pool. You can leverage volume types, or use a
customized scheduler filter that schedules based on tenant information.
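
Roughly, and only as a sketch (the backend names, pool names and type names
below are made up), the multi-backend part in cinder.conf could look like:

  [DEFAULT]
  enabled_backends=rbd-shared,rbd-tenant-a

  [rbd-shared]
  volume_driver=cinder.volume.drivers.rbd.RBDDriver
  rbd_pool=volumes-shared
  volume_backend_name=CEPH_SHARED

  [rbd-tenant-a]
  volume_driver=cinder.volume.drivers.rbd.RBDDriver
  rbd_pool=volumes-tenant-a
  volume_backend_name=CEPH_TENANT_A

and the volume types that map onto those backends:

  cinder type-create ceph-shared
  cinder type-key ceph-shared set volume_backend_name=CEPH_SHARED
  cinder type-create ceph-tenant-a
  cinder type-key ceph-tenant-a set volume_backend_name=CEPH_TENANT_A
  cinder create --volume-type ceph-tenant-a --display-name test-vol 1

Note that this alone does not restrict a type to a tenant; which tenants may
use which type would have to be enforced separately, e.g. by convention or by
the kind of custom scheduler filter mentioned above.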

thanks,
Dawei






Re: [Openstack] Storage Multi Tenancy

2014-05-19 Thread jeroen
Hi,

Where do you define the scheduler filters? I've found something about them in
the cinder.conf example, but when I define a zone like this:

storage_availability_zone=nova2



I don't see this zone in Horizon after restarting the services.
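
For reference, the filters themselves are picked up from the scheduler's
cinder.conf; as far as I know the relevant options (shown here with their usual
defaults) are:

  [DEFAULT]
  scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler
  scheduler_default_filters=AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter

storage_availability_zone, on the other hand, is read by the cinder-volume
service, so it has to be set (and the service restarted) on the node running
cinder-volume for the new zone to be registered.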

Best Regards,

Infitialis
Jeroen
Sent with Airmail



[Openstack] Storage Multi Tenancy

2014-05-16 Thread jeroen
Hello,

Currently I am integrating my Ceph cluster into OpenStack using Ceph's RBD. I'd
like to store my KVM virtual machines on pools that I have created on the Ceph
cluster.
I would like to have multiple storage solutions for multiple tenants. Currently,
when I launch an instance, it is placed on the Ceph pool defined in the
cinder.conf file of my OpenStack controller node. If you set up multiple storage
backends for Cinder, the scheduler determines which backend is used without
looking at the tenant.

What I would like is that the instance/VM being launched by a specific tenant
has two choices: either a shared Ceph pool or the tenant's own dedicated pool.
Another option might even be a tenant having its own Ceph cluster. Whether the
instance is launched on the shared pool, a dedicated pool, or even another
cluster, I would also like any extra volumes that are created to follow the
same choice.

Data needs to be isolated from other tenants and users, so being able to choose
other pools/clusters would be nice.
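(As an aside on the isolation point: on the Ceph side I assume this would mean a
dedicated pool plus a cephx key restricted to that pool, roughly like the
following, with made-up names:

  ceph osd pool create volumes-tenant-a 128
  ceph auth get-or-create client.cinder-tenant-a mon 'allow r' \
      osd 'allow rwx pool=volumes-tenant-a'

so that a per-tenant backend cannot touch another tenant's pool.)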
Is this goal achievable, or is it impossible? If it's achievable, could I please
have some assistance in doing so? Has anyone ever done this before?

I would like to thank you in advance for reading this lengthy e-mail. If there's
anything that is unclear, please feel free to ask.

Best Regards,

Jeroen van Leur

-- 
Infitialis
Sent with Airmail


Re: [Openstack] Storage Multi Tenancy

2014-05-16 Thread Nirlay Kundu



This can be done the following way: since the Cinder scheduler allows you to set
multiple filters, you could potentially use one of them, say the availability
zone filter, for this. Essentially, create a different availability zone for
each storage pool (one for the Ceph cluster, one for a tenant's own pool, etc.)
and specify it during nova boot to ensure the appropriate pool/availability zone
is selected.

There are also storage-based options for multi-tenancy that are built natively
into storage arrays such as HP's 3PAR. You can try that.
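
As a rough sketch of that approach (the zone names below are made up), each pool
would be served by its own cinder-volume service, and that service's cinder.conf
sets the zone:

  # cinder.conf of the cinder-volume service backing the shared Ceph pool
  storage_availability_zone=az-ceph-shared

  # cinder.conf of the cinder-volume service backing tenant A's pool
  storage_availability_zone=az-ceph-tenant-a

The zone is then chosen at creation time, e.g.:

  cinder create --availability-zone az-ceph-tenant-a --display-name data-vol 10
  nova boot --availability-zone az-ceph-tenant-a --flavor m1.small --image <image-id> my-vm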
Hope this helps.
Nirlay
