Re: [ceph-users] Encryption questions

2019-01-24 Thread Gregory Farnum
On Fri, Jan 11, 2019 at 11:24 AM Sergio A. de Carvalho Jr. <
scarvalh...@gmail.com> wrote:

> Thanks for the answers, guys!
>
> Am I right to assume msgr2 (http://docs.ceph.com/docs/mimic/dev/msgr2/)
> will provide encryption between Ceph daemons as well as between clients and
> daemons?
>
> Does anybody know if it will be available in Nautilus?
>

That’s the intention; people are scrambling a bit to get it in soon enough
to validate before the release.


>
> On Fri, Jan 11, 2019 at 8:10 AM Tobias Florek  wrote:
>
>> Hi,
>>
>> as others pointed out, traffic in ceph is unencrypted (internal traffic
>> as well as client traffic).  I usually advise to set up IPSec or
>> nowadays wireguard connections between all hosts.  That takes care of
>> any traffic going over the wire, including ceph.
>>
>> Cheers,
>>  Tobias Florek
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>


Re: [ceph-users] Encryption questions

2019-01-11 Thread Sergio A. de Carvalho Jr.
Thanks for the answers, guys!

Am I right to assume msgr2 (http://docs.ceph.com/docs/mimic/dev/msgr2/)
will provide encryption between Ceph daemons as well as between clients and
daemons?

Does anybody know if it will be available in Nautilus?


On Fri, Jan 11, 2019 at 8:10 AM Tobias Florek  wrote:

> Hi,
>
> as others pointed out, traffic in ceph is unencrypted (internal traffic
> as well as client traffic).  I usually advise to set up IPSec or
> nowadays wireguard connections between all hosts.  That takes care of
> any traffic going over the wire, including ceph.
>
> Cheers,
>  Tobias Florek


Re: [ceph-users] Encryption questions

2019-01-10 Thread Tobias Florek
Hi,

as others pointed out, traffic in Ceph is unencrypted (internal traffic
as well as client traffic).  I usually advise setting up IPsec or,
nowadays, WireGuard connections between all hosts.  That takes care of
any traffic going over the wire, including Ceph.

Cheers,
 Tobias Florek
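
For concreteness, the IPsec/WireGuard advice above could look roughly like the following minimal WireGuard sketch; the hostnames, keys, addresses, and the 10.0.0.0/24 subnet are illustrative assumptions, not from this thread:

```shell
# Illustrative sketch only -- hostnames, addresses, and keys are assumptions.
# On each Ceph host, generate a WireGuard keypair:
wg genkey | tee /etc/wireguard/private.key | wg pubkey > /etc/wireguard/public.key

# Minimal /etc/wireguard/wg0.conf on host A (add one [Peer] block per host):
cat > /etc/wireguard/wg0.conf <<'EOF'
[Interface]
PrivateKey = <contents of host A's private.key>
Address    = 10.0.0.1/24
ListenPort = 51820

[Peer]
PublicKey  = <contents of host B's public.key>
AllowedIPs = 10.0.0.2/32
Endpoint   = host-b.example.com:51820
EOF

wg-quick up wg0
```

With tunnels up on every host, pointing Ceph's public network / cluster network settings at the WireGuard subnet in ceph.conf routes all daemon and client traffic through the encrypted links.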




Re: [ceph-users] Encryption questions

2019-01-10 Thread Alexandre DERUMIER
>> 1) Are RBD connections encrypted or is there an option to use encryption
>> between clients and Ceph? From reading the documentation, I have the
>> impression that the only option to guarantee encryption in transit is to
>> force clients to encrypt volumes via dmcrypt. Is there another option? I know
>> I could encrypt the OSDs but that's not going to solve the problem of
>> encryption in transit.

Not related to Ceph, but if you use QEMU, there is a LUKS driver for QEMU, so
you can encrypt from the QEMU process all the way to storage.
https://people.redhat.com/berrange/kvm-forum-2016/kvm-forum-2016-security.pdf
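
As a concrete (hypothetical) sketch of that approach, a LUKS-formatted image can live on RBD and be opened through QEMU's luks block driver; the pool/image names and passphrase handling here are assumptions, so check the qemu-img/QEMU documentation for your version:

```shell
# Illustrative sketch only -- pool, image, and secret names are assumptions.
# Create a LUKS-encrypted image directly on an RBD volume:
qemu-img create --object secret,id=sec0,data=my-passphrase \
    -f luks -o key-secret=sec0 rbd:rbd/encrypted-vol 10G

# Attach it to a guest: QEMU encrypts blocks before they ever reach Ceph,
# so the data is protected both in transit and at rest on the OSDs.
qemu-system-x86_64 \
    -object secret,id=sec0,data=my-passphrase \
    -drive driver=luks,key-secret=sec0,file.driver=rbd,file.pool=rbd,file.image=encrypted-vol
```

The point of the layering is that the decryption key never leaves the hypervisor, so the storage network and the OSDs only ever see ciphertext.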




- Mail original -
De: "Sergio A. de Carvalho Jr." 
À: "ceph-users" 
Envoyé: Jeudi 10 Janvier 2019 19:59:06
Objet: [ceph-users] Encryption questions

Hi everyone, I have some questions about encryption in Ceph. 
1) Are RBD connections encrypted or is there an option to use encryption 
between clients and Ceph? From reading the documentation, I have the impression 
that the only option to guarantee encryption in transit is to force clients to 
encrypt volumes via dmcrypt. Is there another option? I know I could encrypt 
the OSDs but that's not going to solve the problem of encryption in transit. 

2) I'm also struggling to understand if communication between Ceph daemons 
(monitors and OSDs) are encrypted or not. I came across a few references about 
msgr2 but I couldn't tell if it is already implemented. Can anyone confirm 
this? 

I'm currently starting a new project using Ceph Mimic but if there's something 
new in this space expected for Nautilus, it would be good to know as well. 

Regards, 

Sergio 




Re: [ceph-users] Encryption questions

2019-01-10 Thread Jack
Hi,

AFAIK, there is no encryption on the wire, either between daemons or
between a daemon and a client.
The only encryption available in Ceph is at rest, using dmcrypt (i.e.,
your data is encrypted before being written to disk).

Regards,

On 01/10/2019 07:59 PM, Sergio A. de Carvalho Jr. wrote:
> Hi everyone, I have some questions about encryption in Ceph.
> 
> 1) Are RBD connections encrypted or is there an option to use encryption
> between clients and Ceph? From reading the documentation, I have the
> impression that the only option to guarantee encryption in transit is to
> force clients to encrypt volumes via dmcrypt. Is there another option? I
> know I could encrypt the OSDs but that's not going to solve the problem of
> encryption in transit.
> 
> 2) I'm also struggling to understand if communication between Ceph daemons
> (monitors and OSDs) are encrypted or not. I came across a few references
> about msgr2 but I couldn't tell if it is already implemented. Can anyone
> confirm this?
> 
> I'm currently starting a new project using Ceph Mimic but if there's
> something new in this space expected for Nautilus, it would be good to know
> as well.
> 
> Regards,
> 
> Sergio
> 
> 
> 



[ceph-users] Encryption questions

2019-01-10 Thread Sergio A. de Carvalho Jr.
Hi everyone, I have some questions about encryption in Ceph.

1) Are RBD connections encrypted or is there an option to use encryption
between clients and Ceph? From reading the documentation, I have the
impression that the only option to guarantee encryption in transit is to
force clients to encrypt volumes via dmcrypt. Is there another option? I
know I could encrypt the OSDs but that's not going to solve the problem of
encryption in transit.

2) I'm also struggling to understand if communication between Ceph daemons
(monitors and OSDs) are encrypted or not. I came across a few references
about msgr2 but I couldn't tell if it is already implemented. Can anyone
confirm this?

I'm currently starting a new project using Ceph Mimic but if there's
something new in this space expected for Nautilus, it would be good to know
as well.

Regards,

Sergio


Re: [ceph-users] Encryption for data at rest support

2016-06-02 Thread M Ranga Swami Reddy
Thank you Chris for the details.

Thanks
Swami

On Thu, Jun 2, 2016 at 6:01 PM, chris holcombe
 wrote:
> Hi Swami,
>
> Yes ceph supports encryption at rest using dmcrypt.  The docs are here:
> http://docs.ceph.com/docs/jewel/rados/deployment/ceph-deploy-osd/
>
> My team has integrated this functionality into the ceph-osd charm also
> if you'd like to try that out: https://jujucharms.com/ceph-osd/xenial/2
> When combined with the ceph-mon charm you're up and running fast :)
>
> -Chris
>
> On 06/02/2016 03:57 AM, M Ranga Swami Reddy wrote:
>> Hello,
>>
>> Can you please share if the ceph supports the "data at rest" functionality?
>> If yes, how can I achieve this? Please share any docs available.
>>
>> Thanks
>> Swami
>> --
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>> the body of a message to majord...@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>


Re: [ceph-users] Encryption for data at rest support

2016-06-02 Thread chris holcombe
Hi Swami,

Yes, Ceph supports encryption at rest using dmcrypt.  The docs are here:
http://docs.ceph.com/docs/jewel/rados/deployment/ceph-deploy-osd/

My team has integrated this functionality into the ceph-osd charm also
if you'd like to try that out: https://jujucharms.com/ceph-osd/xenial/2
When combined with the ceph-mon charm you're up and running fast :)
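
For reference, a dm-crypt OSD with jewel-era ceph-deploy could be prepared roughly like this; the host and device names are assumptions, and the exact flags may differ by release:

```shell
# Illustrative sketch only -- host and device names are assumptions.
# Prepare an OSD whose data and journal partitions sit on dm-crypt,
# keeping the generated keys in a dedicated directory:
ceph-deploy osd prepare --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys \
    osd-host-1:/dev/sdb
ceph-deploy osd activate osd-host-1:/dev/sdb1
```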

-Chris

On 06/02/2016 03:57 AM, M Ranga Swami Reddy wrote:
> Hello,
> 
> Can you please share if the ceph supports the "data at rest" functionality?
> If yes, how can I achieve this? Please share any docs available.
> 
> Thanks
> Swami


[ceph-users] Encryption for data at rest support

2016-06-02 Thread M Ranga Swami Reddy
Hello,

Can you please share whether Ceph supports "data at rest" encryption?
If yes, how can I achieve this? Please share any docs available.

Thanks
Swami


Re: [ceph-users] Encryption/Multi-tennancy

2014-03-11 Thread Kyle Bader
> There could be millions of tennants. Looking deeper at the docs, it looks 
> like Ceph prefers to have one OSD per disk.  We're aiming at having 
> backblazes, so will be looking at 45 OSDs per machine, many machines.  I want 
> to separate the tennants and separately encrypt their data.  The encryption 
> will be provided by us, but I was originally intending to have 
> passphrase-based encryption, and use programmatic means to either hash the 
> passphrase or/and encrypt it using the same passphrase.  This way, we 
> wouldn't be able to access the tennant's data, or the key for the passphrase, 
> although we'd still be able to store both.


The way I see it you have several options:

1. Encrypted OSDs

Preserve confidentiality in the event someone gets physical access to
a disk, whether theft or RMA. Requires tenant to trust provider.

vm
rbd
rados
osd <-here
disks

2. Whole disk VM encryption

Preserve confidentiality in the event someone gets physical access to
disk, whether theft or RMA.

tenant: key/passphrase
provider: nothing

tenant: passphrase
provider: key

tenant: nothing
provider: key

vm <--- here
rbd
rados
osd
disks

3. Encryption further up stack (application perhaps?)

To me, #1/#2 are identical except in the case of #2 when the rbd
volume is not attached to a VM. Block devices attached to a VM and
mounted will be decrypted, making the encryption only useful at
defending against unauthorized access to storage media. With a
different key per VM, with potentially millions of tenants, you now
have a massive key escrow/management problem that only buys you a bit
of additional security when block devices are detached. Sounds like a
crappy deal to me, I'd either go with #1 or #3.

-- 

Kyle


Re: [ceph-users] Encryption/Multi-tennancy

2014-03-11 Thread Mark s2c
There could be millions of tenants. Looking deeper at the docs, it looks like
Ceph prefers to have one OSD per disk.  We're aiming at having Backblaze-style
pods, so will be looking at 45 OSDs per machine, many machines.  I want to
separate the tenants and separately encrypt their data.  The encryption will be
provided by us, but I was originally intending to have passphrase-based
encryption, and use programmatic means to either hash the passphrase and/or
encrypt it using the same passphrase.  This way, we wouldn't be able to access
the tenant's data, or the key for the passphrase, although we'd still be able
to store both.

I had originally intended to use ZFS to achieve this, but on Linux it's a
fiddle.  We don't want to pay for any software or support, so that's Solaris
out (Oracle changed the plans after they bought Sun).

On 2014 Mar 11, at 00:55, Seth Mason (setmason) wrote:

> Are you expecting the tenant to provide the key?  Also how many tenants are 
> you expecting to have? It seems like you're looking for per-object encryption 
> and not per OSD.
> 
> -Seth
> 
> -Original Message-
> From: ceph-users-boun...@lists.ceph.com 
> [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Mark s2c
> Sent: Monday, March 10, 2014 3:08 PM
> To: Kyle Bader
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Encryption/Multi-tennancy
> 
> Thanks Kyle.
> I've deliberately not provided the entire picture.  I'm aware of memory 
> residency and of in-flight encryption issues.  Theses are less of a problem 
> for us.
> For me, it's a question of finding a reliably encrypted, OSS, at-rest setup 
> which involves Ceph and preferably ZFS for flexibility.
> M
> On 2014 Mar 10, at 21:04, Kyle Bader wrote:
> 
>>> Ceph is seriously badass, but my requirements are to create a cluster in 
>>> which I can host my customer's data in separate areas which are 
>>> independently encrypted, with passphrases which we as cloud admins do not 
>>> have access to.
>>> 
>>> My current thoughts are:
>>> 1. Create an OSD per machine stretching over all installed disks, then 
>>> create a user-sized block device per customer.  Mount this block device on 
>>> an access VM and create a LUKS container in to it followed by a zpool and 
>>> then I can allow the users to create separate bins of data as separate ZFS 
>>> filesystems in the container which is actually a blockdevice striped across 
>>> the OSDs.
>>> 2. Create an OSD per customer and use dm-crypt, then store the dm-crypt key 
>>> somewhere which is rendered in some way so that we cannot access it, such 
>>> as a pgp-encrypted file using a passphrase which only the customer knows.
>> 
>>> My questions are:
>>> 1. What are people's comments regarding this problem (irrespective of 
>>> my thoughts)
>> 
>> What is the threat model that leads to these requirements? The story 
>> "cloud admins do not have access" is not achievable through technology 
>> alone.
>> 
>>> 2. Which would be the most efficient of (1) and (2) above?
>> 
>> In the case of #1 and #2, you are only protecting data at rest. With
>> #2 you would need to decrypt the key to open the block device, and the 
>> key would remain in memory until it is unmounted (which the cloud 
>> admin could access). This means #2 is safe so long as you never mount 
>> the volume, which means it's utility is rather limited (archive 
>> perhaps). Neither of these schemes buy you much more than the 
>> encryption handling provided by ceph-disk-prepare (dmcrypted osd 
>> data/journal volumes), the key management problem becomes more acute, 
>> eg. per tenant.
>> 
>> --
>> 
>> Kyle
> 



Re: [ceph-users] Encryption/Multi-tennancy

2014-03-10 Thread Seth Mason (setmason)
Are you expecting the tenant to provide the key?  Also how many tenants are you 
expecting to have? It seems like you're looking for per-object encryption and 
not per OSD.

-Seth

-Original Message-
From: ceph-users-boun...@lists.ceph.com 
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Mark s2c
Sent: Monday, March 10, 2014 3:08 PM
To: Kyle Bader
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Encryption/Multi-tennancy

Thanks Kyle.
I've deliberately not provided the entire picture.  I'm aware of memory
residency and of in-flight encryption issues.  These are less of a problem for
us.
For me, it's a question of finding a reliably encrypted, OSS, at-rest setup 
which involves Ceph and preferably ZFS for flexibility.
M
On 2014 Mar 10, at 21:04, Kyle Bader wrote:

>> Ceph is seriously badass, but my requirements are to create a cluster in 
>> which I can host my customer's data in separate areas which are 
>> independently encrypted, with passphrases which we as cloud admins do not 
>> have access to.
>> 
>> My current thoughts are:
>> 1. Create an OSD per machine stretching over all installed disks, then 
>> create a user-sized block device per customer.  Mount this block device on 
>> an access VM and create a LUKS container in to it followed by a zpool and 
>> then I can allow the users to create separate bins of data as separate ZFS 
>> filesystems in the container which is actually a blockdevice striped across 
>> the OSDs.
>> 2. Create an OSD per customer and use dm-crypt, then store the dm-crypt key 
>> somewhere which is rendered in some way so that we cannot access it, such as 
>> a pgp-encrypted file using a passphrase which only the customer knows.
> 
>> My questions are:
>> 1. What are people's comments regarding this problem (irrespective of 
>> my thoughts)
> 
> What is the threat model that leads to these requirements? The story 
> "cloud admins do not have access" is not achievable through technology 
> alone.
> 
>> 2. Which would be the most efficient of (1) and (2) above?
> 
> In the case of #1 and #2, you are only protecting data at rest. With
> #2 you would need to decrypt the key to open the block device, and the 
> key would remain in memory until it is unmounted (which the cloud 
> admin could access). This means #2 is safe so long as you never mount 
> the volume, which means it's utility is rather limited (archive 
> perhaps). Neither of these schemes buy you much more than the 
> encryption handling provided by ceph-disk-prepare (dmcrypted osd 
> data/journal volumes), the key management problem becomes more acute, 
> eg. per tenant.
> 
> --
> 
> Kyle



Re: [ceph-users] Encryption/Multi-tennancy

2014-03-10 Thread Mark s2c
Thanks Kyle.
I've deliberately not provided the entire picture.  I'm aware of memory
residency and of in-flight encryption issues.  These are less of a problem for
us.
For me, it's a question of finding a reliably encrypted, OSS, at-rest setup 
which involves Ceph and preferably ZFS for flexibility.
M
On 2014 Mar 10, at 21:04, Kyle Bader wrote:

>> Ceph is seriously badass, but my requirements are to create a cluster in 
>> which I can host my customer's data in separate areas which are 
>> independently encrypted, with passphrases which we as cloud admins do not 
>> have access to.
>> 
>> My current thoughts are:
>> 1. Create an OSD per machine stretching over all installed disks, then 
>> create a user-sized block device per customer.  Mount this block device on 
>> an access VM and create a LUKS container in to it followed by a zpool and 
>> then I can allow the users to create separate bins of data as separate ZFS 
>> filesystems in the container which is actually a blockdevice striped across 
>> the OSDs.
>> 2. Create an OSD per customer and use dm-crypt, then store the dm-crypt key 
>> somewhere which is rendered in some way so that we cannot access it, such as 
>> a pgp-encrypted file using a passphrase which only the customer knows.
> 
>> My questions are:
>> 1. What are people's comments regarding this problem (irrespective of my 
>> thoughts)
> 
> What is the threat model that leads to these requirements? The story
> "cloud admins do not have access" is not achievable through technology
> alone.
> 
>> 2. Which would be the most efficient of (1) and (2) above?
> 
> In the case of #1 and #2, you are only protecting data at rest. With
> #2 you would need to decrypt the key to open the block device, and the
> key would remain in memory until it is unmounted (which the cloud
> admin could access). This means #2 is safe so long as you never mount
> the volume, which means it's utility is rather limited (archive
> perhaps). Neither of these schemes buy you much more than the
> encryption handling provided by ceph-disk-prepare (dmcrypted osd
> data/journal volumes), the key management problem becomes more acute,
> eg. per tenant.
> 
> -- 
> 
> Kyle



Re: [ceph-users] Encryption/Multi-tennancy

2014-03-10 Thread Mark s2c
Thanks for the suggestion, Seth.  It's unfortunately not an option in our
model; we did consider it.

On 2014 Mar 10, at 02:30, Seth Mason (setmason) wrote:

Why not have the application encrypt the data, or encrypt at the compute
server's file system? That way you don't have to manage keys.




Seth

On Mar 9, 2014, at 6:09 PM, "Mark s2c" <m...@stuff2cloud.com> wrote:

Ceph is seriously badass, but my requirements are to create a cluster in which 
I can host my customer's data in separate areas which are independently 
encrypted, with passphrases which we as cloud admins do not have access to.

My current thoughts are:
1. Create an OSD per machine stretching over all installed disks, then create a 
user-sized block device per customer.  Mount this block device on an access VM 
and create a LUKS container in to it followed by a zpool and then I can allow 
the users to create separate bins of data as separate ZFS filesystems in the 
container which is actually a blockdevice striped across the OSDs.
2. Create an OSD per customer and use dm-crypt, then store the dm-crypt key 
somewhere which is rendered in some way so that we cannot access it, such as a 
pgp-encrypted file using a passphrase which only the customer knows.

My questions are:
1. What are people's comments regarding this problem (irrespective of my 
thoughts)
2. Which would be the most efficient of (1) and (2) above?
3. As per (1), would it be easy to stretch a created block dev over more OSDs 
dynamically should we increase the size of one or more? Also, what if we had 
millions of customers/block devices?

Any advice on the above would be deluxe.

M




Re: [ceph-users] Encryption/Multi-tennancy

2014-03-10 Thread Kyle Bader
> Ceph is seriously badass, but my requirements are to create a cluster in 
> which I can host my customer's data in separate areas which are independently 
> encrypted, with passphrases which we as cloud admins do not have access to.
>
> My current thoughts are:
> 1. Create an OSD per machine stretching over all installed disks, then create 
> a user-sized block device per customer.  Mount this block device on an access 
> VM and create a LUKS container in to it followed by a zpool and then I can 
> allow the users to create separate bins of data as separate ZFS filesystems 
> in the container which is actually a blockdevice striped across the OSDs.
> 2. Create an OSD per customer and use dm-crypt, then store the dm-crypt key 
> somewhere which is rendered in some way so that we cannot access it, such as 
> a pgp-encrypted file using a passphrase which only the customer knows.

> My questions are:
> 1. What are people's comments regarding this problem (irrespective of my 
> thoughts)

What is the threat model that leads to these requirements? The story
"cloud admins do not have access" is not achievable through technology
alone.

> 2. Which would be the most efficient of (1) and (2) above?

In the case of #1 and #2, you are only protecting data at rest. With
#2 you would need to decrypt the key to open the block device, and the
key would remain in memory until it is unmounted (which the cloud
admin could access). This means #2 is safe so long as you never mount
the volume, which means its utility is rather limited (archive,
perhaps). Neither of these schemes buys you much more than the
encryption handling provided by ceph-disk-prepare (dmcrypted OSD
data/journal volumes), and the key management problem becomes more
acute, e.g. per tenant.

-- 

Kyle


Re: [ceph-users] Encryption/Multi-tennancy

2014-03-10 Thread Pavel V. Kaygorodov
Hi!

I think it is impossible to hide crypto keys from an admin who has access to
the host machine where the VM guest is running. An admin can always take a
snapshot of a running VM and extract all the keys straight from its memory.
Maybe you can achieve a sufficient level of security by providing a dedicated
physical server that holds the crypto keys in RAM only, and somehow guarantee
that this server will not be substituted with a VM one fine day by a malicious
admin :)

Pavel.


On 10 Mar 2014, at 5:09, Mark s2c wrote:

> Ceph is seriously badass, but my requirements are to create a cluster in 
> which I can host my customer's data in separate areas which are independently 
> encrypted, with passphrases which we as cloud admins do not have access to.  
> 
> My current thoughts are:
> 1. Create an OSD per machine stretching over all installed disks, then create 
> a user-sized block device per customer.  Mount this block device on an access 
> VM and create a LUKS container in to it followed by a zpool and then I can 
> allow the users to create separate bins of data as separate ZFS filesystems 
> in the container which is actually a blockdevice striped across the OSDs. 
> 2. Create an OSD per customer and use dm-crypt, then store the dm-crypt key 
> somewhere which is rendered in some way so that we cannot access it, such as 
> a pgp-encrypted file using a passphrase which only the customer knows. 
> 
> My questions are:
> 1. What are people's comments regarding this problem (irrespective of my 
> thoughts)
> 2. Which would be the most efficient of (1) and (2) above?
> 3. As per (1), would it be easy to stretch a created block dev over more OSDs 
> dynamically should we increase the size of one or more? Also, what if we had 
> millions of customers/block devices?
> 
> Any advice on the above would be deluxe.
> 
> M 
> 
> 


Re: [ceph-users] Encryption/Multi-tennancy

2014-03-09 Thread Seth Mason (setmason)
Why not have the application encrypt the data, or encrypt at the compute
server's file system? That way you don't have to manage keys.




Seth

On Mar 9, 2014, at 6:09 PM, "Mark s2c" <m...@stuff2cloud.com> wrote:

Ceph is seriously badass, but my requirements are to create a cluster in which 
I can host my customer's data in separate areas which are independently 
encrypted, with passphrases which we as cloud admins do not have access to.

My current thoughts are:
1. Create an OSD per machine stretching over all installed disks, then create a 
user-sized block device per customer.  Mount this block device on an access VM 
and create a LUKS container in to it followed by a zpool and then I can allow 
the users to create separate bins of data as separate ZFS filesystems in the 
container which is actually a blockdevice striped across the OSDs.
2. Create an OSD per customer and use dm-crypt, then store the dm-crypt key 
somewhere which is rendered in some way so that we cannot access it, such as a 
pgp-encrypted file using a passphrase which only the customer knows.

My questions are:
1. What are people's comments regarding this problem (irrespective of my 
thoughts)
2. Which would be the most efficient of (1) and (2) above?
3. As per (1), would it be easy to stretch a created block dev over more OSDs 
dynamically should we increase the size of one or more? Also, what if we had 
millions of customers/block devices?

Any advice on the above would be deluxe.

M




[ceph-users] Encryption/Multi-tennancy

2014-03-09 Thread Mark s2c
Ceph is seriously badass, but my requirements are to create a cluster in which 
I can host my customer's data in separate areas which are independently 
encrypted, with passphrases which we as cloud admins do not have access to.  

My current thoughts are:
1. Create an OSD per machine stretching over all installed disks, then create a 
user-sized block device per customer.  Mount this block device on an access VM 
and create a LUKS container in to it followed by a zpool and then I can allow 
the users to create separate bins of data as separate ZFS filesystems in the 
container which is actually a blockdevice striped across the OSDs. 
2. Create an OSD per customer and use dm-crypt, then store the dm-crypt key 
somewhere which is rendered in some way so that we cannot access it, such as a 
pgp-encrypted file using a passphrase which only the customer knows. 
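
Option 1 could be sketched roughly as follows on the access VM; all names and sizes are illustrative assumptions, not a tested recipe:

```shell
# Illustrative sketch of option 1 only -- names and sizes are assumptions.
rbd create customer-a --size 102400        # 100 GiB per-customer block device
rbd map customer-a                         # appears as e.g. /dev/rbd0

# LUKS container inside the block device; the passphrase is known only
# to the customer:
cryptsetup luksFormat /dev/rbd0
cryptsetup open /dev/rbd0 customer-a-crypt

# zpool on top of the decrypted mapping, with per-bin ZFS datasets below:
zpool create customer-a /dev/mapper/customer-a-crypt
zfs create customer-a/data
```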

My questions are:
1. What are people's comments regarding this problem (irrespective of my 
thoughts)
2. Which would be the most efficient of (1) and (2) above?
3. As per (1), would it be easy to stretch a created block dev over more OSDs 
dynamically should we increase the size of one or more? Also, what if we had 
millions of customers/block devices?

Any advice on the above would be deluxe.

M 




Re: [ceph-users] Encryption

2013-10-01 Thread Nicolas Thomas
Ciao Gippa,


From http://ceph.com/releases/v0-61-cuttlefish-released/
* ceph-disk: dm-crypt support for OSD disks

Hope this helps,


On 01/10/2013 08:57, Giuseppe 'Gippa' Paterno' wrote:
> Hi!
> Maybe an FAQ, but is encryption of data available (or will be available)
> in ceph at a storage level?
> Thanks,
> Giuseppe

-- 
Best Regards,
  Nicolas Thomas EMEA Sales Engineer - Canonical
Planned absence: Nov 1st to Tue 12th
GPG FPR: D592 4185 F099 9031 6590 6292 492F C740 F03A 7EB9





[ceph-users] Encryption

2013-09-30 Thread Giuseppe 'Gippa' Paterno'
Hi!
Maybe an FAQ, but is encryption of data available (or will it be available)
in Ceph at the storage level?
Thanks,
Giuseppe