Thanks for all the feedback.

The final version is uploaded as a pull request (it will become part of the documentation if merged):

https://github.com/apache/hadoop-ozone/pull/756/files?short_path=44ad58e#diff-44ad58ec3778726a6b5a60e01642e088


It will be merged if there are no more concerns.


Latest addition:

* There was a question about locking during the community sync. It doesn't seem to be a big problem: for bind-mounted volumes the lock of the referenced volume will be held most of the time. Furthermore, it's a read lock; the write lock is required only for quota / owner changes, which are infrequent (IMHO).
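
The locking pattern described above can be sketched in Python with a simple readers-writer lock (this is an illustration only, not the OzoneManager code; the class and method names are hypothetical): resolving a bucket through a bind mount only takes the shared read lock, while the rare quota/owner change takes the exclusive write lock.

```python
import threading

class RWLock:
    """Minimal readers-writer lock: many concurrent readers, one writer."""
    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0

    def acquire_read(self):
        with self._cond:
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()

    def acquire_write(self):
        self._cond.acquire()
        while self._readers > 0:
            self._cond.wait()

    def release_write(self):
        self._cond.release()

class Volume:
    """Hypothetical stand-in for a referenced (bind-mounted) volume."""
    def __init__(self, owner, quota):
        self.lock = RWLock()
        self.owner = owner
        self.quota = quota

    def resolve_bucket(self, bucket):
        # Frequent path: only the shared read lock is needed.
        self.lock.acquire_read()
        try:
            return (self.owner, bucket)
        finally:
            self.lock.release_read()

    def set_owner(self, new_owner):
        # Infrequent path (owner/quota change): exclusive write lock.
        self.lock.acquire_write()
        try:
            self.owner = new_owner
        finally:
            self.lock.release_write()
```

Since the write lock is only taken on owner/quota changes, the common resolution path never blocks other readers.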



Marton



On 3/20/20 2:42 PM, Elek, Marton wrote:

Based on feedback and comments I updated the document.

https://hackmd.io/uqSYkmd8SXGAAjQx3sObQg

The current proposal is the following:

1. Use one (configured) volume for all the s3 buckets.

For example, if the configured volume is /s3, you can see all the Ozone buckets in this volume via the S3 interface.

(The s3 bucket "bucket1" will be mapped to the Ozone path "/s3/bucket1".)

2. To make it possible to access ANY volume/bucket, any Ozone bucket can be "bind mounted" under another volume.

For example:

ozone sh mount /vol1/bucket1 /s3/bucket1

will create a symbolic-link-like bind mount, and inside /s3/bucket1 the content of /vol1/bucket1 will be shown. Together with the 1st point (any bucket under /s3 is exposed), this makes it possible to expose any bucket.
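
The two-step resolution above can be sketched as follows (a minimal Python illustration; the names S3_VOLUME, MOUNT_TABLE, and resolve_s3_bucket are hypothetical, not the real Ozone implementation): an S3 bucket name is first mapped under the configured volume, then any bind mount is followed to the referenced bucket.

```python
# The one configured volume exposed via the S3 gateway (assumption: /s3).
S3_VOLUME = "s3"

# Bind mounts: (volume, bucket) -> (referenced volume, referenced bucket).
# This entry models: ozone sh mount /vol1/bucket1 /s3/bucket1
MOUNT_TABLE = {
    ("s3", "bucket1"): ("vol1", "bucket1"),
}

def resolve_s3_bucket(s3_bucket):
    """Map an S3 bucket name to the Ozone path that backs it."""
    key = (S3_VOLUME, s3_bucket)
    # Follow a bind mount if one exists; otherwise the bucket lives
    # directly under the configured S3 volume.
    volume, bucket = MOUNT_TABLE.get(key, key)
    return f"/{volume}/{bucket}"
```

With this table, "bucket1" resolves to /vol1/bucket1 through the mount, while an unmounted bucket resolves directly under /s3.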

!!! INCOMPATIBLE CHANGE ALERT !!!!!

While 1 is implemented but 2 is not yet, for a limited period of time we will expose buckets from only one volume as s3 buckets. This is different from the current implementation, where you can use s3 buckets from multiple volumes.

If it's a blocker for you, please share your opinion and we can schedule the implementation according to the feedback.


Thanks for all the feedback and comments,
Marton




On 3/17/20 9:43 AM, Elek, Marton wrote:


On 3/16/20 3:11 PM, Arpit Agarwal wrote:
Thanks for writing this up Marton. I updated the doc to add a fourth problem:

      > Ozone buckets created via the native object store interface are not visible via the S3 gateway.

I don’t understand option 1. Does it mean that we will have at least one volume per user?

No, you can use the same volume:

kinit user1 -kt ....
ACCESS_KEY_ID=$(ozone s3 create-secret --volume=vol1)
s3 create-bucket ....

kinit user2 -kt ....
ACCESS_KEY_ID=$(ozone s3 create-secret --volume=vol1)
s3 create-bucket ....


Also the access key is separate per user - so how do I grant another user access to my volumes?

See the previous example. If you have permission to the volume you can create an ACCESS_KEY_ID to get an s3 view of the volume.
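
The idea in the example above can be sketched as follows (create_secret and SECRET_STORE are hypothetical illustrations, not the real Ozone API): each user gets their own access key, but multiple keys can point at the same volume, so access is shared through volume permissions rather than through the key itself.

```python
# Hypothetical sketch: per-user S3 secrets mapping to a shared volume.
SECRET_STORE = {}  # access_key_id -> (kerberos_principal, volume)

def create_secret(principal, volume):
    """Model of 'ozone s3 create-secret --volume=...' after kinit as principal."""
    # Assumption: in reality the key is opaque; here it is derived
    # from the principal purely so the example is deterministic.
    access_key_id = f"{principal}@{volume}"
    SECRET_STORE[access_key_id] = (principal, volume)
    return access_key_id

key1 = create_secret("user1", "vol1")
key2 = create_secret("user2", "vol1")
```

The two keys are distinct credentials, yet both give an S3 view of the same volume vol1.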


I like option 2. The notion of volumes already doesn’t work in the S3 world. We also need to fix enumeration of volumes by users; this is not an S3 issue.

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]


