Re: [ceph-users] S3 Radosgw : how to grant a user within a tenant

2017-02-17 Thread Bastian Rosner
On 02/17/2017 06:25 PM, Vincent Godin wrote:
> I created two users, jack and bob, inside tenant_A.
> jack created a bucket named BUCKET_A and wants to give read access to
> the user bob.
>
> With s3cmd, I can grant a user without a tenant easily: s3cmd setacl
> --acl-grant=read:user s3://BUCKET_A
>
> But with an explicit tenant, I tried:
> --acl-grant=read:bob
> --acl-grant=read:tenant_A$bob
> --acl-grant=read:tenant_A\$bob
> --acl-grant=read:"tenant_A:bob"
>
> Each time, I got an S3 error: 400 (InvalidArgument).
>
> Does anyone know the solution?

Have you tried using an email address instead of tenant:UID?
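
If radosgw does expect the tenant$uid form for tenanted users, shell
quoting also matters: without single quotes, $bob can be expanded by the
shell before s3cmd ever sees it. A minimal sketch, assuming the
tenant$uid grantee syntax (tenant_A, bob and BUCKET_A are the names from
the question; nothing here was verified against a live cluster):

# single quotes keep the literal '$' in the grantee name
s3cmd setacl --acl-grant=read:'tenant_A$bob' s3://BUCKET_A

# check which grants actually landed on the bucket
s3cmd info s3://BUCKET_A

If the 400 persists even with correct quoting, the rejected grantee form
is more likely a radosgw-side limitation than a shell problem.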


[ceph-users] async-ms with RDMA or DPDK?

2017-02-14 Thread Bastian Rosner

Hi,

According to the Kraken release notes and documentation, AsyncMessenger
now also supports RDMA and DPDK.
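
For anyone who wants to experiment, here is a minimal sketch of what
enabling the RDMA backend is supposed to look like, based on the docs
only -- I have not run this myself, and the device name below is just a
placeholder:

# append the RDMA messenger options (or merge them into the existing
# [global] section by hand); option names are taken from the docs
cat >> /etc/ceph/ceph.conf <<'EOF'
[global]
ms_type = async+rdma
ms_async_rdma_device_name = mlx5_0
EOF

# restart the daemons so the new messenger type takes effect
systemctl restart ceph.target

For DPDK the docs describe ms_type = async+dpdk plus additional
ms_dpdk_* options.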


Is anyone already using async-ms with RDMA or DPDK who might be able to
tell us something about real-world performance gains and stability?


Best, Bastian


Re: [ceph-users] OSPF to the host

2016-06-08 Thread Bastian Rosner

Hi,

regarding clustered VyOS on KVM: in theory this sounds like a safe plan,
but it will come with a significant performance penalty because of all
the context switches. Even with PCI passthrough you will still feel
increased latency.


Docker/LXC/LXD, on the other hand, do not share the context-switch
dilemma. I'm not sure whether VyOS likes to run in a Docker container,
though.


I didn't have a chance to play with VPP [1] yet, but it sounds like it
could be quite useful for high-performance routing/switching inside a
container.


[1]: https://wiki.fd.io/view/VPP

Cheers, Bastian

On 2016-06-08 09:04, Josef Johansson wrote:

Hi,

Regarding single points of failure with the daemon on the host, I was
thinking about doing a cluster setup with e.g. VyOS on KVM machines on
the host, and they handle all the OSPF stuff as well. I have not done
any performance benchmarks, but it should at least be possible. It might
even be possible to do it in Docker or straight in LXC, since it's
mostly route management in the kernel.

Regards,
Josef

On Mon, 6 Jun 2016, 18:54 Jeremy Hanmer wrote:

We do the same thing. OSPF between ToR switches, BGP to all of the
hosts, with each one advertising its own /32 (each has 2 NICs).
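
For reference, a bare-bones sketch of what "BGP to the host advertising
its own /32" can look like with Quagga/FRR -- the ASNs and addresses
below are made up for illustration, not taken from our actual setup:

# assumed addressing: ToR A/B at 192.0.2.1 and 192.0.2.3 (AS 65001),
# host loopback 10.10.0.11/32, host AS 65011
vtysh <<'EOF'
configure terminal
router bgp 65011
 neighbor 192.0.2.1 remote-as 65001
 neighbor 192.0.2.3 remote-as 65001
 network 10.10.0.11/32
end
write memory
EOF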

On Mon, Jun 6, 2016 at 6:29 AM, Luis Periquito wrote:


Nick,

TL;DR: works brilliantly :)

Where I work we have all of the Ceph nodes (and a lot of other stuff)
using OSPF and BGP server attachment. With that we're able to implement
solutions like anycast addresses, removing the need to add load
balancers for the radosgw solution.

The biggest issues we've had were around per-flow vs. per-packet traffic
load balancing, but as long as you keep it simple you shouldn't have any
issues.

Currently we have a P2P network between the servers and the ToR switches
on a /31 subnet, and then create a virtual loopback address, which is
the interface we use for all communications. Running tests like iperf
we're able to reach 19Gbps (on a 2x10Gbps network). OTOH we no longer
have the ability to separate traffic between the public and OSD
networks, but we never really felt the need for it.
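
A minimal sketch of that addressing scheme, with made-up interface names
and example addresses (two /31 point-to-point links to the ToR switches
plus a /32 on loopback as the node address); adapt to your own plan:

# point-to-point /31s towards the two ToR switches
ip addr add 192.0.2.0/31 dev eth0
ip addr add 192.0.2.2/31 dev eth1

# node address on the loopback; this is what the routing protocol advertises
ip addr add 10.10.0.11/32 dev lo

# ceph daemons can then be pointed at the loopback address, e.g. via
# public addr / cluster addr (or public network) in ceph.conf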

Also spend a bit of time planning what the network and its topology will
look like. If done properly (think of details like route summarization)
it's really worth the extra effort.



On Mon, Jun 6, 2016 at 11:57 AM, Nick Fisk wrote:


Hi All,



Has anybody had any experience with running the network routed down all
the way to the host?



I know the standard way most people configure their OSD nodes is to bond
the two NICs, which will then talk via a VRRP gateway, and from then on
the networking is all Layer 3. The main disadvantage I see here is that
you need a beefy inter-switch link to cope with the amount of traffic
flowing between switches to the VRRP address. I've been trying to design
around this by splitting hosts into groups with different VRRP gateways
on either switch, but this relies on using active/passive bonding on the
OSD hosts to make sure traffic goes from the correct NIC to the directly
connected switch.



What I was thinking: instead of terminating the Layer 3 part of the
network at the access switches, terminate it at the hosts. If each NIC
of the OSD host had a different subnet and the actual "OSD server"
address were bound to a loopback adapter, OSPF should advertise this
loopback address as reachable via the two L3 links on the physically
attached NICs. This should give you a redundant topology which also
respects your physical layout and potentially gives you higher
performance due to ECMP.
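
A rough sketch of what that could look like with Quagga/FRR on an OSD
host, again with made-up addresses and interface names -- purely
illustrative, not a tested configuration:

# the "OSD server" address lives on the loopback
ip addr add 10.10.0.11/32 dev lo

# run OSPF on the two point-to-point uplinks and advertise the loopback;
# with both paths at equal cost, ECMP gives two ways in and out
vtysh <<'EOF'
configure terminal
router ospf
 network 192.0.2.0/31 area 0
 network 192.0.2.2/31 area 0
 network 10.10.0.11/32 area 0
end
write memory
EOF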




Any thoughts, any pitfalls?




Re: [ceph-users] DSS 7000 for large scale object storage

2016-03-21 Thread Bastian Rosner
Yes, rebuild in case of a whole-chassis failure is indeed an issue. That
depends on what the failure domain looks like.
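
Whether a chassis can fail without data loss comes down to the CRUSH
hierarchy: if each server sled is grouped under its chassis and the rule
places replicas across chassis rather than hosts, losing a whole box
costs at most one copy. A rough sketch with made-up bucket names (not
taken from any real map):

# declare a chassis bucket and hang the OSD hosts underneath it
ceph osd crush add-bucket chassis1 chassis
ceph osd crush move chassis1 root=default
ceph osd crush move node-a chassis=chassis1
ceph osd crush move node-b chassis=chassis1

# replicated rule that places one copy per chassis; point the pools at
# it afterwards (crush_ruleset in the pool settings)
ceph osd crush rule create-simple rep-by-chassis default chassis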


I'm currently thinking of initially not running fully equipped nodes.
Let's say four of these machines with 60x 6TB drives each, so each
chassis only loaded to 2/3.

That's 1440TB raw, distributed over eight OSD nodes.
Each individual OSD node would therefore host "only" 30 OSDs but still
allow for fast expansion.


Usually delivery and installation of a bunch of HDDs is much faster than
that of whole servers.


I really wonder how easy it is to add additional disks and whether the
chance of node or even chassis failure increases.


Cheers, Bastian

On 2016-03-21 10:33, David wrote:

Sounds like you'll have a field day waiting for a rebuild in case of a
node failure or an upgrade of the CRUSH map ;)

David



On 21 March 2016 at 09:55, Bastian Rosner <b...@d00m.org> wrote:

Hi,

any chance that somebody here has already gotten hands on Dell DSS 7000
machines?


4U chassis containing 90x 3.5" drives and 2x dual-socket server sleds
(DSS7500). Sounds ideal for high-capacity, high-density clusters, since
each of the server sleds would run 45 drives, which I believe is a
suitable number of OSDs per node.


When searching for this model there's not much detailed information out
there. Sadly I could not find a review from somebody who actually owns a
bunch of them and runs a decent PB-scale cluster with them.




[ceph-users] DSS 7000 for large scale object storage

2016-03-21 Thread Bastian Rosner

Hi,

any chance that somebody here has already gotten hands on Dell DSS 7000
machines?


4U chassis containing 90x 3.5" drives and 2x dual-socket server sleds
(DSS7500). Sounds ideal for high-capacity, high-density clusters, since
each of the server sleds would run 45 drives, which I believe is a
suitable number of OSDs per node.


When searching for this model there's not much detailed information out
there. Sadly I could not find a review from somebody who actually owns a
bunch of them and runs a decent PB-scale cluster with them.


Cheers, Bastian