Why cephx rather than Kerberos?

2012-12-09 Thread Jonathan Proulx
Hi All,

The docs consistently describe cephx as Kerberos-like; I'm curious why
Kerberos isn't used instead.

Developing new security protocols is almost always a bad idea from a
security perspective.  I haven't looked deeply into cephx to see how
much is novel (and likely to contain novel bugs) and how much is reuse
of well-worn crypto, so this is just a first-impression concern.

More importantly to me, I already have a Kerberos infrastructure: all
my users have principals and all my hosts have keytabs, and I would
really like to reuse that for securing data access rather than
managing yet another separate set of credentials.

The only documented reason I can find is: "Unlike Kerberos, each
monitor can authenticate users and distribute keys, so there is no
single point of failure or bottleneck when using cephx."  But Kerberos
with multiple KDCs needn't have a single point of failure either, and
"each monitor" probably means 3-5 systems in practice, which is a
typical scale for production Kerberos deployments.  It's true that
with Kerberos, if the admin server goes down I can't add new
principals (users) or perform other administrative functions, but
authentication continues and users (human and daemon) don't really
care.
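(To be concrete, this is the sort of redundancy I mean -- a minimal
krb5.conf sketch with made-up hostnames; clients simply fail over
among the listed KDCs, and only the admin_server is a single point for
administrative operations:

    [realms]
        EXAMPLE.COM = {
            kdc = kdc1.example.com
            kdc = kdc2.example.com
            kdc = kdc3.example.com
            admin_server = kadmin.example.com
        }
)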

Am I missing something? Any plans to either add Kerberos as an
authentication method or provide a pluggable authentication scheme?

I'm fairly excited about all things Ceph from a design and direction
perspective, but this piece (IMO) is the one thing that is just
painfully close but not quite right.

Thanks,
-Jon


reconfiguring existing hardware for ceph use

2012-10-25 Thread Jonathan Proulx
Hi All,

I have 8 servers available to test ceph on which are a bit
over-powered/under-disked, and I'm trying to develop a plan for how to
lay out services and how to populate the available disk slots.

The hardware is dual-socket Intel E5640 (8 cores total per node) with
48G RAM and dual 10G Ethernet, but only four 3.5" SAS slots (with a
Fusion-MPT controller).

The target application is primarily RBD as the volume storage backend
for OpenStack (Folsom Cinder), and possibly the object store for
Glance.  I'd also like to test CephFS, but I don't have a particular
use case in mind for it.

The OpenStack cloud this would back is used for research computing by
a variety of internal research groups and has wildly unpredictable
workloads.  Volume storage use has not been particularly intensive to
date, so I don't have a particular performance target to hit.

For comparison, the current back end is a single cinder-volume server
placing volumes on two software RAID6 volumes, each backed by 12 2T
nearline SAS drives.  Another option we're evaluating is a Dell
EqualLogic SAN with a mirrored pair of 16x1T-drive RAID6 units.

My first thought is to populate the test systems with a single
solid-state drive (size and type TBD) to hold the operating system and
journals, plus three 3T SAS drives for the OSD data filesystems,
running 3 OSDs on every node (one per data disk) with mon and mds only
on the first 3.
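To make that concrete, per-node config along these lines is what I
have in mind (device names and paths are placeholders, not a tested
config):

    [osd.0]
        host = ceph-node-0
        osd data = /var/lib/ceph/osd/ceph-0    ; first 3T SAS drive
        osd journal = /dev/sda5                ; partition on the shared SSD
    [osd.1]
        host = ceph-node-0
        osd data = /var/lib/ceph/osd/ceph-1
        osd journal = /dev/sda6
    [osd.2]
        host = ceph-node-0
        osd data = /var/lib/ceph/osd/ceph-2
        osd journal = /dev/sda7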

My second thought is to use 3T drives in all four slots, take an OS
cut off the top of each (probably 16G per drive, assembled as software
RAID10 for 32G of mirrored space), and run 4 OSDs per node on the
remaining disk space using internal journals.
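(Roughly, for the OS slice, something like this -- partition numbers
are illustrative:

    mdadm --create /dev/md0 --level=10 --raid-devices=4 \
        /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

with the rest of each disk left as a single partition per OSD.)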

Is either plan more sane than the other?  Are both so crazy that I
should just use an OS disk and three OSD disks with internal journals?
Any better suggestions?

Thanks,
-Jon





Ideal hardware spec?

2012-08-22 Thread Jonathan Proulx
Hi All,

Yes, I'm asking the impossible question: what is the best hardware
config?

I'm looking at (possibly) using ceph as the backing store for images
and volumes on OpenStack, as well as exposing at least the object
store for direct use.

The OpenStack cluster exists and is currently in the early stages of
use by researchers here: approx. 1500 vCPUs (counting hyperthreads;
768 physical cores) and 3T of RAM across 64 physical nodes.

On the object store side it would be a new resource for us, and it's
hard to say what people would do with it, except that it would be many
different things and the use profile would be constantly changing
(which is true of all our existing storage).

In this sense, even though it's a private cloud, the somewhat
unpredictable usage profile gives it some characteristics of a small
public cloud.

Size-wise I'm hoping to start out with 3 monitors and 5(+) OSD nodes,
ending up with 20-30T of 3x-replicated storage (call me paranoid).
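(Back of the envelope, assuming something like seven data drives per
OSD node: 5 nodes x 7 drives x 2T = 70T raw, or roughly 23T usable at
3x replication; with 3T drives it's 105T raw and ~35T usable, so 5+
nodes should bracket that range.)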

The monitor specs seem relatively easy to come up with.  For the OSDs,
http://ceph.com/docs/master/install/hardware-recommendations suggests
1 drive, 1 core and 2G RAM per OSD (with multiple OSDs per storage
node).  On-list discussions seem to frequently include an SSD for
journaling (which is similar to what we do for our current ZFS-backed
NFS storage).

I'm hoping to wrap the hardware in a grant and am willing to
experiment a bit with different software configurations to tune it up
when/if I get the hardware in.  So my immediate concern is a hardware
spec that will have a reasonable processor:memory:disk ratio, plus
opinions (or, better, data) on the utility of SSDs.

First, is the documented core-to-disk ratio still current best
practice?  Given a platform with more drive slots, could 8 cores
handle more disks?  Would that need (or benefit from) more memory?

Have SSDs been shown to improve performance with this architecture?

If so, given the 8-drive-slot example with seven OSDs presented in the
docs, how reasonable would it be to use a high-performance SSD for the
OS image and also cut journal/log partitions out of it for the
remaining seven 2-3T nearline SAS drives?
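Something like this on the SSD is what I'm picturing -- sizes and the
device name are guesses, not recommendations:

    # one SSD split into an OS partition plus seven ~10G journal partitions
    sgdisk -n 1:0:+32G /dev/sdX        # OS root
    for i in $(seq 2 8); do            # journals for osd.0 through osd.6
        sgdisk -n $i:0:+10G /dev/sdX
    done

with each [osd.N] section pointing its osd journal at one of those
partitions.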

Thanks,
-Jon


getting started with RADOSGW

2012-07-22 Thread Jonathan Proulx
Hi All,

I've created a testuser with a testuser:swift subuser and set up
apache with fastcgi, but client access with either S3 or Swift clients
results in HTTP 500 errors on the server:

[Sun Jul 22 10:14:33 2012] [error] [client 128.52.x.x] (2)No such file
or directory: FastCGI: failed to connect to server
/var/www/s3gw.fcgi: connect() failed
[Sun Jul 22 10:14:33 2012] [error] [client 128.52.x.x] FastCGI:
incomplete headers (0 bytes) received from server /var/www/s3gw.fcgi


 ls -lh /var/www/s3gw.fcgi
-rwxrwxr-x 1 root root 79 Jul 10 10:28 /var/www/s3gw.fcgi

cat /var/www/s3gw.fcgi
#!/bin/sh
exec /usr/bin/radosgw -c /etc/ceph/ceph.conf -n client.rados.gateway

/usr/bin/radosgw is in the right place and executable by all, and
/etc/ceph/ceph.conf is also in the correct location and readable by
all, so I'm a bit confused by the No such file or directory error.
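For reference, the relevant apache bits look roughly like this (the
socket path and server name are approximations rather than my exact
config):

    FastCgiExternalServer /var/www/s3gw.fcgi -socket /var/run/ceph/radosgw.sock
    <VirtualHost *:80>
        ServerName gw.example.com
        DocumentRoot /var/www
        RewriteEngine On
        RewriteRule ^/(.*) /s3gw.fcgi?%{QUERY_STRING} [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
    </VirtualHost>

and ceph.conf has a matching rgw socket path line.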

I *think* I've followed all the steps but have obviously missed
something, any idea what?

Thanks,
-Jon


Re: getting started with RADOSGW

2012-07-22 Thread Jonathan Proulx
On Sun, Jul 22, 2012 at 10:59 AM, Yehuda Sadeh yeh...@inktank.com wrote:

 I think you've set up your apache to use external fastcgi, but you
 have to run radosgw manually using this method.

Spot on, thanks for the quick response.  My radosgw init script was
quietly exiting because I was using the FQDN in ceph.conf while it was
trying to match on the short hostname.  Now it's failing to start, but
at least it's logging why; the new error is below.
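(For the record, the hostname fix was just this in ceph.conf, with an
illustrative short name:

    [client.radosgw.gateway]
        host = ceph-mon        ; short hostname, since the init script matches `hostname -s`
        keyring = /etc/ceph/keyring.rados.gateway
)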

2012-07-22 11:32:39.636760 7fd2b9be9780  0 librados:
client.radosgw.gateway authentication error (1) Operation not
permitted
2012-07-22 11:32:39.637018 7fd2b9be9780 -1 Couldn't init storage
provider (RADOS)

Are these the right capabilities for that user
(http://ceph.com/docs/master/radosgw/config suggests they are)?

client.rados.gateway
key: redacted
caps: [mon] allow r
caps: [osd] allow rwx

ceph.conf points to /etc/ceph/keyring.rados.gateway, which is readable
and has the matching key.

Thanks,
-Jon


Re: getting started with RADOSGW

2012-07-22 Thread Jonathan Proulx
On Sun, Jul 22, 2012 at 12:31 PM, Yehuda Sadeh yeh...@inktank.com wrote:
 On Sun, Jul 22, 2012 at 8:46 AM, Jonathan Proulx j...@jonproulx.com wrote:

 are these the right capabilities for that user
 (http://ceph.com/docs/master/radosgw/config suggests they are)?

 client.rados.gateway
 key: redacted
 caps: [mon] allow r
 caps: [osd] allow rwx

 I think the radosgw needs the 'w' cap for the monitor for
 automatically creating the rados pools. Though it may be that you'd be
 better off creating the pools yourself with the required amount of pgs
 than letting it do that by itself, as the default number of pgs that
 will be created is very low.
 ceph.conf points to /etc/ceph/keyring.rados.gateway, which is
 readable and has the matching key.

 Try running 'ceph auth list' and see if you see the auth info for that
 user. If not then you'll need to 'ceph auth add' that keyring.

'ceph auth list' is where I got the capabilities list, though the
keyring file above lists the same caps.

Hmmm, how do I change the capabilities of a key?  That doc section is
blank (http://ceph.com/docs/master/ops/manage/key/#capabilities).  I
tried ceph-authtool -n client.rados.gateway --cap osd 'allow rwx'
--cap mon 'allow rw' /etc/ceph/keyring.rados.gateway, which changed
the keyring file but not the output of ceph auth list.
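(My understanding -- which I haven't verified against this release --
is that ceph-authtool only edits the local keyring file, and the caps
stored by the monitors have to be updated separately, with something
like:

    ceph auth caps client.rados.gateway mon 'allow rw' osd 'allow rwx'

or by re-importing the edited keyring with
ceph auth import -i /etc/ceph/keyring.rados.gateway.)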

And radosgw is still exiting with an auth error...

root@ceph-mon:/tmp/rbd# /etc/init.d/radosgw restart
No /usr/bin/radosgw found running; none killed.
Starting client.radosgw.gateway...
radosgw daemon started with pid 27614
root@ceph-mon:/tmp/rbd# ps 27614
  PID TTY  STAT   TIME COMMAND
root@ceph-all-0:/tmp/rbd# cat /var/log/ceph/radosgw.log
2012-07-22 17:01:38.709151 7f79c3689780  0 librados:
client.radosgw.gateway authentication error (1) Operation not
permitted
2012-07-22 17:01:38.709391 7f79c3689780 -1 Couldn't init storage
provider (RADOS)

Thanks,
-Jon