If you are just playing around, you can roll everything onto a single server.
Or, if you prefer, put the MON and OSD on one server and the radosgw on a
different server. You can also do this in virtual machines if you don't have
all the hardware you would like to test with.
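As a rough sketch, a single-node test cluster can be brought up with ceph-deploy along these lines (the hostname "node1" and the OSD path are placeholders; this follows the ceph-deploy quick-start workflow and uses a directory-backed OSD, so no spare disk is needed -- which also answers the single-disk question below):

```shell
# Minimal single-node sketch (hostname "node1" is a placeholder).
# Run from an admin box with passwordless SSH to node1.
ceph-deploy new node1              # write an initial ceph.conf with node1 as MON
ceph-deploy install node1          # install the Ceph packages on node1
ceph-deploy mon create-initial     # create the monitor and gather keys

# A directory-backed OSD avoids needing a dedicated disk or partition:
ssh node1 mkdir -p /var/local/osd0
ceph-deploy osd prepare node1:/var/local/osd0
ceph-deploy osd activate node1:/var/local/osd0
```

With a single OSD you would also want `osd pool default size = 1` (or 2 if you add a second OSD) in ceph.conf so placement groups can reach active+clean; the radosgw can then be set up on the same host following the object gateway docs.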
> -Original Message-
> From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-
> boun...@lists.ceph.com] On Behalf Of Guang
> Sent: Monday, September 16, 2013 6:14 AM
> To: ceph-users@lists.ceph.com; Ceph Development
> Subject: [ceph-users] Deploy a Ceph cluster to play around with
>
> Hello ceph-users, ceph-devel,
> Nice to meet you in the community!
> Today I tried to deploy a Ceph cluster to play around with the API, and during
> the deployment I ran into a couple of questions where I may need your help:
> 1) How many hosts do I need if I want to deploy a cluster with RadosGW (so
> that I can try the S3 API)? Is it 3 OSD + 1 MON + 1 GW = 5 hosts at
> minimum?
>
> 2) I have a list of hardware; however, my host only has one disk with two
> partitions, one for boot and the other an LVM member. Is it possible to
> deploy an OSD on such hardware (e.g. by making an ext4 partition), or will I
> need another disk to do so?
>
> -bash-4.1$ ceph-deploy disk list myserver.com
> [ceph_deploy.osd][INFO ] Distro info: RedHatEnterpriseServer 6.3 Santiago
> [ceph_deploy.osd][DEBUG ] Listing disks on myserver.com...
> [repl101.mobstor.gq1.yahoo.com][INFO ] Running command: ceph-disk list
> [repl101.mobstor.gq1.yahoo.com][INFO ] /dev/sda :
> [repl101.mobstor.gq1.yahoo.com][INFO ] /dev/sda1 other, ext4, mounted on /boot
> [repl101.mobstor.gq1.yahoo.com][INFO ] /dev/sda2 other, LVM2_member
>
> Thanks,
> Guang
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com