Re: [ceph-users] SSD recommendations for OSD journals

2013-07-21 Thread Leen Besselink
On Mon, Jul 22, 2013 at 08:45:07AM +1100, Mikaël Cluseau wrote:
> On 22/07/2013 08:03, Charles 'Boyo wrote:
> >Counting on the kernel's cache, it appears I will be best served
> >purchasing write-optimized SSDs?
> >Can you share any information on the SSD you are using, is it PCIe
> >connected?
> 
> We are on a standard SAS bus, so any SSD that reaches around 500 MB/s and
> stays stable over the long run will do (we use 60 GB Intel 520s). You do
> not need a lot of space for the journal (5 GB per drive is more than
> enough on commodity hardware).
> 
> >Another question: since the intention of this storage cluster is
> >relatively cheap storage on commodity hardware, what's the balance
> >between cheap SSDs and reliability? Might a journal failure result in
> >data loss, or will such an event just 'down' the affected OSDs?
> 

When you do a write to Ceph, one OSD (I believe this is the primary for a
certain part of the data, an object) receives the write and distributes
copies to the other OSDs (as many as configured, e.g. min size=2, size=3).
Only when the write is done on all of those OSDs does it confirm the write
to the client. So if one OSD fails, the other OSDs will still have that
data, and the primary will make sure another copy is created somewhere else.

So I don't see a reason for data loss if you lose one journal. There will
be a lot of data copied around though, which will slow things down.
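
As a rough illustration of how those replica counts are set (a sketch only;
the pool name "rbd" and the values are just examples, and option availability
depends on your release):

$ ceph osd pool set rbd size 3       # keep three copies of every object
$ ceph osd pool set rbd min_size 2   # still accept I/O while only two copies are up
$ ceph osd dump | grep 'rep size'    # check what a pool is currently set to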

> A journal failure will fail your OSDs (from what I've understood,
> you'll have to rebuild them). But SSD wear is very predictable, so
> monitor them:
> 
> # smartctl -A /dev/sdd
> [..]
> ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE     UPDATED  WHEN_FAILED RAW_VALUE
> [..]
> 232 Available_Reservd_Space 0x0033   100   100   010    Pre-fail Always       -       0
> 233 Media_Wearout_Indicator 0x0032   093   093   000    Old_age  Always       -       0
> 
> And don't put too many OSDs on one SSD (I set a rule to not go over
> 4 for 1).
> 

When the SSD is large enough and the journals don't take up all the space,
you can also leave part of the SSD unpartitioned. The extra spare area will
make the SSD wear out much later.
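
A minimal sketch of that over-provisioning idea, assuming the journal SSD
shows up as /dev/sdd and GPT labels (sizes are only an example): create the
journal partitions you need and simply never allocate the tail of the device,
so the controller has extra spare flash for wear levelling.

# parted -s /dev/sdd mklabel gpt
# parted -s /dev/sdd mkpart journal-0 1MiB 5GiB
# parted -s /dev/sdd mkpart journal-1 5GiB 10GiB
# ... and leave the rest of the device unpartitioned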

> >On a similar note, I am using XFS on the OSDs which also journals,
> >does this affect performance in any way?
> 
> You want this journal for consistency ;) I don't know exactly the
> impact, but since we use spinning drives, the most important factor
> is that ceph, with a journal on SSD, does a lot of sequential
> writes, avoiding most seeks.
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] SSD recommendations for OSD journals

2013-07-21 Thread Mikaël Cluseau

On 22/07/2013 08:03, Charles 'Boyo wrote:
Counting on the kernel's cache, it appears I will be best served 
purchasing write-optimized SSDs?
Can you share any information on the SSD you are using, is it PCIe 
connected?


We are on a standard SAS bus, so any SSD that reaches around 500 MB/s and 
stays stable over the long run will do (we use 60 GB Intel 520s). You do not 
need a lot of space for the journal (5 GB per drive is more than enough on 
commodity hardware).
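
For what it's worth, the rule of thumb in the Ceph docs is roughly:
journal size = 2 x (expected throughput x filestore max sync interval).
A spinning disk doing ~100 MB/s with the default 5 second sync interval works
out to 2 x 100 MB/s x 5 s = 1000 MB, so 5 GB leaves plenty of headroom. In
ceph.conf that would be something like:

[osd]
    osd journal size = 5120    ; in MB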


Another question: since the intention of this storage cluster is 
relatively cheap storage on commodity hardware, what's the balance 
between cheap SSDs and reliability? Might a journal failure result 
in data loss, or will such an event just 'down' the affected OSDs?


A journal failure will fail your OSDs (from what I've understood, you'll 
have to rebuild them). But SSD wear is very predictable, so monitor them:


# smartctl -A /dev/sdd
[..]
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE     UPDATED  WHEN_FAILED RAW_VALUE
[..]
232 Available_Reservd_Space 0x0033   100   100   010    Pre-fail Always       -       0
233 Media_Wearout_Indicator 0x0032   093   093   000    Old_age  Always       -       0


And don't put too many OSDs on one SSD (I set a rule to not go over 4 
for 1).
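
A minimal ceph.conf sketch of such a 4:1 layout (OSD ids and the /dev/sdd
partitions are placeholders; adjust to your own devices):

[osd.0]
    osd journal = /dev/sdd1
[osd.1]
    osd journal = /dev/sdd2
[osd.2]
    osd journal = /dev/sdd3
[osd.3]
    osd journal = /dev/sdd4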


On a similar note, I am using XFS on the OSDs which also journals, 
does this affect performance in any way?


You want this journal for consistency ;) I don't know exactly the 
impact, but since we use spinning drives, the most important factor is 
that ceph, with a journal on SSD, does a lot of sequential writes, 
avoiding most seeks.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 1 x raid0 or 2 x disk

2013-07-21 Thread Mikaël Cluseau

On 07/21/13 20:37, Wido den Hollander wrote:
I'd say two disks and not RAID0, since when you are doing parallel I/O 
both disks can be doing something completely different. 


Completely agree, Ceph is already doing the striping :)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] SSD recommendations for OSD journals

2013-07-21 Thread Charles 'Boyo
Hello.

I am intending to build a Ceph cluster using several Dell C6100 multi-node 
chassis servers.

These have only 3 disk bays per node (12 x 3.5" drives across 4 nodes) so I 
can't afford to sacrifice a third of my capacity for SSDs. However, fitting the 
SSD via PCI-e seems a valid option.

Unfortunately, I am not a storage/hardware guru, so I'm out of my depth 
regarding the valid types of SSDs - MLC vs SLC, read vs write optimized, 
internal caches and fault symmetries. Can you guide me on what to look for and 
suggest actual PCI-e products known to work properly in this role?

Secondly, I'm unclear about how OSDs use the journal. It appears they write to 
the journal (in all cases, can't be turned off), ack to the client and then 
read the journal later to write to backing storage. Is that correct?

I'm coming from enterprise ZFS, where an SSD is also used for write journalling 
but data flushes come from the write cache in memory, hence the use of 
write-optimized SSDs. Why can't Ceph be configured to write from RAM instead of 
reading the journal on flush?
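
For reference, a sketch of the journal-related settings involved (paths and
values are only examples; defaults can differ between releases):

[osd]
    osd journal = /var/lib/ceph/osd/$cluster-$id/journal   ; point this at an SSD partition instead
    osd journal size = 5120                                ; MB, only used when the journal is a file
    filestore max sync interval = 5                        ; seconds between commits to the backing filesystem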

Regards,

Charles
Sent from my BlackBerry® wireless handheld from Glo Mobile.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] storage pools ceph (bobtail) auth failure in xenserver SR creation

2013-07-21 Thread Sébastien RICCIO

Hi again,

[root@xen-blade05 ~]# virsh pool-info rbd
Name:   rbd
UUID:   ebc61120-527e-6e0a-efdc-4522a183877e
State:  running
Persistent: no
Autostart:  no
Capacity:   5.28 TiB
Allocation: 16.99 GiB
Available:  5.24 TiB

I managed to get it running. How? I rebooted the XenServer host and 
tried again... It worked...


Some mystery here :)

Cheers,
Sébastien


On 21.07.2013 14:01, Sébastien RICCIO wrote:

Hi,

thanks a lot for your answer. I was successfully able to create the
storage pool with virsh.

However it is not listed when I issue a virsh pool-list:

Name State  Autostart
-

If I try to add it again:
[root@xen-blade05 ~]# virsh pool-create ceph.xml
error: Failed to create pool from ceph.xml
error: operation failed: pool 'rbd' already exists with uuid
34609207-3ea0-28c5-fee9-92e6b493e6cb

When I try to get info about it:

virsh # pool-info rbd
Name:   rbd
UUID:   34609207-3ea0-28c5-fee9-92e6b493e6cb
State:  inactive
Persistent: yes
Autostart:  no

When I try to start it:
virsh# pool-start rbd
error: Failed to start pool rbd
error: An error occurred, but the cause is unknown

dumping the config

virsh # pool-dumpxml rbd

<pool type='rbd'>
  <name>rbd</name>
  <uuid>34609207-3ea0-28c5-fee9-92e6b493e6cb</uuid>
  <capacity unit='bytes'>0</capacity>
  <allocation unit='bytes'>0</allocation>
  <available unit='bytes'>0</available>
  <source>
    <name>rbd</name>
    <auth username='admin' type='ceph'>
      <secret uuid='4cae707d-e049-4ec1-ac51-2cf69dd3'/>
    </auth>
  </source>
</pool>

virsh # secret-dumpxml 4cae707d-e049-4ec1-ac51-2cf69dd3

<secret ephemeral='no' private='no'>
  <uuid>4cae707d-e049-4ec1-ac51-2cf69dd3</uuid>
  <usage type='ceph'>
    <name>client.admin</name>
  </usage>
</secret>

Ceph is working on the box

[root@xen-blade05 ~]# ceph status
  cluster f58c1405-8891-46cb-adf0-c6ab2da4050a
   health HEALTH_OK
   monmap e1: 3 mons at
{ceph01=10.111.80.1:6789/0,ceph02=10.111.80.2:6789/0,ceph03=10.111.80.3:6789/0},
election epoch 18, quorum 0,1,2 ceph01,ceph02,ceph03
   osdmap e96: 6 osds: 6 up, 6 in
pgmap v1317: 192 pgs: 192 active+clean; 17400 MB data, 41192 MB
used, 5367 GB / 5407 GB avail
   mdsmap e1: 0/0/1 up

[root@xen-blade05 ~]# ceph osd lspools
0 data,1 metadata,2 rbd,

Any idea where to look now ? :)

Thanks for your help.

Cheers,
Sébastien

On 21.07.2013 11:42, Wido den Hollander wrote:

Hi,

On 07/21/2013 08:14 AM, Sébastien RICCIO wrote:

Hi !

I'm currently trying to get the XenServer on CentOS 6.4 tech preview
working against a test Ceph cluster and am running into the same issue.

Some info: the cluster is named "ceph", the pool is named "rbd".

ceph.xml:

   rbd
   
 ceph
 
   




You need a secret section inside the <source> element, like this:


<pool type='rbd'>
  <name>MyCephPool</name>
  <source>
    <name>rbd</name>
    <auth username='admin' type='ceph'>
      <secret uuid='...'/>
    </auth>
  </source>
</pool>

The secret "uuid" should reference back to the UUID of the secret
you'll define.


secret.xml:

   
 client.admin 
   





Don't put all that information in the <usage> section; the XML should
be like this:


<secret ephemeral='no' private='no'>
  <uuid>4cae707d-e049-4ec1-ac51-2cf69dd3</uuid>
  <usage type='ceph'>
    <name>client.admin</name>
  </usage>
</secret>


[root@xen-blade05 ~]# virsh pool-create ceph.xml


This should be the procedure to follow:

$ virsh secret-define secret.xml
$ virsh secret-set-value <uuid> <key>
$ virsh pool-define ceph.xml



error: Failed to create pool from ceph.xml
error: Invalid secret: virSecretFree



You get this error due to cephx being disabled, since you didn't
define a secret section. This error is a bug in libvirt, but it
will go away as soon as you use cephx.

Wido


same error :/

Any ideas?

Don't hesitate to ask if you need more info.

Cheers,
Sébastien


Hi John,

Could you try without the cat'ing and such?

Could you also try this:

$ virsh secret-define secret.xml
$ virsh secret-set-value <uuid> <key>
$ virsh pool-create ceph.xml

Could you post both XML files and not use any Xen commands like 'xe'?

I want to verify where this problem is.

Wido

On 07/11/2013 10:34 PM, John Shen wrote:
> Wido, Thanks! I tried again with your command syntax but the result is
> the same.
>
> [root@xen02 ~]# virsh secret-set-value $(cat uuid) $(cat client.admin.key)
> Secret value set
>
> [root@xen02 ~]# xe sr-create type=libvirt name-label=ceph
> device-config:xml-filename=ceph.xml
> Error code: libvirt
> Error parameters: libvirt: VIR_ERR_65: VIR_FROM_30: Invalid secret:
> virSecretFree
> [root@xen02 ~]# virsh pool-create ceph.xml
> error: Failed to create pool from ceph.xml
> error: Invalid secret: virSecretFree
>
> [root@xen02 ~]#
>
> On Thu, Jul 11, 2013 at 1:14 PM, Wido den Hollander wrote:
>
> Hi.
>
> So, the problem here is a couple of things.
>
> First: libvirt doesn't handle RBD storage pools without auth. That's
> my bad, but I never resolved that bug:
> http://tracker.ceph.com/issues/3493

Re: [ceph-users] storage pools ceph (bobtail) auth failure in xenserver SR creation

2013-07-21 Thread Sébastien RICCIO

Hi,

thanks a lot for your answer. I was successfully able to create the 
storage pool with virsh.


However it is not listed when I issue a virsh pool-list:

Name State  Autostart
-

If I try to add it again:
[root@xen-blade05 ~]# virsh pool-create ceph.xml
error: Failed to create pool from ceph.xml
error: operation failed: pool 'rbd' already exists with uuid 
34609207-3ea0-28c5-fee9-92e6b493e6cb


When I try to get info about it:

virsh # pool-info rbd
Name:   rbd
UUID:   34609207-3ea0-28c5-fee9-92e6b493e6cb
State:  inactive
Persistent: yes
Autostart:  no

When I try to start it:
virsh# pool-start rbd
error: Failed to start pool rbd
error: An error occurred, but the cause is unknown

dumping the config

virsh # pool-dumpxml rbd

<pool type='rbd'>
  <name>rbd</name>
  <uuid>34609207-3ea0-28c5-fee9-92e6b493e6cb</uuid>
  <capacity unit='bytes'>0</capacity>
  <allocation unit='bytes'>0</allocation>
  <available unit='bytes'>0</available>
  <source>
    <name>rbd</name>
    <auth username='admin' type='ceph'>
      <secret uuid='4cae707d-e049-4ec1-ac51-2cf69dd3'/>
    </auth>
  </source>
</pool>

virsh # secret-dumpxml 4cae707d-e049-4ec1-ac51-2cf69dd3

<secret ephemeral='no' private='no'>
  <uuid>4cae707d-e049-4ec1-ac51-2cf69dd3</uuid>
  <usage type='ceph'>
    <name>client.admin</name>
  </usage>
</secret>

Ceph is working on the box

[root@xen-blade05 ~]# ceph status
  cluster f58c1405-8891-46cb-adf0-c6ab2da4050a
   health HEALTH_OK
   monmap e1: 3 mons at 
{ceph01=10.111.80.1:6789/0,ceph02=10.111.80.2:6789/0,ceph03=10.111.80.3:6789/0}, 
election epoch 18, quorum 0,1,2 ceph01,ceph02,ceph03

   osdmap e96: 6 osds: 6 up, 6 in
pgmap v1317: 192 pgs: 192 active+clean; 17400 MB data, 41192 MB 
used, 5367 GB / 5407 GB avail

   mdsmap e1: 0/0/1 up

[root@xen-blade05 ~]# ceph osd lspools
0 data,1 metadata,2 rbd,

Any idea where to look now ? :)

Thanks for your help.

Cheers,
Sébastien

On 21.07.2013 11:42, Wido den Hollander wrote:

Hi,

On 07/21/2013 08:14 AM, Sébastien RICCIO wrote:

Hi !

I'm currently trying to get the XenServer on CentOS 6.4 tech preview
working against a test Ceph cluster and am running into the same issue.

Some info: the cluster is named "ceph", the pool is named "rbd".

ceph.xml:

   rbd
   
 ceph
 
   




You need a secret section inside the <source> element, like this:


<pool type='rbd'>
  <name>MyCephPool</name>
  <source>
    <name>rbd</name>
    <auth username='admin' type='ceph'>
      <secret uuid='...'/>
    </auth>
  </source>
</pool>

The secret "uuid" should reference back to the UUID of the secret 
you'll define.



secret.xml:

   
 client.admin 
   





Don't put all that information in the <usage> section; the XML should 
be like this:



<secret ephemeral='no' private='no'>
  <uuid>4cae707d-e049-4ec1-ac51-2cf69dd3</uuid>
  <usage type='ceph'>
    <name>client.admin</name>
  </usage>
</secret>


[root@xen-blade05 ~]# virsh pool-create ceph.xml


This should be the procedure to follow:

$ virsh secret-define secret.xml
$ virsh secret-set-value <uuid> <key>
$ virsh pool-define ceph.xml



error: Failed to create pool from ceph.xml
error: Invalid secret: virSecretFree



You get this error due to cephx being disabled, since you didn't define 
a secret section. This error is a bug in libvirt, but it will go 
away as soon as you use cephx.


Wido


same error :/

Any ideas?

Don't hesitate to ask if you need more info.

Cheers,
Sébastien


Hi John,

Could you try without the cat'ing and such?

Could you also try this:

$ virsh secret-define secret.xml
$ virsh secret-set-value <uuid> <key>
$ virsh pool-create ceph.xml

Could you post both XML files and not use any Xen commands like 'xe'?

I want to verify where this problem is.

Wido

On 07/11/2013 10:34 PM, John Shen wrote:
> Wido, Thanks! I tried again with your command syntax but the result is
> the same.
>
> [root@xen02 ~]# virsh secret-set-value $(cat uuid) $(cat client.admin.key)
> Secret value set
>
> [root@xen02 ~]# xe sr-create type=libvirt name-label=ceph
> device-config:xml-filename=ceph.xml
> Error code: libvirt
> Error parameters: libvirt: VIR_ERR_65: VIR_FROM_30: Invalid secret:
> virSecretFree
> [root@xen02 ~]# virsh pool-create ceph.xml
> error: Failed to create pool from ceph.xml
> error: Invalid secret: virSecretFree
>
> [root@xen02 ~]#
>
> On Thu, Jul 11, 2013 at 1:14 PM, Wido den Hollander wrote:
>
> Hi.
>
> So, the problem here is a couple of things.
>
> First: libvirt doesn't handle RBD storage pools without auth. That's
> my bad, but I never resolved that bug:
> http://tracker.ceph.com/issues/3493
>
> For now, make sure cephx is enabled.
>
> Also, the commands you are using don't seem to be right.
>
> It should be:
>
> $ virsh secret-set-value $(cat uuid) <key>
>
> Could you try again with cephx enabled and setting the secret value
> like mentioned above?
>
> Wido
>
> On 07/11/2013 06:00 PM, John Shen wrot

Re: [ceph-users] xfs on ceph RBD resizing

2013-07-21 Thread Wido den Hollander

On 07/20/2013 11:42 PM, Wido den Hollander wrote:

On 07/20/2013 05:16 PM, Sage Weil wrote:

On Sat, 20 Jul 2013, Wido den Hollander wrote:

On 07/20/2013 06:56 AM, Jeffrey 'jf' Lim wrote:

On Fri, Jul 19, 2013 at 12:54 PM, Jeffrey 'jf' Lim

wrote:

hey folks, I was hoping to be able to use xfs on top of RBD for a
deployment of mine, and was hoping for the resize of the RBD
(expansion, actually, would be my use case) in the future to be as
simple as a "resize on the fly", followed by an 'xfs_growfs'.

I just found a recent post, though
(http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-June/002425.html),
that seems to indicate that what I have in mind may not be possible.
Is this true? Do I have to unmount the filesystem (or worse, do a reboot?)
before I can do an effective resize?



another question that just came to mind: would you recommend skipping
partitioning, and then making an xfs out of the entire drive? I'm just
thinking how you would have to deal with partitioning upon an RBD
resize.



Yes, skip the partitioning, otherwise you will have to expand the
partition prior to expanding the filesystem.

I also just did a quick test with the 3.8 kernel and XFS directly on
top of
the RBD device, but the new size won't get detected while XFS is
mounted.


Hrm.  Do you know if this works with LVM?



I verified again with the 3.8 kernel in Ubuntu 12.04
(linux-image-generic-lts-raring) and with XFS this works:

$ rbd create --size 20480 image1
$ rbd map image1
$ mkfs.xfs /dev/rbd/rbd/image1
$ mkdir /tmp/rbd1
$ mount -o rw,noatime /dev/rbd/rbd/image1 /tmp/rbd1
$ rbd resize --size 40960 image1
$ umount /tmp/rbd1
$ mount -o rw,noatime /dev/rbd/rbd/image1 /tmp/rbd1
$ xfs_growfs /tmp/rbd1

I couldn't get LVM to work though, pvcreate kept failing with:

"#filters/filter.c:134 /dev/rbd2: Skipping: Unrecognised LVM device type
250"

That is usually due to a filter in /etc/lvm/lvm.conf, but that is set to:

 # By default we accept every block device:
 filter = [ "a/.*/" ]

So I'm not sure why pvcreate fails to use that RBD device for LVM.



OK, so I got LVM working by adding this to lvm.conf:

types = [ "rbd", 249 ]

I fetched that number from /proc/devices.

LVM, however, didn't see the new size. I had to unmount all my filesystems 
on top of LVM, set the Volume Group to inactive and run pvresize before the 
new PV size was detected.


Again on the 3.8 kernel
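
Condensing the above into one sequence (a sketch; the volume group, mount
point and rbd device names are made up, and the major number must be taken
from /proc/devices on your own host):

$ grep rbd /proc/devices                 # e.g. "249 rbd"
  (then, in the devices section of /etc/lvm/lvm.conf:  types = [ "rbd", 249 ])
$ rbd resize --size 40960 image1
$ umount /mnt/mylv
$ vgchange -an myvg
$ pvresize /dev/rbd2
$ vgchange -ay myvg
$ mount /dev/myvg/mylv /mnt/mylv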

Wido


Wido


sage







--
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] storage pools ceph (bobtail) auth failure in xenserver SR creation

2013-07-21 Thread Wido den Hollander

Hi,

On 07/21/2013 08:14 AM, Sébastien RICCIO wrote:

Hi !

I'm currently trying to get the XenServer on CentOS 6.4 tech preview
working against a test Ceph cluster and am running into the same issue.

Some info: the cluster is named "ceph", the pool is named "rbd".

ceph.xml:

   rbd
   
 ceph
 
   




You need a secret section inside the <source> element, like this:


<pool type='rbd'>
  <name>MyCephPool</name>
  <source>
    <name>rbd</name>
    <auth username='admin' type='ceph'>
      <secret uuid='...'/>
    </auth>
  </source>
</pool>


The secret "uuid" should reference back to the UUID of the secret you'll 
define.



secret.xml:

   
 client.admin 
   





Don't put all that information in the <usage> section; the XML should be 
like this:



<secret ephemeral='no' private='no'>
  <uuid>4cae707d-e049-4ec1-ac51-2cf69dd3</uuid>
  <usage type='ceph'>
    <name>client.admin</name>
  </usage>
</secret>


[root@xen-blade05 ~]# virsh pool-create ceph.xml


This should be the procedure to follow:

$ virsh secret-define secret.xml
$ virsh secret-set-value <uuid> <key>
$ virsh pool-define ceph.xml
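
As a concrete sketch of that sequence (the pool name MyCephPool comes from the
example XML above; the UUID placeholder is whatever secret-define prints, and
the key can be pulled straight from Ceph):

$ virsh secret-define secret.xml
$ virsh secret-set-value <uuid> $(ceph auth get-key client.admin)
$ virsh pool-define ceph.xml
$ virsh pool-start MyCephPool
$ virsh pool-info MyCephPool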



error: Failed to create pool from ceph.xml
error: Invalid secret: virSecretFree



You get this error due to cephx being disabled, since you didn't define a 
secret section. This error is a bug in libvirt, but it will go away as 
soon as you use cephx.


Wido


same error :/

Any ideas?

Don't hesitate to ask if you need more info.

Cheers,
Sébastien


Hi John,

Could you try without the cat'ing and such?

Could you also try this:

$ virsh secret-define secret.xml
$ virsh secret-set-value <uuid> <key>
$ virsh pool-create ceph.xml

Could you post both XML files and not use any Xen commands like 'xe'?

I want to verify where this problem is.

Wido

On 07/11/2013 10:34 PM, John Shen wrote:
> Wido, Thanks! I tried again with your command syntax but the result is
> the same.
>
> [root@xen02 ~]# virsh secret-set-value $(cat uuid) $(cat client.admin.key)
> Secret value set
>
> [root@xen02 ~]# xe sr-create type=libvirt name-label=ceph
> device-config:xml-filename=ceph.xml
> Error code: libvirt
> Error parameters: libvirt: VIR_ERR_65: VIR_FROM_30: Invalid secret:
> virSecretFree
> [root@xen02 ~]# virsh pool-create ceph.xml
> error: Failed to create pool from ceph.xml
> error: Invalid secret: virSecretFree
>
> [root@xen02 ~]#
>
> On Thu, Jul 11, 2013 at 1:14 PM, Wido den Hollander wrote:
>
> Hi.
>
> So, the problem here is a couple of things.
>
> First: libvirt doesn't handle RBD storage pools without auth. That's
> my bad, but I never resolved that bug:
> http://tracker.ceph.com/issues/3493
>
> For now, make sure cephx is enabled.
>
> Also, the commands you are using don't seem to be right.
>
> It should be:
>
> $ virsh secret-set-value $(cat uuid) <key>
>
> Could you try again with cephx enabled and setting the secret value
> like mentioned above?
>
> Wido
>
> On 07/11/2013 06:00 PM, John Shen wrote:
>
> Hi Dave, Thank you so much for getting back to me.
>
> the command returns the same errors:
>
> [root@xen02 ~]# virsh pool-create ceph.xml
> error: Failed to create pool from ceph.xml
> error: Invalid secret: virSecretFree
>
> [root@xen02 ~]#
>
> the secret was precreated for the user admin that I use elsewhere with
> no issues (rbd mount, cephfs etc.), and per the ceph documentation, i
> just set the secret value with this command
>
> virsh secret-set-value $(cat uuid) --base64 $(cat client.admin.key)
>
> where the key is obtained from
>
> ceph auth list
>
> and uuid is generated by
>
> virsh secret-define --file secret.xml
>
> # cat secret.xml
>
>   client.admin $(cat client.admin.key)
>
>
> On Thu, Jul 11, 2013 at 7:22 AM, Dave Scott wrote:
>
> [sorry I didn't manage to reply to the original message;

Re: [ceph-users] 1 x raid0 or 2 x disk

2013-07-21 Thread Wido den Hollander

Hi,

On 07/21/2013 07:20 AM, James Harper wrote:

I have a server with 2 x 2TB disks. For performance, is it better to combine 
them as a single OSD backed by RAID0, or to have 2 OSDs each backed by a 
single disk? (The log will be on SSD in either case.)



I'd say two disks and not RAID0, since when you are doing parallel I/O 
both disks can be doing something completely different.


You can have Ceph figure it all out and use all the spindles available.
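
For example, with ceph-deploy that could look like this (hostname, data disks
and journal partitions are placeholders; the same split can also be set up by
hand):

$ ceph-deploy osd create node1:/dev/sdb:/dev/sdd1
$ ceph-deploy osd create node1:/dev/sdc:/dev/sdd2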


What I need performance-wise is more IOPS than overall throughput (maybe 
that's a universal thing? :)

Thanks

James
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




--
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] rgw bucket index

2013-07-21 Thread Dominik Mostowiec
Hi,
The RGW bucket index is kept in one object, which causes performance issues on a single OSD.
Is sharding, or some other change to improve this, on the roadmap?


--
Regards
Dominik
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] 1 x raid0 or 2 x disk

2013-07-21 Thread James Harper
I have a server with 2 x 2TB disks. For performance, is it better to combine 
them as a single OSD backed by RAID0, or to have 2 OSDs each backed by a 
single disk? (The log will be on SSD in either case.)

What I need performance-wise is more IOPS than overall throughput (maybe 
that's a universal thing? :)

Thanks

James
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com