with these params,
and they improve performance, let me know.
-----Original Message-----
From: Christian Kauhaus [mailto:k...@gocept.com]
Sent: Friday, June 27, 2014 3:35 AM
To: Aronesty, Erik; Udo Lembke; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] How to improve performance of ceph object storage
cluster
Hi,
On 25.06.2014 16:48, Aronesty, Erik wrote:
> I'm assuming you're testing the speed of cephfs (the file system) and not
> ceph "object storage".
for my part I me
I'm assuming you're testing the speed of cephfs (the file system) and not ceph
"object storage".
In my recent experience the primary thing that sped cephfs up was turning on
striping. That way the client should be able to pull down data from all 10
nodes at once, and writes should also be wr
work.
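Striping on CephFS is controlled per directory through the layout virtual
xattrs; a minimal sketch (the mount point and values are illustrative, and a
layout only applies to files created after it is set):

```shell
# Set a striped layout on a CephFS directory via its layout xattrs.
# stripe_count spreads each file across more objects (and thus OSDs);
# stripe_unit/object_size are in bytes and should be tuned per workload.
setfattr -n ceph.dir.layout.stripe_count -v 10 /mnt/cephfs/mydir
setfattr -n ceph.dir.layout.stripe_unit -v 1048576 /mnt/cephfs/mydir
setfattr -n ceph.dir.layout.object_size -v 4194304 /mnt/cephfs/mydir

# Verify the resulting layout:
getfattr -n ceph.dir.layout /mnt/cephfs/mydir
```

New files created under the directory inherit this layout; existing files keep
the layout they were written with.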
-----Original Message-----
From: Aronesty, Erik
Sent: Friday, May 09, 2014 11:51 AM
To: 'Lincoln Bryant'
Cc: ceph-users
Subject: RE: [ceph-users] issues with ceph
If I stat on that box, I get nothing:
q782657@usadc-seaxd01:/mounts/ceph1/pubdata/tcga/raw$ cd BRCA
-bash: cd: BR
ting the kernel, and rerunning some tests. Thanks.
-----Original Message-----
From: Lincoln Bryant [mailto:linco...@uchicago.edu]
Sent: Friday, May 09, 2014 10:39 AM
To: Aronesty, Erik
Cc: ceph-users
Subject: Re: [ceph-users] issues with ceph
Hi Erik,
What happens if you try to stat one of
So we were attempting to stress test a cephfs installation, and last night,
after copying 500GB of files, we got this:
570G in the "raw" directory
q782657@usadc-seaxd01:/mounts/ceph1/pubdata/tcga$ ls -lh
total 32M
-rw-rw-r-- 1 q783775 pipeline 32M May 8 10:39
2014-02-25T12:00:01-0800_data_man
Can Ceph act in a RAID-5 (or RAID-6) mode, storing objects so that the storage
overhead is n/(n-1)? For some systems where the underlying OSDs are known to
be very reliable, but where storage is very tight, this could be useful.
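Ceph did later gain erasure-coded pools, which give roughly this
(k+m)/k-style overhead. A sketch with illustrative names and PG counts,
assuming current flag spellings (older releases used
ruleset-failure-domain instead of crush-failure-domain):

```shell
# RAID-6-like pool: k=10 data chunks + m=2 coding chunks per object,
# i.e. ~1.2x raw overhead while surviving two simultaneous OSD failures.
ceph osd erasure-code-profile set myprofile k=10 m=2 crush-failure-domain=host

# Create a pool that uses the profile (128 placement groups here).
ceph osd pool create ecpool 128 128 erasure myprofile
```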
-----Original Message-----
From: ceph-users-boun...@lists.ceph.c
If there's an underperforming disk, why on earth would more data be put on it?
You'd think it would be less. I would think an overperforming disk should
(desirably) be the one that gets more data, right?
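For what it's worth, placement follows CRUSH weights, so a slow disk can be
de-emphasized by hand. A sketch with an illustrative OSD id:

```shell
# Temporary override (0.0-1.0): fraction of PGs osd.7 would normally get.
ceph osd reweight 7 0.8

# Or change the persistent CRUSH weight (normally sized to the disk's TB):
ceph osd crush reweight osd.7 0.8

# Confirm the new weights and data distribution:
ceph osd tree
```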
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Greg
Does Ceph really halve your storage like that?
If you specify N+1, does it really store two copies, or just compute
checksums across MxN stripes? I guess RAID-5 + Ceph with a large array
(12 disks, say) would be not too bad (about 2.2 TB raw for each 1 TB usable).
But it would be nicer if I had 12 storage units i
ver b)
availability increases (at the expense of size/write speed).
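Replication in (non-erasure-coded) Ceph is a per-pool replica count, not
parity: size=2 really does store two full copies of every object. A sketch,
using the old default "data" pool name for illustration:

```shell
# Two full replicas of each object in the pool:
ceph osd pool set data size 2

# Minimum replicas that must be up for I/O to proceed:
ceph osd pool set data min_size 1

# Inspect the current replica count:
ceph osd pool get data size
```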
-----Original Message-----
From: Gregory Farnum [mailto:g...@inktank.com]
Sent: Friday, September 27, 2013 11:14 AM
To: Aronesty, Erik
Cc: Aaron Ten Clay; Sage Weil; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] CephFS Pool Sp
> You can also create additional data pools and map directories to them, but
> this probably isn't what you need (yet).
Is there a link to a web page where you can read how to map a directory to a
pool? (I googled "ceph map directory to pool" ... and got this post)
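For the archives: the mapping is done with the ceph.dir.layout.pool xattr,
once the pool has been attached to the filesystem. A sketch with illustrative
names (the attach command has varied across releases; older ones used
`ceph mds add_data_pool` instead):

```shell
# Create a pool and make it usable by the filesystem "cephfs":
ceph osd pool create archive 64
ceph fs add_data_pool cephfs archive

# Point a directory at it; new files under it land in the "archive" pool:
setfattr -n ceph.dir.layout.pool -v archive /mnt/cephfs/archive-dir
```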
From: ceph-users-boun...@lists.c
I did the same thing, restarted with upstart, and I still need to use
authentication. Not sure why yet. Maybe I didn't change the /etc/ceph
configs on all the nodes.
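For comparison, turning cephx off requires the same settings in every node's
config, followed by a restart of all daemons; a sketch of the relevant
[global] keys:

```
[global]
auth cluster required = none
auth service required = none
auth client required = none
```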
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Snider, Tim
Sent: Tuesday,
I did the same thing recently.
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Snider, Tim
Sent: Monday, September 23, 2013 4:10 PM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] ceph.conf changes and restarting ceph.
I modified /etc/ceph.conf
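Config changes only take effect once the daemons are restarted on each node
whose config changed. A sketch for the upstart-based Ubuntu packaging of that
era (OSD id illustrative):

```shell
# Restart every Ceph daemon on this node:
sudo restart ceph-all

# Or just the monitors on this node:
sudo restart ceph-mon-all

# Or a single OSD:
sudo restart ceph-osd id=3
```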