OS: CentOS 6.5
Version: Ceph 0.79
Hi, everybody!
I have installed a Ceph cluster with 10 servers.
I tested the throughput of the Ceph cluster within the same datacenter.
Uploading 1GB files from one or several servers to one or several
servers, the total throughput is about 30MB/s.
That is to say,
I have recently deployed a Firefly CephFS cluster, and am trying out
the POSIX ACL feature that is supposed to have come in as of kernel
3.14. I've mounted my CephFS volume on a machine with kernel 3.15.
The ACL support seems to work (as in I can set and retrieve ACLs),
but it seems kinda buggy,
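For anyone following along, the basic check is the standard ACL tools
against the mount point; a minimal sketch, assuming a CephFS mount at
/mnt/cephfs and a hypothetical test user:

# Grant a hypothetical user read/write on a file in the CephFS mount
setfacl -m u:testuser:rw /mnt/cephfs/somefile
# Read the ACL back; it should show a user:testuser:rw- entry
getfacl /mnt/cephfs/somefile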
On 23.06.2014 20:24, Gregory Farnum wrote:
Well, actually it always takes the primary copy, unless the primary
has some way of locally telling that its version is corrupt. (This
might happen if the primary thinks it should have an object, but it
doesn't exist on disk.) But there's not a
Dear all,
I am trying to deploy a new test cluster following the instructions, using
the latest Firefly version from the yum repo.
Installing : ceph-libs-0.80.1-2.el6.x86_64
Installing : ceph-0.80.1-2.el6.x86_64
The initial setup contains 3 mons and a few small OSDs (1GB per journal).
The cluster has been
Hi,
I am also searching for tuning the single thread performance.
You can try following parameters:
[osd]
osd mount options xfs = rw,noatime,inode64,logbsize=256k,delaylog,allocsize=4M
osd_op_threads = 4
osd_disk_threads = 4
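These can usually be injected into running OSDs as well, without a
restart; a sketch, assuming admin access and that the running version
accepts these settings at runtime (the mount options only take effect
on remount):

# Push the thread counts to all running OSDs
ceph tell osd.* injectargs '--osd_op_threads 4 --osd_disk_threads 4'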
Udo
On 25.06.2014 08:52, wsnote wrote:
OS: CentOS 6.5
Version:
I'm assuming you're testing the speed of cephfs (the file system) and not ceph
object storage.
In my recent experience the primary thing that sped cephfs up was turning on
striping. That way the client should be able to pull down data from all 10
nodes at once, and writes should also be
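For reference, CephFS exposes file layouts as virtual xattrs, so striping
can be set per file while the file is still empty; a sketch, assuming a
mount at /mnt/cephfs (the stripe values are just examples):

# Create an empty file, then stripe it across 10 objects in 1MB units
touch /mnt/cephfs/striped.dat
setfattr -n ceph.file.layout.stripe_count -v 10 /mnt/cephfs/striped.dat
setfattr -n ceph.file.layout.stripe_unit -v 1048576 /mnt/cephfs/striped.dat
# Verify the resulting layout
getfattr -n ceph.file.layout /mnt/cephfs/striped.dat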
Hi,
On 25/06/14 00:27, Mark Kirkwood wrote:
On 24/06/14 23:39, Mark Nelson wrote:
On 06/24/2014 03:45 AM, Mark Kirkwood wrote:
On 24/06/14 18:15, Robert van Leeuwen wrote:
All of which means that MySQL performance (looking at you, binlog) may
still suffer due to lots of small-block-size sync
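A quick way to approximate that binlog pattern is small sequential writes
with a data sync after every write; a sketch with fio (the filename and
sizes are assumptions):

# 4k sequential writes, fdatasync after each one, much like a busy binlog
fio --name=binlog-sim --rw=write --bs=4k --size=256m \
    --fdatasync=1 --filename=/mnt/test/binlog.dat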
You probably want to look at the central log (on your monitors) and
see exactly what scrub errors it's reporting. There might also be
useful info if you dump the pg info on the inconsistent PGs. But if
you're getting this frequently, you're either hitting some unknown
issues with the OSDs around
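Concretely, something along these lines should surface the details
(standard ceph CLI; <pgid> is a placeholder):

# List the inconsistent PGs and the errors logged against them
ceph health detail
# Dump the full state of one inconsistent PG
ceph pg <pgid> query
# Watch the central/cluster log live from a monitor
ceph -w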
Create a 3rd OSD. The default pool size is 3 replicas, including the
initial system-created pools.
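To check or adjust that, a quick sketch with the standard CLI ('data'
being one of the default Firefly pools):

# Show the replication factor of a pool
ceph osd pool get data size
# The alternative to adding an OSD: lower replication to match 2 OSDs
ceph osd pool set data size 2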
David Zafman
Senior Developer
http://www.inktank.com
http://www.redhat.com
On Jun 25, 2014, at 3:04 AM, Iban Cabrillo cabri...@ifca.unican.es wrote:
Dear all,
I am trying to deploy a new test
Hi David,
I just added a new OSD with two disks (10 in total, 24GB per osd.x).
But I get this error now:
[ceph@cephadm ceph-cloud]$ ceph -w
cluster 344c60e2-cef8-41f3-92ae-1995b0abc870
health HEALTH_ERR 4 pgs incomplete; 15 pgs peering; 19 pgs stale; 46
pgs stuck inactive; 19 pgs stuck
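To dig into which PGs are stuck and why, something like the following
should help (standard ceph CLI):

# Per-PG detail behind the HEALTH_ERR summary
ceph health detail
# List PGs stuck in each problem state
ceph pg dump_stuck inactive
ceph pg dump_stuck stale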
I'm trying to track down an issue with RadosGW and special characters in
filenames. Specifically, it seems that filenames with a '+' in them are
not being handled correctly, and that I need to explicitly escape them.
For example:
---request begin---
HEAD
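To illustrate the workaround, a sketch (the host and object name are
made up):

# Percent-encoding the '+' as %2B is unambiguous and works
curl -I "http://rgw.example.com/bucket/foo%2Bbar"
# Sending the '+' literally is what appears to misbehave here
curl -I "http://rgw.example.com/bucket/foo+bar"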
Unfortunately Yehuda's out for a while as he could best handle this,
but it sounds familiar so I think you probably want to search the list
archives and the bug tracker (http://tracker.ceph.com/projects/rgw).
What version precisely are you on?
-Greg
Software Engineer #42 @ http://inktank.com |
ceph version 0.80.1 (a38fe1169b6d2ac98b427334c12d7cf81f809b74)
I'll try to take a look through the bug tracker, but I didn't see
anything obvious at first glance.
On 6/25/2014 7:33 PM, Gregory Farnum wrote:
Unfortunately Yehuda's out for a while as he could best handle this,
but it sounds
the '+' is a reserved character in URL syntax, which means it may have a
specific meaning in a specific part of the URL, but not everywhere.
The earliest HTTP specifications encoded spaces in the URL as '+'
characters after the question mark, and in form fields for POSTs that were
sent with
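To see the two behaviours side by side, a sketch (example.com is a
placeholder, and exact handling varies by server):

# In the query string, '+' historically decodes to a space,
# so a literal '+' there must be sent as %2B:
curl "http://example.com/search?q=a%2Bb"   # value "a+b"
curl "http://example.com/search?q=a+b"     # value "a b" on many servers
# In the path component '+' has no such meaning, but clients differ,
# which is why percent-encoding it everywhere is the safe choice.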
On Wed, Jun 25, 2014 at 12:22 AM, Christian Kauhaus k...@gocept.com wrote:
On 23.06.2014 20:24, Gregory Farnum wrote:
Well, actually it always takes the primary copy, unless the primary
has some way of locally telling that its version is corrupt. (This
might happen if the primary thinks it
Sorry we let this drop; we've all been busy traveling and things.
There have been a lot of changes to librados between Dumpling and
Firefly, but we have no idea what would have made it slower. Can you
provide more details about how you were running these tests?
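For comparison, the usual harness for this kind of test is rados bench;
a sketch (the pool name, duration, and concurrency are assumptions):

# 60-second write test with 16 concurrent ops, keeping the objects
rados bench -p testpool 60 write -t 16 --no-cleanup
# Then sequential reads of the objects just written
rados bench -p testpool 60 seq -t 16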
-Greg
Software Engineer #42 @
Unfortunately, both the client and the actual files are outside of my
control here. In the case that I noticed, the client is the Ubuntu
installer, and the files are part of the Ubuntu archive's content.
On 6/25/2014 8:07 PM, Gerard Toonstra wrote:
the + is a reserved character in the HTTP
On Wed, Jun 25, 2014 at 2:56 PM, Sean Crosby
richardnixonsh...@gmail.com wrote:
I have recently deployed a Firefly CephFS cluster, and am trying out
the POSIX ACL feature that is supposed to have come in as of kernel
3.14. I've mounted my CephFS volume on a machine with kernel 3.15.
The ACL
Hi,
On 26 June 2014 12:07, Yan, Zheng uker...@gmail.com wrote:
On Wed, Jun 25, 2014 at 2:56 PM, Sean Crosby
richardnixonsh...@gmail.com wrote:
I have recently deployed a Firefly CephFS cluster, and am trying out
the POSIX ACL feature that is supposed to have come in as of kernel
3.14.
On Wed, 25 Jun 2014 17:17:02 -0700 Gregory Farnum wrote:
Sorry we let this drop; we've all been busy traveling and things.
There have been a lot of changes to librados between Dumpling and
Firefly, but we have no idea what would have made it slower. Can you
provide more details about how
Can you try the attached patch? It should solve this issue.
Regards
Yan, Zheng
On Thu, Jun 26, 2014 at 10:45 AM, Sean Crosby
richardnixonsh...@gmail.com wrote:
Hi,
On 26 June 2014 12:07, Yan, Zheng uker...@gmail.com wrote:
On Wed, Jun 25, 2014 at 2:56 PM, Sean Crosby