Hello Greg,
Output of 'ceph osd tree':
# id    weight  type name       up/down reweight
-1      27.3    root default
-2      9.1             host stor1
0       3.64                    osd.0   up      1
1       3.64                    osd.1   up      1
2       1.82                    osd.2   up
On Tue, Aug 13, 2013 at 10:41:53AM -0500, Mark Nelson wrote:
Hi Mark,
On 08/13/2013 02:56 AM, Dmitry Postrigan wrote:
I am currently installing some backup servers with 6x3TB drives in them. I
played with RAID-10 but I was not
impressed at all with how it performs during a recovery.
Hello to all,
I've a big issue with Ceph RadosGW.
I did a PoC some days ago with radosgw. It worked well.
Ceph version 0.67.3 under CentOS 6.4
Now, I'm installing a new cluster but I can't succeed. I do not understand why.
Here is some elements :
ceph.conf:
[global]
filestore_xattr_use_omap
Hi to all.
Let's assume a Ceph cluster used to store VM disk images.
VMs will be booted directly from the RBD.
What will happen in case of OSD failure if the failed OSD is the
primary the VM is reading from?
___
ceph-users mailing list
Yeah, rbd clone works well, thanks a lot!
2013/9/16 Sage Weil s...@inktank.com
On Mon, 16 Sep 2013, Chris Dunlop wrote:
On Mon, Sep 16, 2013 at 09:20:29AM +0800, ??? wrote:
Hi all:
I have a 30G rbd block device as a virtual machine disk, already installed
with Ubuntu 12.04. About 1G of space
hi
i follow the admin api document
http://ceph.com/docs/master/radosgw/adminops/ ,
when i get user info, it returns 405 not allowed
my command is
curl -XGET http://kp/admin/user?format=json -d'{uid:user1}'
-H'Authorization:AWS **:**' -H'Date:**' -i -v
the result is
405
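For reference, a hedged sketch of how this admin op request is usually formed: the adminops API takes its parameters in the query string, not in a request body, so sending `-d` with a GET is one plausible cause of the rejection. The host `kp` and uid `user1` are from the original report; the access key and signature are placeholders.

```shell
# Query the radosgw admin ops API for user info.
# Parameters go in the query string; ACCESS_KEY:SIGNATURE are placeholders.
curl -i -v -X GET \
  "http://kp/admin/user?format=json&uid=user1" \
  -H "Authorization: AWS ACCESS_KEY:SIGNATURE" \
  -H "Date: $(date -R)"
```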
Hello,
I'm trying to download objects from one container (which contains 3 million
objects, file sizes between 16K and 1024K) with 10 parallel threads. I'm using
the s3 binary that comes with libs3. I'm monitoring download times; 80% of
response times are lower than 50-80 ms. But sometimes a download hangs, up to
On 09/13/2013 01:02 PM, Mihály Árva-Tóth wrote:
Hello,
How can I decrease the logging level of radosgw? I uploaded 400k
objects and my radosgw log grew to 2 GiB. Current settings:
rgw_enable_usage_log = true
rgw_usage_log_tick_interval = 30
rgw_usage_log_flush_threshold = 1024
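One way to quiet radosgw is to lower its debug levels in ceph.conf; a sketch, assuming a gateway section named `[client.radosgw.gateway]` (adjust to your actual section name):

```ini
[client.radosgw.gateway]
    debug rgw = 0
    debug ms = 0
```

The usage log settings above are separate from debug logging and can stay as they are.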
On Mon, Sep 16, 2013 at 8:30 PM, Gruher, Joseph R
joseph.r.gru...@intel.com wrote:
-Original Message-
From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
Subject: Re: [ceph-users] problem with ceph-deploy hanging
ceph-deploy will run as the user you are currently executing it as. That is why,
On 09/16/2013 11:29 AM, Nico Massenberg wrote:
On 16.09.2013 at 11:25, Wido den Hollander w...@42on.com wrote:
On 09/16/2013 11:18 AM, Nico Massenberg wrote:
Hi there,
I have successfully setup a ceph cluster with a healthy status.
When trying to create a rbd block device image I am stuck
On 17/09/2013 14:48, Alfredo Deza wrote:
On Mon, Sep 16, 2013 at 8:30 PM, Gruher, Joseph R
joseph.r.gru...@intel.com wrote:
[...]
Unfortunately, logging in as my ceph user on the admin system (with a matching user on
the target system) does not affect my result. The ceph-deploy install
Hello all,
I am new to the list.
I have a single machine set up for testing Ceph. It has dual 6-core
procs (12 cores total) for CPU and 128GB of RAM. I also have 3 Intel 520
240GB SSDs and an OSD set up on each disk, with the OSD and journal in
separate partitions formatted with ext4.
My goal
The VM read will hang until a replica gets promoted and the VM resends the
read. In a healthy cluster with default settings this will take about 15
seconds.
-Greg
On Tuesday, September 17, 2013, Gandalf Corvotempesta wrote:
Hi to all.
Let's assume a Ceph cluster used to store VM disk images.
Your 8k-block dd test is not nearly the same as your 8k-block rados bench
or SQL tests. Both rados bench and SQL require the write to be committed to
disk before moving on to the next one; dd is simply writing into the page
cache. So you're not going to get 460 or even 273MB/s with sync 8k
writes
Windows default (NTFS) is a 4k block. Are you changing the allocation unit to
8k as a default for your configuration?
- Original Message -
From: Gregory Farnum g...@inktank.com
To: Jason Villalta ja...@rubixnet.com
Cc: ceph-users@lists.ceph.com
Sent: Tuesday, September 17, 2013
Oh, and you should run some local sync benchmarks against these drives to
figure out what sort of performance they can deliver with two write streams
going on, too. Sometimes the drives don't behave the way one would expect.
-Greg
On Tuesday, September 17, 2013, Gregory Farnum wrote:
Your
2013/9/17 Gregory Farnum g...@inktank.com:
The VM read will hang until a replica gets promoted and the VM resends the
read. In a healthy cluster with default settings this will take about 15
seconds.
Thank you.
You could be suffering from a known, but unfixed issue [1] where spindle
contention from scrub and deep-scrub cause periodic stalls in RBD. You
can try to disable scrub and deep-scrub with:
# ceph osd set noscrub
# ceph osd set nodeep-scrub
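If the stalls do stop, the flags can be cleared again once a quieter scrub schedule is in place; the standard CLI counterpart, sketched here:

```shell
# Re-enable scrubbing after testing
ceph osd unset noscrub
ceph osd unset nodeep-scrub
```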
If your problem stops, Issue #6278 is likely the
Thanks for your feedback, it is helpful.
I may have been wrong about the default Windows block size. What would be
the best tests to compare native performance of the SSD disks at 4K blocks
vs Ceph performance with 4K blocks? It just seems there is a huge
difference in the results.
On Tue, Sep
Ahh thanks I will try the test again with that flag and post the results.
On Sep 17, 2013 11:38 AM, Campbell, Bill bcampb...@axcess-financial.com
wrote:
As Gregory mentioned, your 'dd' test looks to be reading from the cache
(you are writing 8GB in, and then reading that 8GB out, so the reads
Well, that all looks good to me. I'd just keep writing and see if the
distribution evens out some.
You could also double or triple the number of PGs you're using in that
pool; it's not atrocious but it's a little low for 9 OSDs.
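Raising the PG count is done per pool; a sketch, where the pool name "rbd" and target of 256 are placeholders (note pg_num can only be increased, never decreased):

```shell
# Increase placement groups for a hypothetical pool "rbd",
# then raise pgp_num to match so data actually rebalances.
ceph osd pool set rbd pg_num 256
ceph osd pool set rbd pgp_num 256
```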
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Tue, Sep 17, 2013 at 1:29 AM, Alexis GÜNST HORN
alexis.gunsth...@outscale.com wrote:
Hello to all,
I've a big issue with Ceph RadosGW.
I did a PoC some days ago with radosgw. It worked well.
Ceph version 0.67.3 under CentOS 6.4
Now, I'm installing a new cluster but I can't succeed. I
Hi!
I have a remote server with a single disk where Ubuntu is installed. I can't
create another partition on that disk for an OSD because it is mounted. Is there
another way to install an OSD? Maybe in a folder?
And another question... Could I configure Ceph to make a particular replica in
a particular
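For what it's worth, ceph-deploy of this era accepts a directory path in place of a disk, so an OSD can live in a folder on the existing filesystem; a sketch with a placeholder hostname and path:

```shell
# Prepare and activate an OSD backed by a directory instead of a raw disk.
# "node1" and the path are placeholders; the directory must already exist.
ceph-deploy osd prepare node1:/var/local/osd0
ceph-deploy osd activate node1:/var/local/osd0
```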
I see that you added your public and cluster networks under an [osd]
section. All daemons use the public network, and OSDs use the cluster
network. Consider moving those settings to [global].
http://ceph.com/docs/master/rados/configuration/network-config-ref/#ceph-networks
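A sketch of the suggested layout (the subnets are placeholders):

```ini
[global]
    public network  = 192.168.1.0/24
    cluster network = 10.0.0.0/24
```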
Also, I do believe I
If you use OpenStack, you should fill out the user survey:
https://www.openstack.org/user-survey/Login
In particular, it helps us to know how openstack users consume their
storage, and it helps the larger community to know what kind of storage
systems are being deployed.
sage
As Gregory mentioned, your 'dd' test looks to be reading from the cache (you are writing 8GB in, and then reading that 8GB out, so the reads are all cached reads) so the performance is going to seem good. You can add the 'oflag=direct' to your dd test to try and get a more accurate reading from
-Original Message-
From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-
boun...@lists.ceph.com] On Behalf Of Gilles Mocellin
So you can add something like this in all ceph nodes' /etc/sudoers (use
visudo):
Defaults env_keep += "http_proxy https_proxy ftp_proxy no_proxy"
Hope it
I will try both suggestions, Thank you for your input.
On Tue, Sep 17, 2013 at 5:06 PM, Josh Durgin josh.dur...@inktank.comwrote:
Also enabling rbd writeback caching will allow requests to be merged,
which will help a lot for small sequential I/O.
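Enabling the RBD writeback cache is a client-side setting; a minimal ceph.conf sketch:

```ini
[client]
    rbd cache = true
```

With caching on, small sequential writes can be coalesced before they hit the OSDs.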
On 09/17/2013 02:03 PM, Gregory Farnum
Try it with oflag=dsync instead? I'm curious what kind of variation
these disks will provide.
Anyway, you're not going to get the same kind of performance with
RADOS on 8k sync IO that you will with a local FS. It needs to
traverse the network and go through work queues in the daemon; your
I have examined the logs.
Yes, the first time it could have been scrubbing. It repaired some of it itself.
I had 2 servers before the first problem: one dedicated to an osd (osd.0), and a second
with an osd and websites (osd.1).
After the problem I added a third server dedicated to an osd (osd.2) and ran
ceph osd out osd.1 for
Also enabling rbd writeback caching will allow requests to be merged,
which will help a lot for small sequential I/O.
On 09/17/2013 02:03 PM, Gregory Farnum wrote:
Try it with oflag=dsync instead? I'm curious what kind of variation
these disks will provide.
Anyway, you're not going to get the
Here are the stats with direct io.
dd of=ddbenchfile if=/dev/zero bs=8K count=1000000 oflag=direct
8192000000 bytes (8.2 GB) copied, 68.4789 s, 120 MB/s
dd if=ddbenchfile of=/dev/null bs=8K
8192000000 bytes (8.2 GB) copied, 19.7318 s, 415 MB/s
These numbers are still overall much faster than
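As a follow-up, the oflag=dsync variant suggested earlier in the thread would look like this; a small sketch with a reduced count so it finishes quickly (file name and sizes are illustrative):

```shell
# Write 8 KiB blocks with each write synced to disk (O_DSYNC),
# which is much closer to what rados bench and SQL workloads do
# than a plain page-cache dd.
dd if=/dev/zero of=ddsyncfile bs=8k count=1000 oflag=dsync
```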
So what I am gleaning from this is that it is better to have more than 3 OSDs, since
the OSD seems to add additional processing overhead when using small blocks.
I will try to do some more testing by using the same three disks but with 6
or more OSDs.
If the OSD is limited by processing, is it safe to
Hi,
I am running Ceph on a 3 node cluster and each of my server nodes is running 10
OSDs, one for each disk. I have one admin node and all the nodes are connected
with 2 x 10G networks. One network is for the cluster and the other is configured as
the public network.
Here is the status of my cluster.