Hello,
I installed Ceph Firefly (0.80) on my system last year.
I run a 3.14.43 kernel (recently upgraded from 3.14.4). Ceph seems to be
working well in most cases, though I haven't used it in real production mode as
of now.
The only thing I noticed recently was some Input/Output
Not having OSDs and KVMs compete against each other is one thing.
But there are more reasons to do this
1) not moving the processes and threads between cores that much (better cache
utilization)
2) aligning the processes with memory on NUMA systems (that means all modern
dual socket systems) -
On Tue, Jun 30, 2015 at 4:25 PM, Jan Schermer j...@schermer.cz wrote:
Not having OSDs and KVMs compete against each other is one thing.
But there are more reasons to do this
1) not moving the processes and threads between cores that much (better
cache utilization)
2) aligning the processes
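As a rough illustration of what such pinning boils down to on the shell (a sketch only, with a hypothetical core range 0-5 for NUMA node 0 and $OSD_PID standing in for an OSD's process id):
# Pin an already-running OSD to the cores of one NUMA node.
taskset -cp 0-5 $OSD_PID
# Or start it bound to node 0 for both CPU and memory allocation.
numactl --cpunodebind=0 --membind=0 /usr/bin/ceph-osd -i 0 -f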
On Tue, Jun 30, 2015 at 8:30 AM, Z Zhang zhangz.da...@outlook.com wrote:
Hi Ilya,
Thanks for your explanation. This makes sense. Will you make max_segments
configurable? Could you please point me to the fix you have made? We might help
to test it.
[PATCH] rbd: bump queue_max_segments on
On Tue, Jun 30, 2015 at 6:57 AM, Yan, Zheng z...@redhat.com wrote:
I tried the 4.1 kernel and 0.94.2 ceph-fuse. Their performance is about the same.
fuse:
Files=191, Tests=1964, 60 wallclock secs ( 0.43 usr 0.08 sys + 1.16 cusr
0.65 csys = 2.32 CPU)
kernel:
Files=191, Tests=2286, 61
I'm trying to add an extra monitor with ceph-deploy.
The current/first monitor was installed by hand.
When I do
ceph-deploy mon add HOST
the new monitor seems to assimilate the old monitor,
so the old/first monitor ends up in the same state as the new monitor
and is not aware of anything.
I needed
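For reference, the sequence I would expect to work is roughly the following (a sketch; host names are placeholders and it assumes the hand-installed monitor is already listed in ceph.conf):
# ceph.conf pushed by ceph-deploy should already know the existing monitor:
#   mon_initial_members = mon1
#   mon_host = <existing mon IP>
ceph-deploy --overwrite-conf config push newmon
ceph-deploy mon add newmon
ceph -s   # the new monitor should join the existing quorum, not replace it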
We are using Ceph (Hammer) on Centos7 and RHEL7.1 successfully.
One secret is to ensure that the disk is cleaned prior to running the ceph-disk
command. Because GPT tables are used, one must use the 'sgdisk -Z' command
to purge the disk of all partition tables. We usually issue this command
in the RedHat
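For example (a sketch, with /dev/sdX standing in for the target disk):
# Wipe all GPT and MBR partition data so ceph-disk starts from a clean device.
sgdisk -Z /dev/sdX
# Then let ceph-disk partition and prepare it (Hammer-era tooling).
ceph-disk prepare /dev/sdX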
Jan,
Thanks a lot. I will contribute to this project if I can.
Best Regards
-- Ray
On Tue, Jun 30, 2015 at 11:50 PM, Jan Schermer j...@schermer.cz wrote:
Hi all,
our script is available on GitHub
https://github.com/prozeta/pincpus
I haven’t had much time to do a proper README, but
On Wed, Jul 1, 2015 at 4:50 AM, Steffen Tilsch steffen.til...@gmail.com wrote:
Hello Cephers,
I have some questions regarding where which type of IO is generated.
As far as I understand, it looks like this (please see the picture:
http://imageshack.com/a/img673/4563/zctaGA.jpg ) :
1. Clients -
On Jul 1, 2015, at 00:34, Dan van der Ster d...@vanderster.com wrote:
On Tue, Jun 30, 2015 at 11:37 AM, Yan, Zheng z...@redhat.com wrote:
On Jun 30, 2015, at 15:37, Ilya Dryomov idryo...@gmail.com wrote:
On Tue, Jun 30, 2015 at 6:57 AM, Yan, Zheng z...@redhat.com wrote:
I tried 4.1
Hi
For seq reads, here are the latencies:
lat (usec) : 2=0.01%, 10=0.01%, 20=0.01%, 50=0.02%, 100=0.03%
lat (usec) : 250=1.02%, 500=87.09%, 750=7.47%, 1000=1.50%
lat (msec) : 2=0.76%, 4=1.72%, 10=0.19%, 20=0.19%
Random reads:
lat (usec) : 10=0.01%
lat (msec) : 2=0.01%, 4=0.01%,
On Tue, Jun 30, 2015 at 9:07 PM, Michał Chybowski
michal.chybow...@tiktalik.com wrote:
Hi,
Lately I've been working on XEN RBD SM and I'm using RBD's built-in snapshot
functionality.
My system looks like this:
base image - snapshot - snapshot is used to create XEN VM's volumes -
volume
Hi!
We are seeing a strange - and problematic - behavior in our 0.94.1
cluster on Ubuntu 14.04.1. We have 5 nodes, 4 OSDs each.
When rebooting one of the nodes (e.g. for a kernel upgrade) the OSDs
do not seem to shut down correctly. Clients hang and ceph osd tree shows
the OSDs of that node
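(Not from the original post, just common practice for planned reboots: tell the cluster not to mark that node's OSDs out while it is down.)
ceph osd set noout     # before the reboot, so stopped OSDs are not rebalanced away
# ... reboot the node ...
ceph osd unset noout   # after the OSDs are back up and in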
Hi
Our Ceph cluster runs on the following hardware:
3 nodes with 36 OSDs; 18 SSDs, one SSD for two OSDs; each node has 64 GB memory
and 2x 6-core CPUs
4 monitors running on other servers
40 Gbit InfiniBand with IPoIB
Here are my CephFS fio test results using the following file, changing the rw
parameter
Hey cephers,
Just a friendly reminder that our Ceph Developer Summit for Jewel
planning is set to run tomorrow and Thursday. The schedule and dial in
information is available on the new wiki:
http://tracker.ceph.com/projects/ceph/wiki/CDS_Jewel
Please let me know if you have any questions.
Hi all,
our script is available on GitHub
https://github.com/prozeta/pincpus
I haven’t had much time to do a proper README, but I hope the configuration is
self explanatory enough for now.
What it does is pin each OSD into the most “empty” cgroup assigned to a
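To give a rough idea of what such a cpuset assignment boils down to (a sketch using the raw cgroup filesystem with example core/node numbers, not the actual pincpus code):
# Create a cpuset cgroup for one OSD and restrict it to cores 0-5 on NUMA node 0.
mkdir -p /sys/fs/cgroup/cpuset/osd.0
echo 0-5 > /sys/fs/cgroup/cpuset/osd.0/cpuset.cpus
echo 0   > /sys/fs/cgroup/cpuset/osd.0/cpuset.mems
# Move the OSD process into the cgroup.
echo $(pidof -s ceph-osd) > /sys/fs/cgroup/cpuset/osd.0/tasks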
Hi
I have been trying to figure out why our 4k random reads in VMs are so bad.
I am using fio to test this.
Write : 170k iops
Random write : 109k iops
Read : 64k iops
Random read : 1k iops
Our setup is:
3 nodes with 36 OSDs; 18 SSDs, one SSD for two OSDs; each node has 64 GB memory
Hi Tuomos,
Can you paste the command you ran to do the test?
Thanks,
Mark
On 06/30/2015 12:18 PM, Tuomas Juntunen wrote:
Hi
It's probably not hitting the disks, but that really doesn't matter. The
point is we have very responsive VMs while writing, and that is what the
users will see.
The
I have already set readahead on the OSDs before. It is now 2048; this didn't
affect the random reads, but gave a lot more sequential performance.
Br, T
From: Somnath Roy [mailto:somnath@sandisk.com]
Sent: 30. kesäkuuta 2015 21:00
To: Tuomas Juntunen; 'Stephen Mercier'
Cc: 'ceph-users'
Break it down; try fio-rbd to see what performance you are getting.
But I am really surprised you are getting 100k IOPS for writes. Did you check
whether it is hitting the disks?
Thanks Regards
Somnath
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Tuomas
Juntunen
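For the fio-rbd suggestion above, a minimal invocation could look like this (a sketch; the pool and image names are placeholders):
fio --name=rbd-randread --ioengine=rbd --clientname=admin --pool=rbd \
    --rbdname=testimage --rw=randread --bs=4k --iodepth=64 --direct=1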
read_ahead_kb should help you in case of a sequential workload, but if you are
saying it is helping your workload in the random case as well, try setting it
both on the VM side and on the OSD side and see if it makes any difference.
Thanks Regards
Somnath
From: Tuomas Juntunen
Hi
It's probably not hitting the disks, but that really doesn't matter. The
point is we have very responsive VMs while writing, and that is what the
users will see.
The IOPS we get with sequential reads are good, but random reads are way too
low.
Is using SSDs as OSDs the only way to
Just an update: there seems to be no proper way to pass the iothread
parameter from openstack-nova (at least not in the Juno release), so a
default single iothread per VM is all we have. In conclusion, a nova
instance's max IOPS on Ceph RBD will be limited to 30-40K.
On Tue, Jun 16, 2015 at 10:08
Sage
We are still running nightlies on next and other branches.
Just wanted to reaffirm that it is not yet time to start scheduling suites on
infernalis?
Thx
YuriW
- Original Message -
From: Sage Weil sw...@redhat.com
To: ceph-annou...@ceph.com, ceph-de...@vger.kernel.org,
hth,
Any idea what caused the pause? I am curious to know more details.
Thanks.
-Simon
On Friday, April 10, 2015, 10 minus t10te...@gmail.com wrote:
Hi,
The question is what you want to use it for. As an OSD it won't cut it.
Maybe as an iSCSI target, and YMMV.
I played around with an OEM
Hi ceph experts,
I did some tests on my Ceph cluster recently with the following steps:
1. At the beginning, all PGs are active+clean.
2. Stop an OSD. I observed that a lot of PGs become degraded.
3. ceph osd out.
4. Then I observed Ceph doing the recovery process.
My question is: I expected that by the end, all
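For reference, the same test expressed as commands (a sketch; osd.3 is a placeholder id and the service command depends on your init system):
ceph -s                   # 1. verify all PGs are active+clean
service ceph stop osd.3   # 2. stop one OSD; PGs go degraded
ceph osd out 3            # 3. mark it out; recovery starts
ceph -s                   # 4. watch until PGs settle again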
I am also having the same issue; can somebody help me out? But for me it is
HTTP/1.1 404 Not Found.
I don’t run Ceph on btrfs, but isn’t this related to the btrfs snapshotting
feature ceph uses to ensure a consistent journal?
Jan
On 19 Jun 2015, at 14:26, Lionel Bouton lionel+c...@bouton.name wrote:
On 06/19/15 13:42, Burkhard Linke wrote:
Forget the reply to the list...
Hi,
Lately I've been working on XEN RBD SM and I'm using RBD's built-in
snapshot functionality.
My system looks like this:
base image - snapshot - snapshot is used to create XEN VM's volumes -
volume snapshots (via rbd snap..) - other VMs - etc.
I'd like to be able to delete one of the
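In case it helps, the usual way to make a snapshot in such a chain removable is to flatten its children first (a sketch with placeholder pool/image names, not from the original thread):
# Detach the child volume from its parent snapshot by copying the data down.
rbd flatten pool/child-volume
# Once no children depend on it, the snapshot can be unprotected and removed.
rbd snap unprotect pool/base-image@snap1
rbd snap rm pool/base-image@snap1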
I created a file which has the following parameters
[random-read]
rw=randread
size=128m
directory=/root/asd
ioengine=libaio
bs=4k
#numjobs=8
iodepth=64
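I just run it with (assuming the job file above is saved as random-read.fio):
fio random-read.fio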
Br,T
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Mark Nelson
Sent: 30. kesäkuuta 2015
Hi all,
I am a new user who wants to deploy a simple Ceph cluster.
I started to create a Ceph monitor node via ceph-deploy and got this error:
[ceph_deploy][ERROR ] RuntimeError: remote connection got closed,
ensure ``requiretty`` is disabled for node1
I commented out requiretty and I have a password-less
Hi! Is anyone able to provide some tips on a performance issue on a newly
installed all-flash Ceph cluster? When we do a write test we get 900 MB/s
write, but read tests are only 200 MB/s. All servers are on 10 Gbit
connections.
[global]
fsid = 453d2db9-c764-4921-8f3c-ee0f75412e19
mon_initial_members =
I use sudo visudo and then add a line under
Defaults requiretty
--
Defaults:user !requiretty
Where user is the username.
Hope this helps?
Alan
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Vida
Ahmadi
Sent: Monday, June 22, 2015 6:31 AM
To:
Seems reasonable. What does the latency distribution look like in your fio
output file? It would be useful to know whether it's universally slow or whether
some ops are taking much longer to complete than others.
Mark
On 06/30/2015 01:27 PM, Tuomas Juntunen wrote:
I created a file which has the following
I currently have about 250 VMs, ranging from 16GB to 2TB in size. What I found,
after about a week of testing, sniffing, and observing, is that the larger read
ahead buffer causes the VM to chunk reads over to ceph, and in doing so, allows
it to better align with the 4MB block size that Ceph
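For anyone wanting to reproduce this, the guest-side knob is just the block device's readahead (a sketch, assuming the virtio disk shows up as vda):
# Check and raise the readahead inside the VM.
cat /sys/block/vda/queue/read_ahead_kb
echo 4096 > /sys/block/vda/queue/read_ahead_kb
# Equivalent via blockdev (value is in 512-byte sectors, so 8192 = 4 MB).
blockdev --setra 8192 /dev/vda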
Sounds great; if there is any update, please let me know.
Best Regards
-- Ray
On Tue, Jun 30, 2015 at 1:46 AM, Jan Schermer j...@schermer.cz wrote:
I promised you all our scripts for automatic cgroup assignment - they are
in production already and I just need to put them on GitHub. Stay tuned.
On Tue, Jun 30, 2015 at 12:24 AM, German Anders gand...@despegar.com wrote:
hi cephers,
I want to know if there's any best practice or procedure for implementing
Ceph with InfiniBand FDR 56 Gb/s for front-end and back-end connectivity. Any
CRUSH tuning parameters, etc.
The Ceph cluster has:
-
Jian,
As we put compute and storage together, we don't want them to interfere with
each other at runtime. Thanks.
Best Regards
-- Ray
On Tue, Jun 30, 2015 at 8:50 AM, Zhang, Jian jian.zh...@intel.com wrote:
Ray,
Just wondering, what's the benefit of binding the ceph-osd to a specific
CPU
On Jun 30, 2015, at 15:37, Ilya Dryomov idryo...@gmail.com wrote:
On Tue, Jun 30, 2015 at 6:57 AM, Yan, Zheng z...@redhat.com wrote:
I tried the 4.1 kernel and 0.94.2 ceph-fuse. Their performance is about the
same.
fuse:
Files=191, Tests=1964, 60 wallclock secs ( 0.43 usr 0.08 sys +
-Original Message-
From: Somnath Roy [mailto:somnath@sandisk.com]
Sent: 29 June 2015 23:29
To: Nick Fisk
Cc: ceph-users@lists.ceph.com
Subject: RE: [ceph-users] Old vs New pool on same OSDs - Performance
Difference
Nick,
I think you are probably hitting the issue of crossing
Answering the question myself, here are the contents of the xattrs for the object
user.cephos.spill_out:
30 00 0.
user.ceph._:
0F 08 05 01 00 00 04 03 41 00 00 00 00 00 00 00A...
0010 20 00 00 00 72 62 2E 30 2E 31 62 61 37
Two popular benchmarks in the HPC space for testing distributed file
systems are IOR and mdtest. Both use MPI to coordinate processes on
different clients. Another option may be to use fio or iozone. Netmist
may also be an option, but I haven't used it myself and I'm not sure
that it's
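A typical invocation looks roughly like this (a sketch; the rank count, sizes, and the CephFS mount point are placeholders):
# IOR: each MPI rank writes/reads its own 1 GiB file in 1 MiB transfers.
mpirun -np 16 ior -F -t 1m -b 1g -o /mnt/cephfs/ior.dat
# mdtest: metadata-heavy test, 1000 items per rank.
mpirun -np 16 mdtest -n 1000 -d /mnt/cephfs/mdtest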