On 16/05/15 19:53, Mark Kirkwood wrote:
Hi,
While a standard build (configure, make etc) on $platform works fine,
attempting to build packages gets:
$ dpkg-buildpackage -j4
dpkg-buildpackage: source package ceph
dpkg-buildpackage: source version 9.0.0-1
dpkg-buildpackage: source
On 13/04/15 15:33, Mark Kirkwood wrote:
Hi,
I've been experimenting with the new rgw creation in ceph-deploy, using
version 1.5.23 together with ceph 0.94 (-948-gd77de49).
If I simply run it without any args, then it works fine, e.g.:
$ ceph-deploy rgw create ceph1
However if I try to set
On 17/04/15 12:27, Gregory Farnum wrote:
On Sat, Apr 11, 2015 at 8:42 PM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote:
Hi,
Building without --enable-debug produces:
ceph_fuse.cc: In member function ‘virtual void* main(int, const char**,
const char**)::RemountTest::entry()’:
ceph_fuse.cc
Hi,
Building without --enable-debug produces:
ceph_fuse.cc: In member function ‘virtual void* main(int, const char**,
const char**)::RemountTest::entry()’:
ceph_fuse.cc:146:15: warning: ignoring return value of ‘int system(const
char*)’, declared with attribute warn_unused_result
On 24/03/15 19:53, Ning Yao wrote:
2015-03-20 10:22 GMT+08:00 Shu, Xinxin xinxin@intel.com:
I think rocksdb can support this configuration.
I do not find this option in rocksdb. If you know, can you provide
this option to redirect the WAL file?
I think you want to set:
rocksdb wal
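For reference, RocksDB itself exposes a wal_dir option that redirects its write-ahead-log files to a separate directory; a minimal sketch of an options fragment (the path below is just an example):

```
# RocksDB options fragment (sketch); wal_dir is RocksDB's own option
# for relocating WAL files, e.g. onto a faster device
wal_dir=/mnt/fast-ssd/rocksdb-wal
```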
On 26/02/15 11:50, Tom Deneau wrote:
Sage Weil sweil at redhat.com writes:
On Wed, 25 Feb 2015, Robert LeBlanc wrote:
We tried to get radosgw working with Apache + mod_fastcgi, but due to
the changes in radosgw, Apache, mod_*cgi, etc and the documentation
lagging and not having a lot of
On 07/12/14 07:39, Sage Weil wrote:
Thoughts? Suggestions?
Would it make sense to include radosgw-agent package in this
normalization too?
Regards
Mark
--
To unsubscribe from this list: send the line unsubscribe ceph-devel in
the body of a message to majord...@vger.kernel.org
More
On 04/11/14 22:02, Sage Weil wrote:
On Tue, 4 Nov 2014, Blair Bethwaite wrote:
On 4 November 2014 01:50, Sage Weil s...@newdream.net wrote:
In the Ceph session at the OpenStack summit someone asked what the CephFS
survey results looked like.
Thanks Sage, that was me!
Here's the link:
On 30/10/14 03:31, Jens Axboe wrote:
On 2014-10-29 01:15, Ketor D wrote:
Hi, Jens,
There is a cmdline parse bug in the fio rbd test.
I have fixed it and created a pull request on GitHub.
Please review.
After fixing the bugs, the fio test can run.
I merged your two pull requests (thanks!)
On 28/10/14 05:20, Ketor D wrote:
V8 patch runs good.
The IOPS is 33032. If I just comment out the usleep(100) in master, I
can get 35245 IOPS.
The CPU usage for the two tests is the same, about 120%.
So maybe this patch could be better!
Yeah, v8 is working for me.
I'm seeing it a bit slower for some
On 28/10/14 11:32, Jens Axboe wrote:
On 10/27/2014 03:59 PM, Mark Kirkwood wrote:
On 28/10/14 05:20, Ketor D wrote:
V8 patch runs good.
The IOPS is 33032. If I just comment out the usleep(100) in master, I
can get 35245 IOPS.
The CPU usage for the two tests is the same, about 120%.
So maybe this patch
On 28/10/14 16:23, Ketor D wrote:
Hi Mark,
Wish you could test my patch. I get the best performance using it.
It is not clear cut for me (tested reads only):
block size (k) | v8 patch IOPS | Ketor patch IOPS | orig IOPS
On 25/10/14 16:47, Jens Axboe wrote:
Since you're running rbd tests... Mind giving this patch a go? I don't
have an easy way to test it myself. It has nothing to do with this
issue, it's just a potentially faster way to do the rbd completions.
Sure - but note I'm testing this on my i7
On 26/10/14 08:20, Jens Axboe wrote:
On 10/24/2014 10:50 PM, Mark Kirkwood wrote:
On 25/10/14 16:47, Jens Axboe wrote:
Since you're running rbd tests... Mind giving this patch a go? I don't
have an easy way to test it myself. It has nothing to do with this
issue, it's just a potentially
the commit after
0.86 that is causing this.
Mark
On 10/24/2014 08:19 AM, Mark Nelson wrote:
FWIW we are seeing this at Redhat/Inktank with recent fio from master
and ceph giant branch as well.
Mark
On 10/24/2014 01:17 AM, Mark Kirkwood wrote:
On 24/10/14 18:35, Jens Axboe wrote:
CC'ing relevant
Righty, building now.
On 25/10/14 13:12, Mark Nelson wrote:
Hi Mark,
Try the latest giant branch. I believe we've fixed this with 7272bb8.
My test cluster is passing read tests now.
Mark
On 10/24/2014 05:45 PM, Mark Kirkwood wrote:
Interestingly, I first encountered this on (what I think
On 25/10/14 13:37, Mark Kirkwood wrote:
Righty, building now.
On 25/10/14 13:12, Mark Nelson wrote:
Hi Mark,
Try the latest giant branch. I believe we've fixed this with 7272bb8.
My test cluster is passing read tests now.
On 30/09/14 17:05, Sage Weil wrote:
On Tue, 30 Sep 2014, Haomai Wang wrote:
Hi sage,
What do you think about using the existing ObjectStore::peek_journal_fsid
interface to detect whether a journal is needed?
KeyValueStore and MemStore could set the passed fsid argument to zero
to indicate no journal.
I'm not
On 25/09/14 01:03, Sage Weil wrote:
On Wed, 24 Sep 2014, Mark Kirkwood wrote:
On 24/09/14 14:29, Aegeaner wrote:
I run ceph on Red Hat Enterprise Linux Server 6.4 Santiago, and when I
run service ceph start I got:
# service ceph start
ERROR:ceph-disk:Failed to activate
ceph-disk
On 22/08/14 12:49, Sage Weil wrote:
On Fri, 22 Aug 2014, Mark Kirkwood wrote:
On 22/08/14 03:23, Sage Weil wrote:
I've pushed the patch to wip-filejournal. Mark, can you test please?
I've tested wip-filejournal and looks good (25 test runs, good journal header
each time).
Thanks! Merged
On 23/08/14 10:22, Somnath Roy wrote:
I think it is using direct io for non-aio mode as well.
Thanks & Regards
Somnath
One thing that does still concern me - if I understand what is happening here
correctly: we write to the journal using aio until we want to stop doing writes
(presumably
Will do.
On 21/08/14 19:30, Ma, Jianpeng wrote:
Mark
After sage merge this into wip-filejournal, can you test again? I think at
present only you can do this work!
On 22/08/14 03:23, Sage Weil wrote:
I've pushed the patch to wip-filejournal. Mark, can you test please?
I've tested wip-filejournal and looks good (25 test runs, good journal
header each time).
Cheers
Mark
Not yet,
If you have to use master either revert commit
4eb18dd487da4cb621dcbecfc475fc0871b356ac or apply the patch for fixing
the hang mentioned here https://github.com/ceph/ceph/pull/2185
Otherwise you could use the wip-filejournal branch which Sage has just
added!
Cheers
Mark
On
Sorry, I see that sage has reverted it.
On 20/08/14 16:58, Mark Kirkwood wrote:
Not yet,
If you have to use master either revert commit
4eb18dd487da4cb621dcbecfc475fc0871b356ac or apply the patch for fixing
the hang mentioned here https://github.com/ceph/ceph/pull/2185
On 31/07/14 17:25, Sage Weil wrote:
After the latest set of bug fixes to the FileStore file naming code I am
newly inspired to replace it with something less complex. Right now I'm
mostly thinking about HDDs, although some of this may map well onto hybrid
SSD/HDD as well. It may or may not
Wow - that is a bit strange:
$ cat /etc/issue
Ubuntu 13.10 \n \l
$ sudo ceph -v
ceph version 0.78-569-g6a4c50d (6a4c50d7f27d2e7632d8c017d09e864e969a05f7)
$ sudo ceph osd erasure-code-profile ls
default
myprofile
profile
profile1
I'd hazard a guess that some of your ceph components are at
I'm not sure if this is relevant, but my 0.78 (and currently building
0.79) are compiled from src git checkout (and packages built from the
same src tree using dpkg-buildpackage Debian/Ubuntu package builder).
Having said that - the above procedure *should* produce equivalent
binaries to the
On 11/12/13 19:09, Sage Weil wrote:
That is one part. The current strategy of layering on top of a file
system and using a write-ahead journal makes sense given the existing
linux fs building blocks, but is far from an optimal solution for many
workloads. A k/v interface based on something
I just updated master (a5eda4fcc34461dbc0fcc47448f8456097de15eb), and am
seeing OSDs failing to start:
2013-12-03 15:37:01.291200 7f488e1157c0 -1 OSD magic != my ceph osd
volume v026
failed: 'ulimit -n 32768; /usr/bin/ceph-osd -i 0 --pid-file
/var/run/ceph/osd.0.pid -c /etc/ceph/ceph.conf '
On 13/11/13 16:33, Mark Kirkwood wrote:
I believe he is using a self built (or heavily customized) Linux
installation - so distribution detection is not going to work in this
case. I'm wondering if there could be some sensible fall back for that,
e.g:
- refuse to install or purge
- assume sysv
On 18/11/13 19:05, Mark Kirkwood wrote:
Anyway I have attached a log of me getting a system of 2 Archlinux nodes
up. These were KVM guests built identically and ceph (0.72) was compiled
from src and installed.
Blast - forgot to edit the top of the log to reflect the effect of the
osd
On 15/11/13 03:25, Mark Nelson wrote:
On 11/14/2013 06:27 AM, Dave (Bob) wrote:
I would suggest that it is always dangerous to make assumptions.
If ceph-deploy needs some information, then this should be explicit, and
configurable.
If it needs to know whether initialisation is done by systemd,
On 13/11/13 04:53, Alfredo Deza wrote:
On Mon, Nov 11, 2013 at 12:51 PM, Dave (Bob) d...@bob-the-boat.me.uk wrote:
It is unusable for me at present, because it reports:
[ceph_deploy][ERROR ] UnsupportedPlatform: Platform is not supported:
That looks like a bug. For the past few months the
On 13/11/13 16:33, Mark Kirkwood wrote:
On 13/11/13 04:53, Alfredo Deza wrote:
On Mon, Nov 11, 2013 at 12:51 PM, Dave (Bob)
d...@bob-the-boat.me.uk wrote:
It is unusable for me at present, because it reports:
[ceph_deploy][ERROR ] UnsupportedPlatform: Platform is not supported
Hi,
Loic posted a script he uses for testing setups without ceph-deploy:
http://www.spinics.net/lists/ceph-devel/msg16895.html
http://dachary.org/wp-uploads/2013/10/micro-osd.txt
it probably has enough steps in it for you to adapt.
Regards
Mark
P.s: what *is* your platform? It might not be
On 22/10/13 06:17, Gregory Farnum wrote:
On Mon, Oct 21, 2013 at 9:57 AM, Loic Dachary l...@dachary.org wrote:
On 21/10/2013 18:49, Gregory Farnum wrote:
I'm not quite sure what questions you're actually asking here...
I guess I was asking if my understanding was correct.
In general, the
On 05/09/13 17:56, Mark Kirkwood wrote:
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks
ceph2:/dev/vdb:/dev/vdc
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 12.04 precise
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph2
[ceph2][INFO ] write cluster configuration to /etc/ceph
Setup:
hosts: ceph1, ceph2
Command steps:
$ ceph-deploy new ceph1
$ ceph-deploy mon create ceph1
$ ceph-deploy gatherkeys ceph1
$ ceph-deploy disk zap ceph1:/dev/vdb
$ ceph-deploy disk zap ceph1:/dev/vdc
$ ceph-deploy disk zap ceph2:/dev/vdb
$ ceph-deploy disk zap ceph2:/dev/vdc
$ ceph-deploy
One thing that comes to mind is the ability to create (or activate)
osd's with a custom crush specification from (say) a supplied file.
Regards
Mark
On 03/08/13 06:02, Sage Weil wrote:
There is a session at CDS scheduled to discuss ceph-deploy (4:40pm PDT on
Monday). We'll be going over
It seems this has been noted previously (just...):
http://tracker.ceph.com/issues/5492
Blast, I was just a bit slow :-), I should have posted when I first
noticed this a week or so ago!
Regards
Mark
On 07/07/13 17:15, Mark Kirkwood wrote:
I noticed when building with prefix=/usr/local
I noticed when building with prefix=/usr/local that the install step
produced a usr/local/sbin hierarchy *under* /usr/local (i.e.
/usr/local/usr/local/sbin) with ceph_disk and friends (i.e.
ceph_sbin_SCRIPTS) therein. I am guessing that these should actually be
installed in /usr/local/sbin (i.e
I have a 4 osd system (4 hosts, 1 osd per host), in two (imagined) racks
(osd 0 and 1 in rack 0, osd 2 and 3 in rack1). All pools have number of
replicas = 2. I have a crush rule that puts one pg copy on each rack
(see notes) - but is essentially:
step take root
step
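For context, a complete CRUSH rule that places one replica in each rack typically looks like the following (a sketch; the rule name and ruleset/size numbers are assumptions, the step sequence is standard CRUSH syntax):

```
rule replicated_racks {
    ruleset 1
    type replicated
    min_size 2
    max_size 2
    step take root
    step chooseleaf firstn 0 type rack
    step emit
}
```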
] pgmap v465: 1160 pgs: 1160
active+degraded; 2000 MB data, 12993 MB used, 6133 MB / 20150 MB avail;
100/200 degraded (50.000%)
So looks like Cuttlefish is behaving as expected. Is this due to tweaks
in the 'choose' algorithm in the later code?
Cheers
Mark
On 05/07/13 16:32, Mark Kirkwood wrote
I'd hazard a guess that you are still (accidentally) running the
packaged binary - the packaged version installs in /usr/bin (etc) but
your source build will probably be in /usr/local/bin. I've been through
this myself and purged the packaged version before building and
installing from source
On 19/12/12 14:44, Drunkard Zhang wrote:
2012/12/16 Drunkard Zhang gongfan...@gmail.com:
I couldn't rm files in ceph, which were backed-up files of one osd. It
reports 'directory not empty', but there's nothing under that directory;
the directory itself just held some space. How could I shoot down
On 19/12/12 15:56, Drunkard Zhang wrote:
2012/12/19 Mark Kirkwood mark.kirkw...@catalyst.net.nz:
On 19/12/12 14:44, Drunkard Zhang wrote:
2012/12/16 Drunkard Zhang gongfan...@gmail.com:
I couldn't rm files in ceph, which were backed-up files of one osd. It
reports 'directory not empty'
On 08/11/12 21:08, Wido den Hollander wrote:
On 08-11-12 08:29, Travis Rhoden wrote:
Hey folks,
I'm trying to set up a brand new Ceph cluster, based on v0.53. My
hardware has SSDs for journals, and I'm trying to get mkcephfs to
initialize everything for me. However, the command hangs forever
On 25/10/12 17:55, Mark Nelson wrote:
On Wed, Oct 24, 2012 at 10:58 PM, Dan Mick dan.m...@inktank.com wrote:
HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean; recovery 21/42
The other alternative is to just set the pool(s) replication size to 1,
if you are just wanting a single osd for
On 25/10/12 04:40, Sage Weil wrote:
[moved to ceph-devel]
On Wed, 24 Oct 2012, Roman Alekseev wrote:
Hi there,
I've made simple fresh installation of ceph on Debian server with the
following configuration:
[global]
debug ms = 0
[osd]
osd journal size = 1000
Bryan -
Note that the default block size for the rados bench is 4MB...and
performance decreases quite dramatically with smaller block sizes (-b
option to rados bench).
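For example, a small-block run would look something like this (the pool name and thread count are assumptions; -b sets the object size in bytes):

```shell
# 60-second write benchmark with 4 KB objects and 16 concurrent ops;
# 'testpool' is just an example pool name
rados bench -p testpool 60 write -b 4096 -t 16
```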
On 27/09/12 08:54, Bryan K. Wright wrote:
The rados benchmark was run on one of the OSD
machines. Read and write
Sorry Bryan - I should have read further down the thread and noted that
you have this figured out... nothing to see here!
On 28/09/12 11:40, Mark Kirkwood wrote:
Bryan -
Note that the default block size for the rados bench is 4MB...and
performance decreases quite dramatically with smaller
On 31/08/12 20:11, Dietmar Maurer wrote:
RBD waits for the data to be on disk on all replicas. It's pretty easy
to relax this to in memory on all replicas, but there's no option for
that right now.
I thought that is dangerous, because you can lose data?
Sorry Dieter,
Not trying to say you are wrong or anything like that - just trying to
add to the problem-solving body of knowledge that from what *I* have
tried out, the 'sync' issue does not look to be the bad guy here - although
more analysis is always welcome (usual story - my findings should
+1 to that. I've been seeing 4-6 MB/s for 4K writes for 1 OSD with 1 SSD
for journal and another for data [1]. Interestingly I did see some nice
scaling with 4K random reads: 2-4 MB/s per thread for up to 8 threads
(looked like it plateaued thereafter).
Cheers
Mark
[1] FYI not on the box I
On 22/08/12 22:24, David McBride wrote:
On 22/08/12 09:54, Denis Fondras wrote:
* Test with dd from the client using CephFS :
# dd if=/dev/zero of=testdd bs=4k count=4M
17179869184 bytes (17 GB) written, 338,29 s, 50,8 MB/s
Again, the synchronous nature of 'dd' is probably severely affecting
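One way to see how much flushing dominates such a dd run is to compare a plain buffered pass against one that forces the data to disk before dd exits (the file path and counts are just examples):

```shell
# Buffered: throughput is reported before data actually reaches disk
dd if=/dev/zero of=/tmp/testdd bs=4k count=1000
# Flushed: conv=fdatasync makes dd sync the file before reporting,
# so the figure reflects real storage speed
dd if=/dev/zero of=/tmp/testdd bs=4k count=1000 conv=fdatasync
```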
On 17/08/12 04:13, Tommi Virtanen wrote:
On Wed, Aug 15, 2012 at 9:44 PM, hemant surale hemant.sur...@gmail.com wrote:
Hello Tommi, Ceph community
I did mkdir the directory. Infact I have created a new partition by
the same name and formatted using ext3. I also executed the following
command
On 10/08/12 11:31, Mark Kirkwood wrote:
There could well be an additional factor connected with xfs and lots
of files on these Intel 520s - I have just had a conversation with a
workmate who switched xfs to ext4 due to this. I will see if ext4 or
btrfs (scary) do any better on these drives
On 09/08/12 09:58, Mark Nelson wrote:
For what it's worth, with mostly default settings I was seeing about
8MB/s to dell branded samsung SSDs with 4k IOs using rados bench.
That was with 256 concurrent client requests. This is definitely
something we are working hard on tracking down.
On 09/08/12 11:36, Mark Kirkwood wrote:
On 09/08/12 09:58, Mark Nelson wrote:
For what it's worth, with mostly default settings I was seeing about
8MB/s to dell branded samsung SSDs with 4k IOs using rados bench.
That was with 256 concurrent client requests. This is definitely
something
On 09/08/12 12:43, Mark Kirkwood wrote:
I tried out a raft of xfs config changes and also made the Ceph
journal really big (10G):
$ mkfs.xfs -f -l internal,size=1024m -d agcount=4 /dev/sd[b,c]2
+ mount options with nobarrier,logbufs=8
The results improved a little, but still very slow
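Spelled out, the mount invocation implied above would be something like this (the device and OSD mount point are assumptions):

```shell
# XFS with write barriers disabled and 8 in-memory log buffers;
# nobarrier trades safety for speed, only sane on battery-backed storage
mount -o nobarrier,logbufs=8 /dev/sdb2 /var/lib/ceph/osd/ceph-0
```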
I've been looking at using Ceph RBD as a block store for database use.
As part of this I'm looking a how (particularly random) IO of smallish
(4K, 8K) block sizes performs.
I've setup Ceph with a single osd and mon spread over two SSD (Intel
520) - 2G journal on one and the osd data on the
I am seeing this:
# ceph -s
health HEALTH_WARN 256 pgs stale; 256 pgs stuck stale
monmap e1: 3 mons at
{ved1=192.168.122.11:6789/0,ved2=192.168.122.12:6789/0,ved3=192.168.122.13:6789/0},
election epoch 18, quorum 0,1,2 ved1,ved2,ved3
osdmap e62: 4 osds: 4 up, 4 in
pgmap v47148:
On 11/07/12 13:22, Josh Durgin wrote:
On 07/10/2012 06:11 PM, Mark Kirkwood wrote:
I am seeing this:
# ceph -s
health HEALTH_WARN 256 pgs stale; 256 pgs stuck stale
monmap e1: 3 mons at
{ved1=192.168.122.11:6789/0,ved2=192.168.122.12:6789/0,ved3=192.168.122.13:6789/0},
election epoch 18
On 11/07/12 13:32, Mark Kirkwood wrote:
I have attached the dump of stuck stale pgs, and the crushmap in use.
...of course I left off the crushmap, so here it is, plus my ceph.conf
for good measure.
Mark
# begin crush map
# devices
device 0 osd0
device 1 osd1
device 2 osd2
device 3 osd3
On 11/07/12 13:55, Josh Durgin wrote:
On 07/10/2012 06:32 PM, Mark Kirkwood wrote:
On 11/07/12 13:22, Josh Durgin wrote:
On 07/10/2012 06:11 PM, Mark Kirkwood wrote:
I am seeing this:
# ceph -s
health HEALTH_WARN 256 pgs stale; 256 pgs stuck stale
monmap e1: 3 mons at
{ved1=192.168.122.11
On 11/07/12 14:09, Mark Kirkwood wrote:
On 11/07/12 13:55, Josh Durgin wrote:
On 07/10/2012 06:32 PM, Mark Kirkwood wrote:
On 11/07/12 13:22, Josh Durgin wrote:
On 07/10/2012 06:11 PM, Mark Kirkwood wrote:
I am seeing this:
# ceph -s
health HEALTH_WARN 256 pgs stale; 256 pgs stuck stale
On 06/07/12 16:17, Sage Weil wrote:
On Fri, 6 Jul 2012, Mark Kirkwood wrote:
FYI: I ran into this too - you need to do:
apt-get dist-upgrade
for the 0.47-2 packages to be replaced by 0.48 (of course purging 'em and
reinstalling works too...just a bit more drastic)!
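The reason dist-upgrade is needed here, presumably, is that the 0.48 packages changed their dependencies; plain upgrade never installs new packages or removes existing ones, while dist-upgrade may do both to complete the upgrade:

```shell
sudo apt-get update
# 'upgrade' holds back packages whose dependency set changed;
# 'dist-upgrade' may add or remove packages to resolve it
sudo apt-get dist-upgrade
```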
That's strange... anyone
On 05/07/12 15:57, Sage Weil wrote:
On Thu, 5 Jul 2012, Mark Kirkwood wrote:
2/ Also I would like to be able to say make my number of copies 3, but if I
lose datacenter0 (where 2 copies are), don't try to have 3 copies at
datacenter1 (so run degraded in that case). Is that possible