On Fri, Oct 26, 2012 at 7:17 AM, Stephen Perkins perk...@netmass.com wrote:
Most excellent! Many thanks for the clarification. Questions:
Something like RAID-1 would not, RAID-0 might do it. But I would split
the OSDs up over 2 SSDs.
I could take a 256G SSD and then use 50% which gives me
On Thu, Oct 4, 2012 at 8:54 AM, Bryan K. Wright
bk...@ayesha.phys.virginia.edu wrote:
Hi Sage,
s...@inktank.com said:
Can you also include 'ceph osd tree', 'ceph osd dump', and 'ceph pg dump'
output? So we can make sure CRUSH is distributing things well?
Here they are:
# ceph osd tree
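For anyone reproducing this, all three dumps Sage asked for can be captured to files in one go (the file names are just illustrative):
# capture cluster state for the list
ceph osd tree > osd-tree.txt
ceph osd dump > osd-dump.txt
ceph pg dump > pg-dump.txt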
Gregory Farnum wrote:
On Thu, Sep 27, 2012 at 3:23 PM, Jim Schutt jasc...@sandia.gov wrote:
On 09/27/2012 04:07 PM, Gregory Farnum wrote:
Have you tested that this does what you want? If it does, I think
we'll want to implement this so that we actually release the memory,
but continue
On Fri, Oct 26, 2012 at 3:29 PM, Noah Watkins jayh...@cs.ucsc.edu wrote:
Development of the Hadoop shim layer is best done in the Apache Hadoop
Git repository, so we can track up-stream and build for all the
different versions that exist. Is keeping a clone in Inktank Github
something that is
to configure Hadoop to find, but we'd have a full version of
Hadoop in our repo for consistency and what not.
-Joe Buck
On Oct 26, 2012, at 3:42 PM, Gregory Farnum wrote:
On Fri, Oct 26, 2012 at 3:29 PM, Noah Watkins jayh...@cs.ucsc.edu wrote:
Development of the Hadoop shim layer is best done
?
On Wednesday, October 24, 2012 at 11:45 PM, jie sun wrote:
What is the version of the master branch? I use the stable version 0.48.2
Thank you!
-SunJie
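(For comparison, the version string a given build reports is printed by the standard flag:)
ceph -v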
2012/10/24 Gregory Farnum g...@inktank.com:
On Tuesday, October 23, 2012 at 10:48 PM, jie sun wrote:
My vm kernel version is Linux
On Thu, Oct 25, 2012 at 1:05 PM, Cláudio Martins c...@ist.utl.pt wrote:
Hello,
The text at
http://ceph.com/docs/master/cluster-ops/pools/
appears to have a slight inconsistency. At the top it says
Replicas: You can set the desired number of copies/replicas of an object. A
typical
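For context, the replica count that page describes is a per-pool setting; per the same docs it is adjusted like so (pool name illustrative):
# set the number of replicas for a pool, then verify
ceph osd pool set data size 3
ceph osd dump | grep 'rep size'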
On Sat, Oct 20, 2012 at 6:42 AM, Matthew Roy imjustmatt...@gmail.com wrote:
I'm getting some error messages along the lines of:
2012-10-20 09:35:09.038865 mds.0 [ERR] loaded dup inode 101d369
[2,head] v18402 at ~mds0/stray8/101d369, but inode
101d369.head v18204 already exists at
On Tuesday, October 23, 2012 at 10:48 PM, jie sun wrote:
My vm kernel version is Linux ubuntu12 3.2.0-23-generic.
ceph -s shows
health HEALTH_OK
monmap e1: 1 mons at {a=10.100.211.146:6789/0}, election epoch 0, quorum 0 a
osdmap e152: 10 osds: 9 up, 9 in
pgmap v48479: 2112 pgs: 2112
On Mon, Oct 22, 2012 at 3:27 AM, Yann ROBIN yann.ro...@youscribe.com wrote:
Hi,
We use ceph to store small files (lots of them) on different servers and access
them using the rados gateway.
Our data size is 380 GB (very small). We have two hosts with 5 OSDs each.
We use a small config for ceph: 2 GB RAM
On Mon, Oct 22, 2012 at 2:04 AM, eric_yh_c...@wiwynn.com wrote:
At this point format 2 is understood by the kernel, and the
infrastructure for opening parent images and the I/O path
for clones is in progress. We estimate about 4-8 weeks for this,
but you should check back then.
Kernel 3.6
On Sun, Oct 21, 2012 at 9:05 AM, GitHub nore...@github.com wrote:
Branch: refs/heads/master
Home: https://github.com/ceph/ceph
Commit: 66bda162e1acad34d37fa97e3a91e277df174f42
https://github.com/ceph/ceph/commit/66bda162e1acad34d37fa97e3a91e277df174f42
Author: Sage Weil
On Fri, Oct 19, 2012 at 12:07 PM, Mike Dawson mdaw...@gammacode.com wrote:
I don't know if this use fits into buffered sequential IOPS or random IOPS
territory. Need to learn more about the NVR software probably.
Video cameras are almost certainly not doing synchronous writes, which
means the
and was just in Replay mode. Other than that, I
don't know of anything that would have affected the MDSs.
-Nick
On 2012/10/18 at 16:55, Gregory Farnum g...@inktank.com wrote:
Okay, looked at this a little bit. Can you describe what was happening
before you got into this failed-replay loop? (So
package and rerun? You will
probably have to first uninstall the ceph package.
Thanks,
-sam
On 2012/10/17 at 07:34, Sam Lang sam.l...@inktank.com wrote:
On 10/16/2012 06:04 PM, Gregory Farnum wrote:
Okay, that's the right debugging but it wasn't quite as helpful on its
own as I expected. Can
On 2012/10/17 at 07:34, Sam Lang sam.l...@inktank.com wrote:
On 10/16/2012 06:04 PM, Gregory Farnum wrote:
Okay, that's the right debugging but it wasn't quite as helpful on its
own as I expected. Can you get a core dump (you might already have
one, depending on system settings) of the crash
On Wed, Oct 17, 2012 at 12:40 PM, Casey Bodley ca...@linuxbox.com wrote:
To expand on what Matt said, we're also trying to address this issue of
lookups by inode number for use with NFS.
The design we've been exploring is to create a single system inode,
designated the 'inode container'
failure or
waits for the import to finish.
Casey
- Original Message -
From: Gregory Farnum g...@inktank.com
To: Casey Bodley ca...@linuxbox.com
Cc: Matt W. Benjamin m...@linuxbox.com, ceph-devel@vger.kernel.org,
aemerson aemer...@linuxbox.com, peter honeyman
peter.honey
On Tue, Oct 16, 2012 at 7:27 AM, Maciej Gałkiewicz
maciejgalkiew...@ragnarson.com wrote:
Hello
I have two ceph clusters configured this way:
production:
# cat /etc/ceph/ceph.conf
[global]
auth supported = cephx
keyring = /srv/ceph/keyring.admin
[mon]
mon data = /srv/ceph/mon
On Tue, Oct 16, 2012 at 2:17 PM, Sage Weil s...@inktank.com wrote:
Hey-
One of the design goals of the ceph fs was to keep metadata separate from
data. This means, among other things, that when a client is creating a
bunch of files, it creates the inode via the mds and writes the file data
nick.couch...@seakr.com wrote:
Well, hopefully this is still okay...8.5MB bzip2d, 230MB unzipped.
-Nick
On 2012/10/15 at 11:47, Gregory Farnum g...@inktank.com wrote:
Yeah, zip it and post it; somebody's going to have to download it and do
fun things. :)
-Greg
On Mon, Oct 15, 2012 at 10:43 AM
Something in the MDS log is bad or is poking at a bug in the code. Can
you turn on MDS debugging and restart a daemon and put that log
somewhere accessible?
debug mds = 20
debug journaler = 20
debug ms = 1
-Greg
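(A sketch of where these land: put them in the [mds] section of ceph.conf on the MDS host and restart the daemon; the daemon name below is illustrative.)
[mds]
debug mds = 20
debug journaler = 20
debug ms = 1
# then restart, e.g.:
service ceph restart mds.a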
On Mon, Oct 15, 2012 at 10:02 AM, Nick Couchman nick.couch...@seakr.com wrote:
Well,
it on a pastebin, if that
works, or perhaps zip it up and throw it somewhere?
-Nick
On 2012/10/15 at 11:26, Gregory Farnum g...@inktank.com wrote:
Something in the MDS log is bad or is poking at a bug in the code. Can
you turn on MDS debugging and restart a daemon and put that log
somewhere
I don't know the ext4 internals at all, but filesystems tend to
require allocation tables of various sorts (for managing extents,
etc). 7.5GB out of 500GB seems a little large for that metadata, but
isn't ridiculously so...
On Wed, Oct 10, 2012 at 10:28 AM, Damien Churchill dam...@gmail.com
On Tue, Oct 9, 2012 at 9:43 AM, Mark Kampe mark.ka...@inktank.com wrote:
I'm not a real engineer, so please forgive me if I misunderstand,
but can't you create a separate rule for each data center (choosing
first a local copy, and then remote copies), which should ensure
that the primary is
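A hedged sketch of what such a rule might look like in a decompiled crushmap; the bucket names dc1/dc2 and the ruleset number are assumptions:
rule dc1_local_first {
	ruleset 3
	type replicated
	min_size 2
	max_size 10
	# primary copy from the local data center
	step take dc1
	step chooseleaf firstn 1 type host
	step emit
	# remaining replicas from the remote data center
	step take dc2
	step chooseleaf firstn -1 type host
	step emit
}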
also moved to ceph-devel
On Tue, Oct 9, 2012 at 9:59 AM, Sam Lang sam.l...@inktank.com wrote:
On 10/09/2012 11:46 AM, Gregory Farnum wrote:
On Tue, Oct 9, 2012 at 9:43 AM, Sam Lang sam.l...@inktank.com wrote:
Could we add some other chaos monkeys to the network/storage
infrastructure
On Tue, Oct 9, 2012 at 11:32 AM, Sam Lang sam.l...@inktank.com wrote:
Putting a delay on the sender would avoid the reordering of messages that
have semantic meaning but allow delay-caused reordering to occur for those
that have no semantic dependency.
You're right that reordering at the
I'm going to have to leave most of these questions for somebody else,
but I do have one question. Are you using btrfs compression on your
OSD backing filesystems?
-Greg
On Tue, Oct 9, 2012 at 12:43 PM, Dave (Bob) d...@bob-the-boat.me.uk wrote:
I have a problem with this leveldb corruption issue.
Check out the thread titled [ceph-commit] teuthology lock_server error. :)
On Wed, Oct 3, 2012 at 5:41 AM, Pradeep S pradeeps...@gmail.com wrote:
Hi, I am getting the following error while executing ceph-qa-suite in
teuthology.
INFO:teuthology.run_tasks:Running task internal.save_config...
I think I'm with Mark now — this does indeed look like too much random
IO for the disks to handle. In particular, Ceph requires that each
write be synced to disk before it's considered complete, which rsync
definitely doesn't. In the filesystem this is generally disguised
fairly well by all the
Sorry I haven't provided any feedback on this either — it's still in
my queue but I've had a great many things to do since you sent it
along. :)
-Greg
On Wed, Oct 3, 2012 at 12:34 PM, Noah Watkins jayh...@cs.ucsc.edu wrote:
Hi Sage,
I wanted to touch base on this Java bindings patch series to
On Wed, Oct 3, 2012 at 3:22 PM, Tren Blackburn t...@eotnetworks.com wrote:
Hi List;
I was advised to use the mds cache size option to limit the memory
that the mds process will take. I have it set to 32768. However, the
ceph-mds process is now at 50GB and still growing.
fern ceph # ps
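For anyone finding this in the archives: the option lives in the [mds] section of ceph.conf, but note that it is a limit on the number of inodes held in cache, not on the process's memory in bytes, which may be part of what is going on here:
[mds]
mds cache size = 32768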
On Wed, Oct 3, 2012 at 4:23 PM, Tren Blackburn t...@eotnetworks.com wrote:
On Wed, Oct 3, 2012 at 4:15 PM, Gregory Farnum g...@inktank.com wrote:
On Wed, Oct 3, 2012 at 3:22 PM, Tren Blackburn t...@eotnetworks.com wrote:
Hi List;
I was advised to use the mds cache size option to limit
On Wed, Oct 3, 2012 at 4:59 PM, Tren Blackburn t...@eotnetworks.com wrote:
On Wed, Oct 3, 2012 at 4:56 PM, Gregory Farnum g...@inktank.com wrote:
On Wed, Oct 3, 2012 at 4:23 PM, Tren Blackburn t...@eotnetworks.com wrote:
On Wed, Oct 3, 2012 at 4:15 PM, Gregory Farnum g...@inktank.com wrote
On Tue, Sep 25, 2012 at 4:55 PM, Tren Blackburn t...@eotnetworks.com wrote:
On Tue, Sep 25, 2012 at 2:15 PM, Gregory Farnum g...@inktank.com wrote:
Hi Tren,
Sorry your last message got dropped — we've all been really busy!
No worries! I know you guys are busy, and I appreciate any assistance
I was asked to send this to the list. At some point it will be
properly documented, but for the moment...
The mds heartbeat controls are mds_beacon_interval and
mds_beacon_grace. (So named because the beacon is used for a little
bit more than heartbeating; but it also serves heartbeat purposes
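A sketch of the corresponding ceph.conf entries; the values shown are, to my understanding, the defaults of this era, and placing them in [global] keeps the monitors' view of the grace period consistent:
[global]
mds beacon interval = 4
mds beacon grace = 15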
On Tue, Oct 2, 2012 at 12:48 PM, Mike Ryan mike.r...@inktank.com wrote:
Tried sending this earlier but it seems the list doesn't like PNGs.
dotty or dot -Tpng will make short work of the .dot file I've attached.
These are the changes to the Active state of the PG state chart in order
to
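(Rendering the attached graph is a one-liner; the file name here is illustrative:)
dot -Tpng pg_active.dot -o pg_active.png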
On Fri, Sep 28, 2012 at 10:12 PM, Sergey Tsalkov stsal...@gmail.com wrote:
So we're running a 3-machine cluster with ceph 0.52 on ubuntu precise.
Our cluster has 2 machines with 5 osds each, and a third machine with
a rados gateway. Each machine has a mon. The default crushmap is
putting a
On Mon, Oct 1, 2012 at 9:47 AM, Tommi Virtanen t...@inktank.com wrote:
On Thu, Sep 27, 2012 at 11:04 AM, Gregory Farnum g...@inktank.com wrote:
However, my suspicion is that you're limited by metadata throughput
here. How large are your files? There might be some MDS or client
tunables we can
On Thu, Sep 27, 2012 at 3:52 AM, hemant surale hemant.sur...@gmail.com wrote:
Sir,
I have upgraded my cluster to Ceph v0.48 and the cluster is fine (except
gceph is not working).
How can I direct my data to specific OSDs?
I tried to edit the crushmap and tried to specify a ruleset accordingly,
On Wed, Sep 26, 2012 at 1:54 PM, Bryan K. Wright
bk...@ayesha.phys.virginia.edu wrote:
Hi Mark,
Thanks for your help. Some answers to your questions
are below.
mark.nel...@inktank.com said:
On 09/26/2012 09:50 AM, Bryan K. Wright wrote:
Hi folks,
Hi Bryan!
I'm seeing
On Thu, Sep 27, 2012 at 11:47 AM, Bryan K. Wright
bk...@ayesha.phys.virginia.edu wrote:
g...@inktank.com said:
The rados benchmark was run on one of the OSD
machines. Read and write results looked like this (the
object size was just the default, which seems to be 4kB):
Actually,
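For the record, rados bench's default op size is 4 MB rather than 4 kB; it can also be set explicitly with -b (pool name illustrative):
# 60-second write test with an explicit 4 MB op size, then a sequential read test
rados -p data bench 60 write -b 4194304
rados -p data bench 60 seq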
On Thu, Sep 27, 2012 at 3:23 PM, Jim Schutt jasc...@sandia.gov wrote:
On 09/27/2012 04:07 PM, Gregory Farnum wrote:
Have you tested that this does what you want? If it does, I think
we'll want to implement this so that we actually release the memory,
but continue accounting it.
Yes. I
Hi Tren,
Sorry your last message got dropped — we've all been really busy!
On Tue, Sep 25, 2012 at 10:22 AM, Tren Blackburn t...@eotnetworks.com wrote:
snip
All ceph servers are running ceph-0.51. Here is the output of ceph -s:
ocr31-ire ~ # ceph -s
health HEALTH_OK
monmap e1: 3 mons
Hmmm...this is probably an issue with SUSECloud setup rather than Ceph
itself (and we don't know how SUSECloud works). But it looks like it's
hanging while running ceph osd tree, which means it's likely a
problem somewhere in connecting or getting an answer from the
monitors.
1) See if you have
On Wed, Sep 19, 2012 at 1:48 PM, Sage Weil s...@inktank.com wrote:
On Wed, 19 Sep 2012, Tren Blackburn wrote:
Hey List;
I'm in the process of rsyncing in about 7TB of data to Ceph across
approximately 58565475 files (okay, so I guess that's not so
approximate). It's only managed to copy a
On Wed, Sep 19, 2012 at 2:05 PM, Tren Blackburn t...@eotnetworks.com wrote:
Greg: It's difficult to tell you that. I'm rsyncing 2 volumes from our
filers. Each base directory on each filer mount has approximately 213
directories, and then each directory under that has approximately
anywhere
On Mon, Sep 17, 2012 at 6:59 AM, Székelyi Szabolcs szeke...@niif.hu wrote:
Hi,
I have a problem that newly copied files disappear from the FS after a few
minutes. The only suspicious log entries look like this:
2012-09-17 15:45:40.251610 7f7024f25700 0 mds.0.server missing
1000818
On Mon, Sep 10, 2012 at 2:36 PM, Andrew Thompson andre...@aktzero.com wrote:
Greetings,
Has anyone seen this or got ideas on how to fix it?
mdsmap e18399: 3/3/3 up {0=b=up:resolve,1=a=up:resolve(laggy or
crashed),2=a=up:resolve(laggy or crashed)}
Notice that the 2nd and 3rd mds are the
On Fri, Sep 7, 2012 at 9:03 AM, Jimmy Tang jt...@tchpc.tcd.ie wrote:
Hi All,
I know that cephfs has the option of picking which pools to use; will ceph-fuse
be gaining this feature at any point in the future? Or is this feature
available but isn't documented?
It's not available — making it
On Thu, Sep 6, 2012 at 10:58 AM, Jimmy Tang jt...@tchpc.tcd.ie wrote:
Hi All,
I've been playing around with 0.51 of ceph on two test machines in
work. I was experimenting with adjusting the crushmap to change from
replicating across OSDs to replicating across hosts. When I change
the rule
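For reference, the usual edit cycle for this kind of change (the chooseleaf line is the host-level replication step):
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit the rule, e.g.:  step chooseleaf firstn 0 type host
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new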
On Wed, Sep 5, 2012 at 9:22 AM, Tommi Virtanen t...@inktank.com wrote:
On Tue, Sep 4, 2012 at 4:26 PM, Smart Weblications GmbH - Florian
Wiessner f.wiess...@smart-weblications.de wrote:
I set up a 3-node ceph cluster (0.48.1 argonaut) to test ceph-fs.
I mount ceph via fuse, then I downloaded
On Wed, Sep 5, 2012 at 9:42 AM, Smart Weblications GmbH - Florian
Wiessner f.wiess...@smart-weblications.de wrote:
On 05.09.2012 18:22, Tommi Virtanen wrote:
On Tue, Sep 4, 2012 at 4:26 PM, Smart Weblications GmbH - Florian
Wiessner f.wiess...@smart-weblications.de wrote:
I set up a 3-node
On Fri, Aug 31, 2012 at 9:24 AM, Andrew Thompson andre...@aktzero.com wrote:
On 8/31/2012 12:10 PM, Sage Weil wrote:
On Fri, 31 Aug 2012, Andrew Thompson wrote:
Have you been reweighting osds? I went round and round with my cluster a
few days ago reloading different crush maps only to find
On Tue, Aug 28, 2012 at 7:50 AM, Xiaopong Tran xiaopong.t...@gmail.com wrote:
On 08/25/2012 12:28 AM, Sage Weil wrote:
On Fri, 24 Aug 2012, Xiaopong Tran wrote:
Hello,
I've been running 0.48 argonaut in production for over a month
without any issue, and today I suddenly lost one mon.
On Mon, Aug 27, 2012 at 1:01 PM, Sage Weil s...@inktank.com wrote:
On Mon, 27 Aug 2012, Noah Watkins wrote:
I have a bufferlist use case that I can't quite resolve. I'm packing
up a struct and a blob, but not sure how to seek to the beginning of
the blob correctly during decode:
1. Setup
On Sun, Aug 26, 2012 at 10:09 AM, Noah Watkins jayh...@cs.ucsc.edu wrote:
Would anyone mind giving a short overview of the difference between
tmap, omap, and xattrs, and the physical layout of these with respect
to the object payload?
xattrs are (usually — caveat below) stored in the filesystem
On Sunday, August 26, 2012 at 11:09 AM, Sébastien Han wrote:
Hi guys!
Ceph doesn't seem to detect a journal failure. The cluster keeps
writing data even if the journal doesn't exist anymore.
I can't find anywhere in the log or in ceph's command output any
information about a journal
On Thu, Aug 23, 2012 at 5:48 PM, Jim Schutt jasc...@sandia.gov wrote:
On 08/23/2012 03:26 PM, Tren Blackburn wrote:
On Thu, Aug 23, 2012 at 2:17 PM, Jim Schutt jasc...@sandia.gov wrote:
On 08/23/2012 02:39 PM, Tren Blackburn wrote:
2) Increase the number of pgs via ceph.conf (osd pg bits =
-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Gregory Farnum
Sent: Thursday, August 23, 2012 2:41 PM
To: Ryan Nicholson
Cc: ceph-devel@vger.kernel.org
Subject: Re: PG's
On Thu, Aug 23, 2012 at 2:51 PM, Ryan Nicholson ryan.nichol...@kcrg.com
On Wed, Aug 22, 2012 at 9:33 AM, Sage Weil s...@inktank.com wrote:
On Wed, 22 Aug 2012, Atchley, Scott wrote:
On Aug 22, 2012, at 10:46 AM, Florian Haas wrote:
On 08/22/2012 03:10 AM, Sage Weil wrote:
I pushed a branch that changes some of the crush terminology. Instead of
having a crush
The tcmalloc backtrace on the OSD suggests this may be unrelated, but
what's the fd limit on your monitor process? You may be approaching
that limit if you've got 500 OSDs and a similar number of clients.
On Wed, Aug 22, 2012 at 6:55 PM, Andrey Korolyov and...@xdel.ru wrote:
On Thu, Aug 23, 2012
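A quick way to check the running limit, plus the ceph.conf knob that (if memory serves; treat the option name as an assumption to verify) raises it when daemons start from the init script:
cat /proc/$(pidof ceph-mon)/limits | grep 'open files'
[global]
max open files = 131072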
On Thursday, August 16, 2012 at 11:00 PM, ramu wrote:
Hi all,
I want to write or insert a text file into a rados pool; please help me with
how to write or insert into a rados pool.
Depending on what you're after, you'll want to explore using librados (for
programmatic access) or the rados tool binary,
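For the simple case, the rados tool alone will do it (pool and object names illustrative):
rados mkpool mypool
rados -p mypool put hello.txt ./hello.txt
rados -p mypool ls
rados -p mypool get hello.txt /tmp/hello-copy.txt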
On Fri, Aug 17, 2012 at 12:36 PM, Wido den Hollander w...@widodh.nl wrote:
Hi,
I'm looking for the omap functions in librados.h, but they only seem to be
implemented in librados.hpp?
I want to store about 6000 objects in a RADOS pool, but I want to give them
some attributes I can query for,
On Fri, Aug 17, 2012 at 1:09 PM, Wido den Hollander w...@widodh.nl wrote:
On 08/17/2012 09:54 PM, Gregory Farnum wrote:
On Fri, Aug 17, 2012 at 12:36 PM, Wido den Hollander w...@widodh.nl
wrote:
Hi,
I'm looking for the omap functions in librados.h, but they only seem to
be
implemented
On Thursday, August 16, 2012, Wido den Hollander wrote:
On 08/16/2012 02:20 PM, ramu wrote:
Hi,
I am creating rados pools from java-rados; it creates the pool fine.
I don't know the location of this pool on disk; please help me find the
location (i.e. the path) of a rados pool. It is showing
Can you provide more details about where you pulled this measurement
data from? There are already a number of timestamps saved to track
stuff like this and in our own (admittedly still incomplete) work on
this subject we haven't seen any delays from the SimpleMessenger.
We've had several guys
We've discussed some of the issues here a little bit before. See
http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/7094 if
you're interested.
Josh, can you discuss the current status of the advisory locking?
-Greg
On Sun, Aug 12, 2012 at 8:44 AM, Sage Weil s...@inktank.com wrote:
RBD
On Sat, Aug 4, 2012 at 3:37 AM, Vladimir Bashkirtsev
vladi...@bashkirtsev.com wrote:
Hello,
Yesterday I finally managed to screw up my installation of ceph! :)
My ceph was at 80% capacity. I rebooted one of the OSDs remotely and
managed to screw up fstab. The host failed to come up
On Mon, Aug 6, 2012 at 9:39 AM, Vladimir Bashkirtsev
vladi...@bashkirtsev.com wrote:
On 07/08/12 01:55, Gregory Farnum wrote:
There is not yet any such feature, no — dealing with full systems is
notoriously hard and we haven't come up with a great solution yet. One thing
you can do
On Tue, Jul 31, 2012 at 8:07 AM, Jim Schutt jasc...@sandia.gov wrote:
On 07/30/2012 06:24 PM, Gregory Farnum wrote:
Hmm. The concern is that if an OSD is stuck on disk swapping then it's
going to be just as stuck for the monitors as the OSDs — they're all
using the same network in the basic
As Ceph gets deployed on larger clusters our most common scaling
issues have related to
1) our heartbeat system, and
2) handling the larger numbers of OSDMaps that get generated by
increases in the OSD (failures, boots, etc) and PG count (osd
up-thrus, pg_temp insertions, etc).
Lately we haven't
On Wed, Jul 25, 2012 at 1:06 PM, Florian Haas flor...@hastexo.com wrote:
Hi Mehdi,
great work! A few questions (for you, Mark, and anyone else watching
this thread) regarding the content of that wiki page:
For the OSD tests, which OSD filesystem are you testing on? Are you
using a separate
On Fri, Jul 20, 2012 at 3:24 AM, George Shuklin shuk...@selectel.ru wrote:
Good day.
I've started to play with Ceph... and I found some kind of strange performance
issues. I'm not sure if this is due to a ceph limitation or my bad setup.
Setup:
osd - xfs on ramdisk (only one osd)
mds - raid0 on 10
If I remember mkcephfs correctly, it deliberately does not create the
directories for each store (you'll notice that
http://ceph.com/docs/master/start/quick-start/#deploy-the-configuration
includes creating the directory for each daemon) — does /data/1/osd0
exist yet?
On Fri, Jul 20, 2012 at 2:45
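If it doesn't, creating the directory by hand before running mkcephfs should unblock things (the path is the one from this thread; the flags are the usual ones):
mkdir -p /data/1/osd0
mkcephfs -a -c /etc/ceph/ceph.conf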
On Monday, July 16, 2012 at 11:55 AM, Andrey Korolyov wrote:
On Mon, Jul 16, 2012 at 10:48 PM, Gregory Farnum g...@inktank.com
(mailto:g...@inktank.com) wrote:
ceph pg set_full_ratio 0.95
ceph pg set_nearfull_ratio 0.94
On Monday, July 16, 2012 at 11:42 AM, Andrey Korolyov wrote
On Tuesday, July 17, 2012 at 11:22 PM, Andrey Korolyov wrote:
On Wed, Jul 18, 2012 at 10:09 AM, Gregory Farnum g...@inktank.com
(mailto:g...@inktank.com) wrote:
On Monday, July 16, 2012 at 11:55 AM, Andrey Korolyov wrote:
On Mon, Jul 16, 2012 at 10:48 PM, Gregory Farnum g...@inktank.com
On Wed, Jul 18, 2012 at 12:47 AM, Andrey Korolyov and...@xdel.ru wrote:
On Wed, Jul 18, 2012 at 11:18 AM, Gregory Farnum g...@inktank.com wrote:
On Tuesday, July 17, 2012 at 11:22 PM, Andrey Korolyov wrote:
On Wed, Jul 18, 2012 at 10:09 AM, Gregory Farnum g...@inktank.com
(mailto:g
On Wed, Jul 18, 2012 at 12:07 PM, Andrey Korolyov and...@xdel.ru wrote:
On Wed, Jul 18, 2012 at 10:30 PM, Gregory Farnum g...@inktank.com wrote:
On Wed, Jul 18, 2012 at 12:47 AM, Andrey Korolyov and...@xdel.ru wrote:
On Wed, Jul 18, 2012 at 11:18 AM, Gregory Farnum g...@inktank.com wrote
On Saturday, July 14, 2012 at 7:20 AM, Andrey Korolyov wrote:
On Fri, Jul 13, 2012 at 9:09 PM, Sage Weil s...@inktank.com
(mailto:s...@inktank.com) wrote:
On Fri, 13 Jul 2012, Gregory Farnum wrote:
On Fri, Jul 13, 2012 at 1:17 AM, Andrey Korolyov and...@xdel.ru
(mailto:and...@xdel.ru
ceph pg set_full_ratio 0.95
ceph pg set_nearfull_ratio 0.94
On Monday, July 16, 2012 at 11:42 AM, Andrey Korolyov wrote:
On Mon, Jul 16, 2012 at 8:12 PM, Gregory Farnum g...@inktank.com
(mailto:g...@inktank.com) wrote:
On Saturday, July 14, 2012 at 7:20 AM, Andrey Korolyov wrote
On Fri, Jul 13, 2012 at 1:17 AM, Andrey Korolyov and...@xdel.ru wrote:
Hi,
Recently I've reduced my test suite from 6 to 4 osds at ~60% usage on a
six-node cluster, and I have removed a bunch of rbd objects during
recovery to avoid overfill.
Right now I'm constantly receiving a warning about nearfull
Can you run with -x (enable authentication)? I think the non-cephx
version got broken at some point, though if using cephx is a problem
it could probably get fixed up.
-Greg
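Assuming this is a vstart.sh development cluster, cephx is switched on like so (the daemon counts are illustrative):
MON=3 OSD=1 MDS=1 ./vstart.sh -n -x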
On Fri, Jul 13, 2012 at 3:59 PM, Noah Watkins jayh...@cs.ucsc.edu wrote:
Howdy,
With the latest master branch I'm seeing
Okay, bugged (http://tracker.newdream.net/issues/2778) — thanks!
-Greg
On Thu, Jul 12, 2012 at 8:26 AM, Noah Watkins jayh...@cs.ucsc.edu wrote:
On Mon, Jul 2, 2012 at 10:04 AM, Gregory Farnum g...@inktank.com wrote:
On Tue, Jun 26, 2012 at 8:20 PM, Noah Watkins jayh...@cs.ucsc.edu wrote:
I get
On Mon, Jul 9, 2012 at 10:04 AM, Yann Dupont yann.dup...@univ-nantes.fr wrote:
On 09/07/2012 18:54, Yann Dupont wrote:
Ok. I've compiled the kernel this afternoon, and tested it without much
success:
Jul 9 18:17:23 label5.u14.univ-nantes.prive kernel: [ 284.116236]
libceph: osd0
On Mon, Jul 9, 2012 at 10:27 AM, Székelyi Szabolcs szeke...@niif.hu wrote:
On 2012. July 9. 09:33:22 Sage Weil wrote:
On Mon, 9 Jul 2012, Székelyi Szabolcs wrote:
Thus far I accessed my Ceph (0.48) FS with the client.admin key, but I'd
like to change that since I don't want to allow clients
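A hedged sketch of minting a restricted, non-admin client key; the exact cap syntax varied between versions, so treat the caps below as illustrative:
ceph auth get-or-create client.myclient mon 'allow r' osd 'allow rw pool=data' mds 'allow' -o /etc/ceph/keyring.myclient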
On Sun, Jul 8, 2012 at 11:53 AM, Székelyi Szabolcs szeke...@niif.hu wrote:
On 2012. July 4. 09:34:04 Gregory Farnum wrote:
Hrm, it looks like the OSD data directory got a little busted somehow. How
did you perform your upgrade? (That is, how did you kill your daemons, in
what order, and when
On Sat, Jul 7, 2012 at 1:42 AM, Jimmy Tang jt...@tchpc.tcd.ie wrote:
Hi Greg,
On Fri, Jul 6, 2012 at 5:38 PM, Gregory Farnum g...@inktank.com wrote:
Do you have more in the log? It looks like it's being instructed to
shut down before it's fully come up (thus the error in the Objecter
http
On Fri, Jul 6, 2012 at 8:29 AM, Chen, Hb hbc...@lanl.gov wrote:
Hi,
Can I try CEPH on top of the openstack/Swift Object store (Essex release)?
Nope! CephFS is heavily dependent on the features provided by the
RADOS object store (and so is RBD, if that's what you're interested
in).
If you
Do you have more in the log? It looks like it's being instructed to
shut down before it's fully come up (thus the error in the Objecter
http://tracker.newdream.net/issues/2740, but is not the root cause),
but I can't see why.
-Greg
On Fri, Jul 6, 2012 at 8:42 AM, Jimmy Tang jt...@tchpc.tcd.ie
On Fri, Jul 6, 2012 at 12:19 AM, Yann Dupont yann.dup...@univ-nantes.fr wrote:
On 05/07/2012 23:32, Gregory Farnum wrote:
[...]
ok, so as all nodes were identical, I probably hit a btrfs bug (like an
erroneous out-of-space) at more or less the same time. And when 1 osd was
out
On Fri, Jul 6, 2012 at 11:09 AM, Stefan Priebe - Profihost AG
s.pri...@profihost.ag wrote:
Am 06.07.2012 um 19:11 schrieb Gregory Farnum g...@inktank.com:
On Thu, Jul 5, 2012 at 8:50 PM, Alexandre DERUMIER aderum...@odiso.com
wrote:
Hi,
Stefan is on vacation for the moment; I don't know
On Fri, Jul 6, 2012 at 12:07 PM, Tim Bell tim.b...@cern.ch wrote:
Does SL6 have the kernel level required?
The MDS is a userspace daemon that demands absolutely nothing unusual
from the kernel. :)
-Greg
Tim
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
On Thu, Jul 5, 2012 at 10:39 AM, Florian Haas flor...@hastexo.com wrote:
Hi guys,
Someone I worked with today pointed me to a quick and easy way to
bring down an entire cluster, by making all mons kill themselves in
mass suicide:
ceph osd setmaxosd 2147483647
2012-07-05 16:29:41.893862
On Thu, Jul 5, 2012 at 10:40 AM, Florian Haas flor...@hastexo.com wrote:
Hi everyone,
please enlighten me if I'm misinterpreting something, but I think the
Ceph FS layer could handle the following situation better.
How to reproduce (this is on a 3.2.0 kernel):
1. Create a client, mine is
On Thu, Jul 5, 2012 at 1:19 PM, Florian Haas flor...@hastexo.com wrote:
On Thu, Jul 5, 2012 at 10:04 PM, Gregory Farnum g...@inktank.com wrote:
But I have a few more queries while this is fresh. If you create a
directory, unmount and remount, and get the location, does that work?
Nope, same
On Thu, Jul 5, 2012 at 1:25 PM, Florian Haas flor...@hastexo.com wrote:
On Thu, Jul 5, 2012 at 10:01 PM, Gregory Farnum g...@inktank.com wrote:
Also, going down the rabbit hole, how would this behavior change if I
used cephfs to set the default layout on some directory to use a
different pool
On Wed, Jul 4, 2012 at 10:53 AM, Yann Dupont yann.dup...@univ-nantes.fr wrote:
On 04/07/2012 18:21, Gregory Farnum wrote:
On Wednesday, July 4, 2012 at 1:06 AM, Yann Dupont wrote:
On 03/07/2012 23:38, Tommi Virtanen wrote:
On Tue, Jul 3, 2012 at 1:54 PM, Yann Dupont yann.dup...@univ
Could you send over the ceph.conf on your KVM host, as well as how
you're configuring KVM to use rbd?
On Tue, Jul 3, 2012 at 11:20 AM, Stefan Priebe s.pri...@profihost.ag wrote:
I'm sorry, but this is the KVM host machine; there is no ceph running on
this machine.
If I change the admin socket
On Wednesday, July 4, 2012 at 1:06 AM, Yann Dupont wrote:
On 03/07/2012 23:38, Tommi Virtanen wrote:
On Tue, Jul 3, 2012 at 1:54 PM, Yann Dupont yann.dup...@univ-nantes.fr
(mailto:yann.dup...@univ-nantes.fr) wrote:
In the case I could repair, do you think a crashed FS as it is right