Can you open a ticket with exact version of your ceph cluster?
http://tracker.ceph.com
Thanks,
On Sun, Dec 10, 2017 at 10:34 PM, Martin Preuss wrote:
> Hi,
>
> I'm new to Ceph. I started a ceph cluster from scratch on Debian 9,
> consisting of 3 hosts, each host has 3-4 OSDs (using 4TB hdds, cu
On Fri, Oct 13, 2017 at 3:29 PM, Ashley Merrick wrote:
> Hello,
>
>
> Is it possible to limit a cephx user to one image?
>
>
> I have looked and it seems it's possible per pool, but I can't find a
> per-image option.
What did you look at?
Best reg
So the problem you faced has been completely solved?
On Thu, Sep 28, 2017 at 7:51 PM, Richard Hesketh
wrote:
> On 27/09/17 19:35, John Spray wrote:
>> On Wed, Sep 27, 2017 at 1:18 PM, Richard Hesketh
>> wrote:
>>> On 27/09/17 12:32, John Spray wrote:
On Wed, Sep 27, 2017 at 12:15 PM, Richar
Are we going to have next CDM in an APAC friendly time slot again?
On Thu, Sep 28, 2017 at 12:08 PM, Leonardo Vaz wrote:
> Hey Cephers,
>
> This is just a friendly reminder that the next Ceph Developer Monthly
> meeting is coming up:
>
> http://wiki.ceph.com/Planning
>
> If you have work that y
Just for clarification.
Did you upgrade your cluster from Hammer to Luminous, then hit an assertion?
On Wed, Sep 27, 2017 at 8:15 PM, Richard Hesketh
wrote:
> As the subject says... any ceph fs administrative command I try to run hangs
> forever and kills monitors in the background - sometimes t
It would be much better to explain why, as of today, the object-map
feature is not supported by the kernel client, or to document it.
On Tue, Aug 15, 2017 at 8:08 PM, Ilya Dryomov wrote:
> On Tue, Aug 15, 2017 at 11:34 AM, moftah moftah wrote:
>> Hi All,
>>
>> I have search everywhere for some sort of t
On Sun, Apr 23, 2017 at 4:09 AM, Donny Davis wrote:
> Just in case anyone was curious as to how amazing ceph actually is, I did
> the migration to ceph seamlessly. I was able to bring the other two nodes
> into the cluster, and then turn on replication between them without a hitch.
> And with zero
You don't need to recompile that tool. Please see
``ceph_erasure_code_benchmark -h``.
Some examples are:
https://github.com/ceph/ceph/blob/master/src/erasure-code/isa/README#L31-L48
On Sat, Apr 8, 2017 at 8:21 AM, Henry Ngo wrote:
> Hello,
>
> I have a 6 node cluster and I have installed Ceph on
Please open a ticket so that we can track it.
http://tracker.ceph.com/
Regards,
On Sat, Apr 8, 2017 at 1:40 AM, Patrick Donnelly
wrote:
> Hello Andras,
>
> On Wed, Mar 29, 2017 at 11:07 AM, Andras Pataki
> wrote:
> > Below is a crash we had on a few machines with the ceph-fuse client on
> the
> > la
Adding Patrick who might be the best person.
Regards,
On Wed, Apr 5, 2017 at 6:16 PM, Wido den Hollander wrote:
>
>> On 5 April 2017 at 8:14, SJ Zhu wrote:
>>
>>
>> Wido, ping?
>>
>
> This might take a while! Has to go through a few hops for this to get fixed.
>
> It's on my radar!
>
> Wido
>
> I am sure I remember having to reduce min_size to 1 temporarily in the past
> to allow recovery from having two drives irrecoverably die at the same time
> in one of my clusters.
What was the situation in which you had to do that?
Thanks in advance for sharing your experience.
Regards,
So the description of Jewel is wrong?
http://docs.ceph.com/docs/master/releases/
On Thu, Mar 16, 2017 at 2:27 AM, John Spray wrote:
> On Wed, Mar 15, 2017 at 5:04 PM, Shinobu Kinjo wrote:
>> It may probably be kind of a challenge, but please consider Kraken (or
>> later) because Jewel
table) and will receive
> updates until two LTS are published.
>
> --
> Deepak
>
>> On Mar 15, 2017, at 10:09 AM, Shinobu Kinjo wrote:
>>
>> It may probably be kind of a challenge, but please consider Kraken (or
>> later) because Jewel will be retired:
>>
>>
of time.
>
> Thanks all!
>
> Shain
>
> On Mar 15, 2017, at 12:38 PM, Vasu Kulkarni wrote:
>
> Just curious, why do you still want to deploy new Hammer instead of stable
> Jewel? Is this a test environment? The last .10 release was basically
> bug fixes for 0.94.9.
>
>
FYI:
https://plus.google.com/+Cephstorage/posts/HuCaTi7Egg3
On Thu, Mar 16, 2017 at 1:05 AM, Shain Miley wrote:
> Hello,
> I am trying to deploy ceph to a new server using ceph-deploy which I have
> done in the past many times without issue.
>
> Right now I am seeing a timeout trying to connect to
We already discussed this:
https://www.spinics.net/lists/ceph-devel/msg34559.html
What do you think of the comment posted in that thread?
Would that make sense to you as well?
On Tue, Feb 28, 2017 at 2:41 AM, Vasu Kulkarni wrote:
> Ilya,
>
> Many folks hit this and it's quite difficult since the error
Please open a ticket at http://tracker.ceph.com, if you haven't yet.
On Thu, Feb 16, 2017 at 6:07 PM, Muthusamy Muthiah
wrote:
> Hi Wido,
>
> Thanks for the information and let us know if this is a bug.
> As a workaround we will go with a small bluestore_cache_size of 100MB.
>
> Thanks,
> Muthu
>
> On
If ``ceph pg deep-scrub <pg-id>`` does not work, then run:
``ceph pg repair <pg-id>``
On Sat, Feb 18, 2017 at 10:02 AM, Tracy Reed wrote:
> I have a 3 replica cluster. A couple times I have run into inconsistent
> PGs. I googled it and ceph docs and various blogs say run a repair
> first. But a couple people
On Sat, Feb 18, 2017 at 9:03 AM, Matyas Koszik wrote:
>
>
> Looks like you've provided me with the solution, thanks!
:)
> I've set the tunables to firefly, and now I only see the normal states
> associated with a recovering cluster, there're no more stale pgs.
> I hope it'll stay like this when
You may need to increase ``choose_total_tries`` to more than 50 (the
default), up to 100.
- http://docs.ceph.com/docs/master/rados/operations/crush-map/#editing-a-crush-map
- https://github.com/ceph/ceph/blob/master/doc/man/8/crushtool.rst
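The decompile/edit/recompile cycle for that tunable can be sketched as follows (a cluster-dependent sketch assuming an admin node with `crushtool` installed; file names are illustrative):

```shell
# Fetch and decompile the current CRUSH map (file names are examples).
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# Edit crushmap.txt and raise the tunable, e.g.:
#   tunable choose_total_tries 100

# Recompile and inject the edited map.
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
```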
On Sat, Feb 18, 2017 at 5:25 AM, Matyas Koszik wrote:
>
>
Can you run?
* ceph osd getcrushmap -o ./crushmap.o; crushtool -d ./crushmap.o -o ./crushmap.txt
On Sat, Feb 18, 2017 at 3:52 AM, Gregory Farnum wrote:
> Situations that are stable with lots of undersized PGs like this generally
> mean that the CRUSH map is failing to allocate enough OSDs for certain
Would you simply do?
* ceph -s
On Fri, Feb 17, 2017 at 6:26 AM, Benjeman Meekhof wrote:
> As I'm looking at logs on the OSD mentioned in previous email at this
> point, I mostly see this message repeating...is this normal or
> indicating a problem? This osd is marked up in the cluster.
>
> 201
On Wed, Feb 15, 2017 at 2:18 AM, Lukáš Kubín wrote:
> Hi,
> I'm most probably hitting bug http://tracker.ceph.com/issues/13755 - when
> libvirt mounted RBD disks suspend I/O during snapshot creation until hard
> reboot.
>
> My Ceph cluster (monitors and OSDs) is running v0.94.3, while clients
> (O
so high?
> Are there any solutions or suggestions to this problem?
>
> Cheers
>
> -Original Message-
> From: Shinobu Kinjo [mailto:ski...@redhat.com]
> Sent: February 13, 2017 10:54
> To: chenyehua 11692 (RD)
> Cc: kc...@redhat.com; ceph-users@lists.ceph.com
> Subject: Re: Re: [
9:40
> To: 'Shinobu Kinjo'
> Cc: kc...@redhat.com; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] mon is stuck in leveldb and costs nearly 100% cpu
>
> My ceph version is 10.2.5
>
> -Original Message-
> From: Shinobu Kinjo [mailto:ski...@redhat.com]
> Sent
Which Ceph version are you using?
On Sat, Feb 11, 2017 at 5:02 PM, Chenyehua wrote:
> Dear Mr Kefu Chai
>
> Sorry to disturb you.
>
> I meet a problem recently. In my ceph cluster ,health status has warning
> “store is getting too big!” for several days; and ceph-mon costs nearly
> 100% cpu;
>
>
the follow-up question is
> "Why?" as it is not required for a MON or OSD host.
>
> On Sat, Feb 11, 2017 at 1:18 PM, Michael Andersen
> wrote:
>> Yeah, all three mons have OSDs on the same machines.
>>
>> On Feb 10, 2017 7:13 PM, "Shinobu Kinjo&
ave OSDs on the same machines.
>
> On Feb 10, 2017 7:13 PM, "Shinobu Kinjo" wrote:
>>
>> Is your primary MON running on the host which some OSDs are running on?
>>
>> On Sat, Feb 11, 2017 at 11:53 AM, Michael Andersen
>> wrote:
>> > Hi
>&
Is your primary MON running on the host which some OSDs are running on?
On Sat, Feb 11, 2017 at 11:53 AM, Michael Andersen
wrote:
> Hi
>
> I am running a small cluster of 8 machines (80 osds), with three monitors on
> Ubuntu 16.04. Ceph version 10.2.5.
>
> I cannot reboot the monitors without phy
What did you exactly do?
On Fri, Feb 10, 2017 at 11:48 AM, 周威 wrote:
> The version I'm using is 0.94.9
>
> And when I want to create a pool, It shows:
>
> Error EINVAL: error running crushmap through crushtool: (1) Operation
> not permitted
>
> What's wrong about this?
> _
Osd.0 up 1.0 1.0
> 3 0.01070 Osd.3 up 1.0 1.0
> 4 0.04390 Osd.4 up 1.0 1.0
>
> -3 0.05949 Host 2:
> 1 0.00490 Osd.1 up 1.0 1.0
> 2 0.01070 Osd.2 up 1.0 1.0
> 5 0.04390 Osd.5
4 OSD nodes or daemons?
Please run:
* ceph -v
* ceph -s
* ceph osd tree
On Fri, Feb 10, 2017 at 5:26 AM, Craig Read wrote:
> We have 4 OSDs in test environment that are all stuck unclean
>
>
>
> I’ve tried rebuilding the whole environment with the same result.
>
>
>
> OSDs are running on XFS di
On Wed, Feb 8, 2017 at 8:07 PM, Dan van der Ster wrote:
> Hi,
>
> This is interesting. Do you have a bit more info about how to identify
> a server which is suffering from this problem? Is there some process
> (xfs* or kswapd?) we'll see as busy in top or iotop.
That's my question as well. If you
If you were able to reproduce the issue intentionally under a
particular condition (which I have no idea about at the moment), that
would be helpful.
There were some previous ML threads regarding a *similar* issue.
# google "libvirt rbd issue"
Regards,
On Tue, Feb 7, 2017 at 7:50 PM, Tracy Reed wr
> repositories (also latest Jewel and ceph-deploy) as well.
>
The community Ceph packages are running on the Ubuntu box, right?
If so, please run `ceph -v` on the Ubuntu box.
Also, please share the exact issue which you hit on the SUSE box.
>
> On Wed, Feb 8, 2017 at 3:03 AM, Shinobu Kinjo w
DispatchQueue::entry()+0x78b) [0x557c5200d06b]
>> > 17: (DispatchQueue::DispatchThread::entry()+0xd) [0x557c51ee5dcd]
>> > 18: (()+0x8734) [0x7f7e95dea734]
>> > 19: (clone()+0x6d) [0x7f7e93d80d3d]
>> > NOTE: a copy of the executable, or `objdump -rdS ` is
>> needed t
0 100 409600
./rados -p cephfs_data_a ls | wc -l
100
If you could reproduce the issue and share the procedure with us, that
would definitely help.
Will try again.
On Tue, Feb 7, 2017 at 2:01 AM, Florent B wrote:
> On 02/06/2017 05:49 PM, Shinobu Kinjo wrote:
>> How abou
How about *pve01-rbd01*?
* rados -p pve01-rbd01 ls | wc -l
On Mon, Feb 6, 2017 at 9:40 PM, Florent B wrote:
> On 02/06/2017 11:12 AM, Wido den Hollander wrote:
>>> On 6 February 2017 at 11:10, Florent B wrote:
>>>
>>>
>>> # ceph -v
>>> ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a0
On Sun, Feb 5, 2017 at 1:15 AM, John Spray wrote:
> On Fri, Feb 3, 2017 at 5:28 PM, Florent B wrote:
>> Hi everyone,
>>
>> On a Jewel test cluster I have :
please, `ceph -v`
>>
>> # ceph df
>> GLOBAL:
>> SIZE AVAIL RAW USED %RAW USED
>> 6038G 6011G 27379M
You may want to add this in your FIO recipe.
* exec_prerun=echo 3 > /proc/sys/vm/drop_caches
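For context, a minimal fio job file with that prerun hook might look like this (job parameters and the file path are illustrative, not from the thread; fio runs `exec_prerun` through system() before the job starts):

```ini
[global]
ioengine=libaio
direct=1
runtime=60
# Drop the page cache before the job so reads are not served from RAM.
exec_prerun=echo 3 > /proc/sys/vm/drop_caches

[seq-read]
filename=/mnt/cephfs/testfile   ; illustrative path on a CephFS mount
rw=read
bs=4M
size=1G
```

Note that dropping caches requires root, so fio itself has to run as root here.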
Regards,
On Fri, Feb 3, 2017 at 12:36 AM, Wido den Hollander wrote:
>
>> On 2 February 2017 at 15:35, Ahmed Khuraidah wrote:
>>
>>
>> Hi all,
>>
>> I am still confused about my CephFS sandbox.
>>
>>
on, Jan 30, 2017 at 1:23 PM, Gregory Farnum
>>>>> wrote:
>>>>> > On Sun, Jan 29, 2017 at 6:40 AM, Muthusamy Muthiah
>>>>> > wrote:
>>>>> >> Hi All,
>>>>> >>
>>>>> >> Also tried EC
On Wed, Feb 1, 2017 at 1:51 AM, Joao Eduardo Luis wrote:
> On 01/31/2017 03:35 PM, David Turner wrote:
>>
>> If you do have a large enough drive on all of your mons (and always
>> intend to do so) you can increase the mon store warning threshold in the
>> config file so that it no longer warns at
First off, the following, please.
* ceph -s
* ceph osd tree
* ceph pg dump
and
* what you actually did with exact commands.
Regards,
On Tue, Jan 31, 2017 at 6:10 AM, José M. Martín wrote:
> Dear list,
>
> I'm having some big problems with my setup.
>
> I was trying to increase the global
There were some related ML threads.
Google this:
[ceph-users] Ceph Plugin for Collectd
On Sun, Jan 29, 2017 at 8:43 AM, Marc Roos wrote:
>
>
> Is there a doc that describes all the parameters that are published by
> collectd-ceph?
>
> Is there maybe a default grafana dashboard for influxdb? I found
>
`ceph pg dump` should show you something like:
* active+undersized+degraded ... [NONE,3,2,4,1] 3 [NONE,3,2,4,1]
Sam,
Am I wrong? Or is it up to something else?
On Sat, Jan 21, 2017 at 4:22 AM, Gregory Farnum wrote:
> I'm pretty sure the default configs won't let an EC PG go active with
What does `ceph -s` say?
On Sat, Jan 21, 2017 at 3:39 AM, Wido den Hollander wrote:
>
>> On 20 January 2017 at 17:17, Kai Storbeck wrote:
>>
>>
>> Hello ceph users,
>>
>> My graphs of several counters in our Ceph cluster are showing abnormal
>> behaviour after changing the pg_num and pgp_num re
On Fri, Jan 20, 2017 at 2:54 AM, Brian Andrus
wrote:
> Many of the Ceph project VMs (including tracker.ceph.com) are currently
> hosted on DreamCompute. During the migration to our new service/cluster
> that was completed on 2017-01-17, the Ceph project was somehow enabled in our new
> OpenStack project w
Now I'm totally clear.
Regards,
On Fri, Jan 13, 2017 at 6:59 AM, Samuel Just wrote:
> That would work.
> -Sam
>
> On Thu, Jan 12, 2017 at 1:40 PM, Gregory Farnum wrote:
>> On Thu, Jan 12, 2017 at 1:37 PM, Samuel Just wrote:
>>> Oh, this is basically working as intended. What happened is that
Thu, Jan 12, 2017 at 1:01 PM, Jason Dillaman wrote:
> On Wed, Jan 11, 2017 at 10:43 PM, Shinobu Kinjo wrote:
>> +2
>> * Reduce manual operation as much as possible.
>> * A recovery tool in case that we break something which would not
>> appear to us initially.
>
>
Sorry, I don't get your question.
Generally speaking, the MON maintains maps of the cluster state:
* Monitor map
* OSD map
* PG map
* CRUSH map
Regards,
On Thu, Jan 12, 2017 at 7:03 PM, wrote:
> Hi all,
> I had just reboot all 3 nodes (one after one) of an small Proxmox-VE
> ceph-cluster
On Thu, Jan 12, 2017 at 12:28 PM, Christian Balzer wrote:
>
> Hello,
>
> On Wed, 11 Jan 2017 11:09:46 -0500 Jason Dillaman wrote:
>
>> I would like to propose that starting with the Luminous release of Ceph,
>> RBD will no longer support the creation of v1 image format images via the
>> rbd CLI an
On Thu, Jan 12, 2017 at 2:41 AM, Ilya Dryomov wrote:
> On Wed, Jan 11, 2017 at 6:01 PM, Shinobu Kinjo wrote:
>> It would be fine to not support v1 image format at all.
>>
>> But it would be probably friendly for users to provide them with more
>> understandable mes
It would be fine to not support v1 image format at all.
But it would probably be friendlier to users to provide a more
understandable message when they face a feature mismatch instead of just
displaying:
* rbd: map failed: (6) No such device or address
For instance, show the following some
SUPPORT CRUSH_TUNABLES2
>>> - v0.55 or later, including bobtail series (v0.56.x)
>>> - Linux kernel version v3.9 or later (for the file system and RBD kernel
>>> clients)
>>>
>>> And here my question is: If my clients use librados (version hammer), do I
&
everything is fine because ceph -s said they are up
>> and running.
>>
>> I would think of a problem with the crush map.
>>
>>> On 10.01.2017 at 08:06, Shinobu Kinjo wrote:
>>>
>>> e.g.,
>>> OSD7 / 3 / 0 are in the same acting
I think this indicates that they are up:
> osdmap e3114: 9 osds: 9 up, 9 in; 4 remapped pgs?
>
>
>> On 10.01.2017 at 07:50, Shinobu Kinjo wrote:
>>
>> On Tue, Jan 10, 2017 at 3:44 PM, Marcus Müller
>> wrote:
>>> All osds are currently up:
>>>
>>&
55872G 15071G 40801G 26.97
> MIN/MAX VAR: 0.61/1.70 STDDEV: 13.16
>
> As you can see, now osd2 also went down to 45% Use and „lost“ data. But I
> also think this is no problem and ceph just clears everything up after
> backfilling.
>
>
> On 10.01.2017 at 07:29, Sh
Looking at the ``ceph -s`` output you originally provided, all OSDs are up.
> osdmap e3114: 9 osds: 9 up, 9 in; 4 remapped pgs
But looking at ``pg query``, OSD.0 / 1 are not up. Are they perhaps
related to this?:
> Ceph1, ceph2 and ceph3 are vms on one physical host
Are those OSDs running on vm instanc
> pg 9.7 is stuck unclean for 512936.160212, current state active+remapped,
> last acting [7,3,0]
> pg 7.84 is stuck unclean for 512623.894574, current state active+remapped,
> last acting [4,8,1]
> pg 8.1b is stuck unclean for 513164.616377, current state active+remapped,
> last acting [4,7,2]
remove the ones with
> inconsistencies (which should remove the underlying rados objects). But
> it'd be perhaps good to do some searching on how/why this problem came about
> before doing this.
>
> andras
>
>
>
> On 01/07/2017 06:48 PM, Shinobu Kinjo wrote:
>>
ctive+undersized+degraded
>>
>>
>>
>> root@alex-desktop:/var/lib/ceph/mon/ceph-alex-desktop# ls -ls
>> total 8
>> 0 -rw-r--r-- 1 ceph ceph  0 Jan 7 21:11 done
>> 4 -rw------- 1 ceph ceph 77 Jan 7 21:05 keyring
>> 4 drwxr-xr-x 2 ceph ceph 4096 Jan 7 21
>
> Alex F. Evonosky
>
> <https://twitter.com/alexevon> <https://www.linkedin.com/in/alexevonosky>
>
> On Sat, Jan 7, 2017 at 6:36 PM, Shinobu Kinjo wrote:
>
>> How did you add a third MON?
>>
>> Regards,
>>
>> On Sun, Jan 8, 2017 at 7
Sorry for the late reply.
Are you still seeing an inconsistent PG status?
On Wed, Jan 4, 2017 at 11:39 PM, Andras Pataki
wrote:
> # ceph pg debug unfound_objects_exist
> FALSE
>
> Andras
>
>
> On 01/03/2017 11:38 PM, Shinobu Kinjo wrote:
>>
>> Would you run:
>>
>
How did you add a third MON?
Regards,
On Sun, Jan 8, 2017 at 7:01 AM, Alex Evonosky wrote:
> Anyone see this before?
>
>
> 2017-01-07 16:55:11.406047 7f095b379700 0 cephx: verify_reply couldn't
> decrypt with error: error decoding block for decryption
> 2017-01-07 16:55:11.406053 7f095b379700
On Wed, Jan 4, 2017 at 6:05 PM, 许雪寒 wrote:
> We've already restarted the OSD successfully.
> Now, we are trying to figure out why the OSD suicide itself
Usually it is a network issue causing pretty unstable communication
with other OSDs in the same acting set that leads to a suicide.
>
> Re: [ceph-users] Is thi
On Wed, Jan 4, 2017 at 4:33 PM, Henrik Korkuc wrote:
> On 17-01-04 03:16, Gregory Farnum wrote:
>>
>> On Fri, Dec 23, 2016 at 12:04 AM, Henrik Korkuc wrote:
>>>
>>> Hello,
>>>
>>> I wondered if Ceph can emit stats (via perf counters, statsd or in some
>>> other way) IO and bandwidth stats per Cep
,
> "parent": "0.0",
> "parent_split_bits": 0,
> "last_scrub": "342266'14514",
> "last_scrub_stamp": "2016-10-28 16:41:06.563820",
>
The description of ``--pool=data`` is fine but just confuses users.
http://docs.ceph.com/docs/jewel/start/quick-ceph-deploy/
should be synced with
https://github.com/ceph/ceph/blob/master/doc/start/quick-ceph-deploy.rst
I would recommend referring to ``quick-ceph-deploy.rst`` because the docs
in git a
Yeah, DreamHost seems to have an internal issue, which is not great for us.
Sorry for that.
On Tue, Jan 3, 2017 at 5:41 PM, Rajib Hossen
wrote:
> Hello, I can't browse docs.ceph.com for last 2/3 days. Google says it takes
> too many time to reload. I also couldn't ping the website. I also check
I've never done a migration of cephfs_metadata from spindle disks to
SSDs, but logically you could achieve this in 2 phases:
#1 Configure a CRUSH rule including both spindle disks and SSDs
#2 Configure a CRUSH rule pointing only to SSDs
* This would cause massive data shuffling.
On Mon, Jan 2,
The best practice for reweighting OSDs is to run
test-reweight-by-utilization, a dry run of the reweighting, before
running reweight-by-utilization.
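That workflow can be sketched as follows (cluster-dependent; 120 is an example threshold, meaning only OSDs more than 20% above average utilization are touched):

```shell
# Dry run: reports which OSDs would be reweighted without changing anything.
ceph osd test-reweight-by-utilization 120

# If the proposed changes look sane, apply them.
ceph osd reweight-by-utilization 120
```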
On Sat, Dec 31, 2016 at 3:05 AM, Brian Andrus
wrote:
> We have a set it and forget it cronjob setup once an hour to keep things a
> bit more balan
On Fri, Dec 30, 2016 at 7:27 PM, Kees Meijs wrote:
> Thanks, I'll try a manual reweight at first.
Great.
CRUSH would probably be able to be more clever in the future anyway.
>
> Have a happy new year's eve (yes, I know it's a day early)!
>
> Regards,
> Kees
>
> On 30-12-16 11:17, Wido den Holla
On Fri, Dec 30, 2016 at 7:17 PM, Wido den Hollander wrote:
>
>> On 30 December 2016 at 11:06, Kees Meijs wrote:
>>
>>
>> Hi Asley,
>>
>> We experience (using Hammer) a similar issue. Not that I have a perfect
>> solution to share, but I felt like mentioning a "me too". ;-)
>>
>> On a side note:
You can track the activity of the acting set by using:
# ceph daemon osd.${osd id} dump_ops_in_flight
On Fri, Dec 30, 2016 at 3:59 PM, Jaemyoun Lee
wrote:
> Dear Wido,
> Is there a command to check the ACK? Or, may you tell me a source code
> function for the received ACK?
>
> Thanks,
> Jae
>
> On Thu
hu, Dec 29, 2016 at 2:01 PM, Ukko Hakkarainen
> wrote:
>>
>> Shinobe,
>>
>> I'll re-check if the info I'm after is there, I recall not. I'll get back
>> to you later.
>>
>> Thanks!
>>
>> > Shinobu Kinjo kirjoitti 29.12.201
And we may be interested in your cluster's configuration.
# ceph --show-config > $(hostname).$(date +%Y%m%d).ceph_conf.txt
On Fri, Dec 30, 2016 at 7:48 AM, David Turner wrote:
> Another thing that I need to make sure on is that your number of PGs in
> the pool with 90% of the data is a power o
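As an aside, a PG count n is a power of two exactly when n > 0 and (n & (n - 1)) == 0, which is easy to check in plain shell (a generic sketch, not from the thread):

```shell
#!/bin/sh
# Succeeds (exit 0) iff the given PG count is a power of two.
is_power_of_two() {
    n="$1"
    [ "$n" -gt 0 ] && [ $(( n & (n - 1) )) -eq 0 ]
}

# Example pool sizes; only 600 fails the check.
for pg_num in 512 1024 600; do
    if is_power_of_two "$pg_num"; then
        echo "$pg_num: power of two"
    else
        echo "$pg_num: NOT a power of two"
    fi
done
```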
I always tend to jump into:
https://github.com/ceph
Everything is there.
On Fri, Dec 30, 2016 at 2:34 AM, Michael Hackett wrote:
> Hello Andre,
>
> The Ceph site would be the best place to get the information you are looking
> for, specifically the docs section: http://docs.ceph.com/docs/maste
Please see the following:
http://docs.ceph.com/docs/giant/architecture/
Everything you would want to know about is there.
Regards,
On Thu, Dec 29, 2016 at 8:27 AM, Ukko wrote:
> I'd be interested in CRUSH algorithm simplified in series of
> pictures. How does a storage node write and client
On Sun, Dec 25, 2016 at 7:33 AM, Brad Hubbard wrote:
> On Sun, Dec 25, 2016 at 3:33 AM, w...@42on.com wrote:
>>
>>
>>> On 24 Dec 2016 at 17:20, L. Bader wrote the following:
>>>
>>> Do you have any references on this?
>>>
>>> I searched for something like this quite a lot and did
: 2293522445,
> "omap_digest": 4294967295,
> "expected_object_size": 4194304,
> "expected_write_size": 4194304,
> "alloc_hint_flags": 53,
> "watchers": {}
> }
>
> Depending on the output one method for
Would you be able to execute ``ceph pg ${PG ID} query`` against that
particular PG?
On Wed, Dec 21, 2016 at 11:44 PM, Andras Pataki
wrote:
> Yes, size = 3, and I have checked that all three replicas are the same zero
> length object on the disk. I think some metadata info is mismatching what
> t
Can you share exact steps you took to build the cluster?
On Thu, Dec 22, 2016 at 3:39 AM, Aakanksha Pudipeddi
wrote:
> I mean setup a Ceph cluster after compiling from source and make install. I
> usually use the long form to setup the cluster. The mon setup is fine but
> when I create an OSD u
ng set-overlay ? we didn't sweep the clients out while setting overlay
>
> -- Original --
> From: "JiaJia Zhong";
> Date: Wed, Dec 14, 2016 11:24 AM
> To: "Shinobu Kinjo";
> Cc: "CEPH list"; "ukernel";
&
Would you give us some output?
# getfattr -n ceph.quota.max_bytes /some/dir
and
# ls -l /some/dir
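For reference, CephFS directory quotas are managed through virtual extended attributes; a sketch (path and size are illustrative, and it requires a CephFS mount plus the attr tools):

```shell
# Set a quota of ~100 MB on a directory.
setfattr -n ceph.quota.max_bytes -v 100000000 /some/dir

# Read it back.
getfattr -n ceph.quota.max_bytes /some/dir
```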
On Thu, Dec 15, 2016 at 4:41 PM, gjprabu wrote:
>
> Hi Team,
>
> We are using ceph version 10.2.4 (Jewel) and data's are mounted
> with cephfs file system in linux. We are trying to s
-p ${cache pool} ls
# rados -p ${cache pool} get ${object} /tmp/file
# ls -l /tmp/file
-- Original --
From: "Shinobu Kinjo";
Date: Tue, Dec 13, 2016 06:21 PM
To: "JiaJia Zhong";
Cc: "CEPH list"; "ukernel";
Subject: Re:
On Tue, Dec 13, 2016 at 4:38 PM, JiaJia Zhong
wrote:
> hi cephers:
> we are using ceph hammer 0.94.9, yes, It's not the latest ( jewel),
> with some ssd osds for tiering, cache-mode is set to readproxy,
> everything seems to be as expected,
> but when reading some small files from c
On Sat, Dec 10, 2016 at 11:00 PM, Jason Dillaman wrote:
> I should clarify that if the OSD has silently failed (e.g. the TCP
> connection wasn't reset and packets are just silently being dropped /
> not being acked), IO will pause for up to "osd_heartbeat_grace" before
The number is how long an O
On Sat, Nov 19, 2016 at 6:59 AM, Brad Hubbard wrote:
> +ceph-devel
>
> On Fri, Nov 18, 2016 at 8:45 PM, Nick Fisk wrote:
>> Hi All,
>>
>> I want to submit a PR to include fix in this tracker bug, as I have just
>> realised I've been experiencing it.
>>
>> http://tracker.ceph.com/issues/9860
>>
>
:
> @Shinobu
>
> According to
> http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-osd/
>
> "If you cannot start an OSD because it is full, you may delete some data by
> deleting some placement group directories in the full OSD."
>
>
> On 8 Au
On Mon, Aug 8, 2016 at 8:01 PM, Mykola Dvornik wrote:
> Dear ceph community,
>
> One of the OSDs in my cluster cannot start due to the
>
> ERROR: osd init failed: (28) No space left on device
>
> A while ago it was recommended to manually delete PGs on the OSD to let it
> start.
Who recommended t
On Sun, Aug 7, 2016 at 6:56 PM, Christian Balzer wrote:
>
> [Reduced to ceph-users, this isn't community related]
>
> Hello,
>
> On Sat, 6 Aug 2016 20:23:41 +0530 Venkata Manojawa Paritala wrote:
>
>> Hi,
>>
>> We have configured single Ceph cluster in a lab with the below
>> specification.
>>
>>
osd_heartbeat_addr must be in the [osd] section.
On Thu, Jul 28, 2016 at 4:31 AM, Venkata Manojawa Paritala
wrote:
> Hi,
>
> I have configured the below 2 networks in Ceph.conf.
>
> 1. public network
> 2. cluster_network
>
> Now, the heart beat for the OSDs is happening thru cluster_network. How can
t:
>
> IP Interactive UG ( haftungsbeschraenkt )
> Zum Sonnenberg 1-3
> 63571 Gelnhausen
>
> HRB 93402 at the Hanau district court
> Managing director: Oliver Dzombic
>
> Tax no.: 35 236 3622 1
> VAT ID: DE274086107
>
>
> On 15.07.2016 at 06:22, Shinobu Kinjo wrote:
You may want to change value of "osd_pool_default_crush_replicated_ruleset".
shinobu
On Fri, Jul 15, 2016 at 7:38 AM, Oliver Dzombic
wrote:
> Hi,
>
> wow, figured it out.
>
> If you dont have a ruleset 0 id, you are in trouble.
>
> So the solution is, that you >MUST< have a ruleset id 0.
>
> -
Can you reproduce with debug client = 20?
On Tue, Jul 5, 2016 at 10:16 AM, Goncalo Borges <
goncalo.bor...@sydney.edu.au> wrote:
> Dear All...
>
> We have recently migrated all our ceph infrastructure from 9.2.0 to 10.2.2.
>
> We are currently using ceph-fuse to mount cephfs in a number of client
Reproduce with 'debug mds = 20' and 'debug ms = 20'.
shinobu
On Mon, Jul 4, 2016 at 9:42 PM, Lihang wrote:
> Thank you very much for your advice. The command "ceph mds repaired 0"
> work fine in my cluster, my cluster state become HEALTH_OK and the cephfs
> state become normal also. but in the
clients' write operations to RADOS will be
canceled (maybe `canceled` is not the appropriate word in this sentence)
until the full epoch, before touching the same object,
since clients must have the latest OSD map.
Does it make sense?
Anyway, in case I've been missing something, someone will add more.
>
> Do
;: "Started\/Primary\/Active",
> "enter_time": "2016-06-27 04:57:36.876639",
> "might_have_unfound": [],
> "recovery_progress": {
> "backfill_targets": [],
> "waiting
What does `ceph pg 6.263 query` show you?
On Thu, Jun 30, 2016 at 12:02 PM, Goncalo Borges <
goncalo.bor...@sydney.edu.au> wrote:
> Dear Cephers...
>
> Today our ceph cluster gave us a couple of scrub errors regarding
> inconsistent pgs. We just upgraded from 9.2.0 to 10.2.2 two days ago.
>
> #
Would you enable debugging for osd.177?
debug osd = 20
debug filestore = 20
debug ms = 1
Cheers,
Shinobu
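Those levels can usually be injected at runtime without restarting the daemon, e.g. (a sketch; editing ceph.conf plus a restart achieves the same persistently):

```shell
ceph tell osd.177 injectargs '--debug-osd 20 --debug-filestore 20 --debug-ms 1'
```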
On Thu, Jun 2, 2016 at 2:31 AM, Jeffrey McDonald wrote:
> Hi,
>
> I just performed a minor ceph upgrade on my ubuntu 14.04 cluster from ceph
> version to0.94.6-1trusty to 0.94.7-1trusty. Upo