Hey Kenneth,
We encountered this when the number of strays (unlinked files yet to be
purged) reached 1 million, the result of a great many file removals
happening repeatedly on the fs. It can also happen when there are more than
100k files in a single directory with default settings.
You can tune it via [...]
[...] buy us enough time to move to bluestore.
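[Editorial sketch, not from the original mail: the setting name is cut off
above. A hedged guess, given "more than 100k files in a dir with default
settings", is mds_bal_fragment_size_max, which defaults to 100000 entries
per directory fragment. A minimal sketch assuming that is indeed the knob
meant:

    # assumption: the elided setting is mds_bal_fragment_size_max
    # raise the per-fragment entry limit at runtime on the active MDS:
    ceph tell mds.0 injectargs '--mds_bal_fragment_size_max 200000'
    # persistent form, under [mds] in ceph.conf:
    #   mds bal fragment size max = 200000
]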
--
*Rafael Lopez*
Research Devops Engineer
Monash University eResearch Centre
[...] files provided with hammer), but you can write basic scripts to
start/stop all OSDs manually (a rough sketch follows this excerpt). This was
OK for us, particularly since we didn't intend to run in that state for a
long period, and we eventually upgraded to jewel, soon to be luminous. In
your case, since trusty is supported in luminous, I [...]
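[Editorial sketch, not from the original mail — the kind of manual start
script meant above, with default paths and cluster name assumed:

    # start every OSD whose data directory exists on this host
    # (assumes the default "ceph" cluster name and default data paths)
    for d in /var/lib/ceph/osd/ceph-*; do
        id=${d##*-}
        ceph-osd --cluster ceph -i "$id"   # daemonizes by default
    done
    # stopping them again is the reverse, e.g. "killall ceph-osd"
]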
> [...] this into Mimic, but the ganesha FSAL has been around for years.)
>
> Thanks!
> sage
>
> [...] have you tried upgrading to it?
> --
> *From:* ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of
> Rafael Lopez <rafael.lo...@monash.edu>
> *Sent:* Thursday, 16 November 2017 11:59:14 AM
> *To:* Mark Nelson
> *Cc:* ceph-users
> *Subject:* [...]
> You can avoid this by telling librbd to always use WB mode (at least when
> benchmarking):
>
> rbd cache writethrough until flush = false
>
> Mark
>
>
> On 09/20/2017 01:51 AM, Rafael Lopez wrote:
>
>> Hi Alexandre,
>>
>> Yeah, we are using filestore for [...]
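[Editorial note between excerpts, not part of the thread: the option Mark
mentions above is a client-side ceph.conf setting. A minimal sketch, with
section placement assumed:

    [client]
    rbd cache = true
    rbd cache writethrough until flush = false

With the default (true), librbd stays in writethrough mode until the guest
issues its first flush; setting it to false makes the cache writeback from
the start, which is what makes WB-mode benchmark runs comparable.]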
> [...] inside a qemu machine, or directly with fio-rbd?)
>
>
>
> (I'm going to do a lot of benchmarks in the coming week; I'll post results
> to the mailing list soon.)
>
>
>
> ----- Original Message -----
> From: "Rafael Lopez" <rafael.lo...@monash.edu>
> To: "ceph-us[...]
Hey guys,
Wondering if anyone else has done some solid benchmarking of jewel vs
luminous, in particular on the same cluster before and after the upgrade
(same cluster, client and config). We recently upgraded a cluster from
10.2.9 to 12.2.0, and unfortunately I only captured results from a single [...]
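[Editorial sketch, not from the original mail — for anyone reproducing this
kind of before/after comparison, a minimal fio run against an RBD image
might look like this; the pool, image and client names are placeholders:

    # direct librbd benchmark via fio's rbd engine
    fio --name=seqwrite --ioengine=rbd --clientname=admin \
        --pool=bench --rbdname=testimg \
        --rw=write --bs=4M --iodepth=16 --size=10G

Running the identical job before and after the upgrade, from the same
client, is what makes the numbers comparable.]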
>
> Rgds,
> Shinobu
>
> - Original Message -
> From: "Andy Allan" <gravityst...@gmail.com>
> To: "Rafael Lopez" <rafael.lo...@monash.edu>
> Cc: ceph-users@lists.ceph.com
> Sent: Monday, January 11, 2016 8:08:38 PM
> Subject: [...]
Hi all,
I am curious what practices other people follow when removing OSDs from a
cluster. According to the docs, you are supposed to:
1. ceph osd out
2. stop the daemon
3. ceph osd crush remove
4. ceph auth del
5. ceph osd rm
(Spelled out for a single OSD in the sketch below.)
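[Editorial sketch, not from the original mail — the same sequence for a
single OSD, with osd.12 as a placeholder ID on a systemd host:

    ceph osd out 12                # 1. mark out so data migrates away
    systemctl stop ceph-osd@12     # 2. stop the daemon
    ceph osd crush remove osd.12   # 3. remove it from the CRUSH map
    ceph auth del osd.12           # 4. delete its cephx key
    ceph osd rm 12                 # 5. remove it from the OSD map
]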
What value does ceph osd out (1) add to the removal process, and why is [...]
[...] minb=121882KB/s, maxb=121882KB/s, mint=860318msec, maxt=860318msec
Disk stats (read/write):
  dm-1: ios=0/2072, merge=0/0, ticks=0/233, in_queue=233, util=0.01%,
  aggrios=1/2249, aggrmerge=7/559, aggrticks=9/254, aggrin_queue=261,
  aggrutil=0.01%
  sda: ios=1/2[...]
> [...]conf and rerunning the tests is sufficient.
>
>
>
> Thanks & Regards
>
> Somnath
>
>
>
> *From:* Rafael Lopez [mailto:rafael.lo...@monash.edu]
> *Sent:* Thursday, September 10, 2015 8:58 PM
> *To:* Somnath Roy
> *Cc:* ceph-users@lists.ceph.com
> *Subject:* [...]
> Thanks & Regards
>
> Somnath
>
>
> *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf
> Of *Rafael Lopez
> *Sent:* Thursday, September 10, 2015 8:24 PM
> *To:* ceph-users@lists.ceph.com
> *Subject:* [ceph-users] bad perf for librbd vs krbd using [...]
> At least we should get a warning somewhere (in the libvirt/qemu log);
> I don't think there's anything logged when the issue hits.
>
> Should I make tickets for this?
>
> Jan
>
> On 03 Sep 2015, at 02:57, Rafael Lopez <rafael.lo...@monash.edu> wrote:
>
> Hi Jan,
>
> Thanks for [...]
> cat /proc/$pid/limits
> echo /proc/$pid/fd/* | wc -w
>
> 2) Jumbo frames may be the cause; are they enabled on the rest of the
> network? In any case, get rid of NetworkManager ASAP and configure the
> interface manually, though it looks like your NIC might not support them.
>
> Jan
>
>
>
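[Editorial sketch, not from Jan's mail — a hedged way to run the fd check
from point 1 against a specific qemu process; the process name is an
assumption and may be qemu-system-x86_64 on other setups:

    pid=$(pidof qemu-kvm | head -n1)          # locate the VM process
    grep 'Max open files' /proc/$pid/limits   # the per-process fd limit
    ls /proc/$pid/fd | wc -l                  # descriptors currently open

If the open count is close to the limit, raising the limit (or fixing the
leak) is the next step.]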
Hi ceph-users,
Hoping to get some help with a tricky problem. I have a rhel7.1 VM guest
(the host machine is also rhel7.1) with its root disk presented from ceph
0.94.2-0 (rbd) using libvirt.
The VM also has a second rbd for storage presented from the same ceph
cluster, also using libvirt.
The VM boots [...]
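[Editorial sketch, not from the original mail — one way to confirm how both
disks are presented to the guest; the domain, pool and image names here are
placeholders:

    virsh domblklist myvm        # lists each target device and its source
    rbd info rbd/myvm-root       # details of the root disk image
    rbd info rbd/myvm-disk1      # details of the second storage image
]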