Re: [ceph-users] cephfs deleting files No space left on device

2019-05-10 Thread Rafael Lopez
Hey Kenneth, We encountered this when the number of strays (unlinked files yet to be purged) reached 1 million, which is the result of many, many file removals happening repeatedly on the fs. It can also happen when there are more than 100k files in a dir with default settings. You can tune it via …
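The excerpt is cut off right at the option name. Purely as a hedged pointer, the MDS setting usually associated with the 100k-entries-per-directory limit and with strays surfacing as ENOSPC ("No space left on device") is shown below; this is an assumption, not necessarily the option the original reply named, and the value is illustrative only:

    # directory fragments, including the MDS stray directories, reject new
    # entries with ENOSPC once they reach this size; raising it is the
    # common workaround (Mimic+ syntax via the monitor config store,
    # or set it under [mds] in ceph.conf on older releases)
    ceph config set mds mds_bal_fragment_size_max 200000   # default 100000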

[ceph-users] filestore split settings

2018-08-22 Thread Rafael Lopez
… buy us enough time to move to bluestore.
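The body of this message is almost entirely lost in the excerpt; for context, a hedged sketch of the filestore options the thread title refers to (filestore only, pre-Bluestore; the split-threshold formula and values are from memory and should be treated as assumptions):

    [osd]
    # a PG's collection directory is split into subdirectories once it holds
    # roughly filestore_split_multiple * abs(filestore_merge_threshold) * 16 files
    filestore_split_multiple = 8       # split later (fewer, larger directories)
    filestore_merge_threshold = -10    # a negative value also disables merging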

Re: [ceph-users] Migrating to new pools

2018-02-20 Thread Rafael Lopez
…lly possible?

Re: [ceph-users] Running Jewel and Luminous mixed for a longer period

2017-12-05 Thread Rafael Lopez
…les provided with hammer), but you can write basic script(s) to start/stop all OSDs manually. This was OK for us, particularly since we didn't intend to run in that state for a long period, and we eventually upgraded to jewel and will soon be on luminous. In your case, since trusty is supported in luminous I …
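As a hedged illustration of the kind of basic start script being described (the paths and the ceph-osd invocation are assumptions for a manually managed host, not taken from the original post):

    #!/bin/sh
    # start every OSD whose data directory exists on this host
    for dir in /var/lib/ceph/osd/ceph-*; do
        id=${dir##*-}
        ceph-osd -i "$id" --cluster ceph && echo "started osd.$id"
    done
    # stopping can be done the same way in reverse, e.g. pkill -f "ceph-osd -i <id>" per OSD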

Re: [ceph-users] who is using nfs-ganesha and cephfs?

2017-11-16 Thread Rafael Lopez
…this into Mimic, but the ganesha FSAL has been around for years.) Thanks! sage

Re: [ceph-users] luminous vs jewel rbd performance

2017-11-15 Thread Rafael Lopez
…Have you tried upgrading to it?

Re: [ceph-users] luminous vs jewel rbd performance

2017-11-15 Thread Rafael Lopez
…You can avoid this by telling librbd to always use WB mode (at least when benchmarking): rbd cache writethrough until flush = false -- Mark. On 09/20/2017 01:51 AM, Rafael Lopez wrote: Hi Alexandre, Yeah we are using filestore fo…
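A minimal sketch of where that override normally lives on the client doing the benchmarking (ceph.conf on the fio/librbd host; assuming the other librbd cache settings stay at their defaults):

    [client]
    rbd cache = true
    # skip the initial writethrough phase so benchmark writes see writeback
    # behaviour immediately, even though fio never issues a flush
    rbd cache writethrough until flush = false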

Re: [ceph-users] luminous vs jewel rbd performance

2017-09-20 Thread Rafael Lopez
…inside a qemu machine? Or directly with fio-rbd?) (I'm going to do a lot of benchmarks in the coming week, I'll post results on the mailing list soon.)

[ceph-users] luminous vs jewel rbd performance

2017-09-20 Thread Rafael Lopez
Hey guys, wondering if anyone else has done some solid benchmarking of jewel vs luminous, in particular on the same cluster that has been upgraded (same cluster, client and config). We have recently upgraded a cluster from 10.2.9 to 12.2.0, and unfortunately I only captured results from a single …
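For a like-for-like check across such an upgrade, the built-in rbd benchmark is one option; a hedged sketch only (flag names are the commonly documented ones, the pool/image are placeholders, and the command form changed between releases):

    # Jewel-era client
    rbd bench-write rbd/benchimg --io-size 4096 --io-threads 16 --io-total 1G --io-pattern rand

    # Luminous client (bench-write became "bench --io-type write")
    rbd bench --io-type write rbd/benchimg --io-size 4096 --io-threads 16 --io-total 1G --io-pattern rand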

Re: [ceph-users] double rebalance when removing osd

2016-01-11 Thread Rafael Lopez

Re: [ceph-users] double rebalance when removing osd

2016-01-10 Thread Rafael Lopez

[ceph-users] double rebalance when removing osd

2016-01-06 Thread Rafael Lopez
Hi all, I am curious what practices other people follow when removing OSDs from a cluster. According to the docs, you are supposed to: 1. ceph osd out 2. stop daemon 3. ceph osd crush remove 4. ceph auth del 5. ceph osd rm What value does ceph osd out (1) add to the removal process, and why is …
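For reference, a hedged sketch of that sequence as shell commands, plus the reweight-to-zero variant usually suggested for avoiding a second rebalance (the systemd unit name is assumed, older releases use different init tooling, and the OSD id is a placeholder):

    ID=12                            # placeholder OSD id

    # optional: drain the OSD first so step 3 causes no further data movement
    ceph osd crush reweight osd.$ID 0
    # ...wait for recovery to finish...

    ceph osd out $ID                 # 1. stop mapping new data to it
    systemctl stop ceph-osd@$ID      # 2. stop the daemon
    ceph osd crush remove osd.$ID    # 3. remove it from the CRUSH map
    ceph auth del osd.$ID            # 4. delete its cephx key
    ceph osd rm $ID                  # 5. remove it from the OSD map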

[ceph-users] bad perf for librbd vs krbd using FIO

2015-09-10 Thread Rafael Lopez
…minb=121882KB/s, maxb=121882KB/s, mint=860318msec, maxt=860318msec Disk stats (read/write): dm-1: ios=0/2072, merge=0/0, ticks=0/233, in_queue=233, util=0.01%, aggrios=1/2249, aggrmerge=7/559, aggrticks=9/254, aggrin_queue=261, aggrutil=0.01% sda: ios=1/2…
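A hedged sketch of the kind of fio job pair used for this comparison (the pool, image name and mapped /dev/rbd0 path are placeholders, not taken from the original post; run with "fio <jobfile>"):

    ; librbd path: fio drives the image directly through the rbd ioengine
    [librbd-randwrite]
    ioengine=rbd
    clientname=admin
    pool=rbd
    rbdname=fio-test
    rw=randwrite
    bs=4k
    iodepth=32

    ; krbd path: the same image mapped with "rbd map" and used as a block device
    ; stonewall makes this job wait until the librbd job has finished
    [krbd-randwrite]
    stonewall
    ioengine=libaio
    direct=1
    filename=/dev/rbd0
    rw=randwrite
    bs=4k
    iodepth=32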

Re: [ceph-users] bad perf for librbd vs krbd using FIO

2015-09-10 Thread Rafael Lopez
…onf and rerunning the tests is sufficient. Thanks & Regards, Somnath

Re: [ceph-users] bad perf for librbd vs krbd using FIO

2015-09-10 Thread Rafael Lopez

Re: [ceph-users] libvirt rbd issue

2015-09-04 Thread Rafael Lopez
…At least we should get a warning somewhere (in the libvirt/qemu log) - I don't think there's anything when the issue hits. Should I make tickets for this? Jan. On 03 Sep 2015, at 02:57, Rafael Lopez <rafael.lo...@monash.edu> wrote: Hi Jan, Thanks for…

Re: [ceph-users] libvirt rbd issue

2015-09-02 Thread Rafael Lopez
…cat /proc/$pid/limits; echo /proc/$pid/fd/* | wc -w. 2) Jumbo frames may be the cause; are they enabled on the rest of the network? In any case, get rid of NetworkManager ASAP and set it manually, though it looks like your NIC might not support them. Jan
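Expanding on the two commands quoted above, a hedged sketch for checking whether the qemu process is running short of file descriptors (the process-matching pattern is a placeholder; librbd keeps a connection per OSD it talks to, so fd usage grows with cluster size and attached volumes):

    # find the qemu process for the guest (pattern is a placeholder)
    pid=$(pgrep -f 'qemu.*guestname' | head -n1)
    # soft/hard "open files" limits for that process
    grep 'open files' /proc/$pid/limits
    # number of file descriptors currently open
    ls /proc/$pid/fd | wc -l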

[ceph-users] libvirt rbd issue

2015-09-01 Thread Rafael Lopez
Hi ceph-users, Hoping to get some help with a tricky problem. I have a rhel7.1 VM guest (host machine also rhel7.1) with root disk presented from ceph 0.94.2-0 (rbd) using libvirt. The VM also has a second rbd for storage presented from the same ceph cluster, also using libvirt. The VM boots …