On Fri, Oct 6, 2017 at 2:22 PM Shawfeng Dong wrote:
> Here is a quick update. I found that a CephFS client process was accessing
> the big 1TB file, which I think had a lock on the file, preventing the
> flushing of objects to the underlying data pool. Once I killed that
> process, objects started to flush to the data pool automatically.
You can still use EC for CephFS without a cache tier since you are on
Luminous. This is new functionality in Luminous, and the majority of guides
you will see are for setups on Jewel and older versions of Ceph. Here are the
docs regarding this, including how to do it:
http://docs.
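The short version (a sketch, assuming an EC pool named cephfs_data_ec and a
filesystem named cephfs; allow_ec_overwrites is the Luminous feature that
makes this possible, and it requires bluestore OSDs):
# ceph osd pool create cephfs_data_ec 64 64 erasure
# ceph osd pool set cephfs_data_ec allow_ec_overwrites true
# ceph fs add_data_pool cephfs cephfs_data_ec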
Here is a quick update. I found that a CephFS client process was accessing
the big 1TB file, which I think had a lock on the file, preventing the
flushing of objects to the underlying data pool. Once I killed that
process, objects started to flush to the data pool automatically (with
target_max_bytes set).
All of this data is test data, yeah? I would start by removing the
cache-tier and pool, recreate it and attach it, configure all of the
settings including the maximums, and start testing things again. I would
avoid doing the 1.3TB file test until after you've confirmed that the
smaller files are flushing properly.
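Roughly, that teardown/rebuild would look like the following (a sketch using
this thread's pool names; PG counts are illustrative, the forward/flush steps
drain the cache before teardown, and deleting a pool on Luminous requires
mon_allow_pool_delete=true):
# ceph osd tier cache-mode cephfs_cache forward --yes-i-really-mean-it
# rados -p cephfs_cache cache-flush-evict-all
# ceph osd tier remove-overlay cephfs_data
# ceph osd tier remove cephfs_data cephfs_cache
# ceph osd pool delete cephfs_cache cephfs_cache --yes-i-really-really-mean-it
# ceph osd pool create cephfs_cache 128 128 replicated
# ceph osd tier add cephfs_data cephfs_cache
# ceph osd tier cache-mode cephfs_cache writeback
# ceph osd tier set-overlay cephfs_data cephfs_cache
# ceph osd pool set cephfs_cache hit_set_type bloom
# ceph osd pool set cephfs_cache target_max_bytes 1099511627776
# ceph osd pool set cephfs_cache target_max_objects 1000000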
Curiously, it has been quite a while, but there is still no object in the
underlying data pool:
# rados -p cephfs_data ls
Any advice?
On Fri, Oct 6, 2017 at 9:45 AM, David Turner wrote:
> Notice in the URL for the documentation the use of "luminous". When you
> looked a few weeks ago, you might have been looking at the documentation
> for a different version of Ceph.
Notice in the URL for the documentation the use of "luminous". When you
looked a few weeks ago, you might have been looking at the documentation
for a different version of Ceph. You can change that to jewel, hammer,
kraken, master, etc depending on which version of Ceph you are running or
reading about.
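For example, the same page for two different releases:
http://docs.ceph.com/docs/jewel/rados/operations/cache-tiering/
http://docs.ceph.com/docs/luminous/rados/operations/cache-tiering/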
Hi Christian,
I set those via CLI:
# ceph osd pool set cephfs_cache target_max_bytes 1099511627776
# ceph osd pool set cephfs_cache target_max_objects 1000000
but manual flushing doesn't appear to work:
# rados -p cephfs_cache cache-flush-evict-all
100046a.0ca6
it just gets stuck
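A quick way to double-check that those settings actually registered, and to
watch pool usage while the flush runs (a sketch using the pool name above):
# ceph osd pool get cephfs_cache target_max_bytes
# ceph osd pool get cephfs_cache target_max_objects
# ceph df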
On Fri, 6 Oct 2017 09:14:40 -0700 Shawfeng Dong wrote:
> I found the command: rados -p cephfs_cache cache-flush-evict-all
>
That's not what you want/need.
Though it will fix your current "full" issue.
> The documentation (
> http://docs.ceph.com/docs/luminous/rados/operations/cache-tiering/) has
> been improved a lot since I last checked it a few weeks ago!
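What you normally want instead of manual eviction is the tiering agent
flushing on its own, driven by the ratio knobs relative to those maxima.
A sketch (the values here are illustrative, not a recommendation):
# ceph osd pool set cephfs_cache cache_target_dirty_ratio 0.4
# ceph osd pool set cephfs_cache cache_target_dirty_high_ratio 0.6
# ceph osd pool set cephfs_cache cache_target_full_ratio 0.8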
I found the command: rados -p cephfs_cache cache-flush-evict-all
The documentation (
http://docs.ceph.com/docs/luminous/rados/operations/cache-tiering/) has
been improved a lot since I last checked it a few weeks ago!
-Shaw
On Fri, Oct 6, 2017 at 9:10 AM, Shawfeng Dong wrote:
> Thanks, Luis.
>
On Fri, 6 Oct 2017 16:55:31 +0100 Luis Periquito wrote:
> Not looking at anything else, you didn't set the max_bytes or
> max_objects for it to start flushing...
>
Precisely!
He says, cackling, as he goes to cash in his bet. ^o^
> On Fri, Oct 6, 2017 at 4:49 PM, Shawfeng Dong wrote:
> > Dear all,
Thanks, Luis.
I've just set max_bytes and max_objects:
target_max_objects: 1000000 (1M)
target_max_bytes: 1099511627776 (1TB)
but nothing appears to be happening. Is there a way to force flushing?
Thanks,
Shaw
On Fri, Oct 6, 2017 at 8:55 AM, Luis Periquito wrote:
> Not looking at anything else, you didn't set the max_bytes or
> max_objects for it to start flushing...
Not looking at anything else, you didn't set the max_bytes or
max_objects for it to start flushing...
On Fri, Oct 6, 2017 at 4:49 PM, Shawfeng Dong wrote:
> Dear all,
>
> Thanks a lot for the very insightful comments/suggestions!
>
> There are 3 OSD servers in our pilot Ceph cluster, each with 2x 1TB SSDs
> (boot disks), 12x 8TB SATA HDDs and 2x 1.2TB NVMe SSDs.
Dear all,
Thanks a lot for the very insightful comments/suggestions!
There are 3 OSD servers in our pilot Ceph cluster, each with 2x 1TB SSDs
(boot disks), 12x 8TB SATA HDDs and 2x 1.2TB NVMe SSDs. We use the
bluestore backend, with the first NVMe as the WAL and DB devices for OSDs
on the HDDs. And the second NVMe in each server is used for the cache tier.
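For the record, carving an OSD like that on Luminous looks roughly like this
(a sketch with illustrative device names, assuming pre-made partitions on the
first NVMe for DB and WAL):
# ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1 --block.wal /dev/nvme0n1p2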
On Fri, Oct 6, 2017, 1:05 AM Christian Balzer wrote:
>
> Hello,
>
> On Fri, 06 Oct 2017 03:30:41 +0000 David Turner wrote:
>
> > You're missing most all of the important bits. What the osds in your
> > cluster look like, your tree, and your cache pool settings.
> >
> > ceph df
> > ceph osd df
> > ceph osd tree
> > ceph osd pool get cephfs_cache all
The default file size limit for CephFS is 1TB, see also here:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-May/018208.html
(also includes a pointer on how to increase it)
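For example, raising it is a single filesystem setting (a sketch, assuming
the filesystem is named cephfs; the value is in bytes, here 2TB):
# ceph fs set cephfs max_file_size 2199023255552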
On Fri, Oct 6, 2017 at 12:45 PM, Shawfeng Dong wrote:
> Dear all,
>
> We just set up a Ceph cluster, running the latest stable release Ceph
> v12.2.0 (Luminous):
Hello,
On Fri, 06 Oct 2017 03:30:41 +0000 David Turner wrote:
> You're missing most all of the important bits. What the osds in your
> cluster look like, your tree, and your cache pool settings.
>
> ceph df
> ceph osd df
> ceph osd tree
> ceph osd pool get cephfs_cache all
>
Especially the last one.
You're missing most all of the important bits. What the osds in your
cluster look like, your tree, and your cache pool settings.
ceph df
ceph osd df
ceph osd tree
ceph osd pool get cephfs_cache all
You have your writeback cache on 3 nvme drives. It looks like you have
1.6TB available between them.
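The thing to check is that target_max_bytes sits well below the cache pool's
usable capacity, otherwise the cache OSDs fill up before the agent ever
flushes. As an illustrative example, capping at roughly 80% of 1.6TB usable:
# ceph osd pool set cephfs_cache target_max_bytes 1407374883553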
Dear all,
We just set up a Ceph cluster, running the latest stable release Ceph
v12.2.0 (Luminous):
# ceph --version
ceph version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc)
The goal is to serve Ceph filesystem, for which we created 3 pools:
# ceph osd lspools
1 cephfs_data,2 cephfs_metadata,3 cephfs_cache,
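For anyone reproducing the setup: with the data and metadata pools created,
the filesystem itself would have been created along these lines (a sketch,
using the pool names above):
# ceph fs new cephfs cephfs_metadata cephfs_data
with the cache pool then attached as a writeback tier on cephfs_data, as
discussed earlier in the thread.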