Probably the case. I've checked about 10% of the objects in the metadata
pool (rados -p metadata stat $objname). They've all been 0-byte objects.
Most of them have 1-10 omap vals, usually 408 bytes each.
Based on the usage of the other pools on the SSDs, that comes out to
roughly 46GB of omap/leveldb data.
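A sketch of that sampling pass, assuming the rados CLI and a pool literally named metadata; the 10% sample via awk and the per-object loop are illustrative, not the exact commands used:

```shell
# Sample ~10% of the objects in the metadata pool, printing each object's
# size and its omap value count. Pool name and sampling method are
# assumptions; adjust for your cluster.
rados -p metadata ls | awk 'NR % 10 == 0' | while read -r obj; do
    rados -p metadata stat "$obj"                  # size (0 bytes here)
    rados -p metadata listomapkeys "$obj" | wc -l  # omap entries per object
done
```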
Yeah -- as I said, 4KB was a generous number. It's going to vary some
though, based on the actual length of the names you're using, whether you
have symlinks or hard links, snapshots, etc.
-Greg
That doesn't make sense -- 50MB for 36 million files is 1.5 bytes each.
How do you have things configured, exactly?
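Greg's arithmetic, spelled out (taking MB as MiB; both numbers come from the message above):

```shell
# ~50MB of metadata across 36 million files -> bytes per file:
awk 'BEGIN { printf "%.2f\n", 50 * 1024 * 1024 / 36000000 }'
```

That is roughly 1.5 bytes per file, which is why the reported pool size looks implausibly small.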
On Sat, 25 Apr 2015, Gregory Farnum wrote:
That's odd -- I almost want to think the pg statistics reporting is going
wrong somehow.
...I bet the leveldb/omap stuff isn't being included in the pg statistics.
That could be why and would make sense with what you've got here. :)
-Greg
On Sat, Apr 25, 2015 at 10:32 AM Adam Tygart
Hi
I was doing some testing on an erasure-coded CephFS cluster. The cluster
is running the Giant 0.87.1 release.
Cluster info
15 × 36-drive nodes (journals on the same OSDs)
3 × 4-drive SSD cache nodes (Intel DC S3500)
3 × MON/MDS nodes
EC 10+3
10G Ethernet for the public and cluster networks
We got
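For reference, an EC 10+3 pool like the one described could be created roughly as follows on Giant; the profile name, pool name, and PG count are placeholders invented for this sketch:

```shell
# k=10 data chunks + m=3 coding chunks: tolerates 3 failures at 1.3x raw
# overhead. Names and PG count are placeholders; size PGs for 15x36 OSDs.
ceph osd erasure-code-profile set ec10p3 k=10 m=3 ruleset-failure-domain=host
ceph osd pool create ecpool 8192 8192 erasure ec10p3
```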
Hello,
I think the dd test isn't a 100% replica of what Ceph actually does,
then.
My suspicion would be the 4K blocks: when people test maximum bandwidth
they do it with rados bench or other tools that write Ceph's optimum
block size, 4MB.
I currently have no unused
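One way to see the gap described above is to run rados bench at both block sizes; the scratch pool name testpool is an assumption:

```shell
# Ceph-friendly 4MB writes vs the 4KB writes the dd test used:
rados bench -p testpool 30 write -b 4194304 -t 16 --no-cleanup
rados bench -p testpool 30 write -b 4096 -t 16 --no-cleanup
rados -p testpool cleanup
```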
It seems you just grepped for ceph-osd - that doesn't include sockets
opened by the kernel client, which is what I was after. Paste the
entire netstat?
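Since kernel-client sockets have no owning userspace process, one way to pull them out of a full netstat is to filter by Ceph's default ports (6789 for mons, 6800-7300 for OSDs) rather than by process name:

```shell
# TCP connections to Ceph daemons, regardless of owning process:
netstat -tn | awk '$5 ~ /:(6789|6[89][0-9][0-9]|7[0-2][0-9][0-9]|7300)$/'
```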
Ouch, bummer! Here are the full netstats, sorry about the delay:
http://nik.lbox.cz/download/ceph/
BR
nik
Thanks,
Ilya
Hi,
Gregory Farnum wrote:
The MDS will run in 1GB, but the more RAM it has the more of the metadata
you can cache in memory. The faster single-threaded performance your CPU
has, the more metadata IOPS you'll get. We haven't done much work
characterizing it, though.
Ok, thanks for the
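The RAM-to-cached-metadata relationship Greg describes is governed by mds_cache_size (a count of cached inodes, default 100000 in this era of Ceph). A sketch, with the daemon name mds.0 as a placeholder:

```shell
# Cache ~1M inodes instead of the default 100k (budget extra RAM for it);
# the equivalent persistent setting is "mds cache size" in ceph.conf [mds].
ceph tell mds.0 injectargs '--mds_cache_size 1000000'
```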
Thanks Greg and Steffen for your answers. I will run some tests.
Gregory Farnum wrote:
Yeah. The metadata pool will contain:
1) MDS logs, which I think by default will take up to 200MB per
logical MDS. (You should have only one logical MDS.)
2) directory metadata objects, which contain the
Yeah, that's definitely something that we'd address soon.
Yehuda
- Original Message -
From: Ben b@benjackson.email
To: Ben Hines bhi...@gmail.com, Yehuda Sadeh-Weinraub
yeh...@redhat.com
Cc: ceph-users ceph-us...@ceph.com
Sent: Friday, April 24, 2015 5:14:11 PM
Subject: Re:
We're currently putting data into our cephfs pool (cachepool in front
of it as a caching tier), but the metadata pool contains ~50MB of data
for 36 million files. If that were an accurate estimation, we'd have a
metadata pool closer to ~140GB. Here is a ceph df detail:
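The ~140GB expectation is just Greg's generous 4KB-per-file figure times 36 million files:

```shell
# 36M files x 4KB of metadata each, in GiB:
awk 'BEGIN { printf "%.1f\n", 36000000 * 4096 / (1024 * 1024 * 1024) }'
```

That is about 137 GiB, versus the ~50MB the pool reports, hence the suspicion that omap/leveldb usage isn't being counted.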
I'm able to reach around 20,000-25,000 IOPS with 4K blocks on the S3500
(with O_DSYNC), so yes, around 80-100MB/s.
I'll bench the new S3610 soon to compare.
- Original Message -
From: Anthony Levesque aleves...@gtcomm.net
To: Christian Balzer ch...@gol.com
Cc: ceph-users ceph-users@lists.ceph.com
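For context, numbers like these are usually produced with a direct, synchronous 4K write test against the SSD; /dev/sdX is a placeholder, and this destroys data on the target device:

```shell
# ~400MB of 4K O_DIRECT+O_DSYNC writes; IOPS = count / elapsed seconds.
dd if=/dev/zero of=/dev/sdX bs=4k count=100000 oflag=direct,dsync
```

At 25,000 IOPS × 4KiB that works out to ~100MB/s, matching the figures quoted.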