[ceph-users] Re: osd is immidietly down and uses CPU full.

2020-02-02 Thread Wido den Hollander
On 2/3/20 8:39 AM, wes park wrote: > How to know an OSD is super busy? Thanks. Check if it's using 100% CPU, for example, and check the disk utilization with iostat. Wido > On 2/2/20 5:20 PM, Andreas John wrote: > Hello, ...
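
A quick way to check both in practice, a rough sketch where osd.12 and sdb are placeholder names for your own OSD id and backing device:

    # is the ceph-osd process pegging a core? (osd.12 is a placeholder id)
    top -b -n 1 -p "$(pgrep -f 'ceph-osd.*--id 12')"
    # disk utilization of the OSD's backing device; watch the %util column
    iostat -x sdb 1 5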

[ceph-users] Re: osd is immidietly down and uses CPU full.

2020-02-02 Thread wes park
How to know an OSD is super busy? Thanks. Wido den Hollander > On 2/2/20 5:20 PM, Andreas John wrote: > Hello, > what you see is a stack trace, so the OSD is hitting an unexpected state (otherwise there would be an error handler). > The crash happens when the OSD wants to ...

[ceph-users] Re: osd is immidietly down and uses CPU full.

2020-02-02 Thread Wido den Hollander
On 2/2/20 5:20 PM, Andreas John wrote: > Hello, > what you see is a stack trace, so the OSD is hitting an unexpected state (otherwise there would be an error handler). > The crash happens when the OSD wants to read from a pipe while processing a heartbeat. To me it sounds like a networking ...

[ceph-users] Re: TR: Understand ceph df details

2020-02-02 Thread Ingo Reimann
Hi Frederic, I guess it is not stuck but just iterating. My "orphans find" job has been running for nearly 2 months now! I hope you started it in a screen session ;) Happy waiting, Ingo - Original message - From: "CUZA Frédéric" To: "ceph-users" Sent: Friday, 31 January 2020, 11:19:49
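
For reference, such a scan is typically started along these lines (a sketch with placeholder pool and job names; newer releases replace this tool with rgw-orphan-list):

    # start a detachable session so the scan survives a dropped SSH connection
    screen -S orphans
    # scan the RGW data pool for orphaned RADOS objects
    radosgw-admin orphans find --pool=default.rgw.buckets.data --job-id=orphans1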

[ceph-users] Re: Questions on Erasure Coding

2020-02-02 Thread Martin Verges
Hello Dave, you can configure Ceph to pick multiple OSDs per host and therefore work like a classic RAID. It will cause downtime whenever you have to do maintenance on a system, but if you plan to grow the cluster quite fast, it may be an option for you. -- Martin Verges Managing director Hint: Secu...
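
A minimal sketch of one way to set this up, with placeholder profile and pool names; using the OSD as the failure domain means chunks may share a host, which is exactly the trade-off described above:

    # k=4,m=2 profile whose CRUSH failure domain is the OSD instead of the host
    ceph osd erasure-code-profile set ec42-osd k=4 m=2 crush-failure-domain=osd
    # pool using that profile (the PG count of 64 is only a placeholder)
    ceph osd pool create ecpool 64 64 erasure ec42-osd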

[ceph-users] Re: data loss on full file system?

2020-02-02 Thread Håkan T Johansson
On Tue, 28 Jan 2020, Paul Emmerich wrote: Yes, data that is not synced is not guaranteed to be written to disk; this is consistent with POSIX semantics. Getting all 0s back during read() of a part for which a write() of data other than 0s returned successfully does not seem to be consistent with ...
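
A small illustration of the "not synced" part, assuming /mnt/cephfs is the nearly full mount in question and testfile is a placeholder path:

    # conv=fsync makes dd call fsync() before exiting, so a failure to persist
    # the data shows up as a non-zero exit status instead of silent loss
    dd if=/dev/urandom of=/mnt/cephfs/testfile bs=1M count=16 conv=fsync \
        || echo "write was not durably stored"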

[ceph-users] Re: small cluster HW upgrade

2020-02-02 Thread Marc Roos
You can optimize ceph-osd for this, of course. It would benefit people who like to use 1Gbit connections. I can understand that putting time into it now does not make sense because of the availability of 10Gbit. However, I do not get why this was not optimized already 5 or 10 years ago. ---

[ceph-users] Re: osd is immidietly down and uses CPU full.

2020-02-02 Thread Andreas John
Hello, what you see is a stack trace, so the OSD is hitting an unexpected state (otherwise there would be an error handler). The crash happens when the OSD wants to read from a pipe while processing a heartbeat. To me it sounds like a networking issue. I see the other OSDs on that host are healthy, ...
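
If it really is the network, a couple of quick checks along these lines may help (peer-osd-host is a placeholder; -s 8972 assumes a 9000-byte MTU, use 1472 for a 1500-byte MTU):

    # verify the cluster-network path and MTU towards a peer OSD host
    ping -M do -s 8972 -c 4 peer-osd-host
    # check whether other OSDs keep reporting this one as failed
    ceph health detail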

[ceph-users] Re: small cluster HW upgrade

2020-02-02 Thread Anthony D'Atri
This is a natural condition of bonding; it has little to do with ceph-osd. Make sure your hash policy is set appropriately, so that you even have a chance of using both links. https://support.packet.com/kb/articles/lacp-bonding The larger the set of destinations, the more likely you are to spread ...
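
On Linux the active policy can be inspected and changed roughly like this (a sketch; bond0 is a placeholder, and the change should also be made persistent in your network configuration):

    # show the current transmit hash policy of the bond
    cat /sys/class/net/bond0/bonding/xmit_hash_policy
    # layer3+4 hashes on IP and port, giving flows a chance to spread over both links
    # (may be rejected while the bond is up, depending on kernel version)
    echo layer3+4 > /sys/class/net/bond0/bonding/xmit_hash_policy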

[ceph-users] Re: Write i/o in CephFS metadata pool

2020-02-02 Thread Patrick Donnelly
On Wed, Jan 29, 2020 at 1:25 AM Samy Ascha wrote: > Hi! > I've been running CephFS for a while now, and ever since setting it up I've seen unexpectedly large write I/O on the CephFS metadata pool. > The filesystem is otherwise stable and I'm seeing no usage issues. > I'm in a read-intensive ...
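
One way to see where that write I/O is going, a sketch in which cephfs_metadata and mds.a are placeholder names:

    # per-pool client I/O rates; shows how much write traffic hits the metadata pool
    ceph osd pool stats cephfs_metadata
    # MDS perf counters; journal and log flushes to the metadata pool show up here
    ceph daemon mds.a perf dump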

[ceph-users] Re: ceph fs dir-layouts and sub-directory mounts

2020-02-02 Thread Patrick Donnelly
On Wed, Jan 29, 2020 at 3:04 AM Frank Schilder wrote: > I would like to (in this order) > - set the data pool for the root "/" of a ceph-fs to a custom value, say "P" (not the initial data pool used in fs new) > - create a sub-directory of "/", for example "/a" > - mount the sub-directory "/a" ...
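
In rough outline, that sequence looks something like the sketch below (placeholder names throughout; the pool "P" must exist before it can be added to the file system):

    # allow P as a data pool and make it the layout for the root directory
    ceph fs add_data_pool cephfs P
    setfattr -n ceph.dir.layout.pool -v P /mnt/cephfs
    # create the sub-directory and mount only that part of the tree
    mkdir /mnt/cephfs/a
    mount -t ceph mon1:6789:/a /mnt/a -o name=usera,secretfile=/etc/ceph/usera.secret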

[ceph-users] Re: small cluster HW upgrade

2020-02-02 Thread Marc Roos
OSDs do not even use bonding efficiently. If they were to use 2 links concurrently, it would be a lot better. https://www.mail-archive.com/ceph-users@lists.ceph.com/msg35474.html -Original Message- To: ceph-users@ceph.io Subject: [ceph-users] Re: small cluster HW upgrade Hi Philipp ...

[ceph-users] Re: CephFS - objects in default data pool

2020-02-02 Thread Patrick Donnelly
On Tue, Jan 28, 2020 at 7:26 AM CASS Philip wrote: > I have a query about https://docs.ceph.com/docs/master/cephfs/createfs/: > "The data pool used to create the file system is the "default" data pool and the location for storing all inode backtrace information, used for hard link management ...
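
The backtrace objects being referred to can be inspected roughly like this (a sketch; cephfs_data and the object name are placeholders):

    # the default data pool holds one (possibly empty) object per file for backtraces
    rados -p cephfs_data ls | head
    # the backtrace itself is stored in the object's "parent" xattr
    rados -p cephfs_data listxattr 10000000001.00000000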