> It's mds_beacon_grace. Set that on the monitor to control the replacement of
> laggy MDS daemons,
Sounds like William's issue is something else. William shuts down MDS 2 and
MON 4 simultaneously. The log shows that some time later (we don't know how
long), MON 3 detects that MDS 2 is gone ("M
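For reference, a minimal sketch of raising mds_beacon_grace on the monitors; the value is illustrative (the default is 15 seconds), and a matching entry should also go into ceph.conf so it survives restarts:

  # raise the grace period (seconds) before a laggy MDS is replaced (illustrative value)
  ceph tell mon.* injectargs '--mds_beacon_grace 60'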
I have a fairly large cluster running Ceph BlueStore with extremely fast
SAS SSDs for the metadata. Doing fio benchmarks I am getting 200k-300k
random write IOPS, but during sustained ElasticSearch workloads my
clients seem to hit a wall of around 1100 IO/s per RBD device. I've tried
1 RBD and 4
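For anyone wanting to reproduce the comparison, a minimal sketch of a 4k random-write fio run against an RBD image through librbd; the pool, image name, queue depth and runtime are placeholders, not the poster's actual settings:

  # 4k random writes against an RBD image via the librbd engine (placeholder names)
  fio --name=rbd-randwrite --ioengine=rbd --clientname=admin --pool=rbd \
      --rbdname=testimg --rw=randwrite --bs=4k --iodepth=32 \
      --time_based --runtime=300 --group_reporting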
Hi Karri,
On 4 September 2018 at 23:30:01 CEST, Pardhiv Karri wrote:
>Hi,
>
>I created a ceph cluster manually (not using ceph-deploy). When I reboot
>the node, the OSDs don't come back up because the OS doesn't know that it
>needs to bring up the OSDs. I am running this on Ubuntu 16.04. Is ther
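In case it helps, a minimal sketch of making manually-created OSDs start at boot on a systemd distribution such as Ubuntu 16.04; the OSD id is a placeholder:

  # enable the per-OSD unit (repeat for each OSD id) and the ceph target
  systemctl enable ceph-osd@0
  systemctl enable ceph.target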
I was thinking of upgrading Luminous to Mimic, but does anyone have
Mimic running with collectd and the ceph plugin?
When Luminous was introduced, it took almost half a year before collectd
supported it.
On Fri, Sep 7, 2018 at 3:31 PM, Maged Mokhtar wrote:
> On 2018-09-07 14:36, Alfredo Deza wrote:
>
> On Fri, Sep 7, 2018 at 8:27 AM, Muhammad Junaid
> wrote:
>
> Hi there
>
> Asking the questions as a newbie. May be asked a number of times before by
> many but sorry, it is not clear yet to me.
>
>
On 2018-09-07 13:52, Janne Johansson wrote:
> Den fre 7 sep. 2018 kl 13:44 skrev Maged Mokhtar :
>
>> Good day Cephers,
>>
>> I want to get some guidance on erasure coding, the docs do state the
>> different plugins and settings but to really understand them all and their
>> use cases is not
Searching the code for rbytes, it makes a lot of sense how this is useful
for quotas in general. While nothing references this variable in the
ceph-fuse code, it is in the general client config options as
`client_dirsize_rbytes = false`. Setting that in the config file and
remounting ceph-fuse
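For completeness, a small sketch of what that could look like; the ceph.conf path and mountpoint are placeholders, and this assumes ceph-fuse is unmounted and remounted afterwards:

  # add the option to the [client] section on the client, then remount ceph-fuse
  printf '[client]\nclient_dirsize_rbytes = false\n' >> /etc/ceph/ceph.conf
  fusermount -u /mnt/cephfs
  ceph-fuse /mnt/cephfs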
On 2018-09-07 14:36, Alfredo Deza wrote:
> On Fri, Sep 7, 2018 at 8:27 AM, Muhammad Junaid
> wrote:
>
>> Hi there
>>
>> Asking the questions as a newbie. May be asked a number of times before by
>> many but sorry, it is not clear yet to me.
>>
>> 1. The WAL device is just like journaling dev
There's an option when mounting the FS on the client to not display those
(on the kernel it's "norbytes"; see
http://docs.ceph.com/docs/master/man/8/mount.ceph/?highlight=recursive; I
didn't poke around to find it on ceph-fuse but it should be there).
Calculating them is not very expensive (or at l
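For the kernel client, a minimal sketch of a mount using that option; the monitor address, credentials and mountpoint are placeholders:

  # mount CephFS without recursive directory sizes (norbytes)
  mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret,norbytes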
Is it possible to disable this feature? Very few filesystems calculate
the size of their folders' contents. I know I enjoy it in multiple use
cases, but there are some use cases where this is not useful and a cause
of unnecessary lag/processing. I'm not certain how this is calculated,
but I co
Hmm, I *think* this might be something we've seen before and is the result
of our recursive statistics (i.e., the thing that makes directory sizes
reflect the data within them instead of 1 block size). If that's the case,
it should resolve within a few seconds to maybe tens of seconds under
stress?
Bu
We have an existing workflow that we've moved from one server sharing a
local disk via NFS to secondary servers, to all of them mounting CephFS.
The primary server runs a script similar to [1] this, but since we've moved
it into CephFS, we get [2] this error. We added the sync in there to try
to he
I saw above that the recommended size for the db partition was 5% of data,
yet the recommendation is 40GB partitions for 4TB drives. Isn't that closer
to 1%?
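(For reference, the arithmetic: 40 GB out of 4 TB is 40/4000 = 1%, whereas 5% of a 4 TB drive would be about 200 GB of DB space per OSD.)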
On Fri, Sep 7, 2018 at 10:06 AM, Muhammad Junaid
wrote:
> Thanks very much. It is clear very much now. Because we are just in
> planning st
Thanks very much. It is much clearer now. Because we are just in the
planning stage right now, would you tell me whether, if we use 7200rpm SAS
3-4TB drives for the OSDs, the write speed will be fine with this new
scenario? Because it will apparently be writing to slower disks before the
actual confirmation. (I understand there
It can get confusing.
There will always be a WAL, and there will always be a metadata DB, for
a bluestore OSD. However, if a separate device is not specified for the
WAL, it is kept in the same device/partition as the DB; in the same way,
if a separate device is not specified for the DB, it is kep
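To make that concrete, a minimal sketch of creating such an OSD with ceph-volume; the device paths are placeholders, and with only --block.db given the WAL simply lives inside that same DB device:

  # bluestore OSD: data on an HDD, block.db (and implicitly the WAL) on an SSD partition
  ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sde1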
Thanks again, but sorry again too. I couldn't understand the following.
1. As per the docs, block.db is used only for bluestore (file system metadata
info etc). It has nothing to do with the actual data (for journaling) which
will ultimately be written to the slower disks.
2. How will the actual journaling wo
On Fri, Sep 7, 2018 at 9:02 AM, Muhammad Junaid wrote:
> Thanks Alfredo. Just to clear that My configuration has 5 OSD's (7200 rpm
> SAS HDDS) which are slower than the 200G SSD. Thats why I asked for a 10G
> WAL partition for each OSD on the SSD.
>
> Are you asking us to do 40GB * 5 partitions o
Hi,
Are you asking us to do 40GB * 5 partitions on SSD just for block.db?
Yes. By default Ceph deploys block.db and the WAL on the same device if
no separate WAL device is specified.
Regards,
Eugen
Quoting Muhammad Junaid:
Thanks Alfredo. Just to clear that My configuration has 5 OS
Thanks Alfredo. Just to be clear, my configuration has 5 OSDs (7200 rpm
SAS HDDs), which are slower than the 200G SSD. That's why I asked for a 10G
WAL partition for each OSD on the SSD.
Are you asking us to do 40GB * 5 partitions on SSD just for block.db?
On Fri, Sep 7, 2018 at 5:36 PM Alfredo
On Fri, Sep 7, 2018 at 8:27 AM, Muhammad Junaid wrote:
> Hi there
>
> Asking the questions as a newbie. May be asked a number of times before by
> many but sorry, it is not clear yet to me.
>
> 1. The WAL device is just like journaling device used before bluestore. And
> CEPH confirms Write to cli
Hi there
I'm asking these questions as a newbie. They may have been asked a number
of times before by many, but sorry, it is not yet clear to me.
1. The WAL device is just like the journaling device used before bluestore. And
Ceph confirms the write to the client after writing to it (before the actual
write to the primary device)?
2.
Hello,
I got new logs - if this snippet is not sufficient, I can provide the full log:
https://pastebin.com/dKBzL9AW
br+thx wolfgang
On 2018-09-05 01:55, Radoslaw Zarzynski wrote:
> In the log following trace can be found:
>
> 0> 2018-08-30 13:11:01.014708 7ff2dd344700 -1 *** Caught signal
> (Se
Den fre 7 sep. 2018 kl 13:44 skrev Maged Mokhtar :
>
> Good day Cephers,
>
> I want to get some guidance on erasure coding, the docs do state the
> different plugins and settings but to really understand them all and their
> use cases is not easy:
>
> -Are the majority of implementations using jer
Good day Cephers,
I want to get some guidance on erasure coding. The docs do state the
different plugins and settings, but really understanding them all and
their use cases is not easy:
-Are the majority of implementations using jerasure and just configuring
k and m?
-For jerasure: when/if woul
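As a point of reference for the k/m question, a minimal sketch of a jerasure-based profile and a pool that uses it; the profile name, k/m values, failure domain and PG counts are placeholders:

  # create a jerasure EC profile (k=4 data, m=2 coding chunks) and an EC pool using it
  ceph osd erasure-code-profile set ec-4-2 plugin=jerasure k=4 m=2 crush-failure-domain=host
  ceph osd pool create ecpool 128 128 erasure ec-4-2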
Thanks for the info.
On Thu, Sep 6, 2018 at 7:03 PM Darius Kasparavičius
wrote:
> Hello,
>
> I'm currently running a similar setup. It's running a bluestore OSD
> with 1 NVMe device for db/wal devices. That NVMe device is not large
> enough to support a 160GB db partition per OSD, so I'm stuck with
On Fri, 7 Sep 2018, Paul Emmerich said:
> Mimic
Unless you run debian, in which case Luminous.
Sean
> 2018-09-07 12:24 GMT+02:00 Vincent Godin :
> > Hello Cephers,
> > if i had to go for production today, which release should i choose :
> > Luminous or Mimic ?
Mimic
2018-09-07 12:24 GMT+02:00 Vincent Godin :
> Hello Cephers,
> if i had to go for production today, which release should i choose :
> Luminous or Mimic ?
Hello Cephers,
if I had to go for production today, which release should I choose:
Luminous or Mimic?
I have only 2 scrubs running on HDDs, but they are keeping the drives in a
high busy state. I did not notice this before; did some setting change?
Because I can remember dstat listing 14MB/s-20MB/s and not 60MB/s.
DSK | sdd | busy 95% | read 1384 | write 92 | KiB/r 292 | KiB/w
Hi Eugen,
Thanks for the update.
The message still appears in the logs these days. Option client_oc_size in
my cluster has been 100MB from the start. I have configured
mds_cache_memory_limit to 4G, and since then the message has appeared less.
What I noticed is that the mds task reserves 6G of memory (in top) whil
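For context, a small sketch of applying the MDS cache limit mentioned above at runtime (4G = 4294967296 bytes); client_oc_size is a client-side option that would go under [client] in ceph.conf on the clients, and both values here are simply the ones from this message:

  # apply the 4G MDS cache memory limit to all MDS daemons at runtime
  ceph tell mds.* injectargs '--mds_cache_memory_limit 4294967296'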
the samsung sm863.
write-4k-seq: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T)
4096B-4096B, ioengine=libaio, iodepth=1
randwrite-4k-seq: (g=1): rw=randwrite, bs=(R) 4096B-4096B, (W)
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
read-4k-seq: (g=2): rw=read, bs=(R) 409
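For reference, a sketch of the kind of fio invocation that produces job groups like the ones above; the target device, size and runtime are placeholders (and writing to a raw device is destructive):

  # sequential write, random write, then sequential read; 4k blocks, libaio, QD1
  fio --filename=/dev/sdX --ioengine=libaio --iodepth=1 --bs=4k --direct=1 \
      --runtime=60 --time_based --group_reporting \
      --name=write-4k-seq --rw=write \
      --name=randwrite-4k-seq --stonewall --rw=randwrite \
      --name=read-4k-seq --stonewall --rw=read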
Hi,
the problem still exists.
For me, this happens to SSD OSDs only - I recreated all of them running 12.2.8.
This is what I got even on newly created OSDs after some time and crashes:
ceph-bluestore-tool fsck -l /root/fsck-osd.0.log --log-level=20 --path
/var/lib/ceph/osd/ceph-0 --deep on
2018-09