Hi Anthony!
Mon, 9 Dec 2019 17:11:12 -0800
Anthony D'Atri ==> ceph-users :
> > How is that possible? I don't know how much more proof I need to present
> > that there's a bug.
>
> FWIW, your pastes are hard to read with all the ? in them. Pasting
> non-7-bit-ASCII?
I don't see much "?" in
Hi,
Tue, 26 Nov 2019 13:57:51 +
Simon Ironside ==> ceph-users@lists.ceph.com :
> Mattia Belluco said back in May:
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-May/035086.html
>
> "when RocksDB needs to compact a layer it rewrites it
> *before* deleting the old data; if you'd li
Hi Kristof,
may I add another option?
I configured my SSDs this way.
Every OSD host has two fast and durable SSDs.
Both SSDs form one RAID1, which is then split up into LVs.
I took 58GB per OSD for DB & WAL, which also leaves space for a special action
by the DB (compaction, I think).
Then there w
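For reference, such a layout could be created roughly like this (a sketch; the
device names are only examples, the 58G size is the one mentioned above):
$ mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdd
$ vgcreate host-2-db /dev/md0
$ for i in 1 2 3 4; do lvcreate -L 58G -n "$i" host-2-db; done   # one DB/WAL LV per OSD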
I don't use ansible anymore. But this was my config for the host onode2:
./host_vars/onode2.yml:
lvm_volumes:
  - data: /dev/sdb
    db: '1'
    db_vg: host-2-db
  - data: /dev/sdc
    db: '2'
    db_vg: host-2-db
  - data: /dev/sde
    db: '3'
    db_vg: host-2-db
  - data: /dev/sdf
    db: '4'
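Without ceph-ansible, the same mapping can be given to ceph-volume directly,
e.g. for the first OSD (a sketch, assuming the VG/LV names from the config
above):
$ ceph-volume lvm create --data /dev/sdb --block.db host-2-db/1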
Hi.
Sounds like you are using kernel clients with kernels from Canonical/Ubuntu.
Two kernels have a bug:
4.15.0-66
and
5.0.0-32
Updated kernels are said to have fixes.
Older kernels also work:
4.15.0-65
and
5.0.0-31
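To check which kernel a client is actually running:
$ uname -r    # e.g. 4.15.0-66 or 5.0.0-32 would be the affected versions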
Lars
Wed, 30 Oct 2019 09:42:16 +
Bob Farrell ==> ceph-users :
> Hi. We are ex
Mon, 30 Sep 2019 15:21:18 +0200
Janne Johansson ==> Lars Täuber :
> >
> > I don't remember where I read it, but I was told that the cluster migrates
> > its complete traffic over to the public network when the cluster network
> > goes down. So this seems n
Mon, 30 Sep 2019 14:49:48 +0200
Burkhard Linke ==> ceph-users@lists.ceph.com :
> Hi,
>
> On 9/30/19 2:46 PM, Lars Täuber wrote:
> > Hi!
> >
> > What happens when the cluster network goes down completely?
> > Is the cluster silently using the public network wi
Hi!
What happens when the cluster network goes down completely?
Is the cluster silently using the public network without interruption, or does
the admin have to act?
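For reference, which networks are configured can be checked roughly like this
(whether the options live in the mon config store or only in ceph.conf depends
on the setup):
$ ceph config dump | grep -E 'public_network|cluster_network'
$ grep -E 'public.network|cluster.network' /etc/ceph/ceph.conf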
Thanks
Lars
ed it again and the osd is working again.
It feels like I hit a bug!?
Huge thanks for your help.
Cheers,
Lars
Fri, 23 Aug 2019 13:36:00 +0200
Paul Emmerich ==> Lars Täuber :
> Filter the log for "7f266bdc9700" which is the id of the crashed
> thread, it should contain more i
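With the default log location that would be something like (the osd id is a
placeholder):
$ grep 7f266bdc9700 /var/log/ceph/ceph-osd.<id>.log | less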
Hi there!
In our test cluster there is an osd that won't start anymore.
Here is a short part of the log:
-1> 2019-08-23 08:56:13.316 7f266bdc9700 -1
/tmp/release/Debian/WORKDIR/ceph-14.2.2/src/os/bluestore/BlueStore.cc: In
function 'void BlueStore::_txc_add_transaction(BlueStore::TransContext*,
Hi there!
We also see this behaviour in our cluster while it is moving pgs.
# ceph health detail
HEALTH_ERR 1 MDSs report slow metadata IOs; Reduced data availability: 2 pgs
inactive; Degraded data redundancy (low space): 1 pg backfill_toofull
MDS_SLOW_METADATA_IO 1 MDSs report slow metad
All osd are up.
I manually marked one of the 30 OSDs "out", not "down".
The primary OSDs of the stuck pgs are neither marked out nor down.
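The commands involved look roughly like this (osd id and pgid are
placeholders):
$ ceph osd out 17                 # mark a single osd out, not down
$ ceph pg dump_stuck inactive     # list the pgs that got stuck
$ ceph pg <pgid> query            # details for one of them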
Thanks
Lars
Thu, 22 Aug 2019 15:01:12 +0700
wahyu.muqs...@gmail.com ==> wahyu.muqs...@gmail.com, Lars Täuber :
> I think you use
There are 30 osds.
Thu, 22 Aug 2019 14:38:10 +0700
wahyu.muqs...@gmail.com ==> ceph-users@lists.ceph.com, Lars Täuber :
> how many osd do you use ?
Hi all,
we are using ceph in version 14.2.2 from
https://mirror.croit.io/debian-nautilus/ on debian buster and experiencing
problems with cephfs.
The mounted file system produces hanging processes due to pgs stuck inactive.
This often happens after I marked single osds out manually.
A typical r
all for this great storage solution!
Cheers,
Lars
Tue, 20 Aug 2019 07:30:11 +0200
Lars Täuber ==> ceph-users@lists.ceph.com :
> Hi there!
>
> Does anyone else have an idea what I could do to get rid of this error?
>
> BTW: it is the third time that the pg 20.0 is gone inconsi
Mon, 19 Aug 2019 13:51:59 +0200
Lars Täuber ==> Paul Emmerich :
> Hi Paul,
>
> thanks for the hint.
>
> I did a recursive scrub from "/". The log says there were some inodes with
> bad backtraces repaired. But the error remains.
> May this have something to
3760765989,
"ino": 1099518115802,
"path": "~mds0/stray7/15161f7/dovecot.index.backup"
}
]
starts a bit strange to me.
Are the snapshots also repaired with a recursive repair operation?
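For reference, the recursive repair mentioned here is started roughly like
this on Nautilus (the mds name is a placeholder; the exact option spelling
differs a bit between releases):
$ ceph tell mds.<name> scrub start / recursive repair
$ ceph daemon mds.<name> scrub_path / recursive repair   # older releases, via the admin socket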
Thanks
Lars
Mon, 19 Aug 2019 13:30:53 +0200
Paul Emmeric
Hi all!
Where can I look up what the error number means?
Or did I do something wrong in my command line?
Thanks in advance,
Lars
Fri, 16 Aug 2019 13:31:38 +0200
Lars Täuber ==> Paul Emmerich :
> Hi Paul,
>
> thank you for your help. But I get the following error:
>
> # ceph t
7e937fe700 0 client.867786 ms_handle_reset on
v2:192.168.16.23:6800/176704036
{
"return_code": -116
}
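Negative return codes from ceph commands are usually plain Linux errno values;
-116 can be decoded for example with:
$ python3 -c 'import errno, os; print(errno.errorcode[116], os.strerror(116))'
ESTALE Stale file handle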
Lars
Fri, 16 Aug 2019 13:17:08 +0200
Paul Emmerich ==> Lars Täuber :
> Hi,
>
> damage_type backtrace is rather harmless and can indeed be repaired
> with
Hi all!
The MDS of our ceph cluster produces a HEALTH_ERR state.
It is a nautilus 14.2.2 on debian buster installed from the repo made by
croit.io with osds on bluestore.
The symptom:
# ceph -s
cluster:
health: HEALTH_ERR
1 MDSs report damaged metadata
services:
mon: 3 d
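A useful first step with damaged MDS metadata is usually to ask the MDS what
exactly it recorded (the mds name is a placeholder):
$ ceph tell mds.<name> damage ls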
Thu, 11 Jul 2019 10:24:16 +0200
"Marc Roos" ==> ceph-users
, lmb :
> What about creating snaps on a 'lower level' in the directory structure
> so you do not need to remove files from a snapshot as a work around?
Thanks for the idea. This might be a solution for our use case.
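For reference, per-directory snapshots are simply created and removed through
the hidden .snap directory (paths are placeholders):
$ mkdir /mnt/cephfs/projects/foo/.snap/before-cleanup
$ rmdir /mnt/cephfs/projects/foo/.snap/before-cleanup   # drops the whole snapshot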
Regards,
Lars
Thu, 11 Jul 2019 10:21:16 +0200
Lars Marowsky-Bree ==> ceph-users@lists.ceph.com :
> On 2019-07-10T09:59:08, Lars Täuber wrote:
>
> > Hi everybody!
> >
> > Is it possible to make snapshots in cephfs writable?
> > We need to remove files because of this G
Hi everybody!
Is it possible to make snapshots in cephfs writable?
We need to remove files, also from snapshots, because of the General Data
Protection Regulation.
Thanks and best regards,
Lars
Thu, 27 Jun 2019 11:16:31 -0400
Nathan Fish ==> Ceph Users :
> Are these dual-socket machines? Perhaps NUMA is involved?
>
> On Thu., Jun. 27, 2019, 4:56 a.m. Lars Täuber, wrote:
>
> > Hi!
> >
> > In our cluster I ran some benchmarks.
> > The results are
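To rule NUMA in or out, a quick check on the OSD hosts (standard tools, not
Ceph-specific; numactl comes from the numactl package):
$ lscpu | grep -i -e socket -e numa
$ numactl --hardware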
Hi!
In our cluster I ran some benchmarks.
The results are always similar but strange to me.
I don't know what the results mean.
The cluster consists of 7 (nearly) identical hosts for osds. Two of them have
an additional hdd.
The hdds are of identical type. The ssds for the journal and wal a
about?
Regards,
Lars
Wed, 19 Jun 2019 15:04:06 +0200
Paul Emmerich ==> Lars Täuber :
> That shouldn't trigger the PG limit (yet), but increasing "mon max pg per
> osd" from the default of 200 is a good idea anyways since you are running
> with more than 200 PGs per O
Hi Paul,
thanks for your reply.
Wed, 19 Jun 2019 13:19:55 +0200
Paul Emmerich ==> Lars Täuber :
> Wild guess: you hit the PG hard limit, how many PGs per OSD do you have?
> If this is the case: increase "osd max pg per osd hard ratio"
>
> Check "ceph pg query"
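For reference, the current PG count per OSD and the two limits can be
inspected and raised roughly like this (option names as of Nautilus; the
values are only examples):
$ ceph osd df tree                                   # PGS column
$ ceph config set mon mon_max_pg_per_osd 400
$ ceph config set osd osd_max_pg_per_osd_hard_ratio 5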
Hi there!
Recently I made our cluster rack aware
by adding racks to the crush map.
The failure domain was and still is "host".
rule cephfs2_data {
        id 7
        type erasure
        min_size 3
        max_size 6
        step set_chooseleaf_tries 5
        step set_choose_tries 100
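If the failure domain should ever move from "host" to "rack", that would mean
a new EC profile and rule rather than editing this one, roughly like this
(k/m and the names are only examples, not taken from the rule above):
$ ceph osd erasure-code-profile set ec-rack k=4 m=2 crush-failure-domain=rack
$ ceph osd crush rule create-erasure cephfs2_data_rack ec-rack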
Yes, thanks. This helped.
Regards,
Lars
Tue, 28 May 2019 11:50:01 -0700
Gregory Farnum ==> Lars Täuber :
> You're the second report I've seen of this, and while it's confusing, you
> should be able to resolve it by restarting your active manager daemon.
>
> On Sun, May 26, 2
Fri, 24 May 2019 21:41:33 +0200
Michel Raabe ==> Lars Täuber, ceph-users@lists.ceph.com :
>
> You can also try
>
> $ rados lspools
> $ ceph osd pool ls
>
> and verify that with the pgs
>
> $ ceph pg ls --format=json-pretty | jq -r '.pg_stats[].pgid'
Mon, 20 May 2019 10:52:14 +
Eugen Block ==> ceph-users@lists.ceph.com :
> Hi, have you tried 'ceph health detail'?
>
No I hadn't. Thanks for the hint.
Hi everybody,
with the status report I get a HEALTH_WARN that I don't know how to get rid of.
It may be connected to recently removed pools.
# ceph -s
cluster:
id: 6cba13d1-b814-489c-9aac-9c04aaf78720
health: HEALTH_WARN
1 pools have many more objects per pg than average
s
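The usual way to see which pool is the outlier, and to raise the threshold if
the skew is expected (the option name is real, the value only an example;
depending on the release it is read by the mon or the mgr):
$ ceph df detail
$ ceph config set global mon_pg_warn_max_object_skew 20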
Hi,
is there a way to migrate a cephfs to a new data pool, like there is for rbd on
nautilus?
https://ceph.com/geen-categorie/ceph-pool-migration/
Thanks
Lars
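As far as I know there is no built-in data-pool migration for cephfs; a common
workaround is to add the new pool and point new data at it via file layouts,
then copy the old files (names and paths are placeholders; existing objects
stay in the old pool until copied):
$ ceph fs add_data_pool <fs_name> <new_pool>
$ setfattr -n ceph.dir.layout.pool -v <new_pool> /mnt/cephfs/some/dir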
here:
http://docs.ceph.com/docs/nautilus/cephfs/experimental-features/#snapshots
Regards,
Lars
Fri, 3 May 2019 11:45:41 +0200
Lars Täuber ==> ceph-users@lists.ceph.com :
> Hi,
>
> I'm still new to ceph. Here are similar problems with CephF
Hi,
I'm still new to ceph. Here are similar problems with CephFS.
ceph version 14.2.0 (3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)
on Debian GNU/Linux buster/sid
# ceph health detail
HEALTH_WARN 1 MDSs report slow requests; 1 MDSs behind on trimming
MDS_SLOW_REQUEST 1 MDSs report
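Two things that are usually worth checking in this situation (the mds name is
a placeholder; the trimming warning relates to the mds_log_max_segments
setting):
$ ceph daemon mds.<name> dump_ops_in_flight          # which requests are stuck
$ ceph daemon mds.<name> perf dump | grep -A 20 '"mds_log"'   # journal/trim counters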
Wed, 17 Apr 2019 20:01:28 +0900
Christian Balzer ==> Ceph Users :
> On Wed, 17 Apr 2019 11:22:08 +0200 Lars Täuber wrote:
>
> > Wed, 17 Apr 2019 10:47:32 +0200
> > Paul Emmerich ==> Lars Täuber :
> > > The standard argument that it helps preventing recover
Wed, 17 Apr 2019 10:47:32 +0200
Paul Emmerich ==> Lars Täuber :
> The standard argument that it helps prevent recovery traffic from
> clogging the network and impacting client traffic is misleading:
What do you mean by "it"? I don't know the standard argument.
Do
Wed, 17 Apr 2019 09:52:29 +0200
Stefan Kooman ==> Lars Täuber :
> Quoting Lars Täuber (taeu...@bbaw.de):
> > > I'd probably only use the 25G network for both networks instead of
> > > using both. Splitting the network usually doesn't help.
> >
> >
Thanks Paul for the judgement.
Tue, 16 Apr 2019 10:13:03 +0200
Paul Emmerich ==> Lars Täuber :
> Seems in line with what I'd expect for the hardware.
>
> Your hardware seems to be way overspecced, you'd be fine with half the
> RAM, half the CPU and way cheaper di
Hi there,
I'm new to ceph and just got my first cluster running.
Now I'd like to know whether the performance we get is to be expected.
Is there a website with benchmark results somewhere where I could have a look,
to compare with our HW and our results?
These are the results:
rados bench single threaded:
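(For reference, a single-threaded run is usually invoked roughly like this;
pool name and runtime are only examples, and the read test needs a preceding
write with --no-cleanup:)
$ rados bench -p testpool 60 write -t 1 --no-cleanup
$ rados bench -p testpool 60 seq -t 1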
Hi everybody!
There is a small mistake in the news about the PG autoscaler
https://ceph.com/rados/new-in-nautilus-pg-merging-and-autotuning/
The command
$ ceph osd pool set foo target_ratio .8
should actually be
$ ceph osd pool set foo target_size_ratio .8
Thanks for this great improvement!
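For completeness, the autoscaler from the same post is enabled like this (pool
name foo as in the post):
$ ceph mgr module enable pg_autoscaler
$ ceph osd pool set foo pg_autoscale_mode on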
Hi there!
I just started to install a ceph cluster.
I'd like to take the nautilus release.
Because of hardware restrictions (network driver modules) I had to take the
buster release of Debian.
Will there be buster packages of nautilus available after the release?
Thanks for this great storage!
Hi there,
can someone share her/his experience regarding this question? Maybe broken
down by the different available algorithms?
Sat, 12 Aug 2017 14:40:05 +0200
Stijn De Weirdt ==> Gregory Farnum, Mark Nelson, "ceph-users@lists.ceph.com" :
> also any indication how much more