I am consistently getting whiteout mismatches, because of which PGs are going
into an inconsistent state, and I am not able to figure out why this is
happening. Even though it was explained before that these whiteouts don't
exist and are nothing to worry about, it is still painful to see my PGs in an
inconsistent state. Can anyone help?
On Tue, Jun 5, 2018 at 6:13 AM, Paul Emmerich wrote:
> Hi,
>
> 2018-06-04 20:39 GMT+02:00 Sage Weil :
>>
>> We'd love to build for stretch, but until there is a newer gcc for that
>> distro it's not possible. We could build packages for 'testing', but I'm
>> not sure if those will be usable on stretch.
I see, thanks for the detailed information, Sage!
Kind regards,
Charles Alva
Sent from Gmail Mobile
On Tue, Jun 5, 2018 at 1:39 AM Sage Weil wrote:
> [adding ceph-maintainers]
>
> On Mon, 4 Jun 2018, Charles Alva wrote:
> > Hi Guys,
> >
> > When will the Ceph Mimic packages for Debian Stretch
Hi Cephers,
We recently upgraded one of our clusters from hammer to jewel and then to
luminous (12.2.5, 5 mons/mgrs, 21 storage nodes * 9 OSDs each). After some
deep-scrubs we have an inconsistent PG with a log message we've not seen
before:
HEALTH_ERR 1 scrub errors; Possible data damage: 1 pg inconsistent
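For an error like this, a rough sketch of how the damage is usually inspected
and, if appropriate, repaired; the PG id 1.2f below is only a placeholder for
whatever ceph health detail reports:
# ceph health detail
# rados list-inconsistent-obj 1.2f --format=json-pretty
# ceph pg repair 1.2f
The list-inconsistent-obj output shows which shards and attributes disagree,
which helps decide whether a repair is safe.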
Thanks for reading my questions!
I want to run MySQL on Ceph using KRBD because KRBD is faster than librbd. I
know KRBD is a kernel module, and we can use it to map and mount the RBD
device on the operating system.
It is easy to use the command-line tool to mount the RBD device on the
operating system
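For reference, a minimal sketch of mapping and mounting an RBD image with
krbd; the pool name rbd, image name mysql-data, size and mount point are all
placeholders:
# rbd create rbd/mysql-data --size 100G
# rbd map rbd/mysql-data
# mkfs.xfs /dev/rbd0
# mount /dev/rbd0 /var/lib/mysql
Note that the kernel client does not support every image feature, so on newer
images some features may have to be turned off with rbd feature disable before
the map succeeds.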
We have been running a Luminous.04 + Bluestor for about 3 months in
production. All the daemons run as docker containers and were installed
using ceph-ansible. 540 spinning drives with journal/wal/db on the same
drive spread across 9 hosts. Using librados object interface directly with
steady 100MB
On Thu, May 31, 2018 at 4:40 PM Gregory Farnum wrote:
> On Thu, May 24, 2018 at 9:15 AM Michael Burk
> wrote:
>
>> Hello,
>>
>> I'm trying to replace my OSDs with higher capacity drives. I went through
>> the steps to remove the OSD on the OSD node:
>> # ceph osd out osd.2
>> # ceph osd down osd
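(The quoted steps are cut off above; for completeness, a sketch of the
commonly documented removal sequence for an OSD, here osd.2, run on the
appropriate nodes:
# ceph osd out osd.2
# systemctl stop ceph-osd@2
# ceph osd crush remove osd.2
# ceph auth del osd.2
# ceph osd rm osd.2
On Luminous and later the last three steps can also be collapsed into
ceph osd purge osd.2 --yes-i-really-mean-it.)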
Hi,
2018-06-04 20:39 GMT+02:00 Sage Weil :
> We'd love to build for stretch, but until there is a newer gcc for that
> distro it's not possible. We could build packages for 'testing', but I'm
> not sure if those will be usable on stretch.
>
you can install gcc (and only gcc) from testing on Stretch.
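One way to do that, sketched here with APT pinning so that stretch stays the
default and only explicitly requested packages come from testing (the priority
value is an assumption, not something spelled out in this thread):
# echo 'deb http://deb.debian.org/debian testing main' > /etc/apt/sources.list.d/testing.list
# printf 'Package: *\nPin: release a=testing\nPin-Priority: 100\n' > /etc/apt/preferences.d/99-testing
# apt-get update
# apt-get install -t testing gcc g++
With a pin priority below 500, packages from testing are never pulled in
automatically, only when requested with -t testing.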
On 06/04/2018 07:39 PM, Sage Weil wrote:
> [1]
> http://lists.ceph.com/private.cgi/ceph-maintainers-ceph.com/2018-April/000603.html
> [2]
> http://lists.ceph.com/private.cgi/ceph-maintainers-ceph.com/2018-April/000611.html
Just a heads up: it seems the ceph-maintainers archives are not public.
-
My reaction when I read that there will be no Mimic soon on Stretch:
https://pix.milkywan.fr/JDjOJWnx.png
Anyway, thank you for the kind explanation, as well as for getting in
touch with the Debian team about this issue.
On 06/04/2018 08:39 PM, Sage Weil wrote:
> [adding ceph-maintainers]
>
> On
[adding ceph-maintainers]
On Mon, 4 Jun 2018, Charles Alva wrote:
> Hi Guys,
>
> When will the Ceph Mimic packages for Debian Stretch be released? I could not
> find the packages even after changing the sources.list.
The problem is that we're now using C++17, which requires a newer gcc
than stretch provides.
Appreciate the input.
Wasn’t sure if ceph-volume was the one setting these bits of metadata or
something else.
Appreciate the help guys.
Thanks,
Reed
> The fix is in core Ceph (the OSD/BlueStore code), not ceph-volume. :)
> journal_rotational is still a thing in BlueStore; it represents the
There aren't any builds for Debian because the distro does not have the
compiler backports required for building Mimic.
On Mon, Jun 4, 2018 at 8:55 AM, Ronny Aasen wrote:
> On 04. juni 2018 06:41, Charles Alva wrote:
>>
>> Hi Guys,
>>
>> When will the Ceph Mimic packages for Debian Stretch be released? I
On Mon, Jun 4, 2018 at 12:37 PM, Reed Dier wrote:
> Hi Caspar,
>
> David is correct, in that the issue I was having was with SSD OSDs whose NVMe
> bluefs_db was reporting as HDD, creating an artificial throttle based on what
> David was mentioning, a safeguard to keep spinning rust from thrashing. Not
>
On Mon, Jun 4, 2018 at 9:38 AM Reed Dier wrote:
> Copying Alfredo, as I’m not sure if something changed with respect to
> ceph-volume from 12.2.2 (when this originally happened) to 12.2.5 (I’m sure
> plenty did), because I recently had an NVMe drive fail on me unexpectedly
> (curse you Micron), and
Hi Caspar,
David is correct, in that the issue I was having was with SSD OSDs whose NVMe
bluefs_db was reporting as HDD, creating an artificial throttle based on what
David was mentioning, a safeguard to keep spinning rust from thrashing. Not
sure if the journal_rotational bit should be 1, but either
There's some metadata on BlueStore OSDs (the RocksDB database); it's
usually ~1% of your data.
The DB starts out at a size of around 1 GB, so that's expected.
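A quick way to see this per OSD is to look at the BlueFS counters over the
admin socket (a sketch; osd.0 is a placeholder):
# ceph daemon osd.0 perf dump | grep -E '"db_(total|used)_bytes"'
ceph osd df also shows the roughly 1 GB of space already accounted for on a
freshly created, empty OSD.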
Paul
2018-06-04 15:55 GMT+02:00 Marc-Antoine Desrochers <
marc-antoine.desroch...@sogetel.com>:
> Hi,
>
>
>
> Im not sure if it’s nor
Hello,
Freshly created OSD won't start after upgrading to mimic:
2018-06-04 17:00:23.135 7f48cbecb240 0 osd.3 0 done with init, starting boot
process
2018-06-04 17:00:23.135 7f48cbecb240 1 osd.3 0 start_boot
2018-06-04 17:00:23.135 7f48cbecb240 10 osd.3 0 start_boot - have maps 0..0
2018-06-0
Hi,
I'm not sure if it's normal or not, but each time I add a new OSD with
ceph-deploy osd create --data /dev/sdg ceph-n1,
it adds 1 GB to my global data. But I just formatted the drive, so it's
supposed to be at 0, right?
So I have 6 OSDs in my ceph cluster and they took 6 GiB.
[root@ceph-n1 ~]# ceph -s
c
I don't believe this really applies to you. The problem here was with an
SSD OSD that was incorrectly labeled as an HDD OSD by Ceph. The fix was to
inject a sleep setting of 0 for those OSDs to speed up recovery. The sleep
is needed on HDDs to avoid thrashing, but the bug was that SSDs were
being
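For context, a sketch of how a setting like that is typically injected at
runtime; the option name osd_recovery_sleep_hdd comes from later messages in
this thread, and osd.* / osd.12 are placeholders:
# ceph tell osd.* injectargs '--osd_recovery_sleep_hdd 0'
# ceph tell osd.12 injectargs '--osd_recovery_sleep_hdd 0'
Injected values do not survive an OSD restart, so anything meant to be
permanent also has to go into ceph.conf (or, on Mimic and later, the config
database).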
On 04. juni 2018 06:41, Charles Alva wrote:
Hi Guys,
When will the Ceph Mimic packages for Debian Stretch be released? I could
not find the packages even after changing the sources.list.
I am also eager to test mimic on my ceph
debian-mimic only contains ceph-deploy atm.
kind regards
Ronny
On Sat, Jun 2, 2018 at 12:31 PM, Oliver Freyermuth
wrote:
> Am 02.06.2018 um 11:44 schrieb Marc Roos:
>>
>>
>> ceph-disk does not require bootstrap-osd/ceph.keyring and ceph-volume
>> does
>
> I believe that's expected when you use "prepare".
> For ceph-volume, "prepare" already bootstraps the OSD
ceph-volume has a 'rollback' functionality: if it was able to
create an OSD id and the creation of the OSD then
fails, it will remove the id. In this case, it failed to create the
id in the first place, so the tool can't be sure it has to 'clean up'.
On Sat, Jun 2, 2018 at 5:52 AM, Marc Roos wrote:
>
>
> [@ bootst
On 05/08/2018 07:21 AM, Kai Wagner wrote:
> Looks very good. Is it in any way possible to display the reason why a
> cluster is in an error or warning state? Thinking about the output of
> ceph -s, whether this could be shown in case there's a failure. I think this
> will not be provided by default but wo
On Mon, Jun 04, 2018 at 11:12:58AM +0300, Wladimir Mutel wrote:
> /disks> create pool=rbd image=win2016-3tb-1 size=2861589M
> CMD: /disks/ create pool=rbd image=win2016-3tb-1 size=2861589M count=1
> max_data_area_mb=None
> pool 'rbd' is ok to use
> Creating/mapping disk rbd/win2016-3tb-1
>
I won't run out of write IOPS when I have an SSD journal in place. I know that
I can use the dual-root method from Sebastien's website, but I thought the
'storage class' feature is the way to solve this kind of problem.
https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the
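A short sketch of what that looks like with Luminous device classes (rule and
pool names are placeholders):
# ceph osd crush rule create-replicated ssd-rule default host ssd
# ceph osd crush rule create-replicated hdd-rule default host hdd
# ceph osd pool set fast-pool crush_rule ssd-rule
Device classes are normally detected automatically, but they can be overridden
with ceph osd crush rm-device-class / set-device-class if an OSD is
misdetected.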
On Fri, Jun 01, 2018 at 08:20:12PM +0300, Wladimir Mutel wrote:
>
> And still, when I do '/disks create ...' in gwcli, it says
> that it wants 2 existing gateways. Probably this is related
> to the created 2-TPG structure and I should look for more ways
> to 'improve' that
Hi Reed,
"Changing/injecting osd_recovery_sleep_hdd into the running SSD OSD’s on
bluestore opened the floodgates."
What exactly did you change/inject here?
We have a cluster with 10 TB SATA HDDs which each have a 100 GB SSD-based
block.db.
Looking at ceph osd metadata for each of those:
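A sketch of that kind of check, grepping the rotational flags out of the OSD
metadata (osd.28 is a placeholder; without an id the command dumps all OSDs):
# ceph osd metadata 28 | grep -E '"(journal_)?rotational"'
# ceph osd metadata | grep -E '"(journal_)?rotational"'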