Hi Ilya,
That explains it. Thank you for the clarification!
Tony
From: Ilya Dryomov
Sent: December 4, 2023 09:40 AM
To: Tony Liu
Cc: ceph-users@ceph.io; d...@ceph.io
Subject: Re: [ceph-users] the image used size becomes 0 after export/import
with snapshot
Hi,
I have an image with a snapshot and some changes after snapshot.
```
$ rbd du backup/f0408e1e-06b6-437b-a2b5-70e3751d0a26
NAME  PROVISIONED  USED
```
Hi,
src-image is 1GB (provisioned size). I did the following 3 tests.
1. rbd export src-image - | rbd import - dst-image
2. rbd export --export-format 2 src-image - | rbd import --export-format 2 - dst-image
3. rbd export --export-format 2 src-image - | rbd import - dst-image
With #1 and #2,
looking for?
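For reference, a minimal, hedged sketch of test #2 above with a verification step; the pool name "backup" is taken from the rbd du output earlier in the thread, and the image and snapshot names are placeholders:
```
rbd snap create backup/src-image@s1                  # snapshot taken before export
rbd export --export-format 2 backup/src-image - \
    | rbd import --export-format 2 - backup/dst-image
rbd snap ls backup/dst-image                         # format 2 should carry the snapshot over
rbd du backup/dst-image                              # compare USED against the source image
```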
Quoting Tony Liu:
> Hi,
>
> Other than get all objects of the pool and filter by image ID,
> is there any easier way to get the number of allocated objects for
> a RBD image?
>
> What I really want to know is the actual usage of an image.
> An allocated obj
Hi,
The context is RBD on BlueStore. I did check "extent" in the wiki.
I see "extent" mentioned when talking about snapshots and export/import.
For example, when creating a snapshot, we mark extents. When
there is a write to a marked extent, we make a copy.
I also know that user data on a block device maps to
Hi,
Other than getting all objects of the pool and filtering by image ID,
is there any easier way to get the number of allocated objects for
an RBD image?
What I really want to know is the actual usage of an image.
An allocated object could be used partially, but that's fine,
no need to be 100%
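A hedged sketch of the "filter by image ID" approach mentioned above, plus the per-image accounting rbd already provides; the pool and image names are placeholders:
```
# block_name_prefix identifies this image's data objects (e.g. rbd_data.<id>)
PREFIX=$(rbd info backup/img --format json \
    | python3 -c 'import json,sys; print(json.load(sys.stdin)["block_name_prefix"])')

# number of allocated (i.e. actually written) data objects for the image
rados -p backup ls | grep -c "^${PREFIX}\."

# rbd can also report allocated space directly
rbd du backup/img
```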
Figured it out. It's not rbd issue. Sorry for this false alarm.
Thanks!
Tony
From: Tony Liu
Sent: August 27, 2023 08:19 PM
To: Eugen Block; ceph-users@ceph.io
Subject: [ceph-users] Re: rbd export-diff/import-diff hangs
It's export-diff from an in-use
and exporting those? Or set up rbd mirroring?
Quoting Tony Liu:
> To update, hanging happens when updating local image, not remote, networking
> is not a concern here. Any advices how to look into it?
>
> Thanks!
> Tony
> ________
> From: Tony Liu
>
Thank you Alex for confirmation!
Tony
From: Alex Gorbachev
Sent: August 27, 2023 05:29 PM
To: Tony Liu
Cc: d...@ceph.io; ceph-users@ceph.io
Subject: Re: [ceph-users] rbd export with export-format 2 exports all snapshots?
Tony,
From what I recall having
Hi,
Say, source image has snapshot s1, s2 and s3.
I expect "export" behaves the same as "deep cp", when specify a snapshot,
with "--export-format 2", only the specified snapshot and all snapshots
earlier than that will be exported.
What I see is that, no matter which snapshot I specify,
To update: the hang happens when updating the local image, not the remote one,
so networking is not a concern here. Any advice on how to look into it?
Thanks!
Tony
From: Tony Liu
Sent: August 26, 2023 10:43 PM
To: d...@ceph.io; ceph-users@ceph.io
Subject: [ceph-users] rbd
Hi,
I'm using rbd export and import to copy an image from one cluster to another,
and export-diff/import-diff to update the image in the remote cluster.
For example, "rbd --cluster local export-diff ... | rbd --cluster remote
import-diff ...".
Sometimes, the whole command gets stuck. I can't tell
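For reference, a hedged sketch of the diff-based update described above; pool, image and snapshot names are placeholders, and import-diff expects the destination to already contain the start snapshot:
```
rbd --cluster local snap create vm/image@snap2
rbd --cluster local export-diff --from-snap snap1 vm/image@snap2 - \
    | rbd --cluster remote import-diff - vm/image

# If the pipeline hangs, running the two halves separately against a file can
# help isolate which side is stuck:
rbd --cluster local export-diff --from-snap snap1 vm/image@snap2 /tmp/image.diff
rbd --cluster remote import-diff /tmp/image.diff vm/image
```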
Thank you Ilya for confirmation!
Tony
From: Ilya Dryomov
Sent: August 4, 2023 04:51 AM
To: Tony Liu
Cc: d...@ceph.io; ceph-users@ceph.io
Subject: Re: [ceph-users] snapshot timestamp
On Fri, Aug 4, 2023 at 7:49 AM Tony Liu wrote:
>
> Hi,
>
ould happen if you actually would reach it. I also
> might be misunderstanding so maybe someone with more knowledge can
> confirm or correct me.
>
> [1] https://github.com/ceph/ceph/blob/main/src/librbd/ImageCtx.cc#L328
>
> Quoting Tony Liu:
>
> > Hi,
>
Hi,
We know a snapshot represents a point in time. Is this point in time tracked
internally by
some sort of sequence number, the timestamp shown by "snap ls", or
something else?
I noticed that with "deep cp", the timestamps of all snapshots are changed to
the copy time.
Say I create a snapshot at 1PM
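Both identifiers mentioned above are visible from the CLI; a small sketch, with placeholder pool/image names:
```
rbd snap ls vm/image                                  # columns: SNAPID NAME SIZE PROTECTED TIMESTAMP
rbd snap ls vm/image --format json --pretty-format    # machine-readable, includes id and timestamp
```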
Hi,
There is a snap ID for each snapshot. How is this ID allocated, sequentially?
From some tests, it seems this ID is per pool, starting from 4 and always going
up.
Is that correct?
What's the max of this ID?
What happens when the ID reaches the max, does it wrap around and start from 4
again?
In case the image has a parent, the parent image also needs to be mirrored.
After enabling mirroring on the parent image, it works as expected.
Thanks!
Tony
From: Tony Liu
Sent: July 31, 2023 08:13 AM
To: ceph-users@ceph.io; d...@ceph.io
Subject: [ceph
Hi,
The Ceph cluster is with Pacific v16.2.10.
"rbd mirror image enable journal" seems not working.
Any clues what I'm missing? There is no error messages from the CLI.
Any way to troubleshooting?
```
# rbd mirror pool info volume-ssd
Mode: image
Site Name: 35d050c0-77c0-11eb-9242-2cea7ff9d07c
```
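A few hedged checks that may help troubleshoot this; the pool name comes from the output above and the image name is a placeholder:
```
rbd mirror pool info volume-ssd              # mode and configured peers
rbd info volume-ssd/image | grep features    # journal-based mirroring needs the journaling feature
rbd mirror image enable volume-ssd/image journal
rbd mirror image status volume-ssd/image     # per-image state and description
```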
From: Tony Liu
Sent: July 29, 2023 11:44 PM
To: ceph-users@ceph.io; d...@ceph.io
Subject: [ceph-users] Re: configure rgw
A few updates.
1. "radosgw-admin --show-config -n client.rgw.qa.ceph-1.hzfrwq" doesn't show
actual running config.
2. "ceph --admin-daemon
/var/run/ceph/69f94d08
from config and restart rgw, rgw_frontends
goes back to the default "port=7480". Add it back to the config and restart rgw.
Now rgw_frontends
is what I expect. The logic doesn't make much sense to me. I'd assume that
unit.meta has
something to do with this; hopefully someone can shed light here.
Thank
Hi,
I'm using the Pacific v16.2.10 container image, deployed by cephadm.
I used to manually build a config file for rgw, deploy rgw, put the config file
in place
and restart rgw. That works fine.
Now, I'd like to put the rgw config into the config db. I tried with client.rgw,
but the config
is not picked up by rgw.
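A hedged sketch of one way to try this, assuming the config db entry has to match the full daemon name that cephadm generated (the name below is the one from the follow-up message; the frontend value and the service name are just examples):
```
ceph config set client.rgw.qa.ceph-1.hzfrwq rgw_frontends "beast port=8080"
ceph config get client.rgw.qa.ceph-1.hzfrwq rgw_frontends
ceph orch restart rgw.qa        # assumed service name; restart so rgw picks the value up
```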
Super! Thanks Ilya!
Tony
From: Ilya Dryomov
Sent: July 13, 2023 01:30 PM
To: Tony Liu
Cc: d...@ceph.io; ceph-users@ceph.io
Subject: Re: [ceph-users] resume RBD mirror on another host
On Thu, Jul 13, 2023 at 10:23 PM Ilya Dryomov wrote:
>
> On Th
Hi,
How does rbd-mirror track mirroring progress, on local storage?
Say rbd-mirror is running on host-1; when host-1 goes down,
I start rbd-mirror on host-2. In that case, will rbd-mirror on host-2
continue the mirroring?
Thanks!
Tony
Hi,
Wondering if there is a librbd binding that supports Python asyncio,
or any plan for one?
Thanks!
Tony
e OSDs. I
> will take a closer look.
>
> Quoting Tony Liu:
>
>> Tried [1] already, but got error.
>> Created no osd(s) on host ceph-4; already created?
>>
>> The error is from [2] in deploy_osd_daemons_for_existing_osds().
>>
>> Not sure what's mis
/en/pacific/cephadm/services/osd/#activate-existing-osds
[2]
https://github.com/ceph/ceph/blob/0a5b3b373b8a5ba3081f1f110cec24d82299cac8/src/pybind/mgr/cephadm/services/osd.py#L196
Thanks!
Tony
From: Tony Liu
Sent: April 27, 2023 10:20 PM
To: ceph-users
Hi,
The cluster is Pacific, deployed by cephadm in containers.
The case is to import OSDs after a host OS reinstallation.
All OSDs are SSDs with DB/WAL and data together.
I did some research, but was not able to find a working solution.
Wondering if anyone has experience with this?
What needs to
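For what it's worth, the documented "activate existing OSDs" path referenced elsewhere in this thread boils down to roughly the following; the host name is taken from the follow-up message, and it assumes the reinstalled host is back under cephadm management:
```
ceph orch host add ceph-4                # re-add the reinstalled host if needed
ceph cephadm osd activate ceph-4         # scan for existing OSD LVs and recreate the daemons
```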
Thank you Ilya!
Tony
From: Ilya Dryomov
Sent: March 27, 2023 10:28 AM
To: Tony Liu
Cc: ceph-users@ceph.io; d...@ceph.io
Subject: Re: [ceph-users] rbd cp vs. rbd clone + rbd flatten
On Wed, Mar 22, 2023 at 10:51 PM Tony Liu wrote:
>
> Hi,
>
>
Hi,
I want to
1) copy a snapshot to an image,
2) with no need to copy snapshots,
3) with no dependency after the copy,
4) all with image format 2.
In that case, is rbd cp the same as rbd clone + rbd flatten?
I ran some tests and it seems like it, but I want to confirm, in case I'm
missing anything.
Also, it seems cp is a
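A hedged sketch of the two paths being compared; pool, image and snapshot names are placeholders:
```
# a) direct copy of a snapshot into a new, independent image
rbd cp vm/src@s1 vm/dst-cp

# b) clone from the snapshot, then flatten to remove the dependency
rbd snap protect vm/src@s1
rbd clone vm/src@s1 vm/dst-clone
rbd flatten vm/dst-clone
rbd info vm/dst-clone | grep parent || true   # no parent line once flattened
```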
Thank you Igor!
Tony
From: Igor Fedotov
Sent: November 1, 2022 04:34 PM
To: Tony Liu; ceph-users@ceph.io; d...@ceph.io
Subject: Re: [ceph-users] Re: Is it a bug that OSD crashed when it's full?
Hi Tony,
first of all let me share my understanding
.
Thanks!
Tony
From: Tony Liu
Sent: October 31, 2022 05:46 PM
To: ceph-users@ceph.io; d...@ceph.io
Subject: [ceph-users] Is it a bug that OSD crashed when it's full?
Hi,
Based on doc, Ceph prevents you from writing to a full OSD so that you don’t
lose data
!
Tony
From: Zizon Qiu
Sent: October 31, 2022 08:13 PM
To: Tony Liu
Cc: ceph-users@ceph.io; d...@ceph.io
Subject: Re: Is it a bug that OSD crashed when it's full?
15: (rocksdb::DBImpl::WriteLevel0TableForRecovery(int, rocksdb::ColumnFamilyData*, rocksdb
23: (RocksDBStore::do_open(std::ostream&, bool, bool, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x10c1) [0x56102ee39c41]
24: (BlueStore::_open_db(bool, bool, bool)+0x8c7) [0x56102ec9de17]
25: (BlueStore::_open_db_and_around(bool, bool)+0x2f7) [
Hi,
Based on the doc, Ceph prevents you from writing to a full OSD so that you don't
lose data.
In my case, with v16.2.10, an OSD crashed when it was full. Is this expected or a
bug?
I'd expect a write failure instead of an OSD crash. It keeps crashing when I try
to bring it up.
Is there any way to bring
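For reference, a hedged sketch of the knobs involved; whether temporarily raising the full ratio is safe depends on how much physical space is actually left on the device:
```
ceph osd dump | grep ratio          # full_ratio, backfillfull_ratio, nearfull_ratio
ceph df                             # raw and per-pool usage
ceph osd set-full-ratio 0.97        # example: small temporary bump, revert afterwards
```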
h yet, but you seem to be right,
> changing the config isn't applied immediately, but only after a
> service restart ('ceph orch restart rgw.ebl-rgw'). Maybe that's on
> purpose? So you can change your config now and apply it later when a
> service interruption is not critical.
>
>
Hi,
The cluster is Pacific 16.2.10 with containerized services, managed by
cephadm.
"config show" shows the running configuration. Which daemons are supported?
mon, mgr and osd all work, but rgw doesn't. Is this expected?
I tried with the "client." prefix and without it;
neither works.
When I issue "config show",
Hi,
"df -h" on the OSD host shows 187G is being used.
"du -sh /" shows 36G. bluefs_buffered_io is enabled here.
What's taking that 150G disk space, cache?
Then where is that cache file? Any way to configure it smaller?
# free -h
              total        used        free      shared
I hit a couple of issues with Podman when deploying Ceph 16.2.
https://www.spinics.net/lists/ceph-users/msg71367.html
https://www.spinics.net/lists/ceph-users/msg71346.html
Switching back to Docker, all work fine.
Thanks!
Tony
From: Robert Sander
Sent: May 19,
Hi,
I understand that, for rbd-mirror to work, the rbd-mirror service requires
connectivity to all nodes of both clusters.
In my case, for security purposes, the "public" network is actually a private
network,
which is not routable externally. All internal RBD clients are on that private
Thank you Anthony! I agree that rbd-mirror is more reliable and manageable,
and it's not that complicated to use. I will try both and see which works
better
for me.
Tony
From: Anthony D'Atri
Sent: April 21, 2022 09:02 PM
To: Tony Liu
Cc: ceph-users
To: Tony Liu
Cc: ceph-users@ceph.io; d...@ceph.io
Subject: Re: [ceph-users] the easiest way to copy image to another cluster
Hi Tony,
Have a look at rbd export and rbd import, they dump the image to a file or
stdout. You can pipe the rbd export directly into an rbd import assuming you
have a host
Hi,
I want to copy an image, which is not being used, to another cluster.
rbd-mirror would do it, but rbd-mirror is designed to handle an image
that is being used/updated, to ensure the mirrored image is always
consistent with the source. I wonder if there is any easier way to copy
an image without
Thank you Adam! After "orch daemon redeploy", all works as expected.
Tony
From: Adam King
Sent: March 24, 2022 11:50 AM
To: Tony Liu
Cc: ceph-users@ceph.io; d...@ceph.io
Subject: Re: [ceph-users] Re: logging with container
Hmm, I'm assuming fro
Any comments on this?
Thanks!
Tony
From: Tony Liu
Sent: March 21, 2022 10:01 PM
To: Adam King
Cc: ceph-users@ceph.io; d...@ceph.io
Subject: [ceph-users] Re: logging with container
Hi Adam,
When I do "ceph tell mon.ceph-1 config set log_to_file tru
"log_to_stderr" doesn't help.
Thanks!
Tony
________
From: Tony Liu
Sent: March 21, 2022 09:41 PM
To: Adam King
Cc: ceph-users@ceph.io; d...@ceph.io
Subject: [ceph-users] Re: logging with container
Hi Adam,
# ceph config get mgr log_to_file
true
# ceph con
mgr and osd, but why is there
an osd log only,
and no mgr log?
Thanks!
Tony
From: Adam King
Sent: March 21, 2022 08:26 AM
To: Tony Liu
Cc: ceph-users@ceph.io; d...@ceph.io
Subject: Re: [ceph-users] logging with container
Hi Tony,
Afaik those container flags j
It's probably related to Podman again. After switching back to Docker,
this works fine.
Thanks!
Tony
From: Tony Liu
Sent: March 20, 2022 06:31 PM
To: ceph-users@ceph.io; d...@ceph.io
Subject: [ceph-users] bind monitoring service to specific network
registry-login my_url my_username my_password
Quoting Tony Liu:
> Hi,
>
> I am using Pacific v16.2 container image. I put images on a insecure
> private registry.
> I am using podman and /etc/containers/registries.conf is set with
> that insecure private registry.
> "cephadm boot
Hi,
After reading through the docs, it's still not very clear to me how logging works
with containers.
This is with Pacific v16.2 container.
In OSD container, I see this.
```
/usr/bin/ceph-osd -n osd.16 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true
```
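Those flags are what cephadm passes by default to send logs to stderr (journald) instead of files; a hedged sketch of turning file logging back on cluster-wide:
```
ceph config set global log_to_file true
ceph config set global mon_cluster_log_to_file true
ceph config set global log_to_stderr false
```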
Hi,
https://docs.ceph.com/en/pacific/cephadm/services/monitoring/#networks-and-ports
When I try that with the Pacific v16.2 image, the port works, but the network doesn't.
No matter which network is specified in the YAML file, orch apply always binds the
service to *.
Is this a known issue or am I missing something?
Hi,
I am using the Pacific v16.2 container image. I put images on an insecure private
registry.
I am using podman and /etc/containers/registries.conf is set up with that insecure
private registry.
"cephadm bootstrap" works fine to pull the image and set up the first node.
When "ceph orch apply -i
Hi,
I have 3 rgw services behind HAProxy. rgw-api-host and rgw-api-port are set
properly
to the VIP and port. "curl http://:" works fine. But the dashboard
complains
that it can't find an rgw service on that vip:port. If I set rgw-api-host directly
to a node,
it also works fine. I ran tcpdump on
Is there any measurement of how much bandwidth is taken by
private (cluster) traffic vs. public/client traffic when they are on the same network?
I currently have two 2x10G bonds for public and private; the intention
is to provide 2x10G bandwidth for clients. I do understand the overhead
Instead of complaining, taking some time to learn more about containers would help.
Tony
From: Marc
Sent: November 18, 2021 10:50 AM
To: Pickett, Neale T; Hans van den Bogert; ceph-users@ceph.io
Subject: [ceph-users] Re: [EXTERNAL] Re: Why you might want
For the PR-DR case, I am using RGW multi-site support to replicate backup images.
Tony
From: Manuel Holtgrewe
Sent: October 12, 2021 11:40 AM
To: dhils...@performair.com
Cc: mico...@gmail.com; ceph-users
Subject: [ceph-users] Re: Ceph cluster Sync
To chime in
Hi,
I wonder if anyone could share some experience with running etcd on Ceph.
My users build Kubernetes clusters in VMs on OpenStack with Ceph.
With an HDD volume (DB/WAL on SSD), the etcd performance test sometimes fails
because of latency. With an SSD (all-SSD) volume, it works fine.
I wonder if there is
Update /usr/lib/python3.6/site-packages/swiftclient/client.py and restart the
horizon container.
This fixes the error message on the dashboard when it tries to retrieve the policy
list.
-parsed = urlparse(urljoin(url, '/info'))
+parsed = urlparse(urljoin(url, '/swift/info'))
Tony
Shalygin
Sent: September 8, 2021 08:29 AM
To: Tony Liu
Cc: ceph-users@ceph.io; d...@ceph.io
Subject: Re: [ceph-users] debug RBD timeout issue
What is ceph.conf for this rbd client?
k
Sent from my iPhone
> On 7 Sep 2021, at 19:54, Tony Liu wrote:
>
>
> I have OpenStack Ussuri and
is between successful and failing volumes. Is
it the size or anything else? Which glance stores are enabled? Can you
reproduce it, for example 'rbd create...' with the cinder user? Then
you could increase 'debug_rbd' and see if that reveals anything.
Quoting Tony Liu:
> Hi,
>
> I ha
Hi,
I have OpenStack Ussuri and Ceph Octopus. Sometimes I see timeouts when creating
or deleting volumes. I can see RBD timeouts from cinder-volume. Has anyone seen such
an issue? I'd like to see what happens on the Ceph side. Which service should I look into?
Is it stuck
on a mon or an OSD? Any option to
Thank you Konstantin!
Tony
From: Konstantin Shalygin
Sent: August 9, 2021 01:20 AM
To: Tony Liu
Cc: ceph-users; d...@ceph.io
Subject: Re: [ceph-users] rbd object mapping
On 8 Aug 2021, at 20:10, Tony Liu
mailto:tonyliu0...@hotmail.com>> wrote:
>> There are two types of "object", RBD-image-object and 8MiB-block-object.
>> When create a RBD image, a RBD-image-object is created and 12800
>> 8MiB-block-objects
>> are allocated. That whole RBD-image-object is mapped to a single PG, which
>> is mapped
>> to 3 OSDs (replica 3). That means,
Thanks!
Tony
From: Konstantin Shalygin
Sent: August 7, 2021 11:35 AM
To: Tony Liu
Cc: ceph-users; d...@ceph.io
Subject: Re: [ceph-users] rbd object mapping
Object map show where your object with any object name will be placed in
defined pool with your crush map, and which of osd
Hi,
This shows one RBD image is treated as one object, and it's mapped to one PG.
"object" here means an RBD image.
# ceph osd map vm fcb09c9c-4cd9-44d8-a20b-8961c6eedf8e_disk
osdmap e18381 pool 'vm' (4) object 'fcb09c9c-4cd9-44d8-a20b-8961c6eedf8e_disk'
-> pg 4.c7a78d40 (4.0) -> up ([4,17,6],
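A hedged follow-up to the command above: "ceph osd map" just hashes whatever object name it is given; the image's data actually lives in many rbd_data objects, each mapping to its own PG. A sketch, with <id> as a placeholder:
```
rbd info vm/fcb09c9c-4cd9-44d8-a20b-8961c6eedf8e_disk | grep block_name_prefix
# block_name_prefix: rbd_data.<id>
ceph osd map vm rbd_data.<id>.0000000000000000   # first data object
ceph osd map vm rbd_data.<id>.0000000000000001   # next one, usually a different PG
```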
From: Tony Liu
Sent: July 30, 2021 09:16 PM
To: openstack-discuss; ceph-users
Subject: [ceph-users] [cinder-backup][ceph] replicate volume between sites
Hi,
I have two sites with OpenStack Victoria deployed by Kolla and Ceph Octopus
deployed by cephadm. As far as I know, either
Hi,
I have two sites with OpenStack Victoria deployed by Kolla and Ceph Octopus
deployed by cephadm. As far as I know, either Swift (implemented by RADOS Gateway)
or RBD is supported as the backend of cinder-backup. My intention is to use
one of those options to replicate Cinder volumes from one site to
Thank you Stefan and Josh!
Tony
From: Josh Baergen
Sent: March 28, 2021 08:28 PM
To: Tony Liu
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Re: Do I need to update ceph.conf and restart each
OSD after adding more MONs?
As was mentioned in this thread
ers:219061308 kB
> Cached: 2066532 kB
> SwapCached: 928 kB
> Active: 142272648 kB
> Inactive: 109641772 kB
> ..
>
>
> Thanks!
> Tony
>
> From: Tony Liu
> Sent: March 27, 2021 01:25 PM
>
Restarting an OSD frees the buff/cache memory.
What kind of data is there?
Is there any configuration to control this memory allocation?
Thanks!
Tony
From: Tony Liu
Sent: March 27, 2021 06:10 PM
To: ceph-users
Subject: [ceph-users] Re: memory consumption by osd
MemTotal: 263454780 kB
MemFree: 2212484 kB
MemAvailable: 226842848 kB
Buffers:        219061308 kB
Cached: 2066532 kB
SwapCached: 928 kB
Active: 142272648 kB
Inactive: 109641772 kB
..
Thanks!
Tony
From: Tony
Hi,
Here is a snippet from top on a node with 10 OSDs.
===
MiB Mem : 257280.1 total, 2070.1 free, 31881.7 used, 223328.3 buff/cache
MiB Swap: 128000.0 total, 126754.7 free, 1245.3 used. 221608.0 avail Mem
PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM
That means I still need to restart all services to apply the update, right?
Is this supposed to be part of adding MONs as well, or is it an additional manual step?
Thanks!
Tony
____
From: Tony Liu
Sent: March 27, 2021 12:53 PM
To: Stefan Kooman; ceph-users@ceph.io
Subject: [
and restart service.
Tony
From: Tony Liu
Sent: March 27, 2021 12:20 PM
To: Stefan Kooman; ceph-users@ceph.io
Subject: [ceph-users] Re: Do I need to update ceph.conf and restart each OSD
after adding more MONs?
I expanded MON from 1 to 3 by updating orch
Tony
From: Stefan Kooman
Sent: March 26, 2021 12:22 PM
To: Tony Liu; ceph-users@ceph.io
Subject: Re: [ceph-users] Do I need to update ceph.conf and restart each OSD
after adding more MONs?
On 3/26/21 6:06 PM, Tony Liu wrote:
> Hi,
>
> Do I need
Hi,
Do I need to update ceph.conf and restart each OSD after adding more MONs?
This is with 15.2.8 deployed by cephadm.
When adding a MON, "mon_host" should be updated accordingly.
Given [1], is that update "the monitor cluster’s centralized configuration
database" or "runtime overrides set by an
Are you sure the OSD has its DB/WAL on SSD?
Tony
From: Philip Brown
Sent: March 19, 2021 02:49 PM
To: Eugen Block
Cc: ceph-users
Subject: [ceph-users] Re: [BULK] Re: Re: ceph octopus mysterious OSD crash
Wow.
My expectations have been adjusted. Thank
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/EC45YMDJZD3T6TQINGM222H2H4RZABJ4/
From: Philip Brown
Sent: March 19, 2021 08:59 AM
To: ceph-users
Subject: [ceph-users] ceph orch daemon add , separate db
I was having difficulty doing
Tony
> -Original Message-
> From: Andrew Walker-Brown
> Sent: Tuesday, March 16, 2021 9:18 AM
> To: Tony Liu ; Stefan Kooman ;
> Dave Hall ; ceph-users
> Subject: RE: [ceph-users] Re: Networking Idea/Question
>
> https://docs.ceph.com/en/latest/rados/co
> -Original Message-
> From: Stefan Kooman
> Sent: Tuesday, March 16, 2021 4:10 AM
> To: Dave Hall ; ceph-users
> Subject: [ceph-users] Re: Networking Idea/Question
>
> On 3/15/21 5:34 PM, Dave Hall wrote:
> > Hello,
> >
> > If anybody out there has tried this or thought about it, I'd
It may help if you could share how you added those OSDs.
This guide works for me.
https://docs.ceph.com/en/latest/cephadm/drivegroups/
Tony
From: Philip Brown
Sent: February 17, 2021 09:30 PM
To: ceph-users
Subject: [ceph-users] ceph orch and mixed
Hi,
This is with v15.2 and v15.2.8.
Once an OSD service is applied, it can't be removed.
It always shows up in "ceph orch ls".
"ceph orch rm " only marks it "unmanaged",
but doesn't actually remove it.
Is this expected?
Thanks!
Tony
work.
Thanks!
Tony
________
From: Tony Liu
Sent: February 14, 2021 02:01 PM
To: ceph-users@ceph.io; dev
Subject: [ceph-users] Is replacing OSD whose data is on HDD and DB is on SSD
supported?
Hi,
I've been trying with v15.2 and v15.2.8, no luck.
Wondering if this i
Never mind, the OSD daemon shows up in "orch ps" after a while.
Thanks!
Tony
____
From: Tony Liu
Sent: February 14, 2021 09:47 PM
To: Kenneth Waegeman; ceph-users
Subject: [ceph-users] Re: reinstalling node with orchestrator/cephadm
I foll
You can have BGP-ECMP to multiple HAProxy instances to support
active-active mode, instead of using keepalived for active-backup mode,
if the traffic volume does require multiple HAProxy instances.
Tony
From: Graham Allan
Sent: February 14, 2021 01:31 PM
Hi,
I've been trying with v15.2 and v15.2.8, no luck.
Wondering if this is actually supported or has ever worked for anyone?
Here is what I've done.
1) Create a cluster with 1 controller (mon and mgr) and 3 OSD nodes,
each of which has 1 SSD for DB and 8 HDDs for data.
2) OSD service spec.
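A sketch of what such a spec could look like (service_id, host_pattern and the exact filters are assumptions, not the spec actually used here):
```
cat > osd-spec.yaml <<'EOF'
service_type: osd
service_id: osd-spec
placement:
  host_pattern: 'ceph-osd-*'
data_devices:
  rotational: 1      # HDDs carry the data
db_devices:
  rotational: 0      # the SSD carries the DBs
EOF
ceph orch apply osd -i osd-spec.yaml --dry-run   # preview which devices would be used
```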
Regards
Jens
-Original Message-----
From: Tony Liu
Sent: 10. februar 2021 22:59
To: David Orman
Cc: Jens Hyllegaard (Soft Design A/S) ;
ceph-users@ceph.io
Subject: Re: [ceph-users] Re: db_devices doesn't show up in exported osd
service spec
Hi David,
===
# pvs
PV
Sent: February 10, 2021 01:19 PM
To: Tony Liu
Cc: Jens Hyllegaard (Soft Design A/S); ceph-users@ceph.io
Subject: Re: [ceph-users] Re: db_devices doesn't show up in exported osd
service spec
It's displaying sdb (what I assume you want to be used as a DB device) as
unavailable. What's "pvs"
To update, the OSD had data on HDD and DB on SSD.
After "ceph orch osd rm 12 --replace --force", waiting
till rebalancing was done and the daemon was stopped,
I ran "ceph orch device zap ceph-osd-2 /dev/sdd" to zap the device.
It cleared the PV, VG and LV for the data device, but not the DB device.
DB device issue
+-----+----------+------------+----------+----+----+
|osd  |osd-spec  |ceph-osd-1  |/dev/sdd  |-   |-   |
+-----+----------+------------+----------+----+----+
Thanks!
Tony
From: David Orman
Sent: February 10, 2021 11:02 AM
To: Tony Liu
Cc: Jens Hyllegaard (Soft Design A/S); ceph-users@ceph.io
filter_logic: AND
db_devices:
size: ':1TB'
It was usable in my test environment, and seems to work.
Regards
Jens
-Original Message-----
From: Tony Liu
Sent: 9. februar 2021 02:09
To: David Orman
Cc: ceph-users@ceph.io
Subject: [ceph-users] Re: db_devices doesn't show up in exported o
Hi,
I'd like to know how the DB device is expected to be handled by "orch osd rm".
What I see is that the DB device on SSD is untouched when the OSD on HDD is removed
or replaced. "orch device zap" removes the PV, VG and LV of the data device.
It doesn't touch the DB LV on the SSD.
To remove an OSD permanently, do I
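A hedged sketch of a full removal including the DB LV; the OSD id, host and device names are taken from this thread, and LV names should be double-checked before destroying anything:
```
ceph orch osd rm 12 --force                        # permanent removal (no --replace)
ceph orch device zap ceph-osd-2 /dev/sdd --force   # clears the data device (PV/VG/LV)
# the DB LV on the shared SSD is not touched by the zap; one way to clear it,
# run on the OSD host:
cephadm ceph-volume lvm zap --destroy --osd-id 12
```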
Hi David,
Could you show me an example of an OSD service spec YAML
to work around it by specifying a size?
Thanks!
Tony
From: David Orman
Sent: February 8, 2021 04:06 PM
To: Tony Liu
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Re: db_devices doesn't show
, is it pushed by something
(mgr?) or pulled by mon?
Thanks!
Tony
> -Original Message-
> From: Tony Liu
> Sent: Sunday, February 7, 2021 5:32 PM
> To: ceph-users
> Subject: [ceph-users] Re: Device is not available after zap
>
> I checked pvscan, vgscan, lvscan and
I checked pvscan, vgscan, lvscan and "ceph-volume lvm list" on the OSD node;
the zapped device doesn't show up anywhere.
Am I missing anything?
Thanks!
Tony
____
From: Tony Liu
Sent: February 7, 2021 05:27 PM
To: ceph-users
Subject: [ceph-use
Hi,
With v15.2.8, after zapping a device on an OSD node, it's still not available.
The reason is "locked, LVM detected". If I reboot the whole OSD node,
the device becomes available. There must be something not being
cleaned up. Any clues?
Thanks!
Tony
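A hedged sketch of manual cleanup for the "locked, LVM detected" state, to be run on the OSD host; the device and VG names are examples only:
```
lsblk /dev/sdd                   # check whether an LV is still mapped on the device
dmsetup ls | grep ceph           # leftover device-mapper entries
vgremove -f ceph-<vg-uuid>       # remove the leftover ceph volume group
pvremove /dev/sdd
wipefs -a /dev/sdd
```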
looked just like it. With rotational in the data and non-rotational in
> the db.
>
> First use applied fine. Afterwards it only uses the hdd, and not the ssd.
> Also, is there a way to remove an unused osd service.
> I manages to create osd.all-available-devices, when I tried to s
Here is the issue.
https://tracker.ceph.com/issues/47758
Thanks!
Tony
> -Original Message-
> From: Tony Liu
> Sent: Thursday, February 4, 2021 8:46 PM
> To: ceph-users@ceph.io
> Subject: [ceph-users] Re: replace OSD failed
>
> Here is the log from ceph-volume.
>
```
3c90-b7d5-4f13-8a58-f72761c1971b
ceph-a3886f74-3de9-4e6e-a983-8330eda0bd64
[2021-02-05 04:03:17,244][ceph_volume.process][INFO ] stderr Volume group "ceph-a3886f74-3de9-4e6e-a983-8330eda0bd64" has insufficient free space (572317 extents): 572318 required.
```
size was passed: 2.18 TB
Hi,
With 15.2.8, run "ceph orch osd rm 12 --replace --force";
PGs on osd.12 are remapped, osd.12 is removed from "ceph osd tree",
the daemon is removed from "ceph orch ps", and the device shows as "available"
in "ceph orch device ls". Everything seems good at this point.
Then dry-run service spec.
```
#
```
r patience!
Tony
> -Original Message-
> From: Frank Schilder
> Sent: Tuesday, February 2, 2021 11:47 PM
> To: Tony Liu ; ceph-users@ceph.io
> Subject: Re: replace OSD without PG remapping
>
> You asked about exactly this before:
> https://lists.ceph.io/hyperkitty