[ceph-users] Re: the image used size becomes 0 after export/import with snapshot

2023-12-04 Thread Tony Liu
Hi Ilya, That explains it. Thank you for clarification! Tony From: Ilya Dryomov Sent: December 4, 2023 09:40 AM To: Tony Liu Cc: ceph-users@ceph.io; d...@ceph.io Subject: Re: [ceph-users] the image used size becomes 0 after export/import with snapshot

[ceph-users] the image used size becomes 0 after export/import with snapshot

2023-11-27 Thread Tony Liu
Hi, I have an image with a snapshot and some changes after snapshot. ``` $ rbd du backup/f0408e1e-06b6-437b-a2b5-70e3751d0a26 NAME PROVISIONED USED

[ceph-users] import/export with --export-format 2

2023-11-25 Thread Tony Liu
Hi, src-image is 1GB (provisioned size). I did the following 3 tests. 1. rbd export src-image - | rbd import - dst-image 2. rbd export --export-format 2 src-image - | rbd import --export-format 2 - dst-image 3. rbd export --export-format 2 src-image - | rbd import - dst-image With #1 and #2,
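The three tests can be sketched as shell pipelines (pool and image names here are placeholders):

```shell
# 1. Plain export/import: a format-1 stream carrying image data only.
rbd export rbd/src-image - | rbd import - rbd/dst-image

# 2. Format-2 stream on both ends: carries snapshots and image metadata.
rbd export --export-format 2 rbd/src-image - \
  | rbd import --export-format 2 - rbd/dst-image

# 3. Mismatched: a format-2 stream fed to an import expecting format 1.
rbd export --export-format 2 rbd/src-image - | rbd import - rbd/dst-image
```

In test #3 the import side is not told about the format-2 stream framing, which is presumably why its result differs from #1 and #2.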

[ceph-users] Re: easy way to find out the number of allocated objects for a RBD image

2023-11-25 Thread Tony Liu
looking for? Zitat von Tony Liu : > Hi, > > Other than get all objects of the pool and filter by image ID, > is there any easier way to get the number of allocated objects for > a RBD image? > > What I really want to know is the actual usage of an image. > An allocated obj

[ceph-users] understand "extent"

2023-11-24 Thread Tony Liu
Hi, The context is RBD on bluestore. I did check extent on Wiki. I see "extent" when talking about snapshot and export/import. For example, when create a snapshot, we mark extents. When there is write to marked extents, we will make a copy. I also know that user data on block device maps to

[ceph-users] easy way to find out the number of allocated objects for a RBD image

2023-11-24 Thread Tony Liu
Hi, Other than getting all objects of the pool and filtering by image ID, is there any easier way to get the number of allocated objects for an RBD image? What I really want to know is the actual usage of an image. An allocated object could be used partially, but that's fine, no need to be 100%
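One approach, along the lines the question already mentions (list the pool and filter by the image's object prefix), can be sketched as follows; POOL and IMAGE are placeholders, and `block_name_prefix` comes from `rbd info`:

```shell
# Data objects of an image are named <block_name_prefix>.<hex index>.
PREFIX=$(rbd info POOL/IMAGE | awk '/block_name_prefix/ {print $2}')
# Count allocated objects by prefix (a full pool listing, so it is slow).
rados -p POOL ls | grep -c "^${PREFIX}"
```

If the image has the fast-diff feature enabled, `rbd du POOL/IMAGE` reports the used size quickly without listing the pool.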

[ceph-users] Re: rbd export-diff/import-diff hangs

2023-08-28 Thread Tony Liu
Figured it out. It's not rbd issue. Sorry for this false alarm. Thanks! Tony From: Tony Liu Sent: August 27, 2023 08:19 PM To: Eugen Block; ceph-users@ceph.io Subject: [ceph-users] Re: rbd export-diff/import-diff hangs It's export-diff from an in-use

[ceph-users] Re: rbd export-diff/import-diff hangs

2023-08-27 Thread Tony Liu
and exporting those? Or set up rbd mirroring? Zitat von Tony Liu : > To update, hanging happens when updating local image, not remote, networking > is not a concern here. Any advices how to look into it? > > Thanks! > Tony > ________ > From: Tony Liu >

[ceph-users] Re: rbd export with export-format 2 exports all snapshots?

2023-08-27 Thread Tony Liu
Thank you Alex for confirmation! Tony From: Alex Gorbachev Sent: August 27, 2023 05:29 PM To: Tony Liu Cc: d...@ceph.io; ceph-users@ceph.io Subject: Re: [ceph-users] rbd export with export-format 2 exports all snapshots? Tony, From what I recall having

[ceph-users] rbd export with export-format 2 exports all snapshots?

2023-08-27 Thread Tony Liu
Hi, Say, source image has snapshot s1, s2 and s3. I expect "export" behaves the same as "deep cp", when specify a snapshot, with "--export-format 2", only the specified snapshot and all snapshots earlier than that will be exported. What I see is that, no matter which snapshot I specify,
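A minimal reproduction of the observation, with hypothetical names:

```shell
# Name snapshot s2 explicitly; per this thread, the format-2 stream
# still carries s1, s2 and s3.
rbd export --export-format 2 rbd/src-image@s2 src-image.v2

# By contrast, the expectation was behavior like deep cp, which copies
# up to the named snapshot.
rbd deep cp rbd/src-image@s2 rbd/dst-image
```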

[ceph-users] Re: rbd export-diff/import-diff hangs

2023-08-27 Thread Tony Liu
To update, the hang happens when updating the local image, not the remote one, so networking is not a concern here. Any advice on how to look into it? Thanks! Tony From: Tony Liu Sent: August 26, 2023 10:43 PM To: d...@ceph.io; ceph-users@ceph.io Subject: [ceph-users] rbd

[ceph-users] rbd export-diff/import-diff hangs

2023-08-26 Thread Tony Liu
Hi, I'm using rbd import and export to copy image from one cluster to another. Also using import-diff and export-diff to update image in remote cluster. For example, "rbd --cluster local export-diff ... | rbd --cluster remote import-diff ...". Sometimes, the whole command is stuck. I can't tell
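The pattern in question, sketched with placeholder names and sizes (the 10G size is a placeholder and must match the source image):

```shell
# Bootstrap: create the destination image, then replay everything up to
# snap1; import-diff also creates snap1 on the remote image.
rbd --cluster remote create --size 10G dst/image
rbd --cluster local export-diff src/image@snap1 - \
  | rbd --cluster remote import-diff - dst/image

# Incremental update: snapshot again and ship only the delta since snap1.
rbd --cluster local snap create src/image@snap2
rbd --cluster local export-diff --from-snap snap1 src/image@snap2 - \
  | rbd --cluster remote import-diff - dst/image
```

Note that import-diff assumes the destination already carries the from-snapshot, which the bootstrap step provides.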

[ceph-users] Re: snapshot timestamp

2023-08-04 Thread Tony Liu
Thank you Ilya for confirmation! Tony From: Ilya Dryomov Sent: August 4, 2023 04:51 AM To: Tony Liu Cc: d...@ceph.io; ceph-users@ceph.io Subject: Re: [ceph-users] snapshot timestamp On Fri, Aug 4, 2023 at 7:49 AM Tony Liu wrote: > > Hi, >

[ceph-users] Re: What's the max of snap ID?

2023-08-04 Thread Tony Liu
ould happen if you actually would reach it. I also > might be misunderstanding so maybe someone with more knowledge can > confirm oder correct me. > > [1] https://github.com/ceph/ceph/blob/main/src/librbd/ImageCtx.cc#L328 > > Zitat von Tony Liu : > > > Hi, >

[ceph-users] snapshot timestamp

2023-08-03 Thread Tony Liu
Hi, We know snapshot is on a point of time. Is this point of time tracked internally by some sort of sequence number, or the timestamp showed by "snap ls", or something else? I noticed that when "deep cp", the timestamps of all snapshot are changed to copy-time. Say I create a snapshot at 1PM

[ceph-users] What's the max of snap ID?

2023-08-03 Thread Tony Liu
Hi, There is a snap ID for each snapshot. How is this ID allocated, sequentially? Did some tests, it seems this ID is per pool, starting from 4 and always going up. Is that correct? What's the max of this ID? What's going to happen when ID reaches the max, going back to start from 4 again?
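If, as suggested in the thread, the snap ID is a monotonically increasing per-pool counter, exhaustion is a theoretical rather than practical concern. Assuming a 64-bit counter (snapid_t in the Ceph source), even a million snapshots per second would take hundreds of thousands of years to wrap:

```shell
# Years to consume 2^64 snap IDs at 10^6 snapshots/second
# (31557600 seconds per Julian year).
python3 -c 'print(2**64 // (10**6 * 31557600))'
# prints 584542
```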

[ceph-users] Re: [rbd-mirror] can't enable journal-based image mirroring

2023-07-31 Thread Tony Liu
In case the image has parent, the parent image also needs to be mirrored. After enabling the mirroring on parent image, it works as expected. Thanks! Tony From: Tony Liu Sent: July 31, 2023 08:13 AM To: ceph-users@ceph.io; d...@ceph.io Subject: [ceph

[ceph-users] [rbd-mirror] can't enable journal-based image mirroring

2023-07-31 Thread Tony Liu
Hi, The Ceph cluster is with Pacific v16.2.10. "rbd mirror image enable journal" seems not working. Any clues what I'm missing? There is no error messages from the CLI. Any way to troubleshooting? ``` # rbd mirror pool info volume-ssd Mode: image Site Name: 35d050c0-77c0-11eb-9242-2cea7ff9d07c
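For reference, a minimal enable sequence (POOL/IMAGE are placeholders). Journal-based mirroring requires the exclusive-lock and journaling image features, and, per the follow-up in this thread, a clone's parent image must be mirrored first:

```shell
rbd feature enable POOL/IMAGE exclusive-lock,journaling
rbd mirror image enable POOL/IMAGE journal
# Check the result.
rbd mirror image status POOL/IMAGE
```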

[ceph-users] Re: configure rgw

2023-07-30 Thread Tony Liu
: Tony Liu Sent: July 29, 2023 11:44 PM To: ceph-users@ceph.io; d...@ceph.io Subject: [ceph-users] Re: configure rgw A few updates. 1. "radosgw-admin --show-config -n client.rgw.qa.ceph-1.hzfrwq" doesn't show actual running config. 2. "ceph --admin-daemon /var/run/ceph/69f94d08

[ceph-users] Re: configure rgw

2023-07-30 Thread Tony Liu
from config, restart rgw, rgw_frontends goes back to default "port=7480". Add it back to config, restart rgw. Now rgw_frontends is what I expect. The logic doesn't make much sense to me. I'd assume that unit.meta has something to do with this, hopefully someone could shed light here. Thank

[ceph-users] configure rgw

2023-07-29 Thread Tony Liu
Hi, I'm using Pacific v16.2.10 container image, deployed by cephadm. I used to manually build config file for rgw, deploy rgw, put config file in place and restart rgw. It works fine. Now, I'd like to put rgw config into config db. I tried with client.rgw, but the config is not taken by rgw.

[ceph-users] Re: resume RBD mirror on another host

2023-07-13 Thread Tony Liu
Super! Thanks Ilya! Tony From: Ilya Dryomov Sent: July 13, 2023 01:30 PM To: Tony Liu Cc: d...@ceph.io; ceph-users@ceph.io Subject: Re: [ceph-users] resume RBD mirror on another host On Thu, Jul 13, 2023 at 10:23 PM Ilya Dryomov wrote: > > On Th

[ceph-users] resume RBD mirror on another host

2023-07-13 Thread Tony Liu
Hi, How does RBD mirror track mirroring progress, on local storage? Say RBD mirror is running on host-1; when host-1 goes down, RBD mirror is started on host-2. In that case, will RBD mirror on host-2 continue the mirroring? Thanks! Tony ___ ceph-users


[ceph-users] librbd Python asyncio

2023-07-09 Thread Tony Liu
Hi, Wondering if there is librbd supporting Python asyncio, or any plan to do that? Thanks! Tony ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] Re: import OSD after host OS reinstallation

2023-04-28 Thread Tony Liu
e OSDs. I > will take a closer look. > > Zitat von Tony Liu : > >> Tried [1] already, but got error. >> Created no osd(s) on host ceph-4; already created? >> >> The error is from [2] in deploy_osd_daemons_for_existing_osds(). >> >> Not sure what's mis

[ceph-users] Re: import OSD after host OS reinstallation

2023-04-27 Thread Tony Liu
/en/pacific/cephadm/services/osd/#activate-existing-osds [2] https://github.com/ceph/ceph/blob/0a5b3b373b8a5ba3081f1f110cec24d82299cac8/src/pybind/mgr/cephadm/services/osd.py#L196 Thanks! Tony From: Tony Liu Sent: April 27, 2023 10:20 PM To: ceph-users

[ceph-users] import OSD after host OS reinstallation

2023-04-27 Thread Tony Liu
Hi, The cluster is Pacific, deployed by cephadm on containers. The case is to import OSDs after host OS reinstallation. All OSDs are SSDs with DB/WAL and data together. Did some research, but was not able to find a working solution. Wondering if anyone has experience with this? What needs to

[ceph-users] Re: rbd cp vs. rbd clone + rbd flatten

2023-03-27 Thread Tony Liu
Thank you Ilya! Tony From: Ilya Dryomov Sent: March 27, 2023 10:28 AM To: Tony Liu Cc: ceph-users@ceph.io; d...@ceph.io Subject: Re: [ceph-users] rbd cp vs. rbd clone + rbd flatten On Wed, Mar 22, 2023 at 10:51 PM Tony Liu wrote: > > Hi, > >

[ceph-users] rbd cp vs. rbd clone + rbd flatten

2023-03-22 Thread Tony Liu
Hi, I want 1) copy a snapshot to an image, 2) no need to copy snapshots, 3) no dependency after copy, 4) all same image format 2. In that case, is rbd cp the same as rbd clone + rbd flatten? I ran some tests, seems like it, but want to confirm, in case of missing anything. Also, seems cp is a
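The two approaches compared, with placeholder names:

```shell
# Option A: one-step copy of the snapshot's contents into a new image.
rbd cp rbd/src@s1 rbd/dst

# Option B: clone from the snapshot, then flatten to remove the
# parent dependency.
rbd snap protect rbd/src@s1
rbd clone rbd/src@s1 rbd/dst
rbd flatten rbd/dst
```

Either way the result is a standalone format-2 image carrying none of the source's snapshots.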

[ceph-users] Re: Is it a bug that OSD crashed when it's full?

2022-11-01 Thread Tony Liu
Thank you Igor! Tony From: Igor Fedotov Sent: November 1, 2022 04:34 PM To: Tony Liu; ceph-users@ceph.io; d...@ceph.io Subject: Re: [ceph-users] Re: Is it a bug that OSD crashed when it's full? Hi Tony, first of all let me share my understanding

[ceph-users] Re: Is it a bug that OSD crashed when it's full?

2022-11-01 Thread Tony Liu
. Thanks! Tony From: Tony Liu Sent: October 31, 2022 05:46 PM To: ceph-users@ceph.io; d...@ceph.io Subject: [ceph-users] Is it a bug that OSD crashed when it's full? Hi, Based on doc, Ceph prevents you from writing to a full OSD so that you don’t lose data

[ceph-users] Re: Is it a bug that OSD crashed when it's full?

2022-10-31 Thread Tony Liu
! Tony From: Zizon Qiu Sent: October 31, 2022 08:13 PM To: Tony Liu Cc: ceph-users@ceph.io; d...@ceph.io Subject: Re: Is it a bug that OSD crashed when it's full? 15: (rocksdb::DBImpl::WriteLevel0TableForRecovery(int, rocksdb::ColumnFamilyData*, rocksdb

[ceph-users] Re: Is it a bug that OSD crashed when it's full?

2022-10-31 Thread Tony Liu
23: (RocksDBStore::do_open(std::ostream&, bool, bool, std::__cxx11::basic_string, std::allocator > const&)+0x10c1) [0x56102ee39c41] 24: (BlueStore::_open_db(bool, bool, bool)+0x8c7) [0x56102ec9de17] 25: (BlueStore::_open_db_and_around(bool, bool)+0x2f7) [

[ceph-users] Is it a bug that OSD crashed when it's full?

2022-10-31 Thread Tony Liu
Hi, Based on doc, Ceph prevents you from writing to a full OSD so that you don’t lose data. In my case, with v16.2.10, OSD crashed when it's full. Is this expected or some bug? I'd expect write failure instead of OSD crash. It keeps crashing when tried to bring it up. Is there any way to bring

[ceph-users] Re: Ceph configuration for rgw

2022-09-26 Thread Tony Liu
h yet, but you seem to be right, > changing the config isn't applied immediately, but only after a > service restart ('ceph orch restart rgw.ebl-rgw'). Maybe that's on > purpose? So you can change your config now and apply it later when a > service interruption is not critical. > >

[ceph-users] Ceph configuration for rgw

2022-09-24 Thread Tony Liu
Hi, The cluster is Pacific 16.2.10 with containerized services and managed by cephadm. "config show" shows running configuration. Which daemons are supported? mon, mgr and osd all work, but rgw doesn't. Is this expected? I tried with client. and without "client"; neither works. When issuing "config show",

[ceph-users] host disk used by osd container

2022-06-15 Thread Tony Liu
Hi, "df -h" on the OSD host shows 187G is being used. "du -sh /" shows 36G. bluefs_buffered_io is enabled here. What's taking that 150G disk space, cache? Then where is that cache file? Any way to configure it smaller? # free -h totalusedfree shared

[ceph-users] Re: Ceph 15 and Podman compatability

2022-05-19 Thread Tony Liu
I hit couple issues with Podman when deploy Ceph 16.2. https://www.spinics.net/lists/ceph-users/msg71367.html https://www.spinics.net/lists/ceph-users/msg71346.html Switching back to Docker, all work fine. Thanks! Tony From: Robert Sander Sent: May 19,

[ceph-users] rbd mirror between clusters with private "public" network

2022-04-25 Thread Tony Liu
Hi, I understand that, for rbd mirror to work, the rbd-mirror service requires connectivity to all nodes in both clusters. In my case, for security purposes, the "public" network is actually a private network, which is not routable externally. All internal RBD clients are on that private

[ceph-users] Re: the easiest way to copy image to another cluster

2022-04-21 Thread Tony Liu
Thank you Anthony! I agree that rbd-mirror is more reliable and manageable and it's not that complicated to user. I will try both and see which works better for me. Tony From: Anthony D'Atri Sent: April 21, 2022 09:02 PM To: Tony Liu Cc: ceph-users

[ceph-users] Re: the easiest way to copy image to another cluster

2022-04-21 Thread Tony Liu
To: Tony Liu Cc: ceph-users@ceph.io; d...@ceph.io Subject: Re: [ceph-users] the easiest way to copy image to another cluster Hi Tony, Have a look at rbd export and rbd import, they dump the image to a file or stdout. You can pipe the rbd export directly into an rbd import assuming you have a host

[ceph-users] the easiest way to copy image to another cluster

2022-04-21 Thread Tony Liu
Hi, I want to copy an image, which is not being used, to another cluster. rbd-mirror would do it, but rbd-mirror is designed to handle image which is being used/updated, to ensure the mirrored image is always consistent with the source. I wonder if there is any easier way to copy an image without

[ceph-users] Re: logging with container

2022-03-24 Thread Tony Liu
Thank you Adam! After "orch daemon redeploy", all works as expected. Tony From: Adam King Sent: March 24, 2022 11:50 AM To: Tony Liu Cc: ceph-users@ceph.io; d...@ceph.io Subject: Re: [ceph-users] Re: logging with container Hmm, I'm assuming fro

[ceph-users] Re: logging with container

2022-03-24 Thread Tony Liu
Any comments on this? Thanks! Tony From: Tony Liu Sent: March 21, 2022 10:01 PM To: Adam King Cc: ceph-users@ceph.io; d...@ceph.io Subject: [ceph-users] Re: logging with container Hi Adam, When I do "ceph tell mon.ceph-1 config set log_to_file tru

[ceph-users] Re: logging with container

2022-03-21 Thread Tony Liu
t;log_to_stderr" doesn't help. Thanks! Tony ________ From: Tony Liu Sent: March 21, 2022 09:41 PM To: Adam King Cc: ceph-users@ceph.io; d...@ceph.io Subject: [ceph-users] Re: logging with container Hi Adam, # ceph config get mgr log_to_file true # ceph con

[ceph-users] Re: logging with container

2022-03-21 Thread Tony Liu
mgr and osd, but why there is osd log only, but no mgr log? Thanks! Tony From: Adam King Sent: March 21, 2022 08:26 AM To: Tony Liu Cc: ceph-users@ceph.io; d...@ceph.io Subject: Re: [ceph-users] logging with container Hi Tony, Afaik those container flags j

[ceph-users] Re: bind monitoring service to specific network and port

2022-03-21 Thread Tony Liu
It's probably again related to podman. After switching back to Docker, this works fine. Thanks! Tony From: Tony Liu Sent: March 20, 2022 06:31 PM To: ceph-users@ceph.io; d...@ceph.io Subject: [ceph-users] bind monitoring service to specific network

[ceph-users] Re: orch apply failed to use insecure private registry

2022-03-21 Thread Tony Liu
gistry-login my_url my_username my_password Zitat von Tony Liu : > Hi, > > I am using Pacific v16.2 container image. I put images on a insecure > private registry. > I am using podman and /etc/containers/registries.conf is set with > that insecure private registry. > "cephadm boot

[ceph-users] logging with container

2022-03-20 Thread Tony Liu
Hi, After reading through doc, it's still not very clear to me how logging works with container. This is with Pacific v16.2 container. In OSD container, I see this. ``` /usr/bin/ceph-osd -n osd.16 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true
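Per the resolution in the Re: messages of this thread above, file logging is switched on via the config database and picked up after a daemon redeploy:

```shell
ceph config set global log_to_file true
ceph config set global mon_cluster_log_to_file true
# Per the thread, the container flags (--default-log-to-file=false) only
# took effect after a redeploy; mon.ceph-1 is this thread's example daemon.
ceph orch daemon redeploy mon.ceph-1
```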

[ceph-users] bind monitoring service to specific network and port

2022-03-20 Thread Tony Liu
Hi, https://docs.ceph.com/en/pacific/cephadm/services/monitoring/#networks-and-ports When I try that with the Pacific v16.2 image, port works, but network doesn't. No matter which network is specified in the yaml file, orch apply always binds the service to *. Is this a known issue or something I am missing?

[ceph-users] orch apply failed to use insecure private registry

2022-03-20 Thread Tony Liu
Hi, I am using the Pacific v16.2 container image. I put images on an insecure private registry. I am using podman and /etc/containers/registries.conf is set with that insecure private registry. "cephadm bootstrap" works fine to pull the image and setup the first node. When "ceph orch apply -i

[ceph-users] [rgw][dashboard] dashboard can't access rgw behind proxy

2022-01-19 Thread Tony Liu
Hi, I have 3 rgw services behind HAProxy. rgw-api-host and rgw-api-port are set properly to the VIP and port. "curl http://:" works fine. But dashboard complains that it can't find rgw service on that vip:port. If I set rgw-api-host directly to the node, it also works fine. I ran tcpdump on

[ceph-users] Re: [Ceph-community] Why MON,MDS,MGR are on Public network?

2021-11-29 Thread Tony Liu
Is there any measurement about how much bandwidth will be taken by private traffic vs. public/client traffic when they are on the same network? I am currently having two 2x10G bondings for public and private, the intention is to provide 2x10G bandwidth for clients. I do understand the overhead

[ceph-users] Re: [EXTERNAL] Re: Why you might want packages not containers for Ceph deployments

2021-11-18 Thread Tony Liu
Instead of complaining, taking some time to learn more about containers would help. Tony From: Marc Sent: November 18, 2021 10:50 AM To: Pickett, Neale T; Hans van den Bogert; ceph-users@ceph.io Subject: [ceph-users] Re: [EXTERNAL] Re: Why you might want

[ceph-users] Re: Ceph cluster Sync

2021-10-12 Thread Tony Liu
For PR-DR case, I am using RGW multi-site support to replicate backup image. Tony From: Manuel Holtgrewe Sent: October 12, 2021 11:40 AM To: dhils...@performair.com Cc: mico...@gmail.com; ceph-users Subject: [ceph-users] Re: Ceph cluster Sync To chime in

[ceph-users] etcd support

2021-09-20 Thread Tony Liu
Hi, I wonder if anyone could share some experience with backing etcd by Ceph. My users build Kubernetes clusters in VMs on OpenStack with Ceph. With an HDD (DB/WAL on SSD) volume, the etcd performance test sometimes fails because of latency. With an SSD (all-SSD) volume, it works fine. I wonder if there is

[ceph-users] Re: Cannot create a container, mandatory "Storage Policy" dropdown field is empty

2021-09-13 Thread Tony Liu
Update /usr/lib/python3.6/site-packages/swiftclient/client.py and restart container horizon. This is to fix the error message on dashboard when it tries to retrieve policy list. -parsed = urlparse(urljoin(url, '/info')) +parsed = urlparse(urljoin(url, '/swift/info')) Tony

[ceph-users] Re: debug RBD timeout issue

2021-09-08 Thread Tony Liu
Shalygin Sent: September 8, 2021 08:29 AM To: Tony Liu Cc: ceph-users@ceph.io; d...@ceph.io Subject: Re: [ceph-users] debug RBD timeout issue What is ceoh.conf for this rbd client? k Sent from my iPhone > On 7 Sep 2021, at 19:54, Tony Liu wrote: > > > I have OpenStack Ussuri and

[ceph-users] Re: debug RBD timeout issue

2021-09-08 Thread Tony Liu
is between successful and failing volumes. Is it the size or anything else? Which glance stores are enabled? Can you reproduce it, for example 'rbd create...' with the cinder user? Then you could increase 'debug_rbd' and see if that reveals anything. Zitat von Tony Liu : > Hi, > > I ha

[ceph-users] debug RBD timeout issue

2021-09-07 Thread Tony Liu
Hi, I have OpenStack Ussuri and Ceph Octopus. Sometimes, I see timeout when create or delete volumes. I can see RBD timeout from cinder-volume. Has anyone seen such issue? I'd like to see what happens on Ceph. Which service should I look into? Is it stuck with mon or any OSD? Any option to

[ceph-users] Re: rbd object mapping

2021-08-09 Thread Tony Liu
Thank you Konstantin! Tony From: Konstantin Shalygin Sent: August 9, 2021 01:20 AM To: Tony Liu Cc: ceph-users; d...@ceph.io Subject: Re: [ceph-users] rbd object mapping On 8 Aug 2021, at 20:10, Tony Liu mailto:tonyliu0...@hotmail.com>> wrote:

[ceph-users] Re: rbd object mapping

2021-08-08 Thread Tony Liu
>> There are two types of "object", RBD-image-object and 8MiB-block-object. >> When create a RBD image, a RBD-image-object is created and 12800 >> 8MiB-block-objects >> are allocated. That whole RBD-image-object is mapped to a single PG, which >> is mapped >> to 3 OSDs (replica 3). That means,

[ceph-users] Re: rbd object mapping

2021-08-07 Thread Tony Liu
anks! Tony From: Konstantin Shalygin Sent: August 7, 2021 11:35 AM To: Tony Liu Cc: ceph-users; d...@ceph.io Subject: Re: [ceph-users] rbd object mapping Object map show where your object with any object name will be placed in defined pool with your crush map, and which of osd

[ceph-users] rbd object mapping

2021-08-07 Thread Tony Liu
Hi, This shows one RBD image is treated as one object, and it's mapped to one PG. "object" here means a RBD image. # ceph osd map vm fcb09c9c-4cd9-44d8-a20b-8961c6eedf8e_disk osdmap e18381 pool 'vm' (4) object 'fcb09c9c-4cd9-44d8-a20b-8961c6eedf8e_disk' -> pg 4.c7a78d40 (4.0) -> up ([4,17,6],
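The mapping can also be followed by hand. An image's data objects are named `<block_name_prefix>.<16-hex-digit index>`, where the prefix comes from `rbd info`; the sketch below uses a hypothetical prefix and the 4 MiB default object size (the thread's image uses 8 MiB blocks):

```shell
# Hypothetical prefix, as reported by: rbd info vm/IMAGE
prefix=rbd_data.abc123
# With 4 MiB objects, byte offset 100 MiB falls in object index 25.
obj=$(printf '%s.%016x' "$prefix" $((100*1024*1024 / (4*1024*1024))))
echo "$obj"   # rbd_data.abc123.0000000000000019
# Map that RADOS object to its PG and acting OSDs (needs a live cluster):
#   ceph osd map vm "$obj"
```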

[ceph-users] Re: [cinder-backup][ceph] replicate volume between sites

2021-07-31 Thread Tony Liu
From: Tony Liu Sent: July 30, 2021 09:16 PM To: openstack-discuss; ceph-users Subject: [ceph-users] [cinder-backup][ceph] replicate volume between sites Hi, I have two sites with OpenStack Victoria deployed by Kolla and Ceph Octopus deployed by cephadm. As what I know, either

[ceph-users] [cinder-backup][ceph] replicate volume between sites

2021-07-30 Thread Tony Liu
Hi, I have two sites with OpenStack Victoria deployed by Kolla and Ceph Octopus deployed by cephadm. As what I know, either Swift (implemented by RADOSGW) or RBD is supported to be the backend of cinder-backup. My intention is to use one of those option to replicate Cinder volume from one site to

[ceph-users] Re: Do I need to update ceph.conf and restart each OSD after adding more MONs?

2021-03-28 Thread Tony Liu
Thank you Stefan and Josh! Tony From: Josh Baergen Sent: March 28, 2021 08:28 PM To: Tony Liu Cc: ceph-users@ceph.io Subject: Re: [ceph-users] Re: Do I need to update ceph.conf and restart each OSD after adding more MONs? As was mentioned in this thread

[ceph-users] Re: memory consumption by osd

2021-03-27 Thread Tony Liu
ers:219061308 kB > Cached: 2066532 kB > SwapCached: 928 kB > Active: 142272648 kB > Inactive: 109641772 kB > .. > > > Thanks! > Tony > > From: Tony Liu > Sent: March 27, 2021 01:25 PM >

[ceph-users] Re: memory consumption by osd

2021-03-27 Thread Tony Liu
Restarting OSD frees buff/cache memory. What kind of data is there? Is there any configuration to control this memory allocation? Thanks! Tony From: Tony Liu Sent: March 27, 2021 06:10 PM To: ceph-users Subject: [ceph-users] Re: memory consumption by osd

[ceph-users] Re: memory consumption by osd

2021-03-27 Thread Tony Liu
MemTotal: 263454780 kB MemFree: 2212484 kB MemAvailable: 226842848 kB Buffers:219061308 kB Cached: 2066532 kB SwapCached: 928 kB Active: 142272648 kB Inactive: 109641772 kB .. Thanks! Tony From: Tony

[ceph-users] memory consumption by osd

2021-03-27 Thread Tony Liu
Hi, Here is a snippet from top on a node with 10 OSDs. === MiB Mem : 257280.1 total, 2070.1 free, 31881.7 used, 223328.3 buff/cache MiB Swap: 128000.0 total, 126754.7 free, 1245.3 used. 221608.0 avail Mem PID USER PR NIVIRTRESSHR S %CPU %MEM

[ceph-users] Re: Do I need to update ceph.conf and restart each OSD after adding more MONs?

2021-03-27 Thread Tony Liu
at means I still need to restart all services to apply the update, right? Is this supposed to be part of adding MONs as well, or additional manual step? Thanks! Tony ____ From: Tony Liu Sent: March 27, 2021 12:53 PM To: Stefan Kooman; ceph-users@ceph.io Subject: [

[ceph-users] Re: Do I need to update ceph.conf and restart each OSD after adding more MONs?

2021-03-27 Thread Tony Liu
and restart service. Tony From: Tony Liu Sent: March 27, 2021 12:20 PM To: Stefan Kooman; ceph-users@ceph.io Subject: [ceph-users] Re: Do I need to update ceph.conf and restart each OSD after adding more MONs? I expanded MON from 1 to 3 by updating orch

[ceph-users] Re: Do I need to update ceph.conf and restart each OSD after adding more MONs?

2021-03-27 Thread Tony Liu
ony From: Stefan Kooman Sent: March 26, 2021 12:22 PM To: Tony Liu; ceph-users@ceph.io Subject: Re: [ceph-users] Do I need to update ceph.conf and restart each OSD after adding more MONs? On 3/26/21 6:06 PM, Tony Liu wrote: > Hi, > > Do I need

[ceph-users] Do I need to update ceph.conf and restart each OSD after adding more MONs?

2021-03-26 Thread Tony Liu
Hi, Do I need to update ceph.conf and restart each OSD after adding more MONs? This is with 15.2.8 deployed by cephadm. When adding MON, "mon_host" should be updated accordingly. Given [1], is that update "the monitor cluster’s centralized configuration database" or "runtime overrides set by an
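One hedged note on this: once connected, clients and daemons learn the live monitor set from the monmap, but `mon_host` in ceph.conf still matters for first contact, so regenerating the file is reasonable after expanding the MONs:

```shell
# Regenerate a minimal ceph.conf reflecting the current monitors.
ceph config generate-minimal-conf > /etc/ceph/ceph.conf.new
mv /etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
```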

[ceph-users] Re: ceph octopus mysterious OSD crash

2021-03-19 Thread Tony Liu
Are you sure the OSD is with DB/WAL on SSD? Tony From: Philip Brown Sent: March 19, 2021 02:49 PM To: Eugen Block Cc: ceph-users Subject: [ceph-users] Re: [BULK] Re: Re: ceph octopus mysterious OSD crash Wow. My expectations have been adjusted. Thank

[ceph-users] Re: ceph orch daemon add , separate db

2021-03-19 Thread Tony Liu
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/EC45YMDJZD3T6TQINGM222H2H4RZABJ4/ From: Philip Brown Sent: March 19, 2021 08:59 AM To: ceph-users Subject: [ceph-users] ceph orch daemon add , separate db I was having difficulty doing

[ceph-users] Re: Networking Idea/Question

2021-03-16 Thread Tony Liu
Tony > -Original Message- > From: Andrew Walker-Brown > Sent: Tuesday, March 16, 2021 9:18 AM > To: Tony Liu ; Stefan Kooman ; > Dave Hall ; ceph-users > Subject: RE: [ceph-users] Re: Networking Idea/Question > > https://docs.ceph.com/en/latest/rados/co

[ceph-users] Re: Networking Idea/Question

2021-03-16 Thread Tony Liu
> -Original Message- > From: Stefan Kooman > Sent: Tuesday, March 16, 2021 4:10 AM > To: Dave Hall ; ceph-users > Subject: [ceph-users] Re: Networking Idea/Question > > On 3/15/21 5:34 PM, Dave Hall wrote: > > Hello, > > > > If anybody out there has tried this or thought about it, I'd

[ceph-users] Re: ceph orch and mixed SSD/rotating disks

2021-02-18 Thread Tony Liu
It may help if you could share how you added those OSDs. This guide works for me. https://docs.ceph.com/en/latest/cephadm/drivegroups/ Tony From: Philip Brown Sent: February 17, 2021 09:30 PM To: ceph-users Subject: [ceph-users] ceph orch and mixed

[ceph-users] can't remove osd service by "ceph orch rm "

2021-02-15 Thread Tony Liu
Hi, This is with v15.2 and v15.2.8. Once an OSD service is applied, it can't be removed. It always shows up in "ceph orch ls". "ceph orch rm " only marks it "unmanaged", but does not actually remove it. Is this expected? Thanks! Tony ___ ceph-users

[ceph-users] Re: Is replacing OSD whose data is on HDD and DB is on SSD supported?

2021-02-15 Thread Tony Liu
work. Thanks! Tony ________ From: Tony Liu Sent: February 14, 2021 02:01 PM To: ceph-users@ceph.io; dev Subject: [ceph-users] Is replacing OSD whose data is on HDD and DB is on SSD supported? ​Hi, I've been trying with v15.2 and v15.2.8, no luck. Wondering if this i

[ceph-users] Re: reinstalling node with orchestrator/cephadm

2021-02-15 Thread Tony Liu
Never mind, the OSD daemon shows up in "orch ps" after a while. Thanks! Tony ____ From: Tony Liu Sent: February 14, 2021 09:47 PM To: Kenneth Waegeman; ceph-users Subject: [ceph-users] Re: reinstalling node with orchestrator/cephadm I foll

[ceph-users] Re: share haproxy config for radosgw [EXT]

2021-02-14 Thread Tony Liu
You can have BGP-ECMP to multiple HAProxy instances to support active-active mode, instead of using keepalived for active-backup mode, if the traffic amount does required multiple HAProxy instances. Tony From: Graham Allan Sent: February 14, 2021 01:31 PM

[ceph-users] Is replacing OSD whose data is on HDD and DB is on SSD supported?

2021-02-14 Thread Tony Liu
​Hi, I've been trying with v15.2 and v15.2.8, no luck. Wondering if this is actually supported or ever worked for anyone? Here is what I've done. 1) Create a cluster with 1 controller (mon and mgr) and 3 OSD nodes, each of which is with 1 SSD for DB and 8 HDDs for data. 2) OSD service spec.

[ceph-users] Re: db_devices doesn't show up in exported osd service spec

2021-02-11 Thread Tony Liu
ards Jens -Original Message----- From: Tony Liu Sent: 10. februar 2021 22:59 To: David Orman Cc: Jens Hyllegaard (Soft Design A/S) ; ceph-users@ceph.io Subject: Re: [ceph-users] Re: db_devices doesn't show up in exported osd service spec Hi David, === # pvs PV

[ceph-users] Re: db_devices doesn't show up in exported osd service spec

2021-02-10 Thread Tony Liu
Sent: February 10, 2021 01:19 PM To: Tony Liu Cc: Jens Hyllegaard (Soft Design A/S); ceph-users@ceph.io Subject: Re: [ceph-users] Re: db_devices doesn't show up in exported osd service spec It's displaying sdb (what I assume you want to be used as a DB device) as unavailable. What's "pvs"

[ceph-users] Re: Device is not available after zap

2021-02-10 Thread Tony Liu
To update, the OSD had data on HDD and DB on SSD. After "ceph orch osd rm 12 --replace --force" and wait till rebalancing is done and daemon is stopped, I ran "ceph orch device zap ceph-osd-2 /dev/sdd" to zap the device. It cleared PV, VG and LV for data device, but not DB device. DB device issue

[ceph-users] Re: db_devices doesn't show up in exported osd service spec

2021-02-10 Thread Tony Liu
+ |osd |osd-spec |ceph-osd-1 |/dev/sdd |- |-| +-+--++--++-+ Thanks! Tony From: David Orman Sent: February 10, 2021 11:02 AM To: Tony Liu Cc: Jens Hyllegaard (Soft Design A/S); ceph-users@ceph.io

[ceph-users] Re: db_devices doesn't show up in exported osd service spec

2021-02-09 Thread Tony Liu
lter_logic: AND db_devices: size: ':1TB' It was usable in my test environment, and seems to work. Regards Jens -Original Message----- From: Tony Liu Sent: 9. februar 2021 02:09 To: David Orman Cc: ceph-users@ceph.io Subject: [ceph-users] Re: db_devices doesn't show up in exported o

[ceph-users] How is DB handled when remove/replace and add OSD?

2021-02-09 Thread Tony Liu
Hi, I'd like to know how DB device is expected to be handled by "orch osd rm". What I see is that, DB device on SSD is untouched when OSD on HDD is removed or replaced. "orch device zap" removes PV, VG and LV of the device. It doesn't touch the DB LV on SSD. To remove an OSD permanently, do I
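A sketch of cleaning up the leftover DB LV by hand (VG/LV names here are hypothetical; double-check with `ceph-volume lvm list` which LV belonged to the removed OSD):

```shell
# Destroy the orphaned DB LV left on the shared SSD.
ceph-volume lvm zap --destroy /dev/ceph-db-vg/osd-12-db
# Or with plain LVM tools:
#   lvremove ceph-db-vg/osd-12-db
```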

[ceph-users] Re: db_devices doesn't show up in exported osd service spec

2021-02-08 Thread Tony Liu
Hi David, Could you show me an example of OSD service spec YAML to workaround it by specifying size? Thanks! Tony From: David Orman Sent: February 8, 2021 04:06 PM To: Tony Liu Cc: ceph-users@ceph.io Subject: Re: [ceph-users] Re: db_devices doesn't show

[ceph-users] Re: Device is not available after zap

2021-02-07 Thread Tony Liu
, is it pushed by something (mgr?) or pulled by mon? Thanks! Tony > -Original Message- > From: Tony Liu > Sent: Sunday, February 7, 2021 5:32 PM > To: ceph-users > Subject: [ceph-users] Re: Device is not available after zap > > I checked pvscan, vgscan, lvscan and

[ceph-users] Re: Device is not available after zap

2021-02-07 Thread Tony Liu
I checked pvscan, vgscan, lvscan and "ceph-volume lvm list" on the OSD node, that zapped device doesn't show anywhere. Anything missing? Thanks! Tony ____ From: Tony Liu Sent: February 7, 2021 05:27 PM To: ceph-users Subject: [ceph-use

[ceph-users] Device is not available after zap

2021-02-07 Thread Tony Liu
Hi, With v15.2.8, after zapping a device on an OSD node, it's still not available. The reason is "locked, LVM detected". If I reboot the whole OSD node, then the device becomes available. There must be something not being cleaned up. Any clues? Thanks! Tony

[ceph-users] Re: db_devices doesn't show up in exported osd service spec

2021-02-06 Thread Tony Liu
looked just like it. With rotational in the data and non-rotational in > the db. > > First use applied fine. Afterwards it only uses the hdd, and not the ssd. > Also, is there a way to remove an unused osd service. > I manages to create osd.all-available-devices, when I tried to s

[ceph-users] Re: replace OSD failed

2021-02-04 Thread Tony Liu
Here is the issue. https://tracker.ceph.com/issues/47758 Thanks! Tony > -Original Message- > From: Tony Liu > Sent: Thursday, February 4, 2021 8:46 PM > To: ceph-users@ceph.io > Subject: [ceph-users] Re: replace OSD failed > > Here is the log from ceph-volume. >

[ceph-users] Re: replace OSD failed

2021-02-04 Thread Tony Liu
3c90-b7d5-4f13-8a58-f72761c1971b ceph-a3886f74-3de9-4e6e-a983-8330eda0bd64 [2021-02-05 04:03:17,244][ceph_volume.process][INFO ] stderr Volume group "ceph-a3886f74-3de9-4e6e-a983-8330eda0bd64" has insufficient free space (572317 extents): 572318 required. ``` size was passed: 2.18 TB

[ceph-users] replace OSD failed

2021-02-04 Thread Tony Liu
Hi, With 15.2.8, run "ceph orch rm osd 12 --replace --force", PGs on osd.12 are remapped, osd.12 is removed from "ceph osd tree", the daemon is removed from "ceph orch ps", the device is "available" in "ceph orch device ls". Everything seems good at this point. Then dry-run service spec. ``` #

[ceph-users] Re: replace OSD without PG remapping

2021-02-03 Thread Tony Liu
r patience! Tony > -Original Message- > From: Frank Schilder > Sent: Tuesday, February 2, 2021 11:47 PM > To: Tony Liu ; ceph-users@ceph.io > Subject: Re: replace OSD without PG remapping > > You asked about exactly this before: > https://lists.ceph.io/hyperkitty
