[ceph-users] configure rgw

2023-07-29 Thread Tony Liu
Hi,

I'm using the Pacific v16.2.10 container image, deployed by cephadm.
I used to manually build the config file for rgw, deploy rgw, put the config
file in place, and restart rgw. That works fine.
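
For reference, the relevant part of that hand-built config file looks roughly
like this (the section name shown is just the daemon name cephadm generated here):
```
[client.rgw.qa.ceph-1.hzfrwq]
rgw_frontends = beast port=8086
```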

Now I'd like to put the rgw config into the config db. I tried setting it under
client.rgw, but the config is not picked up by rgw. Also, "config show" doesn't
work; it always says "no config state".

```
# ceph orch ps | grep rgw
rgw.qa.ceph-1.hzfrwq  ceph-1  10.250.80.100:80  running (10m)  10m ago  53m  51.4M  -  16.2.10  32214388de9d  13169a213bc5
# ceph config get client.rgw | grep frontends
client.rgw  basic  rgw_frontends  beast port=8086

# ceph config show rgw.qa.ceph-1.hzfrwq
Error ENOENT: no config state for daemon rgw.qa.ceph-1.hzfrwq
# ceph config show client.rgw.qa.ceph-1.hzfrwq
Error ENOENT: no config state for daemon client.rgw.qa.ceph-1.hzfrwq
# radosgw-admin --show-config -n client.rgw.qa.ceph-1.hzfrwq | grep frontends
rgw_frontends = beast port=7480
```
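
For reference, this is roughly how I put the setting into the config db and
restarted rgw. I'm not sure whether the option should live under the generic
client.rgw section or under the full daemon name, so the second command below
is just my guess:
```
# set the option under the generic rgw client section (what I did)
ceph config set client.rgw rgw_frontends "beast port=8086"
# or, alternatively, under the cephadm-generated daemon name (guessing here)
ceph config set client.rgw.qa.ceph-1.hzfrwq rgw_frontends "beast port=8086"
# restart the rgw service that cephadm deployed
ceph orch restart rgw.qa
```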

Any clues what I am missing here?


Thanks!
Tony
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: cephadm logs

2023-07-29 Thread John Mulligan
On Friday, July 28, 2023 11:51:06 AM EDT Adam King wrote:
> Not currently. Those logs aren't generated by any daemons; they come
> directly from anything done by the cephadm binary on the host, which tends
> to be quite a bit, since the cephadm mgr module runs most of its operations
> on the host through a copy of the cephadm binary. It doesn't log to the
> journal because it doesn't have a systemd unit or anything; it's just a
> Python script being run directly, and nothing has been implemented to make
> it possible for it to log to journald.


For what it's worth, there's no requirement that a process be executed 
directly by a specific systemd unit to have it log to the journal. These days 
I'm pretty sure that anything that tries to use the local syslog goes to the 
journal.  Here's a quick example:

I created foo.py with the following:
```
import logging
import logging.handlers
import sys

# SysLogHandler pointed at the local syslog socket; on systems where
# journald owns that socket, these messages end up in the journal
handler = logging.handlers.SysLogHandler('/dev/log')
# ident is prepended to each message and becomes the syslog tag
handler.ident = 'notcephadm: '

# also keep a copy on stderr
h2 = logging.StreamHandler(stream=sys.stderr)

logging.basicConfig(
    level=logging.DEBUG,
    handlers=[handler, h2],
    format="(%(levelname)s): %(message)s",
)
log = logging.getLogger(__name__)

log.debug("debug me")
log.error("oops, an error was here")
log.info("some helpful information goes here")
```
I ran the above and now I can run:
```
$ journalctl --no-pager -t notcephadm
Jul 29 14:35:31 edfu notcephadm[105868]: (DEBUG): debug me
Jul 29 14:35:31 edfu notcephadm[105868]: (ERROR): oops, an error was here
Jul 29 14:35:31 edfu notcephadm[105868]: (INFO): some helpful information goes here
```

Just getting logs into the journal does not even require one of the libraries
specific to the systemd journal. Personally, I find centralized logging with the
syslog/journal more appealing than logging to a file, but they both have their
advantages and disadvantages.
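
For comparison, if you did want a journal-specific library, the optional
python-systemd bindings provide a logging handler. A minimal sketch, assuming
that package is installed:
```
import logging

# assumes the optional python-systemd package is available
from systemd.journal import JournalHandler

log = logging.getLogger("notcephadm")
log.setLevel(logging.DEBUG)
# SYSLOG_IDENTIFIER sets the tag that journalctl -t matches on
log.addHandler(JournalHandler(SYSLOG_IDENTIFIER="notcephadm"))

log.info("written straight to the journal, no syslog socket involved")
```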

Luis, I'd suggest filing a ceph tracker issue [1] if having cephadm log this
way is a use case you're interested in. We could also discuss the topic further
in a Ceph orchestration weekly meeting.


[1]: https://tracker.ceph.com/projects/orchestrator/issues/new

> 
> On Fri, Jul 28, 2023 at 9:43 AM Luis Domingues  wrote:
> 
> > Hi,
> >
> > Quick question about cephadm and its logs. On my cluster I have all
> > logs going to journald. But on each machine I still have
> > /var/log/ceph/cephadm.log that is alive.
> >
> > Is there a way to make cephadm log to journald instead of a file? If yes,
> > did I miss it in the documentation? Or if not, is there any reason to log
> > to a file while everything else logs to journald?
> >
> > Thanks
> >
> > Luis Domingues
> > Proton AG



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: precise/best way to check ssd usage

2023-07-29 Thread Marc
> > I have a use % between 48% and 57%, and assume that with a node failure
> > 1/3 (only using 3x repl.) of this 57% needs to be able to migrate and be
> > added to a different node.
>
> If you by this mean you have 3 nodes with 3x replica and failure domain
> set to

No, it is more than 3 nodes.
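
Roughly the kind of headroom estimate I'm after (a back-of-the-envelope sketch,
assuming equal-sized hosts, even data distribution, and that a failed host's
data gets re-replicated across the remaining hosts; the numbers are examples
only):
```
# rough headroom check: when one of N equal hosts fails, its data is
# re-replicated across the remaining N - 1 hosts, so each survivor's
# usage grows by roughly a factor of N / (N - 1)
def usage_after_host_failure(current_use_pct: float, num_hosts: int) -> float:
    return current_use_pct * num_hosts / (num_hosts - 1)

# example numbers only: 57% used across 6 hosts -> about 68.4% afterwards
print(round(usage_after_host_failure(57.0, 6), 1))
```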

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: precise/best way to check ssd usage

2023-07-29 Thread Kai Stian Olstad

On Fri, Jul 28, 2023 at 07:13:33PM +, Marc wrote:

> I have a use % between 48% and 57%, and assume that with a node failure 1/3
> (only using 3x repl.) of this 57% needs to be able to migrate and be added
> to a different node.


If by this you mean you have 3 nodes with 3x replication and the failure domain
set to host, it's my understanding that no data will be migrated/backfilled when
a node fails.

The reason is that there is nowhere to copy the data to in order to fulfill the
CRUSH rule of one copy on 3 different hosts.
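
For reference, a default replicated CRUSH rule looks roughly like this (a
sketch; details vary between releases). The "chooseleaf ... type host" step is
what requires each copy to land on a different host, so with only 3 hosts there
is no spare host to backfill to:
```
rule replicated_rule {
    id 0
    type replicated
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
```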

--
Kai Stian Olstad
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io