[ceph-users] Re: Ceph-Dokan Mount Caps at ~1GB transfer?

2021-11-01 Thread Radoslav Milanov
Have you tried this with the native client under Linux? It could just be 
slow CephFS.
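
For comparison, something along these lines on a Linux box would show whether 
throughput also drops after the first ~1GB there (a minimal sketch, assuming 
the kernel CephFS client; the monitor host, client name, secret file and test 
size are placeholders):

# mount CephFS with the kernel client
mount -t ceph mon1.example.net:6789:/ /mnt/cephfs -o name=testuser,secretfile=/etc/ceph/testuser.secret

# write ~2GB and watch whether throughput collapses after the first ~1GB
dd if=/dev/zero of=/mnt/cephfs/throughput-test bs=4M count=512 oflag=direct status=progress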


On 1.11.2021 at 06:40, Mason-Williams, Gabryel (RFI,RAL,-) wrote:

Hello,

We have been trying to use Ceph-Dokan to mount CephFS on Windows. When 
transferring any data below ~1GB, the transfer speed is as quick as desired and 
everything works perfectly. However, once more than ~1GB has been transferred, 
the connection stops being able to send data and everything seems to hang.

I've ruled out it being a quota problem, as I can transfer just under 1GB, 
close the connection, reopen it, and then transfer just under 1GB again with 
no issues.

Windows Version: 10
Dokan Version: 1.3.1.1000

Does anyone have any idea why this is occurring and have any suggestions on how 
to fix it?

Kind regards

Gabryel Mason-Williams

Junior Research Software Engineer

Please bear in mind that I work part-time, so there may be a delay in my 
response.


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io



[ceph-users] Re: [IMPORTANT NOTICE] Potential data corruption in Pacific

2021-10-29 Thread Radoslav Milanov
Not everyone is subscribed to low-traffic MLs. Something like this should be 
posted on all lists, I think.


On 29.10.2021 at 05:43, Daniel Poelzleithner wrote:


On 29/10/2021 11:23, Tobias Fischer wrote:


I would propose to either create a separate mailing list for this kind
of information from the Ceph dev community or use a mailing list where
not that much is happening, e.g. ceph-announce.
What do you think?

I like that; low-traffic MLs are easy to subscribe to.

I would also like to see a blog post with a warning for cases like
data corruption.


kind regards
  poelzi
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io



[ceph-users] Re: Error: UPGRADE_FAILED_PULL: Upgrade: failed to pull target image

2021-09-21 Thread Radoslav Milanov

There is a problem upgrading ceph-iscsi from 16.2.5 to 16.2.6:

2021-09-21T12:43:58.767556-0400 mgr.nj3231.wagzhn [ERR] cephadm exited 
with an error code: 1, stderr:Redeploy daemon iscsi.iscsi.nj3231.mqeari ...

Creating ceph-iscsi config...
Write file: 
/var/lib/ceph/c6c8bc66-1716-11ec-b029-1c34da4b9fb6/iscsi.iscsi.nj3231.mqeari/iscsi-gateway.cfg
Failed to trim old cgroups 
/sys/fs/cgroup/system.slice/system-ceph\x2dc6c8bc66\x2d1716\x2d11ec\x2db029\x2d1c34da4b9fb6.slice/ceph-c6c8bc66-1716-11ec-b029-1c34da4b9fb6@iscsi.iscsi.nj3231.mqeari.service
Non-zero exit code 1 from systemctl start 
ceph-c6c8bc66-1716-11ec-b029-1c34da4b9fb6@iscsi.iscsi.nj3231.mqeari
systemctl: stderr Job for 
ceph-c6c8bc66-1716-11ec-b029-1c34da4b9fb6@iscsi.iscsi.nj3231.mqeari.service 
failed because the control process exited with error code.
systemctl: stderr See "systemctl status 
ceph-c6c8bc66-1716-11ec-b029-1c34da4b9fb6@iscsi.iscsi.nj3231.mqeari.service" 
and "journalctl -xe" for details.

Traceback (most recent call last):
  File 
"/var/lib/ceph/c6c8bc66-1716-11ec-b029-1c34da4b9fb6/cephadm.f46dc95b01feeedb28941a48e2f1d0abb51139ca828de11150ea7122a8e3549c", 
line 8479, in <module>

    main()
  File 
"/var/lib/ceph/c6c8bc66-1716-11ec-b029-1c34da4b9fb6/cephadm.f46dc95b01feeedb28941a48e2f1d0abb51139ca828de11150ea7122a8e3549c", 
line 8467, in main

    r = ctx.func(ctx)
  File 
"/var/lib/ceph/c6c8bc66-1716-11ec-b029-1c34da4b9fb6/cephadm.f46dc95b01feeedb28941a48e2f1d0abb51139ca828de11150ea7122a8e3549c", 
line 1782, in _default_image

    return func(ctx)
  File 
"/var/lib/ceph/c6c8bc66-1716-11ec-b029-1c34da4b9fb6/cephadm.f46dc95b01feeedb28941a48e2f1d0abb51139ca828de11150ea7122a8e3549c", 
line 4523, in command_deploy

    deploy_daemon(ctx, ctx.fsid, daemon_type, daemon_id, c, uid, gid,
  File 
"/var/lib/ceph/c6c8bc66-1716-11ec-b029-1c34da4b9fb6/cephadm.f46dc95b01feeedb28941a48e2f1d0abb51139ca828de11150ea7122a8e3549c", 
line 2669, in deploy_daemon

    deploy_daemon_units(ctx, fsid, uid, gid, daemon_type, daemon_id,
  File 
"/var/lib/ceph/c6c8bc66-1716-11ec-b029-1c34da4b9fb6/cephadm.f46dc95b01feeedb28941a48e2f1d0abb51139ca828de11150ea7122a8e3549c", 
line 2899, in deploy_daemon_units

    call_throws(ctx, ['systemctl', 'start', unit_name])
  File 
"/var/lib/ceph/c6c8bc66-1716-11ec-b029-1c34da4b9fb6/cephadm.f46dc95b01feeedb28941a48e2f1d0abb51139ca828de11150ea7122a8e3549c", 
line 1462, in call_throws

    raise RuntimeError('Failed command: %s' % ' '.join(command))
RuntimeError: Failed command: systemctl start 
ceph-c6c8bc66-1716-11ec-b029-1c34da4b9fb6@iscsi.iscsi.nj3231.mqeari

Traceback (most recent call last):
  File "/usr/share/ceph/mgr/cephadm/serve.py", line 1366, in 
_remote_connection

    yield (conn, connr)
  File "/usr/share/ceph/mgr/cephadm/serve.py", line 1263, in _run_cephadm
    code, '\n'.join(err)))
orchestrator._interface.OrchestratorError: cephadm exited with an error 
code: 1, stderr:Redeploy daemon iscsi.iscsi.nj3231.mqeari ...

Creating ceph-iscsi config...
Write file: 
/var/lib/ceph/c6c8bc66-1716-11ec-b029-1c34da4b9fb6/iscsi.iscsi.nj3231.mqeari/iscsi-gateway.cfg
Failed to trim old cgroups 
/sys/fs/cgroup/system.slice/system-ceph\x2dc6c8bc66\x2d1716\x2d11ec\x2db029\x2d1c34da4b9fb6.slice/ceph-c6c8bc66-1716-11ec-b029-1c34da4b9fb6@iscsi.iscsi.nj3231.mqeari.service
Non-zero exit code 1 from systemctl start 
ceph-c6c8bc66-1716-11ec-b029-1c34da4b9fb6@iscsi.iscsi.nj3231.mqeari
systemctl: stderr Job for 
ceph-c6c8bc66-1716-11ec-b029-1c34da4b9fb6@iscsi.iscsi.nj3231.mqeari.service 
failed because the control process exited with error code.
systemctl: stderr See "systemctl status 
ceph-c6c8bc66-1716-11ec-b029-1c34da4b9fb6@iscsi.iscsi.nj3231.mqeari.service" 
and "journalctl -xe" for details.

Traceback (most recent call last):
  File 
"/var/lib/ceph/c6c8bc66-1716-11ec-b029-1c34da4b9fb6/cephadm.f46dc95b01feeedb28941a48e2f1d0abb51139ca828de11150ea7122a8e3549c", 
line 8479, in <module>

    main()
  File 
"/var/lib/ceph/c6c8bc66-1716-11ec-b029-1c34da4b9fb6/cephadm.f46dc95b01feeedb28941a48e2f1d0abb51139ca828de11150ea7122a8e3549c", 
line 8467, in main

    r = ctx.func(ctx)
  File 
"/var/lib/ceph/c6c8bc66-1716-11ec-b029-1c34da4b9fb6/cephadm.f46dc95b01feeedb28941a48e2f1d0abb51139ca828de11150ea7122a8e3549c", 
line 1782, in _default_image

    return func(ctx)
  File 
"/var/lib/ceph/c6c8bc66-1716-11ec-b029-1c34da4b9fb6/cephadm.f46dc95b01feeedb28941a48e2f1d0abb51139ca828de11150ea7122a8e3549c", 
line 4523, in command_deploy

    deploy_daemon(ctx, ctx.fsid, daemon_type, daemon_id, c, uid, gid,
  File 
"/var/lib/ceph/c6c8bc66-1716-11ec-b029-1c34da4b9fb6/cephadm.f46dc95b01feeedb28941a48e2f1d0abb51139ca828de11150ea7122a8e3549c", 
line 2669, in deploy_daemon

    deploy_daemon_units(ctx, fsid, uid, gid, daemon_type, daemon_id,
  File 
"/var/lib/ceph/c6c8bc66-1716-11ec-b029-1c34da4b9fb6/cephadm.f46dc95b01feeedb28941a48e2f1d0abb51139ca828de11150ea7122a8e3549c", 
line 2899, in deploy_daemon_units

    call_throws(ctx, ['systemctl', 'start', unit_name])
  File 
"

[ceph-users] Re: upgrading from Nautilus on CentOS7 to Octopus on Ubuntu 20.04.2

2021-06-30 Thread Radoslav Milanov

Sounds exactly like testing to me...

On 30.6.2021 at 17:46, Teoman Onay wrote:

What do you mean by different?

RHEL is a supported product while CentOS is not. You get bug/security 
fixes sooner on RHEL than on CentOS since, depending on their severity 
level, they are released during the z-stream releases.


Stream is one minor release ahead of RHEL, which means it already 
contains part of the fixes that will be released a few months later 
in RHEL. It could be considered even more stable, as it already 
contains part of those fixes.





On Wed, 30 Jun 2021, 15:15 Radoslav Milanov 
<radoslav.mila...@gmail.com> wrote:


If Stream is so great, why is RHEL different?

On 30.6.2021 at 03:49, Teoman Onay wrote:
>>> For similar reasons, CentOS 8 stream, as opposed to every other CentOS
>>> released before, is very experimental. I would never go in production with
>>> CentOS 8 stream.
>>
> Experimental?? Looks like you still don't understand what CentOS stream is.
> If you have some time just read this:
> https://www.linkedin.com/pulse/why-you-should-have-already-been-centos-stream-back-2019-smith/
>
> He summarized quite well what CentOS stream is.
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: upgrading from Nautilus on CentOS7 to Octopus on Ubuntu 20.04.2

2021-06-30 Thread Radoslav Milanov

If Stream is so great, why is RHEL different?

On 30.6.2021 at 03:49, Teoman Onay wrote:



For similar reasons, CentOS 8 stream, as opposed to every other CentOS
released before, is very experimental. I would never go in production with
CentOS 8 stream.



Experimental?? Looks like you still don't understand what CentOS stream is.
If you have some time just read this:
https://www.linkedin.com/pulse/why-you-should-have-already-been-centos-stream-back-2019-smith/

He summarized quite well what CentOS stream is.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io



[ceph-users] Issues upgrading to 16.2.1

2021-04-20 Thread Radoslav Milanov

Hello

Tried cephadm upgrade from 16.2.0 to 16.2.1.

Managers were updated first, then the process halted on the first monitor 
being upgraded. The monitor fails to start:



root@host3:/var/lib/ceph/c8ee2878-9d54-11eb-bbca-1c34da4b9fb6/mon.host3# 
/usr/bin/docker run --rm --ipc=host --net=host --entrypoint 
/usr/bin/ceph-mon --privileged --group-add=disk --init --name 
ceph-c8ee2878-9d54-11eb-bbca-1c34da4b9fb6-mon.host3 -e 
CONTAINER_IMAGE=ceph/ceph@sha256:9b04c0f15704c49591640a37c7adfd40ffad0a4b42fecb950c3407687cb4f29a 
-e NODE_NAME=host3 -e CEPH_USE_RANDOM_NONCE=1 -v 
/var/run/ceph/c8ee2878-9d54-11eb-bbca-1c34da4b9fb6:/var/run/ceph:z -v 
/var/log/ceph/c8ee2878-9d54-11eb-bbca-1c34da4b9fb6:/var/log/ceph:z -v 
/var/lib/ceph/c8ee2878-9d54-11eb-bbca-1c34da4b9fb6/crash:/var/lib/ceph/crash:z 
-v 
/var/lib/ceph/c8ee2878-9d54-11eb-bbca-1c34da4b9fb6/mon.host3:/var/lib/ceph/mon/ceph-host3:z 
-v 
/var/lib/ceph/c8ee2878-9d54-11eb-bbca-1c34da4b9fb6/mon.host3/config:/etc/ceph/ceph.conf:z 
-v /dev:/dev -v /run/udev:/run/udev 
ceph/ceph@sha256:9b04c0f15704c49591640a37c7adfd40ffad0a4b42fecb950c3407687cb4f29a 
-n mon.host3 -f --setuser ceph --setgroup ceph 
--default-log-to-file=false --default-log-to-stderr=true 
'--default-log-stderr-prefix=debug ' 
--default-mon-cluster-log-to-file=false 
--default-mon-cluster-log-to-stderr=true
debug 2021-04-20T14:06:19.437+ 7f11e080a700  0 set uid:gid to 
167:167 (ceph:ceph)
debug 2021-04-20T14:06:19.437+ 7f11e080a700  0 ceph version 16.2.0 
(0c2054e95bcd9b30fdd908a79ac1d8bbc3394442) pacific (stable), process 
ceph-mon, pid 7
debug 2021-04-20T14:06:19.437+ 7f11e080a700  0 pidfile_write: ignore 
empty --pid-file
debug 2021-04-20T14:06:19.441+ 7f11e080a700  0 load: jerasure load: 
lrc load: isa
debug 2021-04-20T14:06:19.441+ 7f11e080a700  4 rocksdb: RocksDB 
version: 6.8.1


debug 2021-04-20T14:06:19.441+ 7f11e080a700  4 rocksdb: Git sha 
rocksdb_build_git_sha:@0@
debug 2021-04-20T14:06:19.441+ 7f11e080a700  4 rocksdb: Compile date 
Mar 30 2021

debug 2021-04-20T14:06:19.441+ 7f11e080a700  4 rocksdb: DB SUMMARY

debug 2021-04-20T14:06:19.441+ 7f11e080a700  4 rocksdb: CURRENT 
file:  CURRENT


debug 2021-04-20T14:06:19.441+ 7f11e080a700  4 rocksdb: IDENTITY 
file:  IDENTITY


debug 2021-04-20T14:06:19.441+ 7f11e080a700  4 rocksdb: MANIFEST 
file:  MANIFEST-000152 size: 221 Bytes


debug 2021-04-20T14:06:19.441+ 7f11e080a700  4 rocksdb: SST files in 
/var/lib/ceph/mon/ceph-host3/store.db dir, Total Num: 2, files: 
000137.sst 000139.sst


debug 2021-04-20T14:06:19.441+ 7f11e080a700  4 rocksdb: Write Ahead 
Log file in /var/lib/ceph/mon/ceph-host3/store.db: 000153.log size: 0 ;


debug 2021-04-20T14:06:19.441+ 7f11e080a700  4 
rocksdb: Options.error_if_exists: 0
debug 2021-04-20T14:06:19.441+ 7f11e080a700  4 
rocksdb:   Options.create_if_missing: 0
debug 2021-04-20T14:06:19.441+ 7f11e080a700  4 
rocksdb: Options.paranoid_checks: 1
debug 2021-04-20T14:06:19.441+ 7f11e080a700  4 
rocksdb: Options.env: 0x5641244cf1c0
debug 2021-04-20T14:06:19.441+ 7f11e080a700  4 
rocksdb:  Options.fs: Posix File System
debug 2021-04-20T14:06:19.441+ 7f11e080a700  4 
rocksdb:    Options.info_log: 0x564126753220
debug 2021-04-20T14:06:19.441+ 7f11e080a700  4 
rocksdb:    Options.max_file_opening_threads: 16
debug 2021-04-20T14:06:19.441+ 7f11e080a700  4 
rocksdb:  Options.statistics: (nil)
debug 2021-04-20T14:06:19.441+ 7f11e080a700  4 
rocksdb:   Options.use_fsync: 0
debug 2021-04-20T14:06:19.441+ 7f11e080a700  4 
rocksdb:   Options.max_log_file_size: 0
debug 2021-04-20T14:06:19.441+ 7f11e080a700  4 
rocksdb:  Options.max_manifest_file_size: 1073741824
debug 2021-04-20T14:06:19.441+ 7f11e080a700  4 
rocksdb:   Options.log_file_time_to_roll: 0
debug 2021-04-20T14:06:19.441+ 7f11e080a700  4 
rocksdb:   Options.keep_log_file_num: 1000
debug 2021-04-20T14:06:19.441+ 7f11e080a700  4 
rocksdb:    Options.recycle_log_file_num: 0
debug 2021-04-20T14:06:19.441+ 7f11e080a700  4 
rocksdb: Options.allow_fallocate: 1
debug 2021-04-20T14:06:19.441+ 7f11e080a700  4 
rocksdb:    Options.allow_mmap_reads: 0
debug 2021-04-20T14:06:19.441+ 7f11e080a700  4 
rocksdb:   Options.allow_mmap_writes: 0
debug 2021-04-20T14:06:19.441+ 7f11e080a700  4 
rocksdb:    Options.use_direct_reads: 0
debug 2021-04-20T14:06:19.441+ 7f11e080a700  4 rocksdb: 
Options.use_direct_io_for_flush_and_compaction: 0
debug 2021-04-20T14:06:19.441+ 7f11e080a700  4 rocksdb:  
Options.create_missing_column_families: 0
debug 2021-04-
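
When an upgrade stalls like this, the orchestrator's view can be checked and 
the upgrade paused while investigating; a minimal sketch using the standard 
cephadm/orchestrator CLI (run from a node with a client keyring, e.g. inside 
'cephadm shell'):

ceph orch upgrade status         # target image and overall progress
ceph orch ps --daemon-type mon   # per-daemon version and container image
ceph -W cephadm                  # follow the orchestrator log live
ceph orch upgrade pause          # hold the upgrade while investigating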

[ceph-users] Re: [External Email] Cephadm upgrade to Pacific problem

2021-04-14 Thread Radoslav Milanov

Thanks for the pointer, Dave.

In my case, though, the problem proved to be the old Docker version (18) 
provided by the OS repos. Installing the latest docker-ce from docker.com 
resolves the problem. It would be nice, though, if the host were checked for 
compatibility before starting an upgrade.
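
For reference, a minimal sketch of replacing the distro package with docker-ce 
on Debian 10 (buster), assuming Docker's standard apt repository; package 
names and the keyring path are the usual ones and may need adjusting:

apt-get remove -y docker docker.io containerd runc
apt-get update && apt-get install -y ca-certificates curl gnupg
curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /usr/share/keyrings/docker.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker.gpg] https://download.docker.com/linux/debian buster stable" > /etc/apt/sources.list.d/docker.list
apt-get update && apt-get install -y docker-ce docker-ce-cli containerd.io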




On 14.4.2021 at 13:15, Dave Hall wrote:

Radoslav,

I ran into the same.  For Debian 10 - recent updates - you have to add 
'cgroup_enable=memory swapaccount=1' to the kernel command line 
(/etc/default/grub).  The reference I found said that Debian 
decided to disable this by default and make us turn it on if we want 
to run containers.
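
A minimal sketch of that change, assuming the stock Debian GRUB setup (append 
to whatever is already in the variable rather than replacing it):

# in /etc/default/grub
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"

# regenerate the bootloader config and reboot
update-grub
reboot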


-Dave

--
Dave Hall
Binghamton University
kdh...@binghamton.edu

On Wed, Apr 14, 2021 at 12:51 PM Radoslav Milanov 
<radoslav.mila...@gmail.com> wrote:


Hello,

Cluster is 3 nodes on Debian 10. Started a cephadm upgrade on a healthy
15.2.10 cluster. Managers were upgraded fine, then the first monitor went
down for upgrade and never came back. Looking at the unit files, the
container fails to run because of an error:

root@host1:/var/lib/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6/mon.host1#

cat unit.run

set -e
/usr/bin/install -d -m0770 -o 167 -g 167
/var/run/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6
# mon.host1
! /usr/bin/docker rm -f
ceph-97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6-mon.host1 2> /dev/null
/usr/bin/docker run --rm --ipc=host --net=host --entrypoint
/usr/bin/ceph-mon --privileged --group-add=disk --init --name
ceph-97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6-mon.host1 -e

CONTAINER_IMAGE=ceph/ceph@sha256:9b04c0f15704c49591640a37c7adfd40ffad0a4b42fecb950c3407687cb4f29a

-e NODE_NAME=host1 -e CEPH_USE_RANDOM_NONCE=1 -v
/var/run/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6:/var/run/ceph:z -v
/var/log/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6:/var/log/ceph:z -v

/var/lib/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6/crash:/var/lib/ceph/crash:z

-v

/var/lib/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6/mon.host1:/var/lib/ceph/mon/ceph-host1:z

-v

/var/lib/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6/mon.host1/config:/etc/ceph/ceph.conf:z

-v /dev:/dev -v /run/udev:/run/udev

ceph/ceph@sha256:9b04c0f15704c49591640a37c7adfd40ffad0a4b42fecb950c3407687cb4f29a

-n mon.host1 -f --setuser ceph --setgroup ceph
--default-log-to-file=false --default-log-to-stderr=true
'--default-log-stderr-prefix=debug '
--default-mon-cluster-log-to-file=false
--default-mon-cluster-log-to-stderr=true

root@host1:/var/lib/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6/mon.host1#

/usr/bin/docker run --rm --ipc=host --net=host --entrypoint
/usr/bin/ceph-mon --privileged --group-add=disk --init --name
ceph-97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6-mon.host1 -e

CONTAINER_IMAGE=ceph/ceph@sha256:9b04c0f15704c49591640a37c7adfd40ffad0a4b42fecb950c3407687cb4f29a

-e NODE_NAME=host1 -e CEPH_USE_RANDOM_NONCE=1 -v
/var/run/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6:/var/run/ceph:z -v
/var/log/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6:/var/log/ceph:z -v

/var/lib/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6/crash:/var/lib/ceph/crash:z

-v

/var/lib/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6/mon.host1:/var/lib/ceph/mon/ceph-host1:z

-v

/var/lib/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6/mon.host1/config:/etc/ceph/ceph.conf:z

-v /dev:/dev -v /run/udev:/run/udev

ceph/ceph@sha256:9b04c0f15704c49591640a37c7adfd40ffad0a4b42fecb950c3407687cb4f29a

-n mon.host1 -f --setuser ceph --setgroup ceph
--default-log-to-file=false --default-log-to-stderr=true
'--default-log-stderr-prefix=debug '
--default-mon-cluster-log-to-file=false
--default-mon-cluster-log-to-stderr=true


/usr/bin/docker: Error response from daemon: OCI runtime create
failed:
container_linux.go:344: starting container process caused "exec:
\"/dev/init\": stat /dev/init: no such file or directory": unknown.

Any suggestions on how to resolve this?

Thank you.


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Cephadm upgrade to Pacific problem

2021-04-14 Thread Radoslav Milanov

Hello,

Cluster is 3 nodes on Debian 10. Started a cephadm upgrade on a healthy 15.2.10 
cluster. Managers were upgraded fine, then the first monitor went down for 
upgrade and never came back. Looking at the unit files, the container fails to 
run because of an error:


root@host1:/var/lib/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6/mon.host1# 
cat unit.run


set -e
/usr/bin/install -d -m0770 -o 167 -g 167 
/var/run/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6

# mon.host1
! /usr/bin/docker rm -f 
ceph-97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6-mon.host1 2> /dev/null
/usr/bin/docker run --rm --ipc=host --net=host --entrypoint 
/usr/bin/ceph-mon --privileged --group-add=disk --init --name 
ceph-97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6-mon.host1 -e 
CONTAINER_IMAGE=ceph/ceph@sha256:9b04c0f15704c49591640a37c7adfd40ffad0a4b42fecb950c3407687cb4f29a 
-e NODE_NAME=host1 -e CEPH_USE_RANDOM_NONCE=1 -v 
/var/run/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6:/var/run/ceph:z -v 
/var/log/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6:/var/log/ceph:z -v 
/var/lib/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6/crash:/var/lib/ceph/crash:z 
-v 
/var/lib/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6/mon.host1:/var/lib/ceph/mon/ceph-host1:z 
-v 
/var/lib/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6/mon.host1/config:/etc/ceph/ceph.conf:z 
-v /dev:/dev -v /run/udev:/run/udev 
ceph/ceph@sha256:9b04c0f15704c49591640a37c7adfd40ffad0a4b42fecb950c3407687cb4f29a 
-n mon.host1 -f --setuser ceph --setgroup ceph 
--default-log-to-file=false --default-log-to-stderr=true 
'--default-log-stderr-prefix=debug ' 
--default-mon-cluster-log-to-file=false 
--default-mon-cluster-log-to-stderr=true


root@host1:/var/lib/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6/mon.host1# 
/usr/bin/docker run --rm --ipc=host --net=host --entrypoint 
/usr/bin/ceph-mon --privileged --group-add=disk --init --name 
ceph-97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6-mon.host1 -e 
CONTAINER_IMAGE=ceph/ceph@sha256:9b04c0f15704c49591640a37c7adfd40ffad0a4b42fecb950c3407687cb4f29a 
-e NODE_NAME=host1 -e CEPH_USE_RANDOM_NONCE=1 -v 
/var/run/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6:/var/run/ceph:z -v 
/var/log/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6:/var/log/ceph:z -v 
/var/lib/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6/crash:/var/lib/ceph/crash:z 
-v 
/var/lib/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6/mon.host1:/var/lib/ceph/mon/ceph-host1:z 
-v 
/var/lib/ceph/97d9f40e-9d33-11eb-8e3f-1c34da4b9fb6/mon.host1/config:/etc/ceph/ceph.conf:z 
-v /dev:/dev -v /run/udev:/run/udev 
ceph/ceph@sha256:9b04c0f15704c49591640a37c7adfd40ffad0a4b42fecb950c3407687cb4f29a 
-n mon.host1 -f --setuser ceph --setgroup ceph 
--default-log-to-file=false --default-log-to-stderr=true 
'--default-log-stderr-prefix=debug ' 
--default-mon-cluster-log-to-file=false 
--default-mon-cluster-log-to-stderr=true



/usr/bin/docker: Error response from daemon: OCI runtime create failed: 
container_linux.go:344: starting container process caused "exec: 
\"/dev/init\": stat /dev/init: no such file or directory": unknown.


Any suggestions on how to resolve this?

Thank you.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Questions RE: Ceph/CentOS/IBM

2021-03-03 Thread Radoslav Milanov

+1

On 3.3.2021 at 11:37, Marc wrote:

Secondly, are we expecting IBM to "kill off" Ceph as well?


Stop spreading rumors! really! one can take it further and say kill
product
x, y, z until none exist!


This is natural/logical thinking; the only one to blame here is IBM/Red Hat. They 
had no regard for maintaining the release period as it was scheduled and just cut 
it short by 7-8 years. It would have been more professional to announce this for 
el9, and not change 8 like this.

How can you trust anything else they are now saying? How can you know whether the 
open-source version of Ceph is going to have restricted features? With such 
management they will not even inform you. You will be the last to know, like 
all clients. I think it is a valid concern.







___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
