[ceph-users] Re: Have a problem with haproxy/keepalived/ganesha/docker

2024-05-05 Thread Rusik NV
Hello! Any news?
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: RBD Mirroring with Journaling and Snapshot mechanism

2024-05-05 Thread V A Prabha
Dear Eugen,
I am still awaiting your response to the query below. Please guide us toward a solution.

On May 2, 2024 at 12:25 PM V A Prabha  wrote:
> Dear Eugen,
> We have a DC/DR replication scenario and plan to explore RBD mirroring with
> both the journaling and the snapshot mechanism.
> We have 5 TB of storage at the primary DC and 5 TB at the DR site, with two
> separately configured Ceph clusters.
>
> Please clarify the following queries:
>
> 1. With one-way mirroring, failover works fine with both the journaling and the
> snapshot mechanism, and we are able to promote the workload at the DR site. How
> does failback work? We want to move the contents from DR back to DC, but it fails.
> With the journaling mechanism, it deletes the entire volume and recreates it
> afresh, which does not solve our problem.
> 2. How does incremental replication work from DR back to DC?
> 3. Does two-way mirroring help in this situation? As we understand it, that
> method is intended for two different clouds with two different storage backends,
> replicating both clouds' workloads. Does failback work in that scenario?
> Please help us / guide us to deploy this solution.
>
> Regards
> V.A.Prabha
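
For reference, a one-way failover/failback cycle with RBD mirroring is
typically driven with the rbd CLI roughly as sketched below. The pool name
"data", the image name "vm-disk1" and the cluster config names "dc"/"dr" are
placeholders rather than values from this thread, and failback additionally
requires that the DC cluster is peered with DR so that changes can flow in the
reverse direction (i.e. two-way mirroring, or a temporarily reversed one-way
relationship).

# Planned failover: demote at DC, promote at DR
rbd --cluster dc mirror image demote data/vm-disk1
rbd --cluster dr mirror image promote data/vm-disk1

# Unplanned failover (DC unreachable): force-promote at DR
rbd --cluster dr mirror image promote --force data/vm-disk1

# Failback: the old DC copy is now stale/split-brained, so flag it for resync.
# rbd-mirror discards the local copy and re-replicates it from the DR primary,
# which matches the "deletes the entire volume and recreates it" behaviour.
rbd --cluster dc mirror image resync data/vm-disk1
rbd --cluster dc mirror image status data/vm-disk1   # wait for "up+replaying"

# Once in sync, swap the roles back
rbd --cluster dr mirror image demote data/vm-disk1
rbd --cluster dc mirror image promote data/vm-disk1
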
Thanks & Regards,
Ms V A Prabha
Joint Director
Centre for Development of Advanced Computing (C-DAC)
“Tidel Park”, 8th Floor, “D” Block (North & South)
No. 4, Rajiv Gandhi Salai
Taramani
Chennai – 600113
Ph. No.: 044-22542226/27
Fax No.: 044-22542294

[ C-DAC is on social media too. Kindly follow us at:
Facebook: https://www.facebook.com/CDACINDIA & Twitter: @cdacindia ]



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] MDS crashes shortly after starting

2024-05-05 Thread E Taka
Hi all,

We have a serious problem with CephFS. A few days ago, the CephFS file
systems became inaccessible, with the health message "MDS_DAMAGE: 1 mds daemon
damaged".

The cephfs-journal-tool tells us: "Overall journal integrity: OK"
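
That check was presumably run along the lines of the sketch below (the
filesystem name "cephfs" and rank 0 are placeholders); exporting the journal
before attempting any further recovery is generally advisable:

cephfs-journal-tool --rank=cephfs:0 journal inspect    # reports "Overall journal integrity: OK"
cephfs-journal-tool --rank=cephfs:0 journal export /root/mds0-journal-backup.bin   # keep a backup first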

The usual attempts at redeploying the MDS daemons were unfortunately not successful.

After many unsuccessful attempts with the orchestrator, we marked the MDS as
“failed” and forced the creation of a new MDS with “ceph fs reset”.
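
Roughly the command sequence this involved, as a sketch only (the filesystem
name "cephfs" is a placeholder; "ceph fs reset" resets the file system's MDS
map state and is meant as a last resort):

ceph fs status cephfs                          # identify the damaged rank/daemon
ceph mds fail cephfs:0                         # mark the daemon holding rank 0 as failed
ceph fs reset cephfs --yes-i-really-mean-it    # reset the MDS map so a standby starts a fresh rank 0
ceph fs status cephfs                          # a standby MDS should now take over rank 0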

But this MDS crashes:
ceph-17.2.7/src/mds/MDCache.cc: In function 'void MDCache::rejoin_send_rejoins()'
ceph-17.2.7/src/mds/MDCache.cc: 4086: FAILED ceph_assert(auth >= 0)

(The full trace is attached).

What can we do now? We are grateful for any help!
May 05 22:42:43 ceph06 bash[707251]: debug -1> 2024-05-05T20:42:43.006+ 
7f6892752700 -1 
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.2.7/rpm/el8/BUILD/ceph-17.2.7/src/mds/MDCache.cc:
 In function 'void MDCache::rejoin_send_rejoins()' thread 7f6892752700 time 
2024-05-05T20:42:43.008448+
May 05 22:42:43 ceph06 bash[707251]: 
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/17.2.7/rpm/el8/BUILD/ceph-17.2.7/src/mds/MDCache.cc:
 4086: FAILED ceph_assert(auth >= 0)
May 05 22:42:43 ceph06 bash[707251]:  ceph version 17.2.7 
(b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)
May 05 22:42:43 ceph06 bash[707251]:  1: (ceph::__ceph_assert_fail(char const*, 
char const*, int, char const*)+0x135) [0x7f689fb974a3]
May 05 22:42:43 ceph06 bash[707251]:  2: 
/usr/lib64/ceph/libceph-common.so.2(+0x269669) [0x7f689fb97669]
May 05 22:42:43 ceph06 bash[707251]:  3: 
(MDCache::rejoin_send_rejoins()+0x216b) [0x5605d03da7eb]
May 05 22:42:43 ceph06 bash[707251]:  4: 
(MDCache::process_imported_caps()+0x1993) [0x5605d03d8353]
May 05 22:42:43 ceph06 bash[707251]:  5: 
(MDCache::rejoin_open_ino_finish(inodeno_t, int)+0x217) [0x5605d03e5837]
May 05 22:42:43 ceph06 bash[707251]:  6: (MDSContext::complete(int)+0x5f) 
[0x5605d05a7f4f]
May 05 22:42:43 ceph06 bash[707251]:  7: (void 
finish_contexts > 
>(ceph::common::CephContext*, std::vector >&, int)+0x8d) [0x5605d024cf5d]
May 05 22:42:43 ceph06 bash[707251]:  8: (MDCache::open_ino_finish(inodeno_t, 
MDCache::open_ino_info_t&, int)+0x138) [0x5605d03cd168]
May 05 22:42:43 ceph06 bash[707251]:  9: 
(MDCache::_open_ino_traverse_dir(inodeno_t, MDCache::open_ino_info_t&, 
int)+0xbb) [0x5605d03cd4bb]
May 05 22:42:43 ceph06 bash[707251]:  10: (MDSContext::complete(int)+0x5f) 
[0x5605d05a7f4f]
May 05 22:42:43 ceph06 bash[707251]:  11: (MDSRank::_advance_queues()+0xaa) 
[0x5605d025b34a]
May 05 22:42:43 ceph06 bash[707251]:  12: 
(MDSRank::ProgressThread::entry()+0xb8) [0x5605d025b918]
May 05 22:42:43 ceph06 bash[707251]:  13: /lib64/libpthread.so.0(+0x81ca) 
[0x7f689eb861ca]
May 05 22:42:43 ceph06 bash[707251]:  14: clone()
May 05 22:42:43 ceph06 bash[707251]: debug  0> 2024-05-05T20:42:43.010+ 
7f6892752700 -1 *** Caught signal (Aborted) **
May 05 22:42:43 ceph06 bash[707251]:  in thread 7f6892752700 
thread_name:mds_rank_progr
May 05 22:42:43 ceph06 bash[707251]:  ceph version 17.2.7 
(b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)
May 05 22:42:43 ceph06 bash[707251]:  1: /lib64/libpthread.so.0(+0x12cf0) 
[0x7f689eb90cf0]
May 05 22:42:43 ceph06 bash[707251]:  2: gsignal()
May 05 22:42:43 ceph06 bash[707251]:  3: abort()
May 05 22:42:43 ceph06 bash[707251]:  4: (ceph::__ceph_assert_fail(char const*, 
char const*, int, char const*)+0x18f) [0x7f689fb974fd]
May 05 22:42:43 ceph06 bash[707251]:  5: 
/usr/lib64/ceph/libceph-common.so.2(+0x269669) [0x7f689fb97669]
May 05 22:42:43 ceph06 bash[707251]:  6: 
(MDCache::rejoin_send_rejoins()+0x216b) [0x5605d03da7eb]
May 05 22:42:43 ceph06 bash[707251]:  7: 
(MDCache::process_imported_caps()+0x1993) [0x5605d03d8353]
May 05 22:42:43 ceph06 bash[707251]:  8: 
(MDCache::rejoin_open_ino_finish(inodeno_t, int)+0x217) [0x5605d03e5837]
May 05 22:42:43 ceph06 bash[707251]:  9: (MDSContext::complete(int)+0x5f) 
[0x5605d05a7f4f]
May 05 22:42:43 ceph06 bash[707251]:  10: (void 
finish_contexts > 
>(ceph::common::CephContext*, std::vector >&, int)+0x8d) [0x5605d024cf5d]
May 05 22:42:43 ceph06 bash[707251]:  11: (MDCache::open_ino_finish(inodeno_t, 
MDCache::open_ino_info_t&, int)+0x138) [0x5605d03cd168]
May 05 22:42:43 ceph06 bash[707251]:  12: 
(MDCache::_open_ino_traverse_dir(inodeno_t, MDCache::open_ino_info_t&, 
int)+0xbb) [0x5605d03cd4bb]
May 05 22:42:43 ceph06 bash[707251]:  13: (MDSContext::complete(int)+0x5f) 
[0x5605d05a7f4f]
May 05 22:42:43 ceph06 bash[707251]:  14: (MDSRank::_advance_queues()+0xaa) 
[0x5605d025b34a]
May 05 22:42:43 ceph06 bash[707251]:  15: 
(MDSRank::ProgressThread::entry()+0xb8) [0x5605d025b918]
May 05 22:42:43 ceph06 bash[707251]:  16: /lib64/libpthread.so.0(+0x81ca) 
[0x7f689eb861ca]
May 05 22:42:43 ceph06 bas