Hi Team,
We are working on integrating the Elasticsearch sync module with Ceph.
Ceph version: 18.2.5 (reef)
Elasticsearch: 8.2.1
Problem statement:
Syncing between the zones is not happening.
Links followed to perform the integration:
Facing a similar situation, any support would be helpful.
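For context, the zone-level setup described in the rgw elastic sync module
documentation (zone names and endpoints below are placeholders, not our actual
values) is roughly:

  # create a zone of tier type "elasticsearch" in the existing zonegroup
  radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=es-zone \
      --endpoints=http://rgw-es-host:80 --tier-type=elasticsearch
  # point the tier at the Elasticsearch endpoint
  radosgw-admin zone modify --rgw-zone=es-zone \
      --tier-config=endpoint=http://es-host:9200,num_shards=10
  # commit the period and restart the radosgw serving this zone
  radosgw-admin period update --commit
  # sync progress can then be checked with
  radosgw-admin sync status --rgw-zone=es-zone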
-Lokendra
On Tue, Jan 9, 2024 at 10:47 PM Kushagr Gupta
wrote:
> Hi Team,
>
> Features used: RADOS gateway, Ceph S3 buckets
>
> We are trying to create a data pipeline using the S3 bucket capability
> and the RADOS gateway in Ceph.
> Our
Hi Team,
Please help with reference to the issue raised below.
Best Regards,
Lokendra
On Wed, Dec 13, 2023 at 2:33 PM Kushagr Gupta
wrote:
> Hi Team,
>
> *Environment:*
> We have deployed a Ceph setup using ceph-ansible.
> Ceph version: 18.2.0
> OS: AlmaLinux 8.8
> We have a 3-node setup.
>
>
Hi Team,
Facing a similar situation; any help would be appreciated.
Thanks once again for the support.
-Lokendra
On Tue, Sep 5, 2023 at 10:51 AM Kushagr Gupta
wrote:
> *Ceph-version*: Quincy
> *OS*: CentOS 8 Stream
>
> *Issue*: Not able to find a standardized restoration procedure for
>
Hi Team,
*Problem:*
Create scheduled snapshots of the ceph subvolume.
*Expected Result:*
The scheduled snapshots should be created at the given scheduled time.
*Actual Result:*
The scheduled snapshots are not getting created until we create a manual
backup.
*Description:*
*Ceph
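For reference, a minimal snap-schedule sequence (the subvolume path below is
only illustrative, not our real path) looks like:

  # enable the scheduler module on the mgr (if not already enabled)
  ceph mgr module enable snap_schedule
  # for a subvolume, use the path returned by 'ceph fs subvolume getpath'
  # schedule a snapshot every hour on that path
  ceph fs snap-schedule add /volumes/_nogroup/xyz 1h
  # verify the schedule, whether it is active, and when it last ran
  ceph fs snap-schedule status /volumes/_nogroup/xyz
  ceph fs snap-schedule list /volumes/_nogroup/xyz --recursive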
/:_volumes_xyz_conf_00593e1d-b674-4b00-a289-20bec06761c9
Query:
1. Could anyone help us out with storage domain creation in oVirt? We need
to ensure that the domain is always up and connected in the event of an
active monitor failure.
On Tue, Apr 18, 2023 at 2:41 PM Lokendra Rathour
wrote:
> yes thanks, Robert,
>
Yes, thanks Robert.
After installing ceph-common, the mount is working fine.
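For anyone else hitting this, the usual client-side steps are along these
lines (mon address and secret-file path below are placeholders):

  # the mount.ceph helper and client configs come from ceph-common
  dnf install ceph-common
  # the mount point must exist locally before mounting
  mkdir -p /mnt/cephfs
  # kernel mount against the monitors
  mount -t ceph <mon-host>:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret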
On Tue, Apr 18, 2023 at 2:10 PM Robert Sander
wrote:
> On 18.04.23 06:12, Lokendra Rathour wrote:
>
> > but if I try mounting from a normal Linux machine with connectivity
> > enabled between Ceph m
e-client almalinux]#
But if I try mounting from a normal Linux machine that has connectivity
to the Ceph mon nodes, it gives the error stated before.
On Mon, Apr 17, 2023 at 3:34 PM Robert Sander
wrote:
> On 14.04.23 12:17, Lokendra Rathour wrote:
>
> > *mount: /mnt/image:
:11 AM Lokendra Rathour
wrote:
> Hi .
> Any input will be of great help.
> Thanks once again.
> Lokendra
>
> On Fri, 14 Apr, 2023, 3:47 pm Lokendra Rathour,
> wrote:
>
>> Hi Team,
>> There is one additional observation.
>> Mount as the client is working f
Hi .
Any input will be of great help.
Thanks once again.
Lokendra
On Fri, 14 Apr, 2023, 3:47 pm Lokendra Rathour,
wrote:
> Hi Team,
> There is one additional observation.
> Mounting as the client works fine from one of the Ceph nodes.
> *Command*: sudo mount -t ceph :/ /mnt/img
=AQABDzRkTaJCEhAAC7rC6E68ofwULnx6qX/VDA== -v
*mount: /mnt/image: mount point does not exist.*
The documentation says that if we do not pass the monitor address, it tries
to discover the monitor address from the DNS servers, but in practice this is
not happening.
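For completeness, the DNS-based discovery referred to here is the SRV-record
mechanism from the docs; as we understand it, the records and the minimal
client config look like this (domain and host names are placeholders):

  ; SRV records for the monitors in the client's DNS zone
  _ceph-mon._tcp.example.com. 3600 IN SRV 10 60 6789 storagenode1.example.com.
  _ceph-mon._tcp.example.com. 3600 IN SRV 10 60 6789 storagenode2.example.com.
  _ceph-mon._tcp.example.com. 3600 IN SRV 10 60 6789 storagenode3.example.com.

  # ceph.conf on the client: no mon_host, only the SRV service name (default shown)
  [global]
  mon_dns_srv_name = ceph-mon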
On Tue, Apr 11, 2023 at 6:48 PM Lokendra Rathour
wrote:
> Ceph vers
essage is different, not sure if it could be the same issue,
> and I don't have anything to test ipv6 with.
>
> [1] https://tracker.ceph.com/issues/47300
>
> Quoting Lokendra Rathour:
>
> > Hi All,
> > Requesting any inputs around the issue raised.
> >
> > Best Regar
Hi All,
Requesting any inputs around the issue raised.
Best Regards,
Lokendra
On Tue, 24 Jan, 2023, 7:32 pm Lokendra Rathour,
wrote:
> Hi Team,
>
>
>
> We have a ceph cluster with 3 storage nodes:
>
> 1. storagenode1 - abcd:abcd:abcd::21
>
> 2. storagenode2
Hi All,
Any help with this issue would be appreciated.
Thanks once again.
On Tue, Jan 24, 2023 at 7:32 PM Lokendra Rathour
wrote:
> Hi Team,
>
>
>
> We have a ceph cluster with 3 storage nodes:
>
> 1. storagenode1 - abcd:abcd:abcd::21
>
> 2. storagenode2 - abcd:abcd:a
, 2023 at 9:06 PM John Mulligan
wrote:
> On Tuesday, January 24, 2023 9:02:41 AM EST Lokendra Rathour wrote:
> > Hi Team,
> >
> >
> >
> > We have a ceph cluster with 3 storage nodes:
> >
> > 1. storagenode1 - abcd:abcd:abcd::21
> >
> > 2.
Hi Team,
We have a ceph cluster with 3 storage nodes:
1. storagenode1 - abcd:abcd:abcd::21
2. storagenode2 - abcd:abcd:abcd::22
3. storagenode3 - abcd:abcd:abcd::23
The requirement is to mount Ceph using the domain name of the MON node.
Note: we resolved the domain names via a DNS server.
For
Hi Team,
Facing an issue while installing Grafana and related containers while
deploying ceph-ansible.
Error:
t_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit
status from master 0\r\n")
fatal: [storagenode1]: FAILED! => changed=false
invocation:
module_args:
s-nfs-ganesha4]
>> name=CentOS-$releasever - NFS Ganesha 4
>> mirrorlist=
>> http://mirrorlist.centos.org?arch=$basearch=8-stream=storage-nfsganesha-4
>> #baseurl=
>> https://mirror.centos.org/centos/8-stream//storage/$basearch/nfsganesha-4/
>> gpgcheck=
Hi,
was trying to get NFS Ganesha installed on Alma 8.5 with the Ceph
Quincy release, and am getting errors about some packages not being available.
For example: 'librgw':
nothing provides librgw.so.2()(64bit) needed by
nfs-ganesha-rgw-3.5-3.el8.x86_64
nothing provides libcephfs.so.2()(64bit) needed
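In case it helps with debugging, those libraries normally ship in the Ceph
client packages, so the first thing to check is which enabled repo (if any)
provides them (commands below are generic dnf usage, not from a specific guide):

  # find out whether any enabled repository can satisfy the dependency
  dnf repoquery --whatprovides 'librgw.so.2()(64bit)'
  dnf repoquery --whatprovides 'libcephfs.so.2()(64bit)'
  # the libraries themselves come from these Ceph packages
  dnf install librgw2 libcephfs2

If nothing provides them, the Ceph (Quincy) repository for el8 is probably not
enabled on that host.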
y to
recover such data in a completely new setup of Ceph or something?
Hope I am not confusing you more.
Best Regards,
Lokendra
On Wed, Sep 8, 2021 at 7:03 PM Matthew Vernon wrote:
> Hi,
>
> On 06/09/2021 08:37, Lokendra Rathour wrote:
> > Thanks, Mathew for the Update.
> > The u
goes down, i.e. the mon crashes or other daemons crash, can we try to restore
the data in the OSDs, maybe by reusing the OSDs in another or a new Ceph
cluster, to save the data?
Please suggest!
Best Regards,
Lokendra
On Fri, Sep 3, 2021 at 9:04 PM Matthew Vernon wrote:
> On 02/09/2021 09
Hi Team,
We have deployed the Ceph Octopus release using Ceph-Ansible.
During the upgrade from the Octopus to the Pacific release, the upgrade
failed.
We have data on the OSDs which we need to save.
Queries:
1. How can we bring the setup back to the older normal state without
impacting the
don't know if I've missed something or if I misunderstood your
> question, but I hope this helps.
>
> Best regards
> Daniel
>
> On Tue, Aug 24, 2021 at 6:17 AM Lokendra Rathour <
> lokendrarath...@gmail.com>
> wrote:
>
> > Hello Everyone,
> > We have
Hello Everyone,
We have deployed Ceph using ceph-ansible (Pacific release).
Query:
Is it possible (and if yes, what is the way) to view/verify the alerts
(both health and system) directly, without Alertmanager?
Or
Can only the Ceph Dashboard help us see the alerts in the Ceph
cluster (health/system)?
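In case it is useful while deciding, health and crash information can also be
read directly from the CLI, independent of Alertmanager and the dashboard,
for example:

  # current health status plus the detailed warnings behind it
  ceph health detail
  ceph status
  # daemon crash reports collected by the crash module
  ceph crash ls
  ceph crash info <crash-id>
  # follow cluster log / health events live
  ceph -w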
Hi Team,
We have installed the Pacific release of Ceph using Ceph-Ansible. Now we
are planning to downgrade the Ceph release from Pacific to Octopus.
We have tried this but it fails with the error message
"• stderr: 'Error EPERM: require_osd_release cannot be lowered once it has
been set'"
Is
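For anyone comparing notes, the flag in that error message can be inspected
(read-only) with:

  # shows the minimum OSD release the cluster has been committed to;
  # per the error above, once raised it cannot be lowered again
  ceph osd dump | grep require_osd_release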
Hello Everyone,
We have a Ceph-based three-node setup. In this setup, we want to test
complete node failover and reuse the old OSD disks from the failed node.
We are referring to the Red Hat based document:
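As a rough sketch of what we expect the disk reuse to involve (these are the
generic ceph-volume commands, not taken from the document referenced above):

  # keep the cluster from rebalancing while the node is rebuilt
  ceph osd set noout
  # on the reinstalled node, detect and start the existing OSDs from their LVM metadata
  ceph-volume lvm activate --all
  # once the OSDs are back up and in, clear the flag
  ceph osd unset noout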
Also, what is the status of your other MDS?
Is it active? Or which one was damaged?
You could also look at the possibility of an additional MDS in the same cluster.
On Sat, 22 May 2021, 00:40 Eugen Block, wrote:
> Hi,
>
> I went through similar trouble just this week [1], but the root cause
> seems
> From: Patrick Donnelly
> Sent: 03 May 2021 17:19:37
> To: Lokendra Rathour
> Cc: Ceph Development; dev; ceph-users
> Subject: [ceph-users] Re: [ Ceph MDS MON Config Variables ] Failover Delay
> issue
>
> On Mon, May 3, 2021 at 6:36 AM Lokendra Rathour
> wrote:
service it takes 4-7 seconds to
activate and resume the standby MDS node.
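For reference, the variables usually involved in how quickly the monitors
declare an MDS failed are the beacon ones; an example of inspecting and
lowering them (values here are only an illustration, not a recommendation):

  # how often the MDS beacons to the mons, and how long the mons wait
  # before marking it laggy/failed
  ceph config get mds mds_beacon_interval
  ceph config get mds mds_beacon_grace
  ceph config set global mds_beacon_interval 1
  ceph config set global mds_beacon_grace 5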
Thanks for your inputs.
Best Regards,
Lokendra
On Mon, May 3, 2021 at 8:50 PM Patrick Donnelly wrote:
> On Mon, May 3, 2021 at 6:36 AM Lokendra Rathour
> wrote:
> >
> > Hi Team,
> > I was set
40 seconds in either case.
> > Did you also change these variables (as mentioned above) along with the
> > hot-standby?
>
> no, we barely differ from the default configs and haven't changed much.
> But we're still running Nautilus so I can't really tell if Octopus
> makes a
more than one MDS active.
> >
> > mds: cephfs:3 {0=cephfs-d=up:active,1=cephfs-e=up:active,2=cephfs-
> > a=up:active} 1 up:standby-replay
> >
> > I got 3 active mds and one standby.
> >
> > I'm using rook in kubernetes for this setup.
> >
> > oa
Hi Team,
I was setting up the Ceph cluster with:
- Node details: 3 Mon, 2 MDS, 2 Mgr, 2 RGW
- Deployment type: Active-Standby
- Testing mode: failover of the MDS node
- Setup: Octopus (15.2.7)
- OS: CentOS 8.3
- Hardware: HP
- RAM: 128 GB on each node
- OSD: 2 (1 TB each)
-
ay
>
> Good luck and look forward to hearing feedback/more results.
>
> Reed
>
> On Apr 27, 2021, at 8:40 AM, Lokendra Rathour
> wrote:
>
> Hi Team,
> We have set up a two-node Ceph cluster using the *Native CephFS Driver*, with
> *details as:*
>
>- 3 Node / 2 N
Hi Team,
We have set up a two-node Ceph cluster using the *Native CephFS Driver*, with
*details as:*
- 3 Node / 2 Node MDS Cluster
- 3 Node Monitor Quorum
- 2 Node OSD
- 2 Nodes for Manager
Cephnode3 has only Mon and MDS (only for test cases 4-7); the remaining two
nodes, i.e. cephnode1 and
Requesting the moderators to approve the same.
It has been a long time and a solution to the issue has not been found yet.
-Lokendra
On Tue, 23 Mar 2021, 10:17 Lokendra Rathour,
wrote:
> Hi Team,
> I am trying to upgrade my existing Ceph Cluster (using Ceph-ansible) from
> current release Octopus t
you once again for your help.
-Lokendra
On Tue, 23 Mar 2021, 10:17 Lokendra Rathour,
wrote:
> Hi Team,
> I am trying to upgrade my existing Ceph Cluster (using Ceph-ansible) from
> current release Octopus to pacific for which I am using a rolling upgrade.
> Facing various issue