Hi Guys,
I have a running oVirt 4.3 cluster with 1 manager and 4 hypervisor nodes,
using traditional SAN storage connected via iSCSI, where I can create VMs and
assign storage from the SAN. This has been running fine for a decade, but now
I want to move away from the traditional SAN
Hi guys,
I am very new to Ceph, but after multiple attempts I was able to install a
Ceph Reef cluster on Debian 12 with the cephadm tool in a test environment,
with 2 MONs and 3 OSDs on VMs. Everything seemed good and I was exploring it
further, so I rebooted the cluster and found that now I am
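When a cephadm-deployed cluster comes back from a reboot, the usual first step is to check whether all daemons restarted. A minimal sketch of those checks (the fsid and hostname in the last command are placeholders):

```shell
# Overall cluster health: look for HEALTH_OK and the expected MON/OSD counts
ceph -s

# State of every cephadm-managed daemon on every host
ceph orch ps

# If a daemon stayed down, inspect its systemd unit on that host
# (<fsid> and <hostname> are placeholders for your cluster)
systemctl status ceph-<fsid>@mon.<hostname>.service
```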
Hi Eugen,
Thank you very much for looking into this.
As I mentioned earlier, I am trying to build a Ceph cluster for the first
time, so could you please help me build it? If you can point me to any
documentation where all the details are available, I will follow it!
Regards,
Ankit Sharma
Hi Guys,
I am a newbie trying to install a Ceph storage cluster, following this guide:
https://docs.ceph.com/en/latest/cephadm/install/#cephadm-deploying-new-cluster
OS - Ubuntu 22.04.3 LTS (Jammy Jellyfish)
4-node cluster - mon1, mgr1, 2
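The page linked above boils down to a short sequence of commands; a minimal sketch, assuming mon1's IP is 10.0.0.11 (all IPs and hostnames here are placeholders):

```shell
# Install the cephadm package (Ubuntu 22.04 ships one; the docs also
# describe a curl-based install of the cephadm script)
apt install -y cephadm

# Bootstrap the first monitor/manager on this host
cephadm bootstrap --mon-ip 10.0.0.11

# Distribute the cluster's SSH key, then add the remaining hosts
ssh-copy-id -f -i /etc/ceph/ceph.pub root@mgr1
ceph orch host add mgr1 10.0.0.12
```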
Hello Eugen Block,
There are no inactive PGs in the clusters. Even after putting around 4 TiB of
data into the primary cluster, the data is not syncing to the secondary
cluster. It is still the same.
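For an RGW multisite setup that has stopped replicating, the standard first checks are radosgw-admin's sync views on the secondary zone; a sketch:

```shell
# Metadata/data sync state and any shards that are behind
radosgw-admin sync status

# Recent data-sync errors (I/O errors like the one quoted in this
# thread show up here)
radosgw-admin sync error list
```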
"d-609b8a29647d.38987.1:1232/S01/1/120/2b7ea802-efad-41d3-9d90-9**523.txt",
"timestamp": "2023-07-31T11:54:53.233451Z",
"info": {
    "source_zone": "d09d3d16-8601-448b-bf3d-609b8a29647d",
    "error_code": 5,
    "message": "failed to sync object(5) Input/output error"
Thanks
Ankit
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
Hello Wrok,
Almost 4 months ago we also struggled with Ceph iSCSI gateway performance and
some bugs. If you hit it with even a small amount of load, your gateway will
start creating issues. One option is to deploy a dedicated iSCSI gateway
(tgt server) that has direct
Hello Team,
Please help me. I deployed two Ceph clusters in a 6-node configuration with
almost 800 TB of capacity, configured in a DC-DR setup for data high
availability. I enabled RGW and RBD block device mirroring for data
replication. We have a 10 Gbps fiber
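With RBD mirroring in a DC-DR pair, the per-pool and per-image status commands show whether replication is actually healthy; a sketch (pool and image names are placeholders):

```shell
# Per-pool summary: daemon health, image count, replaying/syncing states
rbd mirror pool status --verbose mypool

# Per-image detail, including how far behind the remote copy is
rbd mirror image status mypool/myimage

# Confirm the rbd-mirror daemon itself is running (cephadm deployments)
ceph orch ps --daemon-type rbd-mirror
```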
Hello,
I tried all of the options and it's not working; my replication network speed
is still the same. Can you help me with any other way to increase the speed?
Hello Everyone,
I tried both of the values and tried to increase the max to 20 GB, but I
didn't see any difference in the mirror speed. Can you please guide me or
help me tune these values, or suggest any other way?
Values I tried:
rbd_mirror_memory_target
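For reference, options like rbd_mirror_memory_target are set through the central config store; a minimal sketch, assuming the rbd-mirror daemon is named client.rbd-mirror.ceph01 (both the daemon name and the 20 GiB value are illustrative):

```shell
# 20 GiB expressed in bytes; the section name must match your daemon's name
ceph config set client.rbd-mirror.ceph01 rbd_mirror_memory_target 21474836480

# Verify what the daemon actually resolves for that option
ceph config show client.rbd-mirror.ceph01 rbd_mirror_memory_target
```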
@Eugen Block
Thanks for your response. I tried both options but I don't see any effect on
the replication speed. Can you or anyone suggest any other way? It's so slow
that we are not able to continue at this speed. Please help or suggest any
configuration.
Hello All,
In Ceph Quincy I am not able to find the rbd_mirror_journal_max_fetch_bytes
config option in rbd-mirror.
I configured a Ceph cluster of almost 400 TB and enabled rbd-mirror. In the
starting stage I was able to achieve almost 9 GB/s, but after the rebalance
completed
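One way to confirm which rbd-mirror tunables a given release actually exposes is to query the config schema on the cluster itself; a sketch:

```shell
# List all known option names, filtered to mirroring-related ones
ceph config ls | grep rbd_mirror

# Show type, default, and description for a specific option
ceph config help rbd_mirror_memory_target
```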