When the repair finished (after 4 hours), the cluster went back to normal.
Is this result expected?
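For reference, the usual sequence for inspecting an inconsistent PG before and after a repair looks like the following (command names are from the stock Ceph CLI; the PG id is left as a placeholder):

```shell
# Show which PG is inconsistent and why
ceph health detail
# List the objects/shards that mismatch inside that PG
rados list-inconsistent-obj <pg.id> --format=json-pretty
# Trigger the repair (the step run here); on a large erasure-coded PG
# this can legitimately take hours while shards are rebuilt
ceph pg repair <pg.id>
# Watch cluster state until it returns to HEALTH_OK
ceph -w
```

A multi-hour repair window on a large erasure-coded pool is not by itself alarming, so the cluster returning to normal afterwards is plausible behavior.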
On Tue, Apr 27, 2021 at 2:16 PM, Gesiel Galvão Bernardes <
gesiel.bernar...@gmail.com> wrote:
> Hi,
>
> I have 3 pools, which I use exclusively for RBD images. Two of them are
>
Hi,
I have 3 pools, which I use exclusively for RBD images. Two of them are
mirrored and one is erasure-coded. Today I received a warning that a PG was
inconsistent in the erasure pool, so I ran "ceph pg repair ". It turns out
that after that the entire cluster
became
Hi,
I am trying to add a VMware host to Ceph iSCSI. I followed the guide exactly.
But when I add the iSCSI gateway IP under "Dynamic Discovery" and rescan the
adapter, no "Paths" are loaded. In vmkernel.log, I see these messages:
2020-08-15T13:50:36.166Z cpu21:2103927)iscsi_vmk:
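In case it helps narrow things down, discovery and path state can also be checked from the ESXi shell; a sketch follows (the adapter name vmhba64 and the gateway address are placeholders for your setup):

```shell
# Re-add the send target and force a rescan from the ESXi side
esxcli iscsi adapter discovery sendtarget add -A vmhba64 -a 192.168.0.10:3260
esxcli storage core adapter rescan -A vmhba64
# Check whether sessions and paths were actually established
esxcli iscsi session list
esxcli storage nmp device list
```

If `session list` is empty, the problem is at login/discovery (CHAP, trusted_ip_list) rather than multipathing.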
Hi,
I have been experiencing brief outage events in the Ceph cluster. During
these events I receive slow ops messages and OSDs are marked down, yet at the
same time the cluster keeps operating, and then everything magically returns
to normal. These events usually last about 2 minutes.
I couldn't find anything that could
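The usual starting points for correlating such short events are sketched below (the OSD id is a placeholder):

```shell
# Cluster-wide view of what gets flagged during the event
ceph health detail
# On the affected OSD host: recent slow operations with timings
ceph daemon osd.12 dump_historic_ops
# Rule out network link flaps, which often produce exactly this
# one-to-two-minute down/up pattern
dmesg | grep -i 'link'
```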
On Fri, Feb 14, 2020 at 1:25 PM, Mike Christie
wrote:
> On 02/13/2020 08:52 PM, Gesiel Galvão Bernardes wrote:
> > Hi
> >
> > On Sun, Feb 9, 2020 at 6:27 PM, Mike Christie <mailto:mchri...@redhat.com> wrote:
> >
> > On 02/08
Hi
On Sun, Feb 9, 2020 at 6:27 PM, Mike Christie
wrote:
> On 02/08/2020 11:34 PM, Gesiel Galvão Bernardes wrote:
> > Hi,
> >
> > On Thu, Feb 6, 2020 at 6:56 PM, Mike Christie <mailto:mchri...@redhat.com> wrote:
> >
> > On
Hi,
On Thu, Feb 6, 2020 at 6:56 PM, Mike Christie
wrote:
> On 02/05/2020 07:03 AM, Gesiel Galvão Bernardes wrote:
> > On Sun, Feb 2, 2020 at 12:37 AM, Gesiel Galvão Bernardes
> > <mailto:gesiel.bernar...@gmail.com>
> wrote:
> >
> > H
Hi,
Do you have any suggestions on where I can look?
Regards,
Gesiel
On Sun, Feb 2, 2020 at 12:37 AM, Gesiel Galvão Bernardes <
gesiel.bernar...@gmail.com> wrote:
> Hi,
>
> Only now was it possible to continue this. Below is the required information.
> Thanks in advance,
>
Hi,
Only now was it possible to continue this. Below is the required information.
Thanks in advance,
Gesiel
On Mon, Jan 20, 2020 at 3:06 PM, Mike Christie
wrote:
> On 01/20/2020 10:29 AM, Gesiel Galvão Bernardes wrote:
> > Hi,
> >
> > Only now have I been able to ac
On Thu, Dec 26, 2019 at 5:44 PM, Mike Christie
wrote:
> On 12/24/2019 06:40 AM, Gesiel Galvão Bernardes wrote:
> > In addition: I turned off one of the GWs, and with just one it works
> > fine. When the two go up, one of the images is changing the "active /
&
ds,
Gesiel
On Tue, Dec 24, 2019 at 9:09 AM, Gesiel Galvão Bernardes <
gesiel.bernar...@gmail.com> wrote:
> Hi,
>
> I am seeing an unusual slowdown using VMware with the iSCSI gateways. I have
> two iSCSI gateways with two RBD images. I have found the following in the
> logs:
Hi,
I am seeing an unusual slowdown using VMware with the iSCSI gateways. I have
two iSCSI gateways with two RBD images. I have found the following in the
logs:
Dec 24 09:00:26 ceph-iscsi2 tcmu-runner: 2019-12-24 09:00:26.040 969 [INFO]
alua_implicit_transition:562 rbd/pool1.vmware_iscsi1: Starting
Hi,
On Wed, Dec 4, 2019 at 12:31 AM, Mike Christie
wrote:
> On 12/03/2019 04:19 PM, Wesley Dillingham wrote:
> > Thanks. If I am reading this correctly the ability to remove an iSCSI
> > gateway would allow the remaining iSCSI gateways to take over for the
> > removed gateway's LUN's as
PM, Jason Dillaman wrote:
> > You haven't answered whether or not you tried restarting the API
> > daemons yet AFAICT.
> >
> > On Tue, Dec 3, 2019 at 5:53 PM Gesiel Galvão Bernardes
> > wrote:
> >>
> >> I copied the /etc/ceph folder from one to the
T
> http://ceph-iscsi3:5000/api/sysinfo/checkconf
>
> (tweak username, password, http/https, and port as appropriate as per
> your configs)
>
> On Tue, Dec 3, 2019 at 12:34 PM Gesiel Galvão Bernardes
> wrote:
> >
> > Are exactly the same:
>
33620867c9c3c5e6a666df2bb461150d4a885db2989cd3aea812a29555fdc45a
/etc/ceph/iscsi-gateway.cfg
On Tue, Dec 3, 2019 at 2:28 PM, Jason Dillaman
wrote:
> The sha256sum between the "/etc/ceph/iscsi-gateway.cfg" where you are
> running the 'gwcli' tool and `ceph-iscsi3` most likely mismatches.
>
> On Tue, Dec 3, 2019 at 12:19 PM G
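To illustrate the check being suggested: the ceph-iscsi gateways need every node's /etc/ceph/iscsi-gateway.cfg to be byte-identical, and a hash comparison catches any drift. A minimal local sketch (two temp files stand in for the config on two gateways; on real nodes you would run sha256sum against the actual file on each host):

```shell
# Simulate the config file as it exists on two different gateways
cfg_a=$(mktemp); cfg_b=$(mktemp)
printf 'api_user = admin\napi_port = 5000\n'            > "$cfg_a"
printf 'api_user = admin\napi_port = 5000\nextra = 1\n' > "$cfg_b"

# Hash both copies, exactly as sha256sum does on the real hosts
hash_a=$(sha256sum "$cfg_a" | cut -d' ' -f1)
hash_b=$(sha256sum "$cfg_b" | cut -d' ' -f1)

# Any difference means the gateways will disagree about their config
if [ "$hash_a" = "$hash_b" ]; then result=match; else result=mismatch; fi
echo "$result"
rm -f "$cfg_a" "$cfg_b"
```

Copying the known-good file to every gateway and re-running the comparison until all hashes match is the straightforward fix.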
Hi,
On Thu, Oct 24, 2019 at 8:16 PM, Mike Christie
wrote:
> On 10/24/2019 12:22 PM, Ryan wrote:
> > I'm in the process of testing the iscsi target feature of ceph. The
> > cluster is running ceph 14.2.4 and ceph-iscsi 3.3. It consists of 5
>
> What kernel are you using?
>
> > hosts with
Hi everyone,
I am looking for consulting and support for Ceph in Brazil. Does anyone on
the list provide consulting in Brazil?
Regards,
Gesiel
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
Hi everyone,
I'm configuring an iSCSI gateway in Ceph Mimic (13.2.6) using the Ceph manual:
https://docs.ceph.com/docs/mimic/rbd/iscsi-target-cli/
But I am stuck on this problem. The manual says:
"Set the client’s CHAP username to myiscsiusername and password to
myiscsipassword:
>
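The step quoted above is the gwcli `auth` command, run at the client's node in the target tree. The exact syntax varies by ceph-iscsi version (older releases took a combined `chap=` field, newer ones split username and password), so treat this as a sketch:

```shell
# Inside gwcli, at the client node of the target, e.g.:
#   /iscsi-target.../hosts/iqn.1994-05.com.redhat:rh7-client>
auth username=myiscsiusername password=myiscsipassword
# older ceph-iscsi releases instead used:
auth chap=myiscsiusername/myiscsipassword
```

If the command is rejected, checking `gwcli --version` against the syntax in the matching docs release usually resolves it.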
ssue was solved after increasing the max fd count.
>
> How do I check and increase the max fd for qemu? Can you show me how?
Regards,
Gesiel
>
> On Wed, 21 Aug 2019 at 20:53, Gesiel Galvão Bernardes <
> gesiel.bernar...@gmail.com> wrote:
>
>> Hi Eliza,
>>
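For checking the limit in practice: a process's effective open-file limit can be read from /proc (the current shell is used below as a stand-in for the qemu pid). Raising it persistently is typically done in /etc/security/limits.conf, or for libvirt-managed qemu via `max_files` in /etc/libvirt/qemu.conf; those paths are common defaults, not something taken from this thread.

```shell
# Soft limit on open files for this shell
ulimit -n
# The same information for an arbitrary pid; substitute qemu's pid on a
# real host, e.g. /proc/$(pidof qemu-system-x86_64)/limits
limit_line=$(grep 'Max open files' "/proc/$$/limits")
echo "$limit_line"
```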