[ceph-users] Re: Ceph iSCSI help

2021-03-01 Thread Salsa
I noticed you're trying to connect over IPv4, but the listening port is on an IPv6 address. Is that right? You should have an IPv4 listener, right? Also, did you check SELinux or firewalld? Maybe you need to allow port 5000. -- Salsa Sent with ProtonMail Secure Email. ‐‐‐ Original Message ‐‐‐ …
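
Since the gateway API in this thread apparently only answers on IPv6, a plain TCP connect test over IPv4 is a quick way to confirm whether anything is listening there at all. A minimal sketch in Python; the gateway address 192.0.2.10 is a placeholder, and port 5000 is simply the port mentioned in the thread (the usual api_port for the ceph-iscsi REST API).

```python
# Probe the iSCSI gateway REST API port over IPv4 explicitly.
# Host and port below are placeholders for the values in your setup.
import socket

GATEWAY = "192.0.2.10"   # hypothetical gateway IPv4 address
PORT = 5000              # api_port mentioned in the thread

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # force IPv4
s.settimeout(5)
result = s.connect_ex((GATEWAY, PORT))
s.close()

if result == 0:
    print("port %d is reachable over IPv4" % PORT)
else:
    print("connect failed (errno %d) - check the bind address, firewalld and SELinux" % result)
```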

[ceph-users] RBD image stuck and no errors in logs

2020-11-04 Thread Salsa
… in the end all I can do is remove the image. Again, I see no errors in the logs and Ceph's status is OK. I tried to alter some log levels, but still no helpful info. Is there anything I should check? Rados? -- Salsa Sent with ProtonMail Secure Email.
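
One low-level check that does not depend on the iSCSI layer is to ask librbd directly whether the image still reports sane metadata and whether any client holds an advisory lock on it. A rough sketch using the rados/rbd Python bindings; the pool name rbd and image name disk01 are placeholders.

```python
# Inspect a possibly stuck RBD image: basic stat plus any advisory lockers.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')              # placeholder pool name
    with rbd.Image(ioctx, 'disk01') as image:      # placeholder image name
        info = image.stat()
        print("size: %d bytes, objects: %d" % (info['size'], info['num_objs']))

        lockers = image.list_lockers()
        if lockers:
            print("lock tag: %r, exclusive: %s" % (lockers['tag'], lockers['exclusive']))
            for client, cookie, addr in lockers['lockers']:
                print("  held by %s (%s), cookie %r" % (client, addr, cookie))
        else:
            print("no lockers on this image")
    ioctx.close()
finally:
    cluster.shutdown()
```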

[ceph-users] Re: Ceph RBD iSCSI compatibility

2020-09-03 Thread Salsa
Joe, sorry, I should have been clearer. The incompatible RBD features are exclusive-lock, journaling, object-map and such. The info comes from here: https://documentation.suse.com/ses/6/html/ses-all/ceph-rbd.html -- Salsa Sent with ProtonMail Secure Email. ‐‐‐ Original Message ‐‐‐ …
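
A quick way to see which of the features named in this thread are actually enabled on existing images is to read each image's feature bitmask through the rbd Python bindings. A sketch assuming the pool is called rbd; the feature list below is only the one quoted in this thread (exclusive-lock, journaling, object-map), not an authoritative compatibility list.

```python
# Report which of the thread's flagged features are enabled per image.
import rados
import rbd

FLAGGED = {
    'exclusive-lock': rbd.RBD_FEATURE_EXCLUSIVE_LOCK,
    'journaling':     rbd.RBD_FEATURE_JOURNALING,
    'object-map':     rbd.RBD_FEATURE_OBJECT_MAP,
}

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')              # placeholder pool name
    for name in rbd.RBD().list(ioctx):
        with rbd.Image(ioctx, name) as image:
            enabled = image.features()
            hits = [label for label, bit in FLAGGED.items() if enabled & bit]
            print("%s: %s" % (name, ', '.join(hits) or 'none of the flagged features'))
            # To turn one off (order matters, e.g. fast-diff depends on
            # object-map), something like:
            # image.update_features(rbd.RBD_FEATURE_JOURNALING, False)
    ioctx.close()
finally:
    cluster.shutdown()
```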

[ceph-users] Re: Ceph RBD iSCSI compatibility

2020-09-02 Thread Salsa
BTW, the documentation can be found here: https://documentation.suse.com/ses/6/html/ses-all/ceph-rbd.html -- Salsa ‐‐‐ Original Message ‐‐‐ On Wednesday, September 2, 2020 7:08 PM, Salsa wrote: > I just came across SUSE documentation stating that RBD features are not > …

[ceph-users] Ceph RBD iSCSI compatibility

2020-09-02 Thread Salsa
… while using rbd-mirror to back up data to a second cluster? I created all images with all features enabled. Is that compatible? -- Salsa

[ceph-users] Re: Nautilus: RBD image stuck inaccessible after VM restart

2020-09-01 Thread salsa
Hi, any news on this error? I think I'm facing the same issue. I had a Windows Server copying data to some RBD images through iSCSI; the server got stuck and had to be reset, and now the images that held the data are blocking all I/O operations, including editing their config, creating snapshots, etc.
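
If the blocked images turn out to hold an exclusive lock left behind by the client that was reset, one recovery path is to break that stale lock so other clients can make progress again. A cautious sketch with the rbd Python bindings; pool and image names are placeholders, and breaking a lock that is still held by a live client risks corrupting the image, so this is only for a client that is confirmed dead.

```python
# Break advisory locks left behind by a client that no longer exists.
# WARNING: only do this if the locker is confirmed dead.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')                 # placeholder pool name
    with rbd.Image(ioctx, 'win-data01') as image:     # placeholder image name
        lockers = image.list_lockers()
        if not lockers:
            print("no lockers to break")
        else:
            for client, cookie, addr in lockers['lockers']:
                print("breaking lock held by %s (%s)" % (client, addr))
                image.break_lock(client, cookie)
    ioctx.close()
finally:
    cluster.shutdown()
```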

[ceph-users] RBD image corrupt or locked somehow

2020-09-01 Thread Salsa
…en or stuck, but as I said, there are no locks on them. I tried a lot of options, and somehow my cluster now has some RGW pools and I have no idea where they came from. Any idea what I should do? -- Salsa

[ceph-users] Very bad performance on a Ceph RBD pool via iSCSI to VMware ESX

2020-05-29 Thread Salsa
I got 2.323.206 B/s (roughly 2.3 MB/s) inside the same VM. I think the performance is far slower than it should be and that I can fix it by correcting some configuration. Any advice? -- Salsa

[ceph-users] Re: Maximum CephFS Filesystem Size

2020-04-07 Thread Salsa
I have the same problem: 30 TB available in Ceph, but my SMB share has only 5 TB available. On IRC I was told I should raise the PG count and run the balancer. Raising the PG count helped a little, and I'm waiting for Ceph to recover from the PG resizing before running the balancer. -- Salsa Sent with ProtonMail Secure Email.
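
The gap between raw capacity and what a share reports usually comes from the pool's MAX AVAIL figure, which accounts for the replication factor and the fullest OSD rather than the sum of all free space. A sketch that pulls both numbers from the cluster, assuming a conffile at /etc/ceph/ceph.conf and the Nautilus-era `ceph df` JSON field names; pool names are whatever the cluster actually has.

```python
# Compare raw cluster free space with each pool's MAX AVAIL (what clients see).
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ret, out, errs = cluster.mon_command(
        json.dumps({"prefix": "df", "format": "json"}), b'')
    if ret != 0:
        raise RuntimeError(errs)
    report = json.loads(out)

    tib = 1024.0 ** 4
    stats = report['stats']
    print("raw total: %.1f TiB, raw avail: %.1f TiB"
          % (stats['total_bytes'] / tib, stats['total_avail_bytes'] / tib))
    for pool in report['pools']:
        print("pool %-20s max_avail: %.1f TiB"
              % (pool['name'], pool['stats']['max_avail'] / tib))
finally:
    cluster.shutdown()
```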

[ceph-users] Re: Very bad performance on a Ceph RBD pool via iSCSI to VMware ESX

2020-02-14 Thread Salsa
‐‐‐ Original Message ‐‐‐ On Friday, February 14, 2020 4:49 PM, Mike Christie wrote: > On 02/13/2020 09:56 AM, Salsa wrote: > > I have a 3-host Ceph storage setup with ten 4 TB HDDs per host. I defined a 3-replica RBD pool and some images and presented them to …

[ceph-users] Re: Very bad performance on a Ceph RBD pool via iSCSI to VMware ESX

2020-02-13 Thread Salsa
rbd 29 4.3 TiB 1.42M 13 TiB 13.13 29 TiB -- Salsa Sent with [ProtonMail](https://protonmail.com) Secure Email. ‐‐‐ Original Message ‐‐‐ On Thursday, February 13, 2020 4:50 PM, Andrew Ferris wrote: > Hi Salsa, > > More information …

[ceph-users] Very bad performance on a Ceph RBD pool via iSCSI to VMware ESX

2020-02-13 Thread Salsa
I got 2.323.206 B/s (roughly 2.3 MB/s) inside the same VM. I think the performance is far slower than it should be and that I can fix it by correcting some configuration. Any advice? -- Salsa
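
To tell whether the bottleneck is the iSCSI/VMware path or the RBD pool itself, it helps to measure write throughput against an image directly from a client, bypassing the gateway. A rough sketch using the rbd Python bindings; bench-scratch is a hypothetical throwaway image created and removed by the script, and 256 MiB of 4 MiB sequential writes is an arbitrary choice just to get a ballpark number.

```python
# Rough sequential-write throughput test against a throwaway RBD image,
# bypassing the iSCSI/VMware path entirely.
import time
import rados
import rbd

POOL = 'rbd'                      # placeholder pool name
IMAGE = 'bench-scratch'           # hypothetical throwaway image
CHUNK = 4 * 1024 * 1024           # 4 MiB per write
TOTAL = 256 * 1024 * 1024         # 256 MiB in total
data = b'\0' * CHUNK

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx(POOL)
    rbd.RBD().create(ioctx, IMAGE, TOTAL)
    try:
        with rbd.Image(ioctx, IMAGE) as image:
            start = time.monotonic()
            for off in range(0, TOTAL, CHUNK):
                image.write(data, off)
            image.flush()
            elapsed = time.monotonic() - start
        print("wrote %d MiB in %.1fs: %.1f MB/s"
              % (TOTAL // (1024 * 1024), elapsed, TOTAL / elapsed / 1e6))
    finally:
        rbd.RBD().remove(ioctx, IMAGE)
    ioctx.close()
finally:
    cluster.shutdown()
```

The stock `rados bench` tool run against the backing pool gives a similar baseline without involving RBD at all, which helps separate pool performance from image and gateway overhead.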