On Thu, 14 Nov 2019 at 07:07, Patrick Donnelly wrote:
>
> On Wed, Nov 13, 2019 at 2:30 AM Jerry Lee wrote:
> > Recently, I'm evaluating the snapshot feature of CephFS from the kernel
> > client and everything works like a charm. But it seems that reverting
> > a snapshot is not available currently.
On 14/11/19 01:30, Shawn Iverson wrote:
> On another note, the ceph lists might consider munging the from address
> and implementing SPF/DKIM/DMARC for itself while checking others for
> DMARC compliance at the MTA level.
>
> I see a lot of ceph-users listserv emails landing in my spam box.
That wou
Hello everyone,
I very much agree with this, as I'm routinely having to fish this mailing list
out of my spam folder. Proper, more modern SMTP measures and safeguards are
needed.
--
Regards,
Christopher McGill
Proprietor / Lead System Administrator
sa...@gekkofyre.io
https://gekkofyre.i
On Wed, Nov 13, 2019 at 2:30 AM Jerry Lee wrote:
> Recently, I'm evaluating the snapshot feature of CephFS from the kernel
> client and everything works like a charm. But it seems that reverting
> a snapshot is not available currently. Is there some reason or
> technical limitation that the feature
Hi Thoralf,
there have been several reports about Ceph mgr modules (not just the
dashboard) experiencing hangs and freezes recently. The thread "mgr
daemons becoming unresponsive" might give you some additional insight.
Is the "device health metrics" module enabled on your cluster? Could you
try
Hello,
When I posted a crash several days ago, nobody responded either. So I want
to share my thoughts and maybe help you find it (even though I'm pretty new
to Ceph and its code).
What I would do in your case:
- git checkout Ceph version 14.2.2 (4f8fa0a0024755aae7d95567c63f11d6862d55be)
nautilus (stab
On another note, the ceph lists might consider munging the from address and
implementing SPF/DKIM/DMARC for itself while checking others for DMARC
compliance at the MTA level.
I see a lot of ceph-users listserv emails landing in my spam box.
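For readers unfamiliar with what a DMARC policy actually contains, here is a minimal Python sketch that splits a DMARC TXT record into its tag/value pairs. The record string below is an illustrative example, not the actual policy of any ceph.io domain:

```python
# Minimal sketch: split a DMARC TXT record into tag/value pairs.
# The record below is illustrative, not a real ceph.io policy.
record = "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.org"

tags = dict(
    part.strip().split("=", 1)   # each tag has the form "name=value"
    for part in record.split(";")
    if part.strip()
)

print(tags["p"])   # the policy the receiving MTA should apply -> quarantine
```

The `p` tag is what decides whether non-compliant list mail ends up quarantined (i.e. in spam folders), which is exactly the symptom described above.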
On Tue, Nov 12, 2019 at 7:28 PM Christian Balzer wrote:
Hi,
We have upgraded a 5-node Ceph cluster from Luminous to Nautilus and the
cluster was running fine. Yesterday, when we tried to add one more OSD to
the cluster, we found that the OSD was created in the cluster, but
suddenly some of the other OSDs started to crash and we are not able to
resta
Hi there,
the dashboard of our moderately used cluster with 3 mon/mgr nodes gets
stuck about 30 seconds after a mgr becomes active. The dashboard is not
usable anymore (i.e. the mgr daemon does not respond to HTTP requests
anymore), although it comes back from the dead occasionally for a few
seconds.
On Wed, Nov 13, 2019 at 10:13 AM Stefan Bauer wrote:
>
> Paul,
>
>
> I would like to take the chance to thank you, and to ask: could it be that
> subop_latency reports a high value (is that avgtime reported in seconds?)
> because the communication partner is slow in writing/committing?
no
Paul
Hi,
Recently, I'm evaluating the snapshot feature of CephFS from the kernel
client and everything works like a charm. But it seems that reverting
a snapshot is not available currently. Is there some reason or
technical limitation that the feature is not provided? Any insights
or ideas are appreciat
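Since CephFS exposes each snapshot as a read-only subdirectory under `.snap`, one manual fallback in the absence of a revert command is simply to copy a snapshot's contents back over the live tree. A minimal sketch, assuming the standard `.snap` layout; the function name and the copy-everything approach are illustrative, not an official CephFS tool:

```python
# Hedged sketch: CephFS exposes snapshots as read-only directories under
# ".snap"; lacking a revert command, one can copy a snapshot's contents
# back over the live tree. Name and layout here are illustrative assumptions.
import os
import shutil


def restore_from_snapshot(live_dir: str, snap_name: str) -> None:
    """Copy the contents of live_dir/.snap/<snap_name> back into live_dir."""
    snap_dir = os.path.join(live_dir, ".snap", snap_name)
    # dirs_exist_ok (Python 3.8+) lets existing files be overwritten
    # with their snapshot versions.
    shutil.copytree(snap_dir, live_dir, dirs_exist_ok=True)
```

Note this only restores files that exist in the snapshot; files created after the snapshot was taken are left in place, so it is not a true revert.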
Paul,
I would like to take the chance to thank you, and to ask: could it be that
subop_latency reports a high value (is that avgtime reported in seconds?)
"subop_latency": {
"avgcount": 7782673,
"sum": 38852.140794738,
"avgtime": 0.004992133
be
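In Ceph perf dumps, `avgtime` is simply `sum / avgcount`, in seconds. A quick sanity check of the numbers above (plain arithmetic, nothing cluster-specific):

```python
# subop_latency fields from a "perf dump": avgtime = sum / avgcount, seconds.
avgcount = 7782673              # number of sub-ops counted
total = 38852.140794738         # cumulative seconds spent on sub-ops

avgtime = total / avgcount      # matches the reported 0.004992133 s
print(round(avgtime * 1000, 3))  # average sub-op latency in ms -> 4.992
```

So the counter reports roughly 5 ms average sub-op latency, which is what the question about slow write/commit on the communication partner is probing at.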
12 matches