FYI,

I have updated some OSDs from 12.2.6 that were suffering from the CRC error, and 12.2.7 fixed the issue!

I installed some new OSDs on 12/07 without being aware of the issue, and in my small clusters I only noticed the problem when I was trying to copy some RBD images to another pool... probably my VMs were not reading the full objects at once, so this has not affected my users.
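For reference, this is roughly how the problem surfaced here (a minimal sketch; the pool and image names are made up for illustration):

    # A full copy reads every object of the image end to end, which is
    # what hit the CRC read errors on the affected OSDs:
    rbd cp rbd/vm-disk backup/vm-disk

    # After upgrading, confirm that all daemons report 12.2.7:
    ceph versions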

Thanks to the developers for fixing the issue quickly and for all the users that posted info about the issue to the list!


On 7/17/2018 3:42 PM, Sage Weil wrote:
On Tue, 17 Jul 2018, Stefan Kooman wrote:
> Quoting Abhishek Lekshmanan (abhis...@suse.com):

> *NOTE* The v12.2.5 release has a potential data corruption issue with
> erasure coded pools. If you ran v12.2.5 with erasure coding, please see
>      ^^^^^^^^^^^^^^^^^^^
> below.
> < snip >

> Upgrading from v12.2.5 or v12.2.6
> ---------------------------------
>
> If you used v12.2.5 or v12.2.6 in combination with erasure coded
>                                                         ^^^^^^^^^^^^^
> pools, there is a small risk of corruption under certain workloads.
> Specifically, when:
> < snip >

> One section mentions Luminous clusters _with_ EC pools specifically, the
> other section mentions Luminous clusters running 12.2.5.
I think they both do?

> I might be misreading this, but to make things clear for current Ceph
> Luminous 12.2.5 users: is the following statement correct?
>
> If you do _NOT_ use EC in your 12.2.5 cluster (only replicated pools),
> there is no need to quiesce IO (ceph osd pause).
Correct.
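For example (a minimal sketch of how one might check; EC pools print "erasure" in the listing):

    # List pools with their type; replicated pools show "replicated",
    # erasure coded pools show "erasure":
    ceph osd pool ls detail

    # Only if EC pools are present does the procedure call for quiescing
    # IO around the OSD upgrade:
    ceph osd pause
    # ... upgrade and restart the OSDs ...
    ceph osd unpause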

> http://docs.ceph.com/docs/master/releases/luminous/#upgrading-from-other-versions
> If your cluster did not run v12.2.5 or v12.2.6 then none of the above
> issues apply to you and you should upgrade normally.
>
> ^^ Above section would indicate all 12.2.5 luminous clusters.
The intent here is to clarify that any cluster running 12.2.4 or
older can upgrade without reading carefully.  If the cluster
does/did run 12.2.5 or .6, then read carefully because it may (or may not)
be affected.

Does that help?  Any suggested revisions to the wording in the release
notes that make it clearer are welcome!

Thanks-
sage


> Please clarify,
>
> Thanks,
>
> Stefan

> --
> | BIT BV  http://www.bit.nl/        Kamer van Koophandel 09090351
> | GPG: 0xD14839C6                   +31 318 648 688 / i...@bit.nl
> --


_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

