[ceph-users] Re: Strange Ceph architect with SAN storages

2019-08-22 Thread Brett Chancellor
After thinking about this more, you may also consider just adding the SAN in as a different device class (or classes). I wouldn't be scared of doing it, but you will want to paint a picture of this transitional environment, the end goal, and any steps the customer will need to take to get there. Also,
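A minimal sketch of that device-class approach, with hypothetical OSD IDs, class name "san", and pool name "rbd-san":

  # Tag the SAN-backed OSDs with their own device class
  ceph osd crush rm-device-class osd.10 osd.11
  ceph osd crush set-device-class san osd.10 osd.11
  # Create a CRUSH rule that selects only OSDs of class "san"
  ceph osd crush rule create-replicated san-only default host san
  # Point a pool at that rule so its data stays on the SAN tier
  ceph osd pool set rbd-san crush_rule san-only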

[ceph-users] Re: Strange Ceph architect with SAN storages

2019-08-22 Thread Anthony D'Atri
In a past life I had a bunch of SAN gear dumped in my lap; it was spec’d by someone else misinterpreting vague specs. It was SAN gear with an AoE driver. I wasn’t using Ceph, but sending it back and getting a proper solution wasn’t an option. Ended up using the SAN gear as a NAS with a single

[ceph-users] Ceph Tech Talk Cancelled for August

2019-08-22 Thread Mike Perez
Hi all, We're cancelling our Ceph Tech Talk today. In the meantime, check out our archive and consider reaching out to me about giving your own talk in the upcoming months. https://ceph.com/ceph-tech-talks/ -- Mike Perez (thingee)

[ceph-users] Re: Strange Ceph architect with SAN storages

2019-08-22 Thread Brett Chancellor
It's certainly possible, though it makes things a little more complex. Some questions you may want to consider during the design: - Is the customer aware this won't preserve any data on the LUNs they are hoping to reuse? - Is the plan to eventually replace the SAN with JBOD, in the same systems?

[ceph-users] Re: About image migration

2019-08-22 Thread Jason Dillaman
On Wed, Aug 21, 2019 at 9:33 PM Zaharo Bai (白战豪), Cloud Data Center Group, wrote: > > I tested and walked through the current migration process. If I read and write to the new image during the migration and then use migration_abort, the newly written data will be lost. Do we have a solution to this problem?
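For reference, the live-migration steps under discussion, with hypothetical pool and image names; per the report above, writes made to the target after prepare are discarded if the migration is aborted:

  # Link a target image to the source; clients reopen against the target
  rbd migration prepare rbd-hdd/image1 rbd-ssd/image1
  # Copy the data in the background
  rbd migration execute rbd-ssd/image1
  # Finalize the move ...
  rbd migration commit rbd-ssd/image1
  # ... or roll back to the source, dropping writes made to the target
  rbd migration abort rbd-ssd/image1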

[ceph-users] Strange Ceph architect with SAN storages

2019-08-22 Thread Mohsen Mottaghi
Hi, Yesterday one of our customers came to us with a strange request. He asked us to use SAN as the Ceph storage space, adding the SAN storages he currently has to the cluster to reduce further disk purchase costs. Does anybody know whether we can do this or not?! And if this is possible, how we should start to
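It is possible in principle: a SAN LUN presented to a host shows up as a block device and can be turned into an OSD like any local disk. A minimal sketch, assuming a multipathed LUN at a hypothetical /dev/mapper path:

  # Create a BlueStore OSD on a SAN LUN (device path is hypothetical)
  ceph-volume lvm create --bluestore --data /dev/mapper/mpatha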

[ceph-users] deep-scrub stat mismatch after PG merge

2019-08-22 Thread Daniel Schreiber
Hi everyone, A few days ago I reduced the number of PGs on a small pool. The cluster runs 14.2.2; it was upgraded from Jewel to 14.2.1 and then to 14.2.2. I did a ceph-bluestore-tool repair on all OSDs to update statistics. Today I got a scrub error reporting: 4.3 scrub : stat mismatch, got 68/68
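A typical way to inspect and clear a stat mismatch like this, using the PG id 4.3 from the report above:

  # Confirm which PG is flagged inconsistent
  ceph health detail
  # Ask the primary OSD to repair the PG; for a pure stat mismatch
  # this should recompute and rewrite the object/byte counts
  ceph pg repair 4.3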