Péter:
I'm forwarding this to ceph-users for a better answer/discussion
On 5/29/19 6:52 AM, Erdősi Péter wrote:
> Dear CEPH maintainers,
>
> I would like to ask for some information about Ceph and Debian 10 (Buster).
> We would like to install Ceph on the Buster RC. As far as I can see, the ceph
>
Hi,
Am 31.05.19 um 12:07 schrieb Burkhard Linke:
> Hi,
>
>
> see my post in the recent 'CephFS object mapping' thread. It describes the
> necessary commands to look up a file based on its RADOS object name.
Many thanks! I somehow missed the important part in that thread earlier and
only got
Hi Orlando,
Thank you for your confirmation. I hope somebody else helps about this
issue.
Best regards,
On Sat, Jun 1, 2019, 03:19 Moreno, Orlando wrote:
> Hi,
>
>
>
> I have not received any response to this and I haven’t worked on this
> lately. I hope to revisit RDMA messenger on Nautilus
Hi,
I have not received any response to this and I haven’t worked on this lately. I
hope to revisit RDMA messenger on Nautilus in the future.
Thanks,
Orlando
From: Lazuardi Nasution [mailto:mrxlazuar...@gmail.com]
Sent: Saturday, May 25, 2019 9:14 PM
To: Moreno, Orlando ; Tang, Haodong
Cc:
Hi Stefan,
Sorry I couldn't get back to you sooner.
On Mon, May 27, 2019 at 5:02 AM Stefan Kooman wrote:
>
> Quoting Stefan Kooman (ste...@bit.nl):
> > Hi Patrick,
> >
> > Quoting Stefan Kooman (ste...@bit.nl):
> > > Quoting Stefan Kooman (ste...@bit.nl):
> > > > Quoting Patrick Donnelly
Is there any other evidence of this?
I have 20 of the 5100 MAX (MTFDDAK1T9TCC) and have not experienced any real
issues with them.
I would pick my Samsung SM863a drives or any of my Intel drives over the
Microns, but I haven't seen the Microns cause any issues for me.
For what it's worth, they are all FW
Hi,
see my post in the recent 'CephFS object mapping' thread. It describes
the necessary commands to look up a file based on its RADOS object name.
Regards,
Burkhard
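The lookup Burkhard refers to can be sketched roughly as follows. CephFS names
its data-pool objects `<inode-hex>.<block-index>`, so the hex prefix of the
object name converts to the file's decimal inode number, which `find -inum`
can map back to a path on a mounted CephFS. The object name and mount point
below are assumed examples, not values from this thread:

```shell
# Sketch: map a RADOS object name back to a CephFS file.
# "10000000000.00000000" and /mnt/cephfs are assumed example values.
obj="10000000000.00000000"

# Strip the block index to get the inode number in hex, then convert to decimal.
ino_hex="${obj%%.*}"
ino_dec=$(printf '%d' "0x${ino_hex}")
echo "inode: ${ino_dec}"   # prints "inode: 1099511627776"

# On a mounted CephFS, search for the file carrying that inode number:
#   find /mnt/cephfs -inum "${ino_dec}"
```

The `find` step can be slow on a large tree; the thread Burkhard mentions
covers the exact commands he uses.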
___
ceph-users mailing list
ceph-users@lists.ceph.com
Hi all,
we use Ceph (Hammer) + OpenStack (Mitaka) in my datacenter, and there are 300
OSDs and 3 MONs. Because of an accident the datacenter lost power and all the
servers shut down. When power returned to normal, we started the 3 MON services
first; about two hours later we started the 500 OSD services, and