On 2018-03-02T15:24:29, Joshua Chen wrote:
> Dear all,
> I wonder how we can support VM systems with Ceph storage (block
> devices)? My colleagues are waiting for my answer for VMware (vSphere 5), and
> I myself use oVirt (RHEV); the default protocol is iSCSI.
Hi!
On 02.03.18 13:27, Federico Lucifredi wrote:
We do speak to the Xen team every once in a while, but while there is
interest in adding Ceph support on their side, I think we are somewhat
down the list of their priorities.
Maybe things change with XCP-ng (https://xcp-ng.github.io).
I know that OpenStack/Cinder works well with Ceph, and I have heard that
Proxmox does too.
On 05.03.2018 00:26, Adrian Saul wrote:
> We are using Ceph+RBD+NFS under Pacemaker for VMware. We are doing
> iSCSI using SCST but have not used it against VMware, just Solaris and
> Hyper-V.
> It generally works and performs well enough – the biggest issues are the
> clustering for
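A rough sketch of the kind of Pacemaker arrangement Adrian describes, for anyone curious: map an RBD image, mount it, export it over NFS, and float a VIP, all in one failover group. All resource names, addresses, and paths below are made-up examples, not a tested configuration; the `ocf:ceph:rbd` agent is assumed to come from the ceph-resource-agents package.

```shell
# Map the RBD image on whichever node currently owns the group
# (agent assumed from the ceph-resource-agents package):
pcs resource create vm_rbd ocf:ceph:rbd name=vmstore pool=rbd
# Mount a filesystem on the mapped device:
pcs resource create vm_fs ocf:heartbeat:Filesystem \
    device=/dev/rbd/rbd/vmstore directory=/export/vmstore fstype=xfs
# Export the mount over NFS and float a VIP for the ESXi hosts to mount:
pcs resource create vm_nfs ocf:heartbeat:exportfs \
    clientspec=192.168.10.0/24 directory=/export/vmstore fsid=101 options=rw,no_root_squash
pcs resource create vm_vip ocf:heartbeat:IPaddr2 ip=192.168.10.50 cidr_netmask=24
# Group them so they start in order on one node and fail over together:
pcs resource group add vm_nfs_group vm_rbd vm_fs vm_nfs vm_vip
```

The group ordering matters: the image must be mapped before the filesystem mounts, and the export must exist before the VIP starts accepting clients.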
Sent: 28-2-2018 13:53
Subject: [ceph-users] Ceph iSCSI is a prank?
I was building ceph in order to use with iSCSI.
But I just see from the docs that need:
CentOS 7.5
(which is not available yet, it's still at 7.4)
https://wiki.centos.org/Download
rk Schouten | Tuxis Internet Engineering
>> KvK: 61527076 | http://www.tuxis.nl/
>> T: 0318 200208 | i...@tuxis.nl
>> From: Massimiliano Cuttini <m...@phoenixweb.it>
>> To: "ceph-users@lists.ceph.com" <ceph-users@lists.ceph.com>
On 02/03/2018 13:27, Federico Lucifredi wrote:
On Fri, Mar 2, 2018 at 4:29 AM, Max Cuttins wrote:
Hi Federico,
Hi Max,
On Feb 28, 2018, at 10:06 AM, Max Cuttins
Hi Federico,
Hi Max,
On Feb 28, 2018, at 10:06 AM, Max Cuttins wrote:
This is true, but having something that just works, in order to have minimum
compatibility and start to dismiss old disks, is something you should think about.
You'll have ages in order to improve and
> Sent: 28-2-2018 13:53
> Subject: [ceph-users] Ceph iSCSI is a prank?
oun...@lists.ceph.com> On Behalf Of Max Cuttins
Sent: Thursday, March 1, 2018 7:27 AM
To: David Turner <drakonst...@gmail.com>; dilla...@redhat.com
Cc: ceph-users <ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] Ceph iSCSI is a prank?
On 28/02/2018 18:16, David Turner wrote:
Almost...
On 01/03/2018 16:17, Heðin Ejdesgaard Møller wrote:
Hello,
I would like to point out that we are running Ceph with redundant iSCSI GWs,
connecting the LUNs to an ESXi + VCSA 6.5 cluster with Red Hat support.
We did encounter a few bumps on the road to production, but those got
fixed by Red Hat engineering and are included in the rhel7.5 and 4.17
I wonder when EMC/NetApp are going to start giving away production-ready
bits that fit into your architecture.
At least support for this feature is coming in the near term.
I say keep on keepin' on. Kudos to the Ceph team (and maybe more teams) for
taking care of the hard stuff for us.
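For anyone who wants a feel for the redundant-gateway setup described above: ceph-iscsi is driven through gwcli. A rough sketch, loosely following the upstream docs; the IQNs, hostnames, IPs, and image name below are made-up examples, and exact paths/options may differ between releases.

```shell
# Create the iSCSI target (IQN is an example):
gwcli /iscsi-targets create iqn.2003-01.com.redhat.iscsi-gw:ceph-igw
# Register both gateway nodes (each FQDN must resolve to the given IP):
gwcli /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-igw/gateways create gw1.example.com 192.168.20.21
gwcli /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-igw/gateways create gw2.example.com 192.168.20.22
# Back a LUN with an RBD image:
gwcli /disks create pool=rbd image=vmware-lun0 size=500G
# Allow an ESXi initiator and map the disk to it:
gwcli /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-igw/hosts create iqn.1998-01.com.vmware:esxi01
gwcli /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-igw/hosts/iqn.1998-01.com.vmware:esxi01 disk add rbd/vmware-lun0
```

On the ESXi side, both gateway portals are added as dynamic targets, and multipathing handles path failover between the two GWs.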
Hi Jason,
That's awesome. Keep up the good work guys, we all love the work you are
doing with that software!!
Sam
On Mar 1, 2018 09:11, "Jason Dillaman" wrote:
> It's very high on our priority list to get a solution merged in the
> upstream kernel. There was a proposal
It's very high on our priority list to get a solution merged in the
upstream kernel. There was a proposal to use DLM to distribute the PGR
state between target gateways (a la the SCST target) and it's quite
possible that would have the least amount of upstream resistance since
it would work for
On another note, is there any work being done for persistent group
reservations support for Ceph/LIO compatibility? Or just a rough estimate :)
Would love to see Red Hat/Ceph support this type of setup. I know SUSE
supports it as of late.
Sam
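For context on what that PGR support has to cover: the SCSI-3 persistent reservations that VMware and Windows clustering issue can be exercised by hand with sg3_utils against any LUN. A sketch; the device path and the key are made-up examples.

```shell
# Register a reservation key for this initiator on the LUN
# (/dev/sdb and the key 0xabc123 are example values):
sg_persist --out --register --param-sark=0xabc123 /dev/sdb
# Take a "Write Exclusive - Registrants Only" reservation (type 5):
sg_persist --out --reserve --param-rk=0xabc123 --prout-type=5 /dev/sdb
# Any node can then list registered keys and the active reservation:
sg_persist --in --read-keys /dev/sdb
sg_persist --in --read-reservation /dev/sdb
```

The hard part for a multi-gateway target is that this registration/reservation state must look identical no matter which gateway the initiator talks to, which is exactly what the DLM proposal Jason mentions is about.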
On Mar 1, 2018 07:33, "Kai Wagner" wrote:
I totally understand and see your frustration here, but you have to keep
in mind that this is an open-source project with a lot of volunteers.
If you have a really urgent need, you have the option to develop
such a feature on your own, or to pay someone who can do the
work for you.
On 28/02/2018 18:16, David Turner wrote:
My thought is that in 4 years you could have migrated to a hypervisor
that will have better performance with Ceph than an added iSCSI layer.
I won't deploy VMs for Ceph on anything that won't let librbd
work. Anything else is added complexity
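For comparison, consuming RBD through librbd directly needs no gateway layer at all; QEMU speaks to the cluster itself. A minimal sketch, where the pool name "rbd" and image name "vm-disk1" are example values:

```shell
# Create a disk image through librbd (pool/image names are examples):
qemu-img create -f raw rbd:rbd/vm-disk1 20G
# Boot a guest straight off the RBD image, no iSCSI or NFS in between:
qemu-system-x86_64 -m 2048 \
    -drive format=raw,file=rbd:rbd/vm-disk1,cache=writeback
```

This is the path hypervisors like Proxmox, oVirt, and OpenStack already take, which is David's point: the iSCSI layer only exists for platforms, such as VMware and Hyper-V, that cannot link librbd.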
know the Indians have a nice
> >> saying:
> >>
> >> "Everything will be good at the end. If it is not good, it is still not
> >> the end."
> >>
> >> -----Original Message-----
> >> From: Massimiliano Cuttini
>> From: Massimiliano Cuttini [mailto:m...@phoenixweb.it]
>> Sent: Wednesday, 28 February 2018 13:53
>> To: ceph-users@lists.ceph.com
>> Subject: [ceph-users] Ceph iSCSI is a prank?
>>
>> I was building ceph
Max,
I understand your frustration.
However, last time I checked, ceph was open source.
Some of you might not remember, but one major reason why open source is
great is that YOU CAN DO your own modifications.
If you need a change like iSCSI support and it isn't there,
it is probably best, if
On 28/02/2018 15:19, Jason Dillaman wrote:
On Wed, Feb 28, 2018 at 7:53 AM, Massimiliano Cuttini wrote:
I was building ceph in order to use with iSCSI.
But I just see from the docs that need:
CentOS 7.5
(which is not available yet, it's still at 7.4)
On Wed, Feb 28, 2018 at 7:53 AM, Massimiliano Cuttini wrote:
> I was building ceph in order to use with iSCSI.
> But I just see from the docs that need:
>
> CentOS 7.5
> (which is not available yet, it's still at 7.4)
> https://wiki.centos.org/Download
>
> Kernel 4.17
>
I was building Ceph in order to use it with iSCSI.
But I just see from the docs that it needs:
*CentOS 7.5*
(which is not available yet; it's still at 7.4)
https://wiki.centos.org/Download
*Kernel 4.17*
(which is not available yet; it is still at 4.15.7)
https://www.kernel.org/
So I