Re: [ceph-users] ceph iscsi question

2019-10-17 Thread 展荣臻(信泰)
> Have you updated your "/etc/multipath.conf" as documented here [1]? > You should have ALUA configured but it doesn't appear that's the case > w/ your provided output. > > [1] https://docs.ceph.com/ceph-prs/30912/rbd/iscsi-initiator-linux/ Thank you Jason. Updated the /etc/multipath.conf as
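For reference, the ALUA multipath configuration from the linked initiator docs looks roughly like this (a sketch based on the Ceph documentation; treat the exact values as indicative and reload multipathd after editing):

    # /etc/multipath.conf - device entry for ceph-iscsi (LIO/TCMU) LUNs
    devices {
            device {
                    vendor                 "LIO-ORG"
                    product                "TCMU device"
                    hardware_handler       "1 alua"
                    path_grouping_policy   "failover"
                    path_selector          "queue-length 0"
                    failback               60
                    path_checker           tur
                    prio                   alua
                    prio_args              exclusive_pref_bit
                    fast_io_fail_tmo       25
                    no_path_retry          queue
            }
    }

    # apply without rebooting
    systemctl reload multipathd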

Re: [ceph-users] ceph iscsi question

2019-10-17 Thread Mike Christie
On 10/17/2019 10:52 AM, Mike Christie wrote: > On 10/16/2019 01:35 AM, 展荣臻(信泰) wrote: >> hi, all >> we deploy ceph with ceph-ansible; osds, mons and the iscsi daemons run in >> docker. >> I created an iscsi target according to >> https://docs.ceph.com/docs/luminous/rbd/iscsi-target-cli/. >> I

Re: [ceph-users] ceph iscsi question

2019-10-17 Thread Mike Christie
On 10/16/2019 01:35 AM, 展荣臻(信泰) wrote: > hi, all > we deploy ceph with ceph-ansible; osds, mons and the iscsi daemons run in > docker. > I created an iscsi target according to > https://docs.ceph.com/docs/luminous/rbd/iscsi-target-cli/. > I discovered and logged into the iscsi target on another host, as

Re: [ceph-users] ceph iscsi question

2019-10-17 Thread Jason Dillaman
illaman" > > 发送时间: 2019-10-17 09:54:30 (星期四) > > 收件人: "展荣臻(信泰)" > > 抄送: dillaman , ceph-users > > 主题: Re: [ceph-users] ceph iscsi question > > > > On Wed, Oct 16, 2019 at 9:52 PM 展荣臻(信泰) wrote: > > > > > > > > > >

Re: [ceph-users] ceph iscsi question

2019-10-16 Thread 展荣臻(信泰)
> -Original Message- > From: "Jason Dillaman" > Sent: 2019-10-17 09:54:30 (Thursday) > To: "展荣臻(信泰)" > Cc: dillaman , ceph-users > Subject: Re: [ceph-users] ceph iscsi question > > On Wed, Oct 16, 2019 at 9:52 PM 展荣臻(信泰) wrote: > > > > > > >

Re: [ceph-users] ceph iscsi question

2019-10-16 Thread Jason Dillaman
On Wed, Oct 16, 2019 at 9:52 PM 展荣臻(信泰) wrote: > > > > > > -Original Message- > > From: "Jason Dillaman" > > Sent: 2019-10-16 20:33:47 (Wednesday) > > To: "展荣臻(信泰)" > > Cc: ceph-users > > Subject: Re: [ceph-users] ceph iscsi question >

Re: [ceph-users] ceph iscsi question

2019-10-16 Thread 展荣臻(信泰)
> -Original Message- > From: "Jason Dillaman" > Sent: 2019-10-16 20:33:47 (Wednesday) > To: "展荣臻(信泰)" > Cc: ceph-users > Subject: Re: [ceph-users] ceph iscsi question > > On Wed, Oct 16, 2019 at 2:35 AM 展荣臻(信泰) wrote: > > > > hi, all > > w

Re: [ceph-users] ceph iscsi question

2019-10-16 Thread Jason Dillaman
On Wed, Oct 16, 2019 at 2:35 AM 展荣臻(信泰) wrote: > > hi, all > we deploy ceph with ceph-ansible; osds, mons and the iscsi daemons run in > docker. > I created an iscsi target according to > https://docs.ceph.com/docs/luminous/rbd/iscsi-target-cli/. > I discovered and logged into the iscsi target on

[ceph-users] ceph iscsi question

2019-10-16 Thread 展荣臻(信泰)
hi, all. We deploy ceph with ceph-ansible; osds, mons and the iscsi daemons run in docker. I created an iscsi target according to https://docs.ceph.com/docs/luminous/rbd/iscsi-target-cli/. I discovered and logged into the iscsi target on another host, as shown below: [root@node1 tmp]# iscsiadm -m discovery
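For context, the discovery and login steps on the initiator side generally look like this (illustrative only; the portal IP and IQN below are placeholders, not the poster's actual values):

    # discover targets published by the iscsi gateway
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10
    # log in to the discovered target
    iscsiadm -m node -T iqn.2003-01.com.redhat.iscsi-gw:ceph-igw -l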

[ceph-users] ceph-iscsi: logical/physical block size

2019-09-19 Thread Matthias Leopold
Hi, is it possible to set the logical/physical block size for exported disks? I can set both values in FreeNAS. oVirt 4.3.6 will "Support device block size of 4096 bytes for file based storage domains" and I want to know if I can use this with ceph-iscsi. thx matthias

Re: [ceph-users] ceph-iscsi: problem when discovery auth is disabled, but gateway receives auth requests

2019-04-23 Thread Mike Christie
On 04/18/2019 06:24 AM, Matthias Leopold wrote: > Hi, > > the Ceph iSCSI gateway has a problem when receiving discovery auth > requests when discovery auth is not enabled. Target discovery fails in > this case (see below). This is especially annoying with oVirt (KVM > management platform) where

[ceph-users] ceph-iscsi: problem when discovery auth is disabled, but gateway receives auth requests

2019-04-18 Thread Matthias Leopold
Hi, the Ceph iSCSI gateway has a problem when receiving discovery auth requests when discovery auth is not enabled. Target discovery fails in this case (see below). This is especially annoying with oVirt (KVM management platform) where you can't separate the two authentication phases. This

Re: [ceph-users] ceph-iscsi: (Config.lock) Timed out (30s) waiting for excl lock on gateway.conf object

2019-04-17 Thread Matthias Leopold
Just for the record: After recreating the config from scratch (after the upgrade to ceph-iscsi-3.0) the problem went away. I can use the gateway without client.admin access now. thanks matthias On 01.04.19 at 17:05, Jason Dillaman wrote: What happens when you run "rados -p rbd lock list

Re: [ceph-users] ceph-iscsi: (Config.lock) Timed out (30s) waiting for excl lock on gateway.conf object

2019-04-03 Thread Matthias Leopold
running "rados -p rbd lock list gateway.conf" gives: {"objname":"gateway.conf","locks":[{"name":"lock"}]} To be sure I stopped all related services (tcmu-runner, rbd-target-gw, rbd-target-api) on both gateways and ran "rados -p rbd lock list gateway.conf" again, result was the same as above.

Re: [ceph-users] ceph-iscsi: (Config.lock) Timed out (30s) waiting for excl lock on gateway.conf object

2019-04-01 Thread Jason Dillaman
What happens when you run "rados -p rbd lock list gateway.conf"? On Fri, Mar 29, 2019 at 12:19 PM Matthias Leopold wrote: > > Hi, > > I upgraded my test Ceph iSCSI gateways to > ceph-iscsi-3.0-6.g433bbaa.el7.noarch. > I'm trying to use the new parameter "cluster_client_name", which - to me > -

[ceph-users] ceph-iscsi: (Config.lock) Timed out (30s) waiting for excl lock on gateway.conf object

2019-03-29 Thread Matthias Leopold
Hi, I upgraded my test Ceph iSCSI gateways to ceph-iscsi-3.0-6.g433bbaa.el7.noarch. I'm trying to use the new parameter "cluster_client_name", which - to me - sounds like I don't have to access the ceph cluster as "client.admin" anymore. I created a "client.iscsi" user and watched what

Re: [ceph-users] CEPH ISCSI LIO multipath change delay

2019-03-20 Thread Maged Mokhtar
On 20/03/2019 07:43, li jerry wrote: Hi, all. I've deployed a mimic (13.2.5) cluster on 3 CentOS 7.6 servers, then configured an iscsi-target and created a LUN, referring to http://docs.ceph.com/docs/mimic/rbd/iscsi-target-cli/. I have another server which is CentOS 7.4, configured and mounted

[ceph-users] CEPH ISCSI LIO multipath change delay

2019-03-19 Thread li jerry
Hi, all. I've deployed a mimic (13.2.5) cluster on 3 CentOS 7.6 servers, then configured an iscsi-target and created a LUN, referring to http://docs.ceph.com/docs/mimic/rbd/iscsi-target-cli/. I have another server which is CentOS 7.4, on which I configured and mounted the LUN I've just created, referring to

Re: [ceph-users] CEPH ISCSI Gateway

2019-03-11 Thread David Turner
The problem with clients on osd nodes is for kernel clients only. That's true of krbd and the kernel client for cephfs. The only other reason not to run any other Ceph daemon on the same node as osds is resource contention if you're running at higher CPU and memory utilization. On Sat, Mar 9,

Re: [ceph-users] CEPH ISCSI Gateway

2019-03-09 Thread Mike Christie
On 03/07/2019 09:22 AM, Ashley Merrick wrote: > Been reading into the gateway, and noticed it’s been mentioned a few > times that it can be installed on OSD servers. > > I am guessing therefore there will be no issues like are sometimes mentioned > when using kRBD on an OSD node, apart from the extra

[ceph-users] CEPH ISCSI Gateway

2019-03-07 Thread Ashley Merrick
Been reading into the gateway, and noticed it’s been mentioned a few times that it can be installed on OSD servers. I am guessing therefore there will be no issues like are sometimes mentioned when using kRBD on an OSD node, apart from the extra resources required from the hardware. Thanks

Re: [ceph-users] ceph-iscsi iSCSI Login negotiation failed

2018-12-05 Thread Steven Vacaroaia
Thanks for taking the trouble to respond. I noticed some xfs errors on the /var partition, so I rebooted the server in order to force xfs_repair to run. It is now working. Steven On Wed, 5 Dec 2018 at 11:47, Mike Christie wrote: > On 12/05/2018 09:43 AM, Steven Vacaroaia wrote: > > Hi, > >

Re: [ceph-users] ceph-iscsi iSCSI Login negotiation failed

2018-12-05 Thread Mike Christie
On 12/05/2018 09:43 AM, Steven Vacaroaia wrote: > Hi, > I have a strange issue: > I configured 2 identical iSCSI gateways but one of them is complaining > about negotiation failures although gwcli reports the correct auth and status > (logged-in) > > Any help will be truly appreciated > > Here are

[ceph-users] ceph-iscsi iSCSI Login negotiation failed

2018-12-05 Thread Steven Vacaroaia
Hi, I have a strange issue. I configured 2 identical iSCSI gateways, but one of them is complaining about negotiation failures although gwcli reports the correct auth and status (logged-in). Any help will be truly appreciated. Here are some details: ceph-iscsi-config-2.6-42.gccca57d.el7.noarch

Re: [ceph-users] ceph-iscsi upgrade issue

2018-10-11 Thread Steven Vacaroaia
using the rtslib from the shaman repos did the trick. These work fine: ceph-iscsi-cli-2.7-54.g9b18a3b.el7.noarch.rpm python2-kmod-0.9-20.fc29.x86_64.rpm python2-rtslib-2.1.fb67-3.fc28.noarch.rpm tcmu-runner-1.4.0-1.el7.x86_64.rpm ceph-iscsi-config-2.6-42.gccca57d.el7.noarch.rpm

Re: [ceph-users] ceph-iscsi upgrade issue

2018-10-10 Thread Jason Dillaman
Can you add "debug = true" to your "iscsi-gateway.cfg" and restart the rbd-target-api on osd03 to see if that provides additional details of the failure? Also, if you don't mind getting your hands dirty, you could temporarily apply this patch [1] to "/usr/bin/rbd-target-api" to see if it can catch

Re: [ceph-users] ceph-iscsi upgrade issue

2018-10-10 Thread Steven Vacaroaia
Yes, I am! [root@osd01 ~]# uname -a Linux osd01.tor.medavail.net 4.18.11-1.el7.elrepo.x86_64 [root@osd03 latest]# uname -a Linux osd03.tor.medavail.net 4.18.11-1.el7.elrepo.x86_64 On Wed, 10 Oct 2018 at 16:22, Jason Dillaman wrote: > Are you running the same kernel version on both nodes? >

Re: [ceph-users] ceph-iscsi upgrade issue

2018-10-10 Thread Jason Dillaman
Are you running the same kernel version on both nodes? On Wed, Oct 10, 2018 at 4:18 PM Steven Vacaroaia wrote: > > so, it seems OSD03 is having issues when creating disks ( I can create target > and hosts ) - here is an excerpt from api.log > Please note I can create disk on the other node > >

Re: [ceph-users] ceph-iscsi upgrade issue

2018-10-10 Thread Steven Vacaroaia
so, it seems OSD03 is having issues when creating disks (I can create targets and hosts) - here is an excerpt from api.log. Please note I can create disks on the other node. 2018-10-10 16:03:03,369 DEBUG [lun.py:381:allocate()] - LUN.allocate starting, listing rbd devices 2018-10-10 16:03:03,381

Re: [ceph-users] ceph-iscsi upgrade issue

2018-10-10 Thread Mike Christie
On 10/10/2018 12:52 PM, Mike Christie wrote: > On 10/10/2018 08:21 AM, Steven Vacaroaia wrote: >> Hi Jason, >> Thanks for your prompt responses >> >> I have used the same iscsi-gateway.cfg file - no security changes - just >> added the prometheus entry >> There is no iscsi-gateway.conf but the

Re: [ceph-users] ceph-iscsi upgrade issue

2018-10-10 Thread Mike Christie
On 10/10/2018 08:21 AM, Steven Vacaroaia wrote: > Hi Jason, > Thanks for your prompt responses > > I have used the same iscsi-gateway.cfg file - no security changes - just > added the prometheus entry > There is no iscsi-gateway.conf but the gateway.conf object is created > and has the correct entries > >

Re: [ceph-users] ceph-iscsi upgrade issue

2018-10-10 Thread Steven Vacaroaia
Hi Jason, thanks for your prompt responses. I have used the same iscsi-gateway.cfg file - no security changes - just added the prometheus entry. There is no iscsi-gateway.conf, but the gateway.conf object is created and has the correct entries. iscsi-gateway.cfg is identical and contains the following [config]

Re: [ceph-users] ceph-iscsi upgrade issue

2018-10-09 Thread Jason Dillaman
Anything in the rbd-target-api.log on osd03 to indicate why it failed? Since you replaced your existing "iscsi-gateway.conf", do your security settings still match between the two hosts (i.e. on the trusted_ip_list, same api_XYZ options)? On Tue, Oct 9, 2018 at 4:25 PM Steven Vacaroaia wrote: >
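For comparison, a minimal iscsi-gateway.cfg usually looks something like this (illustrative values only, not the poster's actual file; trusted_ip_list and the api_* options must match on every gateway):

    [config]
    cluster_name = ceph
    gateway_keyring = ceph.client.admin.keyring
    api_secure = false
    # every gateway IP, identical on all nodes
    trusted_ip_list = 192.168.1.10,192.168.1.11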

Re: [ceph-users] ceph-iscsi upgrade issue

2018-10-09 Thread Steven Vacaroaia
so the gateways are up but I have issues adding disks (i.e. if I do it on one gateway it does not show on the other - however, after I restart the rbd-target services I am seeing the disks). Thanks in advance for taking the trouble to provide advice / guidance 2018-10-09 16:16:08,968 INFO

Re: [ceph-users] ceph-iscsi upgrade issue

2018-10-09 Thread Steven Vacaroaia
It worked. Many thanks, Steven On Tue, 9 Oct 2018 at 15:36, Jason Dillaman wrote: > Can you try applying [1] and see if that resolves your issue? > > [1] https://github.com/ceph/ceph-iscsi-config/pull/78 > On Tue, Oct 9, 2018 at 3:06 PM Steven Vacaroaia wrote: > > > > Thanks Jason > > > >

Re: [ceph-users] ceph-iscsi upgrade issue

2018-10-09 Thread Jason Dillaman
Can you try applying [1] and see if that resolves your issue? [1] https://github.com/ceph/ceph-iscsi-config/pull/78 On Tue, Oct 9, 2018 at 3:06 PM Steven Vacaroaia wrote: > > Thanks Jason > > adding prometheus_host = 0.0.0.0 to iscsi-gateway.cfg does not work - the > error message is > >

Re: [ceph-users] ceph-iscsi upgrade issue

2018-10-09 Thread Steven Vacaroaia
Thanks Jason. Adding prometheus_host = 0.0.0.0 to iscsi-gateway.cfg does not work - the error message is "..rbd-target-gw: ValueError: invalid literal for int() with base 10: '0.0.0.0'". Adding prometheus_exporter = false works. However, I'd like to use the prometheus exporter if possible. Any

Re: [ceph-users] ceph-iscsi upgrade issue

2018-10-09 Thread Jason Dillaman
You can try adding "prometheus_exporter = false" in your "/etc/ceph/iscsi-gateway.cfg"'s "config" section if you aren't using "cephmetrics", or try setting "prometheus_host = 0.0.0.0" since it sounds like you have the IPv6 stack disabled. [1]
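In config-file form the two workarounds look like this (a sketch; pick one - note the thread above shows the 0.0.0.0 form tripping a ValueError in that particular build):

    [config]
    # option 1: disable the exporter entirely
    prometheus_exporter = false
    # option 2: bind the exporter to the IPv4 wildcard instead of the IPv6 default
    prometheus_host = 0.0.0.0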

Re: [ceph-users] ceph-iscsi upgrade issue

2018-10-09 Thread Steven Vacaroaia
here is some info from /var/log/messages ... in case someone has the time to take a look: Oct 9 13:58:35 osd03 systemd: Started Setup system to export rbd images through LIO. Oct 9 13:58:35 osd03 systemd: Starting Setup system to export rbd images through LIO... Oct 9 13:58:35 osd03 journal:

[ceph-users] ceph-iscsi upgrade issue

2018-10-09 Thread Steven Vacaroaia
Hi, I am using Mimic 13.2 and kernel 4.18. I was using gwcli 2.5 and decided to upgrade to the latest (2.7) as people reported improved performance. What is the proper methodology? How should I troubleshoot this? What I did (and it broke it) was: cd tcmu-runner; git pull; make && make install; cd
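A rough upgrade order that avoids pulling components out from under a live target might be (a sketch, not an official procedure; back up your gateway.conf object first):

    # stop the stack top-down before touching packages
    systemctl stop rbd-target-api rbd-target-gw tcmu-runner
    # ... upgrade tcmu-runner / ceph-iscsi-config / ceph-iscsi-cli here ...
    # start bottom-up so LIO is ready before the gateways come back
    systemctl start tcmu-runner rbd-target-gw rbd-target-api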

Re: [ceph-users] Ceph ISCSI Gateways on Ubuntu

2018-09-24 Thread Jason Dillaman
I would say that we consider mimic production ready now -- it was released a few months ago, with the second point release in final testing right now. On Mon, Sep 24, 2018 at 2:49 PM Florian Florensa wrote: > > For me it's more about whether mimic will be production ready for mid October > > On Mon, 24

Re: [ceph-users] Ceph ISCSI Gateways on Ubuntu

2018-09-24 Thread Florian Florensa
For me it's more about whether mimic will be production ready for mid October. On Mon, Sep 24, 2018 at 19:11, Jason Dillaman wrote: > On Mon, Sep 24, 2018 at 12:18 PM Florian Florensa > wrote: > > > > Currently building 4.18.9 on ubuntu to try it out, also wondering if I > should plan for

Re: [ceph-users] Ceph ISCSI Gateways on Ubuntu

2018-09-24 Thread Jason Dillaman
On Mon, Sep 24, 2018 at 12:18 PM Florian Florensa wrote: > > Currently building 4.18.9 on ubuntu to try it out, also wondering if I should > plan for xenial+luminous or directly target bionic+mimic There shouldn't be any technical restrictions on the Ceph iSCSI side, so it would come down to

Re: [ceph-users] Ceph ISCSI Gateways on Ubuntu

2018-09-24 Thread Florian Florensa
Currently building 4.18.9 on ubuntu to try it out, also wondering if I should plan for xenial+luminous or directly target bionic+mimic. On Mon, Sep 24, 2018 at 18:08, Jason Dillaman wrote: > It *should* work against any recent upstream kernel (>=4.16) and > up-to-date dependencies [1]. If you

Re: [ceph-users] Ceph ISCSI Gateways on Ubuntu

2018-09-24 Thread Jason Dillaman
It *should* work against any recent upstream kernel (>=4.16) and up-to-date dependencies [1]. If you encounter any distro-specific issues (like the PR that Mike highlighted), we would love to get them fixed. [1] http://docs.ceph.com/docs/master/rbd/iscsi-target-cli-manual-install/ On Mon, Sep

Re: [ceph-users] Ceph ISCSI Gateways on Ubuntu

2018-09-24 Thread Florian Florensa
So from my understanding, as of right now it is not possible to have an iSCSI gw outside of RHEL? On Mon, Sep 24, 2018 at 17:45, Mike Christie wrote: > On 09/24/2018 05:47 AM, Florian Florensa wrote: > > Hello there, > > > > I am still in the works of preparing a deployment with iSCSI

Re: [ceph-users] Ceph ISCSI Gateways on Ubuntu

2018-09-24 Thread Mike Christie
On 09/24/2018 05:47 AM, Florian Florensa wrote: > Hello there, > > I am still in the works of preparing a deployment with iSCSI gateways > on Ubuntu, but both of the latest LTS releases of Ubuntu ship with kernel 4.15, > and I don't see support for iscsi. > What kernel are people using for this? > -

[ceph-users] Ceph ISCSI Gateways on Ubuntu

2018-09-24 Thread Florian Florensa
Hello there, I am still in the works of preparing a deployment with iSCSI gateways on Ubuntu, but both of the latest LTS releases of Ubuntu ship with kernel 4.15, and I don't see support for iscsi. What kernel are people using for this? - Mainline v4.16 of the ubuntu kernel team? - Kernel from

Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-14 Thread Lars Marowsky-Bree
On 2018-03-02T15:24:29, Joshua Chen wrote: > Dear all, > I wonder how we could support VM systems with ceph storage (block > device)? My colleagues are waiting for my answer for VMware (vSphere 5) and > I myself use oVirt (RHEV); the default protocol is iSCSI. Lean

Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-06 Thread Martin Emrich
Hi! Am 02.03.18 um 13:27 schrieb Federico Lucifredi: We do speak to the Xen team every once in a while, but while there is interest in adding Ceph support on their side, I think we are somewhat down the list of their priorities. Maybe things change with XCP-ng (https://xcp-ng.github.io).

Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-06 Thread Konstantin Shalygin
Dear all, I wonder how we could support VM systems with ceph storage (block device)? My colleagues are waiting for my answer for VMware (vSphere 5) and I myself use oVirt (RHEV); the default protocol is iSCSI. I know that openstack/cinder works well with ceph, and proxmox (just heard) too.

Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-05 Thread Robert Sander
On 05.03.2018 00:26, Adrian Saul wrote: > > We are using Ceph+RBD+NFS under pacemaker for VMware. We are doing > iSCSI using SCST but have not used it against VMware, just Solaris and > Hyper-V. > > It generally works and performs well enough - the biggest issues are the > clustering for

Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-04 Thread Adrian Saul
mailto:ceph-users@lists.ceph.com>> Sent: 28-2-2018 13:53 Subject: [ceph-users] Ceph iSCSI is a prank? I was building ceph in order to use with iSCSI. But I just see from the docs that I need: CentOS 7.5 (which is not available yet, it's still at 7.4) https://wiki.centos.org/Dow

Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-02 Thread Mike Christie
ists.ceph.com>" <ceph-users@lists.ceph.com > <mailto:ceph-users@lists.ceph.com>> > *Verzonden: * 28-2-2018 13:53 > *Onderwerp: * [ceph-users] Ceph iSCSI is a prank? > > I was building ceph in order to use with iSCSI. > But

Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-02 Thread Daniel K
Mark Schouten | Tuxis Internet Engineering >> KvK: 61527076 | http://www.tuxis.nl/ >> T: 0318 200208 | i...@tuxis.nl >> >> >> >> * From: * Massimiliano Cuttini <m...@phoenixweb.it> >> * To: * "ceph-users@lists.ceph.com" <ceph-users@lists.ceph.c

Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-02 Thread Max Cuttins
On 02/03/2018 13:27, Federico Lucifredi wrote: On Fri, Mar 2, 2018 at 4:29 AM, Max Cuttins > wrote: Hi Federico, Hi Max, On Feb 28, 2018, at 10:06 AM, Max Cuttins

Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-02 Thread Federico Lucifredi
On Fri, Mar 2, 2018 at 4:29 AM, Max Cuttins wrote: > > > Hi Federico, > > Hi Max, >> >> On Feb 28, 2018, at 10:06 AM, Max Cuttins wrote: >>> >>> This is true, but having something that just works in order to have >>> minimum compatibility and start to

Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-02 Thread Max Cuttins
Hi Federico, Hi Max, On Feb 28, 2018, at 10:06 AM, Max Cuttins wrote: This is true, but having something that just works in order to have minimum compatibility and start to dismiss old disks is something you should think about. You'll have ages in order to improve and

Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-01 Thread Joshua Chen
ceph.com" <ceph-users@lists.ceph.com> > * Verzonden: * 28-2-2018 13:53 > * Onderwerp: * [ceph-users] Ceph iSCSI is a prank? > > I was building ceph in order to use with iSCSI. > But I just see from the docs that need: > > *CentOS 7.5* > (which is not a

Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-01 Thread Milanov, Radoslav Nikiforov
oun...@lists.ceph.com> On Behalf Of Max Cuttins Sent: Thursday, March 1, 2018 7:27 AM To: David Turner <drakonst...@gmail.com>; dilla...@redhat.com Cc: ceph-users <ceph-users@lists.ceph.com> Subject: Re: [ceph-users] Ceph iSCSI is a prank? On 28/02/2018 18:16, David Turner wrote

Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-01 Thread Max Cuttins
Almost... On 01/03/2018 16:17, Heðin Ejdesgaard Møller wrote: Hello, I would like to point out that we are running ceph+redundant iscsiGW's, connecting the LUN's to an esxi+vcsa-6.5 cluster with Red Hat support. We did encounter a few bumps on the road to production, but those got

Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-01 Thread Heðin Ejdesgaard Møller
Hello, I would like to point out that we are running ceph+redundant iscsiGW's, connecting the LUN's to an esxi+vcsa-6.5 cluster with Red Hat support. We did encounter a few bumps on the road to production, but those got fixed by Red Hat engineering and are included in the rhel7.5 and 4.17

Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-01 Thread Donny Davis
I wonder when EMC/Netapp are going to start giving away production-ready bits that fit into your architecture. At least support for this feature is coming in the near term. I say keep on keepin' on. Kudos to the ceph team (and maybe more teams) for taking care of the hard stuff for us. On

Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-01 Thread Samuel Soulard
Hi Jason, That's awesome. Keep up the good work guys, we all love the work you are doing with that software!! Sam On Mar 1, 2018 09:11, "Jason Dillaman" wrote: > It's very high on our priority list to get a solution merged in the > upstream kernel. There was a proposal

Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-01 Thread Ric Wheeler
On 02/28/2018 10:06 AM, Max Cuttins wrote: On 28/02/2018 15:19, Jason Dillaman wrote: On Wed, Feb 28, 2018 at 7:53 AM, Massimiliano Cuttini wrote: I was building ceph in order to use with iSCSI. But I just see from the docs that I need: CentOS 7.5 (which is not

Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-01 Thread David Disseldorp
On Thu, 1 Mar 2018 09:11:21 -0500, Jason Dillaman wrote: > It's very high on our priority list to get a solution merged in the > upstream kernel. There was a proposal to use DLM to distribute the PGR > state between target gateways (a la the SCST target) and it's quite > possible that would have

Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-01 Thread Federico Lucifredi
Hi Max, > On Feb 28, 2018, at 10:06 AM, Max Cuttins wrote: > > This is true, but having something that just works in order to have minimum > compatibility and start to dismiss old disks is something you should think > about. > You'll have ages in order to improve and get

Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-01 Thread Jason Dillaman
It's very high on our priority list to get a solution merged in the upstream kernel. There was a proposal to use DLM to distribute the PGR state between target gateways (a la the SCST target) and it's quite possible that would have the least amount of upstream resistance since it would work for

Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-01 Thread Samuel Soulard
On another note, is there any work being done for persistent group reservations support for Ceph/LIO compatibility? Or just a rough estimate :) Would love to see Redhat/Ceph support this type of setup. I know Suse supports it as of late. Sam On Mar 1, 2018 07:33, "Kai Wagner"

Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-01 Thread Kai Wagner
I totally understand and see your frustration here, but you've got to keep in mind that this is an Open Source project with lots of volunteers. If you have a really urgent need, you have the possibility to develop such a feature on your own, or you have to pay someone who could do the work for you.

Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-01 Thread Max Cuttins
On 28/02/2018 18:16, David Turner wrote: My thought is that in 4 years you could have migrated to a hypervisor that will have better performance into ceph than an added iSCSI layer. I won't deploy VMs for ceph on anything that won't allow librbd to work. Anything else is added complexity

Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-01 Thread Max Cuttins
Mark Schouten | Tuxis Internet Engineering KvK: 61527076 | http://www.tuxis.nl/ T: 0318 200208 | i...@tuxis.nl *From: * Massimiliano Cuttini <m...@phoenixweb.it> *To: * "ceph-users@lists.ceph.com" <ceph-users@lists.ceph.com> *Sent: * 28-2-2018 13:53 *Subject: * [ceph-users]

Re: [ceph-users] Ceph iSCSI is a prank?

2018-02-28 Thread Mark Schouten
Cuttini <m...@phoenixweb.it> To: "ceph-users@lists.ceph.com" <ceph-users@lists.ceph.com> Sent: 28-2-2018 13:53 Subject: [ceph-users] Ceph iSCSI is a prank? I was building ceph in order to use with iSCSI. But I just see from the docs that I need:

Re: [ceph-users] Ceph iSCSI is a prank?

2018-02-28 Thread David Turner
know the Indians have a nice > >> saying: > >> > >> "Everything will be good at the end. If it is not good, it is still not > >> the end." > >> > >> > >> -Original Message- > >> From: Massimiliano Cuttin

Re: [ceph-users] Ceph iSCSI is a prank?

2018-02-28 Thread Jason Dillaman
>> >> >> -Original Message- >> From: Massimiliano Cuttini [mailto:m...@phoenixweb.it] >> Sent: Wednesday, 28 February 2018 13:53 >> To: ceph-users@lists.ceph.com >> Subject: [ceph-users] Ceph iSCSI is a prank? >> >> I was building ceph

Re: [ceph-users] Ceph iSCSI is a prank?

2018-02-28 Thread Nico Schottelius
Max, I understand your frustration. However, last time I checked, ceph was open source. Some of you might not remember, but one major reason why open source is great is that YOU CAN DO your own modifications. If you need a change like iSCSI support and it isn't there, it is probably best, if

Re: [ceph-users] Ceph iSCSI is a prank?

2018-02-28 Thread Jason Dillaman
On Wed, Feb 28, 2018 at 10:06 AM, Max Cuttins wrote: > > > On 28/02/2018 15:19, Jason Dillaman wrote: >> >> On Wed, Feb 28, 2018 at 7:53 AM, Massimiliano Cuttini >> wrote: >>> >>> I was building ceph in order to use with iSCSI. >>> But I just see from

Re: [ceph-users] Ceph iSCSI is a prank?

2018-02-28 Thread Erik McCormick
On Feb 28, 2018 10:06 AM, "Max Cuttins" wrote: On 28/02/2018 15:19, Jason Dillaman wrote: > On Wed, Feb 28, 2018 at 7:53 AM, Massimiliano Cuttini > wrote: > >> I was building ceph in order to use with iSCSI. >> But I just see from the docs that

Re: [ceph-users] Ceph iSCSI is a prank?

2018-02-28 Thread Max Cuttins
On 28/02/2018 15:19, Jason Dillaman wrote: On Wed, Feb 28, 2018 at 7:53 AM, Massimiliano Cuttini wrote: I was building ceph in order to use with iSCSI. But I just see from the docs that I need: CentOS 7.5 (which is not available yet, it's still at 7.4)

Re: [ceph-users] Ceph iSCSI is a prank?

2018-02-28 Thread Jason Dillaman
On Wed, Feb 28, 2018 at 7:53 AM, Massimiliano Cuttini wrote: > I was building ceph in order to use with iSCSI. > But I just see from the docs that I need: > > CentOS 7.5 > (which is not available yet, it's still at 7.4) > https://wiki.centos.org/Download > > Kernel 4.17 >

Re: [ceph-users] Ceph iSCSI is a prank?

2018-02-28 Thread Max Cuttins
February 2018 13:53 To: ceph-users@lists.ceph.com Subject: [ceph-users] Ceph iSCSI is a prank? I was building ceph in order to use with iSCSI. But I just see from the docs that I need: CentOS 7.5 (which is not available yet, it's still at 7.4) https://wiki.centos.org/Download

Re: [ceph-users] Ceph iSCSI is a prank?

2018-02-28 Thread Marc Roos
13:53 To: ceph-users@lists.ceph.com Subject: [ceph-users] Ceph iSCSI is a prank? I was building ceph in order to use with iSCSI. But I just see from the docs that I need: CentOS 7.5 (which is not available yet, it's still at 7.4) https://wiki.centos.org/Download K

[ceph-users] Ceph iSCSI is a prank?

2018-02-28 Thread Massimiliano Cuttini
I was building ceph in order to use it with iSCSI. But I just see from the docs that I need: *CentOS 7.5* (which is not available yet, it's still at 7.4) https://wiki.centos.org/Download *Kernel 4.17* (which is not available yet, it is still at 4.15.7) https://www.kernel.org/ So I

Re: [ceph-users] ceph iscsi kernel 4.15 - "failed with 500"

2018-02-14 Thread Mike Christie
On 02/13/2018 01:09 PM, Steven Vacaroaia wrote: > Hi, > > I noticed a new ceph kernel (4.15.0-ceph-g1c778f43da52) was made available > so I have upgraded my test environment > ... > > It will be appreciated if someone can provide instructions / steps for > upgrading the kernel without

Re: [ceph-users] ceph iscsi kernel 4.15 - "failed with 500"

2018-02-14 Thread Steven Vacaroaia
works now - I believe the issue was a missing /etc/target directory on one server. Just in case anyone else is interested, here is what I had to do: 1. make sure there is an /etc/target folder on all your iSCSI gateway servers 2. install the latest version of python-pyudev (for whatever reason I had
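In command form, the fix described above boils down to roughly (a sketch of the poster's steps, assuming systemd-managed gateways):

    # 1. rtslib expects this directory on every iSCSI gateway
    mkdir -p /etc/target
    # 2. pull in a recent python-pyudev (>= 0.21 per this thread)
    pip install --upgrade pyudev
    # 3. restart the gateway services so they pick everything up
    systemctl restart rbd-target-gw rbd-target-api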

Re: [ceph-users] ceph iscsi kernel 4.15 - "failed with 500"

2018-02-14 Thread Jason Dillaman
Have you updated to ceph-iscsi-config-2.4-1 and ceph-iscsi-cli-2.6-1? Any error messages in /var/log/rbd-target-api.log? On Wed, Feb 14, 2018 at 8:49 AM, Steven Vacaroaia wrote: > Thank you for the prompt response > > I was unable to install rtslib even AFTER I installed latest

Re: [ceph-users] ceph iscsi kernel 4.15 - "failed with 500"

2018-02-14 Thread Steven Vacaroaia
Thank you for the prompt response. I was unable to install rtslib even AFTER I installed the latest version of python-pyudev (0.21): git clone git://github.com/pyudev/pyudev.git pyudev]# pip install --upgrade . Processing /root/pyudev Collecting six (from pyudev==0.21.0dev-20180214) Downloading

Re: [ceph-users] ceph iscsi kernel 4.15 - "failed with 500"

2018-02-13 Thread Jason Dillaman
It looks that that package was configured to auto-delete on shaman. I've submitted a fix so it shouldn't happen again in the future, but in the meantime I pushed and built python-rtslib-2.1.fb67-1 [1]. [1] https://shaman.ceph.com/repos/python-rtslib/ On Tue, Feb 13, 2018 at 2:09 PM, Steven

[ceph-users] ceph iscsi kernel 4.15 - "failed with 500"

2018-02-13 Thread Steven Vacaroaia
Hi, I noticed a new ceph kernel (4.15.0-ceph-g1c778f43da52) was made available, so I have upgraded my test environment. Now the iSCSI gateway has stopped working - ERROR [rbd-target-api:1430:call_api()] - _disk change on osd02 failed with 500. So I was thinking that I have to update all the packages I

Re: [ceph-users] Ceph iSCSI login failed due to authorization failure

2017-10-19 Thread Jason Dillaman
es of this > email and any attachment(s). > > -- > *From: *"Maged Mokhtar" <mmokh...@petasan.org> > *To: *"Kashif Mumtaz" <kashif.mum...@yahoo.com> > *Cc: *"Ceph Users" <ceph-users@lists.ceph.com> > *

Re: [ceph-users] Ceph iSCSI login failed due to authorization failure

2017-10-19 Thread Tyler Bishop
<ceph-users@lists.ceph.com> Sent: Saturday, October 14, 2017 1:40:05 PM Subject: Re: [ceph-users] Ceph iSCSI login failed due to authorization failure On 2017-10-14 17:50, Kashif Mumtaz wrote: Hello Dear, I am trying to configure the Ceph iscsi gateway on Ceph Luminous. As per bel

Re: [ceph-users] Ceph-ISCSI

2017-10-17 Thread Maged Mokhtar
The issue with active/active is the following condition:
- client initiator sends write operation to gateway server A
- server A does not respond within client timeout
- client initiator re-sends failed write operation to gateway server B
- client initiator sends another write operation to gateway server

Re: [ceph-users] Ceph-ISCSI

2017-10-17 Thread Jorge Pinilla López
So, from what I have understood, the final sum-up was to support MC to be able to do Multipath Active/Active. How is that project going? Windows will be able to support it because they have already implemented it client-side, but unless ESXi implements it, VMware will only be able to do Active/Passive, am I

Re: [ceph-users] Ceph-ISCSI

2017-10-17 Thread Frédéric Nass
Hi folks, For those who missed it, the fun was here :-) : https://youtu.be/IgpVOOVNJc0?t=3715 Frederic. - On 11 Oct 17, at 17:05, Jake Young wrote: > On Wed, Oct 11, 2017 at 8:57 AM Jason Dillaman <jdill...@redhat.com> wrote:

Re: [ceph-users] Ceph iSCSI login failed due to authorization failure

2017-10-14 Thread Maged Mokhtar
On 2017-10-14 17:50, Kashif Mumtaz wrote: > Hello Dear, > > I am trying to configure the Ceph iscsi gateway on Ceph Luminous, as per > the Ceph iSCSI Gateway page in the Ceph Documentation [1]. > > The Ceph iscsi gateways are configured

Re: [ceph-users] Ceph iSCSI login failed due to authorization failure

2017-10-14 Thread Jason Dillaman
Have you set the CHAP username and password on both sides (and ensured that the initiator IQN matches)? On the initiator side, you would run the following before attempting to log into the portal: iscsiadm --mode node --targetname --op=update --name node.session.auth.authmethod --value=CHAP
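Spelled out with placeholder values, that sequence is roughly (the IQN and credentials below are illustrative, not the poster's):

    iscsiadm --mode node --targetname iqn.2003-01.com.redhat.iscsi-gw:ceph-igw \
      --op=update --name node.session.auth.authmethod --value=CHAP
    iscsiadm --mode node --targetname iqn.2003-01.com.redhat.iscsi-gw:ceph-igw \
      --op=update --name node.session.auth.username --value=myusername
    iscsiadm --mode node --targetname iqn.2003-01.com.redhat.iscsi-gw:ceph-igw \
      --op=update --name node.session.auth.password --value=mypassword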

[ceph-users] Ceph iSCSI login failed due to authorization failure

2017-10-14 Thread Kashif Mumtaz
Hello Dear, I am trying to configure the Ceph iscsi gateway on Ceph Luminous, as per the "Ceph iSCSI Gateway" page in the Ceph Documentation. The Ceph iscsi gateways are configured and chap auth is set. /> ls o- /

Re: [ceph-users] Ceph-ISCSI

2017-10-12 Thread Jason Dillaman
On Thu, Oct 12, 2017 at 5:02 AM, Maged Mokhtar wrote: > On 2017-10-11 14:57, Jason Dillaman wrote: > > On Wed, Oct 11, 2017 at 6:38 AM, Jorge Pinilla López > wrote: > >> As far as I am able to understand there are 2 ways of setting iscsi for >> ceph >>

Re: [ceph-users] Ceph-ISCSI

2017-10-12 Thread Maged Mokhtar
On 2017-10-12 11:32, David Disseldorp wrote: > On Wed, 11 Oct 2017 14:03:59 -0400, Jason Dillaman wrote: > > On Wed, Oct 11, 2017 at 1:10 PM, Samuel Soulard > wrote: Hmmm, If you failover the identity of the > LIO configuration including PGRs > (I believe they are

Re: [ceph-users] Ceph-ISCSI

2017-10-12 Thread David Disseldorp
On Wed, 11 Oct 2017 14:03:59 -0400, Jason Dillaman wrote: > On Wed, Oct 11, 2017 at 1:10 PM, Samuel Soulard > wrote: > > Hmmm, If you failover the identity of the LIO configuration including PGRs > > (I believe they are files on disk), this would work no? Using an 2
