Re: [ceph-users] Installing iSCSI support

2018-06-14 Thread Max Cuttins

Ok,

I take your points.
I'll do my best.


On 13/06/2018 10:14, Lenz Grimmer wrote:

On 06/12/2018 07:14 PM, Max Cuttins wrote:


it's an honor for me to contribute to the main repo of Ceph.

We appreciate your support! Please take a look at
http://docs.ceph.com/docs/master/start/documenting-ceph/ for guidance on
how to contribute to the documentation.
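Roughly, the workflow looks like this (the branch name, file paths and commit
message below are just placeholders; the page above has the details):

   git clone https://github.com/<your-fork>/ceph.git
   cd ceph
   git checkout -b doc-iscsi-fixes          # any topic branch name will do
   # edit the relevant .rst files under doc/, e.g. doc/rbd/iscsi-*.rst
   ./admin/build-doc                        # build the docs locally to review the result
   git commit -s -a -m "doc: fix iSCSI gateway install instructions"
   # push the branch to your fork and open a pull request against ceph/ceph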

Just a thought: is it wise to keep the DOCS within the software repository?
Wouldn't it be better to move the docs to a less sensitive repo?

Why do you think so? Every modification is peer reviewed before
inclusion into the source tree. If your documentation fix would
accidentally modify other parts, this would easily be spotted during the
pull request review. Having the docs "near" the actual code also keeps
them current, as new features or changes in behavior can include the
corresponding documentation updates. This also makes it easier to manage
multiple branches/versions of the code, as there is no disconnect.

Lenz



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




Re: [ceph-users] Add a new iSCSI gateway would not update client multipath

2018-06-14 Thread Max Cuttins

I did it of course! :)

However, I found the real issue.

While I was playing with multipath I disabled some features on the RBD
image; one of them was EXCLUSIVE LOCK, because I thought it was
related to the multipath issue.
Instead, this broke the RBD iSCSI target on the gateway side (not
immediately: the breakage only shows up for a new gateway or a rebooted server).
So I only figured it out when I rebooted another node and its gateway would not
come up again.


Disabling *Exclusive Lock* will make the gateway crash, and "gwcli" will
show you a 500 error status code.

Re-enabling the feature and rebooting the gateways one by one solved the issue.
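For reference, re-enabling it is a one-liner (the pool and image names below
are just placeholders):

   rbd feature enable rbd/disk01 exclusive-lock
   # double-check the feature list afterwards
   rbd info rbd/disk01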

Maybe it would be better to lock the features on RBD images managed by apps,
so nobody could make destructive changes to the features like I did.

Thanks! :)


On 13/06/2018 16:15, Jason Dillaman wrote:

I've never used XenServer, but I'd imagine you would need to do
something similar to what is documented here [1].

[1] http://docs.ceph.com/docs/master/rbd/iscsi-initiator-linux/

On Wed, Jun 13, 2018 at 5:11 AM, Max Cuttins  wrote:

I just realized there is an error:

  multipath -r
Jun 13 11:02:27 | rbd0: HDIO_GETGEO failed with 25
reload: mpatha (360014051b4fb8c6384545b7ae7d5142e) undef LIO-ORG ,TCMU
device
size=100G features='1 queue_if_no_path' hwhandler='1 alua' wp=undef
|-+- policy='queue-length 0' prio=130 status=undef
| `- 5:0:0:0 sdb  8:16  active ready running
`-+- policy='queue-length 0' prio=1 status=undef
   `- 4:0:0:0 sda  8:0   active ghost running

I have added these lines to the blacklist in /etc/multipath.conf:

blacklist {
 devnode "^(rbd)[0-9]*"
}

This solved the error, but the gateway list is still not updated.




On 13/06/2018 10:59, Max Cuttins wrote:

Hi everybody,

maybe I'm missing something, but multipath is not picking up the new iSCSI gateway.

I installed 2 gateways and tested them on a client.
Everything worked fine.

After that I decided to complete the installation and create a 3rd gateway.
But none of the iSCSI initiator clients updated their number of gateways.
One client broke and now cannot write to the disk anymore (it can read the
size of the volume but cannot read or write data).

I have run:

systemctl restart multipathd

and also:

multipath -r

but this always displays only 2 paths:

multipath -ll
mpatha (360014051b4fb8c6384545b7ae7d5142e) dm-2 LIO-ORG ,TCMU device
size=100G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='queue-length 0' prio=130 status=active
| `- 5:0:0:0 sdb  8:16  active ready running
`-+- policy='queue-length 0' prio=1 status=enabled
   `- 4:0:0:0 sda  8:0   active ghost running


What should I have done?










___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Add a new iSCSI gateway would not update client multipath

2018-06-13 Thread Max Cuttins

I just realized there is an error:

 multipath -r
   *Jun 13 11:02:27 | rbd0: HDIO_GETGEO failed with 25*
   reload: mpatha (360014051b4fb8c6384545b7ae7d5142e) undef LIO-ORG
   ,TCMU device
   size=100G features='1 queue_if_no_path' hwhandler='1 alua' wp=undef
   |-+- policy='queue-length 0' prio=130 status=undef
   | `- 5:0:0:0 sdb  8:16  active ready running
   `-+- policy='queue-length 0' prio=1 status=undef
  `- 4:0:0:0 sda  8:0   active ghost running

I have added these lines to the blacklist in /etc/multipath.conf:

   blacklist {
    devnode "^(rbd)[0-9]*"
   }

This solved the error, but the gateway list is still not updated.
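For what it's worth, I would also expect the initiator to need something like
the following before the new portal shows up (the IP and target IQN below are
placeholders, and I'm not sure how XenServer wraps these commands):

   # re-discover the portals, including the new gateway
   iscsiadm -m discovery -t sendtargets -p 192.168.0.103
   # log in to the newly discovered portal
   iscsiadm -m node -T <target-iqn> -p 192.168.0.103 --login
   # or simply rescan the existing sessions
   iscsiadm -m session --rescan
   multipath -r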



On 13/06/2018 10:59, Max Cuttins wrote:


Hi everybody,

maybe I'm missing something, but multipath is not picking up the new iSCSI gateway.

I installed 2 gateways and tested them on a client.
Everything worked fine.

After that I decided to complete the installation and create a 3rd gateway.
But none of the iSCSI initiator clients updated their number of gateways.
One client broke and now cannot write to the disk anymore (it can
read the size of the volume but cannot read or write data).


I have run:

systemctl restart multipathd

and also:

multipath -r

but this always displays only 2 paths:

multipath -ll
mpatha (360014051b4fb8c6384545b7ae7d5142e) dm-2 LIO-ORG ,TCMU device
size=100G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='queue-length 0' prio=130 status=active
| `- 5:0:0:0 sdb  8:16  active ready running
`-+- policy='queue-length 0' prio=1 status=enabled
  `- 4:0:0:0 sda  8:0   active ghost running


What should I have done?




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Add a new iSCSI gateway would not update client multipath

2018-06-13 Thread Max Cuttins

Hi everybody,

maybe I'm missing something, but multipath is not picking up the new iSCSI gateway.

I installed 2 gateways and tested them on a client.
Everything worked fine.

After that I decided to complete the installation and create a 3rd gateway.
But none of the iSCSI initiator clients updated their number of gateways.
One client broke and now cannot write to the disk anymore (it can read
the size of the volume but cannot read or write data).


I have run:

   systemctl restart multipathd

and also:

   multipath -r

but this always displays only 2 paths:

   multipath -ll
   mpatha (360014051b4fb8c6384545b7ae7d5142e) dm-2 LIO-ORG ,TCMU device
   size=100G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
   |-+- policy='queue-length 0' prio=130 status=active
   | `- 5:0:0:0 sdb  8:16  active ready running
   `-+- policy='queue-length 0' prio=1 status=enabled
  `- 4:0:0:0 sda  8:0   active ghost running


What should I have done?


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] iSCSI rookies questions

2018-06-13 Thread Max Cuttins
utilize rbd cache (there is nothing comparable in XENServer 7.2
 Community)
   o use the capabilities of ceph to create snapshots, clone systems

What do you think about that?

Regards
Marc


On 12.06.2018 at 21:03, Max Cuttins wrote:

Hi everybody,

I have a running iSCSI-Ceph environment connected to XenServer 7.2.
I have some doubts and rookie questions about iSCSI.

1) Xen refused to connect to the iSCSI gateway until I turned on
multipath on Xen.
That's fine with me. But is it right to say that multipath is more than
just a feature, and is in fact the mandatory way to connect?
Is this normal? I thought iSCSI multipath was backward-compatible with
single-path access.

2) The connection completed correctly with multipath.
I see on the XEN dashboard:

 *2 of 2 paths active* (2 iSCSI sessions)

I read around that, for now, the iSCSI gateway only supports
active/passive multipath.
Is this already working? :)

3) I see "optimized/not optimized" on my Ceph dashboard.
What does this stand for?

4) Performance.
I ran a simple test (nothing statistically proven), and I see these
values:

 dd if=/dev/zero of=/iscsi-test/testfile bs=1G count=1 oflag=direct
 1073741824 bytes (1.1 GB) copied, 6.72009 s, *160 MB/s*

 dd if=/dev/zero of=/ceph-test/testfile bs=1G count=1 oflag=direct
 1073741824 bytes (1.1 GB) copied, 1.57821 s, *680 MB/s*

Of course I expected a drop (due to the overhead of iSCSI)... but this is
4x slower than the direct client, which seems a little high to me.
However... is this *more-or-less* what I should consider the expected
drop with iSCSI, or will this gap be narrowed in the future?









___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] iSCSI rookies questions

2018-06-12 Thread Max Cuttins

Hi everybody,

I have a running iSCSI-Ceph environment connected to XenServer 7.2.
I have some doubts and rookie questions about iSCSI.

1) Xen refused to connect to the iSCSI gateway until I turned on
multipath on Xen.
That's fine with me. But is it right to say that multipath is more than
just a feature, and is in fact the mandatory way to connect?
Is this normal? I thought iSCSI multipath was backward-compatible with
single-path access.


2) The connection completed correctly with multipath.
I see on the XEN dashboard:

   *2 of 2 paths active* (2 iSCSI sessions)

I read around that, for now, the iSCSI gateway only supports
active/passive multipath.

Is this already working? :)

3) I see "optimized/not optimized" on my Ceph dashboard.
What does this stand for?

4) Performance.
I ran a simple test (nothing statistically proven), and I see these
values:


   dd if=/dev/zero of=/iscsi-test/testfile bs=1G count=1 oflag=direct
   1073741824 bytes (1.1 GB) copied, 6.72009 s, *160 MB/s*

   dd if=/dev/zero of=/ceph-test/testfile bs=1G count=1 oflag=direct
   1073741824 bytes (1.1 GB) copied, 1.57821 s, *680 MB/s*

Of course I expected a drop (due to the overhead of iSCSI)... but this is
4x slower than the direct client, which seems a little high to me.
However... is this *more-or-less* what I should consider the expected
drop with iSCSI, or will this gap be narrowed in the future?
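(For a fairer comparison I should probably repeat this with fio instead of a
single dd run; something like the following, with the file path and sizes
just as an example:

   fio --name=iscsi-seq-write --filename=/iscsi-test/testfile \
       --rw=write --bs=1M --size=1G --direct=1 --ioengine=libaio --numjobs=1

but the dd numbers above already give the rough picture.)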





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Installing iSCSI support

2018-06-12 Thread Max Cuttins

Thanks Jason,

it's an honor for me to contribute to the main repo of Ceph.

Just a thought: is it wise to keep the DOCS within the software repository?
Wouldn't it be better to move the docs to a less sensitive repo?




On 12/06/2018 17:02, Jason Dillaman wrote:

On Tue, Jun 12, 2018 at 5:08 AM, Max Cuttins  wrote:

I have completed the installation of iSCSI.
The documentation is wrong in several parts.

Is there any way to contribute and update it with the right commands?
I found only

https://github.com/ceph/ceph/tree/master/doc

which is inside the main Ceph project.
Should I use this to fix the docs?

Yes, that's the correct location -- the docs are auto-generated from
the reStructuredText. Contributions welcome.




On 11/06/2018 16:07, Max Cuttins wrote:

Thanks!

I just saw it.
I found all the packages with a deep search on the web.

wget
http://ftp.redhat.com/pub/redhat/linux/enterprise/7ComputeNode/en/RHCEPH/SRPMS/python-rtslib-2.1.fb64-0.1.20170301.git3637171.el7cp.src.rpm
wget
http://ftp.redhat.com/pub/redhat/linux/enterprise/7ComputeNode/en/RHCEPH/SRPMS/targetcli-2.1.fb47-0.1.20170301.gitf632f38.el7cp.src.rpm
wget
https://cbs.centos.org/kojifiles/packages/tcmu-runner/1.3.0/0.2rc4.el7/src/tcmu-runner-1.3.0-0.2rc4.el7.src.rpm
wget
https://4.chacra.ceph.com/r/ceph-iscsi-config/master/9fcf45abcdad3c0f1f01ae1f932e23f25bcb6038/centos/7/flavors/default/noarch/ceph-iscsi-config-2.6-8.g9fcf45a.el7.noarch.rpm
wget
https://4.chacra.ceph.com/r/ceph-iscsi-cli/master/5833b11b5956e0967ff3f4156039eb0dc0bebb4d/centos/7/flavors/default/noarch/ceph-iscsi-cli-2.7-9.g5833b11.el7.noarch.rpm

However, in the end I decided to install everything from Git and
compile.
It's too hard to find the latest packages, so I guess it will be even harder
to find updates in the future.
So it's better to stay with the latest release directly available on Git.

That's all.
Maybe one day this will be installable via the package manager.
But right now...

Moreover, installation from Git is quite easy.
Maybe it's just a matter of saying so in the documentation instead of wasting
people's time.




On 11/06/2018 14:02, Brad Hubbard wrote:

I'm afraid the answer currently is http://tracker.ceph.com/issues/22143

On Mon, Jun 11, 2018 at 8:08 PM, Max Cuttins  wrote:

Really? :)
So nobody in this huge mailing list has ever installed iSCSI and hit these
errors before me.
Wow, sounds like I'm a pioneer here.

The installation guide goes wrong at the very, very beginning.
Even though it expressly says to install on a CentOS 7.5 environment, it's
impossible to install packages that are not present in any repo.

Do you think I should use the alternative manual install guide for other
environments instead?
That documentation is quite easy, but it forces you to install source by source,
downloading code from GitHub instead of using packages.
I think it's a pity to do this if those packages really are available
somewhere.

http://docs.ceph.com/docs/mimic/rbd/iscsi-target-cli-manual-install/


I'm confused.

Any help will be appreciated.
Thanks



On 10/06/2018 17:07, Max Cuttins wrote:

Hi everybody,

I'm following the documentation step by step.
However, trying to install tcmu-runner and the other dependencies gives me an error:

yum install targetcli python-rtslib tcmu-runner ceph-iscsi-config
ceph-iscsi-cli
Package targetcli-2.1.fb46-4.el7_5.noarch already installed and latest
version
Package python-rtslib-2.1.fb63-11.el7_5.noarch already installed and latest
version
No package tcmu-runner available.
No package ceph-iscsi-config available.
No package ceph-iscsi-cli available.
Nothing to do


Do I need to add a particular repository?

Thanks,
Max











___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Installing iSCSI support

2018-06-12 Thread Max Cuttins

I have completed the installation of iSCSI.
The documentation is wrong in several parts.

Is there any way to contribute and update it with the right commands?
I found only

   https://github.com/ceph/ceph/tree/master/doc

which is inside the main Ceph project.
Should I use this to fix the docs?




On 11/06/2018 16:07, Max Cuttins wrote:


Thanks!

I just saw it.
I found all the packages with a deep search on the web.

wget

http://ftp.redhat.com/pub/redhat/linux/enterprise/7ComputeNode/en/RHCEPH/SRPMS/python-rtslib-2.1.fb64-0.1.20170301.git3637171.el7cp.src.rpm
wget

http://ftp.redhat.com/pub/redhat/linux/enterprise/7ComputeNode/en/RHCEPH/SRPMS/targetcli-2.1.fb47-0.1.20170301.gitf632f38.el7cp.src.rpm
wget

https://cbs.centos.org/kojifiles/packages/tcmu-runner/1.3.0/0.2rc4.el7/src/tcmu-runner-1.3.0-0.2rc4.el7.src.rpm
wget

https://4.chacra.ceph.com/r/ceph-iscsi-config/master/9fcf45abcdad3c0f1f01ae1f932e23f25bcb6038/centos/7/flavors/default/noarch/ceph-iscsi-config-2.6-8.g9fcf45a.el7.noarch.rpm
wget

https://4.chacra.ceph.com/r/ceph-iscsi-cli/master/5833b11b5956e0967ff3f4156039eb0dc0bebb4d/centos/7/flavors/default/noarch/ceph-iscsi-cli-2.7-9.g5833b11.el7.noarch.rpm

However, in the end I decided to install everything from Git
and compile.
It's too hard to find the latest packages, so I guess it will be even harder
to find updates in the future.

So it's better to stay with the latest release directly available on Git.

That's all.
Maybe one day this will be installable via the package manager.
But right now...

Moreover, installation from Git is quite easy.
Maybe it's just a matter of saying so in the documentation instead of
wasting people's time.





On 11/06/2018 14:02, Brad Hubbard wrote:

I'm afraid the answer currently is http://tracker.ceph.com/issues/22143

On Mon, Jun 11, 2018 at 8:08 PM, Max Cuttins  wrote:

Really? :)
So nobody in this huge mailing list has ever installed iSCSI and hit these
errors before me.
Wow, sounds like I'm a pioneer here.

The installation guide goes wrong at the very, very beginning.
Even though it expressly says to install on a CentOS 7.5 environment, it's
impossible to install packages that are not present in any repo.

Do you think I should use the alternative manual install guide for other
environments instead?
That documentation is quite easy, but it forces you to install source by source,
downloading code from GitHub instead of using packages.
I think it's a pity to do this if those packages really are available
somewhere.

http://docs.ceph.com/docs/mimic/rbd/iscsi-target-cli-manual-install/


I'm confused.

Any help will be appreciated.
Thanks



On 10/06/2018 17:07, Max Cuttins wrote:

Hi everybody,

I'm following the documentation step by step.
However, trying to install tcmu-runner and the other dependencies gives me an error:

yum install targetcli python-rtslib tcmu-runner ceph-iscsi-config
ceph-iscsi-cli
Package targetcli-2.1.fb46-4.el7_5.noarch already installed and latest
version
Package python-rtslib-2.1.fb63-11.el7_5.noarch already installed and latest
version
No package tcmu-runner available.
No package ceph-iscsi-config available.
No package ceph-iscsi-cli available.
Nothing to do


Do I need to add a particular repository?

Thanks,
Max







___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] GWCLI - very good job!

2018-06-11 Thread Max Cuttins

This is an amazing tool.

I just configured iSCSI multipath in about 5 minutes.

Kudos to all who develop this tool.
It's just simple, clear and colorful.



Thanks,
Max



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Installing iSCSI support

2018-06-11 Thread Max Cuttins

Thanks!

I just saw it.
I found all the packages with a deep search on the web.

   wget
   
http://ftp.redhat.com/pub/redhat/linux/enterprise/7ComputeNode/en/RHCEPH/SRPMS/python-rtslib-2.1.fb64-0.1.20170301.git3637171.el7cp.src.rpm
   wget
   
http://ftp.redhat.com/pub/redhat/linux/enterprise/7ComputeNode/en/RHCEPH/SRPMS/targetcli-2.1.fb47-0.1.20170301.gitf632f38.el7cp.src.rpm
   wget
   
https://cbs.centos.org/kojifiles/packages/tcmu-runner/1.3.0/0.2rc4.el7/src/tcmu-runner-1.3.0-0.2rc4.el7.src.rpm
   wget
   
https://4.chacra.ceph.com/r/ceph-iscsi-config/master/9fcf45abcdad3c0f1f01ae1f932e23f25bcb6038/centos/7/flavors/default/noarch/ceph-iscsi-config-2.6-8.g9fcf45a.el7.noarch.rpm
   wget
   
https://4.chacra.ceph.com/r/ceph-iscsi-cli/master/5833b11b5956e0967ff3f4156039eb0dc0bebb4d/centos/7/flavors/default/noarch/ceph-iscsi-cli-2.7-9.g5833b11.el7.noarch.rpm

However, in the end I decided to install everything from Git and
compile.
It's too hard to find the latest packages, so I guess it will be even harder
to find updates in the future.

So it's better to stay with the latest release directly available on Git.

That's all.
Maybe one day this will be installable via the package manager.
But right now...

Moreover, installation from Git is quite easy.
Maybe it's just a matter of saying so in the documentation instead of
wasting people's time.
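For anyone going down the same road, the rough sequence I used looked like the
following (the cmake options and install steps are from memory, so double-check
them against each project's README before relying on them):

   git clone https://github.com/open-iscsi/tcmu-runner.git
   cd tcmu-runner
   cmake -Dwith-glfs=false -Dwith-qcow=false .
   make && make install
   cd ..

   git clone https://github.com/ceph/ceph-iscsi-config.git
   cd ceph-iscsi-config
   python setup.py install
   cd ..

   git clone https://github.com/ceph/ceph-iscsi-cli.git
   cd ceph-iscsi-cli
   python setup.py install
   cd ..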





On 11/06/2018 14:02, Brad Hubbard wrote:

I'm afraid the answer currently is http://tracker.ceph.com/issues/22143

On Mon, Jun 11, 2018 at 8:08 PM, Max Cuttins  wrote:

Really? :)
So nobody in this huge mailing list has ever installed iSCSI and hit these
errors before me.
Wow, sounds like I'm a pioneer here.

The installation guide goes wrong at the very, very beginning.
Even though it expressly says to install on a CentOS 7.5 environment, it's
impossible to install packages that are not present in any repo.

Do you think I should use the alternative manual install guide for other
environments instead?
That documentation is quite easy, but it forces you to install source by source,
downloading code from GitHub instead of using packages.
I think it's a pity to do this if those packages really are available
somewhere.

http://docs.ceph.com/docs/mimic/rbd/iscsi-target-cli-manual-install/


I'm confused.

Any help will be appreciated.
Thanks



On 10/06/2018 17:07, Max Cuttins wrote:

Hi everybody,

I'm following the documentation step by step.
However, trying to install tcmu-runner and the other dependencies gives me an error:

yum install targetcli python-rtslib tcmu-runner ceph-iscsi-config
ceph-iscsi-cli
Package targetcli-2.1.fb46-4.el7_5.noarch already installed and latest
version
Package python-rtslib-2.1.fb63-11.el7_5.noarch already installed and latest
version
No package tcmu-runner available.
No package ceph-iscsi-config available.
No package ceph-iscsi-cli available.
Nothing to do


Do I need to add a particular repository?

Thanks,
Max











___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Installing iSCSI support

2018-06-11 Thread Max Cuttins


Really? :)
So nobody in this huge mailing list has ever installed iSCSI and hit
these errors before me.

Wow, sounds like I'm a pioneer here.

The installation guide goes wrong at the very, very beginning.
Even though it expressly says to install on a CentOS 7.5 environment,
it's impossible to install packages that are not present in any repo.


Do you think I should use the alternative manual install guide for other
environments instead?
That documentation is quite easy, but it forces you to install source by source,
downloading code from GitHub instead of using packages.
I think it's a pity to do this if those packages really are available
somewhere.


   http://docs.ceph.com/docs/mimic/rbd/iscsi-target-cli-manual-install/


I'm confused.

Any help will be appreciated.
Thanks



On 10/06/2018 17:07, Max Cuttins wrote:


Hi everybody,

I'm following the documentation step by step.
However, trying to install tcmu-runner and the other dependencies gives me an
error:


yum install targetcli python-rtslib tcmu-runner ceph-iscsi-config 
ceph-iscsi-cli
Package targetcli-2.1.fb46-4.el7_5.noarch already installed and latest 
version
Package python-rtslib-2.1.fb63-11.el7_5.noarch already installed and latest 
version
No package tcmu-runner available.
No package ceph-iscsi-config available.
No package ceph-iscsi-cli available.
Nothing to do


Do I need to add a particular repository?

Thanks,
Max







___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Reinstall everything

2018-06-11 Thread Max Cuttins

I used ceph-deploy to purge the data.
However, it didn't remove the LVM volumes.

I'm going to look at the "lvm" part.
It seems new; I had always used "disk" until now.




On 10/06/2018 14:13, Sergey Malinin wrote:
Not sure if ceph-deploy has similar functionality, but executing
‘ceph-volume lvm zap  --destroy’ on the target machine would have
removed the lvm mapping.
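For example (the device name here is just a placeholder):

   ceph-volume lvm zap /dev/sdb --destroy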

On Jun 10, 2018, 14:41 +0300, Max Cuttins , wrote:


I solved it by myself.
I'm writing my findings here to save others some working hours.
It sounds strange that nobody knew this.

The issue is that the data is purged but the LVM volume groups are left in place.

This means that you need to remove them manually.

I just reinstalled the whole OS, and on the data disks there are still
LVM volume groups named "ceph-*". These volume groups are ACTIVE by default.

To get rid of the old data:

#find disks

lsblk

In the output, look for all the "ceph-*" volume groups and remove them:

vgchange -a n ceph-XX
vgremove ceph-XXX

Do it for all disks.
Now you can run *ceph-deploy osd create* correctly without being
told that the disk is in use.





On 06/06/2018 19:41, Max Cuttins wrote:

Hi everybody,

I would like to start from zero.
However, the last time I ran the command to purge everything I got an issue.

I had a completely cleaned-up system as expected, but the disks were still
OSDs, and the new installation refused to overwrite disks in use.
The only way to make it work was to manually format the disks with
fdisk and zap them again with Ceph later.


Is there something I should do before purging everything in order to
avoid a similar issue?


Thanks,
Max


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Installing iSCSI support

2018-06-10 Thread Max Cuttins

Hi everybody,

I'm following the documentation step by step.
However, trying to install tcmu-runner and the other dependencies gives me an error:

   yum install targetcli python-rtslib tcmu-runner ceph-iscsi-config 
ceph-iscsi-cli
   Package targetcli-2.1.fb46-4.el7_5.noarch already installed and latest 
version
   Package python-rtslib-2.1.fb63-11.el7_5.noarch already installed and latest 
version
   No package tcmu-runner available.
   No package ceph-iscsi-config available.
   No package ceph-iscsi-cli available.
   Nothing to do


Do I need to add a particular repository?

Thanks,
Max



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Reinstall everything

2018-06-10 Thread Max Cuttins

I solved it by myself.
I'm writing my findings here to save others some working hours.
It sounds strange that nobody knew this.

The issue is that the data is purged but the LVM volume groups are left in place.

This means that you need to remove them manually.

I just reinstalled the whole OS, and on the data disks there are still
LVM volume groups named "ceph-*". These volume groups are ACTIVE by default.

To get rid of the old data:

#find disks

   lsblk

In the output, look for all the "ceph-*" volume groups and remove them:

   vgchange -a n ceph-XX
   vgremove ceph-XXX

Do it for all disks.
Now you can run *ceph-deploy osd create* correctly without being
told that the disk is in use.
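If there are many disks, a small loop should handle them all at once (the
pattern match on "ceph-" is an assumption; double-check the list printed by
vgs before removing anything):

   # deactivate and remove every ceph-* volume group
   for vg in $(vgs --noheadings -o vg_name | grep '^ *ceph-'); do
       vgchange -a n "$vg"
       vgremove -y "$vg"
   done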





On 06/06/2018 19:41, Max Cuttins wrote:

Hi everybody,

I would like to start from zero.
However, the last time I ran the command to purge everything I got an issue.

I had a completely cleaned-up system as expected, but the disks were still OSDs,
and the new installation refused to overwrite disks in use.
The only way to make it work was to manually format the disks with fdisk
and zap them again with Ceph later.


Is there something I should do before purging everything in order to
avoid a similar issue?


Thanks,
Max



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph-deploy disk list return a python error

2018-06-10 Thread Max Cuttins

I'm running a new installation of Mimic:

#ceph-deploy disk list ceph01

   [ceph01][DEBUG ] connection detected need for sudo
   [ceph01][DEBUG ] connected to host: ceph01
   [ceph01][DEBUG ] detect platform information from remote host
   [ceph01][DEBUG ] detect machine type
   [ceph01][DEBUG ] find the location of an executable
   [ceph01][INFO  ] Running command: sudo fdisk -l

   [ceph_deploy][ERROR ] Traceback (most recent call last):
   [ceph_deploy][ERROR ]   File
   "/usr/lib/python2.7/site-packages/ceph_deploy/util/decorators.py",
   line 69, in newfunc
   [ceph_deploy][ERROR ] return f(*a, **kw)
   [ceph_deploy][ERROR ]   File
   "/usr/lib/python2.7/site-packages/ceph_deploy/cli.py", line 164, in
   _main
   [ceph_deploy][ERROR ] return args.func(args)
   [ceph_deploy][ERROR ]   File
   "/usr/lib/python2.7/site-packages/ceph_deploy/osd.py", line 434, in disk
   [ceph_deploy][ERROR ] disk_list(args, cfg)
   [ceph_deploy][ERROR ]   File
   "/usr/lib/python2.7/site-packages/ceph_deploy/osd.py", line 376, in
   disk_list
   [ceph_deploy][ERROR ] distro.conn.logger(line)
   [ceph_deploy][ERROR ] TypeError: 'Logger' object is not callable
   [ceph_deploy][ERROR ]
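As a workaround, listing the devices directly on the OSD node still works
(assuming the Ceph packages, and therefore ceph-volume, are installed there):

   # run these on the OSD node itself
   ceph-volume lvm list
   lsblk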


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Reinstall everything

2018-06-06 Thread Max Cuttins

Hi everybody,

I would like to start from zero.
However, the last time I ran the command to purge everything I got an issue.

I had a completely cleaned-up system as expected, but the disks were still OSDs
and the new installation refused to overwrite disks in use.
The only way to make it work was to manually format the disks with fdisk
and zap them again with Ceph later.


Is there something I should do before purging everything in order to
avoid a similar issue?


Thanks,
Max
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] where is it possible download CentOS 7.5

2018-03-28 Thread Max Cuttins

Hi Jason,

I really don't want to stress this more than I already have.
But I need a clear answer.


On 28/03/2018 13:36, Jason Dillaman wrote:

But I don't think CentOS 7.5 will use kernel 4.16... so you are
telling me that the new feature will be backported to the 3.* kernel?

Nope. I'm not part of the Red Hat kernel team and don't have the
influence to shape what they do.

The RHEL/CentOS 7.5 3.x-based kernel will have all the necessary bug fixes.


You wrote about bug fixes... OK, but this is a new feature (i.e. before
it was not possible, now it is).

So, RHEL/CentOS 7.5 will run iSCSI LIO out of the box?
Yes or no?
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] where is it possible download CentOS 7.5

2018-03-28 Thread Max Cuttins

On 27/03/2018 13:46, Brad Hubbard wrote:



On Tue, Mar 27, 2018 at 9:12 PM, Max Cuttins <m...@phoenixweb.it> wrote:


Hi Brad,

    that post was mine. I know it quite well.

That post was about confirming that the minimum requirements
written in the documentation really didn't exist.

However, I never asked whether there is somewhere a place to
download the DEV or RC build of CentOS 7.5.
I was thinking about joining the community of testers and
developers that are already testing Ceph on that "/not ready/"
environment.

In that post those questions were not really asked, so no answers
were given.


From that thread.

"The necessary kernel changes actually are included as part of 4.16-rc1
which is available now. We also offer a pre-built test kernel with the
necessary fixes here [1].

[1] https://shaman.ceph.com/repos/kernel/ceph-iscsi-test/"


I notice that URL is unavailable so maybe the real question should be 
why is that kernel no longer available?


Yes, the link was broken and it seemed to me a leftover from old docs.
Since all the other things described didn't exist yet, I thought that
this test kernel was not available either (yet, or anymore).



There are plenty more available at 
https://shaman.ceph.com/repos/kernel/testing/ but *I* can't tell you 
which is relevant but perhaps someone else can.


However, 4.16 is almost ready to be released (it should have been already).
At this moment it's just double work to use that kernel and afterwards upgrade
to the final one.




I see that you also talked about other distributions. Well, I read
around that SUSE already implements iSCSI.
However, as far as I know (which is not that much), this distribution
uses a modified kernel in order to make this work.
And in order to use it you need a dashboard that can handle
these kinds of differences (OpenAttic).
I already knew OpenAttic is contributing to developing the next
generation of the Ceph Dashboard (and this sounds damn good!).
However, this also means to me that the *official dashboard* should
not be talking about iSCSI at all (as every implementation of
iSCSI runs on a modified version).

So these are the things I cannot figure out:
Why is the iSCSI board on the official Ceph dashboard? (I could
understand it on OpenAttic, which runs on SUSE, but not on the official
one.)

Why do you believe it should not be?


Maybe I'm wrong, but I guess that the dashboard manager expects to
get data/details/stats/config from a particular set of paths, components
and daemons, which cannot be the same for all the ad-hoc implementations.
So there is a dashboard that shows values for a component which is
not there (instead there could be something else, but written in another
way).
Every ad-hoc implementation (like OpenAttic) of course knows where to
find the data/details/stats/config to work with its implementation (so
it's understandable that they have a board for iSCSI).

Right?



And why, in the official documentation, is the minimum requirement to
make iSCSI work to install CentOS 7.5, which doesn't exist? Is
there an RC candidate which I can start to use?


But it doesn't say that, it says " RHEL/CentOS 7.5; Linux kernel v4.16 
or newer; or the Ceph iSCSI client test kernel 
<https://shaman.ceph.com/repos/kernel/ceph-iscsi-test>". You seem to 
be ignoring the "Ceph iSCSI client test kernel 
<https://shaman.ceph.com/repos/kernel/ceph-iscsi-test>" part?



Yes, the link was broken and it seemed to me a leftover from old docs.

Moreover, at first read I understood that I needed both CentOS 7.5 *AND*
kernel 4.16, *OR* the test kernel.
Now you are telling me that the requirements are alternatives. Which
explains to me why the documentation suggests just CentOS and not all the
other distributions.

This also sounds good.

But I don't think that CentOS 7.5 will use kernel 4.16... so you are
telling me that the new feature will be backported to the 3.* kernel?
Is that right? So I don't need to upgrade the kernel if I use
RHEL/CentOS 7.5?
This sounds even better. I was a bit worried about not using the mainstream
kernel of the distribution.



And... if SUSE or even other distributions already work with
iSCSI... why doesn't the documentation just recommend those
instead of RHEL or CentOS?

Because that would be odd, to say the least. If the documentation is 
incorrect for CentOS then it was, at least at some point, thought to 
be correct and it probably will be correct again in the near future 
and, if not, we can review and correct it as necessary.


Of course the best way to predict the future is to make it happen. ;)
But this is odd for a document

Re: [ceph-users] where is it possible download CentOS 7.5

2018-03-27 Thread Max Cuttins

Hi Brad,

    that post was mine. I know it quite well.
That post was about confirming that the minimum requirements written
in the documentation really didn't exist.


However, I never asked whether there is somewhere a place to
download the DEV or RC build of CentOS 7.5.
I was thinking about joining the community of testers and developers that
are already testing Ceph on that "/not ready/" environment.


In that post those questions were not really asked, so no answers were given.

I see that you also talked about other distributions. Well, I read around
that SUSE already implements iSCSI.
However, as far as I know (which is not that much), this distribution uses a
modified kernel in order to make this work.
And in order to use it you need a dashboard that can handle these
kinds of differences (OpenAttic).
I already knew OpenAttic is contributing to developing the next
generation of the Ceph Dashboard (and this sounds damn good!).
However, this also means to me that the *official dashboard* should not
be talking about iSCSI at all (as every implementation of iSCSI runs
on a modified version).


So these are the things I cannot figure out:
Why is the iSCSI board on the official Ceph dashboard? (I could
understand it on OpenAttic, which runs on SUSE, but not on the official one.)
And why, in the official documentation, is the minimum requirement to make
iSCSI work to install CentOS 7.5, which doesn't exist? Is there an RC
candidate which I can start to use?
And... if SUSE or even other distributions already work with iSCSI...
why doesn't the documentation just recommend those instead of RHEL
or CentOS?


There is something confusing between the minimal requirements in the
documentation, what the dashboard suggests is possible, and what I read
around about modded Ceph in other Linux distributions.

I'm creating a new post to clarify all these points.

Thanks for your answer! :)



On 27/03/2018 11:24, Brad Hubbard wrote:
See the thread in this very ML titled "Ceph iSCSI is a prank?", last 
update thirteen days ago.


If your questions are not answered by that thread let us know.

Please also remember that CentOS is not the only platform that ceph 
runs on by a long shot and that not all distros lag as much as it (not 
a criticism, just a fact. The reasons for lagging are valid and well 
documented and should be accepted by those who choose to use them). If
you want the bleeding edge then RHEL/CentOS should not be your
platform of choice.



On Tue, Mar 27, 2018 at 7:04 PM, Max Cuttins <m...@phoenixweb.it> wrote:


Thanks Jason,

this is exactly what I read around and supposed.
RHEL 7.5 is not yet released (neither is kernel 4.16).

So my doubts are 2:

*1) If it's not released... why is this in the documentation?*
Is the documentation talking about a dev candidate already
accessible somewhere?

2) Why is there already an iSCSI board in the dashboard?
I guess I'm missing something, or is it really just for a future
implementation and not usable yet?
And if it is usable... where can I download what's necessary in
order to start?


On 26/03/2018 14:10, Jason Dillaman wrote:

RHEL 7.5 has not been released yet, but it should be released very
soon. After it's released, it usually takes the CentOS team a little
time to put together their matching release. I also suspect that Linux
kernel 4.16 is going to be released in the next week or so as well.

On Sat, Mar 24, 2018 at 7:36 AM, Max Cuttins
<m...@phoenixweb.it> wrote:

As stated in the documentation, in order to use iSCSI you need to use
CentOS 7.5.
Where can I download it?


Thanks


iSCSI Targets

Traditionally, block-level access to a Ceph storage cluster has been limited
to QEMU and librbd, which is a key enabler for adoption within OpenStack
environments. Starting with the Ceph Luminous release, block-level access is
expanding to offer standard iSCSI support allowing wider platform usage, and
potentially opening new use cases.

RHEL/CentOS 7.5; Linux kernel v4.16 or newer; or the Ceph iSCSI client test
kernel
A working Ceph Storage cluster, deployed with ceph-ansible or using the
command-line interface
iSCSI gateways nodes, which can either be colocated with OSD nodes or on
dedicated nodes
Separate network subnets for iSCSI front-end traffic and Ceph back-end
traffic






___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] where is it possible download CentOS 7.5

2018-03-27 Thread Max Cuttins

Thanks Jason,

this is exactly what I read around and supposed.
RHEL 7.5 is not yet released (neither is kernel 4.16).

So my doubts are 2:

*1) If it's not released... why is this in the documentation?*
Is the documentation talking about a dev candidate already accessible
somewhere?


2) Why is there already an iSCSI board in the dashboard?
I guess I'm missing something, or is it really just for a future implementation
and not usable yet?
And if it is usable... where can I download what's necessary in order to
start?



On 26/03/2018 14:10, Jason Dillaman wrote:

RHEL 7.5 has not been released yet, but it should be released very
soon. After it's released, it usually takes the CentOS team a little
time to put together their matching release. I also suspect that Linux
kernel 4.16 is going to be released in the next week or so as well.

On Sat, Mar 24, 2018 at 7:36 AM, Max Cuttins  wrote:

As stated in the documentation, in order to use iSCSI you need to use
CentOS 7.5.
Where can I download it?


Thanks


iSCSI Targets

Traditionally, block-level access to a Ceph storage cluster has been limited
to QEMU and librbd, which is a key enabler for adoption within OpenStack
environments. Starting with the Ceph Luminous release, block-level access is
expanding to offer standard iSCSI support allowing wider platform usage, and
potentially opening new use cases.

RHEL/CentOS 7.5; Linux kernel v4.16 or newer; or the Ceph iSCSI client test
kernel
A working Ceph Storage cluster, deployed with ceph-ansible or using the
command-line interface
iSCSI gateways nodes, which can either be colocated with OSD nodes or on
dedicated nodes
Separate network subnets for iSCSI front-end traffic and Ceph back-end
traffic








___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] where is it possible download CentOS 7.5

2018-03-24 Thread Max Cuttins

Thanks Marc,

your answer is so illuminating.
If it were that easy I would have downloaded it two months ago.
But it's not on the official channel and there is no mention anywhere
of this release (sorry for you, but not even on Google).


Well... except in the Ceph documentation, of course.
So I posted here. I guess somebody read the docs before me and somebody
"maybe" had already solved this X-file for everybody.


But thank you for your ridiculous answer.
You made my day.



On 24/03/2018 12:47, Marc Roos wrote:


https://www.google.pl/search?dcr=0&source=hp&q=where+can+i+download+centos+7.5&oq=where+can+i+download+centos+7.5



-Original Message-
From: Max Cuttins [mailto:m...@phoenixweb.it]
Sent: Saturday, 24 March 2018 12:36
To: ceph-users@lists.ceph.com
Subject: [ceph-users] where is it possible download CentOS 7.5

As stated in the documentation, in order to use iSCSI you need to use
CentOS 7.5.
Where can I download it?




Thanks






iSCSI Targets


Traditionally, block-level access to a Ceph storage cluster has been
limited to QEMU and librbd, which is a key enabler for adoption within
OpenStack environments. Starting with the Ceph Luminous release,
block-level access is expanding to offer standard iSCSI support allowing
wider platform usage, and potentially opening new use cases.

*   RHEL/CentOS 7.5; Linux kernel v4.16 or newer; or the Ceph iSCSI
client test kernel
<https://shaman.ceph.com/repos/kernel/ceph-iscsi-test>
*   A working Ceph Storage cluster, deployed with ceph-ansible or
using the command-line interface
*   iSCSI gateways nodes, which can either be colocated with OSD nodes
or on dedicated nodes
*   Separate network subnets for iSCSI front-end traffic and Ceph
back-end traffic





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] where is it possible download CentOS 7.5

2018-03-24 Thread Max Cuttins
As stated in the documentation, in order to use iSCSI you need to use
CentOS 7.5.

Where can I download it?


Thanks


 iSCSI Targets

Traditionally, block-level access to a Ceph storage cluster has been
limited to QEMU and librbd, which is a key enabler for adoption within
OpenStack environments. Starting with the Ceph Luminous release,
block-level access is expanding to offer standard iSCSI support allowing
wider platform usage, and potentially opening new use cases.


 * RHEL/CentOS 7.5; Linux kernel v4.16 or newer; or the Ceph iSCSI
   client test kernel
   
 * A working Ceph Storage cluster, deployed with ceph-ansible or
   using the command-line interface
 * iSCSI gateways nodes, which can either be colocated with OSD nodes
   or on dedicated nodes
 * Separate network subnets for iSCSI front-end traffic and Ceph
   back-end traffic

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] What about Petasan?

2018-03-19 Thread Max Cuttins

Hi everybody,

has anybody used Petasan?
Its website claims it offers Ceph with ready-to-use iSCSI.
Has somebody tried it already?

Experience?
Thoughts?
Reviews?
Doubts?
Pros?
Cons?

Thanks for any thoughts.
Max

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Delete a Pool - how hard should be?

2018-03-07 Thread Max Cuttins

On 06/03/2018 16:23, David Turner wrote:
That said, I do like the idea of being able to disable buckets, rbds, 
pools, etc so that no client could access them. That is useful for 
much more than just data deletion and won't prevent people from 
deleting data prematurely.


To me, if nobody can access the data for 30 days and the customer didn't
call me within those days, it's OK to delete the data for good.

Which is the way it should be.
Make it easy for the admin to delete data when he really wants to.
Make it possible for the user to go some days without their data until that
data is obsolete and useless.
The auto-purge of your mailbox trash works the same way and seems to me a
reasonable way to handle precious data such as personal emails.


It could be added as a requisite step to deleting a pool, rbd, etc. 
The process would need to be refactored as adding another step isn't 
viable.
This feature is much more complicated than it may seem on the surface. 
For pools, you could utilize cephx, except not everyone uses that... 
So maybe logic added to the osd map. Buckets would have to be 
completely in rgw. Rbds would probably have to be in the osd map as 
well. This is not a trivial change.


Mine was just a "/nice-to-have/" proposal.
There is no hurry to implement a secondary feature such as this one.

As for the logic, would it be possible to use something like this:

 * snapshot the pool under a special pool name
 * remove the original pool
 * give the possibility to restore the snapshot with its original name.

I think this should immediately stop all the connections to the original
pool but leave all the data intact.

Maybe.



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Delete a Pool - how hard should be?

2018-03-06 Thread Max Cuttins


On 06/03/2018 16:15, David Turner wrote:
I've never deleted a bucket, pool, etc at the request of a user that 
they then wanted back because I force them to go through a process to 
have their data deleted. They have to prove to me, and I have to 
agree, that they don't need it before I'll delete it.


Of course I cannot keep in touch with the customers of my reseller (whom
I don't know)
...or I should say with the end customer [of the customer] [of the
customer] [of the customer] of my resellers
...in order to obsessively ask them to please PROVE to me that their data is
not useful anymore.


And even if I could, I don't want to call all the end customers, wasting
my time just to confirm that _*I can go on*_ and do my job.


It just sounds like you need to either learn to be a storage admin, 
hire someone that is, or buy a solution that doesn't care if you are.


Uh! That's bad.
It is so sad when somebody cannot take a proposal as constructive
criticism but instead needs to mark others as incompetent.
Everybody has different admin experience and a different point of view, and
that's all, folks.

You don't have sub-sub-sub customers you don't know? I do.
You are the one who makes everybody obey "the process"? I can't. I
need to handle the requests of my customers, not yell at them when they are
dumb enough to delete important data.
I just wrote to throw out a proposal to improve the admin's life,
certainly not to be offended.

Thanks!



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Delete a Pool - how hard should be?

2018-03-06 Thread Max Cuttins




On 06/03/2018 11:13, Ronny Aasen wrote:

On 06 March 2018 10:26, Max Cuttins wrote:

On 05/03/2018 20:17, Gregory Farnum wrote:


You're not wrong, and indeed that's why I pushed back on the latest 
attempt to make deleting pools even more cumbersome.


But having a "trash" concept is also pretty weird. If admins can 
override it to just immediately delete the data (if they need the 
space), how is that different from just being another hoop to jump 
through? If we want to give the data owners a chance to undo, how do 
we identify and notify *them* rather than the admin running the 
command? But if admins can't override the trash and delete 
immediately, what do we do for things like testing and proofs of 
concept where large-scale data creates and deletes are to be expected?

-Greg


I'm talking from my experience:

  * Data owners are a little bit in LA LA LAND, and think that they
    can safely delete some of their data without losses.
  * Data owners should think that their pool has really been deleted.
  * Data owners should not be made aware of the existence of the
    "/trash/".
  * So data owners ask to restore from backup (but instead we'll simply
    use the trash).

That said, we also have to consider that:

  * The administrator is always GOD, so he needs to have the possibility
    to override this whenever he needs to.
  * However, the administrator should just put the pool into a deleted
    status, without overriding this behaviour if there is no need to do so.
  * Overriding should be allowed only with many cumbersome warnings telling
    you that YOU SHOULD NOT OVERRIDE - PLEASE AVOID OVERRIDE.

I don't like software that limits administrators in doing their
job... in the end the administrator will always find a way to do what he
wants (he's root).
Of course I like features that push the admin to follow the right
behaviour.



Some sort of active/inactive toggle on RBD images, pools, buckets
and filesystem trees is nice to allow admins to perform scream tests.

"data owner requests deletion - admin disables pool (kicks all clients)
- data owner screams - admin reactivates"

sounds much better than the last step being the admin checking that the
backups are good...

I try to do something similar by renaming pools that are to be deleted,
but that is not always the same as inactive.
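(Roughly like this; the pool names are just an example:

   ceph osd pool rename volumes volumes.to-delete
   # if someone screams, rename it back:
   ceph osd pool rename volumes.to-delete volumes

but clients that already have the pool open keep working by pool ID, which is
presumably why it is not quite the same as making it inactive.)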




EXACTLY! :)
I like the name "scream test"... it really looks like that! :)

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Delete a Pool - how hard should be?

2018-03-06 Thread Max Cuttins

On 05/03/2018 20:17, Gregory Farnum wrote:


You're not wrong, and indeed that's why I pushed back on the latest 
attempt to make deleting pools even more cumbersome.


But having a "trash" concept is also pretty weird. If admins can 
override it to just immediately delete the data (if they need the 
space), how is that different from just being another hoop to jump 
through? If we want to give the data owners a chance to undo, how do 
we identify and notify *them* rather than the admin running the 
command? But if admins can't override the trash and delete 
immediately, what do we do for things like testing and proofs of 
concept where large-scale data creates and deletes are to be expected?

-Greg


I'm talking from my experience:

 * Data owners are a little bit in LA LA LAND, and think that they
   can safely delete some of their data without losses.
 * Data owners should think that their pool has really been deleted.
 * Data owners should not be made aware of the existence of the
   "/trash/".
 * So data owners ask to restore from backup (but instead we'll simply
   use the trash).

That said, we also have to consider that:

 * The administrator is always GOD, so he needs to have the possibility
   to override this whenever he needs to.
 * However, the administrator should just put the pool into a deleted
   status, without overriding this behaviour if there is no need to do so.
 * Overriding should be allowed only with many cumbersome warnings telling
   you that YOU SHOULD NOT OVERRIDE - PLEASE AVOID OVERRIDE.

I don't like software that limits administrators in doing their job...
in the end the administrator will always find a way to do what he wants (he's
root).
Of course I like features that push the admin to follow the right
behaviour.








___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Delete a Pool - how hard should be?

2018-03-06 Thread Max Cuttins



What about using the at command:

echo "ceph osd pool rm   --yes-i-really-really-mean-it" | at now + 30 days

Regards,
Alex


How do you know that this command is scheduled?
How do you delete the scheduled command once it has been cast?
This is weird. We need something within Ceph that lets you see the
"status" of the pool as "pending delete".



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Cluster is empty but it still use 1Gb of data

2018-03-02 Thread Max Cuttins

perfect


On 02/03/2018 19:18, Igor Fedotov wrote:


Yes, by default BlueStore reports 1Gb per OSD as used by BlueFS.


On 3/2/2018 8:10 PM, Max Cuttins wrote:

Umh

Taking a look at your computation, I think the overhead-per-OSD ratio is
really about 1.1 GB per OSD.

Because I have 9 NVMe OSDs alive right now, so about 9.5 GB of overhead.
So I guess this is just its normal behaviour.

Fine!


On 02/03/2018 15:18, David Turner wrote:
[1] Here is a ceph status on a brand new cluster that has never had
any pools created or data put into it at all. 323GB used out of
2.3PB. That's 0.01% overhead, but we're using 10TB disks for this
cluster, and the overhead is moreso per osd than per TB.  It is
1.1GB overhead per osd. 34 of the osds are pure nvme and the other
255 have collocated DBs with their WAL on flash.


The used space in your output is most likely just osd overhead, but you
can double check if there are any orphaned rados objects using up
space with a `rados ls`.  Another thing to note is that deleting a
pool in ceph is not instant. It goes into garbage collection and is
taken care of over time.  Most likely you're just looking at osd
overhead, though.


[1]
$ ceph -s
cluster:
health: HEALTH_OK

services:
mon: 5 daemons, quorum mon1,mon2,mon4,mon3,mon5
mgr: mon1(active), standbys: mon3, mon2, mon5, mon4
osd: 289 osds: 289 up, 289 in

data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 bytes
usage: 323 GB used, 2324 TB / 2324 TB avail

On Fri, Mar 2, 2018, 6:25 AM Max Cuttins <m...@phoenixweb.it> wrote:


How can I analyze this?


On 02/03/2018 12:18, Gonzalo Aguilar Delgado wrote:


Hi Max,

No, that's not normal. 9GB for an empty cluster. Maybe you
reserved some space or you have some other service that's taking up the
space. But it seems way too much to me.


On 02/03/18 at 12:09, Max Cuttins wrote:


I don't care about getting that space back.
I just want to know if it's expected or not.
Because I ran several rados bench tests with the |--no-cleanup| flag,

and maybe I left something behind along the way.



On 02/03/2018 11:35, Janne Johansson wrote:

2018-03-02 11:21 GMT+01:00 Max Cuttins <m...@phoenixweb.it>:

Hi everybody,

i deleted everything from the cluster after some test
with RBD.
Now I see that there something still in use:

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 bytes
    usage: *9510 MB used*, 8038 GB / 8048 GB avail
    pgs:

Is this the overhead of the bluestore journal/wall?
Or there is something wrong and this should be zero?


People setting up new clusters also see this, there are
overhead items and stuff that eat some space
so it would never be zero. At your place, it would seem it is
close to 0.1%, so just live with it and move
on to using your 8TB for what you really needed it to be used
for.

In almost no case will I think that "if only I could get
those 0.1% back and then my cluster would be great
again".

Storage clusters should probably have something like 10%
"admin" margins so if ceph warns and
whines at OSDs being 85% full, then at 75% you should be
writing orders for more disks and/or more
storage nodes.

At that point, regardless of where the "miscalculation" is,
or where ceph manages to waste
9500M while you think it should be zero, it will be all but
impossible to make anything decent with it
if you were to get those 0.1% back with some magic command.

-- 
May the most significant bit of your life be positive.




___
ceph-users mailing list
ceph-users@lists.ceph.com <mailto:ceph-users@lists.ceph.com>
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




___
ceph-users mailing list
ceph-users@lists.ceph.com <mailto:ceph-users@lists.ceph.com>
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com <mailto:ceph-users@lists.ceph.com>
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Cluster is empty but it still use 1Gb of data

2018-03-02 Thread Max Cuttins

Umh

Taking a look at your computation, I think the overhead/OSD ratio really 
is about 1.1 GB per OSD.

Because I have 9 NVMe OSDs alive right now, that makes about 9.5 GB of 
overhead, so I guess this is just the expected behaviour.
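(Quick check: 9510 MB used / 9 OSDs ≈ 1.06 GB per OSD, which roughly 
matches the ~1.1 GB per OSD figure.)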

Fine!


Il 02/03/2018 15:18, David Turner ha scritto:
[1] Here is a `ceph status` on a brand new cluster that has never had 
any pools created or any data put into it at all.  323 GB used out of 2.3 PB. 
That's 0.01% overhead, but we're using 10TB disks for this cluster, 
and the overhead is more so per OSD than per TB.  It is 1.1 GB overhead 
per OSD. 34 of the OSDs are pure NVMe and the other 255 have 
collocated DBs with their WAL on flash.


The used space you're seeing is most likely just OSD overhead, but you 
can double check whether there are any orphaned rados objects using up 
space with a `rados ls`.  Another thing to note is that deleting a 
pool in Ceph is not instant: it goes into garbage collection and is 
taken care of over time.  Most likely you're just looking at OSD 
overhead, though.


[1]
$ ceph -s
cluster:
health: HEALTH_OK

services:
mon: 5 daemons, quorum mon1,mon2,mon4,mon3,mon5
mgr: mon1(active), standbys: mon3, mon2, mon5, mon4
osd: 289 osds: 289 up, 289 in

data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 bytes
usage: 323 GB used, 2324 TB / 2324 TB avail

On Fri, Mar 2, 2018, 6:25 AM Max Cuttins <mailto:m...@phoenixweb.it>> wrote:


How can I analyze this?


Il 02/03/2018 12:18, Gonzalo Aguilar Delgado ha scritto:


Hi Max,

No, that's not normal, 9 GB for an empty cluster. Maybe you
reserved some space or you have another service that's taking the
space. But it seems way too much to me.


El 02/03/18 a las 12:09, Max Cuttins escribió:


I don't care about getting that space back.
I just want to know if it's expected or not,
because I ran several rados bench runs with the --no-cleanup flag

and maybe I left something behind.



Il 02/03/2018 11:35, Janne Johansson ha scritto:

2018-03-02 11:21 GMT+01:00 Max Cuttins mailto:m...@phoenixweb.it>>:

Hi everybody,

i deleted everything from the cluster after some test with RBD.
Now I see that there something still in use:

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 bytes
    usage: *9510 MB used*, 8038 GB / 8048 GB avail
    pgs:

Is this the overhead of the bluestore journal/wall?
Or there is something wrong and this should be zero?


People setting up new clusters also see this, there are
overhead items and stuff that eat some space
so it would never be zero. At your place, it would seem it is
close to 0.1%, so just live with it and move
on to using your 8TB for what you really needed it to be used for.

In almost no case will I think that "if only I could get those
0.1% back and then my cluster would be great
again".

Storage clusters should probably have something like 10%
"admin" margins so if ceph warns and
whines at OSDs being 85% full, then at 75% you should be
writing orders for more disks and/or more
storage nodes.

At that point, regardless of where the "miscalculation" is, or
where ceph manages to waste
9500M while you think it should be zero, it will be all but
impossible to make anything decent with it
if you were to get those 0.1% back with some magic command.

-- 
May the most significant bit of your life be positive.




___
ceph-users mailing list
ceph-users@lists.ceph.com <mailto:ceph-users@lists.ceph.com>
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




___
ceph-users mailing list
ceph-users@lists.ceph.com <mailto:ceph-users@lists.ceph.com>
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com <mailto:ceph-users@lists.ceph.com>
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-02 Thread Max Cuttins



Il 02/03/2018 13:27, Federico Lucifredi ha scritto:


On Fri, Mar 2, 2018 at 4:29 AM, Max Cuttins <mailto:m...@phoenixweb.it>> wrote:




Hi Federico,

Hi Max,

On Feb 28, 2018, at 10:06 AM, Max Cuttins
mailto:m...@phoenixweb.it>> wrote:

This is true, but having something that just works in
order to have minimum compatibility and start to dismiss
old disk is something you should think about.
You'll have ages in order to improve and get better
performance. But you should allow Users to cut-off old
solutions as soon as possible while waiting for a better
implementation.

I like your thinking, but I wonder why doesn’t a
locally-mounted kRBD volume meet this need? It seems easier
than iSCSI and I would venture would show twice the
performance at least in some cases.


Simple because it's not possible.
XenServer is closed. You cannot add RPM (so basically install
ceph) without hack the distribution by removing the limitation to YUM.
And this is what we do here:
https://github.com/rposudnevskiy/RBDSR
<https://github.com/rposudnevskiy/RBDSR>


Understood. Thanks Max, I did not realize you were also speaking about 
Xen, I thought you meant to find an arbitrary non-virtual disk  
replacement strategy ("start to dismiss old disk").
I need to find an arbitrary non-virtual disk replacement strategy 
compatible with Xen.





We do speak to the Xen team every once in a while, but while there is 
interest in adding Ceph support on their side, I think we are somewhat 
down the list of their priorities.


Thanks -F


They are somewhat more interested in raising the limitations than in 
improving their hypervisor.

Xen 7.3 is exactly Xen 7.2 with new limitations and no added features.
It's a shame.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Cluster is empty but it still use 1Gb of data

2018-03-02 Thread Max Cuttins

How can I analyze this?


Il 02/03/2018 12:18, Gonzalo Aguilar Delgado ha scritto:


Hi Max,

No, that's not normal, 9 GB for an empty cluster. Maybe you reserved 
some space or you have another service that's taking the space. But it 
seems way too much to me.



El 02/03/18 a las 12:09, Max Cuttins escribió:


I don't care about getting that space back.
I just want to know if it's expected or not,
because I ran several rados bench runs with the --no-cleanup flag

and maybe I left something behind.



Il 02/03/2018 11:35, Janne Johansson ha scritto:
2018-03-02 11:21 GMT+01:00 Max Cuttins <mailto:m...@phoenixweb.it>>:


Hi everybody,

i deleted everything from the cluster after some test with RBD.
Now I see that there something still in use:

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 bytes
    usage: *9510 MB used*, 8038 GB / 8048 GB avail
    pgs:

Is this the overhead of the bluestore journal/wall?
Or there is something wrong and this should be zero?


People setting up new clusters also see this, there are overhead 
items and stuff that eat some space
so it would never be zero. At your place, it would seem it is close 
to 0.1%, so just live with it and move

on to using your 8TB for what you really needed it to be used for.

In almost no case will I think that "if only I could get those 0.1% 
back and then my cluster would be great

again".

Storage clusters should probably have something like 10% "admin" 
margins so if ceph warns and
whines at OSDs being 85% full, then at 75% you should be writing 
orders for more disks and/or more

storage nodes.

At that point, regardless of where the "miscalculation" is, or where 
ceph manages to waste
9500M while you think it should be zero, it will be all but 
impossible to make anything decent with it

if you were to get those 0.1% back with some magic command.

--
May the most significant bit of your life be positive.




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Cluster is empty but it still use 1Gb of data

2018-03-02 Thread Max Cuttins

I don't care about getting that space back.
I just want to know if it's expected or not,
because I ran several rados bench runs with the --no-cleanup flag

and maybe I left something behind.
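
(For what it's worth, when the pool still exists, leftovers from rados 
bench --no-cleanup can normally be removed with something like:

    rados -p <pool-name> cleanup --prefix benchmark_data

but since the pools are already gone here, that shouldn't be the cause.)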



Il 02/03/2018 11:35, Janne Johansson ha scritto:
2018-03-02 11:21 GMT+01:00 Max Cuttins <mailto:m...@phoenixweb.it>>:


Hi everybody,

i deleted everything from the cluster after some test with RBD.
Now I see that there something still in use:

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 bytes
    usage: *9510 MB used*, 8038 GB / 8048 GB avail
    pgs:

Is this the overhead of the bluestore journal/wall?
Or there is something wrong and this should be zero?


People setting up new clusters also see this, there are overhead items 
and stuff that eat some space
so it would never be zero. At your place, it would seem it is close to 
0.1%, so just live with it and move

on to using your 8TB for what you really needed it to be used for.

In almost no case will I think that "if only I could get those 0.1% 
back and then my cluster would be great

again".

Storage clusters should probably have something like 10% "admin" 
margins so if ceph warns and
whines at OSDs being 85% full, then at 75% you should be writing 
orders for more disks and/or more

storage nodes.

At that point, regardless of where the "miscalculation" is, or where 
ceph manages to waste
9500M while you think it should be zero, it will be all but impossible 
to make anything decent with it

if you were to get those 0.1% back with some magic command.

--
May the most significant bit of your life be positive.


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Cluster is empty but it still use 1Gb of data

2018-03-02 Thread Max Cuttins

Hi everybody,

I deleted everything from the cluster after some tests with RBD.
Now I see that there is something still in use:

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 bytes
    usage: *9510 MB used*, 8038 GB / 8048 GB avail
    pgs:

Is this the overhead of the BlueStore journal/WAL?
Or is there something wrong, and should this be zero?


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-02 Thread Max Cuttins



Hi Federico,


Hi Max,


On Feb 28, 2018, at 10:06 AM, Max Cuttins  wrote:

This is true, but having something that just works in order to have minimum 
compatibility and start to dismiss old disk is something you should think about.
You'll have ages in order to improve and get better performance. But you should 
allow Users to cut-off old solutions as soon as possible while waiting for a 
better implementation.

I like your thinking, but I wonder why doesn’t a locally-mounted kRBD volume 
meet this need? It seems easier than iSCSI and I would venture would show twice 
the performance at least in some cases.


Simply because it's not possible.
XenServer is closed: you cannot add RPMs (so basically install Ceph) 
without hacking the distribution to remove the limitations on YUM.

And this is what we do here: https://github.com/rposudnevskiy/RBDSR

In order to make live migration work, the VHD/VDI driver needs to be 
rewritten (because that driver is monolithically fused with iSCSI and HBA).
So any implementation is more than just a plugin or a class extension: 
it's an entire rewrite of the SR manager.

Is it working? Yes.
Is it suitable for production? I think it is not.


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Delete a Pool - how hard should be?

2018-03-01 Thread Max Cuttins
I think this is a good question for everybody: how hard should it be to 
delete a pool?


We ask the admin to type the pool name twice.
We ask him to add "--yes-i-really-really-mean-it".
We ask him to grant the mons the ability to delete the pool (and to remove 
that ability again ASAP afterwards).


... and then, of course, somebody asks us to restore the pool.

I think that all this stuff is not looking in the right direction.
It's not the administrator who needs to be warned about deleting data;
it's the data owner who should be warned (and who, most of the time, gives 
his approval by phone and is gone).

So all this stuff just makes the administrator's life harder, while not 
improving the data owner's life in any way.
Probably the best solution is not to delete at all and instead 
apply a "deletion policy".

Something like:

   ceph osd pool rm POOL_NAME -yes
   -> POOL_NAME is set to be deleted, removal is scheduled within 30 days.


This allows us to do two things:

 * allow administrators not to waste their time in the CLI with truly
   strange commands
 * allow the data owner to have a grace period to verify that, after
   deletion, everything still works as expected and that the data that
   disappeared wasn't useful in some way

After 30 days the data will be removed automatically. This is a safe policy 
for the ADMIN and the DATA OWNER.
Of course the ADMIN should be allowed to remove a POOL scheduled for 
deletion earlier, in order to save disk space if needed (but only if needed).
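
Just to make the idea concrete, here is a rough sketch of how such a 
policy could be emulated today with a small wrapper script (the script 
name and the 30-day window are arbitrary, and it assumes 
mon_allow_pool_delete will be enabled when the job fires):

    #!/bin/bash
    # pool-rm-later.sh <pool>
    # Mark a pool as "pending delete" now and schedule the real removal in
    # 30 days; until then the nodelete flag blocks accidental removal.
    POOL="$1"
    ceph osd pool set "$POOL" nodelete true
    echo "ceph osd pool set $POOL nodelete false; ceph osd pool rm $POOL $POOL --yes-i-really-really-mean-it" | at now + 30 days

The missing piece is exactly what is described above: a native "pending 
delete" status visible from Ceph itself.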


What do you think?



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Cannot delete a pool

2018-03-01 Thread Max Cuttins

 and now it worked.
 maybe a typo in my first command.

Sorry


Il 01/03/2018 17:28, David Turner ha scritto:
When dealing with the admin socket you need to be an admin.  `sudo` or 
`sudo -u ceph` ought to get you around that.


I was able to delete a pool just by using the injectargs that you 
showed above.


ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'
ceph osd pool rm pool_name pool_name --yes-i-really-really-mean-it
ceph tell mon.\* injectargs '--mon-allow-pool-delete=false'

If you see the warning 'not observed, change may require restart' you 
can check to see if it took effect or not by asking the daemon what 
it's setting is `ceph daemon mon.ceph_node1 config get 
mon_allow_pool_delete`.


On Thu, Mar 1, 2018 at 10:41 AM Max Cuttins <mailto:m...@phoenixweb.it>> wrote:


I get:

#ceph daemon mon.0 config set mon_allow_pool_delete true
admin_socket: exception getting command descriptions: [Errno 13]
Permission denied


Il 01/03/2018 14:00, Eugen Block ha scritto:
> It's not necessary to restart a mon if you just want to delete a
pool,
> even if the "not observed" message appears. And I would not
recommend
> to permanently enable the "easy" way of deleting a pool. If you are
> not able to delete the pool after "ceph tell mon ..." try this:
>
> ceph daemon mon.<id> config set mon_allow_pool_delete true
>
> and then retry deleting the pool. This works for me without
restarting
> any services or changing config files.
>
> Regards
    >
>
> Zitat von Ronny Aasen mailto:ronny%2bceph-us...@aasen.cx>>:
>
>> On 01. mars 2018 13:04, Max Cuttins wrote:
>>> I was testing IO and I created a bench pool.
>>>
>>> But if I tried to delete I get:
>>>
>>>    Error EPERM: pool deletion is disabled; you must first set the
>>>    mon_allow_pool_delete config option to true before you can
destroy a
>>>    pool
>>>
>>> So I run:
>>>
>>>    ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'
>>>    mon.ceph-node1: injectargs:mon_allow_pool_delete = 'true' (not
>>>    observed, change may require restart)
>>>    mon.ceph-node2: injectargs:mon_allow_pool_delete = 'true' (not
>>>    observed, change may require restart)
>>>    mon.ceph-node3: injectargs:mon_allow_pool_delete = 'true' (not
>>>    observed, change may require restart)
>>>
>>> I restarted all the nodes.
>>> But the flag has not been observed.
>>>
>>> Is this the right way to remove a pool?
>>
>> i think you need to set the option in the ceph.conf of the
monitors.
>> and then restart the mon's one by one.
>>
>> afaik that is by design.
>>

https://blog.widodh.nl/2015/04/protecting-your-ceph-pools-against-removal-or-property-changes/
>>
>>
>> kind regards
>> Ronny Aasen
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com <mailto:ceph-users@lists.ceph.com>
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>

___
ceph-users mailing list
ceph-users@lists.ceph.com <mailto:ceph-users@lists.ceph.com>
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Cannot delete a pool

2018-03-01 Thread Max Cuttins

I get:

#ceph daemon mon.0 config set mon_allow_pool_delete true
admin_socket: exception getting command descriptions: [Errno 13] 
Permission denied



Il 01/03/2018 14:00, Eugen Block ha scritto:
It's not necessary to restart a mon if you just want to delete a pool, 
even if the "not observed" message appears. And I would not recommend 
to permanently enable the "easy" way of deleting a pool. If you are 
not able to delete the pool after "ceph tell mon ..." try this:


ceph daemon mon.<id> config set mon_allow_pool_delete true

and then retry deleting the pool. This works for me without restarting 
any services or changing config files.


Regards


Zitat von Ronny Aasen :


On 01. mars 2018 13:04, Max Cuttins wrote:

I was testing IO and I created a bench pool.

But if I tried to delete I get:

   Error EPERM: pool deletion is disabled; you must first set the
   mon_allow_pool_delete config option to true before you can destroy a
   pool

So I run:

   ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'
   mon.ceph-node1: injectargs:mon_allow_pool_delete = 'true' (not
   observed, change may require restart)
   mon.ceph-node2: injectargs:mon_allow_pool_delete = 'true' (not
   observed, change may require restart)
   mon.ceph-node3: injectargs:mon_allow_pool_delete = 'true' (not
   observed, change may require restart)

I restarted all the nodes.
But the flag has not been observed.

Is this the right way to remove a pool?


i think you need to set the option in the ceph.conf of the monitors.
and then restart the mon's one by one.

afaik that is by design.
https://blog.widodh.nl/2015/04/protecting-your-ceph-pools-against-removal-or-property-changes/ 
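
For reference, I believe that corresponds to something like this in 
ceph.conf on the monitor hosts:

    [mon]
    mon_allow_pool_delete = true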



kind regards
Ronny Aasen
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com






___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-01 Thread Max Cuttins

Almost...


Il 01/03/2018 16:17, Heðin Ejdesgaard Møller ha scritto:

Hello,

I would like to point out that we are running ceph+redundant iscsiGW's,
connecting the LUN's to a esxi+vcsa-6.5 cluster with Red Hat support.

We did encountered a few bumps on the road to production, but those got
fixed by Red Hat engineering and are included in the rhel7.5 and 4.17
kernel.

I can recommend having a look at https://github.com/open-iscsi if you
want to contribute on the userspace side.

Regards
Heðin Ejdesgaard
Synack Sp/f

Direct: +298 77 11 12
Phone:  +298 20 11 11
E-Mail: h...@synack.fo


On hós, 2018-03-01 at 13:33 +0100, Kai Wagner wrote:

I totally understand and see your frustration here, but you've to
keep
in mind that this is an Open Source project with a lots of
volunteers.
If you have a really urgent need, you have the possibility to develop
such a feature on your own or you've to buy someone who could do the
work for you.

It's a long journey but it seems like it finally comes to an end.


On 03/01/2018 01:26 PM, Max Cuttins wrote:

It's obvious that Citrix is not believable anymore.
However, at least Ceph should have added iSCSI to its platform
during
all these years.
Ceph is awesome, so why not just kill all the competitors and make it
compatible even with a washing machine?

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-01 Thread Max Cuttins

Il 28/02/2018 18:16, David Turner ha scritto:
My thought is that in 4 years you could have migrated to a hypervisor 
that will have better performance into ceph than an added iSCSI layer. 
I won't deploy VMs for ceph on anything that won't allow librbd to 
work. Anything else is added complexity and reduced performance.




You are definitely right: I have to change hypervisor. So why didn't I do 
this before?
Because both Citrix/Xen and Inktank/Ceph claimed that they were ready to 
add support for Xen in 2013!


It was 2013:
XEN claimed to support Ceph: 
https://www.citrix.com/blogs/2013/07/08/xenserver-tech-preview-incorporating-ceph-object-stores-is-now-available/
Inktank said the support for Xen was almost ready: 
https://ceph.com/geen-categorie/xenserver-support-for-rbd/


And also iSCSI was close (it was 2014):
https://ceph.com/geen-categorie/updates-to-ceph-tgt-iscsi-support/

So why change hypervisor if everybody tells you that compatibility is 
almost ready to be deployed?
... but then 4 years "just" passed and XEN and Ceph never became 
compatible...


It's obvious that Citrix is not believable anymore.
However, at least Ceph should have added iSCSI to its platform during 
all these years.
Ceph is awesome, so why not just kill all the competitors and make it 
compatible even with a washing machine?





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-01 Thread Max Cuttins

Xen by Citrix used to be a very good hypervisor.
However, they used a very old kernel until 7.1.

The distribution doesn't allow you to add packages from yum, so you need 
to hack it.

I have helped to develop the installer of the unofficial plugin:
https://github.com/rposudnevskiy/RBDSR

However I still don't feel safe using that in production.
So I need to fall back to iSCSI.



Il 28/02/2018 20:16, Mark Schouten ha scritto:

Does Xen still not support RBD? Ceph has been around for years now!

Met vriendelijke groeten,

--
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten | Tuxis Internet Engineering
KvK: 61527076 | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl



*Van: * Massimiliano Cuttini 
*Aan: * "ceph-users@lists.ceph.com" 
*Verzonden: * 28-2-2018 13:53
*Onderwerp: * [ceph-users] Ceph iSCSI is a prank?

I was building ceph in order to use with iSCSI.
But I just see from the docs that need:

*CentOS 7.5*
(which is not available yet, it's still at 7.4)
https://wiki.centos.org/Download

*Kernel 4.17*
(which is not available yet, it is still at 4.15.7)
https://www.kernel.org/

So I guess, there is no ufficial support and this is just a bad prank.

Ceph is ready to be used with S3 since many years.
But need the kernel of the next century to works with such an old
technology like iSCSI.
So sad.





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy won't install luminous (but Jewel instead)

2018-03-01 Thread Max Cuttins

Ah!
So you think this is done by design?

However, that option is very, very useful.
Please add it to the documentation.
Next time it will save me 2-3 hours.



Il 01/03/2018 06:12, Sébastien VIGNERON ha scritto:

Hi Max,

I had the same issue (under Ubuntu 16.04), but I have read the 
ceph-deploy 2.0.0 source code and saw a "--release" flag for the 
install subcommand. You can find the flag with the following 
command: ceph-deploy install --help


It looks like the culprit part of ceph-deploy can be found around line 
20 of /usr/lib/python2.7/dist-packages/ceph_deploy/install.py:


…
    14 def sanitize_args(args):
    15     """
    16     args may need a bunch of logic to set proper defaults that argparse is
    17     not well suited for.
    18     """
    19     if args.release is None:
    20         args.release = 'jewel'
    21         args.default_release = True
    22
    23     # XXX This whole dance is because --stable is getting deprecated
    24     if args.stable is not None:
    25         LOG.warning('the --stable flag is deprecated, use --release instead')
    26         args.release = args.stable
    27     # XXX Tango ends here.
    28
    29     return args
…

Which means we now have to specify "--release luminous" when we want 
to install a luminous cluster, at least until luminous is considered 
stable and the ceph-deploy tool is changed.
I think it may be a kernel version consideration: not all distros have 
the needed minimum kernel version (and features) for full use 
of luminous.
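
For example (the host names are just placeholders):

    ceph-deploy install --release luminous ceph-node1 ceph-node2 ceph-node3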


Cordialement / Best regards,

Sébastien VIGNERON
CRIANN,
Ingénieur / Engineer
Technopôle du Madrillet
745, avenue de l'Université
76800 Saint-Etienne du Rouvray - France
tél. +33 2 32 91 42 91
fax. +33 2 32 91 42 92
http://www.criann.fr
mailto:sebastien.vigne...@criann.fr
support: supp...@criann.fr

Le 1 mars 2018 à 00:37, Max Cuttins <mailto:m...@phoenixweb.it>> a écrit :


Didn't check at time.

I deployed everything from VM standalone.
The VM was just built up with a fresh new CentOS 7.4 using the minimal 
installation ISO 1708.

It's a completely new/fresh/empty system.
Then I run:

yum update -y
yum install wget zip unzip vim pciutils -y
yum install epel-release -y
yum update -y
yum install ceph-deploy -y
yum install yum-plugin-priorities -y

it installed:

Feb 27 19:24:47 Installed: ceph-deploy-1.5.37-0.noarch

-> install ceph with ceph-deploy on 3 nodes.

As a result I get Jewel.

Then... I purge everything from all the 3 nodes
yum update again on ceph deployer node and get:

Feb 27 20:33:20 Updated: ceph-deploy-2.0.0-0.noarch

... then I tried to reinstall over and over, but I always got Jewel.
I tried to install after removing the .ceph config file in my home dir.
I tried to install after changing the default repo to the luminous repo
... and always got Jewel.

Only forcing the release in the ceph-deploy command allowed me to install 
luminous.


Probably yum-plugin-priorities should not be installed after 
ceph-deploy, even though I hadn't run any command yet.
But what is so strange is that purging and reinstalling everything will 
always reinstall Jewel.

It seems that some lock file has been written somewhere to keep using Jewel.



Il 28/02/2018 22:08, David Turner ha scritto:

Which version of ceph-deploy are you using?

On Wed, Feb 28, 2018 at 4:37 AM Massimiliano Cuttini 
mailto:m...@phoenixweb.it>> wrote:


This worked.

However somebody should investigate why default is still jewel
on Centos 7.4


Il 28/02/2018 00:53, jorpilo ha scritto:

Try using:
ceph-deploy --release luminous host1...

 Mensaje original 
De: Massimiliano Cuttini 
<mailto:m...@phoenixweb.it>
Fecha: 28/2/18 12:42 a. m. (GMT+01:00)
Para: ceph-users@lists.ceph.com <mailto:ceph-users@lists.ceph.com>
Asunto: [ceph-users] ceph-deploy won't install luminous (but
Jewel instead)

This is the 5th time that I install and after purge the
installation.
Ceph Deploy is alway install JEWEL instead of Luminous.

No way even if I force the repo from default to luminous:

|https://download.ceph.com/rpm-luminous/el7/noarch|

It still install Jewel it's stuck.

I've already checked if I had installed yum-plugin-priorities,
and I did it.
Everything is exaclty as the documentation request.
But still I get always Jewel and not Luminous.




___
ceph-users mailing list
ceph-users@lists.ceph.com <mailto:ceph-users@lists.ceph.com>
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



___
ceph-users mailing list
ceph-users@lists.ceph.com <mailto:ceph-users@lists.ceph.com>
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Cannot delete a pool

2018-03-01 Thread Max Cuttins

I was testing IO and I created a bench pool.

But if I tried to delete I get:

   Error EPERM: pool deletion is disabled; you must first set the
   mon_allow_pool_delete config option to true before you can destroy a
   pool

So I run:

   ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'
   mon.ceph-node1: injectargs:mon_allow_pool_delete = 'true' (not
   observed, change may require restart)
   mon.ceph-node2: injectargs:mon_allow_pool_delete = 'true' (not
   observed, change may require restart)
   mon.ceph-node3: injectargs:mon_allow_pool_delete = 'true' (not
   observed, change may require restart)

I restarted all the nodes.
But the flag has not been observed.

Is this the right way to remove a pool?



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy won't install luminous (but Jewel instead)

2018-02-28 Thread Max Cuttins

Didn't check at time.

I deployed everything from VM standalone.
The VM was just built up with a fresh new CentOS 7.4 using the minimal 
installation ISO 1708.

It's a completely new/fresh/empty system.
Then I run:

yum update -y
yum install wget zip unzip vim pciutils -y
yum install epel-release -y
yum update -y
yum install ceph-deploy -y
yum install yum-plugin-priorities -y

it installed:

Feb 27 19:24:47 Installed: ceph-deploy-1.5.37-0.noarch

-> install ceph with ceph-deploy on 3 nodes.

As a result I get Jewel.

Then... I purge everything from all the 3 nodes
yum update again on ceph deployer node and get:

Feb 27 20:33:20 Updated: ceph-deploy-2.0.0-0.noarch

... then I tried to reinstall over and over, but I always got Jewel.
I tried to install after removing the .ceph config file in my home dir.
I tried to install after changing the default repo to the luminous repo
... and always got Jewel.

Only forcing the release in the ceph-deploy command allowed me to install 
luminous.


Probably yum-plugin-priorities should not be installed after ceph-deploy, 
even though I hadn't run any command yet.
But what is so strange is that purging and reinstalling everything will 
always reinstall Jewel.

It seems that some lock file has been written somewhere to keep using Jewel.
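
If it helps, one way to check what ceph-deploy actually configured on a 
node is to look at the repo file it writes and at the installed packages, 
e.g. (standard CentOS paths):

    cat /etc/yum.repos.d/ceph.repo   # which release the repo points at
    yum list installed | grep ceph   # which version actually got installed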



Il 28/02/2018 22:08, David Turner ha scritto:

Which version of ceph-deploy are you using?

On Wed, Feb 28, 2018 at 4:37 AM Massimiliano Cuttini 
mailto:m...@phoenixweb.it>> wrote:


This worked.

However somebody should investigate why default is still jewel on
Centos 7.4


Il 28/02/2018 00:53, jorpilo ha scritto:

Try using:
ceph-deploy --release luminous host1...

 Mensaje original 
De: Massimiliano Cuttini 

Fecha: 28/2/18 12:42 a. m. (GMT+01:00)
Para: ceph-users@lists.ceph.com 
Asunto: [ceph-users] ceph-deploy won't install luminous (but
Jewel instead)

This is the 5th time that I install and after purge the installation.
Ceph Deploy is alway install JEWEL instead of Luminous.

No way even if I force the repo from default to luminous:

|https://download.ceph.com/rpm-luminous/el7/noarch|

It still install Jewel it's stuck.

I've already checked if I had installed yum-plugin-priorities,
and I did it.
Everything is exaclty as the documentation request.
But still I get always Jewel and not Luminous.




___
ceph-users mailing list
ceph-users@lists.ceph.com 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph iSCSI is a prank?

2018-02-28 Thread Max Cuttins



Il 28/02/2018 15:19, Jason Dillaman ha scritto:

On Wed, Feb 28, 2018 at 7:53 AM, Massimiliano Cuttini  
wrote:

I was building ceph in order to use with iSCSI.
But I just see from the docs that need:

CentOS 7.5
(which is not available yet, it's still at 7.4)
https://wiki.centos.org/Download

Kernel 4.17
(which is not available yet, it is still at 4.15.7)
https://www.kernel.org/

The necessary kernel changes actually are included as part of 4.16-rc1
which is available now. We also offer a pre-built test kernel with the
necessary fixes here [1].

This is a release candidate and it's not ready for production.
Does anybody know when the kernel 4.16 will be ready for production?





So I guess, there is no ufficial support and this is just a bad prank.

Ceph is ready to be used with S3 since many years.
But need the kernel of the next century to works with such an old technology
like iSCSI.
So sad.

Unfortunately, kernel vs userspace have very different development
timelines. We have no interest in maintaining out-of-tree patchsets to
the kernel.


This is true, but having something that just works, in order to have 
minimum compatibility and start to dismiss old disks, is something you 
should think about.
You'll have ages to improve and get better performance, but you 
should allow users to cut off old solutions as soon as possible while 
waiting for a better implementation.





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[1] https://shaman.ceph.com/repos/kernel/ceph-iscsi-test/



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph iSCSI is a prank?

2018-02-28 Thread Max Cuttins

Sorry for being rude Ross,

I have been following Ceph since 2014, waiting for iSCSI support in order 
to use it with Xen.
Now that it finally seems to be implemented, the OS requirements are 
unrealistic.
It seems a bad prank: 4 years waiting for this... and still no true support 
yet.





Il 28/02/2018 14:11, Marc Roos ha scritto:
  
Hi Massimiliano, have an espresso. You know the Indians have a nice
saying:

"Everything will be good at the end. If it is not good, it is still not
the end."



-Original Message-
From: Massimiliano Cuttini [mailto:m...@phoenixweb.it]
Sent: woensdag 28 februari 2018 13:53
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Ceph iSCSI is a prank?

I was building ceph in order to use with iSCSI.
But I just see from the docs that need:

CentOS 7.5
(which is not available yet, it's still at 7.4)
https://wiki.centos.org/Download

Kernel 4.17
(which is not available yet, it is still at 4.15.7)
https://www.kernel.org/

So I guess, there is no ufficial support and this is just a bad prank.

Ceph is ready to be used with S3 since many years.
But need the kernel of the next century to works with such an old
technology like iSCSI.
So sad.












___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com