Re: [ceph-users] Dashboard RBD Image listing takes forever

2020-01-06 Thread Lenz Grimmer
Hi Matt,

On 1/6/20 4:33 PM, Matt Dunavant wrote:

> I was hoping there was some update on this bug:
> https://tracker.ceph.com/issues/39140
> 
> In all recent versions of the dashboard, the RBD image page takes 
> forever to populate due to this bug. All our images have fast-diff 
> enabled, so it can take 15-20 min to populate this page with about
> 20-30 images.

Thanks for bringing this up and for the reminder. I've just updated the
tracker issue by pointing it to the current pull request that intends to
address this: https://github.com/ceph/ceph/pull/28387 - it looks like this
approach needs further testing and review before we can merge it; it is
currently still marked as "Draft".

@Ernesto - any news/thoughts about this from your POV?

Thanks,

Lenz

-- 
SUSE Software Solutions Germany GmbH - Maxfeldstr. 5 - 90409 Nuernberg
GF: Felix Imendörffer, HRB 36809 (AG Nürnberg)





Re: [ceph-users] dashboard hangs

2019-11-13 Thread Lenz Grimmer
Hi Thoralf,

there have been several reports about Ceph mgr modules (not just the
dashboard) experiencing hangs and freezes recently. The thread "mgr
daemons becoming unresponsive" might give you some additional insight.

Is the "device health metrics" module enabled on your cluster? Could you
try disabling it to see if that fixes the issue?

Lenz

On 11/13/19 4:01 PM, thoralf schulze wrote:

> the dashboard of our moderately used cluster with 3 mon/mgr nodes gets
> stuck about 30 seconds after a mgr becomes active. the dashboard is not
> usable anymore (i.e. the mgr daemon does not respond to http requests
> anymore), although it comes back from the dead occasionally for a few
> seconds. the same happens to the prometheus module: grafana only shows a
> few data points here and there.
> 
> other mgr-related stuff (e.g. ceph pg dump) continues to work just fine.
> forcing a switchover to another mgr or enabling / disabling mgr modules
> helps for a short while, until the whole thing gets stuck again.
> 
> a mgr log with debugging enabled for both mgr and mgrc at level 20 can
> be found at
> http://www.user.tu-berlin.de/thoralf.schulze/ceph-mgr-2019113.log.xz -
> in this case, the hang occurred shortly before 14:55.
> 
> any hints would be greatly appreciated …
> 
> thank you very much & with kind regards,

-- 
SUSE Software Solutions Germany GmbH - Maxfeldstr. 5 - 90409 Nuernberg
GF: Felix Imendörffer, HRB 36809 (AG Nürnberg)





Re: [ceph-users] Nautilus : ceph dashboard ssl not working

2019-09-26 Thread Lenz Grimmer
Hi Miha,

interesting observation, I don't think we've noticed this before. Would
you mind submitting a bug report about this on our tracker, including
these logs?

  https://tracker.ceph.com/projects/mgr/issues/new

Thanks in advance!

Lenz

On 9/26/19 10:01 AM, Miha Verlic wrote:

> On 24. 09. 19 14:53, Lenz Grimmer wrote:
>> On 9/24/19 1:37 PM, Miha Verlic wrote:
>>
>>> I've got a slightly different problem. After a few days of running fine,
>>> the dashboard stops working because it is apparently looking for the wrong
>>> certificate file in /tmp. If I restart ceph-mgr it starts to work again.
>>
>> Does the restart trigger the creation of a similar-looking file in /tmp?
>> I wonder if there's some kind of cron job that cleans up the /tmp
>> directory every now and then...
> 
> There is systemd-tmpfiles-clean.timer, but it ignores PrivateTmp folders.
> 
> But as I said, the crt and key files do exist in /tmp and, according to the
> timestamps, they were created when the ceph-mgr daemon was started; it seems
> to me that ceph-mgr simply starts looking for the wrong filenames after a while.
> 
> Going through the ceph-mgr logs, I'd say the problem is that the internal
> webserver is restarted, but ceph-mgr is not notified about the new crt/key files:
> 
> Sep 19 12:54:16 cephtest01 ceph-mgr[2247]: [19/Sep/2019:12:54:16] ENGINE
> Bus STOPPING
> 
> Sep 19 12:54:16 cephtest01 ceph-mgr[2247]: [19/Sep/2019:12:54:16] ENGINE
> HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 8443)) shut down
> 
> Sep 19 12:54:16 cephtest01 ceph-mgr[2247]: [19/Sep/2019:12:54:16] ENGINE
> Stopped thread '_TimeoutMonitor'.
> 
> Sep 19 12:54:16 cephtest01 ceph-mgr[2247]: [19/Sep/2019:12:54:16] ENGINE
> Bus STOPPED
> 
> Sep 19 12:54:16 cephtest01 ceph-mgr[2247]: [19/Sep/2019:12:54:16] ENGINE
> Bus STARTING
> 
> Sep 19 12:54:16 cephtest01 ceph-mgr[2247]: [19/Sep/2019:12:54:16] ENGINE
> Started monitor thread '_TimeoutMonitor'.
> 
> Sep 19 12:54:16 cephtest01 ceph-mgr[2247]: 2019-09-19 12:54:16.618
> 7f3a796e4700 -1 client.0 error registering admin socket command: (17)
> File exists
> 
> Sep 19 12:54:16 cephtest01 ceph-mgr[2247]: 2019-09-19 12:54:16.618
> 7f3a796e4700 -1 client.0 error registering admin socket command: (17)
> File exists
> 
> Sep 19 12:54:16 cephtest01 ceph-mgr[2247]: 2019-09-19 12:54:16.618
> 7f3a796e4700 -1 client.0 error registering admin socket command: (17)
> File exists
> 
> Sep 19 12:54:16 cephtest01 ceph-mgr[2247]: 2019-09-19 12:54:16.618
> 7f3a796e4700 -1 client.0 error registering admin socket command: (17)
> File exists
> 
> Sep 19 12:54:16 cephtest01 ceph-mgr[2247]: 2019-09-19 12:54:16.618
> 7f3a796e4700 -1 client.0 error registering admin socket command: (17)
> File exists
> 
> Sep 19 12:54:16 cephtest01 ceph-mgr[2247]: [19/Sep/2019:12:54:16] ENGINE
> Serving on :::8443
> 
> Sep 19 12:54:16 cephtest01 ceph-mgr[2247]: [19/Sep/2019:12:54:16] ENGINE
> Bus STARTED


-- 
SUSE Software Solutions Germany GmbH - Maxfeldstr. 5 - 90409 Nuernberg
GF: Felix Imendörffer, HRB 247165 (AG Nürnberg)





Re: [ceph-users] Nautilus : ceph dashboard ssl not working

2019-09-24 Thread Lenz Grimmer
On 9/24/19 1:37 PM, Miha Verlic wrote:

> I've got a slightly different problem. After a few days of running fine,
> the dashboard stops working because it is apparently looking for the wrong
> certificate file in /tmp. If I restart ceph-mgr it starts to work again.

Does the restart trigger the creation of a similar-looking file in /tmp?
I wonder if there's some kind of cron job that cleans up the /tmp
directory every now and then...

Lenz

-- 
SUSE Software Solutions Germany GmbH - Maxfeldstr. 5 - 90409 Nuernberg
GF: Felix Imendörffer, HRB 247165 (AG Nürnberg)





Re: [ceph-users] iostat and dashboard freezing

2019-08-27 Thread Lenz Grimmer
Hi Jake,

On 8/27/19 3:22 PM, Jake Grimmett wrote:

> That exactly matches what I'm seeing:
> 
> when iostat is working OK, I see ~5% CPU use by ceph-mgr
> and when iostat freezes, ceph-mgr CPU increases to 100%

Does this also occur if the dashboard module is disabled? Just wondering
if this can be isolated to the iostat module. Thanks!

Lenz

-- 
SUSE Software Solutions Germany GmbH - Maxfeldstr. 5 - 90409 Nuernberg
GF: Felix Imendörffer, HRB 247165 (AG Nürnberg)





Re: [ceph-users] Watch a RADOS object for changes, specifically iscsi gateway.conf object

2019-08-22 Thread Lenz Grimmer
On 8/22/19 9:38 PM, Wesley Dillingham wrote:

> I am interested in keeping a revision history of ceph-iscsi's
> gateway.conf object for any and all changes. It seems to me this may
> come in handy to revert the environment to a previous state. My question
> is: are there any existing tools which do something similar, or could someone
> please suggest, if they exist, libraries or code examples (ideally Python)
> which may further me in this goal?
> 
> The RADOS man page mentions a "listwatchers" command, so I believe this
> is achievable. Thanks in advance.

This is how ceph-iscsi seems to be doing it, by polling the config
object's epoch:

https://github.com/ceph/ceph-iscsi/blob/master/rbd-target-api.py#L2673

NFS Ganesha (C code) seems to be using a watch:

https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/config_parsing/conf_url_rados.c#L375

I'm not sure whether that interface is available via the Python bindings as
well - it might be more efficient than polling...
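
For illustration, here is a minimal polling sketch in Python using only the
plain librados bindings (the pool name and the copy-to-local-files approach
are just assumptions for the example, not how ceph-iscsi itself does it):

import time
import rados

POOL = 'rbd'            # assumption: the pool that holds the gateway.conf object
OBJ = 'gateway.conf'

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx(POOL)

last_mtime = None
try:
    while True:
        size, mtime = ioctx.stat(OBJ)   # (size, modification time) of the object
        if mtime != last_mtime:
            # the object changed - keep a timestamped copy as a crude revision history
            data = ioctx.read(OBJ, size)
            with open('gateway.conf.' + time.strftime('%Y%m%d-%H%M%S'), 'wb') as f:
                f.write(data)
            last_mtime = mtime
        time.sleep(10)
finally:
    ioctx.close()
    cluster.shutdown()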

Lenz

-- 
SUSE Software Solutions Germany GmbH - Maxfeldstr. 5 - 90409 Nuernberg
GF: Felix Imendörffer, HRB 247165 (AG Nürnberg)





Re: [ceph-users] More than 100% in a dashboard PG Status

2019-08-13 Thread Lenz Grimmer
Hi Fyodor,

(Cc:ing Alfonso)

On 8/13/19 12:47 PM, Fyodor Ustinov wrote:

> I have Ceph Nautilus (upgraded from Mimic, if that is important) and in
> the dashboard's "PG Status" section I see "Clean (2397%)".
> 
> Is this a bug?

Huh, that might well be the case - sorry about that. We'd be grateful if you
could submit this on the bug tracker (please attach a screenshot as well):

  https://tracker.ceph.com/projects/mgr/issues/new

We may require additional information from you, so please keep an eye on
the issue. Thanks in advance!

Lenz

-- 
SUSE Software Solutions Germany GmbH - Maxfeldstr. 5 - 90409 Nuernberg
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah HRB 21284 (AG Nürnberg)





Re: [ceph-users] Unable to list rbd block > images in nautilus dashboard

2019-04-04 Thread Lenz Grimmer
Hi Wes,

On 4/4/19 9:23 PM, Wes Cilldhaire wrote:

> Can anyone at all please confirm whether this is expected behaviour /
> a known issue, or give any advice on how to diagnose this?  As far as
> I can tell my mon and mgr are healthy.  All rbd images have
> object-map and fast-diff enabled.

My gut reaction, not exactly knowing the inner workings of how this
information is gathered: if it takes quite some time on the command line
as well, this might be due to some internal collection and calculation
of data, likely in the Ceph Manager itself. Could you check the CPU
utilization on the active manager node and which process is causing the
load? I assume that this is actually expected behaviour, even though I
would have expected cached information to be returned noticeably
faster. How many RBDs are we talking about here?

Lenz

-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)





Re: [ceph-users] ceph dashboard cert documentation bug?

2019-02-25 Thread Lenz Grimmer
On 2/6/19 11:52 AM, Junk wrote:

> I was trying to set my mimic dashboard cert using the instructions
> from 
> 
> http://docs.ceph.com/docs/mimic/mgr/dashboard/
> 
> and I'm pretty sure the lines
> 
> 
> $ ceph config-key set mgr mgr/dashboard/crt -i dashboard.crt
> $ ceph config-key set mgr mgr/dashboard/key -i dashboard.key
> 
> should be
> 
> $ ceph config-key set mgr/dashboard/crt -i dashboard.crt
> $ ceph config-key set mgr/dashboard/key -i dashboard.key

Why do you think so? Did you get an error message?

> Can anyone confirm?

Did you check https://tracker.ceph.com/issues/24689 ?

Lenz

-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)





Re: [ceph-users] Enabling Dashboard RGW management functionality

2019-02-21 Thread Lenz Grimmer
On 2/21/19 4:30 PM, Hayashida, Mami wrote:

> I followed the documentation
> (http://docs.ceph.com/docs/mimic/mgr/dashboard/) to enable the dashboard
> RGW management, but am still getting the 501 error ("Please consult the
> documentation on how to configure and enable the Object Gateway... "). 
> The dashboard itself is working.
> 
> 1. Created an RGW user with the --system flag
> 2. Entered its access and secret keys using the `dashboard
> set-rgw-api-secret/access-key` commands
> 3. Set the IP address and port
> 4. Set the RGW API user id (to match the RGW admin user whose credentials
> are being used for this)
> 
> I confirmed all of this with various `dashboard get-rgw ... ` commands,
> disabled and enabled the mgr dashboard service, logged out and back in,
> but am still getting the error.  I am running Ceph version 13.2.4.

Sorry for the trouble. Have you tried restarting the dashboard module?
Is there anything in the mgr log files that might give a hint?
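
(For reference - and please double-check against the Mimic documentation,
since I'm writing these from memory - the settings you describe usually map
to "ceph dashboard set-rgw-api-access-key <key>", "ceph dashboard
set-rgw-api-secret-key <key>", "ceph dashboard set-rgw-api-user-id <id>",
"ceph dashboard set-rgw-api-host <host>" and "ceph dashboard
set-rgw-api-port <port>".)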

Lenz

-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)





Re: [ceph-users] Failed to load ceph-mgr modules: telemetry

2019-02-12 Thread Lenz Grimmer
Hi Ashley,

On 2/9/19 4:43 PM, Ashley Merrick wrote:

> Any further suggestions? Should I just ignore the error "Failed to load
> ceph-mgr modules: telemetry", or is this the root cause of the missing
> realtime I/O readings in the Dashboard?

I don't think this is related. If you don't plan to enable the telemetry
module, this error can probably be ignored. However, I wonder why you
don't see those readings. Would you mind submitting an issue on the
tracker about this, ideally with the exact Ceph version you're running
and a screenshot of where the metrics are missing?

Thanks,

Lenz

-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)





Re: [ceph-users] Simple API to have cluster healthcheck ?

2019-01-30 Thread Lenz Grimmer


On 30 January 2019 at 19:33:14 CET, PHARABOT Vincent wrote:

>Thanks for the info
>But, nope, on Mimic (13.2.4) /api/health ends in 404 (/api/health/full,
>/api/health/minimal also...)

On which node did you try to access the API? Did you enable the Dashboard 
module in the Ceph Manager?

Lenz

-- 
This message was sent from my Android device with K-9 Mail.


Re: [ceph-users] Simple API to have cluster healthcheck ?

2019-01-30 Thread Lenz Grimmer
Hi,

On 1/30/19 2:02 PM, PHARABOT Vincent wrote:

> I have my cluster set up correctly now (thank you again for the help)

What version of Ceph is this?

> I am now looking for a way to get the cluster health through a (REST) API
> with a curl command.
> 
> I had a look at the manager's RESTful and Dashboard modules, but neither
> seems to provide a simple way to get the cluster health.
> 
> The RESTful module does a lot of things, but I didn't find a simple health
> check result - moreover, I don't want the monitoring user to be able to run
> all the commands in this module.
> 
> The Dashboard is a dashboard, so I could not get the health through curl.

Hmm, the Mimic dashboard's REST API should expose an "/api/health"
endpoint. Have you tried that one?

For Nautilus, this seems to have been split into /api/health/full and
/api/health/minimal, to reduce the overhead.
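
If the REST route turns out to be impractical, here is a minimal sketch of an
alternative that asks the monitors directly via librados (assuming the Python
rados bindings and a readable client keyring on the machine running it):

import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
# "health" is the same command the CLI uses; format=json makes it parseable
ret, outbuf, errs = cluster.mon_command(
    json.dumps({"prefix": "health", "format": "json"}), b'')
health = json.loads(outbuf)
print(health.get("status"))  # e.g. HEALTH_OK / HEALTH_WARN / HEALTH_ERR
cluster.shutdown()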

Lenz

-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)





Re: [ceph-users] MGR Dashboard

2018-11-29 Thread Lenz Grimmer
Hi Ashley,

On 11/29/18 11:41 AM, Ashley Merrick wrote:

> Managed to fix the issue with some googling from the error above.
> 
> There is a bug in urllib3 1.24.1 which breaks the ordered_dict module (1).

Good spotting!

> I rolled back to a working version "pip install urllib3==1.23" and
> restarted the mgr service and all is now working.

Glad to hear you got it working again. Thanks for the update!

> (1)https://github.com/urllib3/urllib3/issues/1456

Lenz

-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)





Re: [ceph-users] MGR Dashboard

2018-11-29 Thread Lenz Grimmer
On 11/29/18 11:29 AM, Ashley Merrick wrote:

> Yeah had a few OS updates, but not related directly to CEPH.

But they seem to be the root cause of the issue you're facing. Thanks
for sharing the entire log entry.

> The full error log after a reboot is :
> 
> 2018-11-29 11:24:22.494 7faf046a1700  1 mgr[restful] server not running:
> no certificate configured
> 2018-11-29 11:24:22.586 7faf05ee4700 -1 log_channel(cluster) log [ERR] :
> Unhandled exception from module 'dashboard' while running on
> mgr.ceph-m01: No module named ordered_dict
> 2018-11-29 11:24:22.586 7faf05ee4700 -1 dashboard.serve:
> 2018-11-29 11:24:22.586 7faf05ee4700 -1 Traceback (most recent call last):
>   File "/usr/lib/ceph/mgr/dashboard/module.py", line 276, in serve
>     mapper = generate_routes(self.url_prefix)
>   File "/usr/lib/ceph/mgr/dashboard/controllers/__init__.py", line 118,
> in generate_routes
>     ctrls = load_controllers()
>   File "/usr/lib/ceph/mgr/dashboard/controllers/__init__.py", line 73,
> in load_controllers
>     package='dashboard')
>   File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
>     __import__(name)
>   File "/usr/lib/ceph/mgr/dashboard/controllers/rgw.py", line 10, in <module>
>     from ..services.rgw_client import RgwClient
>   File "/usr/lib/ceph/mgr/dashboard/services/rgw_client.py", line 5, in <module>
>     from ..awsauth import S3Auth
>   File "/usr/lib/ceph/mgr/dashboard/awsauth.py", line 49, in <module>
>     from requests.auth import AuthBase
>   File "/usr/lib/python2.7/dist-packages/requests/__init__.py", line 97, in <module>
>     from . import utils
>   File "/usr/lib/python2.7/dist-packages/requests/utils.py", line 26, in <module>
>     from ._internal_utils import to_native_string
>   File "/usr/lib/python2.7/dist-packages/requests/_internal_utils.py", line 11, in <module>
>     from .compat import is_py2, builtin_str, str
>   File "/usr/lib/python2.7/dist-packages/requests/compat.py", line 47, in <module>
>     from urllib3.packages.ordered_dict import OrderedDict
> ImportError: No module named ordered_dict
> 
> I have tried "ceph mgr module enable dashboard" and it says already
> enabled. I tried a disable, restart and enable, and I get the same error as above.

Try re-installing the "python-urllib3" and "python-requests" packages
via apt - somehow Python fails to import a module from the former
library.
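
To verify the fix (or to reproduce the problem first), a quick check with the
same interpreter the mgr modules use could look like this (just an
illustration, not part of Ceph):

# run with the Python the mgr uses (Python 2.7 on this setup)
import urllib3
import requests  # on urllib3 1.24.x this import chain is what raises "No module named ordered_dict"

print("urllib3 %s, requests %s" % (urllib3.__version__, requests.__version__))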

Lenz

-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)





Re: [ceph-users] MGR Dashboard

2018-11-29 Thread Lenz Grimmer
On 11/29/18 10:28 AM, Ashley Merrick wrote:

> Sorry missed the basic info!!
> 
> Latest Mimic 13.2.2
> 
> Ubuntu 18.04

Thanks. So it worked before the reboot and did not afterwards? What
changed? Did you perform an OS update?

Would it be possible for you to paste the entire mgr log file messages
that are printed after the manager restarted? Have you tried to
explicitly enable the dashboard by running "ceph mgr module enable
dashboard"?

Lenz

-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)





Re: [ceph-users] MGR Dashboard

2018-11-29 Thread Lenz Grimmer
Hi Ashley,

On 11/29/18 7:16 AM, Ashley Merrick wrote:

> After rebooting a server that hosts the MGR Dashboard I am now unable to
> get the dashboard module to run.
> 
> Upon restarting the mgr service I see the following :
> 
> ImportError: No module named ordered_dict
> Nov 29 07:13:14 ceph-m01 ceph-mgr[12486]: [29/Nov/2018:07:13:14] ENGINE
> Serving on http://:::9283
> Nov 29 07:13:14 ceph-m01 ceph-mgr[12486]: [29/Nov/2018:07:13:14] ENGINE
> Bus STARTED
> 
> I have checked using pip install ordereddict and it states the module is
> already installed.

What version of Ceph is this? What OS?

Lenz

-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)





Re: [ceph-users] Bug: Deleting images ending with whitespace in name via dashboard

2018-11-26 Thread Lenz Grimmer
Hi Alexander,

On 11/13/18 12:37 PM, Kasper, Alexander wrote:

> As I am not sure how to correctly use tracker.ceph.com, I'll post my
> report here:
> 
> Using the dashboard to delete an RBD image via the GUI throws an error when
> the image name ends with a whitespace (a user input error led to this
> situation).
> 
> Also, editing this image via the dashboard throws an error.
> 
> Deleting via the CLI with the pool/image name put in quotes was successful.
> 
> Should the input be filtered?

Thank you for reporting this and sorry for the late reply. It looks as
if you figured out how to submit this via the tracker:
https://tracker.ceph.com/issues/37084

I left some comments there, your feedback would be welcome. Thank you!

Lenz

-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)





Re: [ceph-users] cephday berlin slides

2018-11-16 Thread Lenz Grimmer
Hi Serkan,

On 11/16/18 11:29 AM, Serkan Çoban wrote:

> Does anyone know if slides/recordings will be available online?

Unfortunately, the presentations were not recorded. However, the slides
are usually made available on the corresponding event page,
https://ceph.com/cephdays/ceph-day-berlin/ in this case.

I have already submitted my presentation about the Dashboard here:

https://www.slideshare.net/LenzGr/managing-and-monitoring-ceph-ceph-day-berlin-20181112

The organizers also approached the speakers to submit their slides, so
I'd assume they should appear on the event page at some point.

Lenz

-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)





Re: [ceph-users] Dashboard Object Gateway

2018-09-18 Thread Lenz Grimmer
Hi Hendrik,

On 09/18/2018 12:57 PM, Hendrik Peyerl wrote:

> we just deployed an Object Gateway to our CEPH Cluster via ceph-deploy
> in an IPv6 only Mimic Cluster. To make sure the RGW listens on IPv6 we
> set the following config:
> rgw_frontends = civetweb port=[::]:7480
> 
> We now tried to enable the dashboard functionality for said gateway but
> we are running into an error 500 after trying to access it via the
> dashboard, the mgr log shows the following:
> 
> {"status": "500 Internal Server Error", "version": "3.2.2", "detail":
> "The server encountered an unexpected condition which prevented it from
> fulfilling the request.", "traceback": "Traceback (most recent call
> last):\\n  File
> \\"/usr/lib/python2.7/site-packages/cherrypy/_cprequest.py\\", line 656,
> in respond\\n    response.body = self.handler()\\n  File
> \\"/usr/lib/python2.7/site-packages/cherrypy/lib/encoding.py\\", line
> 188, in __call__\\n    self.body = self.oldhandler(*args, **kwargs)\\n
> File \\"/usr/lib/python2.7/site-packages/cherrypy/lib/jsontools.py\\",
> line 61, in json_handler\\n    value =
> cherrypy.serving.request._json_inner_handler(*args, **kwargs)\\n  File
> \\"/usr/lib/python2.7/site-packages/cherrypy/_cpdispatch.py\\", line 34,
> in __call__\\n    return self.callable(*self.args, **self.kwargs)\\n
> File \\"/usr/lib64/ceph/mgr/dashboard/controllers/rgw.py\\", line 23, in
> status\\n    instance = RgwClient.admin_instance()\\n  File
> \\"/usr/lib64/ceph/mgr/dashboard/services/rgw_client.py\\", line 138, in
> admin_instance\\n    return
> RgwClient.instance(RgwClient._SYSTEM_USERID)\\n  File
> \\"/usr/lib64/ceph/mgr/dashboard/services/rgw_client.py\\", line 121, in
> instance\\n    RgwClient._load_settings()\\n  File
> \\"/usr/lib64/ceph/mgr/dashboard/services/rgw_client.py\\", line 102, in
> _load_settings\\n    host, port = _determine_rgw_addr()\\n  File
> \\"/usr/lib64/ceph/mgr/dashboard/services/rgw_client.py\\", line 78, in
> _determine_rgw_addr\\n    raise LookupError(\'Failed to determine RGW
> port\')\\nLookupError: Failed to determine RGW port\\n"}']
> 
> 
> Any help would be greatly appreciated.

Would you mind sharing the commands that you used to configure the RGW
connection details?

The host and port of the Object Gateway should be determined
automatically; I wonder if the IPv6 notation gets mangled somewhere here.

Have you tried setting them explicitly using "ceph dashboard
set-rgw-api-host <host>" and "ceph dashboard set-rgw-api-port <port>"?

Thanks,

Lenz

-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)





Re: [ceph-users] New Ceph community manager: Mike Perez

2018-08-29 Thread Lenz Grimmer

Great news. Welcome Mike! I look forward to working with you, let me
know if there is anything I can help you with.

Lenz

On 08/29/2018 03:13 AM, Sage Weil wrote:

> Please help me welcome Mike Perez, the new Ceph community manager!
> 
> Mike has a long history with Ceph: he started at DreamHost working on 
> OpenStack and Ceph back in the early days, including work on the original 
> RBD integration.  He went on to work in several roles in the OpenStack 
> project, doing a mix of infrastructure, cross-project and community 
> related initiatives, including serving as the Project Technical Lead for 
> Cinder.
> 
> Mike lives in Pasadena, CA, and can be reached at mpe...@redhat.com, on 
> IRC as thingee, or twitter as @thingee.
> 
> I am very excited to welcome Mike back to Ceph, and look forward to 
> working together on building the Ceph developer and user communities!

-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)





Re: [ceph-users] Reminder: bi-weekly dashboard sync call today (15:00 CET)

2018-08-24 Thread Lenz Grimmer
On 08/24/2018 02:00 PM, Lenz Grimmer wrote:

> On 08/24/2018 10:59 AM, Lenz Grimmer wrote:
> 
>> JFYI, the team working on the Ceph Manager Dashboard has a bi-weekly
>> conference call that discusses the ongoing development and gives an
>> update on recent improvements/features.
>>
>> Today, we plan to give a demo of the new dashboard landing page (See
>> https://tracker.ceph.com/issues/24573 and
>> https://github.com/ceph/ceph/pull/23568 for details) and the
>> implementation of the "RBD trash" functionality in the UI
>> (http://tracker.ceph.com/issues/24272 and
>> https://github.com/ceph/ceph/pull/23351)
>>
>> The meeting takes place every second Friday at 15:00 CET at this URL:
>>
>>   https://bluejeans.com/150063190
> 
> My apologies, I picked an incorrect meeting URL - this is the correct one:
> 
>   https://bluejeans.com/470119167/
> 
> Sorry for the confusion.

Thanks to everyone who participated. We actually moved to yet another
BlueJeans session in order to be able to record it...

For those of you who missed it, here's a recording:

https://bluejeans.com/s/HXnam

Have a nice weekend!

Lenz

-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)





Re: [ceph-users] Reminder: bi-weekly dashboard sync call today (15:00 CET)

2018-08-24 Thread Lenz Grimmer
On 08/24/2018 10:59 AM, Lenz Grimmer wrote:

> JFYI, the team working on the Ceph Manager Dashboard has a bi-weekly
> conference call that discusses the ongoing development and gives an
> update on recent improvements/features.
> 
> Today, we plan to give a demo of the new dashboard landing page (See
> https://tracker.ceph.com/issues/24573 and
> https://github.com/ceph/ceph/pull/23568 for details) and the
> implementation of the "RBD trash" functionality in the UI
> (http://tracker.ceph.com/issues/24272 and
> https://github.com/ceph/ceph/pull/23351)
> 
> The meeting takes place every second Friday at 15:00 CET at this URL:
> 
>   https://bluejeans.com/150063190

My apologies, I picked an incorrect meeting URL - this is the correct one:

  https://bluejeans.com/470119167/

Sorry for the confusion.

Lenz

-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)





[ceph-users] Reminder: bi-weekly dashboard sync call today (15:00 CET)

2018-08-24 Thread Lenz Grimmer
Hi all,

JFYI, the team working on the Ceph Manager Dashboard has a bi-weekly
conference call that discusses the ongoing development and gives an
update on recent improvements/features.

Today, we plan to give a demo of the new dashboard landing page (See
https://tracker.ceph.com/issues/24573 and
https://github.com/ceph/ceph/pull/23568 for details) and the
implementation of the "RBD trash" functionality in the UI
(http://tracker.ceph.com/issues/24272 and
https://github.com/ceph/ceph/pull/23351)

The meeting takes place every second Friday at 15:00 CET at this URL:

  https://bluejeans.com/150063190

See you there!

Lenz

-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)





Re: [ceph-users] mgr/dashboard: backporting Ceph Dashboard v2 to Luminous

2018-08-23 Thread Lenz Grimmer
On 08/22/2018 08:57 PM, David Turner wrote:

> My initial reaction to this PR/backport was questioning why such a
> major update would happen on a dot release of Luminous.  Your
> reaction to keeping both dashboards viable goes to support that.
> Should we really be backporting features into a dot release that
> force people to change how they use the software?  That seems more of
> the purpose of having new releases.

This is indeed an unusual case. But considering that the Dashboard does
not really change any of the Ceph core functionality but adds a lot of
value by improving the usability and manageability of Ceph, we agreed
with Sage on making an exception here.

> I haven't really used either dashboard though.  Other than adding
> admin functionality, does it remove any functionality of the previous
> dashboard?

Like Kai wrote, our initial goal was to reach feature parity with
Dashboard v1, in order to not introduce a regression when replacing it.

In the meanwhile, Dashboard v2 is way beyond that and we have added a
lot of additional functionality, e.g. RBD and RGW management.

By backporting this to Luminous, we also hope to reach a larger
audience of users who have not upgraded to Mimic yet.

Lenz

-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)





Re: [ceph-users] ceph cluster monitoring tool

2018-07-24 Thread Lenz Grimmer
On 07/24/2018 07:02 AM, Satish Patel wrote:

> My 5-node Ceph cluster is ready for production; now I am looking for a
> good (open source) monitoring tool. What are the majority of folks using
> in their production environments?

There are several. Using Prometheus with the Ceph Manager's prometheus
exporter module is a popular choice for collecting the metrics. The
cephmetrics project provides an extensive collection of Grafana dashboards
that will help with the visualization and some alerting based on these metrics.
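
As a quick sanity check that metrics are actually being exported (assuming the
ceph-mgr prometheus module is enabled and listening on its default port 9283 -
adjust the host and port if yours differ), something like this can be run
before wiring up Prometheus itself:

import requests

MGR_HOST = 'mgr-host.example.com'   # assumption: replace with your active mgr
resp = requests.get('http://%s:9283/metrics' % MGR_HOST, timeout=10)
resp.raise_for_status()
# print a few well-known Ceph metrics to confirm the exporter is serving data
for line in resp.text.splitlines():
    if line.startswith('ceph_health_status') or line.startswith('ceph_osd_up'):
        print(line)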

Lenz

-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)





Re: [ceph-users] [Ceph-community] Ceph Tech Talk Calendar

2018-06-20 Thread Lenz Grimmer
Hi Leo,

On 06/20/2018 01:47 AM, Leonardo Vaz wrote:

> We created the following etherpad to organize the calendar for the
> future Ceph Tech Talks.
> 
> For the Ceph Tech Talk of June 28th our fellow George Mihaiescu will
> tell us how Ceph is being used on cancer research at OICR (Ontario
> Institute for Cancer Research).
> 
> If you're interested to contribute, please choose one of the available
> dates, add the topic you want to present and your name (or feel free
> to contact me).
> 
>   https://pad.ceph.com/p/ceph-tech-talks-2018

Hmm. IMHO, an Etherpad is somewhat too volatile for this kind of
information, but it of course makes it easier for others to submit
proposals.

If maintaining this on https://ceph.com/ceph-tech-talks/ is too
difficult, would it make sense to use the Wiki for this instead?

In any case, the page at which this information is collected should be
linked from prominent places, e.g. ceph.com/community or
https://tracker.ceph.com/projects/ceph/wiki/Community

Thanks,

Lenz

-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)





Re: [ceph-users] Install ceph manually with some problem

2018-06-19 Thread Lenz Grimmer
On 06/18/2018 08:38 PM, Michael Kuriger wrote:

> Don’t use the installer scripts.  Try  yum install ceph

I'm not sure I agree. While running "make install" is of course of somewhat
limited use on a distributed cluster, I would expect that it at least
installs all the required components on the local system in the
appropriate locations.

The "ImportError: No module named rados" error sounds as if the local
installation of a Python module did not work as expected - I suggest
submitting a bug report about this.

Lenz

-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)





Re: [ceph-users] Ceph Mimic on Debian 9 Stretch

2018-06-13 Thread Lenz Grimmer
On 06/13/2018 02:01 PM, Sean Purdy wrote:

> Me too.  I picked ceph luminous on debian stretch because I thought
> it would be maintained going forwards, and we're a debian shop.  I
> appreciate Mimic is a non-LTS release, I hope issues of debian
> support are resolved by the time of the next LTS.

There won't be a "next LTS" in the fashion that previous Ceph releases
were published.

See
http://docs.ceph.com/docs/master/releases/schedule/#stable-releases-x-2-z
for how this is handled now.

Lenz

-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)





Re: [ceph-users] Installing iSCSI support

2018-06-13 Thread Lenz Grimmer
On 06/12/2018 07:14 PM, Max Cuttins wrote:

> it's an honor for me to contribute to the main repo of Ceph.

We appreciate your support! Please take a look at
http://docs.ceph.com/docs/master/start/documenting-ceph/ for guidance on
how to contribute to the documentation.

> Just a thought: is it wise to have the docs within the software?
> Isn't it better to move the docs to a less sensitive repo?

Why do you think so? Every modification is peer reviewed before
inclusion into the source tree. If your documentation fix would
accidentally modify other parts, this would easily be spotted during the
pull request review. Having the docs "near" the actual code improves the
actuality, as new features or changes in behavior can also include the
corresponding documentation updates. This also makes it easier to manage
multiple branches/versions of the code, as there is no disconnect.

Lenz

-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)





Re: [ceph-users] Show and Tell: Grafana cluster dashboard

2018-06-04 Thread Lenz Grimmer
On 05/08/2018 07:21 AM, Kai Wagner wrote:

> Looks very good. Is it possible to display the reason why a
> cluster is in an error or warning state? I'm thinking about the output of
> ceph -s and whether this could be shown in case there's a failure. I think
> this is not provided by default, but I'm wondering if it's possible to add.

Sorry for the late reply. We actually discussed this aspect during one
of the calls we had when discussing the Grafana dashboard integration
into the Ceph Manager Dashboard. This kind of state information is
somewhat difficult to track and visualize using Prometheus/Grafana (or
any other TSDB, FWIW), as you can't store the actual reasons why the
cluster is in HEALTH_WARN or HEALTH_ERR, for example.

We are therefore considering displaying this information in the form of
"native" widgets on the Manager Dashboard, and using the Grafana
dashboards for visualizing the other more suitable performance metrics.

Lenz

-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)





Re: [ceph-users] Is there a faster way of copy files to and from a rgw bucket?

2018-04-23 Thread Lenz Grimmer
Hi Marc,

On 04/21/2018 11:34 AM, Marc Roos wrote:

> I wondered if there are faster ways to copy files to and from a bucket, 
> like eg not having to use the radosgw? Is nfs-ganesha doing this faster 
> than s3cmd?

I have doubts that putting another layer on top of S3 will make it
faster than the native communication protocol. NFS Ganesha on S3 is
primarily designed to aid the bulk import or export of data for
applications that do not support S3 natively and need a filesystem
abstraction layer to alleviate that.

If the file copy to RGW via s3cmd feels slow, I'd suggest to look for
potential performance improvements in that combo first.

Lenz

-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)





[ceph-users] Ceph Dashboard v2 update

2018-04-09 Thread Lenz Grimmer
Hi all,

a month has passed since the Dashboard v2 was merged into the master
branch, so I thought it might be helpful to write a summary/update (with
screenshots) of what we've been up to since then:

  https://www.openattic.org/posts/ceph-dashboard-v2-update/

Let us know what you think!

Cheers,

Lenz

-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)





Re: [ceph-users] Ceph Developer Monthly - March 2018

2018-03-01 Thread Lenz Grimmer
On 02/28/2018 11:51 PM, Sage Weil wrote:

> On Wed, 28 Feb 2018, Dan Mick wrote:
> 
>> Would anyone else appreciate a Google Calendar invitation for the
>> CDMs? Seems like a natural.
> 
> Funny you should mention it!  I was just talking to Leo this morning
> about creating a public Ceph Events calendar that has all of the
> public events (CDM, tech talks, weekly perf call, etc.).
> 
> (Also, we're setting up a Ceph Meetings calendar for meetings that
> aren't completely public that can be shared with active developers
> for standing meetings that are currently invite-only meetings.  e.g.,
> standups, advisory board, etc.)

That'd be excellent - +1

Thanks!

Lenz

-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)





Re: [ceph-users] Linux Distribution: Is upgrade the kerner version a good idea?

2018-02-26 Thread Lenz Grimmer
On 02/25/2018 01:18 PM, Massimiliano Cuttini wrote:

> Is upgrading the kernel to a new major version on a distribution a bad idea?
> Or is it just as safe as upgrading any other package?
> I prefer ultra-stable releases over the latest packages.

In that case it's probably best to stick with the latest kernel that has
been released by the distributor for that particular distribution
version. Upgrading to newer kernel versions can be tricky, if they
require new userland utilities for managing new features.

> But maybe I'm wrong in thinking that the latest major kernel, when it is not
> in the default repository, is likely a sort of "dev distribution", when
> instead it is just a stable release like every other.

It's in there for a reason ;) For running Ceph (on the cluster side), a
latest and greatest kernel version is not really required, as the Ceph
services run in userland anyway.

The only use case I can think of that might benefit from
running a more recent kernel is using the kRBD or CephFS kernel modules, but
these are "client drivers" - no need to upgrade the kernel on your
entire cluster because of that.

Lenz

-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)





Re: [ceph-users] Luminous and calamari

2018-02-16 Thread Lenz Grimmer
On 02/16/2018 07:16 AM, Kai Wagner wrote:

> yes there are plans to add management functionality to the dashboard as
> well. As soon as we're covered all the existing functionality to create
> the initial PR we'll start with the management stuff. The big benefit
> here is, that we can profit what we've already done within openATTIC.
> 
> If you've missed the ongoing Dashboard V2 discussions and work, here's a
> blog post to follow up:
> 
> https://www.openattic.org/posts/ceph-manager-dashboard-v2/
> 
> Let us know about your thoughts on this.

And just to add to this - of course the standalone version of openATTIC
is still available, too ;)

Lenz

-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)





Re: [ceph-users] Ceph Day Germany :)

2018-02-09 Thread Lenz Grimmer
Hi all,

On 02/08/2018 11:23 AM, Martin Emrich wrote:

> I just want to thank all organizers and speakers for the awesome Ceph
> Day at Darmstadt, Germany yesterday.
> 
> I learned of some cool stuff I'm eager to try out (NFS-Ganesha for RGW,
> openATTIC,...), Organization and food were great, too.

I agree - thanks a lot to Danny Al-Gaaf and Leonardo for the overall
organization, and of course the sponsors and speakers who made it
happen! I too learned a lot.

Lenz

-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)





[ceph-users] Custom Prometheus alerts for Ceph?

2018-01-31 Thread Lenz Grimmer

Just curious, is anyone aware of $SUBJECT? As Prometheus provides a
built-in alert mechanism [1], are there any custom rules that people use
to receive notifications about critical situations in a Ceph cluster?

Would it make sense to collect these and have them included in a git
repo under the Ceph project, maybe even as part of the official ceph repo?
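
As one concrete example of what such a rule could look like (based on the
metric names exposed by the ceph-mgr prometheus module - please treat the
exact names as an assumption to verify against your setup), an expression
like "ceph_health_status > 0" would fire whenever the cluster leaves
HEALTH_OK (1 = HEALTH_WARN, 2 = HEALTH_ERR); per-daemon rules on metrics
such as "ceph_osd_up" would be obvious next candidates.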

Lenz

[1] https://prometheus.io/docs/alerting/alertmanager/
-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)





Re: [ceph-users] Ceph Future

2018-01-23 Thread Lenz Grimmer
Ciao Massimiliano,

On 01/23/2018 01:29 PM, Massimiliano Cuttini wrote:

>>   https://www.openattic.org/features.html
>
> Oh god THIS is the answer!

:)

> Lenz, if you need help I can join also development.

You're more than welcome - we have a lot of work ahead of us...
Feel free to join our Freenode IRC channel #openattic to get in touch!

Lenz

-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)





Re: [ceph-users] Ceph Future

2018-01-16 Thread Lenz Grimmer
Hi Massimiliano,

On 01/11/2018 12:15 PM, Massimiliano Cuttini wrote:

> 3) Management complexity
> Ceph is amazing, but it is just too big to have everything under control
> (too many services).
> Now there is a management console, but as far as I have read, this
> management console just shows basic data about performance.
> So it doesn't manage at all... it's just a monitor...
> 
> In the end you have to manage everything via the command line.

[...]

> The management complexity could be completely overcome with a great web
> manager.
> A web manager, in the end, is just a wrapper for shell commands sent from
> the Ceph admin node to the others.
> If you think about it, such a wrapper is many times easier to develop
> than what has already been developed.
> I really do see that Ceph is the future of storage. But there is some
> easily avoidable complexity that needs to be reduced.
> 
> If there are already plans for these issues, I would really like to know.

FWIW, there is openATTIC, which provides additional functionality beyond
what the current dashboard provides. It's a web application that
utilizes various existing APIs (e.g. librados and the RGW Admin Ops API):

  https://www.openattic.org/features.html

Lenz

-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)





Re: [ceph-users] rbd: map failed

2018-01-10 Thread Lenz Grimmer
On 01/09/2018 07:46 PM, Karun Josy wrote:

> We have a user "testuser" with below permissions :
> 
> $ ceph auth get client.testuser
> exported keyring for client.testuser
> [client.testuser]
>         key = ==
>         caps mon = "profile rbd"
>         caps osd = "profile rbd pool=ecpool, profile rbd pool=cv,
> profile rbd-read-only pool=templates"
> 
> 
> But when we try to map an image in pool 'templates' we get the below
> error : 
> --
> # rbd map templates/centos.7-4.x86-64.2017 --id testuser
> rbd: sysfs write failed
> In some cases useful info is found in syslog - try "dmesg | tail".
> rbd: map failed: (1) Operation not permitted
> 
> 
> Is it because that user has only read permission in templates pool ?

Did you check "dmesg" as outlined in the error message? Anything in
there that might give a hint?

Lenz

-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)





Re: [ceph-users] Any RGW admin frontends?

2017-12-15 Thread Lenz Grimmer
Hi Dan,

On 12/15/2017 10:13 AM, Dan van der Ster wrote:

> As we are starting to ramp up our internal rgw service, I am wondering
> if someone already developed some "open source" high-level admin tools
> for rgw. On the one hand, we're looking for a web UI for users to create
> and see their credentials, quota, usage, and maybe a web bucket browser.

Except for a bucket browser, openATTIC 3.6 should provide the RGW
management features you are looking for.

> Then from our service PoV, we're additionally looking for tools for
> usage reporting, generating periodic high-level reports, ... 

We use Prometheus/Grafana for that, not sure if that would work for you?

> I'm aware of the OpenStack "Object Store" integration with rgw, but I'm
> curious what exists outside the OS universe.

The Inkscope folks have also started adding RGW management, but I'm not
sure how active this project is.

Lenz

-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)





Re: [ceph-users] Ceph metric exporter HTTP Error 500

2017-12-15 Thread Lenz Grimmer
Hi,

On 12/15/2017 11:53 AM, Falk Mueller-Braun wrote:

> since we upgraded to Luminous (12.2.2), we use the internal Ceph
> exporter for getting the Ceph metrics to Prometheus. At random times we
> get an Internal Server Error from the Ceph exporter, with Python raising a
> key error on some random metric. Often it is "pg_*".
> 
> Here is an example of the python exception:
> 
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/dist-packages/cherrypy/_cprequest.py", line 
> 670, in respond
> response.body = self.handler()
>   File "/usr/lib/python2.7/dist-packages/cherrypy/lib/encoding.py", line 
> 217, in __call__
> self.body = self.oldhandler(*args, **kwargs)
>   File "/usr/lib/python2.7/dist-packages/cherrypy/_cpdispatch.py", line 
> 61, in __call__
> return self.callable(*self.args, **self.kwargs)
>   File "/usr/lib/ceph/mgr/prometheus/module.py", line 386, in metrics
> metrics = global_instance().collect()
>   File "/usr/lib/ceph/mgr/prometheus/module.py", line 324, in collect
> self.get_pg_status()
>   File "/usr/lib/ceph/mgr/prometheus/module.py", line 266, in 
> get_pg_status
> self.metrics[path].set(value)
> KeyError: 'pg_deep'
> 
> After a certain time (could be 3-5 minutes oder sometimes even 40
> minutes), the metric sending starts working again without any help.
> 
> Has anyone got an idea what could be done about that or does experience
> similar problems?

This seems to be a regression in 12.2.2 -
http://tracker.ceph.com/issues/22441 (which is a duplicate of
http://tracker.ceph.com/issues/22116)

And then there's another one that might be related:
http://tracker.ceph.com/issues/22313

Lenz

-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)





Re: [ceph-users] Calamari ( what a nightmare !!! )

2017-12-13 Thread Lenz Grimmer
Hi Harry,

On 12/12/2017 02:18 AM, DHD.KOHA wrote:

> After managing to install Ceph, in all the possible ways that I could
> manage, on 4 nodes with 4 OSDs and 3 monitors, first with ceph-deploy and
> later with ceph-ansible, I thought I would give installing CALAMARI a try on
> UBUNTU 14.04 (another separate server, not being a node or anything in the
> cluster).
> 
> After all the mess of salt 2014.7.5 and the different UBUNTU versions -
> since I am installing the nodes on xenial but CALAMARI on trusty, while the
> calamari packages on the nodes come from download.ceph.com and trusty - I
> ended up having a server that refuses to gather anything from anyplace at all.

[SNIP]

> which obviously means that I am doing something WRONG and I have no IDEA
> what it is.
> 
> Given the fact that documentation on the matter is very poor to limited,
> 
> is there anybody out there with some clues or hints that they are willing
> to share?

I don't think you're doing anything wrong - Calamari has simply not been
updated for some time and likely does not work with the latest Ceph
version anymore.

Sorry for all the trouble - the installation process certainly is a big
hurdle to get started, especially with so many moving parts and components.

If you're still willing to give this another try (and you're fine with
using openSUSE as the base OS instead of Ubuntu), I'd like to suggest
giving DeepSea [1] (a Salt-based Ceph deployment and configuration
framework) and openATTIC [2] a try.

We have a quick install guide here:

http://docs.openattic.org/en/latest/install_guides/quick_start_guide.html

It's also possible to set up openATTIC manually and configure it to talk
to an existing Ceph Luminous cluster - but that too requires some manual
steps.

Feel free to get in touch with us if you need help.

Lenz

[1] https://github.com/SUSE/DeepSea
[2] https://openattic.org/





[ceph-users] Deploying Ceph with Salt/DeepSea on CentOS 7

2017-11-22 Thread Lenz Grimmer
Hi,

FYI, DeepSea, the Salt-based framework to deploy and manage a Ceph
cluster, now has experimental support on CentOS 7.

Thanks to Ricardo Dias for getting this up and running as a SUSE
Hackweek project.

You can see a demo here: https://asciinema.org/a/147812

RPM packages can be found here:
https://copr.fedorainfracloud.org/coprs/rjdias/home/packages/

See Ricardo's post for more details:

http://lists.suse.com/pipermail/deepsea-users/2017-November/000202.html

Please join the deepsea-users mailing list and let us know how it works
for you. Feedback and patches are welcome!

Lenz





Re: [ceph-users] Ceph monitoring

2017-10-05 Thread Lenz Grimmer
On 10/05/2017 12:15 PM, Jasper Spaans wrote:

> Thanks for the pointers - I guess I'll need to find some time to change
> those dashboards to use the ceph-mgr metrics names (at least, I'm unsure
> if the DO exporter uses the same names as ceph-mgr.) To be continued..

Not sure about that; AFAIK the Prometheus exporter based on ceph-mgr was
merged into the master branch already.

Lenz



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph monitoring

2017-10-05 Thread Lenz Grimmer
Hi,

On 10/03/2017 08:37 AM, Jasper Spaans wrote:

> Now to find or build a pretty dashboard with all of these metrics. I
> wasn't able to find something in the grafana supplied dashboards, and
> haven't spent enough time on openattic to extract a dashboard from
> there. Any pointers appreciated!

openATTIC simply embeds Grafana dashboards, which are set up by DeepSea,
which also takes care of the initial cluster deploy, including the
required Prometheus node and exporters (we use the DigitalOcean Ceph
exporter):

https://github.com/SUSE/DeepSea
https://github.com/digitalocean/ceph_exporter

The Grafana dashboard files can be found here:

https://github.com/SUSE/DeepSea/tree/master/srv/salt/ceph/monitoring/grafana/files

Lenz



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] iSCSI production ready?

2017-07-19 Thread Lenz Grimmer
On 07/17/2017 10:15 PM, Alvaro Soto wrote:

> The second part, never mind - now I see that the solution is to use
> the TCMU daemon. I was thinking of an out-of-the-box iSCSI endpoint
> directly from CEPH; sorry, I don't have too much expertise in this area.

There is no "native" iSCSI support built into Ceph directly. In addition
to TCMU mentioned by Jason, there also is "lrbd":
https://github.com/SUSE/lrbd

Both use Ceph RBDs as storage format in the background.

Lenz



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Luminous/Bluestore compression documentation

2017-07-03 Thread Lenz Grimmer
Hi,

On 06/27/2017 11:54 PM, Daniel K wrote:

> Is there anywhere that details the various compression settings for
> bluestore-backed pools?
> 
> I can see compression in the list of options when I run ceph osd pool
> set, but can't find anything that details what the valid settings are.
> 
> I've tried discovering the options via the command line utilities and
> via google and have failed at both. 

Looks like it's nowhere to be found in the docs at the moment -
http://tracker.ceph.com/issues/20486

A quick scan of the code (mon/MonCommands.h and
compressor/Compressor.cc) revealed the following options for 'osd pool
set': compression_mode, compression_algorithm,
compression_required_ratio, compression_max_blob_size, and
compression_min_blob_size.

For 'compression_mode', the allowed values are "force", "aggressive",
"passive" and "none"; for 'compression_algorithm', they are "none",
"snappy", "zlib", "zstd", and "lz4".

(thanks to Joao for digging these up)
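
To illustrate, here's a quick, untested sketch of applying these to a
bluestore-backed pool named "mypool" (pool name and values are purely
illustrative):

  ceph osd pool set mypool compression_algorithm snappy
  ceph osd pool set mypool compression_mode aggressive
  ceph osd pool set mypool compression_required_ratio 0.875
  ceph osd pool set mypool compression_min_blob_size 8192
  ceph osd pool set mypool compression_max_blob_size 131072
  # inspect what has been applied
  ceph osd pool ls detail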

Lenz



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] dropping filestore+btrfs testing for luminous

2017-06-30 Thread Lenz Grimmer
Hi Sage,

On 06/30/2017 05:21 AM, Sage Weil wrote:

> The easiest thing is to
> 
> 1/ Stop testing filestore+btrfs for luminous onward.  We've recommended 
> against btrfs for a long time and are moving toward bluestore anyway.

Searching the documentation for "btrfs" does not really give a user any
clue that the use of Btrfs is discouraged.

Where exactly has this been recommended?

The documentation currently states:

http://docs.ceph.com/docs/master/rados/configuration/ceph-conf/?highlight=btrfs#osds

"We recommend using the xfs file system or the btrfs file system when
running mkfs."

http://docs.ceph.com/docs/master/rados/configuration/filesystem-recommendations/?highlight=btrfs#filesystems

"btrfs is still supported and has a comparatively compelling set of
features, but be mindful of its stability and support status in your
Linux distribution."

http://docs.ceph.com/docs/master/start/os-recommendations/?highlight=btrfs#ceph-dependencies

"If you use the btrfs file system with Ceph, we recommend using a recent
Linux kernel (3.14 or later)."

To me as an end user, none of these statements really sound like
recommendations *against* using Btrfs.

I'm therefore concerned about just disabling the tests related to
filestore on Btrfs while still including and shipping it. This has the
potential to introduce regressions that won't get caught and fixed.

> 2/ Leave btrfs in the mix for jewel, and manually tolerate and filter out 
> the occasional ENOSPC errors we see.  (They make the test runs noisy but 
> are pretty easy to identify.)
> 
> If we don't stop testing filestore on btrfs now, I'm not sure when we 
> would ever be able to stop, and that's pretty clearly not sustainable.
> Does that seem reasonable?  (Pretty please?)

If you want to get rid of filestore on Btrfs, start a proper deprecation
process and inform users that support for it is going to be removed in
the near future. The documentation must be updated accordingly and it
must be clearly emphasized in the release notes.

Simply disabling the tests while keeping the code in the distribution is
setting up users who happen to be using Btrfs for failure.

Just my 0.02€,

Lenz



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Available tools for deploying ceph cluster as a backend storage ?

2017-05-18 Thread Lenz Grimmer
Hi,

On 05/18/2017 02:28 PM, Shambhu Rajak wrote:

> I want to deploy a ceph cluster as backend storage for openstack, so I
> am trying to find the best tool available for deploying the ceph cluster.
> 
> A few are on my mind:
> 
> https://github.com/ceph/ceph-ansible
> 
> https://github.com/01org/virtual-storage-manager/wiki/Getting-Started-with-VSM
> 
> Is there anything else available that could be much easier to
> use and suitable for a production deployment?

If you're looking for a Salt-based solution, take a look at DeepSea as
well: https://github.com/SUSE/DeepSea

Lenz



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph monitoring

2017-02-01 Thread Lenz Grimmer
Hi,

On 01/30/2017 12:18 PM, Matthew Vernon wrote:

> On 28/01/17 23:43, Marc Roos wrote:
> 
>> Is there a doc that describes all the parameters that are published by
>> collectd-ceph?
> 
> The best I've found is the Redhat documentation of the performance
> counters (which are what collectd-ceph is querying):
> 
> https://access.redhat.com/documentation/en/red-hat-ceph-storage/1.3/paged/administration-guide/chapter-9-performance-counters

First off, which collectd-ceph plugin are we talking about here?

There seem to be several different implementations:

https://github.com/ceph/collectd (full collectd fork, outdated?)
https://github.com/ceph/collectd-4.10.1 (ditto, outdated?)
https://github.com/Crapworks/collectd-ceph (uses "perf dump", but might
be outdated, too)
https://github.com/inkscope/collectd-ceph (which is a fork of a fork of
https://github.com/rochaporto/collectd-ceph)

According to the documentation mentioned above, these performance
metrics can only be obtained via the "ceph daemon <name> perf schema"
command on the respective node.

The inkscope/collectd-ceph plugin (and its ancestors) seems to be
designed to be installed on any node with librados access to the cluster
and use the usual commands like "ceph osd dump" or "ceph osd pool stats"
to gather information about the cluster.

However, this seems to provide less detail than what could be
obtained via the "perf schema" statements which are utilized by the
Crapworks/collectd-ceph plugin and the plugin included in the ceph/collectd fork.
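
For reference, a rough sketch of the two query styles (daemon IDs are
just examples, and the admin socket commands have to be run locally on
the node hosting the daemon):

  # cluster-wide view, from any host with librados access and a keyring
  ceph osd dump
  ceph osd pool stats

  # per-daemon performance counters, only via the local admin socket
  ceph daemon osd.0 perf schema
  ceph daemon osd.0 perf dump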

This is a tad messy. IMHO, it would be nice if there was one set of
collectd plugins for Ceph that would support both collecting cluster
stats via librados from any node and being deployed on the Ceph nodes
directly, to obtain additional information that can only be queried
locally.

Lenz



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Calamari or Alternative

2017-01-15 Thread Lenz Grimmer
Hi,

On 01/13/2017 05:34 PM, Tu Holmes wrote:

> I remember seeing one of the openATTIC project people on the list
> mentioning that.
> 
> My initial question is, "Can you configure openATTIC just to monitor an
> existing cluster without having to build a new one?"

Yes, you can - when you install openATTIC, you can give it access to an
existing Ceph Cluster as outlined here:

http://docs.openattic.org/2.0/install_guides/post_installation.html#enabling-ceph-support-in-oa
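
In a nutshell (the guide above has the authoritative steps, this is just
a sketch): openATTIC talks to the cluster via librados, so the openATTIC
host mainly needs a ceph.conf and a suitable keyring, e.g.:

  # on a Ceph admin/mon node - the host name is just a placeholder
  scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring \
      openattic-host:/etc/ceph/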

Lenz



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Introducing DeepSea: A tool for deploying Ceph using Salt

2016-11-30 Thread Lenz Grimmer
Hi all,

(replying to the root of this thread, as the discussions between
ceph-users and ceph-devel have somewhat diverged):

On 11/03/2016 06:52 AM, Tim Serong wrote:

> I thought I should make a little noise about a project some of us at
> SUSE have been working on, called DeepSea.  It's a collection of Salt
> states, runners and modules for orchestrating deployment of Ceph
> clusters.  To help everyone get a feel for it, I've written a blog post
> which walks through using DeepSea to set up a small test cluster:
> 
>   http://ourobengr.com/2016/11/hello-salty-goodness/
> 
> If you'd like to try it out yourself, the code is on GitHub:
> 
>   https://github.com/SUSE/DeepSea
> 
> More detailed documentation can be found at:
> 
>   https://github.com/SUSE/DeepSea/wiki/intro
>   https://github.com/SUSE/DeepSea/wiki/management
>   https://github.com/SUSE/DeepSea/wiki/policy
> 
> Usual story: feedback, issues, pull requests are all welcome ;)

FYI, we now also have a dedicated mailing list "deepsea-users":
http://lists.suse.com/mailman/listinfo/deepsea-users

If you have any questions, suggestions for improvements or any other
feedback, please joins us there! We look forward to your contributions.

Thanks,

Lenz



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Introducing DeepSea: A tool for deploying Ceph using Salt

2016-11-25 Thread Lenz Grimmer
Hi Swami,

On 11/25/2016 11:04 AM, M Ranga Swami Reddy wrote:

> Can you please confirm if DeepSea works on Ubuntu as well?

Not yet, as far as I can tell, but testing/feedback/patches are very
welcome ;)

One of the benefits of using Salt is that it supports multiple
distributions. However, the Salt scripts in DeepSea currently contain a
few SUSE-specific calls, e.g. using "zypper" to install additional
packages in some places. Those would need to be replaced with the
generic, distribution-agnostic functions provided by Salt (see the rough
sketch below).
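
As a rough illustration (package name and targeting are just examples),
the distribution-agnostic route is to go through Salt's pkg virtual
module rather than shelling out to zypper:

  # Salt picks the right package backend (zypper, apt, yum, ...) per minion
  salt 'ceph-node*' pkg.install ceph-common

  # instead of something SUSE-specific like
  salt 'ceph-node*' cmd.run 'zypper --non-interactive install ceph-common'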

BTW, we're in the process of setting up a dedicated mailing list for
DeepSea - we'll let you know when it's set up.

Lenz



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Introducing DeepSea: A tool for deploying Ceph using Salt

2016-11-03 Thread Lenz Grimmer
On 11/03/2016 06:52 AM, Tim Serong wrote:

> I thought I should make a little noise about a project some of us at
> SUSE have been working on, called DeepSea.  It's a collection of Salt
> states, runners and modules for orchestrating deployment of Ceph
> clusters.  To help everyone get a feel for it, I've written a blog post
> which walks through using DeepSea to set up a small test cluster:
> 
>   http://ourobengr.com/2016/11/hello-salty-goodness/
> 
> If you'd like to try it out yourself, the code is on GitHub:
> 
>   https://github.com/SUSE/DeepSea
> 
> More detailed documentation can be found at:
> 
>   https://github.com/SUSE/DeepSea/wiki/intro
>   https://github.com/SUSE/DeepSea/wiki/management
>   https://github.com/SUSE/DeepSea/wiki/policy
> 
> Usual story: feedback, issues, pull requests are all welcome ;)

Thanks for sharing, Tim!

FWIW, the openATTIC team [http://openattic.org/] is working on an
integration of DeepSea into our framework, to provide a friendly UI and
REST API on top of this.

We currently provide mostly read-only support (e.g. listing all minions
and their roles in a Ceph cluster, extracting the pillar data and the
content of policy.cfg). Configuring minions and executing the DeepSea
stages is work in progress.

But we too would love to get your feedback, bug reports or pull requests :)

Thanks,

Lenz



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph on different OS version

2016-09-22 Thread Lenz Grimmer
Hi,

On 09/22/2016 03:03 PM, Matteo Dacrema wrote:

> has someone ever tried to run a ceph cluster on two different versions
> of the OS?
> In particular I’m running a ceph cluster half on Ubuntu 12.04 and half
> on Ubuntu 14.04, with the Firefly version.
> I’m not seeing any issues.
> Are there any kinds of risks?

I could be wrong, but as long as the Ceph version running on these nodes
is the same, I doubt the underlying OS version makes much of a
difference, if we're talking about "userland" Ceph components like MONs,
OSDs or RGW nodes.
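
If you want to double-check, one quick way to compare versions (the
exact commands may vary a bit between releases; this is just a sketch)
is to ask the daemons and packages directly:

  # from any client node: ask all OSDs which version they are running
  ceph tell osd.* version
  # on each node: check the locally installed package
  ceph --version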

Lenz



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Seeking your feedback on the Ceph monitoring and management functionality in openATTIC

2016-09-14 Thread Lenz Grimmer
Hi,

if you're running a Ceph cluster and would be interested in trying out a
new tool for managing/monitoring it, we've just released version 2.0.14
of openATTIC that now provides a first implementation of a cluster
monitoring dashboard.

This is work in progress, but we'd like to solicit your input and
feedback early on, to make sure we're on the right track. See this blog
post for more details:

https://blog.openattic.org/posts/seeking-your-feedback-on-the-ceph-monitoring-and-management-functionality-in-openattic/

Any comments and suggestions are welcome! Thanks in advance.

Lenz



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] build and Compile ceph in development mode takes an hour

2016-08-31 Thread Lenz Grimmer
On 08/18/2016 12:42 AM, Brad Hubbard wrote:

> On Thu, Aug 18, 2016 at 1:12 AM, agung Laksono
>  wrote:
>> 
>> Is there a way to make the compile process faster? Something
>> like only compiling the particular code that I changed.
> 
> Sure, just use the same build directory and run "make" again after
> you make code changes and it should only re-compile the binaries that
> are affected by your code changes.
> 
> You can use "make -jX" if you aren't already, where 'X' is usually the
> number of CPUs / 2, which may speed up the build.

In addition to that, ccache might come in handy, too:
https://ccache.samba.org/
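
A minimal sketch of hooking it in (assuming a distribution that ships
ccache's compiler shims under /usr/lib/ccache - the path and the package
manager will differ depending on your system):

  sudo apt-get install ccache
  export PATH=/usr/lib/ccache:$PATH   # cc/gcc/g++ now resolve to ccache
  make -j$(nproc)
  ccache -s                           # show cache hit/miss statistics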

Lenz



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] openATTIC 2.0.13 beta has been released

2016-08-17 Thread Lenz Grimmer
Hi,

On 08/16/2016 02:16 PM, Lenz Grimmer wrote:

> I blogged about the state of Ceph support a few months ago [1], a 
> followup posting is currently in the works.
> 
> [1] 
> https://blog.openattic.org/posts/update-the-state-of-ceph-support-in-openattic/

FWIW, the update has been published now:
https://blog.openattic.org/posts/the-state-of-ceph-support-in-openattic-august-2016/

Lenz



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] openATTIC 2.0.13 beta has been released

2016-08-16 Thread Lenz Grimmer
Hi Alexander,

sorry for the late reply, I've been on vacation for a bit.

On 08/11/2016 07:16 AM, Александр Пивушков wrote:

> and in what way does Calamari not suit you?

Thank you for your comment! openATTIC has a somewhat different scope: we
aim at providing a versatile storage management system that supports
both "traditional" storage (e.g. NFS/CIFS/iSCSI) and the management and
monitoring of Ceph, for users whose storage demands exceed the
boundaries of individual servers with locally attached storage.

We intend to organically grow the Ceph management and monitoring
functionality over time, based on user feedback and demand. Currently,
we're in the final stretch of completing a dashboard that displays the
overall status and health of the configured Ceph Cluster(s). We're also
working on extending the Ceph Pool management and monitoring
functionality for the next release (we release a new openATTIC version
every 5-6 weeks).

I blogged about the state of Ceph support a few months ago [1], a
followup posting is currently in the works.

Our roadmap and development process are fully open - we look forward to
your feedback and suggestions.

Thanks,

Lenz

[1]
https://blog.openattic.org/posts/update-the-state-of-ceph-support-in-openattic/



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] openATTIC 2.0.13 beta has been released

2016-08-04 Thread Lenz Grimmer
Hi all,

FYI, a few days ago, we released openATTIC 2.0.13 beta. On the Ceph
management side, we've made some progress with the cluster and pool
monitoring backend, which lays the foundation for the dashboard that
will display graphs generated from this data. We also added some more
RBD management functionality to the Web UI.

For more details, please see the release announcement here:

https://blog.openattic.org/posts/openattic-2.0.13-beta-has-been-released/

We're still in the early stages of developing the Ceph management
and monitoring functionality, so we're very eager to receive feedback
and comments.

Thanks!

Lenz



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph and Storage Management with openATTIC (was : June Ceph Tech Talks)

2016-06-15 Thread Lenz Grimmer
Hi there,

On 06/06/2016 09:56 PM, Patrick McGarry wrote:

> So we have gone from not having a Ceph Tech Talk this month…to having
> two! As a part of our regularly scheduled Ceph Tech Talk series, Lenz
> Grimmer from OpenATTIC will be talking about the architecture of their
> management/GUI solution, which will also include a live demo.

Thanks a lot for giving me a chance to talk about our project, Patrick,
much appreciated!

For those of you who want to learn more about openATTIC
(http://openattic.org) in advance or can't make it to the tech talk, I'd
like to give you a quick introduction and some pointers to our project.

openATTIC was started as a "traditional" storage management system
(CIFS/NFS, iSCSI/FC, DRBD, Btrfs, ZFS) around 5 years ago. It supports
managing multiple nodes and has monitoring of the storage resources
built-in (using Nagios/Icinga and PNP4Nagios for storing performance
data in RRDs). The openATTIC Backend is based on Python/Django and we
added a RESTful API and WebUI based on AngularJS and Bootstrap with
version 2.0, which is currently under development.

We started adding Ceph support in early 2015, as an answer to users who
were facing data growth at a faster pace than a traditional storage
system could keep up with. At first, we added the capability to map and
share RBDs as block volumes, as well as a simple CRUSH map
editor. We started collaborating with SUSE on the Ceph features at the
beginning of the year and have made good progress on extending the
functionality since then.

In this stage, we use the librados and librbd Python bindings to
communicate with the Ceph cluster. But we're also keeping an eye on the
development of ceph-mgr that is currently being worked on.

For additional remote node management and monitoring features, we intend
to use Salt and collectd. Currently, our focus is on building a
dashboard to monitor the cluster's performance and health (making use of
the D3 JavaScript library for the graphs) as well as creating the WebUI
views that display the cluster's various objects like Pools, OSDs, etc.

The openATTIC development takes place in the open: the code is hosted in
a Mercurial repo on BitBucket [1], all issues (bugs and feature specs)
are tracked in a public Jira instance [2]. New code is submitted via
pull requests and we require code reviews before it is merged.
We also have an extensive test suite that performs tests both on the
REST API level as well as over the WebUI.

The Ceph functionality is still under development [3], and right now the
WebUI does not fully utilize everything the API provides [4], but we'd
like to invite you to take a look at what we have so far and let us know
if we're heading in the right direction with this.

Our intention is to provide a Ceph Management and Monitoring tool that
administrators *want* to use and that makes sense. So any feedback or
comments are welcome and appreciated [5].

Thanks!

Lenz

[1] https://bitbucket.org/openattic/openattic/
[2] https://tracker.openattic.org/
[3]
https://wiki.openattic.org/display/OP/openATTIC+Ceph+Management+Roadmap+and+Implementation+Plan
[4] https://wiki.openattic.org/display/OP/openATTIC+Ceph+REST+API+overview
[5] http://openattic.org/get-involved.html





signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com