[ceph-users] Re: Dear Abby: Why Is Architecting CEPH So Hard?

2020-04-23 Thread Janne Johansson
On Thu, 23 Apr 2020 at 08:49, Darren Soothill < darren.sooth...@suse.com> wrote: > If you want the lowest cost per TB then you will be going with larger > nodes in your cluster, but it does mean your minimum cluster size is going to > be many PB’s in size. > Now the question is what is the tax that

[ceph-users] Re: Upgrading to Octopus

2020-04-23 Thread Simon Sutter
Hello Khodayar Of course I tried installing them with yum... They are not available in the CentOS base or EPEL repos; here are the ones which are available: [root@node1 ~]# yum list | egrep "cherrypy|jwt|routes" python-cherrypy.noarch 3.2.2-4.el7@base python-ch

[ceph-users] Re: missing amqp-exchange on bucket-notification with AMQP endpoint

2020-04-23 Thread Yuval Lifshitz
'm sending out: > > POST / HTTP/1.1 > Content-Type: application/x-www-form-urlencoded; charset=utf-8 > Accept-Encoding: identity > Date: Tue, 23 Apr 2020 05:00:35 GMT > X-Amz-Content-Sha256: > e8d828552b412fde2cd686b0a984509bc485693a02e8c53ab84cf36d1dbb961a > Host: s3.exa
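
For reference, a hedged sketch of the form fields a CreateTopic request for an AMQP endpoint is expected to carry; the attribute names follow the Ceph bucket-notification docs, and the topic, broker and exchange names below are placeholders:

    Action=CreateTopic
    &Name=mytopic
    &Attributes.entry.1.key=push-endpoint
    &Attributes.entry.1.value=amqp://user:password@rabbit-host:5672
    &Attributes.entry.2.key=amqp-exchange
    &Attributes.entry.2.value=ex1
    &Attributes.entry.3.key=amqp-ack-level
    &Attributes.entry.3.value=broker

The docs list amqp-exchange as a mandatory attribute for AMQP 0.9.1 endpoints, so a CreateTopic body without it is the usual cause of this error.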

[ceph-users] Re: Dear Abby: Why Is Architecting CEPH So Hard?

2020-04-23 Thread Darren Soothill
I can think of one vendor who has made some of the compromises you talk of; although memory and CPU are not among them, they are limited on slots and NVMe capacity. But there are plenty of other vendors out there who use the same model of motherboard across the whole chassis range, so there isn

[ceph-users] Healthy objects trapped in incomplete pgs

2020-04-23 Thread Jesper Lykkegaard Karlsen
Dear Cephers, A few days ago disaster struck the Ceph cluster (erasure-coded) I am administering, as the UPS power was pulled from the cluster, causing a power outage. After rebooting the system, 6 OSDs were lost (spread over 5 OSD nodes) as they could not mount anymore, and several others had dam
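
A hedged sketch of the usual first steps for salvaging data from PGs stuck incomplete, assuming the data devices of the failed OSDs are still readable; the PG and OSD IDs below are placeholders:

    ceph pg 2.1f query                          # inspect peering info for the incomplete PG
    systemctl stop ceph-osd@7                   # the OSD must be stopped before using the tool
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-7 \
        --op export --pgid 2.1f --file /tmp/pg2.1f.export
    # the exported shard can then be brought into a (stopped) healthy OSD with --op import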

[ceph-users] Re: Dear Abby: Why Is Architecting CEPH So Hard?

2020-04-23 Thread Martin Verges
Hello, simpler systems tend to be cheaper to buy per TB of storage, not on a theoretical basis but on a practical quote. For example, 1U Gigabyte 16-bay D120-C21 systems, with a density of 64 disks per 4U, are quite OK for most users. With 40 nodes per rack + 2 switches you have 10 PB of raw space for around 350k€. They
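
For context, the arithmetic behind that quote, assuming roughly 16 TB drives (the drive size is not stated in the post): 40 nodes x 16 bays = 640 drives; 640 x 16 TB is about 10 PB raw; 350 k€ / 10 PB works out to roughly 35 €/TB raw.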

[ceph-users] Re: Dear Abby: Why Is Architecting CEPH So Hard?

2020-04-23 Thread Richard Hesketh
On Thu, 2020-04-23 at 09:08 +0200, Janne Johansson wrote: > On Thu, 23 Apr 2020 at 08:49, Darren Soothill < > darren.sooth...@suse.com> wrote: > > > If you want the lowest cost per TB then you will be going with > > larger nodes in your cluster but it does mean your minimum cluster > > size is goi

[ceph-users] Re: Dear Abby: Why Is Architecting CEPH So Hard?

2020-04-23 Thread lin yunfan
Hi Martin, How is the performance of the D120-C21 HDD cluster? Can it utilize the full performance of the 16 HDDs? linyunfan Martin Verges wrote on Thu, 23 Apr 2020 at 18:12: > > Hello, > > simpler systems tend to be cheaper to buy per TB storage, not on a > theoretical but practical quote. > > For example 1U

[ceph-users] Re: adding block.db to OSD

2020-04-23 Thread Igor Fedotov
I don't recall any additional tuning to be applied to the new DB volume. And I assume the hardware is pretty much the same... Do you still have any significant amount of data spilled over for these updated OSDs? If not, I don't have any valid explanation for the phenomenon. You might want to try "ceph os
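
A minimal sketch of how spillover can be checked, assuming the OSDs run Nautilus or later and the admin sockets are in their default locations:

    ceph health detail | grep -i spillover      # a BLUEFS_SPILLOVER warning lists affected OSDs
    ceph daemon osd.0 perf dump | grep -E '"db_used_bytes"|"slow_used_bytes"'
    # slow_used_bytes > 0 means RocksDB data has spilled from the DB volume to the main device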

[ceph-users] Re: adding block.db to OSD

2020-04-23 Thread Stefan Priebe - Profihost AG
Hi, On 23.04.20 at 14:06, Igor Fedotov wrote: I don't recall any additional tuning to be applied to new DB volume. And assume the hardware is pretty the same... Do you still have any significant amount of data spilled over for these updated OSDs? If not I don't have any valid explanation for

[ceph-users] Re: Upgrading to Octopus

2020-04-23 Thread Khodayar Doustar
Simon, You can try to search for the exact package name; you can also try these repos: yum -y install epel-release centos-release-ceph-nautilus centos-release-openstack-stein On Thu, Apr 23, 2020 at 11:57 AM Simon Sutter wrote: > Hel

[ceph-users] Increase number of read and writes

2020-04-23 Thread Bobby
Hi, I am using Ceph in developer mode. Currently I am implementing the Librados examples which are also available in the Introduction to Librados section: https://docs.ceph.com/docs/master/rados/api/librados-intro/#step-3-creating-an-i-o-context. It says once your app has a cluster handle and a connection

[ceph-users] Re: Increase number of read and writes

2020-04-23 Thread Janne Johansson
On Thu, 23 Apr 2020 at 16:07, Bobby wrote: > Hi, > > I am using Ceph in developer mode. Currently I am implementing Librados > examples which are also available in Introduction to Librados section > > https://docs.ceph.com/docs/master/rados/api/librados-intro/#step-3-creating-an-i-o-context > .

[ceph-users] Re: Upgrading to Octopus

2020-04-23 Thread Simon Sutter
Khodayar, I added all those repos, but still those packages are missing. I can of course search for the exact package name like this: [root@node1 ~]# yum search python3-cherrypy Loaded plugins: fastestmirror, langpacks, priorities Loading mirror speeds from cached hostfile * base: pkg.adfini

[ceph-users] Re: Upgrading to Octopus

2020-04-23 Thread Adam Tygart
The release notes [1] specify only partial support for CentOS 7. "Note that the dashboard, prometheus, and restful manager modules will not work on the CentOS 7 build due to Python 3 module dependencies that are missing in CentOS 7." You will need to move to CentOS 8, or potentially containerize

[ceph-users] Dear Abby: Why Is Architecting CEPH So Hard?

2020-04-23 Thread Linus VanWeil
Hello, Looks like the original chain got deleted, but thank you to everyone who responded. Just to keep any newcomers in the loop, I have pasted the original posting below. To all the original contributors to this chain: I feel much more confident in my design theory for the storage nodes. Ho

[ceph-users] Re: Increase number of read and writes

2020-04-23 Thread Bobby
Hi Janne, Thanks a lot! I should have checked it earlier... I got it :-) Basically I would like to compile the client read and write C/C++ code and then later profile the executables with valgrind and other profiling tools. The reason is that I want to see the function calls, execution time, etc. T
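
A minimal sketch of that workflow, assuming a hypothetical source file ceph_client.cc built against the librados development package:

    g++ -std=c++11 -o ceph_client ceph_client.cc -lrados    # link the librados client library
    valgrind --tool=callgrind ./ceph_client                  # records the call graph and per-function costs
    callgrind_annotate callgrind.out.<pid>                   # summarize; kcachegrind gives a GUI view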

[ceph-users] Re: Upgrading to Octopus

2020-04-23 Thread Khodayar Doustar
Yes! That was what I was going to paste here. On Thu, Apr 23, 2020 at 7:18 PM Adam Tygart wrote: > The release notes [1] specify only partial support for CentOS 7. > > "Note that the dashboard, prometheus, and restful manager modules will > not work on the CentOS 7 build due to Python 3 module de

[ceph-users] Re: How to debug ssh: ceph orch host add ceph01 10.10.1.1

2020-04-23 Thread Ml Ml
Can anyone help me here? :-/ On Wed, Apr 22, 2020 at 10:36 PM Ml Ml wrote: > > Hello List, > > I did: > root@ceph01:~# ceph cephadm set-ssh-config -i /tmp/ssh_conf > > root@ceph01:~# cat /tmp/ssh_conf > Host * > User root > StrictHostKeyChecking no > UserKnownHostsFile /dev/null > > root@ceph01:~
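
A hedged sketch of the checks that usually narrow this down (Octopus cephadm; ceph01/10.10.1.1 are taken from the original post, the private-key path is a placeholder):

    ceph cephadm get-pub-key > ~/ceph.pub            # the public key cephadm will use
    ssh-copy-id -f -i ~/ceph.pub root@ceph01         # make sure it is authorized on the target host
    ceph cephadm get-ssh-config > /tmp/ssh_conf.check
    ssh -F /tmp/ssh_conf.check -i <private-key> root@ceph01   # must log in without any prompt
    ceph orch host add ceph01 10.10.1.1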

[ceph-users] Re: adding block.db to OSD

2020-04-23 Thread Stefan Priebe - Profihost AG
Hi, if the OSDs are idle the difference is even worse: # ceph tell osd.0 bench { "bytes_written": 1073741824, "blocksize": 4194304, "elapsed_sec": 15.39670787501, "bytes_per_sec": 69738403.346825853, "iops": 16.626931034761871 } # ceph tell osd.38 bench { "bytes
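
For scale, the osd.0 numbers above work out to 1073741824 B / 15.40 s, i.e. roughly 66.5 MiB/s, or about 16.6 writes of 4 MiB each per second, matching the reported iops value.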

[ceph-users] Re: Upgrading to Octopus

2020-04-23 Thread gert . wieberdink
Hello Simon, I think that Khodayar is right. I managed to install a new Ceph cluster on CentOS 8.1. Therefore you will need the ceph-el8.repo for the time being. For some reason, "they" left the py3 packages you mentioned out of EPEL (as with leveldb, but that package luckily appeared last week

[ceph-users] v13.2.10 Mimic released

2020-04-23 Thread Abhishek Lekshmanan
We're happy to announce the availability of the tenth bugfix release of Ceph Mimic. This release fixes an RGW vulnerability affecting Mimic, and we recommend that all Mimic users upgrade. Notable Changes --- * CVE-2020-12059: Fix an issue with Post Object Requests with Tagging (#4496

[ceph-users] active+remapped+backfilling keeps going .. and going

2020-04-23 Thread Kyriazis, George
Hello, I have a Proxmox Ceph cluster with 5 nodes and 3 OSDs each (15 OSDs total), on a 10G network. The cluster started small, and I’ve progressively added OSDs over time. Problem is: the cluster never rebalances completely. There is always progress on backfilling, but PGs that used to be

[ceph-users] Recovery throughput inversely linked with rbd_cache_xyz?

2020-04-23 Thread Harry G. Coin
Hello, A couple of days ago I increased the RBD cache size from the default to 256 MB/OSD on a small 4-node, 6-OSD-per-node setup in a test/lab setting. The RBD volumes are all VM images with writeback cache parameters and steady writes of only a few MB/s going on. Logging, mostly. I noticed the
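
For reference, a hedged sketch of how such a change is typically applied with the centralized config (Nautilus or later); note that rbd_cache_size is a per-client librbd setting rather than a per-OSD one, and the max_dirty value below is an assumption scaled up alongside the cache:

    ceph config set client rbd_cache_size 268435456                 # 256 MiB
    ceph config set client rbd_cache_max_dirty 201326592            # 192 MiB, must stay below the cache size
    ceph config set client rbd_cache_writethrough_until_flush true  # writeback only starts after the first flush
    ceph config dump | grep rbd_cache                               # verify what is actually set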

[ceph-users] Re: active+remapped+backfilling keeps going .. and going

2020-04-23 Thread Eugen Block
Hi, the balancer is probably running; which mode is it in? I changed the mode to none in our own cluster because it also never finished rebalancing and we didn’t have a bad PG distribution. Maybe it’s supposed to be like that, I don’t know. Regards Eugen Quoting "Kyriazis, George": Hello, I
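
For reference, the commands being discussed (Luminous-or-later balancer module):

    ceph balancer status      # shows the mode and whether it is active
    ceph balancer mode none   # keep the module loaded but stop generating new plans
    ceph balancer off         # or disable it entirely, as suggested in the next reply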

[ceph-users] HBase/HDFS on Ceph/CephFS

2020-04-23 Thread jesper
Hi We have a 3-year-old Hadoop cluster - up for refresh - so it is time to evaluate options. The "only" use case is running an HBase installation which is important for us, and migrating out of HBase would be a hassle. Our Ceph usage has expanded and in general - we really like what we see. Thus

[ceph-users] Re: active+remapped+backfilling keeps going .. and going

2020-04-23 Thread Lomayani S. Laizer
I had a similar problem when I upgraded to Octopus, and the solution was to turn off autobalancing. You can try turning it off if it is enabled: ceph balancer off On Fri, Apr 24, 2020 at 8:51 AM Eugen Block wrote: > Hi, > the balancer is probably running, which mode? I changed the mode to > none in our