Re: [ceph-users] Large amount of files - cephfs?

2017-09-27 Thread Henrik Korkuc
On 17-09-27 14:57, Josef Zelenka wrote: Hi, we are currently working on a ceph solution for one of our customers. They run a file hosting service and need to store approximately 100 million pictures (thumbnails). Their current code works with FTP, which they use as storage. We thought that we

[ceph-users] Ceph Developers Monthly - October

2017-09-27 Thread Leonardo Vaz
Hey Cephers, This is just a friendly reminder that the next Ceph Developer Monthly meeting is coming up: http://wiki.ceph.com/Planning If you have work that you're doing that is feature work, significant backports, or anything you would like to discuss with the core team, please add it to the

[ceph-users] Ceph Tech Talk - September

2017-09-27 Thread Leonardo Vaz
Hey Cephers, Just a reminder that the monthly Ceph Tech Talk will be this Thursday at 1pm (EDT). This month John Spray will be talking about ceph-mgr. Everyone is invited to join us. http://ceph.com/ceph-tech-talks/ Kindest regards, Leo -- Leonardo Vaz Ceph Community Manager Open Source and

Re: [ceph-users] RDMA with mellanox connect x3pro on debian stretch and proxmox v5.0 kernel 4.10.17-3

2017-09-27 Thread Haomai Wang
Previously we had an InfiniBand cluster; recently we deployed a RoCE cluster. They are both for test purposes for users. On Wed, Sep 27, 2017 at 11:38 PM, Gerhard W. Recher wrote: > Haomai, > > I looked at your presentation, so I guess you already have a running > cluster with RDMA & Mellanox > (https:/

Re: [ceph-users] Need some help/advice upgrading Hammer to Jewel - HEALTH_ERR shutting down OSD

2017-09-27 Thread David Turner
There are new PG states that cause HEALTH_ERR. In this case it is undersized that is causing this state. While I decided to upgrade my tunables before upgrading the rest of my cluster, it does not seem to be a requirement. However, I would recommend upgrading them sooner rather than later. It will cause a
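For reference, a minimal sketch of inspecting and then raising the CRUSH tunables once all daemons are upgraded; the "jewel" profile is an assumption here, and switching profiles triggers data movement, so plan for it:
  # show the tunables currently in effect
  ceph osd crush show-tunables
  # move to the jewel profile after the whole cluster runs Jewel (expect rebalancing)
  ceph osd crush tunables jewel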

[ceph-users] Need some help/advice upgrading Hammer to Jewel - HEALTH_ERR shutting down OSD

2017-09-27 Thread Eric van Blokland
Hello, I have run into an issue while upgrading a Ceph cluster from Hammer to Jewel on CentOS. It's a small cluster with 3 monitor servers and a humble 6 OSDs distributed over 3 servers. I've upgraded the 3 monitors successfully to 10.2.7. They appear to be running fine except for this health
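A common precaution while restarting OSDs during such an upgrade, sketched here on the assumption that the transient HEALTH_ERR comes only from undersized/degraded PGs, is to suppress out-marking for the duration (the OSD id is a placeholder):
  ceph osd set noout                  # keep stopped OSDs from being marked out
  systemctl restart ceph-osd@<id>     # restart the upgraded OSD
  ceph osd unset noout                # restore normal behaviour afterwards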

Re: [ceph-users] Large amount of files - cephfs?

2017-09-27 Thread Deepak Naidu
Josef, my comments are based on experience with the cephFS (Jewel with 1 MDS) community (free) version. * cephFS (Jewel), considering 1 MDS (stable), performs horribly with millions of small, KB-sized files, even after MDS cache and dir-frag tuning, etc. * cephFS (Jewel), considering 1 MDS (stable), performs great for "
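As a rough illustration of the Jewel-era MDS tuning referred to above, a ceph.conf sketch; the option names are my reading of that release and the values are placeholders, not recommendations:
  [mds]
  mds cache size = 4000000     # inodes cached by the MDS (Jewel counts inodes, not bytes)
  mds bal frag = true          # allow directory fragmentation for very large directories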

Re: [ceph-users] "ceph fs" commands hang forever and kill monitors

2017-09-27 Thread John Spray
On Wed, Sep 27, 2017 at 1:18 PM, Richard Hesketh wrote: > On 27/09/17 12:32, John Spray wrote: >> On Wed, Sep 27, 2017 at 12:15 PM, Richard Hesketh >> wrote: >>> As the subject says... any ceph fs administrative command I try to run >>> hangs forever and kills monitors in the background - someti

Re: [ceph-users] RDMA with mellanox connect x3pro on debian stretch and proxmox v5.0 kernel 4.10.17-3

2017-09-27 Thread Gerhard W. Recher
Yep, RoCE. I followed all the recommendations in the Mellanox papers ... */etc/security/limits.conf* * soft memlock unlimited * hard memlock unlimited root soft memlock unlimited root hard memlock unlimited Also set properties on daemons (chapter 11) in https://community.mellanox.com/docs/DOC-27
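For context, the Mellanox guide being followed boils down to roughly this kind of ceph.conf fragment; the option names are my understanding of the async+rdma messenger, and the device name and GID are placeholders taken from ibv_devices / show_gids output:
  [global]
  ms_type = async+rdma
  ms_async_rdma_device_name = mlx4_0                 # placeholder, from ibv_devices
  ms_async_rdma_local_gid = <gid from show_gids>     # placeholder, RoCEv2 GID of this node
The daemons also need a raised memlock limit, i.e. the limits.conf lines above plus the per-daemon systemd memlock override that chapter 11 of the linked document describes.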

Re: [ceph-users] "ceph fs" commands hang forever and kill monitors

2017-09-27 Thread Shinobu Kinjo
Just for clarification. Did you upgrade your cluster from Hammer to Luminous, then hit an assertion? On Wed, Sep 27, 2017 at 8:15 PM, Richard Hesketh wrote: > As the subject says... any ceph fs administrative command I try to run hangs > forever and kills monitors in the background - sometimes t

Re: [ceph-users] Minimum requirements to mount luminous cephfs ?

2017-09-27 Thread Yoann Moulin
On 27/09/2017 at 15:15, David Turner wrote: > You can also use ceph-fuse instead of the kernel driver to mount cephfs. It > supports all of the luminous features. OK thanks, I will try this afterwards. I need to be able to mount the cephfs directly into containers; I don't know what will be the best w

Re: [ceph-users] Minimum requirements to mount luminous cephfs ?

2017-09-27 Thread German Anders
Try to work with the tunables: $ *ceph osd crush show-tunables* { "choose_local_tries": 0, "choose_local_fallback_tries": 0, "choose_total_tries": 50, "chooseleaf_descend_once": 1, "chooseleaf_vary_r": 1, "chooseleaf_stable": 0, "straw_calc_version": 1, "allowed_buc

Re: [ceph-users] RDMA with mellanox connect x3pro on debian stretch and proxmox v5.0 kernel 4.10.17-3

2017-09-27 Thread Gerhard W. Recher
Haomai, I looked at your presentation, so I guess you already have a running cluster with RDMA & Mellanox (https://www.youtube.com/watch?v=Qb2SUWLdDCw). Is nobody out there running a cluster with RDMA? Any help is appreciated! Gerhard W. Recher net4sec UG (haftungsbeschränkt) Leitenweg

Re: [ceph-users] RDMA with mellanox connect x3pro on debian stretch and proxmox v5.0 kernel 4.10.17-3

2017-09-27 Thread Gerhard W. Recher
Ah, OK, but as I stated before: ceph.conf is a cluster-wide file on Proxmox! So if I specify [global] // Set local GID for RoCEv2 interface used for Ceph // The GID corresponding to IPv4 or IPv6 networks // should be taken from show_gids command output // This parameter should be uniquely set
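One possible workaround for a shared ceph.conf, assuming the GID really must differ per node, is to move the option out of [global] and into per-daemon sections keyed by the daemon IDs running on each host (IDs and GIDs below are placeholders):
  [osd.0]
  ms_async_rdma_local_gid = <gid of the host carrying osd.0>
  [osd.3]
  ms_async_rdma_local_gid = <gid of the host carrying osd.3>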

Re: [ceph-users] RDMA with mellanox connect x3pro on debian stretch and proxmox v5.0 kernel 4.10.17-3

2017-09-27 Thread Haomai Wang
https://community.mellanox.com/docs/DOC-2415 On Wed, Sep 27, 2017 at 10:01 PM, Gerhard W. Recher wrote: > How to set the local gid option? > > I have no clue :) > > Gerhard W. Recher > > net4sec UG (haftungsbeschränkt) > Leitenweg 6 > 86929 Penzing > > +49 171 4802507 > On 27.09.2017 at 15:59

Re: [ceph-users] RDMA with mellanox connect x3pro on debian stretch and proxmox v5.0 kernel 4.10.17-3

2017-09-27 Thread Gerhard W. Recher
How to set the local gid option? I have no clue :) Gerhard W. Recher net4sec UG (haftungsbeschränkt) Leitenweg 6 86929 Penzing +49 171 4802507 On 27.09.2017 at 15:59, Haomai Wang wrote: > do you set local gid option? > > On Wed, Sep 27, 2017 at 9:52 PM, Gerhard W. Recher > wrote: >> Yep, RoCE ...

Re: [ceph-users] RDMA with mellanox connect x3pro on debian stretch and proxmox v5.0 kernel 4.10.17-3

2017-09-27 Thread Haomai Wang
Do you set the local gid option? On Wed, Sep 27, 2017 at 9:52 PM, Gerhard W. Recher wrote: > Yep, RoCE > > I followed all the recommendations in the Mellanox papers ... > > */etc/security/limits.conf* > > * soft memlock unlimited > * hard memlock unlimited > root soft memlock unlimited > root hard mem

Re: [ceph-users] Different recovery times for OSDs joining and leaving the cluster

2017-09-27 Thread Richard Hesketh
Just to add, assuming other settings are default, IOPS and maximum physical write speed are probably not the actual limiting factors in the tests you have been doing; ceph by default limits recovery I/O on any given OSD quite a bit in order to ensure recovery operations don't adversely impact cl
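The defaults alluded to are the recovery/backfill throttles; a sketch of checking and temporarily loosening them, with illustrative values only:
  # inspect the current throttles on one OSD via the admin socket
  ceph daemon osd.0 config show | grep -E 'osd_max_backfills|osd_recovery_max_active'
  # loosen them at runtime across all OSDs (illustrative values, revert when done)
  ceph tell osd.* injectargs '--osd-max-backfills 4 --osd-recovery-max-active 8'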

Re: [ceph-users] osd max scrubs not honored?

2017-09-27 Thread David Turner
This isn't an answer, but a suggestion to try and help track it down as I'm not sure what the problem is. Try querying the admin socket for your osds and look through all of their config options and settings for something that might explain why you have multiple deep scrubs happening on a single os
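A minimal sketch of that admin-socket query, assuming the default socket path and osd.0 as the example daemon:
  ceph daemon osd.0 config show | grep scrub
  # equivalent, going through the socket file directly
  ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep scrub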

Re: [ceph-users] Re install ceph

2017-09-27 Thread David Turner
I've reinstalled a host many times over the years. We used dmcrypt so I made sure to back up the keys for that. Other than that it is seamless as long as your installation process only affects the root disk. If it affected any osd or journal disk, then you would need to mark those osds out and re-
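A rough sketch of that sequence, assuming the reinstall only touches the root disk and the OSDs were deployed with ceph-disk (usual for that era); noout keeps the cluster from rebalancing while the host is down:
  ceph osd set noout            # before taking the host down
  # ... reinstall the OS on the root disk, reinstall the ceph packages ...
  ceph-disk activate-all        # re-detect and start the existing OSDs
  ceph osd unset noout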

Re: [ceph-users] Large amount of files - cephfs?

2017-09-27 Thread John Spray
On Wed, Sep 27, 2017 at 12:57 PM, Josef Zelenka wrote: > Hi, > > we are currently working on a ceph solution for one of our customers. They > run a file hosting service and need to store approximately 100 million > pictures (thumbnails). Their current code works with FTP, which they use as > stora

Re: [ceph-users] Different recovery times for OSDs joining and leaving the cluster

2017-09-27 Thread David Turner
When you lose 2 osds you have 30 osds accepting the degraded data and performing the backfilling. When the 2 osds are added back in you only have 2 osds receiving the majority of the data from the backfilling. 2 osds have a lot less available iops and spindle speed than the other 30 did when they

Re: [ceph-users] Minimum requirements to mount luminous cephfs ?

2017-09-27 Thread David Turner
You can also use ceph-fuse instead of the kernel driver to mount cephfs. It supports all of the luminous features. On Wed, Sep 27, 2017, 8:46 AM Yoann Moulin wrote: > Hello, > > > Try to work with the tunables: > > > > $ *ceph osd crush show-tunables* > > { > > "choose_local_tries": 0, > >
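A minimal ceph-fuse invocation along those lines, assuming a client named client.container001 as elsewhere in the thread and a keyring at the path shown (both placeholders):
  ceph-fuse -n client.container001 -k /etc/ceph/ceph.client.container001.keyring /mnt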

Re: [ceph-users] RDMA with mellanox connect x3pro on debian stretch and proxmox v5.0 kernel 4.10.17-3

2017-09-27 Thread Gerhard W. Recher
Haomai, ibstat CA 'mlx4_0' CA type: MT4103 Number of ports: 2 Firmware version: 2.40.7000 Hardware version: 0 Node GUID: 0x248a070300e26070 System image GUID: 0x248a070300e26070 Port 1: State: Active Physical

Re: [ceph-users] RDMA with mellanox connect x3pro on debian stretch and proxmox v5.0 kernel 4.10.17-3

2017-09-27 Thread Haomai Wang
On Wed, Sep 27, 2017 at 8:33 PM, Gerhard W. Recher wrote: > Hi Folks! > > I'm totally stuck > > rdma is running on my nics, rping udaddy etc will give positive results. > > cluster consist of: > proxmox-ve: 5.0-23 (running kernel: 4.10.17-3-pve) > pve-manager: 5.0-32 (running version: 5.0-32/2560e

Re: [ceph-users] Minimum requirements to mount luminous cephfs ?

2017-09-27 Thread Yoann Moulin
Hello, > Try to work with the tunables: > > $ *ceph osd crush show-tunables* > { > "choose_local_tries": 0, > "choose_local_fallback_tries": 0, > "choose_total_tries": 50, > "chooseleaf_descend_once": 1, > "chooseleaf_vary_r": 1, > "chooseleaf_stable": 0, > "straw_calc

[ceph-users] Different recovery times for OSDs joining and leaving the cluster

2017-09-27 Thread Jonas Jaszkowic
Hello all, I have set up a Ceph cluster consisting of one monitor, 32 OSD hosts (1 OSD of size 320GB per host) and 16 clients which are reading and writing to the cluster. I have one erasure coded pool (shec plugin) with k=8, m=4, c=3 and pg_num=256. Failure domain is host. I am able to reach a
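For reference, roughly how such a pool would be created; profile and pool names are placeholders, and on pre-Luminous releases the failure-domain key is spelled ruleset-failure-domain instead:
  ceph osd erasure-code-profile set shec-k8m4c3 plugin=shec k=8 m=4 c=3 crush-failure-domain=host
  ceph osd pool create ecpool 256 256 erasure shec-k8m4c3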

[ceph-users] RDMA with mellanox connect x3pro on debian stretch and proxmox v5.0 kernel 4.10.17-3

2017-09-27 Thread Gerhard W. Recher
Hi folks! I'm totally stuck. RDMA is running on my NICs; rping, udaddy, etc. give positive results. The cluster consists of: proxmox-ve: 5.0-23 (running kernel: 4.10.17-3-pve) pve-manager: 5.0-32 (running version: 5.0-32/2560e073) system (4 nodes): Supermicro 2028U-TN24R4T+ 2-port Mellanox connect

Re: [ceph-users] "ceph fs" commands hang forever and kill monitors

2017-09-27 Thread Richard Hesketh
On 27/09/17 12:32, John Spray wrote: > On Wed, Sep 27, 2017 at 12:15 PM, Richard Hesketh > wrote: >> As the subject says... any ceph fs administrative command I try to run hangs >> forever and kills monitors in the background - sometimes they come back, on >> a couple of occasions I had to manua

[ceph-users] Minimum requirements to mount luminous cephfs ?

2017-09-27 Thread Yoann Moulin
Hello, I am trying to mount a cephfs filesystem from a fresh Luminous cluster. With the latest kernel, 4.13.3, it works: > $ sudo mount.ceph > iccluster041.iccluster,iccluster042.iccluster,iccluster054.iccluster:/ /mnt > -v -o name=container001,secretfile=/tmp/secret > parsing options: name=container001

[ceph-users] Large amount of files - cephfs?

2017-09-27 Thread Josef Zelenka
Hi, we are currently working on a ceph solution for one of our customers. They run a file hosting service and need to store approximately 100 million pictures (thumbnails). Their current code works with FTP, which they use as storage. We thought that we could use cephfs for this, but I am not
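If a flat object store turns out to fit thumbnails better than a POSIX tree, a hedged sketch of writing and reading one object with the rados CLI (pool and object names are placeholders):
  rados -p thumbnails put img_0001_thumb.jpg /tmp/img_0001_thumb.jpg
  rados -p thumbnails get img_0001_thumb.jpg /tmp/img_0001_thumb.copy.jpg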

Re: [ceph-users] "ceph fs" commands hang forever and kill monitors

2017-09-27 Thread John Spray
On Wed, Sep 27, 2017 at 12:15 PM, Richard Hesketh wrote: > As the subject says... any ceph fs administrative command I try to run hangs > forever and kills monitors in the background - sometimes they come back, on a > couple of occasions I had to manually stop/restart a suffering mon. Trying to

[ceph-users] "ceph fs" commands hang forever and kill monitors

2017-09-27 Thread Richard Hesketh
As the subject says... any ceph fs administrative command I try to run hangs forever and kills monitors in the background; sometimes they come back, and on a couple of occasions I had to manually stop/restart a suffering mon. Trying to load the filesystem tab in the ceph-mgr dashboard dumps an erro

Re: [ceph-users] Re install ceph

2017-09-27 Thread Ronny Aasen
On 27 Sep 2017 10:09, Pierre Palussiere wrote: Hi, does anyone know if it's possible to reinstall Ceph on a host and keep the OSDs without wiping the data on them? Hope you can help me. It depends... if you have the journal on the same drive as the OSD, you should be able to eject the drive from a server, conn
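A sketch of the re-activation step for that era's ceph-disk tooling, with /dev/sdX1 as a placeholder for the OSD data partition:
  ceph-disk activate /dev/sdX1    # activate one transplanted OSD partition
  ceph-disk activate-all          # or re-activate everything ceph-disk can find on the host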

[ceph-users] Re install ceph

2017-09-27 Thread Pierre Palussiere
Hi, does anyone know if it's possible to reinstall Ceph on a host and keep the OSDs without wiping the data on them? Hope you can help me. Thanks in advance.