Re: [ceph-users] Cephfs cannot mount with kernel client

2019-08-14 Thread Serkan Çoban
Hi, I just double-checked the stack trace and can confirm it is the same as in the tracker. Compaction also worked for me; I can now mount CephFS without problems. Thanks for the help, Serkan. On Tue, Aug 13, 2019 at 6:44 PM Ilya Dryomov wrote: > On Tue, Aug 13, 2019 at 4:30 PM Serkan Çoban…
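The excerpt does not show the exact compaction command that was used; purely as a hedged illustration, a store compaction can be triggered on a running monitor or OSD like this (daemon IDs are examples):

    ceph tell mon.a compact
    ceph daemon osd.0 compact    # run on the host where osd.0 lives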

Re: [ceph-users] Cephfs cannot mount with kernel client

2019-08-13 Thread Serkan Çoban
I am out of the office right now, but I am pretty sure it was the same stack trace as in the tracker. I will confirm tomorrow. Any workarounds? On Tue, Aug 13, 2019 at 5:16 PM Ilya Dryomov wrote: > On Tue, Aug 13, 2019 at 3:57 PM Serkan Çoban wrote: >> I checked /var/lo…

Re: [ceph-users] Cephfs cannot mount with kernel client

2019-08-13 Thread Serkan Çoban
On Tue, Aug 13, 2019 at 3:42 PM Ilya Dryomov wrote: > On Tue, Aug 13, 2019 at 12:36 PM Serkan Çoban wrote: >> Hi, I just installed Nautilus 14.2.2 and set up CephFS on it. The OS is CentOS 7.6 everywhere. From a client I can mount the CephFS with ceph-fu…

[ceph-users] Cephfs cannot mount with kernel client

2019-08-13 Thread Serkan Çoban
Hi, I just installed Nautilus 14.2.2 and set up CephFS on it. The OS is CentOS 7.6 everywhere. From a client I can mount the CephFS with ceph-fuse, but I cannot mount it with the Ceph kernel client. The mount fails with "mount error 110 connection timeout" and I can see "libceph: corrupt full osdmap (-12) epoch 2759 off 656" in…

Re: [ceph-users] Ceph Multi Mds Trim Log Slow

2019-04-28 Thread Serkan Çoban
In the thread at [1] it is suggested to bump up "mds log max segments = 200" and "mds log max expiring = 150". 1- http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-December/023490.html On Sun, Apr 28, 2019 at 2:58 PM Winger Cheng wrote: > Hello Everyone, > I have a CephFS cluster which has 4…
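A hedged sketch of how those values could be applied at runtime (injectargs syntax; note that mds_log_max_expiring was removed in later releases, so the second option may not exist on newer versions):

    ceph tell mds.* injectargs '--mds_log_max_segments=200'
    ceph tell mds.* injectargs '--mds_log_max_expiring=150'

The same values can instead be placed in the [mds] section of ceph.conf to make them persistent.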

Re: [ceph-users] [Bluestore] Some of my osd's uses BlueFS slow storage for db - why?

2019-02-22 Thread Serkan Çoban
>Where did you get those numbers? I would like to read more if you can point to a link. Just found the link: https://github.com/facebook/rocksdb/wiki/Leveled-Compaction On Fri, Feb 22, 2019 at 4:22 PM Serkan Çoban wrote: >> These sizes are roughly 3 GB, 30 GB, 300 GB. Anything i…

Re: [ceph-users] [Bluestore] Some of my osd's uses BlueFS slow storage for db - why?

2019-02-22 Thread Serkan Çoban
>>These sizes are roughly 3 GB, 30 GB, 300 GB. Anything in between those sizes is pointless. Only ~3 GB of SSD will ever be used out of a 28 GB partition. Likewise a 240 GB partition is also pointless, as only ~30 GB will be used. Where did you get those numbers? I would like to read more if you can…
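A rough sketch of where those figures come from, assuming the commonly cited BlueStore RocksDB defaults of a ~256 MB level base and a 10x per-level multiplier (actual defaults can differ by release):

    L1 ≈ 256 MB
    L2 ≈ 10 × L1 ≈ 2.5 GB
    L3 ≈ 10 × L2 ≈ 25 GB
    L4 ≈ 10 × L3 ≈ 250 GB

A level is only placed on the fast device if it fits there entirely, so the useful DB partition sizes cluster around the cumulative sums: roughly 3 GB, 30 GB and 300 GB.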

Re: [ceph-users] Ceph in OSPF environment

2019-01-21 Thread Serkan Çoban
If the ToR switches are L3, then you cannot use LACP. On Mon, Jan 21, 2019 at 4:02 PM Burkhard Linke wrote: > Hi, > I'm curious. What is the advantage of OSPF in your setup over e.g. LACP bonding of both links? > Regards, > Burkhard

Re: [ceph-users] Possible data damage: 1 pg inconsistent

2018-12-18 Thread Serkan Çoban
>I will also see a few uncorrected read errors in smartctl. Uncorrected read errors in smartctl are a reason for us to replace the drive. On Wed, Dec 19, 2018 at 6:48 AM Frank Ritchie wrote: > Hi all, > I have been receiving alerts for: > Possible data damage: 1 pg inconsistent > almost…
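A hedged example of how such errors can be spotted (attribute names vary by drive vendor, and the device path is only an example):

    smartctl -A /dev/sdX | egrep -i 'Reported_Uncorrect|Offline_Uncorrectable|Current_Pending_Sector'
    smartctl -l error /dev/sdX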

Re: [ceph-users] Huge latency spikes

2018-11-18 Thread Serkan Çoban
…Cache if Bad BBU > Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU > On 11/18/2018 12:45 AM, Serkan Çoban wrote: >> Is the write cache on the SSDs enabled on all three servers? Can you check them? On Sun, Nov 18, 2018 at 9:05 AM Alex Litvak…

Re: [ceph-users] Huge latency spikes

2018-11-17 Thread Serkan Çoban
…on occasion. > Cache, RAID, and battery situation is the same. > On 11/17/2018 11:38 PM, Serkan Çoban wrote: >> 10 ms w_await for an SSD is too much. How is that SSD connected to the system? >> Is there a RAID card installed in this system? What is the RAID mode? On Sun, Nov…

Re: [ceph-users] Huge latency spikes

2018-11-17 Thread Serkan Çoban
>10 ms w_await for an SSD is too much. How is that SSD connected to the system? Is there a RAID card installed in this system? What is the RAID mode? On Sun, Nov 18, 2018 at 8:25 AM Alex Litvak wrote: > Here is another snapshot. I wonder if this write IO wait is too big. > Device: rrqm/s…
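A hedged sketch of how the drive and controller cache settings can be checked (which tool applies depends on the HBA/RAID model; device paths are examples):

    hdparm -W /dev/sdX                           # SATA drive write cache state (1 = enabled)
    sdparm --get=WCE /dev/sdX                    # same information via the SCSI layer (SAS/SATA)
    storcli /c0/vall show all | grep -i cache    # MegaRAID virtual-drive cache policy, if such a controller is used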

[ceph-users] cephday berlin slides

2018-11-16 Thread Serkan Çoban
Hi, Does anyone know if slides/recordings will be available online? Thanks, Serkan

Re: [ceph-users] odd osd id in ceph health

2018-10-24 Thread Serkan Çoban
I think you don't have enough hosts for your EC pool's CRUSH rule. If your failure domain is host, then you need at least ten hosts. On Wed, Oct 24, 2018 at 9:39 PM Brady Deetz wrote: > My cluster (v12.2.8) is currently recovering and I noticed this odd OSD ID in ceph health detail:…
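To verify this, the pool's EC profile and CRUSH rule can be inspected roughly like this (pool, profile and rule names are examples):

    ceph osd pool get ecpool erasure_code_profile
    ceph osd erasure-code-profile get myprofile    # check k, m and crush-failure-domain
    ceph osd crush rule dump ecpool_rule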

Re: [ceph-users] Verifying the location of the wal

2018-10-21 Thread Serkan Çoban
…using VGs/LVs). The output shows block and block.db, but nothing about wal.db. How can I learn where my WAL lives? > On Sun, Oct 21, 2018 at 12:43 AM Serkan Çoban wrote: >> ceph-bluestore-tool can show you the disk labels. >> ceph-bluestore-tool show-label --…

Re: [ceph-users] CEPH Cluster Usage Discrepancy

2018-10-20 Thread Serkan Çoban
…PM Waterbly, Dan wrote: >>> I get that, but isn't 4 TiB to track 2.45M objects excessive? These numbers seem very high to me. >>> On Sat, Oct 20, 2018 at 10:27 AM -0700, "Ser…

Re: [ceph-users] Verifying the location of the wal

2018-10-20 Thread Serkan Çoban
ceph-bluestore-tool can show you the disk labels: ceph-bluestore-tool show-label --dev /dev/sda1. On Sun, Oct 21, 2018 at 1:29 AM Robert Stanford wrote: > An email from this list stated that the WAL would be created in the same place as the DB, if the DB were specified when running…
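The label can also be read through the OSD's data directory, and the symlinks there show whether a separate WAL exists at all (paths are examples):

    ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-0
    ls -l /var/lib/ceph/osd/ceph-0/block*    # block.wal only appears if a separate WAL device was configured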

Re: [ceph-users] CEPH Cluster Usage Discrepancy

2018-10-20 Thread Serkan Çoban
The 4.65 TiB includes the size of the WAL and DB partitions too. On Sat, Oct 20, 2018 at 7:45 PM Waterbly, Dan wrote: > Hello, > I have inserted 2.45M 1,000-byte objects into my cluster (radosgw, 3x replication). > I am confused by the usage ceph df is reporting and am hoping someone can…
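Purely as an illustration (the real OSD count and partition sizes are not shown in the excerpt), the baseline "used" figure before any objects are stored is roughly:

    used_before_data ≈ number_of_osds × (db_partition_size + wal_partition_size)
    e.g. 150 OSDs × ~31 GiB of DB+WAL ≈ 4.5 TiB reported as used on an empty cluster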

Re: [ceph-users] total_used statistic incorrect

2018-09-20 Thread Serkan Çoban
…I'm just about to go live with this system (in the next couple of weeks) so > I'm trying to start out as clean as possible. > If anyone has any insights I'd appreciate it. > There should be no data in the system yet… unless I'm missing something. > Thanks, > Mike…

Re: [ceph-users] total_used statistic incorrect

2018-09-19 Thread Serkan Çoban
The used space is the WAL+DB size on each OSD. On Wed, Sep 19, 2018 at 3:50 PM Jakub Jaszewski wrote: > Hi, I've recently deployed a fresh cluster via ceph-ansible. I've not yet created pools, but storage is used anyway. > [root@ceph01 ~]# ceph version > ceph version 13.2.1…

Re: [ceph-users] Favorite SSD

2018-09-17 Thread Serkan Çoban
The Intel DC series is also popular, for both NVMe and SATA SSD use cases. https://www.intel.com/content/www/us/en/products/memory-storage/solid-state-drives/data-center-ssds/dc-d3-s4610-series.html On Mon, Sep 17, 2018 at 8:10 PM Robert Stanford wrote: > Awhile back the favorite SSD for Ceph was the Samsung…

Re: [ceph-users] CephFS on a mixture of SSDs and HDDs

2018-09-06 Thread Serkan Çoban
>Is there a way of doing this without running multiple filesystems within the same cluster? Yes, have a look at file and directory layouts: https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html-single/ceph_file_system_guide/index#working-with-file-and-directory-layouts On Thu, Sep 6,…
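A minimal sketch, assuming an SSD-backed data pool named cephfs_ssd_data has already been created and the filesystem is mounted at /mnt/cephfs:

    ceph fs add_data_pool cephfs cephfs_ssd_data
    setfattr -n ceph.dir.layout.pool -v cephfs_ssd_data /mnt/cephfs/fast
    getfattr -n ceph.dir.layout /mnt/cephfs/fast

New files created under /mnt/cephfs/fast inherit the layout and land on the SSD pool; existing files stay in their original pool.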

Re: [ceph-users] active directory integration with cephfs

2018-07-26 Thread Serkan Çoban
You can do it by exporting CephFS via Samba. I don't think any other way exists for CephFS. On Thu, Jul 26, 2018 at 9:12 AM, Manuel Sopena Ballesteros wrote: > Dear Ceph community, > I am quite new to Ceph but trying to learn as quickly as I can. We are deploying our first Ceph…
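A minimal smb.conf sketch using the vfs_ceph module (the share name, CephX user and paths are examples; the Active Directory membership itself is handled by the usual Samba winbind/AD member configuration):

    [cephfs]
        path = /
        vfs objects = ceph
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        kernel share modes = no
        read only = no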

[ceph-users] Device class types for sas/sata hdds

2018-05-13 Thread Serkan Çoban
Hi, Can I create device class types like sata-hdd and sas-hdd and use them? From the docs I understand there are only the ssd, hdd and nvme device classes. I would like ssd, nvme, sata-hdd, sas-hdd. Serkan
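Custom class names are accepted by the CRUSH device-class commands; a hedged sketch (OSD IDs, rule and pool names are examples):

    ceph osd crush rm-device-class osd.10 osd.11           # an existing class must be cleared before it can be changed
    ceph osd crush set-device-class sas-hdd osd.10 osd.11
    ceph osd crush rule create-replicated sas-hdd-rule default host sas-hdd
    ceph osd pool set mypool crush_rule sas-hdd-rule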

Re: [ceph-users] ceph 12.2.5 - atop DB/WAL SSD usage 0%

2018-04-27 Thread Serkan Çoban
rados bench uses a 4 MB block size for IO by default. Try it with an IO size of 4 KB; you will see the SSD being used for write operations. On Fri, Apr 27, 2018 at 4:54 PM, Steven Vacaroaia wrote: > Hi > During rados bench tests, I noticed that HDD usage goes to 100% but SSD stays at (or…
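For example, a 60-second 4 KiB write benchmark could look like this (the pool name is an example):

    rados bench -p testpool 60 write -b 4096 -t 16 --no-cleanup
    rados -p testpool cleanup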

Re: [ceph-users] read_fsid unparsable uuid

2018-04-26 Thread Serkan Çoban
You can try passing only the --block-db /dev/nvme0n1p1 parameter; the WAL will then use the same partition. On Thu, Apr 26, 2018 at 3:43 PM, Kevin Olbrich wrote: > Hi! > Yesterday I deployed 3x SSDs as OSDs fine, but today I get this error when deploying an HDD with separated WAL/DB: > stderr: 2018-04-26…

Re: [ceph-users] Where to place Block-DB?

2018-04-26 Thread Serkan Çoban
…recovery) or as a freshly formatted OSD? > Thank you. > - Kevin > 2018-04-26 12:36 GMT+02:00 Serkan Çoban <cobanser...@gmail.com>: >> >On bluestore, is it safe to move both Block-DB and WAL to this journal NVMe? >> Yes, just specify bloc…

Re: [ceph-users] Where to place Block-DB?

2018-04-26 Thread Serkan Çoban
>On bluestore, is it safe to move both Block-DB and WAL to this journal NVMe? Yes, just specify block.db with ceph-volume and the WAL will also use that partition. You can put 12-18 HDDs per NVMe. >What happens if the NVMe dies? You lose the OSDs backed by that NVMe and need to re-add them to the cluster. On Thu,…
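A minimal ceph-volume sketch (device names are examples); with no separate --block.wal argument the WAL is co-located on the block.db device:

    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1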

Re: [ceph-users] scalability new node to the existing cluster

2018-04-18 Thread Serkan Çoban
…osd recovery sleep hdd > - osd recovery sleep ssd > There are other throttling params you can change, though most defaults are just fine in my environment, and I don't have experience with them. > Good luck, > Hans >> On Apr 18, 2018, at 1:32 PM,…

Re: [ceph-users] scalability new node to the existing cluster

2018-04-18 Thread Serkan Çoban
…On Wed, Apr 18, 2018 at 5:02 PM, Serkan Çoban <cobanser...@gmail.com> wrote: >> You can add the new OSDs with 0 weight and edit the script below to increase the OSD weights instead of decreasing them. >> https://github.com/cernceph/ceph-scripts/blob…

Re: [ceph-users] scalability new node to the existing cluster

2018-04-18 Thread Serkan Çoban
You can add the new OSDs with 0 weight and edit the script below to increase the OSD weights instead of decreasing them. https://github.com/cernceph/ceph-scripts/blob/master/tools/ceph-gentle-reweight On Wed, Apr 18, 2018 at 2:16 PM, nokia ceph wrote: > Hi All, > We are having 5…
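The manual equivalent, sketched with example OSD IDs and weights (the final CRUSH weight usually matches the drive size in TiB):

    ceph osd crush reweight osd.42 0.0          # new OSD kept at weight 0 after being added
    for w in 0.5 1.0 1.5 2.0; do
        ceph osd crush reweight osd.42 $w
        # wait for recovery to finish before the next step
        while ! ceph health | grep -q HEALTH_OK; do sleep 60; done
    done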

Re: [ceph-users] Best way to remove an OSD node

2018-04-17 Thread Serkan Çoban
Here it is: https://github.com/cernceph/ceph-scripts/blob/master/tools/ceph-gentle-reweight On Tue, Apr 17, 2018 at 10:59 AM, Caspar Smit wrote: > Hi John, > > Thanks for pointing out that script, do you have a link to it? I'm not able > to find it. > Just want to look

[ceph-users] cephalocon slides/videos

2018-03-23 Thread Serkan Çoban
Hi, Where can I find slides/videos of the conference? I already tried (1), but cannot view the videos. Serkan 1- http://www.itdks.com/eventlist/detail/1962

[ceph-users] ceph-ansible bluestore lvm scenario

2018-03-09 Thread Serkan Çoban
Hi, I am using ceph-ansible to build a test cluster. I want to know: if I use the lvm scenario with the settings below in osds.yml (data: data-lv1, data_vg: vg2, db: db-lv1, db_vg: vg1), will the WAL also use the db logical volume? I plan to use one NVMe for 10 OSDs, so I am creating 10 LVs…
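The referenced osds.yml settings, laid out as YAML (a sketch for the lvm scenario of that era of ceph-ansible; omitting the wal/wal_vg keys co-locates the WAL with the DB logical volume):

    osd_scenario: lvm
    lvm_volumes:
      - data: data-lv1
        data_vg: vg2
        db: db-lv1
        db_vg: vg1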

Re: [ceph-users] RFC Bluestore-Cluster of SAMSUNG PM863a

2018-02-02 Thread Serkan Çoban
May I ask why you are using ELRepo with CentOS? AFAIK, Red Hat backports all Ceph features to its 3.10 kernels. Am I wrong? On Fri, Feb 2, 2018 at 2:44 PM, Richard Hesketh wrote: > On 02/02/18 08:33, Kevin Olbrich wrote: >> Hi! >> I am planning a new Flash-based…

Re: [ceph-users] "ceph -s" shows no osds

2018-01-05 Thread Serkan Çoban
The answer is in the logs: [mon01][WARNIN] To connect to download.ceph.com insecurely, use `--no-check-certificate'. It would be better to mirror the repos and use them offline… On Fri, Jan 5, 2018 at 12:08 PM, Hüseyin Atatür YILDIRIM <hyildi...@havelsan.com.tr> wrote: > I've upgraded the…

Re: [ceph-users] Ceph as an Alternative to HDFS for Hadoop

2017-12-21 Thread Serkan Çoban
>Also, are there any benchmark comparisons between HDFS and Ceph, specifically around the performance of apps benefiting from data locality? There is no data locality in Ceph, because all data is accessed over the network. On Fri, Dec 22, 2017 at 4:52 AM, Traiano Welcome…

Re: [ceph-users] Steps to stop/restart entire ceph cluster

2017-04-07 Thread Serkan Çoban
The steps below are taken from the Red Hat documentation. Follow this procedure for shutting down the Ceph cluster: 1. Stop the clients from using the RBD images / RADOS Gateway on this cluster, and any other clients. 2. The cluster must be in a healthy state before proceeding. 3. Set the noout,…
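The excerpt is cut off after "noout,"; the flags typically set in that procedure are sketched below (unset them in reverse order when starting the cluster back up):

    ceph osd set noout
    ceph osd set norecover
    ceph osd set norebalance
    ceph osd set nobackfill
    ceph osd set nodown
    ceph osd set pause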

Re: [ceph-users] is it possible using different ceph-fuse version on clients from server

2016-04-21 Thread Serkan Çoban
On 21.04.2016 at 16:22, Serkan Çoban wrote: >> Hi, I would like to install and test the Ceph Jewel release. My servers are RHEL 7.2 b…

[ceph-users] is it possible using different ceph-fuse version on clients from server

2016-04-21 Thread Serkan Çoban
Hi, I would like to install and test the Ceph Jewel release. My servers are RHEL 7.2 but the clients are RHEL 6.7. Is it possible to install the Jewel release on the servers and use Hammer ceph-fuse RPMs on the clients? Thanks, Serkan