Re: [ceph-users] CephFS dropping data with rsync?

2018-06-18 Thread Yan, Zheng
On Sat, Jun 16, 2018 at 12:23 PM Hector Martin wrote:
> I'm at a loss as to what happened here.
>
> I'm testing a single-node Ceph "cluster" as a replacement for RAID and
> traditional filesystems. 9 4TB HDDs, one single (underpowered) server.
> Running Luminous 12.2.5 with BlueStore OSDs.

Re: [ceph-users] Re: how can i remove rbd0

2018-06-18 Thread xiang....@sky-data.cn
Stopping the rbdmap service will unmap all RBDs, and rbd showmapped then shows none.
From: "许雪寒" To: "xiang dai", "ceph-users" Sent: Tuesday, June 19, 2018 11:01:03 AM Subject: Re: how can i remove rbd0
rbd unmap [dev-path]
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
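As a rough sketch of the sequence described above (assuming the stray device is /dev/rbd0 and the host uses systemd):

    # unmap a single mapped device
    rbd unmap /dev/rbd0

    # or stop the rbdmap service, which unmaps every RBD mapped on this host
    systemctl stop rbdmap

    # verify that nothing is mapped any more
    rbd showmapped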

[ceph-users] Re: how can i remove rbd0

2018-06-18 Thread 许雪寒
rbd unmap [dev-path]
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of xiang@sky-data.cn Sent: June 19, 2018 10:52 To: ceph-users Subject: [ceph-users] how can i remove rbd0
Hi all! I ran into a confusing situation:
[root@test]# rbd ls
[root@test]# lsblk
NAME MAJ:MIN RM SIZE

[ceph-users] how can i remove rbd0

2018-06-18 Thread xiang . dai
Hi all! I ran into a confusing situation:
[root@test]# rbd ls
[root@test]# lsblk
NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda        8:0    0 931.5G  0 disk
├─sda1     8:1    0     1G  0 part /boot
├─sda2     8:2    0   200G  0 part
│ ├─root 253:0    0    50G  0 lvm  /
│ └─swap 253:1    0     8G  0 lvm  [SWAP]
└─sda3     8:3    0 186.3G  0 part
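A minimal sketch for tracking down where a stray /dev/rbd0 comes from when `rbd ls` on the default pool shows nothing (the pool name below is a placeholder):

    # show pool, image and snapshot behind each mapped /dev/rbdX on this host
    rbd showmapped

    # the image may simply live in a pool other than the default 'rbd' pool
    ceph osd lspools
    rbd ls <some-other-pool>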

Re: [ceph-users] IO to OSD with librados

2018-06-18 Thread Jialin Liu
Hi Dan, Thanks for the follow-ups. I have just tried running multiple librados MPI applications from multiple nodes, it does show increased bandwidth, with ceph -w, I observed as high as 500MB/sec (previously only 160MB/sec ), I think I can do finer tuning by coordinating more concurrent
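One way to push per-client concurrency further, as a sketch (the pool name 'test' and the concurrency value are assumptions; --run-name keeps parallel bench instances from colliding on object names):

    # 60-second write benchmark with 32 in-flight ops instead of the default 16
    rados bench -p test 60 write -t 32 --run-name client-$(hostname) --no-cleanup

    # watch the aggregate client throughput from any node while the clients run
    ceph -w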

Re: [ceph-users] Ceph Mimic on Debian 9 Stretch

2018-06-18 Thread Sage Weil
On Mon, 18 Jun 2018, Fabian Grünbichler wrote:
> it's of course within your purview as upstream project (lead) to define
> certain platforms/architectures/distros as fully supported, and others
> as best-effort/community-driven/... . there was no clear public
> communication (AFAICT, only the one

Re: [ceph-users] Install ceph manually with some problem

2018-06-18 Thread Michael Kuriger
Don’t use the installer scripts. Try yum install ceph
Mike Kuriger Sr. Unix Systems Engineer T: 818-649-7235 M: 818-434-6195
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Ch Wan Sent:
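For reference, a minimal packaged-install path on CentOS 7 might look like the sketch below (the Storage SIG release package name is an assumption; a repo file pointing at download.ceph.com is an alternative):

    # pull in the Ceph repository definition, then the packages themselves
    yum install -y centos-release-ceph-luminous
    yum install -y ceph

    # sanity check the packaged binary instead of the locally built one
    ceph --version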

Re: [ceph-users] RGW Dynamic bucket index resharding keeps resharding all buckets

2018-06-18 Thread Sander van Schie / True
Thanks, I created the following issue: https://tracker.ceph.com/issues/24551 Sander

Re: [ceph-users] Ceph Mimic on Debian 9 Stretch

2018-06-18 Thread Fabian Grünbichler
On Wed, Jun 13, 2018 at 12:36:50PM +, Sage Weil wrote:
> Hi Fabian, thanks for your quick, and sorry for my delayed response (only having 1.5 usable arms atm).
>
> On Wed, 13 Jun 2018, Fabian Grünbichler wrote:
> > On Mon, Jun 04, 2018 at 06:39:08PM +, Sage Weil wrote:
> > > [adding

[ceph-users] CephFS mount in Kubernetes requires setenforce

2018-06-18 Thread Rares Vernica
Hello, I have a CentOS cluster running Ceph, in particular CephFS. I'm also running Kubernetes on the cluster and using CephFS as a persistent storage for the Kubernetes pods. I noticed that the pods can't read or write on the mounted CephFS volumes unless I do "setenforce 0" on the CentOS hosts.
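Rather than leaving SELinux permissive, one hedged approach is to find the exact denial and toggle a boolean if one exists for this case (the boolean name below is an assumption; check what your container-selinux version actually provides):

    # go back to enforcing mode and reproduce the failure
    setenforce 1

    # look at the recent AVC denials triggered by the pods
    ausearch -m avc -ts recent

    # see whether a CephFS-related boolean is available, then enable it persistently
    getsebool -a | grep -i ceph
    setsebool -P container_use_cephfs on   # assumption: provided by container-selinux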

Re: [ceph-users] PM1633a

2018-06-18 Thread Brian :
Thanks Paul, Wido and Konstantin! If we give them a go I'll share some test results.
On Sat, Jun 16, 2018 at 12:09 PM, Konstantin Shalygin wrote:
> Hello List - anyone using these drives and have any good / bad things
> to say about them?
>
> A few months ago I was asking about PM1725

[ceph-users] VMWARE and RBD

2018-06-18 Thread Steven Vacaroaia
Hi, I read somewhere that VMware is planning to support RBD directly. Does anyone here know more about this, maybe a tentative date / version? Thanks Steven

Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-18 Thread Mike Christie
On 06/15/2018 12:21 PM, Wladimir Mutel wrote:
> Jason Dillaman wrote:
> [1] http://docs.ceph.com/docs/master/rbd/iscsi-initiator-win/
>
>>> I don't use either MPIO or MCS on Windows 2008 R2 or Windows 10
>>> initiator (not Win2016 but hope there is not much difference). I try
>>> to

Re: [ceph-users] RGW Dynamic bucket index resharding keeps resharding all buckets

2018-06-18 Thread Yehuda Sadeh-Weinraub
(resending) Sounds like a bug. Can you open a ceph tracker issue? Thanks, Yehuda
On Mon, Jun 18, 2018 at 7:24 AM, Sander van Schie / True wrote:
> While Ceph was resharding buckets over and over again, the maximum available
> storage as reported by 'ceph df' also decreased by about 20%, while

[ceph-users] upgrading jewel to luminous fails

2018-06-18 Thread Elias Abacioglu
Hi, I'm having some issues trying to upgrade Jewel to Luminous. I've installed the new mon on one of the Jewel mon nodes. Before I start the Luminous mon I remove /var/lib/ceph (because I'm running ceph-container and switching from the Ubuntu image to the CentOS image). The Luminous mon starts, but fails to

Re: [ceph-users] RGW Dynamic bucket index resharding keeps resharding all buckets

2018-06-18 Thread Sander van Schie / True
While Ceph was resharding buckets over and over again, the maximum available storage as reported by 'ceph df' also decreased by about 20%, while usage stayed the same; we have yet to find out where the missing storage went. The decrease stopped once we disabled resharding. Any help would be
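For reference, a sketch of what disabling dynamic resharding and inspecting the reshard queue can look like (the ceph.conf section name depends on how the RGW instance is named, and the bucket name is a placeholder):

    # in ceph.conf on the RGW hosts
    [client.rgw.<instance-name>]
    rgw_dynamic_resharding = false

    # then inspect and, if necessary, cancel queued reshard jobs
    radosgw-admin reshard list
    radosgw-admin reshard cancel --bucket=<bucket-name>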

Re: [ceph-users] IO to OSD with librados

2018-06-18 Thread Dan van der Ster
Hi, Have you tried running rados bench in parallel from several client machines? That would demonstrate the full bandwidth capacity of the cluster. For example, make a test pool with 256 PGs (which will average 16 per OSD on your cluster). Then from several clients at once run `rados bench -p test 60
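A sketch of the suggested test, to be run from each client at roughly the same time (pool name and PG count taken from the advice above):

    # one-time: create a throwaway pool with 256 placement groups
    ceph osd pool create test 256 256

    # on every client: 60 seconds of 4 MB object writes, keeping the objects for a read pass
    rados bench -p test 60 write --no-cleanup

    # optional sequential-read pass, then clean up the benchmark objects
    rados bench -p test 60 seq
    rados -p test cleanup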

Re: [ceph-users] IO to OSD with librados

2018-06-18 Thread Jialin Liu
Hi, To make the problem clearer, here is the configuration of the cluster: The 'problem' I have is the low bandwidth no matter how much I increase the concurrency. I have tried using MPI to launch 322 processes, each calling librados to create a handle and initialize the IO context, and write one

Re: [ceph-users] OSDs too slow to start

2018-06-18 Thread Alfredo Daniel Rezinovsky
On 18/06/18 09:09, Alfredo Deza wrote:
> On Fri, Jun 15, 2018 at 11:59 AM, Alfredo Daniel Rezinovsky wrote:
>> Too long is 120 seconds. The DB is on SSD devices. The devices are fast. The OSD process reads about 800Mb but I cannot be sure from where.
> You didn't mention what version of Ceph you

Re: [ceph-users] performance exporting RBD over NFS

2018-06-18 Thread Janne Johansson
On Mon, 18 June 2018 at 14:55, Marc Boisis wrote:
> Hi,
>
> I want to export RBD over NFS on a 10Gb network. Server and client are
> DELL R620 with 10Gb NICs.
>
> NFS client write bandwidth on the RBD export is only 233MB/s.
>
> My conclusion:
> - rbd write performance is good

Re: [ceph-users] performance exporting RBD over NFS

2018-06-18 Thread Alex Gorbachev
On Mon, Jun 18, 2018 at 8:54 AM, Marc Boisis wrote:
> Hi,
>
> I want to export RBD over NFS on a 10Gb network. Server and client are DELL
> R620 with 10Gb NICs.
> rbd cache is disabled on the server.
>
> NFS server write bandwidth on its RBD is 1196MB/s
>
> NFS client write bandwidth on the RBD

[ceph-users] performance exporting RBD over NFS

2018-06-18 Thread Marc Boisis
Hi, I want to export RBD over NFS on a 10Gb network. Server and client are DELL R620 with 10Gb NICs. rbd cache is disabled on the server. NFS server write bandwidth on its RBD is 1196MB/s. NFS client write bandwidth on the RBD export is only 233MB/s. NFS client write bandwidth on a
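A rough way to separate the RBD side from the NFS side, assuming the RBD-backed filesystem is mounted at /mnt/rbd on the server and exported to /mnt/nfs on the client (both paths are placeholders):

    # on the NFS server: direct writes to the RBD-backed filesystem
    dd if=/dev/zero of=/mnt/rbd/testfile bs=1M count=4096 oflag=direct

    # on the NFS client: the same write pattern through the NFS mount
    dd if=/dev/zero of=/mnt/nfs/testfile bs=1M count=4096 oflag=direct

    # on the server: check whether the export forces synchronous writes (sync vs async)
    exportfs -v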

Re: [ceph-users] OSDs too slow to start

2018-06-18 Thread Alfredo Deza
On Fri, Jun 15, 2018 at 11:59 AM, Alfredo Daniel Rezinovsky wrote:
> Too long is 120 seconds.
>
> The DB is on SSD devices. The devices are fast. The OSD process reads about
> 800Mb but I cannot be sure from where.
You didn't mention what version of Ceph you are using and how you deployed these
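A short sketch of the information that would help here (the OSD id is a placeholder):

    # Ceph release and how the OSDs were created
    ceph --version
    ceph-volume lvm list

    # startup log of one slow OSD since the last boot
    journalctl -u ceph-osd@<id> -b --no-pager | tail -n 200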

[ceph-users] Install ceph manually with some problem

2018-06-18 Thread Ch Wan
Hi, recently I've been trying to build Ceph Luminous on CentOS 7, following the documentation:
> sudo ./install-deps.sh
> ./do_cmake.sh
> cd build && sudo make install
But when I run /usr/local/bin/ceph -v, it fails with this error:
> Traceback (most recent call last):
>   File "/usr/local/bin/ceph",
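One guess at a common cause after a `make install` into /usr/local is that the Python modules the ceph CLI imports are not on the interpreter's path; the sketch below is only a workaround under that assumption, and the site-packages path is an example that depends on the build's install prefix and Python version:

    # see where the build installed its Python modules (path below is only an example)
    ls /usr/local/lib/python2.7/site-packages/ | grep -i ceph

    # point the CLI at them for this shell and retry
    export PYTHONPATH=/usr/local/lib/python2.7/site-packages:$PYTHONPATH
    /usr/local/bin/ceph -v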

Re: [ceph-users] IO to OSD with librados

2018-06-18 Thread Jialin Liu
Thank you Dan. I’ll try it. Best, Jialin NERSC/LBNL
> On Jun 18, 2018, at 12:22 AM, Dan van der Ster wrote:
>
> Hi,
>
> One way you can see exactly what is happening when you write an object
> is with --debug_ms=1.
>
> For example, I write a 100MB object to a test pool: rados
> --debug_ms=1

Re: [ceph-users] IO to OSD with librados

2018-06-18 Thread Dan van der Ster
Hi, One way you can see exactly what is happening when you write an object is with --debug_ms=1. For example, I write a 100MB object to a test pool:
rados --debug_ms=1 -p test put 100M.dat 100M.dat
I pasted the output of this here: https://pastebin.com/Zg8rjaTV In this case, it first gets the
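A sketch of reproducing this and cross-checking which OSDs the object should land on (the pool name 'test' and the object name come from the example above):

    # make a 100 MB test file and write it as a single object with messenger debugging
    dd if=/dev/zero of=100M.dat bs=1M count=100
    rados --debug_ms=1 -p test put 100M.dat 100M.dat 2>&1 | tee rados-put.log

    # confirm which PG and OSDs that object maps to
    ceph osd map test 100M.dat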

[ceph-users] Mimic 13.2 - Segv in ceph-osd

2018-06-18 Thread Steffen Winther Sørensen
List, Just a heads up, I found an osd that did a segv on a CentOS 7.5 node:
# ceph --version
ceph version 13.2.0 (79a10589f1f80dfe21e8f9794365ed98143071c4) mimic (stable)
# cat /etc/centos-release
CentOS Linux release 7.5.1804 (Core)
Jun 17 07:01:18 n3 ceph-osd: *** Caught signal

Re: [ceph-users] IO to OSD with librados

2018-06-18 Thread Jialin Liu
Sorry about the misused term 'OSS: object storage server' (a term often used in the Lustre filesystem); what I meant is 4 hosts, each managing 12 OSDs. Thanks to anyone who may answer any of my questions. Best, Jialin NERSC/LBNL
On Sun, Jun 17, 2018 at 11:29 AM Jialin Liu wrote:
> Hello,
>
> I