Re: [ceph-users] Monitoring a rbd map rbd connection

2017-08-25 Thread Ronny Aasen
Write to a subdirectory on the RBD; if it is not mounted, the directory will be missing and you get a "no such file" error. Ronny Aasen On 25.08.2017 18:04, David Turner wrote: Additionally, solely testing if you can write to the path could give a false sense of security if the path is wri
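
A minimal sketch of the check Ronny describes, assuming a hypothetical mount point /mnt/rbd0 and a marker directory created once while the RBD was mounted (names and paths are illustrative only):

    # one-time setup, while the RBD filesystem is mounted:
    mkdir /mnt/rbd0/.mount-check

    # monitoring probe: fails with "No such file or directory" when the RBD is not mounted
    touch /mnt/rbd0/.mount-check/heartbeat && echo "rbd mounted" || echo "rbd NOT mounted"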

Re: [ceph-users] lease_timeout - new election

2017-08-25 Thread Webert de Souza Lima
Oh god root@bhs1-mail03-ds03:~# zgrep "lease" /var/log/ceph/*.gz /var/log/ceph/ceph-mon.bhs1-mail03-ds03.log.2.gz:2017-08-24 06:39:22.384112 7f44c60f1700 1 mon.bhs1-mail03-ds03@2(peon).paxos(paxos updating c 8973251..8973960) lease_timeout -- calling new election /var/log/ceph/ceph-mon.bhs1-mail0

Re: [ceph-users] RGW Multisite metadata sync init

2017-08-25 Thread Casey Bodley
Hi David, The 'data sync init' command won't touch any actual object data, no. Resetting the data sync status will just cause a zone to restart a full sync of the --source-zone's data changes log. This log only lists which buckets/shards have changes in them, which causes radosgw to consider
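
For context, a hedged sketch of how such a full resync is typically kicked off on the affected zone (the zone name and service unit are placeholders; check radosgw-admin help on your release before running anything):

    # reset the data sync status against the source zone
    radosgw-admin data sync init --source-zone=us-east

    # restart the gateways so they pick up the reset status and start the full sync
    systemctl restart ceph-radosgw@rgw.$(hostname -s)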

Re: [ceph-users] Monitoring a rbd map rbd connection

2017-08-25 Thread David Turner
Additionally, solely testing if you can write to the path could give a false sense of security if the path is writable when the RBD is not mounted. It would write a file to the system drive and you would see it as successful. On Fri, Aug 25, 2017 at 2:27 AM Adrian Saul wrote: > If you are monit
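
One way to avoid that false positive, offered only as a sketch, is to verify the path really is a mount point before testing writability (the path is a placeholder):

    if mountpoint -q /mnt/rbd0; then
        touch /mnt/rbd0/.mount-check/heartbeat
    else
        echo "CRITICAL: /mnt/rbd0 is not mounted"
    fi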

Re: [ceph-users] RGW Multisite metadata sync init

2017-08-25 Thread Casey Bodley
Hi David, The 'radosgw-admin sync error list' command may be useful in debugging sync failures for specific entries. For users, we've seen some sync failures caused by conflicting user metadata that was only present on the secondary site. For example, a user that had the same access key or em
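
The command referenced above, shown as an illustrative invocation (the exact output format varies between releases):

    # list entries that failed to sync, reported per shard
    radosgw-admin sync error list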

Re: [ceph-users] RGW multisite sync data sync shard stuck

2017-08-25 Thread Andreas Calminder
Hi David, I never solved this issue as I couldn't figure out what was wrong. I just went ahead and removed the second site and will proceed to setup a new multisite whenever luminous is out and hoping the weirdness has been sorted. Sorry I didn't have any good answers :/ /andreas On 24 Aug 2017

[ceph-users] Ceph Lock

2017-08-25 Thread lista2
Hello People, some days ago I read about the commands rbd lock add and rbd lock remove. Will these commands continue to be maintained in future Ceph versions, or is the preferred way to do locking in Ceph the exclusive-lock feature, with these commands being deprecated? Thanks a lot, Marcelo
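
For reference, a sketch of both mechanisms being asked about (pool and image names are placeholders): the advisory lock commands on the one hand, and the exclusive-lock image feature, which librbd manages automatically, on the other:

    # advisory locks, managed by hand
    rbd lock add rbd/myimage mylockid
    rbd lock ls rbd/myimage
    rbd lock remove rbd/myimage mylockid client.4123   # locker id comes from 'rbd lock ls'

    # exclusive-lock as an image feature
    rbd feature enable rbd/myimage exclusive-lock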

Re: [ceph-users] How big can a mon store get?

2017-08-25 Thread Wido den Hollander
> On 25 August 2017 at 15:00, Matthew Vernon wrote: > > > Hi, > > We have a medium-sized (2520 osds, 42 hosts, 88832 pgs, 15PB raw > capacity) Jewel cluster (on Ubuntu), and in normal operation, our mon > store size is around the 1.2G mark. I've noticed, though, that when > doing larger
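
If the store stays large after recovery has finished, a commonly used remedy (hedged; verify the options against your release) is to compact the monitor stores:

    # ask a monitor to compact its store.db
    ceph tell mon.<id> compact

    # or compact automatically at startup: set in the [mon] section of ceph.conf
    # mon_compact_on_start = true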

[ceph-users] OSD: no data available during snapshot

2017-08-25 Thread Dieter Jablanovsky
Hi Cephians, I wonder if someone has seen this before. Every day between 3:09am and 3:19am I'm seeing those entries in the logfile on ONE of the 2 OSD nodes of my cluster. 3:09 is the time where I start to backup my qemu/kvm-VM images, which are stored as objects in CEPH. For that, I first "syn
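
A hedged sketch of the kind of freeze/snapshot/export backup flow described; the VM, image and snapshot names are placeholders, and the exact quiesce mechanism (here qemu-guest-agent via virsh) depends on the setup:

    SNAP=backup-$(date +%F)
    virsh domfsfreeze myvm                      # quiesce the guest filesystem
    rbd snap create rbd/myvm-disk@$SNAP         # point-in-time snapshot of the image
    virsh domfsthaw myvm                        # resume the guest immediately
    rbd export rbd/myvm-disk@$SNAP /backup/myvm-disk-$SNAP.img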

Re: [ceph-users] cephfs, kernel(4.12.8) client version hung(D status), ceph version 0.94.9

2017-08-25 Thread Yan, Zheng
> On 25 Aug 2017, at 18:57, donglifec...@gmail.com wrote: > > ZhengYan, > > Yes, after shutting down osd.1 the D status disappears. What is the reason for this? > When this problem (D status) comes up, ceph health is ok. How should I > deal with this problem? > maybe the disk underneath osd.1 is

[ceph-users] How big can a mon store get?

2017-08-25 Thread Matthew Vernon
Hi, We have a medium-sized (2520 osds, 42 hosts, 88832 pgs, 15PB raw capacity) Jewel cluster (on Ubuntu), and in normal operation, our mon store size is around the 1.2G mark. I've noticed, though, that when doing larger rebalances, they can grow very large (up to nearly 70G, which is n
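
For reference, the size being discussed is the monitor's store.db; a quick way to check it on a mon host (the path assumes the default mon data location):

    du -sh /var/lib/ceph/mon/ceph-$(hostname -s)/store.db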

Re: [ceph-users] [SSD NVM FOR JOURNAL] Performance issues

2017-08-25 Thread Guilherme Steinmüller
Hello Christian. 2017-08-24 22:43 GMT-03:00 Christian Balzer : > > Hello, > > On Thu, 24 Aug 2017 14:49:24 -0300 Guilherme Steinmüller wrote: > > > Hello Christian. > > > > First of all, thanks for your considerations, I really appreciate it. > > > > 2017-08-23 21:34 GMT-03:00 Christian Balzer :

Re: [ceph-users] cephfs, kernel(4.12.8) client version hung(D status), ceph version 0.94.9

2017-08-25 Thread donglifec...@gmail.com
ZhengYan, Yes, after shutting down osd.1 the D status disappears. What is the reason for this? When this problem (D status) comes up, ceph health is ok. How should I deal with this problem? Thanks a lot. donglifec...@gmail.com From: Yan, Zheng Date: 2017-08-25 17:17 To: donglifec...@gmail.com

[ceph-users] EC pool as a tier/cache pool

2017-08-25 Thread Henrik Korkuc
Hello, I tried creating tiering with EC pools (an EC pool as a cache for another EC pool) and ended up with "Error ENOTSUP: tier pool 'ecpool' is an ec pool, which cannot be a tier". Now that EC pools support overwrites, with direct support from RBD and CephFS, it may be worth having tiering using EC
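
For clarity, a sketch of the kind of sequence that produces this error (pool names and pg counts are placeholders; the quoted error text is from the poster, and the cache tier itself currently has to be a replicated pool):

    ceph osd pool create ecbase 64 64 erasure
    ceph osd pool create ecpool 64 64 erasure
    ceph osd tier add ecbase ecpool
    # -> Error ENOTSUP: tier pool 'ecpool' is an ec pool, which cannot be a tier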

Re: [ceph-users] Ruleset vs replica count

2017-08-25 Thread John Spray
On Thu, Aug 24, 2017 at 6:44 PM, David Turner wrote: >> min_size 1 > STOP THE MADNESS. Search the ML to realize why you should never use a > min_size of 1. This is a (completely understandable) misunderstanding. The "min_size" in a crush rule is a different thing to the min_size in a pool. In
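
To illustrate the distinction John is drawing (names are placeholders): the min_size inside a crush rule only bounds the replica counts the rule may be selected for, while the pool's min_size is the number of replicas that must be up before I/O is served, and it is the latter that should not be 1:

    # rule-level bound, seen in a decompiled crush map (crushtool -d)
    rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1          # smallest pool size this rule may serve
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
    }

    # pool-level availability threshold, the one that matters here
    ceph osd pool set mypool min_size 2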

Re: [ceph-users] cephfs, kernel(4.12.8) client version hung(D status), ceph version 0.94.9

2017-08-25 Thread Yan, Zheng
> On 25 Aug 2017, at 16:23, donglifec...@gmail.com wrote: > > ZhengYan, > > [root@ceph-radosgw-lb-backup cephfs]# ps aux | grep D > USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND > root 578 0.0 0.0 203360 3248 ? Ssl Aug24 0:00 > /usr/sbin/gssproxy -D

Re: [ceph-users] cephfs, kernel(4.12.8) client version hung(D status), ceph version 0.94.9

2017-08-25 Thread donglifec...@gmail.com
ZhengYan, [root@ceph-radosgw-lb-backup cephfs]# ps aux | grep D USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 578 0.0 0.0 203360 3248 ? Ssl Aug24 0:00 /usr/sbin/gssproxy -D root 865 0.0 0.0 82552 6104 ? Ss Aug24 0:00 /usr/sbi

Re: [ceph-users] cephfs, kernel(4.12.8) client version hung(D status), ceph version 0.94.9

2017-08-25 Thread donglifec...@gmail.com
ZhengYan, I will test this problem again. Thanks a lot. donglifec...@gmail.com From: Yan, Zheng Date: 2017-08-25 16:12 To: donglifecomm CC: ceph-users Subject: Re: [ceph-users] cephfs, kernel(4.12.8) client version hung(D status), ceph version 0.94.9 > On 24 Aug 2017, at 17:40, donglif

Re: [ceph-users] cephfs, kernel(4.12.8) client version hung(D status), ceph version 0.94.9

2017-08-25 Thread Yan, Zheng
> On 24 Aug 2017, at 17:40, donglifec...@gmail.com wrote: > > ZhengYan, > > I have run into a problem; the steps to reproduce are outlined below: > > 1. create 30G file test823 > 2. host1 client(kernel 4.12.8) > cat /mnt/cephfs/a/test823 > /mnt/cephfs/a/test823-backup > ls -al /mnt/cephfs/a/* >

Re: [ceph-users] libvirt + rbd questions

2017-08-25 Thread Dajka Tamás
Hi, is qemu-img working for you on the VM host machine? Did you create the vol? What does 'rbd ls' say? Did you feed the secret (and the key created with ceph auth) to virsh? Cheers, Tom p.s.: you may need another package for qemu-rbd support - I did so on latest Debian (stretch)
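
A hedged sketch of the checks Tom lists (pool, volume, cephx user and UUID are placeholders; see the libvirt and Ceph documentation for the exact secret XML):

    # can the host reach the pool and image at all?
    rbd ls rbd
    qemu-img info rbd:rbd/myvol:id=libvirt

    # hand the cephx key to libvirt
    ceph auth get-key client.libvirt > client.libvirt.key
    virsh secret-define secret.xml                     # XML declaring a ceph-type secret with a UUID
    virsh secret-set-value --secret <uuid> --base64 $(cat client.libvirt.key)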