Write to a subdirectory on the RBD, so if it is not mounted, the
directory will be missing and you get a "no such file" error.
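For example, a check along these lines relies on the subdirectory only existing when the RBD is actually mounted (the mount point and subdirectory names here are just placeholders, not taken from the thread):

  # 'health' is a directory created once on the RBD filesystem itself
  if echo ok > /mnt/rbd0/health/probe 2>/dev/null; then
      echo "RBD mounted and writable"
  else
      echo "RBD missing or not writable"   # write fails with 'No such file or directory' when unmounted
  fi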
Ronny Aasen
On 25.08.2017 18:04, David Turner wrote:
Additionally, solely testing if you can write to the path could give a
false sense of security if the path is wri
Oh god
root@bhs1-mail03-ds03:~# zgrep "lease" /var/log/ceph/*.gz
/var/log/ceph/ceph-mon.bhs1-mail03-ds03.log.2.gz:2017-08-24 06:39:22.384112
7f44c60f1700 1 mon.bhs1-mail03-ds03@2(peon).paxos(paxos updating c
8973251..8973960) lease_timeout -- calling new election
/var/log/ceph/ceph-mon.bhs1-mail0
Hi David,
The 'data sync init' command won't touch any actual object data, no.
Resetting the data sync status will just cause a zone to restart a full
sync of the --source-zone's data changes log. This log only lists which
buckets/shards have changes in them, which causes radosgw to consider
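For reference, the sequence being discussed would look roughly like this on the secondary zone (the zone name is a placeholder; restarting the gateways afterwards is a common step, not something stated in this excerpt):

  radosgw-admin data sync status --source-zone=us-east   # inspect current sync state
  radosgw-admin data sync init --source-zone=us-east     # reset sync status; triggers a full sync of the data changes log
  # then restart the radosgw daemons so they pick up the reset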
Additionally, solely testing if you can write to the path could give a
false sense of security if the path is writable when the RBD is not
mounted. It would write a file to the system drive and you would see it as
successful.
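One way to avoid that false positive (paths are placeholders, not from the original mail) is to confirm the path really is a mount point before trusting the write test:

  # only attempt the write if something is actually mounted at /mnt/rbd0
  if mountpoint -q /mnt/rbd0 && echo ok > /mnt/rbd0/probe 2>/dev/null; then
      echo "RBD mounted and writable"
  else
      echo "RBD not mounted (or not writable)"
  fi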
On Fri, Aug 25, 2017 at 2:27 AM Adrian Saul
wrote:
> If you are monit
Hi David,
The 'radosgw-admin sync error list' command may be useful in debugging
sync failures for specific entries. For users, we've seen some sync
failures caused by conflicting user metadata that was only present on
the secondary site. For example, a user that had the same access key or
em
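A rough example of inspecting those failures (the trim step is optional and not taken from the original report):

  radosgw-admin sync error list            # show entries that failed to sync, with error messages
  radosgw-admin sync error trim            # clear old entries once the underlying conflict is fixed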
Hi David,
I never solved this issue as I couldn't figure out what was wrong. I just
went ahead and removed the second site and will set up a new multisite
whenever Luminous is out, hoping the weirdness has been sorted out by then.
Sorry I didn't have any good answers :/
/andreas
On 24 Aug 2017
Hello people,
Some days ago I read about the commands rbd lock add and rbd lock
remove. Will these commands continue to be maintained in Ceph in future versions,
or is exclusive-lock the preferred way to do locking in Ceph, with these commands
becoming deprecated?
Thanks a lot,
Marcelo
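For context, the commands in question look like this (image and lock names are made up for illustration; this is not an answer about their deprecation status):

  rbd lock add rbd/myimage mylock          # take an advisory lock on the image
  rbd lock ls rbd/myimage                  # list current lockers (shows the locker id, e.g. client.4123)
  rbd lock remove rbd/myimage mylock client.4123
  # the exclusive-lock image feature is the alternative being asked about:
  rbd feature enable rbd/myimage exclusive-lock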
> On 25 August 2017 at 15:00, Matthew Vernon wrote:
>
>
> Hi,
>
> We have a medium-sized (2520 osds, 42 hosts, 88832 pgs, 15PB raw
> capacity) Jewel cluster (on Ubuntu), and in normal operation, our mon
> store size is around the 1.2G mark. I've noticed, though, that when
> doing larger
Hi Cephians,
I wonder if someone has seen this before.
Every day between 3:09am and 3:19am I'm seeing these entries in the
logfile on ONE of the two OSD nodes of my cluster.
3:09 is the time when I start backing up my qemu/kvm VM images, which
are stored as objects in Ceph. For that, I first "syn
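The rest of the procedure is cut off here; a common pattern for this kind of backup (image name and paths are placeholders, not the poster's actual setup) is snapshot-then-export:

  rbd snap create rbd/vm-disk1@backup-$(date +%F)                      # crash-consistent snapshot
  rbd export rbd/vm-disk1@backup-$(date +%F) /backup/vm-disk1.img      # copy the snapshot out of the cluster
  rbd snap rm rbd/vm-disk1@backup-$(date +%F)                          # clean up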
> On 25 Aug 2017, at 18:57, donglifec...@gmail.com wrote:
>
> ZhengYan,
>
> Yes, after shutting down osd.1 the process in D status disappears. What is the reason for this?
> When this problem (D status) comes up, ceph health is OK. How should I
> deal with this problem?
>
maybe the disk underneath osd.1 is
Hi,
We have a medium-sized (2520 osds, 42 hosts, 88832 pgs, 15PB raw
capacity) Jewel cluster (on Ubuntu), and in normal operation, our mon
store size is around the 1.2G mark. I've noticed, though, that when
doing larger rebalances, they can grow really very large (up to nearly
70G, which is n
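The message is truncated here; for reference, the usual knobs for shrinking a bloated mon store (the mon name is a placeholder) are:

  ceph tell mon.a compact                  # trigger a store compaction on one mon
  # or in ceph.conf, to compact at every mon start:
  [mon]
  mon_compact_on_start = true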
Hello Christian.
2017-08-24 22:43 GMT-03:00 Christian Balzer :
>
> Hello,
>
> On Thu, 24 Aug 2017 14:49:24 -0300 Guilherme Steinmüller wrote:
>
> > Hello Christian.
> >
> > First of all, thanks for your considerations, I really appreciate it.
> >
> > 2017-08-23 21:34 GMT-03:00 Christian Balzer :
ZhengYan,
Yes, after shutting down osd.1 the process in D status disappears. What is the reason for this?
When this problem (D status) comes up, ceph health is OK. How should I deal
with this problem?
Thanks a lot.
donglifec...@gmail.com
From: Yan, Zheng
Date: 2017-08-25 17:17
To: donglifec...@gmail.com
Hello,
I tried creating tiering with EC pools (an EC pool as a cache for another
EC pool) and ended up with "Error ENOTSUP: tier pool 'ecpool' is an ec
pool, which cannot be a tier". Now that EC pools support overwrites and are
directly supported by RBD and CephFS, it may be worth having tiering using EC
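The excerpt is cut off; for reference, on Luminous or later the direct-use path (no cache tier) looks roughly like this, with pool and image names as placeholders:

  ceph osd pool set ecpool allow_ec_overwrites true        # required before RBD/CephFS can write to the EC pool
  rbd create --size 10G --data-pool ecpool rbd/myimage     # image metadata in a replicated pool, data in the EC pool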
On Thu, Aug 24, 2017 at 6:44 PM, David Turner wrote:
>> min_size 1
> STOP THE MADNESS. Search the ML to realize why you should never use a
> min_size of 1.
This is a (completely understandable) misunderstanding. The
"min_size" in a crush rule is a different thing to the min_size in a
pool. In
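The explanation is truncated; the distinction can be seen directly (the pool name is a placeholder):

  ceph osd pool get rbd min_size           # the pool-level min_size the warning is about
  ceph osd pool set rbd min_size 2
  # the crush-rule min_size lives in the decompiled crush map instead:
  ceph osd getcrushmap -o crush.bin
  crushtool -d crush.bin -o crush.txt      # each "rule" block has its own min_size/max_size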
> On 25 Aug 2017, at 16:23, donglifec...@gmail.com wrote:
>
> ZhengYan,
>
> [root@ceph-radosgw-lb-backup cephfs]# ps aux | grep D
> USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
> root 578 0.0 0.0 203360 3248 ? Ssl Aug24 0:00
> /usr/sbin/gssproxy -D
ZhengYan,
[root@ceph-radosgw-lb-backup cephfs]# ps aux | grep D
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 578 0.0 0.0 203360 3248 ? Ssl Aug24 0:00
/usr/sbin/gssproxy -D
root 865 0.0 0.0 82552 6104 ? Ss Aug24 0:00 /usr/sbi
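As an aside, 'ps aux | grep D' also matches things like the '-D' flag of gssproxy above; a narrower way to list only processes actually in uninterruptible sleep would be:

  ps -eo state,pid,comm,wchan | awk '$1 ~ /^D/'   # state D = uninterruptible sleep (usually blocked on I/O)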
ZhengYan,
I will test this problem again.
Thanks a lot.
donglifec...@gmail.com
From: Yan, Zheng
Date: 2017-08-25 16:12
To: donglifecomm
CC: ceph-users
Subject: Re: [ceph-users] cephfs, kernel(4.12.8) client version hung(D status),
ceph version 0.94.9
> On 24 Aug 2017, at 17:40, donglif
> On 24 Aug 2017, at 17:40, donglifec...@gmail.com wrote:
>
> ZhengYan,
>
> I ran into a problem; the steps to reproduce it are outlined below:
>
> 1. create 30G file test823
> 2. host1 client(kernel 4.12.8)
> cat /mnt/cephfs/a/test823 > /mnt/cephfs/a/test823-backup
> ls -al /mnt/cephfs/a/*
>
Hi,
Is qemu-img working for you on the VM host machine?
Did you create the vol? What does 'rbd ls' say?
Did you feed the secret (and the key created with ceph auth) to virsh?
Cheers,
Tom
p.s.: you may need another package for qemu-rbd support - I did so on the
latest Debian (stretch)
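A rough checklist for those points (pool, volume, and client names are placeholders):

  qemu-img info rbd:rbd/myvol              # does qemu have rbd support and can it reach the image?
  rbd ls rbd                               # was the volume actually created?
  virsh secret-list                        # is the ceph secret defined for libvirt?
  virsh secret-set-value --secret <uuid> --base64 "$(ceph auth get-key client.libvirt)"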