Hello!
I want to mount CephFS with a dedicated user in order to avoid putting the
admin key on every client host.
Therefore I created a user account:
ceph auth get-or-create client.mtyadm mon 'allow r' mds 'allow rw path=/MTY' osd 'allow rw pool=hdb-backup,allow rw pool=hdb-backup_metadata' -o /
Hi,
I think there is a missing permission for the mds.
Try adding 'allow r' to the mds permissions.
Something like
ceph auth get-or-create client.mtyadm mon 'allow r' mds 'allow r,
allow rw path=/MTY' osd 'allow rw pool=hdb-backup,allow rw
pool=hdb-backup_metadata' -o /etc/ceph/ceph.client.mtyadm.ke
Check your kernel version; prior to 4.9 you needed to allow read on the root
path:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-December/014804.html
> On 24 July 2017, at 12:36, c.mo...@web.de wrote:
>
> Hello!
>
> I want to mount CephFS with a dedicated user in order to avoid p
I would recommend logging into the host and running your commands from a
screen session, so they keep running.
-Original Message-
From: Martin Wittwer [mailto:martin.witt...@datonus.ch]
Sent: Sunday, 23 July 2017 15:20
To: ceph-us...@ceph.com
Subject: [ceph-users] Restore RBD image
Hi,
I'm testing Ceph in my environment, but the exclusive-lock feature doesn't work
for me, or maybe I'm doing something wrong.
I'm testing on two machines and created one image with exclusive-lock enabled.
If I understood correctly, with this feature only one machine can mount and
write to the image at a ti
You will need to pass the "exclusive" option when running "rbd map"
(and be running kernel >= 4.12).
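For example, a minimal invocation might look like this (the pool and image
names are just placeholders, not taken from your setup):
rbd map --exclusive rbd/test-image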
On Mon, Jul 24, 2017 at 8:42 AM, wrote:
> I'm testing Ceph in my environment, but the exclusive-lock feature doesn't
> work for me, or maybe I'm doing something wrong.
>
> I testing in two mach
THX.
Mount is working now.
The auth list for user mtyadm is now:
client.mtyadm
key: AQAlyXVZEfsYNRAAM4jHuV1Br7lpRx1qaINO+A==
caps: [mds] allow r,allow rw path=/MTY
caps: [mon] allow r
caps: [osd] allow rw pool=hdb-backup,allow rw pool=hdb-backup_metadata
24 July 2017, 13:25, "Дмитрий Глушенок"
Increasing the size of an image only issues a single write to update
the image size metadata in the image header. That operation is atomic
and really shouldn't be able to do what you are saying. Regardless,
since this is a grow operation, just re-run the resize to update the
metadata again.
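For example, re-running the grow would just be something like this (the 100G
size and the image name are assumptions for illustration only):
rbd resize --size 100G rbd/myimage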
On Mo
Hello!
I created CephFS according to the documentation:
$ ceph osd pool create hdb-backup
$ ceph osd pool create hdb-backup_metadata
$ ceph fs new
I can mount this pool with user admin:
ld4257:/etc/ceph # mount -t ceph 10.96.5.37,10.96.5.38,10.96.5.38:/ /mnt/cephfs
-o name=admin,secretfile=/etc
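For reference, a complete sequence might look like the sketch below (the PG
counts and the filesystem name are assumptions, not taken from the commands
above):
ceph osd pool create hdb-backup 128
ceph osd pool create hdb-backup_metadata 128
ceph fs new cephfs hdb-backup_metadata hdb-backup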
I am seeing the same issue on upgrade to Luminous v12.1.0 from Jewel.
I am not using Keystone or OpenStack either and my radosgw daemon
hangs as well. I have to restart it to resume processing.
2017-07-24 00:23:33.057401 7f196096a700 0 ERROR: keystone revocation
processing returned error r=-22
20
On Mon, Jul 24, 2017 at 4:52 PM, wrote:
> Hello!
>
> I created CephFS according to documentation:
> $ ceph osd pool create hdb-backup
> $ ceph osd pool create hdb-backup_metadata
> $ ceph fs new
>
> I can mount this pool with user admin:
> ld4257:/etc/ceph # mount -t ceph 10.96.5.37,10.96.5.
Hi,
I'm running a Ceph cluster which I started back in the Bobtail era and have kept running/upgrading over the years. It has three nodes, each running one MON, 10 OSDs and one MDS. The cluster has one MDS active and two standby. Machines are 8-core Opterons with 32 GB of ECC RAM each. I'm using it
On Fri, Jul 21, 2017 at 10:23 PM Daniel K wrote:
> Luminous 12.1.0(RC)
>
> I replaced two OSD drives(old ones were still good, just too small), using:
>
> ceph osd out osd.12
> ceph osd crush remove osd.12
> ceph auth del osd.12
> systemctl stop ceph-osd@osd.12
> ceph osd rm osd.12
>
> I later fo
Are the clocks dramatically out of sync? Basically any bug in signing could
cause that kind of log message, but I think a simple time-sync issue, so they're
using different keys, is the most common cause.
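A quick way to check, assuming the nodes run ntpd (the tooling may differ on
your hosts):
ntpq -p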
On Mon, Jul 24, 2017 at 9:36 AM wrote:
> Hi,
>
> I'm running a Ceph cluster which I started back in bobtail
I was able to export the PGs using ceph-objectstore-tool and import
them to the new OSDs.
I moved some other OSDs from the bare metal on a node into a virtual
machine on the same node and was surprised at how easy it was. Install Ceph
in the VM (using ceph-deploy) -- stop the OSD and dismount
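A rough sketch of that export/import, assuming OSD 12 as source, OSD 24 as
destination and PG 1.2f purely as placeholders, with both OSD daemons stopped:
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 --pgid 1.2f --op export --file /tmp/pg.1.2f.export
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-24 --op import --file /tmp/pg.1.2f.export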
Yeah, the objects being degraded here are a consequence of stuff being
written while backfill is happening; it doesn't last long because it's only
a certain range of them.
I didn't think that should upgrade to the PG being marked degraded, but I may
be misinformed. Still planning to dig through that b
The perf counter dump added an "avgtime" field which the collectd-5.7.2
ceph plugin does not understand; it prints a warning and exits:
ceph plugin: ds %s was not properly initialized.
Does anybody know of a patch to collectd which might help?
Thanks,
Yang
Hi there,
Thanks for the answer!
I thought that there was something strange during the resize operation
because it took too long, but normally it's instant. The logs don't
contain anything about the bug.
A few hours later I tried to set the size to 100G again, but all files
were lost.
I had to resto
Please raise a tracker for rgw and also provide some additional journalctl
logs and info (ceph version, OS version, etc.):
http://tracker.ceph.com/projects/rgw
On Mon, Jul 24, 2017 at 9:03 AM, Vaibhav Bhembre
wrote:
> I am seeing the same issue on upgrade to Luminous v12.1.0 from Jewel.
> I am not
On Mon, Jul 24, 2017 at 6:35 PM, wrote:
> Hi,
>
> I'm running a Ceph cluster which I started back in bobtail age and kept it
> running/upgrading over the years. It has three nodes, each running one MON,
> 10 OSDs and one MDS. The cluster has one MDS active and two standby.
> Machines are 8-core O
List --
I have a 4-node cluster running on bare metal and have a need to use the
kernel client on 2 nodes. As I read that you should not run the kernel client on
a node that runs an OSD daemon, I decided to move the OSD daemons into a VM
on the same device.
Original host is stor-vm2 (bare metal), new hos
Hi
I am having a hard time finding documentation on the correct way to
update ceph.conf in a running cluster.
The change I want to introduce is this:
osd crush update on start = false
I tried to do it through the tell utility like this:
ceph tell osd.82 injectargs --no-osd-crush-update-on-sta
On Mon, Jul 24, 2017 at 10:33 AM, moftah moftah wrote:
> Hi
>
> I am having a hard time finding documentation on the correct way to
> update ceph.conf in a running cluster.
>
> The change I want to introduce is this:
> osd crush update on start = false
>
> I tried to do it through the tell uti
The method I have used is to 1) edit ceph.conf, 2) use ceph-deploy config
push, 3) restart monitors
Example:
roger@desktop:~/ceph-cluster$ vi ceph.conf    # make ceph.conf change
roger@desktop:~/ceph-cluster$ ceph-deploy --overwrite-conf config push
nuc{1..3}
[ceph_deploy.conf][DEBUG ] found confi
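The restart step (3) might then look something like this, assuming systemd and
the host names from the push above (the exact unit name can vary by release):
ssh nuc1 sudo systemctl restart ceph-mon.target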
You might be able to read these objects using s3fs if you're using a
RadosGW. But like John mentioned, you cannot write them as objects into
the pool and read them as files from the filesystem.
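If you do go the s3fs route, a mount sketch might look like this (the bucket
name, endpoint URL and credentials file are assumptions):
s3fs mybucket /mnt/s3 -o passwd_file=~/.passwd-s3fs -o url=http://rgw.example.com -o use_path_request_style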
On Mon, Jul 24, 2017, 12:07 PM John Spray wrote:
> On Mon, Jul 24, 2017 at 4:52 PM, wrote:
> > Hell
Two questions:
1. At the moment I use kernel 4.10; the exclusive-lock does not work in kernel
versions below 4.12, right?
2. The command with exclusive would be this?
rbd map --exclusive test-xlock3
Thanks a Lot,
Marcelo
On 24/07/2017, Jason Dillaman wrote:
> You
Hello,
Can you kindly share your experience with the built-in FSCache support with
Ceph?
Interested in knowing the following:
- Are you using FSCache in a production environment?
- How large is your Ceph deployment?
- If with CephFS, how many Ceph clients are using FSCache?
- Which version of Ceph a
I'm in the process of cleaning up a test that an internal customer did on our
production cluster that produced over a billion objects spread across 6000
buckets. So far I've been removing the buckets like this:
printf %s\\n bucket{1..6000} | xargs -I{} -n 1 -P 32 radosgw-admin bucket rm
--buck
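A hedged sketch of what the full invocation could be (the --bucket and
--purge-objects flags are my assumptions about the truncated part):
printf %s\\n bucket{1..6000} | xargs -I{} -n 1 -P 32 radosgw-admin bucket rm --bucket={} --purge-objects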
I hope someone else can answer your question better, but in my case I found
something like this helpful to delete objects faster than I could through
the gateway:
rados -p default.rgw.buckets.data ls | grep 'replace this with pattern
matching files you want to delete' | xargs -d '\n' -n 200 rados
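Completing that sketch (the trailing rm and the grep pattern are assumptions;
please double-check before running anything destructive):
rados -p default.rgw.buckets.data ls | grep 'your pattern here' | xargs -d '\n' -n 200 rados -p default.rgw.buckets.data rm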
Hi Ilya, hi Gregory,
all hosts/clients run proper NTP. Still, it could be that the hwclock of those machines has significant drift, so after client boot-up in the morning the time is quite far off until NTP resyncs the clock. Maybe that offset drift before NTP resync is causing the issue. I'll have a
Wouldn't doing it that way cause problems since references to the objects
wouldn't be getting removed from .rgw.buckets.index?
Bryan
From: Roger Brown
Date: Monday, July 24, 2017 at 2:43 PM
To: Bryan Stillwell , "ceph-users@lists.ceph.com"
Subject: Re: [ceph-users] Speeding up garbage collect
For a permanent fix, you need a patched kernel or an upgrade to kernel 4.9 or
higher (which has the fix): http://tracker.ceph.com/issues/17191
Using [mds] allow r gives users "read" permission on the "/" share, i.e. any
directories/files under "/". For example, "/dir1", "/dir2" or "/MTY" can be
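On a 4.9+ kernel you could then drop the blanket read cap and keep only the
path-restricted one; a sketch, assuming the same user and pools as earlier in
this thread:
ceph auth caps client.mtyadm mon 'allow r' mds 'allow rw path=/MTY' osd 'allow rw pool=hdb-backup,allow rw pool=hdb-backup_metadata'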
Your google-fu hasn't failed -- that is a missing feature. I've opened
a new feature-request tracker ticket to get support for that.
[1] http://tracker.ceph.com/issues/20762
On Fri, Jul 21, 2017 at 5:04 PM, Daniel K wrote:
> Once again my google-fu has failed me and I can't find the 'correct' wa
On Mon, Jul 24, 2017 at 2:15 PM, wrote:
> Two questions:
>
> 1. At the moment I use kernel 4.10; the exclusive-lock does not work in
> kernel versions below 4.12, right?
Exclusive lock should work just fine under 4.10 -- but you are trying
to use the new "exclusive" map option that is
I haven't seen much talk about direct integration with oVirt. Obviously it
kind of comes down to oVirt being interested in participating. But, is the
only hold-up getting development time toward an integration or is there
some kind of friction between the dev teams?
I was as much as told by Red Hat in a sales call that they push Gluster
for oVirt/RHEV and Ceph for OpenStack, and don't have any plans to
change that in the short term (note this was about a year ago, I
think, so this isn't super current information).
I seem to recall the hangup was that oVirt h
I think if you want to delete through gc, increase this:
OPTION(rgw_gc_processor_max_time, OPT_INT, 3600) // total run time for a single gc processor work
and decrease this:
OPTION(rgw_gc_processor_period, OPT_INT, 3600) // gc processor cycle time
Or, I think, if there is some option to bypass the gc
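For example, a ceph.conf sketch of those two settings (the section name and the
values are only illustrative assumptions):
[client.rgw.gateway]
rgw gc processor max time = 7200
rgw gc processor period = 1800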
Funny enough, I just had a call with Redhat where the OpenStack engineer
was voicing his frustration that there wasn't any movement on RBD for
oVirt. This is important to me because I'm building out a user-facing
private cloud that just isn't going to be big enough to justify OpenStack
and its admi
oVirt 3.6 added Cinder/RBD integration [1] and it looks like they are
currently working on integrating Cinder within a container to simplify
the integration [2].
[1]
http://www.ovirt.org/develop/release-management/features/storage/cinder-integration/
[2]
http://www.ovirt.org/develop/release-mana
Thanks for pointing to some documentation. I'd seen that and it is
certainly an option. From my understanding, with a Cinder deployment, you'd
have the same failure domains and similar performance characteristics to an
oVirt + NFS + RBD deployment. This is acceptable. But, the dream I have in
my he
I created an issue: http://tracker.ceph.com/issues/20763
Regards,
Martin
From: Vasu Kulkarni
Date: Monday, 24 July 2017 at 19:26
To: Vaibhav Bhembre
Cc: Martin Emrich , "ceph-users@lists.ceph.com"
Subject: Re: [ceph-users] Luminous radosgw hangs after a few hours
Please raise a tracker for
Looks like Wei found and fixed this in
https://github.com/ceph/ceph/pull/16495
Thanks, Wei!
This has been causing crashes for us since May. Guess it shows that not
many folks use Kraken with lifecycles yet, but more certainly will with
Luminous.
-Ben
On Fri, Jul 21, 2017 at 7:19 AM, Daniel Gryni