On Fri, Mar 23, 2018 at 8:49 PM, Yan, Zheng wrote:
> On Fri, Mar 23, 2018 at 9:50 PM, Josh Haft wrote:
> > On Fri, Mar 23, 2018 at 12:14 AM, Yan, Zheng wrote:
> >>
> >> On Fri, Mar 23, 2018 at 5:14 AM, Josh Haft wrote:
> >> > Hello!
> >> >
> >> > I'm running Ceph 12.2.2 with one primary and on
How can we deal with that? I see some comments that large images without
object-map can be very painful to delete.
The only way for now is to use rbd-nbd or rbd-fuse.
k
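(To illustrate the rbd-nbd route: mapping the image through rbd-nbd lets
librbd, which understands object-map, serve the I/O. Pool and image names
below are only examples:

    rbd-nbd map rbd/test        # prints the attached device, e.g. /dev/nbd0
    mkfs.xfs /dev/nbd0          # use it like any other block device
    rbd-nbd unmap /dev/nbd0
)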
On 03/24/2018 07:22 AM, Marc Roos wrote:
Thanks! I got it working, although I had to change the date to "date -R
-u", because I got the "RequestTimeTooSkewed" error.
I also had to enable buckets=read on the account that was already able
to read and write via Cyberduck; I don't get that.
rad
On Fri, Mar 23, 2018 at 9:50 PM, Josh Haft wrote:
> On Fri, Mar 23, 2018 at 12:14 AM, Yan, Zheng wrote:
>>
>> On Fri, Mar 23, 2018 at 5:14 AM, Josh Haft wrote:
>> > Hello!
>> >
>> > I'm running Ceph 12.2.2 with one primary and one standby MDS. Mounting
>> > CephFS via ceph-fuse (to leverage quot
I believe this popped up recently and is a container bug. It's forcibly
resetting the list of modules to run on every start.
On Sat, Mar 24, 2018 at 5:44 AM Subhachandra Chandra
wrote:
> Hi,
>
>We used ceph-ansible to install/update our Ceph cluster config where
> all the ceph daemons run as container
Thanks! I got it working, although I had to change the date to "date -R
-u", because I got the "RequestTimeTooSkewed" error.
I also had to enable buckets=read on the account that was already able
to read and write via Cyberduck; I don't get that.
radosgw-admin caps add --uid='test$test1' --
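(For reference, my reading of the truncated command above, given the
buckets=read remark, is something along these lines; the exact caps string
is an assumption:

    radosgw-admin caps add --uid='test$test1' --caps="buckets=read"
)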
Hi All,
I'm starting with ceph and faced a problem while using object-map
root@ceph-mon-1:/home/tgonzaga# rbd create test -s 1024 --image-format 2
--image-feature exclusive-lock
root@ceph-mon-1:/home/tgonzaga# rbd feature enable test object-map
root@ceph-mon-1:/home/tgonzaga# rbd list
test
root@c
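(A note for anyone hitting the same thing: after enabling object-map on an
existing image, the map may be flagged invalid until rebuilt, and fast-diff
is usually enabled alongside it. A rough sketch, using the same image name:

    rbd feature enable test fast-diff
    rbd object-map rebuild test
    rbd info test               # check the features/flags lines
)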
Hi,
We used ceph-ansible to install/update our Ceph cluster config where all
the ceph daemons run as containers. In mgr.yml I have the following config
###
# MODULES #
###
# Ceph mgr modules to enable, current modules available are:
status,dashboard,localpool,restful,zabbix,p
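(If the container really is resetting the module list on each start, you can
inspect and re-enable modules by hand with the Luminous mgr CLI; the module
name here is just an example:

    ceph mgr module ls
    ceph mgr module enable zabbix
)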
On Fri, Mar 23, 2018 at 7:45 PM, Perrin, Christopher (zimkop1)
wrote:
> Hi,
>
> Last week our MDSs started failing one after another, and could not be
> started anymore. After a lot of tinkering I found out that MDSs crashed after
> trying to rejoin the cluster. The only solution I found that, l
Thank you for getting back to me so quickly.
Your suggestion of adding the config change in ceph.conf was a great one. That
helped a lot. I didn't realize that the client would need to be updated and
thought that it was a cluster-side modification only.
Something else that I missed was giving
Luminous addresses it with a mgr plugin that actively changes the weights
of OSDs to balance the distribution. In addition to having PGs distributed
well so your OSDs hold an equal amount of data, what also matters is which
OSDs are primary. If you're running into a lot of latency on specific OSDs
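(For reference, turning that plugin on in Luminous looks roughly like the
following; crush-compat is one of the available modes, upmap is another:

    ceph mgr module enable balancer
    ceph balancer mode crush-compat
    ceph balancer on
)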
A lot of videos from Ceph Days and such pop up on the Ceph YouTube
channel [1].
[1] https://www.youtube.com/channel/UCno-Fry25FJ7B4RycCxOtfw
On Fri, Mar 23, 2018 at 5:28 AM Serkan Çoban wrote:
> Hi,
>
> Where can I find slides/videos of the conference?
> I already tried (1), but cannot view the
The first thing I looked at was whether you had any snapshots/clones in your
pools, but that count is 0 for you. Second, I would see whether you have
orphaned objects from deleted RBDs. You could check that by comparing
a list of the rbd 'block_name_prefix' for all of the rbds in the pool with
t
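(A rough sketch of that comparison, assuming the pool is simply named 'rbd';
adjust names to your setup:

    # collect the block_name_prefix of every known image
    for img in $(rbd ls -p rbd); do
        rbd info "rbd/$img" | awk '/block_name_prefix/ {print $2}'
    done | sort -u > known_prefixes
    # collect the data-object prefixes actually present in the pool
    rados -p rbd ls | grep -oE '^rbd_data\.[0-9a-f]+' | sort -u > seen_prefixes
    # prefixes with no matching image are candidate orphans
    comm -13 known_prefixes seen_prefixes
)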
Just to note the "magic" of object-map... If you had a 50TB RBD with
object-map and 100% of the RBD in use, rbd rm would take the same amount of
time to delete it as it would on a brand-new, empty 50TB RBD that does not
have object-map enabled. Removing that many objects
just take
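(In other words, something as simple as

    time rbd rm rbd/huge-image      # hypothetical image name

should show comparable wall-clock time in both cases, because the
per-object deletes dominate either way.)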
Just moving the OSD is indeed the right thing to do and the crush map will
update when the OSDs start up on the new host. The only "gotcha" is if you
do not have your journals/WAL/DBs on the same device as your data. In that
case, you will need to move both devices to the new server for the OSD t
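(Concretely, the move usually amounts to no more than this; OSD id and
hostnames are hypothetical:

    systemctl stop ceph-osd@7       # on the old host
    # physically move the data device (plus journal/WAL/DB device, if separate)
    systemctl start ceph-osd@7      # on the new host
    ceph osd tree                   # OSD should now appear under the new host
)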
On Friday, 23 March 2018 at 12:14 +0100, Ilya Dryomov wrote:
> On Fri, Mar 23, 2018 at 11:48 AM, wrote:
> > The stock kernel from Debian is perfect
> > Spectre / Meltdown mitigations are worthless from a Ceph point of
> > view,
> > and should be disabled (again, strictly from a Ceph point of vie
The removed snaps list is also in the osd map. It does get truncated over
time into ranges and such, and it is definitely annoying, but it is
needed for some of the internals of Ceph. I don't remember what they are,
but that was the gist of the answer I got back when we were working on some
bugs
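(If you want to look at it yourself, the interval set shows up in the pool
entries of the osdmap dump:

    ceph osd dump | grep removed_snaps
)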
PGs per pool also has a lot to do with how much data each pool will have.
If 1 pool will have 90% of the data, it should have 90% of the PGs. If it
will be common for you to create and delete pools (not usually the case, and
probably something you can handle more simply), then you can aim to start at a
minim
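(A worked example of that rule of thumb, with made-up numbers:

    # 60 OSDs, 3x replication, target ~100 PGs per OSD
    # total budget: 60 * 100 / 3 = 2000 PGs across all pools
    # pool with 90% of the data: 0.9 * 2000 = 1800, round to 2048
    # remaining pools share the rest, e.g. 128 + 64
    ceph osd pool create bigpool 2048 2048
)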
Hi,
I think it's no longer a problem since the async messenger became the default.
The difference between jemalloc and tcmalloc is minimal now.
Regards,
Alexandre
- Original Message -
From: "Xavier Trilla"
To: "ceph-users"
Cc: "Arnau Marcé"
Sent: Friday, 23 March 2018 13:34:03
Subject: [ceph-users] Luminous
On Fri, Mar 23, 2018 at 3:01 PM, wrote:
> Ok ^^
>
> For CephFS, as far as I know, quotas are not supported in kernel space.
> This is not specific to luminous, though.
quota support is coming, hopefully in 4.17.
Thanks,
Ilya
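(For context: on clients that do support quotas, such as ceph-fuse, they are
set via extended attributes on a directory; path and size here are made up:

    setfattr -n ceph.quota.max_bytes -v 100000000 /mnt/cephfs/somedir
    getfattr -n ceph.quota.max_bytes /mnt/cephfs/somedir
)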
Ok ^^
For CephFS, as far as I know, quotas are not supported in kernel space.
This is not specific to luminous, though.
On 03/23/2018 03:00 PM, Ilya Dryomov wrote:
> On Fri, Mar 23, 2018 at 2:18 PM, wrote:
>> On 03/23/2018 12:14 PM, Ilya Dryomov wrote:
>>> luminous cluster-wide feature bits ar
On Fri, Mar 23, 2018 at 2:18 PM, wrote:
> On 03/23/2018 12:14 PM, Ilya Dryomov wrote:
>> luminous cluster-wide feature bits are supported since kernel 4.13.
>
> ?
>
> # uname -a
> Linux abweb1 4.14.0-0.bpo.3-amd64 #1 SMP Debian 4.14.13-1~bpo9+1
> (2018-01-14) x86_64 GNU/Linux
> # rbd info truc
>
On Fri, Mar 23, 2018 at 12:14 AM, Yan, Zheng wrote:
>
> On Fri, Mar 23, 2018 at 5:14 AM, Josh Haft wrote:
> > Hello!
> >
> > I'm running Ceph 12.2.2 with one primary and one standby MDS. Mounting
> > CephFS via ceph-fuse (to leverage quotas), and enabled ACLs by adding
> > fuse_default_permission
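(For anyone following along, the documented pair of options for ACLs over
ceph-fuse, shown here as a ceph.conf sketch:

    [client]
    fuse_default_permissions = false
    client_acl_type = posix_acl
)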
On 03/23/2018 12:14 PM, Ilya Dryomov wrote:
> luminous cluster-wide feature bits are supported since kernel 4.13.
?
# uname -a
Linux abweb1 4.14.0-0.bpo.3-amd64 #1 SMP Debian 4.14.13-1~bpo9+1
(2018-01-14) x86_64 GNU/Linux
# rbd info truc
rbd image 'truc':
size 20480 MB in 5120 objects
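(If the image carries features the kernel client cannot handle, the usual
workaround is to strip them, e.g.:

    rbd feature disable truc object-map fast-diff deep-flatten

whether that applies here depends on which features 'truc' actually has.)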
Hi,
Does anybody have information about using jemalloc with Luminous? From what
I've seen on the mailing list and online, BlueStore crashes when using jemalloc.
We've been running Ceph with jemalloc since Hammer, as performance with
tcmalloc was terrible (we run a quite big full-SSD cluster) and
On 2018-03-21 19:50, Frederic BRET wrote:
> Hi all,
>
> The context :
> - Test cluster aside production one
> - Fresh install on Luminous
> - choice of Bluestore (coming from Filestore)
> - Default config (including wpq queuing)
> - 6 nodes SAS12, 14 OSD, 2 SSD, 2 x 10Gb nodes, far more Gb at eac
Hi,
Last week our MDSs started failing one after another, and could not be started
anymore. After a lot of tinkering I found out that the MDSs crashed after trying to
rejoin the cluster. The only solution I found that let them start again was
resetting the journal via cephfs-journal-tool. Now I ha
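(Roughly, that reset sequence is the following; exporting a backup first is
generally advised, since the reset discards journal events:

    cephfs-journal-tool journal export backup.bin
    cephfs-journal-tool journal reset
)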
Hi,
I encountered one more two days ago, and I opened a ticket:
http://tracker.ceph.com/issues/23431
In our case it is more like 1 every two weeks, for now...
And it is affecting different OSDs on different hosts.
Dietmar
On 03/23/2018 11:50 AM, Oliver Freyermuth wrote:
> Hi together,
>
> I
On Fri, Mar 23, 2018 at 11:48 AM, wrote:
> The stock kernel from Debian is perfect
> Spectre / Meltdown mitigations are worthless from a Ceph point of view,
> and should be disabled (again, strictly from a Ceph point of view)
>
> If you need the luminous features, using the userspace implementatio
On Wed, Mar 21, 2018 at 6:50 PM, Frederic BRET wrote:
> Hi all,
>
> The context :
> - Test cluster aside production one
> - Fresh install on Luminous
> - choice of Bluestore (coming from Filestore)
> - Default config (including wpq queuing)
> - 6 nodes SAS12, 14 OSD, 2 SSD, 2 x 10Gb nodes, far mor
On Fri, Mar 23, 2018 at 4:05 AM, Anthony D'Atri wrote:
> FYI: I/O limiting in combination with OpenStack 10/12 + Ceph doesn't work
> properly. Bug: https://bugzilla.redhat.com/show_bug.cgi?id=1476830
>
>
> That's an OpenStack bug, nothing to do with Ceph. Nothing stops you from
> using virsh to t
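(Presumably something like virsh's blkdeviotune; domain and disk names here
are made up:

    virsh blkdeviotune guest1 vda --total-iops-sec 500 --live
)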
Hi together,
I notice exactly the same, also the same addresses, Luminous 12.2.4, CentOS 7.
Sadly, logs are equally unhelpful.
It happens randomly on an OSD about once per 2-3 days (of the 196 total OSDs we
have). It's also not a container environment.
Cheers,
Oliver
Am 08.03.2018 u
The stock kernel from Debian is perfect.
Spectre / Meltdown mitigations are worthless from a Ceph point of view,
and should be disabled (again, strictly from a Ceph point of view).
If you need the luminous features, using the userspace implementations
is required (librbd via rbd-nbd or qemu, libcephf
Hi all,
I'm using Luminous 12.2.4 on all servers, with Debian stock kernel.
I use the kernel cephfs/rbd on the client side, and have a choice of:
* stock Debian 9 kernel 4.9: LTS, Spectre/Meltdown mitigations in
place, field-tested, probably old libceph inside.
* backports kernel 4.14: probabl
Hi,
Where can I find slides/videos of the conference?
I already tried (1), but cannot view the videos.
Serkan
1- http://www.itdks.com/eventlist/detail/1962
Hi,
>>Did the fs have lots of mount/umount?
not too much; I have around 300 ceph-fuse clients (12.2.2 && 12.2.4) and the ceph
cluster is 12.2.2.
Maybe when clients reboot, but that doesn't happen too often.
>> We recently found a memory leak
>> bug in that area https://github.com/ceph/ceph/pull/20148