Afaik ceph does not support/work well with bonding.
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg35474.html
(thread: Maybe some tuning for bonded network adapters)
-Original Message-
From: Andreas Herrmann [mailto:andr...@mx20.org]
Sent: Friday, 8 September 2017
Sorry to cut in your thread.
> Have you disabled the FLUSH command for the Samsung ones?
We have a test cluster, currently with only a spinner pool, but we have
SM863s available to create the SSD pool. Is there something specific that
needs to be done for the SM863?
-Original
What would be the best way to get an overview of all client connections?
Something similar to the output of rbd lock list.
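A possible starting point (a sketch, assuming an active MDS named mds.a
and admin access on that host; <pool>/<image> are placeholders):

ceph daemon mds.a session ls    # list CephFS client sessions on this MDS
rbd status <pool>/<image>       # list the watchers (clients) of one RBD image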
cluster:
1 clients failing to respond to capability release
1 MDSs report slow requests
ceph daemon mds.a dump_ops_in_flight
{
"ops": [
Should these messages not be gone in 12.2.0?
2017-08-31 20:49:33.500773 7f5aa1756d40 -1 WARNING: the following
dangerous and experimental features are enabled: bluestore
2017-08-31 20:49:33.501026 7f5aa1756d40 -1 WARNING: the following
dangerous and experimental features are enabled: bluestore
I had this once too. If you update all nodes and then systemctl restart
'ceph-osd@*' on all nodes, you should be fine. But restart the monitors
first, of course.
-Original Message-
From: Thomas Gebhardt [mailto:gebha...@hrz.uni-marburg.de]
Sent: Wednesday, 30 August 2017 14:10
To:
I have some OSDs with these permissions and some without the mgr cap.
What are the correct ones to have for luminous?
osd.0
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
osd.14
caps: [mon] allow profile osd
caps: [osd] allow *
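If the mgr cap is simply missing, it can presumably be added in place
with ceph auth caps (a sketch mirroring the caps of osd.0 above; run it
per affected OSD):

ceph auth caps osd.14 mgr 'allow profile osd' mon 'allow profile osd' osd 'allow *'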
, allow rw path=/nfs
caps: [mon] allow r
caps: [osd] allow rwx pool=fs_meta,allow rwx pool=fs_data
-Original Message-
From: Marc Roos
Sent: Tuesday, 29 August 2017 23:48
To: ceph-users
Subject: [ceph-users] Centos7, luminous, cephfs, .snaps
Where can I find some examples on creating
Now that 12.2.0 is released, how and who should be approached about
applying patches for collectd?
Aug 30 10:40:42 c01 collectd: ceph plugin: JSON handler failed with
status -1.
Aug 30 10:40:42 c01 collectd: ceph plugin:
cconn_handle_event(name=osd.8,i=4,st=4): error 1
Aug 30 10:40:42 c01
Where can I find some examples of creating a snapshot on a directory?
Can I just do mkdir .snaps? I tried with the stock kernel and a 4.12.9-1
http://docs.ceph.com/docs/luminous/dev/cephfs-snapshots/
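Per that page, a snapshot is just a mkdir inside the hidden .snap
directory (note: .snap, not .snaps). A minimal sketch, assuming a mounted
cephfs and the luminous-era enable command (snapshots were still
experimental then):

ceph mds set allow_new_snaps true --yes-i-really-mean-it   # enable once
cd /mnt/cephfs/some/dir        # hypothetical mount point
mkdir .snap/mysnap             # create a snapshot of this directory
ls .snap                       # list snapshots
rmdir .snap/mysnap             # remove the snapshot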
nfs-ganesha-2.5.2-.el7.x86_64.rpm
^
Is this correct?
-Original Message-
From: Marc Roos
Sent: Tuesday, 29 August 2017 11:40
To: amaredia; wooertim
Cc: ceph-users
Subject: Re: [ceph-users] Cephfs fsal + nfs-ganesha + el7/centos7
Ali, Very very nice! I was creating
:29
To: TYLin
Cc: Marc Roos; ceph-us...@ceph.com
Subject: Re: [ceph-users] Cephfs fsal + nfs-ganesha + el7/centos7
Marc,
These rpms (and debs) are built with the latest ganesha 2.5 stable
release and the latest luminous release on download.ceph.com:
http://download.ceph.com/nfs-ganesha/
I just
ceph fs authorize cephfs client.bla /bla rw
Will generate a user with these permissions
[client.bla]
caps mds = "allow rw path=/bla"
caps mon = "allow r"
caps osd = "allow rw pool=fs_data"
With those permissions I cannot mount, I get a permission denied, until
I
I had some issues with the iSCSI software starting too early; maybe this
can give you some ideas.
systemctl show target.service -p After
mkdir /etc/systemd/system/target.service.d
cat << 'EOF' > /etc/systemd/system/target.service.d/10-waitforrbd.conf
[Unit]
After=systemd-journald.socket
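For reference, a complete minimal drop-in might look like this (a
sketch: that the unit to wait for is rbdmap.service is my assumption,
based only on the file name):

mkdir /etc/systemd/system/target.service.d
cat << 'EOF' > /etc/systemd/system/target.service.d/10-waitforrbd.conf
[Unit]
After=rbdmap.service
EOF
systemctl daemon-reload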
Where can you get the nfs-ganesha-ceph rpm? Is there a repository that
has these?
build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.1.1/rpm/el7/BUILD/ceph-12.1.1/src/rocksdb/db/db_impl.cc:343] Shutdown complete
2017-08-09 11:41:25.686088 7f26db8ae100 1 bluefs umount
2017-08-09 11:41:25.705389 7f26db
FYI, when creating these rgw pools, not all of them automatically get
their application enabled.
I created these
ceph osd pool create default.rgw
ceph osd pool create default.rgw.meta
ceph osd pool create default.rgw.control
ceph osd pool create default.rgw.log
ceph osd pool create .rgw.root
ceph osd
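Since luminous the application tag can be set by hand on pools that did
not get it automatically (a sketch; repeat for each pool in the health
warning):

ceph osd pool application enable default.rgw.log rgw
ceph osd pool application enable .rgw.root rgw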
I am not sure if I am the only one having this. But there is an issue
with the collectd plugin and the luminous release. I think I didn’t
have this in Kraken, looks like something changed in the JSON? I also
reported it here https://github.com/collectd/collectd/issues/2343, I
have no idea who
_impl.cc:343] Shutdown complete
2017-08-09 11:41:25.686088 7f26db8ae100 1 bluefs umount
2017-08-09 11:41:25.705389 7f26db8ae100 1 bdev(0x7f26de472e00
/var/lib/ceph/osd/ceph-0/block) close
2017-08-09 11:41:25.944548 7f26db8ae100 1 bdev(0x7f26de2b3a00
/var/lib/ceph/osd/ceph-0/block) close
:31.339235 madvise(0x7f4a02102000, 32768, MADV_DONTNEED) = 0
<0.14>
23552 16:26:31.339331 madvise(0x7f4a01df8000, 16384, MADV_DONTNEED) = 0
<0.19>
23552 16:26:31.339372 madvise(0x7f4a01df8000, 32768, MADV_DONTNEED) = 0
<0.13>
-Original Message-
From: Brad Hubbard
I tried to fix an inconsistent PG by taking osd.12 out, hoping for the
data to be copied to a different OSD, and that that one would then be
used as the 'active' one.
- Would deleting the whole image in the rbd pool solve this? (or would
it fail because of this status)
- Should I have done this rather
:52
To: Marc Roos; ceph-users
Subject: Re: [ceph-users] Pg inconsistent / export_files error -5
It _should_ be enough. What happened in your cluster recently? Power
outage, OSD failures, upgrade, added new hardware, any changes at all?
What is your Ceph version?
On Fri, Aug 4, 2017 at 11:22 AM
I have got a placement group inconsistency, and saw a manual where
you can export the PG and import it on another OSD. But I am getting an
export error on every OSD.
What does this export_files error -5 (EIO, an I/O error) actually mean?
I thought 3 copies should be enough to secure your data.
> PG_DAMAGED
I have an error with a placement group, and seem to only find solutions
based on a FileStore OSD.
http://ceph.com/geen-categorie/ceph-manually-repair-object/
Anybody have a link to how can I do this with a bluestore osd?
/var/log/ceph/ceph-osd.9.log:48:2017-07-31 14:21:33.929855
I would recommend logging into the host and running your commands from a
screen session, so they keep running.
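For example (a sketch of the usual screen workflow; the session name is
arbitrary):

screen -S ceph-maintenance    # start a named session, run commands in it
                              # detach with Ctrl-a d; commands keep running
screen -r ceph-maintenance    # reattach later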
-Original Message-
From: Martin Wittwer [mailto:martin.witt...@datonus.ch]
Sent: Sunday, 23 July 2017 15:20
To: ceph-us...@ceph.com
Subject: [ceph-users] Restore RBD image
I am running 12.1.1, and updated to it on the 18th. So I guess this is
either something else or it was not in the rpms.
-Original Message-
From: Gregory Farnum [mailto:gfar...@redhat.com]
Sent: Friday, 21 July 2017 20:21
To: Marc Roos; ceph-users
Subject: Re: [ceph-users] Ceph
I would like to work on some Grafana dashboards, but since the upgrade
to the luminous rc something seems to have changed in the JSON, and (a
lot of) metrics are not stored in InfluxDB.
Does anyone have an idea when collectd-ceph in the EPEL repo will be
updated? Or is there some
Should we report these?
[840094.519612] ceph[12010]: segfault at 8 ip 7f194fc8b4c3 sp
7f19491b6030 error 4 in libceph-common.so.0[7f194f9fb000+7e9000]
CentOS Linux release 7.3.1611 (Core)
Linux 3.10.0-514.26.2.el7.x86_64 #1 SMP Tue Jul 4 15:04:05 UTC 2017
x86_64 x86_64 x86_64
Thanks! Updating everything indeed resolved this.
-Original Message-
From: Gregory Farnum [mailto:gfar...@redhat.com]
Sent: Tuesday, 18 July 2017 23:01
To: Marc Roos; ceph-users
Subject: Re: [ceph-users] Updating 12.1.0 -> 12.1.1
Yeah, some of the message formats changed (incompati
I just updated packages on one CentOS7 node and am getting these errors.
Does anybody have an idea how to resolve this?
Jul 18 12:03:34 c01 ceph-mon: 2017-07-18 12:03:34.537510 7f4fa1c14e40 -1
WARNING: the following dangerous and experimental features are enabled:
bluestore
Jul 18 12:03:34 c01 ceph-mon:
With ceph auth I have set permissions like below. I can add and delete
objects in the test pool, but cannot set the size of the test pool. What
permission do I need to add for this user to be able to modify the size
of this test pool?
mon 'allow r' mds 'allow r' osd 'allow rwx pool=test'
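Setting the pool size is a monitor-side operation, so the read-only mon
cap is presumably what blocks it. A coarse sketch (this grants more than
strictly needed; client.test is assumed to be the user in question):

ceph auth caps client.test mon 'allow rw' mds 'allow r' osd 'allow rwx pool=test'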
I just updated packages on one CentOS7 node and am getting these errors:
Jul 18 12:03:34 c01 ceph-mon: 2017-07-18 12:03:34.537510 7f4fa1c14e40 -1
WARNING: the following dangerous and experimental features are enabled:
bluestore
Jul 18 12:03:34 c01 ceph-mon: 2017-07-18 12:03:34.537510
We are running on
Linux c01 3.10.0-514.26.2.el7.x86_64 #1 SMP Tue Jul 4 15:04:05 UTC 2017
x86_64 x86_64 x86_64 GNU/Linux
CentOS Linux release 7.3.1611 (Core)
And we didn't have any issues installing/upgrading, but we are not using
ceph-deploy. In fact, I am surprised at how easy it is to install.
When do fixes for bugs like http://tracker.ceph.com/issues/20563 become
available in the rpm repository
(https://download.ceph.com/rpm-luminous/el7/x86_64/)?
I sort of don't get it from this page
http://docs.ceph.com/docs/master/releases/. Maybe something could be
specifically mentioned there about the
No, but we are using Perl ;)
-Original Message-
From: Daniel Davidson [mailto:dani...@igb.illinois.edu]
Sent: Thursday, 13 July 2017 16:44
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Crashes Compiling Ruby
We have a weird issue. Whenever compiling Ruby, and only Ruby, on a
Is it possible to change the cephfs metadata pool? I would like to
lower the number of PGs, and thought about just making a new pool,
copying the old one over, and then renaming them. But I guess cephfs
tracks the pool by id, not by name? How can this best be done?
Thanks
I need a little help with fixing some errors I am having.
After upgrading from Kraken I'm getting incorrect values reported on
placement groups etc. At first I thought it was because I was changing
the public cluster ip address range and modifying the monmap directly.
But after deleting and
Does anyone have an idea why I am getting osd_bytes=0?
ceph daemon mon.c perf dump cluster
{
    "cluster": {
        "num_mon": 3,
        "num_mon_quorum": 3,
        "num_osd": 6,
        "num_osd_up": 6,
        "num_osd_in": 6,
        "osd_epoch": 3593,
        "osd_bytes": 0,
On a test cluster with 994GB used, collectd reports an incorrect
9.3362651136e+10 (93GB) in InfluxDB, where it should be 933GB (or
actually 994GB). Cluster.osdBytes is reported correctly as
3.3005833027584e+13 (30TB).
cluster:
health: HEALTH_OK
services:
mon: 3 daemons,
I have updated a test cluster by just updating the rpms and issuing a
ceph osd require-osd-release, because it was mentioned in the status. Is
there more you need to do?
- update on all nodes the packages
sed -i 's/Kraken/Luminous/g' /etc/yum.repos.d/ceph.repo
yum update
- then on each node
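A sketch of the per-node restarts (assuming the systemd targets shipped
with the ceph packages; monitors before OSDs):

systemctl restart ceph-mon.target       # on the monitor nodes first
systemctl restart ceph-osd.target       # then the OSDs, node by node
ceph osd require-osd-release luminous   # once everything runs luminous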
FYI, 5 or even more years ago I was trying Zabbix, and I noticed that
as the number of monitored hosts increased, the load on the MySQL server
increased. Without being able to recall exactly what was wrong (I think
every sample they took was one insert statement), I do remember that I
got
Just a thought, what about marking connections with iptables and using
that mark with tc?
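A rough sketch of that idea (assumptions: RGW on the default civetweb
port 7480, uplink eth0, and placeholder rates):

iptables -t mangle -A OUTPUT -p tcp --sport 7480 -j MARK --set-mark 10
tc qdisc add dev eth0 root handle 1: htb default 20
tc class add dev eth0 parent 1: classid 1:10 htb rate 100mbit ceil 100mbit
tc class add dev eth0 parent 1: classid 1:20 htb rate 1gbit
tc filter add dev eth0 parent 1: protocol ip handle 10 fw flowid 1:10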
-Original Message-
From: hrchu [mailto:petertc@gmail.com]
Sent: Thursday, 4 May 2017 10:35
To: Marc Roos; ceph-users
Subject: Re: [ceph-users] Limit bandwidth on RadosGW?
Thanks
No experience with it, but why not use Linux for it? Maybe this solution
on every RGW is sufficient; I cannot imagine you need a 3rd party
product for this.
https://unix.stackexchange.com/questions/28198/how-to-limit-network-bandwidth
https://wiki.archlinux.org/index.php/Advanced_traffic_control
Hi Blair,
We are also thinking of using ceph for 'backup'. At the moment we are
using rsync and hardlinks on a drbd setup. But I think things could
speed up when using cephfs, because file information comes from the mds
daemon, so this should save one rsync file lookup, and we expect
If I do a test on a 3-node cluster (1 OSD per node, 2x GbE mode-4
bonded) on a pool with size 1, I see that the first node sends streams
to the 2nd and 3rd node using only one of the bonded adapters.
This is typical for LACP: a single flow always stays on one 'single line
of communication'. Afaik the streams to the
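If spreading flows over both slaves matters, the bond's transmit hash
policy can be broadened (a sketch for an ifcfg-bond0 on CentOS; note
that even then any single TCP flow stays on one link):

BONDING_OPTS="mode=4 miimon=100 xmit_hash_policy=layer3+4"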
I have a 3-node test cluster with one OSD per node, and I write a file
to a pool with size 1. Why doesn't ceph just use the full 110MB/s of
the network (as the default rados bench test does)? Does ceph 'reserve'
bandwidth for other concurrent connections? Can this be tuned?
Putting from ram
I guess it is correct to assume that if you have 11 OSDs you have
around 11x11=121 established connections in your netstat -tanp output?
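A rough way to check on one node (a sketch; each OSD pair can hold
several sockets for heartbeat and cluster traffic, so expect more than
the naive count):

netstat -tanp | grep ceph-osd | grep -c ESTABLISHED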
Is there a doc that describes all the parameters that are published by
collectd-ceph?
Is there maybe a default Grafana dashboard for InfluxDB? I found
something for Graphite and am modifying that.
-Original Message-
From: Patrick McGarry [mailto:pmcga...@redhat.com]
Sent:
For a test cluster we would like to use some 5400rpm and 7200rpm drives.
Is it advisable to customize the configuration as described on this
page, or is the speed difference so small that this should only be done
when adding SSDs to the same OSD node?
We are going to set up a test cluster with kraken using CentOS7, and we
obviously would like to stay as close as possible to their repositories.
If we need to install the 4.1.4 kernel or later, is there a
ceph-recommended repository to choose? Like, for instance, the elrepo
4.9ml/4.4lt?
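If elrepo is acceptable, the mainline/longterm kernels install like this
(a sketch; assumes the elrepo repositories are already configured, and
this is not an official ceph recommendation):

yum --enablerepo=elrepo-kernel install kernel-ml   # ~4.9 mainline
yum --enablerepo=elrepo-kernel install kernel-lt   # or the 4.4 longterm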
I would start with CentOS7, because if you get into problems you can
always buy a Red Hat license and get support.
> On Thu, Dec 29, 2016 at 6:20 AM, Andre Forigato
>
> wrote:
>>
>> Hello,
>>
>> I'm starting to study Ceph for implementation in our company.
>>
>>
Is it possible to rsync to the ceph object store with something like
this Amazon tool?
https://aws.amazon.com/customerapps/1771
I have created a swift user, and can mount the object store with
cloudfuse, and can create files in the default pool .rgw.root
How can I have my test user go to a different pool and not use the
default .rgw.root?
Thanks,
Marc
I am looking a bit at ceph on a single node. Does anyone have experience
with cloudfuse?
Do I need to use the rados-gw? Does it even work with ceph?
Hi,
I want to know the best practices to start or stop all OSDs of a node
with infernalis.
Before, with init, we used "/etc/init.d/ceph start"; now with systemd I
have a unit per OSD: "systemctl start ceph-osd@171.service".
Where is the global one?
Thanks in advance!
ting_services.html
>
> The following should work:
>
> systemctl start ceph-osd*
>
> On 26/11/15 12:46, Marc Boisis wrote:
>>
>> Hi,
>>
>> I want to know what are the best practices to start or stop all OSDs of a
>> node with infernalis.
>>
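Besides the wildcard, grouped targets may also be available, depending
on the packaging (hedged: unit names as shipped with infernalis-era
systemd files):

systemctl stop ceph-osd.target    # all OSDs on this node
systemctl start ceph.target       # all ceph daemons on this node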
upgrade due to systemd.
Regards,
Marc
On 07/30/2015 03:54 PM, Sage Weil wrote:
As time marches on it becomes increasingly difficult to maintain proper
builds and packages for older distros. For example, as we make the
systemd transition, maintaining the kludgey sysvinit and udev support for
centos6
flush the cache. This only means
that the files you'd want cached will have to be pulled back in after
that and you may lose the performance advantage for a little while after
each backup.
Hope that helps, don't hesitate with further inquiries!
Marc
On 05/05/2015 11:05 AM, Götz Reinicke
, 2015 at 2:20 AM, Marc m...@shoowin.de wrote:
Hi everyone,
I am curious about the current state of the roadmap as well. Alongside the
already asked question Re vmware support, where are we at with cephfs'
multiMDS stability and dynamic subtree partitioning?
Zheng has fixed a ton of bugs
Hi everyone,
I am curious about the current state of the roadmap as well. Alongside
the already asked question Re vmware support, where are we at with
cephfs' multiMDS stability and dynamic subtree partitioning?
Thanks
Regards,
Marc
On 04/22/2015 07:04 AM, Ray Sun wrote:
Sage,
I have
I'm trying to create my first ceph disk from a client named bjorn:
[ceph@bjorn ~]$ rbd create foo --size 512000 -m helga -k
/etc/ceph/ceph.client.admin.keyring
[ceph@bjorn ~]$ sudo rbd map foo --pool pool_ulr_1 --name client.admin -m
helga.univ-lr.fr -k /etc/ceph/ceph.client.admin.keyring
rbd:
In dmesg:
[ 5981.113104] libceph: client14929 fsid cd7dd0a4-075c-4317-8aed-0758085ea9d2
[ 5981.115853] libceph: mon0 10.10.10.64:6789 session established
My systems are RHEL 7 with 3.10.0-229.el7.x86_64 kernel
On Thu, Mar 12, 2015 at 3:33 PM, Marc Boisis marc.boi...@univ-lr.fr wrote:
I’m
:/root
On 12 March 2015 at 13:42, Ilya Dryomov idryo...@gmail.com wrote:
On Thu, Mar 12, 2015 at 3:33 PM, Marc Boisis marc.boi...@univ-lr.fr wrote:
I’m trying to create my first ceph disk from a client named bjorn :
[ceph@bjorn ~]$ rbd create foo --size 512000 -m helga -k
/etc/ceph
,
addr: 10.0.0.1:6789\/0}]}}
Almost forgot: this is .80.6
Regards,
Marc
Wow... disregard this, I figured it out.
Overly restrictive iptables rules permitted incoming traffic from lo,
but outgoing traffic to lo was blocked... *facepalm*
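For anyone hitting the same thing, the fix in rule form is presumably
just (a sketch):

iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT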
On 13/10/2014 15:09, Marc wrote:
Hi,
I've deployed a number of clusters before on debian boxes, but now I
have to create one
.
In that case, did you actually deep scrub *everything* in the cluster,
Marc? You'll need to run and fix every PG in the cluster, and the
background deep scrubbing doesn't move through the data very quickly.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Tue, Sep 16, 2014 at 11
, acting [1,8,7]
pg 3.185 is active+clean+inconsistent, acting [6,1,4]
14 scrub errors
Any input on this?
Thanks in advance,
Marc
— it wasn't popping up in the same searches and I'd
forgotten that was so recent.
In that case, did you actually deep scrub *everything* in the cluster,
Marc? You'll need to run and fix every PG in the cluster, and the
background deep scrubbing doesn't move through the data very quickly.
-Greg
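A sketch of kicking off a deep scrub of every PG (assumes the CLI of
that era; this generates a lot of background I/O):

ceph pg dump pgs_brief 2>/dev/null | awk '/^[0-9]+\./ {print $1}' | \
  while read pg; do ceph pg deep-scrub "$pg"; done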
Hi,
to avoid confusion I would name the host entries in the crush map
differently. Make sure these host names can be resolved to the correct
boxes though (/etc/hosts on all the nodes). You're also missing a new
rule entry (also shown in the link you mentioned).
Lastly, and this is *extremely*
Hi,
we are trying to set up a web cluster; maybe somebody can give a hint.
3 webservers with TYPO3.
The TYPO3 cache on central storage with ceph.
Would this be useful within a VMware cluster?
I need something different from a central NFS store. That is too slow.
Regards
Marc
tried restarting the OSDs one by one (which
is why the recovery enter_time is so recent).
Any help would be greatly appreciated.
Regards,
Marc
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
:
INFO:ceph-create-keys:ceph-mon is not in quorum: u'probing'
So, the monitors don't start up (stuck probing) because they can't
communicate because they need new keys, and the keys cannot be generated
because there's no quorum. Is there a way to fix this?
Kind regards,
Marc
actually correct
(fsck)?
The ceph-create-keys message is a red herring and will stop as soon as
the monitors do get into a quorum.
-Greg
On Tuesday, April 29, 2014, Marc m...@shoowin.de wrote:
Hi,
still working on a troubled ceph cluster running .61.2-1raring
consisting of (currently) 4 monitors
a store.db with ~200GB
On 29/04/2014 19:05, Gregory Farnum wrote:
On Tue, Apr 29, 2014 at 9:48 AM, Marc m...@shoowin.de wrote:
'ls' on the respective stores in /var/lib/ceph/mon/ceph.X/store.db
returns a list of files (i.e. still present), fsck seems fine. I did
notice that one of the nodes
30 seconds passing. So I guess this sync is
somehow triggering the 30s mon sync timeout. Should it do that? Should I
try increasing the sync timeout? Advice on how to proceed is very welcome.
Thanks,
Marc
On 01/03/2014 17:51, Martin B Nielsen wrote:
Hi,
You can't form quorum with your
welcome.
Thanks,
Marc
On 01/03/2014 17:51, Martin B Nielsen wrote:
Hi,
You can't form quorum with your monitors on cuttlefish if you're mixing
pre-0.61.5 with any 0.61.5+ (https://ceph.com/docs/master/release-notes/,
see the section about 0.61.5).
I'd advise installing pre-0.61.5, form quorum
mons to a cluster running
.61.2 on the 2 alive mons and all the osds? I'm guessing this is the
last shot at trying to rescue the cluster without downtime.
KR,
Marc
running on servers that also host OSDs, especially
since there seem to be communication issues between those versions... or
am I reading this wrong?
KR,
Marc
On 28/02/2014 01:32, Gregory Farnum wrote:
On Thu, Feb 27, 2014 at 4:25 PM, Marc m...@shoowin.de wrote:
Hi,
I was handed a Ceph cluster
Hello,
I got a weird upload issue with Ceph dumpling (0.67.2) and I don't
know if someone can help me pinpoint my problem...
Basically, if I try to upload a 1GB file, as soon as my upload is
completed, Apache returns a 500 error... no problem if I upload a 900MB
file or less,