Normally I would install ceph-common.rpm and access some rbd image via
rbdmap. What would be the best way to do this on an old el6? There is
not even a luminous el6 on download.ceph.com.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
I honestly do not get what the problem is. Just yum remove the rpm's, dd
your osd drives, and if there is anything left in /var/lib/ceph or
/etc/ceph, rm -R -f those. Do a find / -iname "*ceph*" to check if there
is still something there.
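The find step above can also be scripted; a minimal sketch (the
/var/lib/ceph and /etc/ceph locations match the advice above, extending
the sweep to "/" is up to you) that walks a tree and reports anything
with 'ceph' in the name:

```python
import os

def find_ceph_leftovers(root):
    """Collect paths whose name contains 'ceph', mirroring
    find <root> -iname "*ceph*"."""
    hits = []
    for dirpath, dirnames, filenames in os.walk(root, onerror=lambda e: None):
        for name in dirnames + filenames:
            if "ceph" in name.lower():
                hits.append(os.path.join(dirpath, name))
    return hits

if __name__ == "__main__":
    # usual leftover locations after removing the packages
    for root in ("/var/lib/ceph", "/etc/ceph"):
        for path in find_ceph_leftovers(root):
            print(path)
```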
-Original Message-
To: Samuel Taylor Liston
Cc: ceph-us
Nobody ever used luminous on el6?
Ok thanks Dan for letting me know.
-Original Message-
Cc: ceph-users
Subject: Re: [ceph-users] el6 / centos6 rpm's for luminous?
We had built some rpms locally for ceph-fuse, but AFAIR luminous needs
systemd so the server rpms would be difficult.
-- dan
>
>
> Nobody ever used lumino
>1. The pg log contains 3000 entries by default (on nautilus). These
>3000 entries can legitimately consume gigabytes of ram for some
>use-cases. (I haven't determined exactly which ops triggered this
>today).
How can I check how much ram my pg_logs are using?
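For reference, ceph daemon osd.N dump_mempools reports per-pool memory
usage over the admin socket, including an osd_pglog pool. A sketch that
pulls the pg_log figure out of such a dump; the JSON shape assumed here
(mempool -> by_pool -> osd_pglog) is from memory, verify against your
own dump:

```python
import json

# abbreviated example of 'ceph daemon osd.0 dump_mempools' output
SAMPLE = ('{"mempool": {"by_pool": {'
          '"osd_pglog": {"items": 812345, "bytes": 512345678}, '
          '"buffer_anon": {"items": 1024, "bytes": 4194304}}}}')

def pglog_bytes(dump_json):
    """Return the osd_pglog mempool usage in bytes from a
    'ceph daemon osd.N dump_mempools' JSON dump."""
    pools = json.loads(dump_json)["mempool"]["by_pool"]
    return pools["osd_pglog"]["bytes"]

print("osd_pglog: %.1f MiB" % (pglog_bytes(SAMPLE) / 2**20))
```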
-Original Message
Is it possible to disable the check for 'x pool(s) have no replicas
configured', so I don't have this HEALTH_WARN constantly?
Or is there some other disadvantage of keeping some empty 1x replication
test pools?
I enabled a certificate on my radosgw, but I think I am running into the
problem that the s3 clients are accessing the buckets like
bucket.rgw.domain.com, which fails my cert for rgw.domain.com.
Is there any way to configure that only rgw.domain.com is used?
https://aws.amazon.com/blogs/aws/amazon-s3-path-deprecation-plan-the-rest-of-the-story/
On 10/15/20 2:18 PM, Marc Roos wrote:
>
> I enabled a certificate on my radosgw, but I think I am running into
the
> problem that the s3 clients are accessing the buckets like
> bucket.rgw
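The mismatch discussed above comes down to S3's two addressing styles:
virtual-hosted puts the bucket into the hostname (bucket.rgw.domain.com),
which a certificate for rgw.domain.com does not cover unless it is a
wildcard, while path-style keeps the hostname fixed. A rough sketch of
the two URL forms (s3_url is a hypothetical helper, the domain is the one
from the thread):

```python
def s3_url(endpoint, bucket, key, path_style=True):
    """Build an S3 request URL; path-style keeps the certificate's
    hostname intact, virtual-hosted prepends the bucket to it."""
    if path_style:
        return "https://%s/%s/%s" % (endpoint, bucket, key)
    return "https://%s.%s/%s" % (bucket, endpoint, key)

# virtual-hosted: hostname no longer matches a cert for rgw.domain.com
print(s3_url("rgw.domain.com", "mybucket", "obj", path_style=False))
# path-style: hostname stays rgw.domain.com
print(s3_url("rgw.domain.com", "mybucket", "obj", path_style=True))
```

Most S3 clients have a switch for this (boto, for instance, exposes an
addressing_style setting), so forcing path-style on the client side is
usually the simplest fix if a wildcard cert is not an option.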
> In the past I see some good results (benchmark & latencies) for MySQL
and PostgreSQL. However, I've always used
> 4MB object size. Maybe i can get much better performance on smaller
object size. Haven't tried actually.
Did you tune mysql / postgres for this setup? Did you have a default
ce
I wanted to create a few stateful containers with mysql/postgres that
did not depend on local persistent storage, so I can dynamically move
them around. What about using;
- a 1x replicated pool and use rbd mirror,
- or having postgres use 2 1x replicated pools
- or upon task launch create an
I am running Nautilus on centos7. Does Octopus run similarly to
Nautilus, i.e.:
- runs on el7/centos7
- runs without containers by default
- runs without cephadm by default
No clarity on this?
-Original Message-
To: ceph-users
Subject: [ceph-users] ceph octopus centos7, containers, cephadm
I am running Nautilus on centos7. Does Octopus run similarly to
Nautilus, i.e.:
- runs on el7/centos7
- runs without containers by default
- runs without cephadm by defa
un octopus via RPMs on el7 without the
cephadm and containers orchestration, then the answer is yes.
-- dan
On Fri, Oct 23, 2020 at 9:47 AM Marc Roos
wrote:
>
>
> No clarity on this?
>
> -Original Message-
> To: ceph-users
> Subject: [ceph-users] ceph octopus centos7,
Really? First time I read this here, afaik you can get a split brain
like this.
-Original Message-
Sent: Thursday, October 29, 2020 12:16 AM
To: Eugen Block
Cc: ceph-users
Subject: [ceph-users] Re: frequent Monitor down
Eugen, I've got four physical servers and I've installed mon on a
I have been advocating for a long time for publishing test data from
some basic test cluster against different ceph releases. Just a basic
ceph cluster that covers most configs, running the same tests, so you
can compare just ceph performance. That would mean a lot for smaller
companies that do
FYI I just had an issue with radosgw / civetweb. I wanted to upload a
40 MB file; it started with poor transfer speed, which decreased over
time to 20KB/s, at which point I stopped the transfer. I had to kill
radosgw and start it again to get 'normal' operation back.
I have a fairly dormant ceph luminous cluster on centos7 with stock
kernel, and thought about upgrading it before putting it to more use.
I can remember some page on the ceph website that had specific
instructions mentioning upgrading from luminous. But I can't find it
anymore, this page[0]
Pfff, you are right, I don't even know which one is the latest; indeed,
Nautilus
-Original Message-
Subject: Re: [ceph-users] Upgrade luminous -> mimic, any pointers?
Why would you go to Mimic instead of Nautilus?
>
>
>
> I have a fairly dormant ceph luminous cluster on ce
I have a fairly dormant ceph luminous cluster on centos7 with stock
kernel, and thought about upgrading it before putting it to more use.
I can remember some page on the ceph website that had specific
instructions mentioning upgrading from luminous. But I can't find it
anymore, this page[0]
How many TB's are you backing up?
-Original Message-
Subject: [ceph-users] Re: CEPH Cluster Backup - Options on my solution
> [SNIP script]
>
> Hi mike
>
> When looking for backup solutions, did you come across benji [1][2]
> and the orginal backy2 [3][4] solutions ?
> I have been r
What about getting the disks out of the san enclosure and putting them
in some sas expander? And just hook it up to an existing ceph osd server
via sas/sata jbod adapter with external sas port?
Something like this
https://www.raidmachine.com/products/6g-sas-direct-connect-jbods/
Don’t you rather want to use 'systemctl disable'? And maybe comment out
the fstab entry, just to make sure.
-Original Message-
From: Brett Chancellor [mailto:bchancel...@salesforce.com]
Sent: woensdag 28 augustus 2019 3:28
To: Cory Hawkless
Cc: ceph-users@ceph.io
Subject: [ceph-users]
I have this error. I have found the rbd image with the
block_name_prefix:1f114174b0dc51, how can identify what snapshot this
is? (Is it a snapshot?)
2019-08-29 16:16:49.255183 7f9b3f061700 -1 log_channel(cluster) log
[ERR] : deep-scrub 17.36
17:6ca1f70a:::rbd_data.1f114174b0dc51.
Oh ok, that is easy. So the :4 is the snapshot id.
rbd_data.1f114174b0dc51.0974:4
^
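As confirmed above, the ':4' suffix on the object name is the snapshot
id. A small parser for names in that format (block_name_prefix, then a
hex object number, then an optional :snapid; the full-length object
number below is a made-up example, the one in the mail is abbreviated):

```python
def parse_rbd_object(name):
    """Split an rbd data object name like
    'rbd_data.<prefix>.<objno-hex>[:<snapid>]' into
    (block_name_prefix, object_number, snap_id); snap_id is None
    for the head object."""
    obj, _, snap = name.partition(":")
    prefix, _, objno = obj.rpartition(".")
    return prefix, int(objno, 16), int(snap) if snap else None

print(parse_rbd_object("rbd_data.1f114174b0dc51.0000000000000974:4"))
```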
-Original Message-
From: Ilya Dryomov [mailto:idryo...@gmail.com]
Sent: vrijdag 30 augustus 2019 10:27
To: Marc Roos
I was a little bit afraid I would be deleting this snapshot without
result. How do I fix this error (pg repair is not working)
pg 17.36 is active+clean+inconsistent, acting [7,29,12]
2019-08-30 10:40:04.580470 7f9b3f061700 -1 log_channel(cluster) log
[ERR] : repair 17.36
17:6ca1f70a:::rbd_da
ematic placement groups. The first time I encountered this, that
action was sufficient to repair the problem. The second time however, I
ended up having to manually remove the snapshot objects.
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-June/027431.html
Once I had done that, repair t
Max 2x listed
["17.36",{"oid":"rbd_data.1f114174b0dc51.0974","key":"","sna
pid":-2,"hash":1357874486,"max":0,"pool":17,"namespace":"","max":0}]
Yes indeed very funny case, are you sure sdd/sdc etc are not being
reconnected (renumbered) to different drives because of some bus reset
or other failure? Or maybe some udev rule is messing things up?
-Original Message-
From: Fyodor Ustinov [mailto:u...@ufm.su]
Sent: dinsdag 3 sept
Is there no ceph wiki page with examples of manual repairs with the
ceph-objectstore-tool (eg. where pg repair and pg scrub don’t work)
I am having this issue for quite some time.
2019-09-02 14:17:34.175139 7f9b3f061700 -1 log_channel(cluster) log
[ERR] : deep-scrub 17.36
17:6ca1f70a:::rbd
Just wanted to say we do not have any problems with current/past setup.
Our ceph nodes are not even connected to the internet and we relay
everything via 'our own local mirror'.
-Original Message-
From: Alfredo Deza [mailto:ad...@redhat.com]
Sent: dinsdag 17 september 2019 15:15
And I was just about to upgrade. :) How is this even possible with this
change[0], where 50-100% of iops are lost?
[0]
https://github.com/ceph/ceph/pull/28573
-Original Message-
From: 徐蕴 [mailto:yu...@me.com]
Sent: maandag 23 september 2019 8:28
To: ceph-users@ceph.io
Subject: [ceph-user
> The intent of this change is to increase iops on bluestore, it was
implemented in 14.2.4 but it is a
> general bluestore issue not specific to Nautilus.
I am confused. Is it not the case that an increase in iops on bluestore
= an increase in overall iops? It is specific to Nautilus, becaus
I am getting this error
In two sessions[0] num_caps is high (I assume the error is about
num_caps). I am using a default luminous and a default centos7 with
default kernel 3.10.
Do I really still need to change to a not stock kernel to resolve this?
I read this in posts of 2016 and 2
On the client I have this
[@~]# cat /proc/sys/fs/file-nr
10368 0 381649
-Original Message-
Subject: [ceph-users] Luminous 12.2.12 "clients failing to respond to
capability release"
I am getting this error
I have in two sessions[0] num_caps high ( I assume the error is ab
These are not excessive values, are they? How to resolve this?
[@~]# ceph daemon mds.c cache status
{
"pool": {
"items": 266303962,
"bytes": 7599982391
}
}
[@~]# ceph daemon mds.c objecter_requests
{
"ops": [],
"linger_ops": [],
"pool_ops": [],
"pool_st
Worked around it by failing the mds, because I read somewhere about
restarting it. However, it would be nice to know what causes this and
how to prevent it.
ceph mds fail c
-Original Message-
Subject: [ceph-users] Re: Luminous 12.2.12 "clients failing to respond
to capability re
I am having again "1 clients failing to respond to capability release"
and "1 MDSs report slow requests"
It looks like if I stop nfs-ganesha these disappear and if I start
ganesha they reappear. However when you look at the sessions[0] they are
all quite low the sessions with 10120, 2012 and 1
Suddenly I have objects misplaced, I assume this is because of the
balancer being active. But
1. how can I see the currently/last executed plan of the balancer?
2. when it was activated?
7 active+remapped+backfill_wait
7 active+remapped+backfilling
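Both questions above are answered by 'ceph balancer status', which
reports whether the balancer is active and when the last optimization
ran. A sketch parsing its JSON; the field names here are from memory of
Nautilus-era output and should be treated as assumptions:

```python
import json

# example shape of 'ceph balancer status' output (field names assumed)
SAMPLE = json.dumps({
    "active": True,
    "mode": "upmap",
    "last_optimize_started": "Mon Jan 20 10:00:00 2020",
    "last_optimize_duration": "0:00:01.234",
    "optimize_result": "Optimization plan created successfully",
    "plans": [],
})

def balancer_summary(status_json):
    """One-line summary of balancer state and its last run."""
    s = json.loads(status_json)
    return "active=%s mode=%s last run: %s (%s)" % (
        s["active"], s["mode"],
        s["last_optimize_started"], s["optimize_result"])

print(balancer_summary(SAMPLE))
```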
- - - - -
Can it be that there is something wrong with the cephfs that causes the
nfs-ganesha to fail? I have the impression that I get this error when I
do an rsync on only one specific nfs mount/cephfs tree.
All nfs mounts are inaccessible on that client. Other clients (with
other mounts) are
Is it really necessary to have these dependencies in nfs-ganesha 2.7
Dep-Install samba-client-libs-4.8.3-4.el7.x86_64 @CentOS7
Dep-Install samba-common-4.8.3-4.el7.noarch @CentOS7
Dep-Install samba-common-libs-4.8.3-4.el7.x86_64 @CentOS7
"bluefs": "1", ?
-Original Message-
From: Igor Fedotov [mailto:ifedo...@suse.de]
Sent: donderdag 26 september 2019 17:27
To: 徐蕴; ceph-users@ceph.io
Subject: [ceph-users] Re: Check backend type
Hi Xu Yun!
You might want to use "ceph osd metadata" command and check
"ceph_objectstore
Yes I think this one libntirpc. In 2.6 this samba dependency was not
there.
-Original Message-
From: Daniel Gryniewicz [mailto:d...@redhat.com]
Sent: donderdag 26 september 2019 19:07
To: Marc Roos
Cc: ceph-users
Subject: Re: [ceph-users] Nfs-ganesha 2.6 upgrade to 2.7
Ganesha itself
Some time ago on Luminous I also had to change the crush rules on an
all-hdd cluster to hdd (to prepare for adding ssd's and ssd pools). And
pg's started migrating while everything was already on hdd's; looks like
this is still not fixed?
-Original Message-
From: Raymond Berg Hans
Oct 19 22:16:49 c03 systemd:
[/usr/lib/systemd/system/ceph-osd@.service:15] Unknown lvalue
'LockPersonality' in section 'Service'
Oct 19 22:16:49 c03 systemd:
[/usr/lib/systemd/system/ceph-osd@.service:16] Unknown lvalue
'MemoryDenyWriteExecute' in section 'Service'
Oct 19 22:16:49 c03 sy
After luminous -> nautilus upgrade I have this error, how to resolve?
Tried enabling and disabling it.
2019-10-19 22:26:12.606 7fe16448cb80 1 mgr[py] Loading python module
'rbd_support'
2019-10-19 22:26:14.627 7fe13b494700 -1 mgr load Failed to construct
class in 'rbd_support'
File "/usr/share/ceph/mgr/rbd_support/module.py", line 1326, in
__init__
File "/usr/share/ceph/mgr/rbd_support/m
On Sun, Oct 20, 2019 at 3:37 PM Jason Dillaman
wrote:
>
> What are the caps for your mgr user? Does it have permissions to talk
> to the OSDs?
>
> # ceph auth get mgr.
>
> On Sat, Oct 19, 2019 at 6:58 PM Marc Roos
wrote:
> >
> >
> >
> >
Is it really necessary to get these every few seconds?
mgr.b (mgr.3714636) 264 : cluster [DBG] pgmap v265: 384
Getting these since the upgrade to nautilus
[Wed Oct 23 01:59:12 2019] ceph: build_snap_context 10002085d5c
911d8b648900 fail -12
[Wed Oct 23 01:59:12 2019] ceph: build_snap_context 10002085d18
9115f344ac00 fail -12
[Wed Oct 23 01:59:12 2019] ceph: build_snap_context 10002085d15
9
I am getting these since Nautilus upgrade
[Wed Oct 30 01:32:09 2019] ceph: build_snap_context 100020859dd
911cca33b800 fail -12
[Wed Oct 30 01:32:09 2019] ceph: build_snap_context 100020859d2
911d3eef5a00 fail -12
[Wed Oct 30 01:32:09 2019] ceph: build_snap_context 100020859d9
911
Good you are getting 124MB/s via Gbit, I have only been able to get
110MB/s.
If you are interested, I am also having 4TB sata hdd without db/wal on
ssd, 4 nodes, but 10Gbit
[@]# dd if=/dev/zero of=zero.file bs=32M oflag=direct status=progress
3758096384 bytes (3.8 GB) copied, 36.364817 s, 103
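The throughput from a dd status=progress line is just bytes divided by
seconds (dd reports decimal megabytes); the run above works out to
roughly 103 MB/s:

```python
def dd_throughput(bytes_copied, seconds):
    """Throughput in MB/s (decimal megabytes, matching dd's units)."""
    return bytes_copied / seconds / 1e6

# figures from the dd status=progress line above
print("%.0f MB/s" % dd_throughput(3758096384, 36.364817))
```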
Why is it taking rgw 2 minutes to become available? Is there some
internet lookup being done that times out when there is no default
gw?
[@ ~]# /usr/bin/radosgw -d -f --cluster ceph --name client.rgw2
--keyring=/etc/ceph/ceph.client.rgw2.keyring
2019-11-08 12:32:04.930 7fc9793fb780 0 fr
Hi Daniel,
I am able to mount the buckets with your config, however when I try to
write something, my logs get a lot of these errors:
svc_732] nfs4_Errno_verbose :NFS4 :CRIT :Error I/O error in
nfs4_write_cb converted to NFS4ERR_IO but was set non-retryable
Any chance you know how to resolv
If I do an fstrim /mount/fs on an xfs that sits directly on a rbd
device, I can see space being freed instantly with eg. rbd du. However when
there is an lvm in between, it looks like this is not freed. I already
enabled issue_discards = 1 in lvm.conf but as the comment says, probably
only in
Nevermind, was not having discard='unmap' in libvirt
-Original Message-
To: ceph-users
Subject: [ceph-users] rbd lvm xfs fstrim vs rbd xfs fstrim
If I do an fstrim /mount/fs on an xfs that sits directly on a rbd
device, I can see space being freed instantly with eg. rbd du. However
Is there a ceph auth command that just lists all clients, without
dumping keys to the console? I would recommend making 'ceph auth ls'
display just client names/ids. If you want the key, there are already
other commands.
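Until such an option exists, the entity names can be filtered out of
'ceph auth ls' yourself. A sketch over the plain-text output format as I
recall it (entity name unindented, key/caps lines indented under it;
the sample key is a redacted placeholder):

```python
SAMPLE = """\
installed auth entries:

osd.0
    key: AQ...redacted...==
    caps: [mon] allow profile osd
    caps: [osd] allow *
client.admin
    key: AQ...redacted...==
    caps: [mds] allow *
"""

def auth_entities(auth_ls_output):
    """Return only entity names from 'ceph auth ls' plain-text output,
    skipping the header and the indented key/caps lines."""
    names = []
    for line in auth_ls_output.splitlines():
        if not line or line[0].isspace():
            continue            # blank or indented key/caps line
        if line.endswith(":"):
            continue            # 'installed auth entries:' header
        names.append(line.strip())
    return names

print(auth_entities(SAMPLE))
```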
Also dumps to stdout, does not do anything with a file.
auth get write keyring file with requested key
If you value security, you do not want to be dumping keys to the
display, or am I the only one that is noticing this?
-Original Message-
To: ceph-users
Subject:
Also do not get the point of dumping the capabilities in the stdout/key
file.
-Original Message-
To: ceph-users
Subject: ceph keys contantly dumped to the console
Also dumps to stdout, does not do anything with a file.
auth get write keyring file with requested key
I am not sure since when, but I am not able to create nor delete
snapshots anymore. I am getting a permission denied. I upgraded recently
from Luminous to Nautilus and set this allow_new_snaps as mentioned
on[1]
[@ .snap]# ls
snap-1 snap-2 snap-3 snap-4 snap-5 snap-6 snap-7
[@
Nevermind, was
mds "allow rwps"
-Original Message-
To: ceph-users
Subject: [ceph-users] Not able to create and remove snapshots
I am not sure since when, but I am not able to create nor delete
snapshots anymore. I am getting a permission denied. I upgraded recently
from Luminous
Node does not start osds! Why do I have this error? Previous boot was
just fine (upgraded recently to nautilus)
I had to fix this by doing a workaround I read somewhere[1]. Why did
this happen? Afaik I could reboot a node without problems? What has
changed in rhel7 that caused this?
[1]
chown ceph.ceph /dev/sdx2
-Original Message-
To: ceph-users
Subject: [ceph-users] ERROR: osd init failed: (1
: osd init failed: (13) Permission denied
Mandi! Marc Roos
In chel di` si favelave...
> Node does not start osds! Why do I have this error? Previous boot was
> just fine (upgraded recently to nautilus)
See if this is your case:
https://tracker.ceph.com/issues/41777
--
dott.
I have been asking about this before[1]. Since the Nautilus upgrade I am
having these, with a total node failure as a result(?). I was not
expecting this in my 'low load' setup. Maybe now someone can help
resolve this? I am also waiting quite some time to get access at
https://tracker.ceph.com/issues.
D
I am getting these every 2 seconds, does it make sense to log this?
log_channel(cluster) log [DBG] : pgmap v32653: 384 pgs: 2
active+clean+scrubbing+deep, 382 active+clean;
log_channel(cluster) log [DBG] : pgmap v32653: 384 pgs: 2
active+clean+scrubbing+deep, 382 active+clean;
log_channel(clus
Hi Ilya,
>
>
>ISTR there were some anti-spam measures put in place. Is your account
>waiting for manual approval? If so, David should be able to help.
Yes if I remember correctly I get waiting approval when I try to log in.
>>
>>
>>
>> Dec 1 03:14:36 c04 kernel: ceph: build_snap_co
I guess this is related? kworker 100%
[Mon Dec 2 13:05:27 2019] SysRq : Show backtrace of all active CPUs
[Mon Dec 2 13:05:27 2019] sending NMI to all CPUs:
[Mon Dec 2 13:05:27 2019] NMI backtrace for cpu 0 skipped: idling at pc
0xb0581e94
[Mon Dec 2 13:05:27 2019] NMI backtrace
>
>> >
>> >ISTR there were some anti-spam measures put in place. Is your
account
>> >waiting for manual approval? If so, David should be able to help.
>>
>> Yes if I remember correctly I get waiting approval when I try to log
in.
>>
>> >>
>> >>
>> >>
>> >> Dec 1 03:14:36 c04 k
I can confirm that removing all the snapshots seems to resolve the
problem.
A - I would propose a redesign such that only snapshots below the
mountpoint are taken into account, not snapshots in the entire
filesystem. That should fix a lot of issues
B - That reminds me a
Yes Luis, good guess!! ;)
-Original Message-
Cc: ceph-users
Subject: Re: [ceph-users] ceph node crashed with these errors "kernel:
ceph: build_snap_context" (maybe now it is urgent?)
On Mon, Dec 02, 2019 at 10:27:21AM +0100, Marc Roos wrote:
>
> I have been asking
ceph-us...@lists.ceph.com is the old one; why that is, I also do not know
https://www.mail-archive.com/search?l=all&q=ceph
-Original Message-
From: Rodrigo Severo - Fábrica [mailto:rodr...@fabricadeideias.com]
Sent: donderdag 5 december 2019 20:37
To: ceph-us...@lists.ceph.com; ceph-user
I recently enabled this and now my rsyncs are taking hours and hours
longer.
I have removed_snaps listed on pools that I am not using. They are
mostly for doing some performance testing, so I cannot imagine ever
creating snapshots in them.
pool 33 'fs_data.ssd' replicated size 3 min_size 1 crush_rule 5
object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode warn
la
Rank the order of resources you check first if you need help
(1 being the highest priority)
I cannot unselect here? This would mean that 'slack' will always be
there. I will not use slack, I don't like using slack. So it is an
incorrect representation of the reality in my case.
Furthermore
Hi Gregory,
I saw ceph -s showing 'snaptrim'(?), but I still have these
'removed_snaps' listed on this pool (also on other pools; I don't
remember creating/deleting them). A 'ceph tell mds.c scrub start /test/
recursive repair' did not remove those. Can/should/how do I remove these?
Thanks, Good tip! If I do not know where I created these, is there a way
to get their location in the filesystem? Or maybe a command that deletes
by snapid?
{
"snapid": 54,
"ino": 1099519875627,
"stamp": "2017-09-13 21:21:35.769863",
"na
099519874624,
"dname": "",
"version": 3299
},
{
"dirino": 1,
"dname": "xxx",
"version": 1424947
}
],
"pool": 19,
"old_pools":
I think this is new since I upgraded to 14.2.6. kworker/7:3 100%
[@~]# echo l > /proc/sysrq-trigger
[Tue Jan 14 10:05:08 2020] CPU: 7 PID: 2909400 Comm: kworker/7:0 Not
tainted 3.10.0-1062.4.3.el7.x86_64 #1
[Tue Jan 14 10:05:08 2020] Workqueue: ceph-msgr ceph_con_workfn
[libceph]
[Tue Jan 1
@gmail.com]
Sent: 14 January 2020 11:45
To: Marc Roos
Cc: ceph-users; Jeff Layton; Yan, Zheng
Subject: Re: [ceph-users] Kworker 100% with ceph-msgr (after upgrade to
14.2.6?)
On Tue, Jan 14, 2020 at 10:31 AM Marc Roos
wrote:
>
>
> I think this is new since I upgraded to 14.2
Is it possible to mount a cephfs with a specific uid or gid? To make it
available to a 'non-root' user?
Is it possible to schedule the creation of snapshots on specific rbd
images within ceph?
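Built-in rbd snapshot scheduling only arrived in later releases (via the
mgr, in Octopus if I recall correctly); the usual Nautilus-era answer is
cron plus 'rbd snap create'. A sketch; the pool/image names and the cron
path are placeholders:

```python
import datetime
import subprocess

def snap_name(prefix="auto", now=None):
    """Timestamped snapshot name, e.g. auto-20200131-0400."""
    now = now or datetime.datetime.now()
    return "%s-%s" % (prefix, now.strftime("%Y%m%d-%H%M"))

def create_snapshot(pool, image):
    """Shell out to 'rbd snap create pool/image@auto-...'; meant to be
    run from cron on a host with client credentials for the cluster."""
    subprocess.check_call(
        ["rbd", "snap", "create",
         "%s/%s@%s" % (pool, image, snap_name())])
```

Wired up with something like '0 4 * * * root /usr/local/bin/rbd-snap.py'
in cron (path is a placeholder); pruning old snapshots would need a
matching rbd snap rm loop.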
Say one is forced to move a production cluster (4 nodes) to a different
datacenter. What options do I have, other than just turning it off at
the old location and on on the new location?
Maybe buying some extra nodes, and move one node at a time?
https://www.mail-archive.com/ceph-users@ceph.io/
https://www.mail-archive.com/ceph-users@lists.ceph.com/
-Original Message-
Sent: 28 January 2020 16:32
To: ceph-users@ceph.io
Subject: [ceph-users] No Activity?
All;
I haven't had a single email come in from the ceph-users list at cep
cluster to different
datacenter
On 1/28/20 11:19 AM, Marc Roos wrote:
>
> Say one is forced to move a production cluster (4 nodes) to a
> different datacenter. What options do I have, other than just turning
> it off at the old location and on on the new location?
>
> Maybe buyi
Osd's do not even use bonding efficiently. If they were to use 2 links
concurrently it would be a lot better.
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg35474.html
-Original Message-
To: ceph-users@ceph.io
Subject: [ceph-users] Re: small cluster HW upgrade
Hi Philipp
You can optimize ceph-osd for this of course. It would benefit people
that like to use the 1Gbit connections. I can understand putting time
into it now does not make sense because of the availability of 10Gbit.
However, I do not get why this was not optimized already 5 or 10 years
ago.
---
I didn't have such a drop in performance testing 'rados bench 360 write
-p rbd' on a 3x replicated (slow) hdd pool. Sort of near the average,
sometimes dropping to 90. But I guess the test then hits an osd that is
scrubbing and being used by other processes.
-Original Message-
Sent: 05 Febr
ceph mgr module ls does not show the rbd_support, should this not be
listed?
Is there an performance impact of 'ceph mgr module enable iostat'?
Say I think my cephfs is slow when I rsync to it, slower than it used to
be. First of all, I do not get why it reads so much data. I assume the
file attributes need to come from the mds server, so the rsync backup
should mostly cause writes, no?
I think it started being slow after enabling s
been looking at the mds with 'ceph daemonperf mds.a'
-Original Message-
From: Samy Ascha [mailto:s...@xel.nl]
Sent: 11 February 2020 17:10
To: Marc Roos
Cc: ceph-users
Subject: Re: [ceph-users] cephfs slow, howto investigate and tune mds
configuration?
>
>
> Say I
>
>
>>
>> Say I think my cephfs is slow when I rsync to it, slower than it
used
>> to be. First of all, I do not get why it reads so much data. I
assume
>> the file attributes need to come from the mds server, so the rsync
>> backup should mostly cause writes not?
>>
>
>Are you run
>> >
>> >>
>> >> Say I think my cephfs is slow when I rsync to it, slower than it
>> used >> to be. First of all, I do not get why it reads so much
data.
>> I assume >> the file attributes need to come from the mds server,
so
>> the rsync >> backup should mostly cause writes not?
Via smb, much discussed here
-Original Message-
Sent: 13 February 2020 09:33
To: ceph-users@ceph.io
Subject: [ceph-users] Ceph and Windows - experiences or suggestions
Hi there!
I got the task to connect a Windows client to our existing ceph cluster.
I'm looking for experiences or sugge
I have default centos7 setup with nautilus. I have been asked to install
5.5 to check a 'bug'. Where should I get this from? I read that the
elrepo kernel is not compiled like rhel.
How do you check if you issued this command in the past?
-Original Message-
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Excessive write load on mons after upgrade
from 12.2.13 -> 14.2.7
Hi Peter,
could be a totally different problem but did you run the command "ceph
osd require
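To the 'did I run it before' question: the current value is recorded in
the osdmap, so 'ceph osd dump | grep require_osd_release' shows whether
(and to what) it was ever set. A sketch parsing that output; the sample
text is abbreviated and the fsid is a placeholder:

```python
SAMPLE_OSD_DUMP = """\
epoch 1234
fsid 00000000-0000-0000-0000-000000000000
flags sortbitwise,recovery_deletes,purged_snapdirs
require_min_compat_client luminous
require_osd_release nautilus
"""

def require_osd_release(osd_dump_text):
    """Pull the require_osd_release value out of 'ceph osd dump'
    output; returns None if the flag does not appear."""
    for line in osd_dump_text.splitlines():
        if line.startswith("require_osd_release"):
            return line.split()[1]
    return None

print(require_osd_release(SAMPLE_OSD_DUMP))
```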
-users] centos7 / nautilus where to get kernel 5.5
from?
On Fri, Feb 14, 2020 at 3:19 PM Marc Roos
wrote:
>
>
> I have default centos7 setup with nautilus. I have been asked to
> install
> 5.5 to check a 'bug'. Where should I get this from? I read that the
> elrepo kern
cephfs is considered to be stable. I have been using it with only one
mds for 2-3 years in a low-load environment without any serious issues.
-Original Message-
Sent: 17 February 2020 16:07
To: ceph-users
Subject: [ceph-users]
Fwd: Casual survey on the successful usa