How do you configure libvirt so it sees the snapshots already created on
the rbd image it is using for the vm?
I already have a vm running, connected to the rbd pool via
protocol='rbd', and rbd snap ls shows the snapshots.
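As far as I know libvirt only lists snapshots it created itself (it
keeps its own snapshot metadata), so snapshots taken directly with rbd
will not show up in virsh. A quick way to compare what each side sees
(the domain, pool and image names below are made up):
virsh domblklist vm1               # confirm which rbd image the domain actually uses
rbd snap ls libvirt-pool/vm1-disk  # snapshots that exist on the image itself
virsh snapshot-list vm1            # only snapshots libvirt created and tracks in its own metadata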
This is what I am getting with the change from pg 8 to pg 16:
[@c01 ceph]# ceph osd df | egrep '^ID|^19|^20|^21|^30'
ID CLASS WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS
19 ssd 0.48000 1.0 447GiB 161GiB 286GiB 35.91 0.84 35
20 ssd 0.48000 1.0 447GiB 170GiB 277GiB 38.09 0.89 36
2
>If I understand the balancer correctly, it balances PGs, not data.
>This worked perfectly fine in your case.
>
>I prefer a PG count of ~100 per OSD; you are at 30. Maybe it would
>help to bump the PGs.
>
I remember someone writing something smart about how to increase your
pg count. Let's say I have straw2, balancer=on, crush-compat, and it
gives a poor spread over my ssd drives (only 4), which are used by only
2 pools. One of these pools has pg_num 8. Should I increase this to 16
to get a better result, or will it never be any better?
For now I would like to stick to crush-compat, so I can use
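For reference, a minimal sketch of the bump being discussed (the pool
name rbd.ssd is made up; on pre-Nautilus releases pgp_num has to be
raised separately):
ceph osd pool set rbd.ssd pg_num 16
ceph osd pool set rbd.ssd pgp_num 16    # data only rebalances once pgp_num follows pg_num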
d) -> we already have several levels of caching which should prevent
re-reading of data: the pagecache of the virtualized systems, the rbd
cache of rbd-bd, and the bluestore cache on the osd.
Do you see scenarios where it might be a good idea to activate the cache
anyway?
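If you do want to try it, as far as I know this is the usual client-side
toggle in ceph.conf (just a sketch; the defaults may already cover it):
[client]
rbd cache = true
rbd cache writethrough until flush = true   # stays in writethrough until the guest issues a flush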
Regards
Marc
On 20.11.18 at
>> 4. Revert all your reweights.
>
>Done
Do you mean the values in the reweight column or the weight column?
From the commands in this thread I am assuming the weight column. Does
this mean that upmap handles disk sizes automatically? Currently I have
the balancer turned off.
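For what it is worth, this is how I understand the two columns; the osd
id and the crush weight value are only examples taken from this thread:
ceph osd reweight osd.19 1.0          # the REWEIGHT column (override factor, 0..1)
ceph osd crush reweight osd.19 0.48   # the WEIGHT column (crush weight, normally derived from the disk size)
ceph osd set-require-min-compat-client luminous
ceph balancer mode upmap              # upmap balances against the crush weights, so disk size is taken into account
ceph balancer on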
I have seen several posts on the list about bucket policies; how do you
change this for a multi-tenant user, Tenant$tenuser?
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": {"AWS": ["arn:aws:iam::usfolks:user/fred"]},
"Action": "s3:PutObjectAcl",
"Resource": [
What about putting it in a datacenter near them? Or move everything out
to some provider that allows you to have both.
-Original Message-
From: LuD j [mailto:luds.jer...@gmail.com]
Sent: Monday 17 December 2018 21:38
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Ceph on Azure ?
Thanks for posting this Roman.
-Original Message-
From: Roman Penyaev [mailto:rpeny...@suse.de]
Sent: 20 December 2018 14:21
To: Marc Roos
Cc: green; mgebai; ceph-users
Subject: Re: [ceph-users] RDMA/RoCE enablement failed with (113) No
route to host
On 2018-12-19 22:01, Marc Roos
I would be interested in learning about the performance increase it
gives compared to 10Gbit. I have the ConnectX-3 Pro but I am not using
rdma because support is not available by default.
sockperf ping-pong -i 192.168.2.13 -p 5001 -m 16384 -t 10 --pps=max
sockperf: Warmup stage (sending a few d
I have been having this for some time; it pops up out of the blue. Next
time it occurs I will enable the logging.
Thanks,
Marc
-Original Message-
From: Daniel Gryniewicz [mailto:d...@redhat.com]
Sent: 12 December 2018 16:49
To: Marc Roos; ceph-users
Subject: Re: [ceph-users
lto:d...@redhat.com]
Sent: 10 December 2018 15:54
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] How to troubleshoot rsync to cephfs via
nfs-ganesha stalling
This isn't something I've seen before. rsync generally works fine, even
over cephfs. More inline.
On 12/09/2018 09:4
pools. How can I explain to a user that if they move files between 2
specific folders they should not mv but cp? Now I have to work around
this by applying separate mounts.
-Original Message-
From: Andras Pataki [mailto:apat...@flatironinstitute.org]
Sent: 11 December 2018 00:34
To:
Except if you have different pools on these directories. Then the data
is not moved (copied), which I think should be done. This should be
changed, because no one will expect a symlink to the old pool.
-Original Message-
From: Jack [mailto:c...@jack.fr.eu.org]
Sent: 10 December 20
Are there videos available (MeerKat, CTDB)?
PS. Disk health prediction link is not working
-Original Message-
From: Stefan Kooman [mailto:ste...@bit.nl]
Sent: 10 December 2018 11:22
To: Mike Perez
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] cephday berlin slides
Quoti
I think this is an April Fools' Day joke from someone who did not set up
his time correctly.
-Original Message-
From: Robert Sander [mailto:r.san...@heinlein-support.de]
Sent: 10 December 2018 09:49
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Performance Problems
On 07.12.1
This rsync command fails and makes the local nfs unavailable (I have to
stop nfs-ganesha, kill all rsync processes on the client and then start
nfs-ganesha):
rsync -rlptDvSHP --delete --exclude config.repo --exclude "local*"
--exclude "isos"
anonym...@mirror.ams1.nl.leaseweb.net::centos/7/os/
Afaik it is not random; it is calculated where your objects are stored,
by an algorithm that probably takes into account how many osd's you have
and their sizes.
How could it be placed randomly? You would never be able to find it
again, because there is no such thing as a 'file allocation table'.
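You can also ask the cluster where CRUSH places a given object name; the
pool and object name here are just examples:
ceph osd map rbd my-object    # prints the pg id and the acting set of osd's for that object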
Do you then get these types of error messages?
packet_write_wait: Connection to 192.168.10.43 port 22: Broken pipe
rsync: connection unexpectedly closed (2345281724 bytes received so far)
[receiver]
rsync error: error in rsync protocol data stream (code 12) at io.c(226)
[receiver=3.1.2]
rsync:
I just rolled back a snapshot, and when I started the (Windows) vm, I
still noticed a software update I had installed after this snapshot.
What am I doing wrong that libvirt is not reading the rolled-back
snapshot (but uses something from cache)?
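A minimal sketch of the order I would expect to work (domain, pool,
image and snapshot names are made up); the guest has to be fully shut
off so neither qemu nor the rbd cache holds the pre-rollback state:
virsh shutdown vm1                                 # wait until the domain is really shut off
rbd snap rollback libvirt-pool/vm1-disk@pre-update
virsh start vm1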
- if my cluster is not well balanced, do I have to run the balancer
execute several times, because it only optimises in small steps?
- is there some history of applied plans to see how optimizing brings
down this reported final score 0.054781?
- how can I get the current score?
- I have some 8
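For the score questions, these are the commands I know of (the plan name
is just an example):
ceph balancer status
ceph balancer eval                      # current cluster score, lower is better
ceph balancer eval balancer-test.plan   # score a generated plan before executing it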
- Everyone here will tell you not to use 2x replica, maybe use some
erasure code if you want to save space.
- I cannot say anything about applying the cache pool; I did not use it,
and read some things that made me doubt it was useful for us. We decided
to put some vm's on an ssd rbd pool. Maybe when
This was sort of accomplished by adding the 4th node.
-Original Message-
From: Frank Yu [mailto:flyxia...@gmail.com]
Sent: Friday 16 November 2018 3:51
To: Marc Roos
Cc: ceph-users
Subject: Re: [ceph-users] pg 17.36 is active+clean+inconsistent head
expected clone 1 missing?
try to rest
I forgot to mention: these are bluestore osds.
-Original Message-
From: Marc Roos
Sent: Thursday 15 November 2018 9:59
To: ceph-users
Subject: [ceph-users] pg 17.36 is active+clean+inconsistent head
expected clone 1 missing?
I thought I would give it another try, asking again here since there is
currently another thread. I have been having this error for a year or so.
This I of course already tried:
ceph pg deep-scrub 17.36
ceph pg repair 17.36
[@c01 ~]# rados list-inconsistent-obj 17.36
{"epoch":24363,"inconsistents":[
Try comparing results from something like this test
[global]
ioengine=posixaio
invalidate=1
ramp_time=30
iodepth=1
runtime=180
time_based
direct=1
filename=/mnt/cephfs/ssd/fio-bench.img
[write-4k-seq]
stonewall
bs=4k
rw=write
#write_bw_log=sdx-4k-write-seq.results
#write_iops_log=sdx-4k-write
This one I am using:
https://www.mail-archive.com/ceph-users@lists.ceph.com/
On Nov 12, 2018 10:32 PM, Bryan Henderson wrote:
>
> Is it possible to search the mailing list archives?
>
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/
>
> seems to have a search function, but in my experien
>>
>> is anybody using cephfs with snapshots on luminous? Cephfs snapshots
>> are declared stable in mimic, but I'd like to know about the risks
>> using them on luminous. Do I risk a complete cephfs failure or just
>> some not working snapshots? It is one namespace, one fs, one data and
>>
WD Red here
-Original Message-
From: Ashley Merrick [mailto:singap...@amerrick.co.uk]
Sent: Sunday 11 November 2018 13:47
To: Vitaliy Filippov
Cc: Marc Roos; ceph-users
Subject: Re: [ceph-users] Disabling write cache on SATA HDDs reduces
write latency 7 times
Either more weird
I just did a very, very short test and don't see any difference with
this cache on or off, so I am leaving it on for now.
-Original Message-
From: Ashley Merrick [mailto:singap...@amerrick.co.uk]
Sent: Sunday 11 November 2018 11:43
To: Marc Roos
Cc: ceph-users; vitalif
Subject: Re
Does it make sense to test disabling this on an hdd-only cluster?
-Original Message-
From: Ashley Merrick [mailto:singap...@amerrick.co.uk]
Sent: Sunday 11 November 2018 6:24
To: vita...@yourcmc.ru
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Disabling write cache on SATA HDDs
2018-11-08 10:35 GMT+01:00 Matthew Vernon :
> On 08/11/2018 09:17, Marc Roos wrote:
>>
>> And that is why I don't like ceph-deploy. Unless you have maybe
>> hundreds of disks, I don’t see why you cannot
here. I doubt if ceph-deploy is even much faster.
-Original Message-
From: Matthew Vernon [mailto:m...@sanger.ac.uk]
Sent: Thursday 8 November 2018 10:36
To: ceph-users@lists.ceph.com
Cc: Marc Roos
Subject: Re: [ceph-users] ceph 12.2.9 release
On 08/11/2018 09:17, Marc Roos wrote:
@lists.ceph.com
Subject: Re: [ceph-users] ceph 12.2.9 release
On Wednesday 07/11/2018 at 11:28, Matthew Vernon wrote:
> On 07/11/2018 14:16, Marc Roos wrote:
> >
> >
> > I don't see the problem. I am installing only the ceph updates when
> > others have
I don't see the problem. I am installing the ceph updates only when
others have done this and have been running for several weeks without
problems. I also noticed this 12.2.9 availability, but did not see any
release notes, so why install it? Especially with the recent issues of
other releases.
That bei
Why slack anyway?
-Original Message-
From: Konstantin Shalygin [mailto:k0...@k0ste.ru]
Sent: Thursday 11 October 2018 5:11
To: ceph-users@lists.ceph.com
Subject: *SPAM* Re: [ceph-users] https://ceph-storage.slack.com
> why would a ceph slack be invite only?
Because this is
Luminous also does not have an updated librgw, which prevents ganesha
from using the multi-tenancy mounts. Especially with the current issues
of mimic, it would be nice if this could be made available in luminous.
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg48659.html
https://gith
That is easy, I think, so I will give it a try:
faster CPUs, fast NVMe disks, all 10Gbit or even better 100Gbit, plus a
daily prayer.
-Original Message-
From: Tomasz Płaza [mailto:tomasz.pl...@grupawp.pl]
Sent: Monday 8 October 2018 7:46
To: ceph-users@lists.ceph.com
Sub
-AES256-GCM-SHA384
-Original Message-
From: Vasiliy Tolstov [mailto:v.tols...@selfip.ru]
Sent: Saturday 6 October 2018 16:34
To: Marc Roos
Cc: ceph-users@lists.ceph.com; elias.abacio...@deltaprojects.com
Subject: *SPAM* Re: [ceph-users] list admin issues
Sat, 6 Oct 2018 at 16:48
Maybe first ask gmail?
-Original Message-
From: Elias Abacioglu [mailto:elias.abacio...@deltaprojects.com]
Sent: Saturday 6 October 2018 15:07
To: ceph-users
Subject: Re: [ceph-users] list admin issues
Hi,
I'm bumping this old thread cause it's getting annoying. My membership
get
losed (con
state CONNECTING)
..
..
..
-Original Message-
From: John Spray [mailto:jsp...@redhat.com]
Sent: Thursday 27 September 2018 11:43
To: Marc Roos
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Cannot write to cephfs if some osd's are not
available on the client
It was not my first intention to host vm's on the osd nodes of the ceph
cluster. But since this test cluster is not doing anything, I might as
well use some of the cores.
Currently I have a macvtap configured on the ceph client network, set up
as a vlan. The disadvantage is that the local osd's ca
p and move the file to a 3x replicated
pool, I assume my data is moved there and more secure.
-Original Message-
From: Janne Johansson [mailto:icepic...@gmail.com]
Sent: Tuesday 2 October 2018 15:44
To: jsp...@redhat.com
Cc: Marc Roos; Ceph Users
Subject: Re: [ceph-users] cephfs issue w
edhat.com]
Sent: Monday 1 October 2018 21:28
To: Marc Roos
Cc: ceph-users; jspray; ukernel
Subject: Re: [ceph-users] cephfs issue with moving files between data
pools gives Input/output error
Moving a file into a directory with a different layout does not, and is
not intended to, copy the un
-Original Message-
From: Yan, Zheng [mailto:uker...@gmail.com]
Sent: Saturday 29 September 2018 6:55
To: Marc Roos
Subject: Re: [ceph-users] cephfs issue with moving files between data
pools gives Input/output error
check_pool_perm on pool 30 ns need Fr, but no read perm
client does
How do you test this? I have had no issues under "normal load" with an
old kernel client and a stable os.
CentOS Linux release 7.5.1804 (Core)
Linux c04 3.10.0-862.11.6.el7.x86_64 #1 SMP Tue Aug 14 21:49:04 UTC 2018
x86_64 x86_64 x86_64 GNU/Linux
-Original Message-
From: Andras
dag 28 september 2018 15:45
To: Marc Roos
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] cephfs issue with moving files between data
pools gives Input/output error
On Fri, Sep 28, 2018 at 2:28 PM Marc Roos
wrote:
>
>
> Looks like that if I move files between different da
If I copy the file out6 to out7 in the same location, I can read the
out7 file on the nfs client.
-Original Message-
To: ceph-users
Subject: [ceph-users] cephfs issue with moving files between data pools
gives Input/output error
It looks like if I move files between different data pools of the
cephfs, something is still referring to the 'old location' and gives an
Input/output error. I assume this because I am using different client
ids for authentication.
With the same user as configured in ganesha, mounting (ker
If I add a file on one client to the cephfs, that is exported via
ganesha and nfs-mounted somewhere else, I can see it in the dir listing
on the other nfs client. But trying to read it gives an Input/output
error. Other files (older ones in the same dir) I can read.
Has anyone had this also?
nfs
I have a test cluster and I put a vm on an osd node. The vm is using a
macvtap on the client network interface of the osd node, making access
to the local osd's impossible.
The vm of course reports that it cannot access the local osd's. What I
am getting is:
- I cannot reboot this vm normally, ne
And where is the manual for bluestore?
-Original Message-
From: mj [mailto:li...@merit.unu.edu]
Sent: Tuesday 25 September 2018 9:56
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] PG inconsistent, "pg repair" not working
Hi,
I was able to solve a similar issue on our cluste
h tunables you can check out the ceph wiki [2]
here.
[1]
ceph osd set-require-min-compat-client hammer
ceph osd crush set-all-straw-buckets-to-straw2
ceph osd crush tunables hammer
[2] http://docs.ceph.com/docs/master/rados/operations/crush-map/
-Original Message-
From: Marc Roos
Sent: d
When running ./do_cmake.sh, I get
fatal: destination path '/Users/mac/ceph/src/zstd' already exists and is
not an empty directory.
fatal: clone of 'https://github.com/facebook/zstd' into submodule path
'/Users/mac/ceph/src/zstd' failed
Failed to clone 'src/zstd'. Retry scheduled
fatal: desti
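What usually works for me when a submodule clone dies on a half-populated
directory (assuming nothing of value is in src/zstd) is to wipe it and
re-run the submodule update:
cd /Users/mac/ceph
rm -rf src/zstd
git submodule update --init --recursive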
Has anyone been able to build according to this manual? Because here it
fails.
http://docs.ceph.com/docs/mimic/dev/macos/
I have prepared macos as described; it took 2h to build this llvm, is
that really necessary?
I do the
git clone --single-branch -b mimic https://github.com/ceph/ceph
I have been trying to do this on a sierra vm, installed xcode 9.2
I had to modify this ceph-fuse.rb and copy it to the folder
/usr/local/Homebrew/Library/Taps/homebrew/homebrew-core/Formula/ (was
not there, is that correct?)
But now I get the error:
make: *** No rule to make target `rados'.
Just curious, is anyone running mesos on ceph nodes?
I agree. I was on centos7.4 and updated to, I think, luminous 12.2.7,
and had something not working related to some python dependency. This
was resolved by upgrading to centos7.5.
-Original Message-
From: David Turner [mailto:drakonst...@gmail.com]
Sent: Friday 14 September 2018 15
-Original Message-
From: David Turner [mailto:drakonst...@gmail.com]
Sent: Wednesday 12 September 2018 18:20
To: Marc Roos
Cc: ceph-users
Subject: Re: [ceph-users] Performance predictions moving bluestore wall,
db to ssd
You already have a thread talking about benchmarking the addition of WAL
and DB parti
When having an hdd bluestore osd with collocated wal and db:
- What performance increase can be expected if one would move the wal to
an ssd?
- What performance increase can be expected if one would move the db to
an ssd?
- Would the performance be a lot if you have a very slow hdd (and thu
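For completeness, this is roughly how an osd with the db (and implicitly
the wal) on an ssd would be created; the device names are made up:
ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1p1   # wal lives with the db unless --block.wal is given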
Is osxfuse the only and best-performing way to mount a ceph filesystem
on an osx client?
http://docs.ceph.com/docs/mimic/dev/macos/
I am now testing cephfs performance on a client with the fio libaio
engine. This engine does not exist on osx, but there is a posixaio. Does
anyone have ex
Hi,
Is there any recommendation for the mds_cache_memory_limit? Like a % of
the total ram or something?
Thanks.
I am new to using the balancer; I think this should generate a plan,
no? I do not get what this error is about.
[@c01 ~]# ceph balancer optimize balancer-test.plan
Error EAGAIN: compat weight-set not available
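I am not sure this is the whole story, but that error usually means
there is no compat weight-set yet; something along these lines should
create one before optimizing:
ceph balancer mode crush-compat
ceph osd crush weight-set create-compat
ceph balancer optimize balancer-test.plan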
I guess good luck. Maybe you can ask these guys to hurry up and get
something production ready.
https://github.com/ceph-dovecot/dovecot-ceph-plugin
-Original Message-
From: marc-antoine desrochers
[mailto:marc-antoine.desroch...@sogetel.com]
Sent: Monday 10 September 2018 14:40
0159,
"inodes_with_caps": 62192,
"caps": 114126,
"subtrees": 14,
"traverse": 38309963,
"traverse_hit": 37606227,
"traverse_forward": 12189,
"traverse_discover": 6634,
I was thinking of upgrading luminous to mimic, but does anyone have
mimic running with collectd and the ceph plugin?
When luminous was introduced it took almost half a year before collectd
supported it.
I have only 2 scrubs running on hdd's, but they are keeping the drives
in a high busy state. I did not notice this before; did some setting
change? Because I can remember dstat listing 14MB/s-20MB/s and not
60MB/s.
DSK | sdd | busy 95% | read 1384 | write 92 | KiB/r 292 | KiB/w
the samsung sm863.
write-4k-seq: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T)
4096B-4096B, ioengine=libaio, iodepth=1
randwrite-4k-seq: (g=1): rw=randwrite, bs=(R) 4096B-4096B, (W)
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
read-4k-seq: (g=2): rw=read, bs=(R) 409
To add a data pool to an existing cephfs
ceph osd pool set fs_data.ec21 allow_ec_overwrites true
ceph osd pool application enable fs_data.ec21 cephfs
ceph fs add_data_pool cephfs fs_data.ec21
Then link the pool to the directory (ec21)
setfattr -n ceph.dir.layout.pool -v fs_data.ec21 ec21
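To check that the directory really picked up the new pool, something
like this should show it:
getfattr -n ceph.dir.layout.pool ec21   # should print fs_data.ec21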
ceph tell osd.* injectargs --osd_max_backfills=0
Again getting slower towards the end.
Bandwidth (MB/sec): 395.749
Average Latency(s): 0.161713
-Original Message-
From: Menno Zonneveld [mailto:me...@1afa.com]
Sent: Thursday 6 September 2018 16:56
To: Marc Roos; ceph-users
Subject:
Menno Zonneveld [mailto:me...@1afa.com]
Sent: Thursday 6 September 2018 15:52
To: Marc Roos; ceph-users
Subject: RE: [ceph-users] Rados performance inconsistencies, lower than
expected performance
ah yes, 3x replicated with minimal 2.
my ceph.conf is pretty bare, just in case it might be rel
Test pool is 3x replicated?
-Original Message-
From: Menno Zonneveld [mailto:me...@1afa.com]
Sent: Thursday 6 September 2018 15:29
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Rados performance inconsistencies, lower than
expected performance
I've setup a CEPH cluster to tes
>
> >
> >
> > The adviced solution is to upgrade ceph only in HEALTH_OK state. And
I
> > also read somewhere that is bad to have your cluster for a long time
in
> > an HEALTH_ERR state.
> >
> > But why is this bad?
>
> Aside from the obvious (errors are bad things!), many people have
> extern
Thanks, interesting to read. So in luminous it is not really a problem.
I was expecting to get into trouble with the monitors/mds, because my
failover takes quite long, and I thought it was related to the damaged pg.
Luminous: "When the past intervals tracking structure was rebuilt around
exactly t
Do not use a Samsung 850 PRO for the journal.
Just use an LSI Logic HBA (e.g. SAS2308).
-Original Message-
From: Muhammad Junaid [mailto:junaid.fsd...@gmail.com]
Sent: Thursday 6 September 2018 13:18
To: ceph-users@lists.ceph.com
Subject: [ceph-users] help needed
Hi there
Hope, every one wil
ppens.
Regards Marc
On 05.09.2018 at 20:24, Uwe Sauter wrote:
> I'm also experiencing slow requests though I cannot point it to scrubbing.
>
> Which kernel do you run? Would you be able to test against the same kernel
> with Spectre/Meltdown mitigations disabled
The advised solution is to upgrade ceph only in HEALTH_OK state. And I
also read somewhere that it is bad to have your cluster in a HEALTH_ERR
state for a long time.
But why is this bad?
Why is this bad during upgrading?
Can I quantify how bad it is? (like with a large log/journal file?)
ewly added
node has finished.
-Original Message-
From: Jack [mailto:c...@jack.fr.eu.org]
Sent: Sunday 2 September 2018 15:53
To: Marc Roos; ceph-users
Subject: Re: [ceph-users] 3x replicated rbd pool ssd data spread across
4 osd's
Well, you have more than one pool here
pg_num =
I am adding a node like this; I think it is more efficient, because in
your case you will have data being moved within the added node (between
the newly added osd's there). So far no problems with this.
Maybe limit your backfills:
ceph tell osd.* injectargs --osd_max_backfills=X
Because pg's being move
ever experienced that problem
* the system is healthy, no swapping, no high load, no errors in dmesg
I attached a log excerpt of osd.35 - probably this is useful for
investigating the problem if someone has deeper bluestore knowledge.
(slow requests appeared on Sun Sep 2 21:00:35)
Regards
Marc
A
h does not spread objects on a per-object basis, but on a pg basis.
The data repartition is thus not perfect. You may increase your pg_num,
and/or use the mgr balancer module
(http://docs.ceph.com/docs/mimic/mgr/balancer/)
On 09/02/2018 01:28 PM, Marc Roos wrote:
>
> If I have only one rb
If I have only one rbd ssd pool, 3x replicated, and 4 ssd osd's, why are
these objects so unevenly spread across the four osd's? Should they not
all have 162G?
[@c01 ]# ceph osd status 2>&1
++--+---+---++-++-+-
--+
| id | host | used | a
When adding a node and incrementing the crush weight like this, do I
get the most efficient data transfer to the 4th node?
sudo -u ceph ceph osd crush reweight osd.23 1
sudo -u ceph ceph osd crush reweight osd.24 1
sudo -u ceph ceph osd crush reweight osd.25 1
sudo -u ceph ceph osd crush rewei
Ok, from what I have learned so far from my own test environment (keep
in mind I have had a test setup for only a year): the s3 rgw is not so
latency-sensitive, so you should be able to do fine with an hdd-only
cluster. I guess my setup should be sufficient for what you need
to have,
How is it going with this? Are we getting close to a state where we can
store a mailbox on ceph with this librmb?
-Original Message-
From: Wido den Hollander [mailto:w...@42on.com]
Sent: Monday 25 September 2017 9:20
To: Gregory Farnum; Danny Al-Gaaf
Cc: ceph-users
Subject: Re: [ce
I have a 3-node test cluster and I would like to expand it with a 4th
node that currently mounts the cephfs and rsyncs backups to it. I
remember reading something about how you could create a deadlock
situation doing this.
What are the risks I would be taking if I would be doing
Thanks!!!
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg46212.html
echo 8192 >/sys/devices/virtual/bdi/ceph-1/read_ahead_kb
-Original Message-
From: Yan, Zheng [mailto:uker...@gmail.com]
Sent: Tuesday 28 August 2018 15:44
To: Marc Roos
Cc: ceph-users
Subject: Re: [c
kernel
c01,c02,c03:/backup /home/backup ceph name=cephfs.backup,secretfile=/root/client.cephfs.backup.key,_netdev 0 0
c01,c02,c03:/backup /home/backup2 fuse.ceph ceph.id=cephfs.backup,_netdev 0 0
Mounts root cephfs
c01,c02,c03:/backup /home/backup2
Was there not some issue a while ago that was related to a kernel
setting? Because I can remember doing some tests where ceph-fuse was
always slower than the kernel module.
-Original Message-
From: Marc Roos
Sent: Tuesday 28 August 2018 12:37
To: ceph-users; ifedotov
Subject: Re
bench sort of ok.
Hi Marc,
In general dd isn't the best choice for benchmarking.
In your case there are at least 3 differences from rados bench:
1) If I haven't missed something, then you're comparing reads vs. writes
2) Block size is different (512 bytes for dd vs. 4M for rados
I have an idle test cluster (centos7.5, Linux c04
3.10.0-862.9.1.el7.x86_64), and a client kernel mount of cephfs.
I tested reading a few files on this cephfs mount and got very low
results compared to the rados bench. What could be the issue here?
[@client folder]# dd if=5GB.img of=/dev/null st
> I am a software developer and am new to this domain.
So maybe first get some senior system admin or so? You also do not want
me to start doing some amateur brain surgery, do you?
> each file has approx 15 TB
Pfff, maybe rethink/work this to
-Original Message-
From: Jame
Can this be related to numa issues? I also have dual-processor nodes,
and was wondering if there is some guide on how to optimize for numa.
-Original Message-
From: Tyler Bishop [mailto:tyler.bis...@beyondhosting.net]
Sent: Friday 24 August 2018 3:11
To: Andras Pataki
Cc: ceph-u
I also have 2+1 (still only 3 nodes), and 3x replicated. I also moved
the metadata pool to ssds.
What is nice with cephfs is that you can have folders in your filesystem
on the ec21 pool for not-so-important data, and the rest will be 3x
replicated.
I think the single-session performance is not
Can this be added to luminous?
https://github.com/ceph/ceph/pull/19358
I just recently did the same. Take into account that everything starts
migrating. However weird it may be, I had an hdd-only test cluster and
changed the crush rule to hdd; it took a few days, totally unnecessarily
as far as I am concerned.
-Original Message-
From: Enrico Kern [mailto:en
"one OSD's data to generate three copies on new failure domain" because
ceph assumes it is correct.
Get the pg's that are going to be moved and scrub them?
I think the problem is more why these objects are inconsistent before
you even do the migration
-Original Message-
From: poi [
I upgraded centos7, not ceph nor collectd. Ceph was already 12.2.7 and
collectd was already 5.8.0-2 (and collectd-ceph-5.8.0-2)
Now I have this error:
Aug 14 22:43:34 c01 collectd[285425]: ceph plugin: ds
FinisherPurgeQueue.queueLen was not properly initialized.
Aug 14 22:43:34 c01 collectd[
-Original Message-
From: Marc Roos
Sent: dinsdag 31 juli 2018 9:24
To: jspray
Cc: ceph-users
Subject: Re: [ceph-users] Enable daemonperf - no stats selected by
filters
Luminous 12.2.7
[@c01 ~]# rpm -qa | grep ceph-
ceph-mon-12.2.7-0.el7.x86_64
ceph-selinux-12.2.7-0.el7.x86_64
ceph-osd-12.2.7-0
Did anyone notice any performance loss on osd, mon, or rgw nodes because
of the spectre/meltdown updates? What is the general practice concerning
these updates?
This is sort of a follow-up on this discussion:
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg43136.html
https://access.redhat.com/arti