ents with an increased log level and dumps
> them on crash, with a negative index running up to -1 as a prefix.
>
> -1> 2020-01-16 01:10:13.404090 7f3350a14700 -1 rocksdb:
>
>
> It would be great if you could share several log snippets for different
> crashes conta
480)
Put( Prefix = O key =
0x7f8001cc45c881217262'd_data.4303206b8b4567.9632!='0xfffe'o'
Value size = 510)
On the right side I always see 0xfffe on all
failed OSDs.
Greets,
Stefan
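A possible way to inspect such keys offline (a sketch; assumes the OSD is stopped and the default data path) is the bluestore-kv backend of ceph-kvstore-tool, which lists the rocksdb keys under a given prefix, e.g. 'O' for onodes:
# ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-0 list O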
On 19.01.20 14:07, Stefan Priebe -
etc) sometimes this might
> provide some hints.
>
>
> Thanks,
>
> Igor
>
>
>> On 1/17/2020 2:30 PM, Stefan Priebe - Profihost AG wrote:
>> HI Igor,
>>
>>> On 17.01.20 12:10, Igor Fedotov wrote:
>>> hmmm..
>>>
>>>
You might want to start collecting failure-related information
> (including but not limited to failure logs, perf counter dumps, system
> resource reports etc) for future analysis.
>
>
>
> On 1/16/2020 11:58 PM, Stefan Priebe - Profihost AG wrote:
>> Hi Igor,
>>
>>
,
>
> Igor
>
>
>
> On 1/16/2020 10:04 PM, Stefan Priebe - Profihost AG wrote:
>> Hi Igor,
>>
>> ouch sorry. Here we go:
>>
>> -1> 2020-01-16 01:10:13.404090 7f3350a14700 -1 rocksdb:
>> submit_transaction error: Corruption: block ch
t, char const*)+0x102) [0x55c9a712d232]
2: (BlueStore::_kv_sync_thread()+0x24c5) [0x55c9a6fb54b5]
3: (BlueStore::KVSyncThread::entry()+0xd) [0x55c9a6ff608d]
4: (()+0x7494) [0x7f33615f9494]
5: (clone()+0x3f) [0x7f3360680acf]
I already picked those:
https://github.com/ceph/ceph/pull/28644
Greets,
Stef
Hello,
does anybody know a fix for this ASSERT / crash?
2020-01-16 02:02:31.316394 7f8c3f5ab700 -1
/build/ceph/src/os/bluestore/BlueStore.cc: In function 'void
BlueStore::_kv_sync_thread()' thread 7f8c3f5ab700 time 2020-01-16
02:02:31.304993
/build/ceph/src/os/bluestore/BlueStore.cc: 8808: FAILED
Hello,
does anybody have real-life experience with an external block DB?
Greets,
Stefan
On 13.01.20 08:09, Stefan Priebe - Profihost AG wrote:
> Hello,
>
> I'm planning to split the block DB onto a separate flash device, which I
> would also like to use as an OSD for erasure co
drives which have a
capacitor. Samsung does not.
The problem is that Ceph sends a lot of flush commands, which slows down
drives without a capacitor.
You can make Linux ignore those userspace flushes with the following
command:
echo "temporary write through" > /sys/block/sdX/device/scsi_disk/*/cache_type
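A sketch for applying this to every SCSI/SATA disk (device names assumed; the setting is per-device and does not survive a reboot or hotplug, so it belongs in a boot script):
for f in /sys/block/sd*/device/scsi_disk/*/cache_type; do
  echo "temporary write through" > "$f"
done
The "temporary" prefix makes the kernel change only its own idea of the cache mode without sending a MODE SELECT to the drive, so the drive keeps its volatile write cache but the kernel stops issuing flushes to it.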
Hello,
I'm planning to split the block DB onto a separate flash device, which I
would also like to use as an OSD for erasure-coding metadata for rbd
devices.
If I want to use 14x 14TB HDDs per node,
https://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/#sizing
recommends a minim
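(For scale, assuming the truncated sentence refers to the guideline on that page that block.db should be at least 4% of the block device: 0.04 x 14 TB = 560 GB of DB space per OSD, i.e. roughly 7.8 TB of flash for 14 such HDDs per node.)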
Hi,
we're currently in the process of building a new ceph cluster to back up rbd
images from multiple ceph clusters.
We would like to start with just a single ceph cluster to back up, which is about
50tb. The compression ratio of the data is around 30% when using zlib. We need to
scale the backup cluster to sync snapshots once a day.
Important are pricing, performance for this task, and expansion. We would like
to start with something around just 50tb of storage.
Greets,
Stefan
>
>
>> Stefan Priebe - Profihost AG <s.pri...@profihost.ag> wrote on 9 January 2020
>> at 22:5
DB or not?
Since we started using ceph we have mostly stuck to SSDs, so no
knowledge about HDDs in place.
Greets,
Stefan
On 09.01.20 16:49, Stefan Priebe - Profihost AG wrote:
>
>> On 09.01.2020 16:10, Wido den Hollander wrote:
>>
>>
>>
>>> O
> On 09.01.2020 16:10, Wido den Hollander wrote:
>
>
>
>> On 1/9/20 2:27 PM, Stefan Priebe - Profihost AG wrote:
>> Hi Wido,
>>> On 09.01.20 14:18, Wido den Hollander wrote:
>>>
>>>
>>> On 1/9/20 2:07 PM, Daniel Aberger -
Hi Wido,
On 09.01.20 14:18, Wido den Hollander wrote:
>
>
> On 1/9/20 2:07 PM, Daniel Aberger - Profihost AG wrote:
>>
>> On 09.01.20 13:39, Janne Johansson wrote:
>>>
>>> I'm currently trying to work out a concept for a ceph cluster which can
>>> be used as a target for backups wh
sue or not.
>
> This time it resembles the issue shared in this mailing list a while ago by
> Stefan Priebe. The post caption is "Bluestore OSDs keep crashing in
> BlueStore.cc: 8808: FAILED assert(r == 0)"
>
> So first of all I'd suggest distinguishing these issues
us is still unpatched and 12.2.13 might be the first release containing this
fix - but no idea of an ETA.
Greets,
Stefan
> Thanks,
>
> Igor
>
> On 9/12/2019 8:20 PM, Stefan Priebe - Profihost AG wrote:
>> Hello Igor,
>>
>> I can now confirm that this is indeed a kernel bug. The is
pressure/swapping:
> https://tracker.ceph.com/issues/22464
>
> IMO memory usage is worth checking as well...
>
>
> Igor
>
>
> On 8/27/2019 4:52 PM, Stefan Priebe - Profihost AG wrote:
>> see inline
>>
>> On 27.08.19 15:43, Igor Fedotov wrote:
occasional invalid
> data reads under high memory pressure/swapping:
> https://tracker.ceph.com/issues/22464
We have a current 4.19.x kernel and no memory limit. Available memory is pretty
constant at 32GB.
Greets,
Stefan
>
> IMO memory usage is worth checking as well...
>
>
> Igor
see inline
On 27.08.19 15:43, Igor Fedotov wrote:
> see inline
>
> On 8/27/2019 4:41 PM, Stefan Priebe - Profihost AG wrote:
>> Hi Igor,
>>
>> On 27.08.19 14:11, Igor Fedotov wrote:
>>> Hi Stefan,
>>>
>>> this looks like a dupli
;-)
> Did you run fsck on any of the broken OSDs? Any reports?
Yes but no reports.
> Any other errors/crashes in the logs before this sort of issue happens?
No
> Just in case - what allocator are you using?
tcmalloc
Greets,
Stefan
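For reference, such an offline fsck is typically run with ceph-bluestore-tool while the OSD is stopped - a sketch, OSD id and path assumed; adding --deep also reads object data and verifies checksums:
# ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-0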
>
> Thanks,
>
> Igor
>
>
>
> O
Hello,
for some months now all our bluestore OSDs have been crashing from time to time.
Currently about 5 OSDs per day.
All of them show the following trace:
Trace:
2019-07-24 08:36:48.995397 7fb19a711700 -1 rocksdb: submit_transaction
error: Corruption: block checksum mismatch code = 2 Rocksdb transacti
On 06.03.19 14:08, Mark Nelson wrote:
>
> On 3/6/19 5:12 AM, Stefan Priebe - Profihost AG wrote:
>> Hi Mark,
>> On 05.03.19 23:12, Mark Nelson wrote:
>>> Hi Stefan,
>>>
>>>
>>> Could you try running your random write workload against
pthread_cond_wait@@GLIBC_2.3.2 () from
target:/lib/x86_64-linux-gnu/libpthread.so.0
.
Thread 1 "ceph-osd" received signal SIGINT, Interrupt.
0x7f917b6a615f in pthread_cond_wait@@GLIBC_2.3.2 () from
target:/lib/x86_64-linux-gnu/libpthread.so.0
.
Thread 1 "ceph-osd" received signa
On 05.03.19 10:05, Paul Emmerich wrote:
> This workload is probably bottlenecked by rocksdb (since the small
> writes are buffered there), so that's probably what needs tuning here.
while reading:
https://www.flashmemorysummit.com/English/Collaterals/Proceedings/2018/20180807_INVT-101A-1_Mered
Hello list,
while the performance of sequential 4k writes on bluestore is very high,
and even higher than with filestore, I was wondering what I can do to optimize
random patterns as well.
While using:
fio --rw=write --iodepth=32 --ioengine=libaio --bs=4k --numjobs=4
--filename=/tmp/test --size=10G --run
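A minimal companion run for the random-write case might look like this (a sketch; the runtime and the required --name are assumed, since the quoted command is truncated):
fio --rw=randwrite --iodepth=32 --ioengine=libaio --bs=4k --numjobs=4 --filename=/tmp/test --size=10G --runtime=60 --time_based --direct=1 --name=randwrite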
non_negative_derivative(first("op_w_process_latency.sum"),
> 1s)/non_negative_derivative(first("op_w_process_latency.avgcount"),1s) FROM
> "ceph" WHERE "host" =~ /^([[host]])$/ AND collection='osd' AND "id" =~
> /^([[osd]])$/ AND $ti
Hi,
On 30.01.19 08:33, Alexandre DERUMIER wrote:
> Hi,
>
> here some new results,
> different osd/ different cluster
>
> before osd restart latency was between 2-5ms
> after osd restart it is around 1-1.5ms
>
> http://odisoweb1.odiso.net/cephperf2/bad.txt (2-5ms)
> http://odisoweb1.odiso.net/
re compiling against and using jemalloc. What happens in this case?
Also, I see now that 12.2.10 uses at most 1GB of memory while 12.2.8 uses 6-7GB
(with bluestore_cache_size = 1073741824).
Greets,
Stefan
On 17.01.19 22:59, Stefan Priebe - Profihost AG wrote:
> Hello Mark,
>
> for what
(no
idea why, or whether it is related to cluster size).
That's currently all I know.
Thanks a lot!
Greets,
Stefan
On 16.01.19 20:56, Stefan Priebe - Profihost AG wrote:
> I reverted the whole cluster back to 12.2.8 - recovery speed also
> dropped from 300-400MB/s to 20MB/s on 12.2.10. So
I reverted the whole cluster back to 12.2.8 - recovery speed also
dropped from 300-400MB/s to 20MB/s on 12.2.10. So something is really
broken.
Greets,
Stefan
On 16.01.19 16:00, Stefan Priebe - Profihost AG wrote:
> This is not the case with 12.2.8 - it happens with 12.2.9 as well. Af
15:22, Stefan Priebe - Profihost AG wrote:
> Hello,
>
> while digging into this further I saw that it takes ages until all pgs
> are active. After starting the OSD, 3% of all pgs are inactive and it
> takes minutes until they're active.
>
> The log of the OSD is full of
8145 les/c/f 1318473/1278148/1211861 1318472/1318472/1318472) [33,3,22]
r=0 lpr=1318472 pi=[1278145,1318472)/1 rops=4 crt=1318474'61584855
mlcod 1318356'61576253 active+recovering+degraded m=183
snaptrimq=[ec1a0~1,ec808~1] mbc={255={(2+0)=183,(3+0)=3}}]
_update_calc_stats ml 183 upset size 3 up 2
Hi,
No, OK, it was not. The bug is still present. It only worked because the
osdmap was so far behind that it started backfill instead of recovery.
So it happens only in the recovery case.
Greets,
Stefan
On 15.01.19 16:02, Stefan Priebe - Profihost AG wrote:
>
> On 15.01.19 12:45,
On 15.01.19 12:45, Marc Roos wrote:
>
> I upgraded this weekend from 12.2.8 to 12.2.10 without such issues
> (OSDs are idle)
it turns out this was a kernel bug. Updating to a newer kernel has
solved the issue.
Greets,
Stefan
> -----Original Message-----
>
Hello list,
I also tested the current upstream/luminous branch and it happens there as well. A
clean install works fine. It only happens on upgraded bluestore OSDs.
Greets,
Stefan
On 14.01.19 20:35, Stefan Priebe - Profihost AG wrote:
> while trying to upgrade a cluster from 12.2.8 to 12.2.10
Hi Paul,
On 14.01.19 21:39, Paul Emmerich wrote:
> What's the output of "ceph daemon osd.<id> status" on one of the OSDs
> while it's starting?
{
"cluster_fsid": "b338193d-39e0-40e9-baba-4965ef3868a3",
"osd_fsid": "d95d0e3b-7441-4ab0-869c-fe0551d3bd52",
"whoami": 2,
"state": "act
Hi,
while trying to upgrade a cluster from 12.2.8 to 12.2.10 I'm experiencing
issues with bluestore OSDs - so I canceled the upgrade and all bluestore
OSDs are stopped now.
After starting a bluestore OSD I'm seeing a lot of slow requests caused
by very high read rates.
Device: rrqm/s wr
sks.
> You might need to modify some bluestore settings to speed up the time it
> takes to peer, or perhaps you are just underpowering the number of
> OSD disks you're running and your servers and OSD daemons are going
> as fast as they can.
> On Sat, Oct 13, 2018 a
tat shows:
sdi 77,00 0,00 580,00 97,00 511032,00 972,00 1512,57 14,88 22,05 24,57 6,97 1,48 100,00
so it reads at 500MB/s, which completely saturates the OSD. And it does so
for > 10 minutes.
Greets,
Stefan
On 13.10.2018 21:29, Stefan Priebe - Profih
osd.19 is a bluestore OSD on a healthy 2TB SSD.
Log of osd.19 is here:
https://pastebin.com/raw/6DWwhS0A
On 13.10.2018 21:20, Stefan Priebe - Profihost AG wrote:
> Hi David,
>
> I think this should be the problem - from a new log from today:
>
> 2018-10-13 20:57:20.36
Hi David,
I think this should be the problem - from a new log from today:
2018-10-13 20:57:20.367326 mon.a [WRN] Health check update: 4 osds down
(OSD_DOWN)
...
2018-10-13 20:57:41.268674 mon.a [WRN] Health check update: Reduced data
availability: 3 pgs peering (PG_AVAILABILITY)
...
2018-10-13 20
Hi David,
On 12.10.2018 15:59, David Turner wrote:
> The PGs per OSD do not change unless the OSDs are marked out. You
> have noout set, so that doesn't change at all during this test. All of
> your PGs peered quickly at the beginning and then were active+undersized
> the rest of the time,
question, is anyone using Intel Optane DC P4800X on DELL
> R630 ...or any other server ?
> Any gotchas / feedback/ knowledge sharing will be greatly appreciated
>
> Steven
>
>> On Thu, 6 Sep 2018 at 14:59, Stefan Priebe - Profihost AG
>> wrote:
>> Hello
Hello list,
has anybody tested current NVMe performance with luminous and bluestore?
Is this something which makes sense or just a waste of money?
Greets,
Stefan
On 21.08.2018 17:28, Gregory Farnum wrote:
> You should be able to create issues now; we had a misconfiguration in
> the tracker following the recent spam attack.
> -Greg
>
> On Tue, Aug 21, 2018 at 3:07 AM, Stefan Priebe - Profihost AG
> wrote:
>>
>> On 21.0
On 21.08.2018 12:03, Stefan Priebe - Profihost AG wrote:
>
> On 21.08.2018 11:56, Dan van der Ster wrote:
>> On Tue, Aug 21, 2018 at 11:54 AM Stefan Priebe - Profihost AG
>> wrote:
>>>
>>> On 21.08.2018 11:47, Dan van der Ster wrote:
>>
On 21.08.2018 11:56, Dan van der Ster wrote:
> On Tue, Aug 21, 2018 at 11:54 AM Stefan Priebe - Profihost AG
> wrote:
>>
>> On 21.08.2018 11:47, Dan van der Ster wrote:
>>> On Mon, Aug 20, 2018 at 10:45 PM Stefan Priebe - Profihost AG
>>> wrote:
On 21.08.2018 11:47, Dan van der Ster wrote:
> On Mon, Aug 20, 2018 at 10:45 PM Stefan Priebe - Profihost AG
> wrote:
>>
>>
>> On 20.08.2018 22:38, Dan van der Ster wrote:
>>> On Mon, Aug 20, 2018 at 10:19 PM Stefan Priebe - Profihost AG
>>> wr
On 20.08.2018 22:38, Dan van der Ster wrote:
> On Mon, Aug 20, 2018 at 10:19 PM Stefan Priebe - Profihost AG
> wrote:
>>
>>
>> On 20.08.2018 21:52, Sage Weil wrote:
>>> On Mon, 20 Aug 2018, Stefan Priebe - Profihost AG wrote:
>>>> Hello,
wrote:
>
> On Mon, 20 Aug 2018, Stefan Priebe - Profihost AG wrote:
> > Hello,
> >
> > since Loic seems to have left ceph development and his wonderful crush
> > optimization tool isn't working anymore, I'm trying to get a g
On 20.08.2018 21:52, Sage Weil wrote:
> On Mon, 20 Aug 2018, Stefan Priebe - Profihost AG wrote:
>> Hello,
>>
>> since Loic seems to have left ceph development and his wonderful crush
>> optimization tool isn't working anymore, I'm trying to get a good
Hello,
since Loic seems to have left ceph development and his wonderful crush
optimization tool isn't working anymore, I'm trying to get a good
distribution with the ceph balancer.
Sadly it does not work as well as I want.
# ceph osd df | sort -k8
shows 75 to 83% usage, which is an 8% difference, whic
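A possible manual plan workflow with the mgr balancer (a sketch; the plan name is assumed), which lets you check the score before moving any data:
# ceph balancer mode crush-compat
# ceph balancer eval
# ceph balancer optimize myplan
# ceph balancer eval myplan
# ceph balancer execute myplan
ceph balancer eval <plan> prints the score the cluster would have after applying the plan; lower is better, so a plan is only worth executing if its score beats the current one.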
Hi,
On 02.03.2018 21:21, Stefan Priebe - Profihost AG wrote:
> Hi,
>
> On 02.03.2018 14:29, Dan van der Ster wrote:
>> On Fri, Mar 2, 2018 at 10:12 AM, Stefan Priebe - Profihost AG
>> wrote:
>>> Thanks! Your patch works great!
>>
>> Coo
Hi,
On 02.03.2018 14:29, Dan van der Ster wrote:
> On Fri, Mar 2, 2018 at 10:12 AM, Stefan Priebe - Profihost AG
> wrote:
>> Thanks! Your patch works great!
>
> Cool! I plan to add one more feature to allow operators to switch off
> components of the score functio
should evaluate the ceph health status as well.
Stefan
Excuse my typos; sent from my mobile phone.
> On 01.03.2018 13:12, Dan van der Ster wrote:
>
> On Thu, Mar 1, 2018 at 1:08 PM, Stefan Priebe - Profihost AG
> wrote:
>> nice thanks will try that soon.
>>
>>
ote:
>>> On Thu, Mar 1, 2018 at 10:24 AM, Stefan Priebe - Profihost AG
>>> wrote:
>>>>
>>>> On 01.03.2018 09:58, Dan van der Ster wrote:
>>>>> On Thu, Mar 1, 2018 at 9:52 AM, Stefan Priebe - Profihost AG
>>>>> wrote:
>>
On 01.03.2018 09:58, Dan van der Ster wrote:
> On Thu, Mar 1, 2018 at 9:52 AM, Stefan Priebe - Profihost AG
> wrote:
>> Hi,
>>
>>> On 01.03.2018 09:42, Dan van der Ster wrote:
>>> On Thu, Mar 1, 2018 at 9:31 AM, Stefan Priebe - Profihost AG
>>> w
Hi,
On 01.03.2018 09:42, Dan van der Ster wrote:
> On Thu, Mar 1, 2018 at 9:31 AM, Stefan Priebe - Profihost AG
> wrote:
>> Hi,
>> On 01.03.2018 09:03, Dan van der Ster wrote:
>>> Is the score improving?
>>>
>>> ceph balancer eval
>
>> It seems to balance from left to right and then back from right to left...
>>
>> Greets,
>> Stefan
>>
>>> On 28.02.2018 13:47, Stefan Priebe - Profihost AG wrote:
>>> Hello,
>>>
>>> with jewel we always used the python crush optimizer which
Does anybody have some more input?
I have kept the balancer active for 24h now and it is rebalancing 1-3%
every 30 minutes, but the distribution is still bad.
It seems to balance from left to right and then back from right to left...
Greets,
Stefan
On 28.02.2018 13:47, Stefan Priebe
-compat mode
Yes, only one iteration, but I set max_misplaced to 20%:
"mgr/balancer/max_misplaced": "20.00",
>
> -- dan
>
>
> On Wed, Feb 28, 2018 at 1:47 PM, Stefan Priebe - Profihost AG
> wrote:
>> Hello,
>>
>> with jewel we always used
On 28.02.2018 13:59, John Spray wrote:
> On Wed, Feb 28, 2018 at 12:47 PM, Stefan Priebe - Profihost AG
> wrote:
>> Hello,
>>
>> with jewel we always used the python crush optimizer which gave us a
>> pretty good distribution of the used space.
>>
>&g
Hello,
with jewel we always used the python crush optimizer which gave us a
pretty good distribution of the used space.
Since luminous we're using the included ceph mgr balancer but the
distribution is far from perfect and much worse than the old method.
Is there any way to tune the mgr balancer
ceph-objectstore-tool --data-path /.../osd.$OSD/ --journal-path
/dev/disk/by-partlabel/journal$OSD rbd_data.$RBD remove-clone-metadata
$CLONEID
>
> thanks
>
> Saverio
>
>
> On 08.08.17 12:02, Stefan Priebe - Profihost AG wrote:
>> Hello Greg,
>>
>> On 08.08.2017 11:5
Hi,
On 14.12.2017 15:02, Sage Weil wrote:
> On Thu, 14 Dec 2017, Stefan Priebe - Profihost AG wrote:
>>
>> On 14.12.2017 13:22, Sage Weil wrote:
>>> On Thu, 14 Dec 2017, Stefan Priebe - Profihost AG wrote:
>>>> Hello,
>>>>
>>>>
Hello,
bcache didn't support partitions in the past, so a lot of our OSDs
have their data directly on:
/dev/bcache[0-9]
But that means I can't give them the needed partition type of
4fbd7e29-9d25-41b8-afd0-062c0ceff05d, and that means that the activation
with udev and ceph-disk does not work.
Ha
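One conceivable workaround (a sketch only, untested; the rule file name is hypothetical) is a udev rule that calls ceph-disk for bcache devices directly instead of matching on the GPT type code:
# /etc/udev/rules.d/99-ceph-bcache.rules
KERNEL=="bcache*", SUBSYSTEM=="block", ACTION=="add", RUN+="/usr/sbin/ceph-disk activate /dev/%k"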
Hello,
I've got some OSDs which were created under bobtail or argonaut (pre
ceph-deploy).
Those are not recognized as a ceph-osd@57.service. Also, they have an
entry in the ceph.conf:
[osd.12]
host=1336
osd_data = /ceph/osd.$id/
osd_journal = /dev/disk/by-partlabel/journal$id
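Such pre-ceph-disk OSDs can usually still be started by hand from the paths in that ceph.conf entry - a sketch, not a systemd integration:
# ceph-osd -i 12 --osd-data /ceph/osd.12/ --osd-journal /dev/disk/by-partlabel/journal12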
Hello,
my luminous ceph-osd daemons are crashing with a segmentation fault while
backfilling.
Is there any way to manually remove the problematic "data"?
-1> 2018-01-16 20:32:50.001722 7f27d53fe700 0 osd.86 pg_epoch:
917877 pg[3.80e( v 917875'69934125 (917365'69924082,917875'69934125] lb
3:7018abae
On 10.01.2018 16:38, Sage Weil wrote:
> On Wed, 10 Jan 2018, John Spray wrote:
>> On Wed, Jan 10, 2018 at 2:11 PM, Stefan Priebe - Profihost AG
>> wrote:
>>> Hello,
>>>
>>> since upgrading to luminous I get the following error:
>
Hello,
since upgrading to luminous I get the following error:
HEALTH_ERR full ratio(s) out of order
OSD_OUT_OF_ORDER_FULL full ratio(s) out of order
backfillfull_ratio (0.9) < nearfull_ratio (0.95), increased
but ceph.conf has:
mon_osd_full_ratio = .97
mon_osd_nearfull_ratio
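In luminous these ratios live in the OSDMap, so the quickest way out of this health error is usually to set them at runtime (the values below are assumptions; pick ones ordered nearfull < backfillfull < full):
# ceph osd set-nearfull-ratio 0.85
# ceph osd set-backfillfull-ratio 0.90
# ceph osd set-full-ratio 0.95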
Am 04.01.2018 um 18:37 schrieb Gregory Farnum:
> On Thu, Jan 4, 2018 at 4:57 AM Stefan Priebe - Profihost AG
> <s.pri...@profihost.ag> wrote:
>
> Hello,
>
> I set mon_max_pg_per_osd to 300 but the cluster stays in warn state.
>
> # ceph -s
Hello,
I set mon_max_pg_per_osd to 300 but the cluster stays in warn state.
# ceph -s
cluster:
id: 5482b798-0bf1-4adb-8d7a-1cd57bdc1905
health: HEALTH_WARN
too many PGs per OSD (240 > max 200)
# ceph --admin-daemon /var/run/ceph/ceph-mon.1.asok config show|grep -i
mon_
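The health check uses the value the monitors are actually running with, so after editing ceph.conf the mons need a restart, or - a sketch, assuming the option is runtime-changeable in your release - an injection:
# ceph tell mon.\* injectargs '--mon_max_pg_per_osd=300'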
On 14.12.2017 13:22, Sage Weil wrote:
> On Thu, 14 Dec 2017, Stefan Priebe - Profihost AG wrote:
>> Hello,
>>
>> On 21.11.2017 11:06, Stefan Priebe - Profihost AG wrote:
>>> Hello,
>>>
>>> to measure performance / latency fo
Hello,
On 21.11.2017 11:06, Stefan Priebe - Profihost AG wrote:
> Hello,
>
> to measure performance / latency for filestore we used:
> filestore:apply_latency
> filestore:commitcycle_latency
> filestore:journal_latency
> filestore:queue_transaction_latency_avg
>
>
Hello,
to measure performance / latency for filestore we used:
filestore:apply_latency
filestore:commitcycle_latency
filestore:journal_latency
filestore:queue_transaction_latency_avg
What are the correct ones for bluestore?
Greets,
Stefan
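All counters an OSD exposes can be pulled from the admin socket and filtered to the bluestore section - a sketch, assuming jq is available; the exact counter names (e.g. commit_lat) vary by release:
# ceph daemon osd.0 perf dump | jq '.bluestore'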
On 12.11.2017 17:55, Sage Weil wrote:
> On Wed, 25 Oct 2017, Sage Weil wrote:
>> On Wed, 25 Oct 2017, Stefan Priebe - Profihost AG wrote:
>>> Hello,
>>>
>>> the luminous release notes state that zstd is not supported by
>>> bluestore due t
Hi,
while trying to use the python crush tools on a luminous cluster I get:
crush.ceph.HealthError: expected health overall_status == HEALTH_OK but
got HEALTH_WARN instead
It seems crush-1.0.35 uses the deprecated overall_status element.
Greets,
Stefan
Hello,
is there already a kernel available which speaks luminous?
ceph features still reports release jewel for my kernel clients.
Greets,
Stefan
] Compressor/CompressorTest.decompress_16384/3 (1449 ms)
[----------] 64 tests from Compressor/CompressorTest (29128 ms total)
Greets,
Stefan
On 04.11.2017 21:10, Sage Weil wrote:
> On Sat, 4 Nov 2017, Stefan Priebe - Profihost AG wrote:
>> Hi Sage,
>>
>> On 26.10.2017 13
I have no idea why it tries to use libceph_invalid and libceph_example
Stefan
On 04.11.2017 21:10, Sage Weil wrote:
> On Sat, 4 Nov 2017, Stefan Priebe - Profihost AG wrote:
>> Hi Sage,
>>
>> On 26.10.2017 13:58, Sage Weil wrote:
>>> On Thu, 26 Oct 2017, Stefan
Hi,
argh, sorry, I got it. Building the test right now. Will report results
shortly.
Greets,
Stefan
On 04.11.2017 21:10, Sage Weil wrote:
> On Sat, 4 Nov 2017, Stefan Priebe - Profihost AG wrote:
>> Hi Sage,
>>
>> On 26.10.2017 13:58, Sage Weil wrote:
>>> On T
ession
make: *** No rule to make target 'unittest_compression'. Stop.
Stefan
On 04.11.2017 21:10, Sage Weil wrote:
> On Sat, 4 Nov 2017, Stefan Priebe - Profihost AG wrote:
>> Hi Sage,
>>
>> On 26.10.2017 13:58, Sage Weil wrote:
>>> On Thu, 26 Oct 2017, Ste
OK zstd.h needs:
#define ZSTD_STATIC_LINKING_ONLY
to export that one.
Stefan
On 04.11.2017 21:23, Stefan Priebe - Profihost AG wrote:
> Thanks - not a C++ guy; what's wrong with it?
>
> /build/ceph/src/compressor/zstd/ZstdCompressor.h: In member function
> 'virt
Size); /**< re-use compression
parameters from previous init; skip dictionary loading stage; zcs must
be init at least once before */
Stefan
On 04.11.2017 21:10, Sage Weil wrote:
> On Sat, 4 Nov 2017, Stefan Priebe - Profihost AG wrote:
>> Hi Sage,
>>
>> On 26.10.2017
Hi Sage,
On 26.10.2017 13:58, Sage Weil wrote:
> On Thu, 26 Oct 2017, Stefan Priebe - Profihost AG wrote:
>> Hi Sage,
>>
>> On 25.10.2017 21:54, Sage Weil wrote:
>>> On Wed, 25 Oct 2017, Stefan Priebe - Profihost AG wrote:
>>>> Hello,
>>>
Hello list,
in the past we used the E5-1650v4 for our SSD-based ceph nodes, which
worked fine.
The new Xeon generation doesn't seem to have a replacement. The closest one,
which is still 0.2GHz slower, is the Intel Xeon Gold 6128. But the price
is 3 times as high.
So the question is: is there any bene
Hello,
On 27.10.2017 19:00, David Turner wrote:
> What does your crush map look like? Also a `ceph df` output. You're
> optimizing your map for pool #5; if there are other pools with a
> significant amount of data, then you're going to be off on your cluster
> balance.
There are no other pool
Hello,
while trying to optimize a ceph cluster running jewel I get the
following output:
2017-10-26 10:43:27,615 argv = optimize --crushmap
/home/spriebe/ceph.report --out-path /home/spriebe/optimized.crush
--pool 5 --pool=5 --choose-args=5 --replication-count=3 --pg-num=4096
--pgp-num=4096 --rule
Hi Sage,
On 25.10.2017 21:54, Sage Weil wrote:
> On Wed, 25 Oct 2017, Stefan Priebe - Profihost AG wrote:
>> Hello,
>>
>> the luminous release notes state that zstd is not supported by
>> bluestore due to performance reasons. I'm wondering why btrfs instea
Hello,
the luminous release notes state that zstd is not supported by
bluestore due to performance reasons. I'm wondering why, when btrfs instead
states that zstd is as fast as lz4 but compresses as well as zlib.
Why is zlib then supported by bluestore? And why do btrfs / facebook
behave different
des , 18 x intel s3610 1,6TB in coming
> weeks.
>
> I'll send results on the mailing.
Thanks!
Greets,
Stefan
> - Original message -
> From: "Stefan Priebe, Profihost AG"
> To: "Christian Balzer" , "ceph-users"
> Sent: Thursday, 7 Septemb
On 07.09.2017 10:44, Christian Balzer wrote:
>
> Hello,
>
> On Thu, 7 Sep 2017 08:03:31 +0200 Stefan Priebe - Profihost AG wrote:
>
>> Hello,
>> On 07.09.2017 03:53, Christian Balzer wrote:
>>>
>>> Hello,
>>>
>>> On Wed, 6 S
itself.
You do this by writing the string "temporary write through" to
/sys/block/sdb/device/scsi_disk/*/cache_type
Greets,
Stefan
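To check that the change took effect, the same sysfs attribute can simply be read back (device name assumed):
# cat /sys/block/sdb/device/scsi_disk/*/cache_type
write through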
>
> -----Original Message-----
> From: Stefan Priebe - Profihost AG [mailto:s.pri...@profihost.ag]
> Sent: Thursday, 7 September 2017 8:04
>
Hello,
On 07.09.2017 03:53, Christian Balzer wrote:
>
> Hello,
>
> On Wed, 6 Sep 2017 09:09:54 -0400 Alex Gorbachev wrote:
>
>> We are planning a Jewel filestore based cluster for a performance
>> sensitive healthcare client, and the conservative OSD choice is
>> Samsung SM863A.
>>
>
> Whil
Hello Greg,
On 08.08.2017 11:56, Gregory Farnum wrote:
> On Mon, Aug 7, 2017 at 11:55 PM Stefan Priebe - Profihost AG
> <s.pri...@profihost.ag> wrote:
>
> Hello,
>
> how can I fix this one:
>
> 2017-08-08 08:42:52.265321 osd.20 [ERR]