[ceph-users] Re: kvm vm cephfs mount hangs on osd node (something like umount -l available?) (help wanted going to production)

2020-12-28 Thread Marc Roos
://docs.ceph.com/en/latest/cephfs/eviction/#advanced-un-blocklisting-a-client Zitat von Marc Roos : > Is there not some genius out there that can shed a light on this? ;) > Currently I am not able to reproduce this. Thus it would be nice to > have some procedure at hand that resolves stale ceph
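For readers hitting the same stale-mount state, a minimal sketch of the eviction/un-blocklisting flow from the linked doc is below; the client id and address are illustrative, and on releases before Octopus the subcommand is 'blacklist' rather than 'blocklist'.
  # list cephfs sessions on the MDS and evict the stale one (id is made up here)
  ceph tell mds.0 client ls
  ceph tell mds.0 client evict id=4305
  # the evicted client lands on the OSD blocklist; remove it if you want it to reconnect
  ceph osd blocklist ls
  ceph osd blocklist rm 192.168.10.21:0/3558077453
  # on the guest, a forced/lazy unmount followed by a fresh mount is usually still needed
  umount -f -l /mnt/cephfs
  mount -t ceph mon1:6789:/ /mnt/cephfs -o name=archive,secretfile=/etc/ceph/archive.secret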

[ceph-users] Re: kvm vm cephfs mount hangs on osd node (something like umount -l available?) (help wanted going to production)

2020-12-22 Thread Marc Roos
Is there not some genius out there that can shed a light on this? ;) Currently I am not able to reproduce this. Thus it would be nice to have some procedure at hand that resolves stale cephfs mounts nicely. -Original Message- To: ceph-users Subject: [ceph-users] kvm vm cephfs mount

[ceph-users] Re: Can big data use Ceph?

2020-12-22 Thread Marc Roos
I am not really familiar with spark, but I see it often used in combination with mesos. They currently implemented a csi solution that should enable access to ceph. I have been trying to get this to work[1]. I assume being able to scale tasks with distributed block devices or the cephfs would

[ceph-users] Re: kvm vm cephfs mount hangs on osd node (something like umount -l available?) (help wanted going to production)

2020-12-22 Thread Marc Roos
I can live-migrate the vm in this locked up state to a different host without any problems. -Original Message- To: ceph-users Subject: [ceph-users] kvm vm cephfs mount hangs on osd node (something like umount -l available?) (help wanted going to production) I have a vm on a osd

[ceph-users] Re: kvm vm cephfs mount hangs on osd node (something like umount -l available?) (help wanted going to production)

2020-12-22 Thread Marc Roos
Just got this during bonnie test, trying to do an ls -l on the cephfs. I also have this kworker process constantly at 40% when doing this bonnie++ test. [35281.101763] INFO: task bash:1169 blocked for more than 120 seconds. [35281.102064] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"

[ceph-users] kvm vm cephfs mount hangs on osd node (something like umount -l available?) (help wanted going to production)

2020-12-22 Thread Marc Roos
I have a vm on an osd node (which can reach the host and other nodes via the macvtap interface (used by the host and guest)). I just did a simple bonnie++ test and everything seems to be fine. Yesterday however the dovecot process apparently caused problems (only using cephfs for an archive

[ceph-users] Cephfs mount hangs

2020-12-21 Thread Marc Roos
How to recover from this? Is it possible to have a vm with a cephfs mount on an osd server?

[ceph-users] Re: Is there a command to update a client with a new generated key?

2020-12-21 Thread Marc Roos
c/ceph # touch /mnt/file2 host1:/etc/ceph # ls -l /mnt/ total 0 -rw-r--r-- 1 root root 0 Dec 21 10:14 file2 Zitat von Marc Roos : > Is there a command to update a client with a new generated key? > Something like: > > ceph auth ne

[ceph-users] Is there a command to update a client with a new generated key?

2020-12-20 Thread Marc Roos
Is there a command to update a client with a new generated key? Something like: ceph auth new-key client.rbd Could be useful if you accidentally did a ceph auth ls, because that still displays keys ;)
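I am not aware of a single built-in 'new-key' subcommand in nautilus; a hedged workaround sketch (client name and caps are illustrative) is to note the caps, drop the entity and recreate it, which generates a fresh key:
  ceph auth get client.rbd                      # note the current caps first
  ceph auth del client.rbd
  ceph auth get-or-create client.rbd mon 'profile rbd' osd 'profile rbd pool=rbd' \
      -o /etc/ceph/ceph.client.rbd.keyring      # new key, same style of caps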

[ceph-users] who's managing the cephcsi plugin?

2020-12-17 Thread Marc Roos
Is this cephcsi plugin under control of redhat?

[ceph-users] Re: Ceph benchmark tool (cbt)

2020-12-11 Thread Marc Roos
Just run the tool from a client that is not part of the ceph nodes. Then it can do nothing that you did not configure ceph to allow it to do ;) Besides, you should never run software from 'unknown' sources in an environment where it can use 'admin' rights. -Original Message- To:
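A minimal sketch of that idea, assuming you create a throwaway pool and a cephx user that can only touch it (names are illustrative):
  ceph osd pool create cbt-test 64 64
  ceph auth get-or-create client.cbt mon 'allow r' osd 'allow rwx pool=cbt-test' \
      -o /etc/ceph/ceph.client.cbt.keyring
  # run the benchmark from a non-ceph host using only this keyring, e.g.
  rados -n client.cbt --keyring /etc/ceph/ceph.client.cbt.keyring -p cbt-test bench 60 write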

[ceph-users] Re: CentOS

2020-12-08 Thread Marc Roos
.@performair.com > www.PerformAir.com > > -Original Message- > From: Marc Roos [mailto:m.r...@f1-outsourcing.eu] > Sent: Tuesday, December 8, 2020 2:02 PM > To: ceph-users; Dominic Hilsbos > Cc: aKrishna > Subject: [ceph-users] Re: CentOS > > > I did not. Thank

[ceph-users] Re: CentOS

2020-12-08 Thread Marc Roos
I did not. Thanks for the info. But if I understand this[1] explanation correctly. CentOS stream is some sort of trial environment for rhel. So who is ever going to put SDS on such an OS? Last post on this blog "But if you read the FAQ, you also learn that once they start work on RHEL 9,

[ceph-users] Re: guest fstrim not showing free space

2020-12-07 Thread Marc Roos
Yes, use everywhere virtio-scsi (via kvm with discard='unmap'). 'lsblk --discard' also shows discard is supported. vm's with xfs filesystem seem to behave better. -Original Message- Cc: lordcirth; ceph-users Subject: Re: [ceph-users] Re: guest fstrim not showing free space What

[ceph-users] Re: guest fstrim not showing free space

2020-12-07 Thread Marc Roos
, December 07, 2020 3:58 PM Cc: ceph-users Subject: Re: [ceph-users] guest fstrim not showing free space Is the VM's / ext4? On Sun., Dec. 6, 2020, 12:57 p.m. Marc Roos, wrote: I have a 74GB vm with 34466MB free space. But when I do fstrim / 'rbd du' shows still 60GB used

[ceph-users] Re: guest fstrim not showing free space

2020-12-07 Thread Marc Roos
Yes! Indeed old one, with ext4 still. -Original Message- Sent: Monday, December 07, 2020 3:58 PM Cc: ceph-users Subject: Re: [ceph-users] guest fstrim not showing free space Is the VM's / ext4? On Sun., Dec. 6, 2020, 12:57 p.m. Marc Roos, wrote: I have a 74GB vm

[ceph-users] Re: guest fstrim not showing free space

2020-12-06 Thread Marc Roos
Marc Roos : > I have a 74GB vm with 34466MB free space. But when I do fstrim / 'rbd > du' shows still 60GB used. > When I fill the 34GB of space with an image, delete it and do again > the fstrim 'rbd du' still shows 59GB used. > > Is this normal? Or should I be able to ge

[ceph-users] guest fstrim not showing free space

2020-12-06 Thread Marc Roos
I have a 74GB vm with 34466MB free space. But when I run fstrim /, 'rbd du' still shows 60GB used. When I fill the 34GB of space with an image, delete it and run fstrim again, 'rbd du' still shows 59GB used. Is this normal? Or should I be able to get it to ~30GB used?
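A short checklist to narrow this down (pool/image names are illustrative; rbd sparsify only exists on recent releases):
  # in the guest: confirm the virtual disk advertises discard, then trim verbosely
  lsblk --discard
  fstrim -v /
  # on a ceph client: what the image actually occupies
  rbd du rbd/vm-mail-disk
  # if discards never reach the cluster, zeroed regions can also be reclaimed server-side
  rbd sparsify rbd/vm-mail-disk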

[ceph-users] Re: rbd image backup best practice

2020-11-30 Thread Marc Roos
...@gmail.com] Sent: Monday, November 30, 2020 12:43 PM To: Marc Roos Cc: ceph-users Subject: *SPAM* Re: [ceph-users] rbd image backup best practice Den fre 27 nov. 2020 kl 23:21 skrev Marc Roos : Is there a best practice or guide for backuping rbd images? One would

[ceph-users] rbd image backup best practice

2020-11-27 Thread Marc Roos
Is there a best practice or guide for backing up rbd images?
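One common approach is snapshot-based incremental export with export-diff/import-diff; a minimal sketch, with illustrative image and snapshot names:
  rbd snap create rbd/vm1@base
  rbd export rbd/vm1@base vm1-base.img
  # later: ship only the delta since the base snapshot
  rbd snap create rbd/vm1@day1
  rbd export-diff --from-snap base rbd/vm1@day1 vm1-day1.diff
  # restore: import the base, recreate the base snapshot name, replay the diff
  rbd import vm1-base.img rbd/vm1-restore
  rbd snap create rbd/vm1-restore@base
  rbd import-diff vm1-day1.diff rbd/vm1-restore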

[ceph-users] Re: Ceph on ARM ?

2020-11-25 Thread Marc Roos
How does ARM compare to Xeon in latency and cluster utilization? -Original Message-; ceph-users Subject: [ceph-users] Re: Ceph on ARM ? Indeed it does run very happily on ARM. We have three of the Mars 400 appliances from Ambedded and they work exceedingly well. 8 micro servers

[ceph-users] Re: Documentation of older Ceph version not accessible anymore on docs.ceph.com

2020-11-24 Thread Marc Roos
2nd that. Why even remove old documentation before it is migrated to the new environment? It should be left online until the migration has successfully completed. -Original Message- Sent: Tuesday, November 24, 2020 4:23 PM To: Frank Schilder Cc: ceph-users Subject: [ceph-users] Re:

[ceph-users] Re: Seriously degraded performance after update to Octopus

2020-11-02 Thread Marc Roos
I have been advocating for a long time for publishing testing data of some basic test cluster against different ceph releases. Just a basic ceph cluster that covers most configs, running the same tests, so you can compare just ceph performance. That would mean a lot for smaller companies that do

[ceph-users] Re: frequent Monitor down

2020-10-29 Thread Marc Roos
Really? First time I read this here, afaik you can get a split brain like this. -Original Message- Sent: Thursday, October 29, 2020 12:16 AM To: Eugen Block Cc: ceph-users Subject: [ceph-users] Re: frequent Monitor down Eugen, I've got four physical servers and I've installed mon on

[ceph-users] Re: ceph octopus centos7, containers, cephadm

2020-10-23 Thread Marc Roos
via RPMs on el7 without the cephadm and containers orchestration, then the answer is yes. -- dan On Fri, Oct 23, 2020 at 9:47 AM Marc Roos wrote: > > > No clarity on this? > > -Original Message- > To: ceph-users > Subject: [ceph-users] ceph octopus centos7, container

[ceph-users] Re: ceph octopus centos7, containers, cephadm

2020-10-23 Thread Marc Roos
No clarity on this? -Original Message- To: ceph-users Subject: [ceph-users] ceph octopus centos7, containers, cephadm I am running Nautilus on centos7. Does octopus run similar as nautilus thus: - runs on el7/centos7 - runs without containers by default - runs without cephadm by

[ceph-users] ceph octopus centos7, containers, cephadm

2020-10-20 Thread Marc Roos
I am running Nautilus on centos7. Does octopus run similar to nautilus, thus: - runs on el7/centos7 - runs without containers by default - runs without cephadm by default

[ceph-users] RE Re: Recommended settings for PostgreSQL

2020-10-20 Thread Marc Roos
I wanted to create a few stateful containers with mysql/postgres that did not depend on local persistent storage, so I can dynamically move them around. What about using; - a 1x replicated pool and use rbd mirror, - or having postgres use 2 1x replicated pools - or upon task launch create

[ceph-users] Re: Recommended settings for PostgreSQL

2020-10-19 Thread Marc Roos
> In the past I see some good results (benchmark & latencies) for MySQL and PostgreSQL. However, I've always used > 4MB object size. Maybe i can get much better performance on smaller object size. Haven't tried actually. Did you tune mysql / postgres for this setup? Did you have a default

[ceph-users] Re: radosgw bucket subdomain with tls

2020-10-15 Thread Marc Roos
/blogs/aws/amazon-s3-path-deprecation-plan-the-rest-of-the-story/ On 10/15/20 2:18 PM, Marc Roos wrote: > > I enabled a certificate on my radosgw, but I think I am running into the > problem that the s3 clients are accessing the buckets like > bucket.rgw.domain.com. Which f

[ceph-users] radosgw bucket subdomain with tls

2020-10-15 Thread Marc Roos
I enabled a certificate on my radosgw, but I think I am running into the problem that the s3 clients are accessing the buckets like bucket.rgw.domain.com, which fails my cert for rgw.domain.com. Is there any way to configure that only rgw.domain.com is being used?
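Client-side you can usually force path-style addressing so only rgw.domain.com is contacted; a sketch with illustrative bucket/endpoint values (the alternative is a wildcard cert for *.rgw.domain.com):
  # s3cmd: point host and host-bucket at the bare endpoint (no %(bucket)s template)
  s3cmd --host=rgw.domain.com --host-bucket=rgw.domain.com ls s3://mybucket
  # aws cli: switch the addressing style to path
  aws configure set default.s3.addressing_style path
  aws --endpoint-url https://rgw.domain.com s3 ls s3://mybucket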

[ceph-users] Possible to disable check: x pool(s) have no replicas configured

2020-10-10 Thread Marc Roos
Is it possible to disable checking on 'x pool(s) have no replicas configured', so I don't have this HEALTH_WARN constantly. Or is there some other disadvantage of keeping some empty 1x replication test pools?
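If I remember correctly this warning is governed by a mon option (added around 14.2.10/Octopus), so a hedged sketch to silence it cluster-wide, or to mute just that health code on Octopus+, would be:
  ceph config set mon mon_warn_on_pool_no_redundancy false
  # or, Octopus+: mute only this health code for a while
  ceph health mute POOL_NO_REDUNDANCY 1w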

[ceph-users] Re: another osd_pglog memory usage incident

2020-10-09 Thread Marc Roos
>1. The pg log contains 3000 entries by default (on nautilus). These >3000 entries can legitimately consume gigabytes of ram for some >use-cases. (I haven't determined exactly which ops triggered this >today). How can I check how much ram my pg_logs are using? -Original
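A minimal way to look at this is the per-OSD mempool dump; the jq path below matches the nautilus-era output layout, so treat it as a sketch and verify against your version:
  ceph daemon osd.0 dump_mempools
  # rough per-host sum of the osd_pglog bucket
  for s in /var/run/ceph/ceph-osd.*.asok; do
      ceph daemon "$s" dump_mempools | jq '.mempool.by_pool.osd_pglog.bytes'
  done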

[ceph-users] Re: el6 / centos6 rpm's for luminous?

2020-10-08 Thread Marc Roos
Ok thanks Dan for letting me know. -Original Message- Cc: ceph-users Subject: Re: [ceph-users] el6 / centos6 rpm's for luminous? We had built some rpms locally for ceph-fuse, but AFAIR luminous needs systemd so the server rpms would be difficult. -- dan > > > Nobody ever used

[ceph-users] el6 / centos6 rpm's for luminous?

2020-10-08 Thread Marc Roos
Nobody ever used luminous on el6?

[ceph-users] Re: Wipe an Octopus install

2020-10-08 Thread Marc Roos
I honestly do not get what the problem is. Just yum remove the rpm's, dd your osd drives, if there is something left in /var/lib/ceph, /etc/ceph, rm -R -f * those. Do a find / -iname "*ceph*" if there is still something there. -Original Message- To: Samuel Taylor Liston Cc:

[ceph-users] Quick/easy access to rbd on el6

2020-10-07 Thread Marc Roos
Normally I would install ceph-common.rpm and access some rbd image via rbdmap. What would be the best way to do this on an old el6? There is not even a luminous el6 on download.ceph.com.

[ceph-users] Re: pool pgp_num not updated

2020-10-06 Thread Marc Roos
pg_num and pgp_num need to be the same, not? 3.5.1. Set the Number of PGs To set the number of placement groups in a pool, you must specify the number of placement groups at the time you create the pool. See Create a Pool for details. Once you set placement groups for a pool, you can increase
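For reference, a minimal sketch of checking and aligning the two values (pool name and count are illustrative); data only starts moving once pgp_num is raised to match pg_num:
  ceph osd pool get mypool pg_num
  ceph osd pool get mypool pgp_num
  ceph osd pool set mypool pgp_num 128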

[ceph-users] Re: Massive Mon DB Size with noout on 14.2.11

2020-10-06 Thread Marc Roos
ve unclean PGs/Pools. Cheers, dan On Fri, Oct 2, 2020 at 4:14 PM Marc Roos wrote: > > > Does this also count if your cluster is not healthy because of errors > like '2 pool(s) have no replicas configured' > I sometimes use thes

[ceph-users] Re: Write access delay after OSD & Mon lost

2020-10-06 Thread Marc Roos
I think I do not understand you completely. How long does a live migration take? If I do virsh migrate with vm's on librbd it is a few seconds. I guess this is mainly caused by copying the ram to the other host. Any extra time this takes in case of a host failure is related to time out

[ceph-users] Re: Massive Mon DB Size with noout on 14.2.11

2020-10-02 Thread Marc Roos
Does this also count if your cluster is not healthy because of errors like '2 pool(s) have no replicas configured' I sometimes use these pools for testing, they are empty. -Original Message- Cc: ceph-users Subject: [ceph-users] Re: Massive Mon DB Size with noout on 14.2.11 As long

[ceph-users] Re: ceph-volume quite buggy compared to ceph-disk

2020-10-02 Thread Marc Roos
If such a 'simple' tool as ceph-volume is not working properly, how can I trust cephadm to be good? Maybe ceph development should rethink trying to pump out new releases quickly, and take a bit more time for testing. I am already sticking to the oldest supported version just because of this.

[ceph-users] Re: Ceph Tech Talk: Karan Singh - Scale Testing Ceph with 10Billion+ Objects

2020-10-01 Thread Marc Roos
video" at the bottom On Thu, Oct 1, 2020 at 1:10 PM Marc Roos wrote: ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] Ceph Tech Talk: Karan Singh - Scale Testing Ceph with 10Billion+ Objects

2020-10-01 Thread Marc Roos
Mike, Can you allow access without mic and cam? Thanks, Marc -Original Message- To: ceph-users@ceph.io Subject: *SPAM* [ceph-users] Ceph Tech Talk: Karan Singh - Scale Testing Ceph with 10Billion+ Objects Hey all, We're live now with the latest Ceph tech talk! Join us:

[ceph-users] bugs ceph-volume scripting

2020-10-01 Thread Marc Roos
I have been creating lvm osd's with: ceph-volume lvm zap --destroy /dev/sdf && ceph-volume lvm create --data /dev/sdf --dmcrypt Because this procedure failed: ceph-volume lvm zap --destroy /dev/sdf (waiting on slow human typing) ceph-volume lvm create --data /dev/sdf --dmcrypt However when

[ceph-users] Re: hdd pg's migrating when converting ssd class osd's

2020-09-30 Thread Marc Roos
I am not sure, but it looks like this remapping at hdd's is not being done when adding back the same ssd osd.

[ceph-users] Re: Understanding what ceph-volume does, with bootstrap-osd/ceph.keyring, tmpfs

2020-09-30 Thread Marc Roos
Thanks! -Original Message- To: Janne Johansson; Marc Roos Cc: ceph-devel; ceph-users Subject: Re: [ceph-users] Re: Understanding what ceph-volume does, with bootstrap-osd/ceph.keyring, tmpfs The key is stored in the ceph cluster config db. It can be retrieved by KEY=`/usr/bin/ceph

[ceph-users] Re: hdd pg's migrating when converting ssd class osd's

2020-09-30 Thread Marc Roos
"op": "emit" } ] } [@ceph]# ceph osd crush rule dump replicated_ruleset_ssd { "rule_id": 5, "rule_name": "replicated_ruleset_ssd", "ruleset": 5, "type": 1, "min_size": 1, "max_

[ceph-users] Re: hdd pg's migrating when converting ssd class osd's

2020-09-30 Thread Marc Roos
this in the release notes or so. [1] https://pastebin.com/PFx0V3S7 -Original Message- To: Eugen Block Cc: Marc Roos; ceph-users Subject: Re: [ceph-users] Re: hdd pg's migrating when converting ssd class osd's This is how my crush tree including shadow hierarchies looks like (a mess :): https

[ceph-users] Re: hdd pg's migrating when converting ssd class osd's

2020-09-29 Thread Marc Roos
Yes correct, hosts have indeed both ssd's and hdd's combined. Is this not more of a bug then? I would assume the goal of using device classes is that you separate these and one does not affect the other, even the host weight of the ssd and hdd class are already available. The algorithm

[ceph-users] Re: hdd pg's migrating when converting ssd class osd's

2020-09-29 Thread Marc Roos
I have practically a default setup. If I do a 'ceph osd crush tree --show-shadow' I have a listing like this[1]. I would assume from the hosts being listed within the default~ssd and default~hdd, they are separate (enough)? [1] root default~ssd host c01~ssd .. .. host c02~ssd ..

[ceph-users] Keep having ceph-volume create fail

2020-09-28 Thread Marc Roos
I have no idea why ceph-volume keeps failing so much. I keep zapping and creating and all of a sudden it works. I am not having pvs or links left in /dev/mapper. I am checking that with lsblk, dmsetup ls --tree and ceph-volume inventory. These are the stdout/err I am having, every time

[ceph-users] Re: How OSD encryption affects latency/iops on NVMe, SSD and HDD

2020-09-28 Thread Marc Roos
I did also some testing, but was more surprised how much cputime kworker and dmcrypt-write(?) instances are taking. Is there some way to get fio output realtime to influx or prometheus so you can view it with load together? -Original Message- From: t...@postix.net

[ceph-users] hdd pg's migrating when converting ssd class osd's

2020-09-27 Thread Marc Roos
I have been converting ssd's osd's to dmcrypt, and I have noticed that pg's of pools are migrated that should be (and are?) on hdd class. On a healthy ok cluster I am getting, when I set the crush reweight to 0.0 of a ssd osd this: 17.35 10415 00 9907

[ceph-users] rebalancing adapted during rebalancing with new updates?

2020-09-26 Thread Marc Roos
When I add an osd, rebalancing is taking place, let's say ceph relocates 40 pg's. When I add another osd during rebalancing, when ceph has only relocated 10 pgs and still has to do 30 pgs. What happens then: 1. Is ceph just finishing the relocation of these 30 pgs and then calculates how the

[ceph-users] Re: NVMe's

2020-09-23 Thread Marc Roos
://www.storagereview.com/review/hgst-4tb-deskstar-nas-hdd-review -Original Message- Subject: Re: [ceph-users] Re: NVMe's On 9/23/20 8:05 AM, Marc Roos wrote: >> I'm curious if you've tried octopus+ yet? > Why don't you publish results of your test cluster? You cannot expect > all new

[ceph-users] Re: NVMe's

2020-09-23 Thread Marc Roos
> I'm curious if you've tried octopus+ yet?  Why don't you publish results of your test cluster? You cannot expect all new users to buy 4 servers with 40 disks, and try if the performance is ok. Get a basic cluster and start publishing results, and document changes to the test cluster.

[ceph-users] switching to ceph-volume requires changing the default lvm.conf?

2020-09-23 Thread Marc Roos
I was wondering if switching to ceph-volume requires me to change the default centos lvm.conf? Eg. The default has issue_discards = 0 Also I wonder if trimming is the default on lvm's on ssds? I read somewhere that the dmcrypt passthrough of trimming was still secure in combination with a

[ceph-users] Re: NVMe's

2020-09-23 Thread Marc Roos
Depends on your expected load not? I already read here numerous of times that osd's can not keep up with nvme's, that is why people put 2 osd's on a single nvme. So on a busy node, you probably run out of cores? (But better verify this with someone that has an nvme cluster ;))

[ceph-users] Re: Vitastor, a fast Ceph-like block storage for VMs

2020-09-23 Thread Marc Roos
Vitaliy you are crazy ;) But really cool work. Why not combine efforts with ceph? Especially with something as important as SDS and PB's of clients data stored on it, everyone with a little bit of brain chooses a solution from a 'reliable' source. For me it was decisive to learn that CERN

[ceph-users] Re: Understanding what ceph-volume does, with bootstrap-osd/ceph.keyring, tmpfs

2020-09-22 Thread Marc Roos
[mailto:respo...@ifastnet.com] Cc: Janne Johansson; Marc Roos; ceph-devel; ceph-users Subject: Re: [ceph-users] Re: Understanding what ceph-volume does, with bootstrap-osd/ceph.keyring, tmpfs Tbh ceph caused us more problems than it tried to fix ymmv good luck > On 22 Sep 2020, at 13:04

[ceph-users] Re: one-liner getting block device from mounted osd

2020-09-22 Thread Marc Roos
ceph device ls -Original Message- To: ceph-users Subject: [ceph-users] one-liner getting block device from mounted osd I have a optimize script that I run after the reboot of a ceph node. It sets among other things /sys/block/sdg/queue/read_ahead_kb and

[ceph-users] one-liner getting block device from mounted osd

2020-09-22 Thread Marc Roos
I have an optimize script that I run after the reboot of a ceph node. It sets among other things /sys/block/sdg/queue/read_ahead_kb and /sys/block/sdg/queue/nr_requests of block devices being used for osd's. Normally I am using the mount command to discover these but with the tmpfs and
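With bluestore/tmpfs one option is to ask ceph-volume itself; a sketch using its JSON output (field names as in the nautilus-era output, jq required, read_ahead value illustrative):
  # map each lvm osd to its physical device
  ceph-volume lvm list --format json | jq -r 'to_entries[] | "osd.\(.key) \(.value[0].devices[0])"'
  # and use that to tune the block layer
  ceph-volume lvm list --format json | jq -r '.[][0].devices[0]' | while read dev; do
      echo 4096 > /sys/block/$(basename "$dev")/queue/read_ahead_kb
  done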

[ceph-users] Re: ceph docs redirect not good

2020-09-22 Thread Marc Roos
also https://docs.ceph.com/docs/mimic/rados/configuration/ceph-conf/ -Original Message- From: Marc Roos Sent: zondag 20 september 2020 15:36 To: ceph-users Subject: [ceph-users] ceph docs redirect not good https://docs.ceph.com/docs/mimic/man/8/ceph-volume-systemd

[ceph-users] Understanding what ceph-volume does, with bootstrap-osd/ceph.keyring, tmpfs

2020-09-21 Thread Marc Roos
When I create a new encrypted osd with ceph volume[1] I assume something like this is being done, please correct what is wrong. - it creates the pv on the block device - it creates the ceph vg on the block device - it creates the osd lv in the vg - it uses cryptsetup to encrypt this lv
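To verify those assumptions on an existing encrypted osd, something like the following can be used (device paths are illustrative placeholders):
  pvs; vgs; lvs -o lv_name,vg_name,lv_tags | grep ceph
  # the dm-crypt layer sitting on top of the osd lv
  lsblk
  cryptsetup luksDump /dev/ceph-<vg-uuid>/osd-block-<osd-fsid>
  # the luks passphrase ceph-volume stored in the mon config-key store
  ceph config-key ls | grep dm-crypt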

[ceph-users] Re: Setting up a small experimental CEPH network

2020-09-21 Thread Marc Roos
I tested something in the past[1] where I could notice that an osd saturated a bond link and did not use the available 2nd one. I think I maybe made a mistake in writing down it was a 1x replicated pool. However it has been written here multiple times that these osd processes are single

[ceph-users] Re: ceph-volume lvm cannot zap???

2020-09-20 Thread Marc Roos
Thanks Oliver, useful checks! -Original Message- To: ceph-users Subject: Re: [ceph-users] ceph-volume lvm cannot zap??? Hi, we have also seen such cases, it seems that sometimes (when the controller / device is broken in special ways), device mapper keeps the volume locked. You
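For the archive, a sketch of the kind of checks meant here when wipefs reports the device busy (mapping and vg names are illustrative):
  lsblk /dev/sdi
  dmsetup ls --tree
  # remove the stale crypt/lv mapping, deactivate the old vg, then zap again
  dmsetup remove ceph--<vg>--osd--block--<uuid>
  vgchange -an ceph-<vg-uuid>
  ceph-volume lvm zap --destroy /dev/sdi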

[ceph-users] ceph docs redirect not good

2020-09-20 Thread Marc Roos
https://docs.ceph.com/docs/mimic/man/8/ceph-volume-systemd/

[ceph-users] Re: Setting up a small experimental CEPH network

2020-09-20 Thread Marc Roos
- pat yourself on the back for choosing ceph, there are a lot of experts (not including me :)) here willing to help (during office hours) - decide what you would like to use ceph for, and how much storage you need. - Running just an osd on a server has not that many implications so you could rethink

[ceph-users] ceph-volume quite buggy compared to ceph-disk

2020-09-19 Thread Marc Roos
[@]# ceph-volume lvm activate 36 82b94115-4dfb-4ed0-8801-def59a432b0a Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-36 Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-36/lockbox.keyring --create-keyring --name

[ceph-users] ceph-volume lvm cannot zap???

2020-09-19 Thread Marc Roos
[@~]# ceph-volume lvm zap /dev/sdi --> Zapping: /dev/sdi --> --destroy was not specified, but zapping a whole device will remove the partition table stderr: wipefs: error: /dev/sdi: probing initialization failed: Device or resource busy --> failed to wipefs device, will try again to

[ceph-users] RuntimeError: Unable check if OSD id exists

2020-09-18 Thread Marc Roos
I still have ceph-disk created osd's in nautilus. I thought about using ceph-volume, but it looks like this manual for replacing ceph-disk[1] is not complete. I am already getting this error RuntimeError: Unable check if OSD id exists: [1]

[ceph-users] Re: Migration to ceph.readthedocs.io underway

2020-09-17 Thread Marc Roos
[mailto:lgrim...@suse.com] Sent: donderdag 17 september 2020 11:04 To: ceph-users; dev Subject: [ceph-users] Re: Migration to ceph.readthedocs.io underway Hi Marc, On 9/16/20 7:30 PM, Marc Roos wrote: > - In the future you will not be able to read the docs if you have an > adblocker(?) C

[ceph-users] Re: Migration to ceph.readthedocs.io underway

2020-09-16 Thread Marc Roos
- In the future you will not be able to read the docs if you have an adblocker(?) -Original Message- To: dev; ceph-users Cc: Kefu Chai Subject: [ceph-users] Migration to ceph.readthedocs.io underway Hi everyone, We are in the process of migrating from docs.ceph.com to

[ceph-users] Re: ceph rbox test on passive compressed pool

2020-09-16 Thread Marc Roos
is not correctly set to do the passive compression. - or the passive compression is not working when this hint is set. Thanks, Marc -Original Message- Cc: ceph-users Subject: Re: [ceph-users] ceph rbox test on passive compressed pool On 09/11 09:36, Marc Roos wrote: > > Hi David, > >

[ceph-users] Re: New pool with SSD OSDs

2020-09-14 Thread Marc Roos
I did the same, 1 or 2 years ago, creating a replicated_ruleset_hdd and replicated_ruleset_ssd. Even though I did not have any ssd's on any of the nodes at that time, adding this hdd type criteria made pg's migrate. I thought it was strange that this happens on a hdd only cluster, so I

[ceph-users] Re: ceph rbox test on passive compressed pool

2020-09-14 Thread Marc Roos
> mail/b875f40571f1545ff43052412a8e mtime 2020-09-06 > 16:25:53.00, > size 63580 > mail/e87c120b19f1545ff43052412a8e mtime 2020-09-06 > 16:24:25.00, > size 525 Hi David, How is this going. To me this looks more like deduplication than compression.

[ceph-users] Re: OSDs and tmpfs

2020-09-11 Thread Marc Roos
I have also these mounts with bluestore /dev/sde1 on /var/lib/ceph/osd/ceph-32 type xfs (rw,relatime,attr2,inode64,noquota) /dev/sdb1 on /var/lib/ceph/osd/ceph-3 type xfs (rw,relatime,attr2,inode64,noquota) /dev/sdc1 on /var/lib/ceph/osd/ceph-6 type xfs (rw,relatime,attr2,inode64,noquota)

[ceph-users] Re: ceph rbox test on passive compressed pool

2020-09-11 Thread Marc Roos
ot;, "size": 3000487051264, "btime": "2017-07-14 14:45:59.212792", "description": "main", "require_osd_release": "14" } } -Original Message- Cc: ceph-users Subject: Re: [ceph-use

[ceph-users] Re: ceph rbox test on passive compressed pool

2020-09-11 Thread Marc Roos
BST, Marc Roos wrote: I have been inserting 10790 exactly the same 64kb text message to a passive compressing enabled pool. I am still counting, but it looks like only half the objects are compressed. mail

[ceph-users] Re: ceph-osd performance on ram disk

2020-09-10 Thread Marc Roos
Hi George, Very interesting and also a bit expected result. Some messages posted here are already indicating that getting expensive top of the line hardware does not really result in any performance increase above some level. Vitaliy has documented something similar[1] [1]

[ceph-users] Spam here still

2020-09-08 Thread Marc Roos
Do know that this is the only mailing list I am subscribed to, that sends me so much spam. Maybe the list admin should finally have a word with other list admins on how they are managing their lists

[ceph-users] Re: ceph rbox test on passive compressed pool

2020-09-06 Thread Marc Roos
Hi David, I suppose it is this part https://github.com/ceph-dovecot/dovecot-ceph-plugin/tree/master/src/storage-rbox -Original Message- To: ceph-users@ceph.io; Subject: Re: [ceph-users] ceph rbox test on passive compressed pool The hints have to be given from the client side as far as

[ceph-users] ceph rbox test on passive compressed pool

2020-09-06 Thread Marc Roos
I have been inserting 10790 exactly the same 64kb text message to a passive compressing enabled pool. I am still counting, but it looks like only half the objects are compressed. mail/b08c3218dbf1545ff43052412a8e mtime 2020-09-06 16:27:39.00, size 63580
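For readers following along, a sketch of how to check the pool settings and the compression counters (pool name is illustrative; the compression columns of 'ceph df detail' depend on the release):
  ceph osd pool get mail compression_mode
  ceph osd pool get mail compression_algorithm
  ceph osd pool get mail compression_required_ratio
  ceph df detail     # look at the USED COMPR / UNDER COMPR columns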

[ceph-users] Re: bug of the year (with compressed omap and lz 1.7(?))

2020-09-05 Thread Marc Roos
ose pool-specific compression settings weren't applied correctly anyway, so I'm not sure they even work yet in 14.2.9. -- dan On Sat, Sep 5, 2020 at 6:12 PM Marc Roos wrote: > > > I am still running 14.2.9 with lz4-1.7.5-3. Will I run into this bug > enabling compression on a pool

[ceph-users] bug of the year (with compressed omap and lz 1.7(?))

2020-09-05 Thread Marc Roos
I am still running 14.2.9 with lz4-1.7.5-3. Will I run into this bug enabling compression on a pool with: ceph osd pool set POOL_NAME compression_algorithm COMPRESSION_ALGORITHM ceph osd pool set POOL_NAME compression_mode COMPRESSION_MODE

[ceph-users] Re: Can 16 server grade ssd's be slower then 60 hdds? (no extra journals)

2020-09-02 Thread Marc Roos
:) this is just native disk performance with a regular sata adapter, nothing fancy; on the ceph hosts I have the SAS2308. -Original Message- Cc: 'ceph-users' Subject: AW: [ceph-users] Re: Can 16 server grade ssd's be slower then 60 hdds? (no extra journals) Wow 34K iops 4k iodepth 1

[ceph-users] Re: Can 16 server grade ssd's be slower then 60 hdds? (no extra journals)

2020-09-01 Thread Marc Roos
write-4k-seq: (groupid=0, jobs=1): err= 0: pid=11017: Tue Sep 1 20:58:43 2020 write: IOPS=34.4k, BW=134MiB/s (141MB/s)(23.6GiB/180001msec) slat (nsec): min=3964, max=124499, avg=4432.71, stdev=911.13 clat (nsec): min=470, max=435529, avg=23528.70, stdev=2553.67 lat (usec):

[ceph-users] Re: Can 16 server grade ssd's be slower then 60 hdds? (no extra journals)

2020-09-01 Thread Marc Roos
Sorry I am not fully aware of what has been already discussed in this thread. But can't you flash these LSI logic cards to jbod? I have done this with my 9207 with sas2flash. I have attached my fio test of the Micron 5100 Pro/5200 SSDs MTFDDAK1T9TCC. They perform similarly to my samsung sm863a

[ceph-users] Re: Is it possible to mount a cephfs within a container?

2020-08-29 Thread Marc Roos
>octopus 15.2.4 > >just as a test, I put my OSDs each inside of a LXD container. Set up >cephFS and mounted it inside a LXD container and it works. Thanks for making such an effort! I am a little bit new to the stateful containers, but I am getting the impression it is mostly by design

[ceph-users] Re: ceph auth ls

2020-08-27 Thread Marc Roos
This is what I mean, this guy is just posting all his keys. https://www.mail-archive.com/ceph-devel@vger.kernel.org/msg26140.html -Original Message- To: ceph-users Subject: [ceph-users] ceph auth ls Am I the only one that thinks it is not necessary to dump these keys with every

[ceph-users] Is it possible to mount a cephfs within a container?

2020-08-27 Thread Marc Roos
I am getting this; on an osd node I am able to mount the path. adding ceph secret key to kernel failed: Operation not permitted
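This error usually means the container is not allowed to add the cephx secret to the kernel keyring or to mount at all; a hedged sketch of what typically has to be granted (runtime flags, names and paths are illustrative, not a verified recipe):
  docker run -it --cap-add SYS_ADMIN --security-opt seccomp=unconfined \
      --device /dev/fuse -v /etc/ceph:/etc/ceph:ro centos:7 bash
  # inside the container: kernel mount with the secret read from a file
  mount -t ceph mon1:6789:/archive /mnt -o name=archive,secretfile=/etc/ceph/archive.secret
  # or fall back to ceph-fuse, which only needs /dev/fuse and SYS_ADMIN
  ceph-fuse -n client.archive -r /archive /mnt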

[ceph-users] ceph auth ls

2020-08-27 Thread Marc Roos
Am I the only one that thinks it is not necessary to dump these keys with every command (ls and get)? Either remove these keys from auth ls and auth get. Or remove the commands "auth print_key" "auth print-key" and "auth get-key"

[ceph-users] Re: radowsgw still needs dedicated clientid?

2020-08-27 Thread Marc Roos
Can someone shed a light on this? Because it is the difference of running multiple instances of one task, or running multiple different tasks. -Original Message- To: ceph-users Subject: [ceph-users] radowsgw still needs dedicated clientid? I think I can remember reading somewhere

[ceph-users] Re: anyone using ceph csi

2020-08-26 Thread Marc Roos
>> >> >> I was wondering if anyone is using ceph csi plugins[1]? I would like to >> know how to configure credentials, that is not really described for >> testing on the console. >> >> I am running >> ./csiceph --endpoint unix:///tmp/mesos-csi-XSJWlY/endpoint.sock --type >> rbd

[ceph-users] anyone using ceph csi

2020-08-26 Thread Marc Roos
I was wondering if anyone is using ceph csi plugins[1]? I would like to know how to configure credentials, that is not really described for testing on the console. I am running ./csiceph --endpoint unix:///tmp/mesos-csi-XSJWlY/endpoint.sock --type rbd --drivername rbd.csi.ceph.com

[ceph-users] Re: Bonus Ceph Tech Talk: Edge Application - Stream Multiple Video Sources

2020-08-20 Thread Marc Roos
Can't join as guest without enabling mic and/or camera??? -Original Message- From: Mike Perez [mailto:mipe...@redhat.com] Sent: donderdag 20 augustus 2020 19:03 To: ceph-users@ceph.io Subject: [ceph-users] Re: Bonus Ceph Tech Talk: Edge Application - Stream Multiple Video Sources And

[ceph-users] luks / disk encryption best practice

2020-08-20 Thread Marc Roos
I still need to move from ceph disk to ceph volume. When doing this, I wanted to also start using disk encryption. I am not really interested in encryption offered by the hdd vendors. Is there a best practice or advice what encryption to use ciphers/hash? Stick to the default of CentOS7 or

[ceph-users] Re: does ceph rgw has any option to limit bandwidth

2020-08-19 Thread Marc Roos
Subject: [ceph-users] Re: does ceph rgw has any option to limit bandwidth I wanna limit the traffic of specific buckets. Can haproxy, nginx or any other proxy software deal with it? Janne Johansson wrote on Wed, 19 Aug 2020 at 16:32: > Apart from Marc Roos' reply, it seems like something that co

[ceph-users] Re: does ceph rgw has any option to limit bandwidth

2020-08-19 Thread Marc Roos
You cannot set that much in radosgw, for that you have to use eg haproxy https://docs.ceph.com/docs/master/radosgw/config-ref/ -Original Message- From: Zhenshi Zhou [mailto:deader...@gmail.com] Sent: woensdag 19 augustus 2020 10:16 To: ceph-users Subject: [ceph-users] does ceph
