Re: [ceph-users] Default min_size value for EC pools

2019-05-19 Thread Marc Roos
https://ceph.com/community/new-luminous-erasure-coding-rbd-cephfs/ -Original Message- From: Florent B [mailto:flor...@coppint.com] Sent: Sunday 19 May 2019 12:06 To: Paul Emmerich Cc: Ceph Users Subject: Re: [ceph-users] Default min_size value for EC pools Thank you Paul for your ans

Re: [ceph-users] Poor performance for 512b aligned "partial" writes from Windows guests in OpenStack + potential fix

2019-05-16 Thread Marc Roos
Hmmm, looks like diskpart is off; it reports the same about a volume for which fsutil fsinfo ntfsinfo c: reports 512 (in this case correct, because it is on an ssd). Anyone know how to use fsutil with a path-mounted disk (without drive letter)? -Original Message- From: Marc Roos Sent

Re: [ceph-users] Poor performance for 512b aligned "partial" writes from Windows guests in OpenStack + potential fix

2019-05-16 Thread Marc Roos
I am not sure if it is possible to run fsutil on disk without drive letter, but mounted on path. So I used: diskpart select volume 3 Filesystems And gives me this: Current File System Type : NTFS Allocation Unit Size : 4096 Flags : File Systems Supported for F

Re: [ceph-users] Huge rebalance after rebooting OSD host (Mimic)

2019-05-15 Thread Marc Roos
Are you sure your osd's are up and reachable? (run ceph osd tree on another node) -Original Message- From: Jan Kasprzak [mailto:k...@fi.muni.cz] Sent: Wednesday 15 May 2019 14:46 To: ceph-us...@ceph.com Subject: [ceph-users] Huge rebalance after rebooting OSD host (Mimic) Hello

Re: [ceph-users] Slow requests from bluestore osds

2019-05-12 Thread Marc Schöchlin
"rbd-rbd" or the cluster itself to improve the situation? Best regards Marc Am 13.05.19 um 07:40 schrieb EDH - Manuel Rios Fernandez: > Hi Marc, > > Try to compact OSD with slow request > > ceph tell osd.[ID] compact > > This will make the OSD offline for s

Re: [ceph-users] Slow requests from bluestore osds

2019-05-12 Thread Marc Schöchlin
New memtable created with log file: #422511. Immutable memtables: 0. Any hints how to find more details about the origin of this problem? How can we solve that? Regards Marc Am 28.01.19 um 22:27 schrieb Marc Schöchlin: > Hello cephers, > > as described - we also have the slow reques

Re: [ceph-users] Poor performance for 512b aligned "partial" writes from Windows guests in OpenStack + potential fix

2019-05-10 Thread Marc Roos
Hmmm, so if I have (wd) drives that list this in smartctl output, I should try and reformat them to 4k, which will give me better performance? Sector Sizes: 512 bytes logical, 4096 bytes physical Do you have a link to this download? Can only find some .cz site with the rpms. -Orig

Re: [ceph-users] maximum rebuild speed for erasure coding pool

2019-05-09 Thread Marc Roos
> Fancy fast WAL/DB/Journals probably help a lot here, since they do affect the "iops" > you experience from your spin-drive OSDs. What difference can be expected if you have a 100 iops hdd and you start using wal/db/journals on ssd? What would this 100 iops increase to (estimating)? --

Re: [ceph-users] EPEL packages issue

2019-05-07 Thread Marc Roos
sdag 7 May 2019 16:18 To: Marc Roos; 'ceph-users' Subject: RE: [ceph-users] EPEL packages issue Hello, I already did this step and have the packages in the local repository, but it still asks for the EPEL repository. Regards. mohammad almodallal -Original Message- From: Marc Roos

Re: [ceph-users] EPEL packages issue

2019-05-07 Thread Marc Roos
I have the same situation, where the servers do not have internet connection and use my own repository servers. I am just rsyncing the rpms to my custom repository like this, works like a charm. rsync -qrlptDvSHP --delete --exclude config.repo --exclude "local*" --exclude "aarch64" --exclu
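
A sketch of the mirror-and-serve workflow; the upstream rsync URL, the local path, and the createrepo step are assumptions, not taken from the original message:
  rsync -qrlptDvSHP --delete --exclude config.repo --exclude "local*" --exclude "aarch64" \
      rsync://mirror.example.com/ceph/rpm-luminous/el7/ /var/www/repos/ceph/
  createrepo --update /var/www/repos/ceph/     # rebuild metadata so yum on the isolated hosts can use it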

Re: [ceph-users] Ceph OSD fails to start : direct_read_unaligned error No data available

2019-05-06 Thread Marc Roos
The reason why you moved to ceph storage is that you do not want to do such things. Remove the drive, and let ceph recover. On May 6, 2019 11:06 PM, Florent B wrote: > > Hi, > > It seems that OSD disk is dead (hardware problem), badblocks command > returns a lot of badblocks. > > Is there an

Re: [ceph-users] rbd ssd pool for (windows) vms

2019-05-06 Thread Marc Roos
:34 To: Marc Roos Cc: ceph-users Subject: Re: [ceph-users] rbd ssd pool for (windows) vms On Wed, May 1, 2019 at 5:00 PM Marc Roos wrote: > > > Do you need to tell the vm's that they are on a ssd rbd pool? Or does > ceph and the libvirt drivers do this automatically for you?

Re: [ceph-users] rbd ssd pool for (windows) vms

2019-05-06 Thread Marc Roos
at 23:00, Marc Roos wrote: Do you need to tell the vm's that they are on a ssd rbd pool? Or does ceph and the libvirt drivers do this automatically for you? When testing a nutanix acropolis virtual install, I had to 'cheat' it by adding this

Re: [ceph-users] co-located cephfs client deadlock

2019-05-02 Thread Marc Roos
How did you retrieve which osd nr to restart? Just for future reference, for when I run into a similar situation: if you have a client hang on an osd node, can this be resolved by restarting the osd that it is reading from? -Original Message- From: Dan van der Ster [mailto:d...@vanderste

Re: [ceph-users] hardware requirements for metadata server

2019-05-02 Thread Marc Roos
I have only 366M meta data stored in an ssd pool, with 16TB (10 million objects) of filesystem data (hdd pools). The active mds is using 13GB memory. Some stats of the active mds server [@c01 ~]# ceph daemonperf mds.a ---mds --mds_cache--- --mds_log-- -mds_

[ceph-users] rbd ssd pool for (windows) vms

2019-05-01 Thread Marc Roos
Do you need to tell the vm's that they are on a ssd rbd pool? Or does ceph and the libvirt drivers do this automatically for you? When testing a nutanix acropolis virtual install, I had to 'cheat' it by adding this To make the installer think there was a ssd drive. I only have 'Thin provis

Re: [ceph-users] Ceph inside Docker containers inside VirtualBox

2019-04-23 Thread Marc Roos
I am not sure about your background knowledge of ceph, but if you are starting: maybe first try and get ceph working in a virtual environment, that should not be too much of a problem. Then try migrating it to your container. Now you are probably fighting too many issues at the same time.

Re: [ceph-users] Osd update from 12.2.11 to 12.2.12

2019-04-23 Thread Marc Roos
.x] public addr = 192.168.10.x cluster addr = 10.0.0.x -Original Message- From: David Turner [mailto:drakonst...@gmail.com] Sent: 22 April 2019 22:34 To: Marc Roos Cc: ceph-users Subject: Re: [ceph-users] Osd update from 12.2.11 to 12.2.12 Do you perhaps have anything in the ceph.conf files

Re: [ceph-users] Are there any statistics available on how most production ceph clusters are being used?

2019-04-21 Thread Marc Roos
Double thanks for the on-topic reply. The other two responses were making me doubt whether my Chinese (which I didn't study) is better than my English. >> I am a bit curious on how production ceph clusters are being used. I am >> reading here that the block storage is used a lot with openstack

[ceph-users] Osd update from 12.2.11 to 12.2.12

2019-04-21 Thread Marc Roos
Just updated luminous, and am setting the max_scrubs value back. Why do the osd's report differently? I get these: osd.18: osd_max_scrubs = '1' (not observed, change may require restart) osd_objectstore = 'bluestore' (not observed, change may require restart) rocksdb_separate_wal_dir = 'false
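
The '(not observed, change may require restart)' note only means the running daemon did not pick the value up live. A quick way to see what an OSD is actually using (run on the host where osd.18 lives):
  ceph daemon osd.18 config get osd_max_scrubs
  # or via the admin socket directly
  ceph --admin-daemon /var/run/ceph/ceph-osd.18.asok config get osd_max_scrubs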

[ceph-users] Are there any statistics available on how most production ceph clusters are being used?

2019-04-19 Thread Marc Roos
I am a bit curious on how production ceph clusters are being used. I am reading here that the block storage is used a lot with openstack and proxmox, and via iscsi with vmware. But since nobody here is interested in a better rgw client for end users, I am wondering if the rgw is even being u

[ceph-users] rgw windows/mac clients shitty, develop a new one?

2019-04-18 Thread Marc Roos
I have been looking a bit at the s3 clients available to be used, and I think they are quite shitty, especially this Cyberduck that processes files with default reading rights to everyone. I am in the process of advising clients to use, for instance, this Mountain Duck. But I am not too happy abou

Re: [ceph-users] Topology query

2019-04-11 Thread Marc Roos
AFAIK with cephfs on osd nodes you at least risk this 'kernel deadlock'? I have it also, but with enough memory. Search the mailing list for this. I am looking at a similar setup, but with mesos, and struggling with some cni plugin we have to develop. -Original Message- From: Bob Farrell [ma

Re: [ceph-users] VM management setup

2019-04-06 Thread Marc Roos
We also have a hybrid ceph/libvirt-kvm setup, using some scripts to do live migration; do you have auto failover in your setup? -Original Message- From: jes...@krogh.cc [mailto:jes...@krogh.cc] Sent: 05 April 2019 21:34 To: ceph-users Subject: [ceph-users] VM management setup Hi. Know

[ceph-users] Recommended fs to use with rbd

2019-03-29 Thread Marc Roos
I would like to use an rbd image from a replicated hdd pool in a libvirt/kvm vm. 1. What is the best filesystem to use with rbd, just standard xfs? 2. Is there a recommended tuning for lvm on how to put multiple rbd images? ___ ceph-users mailing
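
For reference, a minimal sketch of putting xfs on an rbd image from a replicated hdd pool via krbd (pool, image and mountpoint names are placeholders):
  rbd create hdd/vm-data --size 100G
  rbd map hdd/vm-data            # returns e.g. /dev/rbd0
  mkfs.xfs /dev/rbd0
  mount -o noatime /dev/rbd0 /mnt/vm-data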

Re: [ceph-users] Bluestore WAL/DB decisions

2019-03-29 Thread Marc Roos
will get from moving the db/wal to ssd with spinners. So if you are able to, please publish some test results of the same environment from before and after your change. Thanks, Marc -Original Message- From: Erik McCormick [mailto:emccorm...@cirrusseven.com] Sent: 29 March 2019 06:2

Re: [ceph-users] SSD Recovery Settings

2019-03-21 Thread Marc Roos
Beware of starting to use this influx. I have an 80GB db, and I regret using it. I have to move now to storing data in graphite. Collectd also has a plugin for that. Influx cannot downsample properly when having tags I think (still waiting for a response to this [0]) What I have understood is that wit

Re: [ceph-users] Ceph Nautilus for Ubuntu Cosmic?

2019-03-18 Thread Marc Roos
If you want the excitement, can I then wish you my possible future ceph cluster problems, so I won't have them ;) -Original Message- From: John Hearns Sent: 18 March 2019 17:00 To: ceph-users Subject: [ceph-users] Ceph Nautilus for Ubuntu Cosmic? May I ask if there is a repository

Re: [ceph-users] How to lower log verbosity

2019-03-17 Thread Marc Roos
I am not sure if it is any help but this is getting you some debug settings ceph daemon osd.0 config show| grep debug | grep "[0-9]/[0-9]" And eg. with such a loop you can set them to 0/0 logarr[debug_compressor]="1/5" logarr[debug_bluestore]="1/5" logarr[debug_bluefs]="1/5" logarr[debug_bdev
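
A sketch of such a loop, assuming the logarr array holds the debug option names (the values only seed the array; the loop injects 0/0 into all OSDs at runtime):
  declare -A logarr
  logarr[debug_compressor]="1/5"
  logarr[debug_bluestore]="1/5"
  logarr[debug_bluefs]="1/5"
  logarr[debug_bdev]="1/3"
  for opt in "${!logarr[@]}"; do
      ceph tell osd.\* injectargs "--${opt}=0/0"
  done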

[ceph-users] Cephfs error

2019-03-17 Thread Marc Roos
2019-03-17 21:59:58.296394 7f97cbbe6700 0 -- 192.168.10.203:6800/1614422834 >> 192.168.10.43:0/1827964483 conn(0x55ba9614d000 :6800 s=STATE_OPEN pgs=8 cs=1 l=0).fault server, going to standby What does this mean? ___ ceph-users mailing list cep

Re: [ceph-users] mount cephfs on ceph servers

2019-03-07 Thread Marc Roos
Container = same kernel, problem is with processes using the same kernel. -Original Message- From: Daniele Riccucci [mailto:devs...@posteo.net] Sent: 07 March 2019 00:18 To: ceph-users@lists.ceph.com Subject: Re: [ceph-users] mount cephfs on ceph servers Hello, is the deadlock

Re: [ceph-users] Ceph cluster on AMD based system.

2019-03-05 Thread Marc Roos
I see indeed lately people writing about putting 2 osds on a nvme, but does this not undermine the idea of having 3 copies on different osds/drives? In theory you could lose 2 copies when one disk fails??? -Original Message- From: Darius Kasparaviius [mailto:daz...@gmail.com] Sent:

Re: [ceph-users] collectd problems with pools

2019-02-28 Thread Marc Roos
Should you not be pasting that as an issue on github collectd-ceph? I hope you don't mind me asking, I am also using collectd and dumping the data to influx. Are you downsampling with influx? ( I am not :/ [0]) [0] https://community.influxdata.com/t/how-does-grouping-work-does-it-work/7936

Re: [ceph-users] Fwd: Re: Blocked ops after change from filestore on HDD to bluestore on SDD

2019-02-28 Thread Marc Roos
aredmemory settings when rebooting (centos7), but maintainers of slapd said that I can ignore that. Don't have any problems since using this also. -Original Message- From: Uwe Sauter [mailto:uwe.sauter...@gmail.com] Sent: 28 February 2019 14:34 To: Marc Roos; ceph-users; vitalif Subject:

Re: [ceph-users] Fwd: Re: Blocked ops after change from filestore on HDD to bluestore on SDD

2019-02-28 Thread Marc Roos
I have quite a few openldap servers (slaves) running also; make sure to use proper caching, that saves a lot of disk io. -Original Message- Sent: 28 February 2019 13:56 To: uwe.sauter...@gmail.com; Uwe Sauter; Ceph Users Subject: *SPAM* Re: [ceph-users] Fwd: Re: Blocked

Re: [ceph-users] RBD poor performance

2019-02-27 Thread Marc Roos
At some point I would expect the cpu to be the bottleneck. They have always been saying here that for better latency you should get fast cpu's. Would be nice to know what GHz you are testing at, and how that scales. Rep 1-3, erasure probably also takes a hit. How do you test the maximum iops of the osd? (Just c

Re: [ceph-users] rbd space usage

2019-02-27 Thread Marc Roos
They are 'thin provisioned' meaning if you create a 10GB rbd, it does not use 10GB at the start. (afaik) -Original Message- From: solarflow99 [mailto:solarflo...@gmail.com] Sent: 27 February 2019 22:55 To: Ceph Users Subject: [ceph-users] rbd space usage using ceph df it looks as if

Re: [ceph-users] faster switch to another mds

2019-02-26 Thread Marc Roos
My two cents: with a default luminous cluster, 4 nodes, 2x mds, taking 21 seconds to respond?? Is that not a bit long for a 4 node, 2x mds cluster? After flushing caches and doing [@c03 sbin]# ceph mds fail c failed mds gid 3464231 [@c04 5]# time ls -l total 2 ... real 0m21.891s user 0m0.002s

Re: [ceph-users] Configuration about using nvme SSD

2019-02-25 Thread Marc Roos
How do you test what total 4Kb random write iops (RBD) you have? -Original Message- From: Vitaliy Filippov [mailto:vita...@yourcmc.ru] Sent: 24 February 2019 17:39 To: David Turner Cc: ceph-users; 韦皓诚 Subject: *SPAM* Re: [ceph-users] Configuration about using nvme SSD I've
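
One way to measure it, a sketch using fio's rbd engine (pool, image and client names are placeholders; the image must exist beforehand):
  fio --name=4k-randwrite --ioengine=rbd --clientname=admin --pool=rbd \
      --rbdname=bench-img --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 \
      --direct=1 --runtime=60 --time_based --group_reporting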

Re: [ceph-users] Replicating CephFS between clusters

2019-02-19 Thread Marc Roos
>> >> >> >> > >> >I'm not saying CephFS snapshots are 100% stable, but for certain >> >use-cases they can be. >> > >> >Try to avoid: >> > >> >- Multiple CephFS in same cluster >> >- Snapshot the root (/) >> >- Having a lot of snapshots >> >> How many is a lot? Having a l

Re: [ceph-users] Replicating CephFS between clusters

2019-02-19 Thread Marc Roos
>> > >I'm not saying CephFS snapshots are 100% stable, but for certain >use-cases they can be. > >Try to avoid: > >- Multiple CephFS in same cluster >- Snapshot the root (/) >- Having a lot of snapshots How many is a lot? Having a lot of snapshots in total? Or having a lot of snapsho

Re: [ceph-users] Migrating a baremetal Ceph cluster into K8s + Rook

2019-02-18 Thread Marc Roos
Why not just keep it bare metal? Especially with future ceph upgrading/testing. I am having centos7 with luminous and am running libvirt on the nodes as well. If you configure them with a tls/ssl connection, you can even nicely migrate a vm from one host/ceph node to the other. Next thing I

[ceph-users] Ceph auth caps 'create rbd image' permission

2019-02-16 Thread Marc Roos
Currently I am using 'profile rbd' on mon and osd. Is it possible with the caps to allow a user to - list rbd images - get state of images - write/read to images etc., but not allow it to create new images? ___ ceph-users mailing list c

Re: [ceph-users] Online disk resize with Qemu/KVM and Ceph

2019-02-15 Thread Marc Roos
Use scsi disk and virtio adapter? I think that is recommended also for use with ceph rbd. -Original Message- From: Gesiel Galvão Bernardes [mailto:gesiel.bernar...@gmail.com] Sent: 15 February 2019 13:16 To: Marc Roos Cc: ceph-users Subject: Re: [ceph-users] Online disk resize with

Re: [ceph-users] Online disk resize with Qemu/KVM and Ceph

2019-02-15 Thread Marc Roos
And then in the windows vm cmd diskpart Rescan Linux vm echo 1 > /sys/class/scsi_device/2\:0\:0\:0/device/rescan (sda) echo 1 > /sys/class/scsi_device/2\:0\:3\:0/device/rescan (sdd) I have this too, I have to do this too: virsh qemu-monitor-command vps-test2 --hmp "info block" virsh qemu-mon
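
Putting the pieces of this thread together, a sketch of a full online grow (image, domain, drive and device names are placeholders):
  rbd resize rbd/vps-test2-disk --size 12G                                    # grow on the ceph side
  virsh qemu-monitor-command vps-test2 --hmp "info block"                     # find the drive name
  virsh qemu-monitor-command vps-test2 --hmp "block_resize drive-scsi0-0-0-0 12G"
  echo 1 > /sys/class/scsi_device/2\:0\:0\:0/device/rescan                    # inside the linux guest
  xfs_growfs /                                                                # or resize2fs for ext4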

Re: [ceph-users] Online disk resize with Qemu/KVM and Ceph

2019-02-15 Thread Marc Roos
I have this too, I have to do this too: virsh qemu-monitor-command vps-test2 --hmp "info block" virsh qemu-monitor-command vps-test2 --hmp "block_resize drive-scsi0-0-0-0 12G" -Original Message- From: Gesiel Galvão Bernardes [mailto:gesiel.bernar...@gmail.com] Sent: 15 February 2019

Re: [ceph-users] systemd/rbdmap.service

2019-02-13 Thread Marc Roos
Maybe _netdev? /dev/rbd/rbd/influxdb /var/lib/influxdb xfs _netdev 0 0 To be honest I can remember having something similar long time ago, but just tested it on centos7, and have no problems with this. -Original Message- From: Clausen, Jörn [mailto:jclau...@geomar.de]
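
A sketch of the two pieces involved when the rbdmap service maps the image at boot (client name, keyring path and mountpoint are placeholders):
  # /etc/ceph/rbdmap
  rbd/influxdb  id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
  # /etc/fstab -- _netdev postpones the mount until the network (and rbdmap) is up
  /dev/rbd/rbd/influxdb  /var/lib/influxdb  xfs  noatime,_netdev  0 0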

Re: [ceph-users] Downsizing a cephfs pool

2019-02-08 Thread Marc Roos
Yes that is thus a partial move, not the behaviour you expect from a mv command. (I think this should be changed) -Original Message- From: Burkhard Linke [mailto:burkhard.li...@computational.bio.uni-giessen.de] Sent: 08 February 2019 11:27 To: ceph-users@lists.ceph.com Subject: Re:

Re: [ceph-users] Downsizing a cephfs pool

2019-02-08 Thread Marc Roos
erent pool: setfattr -n ceph.dir.layout.pool -v fs_data.ec21 folder getfattr -n ceph.dir.layout.pool folder -Original Message- From: Brian Topping [mailto:brian.topp...@gmail.com] Sent: 08 February 2019 10:02 To: Marc Roos Cc: ceph-users Subject: Re: [ceph-users] Downsizing a cephfs poo
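
Worth noting: the directory layout only applies to files created after it is set, so existing data has to be rewritten to land in the new pool. A sketch using the commands above:
  setfattr -n ceph.dir.layout.pool -v fs_data.ec21 folder
  getfattr -n ceph.dir.layout.pool folder
  cp -a folder folder.new && rm -rf folder && mv folder.new folder   # rewrite existing files into the new pool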

Re: [ceph-users] Downsizing a cephfs pool

2019-02-08 Thread Marc Roos
There is a setting to set the max pg per osd. I would set that temporarily so you can work, create a new pool with 8 pg's and move the data over to the new pool, remove the old pool, then unset this max pg per osd. PS. I always create pools starting with 8 pg's, and when I know I am at what I wan
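
A sketch of that sequence on luminous (pool names and the ratio are placeholders; rados cppool only copies plain objects and is not suitable for pools with self-managed snapshots):
  ceph tell mon.\* injectargs '--mon_max_pg_per_osd=400'    # temporary headroom
  ceph osd pool create newpool 8 8
  rados cppool oldpool newpool
  ceph osd pool delete oldpool oldpool --yes-i-really-really-mean-it   # needs mon_allow_pool_delete=true
  ceph tell mon.\* injectargs '--mon_max_pg_per_osd=200'    # back to the default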

[ceph-users] Cephfs strays increasing and using hardlinks

2019-02-07 Thread Marc Roos
I read here [0] that to get strays removed, you have to 'touch' them or 'getattr on all the remote links'. Is this still necessary in luminous 12.2.11? Or is there meanwhile a manual option to force purging of strays? [@~]# ceph daemon mds.c perf dump | grep strays "num_strays": 7474

Re: [ceph-users] I get weird ls pool detail output 12.2.11

2019-02-07 Thread Marc Roos
> >> >> >> Hmmm, I am having a daily cron job creating these only on maybe 100 >> directories. I am removing the snapshot if it exists with a rmdir. >> Should I do this differently? Maybe eg use snap-20190101, snap-20190102, >> snap-20190103 then I will always create unique directories

Re: [ceph-users] CephFS overwrite/truncate performance hit

2019-02-07 Thread Marc Roos
Is this difference not related to caching? And are you filling up some cache/queue at some point? If you do a sync after each write, do you still have the same results? -Original Message- From: Hector Martin [mailto:hec...@marcansoft.com] Sent: 07 February 2019 06:51 To: ceph-users@li

Re: [ceph-users] I get weird ls pool detail output 12.2.11

2019-02-07 Thread Marc Roos
12.2.11 On 07/02/2019 18:17, Marc Roos wrote: > 250~1,2252~1,2254~1,2256~1,2258~1,225a~1,225c~1,225e~1,2260~1,2262~1,2 > 26 > 4~1,2266~1,2268~1,226a~1,226c~1,226e~1,2270~1,2272~1,2274~1,2276~1,227 > 8~ > 1,227a~1,227c~1,227e~1,2280~1,2282~1,2284~1,2286~1,2288~1,228a~1,228c~ >

Re: [ceph-users] rados block on SSD - performance - how to tune and get insight?

2019-02-07 Thread Marc Roos
4x nodes, around 100GB, 2x 2660, 10Gbit, 2x LSI Logic SAS2308 Thanks for the confirmation Marc Can you put in a bit more hardware/network details? Jesper ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com

Re: [ceph-users] rados block on SSD - performance - how to tune and get insight?

2019-02-07 Thread Marc Roos
I did your rados bench test on our sm863a pool 3x rep, got similar results. [@]# rados bench -p fs_data.ssd -b 4096 10 write --no-cleanup hints = 1 Maintaining 16 concurrent writes of 4096 bytes to objects of size 4096 for up to 10 seconds or 0 objects Object prefix: benchmark_data_c04_1337712
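
For completeness, the matching read passes and the cleanup of the benchmark objects (the write must use --no-cleanup, as above):
  rados bench -p fs_data.ssd 10 seq      # sequential reads
  rados bench -p fs_data.ssd 10 rand     # random reads
  rados -p fs_data.ssd cleanup           # remove the benchmark_data_* objects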

Re: [ceph-users] I get weird ls pool detail output 12.2.11

2019-02-07 Thread Marc Roos
Also on pools that are empty, looks like on all cephfs data pools. pool 55 'fs_data.ec21.ssd' erasure size 3 min_size 3 crush_rule 6 object_hash rjenkins pg_num 8 pgp_num 8 last_change 29032 flags hashpspool,ec_overwrites stripe_width 8192 application cephfs removed_snaps [57f~1,583~

[ceph-users] I get weird ls pool detail output 12.2.11

2019-02-07 Thread Marc Roos
ceph osd pool ls detail pool 20 'fs_data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 29032 flags hashpspool stripe_width 0 application cephfs removed_snaps [3~1,5~31,37~768,7a0~3,7a4~b10,12b5~3,12b9~3,12bd~22c,14ea~22e,1719~b04, 2

Re: [ceph-users] Multicast communication compuverde

2019-02-06 Thread Marc Roos
bable that many clients would ask for exactly the same data in the same order, so it would just mean all clients hear all traffic (or at least more traffic than they asked for) and need to skip past a lot of it. On Tue 5 Feb 2019 at 22:07, Marc Roos wrote: I am still testing with

[ceph-users] Multicast communication compuverde

2019-02-05 Thread Marc Roos
I am still testing with ceph mostly, so my apologies for bringing up something totally useless. But I just had a chat about compuverde storage. They seem to implement multicast in a scale out solution. I was wondering if there is any experience here with compuverde and how it compared to ce

[ceph-users] Luminous 12.2.10 update send to 12.2.11

2019-02-05 Thread Marc Roos
Has some protocol or so changed? I am resizing an rbd device on a luminous 12.2.10 cluster and a 12.2.11 client does not respond (all centos7) 2019-02-05 09:46:27.336885 7f9227fff700 -1 librbd::Operations: update notification timed-out ___ ceph

Re: [ceph-users] CephFS performance vs. underlying storage

2019-01-30 Thread Marc Roos
I was wondering the same, from a 'default' setup I get this performance, no idea if this is bad, good or normal. 4k r ran. 4k w ran. 4k r seq. 4k w seq. 1024k r ran. 1024k w ran. 1024k r seq. 1024k w seq. size lat iops kB/s lat iops kB/s lat iops

Re: [ceph-users] Slow requests from bluestore osds

2019-01-28 Thread Marc Schöchlin
enhancement is in progress to get more iops) What can i do to decrease the impact of snaptrims to prevent slow requests? (i.e. reduce "osd max trimming pgs" to "1") Regards Marc Schöchlin Am 03.09.18 um 10:13 schrieb Marc Schöchlin: > Hi, > > we are also experiencing t

[ceph-users] cephfs constantly strays ( num_strays)

2019-01-27 Thread Marc Roos
I have constantly strays. What are strays? Why do I have them? Is this bad? [@~]# ceph daemon mds.c perf dump| grep num_stray "num_strays": 25823, "num_strays_delayed": 0, "num_strays_enqueuing": 0, [@~]# ___ ceph-users maili

[ceph-users] Bug in application of bucket policy s3:PutObject?

2019-01-27 Thread Marc Roos
If I want a user to only be able to put objects, and not download or delete them, I have to apply a secondary statement denying the GetObject. Yet I did not specify the GetObject. This works { "Sid": "put-only-objects-s2", "Effect": "Deny", "Principal": { "AWS": [ "arn:aws:iam::Co
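
A sketch of a complete put-only policy of that shape (the account, user and bucket names follow the fragment above and are placeholders), applied with s3cmd:
  {
    "Version": "2012-10-17",
    "Statement": [
      { "Sid": "put-only-objects-s1",
        "Effect": "Allow",
        "Principal": { "AWS": [ "arn:aws:iam::Company:user/testuser" ] },
        "Action": [ "s3:PutObject" ],
        "Resource": [ "arn:aws:s3:::archive/*" ] },
      { "Sid": "put-only-objects-s2",
        "Effect": "Deny",
        "Principal": { "AWS": [ "arn:aws:iam::Company:user/testuser" ] },
        "Action": [ "s3:GetObject", "s3:DeleteObject" ],
        "Resource": [ "arn:aws:s3:::archive/*" ] }
    ]
  }
  s3cmd setpolicy put-only.json s3://archive    # with the JSON above saved as put-only.json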

Re: [ceph-users] Radosgw s3 subuser permissions

2019-01-27 Thread Marc Roos
The Exoteric Order of the Squid Cybernetic Subject: Re: [ceph-users] Radosgw s3 subuser permissions On 24/01/2019, Marc Roos wrote: > > > This should do it sort of. > > { > "Id": "Policy1548367105316", > "Version": "2012-10-17", &

Re: [ceph-users] ceph osd commit latency increase over time, until restart

2019-01-27 Thread Marc Roos
Hi Alexandre, I was curious if I had a similar issue, what value are you monitoring? I have quite a lot to choose from. Bluestore.commitLat Bluestore.kvLat Bluestore.readLat Bluestore.readOnodeMetaLat Bluestore.readWaitAioLat Bluestore.stateAioWaitLat Bluestore.stateDoneLat Bluestore.stateI

[ceph-users] Ceph rbd.ko compatibility

2019-01-27 Thread Marc Schöchlin
l.org/pub/scm/linux/kernel/git/stable/linux.git/tree/Documentation/ABI/testing/sysfs-bus-rbd?h=v4.20.5 do not provide any significant information regarding the described questions. Did i missed important information resources? Best regards Marc _

Re: [ceph-users] Bucket logging howto

2019-01-26 Thread Marc Roos
From the owner account of the bucket I am trying to enable logging, but I don't get how this should work. I see the s3:PutBucketLogging is supported, so I guess this should work. How do you enable it? And how do you access the log? [@ ~]$ s3cmd -c .s3cfg accesslog s3://archive Access logg

[ceph-users] Bucket logging howto

2019-01-26 Thread Marc Roos
From the owner account of the bucket I am trying to enable logging, but I don't get how this should work. I see the s3:PutBucketLogging is supported, so I guess this should work. How do you enable it? And how do you access the log? [@ ~]$ s3cmd -c .s3cfg accesslog s3://archive Access loggi

Re: [ceph-users] Radosgw s3 subuser permissions

2019-01-24 Thread Marc Roos
"Action": [ "s3:GetObject", "s3:PutObject", "s3:ListBucket" ], "Principal": { "AWS": "arn:aws:iam::Company:user/testuser" }, "Resource": "arn:aws:s3:::archive/folder2/*"

[ceph-users] Radosgw s3 subuser permissions

2019-01-24 Thread Marc Roos
Is it correct that it is NOT possible for s3 subusers to have different permissions on folders created by the parent account? Thus the --access=[ read | write | readwrite | full ] is for everything the parent has created, and it is not possible to change that for specific folders/buckets? rado

Re: [ceph-users] create osd failed due to cephx authentication

2019-01-24 Thread Marc Roos
ceph osd create ceph osd rm osd.15 sudo -u ceph mkdir /var/lib/ceph/osd/ceph-15 ceph-disk prepare --bluestore --zap-disk /dev/sdc (bluestore) blkid /dev/sdb1 echo "UUID=a300d511-8874-4655-b296-acf489d5cbc8 /var/lib/ceph/osd/ceph-15 xfs defaults 0 0" >> /etc/fstab mount /var/li

Re: [ceph-users] Process stuck in D+ on cephfs mount

2019-01-23 Thread Marc Roos
Are there any others I need to grab, so I can do them all at once? I do not like having to restart this one so often. > > Yes sort of. I do have an inconsistent pg for a while, but it is on a > different pool. But I take it this is related to a networking issue I > currently have with rsync and bro

[ceph-users] Cephfs snapshot create date

2019-01-23 Thread Marc Roos
How can I get the snapshot create date on cephfs? When I do an ls on the .snap dir it gives me the date of the snapshot's source, not the snapshot create date. ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Process stuck in D+ on cephfs mount

2019-01-23 Thread Marc Roos
Yes, sort of. I do have an inconsistent pg for a while, but it is on a different pool. But I take it this is related to a networking issue I currently have with rsync and broken pipe. Where exactly does it go wrong? The cephfs kernel client is sending a request to the osd, but the osd never re

Re: [ceph-users] Process stuck in D+ on cephfs mount

2019-01-22 Thread Marc Roos
I got one again [] wait_on_page_bit_killable+0x83/0xa0 [] __lock_page_or_retry+0xb2/0xc0 [] filemap_fault+0x3b7/0x410 [] ceph_filemap_fault+0x13c/0x310 [ceph] [] __do_fault+0x4c/0xc0 [] do_read_fault.isra.42+0x43/0x130 [] handle_mm_fault+0x6b1/0x1040 [] __do_page_fault+0x154/0x450 [] do_page_fau

Re: [ceph-users] MDS performance issue

2019-01-21 Thread Marc Roos
How can you see that the cache is filling up and you need to execute "echo 2 > /proc/sys/vm/drop_caches"? -Original Message- From: Yan, Zheng [mailto:uker...@gmail.com] Sent: 21 January 2019 15:50 To: Albert Yue Cc: ceph-users Subject: Re: [ceph-users] MDS performance issue On Mon,

[ceph-users] process stuck in D state on cephfs kernel mount

2019-01-21 Thread Marc Roos
I had a process stuck in D state this weekend, writing to a cephfs kernel mount and causing the load of the server to go to 80 (normally around 1), forcing me to reboot it. I think this problem is related to the networking between this vm and the ceph nodes. Rsync also sometimes complains about a broke

Re: [ceph-users] Process stuck in D+ on cephfs mount

2019-01-21 Thread Marc Roos
Sent: 21 January 2019 02:50 To: Marc Roos Cc: ceph-users Subject: Re: [ceph-users] Process stuck in D+ on cephfs mount check /proc//stack to find where it is stuck On Mon, Jan 21, 2019 at 5:51 AM Marc Roos wrote: > > > I have a process stuck in D+ writing to cephfs kernel mount. Anything

Re: [ceph-users] monitor cephfs mount io's

2019-01-21 Thread Marc Roos
Hi Mohamad, How do you do that client side? I currently have two kernel mounts. -Original Message- From: Mohamad Gebai [mailto:mge...@suse.de] Sent: 17 January 2019 15:57 To: Marc Roos; ceph-users Subject: Re: [ceph-users] monitor cephfs mount io's You can do that e

Re: [ceph-users] How To Properly Failover a HA Setup

2019-01-21 Thread Marc Roos
I think his downtime is coming from the mds failover; that takes a while in my case too. But I am not using the cephfs that much yet. -Original Message- From: Robert Sander [mailto:r.san...@heinlein-support.de] Sent: 21 January 2019 10:05 To: ceph-users@lists.ceph.com Subject: Re: [

[ceph-users] Process stuck in D+ on cephfs mount

2019-01-20 Thread Marc Roos
I have a process stuck in D+ writing to cephfs kernel mount. Anything can be done about this? (without rebooting) CentOS Linux release 7.5.1804 (Core) Linux 3.10.0-514.21.2.el7.x86_64 ___ ceph-users mailing list ceph-users@lists.ceph.com http://lis

Re: [ceph-users] Salvage CEPHFS after lost PG

2019-01-20 Thread Marc Roos
If you have a backfillfull, no pg's will be able to migrate. Better is to just add hard drives, because at least one of your osd's is too full. I know you can set the backfillfull ratios with commands like these ceph tell osd.* injectargs '--mon_osd_full_ratio=0.97' ceph tell osd.* injectar
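
On luminous the ratios can also be set cluster-wide with dedicated commands instead of injectargs; a sketch (values are examples, and raising them is only a stop-gap until drives are added):
  ceph osd set-nearfull-ratio 0.90
  ceph osd set-backfillfull-ratio 0.92
  ceph osd set-full-ratio 0.97
  ceph osd df tree      # see which OSDs are actually the full ones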

Re: [ceph-users] CephFS - Small file - single thread - read performance.

2019-01-18 Thread Marc Roos
Yes, and to be sure I did the read test again from another client. -Original Message- From: David C [mailto:dcsysengin...@gmail.com] Sent: 18 January 2019 16:00 To: Marc Roos Cc: aderumier; Burkhard.Linke; ceph-users Subject: Re: [ceph-users] CephFS - Small file - single thread

Re: [ceph-users] CephFS - Small file - single thread - read performance.

2019-01-18 Thread Marc Roos
[@test]# time cat 50b.img > /dev/null real0m0.004s user0m0.000s sys 0m0.002s [@test]# time cat 50b.img > /dev/null real0m0.002s user0m0.000s sys 0m0.002s [@test]# time cat 50b.img > /dev/null real0m0.002s user0m0.000s sys 0m0.001s [@test]# time cat 50b.img

Re: [ceph-users] Ceph Nautilus Release T-shirt Design

2019-01-18 Thread Marc Roos
Is there an overview of previous tshirts? -Original Message- From: Anthony D'Atri [mailto:a...@dreamsnake.net] Sent: 18 January 2019 01:07 To: Tim Serong Cc: Ceph Development; ceph-users@lists.ceph.com Subject: Re: [ceph-users] Ceph Nautilus Release T-shirt Design >> Lenz has provide

[ceph-users] How to do multiple cephfs mounts.

2019-01-17 Thread Marc Roos
Should I not be able to increase the io's by splitting the data writes over eg. 2 cephfs mounts? I am still getting similar overall performance. Is it even possible to increase performance by using multiple mounts? Using 2 kernel mounts on CentOS 7.6 _

[ceph-users] monitor cephfs mount io's

2019-01-17 Thread Marc Roos
How / where can I monitor the ios on cephfs mount / client? ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] dropping python 2 for nautilus... go/no-go

2019-01-16 Thread Marc Roos
I have python 2 in rhel7/centos7 [@c04 ~]# python -V Python 2.7.5 [@c04 ~]# cat /etc/redhat-release CentOS Linux release 7.6.1810 (Core) -Original Message- From: c...@jack.fr.eu.org [mailto:c...@jack.fr.eu.org] Sent: 16 January 2019 16:55 To: ceph-users@lists.ceph.com Subject: Re: [

Re: [ceph-users] Filestore OSD on CephFS?

2019-01-16 Thread Marc Roos
How can there be a "catastrophic reason" if you have "no active, production workload"...? Do as you please. I am also having 1 replication for temp en tests. But if you have only one osd why use ceph? Choose the correct 'tool' for the job. -Original Message- From: Kenneth Van Alst

Re: [ceph-users] Recommendations for sharing a file system to a heterogeneous client network?

2019-01-16 Thread Marc Roos
I opened a thread recently here asking about what can be generally accepted as 'ceph overhead' when using the file system. I wonder if the performance loss I have on a cephfs 1x replication pool compared to native performance is really so much: 5.6x to 2x slower than native disk performance

Re: [ceph-users] slow requests and high i/o / read rate on bluestore osds after upgrade 12.2.8 -> 12.2.10

2019-01-15 Thread Marc Roos
I upgraded this weekend from 12.2.8 to 12.2.10 without such issues (osd's are idle) -Original Message- From: Stefan Priebe - Profihost AG [mailto:s.pri...@profihost.ag] Sent: 15 January 2019 10:26 To: ceph-users@lists.ceph.com Cc: n.fahldi...@profihost.ag Subject: Re: [ceph-users] s

[ceph-users] samsung sm863 vs cephfs rep.1 pool performance

2019-01-15 Thread Marc Roos
Is this result to be expected from cephfs, when compared to a native ssd speed test? 4k r ran. 4k w ran. 4k r seq. 4k w seq. 1024k r ran. 1024k w ran. 1024k r seq. 1024k w seq. size lat iops kB/s lat iops kB/s lat iops MB/s lat iops MB/s lat iops MB

[ceph-users] vm virtio rbd device, lvm high load but vda not

2019-01-13 Thread Marc Roos
Is it normal or expected that lvm can show high utilization while the disk the logical volume is on does not? Or do I still need to do custom optimizations for the ceph rbd backend? https://www.redhat.com/archives/linux-lvm/2013-October/msg00022.html Atop: LVM | Groot-LVroot | busy 74%

Re: [ceph-users] Upgrade to 7.6 flooding logs pam_unix(sudo:session): session opened for user root

2019-01-13 Thread Marc Roos
Ignore, I had to modify Defaults !syslog to Defaults !syslog,!pam_session A bit off topic: I just upgraded the ceph test cluster to 7.6 and my syslog servers are flooded with these pam_unix(sudo:session): session opened for user root Anyone know how to get rid of these? _

[ceph-users] Upgrade to 7.6 flooding logs pam_unix(sudo:session): session opened for user root

2019-01-13 Thread Marc Roos
A bit off topic: I just upgraded the ceph test cluster to 7.6 and my syslog servers are flooded with these pam_unix(sudo:session): session opened for user root Anyone know how to get rid of these? ___ ceph-users mailing list ceph-users@lists.c

[ceph-users] Using a cephfs mount as separate dovecot storage

2019-01-10 Thread Marc Roos
I wanted to expand the usage of the ceph cluster and use a cephfs mount to archive mail messages. Only (the below) 'Archive' tree is going to be on this mount, the default folders stay where they are. Currently mbox is still being used. I thought about switching storage from mbox to mdbox. I

[ceph-users] two OSDs with high out rate

2019-01-10 Thread Marc
Hi, for support reasons we're still running firefly (part of MCP 6). In our grafana monitoring we noticed that two out of 128 OSD processes show significantly higher outbound IO than all the others, and this is constant (can't see the first occurrence of this anymore, grafana only has 14 days backlo

[ceph-users] Ceph Dashboard Rewrite

2019-01-08 Thread Marc Schöchlin
09 * Connection #0 to host ceph-mon-s44.foobarsrv.de left intact Do you see any possibility to have a good workaround for this? Regards Marc ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
