Re: [ceph-users] Ceph inside Docker containers inside VirtualBox

2019-04-18 Thread Varun Singh
On Thu, Apr 18, 2019 at 9:53 PM Siegfried Höllrigl wrote: > > Hi ! > > I am not 100% sure, but I think --net=host does not propagate /dev/ > inside the container. > > From the error message: > > 2019-04-18 07:30:06 /opt/ceph-container/bin/entrypoint.sh: ERROR- The > device pointed by OSD_DEVIC

Re: [ceph-users] How to properly clean up bluestore disks

2019-04-18 Thread Sergei Genchev
# ceph-volume lvm zap --destroy osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
Running command: /usr/sbin/cryptsetup status /dev/mapper/
--> Zapping: osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
--> Destroying physical volume osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz because --de

[ceph-users] iSCSI LUN and target Maximums in ceph-iscsi-3.0+

2019-04-18 Thread Wesley Dillingham
I am trying to determine some sizing limitations for a potential iSCSI deployment and wondering what's still the current lay of the land. Are the following still accurate as of the ceph-iscsi-3.0 implementation, assuming CentOS 7.6+ and the latest python-rtslib etc. from shaman: * Limit of 4
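Not an answer on the limits themselves, but for checking how many targets and LUNs a gateway currently exports, the gwcli tree listing helps (a minimal sketch, assuming gwcli from ceph-iscsi is installed on a gateway node):

```
# Interactive gwcli session: lists targets, gateways, disks (LUNs) and clients
$ gwcli
/> ls
```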

Re: [ceph-users] How to properly clean up bluestore disks

2019-04-18 Thread Alfredo Deza
On Thu, Apr 18, 2019 at 3:01 PM Sergei Genchev wrote: > > Thank you Alfredo > I did not have any reason to keep the volumes around. > I tried using ceph-volume to zap these stores, but none of the commands > worked, including yours 'ceph-volume lvm zap > osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-

Re: [ceph-users] How to properly clean up bluestore disks

2019-04-18 Thread Sergei Genchev
Thank you Alfredo. I did not have any reason to keep the volumes around. I tried using ceph-volume to zap these stores, but none of the commands worked, including yours, 'ceph-volume lvm zap osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz'. I ended up manually removing the LUKS volumes and then deleting
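For reference, a rough sketch of that manual teardown, assuming the dm-crypt mapping and volume names have already been identified with dmsetup ls and lvs (the mapper name and /dev/sdd are placeholders; the vg/lv name is the one from this thread):

```
# Close the dm-crypt mapping created for this OSD (name is a placeholder)
cryptsetup close /dev/mapper/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz

# Remove the logical volume for the failed OSD
lvremove -f osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz

# Only if nothing else in the VG is still in use:
vgremove osvg-sdd-db
pvremove /dev/sdd

# Wipe leftover signatures so the disk can be redeployed cleanly
wipefs -a /dev/sdd
```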

Re: [ceph-users] rgw windows/mac clients shitty, develop a new one?

2019-04-18 Thread Janne Johansson
https://www.reddit.com/r/netsec/comments/8t4xrl/filezilla_malware/ Not saying it definitely is, or isn't, malware-ridden, but it sure was shady at that time. I would suggest not pointing people to it. On Thu, 18 Apr 2019 at 16:41, Brian wrote: > Hi Marc > > Filezilla has decent S3 support ht

Re: [ceph-users] Ceph inside Docker containers inside VirtualBox

2019-04-18 Thread Jacob DeGlopper
The ansible deploy is quite a pain to get set up properly, but it does work to get the whole stack working under Docker. It uses the following script on Ubuntu to start the OSD containers:
/usr/bin/docker run \
  --rm \
  --net=host \
  --privileged=true \
  --pid=host \
  --memory=64386m \

Re: [ceph-users] Ceph inside Docker containers inside VirtualBox

2019-04-18 Thread Siegfried Höllrigl
Hi! I am not 100% sure, but I think --net=host does not propagate /dev/ inside the container. From the error message: 2019-04-18 07:30:06  /opt/ceph-container/bin/entrypoint.sh: ERROR- The device pointed by OSD_DEVICE (/dev/vdd) doesn't exist ! I would say you should add something like
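Siegfried's suggestion is truncated above; a hedged guess at the kind of flags meant, based on standard Docker device passthrough and the ceph/daemon OSD example (only /dev/vdd comes from the error message, everything else is illustrative):

```
# Bind-mount /dev and pass the disk explicitly so entrypoint.sh can see OSD_DEVICE
docker run -d --net=host --privileged=true --pid=host \
  -v /etc/ceph:/etc/ceph \
  -v /var/lib/ceph/:/var/lib/ceph/ \
  -v /dev/:/dev/ \
  --device=/dev/vdd \
  -e OSD_DEVICE=/dev/vdd \
  ceph/daemon osd   # 'osd' scenario as on the Docker Hub page being followed
```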

[ceph-users] Optimizing for cephfs throughput on a hdd pool

2019-04-18 Thread Daniel Williams
Hey, I'm running a new Ceph 13 cluster with just one CephFS on a 6+3 erasure-coded stripe pool; each OSD is a 10T HDD, 20 in total, each on its own host. Storing mostly large files, ~20G. I'm running mostly stock, except that I've tuned for the low-memory (2G) hosts based on an old thread's recommendatio
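For context, a sketch of how such a pool and the low-memory tuning might be set up; the profile and pool names are hypothetical and the 1 GiB value is only an illustration for 2G hosts (osd_memory_target exists in Mimic):

```
# k=6, m=3 erasure profile with host failure domain (matches 20 single-OSD hosts)
ceph osd erasure-code-profile set ec63 k=6 m=3 crush-failure-domain=host

# EC data pool for CephFS; overwrites must be enabled for CephFS data on EC pools
ceph osd pool create cephfs_data 128 128 erasure ec63
ceph osd pool set cephfs_data allow_ec_overwrites true

# Keep BlueStore's cache within reach of a 2G host
ceph config set osd osd_memory_target 1073741824
```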

Re: [ceph-users] How to properly clean up bluestore disks

2019-04-18 Thread Alfredo Deza
On Thu, Apr 18, 2019 at 10:55 AM Sergei Genchev wrote: > > Hello, > I have a server with 18 disks, and 17 OSD daemons configured. One of the OSD > daemons failed to deploy with ceph-deploy. The reason for the failure is > unimportant at this point; I believe it was a race condition, as I was running

[ceph-users] How to properly clean up bluestore disks

2019-04-18 Thread Sergei Genchev
Hello, I have a server with 18 disks and 17 OSD daemons configured. One of the OSD daemons failed to deploy with ceph-deploy. The reason for the failure is unimportant at this point; I believe it was a race condition, as I was running ceph-deploy inside a while loop for all disks on this server. Now I
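A minimal sketch of the ceph-volume commands discussed elsewhere in the thread; /dev/sdd is a placeholder, while the vg/lv name is the one from the follow-ups:

```
# List the LVs ceph-volume knows about and which OSD id they belong to
ceph-volume lvm list

# Zap and destroy everything on a raw device that holds the failed deployment
ceph-volume lvm zap --destroy /dev/sdd

# Or target a single logical volume in vg/lv form
ceph-volume lvm zap --destroy osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
```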

Re: [ceph-users] 'Missing' capacity

2019-04-18 Thread Brent Kennedy
That’s good to know as well; I was seeing the same thing. I hope this is just an informational message, though. -Brent -Original Message- From: ceph-users On Behalf Of Mark Schouten Sent: Tuesday, April 16, 2019 3:15 AM To: Igor Podlesny ; Sinan Polat Cc: Ceph Users Subject: Re: [ceph

Re: [ceph-users] Default Pools

2019-04-18 Thread Brent Kennedy
Yeah, that was a cluster created during Firefly... I wish there were a good article on the naming and use of these, or perhaps a way I could make sure they are not used before deleting them. I know RGW will recreate anything it uses, but I don’t want to lose data because I wanted a clean system.
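One way to check whether a legacy pool still holds anything before removing it; `.rgw.root` below is just an example pool name:

```
# Per-pool object and byte counts
ceph df
rados df

# Peek at objects in a specific candidate pool before removing it
rados -p .rgw.root ls | head

# Pool metadata, including the application tag on Luminous and later
ceph osd pool ls detail
```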

Re: [ceph-users] rgw windows/mac clients shitty, develop a new one?

2019-04-18 Thread Brian :
Hi Marc Filezilla has decent S3 support https://filezilla-project.org/ ymmv of course! On Thu, Apr 18, 2019 at 2:18 PM Marc Roos wrote: > > > I have been looking a bit at the s3 clients available to be used, and I > think they are quite shitty, especially this Cyberduck that processes > files w

[ceph-users] IO500 @ ISC19

2019-04-18 Thread John Bent
Call for Submission *Deadline*: 10 June 2019 AoE The IO500 is now accepting and encouraging submissions for the upcoming 4th IO500 list to be revealed at ISC-HPC 2019 in Frankfurt, Germany. Once again, we are also accepting submissions to the 10 node I/O challenge to encourage submission of small

[ceph-users] rgw windows/mac clients shitty, develop a new one?

2019-04-18 Thread Marc Roos
I have been looking a bit at the S3 clients available to be used, and I think they are quite shitty, especially this Cyberduck, which processes files with default read rights for everyone. I am in the process of advising clients to use, for instance, this Mountain Duck. But I am not too happy abou

[ceph-users] Ceph inside Docker containers inside VirtualBox

2019-04-18 Thread Varun Singh
Hi, I am trying to set up Ceph through Docker inside a VM. My host machine is a Mac. My VM is Ubuntu 18.04. The Docker version is 18.09.5, build e8ff056. I am following the documentation on the ceph/daemon Docker Hub page. The idea is that if I spawn Docker containers as described on the page, I should
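For comparison, the documented ceph/daemon monitor invocation is roughly along these lines (the IP and network are placeholders for the VM's values):

```
# Start a MON with host networking; config and data dirs are bind-mounted so
# they survive container restarts
docker run -d --net=host \
  -v /etc/ceph:/etc/ceph \
  -v /var/lib/ceph/:/var/lib/ceph/ \
  -e MON_IP=192.168.56.101 \
  -e CEPH_PUBLIC_NETWORK=192.168.56.0/24 \
  ceph/daemon mon
```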

[ceph-users] ceph-iscsi: problem when discovery auth is disabled, but gateway receives auth requests

2019-04-18 Thread Matthias Leopold
Hi, the Ceph iSCSI gateway has a problem when receiving discovery auth requests when discovery auth is not enabled. Target discovery fails in this case (see below). This is especially annoying with oVirt (KVM management platform) where you can't separate the two authentication phases. This le
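As a workaround, discovery CHAP can be enabled on the gateway side so initiators that always send discovery auth (such as oVirt) can log in; a hedged sketch with placeholder credentials, to be checked against the installed ceph-iscsi version's docs:

```
# gwcli on a gateway node: enable CHAP for the discovery phase
$ gwcli
/> cd /iscsi-targets
/iscsi-targets> discovery_auth username=discoveryuser password=discoverypassword
```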

[ceph-users] failed to load OSD map for epoch X, got 0 bytes

2019-04-18 Thread Lomayani S. Laizer
Hello, I have one OSD which can't start and gives the above error. Everything was running OK until last night, when the network interface card of the server hosting this OSD went faulty. We replaced the faulty interface and the other OSDs started fine, except one. We are running Ceph 14.2.0 and all OSDs are ru
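If the OSD's own store is simply missing the map for that epoch, one approach is to fetch it from the monitors and inject it with ceph-objectstore-tool; a hedged sketch where the OSD id (7) and epoch (12345) are placeholders and the OSD must be stopped first:

```
# Grab the full map for the missing epoch from the monitors
ceph osd getmap 12345 -o /tmp/osdmap.12345

# Stop the affected OSD, then write that map into its object store
systemctl stop ceph-osd@7
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-7 \
    --op set-osdmap --file /tmp/osdmap.12345
systemctl start ceph-osd@7
```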

Re: [ceph-users] Is it possible to run a standalone Bluestore instance?

2019-04-18 Thread Brad Hubbard
Let me try to reproduce this on centos 7.5 with master and I'll let you know how I go. On Thu, Apr 18, 2019 at 3:59 PM Can Zhang wrote: > > Using the commands you provided, I actually find some differences: > > On my CentOS VM: > ``` > # sudo find ./lib* -iname '*.so*' | xargs nm -AD 2>&1 | grep

[ceph-users] Ceph v13.2.4 issue with snaptrim

2019-04-18 Thread Vytautas Jonaitis
Hello, a few months ago we experienced an issue with Ceph v13.2.4:
1. One of the nodes had all its OSDs set to out, to clean them up for replacement.
2. We noticed that a lot of snaptrim was running.
3. We set the nosnaptrim flag on the cluster (to improve performance).
4. Once mon_osd_snap_trim_queue_warn_o
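For reference, the flag from step 3 and the threshold option from step 4 (the config query assumes the centralized config framework available in Mimic):

```
# Pause snapshot trimming cluster-wide, and re-enable it later
ceph osd set nosnaptrim
ceph osd unset nosnaptrim

# The queue-length threshold behind the snaptrim HEALTH_WARN
ceph config get mon mon_osd_snap_trim_queue_warn_on
```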

Re: [ceph-users] Multi-site replication speed

2019-04-18 Thread Brian Topping
Hi Casey, thanks for this info. It’s been doing something for 36 hours, but not updating the status at all. So it either takes a really long time for “preparing for full sync” or I’m doing something wrong. This is helpful information, but there’s a myriad of states that the system could be in.
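For reference, sync progress can be polled on the secondary zone with radosgw-admin; the source zone name below is a placeholder:

```
# Overall metadata/data sync progress for this zone
radosgw-admin sync status

# More detail on data sync against the master zone
radosgw-admin data sync status --source-zone=us-east
```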