Re: [ceph-users] Ceph inside Docker containers inside VirtualBox

2019-04-18 Thread Varun Singh
On Thu, Apr 18, 2019 at 9:53 PM Siegfried Höllrigl
 wrote:
>
> Hi !
>
> I am not 100% sure, but I think --net=host does not propagate /dev/
> inside the container.
>
>  From the Error Message :
>
> 2019-04-18 07:30:06  /opt/ceph-container/bin/entrypoint.sh: ERROR- The
> device pointed by OSD_DEVICE (/dev/vdd) doesn't exist !
>
>
> I would say you should add something like --device=/dev/vdd to the docker 
> run command for the OSD.
>
> Br
>
>
> Am 18.04.2019 um 14:46 schrieb Varun Singh:
> > Hi,
> > I am trying to setup Ceph through Docker inside a VM. My host machine
> > is Mac. My VM is an Ubuntu 18.04. Docker version is 18.09.5, build
> > e8ff056.
> > I am following the documentation present on ceph/daemon Docker Hub
> > page. The idea is, if I spawn docker containers as mentioned on the
> > page, I should get a ceph setup without KV store. I am not worried
> > about KV store as I just want to try it out. Following are the
> > commands I am firing to bring the containers up:
> >
> > Monitor:
> > docker run -d --net=host -v /etc/ceph:/etc/ceph -v
> > /var/lib/ceph/:/var/lib/ceph/ -e MON_IP=10.0.2.15 -e
> > CEPH_PUBLIC_NETWORK=10.0.2.0/24 ceph/daemon mon
> >
> > Manager:
> > docker run -d --net=host -v /etc/ceph:/etc/ceph -v
> > /var/lib/ceph/:/var/lib/ceph/ ceph/daemon mgr
> >
> > OSD:
> > docker run -d --net=host --pid=host --privileged=true -v
> > /etc/ceph:/etc/ceph -v /var/lib/ceph/:/var/lib/ceph/ -v /dev/:/dev/ -e
> > OSD_DEVICE=/dev/vdd ceph/daemon osd
> >
> >  From the above commands I am able to spawn monitor and manager
> > properly. I verified this by firing this command on both monitor and
> > manager containers:
> > sudo docker exec d1ab985 ceph -s
> >
> > I get following outputs for both:
> >
> >cluster:
> >  id: 14a6e40a-8e54-4851-a881-661a84b3441c
> >  health: HEALTH_OK
> >
> >services:
> >  mon: 1 daemons, quorum serverceph-VirtualBox (age 62m)
> >  mgr: serverceph-VirtualBox(active, since 56m)
> >  osd: 0 osds: 0 up, 0 in
> >
> >data:
> >  pools:   0 pools, 0 pgs
> >  objects: 0 objects, 0 B
> >  usage:   0 B used, 0 B / 0 B avail
> >  pgs:
> >
> > However when I try to bring up OSD using above command, it doesn't
> > work. Docker logs show this output:
> > 2019-04-18 07:30:06  /opt/ceph-container/bin/entrypoint.sh: static:
> > does not generate config
> > 2019-04-18 07:30:06  /opt/ceph-container/bin/entrypoint.sh: ERROR- The
> > device pointed by OSD_DEVICE (/dev/vdd) doesn't exist !
> >
> > I am not sure why the doc asks to pass /dev/vdd to OSD_DEVICE env var.
> > I know there are five different ways of spawning the OSD, but I am not
> > able to figure out which one would be suitable for a simple
> > deployment. If you could please let me know how to spawn OSDs using
> > Docker, it would help a lot.
> >
> >

Thanks Br, I will try this out today.
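
For reference, a minimal sketch of the adjusted OSD invocation -- simply the
original command from above with the suggested --device flag added (untested;
the /dev/vdd path is taken from the original post):

docker run -d --net=host --pid=host --privileged=true \
  --device=/dev/vdd \
  -v /etc/ceph:/etc/ceph -v /var/lib/ceph/:/var/lib/ceph/ -v /dev/:/dev/ \
  -e OSD_DEVICE=/dev/vdd ceph/daemon osd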

-- 
Regards,
Varun Singh

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] How to properly clean up bluestore disks

2019-04-18 Thread Sergei Genchev
# ceph-volume lvm zap --destroy
osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
Running command: /usr/sbin/cryptsetup status /dev/mapper/
--> Zapping: osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
--> Destroying physical volume
osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz because --destroy was
given
Running command: /usr/sbin/pvremove -v -f -f
osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
 stderr: Device osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz not
found.
--> Unable to remove vg osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
-->  RuntimeError: command returned non-zero exit status: 5

This is how destroy failed before I started deleting volumes.

On Thu, Apr 18, 2019 at 2:26 PM Alfredo Deza  wrote:

> On Thu, Apr 18, 2019 at 3:01 PM Sergei Genchev  wrote:
> >
> >  Thank you Alfredo
> > I did not have any reasons to keep volumes around.
> > I tried using ceph-volume to zap these stores, but none of the commands
> worked, including yours 'ceph-volume lvm zap
> osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz'
>
> If you do not want to keep them around you would need to use --destroy
> and use the lv path as input:
>
> ceph-volume lvm zap --destroy
> osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
>
> >
> > I ended up manually removing LUKS volumes and then deleting LVM LV, VG,
> and PV
> >
> > cryptsetup remove /dev/mapper/AeV0iG-odWF-NRPE-1bVK-0mxH-OgHL-fneTzr
> > cryptsetup remove /dev/mapper/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
> > lvremove
> /dev/ceph-f4efa78f-a467-4214-b550-81653da1c9bd/osd-block-097d59be-bbe6-493a-b785-48b259d2ff35
> > sgdisk -Z /dev/sdd
> >
> > # ceph-volume lvm zap osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
> > Running command: /usr/sbin/cryptsetup status /dev/mapper/
> > --> Zapping: osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
> > Running command: /usr/sbin/wipefs --all
> osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
> >  stderr: wipefs: error:
> osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz: probing initialization
> failed: No such file or directory
> > -->  RuntimeError: command returned non-zero exit status: 1
>
> In this case, you removed the LV so the wipefs failed because that LV
> no longer exists. Do you have output on how it failed before?
>
> >
> >
> > On Thu, Apr 18, 2019 at 10:10 AM Alfredo Deza  wrote:
> >>
> >> On Thu, Apr 18, 2019 at 10:55 AM Sergei Genchev 
> wrote:
> >> >
> >> >  Hello,
> >> > I have a server with 18 disks, and 17 OSD daemons configured. One of
> the OSD daemons failed to deploy with ceph-deploy. The reason for failing
> is unimportant at this point, I believe it was race condition, as I was
> running ceph-deploy inside while loop for all disks in this server.
> >> >   Now I have two left over LVM dmcrypded volumes that I am not sure
> how clean up. The command that failed and did not quite clean up after
> itself was:
> >> > ceph-deploy osd create --bluestore --dmcrypt --data /dev/sdd
> --block-db osvg/sdd-db ${SERVERNAME}
> >> >
> >> > # lsblk
> >> > ...
> >> > sdd 8:48   0   7.3T  0
> disk
> >> >
> └─ceph--f4efa78f--a467--4214--b550--81653da1c9bd-osd--block--097d59be--bbe6--493a--b785--48b259d2ff35
> >> >   253:32   0   7.3T  0 lvm
> >> >   └─AeV0iG-odWF-NRPE-1bVK-0mxH-OgHL-fneTzr253:33   0   7.3T  0
> crypt
> >> >
> >> > sds65:32   0 223.5G  0
> disk
> >> > ├─sds1 65:33   0   512M  0
> part  /boot
> >> > └─sds2 65:34   0   223G  0
> part
> >> >  ...
> >> >├─osvg-sdd--db  253:80 8G  0
> lvm
> >> >│ └─2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz  253:34   0 8G  0
> crypt
> >> >
> >> > # ceph-volume inventory /dev/sdd
> >> >
> >> > == Device report /dev/sdd ==
> >> >
> >> >  available False
> >> >  rejected reasons  locked
> >> >  path  /dev/sdd
> >> >  scheduler modedeadline
> >> >  rotational1
> >> >  vendorSEAGATE
> >> >  human readable size   7.28 TB
> >> >  sas address   0x5000c500a6b1d581
> >> >  removable 0
> >> >  model ST8000NM0185
> >> >  ro0
> >> > --- Logical Volume ---
> >> >  cluster name  ceph
> >> >  name
> osd-block-097d59be-bbe6-493a-b785-48b259d2ff35
> >> >  osd id39
> >> >  cluster fsid  8e7a3953-7647-4133-9b9a-7f4a2e2b7da7
> >> >  type  block
> >> >  block uuidAeV0iG-odWF-NRPE-1bVK-0mxH-OgHL-fneTzr
> >> >  osd fsid  097d59be-bbe6-493a-b785-48b259d2ff35
> >> >
> >> > I was trying to run
> >> > ceph-volume lvm zap --destroy /dev/sdd but it errored out. Osd id on
> 

[ceph-users] iSCSI LUN and target Maximums in ceph-iscsi-3.0+

2019-04-18 Thread Wesley Dillingham
I am trying to determine some sizing limitations for a potential iSCSI 
deployment and wondering what's still the current lay of the land:

Are the following still accurate as of the ceph-iscsi-3.0 implementation 
assuming CentOS 7.6+ and the latest python-rtslib etc from shaman:


  *   Limit of 4 gateways per cluster (source: 
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/block_device_guide/using_an_iscsi_gateway#requirements)


  *   Limit of 256 LUNs per target (source: 
https://github.com/ceph/ceph-iscsi-cli/issues/84#issuecomment-373359179 ). There 
is mention of this being updated in this comment: 
https://github.com/ceph/ceph-iscsi-cli/issues/84#issuecomment-373449362 per an 
update to rtslib, but I still see the limit as 256 here: 
https://github.com/ceph/ceph-iscsi/blob/master/ceph_iscsi_config/lun.py#L984 
I am wondering if this is just an outdated limit or whether there is still a valid 
reason to limit the number of LUNs per target.


  *   Limit of 1 target per cluster: 
https://github.com/ceph/ceph-iscsi-cli/issues/104#issuecomment-396224922


Thanks in advance.





Respectfully,

Wes Dillingham
wdilling...@godaddy.com
Site Reliability Engineer IV - Platform Storage / Ceph

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] How to properly clean up bluestore disks

2019-04-18 Thread Alfredo Deza
On Thu, Apr 18, 2019 at 3:01 PM Sergei Genchev  wrote:
>
>  Thank you Alfredo
> I did not have any reasons to keep volumes around.
> I tried using ceph-volume to zap these stores, but none of the commands 
> worked, including yours 'ceph-volume lvm zap 
> osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz'

If you do not want to keep them around you would need to use --destroy
and use the lv path as input:

ceph-volume lvm zap --destroy osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz

>
> I ended up manually removing LUKS volumes and then deleting LVM LV, VG, and PV
>
> cryptsetup remove /dev/mapper/AeV0iG-odWF-NRPE-1bVK-0mxH-OgHL-fneTzr
> cryptsetup remove /dev/mapper/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
> lvremove 
> /dev/ceph-f4efa78f-a467-4214-b550-81653da1c9bd/osd-block-097d59be-bbe6-493a-b785-48b259d2ff35
> sgdisk -Z /dev/sdd
>
> # ceph-volume lvm zap osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
> Running command: /usr/sbin/cryptsetup status /dev/mapper/
> --> Zapping: osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
> Running command: /usr/sbin/wipefs --all 
> osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
>  stderr: wipefs: error: osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz: 
> probing initialization failed: No such file or directory
> -->  RuntimeError: command returned non-zero exit status: 1

In this case, you removed the LV so the wipefs failed because that LV
no longer exists. Do you have output on how it failed before?

>
>
> On Thu, Apr 18, 2019 at 10:10 AM Alfredo Deza  wrote:
>>
>> On Thu, Apr 18, 2019 at 10:55 AM Sergei Genchev  wrote:
>> >
>> >  Hello,
>> > I have a server with 18 disks, and 17 OSD daemons configured. One of the 
>> > OSD daemons failed to deploy with ceph-deploy. The reason for failing is 
>> > unimportant at this point, I believe it was a race condition, as I was 
>> > running ceph-deploy inside a while loop for all disks in this server.
>> >   Now I have two leftover LVM dmcrypted volumes that I am not sure how 
>> > to clean up. The command that failed and did not quite clean up after itself 
>> > was:
>> > ceph-deploy osd create --bluestore --dmcrypt --data /dev/sdd --block-db 
>> > osvg/sdd-db ${SERVERNAME}
>> >
>> > # lsblk
>> > ...
>> > sdd 8:48   0   7.3T  0 disk
>> > └─ceph--f4efa78f--a467--4214--b550--81653da1c9bd-osd--block--097d59be--bbe6--493a--b785--48b259d2ff35
>> >   253:32   0   7.3T  0 lvm
>> >   └─AeV0iG-odWF-NRPE-1bVK-0mxH-OgHL-fneTzr253:33   0   7.3T  0 crypt
>> >
>> > sds65:32   0 223.5G  0 disk
>> > ├─sds1 65:33   0   512M  0 part  
>> > /boot
>> > └─sds2 65:34   0   223G  0 part
>> >  ...
>> >├─osvg-sdd--db  253:80 8G  0 lvm
>> >│ └─2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz  253:34   0 8G  0 crypt
>> >
>> > # ceph-volume inventory /dev/sdd
>> >
>> > == Device report /dev/sdd ==
>> >
>> >  available False
>> >  rejected reasons  locked
>> >  path  /dev/sdd
>> >  scheduler modedeadline
>> >  rotational1
>> >  vendorSEAGATE
>> >  human readable size   7.28 TB
>> >  sas address   0x5000c500a6b1d581
>> >  removable 0
>> >  model ST8000NM0185
>> >  ro0
>> > --- Logical Volume ---
>> >  cluster name  ceph
>> >  name  
>> > osd-block-097d59be-bbe6-493a-b785-48b259d2ff35
>> >  osd id39
>> >  cluster fsid  8e7a3953-7647-4133-9b9a-7f4a2e2b7da7
>> >  type  block
>> >  block uuidAeV0iG-odWF-NRPE-1bVK-0mxH-OgHL-fneTzr
>> >  osd fsid  097d59be-bbe6-493a-b785-48b259d2ff35
>> >
>> > I was trying to run
>> > ceph-volume lvm zap --destroy /dev/sdd but it errored out. Osd id on this 
>> > volume is the same as on next drive, /dev/sde, and osd.39 daemon is 
>> > running. This command was trying to zap running osd.
>> >
>> > What is the proper way to clean both data and block db volumes, so I can 
>> > rerun ceph-deploy again, and add them to the pool?
>> >
>>
>> Do you want to keep the LVs around or do you want to completely get rid of
>> them? If you are passing /dev/sdd to 'zap' you are telling the tool to
>> destroy everything that is in there, regardless of who owns it
>> (including running
>> OSDs).
>>
>> If you want to keep LVs around then you can omit the --destroy flag
>> and pass the LVs as input, or if using a recent enough version you can
>> use --osd-fsid to zap:
>>
>> ceph-volume lvm zap osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
>>
>> If you don't want the LVs around you can add --destroy, but use the LV
>> 

Re: [ceph-users] How to properly clean up bluestore disks

2019-04-18 Thread Sergei Genchev
 Thank you Alfredo
I did not have any reasons to keep volumes around.
I tried using ceph-volume to zap these stores, but none of the commands
worked, including yours 'ceph-volume lvm zap
osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz'

I ended up manually removing LUKS volumes and then deleting LVM LV, VG, and
PV

cryptsetup remove /dev/mapper/AeV0iG-odWF-NRPE-1bVK-0mxH-OgHL-fneTzr
cryptsetup remove /dev/mapper/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
lvremove
/dev/ceph-f4efa78f-a467-4214-b550-81653da1c9bd/osd-block-097d59be-bbe6-493a-b785-48b259d2ff35
sgdisk -Z /dev/sdd

# ceph-volume lvm zap osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
Running command: /usr/sbin/cryptsetup status /dev/mapper/
--> Zapping: osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
Running command: /usr/sbin/wipefs --all
osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
 stderr: wipefs: error: osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz:
probing initialization failed: No such file or directory
-->  RuntimeError: command returned non-zero exit status: 1


On Thu, Apr 18, 2019 at 10:10 AM Alfredo Deza  wrote:

> On Thu, Apr 18, 2019 at 10:55 AM Sergei Genchev 
> wrote:
> >
> >  Hello,
> > I have a server with 18 disks, and 17 OSD daemons configured. One of the
> OSD daemons failed to deploy with ceph-deploy. The reason for failing is
> unimportant at this point, I believe it was a race condition, as I was
> running ceph-deploy inside a while loop for all disks in this server.
> >   Now I have two leftover LVM dmcrypted volumes that I am not sure how
> to clean up. The command that failed and did not quite clean up after itself
> was:
> > ceph-deploy osd create --bluestore --dmcrypt --data /dev/sdd --block-db
> osvg/sdd-db ${SERVERNAME}
> >
> > # lsblk
> > ...
> > sdd 8:48   0   7.3T  0 disk
> >
> └─ceph--f4efa78f--a467--4214--b550--81653da1c9bd-osd--block--097d59be--bbe6--493a--b785--48b259d2ff35
> >   253:32   0   7.3T  0 lvm
> >   └─AeV0iG-odWF-NRPE-1bVK-0mxH-OgHL-fneTzr253:33   0   7.3T  0 crypt
> >
> > sds65:32   0 223.5G  0 disk
> > ├─sds1 65:33   0   512M  0 part
> /boot
> > └─sds2 65:34   0   223G  0 part
> >  ...
> >├─osvg-sdd--db  253:80 8G  0 lvm
> >│ └─2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz  253:34   0 8G  0 crypt
> >
> > # ceph-volume inventory /dev/sdd
> >
> > == Device report /dev/sdd ==
> >
> >  available False
> >  rejected reasons  locked
> >  path  /dev/sdd
> >  scheduler modedeadline
> >  rotational1
> >  vendorSEAGATE
> >  human readable size   7.28 TB
> >  sas address   0x5000c500a6b1d581
> >  removable 0
> >  model ST8000NM0185
> >  ro0
> > --- Logical Volume ---
> >  cluster name  ceph
> >  name
> osd-block-097d59be-bbe6-493a-b785-48b259d2ff35
> >  osd id39
> >  cluster fsid  8e7a3953-7647-4133-9b9a-7f4a2e2b7da7
> >  type  block
> >  block uuidAeV0iG-odWF-NRPE-1bVK-0mxH-OgHL-fneTzr
> >  osd fsid  097d59be-bbe6-493a-b785-48b259d2ff35
> >
> > I was trying to run
> > ceph-volume lvm zap --destroy /dev/sdd but it errored out. Osd id on
> this volume is the same as on next drive, /dev/sde, and osd.39 daemon is
> running. This command was trying to zap running osd.
> >
> > What is the proper way to clean both data and block db volumes, so I can
> rerun ceph-deploy again, and add them to the pool?
> >
>
> Do you want to keep the LVs around or do you want to completely get rid of
> them? If you are passing /dev/sdd to 'zap' you are telling the tool to
> destroy everything that is in there, regardless of who owns it
> (including running
> OSDs).
>
> If you want to keep LVs around then you can omit the --destroy flag
> and pass the LVs as input, or if using a recent enough version you can
> use --osd-fsid to zap:
>
> ceph-volume lvm zap osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
>
> If you don't want the LVs around you can add --destroy, but use the LV
> as input (not the device)
>
> > Thank you!
> >
> >
> >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] rgw windows/mac clients shitty, develop a new one?

2019-04-18 Thread Janne Johansson
https://www.reddit.com/r/netsec/comments/8t4xrl/filezilla_malware/

not saying it definitely is, or isn't malware-ridden, but it sure was shady
at that time.
I would suggest not pointing people to it.


Den tors 18 apr. 2019 kl 16:41 skrev Brian : :

> Hi Marc
>
> Filezilla has decent S3 support https://filezilla-project.org/
>
> ymmv of course!
>
> On Thu, Apr 18, 2019 at 2:18 PM Marc Roos 
> wrote:
> >
> >
> > I have been looking a bit at the s3 clients available to be used, and I
> > think they are quite shitty, especially this Cyberduck that processes
> > files with default reading rights to everyone. I am in the process of
> > advising clients to use, for instance, this Mountain Duck. But I am not too
> > happy about it. I don't like the fact that everything has default
> > settings for amazon or other stuff in there for ftp or what ever.
> >
> > I am thinking of developing something in-house, more aimed at the ceph
> > environments, easier/better to use.
> >
> > What I can think of:
> >
> > - cheaper, free or maybe even opensource
> > - default settings for your ceph cluster
> > - only configuration for object storage (no amazon, rackspace, backblaze
> > shit)
> > - default secure settings
> > - offer in the client only functionality that is available from the
> > specific ceph release
> > - integration with the finder / explorer windows
> >
> > I am curious who would be interested in such a new client? Maybe better
> > to send me your wishes directly, and not clutter the mailing list with
> > this.
> >
> >
> >
> >
> >
> >
> >
> >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>


-- 
May the most significant bit of your life be positive.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph inside Docker containers inside VirtualBox

2019-04-18 Thread Jacob DeGlopper
The ansible deploy is quite a pain to get set up properly, but it does 
work to get the whole stack working under Docker.  It uses the following 
script on Ubuntu to start the OSD containers:



/usr/bin/docker run \
  --rm \
  --net=host \
  --privileged=true \
  --pid=host \
  --memory=64386m \
  --cpus=1 \
  -v /dev:/dev \
  -v /etc/localtime:/etc/localtime:ro \
  -v /var/lib/ceph:/var/lib/ceph:z \
  -v /etc/ceph:/etc/ceph:z \
  -v /var/run/ceph:/var/run/ceph:z \
  --security-opt apparmor:unconfined \
  -e OSD_BLUESTORE=1 \
  -e OSD_DMCRYPT=0 \
  -e CLUSTER=ceph \
  -v /run/lvm/lvmetad.socket:/run/lvm/lvmetad.socket \
  -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE \
  -e OSD_ID="$1" \
  --name=ceph-osd-"$1" \
   \
  docker.io/ceph/daemon:latest
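
A minimal usage sketch for the script above, assuming it is saved as
/usr/local/bin/ceph-osd-run.sh (a hypothetical path) and that the OSD has
already been prepared on the host, since the OSD_CEPH_VOLUME_ACTIVATE mode
appears to only activate an existing OSD id:

# activate and run OSD 0 in a container; the script names it ceph-osd-0
/usr/local/bin/ceph-osd-run.sh 0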


Hi !

I am not 100% sure, but I think --net=host does not propagate /dev/ 
inside the container.


From the Error Message :

2019-04-18 07:30:06  /opt/ceph-container/bin/entrypoint.sh: ERROR- The
device pointed by OSD_DEVICE (/dev/vdd) doesn't exist !


I would say you should add something like --device=/dev/vdd to the 
docker run command for the OSD.


Br

Am 18.04.2019 um 14:46 schrieb Varun Singh:

Hi,
I am trying to setup Ceph through Docker inside a VM. My host machine
is Mac. My VM is an Ubuntu 18.04. Docker version is 18.09.5, build
e8ff056.
I am following the documentation present on ceph/daemon Docker Hub
page. The idea is, if I spawn docker containers as mentioned on the
page, I should get a ceph setup without KV store. I am not worried
about KV store as I just want to try it out. Following are the
commands I am firing to bring the containers up:

Monitor:
docker run -d --net=host -v /etc/ceph:/etc/ceph -v
/var/lib/ceph/:/var/lib/ceph/ -e MON_IP=10.0.2.15 -e
CEPH_PUBLIC_NETWORK=10.0.2.0/24 ceph/daemon mon

Manager:
docker run -d --net=host -v /etc/ceph:/etc/ceph -v
/var/lib/ceph/:/var/lib/ceph/ ceph/daemon mgr

OSD:
docker run -d --net=host --pid=host --privileged=true -v
/etc/ceph:/etc/ceph -v /var/lib/ceph/:/var/lib/ceph/ -v /dev/:/dev/ -e
OSD_DEVICE=/dev/vdd ceph/daemon osd

 From the above commands I am able to spawn monitor and manager
properly. I verified this by firing this command on both monitor and
manager containers:
sudo docker exec d1ab985 ceph -s

I get following outputs for both:

   cluster:
 id: 14a6e40a-8e54-4851-a881-661a84b3441c
 health: HEALTH_OK

   services:
 mon: 1 daemons, quorum serverceph-VirtualBox (age 62m)
 mgr: serverceph-VirtualBox(active, since 56m)
 osd: 0 osds: 0 up, 0 in

   data:
 pools:   0 pools, 0 pgs
 objects: 0 objects, 0 B
 usage:   0 B used, 0 B / 0 B avail
 pgs:

However when I try to bring up OSD using above command, it doesn't
work. Docker logs show this output:
2019-04-18 07:30:06  /opt/ceph-container/bin/entrypoint.sh: static:
does not generate config
2019-04-18 07:30:06  /opt/ceph-container/bin/entrypoint.sh: ERROR- The
device pointed by OSD_DEVICE (/dev/vdd) doesn't exist !

I am not sure why the doc asks to pass /dev/vdd to OSD_DEVICE env var.
I know there are five different ways of spawning the OSD, but I am not
able to figure out which one would be suitable for a simple
deployment. If you could please let me know how to spawn OSDs using
Docker, it would help a lot.



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph inside Docker containers inside VirtualBox

2019-04-18 Thread Siegfried Höllrigl

Hi !

I am not 100% sure, but I think --net=host does not propagate /dev/ 
inside the container.


From the Error Message :

2019-04-18 07:30:06  /opt/ceph-container/bin/entrypoint.sh: ERROR- The
device pointed by OSD_DEVICE (/dev/vdd) doesn't exist !


I would say you should add something like --device=/dev/vdd to the 
docker run command for the OSD.


Br

Am 18.04.2019 um 14:46 schrieb Varun Singh:

Hi,
I am trying to setup Ceph through Docker inside a VM. My host machine
is Mac. My VM is an Ubuntu 18.04. Docker version is 18.09.5, build
e8ff056.
I am following the documentation present on ceph/daemon Docker Hub
page. The idea is, if I spawn docker containers as mentioned on the
page, I should get a ceph setup without KV store. I am not worried
about KV store as I just want to try it out. Following are the
commands I am firing to bring the containers up:

Monitor:
docker run -d --net=host -v /etc/ceph:/etc/ceph -v
/var/lib/ceph/:/var/lib/ceph/ -e MON_IP=10.0.2.15 -e
CEPH_PUBLIC_NETWORK=10.0.2.0/24 ceph/daemon mon

Manager:
docker run -d --net=host -v /etc/ceph:/etc/ceph -v
/var/lib/ceph/:/var/lib/ceph/ ceph/daemon mgr

OSD:
docker run -d --net=host --pid=host --privileged=true -v
/etc/ceph:/etc/ceph -v /var/lib/ceph/:/var/lib/ceph/ -v /dev/:/dev/ -e
OSD_DEVICE=/dev/vdd ceph/daemon osd

 From the above commands I am able to spawn monitor and manager
properly. I verified this by firing this command on both monitor and
manager containers:
sudo docker exec d1ab985 ceph -s

I get following outputs for both:

   cluster:
 id: 14a6e40a-8e54-4851-a881-661a84b3441c
 health: HEALTH_OK

   services:
 mon: 1 daemons, quorum serverceph-VirtualBox (age 62m)
 mgr: serverceph-VirtualBox(active, since 56m)
 osd: 0 osds: 0 up, 0 in

   data:
 pools:   0 pools, 0 pgs
 objects: 0 objects, 0 B
 usage:   0 B used, 0 B / 0 B avail
 pgs:

However when I try to bring up OSD using above command, it doesn't
work. Docker logs show this output:
2019-04-18 07:30:06  /opt/ceph-container/bin/entrypoint.sh: static:
does not generate config
2019-04-18 07:30:06  /opt/ceph-container/bin/entrypoint.sh: ERROR- The
device pointed by OSD_DEVICE (/dev/vdd) doesn't exist !

I am not sure why the doc asks to pass /dev/vdd to OSD_DEVICE env var.
I know there are five different ways of spawning the OSD, but I am not
able to figure out which one would be suitable for a simple
deployment. If you could please let me know how to spawn OSDs using
Docker, it would help a lot.



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Optimizing for cephfs throughput on a hdd pool

2019-04-18 Thread Daniel Williams
Hey,

I'm running a new Ceph 13 cluster with just one CephFS on a 6+3 erasure-coded
striped pool; each OSD is a 10T HDD, 20 in total, each on its own host.
I'm storing mostly large files (~20G). I'm running mostly stock settings,
except that I've optimized for the low-memory (2G) hosts based on an old
thread's recommendations.

I'm trying to fill it and test various failure scenarios, and by far my
biggest bottleneck is IOPS, for both writing and recovery. I'm guessing
because of the journal write + block write (seeing roughly 30MiB/s for
100 IOPS). An SSD for the journal is not possible.

Am I correct in saying that I'm really only able to reduce/influence
IOPS per MiB for the block write? Is the correct way to increase that to
increase the stripe_unit by, say, 3x to achieve 100MiB/s per OSD?
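
For reference, a hedged sketch of raising the stripe unit via the standard
CephFS layout vxattrs -- the mount point, directory name, and the 12 MiB value
are only examples, and layouts affect only files created after the change:

# new files under this directory will use a 12 MiB stripe unit
setfattr -n ceph.dir.layout.stripe_unit -v 12582912 /mnt/cephfs/bigfiles
# verify on a file created afterwards
getfattr -n ceph.file.layout /mnt/cephfs/bigfiles/newfile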

Daniel
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] How to properly clean up bluestore disks

2019-04-18 Thread Alfredo Deza
On Thu, Apr 18, 2019 at 10:55 AM Sergei Genchev  wrote:
>
>  Hello,
> I have a server with 18 disks, and 17 OSD daemons configured. One of the OSD 
> daemons failed to deploy with ceph-deploy. The reason for failing is 
> unimportant at this point, I believe it was a race condition, as I was running 
> ceph-deploy inside a while loop for all disks in this server.
>   Now I have two leftover LVM dmcrypted volumes that I am not sure how to clean 
> up. The command that failed and did not quite clean up after itself was:
> ceph-deploy osd create --bluestore --dmcrypt --data /dev/sdd --block-db 
> osvg/sdd-db ${SERVERNAME}
>
> # lsblk
> ...
> sdd 8:48   0   7.3T  0 disk
> └─ceph--f4efa78f--a467--4214--b550--81653da1c9bd-osd--block--097d59be--bbe6--493a--b785--48b259d2ff35
>   253:32   0   7.3T  0 lvm
>   └─AeV0iG-odWF-NRPE-1bVK-0mxH-OgHL-fneTzr253:33   0   7.3T  0 crypt
>
> sds65:32   0 223.5G  0 disk
> ├─sds1 65:33   0   512M  0 part  /boot
> └─sds2 65:34   0   223G  0 part
>  ...
>├─osvg-sdd--db  253:80 8G  0 lvm
>│ └─2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz  253:34   0 8G  0 crypt
>
> # ceph-volume inventory /dev/sdd
>
> == Device report /dev/sdd ==
>
>  available False
>  rejected reasons  locked
>  path  /dev/sdd
>  scheduler modedeadline
>  rotational1
>  vendorSEAGATE
>  human readable size   7.28 TB
>  sas address   0x5000c500a6b1d581
>  removable 0
>  model ST8000NM0185
>  ro0
> --- Logical Volume ---
>  cluster name  ceph
>  name  osd-block-097d59be-bbe6-493a-b785-48b259d2ff35
>  osd id39
>  cluster fsid  8e7a3953-7647-4133-9b9a-7f4a2e2b7da7
>  type  block
>  block uuidAeV0iG-odWF-NRPE-1bVK-0mxH-OgHL-fneTzr
>  osd fsid  097d59be-bbe6-493a-b785-48b259d2ff35
>
> I was trying to run
> ceph-volume lvm zap --destroy /dev/sdd but it errored out. Osd id on this 
> volume is the same as on next drive, /dev/sde, and osd.39 daemon is running. 
> This command was trying to zap running osd.
>
> What is the proper way to clean both data and block db volumes, so I can 
> rerun ceph-deploy again, and add them to the pool?
>

Do you want to keep the LVs around or do you want to completely get rid of
them? If you are passing /dev/sdd to 'zap' you are telling the tool to
destroy everything that is in there, regardless of who owns it
(including running
OSDs).

If you want to keep LVs around then you can omit the --destroy flag
and pass the LVs as input, or if using a recent enough version you can
use --osd-fsid to zap:

ceph-volume lvm zap osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz

If you don't want the LVs around you can add --destroy, but use the LV
as input (not the device)
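
For reference, a hedged sketch of that cleanup using the VG/LV names visible in
the lsblk output earlier in this thread (adjust to your own names; --destroy
also removes the logical volumes after zapping):

ceph-volume lvm zap --destroy \
    ceph-f4efa78f-a467-4214-b550-81653da1c9bd/osd-block-097d59be-bbe6-493a-b785-48b259d2ff35
ceph-volume lvm zap --destroy osvg/sdd-db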

> Thank you!
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] How to properly clean up bluestore disks

2019-04-18 Thread Sergei Genchev
 Hello,
I have a server with 18 disks, and 17 OSD daemons configured. One of the
OSD daemons failed to deploy with ceph-deploy. The reason for failing is
unimportant at this point, I believe it was a race condition, as I was
running ceph-deploy inside a while loop for all disks in this server.
  Now I have two leftover LVM dmcrypted volumes that I am not sure how
to clean up. The command that failed and did not quite clean up after itself
was:
ceph-deploy osd create --bluestore --dmcrypt --data /dev/sdd --block-db
osvg/sdd-db ${SERVERNAME}

# lsblk
...
sdd 8:48   0   7.3T  0 disk
└─ceph--f4efa78f--a467--4214--b550--81653da1c9bd-osd--block--097d59be--bbe6--493a--b785--48b259d2ff35
  253:32   0   7.3T  0 lvm
  └─AeV0iG-odWF-NRPE-1bVK-0mxH-OgHL-fneTzr253:33   0   7.3T  0 crypt

sds65:32   0 223.5G  0 disk
├─sds1 65:33   0   512M  0 part
/boot
└─sds2 65:34   0   223G  0 part
 ...
   ├─osvg-sdd--db  253:80 8G  0 lvm
   │ └─2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz  253:34   0 8G  0 crypt

# ceph-volume inventory /dev/sdd

== Device report /dev/sdd ==

 available False
 rejected reasons  locked
 path  /dev/sdd
 scheduler modedeadline
 rotational1
 vendorSEAGATE
 human readable size   7.28 TB
 sas address   0x5000c500a6b1d581
 removable 0
 model ST8000NM0185
 ro0
--- Logical Volume ---
 cluster name  ceph
 name
osd-block-097d59be-bbe6-493a-b785-48b259d2ff35
 osd id39
 cluster fsid  8e7a3953-7647-4133-9b9a-7f4a2e2b7da7
 type  block
 block uuidAeV0iG-odWF-NRPE-1bVK-0mxH-OgHL-fneTzr
 osd fsid  097d59be-bbe6-493a-b785-48b259d2ff35

I was trying to run
ceph-volume lvm zap --destroy /dev/sdd but it errored out. Osd id on this
volume is the same as on next drive, /dev/sde, and osd.39 daemon is
running. This command was trying to zap running osd.

What is the proper way to clean both data and block db volumes, so I can
rerun ceph-deploy again, and add them to the pool?

Thank you!
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 'Missing' capacity

2019-04-18 Thread Brent Kennedy
That’s good to know as well, I was seeing the same thing.  I hope this is just 
an informational message though.

-Brent

-Original Message-
From: ceph-users  On Behalf Of Mark Schouten
Sent: Tuesday, April 16, 2019 3:15 AM
To: Igor Podlesny ; Sinan Polat 
Cc: Ceph Users 
Subject: Re: [ceph-users] 'Missing' capacity



root@proxmox01:~# ceph osd df tree | sort -n -k8 | tail -1
  1   ssd  0.87000  1.0  889GiB  721GiB  168GiB 81.14 1.50  82 
osd.1  


root@proxmox01:~# ceph osd df tree | grep -c osd
68


68*168=11424

That is closer, thanks. I thought that available was the same as the cluster 
available. But apparently it is the available space on the fullest OSD. Thanks, 
learned something again!
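
For anyone repeating this, a rough one-liner for the same estimate -- it assumes
AVAIL is the 7th column of ceph osd df tree and is reported in GiB, as in the
output above; column positions and units differ between releases, so treat it
as a sketch only:

ceph osd df tree | grep 'osd\.' | \
  awk '{a=$7+0; if (min=="" || a<min) min=a; n++} END {print min*n " GiB (approx.)"}'
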
--

Mark Schouten 

Tuxis, Ede, https://www.tuxis.nl

T: +31 318 200208 
 



- Original message -


From: Sinan Polat (si...@turka.nl)
Date: 16-04-2019 06:43
To: Igor Podlesny (ceph-u...@poige.ru)
Cc: Mark Schouten (m...@tuxis.nl), Ceph Users (ceph-users@lists.ceph.com)
Subject: Re: [ceph-users] 'Missing' capacity


Probably an imbalance of data across your OSDs.

Could you show ceph osd df?

From there take the disk with the lowest available space. Multiply that number by 
the number of OSDs. How much is it?

Kind regards,
Sinan Polat

> On 16 Apr 2019, at 05:21, Igor Podlesny  wrote the 
> following:
>
>> On Tue, 16 Apr 2019 at 06:43, Mark Schouten  wrote:
>> [...]
>> So where is the rest of the free space? :X
>
> Makes sense to see:
>
> sudo ceph osd df tree
>
> --
> End of message. Next message?
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Default Pools

2019-04-18 Thread Brent Kennedy
Yeah, that was a cluster created during Firefly...

Wish there was a good article on the naming and use of these, or perhaps a way 
I could make sure they are not used before deleting them.  I know RGW will 
recreate anything it uses, but I don’t want to lose data because I wanted a 
clean system.
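
For reference, a hedged sketch of sanity checks before deleting any of them --
these are standard rados commands, with .users.email used only as an example
pool name from the list below:

# per-pool object counts; a pool with zero objects is a deletion candidate
rados df
# list whatever objects a suspect pool holds (RGW pools often carry omap-only
# objects that show 0 bytes of data but still appear here)
rados -p .users.email ls | head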

-Brent

-Original Message-
From: Gregory Farnum  
Sent: Monday, April 15, 2019 5:37 PM
To: Brent Kennedy 
Cc: Ceph Users 
Subject: Re: [ceph-users] Default Pools

On Mon, Apr 15, 2019 at 1:52 PM Brent Kennedy  wrote:
>
> I was looking around the web for the reason for some of the default pools in 
> Ceph and I cant find anything concrete.  Here is our list, some show no use 
> at all.  Can any of these be deleted ( or is there an article my googlefu 
> failed to find that covers the default pools?
>
> We only use buckets, so I took out .rgw.buckets, .users and 
> .rgw.buckets.index…
>
> Name
> .log
> .rgw.root
> .rgw.gc
> .rgw.control
> .rgw
> .users.uid
> .users.email
> .rgw.buckets.extra
> default.rgw.control
> default.rgw.meta
> default.rgw.log
> default.rgw.buckets.non-ec

All of these are created by RGW when you run it, not by the core Ceph system. I 
think they're all used (although they may report sizes of 0, as they mostly 
make use of omap).

> metadata

Except this one used to be created-by-default for CephFS metadata, but that 
hasn't been true in many releases. So I guess you're looking at an old cluster? 
(In which case it's *possible* some of those RGW pools are also unused now but 
were needed in the past; I haven't kept good track of them.) -Greg

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] rgw windows/mac clients shitty, develop a new one?

2019-04-18 Thread Brian :
Hi Marc

Filezilla has decent S3 support https://filezilla-project.org/

ymmv of course!

On Thu, Apr 18, 2019 at 2:18 PM Marc Roos  wrote:
>
>
> I have been looking a bit at the s3 clients available to be used, and I
> think they are quite shitty, especially this Cyberduck that processes
> files with default reading rights to everyone. I am in the process of
> advising clients to use, for instance, this Mountain Duck. But I am not too
> happy about it. I don't like the fact that everything has default
> settings for amazon or other stuff in there for ftp or what ever.
>
> I am thinking of developing something in-house, more aimed at the ceph
> environments, easier/better to use.
>
> What I can think of:
>
> - cheaper, free or maybe even opensource
> - default settings for your ceph cluster
> - only configuration for object storage (no amazon, rackspace, backblaze
> shit)
> - default secure settings
> - offer in the client only functionality that is available from the
> specific ceph release
> - integration with the finder / explorer windows
>
> I am curious who would be interested in such a new client? Maybe better
> to send me your wishes directly, and not clutter the mailing list with
> this.
>
>
>
>
>
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] IO500 @ ISC19

2019-04-18 Thread John Bent
Call for Submission

*Deadline*: 10 June 2019 AoE

The IO500 is now accepting and encouraging submissions for the upcoming 4th
IO500 list to be revealed at ISC-HPC 2019 in Frankfurt, Germany. Once
again, we are also accepting submissions to the 10 node I/O challenge to
encourage submission of small scale results. The new ranked lists will be
announced at our ISC19 BoF [2]. We hope to see you, and your results, there.

The benchmark suite is designed to be easy to run and the community has
multiple active support channels to help with any questions. Please submit
and we look forward to seeing many of you at ISC 2019! Please note that
submissions of all size are welcome; the site has customizable sorting so
it is possible to submit on a small system and still get a very good
per-client score for example. Additionally, the list is about much more
than just the raw rank; all submissions help the community by collecting
and publishing a wider corpus of data. More details below.

Following the success of the Top500 in collecting and analyzing historical
trends in supercomputer technology and evolution, the IO500 was created in
2017, published its first list at SC17, and has grown exponentially since
then. The need for such an initiative has long been known within
High-Performance Computing; however, defining appropriate benchmarks had
long been challenging. Despite this challenge, the community, after long
and spirited discussion, finally reached consensus on a suite of benchmarks
and a metric for resolving the scores into a single ranking.

The multi-fold goals of the benchmark suite are as follows:

   1. Maximizing simplicity in running the benchmark suite
   2. Encouraging complexity in tuning for performance
   3. Allowing submitters to highlight their “hero run” performance numbers
   4. Forcing submitters to simultaneously report performance for
   challenging IO patterns.

Specifically, the benchmark suite includes a hero-run of both IOR and
mdtest configured however possible to maximize performance and establish an
upper-bound for performance. It also includes an IOR and mdtest run with
highly prescribed parameters in an attempt to determine a lower-bound.
Finally, it includes a namespace search as this has been determined to be a
highly sought-after feature in HPC storage systems that has historically
not been well-measured. Submitters are encouraged to share their tuning
insights for publication.

The goals of the community are also multi-fold:

   1. Gather historical data for the sake of analysis and to aid
   predictions of storage futures
   2. Collect tuning information to share valuable performance
   optimizations across the community
   3. Encourage vendors and designers to optimize for workloads beyond
   “hero runs”
   4. Establish bounded expectations for users, procurers, and
   administrators

10 Node I/O Challenge

At ISC, we will announce our second IO-500 award for the 10 Node Challenge.
This challenge is conducted using the regular IO-500 benchmark, however,
with the rule that exactly *10 compute nodes* must be used to run the
benchmark (one exception is find, which may use 1 node). You may use any
shared storage with, e.g., any number of servers. When submitting for the
IO-500 list, you can opt-in for “Participate in the 10 compute node
challenge only”, then we won't include the results in the ranked list.
Other 10 compute node submissions will be included in the full list and in
the ranked list. We will announce the result in a separate derived list and
in the full list but not on the ranked IO-500 list at io500.org.
Birds-of-a-feather

Once again, we encourage you to submit [1], to join our community, and to
attend our BoF “The IO-500 and the Virtual Institute of I/O” at ISC 2019
[2] where we will announce the fourth IO500 list and second 10 node
challenge list. The current list includes results from BeeGFS, DataWarp,
IME, Lustre, Spectrum Scale, and WekaIO. We hope that the next list has
even more.

We look forward to answering any questions or concerns you might have.

   - [1] http://io500.org/submission
   - [2] The BoF schedule will be announced soon

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] rgw windows/mac clients shitty, develop a new one?

2019-04-18 Thread Marc Roos


I have been looking a bit at the s3 clients available to be used, and I 
think they are quite shitty, especially this Cyberduck that processes 
files with default reading rights to everyone. I am in the process of 
advising clients to use, for instance, this Mountain Duck. But I am not too 
happy about it. I don't like the fact that everything has default 
settings for amazon or other stuff in there for ftp or what ever.

I am thinking of developing something in-house, more aimed at the ceph 
environments, easier/better to use. 

What I can think of:

- cheaper, free or maybe even opensource
- default settings for your ceph cluster
- only configuration for object storage (no amazon, rackspace, backblaze 
shit)
- default secure settings
- offer in the client only functionality that is available from the 
specific ceph release
- integration with the finder / explorer windows

I am curious who would be interested in such a new client? Maybe better 
to send me your wishes directly, and not clutter the mailing list with 
this.








___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph inside Docker containers inside VirtualBox

2019-04-18 Thread Varun Singh
Hi,
I am trying to setup Ceph through Docker inside a VM. My host machine
is Mac. My VM is an Ubuntu 18.04. Docker version is 18.09.5, build
e8ff056.
I am following the documentation present on ceph/daemon Docker Hub
page. The idea is, if I spawn docker containers as mentioned on the
page, I should get a ceph setup without KV store. I am not worried
about KV store as I just want to try it out. Following are the
commands I am firing to bring the containers up:

Monitor:
docker run -d --net=host -v /etc/ceph:/etc/ceph -v
/var/lib/ceph/:/var/lib/ceph/ -e MON_IP=10.0.2.15 -e
CEPH_PUBLIC_NETWORK=10.0.2.0/24 ceph/daemon mon

Manager:
docker run -d --net=host -v /etc/ceph:/etc/ceph -v
/var/lib/ceph/:/var/lib/ceph/ ceph/daemon mgr

OSD:
docker run -d --net=host --pid=host --privileged=true -v
/etc/ceph:/etc/ceph -v /var/lib/ceph/:/var/lib/ceph/ -v /dev/:/dev/ -e
OSD_DEVICE=/dev/vdd ceph/daemon osd

From the above commands I am able to spawn monitor and manager
properly. I verified this by firing this command on both monitor and
manager containers:
sudo docker exec d1ab985 ceph -s

I get following outputs for both:

  cluster:
id: 14a6e40a-8e54-4851-a881-661a84b3441c
health: HEALTH_OK

  services:
mon: 1 daemons, quorum serverceph-VirtualBox (age 62m)
mgr: serverceph-VirtualBox(active, since 56m)
osd: 0 osds: 0 up, 0 in

  data:
pools:   0 pools, 0 pgs
objects: 0 objects, 0 B
usage:   0 B used, 0 B / 0 B avail
pgs:

However when I try to bring up OSD using above command, it doesn't
work. Docker logs show this output:
2019-04-18 07:30:06  /opt/ceph-container/bin/entrypoint.sh: static:
does not generate config
2019-04-18 07:30:06  /opt/ceph-container/bin/entrypoint.sh: ERROR- The
device pointed by OSD_DEVICE (/dev/vdd) doesn't exist !

I am not sure why the doc asks to pass /dev/vdd to OSD_DEVICE env var.
I know there are five different ways of spawning the OSD, but I am not
able to figure out which one would be suitable for a simple
deployment. If you could please let me know how to spawn OSDs using
Docker, it would help a lot.


-- 
Regards,
Varun Singh

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph-iscsi: problem when discovery auth is disabled, but gateway receives auth requests

2019-04-18 Thread Matthias Leopold

Hi,

The Ceph iSCSI gateway has a problem when receiving discovery auth 
requests when discovery auth is not enabled. Target discovery fails in 
this case (see below). This is especially annoying with oVirt (KVM 
management platform) where you can't separate the two authentication 
phases. This leads to a situation where you are forced to use discovery 
auth and have the same credentials for target auth (for oVirt target). 
These credentials (for discovery auth) would then have to be shared for 
other targets on the same gateway, this is not acceptable. I saw that 
other iSCSI vendors (FreeNAS) don't have this problem. I don't know if 
this is Ceph gateway specific or a general LIO target problem. In any 
case I would be very happy if this could be resolved. I think that 
smooth integration of Ceph iSCSI gateway and oVirt should be of broader 
interest. Please correct me if I got anything wrong.


kernel messages when discovery_auth is disabled, but auth requests are 
received


Apr 18 13:05:01 ceiscsi0 kernel: CHAP user or password not set for 
Initiator ACL

Apr 18 13:05:01 ceiscsi0 kernel: Security negotiation failed.
Apr 18 13:05:01 ceiscsi0 kernel: iSCSI Login negotiation failed.

thx
matthias


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] failed to load OSD map for epoch X, got 0 bytes

2019-04-18 Thread Lomayani S. Laizer
Hello,

I have one OSD which can't start and gives the above error. Everything was
running OK until last night, when the interface card of the server hosting
this OSD went faulty.
We replaced the faulty interface and the other OSDs started fine, except one OSD.
We are running Ceph 14.2.0 and all OSDs are running BlueStore. This cluster
was created on Jewel. We upgraded this cluster to 14.2 from Mimic 3 weeks ago.

--
Lomayani
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Is it possible to run a standalone Bluestore instance?

2019-04-18 Thread Brad Hubbard
Let me try to reproduce this on centos 7.5 with master and I'll let
you know how I go.
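
For reference, a hedged sketch of the preload workaround suggested further down
in this thread -- run fio from the build directory and preload libceph-common so
the PriorityCache typeinfo resolves before the engine is dlopen'd; the paths
assume the build-tree layout shown below:

LD_LIBRARY_PATH=./lib LD_PRELOAD=./lib/libceph-common.so.0 \
  ./bin/fio --enghelp=libfio_ceph_objectstore.so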

On Thu, Apr 18, 2019 at 3:59 PM Can Zhang  wrote:
>
> Using the commands you provided, I actually find some differences:
>
> On my CentOS VM:
> ```
> # sudo find ./lib*  -iname '*.so*' | xargs nm -AD 2>&1 | grep
> _ZTIN13PriorityCache8PriCacheE
> ./libceph-common.so:0221cc08 V _ZTIN13PriorityCache8PriCacheE
> ./libceph-common.so.0:0221cc08 V _ZTIN13PriorityCache8PriCacheE
> ./libfio_ceph_objectstore.so: U _ZTIN13PriorityCache8PriCacheE
> ```
> ```
> # ldd libfio_ceph_objectstore.so |grep common
> libceph-common.so.0 => /root/ceph/build/lib/libceph-common.so.0
> (0x7fd13f3e7000)
> ```
> On my Ubuntu VM:
> ```
> $ sudo find ./lib*  -iname '*.so*' | xargs nm -AD 2>&1 | grep
> _ZTIN13PriorityCache8PriCacheE
> ./libfio_ceph_objectstore.so:019d13e0 V _ZTIN13PriorityCache8PriCacheE
> ```
> ```
> $ ldd libfio_ceph_objectstore.so |grep common
> libceph-common.so.0 =>
> /home/can/work/ceph/build/lib/libceph-common.so.0 (0x7f024a89e000)
> ```
>
> Notice the "U" and "V" from nm results.
>
>
>
>
> Best,
> Can Zhang
>
> On Thu, Apr 18, 2019 at 9:36 AM Brad Hubbard  wrote:
> >
> > Does it define _ZTIN13PriorityCache8PriCacheE ? If it does, and all is
> > as you say, then it should not say that _ZTIN13PriorityCache8PriCacheE
> > is undefined. Does ldd show that it is finding the libraries you think
> > it is? Either it is finding a different version of that library
> > somewhere else or the version you have may not define that symbol.
> >
> > On Thu, Apr 18, 2019 at 11:12 AM Can Zhang  wrote:
> > >
> > > It's already in LD_LIBRARY_PATH, under the same directory of
> > > libfio_ceph_objectstore.so
> > >
> > >
> > > $ ll lib/|grep libceph-common
> > > lrwxrwxrwx. 1 root root19 Apr 17 11:15 libceph-common.so ->
> > > libceph-common.so.0
> > > -rwxr-xr-x. 1 root root 211853400 Apr 17 11:15 libceph-common.so.0
> > >
> > >
> > >
> > >
> > > Best,
> > > Can Zhang
> > >
> > > On Thu, Apr 18, 2019 at 7:00 AM Brad Hubbard  wrote:
> > > >
> > > > On Wed, Apr 17, 2019 at 1:37 PM Can Zhang  wrote:
> > > > >
> > > > > Thanks for your suggestions.
> > > > >
> > > > > I tried to build libfio_ceph_objectstore.so, but it fails to load:
> > > > >
> > > > > ```
> > > > > $ LD_LIBRARY_PATH=./lib ./bin/fio --enghelp=libfio_ceph_objectstore.so
> > > > >
> > > > > fio: engine libfio_ceph_objectstore.so not loadable
> > > > > IO engine libfio_ceph_objectstore.so not found
> > > > > ```
> > > > >
> > > > > I managed to print the dlopen error, it said:
> > > > >
> > > > > ```
> > > > > dlopen error: ./lib/libfio_ceph_objectstore.so: undefined symbol:
> > > > > _ZTIN13PriorityCache8PriCacheE
> > > >
> > > > $ c++filt _ZTIN13PriorityCache8PriCacheE
> > > > typeinfo for PriorityCache::PriCache
> > > >
> > > > $ sudo find /lib* /usr/lib* -iname '*.so*' | xargs nm -AD 2>&1 | grep
> > > > _ZTIN13PriorityCache8PriCacheE
> > > > /usr/lib64/ceph/libceph-common.so:008edab0 V
> > > > _ZTIN13PriorityCache8PriCacheE
> > > > /usr/lib64/ceph/libceph-common.so.0:008edab0 V
> > > > _ZTIN13PriorityCache8PriCacheE
> > > >
> > > > It needs to be able to find libceph-common, put it in your path or 
> > > > preload it.
> > > >
> > > > > ```
> > > > >
> > > > > I found a not-so-relevant
> > > > > issue(https://tracker.ceph.com/issues/38360), the error seems to be
> > > > > caused by mixed versions. My build environment is CentOS 7.5.1804 with
> > > > > SCL devtoolset-7, and ceph is latest master branch. Does someone know
> > > > > about the symbol?
> > > > >
> > > > >
> > > > > Best,
> > > > > Can Zhang
> > > > >
> > > > > Best,
> > > > > Can Zhang
> > > > >
> > > > >
> > > > > On Tue, Apr 16, 2019 at 8:37 PM Igor Fedotov  wrote:
> > > > > >
> > > > > > Besides already mentioned store_test.cc one can also use ceph
> > > > > > objectstore fio plugin
> > > > > > (https://github.com/ceph/ceph/tree/master/src/test/fio) to access
> > > > > > standalone BlueStore instance from FIO benchmarking tool.
> > > > > >
> > > > > >
> > > > > > Thanks,
> > > > > >
> > > > > > Igor
> > > > > >
> > > > > > On 4/16/2019 7:58 AM, Can ZHANG wrote:
> > > > > > > Hi,
> > > > > > >
> > > > > > > I'd like to run a standalone Bluestore instance so as to test and 
> > > > > > > tune
> > > > > > > its performance. Are there any tools about it, or any suggestions?
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > Best,
> > > > > > > Can Zhang
> > > > > > >
> > > > > > > ___
> > > > > > > ceph-users mailing list
> > > > > > > ceph-users@lists.ceph.com
> > > > > > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> > > > > ___
> > > > > ceph-users mailing list
> > > > > ceph-users@lists.ceph.com
> > > > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> > > >
> > > >
> > > >
> > > > --
> > > > Cheers,
> > > > Brad
> >
> >
> >
> 

[ceph-users] Ceph v13.2.4 issue with snaptrim

2019-04-18 Thread Vytautas Jonaitis
Hello,

A few months ago we experienced an issue with Ceph v13.2.4:

1. One of the nodes had all its OSDs set to out, to clean them up for 
replacement.
2. Noticed that a lot of snaptrim was running.
3. Set the nosnaptrim flag on the cluster (to improve performance).
4. Once the mon_osd_snap_trim_queue_warn_on warning appeared, removed the nosnaptrim flag.
5. All OSDs on the cluster crashed and started flapping. Set the nosnaptrim flag 
back on.

The issue is registered in the tracker and additional logs were collected: 
https://tracker.ceph.com/issues/38124
However, it is still present.

What options do I have? I would like to know when/if the issue will be fixed (it 
was not in the v13.2.5 release) or, alternatively, to contact a developer who can 
resolve it.
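
For reference, a hedged sketch of the knobs mentioned above, plus a common way
to throttle trimming instead of disabling it entirely -- osd_snap_trim_sleep is
a standard OSD option, but the value here is only an example:

# pause / resume snapshot trimming cluster-wide
ceph osd set nosnaptrim
ceph osd unset nosnaptrim
# throttle trimming instead of switching it off (seconds to sleep between trims)
ceph config set osd osd_snap_trim_sleep 2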

--
Best regards,
Vytautas J.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Multi-site replication speed

2019-04-18 Thread Brian Topping
Hi Casey, thanks for this info. It’s been doing something for 36 hours, but not 
updating the status at all. So it either takes a really long time for 
“preparing for full sync” or I’m doing something wrong. This is helpful 
information, but there’s a myriad of states that the system could be in. 

With that, I’m going to set up a lab rig and see if I can build a fully 
replicated state. At that point, I’ll have a better understanding of what a 
working system responds like and maybe I can at least ask better questions, 
hopefully figure it out myself. 

Thanks again! Brian
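
For anyone following along, a minimal sketch of the restart-and-reinit sequence
Casey describes below, run on the secondary zone -- the radosgw-admin commands
are the ones discussed in this thread, while the systemd unit name is only a
guess and depends on how the gateway was deployed:

# clear the sync status, then restart the gateway so it restarts full sync
radosgw-admin metadata sync init
radosgw-admin data sync init
systemctl restart ceph-radosgw@rgw.$(hostname -s)   # unit name varies
# watch progress
radosgw-admin sync status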

> On Apr 16, 2019, at 08:38, Casey Bodley  wrote:
> 
> Hi Brian,
> 
> On 4/16/19 1:57 AM, Brian Topping wrote:
>>> On Apr 15, 2019, at 5:18 PM, Brian Topping >> > wrote:
>>> 
>>> If I am correct, how do I trigger the full sync?
>> 
>> Apologies for the noise on this thread. I came to discover the 
>> `radosgw-admin [meta]data sync init` command. That’s gotten me with 
>> something that looked like this for several hours:
>> 
>>> [root@master ~]# radosgw-admin  sync status
>>>   realm 54bb8477-f221-429a-bbf0-76678c767b5f (example)
>>>   zonegroup 8e33f5e9-02c8-4ab8-a0ab-c6a37c2bcf07 (us)
>>>zone b6e32bc8-f07e-4971-b825-299b5181a5f0 (secondary)
>>>   metadata sync preparing for full sync
>>> full sync: 64/64 shards
>>> full sync: 0 entries to sync
>>> incremental sync: 0/64 shards
>>> metadata is behind on 64 shards
>>> behind shards: 
>>> [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63]
>>>   data sync source: 35835cb0-4639-43f4-81fd-624d40c7dd6f (master)
>>> preparing for full sync
>>> full sync: 1/128 shards
>>> full sync: 0 buckets to sync
>>> incremental sync: 127/128 shards
>>> data is behind on 1 shards
>>> behind shards: [0]
>> 
>> I also had the data sync showing a list of “behind shards”, but both of them 
>> sat in “preparing for full sync” for several hours, so I tried 
>> `radosgw-admin [meta]data sync run`. My sense is that was a bad idea, but 
>> neither of the commands seem to be documented and the thread I found them on 
>> indicated they wouldn’t damage the source data.
>> 
>> QUESTIONS at this point:
>> 
>> 1) What is the best sequence of commands to properly start the sync? Does 
>> init just set things up and do nothing until a run is started?
> The sync is always running. Each shard starts with full sync (where it lists 
> everything on the remote, and replicates each), then switches to incremental 
> sync (where it polls the replication logs for changes). The 'metadata sync 
> init' command clears the sync status, but this isn't synchronized with the 
> metadata sync process running in radosgw(s) - so the gateways need to restart 
> before they'll see the new status and restart the full sync. The same goes 
> for 'data sync init'.
>> 2) Are there commands I should run before that to clear out any previous bad 
>> runs?
> Just restart gateways, and you should see progress via 'sync status'.
>> 
>> *Thanks very kindly for any assistance. *As I didn’t really see any 
>> documentation outside of setting up the realms/zones/groups, it seems like 
>> this would be useful information for others that follow.
>> 
>> best, Brian
>> 
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com