[ceph-users] Re: How to get num ops blocked per OSD

2020-03-14 Thread Robert LeBlanc
We are already gathering the Ceph admin socket stats with the Diamond
plugin and sending that to graphite, so I guess I just need to look through
that to find what I'm looking for.
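
In case it is useful to anyone else, here is a rough sketch of pulling per-OSD
blocked-op counts straight from the admin sockets on a node (assuming the
sockets live under /var/run/ceph, jq is installed, and that this release
supports `dump_blocked_ops`; verify the command and its output format on your
version first):

#!/bin/bash
# Sketch only: count blocked ops per OSD on this node via the admin sockets.
# The "ops" array in the dump_blocked_ops output is an assumption based on the
# format of dump_ops_in_flight; adjust the jq filter if your release differs.
for sock in /var/run/ceph/ceph-osd.*.asok; do
    osd=$(basename "$sock" .asok)   # e.g. ceph-osd.12
    blocked=$(ceph --admin-daemon "$sock" dump_blocked_ops 2>/dev/null | jq '.ops | length')
    echo "${osd}: ${blocked:-0} blocked ops"
done

The same loop could feed the existing Diamond/graphite pipeline, or the
Prometheus exporter suggested below.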

Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


On Fri, Mar 13, 2020 at 4:48 PM Anthony D'Atri wrote:

> Yeah the removal of that was annoying for sure.  ISTR that one can gather
> the information from the OSDs’ admin sockets.
>
> Envision a Prometheus exporter that polls the admin sockets (in parallel)
> and Grafana panes that graph slow requests by OSD and by node.
>
>
> > On Mar 13, 2020, at 4:14 PM, Robert LeBlanc wrote:
> >
> > For Jewel I wrote a script to take the output of `ceph health detail
> > --format=json` and send alerts to our system that ordered the osds based on
> > how long the ops were blocked and which OSDs had the most ops blocked. This
> > was really helpful to quickly identify which OSD out of a list of 100 would
> > be the most probable one having issues. Since upgrading to Luminous, I
> > don't get that and I'm not sure where that info went to. Do I need to query
> > the manager now?
> >
> > This is the regex I was using to extract the pertinent information:
> >
> > '^(\d+) ops are blocked > (\d+\.+\d+) sec on osd\.(\d+)$'
> >
> > Thanks,
> > Robert LeBlanc
> > 
> > Robert LeBlanc
> > PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Advice on sizing WAL/DB cluster for Optane and SATA SSD disks.

2020-03-14 Thread Victor Hooi
Hi,

I'm building a 4-node Proxmox cluster, with Ceph for the VM disk storage.

On each node, I have:


   - 1 x 512GB M.2 SSD (for Proxmox/boot volume)
   - 1 x 960GB Intel Optane 905P (for Ceph WAL/DB)
   - 6 x 1.92TB Intel S4610 SATA SSD (for Ceph OSD)

I'm using the Proxmox "pveceph" command to set up the OSDs.

By default this seems to pick 10% of the OSD size for the DB volume, and 1%
of the OSD size for the WAL volume.

This means after four drives, I ran out of space:

# pveceph osd create /dev/sde -db_dev /dev/nvme0n1
> create OSD on /dev/sde (bluestore)
> creating block.db on '/dev/nvme0n1'
>   Rounding up size to full physical extent 178.85 GiB
> lvcreate
> 'ceph-861ebf6d-8fee-4313-8de6-4e797dc436ee/osd-db-da591d0f-8a05-42fa-bc62-a093bf98aded'
> error:   Volume group "ceph-861ebf6d-8fee-4313-8de6-4e797dc436ee" has
> insufficient free space (45784 extents): 45786 required.


Anyway, I assume that means I need to tune my DB and WAL volumes down from
the defaults.
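
For what it's worth, the numbers line up with the failure: six DB volumes at
the default ~179 GiB each would need roughly 1073 GiB, while the 960 GB Optane
only offers about 894 GiB. Below is a sketch of what explicit sizing might look
like, assuming pveceph still accepts a -db_size parameter in GiB (please check
your pveceph version) and using placeholder device names:

# Sketch only: cap each DB volume so that six of them fit on the Optane
# (6 x 145 GiB = 870 GiB, leaving some headroom for LVM metadata).
# /dev/sd[b-g] are placeholders; substitute your actual OSD devices.
for dev in /dev/sd[b-g]; do
    pveceph osd create "$dev" -db_dev /dev/nvme0n1 -db_size 145
done

As far as I understand, if only a DB device is given, BlueStore places the WAL
on the same volume, so a separate WAL partition on the Optane may not be needed
at all.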

What advice do you have in terms of making the best use of the available
space between WAL and DB?

What is the impact of having WAL and DB smaller than 1% and 10% of OSD size
respectively?

Thanks,
Victor
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] rgw.none shows extremely large object count

2020-03-14 Thread yuezhu3
Hello, we are running a Ceph cluster + RGW on Luminous 12.2.12 that serves as an
S3-compatible store. We have noticed some buckets where the `rgw.none` section in
the output of `radosgw-admin bucket stats` shows an extremely large value for
`num_objects`, which is not plausible. It looks like an underflow: a positive
number was subtracted from 0 and the result is displayed as an unsigned 64-bit
integer (18446744073709551607 = 2^64 - 9, i.e. the counter is actually -9). For
example,

```
# radosgw-admin bucket stats --bucket redacted
{
"bucket": "redacted",
 ...
"usage": {
"rgw.none": {
"size": 0,
"size_actual": 0,
"size_utilized": 0,
"size_kb": 0,
"size_kb_actual": 0,
"size_kb_utilized": 0,
"num_objects": 18446744073709551607
},
"rgw.main": {
"size": 1687971465874,
"size_actual": 1696692400128,
"size_utilized": 1687971465874,
"size_kb": 1648409635,
"size_kb_actual": 1656926172,
"size_kb_utilized": 1648409635,
"num_objects": 4290147
},
"rgw.multimeta": {
"size": 0,
"size_actual": 0,
"size_utilized": 0,
"size_kb": 0,
"size_kb_actual": 0,
"size_kb_utilized": 0,
"num_objects": 75
}
},
"bucket_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
}
}
```

We did find a few earlier reports of this issue, e.g.
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-November/037531.html.

Are there any known usage patterns that can lead the object count to become that
large? Also, is there a way to accurately collect the object count for each
bucket in the cluster? We would like to use it for management purposes.
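
One approach we are considering, sketched below on the assumption that
`radosgw-admin bucket check` with `--check-objects --fix` rebuilds the
per-bucket stats from the index (please verify the flags and behaviour on
12.2.12 before running it against production buckets):

# Sketch only: ask RGW to recount the index for one bucket, then sum the
# num_objects fields across all usage categories with jq.
radosgw-admin bucket check --bucket=redacted --check-objects --fix
radosgw-admin bucket stats --bucket=redacted | jq '[.usage[].num_objects] | add'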
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: New 3 node Ceph cluster

2020-03-14 Thread Martin Verges
Hello Amudhan,

I will be using Cephfs + with samba for max 10 clients for upload and
> download.
>

Please use the Samba vfs_ceph module and not the kernel mount.
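
A minimal smb.conf sketch for a vfs_ceph share (the share name, path and the
cephx user "samba" are placeholders; see the vfs_ceph man page for the full
option list):

[share]
    path = /
    vfs objects = ceph
    ceph:config_file = /etc/ceph/ceph.conf
    ceph:user_id = samba
    kernel share modes = no
    read only = no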

Earlier I have tested orchestration using ceph-deploy in the test setup.
> now, is there any other alternative to ceph-deploy?
>

Yes, try our deployment tool. It brings you Ceph the easy way, covering
everything from Ceph RBD, S3, CephFS, NFS, iSCSI, and SMB, hassle-free.

Storage Node HW is Intel Xeon E5v2 8 core single Proc, 32GB RAM and 10Gb
> Nic 2 nos., 6TB SATA  HDD 24 Nos. each node, OS separate SSD disk
>

You need at least 4 GB of RAM per HDD, so 24 disks per node works out to
roughly 96 GB; with only 32 GB, using 24 disks per system is not recommended.
With croit, you won't need an OS disk, as the storage node gets live-booted
over the network using PXE. Depending on your requirements, the CPU could
become a bottleneck as well.

Can I restrict folder access to the user using cephfs + vfs samba or should
> I use ceph client + samba?
>

In croit you can attach the Samba service to an Active Directory to make use
of its permissions. You can configure them by hand as well.

Ubuntu or Centos?
>

Debian ;) it's the best. But Ubuntu is ok as well.

Any block size consideration for object size, metadata when using cephfs?
>

Leave it at the defaults unless you know you have a special case. A lot of the
issues we see in the wild come from bad configurations copy-pasted from a
random page found on Google.

--
Martin Verges
Managing director

Mobile: +49 174 9335695
E-Mail: martin.ver...@croit.io
Chat: https://t.me/MartinVerges

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263

Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx


On Sat, 14 March 2020 at 08:17, Amudhan P wrote:

> Hi,
>
> I am planning to create a new 3 node ceph storage cluster.
>
> I will be using Cephfs + with samba for max 10 clients for upload and
> download.
>
> Storage Node HW is Intel Xeon E5v2 8 core single Proc, 32GB RAM and 10Gb
> Nic 2 nos., 6TB SATA  HDD 24 Nos. each node, OS separate SSD disk.
>
> Earlier I have tested orchestration using ceph-deploy in the test setup.
> now, is there any other alternative to ceph-deploy?
>
> Can I restrict folder access to the user using cephfs + vfs samba or should
> I use ceph client + samba?
>
> Ubuntu or Centos?
>
> Any block size consideration for object size, metadata when using cephfs?
>
> Idea or suggestion from existing users. I am also going to start to explore
> all the above.
>
> regards
> Amudhan
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Is there a better way to make a samba/nfs gateway? (Marc Roos)

2020-03-14 Thread Martin Verges
Hello Chad,

There are plenty, starting with avoiding the problems caused by lost connections
with the kernel CephFS mount, and extending to a much simpler service setup.
But what would be the point of stacking different tools (kernel mount, SMB
service, ...) together, untested, just because you can?

--
Martin Verges
Managing director

Mobile: +49 174 9335695
E-Mail: martin.ver...@croit.io
Chat: https://t.me/MartinVerges

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263

Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx


On Fri, 13 March 2020 at 16:24, Chad William Seys <cws...@physics.wisc.edu> wrote:

> Awhile back I thought there were some limitations which prevented us
> from trying this, but I cannot remember...
>
> What does the Ceph VFS gain you over exporting via the CephFS kernel module
> (kernel 4.19)?  What does it lose you?
>
> (I.e. pros and cons versus kernel module?)
>
> Thanks!
> C.
>
> > It's based on vfs_ceph and you can read more about how to configure it
> > yourself on
> > https://www.samba.org/samba/docs/current/man-html/vfs_ceph.8.html.
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: New 3 node Ceph cluster

2020-03-14 Thread Ashley Merrick
I would say you definitely need more RAM with that many disks.




On Sat, 14 Mar 2020 15:17:14 +0800 amudha...@gmail.com wrote:


Hi,

I am planning to create a new 3 node ceph storage cluster.

I will be using Cephfs + with samba for max 10 clients for upload and
download.

Storage Node HW is Intel Xeon E5v2 8 core single Proc, 32GB RAM and 10Gb
Nic 2 nos., 6TB SATA HDD 24 Nos. each node, OS separate SSD disk.

Earlier I have tested orchestration using ceph-deploy in the test setup.
now, is there any other alternative to ceph-deploy?

Can I restrict folder access to the user using cephfs + vfs samba or should
I use ceph client + samba?

Ubuntu or Centos?

Any block size consideration for object size, metadata when using cephfs?

Idea or suggestion from existing users. I am also going to start to explore
all the above.

regards
Amudhan
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: New 3 node Ceph cluster

2020-03-14 Thread jesper
Hi.

Unless there are plans for going to petabyte scale with it, I really don't
see the benefit of getting CephFS involved over just an RBD image with a VM
running standard Samba on top.

More performant and less complexity to handle - zero gains (by my book).
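
A minimal sketch of that alternative, assuming a replicated pool named "rbd"
already exists and picking an arbitrary image name and size:

# Sketch only: one RBD image becomes the Samba VM's data disk; Samba inside
# the VM stays completely stock.
rbd create rbd/samba-data --size 4T
# Attach the image to the VM (Proxmox does this for you when you add a disk on
# the Ceph storage), then partition and format it from inside the guest.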

Jesper

> Hi,
>
> I am planning to create a new 3 node ceph storage cluster.
>
> I will be using Cephfs + with samba for max 10 clients for upload and
> download.
>
> Storage Node HW is Intel Xeon E5v2 8 core single Proc, 32GB RAM and 10Gb
> Nic 2 nos., 6TB SATA  HDD 24 Nos. each node, OS separate SSD disk.
>
> Earlier I have tested orchestration using ceph-deploy in the test setup.
> now, is there any other alternative to ceph-deploy?
>
> Can I restrict folder access to the user using cephfs + vfs samba or
> should
> I use ceph client + samba?
>
> Ubuntu or Centos?
>
> Any block size consideration for object size, metadata when using cephfs?
>
> Idea or suggestion from existing users. I am also going to start to
> explore
> all the above.
>
> regards
> Amudhan
>

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Is there a better way to make a samba/nfs gateway?

2020-03-14 Thread Konstantin Shalygin

On 3/13/20 10:47 PM, Chip Cox wrote:
Konstantin - in your Windows environment, would it be beneficial to be able to
have NTFS data land as S3 (object store) on a Ceph storage appliance?  Or does
it have to be NFS?


Thanks and look forward to hearing back.


Nope, for Windows we use CephFS over a Samba VFS (vfs_ceph) CTDB cluster.
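
For anyone curious, a very rough sketch of the moving parts; the share name,
paths and IPs are made up, and the CTDB configuration layout differs between
Samba versions:

# smb.conf (on every gateway): enable clustering and export CephFS via vfs_ceph
[global]
    clustering = yes
[winshare]
    path = /
    vfs objects = ceph
    ceph:config_file = /etc/ceph/ceph.conf

# /etc/ctdb/nodes (identical on every gateway): private IPs of the CTDB members
10.0.0.11
10.0.0.12
10.0.0.13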



k
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] New 3 node Ceph cluster

2020-03-14 Thread Amudhan P
Hi,

I am planning to create a new 3-node Ceph storage cluster.

I will be using CephFS with Samba for a maximum of 10 clients, for upload and
download.

Storage node HW is a single Intel Xeon E5 v2 8-core processor, 32GB RAM,
2 x 10Gb NICs, and 24 x 6TB SATA HDDs per node, with the OS on a separate SSD.

Earlier I tested orchestration using ceph-deploy in a test setup. Now, is
there any other alternative to ceph-deploy?

Can I restrict folder access per user using CephFS + Samba vfs, or should I
use the Ceph client + Samba?

Ubuntu or CentOS?

Any block size considerations for object size and metadata when using CephFS?

Any ideas or suggestions from existing users? I am also going to start
exploring all of the above.

regards
Amudhan
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io