Hello,
I'm in the process of building a new Ceph cluster, and this time around I
was considering going with NVMe SSD drives.
While searching for something along the lines of 1 TB per SSD, I found the
"Samsung 983 DCT 960GB NVMe M.2 Enterprise SSD for Business".
More info:
https://www.samsung.com/us/busines
Are you running on HDDs? The minimum allocation size is 64 KB by
default in that case. You can control it via the bluestore_min_alloc_size
parameter at OSD creation time.
64 KB times 8 million files is 512 GB, which is the amount of usable
space you reported before running the test, so that seems to add up.
I have tried to create erasure pools for CephFS using the examples given
at
https://swamireddy.wordpress.com/2016/01/26/ceph-diff-between-erasure-and-replicated-pool-type/
but this is resulting in some weird behaviour. The only number in
common is that when creating the metadata store; is this
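For comparison, a minimal sketch of one common way to attach an erasure-coded
data pool to an existing CephFS; pool, profile, and path names are
placeholders, and allow_ec_overwrites requires BlueStore and Luminous or later:

  ceph osd erasure-code-profile set ec-profile k=2 m=1 crush-failure-domain=host
  ceph osd pool create cephfs_ec 128 128 erasure ec-profile
  ceph osd pool set cephfs_ec allow_ec_overwrites true
  # keep the default data and metadata pools replicated, add the EC pool on top
  ceph fs add_data_pool cephfs cephfs_ec
  # then pin a directory to the EC pool via its file layout
  setfattr -n ceph.dir.layout.pool -v cephfs_ec /mnt/cephfs/ec-data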
Hello,
I wanted to know whether there are any hard limits on
- Max number of Ceph data nodes
- Max number of OSDs per data node
- Global max on the number of OSDs
- Any limitations on the size of each drive managed by an OSD?
- Any limitation on the number of client nodes?
- Any limitation on maximum number
Hi Jörn,
On Fri, Mar 29, 2019 at 5:20 AM Clausen, Jörn wrote:
>
> Hi!
>
> In my ongoing quest to wrap my head around Ceph, I created a CephFS
> (data and metadata pool with replicated size 3, 128 pgs each).
What version?
> When I
> mount it on my test client, I see a usable space of ~500 GB, wh
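For anyone following along, one way to answer the version question, assuming
admin access to the cluster:

  ceph versions   # per-daemon versions across the cluster (Luminous and later)
  ceph -v         # version of the locally installed binaries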
Hi,
I upgraded my test Ceph iSCSI gateways to
ceph-iscsi-3.0-6.g433bbaa.el7.noarch.
I'm trying to use the new parameter "cluster_client_name", which - to me -
sounds like I no longer have to access the Ceph cluster as "client.admin".
I created a "client.iscsi" user and watched what happene
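For illustration, a sketch of what I would expect the setup to look like; the
cap list below is an assumption on my part, not taken from the documentation,
and the paths assume the default layout:

  # create the user (caps are a guess)
  ceph auth get-or-create client.iscsi \
      mon 'allow *' osd 'allow *' \
      -o /etc/ceph/ceph.client.iscsi.keyring

  # /etc/ceph/iscsi-gateway.cfg excerpt
  [config]
  cluster_client_name = client.iscsi
  gateway_keyring = ceph.client.iscsi.keyring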
I would like to use an rbd image from a replicated hdd pool in a libvirt/kvm
VM.
1. What is the best filesystem to use with rbd, just standard xfs?
2. Is there any recommended tuning for lvm when putting multiple rbd
images on it?
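For reference, a minimal sketch of the libvirt disk definition commonly used
for an rbd-backed VM disk; pool, image, host, and secret values are
placeholders, and the filesystem choice inside the guest (plain XFS is common)
is independent of rbd:

  <disk type='network' device='disk'>
    <driver name='qemu' type='raw' cache='writeback'/>
    <auth username='libvirt'>
      <secret type='ceph' uuid='...'/>
    </auth>
    <source protocol='rbd' name='hdd-pool/vm-disk-1'>
      <host name='mon1.example.com' port='6789'/>
    </source>
    <target dev='vda' bus='virtio'/>
  </disk>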
On Fri, Mar 29, 2019 at 1:48 AM Christian Balzer wrote:
>
> On Fri, 29 Mar 2019 01:22:06 -0400 Erik McCormick wrote:
>
> > Hello all,
> >
> > Having dug through the documentation and reading mailing list threads
> > until my eyes rolled back in my head, I am left with a conundrum
> > still. Do I s
The issue we have is large leveldbs. Do we have any setting to disable
compaction of leveldb on OSD start?
in.linkedin.com/in/nikhilravindra
On Fri, Mar 29, 2019 at 7:44 PM Nikhil R wrote:
> Any help on this would be much appreciated, as our prod has been down for a
> day and each OSD restart is
Any help on this would be much appreciated, as our prod has been down for a
day and each OSD restart is taking 4-5 hours.
in.linkedin.com/in/nikhilravindra
On Fri, Mar 29, 2019 at 7:43 PM Nikhil R wrote:
> We have maxed out the files per directory. Ceph is trying to do an online split
> due to which OSD'
We have maxed out the files per directory. Ceph is trying to do an online split,
due to which OSDs are crashing. We increased split_multiple and merge_threshold
for now and are restarting the OSDs. Now, on these restarts, the leveldb
compaction is taking a long time. Below are some of the logs.
2019-03-2
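For context on the split settings mentioned above, a sketch of the kind of
ceph.conf excerpt involved; the values are purely illustrative, and a negative
merge threshold disables merging entirely:

  [osd]
  filestore_split_multiple = 8
  filestore_merge_threshold = -10

Offline splitting with ceph-objectstore-tool's apply-layout-settings operation
(run while the OSD is stopped) is sometimes suggested as an alternative to
crashing online splits, though that is beyond what this thread shows.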
Hi!
In my ongoing quest to wrap my head around Ceph, I created a CephFS
(data and metadata pool with replicated size 3, 128 pgs each). When I
mount it on my test client, I see a usable space of ~500 GB, which I
guess is okay for the raw capacity of 1.6 TiB I have in my OSDs.
I run bonnie wit
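As a quick sanity check on those numbers, assuming the reported usable space
is simply raw capacity divided by the replication factor:

  1.6 TiB raw / 3 replicas ≈ 0.53 TiB ≈ 546 GiB

so ~500 GB of usable space reported on the client is in the expected range.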
Hi,
On 28.03.19 at 20:03, c...@elchaka.de wrote:
> Hi Uwe,
>
> On 28 February 2019 11:02:09 CET, Uwe Sauter wrote:
>> On 28.02.19 at 10:42, Matthew H wrote:
>>> Have you made any changes to your ceph.conf? If so, would you mind
>> copying them into this thread?
>>
>> No, I just deleted an OSD
Hi Erik,
For now I have everything on the HDDs, and I have some pools on just the
SSDs that require more speed. That looked to me like the best way to start
simple, and I do not yet seem to need the IOPS that would justify changing
this setup. However, I am curious about the kind of performance increase you
will get fr
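For reference, SSD-only pools like those are typically set up with
device-class CRUSH rules on Luminous or later; a sketch with placeholder rule
and pool names:

  ceph osd crush rule create-replicated replicated-ssd default host ssd
  ceph osd pool set fast-pool crush_rule replicated-ssd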