Regardless of whether your setup is using ceph (and it doesn't sound like it is),
your question has everything to do with clustering CloudStack and nothing
to do with ceph.
I haven't used that VM solution before. I use Proxmox (which has native
ceph support), and when it's in clustering mode you can conf
Does anyone have an idea why I am getting osd_bytes=0 here?
ceph daemon mon.c perf dump cluster
{
    "cluster": {
        "num_mon": 3,
        "num_mon_quorum": 3,
        "num_osd": 6,
        "num_osd_up": 6,
        "num_osd_in": 6,
        "osd_epoch": 3593,
        "osd_bytes": 0,
Hello,
I have a CloudStack system with one management server, one NFS server
and two KVM host servers. Initially I configured CloudStack
with hypervm1, so the system VMs and console proxy got created on it.
Lately I have added hypervm2 and created a few instances on it. So my
doubt i
Hello,
just testing the latest Luminous RC packages on Debian Stretch with
bluestore OSDs.
OSDs without a separate block.db partition do fine.
But when I try to create an OSD with a separate block.db partition, e.g.:
ceph-deploy osd create --bluestore --block-db=/dev/nvme0bnp1 node1:/dev/sdi
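One thing worth double-checking is the NVMe partition name (the usual Linux form is /dev/nvme0n1p1). Beyond that, a hedged way to narrow it down is to run ceph-disk directly on node1 and let it carve the DB partition out of the raw NVMe device itself (device names below are examples only):

ceph-disk prepare --bluestore --block.db /dev/nvme0n1 /dev/sdi   # ceph-disk creates the DB partition in free space
ceph-disk list                                                   # confirm which partitions were created and how they are tagged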
Hi,
We tried to use the Swift interface to the Ceph object store and soon found out
that it does not support SLO/DLO. We are planning to move to the S3
interface. Are there any known limitations in the S3 support of the Ceph
object store?
Best,
Murali Balcha
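For a quick smoke test of the S3 side against radosgw, something like the following sketch should do (endpoint, port and credentials are placeholders); it also exercises multipart upload, the closest S3 analogue to Swift's SLO/DLO:

aws configure set aws_access_key_id <rgw-access-key>
aws configure set aws_secret_access_key <rgw-secret-key>
aws --endpoint-url http://rgw.example.com:7480 s3 mb s3://testbucket
aws --endpoint-url http://rgw.example.com:7480 s3 cp ./big-object s3://testbucket/   # large objects go via multipart automatically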
On Fri, Jul 7, 2017 at 2:48 AM, Piotr Dałek wrote:
> Is this:
> https://github.com/yuyuyu101/ceph/commit/794b49b5b860c538a349bdadb16bb6ae97ad9c20#commitcomment-15707924
> the issue you mention? Because at this point I'm considering switching to
> the C++ API and passing a static bufferptr buried in my b
Hello List,
It would be nice if somebody would write an up-to-date tutorial for this,
since Stretch is now the official distro, or provide packages with ceph
support in a separate repo.
I have to say that it's a big FAIL for them that this important support
functionality is not already included i
On 07/07/17 14:03, David Turner wrote:
>
> So many of your questions depend on what your cluster is used for. We
> don't even know whether it's rbd or cephfs from what you've said, and that
> still isn't enough to fully answer your questions. I have a much smaller
> 3 node cluster using Erasure coding for rbds as
I submitted a "ceph for absolute, complete beginners" presentation but idk if
it will be approved since I'm kind of a newcomer myself. I'd also like a Ceph
BoF.
<3 Trilliams
Sent from my iPhone
> On Jul 6, 2017, at 10:50 PM, Blair Bethwaite
> wrote:
>
> Oops, this time plain text...
>
>> O
So many of your questions depend on what your cluster is used for. We
don't even know whether it's rbd or cephfs from what you've said, and that
still isn't enough to fully answer your questions. I have a much smaller 3 node
cluster using Erasure coding for rbds as well as cephfs, and it is fine
speed-wise for my n
Hi,
Currently, our ceph cluster is all 3-way replicated, and works very
nicely. We're considering the possibility of adding an erasure-coding pool,
which I understand would require a cache tier in front of it to ensure
decent performance.
I am wondering what sort of spec we should be thinking about
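For context, the usual shape of that setup is roughly the following sketch (pool names, k/m values and PG counts are placeholders, not sizing advice):

ceph osd erasure-code-profile set ecprofile k=4 m=2
ceph osd pool create ecpool 1024 1024 erasure ecprofile
ceph osd pool create cachepool 512 512 replicated
ceph osd tier add ecpool cachepool
ceph osd tier cache-mode cachepool writeback
ceph osd tier set-overlay ecpool cachepool
ceph osd pool set cachepool hit_set_type bloom
ceph osd pool set cachepool target_max_bytes 1099511627776   # e.g. cap the cache at 1 TiB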
So, using the commands given, I checked all the mons and a couple of OSDs that
are not backfilling because they report backfill_full.
When running the command, the values I have been trying to set via the
set commands are correctly set and are reported via the admin socket.
However "cep
On Fri, Jul 7, 2017 at 12:10 PM, Nick Fisk wrote:
> Managed to catch another one, osd.75 again, not sure if that is an indication
> of anything or just a coincidence. osd.75 is one of 8 OSDs in a cache tier,
> so all IO will be funnelled through them.
>
>
> cat
> /sys/kernel/debug/ceph/d027d5
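For reference, the interesting files under that debugfs directory (named <fsid>.client<id>; a glob avoids typing it out) are roughly these; a sketch, not the exact capture from the thread:

cat /sys/kernel/debug/ceph/*/osdc             # in-flight requests from the kernel client
cat /sys/kernel/debug/ceph/*/osdmap           # the osdmap epoch the client is working from
dmesg | grep -iE 'libceph|rbd' | tail -n 50   # recent kernel client messages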
> -Original Message-
> From: Ilya Dryomov [mailto:idryo...@gmail.com]
> Sent: 01 July 2017 13:19
> To: Nick Fisk
> Cc: Ceph Users
> Subject: Re: [ceph-users] Kernel mounted RBD's hanging
>
> On Sat, Jul 1, 2017 at 9:29 AM, Nick Fisk wrote:
> >> -Original Message-
> >> From: Ilya
Looks good, just one comment: the drive I used was 12GB for the WAL and DB, and
Ceph still only set the small sizes from my earlier reply.
So I'm not sure what benefit there is to the big sizes, nor how ceph-disk sets
the size or whether they are just hard-coded.
,Ashley
Sent from my iPhone
On 7 Jul
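As far as I can tell, ceph-disk sizes those partitions from the bluestore_block_db_size / bluestore_block_wal_size config options rather than from the device size, so something along these lines (the values are examples, not recommendations) should produce bigger partitions:

cat >> /etc/ceph/ceph.conf <<'EOF'
[global]
bluestore_block_db_size = 10737418240   # 10 GiB
bluestore_block_wal_size = 1073741824   # 1 GiB
EOF
ceph-disk prepare --bluestore /dev/sdg --block.wal /dev/sdd --block.db /dev/sdd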
Hi all!
I've gotten quite far so far, and I documented my progress here:
https://github.com/MartinEmrich/kb/blob/master/ceph/Manual-Bluestore.md
Cheers,
Martin
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Martin Emrich
Sent: Friday, 30 June 2017 17:32
To: ceph-u
On Fri, Jul 7, 2017 at 4:49 PM, Ashley Merrick wrote:
> After looking into this further, it seems none of the:
>
>
> ceph osd set-{full,nearfull,backfillfull}-ratio
>
>
> commands are having any effect on the cluster, including the
> backfillfull ratio; this command looks to have been added
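From what I understand, since Luminous these ratios live in the OSDMap rather than in the daemons' injected config, so a sketch for checking where a set command actually lands would be:

ceph osd set-backfillfull-ratio 0.92
ceph osd dump | grep -i ratio            # full_ratio / backfillfull_ratio / nearfull_ratio in the OSDMap
ceph osd df tree                         # per-OSD utilisation against those ratios
ceph health detail | grep -i backfill    # which OSDs/PGs are still flagged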
@Christian: I think the slow tail end is just the fact that there is contention
for the same OSDs.
@David: Yes, that's what I did: used shell/awk/python to grab and compare the
set of OSDs locked for backfilling versus the ones waiting.
From: Christian Balz
Makes sense; however, I have seen it stated online that you can safely use it
against the raw disk (e.g. /dev/sdd), and that it will create two new partitions
in the free space and not touch any existing partitions, allowing you to use one
device for multiple OSDs.
Number  Start  End  Size  File syst
You seem to use a whole disk, but I used a partition, which cannot (sanely) be
partitioned further.
But could you tell me which sizes you ended up with for sdd1 and sdd2?
Thanks,
Martin
From: Ashley Merrick [mailto:ash...@amerrick.co.uk]
Sent: Friday, 7 July 2017 09:08
To: Martin Emrich ;
I can run the following command with no issue, however, and have done so for
multiple OSDs, which work fine; it just creates an sdd1 and sdd2:
ceph-disk prepare --bluestore /dev/sdg --block.wal /dev/sdd --block.db /dev/sdd
,Ashley
From: ceph-users on behalf of Mart
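A quick way to see what that actually created on the shared device (a sketch; adjust the device name):

ceph-disk list                               # shows which partitions are data / block.db / block.wal
lsblk -o NAME,SIZE,TYPE,PARTLABEL /dev/sdd   # partition sizes and labels
sgdisk -p /dev/sdd                           # GPT partition table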
Hi!
It looks like I found the problem: the example suggested I could share the same
block device for both WAL and DB, but apparently this is not the case.
Analyzing the log output of ceph-deploy and the OSD host's logs, I am making
progress in discovering the exact steps to set up a bluestore OSD w
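In case it helps anyone following along, here is a sketch of one variant to try: give WAL and DB their own partitions instead of pointing both at the same one. Sizes and partition numbers are examples, and I have not verified that ceph-disk accepts pre-made partitions for --block.wal/--block.db in every case:

sgdisk --new=1:0:+1G  --change-name=1:'ceph block.wal' /dev/sdd
sgdisk --new=2:0:+10G --change-name=2:'ceph block.db'  /dev/sdd
ceph-disk prepare --bluestore /dev/sdg --block.wal /dev/sdd1 --block.db /dev/sdd2
ceph-disk activate /dev/sdg1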