ceph-ansible is able to find those on its own now; try just not specifying
the devices and dedicated devices like before. You'll see in the osd.yml
file that it's changed.
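For reference, a minimal sketch of what that looks like in the osd group_vars (variable name as I remember it from the ceph-ansible branch I used; the device paths below are only placeholders for the old style):

# group_vars/osds.yml (rough sketch, not a drop-in config)
osd_auto_discovery: true
# instead of the old explicit lists:
# devices:
#   - /dev/sdb
#   - /dev/sdc
# dedicated_devices:
#   - /dev/nvme0n1

With auto discovery on, ceph-ansible scans each host for empty disks itself.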
On Wed, Oct 30, 2019 at 3:47 AM Lars Täuber wrote:
> I don't use ansible anymore. But this was my config for the host onode1:
>
Now my understanding is that an NVMe drive is recommended to help speed up
BlueStore. If it were to fail, then those OSDs would be lost, but assuming
there is 3x replication and enough OSDs I don't see the problem here.
There are other scenarios where a whole server might be lost; it doesn't
mean the
You don't have to increase pgp_num first?
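Just to spell out what I mean by increasing pgp_num (pre-Nautilus you have to bump both yourself; the pool name and PG counts here are only examples):

# ceph osd pool set mypool pg_num 128
# ceph osd pool set mypool pgp_num 128

Until pgp_num catches up with pg_num, the new PGs don't actually get rebalanced onto other OSDs.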
On Wed, Sep 11, 2019 at 6:23 AM Kyriazis, George
wrote:
> I have the same problem (nautilus installed), but the proposed command
> gave me an error:
>
> # ceph osd require-osd-release nautilus
> Error EPERM: not all up OSDs have CEPH_FEATURE_SERVER_NAUT
I noticed this has happened before, this time I can't get it to stay down
at all, it just keeps coming back up:
# ceph osd down osd.48
marked down osd.48.
# ceph osd tree |grep osd.48
48 3.64000 osd.48 down 0 1.00000
# ceph osd tree |grep osd.48
48 3.64000
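For anyone who finds this later: as far as I can tell, "ceph osd down" only marks the OSD down, and the running daemon immediately reports itself back up. To make it stick you have to stop the daemon or set the noup flag, e.g. (assuming systemd on the OSD host):

# systemctl stop ceph-osd@48        <- run on the host that owns osd.48
# ceph osd set noup                 <- or stop OSDs rejoining cluster-wide
# ceph osd unset noup               <- remember to clear the flag later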
You are using Nautilus right? Did you use ansible to deploy it?
On Wed, Aug 14, 2019, 10:31 AM wrote:
> All;
>
> We're working to deploy our first production Ceph cluster, and we've run
> into a snag.
>
> The MONs start, but the "cluster" doesn't appear to come up. Ceph -s
> never returns.
>
>
> Actually standalone WAL is required when you have either very small fast
> device (and don't want db to use it) or three devices (different in
> performance) behind OSD (e.g. hdd, ssd, nvme). So WAL is to be located
> at the fastest one.
>
> For the given use case you just have HDD and NVMe and D
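So for an HDD-plus-NVMe box like mine, my understanding is you only give ceph-volume a DB device and let the WAL live on it; a rough sketch, with made-up device names:

# ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1

No separate --block.wal unless there really is a third, faster device.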
I used ceph-ansible just fine, never had this problem.
On Thu, Jul 25, 2019 at 1:31 PM Nathan Harper
wrote:
> Hi all,
>
> We've run into a strange issue with one of our clusters managed with
> ceph-ansible. We're adding some RGW nodes to our cluster, and so re-ran
> site.yml against the cluste
I can't understand how using RAID0 is better than JBOD, considering JBOD
would be many individual disks, each used as an OSD, instead of a single big
one used as a single OSD.
On Mon, Jul 22, 2019 at 4:05 AM Vitaliy Filippov wrote:
> OK, I meant "it may help performance" :) the main point is tha
Just set 1 or more SSDs for BlueStore; as long as you're within the 4% rule
I think it should be enough.
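Rough arithmetic behind the 4% rule, just as an example:

4 TB HDD per OSD  ->  0.04 x 4000 GB = 160 GB of SSD for its DB/WAL
9 such OSDs       ->  9 x 160 GB = ~1.4 TB of SSD across the node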
On Fri, Jul 5, 2019 at 7:15 AM Davis Mendoza Paco
wrote:
> Hi all,
> I have installed ceph luminous, with 5 nodes (45 OSDs); each OSD server
> supports up to 16 HDs and I'm only using 9
>
> I w
The thing I've seen a lot is where an OSD would get marked down because of
a failed drive, then it would add itself right back again
On Fri, Jun 28, 2019 at 9:12 AM Robert LeBlanc wrote:
> I'm not sure why the monitor did not mark it down after 600 seconds
> (default). The reason it is so
Can the bitmap allocator be set in ceph-ansible? I wonder why it is not the
default in 12.2.12.
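Partially answering my own question: I assume it can be pushed through ceph_conf_overrides in group_vars/all.yml, something like this (option name as I understand it, not tested):

ceph_conf_overrides:
  osd:
    bluestore allocator: bitmap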
On Thu, Jun 6, 2019 at 7:06 AM Stefan Kooman wrote:
> Quoting Max Vernimmen (vernim...@textkernel.nl):
> >
> > This is happening several times per day after we made several changes at
> > the same time:
I think a deep scrub would eventually catch this, right?
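i.e. kick one off by hand and see whether it flags the object, something like (the pg id is a placeholder):

# ceph pg deep-scrub <pgid>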
On Wed, May 22, 2019 at 2:56 AM Eugen Block wrote:
> Hi Alex,
>
> > The cluster has been idle at the moment being new and all. I
> > noticed some disk related errors in dmesg but that was about it.
> > It looked to me for the next 20 - 30
Does anyone know the necessary steps to install Ansible 2.8 on RHEL 7? I'm
assuming most people are doing it with pip?
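What I'm assuming would work, for the record (pip itself from EPEL, then pin the 2.8 series):

# yum install -y python2-pip
# pip install 'ansible>=2.8,<2.9'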
Are you sure you can really use 3.2 for Nautilus?
On Fri, May 10, 2019 at 7:23 AM Tarek Zegar wrote:
> Ceph-ansible 3.2, rolling upgrade mimic -> nautilus. The ansible file sets
> flag "norebalance". When there is*no* I/O to the cluster, upgrade works
> fine. When upgrading with IO running in th
You mention the version of Ansible, and that is right. How about the branch
of ceph-ansible? It should be 3.2-stable. What OS? I haven't come across this
problem myself, though I have hit a lot of other ones.
On Mon, May 6, 2019 at 3:47 AM ST Wong (ITSC) wrote:
> Hi all,
>
>
>
> I’ve problem in deploying mimic usi
How is this better than using a single public network, routing through an L3
switch?
If I understand the scenario right, this would require the switch port to be
a trunk containing all the public VLANs, and you could bridge directly
through the switch, so L3 wouldn't be necessary?
On Fri, May 3
It sucks that it's so hard to set/view active settings; this should be a lot
simpler in my opinion.
On Tue, Apr 23, 2019 at 1:58 PM solarflow99 wrote:
> Thanks, but does this not work on Luminous maybe? I am on the mon hosts
> trying this:
>
>
> # ceph config set osd osd_recove
ig diff|grep -A5 osd_recovery_max_active
> "osd_recovery_max_active": {
> "default": 3,
> "mon": 4,
> "override": 4,
> "final": 4
> },
>
> On Wed, Apr 17, 2019 at 5:29 AM solarf
> > cached or only read on startup. But in this case this option is read
> > in the relevant path every time and no notification is required. But
> > the injectargs command can't know that.
>
> Right on all counts. The functions are referred to as observers and
Then why doesn't this work?
# ceph tell 'osd.*' injectargs '--osd-recovery-max-active 4'
osd.0: osd_recovery_max_active = '4' (not observed, change may require
restart)
osd.1: osd_recovery_max_active = '4' (not observed, change may require
restart)
osd.2: osd_recovery_max_active = '4' (not observe
I noticed that when changing some settings, they appear to stay the same, for
example when trying to set this higher:
ceph tell 'osd.*' injectargs '--osd-recovery-max-active 4'
It gives the usual warning that a restart may be needed, but it still has the
old value:
# ceph --show-config | grep osd_recove
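The only way I've found to see what a daemon is actually running with is the admin socket on the OSD host, e.g.:

# ceph daemon osd.0 config get osd_recovery_max_active

As far as I understand, --show-config prints what a freshly started daemon would use, not what the running OSDs currently have.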
, Mar 27, 2019 at 4:13 PM Brad Hubbard wrote:
> On Thu, Mar 28, 2019 at 8:33 AM solarflow99 wrote:
> >
> > yes, but nothing seems to happen. I don't understand why it lists OSDs
> 7 in the "recovery_state": when i'm only using 3 replicas and it seems to
>
"status": "not queried"
},
{
"osd": "38",
"status": "already probed"
}
],
On Tue, Mar 26, 2019 at 4:53 PM Brad Hubbard wrote
> What's the status of osds 7 and 17?
>
> On Tue, Mar 26, 2019 at 8:56 AM solarflow99 wrote:
> >
> > Hi, thanks. It's still using Hammer. Here's the output from the pg
> query; the last command you gave doesn't work at all, it may be too old.
> >
> >
"num_bytes": 6405126628,
"num_objects": 241711,
"num_object_clones": 0,
"num_object_copies": 725130,
"num_objects_missing_on_primary": 0,
I noticed my cluster has scrub errors but the deep-scrub command doesn't
show any errors. Is there any way to know what it takes to fix it?
# ceph health detail
HEALTH_ERR 1 pgs inconsistent; 47 scrub errors
pg 10.2a is active+clean+inconsistent, acting [41,38,8]
47 scrub errors
# zgrep 10.2a
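For anyone else hitting this, the sequence I believe is intended here (using pg 10.2a from the health output above; list-inconsistent-obj needs a reasonably recent release):

# rados list-inconsistent-obj 10.2a --format=json-pretty
# ceph pg repair 10.2a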
how about adding: --sync=1 --numjobs=1 to the command as well?
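i.e. something like this, based on the first command quoted below:

# fio -ioengine=rbd -direct=1 -sync=1 -numjobs=1 -name=test -bs=4k -iodepth=1 -rw=randwrite -pool=bench -rbdname=testimg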
On Sat, Mar 9, 2019 at 12:09 PM Vitaliy Filippov wrote:
> There are 2:
>
> fio -ioengine=rbd -direct=1 -name=test -bs=4k -iodepth=1 -rw=randwrite
> -pool=bench -rbdname=testimg
>
> fio -ioengine=rbd -direct=1 -name=test -bs=4k -i
sounds right to me
On Wed, Mar 6, 2019 at 7:35 AM Kai Wagner wrote:
> Hi all,
>
> I think this change really late in the game just results into confusion.
>
> I would be in favor to make the ceph-mgr-dashboard package a dependency of
> the ceph-mgr so that people just need to enable the dashboa
It has to be mounted from somewhere; if that server goes offline, you need
to mount it from somewhere else, right?
On Thu, Feb 28, 2019 at 11:15 PM David Turner wrote:
> Why are you mapping the same rbd to multiple servers?
>
> On Wed, Feb 27, 2019, 9:50 AM Ilya Dryomov wrote:
>
>> On Wed, Feb 2
taken up
On Thu, Feb 28, 2019 at 2:26 PM Jack wrote:
> Are not you using 3-replicas pool ?
>
> (15745GB + 955GB + 1595M) * 3 ~= 51157G (there is overhead involved)
>
> Best regards,
>
> On 02/28/2019 11:09 PM, solarflow99 wrote:
> > thanks, I still can't understa
. (afaik)
>
> You can use 'rbd -p rbd du' to see how much of these devices is
> provisioned and see if it's coherent.
>
> Mohamad
>
> >
> >
> > -Original Message-
> > From: solarflow99 [mailto:solarflo...@gmail.com]
> > Sent: 27 F
Using ceph df, it looks as if RBD images can use the total free space
available in the pool they belong to (8.54%), yet I know they are created with
a --size parameter and that's what determines the actual space. I can't
understand the difference I'm seeing: only 5T is being used but ceph df
shows 51T:
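For reference, the check suggested in the replies, to compare provisioned vs. actually used space per image (the image name here is a placeholder):

# rbd du -p rbd
# rbd info rbd/myimage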
I saw Intel had a demo of a Luminous cluster running on top-of-the-line
hardware; they used 2 OSD partitions for the best performance. I was
interested that they would split them like that, and asked the demo person
how they came to that number. I never got a really good answer except that
it wo
I knew it. FW updates are very important for SSDs
On Sat, Feb 23, 2019 at 8:35 PM Michel Raabe wrote:
> On Monday, February 18, 2019 16:44 CET, David Turner <
> drakonst...@gmail.com> wrote:
> > Has anyone else come across this issue before? Our current theory is
> that
> > Bluestore is access
Aren't you undersized at only 30GB? I thought you should have 4% of your
OSD size
On Fri, Feb 22, 2019 at 3:10 PM Nick Fisk wrote:
> >On 2/16/19 12:33 AM, David Turner wrote:
> >> The answer is probably going to be in how big your DB partition is vs
> >> how big your HDD disk is. From your output
No, but I know that if the wear leveling isn't right then I wouldn't expect
them to last long; FW updates on SSDs are very important.
On Mon, Feb 18, 2019 at 7:44 AM David Turner wrote:
> We have 2 clusters of [1] these disks that have 2 Bluestore OSDs per disk
> (partitioned), 3 disks per node
Does ceph-ansible support upgrading a cluster to the latest minor version
(e.g. Mimic 13.2.2 to 13.2.4)?
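If it works like the major-version upgrades, I'd expect the rolling_update playbook to handle it; a sketch (paths and inventory are whatever your install uses):

# cd /usr/share/ceph-ansible
# ansible-playbook -i hosts infrastructure-playbooks/rolling_update.yml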
I think one limitation would be the 375GB, since BlueStore needs a larger
amount of space than FileStore did.
On Mon, Feb 4, 2019 at 10:20 AM Florian Engelmann <
florian.engelm...@everyware.ch> wrote:
> Hi,
>
> we have built a 6 Node NVMe only Ceph Cluster with 4x Intel DC P4510 8TB
> each and one
I thought a new cluster would have the 'rbd' pool already created; has this
changed? I'm using Mimic.
# rbd ls
rbd: error opening default pool 'rbd'
Ensure that the default pool has been created or specify an alternate pool
name.
rbd: list: (2) No such file or directory
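Presumably the fix is just to create it yourself now; something like this (the PG count is only an example):

# ceph osd pool create rbd 64
# rbd pool init rbd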
Can you do HA on the NFS shares?
On Wed, Jan 30, 2019 at 9:10 AM David C wrote:
> Hi Patrick
>
> Thanks for the info. If I did multiple exports, how does that work in
> terms of the cache settings defined in ceph.conf, are those settings per
> CephFS client or a shared cache? I.e if I've defi
I'm interested to know about this too.
On Mon, Nov 5, 2018 at 10:45 AM Bastiaan Visser wrote:
>
> There are lots of rumors around about the benefit of changing
> io-schedulers for OSD disks.
> Even some benchmarks can be found, but they are all more than a few years
> old.
> Since ceph is movin
Why didn't you just install the DB + WAL on the NVMe? Is this "data disk"
still an SSD?
On Mon, Oct 22, 2018 at 3:34 PM David Turner wrote:
> And by the data disk I mean that I didn't specify a location for the DB
> partition.
>
> On Mon, Oct 22, 2018 at 4:06 PM David Turner
> wrote:
>
>> Tr
I think the answer is yes. I'm pretty sure only the OSDs require very
long-life, enterprise-grade SSDs.
On Mon, Oct 15, 2018 at 4:16 AM ST Wong (ITSC) wrote:
> Hi all,
>
>
>
> We’ve got some servers with some small size SSD but no hard disks other
> than system disks. While they’re not suitable
I had the same thing happen when I built a Ceph cluster on a single VM
for testing; I wasn't concerned though, because I knew the slow speed was
likely the problem.
On Mon, Oct 15, 2018 at 7:34 AM Kisik Jeong
wrote:
> Hello,
>
> I successfully deployed Ceph cluster with 16 OSDs and created Cep
I think PGs have more to do with this; the docs were pretty good at
explaining it. Hope this helps.
On Thu, Oct 11, 2018, 6:20 PM ST Wong (ITSC) wrote:
> Hi all, we’re new to CEPH. We’ve some old machines redeployed for
> setting up CEPH cluster for our testing environment.
>
> There are over
two
> gateway servers to export cephfs via nfs/smb using ctdb as HA
>
> On 10/11/2018 08:42 PM, solarflow99 wrote:
>
> I am just interested to know more about your use case for NFS as opposed
> to just using cephfs directly, and what are you using for HA?
>
>
> On Thu, Oct
I am just interested to know more about your use case for NFS as opposed to
just using cephfs directly, and what are you using for HA?
On Thu, Oct 11, 2018 at 1:54 AM Felix Stolte wrote:
> Hey folks,
>
> I use nfs-ganesha to export cephfs to nfs. nfs-ganesha can talk to
> cephfs via libcephfs s
as described in the documentation.
>
> -- JJ
>
> On 09/10/2018 00.05, solarflow99 wrote:
> > seems like it did, yet I don't see anything listening on the port it
> should be for dashboard.
> >
> > # ceph mgr module ls
> > {
> errors?) and "ceph mgr module ls" (any reports of the module unable to
> run?)
>
> John
> On Sat, Oct 6, 2018 at 1:53 AM solarflow99 wrote:
> >
> > I enabled the dashboard module in ansible but I don't see ceph-mgr
> listening on a port for it. Is there so
Now this goes against what I thought I learned about CephFS. You should be
able to read/write to/from all OSDs; how can it be limited to only a single OSD?
On Sat, Oct 6, 2018 at 4:30 AM Christopher Blum
wrote:
> I wouldn't recommend you pursuit this any further, but if this is the only
> client tha
I enabled the dashboard module in ansible but I don't see ceph-mgr
listening on a port for it. Is there something else I missed?
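For context, the checks I was running at this point (assuming the module really did get enabled):

# ceph mgr module ls          <- "dashboard" should appear under enabled_modules
# ceph mgr services           <- should list a URL for the dashboard once it's serving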
2 SSD disks this would mean 2 TB for each SSD !
> If this is really required I am afraid I will keep using filestore ...
>
> Cheers, Massimo
>
> On Fri, Oct 5, 2018 at 7:26 AM wrote:
>
>> Hello
>>
>> Am 4. Oktober 2018 02:38:35 MESZ schrieb solarflow99
That's strange, I recall only deleting the OSD from the crush map, auth rm,
then osd rm..
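i.e. the sequence I remember using (N being the OSD id; newer releases apparently roll this into "ceph osd purge"):

# ceph osd crush remove osd.N
# ceph auth del osd.N
# ceph osd rm osd.N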
On Wed, Oct 3, 2018 at 2:54 PM Alfredo Deza wrote:
> On Wed, Oct 3, 2018 at 3:52 PM Andras Pataki
> wrote:
> >
> > Ok, understood (for next time).
> >
> > But just as an update/closure to my investigation - it
I use the same configuration you have, and I plan on using BlueStore. My
SSDs are only 240GB and it worked with FileStore all this time, so I suspect
BlueStore should be fine too.
On Wed, Oct 3, 2018 at 4:25 AM Massimo Sgaravatto <
massimo.sgarava...@gmail.com> wrote:
> Hi
>
> I have a ceph cluste
I have a new deployment and it always has this problem; even if I increase
the size of the OSD, it stays at 8. I saw examples where others had this
problem, but it was with the RBD pool; I don't have an RBD pool, and I just
deployed it fresh with ansible.
health: HEALTH_WARN
1 MDSs repor
odhi.fedoraproject.org/updates/FEDORA-EPEL-2018-7f8d3be3e2 .
> solarflow99, you can test this package and report "+1" in Bodhi there.
>
> It's also in the CentOS Storage SIG
> (http://cbs.centos.org/koji/buildinfo?buildID=23004) . Today I've
> tagged that build in
Ya, sadly it looks like btrfs will never materialize as the filesystem of
the future. Red Hat, as an example, has even dropped it from its future plans,
as others probably will and have too.
On Sun, Sep 23, 2018 at 11:28 AM mj wrote:
> Hi,
>
> Just a very quick and simple reply:
>
> XFS has *always* t
the requirements mention a
> version of a dependency (the notario module) which needs to be 0.0.13
> or newer, and you seem to be using an older one.
>
>
> On Thu, Sep 20, 2018 at 6:53 PM solarflow99 wrote:
> >
> > Hi, tying to get this to do a simple deployment, and i
Hi, trying to get this to do a simple deployment, and I'm getting a strange
error; has anyone seen this? I'm using CentOS 7 (rel 5), ansible 2.5.3,
python version = 2.7.5.
I've tried with mimic, luminous, and even jewel, no luck at all.
TASK [ceph-validate : validate provided configuration]
***
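For the record, the fix suggested to me was simply upgrading that python dependency, which I assume is all that's needed:

# pip install -U 'notario>=0.0.13'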
Surely
you'd have a VIP?
On Tue, Sep 18, 2018 at 12:37 PM Jean-Charles Lopez
wrote:
> > On Sep 17, 2018, at 16:13, solarflow99 wrote:
> >
> > Hi, I read through the various documentation and had a few questions:
> >
> > - From what I understand cephFS clients rea
Hi, anyone able to answer these few questions?
On Mon, Sep 17, 2018 at 4:13 PM solarflow99 wrote:
> Hi, I read through the various documentation and had a few questions:
>
> - From what I understand cephFS clients reach the OSDs directly, does the
> cluster network need to be op
Hi, I read through the various documentation and had a few questions:
- From what I understand, CephFS clients reach the OSDs directly; does the
cluster network need to be opened up as a public network?
- Is it still necessary to have a public and cluster network when using
CephFS, since the cl
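For anyone finding this later, the ceph.conf layout I was asking about looks roughly like this (the subnets are placeholders); clients only ever need to reach the public_network, while the cluster_network carries OSD-to-OSD traffic only:

[global]
public_network  = 192.168.10.0/24
cluster_network = 192.168.20.0/24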