the delay.
>I tried to resend it but it just returned the same error that mail was
>not deliverable to the ceph mailing list. I will send the message
>beneath as soon as it's finally possible, but for now this should help you
>out.
>
>Stephan
>
>--
>
>Hi,
>
fifth node and restarted the ceph service, but we are still unable to make the
fifth node enter the quorum.
# ceph -s
  cluster:
    id:     92e8e879-041f-49fd-a26a-027814e0255b
    health: HEALTH_WARN
            1/5 mons down, quorum cn1,cn2,cn3,cn4

  services:
    mon: 5 daemons, quorum cn1,cn2,cn3,cn4
TE: Expected behavior is the same as the Linux "du" command.
>
> Thanks
> Swami
>
>
I have observed this in the ceph nautilus dashboard too, and think it is a
display bug... but sometimes it shows the right values.
Which nautilus version do you use?
On 10 December 2019 14:31:05 CET, "David Majchrzak, ODERLAND Webbhotell AB" wrote:
>Hi!
>
>While browsing /#/po
Hi Team,
We would like to create multiple snapshots inside the ceph cluster and
initiate the request from a librados client, and we came across this rados API:
rados_ioctx_selfmanaged_snap_set_write_ctx
Can someone give us sample code on how to use this API?
Thanks,
Muthu
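For later readers of the archive: a minimal sketch of how these calls are
usually combined (the client id "admin", the pool name "mypool" and the object
name "obj1" are made-up examples; error handling is omitted, so this is not a
drop-in implementation). The important points are that the snap id is allocated
by the cluster and that the snaps array is passed newest-first:

#include <string.h>
#include <rados/librados.h>

int main(void)
{
    rados_t cluster;
    rados_ioctx_t io;

    /* Connection boilerplate; "admin" and "mypool" are placeholders. */
    rados_create(&cluster, "admin");
    rados_conf_read_file(cluster, "/etc/ceph/ceph.conf");
    rados_connect(cluster);
    rados_ioctx_create(cluster, "mypool", &io);

    /* Ask the cluster to allocate a new self-managed snapshot id. */
    rados_snap_t snaps[1];
    rados_ioctx_selfmanaged_snap_create(io, &snaps[0]);

    /* Attach the snapshot context to this ioctx for subsequent writes:
     * seq = newest snap id, snaps[] = all existing snap ids, newest first. */
    rados_ioctx_selfmanaged_snap_set_write_ctx(io, snaps[0], snaps, 1);

    /* Writes from now on preserve the pre-snapshot object data. */
    const char *data = "post-snapshot data";
    rados_write(io, "obj1", data, strlen(data), 0);

    rados_ioctx_destroy(io);
    rados_shutdown(cluster);
    return 0;
}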
Hi,
We have upgraded a 5 node ceph cluster from Luminous to Nautilus and the
cluster was running fine. Yesterday when we tried to add one more osd into
the ceph cluster, we found that the OSD was created in the cluster but
suddenly some of the other OSDs started to crash and we are not ab
Hi Team,
In one of our ceph clusters we observe many slow IOPS on all
our OSDs, and most of the latency occurs between the two sets of
operations shown below.
{
"time": "2019-1
Please find the below output.
cn1.chn8be1c1.cdn ~# ceph osd metadata 0
{
"id": 0,
"arch": "x86_64",
"back_addr": "[v2:10.50.12.41:6883/12650,v1:10.50.12.41:6887/12650]",
"back_iface": "dss-p
Hi,
yes, the cluster is still unrecovered. We are not even able to bring up osd.0 yet.
osd logs: https://pastebin.com/4WrpgrH5
Mon logs: https://drive.google.com/open?id=1_HqK2d52Cgaps203WnZ0mCfvxdcjcBoE
# ceph daemon /var/run/ceph/ceph-mon.cn1.asok config show|grep debug_mon
"debug_mon&quo
The mon log shows that all the mismatched-fsid OSDs are from node
> 10.50.11.45,
> maybe that is the fifth node?
> BTW, I didn't find the osd.0 boot message in ceph-mon.log.
> Did you set debug_mon=20 first and then restart the osd.0 process, and make
> sure that osd.0 was restarted?
>
>
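For reference, raising the mon debug level as requested above can be done with
something like the following (a hedged sketch; adjust the mon name and admin
socket path to your environment):
ceph daemon /var/run/ceph/ceph-mon.cn1.asok config set debug_mon 20/20
or, from any node with an admin keyring:
ceph tell mon.cn1 injectargs '--debug-mon 20/20'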
Hi,
Please find the ceph osd tree output in the pastebin
https://pastebin.com/Gn93rE6w
On Fri, Nov 8, 2019 at 7:58 PM huang jun wrote:
> can you post your 'ceph osd tree' in pastebin?
> do you mean the osds report fsid mismatch is from old removed nodes?
>
> nokia ceph 于
Hi,
The fifth node in the cluster was affected by a hardware failure and hence
the node was replaced in the ceph cluster. But we were not able to replace
it properly, so we uninstalled ceph on all the nodes, deleted the
pools, and also zapped the OSDs and recreated them as new
Hi,
Below is the status of the OSD after restart.
# systemctl status ceph-osd@0.service
● ceph-osd@0.service - Ceph object storage daemon osd.0
   Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled-runtime; vendor preset: disabled)
  Drop-In: /etc/systemd/system/ceph-osd
Adding my official mail id
-- Forwarded message -
From: nokia ceph
Date: Fri, Nov 8, 2019 at 3:57 PM
Subject: OSD's not coming up in Nautilus
To: Ceph Users
Hi Team,
There is one 5 node ceph cluster which we have upgraded from Luminous to
Nautilus and everything was
Hi Team,
There is one 5 node ceph cluster which we have upgraded from Luminous to
Nautilus and everything was going well until yesterday when we noticed that
the ceph osd's are marked down and not recognized by the monitors as
running, even though the osd processes are running.
We noticed tha
fined timespan for scrubbing from the
admin?
Hope you can enlighten me :)
- Mehmet
all parts", since
>networks
>> can break in thousands of not-very-obvious ways which are not
>0%-vs-100%
>> but somewhere in between.
>>
>
>OK. Let me ask my question in a new way.
>What does ceph do, when I switch off all switches of the cluster
>network?
>Does
Hi Team,
We have noticed that the memory usage of the ceph-monitor processes increased by
1 GB in 4 days.
We monitored the ceph-monitor memory usage every minute and we can see it
increase and decrease by a few hundred MB at any point; but over time, the
memory usage increases. We also noticed some monitor
Hi Team,
With default log settings, the ceph stats will be logged like:
cluster [INF] pgmap v30410386: 8192 pgs: 8192 active+clean; 445 TB data,
1339 TB used, 852 TB / 2191 TB avail; 188 kB/s rd, 217 MB/s wr, 1618 op/s
Jewel: in the mon logs
Nautilus: in the mgr logs
Luminous: not able to view
I guess this depends on your cluster setup... do you have slow requests as well?
- Mehmet
On 11 September 2019 12:22:08 CEST, Ansgar Jazdzewski wrote:
>Hi,
>
>we are running ceph version 13.2.4 and qemu 2.10, we figured out that
>on VMs with more than three disks IO fails with hung
Thank you Ricardo Dias
On Tue, Sep 17, 2019 at 2:13 PM Ricardo Dias wrote:
> Hi Muthu,
>
> The command you used is only available in v14.2.3. To set the ssl
> certificate in v14.2.2 you need to use the following commands:
>
> $ ceph config-key set mgr/dashboard/crt -i dash
Hi Team,
In ceph 14.2.2, the ceph dashboard does not have set-ssl-certificate.
We are trying to enable the ceph dashboard, and while using the ssl certificate
and key, it is not working.
cn5.chn5au1c1.cdn ~# ceph dashboard set-ssl-certificate -i dashboard.crt
no valid command found; 10 closest matches
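For the archive: the pre-14.2.3 approach Ricardo describes above is normally
completed with the matching key and a dashboard module restart (a hedged
sketch; the certificate and key file names are examples):
ceph config-key set mgr/dashboard/crt -i dashboard.crt
ceph config-key set mgr/dashboard/key -i dashboard.key
ceph mgr module disable dashboard
ceph mgr module enable dashboard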
entify the correct
Luminous release in which this is/will be available.
https://github.com/ceph/ceph/pull/25343
Can someone help us with this please?
Thanks,
e a new old one,
> let it sync, etc.
> Still a bad idea.
>
> Paul
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89
Hi Team,
One of our old customers had Kraken and they are going to upgrade to
Luminous. In the process they are also requesting a downgrade procedure.
Kraken used leveldb for the ceph-mon data; from Luminous it changed to rocksdb,
and the upgrade works without any issues.
When we downgrade, the ceph-mon
hare the link of existing rocksdb ticket which does 2 write +
> > syncs.
>
> My PR is here https://github.com/ceph/ceph/pull/26909, you can find the
> issue tracker links inside it.
>
> > 3. Any configuration by which we can reduce/optimize the iops ?
>
> As already said par
obably adds another 5*750 iops on top of each of (1) and (2).
>
> so 5*((2 or 3)+1+2)*750 = either 18750 or 22500. 18750/120 = 156.25,
> 22500/120 = 187.5
>
> the rest may be compaction or metadata reads if you update some objects.
> or maybe I'm missing something else. howev
Thank you Greg, it is now clear for us: the option is only available in
C++, so we need to rewrite the client code in C++.
Thanks,
Muthu
On Fri, Aug 2, 2019 at 1:05 AM Gregory Farnum wrote:
> On Wed, Jul 31, 2019 at 10:31 PM nokia ceph
> wrote:
> >
> > Thank you Gre
Hi Team,
Could you please help us understand the write IOPS inside the ceph
cluster? There seems to be a mismatch between the theoretical IOPS and what
we see in the disk status.
Our platform is a 5 node cluster with 120 OSDs, each node having 24 HDDs (
data, rocksdb and rocksdb.WAL all reside in
use librados.h in
our client to communicate with ceph cluster.
Also any equivalent librados api for the command rados -p poolname
Thanks,
Muthu
On Wed, Jul 31, 2019 at 11:13 PM Gregory Farnum wrote:
>
>
> On Wed, Jul 31, 2019 at 1:32 AM nokia ceph
> wrote:
>
>> Hi Greg,
Hi Greg,
We were trying to implement this, however we are having issues assigning the
destination object name with this API.
There is a rados command "rados -p cp ", is
there any librados API equivalent to this?
Thanks,
Muthu
On Fri, Jul 5, 2019 at 4:00 PM nokia ceph wrote:
> T
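Since the related reply in this thread points out that the copy primitive is
only exposed through the C++ ObjectWriteOperation interface, the closest thing
in plain C is a read-then-write fallback. This is not a librados "cp" call
(the data makes a round trip through the client) and the object names are
hypothetical; just a sketch:

#include <errno.h>
#include <stdlib.h>
#include <time.h>
#include <rados/librados.h>

/* Hedged sketch: copy src_obj to dst_obj within one ioctx by reading the whole
 * object and rewriting it. No chunking, so only sensible for small objects. */
int copy_object(rados_ioctx_t io, const char *src_obj, const char *dst_obj)
{
    uint64_t size = 0;
    time_t mtime;
    int r = rados_stat(io, src_obj, &size, &mtime);
    if (r < 0)
        return r;

    char *buf = malloc(size ? size : 1);
    if (!buf)
        return -ENOMEM;

    r = rados_read(io, src_obj, buf, size, 0);   /* returns bytes read */
    if (r >= 0)
        r = rados_write_full(io, dst_obj, buf, (size_t)r);

    free(buf);
    return r;
}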
:
> bluestore warn on legacy statfs = false
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
>
> On Fri, Jul 19, 2019
Thank you Paul Emmerich
On Fri, Jul 19, 2019 at 5:22 PM Paul Emmerich
wrote:
> bluestore warn on legacy statfs = false
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 Mün
Hi Team,
After upgrading our cluster from 14.2.1 to 14.2.2, the cluster moved to a
warning state with the following error:
cn1.chn6m1c1ru1c1.cdn ~# ceph status
  cluster:
    id:     e9afb5f3-4acf-421a-8ae6-caaf328ef888
    health: HEALTH_WARN
            Legacy BlueStore stats reporting detected on
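For later readers, the two usual ways of dealing with this warning, based on
the option Paul names above (a hedged sketch; check the release notes for your
exact version before running the repair):
ceph config set global bluestore_warn_on_legacy_statfs false
simply silences the warning, while converting each OSD to the new statfs
accounting is done one OSD at a time:
systemctl stop ceph-osd@<id>
ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-<id>
systemctl start ceph-osd@<id>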
ese are the fadvise flags we have in various
> places that let you specify things like not to cache the data.
> Probably leave them unset.
>
> -Greg
>
>
>
> On Wed, Jul 3, 2019 at 2:47 AM nokia ceph
> wrote:
> >
> > Hi Greg,
> >
> > Can you please sha
ffic as the machine running the object class will still need to
> > > connect to the relevant primary osd and send the write (presumably in
> > > some situations though this will be the same machine).
> > >
> > > On Tue, Jul 2, 2019 at 4:08 PM nokia ceph
Hi Brett,
I think I was wrong here in the requirement description. It is not about
data replication; we need the same content stored under a different object/name.
We store video content inside the ceph cluster. And our new requirement is that
we need to store the same content for different users, hence we need something
that will clone/copy multiple objects and store them inside the cluster.
Thanks,
Muthu
On Fri, Jun 28, 2019 at 9:23 AM Brad Hubbard wrote:
> On Thu, Jun 27, 2019 at 8:58 PM nokia ceph
> wrote:
> >
> > Hi Team,
> >
> > We have a requirement to create multiple copies of an
?
Please share the document details if it is feasible.
Thanks,
Muthu
Hello,
I would advise using this script from Dan:
https://github.com/cernceph/ceph-scripts/blob/master/tools/ceph-gentle-reweight
I have used it many times and it works great, also if you want to drain the
OSDs.
Hth
Mehmet
On 30 May 2019 22:59:05 CEST, Michel Raabe wrote:
>Hi Mike,
c4962870fdd67ca758c154760d9df83
> rbd -c ${BACKUP-CLUSTER} -p ${POOL-DESTINATION} export
> ${KVM-IMAGE}@${TODAY-SNAP} - | md5sum
> => 2c4962870fdd67ca758c154760d9df83
>
>
> Does someone have an idea of what's happening?
>
> Does someone have a way to succeed in comparing t
Hi List / James,
In the Ceph master (and also Ceph 14.2.1), file: src/common/options.cc,
line # 192:
Option::size_t sz{strict_iecstrtoll(val.c_str(), error_message)};
On ARM 32-bit, compiling with CLang 7.1.0, compilation fails hard at
this line.
The reason is because
state for not deep scrubbing.
Thanks,
Muthu
On Tue, May 14, 2019 at 4:30 PM EDH - Manuel Rios Fernandez <
mrios...@easydatahost.com> wrote:
> Hi Muthu
>
>
>
> We found the same issue near 2000 pgs not deep-scrubbed in time.
>
>
>
> We’re manually force scrubbing
Hi Team,
After upgrading from Luminous to Nautilus, we see a "654 pgs not
deep-scrubbed in time" error in ceph status. How can we disable this flag?
In our setup we disable deep-scrubbing due to performance issues.
Thanks,
Muthu
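For later readers, a hedged sketch of the knobs usually pointed at for this
warning (option names as in Nautilus; verify on your version). Either move the
deep-scrub interval out to match your local policy, e.g.
ceph config set global osd_deep_scrub_interval 1209600
(1209600 seconds = 14 days, an example value), or zero the warning ratio, which
is commonly reported to disable the health check:
ceph config set global mon_warn_pg_not_deep_scrubbed_ratio 0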
in a way that
>I haven't seen before.
>> $ ceph versions
>> {
>> "mon": {
>> "ceph version 13.2.5
>(cbff874f9007f1869bfd3821b7e33b2a6ffd4988) mimic (stable)": 3
>> },
>> "mgr": {
>
Hello,
I would also recommend Proxmox.
It is very easy to install and to manage your kvm/lxc, with support for a huge
amount of possible storage backends.
Just my 2 cents
Hth
- Mehmet
On 6 April 2019 17:48:32 CEST, Marc Roos wrote:
>
>We have also hybrid ceph/libvirt-kvm setup, using some scri
>>
>> Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
>> 128 GB RAM
>> Each OSD is SSD Intel DC-S3710 800GB
>> It runs mimic 13.2.2 in containers.
>>
>> Cluster was operating normally for 4 month and then recently I had an
>outage with multiple VMs (RBD) sh
Hi,
We have a 5 node EC 4+1 cluster with 335 OSDs running Kraken Bluestore
11.2.0.
There was a disk failure on one of the OSDs and the disk was replaced.
After which it was noticed that there was a ~30TB drop in the MAX_AVAIL
value for the pool storage details on output of 'ceph df'
E
impacting my pg query, I can't find the osd to apply the lost
>paremeter.
>> >
>> >
>>
>http://docs.ceph.com/docs/mimic/rados/troubleshooting/troubleshooting-pg/#placement-group-down-peering-failure
>> >
>> > Did someone have same scena
Hello Simon,
Another idea is to increase choose_total_tries.
Hth
Mehmet
On 7 March 2019 09:56:17 CET, Martin Verges wrote:
>Hello,
>
>try restarting every osd if possible.
>Upgrade to a recent ceph version.
>
>--
>Martin Verges
>Managing director
>
>Mobile: +49 1
88da775c5ad4
> keyring = /etc/pve/priv/$cluster.$name.keyring
> public network = 169.254.42.0/24
>
>[mon]
> mon allow pool delete = true
> mon data avail crit = 5
> mon data avail warn = 15
>
>[osd]
> keyring = /var/lib/ceph/osd/ceph-$id/keyring
> osd journal
Hi Rainer,
Try something like
dd if=/dev/zero of=/dev/sdX bs=4096
to wipe/zap any information on the disk.
HTH
Mehmet
On 14 February 2019 13:57:51 CET, Rainer Krienke wrote:
>Hi,
>
>I am quite new to ceph and just try to set up a ceph cluster. Initially
>I used ceph-deploy
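If ceph-volume is installed, a hedged alternative to zeroing the whole device
with dd is its built-in zap (the device name is an example):
ceph-volume lvm zap --destroy /dev/sdX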
Hello people,
On 11 February 2019 12:47:36 CET, c...@elchaka.de wrote:
>Hello Ashley,
>
>On 9 February 2019 17:30:31 CET, Ashley Merrick wrote:
>>What does the output of apt-get update look like on one of the nodes?
>>
>>You can just list the lines that ment
Hello Ashley,
On 9 February 2019 17:30:31 CET, Ashley Merrick wrote:
>What does the output of apt-get update look like on one of the nodes?
>
>You can just list the lines that mention CEPH
>
... .. .
Get:6 Https://Download.ceph.com/debian-luminous bionic InRelease [8393 B]
... ..
the Ubuntu repo’s or the CEPH
>18.04 repo.
>
>The updates will always be slower to reach you if you're waiting for it
>to
>hit the Ubuntu repo vs adding CEPH’s own.
>
>
>On Sun, 10 Feb 2019 at 12:19 AM, wrote:
>
>> Hello m8s,
>>
>> Im curious how we should d
Hello m8s,
I'm curious how we should do an upgrade of our ceph cluster on Ubuntu 16/18.04,
as (at least on our 18.04 nodes) we only have 12.2.7 (or .8?).
For an upgrade to Mimic we should first update to the latest version, actually
12.2.11 (iirc),
which is not possible on 18.04.
Is there an update
Hi
On 27 January 2019 18:20:24 CET, Will Dennis wrote:
>Been reading "Learning Ceph - Second Edition"
>(https://learning.oreilly.com/library/view/learning-ceph-/9781787127913/8f98bac7-44d4-45dc-b672-447d162ea604.xhtml)
>and in Ch. 4 I read this:
>
>"We've
e of LVM you have to specify the name of the Volume Group
>> : and the respective Logical Volume instead of the path, e.g.
>> :
>> : ceph-volume lvm prepare --bluestore --block.db ssd_vg/ssd00 --data
>/dev/sda
>>
>> Eugen,
>>
>> thanks, I will try it
e those OSDs, downgrade to 16.04 and re-add them,
>this is going to take a while.
>
>--Scott
>
>On Mon, Jan 14, 2019 at 10:53 AM Reed Dier
>wrote:
>>
>> This is because Luminous is not being built for Bionic for whatever
>reason.
>> There are some other mailing list en
part\\udumbo\\s180888654\\s20181221\\sxtrabackup\\ufull\\ux19\\u30044\\u20181221025000\\sx19.xbstream.2~ntwW9vwutbmOJ4bDZYehERT2AokbtAi.3595__head_4F7F0C29__184
>> all md5 is : 73281ed56c92a56da078b1ae52e888e0
>>
>> stat info is:
>> root@cld-osd3-48:/home/ceph/var/lib/osd/ceph-33/current/38
>on
>the rbd image it is using for the vm?
>
>I have already a vm running connected to the rbd pool via
>protocol='rbd', and rbd snap ls is showing for snapshots.
ct, I believe you have to implement that
on top of Ceph
For instance, let's say you simply create a pool, with an rbd volume in it.
You then create a filesystem on that, and map it on some server.
Finally, you can push your files onto that mountpoint, using various
Linux users, ACLs or whatever: beyond
in Nautilus, we'll be doing it for Octopus.
>
> Are there major python-{rbd,cephfs,rgw,rados} users that are still Python
> 2 that we need to be worried about? (OpenStack?)
>
> sage
Hi.
What makes us struggle / wonder again and again is the absence of CEPH __man
pages__. On *NIX systems, man pages are always the first way to go for help,
right? Or is this considered "old school" by the CEPH makers / community? :O
And as many people complain again and again, the sa
at
'2018-12-25 20:26'. It was then manually deep-scrubbed, also with no
errors, at '2018-12-25 21:47'.
In the past, when a read error occurs, the PG goes inconsistent and the
admin has to repair it. The client operations are unaffected, because
the data from the remaining 2 O
Hi again!
Prior to rebooting the client, I found this file (and its contents):
# cat
/sys/kernel/debug/ceph/8abf116d-a710-4245-811d-c08473cb9fb4.client7412370/osdc
REQUESTS 1 homeless 0
1459933 osd24.3120c635 [2,18,9]/2 [2,18,9]/2
rbd_data.6b60e8643c9869.000
noted the
usual kernel output regarding I/O errors on the disk. These errors
occured 1 second prior to the message being issued on the client. This
OSD has a drive that is developing bad sectors. This is known and
tolerated. The data sits in a pool with 3 replicas.
Normally, when I/O errors
n the logs we are seeing OOM
> killer. We didn't have this issue before the upgrade. The only difference is the
> nodes without any issue are R730xd and the ones with the memory leak are
> R740xd. The hardware vendor doesn't see anything wrong with the hardware. From
> Ceph end w
Hi,
If you are running Ceph Luminous or later, use the Ceph Manager Daemon's
Balancer module. (http://docs.ceph.com/docs/luminous/mgr/balancer/).
Otherwise, tweak the OSD weights (not the OSD CRUSH weights) until you
achieve uniformity. (You should be able to get under 1 STDDEV). I
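A hedged sketch of enabling the balancer module mentioned above (upmap mode
requires all clients to be Luminous or newer):
ceph osd set-require-min-compat-client luminous
ceph mgr module enable balancer
ceph balancer mode upmap
ceph balancer on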
Hi Cary,
I ran across your email on the ceph-users mailing list 'Signature check
failures.'.
I've just run across the same issue on my end. Also Gentoo user here.
Running Ceph 12.2.5... 32bit/armhf and 64bit/x86_64.
Was your environment mixed or strictly just x86
haw), but
> preceeded by an fstrim. With virtio-scsi, using fstrim propagates the
> discards from within the VM to Ceph RBD (if qemu is configured
> accordingly),
> and a lot of space is saved.
>
> We have yet to observe these hangs, we are running this with ~5 VMs with
> ~10 d
On 2018-12-17 20:16, Brad Hubbard wrote:
On Tue, Dec 18, 2018 at 10:23 AM Mike O'Connor wrote:
Hi All
I have a ceph cluster which has been working without issues for about 2
years now; it was upgraded about 6 months ago to 10.2.11.
root@blade3:/var/lib/ceph/mon# ceph status
2018-12-
r example, I want to remove OSD Server X with all its OSDs.
I am following these steps for all OSDs of Server X:
- ceph osd out
- Wait for rebalance (active+clean)
- On OSD: service ceph stop osd.
Once the steps above are performed, the following steps should be
performed:
- ceph osd cru
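For completeness (this is the usual sequence, not necessarily what the
truncated message above goes on to list; <id> is a placeholder):
ceph osd crush remove osd.<id>
ceph auth del osd.<id>
ceph osd rm osd.<id>
or, on Luminous and newer, the single command:
ceph osd purge <id> --yes-i-really-mean-it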
Hi,
I have some wild freezes using cephfs with the kernel driver.
For instance:
[Tue Dec 4 10:57:48 2018] libceph: mon1 10.5.0.88:6789 session lost,
hunting for new mon
[Tue Dec 4 10:57:48 2018] libceph: mon2 10.5.0.89:6789 session established
[Tue Dec 4 10:58:20 2018] ceph: mds0 caps stale
Isn't this a mgr variable ?
On 10/31/2018 02:49 PM, Steven Vacaroaia wrote:
> Hi,
>
> Any idea why different value for mon_max_pg_per_osd is not "recognized" ?
> I am using mimic 13.2.2
>
> Here is what I have in /etc/ceph/ceph.conf
>
>
IIRC there is a command like
ceph osd metadata
where you should be able to find information like this.
Hab
- Mehmet
On 21 October 2018 19:39:58 CEST, Robert Stanford wrote:
> I did exactly this when creating my osds, and found that my total
>utilization is about the same as the sum
Hello Roman,
I am not sure if I can be of help, but perhaps these commands can help to find
the objects in question...
ceph health detail
rados list-inconsistent-pg rbd
rados list-inconsistent-obj 2.10d
I guess it is also interesting to know whether you use bluestore or filestore...
Hth
- Mehmet
On
2018 at 4:25 AM Massimo Sgaravatto <
>massimo.sgarava...@gmail.com> wrote:
>
>> Hi
>>
>> I have a ceph cluster, running luminous, composed of 5 OSD nodes,
>which is
>> using filestore.
>> Each OSD node has 2 E5-2620 v4 processors, 64 GB of RAM, 10x6TB SATA
>
Hello Vikas,
Could you please tell us which commands you used to set up rbd-mirror?
It would be great if you could provide a short howto :)
Thanks in advance
- Mehmet
On 2 October 2018 22:47:08 CEST, Vikas Rana wrote:
>Hi,
>
>We have a CEPH 3 node cluster at primary site. We
As of today, there is no such feature in Ceph
Best regards,
On 09/27/2018 04:34 PM, Gaël THEROND wrote:
> Hi folks!
>
> As I'll soon start to work on a new really large and distributed CEPH
> project for cold data storage, I'm checking out a few features' availability
> and status
For cephfs & rgw, it all depends on your needs, as with rbd.
You may want to trust Ceph blindly,
or you may back up all your data, just in case (better safe than sorry,
as he said).
To my knowledge, there is little to no impact from keeping a large number
of snapshots on a cluster.
With rbd, you
Hi,
I assume that you are speaking of rbd only.
Taking snapshots of rbd volumes and keeping all of them on the cluster is
fine.
However, this is not a backup:
a snapshot is only a backup if it is exported off-site.
On 09/18/2018 11:54 AM, ST Wong (ITSC) wrote:
> Hi,
>
> We're newbie to
Hi Karri,
On 4 September 2018 23:30:01 CEST, Pardhiv Karri wrote:
>Hi,
>
>I created a ceph cluster manually (not using ceph-deploy). When I
>reboot
>the node, the OSDs don't come back up because the OS doesn't know that
>it
>needs to bring up the OSDs.
"event": "done"
>}
>]
>}
>},
>
>Seems like I have an operation that was delayed over 2 seconds in
>queued_for_pg state.
>What does that mean? What was it waiting for?
>
>Regards,
>*Ronnie Lazar*
>*R&D*
>
>T: +972 77 556-1727
>E: ron...@stratoscale.com
>
>
>Web <http://www.stratoscale.com/> | Blog
><http://www.stratoscale.com/blog/>
> | Twitter <https://twitter.com/Stratoscale> | Google+
><https://plus.google.com/u/1/b/108421603458396133912/108421603458396133912/posts>
> | Linkedin <https://www.linkedin.com/company/stratoscale>
age Weil" wrote:
>> >Hi everyone,
>> >
>> >Please help me welcome Mike Perez, the new Ceph community manager!
>> >
>> >Mike has a long history with Ceph: he started at DreamHost working
>on
>> >OpenStack and Ceph back in the early da
On 20 August 2018 17:22:35 CEST, Mehmet wrote:
>Hello,
Hello me,
>
>AFAIK removing big RBD images would lead ceph to produce blocked
>requests; I don't mean ones caused by poor disks.
>
>Is this still the case with "Luminous (12.2.4)"?
>
To answer my qu
uld be increased first;
not sure which one, but the docs and mailing list history should be helpful.
Hope I could give some useful hints.
- Mehmet
>Thanks,
>
>John
quot;, "file_number": 4350}
> -1> 2018-08-03 12:12:53.146753 7f12c38d0a80 0 osd.154 89917 load_pgs
> 0> 2018-08-03 12:12:57.526910 7f12c38d0a80 -1 *** Caught signal
>(Segmentation fault) **
> in thread 7f12c38d0a80 thread_name:ceph-osd
> ceph version 10.2.11 (e4b
On 1 August 2018 10:33:26 CEST, Jake Grimmett wrote:
>Dear All,
Hello Jake,
>
>Not sure if this is a bug, but when I add Intel Optane 900P drives,
>their device class is automatically set to SSD rather than NVME.
>
AFAIK ceph currently only differentiates between hdd and ssd
y predictions when the 12.2.8 release will be available?
>
>
>Micha Krause
Sounds like cephfs to me.
On 08/01/2018 09:33 AM, Will Zhao wrote:
> Hi:
>I want to use ceph rbd, because it shows better performance. But I don't
> like the kernel module and iSCSI target process. So here are my requirements:
>I don't want to map it and mount it, but I still want
nfirm your found "error" also here:
[root@sds20 ~]# ceph-detect-init
Traceback (most recent call last):
File "/usr/bin/ceph-detect-init", line 9, in
load_entry_point('ceph-detect-init==1.0.1', 'console_scripts',
'ceph-detect-init')()
Strange...
- wouldn't swear, but pretty sure v13.2.0 was working ok before
- so what do others say/see?
- no one on v13.2.1 so far (hard to believe) OR
- just don't have this "systemctl ceph-osd.target" problem and all just works?
If you also __MIGRATED__ from Luminous (sa
Have you guys changed something with the systemctl startup of the OSDs?
I've stopped and disabled all the OSDs on all my hosts via "systemctl
stop|disable ceph-osd.target" and rebooted all the nodes. Everything looks just
the same.
Then I started all the OSD daemons one after th
Hi Sage.
Sure. Any specific OSD(s) log(s)? Or just any?
Sent: Saturday, 28 July 2018 at 16:49
From: "Sage Weil"
To: ceph.nov...@habmalnefrage.de, ceph-users@lists.ceph.com,
ceph-de...@vger.kernel.org
Subject: Re: [ceph-users] HELP! --> CLUSER DOWN (was "v13.2.1 Mimic r
Dear users and developers.
I've updated our dev-cluster from v13.2.0 to v13.2.1 yesterday and since then
everything is badly broken.
I've restarted all Ceph components via "systemctl" and also rebooted the servers
SDS21 and SDS24; nothing changed.
This cluster started as Kr
Hi Team,
We need a mechanism to have some data cache on OSDs built on bluestore. Is
there an option available to enable the data cache?
With the default configuration, the OSD logs state that the data cache is
disabled by default:
bluestore(/var/lib/ceph/osd/ceph-66) _set_cache_sizes cache_size 1073741824
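For reference, a hedged sketch of the cache options behind that log line
(1073741824 is the 1 GiB default for HDD OSDs; the values below are examples,
not defaults, and on Nautilus the autotuner driven by osd_memory_target is
usually preferred):
[osd]
bluestore_cache_autotune = false
bluestore_cache_size_hdd = 3221225472
bluestore_cache_meta_ratio = 0.4
bluestore_cache_kv_ratio = 0.4
Whatever is left of the ratio budget is used for BlueStore's data cache.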
There was no change in the ZABBIX environment... I got this warning some
minutes after the Linux and Luminous->Mimic update via YUM and a reboot of all
the Ceph servers...
Is there anyone who also had the ZABBIX module enabled under Luminous AND then
migrated to Mimic? If yes, does it w
This is the problem, the zabbix_sender process is exiting with a
non-zero status.
You didn't change anything? You just upgraded from Luminous to Mimic and
this came along?
Wido
> ---