and IPv6 addresses
>> since it is failing to parse the address as valid. Perhaps it's barfing on
>> the "%eth0" scope id suffix within the address.
>>
>> On Mon, Jul 9, 2018 at 2:47 PM Kevin Olbrich wrote:
>>
>>> Hi!
>>>
>>> I tri
Is it possible to force-remove the lock or the image?
Kevin
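For reference, a hedged sketch of how a stale lock is usually inspected and broken with the rbd CLI (pool/image name, lock id and locker are placeholders; whether this gets around the IPv6 parsing bug discussed above is untested):

  $ rbd lock list rbd/vm-disk-1
  $ rbd lock remove rbd/vm-disk-1 "auto 12345" client.4567
  $ rbd rm rbd/vm-disk-1   # only if the image itself really has to go afterwards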
2018-07-09 21:14 GMT+02:00 Jason Dillaman :
> Hmm ... it looks like there is a bug w/ RBD locks and IPv6 addresses since
> it is failing to parse the address as valid. Perhaps it's barfing on the
> "%eth0" sc
s for this image?
Kind regards,
Kevin
'd
set up the VM with ceph-common, the conf and a restricted keyring, then
have icinga2 run an nrpe check on it that calls check_ceph, ceph -s,
or whatever.
Kevin
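A hedged sketch of that setup (the read-only mon cap is an assumption; the thread only shows that a client.icinga user exists):

  # on an admin node: create a restricted user and copy the keyring to the VM
  $ ceph auth get-or-create client.icinga mon 'allow r' -o /etc/ceph/ceph.client.icinga.keyring
  # on the monitored VM, the nrpe check can then run e.g.:
  $ ceph -s --id icinga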
On 06/19/2018 04:13 PM, Denny Fuchs wrote:
hi,
On 19.06.2018 at 17:17, Kevin Hrpcek wrote:
# ceph auth get client.icinga
e
e trying to
use the same thing as me.
Kevin
On 06/19/2018 07:17 AM, Denny Fuchs wrote:
Hi,
at the moment, we use Icinga2, check_ceph* and Telegraf with the Ceph
plugin. I'm asking what I need in order to have a separate host which knows
all about the Ceph cluster health. The reason is, that
d+backfill_wait
1 stale+remapped+peering
1 active+clean+remapped
1 active+remapped+backfilling
io:
client: 3896 GB/s rd, 339 GB/s wr, 8004 kop/s rd, 320 kop/s wr
recovery: 726 GB/s, 11172 objects/s
Thanks,
Kevin
n approach?
Kind regards,
Kevin
Really?
I always thought that splitting the replication network is best practice.
Keeping everything in the same IPv6 network is much easier.
Thank you.
Kevin
2018-06-07 10:44 GMT+02:00 Wido den Hollander :
>
>
> On 06/07/2018 09:46 AM, Kevin Olbrich wrote:
> > Hi!
> >
gards
Kevin
ng.
There are row(s) with "blocked by" but no value, is that supposed to be
filled with data?
Kind regards,
Kevin
2018-05-17 16:45 GMT+02:00 Paul Emmerich :
> Check ceph pg query, it will (usually) tell you why something is stuck
> inactive.
>
> Also: never do min_size 1.
>
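A hedged example of the query Paul refers to (the pg id is a placeholder); the "recovery_state" and "blocked_by" sections are usually the interesting parts:

  $ ceph pg dump_stuck inactive
  $ ceph pg 1.2f3 query | less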
but
why are they failing to proceed to active+clean or active+remapped?
Kind regards,
Kevin
2018-05-17 14:05 GMT+02:00 Kevin Olbrich :
> Ok, I just waited some time but I still got some "activating" issues:
>
> data:
> pools: 2 pools, 1536 pgs
> objec
max_pg_per_osd_hard_ratio 32'
Sure, mon_max_pg_per_osd is oversized but this is just temporary.
Calculated PGs per OSD is 200.
I searched the net and the bugtracker but most posts suggest
osd_max_pg_per_osd_hard_ratio
= 32 to fix this issue but this time, I got more stuck PGs.
Any more hints?
K
PS: The cluster is currently size 2. I used PGCalc on the Ceph website which, by
default, will place 200 PGs on each OSD.
I read about the protection in the docs and later noticed that I'd have been
better off placing only 100 PGs.
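For context, a hedged ceph.conf sketch of the overrides discussed above (the mon_max_pg_per_osd value is an assumption and the hard ratio of 32 is the one quoted in the thread; neither is a recommendation):

  [global]
  mon_max_pg_per_osd = 400
  osd_max_pg_per_osd_hard_ratio = 32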
2018-05-17 13:35 GMT+02:00 Kevin Olbrich :
> Hi!
>
> Thanks for your qu
33 active+remapped+backfilling
15 activating+undersized+degraded+remapped
A little bit better but still some non-active PGs.
I will investigate your other hints!
Thanks
Kevin
2018-05-17 13:30 GMT+02:00 Burkhard Linke <
burkhard.li...@computational.bio.uni-giessen.de>
only a problem during recovery and the cluster moves to OK after
rebalance, or can I take any action to unblock IO on the hdd pool?
This is a pre-prod cluster; it does not have the highest prio, but I would
appreciate it if we could use it before rebalancing is completed.
--bluestore --data
/dev/sde --block-db /dev/nvme0n1p1 --block-wal /dev/nvme0n1p1
node1001.ceph01.example.com
Seems related to:
http://tracker.ceph.com/issues/15386
I am using an Intel P3700 NVMe.
Any ideas?
Kind regards,
Kevin
>>What happens if the NVMe dies?
>You lost OSDs backed by that NVMe and need to re-add them to the cluster.
With the data still located on the OSD (recovery) or as a freshly formatted OSD?
Thank you.
- Kevin
2018-04-26 12:36 GMT+02:00 Serkan Çoban :
> >On bluestore, is it safe to move both Bloc
).
What happens if the NVMe dies?
Thank you.
- Kevin
Hi,
how can I backup the dmcrypt keys on luminous?
The folder under /etc/ceph does not exist anymore.
Kind regards
Kevin
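For what it's worth, ceph-disk on luminous normally keeps dmcrypt keys in the monitor config-key store rather than under /etc/ceph, so a backup sketch could look like this (the key paths depend on how the OSDs were created; treat this as an assumption to verify):

  $ ceph config-key ls | grep dm-crypt
  $ ceph config-key get dm-crypt/osd/<osd-fsid>/luks > osd-<osd-fsid>.luks.key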
Thanks for the input Greg, we've submitted the patch to the ceph github
repo https://github.com/ceph/ceph/pull/21222
Kevin
On 04/02/2018 01:10 PM, Gregory Farnum wrote:
On Mon, Apr 2, 2018 at 8:21 AM Kevin Hrpcek
<kevin.hrp...@ssec.wisc.edu> wrote:
Hello,
We
re just doing things different
compared to most users... Any insight would be appreciated as we'd
prefer to use an official solution rather than our bindings fix for long
term use.
Tested on Luminous 12.2.2 and 12.2.4.
Thanks,
Kevin
--
Kevin Hrpcek
Linux Systems Administrator
NASA SNPP
I found a fix: It is *mandatory* to set the public network to the same
network the mons use.
Skipping this while the mon has another network interface writes garbage into
the monmap.
- Kevin
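A hedged sketch of the ceph.conf fix (the subnet is a placeholder; it must be the network the mons actually live in):

  [global]
  public network = 192.0.2.0/24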
2018-02-23 11:38 GMT+01:00 Kevin Olbrich :
> I always see this:
>
> [mon01][DEBUG ] "mon
"0.0.0.0:0/2",
[mon01][DEBUG ] "rank": 2
[mon01][DEBUG ] }
[mon01][DEBUG ] ]
DNS is working fine and the hostnames are also listed in /etc/hosts.
I already purged the mon but still the same problem.
- Kevin
2018-02-23 10:26 GMT+01:00 Kevin Olbrich :
> Hi!
>
scope global
valid_lft forever preferred_lft forever
inet6 fe80::baae:edff:fee9:b661/64 scope link
valid_lft forever preferred_lft forever
Don't mind wlan0, that's because this node is built from an Intel NUC.
Any idea?
Kind regards
diff,object-map,deep-flatten on the image.
> Otherwise it runs well.
>
I always thought that the latest features are built into newer kernels; are
they available on non-HWE 4.4, HWE 4.8 or HWE 4.10?
Also I am researching for the OSD server side.
- Kevin
Would be interested as well.
- Kevin
2018-02-04 19:00 GMT+01:00 Yoann Moulin :
> Hello,
>
> What is the best kernel for Luminous on Ubuntu 16.04 ?
>
> Is linux-image-virtual-lts-xenial still the best one ? Or
> linux-virtual-hwe-16.04 will offer some improvement ?
>
>
artitions 1 - 2 were not added, they are (this disk
has only two partitions).
Should I open a bug?
Kind regards,
Kevin
2018-02-04 19:05 GMT+01:00 Kevin Olbrich :
> I also noticed there are no folders under /var/lib/ceph/osd/ ...
>
>
> Mit freundlichen Grüßen / best regards,
> Kevi
I also noticed there are no folders under /var/lib/ceph/osd/ ...
Mit freundlichen Grüßen / best regards,
Kevin Olbrich.
2018-02-04 19:01 GMT+01:00 Kevin Olbrich :
> Hi!
>
> Currently I try to re-deploy a cluster from filestore to bluestore.
> I zapped all disks (multiple times
te: Failed to activate
> [osd01.cloud.example.local][WARNIN] unmount: Unmounting
> /var/lib/ceph/tmp/mnt.pAfCl4
>
Same problem on 2x 14 disks. I was unable to get this cluster up.
Any ideas?
Kind regards,
Kevin
2018-02-02 12:44 GMT+01:00 Richard Hesketh :
> On 02/02/18 08:33, Kevin Olbrich wrote:
> > Hi!
> >
> > I am planning a new Flash-based cluster. In the past we used SAMSUNG
> PM863a 480G as journal drives in our HDD cluster.
> > After a lot of tests with luminous and
4.4.x-kernel. We plan to migrate
to Ubuntu 16.04.3 with HWE (kernel 4.10).
Clients will be Fedora 27 + OpenNebula.
Any comments?
Thank you.
Kind regards,
Kevin
e a ton of different configurations to test but I only did
a few focused on writes.
Kevin
R440, PERC H840 with 2 MD1400s attached, with 12 10TB NL-SAS drives per
MD1400. XFS filestore with a 10GB journal LV on each 10TB disk. Ceph
cluster set up as a single mon/mgr/osd server for testing. These tables
p
objects, 72319 MB
usage: 229 GB used, 39965 GB / 40195 GB avail
pgs: 6240 active+clean
Can someone suggest a way to improve this?
Thanks,
Kevin
It was a firewall issue on the controller nodes. After allowing the ceph-mgr
port in iptables, everything is displaying correctly. Thanks to the people on
IRC.
Thanks a lot,
Kevin
On Thu, Dec 21, 2017 at 5:24 PM, kevin parrikar
wrote:
> accidently removed mailing list email
>
> ++ceph-users
>
key: AQByfDparprIEBAAj7Pxdr/87/v0kmJV49aKpQ==
caps: [mds] allow *
caps: [mgr] allow *
caps: [mon] allow *
caps: [osd] allow *
Regards,
Kevin
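For reference, a hedged sketch of the caps command that produces a listing like the one above (the thread's own invocation is cut off):

  $ ceph auth caps client.admin mds 'allow *' mgr 'allow *' mon 'allow *' osd 'allow *'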
On Thu, Dec 21, 2017 at 8:10 AM, kevin parrikar
wrote:
> Thanks JC,
> I tried
> ceph auth caps client.admin o
17-12-21 02:39:10.622835 7fb40a22b700 0 Cannot get stat of OSD 141
Not sure what's wrong in my setup.
Regards,
Kevin
On Thu, Dec 21, 2017 at 2:37 AM, Jean-Charles Lopez
wrote:
> Hi,
>
> make sure client.admin user has an MGR cap using ceph auth list. At some
> point there was a glitch w
showing correct
values.
Can someone help me here, please?
Regards,
Kevin
28MB limit is a bit high but not unreasonable. If
you have an application written directly to librados that is using
objects larger than 128MB you may need to adjust osd_max_object_size"
Kevin
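If larger objects really are needed, a hedged ceph.conf sketch (the 256 MB value is an arbitrary example, not a recommendation):

  [osd]
  osd max object size = 268435456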
On 11/09/2017 02:01 PM, Marc Roos wrote:
I would like store objects with
rados -p ec32
quickly setting nodown,noout,noup when
everything is already down will help as well.
Sage, thanks again for your input and advice.
Kevin
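A hedged sketch of setting those flags quickly, and clearing them again once the cluster has settled:

  $ ceph osd set noout
  $ ceph osd set nodown
  $ ceph osd set noup
  # ... after the mons and OSDs are back ...
  $ ceph osd unset noup
  $ ceph osd unset nodown
  $ ceph osd unset noout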
On 11/04/2017 11:54 PM, Sage Weil wrote:
On Sat, 4 Nov 2017, Kevin Hrpcek wrote:
Hey Sage,
Thanks for getting back to me this late on a weekend.
Do you
d, a little tight on some of those early
gen servers, but I haven't seen OOM killing things off yet. I think I
saw mention of that patch and luminous handling this type of situation
better while googling the issue... larger osdmap increments or something
similar, if I recall correctly.
t daemons are simply
waiting for new maps. I can often see the "newest_map" incrementing on
osd daemons, but it is slow and some are behind by thousands.
Thanks,
Kevin
Cluster details:
CentOS 7.4
Kraken ceph-11.2.1-0.el7.x86_64
540 OSD, 3 mon/mgr/mds
~3.6PB, 72% raw used, ~40 million ob
ete...done.
$ rbd du
NAME        PROVISIONED    USED
child            10240k  10240k
parent@snap      10240k       0
parent           10240k       0
                 20480k  10240k
Is there any way to flatten a clone while retaining its sparseness, perhaps in
Luminous or with BlueStor
> match path" option) and such after upgrading from Hammer to Jewel? I am not
> sure if that matters here, but it might help if you elaborate on your
> upgrade process a bit.
>
> --Lincoln
>
> > On Sep 12, 2017, at 2:22 PM, kevin parrikar
> wrote:
> >
> >
Can someone please help me on this. I have no idea how to bring the
cluster up to an operational state.
Thanks,
Kev
On Tue, Sep 12, 2017 at 11:12 AM, kevin parrikar
wrote:
> hello All,
> I am trying to upgrade a small test setup having one monitor and one osd
> node which is in hamme
Hello All,
I am trying to upgrade a small test setup having one monitor and one OSD
node which is on the hammer release.
I updated from hammer to jewel using package update commands and things
are working.
However, after updating from Jewel to Luminous, I am facing issues with OSDs
failing to start
or not working?
Kind regards,
Kevin
Hello All,
I have 50 compute nodes in my environment which are running virtual
machines. I can add one more 10k RPM SAS disk and 1x10G interface to each
server, and thus there will be 50 OSDs running on 50 compute nodes. It's not
easy to obtain more servers for running Ceph, nor to take away servers fr
l entry in "df" and reboot fixed it.
Then OSDs were failing again. Cause: IPv6 DAD on bond-interface. Disabled
via sysctl.
Reboot and voila, cluster immediately online.
Kind regards,
Kevin.
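For reference, a hedged sketch of disabling DAD via sysctl (the interface name is an assumption):

  # /etc/sysctl.d/99-disable-dad.conf
  net.ipv6.conf.bond0.accept_dad = 0
  # apply with: sysctl --system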
2017-05-16 16:59 GMT+02:00 Kevin Olbrich :
> HI!
>
> Currently I am deploying a small c
Number  Start (sector)  End (sector)   Size        Code  Name
   1            2048      976642061    465.7 GiB         ceph data
I had problems with multipath in the past when running ceph but this time I
was unable to solve the problem.
Any ideas?
Kind regards,
Kevin.
_
hello All,
I am trying Ceph - Jewel on Ubuntu 16.04 with Kubernetes 1.6.2 and Docker
1.11.2
but for some unknown reason it's not coming up and crashes often; all ceph
commands are failing.
from *ceph-mon-check:*
kubectl logs -n ceph ceph-mon-check-3190136794-21xg4 -f
subprocess.CalledProcessError:
the OSDs out one by one, or with the nobackfill/norecover flags set but all
at once?
If the latter is the case, which flags should also be set?
Thanks!
Kind regards,
Kevin Olbrich.
hing).
I hope this helps all Ceph users who are interested in the idea of running
Ceph on ZFS.
Kind regards,
Kevin Olbrich.
more OSDs "per" node or more OSD "nodes".
Thanks a lot for all your help. Learned so many new things, thanks again.
Kevin
On Sat, Jan 7, 2017 at 7:33 PM, Lionel Bouton <
lionel-subscript...@bouton.name> wrote:
> On 07/01/2017 at 14:11, kevin parrikar wrote:
>
> T
bought the S3500 because the last time we tried ceph, people were
suggesting this model :) :)
Thanks a lot for your help
On Sat, Jan 7, 2017 at 6:01 PM, Lionel Bouton <
lionel-subscript...@bouton.name> wrote:
> Hi,
>
> On 07/01/2017 at 04:48, kevin parrikar wrote:
>
> i reall
m SEC .
I suppose this also shows slow performance.
Any idea where the issue could be?
I use an LSI 9260-4i controller (firmware 12.13.0.-0154) on both nodes
with write-back enabled. I am not sure if this controller is suitable for
ceph.
Regards,
Kevin
On Sat, Jan 7, 2017 at 1:23 PM, Mag
.
Regards,
Kevin
On Fri, Jan 6, 2017 at 4:42 PM, kevin parrikar
wrote:
> Thanks Christian for your valuable comments, each comment is a new learning
> for me.
> Please see inline
>
> On Fri, Jan 6, 2017 at 9:32 AM, Christian Balzer wrote:
>
>>
>> Hello,
>>
>&g
Thanks Christian for your valuable comments, each comment is a new learning
for me.
Please see inline
On Fri, Jan 6, 2017 at 9:32 AM, Christian Balzer wrote:
>
> Hello,
>
> On Fri, 6 Jan 2017 08:40:36 +0530 kevin parrikar wrote:
>
> > Hello All,
> >
> > I h
for your
suggestion.
Regards,
Kevin
On Fri, Jan 6, 2017 at 8:56 AM, jiajia zhong wrote:
>
>
> 2017-01-06 11:10 GMT+08:00 kevin parrikar :
>
>> Hello All,
>>
>> I have setup a ceph cluster based on 0.94.6 release in 2 servers each
>> with 80Gb intel s3510 and
understand this better.
Regards,
Kevin
2016-12-14 2:37 GMT+01:00 Christian Balzer :
>
> Hello,
>
Hi!
>
> On Wed, 14 Dec 2016 00:06:14 +0100 Kevin Olbrich wrote:
>
> > Ok, thanks for your explanation!
> > I read those warnings about size 2 + min_size 1 (we are using ZFS as
> RAID6,
> > called
Ok, thanks for your explanation!
I read those warnings about size 2 + min_size 1 (we are using ZFS as RAID6,
called raidz2) as OSDs.
Time to raise replication!
Kevin
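A hedged sketch of raising replication on an existing pool (pool name is a placeholder; expect data movement):

  $ ceph osd pool set <pool> size 3
  $ ceph osd pool set <pool> min_size 2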
2016-12-13 0:00 GMT+01:00 Christian Balzer :
> On Mon, 12 Dec 2016 22:41:41 +0100 Kevin Olbrich wrote:
>
> > Hi,
>
,
Kevin
is safe regardless of full outage.
Mit freundlichen Grüßen / best regards,
Kevin Olbrich.
2016-12-07 21:10 GMT+01:00 Wido den Hollander :
>
> > On 7 December 2016 at 21:04, "Will.Boege" wrote:
> >
> >
> > Hi Wido,
> >
> > Just curious how
I need to note that I already have 5 hosts with one OSD each.
Mit freundlichen Grüßen / best regards,
Kevin Olbrich.
2016-11-28 10:02 GMT+01:00 Kevin Olbrich :
> Hi!
>
> I want to deploy two nodes with 4 OSDs each. I already prepared OSDs and
> only need to activate them.
> What
Hi!
I want to deploy two nodes with 4 OSDs each. I already prepared OSDs and
only need to activate them.
What is better? One by one or all at once?
Kind regards,
Kevin.
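One hedged way to do it all at once while still controlling when data starts moving (this assumes ceph-disk-prepared OSDs, as was typical at the time; device names and OSD ids are placeholders):

  $ ceph osd set noin              # booting OSDs come up but stay "out"
  $ ceph-disk activate /dev/sdb1   # repeat for each prepared data partition
  $ ceph osd unset noin
  $ ceph osd in 5 6 7 8            # mark the new OSDs in together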
them run remote services (terminal).
My question is: Are 80 VMs hosted on 53 disks (mostly 7.2k SATA) too much?
We sometimes experience lags where nearly all servers suffer from blocked
IO > 32 seconds.
What are your experiences?
Mit freundlichen Grüßen / best regards,
Kevin Olbrich.
>
> Original Message
> Subject: Re: [ceph-users] degraded objects after osd add (17-Nov-2016 9:14)
> From:Burkhard Linke
> To: c...@dolphin-it.de
>
> Hi,
>
>
> On 11/17/2016 08:07 AM, Steffen Weißgerber wrot
OSDs (and setting size
to 3).
I want to make sure we can tolerate two offline hosts (in terms of hardware).
Is my assumption correct?
Mit freundlichen Grüßen / best regards,
Kevin Olbrich.
I have a 4-node cluster, each node with 5 disks (4 OSDs and 1 operating
system disk), also hosting 3 monitor processes, with default replica 3.
Total OSD disks: 16
Total Nodes: 4
How can I calculate the
- Maximum number of disk failures my cluster can handle without any
impact on current data and new
Thanks, I will follow this workaround.
On Thu, Mar 12, 2015 at 12:18 AM, Somnath Roy
wrote:
> Kevin,
>
> This is a known issue and should be fixed in the latest krbd. The problem
> is, it is not backported to 14.04 krbd yet. You need to build it from
> latest krbd source if yo
Hi,
I am trying hammer 0.93 on Ubuntu 14.04.
rbd is mapped in the client, which is also Ubuntu 14.04.
When I did a stop of ceph-osd-all and then a start, the client machine crashed
and the attached pic was in the console. Not sure if it's related to ceph.
Thanks
found from the mailing list) - this showed
some noticeable difference.
Will configuring the SSDs in RAID0 improve this, i.e. a single OSD on RAID0?
Regards,
Kevin
Can I ask what xio and simple messenger are, and what the differences are?
Kind regards
Kevin Walker
+968 9765 1742
On 1 Mar 2015, at 18:38, Alexandre DERUMIER wrote:
Hi Mark,
I found a previous bench from Vu Pham (it was about simplemessenger vs
xiomessenger)
http://www.spinics.net/lists
What about the Samsung 845DC Pro SSD's?
These have fantastic enterprise performance characteristics.
http://www.thessdreview.com/our-reviews/samsung-845dc-pro-review-800gb-class-leading-speed-endurance/
Kind regards
Kevin
On 28 February 2015 at 15:32, Philippe Schwarz wrote:
> --
vide FC targets, which adds
further power consumption.
Kind regards
Kevin Walker
+968 9765 1742
On 25 Feb 2015, at 04:40, Christian Balzer wrote:
On Wed, 25 Feb 2015 02:50:59 +0400 Kevin Walker wrote:
> Hi Mark
>
> Thanks for the info, 22k is not bad, but still massively below what
SSD's
for OSD's and RAM disk pcie devices for the Journals so this would be ok.
Kind regards
Kevin Walker
+968 9765 1742
On 25 Feb 2015, at 02:35, Mark Nelson wrote:
> On 02/24/2015 04:21 PM, Kevin Walker wrote:
> Hi All
>
> Just recently joined the list and have been
fragmentation problems other users have
experienced?
Kind regards
Kevin
Hi John,
I am using 0.56.1. Could it be because data striping is not supported in
this version?
Kevin
On Wed Dec 17 2014 at 4:00:15 AM PST Wido den Hollander
wrote:
> On 12/17/2014 12:35 PM, John Spray wrote:
> > On Wed, Dec 17, 2014 at 10:25 AM, Wido den Hollander
> wrote:
>
lying file system does not support xattr. Has
anyone ever run into a similar problem before?
I deployed CephFS on Debian wheezy.
And here is the mounting information:
ceph-fuse on /dfs type fuse.ceph-fuse
(rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
Many thanks,
Kev
Hello All,
Does anyone know how to configure data striping when using ceph as a file
system? My understanding is that configuring striping with rbd is only for the
block device.
Many thanks,
Kevin
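For what it's worth, on later CephFS releases striping is controlled through virtual xattrs on a mounted file system; a hedged sketch (mount point, directory and values are placeholders, and this mechanism may not exist on the 0.56.x release mentioned above):

  $ setfattr -n ceph.dir.layout.stripe_unit -v 1048576 /mnt/cephfs/mydir
  $ setfattr -n ceph.dir.layout.stripe_count -v 4 /mnt/cephfs/mydir
  $ getfattr -n ceph.dir.layout /mnt/cephfs/mydir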
5360 MB -- 85% avail
mon.cluster4-monitor004 store is getting too big! 93414 MB >= 15360 MB -- 69% avail
mon.cluster4-monitor005 store is getting too big! 88232 MB >= 15360 MB -- 71% avail
--
Kevin Sumner
ke...@sumner.io
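A hedged sketch of compacting the oversized mon stores (mon names taken from the warnings above; compaction can briefly stall the monitor):

  $ ceph tell mon.cluster4-monitor004 compact
  $ ceph tell mon.cluster4-monitor005 compact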
> On Dec 9, 2014, at 6:20 PM, Haomai Wang wrote:
>
> Mayb
nt io 3463 MB/s rd, 18710 kB/s wr, 7456 op/s
--
Kevin Sumner
ke...@sumner.io
Making mds cache size 5 million seems to have helped significantly, but we’re
still seeing issues occasionally on metadata reads while under load. Settings
over 5 million don’t seem to have any noticeable impact on this problem. I’m
starting the upgrade to Giant today.
--
Kevin Sumner
ke
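For reference, a hedged sketch of the setting being tuned (on pre-Luminous releases mds cache size counts inodes; 5 million is the value from the message, not a general recommendation):

  [mds]
  mds cache size = 5000000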
minute, so cache at 1
million is still undersized. If that doesn’t work, we’re running Firefly on
the cluster currently and I’ll be upgrading it to Giant.
--
Kevin Sumner
ke...@sumner.io
> On Nov 18, 2014, at 1:36 AM, Thomas Lemarchand
> wrote:
>
> Hi Kevin,
>
> There
> On Nov 17, 2014, at 15:52, Sage Weil wrote:
>
> On Mon, 17 Nov 2014, Kevin Sumner wrote:
>> I've got a test cluster together with ~500 OSDs, 5 MONs, and 1 MDS. All
>> the OSDs also mount CephFS at /ceph. I've got Graphite pointing at a space
>> under /ceph.
near-idle up to similar 100-150% CPU.
Hopefully, I’ve missed something in the CephFS tuning. However, I’m looking
for direction on figuring out if it is, indeed, a tuning problem or if this
behavior is a symptom of the “not ready for production” banner in the
documentation.
--
Kevin Sumner
ke
On 06/26/2014 01:08 PM, Gregory Farnum wrote:
On Thu, Jun 26, 2014 at 12:52 PM, Kevin Horan
wrote:
I am also getting inconsistent object errors on a regular basis, about 1-2
every week or so for about 300GB of data. All OSDs are using XFS
filesystems. Some OSDs are individual 3TB internal
that these are harmless and will go
away in a future version. I also looked in the monitor logs but didn't
see any reference to inconsistent or scrubbed objects.
Kevin
t the operation just
hangs.
Kevin
On 5/1/14 10:11 , kevin horan wrote:
Here is how I got into this state. I have only 6 OSDs total, 3 on
one host (vashti) and 3 on another (zadok). I set the noout flag
so I could reboot zadok. Zadok was down for 2 minutes. When it
came up
While everything was
moving from degraded to active+clean, it finally finished probing.
If it's still happening tomorrow, I'd try to find a Geeks on IRC
Duty (http://ceph.com/help/community/).
On 5/3/14 09:43 , Kevin Horan wrote:
Craig,
Thanks for your response
"incomplete": 0,
"last_epoch_started": 20323},
"recovery_state": [
{ "name": "Started\/Primary\/Active",
"enter_time": "2014-05-01 09:03:30.557244",
"might_have_unfound": [
{
On 19.04.2014 at 00:33, Josh Durgin wrote:
> On 04/18/2014 10:47 AM, Alexandre DERUMIER wrote:
> >Thanks Kevin for for the full explain!
> >
> >>>cache.writeback=on,cache.direct=off,cache.no-flush=off
> >
> >I didn't known about the cache option
ration with rbd and cache.direct=off.
> If yes, is it possible to manually disable writeback online with QMP?
No, such a QMP command doesn't exist, though it would be possible to
implement (for toggling cache.direct, that is; cache.writeback is guest
visible and
Ah, that sounds like what I want. I'll look into that, thanks.
Kevin
On 11/27/2013 11:37 AM, LaSalle, Jurvis wrote:
Is LUN masking an option in your SAN?
On 11/27/13, 2:34 PM, "Kevin Horan" wrote:
Thanks. I may have to go this route, but it seems awfully fragile. One
stray
drives, how do you limit
the visibility of the drives? Am I missing something here? Could there
be a configuration option or something added to ceph to ensure that it
never tries to mount things on its own?
Thanks.
Kevin
On 11/26/2013 05:14 PM, Kyle Bader wrote:
Is there any way to
r any help.
Kevin
bytes.
Am I reading this incorrectly?
--
Kevin Weiler
IT
IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL 60606 |
http://imc-chicago.com/
Phone: +1 312-204-7439 | Fax: +1 312-244-3301 | E-Mail:
kevin.wei...@imc-chicago.com