Make sure to check this blog page
http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
since I'm not sure if you are playing around with Ceph, or planning it for
production and good performance.
My experience SSD as journal: SSD Samsung 850 PRO =
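For reference, the blog linked above measures journal suitability with direct, synchronous 4k writes, since Ceph journal writes are O_DIRECT + O_DSYNC. A rough sketch of that kind of test (the device name /dev/sdX is a placeholder; this overwrites data, so only run it against a scratch SSD):

```shell
# DESTRUCTIVE: overwrites /dev/sdX - use a scratch SSD only.
# Ceph journal writes are sequential O_DIRECT + O_DSYNC, so test the same way:
dd if=/dev/zero of=/dev/sdX bs=4k count=100000 oflag=direct,dsync

# Roughly equivalent fio run (one job, queue depth 1, synced 4k writes):
fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 --runtime=60 --time_based --name=journal-test
```

Consumer SSDs that look fast in normal benchmarks often collapse under this synced-write pattern, which is exactly the failure mode discussed in this thread.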
Jiri,
if you colocate multiple journals on 1 SSD (we do...), make sure to understand
the following:
- if the SSD dies, all OSDs that had their journals on it are lost...
- the more journals you put on a single SSD (1 journal being 1 partition),
the worse the performance, since the total SSD performance is not
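The second point can be sketched numerically (a hypothetical illustration with placeholder numbers, not measured data): with N journal partitions sharing one SSD, each OSD's journal gets at best 1/N of the device's synchronous write bandwidth.

```shell
# Hypothetical illustration: best-case per-journal share of one SSD's
# synced write bandwidth when N journals are colocated on it.
ssd_mb_s=400       # placeholder: SSD O_DSYNC sequential write MB/s
n_journals=6       # journal partitions colocated on the SSD
echo "$((ssd_mb_s / n_journals)) MB/s per journal at best"   # -> 66 MB/s
```

In practice it is usually worse than this ideal split, since the synced journal writes from different OSDs interleave on the device.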
Hi,
depending on the cache mode etc. - from what we have also experienced (using
CloudStack) - Ceph snapshot functionality simply stops working in some
cache configurations.
This means we were also unable to deploy new VMs (the base-gold snapshot is
created on Ceph and a new data disk which is a child of
Another one bites the dust...
This is a Samsung 850 PRO 256GB... (6 journals were on this SSD, which just died...)
[root@cs23 ~]# smartctl -a /dev/sda
smartctl 5.43 2012-06-30 r3573 [x86_64-linux-3.10.66-1.el6.elrepo.x86_64]
(local build)
Copyright (C) 2002-12 by Bruce Allen,
d anyone with their DC-class drives actually in stock so
> I ended up switching to Intel S3700s. My users will be happy to have
> some SSDs to put in their workstations though!
>
> QH
>
> On Thu, Sep 17, 2015 at 4:49 PM, Andrija Panic <andrija.pa...@gmail.com>
>
"enough 4k read iop/s for multithreaded apps (around 23 000) with qemu
2.2.1."
That is a very nice number, if I'm allowed to comment - may I know what your
setup is (in 2 lines: hardware, number of OSDs)?
Thanks
On 10 September 2015 at 15:39, Jan Schermer wrote:
> Get faster
We also get 2ms for writes, Intel S3500 journals (5 journals on 1 SSD) and
4TB OSDs...
On 10 September 2015 at 16:41, Jan Schermer wrote:
> What did you tune? Did you have to make a human sacrifice? :) Which
> release?
> The last proper benchmark numbers I saw were from hammer
There is
http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
On the other hand, I'm not sure if SSD vendors would be happy to see their
device listed as performing total crap (for journaling)... but yes, I vote for
having some official page if
to be the case.
>
> QH
>
> On Fri, Sep 4, 2015 at 12:53 PM, Andrija Panic <andrija.pa...@gmail.com>
> wrote:
>
>> Hi James,
>>
>> I had 3 Ceph nodes as follows: 12 OSDs (HDD) and 2 SSDs (2x 6 journal
>> partitions on each SSD) - the SSDs just vanished with n
here is that you deploy Ceph with CloudStack , am I
> correct? The 2 SSDs that vanished in 2~3 weeks are brand new Samsung 850 Pro
> 128GB, right?
>
>
>
> Thanks,
>
> James
>
>
>
> *From:* Andrija Panic [mailto:andrija.pa...@gmail.com]
> *Sent:* Friday, S
; early next week. Hopefully they arrive before we have multiple nodes die at
> once and can no longer rebalance successfully.
>
>
>
> Most of the drives I have are the 850 Pro 128GB (specifically
> MZ7KE128HMGA)
>
> There are a couple 120GB 850 EVOs in there too, but ironicall
ince we had a specific use case.
>> The performance was better than our old setup so it was good enough.
>>
>> hth
>>
>>
>>
>> On Tue, Aug 25, 2015 at 12:07 PM, Andrija Panic <andrija.pa...@gmail.com>
>> wrote:
>>
>>> We have some 850 pr
Make sure you test whatever you decide. We just learned this the hard way
with the Samsung 850 Pro, which is total crap, more than you could imagine...
Andrija
On Aug 25, 2015 11:25 AM, Jan Schermer j...@schermer.cz wrote:
I would recommend Samsung 845 DC PRO (not EVO, not just PRO).
Very cheap,
and performance was acceptable... now we are upgrading to Intel S3500...
Best
any details on that ?
On Tue, 25 Aug 2015 11:42:47 +0200, Andrija Panic
andrija.pa...@gmail.com wrote:
Make sure you test whatever you decide. We just learned this the hard way
with the Samsung 850 Pro, which is total crap, more
. Yes, it's cheaper than the S3700 (about 2x), and
not as durable for writes, but we think it is better to replace 1 SSD per
year than to pay double the price now.
2015-08-25 12:59 GMT+03:00 Andrija Panic andrija.pa...@gmail.com:
And should I mention that in another Ceph installation we had Samsung 850
Pro 128GB and all 6 SSDs died in a 2-month period - they simply disappeared
from the system, so not worn out...
Never again will we buy Samsung :)
On Aug 25, 2015 11:57 AM, Andrija Panic andrija.pa...@gmail.com wrote:
First read
Guys,
I'm Igor's colleague, working a bit on Ceph, together with Igor.
This is a production cluster, and we are becoming more desperate as time
goes by.
I'm not sure if this is the appropriate place to seek commercial support, but
anyhow, I'll do it...
If anyone feels like and have some experience
This was related to the caching layer, which doesn't support snapshotting
per the docs... for the sake of closing the thread.
On 17 August 2015 at 21:15, Voloshanenko Igor igor.voloshane...@gmail.com
wrote:
Hi all, can you please help me with unexplained situation...
All snapshots inside Ceph are broken...
Well, seems like they are on satellite :)
On 6 May 2015 at 02:58, Matthew Monaco m...@monaco.cx wrote:
On 05/05/2015 08:55 AM, Andrija Panic wrote:
Hi,
small update:
in 3 months - we lost 5 out of 6 Samsung 128GB 850 PROs (just a few days in
between each SSD death) - can't believe
Hi,
small update:
in 3 months - we lost 5 out of 6 Samsung 128GB 850 PROs (just a few days in
between each SSD death) - can't believe it - NOT due to wearing out... I
really hope we got a defective series from the supplier...
Regards
On 18 April 2015 at 14:24, Andrija Panic andrija.pa...@gmail.com
yes I know, but too late now, I'm afraid :)
On 18 April 2015 at 14:18, Josef Johansson jose...@gmail.com wrote:
Have you looked into the Samsung 845 DC? They were not that expensive last
time I checked.
/Josef
On 18 Apr 2015 13:15, Andrija Panic andrija.pa...@gmail.com wrote:
might be true
inclined to suffer this fate.
Regards
Mark
On 18/04/15 22:23, Andrija Panic wrote:
these 2 drives are on the regular SATA (on-board) controller, and besides
this, there are 12 x 4TB on the front of the servers - a normal backplane on
the front.
Anyway, we are going to check those dead SSDs
...@me.com wrote:
On 17/04/2015, at 21.07, Andrija Panic andrija.pa...@gmail.com wrote:
nah... Samsung 850 PRO 128GB - dead after 3 months - 2 of these died...
the wear level is 96%, so only 4% worn... (yes I know these are not
enterprise, etc… )
Damn… but maybe your surname says it all
be a defect there as well.
On 18 Apr 2015 09:42, Steffen W Sørensen ste...@me.com wrote:
On 17/04/2015, at 21.07, Andrija Panic andrija.pa...@gmail.com wrote:
nah... Samsung 850 PRO 128GB - dead after 3 months - 2 of these died...
the wear level is 96%, so only 4% worn... (yes I know
Hi guys,
I have 1 SSD that hosted 6 OSDs' journals and is now dead, so 6 OSDs are down;
Ceph has rebalanced etc.
Now I have a new SSD in place, and I will partition it etc. - but I would like
to know how to proceed with the journal recreation for those 6 OSDs that
are down now.
Should I flush the journal
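Since the old journal device is gone, there is nothing left to flush. A commonly cited sequence for this era of Ceph (OSD id, cluster name, and partition are placeholders - a sketch, not the definitive procedure) is roughly:

```shell
# For each of the 6 down OSDs (example: osd.12, new journal on /dev/sdb1):
# 1. point the OSD's journal symlink at the new partition
ln -sf /dev/sdb1 /var/lib/ceph/osd/ceph-12/journal
# 2. create a fresh, empty journal for the OSD
ceph-osd -i 12 --mkjournal
# 3. start the OSD daemon again (pre-systemd init script of that era)
service ceph start osd.12
```

The OSDs then backfill from their peers; their data is intact, only the journals are recreated.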
, at 18:49, Andrija Panic andrija.pa...@gmail.com wrote:
12 OSDs down - I expect less work with removing and re-adding the OSDs?
On Apr 17, 2015 6:35 PM, Krzysztof Nowicki
krzysztof.a.nowi...@gmail.com wrote:
Why not just wipe out the OSD filesystem, run ceph-osd --mkfs with the
existing OSD UUID, copy
Thx guys, that's what I will be doing in the end.
Cheers
On Apr 17, 2015 6:24 PM, Robert LeBlanc rob...@leblancnet.us wrote:
Delete and re-add all six OSDs.
On Fri, Apr 17, 2015 at 3:36 AM, Andrija Panic andrija.pa...@gmail.com
wrote:
Hi guys,
I have 1 SSD that hosted 6 OSDs' journals
2015 at 18:31, Andrija Panic andrija.pa...@gmail.com
wrote:
Thx guys, that's what I will be doing in the end.
Cheers
On Apr 17, 2015 6:24 PM, Robert LeBlanc rob...@leblancnet.us wrote:
Delete and re-add all six OSDs.
On Fri, Apr 17, 2015 at 3:36 AM, Andrija Panic andrija.pa
on the SSD?
/Josef
On 17 Apr 2015 20:05, Andrija Panic andrija.pa...@gmail.com wrote:
An SSD died that hosted journals for 6 OSDs - 2 x SSDs died, so 12 OSDs are
down, and rebalancing is about to finish... after which I need to fix the OSDs.
On 17 April 2015 at 19:01, Josef Johansson jo...@oderland.se
) for about half a
year now. So far so good. I'll be keeping a closer look at them.
Fri, 17 Apr 2015, 21:07, Andrija Panic andrija.pa...@gmail.com
wrote:
nah... Samsung 850 PRO 128GB - dead after 3 months - 2 of these died...
the wear level is 96%, so only 4% worn... (yes I know
Hi all,
when I run:
ceph-deploy osd create SERVER:sdi:/dev/sdb5
(sdi = previously ZAP-ed 4TB drive)
(sdb5 = previously manually created empty partition with fdisk)
Is ceph-deploy going to create the journal properly on sdb5 (something similar
to: ceph-osd -i $ID --mkjournal), or do I need to do
was created properly. The OSD would
not start if the journal was not created.
On Fri, Apr 17, 2015 at 2:43 PM, Andrija Panic andrija.pa...@gmail.com
wrote:
Hi all,
when I run:
ceph-deploy osd create SERVER:sdi:/dev/sdb5
(sdi = previously ZAP-ed 4TB drive)
(sdb5 = previously manually created
Actually, good question - is RBD caching possible at all with Windows
guests, if using the latest VirtIO drivers?
Linux caching (write caching, writeback) is working fine with the newer
virtio drivers...
Thanks
On 18 March 2015 at 10:39, Alexandre DERUMIER aderum...@odiso.com wrote:
Hi,
I
The public network is client-to-OSD traffic - and if you have NOT explicitly
defined a cluster network, then OSD-to-OSD replication also takes place over
the same network.
Otherwise, you can define public and cluster (private) networks - so OSD
replication will happen over dedicated NICs (the cluster network)
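In ceph.conf terms that split looks roughly like this (the subnets are placeholders):

```ini
[global]
# client <-> MON/OSD traffic
public network  = 192.168.100.0/24
# OSD <-> OSD replication and backfill traffic
cluster network = 10.10.10.0/24
```

With only these two lines added, OSDs bind their replication traffic to the cluster subnet while clients keep talking over the public one.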
Changing the PG number causes a LOT of data rebalancing (in my case it was
80%), which I learned the hard way...
On 14 March 2015 at 18:49, Gabri Mate mailingl...@modernbiztonsag.org
wrote:
I had the same issue a few days ago. I was increasing the pg_num of one
pool from 512 to 1024 and all the VMs
This is how I did it: then restart each OSD one by one, but monitor
with ceph -s; when Ceph is healthy, proceed with the next OSD restart...
Make sure the networks are fine on the physical nodes, and that you can ping
between them...
[global]
x
x
x
x
x
x
#
###
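The restart-and-wait loop described above can be sketched as follows (assuming the pre-systemd init scripts of that era; the health check and sleep interval are arbitrary choices):

```shell
# Rolling restart: one OSD at a time, waiting for the cluster to settle.
for osd in $(ceph osd ls); do
    service ceph restart osd.$osd
    # do not touch the next OSD until the cluster reports healthy again
    while ! ceph health | grep -q HEALTH_OK; do
        sleep 10
    done
done
```

This keeps at most one OSD's PGs degraded at any moment, which is the point of restarting them one by one.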
Georgios,
no need to put ANYTHING in if you don't plan to split client-to-OSD vs
OSD-to-OSD replication across 2 different network cards/networks - for
performance reasons.
If you have only 1 network - simply DON'T configure networks at all inside
your ceph.conf file...
if you have 2 x 1G cards in servers,
In that case - yes... put everything on 1 card - or if both cards are 1G (or
the same speed for that matter...) - then you might want to block all external
traffic except e.g. SSH, web, but allow ALL traffic between all Ceph
OSDs... so you can still use that network for public/client traffic - not
sure
Georgeos
, you need to have a deployment server and cd into the folder that you used
originally while deploying Ceph - in this folder you should already have
ceph.conf, the client.admin keyring and other stuff - which is required to
connect to the cluster... and provision new MONs or OSDs, etc.
Message:
Check firewall - I hit this issue over and over again...
On 13 March 2015 at 22:25, Georgios Dimitrakakis gior...@acmac.uoc.gr
wrote:
On an already available cluster I've tried to add a new monitor!
I have used ceph-deploy mon create {NODE}
where {NODE}=the name of the node
and then I
Thanks Wido - I will do that.
On 13 March 2015 at 09:46, Wido den Hollander w...@42on.com wrote:
On 13-03-15 09:42, Andrija Panic wrote:
Hi all,
I have set nodeep-scrub and noscrub while I had small/slow hardware for
the cluster.
It has been off for a while now.
Now we
Hi all,
I have set nodeep-scrub and noscrub while I had small/slow hardware for the
cluster.
It has been off for a while now.
Now we are upgraded with hardware/networking/SSDs and I would like to
activate - or unset these flags.
Since I now have 3 servers with 12 OSDs each (SSD based Journals)
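Unsetting the flags is the mirror of setting them:

```shell
ceph osd unset noscrub
ceph osd unset nodeep-scrub
ceph -s    # the flags should no longer appear in the status output
```

Once unset, scrubs resume on their normal schedule; expect some extra disk load while the backlog of overdue scrubs is worked through.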
Nice - so I just realized I need to manually scrub 1216 placement groups :)
On 13 March 2015 at 10:16, Andrija Panic andrija.pa...@gmail.com wrote:
Thanks Wido - I will do that.
On 13 March 2015 at 09:46, Wido den Hollander w...@42on.com wrote:
On 13-03-15 09:42, Andrija Panic wrote
Hollander wrote:
On 13-03-15 09:42, Andrija Panic wrote:
Hi all,
I have set nodeep-scrub and noscrub while I had small/slow hardware for
the cluster.
It has been off for a while now.
Now we are upgraded with hardware/networking/SSDs and I would like to
activate - or unset these flags
Hm... nice. Thx guys
On 13 March 2015 at 12:33, Henrik Korkuc li...@kirneh.eu wrote:
I think settings apply to both kinds of scrubs
On 3/13/15 13:31, Andrija Panic wrote:
Interesting... thx for that, Henrik.
BTW, my placement groups are around 1800 objects each (ceph pg dump) -
meaning
Will do, of course :)
Thx Wido for the quick help, as always!
On 13 March 2015 at 12:04, Wido den Hollander w...@42on.com wrote:
On 13-03-15 12:00, Andrija Panic wrote:
Nice - so I just realized I need to manually scrub 1216 placement
groups :)
With manual I meant using a script.
Loop
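Such a loop might look like this (a sketch: the awk pattern assumes the usual pgid format like 0.3f in `ceph pg dump` output, and the sleep is an arbitrary pacing value):

```shell
# Deep-scrub every PG, paced so client IO is not starved.
for pg in $(ceph pg dump 2>/dev/null \
            | awk '$1 ~ /^[0-9]+\.[0-9a-f]+$/ {print $1}'); do
    ceph pg deep-scrub "$pg"
    sleep 30   # let each scrub get going before queueing the next
done
```

With ~1216 PGs this takes a while by design; the pacing is what keeps the cluster usable during the catch-up.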
Ceph is RAW format - it should be all fine... so the VM will be using that RAW
format
On 12 March 2015 at 09:03, Azad Aliyar azad.ali...@sparksupport.com wrote:
Community please explain the 2nd warning on this page:
http://ceph.com/docs/master/rbd/rbd-openstack/
Important Ceph doesn’t support QCOW2
Hi there,
just wanted to share some benchmark experience with RBD caching, which I
have just (partially) implemented. These are not nicely formatted results,
just raw numbers to understand the difference
*INFRASTRUCTURE:
- 3 hosts with: 12 x 4TB drives, 6 Journals on 1 SSD, 6 journals on
for some reason... it just stayed degraded... so
this is the reason why I started the OSD back up, and then set it to out...)
Thanks
On 4 March 2015 at 17:54, Andrija Panic andrija.pa...@gmail.com wrote:
Hi Robert,
I already have this stuff set. Ceph is 0.87.0 now...
Thanks, will schedule
at the same time.
If you try this, please report back on your experience. I might try it
in my lab, but I'm really busy at the moment so I don't know if I'll get to
it real soon.
On Thu, Mar 5, 2015 at 12:53 PM, Andrija Panic andrija.pa...@gmail.com
wrote:
Hi Robert,
it seems I have
are good to go by
restarting one OSD at a time.
On Wed, Mar 4, 2015 at 4:17 AM, Andrija Panic andrija.pa...@gmail.com
wrote:
Hi,
I'm running a live cluster with only a public network (so no explicit network
configuration in the ceph.conf file)
I'm wondering what is the procedure
configuration to all OSDs and restart them one by
one.
Make sure the network is of course up and running and it should work.
On Wed, Mar 4, 2015 at 4:17 AM, Andrija Panic andrija.pa...@gmail.com
wrote:
Hi,
I'm running a live cluster with only a public network (so no explicit network
configuration
. Tired, sorry...
On 4 March 2015 at 17:48, Andrija Panic andrija.pa...@gmail.com wrote:
That was my thought, yes - I found this blog that confirms what you are
saying I guess:
http://www.sebastien-han.fr/blog/2012/07/29/tip-ceph-public-slash-private-network-configuration/
I will do
, Andrija Panic andrija.pa...@gmail.com wrote:
That was my thought, yes - I found this blog that confirms what you are
saying I guess:
http://www.sebastien-han.fr/blog/2012/07/29/tip-ceph-public-slash-private-network-configuration/
I will do that... Thx
I guess it doesn't matter, since my Crush Map
you are running, but I think
there was some priority work done in firefly to help make backfills
lower priority. I think it has gotten better in later versions.
On Wed, Mar 4, 2015 at 1:35 AM, Andrija Panic andrija.pa...@gmail.com
wrote:
Thank you Robert - I'm wondering when I do remove total
adding new nodes, when nobackfill and norecover is set, you can
add them in so that the one big relocate fills the new drives too.
On Tue, Mar 3, 2015 at 5:58 AM, Andrija Panic andrija.pa...@gmail.com
wrote:
Thx Irek. Number of replicas is 3.
I have 3 servers with 2 OSDs on them on 1g switch
Hi,
I'm running a live cluster with only a public network (so no explicit network
configuration in the ceph.conf file)
I'm wondering what is the procedure to implement dedicated
Replication/Private and Public network.
I've read the manual, know how to do it in ceph.conf, but I'm wondering
since this
ceph]# ceph --admin-daemon /var/run/ceph/ceph-osd.94.asok
config show | grep osd_recovery_delay_start
osd_recovery_delay_start: 10
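The same value can be changed at runtime on all OSDs, and then verified through the admin socket as shown above:

```shell
# inject the setting into every running OSD daemon
ceph tell osd.* injectargs '--osd-recovery-delay-start 10'
# verify on one daemon via its admin socket
ceph --admin-daemon /var/run/ceph/ceph-osd.94.asok config show \
    | grep osd_recovery_delay_start
```

Note that injected values do not survive a daemon restart; persist them in ceph.conf as well if they should stick.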
2015-03-03 13:13 GMT+03:00 Andrija Panic andrija.pa...@gmail.com:
Hi guys,
Yesterday I removed 1 OSD from the cluster (out of 42 OSDs), and it caused
over 37% of
GMT+03:00 Andrija Panic andrija.pa...@gmail.com:
Hi Irek,
yes, stopping the OSD (or setting it to OUT) resulted in only 3% of data
degraded and moved/recovered.
When I afterwards removed it from the Crush map with ceph osd crush rm id,
that's when the stuff with 37% happened.
And thanks Irek for help
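For reference, the full removal sequence of that era looked roughly like this (osd id is a placeholder); the two separate rebalances come from the `out` step and the `crush remove` step changing the map in different ways:

```shell
ceph osd out 12                # small rebalance (the ~3% mentioned above)
service ceph stop osd.12       # stop the daemon
ceph osd crush remove osd.12   # reweights the host in CRUSH -> big rebalance
ceph auth del osd.12           # remove its auth key
ceph osd rm 12                 # remove it from the OSD map
```

Removing a whole node at once (as suggested later in the thread) collapses these reweighting steps into one data movement instead of one per disk.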
potentially go with 7 x the same
number of misplaced objects...?
Any thoughts ?
Thanks
On 3 March 2015 at 12:14, Andrija Panic andrija.pa...@gmail.com wrote:
Thanks Irek.
Does this mean that after peering, for each PG there will be a delay of
10 sec, meaning that every once in a while, I will have
, the correct option is to remove the entire node, rather than
each disk individually
2015-03-03 14:27 GMT+03:00 Andrija Panic andrija.pa...@gmail.com:
Another question - I mentioned here 37% of objects being moved around -
these are MISPLACED objects (degraded objects were 0.001%, after I removed
Hi guys,
Yesterday I removed 1 OSD from the cluster (out of 42 OSDs), and it caused
over 37% of the data to rebalance - let's say this is fine (this is when I
removed it from the Crush Map).
I'm wondering - I have previously set some throttling mechanisms, but during
the first 1h of rebalancing, my rate of
Hi people,
I had one OSD crash, so the rebalancing happened - all fine (some 3% of the
data was moved around and rebalanced) and my previous
recovery/backfill throttling was applied fine and we didn't have an unusable
cluster.
Now I used the procedure to remove this crashed OSD completely
, when my
cluster completely collapsed during data rebalancing...
I don't see any option to contribute to documentation ?
Best
On 2 March 2015 at 16:07, Wido den Hollander w...@42on.com wrote:
On 03/02/2015 03:56 PM, Andrija Panic wrote:
Hi people,
I had one OSD crash, so the rebalancing
:
Writes per RBD:
Writes per object:
Writes per length:
.
.
.
On 8 August 2014 16:01, Dan Van Der Ster daniel.vanders...@cern.ch wrote:
Hi,
On 08 Aug 2014, at 15:55, Andrija Panic andrija.pa...@gmail.com wrote:
Hi Dan,
thank you very much for the script, will check it out... no throttling
to fix this...?
Thanks,
Andrija
On 11 August 2014 12:46, Andrija Panic andrija.pa...@gmail.com wrote:
Hi Dan,
the script provided seems to not work on my ceph cluster :(
This is ceph version 0.80.3
I get empty results, on both debug level 10 and the maximum level of 20...
[root@cs1
/cernceph/ceph-scripts/blob/master/tools/rbd-io-stats.pl
Cheers, Dan
-- Dan van der Ster || Data Storage Services || CERN IT Department --
On 11 Aug 2014, at 12:48, Andrija Panic andrija.pa...@gmail.com wrote:
I apologize, I clicked the Send button too fast...
Anyway, I can see
Hi,
we just had some new clients, and have suffered a very big degradation in
Ceph performance for some reason (we are using CloudStack).
I'm wondering if there is a way to monitor OPs/s or similar usage per
connected client, so we can isolate the heavy client?
Also, what is the general best
Thanks Wido, yes I'm aware of CloudStack in that sense, but I would prefer
some precise OPs/s per Ceph image at least...
Will check CloudStack then...
Thx
On 8 August 2014 13:53, Wido den Hollander w...@42on.com wrote:
On 08/08/2014 01:51 PM, Andrija Panic wrote:
Hi,
we just had some new
- could not find
anything with google...
Thanks again Wido.
Andrija
On 8 August 2014 14:07, Wido den Hollander w...@42on.com wrote:
On 08/08/2014 02:02 PM, Andrija Panic wrote:
Thanks Wido, yes I'm aware of CloudStack in that sense, but would prefer
some precise OP/s per ceph Image at least
eat up the
entire iops capacity of the cluster.
Cheers, Dan
-- Dan van der Ster || Data Storage Services || CERN IT Department --
On 08 Aug 2014, at 13:51, Andrija Panic andrija.pa...@gmail.com wrote:
Hi,
we just had some new clients, and have suffered very big degradation in
CEPH
Thanks again, and btw, besides it being Friday I'm also on vacation - so
double the joy of troubleshooting performance problems :)))
Thx :)
On 8 August 2014 16:01, Dan Van Der Ster daniel.vanders...@cern.ch wrote:
Hi,
On 08 Aug 2014, at 15:55, Andrija Panic andrija.pa...@gmail.com wrote
Storage Services || CERN IT Department --
On 08 Aug 2014, at 13:51, Andrija Panic andrija.pa...@gmail.com
mailto:andrija.pa...@gmail.com wrote:
Hi,
we just had some new clients, and have suffered very big degradation
in CEPH performance for some reasons (we are using CloudStack).
I'm
Hi Sage,
can anyone validate whether there is still a bug inside the RPMs that does
an automatic Ceph service restart after updating packages?
We are instructed to first update/restart MONs, and after that the OSDs - but
that is impossible if we have MON+OSDs on the same host... since Ceph is
automatically
--
*From: *Quenten Grasso qgra...@onq.com.au
*To: *Andrija Panic andrija.pa
not perfect yet. :/
sage
On Sun, 13 Jul 2014, Andrija Panic wrote:
Hi,
after setting the ceph upgrade (0.72.2 to 0.80.3) I issued ceph osd crush
tunables optimal and after only a few minutes I added 2 more OSDs to the
CEPH cluster...
So these 2 changes were more or less done
in our upgrade process
2. What options should we have used to keep our VMs alive
Cheers
Andrei
--
*From: *Andrija Panic andrija.pa...@gmail.com
*To: *ceph-users@lists.ceph.com
*Sent: *Sunday, 13 July, 2014 9:54:17 PM
*Subject: *[ceph-users] ceph osd crush
of
overhead related to rebalancing... and it's clearly not perfect yet. :/
sage
On Sun, 13 Jul 2014, Andrija Panic wrote:
Hi,
after setting the ceph upgrade (0.72.2 to 0.80.3) I issued ceph osd crush
tunables optimal and after only a few minutes I added 2 more OSDs to the
CEPH cluster
Udo, I had all VMs completely non-operational - so don't set optimal for
now...
On 14 July 2014 20:48, Udo Lembke ulem...@polarzone.de wrote:
Hi,
which values are all changed with ceph osd crush tunables optimal?
Is it perhaps possible to change some parameter the weekends before the
upgrade
suggestion on need to recompile libvirt ? I got info from Wido, that
libvirt does NOT need to be recompiled
Best
On 13 July 2014 08:35, Mark Kirkwood mark.kirkw...@catalyst.net.nz wrote:
On 13/07/14 17:07, Andrija Panic wrote:
Hi,
Sorry to bother, but I have urgent situation: upgraded
for your time for my issue...
Best.
Andrija
On 13 July 2014 10:20, Mark Kirkwood mark.kirkw...@catalyst.net.nz wrote:
On 13/07/14 19:15, Mark Kirkwood wrote:
On 13/07/14 18:38, Andrija Panic wrote:
Any suggestion on need to recompile libvirt ? I got info from Wido, that
libvirt does
Hi,
after setting the ceph upgrade (0.72.2 to 0.80.3) I issued ceph osd crush
tunables optimal and after only a few minutes I added 2 more OSDs to
the CEPH cluster...
So these 2 changes were more or less done at the same time - rebalancing
because of tunables optimal, and rebalancing
daemons automatically ?
Since it makes sense to have all MONs updated first, and then the OSDs (and
perhaps after that the MDS if using it...)
Upgraded to 0.80.3 release btw.
Thanks for your help again.
Andrija
On 3 July 2014 15:21, Andrija Panic andrija.pa...@gmail.com wrote:
Thanks again a lot.
On 3
Hi,
Sorry to bother you, but I have an urgent situation: upgraded Ceph from 0.72
to 0.80 (CentOS 6.5), and now none of my CloudStack hosts can connect.
I did a basic yum update ceph on the first MON leader, and all Ceph
services on that host were restarted - did the same on the other Ceph nodes
(I have
/02/2014 04:08 PM, Andrija Panic wrote:
Hi,
I have an existing Ceph cluster of 3 nodes, version 0.72.2
I'm in the process of installing Ceph on a 4th node, but now the Ceph version
is 0.80.1
Will this cause problems running mixed Ceph versions?
No, but the recommendation is not to have this running
Thanks a lot Wido, will do...
Andrija
On 3 July 2014 13:12, Wido den Hollander w...@42on.com wrote:
On 07/03/2014 10:59 AM, Andrija Panic wrote:
Hi Wido, thanks for answers - I have mons and OSD on each host...
server1: mon + 2 OSDs, same for server2 and server3.
Any Proposed upgrade
Wido,
one final question:
since I compiled libvirt 1.2.3 using ceph-devel 0.72 - do I need to
recompile libvirt again now with ceph-devel 0.80?
Perhaps not a smart question, but I need to make sure I don't screw something up...
Thanks for your time,
Andrija
On 3 July 2014 14:27, Andrija Panic
Thanks again a lot.
On 3 July 2014 15:20, Wido den Hollander w...@42on.com wrote:
On 07/03/2014 03:07 PM, Andrija Panic wrote:
Wido,
one final question:
since I compiled libvirt 1.2.3 using ceph-devel 0.72 - do I need to
recompile libvirt again now with ceph-devel 0.80?
Perhaps
Hi,
I have an existing Ceph cluster of 3 nodes, version 0.72.2
I'm in the process of installing Ceph on a 4th node, but now the Ceph version
is 0.80.1
Will this cause problems running mixed Ceph versions?
I intend to upgrade Ceph on the existing 3 nodes anyway.
Recommended steps?
Thanks
--
Andrija
...@inktank.com wrote:
Try running ceph health detail on each of the monitors. Your disk space
thresholds probably aren't configured correctly or something.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Tue, Jun 17, 2014 at 2:09 AM, Andrija Panic andrija.pa...@gmail.com
wrote
As stupid as I could do it...
After lowering the mon data . threshold from 20% to 15%, it seems I forgot
to restart the MON service on this one node...
I apologize for bugging you, and thanks again everybody.
Andrija
On 18 June 2014 09:49, Andrija Panic andrija.pa...@gmail.com wrote:
Hi Gregory
Thanks Greg, seems like I'm going to update soon...
Thanks again,
Andrija
On 18 June 2014 14:06, Gregory Farnum g...@inktank.com wrote:
The lack of warnings in ceph -w for this issue is a bug in Emperor.
It's resolved in Firefly.
-Greg
On Wed, Jun 18, 2014 at 3:49 AM, Andrija Panic
Hi,
I have a 3-node (2 OSDs per node) Ceph cluster, running fine, not much data,
network also fine:
Ceph ceph-0.72.2.
When I issue the ceph status command, I randomly get HEALTH_OK, and
immediately after that, when repeating the command, I get HEALTH_WARN
Example given below - these commands were issued
:44 +0200 Andrija Panic wrote:
Hi,
I have a 3-node (2 OSDs per node) Ceph cluster, running fine, not much data,
network also fine:
Ceph ceph-0.72.2.
When I issue the ceph status command, I randomly get HEALTH_OK, and
immediately after that, when repeating the command, I get HEALTH_WARN
Example
be a disk space issue.
Regards,
*Stanislav Yanchev*
Core System Administrator
[image: MAX TELECOM]
Mobile: +359 882 549 441
s.yanc...@maxtelecom.bg
www.maxtelecom.bg
*From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf
Of *Andrija Panic
*Sent:* Tuesday, June 17, 2014
Try 3.x from elrepo repo...works for me, cloudstack/ceph...
Sent from Google Nexus 4
On May 14, 2014 11:56 AM, maoqi1982 maoqi1...@126.com wrote:
Hi list
our Ceph (0.72) cluster on Ubuntu 12.04 is OK. The client server runs
OpenStack, installed on CentOS 6.4 final; the kernel is up to
Hi,
just to share my issue with the qemu-img provided by Ceph (Red Hat made the
problem, not Ceph):
the newest qemu-img - /qemu-img-0.12.1.2-2.415.el6.3ceph.x86_64.rpm - was built
from RHEL 6.5 source code, where Red Hat removed the -s parameter, so
snapshotting in CloudStack up to 4.2.1 does not work, I
Mapping an RBD image to 2 or more servers is the same as having a shared
storage device (SAN) - so from there on, you could do any clustering you want,
based on what Wido said...
On 7 May 2014 12:43, Andrei Mikhailovsky and...@arhont.com wrote:
Wido, would this work if I were to run nfs over two or
elegant than these manual steps...
Cheers
On 6 May 2014 12:52, Gandalf Corvotempesta
gandalf.corvotempe...@gmail.comwrote:
2014-05-06 12:39 GMT+02:00 Andrija Panic andrija.pa...@gmail.com:
Good question - I'm also interested. Do you want to move the journal to a
dedicated disk/partition, i.e. on an SSD?
wrote:
On 05/05/2014 11:40 PM, Andrija Panic wrote:
Hi Wido,
thanks again for inputs.
Everything is fine, except for the Software Router - it doesn't seem to
get created on CEPH, no matter what I try.
There is a separate offering for the VR, have you checked that?
But this is more