Hi Ali,
Which version of Ceph are you using? Are there any re-spawning OSDs?
Regards
K.Mohamed Pakkeer
On Mon, May 18, 2015 at 2:23 PM, Ali Hussein <
ali.alkhazr...@earthlinktele.com> wrote:
> *Hi all*
>
> I have two Ceph monitors working fine; I added them a while ago. For
> now I have
CPU speed as per your recommendation.
Please advise.
Cheers
K.Mohamed Pakkeer
On Fri, Apr 10, 2015 at 7:19 PM, Mark Nelson wrote:
>
>
> On 04/10/2015 02:56 AM, Mohamed Pakkeer wrote:
>
>> Hi Blazer,
>>
>> Ceph recommends 1 GHz of CPU power per OSD. Is it applicab
>
> · Cache tier capacity would exceed 80% only if the flushing
> process couldn’t keep up with the ingest process for fairly long periods of
> time (at the observed ingest rate of ~400 MB/sec, a few hundred seconds).
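(A rough worked number, assuming a hypothetical ~100 GB of capacity between the
normal fill level and the 80% threshold: 100 GB / 400 MB/sec ≈ 250 seconds,
i.e. the "few hundred seconds" mentioned above.)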
>
>
>
> Am I misunderstanding someth
Hi Don,
Did you configure target_dirty_ratio, target_full_ratio and
target_max_bytes?
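For reference, a minimal sketch of setting them on the cache pool (the pool
name "hot-cache" and the values are only illustrative; the pool-level settings
use the cache_target_* names):

ceph osd pool set hot-cache cache_target_dirty_ratio 0.4
ceph osd pool set hot-cache cache_target_full_ratio 0.8
ceph osd pool set hot-cache target_max_bytes 1099511627776   # ~1 TB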
K.Mohamed Pakkeer
On Thu, Apr 30, 2015 at 10:26 PM, Don Doerner
wrote:
> All,
>
>
>
> Synopsis: I can’t get cache tiering to work in HAMMER on RHEL7.
>
>
>
> Process:
>
> 1. Fresh install of HAMMER on
Hi all
The issue was resolved after upgrading Ceph from Giant to Hammer (0.94.1).
Cheers
K.Mohamed Pakkeer
On Sun, Apr 26, 2015 at 11:28 AM, Mohamed Pakkeer
wrote:
> Hi
>
> I was doing some testing on an erasure-coded CephFS cluster. The cluster
> is running with Giant 0.
Hi
I was doing some testing on an erasure-coded CephFS cluster. The cluster is
running the Giant 0.87.1 release.
Cluster info
15 * 36-drive nodes (journal on the same OSD)
3 * 4-drive SSD cache nodes (Intel DC S3500)
3 * MON/MDS
EC 10+3
10G Ethernet for public and cluster networks
We got app
Hi sage,
When can we expect the fully functional fsck for CephFS? Can we get it in the
next major release? Is there any roadmap or time frame for the fully
functional fsck release?
Thanks & Regards
K.Mohamed Pakkeer
On 21 Apr 2015 20:57, "Sage Weil" wrote:
> On Tue, 21 Apr 2015, Ray Sun wrote:
> > C
oding, is that the increase in the total number of shards increases the
> CPU requirements, so it's not a simple black-and-white answer.
>
> Nick
>
> > -Original Message-
> > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> > M
Thank you for your reply.
> I thought there is a SAS expander on the backplanes of the SC847, so all
> drives can be run. Am I wrong?
>
> thanks,
> Markus
>
> On 09.04.2015 at 10:24, Mohamed Pakkeer wrote:
>
> Hi Markus,
>
> X10DRH-CT can support only 16 drives as
Hi Markus,
The X10DRH-CT can support only 16 drives by default. If you want to connect
more drives, there is a special SKU with more drive support from Supermicro,
or you need an additional SAS controller. We are using 2 * E5-2630 v3
(8 cores, 2.4 GHz) for 30 drives on an SM X10DRI-T. It is working perfectly on repl
Hi Karan,
We faced the same issue and resolved it after increasing the open file limit
and the maximum number of threads.
Config reference
/etc/security/limits.conf
root hard nofile 65535
sysctl -w kernel.pid_max=4194303
http://tracker.ceph.com/issues/10554#change-47024
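To make those limits persist across reboots, something like the following
sketch can be used (values copied from above; the soft nofile line is an
assumption):

# /etc/security/limits.conf
root hard nofile 65535
root soft nofile 65535

# /etc/sysctl.conf, applied with "sysctl -p"
kernel.pid_max = 4194303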
Cheers
Mohamed Pakkeer
On Mon, Mar 9
Regards
K.Mohamed Pakkeer
On Mon, Feb 16, 2015 at 8:14 PM, Joao Eduardo Luis wrote:
> On 02/16/2015 12:57 PM, Mohamed Pakkeer wrote:
>
>>
>> Hi ceph-experts,
>>
>> We are getting a "store is getting too big" warning on our test cluster.
>> Cluster is running
Hi ceph-experts,
We are getting a "store is getting too big" warning on our test cluster. The
cluster is running the Giant release and is configured with an EC pool to test CephFS.
cluster c2a97a2f-fdc7-4eb5-82ef-70c52f2eceb1
health HEALTH_WARN too few pgs per osd (0 < min 20); mon.master01
store is getting too
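If the monitor store keeps growing, one generic option (a sketch, not
necessarily what was advised in this thread) is to compact its leveldb store:

# one-off compaction of the named monitor
ceph tell mon.master01 compact

# or compact automatically at every monitor (re)start, in ceph.conf:
[mon]
    mon compact on start = true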
pgs=0 cs=0 l=0 c=0xc59c580).accept connect_seq 0 vs existing 0 state connecting
2015-02-13 19:10:38.505886 7f2e3b98e700 0 -- 10.1.100.14:6838/2074180 >>
10.1.100.2:6831/14199331 pipe(0xbe4a940 sd=585 :6838 s=0 pgs=0 cs=0 l=0
c=0xafb4580).accept connect_s
--More--
Regards
K.Mohamed Pakke
Hi all,
Cluster: 540 OSDs, cache tier and EC pool
ceph version 0.87
cluster c2a97a2f-fdc7-4eb5-82ef-70c52f2eceb1
health HEALTH_WARN 10 pgs peering; 21 pgs stale; 2 pgs stuck inactive;
2 pgs stuck unclean; 287 requests are blocked > 32 sec; recovery 24/6707031
objects degraded (0.000%); to
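For reference, a generic sketch of the first commands to inspect such a state
(nothing here is specific to this cluster):

ceph health detail
ceph pg dump_stuck stale
ceph pg dump_stuck inactive
ceph pg dump_stuck unclean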
Hi Greg,
Do you have any idea about the health warning?
Regards
K.Mohamed Pakkeer
On Tue, Feb 10, 2015 at 4:49 PM, Mohamed Pakkeer
wrote:
> Hi
>
> We have created an EC pool (k=10 and m=3) with 540 OSDs. We followed the
> following rule to calculate the PG count for
Hi
We have created an EC pool (k=10 and m=3) with 540 OSDs. We followed the
following rule to calculate the PG count for the EC pool.
Total PGs = (OSDs * 100) / pool size
Where *pool size* is either the number of replicas for replicated poo
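As a worked example for this cluster (using the usual round-up-to-a-power-of-two
convention from the Ceph docs): with 540 OSDs and pool size = k + m = 13,
Total PGs = (540 * 100) / 13 ≈ 4154, which rounds up to 8192.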
Hi all,
We are building an EC cluster with a cache tier for CephFS. We are planning to
use the following 1U chassis along with Intel SSD DC S3700 drives for the cache
tier. It has 10 * 2.5" slots. Could you recommend a suitable Intel processor and
amount of RAM to cater for 10 SSDs?
http://www.supermicro.com/prod
but the RADOS EC interface is way more limited than the replicated
> one, and we have a lot of other things we'd like to get right first.
> :)
> -Greg
>
> On Tue, Jan 20, 2015 at 9:53 PM, Mohamed Pakkeer
> wrote:
> > Hi Greg,
> >
> > Thanks for your reply. Can w
test
peta-byte scale CephFS cluster with erasure coded pool.
-Mohammed Pakkeer
On Wed, Jan 21, 2015 at 9:11 AM, Gregory Farnum wrote:
> On Tue, Jan 20, 2015 at 5:48 AM, Mohamed Pakkeer
> wrote:
> >
> > Hi all,
> >
> > We are trying to create 2 PB scale Ceph s
Hi all,
We are trying to create a 2 PB scale Ceph storage cluster for file system
access using erasure-coded profiles in the Giant release. Can we create an
erasure-coded pool (k+m = 10+3) for data and a replicated (4 replicas) pool
for metadata for creating CephFS? What are the pros and cons of using two
d
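For context, a minimal sketch of that layout (pool names, PG counts and the
profile name are purely illustrative, and, as discussed elsewhere in these
threads, the EC data pool is fronted by a replicated cache tier):

# erasure coded data pool, k=10 m=3
ceph osd erasure-code-profile set profile13 k=10 m=3
ceph osd pool create ecdata 8192 8192 erasure profile13

# replicated cache tier in front of the EC data pool
ceph osd pool create cachepool 2048 2048 replicated
ceph osd tier add ecdata cachepool
ceph osd tier cache-mode cachepool writeback
ceph osd tier set-overlay ecdata cachepool

# replicated metadata pool with 4 copies
ceph osd pool create cephfs_metadata 512 512 replicated
ceph osd pool set cephfs_metadata size 4

# create the filesystem
ceph fs new cephfs cephfs_metadata ecdata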