Thanks for your valuable information.
-
Hemant Surale
On Thu, Aug 30, 2012 at 4:37 PM, Sylvain Munaut
s.mun...@whatever-company.com wrote:
I was able to increase the pg_num property, but the only problem is
reducing it. Anyway, thanks for your valuable reply.
AFAIK the old tool allowed you to
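(For reference, raising the placement group count is a per-pool setting; the
pool name 'data' below is just an example. Decreasing pg_num is not supported,
so the usual workaround is to create a new pool with the desired count and
migrate the data:)

    ceph osd pool set data pg_num 256     # raise placement group count
    ceph osd pool set data pgp_num 256    # raise placement target count to match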
RBD waits for the data to be on disk on all replicas. It's pretty easy
to relax this to in memory on all replicas, but there's no option for
that right now.
Ok, thanks, I missed that.
When you say disk, do you mean the journal?
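(For what it's worth, the distinction shows up directly in the librados C API:
a completion is 'complete' once the write is acked - in memory on all replicas -
and 'safe' once it is on disk on all replicas, and RBD waits for the latter.
A minimal sketch, error checks omitted; the pool and object names are made up:)

    #include <rados/librados.h>

    int main(void)
    {
            rados_t cluster;
            rados_ioctx_t io;
            rados_completion_t comp;
            const char buf[] = "hello";

            rados_create(&cluster, NULL);         /* connect as client.admin */
            rados_conf_read_file(cluster, NULL);  /* default ceph.conf search path */
            rados_connect(cluster);
            rados_ioctx_create(cluster, "rbd", &io);

            /* no callbacks; poll with the wait_for_* calls instead */
            rados_aio_create_completion(NULL, NULL, NULL, &comp);
            rados_aio_write(io, "testobj", comp, buf, sizeof(buf), 0);

            rados_aio_wait_for_complete(comp); /* acked: in memory on all replicas */
            rados_aio_wait_for_safe(comp);     /* committed: on disk on all replicas */

            rados_aio_release(comp);
            rados_ioctx_destroy(io);
            rados_shutdown(cluster);
            return 0;
    }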
----- Original Message -----
From: Josh Durgin josh.dur...@inktank.com
To:
RBD waits for the data to be on disk on all replicas. It's pretty easy
to relax this to in memory on all replicas, but there's no option for
that right now.
I thought that was dangerous, because you can lose data?
On 31/08/12 20:11, Dietmar Maurer wrote:
RBD waits for the data to be on disk on all replicas. It's pretty easy
to relax this to in memory on all replicas, but there's no option for
that right now.
I thought that was dangerous, because you can lose data?
Hi,
Is there any known memory issue with mon? We have 3 mons running, and
one keeps crashing after 2 or 3 days, and I think it's because the mon
sucks up all the memory.
Here's mon after starting for 10 minutes:
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
13700 root
Mark, Inktank,
OK, it is very likely that 'sync_file_range' is not the major slowdown
'culprit'.
But which areas (design, current implementation, protocol, interconnect,
tuning parameters, ...) would you rate as the major slowdown effect(s)?
Best Regards,
-Dieter
On Fri, Aug 31, 2012 at
Sorry Dieter,
Not trying to say you are wrong or anything like that - just trying to
add to the problem-solving body of knowledge: from what *I* have
tried out, the 'sync' issue does not look to be the bad guy here - although
more analysis is always welcome (usual story - my findings should
On Fri, 31 Aug 2012, Dietmar Maurer wrote:
RBD waits for the data to be on disk on all replicas. It's pretty easy
to relax this to in memory on all replicas, but there's no option for
that right now.
I thought that was dangerous, because you can lose data?
By putting the journal in a tmpfs
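(A hedged ceph.conf sketch of that setup - with the obvious caveat that a
journal in RAM means data loss on power failure:)

    [osd]
        ; journal on tmpfs: fast acks, but lost on power failure
        osd journal = /dev/shm/osd.$id.journal
        osd journal size = 512    ; MB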
Hi,
Ceph storage on each disk in the cluster is very unbalanced. On each
node, the data seems to go to one or two disks, while other disks
are almost empty.
I can't find anything wrong from the crush map, it's just the
default for now. Attached is the crush map.
Here is the current situation
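(The usual first look at this kind of imbalance is to dump the hierarchy,
weights, and per-PG placement - a sketch, output omitted here:)

    ceph osd tree    # crush hierarchy and weights
    ceph osd dump    # pool replication and pg counts
    ceph pg dump     # per-pg placement and sizes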
Hi all,
Excellent points all. Many thanks for the clarification.
*Tommi* - Yep... I wrote file but had object in mind. However... now that
you bring up the distinction... I may actually mean file. (see below) I don't
know Zooko personally, but will definitely pass it on if I meet him!
On Fri, Aug 31, 2012 at 10:37 AM, Stephen Perkins perk...@netmass.com wrote:
Would this require 2 clusters because of the need to have RADOS keep N
copies on one and 1 copy on the other?
That's doable with just multiple RADOS pools, no need for multiple clusters.
And CephFS is even able to
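(A sketch of the multiple-pool setup with the CLI; the pool names, pg counts,
and replica sizes below are made up:)

    ceph osd pool create safe 128      # new pool, 128 pgs
    ceph osd pool set safe size 3      # keep 3 replicas
    ceph osd pool create cheap 128
    ceph osd pool set cheap size 1     # keep a single copy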
On 8/31/2012 7:11 AM, Xiaopong Tran wrote:
Hi,
Ceph storage on each disk in the cluster is very unbalanced. On each
node, the data seems to go to one or two disks, while other disks
are almost empty.
I can't find anything wrong from the crush map, it's just the
default for now. Attached is the
On Aug 31, 2012, at 11:15 AM, Tommi Virtanen wrote:
On Fri, Aug 31, 2012 at 10:37 AM, Stephen Perkins perk...@netmass.com wrote:
Would this require 2 clusters because of the need to have RADOS keep N
copies on one and 1 copy on the other?
That's doable with just multiple RADOS pools, no
On Fri, 31 Aug 2012, Xiaopong Tran wrote:
Hi,
Ceph storage on each disk in the cluster is very unbalanced. On each
node, the data seems to go to one or two disks, while other disks
are almost empty.
I can't find anything wrong from the crush map, it's just the
default for now. Attached
On Fri, Aug 31, 2012 at 11:59 AM, Atchley, Scott atchle...@ornl.gov wrote:
I think what he is looking for is not to bring data to a client to convert
between replication and erasure coding, but to have the servers do it based
on some metric _or_ have the client indicate which file needs to
On Fri, 31 Aug 2012, Andrew Thompson wrote:
On 8/31/2012 7:11 AM, Xiaopong Tran wrote:
Hi,
Ceph storage on each disk in the cluster is very unbalanced. On each
node, the data seems to go to one or two disks, while other disks
are almost empty.
I can't find anything wrong from the
On Fri, 31 Aug 2012, Tim Flavin wrote:
I build ceph from a tar file, currently ceph-0.48.1argonaut. After
doing a make install, everything winds up in the right place except
for the startup file /etc/init.d/ceph. I have to manually copy
~/ceph-0.48.1argonaut/src/init-ceph to /etc/init.d.
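(i.e. roughly this after make install, assuming the tarball layout above:)

    install -m 0755 ~/ceph-0.48.1argonaut/src/init-ceph /etc/init.d/ceph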
On Fri, Aug 31, 2012 at 9:24 AM, Andrew Thompson andre...@aktzero.com wrote:
On 8/31/2012 12:10 PM, Sage Weil wrote:
On Fri, 31 Aug 2012, Andrew Thompson wrote:
Have you been reweight-ing osds? I went round and round with my cluster a
few days ago reloading different crush maps only to find
On Fri, 31 Aug 2012, Andrew Thompson wrote:
On 8/31/2012 12:10 PM, Sage Weil wrote:
On Fri, 31 Aug 2012, Andrew Thompson wrote:
Have you been reweight-ing osds? I went round and round with my cluster a
few days ago reloading different crush maps only to find that it
re-injecting a
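(For reference, the two reweighting knobs under discussion - the osd id and
weights below are examples:)

    ceph osd reweight 3 0.8            # temporary in/out weight, 0.0-1.0
    ceph osd crush reweight osd.3 1.0  # persistent crush weight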
This patch is a follow up on below patch:
[PATCH] exportfs: add FILEID_INVALID to indicate invalid fid_type
https://patchwork.kernel.org/patch/1385131/
Signed-off-by: Namjae Jeon linkinj...@gmail.com
Signed-off-by: Vivek Trivedi vtrivedi...@gmail.com
---
fs/btrfs/export.c |4 ++--
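(Presumably the hunks have this shape, swapping the magic 255 return value in
btrfs_encode_fh for the new define - a sketch, not the actual diff:)

    	if (len < BTRFS_FID_SIZE_NON_CONNECTABLE) {
    		*max_len = BTRFS_FID_SIZE_NON_CONNECTABLE;
    -		return 255;
    +		return FILEID_INVALID;
    	}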
I have this problem too. My mons in a 0.48.1 cluster have 10GB RAM
each, with 78 OSDs and 2k requests per minute (max) in radosgw.
Now I have one running via valgrind. I will send the output when the mon's memory grows.
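(In case anyone wants to do the same, roughly this - the mon id 'a' is an
example:)

    valgrind --tool=massif ceph-mon -i a -f
    ms_print massif.out.<pid>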
On Fri, Aug 31, 2012 at 6:03 PM, Sage Weil s...@inktank.com wrote:
On Fri, 31 Aug 2012, Xiaopong
Hi John,
On Fri, 31 Aug 2012, John C. Wright wrote:
An update,
While looking into how to switch my ceph cluster over to a different
network - another question altogether - I discovered that during my upgrade from
0.47.2 to 0.48 on the three nodes, somehow my 'git checkout 0.48' wasn't done
Okay, it's trivial to change 'pool' to 'root' in the default generated
crush map, and update all the docs accordingly. The problem is that some
stuff built on top of ceph has 'pool=default' in there, including our chef
cookbooks and those that other people are building.
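(For context, the bucket in question as it appears in a decompiled default
map - a sketch, contents vary per cluster:)

    # current generated form
    pool default {
            id -1
            alg straw
            hash 0
            item host1 weight 1.000
    }

    # would become 'root default { ... }'; rules still say 'step take default'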
I see a few options:
On 09/01/2012 12:05 AM, Sage Weil wrote:
On Fri, 31 Aug 2012, Xiaopong Tran wrote:
Hi,
Ceph storage on each disk in the cluster is very unbalanced. On each
node, the data seems to go to one or two disks, while other disks
are almost empty.
I can't find anything wrong from the crush map, it's
On Sat, 1 Sep 2012, Xiaopong Tran wrote:
On 09/01/2012 12:05 AM, Sage Weil wrote:
On Fri, 31 Aug 2012, Xiaopong Tran wrote:
Hi,
Ceph storage on each disk in the cluster is very unbalanced. On each
node, the data seems to go to one or two disks, while other disks
are almost
On Sat, 1 Sep 2012, Xiaopong Tran wrote:
On 09/01/2012 12:39 AM, Gregory Farnum wrote:
On Fri, Aug 31, 2012 at 9:24 AM, Andrew Thompson andre...@aktzero.com
wrote:
On 8/31/2012 12:10 PM, Sage Weil wrote:
On Fri, 31 Aug 2012, Andrew Thompson wrote:
Have you been
On 09/01/2012 11:05 AM, Sage Weil wrote:
On Sat, 1 Sep 2012, Xiaopong Tran wrote:
On 09/01/2012 12:05 AM, Sage Weil wrote:
On Fri, 31 Aug 2012, Xiaopong Tran wrote:
Hi,
Ceph storage on each disk in the cluster is very unbalanced. On each
node, the data seems to go to one or two disks, while