254 bytes, which means that if there exist too many
fragments, the overlaps are not recorded.
Regards
Ning Yao
2015-03-09 13:35 GMT+08:00 Haomai Wang haomaiw...@gmail.com:
On Mon, Mar 9, 2015 at 1:26 PM, Nicheal zay11...@gmail.com wrote:
2015-03-07 16:43 GMT+08:00 Haomai Wang haomaiw...@gmail.com:
On Sat, Mar 7, 2015 at 12:03 AM, Sage Weil sw...@redhat.com wrote:
Hi!
[copying ceph-devel]
On Fri, 6 Mar 2015, Nicheal wrote:
Hi Sage,
Cool for issue #3878, the duplicated pg_log write, which was posted earlier in
my issue #3244
and failed to
record some transactions. You'll want to make sure your
filestore/filesystem/disk configuration isn't causing inconsistencies.
-Sam
On Tue, Jan 6, 2015 at 7:54 PM, Nicheal zay11...@gmail.com wrote:
Hi all,
I cannot restart some OSDs after a flapping of all 54 OSDs. The log
is shown below:
-9 2015-01-06 10:53:07.976997 7f35695177a0 20 read_log
31150'2273018 (31150'2273012) modify
4a8b7974/rb.0.bc58e8.6b8b4567.3e37/head//2 by
client.16829289.1:720306459 2015-01-05
Dear developers,
I find that SNAPDIR denotes a special object in Ceph. It is a read-only
object: if a write op tries to modify a SNAPDIR object, it returns
-EINVAL immediately. But I cannot find when we need to create the
SNAPDIR object, and what is it used for?
Based on my understanding, ceph
Hi, all
Is there any guideline that describes how to run the Ceph unit tests, and
their basic architecture?
--
To unsubscribe from this list: send the line unsubscribe ceph-devel in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
to the hot pool and
saving it into the kv-store? Expecting Haomai Wang's input.
Nicheal,
Regards
Thanks.
___
ceph-users mailing list
ceph-us...@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
--
Wido den Hollander
Ceph
2014-10-14 21:22 GMT+08:00 Sage Weil s...@newdream.net:
On Tue, 14 Oct 2014, Mark Nelson wrote:
On 10/14/2014 12:15 AM, Nicheal wrote:
A large number of inodes will cause a longer time to sync, but submitting a
batch of writes to disk is always faster than submitting individual small
I/O updates to the disk.
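The batching claim can be seen with a small timing sketch in plain Python (this is not Ceph code; the record count and the 254-byte record size are illustrative assumptions):

```python
# Compare fsync-per-record against one fsync for the whole batch.
import os
import tempfile
import time

def write_records(path, records, fsync_each):
    """Write `records` to `path`; fsync after every record, or once at the end."""
    with open(path, "wb") as f:
        for rec in records:
            f.write(rec)
            if fsync_each:
                f.flush()
                os.fsync(f.fileno())
        if not fsync_each:
            f.flush()
            os.fsync(f.fileno())

# 200 small 254-byte records (sizes chosen only for illustration).
records = [b"x" * 254] * 200

with tempfile.TemporaryDirectory() as d:
    t0 = time.perf_counter()
    write_records(os.path.join(d, "per_record"), records, fsync_each=True)
    per_record_s = time.perf_counter() - t0

    t0 = time.perf_counter()
    write_records(os.path.join(d, "batched"), records, fsync_each=False)
    batched_s = time.perf_counter() - t0

    size_per_record = os.path.getsize(os.path.join(d, "per_record"))
    size_batched = os.path.getsize(os.path.join(d, "batched"))

print(f"fsync per record: {per_record_s:.3f}s, single fsync: {batched_s:.3f}s")
```

On ordinary disks the per-record variant pays one device flush per 254-byte write, so the batched variant typically completes far sooner while producing the same file contents.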
Nicheal
Yes, Greg.
But Unix-based systems always have a parameter, dirty_ratio, to prevent
system memory from being exhausted. If the journal is so fast that the
backing store cannot catch up with it, then backing-store writes will be
blocked by the hard limit on system dirty pages.
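The dirty-page limit referred to above is governed on Linux by the vm.dirty_ratio and vm.dirty_background_ratio sysctls. A hedged sketch for inspecting them; the fallback defaults (10/20) are common kernel defaults, assumed here only for systems where procfs is unavailable:

```python
# Inspect the Linux dirty-page throttling knobs discussed above.
# The /proc/sys/vm paths are standard on Linux; the fallback defaults
# are assumptions used only when the files cannot be read.

def read_vm_knob(name, default):
    """Return the integer value of /proc/sys/vm/<name>, or `default`."""
    try:
        with open(f"/proc/sys/vm/{name}") as f:
            return int(f.read().strip())
    except (OSError, ValueError):
        return default

# Background writeback starts once dirty_background_ratio percent of memory
# is dirty; writers are blocked (throttled) once dirty_ratio percent is dirty.
background_pct = read_vm_knob("dirty_background_ratio", 10)
blocking_pct = read_vm_knob("dirty_ratio", 20)
print(f"dirty_background_ratio={background_pct}% dirty_ratio={blocking_pct}%")
```

This blocking threshold is what stalls backing-store writes in the scenario above: once the journal has pushed the dirty-page percentage past dirty_ratio, further buffered writes block until writeback catches up.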
always lock the index before lookup() path or lfn_find()?
Any other reasons?
Nicheal