On Wed, Jan 9, 2013 at 8:00 AM, Noah Watkins <noah.watk...@inktank.com> wrote:
> Hi Jutta,
>
> On Wed, Jan 9, 2013 at 7:11 AM, Lachfeld, Jutta
> <jutta.lachf...@ts.fujitsu.com> wrote:
>>
>> The current content of the web page
>> http://ceph.com/docs/master/cephfs/hadoop shows a configuration parameter,
>> ceph.object.size.
>> Is it the Ceph equivalent of the "HDFS block size" parameter that I have
>> been looking for?
>
> Yes. When you specify ceph.object.size, Hadoop uses a default Ceph
> file layout with stripe unit = object size and stripe count = 1. This
> is effectively the same as dfs.block.size in HDFS.
>
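
For illustration, here is a minimal sketch of setting that property
through Hadoop's Configuration API. Only the property name
ceph.object.size comes from the docs page above; the class name and the
256 MB value are made up for the example:

    import org.apache.hadoop.conf.Configuration;

    public class CephObjectSizeExample {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // 256 MB objects; with stripe count = 1 this plays the same
            // role that dfs.block.size plays on HDFS.
            conf.setLong("ceph.object.size", 256L * 1024 * 1024);
            System.out.println(conf.get("ceph.object.size"));
        }
    }
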
>> Does the parameter ceph.object.size apply to version 0.56.1?
>
> The Ceph/Hadoop file system plugin is being developed here:
>
>   git://github.com/ceph/hadoop-common cephfs/branch-1.0
>
> There is an old version of the Hadoop plugin in the Ceph tree which
> will be removed shortly. Regarding versions, development is taking
> place in cephfs/branch-1.0 and in ceph.git master. We don't yet have a
> system in place for handling compatibility across versions because the
> code is under heavy development.

If you are using the old version in the Ceph tree, you should be
setting fs.ceph.blockSize rather than ceph.object.size. :)
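
For example (a hedged sketch; only the property name fs.ceph.blockSize
is from the message above, the class name and 64 MB value are
illustrative):

    import org.apache.hadoop.conf.Configuration;

    public class OldPluginExample {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // The old in-tree plugin reads fs.ceph.blockSize; the newer
            // cephfs/branch-1.0 plugin reads ceph.object.size instead.
            conf.setLong("fs.ceph.blockSize", 64L * 1024 * 1024);
        }
    }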


>> I would be interested in setting this parameter to values higher than 64MB,
>> e.g. 256MB or 512MB, similar to the values I have used for HDFS to increase
>> the performance of the TeraSort benchmark. Would these values be allowed,
>> and would they make sense at all for the mechanisms used in Ceph?
>
> I can't think of any reason why a large size would cause concern, but
> maybe someone else can chime in?

Yep, totally fine.
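
If you want to try different sizes without recompiling, a
ToolRunner-style driver can pick the value up from the command line.
This is just a sketch; BigObjectJob and the 512 MB default are
hypothetical, only ceph.object.size is the real property:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;

    // Run with, e.g.:
    //   hadoop jar myjob.jar BigObjectJob -D ceph.object.size=536870912 ...
    public class BigObjectJob extends Configured implements Tool {
        @Override
        public int run(String[] args) throws Exception {
            Configuration conf = getConf();
            // Fall back to 512 MB if nothing was passed with -D.
            long objectSize = conf.getLong("ceph.object.size",
                                           512L * 1024 * 1024);
            System.out.println("object size = " + objectSize);
            // ... set up and submit the actual job here ...
            return 0;
        }

        public static void main(String[] args) throws Exception {
            System.exit(ToolRunner.run(new BigObjectJob(), args));
        }
    }
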
-Greg