I may have expressed myself wrong. You don't need to run any test to see how
locality works with files of multiple blocks. If you are accessing a file
of more than one block over WebHDFS, locality is only assured for the
first block of the file.
Thanks.
On Sun, Mar 16, 2014 at 9:18 PM, RJ
Hi, thanks for your reply.
What I did:
[speech@h14 mahout]$ /usr/share/apache-maven/bin/mvn -DskipTests clean
install -Dhadoop2.profile=hadoop2 (is "hadoop2" the right string? I found it
in the pom's profiles section, so I used it)
...
it compiled:
[INFO]
Okay, sorry for the mess.
[speech@h14 mahout]$ /usr/share/apache-maven/bin/mvn -DskipTests clean
install -Dhadoop2.version=2.2.0 did the trick.
Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
Hi all,
I'm running the jobclient tests (on a single node). Other tests like TestDFSIO
and mrbench succeed, but nnbench does not.
I got a lot of exceptions, but without any explanation (see below).
Could anyone tell me what might have gone wrong?
Thanks!
14/03/17 23:54:22 INFO hdfs.NNBench: Waiting in barrier for:
Hi Alejandro,
The WebHDFS API allows specifying an offset and length for the request. If
I specify an offset that starts in the second block of a file (thus
skipping the first block altogether), will the namenode still direct me
to a datanode with the first block or will it direct me to a
I don't recall how skips are handled in WebHDFS, but I would assume that you'll
get to the first block as usual, and the skip is handled by the DN serving the
file (as WebHDFS does not know at open time that you'll skip).
Alejandro
(phone typing)
On Mar 17, 2014, at 9:47, RJ Nowling rnowl...@gmail.com
Hello,
I was wondering if anyone might know of a way to write bytes directly to the
distributed cache. I know I can call job.addCacheFile(URI uri), but in my
case the file I wish to add to the cache is in memory and is job specific.
I would prefer not writing it to a location that I have to then
The file offset is considered in WebHDFS redirection. It redirects to a
datanode with the first block the client is going to read, not the first block of
the file.
Hope it helps.
Tsz-Wo
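To make the redirection behaviour concrete, here is a minimal sketch of such a ranged read against the WebHDFS REST API. The namenode address, file path, and sizes are all made up; adjust them for your cluster:

```shell
# Hypothetical namenode address and file path; adjust for your cluster.
NN="http://namenode:50070"
FILE="/data/big.bin"
OFFSET=134217728   # 128 MB: the start of the second block at the default block size
LENGTH=1048576     # read 1 MB

# The namenode replies with a 307 redirect to a datanode that holds the
# block containing OFFSET, not necessarily the first block of the file.
URL="${NN}/webhdfs/v1${FILE}?op=OPEN&offset=${OFFSET}&length=${LENGTH}"
echo "$URL"

# Against a live cluster, follow the redirect and fetch the byte range:
# curl -L "$URL" -o slice.bin
```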
On Monday, March 17, 2014 10:09 AM, Alejandro Abdelnur t...@cloudera.com
wrote:
actually, i am wrong, the
Hi,
I believe there is an issue with the yarn-default.xml setting of
yarn.application.classpath on hadoop-2.3.0. This parameter's default is not set,
and I have an application that fails because of that. Below is part of the
content of yarn-default.xml, which shows an empty value for that
Thank you, Tsz. That helps!
On Mon, Mar 17, 2014 at 2:30 PM, Tsz Wo Sze szets...@yahoo.com wrote:
The file offset is considered in WebHDFS redirection. It redirects to a
datanode with the first block the client is going to read, not the first block
of the file.
Hope it helps.
Tsz-Wo
This is intentional.
See https://issues.apache.org/jira/browse/YARN-1138 for the details.
If you want to use the default parameter for your application,
you should write the same parameter to your config file, or you can use
YarnConfiguration.DEFAULT_YARN_APPLICATION_CLASSPATH instead of
using
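For reference, restoring the old behaviour amounts to setting the property explicitly. A sketch for yarn-site.xml, assuming the standard hadoop-2.x directory layout (the value mirrors the defaults listed in the yarn-default.xml description; check it against your distribution):

```xml
<property>
  <name>yarn.application.classpath</name>
  <value>$HADOOP_CONF_DIR,
    $HADOOP_COMMON_HOME/share/hadoop/common/*,
    $HADOOP_COMMON_HOME/share/hadoop/common/lib/*,
    $HADOOP_HDFS_HOME/share/hadoop/hdfs/*,
    $HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,
    $HADOOP_YARN_HOME/share/hadoop/yarn/*,
    $HADOOP_YARN_HOME/share/hadoop/yarn/lib/*</value>
</property>
```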
download it, unzip and put it back?
Regards,
*Stanley Shi,*
On Fri, Mar 14, 2014 at 5:44 PM, Sai Sai saigr...@yahoo.in wrote:
Can someone please help:
how do I unzip a .tar.bz2 file that is in Hadoop HDFS?
Thanks
Sai
You want to use the BZip2Codec to decompress (un-bzip2) the file, and then use
FileUtil to untar it.
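The same thing can also be done from the command line by streaming the archive out of HDFS and letting tar handle both the bzip2 decompression and the unpacking. A minimal sketch (the HDFS path is made up), followed by a purely local demonstration of the tar flags:

```shell
# Against a real cluster (path is hypothetical):
# hdfs dfs -cat /user/sai/data.tar.bz2 | tar -xjf -

# The tar mechanics, demonstrated locally; -j selects bzip2, -x extracts.
mkdir -p demo && echo "hello" > demo/file.txt
tar -cjf data.tar.bz2 demo/file.txt   # pack demo/file.txt into a .tar.bz2
rm -rf demo                           # remove the original
tar -xjf data.tar.bz2                 # unpack: recreates demo/file.txt
cat demo/file.txt
```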
Anthony Mattas
anth...@mattas.net
On Mon, Mar 17, 2014 at 10:06 PM, Stanley Shi s...@gopivotal.com wrote:
download it, unzip and put it back?
Regards,
*Stanley Shi,*
On Fri, Mar 14, 2014 at 5:44 PM,
This JIRA is included in Apache code since versions
0.21.0 (https://issues.apache.org/jira/browse/HDFS/fixforversion/12314046),
1.2.0 (https://issues.apache.org/jira/browse/HDFS/fixforversion/12321657),
and 1-win (https://issues.apache.org/jira/browse/HDFS/fixforversion/12320362);
If you want to use it,
One possible reason is that you didn't set the namenode working directory;
by default it's under the /tmp folder, and /tmp might get cleaned out
by the OS without any notification. If this is the case, I'm afraid you
have lost all your namenode data.
<property>
  <name>dfs.name.dir</name>