Hi Sachin,
On Mon, Dec 18, 2017 at 9:09 AM, Sachin Tiwari wrote:
> Hi
>
> I am trying to use Hadoop as a distributed file storage system.
>
> I did a POC with a small cluster with 1 name-node and 4 datanodes, and I
> was able to get/put files using the hdfs client and monitor the datanodes'
> status on:
Hi
I am trying to use Hadoop as a distributed file storage system.
I did a POC with a small cluster with 1 name-node and 4 datanodes, and I was
able to get/put files using the hdfs client and monitor the datanodes'
status on: http://master-machine:50070/dfshealth.html
However, I have a few open questions.
Those are just test failures. I suggest you skip the tests as you did
earlier and run mvn clean install.
On Tue, Mar 11, 2014 at 5:10 AM, Avinash Kujur wrote:
> After executing this command:
> mvn clean install
>
> I am getting this error.
>
> Failed tests:
> TestMetricsSystemImpl.testMultiTh
After executing this command:
mvn clean install
I am getting this error.
Failed tests:
TestMetricsSystemImpl.testMultiThreadedPublish:232 expected:<0> but
was:<5>
TestNetUtils.testNormalizeHostName:617 null
TestFsShellReturnCode.testGetWithInvalidSourcePathShouldNotDisplayNullInConsole:307
You must be using Java 1.5 or below, where @Override is not allowed on a
method that implements its counterpart from an interface.
Remember, both 1.5 and 1.6 are EOL, so I would suggest upgrading to 1.7.
Oleg
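To illustrate the compiler-version difference Oleg describes, here is a minimal sketch (the interface and class names are made up for the example) that compiles under Java 6 and later but is rejected by a Java 5 compiler:

```java
interface Greeter {
    String greet(String name);
}

class EnglishGreeter implements Greeter {
    // Under Java 5, javac rejects @Override here because greet() implements
    // an interface method rather than overriding a superclass method;
    // Java 6 and later accept (and encourage) this usage.
    @Override
    public String greet(String name) {
        return "Hello, " + name;
    }
}

public class OverrideDemo {
    public static void main(String[] args) {
        System.out.println(new EnglishGreeter().greet("Hadoop"));
    }
}
```

Building the same source with a 1.5 compiler (or `-source 1.5`) is what produces the "@Override is not allowed" style of error.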
On Mon, Mar 10, 2014 at 10:49 AM, Avinash Kujur wrote:
>
> Hi,
>
> I downloaded the code
Hi,
I downloaded the code from https://github.com/apache/hadoop-common.git .
But while executing the command
mvn install -DskipTests
it is giving this error partway through:
[INFO] --- maven-compiler-plugin:2.5.1:compile (default-compile) @
hadoop-hdfs-httpfs ---
[INFO] Compiling 56 source files to
/
Hi there,
I am trying to run the Hadoop source code on an ARM processor, but I am
getting the error below. Can anyone suggest why this is showing up?
rmr: cannot remove output: No such file or directory.
13/10/18 11:46:21 WARN mapred.JobClient: No job jar file set. User
classes may
I see that the GenericOptionsParser and ToolRunner have three options to
"provide" files/resources to the Task.
-files
-archives
-libjars
The descriptions of these three options look very alike. Can any of you
explain the subtleties?
Thanks
Kish
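For what it is worth, my understanding of the three options is: -files ships plain files into each task's working directory, -archives does the same but unpacks the archive (zip/tar) on the task node, and -libjars additionally puts the listed jars on the tasks' classpath. A sketch of a typical invocation (the jar, class, and path names below are made up):

```
# Assumes a configured Hadoop client and a driver that uses ToolRunner.
# -files:    copy lookup.txt into each task's working directory
# -archives: copy and unpack dict.zip on each task node
# -libjars:  add commons-lang.jar to the tasks' classpath
hadoop jar myjob.jar com.example.MyDriver \
  -files lookup.txt \
  -archives dict.zip \
  -libjars commons-lang.jar \
  /input /output
```

Note these options are only parsed if the driver passes its arguments through GenericOptionsParser (directly or via ToolRunner.run).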
I checked the earlier error and solved it after looking at the logs, but I
still have some problems. Many of the suggested solutions mention the number
of entries in /etc/hosts, but that is not confirmed, so I am trying to get
replies from the mailing list.
arpit@arpit:~/hadoop-1.0.3$ bin/hadoop jar hadoop-examples-1.0.3.jar
wordcou
Hi,
This is most likely caused by an improper network environment wherein
the reducer is not able to resolve all available tasktrackers to read
the map outputs. Check the logs of the task attempt
attempt_201304091351_0001_r_00_0 from the web UI for more specific
information on which host it wa
We are getting the following error/warning while running the wordcount
program on a Hadoop 2-node cluster with one master and one slave...
arpit@arpit:~/hadoop-1.0.3$ bin/hadoop jar hadoop-examples-1.0.3.jar
wordcount /Input /Output
WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Ple
Hi Kay,
Yeah, that line does set your jar as the job jar. "hadoop jar" expects
Java code to configure and submit your job; "mapred job" takes a
job.xml configuration file and runs the job based on that.
-Sandy
On Wed, Mar 13, 2013 at 11:07 AM, KayVajj wrote:
> Hi Sandy,
>
> I was going th
Hi Sandy,
I was going through the RunJar source code, and the jar executes locally.
When the jar fires a MapReduce job,
the way I create JobConf is
JobConf conf = new JobConf(MyJob.class);
Does this set MyJar as the job jar?
Can you explain what is the difference between running an MR job usi
From: KayVajj
Date: Wed, 13 Mar 2013 09:01:42
Reply-To: user@hadoop.apache.org
Subject: Question regarding hadoop jar command usage
I have a question regarding the hadoop jar command. In a cluster of, say,
nodes n1, n2, ..., n100,
the node n1 has the jar Myjar on its local file system.
If I run the command
Hi Kay,
The jar is just executed locally. If the jar fires up a MapReduce job and
sets itself as the job jar, then MapReduce will handle copying it to the
nodes that will use it.
-Sandy
On Wed, Mar 13, 2013 at 9:01 AM, KayVajj wrote:
> I have a question regarding the hadoop jar command. In a
I have a question regarding the hadoop jar command. In a cluster of, say,
nodes n1, n2, ..., n100,
the node n1 has the jar Myjar on its local file system.
If I run the command
hadoop jar local/path/to/Myjar Myclass other-args
Is the MR job executed just on n1 or any arbitrary node n1..n100?
If it is any ar
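To make Sandy's "sets itself as the job jar" concrete, here is a sketch of a driver for the old mapred API (the class and job names are hypothetical; it needs the Hadoop jars on the classpath to compile):

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class MyJob {
    public static void main(String[] args) throws Exception {
        // Passing the driver class makes Hadoop locate the jar that
        // contains MyJob.class and submit it as the job jar, so the
        // nodes running the tasks receive a copy of it.
        JobConf conf = new JobConf(MyJob.class);
        conf.setJobName("myjob");
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);
    }
}
```

So the driver itself runs only on the node where "hadoop jar" was typed (n1 in the example above); the map and reduce tasks run wherever the framework schedules them.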
>> Tom White. I too started a few weeks back and am still learning it. :)
>> Hope you like it too.
>>
>> *From:* SrinivasaRao Kongar [mailto:ksrinu...@gmail.com]
>> *Sent:* Thursday, February 14, 2013 11:38 PM
>> *To:* user@hadoop.apache.org
>> *
The Definitive Guide, 2nd Edition by
> Tom White. I too started a few weeks back and am still learning it. :)
> Hope you like it too.
>
> *From:* SrinivasaRao Kongar [mailto:ksrinu...@gmail.com]
> *Sent:* Thursday, February 14, 2013 11:38 PM
> *To:* user@hadoop.apache
Subject: Regarding Hadoop
Hi sir,
What is Hadoop technology? What is the main purpose of this Hadoop technology?
--
Thanks & Regards,
SrinivasaRao Kongara
Hadoop is a combination of frameworks, each having its own purpose:
HDFS -> distributed data storage (i.e., if you have to manage a huge amount
of data which may NOT fit into a single machine, then a DFS (distributed
file system) is a way through which you can store and manage your data
across multiple ma
Hi Users,
I have a 12-node CDH3 cluster where I am planning to run some benchmark
tests. My main intention is to run the benchmarks first with the default
Hadoop configuration, then analyze the outcomes and tune the Hadoop
metrics accordingly to increase the performance of my cluster.
Can some