>Try hdfs://hostname:9000
Same error:
15/05/04 11:16:12 INFO ipc.Client: Retrying connect to server:
tiger/192.168.1.5:9000. Already tried 9 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
Exception in thread "main" java.net.ConnectException: Call
Dear all, my problem with "ipc.Client: Retrying connect to server" is still
open! To start a new and clean thread, here is the problem description.
[mahmood@tiger Index]$ which hadoop
~/bigdatabench/apache/hadoop-1.0.2/bin/hadoop
[mahmood@tiger Index]$ cat /etc/hosts
127.0.0.1 localhost.local
Any idea is greatly appreciated. Identifying whether the problem is on the
Hadoop side or the third-party side would also be helpful. Regards,
Mahmood
On Friday, May 1, 2015 11:09 AM, Mahmood Naderan
wrote:
Hi Rajesh, that was a good point. In my config, I used
fs.default.name
hdfs://localhost:54310
So I ran
hadoop -jar indexdata.jar `pwd`/result hdfs://127.0.0.1:54310/data-Index
This time, I get another error:
Exception in thread "main" java.lang.NullPointerException
at IndexHDFS.in
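For context, fs.default.name lives in conf/core-site.xml on the 1.x line. A
minimal sketch consistent with the URIs above (values illustrative, not
necessarily Mahmood's actual file):

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
  </property>
</configuration>

Whatever host:port the client URI carries (here hdfs://127.0.0.1:54310) has to
match this value; a mismatch shows up as exactly the ipc.Client retry loop
quoted earlier.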
I found out that the $JAVA_HOME specified in hadoop-env.sh was different from
"java -version" on the command line. So I fixed the variable to point to the
Java 1.7 installation (the jar file is also built with 1.7).
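For reference, the fix described here amounts to one line in
conf/hadoop-env.sh (the JDK path below is illustrative):

# The java implementation to use.
export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64

A quick sanity check is that $JAVA_HOME/bin/java -version reports the same 1.7
build as a plain "java -version" on the PATH.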
Still I get the ipc.Client error, but this time it sounds different. The whole
output (in verbose mode):
I don't think that is the main issue, because I found that the versions are
the same.
On my machine which runs hadoop 1.2.0
[mahmood@tiger Index]$ java -version
java version "1.7.0_71"
OpenJDK Runtime Environment (rhel-2.5.3.1.el6-x86_64 u71-b14)
OpenJDK 64-Bit Server VM (build 24.65-b04, mixed mode)
On Thursday, April 30, 2015 12:17 PM, Mahmood Naderan
wrote:
Can you explain more? To be honest, I am running a third-party script (not
mine) and the developers have no idea about the error.
Do you mean that running "hadoop -jar indexdata.jar `pwd`/result
hdfs://127.0.0.1:9000/data-Index" is a better one? For that, I get this error:
[mahmood@tiger Index]$ hado
Hi, when I run the following command, I get an ipc.Client timeout error.
[mahmood@tiger Index]$ java -jar indexdata.jar `pwd`/result
hdfs://127.0.0.1:9000/data-Index
15/04/30 10:21:03 INFO ipc.Client: Retrying connect to server:
localhost.localdomain/127.0.0.1:9000. Already tried 0 time(s); retry pol
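A quick way to tell a client-side address problem from a NameNode that simply
is not running (assuming the single-node 1.x setup described above):

jps                       # should list NameNode, DataNode, etc.
netstat -an | grep 9000   # should show a LISTEN entry on the NameNode port

If jps shows no NameNode, the place to look is the NameNode log under
$HADOOP_HOME/logs rather than the client command.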
> If the input path is a directory, then the assumption is
> that it will be a directory consisting solely of files
> (not sub-directories)
Thanks, I understood. Regards,
Mahmood
Hi guys... Thanks for bringing up this thread. I forgot to post here that I
finally solved the problem. Here is the tip, which may be useful for someone
else in the future.
You know what? When you run the format command, in the case that it needs
confirmation, you have to press 'Y' (capital letter).
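For anyone hitting this later, the 1.x format prompt is case-sensitive, so the
interaction looks roughly like this (the dfs.name.dir path is illustrative):

$ hadoop namenode -format
Re-format filesystem in /tmp/hadoop-mahmood/dfs/name ? (Y or N) y
Format aborted in /tmp/hadoop-mahmood/dfs/name
$ hadoop namenode -format
Re-format filesystem in /tmp/hadoop-mahmood/dfs/name ? (Y or N) Y

A lowercase 'y' aborts the format, which is easy to miss and leaves the
NameNode unable to start.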
jobs like wordcount) reside in this jar file for the 2.x line
of the codebase.
Chris Nauroth
Hortonworks
http://hortonworks.com/
From: Mahmood Naderan
Reply-To: User , Mahmood Naderan
Date: Saturday, April 18, 2015 at 10:59 PM
To: User
Subject: Again incompatibility, locating example jars
Hi
Hi, there is another incompatibility between 1.2.0 and 2.6.0. I would
appreciate it if someone could help figure it out. This command works on 1.2.0:
time ${HADOOP_HOME}/bin/hadoop jar ${HADOOP_HOME}/hadoop-examples-*.jar grep
${WORK_DIR}/data-MicroBenchmarks/in ${WORK_DIR}/data-MicroBenchmarks/out/grep
a*xyz
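Following Chris's note about the 2.x layout, the same command on the 2.x line
would look roughly like this (the examples jar moved under
share/hadoop/mapreduce; quoting the regex also avoids shell globbing):

time ${HADOOP_HOME}/bin/hadoop jar \
  ${HADOOP_HOME}/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar grep \
  ${WORK_DIR}/data-MicroBenchmarks/in ${WORK_DIR}/data-MicroBenchmarks/out/grep \
  'a*xyz'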
Hi, regarding this warning:
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your
platform... using builtin-java classes where applicable
It seems that the prebuilt 32-bit binary is not compatible with the host's
64-bit architecture. I just want to know: does that make sense? Is there
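It does make sense in general. One way to check the theory is to compare the
native library's word size against the host (the path assumes a stock tarball
layout):

file $HADOOP_HOME/lib/native/libhadoop.so*   # "ELF 32-bit" vs "ELF 64-bit"
uname -m                                     # host architecture, e.g. x86_64

Either way the warning is only about performance: as the message says, Hadoop
falls back to the builtin-java classes.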
Same error again!
[mahmood@tiger in]$ hdfs dfs -put ./lda_wiki1w_1
/data/bigdatabench/Text_datagen/data-MicroBenchmarks/in
15/04/18 22:58:23 WARN util.NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
put: `/data/bigdatabench/T
> Try
> hdfs dfs -put hive.log /mich
Sorry but that didn't work...
[mahmood@tiger in]$ ls
lda_wiki1w_1 lda_wiki1w_2
[mahmood@tiger in]$ hadoop dfs -put lda_wiki1w_1
/data/bigdatabench/Text_datagen/data-MicroBenchmarks/in
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Hi,
The following command works with Hadoop-1.2.0
hadoop fs -copyFromLocal data-MicroBenchmarks/in data-MicroBenchmarks/in
However on Hadoop-2.6.0 it fails with this error:
copyFromLocal: `data-MicroBenchmarks/in': No such file or directory
Indeed, there are two 500MB files there! And as I said,
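One plausible cause on 2.x, assuming the HDFS home directory was never
created: a relative path like data-MicroBenchmarks/in resolves against
/user/<username>, which 2.6.0 does not create automatically, so the
destination parent does not exist. A sketch of the fix (username taken from
the prompts above):

hdfs dfs -mkdir -p /user/mahmood/data-MicroBenchmarks
hdfs dfs -copyFromLocal data-MicroBenchmarks/in data-MicroBenchmarks/in

The 1.x shell was more forgiving about creating missing parent directories,
which would explain why the identical command worked there.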
Hi,
There are good guides on the number of mappers and reducers in a hadoop job.
For example:
Running Hadoop on Ubuntu Linux (Single-Node Cluster)http://goo.gl/kaA1h5
Partitioning your job into maps and reduces http://goo.gl/tpU23
However, I have some, let's say noob, questions here. Assume
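To make the knobs concrete before the questions: in 1.x the reduce count is a
per-job setting, while the map count mostly follows the number of input
splits. An illustrative mapred-site.xml fragment (the value 4 is arbitrary):

<property>
  <name>mapred.reduce.tasks</name>
  <value>4</value>
</property>

Jobs built with ToolRunner can override it per run, e.g.
-D mapred.reduce.tasks=8 on the command line.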
Hello, I have done all the steps (as far as I know) to bring up Hadoop.
However, I get this error:
15/04/17 12:45:31 INFO ipc.Client: Retrying connect to server:
localhost/127.0.0.1:54310. Already tried 0 time(s).
There are a lot of threads and posts regarding this error and I tried them.
Howe
Yes thank you very much. It is now OK.
$ ./bin/stop-all.sh
stopping jobtracker
localhost: no tasktracker to stop
stopping namenode
localhost: no datanode to stop
localhost: stopping secondarynamenode
$ /data/mahmood/openjdk6/build/linux-amd64/bin/jps
12576 Jps
$ rm -rf /data/mahmood/nutch-test/f
I am trying to set up Nutch+Hadoop from this tutorial http://bit.ly/1iluBPI
up to the section "deploying nutch on multiple machines". So yes, currently I
am working with a single node.
> This could be because you re-formatted the Name Node and the
> versions are not matching. Your Data Node
any live datanode.
Thanks
jitendra
On Fri, Apr 4, 2014 at 11:35 AM, Mahmood Naderan wrote:
Jitendra,
> For the first part, can you explain how?
> For the second part, do you mean "hadoop dfsadmin -report"?
>
> Regards,
> Mahmood
>
> On Friday, Apr
Jitendra,
For the first part, can you explain how?
For the second part, do you mean "hadoop dfsadmin -report"?
Regards,
Mahmood
On Friday, April 4, 2014 9:44 PM, Jitendra Yadav
wrote:
> Can you check total running datanodes in your cluster and also free hdfs
> space?
>
> Thanks
> Jitendra
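For the record, the report Jitendra refers to is:

hadoop dfsadmin -report

It prints configured and remaining HDFS capacity plus a per-datanode listing;
zero live datanodes or no remaining space would both explain a -put that
leaves a zero-size file behind.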
Hi,
I want to put a file from local FS to HDFS but at the end I get an error
message and the copied file has zero size. Can someone help with an idea?
$ cat urlsdir/urllist.txt
http://lucene.apache.org
$ hadoop dfs -put urlsdir/urllist.txt urlsdir
put: java.io.IOException: File /user/mahmood/url
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at tiger/192.168.1.5
************************************************************/
Regards,
Mahmood
On Thursday, March 27, 2014 5:37 PM, Mahmood Naderan
wrote:
Here is some info. I grepped for the port numbers instead of LISTEN. Please
note that I am using Hadoop 1.2.1
$ netstat -an | grep 54310
$ netstat -an | grep 54311
tcp 0 0 :::127.0.0.1:54311 :::* LISTEN
tcp 0 0 :::127.0.0.1:57479
Hi,
I don't know what mistake I made, but now I get this error:
INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already
tried 2 time(s); retry policy is
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
INFO ipc.Client: Retrying connect to server: localhost/12
Rather than a memory problem, it was a disk problem. I freed up more space and
that fixed it.
Regards,
Mahmood
On Saturday, March 22, 2014 8:58 PM, Mahmood Naderan
wrote:
Really stuck at this step. I have tested with a smaller data set and it works.
Now I am using Wikipedia articles (46GB) with
Hi,
When I format the namenode, at the end I see a shutdown message. Is it
important?
$ hadoop namenode -format
14/03/24 15:42:38 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = tiger/192.168.
-version.released
such as: in Hadoop-2.0.5, the major version is 2, 0 is the minor version, and
5 is the fifth release on the 2.0 line.
Currently, Hadoop-1.2 and Hadoop-2.2 are both stable, but there are big
differences between 1.x and 2.x.
On Mon, Mar 24, 2014 at 3:16 PM, Mahmood Naderan wrote:
Hi,
>What is
Hi,
What is the numbering policy for Hadoop versions? I read that Hadoop 0.2 is
similar to 2.0 but missing some features. What about 1.2? It is stated that
both 1.2 and 2.2 are current stable releases!
Regards,
Mahmood
log4j:WARN Please initialize the log4j system properly.
14/03/22 16:55:15 INFO mapred.JobClient: map 20% reduce 0%
14/03/22 16:55:34 INFO mapred.JobClient: map 20% reduce 1%
Regards,
Mahmood
On Saturday, March 22, 2014 10:27 AM, Mahmood Naderan
wrote:
Again I got the same error and it says
The reduce copier failed
...
Does that really matter?
Regards,
Mahmood
On Friday, March 21, 2014 3:52 PM, Mahmood Naderan wrote:
OK it seems that there was a "free disk space" issue.
I made more space and am running it again.
Regards,
Mahmood
On Friday, March 21, 2014 11:43 AM, shashwat shriparv
wrote:
ri, Mar 21, 2014 at 12:11 PM, Mahmood Naderan wrote:
that imply a *retry* process? Or I have to be wo
Warm Regards_∞_
Shashwat Shriparv
How can I find the reason why the reduce copier failed?
Regards,
Mahmood
On Thursday, March 20, 2014 12:17 PM, Harsh J wrote:
At the end it says clearly that the job has failed.
On Thu, Mar 20, 2014 at 12:49 PM, Mahmood Naderan wrote:
> After multiple messages, it says that the
ave been a transient
issue. Worth investigating either way.
On Thu, Mar 20, 2014 at 12:57 AM, Mahmood Naderan wrote:
> Hi
> In the middle of a map-reduce job I get
>
> map 20% reduce 6%
> ...
> The reduce copier failed
>
> map 20% reduce 0%
> map 20% reduce 1%
Hi
In the middle of a map-reduce job I get
map 20% reduce 6%
...
The reduce copier failed
map 20% reduce 0%
map 20% reduce 1%
map 20% reduce 2%
map 20% reduce 3%
Does that imply a *retry* process? Or do I have to be worried about that
message?
Regards,
Mahmood
Hello
When I run the following command on Mahout-0.9 and Hadoop-1.2.1, I get multiple
errors and I cannot figure out what the problem is. Sorry for the long post.
[hadoop@solaris ~]$ mahout wikipediaDataSetCreator -i wikipedia/chunks -o
wikipediainput -c ~/categories.txt
Running on hadoop, u
Regards,
Mahmood
On Thursday, March 13, 2014 2:31 PM, Mahmood Naderan
wrote:
The strange thing is that whether I use -Xmx128m or -Xmx16384m, the process
stops at chunk #571 (571*64 = 36.5GB).
Still I haven't figured out whether this is a problem with the JVM, Hadoop, or
Mahout.
I have tested va
2048m
mapred.reduce.child.java.opts
-Xmx4096m
Is there a relation between these parameters and the amount of available
memory?
I also see a HADOOP_HEAPSIZE in hadoop-env.sh which is commented by default.
What is that?
Regards,
Mahmood
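A hedged summary of how these knobs relate, using the 1.x property names
quoted above (the numbers are illustrative):

mapred-site.xml, per-task JVM heaps:
<property>
  <name>mapred.map.child.java.opts</name>
  <value>-Xmx2048m</value>
</property>
<property>
  <name>mapred.reduce.child.java.opts</name>
  <value>-Xmx4096m</value>
</property>

hadoop-env.sh, daemon heap only (NameNode, JobTracker, ...), in MB, default
1000 when left commented:
export HADOOP_HEAPSIZE=2000

So HADOOP_HEAPSIZE does not limit task memory; roughly, the daemon heaps plus
(task slots x per-task -Xmx) must fit in physical RAM.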
On Tuesday, March 11, 2014 11:57 PM, Mahmood Naderan
wrote:
As I p
14 10:21 AM, Mahmood Naderan wrote:
> Hi,
> Is there any verbosity flag for hadoop and mahout commands? I cannot find
> such a thing in the command line.
>
> Regards,
> Mahmood
Hi,
Is there any verbosity flag for hadoop and mahout commands? I cannot find such
a thing in the command line.
Regards,
Mahmood
ing on the entire
english wikipedia."
On Tue, Mar 11, 2014 at 12:56 PM, Mahmood Naderan wrote:
> Hi,
> Recently I have faced a heap size error when I run
>
> $MAHOUT_HOME/bin/mahout wikipediaXMLSplitter -d
>
$MAHOUT_HOME/examples/temp/enwiki-latest-pages-articles.xml -o
Hi,
Recently I have faced a heap size error when I run
$MAHOUT_HOME/bin/mahout wikipediaXMLSplitter -d
$MAHOUT_HOME/examples/temp/enwiki-latest-pages-articles.xml -o
wikipedia/chunks -c 64
Here are the specs:
1- XML file size = 44GB
2- System memory = 54GB (on virtualbox)
3- Heap size = 51GB (
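Assuming wikipediaXMLSplitter runs in the client-side JVM that the stock
bin/mahout script starts (not as a MapReduce job), the knob that matters is
the MAHOUT_HEAPSIZE environment variable, in MB; a sketch:

export MAHOUT_HEAPSIZE=8192
$MAHOUT_HOME/bin/mahout wikipediaXMLSplitter \
  -d $MAHOUT_HOME/examples/temp/enwiki-latest-pages-articles.xml \
  -o wikipedia/chunks -c 64

If it instead runs on Hadoop, the per-task mapred.child.java.opts settings
apply and a client-side -Xmx has little effect.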
Hi
Maybe this is a newbie question, but I want to know: does Hadoop/Mahout use
pthread models?
Regards,
Mahmood
(127MB).
On Fri, Mar 7, 2014 at 12:32 PM, Mahmood Naderan wrote:
hadoop-2.3.0.tar.gz (127MB)
--
Cheers
-MJ
(2014/03/06 11:08), Mahmood Naderan wrote:
> Stuck at this step.
> Hope to receive any idea...
>
> Regards,
> Mahmood
>
>
> On Thursday, March 6, 2014 6:48 PM, Mahmood Naderan
> wrote:
> Hi
> I have downloaded hadoop-2.3.0-src and followed the guide from
Stuck at this step.
Hope to receive any idea...
Regards,
Mahmood
On Thursday, March 6, 2014 6:48 PM, Mahmood Naderan
wrote:
Hi
I have downloaded hadoop-2.3.0-src and followed the guide from
http://hadoop.apache.org/docs/r2.3.0/hadoop-project-dist/hadoop-common/SingleCluster.html
The first command "mvn clean install -DskipTests" was successful. However, when
I run
run
cd hadoop-mapreduce-project
mvn clean install
Hello
We had an old document (I think it was hadoop 0.2) which stated these steps
To start Hadoop:
$HADOOP_HOME/bin/start-all.sh (alternatively, you can start hdfs then mapreduce
by start-dfs.sh and start-mapred.sh respectively)
To stop Hadoop:
$HADOOP_HOME/bin/stop-all.sh (or stop-dfs.sh and stop-mapred.sh)
>hadoop fsck mytext.txt -files -locations -blocks
I expect something like a tag which is attached to each block (say block X)
that shows the position of the replicated block of X. The method you mentioned
is a user-level task. Am I right?
Regards,
Mahmood
Is this a user-level task or a system-level task?
Regards,
Mahmood
From: John Lilley
To: "user@hadoop.apache.org" ; Mahmood Naderan
Sent: Tuesday, June 4, 2013 3:28 AM
Subject: RE: HDFS interfaces
Mahmood,
It is in the File
Hello,
It is stated in the "HDFS architecture guide"
(https://hadoop.apache.org/docs/r1.0.4/hdfs_design.html) that
HDFS provides interfaces for applications to move themselves closer to where
the data is located.
What are these interfaces and where are they in the source code? Is there any
Hello
I am trying to understand the source code of Hadoop, especially HDFS. I want
to know where exactly I should look in the source code for how HDFS
distributes the data, and also how the MapReduce engine tries to read the data.
Any hint regarding the location of those in the source code is