Hi all,
I am benchmarking a Hadoop Cluster with the hadoop-*-test.jar TestDFSIO
but it fails with the following error:
File /usr/hadoop-0.20.2/libhdfs/libhdfs.so.1 does not exist.
How can I solve this problem?
Thanks!
Nitin,
On 2011/07/28 14:51, Nitin Khandelwal wrote:
> How can I determine if a file is being written to (by any thread) in HDFS.
That information is exposed by the NameNode HTTP servlet. You can
obtain it with the fsck tool (hadoop fsck /path/to/dir -openforwrite), or you
can do an HTTP GET against
htt
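For example (a hedged sketch; the path is made up), from the shell:

  hadoop fsck /user/nitin/output -openforwrite

In the versions I have used, files that are still open for write are flagged
with OPENFORWRITE in the fsck report, so if nothing under the path carries
that flag, nothing there is currently being written.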
Does this mean 0.22.0 has reached stable and will be released as the stable
version soon?
--Aaron
-Original Message-
From: Robert Evans [mailto:ev...@yahoo-inc.com]
Sent: Thursday, July 28, 2011 6:39 AM
To: common-user@hadoop.apache.org
Subject: Re: next gen map reduce
It has not been introduced yet.
Harsh
If this is the case I don't understand something. If I see FILE_BYTES_READ
non-zero for a map, the only thing I can assume is that it came from a spill
during the sort phase.
I have a 10 node cluster, and I ran TeraSort with a size of 100,000 bytes
(1,000 records).
My io.sort.mb is 300.
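(For scale, a rough back-of-the-envelope using the numbers above: 1,000
records x 100 bytes per TeraSort record = 100,000 bytes, i.e. about 0.1 MB of
map output against an io.sort.mb buffer of 300 MB, so a spill forced by buffer
overflow would not be expected at this input size.)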
This is for the CLI.
Use this:
set hive.cli.print.header=true;
Instead of doing this at the prompt every time, you can change your hive start
command to:
hive -hiveconf hive.cli.print.header=true
But be careful with this setting, as quite a few commands stop working with an
NPE with this on. I thin
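A hedged usage sketch (the table and column names are made up):

  hive -e 'set hive.cli.print.header=true; select id, name from my_table limit 5;'

With the flag on, the first line of the result set is the column header row,
which also carries through if you redirect the output to a file.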
Hi A Df,
see inline at ::
- Original Message -
From: A Df
Date: Wednesday, July 27, 2011 10:31 pm
Subject: Re: cygwin not connecting to Hadoop server
To: "common-user@hadoop.apache.org"
> See inline at **. More questions and many Thanks :D
FYI, I logged a bug for this:
https://issues.apache.org/jira/browse/HADOOP-7489
On Jul 28, 2011, at 11:36 AM, Bryan Keller wrote:
> I am also seeing this error upon startup. I am guessing you are using OS X
> Lion? It started happening to me after I upgraded to 10.7. Hadoop seems to
> function properly despite this error showing up, though it is annoying.
I am also seeing this error upon startup. I am guessing you are using OS X
Lion? It started happening to me after I upgraded to 10.7. Hadoop seems to
function properly despite this error showing up, though it is annoying.
On Jul 27, 2011, at 12:37 PM, Ben Cuthbert wrote:
> All
> When starting
Hi,
I was wondering if anyone could help me.
Does anyone know if it is possible to include the column headers in the
output from a Hive query? I've had a look around the internet but can't
seem to find an answer.
If not, is it possible to export the result of a describe table query? If
so I co
I've been playing with unit testing strategies for my Hadoop work. A
discussion of techniques and a link to example code here:
http://cornercases.wordpress.com/2011/07/28/unit-testing-mapreduce-with-overridden-write-methods/
.
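For anyone who wants the flavor of the approach without clicking through, here
is a minimal, self-contained sketch of the general technique (this is not the
code from the post; the mapper and class names are made up, old mapred API):

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class TokenCountMapperTest {

    // The mapper under test: emits (token, 1) for each whitespace-separated token.
    static class TokenCountMapper extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, LongWritable> {
        public void map(LongWritable key, Text value,
                        OutputCollector<Text, LongWritable> out, Reporter reporter)
                throws IOException {
            for (String token : value.toString().split("\\s+")) {
                if (token.length() > 0) {
                    out.collect(new Text(token), new LongWritable(1));
                }
            }
        }
    }

    // Test double: records every (key, value) pair instead of writing it anywhere.
    static class CapturingCollector implements OutputCollector<Text, LongWritable> {
        final List<String> captured = new ArrayList<String>();
        public void collect(Text key, LongWritable value) {
            captured.add(key + "=" + value.get());
        }
    }

    public static void main(String[] args) throws IOException {
        TokenCountMapper mapper = new TokenCountMapper();
        CapturingCollector collector = new CapturingCollector();
        mapper.map(new LongWritable(0), new Text("a b a"), collector, Reporter.NULL);
        // Expect: [a=1, b=1, a=1]
        System.out.println(collector.captured);
    }
}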
On Thu, Jul 28, 2011 at 12:17 AM, Harsh J wrote:
> Mohit,
>
> I believe Tom's book (Hadoop: The Definitive Guide) covers this
> precisely well. Perhaps others too.
>
> Replication is a best-effort sort of thing. If 2 nodes are all that is
> available, then two replicas are written and one is left
See
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/package-summary.html#package_description
for some help.
St.Ack
On Thu, Jul 28, 2011 at 4:04 AM, air wrote:
> -- Forwarded message --
> From: air
> Date: 2011/7/28
> Subject: HBase Mapreduce cannot find Map class
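In case a concrete skeleton helps alongside that package description, here is
a hedged sketch of the usual shape of such a job (the table name and classes
are made up); one common cause of "cannot find Map class"-style failures is a
missing setJarByClass call, so the job jar holding the mapper never reaches
the tasks:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class MyHBaseScanJob {
    // Emits (row key, 1) for every row it scans; purely illustrative.
    static class MyMapper extends TableMapper<Text, LongWritable> {
        protected void map(ImmutableBytesWritable row, Result value, Context context)
                throws java.io.IOException, InterruptedException {
            context.write(new Text(row.get()), new LongWritable(1));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = new Job(conf, "hbase-scan-example");
        // Without this, tasks often cannot locate the mapper class at runtime.
        job.setJarByClass(MyHBaseScanJob.class);
        TableMapReduceUtil.initTableMapperJob("mytable", new Scan(),
                MyMapper.class, Text.class, LongWritable.class, job);
        job.setNumReduceTasks(0);
        job.setOutputFormatClass(NullOutputFormat.class);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}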
On Thu, 28 Jul 2011 10:05:57 -0400, "Kumar, Ranjan"
wrote:
> I have a class to define data I am reading from a MySQL database.
> According to online tutorials I created a class called MyRecord and
> extended it from Writable, DBWritable. While running it with hadoop I
> get a
> NoSuchMethodException
I have a class to define data I am reading from a MySQL database. According to
online tutorials I created a class called MyRecord and extended it from
Writable, DBWritable. While running it with Hadoop I get a
NoSuchMethodException: dataTest$MyRecord.<init>()
I am using 0.21.0.
Thanks for your help
R
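That exception (the missing piece is <init>(), the no-argument constructor)
usually means Hadoop tried to instantiate MyRecord reflectively and could not,
typically because MyRecord is a non-static inner class of dataTest or has no
public no-arg constructor. A hedged sketch of the shape the class needs (the
field names are made up):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;

public class MyRecord implements Writable, DBWritable {
    private long id;
    private String name;

    // Hadoop instantiates record classes reflectively, so a public no-arg
    // constructor is required; its absence (or a non-static inner class,
    // whose constructor implicitly takes the outer instance) is exactly
    // what produces NoSuchMethodException: ...MyRecord.<init>().
    public MyRecord() {
    }

    // Writable: how the record is serialized between tasks.
    public void write(DataOutput out) throws IOException {
        out.writeLong(id);
        out.writeUTF(name);
    }

    public void readFields(DataInput in) throws IOException {
        id = in.readLong();
        name = in.readUTF();
    }

    // DBWritable: how the record maps to the JDBC statement / result set.
    public void write(PreparedStatement stmt) throws SQLException {
        stmt.setLong(1, id);
        stmt.setString(2, name);
    }

    public void readFields(ResultSet rs) throws SQLException {
        id = rs.getLong(1);
        name = rs.getString(2);
    }
}

If MyRecord has to stay nested inside dataTest, declaring it public static
should have the same effect.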
Start the namenode [set fs.default.name to hdfs://192.168.1.101:9000] and
check your netstat report [netstat -nlp] to see which port and IP it is
binding to. Ideally, 9000 should be bound to 192.168.1.101. If yes, configure
the same IP in the slaves as well. Otherwise, we may need to revisit your
configs once.
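A hedged sketch of what that looks like in practice (the IP is the one from
this thread; adjust to your own layout). In conf/core-site.xml on every node:

  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.1.101:9000</value>
  </property>

Then, on the master, after starting the NameNode:

  netstat -nlp | grep 9000

and confirm the listening address is 192.168.1.101:9000 (or 0.0.0.0:9000)
rather than 127.0.0.1:9000, which only processes on the same box can reach.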
I am not completely sure what you are getting at. It looks like the output of
your C program is (and this is just a guess) NOTE: \t stands for the tab
character and in streaming it is used to separate the key from the value; \n
stands for the newline character and is used to separate individual records
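To make the framing concrete, a hedged illustration (the words and counts are
made up) of what a streaming mapper should emit on stdout, one record per line
with a single tab between key and value:

  apple\t1\n
  banana\t2\n
  apple\t3\n

Streaming splits each line at the first tab: everything before it becomes the
key, everything after it the value.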
It has not been introduced yet, if you are referring to MRv2. It is targeted
to go into the 0.23 release of Hadoop, but is currently on the MR-279 branch,
which should hopefully be merged to trunk in about a week.
--Bobby
On 7/28/11 7:31 AM, "real great.." wrote:
In which Hadoop version is
Hi,
Before starting, you need to format the namenode.
./hdfs namenode -format
Then these directories will be created.
The relevant configuration property is 'dfs.namenode.name.dir';
the default configuration is in hdfs-default.xml.
If you want to configure your own directory path, you can add the above
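For example (a hedged sketch continuing the property named above; the path is
just an illustration), in hdfs-site.xml:

  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/data/hadoop/dfs/name</value>
  </property>

Point it at whichever persistent directory you want the namenode metadata to
live in, then re-run the format step.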
When I started Hadoop, the namenode failed to start up because of the following
error. The strange thing is that it says /tmp/hadoop-oracle/dfs/name is
inconsistent, but I don't think I have configured it as
/tmp/hadoop-oracle/dfs/name. Where should I check for the related configuration?
2011-07-
I changed fs.default.name to hdfs://192.168.1.101:9000, but I get the same
error as before.
I need some help.
On Thu, Jul 28, 2011 at 7:45 PM, Nitin Khandelwal <
nitin.khandel...@germinait.com> wrote:
> Please change your fs.default.name to hdfs://192.168.1.101:9000
> Thanks,
> Nitin
>
> On 28 July 2011 17:4
It's currently still on the MR-279 branch -
http://svn.apache.org/viewvc/hadoop/common/branches/MR-279/. It is planned
to be merged to trunk soon.
Tom
On 7/28/11 7:31 AM, "real great.." wrote:
> In which Hadoop version is next gen introduced?
Please change your fs.default.name to hdfs://192.168.1.101:9000
Thanks,
Nitin
On 28 July 2011 17:46, Doan Ninh wrote:
> The first time, I used *hadoop-cluster-1* for 192.168.1.101.
> That is the hostname of the master node.
> But the same error occurs.
> How can I fix it?
>
> On Thu, Jul 28, 2011
In which Hadoop version is next gen introduced?
--
Regards,
R.V.
How about having the slaves write to a temp file first, then move it to the
location the master is monitoring once they close it?
-Joey
On Jul 27, 2011, at 22:51, Nitin Khandelwal
wrote:
> Hi All,
>
> How can I determine if a file is being written to (by any thread) in HDFS. I
> have a conti
The first time, I used *hadoop-cluster-1* for 192.168.1.101.
That is the hostname of the master node.
But the same error occurs.
How can I fix it?
On Thu, Jul 28, 2011 at 7:07 PM, madhu phatak wrote:
> I had issues using IP addresses in XML files. You can try to use host names
> in the place of IP addresses.
I had issues using IP addresses in XML files. You can try to use host names in
the place of IP addresses.
On Thu, Jul 28, 2011 at 5:22 PM, Doan Ninh wrote:
> Hi,
>
> I run Hadoop on 4 Ubuntu 11.04 VMs in VirtualBox.
> On the master node (192.168.1.101), I configure fs.default.name = hdfs://
> 127.0.0.1
Hi,
I run Hadoop on 4 Ubuntu 11.04 VMs in VirtualBox.
On the master node (192.168.1.101), I configure fs.default.name = hdfs://
127.0.0.1:9000. Then I configure everything on the 3 other nodes.
When I start the cluster by entering "$HADOOP_HOME/bin/start-all.sh" on the
master node,
everything is OK, but the
There is no such API as far as I know.
copyFromLocal is the closest such API, but that may not fit your scenario, I
guess.
--Laxman
-Original Message-
From: Meghana [mailto:meghana.mara...@germinait.com]
Sent: Thursday, July 28, 2011 4:32 PM
To: hdfs-u...@hadoop.apache.org; lakshman...@huawei.com
Cc: common
-- Forwarded message --
From: air
Date: 2011/7/28
Subject: HBase Mapreduce cannot find Map class
To: CDH Users
import java.io.IOException;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import org.apache.hadoop.conf.Configured;
import
Thanks Laxman! That would definitely help things. :)
Is there a better FileSystem (or other) method call to create a file in one go
(i.e. atomically, I guess?), without having to call create() and then write to
the stream?
..meghana
On 28 July 2011 16:12, Laxman wrote:
> One approach can be use some ".
One approach can be to use a ".tmp" extension while writing. Once the write
is completed, rename back to the original file name. Also, the reducer has to
filter out ".tmp" files.
This will ensure the reducer will not pick up partial files.
We have a similar scenario where the above-mentioned approach resolved the
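A minimal sketch of that pattern against the FileSystem API (the helper class
and method are hypothetical; rename in HDFS is an atomic namespace operation,
which is what keeps readers from ever seeing a partial file):

import java.io.IOException;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TmpThenRename {
    // Write the data to name.tmp, then rename to the final name only after
    // the stream is closed, so consumers never observe a partially written file.
    public static void writeAtomically(FileSystem fs, Path finalPath, byte[] data)
            throws IOException {
        Path tmp = new Path(finalPath.getParent(), finalPath.getName() + ".tmp");
        FSDataOutputStream out = fs.create(tmp);
        try {
            out.write(data);
        } finally {
            out.close();
        }
        if (!fs.rename(tmp, finalPath)) {
            throw new IOException("Failed to rename " + tmp + " to " + finalPath);
        }
    }
}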
Hi, all
In my recent work with Hadoop, I find that the output dir contains
both _SUCCESS and _temporary, and then the next job fails because
the input path contains _temporary. How does this happen? And how do I avoid
this?
Thanks for your replies.
liuliu
--
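A leftover _temporary alongside _SUCCESS usually means an earlier attempt did
not clean up, so the previous job's logs are worth checking; as a defensive
measure, one option (a hedged sketch, the class name is made up) is to register
an input path filter on the follow-up job so bookkeeping paths are skipped:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;

// Accepts only real data files, skipping _SUCCESS, _temporary and anything
// else that starts with "_" or ".".
public class DataFilesOnlyFilter implements PathFilter {
    public boolean accept(Path path) {
        String name = path.getName();
        return !name.startsWith("_") && !name.startsWith(".");
    }
}

// In the driver of the next job (new mapreduce API):
//   org.apache.hadoop.mapreduce.lib.input.FileInputFormat
//       .setInputPathFilter(job, DataFilesOnlyFilter.class);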
Hi all,
I'm trying to compile and unit test Hadoop 0.20.203, but ran into almost
the same problem as a previous discussion on the mailing list (
http://mail-archives.apache.org/mod_mbox/hadoop-general/201105.mbox/%3CBANLkTim68H=8ngbfzmsvrqob9pmy7fv...@mail.gmail.com%3E).
Even after setting umask
Hi,
We have a job where the map tasks are given the path to an output folder.
Each map task writes a single file to that folder. There is no reduce phase.
There is another thread, which constantly looks for new files in the output
folder. If found, it persists the contents to index, and deletes th
Mohit,
I believe Tom's book (Hadoop: The Definitive Guide) covers this
precisely well. Perhaps others too.
Replication is a best-effort sort of thing. If 2 nodes are all that is
available, then two replicas are written and one is left to the
replica monitor service to replicate later as possible
Daniel, you can find those stdout statements in the "{LOG
Directory}/userlogs/{task attempt id}/stdout" file.
In the same way, you can find stderr statements in "{LOG Directory}/userlogs/{task
attempt id}/stderr" and log statements in "{LOG Directory}/userlogs/{task
attempt id}/syslog".
Devaraj K
-Original Message-
Raj,
There is no overlap. Data read from HDFS FileSystem instances go to
HDFS_BYTES_READ, and data read from Local FileSystem instances go to
FILE_BYTES_READ. These are two different FileSystems, and have no
overlap at all.
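If it helps to check this programmatically, here is a hedged sketch using the
old mapred API (the group and counter names below are as they appear in the
0.20-era job UI and may differ across versions):

import org.apache.hadoop.mapred.Counters;
import org.apache.hadoop.mapred.RunningJob;

public class FsCounterCheck {
    // Print the two filesystem counters for a job; the RunningJob would
    // typically come from JobClient.runJob(conf), for example.
    public static void printFsCounters(RunningJob job) throws Exception {
        Counters counters = job.getCounters();
        long hdfsRead = counters.findCounter("FileSystemCounters", "HDFS_BYTES_READ").getCounter();
        long localRead = counters.findCounter("FileSystemCounters", "FILE_BYTES_READ").getCounter();
        System.out.println("HDFS_BYTES_READ=" + hdfsRead + ", FILE_BYTES_READ=" + localRead);
    }
}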
On Thu, Jul 28, 2011 at 5:56 AM, R V wrote:
> Hello
>
> I don't know if
Task logs are written to userlogs directory on the TT nodes. You can
view task logs on the JobTracker/TaskTracker web UI for each task at:
http://machine:50030/taskdetails.jsp?jobid=&tipid=
All of syslogs, stdout and stderr logs are available in the links to
logs off that page.
2011/7/28 Daniel,