Thanks. It worked.
Amol
-----Original Message-----
From: lohit [mailto:[EMAIL PROTECTED]]
Sent: Monday, August 04, 2008 10:20 PM
To: core-user@hadoop.apache.org
Subject: Re: EOFException while starting name node
We have seen a similar exception reported earlier by others on the list.
What you
MultiFileWordCount uses its own RecordReader, namely
MultiFileLineRecordReader. This differs from LineRecordReader,
which automatically detects the file's codec and decodes it.
You can write a custom RecordReader similar to LineRecordReader and
MultiFileLineRecordReader, or just add
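The codec-detection idea mentioned above can be sketched with nothing but the JDK: pick a decoding stream based on the file name's suffix. This is only an illustration (the class name and the ".gz"-suffix rule are assumptions; Hadoop itself resolves codecs through CompressionCodecFactory), not the real reader:

```java
import java.io.*;
import java.util.zip.GZIPInputStream;

// Sketch of suffix-based codec detection, the idea LineRecordReader
// applies when it decodes compressed splits. Handles only ".gz" here,
// using the stdlib GZIPInputStream, purely for illustration.
public class CodecAwareReader {
    public static BufferedReader open(String fileName, InputStream raw)
            throws IOException {
        // Wrap the raw stream in a decompressor when the name says so,
        // otherwise read it as-is.
        InputStream in = fileName.endsWith(".gz") ? new GZIPInputStream(raw) : raw;
        return new BufferedReader(new InputStreamReader(in, "UTF-8"));
    }
}
```

A custom RecordReader would do the same probe in its constructor before handing lines to the mapper.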
Hello,
At the libhdfs wiki http://wiki.apache.org/hadoop/LibHDFS#Threading I read
this:
libhdfs can be used in threaded applications using POSIX threads.
However, to interact carefully with JNI's global/local references, the
user has to explicitly call the *hdfsConvertToGlobalRef* /
Dear All,
I find that the file conf/log4j.properties specifies three appenders:
ConsoleAppender, DailyRollingFileAppender, and TaskLogAppender. I know from the
output location that the JobClient's output target is ConsoleAppender, and that
the JobTracker, TaskTracker, NameNode, and DataNode output targets are
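For reference, the routing between those appenders is driven by a handful of lines like these in the stock conf/log4j.properties (the daemons are launched with hadoop.root.logger set to INFO,DRFA while the JobClient falls back to INFO,console; exact property values may differ between releases):

```properties
# Default used by the JobClient; daemons override it on the command
# line with -Dhadoop.root.logger=INFO,DRFA via hadoop-daemon.sh.
hadoop.root.logger=INFO,console
log4j.rootLogger=${hadoop.root.logger}

log4j.appender.console=org.apache.log4j.ConsoleAppender

# Daily rolling file appender used by the daemons.
log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
```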
Hi all.
I'm running a clustered HDFS on Linux and I need to access files (I/O) from
an Eclipse Java application running on Windows. It seems simple, but is it
possible?
I have written code using the API, but I have a problem: when the code invokes
the DistributedFileSystem.initialize() method I receive an
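Not from the thread, but a frequent cause of failures at exactly that point is a client configuration that doesn't point at the cluster's NameNode. A minimal hadoop-site.xml-style fragment for the Windows client, assuming a NameNode listening at master-host:9000 (host and port are hypothetical and must match the cluster):

```xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <!-- Must match the address/port the NameNode actually binds to. -->
    <value>hdfs://master-host:9000</value>
  </property>
</configuration>
```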
I think IBM has a plugin that can access HDFS. I don't know whether it
contains source code, but maybe it helps:
www.alphaworks.ibm.com/tech/mapreducetools
On Tue, Aug 5, 2008 at 5:16 AM, Alberto Forcén [EMAIL PROTECTED] wrote:
Hi all.
I'm running a clustered HDFS on Linux and I need to
Is there any way for me to log and find out why the NameNode process is not
launching on the master?
On Mon, Aug 4, 2008 at 8:19 PM, Meng Mao [EMAIL PROTECTED] wrote:
assumption -- if I run stop-all.sh _successfully_ on a Hadoop deployment
(which means every node in the grid is using the same
What's the proposed design pattern for a reducer that needs two
sets of inputs?
Are there any source code examples?
Thanks :)
I am a newbie also, so my answer is not an expert user's by any means.
That said:
This is not what MR is designed for...
If you have a reporting tool for example, which takes a database a
very long time to answer - such a long time that you can't expect a
user to hang around waiting for the
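One common answer to the two-input question above (not stated in the thread, so treat it as a sketch) is a reduce-side join: each map tags its records with the source they came from, and the reducer splits the grouped values back into the two sets. Class and tag names here are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// Reduce-side join sketch: map output values carry a source tag so the
// reducer can tell the two input sets apart after grouping by key.
public class TaggedJoin {

    // Map side: prefix each value with its source tag ("A" or "B").
    public static String tag(String source, String value) {
        return source + ":" + value;
    }

    // Reduce side: split one key's grouped values into the two sets.
    // Returns [valuesFromA, valuesFromB].
    public static List<List<String>> split(Iterable<String> tagged) {
        List<String> a = new ArrayList<>();
        List<String> b = new ArrayList<>();
        for (String t : tagged) {
            if (t.startsWith("A:")) {
                a.add(t.substring(2));
            } else {
                b.add(t.substring(2));
            }
        }
        List<List<String>> out = new ArrayList<>();
        out.add(a);
        out.add(b);
        return out;
    }
}
```

With both sets in hand, the reducer can then do whatever pairing or merging the job needs.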
I was wandering around the Hadoop wiki and found this page dedicated to
NameNode failover.
http://wiki.apache.org/hadoop/NameNodeFailover
I think it is confusing, contradicts other documentation on the subject, and
contains incorrect facts. See
Hello,
On Tue, Aug 5, 2008 at 8:11 PM, Mork0075 [EMAIL PROTECTED] wrote:
So my question: is there a Hadoop scenario for non-computation-heavy but
heavy-load web applications?
I suggest you look into HBase, a subproject of Hadoop:
http://hadoop.apache.org/hbase/ -- it is designed after
I am using Hadoop Streaming and I want to write the map/reduce scripts in Java,
rather than Perl, etc. Would anybody give me a sample? Thanks
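A streaming "script" is just an executable that reads records on stdin and writes key<TAB>value lines on stdout, so a plain Java program qualifies. A minimal word-count mapper sketch (the class name and how you package/launch it are up to you, not part of the streaming contract):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

// Word-count mapper for Hadoop Streaming: reads text lines on stdin,
// emits one "word<TAB>1" record per word on stdout.
public class StreamingMapper {

    // One input line -> zero or more "word\t1" output records.
    public static List<String> map(String line) {
        List<String> out = new ArrayList<>();
        for (String word : line.trim().split("\\s+")) {
            if (!word.isEmpty()) {
                out.add(word + "\t1");
            }
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String line;
        while ((line = in.readLine()) != null) {
            for (String record : map(line)) {
                System.out.println(record);
            }
        }
    }
}
```

You would then pass the compiled program as the -mapper argument when submitting the job with the streaming jar (the jar's exact path varies by release).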
Hello:
I am actually working on this myself on my project Multisearch. The Map()
function uses clients to connect to services and collect responses, and the
Reduce() function merges them together. I'm working on putting this into a
Servlet as well, so it can be used via Tomcat.
I've worked
Hi,
This is about DFS only, not MapReduce. It may sound like a
strange need, but sometimes I want to read a block from a specific
data node which holds a replica. Figuring out which datanodes have the
block is easy. But is there an easy way to specify which datanode I
want to load
Apologies for misphrasing my question.
Let me rephrase it: Using the Hadoop Java APIs is there a suggested
way of doing a pair-wise comparison between all LineRecords in a file?
More generically: is there a Hadoop Java API design pattern for a
reducer to iterate through all the records in
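There is no named API for this, but the usual pattern (again a sketch, not an official recipe) is to route all records to one key, buffer the values a reduce() call sees, and then walk every unordered pair; this only works when one key's group fits in memory:

```java
import java.util.ArrayList;
import java.util.List;

// Pairwise-comparison pattern for a reducer: buffer the grouped
// records, then visit each unordered pair exactly once.
public class PairwiseReduce {

    // Joins each pair with "|" just to make the output visible;
    // a real reducer would run its comparison function instead.
    public static List<String> allPairs(List<String> records) {
        List<String> pairs = new ArrayList<>();
        for (int i = 0; i < records.size(); i++) {
            for (int j = i + 1; j < records.size(); j++) {
                pairs.add(records.get(i) + "|" + records.get(j));
            }
        }
        return pairs;
    }
}
```

For groups too large to buffer, people typically fall back to a two-pass job or a blocking scheme that shrinks each group before pairing.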
I haven't tried it, but see if you can create a DFSClient object and use its
open() and read() calls to get the job done. Basically you would have to force
currentNode to be your node of interest in there.
Just curious, what is the use case for your request?
Thanks,
Lohit
- Original