https://issues.apache.org/jira/browse/HADOOP-4346 might explain this.
Raghu.
Bryan Duxbury wrote:
Ok, so, what might I do next to try and diagnose this? Does it sound
like it might be an HDFS/mapreduce bug, or should I pore over my own
code first?
Also, did any of the other exceptions look
Dear hadoop users,
I'm lucky to work in an academic environment where information security is not
an issue. However, I'm sure that most Hadoop users aren't so lucky.
Here is the question: how secure is Hadoop? (or, let's say, how foolproof?)
Here is the answer:
I have set up a multi-node cluster in which one machine (the master in this case)
acts as both master and slave, and another machine acts as a slave only.
I'm getting the following error while executing the command:
[EMAIL PROTECTED] bin/hadoop namenode
[EMAIL PROTECTED] hadoop-0.18.0]$ bin/hadoop namenode
Greetings!
Hi, I'm trying to modify the WordCount.java from "Example: WordCount v1.0" in the
MapReduce tutorial at
http://hadoop.apache.org/core/docs/current/mapred_tutorial.html
Would like to have output the following way,
Hi Srilatha,
You could do the following:
Map steps can only collect key, value pairs; they can't collect three
things. Instead of collecting (word, 1) in the mapper, you could collect
(word, filename). Then, in the reduce step, you could output (filename,
word|count), where word|count is Text,
Hadoop is secure enough to be used on a cluster that has access control in
a friendly environment. That is to say, not very. These issues are well
known.
User identities were added recently, but, as you note, they are dependent on
trusting unix logins and can easily be spoofed. More secure
What you need to do is snag access to the filename in the configure method
of the mapper.
Then instead of outputting just the word as the key, output a pair
containing the word and the file name as the key. Everything downstream
should remain the same.
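The advice in this thread can be sketched end to end. The following is not Hadoop code (a real 0.18-era mapper would grab the filename from the JobConf in its configure method, as described above); it is a plain-Python simulation of the map/shuffle/reduce data flow, using made-up file contents, that emits (word, filename) from the map step and (filename, word|count) from the reduce step:

```python
from collections import defaultdict

def map_phase(files):
    """Map step: instead of (word, 1), emit (word, filename) for every word.
    `files` is a dict of filename -> text standing in for the input splits."""
    for filename, text in files.items():
        for word in text.split():
            yield (word, filename)

def shuffle(pairs):
    """Group values by key, as the framework does between map and reduce."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce step: for each word, count occurrences per file and emit
    (filename, "word|count") pairs, per the suggestion in the thread."""
    out = []
    for word, filenames in groups.items():
        per_file = defaultdict(int)
        for f in filenames:
            per_file[f] += 1
        for f, count in sorted(per_file.items()):
            out.append((f, "%s|%d" % (word, count)))
    return out

# Toy input: two "files".
files = {"a.txt": "hello world hello", "b.txt": "hello"}
result = reduce_phase(shuffle(map_phase(files)))
# result contains ("a.txt", "hello|2"), ("b.txt", "hello|1"), ("a.txt", "world|1")
```

Everything downstream of the key change really is the standard word-count shape; only the value carried through the shuffle differs.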
On Sun, Oct 5, 2008 at 11:26 AM, Alex
Hey all,
I recently wrote/applied a few of the patches lingering in JIRA in
order to get Ganglia metrics reporting working. I'd like to share a
few notes, and I want to know if anyone else has experience running these:
0) A few patches have to be applied to get Ganglia reporting working
Hi folks, we here at VideoSurf are doing some exciting things with Hadoop and
HBase. We're hiring people with Hadoop experience right now for the
position of Sr. Advertising Systems Engineer. It's extra helpful if you have
experience with HBase.
Contact me if you're interested!
Daniel Leffel
VP,
Hi Daniel
I am interested in this job since I have experience working with Hadoop.
Can you give me your email address so I can send you my resume?
Thank you very much.
Prerna
On Sun, Oct 5, 2008 at 7:45 PM, Daniel Leffel [EMAIL PROTECTED] wrote:
Hi folks, we here at VideoSurf are doing some exciting
Please send your resume to [EMAIL PROTECTED] and cc: [EMAIL PROTECTED]
Thanks!
Daniel Leffel
On Sun, Oct 5, 2008 at 5:43 PM, Prerna Manaktala [EMAIL PROTECTED] wrote:
Hi Daniel
I am interested in this job since I have experience working in HADOOP.
Can you give me your e-mail ID where I can
Let's say you have one very large input file of the form:
A|B|C|D
E|F|G|H
...
|1|2|3|4
This input file will be broken up into N pieces, where N is the number of
mappers that run. The location of these splits is semi-arbitrary. This
means that unless you have one mapper, you won't be able to
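A toy sketch of the point above (plain Python, not Hadoop's actual InputFormat; the real record reader does extra work to repair lines cut at split boundaries, but no single mapper ever sees the whole file): cutting the file into N byte ranges at arbitrary offsets can leave a record straddling two pieces.

```python
# Input in the A|B|C|D format from the example above.
data = b"A|B|C|D\nE|F|G|H\n|1|2|3|4\n"

def split_bytes(data, n):
    """Cut data into n roughly equal byte ranges, ignoring line boundaries,
    the way raw split offsets ignore record structure."""
    size = len(data)
    step = size // n
    splits = []
    for i in range(n):
        start = i * step
        end = size if i == n - 1 else (i + 1) * step
        splits.append(data[start:end])
    return splits

pieces = split_bytes(data, 2)
# The record "E|F|G|H\n" is cut in the middle: neither piece contains it whole.
```

With one mapper (n=1) the whole file is a single piece and every record is visible to it; with more, each mapper only sees its own byte range.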
Hi.
I want to know the read and write throughput of HDFS.
Where could I get that data? I can't find it. T.T
Have you seen such data?
Thanks,
I think you can find that data at this page --
http://developer.yahoo.com/blogs/hadoop/2008/09/scaling_hadoop_to_4000_nodes_a.html
/Edward J. Yoon
On Mon, Oct 6, 2008 at 11:30 AM, 황인환 [EMAIL PROTECTED] wrote:
Hi.
I want to know read and write throughput data of HDFS
Where could I get that
Hello guys,
I'm trying to use Java to manipulate HBase through its API. At the moment, I'm trying to do
some simple CRUD operations with it, and from here a little problem arises.
What I'm trying to do is: given an existing table, display/get an existing row
key (in order to update, manipulate