Hi Pedro,
Have you read the documentation on profiling MapReduce?
http://hadoop.apache.org/docs/r0.20.2/mapred_tutorial.html#Profiling
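For quick reference, a hedged sketch of the jobconf entries that section of the tutorial describes (property names are from the r0.20.2 docs; the JProfiler agent path is a hypothetical example, not something from this thread):

```xml
<!-- Enable profiling for the first three map and reduce task attempts. -->
<property><name>mapred.task.profile</name><value>true</value></property>
<property><name>mapred.task.profile.maps</name><value>0-2</value></property>
<property><name>mapred.task.profile.reduces</name><value>0-2</value></property>
<!-- To attach an external profiler such as JProfiler instead of the default
     hprof agent, point the params property at its agent library
     (the path below is hypothetical): -->
<property>
  <name>mapred.task.profile.params</name>
  <value>-agentpath:/opt/jprofiler/bin/linux-x64/libjprofilerti.so=port=8849</value>
</property>
```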
Jeff
On Sat, Dec 15, 2012 at 7:20 AM, Pedro Sá da Costa psdc1...@gmail.com wrote:
Hi
I want to attach JProfiler to Hadoop MapReduce (MR). Do I need to
If it is a small number, A seems the best way to me.
On Friday, December 28, 2012, Kshiva Kps wrote:
Which one is correct?
What is the preferred way to pass a small number of configuration
parameters to a mapper or reducer?
A. As key-value pairs in the JobConf object.
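In Java MapReduce, option A means calling `conf.set("my.key", "value")` when submitting the job and reading it back with `context.getConfiguration().get("my.key")` inside the task. With Hadoop Streaming, the same jobconf entries are exported to the task process as environment variables, with non-alphanumeric characters replaced by underscores. A minimal sketch of the streaming mapper side, where the property name `my.min.length` is an assumed example:

```python
import os

def min_length():
    # Hadoop Streaming exports jobconf entries to the task environment,
    # so a job launched with -D my.min.length=4 exposes the value as the
    # environment variable my_min_length (dots become underscores).
    return int(os.environ.get("my_min_length", "1"))

def mapper(lines, threshold):
    # Emit "<word>\t1" for every input word at least `threshold` long.
    for line in lines:
        word = line.strip()
        if word and len(word) >= threshold:
            yield f"{word}\t1"

# In a real streaming job this script would run once per task, roughly:
#   import sys
#   for out in mapper(sys.stdin, min_length()):
#       print(out)
```

The same pattern works in a reducer; it only suits a handful of small values, since every parameter is serialized into the job configuration shipped to each task.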
Perfect, thanks. It's what I was looking for.
I have a few nodes, all with 2 TB drives, but one with 2x1 TB, which means
that in the end, for Hadoop, it's almost the same thing.
JM
2012/12/28, Robert Molina rmol...@hortonworks.com:
Hi Jean,
Hadoop will not factor in the number of disks or directories,
Ed,
There are some who are of the opinion that these certifications are worthless.
I tend to disagree; however, I don't think they are the best way to
demonstrate one's abilities.
IMHO they should provide a baseline.
We have seen these types of questions on the list and in the forums.
E. Store them in HBase...
On Sun, Dec 30, 2012 at 12:24 AM, Hemanth Yamijala
yhema...@thoughtworks.com wrote:
If it is a small number, A seems the best way to me.
On Friday, December 28, 2012, Kshiva Kps wrote:
Which one is correct?
What is the preferred way to pass a small number
Only if you have a few mappers and reducers.
On Monday, December 31, 2012, Jonathan Bishop wrote:
E. Store them in HBase...
On Sun, Dec 30, 2012 at 12:24 AM, Hemanth Yamijala
yhema...@thoughtworks.com wrote:
If it is a small number, A seems the best way to me.
On Friday, December 28, 2012,
Nagarjuna,
Can you explain in more detail: what is the cost of using HBase as
configuration storage for MR jobs, say if there are many of them?
Jon
On Sun, Dec 30, 2012 at 11:02 AM, nagarjuna kanamarlapudi
nagarjuna.kanamarlap...@gmail.com wrote:
Only if you have a few mappers and reducers.
F. Put a MongoDB replica set on all Hadoop worker nodes and let the tasks
query the MongoDB instance at localhost.
(This is what I did recently with a multi-GiB dataset.)
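The pattern here is: instead of shipping lookup data through the job configuration, each task queries a store running on its own node. A hedged stdlib sketch of that lookup-at-localhost idea, with sqlite3 standing in for the node-local MongoDB replica (the table and key names are made up for illustration):

```python
import sqlite3

def build_local_store(path, pairs):
    # Stand-in for the node-local replica: a small key-value table.
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")
    conn.executemany("INSERT OR REPLACE INTO kv VALUES (?, ?)", pairs)
    conn.commit()
    return conn

def lookup(conn, key):
    # Each map/reduce task would call this against its local store
    # instead of carrying the whole dataset in the jobconf.
    row = conn.execute("SELECT v FROM kv WHERE k = ?", (key,)).fetchone()
    return row[0] if row else None
```

With a real MongoDB replica set the idea is the same: the driver connects to localhost, so reads stay on the node and lookup capacity scales with the number of workers.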
--
Kind regards,
Niels Basjes
(Sent from mobile)
On 30 Dec 2012 20:01, Jonathan Bishop jbishop@gmail.com wrote: