Also, are Hadoop Summit registrations required to attend the BoF?
On Wed, Jun 3, 2015 at 10:52 AM, Karthik Kambatla ka...@cloudera.com
wrote:
Going through all YARN umbrella JIRAs
https://issues.apache.org/jira/issues/?jql=project%20in%20(Yarn)%20AND%20summary%20~%20umbrella%20AND%20resolution
users, you are welcome to attend and join the discussion around
Hadoop YARN. The meetup link is here:
http://www.meetup.com/Hadoop-Summit-Community-San-Jose/events/222465938/
Thanks all,
+Vinod
--
Karthik Kambatla
Software Engineer, Cloudera Inc
sumasai.shivapra...@gmail.com wrote:
We are planning to deploy Hadoop 2.6.0 with a default configuration to
cache 1 entry in the state store. With a workload of 150-250
concurrent applications at any time, which state store is better to use,
and for what reasons?
Thanks
Suma
--
Karthik
There was an issue with the infrastructure. It is now fixed and the 2.5.0
artifacts are available.
Mark - can you please retry now?
Thanks
Karthik
On Tue, Aug 26, 2014 at 6:54 AM, Karthik Kambatla ka...@cloudera.com
wrote:
Thanks for reporting this, Mark.
It appears the artifacts are published to
https://repository.apache.org/content/repositories/releases/org/apache/hadoop/hadoop-common/2.5.0/,
but haven't propagated to
http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/
I am following up on this, and
I believe they are normalized to be multiples of
yarn.scheduler.increment-allocation-mb.
yarn.scheduler.minimum-allocation-mb can be set to as low as zero. Llama
does this.
As to why normalization, I think it is to make sure there is no external
fragmentation. It is similar to why memory is
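To make the rounding concrete, here is a minimal plain-Java sketch of how a request might be normalized up to a multiple of yarn.scheduler.increment-allocation-mb (the class and method names are mine for illustration, not the actual YARN scheduler code):

```java
public class ResourceNormalizer {
    // Round a memory request up to the nearest multiple of the increment,
    // after enforcing the minimum allocation. This mirrors the idea of
    // normalizing against yarn.scheduler.increment-allocation-mb so
    // containers come in uniform sizes and avoid external fragmentation.
    static int normalize(int requestedMb, int minimumMb, int incrementMb) {
        int bounded = Math.max(requestedMb, minimumMb);
        // Integer ceiling: round bounded up to a multiple of incrementMb.
        return ((bounded + incrementMb - 1) / incrementMb) * incrementMb;
    }

    public static void main(String[] args) {
        // A 1500 MB request with a 1024 MB increment rounds up to 2048 MB.
        System.out.println(normalize(1500, 0, 1024));
        // Even with minimum-allocation-mb set to 0 (as Llama does),
        // a tiny request still occupies one full increment.
        System.out.println(normalize(1, 0, 512));
    }
}
```

With a minimum of 0, the increment alone decides the granularity, which is why Llama can set the minimum that low without producing arbitrarily-sized containers.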
(Redirecting to cdh-user, moving user@hadoop to bcc).
Hi Oren
Can you attach slightly longer versions of the log files from both JTs?
Also, if this is something recurring, it would be nice to monitor the JT
heap usage and GC times using jstat -gcutil <jt-pid>.
Thanks
Karthik
On Thu, Jan
Moving general@ to bcc and redirecting this to the appropriate list -
user@hadoop.apache.org
On Mon, Sep 16, 2013 at 2:18 AM, Jagat Singh jagatsi...@gmail.com wrote:
Hello Mahmoud
You can run it on your machine too.
I learnt everything on my 3 GB, 2 GHz machine and only recently got a better one.
Moving general@ to bcc
On Mon, Sep 16, 2013 at 1:20 PM, xeon xeonmailingl...@gmail.com wrote:
Hi,
- I want the wordcount example to produce a SequenceFile output with
the result. How do I do this?
- I also want to cat the SequenceFile and read the result. A
simple hdfs dfs -cat
How about sending (0, x) to reducer 0 and (1, x) to reducer 1? Reducer 0
can then act based on the value of x.
On Wed, Mar 13, 2013 at 2:29 AM, Vikas Jadhav vikascjadha...@gmail.com wrote:
Hello, I am not talking about a custom partitioner (a custom partitioner is
involved, but I want to write the same pair multiple times)
i
Forwarding your email to the cdh-user group.
Thanks
Karthik
On Tue, Jul 17, 2012 at 2:24 PM, Trevor tre...@scurrilous.com wrote:
Hi all,
I recently upgraded from CDH4b2 (0.23.1) to CDH4 (2.0.0). Now for some
strange reason, my MRv2 jobs (TeraGen, specifically) fail if I run with
more than
One way to achieve this would be to:
1. Emit the same value multiple times, each time with a different key.
2. Use these different keys, in conjunction with the partitioner, to
achieve the desired distribution.
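The two steps above can be sketched in plain Java (this simulates the idea rather than using the Hadoop Partitioner API; all names here are illustrative):

```java
import java.util.*;

public class MultiKeyEmit {
    // Hash-partition a synthetic key across numPartitions, the way a
    // custom Partitioner would route keys to reduce tasks.
    static int partition(int key, int numPartitions) {
        return Math.floorMod(key, numPartitions);
    }

    public static void main(String[] args) {
        int numPartitions = 3;
        String value = "record-42";
        Map<Integer, List<String>> partitions = new TreeMap<>();
        // Step 1: emit the same value once per target partition,
        // each time under a different synthetic key (0, 1, 2, ...).
        for (int key = 0; key < numPartitions; key++) {
            // Step 2: the partitioner routes each key to its partition,
            // so every partition ends up with a copy of the value.
            int p = partition(key, numPartitions);
            partitions.computeIfAbsent(p, k -> new ArrayList<>()).add(value);
        }
        System.out.println(partitions);
    }
}
```

In a real job, step 1 happens in the map function and step 2 in a Partitioner subclass; the key choice controls exactly which reducers see the duplicated pair.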
Hope that helps!
Karthik
On Thu, Jul 5, 2012 at 12:19 AM, 静行
Hi Nishan
Let me forward this to the right list -
Thanks
Karthik
On Thu, Jul 5, 2012 at 6:43 AM, Nishan Shetty nishan.she...@huawei.com wrote:
Hi
In CDH security guide it is mentioned that
“Important
Remember that the user who launches the job must exist on every node.”
Hi Sharat
A couple of questions/comments:
1. Is your input graph complete?
2. If it is not complete, it might make sense to use adjacency lists of
graph nodes as the input to each map function. (Multiple adjacency lists
for the map task)
3. Even if it is complete, using the
I have never tried it, but the following must also be possible.
map: (k1, v1) -> list(k2, v2)
combine: (k2, list(v2)) -> list(k3, v3)
reduce: (k3, list(v3)) -> list(k4, v4)
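As a small illustration of the usual case where the combiner is reused as the reducer (so k2 = k3 and v2 = v3), here is a plain-Java word-count sketch (not the Hadoop API; the names are mine):

```java
import java.util.*;

public class CombineSketch {
    // Sum the partial counts for each word. Because the input type
    // (word -> list of counts) and output type (word -> count) line up,
    // the same function can serve as both combiner and reducer, which
    // is the precondition for reusing a reducer as a combiner.
    static Map<String, Integer> reduce(Map<String, List<Integer>> grouped) {
        Map<String, Integer> out = new TreeMap<>();
        grouped.forEach((word, counts) ->
            out.put(word, counts.stream().mapToInt(Integer::intValue).sum()));
        return out;
    }

    public static void main(String[] args) {
        // Simulated map-side output: each occurrence emits (word, 1).
        Map<String, List<Integer>> mapOut = new HashMap<>();
        mapOut.put("cat", Arrays.asList(1, 1, 1));
        mapOut.put("dog", Arrays.asList(1));
        // Running the combine step locally gives partial sums that the
        // reduce step can safely sum again.
        System.out.println(reduce(mapOut));
    }
}
```

When the combine and reduce types differ, as in the generalized signatures above, the reducer can no longer be reused as the combiner, since the reducer would then receive v3 values rather than the v2 values it expects.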
Karthik Kambatla
On Sun, Nov 22, 2009 at 4:16 AM, Y G gymi...@gmail.com wrote:
if your combiner is the same as the reducer
Karthik Kambatla
2009/11/22 Gang Luo lgpub...@yahoo.com.cn
So, you want to read the sample file in main and add each line to the job
via job.set, and then read these lines in the mapper via job.get?
I think it is better to name the data file as an input source to the mapper,
while reading the whole sample file