Re: fail and kill all tasks without killing job.

2012-07-20 Thread JAX
I believe that kill-task simply kills the task, but then the same process (i.e. "task") starts again with a new id. Jay Vyas MMSB UCHC On Jul 20, 2012, at 6:23 PM, "Bejoy KS" wrote: > Hi Jay > > Did you try > hadoop job -kill-task ? And is that not working as desired? > > Regards > Bejoy KS >
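
For reference, a minimal sketch of doing the same thing programmatically, assuming the old org.apache.hadoop.mapred client API and purely hypothetical job/attempt ids; this is roughly the call that hadoop job -kill-task / -fail-task drive:

    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.JobID;
    import org.apache.hadoop.mapred.RunningJob;
    import org.apache.hadoop.mapred.TaskAttemptID;

    public class KillTaskSketch {
        public static void main(String[] args) throws Exception {
            JobClient client = new JobClient(new JobConf());
            // Hypothetical ids: substitute the real running job and attempt.
            RunningJob job = client.getJob(JobID.forName("job_201207200001_0001"));
            TaskAttemptID attempt =
                TaskAttemptID.forName("attempt_201207200001_0001_m_000003_0");
            // shouldFail = true marks the attempt FAILED (counts toward the
            // task's retry limit); false just kills it, and the framework
            // reschedules the same task as a new attempt with a new id.
            job.killTask(attempt, true);
        }
    }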

Re: isSplitable() problem

2012-04-23 Thread JAX
Curious: it seems like you could aggregate the results in the mapper as a local variable or list of strings --- is there a way to know that your mapper has just read the LAST line of an input split? I.e., if so, then you could implement your entire solution in your mapper without needing a new inpu
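
One way to sidestep detecting the last line yourself: in the new org.apache.hadoop.mapreduce API, the framework calls Mapper.cleanup() exactly once after map() has seen every record in the split. A minimal sketch, assuming plain text input and a hypothetical aggregate-the-whole-split use case:

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class AggregateInMapper extends Mapper<LongWritable, Text, Text, Text> {
        private final List<String> lines = new ArrayList<String>();

        @Override
        protected void map(LongWritable key, Text value, Context context) {
            // Buffer each line of the split locally instead of emitting it.
            lines.add(value.toString());
        }

        @Override
        protected void cleanup(Context context)
                throws IOException, InterruptedException {
            // Runs once after the last record of the split, so the whole
            // split can be written out as a single aggregated value here.
            StringBuilder joined = new StringBuilder();
            for (String line : lines) {
                joined.append(line).append('\n');
            }
            context.write(new Text("split-aggregate"), new Text(joined.toString()));
        }
    }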

Re: remote job submission

2012-04-21 Thread JAX
Thanks, J Harsh: I have another question, though --- you mentioned that the client needs access to "the DataNodes (for actually writing the previous files to DFS for the JobTracker to pick up)". What do you mean by previous files? It seems like, if designing Hadoop from scratch, I wouldn'
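
By "previous files" he presumably means the job artifacts the client stages into DFS before submission (the job jar, split metadata, job.xml). A minimal sketch of a remote client, assuming Hadoop 1.x-style configuration keys and placeholder hostnames/paths:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class RemoteSubmit {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Placeholder hostnames: point at the real NameNode and JobTracker.
            conf.set("fs.default.name", "hdfs://namenode.example.com:8020");
            conf.set("mapred.job.tracker", "jobtracker.example.com:8021");

            Job job = new Job(conf, "remote-submit-sketch");
            job.setJarByClass(RemoteSubmit.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path("/user/jay/input"));
            FileOutputFormat.setOutputPath(job, new Path("/user/jay/output"));

            // submit() stages the job files into DFS (hence the client needs
            // to reach the DataNodes) and then hands the job to the JobTracker.
            job.submit();
            System.out.println("Submitted " + job.getJobID());
        }
    }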

Reporter vs context

2012-04-20 Thread JAX
Hi guys: I notice that there's been some chatter about the "Reporter" in the context of counters. Forgive my ignorance here, as I've never seen Reporters used in real code. What is the difference between the use of our Context and Reporter objects, and how are they related? Is there any overlap
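
They play the same role in the two APIs: the old org.apache.hadoop.mapred interfaces hand your map()/reduce() a Reporter (counters, status, progress) plus an OutputCollector, while the new org.apache.hadoop.mapreduce API folds both jobs into the single Context object. A rough side-by-side sketch, with a hypothetical counter group:

    import java.io.IOException;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    // Old API: the Reporter is passed into map() and carries counters/status.
    public class OldApiMapper extends MapReduceBase
            implements org.apache.hadoop.mapred.Mapper<LongWritable, Text, Text, LongWritable> {
        public void map(LongWritable key, Text value,
                        OutputCollector<Text, LongWritable> out, Reporter reporter)
                throws IOException {
            reporter.incrCounter("myGroup", "records", 1);
            reporter.setStatus("still alive");
            out.collect(value, new LongWritable(1));
        }
    }

    // New API: the Context object plays the same role, plus output collection.
    class NewApiMapper
            extends org.apache.hadoop.mapreduce.Mapper<LongWritable, Text, Text, LongWritable> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            context.getCounter("myGroup", "records").increment(1);
            context.setStatus("still alive");
            context.write(value, new LongWritable(1));
        }
    }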

Re: remote job submission

2012-04-20 Thread JAX
Re: anirunds question on "how to submit a job remotely". Here are my follow-up questions; hope this helps to guide the discussion: 1) Normally, what is the "job client"? Do you guys typically use the namenode as the client? 2) In the case where the client != name node, how does the cli

Re: Accessing global Counters

2012-04-20 Thread JAX
No, reducers can't access mapper counters. ---> Maybe there's a way to intermediately put counters in the distributed cache??? Jay Vyas MMSB UCHC On Apr 20, 2012, at 1:24 PM, Robert Evans wrote: > There was a discussion about this several months ago > > http://mail-archives.apache.org/mod_mbox
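
For completeness: the counters are aggregated by the framework and become readable from the client once the job finishes, even though reducers of the same job can't see the mappers' live values. A minimal driver-side sketch, assuming the new API and hypothetical group/counter names:

    import org.apache.hadoop.mapreduce.Counter;
    import org.apache.hadoop.mapreduce.Counters;
    import org.apache.hadoop.mapreduce.Job;

    public class ReadCountersSketch {
        // Assumes 'job' has already been configured elsewhere in the driver.
        static void runAndReport(Job job) throws Exception {
            job.waitForCompletion(true);
            // Counters are only reliable from the client after completion.
            Counters counters = job.getCounters();
            Counter records = counters.findCounter("myGroup", "records");
            System.out.println(records.getDisplayName() + " = " + records.getValue());
        }
    }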

Snappy question related to last

2012-04-15 Thread JAX
Hi guys: related to the last Snappy question, how does Hadoop detect Snappy compression in the input dataset (i.e., how does Hadoop know when to decompress records via Snappy)? Jay Vyas MMSB UCHC
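
For file-based input formats the detection is by file-name suffix: the record reader asks a CompressionCodecFactory which registered codec, if any, matches the input path, so a .snappy extension maps to the Snappy codec (likewise .gz, .bz2, etc.). A small sketch of that lookup, assuming a path passed on the command line:

    import java.io.InputStream;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.CompressionCodecFactory;

    public class CodecDetectionSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            Path input = new Path(args[0]);  // e.g. /data/part-00000.snappy

            // The codec is chosen from the file-name suffix via the list of
            // codecs configured in io.compression.codecs.
            CompressionCodecFactory factory = new CompressionCodecFactory(conf);
            CompressionCodec codec = factory.getCodec(input);
            if (codec == null) {
                System.out.println("No codec matched; reading the file as-is.");
            } else {
                System.out.println("Detected codec: " + codec.getClass().getName());
                InputStream in = codec.createInputStream(fs.open(input));
                in.close();
            }
        }
    }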

Re: Issue with loading the Snappy Codec

2012-04-15 Thread JAX
That is odd: why would it crash when your m/r job did not rely on Snappy? One possibility: maybe because your input is Snappy-compressed, Hadoop is detecting that compression and trying to use the Snappy codec to decompress it? Jay Vyas MMSB UCHC On Apr 15, 2012, at 5:08 AM, Bas Hickendorf

Re: Professional Hiring: Architect and Developer in Hadoop Area ( Beijing, China )

2012-04-09 Thread JAX
I'm sure I speak quite accurately for the moderators: ***This is not a job board.*** Jay Vyas MMSB UCHC On Apr 9, 2012, at 10:03 AM, Vishal Kumar Gupta wrote: > hi Sarah, > > Please find my updated resume attached with this mail. > > Regards, > vishal > > 2012/4/9 Bing Li > A well-known large international IT company (

Re: Get Current Block or Split ID, and using it, the Block Path

2012-04-08 Thread JAX
I have a related question about blocks related to this... Normally, a reduce job outputs several files, all in the same directory. But why? Since we know that Hadoop is abstracting our file for us, shouldn't the part-r- outputs ultimately be thought of as a single file? What is the corres
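
Each reducer writes its own part-r-* file so that reducers never contend for a single writer; the output directory is the logical "file". When one physical file really is needed, one option is to concatenate the parts afterwards. A sketch, assuming placeholder paths:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.FileUtil;
    import org.apache.hadoop.fs.Path;

    public class MergeOutputSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            // Placeholder paths: the job's output directory and one target file.
            Path outputDir = new Path("/user/jay/output");
            Path merged = new Path("/user/jay/output-merged.txt");
            // Concatenates every file in the directory into a single file.
            FileUtil.copyMerge(fs, outputDir, fs, merged, false, conf, null);
        }
    }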

Job, JobConf, and Configuration.

2012-04-08 Thread JAX
Hi guys. Just a theoretical question here: I notice in chapter 1 of the Hadoop O'Reilly book that the "new API" example has *no* Configuration object. Why is that? I thought the new API still uses/needs a Configuration class when running jobs. Jay Vyas MMSB UCHC On Apr 7, 2012, at 4:29
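
For what it's worth, the new-API Job still carries a Configuration; an example can skip it because Job can build one internally. A small sketch (Hadoop 1.x-era constructors, hypothetical property value) of both forms:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class JobConfigSketch {
        public static void main(String[] args) throws Exception {
            // The no-arg form still builds a Configuration internally...
            Job implicit = new Job();
            System.out.println(implicit.getConfiguration().get("mapred.job.tracker"));

            // ...and you can still pass one explicitly to override settings.
            Configuration conf = new Configuration();
            conf.set("mapred.job.tracker", "local");
            Job explicit = new Job(conf, "with-explicit-conf");
            System.out.println(explicit.getConfiguration().get("mapred.job.tracker"));
        }
    }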

Namespace logs : a common issue?

2012-04-06 Thread JAX
Hi guys: I'm noticing that namespace conflicts or differences are a common theme in Hadoop, both in my experience and now on this listserv. Does anyone have any thoughts on why this is such a common issue and how it will be dealt with in new releases? Jay Vyas MMSB UCHC

Hadoop fs custom commands

2012-04-01 Thread JAX
Hi guys: I wanted to make some custom Hadoop fs commands. Is this feasible/practical? In particular, I wanted to summarize file sizes and print some useful estimates of things on the fly from my cluster. I'm not sure how the Hadoop shell commands are implemented... but I thought maybe ther
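
It should be doable without touching the shell code itself: the same public FileSystem API that backs the built-in commands can feed a small driver, which you can then wire to a shell alias. A sketch, assuming a path argument and the plain FileSystem/ContentSummary calls:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.ContentSummary;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class DiskUsageSketch {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            Path dir = new Path(args.length > 0 ? args[0] : "/user/jay");

            // Per-entry sizes, roughly what "hadoop fs -du" prints.
            for (FileStatus status : fs.listStatus(dir)) {
                System.out.println(status.getPath() + "\t" + status.getLen());
            }

            // Recursive totals for the whole tree.
            ContentSummary summary = fs.getContentSummary(dir);
            System.out.println("total bytes: " + summary.getLength()
                    + ", files: " + summary.getFileCount());
        }
    }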

Re: namespace error after formatting namenode (psuedo distr mode).

2012-03-30 Thread JAX
Thanks a lot, Arpit: I will try this first thing in the morning. For now --- I need a glass of wine. Jay Vyas MMSB UCHC On Mar 30, 2012, at 10:38 PM, Arpit Gupta wrote: > the namespace id is persisted on the datanode data directories. As you > formatted the namenode these id's no longer match

Re: Question about accessing another HDFS

2011-12-08 Thread JAX
I was confused about this for a while also. I don't have all the details, but I think my question on S.O. might help you. I was playing with different protocols... trying to find a way to programmatically access all data in HDFS. http://stackoverflow.com/questions/7844458/how-can-i-access-hadoop
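
For the programmatic route, one standard approach is FileSystem.get(URI, Configuration) with an hdfs:// URI pointing at the other cluster's NameNode. A minimal sketch, assuming a placeholder host/port and file path:

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class RemoteHdfsReadSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Placeholder host/port: the other cluster's NameNode RPC address.
            FileSystem remote = FileSystem.get(
                    new URI("hdfs://other-namenode.example.com:8020"), conf);

            FSDataInputStream in = remote.open(new Path("/data/sample.txt"));
            try {
                int b;
                while ((b = in.read()) != -1) {
                    System.out.write(b);
                }
            } finally {
                in.close();
            }
        }
    }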

Re: Hadoop MapReduce Poster

2011-11-01 Thread JAX
That's a great tutorial. I like the conciseness of it. Jay Vyas MMSB UCHC On Nov 1, 2011, at 1:39 AM, Prashant Sharma wrote: > Hi Mathias, > > I wrote a small introduction or a quick ramp up for starting out with > hadoop while learning it at my institute. > http://functionalprograming.file

Re: getting there (EOF exception).

2011-10-30 Thread JAX
Thanks! Yes, I agree ... but are you sure about 8020? 8020 serves on 127.0.0.1 (rather than 0.0.0.0) ... thus it is inaccessible to outside clients... That is very odd. Why would that be the case? Any insights (using Cloudera's Hadoop VM)? Sent from my iPad On Oct 30, 2011, at 11:48 PM, Harsh

Re: writing to hdfs via java api

2011-10-28 Thread JAX
Hi Tom: which log will have info about why a process was killed? Sent from my iPad On Oct 28, 2011, at 11:41 PM, Tom Melendez wrote: > Hi Jay, > > Are you able to look at the logs or the web interface? Can you find > out why it's getting killed? > > Also, can you verify that these ports are

Re: writing to hdfs via java api

2011-10-28 Thread JAX
Yup, brutal :-| but you never regret fixing a bug ... unlike --- Sent from my iPad On Oct 28, 2011, at 11:43 PM, Alex Gauthier wrote: > Brutal Friday night. Coding < pussy. > > :) > > On Fri, Oct 28, 2011 at 8:43 PM, Alex Gauthier > wrote: > >> >> >> On Fri, Oct 28, 2011 at 8:41

Connecting to vm through java

2011-10-20 Thread JAX
Hi guys: I'm getting the dreaded org.apache.hadoop.ipc.Client$Connection handleConnectionFailure when connecting to Cloudera's Hadoop (running in a VM) to request running a simple m/r job (from a machine outside the Hadoop VM). I've seen a lot of posts about this online, and it's als