I believe that kill-task simply kills the task, but then the same process (i.e.
the "task") starts again, with a new id.
Jay Vyas
MMSB
UCHC
On Jul 20, 2012, at 6:23 PM, "Bejoy KS" wrote:
> Hi Jay
>
> Did you try
> hadoop job -kill-task ? And is that not working as desired?
>
> Regards
> Bejoy KS
>
Curious: it seems like you could aggregate the results in the mapper as a local
variable or list of strings --- is there a way to know that your mapper has just
read the LAST line of an input split?
I.e., if so, then you could implement your entire solution in your mapper without
needing a new inpu
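For what it's worth: in the new API the framework calls Mapper.cleanup(Context)
exactly once, after the last record of the split has been passed to map(), which
gives you that hook without detecting the last line yourself. A rough sketch
(the class and counting logic here are made up for illustration):

  import java.io.IOException;
  import java.util.HashMap;
  import java.util.Map;

  import org.apache.hadoop.io.LongWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Mapper;

  // Hypothetical mapper that aggregates locally and emits only once,
  // after its entire input split has been consumed.
  public class AggregatingMapper
      extends Mapper<LongWritable, Text, Text, LongWritable> {

    private final Map<String, Long> counts = new HashMap<String, Long>();

    @Override
    protected void map(LongWritable key, Text value, Context context) {
      for (String token : value.toString().split("\\s+")) {
        Long prev = counts.get(token);
        counts.put(token, prev == null ? 1L : prev + 1L);
      }
    }

    // Runs once per task, after the last record of the split.
    @Override
    protected void cleanup(Context context)
        throws IOException, InterruptedException {
      for (Map.Entry<String, Long> e : counts.entrySet()) {
        context.write(new Text(e.getKey()), new LongWritable(e.getValue()));
      }
    }
  }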
Thanks, Harsh J:
I have another question, though ---
You mentioned that the client needs access to "the
DataNodes (for actually writing the previous files to DFS for the
JobTracker to pick up)".
What do you mean by previous files? It seems like, if designing Hadoop from
scratch, I wouldn'
Hi guys: I notice that there's been some chatter about the "Reporter" in the
context of counters. Forgive my ignorance here, as I've never seen Reporters
used in real code.
What is the difference between the use of our Context and Reporter objects,
and how are they related? Is there any overlap?
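As far as I understand it, Reporter is the old (org.apache.hadoop.mapred) API's
handle for counters, progress, and the current split; the new
(org.apache.hadoop.mapreduce) API folded all of that into Context, so for
counters the overlap is total. A side-by-side sketch (group/counter names
invented):

  import java.io.IOException;

  import org.apache.hadoop.io.LongWritable;
  import org.apache.hadoop.io.Text;

  // Old API: Reporter is handed to every map() call.
  class OldApiMapper extends org.apache.hadoop.mapred.MapReduceBase
      implements org.apache.hadoop.mapred.Mapper<LongWritable, Text, Text, LongWritable> {
    public void map(LongWritable key, Text value,
        org.apache.hadoop.mapred.OutputCollector<Text, LongWritable> out,
        org.apache.hadoop.mapred.Reporter reporter) throws IOException {
      reporter.incrCounter("MyGroup", "RecordsSeen", 1);
      reporter.progress(); // keeps a slow task from being timed out
    }
  }

  // New API: Context plays the roles of both Reporter and OutputCollector.
  class NewApiMapper
      extends org.apache.hadoop.mapreduce.Mapper<LongWritable, Text, Text, LongWritable> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      context.getCounter("MyGroup", "RecordsSeen").increment(1);
      context.progress();
    }
  }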
RE: anirund's question on "how to submit a job remotely".
Here are my follow-up questions - hope this helps to guide the discussion:
1) Normally - what is the "job client"? Do you guys typically use the namenode
as the client?
2) In the case where the client != namenode, how does the cli
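On 1): the job client is just whatever JVM calls the submission API; it needs no
special role in the cluster, so it is usually not the namenode. Any machine with
the Hadoop client jars and network reach to the NameNode, JobTracker, and
DataNodes can submit. A rough sketch for the pre-YARN setup of that era
(hostnames and paths are placeholders):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
  import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

  public class RemoteSubmit {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      // Point the client at the remote cluster (placeholder hostnames):
      conf.set("fs.default.name", "hdfs://namenode.example.com:8020");
      conf.set("mapred.job.tracker", "jobtracker.example.com:8021");

      Job job = new Job(conf, "remote-submit-demo");
      job.setJarByClass(RemoteSubmit.class);
      FileInputFormat.addInputPath(job, new Path("/user/jay/input"));
      FileOutputFormat.setOutputPath(job, new Path("/user/jay/output"));
      System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
  }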
No, reducers can't access mapper counters.
---> maybe there's a way to put intermediate counters in the distributed
cache???
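Counters are only merged at the jobtracker while the job runs, so tasks can't
read each other's values mid-flight; the reliable pattern is to read the
aggregated totals from the submitting client once the job finishes. A small
sketch (group/counter names invented):

  import org.apache.hadoop.mapreduce.Counters;
  import org.apache.hadoop.mapreduce.Job;

  public class CounterReader {
    // Call after job.waitForCompletion(true): the returned Counters hold
    // the merged totals from all map and reduce tasks.
    public static long recordsSeen(Job job) throws Exception {
      Counters counters = job.getCounters();
      return counters.findCounter("MyGroup", "RecordsSeen").getValue();
    }
  }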
Jay Vyas
MMSB
UCHC
On Apr 20, 2012, at 1:24 PM, Robert Evans wrote:
> There was a discussion about this several months ago
>
> http://mail-archives.apache.org/mod_mbox
Hi guys: related to the last Snappy question - how does Hadoop detect Snappy
compression in the input dataset (i.e., how does Hadoop
know when to decompress records via Snappy)?
Jay Vyas
MMSB
UCHC
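Re: the detection question above, as far as I know it goes by file extension:
TextInputFormat asks CompressionCodecFactory to match the path suffix
(".snappy", ".gz", ...) against the codecs registered in io.compression.codecs.
A sketch of the same lookup (the path is a placeholder):

  import java.io.InputStream;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.compress.CompressionCodec;
  import org.apache.hadoop.io.compress.CompressionCodecFactory;

  public class CodecSniff {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      Path p = new Path("/data/part-00000.snappy"); // placeholder path
      FileSystem fs = p.getFileSystem(conf);

      // Match the file suffix against registered codecs; null means
      // "no compression detected, read as-is".
      CompressionCodec codec = new CompressionCodecFactory(conf).getCodec(p);
      InputStream in =
          (codec == null) ? fs.open(p) : codec.createInputStream(fs.open(p));
      System.out.println("codec: " + (codec == null ? "none" : codec.getClass()));
      in.close();
    }
  }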
That is odd; why would it crash when your m/r job did not rely on Snappy?
One possibility: maybe because your input is Snappy-compressed, Hadoop is
detecting that compression and trying to use the Snappy codec to decompress?
Jay Vyas
MMSB
UCHC
On Apr 15, 2012, at 5:08 AM, Bas Hickendorf
I'm sure I speak quite accurately for the moderators: ***This is not a job
board***
Jay Vyas
MMSB
UCHC
On Apr 9, 2012, at 10:03 AM, Vishal Kumar Gupta wrote:
> hi Sarah,
>
> Please find my updated resume attached with this mail.
>
> Regards,
> vishal
>
> 2012/4/9 Bing Li
> Internationally renowned large IT enterprise (
I have a related question about blocks related to this. Normally, a reduce
job outputs several files, all in the same directory.
But why? Since we know that Hadoop is abstracting our files for us, shouldn't
the part-r- outputs ultimately be thought of as a single file?
What is the corres
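The part-r- files are one logical dataset but one physical file per reducer,
since reducers write independently in parallel and HDFS files can't be appended
to concurrently. If a single physical file is wanted, you can either run one
reducer or merge afterwards; a sketch of the merge (paths are placeholders, and
FileUtil.copyMerge is the helper from Hadoop versions of that era):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.FileUtil;
  import org.apache.hadoop.fs.Path;

  public class MergeParts {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      FileSystem fs = FileSystem.get(conf);

      // Option 1, at job time: job.setNumReduceTasks(1) gives a single
      // part-r-00000, at the cost of reduce-side parallelism.

      // Option 2, after the job: concatenate the part files, much like
      // `hadoop fs -getmerge`.
      FileUtil.copyMerge(fs, new Path("/user/jay/output"),
          fs, new Path("/user/jay/merged/all.txt"),
          false /* keep the source files */, conf, null);
    }
  }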
Hi guys. Just a theoretical question here: I notice in chapter 1 of the
Hadoop O'Reilly book that the "new API" example has *no* Configuration object.
Why is that?
I thought the new API still uses / needs a Configuration class when running
jobs.
Jay Vyas
MMSB
UCHC
On Apr 7, 2012, at 4:29
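On the Configuration question above: my understanding is that the new API still
uses one, it is just hidden; new Job() creates a Configuration internally, and
you can reach it or supply your own. A small sketch (the property name is a
placeholder):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.mapreduce.Job;

  public class ConfDemo {
    public static void main(String[] args) throws Exception {
      // The book-style form: no Configuration in sight...
      Job job = new Job();

      // ...but one was created underneath, and is reachable:
      Configuration conf = job.getConfiguration();
      conf.set("my.custom.flag", "true"); // placeholder property

      // The explicit equivalent:
      Job job2 = new Job(new Configuration(), "explicit-conf");
    }
  }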
Hi guys: I'm noticing that namespace conflicts or differences are a common
theme in Hadoop, both in my experience and now on this listserv.
Does anyone have any thoughts on why this is such a common issue and how it
will be dealt with in new releases?
Jay Vyas
MMSB
UCHC
Hi guys: I wanted to make some custom Hadoop fs commands. Is this
feasible/practical? In particular, I wanted to summarize file sizes and
print some useful estimates of things on the fly from my cluster.
I'm not sure how the Hadoop
shell commands are implemented... but I thought maybe ther
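The built-in commands live in org.apache.hadoop.fs.FsShell, which is a thin
wrapper over the FileSystem API, so rather than patching FsShell it is usually
easier to write a small driver of your own and run it with `hadoop jar`. A
sketch of a homemade size summary (class name invented):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.ContentSummary;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  // A do-it-yourself fs command: total bytes and file count under a
  // directory, similar in spirit to `hadoop fs -du`.
  public class DirSize {
    public static void main(String[] args) throws Exception {
      FileSystem fs = FileSystem.get(new Configuration());
      ContentSummary cs = fs.getContentSummary(new Path(args[0]));
      System.out.println(args[0] + ": " + cs.getLength() + " bytes in "
          + cs.getFileCount() + " files");
    }
  }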
Thanks a lot, Arpit: I will try this first thing in the morning.
For now --- I need a glass of wine.
Jay Vyas
MMSB
UCHC
On Mar 30, 2012, at 10:38 PM, Arpit Gupta wrote:
> the namespace id is persisted in the datanode data directories. As you
> formatted the namenode, these IDs no longer match
I was confused about this for a while also. I don't have all the details, but
I think my question on S.O. might help you.
I was playing with different protocols...
trying to find a way to programmatically access all data in HDFS.
http://stackoverflow.com/questions/7844458/how-can-i-access-hadoop
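For what it's worth, the portable route is the plain FileSystem API over the
hdfs:// RPC protocol. A sketch (hostname, port, and path are placeholders):

  import java.io.BufferedReader;
  import java.io.InputStreamReader;
  import java.net.URI;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class ReadHdfs {
    public static void main(String[] args) throws Exception {
      // Placeholder host/port; match your cluster's fs.default.name.
      FileSystem fs = FileSystem.get(
          URI.create("hdfs://namenode.example.com:8020"), new Configuration());
      BufferedReader r = new BufferedReader(
          new InputStreamReader(fs.open(new Path("/user/jay/data.txt"))));
      for (String line; (line = r.readLine()) != null; ) {
        System.out.println(line);
      }
      r.close();
    }
  }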
That's a great tutorial. I like the conciseness of it.
Jay Vyas
MMSB
UCHC
On Nov 1, 2011, at 1:39 AM, Prashant Sharma wrote:
> Hi Mathias,
>
> I wrote a small introduction or a quick ramp up for starting out with
> hadoop while learning it at my institute.
> http://functionalprograming.file
Thanks! Yes, I agree... but are you sure about 8020? 8020 serves on 127.0.0.1
(rather than 0.0.0.0)... thus it is inaccessible to outside clients. That is
very odd; why would that be the case? Any insights (using Cloudera's Hadoop VM)?
Sent from my iPad
On Oct 30, 2011, at 11:48 PM, Harsh
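If it helps: in Hadoop of that era the NameNode binds its RPC port to whatever
host fs.default.name resolves to, so hdfs://localhost:8020 in core-site.xml
pins 8020 to 127.0.0.1. A guess at the fix on the VM side (the hostname is a
placeholder) would be:

  <!-- core-site.xml on the VM; "vm.example.com" stands in for a name
       that outside clients can resolve. With "localhost" here, 8020
       is served on the loopback interface only. -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://vm.example.com:8020</value>
  </property>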
Hi Tom: which log will have info about why a process was killed?
Sent from my iPad
On Oct 28, 2011, at 11:41 PM, Tom Melendez wrote:
> Hi Jay,
>
> Are you able to look at the logs or the web interface? Can you find
> out why it's getting killed?
>
> Also, can you verify that these ports are
Yup, brutal :-|
But you never regret fixing a bug ... unlike ---
Sent from my iPad
On Oct 28, 2011, at 11:43 PM, Alex Gauthier wrote:
> Brutal Friday night. Coding < pussy.
>
> :)
>
> On Fri, Oct 28, 2011 at 8:43 PM, Alex Gauthier
> wrote:
>
>>
>>
>> On Fri, Oct 28, 2011 at 8:41
Hi guys: I'm getting the dreaded
org.apache.hadoop.ipc.Client$Connection handleConnectionFailure
when connecting to Cloudera's Hadoop (running in a VM) to request running a
simple m/r job (from a machine outside the Hadoop VM).
I've seen a lot of posts about this online, and it's als