I believe that kill-task simply kills the task attempt, but then the same task
starts again, with a new attempt id.
Jay Vyas
MMSB
UCHC
On Jul 20, 2012, at 6:23 PM, Bejoy KS bejoy.had...@gmail.com wrote:
Hi Jay
Did you try
hadoop job -kill-task <task-id>? And is that not working as desired?
Thanks J Harsh:
I have another question, though ---
You mentioned that:
The client needs access to the DataNodes (for actually writing the previous
files to DFS for the JobTracker to pick up)
What do you mean by previous files? It seems like, if designing Hadoop from
scratch, I wouldn't
No, reducers can't access mapper counters.
--- maybe there's a way to put counters in the distributed cache as an
intermediate step???
Jay Vyas
MMSB
UCHC
On Apr 20, 2012, at 1:24 PM, Robert Evans ev...@yahoo-inc.com wrote:
There was a discussion about this several months ago
RE: anirund's question on how to submit a job remotely.
Here are my follow-up questions - hope this helps to guide the discussion:
1) Normally - what is the job client? Do you guys typically use the namenode
as the client?
2) In the case where the client != namenode, how does the client
Hi guys: I notice that there's been some chatter about the Reporter in the
context of counters. Forgive my ignorance here, as I've never seen Reporters
used in real code.
What is the difference between the use of our Context and Reporter objects,
and how are they related? Is there any overlap
That is odd: why would it crash when your m/r job did not rely on Snappy?
One possibility: maybe because your input is Snappy-compressed, Hadoop is
detecting that compression and trying to use the Snappy codec to decompress it?
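If that's what's happening, one thing worth checking is which codecs the cluster advertises. A sketch of the relevant core-site.xml property (the value shown is illustrative, not your actual config): if SnappyCodec appears in this list but the native snappy library isn't installed, jobs that read Snappy-compressed input can fail at decompression time.

```xml
<!-- core-site.xml: codecs Hadoop probes when it detects compressed input.
     Illustrative value only; check your cluster's actual setting. -->
<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
```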
Jay Vyas
MMSB
UCHC
On Apr 15, 2012, at 5:08 AM, Bas
I'm sure I speak quite accurately for the moderators that ***This is not a job
board***
Jay Vyas
MMSB
UCHC
On Apr 9, 2012, at 10:03 AM, Vishal Kumar Gupta groups...@gmail.com wrote:
hi Sarah,
Please find my updated resume attached with this mail.
Regards,
vishal
2012/4/9 Bing Li
Hi guys. Just a theoretical question here: I notice in chapter 1 of the
Hadoop O'Reilly book that the new API example has *no* Configuration object.
Why is that?
I thought the new API still uses / needs a Configuration class when running
jobs.
Jay Vyas
MMSB
UCHC
On Apr 7, 2012, at 4:29
I have a related question about blocks related to this. Normally, a reduce
job outputs several files, all in the same directory.
But why? Since we know that Hadoop is abstracting our files for us, shouldn't
the part-r- outputs ultimately be thought of as a single file?
What is the
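To make the "several part files, logically one output" point concrete, here is a minimal local sketch (all paths are made up). On HDFS, `hadoop fs -getmerge <dir> <local-file>` does the equivalent concatenation client-side:

```shell
# Simulate a reduce output directory: one part file per reducer.
mkdir -p /tmp/jobout
printf 'apple\t3\n'  > /tmp/jobout/part-r-00000
printf 'banana\t5\n' > /tmp/jobout/part-r-00001

# Concatenate the parts in order, treating them as one logical file.
# (On a real cluster: hadoop fs -getmerge /user/me/jobout merged.tsv)
cat /tmp/jobout/part-r-* > /tmp/merged.tsv
cat /tmp/merged.tsv
```

The shell glob sorts part-r-00000 before part-r-00001, so the merged file preserves the reducers' partition order.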
Hi guys: I'm noticing that namespace conflicts or differences are a common
theme in Hadoop, both in my experience and now on this listserv.
Does anyone have any thoughts on why this is such a common issue and how it
will be dealt with in new releases?
Jay Vyas
MMSB
UCHC
Hi guys: I wanted to make some custom Hadoop fs commands. Is this
feasible/practical? In particular, I wanted to summarize file sizes and
print some useful estimates of things on the fly from my cluster.
I'm not sure how the Hadoop shell commands are implemented... But I thought
maybe
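One low-effort alternative to writing new FsShell commands in Java is a wrapper script around the existing shell. Here is a local sketch of the summarizing idea, with made-up paths; on a real cluster you would feed the pipeline from `hadoop fs -du <path>` instead of local `wc`:

```shell
# Create some files to summarize (local stand-ins for HDFS paths).
mkdir -p /tmp/sizedemo
head -c 1024 /dev/zero > /tmp/sizedemo/a.dat
head -c 2048 /dev/zero > /tmp/sizedemo/b.dat

# Sum the sizes and print an estimate on the fly.
# On a real cluster: hadoop fs -du /user/me/data | awk '{s+=$1} END {print s}'
for f in /tmp/sizedemo/*.dat; do
  wc -c < "$f"
done | awk '{s+=$1} END {printf "total: %d bytes\n", s}'
```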
Thanks a lot, Arpit: I will try this first thing in the morning.
For now --- I need a glass of wine.
Jay Vyas
MMSB
UCHC
On Mar 30, 2012, at 10:38 PM, Arpit Gupta ar...@hortonworks.com wrote:
The namespace ID is persisted in the datanode data directories. As you
formatted the namenode, these
I was confused about this for a while also. I don't have all the details, but
I think my question on S.O. might help you.
I was playing with different protocols...
Trying to find a way to programmatically access all data in HDFS.
That's a great tutorial. I like the conciseness of it.
Jay Vyas
MMSB
UCHC
On Nov 1, 2011, at 1:39 AM, Prashant Sharma prashant.ii...@gmail.com wrote:
Hi Mathias,
I wrote a small introduction, or a quick ramp-up, for starting out with
Hadoop while learning it at my institute.
Thanks! Yes, I agree... but are you sure about 8020? 8020 serves on 127.0.0.1
(rather than 0.0.0.0)... thus it is inaccessible to outside clients. That is
very odd. Why would that be the case? Any insights? (Using Cloudera's Hadoop VM.)
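If 8020 really is bound to 127.0.0.1, the usual suspect is that fs.default.name points at localhost, so the NameNode only listens on the loopback interface. A sketch of the fix in core-site.xml (the hostname "hadoop-vm" here is made up; use a name or IP the outside machine can resolve):

```xml
<!-- core-site.xml: use an externally resolvable hostname, not localhost,
     so the NameNode RPC port (8020) binds on a reachable interface.
     "hadoop-vm" is a hypothetical hostname. -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://hadoop-vm:8020</value>
</property>
```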
Sent from my iPad
On Oct 30, 2011, at 11:48 PM, Harsh
Hi guys: I'm getting the dreaded
org.apache.hadoop.ipc.Client$Connection handleConnectionFailure
when connecting to Cloudera's Hadoop (running in a VM) to request running a
simple m/r job (from a machine outside the Hadoop VM).
I've seen a lot of posts about this online, and it's