There is a very clear picture in Chapter 8 of Pro Hadoop showing all of the
separators for streaming jobs.
On Tue, Nov 10, 2009 at 6:53 AM, wd wrote:
> You mean the ^A ?
> I tried \u0001 and \x01, but the streaming job recognises them as literal
> strings, not as ^A..
>
> :(
>
> 2009/11/10 Amogh Vasekar
>
> Hi,
>
The DFS client code waits until all of the datanodes that are going to
hold a replica of your output's blocks have ack'd.
If you are pausing there, most likely something is wrong in your HDFS
cluster.
On Thu, Nov 12, 2009 at 7:06 AM, Ted Xu wrote:
> hi all,
>
> We are using hadoop-0.19.1 on
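The wait described above can be modeled in plain Java: a `CountDownLatch` stands in for the replica pipeline, with one count per datanode that must acknowledge the block. This is a simplified sketch of the blocking behaviour, not the actual DFSClient code:

```java
import java.util.concurrent.CountDownLatch;

public class AckWaitDemo {
    public static void main(String[] args) throws InterruptedException {
        int replicas = 3; // stand-in for the file's replication factor
        CountDownLatch acks = new CountDownLatch(replicas);
        for (int i = 0; i < replicas; i++) {
            // Each thread simulates a datanode persisting the block, then acking.
            new Thread(acks::countDown).start();
        }
        // The client blocks here, the way close() on an HDFS output stream
        // blocks until every replica in the pipeline has ack'd.
        acks.await();
        System.out.println("all " + replicas + " replicas acked");
    }
}
```

If one "datanode" never counts down, `await()` blocks forever, which is exactly the hang you see when a node in the write pipeline is unhealthy.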
All of your data has to be converted to and from strings, and passed
through pipes from the JVM to your task and back from the task to the JVM.
On Thu, Nov 12, 2009 at 10:06 PM, Alexey Tigarev wrote:
> Hi All!
>
> How much overhead does using Hadoop Streaming add vs. native Java steps?
>
>
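A toy illustration of the extra work (not a benchmark of Hadoop itself): every value crossing the streaming pipe is serialized to text and re-parsed on the other side, where a native Java step would just keep the value in its binary form.

```java
public class StreamingOverheadDemo {
    public static void main(String[] args) {
        int n = 1_000_000;

        // Native-Java style: the values stay binary the whole time.
        long direct = 0;
        for (int i = 0; i < n; i++) direct += i;

        // Streaming style: each value is turned into text (as if written to
        // the pipe) and then parsed back (as if read on the other side).
        long roundTrip = 0;
        for (int i = 0; i < n; i++) {
            String line = Integer.toString(i);   // serialize to text
            roundTrip += Integer.parseInt(line); // parse it back
        }

        System.out.println(direct == roundTrip); // same answer, extra work
    }
}
```

The two loops produce the same sum; the second just pays for a million string allocations and parses, which is the shape of the overhead streaming adds on top of the pipe I/O itself.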
Your log messages to stdout, stderr and syslog will end up in the
logs/userlogs directory of your task tracker.
If the job is still visible via the web UI on the job tracker host (usually
port 50030), you can select the individual tasks that were run for your job,
and if you click through enough screens you will reach the logs for each attempt.
Your Eclipse instance doesn't have the jar files from the lib directory of
your hadoop installation on its classpath.
On Sat, Nov 14, 2009 at 7:51 PM, felix gao wrote:
> I wrote a simple code in my eclipse as
>
> Text t = new Text("hadoop");
> System.out.println((char)t.charAt(2));
>
> when I tr
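Once the hadoop jars are on the classpath, it helps to know that `Text` stores its contents as UTF-8 bytes, and `charAt` indexes by byte offset rather than by Java `char` index (for pure ASCII the two coincide). A plain-Java sketch of what `new Text("hadoop").charAt(2)` looks up, without needing the Hadoop jars:

```java
import java.nio.charset.StandardCharsets;

public class TextCharAtDemo {
    public static void main(String[] args) {
        // Text keeps the string as UTF-8 bytes; charAt(i) reads the
        // code point starting at byte offset i.
        byte[] utf8 = "hadoop".getBytes(StandardCharsets.UTF_8);
        System.out.println((char) utf8[2]); // byte offset 2 holds 'd'
    }
}
```

For multi-byte characters the byte offset and the `String.charAt` index diverge, which is a common surprise when moving between `Text` and `String`.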