recursively. Could this be causing problems?
Thanks,
Mike
--
Vladimir Klimontovich
Cell: +7-926-890-2349, skype: klimontovich
specifically, on TaskTrackers that can write to the
destination cluster). Each source is specified as
hftp://dfs.http.address/path (the default dfs.http.address is
namenode:50070).
Mike
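A minimal sketch of the invocation described above, assuming hypothetical NameNode hosts `srcnn` and `dstnn` (the command would be run against the destination cluster, whose TaskTrackers must be able to reach the source NameNode's HTTP port):

```shell
# Pull /data/logs from the source cluster over HFTP (read-only HTTP)
# into the destination cluster's HDFS. Note the hftp:// URL uses the
# NameNode's HTTP port (dfs.http.address, default 50070), not the RPC port.
hadoop distcp hftp://srcnn:50070/data/logs hdfs://dstnn:8020/data/logs
```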
… That's why I am posting the problem here.
Any help is greatly appreciated.
Thanks,
Tarandeep
---
Vladimir Klimontovich,
skype: klimontovich
GoogleTalk/Jabber: klimontov...@gmail.com
Cell phone: +7926 890 2349
Sent from the Hadoop core-user mailing list archive at Nabble.com.
pointers?
Regards,
Raakhi
What you're asking for will break the semantics of reduce(). Reduce can only proceed after receiving all the map-outputs.
--
Harish Mallipeddi
http://blog.poundbang.in
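The constraint Harish describes can be illustrated with a plain-JDK simulation of the shuffle and reduce phases (class and method names here are illustrative, not the Hadoop API): the framework groups every value for a key before any reducer sees it, so no reduce call can begin until all map outputs have arrived.

```java
import java.util.*;

// Plain-Java sketch (no Hadoop dependency) of why reduce() must wait
// for all map outputs: the shuffle groups EVERY (key, value) pair by
// key, and only then does reduce see a key's complete value list.
public class ReduceSemanticsDemo {

    static Map<String, Integer> wordCount(List<Map.Entry<String, Integer>> mapOutputs) {
        // Shuffle/sort phase: group all values by key.
        Map<String, List<Integer>> grouped = new TreeMap<>();
        for (Map.Entry<String, Integer> e : mapOutputs) {
            grouped.computeIfAbsent(e.getKey(), k -> new ArrayList<>()).add(e.getValue());
        }
        // Reduce phase: each call receives the full value list for one key.
        Map<String, Integer> result = new TreeMap<>();
        for (Map.Entry<String, List<Integer>> e : grouped.entrySet()) {
            int sum = 0;
            for (int v : e.getValue()) sum += v;
            result.put(e.getKey(), sum);
        }
        return result;
    }

    public static void main(String[] args) {
        // Simulated (word, 1) outputs from several mappers.
        List<Map.Entry<String, Integer>> mapOutputs = Arrays.asList(
                Map.entry("hadoop", 1), Map.entry("reduce", 1),
                Map.entry("hadoop", 1), Map.entry("shuffle", 1),
                Map.entry("hadoop", 1));
        System.out.println(wordCount(mapOutputs));
        // prints {hadoop=3, reduce=1, shuffle=1}
        // Had reduce started after only part of the map outputs,
        // "hadoop" could have been emitted with a partial count.
    }
}
```

Emitting a key's result before the last mapper finishes could produce a partial count, which is exactly the semantic breakage referred to above.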
-replicated
blocks, or does this happen automatically at some point?
Thanks,
Andy