@Arun
Here is a Java program to execute the fsck command from a Java program.
If there is any problem, please let me know.
//code
import java.io.*;
public class JavaRunCommand {
    public static void main(String[] args) {
        String s = null;
        try {
            // run fsck (assuming the hadoop binary is on the PATH) and print its output
            Process p = Runtime.getRuntime().exec("hadoop fsck / -files -blocks -locations");
            BufferedReader br = new BufferedReader(new InputStreamReader(p.getInputStream()));
            while ((s = br.readLine()) != null)
                System.out.println(s);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
@karthikeyan: Thanks again, but I was looking to find that information out
by writing code to do so, rather than by using a command at the command-line
prompt. Any idea?
On Sat, Mar 31, 2012 at 10:40 AM, Karthikeyan V.B wrote:
> @bharat : hadoop has a *job tracker* which *resolves the dependencies*
> and *splits the job into blocks* and *assigns to datanodes*
@bharat : hadoop has a *job tracker* which *resolves the dependencies*
and *splits the job into blocks* and *assigns to datanodes*
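To picture the split-and-assign step described above, here is a toy Java sketch. This is not Hadoop's actual scheduler; the class, split sizes, and node names are all invented for illustration. It chops an input into fixed-size splits and deals them round-robin to a list of datanodes:

```java
import java.util.*;

// Toy illustration of "splits the job into blocks and assigns to datanodes".
// NOT Hadoop's real implementation; names and sizes are made up.
public class SplitAssignDemo {
    public static Map<String, List<String>> assignSplits(
            int inputSizeMb, int splitSizeMb, List<String> nodes) {
        Map<String, List<String>> assignment = new LinkedHashMap<>();
        for (String node : nodes) assignment.put(node, new ArrayList<>());
        int splitCount = (inputSizeMb + splitSizeMb - 1) / splitSizeMb; // ceiling division
        for (int i = 0; i < splitCount; i++) {
            String node = nodes.get(i % nodes.size());                  // round-robin
            assignment.get(node).add("split-" + i);
        }
        return assignment;
    }

    public static void main(String[] args) {
        // 300 MB of input with 64 MB splits across three datanodes -> 5 splits
        Map<String, List<String>> a =
                assignSplits(300, 64, Arrays.asList("dn1", "dn2", "dn3"));
        System.out.println(a);
        // prints {dn1=[split-0, split-3], dn2=[split-1, split-4], dn3=[split-2]}
    }
}
```

The real placement logic also considers data locality and replication, which this sketch ignores.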
--
You received this message because you are subscribed to the Google Groups
"Algorithm Geeks" group.
To post to this group, send email to algogeeks@googlegroups.com
Hi,
use the following command in a Linux environment with a stable Hadoop install:
hadoop fsck / -files -blocks -locations
This command displays all the block-to-datanode mappings.
@karthikeyan: Thanks for that info. So in the sample wordcount program
using Hadoop Pipes in C++, if I want to see what data each node has got,
should I query the namenode? Is the namenode a class or something that
contains this information, or which variable should I check out?
Thanks
On Sat, Mar 31, 2012 at 2:23 AM,
But how can it split the data if there are dependencies in the job? Unless
we write a parallel program, does Hadoop do anything faster than a usual
processor?
On Sat, Mar 31, 2012 at 10:32 AM, Karthikeyan V.B wrote:
> Hi,
>
> The JobTracker splits the job into several tasks and submits them to the
> different DataNodes
Hi,
The JobTracker splits the job into several tasks and submits them to the
different DataNodes (i.e. the worker nodes); the input split size typically
ranges from 64MB to 128MB. The NameNode assigns each data block to a
datanode.
The NameNode actually has a table that stores the mapping between each
block and the datanodes it is stored on.
it is possibl
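The mapping table mentioned above can be pictured as a map from block ID to the set of datanodes holding its replicas. A minimal Java sketch, purely illustrative (invented class and names, not the real NameNode code, which keeps this state internally):

```java
import java.util.*;

// Toy model of the NameNode's block -> datanode mapping table.
// Illustrative only; not Hadoop's actual data structure.
public class BlockMapDemo {
    private final Map<String, Set<String>> blockToNodes = new HashMap<>();

    // record that a replica of blockId lives on the given node
    public void addReplica(String blockId, String node) {
        blockToNodes.computeIfAbsent(blockId, k -> new TreeSet<>()).add(node);
    }

    // answers "which node in the cluster got which data" for one block
    public Set<String> locate(String blockId) {
        return blockToNodes.getOrDefault(blockId, Collections.emptySet());
    }

    public static void main(String[] args) {
        BlockMapDemo nn = new BlockMapDemo();
        nn.addReplica("blk_001", "dn1");
        nn.addReplica("blk_001", "dn2"); // replication factor 2
        System.out.println(nn.locate("blk_001")); // prints [dn1, dn2]
    }
}
```

The fsck command shown earlier in the thread is what dumps the real version of this table from a live cluster.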
Hi all,
Has anyone worked on Hadoop before? I ran the wordcount program with Hadoop,
but I am unable to understand how to find out which node in the cluster got
which data. Any experts out here who can suggest?
Arun