Re: Versions of Jetty & Log4j in CDHu3

2012-04-28 Thread CHAIDY
Hi, Nikhil!


FYI: jetty-6.1.26, log4j-1.2.15
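
To double-check on your own install, the bundled jars carry their versions
in the file names; a quick way to confirm, assuming the default CDH3 package
layout under /usr/lib/hadoop (adjust the path for a tarball install):

$ # List the Jetty and log4j jars shipped with the Hadoop package
$ ls /usr/lib/hadoop/lib | grep -iE 'jetty|log4j'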



At 2012-04-29 13:03:44, Nikhil wrote:
>Hi,
>
>I was wondering about the release versions of both the Jetty and log4j
>components released as part of the CDHu3 release package.
>Can someone please let me know?
>
>Thanks.


Versions of Jetty & Log4j in CDHu3

2012-04-28 Thread Nikhil
Hi,

I was wondering about the release versions of both the Jetty and log4j
components released as part of the CDHu3 release package.
Can someone please let me know?

Thanks.


Re: cygwin single node setup

2012-04-28 Thread Onder SEZGIN
Hi,

I tried them all.
Finally I could get the datanode up and running.

Thanks Kasi.


But this time, I am getting the following error.

$ ./bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
12/04/29 03:06:19 WARN util.NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
12/04/29 03:06:19 WARN snappy.LoadSnappy: Snappy native library not loaded
12/04/29 03:06:19 INFO mapred.FileInputFormat: Total input paths to process: 17
12/04/29 03:06:19 INFO mapred.JobClient: Cleaning up the staging area
hdfs://127.0.0.1:9000/tmp/mapred/staging/EXT0125622/.staging/job_201204290300_0001
12/04/29 03:06:19 ERROR security.UserGroupInformation: PriviledgedActionException
as:EXT0125622 cause:java.io.IOException: Not a file:
hdfs://127.0.0.1:9000/user/EXT0125622/input/conf
java.io.IOException: Not a file: hdfs://127.0.0.1:9000/user/EXT0125622/input/conf
        at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:215)
        at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:989)
        at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:981)
        at org.apache.hadoop.mapred.JobClient.access$600(JobClient.java:174)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:897)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:850)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:850)
        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:824)
        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1261)
        at org.apache.hadoop.examples.Grep.run(Grep.java:69)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.hadoop.examples.Grep.main(Grep.java:93)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
        at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
        at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:156)

And interestingly, once I do the following, I can see a reasonable output.

$ ./bin/hadoop fs -lsr /
drwxr-xr-x   - EXT0125622 supergroup          0 2012-04-29 02:00 /tmp
drwxr-xr-x   - EXT0125622 supergroup          0 2012-04-29 03:01 /tmp/mapred
drwxr-xr-x   - EXT0125622 supergroup          0 2012-04-29 02:14 /tmp/mapred/staging
drwxr-xr-x   - EXT0125622 supergroup          0 2012-04-29 02:14 /tmp/mapred/staging/EXT0125622
drwx------   - EXT0125622 supergroup          0 2012-04-29 03:06 /tmp/mapred/staging/EXT0125622/.staging
drwx------   - EXT0125622 supergroup          0 2012-04-29 02:14 /tmp/mapred/staging/EXT0125622/.staging/job_201204290211_0001
-rw-r--r--  10 EXT0125622 supergroup     142465 2012-04-29 02:14 /tmp/mapred/staging/EXT0125622/.staging/job_201204290211_0001/job.jar
-rw-r--r--  10 EXT0125622 supergroup       1825 2012-04-29 02:14 /tmp/mapred/staging/EXT0125622/.staging/job_201204290211_0001/job.split
-rw-r--r--   1 EXT0125622 supergroup        657 2012-04-29 02:14 /tmp/mapred/staging/EXT0125622/.staging/job_201204290211_0001/job.splitmetainfo
-rw-r--r--   1 EXT0125622 supergroup      20586 2012-04-29 02:14 /tmp/mapred/staging/EXT0125622/.staging/job_201204290211_0001/job.xml
drwx------   - EXT0125622 supergroup          0 2012-04-29 03:06 /tmp/mapred/staging/EXT0125622/.staging/job_201204290300_0001
-rw-r--r--  10 EXT0125622 supergroup     142465 2012-04-29 03:06 /tmp/mapred/staging/EXT0125622/.staging/job_201204290300_0001/job.jar
drwx------   - EXT0125622 supergroup          0 2012-04-29 03:01 /tmp/mapred/system
-rw-------   1 EXT0125622 supergroup          4 2012-04-29 03:01 /tmp/mapred/system/jobtracker.info
drwxr-xr-x   - EXT0125622 supergroup          0 2012-04-29 02:13 /user
drwxr-xr-x   - EXT0125622 supergroup          0 2012-04-29 03:05 /user/EXT0125622
drwxr-xr-x   - EXT0125622 supergroup          0 2012-04-29 03:05 /user/EXT0125622/conf
drwxr-xr-x   - EXT0125622 supergroup          0 2012-04-29 03:05 /user/EXT0125622/in
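
The "Not a file" exception above is the old mapred FileInputFormat refusing
a sub-directory inside the job's input path: it expands an input directory
only one level deep, so a nested directory such as input/conf (easily created
by running -put a second time against an existing input directory) makes
getSplits() fail. A minimal fix, sketched on the assumption that the flat
conf files are all you want as input:

$ # Re-stage the input with plain files only, then rerun the example
$ ./bin/hadoop fs -rmr input
$ ./bin/hadoop fs -mkdir input
$ ./bin/hadoop fs -put conf/*.xml input
$ ./bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'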

Can’t stop hadoop daemons

2012-04-28 Thread Barry, Sean F
hduser@master:~> /usr/java/jdk1.7.0/bin/jps

20907 TaskTracker

20629 SecondaryNameNode

25863 Jps

20777 JobTracker

20383 NameNode

20507 DataNode

hduser@master:~> stop-

stop-all.sh   stop-balancer.sh  stop-dfs.sh   stop-mapred.sh

hduser@master:~> stop-all.sh

no jobtracker to stop

master: no tasktracker to stop

slave: no tasktracker to stop

no namenode to stop

master: no datanode to stop

slave: no datanode to stop

master: no secondarynamenode to stop

hduser@master:~>

As you can see, jps shows that the daemons are running, but I can't stop
them with the stop-all.sh command.

Does anyone have an idea why this is happening?

-SB
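
The stop-*.sh scripts find daemons through pid files that hadoop-daemon.sh
writes under HADOOP_PID_DIR, which defaults to /tmp; if those files were
cleaned out, or the daemons were started by another user or with a different
pid dir, stop-all.sh reports "no ... to stop" even though the JVMs are still
alive. A rough recovery sketch, assuming the daemons run as the current user
(/var/hadoop/pids below is just an example location):

$ # Kill the stranded daemons using the pids that jps reports
$ /usr/java/jdk1.7.0/bin/jps | grep -vw Jps | awk '{print $1}' | xargs kill
$ # Then point the pid files somewhere persistent before restarting,
$ # e.g. in conf/hadoop-env.sh:
$ #   export HADOOP_PID_DIR=/var/hadoop/pids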


Re: cygwin single node setup

2012-04-28 Thread kasi subrahmanyam
Hi Onder,
You could try formatting the namenode and restarting the daemons;
that solved my problem most of the time.
Maybe the running daemons were not able to pick up all of the datanode
configurations.
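
In case it helps, this is the sequence I mean; note that formatting wipes
the HDFS metadata, so it is only safe on a throwaway single-node setup:

$ bin/stop-all.sh
$ # WARNING: erases the HDFS namespace; fine only for a fresh test cluster
$ bin/hadoop namenode -format
$ bin/start-all.sh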

On Sat, Apr 28, 2012 at 4:23 PM, Onder SEZGIN  wrote:

> Hi,
>
> I am pretty much a newbie, and I am following the quick start guide for
> single node setup on Windows using Cygwin.
>
> In this step,
>
> $ bin/hadoop fs -put conf input
>
> I am getting the following errors.
>
> I have no file
> under /user/EXT0125622/input/conf/capacity-scheduler.xml. That might be the
> reason for the errors I get, but why does hadoop look for such a directory
> when I have not configured anything like that? So supposedly, hadoop is
> making up and looking for such a file and directory?
>
> Any idea and help is welcome.
>
> Cheers
> Onder
>
> 12/04/27 13:44:37 WARN hdfs.DFSClient: DataStreamer Exception:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> /user/EXT0125622/input/conf/capacity-scheduler.xml could only be replicated
> to 0 nodes, instead of 1
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
>         at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:601)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:1066)
>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
>         at $Proxy1.addBlock(Unknown Source)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:601)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>         at $Proxy1.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3507)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3370)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2700(DFSClient.java:2586)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2826)
>
> 12/04/27 13:44:37 WARN hdfs.DFSClient: Error Recovery for block null bad
> datanode[0] nodes == null
> 12/04/27 13:44:37 WARN hdfs.DFSClient: Could not get block locations.
> Source file "/user/EXT0125622/input/conf/capacity-scheduler.xml" -
> Aborting...
> put: java.io.IOException: File
> /user/EXT0125622/input/conf/capacity-scheduler.xml could only be replicated
> to 0 nodes, instead of 1
> 12/04/27 13:44:37 ERROR hdfs.DFSClient: Exception closing file
> /user/EXT0125622/input/conf/capacity-scheduler.xml :
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> /user/EXT0125622/input/conf/capacity-scheduler.xml could only be replicated
> to 0 nodes, instead of 1
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
>         at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:601)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
>
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> /user/EXT0125622/input/conf/capacity-scheduler.xml could only be replicated
> to

cygwin single node setup

2012-04-28 Thread Onder SEZGIN
Hi,

I am pretty much a newbie, and I am following the quick start guide for
single node setup on Windows using Cygwin.

In this step,

$ bin/hadoop fs -put conf input

I am getting the following errors.

I have no file
under /user/EXT0125622/input/conf/capacity-scheduler.xml. That might be the
reason for the errors I get, but why does hadoop look for such a directory
when I have not configured anything like that? So supposedly, hadoop is
making up and looking for such a file and directory?

Any idea and help is welcome.

Cheers
Onder
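
The "could only be replicated to 0 nodes, instead of 1" message below means
the namenode had no live datanode registered when the write was attempted;
the path itself is normal, since a relative destination like "input" resolves
to /user/<username>/input and -put copies each file from the local conf
directory into it. A quick way to check the datanode side, sketched for this
single-node setup (the log path assumes the default logs/ directory):

$ # Ask the namenode how many datanodes it currently sees
$ bin/hadoop dfsadmin -report
$ # If it reports 0 datanodes, check the datanode log for the cause,
$ # e.g. the classic "Incompatible namespaceIDs" after a namenode reformat
$ tail -n 50 logs/hadoop-*-datanode-*.log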

12/04/27 13:44:37 WARN hdfs.DFSClient: DataStreamer Exception:
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
/user/EXT0125622/input/conf/capacity-scheduler.xml could only be replicated
to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
        at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

        at org.apache.hadoop.ipc.Client.call(Client.java:1066)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
        at $Proxy1.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at $Proxy1.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3507)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3370)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2700(DFSClient.java:2586)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2826)

12/04/27 13:44:37 WARN hdfs.DFSClient: Error Recovery for block null bad
datanode[0] nodes == null
12/04/27 13:44:37 WARN hdfs.DFSClient: Could not get block locations.
Source file "/user/EXT0125622/input/conf/capacity-scheduler.xml" -
Aborting...
put: java.io.IOException: File
/user/EXT0125622/input/conf/capacity-scheduler.xml could only be replicated
to 0 nodes, instead of 1
12/04/27 13:44:37 ERROR hdfs.DFSClient: Exception closing file
/user/EXT0125622/input/conf/capacity-scheduler.xml :
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
/user/EXT0125622/input/conf/capacity-scheduler.xml could only be replicated
to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
        at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
/user/EXT0125622/input/conf/capacity-scheduler.xml could only be replicated
to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
        at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:6