Error putting files in the HDFS

2013-10-08 Thread Basu,Indrashish


Hello,

My name is Indrashish Basu and I am a Masters student in the Department 
of Electrical and Computer Engineering.


Currently I am doing my research project on a Hadoop implementation on an 
ARM processor, and I am facing an issue while trying to run sample Hadoop 
source code on it. Every time I try to put files into HDFS, I get the 
error below.



13/10/07 11:31:29 WARN hdfs.DFSClient: DataStreamer Exception: 
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File 
/user/root/bin/cpu-kmeans2D could only be replicated to 0 nodes, instead 
of 1
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1267)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

at org.apache.hadoop.ipc.Client.call(Client.java:739)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
at com.sun.proxy.$Proxy0.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)

at com.sun.proxy.$Proxy0.addBlock(Unknown Source)
at 
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2904)
at 
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2786)
at 
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2076)
at 
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2262)


13/10/07 11:31:29 WARN hdfs.DFSClient: Error Recovery for block null 
bad datanode[0] nodes == null
13/10/07 11:31:29 WARN hdfs.DFSClient: Could not get block locations. 
Source file "/user/root/bin/cpu-kmeans2D" - Aborting...
put: java.io.IOException: File /user/root/bin/cpu-kmeans2D could only 
be replicated to 0 nodes, instead of 1



I tried recreating the namenode and datanode by deleting all the old 
logs on the master and the slave nodes, as well as the folders under 
/app/hadoop/. After that I formatted the namenode and started the 
processes again (bin/start-all.sh), but still no luck.


I generated the admin report (pasted below) after the restart; it seems 
the datanode is not getting started.



root@tegra-ubuntu:~/hadoop-gpu-master/hadoop-gpu-0.20.1# bin/hadoop 
dfsadmin -report

Configured Capacity: 0 (0 KB)
Present Capacity: 0 (0 KB)
DFS Remaining: 0 (0 KB)
DFS Used: 0 (0 KB)
DFS Used%: ?%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-
Datanodes available: 0 (0 total, 0 dead)


I have tried the following methods to debug the process:

1) I logged in to the HADOOP home directory and removed all the old 
logs (rm -rf logs/*)


2) Next I deleted the contents of the directory on all my slave and 
master nodes (rm -rf /app/hadoop/*)


3) I formatted the namenode (bin/hadoop namenode -format)

4) I started all the processes: first the namenode and datanode, and then 
map-reduce. I typed jps on the terminal to ensure that all the 
processes (NameNode, DataNode, JobTracker, TaskTracker) are up and 
running.


5) After doing this, I recreated the directories in the DFS.

However still no luck with the process.
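The five debug steps above can be sketched as a single shell session. This is only a sketch: it assumes the hadoop-gpu-0.20.1 tarball layout and the /app/hadoop storage directory mentioned in this thread, and it adds a stop-all step that the list does not state explicitly.

```shell
# Reset procedure sketched from steps 1-5 above (paths are this
# thread's defaults; adjust for your installation).
cd ~/hadoop-gpu-master/hadoop-gpu-0.20.1
bin/stop-all.sh               # stop all daemons before wiping state
rm -rf logs/*                 # 1) remove all old logs
rm -rf /app/hadoop/*          # 2) wipe dfs storage on master and slaves
bin/hadoop namenode -format   # 3) reformat the namenode
bin/start-all.sh              # 4) start HDFS and MapReduce daemons
jps                           # expect NameNode, DataNode, JobTracker, TaskTracker
```

Note that step 2 must run on every node that hosts a datanode directory, not just the master.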


Can you kindly assist with this? I am new to Hadoop and have no idea how 
to proceed.





Regards,

--
Indrashish Basu
Graduate Student
Department of Electrical and Computer Engineering
University of Florida


Re: Error putting files in the HDFS

2013-10-08 Thread Basu,Indrashish

Hi Jitendra,

This is what I am getting in the datanode logs:

2013-10-07 11:27:41,960 INFO 
org.apache.hadoop.hdfs.server.common.Storage: Storage directory 
/app/hadoop/tmp/dfs/data is not formatted.
2013-10-07 11:27:41,961 INFO 
org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
2013-10-07 11:27:42,094 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: Registered 
FSDatasetStatusMBean
2013-10-07 11:27:42,099 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at 
50010
2013-10-07 11:27:42,107 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 
1048576 bytes/s
2013-10-07 11:27:42,369 INFO org.mortbay.log: Logging to 
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via 
org.mortbay.log.Slf4jLog
2013-10-07 11:27:42,632 INFO org.apache.hadoop.http.HttpServer: Port 
returned by webServer.getConnectors()[0].getLocalPort() before open() is 
-1. Opening the listener on 50075
2013-10-07 11:27:42,633 INFO org.apache.hadoop.http.HttpServer: 
listener.getLocalPort() returned 50075 
webServer.getConnectors()[0].getLocalPort() returned 50075
2013-10-07 11:27:42,634 INFO org.apache.hadoop.http.HttpServer: Jetty 
bound to port 50075

2013-10-07 11:27:42,634 INFO org.mortbay.log: jetty-6.1.14
2013-10-07 11:31:29,821 INFO org.mortbay.log: Started 
SelectChannelConnector@0.0.0.0:50075
2013-10-07 11:31:29,843 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: 
Initializing JVM Metrics with processName=DataNode, sessionId=null
2013-10-07 11:31:29,912 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: 
Initializing RPC Metrics with hostName=DataNode, port=50020
2013-10-07 11:31:29,922 INFO org.apache.hadoop.ipc.Server: IPC Server 
Responder: starting
2013-10-07 11:31:29,922 INFO org.apache.hadoop.ipc.Server: IPC Server 
listener on 50020: starting
2013-10-07 11:31:29,933 INFO org.apache.hadoop.ipc.Server: IPC Server 
handler 0 on 50020: starting
2013-10-07 11:31:29,933 INFO org.apache.hadoop.ipc.Server: IPC Server 
handler 1 on 50020: starting
2013-10-07 11:31:29,933 INFO org.apache.hadoop.ipc.Server: IPC Server 
handler 2 on 50020: starting
2013-10-07 11:31:29,934 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: dnRegistration = 
DatanodeRegistration(tegra-ubuntu:50010, storageID=, infoPort=50075, 
ipcPort=50020)
2013-10-07 11:31:29,971 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: New storage id 
DS-1027334635-127.0.1.1-50010-1381170689938 is assigned to data-node 
10.227.56.195:50010
2013-10-07 11:31:29,973 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: 
DatanodeRegistration(10.227.56.195:50010, 
storageID=DS-1027334635-127.0.1.1-50010-1381170689938, infoPort=50075, 
ipcPort=50020)In DataNode.run, data = FSDataset

{dirpath='/app/hadoop/tmp/dfs/data/current'}
2013-10-07 11:31:29,974 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: using 
BLOCKREPORT_INTERVAL of 360msec Initial delay: 0msec
2013-10-07 11:31:30,032 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks 
got processed in 19 msecs
2013-10-07 11:31:30,035 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: Starting Periodic block 
scanner.
2013-10-07 11:41:42,222 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks 
got processed in 20 msecs
2013-10-07 12:41:43,482 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks 
got processed in 22 msecs
2013-10-07 13:41:44,755 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks 
got processed in 13 msecs



I restarted the datanode and made sure that it is up and running 
(verified with the jps command).


Regards,
Indrashish

On Tue, 8 Oct 2013 23:25:25 +0530, Jitendra Yadav wrote:
As per your dfs report, the available DataNode count is ZERO in your 
cluster.


Please check your data node logs.

Regards
Jitendra
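Jitendra's suggestion, spelled out as commands. These assume the hadoop-gpu-0.20.1 tarball layout used elsewhere in this thread, where each daemon writes a log file under logs/ named hadoop-<user>-<daemon>-<host>.log:

```shell
# Inspect the datanode log for startup errors (paths assumed from
# this thread's setup).
cd ~/hadoop-gpu-master/hadoop-gpu-0.20.1
ls logs/                                      # one .log file per daemon
tail -n 100 logs/hadoop-*-datanode-*.log      # most recent datanode activity
grep -i "error\|exception" logs/hadoop-*-datanode-*.log
```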

On 10/8/13, Basu,Indrashish wrote:

Re: Error putting files in the HDFS

2013-10-08 Thread Basu,Indrashish



Hi,

Just to update on this: I have deleted all the old logs and files from 
the /tmp and /app/hadoop directories and restarted all the nodes. I now 
have 1 datanode available, as per the information below:


Configured Capacity: 3665985536 (3.41 GB)
Present Capacity: 24576 (24 KB)
DFS Remaining: 0 (0 KB)
DFS Used: 24576 (24 KB)
DFS Used%: 100%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-
Datanodes available: 1 (1 total, 0 dead)

Name: 10.227.56.195:50010
Decommission Status : Normal
Configured Capacity: 3665985536 (3.41 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 3665960960 (3.41 GB)
DFS Remaining: 0(0 KB)
DFS Used%: 0%
DFS Remaining%: 0%
Last contact: Tue Oct 08 11:12:19 PDT 2013


However, when I tried putting the files back into HDFS, I got the same 
error as stated earlier. Do I need to clear some space for HDFS?
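The report above shows "Non DFS Used: 3665960960 (3.41 GB)" consuming essentially the whole partition, which is why "DFS Remaining" is 0 even though DFS itself holds only 24 KB. A quick way to see what is occupying the local disk (paths are this thread's defaults, not a general rule):

```shell
# Check free space on the partition backing HDFS, and the size of the
# dfs storage and tmp directories (assumed locations from this thread).
df -h /app/hadoop
du -sh /app/hadoop/tmp /tmp
```

Freeing non-HDFS files on that partition, or pointing dfs.data.dir at a larger disk, should raise "DFS Remaining" above zero.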


Regards,
Indrashish


On Tue, 08 Oct 2013 14:01:19 -0400, Basu,Indrashish wrote:


Re: Error putting files in the HDFS

2013-10-08 Thread Basu,Indrashish
  

Hi Tariq,

Thanks a lot for your help.

Can you please let me know the path where I can check the old files in 
HDFS and remove them accordingly? I am sorry to bother you with these 
questions; I am absolutely new to Hadoop.

Thanks again for your time and patience.


Regards, 

Indrashish 

On Tue, 8 Oct 2013 23:51:30 +0530, Mohammad Tariq wrote:

> You don't have any more space left in your HDFS. Delete some old data 
> or add additional storage.
>
> Warm Regards,
> Tariq
> cloudfront.blogspot.com
> On Tue, Oct 8, 2013 at 11:47 PM, Basu,Indrashish wrote:
Re: Error putting files in the HDFS

2013-10-08 Thread Basu,Indrashish
  

Hi Tariq,

Thanks for your help again.

I tried deleting the old HDFS files and directories as you suggested, and 
then reformatted and restarted all the nodes. However, after running the 
dfsadmin report, I again see that no datanode is available:

root@tegra-ubuntu:~/hadoop-gpu-master/hadoop-gpu-0.20.1# bin/hadoop 
dfsadmin -report
Configured Capacity: 0 (0 KB)
Present Capacity: 0 (0 KB)
DFS Remaining: 0 (0 KB)
DFS Used: 0 (0 KB)
DFS Used%: ?%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-
Datanodes available: 0 (0 total, 0 dead)

However, when I type jps, it shows that the datanode is up and running. 
Below are the datanode logs generated for the given time stamp. Can you 
kindly assist with this?


2013-10-08 13:35:55,680 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /app/hadoop/tmp/dfs/data is not formatted.
2013-10-08 13:35:55,680 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
2013-10-08 13:35:55,814 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Registered FSDatasetStatusMBean
2013-10-08 13:35:55,820 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at 50010
2013-10-08 13:35:55,828 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
2013-10-08 13:35:56,153 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2013-10-08 13:35:56,497 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50075
2013-10-08 13:35:56,498 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50075 webServer.getConnectors()[0].getLocalPort() returned 50075
2013-10-08 13:35:56,513 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50075
2013-10-08 13:35:56,514 INFO org.mortbay.log: jetty-6.1.14
2013-10-08 13:40:45,127 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075
2013-10-08 13:40:45,139 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=DataNode, sessionId=null
2013-10-08 13:40:45,189 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=DataNode, port=50020
2013-10-08 13:40:45,198 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2013-10-08 13:40:45,201 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50020: starting
2013-10-08 13:40:45,201 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2013-10-08 13:40:45,202 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnRegistration = DatanodeRegistration(tegra-ubuntu:50010, storageID=, infoPort=50075, ipcPort=50020)
2013-10-08 13:40:45,206 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 50020: starting
2013-10-08 13:40:45,207 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50020: starting
2013-10-08 13:40:45,234 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: New storage id DS-863644283-127.0.1.1-50010-1381264845208 is assigned to data-node 10.227.56.195:50010
2013-10-08 13:40:45,235 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.227.56.195:50010, storageID=DS-863644283-127.0.1.1-50010-1381264845208, infoPort=50075, ipcPort=50020)In DataNode.run, data = FSDataset{dirpath='/app/hadoop/tmp/dfs/data/current'}
2013-10-08 13:40:45,235 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: using BLOCKREPORT_INTERVAL of 360msec Initial delay: 0msec
2013-10-08 13:40:45,275 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks got processed in 14 msecs
2013-10-08 13:40:45,277 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting Periodic block scanner.
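Two hedged observations on the log above. First, the datanode only completed registration at 13:40:45, several minutes after the 13:35:55 startup, so a dfsadmin report taken before that moment would show zero nodes even though jps lists the process. Second, a standard check when a datanode runs but never appears in the report is to compare the namespaceID recorded on the namenode and datanode sides (paths below are this thread's /app/hadoop defaults; dfs.name.dir and dfs.data.dir may differ in your hdfs-site.xml):

```shell
# The two namespaceID values must match for the datanode to be accepted.
grep namespaceID /app/hadoop/tmp/dfs/name/current/VERSION
grep namespaceID /app/hadoop/tmp/dfs/data/current/VERSION
```

If they differ, wiping the data directory (as was done above) and restarting the datanode forces it to re-register with a fresh ID.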

Regards, 

Indrashish 

On Wed, 9 Oct 2013 01:38:52 +0530, Mohammad Tariq wrote:

> You are welcome Basu. Not a problem. You can use bin/hadoop fs -lsr / 
> to list down all the HDFS files and directories. See which files are 
> no longer required and delete them using 
> bin/hadoop fs -rm /path/to/the/file
>
> Warm Regards,
> Tariq
> cloudfront.blogspot.com
> On Tue, Oct 8, 2013 at 11:59 PM, Basu,Indrashish wrote:

Help Regarding Hadoop

2013-10-18 Thread Basu,Indrashish


Hi there,

I am trying to run Hadoop source code on an ARM processor, but I am 
getting the below error. Can anyone suggest why this is happening?


rmr: cannot remove output: No such file or directory.
13/10/18 11:46:21 WARN mapred.JobClient: No job jar file set.  User 
classes may not be found. See JobConf(Class) or JobConf#setJar(String).
13/10/18 11:46:21 INFO mapred.FileInputFormat: Total input paths to 
process : 1
13/10/18 11:46:23 INFO mapred.JobClient: Running job: 
job_201310181141_0001

13/10/18 11:46:24 INFO mapred.JobClient:  map 0% reduce 0%
13/10/18 11:56:47 INFO mapred.JobClient: Task Id : 
attempt_201310181141_0001_m_00_0, Status : FAILED
Task attempt_201310181141_0001_m_00_0 failed to report status for 
600 seconds. Killing!
attempt_201310181141_0001_m_00_0: cmd: [bash, -c, exec 
'/app/hadoop/tmp/mapred/local/taskTracker/archive/10.227.56.195bin/cpu-kmeans2D/cpu-kmeans2D' 
'0'  < /dev/null  1>> 
/root/hadoop-gpu-master/hadoop-gpu-0.20.1/bin/../logs/userlogs/attempt_201310181141_0001_m_00_0/stdout 
2>> 
/root/hadoop-gpu-master/hadoop-gpu-0.20.1/bin/../logs/userlogs/attempt_201310181141_0001_m_00_0/stderr]
attempt_201310181141_0001_m_00_0: bash: 
/app/hadoop/tmp/mapred/local/taskTracker/archive/10.227.56.195bin/cpu-kmeans2D/cpu-kmeans2D: 
Success
13/10/18 12:07:02 INFO mapred.JobClient: Task Id : 
attempt_201310181141_0001_m_00_1, Status : FAILED
Task attempt_201310181141_0001_m_00_1 failed to report status for 
600 seconds. Killing!
attempt_201310181141_0001_m_00_1: cmd: [bash, -c, exec 
'/app/hadoop/tmp/mapred/local/taskTracker/archive/10.227.56.195bin/cpu-kmeans2D/cpu-kmeans2D' 
'0'  < /dev/null  1>> 
/root/hadoop-gpu-master/hadoop-gpu-0.20.1/bin/../logs/userlogs/attempt_201310181141_0001_m_00_1/stdout 
2>> 
/root/hadoop-gpu-master/hadoop-gpu-0.20.1/bin/../logs/userlogs/attempt_201310181141_0001_m_00_1/stderr]
attempt_201310181141_0001_m_00_1: bash: 
/app/hadoop/tmp/mapred/local/taskTracker/archive/10.227.56.195bin/cpu-kmeans2D/cpu-kmeans2D: 
Success
13/10/18 12:17:22 INFO mapred.JobClient: Task Id : 
attempt_201310181141_0001_m_00_2, Status : FAILED
Task attempt_201310181141_0001_m_00_2 failed to report status for 
601 seconds. Killing!
attempt_201310181141_0001_m_00_2: cmd: [bash, -c, exec 
'/app/hadoop/tmp/mapred/local/taskTracker/archive/10.227.56.195bin/cpu-kmeans2D/cpu-kmeans2D' 
'0'  < /dev/null  1>> 
/root/hadoop-gpu-master/hadoop-gpu-0.20.1/bin/../logs/userlogs/attempt_201310181141_0001_m_00_2/stdout 
2>> 
/root/hadoop-gpu-master/hadoop-gpu-0.20.1/bin/../logs/userlogs/attempt_201310181141_0001_m_00_2/stderr]
attempt_201310181141_0001_m_00_2: bash: 
/app/hadoop/tmp/mapred/local/taskTracker/archive/10.227.56.195bin/cpu-kmeans2D/cpu-kmeans2D: 
Success
13/10/18 12:27:44 INFO mapred.JobClient: Job complete: 
job_201310181141_0001

13/10/18 12:27:44 INFO mapred.JobClient: Counters: 2
13/10/18 12:27:44 INFO mapred.JobClient:   Job Counters
13/10/18 12:27:44 INFO mapred.JobClient: Launched map tasks=4
13/10/18 12:27:44 INFO mapred.JobClient: Failed map tasks=1
Exception in thread "main" java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1252)
at 
org.apache.hadoop.mapred.pipes.Submitter.runJob(Submitter.java:265)

at org.apache.hadoop.mapred.pipes.Submitter.run(Submitter.java:522)
at 
org.apache.hadoop.mapred.pipes.Submitter.main(Submitter.java:537)
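Each of the map attempts above died with "failed to report status for 600 seconds", which is the default mapred.task.timeout (600,000 ms) in Hadoop 0.20, rather than an outright crash. If the cpu-kmeans2D binary legitimately runs for a long time without reporting progress, the timeout can be raised per job. The sketch below is an assumption about the invocation: the -program/-input/-output values are placeholders, not the exact command used in this thread:

```shell
# Raise the task timeout to 30 minutes for one pipes job
# (placeholder paths; adjust to the real job's arguments).
bin/hadoop pipes -D mapred.task.timeout=1800000 \
    -program bin/cpu-kmeans2D \
    -input input -output output
```

The cleaner long-term fix is for the C++ task to report progress or emit status periodically so the TaskTracker does not consider it hung.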


Regards,
--
Indrashish Basu
Graduate Student
Department of Electrical and Computer Engineering
University of Florida


Error while running Hadoop Source Code

2013-10-31 Thread Basu,Indrashish


Hi,

I am trying to run a sample Hadoop GPU source code (a kmeans algorithm) 
on an ARM processor and am getting the below error. Can anyone please 
shed some light on this?


rmr: cannot remove output: No such file or directory.
13/10/31 13:43:12 WARN mapred.JobClient: No job jar file set.  User 
classes may not be found. See JobConf(Class) or JobConf#setJar(String).
13/10/31 13:43:12 INFO mapred.FileInputFormat: Total input paths to 
process : 1
13/10/31 13:43:13 INFO mapred.JobClient: Running job: 
job_201310311320_0001

13/10/31 13:43:14 INFO mapred.JobClient:  map 0% reduce 0%
13/10/31 13:43:39 INFO mapred.JobClient: Task Id : 
attempt_201310311320_0001_m_00_0, Status : FAILED

java.io.IOException: pipe child exception
at 
org.apache.hadoop.mapred.pipes.Application.abort(Application.java:191)
at 
org.apache.hadoop.mapred.pipes.PipesMapRunner.run(PipesMapRunner.java:103)

at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:363)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
at org.apache.hadoop.mapred.Child.main(Child.java:170)
Caused by: java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at 
java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:113)

at java.net.SocketOutputStream.write(SocketOutputStream.java:159)
at 
java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at 
java.io.BufferedOutputStream.write(BufferedOutputStream.java:126)

at java.io.DataOutputStream.write(DataOutputStream.java:107)
at 
org.apache.hadoop.mapred.pipes.BinaryProtocol.writeObject(BinaryProtocol.java:333)
at 
org.apache.hadoop.mapred.pipes.BinaryProtocol.mapItem(BinaryProtocol.java:286)
at 
org.apache.hadoop.mapred.pipes.PipesMapRunner.run(PipesMapRunner.java:92)

... 3 more

attempt_201310311320_0001_m_00_0: cmd: [bash, -c, exec 
'/app/hadoop/tmp/mapred/local/taskTracker/archive/10.227.56.195bin/cpu-kmeans2D/cpu-kmeans2D' 
'0'  < /dev/null  1>> 
/usr/local/hadoop/hadoop-gpu-0.20.1/bin/../logs/userlogs/attempt_201310311320_0001_m_00_0/stdout 
2>> /usr/local/hadoop/hadoop-gpu-0.20.1/bin/../logs/userlogs/
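A "pipe child exception ... Broken pipe" from the pipes framework generally means the C++ child process exited before (or while) reading its input, so the child's own output is the first place to look. The paths below follow the pattern shown in the cmd line above; the attempt ID is copied from this message:

```shell
# Inspect the failed task attempt's own output (paths from the log above).
LOGS=/usr/local/hadoop/hadoop-gpu-0.20.1/logs/userlogs
cat $LOGS/attempt_201310311320_0001_m_00_0/stderr   # child's error output
cat $LOGS/attempt_201310311320_0001_m_00_0/stdout
# Checking the binary itself can reveal architecture or linkage problems,
# which are common when cross-compiling for ARM.
file /app/hadoop/tmp/mapred/local/taskTracker/archive/10.227.56.195bin/cpu-kmeans2D/cpu-kmeans2D
```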


Regards,

--
Indrashish Basu
Graduate Student
Department of Electrical and Computer Engineering
University of Florida


Re: Error while running Hadoop Source Code

2013-11-04 Thread Basu,Indrashish


Hi All,

Any update on the post below?

I came across some old post regarding the same issue. It explains the 
solution as " The *nopipe* example needs more documentation.  It assumes 
that it is run with the InputFormat from 
src/test/org/apache/*hadoop*/mapred/*pipes*/ 
*WordCountInputFormat*.java, which has a very specific input split 
format. By running with a TextInputFormat, it will send binary bytes as 
the input split and won't work right. The *nopipe* example should 
probably be recoded *to* use libhdfs *too*, but that is more complicated 
*to* get running as a unit test. Also note that since the C++ example is 
using local file reads, it will only work on a cluster if you have nfs 
or something working across the cluster. "


I would appreciate some more light on the above explanation, if anyone 
could elaborate a bit on what exactly needs to be done. To mention, I am 
trying to run a sample KMeans algorithm on a GPU using Hadoop.
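One hedged reading of the quoted advice: the nopipe example only works with the WordCountInputFormat from the pipes test sources, so the job would have to be submitted with that input format instead of the default TextInputFormat. Something along these lines, where the class name comes from the quoted post and every other argument is a placeholder:

```shell
# Sketch only: submit the pipes job with the test InputFormat the
# quoted post says the nopipe example expects.
bin/hadoop pipes \
    -inputformat org.apache.hadoop.mapred.pipes.WordCountInputFormat \
    -program bin/cpu-kmeans2D \
    -input input -output output
```

The quoted post's other caveat still applies: since the C++ example reads local files, it only works across a cluster if the input paths are on NFS or some other shared filesystem.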


Thanks in advance.

Regards,
Indrashish.

On Thu, 31 Oct 2013 20:00:10 -0400, Basu,Indrashish wrote:


Re: Error while running Hadoop Source Code

2013-11-05 Thread Basu,Indrashish


Hi,

Can anyone kindly assist with this?

Regards,
Indrashish




Re: Error while running Hadoop Source Code

2013-11-06 Thread Basu,Indrashish
  

Hi Vinod, 

Thanks for your help regarding this. I checked the task
logs, this is what it is giving as output. 

2013-11-06 06:40:05,541 INFO org.apache.hadoop.mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_201311060636_0001_m_1100862588
2013-11-06 06:40:05,553 INFO org.apache.hadoop.mapred.JvmManager: JVM Runner jvm_201311060636_0001_m_1100862588 spawned.
2013-11-06 06:40:05,650 INFO org.apache.hadoop.mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_201311060636_0001_m_-1960039766
2013-11-06 06:40:05,651 INFO org.apache.hadoop.mapred.JvmManager: JVM Runner jvm_201311060636_0001_m_-1960039766 spawned.
2013-11-06 06:40:07,496 INFO org.apache.hadoop.mapred.TaskTracker: JVM with ID: jvm_201311060636_0001_m_1100862588 given task: attempt_201311060636_0001_m_01_3
2013-11-06 06:40:07,618 INFO org.apache.hadoop.mapred.TaskTracker: JVM with ID: jvm_201311060636_0001_m_-1960039766 given task: attempt_201311060636_0001_m_00_3
2013-11-06 06:40:08,013 INFO org.apache.hadoop.mapred.TaskTracker: running on gpu xx from status: false
2013-11-06 06:40:08,014 INFO org.apache.hadoop.mapred.TaskTracker: running on gpu xx from task: false
2013-11-06 06:40:08,015 INFO org.apache.hadoop.mapred.TaskTracker: running on gpu xx from status: false
2013-11-06 06:40:08,015 INFO org.apache.hadoop.mapred.TaskTracker: running on gpu xx from task: false
2013-11-06 06:40:09,361 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201311060636_0001_m_01_3 0.0%
2013-11-06 06:40:09,735 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201311060636_0001_m_00_3 0.0%
2013-11-06 06:40:10,018 INFO org.apache.hadoop.mapred.JvmManager: JVM : jvm_201311060636_0001_m_1100862588 exited. Number of tasks it ran: 0
2013-11-06 06:40:11,021 INFO org.apache.hadoop.mapred.JvmManager: JVM : jvm_201311060636_0001_m_-1960039766 exited. Number of tasks it ran: 0
2013-11-06 06:40:11,442 INFO org.apache.hadoop.mapred.TaskTracker: running on gpu xx from status: false
2013-11-06 06:40:11,442 INFO org.apache.hadoop.mapred.TaskTracker: running on gpu xx from task: false
2013-11-06 06:40:11,443 INFO org.apache.hadoop.mapred.TaskTracker: running on gpu xx from status: false
2013-11-06 06:40:11,443 INFO org.apache.hadoop.mapred.TaskTracker: running on gpu xx from task: false
2013-11-06 06:40:13,021 INFO org.apache.hadoop.mapred.TaskRunner: attempt_201311060636_0001_m_01_3 done; removing files.
2013-11-06 06:40:13,037 INFO org.apache.hadoop.mapred.TaskTracker: addCPUFreeSlot : current free slots : 3
2013-11-06 06:40:14,025 INFO org.apache.hadoop.mapred.TaskRunner: attempt_201311060636_0001_m_00_3 done; removing files.
2013-11-06 06:40:14,028 INFO org.apache.hadoop.mapred.TaskTracker: addCPUFreeSlot : current free slots : 4
2013-11-06 06:40:14,476 INFO org.apache.hadoop.mapred.TaskTracker: running on gpu xx from status: false
2013-11-06 06:40:14,477 INFO org.apache.hadoop.mapred.TaskTracker: running on gpu xx from task: false
2013-11-06 06:40:14,477 INFO org.apache.hadoop.mapred.TaskTracker: running on gpu xx from status: false
2013-11-06 06:40:14,477 INFO org.apache.hadoop.mapred.TaskTracker: running on gpu xx from task: false
2013-11-06 06:40:14,894 INFO org.apache.hadoop.mapred.TaskTracker: running on gpu : false
2013-11-06 06:40:14,900 INFO org.apache.hadoop.mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_201311060636_0001_m_02_0 task's state:UNASSIGNED
2013-11-06 06:40:14,902 INFO org.apache.hadoop.mapred.TaskTracker: Trying to launch : attempt_201311060636_0001_m_02_0 on CPU
2013-11-06 06:40:14,904 INFO org.apache.hadoop.mapred.TaskTracker: In TaskLauncher, current free CPU slots : 4 and trying to launch attempt_201311060636_0001_m_02_0
2013-11-06 06:40:14,909 INFO org.apache.hadoop.mapred.TaskTracker: Received KillTaskAction for task: attempt_201311060636_0001_m_00_3
2013-11-06 06:40:14,920 INFO org.apache.hadoop.mapred.TaskTracker: Received KillTaskAction for task: attempt_201311060636_0001_m_01_3
2013-11-06 06:40:15,161 INFO org.apache.hadoop.mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_201311060636_0001_m_164532908
2013-11-06 06:40:15,162 INFO org.apache.hadoop.mapred.JvmManager: JVM Runner jvm_201311060636_0001_m_164532908 spawned.
2013-11-06 06:40:17,216 INFO org.apache.hadoop.mapred.TaskTracker: JVM with ID: jvm_201311060636_0001_m_164532908 given task: attempt_201311060636_0001_m_02_0

Regards, 

Indrashish 

On Tue, 5 Nov 2013 10:09:36 -0800, Vinod Kumar Vavilapalli wrote: 

> It seems like your pipes mapper is exiting before consuming all the
> input. Did you check the task-logs on the web UI? 
> 
> Thanks, 
> +Vinod 
> 
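Vinod's diagnosis can be reproduced outside Hadoop: when the child process on the far end of a pipe exits before consuming its input, the parent's next write fails just like the "java.net.SocketException: Broken pipe" seen earlier in this thread. A minimal stand-alone Java sketch (no Hadoop involved; the `sh -c 'exit 0'` child is a stand-in for a pipes binary that dies at startup):

```java
import java.io.IOException;
import java.io.OutputStream;

public class BrokenPipeDemo {
    // Returns true if writing to a child that has already exited raises
    // IOException, mirroring a pipes mapper dying before reading its input.
    static boolean writeToDeadChild() throws Exception {
        Process child = new ProcessBuilder("sh", "-c", "exit 0").start();
        child.waitFor(); // child is gone; the read end of the pipe is closed
        try (OutputStream toChild = child.getOutputStream()) {
            byte[] chunk = new byte[8192];
            for (int i = 0; i < 16; i++) { // push past any user-space buffering
                toChild.write(chunk);
                toChild.flush();
            }
            return false;
        } catch (IOException expected) {
            return true; // typically "Broken pipe" or "Stream closed"
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("IOException on write to dead child: " + writeToDeadChild());
    }
}
```

The Hadoop-side symptom is the same: the framework's write to the C++ child fails, and the real cause has to be found in the child's own stdout/stderr task logs.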

Re: Error while running Hadoop Source Code

2013-11-06 Thread Basu,Indrashish
  

Can anyone please assist regarding this? 

Thanks in advance


Regards, 

Indra 


Re: Error while running Hadoop Source Code

2013-11-07 Thread Basu,Indrashish
  

Hi Vinod, 

Please find below the details requested: 

A) ALL OF THE TASKTRACKER LOG: 

Same as what I provided in the earlier email. 

B) THE TASK-LOGS: 

SYSLOG: 

2013-11-06 16:37:21,331 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=MAP, sessionId=
2013-11-06 16:37:22,245 INFO org.apache.hadoop.mapred.MapTask: numReduceTasks: 2
2013-11-06 16:37:22,301 INFO org.apache.hadoop.mapred.MapTask: io.sort.mb = 100
2013-11-06 16:37:22,832 INFO org.apache.hadoop.mapred.MapTask: data buffer = 79691776/99614720
2013-11-06 16:37:22,833 INFO org.apache.hadoop.mapred.MapTask: record buffer = 262144/327680
2013-11-06 16:37:23,157 ERROR org.apache.hadoop.mapred.pipes.BinaryProtocol: java.io.EOFException
        at java.io.DataInputStream.readByte(DataInputStream.java:267)
        at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:298)
        at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:319)
        at org.apache.hadoop.mapred.pipes.BinaryProtocol$UplinkReaderThread.run(BinaryProtocol.java:114)

2013-11-06 16:37:23,157 INFO org.apache.hadoop.mapred.pipes.Application: Aborting because of java.net.SocketException: Broken pipe
        at java.net.SocketOutputStream.socketWrite0(Native Method)
        at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:113)
        at java.net.SocketOutputStream.write(SocketOutputStream.java:159)
        at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
        at java.io.BufferedOutputStream.write(BufferedOutputStream.java:126)
        at java.io.DataOutputStream.write(DataOutputStream.java:107)
        at org.apache.hadoop.mapred.pipes.BinaryProtocol.writeObject(BinaryProtocol.java:333)
        at org.apache.hadoop.mapred.pipes.BinaryProtocol.mapItem(BinaryProtocol.java:286)
        at org.apache.hadoop.mapred.pipes.PipesMapRunner.run(PipesMapRunner.java:92)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:363)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
        at org.apache.hadoop.mapred.Child.main(Child.java:170)

2013-11-06 16:37:23,284 WARN org.apache.hadoop.mapred.TaskTracker: Error running child
java.io.IOException: pipe child exception
        at org.apache.hadoop.mapred.pipes.Application.abort(Application.java:191)
        at org.apache.hadoop.mapred.pipes.PipesMapRunner.run(PipesMapRunner.java:103)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:363)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
        at org.apache.hadoop.mapred.Child.main(Child.java:170)
Caused by: java.net.SocketException: Broken pipe
        at java.net.SocketOutputStream.socketWrite0(Native Method)
        at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:113)
        at java.net.SocketOutputStream.write(SocketOutputStream.java:159)
        at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
        at java.io.BufferedOutputStream.write(BufferedOutputStream.java:126)
        at java.io.DataOutputStream.write(DataOutputStream.java:107)
        at org.apache.hadoop.mapred.pipes.BinaryProtocol.writeObject(BinaryProtocol.java:333)
        at org.apache.hadoop.mapred.pipes.BinaryProtocol.mapItem(BinaryProtocol.java:286)
        at org.apache.hadoop.mapred.pipes.PipesMapRunner.run(PipesMapRunner.java:92)
        ... 3 more

STDERR: 

Command:0received before authentication. Exiting.. 

STDOUT: 

cmd: [bash, -c, exec '/app/hadoop/tmp/mapred/local/taskTracker/archive/10.227.56.195bin/cpu-kmeans2D/cpu-kmeans2D' '0' < /dev/null 1>> /usr/local/hadoop/hadoop-gpu-0.20.1/bin/../logs/userlogs/attempt_201311061630_0001_m_01_0/stdout 2>> /usr/local/hadoop/hadoop-gpu-0.20.1/bin/../logs/userlogs/attempt_201311061630_0001_m_01_0/stderr]
deviceID: 
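As a side note on reading the syslog above: the `java.io.EOFException` raised in `DataInputStream.readByte` only means the uplink stream from the C++ child hit end-of-stream before any response arrived, which is consistent with the child exiting immediately after printing the stderr message. A tiny stand-alone illustration of that exact signature:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;

public class EofDemo {
    // Reading a byte from an already-exhausted stream throws EOFException,
    // the same way BinaryProtocol's uplink reader fails when the pipes
    // child closes its socket without ever writing a command.
    static String readFromClosedPeer() {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(new byte[0]));
        try {
            in.readByte();
            return "got a byte";
        } catch (EOFException e) {
            return "EOFException";
        } catch (java.io.IOException e) {
            return "IOException";
        }
    }

    public static void main(String[] args) {
        System.out.println(readFromClosedPeer());
    }
}
```

So the EOFException and the Broken pipe are both downstream symptoms; the child's stderr line is the closest thing to a root cause in these logs.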

C) SPECIFIC TASKATTEMPT'S TASKATTEMPTID THAT IS FAILING: 

attempt_201311061630_0001_m_00_0 
attempt_201311061630_0001_m_01_0 
attempt_201311061630_0001_m_02_0 
attempt_201311061630_0001_m_00_1 
attempt_201311061630_0001_m_01_1 
attempt_201311061630_0001_m_03_0 
attempt_201311061630_0001_m_00_2 
attempt_201311061630_0001_m_01_2 
attempt_201311061630_0001_m_00_3 
attempt_201311061630_0001_m_01_3 

Kindly let me know in case any more details are required. 

Thanks again for your time and patience regarding this. 

Regards, 

Indrashish. 

On Wed, 6 Nov 2013 16:58:53 -0800, Vinod Kumar Vavilapalli wrote: 

> Don't see anything in the logs that you pasted. 
> Can you paste the following in say pastebin? 
> - All of the TaskTracker log 
> - The task-logs. These are syslog, stderr, stdout files for a specific TaskAttempt. 
> - And the specific TaskAttempt's TaskAttemptID that is failing. 
> 
> Thanks, 
> +Vinod 
> 

Re: Error while running Hadoop Source Code

2013-11-09 Thread Basu,Indrashish
  

Any updates on this? 

Regards, 

Indra 


Re: Error while running Hadoop Source Code

2013-11-12 Thread Basu,Indrashish
  

Hi, 

Can anyone please assist regarding this? 

Regards,


Indrashish 


Drawbacks of Hadoop Pipes

2014-03-01 Thread Basu,Indrashish


Hello,

I am trying to execute a CUDA benchmark in a Hadoop framework, using 
Hadoop Pipes to invoke the CUDA code, which is written against a C++ 
interface, from the Hadoop framework. I am interested in knowing the 
drawbacks of using Hadoop Pipes for this, and whether an implementation 
based on Hadoop Streaming and a JNI interface would be a better choice. 
I am a bit unclear on this, so I would appreciate it if anyone could 
throw some light on this and clarify.
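One practical difference worth weighing: Pipes talks to the child over a binary socket protocol (the BinaryProtocol in the earlier traces), while Streaming only requires a child that reads records on stdin and writes tab-separated key/value lines on stdout, which is much easier to test in isolation. A hedged sketch of a Streaming-style mapper in Java (the class name and the word-count emission are illustrative, not tied to any Hadoop API; a k-means mapper would emit something like clusterId and point instead):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class StreamingStyleMapper {
    // Maps one input line to zero or more "key\tvalue" output lines —
    // the entire contract a Hadoop Streaming mapper has to satisfy.
    static String mapLine(String line) {
        StringBuilder out = new StringBuilder();
        for (String token : line.trim().split("\\s+")) {
            if (!token.isEmpty()) {
                out.append(token).append('\t').append(1).append('\n');
            }
        }
        return out.toString();
    }

    public static void main(String[] args) throws IOException {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        for (String line; (line = in.readLine()) != null; ) {
            System.out.print(mapLine(line));
        }
    }
}
```

Because such a program can be run and debugged from a plain shell (`echo "a b" | java StreamingStyleMapper`), Streaming avoids the opaque child-startup failures seen with Pipes in this thread, at the cost of serializing everything through text.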


Regards,
Indrashish

--
Indrashish Basu
Graduate Student
Department of Electrical and Computer Engineering
University of Florida


Re: Drawbacks of Hadoop Pipes

2014-03-03 Thread Basu,Indrashish


Hello,

Can anyone help regarding the below query?

Regards,
Indrashish

On Sat, 01 Mar 2014 13:52:11 -0500, Basu,Indrashish wrote:

Hello,

I am trying to execute a CUDA benchmark in a Hadoop Framework and
using Hadoop Pipes for invoking the CUDA code which is written in a
C++ interface from the Hadoop Framework. I am just a bit interested in
knowing what can be the drawbacks of using Hadoop Pipes for this and
whether the implementation of Hadoop Streaming and JNI interface will
be a better choice. I am a bit unclear on this, so if anyone can throw
some light on this and clarify.

Regards,
Indrashish


--
Indrashish Basu
Graduate Student
Department of Electrical and Computer Engineering
University of Florida