What do you want to know? Here is how it goes:

1. We receive 6TB of data from an outside client and need to analyze it
quickly and report on our findings. I'm reusing an analysis that was done in
our current environment on the same data.

2. Upload the data to HDFS with hdfs dfs -put (see the sketch after this list).

3. Create EXTERNAL tables in Hive that point to the data in HDFS, using STORED
AS TEXTFILE LOCATION. (SQL is required for our analysts.)

4. Convert our current SQL to HiveQL and run the analysis.

5. Test ODBC connections to Hive for pulling data.
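
For reference, here is a rough sketch of steps 2, 3, and 5 as shell commands.
All paths, the client_poc database/table names, and the column schema are
placeholders for illustration, and the ODBC test assumes unixODBC's isql with
a DSN named HiveDSN:

  # 2. Load the raw files into HDFS (local and HDFS paths are placeholders)
  hdfs dfs -mkdir -p /data/client_poc
  hdfs dfs -put /local/staging/client_data/* /data/client_poc/

  # 3. Point an external Hive table at the files; the schema and the comma
  #    delimiter are illustrative, not the client's actual layout
  hive -e "
  CREATE DATABASE IF NOT EXISTS client_poc;
  CREATE EXTERNAL TABLE client_poc.transactions (
    txn_id   STRING,
    txn_date STRING,
    amount   DOUBLE
  )
  ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
  STORED AS TEXTFILE
  LOCATION '/data/client_poc/';
  "

  # 5. Smoke-test the ODBC connection (assumes a DSN named HiveDSN
  #    configured against the Hive ODBC driver)
  isql -v HiveDSN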

Clay


From: ados1...@gmail.com [mailto:ados1...@gmail.com]
Sent: Friday, March 14, 2014 11:40 AM
To: user
Subject: Re: NodeManager health Question

Hey Clay,

How have you loaded 6TB of data into HDP? I am in a similar situation and
wanted to understand your use case.

On Thu, Mar 13, 2014 at 3:59 PM, Clay McDonald
<stuart.mcdon...@bateswhite.com> wrote:
Hello all, I have laid out my POC in a project plan and have HDP 2.0 installed.
HDFS is running fine and I have loaded about 6TB of data to run my tests on. I
have a series of SQL queries that I will run in Hive ver. 0.12.0. I had to
manually install Hue and still have a few issues I'm working on there. But at
the moment, my most pressing issue is with Hive jobs not running. In YARN, my
Hive queries are "Accepted" but "Unassigned" and do not run. See attached.

In Ambari, the datanodes all show the following alert: NodeManager health CRIT
for 20 days (CRITICAL: NodeManager unhealthy).
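
Since the alert says the NodeManagers are unhealthy, I assume that is why the
ResourceManager never assigns containers and the queries sit in "Accepted". In
case it helps, this is how the node states and stuck applications can be
checked from the command line (a sketch, assuming the stock YARN CLI):

  # list all NodeManagers, including unhealthy ones, with their health report
  yarn node -list -all

  # list the applications stuck in ACCEPTED
  yarn application -list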

From the datanode logs I found the following:

ERROR datanode.DataNode (DataXceiver.java:run(225)) -
dc-bigdata1.bateswhite.com:50010:DataXceiver error processing READ_BLOCK
operation  src: /172.20.5.147:51299 dest: /172.20.5.141:50010
java.net.SocketTimeoutException: 480000 millis timeout while waiting for
channel to be ready for write. ch : java.nio.channels.SocketChannel[connected
local=/172.20.5.141:50010 remote=/172.20.5.147:51299]
        at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
        at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:172)
        at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:220)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:546)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:710)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:340)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:101)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:65)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
        at java.lang.Thread.run(Thread.java:662)
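
If I'm reading the exception right, the 480000 millis matches the default of
dfs.datanode.socket.write.timeout (8 minutes), which I have not changed. The
effective value can be confirmed like this (a sketch using the stock HDFS CLI):

  # print the effective write-timeout setting (480000 ms is the default)
  hdfs getconf -confKey dfs.datanode.socket.write.timeout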

Also, in the namenode log I see the following:

2014-03-13 13:50:57,204 WARN  security.UserGroupInformation 
(UserGroupInformation.java:getGroupNames(1355)) - No groups available for user 
dr.who
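
As far as I can tell, dr.who is just the default HTTP static user
(hadoop.http.staticuser.user), so this warning may be coming from the web UI
rather than from my jobs, but I'm noting it anyway. It can be checked with:

  # default is dr.who unless overridden in core-site.xml
  hdfs getconf -confKey hadoop.http.staticuser.user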


If anyone can point me in the right direction to troubleshoot this, I would 
really appreciate it!

Thanks! Clay
