Thank you!
On Jun 13, 2016 3:23 PM, "Chris Nauroth" wrote:
Hello Ram,
This indicates that a client connected to the DataNode's data transfer port but
then immediately disconnected before requesting an operation. Sometimes
monitoring tools will do this as a liveness check to make sure the DataNode
process is running and the port is reachable. The log
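For illustration, a minimal liveness check of this kind can be sketched in plain Java (the PortProbe class name and its method are hypothetical, not part of Hadoop): connecting to the data transfer port and closing the socket without sending an operation code is exactly the pattern that produces this DataXceiver log line.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Hypothetical probe: connects to a port and disconnects without sending
// any data. Pointed at a DataNode's data transfer port, this triggers the
// "DataXceiver error processing unknown operation" log entry.
public class PortProbe {
    public static boolean isPortOpen(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;              // connected: the process is listening
        } catch (IOException e) {
            return false;             // connection refused or timed out
        }
    }

    public static void main(String[] args) {
        // 50010 is the DataNode data transfer port from the log above.
        System.out.println(isPortOpen("localhost", 50010, 1000));
    }
}
```

The probe itself succeeds either way; only the DataNode's log is affected.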
Hi,
I am getting this error in the DataNode. I am not sure what it means or how to
fix it. Does anyone have any suggestions?
2016-06-13 20:59:40,148 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: localhost:50010:DataXceiver error processing unknown operation src: /1.2.3.4:57308 dst:
If you want to measure the effect of turning compression on and off, the
most directly observable metric would be the number of bytes written. The
actual time it takes to write data is dependent upon many factors.
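As a quick sketch of comparing bytes written with and without compression, the snippet below uses java.util.zip's gzip codec as a stand-in for whatever codec the job is actually configured with (the class and method names are illustrative, not a Hadoop API):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.zip.GZIPOutputStream;

// Compare the number of bytes a payload occupies raw versus gzip-compressed.
public class CompressionSizes {
    public static int rawSize(byte[] data) {
        return data.length;
    }

    public static int gzipSize(byte[] data) {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
            gz.write(data);
        } catch (IOException e) {
            throw new UncheckedIOException(e); // cannot happen for an in-memory sink
        }
        return buf.size(); // read after close, so the gzip trailer is included
    }

    public static void main(String[] args) {
        byte[] repetitive = new byte[64 * 1024]; // all zeros: compresses very well
        System.out.println("raw bytes:  " + rawSize(repetitive));
        System.out.println("gzip bytes: " + gzipSize(repetitive));
    }
}
```

The size ratio depends heavily on how repetitive the data is, which is part of why bytes written is a more stable metric than wall-clock time.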
On Sat, Jun 11, 2016 at 10:28 AM, Alexandru Calin <alexandrucali...@gmail.com>
I'd be interested in learning how to do this too. Would we have to override
RecordReader to add timing code around the I/O portion? I'd like to compare
the I/O time between running a normal Hadoop cluster and a Hadoop cluster in
the cloud that uses remote storage (S3) in place of HDFS.
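One way to sketch that wrapping pattern without a cluster is shown below in plain Java: a delegating reader accumulates the time spent inside each read call. A Hadoop RecordReader subclass would wrap nextKeyValue() the same way; the TimedLineReader class here is illustrative, not a Hadoop API.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.io.UncheckedIOException;

// Delegating reader that accumulates nanoseconds spent in the underlying
// read call, the same decorator shape a timing RecordReader would use.
public class TimedLineReader {
    private final BufferedReader delegate;
    private long ioNanos = 0;

    public TimedLineReader(BufferedReader delegate) {
        this.delegate = delegate;
    }

    public String readLine() {
        long start = System.nanoTime();
        try {
            return delegate.readLine();          // the I/O being measured
        } catch (IOException e) {
            throw new UncheckedIOException(e);   // keep the sketch's API unchecked
        } finally {
            ioNanos += System.nanoTime() - start;
        }
    }

    public long ioNanos() {
        return ioNanos;
    }

    public static void main(String[] args) {
        TimedLineReader r =
            new TimedLineReader(new BufferedReader(new StringReader("a\nb\n")));
        while (r.readLine() != null) { /* consume all records */ }
        System.out.println("time spent in I/O (ns): " + r.ioNanos());
    }
}
```

Summing the accumulated time across all tasks would then give a per-job I/O figure to compare between the local-HDFS and S3-backed clusters.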
Thanks!
Alvin
Hi Arun,
Thanks for your prompt reply. Actually, I want to add files to the job running
internally in the
JobClient.runJob(conf2)
method, and add cache files to it.
I am unable to find a way to get the running job.
The method Job.getInstance(conf) creates a new job (but I want to add
files to
Hi Jeff,
Thanks for your prompt reply. Actually, my problem is as follows:
My code creates a new job named "job 1", which writes something (say a text
file) to the distributed cache, and the job completes.
Now, I want to create some n number of jobs in the while loop below, each of
which reads the text file