My hdfs ls command has become pretty slow as of today. I suspect it is due to
some network issue. It was fast, but after I traveled to another country for a
week and then came back, the hdfs ls command became pretty slow (single node).
Has anyone experienced the same issue? I believe it should be some network
Sharing a link to a simple video explaining what erasure coding is and why it
matters:
http://www.intel.com/content/www/us/en/storage/erasure-code-isa-l-solution-video.html
Thanks to Intel for such a nice video.
Regards,
Vinay
No special reason; how do I use IPv4 instead? Thanks!
For the issue I ran into, I will try the following link:
https://gist.githubusercontent.com/tariqmislam/2159173/raw/e2631272efe55a4ada1747dda6d3366b6eb7b577/instructions%2520and%2520how-to
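One commonly suggested fix is to tell the JVM to prefer IPv4; slow name
resolution over IPv6 is a frequent cause of sluggish client commands. A minimal
sketch, assuming the stock hadoop-env.sh (the property is a standard JVM flag,
not Hadoop-specific):

  export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"

Add the line to etc/hadoop/hadoop-env.sh and rerun hdfs dfs -ls to see whether
the delay goes away.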
On Mon, Jul 27, 2015 at 11:37 AM, Jonathan Aquilina
In our cluster we recently had an issue with the NameNode log file size: it
keeps growing with the following type of log entries.
2015-07-28 13:37:38,730 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* addToInvalidates: blk_-2946593971266165812 to 192.168.x.x:50010
2015-07-28 13:37:38,730 INFO
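If these INFO-level messages are what is filling the log, one common mitigation
(a sketch, assuming the stock log4j.properties that ships with Hadoop) is to
raise the threshold of the StateChange logger named in the entries above:

  # in etc/hadoop/log4j.properties
  log4j.logger.org.apache.hadoop.hdfs.StateChange=WARN

addToInvalidates entries are normal bookkeeping when blocks are scheduled for
deletion, so silencing them changes only what gets logged, not NameNode
behaviour.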
I am using Cascading on top of Hadoop, with a Tap that reads data from an FTP
server.
Internally a MapReduce program gets executed which uses the FTPFileSystem class
to read the data.
Every time I try to read a file from the FTP server it throws the following
(Stream Closed) error:
java.io.IOException:
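One way to narrow this down is to exercise FTPFileSystem directly, outside
Cascading, and see whether the Stream Closed error reproduces. A minimal
sketch; the user, password, host, and input path are placeholders, not values
from the original post:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FtpReadCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // The ftp:// scheme resolves to org.apache.hadoop.fs.ftp.FTPFileSystem
    // via fs.ftp.impl in core-default.xml.
    FileSystem fs = FileSystem.get(URI.create("ftp://user:password@host/"), conf);
    // fs.open returns an FSDataInputStream; read it to the end to see
    // whether the stream gets closed prematurely by the FTP layer.
    try (BufferedReader reader = new BufferedReader(
        new InputStreamReader(fs.open(new Path("/inputFilePath"))))) {
      String line;
      while ((line = reader.readLine()) != null) {
        System.out.println(line);
      }
    }
  }
}

If this standalone read also fails, the problem is in the Hadoop FTP layer (or
the server closing data connections), not in Cascading.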
I am trying to write data to an FTP server using the normal credentials syntax:
ftp://user:password@host/outputFilePath.. Below is my Cascading code
Code:
package com.ftp.readwrite;
import java.io.IOException;
import cascading.flow.FlowDef;
import cascading.flow.hadoop.HadoopFlowConnector;
import
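For what it's worth, the write path can be checked the same way, bypassing
Cascading (again just a sketch; user, password, host, and the output path are
placeholders):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FtpWriteCheck {
  public static void main(String[] args) throws Exception {
    // fs.create goes through FTPFileSystem when the URI scheme is ftp://.
    FileSystem fs = FileSystem.get(
        URI.create("ftp://user:password@host/"), new Configuration());
    try (FSDataOutputStream out = fs.create(new Path("/outputFilePath"))) {
      out.writeBytes("test line\n");
    }
  }
}

If this works while the Cascading flow does not, the issue is in how the Tap
builds or closes the stream rather than in the credentials syntax itself.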
Are there any MapReduce jobs running?
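A couple of standard checks to see where the space is going (assuming the
default /hbase root directory in HDFS; both commands ship with Hadoop):

  hdfs dfsadmin -report   # per-datanode capacity and used space
  hdfs dfs -du -h /hbase  # which HBase directories are growing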
On Jul 28, 2015 10:11 PM, Akmal Abbasov akmal.abba...@icloud.com wrote:
Hi, I’m observing strange behaviour in an HDFS/HBase cluster.
The disk space of one of the datanodes is increasing very fast even when there
are no write requests.
It is 8GB per hour on average. Here is the graph which shows it.
I am using hbase-0.98.7-hadoop2 and hadoop-2.5.1.
And these are the logs from