Hi,
val srcPath = "s3n://bucket_name_1"
val dstPath = "s3n://bucket_name_2"
val config = new Configuration()
val fs = FileSystem.get(URI.create(srcPath), config)
FileUtil.copyMerge(fs, new Path(srcPath), fs, new Path(dstPath), false, config, null)
I am trying to use copyMerge... It's not
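For reference, FileUtil.copyMerge essentially concatenates the part files under a source directory into a single destination file. A minimal, self-contained sketch of that idea using only java.io against the local filesystem (the real copyMerge works through Hadoop FileSystem streams, so this is a conceptual illustration, not its implementation; the method and file names here are hypothetical):

```scala
import java.io.{File, FileInputStream, FileOutputStream}

// Conceptual sketch of what copyMerge does: concatenate the part files
// of a source directory (sorted by name) into a single output file.
def mergeParts(srcDir: File, dst: File): Unit = {
  val parts = srcDir.listFiles().filter(_.isFile).sortBy(_.getName)
  val out = new FileOutputStream(dst)
  try {
    parts.foreach { part =>
      val in = new FileInputStream(part)
      try in.transferTo(out)   // Java 9+: stream the whole file into out
      finally in.close()
    }
  } finally out.close()
}
```

Sorting by name matters because MapReduce/Spark part files (part-00000, part-00001, ...) must be concatenated in order to preserve the output ordering.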
Thanks a lot, Ron. It helps.
--
Madhav Sharan
On Sun, Jul 24, 2016 at 2:19 PM, Ron Gonzalez wrote:
> In a manner of speaking. I would imagine that you would like to take
> advantage of the resource management that comes with YARN. If you're
> planning to make this a product
There can be several reasons you might see this error.
The most common ones are:
- Disk space on the DataNodes, as mentioned earlier in the thread.
- Inconsistent DataNodes; restarting HDFS should clean this up.
- A bad or unresponsive DataNode.
- A negative 'Block Size' value in hdfs-site.xml.
- Network issues between the client and the DataNodes.
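To narrow down which of the causes above applies, the standard HDFS command-line tools can be used; these commands have to run against the live cluster, so this is just a reference for where to look, not output from this case:

```shell
# Summary of live/dead DataNodes and per-node remaining capacity
hdfs dfsadmin -report

# Check filesystem health, including under-replicated and missing blocks
hdfs fsck / -blocks -locations

# Verify the configured block size is a sane positive value
hdfs getconf -confKey dfs.blocksize
```

If dfsadmin reports all 9 DataNodes as live with free space, the problem is more likely network reachability from the client to the DataNode data-transfer ports than capacity.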
Hi
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File
/user/pts/output/OTSOutput/_temporary/1/_temporary/attempt_1463_0008_m_18_2/video.mp4.of.txt
could only be replicated to 0 nodes instead of minReplication (=1). *There are
9 datanode(s) running* and no node(s) are
Hi All,
One of our business requirements is that processing of a large dataset, on
the order of thousands of terabytes, needs to be offloaded to Hadoop over a
period of a few months. I am curious to learn and understand the
possibilities around the points below:
1. What is the efficient Data Ingestion