Yes, all the files passed must pre-exist. In this case, you would need to run
something as follows:
curl -i -X POST "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=CONCAT&sources=<PATHS>"
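As a small sketch, the CONCAT request URL can be assembled programmatically before POSTing it with curl. The host, port, and paths below are placeholders I made up, not values from this thread:

```python
def concat_url(host, port, target, sources):
    """Build a WebHDFS CONCAT request URL.

    target  -- absolute HDFS path of the pre-existing destination file
    sources -- list of absolute HDFS paths to append, in order; per the
               note above, these must also already exist
    """
    return "http://{}:{}/webhdfs/v1{}?op=CONCAT&sources={}".format(
        host, port, target, ",".join(sources))

# Hypothetical namenode and paths:
url = concat_url("namenode.example.com", 50070,
                 "/user/root/target",
                 ["/user/root/part-1", "/user/root/part-2"])
# Then issue:  curl -i -X POST "<url>"
```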
Hi Wellington,
All the source parts are:
Permission  Owner   Group       Size      Replication  Block size  Name
-rw-r--r--  hadoop  supergroup  2.43 KB   2            32 MB       part-01-00-000
-rw-r--r--  hadoop  supergroup  21.14 MB  2            32 MB       part-02-00-000
-rw-r--r--  hadoop  supergroup  22.1 MB   2            32 MB       part-04-00-000
-rw-r--r--  hadoop  supergroup  22.29 MB  2            32 MB
Hi Cinyoung,
Concat has some restrictions, such as the source file's last block size having
to equal the configured dfs.block.size. If all the conditions are met, the
command below should work (here we are concatenating /user/root/file-2
into /user/root/file-1):
curl -i -X POST "http://<HOST>:<PORT>/webhdfs/v1/user/root/file-1?op=CONCAT&sources=/user/root/file-2"
https://hadoop.apache.org/docs/r2.8.0/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Concat_Files
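One way to pre-check that last-block restriction before issuing CONCAT is a quick arithmetic test. This is my own sketch, not an official API; reading the restriction as "file length must be a non-zero multiple of the block size" is an assumption based on the reply above:

```python
def last_block_is_full(file_length, block_size):
    """Return True if a file's last block is a full block, i.e. the file
    length is a non-zero multiple of the block size.
    (Assumption: this mirrors the 'last block size must equal the
    configured dfs.block.size' restriction mentioned above.)"""
    return file_length > 0 and file_length % block_size == 0

BLOCK_SIZE = 32 * 1024 * 1024  # 32 MB, as in the listing above

# A 2.43 KB part ends in a partial block, so it would fail the check:
print(last_block_is_full(2488, BLOCK_SIZE))
# A part ending exactly on a block boundary passes:
print(last_block_is_full(2 * BLOCK_SIZE, BLOCK_SIZE))
```

The file lengths and block size would normally come from a GETFILESTATUS call rather than being hard-coded.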
I tried to concatenate multiple parts into a single target file through WebHDFS,
but I couldn't get it to work.
Could you give me an example of concatenating parts?