Hadoop version 2.4.1.
I have tested the snapshot feature:
1. Upload a file: /tmp/test.avi
2. Create a snapshot: /tmp snap1
3. Delete the file: /tmp/test.avi
4. The file is moved to the Trash: /user/hadoop/.Trash/tmp/test.avi
5. Delete the file from the Trash.
6. But the blocks are not deleted on the datanode disks. Why?
7. delete
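For reference, a sketch of the same test as shell commands (this assumes snapshots have been allowed on /tmp and that trash is enabled):

    hdfs dfsadmin -allowSnapshot /tmp        # once, as the superuser
    hdfs dfs -put test.avi /tmp/test.avi
    hdfs dfs -createSnapshot /tmp snap1
    hdfs dfs -rm /tmp/test.avi               # goes to the trash
    hdfs dfs -expunge                        # removes it from the trash as well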
Hello,
I finally moved to per-task writes, and the reducers gather them all and
write them into the file.
Thanks for the help.
Regards,
rab
On Fri, Jul 11, 2014 at 10:50 AM, Bertrand Dechoux decho...@gmail.com
wrote:
And besides, with a single file, if that were possible, how do you handle
errors?
Hi,
Please check the value of mapreduce.map.maxattempts and
mapreduce.reduce.maxattempts. If you'd like to ignore the error only
in specific jobs, it's useful to use the -D option to change the
configuration as follows:
bin/hadoop jar job.jar -Dmapreduce.map.maxattempts=10
Thanks,
- Tsuyoshi
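As a sketch, both properties can be raised for a single run like this (it assumes the jar's driver class, here a hypothetical MyDriver, parses generic options via ToolRunner):

    hadoop jar job.jar MyDriver \
        -D mapreduce.map.maxattempts=10 \
        -D mapreduce.reduce.maxattempts=10 \
        /input /output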
Hi team,
I am in a weird situation where I have the following sample HDFS folders:
/data/folder/
/data/folder*
/data/folder_day
/data/folder_day/monday
/data/folder/1
/data/folder/2
I want to delete /data/folder* without deleting its sub-folders. If I do
hadoop fs -rmr /data/folder*, it will delete everything that matches.
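A note on why this is tricky: the Hadoop FsShell expands glob characters itself, so quoting the path for the local shell is not enough. A sketch of the two forms (the escaped one is what later replies in this thread try, with mixed results):

    hadoop fs -rmr '/data/folder*'     # FsShell still treats * as a glob
    hadoop fs -rmr '/data/folder\*'    # backslash-escapes the * for FsShell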
Just rename the folder.
On Wed, Aug 20, 2014 at 6:53 AM, praveenesh kumar praveen...@gmail.com
wrote:
With renaming, you would use the mv command: hadoop fs -mv /data/folder*
/data/new_folder. Won't it move all the sub-dirs along with that?
On Wed, Aug 20, 2014 at 12:00 PM, dileep kumar dileep...@gmail.com wrote:
Hi,
I use Hadoop 2.4.1. In my cluster, Non DFS Used is 2.09 TB.
I found that these files are all under tmp/nm-local-dir/usercache.
Is there any Hadoop command to remove these unused user cache files under
tmp/nm-local-dir/usercache?
Regards
Arthur
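As far as I know there is no dedicated shell command for this; the NodeManager's deletion service is meant to trim the local cache on its own. A sketch of the yarn-site.xml settings that control it (the property names are from Hadoop 2.x; the values here are only illustrative):

    <property>
      <name>yarn.nodemanager.localizer.cache.target-size-mb</name>
      <value>10240</value>  <!-- trim the cache back toward 10 GB -->
    </property>
    <property>
      <name>yarn.nodemanager.localizer.cache.cleanup.interval-ms</name>
      <value>600000</value> <!-- run the cleanup check every 10 minutes -->
    </property>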
Thanks for your reply. However, I think it is not a 32-bit version issue,
because my Hadoop is 64-bit; I compiled it from source. I think my way of
installing snappy must be wrong.
Arthur
On 19 Aug, 2014, at 11:53 pm, Andre Kelpe ake...@concurrentinc.com wrote:
Could this be caused by the
try putting the name in quotes
On Wed, Aug 20, 2014 at 4:35 PM, praveenesh kumar praveen...@gmail.com
wrote:
Try:
hadoop fs -mv /data/folder*/* /data/new_folder
Now you would have only /data/folder*, and all the data under /data/folder*
would be moved to the new folder; then delete /data/folder*. Not sure if it
works, just give it a try.
On Wed, Aug 20, 2014 at 8:26 AM, Ritesh Kumar Singh
riteshoneinamill...@gmail.com wrote:
No, I have tried all the usual things like single quotes, double quotes, and
the escape character, but it is not working. I wonder what the escape
character is for the Hadoop FS utility.
On Wed, Aug 20, 2014 at 1:26 PM, Ritesh Kumar Singh
riteshoneinamill...@gmail.com wrote:
Have you looked at the WholeFileInputFormat implementations? There are
quite a few if you search for them...
http://hadoop-sandy.blogspot.com/2013/02/wholefileinputformat-in-java-hadoop.html
https://github.com/tomwhite/hadoop-book/blob/master/ch07/src/main/java/WholeFileInputFormat.java
Regards,
Interesting... although the escape character is still the backslash, and
it's proven to work with other special characters. Here's a link:
Deleting directory with special character
use this:
hadoop fs -rmr /path-to-folder/folder\*
just tried it out :)
On Wed, Aug 20, 2014 at 7:07 PM, Ritesh Kumar Singh
riteshoneinamill...@gmail.com wrote:
Not working for me, strange :(
On Wed, Aug 20, 2014 at 3:00 PM, Ritesh Kumar Singh
riteshoneinamill...@gmail.com wrote:
Thanks for the response.
Yes, I know WholeFileInputFormat, but I am not sure the filename comes to the
map process as either key or value. As far as I can tell, this format reads
the contents of the file; I wish to have an InputFormat that just gives the
filename, or a list of filenames.
Also, the files are very small.
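If only the path is needed, one common workaround (a sketch of my own, not from the thread) is to keep the stock input format and pull the filename out of the input split inside the mapper:

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileSplit;

    // Emits each input file's path exactly once and ignores the contents.
    public class FileNameMapper
        extends Mapper<LongWritable, Text, Text, NullWritable> {

      @Override
      protected void setup(Context context)
          throws IOException, InterruptedException {
        // The split identifies the file this map task is reading.
        FileSplit split = (FileSplit) context.getInputSplit();
        context.write(new Text(split.getPath().toString()), NullWritable.get());
      }

      @Override
      protected void map(LongWritable key, Text value, Context context) {
        // File contents are not needed, so records are dropped here.
      }
    }

With the default TextInputFormat each small file becomes its own split, so setup() runs once per file; the downside is still one map task per file, which is where a combining input format (mentioned later in this thread) helps.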
unsubscribe
On Wed, Aug 20, 2014 at 5:21 PM, Charles Li charlesqua...@gmail.com wrote:
Please unsubscribe me. Thanks!
On Tue, Aug 19, 2014 at 3:10 AM, Vasantha Kumar Kannaki Kaliappan
vaska...@student.liu.se wrote:
Hi,
Please unsubscribe me from the list. Thanks a lot for active
Hi,
I am trying to use the DistributedCache and I am running into problems in a
test when using the LocalFileSystem. FSDownload complains about permissions
like so (this is Hadoop 2.4.1 with JDK 6 on Linux):
Caused by: java.io.IOException: Resource file:/path/to/some/file is not
publicly
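If it is the public-cache check that is firing, FSDownload only treats a resource as public when the file is world-readable and every ancestor directory is world-executable. A sketch of the usual fix in a local test, using the path from the (truncated) message above:

    chmod o+x /path /path/to /path/to/some   # every ancestor must be traversable
    chmod o+r /path/to/some/file             # the file itself must be readable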
Move it to some tmp directory and delete the parent directory.
On Aug 20, 2014 4:23 PM, praveenesh kumar praveen...@gmail.com wrote:
I have SysV scripts for HDFS and YARN services. I'll gladly share them. What
is the preferred way to share them? GitHub?
On Aug 20, 2014, at 11:06 AM, Mohit Anchlia mohitanch...@gmail.com wrote:
Any help would be appreciated. If not I'll go ahead and write these startup
scripts.
Try taking a peek at the Cloudera distributions. Look in the tar file in
the sbin directory for files like
*-daemon.sh
*-daemons.sh
That might be a good starting point.
-Ray
On Wed, Aug 20, 2014 at 10:06 AM, Mohit Anchlia mohitanch...@gmail.com
wrote:
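In the same spirit, a minimal SysV-style sketch for a single daemon (it assumes a hypothetical install under /opt/hadoop running as user hdfs, and leans on the stock sbin/hadoop-daemon.sh wrapper; not a hardened script):

    #!/bin/bash
    # /etc/init.d/hadoop-namenode
    HADOOP_PREFIX=/opt/hadoop
    DAEMON_USER=hdfs

    case "$1" in
      start|stop)
        # hadoop-daemon.sh handles pid files and logging for us.
        su -s /bin/bash "$DAEMON_USER" -c \
          "$HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_PREFIX/etc/hadoop $1 namenode"
        ;;
      restart)
        "$0" stop
        "$0" start
        ;;
      *)
        echo "Usage: $0 {start|stop|restart}" >&2
        exit 1
        ;;
    esac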
I wrote a post on how to use CombineFileInputFormat:
http://www.idryman.org/blog/2013/09/22/process-small-files-on-hadoop-using-combinefileinputformat-1/
In the RecordReader constructor, you can get the context of which file you are
reading in.
In my example, I created FileLineWritable to include the
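For anyone reading along, the hook is the three-argument constructor that CombineFileRecordReader requires; a sketch of my own (simplified to emit just the per-file path) of where that path comes from:

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.InputSplit;
    import org.apache.hadoop.mapreduce.RecordReader;
    import org.apache.hadoop.mapreduce.TaskAttemptContext;
    import org.apache.hadoop.mapreduce.lib.input.CombineFileSplit;

    // CombineFileRecordReader builds one of these per file in the combined
    // split; the Integer index says which file this instance handles.
    public class PathOnlyRecordReader extends RecordReader<Text, NullWritable> {

      private final Path path;
      private boolean emitted = false;

      public PathOnlyRecordReader(CombineFileSplit split,
                                  TaskAttemptContext context,
                                  Integer index) {
        path = split.getPath(index);  // the per-file path, as in the blog post
      }

      @Override public void initialize(InputSplit split, TaskAttemptContext ctx) { }

      @Override public boolean nextKeyValue() {
        if (emitted) return false;    // one record per file: its path
        emitted = true;
        return true;
      }

      @Override public Text getCurrentKey() { return new Text(path.toString()); }
      @Override public NullWritable getCurrentValue() { return NullWritable.get(); }
      @Override public float getProgress() { return emitted ? 1.0f : 0.0f; }
      @Override public void close() { }
    }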
Thanks! I ended up creating my own script. I am not sure why these are not
part of the Apache Hadoop tarball.
On Wed, Aug 20, 2014 at 10:19 AM, Abdelrahman Kamel abdouka...@gmail.com
wrote:
unsubscribe
On Wed, Aug 20, 2014 at 8:18 PM, Ray Chiang rchi...@cloudera.com wrote:
Hi All,
I have a failure in one of the applications, consistently after my Nutch job
has run for about an hour. Can someone please suggest why this error is
occurring, looking at the exception message below?
Diagnostics:
Application application_1408512952691_0017 failed 2 times due to AM
Container for
just to be sure, try this one too:
hadoop fs -rmr /data/folder\*
On Thu, Aug 21, 2014 at 1:36 AM, praveenesh kumar praveen...@gmail.com
wrote:
Commands used:
1. hadoop fs -rmr /data/folder\*
2. hadoop fs -rmr 'data/folder\*'
3. hadoop fs -rmr /data/folder\*
None of them gave any output.
The blocks of a file and the blocks of the snapshot are the same (i.e., there
is no data copying when creating snapshots). Therefore, the blocks are not
removed from the datanode disks when the file is removed.
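(A quick way to confirm this, assuming the snapshot from the original test: the space is only reclaimed once the snapshot itself is deleted.)

    hdfs dfs -deleteSnapshot /tmp snap1
    hdfs dfsadmin -report    # DFS Used drops once the blocks are invalidated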
Thanks,
Akira
(2014/08/20 15:22), juil cho wrote: