Hi,
I was looking at the test cases for HDFS and found the following test
- org.apache.hadoop.hdfs.TestSetTimes.testTimes
From the below, it appears that getAccessTime() for a directory should return 0.
Is this true?
System.out.println("Creating testdir1 and testdir1/test1.dat.");
Yes, HDFS supports aTime only for files. Support for directories would be too
expensive.
Thanks,
--Konstantin
On Thu, Nov 11, 2010 at 12:44 AM, Vivekanand Vellanki wrote:
> Hi,
>
> I was looking at the test cases for HDFS and found the following test
> - org.apache.hadoop.hdfs.TestSetTimes.testTimes
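[Editor's note: a self-contained toy sketch of the semantics Konstantin describes, for illustration only. This is not HDFS source; the class and method names below are invented. It models why `getAccessTime()` on a directory returns 0: aTime is stored per file inode, and propagating it to parent directories on every read would be the expensive part.]

```java
// Toy model (not HDFS code): aTime is tracked for file inodes only;
// directories have no stored aTime and report 0.
import java.util.HashMap;
import java.util.Map;

class ToyNamespace {
    private final Map<String, Long> fileAtime = new HashMap<>();
    private final Map<String, Boolean> isDir = new HashMap<>();

    void mkdir(String path) { isDir.put(path, true); }

    void createFile(String path, long now) {
        isDir.put(path, false);
        fileAtime.put(path, now);
    }

    void read(String path, long now) {
        // Only file reads update aTime; touching every ancestor directory
        // on each access is the cost HDFS avoids by not supporting it.
        if (Boolean.FALSE.equals(isDir.get(path))) fileAtime.put(path, now);
    }

    long getAccessTime(String path) {
        // Directories never appear in fileAtime, so they fall through to 0,
        // which matches what TestSetTimes.testTimes checks.
        return fileAtime.getOrDefault(path, 0L);
    }

    public static void main(String[] args) {
        ToyNamespace fs = new ToyNamespace();
        fs.mkdir("/testdir1");
        fs.createFile("/testdir1/test1.dat", 1000L);
        fs.read("/testdir1/test1.dat", 2000L);
        System.out.println(fs.getAccessTime("/testdir1"));           // 0
        System.out.println(fs.getAccessTime("/testdir1/test1.dat")); // 2000
    }
}
```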
Thanks for the prompt response.
From: Konstantin Shvachko
To: hdfs-user@hadoop.apache.org
Sent: Thu, November 11, 2010 2:18:07 PM
Subject: Re: atime for a directory
Yes HDFS supports aTime only for files. Support for directories would be too
expensive.
Thanks,
Thanks Todd,
In HDFS-6313, I see three APIs (sync, hflush, hsync),
and I assume hflush corresponds to:
*"API2: flushes out to all replicas of the block.
The data is in the buffers of the DNs but not on the DN's OS buffers.
New readers will see the data after the call has returned.*"
I am still c
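[Editor's note: a self-contained toy model of the hflush/hsync distinction quoted above, for illustration only. This is not HDFS code and all names are invented: hflush pushes client-buffered bytes into each replica's in-memory buffer, so new readers see them but a crash of all DataNodes could still lose them; hsync additionally forces each replica's buffer to disk.]

```java
// Toy pipeline model (not HDFS code) of the "API2" semantics quoted above.
import java.util.ArrayList;
import java.util.List;

class ToyReplica {
    final StringBuilder memory = new StringBuilder(); // DN buffer, visible to new readers
    final StringBuilder disk = new StringBuilder();   // durable storage
}

class ToyOutputStream {
    private final StringBuilder clientBuffer = new StringBuilder();
    private final List<ToyReplica> replicas = new ArrayList<>();

    ToyOutputStream(int replication) {
        for (int i = 0; i < replication; i++) replicas.add(new ToyReplica());
    }

    void write(String data) { clientBuffer.append(data); }

    // hflush: bytes reach every replica's buffers; readers see them after
    // the call returns, but nothing has been forced to disk yet.
    void hflush() {
        for (ToyReplica r : replicas) r.memory.append(clientBuffer);
        clientBuffer.setLength(0);
    }

    // hsync: hflush plus forcing each replica's buffered bytes to disk.
    void hsync() {
        hflush();
        for (ToyReplica r : replicas) {
            r.disk.setLength(0);
            r.disk.append(r.memory);
        }
    }

    String readerView(int i) { return replicas.get(i).memory.toString(); }
    String diskView(int i)   { return replicas.get(i).disk.toString(); }

    public static void main(String[] args) {
        ToyOutputStream out = new ToyOutputStream(3);
        out.write("hello");
        out.hflush();
        System.out.println("reader sees: " + out.readerView(0)); // reader sees: hello
        System.out.println("on disk: '" + out.diskView(0) + "'"); // on disk: ''
        out.hsync();
        System.out.println("on disk: '" + out.diskView(0) + "'"); // on disk: 'hello'
    }
}
```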
Had a really peculiar thing happen today: a file that a job of mine
created on HDFS seems to have disappeared, and I'm scratching my head as
to how this could have happened without any errors getting thrown.
I ran a M/R job that created a big bunch of files. Job completed
without errors, and
On 11/11/2010 12:31 PM, David Rosenstrauch wrote:
> 2) Name node also says that it created the file:
>
> [r...@hdmaster hadoop-0.20]# grep 2010.11.10-21.05.29
> hadoop-hadoop-namenode-hdmaster.log.2010-11-10 | grep
> shard2/IntentTrait.state
>
> 2010-11-10 21:42:28,442 INFO
> org.apache.hadoop.hdfs.server.namen
Given that it's an MR output, my guess is it got moved out of the temporary
directory when the job "Committed" and then was removed as another pass. I'd
grep for the containing directory name in the audit logs to see where it got
moved to and how it was eventually deleted.
Would be great if someone wrote some tools that, given a block ID, tracked
the life of the file that contained it (including renames of containing
dirs, etc). Shouldn't be too difficult.
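[Editor's note: to make the grep-the-audit-log suggestion concrete, here is a self-contained sketch, for illustration only. HDFS audit log lines carry `cmd=`, `src=`, and `dst=` fields, so a tiny filter can reconstruct a file's creation, rename out of the `_temporary` directory, and eventual delete. The sample lines and paths below are invented, not taken from the poster's logs.]

```java
// Sketch: filter audit-log entries whose src= or dst= mentions a path fragment.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class AuditTrace {
    static List<String> trace(List<String> auditLines, String fragment) {
        List<String> hits = new ArrayList<>();
        for (String line : auditLines) {
            // Crude field scan; real audit lines look roughly like
            // "... cmd=rename src=/out/_temporary/x dst=/out/x perm=..."
            if ((line.contains("src=") || line.contains("dst=")) && line.contains(fragment)) {
                hits.add(line);
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        // Invented sample entries showing a typical MR-output lifecycle.
        List<String> sample = Arrays.asList(
            "cmd=create src=/out/_temporary/_task0/part-0 dst=null",
            "cmd=rename src=/out/_temporary/_task0/part-0 dst=/out/part-0",
            "cmd=delete src=/out/part-0 dst=null",
            "cmd=listStatus src=/other dst=null");
        for (String l : trace(sample, "part-0")) System.out.println(l);
    }
}
```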
On Thu, Nov 11, 2010 at 7:31 AM, Thanh Do wrote:
> Thanks Todd,
>
> In HDFS-6313, I see three APIs (sync, hflush, hsync),
> and I assume hflush corresponds to:
>
> *"API2: flushes out to all replicas of the block.
> The data is in the buffers of the DNs but not on the DN's OS buffers.
> New readers will see the data after the call has returned.*"
Saw a couple more references to that block before the "to delete blk"
messages:
2010-11-10 21:42:33,389 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
NameSystem.addToInvalidates: blk_-4237880568969698703 is added to
invalidSet of .169:50010
2010-11-10 21:42:33,389 INFO org.apache.hadoop.hdf
On 11/11/2010 05:10 PM, David Rosenstrauch wrote:
> Saw a couple more references to that block before the "to delete blk"
> messages:
>
> 2010-11-10 21:42:33,389 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> NameSystem.addToInvalidates: blk_-4237880568969698703 is added to
> invalidSet of .169:50010
> 201
What's the last audit log entry prior to 2010-11-10 21:42:33,389?
-Todd
On Thu, Nov 11, 2010 at 2:10 PM, David Rosenstrauch wrote:
> Saw a couple more references to that block before the "to delete blk"
> messages:
>
> 2010-11-10 21:42:33,389 INFO org.apache.hadoop.hdfs.StateChange: BLOCK*
> Na
>Would be great if someone wrote some tools that, given a block ID, tracked
>the life of the file that contained it (including renames of containing
> dirs, etc). Shouldn't be too difficult.
There's a tool for this in MapRed's contrib section under
block_forensics. It was released in 0.21, I believe.