[jira] Updated: (HDFS-311) Modifications to enable multiple types of logging

2009-07-13 Thread Luca Telloli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luca Telloli updated HDFS-311:
--

Attachment: HDFS-311-complete.patch

 Modifications to enable multiple types of logging 
 --

 Key: HDFS-311
 URL: https://issues.apache.org/jira/browse/HDFS-311
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Luca Telloli
Assignee: Luca Telloli
 Attachments: HADOOP-5188.patch, HADOOP-5188.patch, HADOOP-5188.patch, 
 HADOOP-5188.patch, HADOOP-5188.patch, HADOOP-5188.patch, HADOOP-5188.pdf, 
 HDFS-311-complete.patch, HDFS-311.patch, HDFS-311.patch




-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-234) Integration with BookKeeper logging system

2009-07-13 Thread Luca Telloli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luca Telloli updated HDFS-234:
--

Attachment: HDFS-234.patch

 Integration with BookKeeper logging system
 --

 Key: HDFS-234
 URL: https://issues.apache.org/jira/browse/HDFS-234
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Luca Telloli
Assignee: Luca Telloli
 Attachments: create.png, HADOOP-5189-trunk-preview.patch, 
 HADOOP-5189-trunk-preview.patch, HADOOP-5189-trunk-preview.patch, 
 HADOOP-5189-v.19.patch, HADOOP-5189.patch, HDFS-234.patch


 BookKeeper is a system to reliably log streams of records 
 (https://issues.apache.org/jira/browse/ZOOKEEPER-276). The NameNode is a 
 natural target for such a system, since it is the metadata repository for 
 the entire HDFS file system. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-487) HDFS should expose a fileid to uniquely identify a file

2009-07-13 Thread Hong Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12730447#action_12730447
 ] 

Hong Tang commented on HDFS-487:


How about using 
[UUID|http://en.wikipedia.org/wiki/Universally_Unique_Identifier]? UUIDs have 
two advantages:
- they can be calculated independently by any process;
- they are guaranteed to be unique globally, even across two different HDFS 
instances, which would make it easier if we want to build a federated file 
system (see the sketch below).
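
To make the suggestion concrete, here is a minimal Java sketch of minting such 
an id; the FileIdSketch class and its use as an HDFS fileid are hypothetical, 
not part of any patch attached here:

{code}
import java.util.UUID;

// Hypothetical sketch: a type-4 (random) UUID as a candidate fileid.
public class FileIdSketch {
    public static void main(String[] args) {
        // Random UUIDs need no central coordination: any process (a client
        // or the NameNode) can mint one independently, and the collision
        // probability is negligible even across separate HDFS instances.
        UUID fileId = UUID.randomUUID();
        System.out.println("fileid: " + fileId);
    }
}
{code}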


 HDFS should expose a fileid to uniquely identify a file
 ---

 Key: HDFS-487
 URL: https://issues.apache.org/jira/browse/HDFS-487
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: dhruba borthakur
Assignee: dhruba borthakur

 HDFS should expose an id that uniquely identifies a file. This helps in 
 developing applications that work correctly even when files are moved from 
 one directory to another. A typical use case is to make the Pluggable Block 
 Placement Policy (HDFS-385) use the fileid instead of the filename.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HDFS-489) Updated TestHDFSCLI for changes from HADOOP-6139

2009-07-13 Thread Jakob Homan (JIRA)
Updated TestHDFSCLI for changes from HADOOP-6139


 Key: HDFS-489
 URL: https://issues.apache.org/jira/browse/HDFS-489
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jakob Homan
Assignee: Jakob Homan


HADOOP-6139 changed the output of the rm/rmr console text.  The unit test for 
this is in hdfs, so TestHDFSCLI needs to be updated for the new output.  Patch 
shortly.  

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-397) Incorporate storage directories into EditLogFileInput/Output streams

2009-07-13 Thread Raghu Angadi (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12730464#action_12730464
 ] 

Raghu Angadi commented on HDFS-397:
---


Regarding 1): What about 'STORAGE_JSPOOL_DIR', which points to a different 
directory from 'current'?
'JSPOOL_FILE' also 'happens' to be the same, but the original author's 
intention might have been to be able to change it. 
We should remove these constants if they are not used.

bq. I assumed it's correct since it passed the unit tests correctly.

hmm.. I don't think that is sufficient or wise. When I program I would like to 
know what I am doing and why it is correct at every line.

bq. Have a look for instance at line 1497 and following.

Is this for the latest patch attached there? I am asking mainly so that I 
don't have to go through the big patch for HDFS-311.

 Incorporate storage directories into EditLogFileInput/Output streams
 

 Key: HDFS-397
 URL: https://issues.apache.org/jira/browse/HDFS-397
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Luca Telloli
Assignee: Luca Telloli
 Attachments: HADOOP-6001.patch, HADOOP-6001.patch, HADOOP-6001.patch, 
 HDFS-397.patch




-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-234) Integration with BookKeeper logging system

2009-07-13 Thread Flavio Paiva Junqueira (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12730481#action_12730481
 ] 

Flavio Paiva Junqueira commented on HDFS-234:
-

When you read from a BookKeeper ledger, it seems that you are reading the whole 
ledger before processing it. I can see two potential problems with this:

# If the ledger is large, the process reading it might end up consuming too 
much memory;
# It does not overlap reading the ledger with processing. It might be best to 
try to overlap reading from the ledger with processing the edits (see the 
sketch below). I don't know what changes this suggestion implies, though.
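
A minimal Java sketch of the overlap idea, using a bounded queue between a 
reader thread and the edit-processing loop; readNextEntry() and applyEdit() 
are hypothetical placeholders, not BookKeeper API:

{code}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch: overlap ledger reads with edit processing through a
// bounded queue, instead of buffering the whole ledger first.
public class OverlappedLedgerReplay {

    private static final byte[] EOF = new byte[0]; // end-of-ledger sentinel

    public static void replay() throws InterruptedException {
        // The bound caps memory use regardless of ledger size.
        final BlockingQueue<byte[]> edits = new ArrayBlockingQueue<byte[]>(1024);

        Thread reader = new Thread(new Runnable() {
            public void run() {
                try {
                    byte[] entry;
                    while ((entry = readNextEntry()) != null) {
                        edits.put(entry); // blocks while the consumer catches up
                    }
                    edits.put(EOF);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        reader.start();

        // Processing overlaps with reading rather than waiting for the
        // whole ledger to be fetched first.
        byte[] entry;
        while ((entry = edits.take()) != EOF) {
            applyEdit(entry);
        }
        reader.join();
    }

    private static byte[] readNextEntry() { return null; } // placeholder
    private static void applyEdit(byte[] entry) { }        // placeholder
}
{code}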



 Integration with BookKeeper logging system
 --

 Key: HDFS-234
 URL: https://issues.apache.org/jira/browse/HDFS-234
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Luca Telloli
Assignee: Luca Telloli
 Attachments: create.png, HADOOP-5189-trunk-preview.patch, 
 HADOOP-5189-trunk-preview.patch, HADOOP-5189-trunk-preview.patch, 
 HADOOP-5189-v.19.patch, HADOOP-5189.patch, HDFS-234.patch, 
 zookeeper-dev-bookkeeper.jar, zookeeper-dev.jar


 BookKeeper is a system to reliably log streams of records 
 (https://issues.apache.org/jira/browse/ZOOKEEPER-276). The NameNode is a 
 natural target for such a system, since it is the metadata repository for 
 the entire HDFS file system. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-462) Unit tests not working under Windows

2009-07-13 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12730518#action_12730518
 ] 

Jakob Homan commented on HDFS-462:
--

Again, no new tests, since this fixes a defect that is caught by the current 
unit tests (under Windows at least).  Patch is ready to go.

 Unit tests not working under Windows
 

 Key: HDFS-462
 URL: https://issues.apache.org/jira/browse/HDFS-462
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node, test
Affects Versions: 0.21.0
 Environment: Windows
Reporter: Luca Telloli
Assignee: Jakob Homan
 Fix For: 0.21.0

 Attachments: HDFS-462.patch, TEST-org.apache.hadoop.hdfs.TestHdfs.txt


 Unit tests are failing on Windows due to a problem with rename. 
 The failing code is around line 520 in FSImage.java: 
 {noformat}
   assert curDir.exists() : "Current directory must exist.";
   assert !prevDir.exists() : "previous directory must not exist.";
   assert !tmpDir.exists() : "previous.tmp directory must not exist.";
   // rename current to tmp
   rename(curDir, tmpDir);
   // save new image
   if (!curDir.mkdir())
     throw new IOException("Cannot create directory " + curDir);
 {noformat}
 and the failure seems related to some open file or directory.
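
 For context, a common workaround for flaky File.renameTo() on Windows (an 
 open handle briefly blocks the rename) is to retry a few times before giving 
 up. The sketch below is illustrative only, under that assumption, and is not 
 the fix attached to this issue:

 {code}
 import java.io.File;
 import java.io.IOException;

 // Hypothetical sketch: retry a rename that can transiently fail on Windows.
 public class RetryingRename {
     public static void rename(File from, File to) throws IOException {
         for (int attempt = 0; attempt < 5; attempt++) {
             if (from.renameTo(to)) {
                 return; // rename succeeded
             }
             try {
                 Thread.sleep(200); // give the OS time to release file handles
             } catch (InterruptedException e) {
                 Thread.currentThread().interrupt();
                 throw new IOException("Interrupted while renaming " + from);
             }
         }
         throw new IOException("Cannot rename " + from + " to " + to);
     }
 }
 {code}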

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-458) Create target for 10 minute patch test build for hdfs

2009-07-13 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-458:
-

Status: Open  (was: Patch Available)

Canceling patch to address Nicholas' comments.

 Create target for 10 minute patch test build for hdfs
 -

 Key: HDFS-458
 URL: https://issues.apache.org/jira/browse/HDFS-458
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: test
Reporter: Jakob Homan
Assignee: Jakob Homan
 Attachments: build.xml, HDFS-458.patch, TenMinuteTestData.xlsx


 It would be good to identify a subset of hdfs tests that provide strong test 
 code coverage within 10 minutes, as is the goal of MAPREDUCE-670 and 
 HADOOP-5628.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-489) Updated TestHDFSCLI for changes from HADOOP-6139

2009-07-13 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-489:
-

Status: Patch Available  (was: Open)

submitting patch

 Updated TestHDFSCLI for changes from HADOOP-6139
 

 Key: HDFS-489
 URL: https://issues.apache.org/jira/browse/HDFS-489
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jakob Homan
Assignee: Jakob Homan
 Attachments: HDFS-489.patch


 HADOOP-6139 changed the output of the rm/rmr console text.  The unit test for 
 this is in hdfs, so TestHDFSCLI needs to be updated for the new output.  
 Patch shortly.  

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-489) Updated TestHDFSCLI for changes from HADOOP-6139

2009-07-13 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-489:
-

Attachment: HDFS-489.patch

Patch fixes TestHDFSCLI with new expected behavior.

Even though the original patch was backported to 20, this particular test case 
was added after 20, so there is no need for a 20 port of this patch.

It looks like the commons jar has been updated in the git repo, so TestHDFSCLI 
is currently breaking with this behavior and should be fixed.

Hudson will complain about no tests but this does modify a test.

 Updated TestHDFSCLI for changes from HADOOP-6139
 

 Key: HDFS-489
 URL: https://issues.apache.org/jira/browse/HDFS-489
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jakob Homan
Assignee: Jakob Homan
 Attachments: HDFS-489.patch


 HADOOP-6139 changed the output of the rm/rmr console text.  The unit test for 
 this is in hdfs, so TestHDFSCLI needs to be updated for the new output.  
 Patch shortly.  

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-475) Create separate targets for fault injection related tests and jar file creation

2009-07-13 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-475:


Hadoop Flags:   (was: [Reviewed])

In HDFS-483, I implemented some tests which use fi.  I had to change the test 
target as follows in order to run the tests.  Do you want to add these changes 
to your patch?

{code}
@@ -318,7 +318,7 @@
   <!-- Weaving aspects in place
        Later on one can run 'ant jar' to create Hadoop jar file with
        instrumented classes
   -->
-  <target name="injectfaults" depends="compile" description="Weaves aspects into precompiled HDFS classes">
+  <target name="injectfaults" depends="compile, compile-hdfs-test" description="Weaves aspects into precompiled HDFS classes">
     <!-- AspectJ task definition -->
     <taskdef resource="org/aspectj/tools/ant/taskdefs/aspectjTaskdefs.properties">
       <classpath>
@@ -335,7 +335,7 @@
       target="${javac.version}"
       source="${javac.version}"
       deprecation="${javac.deprecation}"
-      <classpath refid="classpath" />
+      <classpath refid="test.classpath" />
     </iajc>
     <echo message="Weaving of aspects is finished"/>
   </target>
@@ -500,10 +500,14 @@
     <batchtest todir="${test.build.dir}" unless="testcase">
       <fileset dir="${test.src.dir}/hdfs"
                includes="**/${test.include}.java"
-               excludes="**/${test.exclude}.java" />
+               excludes="**/${test.exclude}.java" />
+      <fileset dir="${test.src.dir}/aop"
+               includes="**/${test.include}.java"
+               excludes="**/${test.exclude}.java" />
     </batchtest>
     <batchtest todir="${test.build.dir}" if="testcase">
       <fileset dir="${test.src.dir}/hdfs" includes="**/${testcase}.java"/>
+      <fileset dir="${test.src.dir}/aop" includes="**/${testcase}.java"/>
     </batchtest>
   </junit>
   <antcall target="checkfailure"/>
{code}


 Create separate targets for fault injection related tests and jar file 
 creation
 ---

 Key: HDFS-475
 URL: https://issues.apache.org/jira/browse/HDFS-475
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: build
Reporter: Konstantin Boudnik
Assignee: Konstantin Boudnik
 Fix For: 0.21.0

 Attachments: HDFS-475.patch, HDFS-475.patch


 The current implementation of the FI framework allows faults to be mixed 
 into production classes, e.g. into the build/ folder.
 Although the default probability level is set to zero, it doesn't look clean 
 and might potentially overcomplicate the build and release process.
 FI related targets had better be logically and physically separated, e.g. by 
 putting instrumented artifacts into a separate folder, say, build-fi/

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-462) Unit tests not working under Windows

2009-07-13 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-462:
-

  Resolution: Fixed
Hadoop Flags: [Reviewed]
  Status: Resolved  (was: Patch Available)

I just committed this. Thank you Jakob.

 Unit tests not working under Windows
 

 Key: HDFS-462
 URL: https://issues.apache.org/jira/browse/HDFS-462
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node, test
Affects Versions: 0.21.0
 Environment: Windows
Reporter: Luca Telloli
Assignee: Jakob Homan
 Fix For: 0.21.0

 Attachments: HDFS-462.patch, TEST-org.apache.hadoop.hdfs.TestHdfs.txt


 Unit tests are failing on Windows due to a problem with rename. 
 The failing code is around line 520 in FSImage.java: 
 {noformat}
   assert curDir.exists() : "Current directory must exist.";
   assert !prevDir.exists() : "previous directory must not exist.";
   assert !tmpDir.exists() : "previous.tmp directory must not exist.";
   // rename current to tmp
   rename(curDir, tmpDir);
   // save new image
   if (!curDir.mkdir())
     throw new IOException("Cannot create directory " + curDir);
 {noformat}
 and the failure seems related to some open file or directory.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-446) Offline Image Viewer Ls visitor incorrectly says 'output file' instead of 'input file'

2009-07-13 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-446:
-

  Component/s: tools
   test
Affects Version/s: 0.21.0
Fix Version/s: 0.21.0
 Hadoop Flags: [Reviewed]

 Offline Image Viewer Ls visitor incorrectly says 'output file' instead of 
 'input file'
 --

 Key: HDFS-446
 URL: https://issues.apache.org/jira/browse/HDFS-446
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test, tools
Affects Versions: 0.21.0
Reporter: Jakob Homan
Assignee: Jakob Homan
Priority: Minor
 Fix For: 0.21.0

 Attachments: HDFS-446.patch, HDFS-446.patch


 In the {{finishAbnormally}} method of the Ls visitor it should be "input 
 ended unexpectedly", not "output".  Trivial documentation change.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Assigned: (HDFS-476) Create a fi test target.

2009-07-13 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik reassigned HDFS-476:
---

Assignee: Konstantin Boudnik

 Create a fi test target.
 

 Key: HDFS-476
 URL: https://issues.apache.org/jira/browse/HDFS-476
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: build
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Konstantin Boudnik

 Unit tests may be created with fi.  Currently we have to run ant injectfaults 
 and then ant test -DTestFiXxx to run a particular fi unit test 
 TestFiXxx.  It would be easier if there were an ant target, say test-fi, that 
 compiles and runs all fi tests.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-475) Create separate targets for fault injection related tests and jar file creation

2009-07-13 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12730624#action_12730624
 ] 

Konstantin Boudnik commented on HDFS-475:
-

HDFS-476 will be fixed by this patch as well, although in a slightly different 
manner. Now, in order to run tests with injected faults, one doesn't need to 
run the 'injectfaults' target separately. It can be taken care of in a single 
command:
{code}
  ant run-hdfs-test-fi -DTestFiXxx
{code}

I believe this is the essence of HDFS-476.

Also, this new version of the patch provides new targets to create dev and 
test jar files with the FI instrumentation included.

Please be advised that Hudson's test-patch output can't be provided for this 
patch, because HDFS's Hudson doesn't run the new targets right now.

 Create separate targets for fault injection related tests and jar file 
 creation
 ---

 Key: HDFS-475
 URL: https://issues.apache.org/jira/browse/HDFS-475
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: build
Reporter: Konstantin Boudnik
Assignee: Konstantin Boudnik
 Fix For: 0.21.0

 Attachments: HDFS-475.patch, HDFS-475.patch, HDFS-475.patch


 The current implementation of the FI framework allows faults to be mixed 
 into production classes, e.g. into the build/ folder.
 Although the default probability level is set to zero, it doesn't look clean 
 and might potentially overcomplicate the build and release process.
 FI related targets had better be logically and physically separated, e.g. by 
 putting instrumented artifacts into a separate folder, say, build-fi/

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-200) In HDFS, sync() does not yet guarantee data is available to new readers

2009-07-13 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12730666#action_12730666
 ] 

stack commented on HDFS-200:


(Thanks for the review, Konstantin)

In my last few test runs, the NameNode has shut itself down with the below:

{code}
...
2009-07-14 00:17:46,586 DEBUG org.apache.hadoop.hdfs.StateChange: BLOCK* 
NameSystem.removeStoredBlock: blk_-9156287469566772234_2527 from 
XX.XX.XX.142:51010
2009-07-14 00:17:46,586 DEBUG org.apache.hadoop.hdfs.StateChange: BLOCK* 
NameSystem.removeStoredBlock: blk_-9181830129071396520_2355 from 
XX.XX.XX.142:51010
2009-07-14 00:17:46,586 DEBUG org.apache.hadoop.hdfs.StateChange: BLOCK* 
NameSystem.removeStoredBlock: blk_-9205119721509648294_2410 from 
XX.XX.XX.142:51010
2009-07-14 00:17:46,586 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
NameSystem.processReport: block blk_-7011715647341740217_1 on 
XX.XX.XX.142:51010 size 47027149 does not belong to any file.
2009-07-14 00:17:46,586 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
NameSystem.addToInvalidates: blk_-7011715647341740217 is added to invalidSet of 
XX.XX.XX.142:51010
2009-07-14 00:17:46,586 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
NameSystem.processReport: block blk_-280166356715716926_1 on XX.XX.XX.142:51010 
size 6487 does not belong to any file.
2009-07-14 00:17:46,586 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
NameSystem.addToInvalidates: blk_-280166356715716926 is added to invalidSet of 
XX.XX.XX.142:51010
2009-07-14 00:17:46,586 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
NameSystem.processReport: block blk_1532053033915429278_1 on XX.XX.XX.142:51010 
size 3869 does not belong to any file.
2009-07-14 00:17:46,586 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
NameSystem.addToInvalidates: blk_1532053033915429278 is added to invalidSet of 
XX.XX.XX.142:51010
2009-07-14 00:17:47,303 WARN 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicationMonitor thread 
received Runtime exception. java.lang.IllegalStateException: generationStamp 
(=1) == GenerationStamp.WILDCARD_STAMP
2009-07-14 00:17:47,304 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at aa0-000-12.u.powerset.com/XX.XX.XX.139
************************************************************/
{code}

My guess is this is a bug only fellas with dfs.support.append=true set run into?

Here is code from ReplicationMonitor:

{code}
} catch (Throwable t) {
  // any Throwable here takes down the entire NameNode process
  LOG.warn("ReplicationMonitor thread received Runtime exception. " + t);
  Runtime.getRuntime().exit(-1);
}
{code}

That's a rough call, I'd say?

There are no more detailed exceptions in NN log.
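
As a side note, one likely reason the log is so thin: the catch block above 
appends the Throwable to the message string, which only logs its toString(). 
Assuming commons-logging (which Hadoop uses), passing it as the second 
argument would preserve the full stack trace:

{code}
// Hedged sketch: log the Throwable as the cause so the stack trace survives.
LOG.warn("ReplicationMonitor thread received Runtime exception.", t);
{code}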

Dig in more and stick what I find in another issue?



 In HDFS, sync() does not yet guarantee data is available to new readers
 

 Key: HDFS-200
 URL: https://issues.apache.org/jira/browse/HDFS-200
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Tsz Wo (Nicholas), SZE
Assignee: dhruba borthakur
Priority: Blocker
 Attachments: 4379_20081010TC3.java, fsyncConcurrentReaders.txt, 
 fsyncConcurrentReaders11_20.txt, fsyncConcurrentReaders3.patch, 
 fsyncConcurrentReaders4.patch, fsyncConcurrentReaders5.txt, 
 fsyncConcurrentReaders6.patch, fsyncConcurrentReaders9.patch, 
 hadoop-stack-namenode-aa0-000-12.u.powerset.com.log.gz, 
 hypertable-namenode.log.gz, namenode.log, namenode.log, Reader.java, 
 Reader.java, reopen_test.sh, ReopenProblem.java, Writer.java, Writer.java


 In the append design doc 
 (https://issues.apache.org/jira/secure/attachment/12370562/Appends.doc), it 
 says:
 * A reader is guaranteed to be able to read data that was 'flushed' before 
 the reader opened the file
 However, this feature is not yet implemented.  Note that the operation 
 'flushed' is now called sync (see the sketch below).
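
 For illustration, a minimal sketch of the guarantee under discussion, 
 assuming the current FSDataOutputStream.sync() API; the path and payload are 
 made up:

 {code}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FSDataInputStream;
 import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;

 // Hypothetical sketch of the desired sync() visibility guarantee.
 public class SyncVisibilitySketch {
     public static void main(String[] args) throws Exception {
         FileSystem fs = FileSystem.get(new Configuration());
         Path p = new Path("/tmp/sync-test");

         FSDataOutputStream out = fs.create(p);
         out.write("hello".getBytes());
         out.sync(); // the operation called 'flushed' in the design doc

         // A reader that opens the file after sync() should see every byte
         // written before the sync, even though the writer is still open.
         FSDataInputStream in = fs.open(p);
         byte[] buf = new byte[5];
         in.readFully(buf);
         System.out.println(new String(buf)); // expected: hello
         in.close();
     }
 }
 {code}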

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.