[jira] [Commented] (HADOOP-1) initial import of code from Nutch

2014-04-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13975254#comment-13975254
 ] 

Hudson commented on HADOOP-1:
-

SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #270 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/270/])
HBASE-10948 Revert due to incompatibility with hadoop-1 (tedyu: rev 1588786)
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSUtils.java


 initial import of code from Nutch
 -

 Key: HADOOP-1
 URL: https://issues.apache.org/jira/browse/HADOOP-1
 Project: Hadoop Common
  Issue Type: Task
Reporter: Doug Cutting
Assignee: Doug Cutting
 Fix For: 0.1.0


 The initial code for Hadoop will be copied from Nutch.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9361) Strictly define the expected behavior of filesystem APIs and write tests to verify compliance

2014-04-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13975276#comment-13975276
 ] 

Steve Loughran commented on HADOOP-9361:


This test suite shows up HDFS-6262: HDFS does not throw a 
{{FileNotFoundException}} if the source of a rename doesn't exist.

 Strictly define the expected behavior of filesystem APIs and write tests to 
 verify compliance
 -

 Key: HADOOP-9361
 URL: https://issues.apache.org/jira/browse/HADOOP-9361
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs, test
Affects Versions: 3.0.0, 2.2.0, 2.4.0
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-9361-001.patch, HADOOP-9361-002.patch, 
 HADOOP-9361-003.patch, HADOOP-9361-004.patch, HADOOP-9361-005.patch, 
 HADOOP-9361-006.patch, HADOOP-9361-007.patch, HADOOP-9361-008.patch, 
 HADOOP-9361-009.patch, HADOOP-9361-011.patch


 {{FileSystem}} and {{FileContract}} aren't tested rigorously enough -while 
 HDFS gets tested downstream, other filesystems, such as blobstore bindings, 
 don't.
 The only tests that are common are those of {{FileSystemContractTestBase}}, 
 which HADOOP-9258 shows is incomplete.
 I propose 
 # writing more tests which clarify expected behavior
 # testing operations in the interface being in their own JUnit4 test classes, 
 instead of one big test suite. 
 # Having each FS declare via a properties file what behaviors they offer, 
 such as atomic-rename, atomic-delete, umask, immediate-consistency -test 
 methods can downgrade to skipped test cases if a feature is missing.
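 Point 3 could be sketched as follows: a per-filesystem properties file is 
 loaded, and feature-dependent tests query it so a missing feature becomes a 
 skip rather than a failure. The key names and the class are hypothetical, not 
 the names any patch here uses:

```java
import java.io.StringReader;
import java.util.Properties;

public class ContractOptions {
    private final Properties props = new Properties();

    // Load a per-FS contract declaration, e.g. the contents of a
    // hypothetical "contract/hdfs.properties" resource.
    public ContractOptions(String text) {
        try {
            props.load(new StringReader(text));
        } catch (java.io.IOException e) {
            throw new RuntimeException(e);
        }
    }

    // Features default to "unsupported", so a new test run against an
    // old declaration file downgrades to a skip instead of failing.
    public boolean supports(String feature) {
        return Boolean.parseBoolean(props.getProperty(feature, "false"));
    }

    public static void main(String[] args) {
        ContractOptions opts = new ContractOptions(
            "supports-atomic-rename=true\n"
            + "supports-append=false\n");
        // A JUnit4 test would call Assume.assumeTrue(...) on this so a
        // missing feature yields a skipped test, not a failed one.
        System.out.println(opts.supports("supports-atomic-rename"));
    }
}
```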



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10375) Local FS doesn't raise an error on mkdir() over a file

2014-04-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13975277#comment-13975277
 ] 

Steve Loughran commented on HADOOP-10375:
-

This may depend on Java 7 providing better APIs: a separate check-and-fail 
wouldn't be atomic.
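For reference, the Java 7 NIO call does make check-and-create atomic: {{Files.createDirectory}} fails with {{FileAlreadyExistsException}} when anything, file or directory, already occupies the path. A hedged sketch (the wrapper below is illustrative, not part of any attached patch):

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;

public class AtomicMkdir {
    // Create a directory, returning false if the path is already taken.
    // Unlike File.mkdir(), the existence check and the creation happen
    // as a single filesystem operation: no check-then-act race.
    public static boolean mkdirAtomic(Path dir) throws IOException {
        try {
            Files.createDirectory(dir);
            return true;
        } catch (FileAlreadyExistsException e) {
            return false;  // a file (or dir) already sits at this path
        }
    }

    // Demo: mkdir over an existing file fails; a fresh path succeeds.
    public static boolean[] demo() {
        try {
            Path f = Files.createTempFile("blocker", ".txt");
            Path d = f.resolveSibling("dir-" + System.nanoTime());
            return new boolean[] { mkdirAtomic(f), mkdirAtomic(d) };
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        boolean[] r = demo();
        System.out.println("over file: " + r[0] + ", fresh path: " + r[1]);
    }
}
```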

 Local FS doesn't raise an error on mkdir() over a file
 --

 Key: HADOOP-10375
 URL: https://issues.apache.org/jira/browse/HADOOP-10375
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.3.0
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor

 If you mkdir() on a path where there is already a file, the operation does
 not fail; instead it returns 0.
 This is at odds with the behaviour of HDFS. 
 HADOOP-6229 added the check for the parent dir not being a file, but something 
 similar is needed for the destination dir itself.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10375) Local FS doesn't raise an error on mkdir() over a file

2014-04-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13975278#comment-13975278
 ] 

Steve Loughran commented on HADOOP-10375:
-

...actually, you could run the check for the destination being a file only 
*after* the mkdir() operation returns false.
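That ordering keeps the extra stat off the happy path: only when mkdirs() reports failure do we ask why. A sketch of the idea (names are illustrative, not from the eventual fix):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;

public class CheckedMkdir {
    // Only on failure do we pay for the extra checks; a successful
    // mkdirs() returns immediately, as before.
    public static void mkdirsChecked(File dir) throws IOException {
        if (dir.mkdirs()) {
            return;                       // fast path: directory created
        }
        if (dir.isDirectory()) {
            return;                       // already existed as a directory
        }
        if (dir.isFile()) {
            throw new FileAlreadyExistsException(dir.toString());
        }
        throw new IOException("mkdirs failed for " + dir);
    }

    // Demo: calling it over an existing file should raise the exception.
    public static boolean failsOverFile() {
        try {
            File f = File.createTempFile("blocker", ".txt");
            try {
                mkdirsChecked(f);
                return false;             // no error raised
            } catch (FileAlreadyExistsException expected) {
                return true;
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("raises over file: " + failsOverFile());
    }
}
```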

 Local FS doesn't raise an error on mkdir() over a file
 --

 Key: HADOOP-10375
 URL: https://issues.apache.org/jira/browse/HADOOP-10375
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.3.0
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor

 If you mkdir() on a path where there is already a file, the operation does
 not fail; instead it returns 0.
 This is at odds with the behaviour of HDFS. 
 HADOOP-6229 added the check for the parent dir not being a file, but something 
 similar is needed for the destination dir itself.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9361) Strictly define the expected behavior of filesystem APIs and write tests to verify compliance

2014-04-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13975281#comment-13975281
 ] 

Steve Loughran commented on HADOOP-9361:


Linking as dependent on HDFS-4258 (handling of rename during operations on 
open files, including append).

Without this, a test of HDFS append + rename fails:
{code}
---
 T E S T S
---
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSAppendContract
Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 3.481 sec  
FAILURE! - in org.apache.hadoop.fs.contract.hdfs.TestHDFSAppendContract
testRenameFileBeingAppended(org.apache.hadoop.fs.contract.hdfs.TestHDFSAppendContract)
  Time elapsed: 0.044 sec   FAILURE!
java.lang.AssertionError: renamed destination file does not exist: not found 
hdfs://localhost:54005/test/test/renamed in hdfs://localhost:54005/test/test
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.hadoop.fs.contract.ContractTestUtils.assertPathExists(ContractTestUtils.java:587)
at 
org.apache.hadoop.fs.contract.AbstractFSContractTestBase.assertPathExists(AbstractFSContractTestBase.java:254)
at 
org.apache.hadoop.fs.contract.AbstractAppendContractTest.testRenameFileBeingAppended(AbstractAppendContractTest.java:127)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)

{code}

 Strictly define the expected behavior of filesystem APIs and write tests to 
 verify compliance
 -

 Key: HADOOP-9361
 URL: https://issues.apache.org/jira/browse/HADOOP-9361
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs, test
Affects Versions: 3.0.0, 2.2.0, 2.4.0
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-9361-001.patch, HADOOP-9361-002.patch, 
 HADOOP-9361-003.patch, HADOOP-9361-004.patch, HADOOP-9361-005.patch, 
 HADOOP-9361-006.patch, HADOOP-9361-007.patch, HADOOP-9361-008.patch, 
 HADOOP-9361-009.patch, HADOOP-9361-011.patch


 {{FileSystem}} and {{FileContract}} aren't tested rigorously enough -while 
 HDFS gets tested downstream, other filesystems, such as blobstore bindings, 
 don't.
 The only tests that are common are those of {{FileSystemContractTestBase}}, 
 which HADOOP-9258 shows is incomplete.
 I propose 
 # writing more tests which clarify expected behavior
 # testing operations in the interface being in their own JUnit4 test classes, 
 instead of one big test suite. 
 # Having each FS declare via a properties file what behaviors they offer, 
 such as atomic-rename, atomic-delete, umask, immediate-consistency -test 
 methods can downgrade to skipped test cases if a feature is missing.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

[jira] [Updated] (HADOOP-9361) Strictly define the expected behavior of filesystem APIs and write tests to verify compliance

2014-04-20 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9361:
---

Status: Open  (was: Patch Available)

 Strictly define the expected behavior of filesystem APIs and write tests to 
 verify compliance
 -

 Key: HADOOP-9361
 URL: https://issues.apache.org/jira/browse/HADOOP-9361
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs, test
Affects Versions: 2.4.0, 2.2.0, 3.0.0
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-9361-001.patch, HADOOP-9361-002.patch, 
 HADOOP-9361-003.patch, HADOOP-9361-004.patch, HADOOP-9361-005.patch, 
 HADOOP-9361-006.patch, HADOOP-9361-007.patch, HADOOP-9361-008.patch, 
 HADOOP-9361-009.patch, HADOOP-9361-011.patch


 {{FileSystem}} and {{FileContract}} aren't tested rigorously enough -while 
 HDFS gets tested downstream, other filesystems, such as blobstore bindings, 
 don't.
 The only tests that are common are those of {{FileSystemContractTestBase}}, 
 which HADOOP-9258 shows is incomplete.
 I propose 
 # writing more tests which clarify expected behavior
 # testing operations in the interface being in their own JUnit4 test classes, 
 instead of one big test suite. 
 # Having each FS declare via a properties file what behaviors they offer, 
 such as atomic-rename, atomic-delete, umask, immediate-consistency -test 
 methods can downgrade to skipped test cases if a feature is missing.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9361) Strictly define the expected behavior of filesystem APIs and write tests to verify compliance

2014-04-20 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9361:
---

Attachment: HADOOP-9361-012.patch

patch rebased to trunk; minor changes to the append test to see exactly what 
HDFS is up to (it's up against HDFS-4258: rename of an open file doesn't pick 
up the new name)

Pending a decision on what to do with HDFS-6262 - i.e. fix it or not - this 
patch should be ready to go in. It's not complete coverage of the filesystem 
semantics, but it can be extended over time, with the new test framework. 

 Strictly define the expected behavior of filesystem APIs and write tests to 
 verify compliance
 -

 Key: HADOOP-9361
 URL: https://issues.apache.org/jira/browse/HADOOP-9361
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs, test
Affects Versions: 3.0.0, 2.2.0, 2.4.0
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-9361-001.patch, HADOOP-9361-002.patch, 
 HADOOP-9361-003.patch, HADOOP-9361-004.patch, HADOOP-9361-005.patch, 
 HADOOP-9361-006.patch, HADOOP-9361-007.patch, HADOOP-9361-008.patch, 
 HADOOP-9361-009.patch, HADOOP-9361-011.patch, HADOOP-9361-012.patch


 {{FileSystem}} and {{FileContract}} aren't tested rigorously enough -while 
 HDFS gets tested downstream, other filesystems, such as blobstore bindings, 
 don't.
 The only tests that are common are those of {{FileSystemContractTestBase}}, 
 which HADOOP-9258 shows is incomplete.
 I propose 
 # writing more tests which clarify expected behavior
 # testing operations in the interface being in their own JUnit4 test classes, 
 instead of one big test suite. 
 # Having each FS declare via a properties file what behaviors they offer, 
 such as atomic-rename, atomic-delete, umask, immediate-consistency -test 
 methods can downgrade to skipped test cases if a feature is missing.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9361) Strictly define the expected behavior of filesystem APIs and write tests to verify compliance

2014-04-20 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9361:
---

Target Version/s: 3.0.0, 2.5.0  (was: 3.0.0)
  Status: Patch Available  (was: Open)

 Strictly define the expected behavior of filesystem APIs and write tests to 
 verify compliance
 -

 Key: HADOOP-9361
 URL: https://issues.apache.org/jira/browse/HADOOP-9361
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs, test
Affects Versions: 2.4.0, 2.2.0, 3.0.0
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-9361-001.patch, HADOOP-9361-002.patch, 
 HADOOP-9361-003.patch, HADOOP-9361-004.patch, HADOOP-9361-005.patch, 
 HADOOP-9361-006.patch, HADOOP-9361-007.patch, HADOOP-9361-008.patch, 
 HADOOP-9361-009.patch, HADOOP-9361-011.patch, HADOOP-9361-012.patch


 {{FileSystem}} and {{FileContract}} aren't tested rigorously enough -while 
 HDFS gets tested downstream, other filesystems, such as blobstore bindings, 
 don't.
 The only tests that are common are those of {{FileSystemContractTestBase}}, 
 which HADOOP-9258 shows is incomplete.
 I propose 
 # writing more tests which clarify expected behavior
 # testing operations in the interface being in their own JUnit4 test classes, 
 instead of one big test suite. 
 # Having each FS declare via a properties file what behaviors they offer, 
 such as atomic-rename, atomic-delete, umask, immediate-consistency -test 
 methods can downgrade to skipped test cases if a feature is missing.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-1) initial import of code from Nutch

2014-04-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13975296#comment-13975296
 ] 

Hudson commented on HADOOP-1:
-

SUCCESS: Integrated in HBase-0.98 #286 (See 
[https://builds.apache.org/job/HBase-0.98/286/])
HBASE-10948 Revert due to incompatibility with hadoop-1 (tedyu: rev 1588786)
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSUtils.java


 initial import of code from Nutch
 -

 Key: HADOOP-1
 URL: https://issues.apache.org/jira/browse/HADOOP-1
 Project: Hadoop Common
  Issue Type: Task
Reporter: Doug Cutting
Assignee: Doug Cutting
 Fix For: 0.1.0


 The initial code for Hadoop will be copied from Nutch.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9361) Strictly define the expected behavior of filesystem APIs and write tests to verify compliance

2014-04-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13975326#comment-13975326
 ] 

Hadoop QA commented on HADOOP-9361:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12641004/HADOOP-9361-012.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 72 new 
or modified test files.

  {color:red}-1 javac{color}.  The applied patch generated 1289 javac 
compiler warnings (more than the trunk's current 1288 warnings).

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-tools/hadoop-openstack:

  
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup
  org.apache.hadoop.fs.contract.hdfs.TestHDFSRenameContract
  org.apache.hadoop.fs.contract.hdfs.TestHDFSAppendContract
  org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3818//testReport/
Javac warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3818//artifact/trunk/patchprocess/diffJavacWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3818//console

This message is automatically generated.

 Strictly define the expected behavior of filesystem APIs and write tests to 
 verify compliance
 -

 Key: HADOOP-9361
 URL: https://issues.apache.org/jira/browse/HADOOP-9361
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs, test
Affects Versions: 3.0.0, 2.2.0, 2.4.0
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-9361-001.patch, HADOOP-9361-002.patch, 
 HADOOP-9361-003.patch, HADOOP-9361-004.patch, HADOOP-9361-005.patch, 
 HADOOP-9361-006.patch, HADOOP-9361-007.patch, HADOOP-9361-008.patch, 
 HADOOP-9361-009.patch, HADOOP-9361-011.patch, HADOOP-9361-012.patch


 {{FileSystem}} and {{FileContract}} aren't tested rigorously enough -while 
 HDFS gets tested downstream, other filesystems, such as blobstore bindings, 
 don't.
 The only tests that are common are those of {{FileSystemContractTestBase}}, 
 which HADOOP-9258 shows is incomplete.
 I propose 
 # writing more tests which clarify expected behavior
 # testing operations in the interface being in their own JUnit4 test classes, 
 instead of one big test suite. 
 # Having each FS declare via a properties file what behaviors they offer, 
 such as atomic-rename, atomic-delete, umask, immediate-consistency -test 
 methods can downgrade to skipped test cases if a feature is missing.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10522) JniBasedUnixGroupMapping mishandles errors

2014-04-20 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13975332#comment-13975332
 ] 

Kihwal Lee commented on HADOOP-10522:
-

[~cnauroth]: You are right, errno should be thread-safe on most modern 
platforms. I still think it is safer to use the return value than errno 
whenever possible: buggy code could make decisions based on an errno left 
over from a previous syscall.

 JniBasedUnixGroupMapping mishandles errors
 --

 Key: HADOOP-10522
 URL: https://issues.apache.org/jira/browse/HADOOP-10522
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Critical
 Attachments: hadoop-10522.patch


 The mishandling of errors in the jni user-to-groups mapping modules can cause 
 segmentation faults in subsequent calls.  Here are the bugs:
 1) If {{hadoop_user_info_fetch()}} returns an error code that is not ENOENT, 
 the error may not be handled at all.  This bug was found by [~cnauroth].
 2)  In {{hadoop_user_info_fetch()}} and {{hadoop_group_info_fetch()}}, the 
 global {{errno}} is directly used. This is not thread-safe and could be the 
 cause of some failures that disappeared after enabling the big lookup lock.
 3) In the above methods, there is no limit on retries.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10524) Race condition around MutableMetric and its subclasses

2014-04-20 Thread Hiroshi Ikeda (JIRA)
Hiroshi Ikeda created HADOOP-10524:
--

 Summary: Race condition around MutableMetric and its subclasses
 Key: HADOOP-10524
 URL: https://issues.apache.org/jira/browse/HADOOP-10524
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Hiroshi Ikeda
Priority: Minor


For example, MutableGaugeInt has two methods:

{code}
public synchronized void incr() {
  ++value;
  setChanged();
}

public void set(int value) {
  this.value = value;
  setChanged();
}
{code}

and the missing synchronization on the {{set}} method means a call to {{set}} 
can be silently lost while another thread is inside {{incr}}, for example:

(1) Thread1 takes the current value in the {{incr}} method.
(2) Thread2 sets the new value in the {{set}} method.
(3) Thread1 adds +1 to the taken value and sets the value in the {{incr}} 
method.

Also, MutableMetric has a volatile instance variable {{changed}}, but without 
synchronization a notification arriving just before the {{changed}} flag is 
cleared can be dropped; the volatile keyword only helps when the flag is 
merely being read. Indeed, the implementation of {{snapshot}} in 
MutableCounterInt has this problem because of its lack of synchronization.

Anyway, synchronization around MutableMetric and its subclasses is doubtful and 
should be reviewed.
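The narrowest fix for the race described above is to put {{set}} under the same monitor as {{incr}}, and to read-and-clear the changed flag atomically. A standalone sketch of the idea (this is not the actual metrics2 class; names are illustrative):

```java
public class SyncGauge {
    private int value;
    private boolean changed;

    // Both mutators share one monitor, so incr() can no longer read a
    // stale value across a concurrent set().
    public synchronized void incr() {
        ++value;
        changed = true;
    }

    public synchronized void set(int newValue) {
        value = newValue;
        changed = true;
    }

    public synchronized int get() {
        return value;
    }

    // Read and clear the flag in one critical section, so a change
    // committed just before the clear is never dropped.
    public synchronized boolean snapshotIfChanged() {
        boolean wasChanged = changed;
        changed = false;
        return wasChanged;
    }

    public static void main(String[] args) {
        SyncGauge g = new SyncGauge();
        g.set(5);
        g.incr();
        System.out.println(g.get() + " changed=" + g.snapshotIfChanged());
    }
}
```

With every access under the same lock, the interleaving described in steps (1)-(3) cannot lose the {{set}}: Thread1's read-increment-write of {{value}} is a single critical section.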




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10520) Extended attributes definition and FileSystem APIs for extended attributes.

2014-04-20 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13975370#comment-13975370
 ] 

Uma Maheswara Rao G commented on HADOOP-10520:
--

bq. With the current API you cannot achieve the CREATE, REPLACE, ANY semantics 
of setxattr()

Agree. We can have flags like in Linux. How about renaming XAttrSetMode to 
XAttrFlag? (As the Linux API arg name is 'flags', it may be more consistent 
naming as well.)

bq. Do we need methods for setting/removing to handle multiple attributes? IMO 
this will complicate failures.

I think in the first version of XAttr support, maybe we can have single-xattr 
support to keep things simple. In follow-up JIRAs, we can add multi-attribute 
support?
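The Linux-style flags under discussion might look like the following sketch. The enum name and the validation semantics (CREATE fails if the attribute already exists, REPLACE fails if it doesn't, both together accept either state) are my reading of the proposal, not a settled API:

```java
import java.util.EnumSet;

public class XAttrFlags {
    // Modeled on Linux setxattr(2)'s XATTR_CREATE / XATTR_REPLACE.
    public enum XAttrSetFlag { CREATE, REPLACE }

    // Decide whether a set-xattr request is permitted, given whether
    // the attribute currently exists on the file.
    public static boolean allowed(boolean exists, EnumSet<XAttrSetFlag> flags) {
        boolean create = flags.contains(XAttrSetFlag.CREATE);
        boolean replace = flags.contains(XAttrSetFlag.REPLACE);
        if (create && !replace) {
            return !exists;   // pure create: must not exist yet
        }
        if (replace && !create) {
            return exists;    // pure replace: must already exist
        }
        return true;          // both (or neither): set unconditionally
    }

    public static void main(String[] args) {
        System.out.println(
            allowed(false, EnumSet.of(XAttrSetFlag.CREATE)));
    }
}
```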

 Extended attributes definition and FileSystem APIs for extended attributes.
 ---

 Key: HADOOP-10520
 URL: https://issues.apache.org/jira/browse/HADOOP-10520
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 3.0.0

 Attachments: HADOOP-10520.patch


 This JIRA defines XAttr (Extended Attribute), it consists of a name and 
 associated data, and 4 namespaces are defined: user, trusted, security and 
 system. FileSystem APIs for XAttr include setXAttrs, getXAttrs, removeXAttrs 
 and so on. For more information, please refer to HDFS-2006.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9919) Rewrite hadoop-metrics2.properties

2014-04-20 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-9919:
--

Target Version/s: 3.0.0, 2.5.0  (was: 3.0.0)

 Rewrite hadoop-metrics2.properties
 --

 Key: HADOOP-9919
 URL: https://issues.apache.org/jira/browse/HADOOP-9919
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.1.0-beta
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Attachments: HADOOP-9919.2.patch, HADOOP-9919.3.patch, 
 HADOOP-9919.4.patch, HADOOP-9919.patch


 The config for JobTracker and TaskTracker (commented out) still exists in 
 hadoop-metrics2.properties as follows:
 {code}
 #jobtracker.sink.file_jvm.context=jvm
 #jobtracker.sink.file_jvm.filename=jobtracker-jvm-metrics.out
 #jobtracker.sink.file_mapred.context=mapred
 #jobtracker.sink.file_mapred.filename=jobtracker-mapred-metrics.out
 #tasktracker.sink.file.filename=tasktracker-metrics.out
 {code}
 These lines should be removed and a config for NodeManager should be added 
 instead.
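A NodeManager replacement for the removed lines might look like the following, reusing the prefix.sink.instance.option pattern already present in hadoop-metrics2.properties; the exact keys here are illustrative, not from any attached patch:

```properties
# Hypothetical NodeManager file sink, mirroring the removed
# JobTracker/TaskTracker examples (left commented out by default):
#nodemanager.sink.file.filename=nodemanager-metrics.out
#nodemanager.sink.file_jvm.context=jvm
#nodemanager.sink.file_jvm.filename=nodemanager-jvm-metrics.out
```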



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10525) Remove DRFA.MaxBackupIndex config from log4j.properties

2014-04-20 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10525:
---

Summary: Remove DRFA.MaxBackupIndex config from log4j.properties  (was: 
Remove MaxBackupIndex config from DailyRollingFileAppender)

 Remove DRFA.MaxBackupIndex config from log4j.properties
 ---

 Key: HADOOP-10525
 URL: https://issues.apache.org/jira/browse/HADOOP-10525
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Priority: Minor
  Labels: newbie

 From [hadoop-user mailing 
 list|http://mail-archives.apache.org/mod_mbox/hadoop-user/201404.mbox/%3C534FACD3.8040907%40corp.badoo.com%3E].
 {code}
 # 30-day backup
 # log4j.appender.DRFA.MaxBackupIndex=30
 {code}
 In {{log4j.properties}}, the above lines should be removed because 
 DailyRollingFileAppender (DRFA) doesn't support the MaxBackupIndex config.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10525) Remove MaxBackupIndex config from DailyRollingFileAppender

2014-04-20 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HADOOP-10525:
--

 Summary: Remove MaxBackupIndex config from DailyRollingFileAppender
 Key: HADOOP-10525
 URL: https://issues.apache.org/jira/browse/HADOOP-10525
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Priority: Minor


From [hadoop-user mailing 
list|http://mail-archives.apache.org/mod_mbox/hadoop-user/201404.mbox/%3C534FACD3.8040907%40corp.badoo.com%3E].
{code}
# 30-day backup
# log4j.appender.DRFA.MaxBackupIndex=30
{code}
In {{log4j.properties}}, the above lines should be removed because 
DailyRollingFileAppender (DRFA) doesn't support the MaxBackupIndex config.
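Worth noting for anyone who actually wants the 30-backup cap: log4j 1.2's RollingFileAppender does honor MaxBackupIndex, so a size-based alternative could be documented instead. The snippet below is a sketch, not part of the attached patch:

```properties
# RollingFileAppender supports MaxBackupIndex (DRFA silently ignores it):
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.RFA.MaxFileSize=256MB
log4j.appender.RFA.MaxBackupIndex=30
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
```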



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10525) Remove DRFA.MaxBackupIndex config from log4j.properties

2014-04-20 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10525:
---

Attachment: HADOOP-10525.patch

Attaching a patch.

 Remove DRFA.MaxBackupIndex config from log4j.properties
 ---

 Key: HADOOP-10525
 URL: https://issues.apache.org/jira/browse/HADOOP-10525
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-10525.patch


 From [hadoop-user mailing 
 list|http://mail-archives.apache.org/mod_mbox/hadoop-user/201404.mbox/%3C534FACD3.8040907%40corp.badoo.com%3E].
 {code}
 # 30-day backup
 # log4j.appender.DRFA.MaxBackupIndex=30
 {code}
 In {{log4j.properties}}, the above lines should be removed because 
 DailyRollingFileAppender (DRFA) doesn't support the MaxBackupIndex config.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10525) Remove DRFA.MaxBackupIndex config from log4j.properties

2014-04-20 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10525:
---

Assignee: Akira AJISAKA
Target Version/s: 2.5.0
  Status: Patch Available  (was: Open)

 Remove DRFA.MaxBackupIndex config from log4j.properties
 ---

 Key: HADOOP-10525
 URL: https://issues.apache.org/jira/browse/HADOOP-10525
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-10525.patch


 From [hadoop-user mailing 
 list|http://mail-archives.apache.org/mod_mbox/hadoop-user/201404.mbox/%3C534FACD3.8040907%40corp.badoo.com%3E].
 {code}
 # 30-day backup
 # log4j.appender.DRFA.MaxBackupIndex=30
 {code}
 In {{log4j.properties}}, the above lines should be removed because 
 DailyRollingFileAppender (DRFA) doesn't support the MaxBackupIndex config.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9919) Rewrite hadoop-metrics2.properties

2014-04-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13975406#comment-13975406
 ] 

Hadoop QA commented on HADOOP-9919:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12607403/HADOOP-9919.4.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3819//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3819//console

This message is automatically generated.

 Rewrite hadoop-metrics2.properties
 --

 Key: HADOOP-9919
 URL: https://issues.apache.org/jira/browse/HADOOP-9919
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.1.0-beta
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Attachments: HADOOP-9919.2.patch, HADOOP-9919.3.patch, 
 HADOOP-9919.4.patch, HADOOP-9919.patch


 The config for JobTracker and TaskTracker (commented out) still exists in 
 hadoop-metrics2.properties as follows:
 {code}
 #jobtracker.sink.file_jvm.context=jvm
 #jobtracker.sink.file_jvm.filename=jobtracker-jvm-metrics.out
 #jobtracker.sink.file_mapred.context=mapred
 #jobtracker.sink.file_mapred.filename=jobtracker-mapred-metrics.out
 #tasktracker.sink.file.filename=tasktracker-metrics.out
 {code}
 These lines should be removed and a config for NodeManager should be added 
 instead.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9919) Rewrite hadoop-metrics2.properties

2014-04-20 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13975417#comment-13975417
 ] 

Akira AJISAKA commented on HADOOP-9919:
---

The patch only updates comments, so no new tests are needed.

 Rewrite hadoop-metrics2.properties
 --

 Key: HADOOP-9919
 URL: https://issues.apache.org/jira/browse/HADOOP-9919
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.1.0-beta
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Attachments: HADOOP-9919.2.patch, HADOOP-9919.3.patch, 
 HADOOP-9919.4.patch, HADOOP-9919.patch


 The config for JobTracker and TaskTracker (commented out) still exists in 
 hadoop-metrics2.properties as follows:
 {code}
 #jobtracker.sink.file_jvm.context=jvm
 #jobtracker.sink.file_jvm.filename=jobtracker-jvm-metrics.out
 #jobtracker.sink.file_mapred.context=mapred
 #jobtracker.sink.file_mapred.filename=jobtracker-mapred-metrics.out
 #tasktracker.sink.file.filename=tasktracker-metrics.out
 {code}
 These lines should be removed and a config for NodeManager should be added 
 instead.



--
This message was sent by Atlassian JIRA
(v6.2#6252)