[jira] [Resolved] (HADOOP-16007) Order of property settings is incorrect when includes are processed

2018-12-20 Thread Jason Lowe (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved HADOOP-16007.
-
Resolution: Duplicate

This was fixed by HADOOP-15554.

> Order of property settings is incorrect when includes are processed
> ---
>
> Key: HADOOP-16007
> URL: https://issues.apache.org/jira/browse/HADOOP-16007
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.2.0, 3.1.1, 3.0.4
>Reporter: Jason Lowe
>Assignee: Eric Payne
>Priority: Blocker
>
> If a configuration file sets a property and then later includes another file 
> that sets the same property to a different value, the property will be parsed 
> incorrectly. For example, consider the following configuration file:
> {noformat}
> <configuration xmlns:xi="http://www.w3.org/2001/XInclude">
>   <property>
>     <name>myprop</name>
>     <value>val1</value>
>   </property>
>   <xi:include href="/some/other/file.xml"/>
> </configuration>
> {noformat}
> with the contents of /some/other/file.xml as:
> {noformat}
> <property>
>   <name>myprop</name>
>   <value>val2</value>
> </property>
> {noformat}
> Parsing this configuration should result in myprop=val2, but it actually 
> results in myprop=val1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16016) TestSSLFactory#testServerWeakCiphers sporadically fails in precommit builds

2018-12-19 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-16016:
---

 Summary: TestSSLFactory#testServerWeakCiphers sporadically fails 
in precommit builds
 Key: HADOOP-16016
 URL: https://issues.apache.org/jira/browse/HADOOP-16016
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 3.3.0
Reporter: Jason Lowe


I have seen a couple of precommit builds across JIRAs fail in 
TestSSLFactory#testServerWeakCiphers with the error:
{noformat}
[ERROR]   TestSSLFactory.testServerWeakCiphers:240 Expected to find 'no cipher 
suites in common' but got unexpected 
exception:javax.net.ssl.SSLHandshakeException: No appropriate protocol 
(protocol is disabled or cipher suites are inappropriate)
{noformat}






[jira] [Created] (HADOOP-16007) Order of property settings is incorrect when includes are processed

2018-12-14 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-16007:
---

 Summary: Order of property settings is incorrect when includes are 
processed
 Key: HADOOP-16007
 URL: https://issues.apache.org/jira/browse/HADOOP-16007
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 3.1.1, 3.2.0, 3.0.4
Reporter: Jason Lowe


If a configuration file sets a property and then later includes another file 
that sets the same property to a different value, the property will be parsed 
incorrectly. For example, consider the following configuration file:
{noformat}
<configuration xmlns:xi="http://www.w3.org/2001/XInclude">
  <property>
    <name>myprop</name>
    <value>val1</value>
  </property>
  <xi:include href="/some/other/file.xml"/>
</configuration>
{noformat}
with the contents of /some/other/file.xml as:
{noformat}
<property>
  <name>myprop</name>
  <value>val2</value>
</property>
{noformat}
Parsing this configuration should result in myprop=val2, but it actually 
results in myprop=val1.






[jira] [Created] (HADOOP-15822) zstd compressor can fail with a small output buffer

2018-10-05 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-15822:
---

 Summary: zstd compressor can fail with a small output buffer
 Key: HADOOP-15822
 URL: https://issues.apache.org/jira/browse/HADOOP-15822
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.9.0
Reporter: Jason Lowe
Assignee: Jason Lowe


TestZStandardCompressorDecompressor fails a couple of tests on my machine with 
the latest zstd library (1.3.5).  Compression can fail to successfully finalize 
the stream when a small output buffer is used, resulting in a "failed to init" 
error, and decompression with a direct buffer can fail with an "invalid src 
size" error.






[jira] [Created] (HADOOP-15820) ZStandardDecompressor native code sets an integer field as a long

2018-10-04 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-15820:
---

 Summary: ZStandardDecompressor native code sets an integer field 
as a long
 Key: HADOOP-15820
 URL: https://issues.apache.org/jira/browse/HADOOP-15820
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0-alpha2, 2.9.0
Reporter: Jason Lowe
Assignee: Jason Lowe


Java_org_apache_hadoop_io_compress_zstd_ZStandardDecompressor_init in 
ZStandardDecompressor.c sets the {{remaining}} field as a long when it actually 
is an integer.






[jira] [Resolved] (HADOOP-15738) MRAppBenchmark.benchmark1() fails with NullPointerException

2018-09-10 Thread Jason Lowe (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved HADOOP-15738.
-
Resolution: Duplicate

> MRAppBenchmark.benchmark1() fails with NullPointerException
> ---
>
> Key: HADOOP-15738
> URL: https://issues.apache.org/jira/browse/HADOOP-15738
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Oleksandr Shevchenko
>Priority: Minor
>
> MRAppBenchmark.benchmark1() fails with NullPointerException:
> 1. No queue is set for this test. As a result, we get the following 
> exception:
> {noformat}
> 2018-09-10 17:04:23,486 ERROR [Thread-0] rm.RMCommunicator 
> (RMCommunicator.java:register(177)) - Exception while registering
> java.lang.NullPointerException
> at org.apache.avro.util.Utf8$2.toUtf8(Utf8.java:123)
> at org.apache.avro.util.Utf8.getBytesFor(Utf8.java:172)
> at org.apache.avro.util.Utf8.<init>(Utf8.java:39)
> at 
> org.apache.hadoop.mapreduce.jobhistory.JobQueueChangeEvent.<init>(JobQueueChangeEvent.java:35)
> at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.setQueueName(JobImpl.java:1167)
> at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.register(RMCommunicator.java:174)
> at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.serviceStart(RMCommunicator.java:122)
> at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.serviceStart(RMContainerAllocator.java:280)
> at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
> at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1293)
> at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at org.apache.hadoop.mapreduce.v2.app.MRApp.submit(MRApp.java:301)
> at org.apache.hadoop.mapreduce.v2.app.MRApp.submit(MRApp.java:285)
> at 
> org.apache.hadoop.mapreduce.v2.app.MRAppBenchmark.run(MRAppBenchmark.java:72)
> at 
> org.apache.hadoop.mapreduce.v2.app.MRAppBenchmark.benchmark1(MRAppBenchmark.java:194)
> {noformat}
> 2. The test overrides the createSchedulerProxy method and does not set the 
> application priority that was added later by MAPREDUCE-6515. We get the 
> following error:
> {noformat}
> java.lang.NullPointerException
>  at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.handleJobPriorityChange(RMContainerAllocator.java:1025)
>  at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.getResources(RMContainerAllocator.java:880)
>  at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.heartbeat(RMContainerAllocator.java:286)
>  at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator$AllocatorRunnable.run(RMCommunicator.java:280)
>  at java.lang.Thread.run(Thread.java:748)
> {noformat}
> In both cases the job will never run, and the test hangs instead of 
> finishing.






[jira] [Reopened] (HADOOP-15738) MRAppBenchmark.benchmark1() fails with NullPointerException

2018-09-10 Thread Jason Lowe (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe reopened HADOOP-15738:
-

> MRAppBenchmark.benchmark1() fails with NullPointerException
> ---
>
> Key: HADOOP-15738
> URL: https://issues.apache.org/jira/browse/HADOOP-15738
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Oleksandr Shevchenko
>Priority: Minor
>
> MRAppBenchmark.benchmark1() fails with NullPointerException:
> 1. No queue is set for this test. As a result, we get the following 
> exception:
> {noformat}
> 2018-09-10 17:04:23,486 ERROR [Thread-0] rm.RMCommunicator 
> (RMCommunicator.java:register(177)) - Exception while registering
> java.lang.NullPointerException
> at org.apache.avro.util.Utf8$2.toUtf8(Utf8.java:123)
> at org.apache.avro.util.Utf8.getBytesFor(Utf8.java:172)
> at org.apache.avro.util.Utf8.<init>(Utf8.java:39)
> at 
> org.apache.hadoop.mapreduce.jobhistory.JobQueueChangeEvent.<init>(JobQueueChangeEvent.java:35)
> at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.setQueueName(JobImpl.java:1167)
> at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.register(RMCommunicator.java:174)
> at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.serviceStart(RMCommunicator.java:122)
> at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.serviceStart(RMContainerAllocator.java:280)
> at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
> at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1293)
> at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at org.apache.hadoop.mapreduce.v2.app.MRApp.submit(MRApp.java:301)
> at org.apache.hadoop.mapreduce.v2.app.MRApp.submit(MRApp.java:285)
> at 
> org.apache.hadoop.mapreduce.v2.app.MRAppBenchmark.run(MRAppBenchmark.java:72)
> at 
> org.apache.hadoop.mapreduce.v2.app.MRAppBenchmark.benchmark1(MRAppBenchmark.java:194)
> {noformat}
> 2. The test overrides the createSchedulerProxy method and does not set the 
> application priority that was added later by MAPREDUCE-6515. We get the 
> following error:
> {noformat}
> java.lang.NullPointerException
>  at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.handleJobPriorityChange(RMContainerAllocator.java:1025)
>  at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.getResources(RMContainerAllocator.java:880)
>  at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.heartbeat(RMContainerAllocator.java:286)
>  at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator$AllocatorRunnable.run(RMCommunicator.java:280)
>  at java.lang.Thread.run(Thread.java:748)
> {noformat}
> In both cases the job will never run, and the test hangs instead of 
> finishing.






[jira] [Created] (HADOOP-15406) hadoop-nfs dependencies for mockito and junit are not test scope

2018-04-23 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-15406:
---

 Summary: hadoop-nfs dependencies for mockito and junit are not 
test scope
 Key: HADOOP-15406
 URL: https://issues.apache.org/jira/browse/HADOOP-15406
 Project: Hadoop Common
  Issue Type: Bug
  Components: nfs
Reporter: Jason Lowe


hadoop-nfs depends on mockito-all and junit for its unit tests but does not 
mark those dependencies as test scope.
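The fix would presumably be to mark those dependencies with test scope in the 
hadoop-nfs pom, along these lines (a sketch; versions are typically managed by 
the parent pom):
{noformat}
<dependency>
  <groupId>junit</groupId>
  <artifactId>junit</artifactId>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.mockito</groupId>
  <artifactId>mockito-all</artifactId>
  <scope>test</scope>
</dependency>
{noformat}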






[jira] [Reopened] (HADOOP-13500) Concurrency issues when using Configuration iterator

2018-04-03 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe reopened HADOOP-13500:
-

This is not a duplicate of HADOOP-13556.  That JIRA only changed the 
getPropsWithPrefix method which was not involved in the error reported by this 
JIRA or TEZ-3413.  AFAICT iterating a shared configuration object is still 
unsafe.

> Concurrency issues when using Configuration iterator
> 
>
> Key: HADOOP-13500
> URL: https://issues.apache.org/jira/browse/HADOOP-13500
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Jason Lowe
>Assignee: Ajay Kumar
>Priority: Major
>
> It is possible to encounter a ConcurrentModificationException while trying to 
> iterate a Configuration object.  The iterator method tries to walk the 
> underlying Properties object without proper synchronization, so another thread 
> simultaneously calling the set method can trigger it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15170) Add symlink support to FileUtil#unTarUsingJava

2018-01-12 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-15170:
---

 Summary: Add symlink support to FileUtil#unTarUsingJava 
 Key: HADOOP-15170
 URL: https://issues.apache.org/jira/browse/HADOOP-15170
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Reporter: Jason Lowe
Priority: Minor


Now that JDK7 or later is required, we can leverage 
java.nio.file.Files#createSymbolicLink in FileUtil.unTarUsingJava to support 
archives that contain symbolic links.
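A minimal sketch of what the untar loop could do for a symlink entry, assuming 
the link name and target come from the tar entry header (the names here are 
hypothetical, not the actual FileUtil code):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class SymlinkSketch {
    // Hypothetical helper: materialize one symlink entry from an archive.
    // linkName is the entry path relative to destDir; target is the raw
    // link target string stored in the tar header (may be relative).
    static Path createLink(Path destDir, String linkName, String target)
            throws IOException {
        Path link = destDir.resolve(linkName);
        return Files.createSymbolicLink(link, Paths.get(target));
    }
}
```

Note that createSymbolicLink requires the link path not to exist yet; the 
target does not have to exist at creation time, which matters for archives 
whose entries are not ordered.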







[jira] [Created] (HADOOP-15085) Output streams closed with IOUtils suppressing write errors

2017-12-01 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-15085:
---

 Summary: Output streams closed with IOUtils suppressing write 
errors
 Key: HADOOP-15085
 URL: https://issues.apache.org/jira/browse/HADOOP-15085
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Jason Lowe


There are a few places in hadoop-common that are closing an output stream with 
IOUtils.cleanupWithLogger like this:
{code}
  try {
...write to outStream...
  } finally {
IOUtils.cleanupWithLogger(LOG, outStream);
  }
{code}
This suppresses any IOException thrown by close(), which can lead to partial or 
corrupted output without a corresponding exception.  The code should either use 
try-with-resources or explicitly close the stream within the try block, so an 
exception thrown during close() propagates just as exceptions during write 
operations do.
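A minimal sketch of the safer pattern, using a hypothetical writeSafely helper: 
try-with-resources closes the stream on exit and lets an IOException from 
close() propagate instead of being swallowed.

```java
import java.io.IOException;
import java.io.OutputStream;

public class SafeClose {
    // Hypothetical helper: the try-with-resources block closes the
    // stream on exit, and an IOException thrown by close() propagates
    // to the caller instead of being silently logged and dropped.
    static void writeSafely(OutputStream outStream, byte[] data)
            throws IOException {
        try (OutputStream out = outStream) {
            out.write(data);
        } // close() happens here; its exception is not suppressed
    }
}
```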






[jira] [Created] (HADOOP-15078) dtutil ignores nonexistent files

2017-11-30 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-15078:
---

 Summary: dtutil ignores nonexistent files
 Key: HADOOP-15078
 URL: https://issues.apache.org/jira/browse/HADOOP-15078
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 3.0.0-alpha1
Reporter: Jason Lowe


While investigating issues in HADOOP-15059 I ran the dtutil append command like 
this:
{noformat}
$ hadoop dtutil append -format protobuf foo foo.pb
{noformat}

expecting the append command to translate the existing tokens in file {{foo}} 
into the currently non-existent file {{foo.pb}}.  Instead, the command executed 
without error and overwrote {{foo}} rather than creating {{foo.pb}}.  I now 
understand how append works, but having dtutil _silently ignore_ filenames 
requested on the command line is surprising at best.  At worst it clobbers 
data the user did not expect to be overwritten.







[jira] [Created] (HADOOP-14902) LoadGenerator#genFile write close timing is incorrectly calculated

2017-09-22 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-14902:
---

 Summary: LoadGenerator#genFile write close timing is incorrectly 
calculated
 Key: HADOOP-14902
 URL: https://issues.apache.org/jira/browse/HADOOP-14902
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.4.0
Reporter: Jason Lowe


LoadGenerator#genFile's write close timing code looks like the following:
{code}
startTime = Time.now();
executionTime[WRITE_CLOSE] += (Time.now() - startTime);
{code}

That code will record a zero (or near-zero) write close timing, since the file 
is not actually closed between the two timestamp lookups.
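A corrected sketch (the WRITE_CLOSE constant mirrors the snippet above, but the 
class and helper here are hypothetical): the close() call must sit between the 
two timestamp reads for the delta to mean anything.

```java
import java.io.IOException;
import java.io.OutputStream;

public class CloseTiming {
    static final int WRITE_CLOSE = 0;
    static final long[] executionTime = new long[1];

    // The operation being measured (close) now happens between the two
    // timestamp reads, so the recorded duration covers the close call.
    static void timedClose(OutputStream out) throws IOException {
        long startTime = System.currentTimeMillis();
        out.close();
        executionTime[WRITE_CLOSE] += (System.currentTimeMillis() - startTime);
    }
}
```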







[jira] [Created] (HADOOP-14843) FsPermission symbolic parsing failed to detect invalid argument

2017-09-06 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-14843:
---

 Summary: FsPermission symbolic parsing failed to detect invalid 
argument
 Key: HADOOP-14843
 URL: https://issues.apache.org/jira/browse/HADOOP-14843
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.8.1, 2.7.4
Reporter: Jason Lowe


A user misunderstood the syntax format for the FsPermission symbolic 
constructor and passed the argument "-rwr" instead of "u=rw,g=r".  In 2.7 and 
earlier this was silently misinterpreted as mode 0777 and in 2.8 it oddly 
became mode .  In either case FsPermission should have flagged "-rwr" as an 
invalid argument rather than silently misinterpreting it.
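A sketch of the kind of up-front validation the report asks for (a hypothetical 
checker, not the actual FsPermission parser): requiring each comma-separated 
clause to name who it applies to rejects a string like "-rwr" immediately.

```java
import java.util.regex.Pattern;

public class SymbolicModeCheck {
    // Hypothetical validator: each clause must be who[+=-]perms, e.g.
    // "u=rw" or "g+x". "-rwr" has no who-part, so it fails the check
    // instead of being silently misread as some octal mode.
    private static final Pattern CLAUSE =
            Pattern.compile("[ugoa]+[+=-][rwxXst]*");

    static boolean isValidSymbolic(String mode) {
        for (String clause : mode.split(",", -1)) {
            if (!CLAUSE.matcher(clause).matches()) {
                return false;
            }
        }
        return true;
    }
}
```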






[jira] [Created] (HADOOP-14713) Audit for durations that should be measured via Time.monotonicNow

2017-08-01 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-14713:
---

 Summary: Audit for durations that should be measured via 
Time.monotonicNow
 Key: HADOOP-14713
 URL: https://issues.apache.org/jira/browse/HADOOP-14713
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Jason Lowe


Whenever we measure a time delta or duration within the same process, the 
timestamps should probably come from Time.monotonicNow rather than Time.now or 
System.currentTimeMillis.  The latter two read the system clock directly, which 
can move faster or slower than actual time while the system is undergoing a 
time adjustment (e.g. adjtime, or an admin setting a new system time).

We should go through the code base and identify places where the code is using 
the system clock but really should be using monotonic time.
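In plain JDK terms, System.nanoTime() is the monotonic source that 
Time.monotonicNow is built on; a minimal sketch of measuring a duration safely:

```java
public class MonotonicTiming {
    // System.nanoTime() is monotonic: it is unaffected by wall-clock
    // adjustments (NTP slew, an admin resetting the date), so the delta
    // below is a reliable duration even if the system clock jumps.
    static long elapsedMillis(Runnable task) {
        long start = System.nanoTime();
        task.run();
        return (System.nanoTime() - start) / 1_000_000L;
    }
}
```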






[jira] [Created] (HADOOP-14669) GenericTestUtils.waitFor should use monotonic time

2017-07-18 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-14669:
---

 Summary: GenericTestUtils.waitFor should use monotonic time
 Key: HADOOP-14669
 URL: https://issues.apache.org/jira/browse/HADOOP-14669
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0-alpha4
Reporter: Jason Lowe
Priority: Trivial


GenericTestUtils.waitFor should be calling Time.monotonicNow rather than 
Time.now.  Otherwise, if the system clock is adjusted during a unit test, the 
timeout period can be incorrect.






[jira] [Created] (HADOOP-14412) HostsFileReader#getHostDetails is very expensive on large clusters

2017-05-11 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-14412:
---

 Summary: HostsFileReader#getHostDetails is very expensive on large 
clusters
 Key: HADOOP-14412
 URL: https://issues.apache.org/jira/browse/HADOOP-14412
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.8.0
Reporter: Jason Lowe
Assignee: Jason Lowe


After upgrading one of our large clusters to 2.8 we noticed many IPC server 
threads of the resourcemanager spending time in NodesListManager#isValidNode 
which in turn was calling HostsFileReader#getHostDetails.  The latter is 
creating complete copies of the include and exclude sets for every node 
heartbeat, and these sets are not small due to the size of the cluster.  These 
copies cause multiple resizes of the underlying HashSets as they are filled, 
creating lots of garbage.
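One way to avoid the per-heartbeat copies is a copy-on-write snapshot that is 
rebuilt only when the hosts file is refreshed (a sketch of the idea; the class 
and method names here are hypothetical):

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class HostsView {
    // Immutable snapshot rebuilt only on refresh, so per-heartbeat
    // readers share one set instead of each allocating a full copy.
    private volatile Set<String> includesView = Collections.emptySet();

    synchronized void refresh(Set<String> newIncludes) {
        includesView = Collections.unmodifiableSet(new HashSet<>(newIncludes));
    }

    Set<String> getIncludes() {
        return includesView;  // cheap: no copying on the read path
    }
}
```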






[jira] [Created] (HADOOP-13552) RetryInvocationHandler logs all remote exceptions

2016-08-26 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-13552:
---

 Summary: RetryInvocationHandler logs all remote exceptions
 Key: HADOOP-13552
 URL: https://issues.apache.org/jira/browse/HADOOP-13552
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 2.8.0
Reporter: Jason Lowe
Priority: Blocker


RetryInvocationHandler logs a warning for any exception that it does not retry. 
 There are many exceptions that the client can automatically handle, like 
FileNotFoundException, UnresolvedPathException, etc., so now every one of these 
generates a scary-looking stack trace as a warning, after which the program 
continues normally.






[jira] [Created] (HADOOP-13500) Concurrency issues when using Configuration iterator

2016-08-16 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-13500:
---

 Summary: Concurrency issues when using Configuration iterator
 Key: HADOOP-13500
 URL: https://issues.apache.org/jira/browse/HADOOP-13500
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Reporter: Jason Lowe


It is possible to encounter a ConcurrentModificationException while trying to 
iterate a Configuration object.  The iterator method tries to walk the 
underlying Properties object without proper synchronization, so another thread 
simultaneously calling the set method can trigger it.
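A workaround sketch in plain java.util terms (Configuration is backed by a 
Properties object; the helper name here is hypothetical): take a snapshot under 
the lock that guards mutation, then iterate the copy.

```java
import java.util.Properties;

public class SafeConfigIteration {
    // Snapshot the properties under the lock that guards mutation, then
    // iterate the copy. Properties extends Hashtable, whose mutators
    // synchronize on the instance itself, so locking `props` excludes
    // a concurrent set while the copy is taken.
    static Properties snapshot(Properties props) {
        synchronized (props) {
            return (Properties) props.clone();
        }
    }
}
```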






[jira] [Reopened] (HADOOP-13362) DefaultMetricsSystem leaks the source name when a source unregisters

2016-07-11 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe reopened HADOOP-13362:
-
  Assignee: Junping Du

Reopening to target a fix just to the DefaultMetricsSystem for 2.7 rather than 
pulling in the entire patch from YARN-5296 (and its dependencies).

> DefaultMetricsSystem leaks the source name when a source unregisters
> 
>
> Key: HADOOP-13362
> URL: https://issues.apache.org/jira/browse/HADOOP-13362
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.7.2
>Reporter: Jason Lowe
>Assignee: Junping Du
>Priority: Critical
>
> Ran across a nodemanager that was spending most of its time in GC.  Upon 
> examination of the heap most of the memory was going to the map of names in 
> org.apache.hadoop.metrics2.lib.UniqueNames.  In this case the map had almost 
> 2 million entries.  Looking at a few entries of the map showed names like 
> "ContainerResource_container_e01_1459548490386_8560138_01_002020", 
> "ContainerResource_container_e01_1459548490386_2378745_01_000410", etc.
> Looks like the ContainerMetrics for each container will cause a unique name 
> to be registered with UniqueNames and the name will never be unregistered.
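The leak described above can be sketched in miniature (class and method names 
here are hypothetical, modeled on the described behavior of UniqueNames): each 
registration records a name forever unless a paired removal exists.

```java
import java.util.HashMap;
import java.util.Map;

public class UniqueNamesSketch {
    private final Map<String, Integer> names = new HashMap<>();

    // Each registration records the name in the map forever in the
    // buggy code; per-container sources make the map grow unboundedly.
    synchronized String uniqueName(String name) {
        int count = names.merge(name, 1, Integer::sum);
        return count == 1 ? name : name + "-" + count;
    }

    // Hypothetical unregister-side cleanup that would cap the growth.
    synchronized void remove(String name) {
        names.remove(name);
    }

    synchronized int size() {
        return names.size();
    }
}
```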






[jira] [Resolved] (HADOOP-13362) DefaultMetricsSystem leaks the source name when a source unregisters

2016-07-11 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved HADOOP-13362.
-
Resolution: Duplicate

> DefaultMetricsSystem leaks the source name when a source unregisters
> 
>
> Key: HADOOP-13362
> URL: https://issues.apache.org/jira/browse/HADOOP-13362
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.7.2
>Reporter: Jason Lowe
>Priority: Critical
>
> Ran across a nodemanager that was spending most of its time in GC.  Upon 
> examination of the heap most of the memory was going to the map of names in 
> org.apache.hadoop.metrics2.lib.UniqueNames.  In this case the map had almost 
> 2 million entries.  Looking at a few entries of the map showed names like 
> "ContainerResource_container_e01_1459548490386_8560138_01_002020", 
> "ContainerResource_container_e01_1459548490386_2378745_01_000410", etc.
> Looks like the ContainerMetrics for each container will cause a unique name 
> to be registered with UniqueNames and the name will never be unregistered.






[jira] [Created] (HADOOP-13343) globStatus returns null for file path that exists but is filtered

2016-07-06 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-13343:
---

 Summary: globStatus returns null for file path that exists but is 
filtered
 Key: HADOOP-13343
 URL: https://issues.apache.org/jira/browse/HADOOP-13343
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.7.2
Reporter: Jason Lowe
Priority: Minor


If a file path without globs is passed to globStatus and the file exists but 
the specified input filter suppresses it then globStatus will return null 
instead of an empty array.  This makes it impossible for the caller to 
distinguish between the file not existing at all and being suppressed by the 
filter, and it is inconsistent with how globs are handled for an existing dir 
that fails to match anything within the dir.






[jira] [Created] (HADOOP-12966) TestNativeLibraryChecker is crashing

2016-03-25 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-12966:
---

 Summary: TestNativeLibraryChecker is crashing
 Key: HADOOP-12966
 URL: https://issues.apache.org/jira/browse/HADOOP-12966
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Jason Lowe


Precommit builds have reported TestNativeLibraryChecker failing.  The logs show 
the JVM is crashing in unicode_length:
{noformat}
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x7fdf71b45c90, pid=11625, tid=140597680293632
#
# JRE version: Java(TM) SE Runtime Environment (8.0_74-b02) (build 1.8.0_74-b02)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.74-b02 mixed mode linux-amd64 
compressed oops)
# Problematic frame:
# V  [libjvm.so+0xa90c90]  UTF8::unicode_length(char const*)+0x0
{noformat}







[jira] [Created] (HADOOP-12958) PhantomReference for filesystem statistics can trigger OOM

2016-03-23 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-12958:
---

 Summary: PhantomReference for filesystem statistics can trigger OOM
 Key: HADOOP-12958
 URL: https://issues.apache.org/jira/browse/HADOOP-12958
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.4, 2.7.3
Reporter: Jason Lowe
 Fix For: 2.7.3, 2.6.5


I saw an OOM that appears to have been caused by the phantom references 
introduced for file system statistics management.  I'll post details in a 
followup comment.





[jira] [Created] (HADOOP-12706) TestLocalFsFCStatistics fails occasionally

2016-01-13 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-12706:
---

 Summary: TestLocalFsFCStatistics fails occasionally
 Key: HADOOP-12706
 URL: https://issues.apache.org/jira/browse/HADOOP-12706
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Jason Lowe


TestLocalFsFCStatistics has been failing sometimes, and when it fails it 
appears to be from FCStatisticsBaseTest.testStatisticsThreadLocalDataCleanUp.  
The test is timing out when it fails.





[jira] [Resolved] (HADOOP-12594) Deadlock in metrics subsystem

2015-11-24 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved HADOOP-12594.
-
Resolution: Duplicate

Resolving as a duplicate of HADOOP-11361 now that it was reverted.

> Deadlock in metrics subsystem
> -
>
> Key: HADOOP-12594
> URL: https://issues.apache.org/jira/browse/HADOOP-12594
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.7.3
>Reporter: Jason Lowe
>Priority: Blocker
> Attachments: HADOOP-12594.patch
>
>
> Saw a YARN ResourceManager process encounter a deadlock which appears to be 
> caused by the metrics subsystem.  Stack trace to follow.





[jira] [Reopened] (HADOOP-11361) Fix a race condition in MetricsSourceAdapter.updateJmxCache

2015-11-24 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe reopened HADOOP-11361:
-

Based on initial patch in HADOOP-12594 and earlier comments, I'm reverting this.

> Fix a race condition in MetricsSourceAdapter.updateJmxCache
> ---
>
> Key: HADOOP-11361
> URL: https://issues.apache.org/jira/browse/HADOOP-11361
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1, 2.5.1, 2.6.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HADOOP-111361-003.patch, HADOOP-11361-002.patch, 
> HADOOP-11361.patch, HDFS-7487.patch
>
>
> {noformat}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateAttrCache(MetricsSourceAdapter.java:247)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:177)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getAttribute(MetricsSourceAdapter.java:102)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
> {noformat}





[jira] [Created] (HADOOP-12594) Deadlock in metrics subsystem

2015-11-24 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-12594:
---

 Summary: Deadlock in metrics subsystem
 Key: HADOOP-12594
 URL: https://issues.apache.org/jira/browse/HADOOP-12594
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 2.7.1
Reporter: Jason Lowe
Priority: Critical


Saw a YARN ResourceManager process encounter a deadlock which appears to be 
caused by the metrics subsystem.  Stack trace to follow.





[jira] [Resolved] (HADOOP-12290) hadoop fs -ls command returns inconsistent results with wildcards

2015-07-30 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved HADOOP-12290.
-
Resolution: Invalid

This appears to be pilot error rather than a bug in Hadoop.  The wildcards are 
not quoted and therefore the shell is expanding them _before_ Hadoop even sees 
the wildcard.  You must be running on a Mac, which would explain why it's 
trying to look up things like /Applications, /Library, /System, etc.  This needs 
to be something like:
{noformat}
hadoop fs -ls '/*'
{noformat}
to keep the shell from expanding it.

The same thing is occurring for the /t* case.

For the last case, the shell is not finding anything for /z* and therefore is 
passing it unexpanded to Hadoop, and Hadoop is expanding it to the various z* 
directories.  However I suspect all of those directories are empty, so it lists 
nothing as a result.

Closing as invalid.  Please reopen if there's a real issue here.

> hadoop fs -ls command returns inconsistent results with wildcards
> -
>
> Key: HADOOP-12290
> URL: https://issues.apache.org/jira/browse/HADOOP-12290
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>
> I cannot find any document for wildcard support for "hadoop fs -ls" cmd and 
> the expected behavior. So I did some experiments and got inconsistent results 
> below. This looks like a bug to me. But if we don't support wildcard for 
> "hadoop fs -ls", we should at least document it.
> On a single node cluster with "fs.default.name" configured as 
> hdfs://localhost:9000. 
> Root without wildcard: HDFS only.
> {code}
> $ hdfs dfs -ls /
> Found 11 items
> drwxrwxrwx   - xyao hadoop  0 2015-07-28 15:27 /data
> drwxr-xr-x   - xyao hadoop  0 2015-07-26 23:05 /noez
> drwxr-xr-x   - xyao hadoop  0 2015-07-29 17:33 /path3
> drwxrwxrwx   - xyao hadoop  0 2015-07-26 23:04 /tmp
> drwx--   - xyao hadoop  0 2015-07-26 23:03 /user
> drwxr-xr-x   - xyao hadoop  0 2015-07-29 17:34 /uu
> drwxr-xr-x   - xyao hadoop  0 2015-07-26 23:08 /z1_1
> drwxr-xr-x   - xyao hadoop  0 2015-07-26 21:43 /z1_2new
> drwxr-xr-x   - xyao hadoop  0 2015-07-26 22:00 /z2_0
> drwxr-xr-x   - xyao hadoop  0 2015-07-26 21:43 /z2_1
> drwxr-xr-x   - xyao hadoop  0 2015-07-26 21:55 /z2_2
> {code}
> Root with wildcard: HDFS and local. 
> {code}
> $ hadoop fs -ls /*
> ls: `/Applications': No such file or directory
> ls: `/Library': No such file or directory
> ls: `/Network': No such file or directory
> ls: `/System': No such file or directory
> ls: `/User Information': No such file or directory
> ls: `/Users': No such file or directory
> ls: `/Volumes': No such file or directory
> ls: `/bin': No such file or directory
> ls: `/dev': No such file or directory
> ls: `/etc': No such file or directory
> ls: `/home': No such file or directory
> ls: `/mach_kernel': No such file or directory
> ls: `/net': No such file or directory
> ls: `/opt': No such file or directory
> ls: `/private': No such file or directory
> ls: `/proc': No such file or directory
> ls: `/sbin': No such file or directory
> ls: `/test.jks': No such file or directory
> Found 3 items
> drwxrwxrwx   - xyao hadoop  0 2015-07-22 10:48 /tmp/test
> drwxrwxrwx   - xyao hadoop  0 2015-07-22 10:50 /tmp/test
> drwxrwxrwx   - xyao hadoop  0 2015-07-22 10:49 /tmp/test
> hello
> ls: `/usr': No such file or directory
> ls: `/var': No such file or directory
> {code}
> Wildcard with prefix 1: HDFS and Local. But HDFS goes one level down.
> {code}
> HW11217:hadoop-hdfs-project xyao$ hadoop fs -ls /t*
> ls: `/test.jks': No such file or directory
> Found 3 items
> drwxrwxrwx   - xyao hadoop  0 2015-07-22 10:48 /tmp/test
> drwxrwxrwx   - xyao hadoop  0 2015-07-22 10:50 /tmp/test
> drwxrwxrwx   - xyao hadoop  0 2015-07-22 10:49 /tmp/test
> hello
> {code}
> Wildcard and prefix 2: Empty result even though HDFS does have a few 
> directories starts with "z" as shown above. 
> {code}
> hadoop fs -ls /z*
> {code}





[jira] [Created] (HADOOP-12191) Bzip2Factory is not thread safe

2015-07-06 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-12191:
---

 Summary: Bzip2Factory is not thread safe
 Key: HADOOP-12191
 URL: https://issues.apache.org/jira/browse/HADOOP-12191
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Reporter: Jason Lowe


Bzip2Factory.isNativeBzip2Loaded is not protected from multiple threads calling 
it simultaneously.  A thread can return false from this method, despite logging 
that it was going to return true, due to manipulation of the static boolean by 
another thread calling the same method.
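The failure shape can be sketched as an unsynchronized check-then-act on a shared static flag (a hedged illustration with hypothetical names, not the actual Bzip2Factory code); making the method synchronized is one way to make the read-initialize-return sequence atomic:

```java
// Hypothetical stand-in for a lazily initialized native-library flag.
// Without `synchronized`, two threads can interleave between reading
// `checked` and writing `loaded`, so one thread may return a stale value.
public class LazyFlagDemo {
    private static boolean loaded = false;
    private static boolean checked = false;

    // Fixed shape: the flag is read and written under one lock.
    public static synchronized boolean isNativeLoaded() {
        if (!checked) {
            loaded = tryLoadNative();
            checked = true;
        }
        return loaded;
    }

    private static boolean tryLoadNative() {
        return true; // stand-in for the real native-library probe
    }

    public static void main(String[] args) {
        System.out.println(isNativeLoaded()); // prints: true
    }
}
```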





[jira] [Created] (HADOOP-12125) Retrying UnknownHostException on a proxy does not actually retry hostname resolution

2015-06-26 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-12125:
---

 Summary: Retrying UnknownHostException on a proxy does not 
actually retry hostname resolution
 Key: HADOOP-12125
 URL: https://issues.apache.org/jira/browse/HADOOP-12125
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Reporter: Jason Lowe


When RetryInvocationHandler attempts to retry an UnknownHostException, the 
hostname is not resolved again.  The InetSocketAddress in the ConnectionId 
has cached the fact that the hostname is unresolvable, and when the proxy tries 
to set up a new Connection object with that ConnectionId it checks whether the 
(cached) resolution result is unresolved and immediately throws.

The end result is we sleep and retry for no benefit.  The hostname resolution 
is never attempted again.
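The caching behavior comes straight from java.net.InetSocketAddress: the constructor resolves the hostname once and caches the result, so a retry that wants a fresh lookup must construct a new instance. A small sketch (the `.invalid` TLD is reserved by RFC 2606 and never resolves, which keeps this deterministic):

```java
import java.net.InetSocketAddress;

public class ResolveDemo {
    // Re-checking the same instance does not re-run DNS resolution.
    public static boolean isStillUnresolved(InetSocketAddress addr) {
        return addr.isUnresolved();
    }

    // The only way to retry resolution: build a fresh InetSocketAddress,
    // whose constructor performs the lookup again.
    public static InetSocketAddress retryResolution(InetSocketAddress addr) {
        return new InetSocketAddress(addr.getHostName(), addr.getPort());
    }

    public static void main(String[] args) {
        InetSocketAddress addr = new InetSocketAddress("no.such.host.invalid", 8020);
        System.out.println(addr.isUnresolved());     // true: resolution failed
        System.out.println(isStillUnresolved(addr)); // still true: the result is cached
        System.out.println(retryResolution(addr).isUnresolved()); // a new lookup was attempted
    }
}
```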





[jira] [Resolved] (HADOOP-11532) RAT checker complaining about PSD images

2015-02-02 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved HADOOP-11532.
-
Resolution: Duplicate

This is a duplicate of YARN-3113.

> RAT checker complaining about PSD images
> 
>
> Key: HADOOP-11532
> URL: https://issues.apache.org/jira/browse/HADOOP-11532
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>
> Jenkins is rejecting builds as {{Sorting icons.psd}} doesn't have an ASF 
> header.
> {code}
>  !? 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.9.4/images/Sorting
>  icons.psd
> Lines that start with ? in the release audit report indicate files that 
> do not have an Apache license header.
> {code}
> It's a layered photoshop image that either needs to be excluded from RAT or 
> cut from the source tree





[jira] [Created] (HADOOP-11473) test-patch says "-1 overall" even when all checks are +1

2015-01-12 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-11473:
---

 Summary: test-patch says "-1 overall" even when all checks are +1
 Key: HADOOP-11473
 URL: https://issues.apache.org/jira/browse/HADOOP-11473
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Jason Lowe


I noticed recently that test-patch is posting "-1 overall" despite all 
sub-checks being +1.  See HDFS-7533 and HDFS-7598 for some examples.





[jira] [Created] (HADOOP-11409) FileContext.getFileContext can stack overflow if default fs misconfigured

2014-12-15 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-11409:
---

 Summary: FileContext.getFileContext can stack overflow if default 
fs misconfigured
 Key: HADOOP-11409
 URL: https://issues.apache.org/jira/browse/HADOOP-11409
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Jason Lowe


If the default filesystem is misconfigured such that it doesn't have a scheme 
then FileContext.getFileContext(URI, Configuration) will call 
FileContext.getFileContext(Configuration) which in turn calls the former and we 
loop until the stack explodes.
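The failure shape is simple mutual recursion between two overloads that each fall back to the other when no scheme is available. A hedged sketch (hypothetical methods mirroring the overload structure, not the actual FileContext code):

```java
public class MutualRecursionDemo {

    // Mirrors getFileContext(Configuration): delegates to the URI overload.
    static String getContext(String scheme) {
        return getContext(scheme, "conf");
    }

    // Mirrors getFileContext(URI, Configuration): with no scheme it falls
    // back to the defaults overload -- so the two call each other forever.
    static String getContext(String scheme, String conf) {
        if (scheme == null) {
            return getContext((String) null);
        }
        return scheme + "://";
    }

    public static void main(String[] args) {
        System.out.println(getContext("hdfs")); // terminates: hdfs://
        try {
            getContext((String) null); // misconfigured default fs: no scheme
        } catch (StackOverflowError e) {
            System.out.println("StackOverflowError: unbounded mutual recursion");
        }
    }
}
```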





[jira] [Resolved] (HADOOP-11288) yarn.resourcemanager.scheduler.class wrongly set in yarn-default.xml documentation

2014-11-10 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved HADOOP-11288.
-
Resolution: Invalid

The CapacityScheduler is very much supported and is actively being developed.  
Its selection as the default scheduler is intentional; see YARN-137.

> yarn.resourcemanager.scheduler.class wrongly set in yarn-default.xml 
> documentation
> --
>
> Key: HADOOP-11288
> URL: https://issues.apache.org/jira/browse/HADOOP-11288
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: DeepakVohra
>
> The yarn.resourcemanager.scheduler.class property is wrongly set to 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.
>  CapacitySchduler is not even supported. Should be 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler. 





[jira] [Created] (HADOOP-11007) Reinstate building of ant tasks support

2014-08-26 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-11007:
---

 Summary: Reinstate building of ant tasks support
 Key: HADOOP-11007
 URL: https://issues.apache.org/jira/browse/HADOOP-11007
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, fs
Affects Versions: 2.5.0
Reporter: Jason Lowe
Assignee: Jason Lowe


The ant tasks support from HADOOP-1508 is still present under 
hadoop-hdfs/src/ant/ but is no longer being built.  It would be nice if this 
was reinstated in the build and distributed as part of the release.





[jira] [Created] (HADOOP-10945) 4-digit octal permissions throw a parse error

2014-08-07 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-10945:
---

 Summary: 4-digit octal permissions throw a parse error
 Key: HADOOP-10945
 URL: https://issues.apache.org/jira/browse/HADOOP-10945
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.2.0
Reporter: Jason Lowe


Providing a 4-digit octal number for fs permissions leads to a parse error, 
e.g.: -Dfs.permissions.umask-mode=0022
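A parser that reads the umask with radix 8 accepts both spellings, since a leading zero changes nothing in base 8. A minimal sketch (hypothetical helper, not Hadoop's actual umask parser):

```java
public class UmaskDemo {
    // Both "022" and "0022" denote octal 022, i.e. decimal 18.
    static int parseUmask(String s) {
        return Integer.parseInt(s, 8);
    }

    public static void main(String[] args) {
        System.out.println(parseUmask("022"));  // 18
        System.out.println(parseUmask("0022")); // 18: the leading zero is harmless
    }
}
```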






[jira] [Reopened] (HADOOP-10468) TestMetricsSystemImpl.testMultiThreadedPublish fails intermediately

2014-06-24 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe reopened HADOOP-10468:
-


Reopening this issue as it breaks all existing metrics2 property files.  Before 
this change the properties needed to be lower-cased but now they must be 
camel-cased (e.g.: namenode.* now must be NameNode.*).

The release note states that the metrics2 file became case-sensitive, but I 
don't believe that's the case.  MetricsConfig uses 
org.apache.commons.configuration.SubsetConfiguration which I think has always 
been case-sensitive.

I'm hoping there's a way we can fix the underlying issue without breaking 
existing metrics2 property files, because the way in which they break is 
silent.  The settings are simply ignored rather than an error being thrown for 
unrecognized/unhandled properties.
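Concretely, a pre-existing metrics2 properties file using the lower-cased prefix now silently stops matching (fragment grounded in the namenode.* example above; the filename value is illustrative):

```properties
# Worked before HADOOP-10468: lower-cased source prefix matched.
namenode.sink.file.filename=namenode-metrics.out

# Required after HADOOP-10468: prefix must be camel-cased to match the
# source name, otherwise the setting is silently ignored.
NameNode.sink.file.filename=namenode-metrics.out
```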

> TestMetricsSystemImpl.testMultiThreadedPublish fails intermediately
> ---
>
> Key: HADOOP-10468
> URL: https://issues.apache.org/jira/browse/HADOOP-10468
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.5.0
>
> Attachments: HADOOP-10468.000.patch, HADOOP-10468.001.patch
>
>
> {{TestMetricsSystemImpl.testMultiThreadedPublish}} can fail intermittently 
> due to the insufficient size of the sink queue:
> {code}
> 2014-04-06 21:34:55,269 WARN  impl.MetricsSinkAdapter 
> (MetricsSinkAdapter.java:putMetricsImmediate(107)) - Collector has a full 
> queue and can't consume the given metrics.
> 2014-04-06 21:34:55,270 WARN  impl.MetricsSinkAdapter 
> (MetricsSinkAdapter.java:putMetricsImmediate(107)) - Collector has a full 
> queue and can't consume the given metrics.
> 2014-04-06 21:34:55,271 WARN  impl.MetricsSinkAdapter 
> (MetricsSinkAdapter.java:putMetricsImmediate(107)) - Collector has a full 
> queue and can't consume the given metrics.
> {code}
> The unit test should increase the default queue size to avoid intermittent 
> failures.





[jira] [Created] (HADOOP-10739) Renaming a file into a directory containing the same filename results in a confusing I/O error

2014-06-23 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-10739:
---

 Summary: Renaming a file into a directory containing the same 
filename results in a confusing I/O error
 Key: HADOOP-10739
 URL: https://issues.apache.org/jira/browse/HADOOP-10739
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.4.0
Reporter: Jason Lowe


Renaming a file on top of another existing filename reports "File exists", but 
colliding with an identically named file inside the destination directory 
results in the cryptic "Input/output error".





[jira] [Created] (HADOOP-10622) Shell.runCommand can deadlock

2014-05-20 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-10622:
---

 Summary: Shell.runCommand can deadlock
 Key: HADOOP-10622
 URL: https://issues.apache.org/jira/browse/HADOOP-10622
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Jason Lowe
Priority: Critical


Ran into a deadlock in Shell.runCommand.  Stacktrace details to follow.





[jira] [Resolved] (HADOOP-10474) Move o.a.h.record to hadoop-streaming

2014-05-19 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved HADOOP-10474.
-

Resolution: Fixed
Fix Version/s: 3.0.0 (was: 2.5.0)

I reverted HADOOP-10485 and HADOOP-10474 from branch-2.

> Move o.a.h.record to hadoop-streaming
> -
>
> Key: HADOOP-10474
> URL: https://issues.apache.org/jira/browse/HADOOP-10474
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 3.0.0
>
> Attachments: HADOOP-10474.000.patch, HADOOP-10474.001.patch, 
> HADOOP-10474.002.patch
>
>
> The classes in o.a.h.record have been deprecated for more than a year and a 
> half. They should be removed. As the first step, the jira moves all these 
> classes into the hadoop-streaming project, which is the only user of these 
> classes.





[jira] [Reopened] (HADOOP-10474) Move o.a.h.record to hadoop-streaming

2014-05-16 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe reopened HADOOP-10474:
-


Reopening this as Hive is an important part of the Hadoop stack.  Arguably we 
shouldn't remove something that hasn't been deprecated for at least one full 
major release.  org.apache.hadoop.record.* wasn't deprecated in 1.x so it seems 
premature to remove it in 2.x, especially in a minor release of 2.x.

Recommend we revert this, at least in branch-2.

> Move o.a.h.record to hadoop-streaming
> -
>
> Key: HADOOP-10474
> URL: https://issues.apache.org/jira/browse/HADOOP-10474
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.5.0
>
> Attachments: HADOOP-10474.000.patch, HADOOP-10474.001.patch, 
> HADOOP-10474.002.patch
>
>
> The classes in o.a.h.record have been deprecated for more than a year and a 
> half. They should be removed. As the first step, the jira moves all these 
> classes into the hadoop-streaming project, which is the only user of these 
> classes.





[jira] [Resolved] (HADOOP-9344) Configuration.writeXml can warn about deprecated properties user did not set

2014-03-03 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved HADOOP-9344.


Resolution: Duplicate

Looks like this was fixed by HADOOP-10178.

> Configuration.writeXml can warn about deprecated properties user did not set
> 
>
> Key: HADOOP-9344
> URL: https://issues.apache.org/jira/browse/HADOOP-9344
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.0.3-alpha, 0.23.5
>Reporter: Jason Lowe
>Assignee: Rushabh S Shah
>
> When the configuration is serialized it can emit warnings about deprecated 
> properties that the user did not specify.  Converting the config to XML 
> causes all the properties in the config to be processed for deprecation, and 
> after HADOOP-8167 setting a proper config property also causes the deprecated 
> forms to be set.  Processing all the keys in the config for deprecation 
> therefore can trigger warnings for keys that were never specified by the 
> user, leaving users confused as to how their code could be triggering these 
> warnings.





[jira] [Resolved] (HADOOP-10369) hadoop fs -ls prints "Found 1 items" for each entry when globbing

2014-02-26 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved HADOOP-10369.
-

Resolution: Duplicate

This is a duplicate of HADOOP-8691.

> hadoop fs -ls prints "Found 1 items" for each entry when globbing
> -
>
> Key: HADOOP-10369
> URL: https://issues.apache.org/jira/browse/HADOOP-10369
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Chris Li
>Priority: Minor
>
> {noformat}
> -sh-4.1$ hadoop fs -ls /user/chrili/books/84*
> Found 1 items
> -rw-r--r--   3 chrili gid-chrili 142382 2014-02-25 18:30 
> /user/chrili/books/844.other
> Found 1 items
> -rwxr-xr-x   3 chrili gid-chrili 142382 2013-09-24 10:47 
> /user/chrili/books/844.txt.utf-8
> {noformat}
> This behavior is new to 2.0. In 1.X it would not print this at all:
> {noformat}
> -sh-4.1$ hadoop fs -ls /user/chrili/books/84*
> -rw-r--r--   3 chrili gid-chrili 142382 2014-02-25 18:30 
> /user/chrili/books/844.other
> -rwxr-xr-x   3 chrili gid-chrili 142382 2013-09-24 10:47 
> /user/chrili/books/844.txt.utf-8
> {noformat}
> We can workaround this today by filtering with grep first, but I don't think 
> this is the sort of thing that should be printed to stdout in the first 
> place. Seems like it would be more appropriate to output to stderr.





[jira] [Created] (HADOOP-10346) Deadlock while logging tokens

2014-02-14 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-10346:
---

 Summary: Deadlock while logging tokens
 Key: HADOOP-10346
 URL: https://issues.apache.org/jira/browse/HADOOP-10346
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.2.0
Reporter: Jason Lowe


Ran into a deadlock between two threads that were both wielding Tokens.  One 
was trying to log a token while the other was trying to set the token service 
on a different token.





[jira] [Reopened] (HADOOP-10112) har file listing doesn't work with wild card

2014-02-06 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe reopened HADOOP-10112:
-


Reopening this as it's also a problem in 0.23 that our customers would like 
fixed.  Posting a backported patch for branch-0.23 shortly.

> har file listing  doesn't work with wild card
> -
>
> Key: HADOOP-10112
> URL: https://issues.apache.org/jira/browse/HADOOP-10112
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.23.10, 2.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HADOOP-10112.004.patch
>
>
> [test@test001 root]$ hdfs dfs -ls har:///tmp/filename.har/*
> -ls: Can not create a Path from an empty string
> Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [ ...]
> It works without "*".





[jira] [Resolved] (HADOOP-10233) RPC lacks output flow control

2014-01-29 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved HADOOP-10233.
-

Resolution: Duplicate

> RPC lacks output flow control
> -
>
> Key: HADOOP-10233
> URL: https://issues.apache.org/jira/browse/HADOOP-10233
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Priority: Critical
>
> The RPC layer has input flow control via the callq, however it lacks any 
> output flow control.  A handler will try to directly send the response.  If 
> the full response is not sent then it is queued for the background responder 
> thread.  The RPC layer may end up queuing so many buffers that it "locks" up 
> in GC.





[jira] [Created] (HADOOP-10276) CLONE - RawLocalFs#getFileLinkStatus does not fill in the link owner and mode

2014-01-24 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-10276:
---

 Summary: CLONE - RawLocalFs#getFileLinkStatus does not fill in the 
link owner and mode
 Key: HADOOP-10276
 URL: https://issues.apache.org/jira/browse/HADOOP-10276
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 2.4.0


{{RawLocalFs#getFileLinkStatus}} does not actually get the owner and mode of 
the symlink, but instead uses the owner and mode of the symlink target.  If the 
target can't be found, it fills in bogus values (the empty string and 
FsPermission.getDefault) for these.

Symlinks have an owner distinct from the owner of the target they point to, and 
getFileLinkStatus ought to expose this.

In some operating systems, symlinks can have a permission other than 0777.  We 
ought to expose this in RawLocalFilesystem and other places, although we don't 
necessarily have to support this behavior in HDFS.





[jira] [Created] (HADOOP-10091) Job with a har archive as input fails on 0.23

2013-11-11 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-10091:
---

 Summary: Job with a har archive as input fails on 0.23
 Key: HADOOP-10091
 URL: https://issues.apache.org/jira/browse/HADOOP-10091
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.10
Reporter: Jason Lowe
Assignee: Jason Lowe
Priority: Blocker


Attempting to run a MapReduce job with a har as input fails.  Sample stacktrace 
to follow.  We need to backport the fix for HADOOP-10003 to branch-0.23.





[jira] [Created] (HADOOP-10081) Client.setupIOStreams can leak socket resources on exception or error

2013-11-04 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-10081:
---

 Summary: Client.setupIOStreams can leak socket resources on 
exception or error
 Key: HADOOP-10081
 URL: https://issues.apache.org/jira/browse/HADOOP-10081
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 2.2.0, 0.23.9
Reporter: Jason Lowe
Priority: Critical


The setupIOStreams method in org.apache.hadoop.ipc.Client can leak socket 
resources if an exception is thrown before the inStream and outStream local 
variables are assigned to this.in and this.out, respectively.  
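The leak shape is a resource opened into a local variable with no owner on the exception path; a try/finally that takes ownership of the local closes it on every path. A hedged sketch with hypothetical stand-in resources, not the real Client.setupIOStreams:

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.concurrent.atomic.AtomicInteger;

public class LeakDemo {
    static final AtomicInteger open = new AtomicInteger();

    // Counts itself as "open" until closed, so leaks are observable.
    static class FakeStream implements Closeable {
        FakeStream() { open.incrementAndGet(); }
        public void close() { open.decrementAndGet(); }
    }

    // Leaky shape: the local is orphaned when wrap() throws.
    static void setupLeaky() throws IOException {
        FakeStream in = new FakeStream();
        wrap(in); // throws -> `in` is never closed
    }

    // Fixed shape: close the local on any failure path.
    static void setupSafe() throws IOException {
        FakeStream in = new FakeStream();
        try {
            wrap(in);
        } finally {
            in.close();
        }
    }

    static void wrap(FakeStream s) throws IOException {
        throw new IOException("simulated failure while wrapping streams");
    }

    public static void main(String[] args) {
        try { setupLeaky(); } catch (IOException ignored) { }
        System.out.println("leaked: " + open.get()); // 1: the leaky path orphaned a stream
        try { setupSafe(); } catch (IOException ignored) { }
        System.out.println("leaked: " + open.get()); // still 1: the safe path cleaned up
    }
}
```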





[jira] [Created] (HADOOP-10059) RPC authentication and authorization metrics overflow to negative values on busy clusters

2013-10-18 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-10059:
---

 Summary: RPC authentication and authorization metrics overflow to 
negative values on busy clusters
 Key: HADOOP-10059
 URL: https://issues.apache.org/jira/browse/HADOOP-10059
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 2.2.0, 0.23.9
Reporter: Jason Lowe
Priority: Minor


The RPC metrics for authorization and authentication successes can easily 
overflow to negative values on a busy cluster that has been up for a long time. 
 We should consider providing 64-bit values for these counters.
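The overflow itself is plain two's-complement wraparound: one increment past Integer.MAX_VALUE goes negative, while a 64-bit counter has ample headroom.

```java
public class CounterOverflowDemo {
    public static void main(String[] args) {
        int narrow = Integer.MAX_VALUE;   //  2147483647
        narrow++;                         // wraps around
        System.out.println(narrow);       // -2147483648

        long wide = Integer.MAX_VALUE;
        wide++;                           // no wrap until Long.MAX_VALUE
        System.out.println(wide);         // 2147483648
    }
}
```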





[jira] [Created] (HADOOP-10048) LocalDirAllocator should avoid holding locks while accessing the filesystem

2013-10-15 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-10048:
---

 Summary: LocalDirAllocator should avoid holding locks while 
accessing the filesystem
 Key: HADOOP-10048
 URL: https://issues.apache.org/jira/browse/HADOOP-10048
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.3.0
Reporter: Jason Lowe


As noted in MAPREDUCE-5584 and HADOOP-7016, LocalDirAllocator can be a 
bottleneck for multithreaded setups like the ShuffleHandler.  We should 
consider moving to a lockless design or minimizing the critical sections to a 
very small amount of time that does not involve I/O operations.





[jira] [Resolved] (HADOOP-9912) globStatus of a symlink to a directory does not report symlink as a directory

2013-10-07 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved HADOOP-9912.


Resolution: Duplicate

Agree we can track the fix in HADOOP-9984, marking as a duplicate.

> globStatus of a symlink to a directory does not report symlink as a directory
> -
>
> Key: HADOOP-9912
> URL: https://issues.apache.org/jira/browse/HADOOP-9912
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.1.0-beta
>Reporter: Jason Lowe
>Priority: Blocker
> Attachments: HADOOP-9912-testcase.patch, new-hdfs.txt, new-local.txt, 
> old-hdfs.txt, old-local.txt
>
>
> globStatus for a path that is a symlink to a directory used to report the 
> resulting FileStatus as a directory but recently this has changed.





[jira] [Resolved] (HADOOP-9853) Please upgrade maven-surefire-plugin to version 2.14.1 or later

2013-09-09 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved HADOOP-9853.


Resolution: Duplicate

This has been fixed by HDFS-4491.

> Please upgrade maven-surefire-plugin to version 2.14.1 or later 
> 
>
> Key: HADOOP-9853
> URL: https://issues.apache.org/jira/browse/HADOOP-9853
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, tools
>Affects Versions: 2.1.0-beta
>Reporter: Randy Clayton
>
> The current version of maven-surefire-plugin fails when an individual test 
> times out, causing the test suite to stop. The newer plugin issues the correct 
> error message and allows the test suite to continue until completion. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9920) Should upgrade maven-surefire-plugin version to avoid hitting SUREFIRE-910

2013-09-09 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved HADOOP-9920.


Resolution: Duplicate

This has been fixed by HDFS-4491.

> Should upgrade maven-surefire-plugin version to avoid hitting SUREFIRE-910
> --
>
> Key: HADOOP-9920
> URL: https://issues.apache.org/jira/browse/HADOOP-9920
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.1.0-beta
>Reporter: Yu Li
>Assignee: Yu Li
> Attachments: HADOOP-9920.patch
>
>
> While running UT against 2.1.0-beta on our own Jenkins server, the UT was 
> interrupted at the hadoop common project with the below exception:
> {noformat}
> ExecutionException; nested exception is 
> java.util.concurrent.ExecutionException: java.lang.RuntimeException: The 
> forked VM terminated without saying properly goodbye. VM crash or System.exit 
> called ?
> {noformat}
> And further checking proves we have ran into 
> [SUREFIRE-910|http://jira.codehaus.org/browse/SUREFIRE-910] which reports the 
> same issue which got fixed in surefire v2.13 while our maven-surefire-plugin 
> version is still 2.12.3 for now. We should upgrade it to the latest 2.16



[jira] [Created] (HADOOP-9929) Insufficient permissions for a path reported as file not found

2013-09-03 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-9929:
--

 Summary: Insufficient permissions for a path reported as file not 
found
 Key: HADOOP-9929
 URL: https://issues.apache.org/jira/browse/HADOOP-9929
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.4-alpha
Reporter: Jason Lowe


Using "hadoop fs -ls" to list a path where the permissions of a parent 
directory are insufficient ends up reporting "no such file or directory" on the 
full path rather than reporting the permission issue.  For example:

{noformat}
$ hadoop fs -ls /user/abc/tests/data
ls: `/user/abc/tests/data': No such file or directory
$ hadoop fs -ls /user/abc
ls: Permission denied: user=somebody, access=READ_EXECUTE, 
inode="/user/abc":abc:hdfs:drwx--
{noformat}



[jira] [Created] (HADOOP-9912) globStatus of a symlink to a directory does not report symlink as a directory

2013-08-28 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-9912:
--

 Summary: globStatus of a symlink to a directory does not report 
symlink as a directory
 Key: HADOOP-9912
 URL: https://issues.apache.org/jira/browse/HADOOP-9912
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.3.0
Reporter: Jason Lowe
Priority: Blocker


globStatus for a path that is a symlink to a directory used to report the 
resulting FileStatus as a directory but recently this has changed.



[jira] [Created] (HADOOP-9894) Race condition in Shell leads to logged error stream handling exceptions

2013-08-21 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-9894:
--

 Summary: Race condition in Shell leads to logged error stream 
handling exceptions
 Key: HADOOP-9894
 URL: https://issues.apache.org/jira/browse/HADOOP-9894
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.1.0-beta
Reporter: Jason Lowe


Shell.runCommand starts an error stream handling thread and normally joins with 
it before closing the error stream.  However if parseExecResult throws an 
exception (e.g.: like Stat.parseExecResult does for FileNotFoundException) then 
the error thread is not joined and the error stream can be closed before the 
error stream handling thread is finished.  This causes the error stream 
handling thread to log an exception backtrace for a "normal" situation.
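The join-before-close ordering the report implies can be sketched as follows; the class and method names here are illustrative, not Hadoop's actual Shell code:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;

// Sketch: join the error-stream reader thread before closing the stream it
// reads, even when result parsing throws. Names are illustrative only.
public class ErrThreadJoin {
    public static String drainSafely(InputStream errStream) throws Exception {
        StringBuilder sb = new StringBuilder();
        Thread errThread = new Thread(() -> {
            try {
                int c;
                while ((c = errStream.read()) != -1) {
                    sb.append((char) c);
                }
            } catch (Exception e) {
                // With join-before-close, a close can no longer race the read.
            }
        });
        errThread.start();
        try {
            // ... parseExecResult(...) could throw here ...
        } finally {
            errThread.join();   // wait for the reader BEFORE closing
            errStream.close();  // now safe: no thread is still reading
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        String out = drainSafely(new ByteArrayInputStream("oops\n".getBytes()));
        System.out.println(out.trim()); // oops
    }
}
```

Joining in the finally block guarantees the reader thread sees EOF instead of a closed-stream exception, regardless of whether parsing succeeded.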



[jira] [Created] (HADOOP-9846) AbstractDelegationTokenSecretManager storage interface is inconsistent wrt. exceptions

2013-08-07 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-9846:
--

 Summary: AbstractDelegationTokenSecretManager storage interface is 
inconsistent wrt. exceptions
 Key: HADOOP-9846
 URL: https://issues.apache.org/jira/browse/HADOOP-9846
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.1.0-beta
Reporter: Jason Lowe


AbstractDelegationTokenSecretManager recently added interfaces for persisting 
keys and tokens in HADOOP-9574 but the treatment of exceptions from these 
interfaces is inconsistent.  Storing a master key can throw IOException but 
removing a master key cannot.  Storing or updating a token cannot throw 
IOException but removing a token can.  These should be made consistent, and 
arguably all of these interfaces should allow IOException to be thrown.



[jira] [Resolved] (HADOOP-6565) AbstractDelegationTokenSecretManager.stopThreads() will NPE if called before startThreads()

2013-08-07 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved HADOOP-6565.


Resolution: Duplicate

Dup of HADOOP-6554, ultimately fixed by HADOOP-6573.

> AbstractDelegationTokenSecretManager.stopThreads() will NPE if called before 
> startThreads()
> ---
>
> Key: HADOOP-6565
> URL: https://issues.apache.org/jira/browse/HADOOP-6565
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.22.0
>Reporter: Steve Loughran
>Priority: Minor
>
> Looking at the code for starting/stopping SecretManagers, it seems to me that 
> {{AbstractDelegationTokenSecretManager.stopThreads()}} assumes that 
> {{tokenRemoverThread}} is never null. That assumption is only valid if 
> {{AbstractDelegationTokenSecretManager.startThreads()}} was called first.
> the call to {{tokenRemoverThread.interrupt()}} should be guarded with a check 
> for {{tokenRemoverThread!=null}}
> I haven't encountered this in the field yet, but it should be trivial to 
> replicate in a test and then fix.
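The guard suggested above takes only a couple of lines; GuardedStop and its field mirror the report's names but are purely illustrative, not the actual secret-manager code:

```java
// Illustrative null guard for stopThreads() when startThreads() was never
// called; tokenRemoverThread stays null until startThreads() runs.
public class GuardedStop {
    private Thread tokenRemoverThread; // null until startThreads()

    public void stopThreads() {
        if (tokenRemoverThread != null) { // guard prevents the NPE
            tokenRemoverThread.interrupt();
        }
    }

    public static void main(String[] args) {
        new GuardedStop().stopThreads(); // no NPE even without startThreads()
        System.out.println("ok");
    }
}
```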



[jira] [Reopened] (HADOOP-9652) RawLocalFs#getFileLinkStatus does not fill in the link owner and mode

2013-07-30 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe reopened HADOOP-9652:



I reverted the changes from branch-2 as well.

> RawLocalFs#getFileLinkStatus does not fill in the link owner and mode
> -
>
> Key: HADOOP-9652
> URL: https://issues.apache.org/jira/browse/HADOOP-9652
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Colin Patrick McCabe
>Assignee: Andrew Wang
> Fix For: 2.3.0
>
> Attachments: hadoop-9452-1.patch, hadoop-9652-2.patch, 
> hadoop-9652-3.patch
>
>
> {{RawLocalFs#getFileLinkStatus}} does not actually get the owner and mode of 
> the symlink, but instead uses the owner and mode of the symlink target.  If 
> the target can't be found, it fills in bogus values (the empty string and 
> FsPermission.getDefault) for these.
> Symlinks have an owner distinct from the owner of the target they point to, 
> and getFileLinkStatus ought to expose this.
> In some operating systems, symlinks can have a permission other than 0777.  
> We ought to expose this in RawLocalFilesystem and other places, although we 
> don't necessarily have to support this behavior in HDFS.



[jira] [Created] (HADOOP-9757) Har metadata cache can grow without limit

2013-07-22 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-9757:
--

 Summary: Har metadata cache can grow without limit
 Key: HADOOP-9757
 URL: https://issues.apache.org/jira/browse/HADOOP-9757
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.9, 2.0.4-alpha
Reporter: Jason Lowe


MAPREDUCE-2459 added a metadata cache to the har filesystem, but the cache has 
no upper limits.  A long-running process that accesses many har archives will 
eventually run out of memory due to a har metadata cache that never retires 
entries.
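One conventional way to cap a metadata cache like this is an access-ordered LinkedHashMap with an eviction override; the sketch below shows the technique only and is not the actual har filesystem fix:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Bounded LRU cache: past maxEntries, the least recently used entry is
// retired instead of the map growing without limit.
public class BoundedCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public BoundedCache(int maxEntries) {
        super(16, 0.75f, true); // true = access order (LRU)
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries; // evict once over capacity
    }

    public static void main(String[] args) {
        BoundedCache<String, String> cache = new BoundedCache<>(2);
        cache.put("a.har", "meta-a");
        cache.put("b.har", "meta-b");
        cache.put("c.har", "meta-c"); // evicts a.har
        System.out.println(cache.containsKey("a.har")); // false
        System.out.println(cache.size());               // 2
    }
}
```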



[jira] [Created] (HADOOP-9734) Common RefreshUserMappingsProtocol and GetUserMappingsProtocol implementation

2013-07-15 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-9734:
--

 Summary: Common RefreshUserMappingsProtocol and 
GetUserMappingsProtocol implementation
 Key: HADOOP-9734
 URL: https://issues.apache.org/jira/browse/HADOOP-9734
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 2.0.4-alpha
Reporter: Jason Lowe
Priority: Minor


Many of the Hadoop server daemons support the concept of refreshing user 
mappings or getting the currently cached user-group mappings.  Currently there 
are protocol buffer definitions of RefreshUserMappingsProtocol and 
GetUserMappingsProtocol in HDFS, but using it requires packages to depend upon 
HDFS when they may otherwise have no reason to do so.  We should move the 
protocol buffer definitions and glue code to common for easier reuse.



[jira] [Created] (HADOOP-9725) Configuration allows final parameters to be changed via set method

2013-07-11 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-9725:
--

 Summary: Configuration allows final parameters to be changed via 
set method
 Key: HADOOP-9725
 URL: https://issues.apache.org/jira/browse/HADOOP-9725
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.23.8, 2.0.4-alpha
Reporter: Jason Lowe


Configuration parameters that are marked as final can still be changed via the 
{{set}} method.  The final designation currently is only observed when loading 
resources but ignored when individual properties are modified.
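The stricter behavior the report asks for can be sketched with a hypothetical wrapper; StrictProps and its methods are illustrative and do not reflect Configuration's real API:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch: remember which properties were loaded as final and reject later
// set() calls on them, instead of silently overwriting.
public class StrictProps {
    private final Map<String, String> props = new HashMap<>();
    private final Set<String> finalProps = new HashSet<>();

    public void loadFinal(String name, String value) {
        props.put(name, value);
        finalProps.add(name); // marked final by the loaded resource
    }

    public void set(String name, String value) {
        if (finalProps.contains(name)) {
            throw new IllegalStateException(name + " is final");
        }
        props.put(name, value);
    }

    public String get(String name) { return props.get(name); }

    public static void main(String[] args) {
        StrictProps conf = new StrictProps();
        conf.loadFinal("myprop", "val1");
        try {
            conf.set("myprop", "val2");
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage()); // myprop is final
        }
        System.out.println(conf.get("myprop")); // val1
    }
}
```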



[jira] [Created] (HADOOP-9686) Easy access to final parameters in Configuration

2013-07-02 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-9686:
--

 Summary: Easy access to final parameters in Configuration
 Key: HADOOP-9686
 URL: https://issues.apache.org/jira/browse/HADOOP-9686
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 2.1.0-beta
Reporter: Jason Lowe


It would be nice if there was an easy way to get final parameters within a 
Configuration.  This would allow clients who wrap Configuration to easily 
determine which properties should not be changed and implement stricter 
semantics for them (e.g.: throw an exception when attempts to change them are 
made).



[jira] [Created] (HADOOP-9622) bzip2 codec can drop records when reading data in splits

2013-06-05 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-9622:
--

 Summary: bzip2 codec can drop records when reading data in splits
 Key: HADOOP-9622
 URL: https://issues.apache.org/jira/browse/HADOOP-9622
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 0.23.8, 2.0.4-alpha
Reporter: Jason Lowe
Priority: Critical


Bzip2Codec.BZip2CompressionInputStream can cause records to be dropped when 
reading them in splits based on where record delimiters occur relative to 
compression block boundaries.

Thanks to [~knoguchi] for discovering this problem while working on PIG-3251.



[jira] [Resolved] (HADOOP-9546) "setsid exited with exit code" message on each hadoop command

2013-05-23 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved HADOOP-9546.


Resolution: Duplicate

Fixed by HADOOP-9593.

> "setsid exited with exit code" message on each hadoop command
> -
>
> Key: HADOOP-9546
> URL: https://issues.apache.org/jira/browse/HADOOP-9546
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Jason Lowe
>
> Each hadoop command is printing an INFO message about a setsid exit code.  
> This is at best annoying, and it prevents the start-dfs.sh startup script 
> from working properly since the INFO message is interpreted as more namenode 
> hosts to connect to.  start-dfs.sh uses {{hdfs getconf -namenodes}} which 
> like other hadoop commands prints the extra INFO message.  For example:
> {noformat}
> $ hdfs getconf -namenodes
> 2013-05-06 20:11:22,096 INFO  [main] util.Shell 
> (Shell.java:isSetsidSupported(311)) - setsid exited with exit code 0
> localhost
> {noformat}



[jira] [Created] (HADOOP-9583) test-patch gives +1 despite build failure when running tests

2013-05-21 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-9583:
--

 Summary: test-patch gives +1 despite build failure when running 
tests
 Key: HADOOP-9583
 URL: https://issues.apache.org/jira/browse/HADOOP-9583
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Jason Lowe
Priority: Critical


I've seen a couple of checkins recently where tests have timed out resulting in 
a Maven build failure yet test-patch reports an overall +1 on the patch.  This 
is encouraging commits of patches that subsequently break builds.



[jira] [Created] (HADOOP-9546) "setsid exited with exit code" message on each hadoop command

2013-05-06 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-9546:
--

 Summary: "setsid exited with exit code" message on each hadoop 
command
 Key: HADOOP-9546
 URL: https://issues.apache.org/jira/browse/HADOOP-9546
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Jason Lowe


Each hadoop command is printing an INFO message about a setsid exit code.  This 
is at best annoying, and it prevents the start-dfs.sh startup script from 
working properly since the INFO message is interpreted as more namenode hosts 
to connect to.  start-dfs.sh uses {{hdfs getconf -namenodes}} which like other 
hadoop commands prints the extra INFO message.  For example:

{noformat}
$ hdfs getconf -namenodes
2013-05-06 20:11:22,096 INFO  [main] util.Shell 
(Shell.java:isSetsidSupported(311)) - setsid exited with exit code 0
localhost
{noformat}



[jira] [Created] (HADOOP-9404) Reconcile dist-maketar.sh and dist-tar-stitching.sh

2013-03-13 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-9404:
--

 Summary: Reconcile dist-maketar.sh and dist-tar-stitching.sh
 Key: HADOOP-9404
 URL: https://issues.apache.org/jira/browse/HADOOP-9404
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0
Reporter: Jason Lowe
Priority: Trivial


Per a discussion in HADOOP-9397, there are a couple of different ways 
compressed tarballs are generated during the build.  Some projects create a 
{{dist-maketar.sh}} script that pipes the output of tar through gzip, while 
hadoop-dist creates a {{dist-tar-stitching.sh}} script which runs the commands 
separately.  Ideally these should be made consistent.



[jira] [Created] (HADOOP-9397) Incremental dist tar build fails

2013-03-12 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-9397:
--

 Summary: Incremental dist tar build fails
 Key: HADOOP-9397
 URL: https://issues.apache.org/jira/browse/HADOOP-9397
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Jason Lowe


Building a dist tar build when the dist tarball already exists from a previous 
build fails.



[jira] [Created] (HADOOP-9344) Configuration.writeXml can warn about deprecated properties user did not set

2013-02-27 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-9344:
--

 Summary: Configuration.writeXml can warn about deprecated 
properties user did not set
 Key: HADOOP-9344
 URL: https://issues.apache.org/jira/browse/HADOOP-9344
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.23.5, 2.0.3-alpha
Reporter: Jason Lowe


When the configuration is serialized it can emit warnings about deprecated 
properties that the user did not specify.  Converting the config to XML causes 
all the properties in the config to be processed for deprecation, and after 
HADOOP-8167 setting a proper config property also causes the deprecated forms 
to be set.  Processing all the keys in the config for deprecation therefore can 
trigger warnings for keys that were never specified by the user, leaving users 
confused as to how their code could be triggering these warnings.



[jira] [Created] (HADOOP-9193) hadoop script can inadvertently expand wildcard arguments when delegating to hdfs script

2013-01-09 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-9193:
--

 Summary: hadoop script can inadvertently expand wildcard arguments 
when delegating to hdfs script
 Key: HADOOP-9193
 URL: https://issues.apache.org/jira/browse/HADOOP-9193
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.23.5, 2.0.2-alpha
Reporter: Jason Lowe
Priority: Minor


The hadoop front-end script will print a deprecation warning and defer to the 
hdfs front-end script for certain commands, like fsck, dfs.  If a wildcard 
appears as an argument then it can be inadvertently expanded by the shell to 
match a local filesystem path before being sent to the hdfs script, which can 
be very confusing to the end user.

For example, the following two commands usually perform very different things, 
even though they should be equivalent:

hadoop fs -ls /tmp/\*
hadoop dfs -ls /tmp/\*

The former lists everything in the default filesystem under /tmp, while the 
latter expands /tmp/\* into everything in the *local* filesystem under /tmp and 
passes those as arguments to try to list in the default filesystem.




[jira] [Created] (HADOOP-9069) FileSystem.get leads to stack overflow if default FS is not configured with a scheme

2012-11-20 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-9069:
--

 Summary: FileSystem.get leads to stack overflow if default FS is 
not configured with a scheme
 Key: HADOOP-9069
 URL: https://issues.apache.org/jira/browse/HADOOP-9069
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.1-alpha, 0.23.3
Reporter: Jason Lowe
Priority: Minor


If fs.defaultFS is configured without a scheme, e.g.: "/", then FileSystem.get 
will infinitely recurse and lead to a stack overflow.  An example stacktrace 
from 0.23:

{noformat}
java.lang.StackOverflowError
at java.util.AbstractCollection.&lt;init&gt;(AbstractCollection.java:66)
at java.util.AbstractList.&lt;init&gt;(AbstractList.java:76)
at java.util.ArrayList.&lt;init&gt;(ArrayList.java:128)
at java.util.ArrayList.&lt;init&gt;(ArrayList.java:139)
at 
org.apache.hadoop.conf.Configuration.handleDeprecation(Configuration.java:430)
at org.apache.hadoop.conf.Configuration.get(Configuration.java:852)
at org.apache.hadoop.fs.FileSystem.getDefaultUri(FileSystem.java:171)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:163)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:290)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:163)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:290)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:163)
...
{noformat}




[jira] [Created] (HADOOP-8967) Reported source for config property can be misleading

2012-10-23 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-8967:
--

 Summary: Reported source for config property can be misleading
 Key: HADOOP-8967
 URL: https://issues.apache.org/jira/browse/HADOOP-8967
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.23.3
Reporter: Jason Lowe
Priority: Minor


Configuration.set tries to track the source of a property being set, but it 
mistakenly reports properties as being deprecated when they are not.  This is 
misleading and confusing for users examining a job's configuration.

For example, run a sleep job and check the job configuration on the job UI.  
The source for the "mapreduce.job.maps" property will be reported as "job.xml ⬅ 
because mapreduce.job.maps is deprecated".  This leads users to think 
mapreduce.job.maps is now a deprecated property and wonder what other property 
they should use instead.



[jira] [Created] (HADOOP-8962) RawLocalFileSystem.listStatus fails when a child filename contains a colon

2012-10-22 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-8962:
--

 Summary: RawLocalFileSystem.listStatus fails when a child filename 
contains a colon
 Key: HADOOP-8962
 URL: https://issues.apache.org/jira/browse/HADOOP-8962
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.3
Reporter: Jason Lowe
Priority: Critical


If listStatus is called on a directory that contains a file with a ':' in its 
name then it mistakenly thinks there is a scheme being specified and an 
exception is thrown because of a relative URI.
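The underlying parsing behavior can be demonstrated with plain java.net.URI; this is a JDK illustration of why a bare child name containing ':' is misread, not Hadoop code:

```java
import java.net.URI;
import java.net.URISyntaxException;

// In a URI reference, a ':' before the first '/' is taken as the scheme
// delimiter, so a child filename like "backup:2012" parses as scheme
// "backup" rather than as a plain relative path.
public class ColonPath {
    public static void main(String[] args) throws URISyntaxException {
        URI withColon = new URI("backup:2012");
        System.out.println(withColon.getScheme()); // backup (misread as scheme)

        // Prefixing "./" keeps the colon out of the scheme position.
        URI relative = new URI("./backup:2012");
        System.out.println(relative.getScheme());  // null
        System.out.println(relative.getPath());    // ./backup:2012
    }
}
```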




[jira] [Created] (HADOOP-8942) Thundering herd of RPCs with large responses leads to OOM

2012-10-18 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-8942:
--

 Summary: Thundering herd of RPCs with large responses leads to OOM
 Key: HADOOP-8942
 URL: https://issues.apache.org/jira/browse/HADOOP-8942
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.23.3
Reporter: Jason Lowe


When a large number of clients are all making calls with large amounts of 
response data then the IPC server can exhaust memory.  See MAPREDUCE-4730 for 
an example of this.

There does not appear to be any flow control between the server's handler 
threads and the responder thread.  If a handler thread cannot write out all of 
the response data without blocking, it queues up the remainder for the 
responder thread and goes back to the next call in the call queue.  If there 
are enough clients, this can cause the handler threads to overwhelm the heap by 
queueing response data faster than it can be processed.
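The missing flow control can be sketched with a bounded queue between handler and responder; the names and the zero-length sentinel below are illustrative, not the IPC server's actual design:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch: handlers hand pending response buffers to the responder through a
// bounded queue, so a handler blocks once the responder falls behind instead
// of queueing response data onto the heap without limit.
public class BoundedResponder {
    // Push 'buffers' responses through a queue capped at 'capacity'; returns
    // how many the responder consumed. At most 'capacity' buffers are ever
    // held in the queue at once.
    public static int drainAll(int buffers, int capacity) throws InterruptedException {
        BlockingQueue<byte[]> pending = new ArrayBlockingQueue<>(capacity);
        final int[] consumed = {0};
        Thread responder = new Thread(() -> {
            try {
                while (true) {
                    byte[] buf = pending.take(); // would be written to the socket
                    if (buf.length == 0) break;  // zero-length sentinel: stop
                    consumed[0]++;
                }
            } catch (InterruptedException ignored) { }
        });
        responder.start();
        for (int i = 0; i < buffers; i++) {
            pending.put(new byte[64]); // blocks when the responder lags
        }
        pending.put(new byte[0]); // sentinel
        responder.join();
        return consumed[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(drainAll(100, 4)); // 100
    }
}
```

A real server might prefer to defer the call or apply per-client limits rather than block a handler outright, but the bounded hand-off is the core of the backpressure idea.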



[jira] [Resolved] (HADOOP-8735) Missing support for dfs.umaskmode

2012-08-27 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved HADOOP-8735.


Resolution: Duplicate

> Missing support for dfs.umaskmode
> -
>
> Key: HADOOP-8735
> URL: https://issues.apache.org/jira/browse/HADOOP-8735
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.23.0, 2.0.0-alpha
>Reporter: Jason Lowe
>Priority: Critical
>
> dfs.umaskmode was a supported property in Hadoop 0.20/1.x, but it appears to 
> be completely ignored in 0.23/2.x.  We should at least have deprecated 
> support for this property.



[jira] [Created] (HADOOP-8735) Missing support for dfs.umaskmode

2012-08-27 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-8735:
--

 Summary: Missing support for dfs.umaskmode
 Key: HADOOP-8735
 URL: https://issues.apache.org/jira/browse/HADOOP-8735
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0-alpha, 0.23.0
Reporter: Jason Lowe
Priority: Critical


dfs.umaskmode was a supported property in Hadoop 0.20/1.x, but it appears to be 
completely ignored in 0.23/2.x.  We should at least have deprecated support for 
this property.



[jira] [Created] (HADOOP-8723) Remove tests and tests-sources jars from classpath

2012-08-23 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-8723:
--

 Summary: Remove tests and tests-sources jars from classpath
 Key: HADOOP-8723
 URL: https://issues.apache.org/jira/browse/HADOOP-8723
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Jason Lowe


Currently {{hadoop-config.sh}} is including any tests and tests-sources jars in 
the default classpath, as those jars are shipped in the dist tarball next to 
the main jars and the script is globbing everything in that directory.

The tests and tests-sources jars aren't required to run Hadoop, but they can 
cause breakage when inadvertently picked up.  See HDFS-3831 as an example.  
Ideally we should not be adding these jars to the classpath.





[jira] [Created] (HADOOP-8709) globStatus changed behavior from 0.20/1.x

2012-08-17 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-8709:
--

 Summary: globStatus changed behavior from 0.20/1.x
 Key: HADOOP-8709
 URL: https://issues.apache.org/jira/browse/HADOOP-8709
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0-alpha, 0.23.0
Reporter: Jason Lowe
Assignee: Jason Lowe
Priority: Critical


In 0.20 or 1.x, globStatus will return an empty array if the glob pattern does 
not match any files.  After HADOOP-6201 it throws FileNotFoundException.  The 
javadoc states it will return an empty array.





[jira] [Created] (HADOOP-8691) FsShell can print "Found xxx items" unnecessarily often

2012-08-13 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-8691:
--

 Summary: FsShell can print "Found xxx items" unnecessarily often
 Key: HADOOP-8691
 URL: https://issues.apache.org/jira/browse/HADOOP-8691
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0-alpha, 0.23.3
Reporter: Jason Lowe
Priority: Minor


The "Found xxx items" header that is printed with a file listing will often 
appear multiple times in not-so-helpful ways in light of globbing.  For example:

{noformat}
$ hadoop fs -ls 'teradata/*'  
Found 1 items
-rw-r--r--   1 someuser somegroup  0 2012-08-06 16:55 teradata/_SUCCESS
Found 1 items
-rw-r--r--   1 someuser somegroup   5000 2012-08-06 16:55 
teradata/part-m-0
Found 1 items
-rw-r--r--   1 someuser somegroup   5000 2012-08-06 16:55 
teradata/part-m-1
{noformat}

Seems like it should just print "Found 3 items" once at the top, or maybe not 
even print a header at all.





[jira] [Created] (HADOOP-8495) Update Netty to avoid leaking file descriptors during shuffle

2012-06-08 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-8495:
--

 Summary: Update Netty to avoid leaking file descriptors during 
shuffle
 Key: HADOOP-8495
 URL: https://issues.apache.org/jira/browse/HADOOP-8495
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 0.23.3
Reporter: Jason Lowe
Assignee: Jason Lowe
Priority: Critical


Netty 3.2.3.Final has a known bug where writes to a closed channel do not have 
their futures invoked.  See 
[Netty-374|https://issues.jboss.org/browse/NETTY-374].  This can lead to file 
descriptor leaks during shuffle as noted in MAPREDUCE-4298.
