[jira] [Created] (HADOOP-16127) In ipc.Client, put a new connection could happen after stop

2019-02-19 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-16127:


 Summary: In ipc.Client, put a new connection could happen after 
stop
 Key: HADOOP-16127
 URL: https://issues.apache.org/jira/browse/HADOOP-16127
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze


In getConnection(..), running can initially be true but become false before 
putIfAbsent is invoked, so a new connection can be put into the connection map 
after the client has stopped.
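The race is a classic check-then-act problem. A minimal, hypothetical simplification (stand-in names, not the actual ipc.Client code) showing how an entry can land in the map after stop() has cleared it:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical simplification of the race: the "running" check and the
// putIfAbsent are not atomic, so stop() can run between them.
public class StaleConnectionDemo {
  static final AtomicBoolean running = new AtomicBoolean(true);
  static final ConcurrentHashMap<String, String> connections =
      new ConcurrentHashMap<>();

  static void stop() {
    running.set(false);
    connections.clear();  // stop() assumes no new connections will appear
  }

  // Returns true if a connection was put into the map after stop().
  static boolean demonstrateLeak() {
    boolean observedRunning = running.get();  // (1) check passes
    stop();                                   // (2) stop() runs "concurrently"
    if (observedRunning) {
      connections.putIfAbsent("server:8020", "conn");  // (3) leaked entry
    }
    return !connections.isEmpty();
  }

  public static void main(String[] args) {
    System.out.println("leaked=" + demonstrateLeak());  // prints "leaked=true"
  }
}
```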



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16126) ipc.Client.stop() may sleep too long to wait for all connections

2019-02-19 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-16126:


 Summary: ipc.Client.stop() may sleep too long to wait for all 
connections
 Key: HADOOP-16126
 URL: https://issues.apache.org/jira/browse/HADOOP-16126
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze


{code}
//Client.java
  public void stop() {
...
// wait until all connections are closed
while (!connections.isEmpty()) {
  try {
Thread.sleep(100);
  } catch (InterruptedException e) {
  }
}
...
  }
{code}
In the code above, the sleep time is 100ms.  We found that simply changing the 
sleep time to 10ms could improve a Hive job's running time by 10x.
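Instead of tuning a fixed sleep interval, the wait could be made event-driven. A minimal sketch of that alternative (hypothetical, not the actual Hadoop change): each closing connection signals a latch, so stop() wakes as soon as the last connection closes, with no polling quantum to tune.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class StopLatchDemo {
  // Waits for `numConnections` simulated connections to close; returns
  // true if they all closed within the bound.
  static boolean stopAndWait(int numConnections) throws InterruptedException {
    final CountDownLatch allClosed = new CountDownLatch(numConnections);
    for (int i = 0; i < numConnections; i++) {
      new Thread(() -> {
        // simulate connection teardown work
        try { Thread.sleep(20); } catch (InterruptedException ignored) { }
        allClosed.countDown();  // each closing connection signals stop()
      }).start();
    }
    // Wakes as soon as the last connection closes -- no sleep interval.
    return allClosed.await(5, TimeUnit.SECONDS);
  }

  public static void main(String[] args) throws InterruptedException {
    System.out.println(stopAndWait(2));  // prints "true"
  }
}
```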






[jira] [Created] (HADOOP-14453) Split the maven modules into several profiles

2017-05-24 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-14453:


 Summary: Split the maven modules into several profiles
 Key: HADOOP-14453
 URL: https://issues.apache.org/jira/browse/HADOOP-14453
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze


Currently, all the modules are defined directly under the root pom.  As a 
result, we cannot select to build only some of the modules; we have to build 
all the modules in any case and, unfortunately, that takes a long time.

We propose splitting all the modules into multiple profiles so that we can 
build some of the modules by disabling some of the profiles.  All the profiles 
are enabled by default so that all the modules are built by default. 

For example, when making changes in common, we could build and run tests under 
common by disabling the hdfs, yarn, mapreduce, etc. profiles.  This will 
reduce the development time spent on compiling unrelated modules.

Note that this is for local maven builds.  We are not proposing to change 
Jenkins builds, which always build all the modules.
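A sketch of the proposed pom structure (hypothetical profile and module names):

{code}
<!-- Sketch: module groups wrapped in profiles that are active by default. -->
<profiles>
  <profile>
    <id>hdfs</id>
    <activation><activeByDefault>true</activeByDefault></activation>
    <modules>
      <module>hadoop-hdfs-project</module>
    </modules>
  </profile>
  <!-- similar profiles for yarn, mapreduce, tools, ... -->
</profiles>
{code}

One caveat with this approach: Maven deactivates activeByDefault profiles whenever any other profile is explicitly activated, so explicit deactivation (e.g. -P '!hdfs') is the safer switch for skipping a module group.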






[jira] [Created] (HADOOP-13227) AsyncCallHandler should use an event-driven architecture to handle async calls

2016-05-31 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-13227:


 Summary: AsyncCallHandler should use an event-driven architecture 
to handle async calls
 Key: HADOOP-13227
 URL: https://issues.apache.org/jira/browse/HADOOP-13227
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io, ipc
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze









[jira] [Created] (HADOOP-13168) Support Future.get with timeout in ipc async calls

2016-05-17 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-13168:


 Summary: Support Future.get with timeout in ipc async calls
 Key: HADOOP-13168
 URL: https://issues.apache.org/jira/browse/HADOOP-13168
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze


Currently, the Future returned by an ipc async call only supports Future.get() 
but not Future.get(timeout, unit).  We should support the latter as well.
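The requested semantics match java.util.concurrent.Future: a bounded wait that throws TimeoutException instead of blocking indefinitely. A small self-contained illustration (simulating a slow RPC with a sleeping task):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class FutureTimeoutDemo {
  // Returns "timed out" if the reply does not arrive within the timeout.
  static String callWithTimeout(long timeoutMs) throws Exception {
    ExecutorService pool = Executors.newSingleThreadExecutor();
    Future<String> reply = pool.submit(() -> {
      Thread.sleep(5_000);  // simulate a slow server response
      return "response";
    });
    try {
      return reply.get(timeoutMs, TimeUnit.MILLISECONDS);  // bounded wait
    } catch (TimeoutException e) {
      return "timed out";
    } finally {
      pool.shutdownNow();
    }
  }

  public static void main(String[] args) throws Exception {
    System.out.println(callWithTimeout(100));  // prints "timed out"
  }
}
```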






[jira] [Created] (HADOOP-13146) Refactor RetryInvocationHandler

2016-05-13 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-13146:


 Summary: Refactor RetryInvocationHandler
 Key: HADOOP-13146
 URL: https://issues.apache.org/jira/browse/HADOOP-13146
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor


- The exception handling is quite long.  It is better to refactor it into a 
separate method.
- The failover logic and synchronization can be moved to a new inner class.






[jira] [Created] (HADOOP-13103) Group resolution from LDAP may fail on javax.naming.ServiceUnavailableException

2016-05-05 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-13103:


 Summary: Group resolution from LDAP may fail on 
javax.naming.ServiceUnavailableException
 Key: HADOOP-13103
 URL: https://issues.apache.org/jira/browse/HADOOP-13103
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor


According to the 
[javadoc|https://docs.oracle.com/javase/7/docs/api/javax/naming/ServiceUnavailableException.html],
 ServiceUnavailableException is thrown when attempting to communicate with a 
directory or naming service and that service is not available. It might be 
unavailable for different reasons. For example, the server might be too busy to 
service the request, or the server might not be registered to service any 
requests, etc.

We should retry on it.
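A minimal sketch of the proposed retry behavior (a hypothetical helper, not the actual LdapGroupsMapping change): treat ServiceUnavailableException as transient and retry a bounded number of times with backoff.

```java
import javax.naming.ServiceUnavailableException;

public class LdapRetryDemo {
  interface LdapCall<T> { T run() throws Exception; }

  // Retries only on ServiceUnavailableException, up to maxAttempts.
  static <T> T withRetries(LdapCall<T> call, int maxAttempts) throws Exception {
    for (int attempt = 1; ; attempt++) {
      try {
        return call.run();
      } catch (ServiceUnavailableException e) {
        if (attempt >= maxAttempts) throw e;  // give up on the last attempt
        Thread.sleep(100L * attempt);         // simple linear backoff
      }
    }
  }

  public static void main(String[] args) throws Exception {
    final int[] failures = {2};  // the service is "busy" for two attempts
    String groups = withRetries(() -> {
      if (failures[0]-- > 0) throw new ServiceUnavailableException("busy");
      return "hadoop-users";
    }, 3);
    System.out.println(groups);  // prints "hadoop-users"
  }
}
```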






[jira] [Resolved] (HADOOP-8813) RPC Server and Client classes need InterfaceAudience and InterfaceStability annotations

2016-04-06 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze resolved HADOOP-8813.
-
   Resolution: Fixed
Fix Version/s: (was: 3.0.0)
   2.8.0

> RPC Server and Client classes need InterfaceAudience and InterfaceStability 
> annotations
> ---
>
> Key: HADOOP-8813
> URL: https://issues.apache.org/jira/browse/HADOOP-8813
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.0.0
>Reporter: Brandon Li
>Assignee: Brandon Li
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HADOOP-8813.patch, HADOOP-8813.patch
>
>
> RPC Server and Client classes need InterfaceAudience and InterfaceStability 
> annotations





[jira] [Reopened] (HADOOP-8813) RPC Server and Client classes need InterfaceAudience and InterfaceStability annotations

2016-04-06 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze reopened HADOOP-8813:
-

Reopen for merging to branch-2.

> RPC Server and Client classes need InterfaceAudience and InterfaceStability 
> annotations
> ---
>
> Key: HADOOP-8813
> URL: https://issues.apache.org/jira/browse/HADOOP-8813
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.0.0
>Reporter: Brandon Li
>Assignee: Brandon Li
>Priority: Trivial
> Fix For: 3.0.0
>
> Attachments: HADOOP-8813.patch, HADOOP-8813.patch
>
>
> RPC Server and Client classes need InterfaceAudience and InterfaceStability 
> annotations





[jira] [Created] (HADOOP-12923) Move the test code in ipc.Client to test

2016-03-14 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-12923:


 Summary: Move the test code in ipc.Client to test
 Key: HADOOP-12923
 URL: https://issues.apache.org/jira/browse/HADOOP-12923
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor


Some code is used only by tests.  Let's relocate it.





[jira] [Created] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls

2016-03-09 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-12910:


 Summary: Add new FileSystem API to support asynchronous method 
calls
 Key: HADOOP-12910
 URL: https://issues.apache.org/jira/browse/HADOOP-12910
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs
Reporter: Tsz Wo Nicholas Sze
Assignee: Xiaobing Zhou


Add a new API, namely FutureFileSystem (or AsynchronousFileSystem, if that is a 
better name).  All the APIs in FutureFileSystem are the same as in FileSystem 
except that the return type is wrapped by Future and they do not throw 
IOException, e.g.
{code}
  //FileSystem
  public boolean rename(Path src, Path dst) throws IOException;

  //FutureFileSystem
  public Future<Boolean> rename(Path src, Path dst);
{code}
Note that FutureFileSystem does not extend FileSystem.





[jira] [Created] (HADOOP-12909) Support asynchronous RPC calls

2016-03-09 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-12909:


 Summary: Support asynchronous RPC calls
 Key: HADOOP-12909
 URL: https://issues.apache.org/jira/browse/HADOOP-12909
 Project: Hadoop Common
  Issue Type: New Feature
  Components: ipc
Reporter: Tsz Wo Nicholas Sze
Assignee: Xiaobing Zhou


In ipc.Client, the underlying mechanism already supports asynchronous 
calls -- the calls share a connection, the call requests are sent using a 
thread pool, and the responses can arrive out of order.  Indeed, a synchronous 
call is implemented by invoking wait() in the caller thread in order to wait 
for the server response.

In this JIRA, we change ipc.Client to support an asynchronous mode.  In 
asynchronous mode, a call returns once the request has been sent out and does 
not wait for the response from the server.
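The asynchronous mode described above can be sketched with a CompletableFuture (hypothetical names; in the real ipc.Client a reader thread matches possibly out-of-order responses to calls by call id):

```java
import java.util.concurrent.CompletableFuture;

public class AsyncCallSketch {
  // Returns immediately after the request is handed off for sending; a
  // background thread completes the future when the response arrives.
  static CompletableFuture<String> call(String request) {
    CompletableFuture<String> response = new CompletableFuture<>();
    new Thread(() -> response.complete("reply-to-" + request)).start();
    return response;  // the caller is not blocked here
  }

  public static void main(String[] args) throws Exception {
    CompletableFuture<String> f = call("getFileInfo");
    // ... the caller can issue more requests on the same connection ...
    System.out.println(f.get());  // prints "reply-to-getFileInfo"
  }
}
```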





[jira] [Created] (HADOOP-12812) The math is incorrect in checkstyle comment

2016-02-16 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-12812:


 Summary: The math is incorrect in checkstyle comment
 Key: HADOOP-12812
 URL: https://issues.apache.org/jira/browse/HADOOP-12812
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Tsz Wo Nicholas Sze
Priority: Minor


| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 22s 
{color} | {color:red} hadoop-common-project/hadoop-common: patch generated 3 
new + 132 unchanged - 7 fixed = 135 total (was 139) {color} |

The math is wrong: 3 + 132 - 7 != 135.  I suggest changing it to the following:
- patch generated 3 new + 139 existing - 7 fixed = 135 total





[jira] [Created] (HADOOP-12524) Randomize the DFSStripedOutputStreamWithFailure tests

2015-10-27 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-12524:


 Summary: Randomize the DFSStripedOutputStreamWithFailure tests
 Key: HADOOP-12524
 URL: https://issues.apache.org/jira/browse/HADOOP-12524
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor


Currently, the DFSStripedOutputStreamWithFailure tests are run with fixed 
indices #0 - #19 and #59, and also a test with random index 
(testDatanodeFailureRandomLength).  We should randomize all the tests.





[jira] [Created] (HADOOP-12523) In FileSystem.Cache, the closeAll methods should not synchronize on the Cache object

2015-10-27 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-12523:


 Summary: In FileSystem.Cache, the closeAll methods should not 
synchronize on the Cache object
 Key: HADOOP-12523
 URL: https://issues.apache.org/jira/browse/HADOOP-12523
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze


All the closeAll methods hold the Cache lock, so when one of the three 
closeAll methods is being called, a new FileSystem object has to wait for the 
Cache lock.  The wait time can possibly be long since closing a FileSystem can 
be slow.
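One way to shorten the critical section (a sketch, not necessarily the committed fix): snapshot the cached entries while holding the lock, then close them outside it, so new FileSystem lookups are not blocked by slow closes.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CacheCloseSketch {
  private final Map<String, AutoCloseable> cache = new HashMap<>();

  synchronized void put(String key, AutoCloseable fs) { cache.put(key, fs); }

  void closeAll() throws Exception {
    List<AutoCloseable> toClose;
    synchronized (this) {       // hold the lock only long enough to copy
      toClose = new ArrayList<>(cache.values());
      cache.clear();
    }
    for (AutoCloseable fs : toClose) {
      fs.close();               // slow closes no longer block the cache
    }
  }

  public static void main(String[] args) throws Exception {
    CacheCloseSketch cache = new CacheCloseSketch();
    cache.put("hdfs://nn1", () -> System.out.println("closed nn1"));
    cache.closeAll();  // prints "closed nn1"
  }
}
```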





[jira] [Resolved] (HADOOP-11841) Remove ecschema-def.xml

2015-04-17 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze resolved HADOOP-11841.
--
   Resolution: Fixed
Fix Version/s: HDFS-7285
 Hadoop Flags: Reviewed

Thanks Jing for reviewing the patch.

I have committed this.

> Remove ecschema-def.xml
> ---
>
> Key: HADOOP-11841
> URL: https://issues.apache.org/jira/browse/HADOOP-11841
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Fix For: HDFS-7285
>
> Attachments: c11841_20150416.patch
>
>
> Currently ecschema-def.xml is only used by a test.  It is not clear if the 
> file is needed and how  to use that file.  Let's remove it for HDFS-7285.  We 
> may add it back later on if we decide that it is required.





[jira] [Reopened] (HADOOP-11841) Is ecschema-def.xml needed?

2015-04-16 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze reopened HADOOP-11841:
--

HDFS-7337 takes care of a lot of issues.  Let's discuss the xml file here.

> Is ecschema-def.xml needed?
> ---
>
> Key: HADOOP-11841
> URL: https://issues.apache.org/jira/browse/HADOOP-11841
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>
> Currently ecschema-def.xml is only used by a test.  Is the file needed?  How  
> to use that file?





[jira] [Created] (HADOOP-11841) Is ecschema-def.xml needed?

2015-04-16 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-11841:


 Summary: Is ecschema-def.xml needed?
 Key: HADOOP-11841
 URL: https://issues.apache.org/jira/browse/HADOOP-11841
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze


Currently ecschema-def.xml is only used by a test.  Is the file needed?  How  
to use that file?





[jira] [Created] (HADOOP-11840) ECSchema are considered as not equal even if they are logically equal

2015-04-16 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-11840:


 Summary: ECSchema are considered as not equal even if they are 
logically equal
 Key: HADOOP-11840
 URL: https://issues.apache.org/jira/browse/HADOOP-11840
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Tsz Wo Nicholas Sze


For example, one schema could have empty options while another schema has all 
the same fields except that its options contains NUM_DATA_UNITS_KEY => 
numDataUnits.  Then, these two schemas are considered not equal even though 
they are logically equal.
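The mismatch comes down to plain Map equality on the options (illustrative values below; the key name and the assumption that it merely restates the schema's own numDataUnits field are taken from the description above):

```java
import java.util.HashMap;
import java.util.Map;

public class SchemaOptionsDemo {
  public static void main(String[] args) {
    Map<String, String> emptyOptions = new HashMap<>();
    Map<String, String> redundantOptions = new HashMap<>();
    // Restates a value the schema already carries in its own field.
    redundantOptions.put("numDataUnits", "6");
    // Map.equals is false, so two schemas differing only in this
    // redundant entry compare as not equal despite being logically equal.
    System.out.println(emptyOptions.equals(redundantOptions));  // prints "false"
  }
}
```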





[jira] [Resolved] (HADOOP-6359) NetworkTopology.chooseRandom(..) throws an IllegalArgumentException

2015-02-11 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze resolved HADOOP-6359.
-
Resolution: Duplicate

Resolving as "Duplicate".

> NetworkTopology.chooseRandom(..) throws an IllegalArgumentException
> ---
>
> Key: HADOOP-6359
> URL: https://issues.apache.org/jira/browse/HADOOP-6359
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsz Wo Nicholas Sze
>  Labels: newbie
>
> When numOfDatanodes == 0, NetworkTopology.chooseRandom(..) throws an 
> IllegalArgumentException.
> {noformat}
> 2009-09-30 00:12:50,768 ERROR org.mortbay.log: /nn_browsedfscontent.jsp
> java.lang.IllegalArgumentException: n must be positive
> at java.util.Random.nextInt(Random.java:250)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:536)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:504)
> ...
> {noformat}





[jira] [Resolved] (HADOOP-8502) Quota accounting should be calculated based on actual size rather than block size

2015-02-10 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze resolved HADOOP-8502.
-
Resolution: Not a Problem

If the file is known to be small, it can use a small block size.  In this 
example, it can set the block size equal to 16kB.  Then it won't get a quota 
exception.

Resolving as not-a-problem.  Please feel free to reopen if you disagree.

> Quota accounting should be calculated based on actual size rather than block 
> size
> -
>
> Key: HADOOP-8502
> URL: https://issues.apache.org/jira/browse/HADOOP-8502
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: E. Sammer
>
> When calculating quotas, the block size is used rather than the actual size 
> of the file. This limits the granularity of quota enforcement to increments 
> of the block size, which is wasteful and limits the usefulness (i.e. it's 
> possible to violate the quota in a way that's not at all intuitive).
> {code}
> [esammer@xxx ~]$ hadoop fs -count -q /user/esammer/quota-test
> none inf 1048576 10485761 
>2  0 hdfs://xxx/user/esammer/quota-test
> [esammer@xxx ~]$ du /etc/passwd
> 4   /etc/passwd
> esammer@xxx ~]$ hadoop fs -put /etc/passwd /user/esammer/quota-test/
> 12/06/09 13:56:16 WARN hdfs.DFSClient: DataStreamer Exception: 
> org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: 
> org.apache.hadoop.hdf
> s.protocol.DSQuotaExceededException: The DiskSpace quota of 
> /user/esammer/quota-test is exceeded: quota=1048576 diskspace consumed=384.0m
> ...
> {code}
> Obviously the file in question would only occupy 12KB, not 384MB, and should 
> easily fit within the 1MB quota.





[jira] [Created] (HADOOP-11232) jersey-core-1.9 has a faulty glassfish-repo setting

2014-10-25 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-11232:


 Summary: jersey-core-1.9 has a faulty glassfish-repo setting
 Key: HADOOP-11232
 URL: https://issues.apache.org/jira/browse/HADOOP-11232
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze


The following was reported by [~sushanth].

hadoop-common brings in jersey-core-1.9 as a dependency by default.

This is problematic, since the pom file for jersey 1.9 hardcodes 
glassfish-repo as the place to get further transitive dependencies, which leads 
to a site that serves a static "this has moved" page instead of a 404. This 
results in faulty parent resolutions which, when asked for a pom file, return 
erroneous results.

The only way around this seems to be to add a series of exclusions for 
jersey-core, jersey-json, jersey-server and a bunch of others to hadoop-common, 
then to hadoop-hdfs, then to hadoop-mapreduce-client-core. I don't know how 
many more excludes are necessary before I can get this to work.

If you update your jersey.version to 1.14, this faulty pom goes away. Please 
either update that, or work with build infra to update our nexus pom for 
jersey-1.9 so that it does not include the faulty glassfish repo.

Another interesting note about this is that something changed yesterday evening 
to cause this break in behaviour. We have not had this particular problem in 
about 9+ months.





[jira] [Created] (HADOOP-10865) Add a Crc32 chunked verification benchmark for both direct and non-direct buffer cases

2014-07-19 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-10865:


 Summary: Add a Crc32 chunked verification benchmark for both 
direct and non-direct buffer cases
 Key: HADOOP-10865
 URL: https://issues.apache.org/jira/browse/HADOOP-10865
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor


Currently, it is not easy to compare Crc32 chunked verification 
implementations.  Let's add a benchmark.
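A minimal sanity sketch of the two buffer cases (a real benchmark would add timing loops or use a harness such as JMH; this only shows the direct and heap, i.e. non-direct, ByteBuffer paths producing the same checksum):

```java
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

public class Crc32BufferSketch {
  static long checksum(ByteBuffer buf) {
    CRC32 crc = new CRC32();
    crc.update(buf);  // handles both direct and heap (non-direct) buffers
    return crc.getValue();
  }

  public static void main(String[] args) {
    byte[] data = new byte[1 << 20];  // 1 MB test chunk
    long heap = checksum(ByteBuffer.wrap(data));        // non-direct case
    ByteBuffer direct = ByteBuffer.allocateDirect(data.length);
    direct.put(data);
    direct.flip();
    long dir = checksum(direct);                        // direct case
    System.out.println(heap == dir);  // prints "true"
  }
}
```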





[jira] [Created] (HADOOP-10778) Use NativeCrc32 only if it is faster

2014-07-02 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-10778:


 Summary: Use NativeCrc32 only if it is faster
 Key: HADOOP-10778
 URL: https://issues.apache.org/jira/browse/HADOOP-10778
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze


From the benchmark posted in [this 
comment|https://issues.apache.org/jira/browse/HDFS-6560?focusedCommentId=14044060&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14044060],
 NativeCrc32 is slower than java.util.zip.CRC32 for Java 7 and above when 
bytesPerChecksum > 512.





[jira] [Resolved] (HADOOP-6867) Using socket address for datanode registry breaks multihoming

2014-06-30 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze resolved HADOOP-6867.
-

Resolution: Not a Problem

I believe this is not a problem anymore after other JIRAs such as HDFS-4963.  
Please feel free to reopen this if it is not the case.  Resolving ...

> Using socket address for datanode registry breaks multihoming
> -
>
> Key: HADOOP-6867
> URL: https://issues.apache.org/jira/browse/HADOOP-6867
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.2
> Environment: hadoop-0.20-0.20.2+228-1, centos 5, distcp
>Reporter: Jordan Sissel
>
> Related: 
> * https://issues.apache.org/jira/browse/HADOOP-985
> * https://issues.apache.org/jira/secure/attachment/12350813/HADOOP-985-1.patch
> * http://old.nabble.com/public-IP-for-datanode-on-EC2-td19336240.html
> * 
> http://www.cloudera.com/blog/2008/12/securing-a-hadoop-cluster-through-a-gateway/
>  
> Datanodes register using their dns name (even configurable with 
> dfs.datanode.dns.interface). However, the Namenode only really uses the 
> source address that the registration came from when sharing it to clients 
> wanting to write to HDFS.
> Specific environment that causes this problem:
> * Datanode and Namenode multihomed on two networks.
> * Datanode registers to namenode using dns name on network #1
> * Client (distcp) connects to namenode on network #2 \(*) and is told to 
> write to datanodes on network #1, which doesn't work for us.
> \(*) Allowing contact to the namenode on multiple networks was achieved with 
> a socat proxy hack that tunnels network#2 to network#1 port 8020. This is 
> unrelated to the issue at hand.
> The cloudera link above recommends proxying for reasons other than 
> multihoming; it would work, but it doesn't sound like it would work well 
> (bandwidth, multiplicity, multitenant, etc).
> Our specific scenario is wanting to distcp over a different network interface 
> than the datanodes register themselves on, but it would be nice if both (all) 
> interfaces worked. We are internally going to patch hadoop to roll back parts 
> of the patch mentioned above so that we rely on the datanode name rather than 
> the socket address it uses to talk to the namenode. The alternate option is 
> to push config changes to all nodes that force them to listen/register on one 
> specific interface only. This helps us work around our specific problem, but 
> doesn't really help with multihoming. 
> I would propose that datanodes register all interface addresses during the 
> registration/heartbeat/whatever process does this and hdfs clients would be 
> given all addresses for a specific node to perform operations against and 
> they could select accordingly (or 'whichever worked first') just like 
> round-robin dns does.





[jira] [Created] (HADOOP-10741) A lightweight WebHDFS client library

2014-06-23 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-10741:


 Summary: A lightweight WebHDFS client library
 Key: HADOOP-10741
 URL: https://issues.apache.org/jira/browse/HADOOP-10741
 Project: Hadoop Common
  Issue Type: New Feature
  Components: tools
Reporter: Tsz Wo Nicholas Sze
Assignee: Mohammad Kamrul Islam


One of the motivations for creating WebHDFS is for applications connecting to 
HDFS from outside the cluster.  In order to do so, users have to either
# install Hadoop and use WebHdfsFileSystem, or
# develop their own client using the WebHDFS REST API.

For #1, it is very difficult to manage and unnecessarily complicated for other 
applications since Hadoop is not a lightweight library.  For #2, it is not easy 
to deal with security and handle transient errors.

Therefore, we propose adding a lightweight WebHDFS client as a separate 
library which does not depend on Common and HDFS.  The client can be packaged 
as a standalone jar.  Other applications simply add the jar to their classpath 
in order to use it.





[jira] [Created] (HADOOP-10674) Rewrite the PureJavaCrc32 loop for performance improvement

2014-06-09 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-10674:


 Summary: Rewrite the PureJavaCrc32 loop for performance improvement
 Key: HADOOP-10674
 URL: https://issues.apache.org/jira/browse/HADOOP-10674
 Project: Hadoop Common
  Issue Type: Bug
  Components: performance, util
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze


Below are some performance improvement opportunities in PureJavaCrc32:
- eliminate "off += 8; len -= 8;"
- replace T8_x_start with hard-coded constants
- eliminate the c0 - c7 local variables

On my machine, there is a 30% to 50% improvement for most of the cases.





[jira] [Created] (HADOOP-10473) TestCallQueueManager is still flaky

2014-04-08 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-10473:


 Summary: TestCallQueueManager is still flaky
 Key: HADOOP-10473
 URL: https://issues.apache.org/jira/browse/HADOOP-10473
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor


testSwapUnderContention counts the calls and then interrupts, as shown below.  
A call could be taken after the count but before the interrupt.
{code}
for (Taker t : consumers) {
  totalCallsConsumed += t.callsTaken;
  threads.get(t).interrupt();
}
{code}
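A sketch of one way to make the count stable (hypothetical, not necessarily the committed fix): interrupt and join every consumer before reading its counter, so no call can be taken after the count.

```java
import java.util.concurrent.atomic.AtomicLong;

public class CountAfterJoinDemo {
  // Returns true if the count is stable once the consumer has been joined.
  static boolean countIsStable() throws InterruptedException {
    final AtomicLong callsTaken = new AtomicLong();
    Thread consumer = new Thread(() -> {
      while (!Thread.currentThread().isInterrupted()) {
        callsTaken.incrementAndGet();  // simulate taking a call
      }
    });
    consumer.start();
    Thread.sleep(20);
    consumer.interrupt();
    consumer.join();                // the consumer is fully stopped here
    long total = callsTaken.get();  // safe: nothing can increment it now
    return total == callsTaken.get();
  }

  public static void main(String[] args) throws InterruptedException {
    System.out.println(countIsStable());  // prints "true"
  }
}
```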





[jira] [Resolved] (HADOOP-10455) When there is an exception, ipc.Server should first check whether it is a terse exception

2014-04-01 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze resolved HADOOP-10455.
--

   Resolution: Fixed
Fix Version/s: 2.4.1
 Hadoop Flags: Reviewed

I have committed this.

> When there is an exception, ipc.Server should first check whether it is a 
> terse exception
> --
>
> Key: HADOOP-10455
> URL: https://issues.apache.org/jira/browse/HADOOP-10455
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Fix For: 2.4.1
>
> Attachments: c10455_20140401.patch, c10455_20140401b.patch
>
>
> ipc.Server allows application servers to define terse exceptions; see 
> Server.addTerseExceptions.  For a terse exception, it prints only a short 
> message and not the stack trace.  However, if an exception is both a 
> RuntimeException and a terse exception, it still prints out the stack trace 
> of the exception.





[jira] [Created] (HADOOP-10455) When there is an exception, ipc.Server should first check whether it is a terse exception

2014-04-01 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-10455:


 Summary: When there is an exception, ipc.Server should first check 
whether it is a terse exception
 Key: HADOOP-10455
 URL: https://issues.apache.org/jira/browse/HADOOP-10455
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze


ipc.Server allows application servers to define terse exceptions; see 
Server.addTerseExceptions.  For a terse exception, it prints only a short 
message and not the stack trace.  However, if an exception is both a 
RuntimeException and a terse exception, it still prints out the stack trace of 
the exception.





[jira] [Created] (HADOOP-10449) Fix the javac warnings in the security packages.

2014-03-27 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-10449:


 Summary: Fix the javac warnings in the security packages.
 Key: HADOOP-10449
 URL: https://issues.apache.org/jira/browse/HADOOP-10449
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor


There are a few minor javac warnings.





[jira] [Created] (HADOOP-10437) Fix the javac warnings in the hadoop.util package

2014-03-25 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-10437:


 Summary: Fix the javac warnings in the hadoop.util package
 Key: HADOOP-10437
 URL: https://issues.apache.org/jira/browse/HADOOP-10437
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: util
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor


There are a few minor javac warnings in org.apache.hadoop.util.  We should fix 
them.





[jira] [Created] (HADOOP-10426) CreateOpts.getOpt(..) should declare with generic type argument

2014-03-24 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-10426:


 Summary: CreateOpts.getOpt(..) should declare with generic type 
argument
 Key: HADOOP-10426
 URL: https://issues.apache.org/jira/browse/HADOOP-10426
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor


Similar to CreateOpts.setOpt(..), the CreateOpts.getOpt(..) method should also 
be declared with a generic type parameter.  Then, all the casting 
from CreateOpts to its subclasses can be avoided.
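A minimal sketch of the idea (the `Opt`/`BlockSize` classes here are hypothetical stand-ins, not the real CreateOpts hierarchy): declaring the accessor with a type parameter and a class token moves the single unchecked cast inside the method, so callers never cast.

```java
public class CreateOptsDemo {
    static class Opt {}

    static class BlockSize extends Opt {
        final long value;
        BlockSize(long value) { this.value = value; }
    }

    // Generic lookup: returns the first option of the requested subclass,
    // or null when absent.  The cast is safe because of isInstance.
    @SuppressWarnings("unchecked")
    static <T extends Opt> T getOpt(Class<T> clazz, Opt... opts) {
        for (Opt o : opts) {
            if (clazz.isInstance(o)) {
                return (T) o;
            }
        }
        return null;
    }
}
```

A caller then writes `BlockSize b = getOpt(BlockSize.class, opts)` with no cast at the call site.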



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-10398) KerberosAuthenticator failed to fall back to PseudoAuthenticator after HADOOP-10078

2014-03-21 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze resolved HADOOP-10398.
--

Resolution: Invalid

Filed HADOOP-10416 and HADOOP-10417 for the server-side bugs.  Resolving this 
as invalid.

> KerberosAuthenticator failed to fall back to PseudoAuthenticator after 
> HADOOP-10078
> ---
>
> Key: HADOOP-10398
> URL: https://issues.apache.org/jira/browse/HADOOP-10398
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: a.txt, c10398_20140310.patch
>
>
> {code}
> //KerberosAuthenticator.java
>   if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {
> LOG.debug("JDK performed authentication on our behalf.");
> // If the JDK already did the SPNEGO back-and-forth for
> // us, just pull out the token.
> AuthenticatedURL.extractToken(conn, token);
> return;
>   } else ...
> {code}
> The problem with the code above is that HTTP_OK does not imply that 
> authentication completed.  We should check whether the token can be 
> extracted successfully.
> This problem was reported by [~bowenzhangusa] in [this 
> comment|https://issues.apache.org/jira/browse/HADOOP-10078?focusedCommentId=13896823&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13896823]
>  earlier.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10417) There is no token for anonymous authentication

2014-03-21 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-10417:


 Summary: There is no token for anonymous authentication
 Key: HADOOP-10417
 URL: https://issues.apache.org/jira/browse/HADOOP-10417
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tsz Wo Nicholas Sze


According to [~tucu00], if ANONYMOUS is enabled, then there is a token (cookie) 
and the response is 200.  However, the code below never sets the cookie when 
the token is ANONYMOUS.

{code}
//AuthenticationFilter.doFilter(..)
  if (newToken && !token.isExpired() && token != 
AuthenticationToken.ANONYMOUS) {
String signedToken = signer.sign(token.toString());
createAuthCookie(httpResponse, signedToken, getCookieDomain(),
getCookiePath(), token.getExpires(), isHttps);
  }
{code}
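The mismatch can be reduced to the boolean condition in the snippet above.  This is a hedged sketch, not the actual AuthenticationFilter code: the string "ANONYMOUS" stands in for the AuthenticationToken.ANONYMOUS sentinel.

```java
public class CookieConditionDemo {
    static final String ANONYMOUS = "ANONYMOUS";

    // Current behavior per the snippet: no cookie is set for ANONYMOUS.
    static boolean setsCookieCurrent(boolean newToken, boolean expired, String token) {
        return newToken && !expired && !ANONYMOUS.equals(token);
    }

    // Behavior the report implies is expected: the cookie is also set
    // when the token is ANONYMOUS.
    static boolean setsCookieExpected(boolean newToken, boolean expired, String token) {
        return newToken && !expired;
    }
}
```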



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10416) If there is an expired token, PseudoAuthenticationHandler should renew it

2014-03-21 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-10416:


 Summary: If there is an expired token, PseudoAuthenticationHandler 
should renew it
 Key: HADOOP-10416
 URL: https://issues.apache.org/jira/browse/HADOOP-10416
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor


PseudoAuthenticationHandler currently only gets the username from the 
"user.name" parameter.  It should also renew an expired auth token if one is 
available in the cookies.




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10407) Fix the javac warnings in the ipc package.

2014-03-12 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-10407:


 Summary: Fix the javac warnings in the ipc package.
 Key: HADOOP-10407
 URL: https://issues.apache.org/jira/browse/HADOOP-10407
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor


Fix the javac warnings in the org.apache.hadoop.ipc package.  Most of them are 
generic warnings.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10406) TestIPC.testIpcWithReaderQueuing may fail

2014-03-12 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-10406:


 Summary: TestIPC.testIpcWithReaderQueuing may fail
 Key: HADOOP-10406
 URL: https://issues.apache.org/jira/browse/HADOOP-10406
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Reporter: Tsz Wo Nicholas Sze


The test may fail with an AssertionError.  The value of 
server.getNumOpenConnections() could be larger than maxAccept; see the 
comments for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10398) KerberosAuthenticator failed to fall back to PseudoAuthenticator after HADOOP-10078

2014-03-10 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-10398:


 Summary: KerberosAuthenticator failed to fall back to 
PseudoAuthenticator after HADOOP-10078
 Key: HADOOP-10398
 URL: https://issues.apache.org/jira/browse/HADOOP-10398
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze


{code}
//KerberosAuthenticator.java
  if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {
LOG.debug("JDK performed authentication on our behalf.");
// If the JDK already did the SPNEGO back-and-forth for
// us, just pull out the token.
AuthenticatedURL.extractToken(conn, token);
return;
  } else ...
{code}
The problem with the code above is that HTTP_OK does not imply that 
authentication completed.  We should check whether the token can be extracted 
successfully.

This problem was reported by [~bowenzhangusa] in [this 
comment|https://issues.apache.org/jira/browse/HADOOP-10078?focusedCommentId=13896823&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13896823]
 earlier.
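The check being asked for can be sketched as follows.  This is a hypothetical simplification rather than the actual KerberosAuthenticator fix: `tokenValue` stands in for whatever AuthenticatedURL.extractToken produced, and an empty token means the JDK did not really complete SPNEGO on our behalf.

```java
public class FallbackDemo {
    static final int HTTP_OK = 200;

    // Treat the request as authenticated only when the response is 200 AND a
    // token was actually extracted; otherwise the caller should fall back
    // (e.g. to PseudoAuthenticator) instead of returning early.
    public static boolean shouldFallBack(int responseCode, String tokenValue) {
        boolean authenticated = responseCode == HTTP_OK
            && tokenValue != null && !tokenValue.isEmpty();
        return !authenticated;
    }
}
```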



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10393) Fix hadoop-auth javac warnings

2014-03-07 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-10393:


 Summary: Fix hadoop-auth javac warnings
 Key: HADOOP-10393
 URL: https://issues.apache.org/jira/browse/HADOOP-10393
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor


There are quite a few generic warnings and other javac warnings in hadoop-auth. 
 All of them are minor.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10107) Server.getNumOpenConnections may throw NPE

2013-11-15 Thread Tsz Wo (Nicholas), SZE (JIRA)
Tsz Wo (Nicholas), SZE created HADOOP-10107:
---

 Summary: Server.getNumOpenConnections may throw NPE
 Key: HADOOP-10107
 URL: https://issues.apache.org/jira/browse/HADOOP-10107
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Daryn Sharp


Found this in [build 
#5440|https://builds.apache.org/job/PreCommit-HDFS-Build/5440/testReport/junit/org.apache.hadoop.hdfs.server.blockmanagement/TestUnderReplicatedBlocks/testSetrepIncWithUnderReplicatedBlocks/]

Caused by: java.lang.NullPointerException
at org.apache.hadoop.ipc.Server.getNumOpenConnections(Server.java:2434)
at 
org.apache.hadoop.ipc.metrics.RpcMetrics.numOpenConnections(RpcMetrics.java:74)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HADOOP-9899) Remove the debug message added by HADOOP-8855

2013-08-22 Thread Tsz Wo (Nicholas), SZE (JIRA)
Tsz Wo (Nicholas), SZE created HADOOP-9899:
--

 Summary: Remove the debug message added by HADOOP-8855
 Key: HADOOP-9899
 URL: https://issues.apache.org/jira/browse/HADOOP-9899
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor


HADOOP-8855 added a debug message which was printed to System.out.
{code}
//KerberosAuthenticator.java
  private void sendToken(byte[] outToken) throws IOException, 
AuthenticationException {
new Exception("sendToken").printStackTrace(System.out);
...
}
{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9793) RetryInvocationHandler uses raw types that should be parameterized

2013-07-31 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HADOOP-9793.


Resolution: Duplicate

This is now fixed by HADOOP-9803.

> RetryInvocationHandler uses raw types that should be parameterized
> --
>
> Key: HADOOP-9793
> URL: https://issues.apache.org/jira/browse/HADOOP-9793
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Suresh Srinivas
>Priority: Minor
>
> This causes javac warnings as shown below:
> {noformat}
> 274c274,275
> < [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/RetryInvocationHandler.java:[147,45]
>  [unchecked] unchecked call to performFailover(T) as a member of the raw type 
> org.apache.hadoop.io.retry.FailoverProxyProvider
> ---
> > [WARNING] 
> > /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/RetryInvocationHandler.java:[103,24]
> >  [unchecked] unchecked call to 
> > getMethod(java.lang.String,java.lang.Class...) as a member of the raw 
> > type java.lang.Class
> > [WARNING] 
> > /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/RetryInvocationHandler.java:[152,45]
> >  [unchecked] unchecked call to performFailover(T) as a member of the raw 
> > type org.apache.hadoop.io.retry.FailoverProxyProvider
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9803) Add generic type parameter to RetryInvocationHandler

2013-07-30 Thread Tsz Wo (Nicholas), SZE (JIRA)
Tsz Wo (Nicholas), SZE created HADOOP-9803:
--

 Summary: Add generic type parameter to RetryInvocationHandler
 Key: HADOOP-9803
 URL: https://issues.apache.org/jira/browse/HADOOP-9803
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor


{code}
//RetryInvocationHandler.java
private final FailoverProxyProvider proxyProvider;
{code}
The FailoverProxyProvider field above requires a generic type parameter, so 
RetryInvocationHandler should also have a generic type parameter.
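A minimal sketch of the proposed change (simplified, hypothetical interfaces, not the real io.retry classes): giving the handler the same type parameter as the provider removes the raw type from the field.

```java
public class RetryDemo {
    interface FailoverProxyProvider<T> {
        T getProxy();
        void performFailover(T currentProxy);
    }

    static class RetryInvocationHandler<T> {
        // Parameterized field: no raw-type javac warning, and calls such as
        // proxyProvider.performFailover(proxy) are checked at compile time.
        private final FailoverProxyProvider<T> proxyProvider;

        RetryInvocationHandler(FailoverProxyProvider<T> proxyProvider) {
            this.proxyProvider = proxyProvider;
        }

        T currentProxy() {
            return proxyProvider.getProxy();
        }
    }
}
```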

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9773) TestLightWeightCache fails

2013-07-25 Thread Tsz Wo (Nicholas), SZE (JIRA)
Tsz Wo (Nicholas), SZE created HADOOP-9773:
--

 Summary: TestLightWeightCache fails
 Key: HADOOP-9773
 URL: https://issues.apache.org/jira/browse/HADOOP-9773
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE


It fails on some size-limit tests when the random seed is 1374774736885L.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9754) Clean up RPC code

2013-07-21 Thread Tsz Wo (Nicholas), SZE (JIRA)
Tsz Wo (Nicholas), SZE created HADOOP-9754:
--

 Summary: Clean up RPC code
 Key: HADOOP-9754
 URL: https://issues.apache.org/jira/browse/HADOOP-9754
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: c9754_20130722.patch

Cleanup the RPC code for the following problems:
- Remove unnecessary "throws IOException/InterruptedException".
- Fix generic warnings.
- Fix other javac warnings.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9751) Add clientId and retryCount to RpcResponseHeaderProto

2013-07-19 Thread Tsz Wo (Nicholas), SZE (JIRA)
Tsz Wo (Nicholas), SZE created HADOOP-9751:
--

 Summary: Add clientId and retryCount to RpcResponseHeaderProto
 Key: HADOOP-9751
 URL: https://issues.apache.org/jira/browse/HADOOP-9751
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE


We have clientId, callId and retryCount in RpcRequestHeaderProto.  However, we 
only have callId but not clientId and retryCount in RpcResponseHeaderProto.

It is useful to have clientId and retryCount in the responses for applications 
such as an RPC proxy server.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9670) Use a long to replace NfsTime

2013-06-25 Thread Tsz Wo (Nicholas), SZE (JIRA)
Tsz Wo (Nicholas), SZE created HADOOP-9670:
--

 Summary: Use a long to replace NfsTime
 Key: HADOOP-9670
 URL: https://issues.apache.org/jira/browse/HADOOP-9670
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Brandon Li
Priority: Minor


NfsTime is a class with two ints.  It can be replaced by a long for better 
performance and a smaller memory footprint.
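One way to do the replacement, sketched under the assumption that NfsTime's two ints are seconds and nanoseconds: pack seconds into the high 32 bits of a long and nanoseconds into the low 32 bits.

```java
public class NfsTimePacking {
    // Pack (seconds, nseconds) into one long: high 32 bits = seconds,
    // low 32 bits = nseconds.  The mask prevents sign extension of nseconds.
    public static long pack(int seconds, int nseconds) {
        return ((long) seconds << 32) | (nseconds & 0xFFFFFFFFL);
    }

    public static int seconds(long packed) {
        return (int) (packed >> 32);
    }

    public static int nseconds(long packed) {
        return (int) packed;
    }
}
```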

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9669) There are multiple array creations and array copies for a single nfs rpc reply

2013-06-25 Thread Tsz Wo (Nicholas), SZE (JIRA)
Tsz Wo (Nicholas), SZE created HADOOP-9669:
--

 Summary: There are multiple array creations and array copies for a 
single nfs rpc reply
 Key: HADOOP-9669
 URL: https://issues.apache.org/jira/browse/HADOOP-9669
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Brandon Li


XDR.writeXxx(..) methods ultimately use the static XDR.append(..) for writing 
each data type.  The static append creates a new array and copies the data.  
Therefore, for a single reply such as RpcAcceptedReply.voidReply(..), there are 
multiple array creations and array copies.  For example, there are at least 6 
array creations and array copies for RpcAcceptedReply.voidReply(..).
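The usual fix for this pattern, sketched here as a hypothetical stand-in for the XDR writer: accumulate all writes into one growable buffer and copy out once at the end, instead of allocating a fresh array on every append.

```java
import java.io.ByteArrayOutputStream;

public class XdrBufferDemo {
    private final ByteArrayOutputStream buf = new ByteArrayOutputStream();

    // Write a 4-byte big-endian int, as XDR requires, into the shared buffer.
    // The buffer grows with amortized cost; no per-write array copy.
    public XdrBufferDemo writeInt(int v) {
        buf.write((v >>> 24) & 0xFF);
        buf.write((v >>> 16) & 0xFF);
        buf.write((v >>> 8) & 0xFF);
        buf.write(v & 0xFF);
        return this;
    }

    public byte[] getBytes() {
        return buf.toByteArray(); // the single copy, done once per reply
    }
}
```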


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9544) backport UTF8 encoding fixes to branch-1

2013-05-03 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HADOOP-9544.


   Resolution: Fixed
Fix Version/s: 1.2.0

I have committed this.  Thanks, Chris!

> backport UTF8 encoding fixes to branch-1
> 
>
> Key: HADOOP-9544
> URL: https://issues.apache.org/jira/browse/HADOOP-9544
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 1.3.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 1.2.0
>
> Attachments: HDFS-4795-branch-1.1.patch, HDFS-4795-branch-1.2.patch
>
>
> The trunk code has received numerous bug fixes related to UTF8 encoding.  I 
> recently observed a branch-1-based cluster fail to load its fsimage due to 
> these bugs.  I've confirmed that the bug fixes existing on trunk will resolve 
> this, so I'd like to backport to branch-1.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9543) TestFsShellReturnCode may fail in branch-1

2013-05-02 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HADOOP-9543.


   Resolution: Fixed
Fix Version/s: 1.2.0
 Hadoop Flags: Reviewed

Thanks Arpit for reviewing the patch.

I have committed this.

> TestFsShellReturnCode may fail in branch-1
> --
>
> Key: HADOOP-9543
> URL: https://issues.apache.org/jira/browse/HADOOP-9543
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
>Priority: Minor
> Fix For: 1.2.0
>
> Attachments: c9543_20130502.patch
>
>
> There is a hardcoded username "admin" in TestFsShellReturnCode. If "admin" 
> does not exist in the local fs, the test may fail.  Before HADOOP-9502, the 
> failure of the command is ignored silently, i.e. the command returns success 
> even if it indeed failed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9502) chmod does not return error exit codes for some exceptions

2013-04-24 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HADOOP-9502.


   Resolution: Fixed
Fix Version/s: 1.2.0
 Hadoop Flags: Reviewed

I have committed this.

> chmod does not return error exit codes for some exceptions
> --
>
> Key: HADOOP-9502
> URL: https://issues.apache.org/jira/browse/HADOOP-9502
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Ramya Sunil
>Assignee: Tsz Wo (Nicholas), SZE
>Priority: Minor
> Fix For: 1.2.0
>
> Attachments: c9502_20130424.patch
>
>
> When some dfs operations fail due to SnapshotAccessControlException, valid 
> exit codes are not returned.
> E.g:
> {noformat}
> -bash-4.1$  hadoop dfs -chmod -R 755 
> /user/foo/hdfs-snapshots/test0/.snapshot/s0
> chmod: changing permissions of 
> 'hdfs://:8020/user/foo/hdfs-snapshots/test0/.snapshot/s0':org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotAccessControlException:
>  Modification on read-only snapshot is disallowed
> -bash-4.1$ echo $?
> 0
> -bash-4.1$  hadoop dfs -chown -R hdfs:users 
> /user/foo/hdfs-snapshots/test0/.snapshot/s0
> chown: changing ownership of 
> 'hdfs://:8020/user/foo/hdfs-snapshots/test0/.snapshot/s0':org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotAccessControlException:
>  Modification on read-only snapshot is disallowed
> -bash-4.1$ echo $?
> 0
> {noformat}
> Similar problems exist for some other exceptions such as SafeModeException.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9492) Fix the typo in testConf.xml to make it consistent with FileUtil#copy()

2013-04-23 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HADOOP-9492.


   Resolution: Fixed
Fix Version/s: 1.2.0
 Hadoop Flags: Reviewed

I have committed this.  Thanks, Jing!

Also thanks Chris for reviewing the patch.

> Fix the typo in testConf.xml to make it consistent with FileUtil#copy()
> ---
>
> Key: HADOOP-9492
> URL: https://issues.apache.org/jira/browse/HADOOP-9492
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Trivial
> Fix For: 1.2.0
>
> Attachments: HADOOP-9492.b1.patch
>
>
> HADOOP-9473 fixed a typo in FileUtil#copy(). We need to fix the same typo in 
> testConf.xml accordingly. Otherwise TestCLI will fail in branch-1.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9458) In branch-1, client may not retry in rpc.Client.call(..)

2013-04-05 Thread Tsz Wo (Nicholas), SZE (JIRA)
Tsz Wo (Nicholas), SZE created HADOOP-9458:
--

 Summary: In branch-1, client may not retry in rpc.Client.call(..)
 Key: HADOOP-9458
 URL: https://issues.apache.org/jira/browse/HADOOP-9458
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Reporter: Tsz Wo (Nicholas), SZE


In ipc.Client.call(..) (around line 1097), the client does not retry if it 
could set up a connection (i.e. Connection connection = getConnection(remoteId, 
call)) to the server but the call fails afterward.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Reopened] (HADOOP-9112) test-patch should -1 for @Tests without a timeout

2013-03-24 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE reopened HADOOP-9112:



> test-patch should -1 for @Tests without a timeout
> -
>
> Key: HADOOP-9112
> URL: https://issues.apache.org/jira/browse/HADOOP-9112
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Todd Lipcon
>Assignee: Surenkumar Nihalani
> Fix For: 3.0.0
>
> Attachments: HADOOP-9112-1.patch, HADOOP-9112-2.patch, 
> HADOOP-9112-3.patch, HADOOP-9112-4.patch, HADOOP-9112-5.patch, 
> HADOOP-9112-6.patch, HADOOP-9112-7.patch
>
>
> With our current test running infrastructure, if a test with no timeout set 
> runs too long, it triggers a surefire-wide timeout, which for some reason 
> doesn't show up as a failed test in the test-patch output. Given that, we 
> should require that all tests have a timeout set, and have test-patch enforce 
> this with a simple check

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9423) In branch-1, native configuration is generated even if compile.native is not set

2013-03-20 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HADOOP-9423.


Resolution: Duplicate

> In branch-1, native configuration is generated even if compile.native is not 
> set
> 
>
> Key: HADOOP-9423
> URL: https://issues.apache.org/jira/browse/HADOOP-9423
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
>Priority: Minor
> Attachments: c9423_20130321.patch
>
>
> The "create-native-configure" ant target will be executed even if 
> compile.native is not set.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9423) In branch-1, native configuration is generated even if compile.native is not set

2013-03-20 Thread Tsz Wo (Nicholas), SZE (JIRA)
Tsz Wo (Nicholas), SZE created HADOOP-9423:
--

 Summary: In branch-1, native configuration is generated even if 
compile.native is not set
 Key: HADOOP-9423
 URL: https://issues.apache.org/jira/browse/HADOOP-9423
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor


The "create-native-configure" ant target will be executed even if 
compile.native is not set.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9270) FileUtil#unTarUsingTar on branch-trunk-win contains an incorrect comment

2013-01-31 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HADOOP-9270.


   Resolution: Fixed
Fix Version/s: trunk-win

I have committed this.  Thanks, Chris!

Thanks also Arpit for reviewing the patch.

> FileUtil#unTarUsingTar on branch-trunk-win contains an incorrect comment
> 
>
> Key: HADOOP-9270
> URL: https://issues.apache.org/jira/browse/HADOOP-9270
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Trivial
> Fix For: trunk-win
>
> Attachments: HADOOP-9270-branch-trunk-win.1.patch
>
>
> HADOOP-9081 changed Windows handling of untar to use pure Java.  There had 
> been a comment in that code describing why we need to pass a particular flag 
> to the external tar command on Windows.  Now that Windows doesn't call the 
> external tar command, the comment is no longer relevant.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9252) StringUtils.limitDecimalTo2(..) is unnecessarily synchronized

2013-01-25 Thread Tsz Wo (Nicholas), SZE (JIRA)
Tsz Wo (Nicholas), SZE created HADOOP-9252:
--

 Summary: StringUtils.limitDecimalTo2(..) is unnecessarily 
synchronized
 Key: HADOOP-9252
 URL: https://issues.apache.org/jira/browse/HADOOP-9252
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor


limitDecimalTo2(double) currently uses decimalFormat, a static field, so it has 
to be synchronized.  The synchronization is unnecessary since the method can 
simply use String.format(..).
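A sketch of the unsynchronized replacement: String.format builds its own formatter per call, so there is no shared mutable state to lock.  Pinning the locale is an extra choice made here (not stated in the issue) so the output uses '.' as the decimal separator on any machine.

```java
import java.util.Locale;

public class LimitDecimal {
    // Thread-safe without synchronization: no shared DecimalFormat instance.
    public static String limitDecimalTo2(double d) {
        return String.format(Locale.ROOT, "%.2f", d);
    }
}
```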

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9115) Deadlock in configuration when writing configuration to hdfs

2012-12-04 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HADOOP-9115.


   Resolution: Fixed
Fix Version/s: 1.1.2

I have committed this.  Thanks, Jing!

> Deadlock in configuration when writing configuration to hdfs
> 
>
> Key: HADOOP-9115
> URL: https://issues.apache.org/jira/browse/HADOOP-9115
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1.1.1
>Reporter: Arpit Gupta
>Assignee: Jing Zhao
>Priority: Blocker
> Fix For: 1.1.2
>
> Attachments: HADOOP-7082.b1.002.patch, HADOOP-7082.b1.patch, 
> hive-jstack.log
>
>
> This was noticed when using hive with hadoop-1.1.1 and running 
> {code}
> select count(*) from tbl;
> {code}
> This would cause a deadlock in Configuration. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9111) Fix failed testcases with @ignore annotation In branch-1

2012-12-03 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HADOOP-9111.


   Resolution: Fixed
Fix Version/s: 1-win
   1.2.0
 Hadoop Flags: Reviewed

I have committed this.  Thanks, Jing!

> Fix failed testcases with @ignore annotation In branch-1
> 
>
> Key: HADOOP-9111
> URL: https://issues.apache.org/jira/browse/HADOOP-9111
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 1.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Fix For: 1.2.0, 1-win
>
> Attachments: HADOOP-9111-b1.001.patch
>
>
> Currently in branch-1, several failed testcases have the @Ignore annotation, 
> which does not take effect because these testcases are still using JUnit3. 
> This jira plans to change these testcases to JUnit4 to make @Ignore work.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9099) NetUtils.normalizeHostName fails on domains where UnknownHost resolves to an IP address

2012-11-27 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HADOOP-9099.


   Resolution: Fixed
Fix Version/s: 1-win
   1.2.0

I have committed this.  Thanks, Ivan!

> NetUtils.normalizeHostName fails on domains where UnknownHost resolves to an 
> IP address
> ---
>
> Key: HADOOP-9099
> URL: https://issues.apache.org/jira/browse/HADOOP-9099
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1-win
>Reporter: Ivan Mitic
>Assignee: Ivan Mitic
>Priority: Minor
> Fix For: 1.2.0, 1-win
>
> Attachments: HADOOP-9099.branch-1-win.patch
>
>
> I just hit this failure. We should use a more distinctive string than 
> "UnknownHost":
> Testcase: testNormalizeHostName took 0.007 sec
>   FAILED
> expected:<[65.53.5.181]> but was:<[UnknownHost]>
> junit.framework.AssertionFailedError: expected:<[65.53.5.181]> but 
> was:<[UnknownHost]>
>   at 
> org.apache.hadoop.net.TestNetUtils.testNormalizeHostName(TestNetUtils.java:347)
> Will post a patch in a bit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9095) TestNNThroughputBenchmark fails in branch-1

2012-11-26 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HADOOP-9095.


   Resolution: Fixed
Fix Version/s: 1-win
   1.2.0

I have committed this.  Thanks Jing!

> TestNNThroughputBenchmark fails in branch-1
> ---
>
> Key: HADOOP-9095
> URL: https://issues.apache.org/jira/browse/HADOOP-9095
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Jing Zhao
>Priority: Minor
> Fix For: 1.2.0, 1-win
>
> Attachments: HDFS-4204.b1.001.patch, HDFS-4204.b1.002.patch, 
> HDFS-4204.b1.003.patch
>
>
> {noformat}
> java.lang.StringIndexOutOfBoundsException: String index out of range: 0
> at java.lang.String.charAt(String.java:686)
> at org.apache.hadoop.net.NetUtils.normalizeHostName(NetUtils.java:539)
> at org.apache.hadoop.net.NetUtils.normalizeHostNames(NetUtils.java:562)
> at 
> org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:88)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1047)
> ...
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$StatsDaemon.run(NNThroughputBenchmark.java:377)
> {noformat}



[jira] [Resolved] (HADOOP-9047) TestHDFSFileSystemContract.testMkdirsWithUmask failed by using umask 022

2012-11-15 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HADOOP-9047.


Resolution: Not A Problem

> TestHDFSFileSystemContract.testMkdirsWithUmask failed by using umask 022
> 
>
> Key: HADOOP-9047
> URL: https://issues.apache.org/jira/browse/HADOOP-9047
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.2-alpha
>Reporter: Junping Du
>
> In the PreCommit test of HADOOP-9045, this error appears because the test 
> still uses the system's default 022 umask rather than the 062 specified in the test.



[jira] [Reopened] (HADOOP-9047) TestHDFSFileSystemContract.testMkdirsWithUmask failed by using umask 022

2012-11-15 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE reopened HADOOP-9047:



Let's close this as not-a-problem since HADOOP-9042 was reverted.

> TestHDFSFileSystemContract.testMkdirsWithUmask failed by using umask 022
> 
>
> Key: HADOOP-9047
> URL: https://issues.apache.org/jira/browse/HADOOP-9047
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.2-alpha
>Reporter: Junping Du
>
> In the PreCommit test of HADOOP-9045, this error appears because the test 
> still uses the system's default 022 umask rather than the 062 specified in the test.



[jira] [Resolved] (HADOOP-8820) Backport HADOOP-8469 and HADOOP-8470: add "NodeGroup" layer in new NetworkTopology (also known as NetworkTopologyWithNodeGroup)

2012-11-13 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HADOOP-8820.


   Resolution: Fixed
Fix Version/s: 1-win
   1.2.0
 Hadoop Flags: Reviewed

+1 on HADOOP-8820.b1.002.patch

I have committed it.  Thanks, Junping and Jing!

> Backport HADOOP-8469 and HADOOP-8470: add "NodeGroup" layer in new 
> NetworkTopology (also known as NetworkTopologyWithNodeGroup)
> ---
>
> Key: HADOOP-8820
> URL: https://issues.apache.org/jira/browse/HADOOP-8820
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: net
>Affects Versions: 1.0.0
>Reporter: Junping Du
>Assignee: Junping Du
> Fix For: 1.2.0, 1-win
>
> Attachments: HADOOP-8820.b1.002.patch, HADOOP-8820.b1.003.patch, 
> HADOOP-8820.patch
>
>
> This patch backports HADOOP-8469 and HADOOP-8470 to branch-1 and includes:
> 1. Make NetworkTopology class pluggable for extension.
> 2. Implement a 4-layer NetworkTopology class (named as 
> NetworkTopologyWithNodeGroup) to use in virtualized environment (or other 
> situation with additional layer between host and rack).



[jira] [Resolved] (HADOOP-8823) ant package target should not depend on cn-docs

2012-10-18 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HADOOP-8823.


   Resolution: Fixed
Fix Version/s: 1.1.1
 Hadoop Flags: Reviewed

I have committed this.

> ant package target should not depend on cn-docs
> ---
>
> Key: HADOOP-8823
> URL: https://issues.apache.org/jira/browse/HADOOP-8823
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 1.0.0
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Fix For: 1.1.1
>
> Attachments: c8823_20120918.patch
>
>
> In branch-1, the package target depends on cn-docs but the doc is already 
> outdated.



[jira] [Created] (HADOOP-8939) Backport HADOOP-7457: remove cn-docs from branch-1

2012-10-17 Thread Tsz Wo (Nicholas), SZE (JIRA)
Tsz Wo (Nicholas), SZE created HADOOP-8939:
--

 Summary: Backport HADOOP-7457: remove cn-docs from branch-1
 Key: HADOOP-8939
 URL: https://issues.apache.org/jira/browse/HADOOP-8939
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Tsz Wo (Nicholas), SZE


The cn-docs in branch-1 are also outdated.



[jira] [Created] (HADOOP-8823) ant package target should not depend on cn-docs

2012-09-18 Thread Tsz Wo (Nicholas), SZE (JIRA)
Tsz Wo (Nicholas), SZE created HADOOP-8823:
--

 Summary: ant package target should not depend on cn-docs
 Key: HADOOP-8823
 URL: https://issues.apache.org/jira/browse/HADOOP-8823
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE


In branch-1, the package target depends on cn-docs but the doc is already 
outdated.



[jira] [Created] (HADOOP-8714) Jenkins cannot detect download failure

2012-08-20 Thread Tsz Wo (Nicholas), SZE (JIRA)
Tsz Wo (Nicholas), SZE created HADOOP-8714:
--

 Summary: Jenkins cannot detect download failure
 Key: HADOOP-8714
 URL: https://issues.apache.org/jira/browse/HADOOP-8714
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Tsz Wo (Nicholas), SZE


In [build #1332|https://builds.apache.org/job/PreCommit-HADOOP-Build/1332/], 
Jenkins failed to download the patch.  The patch file shown in Build Artifacts 
had zero bytes.  However, Jenkins did not detect the download failure.  It still 
gave [+1 on all the test items except "tests 
include"|https://issues.apache.org/jira/browse/HADOOP-8239?focusedCommentId=13438280&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13438280].





[jira] [Created] (HADOOP-8700) Move the checksum type constants to an enum

2012-08-15 Thread Tsz Wo (Nicholas), SZE (JIRA)
Tsz Wo (Nicholas), SZE created HADOOP-8700:
--

 Summary: Move the checksum type constants to an enum
 Key: HADOOP-8700
 URL: https://issues.apache.org/jira/browse/HADOOP-8700
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor


In DataChecksum, there are constants for crc types, crc names and crc sizes.  
We should move them to an enum for better coding style.
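The constant-to-enum refactoring can be sketched like this (field names and values are illustrative, not the actual DataChecksum API):

```java
public class ChecksumDemo {
    // Illustrative enum grouping the id, name, and size constants together
    // (identifiers here are examples, not the real DataChecksum definitions).
    enum ChecksumType {
        NULL(0, "NULL", 0),
        CRC32(1, "CRC32", 4),
        CRC32C(2, "CRC32C", 4);

        final int id;
        final String displayName;
        final int size;  // checksum size in bytes

        ChecksumType(int id, String displayName, int size) {
            this.id = id;
            this.displayName = displayName;
            this.size = size;
        }
    }

    public static void main(String[] args) {
        // The three parallel constants travel together, so they cannot drift apart.
        System.out.println(ChecksumType.CRC32.size);
    }
}
```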





[jira] [Resolved] (HADOOP-8641) handleConnectionFailure(..) in Client.java should properly handle interrupted exception

2012-08-02 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HADOOP-8641.


Resolution: Duplicate

Agree.  Let's resolve this as a duplicate of HADOOP-6221.

> handleConnectionFailure(..) in Client.java should properly handle interrupted 
> exception
> ---
>
> Key: HADOOP-8641
> URL: https://issues.apache.org/jira/browse/HADOOP-8641
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.1-alpha
>Reporter: suja s
>
> If connection retries are in progress and the thread is interrupted, the 
> interruption is not handled and retries will continue until the configured 
> maximum number of retries.
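The behavior described above can be sketched as follows (an illustrative retry loop, not the actual ipc.Client code); the key is to restore the interrupt status and stop retrying instead of continuing to the configured maximum:

```java
public class RetryDemo {
    // Stub connection attempt (hypothetical): succeeds on the 4th try.
    static boolean connectOnce(int attempt) { return attempt >= 3; }

    static boolean connectWithRetries(int maxRetries) {
        for (int i = 0; i < maxRetries; i++) {
            if (connectOnce(i)) {
                return true;
            }
            try {
                Thread.sleep(10);  // back off before the next retry
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();  // preserve the interrupt status
                return false;      // abort instead of retrying to the max
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(connectWithRetries(5));
    }
}
```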





[jira] [Resolved] (HADOOP-8617) backport pure Java CRC32 calculator changes to branch-1

2012-07-25 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HADOOP-8617.


   Resolution: Fixed
Fix Version/s: 1.2.0

I have committed this.  Thanks, Brandon!

> backport pure Java CRC32 calculator changes to branch-1
> ---
>
> Key: HADOOP-8617
> URL: https://issues.apache.org/jira/browse/HADOOP-8617
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: performance
>Affects Versions: 1.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Fix For: 1.2.0
>
> Attachments: HADOOP-8617.patch
>
>
> Multiple efforts have been made gradually to improve the CRC performance in 
> Hadoop. This JIRA is to back port these changes to branch-1, which include 
> HADOOP-6166, HADOOP-6148, HADOOP-7333.
> The related HDFS and MAPREDUCE patches are uploaded to their original JIRAs 
> HDFS-496 and MAPREDUCE-782.





[jira] [Resolved] (HADOOP-6527) UserGroupInformation::createUserForTesting clobbers already defined group mappings

2012-07-05 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HADOOP-6527.


   Resolution: Fixed
Fix Version/s: 1-win
   1.1.0

I have committed this.  Thanks, Ivan!

> UserGroupInformation::createUserForTesting clobbers already defined group 
> mappings
> --
>
> Key: HADOOP-6527
> URL: https://issues.apache.org/jira/browse/HADOOP-6527
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Jakob Homan
>Assignee: Ivan Mitic
> Fix For: 1.1.0, 1-win
>
> Attachments: HADOOP-6527-branch-1-win_UGI_fix(2).patch, 
> HADOOP-6527-branch-1-win_UGI_fix.patch
>
>
> In UserGroupInformation::createUserForTesting the following code creates a new 
> groups instance, obliterating any groups that have been previously defined in 
> the static groups field.
> {code}if (!(groups instanceof TestingGroups)) {
>   groups = new TestingGroups();
> }
> {code}
> This becomes a problem in tests that start a Mini{DFS,MR}Cluster and then 
> create a testing user.  The user that started the cluster (generally the real 
> user running the test) immediately has their groups wiped out and is 
> prevented from accessing files/folders/queues they should be able to access.  Before 
> the UserGroupInformation.createRemoteUserForTesting, calls to userA.getGroups 
> may return {"a", "b", "c"} and immediately after the new fake user is 
> created, the same call will return an empty array.
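A minimal model of the clobbering (hypothetical simplified classes, not the real UserGroupInformation/Groups code):

```java
import java.util.*;

public class GroupsDemo {
    // Hypothetical stand-ins for the real Groups/TestingGroups classes.
    static class Groups {
        private final Map<String, List<String>> byUser = new HashMap<>();
        void set(String user, List<String> gs) { byUser.put(user, gs); }
        List<String> get(String user) {
            return byUser.getOrDefault(user, Collections.<String>emptyList());
        }
    }
    static class TestingGroups extends Groups { }

    static Groups groups = new Groups();  // shared static field, as in UGI

    public static void main(String[] args) {
        groups.set("userA", Arrays.asList("a", "b", "c"));
        System.out.println(groups.get("userA"));

        // createUserForTesting-style swap: a fresh TestingGroups replaces the
        // shared instance, so previously defined mappings are lost.
        if (!(groups instanceof TestingGroups)) {
            groups = new TestingGroups();
        }
        System.out.println(groups.get("userA"));
    }
}
```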





[jira] [Reopened] (HADOOP-8491) Check for short writes when using FileChannel#write and related methods

2012-06-11 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE reopened HADOOP-8491:



[WARNING] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/IOUtils.java:260:
 warning - @param argument "offset" is not a parameter name.

Why was the patch committed without Jenkins' +1?

> Check for short writes when using FileChannel#write and related methods
> ---
>
> Key: HADOOP-8491
> URL: https://issues.apache.org/jira/browse/HADOOP-8491
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Fix For: 2.0.1-alpha
>
> Attachments: HADOOP-8491.001.patch, HADOOP-8491.002.patch
>
>
> We need to check for short writes when using WritableByteChannel#write and 
> related methods.





[jira] [Reopened] (HADOOP-8440) HarFileSystem.decodeHarURI fails for URIs whose host contains numbers

2012-06-07 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE reopened HADOOP-8440:



Reopening to commit to trunk and other branches.

> HarFileSystem.decodeHarURI fails for URIs whose host contains numbers
> -
>
> Key: HADOOP-8440
> URL: https://issues.apache.org/jira/browse/HADOOP-8440
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 1.0.0
>Reporter: Ivan Mitic
>Assignee: Ivan Mitic
>Priority: Minor
> Attachments: HADOOP-8440-2-branch-1-win.patch, 
> HADOOP-8440-branch-1-win.patch, HADOOP-8440-branch-1-win.patch, 
> HADOOP-8440-trunk.patch
>
>
> For example, HarFileSystem.decodeHarURI will fail for the following URI:
> har://hdfs-127.0.0.1:51040/user
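The HAR authority packs the underlying filesystem as scheme-host:port; a sketch of splitting it (illustrative string handling, not the actual HarFileSystem.decodeHarURI code) shows why a host containing digits and dots must survive the split intact:

```java
public class HarUriDemo {
    public static void main(String[] args) {
        String authority = "hdfs-127.0.0.1:51040";  // from har://hdfs-127.0.0.1:51040/user

        // Split on the first '-' to recover the underlying scheme; everything
        // after it is the underlying host:port.
        int dash = authority.indexOf('-');
        String underlyingScheme = authority.substring(0, dash);
        String hostPort = authority.substring(dash + 1);

        // Split host from port on the last ':' so dotted hosts stay whole.
        int colon = hostPort.lastIndexOf(':');
        String host = hostPort.substring(0, colon);
        int port = Integer.parseInt(hostPort.substring(colon + 1));

        System.out.println(underlyingScheme + " " + host + " " + port);
    }
}
```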





[jira] [Reopened] (HADOOP-8368) Use CMake rather than autotools to build native code

2012-06-07 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE reopened HADOOP-8368:



Since Jenkins builds are failing after this, I will revert the patch again.

-1 on the patch in order to prevent the commit-revert situation.

> Use CMake rather than autotools to build native code
> 
>
> Key: HADOOP-8368
> URL: https://issues.apache.org/jira/browse/HADOOP-8368
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.0.0-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Fix For: 2.0.1-alpha
>
> Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, 
> HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, 
> HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, 
> HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, 
> HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, 
> HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, 
> HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, 
> HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, 
> HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, 
> HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, 
> HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, 
> HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch, 
> HADOOP-8368.028.rm.patch, HADOOP-8368.028.trimmed.patch
>
>
> It would be good to use cmake rather than autotools to build the native 
> (C/C++) code in Hadoop.
> Rationale:
> 1. automake depends on shell scripts, which often have problems running on 
> different operating systems.  It would be extremely difficult, and perhaps 
> impossible, to use autotools under Windows.  Even if it were possible, it 
> might require horrible workarounds like installing cygwin.  Even on Linux 
> variants like Ubuntu 12.04, there are major build issues because /bin/sh is 
> the Dash shell, rather than the Bash shell as it is in other Linux versions.  
> It is currently impossible to build the native code under Ubuntu 12.04 
> because of this problem.
> CMake has robust cross-platform support, including Windows.  It does not use 
> shell scripts.
> 2. automake error messages are very confusing.  For example, "autoreconf: 
> cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method 
> "path" via package "Autom4te..." are common error messages.  In order to even 
> start debugging automake problems you need to learn shell, m4, sed, and the a 
> bunch of other things.  With CMake, all you have to learn is the syntax of 
> CMakeLists.txt, which is simple.
> CMake can do all the stuff autotools can, such as making sure that required 
> libraries are installed.  There is a Maven plugin for CMake as well.
> 3. Different versions of autotools can have very different behaviors.  For 
> example, the version installed under openSUSE defaults to putting libraries 
> in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults 
> to installing the same libraries under /usr/local/lib.  (This is why the FUSE 
> build is currently broken when using OpenSUSE.)  This is another source of 
> build failures and complexity.  If things go wrong, you will often get an 
> error message which is incomprehensible to normal humans (see point #2).
> CMake allows you to specify the minimum_required_version of CMake that a 
> particular CMakeLists.txt will accept.  In addition, CMake maintains strict 
> backwards compatibility between different versions.  This prevents build bugs 
> due to version skew.
> 4. autoconf, automake, and libtool are large and rather slow.  This adds to 
> build time.
> For all these reasons, I think we should switch to CMake for compiling native 
> (C/C++) code in Hadoop.





[jira] [Reopened] (HADOOP-8483) test-patch +1 even when there are build failures

2012-06-06 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE reopened HADOOP-8483:



Hi Colin,

I think this is different from INFRA-4886.  No matter what options it compiles 
with, test-patch should -1 if there is a build failure.

> test-patch +1 even when there are build failures
> ---
>
> Key: HADOOP-8483
> URL: https://issues.apache.org/jira/browse/HADOOP-8483
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsz Wo (Nicholas), SZE
>Priority: Critical
>
> For example, there are build failures in [build 
> #1062|https://builds.apache.org/job/PreCommit-HADOOP-Build/1062/console] but 
> it still has all +1's.





[jira] [Reopened] (HADOOP-8368) Use CMake rather than autotools to build native code

2012-06-05 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE reopened HADOOP-8368:



Hi Eli, the tests are not running in Jenkins.  You may check the recent builds. 
 We should revert this since it never passes Jenkins.  The patch should be 
re-tested.

> Use CMake rather than autotools to build native code
> 
>
> Key: HADOOP-8368
> URL: https://issues.apache.org/jira/browse/HADOOP-8368
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.0.0-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Fix For: 2.0.1-alpha
>
> Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, 
> HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, 
> HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, 
> HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, 
> HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, 
> HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, 
> HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, 
> HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, 
> HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, 
> HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, 
> HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, 
> HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch
>
>
> It would be good to use cmake rather than autotools to build the native 
> (C/C++) code in Hadoop.
> Rationale:
> 1. automake depends on shell scripts, which often have problems running on 
> different operating systems.  It would be extremely difficult, and perhaps 
> impossible, to use autotools under Windows.  Even if it were possible, it 
> might require horrible workarounds like installing cygwin.  Even on Linux 
> variants like Ubuntu 12.04, there are major build issues because /bin/sh is 
> the Dash shell, rather than the Bash shell as it is in other Linux versions.  
> It is currently impossible to build the native code under Ubuntu 12.04 
> because of this problem.
> CMake has robust cross-platform support, including Windows.  It does not use 
> shell scripts.
> 2. automake error messages are very confusing.  For example, "autoreconf: 
> cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method 
> "path" via package "Autom4te..." are common error messages.  In order to even 
> start debugging automake problems you need to learn shell, m4, sed, and a 
> bunch of other things.  With CMake, all you have to learn is the syntax of 
> CMakeLists.txt, which is simple.
> CMake can do all the stuff autotools can, such as making sure that required 
> libraries are installed.  There is a Maven plugin for CMake as well.
> 3. Different versions of autotools can have very different behaviors.  For 
> example, the version installed under openSUSE defaults to putting libraries 
> in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults 
> to installing the same libraries under /usr/local/lib.  (This is why the FUSE 
> build is currently broken when using OpenSUSE.)  This is another source of 
> build failures and complexity.  If things go wrong, you will often get an 
> error message which is incomprehensible to normal humans (see point #2).
> CMake allows you to specify the minimum_required_version of CMake that a 
> particular CMakeLists.txt will accept.  In addition, CMake maintains strict 
> backwards compatibility between different versions.  This prevents build bugs 
> due to version skew.
> 4. autoconf, automake, and libtool are large and rather slow.  This adds to 
> build time.
> For all these reasons, I think we should switch to CMake for compiling native 
> (C/C++) code in Hadoop.





[jira] [Created] (HADOOP-8483) test-patch +1 even when there are build failures

2012-06-04 Thread Tsz Wo (Nicholas), SZE (JIRA)
Tsz Wo (Nicholas), SZE created HADOOP-8483:
--

 Summary: test-patch +1 even when there are build failures
 Key: HADOOP-8483
 URL: https://issues.apache.org/jira/browse/HADOOP-8483
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Tsz Wo (Nicholas), SZE
Priority: Critical


For example, there are build failures in [build 
#1062|https://builds.apache.org/job/PreCommit-HADOOP-Build/1062/console] but it 
still has all +1's.





[jira] [Created] (HADOOP-8375) test-patch should stop immediately once it has found compilation errors

2012-05-09 Thread Tsz Wo (Nicholas), SZE (JIRA)
Tsz Wo (Nicholas), SZE created HADOOP-8375:
--

 Summary: test-patch should stop immediately once it has found 
compilation errors
 Key: HADOOP-8375
 URL: https://issues.apache.org/jira/browse/HADOOP-8375
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Tsz Wo (Nicholas), SZE


It does not make sense to run findbugs or the javadoc check if the program does 
not compile.  That was the behavior previously, if I remember correctly.





[jira] [Created] (HADOOP-8348) Server$Listener.getAddress(..) may throw NullPointerException

2012-05-02 Thread Tsz Wo (Nicholas), SZE (JIRA)
Tsz Wo (Nicholas), SZE created HADOOP-8348:
--

 Summary: Server$Listener.getAddress(..) may throw 
NullPointerException
 Key: HADOOP-8348
 URL: https://issues.apache.org/jira/browse/HADOOP-8348
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Reporter: Tsz Wo (Nicholas), SZE


{noformat}
Exception in thread "DataXceiver for client /127.0.0.1:35472 [Waiting for 
operation #2]" java.lang.NullPointerException
at org.apache.hadoop.ipc.Server$Listener.getAddress(Server.java:669)
at org.apache.hadoop.ipc.Server.getListenerAddress(Server.java:1988)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.getIpcPort(DataNode.java:882)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.getDisplayName(DataNode.java:863)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:177)
at java.lang.Thread.run(Thread.java:662)
{noformat}





[jira] [Created] (HADOOP-7594) Support HTTP REST in HttpServer

2011-08-30 Thread Tsz Wo (Nicholas), SZE (JIRA)
Support HTTP REST in HttpServer
---

 Key: HADOOP-7594
 URL: https://issues.apache.org/jira/browse/HADOOP-7594
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE


Provide an API in HttpServer for supporting HTTP REST.

This is a part of HDFS-2284.





[jira] [Created] (HADOOP-7593) AssertionError in TestHttpServer.testMaxThreads()

2011-08-30 Thread Tsz Wo (Nicholas), SZE (JIRA)
AssertionError in TestHttpServer.testMaxThreads()
-

 Key: HADOOP-7593
 URL: https://issues.apache.org/jira/browse/HADOOP-7593
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tsz Wo (Nicholas), SZE


TestHttpServer passed but there were AssertionError in the output.
{noformat}
11/08/30 03:35:56 INFO http.TestHttpServer: HTTP server started: 
http://localhost:52974/
Exception in thread "pool-1-thread-61" java.lang.AssertionError: 
at org.junit.Assert.fail(Assert.java:91)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at org.apache.hadoop.http.TestHttpServer$1.run(TestHttpServer.java:164)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
{noformat}






[jira] [Created] (HADOOP-7427) syntax error in smart-apply-patch.sh

2011-06-26 Thread Tsz Wo (Nicholas), SZE (JIRA)
syntax error in smart-apply-patch.sh 
-

 Key: HADOOP-7427
 URL: https://issues.apache.org/jira/browse/HADOOP-7427
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Reporter: Tsz Wo (Nicholas), SZE


{noformat}
 [exec] Finished build.
 [exec] hdfs/src/test/bin/smart-apply-patch.sh: line 60: syntax error in 
conditional expression: unexpected token `('

BUILD FAILED
hdfs/build.xml:1595: exec returned: 1
{noformat}





[jira] [Resolved] (HADOOP-7408) Add javadoc for SnappyCodec

2011-06-24 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HADOOP-7408.


   Resolution: Not A Problem
Fix Version/s: (was: 0.23.0)

It turns out that we have to revert HADOOP-7206. So this is not a problem 
anymore.

> Add javadoc for SnappyCodec
> ---
>
> Key: HADOOP-7408
> URL: https://issues.apache.org/jira/browse/HADOOP-7408
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
>Priority: Trivial
> Attachments: HADOOP-7408.patch, v1-HADOOP-7408-add-snappy-javadoc.txt
>
>
> HADOOP-7206 failed to include a javadoc for public methods.





[jira] [Resolved] (HADOOP-7407) Snappy integration breaks HDFS build.

2011-06-24 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HADOOP-7407.


Resolution: Not A Problem

It turns out that we have to revert HADOOP-7206 and this (Thanks Eli).  So this 
is not a problem anymore.

> Snappy integration breaks HDFS build.
> -
>
> Key: HADOOP-7407
> URL: https://issues.apache.org/jira/browse/HADOOP-7407
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ivan Kelly
>Assignee: Alejandro Abdelnur
>Priority: Critical
> Fix For: 0.23.0
>
> Attachments: HADOOP-7407.patch
>
>
> The common/ivy/hadoop-common-template.xml submitted with 7206 has a typo 
> which breaks anything that depends on the hadoop-common maven package.
> Instead of {{java-snappy}}, it should be {{snappy-java}}.
> [ivy:resolve] downloading 
> https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.23.0-SNAPSHOT/hadoop-common-0.23.0-20110620.163810-177.jar
>  ...
> [ivy:resolve] ...
> [ivy:resolve] ..
> [ivy:resolve] ...
> [ivy:resolve] ...
> [ivy:resolve]  (1631kB)
> [ivy:resolve] .. (0kB)
> [ivy:resolve] [SUCCESSFUL ] 
> org.apache.hadoop#hadoop-common;0.23.0-SNAPSHOT!hadoop-common.jar (8441ms)
> [ivy:resolve] 
> [ivy:resolve] :: problems summary ::
> [ivy:resolve]  WARNINGS
> [ivy:resolve] module not found: 
> org.xerial.snappy#java-snappy;1.0.3-rc2
> [ivy:resolve]  apache-snapshot: tried
> [ivy:resolve]   
> https://repository.apache.org/content/repositories/snapshots/org/xerial/snappy/java-snappy/1.0.3-rc2/java-snappy-1.0.3-rc2.pom
> [ivy:resolve]   -- artifact 
> org.xerial.snappy#java-snappy;1.0.3-rc2!java-snappy.jar:
> [ivy:resolve]   
> https://repository.apache.org/content/repositories/snapshots/org/xerial/snappy/java-snappy/1.0.3-rc2/java-snappy-1.0.3-rc2.jar
> [ivy:resolve]  maven2: tried
> [ivy:resolve]   
> http://repo1.maven.org/maven2/org/xerial/snappy/java-snappy/1.0.3-rc2/java-snappy-1.0.3-rc2.pom
> [ivy:resolve]   -- artifact 
> org.xerial.snappy#java-snappy;1.0.3-rc2!java-snappy.jar:
> [ivy:resolve]   
> http://repo1.maven.org/maven2/org/xerial/snappy/java-snappy/1.0.3-rc2/java-snappy-1.0.3-rc2.jar
> [ivy:resolve] ::
> [ivy:resolve] ::  UNRESOLVED DEPENDENCIES ::
> [ivy:resolve] ::
> [ivy:resolve] :: org.xerial.snappy#java-snappy;1.0.3-rc2: not 
> found
> [ivy:resolve] ::
> [ivy:resolve] 
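As the description states, the fix is to swap the artifact name in common/ivy/hadoop-common-template.xml. A hedged sketch of the corrected dependency, with coordinates read off the unresolved module id in the log above ({{org.xerial.snappy#java-snappy;1.0.3-rc2}}); the exact element layout of the template may differ:

```xml
<dependency>
  <groupId>org.xerial.snappy</groupId>
  <!-- was "java-snappy", which exists in neither repository tried above -->
  <artifactId>snappy-java</artifactId>
  <version>1.0.3-rc2</version>
</dependency>
```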





[jira] [Reopened] (HADOOP-7407) Snappy integration breaks HDFS build.

2011-06-24 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE reopened HADOOP-7407:



> Snappy integration breaks HDFS build.
> -
>
> Key: HADOOP-7407
> URL: https://issues.apache.org/jira/browse/HADOOP-7407
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ivan Kelly
>Assignee: Alejandro Abdelnur
>Priority: Critical
> Fix For: 0.23.0
>
> Attachments: HADOOP-7407.patch
>
>





[jira] [Resolved] (HADOOP-7206) Integrate Snappy compression

2011-06-20 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HADOOP-7206.


Resolution: Fixed

Let's fix the javadoc in HADOOP-7408.  Thanks T Jake and Tom.

> Integrate Snappy compression
> 
>
> Key: HADOOP-7206
> URL: https://issues.apache.org/jira/browse/HADOOP-7206
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 0.21.0
>Reporter: Eli Collins
>Assignee: T Jake Luciani
> Fix For: 0.23.0
>
> Attachments: HADOOP-7206-002.patch, HADOOP-7206.patch, 
> v2-HADOOP-7206-snappy-codec-using-snappy-java.txt, 
> v3-HADOOP-7206-snappy-codec-using-snappy-java.txt, 
> v4-HADOOP-7206-snappy-codec-using-snappy-java.txt, 
> v5-HADOOP-7206-snappy-codec-using-snappy-java.txt
>
>
> Google released Zippy as an open source (APLv2) project called Snappy 
> (http://code.google.com/p/snappy). This issue tracks integrating it into Hadoop.
> {quote}
> Snappy is a compression/decompression library. It does not aim for maximum 
> compression, or compatibility with any other compression library; instead, it 
> aims for very high speeds and reasonable compression. For instance, compared 
> to the fastest mode of zlib, Snappy is an order of magnitude faster for most 
> inputs, but the resulting compressed files are anywhere from 20% to 100% 
> bigger. On a single core of a Core i7 processor in 64-bit mode, Snappy 
> compresses at about 250 MB/sec or more and decompresses at about 500 MB/sec 
> or more.
> {quote}





[jira] [Reopened] (HADOOP-7206) Integrate Snappy compression

2011-06-20 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE reopened HADOOP-7206:



> Integrate Snappy compression
> 
>
> Key: HADOOP-7206
> URL: https://issues.apache.org/jira/browse/HADOOP-7206
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 0.21.0
>Reporter: Eli Collins
>Assignee: T Jake Luciani
> Fix For: 0.23.0
>
> Attachments: HADOOP-7206-002.patch, HADOOP-7206.patch, 
> v2-HADOOP-7206-snappy-codec-using-snappy-java.txt, 
> v3-HADOOP-7206-snappy-codec-using-snappy-java.txt, 
> v4-HADOOP-7206-snappy-codec-using-snappy-java.txt, 
> v5-HADOOP-7206-snappy-codec-using-snappy-java.txt
>
>





[jira] [Created] (HADOOP-7337) Annotate PureJavaCrc32 as a public API

2011-05-27 Thread Tsz Wo (Nicholas), SZE (JIRA)
Annotate PureJavaCrc32 as a public API
--

 Key: HADOOP-7337
 URL: https://issues.apache.org/jira/browse/HADOOP-7337
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor


The API of PureJavaCrc32 is stable.  It is incorrect to annotate it as private and 
unstable.
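PureJavaCrc32 implements java.util.zip.Checksum, so the surface being made public is the standard checksum contract. A minimal sketch of that usage, substituting the JDK's own CRC32 since the Hadoop class is not assumed on the classpath; swapping in a public PureJavaCrc32 would be a one-line change:

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;
import java.util.zip.Checksum;

class ChecksumDemo {
    // Runs a CRC over the bytes through the Checksum interface that
    // PureJavaCrc32 implements; CRC32 stands in for it here.
    static long crcOf(byte[] data) {
        Checksum crc = new CRC32();
        crc.update(data, 0, data.length);
        return crc.getValue();
    }

    public static void main(String[] args) {
        long v = crcOf("123456789".getBytes(StandardCharsets.US_ASCII));
        // 0xCBF43926 is the standard CRC-32 check value for "123456789"
        System.out.println(Long.toHexString(v)); // prints cbf43926
    }
}
```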




[jira] [Created] (HADOOP-7289) ivy: test conf should not extend common conf

2011-05-13 Thread Tsz Wo (Nicholas), SZE (JIRA)
ivy: test conf should not extend common conf


 Key: HADOOP-7289
 URL: https://issues.apache.org/jira/browse/HADOOP-7289
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE


Otherwise, the same jars will appear in both 
{{build/ivy/lib/Hadoop-Common/common/}} and 
{{build/ivy/lib/Hadoop-Common/test/}}.
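A hedged sketch of the ivy.xml change this implies, assuming the test conf was declared with {{extends="common"}}; conf names follow the paths quoted above, and the actual descriptions may differ:

```xml
<configurations>
  <conf name="common" visibility="private" description="compile-time dependencies"/>
  <!-- before: <conf name="test" extends="common"/> duplicated every common jar
       under build/ivy/lib/Hadoop-Common/test/ -->
  <conf name="test" visibility="private" description="test-only dependencies"/>
</configurations>
```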



[jira] [Created] (HADOOP-7270) Server$Listener.getAddress(..) throws NullPointerException

2011-05-09 Thread Tsz Wo (Nicholas), SZE (JIRA)
Server$Listener.getAddress(..) throws NullPointerException
--

 Key: HADOOP-7270
 URL: https://issues.apache.org/jira/browse/HADOOP-7270
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Reporter: Tsz Wo (Nicholas), SZE


In [build 
#469|https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/469//testReport/org.apache.hadoop.hdfs.server.datanode/TestFiDataTransferProtocol2/pipeline_Fi_29/],
{noformat}
2011-05-09 23:21:35,528 ERROR datanode.DataNode (DataXceiver.java:run(133)) - 
127.0.0.1:57149:DataXceiver
java.lang.NullPointerException
at org.apache.hadoop.ipc.Server$Listener.getAddress(Server.java:518)
at org.apache.hadoop.ipc.Server.getListenerAddress(Server.java:1662)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.getIpcPort(DataNode.java:1461)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.getDatanodeId(DataNode.java:2747)
at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$after$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$9$725950a6(BlockReceiverAspects.aj:226)
at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.close(BlockReceiver.java:230)
at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:157)
at org.apache.hadoop.io.IOUtils.closeStream(IOUtils.java:173)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:408)
at 
org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:414)
at 
org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:360)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:131)
at java.lang.Thread.run(Thread.java:662)
{noformat}
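The NPE at Server.java:518 suggests getAddress dereferences the accept channel after the listener has been closed. A minimal null-guard sketch under that assumption; the parameter and method names are illustrative, not the actual Server.java code:

```java
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;

class ListenerAddress {
    // Returns the bound address, or null once the listener's channel has
    // been closed and its reference cleared, instead of throwing NPE.
    static InetSocketAddress getAddress(ServerSocketChannel acceptChannel) {
        if (acceptChannel == null) {
            return null; // listener already stopped
        }
        return (InetSocketAddress) acceptChannel.socket().getLocalSocketAddress();
    }
}
```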



[jira] [Resolved] (HADOOP-6843) Entries in the FileSystem's Cache could be cleared when they are not used

2011-04-28 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HADOOP-6843.


Resolution: Not A Problem

This is not a problem anymore.

> Entries in the FileSystem's Cache could be cleared when they are not used
> -
>
> Key: HADOOP-6843
> URL: https://issues.apache.org/jira/browse/HADOOP-6843
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.0
>Reporter: Devaraj Das
> Attachments: TestFileSystemCache.java, fs-weak-ref.4.patch
>
>
> In FileSystem, there is a cache maintained for Filesystem instances. The 
> entries in the cache are cleared only when explicit FileSystem.close is 
> invoked. Applications are not careful about this. Typically, they do 
> FileSystem.get(), operate on the FileSystem, and then they just forget about 
> it. Every FileSystem instance stores a reference to the Configuration object 
> that it was created with. Over a period of time, as the cache grows, this can 
> lead to OOM (we have seen this happening in our hadoop 20S clusters at Yahoo).
> This jira aims at addressing the above issue.
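The attached fs-weak-ref patch points at weak references as the remedy: the cache stops pinning instances, so forgotten FileSystems (and their Configurations) become collectable. A generic, self-contained sketch of the idea, not the actual FileSystem.Cache code:

```java
import java.lang.ref.WeakReference;
import java.util.HashMap;
import java.util.Map;

class WeakCache<K, V> {
    private final Map<K, WeakReference<V>> map = new HashMap<>();

    // Returns the cached instance, or null if it was never cached or has
    // already been garbage-collected.
    synchronized V get(K key) {
        WeakReference<V> ref = map.get(key);
        return ref == null ? null : ref.get();
    }

    // Caches without pinning: the value stays collectable once all
    // callers drop their strong references to it.
    synchronized void put(K key, V value) {
        map.put(key, new WeakReference<>(value));
    }
}
```

A production version would also purge cleared entries, e.g. by draining a ReferenceQueue.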



[jira] [Resolved] (HADOOP-6588) CompressionCodecFactory throws IllegalArgumentException

2011-04-28 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HADOOP-6588.


Resolution: Not A Problem

This is not a problem anymore.

> CompressionCodecFactory throws IllegalArgumentException
> ---
>
> Key: HADOOP-6588
> URL: https://issues.apache.org/jira/browse/HADOOP-6588
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Tsz Wo (Nicholas), SZE
> Attachments: c6588_20100222.patch
>
>
> WordCount does not run. :(
> {noformat}
> java.lang.IllegalArgumentException: Compression codec 
> com.hadoop.compression.lzo.LzoCodec not found.
> at 
> org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:96)
> at 
> org.apache.hadoop.io.compress.CompressionCodecFactory.<init>(CompressionCodecFactory.java:134)
> at 
> org.apache.hadoop.mapreduce.lib.input.TextInputFormat.isSplitable(TextInputFormat.java:46)
> at 
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:247)
> at 
> org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:886)
> at 
> org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:780)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:444)
> at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:459)
> at org.apache.hadoop.examples.WordCount.main(WordCount.java:67)
> ...
> {noformat}
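The factory throws because one configured codec class (here the LZO codec) is missing from the classpath. A hedged sketch of a more tolerant lookup that skips unresolvable names instead of aborting; the method is illustrative, not the CompressionCodecFactory patch:

```java
import java.util.ArrayList;
import java.util.List;

class CodecClasses {
    // Resolves each configured class name, skipping names that are not on
    // the classpath instead of failing the whole lookup.
    static List<Class<?>> resolve(String csv) {
        List<Class<?>> found = new ArrayList<>();
        for (String name : csv.split(",")) {
            name = name.trim();
            if (name.isEmpty()) {
                continue;
            }
            try {
                found.add(Class.forName(name));
            } catch (ClassNotFoundException e) {
                // e.g. com.hadoop.compression.lzo.LzoCodec without the LZO jar
            }
        }
        return found;
    }
}
```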



[jira] [Reopened] (HADOOP-7202) Improve Command base class

2011-03-28 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE reopened HADOOP-7202:



HDFS cannot be compiled after this.

> Improve Command base class
> --
>
> Key: HADOOP-7202
> URL: https://issues.apache.org/jira/browse/HADOOP-7202
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 0.22.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 0.23.0
>
> Attachments: HADOOP-7202-2.patch, HADOOP-7202-3.patch, 
> HADOOP-7202-4.patch, HADOOP-7202.patch
>
>
> Need to extend the Command base class to allow all commands to easily subclass 
> from a common set of code that correctly handles globs and exit codes.



[jira] Created: (HADOOP-7120) 200 new Findbugs

2011-01-26 Thread Tsz Wo (Nicholas), SZE (JIRA)
200 new Findbugs


 Key: HADOOP-7120
 URL: https://issues.apache.org/jira/browse/HADOOP-7120
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, test
Reporter: Tsz Wo (Nicholas), SZE


ant test-patch over hdfs trunk
{noformat}
 [exec] -1 overall.  
 [exec] 
 [exec] +1 @author.  The patch does not contain any @author tags.
 [exec] 
 [exec] -1 tests included.  The patch doesn't appear to include any new 
or modified tests.
 [exec] Please justify why no new tests are needed 
for this patch.
 [exec] Also please list what manual steps were 
performed to verify this patch.
 [exec] 
 [exec] +1 javadoc.  The javadoc tool did not generate any warning 
messages.
 [exec] 
 [exec] +1 javac.  The applied patch does not increase the total number 
of javac compiler warnings.
 [exec] 
 [exec] -1 findbugs.  The patch appears to introduce 200 new Findbugs 
(version 1.3.9) warnings.
 [exec] 
 [exec] -1 release audit.  The applied patch generated 1 release audit 
warnings (more than the trunk's current 0 warnings).
 [exec] 
 [exec] +1 system test framework.  The patch passed system test 
framework compile.
{noformat}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HADOOP-6857) FsShell should report raw disk usage including replication factor

2010-09-14 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HADOOP-6857.


Resolution: Won't Fix

Closing this.  Thanks.

> FsShell should report raw disk usage including replication factor
> -
>
> Key: HADOOP-6857
> URL: https://issues.apache.org/jira/browse/HADOOP-6857
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Alex Kozlov
> Fix For: 0.22.0
>
> Attachments: show-space-consumed.txt
>
>
> Currently FsShell reports HDFS usage with the "hadoop fs -dus " command.  
> Since the replication level is set per file, it would be nice to add raw disk 
> usage including the replication factor (maybe "hadoop fs -dus -raw "?). 
>  This will allow assessing resource usage more accurately.  -- Alex K
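Raw usage is just each file's length multiplied by its per-file replication factor. A minimal sketch of the arithmetic behind such a flag (hypothetical helper, not the attached patch):

```java
class RawUsage {
    // Raw bytes consumed on disk: logical length times replication factor,
    // summed per file (replication is a per-file attribute in HDFS).
    static long rawUsed(long[] lengths, short[] replications) {
        long total = 0;
        for (int i = 0; i < lengths.length; i++) {
            total += lengths[i] * replications[i];
        }
        return total;
    }
}
```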




[jira] Created: (HADOOP-6935) IPC Server readAndProcess threw NullPointerException

2010-08-31 Thread Tsz Wo (Nicholas), SZE (JIRA)
IPC Server readAndProcess threw NullPointerException


 Key: HADOOP-6935
 URL: https://issues.apache.org/jira/browse/HADOOP-6935
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Reporter: Tsz Wo (Nicholas), SZE


{noformat}
2010-09-01 01:22:28,995 INFO org.apache.hadoop.ipc.Server: IPC Server listener 
on 50300:
  readAndProcess threw exception java.lang.NullPointerException. Count of bytes 
read: 0
java.lang.NullPointerException
at org.apache.hadoop.ipc.Server$Call.toString(Server.java:268)
at java.lang.String.valueOf(String.java:2827)
at java.lang.StringBuilder.append(StringBuilder.java:115)
at 
org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:705)
at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:720)
at org.apache.hadoop.ipc.Server$Connection.doSaslReply(Server.java:988)
at 
org.apache.hadoop.ipc.Server$Connection.saslReadAndProcess(Server.java:935)
at 
org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:1093)
at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:462)
at org.apache.hadoop.ipc.Server$Listener.run(Server.java:371)
{noformat}
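The trace shows Call.toString() failing while processResponse builds a log message, presumably by dereferencing a field that can be null during connection teardown. A minimal null-tolerant toString sketch; the fields are illustrative, not the actual Call class:

```java
class Call {
    private final int id;
    private final Object connection; // may be null while the connection tears down

    Call(int id, Object connection) {
        this.id = id;
        this.connection = connection;
    }

    @Override
    public String toString() {
        // String concatenation renders a null reference as "null"
        // instead of throwing NullPointerException.
        return "call #" + id + " from " + connection;
    }
}
```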




[jira] Reopened: (HADOOP-6796) FileStatus allows null srcPath but crashes if that's done

2010-06-13 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE reopened HADOOP-6796:



Reopen since the patch has already been reverted.

> FileStatus allows null srcPath but crashes if that's done
> -
>
> Key: HADOOP-6796
> URL: https://issues.apache.org/jira/browse/HADOOP-6796
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.22.0
>Reporter: Rodrigo Schmidt
>Assignee: Rodrigo Schmidt
>Priority: Minor
> Fix For: 0.22.0
>
> Attachments: HADOOP-6796.1.patch, HADOOP-6796.2.patch, 
> HADOOP-6796.3.patch, HADOOP-6796.patch
>
>
> FileStatus allows constructor invocation with a null srcPath, but many methods 
> like write, readFields, compareTo, equals, and hashCode assume that it is 
> non-null.
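One way to honor the summary, sketched with a simplified stand-in class (not the actual FileStatus patch): reject a null srcPath at construction so that write, readFields, compareTo, equals, and hashCode can rely on it.

```java
import java.util.Objects;

class StatusSketch {
    private final String path;

    // Fail fast here instead of crashing later in equals/hashCode/compareTo.
    StatusSketch(String path) {
        this.path = Objects.requireNonNull(path, "srcPath must not be null");
    }

    @Override
    public int hashCode() {
        return path.hashCode(); // safe: path is never null
    }
}
```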



