[jira] [Created] (HADOOP-11843) Make setting up the build environment easier

2015-04-17 Thread Niels Basjes (JIRA)
Niels Basjes created HADOOP-11843:
-

 Summary: Make setting up the build environment easier
 Key: HADOOP-11843
 URL: https://issues.apache.org/jira/browse/HADOOP-11843
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Niels Basjes
Assignee: Niels Basjes


( As discussed with [~aw] )
In AVRO-1537 a Docker-based solution was created to set up all the tools needed for 
doing a full build. This makes it much easier to reproduce issues and to get new 
developers up and running.

This issue is to 'copy/port' that setup into the Hadoop project in preparation 
for the bug squash.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HADOOP-7305) Eclipse project files are incomplete

2011-06-04 Thread Niels Basjes (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niels Basjes reopened HADOOP-7305:
--


Apparently there are some issues with the first version in combination with OS 
X. 

 Eclipse project files are incomplete
 

 Key: HADOOP-7305
 URL: https://issues.apache.org/jira/browse/HADOOP-7305
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Niels Basjes
Assignee: Niels Basjes
Priority: Minor
 Fix For: 0.22.0

 Attachments: HADOOP-7305-2011-05-19.patch, 
 HADOOP-7305-2011-05-30.patch


 After a fresh checkout of hadoop-common I run 'ant compile eclipse'.
 I open Eclipse, set ANT_HOME, and build the project.
 At that point the following error appears:
 {quote}
 The type com.sun.javadoc.RootDoc cannot be resolved. It is indirectly 
 referenced from required .class files   
 ExcludePrivateAnnotationsJDiffDoclet.java   
 /common/src/java/org/apache/hadoop/classification/tools line 1  Java Problem
 {quote}
 The solution is to add the tools.jar from the JDK to the 
 buildpath/classpath.
 This should be fixed in the build.xml.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] Created: (HADOOP-7127) Bug in login error handling in org.apache.hadoop.fs.ftp.FTPFileSystem

2011-01-29 Thread Niels Basjes (JIRA)
Bug in login error handling in org.apache.hadoop.fs.ftp.FTPFileSystem
-

 Key: HADOOP-7127
 URL: https://issues.apache.org/jira/browse/HADOOP-7127
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Niels Basjes


I was playing around with PMD, just to see what kind of messages it gives on 
Hadoop.
I noticed a message about dead code in org.apache.hadoop.fs.ftp.FTPFileSystem.

Starting at about line 80:

    String userAndPassword = uri.getUserInfo();
    if (userAndPassword == null) {
      userAndPassword = (conf.get("fs.ftp.user." + host, null) + ":" + conf
          .get("fs.ftp.password." + host, null));
      if (userAndPassword == null) {
        throw new IOException("Invalid user/passsword specified");
      }
    }

The last if block is dead code, as the string will always contain at least 
the text ":" or "null:null".

This means that the error handling fails to work as intended.
It will probably fail a bit later, when actually trying to log in with a wrong 
uid/password.

P.S. Fix the silly typo "passsword" in the exception message too.
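
A rough sketch (just an illustration based on the snippet above, not a patch) of 
how the check could look if user and password are retrieved separately, so the 
missing-value case is actually caught:

    String userAndPassword = uri.getUserInfo();
    if (userAndPassword == null) {
      // Look up user and password separately so a missing value is detected
      // before the two are concatenated into something like "null:null".
      String user = conf.get("fs.ftp.user." + host, null);
      String password = conf.get("fs.ftp.password." + host, null);
      if (user == null || password == null) {
        throw new IOException("Invalid user/password specified");
      }
      userAndPassword = user + ":" + password;
    }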

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HADOOP-7076) Splittable Gzip

2010-12-23 Thread Niels Basjes (JIRA)
Splittable Gzip
---

 Key: HADOOP-7076
 URL: https://issues.apache.org/jira/browse/HADOOP-7076
 Project: Hadoop Common
  Issue Type: New Feature
  Components: io
Reporter: Niels Basjes


Files compressed with the gzip codec are not splittable due to the nature of 
the codec.
This limits the options you have for scaling out when reading large gzipped input 
files.

Given that gunzipping a 1GiB file usually takes only 2 minutes, I figured that for 
some use cases wasting some resources may, under certain conditions, result in a 
shorter job time.
So reading the entire input file from the start for each split (wasting 
resources!!) may lead to additional scalability.
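
To make the trade-off concrete, here is a rough sketch in plain Java (using only 
java.util.zip, outside Hadoop's codec and InputFormat APIs; the file name and 
split sizes are made up): each split decompresses the file from the very 
beginning and simply discards the uncompressed bytes that belong to earlier 
splits, trading extra CPU for parallelism.

   import java.io.FileInputStream;
   import java.io.IOException;
   import java.io.InputStream;
   import java.util.zip.GZIPInputStream;

   public class NaiveSplitGzipRead {

     // Decompress from the start of the file, skip everything before
     // splitStart (the wasted work), then hand splitLength uncompressed
     // bytes to this split's processing.
     static long readSplit(String path, long splitStart, long splitLength)
         throws IOException {
       long processed = 0;
       try (InputStream in = new GZIPInputStream(new FileInputStream(path))) {
         long skipped = 0;
         while (skipped < splitStart) {
           long n = in.skip(splitStart - skipped);
           if (n <= 0) {
             break; // stream ended before the split start
           }
           skipped += n;
         }
         byte[] buffer = new byte[64 * 1024];
         while (processed < splitLength) {
           int toRead = (int) Math.min(buffer.length, splitLength - processed);
           int read = in.read(buffer, 0, toRead);
           if (read < 0) {
             break;
           }
           processed += read; // a real record reader would parse records here
         }
       }
       return processed;
     }

     public static void main(String[] args) throws IOException {
       // Hypothetical second split of 128 MiB over the uncompressed data.
       long split = 128L * 1024 * 1024;
       System.out.println("Processed " + readSplit("input.gz", split, split)
           + " uncompressed bytes");
     }
   }

In the real codec the splits would of course have to be aligned with record 
boundaries and the offsets would come from the job's input format; the sketch 
only shows where the wasted decompression work goes.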


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.