-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/32145/#review76700
-----------------------------------------------------------
connector/connector-hdfs/src/main/java/org/apache/sqoop/connector/hdfs/HdfsUtils.java
<https://reviews.apache.org/r/32145/#comment124329>

    Line 50 has already checked the confDir.

connector/connector-hdfs/src/main/java/org/apache/sqoop/connector/hdfs/HdfsUtils.java
<https://reviews.apache.org/r/32145/#comment124331>

    An error message is written to the log, but no exception is thrown. So if a config file cannot be loaded, the MR job might not fail completely? Maybe use a warning instead of an error?

connector/connector-hdfs/src/main/java/org/apache/sqoop/connector/hdfs/configuration/LinkConfig.java
<https://reviews.apache.org/r/32145/#comment124332>

    Is it possible to create a JIRA and use constants for MAX_LENGTH? I might be too concerned about the size maximums in general: according to the Linux headers, the maximum filename length can be 255, the maximum path length 4096, and the maximum URI length 2083.

- Qian Xu


On March 17, 2015, 10:05 a.m., Jarek Cecho wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/32145/
> -----------------------------------------------------------
> 
> (Updated March 17, 2015, 10:05 a.m.)
> 
> 
> Review request for Sqoop.
> 
> 
> Bugs: SQOOP-2201
>     https://issues.apache.org/jira/browse/SQOOP-2201
> 
> 
> Repository: sqoop-sqoop2
> 
> 
> Description
> -------
> 
> I've added the ability for the HDFS connector to read Hadoop configuration files and broke the dependency on mapreduce's execution engine.
> 
> 
> Diffs
> -----
> 
>   connector/connector-hdfs/src/main/java/org/apache/sqoop/connector/hdfs/HdfsExtractor.java 8237e51 
>   connector/connector-hdfs/src/main/java/org/apache/sqoop/connector/hdfs/HdfsFromInitializer.java 0a95e07 
>   connector/connector-hdfs/src/main/java/org/apache/sqoop/connector/hdfs/HdfsLoader.java cee0a91 
>   connector/connector-hdfs/src/main/java/org/apache/sqoop/connector/hdfs/HdfsPartitioner.java 78fd60a 
>   connector/connector-hdfs/src/main/java/org/apache/sqoop/connector/hdfs/HdfsToInitializer.java 991e6c9 
>   connector/connector-hdfs/src/main/java/org/apache/sqoop/connector/hdfs/HdfsUtils.java fce7728 
>   connector/connector-hdfs/src/main/java/org/apache/sqoop/connector/hdfs/configuration/LinkConfig.java 146c3b1 
>   connector/connector-hdfs/src/main/resources/hdfs-connector-config.properties 3904856 
>   test/src/main/java/org/apache/sqoop/test/testcases/ConnectorTestCase.java ce6af6e 
>   test/src/main/java/org/apache/sqoop/test/testcases/TomcatTestCase.java 2ef971d 
>   test/src/test/java/org/apache/sqoop/integration/connector/jdbc/generic/AllTypesTest.java ac90eac 
>   test/src/test/java/org/apache/sqoop/integration/connector/jdbc/generic/FromHDFSToRDBMSTest.java a21e4a1 
>   test/src/test/java/org/apache/sqoop/integration/connector/jdbc/generic/FromRDBMSToHDFSTest.java 5552e04 
>   test/src/test/java/org/apache/sqoop/integration/connector/jdbc/generic/PartitionerTest.java f69f08c 
>   test/src/test/java/org/apache/sqoop/integration/connector/jdbc/generic/TableStagedRDBMSTest.java f850777 
>   test/src/test/java/org/apache/sqoop/integration/connector/kafka/FromHDFSToKafkaTest.java 83273f1 
> 
> Diff: https://reviews.apache.org/r/32145/diff/
> 
> 
> Testing
> -------
> 
> I had to update the existing integration tests, as the "reasonable" defaults weren't applicable to them.
> 
> 
> Thanks,
> 
> Jarek Cecho
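
To make the two suggestions above concrete: the size limits could live behind named constants, and a missing or unreadable configuration directory could be logged at warning level rather than error, leaving the fail-or-continue decision to the caller. A minimal JDK-only sketch (class name, method, and logger choice are hypothetical and not taken from the patch; the actual HdfsUtils works with Hadoop's Configuration class):

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Logger;

public class ConfDirSketch {
  // Hypothetical named constants capturing the limits mentioned in the
  // review (Linux filename, Linux path, and a common URI length cap).
  public static final int MAX_FILENAME_LENGTH = 255;
  public static final int MAX_PATH_LENGTH = 4096;
  public static final int MAX_URI_LENGTH = 2083;

  private static final Logger LOG =
      Logger.getLogger(ConfDirSketch.class.getName());

  /**
   * Returns the *-site.xml files found under confDir. A missing or
   * unreadable directory is logged as a warning (not an error) and an
   * empty list is returned, so the caller decides whether to fail.
   */
  public static List<File> findSiteFiles(String confDir) {
    List<File> found = new ArrayList<>();
    File[] entries = new File(confDir).listFiles();
    if (entries == null) {
      LOG.warning("Hadoop configuration directory not usable: " + confDir);
      return found;
    }
    for (File f : entries) {
      if (f.isFile() && f.getName().endsWith("-site.xml")) {
        found.add(f);
      }
    }
    return found;
  }
}
```

With this shape, an initializer that considers the configuration mandatory can still throw when the returned list is empty, while other call sites can fall back to defaults silently.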
