See <https://builds.apache.org/job/Tajo-master-jdk8-nightly/90/changes>

Changes:

[jihoonson] TAJO-1716: Repartitioner.makeEvenDistributedFetchImpl() does not 
distribute fetches evenly.

[jihoonson] TAJO-1715: Precompute the hash value of various kinds of ids.

[jhkim] TAJO-1712: querytasks.jsp throws NPE occasionally when tasks are 
running. (jinho)

[jhkim] TAJO-1273: Merge DirectRawFile to master branch. (jinho)

[jihoonson] TAJO-1552: NPE occurs when 
GreedyHeuristicJoinOrderAlgorithm.getCost() returns infinity.

[hyunsik] TAJO-1718: Refine code for Parquet 1.8.1.

[jihoonson] TAJO-1713: Change the type of edge cache in JoinGraphContext from 
HashMap to LRUMap.

------------------------------------------
[...truncated 2703 lines...]

Running org.apache.tajo.storage.parquet.TestSchemaConverter
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.003 sec - in 
org.apache.tajo.storage.parquet.TestSchemaConverter
Running org.apache.tajo.storage.TestLineReader
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.658 sec - in 
org.apache.tajo.storage.TestLineReader
Running org.apache.tajo.storage.index.TestBSTIndex
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.264 sec - 
in org.apache.tajo.storage.index.TestBSTIndex
Running org.apache.tajo.storage.index.TestSingleCSVFileBSTIndex
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.898 sec - in 
org.apache.tajo.storage.index.TestSingleCSVFileBSTIndex
Running org.apache.tajo.storage.TestFileSystems
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.025 sec - in 
org.apache.tajo.storage.TestFileSystems
Running org.apache.tajo.storage.TestMergeScanner
Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 2.164 sec <<< 
FAILURE! - in org.apache.tajo.storage.TestMergeScanner
testMultipleFiles[3](org.apache.tajo.storage.TestMergeScanner)  Time elapsed: 
0.443 sec  <<< ERROR!
java.io.IOException: Could not read footer: java.lang.NoSuchMethodError: 
java.lang.Integer.compare(II)I
        at 
org.apache.parquet.hadoop.ParquetFileReader.readAllFootersInParallel(ParquetFileReader.java:248)
        at 
org.apache.parquet.hadoop.ParquetFileReader.readAllFootersInParallelUsingSummaryFiles(ParquetFileReader.java:189)
        at 
org.apache.parquet.hadoop.ParquetReader.<init>(ParquetReader.java:115)
        at org.apache.parquet.hadoop.ParquetReader.<init>(ParquetReader.java:65)
        at 
org.apache.tajo.storage.parquet.TajoParquetReader.<init>(TajoParquetReader.java:54)
        at 
org.apache.tajo.storage.parquet.ParquetScanner.init(ParquetScanner.java:60)
        at 
org.apache.tajo.storage.MergeScanner.getNextScanner(MergeScanner.java:137)
        at org.apache.tajo.storage.MergeScanner.reset(MergeScanner.java:130)
        at org.apache.tajo.storage.MergeScanner.<init>(MergeScanner.java:77)
        at 
org.apache.tajo.storage.TestMergeScanner.testMultipleFiles(TestMergeScanner.java:168)
Caused by: java.lang.NoSuchMethodError: java.lang.Integer.compare(II)I
        at org.apache.parquet.SemanticVersion.compareTo(SemanticVersion.java:99)
        at 
org.apache.parquet.CorruptStatistics.shouldIgnoreStatistics(CorruptStatistics.java:74)
        at 
org.apache.parquet.format.converter.ParquetMetadataConverter.fromParquetStatistics(ParquetMetadataConverter.java:263)
        at 
org.apache.parquet.format.converter.ParquetMetadataConverter.fromParquetMetadata(ParquetMetadataConverter.java:567)
        at 
org.apache.parquet.format.converter.ParquetMetadataConverter.readParquetMetadata(ParquetMetadataConverter.java:544)
        at 
org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:431)
        at 
org.apache.parquet.hadoop.ParquetFileReader$2.call(ParquetFileReader.java:238)
        at 
org.apache.parquet.hadoop.ParquetFileReader$2.call(ParquetFileReader.java:234)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
        at java.lang.Thread.run(Thread.java:662)

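The root cause in the trace above is worth noting: `java.lang.Integer.compare(int, int)` was only added in Java 7, and Parquet 1.8.1's `SemanticVersion.compareTo` calls it, so the `NoSuchMethodError` suggests the tests ran on a pre-Java 7 runtime despite the job name saying jdk8. A minimal sketch (class and method names hypothetical, not from the build) of a Java 6-compatible equivalent of the missing method:

```java
// Sketch: Integer.compare(int, int) first appeared in Java 7.
// On a Java 6 JRE, code compiled against it links fine but fails at
// call time with NoSuchMethodError, exactly as seen in this trace.
public class CompareCheck {

    // Java 6-compatible fallback with the same contract as Integer.compare:
    // negative if x < y, zero if equal, positive if x > y.
    static int compat(int x, int y) {
        return (x < y) ? -1 : ((x == y) ? 0 : 1);
    }

    public static void main(String[] args) {
        // On Java 7+ both calls agree; on Java 6 the left-hand call
        // would be the one that throws NoSuchMethodError.
        System.out.println(Integer.compare(1, 2) == compat(1, 2)); // true
        System.out.println(compat(3, 3)); // 0
    }
}
```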
Running org.apache.tajo.storage.TestFileTablespace
Formatting using clusterid: testClusterID
Formatting using clusterid: testClusterID
Formatting using clusterid: testClusterID
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.519 sec - in 
org.apache.tajo.storage.TestFileTablespace
Running org.apache.tajo.storage.avro.TestAvroUtil
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.21 sec - in 
org.apache.tajo.storage.avro.TestAvroUtil
Running org.apache.tajo.storage.TestDelimitedTextFile
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.128 sec - in 
org.apache.tajo.storage.TestDelimitedTextFile
Running org.apache.tajo.storage.TestCompressionStorages
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.468 sec - in 
org.apache.tajo.storage.TestCompressionStorages
Running org.apache.tajo.storage.json.TestJsonSerDe
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.017 sec - in 
org.apache.tajo.storage.json.TestJsonSerDe
Running org.apache.tajo.storage.TestStorages
Jul 30, 2015 3:00:39 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore 
to file. allocated memory: 65,659
Jul 30, 2015 3:00:39 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 34B for 
[myboolean] BOOLEAN: 1 values, 7B raw, 7B comp, 1 pages, encodings: [PLAIN, 
RLE, BIT_PACKED]
Jul 30, 2015 3:00:39 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for [mybit] 
INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [PLAIN, RLE, BIT_PACKED]
Jul 30, 2015 3:00:39 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 38B for [mychar] 
BINARY: 1 values, 11B raw, 11B comp, 1 pages, encodings: [PLAIN, RLE, 
BIT_PACKED]
Jul 30, 2015 3:00:39 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for [myint2] 
INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [PLAIN, RLE, BIT_PACKED]
Jul 30, 2015 3:00:39 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for [myint4] 
INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [PLAIN, RLE, BIT_PACKED]
Jul 30, 2015 3:00:39 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 55B for [myint8] 
INT64: 1 values, 14B raw, 14B comp, 1 pages, encodings: [PLAIN, RLE, BIT_PACKED]
Jul 30, 2015 3:00:39 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for [myfloat4] 
FLOAT: 1 values, 10B raw, 10B comp, 1 pages, encodings: [PLAIN, RLE, BIT_PACKED]
Jul 30, 2015 3:00:39 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 55B for [myfloat8] 
DOUBLE: 1 values, 14B raw, 14B comp, 1 pages, encodings: [PLAIN, RLE, 
BIT_PACKED]
Jul 30, 2015 3:00:39 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 50B for [mytext] 
BINARY: 1 values, 15B raw, 15B comp, 1 pages, encodings: [PLAIN, RLE, 
BIT_PACKED]
Jul 30, 2015 3:00:39 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 50B for [myblob] 
BINARY: 1 values, 15B raw, 15B comp, 1 pages, encodings: [PLAIN, RLE, 
BIT_PACKED]
Jul 30, 2015 3:00:39 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Jul 30, 2015 3:00:39 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
reading another 1 footers
Jul 30, 2015 3:00:39 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Jul 30, 2015 3:01:02 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore 
to file. allocated memory: 200,029
Jul 30, 2015 3:01:02 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 40,047B for [id] 
INT32: 10,000 values, 40,008B raw, 40,008B comp, 1 pages, encodings: [PLAIN, 
RLE, BIT_PACKED]
Jul 30, 2015 3:01:02 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 53B for [file] 
BINARY: 10,000 values, 12B raw, 12B comp, 1 pages, encodings: 
[PLAIN_DICTIONARY, RLE, BIT_PACKED], dic { 1 entries, 11B raw, 1B comp}
Jul 30, 2015 3:01:02 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 51B for [name] 
BINARY: 10,000 values, 12B raw, 12B comp, 1 pages, encodings: 
[PLAIN_DICTIONARY, RLE, BIT_PACKED], dic { 1 entries, 10B raw, 1B comp}
Jul 30, 2015 3:01:02 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 55B for [age] 
INT64: 10,000 values, 12B raw, 12B comp, 1 pages, encodings: [PLAIN_DICTIONARY, 
RLE, BIT_PACKED], dic { 1 entries, 8B raw, 1B comp}
Jul 30, 2015 3:01:02 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore 
to file. allocated memory: 200,029
Jul 30, 2015 3:01:02 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 40,047B for [id] 
INT32: 10,000 values, 40,008B raw, 40,008B comp, 1 pages, encodings: [PLAIN, 
RLE, BIT_PACKED]
Jul 30, 2015 3:01:02 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 53B for [file] 
BINARY: 10,000 values, 12B raw, 12B comp, 1 pages, encodings: 
[PLAIN_DICTIONARY, RLE, BIT_PACKED], dic { 1 entries, 11B raw, 1B comp}
Jul 30, 2015 3:01:02 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 51B for [name] 
BINARY: 10,000 values, 12B raw, 12B comp, 1 pages, encodings: 
[PLAIN_DICTIONARY, RLE, BIT_PACKED], dic { 1 entries, 10B raw, 1B comp}
Jul 30, 2015 3:01:02 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 55B for [age] 
INT64: 10,000 values, 12B raw, 12B comp, 1 pages, encodings: [PLAIN_DICTIONARY, 
RLE, BIT_PACKED], dic { 1 entries, 8B raw, 1B comp}
Jul 30, 2015 3:01:02 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Jul 30, 2015 3:01:02 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
reading another 1 footers
Jul 30, 2015 3:01:02 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Jul 30, 2015 3:01:11 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore 
to file. allocated memory: 65,690
Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 34B for [col1] 
BOOLEAN: 1 values, 7B raw, 7B comp, 1 pages, encodings: [PLAIN, RLE, BIT_PACKED]
Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 56B for [col2] 
BINARY: 1 values, 17B raw, 17B comp, 1 pages, encodings: [PLAIN, RLE, 
BIT_PACKED]
Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for [col3] 
INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [PLAIN, RLE, BIT_PACKED]
Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for [col4] 
INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [PLAIN, RLE, BIT_PACKED]
Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 55B for [col5] 
INT64: 1 values, 14B raw, 14B comp, 1 pages, encodings: [PLAIN, RLE, BIT_PACKED]
Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for [col6] 
FLOAT: 1 values, 10B raw, 10B comp, 1 pages, encodings: [PLAIN, RLE, BIT_PACKED]
Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 55B for [col7] 
DOUBLE: 1 values, 14B raw, 14B comp, 1 pages, encodings: [PLAIN, RLE, 
BIT_PACKED]
Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 56B for [col8] 
BINARY: 1 values, 17B raw, 17B comp, 1 pages, encodings: [PLAIN, RLE, 
BIT_PACKED]
Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 56B for [col9] 
BINARY: 1 values, 17B raw, 17B comp, 1 pages, encodings: [PLAIN, RLE, 
BIT_PACKED]
Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 47B for [col10] 
BINARY: 1 values, 14B raw, 14B comp, 1 pages, encodings: [PLAIN, RLE, 
BIT_PACKED]
Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 62B for [col12] 
BINARY: 1 values, 19B raw, 19B comp, 1 pages, encodings: [PLAIN, RLE, 
BIT_PACKED]
Jul 30, 2015 3:01:12 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Jul 30, 2015 3:01:12 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
reading another 1 footers
Jul 30, 2015 3:01:12 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore 
to file. allocated memory: 48
Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for [col1] 
FLOAT: 1 values, 10B raw, 10B comp, 1 pages, encodings: [PLAIN, RLE, BIT_PACKED]
Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 55B for [col2] 
DOUBLE: 1 values, 14B raw, 14B comp, 1 pages, encodings: [PLAIN, RLE, 
BIT_PACKED]
Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for [col3] 
INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [PLAIN, RLE, BIT_PACKED]
Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for [col4] 
INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [PLAIN, RLE, BIT_PACKED]
Tests run: 98, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 6.291 sec <<< FAILURE! - in org.apache.tajo.storage.TestStorages
testVariousTypes[2](org.apache.tajo.storage.TestStorages)  Time elapsed: 0.049 
sec  <<< ERROR!
java.io.IOException: Could not read footer: java.lang.NoSuchMethodError: 
java.lang.Integer.compare(II)I
        at 
org.apache.parquet.hadoop.ParquetFileReader.readAllFootersInParallel(ParquetFileReader.java:248)
        at 
org.apache.parquet.hadoop.ParquetFileReader.readAllFootersInParallelUsingSummaryFiles(ParquetFileReader.java:189)
        at 
org.apache.parquet.hadoop.ParquetReader.<init>(ParquetReader.java:115)
        at org.apache.parquet.hadoop.ParquetReader.<init>(ParquetReader.java:65)
        at 
org.apache.tajo.storage.parquet.TajoParquetReader.<init>(TajoParquetReader.java:54)
        at 
org.apache.tajo.storage.parquet.ParquetScanner.init(ParquetScanner.java:60)
        at 
org.apache.tajo.storage.TestStorages.testVariousTypes(TestStorages.java:383)
Caused by: java.lang.NoSuchMethodError: java.lang.Integer.compare(II)I
        at org.apache.parquet.SemanticVersion.compareTo(SemanticVersion.java:99)
        at 
org.apache.parquet.CorruptStatistics.shouldIgnoreStatistics(CorruptStatistics.java:74)
        at 
org.apache.parquet.format.converter.ParquetMetadataConverter.fromParquetStatistics(ParquetMetadataConverter.java:263)
        at 
org.apache.parquet.format.converter.ParquetMetadataConverter.fromParquetMetadata(ParquetMetadataConverter.java:567)
        at 
org.apache.parquet.format.converter.ParquetMetadataConverter.readParquetMetadata(ParquetMetadataConverter.java:544)
        at 
org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:431)
        at 
org.apache.parquet.hadoop.ParquetFileReader$2.call(ParquetFileReader.java:238)
        at 
org.apache.parquet.hadoop.ParquetFileReader$2.call(ParquetFileReader.java:234)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
        at java.lang.Thread.run(Thread.java:662)

testNullHandlingTypes[2](org.apache.tajo.storage.TestStorages)  Time elapsed: 
0.059 sec  <<< ERROR!
java.io.IOException: Could not read footer: java.lang.NoSuchMethodError: 
java.lang.Integer.compare(II)I
        at 
org.apache.parquet.hadoop.ParquetFileReader.readAllFootersInParallel(ParquetFileReader.java:248)
        at 
org.apache.parquet.hadoop.ParquetFileReader.readAllFootersInParallelUsingSummaryFiles(ParquetFileReader.java:189)
        at 
org.apache.parquet.hadoop.ParquetReader.<init>(ParquetReader.java:115)
        at org.apache.parquet.hadoop.ParquetReader.<init>(ParquetReader.java:65)
        at 
org.apache.tajo.storage.parquet.TajoParquetReader.<init>(TajoParquetReader.java:54)
        at 
org.apache.tajo.storage.parquet.ParquetScanner.init(ParquetScanner.java:60)
        at 
org.apache.tajo.storage.TestStorages.testNullHandlingTypes(TestStorages.java:472)
Caused by: java.lang.NoSuchMethodError: java.lang.Integer.compare(II)I
        at org.apache.parquet.SemanticVersion.compareTo(SemanticVersion.java:99)
        at 
org.apache.parquet.CorruptStatistics.shouldIgnoreStatistics(CorruptStatistics.java:74)
        at 
org.apache.parquet.format.converter.ParquetMetadataConverter.fromParquetStatistics(ParquetMetadataConverter.java:263)
        at 
org.apache.parquet.format.converter.ParquetMetadataConverter.fromParquetMetadata(ParquetMetadataConverter.java:567)
        at 
org.apache.parquet.format.converter.ParquetMetadataConverter.readParquetMetadata(ParquetMetadataConverter.java:544)
        at 
org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:431)
        at 
org.apache.parquet.hadoop.ParquetFileReader$2.call(ParquetFileReader.java:238)
        at 
org.apache.parquet.hadoop.ParquetFileReader$2.call(ParquetFileReader.java:234)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
        at java.lang.Thread.run(Thread.java:662)

Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 55B for [col5] 
INT64: 1 values, 14B raw, 14B comp, 1 pages, encodings: [PLAIN, RLE, BIT_PACKED]
Jul 30, 2015 3:01:12 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Jul 30, 2015 3:01:12 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
reading another 1 footers
Jul 30, 2015 3:01:12 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized 
will read a total of 1 records.
Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next 
block
Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in 
7 ms. row count = 1
Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore 
to file. allocated memory: 280,000
Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 40,047B for [id] 
INT32: 10,000 values, 40,008B raw, 40,008B comp, 1 pages, encodings: [PLAIN, 
RLE, BIT_PACKED]
Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 80,055B for [age] 
INT64: 10,000 values, 80,008B raw, 80,008B comp, 1 pages, encodings: [PLAIN, 
RLE, BIT_PACKED]
Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 40,047B for 
[score] FLOAT: 10,000 values, 40,008B raw, 40,008B comp, 1 pages, encodings: 
[PLAIN, RLE, BIT_PACKED]
Jul 30, 2015 3:01:12 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Jul 30, 2015 3:01:12 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
reading another 1 footers
Jul 30, 2015 3:01:12 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized 
will read a total of 10000 records.
Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next 
block
Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in 
1 ms. row count = 10000
Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore 
to file. allocated memory: 66,794
Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 36B for [col1] 
BOOLEAN: 12 values, 9B raw, 9B comp, 1 pages, encodings: [PLAIN, RLE, 
BIT_PACKED]
Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 48B for [col2] 
BINARY: 12 values, 9B raw, 9B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE, 
BIT_PACKED], dic { 1 entries, 11B raw, 1B comp}
Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 42B for [col3] 
INT32: 12 values, 9B raw, 9B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE, 
BIT_PACKED], dic { 1 entries, 4B raw, 1B comp}
Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 42B for [col4] 
INT32: 12 values, 9B raw, 9B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE, 
BIT_PACKED], dic { 1 entries, 4B raw, 1B comp}
Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 50B for [col5] 
INT64: 12 values, 9B raw, 9B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE, 
BIT_PACKED], dic { 1 entries, 8B raw, 1B comp}
Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 42B for [col6] 
FLOAT: 12 values, 9B raw, 9B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE, 
BIT_PACKED], dic { 1 entries, 4B raw, 1B comp}
Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 50B for [col7] 
DOUBLE: 12 values, 9B raw, 9B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE, 
BIT_PACKED], dic { 1 entries, 8B raw, 1B comp}
Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 48B for [col8] 
BINARY: 12 values, 9B raw, 9B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE, 
BIT_PACKED], dic { 1 entries, 11B raw, 1B comp}
Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 49B for [col9] 
BINARY: 12 values, 10B raw, 10B comp, 1 pages, encodings: [PLAIN_DICTIONARY, 
RLE, BIT_PACKED], dic { 1 entries, 11B raw, 1B comp}
Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for [col10] 
BINARY: 12 values, 10B raw, 10B comp, 1 pages, encodings: [PLAIN_DICTIONARY, 
RLE, BIT_PACKED], dic { 1 entries, 8B raw, 1B comp}
Jul 30, 2015 3:01:12 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 53B for [col12] 
BINARY: 12 values, 10B raw, 10B comp, 1 pages, encodings: [PLAIN_DICTIONARY, 
RLE, BIT_PACKED], dic { 1 entries, 13B raw, 1B comp}
Jul 30, 2015 3:01:12 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Jul 30, 2015 3:01:12 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
reading another 1 footers
Jul 30, 2015 3:01:12 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5

Results :

Tests in error: 
  TestORCScanner.setup:73 » UnsupportedClassVersion 
com/facebook/presto/orc/OrcD...
  TestORCScanner.end:102 NullPointer
  TestReadWrite.testAll:93 » IO Could not read footer: 
java.lang.NoSuchMethodErr...
  TestMergeScanner.testMultipleFiles:168 » IO Could not read footer: 
java.lang.N...
  TestStorages.testVariousTypes:383 » IO Could not read footer: 
java.lang.NoSuch...
  TestStorages.testNullHandlingTypes:472 » IO Could not read footer: 
java.lang.N...

Tests run: 178, Failures: 0, Errors: 6, Skipped: 0
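The `TestORCScanner.setup` failure (`UnsupportedClassVersionError` on a presto-orc class) points the same way as the `NoSuchMethodError` cases: class files compiled for a newer JVM than the one executing the tests. A small sketch (hypothetical class name, not part of the build) that reads a class file's major version, which is the field the JVM checks before throwing `UnsupportedClassVersionError` (e.g. 50 = Java 6, 51 = Java 7, 52 = Java 8):

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ClassVersion {

    // Reads the class-file header: magic number, minor version, major version.
    static int majorVersion(InputStream in) throws IOException {
        DataInputStream data = new DataInputStream(in);
        if (data.readInt() != 0xCAFEBABE) {
            throw new IOException("not a class file");
        }
        data.readUnsignedShort(); // minor version (skipped)
        return data.readUnsignedShort(); // major version
    }

    public static void main(String[] args) throws IOException {
        // Inspect a class shipped with the running JRE itself.
        InputStream in = Object.class.getResourceAsStream("/java/lang/Object.class");
        try {
            System.out.println("java/lang/Object major version: " + majorVersion(in));
        } finally {
            in.close();
        }
    }
}
```

Running this against the presto-orc jar's classes on the build agent would show whether they target a higher major version than the agent's JRE supports.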

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Tajo Main ......................................... SUCCESS [  1.769 s]
[INFO] Tajo Project POM .................................. SUCCESS [  1.564 s]
[INFO] Tajo Maven Plugins ................................ SUCCESS [  2.932 s]
[INFO] Tajo Common ....................................... SUCCESS [ 26.320 s]
[INFO] Tajo Algebra ...................................... SUCCESS [  2.492 s]
[INFO] Tajo Catalog Common ............................... SUCCESS [  6.326 s]
[INFO] Tajo Plan ......................................... SUCCESS [  8.017 s]
[INFO] Tajo Rpc Common ................................... SUCCESS [  1.435 s]
[INFO] Tajo Protocol Buffer Rpc .......................... SUCCESS [ 45.858 s]
[INFO] Tajo Catalog Client ............................... SUCCESS [  1.786 s]
[INFO] Tajo Catalog Server ............................... SUCCESS [ 10.341 s]
[INFO] Tajo Storage Common ............................... SUCCESS [ 11.087 s]
[INFO] Tajo HDFS Storage ................................. FAILURE [ 55.356 s]
[INFO] Tajo HBase Storage ................................ SKIPPED
[INFO] Tajo PullServer ................................... SKIPPED
[INFO] Tajo Client ....................................... SKIPPED
[INFO] Tajo CLI tools .................................... SKIPPED
[INFO] Tajo JDBC Driver .................................. SKIPPED
[INFO] ASM (thirdparty) .................................. SKIPPED
[INFO] Tajo RESTful Container ............................ SKIPPED
[INFO] Tajo Metrics ...................................... SKIPPED
[INFO] Tajo Core ......................................... SKIPPED
[INFO] Tajo RPC .......................................... SKIPPED
[INFO] Tajo Catalog Drivers Hive ......................... SKIPPED
[INFO] Tajo Catalog Drivers .............................. SKIPPED
[INFO] Tajo Catalog ...................................... SKIPPED
[INFO] Tajo Storage ...................................... SKIPPED
[INFO] Tajo Distribution ................................. SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:56 min
[INFO] Finished at: 2015-07-30T03:01:16+00:00
[INFO] Final Memory: 86M/1178M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project tajo-storage-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
<https://builds.apache.org/job/Tajo-master-jdk8-nightly/ws/tajo-storage/tajo-storage-hdfs/target/surefire-reports>
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :tajo-storage-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results