See <https://builds.apache.org/job/Tajo-0.8.0-build/80/changes>

Changes:

[hyunsik] TAJO-763: Out of range problem in utc_usec_to(). (Ilhyun Suh via hyunsik)

------------------------------------------
[...truncated 1206 lines...]
[INFO] 
[INFO] --- build-helper-maven-plugin:1.5:add-source (add-source) @ tajo-storage ---
[INFO] Source directory: <https://builds.apache.org/job/Tajo-0.8.0-build/ws/tajo-storage/target/generated-sources/proto> added.
[INFO] 
[INFO] --- maven-resources-plugin:2.2:resources (default-resources) @ tajo-storage ---
[INFO] Using default encoding to copy filtered resources.
[INFO] 
[INFO] --- maven-compiler-plugin:2.5.1:compile (default-compile) @ tajo-storage ---
[INFO] Compiling 89 source files to <https://builds.apache.org/job/Tajo-0.8.0-build/ws/tajo-storage/target/classes>
[INFO] 
[INFO] --- maven-resources-plugin:2.2:testResources (default-testResources) @ tajo-storage ---
[INFO] Using default encoding to copy filtered resources.
[INFO] 
[INFO] --- maven-compiler-plugin:2.5.1:testCompile (default-testCompile) @ tajo-storage ---
[INFO] Compiling 20 source files to <https://builds.apache.org/job/Tajo-0.8.0-build/ws/tajo-storage/target/test-classes>
[INFO] 
[INFO] --- maven-surefire-plugin:2.12.4:test (default-test) @ tajo-storage ---
[INFO] Surefire report directory: <https://builds.apache.org/job/Tajo-0.8.0-build/ws/tajo-storage/target/surefire-reports>

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running org.apache.tajo.storage.TestFileSystems
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.708 sec
Running org.apache.tajo.storage.TestLazyTuple
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.018 sec
Running org.apache.tajo.storage.TestStorageManager
Formatting using clusterid: testClusterID
Formatting using clusterid: testClusterID
Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 5.911 sec <<< FAILURE!
testGetSplitWithBlockStorageLocationsBatching(org.apache.tajo.storage.TestStorageManager)  Time elapsed: 1.666 sec  <<< FAILURE!
java.lang.AssertionError: Values should be different. Actual: -1
        at org.junit.Assert.fail(Assert.java:88)
        at org.junit.Assert.failEquals(Assert.java:185)
        at org.junit.Assert.assertNotEquals(Assert.java:161)
        at org.junit.Assert.assertNotEquals(Assert.java:198)
        at org.junit.Assert.assertNotEquals(Assert.java:209)
        at org.apache.tajo.storage.TestStorageManager.testGetSplitWithBlockStorageLocationsBatching(TestStorageManager.java:193)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
        at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
        at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
        at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
        at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
        at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
        at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
        at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
        at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
        at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
        at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
        at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
        at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
        at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
        at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
        at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

Running org.apache.tajo.storage.index.TestSingleCSVFileBSTIndex
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.346 sec
Running org.apache.tajo.storage.index.TestBSTIndex
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.973 sec
Running org.apache.tajo.storage.TestTupleComparator
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0 sec
Running org.apache.tajo.storage.parquet.TestSchemaConverter
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.018 sec
Running org.apache.tajo.storage.parquet.TestReadWrite
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.886 sec
Running org.apache.tajo.storage.TestCompressionStorages
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.052 sec
Running org.apache.tajo.storage.TestVTuple
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.001 sec
Running org.apache.tajo.storage.TestFrameTuple
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0 sec
Running org.apache.tajo.storage.TestMergeScanner
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.348 sec
Running org.apache.tajo.storage.v2.TestCSVCompression
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.397 sec
Running org.apache.tajo.storage.v2.TestCSVScanner
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.165 sec
Running org.apache.tajo.storage.v2.TestStorages
Apr 19, 2014 4:49:59 AM INFO: parquet.hadoop.InternalParquetRecordWriter: 
Flushing mem store to file. allocated memory: 48,824,426
Apr 19, 2014 4:49:59 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
24B for [myboolean] BOOLEAN: 1 values, 7B raw, 7B comp, 1 pages, encodings: 
[RLE, BIT_PACKED, PLAIN]
Apr 19, 2014 4:49:59 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
27B for [mybit] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN]
Apr 19, 2014 4:49:59 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
28B for [mychar] BINARY: 1 values, 11B raw, 11B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN]
Apr 19, 2014 4:49:59 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
27B for [myint2] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN]
Apr 19, 2014 4:49:59 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
27B for [myint4] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN]
Apr 19, 2014 4:49:59 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
31B for [myint8] INT64: 1 values, 14B raw, 14B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN]
Apr 19, 2014 4:49:59 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
27B for [myfloat4] FLOAT: 1 values, 10B raw, 10B comp, 1 pages, encodings: 
[RLE, BIT_PACKED, PLAIN]
Apr 19, 2014 4:49:59 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
31B for [myfloat8] DOUBLE: 1 values, 14B raw, 14B comp, 1 pages, encodings: 
[RLE, BIT_PACKED, PLAIN]
Apr 19, 2014 4:49:59 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
32B for [mytext] BINARY: 1 values, 15B raw, 15B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN]
Apr 19, 2014 4:49:59 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
32B for [myblob] BINARY: 1 values, 15B raw, 15B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN]
Apr 19, 2014 4:49:59 AM INFO: parquet.hadoop.ParquetFileReader: reading another 
1 footers
Apr 19, 2014 4:49:59 AM INFO: parquet.hadoop.InternalParquetRecordReader: 
RecordReader initialized will read a total of 1 records.
Apr 19, 2014 4:49:59 AM INFO: parquet.hadoop.InternalParquetRecordReader: at 
row 0. reading next block
Apr 19, 2014 4:49:59 AM INFO: parquet.hadoop.InternalParquetRecordReader: block 
read in memory in 7 ms. row count = 1
Apr 19, 2014 4:50:07 AM INFO: parquet.hadoop.InternalParquetRecordWriter: 
Flushing mem store to file. allocated memory: 36,271,037
Apr 19, 2014 4:50:07 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
40,031B for [id] INT32: 10,000 values, 40,008B raw, 40,008B comp, 1 pages, 
encodings: [RLE, BIT_PACKED, PLAIN]
Apr 19, 2014 4:50:07 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
31B for [file] BINARY: 10,000 values, 12B raw, 12B comp, 1 pages, encodings: 
[RLE, BIT_PACKED, PLAIN_DICTIONARY], dic { 1 entries, 11B raw, 1B comp}
Apr 19, 2014 4:50:07 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
31B for [name] BINARY: 10,000 values, 12B raw, 12B comp, 1 pages, encodings: 
[RLE, BIT_PACKED, PLAIN_DICTIONARY], dic { 1 entries, 10B raw, 1B comp}
Apr 19, 2014 4:50:07 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
31B for [age] INT64: 10,000 values, 12B raw, 12B comp, 1 pages, encodings: 
[RLE, BIT_PACKED, PLAIN_DICTIONARY], dic { 1 entries, 8B raw, 1B comp}
Apr 19, 2014 4:50:07 AM INFO: parquet.hadoop.InternalParquetRecordWriter: 
Flushing mem store to file. allocated memory: 36,271,037
Apr 19, 2014 4:50:07 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
40,031B for [id] INT32: 10,000 values, 40,008B raw, 40,008B comp, 1 pages, 
encodings: [RLE, BIT_PACKED, PLAIN]
Apr 19, 2014 4:50:07 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
31B for [file] BINARY: 10,000 values, 12B raw, 12B comp, 1 pages, encodings: 
[RLE, BIT_PACKED, PLAIN_DICTIONARY], dic { 1 entries, 11B raw, 1B comp}
Apr 19, 2014 4:50:07 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
31B for [name] BINARY: 10,000 values, 12B raw, 12B comp, 1 pages, encodings: 
[RLE, BIT_PACKED, PLAIN_DICTIONARY], dic { 1 entries, 10B raw, 1B comp}
Apr 19, 2014 4:50:07 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
31B for [age] INT64: 10,000 values, 12B raw, 12B comp, 1 pages, encodings: 
[RLE, BIT_PACKED, PLAIN_DICTIONARY], dic { 1 entries, 8B raw, 1B comp}
Apr 19, 2014 4:50:07 AM INFO: parquet.hadoop.ParquetFileReader: reading another 
1 footers
Apr 19, 2014 4:50:07 AM INFO: parquet.hadoop.InternalParquetRecordReader: 
RecordReader initialized will read a total of 10000 records.
Apr 19, 2014 4:50:07 AM INFO: parquet.hadoop.InternalParquetRecordReader: at 
row 0. reading next block
Apr 19, 2014 4:50:07 AM INFO: parquet.hadoop.InternalParquetRecordReader: block 
read in memory in 0 ms. row count = 10000
Apr 19, 2014 4:50:07 AM INFO: parquet.hadoop.ParquetFileReader: reading another 
1 footers
Apr 19, 2014 4:50:07 AM INFO: parquet.hadoop.InternalParquetRecordReader: 
RecordReader initialized will read a total of 10000 records.
Apr 19, 2014 4:50:07 AM INFO: parquet.hadoop.InternalParquetRecordReader: at 
row 0. reading next block
Apr 19, 2014 4:50:07 AM INFO: parquet.hadoop.InternalParquetRecordReader: block 
read in memory in 0 ms. row count = 10000
Apr 19, 2014 4:50:16 AM INFO: parquet.hadoop.InternalParquetRecordWriter: 
Flushing mem store to file. allocated memory: 51,131,316
Apr 19, 2014 4:50:16 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
24B for [col1] BOOLEAN: 1 values, 7B raw, 7B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN]
Apr 19, 2014 4:50:16 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
27B for [col2] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN]
Apr 19, 2014 4:50:16 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
34B for [col3] BINARY: 1 values, 17B raw, 17B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN]
Apr 19, 2014 4:50:16 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
27B for [col4] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN]
Apr 19, 2014 4:50:16 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
27B for [col5] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN]
Apr 19, 2014 4:50:16 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
31B for [col6] INT64: 1 values, 14B raw, 14B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN]
Apr 19, 2014 4:50:16 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
27B for [col7] FLOAT: 1 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN]
Apr 19, 2014 4:50:16 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
31B for [col8] DOUBLE: 1 values, 14B raw, 14B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN]
Apr 19, 2014 4:50:16 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
34B for [col9] BINARY: 1 values, 17B raw, 17B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN]
Apr 19, 2014 4:50:16 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
34B for [col10] BINARY: 1 values, 17B raw, 17B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN]
Apr 19, 2014 4:50:16 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
31B for [col11] BINARY: 1 values, 14B raw, 14B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN]
Apr 19, 2014 4:50:16 AM INFO: parquet.hadoop.ParquetFileReader: reading another 
1 footers
Apr 19, 2014 4:50:16 AM INFO: parquet.hadoop.InternalParquetRecordReader: 
RecordReader initialized will read a total of 1 records.
Apr 19, 2014 4:50:16 AM INFO: parquet.hadoop.InternalParquetRecordReader: at 
row 0. reading next block
Apr 19, 2014 4:50:16 AM INFO: parquet.hadoop.InternalParquetRecordReader: block 
read in memory in 0 ms. row count = 1
Apr 19, 2014 4:50:16 AM INFO: parquet.hadoop.InternalParquetRecordWriter: 
Flushing mem store to file. allocated memory: 34,044,142
Apr 19, 2014 4:50:16 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
40,031B for [id] INT32: 10,000 values, 40,008B raw, 40,008B comp, 1 pages, 
encodings: [RLE, BIT_PACKED, PLAIN]
Apr 19, 2014 4:50:16 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 80,031B for [age] INT64: 10,000 values, 80,008B raw, 80,008B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.532 sec
Running org.apache.tajo.storage.TestStorages
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.818 sec
Apr 19, 2014 4:50:16 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 40,031B for [score] FLOAT: 10,000 values, 40,008B raw, 40,008B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Apr 19, 2014 4:50:16 AM INFO: parquet.hadoop.ParquetFileReader: reading another 
1 footers
Apr 19, 2014 4:50:16 AM INFO: parquet.hadoop.InternalParquetRecordReader: 
RecordReader initialized will read a total of 10000 records.
Apr 19, 2014 4:50:16 AM INFO: parquet.hadoop.InternalParquetRecordReader: at 
row 0. reading next block
Apr 19, 2014 4:50:16 AM INFO: parquet.hadoop.InternalParquetRecordReader: block 
read in memory in 0 ms. row count = 10000
Apr 19, 2014 4:50:18 AM INFO: parquet.hadoop.InternalParquetRecordWriter: 
Flushing mem store to file. allocated memory: 53,438,201
Apr 19, 2014 4:50:18 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
24B for [col1] BOOLEAN: 1 values, 7B raw, 7B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN]
Apr 19, 2014 4:50:18 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
27B for [col2] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN]
Apr 19, 2014 4:50:18 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
34B for [col3] BINARY: 1 values, 17B raw, 17B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN]
Apr 19, 2014 4:50:18 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
27B for [col4] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN]
Apr 19, 2014 4:50:18 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
27B for [col5] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN]
Apr 19, 2014 4:50:18 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
31B for [col6] INT64: 1 values, 14B raw, 14B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN]
Apr 19, 2014 4:50:18 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
27B for [col7] FLOAT: 1 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN]
Apr 19, 2014 4:50:18 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
31B for [col8] DOUBLE: 1 values, 14B raw, 14B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN]
Apr 19, 2014 4:50:18 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
34B for [col9] BINARY: 1 values, 17B raw, 17B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN]
Apr 19, 2014 4:50:18 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
34B for [col10] BINARY: 1 values, 17B raw, 17B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN]
Apr 19, 2014 4:50:18 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
31B for [col11] BINARY: 1 values, 14B raw, 14B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN]
Apr 19, 2014 4:50:18 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
36B for [col13] BINARY: 1 values, 19B raw, 19B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN]
Apr 19, 2014 4:50:18 AM INFO: parquet.hadoop.ParquetFileReader: reading another 
1 footers
Apr 19, 2014 4:50:18 AM INFO: parquet.hadoop.InternalParquetRecordReader: 
RecordReader initialized will read a total of 1 records.
Apr 19, 2014 4:50:18 AM INFO: parquet.hadoop.InternalParquetRecordReader: at 
row 0. reading next block
Apr 19, 2014 4:50:18 AM INFO: parquet.hadoop.InternalParquetRecordReader: block 
read in memory in 1 ms. row count = 1
Apr 19, 2014 4:50:18 AM INFO: parquet.hadoop.InternalParquetRecordWriter: 
Flushing mem store to file. allocated memory: 34,044,142
Apr 19, 2014 4:50:18 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
40,031B for [id] INT32: 10,000 values, 40,008B raw, 40,008B comp, 1 pages, 
encodings: [RLE, BIT_PACKED, PLAIN]
Apr 19, 2014 4:50:18 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
80,031B for [age] INT64: 10,000 values, 80,008B raw, 80,008B comp, 1 pages, 
encodings: [RLE, BIT_PACKED, PLAIN]
Apr 19, 2014 4:50:18 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
40,031B for [score] FLOAT: 10,000 values, 40,008B raw, 40,008B comp, 1 pages, 
encodings: [RLE, BIT_PACKED, PLAIN]
Apr 19, 2014 4:50:18 AM INFO: parquet.hadoop.ParquetFileReader: reading another 
1 footers
Apr 19, 2014 4:50:18 AM INFO: parquet.hadoop.InternalParquetRecordReader: 
RecordReader initialized will read a total of 10000 records.
Apr 19, 2014 4:50:18 AM INFO: parquet.hadoop.InternalParquetRecordReader: at 
row 0. reading next block
Apr 19, 2014 4:50:18 AM INFO: parquet.hadoop.InternalParquetRecordReader: block 
read in memory in 1 ms. row count = 10000
Apr 19, 2014 4:50:19 AM INFO: parquet.hadoop.InternalParquetRecordWriter: 
Flushing mem store to file. allocated memory: 53,438,685
Apr 19, 2014 4:50:19 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
26B for [col1] BOOLEAN: 13 values, 9B raw, 9B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN]
Apr 19, 2014 4:50:19 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
26B for [col2] INT32: 13 values, 9B raw, 9B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN_DICTIONARY], dic { 1 entries, 4B raw, 1B comp}
Apr 19, 2014 4:50:19 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
26B for [col3] BINARY: 13 values, 9B raw, 9B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN_DICTIONARY], dic { 1 entries, 11B raw, 1B comp}
Apr 19, 2014 4:50:19 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
26B for [col4] INT32: 13 values, 9B raw, 9B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN_DICTIONARY], dic { 1 entries, 4B raw, 1B comp}
Apr 19, 2014 4:50:19 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
26B for [col5] INT32: 13 values, 9B raw, 9B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN_DICTIONARY], dic { 1 entries, 4B raw, 1B comp}
Apr 19, 2014 4:50:19 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
26B for [col6] INT64: 13 values, 9B raw, 9B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN_DICTIONARY], dic { 1 entries, 8B raw, 1B comp}
Apr 19, 2014 4:50:19 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
26B for [col7] FLOAT: 13 values, 9B raw, 9B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN_DICTIONARY], dic { 1 entries, 4B raw, 1B comp}
Apr 19, 2014 4:50:19 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
26B for [col8] DOUBLE: 13 values, 9B raw, 9B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN_DICTIONARY], dic { 1 entries, 8B raw, 1B comp}
Apr 19, 2014 4:50:19 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
27B for [col9] BINARY: 13 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN_DICTIONARY], dic { 1 entries, 11B raw, 1B comp}
Apr 19, 2014 4:50:19 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
27B for [col10] BINARY: 13 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN_DICTIONARY], dic { 1 entries, 11B raw, 1B comp}
Apr 19, 2014 4:50:19 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
27B for [col11] BINARY: 13 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN_DICTIONARY], dic { 1 entries, 8B raw, 1B comp}
Apr 19, 2014 4:50:19 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
27B for [col13] BINARY: 13 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN_DICTIONARY], dic { 1 entries, 13B raw, 1B comp}
Apr 19, 2014 4:50:19 AM INFO: parquet.hadoop.ParquetFileReader: reading another 
1 footers
Apr 19, 2014 4:50:19 AM INFO: parquet.hadoop.InternalParquetRecordReader: 
RecordReader initialized will read a total of 13 records.
Apr 19, 2014 4:50:19 AM INFO: parquet.hadoop.InternalParquetRecordReader: at 
row 0. reading next block
Apr 19, 2014 4:50:19 AM INFO: parquet.hadoop.InternalParquetRecordReader: block 
read in memory in 1 ms. row count = 13

Results :

Failed tests:   testGetSplitWithBlockStorageLocationsBatching(org.apache.tajo.storage.TestStorageManager): Values should be different. Actual: -1

Tests run: 149, Failures: 1, Errors: 0, Skipped: 0
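
(Note on the failing assertion: "Values should be different. Actual: -1" is JUnit 4's standard assertNotEquals failure message, produced when the "unexpected" and actual values turn out to be equal. The snippet below is a minimal, self-contained sketch of that failure mode under stated assumptions; it is not the actual TestStorageManager code, and the lookupVolumeId helper is hypothetical.)

    import static org.junit.Assert.assertNotEquals;

    import org.junit.Test;

    public class AssertNotEqualsSketch {

      // Hypothetical stand-in for a lookup that should never return -1,
      // e.g. a block storage location / volume id that failed to resolve.
      private long lookupVolumeId() {
        return -1L;
      }

      @Test
      public void valueShouldDifferFromMinusOne() {
        long actual = lookupVolumeId();
        // When both arguments are equal, JUnit 4 reports:
        //   java.lang.AssertionError: Values should be different. Actual: -1
        assertNotEquals(-1L, actual);
      }
    }
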

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Tajo Main ......................................... SUCCESS [25.243s]
[INFO] Tajo Project POM .................................. SUCCESS [0.863s]
[INFO] Tajo Common ....................................... SUCCESS [7.702s]
[INFO] Tajo Algebra ...................................... SUCCESS [1.339s]
[INFO] Tajo Catalog Common ............................... SUCCESS [9.080s]
[INFO] Tajo Rpc .......................................... SUCCESS [21.588s]
[INFO] Tajo Catalog Client ............................... SUCCESS [1.314s]
[INFO] Tajo Catalog Server ............................... SUCCESS [7.481s]
[INFO] Tajo Storage ...................................... FAILURE [57.661s]
[INFO] Tajo Yarn PullServer .............................. SKIPPED
[INFO] Tajo Client ....................................... SKIPPED
[INFO] Tajo JDBC Driver .................................. SKIPPED
[INFO] Tajo Core Backend ................................. SKIPPED
[INFO] Tajo Catalog Drivers .............................. SKIPPED
[INFO] Tajo Catalog ...................................... SKIPPED
[INFO] Tajo Distribution ................................. SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2:13.116s
[INFO] Finished at: Sat Apr 19 04:50:20 UTC 2014
[INFO] Final Memory: 31M/248M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.12.4:test (default-test) on project tajo-storage: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Tajo-0.8.0-build/ws/tajo-storage/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :tajo-storage
Build step 'Execute shell' marked build as failure
Updating TAJO-763
