See <https://builds.apache.org/job/Tajo-master-jdk8-nightly/109/changes>

Changes:

[hyunsik] TAJO-1763: tpch/*.tbl files cannot be found in maven modules except 
for core-tests.

[hyunsik] Add missed change log and reindent some logs.

[jhkim] TAJO-1779: Remove "DFSInputStream has been closed already" messages in 
DelimitedLineReader.

[jihoonson] TAJO-1755: Add documentation for missing built-in functions.

------------------------------------------
[...truncated 723908 lines...]
Aug 18, 2015 5:47:03 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in 
1 ms. row count = 1
Aug 18, 2015 5:47:03 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in 
1 ms. row count = 1
Aug 18, 2015 5:47:04 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Aug 18, 2015 5:47:04 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Aug 18, 2015 5:47:04 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
reading another 1 footers
Aug 18, 2015 5:47:04 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Aug 18, 2015 5:47:04 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
reading another 1 footers
Aug 18, 2015 5:47:04 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Aug 18, 2015 5:47:04 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized 
will read a total of 1 records.
Aug 18, 2015 5:47:04 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next 
block
Aug 18, 2015 5:47:04 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized 
will read a total of 1 records.
Aug 18, 2015 5:47:04 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next 
block
Aug 18, 2015 5:47:04 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in 
1 ms. row count = 1
Aug 18, 2015 5:47:04 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in 
1 ms. row count = 1
Aug 18, 2015 5:47:07 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore 
to file. allocated memory: 212
Aug 18, 2015 5:47:07 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for 
[l_orderkey] INT32: 5 values, 10B raw, 10B comp, 1 pages, encodings: 
[BIT_PACKED, RLE, PLAIN_DICTIONARY], dic { 3 entries, 12B raw, 3B comp}
Aug 18, 2015 5:47:07 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 123B for 
[l_shipdate] BINARY: 5 values, 76B raw, 76B comp, 1 pages, encodings: 
[BIT_PACKED, RLE, PLAIN]
Aug 18, 2015 5:47:07 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 123B for 
[l_shipdate_function] BINARY: 5 values, 76B raw, 76B comp, 1 pages, encodings: 
[BIT_PACKED, RLE, PLAIN]
Aug 18, 2015 5:47:07 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Aug 18, 2015 5:47:07 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
reading another 1 footers
Aug 18, 2015 5:47:07 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Aug 18, 2015 5:47:07 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized 
will read a total of 5 records.
Aug 18, 2015 5:47:07 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next 
block
Aug 18, 2015 5:47:07 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in 
1 ms. row count = 5
2015-08-18 06:00:53,651 INFO: org.apache.tajo.worker.TajoWorker (run(567)) - 
============================================
2015-08-18 06:00:53,653 INFO: org.apache.tajo.worker.TajoWorker (run(568)) - 
TajoWorker received SIGINT Signal
2015-08-18 06:00:53,653 INFO: org.apache.tajo.worker.TajoWorker (run(569)) - 
============================================
2015-08-18 06:00:53,655 INFO: org.apache.tajo.util.history.HistoryWriter 
(run(268)) - HistoryWriter_asf909.gq1.ygridcore.net_30203 stopped.
2015-08-18 06:00:53,655 INFO: org.apache.tajo.util.history.HistoryCleaner 
(run(136)) - History cleaner stopped
2015-08-18 06:00:53,661 INFO: org.mortbay.log (info(67)) - Shutdown hook 
executing
2015-08-18 06:00:53,661 INFO: org.apache.tajo.master.TajoMaster (run(538)) - 
============================================
2015-08-18 06:00:53,663 INFO: org.apache.tajo.worker.NodeStatusUpdater 
(serviceStop(111)) - NodeStatusUpdater stopped.
2015-08-18 06:00:53,663 INFO: org.apache.tajo.worker.NodeStatusUpdater 
(run(262)) - Heartbeat Thread stopped.
2015-08-18 06:00:53,663 INFO: org.apache.tajo.master.TajoMaster (run(539)) - 
TajoMaster received SIGINT Signal
2015-08-18 06:00:53,664 INFO: org.apache.tajo.master.TajoMaster (run(540)) - 
============================================
2015-08-18 06:00:53,664 INFO: org.mortbay.log (info(67)) - Shutdown hook 
complete
2015-08-18 06:00:53,666 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(173)) - Rpc (Tajo-REST) listened on 0:0:0:0:0:0:0:0:30202) shutdown
2015-08-18 06:00:53,666 INFO: org.apache.tajo.ws.rs.TajoRestService 
(serviceStop(129)) - Tajo Rest Service stopped.
2015-08-18 06:00:53,668 INFO: org.apache.tajo.catalog.CatalogServer 
(serviceStop(178)) - Catalog Server (127.0.0.1:30199) shutdown
2015-08-18 06:00:53,668 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(173)) - Rpc (CatalogProtocol) listened on 127.0.0.1:30199) shutdown
2015-08-18 06:00:53,669 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(173)) - Rpc (QueryMasterProtocol) listened on 0:0:0:0:0:0:0:0:30205) 
shutdown
2015-08-18 06:00:53,670 INFO: 
org.apache.tajo.querymaster.QueryMasterManagerService (serviceStop(106)) - 
QueryMasterManagerService stopped
2015-08-18 06:00:53,671 INFO: org.apache.tajo.util.history.HistoryWriter 
(run(268)) - HistoryWriter_127.0.0.1_30201 stopped.
2015-08-18 06:00:53,671 INFO: org.apache.tajo.querymaster.QueryMaster 
(run(417)) - QueryMaster heartbeat thread stopped
2015-08-18 06:00:53,673 INFO: org.apache.tajo.querymaster.QueryMaster 
(serviceStop(168)) - QueryMaster stopped
2015-08-18 06:00:53,673 INFO: org.apache.tajo.worker.TajoWorkerClientService 
(stop(99)) - TajoWorkerClientService stopping
2015-08-18 06:00:53,675 WARN: org.apache.hadoop.hdfs.DFSClient 
(flushOrSync(2025)) - Unable to persist blocks in hflush for 
/tajo/system/ha/active/127.0.0.1_39047
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
 No lease on /tajo/system/ha/active/127.0.0.1_39047 (inode 29786): File does 
not exist. Holder DFSClient_NONMAPREDUCE_-2056499274_1 does not have any open 
files.
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3433)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.fsync(FSNamesystem.java:3998)
        at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.fsync(NameNodeRpcServer.java:1210)
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.fsync(ClientNamenodeProtocolServerSideTranslatorPB.java:903)
        at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2045)

        at org.apache.hadoop.ipc.Client.call(Client.java:1476)
        at org.apache.hadoop.ipc.Client.call(Client.java:1407)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
        at com.sun.proxy.$Proxy30.fsync(Unknown Source)
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.fsync(ClientNamenodeProtocolTranslatorPB.java:838)
        at sun.reflect.GeneratedMethodAccessor68.invoke(Unknown Source)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy31.fsync(Unknown Source)
        at sun.reflect.GeneratedMethodAccessor68.invoke(Unknown Source)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
        at com.sun.proxy.$Proxy74.fsync(Unknown Source)
        at sun.reflect.GeneratedMethodAccessor68.invoke(Unknown Source)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
        at com.sun.proxy.$Proxy74.fsync(Unknown Source)
        at 
org.apache.hadoop.hdfs.DFSOutputStream.flushOrSync(DFSOutputStream.java:2022)
        at 
org.apache.hadoop.hdfs.DFSOutputStream.hsync(DFSOutputStream.java:1898)
        at 
org.apache.hadoop.fs.FSDataOutputStream.hsync(FSDataOutputStream.java:139)
        at 
org.apache.tajo.ha.HdfsServiceTracker.createMasterFile(HdfsServiceTracker.java:244)
        at 
org.apache.tajo.ha.HdfsServiceTracker.register(HdfsServiceTracker.java:155)
        at 
org.apache.tajo.ha.HdfsServiceTracker$PingChecker.run(HdfsServiceTracker.java:374)
        at java.lang.Thread.run(Thread.java:745)
2015-08-18 06:00:53,680 INFO: BlockStateChange (logAddStoredBlock(2624)) - 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:54368 is added to 
blk_1073741857_1033{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-4e825a18-4489-4bcd-87b7-5ffefccc8269:NORMAL:127.0.0.1:54368|RBW]]}
 size 3246361
2015-08-18 06:00:53,680 WARN: org.apache.hadoop.hdfs.DFSClient 
(flushOrSync(2047)) - Error while syncing
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
 No lease on /tajo/system/ha/active/127.0.0.1_39047 (inode 29786): File does 
not exist. Holder DFSClient_NONMAPREDUCE_-2056499274_1 does not have any open 
files.
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3433)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.fsync(FSNamesystem.java:3998)
        at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.fsync(NameNodeRpcServer.java:1210)
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.fsync(ClientNamenodeProtocolServerSideTranslatorPB.java:903)
        at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2045)

        at org.apache.hadoop.ipc.Client.call(Client.java:1476)
        at org.apache.hadoop.ipc.Client.call(Client.java:1407)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
        at com.sun.proxy.$Proxy30.fsync(Unknown Source)
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.fsync(ClientNamenodeProtocolTranslatorPB.java:838)
        at sun.reflect.GeneratedMethodAccessor68.invoke(Unknown Source)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy31.fsync(Unknown Source)
        at sun.reflect.GeneratedMethodAccessor68.invoke(Unknown Source)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
        at com.sun.proxy.$Proxy74.fsync(Unknown Source)
        at sun.reflect.GeneratedMethodAccessor68.invoke(Unknown Source)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
        at com.sun.proxy.$Proxy74.fsync(Unknown Source)
        at 
org.apache.hadoop.hdfs.DFSOutputStream.flushOrSync(DFSOutputStream.java:2022)
        at 
org.apache.hadoop.hdfs.DFSOutputStream.hsync(DFSOutputStream.java:1898)
        at 
org.apache.hadoop.fs.FSDataOutputStream.hsync(FSDataOutputStream.java:139)
        at 
org.apache.tajo.ha.HdfsServiceTracker.createMasterFile(HdfsServiceTracker.java:244)
        at 
org.apache.tajo.ha.HdfsServiceTracker.register(HdfsServiceTracker.java:155)
        at 
org.apache.tajo.ha.HdfsServiceTracker$PingChecker.run(HdfsServiceTracker.java:374)
        at java.lang.Thread.run(Thread.java:745)
2015-08-18 06:00:53,675 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(173)) - Rpc (QueryMasterClientProtocol) listened on 
0:0:0:0:0:0:0:0:30204) shutdown
2015-08-18 06:00:53,681 INFO: org.apache.tajo.worker.TajoWorkerClientService 
(stop(103)) - TajoWorkerClientService stopped
2015-08-18 06:00:53,681 INFO: BlockStateChange 
(processAndHandleReportedBlock(3171)) - BLOCK* addBlock: block 
blk_1073748708_7884 on node 127.0.0.1:54368 size 134217728 does not belong to 
any file
2015-08-18 06:00:53,682 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(173)) - Rpc (TajoWorkerProtocol) listened on 0:0:0:0:0:0:0:0:30203) 
shutdown
2015-08-18 06:00:53,682 INFO: org.apache.tajo.worker.TajoWorkerManagerService 
(serviceStop(93)) - TajoWorkerManagerService stopped
2015-08-18 06:00:53,683 INFO: BlockStateChange (add(115)) - BLOCK* 
InvalidateBlocks: add blk_1073748708_7884 to 127.0.0.1:54368
2015-08-18 06:00:53,685 INFO: org.apache.tajo.worker.TajoWorker 
(serviceStop(377)) - TajoWorker main thread exiting
2015-08-18 06:00:53,684 WARN: org.apache.hadoop.hdfs.DFSClient 
(closeResponder(612)) - Caught exception 
java.lang.InterruptedException
        at java.lang.Object.wait(Native Method)
        at java.lang.Thread.join(Thread.java:1245)
        at java.lang.Thread.join(Thread.java:1319)
        at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:610)
        at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeInternal(DFSOutputStream.java:578)
        at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:574)
2015-08-18 06:00:53,693 WARN: org.apache.tajo.rpc.NettyClientBase 
(doReconnect(198)) - Exception 
[org.apache.tajo.ipc.TajoMasterClientProtocol(/127.0.0.1:30200)]: 
ClosedChannelException:  Try to reconnect : /127.0.0.1:30200
2015-08-18 06:00:53,693 ERROR: org.apache.hadoop.hdfs.server.datanode.DataNode 
(run(278)) - 127.0.0.1:54368:DataXceiver error processing WRITE_BLOCK operation 
 src: /127.0.0.1:52082 dst: /127.0.0.1:54368
java.io.IOException: Premature EOF from inputStream
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:201)
        at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
        at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
        at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
        at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:472)
        at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:849)
        at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:804)
        at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
        at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
        at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
        at java.lang.Thread.run(Thread.java:745)
2015-08-18 06:00:53,746 INFO: BlockStateChange (invalidateWorkForOneNode(3488)) 
- BLOCK* BlockManager: ask 127.0.0.1:54368 to delete [blk_1073748705_7881, 
blk_1073741825_1001, blk_1073748706_7882, blk_1073748707_7883, 
blk_1073748708_7884]
2015-08-18 06:00:54,082 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(173)) - Rpc (QueryCoordinatorProtocol) listened on 127.0.0.1:30201) 
shutdown
2015-08-18 06:00:54,082 INFO: org.apache.tajo.util.history.HistoryCleaner 
(run(136)) - History cleaner stopped
2015-08-18 06:00:54,082 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(173)) - Rpc (TajoMasterClientProtocol) listened on 127.0.0.1:30200) 
shutdown
2015-08-18 06:00:54,089 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(173)) - Rpc (TajoResourceTrackerProtocol) listened on 
127.0.0.1:30198) shutdown
2015-08-18 06:00:54,089 INFO: org.apache.tajo.master.TajoMaster 
(serviceStop(406)) - Tajo Master main thread exiting
2015-08-18 06:00:54,694 WARN: org.apache.tajo.rpc.NettyClientBase 
(doReconnect(198)) - Exception 
[org.apache.tajo.ipc.TajoMasterClientProtocol(/127.0.0.1:30200)]: 
ClosedChannelException:  Try to reconnect : /127.0.0.1:30200
2015-08-18 06:00:55,699 WARN: org.apache.tajo.rpc.NettyClientBase 
(doReconnect(198)) - Exception 
[org.apache.tajo.ipc.TajoMasterClientProtocol(/127.0.0.1:30200)]: 
ConnectException: Connection refused: /127.0.0.1:30200 Try to reconnect : 
/127.0.0.1:30200
2015-08-18 06:00:56,702 WARN: org.apache.tajo.rpc.NettyClientBase 
(doReconnect(198)) - Exception 
[org.apache.tajo.ipc.TajoMasterClientProtocol(/127.0.0.1:30200)]: 
ConnectException: Connection refused: /127.0.0.1:30200 Try to reconnect : 
/127.0.0.1:30200

Results :

Failed tests: 
  TestTajoClientV2.testExecuteQueryAsyncWithListener:191 null
  TestDDLBuilder.testBuildDDLForExternalTable:64 expected:<...) USING TEXT WITH 
('[compression.codec'='org.apache.hadoop.io.compress.GzipCodec', 
'text.delimiter'='|]') PARTITION BY COLU...> but was:<...) USING TEXT WITH 
('[text.delimiter'='|', 
'compression.codec'='org.apache.hadoop.io.compress.GzipCodec]') PARTITION BY 
COLU...>
  TestDDLBuilder.testBuildDDLForBaseTable:103 expected:<...) USING TEXT WITH 
('[compression.codec'='org.apache.hadoop.io.compress.GzipCodec', 
'text.delimiter'='|]');> but was:<...) USING TEXT WITH ('[text.delimiter'='|', 
'compression.codec'='org.apache.hadoop.io.compress.GzipCodec]');>
  TestDDLBuilder.testBuildDDLQuotedTableName:90 expected:<...) USING TEXT WITH 
('[compression.codec'='org.apache.hadoop.io.compress.GzipCodec', 
'text.delimiter'='|]') PARTITION BY COLU...> but was:<...) USING TEXT WITH 
('[text.delimiter'='|', 
'compression.codec'='org.apache.hadoop.io.compress.GzipCodec]') PARTITION BY 
COLU...>
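The three TestDDLBuilder failures above differ only in the order of the table properties inside the WITH (...) clause: the expected string lists 'compression.codec' before 'text.delimiter', while the actual output reverses them. That pattern is what you see when generated DDL iterates a map whose iteration order is unspecified and the test compares the string verbatim. A minimal illustration (not Tajo's actual code; DdlPropertyOrder and withClause are hypothetical names) of how sorting the keys makes the emitted clause deterministic:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.TreeMap;

public class DdlPropertyOrder {

    // Render a WITH (...) clause from table properties, in the map's
    // iteration order. If that order is unspecified (e.g. HashMap), the
    // emitted DDL string is not stable across runs or JDK versions.
    static String withClause(Map<String, String> props) {
        StringBuilder sb = new StringBuilder("WITH (");
        boolean first = true;
        for (Map.Entry<String, String> e : props.entrySet()) {
            if (!first) sb.append(", ");
            sb.append('\'').append(e.getKey()).append("'='")
              .append(e.getValue()).append('\'');
            first = false;
        }
        return sb.append(')').toString();
    }

    public static void main(String[] args) {
        // Insertion order matching the "but was:" output above.
        Map<String, String> insertion = new LinkedHashMap<>();
        insertion.put("text.delimiter", "|");
        insertion.put("compression.codec",
                "org.apache.hadoop.io.compress.GzipCodec");

        // TreeMap sorts keys, so 'compression.codec' comes first --
        // matching the "expected:" string and making the DDL deterministic.
        Map<String, String> sorted = new TreeMap<>(insertion);

        System.out.println(withClause(insertion));
        System.out.println(withClause(sorted));
    }
}
```

Either sorting the properties before rendering or comparing parsed property sets (rather than raw strings) in the test would remove the order dependence.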

Tests in error: 
  TestHAServiceHDFSImpl.testAutoFailOver:82->verifyDataBaseAndTable:152 » 
TajoInternal

Tests run: 1691, Failures: 4, Errors: 1, Skipped: 0

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Tajo Main ......................................... SUCCESS [  2.819 s]
[INFO] Tajo Project POM .................................. SUCCESS [  2.507 s]
[INFO] Tajo Maven Plugins ................................ SUCCESS [  3.717 s]
[INFO] Tajo Common ....................................... SUCCESS [ 24.543 s]
[INFO] Tajo Algebra ...................................... SUCCESS [  2.543 s]
[INFO] Tajo Catalog Common ............................... SUCCESS [  4.550 s]
[INFO] Tajo Plan ......................................... SUCCESS [  7.114 s]
[INFO] Tajo Rpc Common ................................... SUCCESS [  1.353 s]
[INFO] Tajo Protocol Buffer Rpc .......................... SUCCESS [01:26 min]
[INFO] Tajo Catalog Client ............................... SUCCESS [  1.317 s]
[INFO] Tajo Catalog Server ............................... SUCCESS [01:16 min]
[INFO] Tajo Storage Common ............................... SUCCESS [  8.385 s]
[INFO] Tajo HDFS Storage ................................. SUCCESS [ 56.144 s]
[INFO] Tajo PullServer ................................... SUCCESS [  1.089 s]
[INFO] Tajo Client ....................................... SUCCESS [  2.487 s]
[INFO] Tajo CLI tools .................................... SUCCESS [  2.291 s]
[INFO] Tajo JDBC Driver .................................. SUCCESS [  2.857 s]
[INFO] ASM (thirdparty) .................................. SUCCESS [  1.816 s]
[INFO] Tajo RESTful Container ............................ SUCCESS [  4.264 s]
[INFO] Tajo Metrics ...................................... SUCCESS [  1.513 s]
[INFO] Tajo Core ......................................... SUCCESS [  8.860 s]
[INFO] Tajo RPC .......................................... SUCCESS [  0.927 s]
[INFO] Tajo Catalog Drivers Hive ......................... SUCCESS [ 26.611 s]
[INFO] Tajo Catalog Drivers .............................. SUCCESS [  0.087 s]
[INFO] Tajo Catalog ...................................... SUCCESS [  1.002 s]
[INFO] Tajo HBase Storage ................................ SUCCESS [  3.959 s]
[INFO] Tajo Storage ...................................... SUCCESS [  0.980 s]
[INFO] Tajo Distribution ................................. SUCCESS [  5.303 s]
[INFO] Tajo Cluster Tests ................................ SUCCESS [  2.405 s]
[INFO] Tajo Core Tests ................................... FAILURE [21:45 min]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 27:31 min
[INFO] Finished at: 2015-08-18T06:00:58+00:00
[INFO] Final Memory: 139M/1890M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project tajo-core-tests: There are test failures.
[ERROR] 
[ERROR] Please refer to 
<https://builds.apache.org/job/Tajo-master-jdk8-nightly/ws/tajo-core-tests/target/surefire-reports>
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :tajo-core-tests
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Tajo-master-jdk8-nightly #82
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 58864379 bytes
Compression is 0.0%
Took 19 sec
Recording test results