[jira] [Closed] (ASTERIXDB-2051) Variable not found in a complex group-by query
[ https://issues.apache.org/jira/browse/ASTERIXDB-2051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yingyi Bu closed ASTERIXDB-2051.
--------------------------------
    Resolution: Fixed

Fixed with a test case.

> Variable not found in a complex group-by query
> ----------------------------------------------
>
>                 Key: ASTERIXDB-2051
>                 URL: https://issues.apache.org/jira/browse/ASTERIXDB-2051
>             Project: Apache AsterixDB
>          Issue Type: Bug
>          Components: COMP - Compiler
>            Reporter: Yingyi Bu
>            Assignee: Yingyi Bu
>
> {noformat}
> DROP DATAVERSE tpch IF EXISTS;
> CREATE dataverse tpch;
> USE tpch;
>
> CREATE TYPE LineItemType AS CLOSED {
>   l_orderkey : integer,
>   l_partkey : integer,
>   l_suppkey : integer,
>   l_linenumber : integer,
>   l_quantity : double,
>   l_extendedprice : double,
>   l_discount : double,
>   l_tax : double,
>   l_returnflag : string,
>   l_linestatus : string,
>   l_shipdate : string,
>   l_commitdate : string,
>   l_receiptdate : string,
>   l_shipinstruct : string,
>   l_shipmode : string,
>   l_comment : string
> }
>
> CREATE DATASET LineItem(LineItemType) PRIMARY KEY l_orderkey, l_linenumber;
>
> SELECT l_returnflag AS l_returnflag,
>        l_linestatus AS l_linestatus,
>        coll_count(cheap) AS count_cheaps,
>        coll_count(expensive) AS count_expensives
> FROM LineItem AS l
> /* +hash */
> GROUP BY l.l_returnflag AS l_returnflag, l.l_linestatus AS l_linestatus
> GROUP AS g
> LET cheap = (
>       SELECT ELEMENT g.l
>       FROM g
>       WHERE g.l.l_discount > 0.05
>     ),
>     expensive = (
>       SELECT ELEMENT m
>       FROM (FROM g SELECT VALUE l) AS m
>       WHERE m.l_discount <= 0.05
>     )
> ORDER BY l_returnflag, l_linestatus
> ;
> {noformat}

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
[jira] [Reopened] (ASTERIXDB-1812) OutofMemoryError when group by on a non-existing field with 300k records (tweets)
[ https://issues.apache.org/jira/browse/ASTERIXDB-1812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yingyi Bu reopened ASTERIXDB-1812:
----------------------------------
    Assignee: Yingyi Bu

With the default group-by memory budget, i.e., 32MB, we currently get the following error message: "msg": "IllegalArgumentException: Buffer is too large...". The 32MB size is hard-coded in ByteArrayAccessibleOutputStream, which makes the query not runnable no matter how large a group-by memory budget the user sets.

> OutofMemoryError when group by on a non-existing field with 300k records (tweets)
> ---------------------------------------------------------------------------------
>
>                 Key: ASTERIXDB-1812
>                 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1812
>             Project: Apache AsterixDB
>          Issue Type: Bug
>          Components: *DB - AsterixDB, HYR - Hyracks
>         Environment: Linux 16.04
> Asterix 0.9.0 with 2 nc nodes and 1 cc node (all using default configurations from https://asterixdb.apache.org/docs/0.9.0/install.html#Section1SingleMachineAsterixDBInstallation)
>            Reporter: Chen Luo
>            Assignee: Yingyi Bu
>
> The dataset is a sample tweet dataset provided by Cloudberry, which contains 324000 tweets (about 300M). When issuing the following query, I always get an OutOfMemoryError.
> Query:
> {code}
> select * from twitter.ds_tweet t
> group by t.test;
> {code}
> Stacktrace:
> {code}
> org.apache.hyracks.api.exceptions.HyracksException: Job failed on account of: HYR0003: java.lang.OutOfMemoryError: Java heap space
>     at org.apache.hyracks.control.cc.job.JobRun.waitForCompletion(JobRun.java:211)
>     at org.apache.hyracks.control.cc.work.WaitForJobCompletionWork$1.run(WaitForJobCompletionWork.java:48)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hyracks.api.exceptions.HyracksDataException: HYR0003: java.lang.OutOfMemoryError: Java heap space
>     at org.apache.hyracks.control.common.utils.ExceptionUtils.setNodeIds(ExceptionUtils.java:62)
>     at org.apache.hyracks.control.nc.Task.run(Task.java:330)
>     ... 3 more
> Caused by: org.apache.hyracks.api.exceptions.HyracksDataException: java.lang.OutOfMemoryError: Java heap space
>     at org.apache.hyracks.api.rewriter.runtime.SuperActivityOperatorNodePushable.runInParallel(SuperActivityOperatorNodePushable.java:228)
>     at org.apache.hyracks.api.rewriter.runtime.SuperActivityOperatorNodePushable.initialize(SuperActivityOperatorNodePushable.java:84)
>     at org.apache.hyracks.control.nc.Task.run(Task.java:273)
>     ... 3 more
> Caused by: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
>     at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>     at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>     at org.apache.hyracks.api.rewriter.runtime.SuperActivityOperatorNodePushable.runInParallel(SuperActivityOperatorNodePushable.java:222)
>     ... 5 more
> Caused by: java.lang.OutOfMemoryError: Java heap space
>     at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
>     at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
>     at org.apache.hyracks.control.nc.resources.memory.FrameManager.allocateFrame(FrameManager.java:57)
>     at org.apache.hyracks.control.nc.resources.memory.FrameManager.reallocateFrame(FrameManager.java:73)
>     at org.apache.hyracks.control.nc.Joblet.reallocateFrame(Joblet.java:242)
>     at org.apache.hyracks.control.nc.Task.reallocateFrame(Task.java:136)
>     at org.apache.hyracks.api.comm.VSizeFrame.ensureFrameSize(VSizeFrame.java:53)
>     at org.apache.hyracks.dataflow.common.comm.io.AbstractFrameAppender.canHoldNewTuple(AbstractFrameAppender.java:104)
>     at org.apache.hyracks.dataflow.common.comm.io.FrameTupleAppender.append(FrameTupleAppender.java:49)
>     at org.apache.hyracks.dataflow.common.comm.util.FrameUtils.appendToWriter(FrameUtils.java:159)
>     at org.apache.hyracks.algebricks.runtime.operators.base.AbstractOneInputOneOutputOneFramePushRuntime.appendToFrameFromTupleBuilder(AbstractOneInputOneOutputOneFramePushRuntime.java:82)
>     at org.apache.hyracks.algebricks.runtime.operators.base.AbstractOneInputOneOutputOneFramePushRuntime.appendToFrameFromTupleBuilder(AbstractOneInputOneOutputOneFramePushRuntime.java:78)
>     at org.apache.hyracks.algebricks.runtime.operators.std.AssignRuntimeFactory$1.nextFrame(AssignRuntimeFactory.java:150)
>     at org.apache.hyracks.algebricks.runtime.operators.meta.AlgebricksMetaOperatorDescriptor$2.nextFrame(AlgebricksMetaOperatorDescriptor.java:134)
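The failure mode described in the reopen comment (a hard-coded buffer cap that no configured memory budget can override) can be illustrated with a minimal sketch. This is not the actual Hyracks ByteArrayAccessibleOutputStream code; the class name, cap, and growth logic below are assumptions for illustration only:

```java
import java.util.Arrays;

// Hypothetical sketch of a growable buffer with a compile-time maximum size.
// Because the cap is a constant checked here, raising the group-by memory
// budget elsewhere in the system cannot prevent the exception.
class CappedByteBuffer {
    // Hard-coded cap, analogous to the 32MB limit described in the report.
    static final int MAX_SIZE = 32 * 1024 * 1024;

    private byte[] data = new byte[1024];
    private int length = 0;

    void write(byte[] bytes) {
        int required = length + bytes.length;
        if (required > MAX_SIZE) {
            // The failure path the user sees, regardless of configured budget.
            throw new IllegalArgumentException("Buffer is too large: " + required);
        }
        if (required > data.length) {
            // Grow geometrically, but never beyond the hard-coded cap.
            data = Arrays.copyOf(data, Math.min(MAX_SIZE, Math.max(required, data.length * 2)));
        }
        System.arraycopy(bytes, 0, data, length, bytes.length);
        length += bytes.length;
    }

    int size() {
        return length;
    }
}
```

A fix along the lines suggested by the comment would replace the constant with a limit derived from the configured group-by memory budget.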
[jira] [Resolved] (ASTERIXDB-993) let NOSQL doc share the same source with AQL 101
[ https://issues.apache.org/jira/browse/ASTERIXDB-993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yingyi Bu resolved ASTERIXDB-993.
---------------------------------
    Resolution: Won't Fix

> let NOSQL doc share the same source with AQL 101
> ------------------------------------------------
>
>                 Key: ASTERIXDB-993
>                 URL: https://issues.apache.org/jira/browse/ASTERIXDB-993
>             Project: Apache AsterixDB
>          Issue Type: Improvement
>          Components: *DB - AsterixDB, DOC - Documentation
>            Reporter: asterixdb-importer
>            Assignee: Yingyi Bu
>            Priority: Minor
>
> let NOSQL doc share the same source with AQL 101
[jira] [Resolved] (ASTERIXDB-1129) Support renaming group variables in AQL
[ https://issues.apache.org/jira/browse/ASTERIXDB-1129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yingyi Bu resolved ASTERIXDB-1129.
----------------------------------
    Resolution: Won't Fix

> Support renaming group variables in AQL
> ---------------------------------------
>
>                 Key: ASTERIXDB-1129
>                 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1129
>             Project: Apache AsterixDB
>          Issue Type: Improvement
>          Components: *DB - AsterixDB, AQL - Translator AQL
>            Reporter: Yingyi Bu
>            Assignee: Yingyi Bu
>            Priority: Minor
[jira] [Resolved] (ASTERIXDB-982) Add comma option in for clause
[ https://issues.apache.org/jira/browse/ASTERIXDB-982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yingyi Bu resolved ASTERIXDB-982.
---------------------------------
    Resolution: Won't Fix

> Add comma option in for clause
> ------------------------------
>
>                 Key: ASTERIXDB-982
>                 URL: https://issues.apache.org/jira/browse/ASTERIXDB-982
>             Project: Apache AsterixDB
>          Issue Type: Improvement
>          Components: *DB - AsterixDB, AQL - Translator AQL
>            Reporter: asterixdb-importer
>            Assignee: Yingyi Bu
>            Priority: Minor
>
> Add comma option in for clause
[jira] [Resolved] (ASTERIXDB-698) Add range syntax into AQL to replace the range() function
[ https://issues.apache.org/jira/browse/ASTERIXDB-698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yingyi Bu resolved ASTERIXDB-698.
---------------------------------
    Resolution: Won't Fix

> Add range syntax into AQL to replace the range() function
> ---------------------------------------------------------
>
>                 Key: ASTERIXDB-698
>                 URL: https://issues.apache.org/jira/browse/ASTERIXDB-698
>             Project: Apache AsterixDB
>          Issue Type: Improvement
>          Components: *DB - AsterixDB, AQL - Translator AQL
>            Reporter: JArod Wen
>            Assignee: Yingyi Bu
>            Priority: Minor
>
> Add range syntax into AQL to replace the range() function
[jira] [Resolved] (ASTERIXDB-699) Replace or remove switch-case() function
[ https://issues.apache.org/jira/browse/ASTERIXDB-699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yingyi Bu resolved ASTERIXDB-699.
---------------------------------
    Resolution: Won't Fix

> Replace or remove switch-case() function
> ----------------------------------------
>
>                 Key: ASTERIXDB-699
>                 URL: https://issues.apache.org/jira/browse/ASTERIXDB-699
>             Project: Apache AsterixDB
>          Issue Type: Improvement
>          Components: *DB - AsterixDB, AQL - Translator AQL
>            Reporter: JArod Wen
>            Assignee: Yingyi Bu
>            Priority: Minor
>
> Replace or remove switch-case() function
[jira] [Assigned] (ASTERIXDB-1997) Rename JobStatus.FAILURE_BEFORE_EXECUTION
[ https://issues.apache.org/jira/browse/ASTERIXDB-1997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yingyi Bu reassigned ASTERIXDB-1997:
------------------------------------
    Assignee: Till  (was: Yingyi Bu)

> Rename JobStatus.FAILURE_BEFORE_EXECUTION
> -----------------------------------------
>
>                 Key: ASTERIXDB-1997
>                 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1997
>             Project: Apache AsterixDB
>          Issue Type: Task
>          Components: HYR - Hyracks
>            Reporter: Till
>            Assignee: Till
>
> The JobStatus FAILURE_BEFORE_EXECUTION was not very clear to me. Looking at the code, it seemed that REJECTED (or something similar) would describe the current use of the status better.
> Also, we should consider extending IJobCapacityController.JobSubmissionStatus with a REJECT state to report rejection (instead of transporting the information through an exception).
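The ticket's suggestion of returning a REJECT status instead of throwing can be sketched as follows. The class name, method, and parameters are illustrative assumptions, not the actual Hyracks IJobCapacityController API:

```java
// Sketch of admission control that reports rejection through a status value
// rather than an exception, as the ticket proposes.
class JobAdmission {
    // Hypothetical extension of JobSubmissionStatus with a REJECT state.
    enum JobSubmissionStatus {
        EXECUTE, // enough free capacity: run the job now
        QUEUE,   // not enough free capacity yet: queue the job
        REJECT   // the job can never fit within total cluster capacity
    }

    static JobSubmissionStatus admit(long requiredMemory, long clusterCapacity, long availableMemory) {
        if (requiredMemory > clusterCapacity) {
            // Previously this case would be signaled via an exception.
            return JobSubmissionStatus.REJECT;
        }
        return requiredMemory <= availableMemory
                ? JobSubmissionStatus.EXECUTE
                : JobSubmissionStatus.QUEUE;
    }
}
```

The design benefit is that callers handle rejection as a normal control-flow case instead of catching an exception that carries scheduling information.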
[jira] [Assigned] (ASTERIXDB-1954) RebalanceWithCancellation Test Failed
[ https://issues.apache.org/jira/browse/ASTERIXDB-1954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yingyi Bu reassigned ASTERIXDB-1954:
------------------------------------
    Assignee: Murtadha Hubail  (was: Yingyi Bu)

> RebalanceWithCancellation Test Failed
> -------------------------------------
>
>                 Key: ASTERIXDB-1954
>                 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1954
>             Project: Apache AsterixDB
>          Issue Type: Bug
>          Components: *DB - AsterixDB, STO - Storage
>            Reporter: Chen Luo
>            Assignee: Murtadha Hubail
>
> The RebalanceWithCancellation test failed with recent changes. The build is at:
> https://asterix-jenkins.ics.uci.edu/job/asterix-gerrit-verify-asterix-app/924/
> The stacktrace is:
> {code}
> org.apache.hyracks.api.exceptions.HyracksDataException: HYR0081: File /home/jenkins/jenkins/workspace/asterix-gerrit-verify-asterix-app/asterixdb/asterix-app/target/io/dir/asterix_nc2/iodevice0/storage/partition_2/tpch/2/LineItem_idx_LineItem_virtual_0 is already mapped
>     at org.apache.hyracks.api.exceptions.HyracksDataException.create(HyracksDataException.java:49)
>     at org.apache.hyracks.storage.common.file.FileMapManager.registerFile(FileMapManager.java:76)
>     at org.apache.hyracks.storage.am.lsm.common.impls.VirtualBufferCache.createFile(VirtualBufferCache.java:79)
>     at org.apache.hyracks.storage.am.lsm.common.impls.MultitenantVirtualBufferCache.createFile(MultitenantVirtualBufferCache.java:49)
>     at org.apache.hyracks.storage.am.common.impls.AbstractTreeIndex.create(AbstractTreeIndex.java:83)
>     at org.apache.hyracks.storage.am.lsm.btree.impls.LSMBTree.allocateMemoryComponent(LSMBTree.java:602)
>     at org.apache.hyracks.storage.am.lsm.common.impls.AbstractLSMIndex.allocateMemoryComponents(AbstractLSMIndex.java:386)
>     at org.apache.hyracks.storage.am.lsm.common.impls.LSMHarness.enter(LSMHarness.java:623)
>     at org.apache.hyracks.storage.am.lsm.common.impls.LSMHarness.batchOperate(LSMHarness.java:646)
>     at org.apache.hyracks.storage.am.lsm.common.impls.LSMTreeIndexAccessor.batchOperate(LSMTreeIndexAccessor.java:214)
>     at org.apache.asterix.runtime.operators.LSMPrimaryUpsertOperatorNodePushable.nextFrame(LSMPrimaryUpsertOperatorNodePushable.java:280)
>     at org.apache.hyracks.control.nc.Task.pushFrames(Task.java:376)
>     at org.apache.hyracks.control.nc.Task.run(Task.java:316)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
> Jun 24, 2017 6:18:34 PM org.apache.hyracks.control.common.work.WorkQueue$WorkerThread run
> INFO: Executing: NotifyTaskFailure
> Jun 24, 2017 6:18:34 PM org.apache.hyracks.control.nc.Task run
> WARNING: Task TAID:TID:ANID:ODID:2:0:3:0 failed with exception
> org.apache.hyracks.api.exceptions.HyracksDataException: HYR0081: File /home/jenkins/jenkins/workspace/asterix-gerrit-verify-asterix-app/asterixdb/asterix-app/target/io/dir/asterix_nc2/iodevice1/storage/partition_3/tpch/2/LineItem_idx_LineItem_virtual_0 is already mapped
>     at org.apache.hyracks.api.exceptions.HyracksDataException.create(HyracksDataException.java:49)
>     at org.apache.hyracks.storage.common.file.FileMapManager.registerFile(FileMapManager.java:76)
>     at org.apache.hyracks.storage.am.lsm.common.impls.VirtualBufferCache.createFile(VirtualBufferCache.java:79)
>     at org.apache.hyracks.storage.am.lsm.common.impls.MultitenantVirtualBufferCache.createFile(MultitenantVirtualBufferCache.java:49)
>     at org.apache.hyracks.storage.am.common.impls.AbstractTreeIndex.create(AbstractTreeIndex.java:83)
>     at org.apache.hyracks.storage.am.lsm.btree.impls.LSMBTree.allocateMemoryComponent(LSMBTree.java:602)
>     at org.apache.hyracks.storage.am.lsm.common.impls.AbstractLSMIndex.allocateMemoryComponents(AbstractLSMIndex.java:386)
>     at org.apache.hyracks.storage.am.lsm.common.impls.LSMHarness.enter(LSMHarness.java:623)
>     at org.apache.hyracks.storage.am.lsm.common.impls.LSMHarness.batchOperate(LSMHarness.java:646)
>     at org.apache.hyracks.storage.am.lsm.common.impls.LSMTreeIndexAccessor.batchOperate(LSMTreeIndexAccessor.java:214)
>     at org.apache.asterix.runtime.operators.LSMPrimaryUpsertOperatorNodePushable.nextFrame(LSMPrimaryUpsertOperatorNodePushable.java:280)
>     at org.apache.hyracks.control.nc.Task.pushFrames(Task.java:376)
>     at org.apache.hyracks.control.nc.Task.run(Task.java:316)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at
[jira] [Assigned] (ASTERIXDB-1974) Sporadic failure in rebalance cancelation
[ https://issues.apache.org/jira/browse/ASTERIXDB-1974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yingyi Bu reassigned ASTERIXDB-1974:
------------------------------------
    Assignee: Murtadha Hubail  (was: Yingyi Bu)

> Sporadic failure in rebalance cancelation
> -----------------------------------------
>
>                 Key: ASTERIXDB-1974
>                 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1974
>             Project: Apache AsterixDB
>          Issue Type: Bug
>          Components: CLUS - Cluster management
>            Reporter: Murtadha Hubail
>            Assignee: Murtadha Hubail
>
> The build was aborted after 80 mins. The following stack trace was found in the build output.
> {code:java}
> 02:04:31 Expected results file: src/test/resources/runtimets/results/rebalance/single_dataset_with_index/single_dataset_with_index.8.adm
> 02:04:32 java.lang.InterruptedException
> 02:04:32     at java.lang.Object.wait(Native Method)
> 02:04:32     at java.lang.Object.wait(Object.java:502)
> 02:04:32     at org.apache.hyracks.storage.common.buffercache.AsyncFIFOPageQueueManager.finishQueue(AsyncFIFOPageQueueManager.java:132)
> 02:04:32     at org.apache.hyracks.storage.common.buffercache.BufferCache.finishQueue(BufferCache.java:1386)
> 02:04:32     at org.apache.hyracks.storage.am.common.impls.AbstractTreeIndex$AbstractTreeIndexBulkLoader.end(AbstractTreeIndex.java:282)
> 02:04:32     at org.apache.hyracks.storage.am.btree.impls.BTree$BTreeBulkLoader.end(BTree.java:1182)
> 02:04:32     at org.apache.hyracks.storage.am.lsm.common.impls.AbstractLSMDiskComponentBulkLoader.end(AbstractLSMDiskComponentBulkLoader.java:157)
> 02:04:32     at org.apache.hyracks.storage.am.lsm.btree.impls.LSMBTreeBulkLoader.end(LSMBTreeBulkLoader.java:47)
> 02:04:32     at org.apache.hyracks.storage.am.common.dataflow.IndexBulkLoadOperatorNodePushable.close(IndexBulkLoadOperatorNodePushable.java:92)
> 02:04:32     at org.apache.hyracks.dataflow.std.sort.AbstractExternalSortRunMerger.process(AbstractExternalSortRunMerger.java:175)
> 02:04:32     at org.apache.hyracks.dataflow.std.sort.AbstractSorterOperatorDescriptor$MergeActivity$1.initialize(AbstractSorterOperatorDescriptor.java:181)
> 02:04:32     at org.apache.hyracks.api.rewriter.runtime.SuperActivityOperatorNodePushable.lambda$runInParallel$0(SuperActivityOperatorNodePushable.java:202)
> 02:04:32     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 02:04:32     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 02:04:32     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 02:04:32     at java.lang.Thread.run(Thread.java:745)
> 02:04:32 java.lang.InterruptedException
> 02:04:32     at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1220)
> 02:04:32     at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:335)
> 02:04:32     at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:339)
> 02:04:32     at org.apache.hyracks.storage.common.buffercache.AsyncFIFOPageQueueManager$PageQueue.put(AsyncFIFOPageQueueManager.java:63)
> 02:04:32     at org.apache.hyracks.storage.am.common.freepage.AppendOnlyLinkedMetadataPageManager.close(AppendOnlyLinkedMetadataPageManager.java:223)
> 02:04:32     at org.apache.hyracks.storage.am.lsm.common.impls.AbstractLSMIndex.markAsValidInternal(AbstractLSMIndex.java:394)
> 02:04:32     at org.apache.hyracks.storage.am.lsm.btree.impls.LSMBTree.markAsValid(LSMBTree.java:465)
> 02:04:32     at org.apache.hyracks.storage.am.lsm.common.impls.LSMHarness.addBulkLoadedComponent(LSMHarness.java:566)
> 02:04:32     at org.apache.hyracks.storage.am.lsm.btree.impls.LSMBTreeBulkLoader.end(LSMBTreeBulkLoader.java:53)
> 02:04:32     at org.apache.hyracks.storage.am.common.dataflow.IndexBulkLoadOperatorNodePushable.close(IndexBulkLoadOperatorNodePushable.java:92)
> 02:04:32     at org.apache.hyracks.dataflow.std.sort.AbstractExternalSortRunMerger.process(AbstractExternalSortRunMerger.java:175)
> 02:04:32     at org.apache.hyracks.dataflow.std.sort.AbstractSorterOperatorDescriptor$MergeActivity$1.initialize(AbstractSorterOperatorDescriptor.java:181)
> 02:04:32     at org.apache.hyracks.api.rewriter.runtime.SuperActivityOperatorNodePushable.lambda$runInParallel$0(SuperActivityOperatorNodePushable.java:202)
> 02:04:32     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 02:04:32     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 02:04:32     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 02:04:32     at java.lang.Thread.run(Thread.java:745)
> {code}
[jira] [Assigned] (ASTERIXDB-1948) Potential file leaks if crash happens during rebalance
[ https://issues.apache.org/jira/browse/ASTERIXDB-1948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yingyi Bu reassigned ASTERIXDB-1948:
------------------------------------
    Assignee: Murtadha Hubail  (was: Yingyi Bu)

> Potential file leaks if crash happens during rebalance
> ------------------------------------------------------
>
>                 Key: ASTERIXDB-1948
>                 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1948
>             Project: Apache AsterixDB
>          Issue Type: Bug
>          Components: CLUS - Cluster management
>            Reporter: Yingyi Bu
>            Assignee: Murtadha Hubail
>
> Refer to the rebalance design doc:
> https://cwiki.apache.org/confluence/display/ASTERIXDB/Rebalance+API+and+Internal+Implementation
> In the event of failures, there could be:
> -- leaked source files (from metadata transaction a), which will be reclaimed in the next rebalance operation,
> -- or leaked target files (from metadata transaction b), which will not be reclaimed,
> -- or a leaked node group name (from metadata transaction a), which doesn't prevent the success of the next rebalance operation.
[jira] [Updated] (ASTERIXDB-1923) Dataset id is not recycled
[ https://issues.apache.org/jira/browse/ASTERIXDB-1923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yingyi Bu updated ASTERIXDB-1923:
---------------------------------
    Priority: Minor  (was: Major)

> Dataset id is not recycled
> --------------------------
>
>                 Key: ASTERIXDB-1923
>                 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1923
>             Project: Apache AsterixDB
>          Issue Type: Bug
>            Reporter: Yingyi Bu
>            Assignee: Till
>            Priority: Minor
>
> Currently, dataset ids are not recycled when a dataset is dropped.
[jira] [Commented] (ASTERIXDB-1871) Sporadic open file leaks in CancellationTest
[ https://issues.apache.org/jira/browse/ASTERIXDB-1871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156047#comment-16156047 ]

Yingyi Bu commented on ASTERIXDB-1871:
--------------------------------------

The leak is from MaterializingPipelinedPartition.java.

> Sporadic open file leaks in CancellationTest
> --------------------------------------------
>
>                 Key: ASTERIXDB-1871
>                 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1871
>             Project: Apache AsterixDB
>          Issue Type: Bug
>          Components: HYR - Hyracks
>            Reporter: Yingyi Bu
>            Assignee: Dmitry Lychagin
>
> {noformat}
> Tests run: 1541, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 610.935 sec <<< FAILURE! - in org.apache.asterix.test.runtime.SqlppExecutionWithCancellationTest
> org.apache.asterix.test.runtime.SqlppExecutionWithCancellationTest  Time elapsed: 6.846 sec  <<< FAILURE!
> java.lang.AssertionError: There are 4 leaked run files.
>     at org.apache.asterix.test.runtime.LangExecutionUtil.checkOpenRunFileLeaks(LangExecutionUtil.java:166)
>     at org.apache.asterix.test.runtime.LangExecutionUtil.tearDown(LangExecutionUtil.java:78)
>     at org.apache.asterix.test.runtime.SqlppExecutionWithCancellationTest.tearDown(SqlppExecutionWithCancellationTest.java:53)
> [4/3/17, 9:10:23 PM] :
> java 23938 michaelblow 728u REG 1,4 0 112414077 /private/var/folders/5x/qdtntlds0fgcgknzwf61khvhgn/T/asterix_nc1/iodevice1/MaterializerTaskState4181303815132287300.waf
> java 23938 michaelblow 729u REG 1,4 0 112414078 /private/var/folders/5x/qdtntlds0fgcgknzwf61khvhgn/T/asterix_nc2/iodevice0/MaterializerTaskState559113667370394226.waf
> java 23938 michaelblow 730u REG 1,4 0 112414079 /private/var/folders/5x/qdtntlds0fgcgknzwf61khvhgn/T/asterix_nc1/iodevice0/MaterializerTaskState8296323294352675529.waf
> java 23938 michaelblow 731u REG 1,4 0 112414080 /private/var/folders/5x/qdtntlds0fgcgknzwf61khvhgn/T/asterix_nc2/iodevice1/MaterializerTaskState6759935260936501189.waf
> {noformat}
[jira] [Assigned] (ASTERIXDB-1923) Dataset id is not recycled
[ https://issues.apache.org/jira/browse/ASTERIXDB-1923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yingyi Bu reassigned ASTERIXDB-1923:
------------------------------------
    Assignee: Till  (was: Yingyi Bu)

> Dataset id is not recycled
> --------------------------
>
>                 Key: ASTERIXDB-1923
>                 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1923
>             Project: Apache AsterixDB
>          Issue Type: Bug
>            Reporter: Yingyi Bu
>            Assignee: Till
>
> Currently, dataset ids are not recycled when a dataset is dropped.
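One common way to address this kind of issue is to keep freed ids in a pool and reuse the smallest one before allocating a fresh id. The sketch below is purely illustrative; the class name and API are assumptions and not AsterixDB's actual metadata code:

```java
import java.util.PriorityQueue;

// Illustrative id factory that recycles ids of dropped datasets:
// allocate() prefers the smallest previously-released id, and only
// advances the high-water mark when the free pool is empty.
class DatasetIdFactory {
    private int nextId = 0;
    private final PriorityQueue<Integer> freed = new PriorityQueue<>();

    synchronized int allocate() {
        Integer recycled = freed.poll();
        return recycled != null ? recycled : nextId++;
    }

    synchronized void release(int id) {
        // Called when a dataset is dropped; its id becomes reusable.
        freed.add(id);
    }
}
```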
[jira] [Assigned] (ASTERIXDB-1848) Separating storage devices/directories and workspace devices/directories
[ https://issues.apache.org/jira/browse/ASTERIXDB-1848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yingyi Bu reassigned ASTERIXDB-1848:
------------------------------------
    Assignee: Murtadha Hubail  (was: Yingyi Bu)

> Separating storage devices/directories and workspace devices/directories
> -------------------------------------------------------------------------
>
>                 Key: ASTERIXDB-1848
>                 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1848
>             Project: Apache AsterixDB
>          Issue Type: Improvement
>            Reporter: Yingyi Bu
>            Assignee: Murtadha Hubail
>
> We need to separate the devices/directories for operator workspaces from the devices/directories for permanent data storage.
> A motivating scenario is AWS. There are two kinds of AWS storage:
> -- instance-level storage, i.e., local storage, which can be lost on failure and is recommended for temporary use, caching, etc.
> -- EBS, which has better persistence properties.
[jira] [Assigned] (ASTERIXDB-1871) Sporadic open file leaks in CancellationTest
[ https://issues.apache.org/jira/browse/ASTERIXDB-1871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yingyi Bu reassigned ASTERIXDB-1871:
------------------------------------
    Assignee: Dmitry Lychagin  (was: Yingyi Bu)

> Sporadic open file leaks in CancellationTest
> --------------------------------------------
>
>                 Key: ASTERIXDB-1871
>                 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1871
>             Project: Apache AsterixDB
>          Issue Type: Bug
>          Components: HYR - Hyracks
>            Reporter: Yingyi Bu
>            Assignee: Dmitry Lychagin
>
> {noformat}
> Tests run: 1541, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 610.935 sec <<< FAILURE! - in org.apache.asterix.test.runtime.SqlppExecutionWithCancellationTest
> org.apache.asterix.test.runtime.SqlppExecutionWithCancellationTest  Time elapsed: 6.846 sec  <<< FAILURE!
> java.lang.AssertionError: There are 4 leaked run files.
>     at org.apache.asterix.test.runtime.LangExecutionUtil.checkOpenRunFileLeaks(LangExecutionUtil.java:166)
>     at org.apache.asterix.test.runtime.LangExecutionUtil.tearDown(LangExecutionUtil.java:78)
>     at org.apache.asterix.test.runtime.SqlppExecutionWithCancellationTest.tearDown(SqlppExecutionWithCancellationTest.java:53)
> [4/3/17, 9:10:23 PM] :
> java 23938 michaelblow 728u REG 1,4 0 112414077 /private/var/folders/5x/qdtntlds0fgcgknzwf61khvhgn/T/asterix_nc1/iodevice1/MaterializerTaskState4181303815132287300.waf
> java 23938 michaelblow 729u REG 1,4 0 112414078 /private/var/folders/5x/qdtntlds0fgcgknzwf61khvhgn/T/asterix_nc2/iodevice0/MaterializerTaskState559113667370394226.waf
> java 23938 michaelblow 730u REG 1,4 0 112414079 /private/var/folders/5x/qdtntlds0fgcgknzwf61khvhgn/T/asterix_nc1/iodevice0/MaterializerTaskState8296323294352675529.waf
> java 23938 michaelblow 731u REG 1,4 0 112414080 /private/var/folders/5x/qdtntlds0fgcgknzwf61khvhgn/T/asterix_nc2/iodevice1/MaterializerTaskState6759935260936501189.waf
> {noformat}
[jira] [Assigned] (ASTERIXDB-1822) Need a "kill" button on the Web interface
[ https://issues.apache.org/jira/browse/ASTERIXDB-1822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yingyi Bu reassigned ASTERIXDB-1822:
------------------------------------
    Assignee: Xikui Wang  (was: Murtadha Hubail)

> Need a "kill" button on the Web interface
> -----------------------------------------
>
>                 Key: ASTERIXDB-1822
>                 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1822
>             Project: Apache AsterixDB
>          Issue Type: Improvement
>            Reporter: Michael J. Carey
>            Assignee: Xikui Wang
>
> It would be cool to hook up the new job-killing capability to the console...
[jira] [Assigned] (ASTERIXDB-1829) Failure handling in DML
[ https://issues.apache.org/jira/browse/ASTERIXDB-1829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yingyi Bu reassigned ASTERIXDB-1829:
------------------------------------
    Assignee: Abdullah Alamoudi  (was: Yingyi Bu)

> Failure handling in DML
> -----------------------
>
>                 Key: ASTERIXDB-1829
>                 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1829
>             Project: Apache AsterixDB
>          Issue Type: Bug
>          Components: STO - Storage
>            Reporter: Yingyi Bu
>            Assignee: Abdullah Alamoudi
>
> Currently, CREATE and DROP cannot be cancelled at any point:
> {noformat}
> Caused by: org.apache.hyracks.api.exceptions.HyracksDataException: Resource doesn't exist
>     at org.apache.asterix.transaction.management.resource.PersistentLocalResourceRepository.delete(PersistentLocalResourceRepository.java:230)
>     at org.apache.hyracks.storage.am.common.dataflow.IndexDataflowHelper.create(IndexDataflowHelper.java:93)
>     at org.apache.hyracks.storage.am.common.dataflow.IndexCreateOperatorNodePushable.initialize(IndexCreateOperatorNodePushable.java:53)
>     at org.apache.hyracks.api.rewriter.runtime.SuperActivityOperatorNodePushable.lambda$initialize$0(SuperActivityOperatorNodePushable.java:86)
>     at org.apache.hyracks.api.rewriter.runtime.SuperActivityOperatorNodePushable$$Lambda$91/60968292.runAction(Unknown Source)
>     at org.apache.hyracks.api.rewriter.runtime.SuperActivityOperatorNodePushable.lambda$runInParallel$2(SuperActivityOperatorNodePushable.java:216)
>     at org.apache.hyracks.api.rewriter.runtime.SuperActivityOperatorNodePushable$$Lambda$92/845876122.call(Unknown Source)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     ... 3 more
> {noformat}
[jira] [Assigned] (ASTERIXDB-1822) Need a "kill" button on the Web interface
[ https://issues.apache.org/jira/browse/ASTERIXDB-1822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yingyi Bu reassigned ASTERIXDB-1822:
------------------------------------
    Assignee: Murtadha Hubail  (was: Yingyi Bu)

> Need a "kill" button on the Web interface
> -----------------------------------------
>
>                 Key: ASTERIXDB-1822
>                 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1822
>             Project: Apache AsterixDB
>          Issue Type: Improvement
>            Reporter: Michael J. Carey
>            Assignee: Murtadha Hubail
>
> It would be cool to hook up the new job-killing capability to the console...
[jira] [Resolved] (ASTERIXDB-1821) Long running ORDER BY issue at U-Wash
[ https://issues.apache.org/jira/browse/ASTERIXDB-1821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yingyi Bu resolved ASTERIXDB-1821.
----------------------------------
    Resolution: Fixed

I think it is fixed by now, as we have added query queuing.

> Long running ORDER BY issue at U-Wash
> -------------------------------------
>
>                 Key: ASTERIXDB-1821
>                 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1821
>             Project: Apache AsterixDB
>          Issue Type: Bug
>          Components: *DB - AsterixDB, COMP - Compiler, FAIL - Failure handling/reporting, SQL - Translator SQL++
>         Environment: U-Washington
>            Reporter: Michael J. Carey
>            Assignee: Yingyi Bu
>
> From Dan Suciu:
> There is a bug in the ORDER BY clause; I can't pinpoint exactly what happens, but after repeatedly running queries with ORDER BY, eventually the server no longer responds to an ORDER BY query until it is restarted. I got lots of complaints from students who were using the shared server, since they couldn't restart the server themselves and couldn't run ORDER BY queries. If you want to debug, try running the queries in our homework repeatedly.
[jira] [Assigned] (ASTERIXDB-1815) Factor out shared part in cluster installation script
[ https://issues.apache.org/jira/browse/ASTERIXDB-1815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yingyi Bu reassigned ASTERIXDB-1815:
------------------------------------
    Assignee: Michael Blow  (was: Yingyi Bu)

> Factor out shared part in cluster installation script
> -----------------------------------------------------
>
>                 Key: ASTERIXDB-1815
>                 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1815
>             Project: Apache AsterixDB
>          Issue Type: Improvement
>          Components: *DB - AsterixDB
>            Reporter: Yingyi Bu
>            Assignee: Michael Blow
>
> Each script under opt/aws and opt/ansible can be run from anywhere and then resolves its own absolute path. It would be nice to have a script that only does the absolute path resolution and is shared by the other scripts, such as start.sh, stop.sh, erase.sh, and deploy.sh.
[jira] [Assigned] (ASTERIXDB-1788) JVM crash on AWS t2.micro instance
[ https://issues.apache.org/jira/browse/ASTERIXDB-1788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu reassigned ASTERIXDB-1788: Assignee: Michael Blow (was: Yingyi Bu) > JVM crash on AWS t2.micro instance > -- > > Key: ASTERIXDB-1788 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1788 > Project: Apache AsterixDB > Issue Type: Bug > Components: *DB - AsterixDB >Reporter: Yingyi Bu >Assignee: Michael Blow > Attachments: hs_err_pid3528.log, hs_err_pid3587.log > > > I have a 2-node AsterixDB instance on AWS, using the t2.micro configuration > (1 vcore + 1GB RAM + 8GB EBS disk). CCDriver runs on the same node as one > NCDriver. I loaded TPC-H datasets with a scale factor of 0.2 into the > instance, i.e., 200MB of data in total. > The JVM crashes while queries are running. Two JVM crash logs are attached. > Here are the crash dump files: > cat /tmp/hsperfdata_ec2-user/3265 > {noformat} > (binary JVM performance-counter dump omitted; not human-readable) > {noformat}
[jira] [Assigned] (ASTERIXDB-1814) Add the ability to erase all data/txnlog/log generated by an instance
[ https://issues.apache.org/jira/browse/ASTERIXDB-1814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu reassigned ASTERIXDB-1814: Assignee: Murtadha Hubail (was: Yingyi Bu) > Add the ability to erase all data/txnlog/log generated by an instance > - > > Key: ASTERIXDB-1814 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1814 > Project: Apache AsterixDB > Issue Type: Improvement > Components: *DB - AsterixDB >Reporter: Yingyi Bu >Assignee: Murtadha Hubail > > Currently, the ansible cluster script opt/ansible/bin/erase.sh only erases > the installation binary of an instance, but does not erase all > data/txnlog/log generated by an instance. It would be nice to add this > capability into erase.sh. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Resolved] (ASTERIXDB-1666) Sporadic execution test failure
[ https://issues.apache.org/jira/browse/ASTERIXDB-1666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu resolved ASTERIXDB-1666. -- Resolution: Cannot Reproduce > Sporadic execution test failure > --- > > Key: ASTERIXDB-1666 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1666 > Project: Apache AsterixDB > Issue Type: Bug > Components: HYR - Hyracks >Reporter: Yingyi Bu >Assignee: Yingyi Bu >Priority: Critical > > Sporadic execution test failure: > Error Message > Test > "src/test/resources/runtimets/queries/quantifiers/everysat_04/everysat_04.3.query.aql" > FAILED! > Stacktrace > java.lang.Exception: Test > "src/test/resources/runtimets/queries/quantifiers/everysat_04/everysat_04.3.query.aql" > FAILED! > at > org.apache.asterix.test.aql.TestExecutor.runScriptAndCompareWithResult(TestExecutor.java:145) > at > org.apache.asterix.test.aql.TestExecutor.executeTest(TestExecutor.java:704) > at > org.apache.asterix.test.aql.TestExecutor.executeTest(TestExecutor.java:921) > at > org.apache.asterix.test.runtime.ExecutionTest.test(ExecutionTest.java:126) > Standard Error > Expected results file: > src/test/resources/runtimets/results/quantifiers/everysat_04/everysat_04.1.adm > Actual results file: > target/rttest/results/quantifiers/everysat_04/everysat_04.1.adm > testFile > src/test/resources/runtimets/queries/quantifiers/everysat_04/everysat_04.3.query.aql > raised an exception: org.apache.asterix.test.base.ComparisonException: > Result for > src/test/resources/runtimets/queries/quantifiers/everysat_04/everysat_04.3.query.aql > changed at line 1: > < false > > > org.apache.asterix.test.base.ComparisonException: Result for > src/test/resources/runtimets/queries/quantifiers/everysat_04/everysat_04.3.query.aql > changed at line 1: > < false > > > at > org.apache.asterix.test.aql.TestExecutor.runScriptAndCompareWithResult(TestExecutor.java:145) > at > org.apache.asterix.test.aql.TestExecutor.executeTest(TestExecutor.java:704) > at > 
org.apache.asterix.test.aql.TestExecutor.executeTest(TestExecutor.java:921) > at > org.apache.asterix.test.runtime.ExecutionTest.test(ExecutionTest.java:126) > at sun.reflect.GeneratedMethodAccessor25.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) > at org.junit.runners.ParentRunner.run(ParentRunner.java:309) > at org.junit.runners.Suite.runChild(Suite.java:127) > at org.junit.runners.Suite.runChild(Suite.java:26) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.run(ParentRunner.java:309) > at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153) > at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124) > at > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200) > at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153) > at >
[jira] [Resolved] (ASTERIXDB-1681) Update HTTP API document
[ https://issues.apache.org/jira/browse/ASTERIXDB-1681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu resolved ASTERIXDB-1681. -- Resolution: Fixed Till has fixed that. > Update HTTP API document > > > Key: ASTERIXDB-1681 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1681 > Project: Apache AsterixDB > Issue Type: Task > Components: DOC - Documentation >Reporter: Yingyi Bu >Assignee: Yingyi Bu > > Update the HTTP API documentation to only use the new query service API. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (ASTERIXDB-1600) Support for date and string arithmetic
[ https://issues.apache.org/jira/browse/ASTERIXDB-1600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu reassigned ASTERIXDB-1600: Assignee: Till (was: Yingyi Bu) > Support for date and string arithmetic > -- > > Key: ASTERIXDB-1600 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1600 > Project: Apache AsterixDB > Issue Type: Improvement >Reporter: Vignesh Raghunathan >Assignee: Till > > {code} > drop dataverse sampdb if exists; > create dataverse sampdb; > use sampdb; > drop dataset samptable if exists; > drop type samptabletype if exists; > create type samptabletype as closed { > dt: date > }; > create type samptabletype2 as closed { > id: int64, > firstname: string, > lastname: string > }; > create dataset samptable(samptabletype) primary key dt; > create dataset samptable2(samptabletype2) primary key id; > select * > from samptable s1, samptable s2 > where s1.dt > s2.dt + 5; > select firstname + " " + lastname as fullname > from samptable2 > {code} > The above queries can't be expressed in sqlpp without support for date and > string type arithmetic. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (ASTERIXDB-1341) Defer the file path decision into NC
[ https://issues.apache.org/jira/browse/ASTERIXDB-1341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu reassigned ASTERIXDB-1341: Assignee: Abdullah Alamoudi (was: Yingyi Bu) > Defer the file path decision into NC > > > Key: ASTERIXDB-1341 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1341 > Project: Apache AsterixDB > Issue Type: Improvement >Reporter: Yingyi Bu >Assignee: Abdullah Alamoudi > > Currently, the decision about storage file paths is made in the compiler, > e.g., in StoragePathUtil.java. It would be nice to defer the decision to NCs > at runtime. In this way, index search, bulkload, and insert/delete will have > a fixed degree of parallelism, but in each NC they will take file paths from > the Dataset/Index lifecycle manager. > This brings the following benefits: > 1. the degree of parallelism can be different from the number of file paths > that storage-related operators work with; > 2. it avoids shipping all file paths (within the JobSpecification) to every > node for a query. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Reopened] (ASTERIXDB-1248) Exceptions not propagated well when multiple exceptions take place
[ https://issues.apache.org/jira/browse/ASTERIXDB-1248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu reopened ASTERIXDB-1248: -- > Exceptions not propagated well when multiple exceptions take place > -- > > Key: ASTERIXDB-1248 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1248 > Project: Apache AsterixDB > Issue Type: Bug > Components: *DB - AsterixDB, HYR - Hyracks >Reporter: Abdullah Alamoudi >Assignee: Yingyi Bu > > After opening an IFrameWriter, if an exception takes place during open() or > during nextFrame(), close() will be called. if close() also throws an > exception, the initial exception is lost and not propagated well. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Closed] (ASTERIXDB-1248) Exceptions not propagated well when multiple exceptions take place
[ https://issues.apache.org/jira/browse/ASTERIXDB-1248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu closed ASTERIXDB-1248. Resolution: Fixed > Exceptions not propagated well when multiple exceptions take place > -- > > Key: ASTERIXDB-1248 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1248 > Project: Apache AsterixDB > Issue Type: Bug > Components: *DB - AsterixDB, HYR - Hyracks >Reporter: Abdullah Alamoudi >Assignee: Yingyi Bu > > After opening an IFrameWriter, if an exception takes place during open() or > during nextFrame(), close() will be called. if close() also throws an > exception, the initial exception is lost and not propagated well. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Resolved] (ASTERIXDB-1248) Exceptions not propagated well when multiple exceptions take place
[ https://issues.apache.org/jira/browse/ASTERIXDB-1248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu resolved ASTERIXDB-1248. -- Resolution: Fixed > Exceptions not propagated well when multiple exceptions take place > -- > > Key: ASTERIXDB-1248 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1248 > Project: Apache AsterixDB > Issue Type: Bug > Components: *DB - AsterixDB, HYR - Hyracks >Reporter: Abdullah Alamoudi >Assignee: Yingyi Bu > > After opening an IFrameWriter, if an exception takes place during open() or > during nextFrame(), close() will be called. if close() also throws an > exception, the initial exception is lost and not propagated well. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
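A minimal Java sketch (not the actual Hyracks code) of the pattern the issue implies: when close() is called after open()/nextFrame() already failed, attach the secondary close() failure to the primary exception via Throwable.addSuppressed instead of letting it clobber the original error.

```java
import java.io.Closeable;
import java.io.IOException;

// Sketch only: preserving the primary failure from open()/nextFrame()
// when close() also throws, instead of losing the initial exception.
public class SuppressedCloseDemo {

    interface FrameBody { void run() throws IOException; }

    static void runAndClose(FrameBody body, Closeable writer) throws IOException {
        IOException primary = null;
        try {
            body.run();                           // stands in for open()/nextFrame()
        } catch (IOException e) {
            primary = e;
        }
        try {
            writer.close();                       // always attempt close()
        } catch (IOException closeFailure) {
            if (primary != null) {
                primary.addSuppressed(closeFailure);  // keep the first error
            } else {
                primary = closeFailure;
            }
        }
        if (primary != null) {
            throw primary;                        // the initial exception propagates
        }
    }

    public static void main(String[] args) {
        try {
            runAndClose(
                () -> { throw new IOException("failure in nextFrame"); },
                () -> { throw new IOException("failure in close"); });
        } catch (IOException e) {
            // The caller sees the original failure, with close() attached.
            System.out.println(e.getMessage());
            System.out.println(e.getSuppressed()[0].getMessage());
        }
    }
}
```

The caller receives "failure in nextFrame" as the thrown exception, with "failure in close" retrievable via getSuppressed().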
[jira] [Assigned] (ASTERIXDB-2051) Variable not found in a complex group-by query
[ https://issues.apache.org/jira/browse/ASTERIXDB-2051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu reassigned ASTERIXDB-2051: Assignee: Yingyi Bu (was: Dmitry Lychagin) > Variable not found in a complex group-by query > -- > > Key: ASTERIXDB-2051 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-2051 > Project: Apache AsterixDB > Issue Type: Bug > Components: COMP - Compiler >Reporter: Yingyi Bu >Assignee: Yingyi Bu > > {noformat} > DROP DATAVERSE tpch IF EXISTS; > CREATE dataverse tpch; > USE tpch; > CREATE TYPE LineItemType AS CLOSED { > l_orderkey : integer, > l_partkey : integer, > l_suppkey : integer, > l_linenumber : integer, > l_quantity : double, > l_extendedprice : double, > l_discount : double, > l_tax : double, > l_returnflag : string, > l_linestatus : string, > l_shipdate : string, > l_commitdate : string, > l_receiptdate : string, > l_shipinstruct : string, > l_shipmode : string, > l_comment : string > } > CREATE DATASET LineItem(LineItemType) PRIMARY KEY l_orderkey,l_linenumber; > SELECT l_returnflag AS l_returnflag, >l_linestatus AS l_linestatus, >coll_count(cheap) AS count_cheaps, >coll_count(expensive) AS count_expensives > FROM LineItem AS l > /* +hash */ > GROUP BY l.l_returnflag AS l_returnflag,l.l_linestatus AS l_linestatus > GROUP AS g > LET cheap = ( > SELECT ELEMENT g.l > FROM g > WHERE g.l.l_discount > 0.05 > ), > expensive = ( > SELECT ELEMENT m > FROM (FROM g SELECT VALUE l) AS m > WHERE m.l_discount <= 0.05 > ) > ORDER BY l_returnflag,l_linestatus > ; > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Closed] (ASTERIXDB-91) Word count example return incorrect result in fullstack_imru branch
[ https://issues.apache.org/jira/browse/ASTERIXDB-91?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu closed ASTERIXDB-91. -- Resolution: Won't Fix > Word count example return incorrect result in fullstack_imru branch > --- > > Key: ASTERIXDB-91 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-91 > Project: Apache AsterixDB > Issue Type: Bug > Components: HYR - Hyracks >Reporter: asterixdb-importer >Assignee: Yingyi Bu >Priority: Trivial > > Word count example return incorrect result in fullstack_imru branch -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Closed] (ASTERIXDB-88) Support for getting/setting control variables during stage execution
[ https://issues.apache.org/jira/browse/ASTERIXDB-88?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu closed ASTERIXDB-88. -- Resolution: Won't Fix It doesn't seem that we need this in the short term. Please re-open if we still need it. > Support for getting/setting control variables during stage execution > > > Key: ASTERIXDB-88 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-88 > Project: Apache AsterixDB > Issue Type: Improvement > Components: HYR - Hyracks >Reporter: Vinayak Borkar >Assignee: Yingyi Bu >Priority: Minor > > Support for getting/setting control variables during stage execution -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Closed] (ASTERIXDB-1276) Remove UnnestMapOperator
[ https://issues.apache.org/jira/browse/ASTERIXDB-1276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu closed ASTERIXDB-1276. Resolution: Won't Fix UnnestMap is a generalized form of index search. > Remove UnnestMapOperator > > > Key: ASTERIXDB-1276 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1276 > Project: Apache AsterixDB > Issue Type: Improvement > Components: RT - Runtime >Reporter: Abdullah Alamoudi >Assignee: Yingyi Bu >Priority: Minor > > UnnestMap operator is used temporarily until index operators are added to > Algebricks -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (ASTERIXDB-1012) About correlated-prefix merge policy behavior
[ https://issues.apache.org/jira/browse/ASTERIXDB-1012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16137595#comment-16137595 ] Yingyi Bu commented on ASTERIXDB-1012: -- [~luochen01], this issue should have been fixed already? > About correlated-prefix merge policy behavior > - > > Key: ASTERIXDB-1012 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1012 > Project: Apache AsterixDB > Issue Type: Bug > Components: *DB - AsterixDB, STO - Storage >Reporter: asterixdb-importer >Assignee: Chen Luo >Priority: Minor > Labels: soon > > About correlated-prefix merge policy behavior -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (ASTERIXDB-1012) About correlated-prefix merge policy behavior
[ https://issues.apache.org/jira/browse/ASTERIXDB-1012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu reassigned ASTERIXDB-1012: Assignee: Chen Luo (was: Yingyi Bu) > About correlated-prefix merge policy behavior > - > > Key: ASTERIXDB-1012 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1012 > Project: Apache AsterixDB > Issue Type: Bug > Components: *DB - AsterixDB, STO - Storage >Reporter: asterixdb-importer >Assignee: Chen Luo >Priority: Minor > Labels: soon > > About correlated-prefix merge policy behavior -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (ASTERIXDB-1558) Possible minor glitch in UNKNOWN value related predicates/handling
[ https://issues.apache.org/jira/browse/ASTERIXDB-1558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu reassigned ASTERIXDB-1558: Assignee: Dmitry Lychagin (was: Yingyi Bu) > Possible minor glitch in UNKNOWN value related predicates/handling > -- > > Key: ASTERIXDB-1558 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1558 > Project: Apache AsterixDB > Issue Type: Bug > Components: *DB - AsterixDB, AQL - Translator AQL >Reporter: Michael J. Carey >Assignee: Dmitry Lychagin >Priority: Minor > Labels: soon > > The following evaluates to TRUE: > { > 'project': 'AsterixDB', > 'members': [ 'vinayakb', 'dtabass', 'chenli', 'tsotras' ] > }.member IS MISSING; > As, desirably, does: > { > 'project': 'AsterixDB', > 'members': [ 'vinayakb', 'dtabass', 'chenli', 'tsotras' ] > }.member IS UNKNOWN; > But the following evaluates to NULL (and it seems to me that FALSE would be > the proper expected result): > { > 'project': 'AsterixDB', > 'members': [ 'vinayakb', 'dtabass', 'chenli', 'tsotras' ] > }.member IS NULL; > Of course, I could be MISSING something here, as a SQL++ newbie -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (ASTERIXDB-1038) Add hint to ignore particular indexes
[ https://issues.apache.org/jira/browse/ASTERIXDB-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu reassigned ASTERIXDB-1038: Assignee: Dmitry Lychagin (was: Yingyi Bu) > Add hint to ignore particular indexes > - > > Key: ASTERIXDB-1038 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1038 > Project: Apache AsterixDB > Issue Type: Improvement > Components: *DB - AsterixDB, COMP - Compiler >Reporter: asterixdb-importer >Assignee: Dmitry Lychagin >Priority: Minor > > Add hint to ignore particular indexes -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (ASTERIXDB-1647) Some logical operators' implementations of isMap don't match their behavior
[ https://issues.apache.org/jira/browse/ASTERIXDB-1647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu reassigned ASTERIXDB-1647: Assignee: Abdullah Alamoudi (was: Yingyi Bu) > Some logical operators' implementations of isMap don't match their behavior > - > > Key: ASTERIXDB-1647 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1647 > Project: Apache AsterixDB > Issue Type: Bug >Reporter: Abdullah Alamoudi >Assignee: Abdullah Alamoudi > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Closed] (ASTERIXDB-1655) Making AQL keyword case-insensitive
[ https://issues.apache.org/jira/browse/ASTERIXDB-1655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu closed ASTERIXDB-1655. Resolution: Won't Fix > Making AQL keyword case-insensitive > -- > > Key: ASTERIXDB-1655 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1655 > Project: Apache AsterixDB > Issue Type: Improvement > Components: AQL - Translator AQL >Reporter: Yingyi Bu >Assignee: Yingyi Bu > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (ASTERIXDB-1793) Support parameterization for different scale factor in asterix-benchmark
[ https://issues.apache.org/jira/browse/ASTERIXDB-1793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu reassigned ASTERIXDB-1793: Assignee: Xikui Wang (was: Yingyi Bu) > Support parameterization for different scale factor in asterix-benchmark > > > Key: ASTERIXDB-1793 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1793 > Project: Apache AsterixDB > Issue Type: Improvement > Components: *DB - AsterixDB >Reporter: Yingyi Bu >Assignee: Xikui Wang > > Support parameterization for TPC-H queries for various scale factors. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (ASTERIXDB-1500) Inject filters to eliminate null/missing join keys for equality joins
[ https://issues.apache.org/jira/browse/ASTERIXDB-1500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu reassigned ASTERIXDB-1500: Assignee: Dmitry Lychagin (was: Yingyi Bu) > Inject filters to eliminate null/missing join keys for equality joins > - > > Key: ASTERIXDB-1500 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1500 > Project: Apache AsterixDB > Issue Type: Improvement > Components: COMP - Compiler >Reporter: Yingyi Bu >Assignee: Dmitry Lychagin > Labels: soon > > For the following query, there could be many tweets whose > in_reply_to_status_id field is null/missing, which would cause skew in > the hash join. Since this is an inner join and missing/null join keys > cannot produce qualified join results anyway, the optimizer should inject > null/missing filters before the join. > {noformat} > FROM Tweets t2 JOIN Tweets t1 ON t2.in_reply_to_status_id = t1.id > WHERE not(`is-unknown`(t2.in_reply_to_status_id)) > GROUP BY t1.id AS id, t1.user.name AS name, t1.text AS text > SELECT id, name, text, COUNT(t2) AS num_retweets > ORDER BY num_retweets DESC > LIMIT 5; > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
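The skew mechanism can be illustrated outside AsterixDB with a toy Python sketch (names and counts are made up, not AsterixDB code): every null/missing join key hashes to the same value, so one hash-join partition receives all such rows even though none of them can ever match in an inner join.

```python
from collections import Counter

def partition(key, num_partitions):
    # Hash-partition a join key across joiner instances;
    # None stands in for a null/missing in_reply_to_status_id.
    return hash(key) % num_partitions

# 1000 tweets with no in_reply_to_status_id, 100 with distinct ids.
rows = [None] * 1000 + list(range(100))
load = Counter(partition(k, 4) for k in rows)
# Every None key hashes identically, so a single partition receives all
# 1000 of them, while the remaining keys spread evenly -- filtering the
# null/missing keys out before the join removes this skew for free.
```

This is exactly why injecting an `is-unknown`-style filter below the join (as the query in the issue does by hand) is a safe, profitable rewrite for inner equi-joins.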
[jira] [Closed] (ASTERIXDB-1753) Disable automatic scalar->plural conversion in group-by
[ https://issues.apache.org/jira/browse/ASTERIXDB-1753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu closed ASTERIXDB-1753. Resolution: Fixed Fixed with test cases. > Disable automatic scalar->plural conversion in group-by > --- > > Key: ASTERIXDB-1753 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1753 > Project: Apache AsterixDB > Issue Type: Improvement > Components: SQL - Translator SQL++ >Reporter: Yingyi Bu >Assignee: Yingyi Bu > > According to the latest SQL++ meeting minutes, we should disable automatic > scalar->plural conversion in the group-by clause. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Resolved] (ASTERIXDB-1447) Need an Abort API
[ https://issues.apache.org/jira/browse/ASTERIXDB-1447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu resolved ASTERIXDB-1447. -- Resolution: Fixed Added cancelJob(JobId jobId) > Need an Abort API > - > > Key: ASTERIXDB-1447 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1447 > Project: Apache AsterixDB > Issue Type: Improvement > Components: *DB - AsterixDB, HYR - Hyracks >Reporter: Jianfeng Jia >Assignee: Yingyi Bu > Labels: soon > > It is very useful to be able to stop a running task. > There is an {{AbortTasksWork}} in Hyracks, but I didn't find where it is > used; I could be wrong about that. Still, {{IHyracksClientInterface}} doesn't > have an {{abortJob}} interface. > Once we have this Hyracks API, we could also add the corresponding RESTful > API for AsterixDB as well. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (ASTERIXDB-967) Need Micro-distinct physical operator
[ https://issues.apache.org/jira/browse/ASTERIXDB-967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu reassigned ASTERIXDB-967: --- Assignee: Dmitry Lychagin (was: Yingyi Bu) > Need Micro-distinct physical operator > - > > Key: ASTERIXDB-967 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-967 > Project: Apache AsterixDB > Issue Type: Bug > Components: *DB - AsterixDB, COMP - Compiler >Reporter: asterixdb-importer >Assignee: Dmitry Lychagin > > Need Micro-distinct physical operator -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Closed] (ASTERIXDB-1812) OutofMemoryError when group by on a non-existing field with 300k records (tweets)
[ https://issues.apache.org/jira/browse/ASTERIXDB-1812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu closed ASTERIXDB-1812. Resolution: Fixed Fixed with a test case. > OutofMemoryError when group by on a non-existing field with 300k records > (tweets) > - > > Key: ASTERIXDB-1812 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1812 > Project: Apache AsterixDB > Issue Type: Bug > Components: *DB - AsterixDB, HYR - Hyracks > Environment: Linux 16.04 > Asterix 0.9.0 with 2 nc nodes and 1 cc node. (all using default > configurations from > https://asterixdb.apache.org/docs/0.9.0/install.html#Section1SingleMachineAsterixDBInstallation) >Reporter: Chen Luo > > The dataset is a sample tweet dataset provided by Cloudberry, which contains > 324000 tweets (about 300M). When issuing the following query, I always get an > OutofMemoryError. > Query: > {code} > select * from twitter.ds_tweet t > group by t.test; > {code} > Stacktrace: > {code} > org.apache.hyracks.api.exceptions.HyracksException: Job failed on account of: > HYR0003: java.lang.OutOfMemoryError: Java heap space > at > org.apache.hyracks.control.cc.job.JobRun.waitForCompletion(JobRun.java:211) > at > org.apache.hyracks.control.cc.work.WaitForJobCompletionWork$1.run(WaitForJobCompletionWork.java:48) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > Caused by: org.apache.hyracks.api.exceptions.HyracksDataException: HYR0003: > java.lang.OutOfMemoryError: Java heap space > at > org.apache.hyracks.control.common.utils.ExceptionUtils.setNodeIds(ExceptionUtils.java:62) > at org.apache.hyracks.control.nc.Task.run(Task.java:330) > ... 
3 more > Caused by: org.apache.hyracks.api.exceptions.HyracksDataException: > java.lang.OutOfMemoryError: Java heap space > at > org.apache.hyracks.api.rewriter.runtime.SuperActivityOperatorNodePushable.runInParallel(SuperActivityOperatorNodePushable.java:228) > at > org.apache.hyracks.api.rewriter.runtime.SuperActivityOperatorNodePushable.initialize(SuperActivityOperatorNodePushable.java:84) > at org.apache.hyracks.control.nc.Task.run(Task.java:273) > ... 3 more > Caused by: java.util.concurrent.ExecutionException: > java.lang.OutOfMemoryError: Java heap space > at java.util.concurrent.FutureTask.report(FutureTask.java:122) > at java.util.concurrent.FutureTask.get(FutureTask.java:192) > at > org.apache.hyracks.api.rewriter.runtime.SuperActivityOperatorNodePushable.runInParallel(SuperActivityOperatorNodePushable.java:222) > ... 5 more > Caused by: java.lang.OutOfMemoryError: Java heap space > at java.nio.HeapByteBuffer.(HeapByteBuffer.java:57) > at java.nio.ByteBuffer.allocate(ByteBuffer.java:335) > at > org.apache.hyracks.control.nc.resources.memory.FrameManager.allocateFrame(FrameManager.java:57) > at > org.apache.hyracks.control.nc.resources.memory.FrameManager.reallocateFrame(FrameManager.java:73) > at org.apache.hyracks.control.nc.Joblet.reallocateFrame(Joblet.java:242) > at org.apache.hyracks.control.nc.Task.reallocateFrame(Task.java:136) > at > org.apache.hyracks.api.comm.VSizeFrame.ensureFrameSize(VSizeFrame.java:53) > at > org.apache.hyracks.dataflow.common.comm.io.AbstractFrameAppender.canHoldNewTuple(AbstractFrameAppender.java:104) > at > org.apache.hyracks.dataflow.common.comm.io.FrameTupleAppender.append(FrameTupleAppender.java:49) > at > org.apache.hyracks.dataflow.common.comm.util.FrameUtils.appendToWriter(FrameUtils.java:159) > at > org.apache.hyracks.algebricks.runtime.operators.base.AbstractOneInputOneOutputOneFramePushRuntime.appendToFrameFromTupleBuilder(AbstractOneInputOneOutputOneFramePushRuntime.java:82) > at > 
org.apache.hyracks.algebricks.runtime.operators.base.AbstractOneInputOneOutputOneFramePushRuntime.appendToFrameFromTupleBuilder(AbstractOneInputOneOutputOneFramePushRuntime.java:78) > at > org.apache.hyracks.algebricks.runtime.operators.std.AssignRuntimeFactory$1.nextFrame(AssignRuntimeFactory.java:150) > at > org.apache.hyracks.algebricks.runtime.operators.meta.AlgebricksMetaOperatorDescriptor$2.nextFrame(AlgebricksMetaOperatorDescriptor.java:134) > at > org.apache.hyracks.dataflow.common.comm.io.AbstractFrameAppender.write(AbstractFrameAppender.java:92) > at > org.apache.hyracks.dataflow.common.comm.io.FrameTupleAppenderWrapper.write(FrameTupleAppenderWrapper.java:50) > at >
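The failure mode behind ASTERIXDB-1812 is worth spelling out: when the query groups on a field that no record has (`group by t.test`), every record produces the same MISSING grouping key, so all 324,000 tweets collapse into a single group that must be materialized. A minimal Python sketch of that collapse (illustrative only, not AsterixDB code; `MISSING` is a stand-in for the SQL++ MISSING value):

```python
# Sketch (not AsterixDB internals) of why grouping on a non-existing field is
# pathological: every record yields the same MISSING key, so the entire
# dataset collapses into one group that must be held in memory at once.
MISSING = object()  # stand-in for the SQL++ MISSING value

def group_by(records, key_field):
    groups = {}
    for rec in records:
        key = rec.get(key_field, MISSING)
        groups.setdefault(key, []).append(rec)  # one giant list when key is MISSING
    return groups

records = [{"id": i, "text": "tweet"} for i in range(324_000)]
groups = group_by(records, "test")  # no record has a "test" field
```

With `select *`, that single group carries every record, which is why the memory budget (or, per the reopen note, the hard-coded 32MB buffer limit) is exceeded regardless of configuration.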
[jira] [Created] (ASTERIXDB-2051) Variable not found in a complex group-by query
Yingyi Bu created ASTERIXDB-2051: Summary: Variable not found in a complex group-by query Key: ASTERIXDB-2051 URL: https://issues.apache.org/jira/browse/ASTERIXDB-2051 Project: Apache AsterixDB Issue Type: Bug Components: COMP - Compiler Reporter: Yingyi Bu Assignee: Yingyi Bu {noformat} DROP DATAVERSE tpch IF EXISTS; CREATE dataverse tpch; USE tpch; CREATE TYPE LineItemType AS CLOSED { l_orderkey : integer, l_partkey : integer, l_suppkey : integer, l_linenumber : integer, l_quantity : double, l_extendedprice : double, l_discount : double, l_tax : double, l_returnflag : string, l_linestatus : string, l_shipdate : string, l_commitdate : string, l_receiptdate : string, l_shipinstruct : string, l_shipmode : string, l_comment : string } CREATE DATASET LineItem(LineItemType) PRIMARY KEY l_orderkey,l_linenumber; SELECT l_returnflag AS l_returnflag, l_linestatus AS l_linestatus, coll_count(cheap) AS count_cheaps, coll_count(expensive) AS count_expensives FROM LineItem AS l /* +hash */ GROUP BY l.l_returnflag AS l_returnflag,l.l_linestatus AS l_linestatus GROUP AS g LET cheap = ( SELECT ELEMENT g.l FROM g WHERE g.l.l_discount > 0.05 ), expensive = ( SELECT ELEMENT m FROM (FROM g SELECT VALUE l) AS m WHERE m.l_discount <= 0.05 ) ORDER BY l_returnflag,l_linestatus ; {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
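The query above hinges on `GROUP AS g` semantics: after grouping, `g` is bound to a collection of records, one per input tuple, each wrapping the FROM-clause variable under its own name (`"l"` here), and the inner `FROM g SELECT VALUE l` re-exposes that variable. A toy Python model of those semantics (an assumption-laden sketch, not AsterixDB internals):

```python
# Toy model of SQL++ GROUP AS: the group variable g is a list of records,
# each wrapping the FROM-clause variable under its name ("l" here).
def group_as(items, key_fn):
    groups = {}
    for l in items:
        groups.setdefault(key_fn(l), []).append({"l": l})  # shape of g's elements
    return groups

lineitems = [
    {"l_returnflag": "A", "l_linestatus": "F", "l_discount": 0.06},
    {"l_returnflag": "A", "l_linestatus": "F", "l_discount": 0.01},
]
g = group_as(lineitems, lambda l: (l["l_returnflag"], l["l_linestatus"]))[("A", "F")]

# "FROM (FROM g SELECT VALUE l) AS m" unnests g back to the raw line items:
m_values = [rec["l"] for rec in g]
cheap = [m for m in m_values if m["l_discount"] > 0.05]
```

The reported "variable not found" error suggests the compiler lost the binding of `l` inside the nested `FROM g SELECT VALUE l` subquery, even though the model above shows it is well defined.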
[jira] [Closed] (ASTERIXDB-2044) Listify in subqueries
[ https://issues.apache.org/jira/browse/ASTERIXDB-2044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu closed ASTERIXDB-2044. Resolution: Fixed > Listify in subqueries > - > > Key: ASTERIXDB-2044 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-2044 > Project: Apache AsterixDB > Issue Type: Bug > Components: COMP - Compiler >Reporter: Yingyi Bu >Assignee: Yingyi Bu > > The following query will result in unnecessary listifies in the optimized > query plan. > {noformat} > DROP DATAVERSE tpch IF EXISTS; > CREATE dataverse tpch; > USE tpch; > CREATE TYPE LineItemType AS CLOSED { > l_orderkey : integer, > l_partkey : integer, > l_suppkey : integer, > l_linenumber : integer, > l_quantity : double, > l_extendedprice : double, > l_discount : double, > l_tax : double, > l_returnflag : string, > l_linestatus : string, > l_shipdate : string, > l_commitdate : string, > l_receiptdate : string, > l_shipinstruct : string, > l_shipmode : string, > l_comment : string > } > CREATE DATASET LineItem(LineItemType) PRIMARY KEY l_orderkey,l_linenumber; > SELECT l_returnflag AS l_returnflag, >l_linestatus AS l_linestatus, >coll_count(cheap) AS count_cheaps, >coll_count(expensive) AS count_expensives > FROM LineItem AS l > /* +hash */ > GROUP BY l.l_returnflag AS l_returnflag,l.l_linestatus AS l_linestatus > GROUP AS g > LET cheap = ( > SELECT ELEMENT m > FROM (FROM g SELECT VALUE l) AS m > WHERE m.l_discount > 0.05 > ), > expensive = ( > SELECT ELEMENT m > FROM (FROM g SELECT VALUE l) AS m > WHERE m.l_discount <= 0.05 > ) > ORDER BY l_returnflag,l_linestatus > ; > {noformat} > {noformat} > distribute result [$$31] > -- DISTRIBUTE_RESULT |PARTITIONED| > exchange > -- ONE_TO_ONE_EXCHANGE |PARTITIONED| > project ([$$31]) > -- STREAM_PROJECT |PARTITIONED| > assign [$$31] <- [{"l_returnflag": $$l_returnflag, "l_linestatus": > $$l_linestatus, "count_cheaps": $$36, "count_expensives": $$37}] > -- ASSIGN |PARTITIONED| > exchange > -- SORT_MERGE_EXCHANGE 
[$$l_returnflag(ASC), $$l_linestatus(ASC) ] > |PARTITIONED| > project ([$$l_returnflag, $$l_linestatus, $$36, $$37]) > -- STREAM_PROJECT |PARTITIONED| > subplan { > aggregate [$$37] <- [agg-count($$m)] > -- AGGREGATE |LOCAL| > select (le($$39, 0.05)) > -- STREAM_SELECT |LOCAL| > assign [$$39] <- [$$m.getField(6)] > -- ASSIGN |LOCAL| > unnest $$m <- scan-collection($$24) > -- UNNEST |LOCAL| > subplan { > aggregate [$$24] <- [listify($$23)] > -- AGGREGATE |LOCAL| > assign [$$23] <- [$$g.getField(0)] > -- ASSIGN |LOCAL| > unnest $$g <- > scan-collection($$15) > -- UNNEST |LOCAL| > nested tuple source > -- NESTED_TUPLE_SOURCE |LOCAL| > } > -- SUBPLAN |LOCAL| > nested tuple source > -- NESTED_TUPLE_SOURCE |LOCAL| >} > -- SUBPLAN |PARTITIONED| > subplan { > aggregate [$$36] <- [agg-count($$m)] > -- AGGREGATE |LOCAL| > select (gt($$38, 0.05)) > -- STREAM_SELECT |LOCAL| > assign [$$38] <- [$$m.getField(6)] > -- ASSIGN |LOCAL| > unnest $$m <- scan-collection($$18) > -- UNNEST |LOCAL| > subplan { > aggregate [$$18] <- [listify($$17)] > -- AGGREGATE |LOCAL| > assign [$$17] <- [$$g.getField(0)] > -- ASSIGN |LOCAL| > unnest $$g <- > scan-collection($$15) > -- UNNEST |LOCAL| > nested tuple source > -- NESTED_TUPLE_SOURCE > |LOCAL| >} >
[jira] [Created] (ASTERIXDB-2044) Listify in subqueries
Yingyi Bu created ASTERIXDB-2044: Summary: Listify in subqueries Key: ASTERIXDB-2044 URL: https://issues.apache.org/jira/browse/ASTERIXDB-2044 Project: Apache AsterixDB Issue Type: Bug Components: COMP - Compiler Reporter: Yingyi Bu Assignee: Yingyi Bu The following query will result in unnecessary listifies in the optimized query plan. {noformat} DROP DATAVERSE tpch IF EXISTS; CREATE dataverse tpch; USE tpch; CREATE TYPE LineItemType AS CLOSED { l_orderkey : integer, l_partkey : integer, l_suppkey : integer, l_linenumber : integer, l_quantity : double, l_extendedprice : double, l_discount : double, l_tax : double, l_returnflag : string, l_linestatus : string, l_shipdate : string, l_commitdate : string, l_receiptdate : string, l_shipinstruct : string, l_shipmode : string, l_comment : string } CREATE DATASET LineItem(LineItemType) PRIMARY KEY l_orderkey,l_linenumber; SELECT l_returnflag AS l_returnflag, l_linestatus AS l_linestatus, coll_count(cheap) AS count_cheaps, coll_count(expensive) AS count_expensives FROM LineItem AS l /* +hash */ GROUP BY l.l_returnflag AS l_returnflag,l.l_linestatus AS l_linestatus GROUP AS g LET cheap = ( SELECT ELEMENT m FROM (FROM g SELECT VALUE l) AS m WHERE m.l_discount > 0.05 ), expensive = ( SELECT ELEMENT m FROM (FROM g SELECT VALUE l) AS m WHERE m.l_discount <= 0.05 ) ORDER BY l_returnflag,l_linestatus ; {noformat} {noformat} distribute result [$$31] -- DISTRIBUTE_RESULT |PARTITIONED| exchange -- ONE_TO_ONE_EXCHANGE |PARTITIONED| project ([$$31]) -- STREAM_PROJECT |PARTITIONED| assign [$$31] <- [{"l_returnflag": $$l_returnflag, "l_linestatus": $$l_linestatus, "count_cheaps": $$36, "count_expensives": $$37}] -- ASSIGN |PARTITIONED| exchange -- SORT_MERGE_EXCHANGE [$$l_returnflag(ASC), $$l_linestatus(ASC) ] |PARTITIONED| project ([$$l_returnflag, $$l_linestatus, $$36, $$37]) -- STREAM_PROJECT |PARTITIONED| subplan { aggregate [$$37] <- [agg-count($$m)] -- AGGREGATE |LOCAL| select (le($$39, 0.05)) -- STREAM_SELECT |LOCAL| assign 
[$$39] <- [$$m.getField(6)] -- ASSIGN |LOCAL| unnest $$m <- scan-collection($$24) -- UNNEST |LOCAL| subplan { aggregate [$$24] <- [listify($$23)] -- AGGREGATE |LOCAL| assign [$$23] <- [$$g.getField(0)] -- ASSIGN |LOCAL| unnest $$g <- scan-collection($$15) -- UNNEST |LOCAL| nested tuple source -- NESTED_TUPLE_SOURCE |LOCAL| } -- SUBPLAN |LOCAL| nested tuple source -- NESTED_TUPLE_SOURCE |LOCAL| } -- SUBPLAN |PARTITIONED| subplan { aggregate [$$36] <- [agg-count($$m)] -- AGGREGATE |LOCAL| select (gt($$38, 0.05)) -- STREAM_SELECT |LOCAL| assign [$$38] <- [$$m.getField(6)] -- ASSIGN |LOCAL| unnest $$m <- scan-collection($$18) -- UNNEST |LOCAL| subplan { aggregate [$$18] <- [listify($$17)] -- AGGREGATE |LOCAL| assign [$$17] <- [$$g.getField(0)] -- ASSIGN |LOCAL| unnest $$g <- scan-collection($$15) -- UNNEST |LOCAL| nested tuple source -- NESTED_TUPLE_SOURCE |LOCAL| } -- SUBPLAN |LOCAL| nested tuple source -- NESTED_TUPLE_SOURCE |LOCAL| } -- SUBPLAN |PARTITIONED| exchange -- ONE_TO_ONE_EXCHANGE |PARTITIONED| group by ([$$l_returnflag := $$32; $$l_linestatus :=
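The redundancy the plan exhibits is that each of the two LET subqueries gets its own subplan that re-listifies the same group variable (`$$24` and `$$18` are both built from `$$15`). Semantically, one listify shared by both filters suffices; a Python sketch of the intended sharing (illustrative only, not compiler output):

```python
# What a better plan would do: listify the group variable once and share the
# result between both LET subqueries, instead of building two identical lists.
g = [{"l": {"l_discount": d}} for d in (0.01, 0.04, 0.06, 0.10)]

items = [rec["l"] for rec in g]                        # single "listify"
cheap = [m for m in items if m["l_discount"] > 0.05]   # feeds count_cheaps
expensive = [m for m in items if m["l_discount"] <= 0.05]  # feeds count_expensives
```

This is ordinary common-subexpression elimination applied at the subplan level; the bug report is that the optimizer failed to recognize the two listify subplans as identical.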
[jira] [Closed] (ASTERIXDB-2032) Let stop-sample-cluster script not use the shutdown REST API
[ https://issues.apache.org/jira/browse/ASTERIXDB-2032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu closed ASTERIXDB-2032. Resolution: Fixed > Let stop-sample-cluster script not use the shutdown REST API > > > Key: ASTERIXDB-2032 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-2032 > Project: Apache AsterixDB > Issue Type: Bug > Components: CLUS - Cluster management >Reporter: Yingyi Bu >Assignee: Yingyi Bu >
[jira] [Created] (ASTERIXDB-2034) mvn package does not copy externalibs to asterix-app/target
Yingyi Bu created ASTERIXDB-2034: Summary: mvn package does not copy externalibs to asterix-app/target Key: ASTERIXDB-2034 URL: https://issues.apache.org/jira/browse/ASTERIXDB-2034 Project: Apache AsterixDB Issue Type: Bug Components: CONF - Configuration Reporter: Yingyi Bu Assignee: Michael Blow but mvn clean install does. This seems to be due to the Maven configuration in asterix-app/pom.xml.
[jira] [Assigned] (ASTERIXDB-2011) Subquery as index when selecting element from a list
[ https://issues.apache.org/jira/browse/ASTERIXDB-2011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu reassigned ASTERIXDB-2011: Assignee: Yingyi Bu > Subquery as index when selecting element from a list > > > Key: ASTERIXDB-2011 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-2011 > Project: Apache AsterixDB > Issue Type: Bug > Components: COMP - Compiler >Reporter: Ali Alsuliman >Assignee: Yingyi Bu > > These are some examples where an internal error (null pointer exception) > occurs > (["a", "b", "c"])[ (SELECT * FROM [1,2] AS foo)[0] ]; > (["a", "b", "c"])[ (SELECT value count(*) FROM Users) ]; (Users dataset is > empty and would return 0. So it's within the bound of the list) > Here is the log: > java.lang.NullPointerException > at > org.apache.asterix.om.typecomputer.impl.TypeComputeUtils.getActualType(TypeComputeUtils.java:186) > at > org.apache.asterix.om.typecomputer.impl.TypeComputeUtils.getActualType(TypeComputeUtils.java:165) > at > org.apache.asterix.om.typecomputer.impl.TypeComputeUtils.resolveResultType(TypeComputeUtils.java:85) > at > org.apache.asterix.om.typecomputer.base.AbstractResultTypeComputer.computeType(AbstractResultTypeComputer.java:42) > at > org.apache.asterix.dataflow.data.common.ExpressionTypeComputer.getTypeForFunction(ExpressionTypeComputer.java:80) > at > org.apache.asterix.dataflow.data.common.ExpressionTypeComputer.getType(ExpressionTypeComputer.java:53) > at > org.apache.hyracks.algebricks.core.algebra.operators.logical.AssignOperator.computeOutputTypeEnvironment(AssignOperator.java:92) > at > org.apache.hyracks.algebricks.core.rewriter.base.AlgebricksOptimizationContext.computeAndSetTypeEnvironmentForOperator(AlgebricksOptimizationContext.java:298) > at > org.apache.hyracks.algebricks.rewriter.rules.InferTypesRule.rewritePost(InferTypesRule.java:42) > at > org.apache.hyracks.algebricks.core.rewriter.base.AbstractRuleController.rewriteOperatorRef(AbstractRuleController.java:126) > at > 
org.apache.hyracks.algebricks.core.rewriter.base.AbstractRuleController.rewriteOperatorRef(AbstractRuleController.java:100) > at > org.apache.hyracks.algebricks.core.rewriter.base.AbstractRuleController.rewriteOperatorRef(AbstractRuleController.java:100) > at > org.apache.hyracks.algebricks.compiler.rewriter.rulecontrollers.SequentialOnceRuleController.rewriteWithRuleCollection(SequentialOnceRuleController.java:44) > at > org.apache.hyracks.algebricks.core.rewriter.base.HeuristicOptimizer.runOptimizationSets(HeuristicOptimizer.java:102) > at > org.apache.hyracks.algebricks.core.rewriter.base.HeuristicOptimizer.optimize(HeuristicOptimizer.java:82) > at > org.apache.hyracks.algebricks.compiler.api.HeuristicCompilerFactoryBuilder$1$1.optimize(HeuristicCompilerFactoryBuilder.java:90) > at > org.apache.asterix.api.common.APIFramework.compileQuery(APIFramework.java:267) > at > org.apache.asterix.app.translator.QueryTranslator.rewriteCompileQuery(QueryTranslator.java:1805) > at > org.apache.asterix.app.translator.QueryTranslator.lambda$handleQuery$1(QueryTranslator.java:2290) > at > org.apache.asterix.app.translator.QueryTranslator.createAndRunJob(QueryTranslator.java:2390) > at > org.apache.asterix.app.translator.QueryTranslator.deliverResult(QueryTranslator.java:2323) > at > org.apache.asterix.app.translator.QueryTranslator.handleQuery(QueryTranslator.java:2302) > at > org.apache.asterix.app.translator.QueryTranslator.compileAndExecute(QueryTranslator.java:370) > at > org.apache.asterix.app.translator.QueryTranslator.compileAndExecute(QueryTranslator.java:253) > at > org.apache.asterix.api.http.server.ApiServlet.post(ApiServlet.java:153) > at > org.apache.hyracks.http.server.AbstractServlet.handle(AbstractServlet.java:78) > at > org.apache.hyracks.http.server.HttpRequestHandler.handle(HttpRequestHandler.java:70) > at > org.apache.hyracks.http.server.HttpRequestHandler.call(HttpRequestHandler.java:55) > at > 
org.apache.hyracks.http.server.HttpRequestHandler.call(HttpRequestHandler.java:36) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > java.lang.NullPointerException > at > org.apache.asterix.om.typecomputer.impl.TypeComputeUtils.getActualType(TypeComputeUtils.java:186) > at > org.apache.asterix.om.typecomputer.impl.TypeComputeUtils.getActualType(TypeComputeUtils.java:165) > at >
[jira] [Created] (ASTERIXDB-2032) Let stop-sample-cluster script not use the shutdown REST API
Yingyi Bu created ASTERIXDB-2032: Summary: Let stop-sample-cluster script not use the shutdown REST API Key: ASTERIXDB-2032 URL: https://issues.apache.org/jira/browse/ASTERIXDB-2032 Project: Apache AsterixDB Issue Type: Bug Components: CLUS - Cluster management Reporter: Yingyi Bu Assignee: Yingyi Bu
[jira] [Closed] (ASTERIXDB-2000) Roundtrip-ability for 'infinity'
[ https://issues.apache.org/jira/browse/ASTERIXDB-2000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu closed ASTERIXDB-2000. Resolution: Fixed > Roundtrip-ability for 'infinity' > > > Key: ASTERIXDB-2000 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-2000 > Project: Apache AsterixDB > Issue Type: Bug >Reporter: Yingyi Bu >Assignee: Yingyi Bu > > We can only parse INF but the output is Infinity, which we cannot parse again.
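The bug class here is a parse/print asymmetry: the literal parser accepts `INF`, but the printer emits `Infinity`, which the parser then rejects. A toy Python reproduction of that asymmetry (assumed semantics for illustration, not AsterixDB's actual parser or printer):

```python
# Toy model of the pre-fix behavior: parse accepts only "INF" among words,
# but print emits "Infinity", so parse(print(parse(x))) fails for infinities.
import math

def parse_double(text):
    if text == "INF":
        return math.inf
    if any(c.isalpha() for c in text):
        raise ValueError(f"unparseable literal: {text}")  # toy grammar: no other words
    return float(text)

def print_double(value):
    return "Infinity" if math.isinf(value) else repr(value)

def roundtrips(text):
    """True when a literal survives a parse -> print -> parse cycle."""
    try:
        return parse_double(print_double(parse_double(text))) == parse_double(text)
    except ValueError:
        return False
```

A round-trip check like `roundtrips` makes a good regression test: ordinary numerics pass, while the infinity literal fails until the printer and parser agree on one spelling.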
[jira] [Closed] (ASTERIXDB-1664) position() and regexp_position() should be 1-based
[ https://issues.apache.org/jira/browse/ASTERIXDB-1664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu closed ASTERIXDB-1664. Resolution: Fixed > position() and regexp_position() should be 1-based > -- > > Key: ASTERIXDB-1664 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1664 > Project: Apache AsterixDB > Issue Type: Bug > Components: FUN - Functions >Reporter: Yingyi Bu >Assignee: Yingyi Bu > > position() and regexp_position() should be 1-based so as to be consistent > with substr().
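The consistency argument is that a 1-based position() composes cleanly with a 1-based substr(): `substr(s, position(s, t), length(t))` should return `t`. A Python sketch of the two conventions working together (illustrative only; the 0-when-absent return value is an assumed convention, not necessarily AsterixDB's):

```python
# Sketch of the 1-based convention: position() feeds directly into substr().
def position(s, t):
    """1-based index of t in s; 0 when absent (assumed convention here)."""
    return s.find(t) + 1  # str.find is 0-based and returns -1 when absent

def substr(s, start, length):
    """1-based substring, like SQL substr()."""
    return s[start - 1:start - 1 + length]

s, t = "asterixdb", "rix"
found = substr(s, position(s, t), len(t))  # composes without off-by-one math
```

With a 0-based position(), the same composition would require a `+ 1` adjustment at every call site, which is exactly the inconsistency the issue fixes.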
[jira] [Created] (ASTERIXDB-2001) NPE during rebalance cancellation test
Yingyi Bu created ASTERIXDB-2001: Summary: NPE during rebalance cancellation test Key: ASTERIXDB-2001 URL: https://issues.apache.org/jira/browse/ASTERIXDB-2001 Project: Apache AsterixDB Issue Type: Bug Reporter: Yingyi Bu https://asterix-jenkins.ics.uci.edu/job/asterix-gerrit-rebalance-cancellation/233/console {noformat} src/test/resources/runtimets/results/rebalance/single_dataset_with_index/single_dataset_with_index.3.adm 23:52:08 java.lang.IllegalStateException: Failed to undo 23:52:08at org.apache.asterix.app.nc.RecoveryManager.undo(RecoveryManager.java:690) 23:52:08at org.apache.asterix.app.nc.RecoveryManager.rollbackTransaction(RecoveryManager.java:630) 23:52:08at org.apache.asterix.transaction.management.service.transaction.TransactionManager.abortTransaction(TransactionManager.java:65) 23:52:08at org.apache.asterix.transaction.management.service.transaction.TransactionManager.completedTransaction(TransactionManager.java:132) 23:52:08at org.apache.asterix.runtime.job.listener.JobEventListenerFactory$1.jobletFinish(JobEventListenerFactory.java:58) 23:52:08at org.apache.hyracks.control.nc.Joblet.performCleanup(Joblet.java:316) 23:52:08at org.apache.hyracks.control.nc.Joblet.removeTask(Joblet.java:151) 23:52:08at org.apache.hyracks.control.nc.work.NotifyTaskFailureWork.run(NotifyTaskFailureWork.java:54) 23:52:08at org.apache.hyracks.control.common.work.WorkQueue$WorkerThread.run(WorkQueue.java:127) 23:52:08 Caused by: java.lang.NullPointerException 23:52:08at org.apache.asterix.app.nc.RecoveryManager.undo(RecoveryManager.java:666) 23:52:08... 8 more 23:52:08 Exception in thread "Worker:asterix_nc1" java.lang.Error: org.apache.asterix.common.exceptions.ACIDException: Could not complete rollback! 
System is in an inconsistent state 23:52:08at org.apache.asterix.runtime.job.listener.JobEventListenerFactory$1.jobletFinish(JobEventListenerFactory.java:61) 23:52:08at org.apache.hyracks.control.nc.Joblet.performCleanup(Joblet.java:316) 23:52:08at org.apache.hyracks.control.nc.Joblet.removeTask(Joblet.java:151) 23:52:08at org.apache.hyracks.control.nc.work.NotifyTaskFailureWork.run(NotifyTaskFailureWork.java:54) 23:52:08at org.apache.hyracks.control.common.work.WorkQueue$WorkerThread.run(WorkQueue.java:127) 23:52:08 Caused by: org.apache.asterix.common.exceptions.ACIDException: Could not complete rollback! System is in an inconsistent state 23:52:08at org.apache.asterix.transaction.management.service.transaction.TransactionManager.abortTransaction(TransactionManager.java:73) 23:52:08at org.apache.asterix.transaction.management.service.transaction.TransactionManager.completedTransaction(TransactionManager.java:132) 23:52:08at org.apache.asterix.runtime.job.listener.JobEventListenerFactory$1.jobletFinish(JobEventListenerFactory.java:58) 23:52:08... 4 more 23:52:08 Caused by: java.lang.IllegalStateException: Failed to undo 23:52:08at org.apache.asterix.app.nc.RecoveryManager.undo(RecoveryManager.java:690) 23:52:08at org.apache.asterix.app.nc.RecoveryManager.rollbackTransaction(RecoveryManager.java:630) 23:52:08at org.apache.asterix.transaction.management.service.transaction.TransactionManager.abortTransaction(TransactionManager.java:65) 23:52:08... 6 more 23:52:08 Caused by: java.lang.NullPointerException 23:52:08at org.apache.asterix.app.nc.RecoveryManager.undo(RecoveryManager.java:666) 23:52:08... 8 more {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ASTERIXDB-2000) Roundtrip-ability for 'infinity'
Yingyi Bu created ASTERIXDB-2000: Summary: Roundtrip-ability for 'infinity' Key: ASTERIXDB-2000 URL: https://issues.apache.org/jira/browse/ASTERIXDB-2000 Project: Apache AsterixDB Issue Type: Bug Reporter: Yingyi Bu Assignee: Yingyi Bu We can only parse INF but the output is Infinity, which we cannot parse again.
[jira] [Closed] (ASTERIXDB-1982) Results distribution reporting empty errors
[ https://issues.apache.org/jira/browse/ASTERIXDB-1982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu closed ASTERIXDB-1982. Resolution: Fixed > Results distribution reporting empty errors > --- > > Key: ASTERIXDB-1982 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1982 > Project: Apache AsterixDB > Issue Type: Bug > Components: FAIL - Failure handling/reporting >Reporter: Murtadha Hubail >Assignee: Yingyi Bu > > There are sporadic failures in Hyracks' result distribution where exceptions > are not reported correctly.
[jira] [Updated] (ASTERIXDB-1991) Resource not exist in IndexDataflowHelper.destroy
[ https://issues.apache.org/jira/browse/ASTERIXDB-1991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu updated ASTERIXDB-1991: - Description: Sporadic test failures on Jenkins: https://asterix-jenkins.ics.uci.edu/job/asterix-gerrit-asterix-app/org.apache.asterix$asterix-app/1273/testReport/junit/org.apache.asterix.test.runtime/AqlExecutionTest/test_AqlExecutionTest_344__dml__delete_from_loaded_dataset_with_index_/ {noformat} testFile src/test/resources/runtimets/queries/dml/delete-from-loaded-dataset-with-index/delete-from-loaded-dataset-with-index.3.ddl.aql raised an exception: HTTP operation failed: HYR0055: Resource does not exist for storage/partition_2/test/LineItem_idx_idx_LineItem_partkey [HyracksDataException] STATUS LINE: HTTP/1.1 500 Internal Server Error SUMMARY: org.apache.hyracks.api.exceptions.HyracksDataException: java.util.concurrent.ExecutionException: org.apache.hyracks.api.exceptions.HyracksDataException: HYR0055: Resource does not exist for storage/partition_2/test/LineItem_idx_idx_LineItem_partkey HTTP operation failed: HYR0055: Resource does not exist for storage/partition_2/test/LineItem_idx_idx_LineItem_partkey [HyracksDataException] STATUS LINE: HTTP/1.1 500 Internal Server Error SUMMARY: org.apache.hyracks.api.exceptions.HyracksDataException: java.util.concurrent.ExecutionException: org.apache.hyracks.api.exceptions.HyracksDataException: HYR0055: Resource does not exist for storage/partition_2/test/LineItem_idx_idx_LineItem_partkey at org.apache.asterix.test.common.TestExecutor.checkResponse(TestExecutor.java:474) at org.apache.asterix.test.common.TestExecutor.executeAndCheckHttpRequest(TestExecutor.java:437) at org.apache.asterix.test.common.TestExecutor.executeDDL(TestExecutor.java:719) at org.apache.asterix.test.common.TestExecutor.executeTestFile(TestExecutor.java:837) at org.apache.asterix.test.common.TestExecutor.executeTest(TestExecutor.java:1310) at 
org.apache.asterix.test.runtime.LangExecutionUtil.test(LangExecutionUtil.java:125) at org.apache.asterix.test.runtime.LangExecutionUtil.test(LangExecutionUtil.java:112) at org.apache.asterix.test.runtime.AqlExecutionTest.test(AqlExecutionTest.java:63) at sun.reflect.GeneratedMethodAccessor30.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at org.junit.runners.Suite.runChild(Suite.java:128) at org.junit.runners.Suite.runChild(Suite.java:27) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272) at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:236) at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:386) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:323) at
[jira] [Created] (ASTERIXDB-1992) Dataset KeyVerse.KVStore is currently being fed into by the following active entities
Yingyi Bu created ASTERIXDB-1992: Summary: Dataset KeyVerse.KVStore is currently being fed into by the following active entities Key: ASTERIXDB-1992 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1992 Project: Apache AsterixDB Issue Type: Bug Components: ING - Ingestion Reporter: Yingyi Bu Assignee: Abdullah Alamoudi Sporadic test failures on Jenkins: https://asterix-jenkins.ics.uci.edu/job/asterix-gerrit-asterix-app/org.apache.asterix$asterix-app/1273/testReport/junit/org.apache.asterix.test.runtime/AqlExecutionTest/test_AqlExecutionTest_13__feeds__change_feed_with_meta_pk_in_meta_index_after_ingest_/ {noformat} testFile src/test/resources/runtimets/queries/feeds/change-feed-with-meta-pk-in-meta-index-after-ingest/change-feed-with-meta-pk-in-meta-index-after-ingest.3.ddl.aql raised an exception: HTTP operation failed: Dataset KeyVerse.KVStore is currently being fed into by the following active entities. KeyVerse.KVChangeStream(Feed) [CompilationException] STATUS LINE: HTTP/1.1 500 Internal Server Error SUMMARY: org.apache.asterix.common.exceptions.CompilationException: Dataset KeyVerse.KVStore is currently being fed into by the following active entities. HTTP operation failed: Dataset KeyVerse.KVStore is currently being fed into by the following active entities. KeyVerse.KVChangeStream(Feed) [CompilationException] STATUS LINE: HTTP/1.1 500 Internal Server Error SUMMARY: org.apache.asterix.common.exceptions.CompilationException: Dataset KeyVerse.KVStore is currently being fed into by the following active entities. 
at org.apache.asterix.test.common.TestExecutor.checkResponse(TestExecutor.java:474) at org.apache.asterix.test.common.TestExecutor.executeAndCheckHttpRequest(TestExecutor.java:437) at org.apache.asterix.test.common.TestExecutor.executeDDL(TestExecutor.java:719) at org.apache.asterix.test.common.TestExecutor.executeTestFile(TestExecutor.java:837) at org.apache.asterix.test.common.TestExecutor.executeTest(TestExecutor.java:1310) at org.apache.asterix.test.runtime.LangExecutionUtil.test(LangExecutionUtil.java:125) at org.apache.asterix.test.runtime.LangExecutionUtil.test(LangExecutionUtil.java:112) at org.apache.asterix.test.runtime.AqlExecutionTest.test(AqlExecutionTest.java:63) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at org.junit.runners.Suite.runChild(Suite.java:128) 
at org.junit.runners.Suite.runChild(Suite.java:27) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272) at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:236) at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
[jira] [Created] (ASTERIXDB-1991) Resource not exist in IndexDataflowHelper.destroy
Yingyi Bu created ASTERIXDB-1991: Summary: Resource not exist in IndexDataflowHelper.destroy Key: ASTERIXDB-1991 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1991 Project: Apache AsterixDB Issue Type: Bug Reporter: Yingyi Bu Assignee: Abdullah Alamoudi {noformat} testFile src/test/resources/runtimets/queries/dml/delete-from-loaded-dataset-with-index/delete-from-loaded-dataset-with-index.3.ddl.aql raised an exception: HTTP operation failed: HYR0055: Resource does not exist for storage/partition_2/test/LineItem_idx_idx_LineItem_partkey [HyracksDataException] STATUS LINE: HTTP/1.1 500 Internal Server Error SUMMARY: org.apache.hyracks.api.exceptions.HyracksDataException: java.util.concurrent.ExecutionException: org.apache.hyracks.api.exceptions.HyracksDataException: HYR0055: Resource does not exist for storage/partition_2/test/LineItem_idx_idx_LineItem_partkey HTTP operation failed: HYR0055: Resource does not exist for storage/partition_2/test/LineItem_idx_idx_LineItem_partkey [HyracksDataException] STATUS LINE: HTTP/1.1 500 Internal Server Error SUMMARY: org.apache.hyracks.api.exceptions.HyracksDataException: java.util.concurrent.ExecutionException: org.apache.hyracks.api.exceptions.HyracksDataException: HYR0055: Resource does not exist for storage/partition_2/test/LineItem_idx_idx_LineItem_partkey at org.apache.asterix.test.common.TestExecutor.checkResponse(TestExecutor.java:474) at org.apache.asterix.test.common.TestExecutor.executeAndCheckHttpRequest(TestExecutor.java:437) at org.apache.asterix.test.common.TestExecutor.executeDDL(TestExecutor.java:719) at org.apache.asterix.test.common.TestExecutor.executeTestFile(TestExecutor.java:837) at org.apache.asterix.test.common.TestExecutor.executeTest(TestExecutor.java:1310) at org.apache.asterix.test.runtime.LangExecutionUtil.test(LangExecutionUtil.java:125) at org.apache.asterix.test.runtime.LangExecutionUtil.test(LangExecutionUtil.java:112) at 
org.apache.asterix.test.runtime.AqlExecutionTest.test(AqlExecutionTest.java:63) at sun.reflect.GeneratedMethodAccessor30.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at org.junit.runners.Suite.runChild(Suite.java:128) at org.junit.runners.Suite.runChild(Suite.java:27) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272) at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:236) at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:386) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:323) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:143) Caused by:
[jira] [Updated] (ASTERIXDB-1990) File-not-found in deleting file
[ https://issues.apache.org/jira/browse/ASTERIXDB-1990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu updated ASTERIXDB-1990: - Summary: File-not-found in deleting file (was: Resource not found in deleting file) > File-not-found in deleting file > --- > > Key: ASTERIXDB-1990 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1990 > Project: Apache AsterixDB > Issue Type: Bug > Components: STO - Storage >Reporter: Yingyi Bu >Assignee: Abdullah Alamoudi >Priority: Critical > > https://asterix-jenkins.ics.uci.edu/job/asterix-gerrit-asterix-app/org.apache.asterix$asterix-app/1273/testReport/junit/org.apache.asterix.test.runtime/AqlExecutionTest/test_AqlExecutionTest_339__dml__using_no_merge_policy_/ > {noformat} > Expected results file: > src/test/resources/runtimets/results/dml/using-no-merge-policy/using-no-merge-policy.1.adm > org.apache.hyracks.api.exceptions.HyracksDataException: HYR0019: Cannot > delete the file: > /home/jenkins/workspace/asterix-gerrit-asterix-app/asterixdb/asterix-app/target/io/dir/asterix_nc1/iodevice0/storage/partition_0/test/LineItem_idx_LineItem/2017-07-20-00-31-09-349_2017-07-20-00-31-09-349_b > at > org.apache.hyracks.api.exceptions.HyracksDataException.create(HyracksDataException.java:53) > at org.apache.hyracks.api.util.IoUtil.delete(IoUtil.java:67) > at org.apache.hyracks.api.util.IoUtil.delete(IoUtil.java:48) > at > org.apache.hyracks.storage.common.buffercache.BufferCache.deleteFile(BufferCache.java:1008) > at > org.apache.hyracks.storage.common.buffercache.BufferCache.deleteFile(BufferCache.java:970) > at > org.apache.hyracks.storage.am.common.impls.AbstractTreeIndex.destroy(AbstractTreeIndex.java:138) > at > org.apache.hyracks.storage.am.lsm.btree.impls.LSMBTreeDiskComponent.destroy(LSMBTreeDiskComponent.java:41) > at > org.apache.hyracks.storage.am.lsm.common.impls.LSMHarness.exitComponents(LSMHarness.java:344) > at > 
org.apache.hyracks.storage.am.lsm.common.impls.LSMHarness.merge(LSMHarness.java:556) > at > org.apache.hyracks.storage.am.lsm.common.impls.LSMTreeIndexAccessor.merge(LSMTreeIndexAccessor.java:127) > at > org.apache.hyracks.storage.am.lsm.common.impls.MergeOperation.call(MergeOperation.java:48) > at > org.apache.hyracks.storage.am.lsm.common.impls.MergeOperation.call(MergeOperation.java:30) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.nio.file.NoSuchFileException: > target/io/dir/asterix_nc1/iodevice0/storage/partition_0/test/LineItem_idx_LineItem/2017-07-20-00-31-09-349_2017-07-20-00-31-09-349_b > at > sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) > at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) > at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) > at > sun.nio.fs.UnixFileSystemProvider.implDelete(UnixFileSystemProvider.java:244) > at > sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103) > at java.nio.file.Files.delete(Files.java:1126) > at org.apache.hyracks.api.util.IoUtil.delete(IoUtil.java:64) > ... 
14 more > org.apache.asterix.common.exceptions.AsterixException: FileNotFoundException: > File does not exist: > target/io/dir/asterix_nc1/iodevice0/storage/partition_0/test/LineItem_idx_LineItem/2017-07-20-00-31-09-435_2017-07-20-00-31-09-435_f > at > org.apache.asterix.test.common.ResultExtractor.extract(ResultExtractor.java:80) > at > org.apache.asterix.test.common.TestExecutor.cleanup(TestExecutor.java:1421) > at > org.apache.asterix.test.runtime.LangExecutionUtil.test(LangExecutionUtil.java:130) > at > org.apache.asterix.test.runtime.LangExecutionUtil.test(LangExecutionUtil.java:112) > at > org.apache.asterix.test.runtime.AqlExecutionTest.test(AqlExecutionTest.java:63) > at sun.reflect.GeneratedMethodAccessor30.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at
[jira] [Created] (ASTERIXDB-1990) Resource not found in deleting file
Yingyi Bu created ASTERIXDB-1990: Summary: Resource not found in deleting file Key: ASTERIXDB-1990 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1990 Project: Apache AsterixDB Issue Type: Bug Components: STO - Storage Reporter: Yingyi Bu Assignee: Abdullah Alamoudi Priority: Critical https://asterix-jenkins.ics.uci.edu/job/asterix-gerrit-asterix-app/org.apache.asterix$asterix-app/1273/testReport/junit/org.apache.asterix.test.runtime/AqlExecutionTest/test_AqlExecutionTest_339__dml__using_no_merge_policy_/ {noformat} Expected results file: src/test/resources/runtimets/results/dml/using-no-merge-policy/using-no-merge-policy.1.adm org.apache.hyracks.api.exceptions.HyracksDataException: HYR0019: Cannot delete the file: /home/jenkins/workspace/asterix-gerrit-asterix-app/asterixdb/asterix-app/target/io/dir/asterix_nc1/iodevice0/storage/partition_0/test/LineItem_idx_LineItem/2017-07-20-00-31-09-349_2017-07-20-00-31-09-349_b at org.apache.hyracks.api.exceptions.HyracksDataException.create(HyracksDataException.java:53) at org.apache.hyracks.api.util.IoUtil.delete(IoUtil.java:67) at org.apache.hyracks.api.util.IoUtil.delete(IoUtil.java:48) at org.apache.hyracks.storage.common.buffercache.BufferCache.deleteFile(BufferCache.java:1008) at org.apache.hyracks.storage.common.buffercache.BufferCache.deleteFile(BufferCache.java:970) at org.apache.hyracks.storage.am.common.impls.AbstractTreeIndex.destroy(AbstractTreeIndex.java:138) at org.apache.hyracks.storage.am.lsm.btree.impls.LSMBTreeDiskComponent.destroy(LSMBTreeDiskComponent.java:41) at org.apache.hyracks.storage.am.lsm.common.impls.LSMHarness.exitComponents(LSMHarness.java:344) at org.apache.hyracks.storage.am.lsm.common.impls.LSMHarness.merge(LSMHarness.java:556) at org.apache.hyracks.storage.am.lsm.common.impls.LSMTreeIndexAccessor.merge(LSMTreeIndexAccessor.java:127) at org.apache.hyracks.storage.am.lsm.common.impls.MergeOperation.call(MergeOperation.java:48) at 
org.apache.hyracks.storage.am.lsm.common.impls.MergeOperation.call(MergeOperation.java:30) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: java.nio.file.NoSuchFileException: target/io/dir/asterix_nc1/iodevice0/storage/partition_0/test/LineItem_idx_LineItem/2017-07-20-00-31-09-349_2017-07-20-00-31-09-349_b at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) at sun.nio.fs.UnixFileSystemProvider.implDelete(UnixFileSystemProvider.java:244) at sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103) at java.nio.file.Files.delete(Files.java:1126) at org.apache.hyracks.api.util.IoUtil.delete(IoUtil.java:64) ... 
14 more org.apache.asterix.common.exceptions.AsterixException: FileNotFoundException: File does not exist: target/io/dir/asterix_nc1/iodevice0/storage/partition_0/test/LineItem_idx_LineItem/2017-07-20-00-31-09-435_2017-07-20-00-31-09-435_f at org.apache.asterix.test.common.ResultExtractor.extract(ResultExtractor.java:80) at org.apache.asterix.test.common.TestExecutor.cleanup(TestExecutor.java:1421) at org.apache.asterix.test.runtime.LangExecutionUtil.test(LangExecutionUtil.java:130) at org.apache.asterix.test.runtime.LangExecutionUtil.test(LangExecutionUtil.java:112) at org.apache.asterix.test.runtime.AqlExecutionTest.test(AqlExecutionTest.java:63) at sun.reflect.GeneratedMethodAccessor30.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
[jira] [Commented] (ASTERIXDB-1922) Change the behavior of Upsert on secondary index for component correlation
[ https://issues.apache.org/jira/browse/ASTERIXDB-1922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16093542#comment-16093542 ] Yingyi Bu commented on ASTERIXDB-1922: -- +1 for this proposal. One thing we can do is, instead of checking that the secondary key doesn't change, to additionally check whether the existing tuple comes from the in-memory component currently in use. Of course, the optimization will become less useful when the in-memory component size is set smaller. > Change the behavior of Upsert on secondary index for component correlation > -- > > Key: ASTERIXDB-1922 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1922 > Project: Apache AsterixDB > Issue Type: Improvement >Reporter: Chen Luo >Priority: Minor > > Currently, when we upsert a tuple, the secondary index is modified by > comparing the old secondary key and the new secondary key. If they are > exactly the same, then nothing is done. Otherwise, we delete the (old secondary > key, primary key) pair and insert the (new secondary key, primary key) pair. > However, this behavior is not suitable if we want to make disk components of > the primary index and secondary indexes correlated. The end goal is that each > disk component of the secondary index should correspond to one disk component > of the primary index. With this property, after we get a list of primary keys > from the secondary index, we only need to search one disk component of the > primary index for each primary key, which would greatly reduce the time for > point lookups. > In order for the above optimization to work, the upsert should always delete > the (old secondary key, primary key) pair (unless the primary key is being inserted > for the first time), and insert the (new secondary key, primary key) pair. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
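The maintenance rule proposed in ASTERIXDB-1922 can be sketched in a few lines. This is a minimal illustration only, under stated assumptions: the class and method names are hypothetical, and the secondary index is modeled as a plain map rather than the actual AsterixDB LSM operator.

```java
import java.util.Map;

// Hypothetical sketch of secondary-index maintenance on upsert.
// With component correlation, we always delete the old entry and
// insert the new one, even when the secondary key is unchanged,
// so each disk component of the secondary index stays aligned
// with one disk component of the primary index.
final class SecondaryIndexUpsert {
    static void onUpsert(Map<String, String> secondaryIndex,
                         String primaryKey,
                         String oldSecondaryKey,  // null on first-time insert
                         String newSecondaryKey) {
        if (oldSecondaryKey != null) {
            // Unconditional delete, even if oldSecondaryKey equals
            // newSecondaryKey: the delete lands in the current
            // component, which is what preserves the correlation.
            secondaryIndex.remove(oldSecondaryKey);
        }
        secondaryIndex.put(newSecondaryKey, primaryKey);
    }
}
```

Skipping the delete when the two keys are equal would give today's behavior; keeping it unconditional is the change the issue asks for.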
[jira] [Closed] (ASTERIXDB-1985) Add a rebalance callback for customizing the pre/post action for rebalance
[ https://issues.apache.org/jira/browse/ASTERIXDB-1985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu closed ASTERIXDB-1985. Resolution: Fixed > Add a rebalance callback for customizing the pre/post action for rebalance > -- > > Key: ASTERIXDB-1985 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1985 > Project: Apache AsterixDB > Issue Type: Improvement > Components: CLUS - Cluster management >Reporter: Yingyi Bu >Assignee: Yingyi Bu > > Add a rebalance callback for customizing the pre/post action for populating > data from the source to the target. > For example, populating LSM metadata from the source to the target is > sometimes necessary. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ASTERIXDB-1987) Sporadic FileNotFound issue in tests
Yingyi Bu created ASTERIXDB-1987: Summary: Sporadic FileNotFound issue in tests Key: ASTERIXDB-1987 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1987 Project: Apache AsterixDB Issue Type: Bug Components: STO - Storage Reporter: Yingyi Bu Assignee: Abdullah Alamoudi https://asterix-jenkins.ics.uci.edu/job/asterix-gerrit-verify-asterix-app/org.apache.asterix$asterix-app/1043/testReport/junit/org.apache.asterix.test.runtime/AqlExecutionLessParallelismIT/test_AqlExecutionLessParallelismIT_1385__temp_dataset__temp_primary_plus_ngram_flush_/ {noformat} Regression org.apache.asterix.test.runtime.AqlExecutionLessParallelismIT.test[AqlExecutionLessParallelismIT 1385: temp-dataset: temp_primary_plus_ngram_flush] Failing for the past 1 build (Since Unstable#1043 ) Took 9.2 sec. Error Message FileNotFoundException: File does not exist: target/io/dir/asterix_nc1/iodevice1/storage/partition_1/temp/recovery/Fragile_idx_cfText2Ix/2017-07-13-20-59-02-190_2017-07-13-20-59-02-190_f Stacktrace org.apache.asterix.common.exceptions.AsterixException: FileNotFoundException: File does not exist: target/io/dir/asterix_nc1/iodevice1/storage/partition_1/temp/recovery/Fragile_idx_cfText2Ix/2017-07-13-20-59-02-190_2017-07-13-20-59-02-190_f at org.apache.asterix.test.runtime.AqlExecutionLessParallelismIT.test(AqlExecutionLessParallelismIT.java:70) Standard Error Jul 13, 2017 8:58:53 PM org.apache.hyracks.control.common.config.ConfigManager get WARNING: NC option [nc] storage.lsm.bloomfilter.falsepositiverate being accessed outside of NC-scoped configuration. Jul 13, 2017 8:58:53 PM org.apache.hyracks.control.common.config.ConfigManager get WARNING: NC option [nc] storage.lsm.bloomfilter.falsepositiverate being accessed outside of NC-scoped configuration. Jul 13, 2017 8:58:53 PM org.apache.hyracks.control.common.config.ConfigManager get WARNING: NC option [nc] storage.lsm.bloomfilter.falsepositiverate being accessed outside of NC-scoped configuration. 
Jul 13, 2017 8:58:53 PM org.apache.hyracks.control.nc.Joblet close WARNING: Freeing leaked 393216 bytes Jul 13, 2017 8:58:53 PM org.apache.hyracks.control.nc.Joblet close WARNING: Freeing leaked 393216 bytes Jul 13, 2017 8:58:53 PM org.apache.hyracks.control.common.work.WorkQueue$WorkerThread auditWaitsAndBlocks WARNING: Work CleanupJoblet waited 0 times (~0ms), blocked 1 times (~0ms) Jul 13, 2017 8:59:01 PM org.apache.asterix.common.ioopcallbacks.AbstractLSMIOOperationCallback getComponentId WARNING: Flushing a memory component without setting the LSN Jul 13, 2017 8:59:01 PM org.apache.asterix.common.ioopcallbacks.AbstractLSMIOOperationCallback getComponentId WARNING: Flushing a memory component without setting the LSN Jul 13, 2017 8:59:01 PM org.apache.asterix.common.ioopcallbacks.AbstractLSMIOOperationCallback getComponentId WARNING: Flushing a memory component without setting the LSN Jul 13, 2017 8:59:01 PM org.apache.asterix.common.ioopcallbacks.AbstractLSMIOOperationCallback getComponentId WARNING: Flushing a memory component without setting the LSN Jul 13, 2017 8:59:01 PM org.apache.hyracks.control.nc.Joblet close WARNING: Freeing leaked 1507328 bytes Jul 13, 2017 8:59:01 PM org.apache.hyracks.control.nc.Joblet close WARNING: Freeing leaked 1736704 bytes Jul 13, 2017 8:59:01 PM org.apache.hyracks.control.common.work.WorkQueue$WorkerThread auditWaitsAndBlocks WARNING: Work CleanupJoblet waited 0 times (~0ms), blocked 1 times (~0ms) Jul 13, 2017 8:59:01 PM org.apache.asterix.common.ioopcallbacks.AbstractLSMIOOperationCallback getComponentId WARNING: Flushing a memory component without setting the LSN Jul 13, 2017 8:59:01 PM org.apache.asterix.common.ioopcallbacks.AbstractLSMIOOperationCallback getComponentId WARNING: Flushing a memory component without setting the LSN Jul 13, 2017 8:59:01 PM org.apache.asterix.common.ioopcallbacks.AbstractLSMIOOperationCallback getComponentId WARNING: Flushing a memory component without setting the LSN Jul 13, 2017 8:59:01 PM 
org.apache.asterix.common.ioopcallbacks.AbstractLSMIOOperationCallback getComponentId WARNING: Flushing a memory component without setting the LSN Jul 13, 2017 8:59:01 PM org.apache.asterix.common.ioopcallbacks.AbstractLSMIOOperationCallback getComponentId WARNING: Flushing a memory component without setting the LSN Jul 13, 2017 8:59:01 PM org.apache.asterix.common.ioopcallbacks.AbstractLSMIOOperationCallback getComponentId WARNING: Flushing a memory component without setting the LSN Jul 13, 2017 8:59:02 PM org.apache.asterix.common.ioopcallbacks.AbstractLSMIOOperationCallback getComponentId WARNING: Flushing a memory component without setting the LSN Jul 13, 2017 8:59:02 PM org.apache.asterix.common.ioopcallbacks.AbstractLSMIOOperationCallback getComponentId WARNING: Flushing a memory component without setting the LSN Jul 13,
[jira] [Closed] (ASTERIXDB-1986) Update SQL++ documentation to remove auto plurable examples
[ https://issues.apache.org/jira/browse/ASTERIXDB-1986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu closed ASTERIXDB-1986. Resolution: Fixed > Update SQL++ documentation to remove auto plurable examples > --- > > Key: ASTERIXDB-1986 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1986 > Project: Apache AsterixDB > Issue Type: Improvement > Components: DOC - Documentation >Reporter: Yingyi Bu >Assignee: Yingyi Bu > > Remove auto plurable examples in the SQL++ documentation. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ASTERIXDB-1985) Add a rebalance callback for customizing the pre/post action for rebalance
[ https://issues.apache.org/jira/browse/ASTERIXDB-1985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu updated ASTERIXDB-1985: - Component/s: CLUS - Cluster management > Add a rebalance callback for customizing the pre/post action for rebalance > -- > > Key: ASTERIXDB-1985 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1985 > Project: Apache AsterixDB > Issue Type: Improvement > Components: CLUS - Cluster management >Reporter: Yingyi Bu >Assignee: Yingyi Bu > > Add a rebalance callback for customizing the pre/post action for populating > data from the source to the target. > For example, populating LSM metadata from the source to the target is > sometimes necessary. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ASTERIXDB-1986) Update SQL++ documentation to remove auto plurable examples
Yingyi Bu created ASTERIXDB-1986: Summary: Update SQL++ documentation to remove auto plurable examples Key: ASTERIXDB-1986 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1986 Project: Apache AsterixDB Issue Type: Improvement Components: DOC - Documentation Reporter: Yingyi Bu Assignee: Yingyi Bu Remove auto plurable examples in the SQL++ documentation. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ASTERIXDB-1985) Add a rebalance callback for customizing the pre/post action for rebalance
Yingyi Bu created ASTERIXDB-1985: Summary: Add a rebalance callback for customizing the pre/post action for rebalance Key: ASTERIXDB-1985 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1985 Project: Apache AsterixDB Issue Type: Improvement Reporter: Yingyi Bu Assignee: Yingyi Bu Add a rebalance callback for customizing the pre/post action for populating data from the source to the target. For example, populating LSM metadata from the source to the target is sometimes necessary. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Closed] (ASTERIXDB-1943) Make rebalance operation idempotent
[ https://issues.apache.org/jira/browse/ASTERIXDB-1943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu closed ASTERIXDB-1943. Resolution: Fixed > Make rebalance operation idempotent > --- > > Key: ASTERIXDB-1943 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1943 > Project: Apache AsterixDB > Issue Type: Improvement > Components: *DB - AsterixDB >Reporter: Yingyi Bu >Assignee: Till > > Add a rebalance cancellation API and make the rebalance operation idempotent. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Closed] (ASTERIXDB-1943) Make rebalance operation idempotent
[ https://issues.apache.org/jira/browse/ASTERIXDB-1943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu closed ASTERIXDB-1943. > Make rebalance operation idempotent > --- > > Key: ASTERIXDB-1943 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1943 > Project: Apache AsterixDB > Issue Type: Improvement > Components: *DB - AsterixDB >Reporter: Yingyi Bu >Assignee: Till > > Add a rebalance cancellation API and make the rebalance operation idempotent. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Reopened] (ASTERIXDB-1943) Make rebalance operation idempotent
[ https://issues.apache.org/jira/browse/ASTERIXDB-1943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu reopened ASTERIXDB-1943: -- > Make rebalance operation idempotent > --- > > Key: ASTERIXDB-1943 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1943 > Project: Apache AsterixDB > Issue Type: Improvement > Components: *DB - AsterixDB >Reporter: Yingyi Bu >Assignee: Till > > Add a rebalance cancellation API and make the rebalance operation idempotent. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Resolved] (ASTERIXDB-1943) Make rebalance operation idempotent
[ https://issues.apache.org/jira/browse/ASTERIXDB-1943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu resolved ASTERIXDB-1943. -- Resolution: Fixed > Make rebalance operation idempotent > --- > > Key: ASTERIXDB-1943 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1943 > Project: Apache AsterixDB > Issue Type: Improvement > Components: *DB - AsterixDB >Reporter: Yingyi Bu >Assignee: Till > > Add a rebalance cancellation API and make the rebalance operation idempotent. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Closed] (ASTERIXDB-1947) Revisit thread safety in FileMapManager
[ https://issues.apache.org/jira/browse/ASTERIXDB-1947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu closed ASTERIXDB-1947. Resolution: Fixed > Revisit thread safety in FileMapManager > --- > > Key: ASTERIXDB-1947 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1947 > Project: Apache AsterixDB > Issue Type: Improvement > Components: STO - Storage >Reporter: Yingyi Bu >Assignee: Abdullah Alamoudi > > Synchronizations on FileMapManager (e.g., synchronized register/unregister, > and ConcurrentHashMap for id2nameMap and name2IdMap) were added in change > https://asterix-gerrit.ics.uci.edu/#/c/1840/. > However, this class seems 50%-baked -- there are some synchronizations, but > there could still be inconsistency between id2nameMap and name2IdMap. For example, > register/unregister > are not atomic -- a caller can lookupFileId(...) while the file hasn't been fully > registered or unregistered, and possibly open a non-existent file. > I think we probably don't want thread-safety in FileMapManager at all, since > the call sites have to orchestrate multiple steps anyway and make sure the > bigger block is synchronized: > E.g. BufferCache.openFile(...): > {noformat} > synchronized (fileInfoMap) { > if (fileMapManager.isMapped(fileRef)) { > fileId = fileMapManager.lookupFileId(fileRef); > } else { > fileId = fileMapManager.registerFile(fileRef); > } > openFile(fileId); > } > return fileId; > {noformat} > Even if FileMapManager is thread-safe, it doesn't help: you still have to > synchronize on the big block to ensure atomicity of the sequence of > isMapped/lookup/register. Most call sites have a similar pattern. Thoughts? -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Closed] (ASTERIXDB-1945) Fix BufferCache API/Lifecycle
[ https://issues.apache.org/jira/browse/ASTERIXDB-1945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu closed ASTERIXDB-1945. Resolution: Fixed > Fix BufferCache API/Lifecycle > - > > Key: ASTERIXDB-1945 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1945 > Project: Apache AsterixDB > Issue Type: Bug > Components: STO - Storage >Reporter: Abdullah Alamoudi >Assignee: Abdullah Alamoudi > > Currently, BufferCache has some surprising behaviors: > 1. createFile doesn't create a file; it only creates a name-id mapping in > memory. > 2. To get the id of a file, the caller must have access to the map associated > with the buffer cache. > 3. openFile: if createFile was called before openFile, then it creates the > file; if the file already exists, it deletes it and creates a new file. > This should be fixed and given clear behavior that matches the expected API. > All usages of the buffer cache must be fixed as well. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
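A hedged sketch of the kind of unambiguous lifecycle ASTERIXDB-1945 asks for. The interface and class names below are hypothetical illustrations, not the actual fix that landed in Hyracks: createFile actually fails if the file exists, openFile never silently deletes, and callers get ids without reaching into internal maps.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical lifecycle API with unsurprising semantics.
interface FileLifecycle {
    int createFile(String name); // fails if the file already exists
    int openFile(String name);   // fails if the file is absent
}

final class SimpleFileLifecycle implements FileLifecycle {
    private final Map<String, Integer> files = new HashMap<>();
    private int nextId = 0;

    @Override
    public synchronized int createFile(String name) {
        if (files.containsKey(name)) {
            throw new IllegalStateException("already exists: " + name);
        }
        int id = nextId++;
        files.put(name, id);
        return id;
    }

    @Override
    public synchronized int openFile(String name) {
        Integer id = files.get(name);
        if (id == null) {
            throw new IllegalStateException("does not exist: " + name);
        }
        return id; // never deletes or recreates an existing file
    }
}
```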
[jira] [Updated] (ASTERIXDB-1948) Potential file leaks if crash happens during rebalance
[ https://issues.apache.org/jira/browse/ASTERIXDB-1948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu updated ASTERIXDB-1948: - Description: Refer to the rebalance design doc: https://cwiki.apache.org/confluence/display/ASTERIXDB/Rebalance+API+and+Internal+Implementation In the event of failures, there could be: -- leaked source files (from metadata transaction a) which will be reclaimed in the next rebalance operation, -- or leaked target files (from metadata transaction b) which will not be reclaimed, -- or leaked node group name (from metadata transaction b) which doesn't prevent the success of the next rebalance operation. was: In the event of failures, there could be: -- leaked source files (from metadata transaction a) which will be reclaimed in the next rebalance operation, -- or leaked target files (from metadata transaction b) which will not be reclaimed, -- or leaked node group name (from metadata transaction b) which doesn't prevent the success of the next rebalance operation. > Potential file leaks if crash happens during rebalance > -- > > Key: ASTERIXDB-1948 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1948 > Project: Apache AsterixDB > Issue Type: Bug > Components: CLUS - Cluster management >Reporter: Yingyi Bu >Assignee: Yingyi Bu > > Refer to the rebalance design doc: > https://cwiki.apache.org/confluence/display/ASTERIXDB/Rebalance+API+and+Internal+Implementation > In the event of failures, there could be: > -- leaked source files (from metadata transaction a) which will be reclaimed > in the next rebalance operation, > -- or leaked target files (from metadata transaction b) which will not be > reclaimed, > -- or leaked node group name (from metadata transaction b) which doesn't > prevent the success of the next rebalance operation. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ASTERIXDB-1948) Potential file leaks if crash happens during rebalance
Yingyi Bu created ASTERIXDB-1948: Summary: Potential file leaks if crash happens during rebalance Key: ASTERIXDB-1948 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1948 Project: Apache AsterixDB Issue Type: Bug Components: CLUS - Cluster management Reporter: Yingyi Bu Assignee: Yingyi Bu In the event of failures, there could be: -- leaked source files (from metadata transaction a) which will be reclaimed in the next rebalance operation, -- or leaked target files (from metadata transaction b) which will not be reclaimed, -- or leaked node group name (from metadata transaction b) which doesn't prevent the success of the next rebalance operation. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (ASTERIXDB-1948) Potential file leaks if crash happens during rebalance
[ https://issues.apache.org/jira/browse/ASTERIXDB-1948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu updated ASTERIXDB-1948: - Description: Refer to the rebalance design doc: https://cwiki.apache.org/confluence/display/ASTERIXDB/Rebalance+API+and+Internal+Implementation In the event of failures, there could be: -- leaked source files (from metadata transaction a) which will be reclaimed in the next rebalance operation, -- or leaked target files (from metadata transaction b) which will not be reclaimed, -- or leaked node group name (from metadata transaction a) which doesn't prevent the success of the next rebalance operation. was: Refer to the rebalance design doc: https://cwiki.apache.org/confluence/display/ASTERIXDB/Rebalance+API+and+Internal+Implementation In the event of failures, there could be: -- leaked source files (from metadata transaction a) which will be reclaimed in the next rebalance operation, -- or leaked target files (from metadata transaction b) which will not be reclaimed, -- or leaked node group name (from metadata transaction b) which doesn't prevent the success of the next rebalance operation. > Potential file leaks if crash happens during rebalance > -- > > Key: ASTERIXDB-1948 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1948 > Project: Apache AsterixDB > Issue Type: Bug > Components: CLUS - Cluster management >Reporter: Yingyi Bu >Assignee: Yingyi Bu > > Refer to the rebalance design doc: > https://cwiki.apache.org/confluence/display/ASTERIXDB/Rebalance+API+and+Internal+Implementation > In the event of failures, there could be: > -- leaked source files (from metadata transaction a) which will be reclaimed > in the next rebalance operation, > -- or leaked target files (from metadata transaction b) which will not be > reclaimed, > -- or leaked node group name (from metadata transaction a) which doesn't > prevent the success of the next rebalance operation. 
-- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (ASTERIXDB-1947) Revisit thread safety in FileMapManager
Yingyi Bu created ASTERIXDB-1947: Summary: Revisit thread safety in FileMapManager Key: ASTERIXDB-1947 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1947 Project: Apache AsterixDB Issue Type: Improvement Components: STO - Storage Reporter: Yingyi Bu Assignee: Abdullah Alamoudi Synchronizations on FileMapManager (e.g., synchronized register/unregister, and ConcurrentHashMap for id2nameMap and name2IdMap) were added in change https://asterix-gerrit.ics.uci.edu/#/c/1840/. However, this class seems 50%-baked -- there are some synchronizations, but there could still be inconsistency between id2nameMap and name2IdMap. For example, register/unregister are not atomic -- a caller can lookupFileId(...) while the file hasn't been fully registered or unregistered, and possibly open a non-existent file. I think we probably don't want thread-safety in FileMapManager at all, since the call sites have to orchestrate multiple steps anyway and make sure the bigger block is synchronized: E.g. BufferCache.openFile(...): {noformat} synchronized (fileInfoMap) { if (fileMapManager.isMapped(fileRef)) { fileId = fileMapManager.lookupFileId(fileRef); } else { fileId = fileMapManager.registerFile(fileRef); } openFile(fileId); } return fileId; {noformat} Even if FileMapManager is thread-safe, it doesn't help: you still have to synchronize on the big block to ensure atomicity of the sequence of isMapped/lookup/register. Most call sites have a similar pattern. Thoughts? -- This message was sent by Atlassian JIRA (v6.4.14#64029)
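The check-then-act pattern discussed in ASTERIXDB-1947 can be sketched as follows. These are hypothetical simplified names, not the real Hyracks classes: even if every map method were individually synchronized, the isMapped/lookup/register sequence would still race between the check and the register, so the whole sequence is guarded by one external monitor, mirroring what BufferCache.openFile does with fileInfoMap.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for FileMapManager: internal per-method
// locking alone cannot make a multi-step caller sequence atomic.
final class FileMap {
    private final Map<String, Integer> nameToId = new HashMap<>();
    private int nextId = 0;

    boolean isMapped(String name) { return nameToId.containsKey(name); }
    int lookupFileId(String name) { return nameToId.get(name); }
    int registerFile(String name) {
        int id = nextId++;
        nameToId.put(name, id);
        return id;
    }
}

// Simplified stand-in for BufferCache: the external lock makes the
// three-step check/lookup/register sequence atomic as a unit.
final class Cache {
    private final FileMap fileMap = new FileMap();
    private final Object lock = new Object();

    int openFile(String name) {
        synchronized (lock) {
            return fileMap.isMapped(name)
                    ? fileMap.lookupFileId(name)
                    : fileMap.registerFile(name);
        }
    }
}
```

Without the outer synchronized block, two threads opening the same unmapped file could both see isMapped(...) return false and register it twice, which is exactly the inconsistency the issue describes.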
[jira] [Created] (ASTERIXDB-1943) Make rebalance operation idempotent
Yingyi Bu created ASTERIXDB-1943: Summary: Make rebalance operation idempotent Key: ASTERIXDB-1943 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1943 Project: Apache AsterixDB Issue Type: Improvement Components: AsterixDB Reporter: Yingyi Bu Assignee: Yingyi Bu Add a rebalance cancellation API and make the rebalance operation idempotent. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
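What idempotence buys here can be shown with a rough sketch -- the class and method names are made up for illustration, not the actual rebalance code: re-running the operation for a dataset that already completed is a no-op, so a retry after a crash converges to the same final state instead of leaking a second set of target files.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical sketch of an idempotent rebalance entry point: completion is
// recorded per dataset, and a repeated invocation for a DONE dataset is skipped.
class RebalanceCoordinator {
    enum State { REBALANCING, DONE }

    private final ConcurrentMap<String, State> progress = new ConcurrentHashMap<>();

    // Returns true if this call performed the rebalance, false if it was a no-op.
    boolean rebalance(String dataset) {
        State prev = progress.putIfAbsent(dataset, State.REBALANCING);
        if (prev == State.DONE) {
            return false; // already completed; nothing to redo
        }
        // ... copy files to target nodes, then atomically switch metadata ...
        // A crash before this point leaves State.REBALANCING, so a retry redoes
        // the work rather than observing a half-finished state.
        progress.put(dataset, State.DONE);
        return true;
    }
}
```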
[jira] [Created] (ASTERIXDB-1942) RecoveryManager undo fails when upsert gets interrupted
Yingyi Bu created ASTERIXDB-1942: Summary: RecoveryManager undo fails when upsert gets interrupted Key: ASTERIXDB-1942 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1942 Project: Apache AsterixDB Issue Type: Bug Reporter: Yingyi Bu Assignee: Abdullah Alamoudi {noformat} Exception in thread "Worker:asterix_nc1" java.lang.Error: org.apache.asterix.common.exceptions.ACIDException: Could not complete rollback! System is in an inconsistent state at org.apache.asterix.runtime.job.listener.JobEventListenerFactory$1.jobletFinish(JobEventListenerFactory.java:61) at org.apache.hyracks.control.nc.Joblet.performCleanup(Joblet.java:316) at org.apache.hyracks.control.nc.Joblet.cleanup(Joblet.java:308) at org.apache.hyracks.control.nc.work.CleanupJobletWork.run(CleanupJobletWork.java:74) at org.apache.hyracks.control.common.work.WorkQueue$WorkerThread.run(WorkQueue.java:127) Caused by: org.apache.asterix.common.exceptions.ACIDException: Could not complete rollback! System is in an inconsistent state at org.apache.asterix.transaction.management.service.transaction.TransactionManager.abortTransaction(TransactionManager.java:73) at org.apache.asterix.transaction.management.service.transaction.TransactionManager.completedTransaction(TransactionManager.java:132) at org.apache.asterix.runtime.job.listener.JobEventListenerFactory$1.jobletFinish(JobEventListenerFactory.java:58) ... 4 more Caused by: java.lang.IllegalStateException: Failed to undo at org.apache.asterix.app.nc.RecoveryManager.undo(RecoveryManager.java:702) at org.apache.asterix.app.nc.RecoveryManager.rollbackTransaction(RecoveryManager.java:650) at org.apache.asterix.transaction.management.service.transaction.TransactionManager.abortTransaction(TransactionManager.java:65) ... 
6 more Caused by: org.apache.hyracks.api.exceptions.HyracksDataException: HYR0037: Index key not found at org.apache.hyracks.api.exceptions.HyracksDataException.create(HyracksDataException.java:49) at org.apache.hyracks.storage.am.btree.frames.BTreeNSMLeafFrame.findDeleteTupleIndex(BTreeNSMLeafFrame.java:139) at org.apache.hyracks.storage.am.btree.impls.BTree.deleteLeaf(BTree.java:530) at org.apache.hyracks.storage.am.btree.impls.BTree.performOp(BTree.java:700) at org.apache.hyracks.storage.am.btree.impls.BTree.access$700(BTree.java:68) at org.apache.hyracks.storage.am.btree.impls.BTree$BTreeAccessor.insertUpdateOrDelete(BTree.java:949) at org.apache.hyracks.storage.am.btree.impls.BTree$BTreeAccessor.delete(BTree.java:933) at org.apache.hyracks.storage.am.btree.impls.BTree$BTreeAccessor.delete(BTree.java:859) at org.apache.hyracks.storage.am.lsm.btree.impls.LSMBTree.modify(LSMBTree.java:217) at org.apache.hyracks.storage.am.lsm.common.impls.LSMHarness.modify(LSMHarness.java:418) at org.apache.hyracks.storage.am.lsm.common.impls.LSMHarness.forceModify(LSMHarness.java:358) at org.apache.hyracks.storage.am.lsm.common.impls.LSMTreeIndexAccessor.forcePhysicalDelete(LSMTreeIndexAccessor.java:169) at org.apache.asterix.app.nc.RecoveryManager.undo(RecoveryManager.java:694) ... 8 more {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Closed] (ASTERIXDB-1938) AppendOnlyLinkedMetadataPageManager.put() is called after close(). HYR0012 is thrown
[ https://issues.apache.org/jira/browse/ASTERIXDB-1938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu closed ASTERIXDB-1938. Resolution: Fixed > AppendOnlyLinkedMetadataPageManager.put() is called after close(). HYR0012 is > thrown > > > Key: ASTERIXDB-1938 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1938 > Project: Apache AsterixDB > Issue Type: Bug > Components: Storage >Reporter: Dmitry Lychagin >Assignee: Chen Luo > > We're getting this exception: > rg.apache.hyracks.api.exceptions.HyracksDataException: HYR0012: Invalid > attempt to write to a flushed append only metadata page > at > org.apache.hyracks.api.exceptions.HyracksDataException.create(HyracksDataException.java:49) > at > org.apache.hyracks.storage.am.common.freepage.AppendOnlyLinkedMetadataPageManager.put(AppendOnlyLinkedMetadataPageManager.java:311) > at > org.apache.hyracks.storage.am.lsm.common.impls.DiskComponentMetadata.put(DiskComponentMetadata.java:38) > at > org.apache.asterix.common.ioopcallbacks.AbstractLSMIOOperationCallback.putLSNIntoMetadata(AbstractLSMIOOperationCallback.java:105) > at > org.apache.asterix.common.ioopcallbacks.AbstractLSMIOOperationCallback.afterOperation(AbstractLSMIOOperationCallback.java:188) > at > org.apache.hyracks.storage.am.lsm.common.impls.LSMHarness.flush(LSMHarness.java:503) > at > org.apache.hyracks.storage.am.lsm.common.impls.LSMTreeIndexAccessor.flush(LSMTreeIndexAccessor.java:121) > at > org.apache.hyracks.storage.am.lsm.common.impls.FlushOperation.call(FlushOperation.java:42) > at > org.apache.hyracks.storage.am.lsm.common.impls.FlushOperation.call(FlushOperation.java:30) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > However at that time the AppendOnlyLinkedMetadataPageManager is 
already > closed. > The stack trace for the close() method invocation is the following: > at > org.apache.hyracks.storage.am.common.freepage.AppendOnlyLinkedMetadataPageManager.close(AppendOnlyLinkedMetadataPageManager.java:216) > at > org.apache.hyracks.storage.am.common.impls.AbstractTreeIndex.deactivate(AbstractTreeIndex.java:163) > at > org.apache.hyracks.storage.am.lsm.common.impls.AbstractLSMDiskComponentBulkLoader.cleanupArtifacts(AbstractLSMDiskComponentBulkLoader.java:170) > at > org.apache.hyracks.storage.am.lsm.common.impls.AbstractLSMDiskComponentBulkLoader.end(AbstractLSMDiskComponentBulkLoader.java:158) > at > org.apache.hyracks.storage.am.lsm.btree.impls.LSMBTree.flush(LSMBTree.java:357) > at > org.apache.hyracks.storage.am.lsm.common.impls.LSMHarness.flush(LSMHarness.java:502) > at > org.apache.hyracks.storage.am.lsm.common.impls.LSMTreeIndexAccessor.flush(LSMTreeIndexAccessor.java:121) > at > org.apache.hyracks.storage.am.lsm.common.impls.FlushOperation.call(FlushOperation.java:42) > at > org.apache.hyracks.storage.am.lsm.common.impls.FlushOperation.call(FlushOperation.java:30) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > So AppendOnlyLinkedMetadataPageManager.close() is called from LSMHarness:502 > (flush), then the next line (LSMHarness:503) calls the callback which tries > to write data into the closed page manager. At that point its > 'confiscatedPage' is 'null' and therefore HYR0012 is thrown -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (ASTERIXDB-1938) AppendOnlyLinkedMetadataPageManager.put() is called after close(). HYR0012 is thrown
[ https://issues.apache.org/jira/browse/ASTERIXDB-1938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16045253#comment-16045253 ] Yingyi Bu commented on ASTERIXDB-1938: -- [~luochen01], in CB, we have a case that the memory component to flush doesn't have tuples but can have some metadata. > AppendOnlyLinkedMetadataPageManager.put() is called after close(). HYR0012 is > thrown > > > Key: ASTERIXDB-1938 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1938 > Project: Apache AsterixDB > Issue Type: Bug > Components: Storage >Reporter: Dmitry Lychagin >Assignee: Chen Luo > > We're getting this exception: > rg.apache.hyracks.api.exceptions.HyracksDataException: HYR0012: Invalid > attempt to write to a flushed append only metadata page > at > org.apache.hyracks.api.exceptions.HyracksDataException.create(HyracksDataException.java:49) > at > org.apache.hyracks.storage.am.common.freepage.AppendOnlyLinkedMetadataPageManager.put(AppendOnlyLinkedMetadataPageManager.java:311) > at > org.apache.hyracks.storage.am.lsm.common.impls.DiskComponentMetadata.put(DiskComponentMetadata.java:38) > at > org.apache.asterix.common.ioopcallbacks.AbstractLSMIOOperationCallback.putLSNIntoMetadata(AbstractLSMIOOperationCallback.java:105) > at > org.apache.asterix.common.ioopcallbacks.AbstractLSMIOOperationCallback.afterOperation(AbstractLSMIOOperationCallback.java:188) > at > org.apache.hyracks.storage.am.lsm.common.impls.LSMHarness.flush(LSMHarness.java:503) > at > org.apache.hyracks.storage.am.lsm.common.impls.LSMTreeIndexAccessor.flush(LSMTreeIndexAccessor.java:121) > at > org.apache.hyracks.storage.am.lsm.common.impls.FlushOperation.call(FlushOperation.java:42) > at > org.apache.hyracks.storage.am.lsm.common.impls.FlushOperation.call(FlushOperation.java:30) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > However at that time the AppendOnlyLinkedMetadataPageManager is already > closed. > The stack trace for the close() method invocation is the following: > at > org.apache.hyracks.storage.am.common.freepage.AppendOnlyLinkedMetadataPageManager.close(AppendOnlyLinkedMetadataPageManager.java:216) > at > org.apache.hyracks.storage.am.common.impls.AbstractTreeIndex.deactivate(AbstractTreeIndex.java:163) > at > org.apache.hyracks.storage.am.lsm.common.impls.AbstractLSMDiskComponentBulkLoader.cleanupArtifacts(AbstractLSMDiskComponentBulkLoader.java:170) > at > org.apache.hyracks.storage.am.lsm.common.impls.AbstractLSMDiskComponentBulkLoader.end(AbstractLSMDiskComponentBulkLoader.java:158) > at > org.apache.hyracks.storage.am.lsm.btree.impls.LSMBTree.flush(LSMBTree.java:357) > at > org.apache.hyracks.storage.am.lsm.common.impls.LSMHarness.flush(LSMHarness.java:502) > at > org.apache.hyracks.storage.am.lsm.common.impls.LSMTreeIndexAccessor.flush(LSMTreeIndexAccessor.java:121) > at > org.apache.hyracks.storage.am.lsm.common.impls.FlushOperation.call(FlushOperation.java:42) > at > org.apache.hyracks.storage.am.lsm.common.impls.FlushOperation.call(FlushOperation.java:30) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > So AppendOnlyLinkedMetadataPageManager.close() is called from LSMHarness:502 > (flush), then the next line (LSMHarness:503) calls the callback which tries > to write data into the closed page manager. At that point its > 'confiscatedPage' is 'null' and therefore HYR0012 is thrown -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (ASTERIXDB-1931) Unfriendly error message when missing ending semicolon
[ https://issues.apache.org/jira/browse/ASTERIXDB-1931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037556#comment-16037556 ] Yingyi Bu commented on ASTERIXDB-1931: -- This issue should be fixed once we have the new web UI (under development?). We will not see that error message if we let the web UI be based on the query service REST API. In fact, the query should work without exceptions if it goes through the query service, because the query service always appends semi-colons. [~tillw] ? > Unfriendly error message when missing ending semicolon > -- > > Key: ASTERIXDB-1931 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1931 > Project: Apache AsterixDB > Issue Type: Bug > Components: AsterixDB, Translator - SQL++ >Reporter: Michael J. Carey >Assignee: Yingyi Bu > > Try: > USE TinySocial; > 1+1 > You'll get: > Syntax error: In line 2 >>1+1<< Encountered at column 3. > [CompilationException] > It would be nice to get some indication that ; was missing or EOF was > encountered unexpectedly! -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Assigned] (ASTERIXDB-1930) Feed shouldn't be started when a secondary index is being created
[ https://issues.apache.org/jira/browse/ASTERIXDB-1930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu reassigned ASTERIXDB-1930: Assignee: Abdullah Alamoudi > Feed shouldn't be started when a secondary index is being created > - > > Key: ASTERIXDB-1930 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1930 > Project: Apache AsterixDB > Issue Type: Bug > Components: Feeds, Storage >Reporter: Chen Luo >Assignee: Abdullah Alamoudi > > When a secondary index is being created, new tuples shouldn't be inserted > into that dataset. Normal insert/delete statements are handled properly by > blocking the query. However, currently the user is allowed to start feeds on > this dataset, which should be blocked as well until the index creation job > finishes. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Closed] (ASTERIXDB-1929) Factor out isAntimatter from TupleWriter and TupleWriterFactory
[ https://issues.apache.org/jira/browse/ASTERIXDB-1929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu closed ASTERIXDB-1929. Resolution: Fixed Fixed in the latest master. > Factor out isAntimatter from TupleWriter and TupleWriterFactory > --- > > Key: ASTERIXDB-1929 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1929 > Project: Apache AsterixDB > Issue Type: Improvement > Components: Storage >Reporter: Yingyi Bu >Assignee: Chen Luo > > isAntimatter is a property of the tuple that we're writing, but not a > property of the Writer. Hence, we probably should remove setAntimatter(...) > from the TupleWriter. > Currently, for each tuple write operation, we need to call setAntimatter() in > both TupleWriterFactory and TupleWriter. This is painful and potentially > buggy -- an XxxFactory typically is one per NC. > Instead, we can add a parameter to the writeTuple method: > public int writeTuple(ITupleReference tuple, byte[] targetBuf, int targetOff) > -> > public int writeTuple(ITupleReference tuple, byte[] targetBuf, int targetOff, > boolean isDelete) -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (ASTERIXDB-1929) Factor out isAntimatter from TupleWriter and TupleWriterFactory
[ https://issues.apache.org/jira/browse/ASTERIXDB-1929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035813#comment-16035813 ] Yingyi Bu commented on ASTERIXDB-1929: -- OK, I see the point. The current implementation makes some sense to me now. Then, my only remaining concern is the usage of the factory -- can we remove setAntimatter from the LSMTreeRefrencingTupleWriterFactory and, instead, only set/unset antimatter in the tuple writer? {noformat} public class LSMTreeRefrencingTupleWriterFactory implements ITreeIndexTupleWriterFactory { private static final long serialVersionUID = 1L; private final ITreeIndexTupleWriterFactory factory; private transient ILSMTreeTupleWriter createdTupleWriter; public LSMTreeRefrencingTupleWriterFactory(ITreeIndexTupleWriterFactory factory) { this.factory = factory; } @Override public ITreeIndexTupleWriter createTupleWriter() { createdTupleWriter = (ILSMTreeTupleWriter) factory.createTupleWriter(); return createdTupleWriter; } public void setAntimatter(boolean isAntimatter) { if (this.createdTupleWriter != null) { this.createdTupleWriter.setAntimatter(isAntimatter); } } } {noformat} > Factor out isAntimatter from TupleWriter and TupleWriterFactory > --- > > Key: ASTERIXDB-1929 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1929 > Project: Apache AsterixDB > Issue Type: Improvement > Components: Storage >Reporter: Yingyi Bu >Assignee: Chen Luo > > isAntimatter is a property of the tuple that we're writing, but not a > property of the Writer. Hence, we probably should remove setAntimatter(...) > from the TupleWriter. > Currently, for each tuple write operation, we need to call setAntimatter() in > both TupleWriterFactory and TupleWriter. This is painful and potentially > buggy -- a XxxFactory typically is one per NC. 
> Instead, we can add a parameter to write tuple method: > public int writeTuple(ITupleReference tuple, byte[] targetBuf, int targetOff) > -> > public int writeTuple(ITupleReference tuple, byte[] targetBuf, int targetOff, > boolean isDelete) -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (ASTERIXDB-1929) Factor out isAntimatter from TupleWriter and TupleWriterFactory
[ https://issues.apache.org/jira/browse/ASTERIXDB-1929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035793#comment-16035793 ] Yingyi Bu commented on ASTERIXDB-1929: -- Alternatively, we can have the method with isDelete flag only appear in ILSMTreeTupleWriter? > Factor out isAntimatter from TupleWriter and TupleWriterFactory > --- > > Key: ASTERIXDB-1929 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1929 > Project: Apache AsterixDB > Issue Type: Improvement > Components: Storage >Reporter: Yingyi Bu >Assignee: Chen Luo > > isAntimatter is a property of the tuple that we're writing, but not a > property of the Writer. Hence, we probably should remove setAntimatter(...) > from the TupleWriter. > Currently, for each tuple write operation, we need to call setAntimatter() in > both TupleWriterFactory and TupleWriter. This is painful and potentially > buggy -- a XxxFactory typically is one per NC. > Instead, we can add a parameter to write tuple method: > public int writeTuple(ITupleReference tuple, byte[] targetBuf, int targetOff) > -> > public int writeTuple(ITupleReference tuple, byte[] targetBuf, int targetOff, > boolean isDelete) -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (ASTERIXDB-1929) Factor out isAntimatter from TupleWriter and TupleWriterFactory
[ https://issues.apache.org/jira/browse/ASTERIXDB-1929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035791#comment-16035791 ] Yingyi Bu commented on ASTERIXDB-1929: -- Can parameter isDelete always be false at the caller from in-place B-Tree? > Factor out isAntimatter from TupleWriter and TupleWriterFactory > --- > > Key: ASTERIXDB-1929 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1929 > Project: Apache AsterixDB > Issue Type: Improvement > Components: Storage >Reporter: Yingyi Bu >Assignee: Chen Luo > > isAntimatter is a property of the tuple that we're writing, but not a > property of the Writer. Hence, we probably should remove setAntimatter(...) > from the TupleWriter. > Currently, for each tuple write operation, we need to call setAntimatter() in > both TupleWriterFactory and TupleWriter. This is painful and potentially > buggy -- a XxxFactory typically is one per NC. > Instead, we can add a parameter to write tuple method: > public int writeTuple(ITupleReference tuple, byte[] targetBuf, int targetOff) > -> > public int writeTuple(ITupleReference tuple, byte[] targetBuf, int targetOff, > boolean isDelete) -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Created] (ASTERIXDB-1929) Factor out isAntimatter from TupleWriter and TupleWriterFactory
Yingyi Bu created ASTERIXDB-1929: Summary: Factor out isAntimatter from TupleWriter and TupleWriterFactory Key: ASTERIXDB-1929 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1929 Project: Apache AsterixDB Issue Type: Improvement Components: Storage Reporter: Yingyi Bu Assignee: Chen Luo isAntimatter is a property of the tuple that we're writing, but not a property of the Writer. Hence, we probably should remove setAntimatter(...) from the TupleWriter. Currently, for each tuple write operation, we need to call setAntimatter() in both TupleWriterFactory and TupleWriter. This is painful and potentially buggy -- an XxxFactory typically is one per NC. Instead, we can add a parameter to the writeTuple method: public int writeTuple(ITupleReference tuple, byte[] targetBuf, int targetOff) -> public int writeTuple(ITupleReference tuple, byte[] targetBuf, int targetOff, boolean isDelete) -- This message was sent by Atlassian JIRA (v6.3.15#6346)
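The proposed signature change can be illustrated with a toy writer. The names and the one-flag-byte encoding below are made up for the sketch -- this is not the actual Hyracks tuple format -- the point is only that the antimatter flag travels with each call instead of living as mutable state on a writer or factory shared per NC.

```java
// Hypothetical sketch of the proposed API: the flag is a per-call parameter,
// so a single stateless writer instance can serve concurrent callers safely.
interface TupleWriterSketch {
    int writeTuple(byte[] tuple, byte[] targetBuf, int targetOff, boolean isAntimatter);
}

class FlagByteTupleWriter implements TupleWriterSketch {
    // Toy encoding: one leading flag byte (1 = antimatter) followed by the
    // tuple bytes; returns the number of bytes written.
    @Override
    public int writeTuple(byte[] tuple, byte[] targetBuf, int targetOff, boolean isAntimatter) {
        targetBuf[targetOff] = (byte) (isAntimatter ? 1 : 0);
        System.arraycopy(tuple, 0, targetBuf, targetOff + 1, tuple.length);
        return tuple.length + 1;
    }
}
```

With this shape, in-place B-Tree call sites would simply pass false, and the setAntimatter plumbing through both factory and writer disappears.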
[jira] [Created] (ASTERIXDB-1925) Sporadic test failure in test change-feed-with-meta-pk-in-meta-index-after-ingest
Yingyi Bu created ASTERIXDB-1925: Summary: Sporadic test failure in test change-feed-with-meta-pk-in-meta-index-after-ingest Key: ASTERIXDB-1925 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1925 Project: Apache AsterixDB Issue Type: Bug Reporter: Yingyi Bu Assignee: Abdullah Alamoudi https://asterix-jenkins.ics.uci.edu/job/asterix-gerrit-verify-asterix-app/org.apache.asterix$asterix-app/597/testReport/junit/org.apache.asterix.test.runtime/AqlExecutionFullParallelismIT/test_AqlExecutionFullParallelismIT_23__feeds__change_feed_with_meta_pk_in_meta_index_after_ingest_/ {noformat} java.lang.Exception: Test "src/test/resources/runtimets/queries/feeds/change-feed-with-meta-pk-in-meta-index-after-ingest/change-feed-with-meta-pk-in-meta-index-after-ingest.3.ddl.aql" FAILED! at org.apache.asterix.test.runtime.AqlExecutionFullParallelismIT.test(AqlExecutionFullParallelismIT.java:70) Caused by: java.lang.Exception: HTTP operation failed: STATUS LINE: HTTP/1.1 500 Internal Server Error SUMMARY: Dataset KeyVerse.KVStore is currently being fed into by the following active entities. KeyVerse.KVChangeStream(Feed) caused by: org.apache.asterix.app.translator.QueryTranslator.validateIfResourceIsActiveInFeed(QueryTranslator.java:696) STACKTRACE: org.apache.asterix.common.exceptions.CompilationException: Dataset KeyVerse.KVStore is currently being fed into by the following active entities. 
KeyVerse.KVChangeStream(Feed) at org.apache.asterix.app.translator.QueryTranslator.validateIfResourceIsActiveInFeed(QueryTranslator.java:696) at org.apache.asterix.app.translator.QueryTranslator.handleCreateIndexStatement(QueryTranslator.java:885) at org.apache.asterix.app.translator.QueryTranslator.compileAndExecute(QueryTranslator.java:291) at org.apache.asterix.app.translator.QueryTranslator.compileAndExecute(QueryTranslator.java:247) at org.apache.asterix.api.http.server.RestApiServlet.doHandle(RestApiServlet.java:207) at org.apache.asterix.api.http.server.RestApiServlet.getOrPost(RestApiServlet.java:177) at org.apache.asterix.api.http.server.RestApiServlet.post(RestApiServlet.java:166) at org.apache.hyracks.http.server.AbstractServlet.handle(AbstractServlet.java:78) at org.apache.hyracks.http.server.HttpRequestHandler.handle(HttpRequestHandler.java:70) at org.apache.hyracks.http.server.HttpRequestHandler.call(HttpRequestHandler.java:55) at org.apache.hyracks.http.server.HttpRequestHandler.call(HttpRequestHandler.java:36) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) at org.apache.asterix.test.runtime.AqlExecutionFullParallelismIT.test(AqlExecutionFullParallelismIT.java:70) {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Created] (ASTERIXDB-1923) Dataset id is not recycled
Yingyi Bu created ASTERIXDB-1923: Summary: Dataset id is not recycled Key: ASTERIXDB-1923 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1923 Project: Apache AsterixDB Issue Type: Bug Reporter: Yingyi Bu Assignee: Yingyi Bu Currently, dataset ids are not recycled when a dataset is dropped. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
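One common way to recycle ids -- sketched here with hypothetical names, not a proposal for the actual metadata code -- is a free pool that hands back the smallest dropped id before minting new ones, so the id space stops growing without bound as datasets come and go.

```java
import java.util.PriorityQueue;

// Hypothetical sketch: dropped dataset ids go into a min-heap free pool and
// are reused (smallest first) before a fresh id is allocated.
class DatasetIdPool {
    private final PriorityQueue<Integer> freed = new PriorityQueue<>();
    private int next = 0;

    int allocate() {
        return freed.isEmpty() ? next++ : freed.poll();
    }

    void release(int id) {
        freed.add(id); // called when the dataset owning this id is dropped
    }
}
```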
[jira] [Commented] (ASTERIXDB-1921) Replication bugs and their affects in optimized logical plan
[ https://issues.apache.org/jira/browse/ASTERIXDB-1921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16026456#comment-16026456 ] Yingyi Bu commented on ASTERIXDB-1921: -- - For the first query, SQL++ should work -- it handles WITH differently from LET in AQL, i.e., it inlines WITH expressions if the expressions do not call stateful functions like current_time(). In other words, SQL++ automatically transforms your first query into your second query. - For the third query, the issue is that a number of legacy rules assume that a subplan can only have one nested plan. To fix this, we need to generalize those rules. > Replication bugs and their affects in optimized logical plan > > > Key: ASTERIXDB-1921 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1921 > Project: Apache AsterixDB > Issue Type: Bug > Components: AsterixDB, Optimizer >Reporter: Shiva Jahangiri > > We were trying to see some replication in optimized logical plan by trying to > run the following query on the example in AQL primer,however print optimized > logical plan/print hyracks job/Execute query threw null pointer exception: > Query: > use dataverse TinySocial; > let $temp := (for $message in dataset GleambookMessages >where $message.authorId >= 0 return $message) > return{ >"count1":count( > for $t1 in $temp > for $user in dataset GleambookUsers > where $t1.authorId = $user.id and $user.id > 0 > return { > "user": $user, > "message": $t1 >}), >"count2": count( > for $t2 in $temp >for $user in dataset GleambookUsers >where $t2.authorId = $user.id and $user.id < 11 > return { > "user": $user, > "message": $t2 > }) > } > Error : > Internal error. Please check instance logs for further details. > [NullPointerException] > It happened when replication was happening as this query ran well with with > either count1 or count2 but not both. 
> What we tried next to track the bug down, was the following query which is > the same query as above without using replication: > use dataverse TinySocial; > { >"count1":count( > for $t1 in (for $message in dataset GleambookMessages >where $message.authorId >= 0 return $message) > for $user in dataset GleambookUsers > where $t1.authorId = $user.id and $user.id > 0 > return { > "user": $user, > "message": $t1 >}), >"count2": count( > for $t2 in (for $message in dataset GleambookMessages >where $message.authorId >= 0 return $message) >for $user in dataset GleambookUsers >where $t2.authorId = $user.id and $user.id < 11 > return { > "user": $user, > "message": $t2 > }) > } > This query produced the result and optimized logical plan successfully. > We continued by trying a simpler query that uses replication as follow: > use dataverse TinySocial; > let $temp := > (for $message in dataset GleambookMessages >where $message.authorId = 1 return $message) > return { >"copy1":(for $m in $temp where $m.messageId <= 10 return $m), >"copy2":(for $m in $temp where $m.messageId >10 return $m) > } > Which produces the following optimized logical plan: > distribute result [$$8] > -- DISTRIBUTE_RESULT |UNPARTITIONED| > exchange > -- ONE_TO_ONE_EXCHANGE |UNPARTITIONED| > project ([$$8]) > -- STREAM_PROJECT |UNPARTITIONED| > assign [$$8] <- [{"copy1": $$11, "copy2": $$14}] > -- ASSIGN |UNPARTITIONED| > project ([$$11, $$14]) > -- STREAM_PROJECT |UNPARTITIONED| > subplan { > aggregate [$$14] <- [listify($$m)] > -- AGGREGATE |UNPARTITIONED| > select (gt($$18, 10)) > -- STREAM_SELECT |UNPARTITIONED| > assign [$$18] <- [$$m.getField(0)] > -- ASSIGN |UNPARTITIONED| > unnest $$m <- scan-collection($$7) > -- UNNEST |UNPARTITIONED| > nested tuple source > -- NESTED_TUPLE_SOURCE |UNPARTITIONED| > } > -- SUBPLAN |UNPARTITIONED| > subplan { > aggregate [$$11] <- [listify($$m)] > -- AGGREGATE |UNPARTITIONED| > select (le($$17, 10)) > -- STREAM_SELECT |UNPARTITIONED| >
[jira] [Commented] (ASTERIXDB-1921) Replication bugs and their affects in optimized logical plan
[ https://issues.apache.org/jira/browse/ASTERIXDB-1921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16026426#comment-16026426 ] Yingyi Bu commented on ASTERIXDB-1921: -- Shiva, can you please attach the failure stack trace? > Replication bugs and their affects in optimized logical plan > > > Key: ASTERIXDB-1921 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1921 > Project: Apache AsterixDB > Issue Type: Bug > Components: AsterixDB, Optimizer >Reporter: Shiva Jahangiri > > We were trying to see some replication in optimized logical plan by trying to > run the following query on the example in AQL primer,however print optimized > logical plan/print hyracks job/Execute query threw null pointer exception: > Query: > use dataverse TinySocial; > let $temp := (for $message in dataset GleambookMessages >where $message.authorId >= 0 return $message) > return{ >"count1":count( > for $t1 in $temp > for $user in dataset GleambookUsers > where $t1.authorId = $user.id and $user.id > 0 > return { > "user": $user, > "message": $t1 >}), >"count2": count( > for $t2 in $temp >for $user in dataset GleambookUsers >where $t2.authorId = $user.id and $user.id < 11 > return { > "user": $user, > "message": $t2 > }) > } > Error : > Internal error. Please check instance logs for further details. > [NullPointerException] > It happened when replication was happening as this query ran well with with > either count1 or count2 but not both. 
> What we tried next to track the bug down, was the following query which is > the same query as above without using replication: > use dataverse TinySocial; > { >"count1":count( > for $t1 in (for $message in dataset GleambookMessages >where $message.authorId >= 0 return $message) > for $user in dataset GleambookUsers > where $t1.authorId = $user.id and $user.id > 0 > return { > "user": $user, > "message": $t1 >}), >"count2": count( > for $t2 in (for $message in dataset GleambookMessages >where $message.authorId >= 0 return $message) >for $user in dataset GleambookUsers >where $t2.authorId = $user.id and $user.id < 11 > return { > "user": $user, > "message": $t2 > }) > } > This query produced the result and optimized logical plan successfully. > We continued by trying a simpler query that uses replication as follow: > use dataverse TinySocial; > let $temp := > (for $message in dataset GleambookMessages >where $message.authorId = 1 return $message) > return { >"copy1":(for $m in $temp where $m.messageId <= 10 return $m), >"copy2":(for $m in $temp where $m.messageId >10 return $m) > } > Which produces the following optimized logical plan: > distribute result [$$8] > -- DISTRIBUTE_RESULT |UNPARTITIONED| > exchange > -- ONE_TO_ONE_EXCHANGE |UNPARTITIONED| > project ([$$8]) > -- STREAM_PROJECT |UNPARTITIONED| > assign [$$8] <- [{"copy1": $$11, "copy2": $$14}] > -- ASSIGN |UNPARTITIONED| > project ([$$11, $$14]) > -- STREAM_PROJECT |UNPARTITIONED| > subplan { > aggregate [$$14] <- [listify($$m)] > -- AGGREGATE |UNPARTITIONED| > select (gt($$18, 10)) > -- STREAM_SELECT |UNPARTITIONED| > assign [$$18] <- [$$m.getField(0)] > -- ASSIGN |UNPARTITIONED| > unnest $$m <- scan-collection($$7) > -- UNNEST |UNPARTITIONED| > nested tuple source > -- NESTED_TUPLE_SOURCE |UNPARTITIONED| > } > -- SUBPLAN |UNPARTITIONED| > subplan { > aggregate [$$11] <- [listify($$m)] > -- AGGREGATE |UNPARTITIONED| > select (le($$17, 10)) > -- STREAM_SELECT |UNPARTITIONED| > assign [$$17] <- 
[$$m.getField(0)] > -- ASSIGN |UNPARTITIONED| > unnest $$m <- scan-collection($$7) > -- UNNEST |UNPARTITIONED| > nested tuple source > -- NESTED_TUPLE_SOURCE |UNPARTITIONED| >} > -- SUBPLAN
[jira] [Closed] (ASTERIXDB-1918) Parser doesn't support scientific math number
[ https://issues.apache.org/jira/browse/ASTERIXDB-1918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu closed ASTERIXDB-1918. Resolution: Fixed > Parser doesn't support scientific-notation numbers > - > > Key: ASTERIXDB-1918 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1918 > Project: Apache AsterixDB > Issue Type: Bug > Components: Compiler >Reporter: Yingyi Bu >Assignee: Yingyi Bu > > For example: 1.0e-5, 2e10, 0.3e-2
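The literal forms from the report (1.0e-5, 2e10, 0.3e-2) all follow the usual scientific-notation shape: digits, an optional fraction, and an optional signed exponent. A minimal sketch of such a lexer pattern in Python; this is an illustrative regex, not the actual AsterixDB grammar rule:

```python
import re

# Illustrative numeric-literal pattern: integer part, optional fraction,
# optional exponent with optional sign. Not the real AsterixDB tokenizer.
NUMERIC = re.compile(r"^\d+(\.\d+)?([eE][+-]?\d+)?$")

# All three forms from the issue should be accepted.
for literal in ["1.0e-5", "2e10", "0.3e-2", "42", "3.14"]:
    assert NUMERIC.match(literal), literal

# A bare exponent marker with no digits must be rejected.
assert NUMERIC.match("1.0e") is None
```

A parser that only accepts digits-dot-digits would reject all three examples, which is consistent with the behavior the issue describes.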
[jira] [Created] (ASTERIXDB-1918) Parser doesn't support scientific-notation numbers
Yingyi Bu created ASTERIXDB-1918: Summary: Parser doesn't support scientific-notation numbers Key: ASTERIXDB-1918 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1918 Project: Apache AsterixDB Issue Type: Bug Components: Compiler Reporter: Yingyi Bu Assignee: Yingyi Bu For example: 1.0e-5, 2e10, 0.3e-2
[jira] [Closed] (ASTERIXDB-1899) Address review comments on the LogManager change
[ https://issues.apache.org/jira/browse/ASTERIXDB-1899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu closed ASTERIXDB-1899. > Address review comments on the LogManager change > > > Key: ASTERIXDB-1899 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1899 > Project: Apache AsterixDB > Issue Type: Bug > Components: Transactions >Reporter: Yingyi Bu >Assignee: Abdullah Alamoudi > > Address the outstanding code review comments on the following change: > https://asterix-gerrit.ics.uci.edu/#/c/1719/
[jira] [Closed] (ASTERIXDB-1836) Refactor ICCMessageBroker/INCMessageBroker
[ https://issues.apache.org/jira/browse/ASTERIXDB-1836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu closed ASTERIXDB-1836. > Refactor ICCMessageBroker/INCMessageBroker > -- > > Key: ASTERIXDB-1836 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1836 > Project: Apache AsterixDB > Issue Type: Bug >Reporter: Yingyi Bu >Assignee: Abdullah Alamoudi > > Refactor these to be type-specific, and perform the IApplication-to-IXXXApplication cast once in the message broker instead of in every message handler.
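The "cast once" idea behind this refactoring can be sketched in Python. The class names below (`IApplication`, `ICCApplication`, `CCMessageBroker`) mimic the interfaces mentioned in the issue but are hypothetical stand-ins, not the AsterixDB API:

```python
# Hypothetical sketch of the refactoring goal: the broker narrows the
# application type a single time at construction, so message handlers
# work with the specific type and never repeat the cast themselves.

class IApplication:
    """Generic application interface (stand-in)."""

class ICCApplication(IApplication):
    """CC-specific application interface (stand-in)."""
    def cluster_state(self):
        return "ACTIVE"

class CCMessageBroker:
    def __init__(self, app):
        # The one and only type check, done here instead of in every
        # message handler.
        if not isinstance(app, ICCApplication):
            raise TypeError("CCMessageBroker requires an ICCApplication")
        self.app = app

    def handle_state_request(self):
        # Handlers use the already-narrowed type directly.
        return self.app.cluster_state()

broker = CCMessageBroker(ICCApplication())
```

Centralizing the cast turns a scattered runtime hazard (a bad cast in any of N handlers) into a single constructor-time check.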
[jira] [Closed] (ASTERIXDB-1915) Dataset partition files are not uniformly distributed across IO devices
[ https://issues.apache.org/jira/browse/ASTERIXDB-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yingyi Bu closed ASTERIXDB-1915. Resolution: Fixed > Dataset partition files are not uniformly distributed across IO devices > > > Key: ASTERIXDB-1915 > URL: https://issues.apache.org/jira/browse/ASTERIXDB-1915 > Project: Apache AsterixDB > Issue Type: Bug > Components: Storage >Reporter: Yingyi Bu >Assignee: Yingyi Bu > > When we create a dataset, the partitioned physical files can be distributed across iodevices in a non-uniform way, because there are multiple calling sites of DefaultDeviceComputer.compute(path) when creating each index partition.
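The failure pattern described here, multiple call sites each starting their own device assignment instead of sharing one, can be illustrated with a small Python sketch. The counter-based assignment below is a hypothetical stand-in for DefaultDeviceComputer, used only to show why independent call sites skew the placement:

```python
import itertools

# Hypothetical sketch of the bug pattern: if every calling site restarts
# its own round-robin counter, all partitions land on the same starting
# device; a single shared counter spreads them evenly.
NUM_DEVICES = 3

def assign_with_fresh_counter(num_partitions):
    # Each call creates a brand-new cycle, so next() always yields device 0.
    return [next(itertools.cycle(range(NUM_DEVICES)))
            for _ in range(num_partitions)]

def assign_with_shared_counter(num_partitions, counter):
    # One shared counter advances across all partitions.
    return [next(counter) % NUM_DEVICES for _ in range(num_partitions)]

skewed = assign_with_fresh_counter(6)                      # all on device 0
uniform = assign_with_shared_counter(6, itertools.count()) # 0,1,2,0,1,2
```

The fix direction the issue implies is the shared-state variant: one authority decides the device for each partition, regardless of which code path asks.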