[jira] [Assigned] (ASTERIXDB-2427) Plan with two instances of create-query-uid() optimizing groupby incorrectly.

2018-07-26 Thread Steven Jacobs (JIRA)


 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-2427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs reassigned ASTERIXDB-2427:


Assignee: Dmitry Lychagin

> Plan with two instances of create-query-uid() optimizing groupby incorrectly.
> -
>
> Key: ASTERIXDB-2427
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2427
> Project: Apache AsterixDB
>  Issue Type: Bug
>Reporter: Steven Jacobs
>Assignee: Dmitry Lychagin
>Priority: Major
>
> The groupby ends up grouping by nothing when it should group by the top 
> create-query-uid() instance.
>  
> Repro:
> drop dataverse channels if exists;
> create dataverse channels;
> use channels;
> create type UserLocation as {
>  location: circle,
>  userName: string
> };
> create type EmergencyReport as {
>  reportId: uuid,
>  Etype: string,
>  location: circle
> };
> create type EmergencyShelter as {
>  shelterName: string,
>  location: point
> };
> create dataset UserLocations(UserLocation)
> primary key userName;
> create dataset Shelters(EmergencyShelter)
> primary key shelterName;
> create dataset Reports(EmergencyReport)
> primary key reportId autogenerated;
> create index u_location on UserLocations(location) type RTREE;
> create type result as {
>  resultId:uuid
> };
> create type channelSub as {
>  channelSubId:uuid
> };
> create type brokerSub as {
>  channelSubId:uuid,
>  brokerSubId:uuid
> };
> create type broke as {
>  DataverseName: string,
>  BrokerName: string,
>  BrokerEndpoint: string
> };
> create dataset EmergenciesNearMeChannelResults(result) primary key resultId 
> autogenerated;
> create dataset EmergenciesNearMeChannelChannelSubscriptions(channelSub) 
> primary key channelSubId;
> create dataset EmergenciesNearMeChannelBrokerSubscriptions(brokerSub) primary 
> key channelSubId,brokerSubId;
> create dataset Broker(broke) primary key DataverseName,BrokerName;
> create function RecentEmergenciesNearUser(userName) {
>  (
>  select report, shelters from
>  ( select value r from Reports r)report,
>  UserLocations u
>  let shelters = (select s.location from Shelters s where 
> spatial_intersect(s.location,u.location))
>  where u.userName = userName
>  and spatial_intersect(report.location,u.location)
>  )
> };
> SET inline_with "false";
> insert into channels.EmergenciesNearMeChannelResults as a (
> with channelExecutionTime as current_datetime() 
> select result, channelExecutionTime, sub.channelSubId as 
> channelSubId,current_datetime() as deliveryTime,
> (select b.BrokerEndPoint, bs.brokerSubId from
> channels.EmergenciesNearMeChannelBrokerSubscriptions bs,
> channels.Broker b
> where bs.BrokerName = b.BrokerName
> and bs.DataverseName = b.DataverseName
> and bs.channelSubId = sub.channelSubId
> ) as brokerSubIds
> from channels.EmergenciesNearMeChannelChannelSubscriptions sub,
> channels.RecentEmergenciesNearUser(sub.param0) result 
> ) returning (select a.channelExecutionTime group by a.channelExecutionTime);



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ASTERIXDB-2427) Plan with two instances of create-query-uid() optimizing groupby incorrectly.

2018-07-26 Thread Steven Jacobs (JIRA)
Steven Jacobs created ASTERIXDB-2427:


 Summary: Plan with two instances of create-query-uid() optimizing 
groupby incorrectly.
 Key: ASTERIXDB-2427
 URL: https://issues.apache.org/jira/browse/ASTERIXDB-2427
 Project: Apache AsterixDB
  Issue Type: Bug
Reporter: Steven Jacobs


The groupby ends up grouping by nothing when it should group by the top 
create-query-uid() instance.

 

Repro:

drop dataverse channels if exists;
create dataverse channels;
use channels;

create type UserLocation as {
 location: circle,
 userName: string
};


create type EmergencyReport as {
 reportId: uuid,
 Etype: string,
 location: circle
};


create type EmergencyShelter as {
 shelterName: string,
 location: point
};

create dataset UserLocations(UserLocation)
primary key userName;
create dataset Shelters(EmergencyShelter)
primary key shelterName;
create dataset Reports(EmergencyReport)
primary key reportId autogenerated;

create index u_location on UserLocations(location) type RTREE;

create type result as {
 resultId:uuid
};
create type channelSub as {
 channelSubId:uuid
};
create type brokerSub as {
 channelSubId:uuid,
 brokerSubId:uuid
};
create type broke as {
 DataverseName: string,
 BrokerName: string,
 BrokerEndpoint: string
};

create dataset EmergenciesNearMeChannelResults(result) primary key resultId 
autogenerated;
create dataset EmergenciesNearMeChannelChannelSubscriptions(channelSub) primary 
key channelSubId;
create dataset EmergenciesNearMeChannelBrokerSubscriptions(brokerSub) primary 
key channelSubId,brokerSubId;
create dataset Broker(broke) primary key DataverseName,BrokerName;


create function RecentEmergenciesNearUser(userName) {
 (
 select report, shelters from
 ( select value r from Reports r)report,
 UserLocations u
 let shelters = (select s.location from Shelters s where 
spatial_intersect(s.location,u.location))
 where u.userName = userName
 and spatial_intersect(report.location,u.location)
 )
};

SET inline_with "false";
insert into channels.EmergenciesNearMeChannelResults as a (
with channelExecutionTime as current_datetime() 
select result, channelExecutionTime, sub.channelSubId as 
channelSubId,current_datetime() as deliveryTime,
(select b.BrokerEndPoint, bs.brokerSubId from
channels.EmergenciesNearMeChannelBrokerSubscriptions bs,
channels.Broker b
where bs.BrokerName = b.BrokerName
and bs.DataverseName = b.DataverseName
and bs.channelSubId = sub.channelSubId
) as brokerSubIds
from channels.EmergenciesNearMeChannelChannelSubscriptions sub,
channels.RecentEmergenciesNearUser(sub.param0) result 
) returning (select a.channelExecutionTime group by a.channelExecutionTime);



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (ASTERIXDB-2426) Unpartitioned Variables before a datascan should be able to have the data scan keys as their primary key after the data scan

2018-07-26 Thread Steven Jacobs (JIRA)


 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-2426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs closed ASTERIXDB-2426.

Resolution: Invalid

> Unpartitioned Variables before a datascan should be able to have the data 
> scan keys as their primary key after the data scan 
> -
>
> Key: ASTERIXDB-2426
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2426
> Project: Apache AsterixDB
>  Issue Type: Bug
>Reporter: Steven Jacobs
>Priority: Major
>
> For a plan such as:
>  
> data-scan []<-[$$keyVar, $$recordVar] <- DatasetName -- |UNPARTITIONED|
>  assign [$$timeVar] <- [current-datetime()] -- |UNPARTITIONED|
>  empty-tuple-source -- |UNPARTITIONED|
> $$timeVar should have $$keyVar as its primary key after the data scan



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ASTERIXDB-2426) Unpartitioned Variables before a datascan should be able to have the data scan keys as their primary key after the data scan

2018-07-26 Thread Steven Jacobs (JIRA)
Steven Jacobs created ASTERIXDB-2426:


 Summary: Unpartitioned Variables before a datascan should be able 
to have the data scan keys as their primary key after the data scan 
 Key: ASTERIXDB-2426
 URL: https://issues.apache.org/jira/browse/ASTERIXDB-2426
 Project: Apache AsterixDB
  Issue Type: Bug
Reporter: Steven Jacobs


For a plan such as:

 

data-scan []<-[$$keyVar, $$recordVar] <- DatasetName -- |UNPARTITIONED|
 assign [$$timeVar] <- [current-datetime()] -- |UNPARTITIONED|
 empty-tuple-source -- |UNPARTITIONED|

$$timeVar should have $$keyVar as its primary key after the data scan



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (ASTERIXDB-2373) Allow Deployed Jobs to receive new Job Specifications

2018-07-26 Thread Steven Jacobs (JIRA)


 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-2373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs resolved ASTERIXDB-2373.
--
Resolution: Fixed

> Allow Deployed Jobs to receive new Job Specifications
> -
>
> Key: ASTERIXDB-2373
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2373
> Project: Apache AsterixDB
>  Issue Type: Improvement
>Reporter: Steven Jacobs
>Assignee: Steven Jacobs
>Priority: Major
>
> It may be desirable to create a new Job Specification for a Deployed Job. We 
> need some way to handle this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (ASTERIXDB-2402) findLOJIsMissingFuncInGroupBy fails to find the expression when there is an "and"

2018-07-26 Thread Steven Jacobs (JIRA)


 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-2402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs resolved ASTERIXDB-2402.
--
Resolution: Fixed

> findLOJIsMissingFuncInGroupBy fails to find the expression when there is an 
> "and"
> -
>
> Key: ASTERIXDB-2402
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2402
> Project: Apache AsterixDB
>  Issue Type: Improvement
>Reporter: Steven Jacobs
>Priority: Major
>
> When findLOJIsMissingFuncInGroupBy() in AccessMethodUtils looks for the
> not(is-missing($$VAR))
> expression, it fails to check select operators of the form:
> select (and(not(is-missing($$VAR1)), not(is-missing($$VAR2))))
> It only looks for
> select (not(is-missing($$VAR)))
> which doesn't cover all cases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (ASTERIXDB-2405) IntroduceJoinAccessMethodUtil doesn't start at delegates other than commit.

2018-07-26 Thread Steven Jacobs (JIRA)


 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-2405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs resolved ASTERIXDB-2405.
--
Resolution: Fixed

> IntroduceJoinAccessMethodUtil doesn't start at delegates other than commit.
> ---
>
> Key: ASTERIXDB-2405
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2405
> Project: Apache AsterixDB
>  Issue Type: Improvement
>Reporter: Steven Jacobs
>Priority: Major
>
> Currently, we only have two delegates:
> commit()
> and the BAD delegate, notify-brokers()
> They should both be considered by IntroduceJoinAccessMethodUtil.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ASTERIXDB-2425) Allow SubstituteVariableVisitor to work on Delegate Operators

2018-07-26 Thread Steven Jacobs (JIRA)
Steven Jacobs created ASTERIXDB-2425:


 Summary: Allow SubstituteVariableVisitor to work on Delegate 
Operators
 Key: ASTERIXDB-2425
 URL: https://issues.apache.org/jira/browse/ASTERIXDB-2425
 Project: Apache AsterixDB
  Issue Type: Bug
Reporter: Steven Jacobs


Currently, Delegate Operators are ignored by this visitor. The visitor should 
substitute variables in them as well.
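
As a rough illustration of the missing behavior, the sketch below uses a tiny 
hypothetical operator model (DelegateOpSketch and plain String variable names are 
stand-ins, not the real Algebricks DelegateOperator or LogicalVariable classes) to 
show what substituting a variable in a delegate operator's used and produced 
variable lists would look like:

{code:java}
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a delegate operator; not the real Algebricks class.
final class DelegateOpSketch {
    private final List<String> usedVariables = new ArrayList<>();
    private final List<String> producedVariables = new ArrayList<>();

    void use(String var) { usedVariables.add(var); }
    void produce(String var) { producedVariables.add(var); }

    // What a visitDelegateOperator(...) implementation would conceptually do,
    // instead of ignoring the operator entirely.
    void substituteVariable(String oldVar, String newVar) {
        usedVariables.replaceAll(v -> v.equals(oldVar) ? newVar : v);
        producedVariables.replaceAll(v -> v.equals(oldVar) ? newVar : v);
    }

    public static void main(String[] args) {
        DelegateOpSketch op = new DelegateOpSketch();
        op.use("$$uid");
        op.produce("$$result");
        op.substituteVariable("$$uid", "$$uid2");
        System.out.println(op.usedVariables + " " + op.producedVariables);
    }
}
{code}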



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ASTERIXDB-2405) IntroduceJoinAccessMethodUtil doesn't start at delegates other than commit.

2018-06-29 Thread Steven Jacobs (JIRA)
Steven Jacobs created ASTERIXDB-2405:


 Summary: IntroduceJoinAccessMethodUtil doesn't start at delegates 
other than commit.
 Key: ASTERIXDB-2405
 URL: https://issues.apache.org/jira/browse/ASTERIXDB-2405
 Project: Apache AsterixDB
  Issue Type: Improvement
Reporter: Steven Jacobs


Currently, we only have two delegates:

commit()

and the BAD delegate, notify-brokers()

They should both be considered by IntroduceJoinAccessMethodUtil.
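
To make the intent concrete, here is a small self-contained sketch (the Op/Tag 
model is hypothetical, not the actual IntroduceJoinAccessMethodRule code) of 
starting the rule at any delegate operator instead of special-casing commit():

{code:java}
import java.util.List;

final class DelegateStartSketch {
    enum Tag { DELEGATE, SELECT, JOIN, DATA_SCAN }

    record Op(Tag tag, String name) {}

    // Old behavior: only a commit delegate is accepted as the starting operator.
    static boolean isStartCommitOnly(Op op) {
        return op.tag() == Tag.DELEGATE && "commit".equals(op.name());
    }

    // Suggested behavior: any delegate operator can start the traversal.
    static boolean isStartAnyDelegate(Op op) {
        return op.tag() == Tag.DELEGATE;
    }

    public static void main(String[] args) {
        List<Op> roots = List.of(new Op(Tag.DELEGATE, "commit"),
                                 new Op(Tag.DELEGATE, "notify-brokers"));
        for (Op root : roots) {
            System.out.println(root.name() + ": old=" + isStartCommitOnly(root)
                    + " new=" + isStartAnyDelegate(root));
        }
    }
}
{code}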



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ASTERIXDB-2404) AbstractGroupingProperty mishandles empty heads for functional dependencies

2018-06-29 Thread Steven Jacobs (JIRA)
Steven Jacobs created ASTERIXDB-2404:


 Summary: AbstractGroupingProperty mishandles empty heads for 
functional dependencies
 Key: ASTERIXDB-2404
 URL: https://issues.apache.org/jira/browse/ASTERIXDB-2404
 Project: Apache AsterixDB
  Issue Type: Improvement
Reporter: Steven Jacobs


If a group by uses variable $VAR, and there is a functional dependency of the 
form:

[] -> $VAR

then reduceGroupingColumns() will incorrectly eliminate $VAR from the group by.
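
The guard being asked for can be sketched as follows; this is a self-contained toy 
model (string variable names and a hypothetical FunctionalDependency record, not 
the actual AbstractGroupingProperty code), showing only that an FD whose head is 
empty should never be used to prune a grouping column:

{code:java}
import java.util.HashSet;
import java.util.List;
import java.util.Set;

final class GroupingReduceSketch {
    record FunctionalDependency(Set<String> head, Set<String> tail) {}

    static Set<String> reduceGroupingColumns(Set<String> groupCols,
                                              List<FunctionalDependency> fds) {
        Set<String> result = new HashSet<>(groupCols);
        for (FunctionalDependency fd : fds) {
            if (fd.head().isEmpty()) {
                continue; // empty head: must not justify dropping a grouping column
            }
            if (result.containsAll(fd.head())) {
                // columns determined by other grouping columns are redundant
                for (String v : fd.tail()) {
                    if (!fd.head().contains(v)) {
                        result.remove(v);
                    }
                }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        FunctionalDependency emptyHead =
                new FunctionalDependency(Set.of(), Set.of("$VAR"));
        System.out.println(reduceGroupingColumns(new HashSet<>(Set.of("$VAR")),
                List.of(emptyHead))); // prints [$VAR]: the column is kept
    }
}
{code}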



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ASTERIXDB-2402) findLOJIsMissingFuncInGroupBy fails to find the expression when there is an "and"

2018-06-25 Thread Steven Jacobs (JIRA)
Steven Jacobs created ASTERIXDB-2402:


 Summary: findLOJIsMissingFuncInGroupBy fails to find the 
expression when there is an "and"
 Key: ASTERIXDB-2402
 URL: https://issues.apache.org/jira/browse/ASTERIXDB-2402
 Project: Apache AsterixDB
  Issue Type: Improvement
Reporter: Steven Jacobs


When findLOJIsMissingFuncInGroupBy() in AccessMethodUtils looks for the

not(is-missing($$VAR))

expression, it fails to check select operators of the form:

select (and(not(is-missing($$VAR1)), not(is-missing($$VAR2))))

It only looks for

select (not(is-missing($$VAR)))

which doesn't cover all cases.
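
A self-contained sketch of the broader check (using a toy Expr model rather than 
the real Algebricks ILogicalExpression/AccessMethodUtils APIs) would walk the 
conjuncts of a top-level and() and match each not(is-missing(...)) individually:

{code:java}
import java.util.ArrayList;
import java.util.List;

final class LojIsMissingSketch {
    // Toy expression node: either a function call (fn + args) or a variable.
    record Expr(String fn, List<Expr> args, String var) {
        static Expr call(String fn, Expr... args) { return new Expr(fn, List.of(args), null); }
        static Expr variable(String name) { return new Expr(null, List.of(), name); }
    }

    // Collect every variable appearing as not(is-missing($$VAR)), looking
    // through a top-level AND when one is present.
    static List<String> findNotMissingVars(Expr selectCondition) {
        List<String> vars = new ArrayList<>();
        List<Expr> conjuncts = "and".equals(selectCondition.fn())
                ? selectCondition.args() : List.of(selectCondition);
        for (Expr c : conjuncts) {
            if ("not".equals(c.fn()) && "is-missing".equals(c.args().get(0).fn())) {
                vars.add(c.args().get(0).args().get(0).var());
            }
        }
        return vars;
    }

    public static void main(String[] args) {
        Expr cond = Expr.call("and",
                Expr.call("not", Expr.call("is-missing", Expr.variable("$$VAR1"))),
                Expr.call("not", Expr.call("is-missing", Expr.variable("$$VAR2"))));
        System.out.println(findNotMissingVars(cond)); // [$$VAR1, $$VAR2]
    }
}
{code}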



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (ASTERIXDB-2400) Bad compilation of CASE statement

2018-06-11 Thread Steven Jacobs (JIRA)


 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-2400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs resolved ASTERIXDB-2400.
--
Resolution: Fixed

> Bad compilation of CASE statement 
> --
>
> Key: ASTERIXDB-2400
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2400
> Project: Apache AsterixDB
>  Issue Type: Improvement
>Reporter: Steven Jacobs
>Priority: Major
>
> The following valid syntax will throw an internal error:
> drop dataverse channels if exists;
> create dataverse channels;
> use channels;
> create type sub as {
>  subscriptionId: uuid
> };
> create dataset subscriptions(sub) primary key subscriptionId;
> upsert into subscriptions (
> (let v = (select value s from subscriptions s where param0 = "HenryGale")
> select value (CASE (array_count(v) > 0)
> WHEN true THEN
> {"subscriptionId":v[0].subscriptionId, "param0": 
> v[0].param0,"brokerSubscriptions":(select value sub from 
> v[0].brokerSubscriptions sub UNION ALL select value val from 
> [\{"brokerSubscriptionId":create_uuid()}] val)}
> ELSE
> {"subscriptionId":create_uuid(),"param0": 
> "HenryGale","brokerSubscriptions":[\{"brokerSubscriptionId":create_uuid()}]}
> END
> ))
> );
>  
>  
> The stack trace is:
> java.lang.NullPointerException: null
>  at 
> org.apache.asterix.om.typecomputer.impl.ListConstructorTypeComputer.computeTypeFromItems(ListConstructorTypeComputer.java:63)
>  ~[classes/:?]
>  at 
> org.apache.asterix.om.typecomputer.impl.ListConstructorTypeComputer.computeType(ListConstructorTypeComputer.java:50)
>  ~[classes/:?]
>  at 
> org.apache.asterix.dataflow.data.common.ExpressionTypeComputer.getTypeForFunction(ExpressionTypeComputer.java:84)
>  ~[classes/:?]
>  at 
> org.apache.asterix.dataflow.data.common.ExpressionTypeComputer.getType(ExpressionTypeComputer.java:55)
>  ~[classes/:?]
>  at 
> org.apache.hyracks.algebricks.core.algebra.operators.logical.AggregateOperator.computeOutputTypeEnvironment(AggregateOperator.java:106)
>  ~[classes/:?]
>  at 
> org.apache.hyracks.algebricks.core.rewriter.base.AlgebricksOptimizationContext.computeAndSetTypeEnvironmentForOperator(AlgebricksOptimizationContext.java:298)
>  ~[classes/:?]
>  at 
> org.apache.hyracks.algebricks.core.algebra.util.OperatorManipulationUtil.computeTypeEnvironmentBottomUp(OperatorManipulationUtil.java:296)
>  ~[classes/:?]
>  at 
> org.apache.hyracks.algebricks.core.algebra.util.OperatorManipulationUtil.computeTypeEnvironmentBottomUp(OperatorManipulationUtil.java:286)
>  ~[classes/:?]
>  at 
> org.apache.hyracks.algebricks.core.algebra.util.OperatorManipulationUtil.computeTypeEnvironmentBottomUp(OperatorManipulationUtil.java:286)
>  ~[classes/:?]
>  at 
> org.apache.asterix.optimizer.rules.subplan.InlineSubplanInputForNestedTupleSourceRule.applyGeneralFlattening(InlineSubplanInputForNestedTupleSourceRule.java:422)
>  ~[classes/:?]
>  at 
> org.apache.asterix.optimizer.rules.subplan.InlineSubplanInputForNestedTupleSourceRule.rewriteSubplanOperator(InlineSubplanInputForNestedTupleSourceRule.java:304)
>  ~[classes/:?]
>  at 
> org.apache.asterix.optimizer.rules.subplan.InlineSubplanInputForNestedTupleSourceRule.traverseNonSubplanOperator(InlineSubplanInputForNestedTupleSourceRule.java:332)
>  ~[classes/:?]
>  at 
> org.apache.asterix.optimizer.rules.subplan.InlineSubplanInputForNestedTupleSourceRule.rewriteSubplanOperator(InlineSubplanInputForNestedTupleSourceRule.java:290)
>  ~[classes/:?]
>  at 
> org.apache.asterix.optimizer.rules.subplan.InlineSubplanInputForNestedTupleSourceRule.traverseNonSubplanOperator(InlineSubplanInputForNestedTupleSourceRule.java:332)
>  ~[classes/:?]
>  at 
> org.apache.asterix.optimizer.rules.subplan.InlineSubplanInputForNestedTupleSourceRule.rewriteSubplanOperator(InlineSubplanInputForNestedTupleSourceRule.java:290)
>  ~[classes/:?]
>  at 
> org.apache.asterix.optimizer.rules.subplan.InlineSubplanInputForNestedTupleSourceRule.traverseNonSubplanOperator(InlineSubplanInputForNestedTupleSourceRule.java:332)
>  ~[classes/:?]
>  at 
> org.apache.asterix.optimizer.rules.subplan.InlineSubplanInputForNestedTupleSourceRule.rewriteSubplanOperator(InlineSubplanInputForNestedTupleSourceRule.java:290)
>  ~[classes/:?]
>  at 
> org.apache.asterix.optimizer.rules.subplan.InlineSubplanInputForNestedTupleSourceRule.traverseNonSubplanOperator(InlineSubplanInputForNestedTupleSourceRule.java:332)
>  ~[classes/:?]
>  at 
> org.apache.asterix.optimizer.rules.subplan.InlineSubplanInputForNestedTupleSourceRule.rewriteSubplanOperator(InlineSubplanInputForNestedTupleSourceRule.java:290)
>  ~[classes/:?]
>  at 
> org.apache.asterix.optimizer.rules.subplan.InlineSubplanInputForNestedTupleSourceRule.traverseNonSubplanOperator(InlineSubplanInputForNestedTupleSourceRule.java:332)
>  ~[classes/:?]
>  at 
> 

[jira] [Created] (ASTERIXDB-2400) Bad compilation of CASE statement

2018-06-08 Thread Steven Jacobs (JIRA)
Steven Jacobs created ASTERIXDB-2400:


 Summary: Bad compilation of CASE statement 
 Key: ASTERIXDB-2400
 URL: https://issues.apache.org/jira/browse/ASTERIXDB-2400
 Project: Apache AsterixDB
  Issue Type: Improvement
Reporter: Steven Jacobs


The following valid syntax will throw an internal error:

drop dataverse channels if exists;
create dataverse channels;
use channels;

create type sub as {
 subscriptionId: uuid
};

create dataset subscriptions(sub) primary key subscriptionId;

upsert into subscriptions (

(let v = (select value s from subscriptions s where param0 = "HenryGale")

select value (CASE (array_count(v) > 0)

WHEN true THEN

{"subscriptionId":v[0].subscriptionId, "param0": 
v[0].param0,"brokerSubscriptions":(select value sub from 
v[0].brokerSubscriptions sub UNION ALL select value val from 
[\{"brokerSubscriptionId":create_uuid()}] val)}

ELSE

{"subscriptionId":create_uuid(),"param0": 
"HenryGale","brokerSubscriptions":[\{"brokerSubscriptionId":create_uuid()}]}

END
))
);

 

 

The stack trace is:

java.lang.NullPointerException: null
 at 
org.apache.asterix.om.typecomputer.impl.ListConstructorTypeComputer.computeTypeFromItems(ListConstructorTypeComputer.java:63)
 ~[classes/:?]
 at 
org.apache.asterix.om.typecomputer.impl.ListConstructorTypeComputer.computeType(ListConstructorTypeComputer.java:50)
 ~[classes/:?]
 at 
org.apache.asterix.dataflow.data.common.ExpressionTypeComputer.getTypeForFunction(ExpressionTypeComputer.java:84)
 ~[classes/:?]
 at 
org.apache.asterix.dataflow.data.common.ExpressionTypeComputer.getType(ExpressionTypeComputer.java:55)
 ~[classes/:?]
 at 
org.apache.hyracks.algebricks.core.algebra.operators.logical.AggregateOperator.computeOutputTypeEnvironment(AggregateOperator.java:106)
 ~[classes/:?]
 at 
org.apache.hyracks.algebricks.core.rewriter.base.AlgebricksOptimizationContext.computeAndSetTypeEnvironmentForOperator(AlgebricksOptimizationContext.java:298)
 ~[classes/:?]
 at 
org.apache.hyracks.algebricks.core.algebra.util.OperatorManipulationUtil.computeTypeEnvironmentBottomUp(OperatorManipulationUtil.java:296)
 ~[classes/:?]
 at 
org.apache.hyracks.algebricks.core.algebra.util.OperatorManipulationUtil.computeTypeEnvironmentBottomUp(OperatorManipulationUtil.java:286)
 ~[classes/:?]
 at 
org.apache.hyracks.algebricks.core.algebra.util.OperatorManipulationUtil.computeTypeEnvironmentBottomUp(OperatorManipulationUtil.java:286)
 ~[classes/:?]
 at 
org.apache.asterix.optimizer.rules.subplan.InlineSubplanInputForNestedTupleSourceRule.applyGeneralFlattening(InlineSubplanInputForNestedTupleSourceRule.java:422)
 ~[classes/:?]
 at 
org.apache.asterix.optimizer.rules.subplan.InlineSubplanInputForNestedTupleSourceRule.rewriteSubplanOperator(InlineSubplanInputForNestedTupleSourceRule.java:304)
 ~[classes/:?]
 at 
org.apache.asterix.optimizer.rules.subplan.InlineSubplanInputForNestedTupleSourceRule.traverseNonSubplanOperator(InlineSubplanInputForNestedTupleSourceRule.java:332)
 ~[classes/:?]
 at 
org.apache.asterix.optimizer.rules.subplan.InlineSubplanInputForNestedTupleSourceRule.rewriteSubplanOperator(InlineSubplanInputForNestedTupleSourceRule.java:290)
 ~[classes/:?]
 at 
org.apache.asterix.optimizer.rules.subplan.InlineSubplanInputForNestedTupleSourceRule.traverseNonSubplanOperator(InlineSubplanInputForNestedTupleSourceRule.java:332)
 ~[classes/:?]
 at 
org.apache.asterix.optimizer.rules.subplan.InlineSubplanInputForNestedTupleSourceRule.rewriteSubplanOperator(InlineSubplanInputForNestedTupleSourceRule.java:290)
 ~[classes/:?]
 at 
org.apache.asterix.optimizer.rules.subplan.InlineSubplanInputForNestedTupleSourceRule.traverseNonSubplanOperator(InlineSubplanInputForNestedTupleSourceRule.java:332)
 ~[classes/:?]
 at 
org.apache.asterix.optimizer.rules.subplan.InlineSubplanInputForNestedTupleSourceRule.rewriteSubplanOperator(InlineSubplanInputForNestedTupleSourceRule.java:290)
 ~[classes/:?]
 at 
org.apache.asterix.optimizer.rules.subplan.InlineSubplanInputForNestedTupleSourceRule.traverseNonSubplanOperator(InlineSubplanInputForNestedTupleSourceRule.java:332)
 ~[classes/:?]
 at 
org.apache.asterix.optimizer.rules.subplan.InlineSubplanInputForNestedTupleSourceRule.rewriteSubplanOperator(InlineSubplanInputForNestedTupleSourceRule.java:290)
 ~[classes/:?]
 at 
org.apache.asterix.optimizer.rules.subplan.InlineSubplanInputForNestedTupleSourceRule.traverseNonSubplanOperator(InlineSubplanInputForNestedTupleSourceRule.java:332)
 ~[classes/:?]
 at 
org.apache.asterix.optimizer.rules.subplan.InlineSubplanInputForNestedTupleSourceRule.rewriteSubplanOperator(InlineSubplanInputForNestedTupleSourceRule.java:290)
 ~[classes/:?]
 at 
org.apache.asterix.optimizer.rules.subplan.InlineSubplanInputForNestedTupleSourceRule.traverseNonSubplanOperator(InlineSubplanInputForNestedTupleSourceRule.java:332)
 ~[classes/:?]
 at 

[jira] [Resolved] (ASTERIXDB-2391) IntroduceDynamicTypeCastRule doesn't propagate through insert

2018-06-05 Thread Steven Jacobs (JIRA)


 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-2391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs resolved ASTERIXDB-2391.
--
Resolution: Fixed

> IntroduceDynamicTypeCastRule doesn't propagate through insert
> -
>
> Key: ASTERIXDB-2391
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2391
> Project: Apache AsterixDB
>  Issue Type: Improvement
>Reporter: Steven Jacobs
>Assignee: Steven Jacobs
>Priority: Major
>
> This causes returning clauses to refer to non-existent variables. It can be 
> reproduced by the following:
>  
> drop dataverse channels if exists;
>  create dataverse channels;
>  use channels;
> create type sub as
> { subscriptionId: uuid }
> ;
> create dataset subscriptions(sub) primary key subscriptionId;
> upsert into subscriptions as record(
>  [\{"subscriptionId":create_uuid()}]
> ) returning record.subscriptionId;
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (ASTERIXDB-2391) IntroduceDynamicTypeCastRule doesn't propagate through insert

2018-06-05 Thread Steven Jacobs (JIRA)


 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-2391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs reassigned ASTERIXDB-2391:


Assignee: Steven Jacobs

> IntroduceDynamicTypeCastRule doesn't propagate through insert
> -
>
> Key: ASTERIXDB-2391
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2391
> Project: Apache AsterixDB
>  Issue Type: Improvement
>Reporter: Steven Jacobs
>Assignee: Steven Jacobs
>Priority: Major
>
> This causes returning clauses to refer to non-existent variables. It can be 
> reproduced by the following:
>  
> drop dataverse channels if exists;
>  create dataverse channels;
>  use channels;
> create type sub as
> { subscriptionId: uuid }
> ;
> create dataset subscriptions(sub) primary key subscriptionId;
> upsert into subscriptions as record(
>  [\{"subscriptionId":create_uuid()}]
> ) returning record.subscriptionId;
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (ASTERIXDB-2391) IntroduceDynamicTypeCastRule doesn't propagate through insert

2018-05-24 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-2391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs updated ASTERIXDB-2391:
-
Description: 
This causes returning clauses to refer to non-existent variables. It can be 
reproduced by the following:

 

drop dataverse channels if exists;
 create dataverse channels;
 use channels;

create type sub as

{ subscriptionId: uuid }

;

create dataset subscriptions(sub) primary key subscriptionId;

upsert into subscriptions as record(
 [\{"subscriptionId":create_uuid()}]
) returning record.subscriptionId;

 

  was:
This causes returning clauses to refer to non-existent variables. It can be 
reproduced by the following:

 

drop dataverse channels if exists;
create dataverse channels;
use channels;

create type sub as {
 subscriptionId: uuid
};

create dataset subscriptions(sub) primary key subscriptionId;

upsert into subscriptions as record(
(let v = (select value s from subscriptions s where param0 = "HenryGale")
select value (CASE (array_count(v) > 0) 
WHEN true THEN {"subscriptionId":v[0].subscriptionId, "param0": 
v[0].param0,"brokerSubscriptions":v[0].brokerSubscriptions} 
ELSE {"subscriptionId":create_uuid(), "param0": "HenryGale", 
"brokerSubscriptions":[{"brokerSubscriptionId":create_uuid(), 
"brokerDataverse":"dataverse1","brokerName":"broker1"}]} 
END))
) returning record.brokerSubscriptions;

 


> IntroduceDynamicTypeCastRule doesn't propagate through insert
> -
>
> Key: ASTERIXDB-2391
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2391
> Project: Apache AsterixDB
>  Issue Type: Improvement
>Reporter: Steven Jacobs
>Priority: Major
>
> This causes returning clauses to refer to non-existent variables. It can be 
> reproduced by the following:
>  
> drop dataverse channels if exists;
>  create dataverse channels;
>  use channels;
> create type sub as
> { subscriptionId: uuid }
> ;
> create dataset subscriptions(sub) primary key subscriptionId;
> upsert into subscriptions as record(
>  [\{"subscriptionId":create_uuid()}]
> ) returning record.subscriptionId;
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ASTERIXDB-2391) IntroduceDynamicTypeCastRule doesn't propagate through insert

2018-05-24 Thread Steven Jacobs (JIRA)
Steven Jacobs created ASTERIXDB-2391:


 Summary: IntroduceDynamicTypeCastRule doesn't propagate through 
insert
 Key: ASTERIXDB-2391
 URL: https://issues.apache.org/jira/browse/ASTERIXDB-2391
 Project: Apache AsterixDB
  Issue Type: Improvement
Reporter: Steven Jacobs


This causes returning clauses to refer to non-existent variables. It can be 
reproduced by the following:

 

drop dataverse channels if exists;
create dataverse channels;
use channels;

create type sub as {
 subscriptionId: uuid
};

create dataset subscriptions(sub) primary key subscriptionId;

upsert into subscriptions as record(
(let v = (select value s from subscriptions s where param0 = "HenryGale")
select value (CASE (array_count(v) > 0) 
WHEN true THEN {"subscriptionId":v[0].subscriptionId, "param0": 
v[0].param0,"brokerSubscriptions":v[0].brokerSubscriptions} 
ELSE {"subscriptionId":create_uuid(), "param0": "HenryGale", 
"brokerSubscriptions":[{"brokerSubscriptionId":create_uuid(), 
"brokerDataverse":"dataverse1","brokerName":"broker1"}]} 
END))
) returning record.brokerSubscriptions;

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (ASTERIXDB-2386) Metadata cache should be refreshed after a cluster restart.

2018-05-15 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-2386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs resolved ASTERIXDB-2386.
--
Resolution: Fixed
  Assignee: Steven Jacobs

> Metadata cache should be refreshed after a cluster restart.
> ---
>
> Key: ASTERIXDB-2386
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2386
> Project: Apache AsterixDB
>  Issue Type: Improvement
>Reporter: Steven Jacobs
>Assignee: Steven Jacobs
>Priority: Major
>
> After a cluster restart, queries that don't first call "use" on the 
> dataverses involved will fail, because those dataverses aren't found in the 
> metadata cache. The bug can be reproduced as follows:
> drop dataverse channels if exists;
> create dataverse channels;
> use channels;
> create type TweetMessageTypeuuid as closed {
>  tweetid: uuid,
>  sender_location: point,
>  send_time: datetime,
>  referred_topics: {{ string }},
>  message_text: string,
>  countA: int32,
>  countB: int32
> };
> create dataset TweetMessageuuids(TweetMessageTypeuuid)
> primary key tweetid autogenerated;
> create function NearbyTweetsContainingText(place, text) {
>  (select m.message_text
>  from TweetMessageuuids m
>  where contains(m.message_text,text)
>  and spatial_intersect(m.sender_location, place))
> };
>  
> restart the cluster
> Then try the following:
> channels.NearbyTweetsContainingText("hello","world"); 
>  
> It works fine if you do this instead, because channels gets added to the 
> cache:
> use channels;
> channels.NearbyTweetsContainingText("hello","world");
>  
>  
>  
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ASTERIXDB-2386) Metadata cache should be refreshed after a cluster restart.

2018-05-09 Thread Steven Jacobs (JIRA)
Steven Jacobs created ASTERIXDB-2386:


 Summary: Metadata cache should be refreshed after a cluster 
restart.
 Key: ASTERIXDB-2386
 URL: https://issues.apache.org/jira/browse/ASTERIXDB-2386
 Project: Apache AsterixDB
  Issue Type: Improvement
Reporter: Steven Jacobs


After a cluster restart, queries that don't first call "use" on the dataverses 
involved will fail, because those dataverses aren't found in the metadata cache. 
The bug can be reproduced as follows:

drop dataverse channels if exists;
create dataverse channels;
use channels;

create type TweetMessageTypeuuid as closed {
 tweetid: uuid,
 sender_location: point,
 send_time: datetime,
 referred_topics: {{ string }},
 message_text: string,
 countA: int32,
 countB: int32
};


create dataset TweetMessageuuids(TweetMessageTypeuuid)
primary key tweetid autogenerated;

create function NearbyTweetsContainingText(place, text) {
 (select m.message_text
 from TweetMessageuuids m
 where contains(m.message_text,text)
 and spatial_intersect(m.sender_location, place))
};

 

restart the cluster

Then try the following:

channels.NearbyTweetsContainingText("hello","world"); 

 

It works fine if you do this instead, because channels gets added to the cache:

use channels;
channels.NearbyTweetsContainingText("hello","world");

 

 

 

 

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ASTERIXDB-2373) Allow Deployed Jobs to receive new Job Specifications

2018-04-26 Thread Steven Jacobs (JIRA)
Steven Jacobs created ASTERIXDB-2373:


 Summary: Allow Deployed Jobs to receive new Job Specifications
 Key: ASTERIXDB-2373
 URL: https://issues.apache.org/jira/browse/ASTERIXDB-2373
 Project: Apache AsterixDB
  Issue Type: Improvement
Reporter: Steven Jacobs
Assignee: Steven Jacobs


It may be desirable to create a new Job Specification for a Deployed Job. We 
need some way to handle this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (ASTERIXDB-2333) Filters don't work with autogenerated keys

2018-03-29 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-2333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs resolved ASTERIXDB-2333.
--
Resolution: Fixed

> Filters don't work with autogenerated keys
> --
>
> Key: ASTERIXDB-2333
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2333
> Project: Apache AsterixDB
>  Issue Type: Bug
>Reporter: Steven Jacobs
>Assignee: Steven Jacobs
>Priority: Major
>
> The following example fails (works without the filter). See TODO in 
> IntroduceAutogenerateIDRule
>  
> drop dataverse channels if exists;
> create dataverse channels;
> use channels;
> create type UserLocation as closed {
> recordId: uuid,
> location: circle,
> userName: string
> };
> create dataset UserLocations(UserLocation)
> primary key recordId autogenerated with filter on userName;
>  
> insert into UserLocations(
> {"userName" : "c1121u1" , "location" : circle("4171.58,1083.41 100.0")}
> );



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (ASTERIXDB-2333) Filters don't work with autogenerated keys

2018-03-29 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-2333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs reassigned ASTERIXDB-2333:


Assignee: Steven Jacobs  (was: Ian Maxon)

> Filters don't work with autogenerated keys
> --
>
> Key: ASTERIXDB-2333
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2333
> Project: Apache AsterixDB
>  Issue Type: Bug
>Reporter: Steven Jacobs
>Assignee: Steven Jacobs
>Priority: Major
>
> The following example fails (works without the filter). See TODO in 
> IntroduceAutogenerateIDRule
>  
> drop dataverse channels if exists;
> create dataverse channels;
> use channels;
> create type UserLocation as closed {
> recordId: uuid,
> location: circle,
> userName: string
> };
> create dataset UserLocations(UserLocation)
> primary key recordId autogenerated with filter on userName;
>  
> insert into UserLocations(
> {"userName" : "c1121u1" , "location" : circle("4171.58,1083.41 100.0")}
> );



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (ASTERIXDB-2337) Make current_date()/time() functions stable during query execution

2018-03-22 Thread Steven Jacobs (JIRA)

[ 
https://issues.apache.org/jira/browse/ASTERIXDB-2337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16410644#comment-16410644
 ] 

Steven Jacobs commented on ASTERIXDB-2337:
--

Just curious, has this change been discussed with Mike already? We had lots of 
discussion about this in the past, and there is specific instrumentation in 
place to keep non-pure functions from being moved within a plan. I think with 
this change the time functions could, and probably should, become movable, since 
they will return the same result from anywhere in the plan. In that case we need 
to make sure they still work with index searches as well.

I have comments about the change as well but I'll put those on gerrit.

> Make current_date()/time() functions stable during query execution
> --
>
> Key: ASTERIXDB-2337
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2337
> Project: Apache AsterixDB
>  Issue Type: Bug
>  Components: FUN - Functions
>Reporter: Dmitry Lychagin
>Assignee: Dmitry Lychagin
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (ASTERIXDB-2337) Make current_date()/time() functions stable during query execution

2018-03-22 Thread Steven Jacobs (JIRA)

[ 
https://issues.apache.org/jira/browse/ASTERIXDB-2337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16410051#comment-16410051
 ] 

Steven Jacobs commented on ASTERIXDB-2337:
--

Hey, do you have more details on this issue? These functions have a big role in 
BAD.

> Make current_date()/time() functions stable during query execution
> --
>
> Key: ASTERIXDB-2337
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2337
> Project: Apache AsterixDB
>  Issue Type: Bug
>  Components: FUN - Functions
>Reporter: Dmitry Lychagin
>Assignee: Dmitry Lychagin
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ASTERIXDB-2333) Filters don't work with autogenerated keys

2018-03-20 Thread Steven Jacobs (JIRA)
Steven Jacobs created ASTERIXDB-2333:


 Summary: Filters don't work with autogenerated keys
 Key: ASTERIXDB-2333
 URL: https://issues.apache.org/jira/browse/ASTERIXDB-2333
 Project: Apache AsterixDB
  Issue Type: Bug
Reporter: Steven Jacobs


The following example fails (works without the filter). See TODO in 
IntroduceAutogenerateIDRule

 

drop dataverse channels if exists;
create dataverse channels;
use channels;
create type UserLocation as closed {
recordId: uuid,
location: circle,
userName: string
};
create dataset UserLocations(UserLocation)
primary key recordId autogenerated with filter on userName;

 

insert into UserLocations(
{"userName" : "c1121u1" , "location" : circle("4171.58,1083.41 100.0")}
);



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (ASTERIXDB-2332) PointableBinaryComparatorFactory is sharing a variable across threads unsafely

2018-03-19 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-2332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs resolved ASTERIXDB-2332.
--
Resolution: Fixed

> PointableBinaryComparatorFactory is sharing a variable across threads unsafely
> --
>
> Key: ASTERIXDB-2332
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2332
> Project: Apache AsterixDB
>  Issue Type: Bug
>Reporter: Steven Jacobs
>Assignee: Steven Jacobs
>Priority: Major
>
> Both the RecordMerge and RecordRemoveFields descriptors currently share a 
> single comparator instance created by 
> PointableBinaryComparatorFactory.createBinaryComparator(), which breaks when 
> these evaluators are run in multithreaded environments.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (ASTERIXDB-2295) Intermittent failures in objects/object-add-fields

2018-03-19 Thread Steven Jacobs (JIRA)

[ 
https://issues.apache.org/jira/browse/ASTERIXDB-2295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16402400#comment-16402400
 ] 

Steven Jacobs edited comment on ASTERIXDB-2295 at 3/19/18 2:57 PM:
---

The change for ASTERIXDB-2332 should fix the remove-fields errors. I'll leave 
this open for add-fields.


was (Author: sjaco002):
The change for ASTERIXDB-2332 should fix the remove-fields errors. I re-enabled 
the tests and will close this issue for the moment.

> Intermittent failures in objects/object-add-fields
> --
>
> Key: ASTERIXDB-2295
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2295
> Project: Apache AsterixDB
>  Issue Type: Bug
>  Components: FUN - Functions
>Reporter: Dmitry Lychagin
>Assignee: Steven Jacobs
>Priority: Minor
>  Labels: triaged
>
> objects/object-add-fields tests are currently disabled because of 
> intermittent issues. They seem to produce corrupt records. Need to re-enable 
> them and investigate.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (ASTERIXDB-2295) Intermittent failures in objects/object-add-fields

2018-03-19 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-2295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs updated ASTERIXDB-2295:
-
Description: objects/object-add-fields tests are currently disabled because 
of intermittent issues. They seem to produce corrupt records. Need to re-enable 
them and investigate.  (was: objects/object-add-fields, object-remove-fields 
tests are currently disabled because of intermittent issues. They seem to 
produce corrupt records. Need to re-enable them and investigate.)

> Intermittent failures in objects/object-add-fields
> --
>
> Key: ASTERIXDB-2295
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2295
> Project: Apache AsterixDB
>  Issue Type: Bug
>  Components: FUN - Functions
>Reporter: Dmitry Lychagin
>Assignee: Steven Jacobs
>Priority: Minor
>  Labels: triaged
>
> objects/object-add-fields tests are currently disabled because of 
> intermittent issues. They seem to produce corrupt records. Need to re-enable 
> them and investigate.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (ASTERIXDB-2295) Intermittent failures in objects/object-add-fields

2018-03-19 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-2295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs updated ASTERIXDB-2295:
-
Summary: Intermittent failures in objects/object-add-fields  (was: 
Intermittent failures in objects/object-add-fields, object-remove-fields )

> Intermittent failures in objects/object-add-fields
> --
>
> Key: ASTERIXDB-2295
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2295
> Project: Apache AsterixDB
>  Issue Type: Bug
>  Components: FUN - Functions
>Reporter: Dmitry Lychagin
>Assignee: Steven Jacobs
>Priority: Minor
>  Labels: triaged
>
> objects/object-add-fields, object-remove-fields tests are currently disabled 
> because of intermittent issues. They seem to produce corrupt records. Need to 
> re-enable them and investigate.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (ASTERIXDB-2295) Intermittent failures in objects/object-add-fields, object-remove-fields

2018-03-16 Thread Steven Jacobs (JIRA)

[ 
https://issues.apache.org/jira/browse/ASTERIXDB-2295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16402400#comment-16402400
 ] 

Steven Jacobs commented on ASTERIXDB-2295:
--

The change for ASTERIXDB-2332 should fix the remove-fields errors. I re-enabled 
the tests and will close this issue for the moment.

> Intermittent failures in objects/object-add-fields, object-remove-fields  
> -
>
> Key: ASTERIXDB-2295
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2295
> Project: Apache AsterixDB
>  Issue Type: Bug
>  Components: FUN - Functions
>Reporter: Dmitry Lychagin
>Assignee: Steven Jacobs
>Priority: Minor
>  Labels: triaged
>
> objects/object-add-fields, object-remove-fields tests are currently disabled 
> because of intermittent issues. They seem to produce corrupt records. Need to 
> re-enable them and investigate.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (ASTERIXDB-1268) Sporadic test failure in record-remove-fields test

2018-03-16 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-1268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs closed ASTERIXDB-1268.

Resolution: Duplicate

> Sporadic test failure in record-remove-fields test
> --
>
> Key: ASTERIXDB-1268
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-1268
> Project: Apache AsterixDB
>  Issue Type: Bug
>  Components: FUN - Functions
>Reporter: Yingyi Bu
>Assignee: Heri Ramampiaro
>Priority: Critical
>
> {noformat}
> Error Message
> Test 
> "src/test/resources/runtimets/queries/records/record-remove-fields/highly-nested-open/highly-nested-open.3.query.aql"
>  FAILED!
> Stacktrace
> java.lang.Exception: Test 
> "src/test/resources/runtimets/queries/records/record-remove-fields/highly-nested-open/highly-nested-open.3.query.aql"
>  FAILED!
>   at 
> org.apache.asterix.test.aql.TestExecutor.runScriptAndCompareWithResult(TestExecutor.java:116)
>   at 
> org.apache.asterix.test.aql.TestExecutor.executeTest(TestExecutor.java:495)
>   at 
> org.apache.asterix.test.runtime.ExecutionTest.test(ExecutionTest.java:96)
> Standard Error
> Expected results file: 
> src/test/resources/runtimets/results/records/record-remove-fields/highly-nested-open/highly-nested-open.3.adm
> testFile 
> src/test/resources/runtimets/queries/records/record-remove-fields/highly-nested-open/highly-nested-open.3.query.aql
>  raised an exception:
> java.lang.Exception: Result for 
> src/test/resources/runtimets/queries/records/record-remove-fields/highly-nested-open/highly-nested-open.3.query.aql
>  changed at line 1:
> < { "id": 1, "class": { "id": 1 } }
> > { "id": 1, "class": { "id": 1, "fullClassification": { "id": 1, "Kingdom": 
> > "Animalia", "lower": { "id": 1, "Phylum": "Chordata", "lower": { "id": 1, 
> > "Class": "Mammalia", "lower": { "id": 1, "Order": "Carnivora", "lower": { 
> > "id": 1, "Family": "Mustelinae", "lower": { "id": 1, "Genus": "Gulo", 
> > "lower": { "id": 1, "Species": "Gulo" } } } } } } } } }
>   at 
> org.apache.asterix.test.aql.TestExecutor.runScriptAndCompareWithResult(TestExecutor.java:116)
>   at 
> org.apache.asterix.test.aql.TestExecutor.executeTest(TestExecutor.java:495)
>   at 
> org.apache.asterix.test.runtime.ExecutionTest.test(ExecutionTest.java:96)
>   at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at org.junit.runners.Suite.runChild(Suite.java:127)
>   at org.junit.runners.Suite.runChild(Suite.java:26)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>   at 
> 

[jira] [Commented] (ASTERIXDB-2332) PointableBinaryComparatorFactory is sharing a variable across threads unsafely

2018-03-16 Thread Steven Jacobs (JIRA)

[ 
https://issues.apache.org/jira/browse/ASTERIXDB-2332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16402321#comment-16402321
 ] 

Steven Jacobs commented on ASTERIXDB-2332:
--

Yes, I should have mentioned this in the meeting. The bug applies to both Merge 
and Remove fields. I'll take a look.

> PointableBinaryComparatorFactory is sharing a variable across threads unsafely
> --
>
> Key: ASTERIXDB-2332
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2332
> Project: Apache AsterixDB
>  Issue Type: Bug
>Reporter: Steven Jacobs
>Assignee: Steven Jacobs
>Priority: Major
>
> Both the RecordMerge and RecordRemoveFields descriptors currently share a 
> single comparator instance created by 
> PointableBinaryComparatorFactory.createBinaryComparator(), which breaks when 
> these evaluators are run in multithreaded environments.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ASTERIXDB-2332) PointableBinaryComparatorFactory is sharing a variable across threads unsafely

2018-03-15 Thread Steven Jacobs (JIRA)
Steven Jacobs created ASTERIXDB-2332:


 Summary: PointableBinaryComparatorFactory is sharing a variable 
across threads unsafely
 Key: ASTERIXDB-2332
 URL: https://issues.apache.org/jira/browse/ASTERIXDB-2332
 Project: Apache AsterixDB
  Issue Type: Bug
Reporter: Steven Jacobs


Both the RecordMerge and RecordRemoveFields descriptors currently share a single 
comparator instance created by 
PointableBinaryComparatorFactory.createBinaryComparator(), which breaks when 
these evaluators are run in multithreaded environments.
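
The hazard and the usual fix can be sketched as follows; the classes below are 
hypothetical stand-ins (not the actual descriptor or Hyracks comparator code), 
and the point is simply that each evaluator instance should create its own 
comparator rather than every thread sharing one object:

{code:java}
import java.util.Arrays;

final class ComparatorSharingSketch {
    // Stand-in for IBinaryComparator: typically stateful, hence unsafe to share.
    interface BinaryComparator { int compare(byte[] a, byte[] b); }

    interface ComparatorFactory { BinaryComparator createBinaryComparator(); }

    static final ComparatorFactory FACTORY = () -> (a, b) -> Arrays.compare(a, b);

    // Broken pattern: one comparator object shared by every evaluator/thread.
    static final BinaryComparator SHARED = FACTORY.createBinaryComparator();

    // Fixed pattern: each evaluator gets its own comparator at creation time.
    static final class Evaluator {
        private final BinaryComparator cmp = FACTORY.createBinaryComparator();
        boolean sameField(byte[] a, byte[] b) { return cmp.compare(a, b) == 0; }
    }

    public static void main(String[] args) {
        Evaluator e1 = new Evaluator();
        Evaluator e2 = new Evaluator();
        System.out.println(e1.sameField("x".getBytes(), "x".getBytes())); // true
        System.out.println(e2.sameField("x".getBytes(), "y".getBytes())); // false
    }
}
{code}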



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (ASTERIXDB-2332) PointableBinaryComparatorFactory is sharing a variable across threads unsafely

2018-03-15 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-2332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs reassigned ASTERIXDB-2332:


Assignee: Steven Jacobs

> PointableBinaryComparatorFactory is sharing a variable across threads unsafely
> --
>
> Key: ASTERIXDB-2332
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2332
> Project: Apache AsterixDB
>  Issue Type: Bug
>Reporter: Steven Jacobs
>Assignee: Steven Jacobs
>Priority: Major
>
> Both the RecordMerge and RecordRemoveFields descriptors currently share a 
> single comparator instance created by 
> PointableBinaryComparatorFactory.createBinaryComparator(), which breaks when 
> these evaluators are run in multithreaded environments.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (ASTERIXDB-1033) current-datetime() needs a little more documentation

2018-03-09 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-1033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs resolved ASTERIXDB-1033.
--
Resolution: Fixed

> current-datetime()  needs a little more documentation
> -
>
> Key: ASTERIXDB-1033
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-1033
> Project: Apache AsterixDB
>  Issue Type: Bug
>  Components: *DB - AsterixDB, DOC - Documentation
>Reporter: asterixdb-importer
>Assignee: Steven Jacobs
>Priority: Minor
>  Labels: soon
>
> current-datetime()  needs a little more documentation



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (ASTERIXDB-1224) Allow qualified paths in type when creating a dataset

2018-03-09 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-1224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs resolved ASTERIXDB-1224.
--
Resolution: Fixed

> Allow qualified paths in type when creating a dataset
> -
>
> Key: ASTERIXDB-1224
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-1224
> Project: Apache AsterixDB
>  Issue Type: Improvement
>Reporter: Steven Jacobs
>Assignee: Steven Jacobs
>Priority: Major
>
> Right now, when creating a dataset, the type you use is assumed to be in the 
> same dataverse as the new dataset. It would be nice to be able to specify the 
> dataverse of the type you are using.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (ASTERIXDB-2173) [BAD] Sporadic Failures in concurrent_procedure Test

2018-03-09 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-2173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs resolved ASTERIXDB-2173.
--
Resolution: Fixed

> [BAD] Sporadic Failures in concurrent_procedure Test
> 
>
> Key: ASTERIXDB-2173
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2173
> Project: Apache AsterixDB
>  Issue Type: Bug
>  Components: OTH - Other
>Reporter: Murtadha Hubail
>Assignee: Steven Jacobs
>Priority: Major
>
> Sporadic failures with high frequency are happening on BAD 
> concurrent_procedure test.
> Here is the test output:
> {code:java}
> java.lang.Exception: Test 
> "src/test/resources/runtimets/queries/procedure/concurrent_procedure/concurrent_procedure.4.query.sqlpp"
>  FAILED!
>   at 
> org.apache.asterix.bad.test.BADExecutionTest.test(BADExecutionTest.java:96)
> Caused by: org.apache.asterix.test.common.ComparisonException: 
> Result for 
> src/test/resources/runtimets/queries/procedure/concurrent_procedure/concurrent_procedure.4.query.sqlpp
>  changed at line 1:
> < 6
> > 5
>   at 
> org.apache.asterix.bad.test.BADExecutionTest.test(BADExecutionTest.java:96)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (ASTERIXDB-2214) Prevent dropping of Functions Used By Active Entities

2018-03-09 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-2214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs resolved ASTERIXDB-2214.
--
Resolution: Fixed

> Prevent dropping of Functions Used By Active Entities
> -
>
> Key: ASTERIXDB-2214
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2214
> Project: Apache AsterixDB
>  Issue Type: Bug
>Reporter: Steven Jacobs
>Assignee: Steven Jacobs
>Priority: Major
>
> Right now this check is specialized for Feeds. It should be abstracted so that 
> it covers Active Entities in general.
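
One way to picture the generalized check (hypothetical interfaces only; this is 
not the actual AsterixDB metadata code): before a function is dropped, every 
registered active entity, feed or otherwise, is asked whether it uses that 
function:

{code:java}
import java.util.List;

final class FunctionDropCheckSketch {
    interface ActiveEntity {
        String name();
        boolean usesFunction(String functionName);
    }

    static void checkFunctionIsDroppable(String functionName, List<ActiveEntity> entities) {
        for (ActiveEntity entity : entities) {
            if (entity.usesFunction(functionName)) {
                throw new IllegalStateException("Cannot drop function " + functionName
                        + ": it is used by active entity " + entity.name());
            }
        }
    }

    public static void main(String[] args) {
        ActiveEntity channel = new ActiveEntity() {
            public String name() { return "EmergenciesNearMeChannel"; }
            public boolean usesFunction(String fn) { return "RecentEmergenciesNearUser".equals(fn); }
        };
        checkFunctionIsDroppable("someOtherFunction", List.of(channel));            // passes
        checkFunctionIsDroppable("RecentEmergenciesNearUser", List.of(channel));    // throws
    }
}
{code}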



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (ASTERIXDB-2308) fileMapManager in VirtualBufferCache is not cleared after index drop

2018-03-08 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-2308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs updated ASTERIXDB-2308:
-
Attachment: nc-4.log
nc-3.log
nc-2.log
nc-1.log
cc.log

> fileMapManager in VirtualBufferCache is not cleared after index drop
> 
>
> Key: ASTERIXDB-2308
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2308
> Project: Apache AsterixDB
>  Issue Type: Bug
>Reporter: Xikui Wang
>Assignee: Chen Luo
>Priority: Major
> Attachments: cc.log, nc-1.log, nc-2.log, nc-3.log, nc-4.log
>
>
> This causes duplicate mappings and eventually causes feeds to hang.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (ASTERIXDB-2308) fileMapManager in VirtualBufferCache is not cleared after index drop

2018-03-08 Thread Steven Jacobs (JIRA)

[ 
https://issues.apache.org/jira/browse/ASTERIXDB-2308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16392315#comment-16392315
 ] 

Steven Jacobs commented on ASTERIXDB-2308:
--

[~tillw]: Attached logs from various failures

> fileMapManager in VirtualBufferCache is not cleared after index drop
> 
>
> Key: ASTERIXDB-2308
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2308
> Project: Apache AsterixDB
>  Issue Type: Bug
>Reporter: Xikui Wang
>Assignee: Chen Luo
>Priority: Major
> Attachments: cc.log, nc-1.log, nc-2.log, nc-3.log, nc-4.log
>
>
> This causes duplicate mappings and eventually causes feeds to hang.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (ASTERIXDB-2260) CC doesn't pass extension INI sections to NCService

2018-01-23 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-2260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs resolved ASTERIXDB-2260.
--
Resolution: Fixed

> CC doesn't pass extension INI sections to NCService
> ---
>
> Key: ASTERIXDB-2260
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2260
> Project: Apache AsterixDB
>  Issue Type: Bug
>  Components: CONF - Configuration
>Reporter: Ian Maxon
>Assignee: Ian Maxon
>Priority: Major
>
> Summary says it all.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (ASTERIXDB-2239) The new Extension Framework without Managix doesn't seem to work.

2018-01-23 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-2239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs closed ASTERIXDB-2239.

Resolution: Fixed

> The new Extension Framework without Managix doesn't seem to work.
> -
>
> Key: ASTERIXDB-2239
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2239
> Project: Apache AsterixDB
>  Issue Type: Bug
>Reporter: Steven Jacobs
>Assignee: Ian Maxon
>Priority: Major
>
> Following this guide: 
> https://cwiki.apache.org/confluence/display/ASTERIXDB/Creating+a+BAD+Cluster+of+AsterixDB
> And then trying a statement, e.g.
> create broker peterListBroker at "http://127.0.0.1:5000/brokerNotifications";
> (Actually any statement will give the error)
> Gives the error trace:
> Jan 12, 2018 11:48:06 AM 
> org.apache.asterix.bad.lang.statement.CreateBrokerStatement handle
> WARNING: Failed creating a broker
> org.apache.hyracks.algebricks.common.exceptions.AlgebricksException: Metadata 
> Extension Index: 
> org.apache.asterix.metadata.api.ExtensionMetadataDatasetId@de83639e was not 
> found
> at 
> org.apache.asterix.metadata.MetadataNode.getEntities(MetadataNode.java:310)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:323)
> at sun.rmi.transport.Transport$1.run(Transport.java:200)
> at sun.rmi.transport.Transport$1.run(Transport.java:197)
> at java.security.AccessController.doPrivileged(Native Method)
> at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
> at 
> sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:568)
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:826)
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$256(TCPTransport.java:683)
> at java.security.AccessController.doPrivileged(Native Method)
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:682)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> at 
> sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:276)
> at 
> sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:253)
> at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:162)
> at 
> java.rmi.server.RemoteObjectInvocationHandler.invokeRemoteMethod(RemoteObjectInvocationHandler.java:227)
> at 
> java.rmi.server.RemoteObjectInvocationHandler.invoke(RemoteObjectInvocationHandler.java:179)
> at com.sun.proxy.$Proxy23.getEntities(Unknown Source)
> at 
> org.apache.asterix.metadata.MetadataManager.getEntities(MetadataManager.java:990)
> at 
> org.apache.asterix.bad.lang.BADLangExtension.getBroker(BADLangExtension.java:72)
> at 
> org.apache.asterix.bad.lang.statement.CreateBrokerStatement.handle(CreateBrokerStatement.java:89)
> at 
> org.apache.asterix.app.translator.QueryTranslator.compileAndExecute(QueryTranslator.java:410)
> at 
> org.apache.asterix.api.http.server.ApiServlet.post(ApiServlet.java:169)
> at 
> org.apache.hyracks.http.server.AbstractServlet.handle(AbstractServlet.java:92)
> at 
> org.apache.hyracks.http.server.HttpRequestHandler.handle(HttpRequestHandler.java:71)
> at 
> org.apache.hyracks.http.server.HttpRequestHandler.call(HttpRequestHandler.java:56)
> at 
> org.apache.hyracks.http.server.HttpRequestHandler.call(HttpRequestHandler.java:37)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> 11:48:06.779 [HttpExecutor(port:19001)-3] ERROR org.apache.asterix - 
> org.apache.hyracks.algebricks.common.exceptions.AlgebricksException: Metadata 
> Extension Index: 
> org.apache.asterix.metadata.api.ExtensionMetadataDatasetId@de83639e was not 
> found
> org.apache.hyracks.api.exceptions.HyracksDataException: 
> org.apache.hyracks.algebricks.common.exceptions.AlgebricksException: Metadata 
> Extension Index: 
> 

[jira] [Closed] (ASTERIXDB-2199) Nested primary key and hash repartitioning bug

2018-01-19 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-2199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs closed ASTERIXDB-2199.

Resolution: Fixed

> Nested primary key and hash repartitioning bug 
> ---
>
> Key: ASTERIXDB-2199
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2199
> Project: Apache AsterixDB
>  Issue Type: Bug
>  Components: *DB - AsterixDB
>Reporter: Shiva Jahangiri
>Assignee: Steven Jacobs
>Priority: Major
>
> If a join is happening on primary keys of two tables, no hash partitioning 
> should happen. Having the following DDL (note that the primary key of Friendship2 
> is string):
> DROP DATAVERSE Facebook IF EXISTS;
> CREATE DATAVERSE Facebook;
> Use Facebook;
> CREATE TYPE FriendshipType AS closed {
>   id:string,
>   friends :[string]
> };
> CREATE DATASET Friendship2(FriendshipType)
> PRIMARY KEY id; 
> insert into Friendship2([ {"id":"1","friends" : [ "2","3","4"]}, 
> {"id":"2","friends" : [ "4","5","6"]}
> ]);
> By running the following query:
> Use Facebook;
> select * from Friendship2 first, Friendship2 second where first.id = 
> second.id;
> we can see that there is no hash partitioning happening in optimized logical 
> plan which is correct as join is happening on the primary key of both 
> relations and data is already partitioned on primary key:
> {
>  "operator":"distribute-result",
>  "expressions":"$$9",
>  "operatorId" : "1.1",
>  "physical-operator":"DISTRIBUTE_RESULT",
>  "execution-mode":"PARTITIONED",
>  "inputs":[
>  {
>   "operator":"exchange",
>   "operatorId" : "1.2",
>   "physical-operator":"ONE_TO_ONE_EXCHANGE",
>   "execution-mode":"PARTITIONED",
>   "inputs":[
>   {
>"operator":"project",
>"variables" :["$$9"],
>"operatorId" : "1.3",
>"physical-operator":"STREAM_PROJECT",
>"execution-mode":"PARTITIONED",
>"inputs":[
>{
> "operator":"assign",
> "variables" :["$$9"],
> "expressions":"{ first : $$first,  second : $$second}",
> "operatorId" : "1.4",
> "physical-operator":"ASSIGN",
> "execution-mode":"PARTITIONED",
> "inputs":[
> {
>  "operator":"project",
>  "variables" :["$$first","$$second"],
>  "operatorId" : "1.5",
>  "physical-operator":"STREAM_PROJECT",
>  "execution-mode":"PARTITIONED",
>  "inputs":[
>  {
>   "operator":"exchange",
>   "operatorId" : "1.6",
>   "physical-operator":"ONE_TO_ONE_EXCHANGE",
>   "execution-mode":"PARTITIONED",
>   "inputs":[
>   {
>"operator":"join",
>"condition":"eq($$10, $$11)",
>"operatorId" : "1.7",
>"physical-operator":"HYBRID_HASH_JOIN 
> [$$10][$$11]",
>"execution-mode":"PARTITIONED",
>"inputs":[
>{
> "operator":"exchange",
> "operatorId" : "1.8",
> "physical-operator":"ONE_TO_ONE_EXCHANGE",
> "execution-mode":"PARTITIONED",
> "inputs":[
> {
>  "operator":"data-scan",
>  "variables" :["$$10","$$first"],
>  "data-source":"Facebook.Friendship2",
>  "operatorId" : "1.9",
>  
> "physical-operator":"DATASOURCE_SCAN",
>  "execution-mode":"PARTITIONED",
>  "inputs":[
>  {
>   "operator":"exchange",
>   "operatorId" : "1.10",
>   
> "physical-operator":"ONE_TO_ONE_EXCHANGE",
>   "execution-mode":"PARTITIONED",
>   "inputs":[
>   {
>
> "operator":"empty-tuple-source",
>  

[jira] [Updated] (ASTERIXDB-2239) The new Extension Framework without Managix doesn't seem to work.

2018-01-12 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-2239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs updated ASTERIXDB-2239:
-
Description: 
Following this guide: 
https://cwiki.apache.org/confluence/display/ASTERIXDB/Creating+a+BAD+Cluster+of+AsterixDB

And then trying a statement, e.g.
create broker peterListBroker at "http://127.0.0.1:5000/brokerNotifications";
(Actually any statement will give the error)

Gives the error trace:

Jan 12, 2018 11:48:06 AM 
org.apache.asterix.bad.lang.statement.CreateBrokerStatement handle
WARNING: Failed creating a broker
org.apache.hyracks.algebricks.common.exceptions.AlgebricksException: Metadata 
Extension Index: 
org.apache.asterix.metadata.api.ExtensionMetadataDatasetId@de83639e was not 
found
at 
org.apache.asterix.metadata.MetadataNode.getEntities(MetadataNode.java:310)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:323)
at sun.rmi.transport.Transport$1.run(Transport.java:200)
at sun.rmi.transport.Transport$1.run(Transport.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
at 
sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:568)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:826)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$256(TCPTransport.java:683)
at java.security.AccessController.doPrivileged(Native Method)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:682)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at 
sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:276)
at 
sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:253)
at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:162)
at 
java.rmi.server.RemoteObjectInvocationHandler.invokeRemoteMethod(RemoteObjectInvocationHandler.java:227)
at 
java.rmi.server.RemoteObjectInvocationHandler.invoke(RemoteObjectInvocationHandler.java:179)
at com.sun.proxy.$Proxy23.getEntities(Unknown Source)
at 
org.apache.asterix.metadata.MetadataManager.getEntities(MetadataManager.java:990)
at 
org.apache.asterix.bad.lang.BADLangExtension.getBroker(BADLangExtension.java:72)
at 
org.apache.asterix.bad.lang.statement.CreateBrokerStatement.handle(CreateBrokerStatement.java:89)
at 
org.apache.asterix.app.translator.QueryTranslator.compileAndExecute(QueryTranslator.java:410)
at 
org.apache.asterix.api.http.server.ApiServlet.post(ApiServlet.java:169)
at 
org.apache.hyracks.http.server.AbstractServlet.handle(AbstractServlet.java:92)
at 
org.apache.hyracks.http.server.HttpRequestHandler.handle(HttpRequestHandler.java:71)
at 
org.apache.hyracks.http.server.HttpRequestHandler.call(HttpRequestHandler.java:56)
at 
org.apache.hyracks.http.server.HttpRequestHandler.call(HttpRequestHandler.java:37)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

11:48:06.779 [HttpExecutor(port:19001)-3] ERROR org.apache.asterix - 
org.apache.hyracks.algebricks.common.exceptions.AlgebricksException: Metadata 
Extension Index: 
org.apache.asterix.metadata.api.ExtensionMetadataDatasetId@de83639e was not 
found
org.apache.hyracks.api.exceptions.HyracksDataException: 
org.apache.hyracks.algebricks.common.exceptions.AlgebricksException: Metadata 
Extension Index: 
org.apache.asterix.metadata.api.ExtensionMetadataDatasetId@de83639e was not 
found
at 
org.apache.asterix.bad.lang.statement.CreateBrokerStatement.handle(CreateBrokerStatement.java:101)
 ~[asterix-bad-1.0.0-SNAPSHOT.jar:1.0.0-SNAPSHOT]
at 
org.apache.asterix.app.translator.QueryTranslator.compileAndExecute(QueryTranslator.java:410)
 ~[asterix-app-0.9.3-SNAPSHOT.jar:0.9.3-SNAPSHOT]
at 
org.apache.asterix.api.http.server.ApiServlet.post(ApiServlet.java:169) 
[asterix-app-0.9.3-SNAPSHOT.jar:0.9.3-SNAPSHOT]
at 
org.apache.hyracks.http.server.AbstractServlet.handle(AbstractServlet.java:92) 

[jira] [Created] (ASTERIXDB-2239) The new Extension Framework without Managix doesn't seem to work.

2018-01-12 Thread Steven Jacobs (JIRA)
Steven Jacobs created ASTERIXDB-2239:


 Summary: The new Extension Framework without Managix doesn't seem 
to work.
 Key: ASTERIXDB-2239
 URL: https://issues.apache.org/jira/browse/ASTERIXDB-2239
 Project: Apache AsterixDB
  Issue Type: Bug
Reporter: Steven Jacobs
Assignee: Ian Maxon


Following this guide: 
https://cwiki.apache.org/confluence/display/ASTERIXDB/Creating+a+BAD+Cluster+of+AsterixDB

And then trying a BAD statement, e.g.
create broker peterListBroker at "http://127.0.0.1:5000/brokerNotifications";

Gives the error trace:

Jan 12, 2018 11:48:06 AM 
org.apache.asterix.bad.lang.statement.CreateBrokerStatement handle
WARNING: Failed creating a broker
org.apache.hyracks.algebricks.common.exceptions.AlgebricksException: Metadata 
Extension Index: 
org.apache.asterix.metadata.api.ExtensionMetadataDatasetId@de83639e was not 
found
at 
org.apache.asterix.metadata.MetadataNode.getEntities(MetadataNode.java:310)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:323)
at sun.rmi.transport.Transport$1.run(Transport.java:200)
at sun.rmi.transport.Transport$1.run(Transport.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
at 
sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:568)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:826)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$256(TCPTransport.java:683)
at java.security.AccessController.doPrivileged(Native Method)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:682)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at 
sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:276)
at 
sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:253)
at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:162)
at 
java.rmi.server.RemoteObjectInvocationHandler.invokeRemoteMethod(RemoteObjectInvocationHandler.java:227)
at 
java.rmi.server.RemoteObjectInvocationHandler.invoke(RemoteObjectInvocationHandler.java:179)
at com.sun.proxy.$Proxy23.getEntities(Unknown Source)
at 
org.apache.asterix.metadata.MetadataManager.getEntities(MetadataManager.java:990)
at 
org.apache.asterix.bad.lang.BADLangExtension.getBroker(BADLangExtension.java:72)
at 
org.apache.asterix.bad.lang.statement.CreateBrokerStatement.handle(CreateBrokerStatement.java:89)
at 
org.apache.asterix.app.translator.QueryTranslator.compileAndExecute(QueryTranslator.java:410)
at 
org.apache.asterix.api.http.server.ApiServlet.post(ApiServlet.java:169)
at 
org.apache.hyracks.http.server.AbstractServlet.handle(AbstractServlet.java:92)
at 
org.apache.hyracks.http.server.HttpRequestHandler.handle(HttpRequestHandler.java:71)
at 
org.apache.hyracks.http.server.HttpRequestHandler.call(HttpRequestHandler.java:56)
at 
org.apache.hyracks.http.server.HttpRequestHandler.call(HttpRequestHandler.java:37)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

11:48:06.779 [HttpExecutor(port:19001)-3] ERROR org.apache.asterix - 
org.apache.hyracks.algebricks.common.exceptions.AlgebricksException: Metadata 
Extension Index: 
org.apache.asterix.metadata.api.ExtensionMetadataDatasetId@de83639e was not 
found
org.apache.hyracks.api.exceptions.HyracksDataException: 
org.apache.hyracks.algebricks.common.exceptions.AlgebricksException: Metadata 
Extension Index: 
org.apache.asterix.metadata.api.ExtensionMetadataDatasetId@de83639e was not 
found
at 
org.apache.asterix.bad.lang.statement.CreateBrokerStatement.handle(CreateBrokerStatement.java:101)
 ~[asterix-bad-1.0.0-SNAPSHOT.jar:1.0.0-SNAPSHOT]
at 
org.apache.asterix.app.translator.QueryTranslator.compileAndExecute(QueryTranslator.java:410)
 ~[asterix-app-0.9.3-SNAPSHOT.jar:0.9.3-SNAPSHOT]
at 

[jira] [Comment Edited] (ASTERIXDB-2199) Nested primary key and hash repartitioning bug

2018-01-03 Thread Steven Jacobs (JIRA)

[ 
https://issues.apache.org/jira/browse/ASTERIXDB-2199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16310011#comment-16310011
 ] 

Steven Jacobs edited comment on ASTERIXDB-2199 at 1/3/18 6:49 PM:
--

[~wyk] Can you go ahead and file this separately and assign to me?


was (Author: sjaco002):
[~wyk] Can you go ahead and file this separately and assign to me? I'll go 
ahead and add it to the same CR though since it's in the same place in the code.

> Nested primary key and hash repartitioning bug 
> ---
>
> Key: ASTERIXDB-2199
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2199
> Project: Apache AsterixDB
>  Issue Type: Bug
>  Components: *DB - AsterixDB
>Reporter: Shiva Jahangiri
>Assignee: Steven Jacobs
>
> If a join is happening on primary keys of two tables, no hash partitioning 
> should happen. Having the following DDL (note that the primary key of Friendship2 
> is string):
> DROP DATAVERSE Facebook IF EXISTS;
> CREATE DATAVERSE Facebook;
> Use Facebook;
> CREATE TYPE FriendshipType AS closed {
>   id:string,
>   friends :[string]
> };
> CREATE DATASET Friendship2(FriendshipType)
> PRIMARY KEY id; 
> insert into Friendship2([ {"id":"1","friends" : [ "2","3","4"]}, 
> {"id":"2","friends" : [ "4","5","6"]}
> ]);
> By running the following query:
> Use Facebook;
> select * from Friendship2 first, Friendship2 second where first.id = 
> second.id;
> we can see that there is no hash partitioning happening in optimized logical 
> plan which is correct as join is happening on the primary key of both 
> relations and data is already partitioned on primary key:
> {
>  "operator":"distribute-result",
>  "expressions":"$$9",
>  "operatorId" : "1.1",
>  "physical-operator":"DISTRIBUTE_RESULT",
>  "execution-mode":"PARTITIONED",
>  "inputs":[
>  {
>   "operator":"exchange",
>   "operatorId" : "1.2",
>   "physical-operator":"ONE_TO_ONE_EXCHANGE",
>   "execution-mode":"PARTITIONED",
>   "inputs":[
>   {
>"operator":"project",
>"variables" :["$$9"],
>"operatorId" : "1.3",
>"physical-operator":"STREAM_PROJECT",
>"execution-mode":"PARTITIONED",
>"inputs":[
>{
> "operator":"assign",
> "variables" :["$$9"],
> "expressions":"{ first : $$first,  second : $$second}",
> "operatorId" : "1.4",
> "physical-operator":"ASSIGN",
> "execution-mode":"PARTITIONED",
> "inputs":[
> {
>  "operator":"project",
>  "variables" :["$$first","$$second"],
>  "operatorId" : "1.5",
>  "physical-operator":"STREAM_PROJECT",
>  "execution-mode":"PARTITIONED",
>  "inputs":[
>  {
>   "operator":"exchange",
>   "operatorId" : "1.6",
>   "physical-operator":"ONE_TO_ONE_EXCHANGE",
>   "execution-mode":"PARTITIONED",
>   "inputs":[
>   {
>"operator":"join",
>"condition":"eq($$10, $$11)",
>"operatorId" : "1.7",
>"physical-operator":"HYBRID_HASH_JOIN 
> [$$10][$$11]",
>"execution-mode":"PARTITIONED",
>"inputs":[
>{
> "operator":"exchange",
> "operatorId" : "1.8",
> "physical-operator":"ONE_TO_ONE_EXCHANGE",
> "execution-mode":"PARTITIONED",
> "inputs":[
> {
>  "operator":"data-scan",
>  "variables" :["$$10","$$first"],
>  "data-source":"Facebook.Friendship2",
>  "operatorId" : "1.9",
>  
> "physical-operator":"DATASOURCE_SCAN",
>  "execution-mode":"PARTITIONED",
>  "inputs":[
>  {
>   "operator":"exchange",
>   "operatorId" : "1.10",
>   
> "physical-operator":"ONE_TO_ONE_EXCHANGE",
>   

[jira] [Commented] (ASTERIXDB-2199) Nested primary key and hash repartitioning bug

2018-01-03 Thread Steven Jacobs (JIRA)

[ 
https://issues.apache.org/jira/browse/ASTERIXDB-2199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16310011#comment-16310011
 ] 

Steven Jacobs commented on ASTERIXDB-2199:
--

[~wyk] Can you go ahead and file this separately and assign to me? I'll go 
ahead and add it to the same CR though since it's in the same place in the code.

> Nested primary key and hash repartitioning bug 
> ---
>
> Key: ASTERIXDB-2199
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2199
> Project: Apache AsterixDB
>  Issue Type: Bug
>  Components: *DB - AsterixDB
>Reporter: Shiva Jahangiri
>Assignee: Steven Jacobs
>
> If a join is happening on primary keys of two tables, no hash partitioning 
> should happen. Having the following DDL (note that the primary key of Friendship2 
> is string):
> DROP DATAVERSE Facebook IF EXISTS;
> CREATE DATAVERSE Facebook;
> Use Facebook;
> CREATE TYPE FriendshipType AS closed {
>   id:string,
>   friends :[string]
> };
> CREATE DATASET Friendship2(FriendshipType)
> PRIMARY KEY id; 
> insert into Friendship2([ {"id":"1","friends" : [ "2","3","4"]}, 
> {"id":"2","friends" : [ "4","5","6"]}
> ]);
> By running the following query:
> Use Facebook;
> select * from Friendship2 first, Friendship2 second where first.id = 
> second.id;
> we can see that there is no hash partitioning happening in optimized logical 
> plan which is correct as join is happening on the primary key of both 
> relations and data is already partitioned on primary key:
> {
>  "operator":"distribute-result",
>  "expressions":"$$9",
>  "operatorId" : "1.1",
>  "physical-operator":"DISTRIBUTE_RESULT",
>  "execution-mode":"PARTITIONED",
>  "inputs":[
>  {
>   "operator":"exchange",
>   "operatorId" : "1.2",
>   "physical-operator":"ONE_TO_ONE_EXCHANGE",
>   "execution-mode":"PARTITIONED",
>   "inputs":[
>   {
>"operator":"project",
>"variables" :["$$9"],
>"operatorId" : "1.3",
>"physical-operator":"STREAM_PROJECT",
>"execution-mode":"PARTITIONED",
>"inputs":[
>{
> "operator":"assign",
> "variables" :["$$9"],
> "expressions":"{ first : $$first,  second : $$second}",
> "operatorId" : "1.4",
> "physical-operator":"ASSIGN",
> "execution-mode":"PARTITIONED",
> "inputs":[
> {
>  "operator":"project",
>  "variables" :["$$first","$$second"],
>  "operatorId" : "1.5",
>  "physical-operator":"STREAM_PROJECT",
>  "execution-mode":"PARTITIONED",
>  "inputs":[
>  {
>   "operator":"exchange",
>   "operatorId" : "1.6",
>   "physical-operator":"ONE_TO_ONE_EXCHANGE",
>   "execution-mode":"PARTITIONED",
>   "inputs":[
>   {
>"operator":"join",
>"condition":"eq($$10, $$11)",
>"operatorId" : "1.7",
>"physical-operator":"HYBRID_HASH_JOIN 
> [$$10][$$11]",
>"execution-mode":"PARTITIONED",
>"inputs":[
>{
> "operator":"exchange",
> "operatorId" : "1.8",
> "physical-operator":"ONE_TO_ONE_EXCHANGE",
> "execution-mode":"PARTITIONED",
> "inputs":[
> {
>  "operator":"data-scan",
>  "variables" :["$$10","$$first"],
>  "data-source":"Facebook.Friendship2",
>  "operatorId" : "1.9",
>  
> "physical-operator":"DATASOURCE_SCAN",
>  "execution-mode":"PARTITIONED",
>  "inputs":[
>  {
>   "operator":"exchange",
>   "operatorId" : "1.10",
>   
> "physical-operator":"ONE_TO_ONE_EXCHANGE",
>   "execution-mode":"PARTITIONED",
>   "inputs":[
>

[jira] [Resolved] (ASTERIXDB-2181) Nothing prevents creation of unusable functions

2018-01-02 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-2181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs resolved ASTERIXDB-2181.
--
Resolution: Fixed
  Assignee: Steven Jacobs

> Nothing prevents creation of unusable functions
> ---
>
> Key: ASTERIXDB-2181
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2181
> Project: Apache AsterixDB
>  Issue Type: Bug
>Reporter: Steven Jacobs
>Assignee: Steven Jacobs
>
> The following DDL will work correctly, and create two functions that are 
> currently unusable. It would be better to check whether a function is usable 
> before creating it.
> drop dataverse steven if exists;
> create dataverse steven;
> use steven;
> create function impossible(){
> (select * from something_that_is_not_there)
> };
> create function impossible2(){
> function_that_is_not_there()
> };



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (ASTERIXDB-2199) Nested primary key and hash repartitioning bug

2017-12-29 Thread Steven Jacobs (JIRA)

[ 
https://issues.apache.org/jira/browse/ASTERIXDB-2199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16306415#comment-16306415
 ] 

Steven Jacobs commented on ASTERIXDB-2199:
--

I'm guessing that this bug happens in Master as well? If so, I guess it would 
be another issue?

> Nested primary key and hash repartitioning bug 
> ---
>
> Key: ASTERIXDB-2199
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2199
> Project: Apache AsterixDB
>  Issue Type: Bug
>  Components: *DB - AsterixDB
>Reporter: Shiva Jahangiri
>Assignee: Steven Jacobs
>
> If a join is happening on primary keys of two tables, no hash partitioning 
> should happen. Having the following DDL (note that the primary key of Friendship2 
> is string):
> DROP DATAVERSE Facebook IF EXISTS;
> CREATE DATAVERSE Facebook;
> Use Facebook;
> CREATE TYPE FriendshipType AS closed {
>   id:string,
>   friends :[string]
> };
> CREATE DATASET Friendship2(FriendshipType)
> PRIMARY KEY id; 
> insert into Friendship2([ {"id":"1","friends" : [ "2","3","4"]}, 
> {"id":"2","friends" : [ "4","5","6"]}
> ]);
> By running the following query:
> Use Facebook;
> select * from Friendship2 first, Friendship2 second where first.id = 
> second.id;
> we can see that there is no hash partitioning happening in optimized logical 
> plan which is correct as join is happening on the primary key of both 
> relations and data is already partitioned on primary key:
> {
>  "operator":"distribute-result",
>  "expressions":"$$9",
>  "operatorId" : "1.1",
>  "physical-operator":"DISTRIBUTE_RESULT",
>  "execution-mode":"PARTITIONED",
>  "inputs":[
>  {
>   "operator":"exchange",
>   "operatorId" : "1.2",
>   "physical-operator":"ONE_TO_ONE_EXCHANGE",
>   "execution-mode":"PARTITIONED",
>   "inputs":[
>   {
>"operator":"project",
>"variables" :["$$9"],
>"operatorId" : "1.3",
>"physical-operator":"STREAM_PROJECT",
>"execution-mode":"PARTITIONED",
>"inputs":[
>{
> "operator":"assign",
> "variables" :["$$9"],
> "expressions":"{ first : $$first,  second : $$second}",
> "operatorId" : "1.4",
> "physical-operator":"ASSIGN",
> "execution-mode":"PARTITIONED",
> "inputs":[
> {
>  "operator":"project",
>  "variables" :["$$first","$$second"],
>  "operatorId" : "1.5",
>  "physical-operator":"STREAM_PROJECT",
>  "execution-mode":"PARTITIONED",
>  "inputs":[
>  {
>   "operator":"exchange",
>   "operatorId" : "1.6",
>   "physical-operator":"ONE_TO_ONE_EXCHANGE",
>   "execution-mode":"PARTITIONED",
>   "inputs":[
>   {
>"operator":"join",
>"condition":"eq($$10, $$11)",
>"operatorId" : "1.7",
>"physical-operator":"HYBRID_HASH_JOIN 
> [$$10][$$11]",
>"execution-mode":"PARTITIONED",
>"inputs":[
>{
> "operator":"exchange",
> "operatorId" : "1.8",
> "physical-operator":"ONE_TO_ONE_EXCHANGE",
> "execution-mode":"PARTITIONED",
> "inputs":[
> {
>  "operator":"data-scan",
>  "variables" :["$$10","$$first"],
>  "data-source":"Facebook.Friendship2",
>  "operatorId" : "1.9",
>  
> "physical-operator":"DATASOURCE_SCAN",
>  "execution-mode":"PARTITIONED",
>  "inputs":[
>  {
>   "operator":"exchange",
>   "operatorId" : "1.10",
>   
> "physical-operator":"ONE_TO_ONE_EXCHANGE",
>   "execution-mode":"PARTITIONED",
>   "inputs":[
>   {
>  

[jira] [Commented] (ASTERIXDB-2199) Nested primary key and hash repartitioning bug

2017-12-28 Thread Steven Jacobs (JIRA)

[ 
https://issues.apache.org/jira/browse/ASTERIXDB-2199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305926#comment-16305926
 ] 

Steven Jacobs commented on ASTERIXDB-2199:
--

Mike: See my comment from yesterday. The fix for this issue is waiting for 
review. Wail brought up something that might be a separate issue, which he is 
going to check to see whether it exists.

> Nested primary key and hash repartitioning bug 
> ---
>
> Key: ASTERIXDB-2199
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2199
> Project: Apache AsterixDB
>  Issue Type: Bug
>  Components: *DB - AsterixDB
>Reporter: Shiva Jahangiri
>Assignee: Steven Jacobs
>
> If a join is happening on primary keys of two tables, no hash partitioning 
> should happen. Having the following DDL (note that the primary key of Friendship2 
> is string):
> DROP DATAVERSE Facebook IF EXISTS;
> CREATE DATAVERSE Facebook;
> Use Facebook;
> CREATE TYPE FriendshipType AS closed {
>   id:string,
>   friends :[string]
> };
> CREATE DATASET Friendship2(FriendshipType)
> PRIMARY KEY id; 
> insert into Friendship2([ {"id":"1","friends" : [ "2","3","4"]}, 
> {"id":"2","friends" : [ "4","5","6"]}
> ]);
> By running the following query:
> Use Facebook;
> select * from Friendship2 first, Friendship2 second where first.id = 
> second.id;
> we can see that there is no hash partitioning happening in optimized logical 
> plan which is correct as join is happening on the primary key of both 
> relations and data is already partitioned on primary key:
> {
>  "operator":"distribute-result",
>  "expressions":"$$9",
>  "operatorId" : "1.1",
>  "physical-operator":"DISTRIBUTE_RESULT",
>  "execution-mode":"PARTITIONED",
>  "inputs":[
>  {
>   "operator":"exchange",
>   "operatorId" : "1.2",
>   "physical-operator":"ONE_TO_ONE_EXCHANGE",
>   "execution-mode":"PARTITIONED",
>   "inputs":[
>   {
>"operator":"project",
>"variables" :["$$9"],
>"operatorId" : "1.3",
>"physical-operator":"STREAM_PROJECT",
>"execution-mode":"PARTITIONED",
>"inputs":[
>{
> "operator":"assign",
> "variables" :["$$9"],
> "expressions":"{ first : $$first,  second : $$second}",
> "operatorId" : "1.4",
> "physical-operator":"ASSIGN",
> "execution-mode":"PARTITIONED",
> "inputs":[
> {
>  "operator":"project",
>  "variables" :["$$first","$$second"],
>  "operatorId" : "1.5",
>  "physical-operator":"STREAM_PROJECT",
>  "execution-mode":"PARTITIONED",
>  "inputs":[
>  {
>   "operator":"exchange",
>   "operatorId" : "1.6",
>   "physical-operator":"ONE_TO_ONE_EXCHANGE",
>   "execution-mode":"PARTITIONED",
>   "inputs":[
>   {
>"operator":"join",
>"condition":"eq($$10, $$11)",
>"operatorId" : "1.7",
>"physical-operator":"HYBRID_HASH_JOIN 
> [$$10][$$11]",
>"execution-mode":"PARTITIONED",
>"inputs":[
>{
> "operator":"exchange",
> "operatorId" : "1.8",
> "physical-operator":"ONE_TO_ONE_EXCHANGE",
> "execution-mode":"PARTITIONED",
> "inputs":[
> {
>  "operator":"data-scan",
>  "variables" :["$$10","$$first"],
>  "data-source":"Facebook.Friendship2",
>  "operatorId" : "1.9",
>  
> "physical-operator":"DATASOURCE_SCAN",
>  "execution-mode":"PARTITIONED",
>  "inputs":[
>  {
>   "operator":"exchange",
>   "operatorId" : "1.10",
>   
> "physical-operator":"ONE_TO_ONE_EXCHANGE",
>   "execution-mode":"PARTITIONED",
>   "inputs":[

[jira] [Created] (ASTERIXDB-2214) Prevent dropping of Functions Used By Active Entities

2017-12-27 Thread Steven Jacobs (JIRA)
Steven Jacobs created ASTERIXDB-2214:


 Summary: Prevent dropping of Functions Used By Active Entities
 Key: ASTERIXDB-2214
 URL: https://issues.apache.org/jira/browse/ASTERIXDB-2214
 Project: Apache AsterixDB
  Issue Type: Bug
Reporter: Steven Jacobs
Assignee: Steven Jacobs


Right now this check is specialized for Feeds. It should be abstracted to be 
enabled for Active Entities in general
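
For illustration, a rough sketch of the intended behavior using the BAD extension 
(all names below are hypothetical, the channel DDL is assumed from the BAD syntax, 
and the exact error surfaced by the drop may differ):

drop dataverse test if exists;
create dataverse test;
use test;
create type ReportType as { reportId: uuid, severity: string };
create dataset Reports(ReportType) primary key reportId autogenerated;
create function reportsBySeverity(sev) {
 (select r from Reports r where r.severity = sev)
};
create repetitive channel severityChannel using reportsBySeverity@1 period duration("PT10S");
-- with the generalized dependency check, this should be rejected while the channel
-- (an active entity) still uses the function, just as it is today for feeds:
drop function reportsBySeverity@1;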



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (ASTERIXDB-2199) Nested primary key and hash repartitioning bug

2017-12-27 Thread Steven Jacobs (JIRA)

[ 
https://issues.apache.org/jira/browse/ASTERIXDB-2199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16304721#comment-16304721
 ] 

Steven Jacobs commented on ASTERIXDB-2199:
--

[~wyk] Wail: I have a CR waiting for review to address Shiva's filed issue 
(https://asterix-gerrit.ics.uci.edu/#/c/2246/). The solution doesn't involve 
EquivalenceClassUtils.addEquivalenceClassesForPrimaryIndexAccess(), but I think 
I see the code block that you are talking about, and maybe there is a potential 
optimization issue there as well. If you can find a query that produces a bug 
through this method, can you file it as a separate issue?

> Nested primary key and hash repartitioning bug 
> ---
>
> Key: ASTERIXDB-2199
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2199
> Project: Apache AsterixDB
>  Issue Type: Bug
>  Components: *DB - AsterixDB
>Reporter: Shiva Jahangiri
>Assignee: Steven Jacobs
>
> If a join is happening on primary keys of two tables, no hash partitioning 
> should happen. Having the following DDL (note that the primary key of Friendship2 
> is string):
> DROP DATAVERSE Facebook IF EXISTS;
> CREATE DATAVERSE Facebook;
> Use Facebook;
> CREATE TYPE FriendshipType AS closed {
>   id:string,
>   friends :[string]
> };
> CREATE DATASET Friendship2(FriendshipType)
> PRIMARY KEY id; 
> insert into Friendship2([ {"id":"1","friends" : [ "2","3","4"]}, 
> {"id":"2","friends" : [ "4","5","6"]}
> ]);
> By running the following query:
> Use Facebook;
> select * from Friendship2 first, Friendship2 second where first.id = 
> second.id;
> we can see that there is no hash partitioning happening in optimized logical 
> plan which is correct as join is happening on the primary key of both 
> relations and data is already partitioned on primary key:
> {
>  "operator":"distribute-result",
>  "expressions":"$$9",
>  "operatorId" : "1.1",
>  "physical-operator":"DISTRIBUTE_RESULT",
>  "execution-mode":"PARTITIONED",
>  "inputs":[
>  {
>   "operator":"exchange",
>   "operatorId" : "1.2",
>   "physical-operator":"ONE_TO_ONE_EXCHANGE",
>   "execution-mode":"PARTITIONED",
>   "inputs":[
>   {
>"operator":"project",
>"variables" :["$$9"],
>"operatorId" : "1.3",
>"physical-operator":"STREAM_PROJECT",
>"execution-mode":"PARTITIONED",
>"inputs":[
>{
> "operator":"assign",
> "variables" :["$$9"],
> "expressions":"{ first : $$first,  second : $$second}",
> "operatorId" : "1.4",
> "physical-operator":"ASSIGN",
> "execution-mode":"PARTITIONED",
> "inputs":[
> {
>  "operator":"project",
>  "variables" :["$$first","$$second"],
>  "operatorId" : "1.5",
>  "physical-operator":"STREAM_PROJECT",
>  "execution-mode":"PARTITIONED",
>  "inputs":[
>  {
>   "operator":"exchange",
>   "operatorId" : "1.6",
>   "physical-operator":"ONE_TO_ONE_EXCHANGE",
>   "execution-mode":"PARTITIONED",
>   "inputs":[
>   {
>"operator":"join",
>"condition":"eq($$10, $$11)",
>"operatorId" : "1.7",
>"physical-operator":"HYBRID_HASH_JOIN 
> [$$10][$$11]",
>"execution-mode":"PARTITIONED",
>"inputs":[
>{
> "operator":"exchange",
> "operatorId" : "1.8",
> "physical-operator":"ONE_TO_ONE_EXCHANGE",
> "execution-mode":"PARTITIONED",
> "inputs":[
> {
>  "operator":"data-scan",
>  "variables" :["$$10","$$first"],
>  "data-source":"Facebook.Friendship2",
>  "operatorId" : "1.9",
>  
> "physical-operator":"DATASOURCE_SCAN",
>  "execution-mode":"PARTITIONED",
>  "inputs":[
>  {
>   "operator":"exchange",
>

[jira] [Created] (ASTERIXDB-2193) Dataverse resolution within function calls should be based on the dataverse in which the function resides

2017-12-08 Thread Steven Jacobs (JIRA)
Steven Jacobs created ASTERIXDB-2193:


 Summary: Dataverse resolution within function calls should be 
based on the dataverse in which the function resides
 Key: ASTERIXDB-2193
 URL: https://issues.apache.org/jira/browse/ASTERIXDB-2193
 Project: Apache AsterixDB
  Issue Type: Bug
Reporter: Steven Jacobs
Assignee: Dmitry Lychagin


The following will fail, but should succeed. The second call to the function 
should resolve the dataset based on the experiments dataverse:

drop dataverse experiments if exists;
create dataverse experiments;
use experiments;
create type TwitterUser if not exists as open{
screen_name: string,
friends_count: int32,
name: string,
followers_count: int32
};
create dataset TwitterUsers(TwitterUser) primary key screen_name;
create function test_func0() {
 (select * from TwitterUsers)
};
test_func0();

drop dataverse two if exists;
create dataverse two;
use two;
experiments.test_func0();
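
Until that is fixed, one possible workaround (only a sketch, assuming the usual 
fully-qualified dataset reference syntax; test_func1 is a hypothetical name) is to 
qualify the dataset inside the function body so that resolution no longer depends 
on the active dataverse:

use experiments;
create function test_func1() {
 (select * from experiments.TwitterUsers)
};

use two;
experiments.test_func1();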



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (ASTERIXDB-2181) Nothing prevents creation of unusable functions

2017-11-30 Thread Steven Jacobs (JIRA)
Steven Jacobs created ASTERIXDB-2181:


 Summary: Nothing prevents creation of unusable functions
 Key: ASTERIXDB-2181
 URL: https://issues.apache.org/jira/browse/ASTERIXDB-2181
 Project: Apache AsterixDB
  Issue Type: Bug
Reporter: Steven Jacobs


The following DDL will work correctly, and create two functions that are 
currently unusable. It would be better to check whether a function is usable 
before creating it.

drop dataverse steven if exists;
create dataverse steven;
use steven;

create function impossible(){
(select * from something_that_is_not_there)
};
create function impossible2(){
function_that_is_not_there()
};
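
For context, the problem only shows up later, at call time (a sketch; the exact 
error messages may differ):

use steven;
-- both calls fail only now, when the body is compiled against the missing entities
impossible();   -- cannot resolve dataset something_that_is_not_there
impossible2();  -- cannot resolve function function_that_is_not_there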




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (ASTERIXDB-2180) Prevent dropping of datasets and functions that are being used by functions

2017-11-30 Thread Steven Jacobs (JIRA)
Steven Jacobs created ASTERIXDB-2180:


 Summary: Prevent dropping of datasets and functions that are being 
used by functions
 Key: ASTERIXDB-2180
 URL: https://issues.apache.org/jira/browse/ASTERIXDB-2180
 Project: Apache AsterixDB
  Issue Type: Bug
Reporter: Steven Jacobs


Currently, there is nothing preventing the dropping of a dataset or a function 
which is being used by some function. This means that when one of these is 
dropped, the function still exists but is unusable. 

It would be better to have some way to prevent the dropping of datasets and 
functions that are being used by functions
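
A small sketch of the current behavior (hypothetical names; today the drop succeeds 
and silently breaks the function):

drop dataverse test if exists;
create dataverse test;
use test;
create type T as { id: int };
create dataset ds(T) primary key id;
create function getAll() {
 (select * from ds)
};
-- nothing prevents this today, even though getAll still references ds
drop dataset ds;
-- the function definition remains in the metadata but now fails when called
getAll();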



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (ASTERIXDB-2167) Transaction Id should be factor out of the job specification

2017-11-29 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-2167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs resolved ASTERIXDB-2167.
--
Resolution: Fixed

> Transaction Id should be factor out of the job specification
> 
>
> Key: ASTERIXDB-2167
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2167
> Project: Apache AsterixDB
>  Issue Type: Bug
>Reporter: Xikui Wang
>
> In building the deployed job feature, the transaction id caused a lot of 
> issues for us since it's embedded in the serialized job spec in some places. 
> It would be good to factor that out of the job spec.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (ASTERIXDB-2178) Remove MultiTransactionJobletEventListenerFactory

2017-11-28 Thread Steven Jacobs (JIRA)
Steven Jacobs created ASTERIXDB-2178:


 Summary: Remove MultiTransactionJobletEventListenerFactory
 Key: ASTERIXDB-2178
 URL: https://issues.apache.org/jira/browse/ASTERIXDB-2178
 Project: Apache AsterixDB
  Issue Type: Improvement
Reporter: Steven Jacobs


This factory is currently only being used to enable feed jobs to have multiple 
transactions within one Hyracks Job. We can remove it by enabling one 
transaction to act on multiple datasets.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (ASTERIXDB-2167) Transaction Id should be factor out of the job specification

2017-11-15 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-2167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs updated ASTERIXDB-2167:
-
Summary: Transaction Id should be factor out of the job specification  
(was: Transaction Id should be factor out of the job sepcification)

> Transaction Id should be factor out of the job specification
> 
>
> Key: ASTERIXDB-2167
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2167
> Project: Apache AsterixDB
>  Issue Type: Bug
>Reporter: Xikui Wang
>
> In building the deployed job feature, the transaction id caused a lot of 
> issues for us since it's embedded in the serialized job spec in some places. 
> It would be good to factor that out of the job spec.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (ASTERIXDB-2167) Transaction Id should be factor out of the job sepcification

2017-11-15 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-2167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs updated ASTERIXDB-2167:
-
Summary: Transaction Id should be factor out of the job sepcification  
(was: Transcation Id should be factor out of the job sepcification)

> Transaction Id should be factor out of the job sepcification
> 
>
> Key: ASTERIXDB-2167
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2167
> Project: Apache AsterixDB
>  Issue Type: Bug
>Reporter: Xikui Wang
>
> In building the deployed job feature, the transaction id caused a lot of 
> issues for us since it's embedded in the serialized job spec in some places. 
> It would be good to factor that out of the job spec.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (ASTERIXDB-1911) Allow concurrent executions of one pre-distributed job

2017-11-15 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-1911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs resolved ASTERIXDB-1911.
--
Resolution: Fixed

> Allow concurrent executions of one pre-distributed job
> --
>
> Key: ASTERIXDB-1911
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-1911
> Project: Apache AsterixDB
>  Issue Type: Bug
>Reporter: Steven Jacobs
>Assignee: Steven Jacobs
>
> Right now, concurrent attempts to run the same pre-distributed job will not 
> work. It would be great if we could find a way to allow them to run together.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (ASTERIXDB-2126) Data won't feed into socket feed with a dataset filter

2017-10-05 Thread Steven Jacobs (JIRA)
Steven Jacobs created ASTERIXDB-2126:


 Summary: Data won't feed into socket feed with a dataset filter
 Key: ASTERIXDB-2126
 URL: https://issues.apache.org/jira/browse/ASTERIXDB-2126
 Project: Apache AsterixDB
  Issue Type: Bug
Reporter: Steven Jacobs
Assignee: Xikui Wang


The following DDL creates a feed successfully, but throws errors when data is 
fed into it. Everything works correctly if the filter isn't created on 
the dataset.

drop dataverse channels if exists;
create dataverse channels;
use dataverse channels;

create type UserLocation as closed {
recordId: integer,
location: circle,
userName: string,
timeStamp: datetime
};
create dataset UserLocations(UserLocation)
primary key recordId with filter on timeStamp;
create feed LocationFeed using socket_adapter
(
("sockets"="127.0.0.1:10009"),
("address-type"="IP"),
("type-name"="UserLocation"),
("upsert-feed"="true"),
("format"="adm")
);
connect feed LocationFeed to dataset UserLocations;
start feed LocationFeed;

Here is the error in the logs:
java.lang.IllegalStateException: java.lang.IllegalStateException: resource 
(101,  956599034) not found
at 
org.apache.asterix.transaction.management.service.logging.LogBuffer.internalFlush(LogBuffer.java:221)
at 
org.apache.asterix.transaction.management.service.logging.LogBuffer.flush(LogBuffer.java:193)
at 
org.apache.asterix.transaction.management.service.logging.LogFlusher.call(LogManager.java:704)
at 
org.apache.asterix.transaction.management.service.logging.LogFlusher.call(LogManager.java:1)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalStateException: resource (101,  956599034) not found
at 
org.apache.asterix.transaction.management.service.locking.ConcurrentLockManager.unlock(ConcurrentLockManager.java:486)
at 
org.apache.asterix.transaction.management.service.locking.ConcurrentLockManager.unlock(ConcurrentLockManager.java:473)
at 
org.apache.asterix.transaction.management.service.logging.LogBuffer.batchUnlock(LogBuffer.java:242)
at 
org.apache.asterix.transaction.management.service.logging.LogBuffer.internalFlush(LogBuffer.java:218)
... 7 more
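
For reference, a hypothetical record of the UserLocation type above, in the ADM 
format the socket feed expects (the constructor syntax is assumed and the values 
are made up):

{"recordId": 1, "location": circle("30.0,70.0 5.0"), "userName": "user1", "timeStamp": datetime("2017-10-05T10:10:00")}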



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (ASTERIXDB-2124) Feed fails to start when filter and index both exist on one field

2017-10-04 Thread Steven Jacobs (JIRA)
Steven Jacobs created ASTERIXDB-2124:


 Summary: Feed fails to start when filter and index both exist on 
one field
 Key: ASTERIXDB-2124
 URL: https://issues.apache.org/jira/browse/ASTERIXDB-2124
 Project: Apache AsterixDB
  Issue Type: Bug
Reporter: Steven Jacobs
Assignee: Xikui Wang


The following gives an internal error, but works successfully when either the 
index or the filter is removed from the DDLs:

drop dataverse channels if exists;
create dataverse channels;
use dataverse channels;

create type UserLocation as closed {
recordId: integer,
location: circle,
userName: string,
timeStamp: datetime
};
create dataset UserLocations(UserLocation)
primary key recordId with filter on timeStamp;
create index time2 on UserLocations(timeStamp);
create feed LocationFeed using socket_adapter
(
("sockets"="127.0.0.1:10009"),
("address-type"="IP"),
("type-name"="UserLocation"),
("upsert-feed"="true"),
("format"="adm")
);
connect feed LocationFeed to dataset UserLocations;
start feed LocationFeed;


StackTrace:
SEVERE: java.lang.NullPointerException
org.apache.hyracks.api.exceptions.HyracksDataException: 
java.lang.NullPointerException
at 
org.apache.hyracks.api.exceptions.HyracksDataException.create(HyracksDataException.java:47)
at 
org.apache.asterix.app.active.FeedEventsListener.doStart(FeedEventsListener.java:110)
at 
org.apache.asterix.app.active.ActiveEntityEventsListener.start(ActiveEntityEventsListener.java:374)
at 
org.apache.asterix.app.translator.QueryTranslator.handleStartFeedStatement(QueryTranslator.java:2120)
at 
org.apache.asterix.app.translator.QueryTranslator.compileAndExecute(QueryTranslator.java:367)
at 
org.apache.asterix.api.http.server.RestApiServlet.doHandle(RestApiServlet.java:209)
at 
org.apache.asterix.api.http.server.RestApiServlet.getOrPost(RestApiServlet.java:177)
at 
org.apache.asterix.api.http.server.RestApiServlet.get(RestApiServlet.java:161)
at 
org.apache.hyracks.http.server.AbstractServlet.handle(AbstractServlet.java:86)
at 
org.apache.hyracks.http.server.HttpRequestHandler.handle(HttpRequestHandler.java:70)
at 
org.apache.hyracks.http.server.HttpRequestHandler.call(HttpRequestHandler.java:55)
at 
org.apache.hyracks.http.server.HttpRequestHandler.call(HttpRequestHandler.java:36)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at 
org.apache.hyracks.algebricks.core.algebra.expressions.VariableReferenceExpression.hashCode(VariableReferenceExpression.java:86)
at java.util.HashMap.hash(HashMap.java:338)
at java.util.HashMap.get(HashMap.java:556)
at 
org.apache.hyracks.algebricks.rewriter.rules.ExtractCommonExpressionsRule$CommonExpressionSubstitutionVisitor.transform(ExtractCommonExpressionsRule.java:241)
at 
org.apache.hyracks.algebricks.core.algebra.operators.logical.IndexInsertDeleteUpsertOperator.acceptExpressionTransform(IndexInsertDeleteUpsertOperator.java:106)
at 
org.apache.hyracks.algebricks.rewriter.rules.ExtractCommonExpressionsRule.removeCommonExpressions(ExtractCommonExpressionsRule.java:176)
at 
org.apache.hyracks.algebricks.rewriter.rules.ExtractCommonExpressionsRule.removeCommonExpressions(ExtractCommonExpressionsRule.java:144)
at 
org.apache.hyracks.algebricks.rewriter.rules.ExtractCommonExpressionsRule.removeCommonExpressions(ExtractCommonExpressionsRule.java:144)
at 
org.apache.hyracks.algebricks.rewriter.rules.ExtractCommonExpressionsRule.removeCommonExpressions(ExtractCommonExpressionsRule.java:144)
at 
org.apache.hyracks.algebricks.rewriter.rules.ExtractCommonExpressionsRule.removeCommonExpressions(ExtractCommonExpressionsRule.java:144)
at 
org.apache.hyracks.algebricks.rewriter.rules.ExtractCommonExpressionsRule.rewritePre(ExtractCommonExpressionsRule.java:117)
at 
org.apache.hyracks.algebricks.core.rewriter.base.AbstractRuleController.rewriteOperatorRef(AbstractRuleController.java:91)
at 
org.apache.hyracks.algebricks.compiler.rewriter.rulecontrollers.SequentialFixpointRuleController.rewriteWithRuleCollection(SequentialFixpointRuleController.java:53)
at 
org.apache.hyracks.algebricks.core.rewriter.base.HeuristicOptimizer.runOptimizationSets(HeuristicOptimizer.java:102)
at 
org.apache.hyracks.algebricks.core.rewriter.base.HeuristicOptimizer.optimize(HeuristicOptimizer.java:82)
at 

[jira] [Closed] (ASTERIXDB-2101) Record Merge fails when inserting into table with autogen key

2017-09-25 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-2101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs closed ASTERIXDB-2101.

Resolution: Invalid

> Record Merge fails when inserting into table with autogen key
> -
>
> Key: ASTERIXDB-2101
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2101
> Project: Apache AsterixDB
>  Issue Type: Bug
>Reporter: Steven Jacobs
>
> The following will throw a runtime exception:
> drop dataverse channels if exists;
> create dataverse channels;
> use channels;
> create type result as {
>   id:uuid
> };
> create type subscriptionType as {
>   subscriptionId:uuid,
>   param0:int
> };
> create dataset roomRecordsResults(result)
> primary key id autogenerated;
> create dataset roomRecordsSubscriptions(subscriptionType)
> primary key subscriptionId autogenerated;
> use channels;
> insert into channels.roomRecordsResults (
>   select sub.subscriptionId
>   from channels.roomRecordsSubscriptions sub
> );
> Here is the stack trace:
> WARNING: Unhandled throwable
> java.lang.VerifyError: Bad return type
> Exception Details:
>   Location:
> 
> org/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor$_EvaluatorFactoryGen.access$0(Lorg/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor$_EvaluatorFactoryGen;)Lorg/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor;
>  @4: areturn
>   Reason:
> Type 
> 'org/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor$_Gen'
>  (current frame, stack[0]) is not assignable to 
> 'org/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor'
>  (from method signature)
>   Current Frame:
> bci: @4
> flags: { }
> locals: { 
> 'org/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor$_EvaluatorFactoryGen'
>  }
> stack: { 
> 'org/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor$_Gen'
>  }
>   Bytecode:
> 0x000: 2ab4 0063 b0   
>   at 
> org.apache.asterix.runtime.evaluators.functions.records.RecordMergeDescriptor$_Gen.createEvaluatorFactory(RecordMergeDescriptor.java:86)
>   at 
> org.apache.asterix.jobgen.QueryLogicalExpressionJobGen.createScalarFunctionEvaluatorFactory(QueryLogicalExpressionJobGen.java:144)
>   at 
> org.apache.asterix.jobgen.QueryLogicalExpressionJobGen.createEvaluatorFactory(QueryLogicalExpressionJobGen.java:109)
>   at 
> org.apache.asterix.jobgen.QueryLogicalExpressionJobGen.codegenArguments(QueryLogicalExpressionJobGen.java:161)
>   at 
> org.apache.asterix.jobgen.QueryLogicalExpressionJobGen.createScalarFunctionEvaluatorFactory(QueryLogicalExpressionJobGen.java:134)
>   at 
> org.apache.asterix.jobgen.QueryLogicalExpressionJobGen.createEvaluatorFactory(QueryLogicalExpressionJobGen.java:109)
>   at 
> org.apache.hyracks.algebricks.core.algebra.expressions.ExpressionRuntimeProvider.createEvaluatorFactory(ExpressionRuntimeProvider.java:41)
>   at 
> org.apache.hyracks.algebricks.core.algebra.operators.physical.AssignPOperator.contributeRuntimeOperator(AssignPOperator.java:84)
>   at 
> org.apache.hyracks.algebricks.core.algebra.operators.logical.AbstractLogicalOperator.contributeRuntimeOperator(AbstractLogicalOperator.java:166)
>   at 
> org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:97)
>   at 
> org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
>   at 
> org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
>   at 
> org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
>   at 
> org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
>   at 
> org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
>   at 
> org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
>   at 
> org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
>   at 
> org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
>   at 
> org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
>   at 
> org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compilePlan(PlanCompiler.java:60)
>   at 
> org.apache.hyracks.algebricks.compiler.api.HeuristicCompilerFactoryBuilder$1$1.createJob(HeuristicCompilerFactoryBuilder.java:107)
>   at 
> org.apache.asterix.api.common.APIFramework.compileQuery(APIFramework.java:333)
>   at 
> 

[jira] [Updated] (ASTERIXDB-2101) Record Merge fails when inserting into table with autogen key

2017-09-19 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-2101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs updated ASTERIXDB-2101:
-
Summary: Record Merge fails when inserting into table with autogen key  
(was: Record Merge Error when running job)

> Record Merge fails when inserting into table with autogen key
> -
>
> Key: ASTERIXDB-2101
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2101
> Project: Apache AsterixDB
>  Issue Type: Bug
>Reporter: Steven Jacobs
>
> The following will throw a runtime exception:
> drop dataverse channels if exists;
> create dataverse channels;
> use channels;
> create type result as {
>   id:uuid
> };
> create type subscriptionType as {
>   subscriptionId:uuid,
>   param0:int
> };
> create dataset roomRecordsResults(result)
> primary key id autogenerated;
> create dataset roomRecordsSubscriptions(subscriptionType)
> primary key subscriptionId autogenerated;
> use channels;
> insert into channels.roomRecordsResults (
>   select sub.subscriptionId
>   from channels.roomRecordsSubscriptions sub
> );
> Here is the stack trace:
> WARNING: Unhandled throwable
> java.lang.VerifyError: Bad return type
> Exception Details:
>   Location:
> 
> org/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor$_EvaluatorFactoryGen.access$0(Lorg/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor$_EvaluatorFactoryGen;)Lorg/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor;
>  @4: areturn
>   Reason:
> Type 
> 'org/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor$_Gen'
>  (current frame, stack[0]) is not assignable to 
> 'org/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor'
>  (from method signature)
>   Current Frame:
> bci: @4
> flags: { }
> locals: { 
> 'org/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor$_EvaluatorFactoryGen'
>  }
> stack: { 
> 'org/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor$_Gen'
>  }
>   Bytecode:
> 0x000: 2ab4 0063 b0   
>   at 
> org.apache.asterix.runtime.evaluators.functions.records.RecordMergeDescriptor$_Gen.createEvaluatorFactory(RecordMergeDescriptor.java:86)
>   at 
> org.apache.asterix.jobgen.QueryLogicalExpressionJobGen.createScalarFunctionEvaluatorFactory(QueryLogicalExpressionJobGen.java:144)
>   at 
> org.apache.asterix.jobgen.QueryLogicalExpressionJobGen.createEvaluatorFactory(QueryLogicalExpressionJobGen.java:109)
>   at 
> org.apache.asterix.jobgen.QueryLogicalExpressionJobGen.codegenArguments(QueryLogicalExpressionJobGen.java:161)
>   at 
> org.apache.asterix.jobgen.QueryLogicalExpressionJobGen.createScalarFunctionEvaluatorFactory(QueryLogicalExpressionJobGen.java:134)
>   at 
> org.apache.asterix.jobgen.QueryLogicalExpressionJobGen.createEvaluatorFactory(QueryLogicalExpressionJobGen.java:109)
>   at 
> org.apache.hyracks.algebricks.core.algebra.expressions.ExpressionRuntimeProvider.createEvaluatorFactory(ExpressionRuntimeProvider.java:41)
>   at 
> org.apache.hyracks.algebricks.core.algebra.operators.physical.AssignPOperator.contributeRuntimeOperator(AssignPOperator.java:84)
>   at 
> org.apache.hyracks.algebricks.core.algebra.operators.logical.AbstractLogicalOperator.contributeRuntimeOperator(AbstractLogicalOperator.java:166)
>   at 
> org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:97)
>   at 
> org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
>   at 
> org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
>   at 
> org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
>   at 
> org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
>   at 
> org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
>   at 
> org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
>   at 
> org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
>   at 
> org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
>   at 
> org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
>   at 
> org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compilePlan(PlanCompiler.java:60)
>   at 
> org.apache.hyracks.algebricks.compiler.api.HeuristicCompilerFactoryBuilder$1$1.createJob(HeuristicCompilerFactoryBuilder.java:107)
>   at 
> 

[jira] [Comment Edited] (ASTERIXDB-2101) Record Merge Error when running job

2017-09-18 Thread Steven Jacobs (JIRA)

[ 
https://issues.apache.org/jira/browse/ASTERIXDB-2101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170285#comment-16170285
 ] 

Steven Jacobs edited comment on ASTERIXDB-2101 at 9/18/17 7:48 PM:
---

INFO: Optimized Plan:
commit
-- COMMIT  |PARTITIONED|
  project ([$$6])
  -- STREAM_PROJECT  |PARTITIONED|
exchange
-- ONE_TO_ONE_EXCHANGE  |PARTITIONED|
  insert into channels.roomRecordsResults from record: $$8 partitioned by 
[$$6]
  -- INSERT_DELETE  |PARTITIONED|
exchange
-- HASH_PARTITION_EXCHANGE [$$6]  |PARTITIONED|
  assign [$$6] <- [$$8.getField(0)]
  -- ASSIGN  |PARTITIONED|
project ([$$8])
-- STREAM_PROJECT  |PARTITIONED|
  assign [$$8] <- [cast($$7)]
  -- ASSIGN  |PARTITIONED|
project ([$$7])
-- STREAM_PROJECT  |PARTITIONED|
  assign [$$7] <- [check-unknown(object-merge($$4, {"id": 
create-uuid()}))]
  -- ASSIGN  |PARTITIONED|
project ([$$4])
-- STREAM_PROJECT  |PARTITIONED|
  assign [$$4] <- [{"subscriptionId": $$9}]
  -- ASSIGN  |PARTITIONED|
project ([$$9])
-- STREAM_PROJECT  |PARTITIONED|
  exchange
  -- ONE_TO_ONE_EXCHANGE  |PARTITIONED|
data-scan []<-[$$9, $$sub] <- 
channels.roomRecordsSubscriptions
-- DATASOURCE_SCAN  |PARTITIONED|
  exchange
  -- ONE_TO_ONE_EXCHANGE  |PARTITIONED|
empty-tuple-source
-- EMPTY_TUPLE_SOURCE  |PARTITIONED|


was (Author: sjaco002):
Here is the generated plan:
distribute result [$$30]
-- DISTRIBUTE_RESULT  |PARTITIONED|
  exchange
  -- ONE_TO_ONE_EXCHANGE  |PARTITIONED|
project ([$$30])
-- STREAM_PROJECT  |PARTITIONED|
  commit
  -- COMMIT  |PARTITIONED|
project ([$$28, $$30])
-- STREAM_PROJECT  |PARTITIONED|
  exchange
  -- ONE_TO_ONE_EXCHANGE  |PARTITIONED|
insert into channels.roomRecordsResults from record: $$31 
partitioned by [$$28]
-- INSERT_DELETE  |PARTITIONED|
  exchange
  -- HASH_PARTITION_EXCHANGE [$$28]  |PARTITIONED|
assign [$$28] <- [$$31.getField(0)]
-- ASSIGN  |PARTITIONED|
  assign [$$31] <- [cast($$30)]
  -- ASSIGN  |PARTITIONED|
project ([$$30])
-- STREAM_PROJECT  |PARTITIONED|
  assign [$$30] <- [check-unknown(object-merge($$26, {"id": 
create-uuid()}))]
  -- ASSIGN  |PARTITIONED|
project ([$$26])
-- STREAM_PROJECT  |PARTITIONED|
  assign [$$26] <- [{"result": {"userId": $$35}, 
"channelExecutionTime": $$channelExecutionTime, "subscriptionId": $$32, 
"deliveryTime": current-datetime()}]
  -- ASSIGN  |PARTITIONED|
project ([$$channelExecutionTime, $$32, $$35])
-- STREAM_PROJECT  |PARTITIONED|
  exchange
  -- ONE_TO_ONE_EXCHANGE  |PARTITIONED|
join (eq($$40, $$41))
-- HYBRID_HASH_JOIN [$$41][$$40]  |PARTITIONED|
  exchange
  -- HASH_PARTITION_EXCHANGE [$$41]  
|PARTITIONED|
project ([$$channelExecutionTime, $$32, 
$$41])
-- STREAM_PROJECT  |PARTITIONED|
  exchange
  -- ONE_TO_ONE_EXCHANGE  |PARTITIONED|
join (and(eq($$33, $$39), eq($$34, 
$$37)))
-- HYBRID_HASH_JOIN [$$39, $$37][$$33, 
$$34]  |PARTITIONED|
  exchange
  -- HASH_PARTITION_EXCHANGE [$$39, 
$$37]  |PARTITIONED|
project ([$$channelExecutionTime, 
$$32, $$37, $$39, $$41])
-- STREAM_PROJECT  |PARTITIONED|
  assign [$$41, $$39, $$37] <- 
[$$sub.getField(1), $$sub.getField("DataverseName"), 
$$sub.getField("BrokerName")]
  -- ASSIGN  |PARTITIONED|
exchange
-- ONE_TO_ONE_EXCHANGE  
|PARTITIONED|
  data-scan 

[jira] [Updated] (ASTERIXDB-2101) Record Merge Error when running job

2017-09-18 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-2101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs updated ASTERIXDB-2101:
-
Description: 
The following will throw a runtime exception:

drop dataverse channels if exists;
create dataverse channels;
use channels;
create type result as {
  id:uuid
};
create type subscriptionType as {
  subscriptionId:uuid,
  param0:int
};
create dataset roomRecordsResults(result)
primary key id autogenerated;
create dataset roomRecordsSubscriptions(subscriptionType)
primary key subscriptionId autogenerated;
use channels;
insert into channels.roomRecordsResults (
  select sub.subscriptionId
  from channels.roomRecordsSubscriptions sub
);


Here is the stack trace:
WARNING: Unhandled throwable
java.lang.VerifyError: Bad return type
Exception Details:
  Location:

org/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor$_EvaluatorFactoryGen.access$0(Lorg/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor$_EvaluatorFactoryGen;)Lorg/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor;
 @4: areturn
  Reason:
Type 
'org/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor$_Gen'
 (current frame, stack[0]) is not assignable to 
'org/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor' 
(from method signature)
  Current Frame:
bci: @4
flags: { }
locals: { 
'org/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor$_EvaluatorFactoryGen'
 }
stack: { 
'org/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor$_Gen'
 }
  Bytecode:
0x000: 2ab4 0063 b0   

at 
org.apache.asterix.runtime.evaluators.functions.records.RecordMergeDescriptor$_Gen.createEvaluatorFactory(RecordMergeDescriptor.java:86)
at 
org.apache.asterix.jobgen.QueryLogicalExpressionJobGen.createScalarFunctionEvaluatorFactory(QueryLogicalExpressionJobGen.java:144)
at 
org.apache.asterix.jobgen.QueryLogicalExpressionJobGen.createEvaluatorFactory(QueryLogicalExpressionJobGen.java:109)
at 
org.apache.asterix.jobgen.QueryLogicalExpressionJobGen.codegenArguments(QueryLogicalExpressionJobGen.java:161)
at 
org.apache.asterix.jobgen.QueryLogicalExpressionJobGen.createScalarFunctionEvaluatorFactory(QueryLogicalExpressionJobGen.java:134)
at 
org.apache.asterix.jobgen.QueryLogicalExpressionJobGen.createEvaluatorFactory(QueryLogicalExpressionJobGen.java:109)
at 
org.apache.hyracks.algebricks.core.algebra.expressions.ExpressionRuntimeProvider.createEvaluatorFactory(ExpressionRuntimeProvider.java:41)
at 
org.apache.hyracks.algebricks.core.algebra.operators.physical.AssignPOperator.contributeRuntimeOperator(AssignPOperator.java:84)
at 
org.apache.hyracks.algebricks.core.algebra.operators.logical.AbstractLogicalOperator.contributeRuntimeOperator(AbstractLogicalOperator.java:166)
at 
org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:97)
at 
org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
at 
org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
at 
org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
at 
org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
at 
org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
at 
org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
at 
org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
at 
org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
at 
org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
at 
org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compilePlan(PlanCompiler.java:60)
at 
org.apache.hyracks.algebricks.compiler.api.HeuristicCompilerFactoryBuilder$1$1.createJob(HeuristicCompilerFactoryBuilder.java:107)
at 
org.apache.asterix.api.common.APIFramework.compileQuery(APIFramework.java:333)
at 
org.apache.asterix.app.translator.QueryTranslator.rewriteCompileInsertUpsert(QueryTranslator.java:1867)
at 
org.apache.asterix.app.translator.QueryTranslator.lambda$0(QueryTranslator.java:1755)
at 
org.apache.asterix.app.translator.QueryTranslator.handleInsertUpsertStatement(QueryTranslator.java:1781)
at 
org.apache.asterix.app.translator.QueryTranslator.compileAndExecute(QueryTranslator.java:337)
at 

[jira] [Updated] (ASTERIXDB-2101) Record Merge Error when running job

2017-09-18 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-2101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs updated ASTERIXDB-2101:
-
Summary: Record Merge Error when running job  (was: Record Merge Error when 
compiling job)

> Record Merge Error when running job
> ---
>
> Key: ASTERIXDB-2101
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2101
> Project: Apache AsterixDB
>  Issue Type: Bug
>Reporter: Steven Jacobs
>
> The following will throw a runtime exception:
> drop dataverse channels if exists;
> create dataverse channels;
> use channels;
> create type userLocation as {
>   userId: int,
>   roomNumber: int
> };
> create type result as {
>   id:uuid
> };
> create type subscriptionType as {
>   subscriptionId:uuid,
>   param0:int
> };
> create dataset roomRecordsResults(result)
> primary key id autogenerated;
> create dataset roomRecordsSubscriptions(subscriptionType)
> primary key subscriptionId autogenerated;
> create dataset UserLocations(userLocation)
> primary key userId;
> create function RoomOccupants(room) {
>   (select location.userId
>   from UserLocations location
>   where location.roomNumber = room)
> };
> SET inline_with "false";
> insert into channels.roomRecordsResults as a (
>   with channelExecutionTime as current_datetime() 
>   select result, channelExecutionTime, sub.subscriptionId as 
> subscriptionId,current_datetime() as deliveryTime
>   from channels.roomRecordsSubscriptions sub,
>   Metadata.Datatype b, 
>   channels.RoomOccupants(sub.param0) result 
>   where b.DatatypeName = sub.BrokerName
>   and b.DataverseName = sub.DataverseName
> ) returning a;
> Here is the stack trace:
> WARNING: Unhandled throwable
> java.lang.VerifyError: Bad return type
> Exception Details:
>   Location:
> 
> org/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor$_EvaluatorFactoryGen.access$0(Lorg/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor$_EvaluatorFactoryGen;)Lorg/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor;
>  @4: areturn
>   Reason:
> Type 
> 'org/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor$_Gen'
>  (current frame, stack[0]) is not assignable to 
> 'org/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor'
>  (from method signature)
>   Current Frame:
> bci: @4
> flags: { }
> locals: { 
> 'org/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor$_EvaluatorFactoryGen'
>  }
> stack: { 
> 'org/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor$_Gen'
>  }
>   Bytecode:
> 0x000: 2ab4 0063 b0   
>   at 
> org.apache.asterix.runtime.evaluators.functions.records.RecordMergeDescriptor$_Gen.createEvaluatorFactory(RecordMergeDescriptor.java:86)
>   at 
> org.apache.asterix.jobgen.QueryLogicalExpressionJobGen.createScalarFunctionEvaluatorFactory(QueryLogicalExpressionJobGen.java:144)
>   at 
> org.apache.asterix.jobgen.QueryLogicalExpressionJobGen.createEvaluatorFactory(QueryLogicalExpressionJobGen.java:109)
>   at 
> org.apache.asterix.jobgen.QueryLogicalExpressionJobGen.codegenArguments(QueryLogicalExpressionJobGen.java:161)
>   at 
> org.apache.asterix.jobgen.QueryLogicalExpressionJobGen.createScalarFunctionEvaluatorFactory(QueryLogicalExpressionJobGen.java:134)
>   at 
> org.apache.asterix.jobgen.QueryLogicalExpressionJobGen.createEvaluatorFactory(QueryLogicalExpressionJobGen.java:109)
>   at 
> org.apache.hyracks.algebricks.core.algebra.expressions.ExpressionRuntimeProvider.createEvaluatorFactory(ExpressionRuntimeProvider.java:41)
>   at 
> org.apache.hyracks.algebricks.core.algebra.operators.physical.AssignPOperator.contributeRuntimeOperator(AssignPOperator.java:84)
>   at 
> org.apache.hyracks.algebricks.core.algebra.operators.logical.AbstractLogicalOperator.contributeRuntimeOperator(AbstractLogicalOperator.java:166)
>   at 
> org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:97)
>   at 
> org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
>   at 
> org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
>   at 
> org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
>   at 
> org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
>   at 
> org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
>   at 
> org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
>   at 
> 

[jira] [Commented] (ASTERIXDB-2101) Record Merge Error when compiling job

2017-09-18 Thread Steven Jacobs (JIRA)

[ 
https://issues.apache.org/jira/browse/ASTERIXDB-2101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170285#comment-16170285
 ] 

Steven Jacobs commented on ASTERIXDB-2101:
--

Here is the generated plan:
distribute result [$$30]
-- DISTRIBUTE_RESULT  |PARTITIONED|
  exchange
  -- ONE_TO_ONE_EXCHANGE  |PARTITIONED|
project ([$$30])
-- STREAM_PROJECT  |PARTITIONED|
  commit
  -- COMMIT  |PARTITIONED|
project ([$$28, $$30])
-- STREAM_PROJECT  |PARTITIONED|
  exchange
  -- ONE_TO_ONE_EXCHANGE  |PARTITIONED|
insert into channels.roomRecordsResults from record: $$31 
partitioned by [$$28]
-- INSERT_DELETE  |PARTITIONED|
  exchange
  -- HASH_PARTITION_EXCHANGE [$$28]  |PARTITIONED|
assign [$$28] <- [$$31.getField(0)]
-- ASSIGN  |PARTITIONED|
  assign [$$31] <- [cast($$30)]
  -- ASSIGN  |PARTITIONED|
project ([$$30])
-- STREAM_PROJECT  |PARTITIONED|
  assign [$$30] <- [check-unknown(object-merge($$26, {"id": 
create-uuid()}))]
  -- ASSIGN  |PARTITIONED|
project ([$$26])
-- STREAM_PROJECT  |PARTITIONED|
  assign [$$26] <- [{"result": {"userId": $$35}, 
"channelExecutionTime": $$channelExecutionTime, "subscriptionId": $$32, 
"deliveryTime": current-datetime()}]
  -- ASSIGN  |PARTITIONED|
project ([$$channelExecutionTime, $$32, $$35])
-- STREAM_PROJECT  |PARTITIONED|
  exchange
  -- ONE_TO_ONE_EXCHANGE  |PARTITIONED|
join (eq($$40, $$41))
-- HYBRID_HASH_JOIN [$$41][$$40]  |PARTITIONED|
  exchange
  -- HASH_PARTITION_EXCHANGE [$$41]  
|PARTITIONED|
project ([$$channelExecutionTime, $$32, 
$$41])
-- STREAM_PROJECT  |PARTITIONED|
  exchange
  -- ONE_TO_ONE_EXCHANGE  |PARTITIONED|
join (and(eq($$33, $$39), eq($$34, 
$$37)))
-- HYBRID_HASH_JOIN [$$39, $$37][$$33, 
$$34]  |PARTITIONED|
  exchange
  -- HASH_PARTITION_EXCHANGE [$$39, 
$$37]  |PARTITIONED|
project ([$$channelExecutionTime, 
$$32, $$37, $$39, $$41])
-- STREAM_PROJECT  |PARTITIONED|
  assign [$$41, $$39, $$37] <- 
[$$sub.getField(1), $$sub.getField("DataverseName"), 
$$sub.getField("BrokerName")]
  -- ASSIGN  |PARTITIONED|
exchange
-- ONE_TO_ONE_EXCHANGE  
|PARTITIONED|
  data-scan []<-[$$32, $$sub] 
<- channels.roomRecordsSubscriptions
  -- DATASOURCE_SCAN  
|PARTITIONED|
exchange
-- BROADCAST_EXCHANGE  
|PARTITIONED|
  assign 
[$$channelExecutionTime] <- [current-datetime()]
  -- ASSIGN  |UNPARTITIONED|
empty-tuple-source
-- EMPTY_TUPLE_SOURCE  
|UNPARTITIONED|
  exchange
  -- HASH_PARTITION_EXCHANGE [$$33, 
$$34]  |PARTITIONED|
project ([$$33, $$34])
-- STREAM_PROJECT  |PARTITIONED|
  exchange
  -- ONE_TO_ONE_EXCHANGE  
|PARTITIONED|
data-scan []<-[$$33, $$34, $$b] 
<- Metadata.Datatype
-- DATASOURCE_SCAN  
|PARTITIONED|
  exchange
  -- ONE_TO_ONE_EXCHANGE  
|PARTITIONED|
empty-tuple-source
-- EMPTY_TUPLE_SOURCE  
|PARTITIONED|
  exchange
  -- 

[jira] [Created] (ASTERIXDB-2101) Record Merge Error when compiling job

2017-09-18 Thread Steven Jacobs (JIRA)
Steven Jacobs created ASTERIXDB-2101:


 Summary: Record Merge Error when compiling job
 Key: ASTERIXDB-2101
 URL: https://issues.apache.org/jira/browse/ASTERIXDB-2101
 Project: Apache AsterixDB
  Issue Type: Bug
Reporter: Steven Jacobs


The following will throw a runtime exception:


drop dataverse channels if exists;
create dataverse channels;
use channels;

create type userLocation as {
  userId: int,
  roomNumber: int
};

create type result as {
id:uuid
};

create type subscriptionType as {
subscriptionId:uuid,
param0:int
};

create dataset roomRecordsResults(result)
primary key id autogenerated;

create dataset roomRecordsSubscriptions(subscriptionType)
primary key subscriptionId autogenerated;

create dataset UserLocations(userLocation)
primary key userId;


create function RoomOccupants(room) {
  (select location.userId
  from UserLocations location
  where location.roomNumber = room)
};


SET inline_with "false";
insert into channels.roomRecordsResults as a (
with channelExecutionTime as current_datetime() 
select result, channelExecutionTime, sub.subscriptionId as 
subscriptionId,current_datetime() as deliveryTime
from channels.roomRecordsSubscriptions sub,
Metadata.Datatype b, 
channels.RoomOccupants(sub.param0) result 
where b.DatatypeName = sub.BrokerName
and b.DataverseName = sub.DataverseName
) returning a;


Here is the stack trace:
WARNING: Unhandled throwable
java.lang.VerifyError: Bad return type
Exception Details:
  Location:

org/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor$_EvaluatorFactoryGen.access$0(Lorg/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor$_EvaluatorFactoryGen;)Lorg/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor;
 @4: areturn
  Reason:
Type 
'org/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor$_Gen'
 (current frame, stack[0]) is not assignable to 
'org/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor' 
(from method signature)
  Current Frame:
bci: @4
flags: { }
locals: { 
'org/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor$_EvaluatorFactoryGen'
 }
stack: { 
'org/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor$_Gen'
 }
  Bytecode:
0x000: 2ab4 0063 b0   

at 
org.apache.asterix.runtime.evaluators.functions.records.RecordMergeDescriptor$_Gen.createEvaluatorFactory(RecordMergeDescriptor.java:86)
at 
org.apache.asterix.jobgen.QueryLogicalExpressionJobGen.createScalarFunctionEvaluatorFactory(QueryLogicalExpressionJobGen.java:144)
at 
org.apache.asterix.jobgen.QueryLogicalExpressionJobGen.createEvaluatorFactory(QueryLogicalExpressionJobGen.java:109)
at 
org.apache.asterix.jobgen.QueryLogicalExpressionJobGen.codegenArguments(QueryLogicalExpressionJobGen.java:161)
at 
org.apache.asterix.jobgen.QueryLogicalExpressionJobGen.createScalarFunctionEvaluatorFactory(QueryLogicalExpressionJobGen.java:134)
at 
org.apache.asterix.jobgen.QueryLogicalExpressionJobGen.createEvaluatorFactory(QueryLogicalExpressionJobGen.java:109)
at 
org.apache.hyracks.algebricks.core.algebra.expressions.ExpressionRuntimeProvider.createEvaluatorFactory(ExpressionRuntimeProvider.java:41)
at 
org.apache.hyracks.algebricks.core.algebra.operators.physical.AssignPOperator.contributeRuntimeOperator(AssignPOperator.java:84)
at 
org.apache.hyracks.algebricks.core.algebra.operators.logical.AbstractLogicalOperator.contributeRuntimeOperator(AbstractLogicalOperator.java:166)
at 
org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:97)
at 
org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
at 
org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
at 
org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
at 
org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
at 
org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
at 
org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
at 
org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
at 
org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
at 
org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
at 

[jira] [Assigned] (ASTERIXDB-2089) Bad return type error for valid SQL++ query

2017-09-14 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-2089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs reassigned ASTERIXDB-2089:


Assignee: Steven Jacobs  (was: Dmitry Lychagin)

> Bad return type error for valid SQL++ query
> ---
>
> Key: ASTERIXDB-2089
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2089
> Project: Apache AsterixDB
>  Issue Type: Bug
>Reporter: Steven Jacobs
>Assignee: Steven Jacobs
>
> The following will produce a "Bad return type" error:
> drop dataverse channels if exists;
> create dataverse channels;
> use channels;
> create type userLocation as {
>   userId: int,
>   roomNumber: int
> };
> create type result as {
>   id:uuid
> };
> create type subscriptionType as {
>   subscriptionId:uuid,
>   param0:int
> };
> create dataset roomRecordsResults(result)
> primary key id autogenerated;
> create dataset roomRecordsSubscriptions(subscriptionType)
> primary key subscriptionId autogenerated;
> create dataset UserLocations(userLocation)
> primary key userId;
> create function RoomOccupants(room) {
>   (select location.userId
>   from UserLocations location
>   where location.roomNumber = room)
> };
> use channels;
> SET inline_with "false";
> insert into channels.roomRecordsResults as a (
>   with channelExecutionTime as current_datetime() 
>   select result, channelExecutionTime, sub.subscriptionId as 
> subscriptionId,current_datetime() as deliveryTime
>   from channels.roomRecordsSubscriptions sub,
>   channels.RoomOccupants(sub.param0) result 
> ) returning a;
> Here is the top of the stack trace:
> WARNING: Unhandled throwable
> java.lang.VerifyError: Bad return type
> Exception Details:
>   Location:
> 
> org/apache/asterix/runtime/evaluators/functions/NotDescriptor$_EvaluatorFactoryGen.access$0(Lorg/apache/asterix/runtime/evaluators/functions/NotDescriptor$_EvaluatorFactoryGen;)Lorg/apache/asterix/runtime/evaluators/functions/NotDescriptor;
>  @4: areturn
>   Reason:
> Type 'org/apache/asterix/runtime/evaluators/functions/NotDescriptor$_Gen' 
> (current frame, stack[0]) is not assignable to 
> 'org/apache/asterix/runtime/evaluators/functions/NotDescriptor' (from method 
> signature)
>   Current Frame:
> bci: @4
> flags: { }
> locals: { 
> 'org/apache/asterix/runtime/evaluators/functions/NotDescriptor$_EvaluatorFactoryGen'
>  }
> stack: { 
> 'org/apache/asterix/runtime/evaluators/functions/NotDescriptor$_Gen' }
>   Bytecode:
> 0x000: 2ab4 003a b0   
>   at 
> org.apache.asterix.runtime.evaluators.functions.NotDescriptor$_Gen.createEvaluatorFactory(NotDescriptor.java:60)
>   at 
> org.apache.asterix.jobgen.QueryLogicalExpressionJobGen.createScalarFunctionEvaluatorFactory(QueryLogicalExpressionJobGen.java:144)
>   at 
> org.apache.asterix.jobgen.QueryLogicalExpressionJobGen.createEvaluatorFactory(QueryLogicalExpressionJobGen.java:109)
>   at 
> org.apache.hyracks.algebricks.core.algebra.expressions.ExpressionRuntimeProvider.createEvaluatorFactory(ExpressionRuntimeProvider.java:41)
>   at 
> org.apache.asterix.optimizer.rules.ConstantFoldingRule$ConstantFoldingVisitor.visitScalarFunctionCallExpression(ConstantFoldingRule.java:217)
>   at 
> org.apache.asterix.optimizer.rules.ConstantFoldingRule$ConstantFoldingVisitor.visitScalarFunctionCallExpression(ConstantFoldingRule.java:1)
>   at 
> org.apache.hyracks.algebricks.core.algebra.expressions.ScalarFunctionCallExpression.accept(ScalarFunctionCallExpression.java:55)
>   at 
> org.apache.asterix.optimizer.rules.ConstantFoldingRule$ConstantFoldingVisitor.transform(ConstantFoldingRule.java:163)
>   at 
> org.apache.hyracks.algebricks.core.algebra.operators.logical.SelectOperator.acceptExpressionTransform(SelectOperator.java:83)
>   at 
> org.apache.asterix.optimizer.rules.ConstantFoldingRule.rewritePost(ConstantFoldingRule.java:150)
>   at 
> org.apache.hyracks.algebricks.core.rewriter.base.AbstractRuleController.rewriteOperatorRef(AbstractRuleController.java:126)
>   at 
> org.apache.hyracks.algebricks.core.rewriter.base.AbstractRuleController.rewriteOperatorRef(AbstractRuleController.java:100)
>   at 
> org.apache.hyracks.algebricks.core.rewriter.base.AbstractRuleController.rewriteOperatorRef(AbstractRuleController.java:100)
>   at 
> org.apache.hyracks.algebricks.core.rewriter.base.AbstractRuleController.rewriteOperatorRef(AbstractRuleController.java:100)
>   at 
> org.apache.hyracks.algebricks.core.rewriter.base.AbstractRuleController.rewriteOperatorRef(AbstractRuleController.java:100)
>   at 
> org.apache.hyracks.algebricks.core.rewriter.base.AbstractRuleController.rewriteOperatorRef(AbstractRuleController.java:100)
>   at 
> 

[jira] [Commented] (ASTERIXDB-2089) Bad return type error for valid SQL++ query

2017-09-14 Thread Steven Jacobs (JIRA)

[ 
https://issues.apache.org/jira/browse/ASTERIXDB-2089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16166985#comment-16166985
 ] 

Steven Jacobs commented on ASTERIXDB-2089:
--

I was able to solve this, so I'm assigning it to myself.

> Bad return type error for valid SQL++ query
> ---
>
> Key: ASTERIXDB-2089
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-2089
> Project: Apache AsterixDB
>  Issue Type: Bug
>Reporter: Steven Jacobs
>Assignee: Steven Jacobs
>
> The following will produce a "Bad return type" error:
> drop dataverse channels if exists;
> create dataverse channels;
> use channels;
> create type userLocation as {
>   userId: int,
>   roomNumber: int
> };
> create type result as {
>   id:uuid
> };
> create type subscriptionType as {
>   subscriptionId:uuid,
>   param0:int
> };
> create dataset roomRecordsResults(result)
> primary key id autogenerated;
> create dataset roomRecordsSubscriptions(subscriptionType)
> primary key subscriptionId autogenerated;
> create dataset UserLocations(userLocation)
> primary key userId;
> create function RoomOccupants(room) {
>   (select location.userId
>   from UserLocations location
>   where location.roomNumber = room)
> };
> use channels;
> SET inline_with "false";
> insert into channels.roomRecordsResults as a (
>   with channelExecutionTime as current_datetime() 
>   select result, channelExecutionTime, sub.subscriptionId as 
> subscriptionId,current_datetime() as deliveryTime
>   from channels.roomRecordsSubscriptions sub,
>   channels.RoomOccupants(sub.param0) result 
> ) returning a;
> Here is the top of the stack trace:
> WARNING: Unhandled throwable
> java.lang.VerifyError: Bad return type
> Exception Details:
>   Location:
> 
> org/apache/asterix/runtime/evaluators/functions/NotDescriptor$_EvaluatorFactoryGen.access$0(Lorg/apache/asterix/runtime/evaluators/functions/NotDescriptor$_EvaluatorFactoryGen;)Lorg/apache/asterix/runtime/evaluators/functions/NotDescriptor;
>  @4: areturn
>   Reason:
> Type 'org/apache/asterix/runtime/evaluators/functions/NotDescriptor$_Gen' 
> (current frame, stack[0]) is not assignable to 
> 'org/apache/asterix/runtime/evaluators/functions/NotDescriptor' (from method 
> signature)
>   Current Frame:
> bci: @4
> flags: { }
> locals: { 
> 'org/apache/asterix/runtime/evaluators/functions/NotDescriptor$_EvaluatorFactoryGen'
>  }
> stack: { 
> 'org/apache/asterix/runtime/evaluators/functions/NotDescriptor$_Gen' }
>   Bytecode:
> 0x000: 2ab4 003a b0   
>   at 
> org.apache.asterix.runtime.evaluators.functions.NotDescriptor$_Gen.createEvaluatorFactory(NotDescriptor.java:60)
>   at 
> org.apache.asterix.jobgen.QueryLogicalExpressionJobGen.createScalarFunctionEvaluatorFactory(QueryLogicalExpressionJobGen.java:144)
>   at 
> org.apache.asterix.jobgen.QueryLogicalExpressionJobGen.createEvaluatorFactory(QueryLogicalExpressionJobGen.java:109)
>   at 
> org.apache.hyracks.algebricks.core.algebra.expressions.ExpressionRuntimeProvider.createEvaluatorFactory(ExpressionRuntimeProvider.java:41)
>   at 
> org.apache.asterix.optimizer.rules.ConstantFoldingRule$ConstantFoldingVisitor.visitScalarFunctionCallExpression(ConstantFoldingRule.java:217)
>   at 
> org.apache.asterix.optimizer.rules.ConstantFoldingRule$ConstantFoldingVisitor.visitScalarFunctionCallExpression(ConstantFoldingRule.java:1)
>   at 
> org.apache.hyracks.algebricks.core.algebra.expressions.ScalarFunctionCallExpression.accept(ScalarFunctionCallExpression.java:55)
>   at 
> org.apache.asterix.optimizer.rules.ConstantFoldingRule$ConstantFoldingVisitor.transform(ConstantFoldingRule.java:163)
>   at 
> org.apache.hyracks.algebricks.core.algebra.operators.logical.SelectOperator.acceptExpressionTransform(SelectOperator.java:83)
>   at 
> org.apache.asterix.optimizer.rules.ConstantFoldingRule.rewritePost(ConstantFoldingRule.java:150)
>   at 
> org.apache.hyracks.algebricks.core.rewriter.base.AbstractRuleController.rewriteOperatorRef(AbstractRuleController.java:126)
>   at 
> org.apache.hyracks.algebricks.core.rewriter.base.AbstractRuleController.rewriteOperatorRef(AbstractRuleController.java:100)
>   at 
> org.apache.hyracks.algebricks.core.rewriter.base.AbstractRuleController.rewriteOperatorRef(AbstractRuleController.java:100)
>   at 
> org.apache.hyracks.algebricks.core.rewriter.base.AbstractRuleController.rewriteOperatorRef(AbstractRuleController.java:100)
>   at 
> org.apache.hyracks.algebricks.core.rewriter.base.AbstractRuleController.rewriteOperatorRef(AbstractRuleController.java:100)
>   at 
> org.apache.hyracks.algebricks.core.rewriter.base.AbstractRuleController.rewriteOperatorRef(AbstractRuleController.java:100)
>   at 
> 

[jira] [Commented] (ASTERIXDB-2089) Bad return type error for valid SQL++ query

2017-09-14 Thread Steven Jacobs (JIRA)

[ 
https://issues.apache.org/jira/browse/ASTERIXDB-2089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16166589#comment-16166589
 ] 

Steven Jacobs commented on ASTERIXDB-2089:
--

I think the problem starts during the optimization below. It creates the 
select (not(is-missing($$30))) -- |UNPARTITIONED|
which seems to cause trouble later.
[~dlychagin-cb], let me know when you have a chance to look at this. It's 
currently blocking me, so I'm trying to investigate it as well (my Skype is 
sjaco002).



Sep 14, 2017 9:44:25 AM 
org.apache.hyracks.algebricks.core.rewriter.base.AbstractRuleController 
printRuleApplication
FINE:  Rule class 
org.apache.asterix.optimizer.rules.subplan.InlineSubplanInputForNestedTupleSourceRule
 fired.
Sep 14, 2017 9:44:25 AM 
org.apache.hyracks.algebricks.core.rewriter.base.AbstractRuleController 
printRuleApplication
FINE:  Before plan
distribute result [$$21] -- |UNPARTITIONED|
  commit -- |UNPARTITIONED|
insert into channels.roomRecordsResults from record: $$22 partitioned by 
[$$19] -- |UNPARTITIONED|
  assign [$$19] <- [$$22.getField(0)] -- |UNPARTITIONED|
assign [$$22] <- [cast($$21)] -- |UNPARTITIONED|
  assign [$$21] <- [check-unknown(object-merge($$17, {"id": 
create-uuid()}))] -- |UNPARTITIONED|
project ([$$17]) -- |UNPARTITIONED|
  assign [$$17] <- [{"result": $$result, "channelExecutionTime": 
$$channelExecutionTime, "subscriptionId": $$23, "deliveryTime": 
current-datetime()}] -- |UNPARTITIONED|
unnest $$result <- scan-collection($$13) -- |UNPARTITIONED|
  subplan {
aggregate [$$13] <- [listify($$12)] -- 
|UNPARTITIONED|
  assign [$$12] <- [{"userId": $$24}] -- 
|UNPARTITIONED|
join (eq($$25, $$26)) -- |UNPARTITIONED|
  nested tuple source -- |UNPARTITIONED|
  assign [$$25] <- [$$location.getField(1)] -- 
|UNPARTITIONED|
data-scan []<-[$$24, $$location] <- 
channels.UserLocations -- |UNPARTITIONED|
  empty-tuple-source -- |UNPARTITIONED|
 } -- |UNPARTITIONED|
assign [$$26] <- [$$sub.getField(1)] -- |UNPARTITIONED|
  data-scan []<-[$$23, $$sub] <- 
channels.roomRecordsSubscriptions -- |UNPARTITIONED|
assign [$$channelExecutionTime] <- [current-datetime()] 
-- |UNPARTITIONED|
  empty-tuple-source -- |UNPARTITIONED|


Sep 14, 2017 9:44:25 AM 
org.apache.hyracks.algebricks.core.rewriter.base.AbstractRuleController 
printRuleApplication
FINE:  After plan
distribute result [$$21] -- |UNPARTITIONED|
  commit -- |UNPARTITIONED|
insert into channels.roomRecordsResults from record: $$22 partitioned by 
[$$19] -- |UNPARTITIONED|
  assign [$$19] <- [$$22.getField(0)] -- |UNPARTITIONED|
assign [$$22] <- [cast($$21)] -- |UNPARTITIONED|
  assign [$$21] <- [check-unknown(object-merge($$17, {"id": 
create-uuid()}))] -- |UNPARTITIONED|
project ([$$17]) -- |UNPARTITIONED|
  assign [$$17] <- [{"result": $$result, "channelExecutionTime": 
$$channelExecutionTime, "subscriptionId": $$23, "deliveryTime": 
current-datetime()}] -- |UNPARTITIONED|
unnest $$result <- scan-collection($$13) -- |UNPARTITIONED|
  group by ([$$31 := $$29]) decor ([$$channelExecutionTime; 
$$sub; $$23; $$26]) {
aggregate [$$13] <- [listify($$12)] -- 
|UNPARTITIONED|
  assign [$$12] <- [{"userId": $$24}] -- 
|UNPARTITIONED|
select (not(is-missing($$30))) -- 
|UNPARTITIONED|
  nested tuple source -- |UNPARTITIONED|
 } -- |UNPARTITIONED|
left outer join (eq($$25, $$26)) -- |UNPARTITIONED|
  assign [$$29] <- [create-query-uid()] -- |UNPARTITIONED|
assign [$$26] <- [$$sub.getField(1)] -- |UNPARTITIONED|
  data-scan []<-[$$23, $$sub] <- 
channels.roomRecordsSubscriptions -- |UNPARTITIONED|
assign [$$channelExecutionTime] <- 
[current-datetime()] -- |UNPARTITIONED|
  empty-tuple-source -- |UNPARTITIONED|
  assign [$$30] <- [TRUE] -- |UNPARTITIONED|
assign [$$25] <- [$$location.getField(1)] -- 
|UNPARTITIONED|
  data-scan []<-[$$24, $$location] <- 
channels.UserLocations -- |UNPARTITIONED|
empty-tuple-source -- |UNPARTITIONED|


> Bad return type error for valid SQL++ query
> ---
>
> Key: ASTERIXDB-2089
> URL: 

[jira] [Created] (ASTERIXDB-2089) Bad return type error for valid SQL++ query

2017-09-13 Thread Steven Jacobs (JIRA)
Steven Jacobs created ASTERIXDB-2089:


 Summary: Bad return type error for valid SQL++ query
 Key: ASTERIXDB-2089
 URL: https://issues.apache.org/jira/browse/ASTERIXDB-2089
 Project: Apache AsterixDB
  Issue Type: Bug
Reporter: Steven Jacobs


The following will produce a "Bad return type" error:

drop dataverse channels if exists;
create dataverse channels;
use channels;
create type userLocation as {
  userId: int,
  roomNumber: int
};
create type result as {
id:uuid
};
create type subscriptionType as {
subscriptionId:uuid,
param0:int
};
create dataset roomRecordsResults(result)
primary key id autogenerated;
create dataset roomRecordsSubscriptions(subscriptionType)
primary key subscriptionId autogenerated;
create dataset UserLocations(userLocation)
primary key userId;
create function RoomOccupants(room) {
  (select location.userId
  from UserLocations location
  where location.roomNumber = room)
};
use channels;
SET inline_with "false";
insert into channels.roomRecordsResults as a (
with channelExecutionTime as current_datetime() 
select result, channelExecutionTime, sub.subscriptionId as 
subscriptionId,current_datetime() as deliveryTime
from channels.roomRecordsSubscriptions sub,
channels.RoomOccupants(sub.param0) result 
) returning a;

Here is the top of the stack trace:
WARNING: Unhandled throwable
java.lang.VerifyError: Bad return type
Exception Details:
  Location:

org/apache/asterix/runtime/evaluators/functions/NotDescriptor$_EvaluatorFactoryGen.access$0(Lorg/apache/asterix/runtime/evaluators/functions/NotDescriptor$_EvaluatorFactoryGen;)Lorg/apache/asterix/runtime/evaluators/functions/NotDescriptor;
 @4: areturn
  Reason:
Type 'org/apache/asterix/runtime/evaluators/functions/NotDescriptor$_Gen' 
(current frame, stack[0]) is not assignable to 
'org/apache/asterix/runtime/evaluators/functions/NotDescriptor' (from method 
signature)
  Current Frame:
bci: @4
flags: { }
locals: { 
'org/apache/asterix/runtime/evaluators/functions/NotDescriptor$_EvaluatorFactoryGen'
 }
stack: { 
'org/apache/asterix/runtime/evaluators/functions/NotDescriptor$_Gen' }
  Bytecode:
0x000: 2ab4 003a b0   

at 
org.apache.asterix.runtime.evaluators.functions.NotDescriptor$_Gen.createEvaluatorFactory(NotDescriptor.java:60)
at 
org.apache.asterix.jobgen.QueryLogicalExpressionJobGen.createScalarFunctionEvaluatorFactory(QueryLogicalExpressionJobGen.java:144)
at 
org.apache.asterix.jobgen.QueryLogicalExpressionJobGen.createEvaluatorFactory(QueryLogicalExpressionJobGen.java:109)
at 
org.apache.hyracks.algebricks.core.algebra.expressions.ExpressionRuntimeProvider.createEvaluatorFactory(ExpressionRuntimeProvider.java:41)
at 
org.apache.asterix.optimizer.rules.ConstantFoldingRule$ConstantFoldingVisitor.visitScalarFunctionCallExpression(ConstantFoldingRule.java:217)
at 
org.apache.asterix.optimizer.rules.ConstantFoldingRule$ConstantFoldingVisitor.visitScalarFunctionCallExpression(ConstantFoldingRule.java:1)
at 
org.apache.hyracks.algebricks.core.algebra.expressions.ScalarFunctionCallExpression.accept(ScalarFunctionCallExpression.java:55)
at 
org.apache.asterix.optimizer.rules.ConstantFoldingRule$ConstantFoldingVisitor.transform(ConstantFoldingRule.java:163)
at 
org.apache.hyracks.algebricks.core.algebra.operators.logical.SelectOperator.acceptExpressionTransform(SelectOperator.java:83)
at 
org.apache.asterix.optimizer.rules.ConstantFoldingRule.rewritePost(ConstantFoldingRule.java:150)
at 
org.apache.hyracks.algebricks.core.rewriter.base.AbstractRuleController.rewriteOperatorRef(AbstractRuleController.java:126)
at 
org.apache.hyracks.algebricks.core.rewriter.base.AbstractRuleController.rewriteOperatorRef(AbstractRuleController.java:100)
at 
org.apache.hyracks.algebricks.core.rewriter.base.AbstractRuleController.rewriteOperatorRef(AbstractRuleController.java:100)
at 
org.apache.hyracks.algebricks.core.rewriter.base.AbstractRuleController.rewriteOperatorRef(AbstractRuleController.java:100)
at 
org.apache.hyracks.algebricks.core.rewriter.base.AbstractRuleController.rewriteOperatorRef(AbstractRuleController.java:100)
at 
org.apache.hyracks.algebricks.core.rewriter.base.AbstractRuleController.rewriteOperatorRef(AbstractRuleController.java:100)
at 
org.apache.hyracks.algebricks.core.rewriter.base.AbstractRuleController.rewriteOperatorRef(AbstractRuleController.java:100)
at 
org.apache.hyracks.algebricks.core.rewriter.base.AbstractRuleController.rewriteOperatorRef(AbstractRuleController.java:100)
at 
org.apache.hyracks.algebricks.core.rewriter.base.AbstractRuleController.rewriteOperatorRef(AbstractRuleController.java:100)
at 

[jira] [Created] (ASTERIXDB-1934) Add Constructor Functions to the Documentation

2017-06-09 Thread Steven Jacobs (JIRA)
Steven Jacobs created ASTERIXDB-1934:


 Summary: Add Constructor Functions to the Documentation
 Key: ASTERIXDB-1934
 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1934
 Project: Apache AsterixDB
  Issue Type: Bug
Reporter: Steven Jacobs


The docs are missing descriptions of some constructor functions, e.g. string 
constructors like bigint("123").
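
For illustration, a minimal sketch of the kind of example the docs could show 
(bigint() is taken from this issue; the other constructor names are assumed to 
follow the usual type names and should be checked against the function 
library):

-- each constructor parses a string literal into a value of the named type
select value [
  bigint("123"),
  double("3.14"),
  date("2017-06-09"),
  datetime("2017-06-09T12:00:00")
];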



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (ASTERIXDB-1926) Buffer Cache error after 7 hours of Feed Ingestion

2017-05-30 Thread Steven Jacobs (JIRA)
Steven Jacobs created ASTERIXDB-1926:


 Summary: Buffer Cache error after 7 hours of Feed Ingestion
 Key: ASTERIXDB-1926
 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1926
 Project: Apache AsterixDB
  Issue Type: Bug
Reporter: Steven Jacobs


I have a single-node, single-partition cluster on one machine, and a script 
sending artificial tweets as fast as possible to a socket feed from a second 
machine. On the Asterix machine, I have a script performing a single-record 
lookup every second. In both scripts, the key is randomly chosen, so in the 
read case the key may not exist, and in the write case, duplicate keys are 
being inserted. I'm using an upsert feed with no secondary keys or filters 
created. After 7 hours, all queries start to fail with the following (this 
happens every time for me):

May 26, 2017 5:17:38 PM org.apache.asterix.api.http.server.RestApiServlet 
doHandle
SEVERE: Unable to find free page in buffer cache after 1000 cycles (buffer 
cache undersized?)
org.apache.hyracks.algebricks.common.exceptions.AlgebricksException: Unable to 
find free page in buffer cache after 1000 cycles (buffer cache undersized?)
at 
org.apache.asterix.metadata.declared.MetadataManagerUtil.getDatasetIndexes(MetadataManagerUtil.java:159)
at 
org.apache.asterix.metadata.declared.MetadataProvider.getDatasetIndexes(MetadataProvider.java:364)
at 
org.apache.asterix.optimizer.rules.PushFieldAccessRule.isAccessToIndexedField(PushFieldAccessRule.java:148)
at 
org.apache.asterix.optimizer.rules.PushFieldAccessRule.propagateFieldAccessRec(PushFieldAccessRule.java:185)
at 
org.apache.asterix.optimizer.rules.PushFieldAccessRule.rewritePost(PushFieldAccessRule.java:96)
at 
org.apache.hyracks.algebricks.core.rewriter.base.AbstractRuleController.rewriteOperatorRef(AbstractRuleController.java:126)
at 
org.apache.hyracks.algebricks.core.rewriter.base.AbstractRuleController.rewriteOperatorRef(AbstractRuleController.java:100)
at 
org.apache.hyracks.algebricks.core.rewriter.base.AbstractRuleController.rewriteOperatorRef(AbstractRuleController.java:100)
at 
org.apache.hyracks.algebricks.compiler.rewriter.rulecontrollers.SequentialFixpointRuleController.rewriteWithRuleCollection(SequentialFixpointRuleController.java:53)
at 
org.apache.hyracks.algebricks.core.rewriter.base.HeuristicOptimizer.runOptimizationSets(HeuristicOptimizer.java:102)
at 
org.apache.hyracks.algebricks.core.rewriter.base.HeuristicOptimizer.optimize(HeuristicOptimizer.java:82)
at 
org.apache.hyracks.algebricks.compiler.api.HeuristicCompilerFactoryBuilder$1$1.optimize(HeuristicCompilerFactoryBuilder.java:90)
at 
org.apache.asterix.api.common.APIFramework.compileQuery(APIFramework.java:263)
at 
org.apache.asterix.app.translator.QueryTranslator.rewriteCompileQuery(QueryTranslator.java:1818)
at 
org.apache.asterix.app.translator.QueryTranslator.lambda$handleQuery$2(QueryTranslator.java:2312)
at 
org.apache.asterix.app.translator.QueryTranslator.createAndRunJob(QueryTranslator.java:2402)
at 
org.apache.asterix.app.translator.QueryTranslator.deliverResult(QueryTranslator.java:2348)
at 
org.apache.asterix.app.translator.QueryTranslator.handleQuery(QueryTranslator.java:2324)
at 
org.apache.asterix.app.translator.QueryTranslator.compileAndExecute(QueryTranslator.java:362)
at 
org.apache.asterix.app.translator.QueryTranslator.compileAndExecute(QueryTranslator.java:246)
at 
org.apache.asterix.api.http.server.RestApiServlet.doHandle(RestApiServlet.java:206)
at 
org.apache.asterix.api.http.server.RestApiServlet.getOrPost(RestApiServlet.java:176)
at 
org.apache.asterix.api.http.server.RestApiServlet.get(RestApiServlet.java:160)
at 
org.apache.hyracks.http.server.AbstractServlet.handle(AbstractServlet.java:74)
at 
org.apache.hyracks.http.server.HttpRequestHandler.handle(HttpRequestHandler.java:70)
at 
org.apache.hyracks.http.server.HttpRequestHandler.call(HttpRequestHandler.java:55)
at 
org.apache.hyracks.http.server.HttpRequestHandler.call(HttpRequestHandler.java:36)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Here are the DDL's and the query:

drop dataverse experiments if exists;
create dataverse experiments;
use dataverse experiments;

create type TweetMessageType as closed {
id: int64,
message_text: string,
created_at: string,
country:string
}

create feed TweetFeed using socket_adapter
(
("sockets"="127.0.0.1:10001"),
("address-type"="IP"),

[jira] [Created] (ASTERIXDB-1911) Allow concurrent executions of one pre-distributed job

2017-05-15 Thread Steven Jacobs (JIRA)
Steven Jacobs created ASTERIXDB-1911:


 Summary: Allow concurrent executions of one pre-distributed job
 Key: ASTERIXDB-1911
 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1911
 Project: Apache AsterixDB
  Issue Type: Bug
Reporter: Steven Jacobs
Assignee: Steven Jacobs


Right now, concurrent attempts to run the same pre-distributed job will not 
work. It would be great if we could find a way to allow them to run together.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (ASTERIXDB-1909) Fix the default log settings

2017-05-11 Thread Steven Jacobs (JIRA)
Steven Jacobs created ASTERIXDB-1909:


 Summary: Fix the default log settings
 Key: ASTERIXDB-1909
 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1909
 Project: Apache AsterixDB
  Issue Type: Bug
Reporter: Steven Jacobs


Right now we are logging far too much with the default Asterix settings. We 
should minimize these logs by default. As an example, I got 100 GB of logs on a 
5 GB dataset.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (ASTERIXDB-1887) InlineWithExpressionVisitor inlines nonPure calls

2017-04-19 Thread Steven Jacobs (JIRA)
Steven Jacobs created ASTERIXDB-1887:


 Summary: InlineWithExpressionVisitor inlines nonPure calls 
 Key: ASTERIXDB-1887
 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1887
 Project: Apache AsterixDB
  Issue Type: Bug
Reporter: Steven Jacobs


The visitor inlines non-pure function calls, which subverts user intent in 
cases where a single value should be produced for the entire query, e.g.

with cTime as current_datetime()
select cTime, sub.subscriptionId as subscriptionId
from Subscriptions sub
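
To make the intent concrete (a sketch, not taken verbatim from this issue): 
once the WITH expression is inlined, the query above effectively becomes the 
second form below, so current_datetime() is re-evaluated per row instead of 
once per query. The repros elsewhere in this thread avoid the inlining with 
the inline_with option.

-- intended semantics: one timestamp shared by every result row
SET inline_with "false";
with cTime as current_datetime()
select cTime, sub.subscriptionId as subscriptionId
from Subscriptions sub;

-- what the reported inlining effectively produces: one timestamp per row
select current_datetime() as cTime, sub.subscriptionId as subscriptionId
from Subscriptions sub;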



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (ASTERIXDB-1875) Can't run UDF in SQL++

2017-04-05 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-1875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs reassigned ASTERIXDB-1875:


Assignee: Xikui Wang  (was: Yingyi Bu)

> Can't run UDF in SQL++
> --
>
> Key: ASTERIXDB-1875
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-1875
> Project: Apache AsterixDB
>  Issue Type: Bug
>Reporter: Steven Jacobs
>Assignee: Xikui Wang
>
> create function cat(str1,str2) {  
>   concat(str1,str2)
> }
> with result as cat("w229","4u1") 
> select value result;
> The above gives:
>  Unknown function cat@2 [CompilationException]
> Whereas the following works correctly:
> with result as concat("w229","4u1") 
> select value result;



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (ASTERIXDB-1876) UDF fails in select statement in SQL++

2017-04-05 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-1876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs reassigned ASTERIXDB-1876:


Assignee: Xikui Wang  (was: Yingyi Bu)

> UDF fails in select statement in SQL++
> --
>
> Key: ASTERIXDB-1876
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-1876
> Project: Apache AsterixDB
>  Issue Type: Bug
>Reporter: Steven Jacobs
>Assignee: Xikui Wang
>
> drop dataverse channels if exists;
> create dataverse channels;
> use channels;
> create type UserLocation as closed {
>   recordId: integer,
>   userName: string
> }
> create dataset UserLocations(UserLocation)
> primary key recordId;
> create function RecentEmergenciesNearUser(userName) {  
>   (SELECT r AS report
>   FROM EmergencyReports r, UserLocations l
>   where l.userName = userName 
>   and spatial_intersect(r.location,l.location))
> }
> select *
> from channels.UserLocations location,
> channels.RecentEmergenciesNearUser(location.userName) result;
> Stack Trace:
> java.lang.NullPointerException
>   at 
> org.apache.asterix.lang.common.visitor.AbstractInlineUdfsVisitor.inlineUdfsInExpr(AbstractInlineUdfsVisitor.java:265)
>   at 
> org.apache.asterix.lang.sqlpp.rewrites.visitor.SqlppInlineUdfsVisitor.visit(SqlppInlineUdfsVisitor.java:122)
>   at 
> org.apache.asterix.lang.sqlpp.rewrites.visitor.SqlppInlineUdfsVisitor.visit(SqlppInlineUdfsVisitor.java:1)
>   at 
> org.apache.asterix.lang.sqlpp.clause.Projection.accept(Projection.java:45)
>   at 
> org.apache.asterix.lang.sqlpp.rewrites.visitor.SqlppInlineUdfsVisitor.visit(SqlppInlineUdfsVisitor.java:178)
>   at 
> org.apache.asterix.lang.sqlpp.rewrites.visitor.SqlppInlineUdfsVisitor.visit(SqlppInlineUdfsVisitor.java:1)
>   at 
> org.apache.asterix.lang.sqlpp.clause.SelectRegular.accept(SelectRegular.java:40)
>   at 
> org.apache.asterix.lang.sqlpp.rewrites.visitor.SqlppInlineUdfsVisitor.visit(SqlppInlineUdfsVisitor.java:162)
>   at 
> org.apache.asterix.lang.sqlpp.rewrites.visitor.SqlppInlineUdfsVisitor.visit(SqlppInlineUdfsVisitor.java:1)
>   at 
> org.apache.asterix.lang.sqlpp.clause.SelectClause.accept(SelectClause.java:42)
>   at 
> org.apache.asterix.lang.sqlpp.rewrites.visitor.SqlppInlineUdfsVisitor.visit(SqlppInlineUdfsVisitor.java:152)
>   at 
> org.apache.asterix.lang.sqlpp.rewrites.visitor.SqlppInlineUdfsVisitor.visit(SqlppInlineUdfsVisitor.java:1)
>   at 
> org.apache.asterix.lang.sqlpp.struct.SetOperationInput.accept(SetOperationInput.java:56)
>   at 
> org.apache.asterix.lang.sqlpp.rewrites.visitor.SqlppInlineUdfsVisitor.visit(SqlppInlineUdfsVisitor.java:186)
>   at 
> org.apache.asterix.lang.sqlpp.rewrites.visitor.SqlppInlineUdfsVisitor.visit(SqlppInlineUdfsVisitor.java:1)
>   at 
> org.apache.asterix.lang.sqlpp.clause.SelectSetOperation.accept(SelectSetOperation.java:47)
>   at 
> org.apache.asterix.lang.sqlpp.rewrites.visitor.SqlppInlineUdfsVisitor.visit(SqlppInlineUdfsVisitor.java:201)
>   at 
> org.apache.asterix.lang.sqlpp.rewrites.visitor.SqlppInlineUdfsVisitor.visit(SqlppInlineUdfsVisitor.java:1)
>   at 
> org.apache.asterix.lang.sqlpp.expression.SelectExpression.accept(SelectExpression.java:55)
>   at 
> org.apache.asterix.lang.common.visitor.AbstractInlineUdfsVisitor.inlineUdfsInExpr(AbstractInlineUdfsVisitor.java:266)
>   at 
> org.apache.asterix.lang.common.visitor.AbstractInlineUdfsVisitor.visit(AbstractInlineUdfsVisitor.java:91)
>   at 
> org.apache.asterix.lang.common.visitor.AbstractInlineUdfsVisitor.visit(AbstractInlineUdfsVisitor.java:1)
>   at org.apache.asterix.lang.common.statement.Query.accept(Query.java:93)
>   at 
> org.apache.asterix.lang.sqlpp.rewrites.SqlppQueryRewriter.inlineDeclaredUdfs(SqlppQueryRewriter.java:234)
>   at 
> org.apache.asterix.lang.sqlpp.rewrites.SqlppQueryRewriter.rewrite(SqlppQueryRewriter.java:127)
>   at 
> org.apache.asterix.api.common.APIFramework.reWriteQuery(APIFramework.java:184)
>   at 
> org.apache.asterix.app.translator.QueryTranslator.rewriteCompileQuery(QueryTranslator.java:1803



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (ASTERIXDB-1876) UDF fails in select statement in SQL++

2017-04-05 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-1876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs updated ASTERIXDB-1876:
-
Description: 
drop dataverse channels if exists;
create dataverse channels;
use channels;

create type UserLocation as closed {
recordId: integer,
userName: string
}

create dataset UserLocations(UserLocation)
primary key recordId;

create function RecentEmergenciesNearUser(userName) {  
  (SELECT r AS report
  FROM EmergencyReports r, UserLocations l
  where l.userName = userName 
  and spatial_intersect(r.location,l.location))
}

select *
from channels.UserLocations location,
channels.RecentEmergenciesNearUser(location.userName) result;


Stack Trace:
java.lang.NullPointerException
at 
org.apache.asterix.lang.common.visitor.AbstractInlineUdfsVisitor.inlineUdfsInExpr(AbstractInlineUdfsVisitor.java:265)
at 
org.apache.asterix.lang.sqlpp.rewrites.visitor.SqlppInlineUdfsVisitor.visit(SqlppInlineUdfsVisitor.java:122)
at 
org.apache.asterix.lang.sqlpp.rewrites.visitor.SqlppInlineUdfsVisitor.visit(SqlppInlineUdfsVisitor.java:1)
at 
org.apache.asterix.lang.sqlpp.clause.Projection.accept(Projection.java:45)
at 
org.apache.asterix.lang.sqlpp.rewrites.visitor.SqlppInlineUdfsVisitor.visit(SqlppInlineUdfsVisitor.java:178)
at 
org.apache.asterix.lang.sqlpp.rewrites.visitor.SqlppInlineUdfsVisitor.visit(SqlppInlineUdfsVisitor.java:1)
at 
org.apache.asterix.lang.sqlpp.clause.SelectRegular.accept(SelectRegular.java:40)
at 
org.apache.asterix.lang.sqlpp.rewrites.visitor.SqlppInlineUdfsVisitor.visit(SqlppInlineUdfsVisitor.java:162)
at 
org.apache.asterix.lang.sqlpp.rewrites.visitor.SqlppInlineUdfsVisitor.visit(SqlppInlineUdfsVisitor.java:1)
at 
org.apache.asterix.lang.sqlpp.clause.SelectClause.accept(SelectClause.java:42)
at 
org.apache.asterix.lang.sqlpp.rewrites.visitor.SqlppInlineUdfsVisitor.visit(SqlppInlineUdfsVisitor.java:152)
at 
org.apache.asterix.lang.sqlpp.rewrites.visitor.SqlppInlineUdfsVisitor.visit(SqlppInlineUdfsVisitor.java:1)
at 
org.apache.asterix.lang.sqlpp.struct.SetOperationInput.accept(SetOperationInput.java:56)
at 
org.apache.asterix.lang.sqlpp.rewrites.visitor.SqlppInlineUdfsVisitor.visit(SqlppInlineUdfsVisitor.java:186)
at 
org.apache.asterix.lang.sqlpp.rewrites.visitor.SqlppInlineUdfsVisitor.visit(SqlppInlineUdfsVisitor.java:1)
at 
org.apache.asterix.lang.sqlpp.clause.SelectSetOperation.accept(SelectSetOperation.java:47)
at 
org.apache.asterix.lang.sqlpp.rewrites.visitor.SqlppInlineUdfsVisitor.visit(SqlppInlineUdfsVisitor.java:201)
at 
org.apache.asterix.lang.sqlpp.rewrites.visitor.SqlppInlineUdfsVisitor.visit(SqlppInlineUdfsVisitor.java:1)
at 
org.apache.asterix.lang.sqlpp.expression.SelectExpression.accept(SelectExpression.java:55)
at 
org.apache.asterix.lang.common.visitor.AbstractInlineUdfsVisitor.inlineUdfsInExpr(AbstractInlineUdfsVisitor.java:266)
at 
org.apache.asterix.lang.common.visitor.AbstractInlineUdfsVisitor.visit(AbstractInlineUdfsVisitor.java:91)
at 
org.apache.asterix.lang.common.visitor.AbstractInlineUdfsVisitor.visit(AbstractInlineUdfsVisitor.java:1)
at org.apache.asterix.lang.common.statement.Query.accept(Query.java:93)
at 
org.apache.asterix.lang.sqlpp.rewrites.SqlppQueryRewriter.inlineDeclaredUdfs(SqlppQueryRewriter.java:234)
at 
org.apache.asterix.lang.sqlpp.rewrites.SqlppQueryRewriter.rewrite(SqlppQueryRewriter.java:127)
at 
org.apache.asterix.api.common.APIFramework.reWriteQuery(APIFramework.java:184)
at 
org.apache.asterix.app.translator.QueryTranslator.rewriteCompileQuery(QueryTranslator.java:1803

  was:
drop dataverse channels if exists;
create dataverse channels;
use channels;

create type UserLocation as closed {
recordId: integer,
userName: string
}

create dataset UserLocations(UserLocation)
primary key recordId;

create function RecentEmergenciesNearUser(userName) {  
  (SELECT r AS report
  FROM EmergencyReports r, UserLocations l
  where l.userName = userName 
  and spatial_intersect(r.location,l.location))
}

select *
from channels.UserLocations location,
channels.RecentEmergenciesNearUser(location.userName) result;


> UDF fails in select statement in SQL++
> --
>
> Key: ASTERIXDB-1876
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-1876
> Project: Apache AsterixDB
>  Issue Type: Bug
>Reporter: Steven Jacobs
>Assignee: Yingyi Bu
>
> drop dataverse channels if exists;
> create dataverse channels;
> use channels;
> create type UserLocation as closed {
>   recordId: integer,
>   userName: string
> }
> create dataset UserLocations(UserLocation)
> primary key recordId;
> create function 

[jira] [Updated] (ASTERIXDB-1875) Can't run UDF in SQL++

2017-04-05 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-1875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs updated ASTERIXDB-1875:
-
Description: 
create function cat(str1,str2) {  
  concat(str1,str2)
}
with result as cat("w229","4u1") 
select value result;

The above gives:
 Unknown function cat@2 [CompilationException]

Whereas the following works correctly:
with result as concat("w229","4u1") 
select value result;

  was:
create function cat(str1,str2) {  
  concat(str1,str2)
}
with result as cat("w229","4u1") 
select value result;

The above gives an internal error, whereas the following works correctly:
with result as concat("w229","4u1") 
select value result;


> Can't run UDF in SQL++
> --
>
> Key: ASTERIXDB-1875
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-1875
> Project: Apache AsterixDB
>  Issue Type: Bug
>Reporter: Steven Jacobs
>Assignee: Yingyi Bu
>
> create function cat(str1,str2) {  
>   concat(str1,str2)
> }
> with result as cat("w229","4u1") 
> select value result;
> The above gives:
>  Unknown function cat@2 [CompilationException]
> Whereas the following works correctly:
> with result as concat("w229","4u1") 
> select value result;
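Per the resolution behavior reported in ASTERIXDB-1695 later in this archive, a 
dataverse-qualified call may resolve where an unqualified one does not. A minimal 
sketch, assuming cat() was created in the Default dataverse (unverified for this 
case):

{code}
-- hypothetical workaround sketch: qualify the UDF with its dataverse
with result as Default.cat("w229","4u1")
select value result;
{code}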



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (ASTERIXDB-1876) UDF fails in select statement in SQL++

2017-04-05 Thread Steven Jacobs (JIRA)
Steven Jacobs created ASTERIXDB-1876:


 Summary: UDF fails in select statement in SQL++
 Key: ASTERIXDB-1876
 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1876
 Project: Apache AsterixDB
  Issue Type: Bug
Reporter: Steven Jacobs
Assignee: Yingyi Bu


drop dataverse channels if exists;
create dataverse channels;
use channels;

create type UserLocation as closed {
recordId: integer,
userName: string
}

create dataset UserLocations(UserLocation)
primary key recordId;

create function RecentEmergenciesNearUser(userName) {  
  (SELECT r AS report
  FROM EmergencyReports r, UserLocations l
  where l.userName = userName 
  and spatial_intersect(r.location,l.location))
}

select *
from channels.UserLocations location,
channels.RecentEmergenciesNearUser(location.userName) result;



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (ASTERIXDB-1875) Can't run UDF in SQL++

2017-04-05 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-1875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs reassigned ASTERIXDB-1875:


Assignee: Yingyi Bu

> Can't run UDF in SQL++
> --
>
> Key: ASTERIXDB-1875
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-1875
> Project: Apache AsterixDB
>  Issue Type: Bug
>Reporter: Steven Jacobs
>Assignee: Yingyi Bu
>
> create function cat(str1,str2) {  
>   concat(str1,str2)
> }
> with result as cat("w229","4u1") 
> select value result;
> The above gives an internal error, whereas the following works correctly:
> with result as concat("w229","4u1") 
> select value result;



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (ASTERIXDB-1875) Can't run UDF in SQL++

2017-04-05 Thread Steven Jacobs (JIRA)
Steven Jacobs created ASTERIXDB-1875:


 Summary: Can't run UDF in SQL++
 Key: ASTERIXDB-1875
 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1875
 Project: Apache AsterixDB
  Issue Type: Bug
Reporter: Steven Jacobs


create function cat(str1,str2) {  
  concat(str1,str2)
}
with result as cat("w229","4u1") 
select value result;

The above gives an internal error, whereas the following works correctly:
with result as concat("w229","4u1") 
select value result;



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (ASTERIXDB-1862) Update Feed Documentation

2017-03-26 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-1862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs reassigned ASTERIXDB-1862:


Assignee: Xikui Wang

> Update Feed Documentation 
> --
>
> Key: ASTERIXDB-1862
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-1862
> Project: Apache AsterixDB
>  Issue Type: Bug
>Reporter: Steven Jacobs
>Assignee: Xikui Wang
>
> The feed documentation doesn't reflect recent changes, including the need for 
> a "start feed" statement.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (ASTERIXDB-1862) Update Feed Documentation

2017-03-26 Thread Steven Jacobs (JIRA)
Steven Jacobs created ASTERIXDB-1862:


 Summary: Update Feed Documentation 
 Key: ASTERIXDB-1862
 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1862
 Project: Apache AsterixDB
  Issue Type: Bug
Reporter: Steven Jacobs


The feed documentation doesn't reflect recent changes, including the need for a 
"start feed" statement.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (ASTERIXDB-1861) Port not open after creating feed in single server mode

2017-03-26 Thread Steven Jacobs (JIRA)

[ 
https://issues.apache.org/jira/browse/ASTERIXDB-1861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15942578#comment-15942578
 ] 

Steven Jacobs commented on ASTERIXDB-1861:
--

Hi,
I think you are missing the start feed command. This is a recent change to 
AsterixDB, so it might be missing from the documentation you used. After 
connecting the feed to the dataset, use the following command:
start feed TweetFeed;
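
A minimal sketch of the corrected tail of the repro script in the report below, 
with the feed and dataset names taken from that report (assuming the socket port 
is expected to open only once the feed has been started):

{code}
use dataverse twitter;

connect feed TweetFeed to dataset ds_tweet;

-- recent versions additionally require an explicit start statement
start feed TweetFeed;
{code}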

> Port not open after creating feed in single server mode
> ---
>
> Key: ASTERIXDB-1861
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-1861
> Project: Apache AsterixDB
>  Issue Type: Bug
>  Components: Feeds
> Environment: single server mode, using default configurations from 
> ./opt/local/conf/
>Reporter: Chen Luo
>Priority: Minor
>
> I'm using the single server mode with 1 CC node and 2 NC nodes (red and 
> blue). All configurations are default from /opt/local/conf/. However, after 
> creating a feed and connecting it to a dataset, the specified port is not 
> open, and no error message or stack trace is shown.
> The following commands work fine (and the specified port is open) on 
> AsterixDB 0.9.0.
> Steps to reproduce:
> {code}
> drop dataverse twitter if exists;
> create dataverse twitter if not exists;
> use dataverse twitter
> create type typeUser if not exists as open {
> id: int64,
> name: string,
> screen_name : string,
> lang : string,
> location: string,
> create_at: date,
> description: string,
> followers_count: int32,
> friends_count: int32,
> statues_count: int64
> }
> create type typePlace if not exists as open{
> country : string,
> country_code : string,
> full_name : string,
> id : string,
> name : string,
> place_type : string,
> bounding_box : rectangle
> }
> create type typeGeoTag if not exists as open {
> stateID: int32,
> stateName: string,
> countyID: int32,
> countyName: string,
> cityID: int32?,
> cityName: string?
> }
> create type typeTweet if not exists as open{
> create_at : datetime,
> id: int64,
> "text": string,
> in_reply_to_status : int64,
> in_reply_to_user : int64,
> favorite_count : int64,
> coordinate: point?,
> retweet_count : int64,
> lang : string,
> is_retweet: boolean,
> hashtags : {{ string }} ?,
> user_mentions : {{ int64 }} ? ,
> user : typeUser,
> place : typePlace?,
> geo_tag: typeGeoTag
> }
> create dataset ds_tweet(typeTweet) if not exists primary key id 
> using compaction policy correlated-prefix 
> (("max-mergable-component-size"="1048576"),("max-tolerance-component-count"="10"))
>  with filter on create_at ;
> // with filter on create_at;
> //"using" "compaction" "policy" CompactionPolicy ( Configuration )? )?
> create feed TweetFeed using socket_adapter
> (
> ("sockets"="red:10001"),
> ("address-type"="nc"),
> ("type-name"="typeTweet"),
> ("format"="adm")
> );
> connect feed TweetFeed to dataset ds_tweet;
> {code}
> Then check port 10001 using nmap:
> {code}
> nmap -p 10001 localhost
> {code}
> It shows:
> {code}
> PORT  STATE SERVICE
> 10001/tcp closed  scp-config
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (ASTERIXDB-1362) Exception in Spatial-intersect between point and circle

2017-03-07 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-1362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs resolved ASTERIXDB-1362.
--
Resolution: Fixed

> Exception in Spatial-intersect between point and circle 
> 
>
> Key: ASTERIXDB-1362
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-1362
> Project: Apache AsterixDB
>  Issue Type: Bug
>  Components: Operators
>Reporter: Jianfeng Jia
>Assignee: Steven Jacobs
>
> It works when a point intersects with a circle, but it doesn't work with the arguments reversed.
> The following query will produce the exception.
> {code}
>  spatial-intersect(circle("3107.06794511,1079.71664882 
> 1000.0"),point("3513.27543563,978.772476107")) 
> {code}
> {code}
> java.lang.ArrayIndexOutOfBoundsException: 17
>   at 
> org.apache.asterix.dataflow.data.nontagged.serde.AInt64SerializerDeserializer.getLong(AInt64SerializerDeserializer.java:58)
>   at 
> org.apache.asterix.dataflow.data.nontagged.serde.ADoubleSerializerDeserializer.getLongBits(ADoubleSerializerDeserializer.java:61)
>   at 
> org.apache.asterix.dataflow.data.nontagged.serde.ADoubleSerializerDeserializer.getDouble(ADoubleSerializerDeserializer.java:57)
>   at 
> org.apache.asterix.runtime.evaluators.functions.SpatialIntersectDescriptor$2$1.pointInCircle(SpatialIntersectDescriptor.java:186)
>   at 
> org.apache.asterix.runtime.evaluators.functions.SpatialIntersectDescriptor$2$1.evaluate(SpatialIntersectDescriptor.java:1008)
>   
>   at 
> org.apache.asterix.optimizer.rules.ConstantFoldingRule$ConstantFoldingVisitor.visitScalarFunctionCallExpression(ConstantFoldingRule.java:221)
>   at 
> org.apache.asterix.optimizer.rules.ConstantFoldingRule$ConstantFoldingVisitor.visitScalarFunctionCallExpression(ConstantFoldingRule.java:153)
>   at 
> org.apache.hyracks.algebricks.core.algebra.expressions.ScalarFunctionCallExpression.accept(ScalarFunctionCallExpression.java:55)
>   at 
> org.apache.asterix.optimizer.rules.ConstantFoldingRule$ConstantFoldingVisitor.transform(ConstantFoldingRule.java:163)
>   at 
> org.apache.hyracks.algebricks.core.algebra.operators.logical.AbstractAssignOperator.acceptExpressionTransform(AbstractAssignOperator.java:67)
>   at 
> org.apache.asterix.optimizer.rules.ConstantFoldingRule.rewritePost(ConstantFoldingRule.java:150)
>   at 
> org.apache.hyracks.algebricks.core.rewriter.base.AbstractRuleController.rewriteOperatorRef(AbstractRuleController.java:125)
>   at 
> org.apache.hyracks.algebricks.core.rewriter.base.AbstractRuleController.rewriteOperatorRef(AbstractRuleController.java:99)
>   at 
> org.apache.hyracks.algebricks.core.rewriter.base.AbstractRuleController.rewriteOperatorRef(AbstractRuleController.java:99)
>   at 
> org.apache.hyracks.algebricks.compiler.rewriter.rulecontrollers.SequentialFixpointRuleController.rewriteWithRuleCollection(SequentialFixpointRuleController.java:53)
>   at 
> org.apache.hyracks.algebricks.core.rewriter.base.HeuristicOptimizer.runOptimizationSets(HeuristicOptimizer.java:95)
>   at 
> org.apache.hyracks.algebricks.core.rewriter.base.HeuristicOptimizer.optimize(HeuristicOptimizer.java:82)
>   at 
> org.apache.hyracks.algebricks.compiler.api.HeuristicCompilerFactoryBuilder$1$1.optimize(HeuristicCompilerFactoryBuilder.java:87)
>   at 
> org.apache.asterix.api.common.APIFramework.compileQuery(APIFramework.java:289)
> {code}
> On the other hand, it returned a valid result when I changed the order of the 
> two arguments, as follows:
> {code}
>   
> spatial-intersect(point("3513.27543563,978.772476107"),circle("3107.06794511,1079.71664882
>  1000.0")) 
> {code}
> {code}
> true
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (ASTERIXDB-1327) Spatial-intersect between point and circle not working correctly

2017-03-07 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-1327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs resolved ASTERIXDB-1327.
--
Resolution: Fixed

> Spatial-intersect between point and circle not working correctly
> 
>
> Key: ASTERIXDB-1327
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-1327
> Project: Apache AsterixDB
>  Issue Type: Bug
>Reporter: Steven Jacobs
>Assignee: Steven Jacobs
> Attachments: EmergencyReports.adm, UserLocationsShort.adm
>
>
> The first query returns a large set of results. The second returns nothing.
> {noformat}
> for $report in dataset EmergencyReports
> for $location in dataset UserLocations
> where $report.emergencyType = "earthquake"
> let $circle := create-circle($location.location,.1)
> where spatial-intersect($report.impactZone, $circle)
> return {
>   "user at":$location.location,
>   "report at":$report.impactZone
> }
> {noformat}
> {noformat}
> for $report in dataset EmergencyReports
> for $location in dataset UserLocations
> where $report.emergencyType = "earthquake"
> where spatial-intersect($report.impactZone, $location.location)
> return {
>   "user at":$location.location,
>   "report at":$report.impactZone
> }
> {noformat}
> Here are the DDL statements. I will attach the two datasets:
> {noformat}
> drop dataverse channels if exists;
> create dataverse channels;
> use dataverse channels;
> create type UserLocation as closed {
>   recordId: uuid,
>   location: point,
>   user-id: string,
>   timeoffset: float
> }
> create type EmergencyReport as closed {
>   reportId: uuid,
>   severity: int,
>   impactZone: circle,
>   timeoffset: float,
>   duration: float,
>   message: string,
>   emergencyType: string
> }
> create dataset UserLocations(UserLocation)
> primary key recordId autogenerated;
> create dataset EmergencyReports(EmergencyReport)
> primary key reportId autogenerated;
> load dataset UserLocations using localfs 
> (("path"="asterix_nc1:///Users/stevenjacobs/Desktop/EmergencyDataset/UserLocationsShort.adm"),("format"="adm"));
> load dataset EmergencyReports using 
> localfs(("path"="asterix_nc1:///Users/stevenjacobs/Desktop/EmergencyDataset/EmergencyReports.adm"),("format"="adm"));
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (ASTERIXDB-1809) Create unit test for ActiveJobNotificationHandler

2017-02-23 Thread Steven Jacobs (JIRA)
Steven Jacobs created ASTERIXDB-1809:


 Summary: Create unit test for ActiveJobNotificationHandler
 Key: ASTERIXDB-1809
 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1809
 Project: Apache AsterixDB
  Issue Type: Improvement
Reporter: Steven Jacobs


Create a unit test for ActiveJobNotificationHandler



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (ASTERIXDB-1747) Improve Asterix Capabilities to pre-distribute jobs

2017-02-16 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-1747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs resolved ASTERIXDB-1747.
--
Resolution: Fixed

> Improve Asterix Capabilities to pre-distribute jobs
> ---
>
> Key: ASTERIXDB-1747
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-1747
> Project: Apache AsterixDB
>  Issue Type: Bug
>Reporter: Steven Jacobs
>Assignee: Steven Jacobs
>
> Currently, we are able to store the ActivityClusterGraph for a job at the NC 
> level. We need to add the following:
> Don't bother passing the bytes around when the graph is already at the NC.
> Don't serialize the bytes at the CC side when they aren't needed at the NC.
> Remove the ActivityClusterGraph from the NC when the job is no longer needed.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (ASTERIXDB-1747) Improve Asterix Capabilities to pre-distribute jobs

2016-12-07 Thread Steven Jacobs (JIRA)

[ 
https://issues.apache.org/jira/browse/ASTERIXDB-1747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15730119#comment-15730119
 ] 

Steven Jacobs commented on ASTERIXDB-1747:
--

Sounds good to me. I actually have all parts of this issue (including the new 
interfaces for distribute and destroy) completed now, but I am currently 
working on cleaning it up a bit and getting the corresponding BAD changes in 
place. I'll add you to the code review when ready. Thanks!

> Improve Asterix Capabilities to pre-distribute jobs
> ---
>
> Key: ASTERIXDB-1747
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-1747
> Project: Apache AsterixDB
>  Issue Type: Bug
>Reporter: Steven Jacobs
>Assignee: Steven Jacobs
>
> Currently, we are able to store the ActivityClusterGraph for a job at the NC 
> level. We need to add the following:
> Don't bother passing the bytes around when the graph is already at the NC.
> Don't serialize the bytes at the CC side when they aren't needed at the NC.
> Remove the ActivityClusterGraph from the NC when the job is no longer needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (ASTERIXDB-1617) Match user query for nonPure functions

2016-11-29 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-1617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs closed ASTERIXDB-1617.

Resolution: Fixed

> Match user query for nonPure functions
> --
>
> Key: ASTERIXDB-1617
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-1617
> Project: Apache AsterixDB
>  Issue Type: Bug
>Reporter: Steven Jacobs
>Assignee: Steven Jacobs
>
> Currently we are taking liberties with the execution and placement of nonPure 
> functions, which doesn't align completely with what the user query contains.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (ASTERIXDB-1608) Unexpected result from a join with a Non-Pure function

2016-11-29 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-1608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs closed ASTERIXDB-1608.

Resolution: Fixed

> Unexpected result from a join with a Non-Pure function
> --
>
> Key: ASTERIXDB-1608
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-1608
> Project: Apache AsterixDB
>  Issue Type: Bug
>Reporter: Preston Carman
>Assignee: Steven Jacobs
>Priority: Minor
>
> The following query should give a unique UUID for each record. The optimizer 
> pushes the create-uuid() onto one branch, resulting in three UUIDs 
> instead of nine.
> Query:
> for $x in range(1, 3)
> for $y in range(1, 3)
> return {"id": create-uuid(), "x": $x};
> Result:
> { "id": uuid("a5b5cc1e-02e1-8428-cc8e-dcf75d2d4da1"), "x": 1 }
> { "id": uuid("a5b5cc1e-02e1-8428-cc8e-dcf75d2d4da1"), "x": 1 }
> { "id": uuid("a5b5cc1e-02e1-8428-cc8e-dcf75d2d4da1"), "x": 1 }
> { "id": uuid("a5b5cc1e-02e1-8428-cc8e-dcf75d2d4da2"), "x": 2 }
> { "id": uuid("a5b5cc1e-02e1-8428-cc8e-dcf75d2d4da2"), "x": 2 }
> { "id": uuid("a5b5cc1e-02e1-8428-cc8e-dcf75d2d4da2"), "x": 2 }
> { "id": uuid("a5b5cc1e-02e1-8428-cc8e-dcf75d2d4da3"), "x": 3 }
> { "id": uuid("a5b5cc1e-02e1-8428-cc8e-dcf75d2d4da3"), "x": 3 }
> { "id": uuid("a5b5cc1e-02e1-8428-cc8e-dcf75d2d4da3"), "x": 3 }
> Expected Result:
> { "id": uuid("a5b5cc1e-02e1-8428-cc8e-dcf75d2d4da1"), "x": 1 }
> { "id": uuid("a5b5cc1e-02e1-8428-cc8e-dcf75d2d4da2"), "x": 1 }
> { "id": uuid("a5b5cc1e-02e1-8428-cc8e-dcf75d2d4da3"), "x": 1 }
> { "id": uuid("a5b5cc1e-02e1-8428-cc8e-dcf75d2d4da4"), "x": 2 }
> { "id": uuid("a5b5cc1e-02e1-8428-cc8e-dcf75d2d4da5"), "x": 2 }
> { "id": uuid("a5b5cc1e-02e1-8428-cc8e-dcf75d2d4da6"), "x": 2 }
> { "id": uuid("a5b5cc1e-02e1-8428-cc8e-dcf75d2d4da7"), "x": 3 }
> { "id": uuid("a5b5cc1e-02e1-8428-cc8e-dcf75d2d4da8"), "x": 3 }
> { "id": uuid("a5b5cc1e-02e1-8428-cc8e-dcf75d2d4da9"), "x": 3 }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ASTERIXDB-1695) More entities in the default dataverse

2016-10-24 Thread Steven Jacobs (JIRA)

[ 
https://issues.apache.org/jira/browse/ASTERIXDB-1695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15602554#comment-15602554
 ] 

Steven Jacobs commented on ASTERIXDB-1695:
--

I looked at this more, and the issue that I had was actually unrelated (and 
fixed now).

> More entities in the default dataverse
> --
>
> Key: ASTERIXDB-1695
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-1695
> Project: Apache AsterixDB
>  Issue Type: Bug
>Reporter: Yingyi Bu
>Assignee: Abdullah Alamoudi
>
> The following query works fine.
> {noformat}
> DROP function foo if exists;
> CREATE function foo(){
>   1
> }
> SELECT Default.foo();
> {noformat}
> But the next one doesn't work:
> {noformat}
> DROP function foo if exists;
> CREATE function foo(){
>   1
> }
> SELECT foo();
> {noformat}
> I got this error message:
> "msg": "function null.foo@0 is not defined",
> I'm guessing that it is related to the default dataverse change.
> I guess we need to verify all metadata entities can work under the default 
> dataverse.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (ASTERIXDB-1573) Refactor extensibility of re-write rules

2016-10-19 Thread Steven Jacobs (JIRA)

 [ 
https://issues.apache.org/jira/browse/ASTERIXDB-1573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Jacobs reassigned ASTERIXDB-1573:


Assignee: Steven Jacobs  (was: Yingyi Bu)

> Refactor extensibility of re-write rules
> 
>
> Key: ASTERIXDB-1573
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-1573
> Project: Apache AsterixDB
>  Issue Type: Improvement
>Reporter: Abdullah Alamoudi
>Assignee: Steven Jacobs
>
> Currently, the extension mechanism doesn't provide a good generic way of 
> extending re-write rules.
> A provider should be introduced to improve this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ASTERIXDB-1695) More entities in the default dataverse

2016-10-19 Thread Steven Jacobs (JIRA)

[ 
https://issues.apache.org/jira/browse/ASTERIXDB-1695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15589421#comment-15589421
 ] 

Steven Jacobs commented on ASTERIXDB-1695:
--

I think we have seen a similar bug in the BAD extension. Can you make sure this 
covers extension metadata as well?

> More entities in the default dataverse
> --
>
> Key: ASTERIXDB-1695
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-1695
> Project: Apache AsterixDB
>  Issue Type: Bug
>Reporter: Yingyi Bu
>Assignee: Abdullah Alamoudi
>
> The following query works fine.
> {noformat}
> DROP function foo if exists;
> CREATE function foo(){
>   1
> }
> SELECT Default.foo();
> {noformat}
> But the next one doesn't work:
> {noformat}
> DROP function foo if exists;
> CREATE function foo(){
>   1
> }
> SELECT foo();
> {noformat}
> I got this error message:
> "msg": "function null.foo@0 is not defined",
> I'm guessing that it is related to the default dataverse change.
> I guess we need to verify all metadata entities can work under the default 
> dataverse.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (ASTERIXDB-1617) Match user query for nonPure functions

2016-08-29 Thread Steven Jacobs (JIRA)
Steven Jacobs created ASTERIXDB-1617:


 Summary: Match user query for nonPure functions
 Key: ASTERIXDB-1617
 URL: https://issues.apache.org/jira/browse/ASTERIXDB-1617
 Project: Apache AsterixDB
  Issue Type: Bug
Reporter: Steven Jacobs
Assignee: Steven Jacobs


Currently we are taking liberties with the execution and placement of nonPure 
functions, which doesn't align completely with what the user query contains.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ASTERIXDB-1606) Optimize "last value" query

2016-08-23 Thread Steven Jacobs (JIRA)

[ 
https://issues.apache.org/jira/browse/ASTERIXDB-1606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15433653#comment-15433653
 ] 

Steven Jacobs commented on ASTERIXDB-1606:
--

This will be great to have in master. What are your thoughts on (1)?

> Optimize "last value" query
> ---
>
> Key: ASTERIXDB-1606
> URL: https://issues.apache.org/jira/browse/ASTERIXDB-1606
> Project: Apache AsterixDB
>  Issue Type: Bug
>Reporter: Steven Jacobs
>Priority: Minor
>
> We need to work on optimizing queries looking for a "most recent" or 
> "greatest" value of a given field. 
> As an example, consider an append-only dataset filled with user locations 
> over time, and suppose we want to know the user's last known location. 
> Currently, we would need to do this as:
> order by $record.timeStamp
> limit 1
> We could improve this in two ways:
> 1) Improve usability by providing an alias syntax for users, e.g. "where 
> greatest timeStamp"
> 2) Improve the compilation of such a job to only retrieve a single record.
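
A minimal SQL++ sketch of the query shape described above, with dataset and field 
names assumed for illustration; note that the ordering must be descending to pick 
the most recent record:

{code}
-- last known location for one user (names are illustrative)
select value l.location
from UserLocations l
where l.userName = "some_user"
order by l.timeStamp desc
limit 1;
{code}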



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

