[jira] [Updated] (IGNITE-22448) Sql. Incorrect error message when aggregate function is called with UUID type

2024-06-07 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22448:
--
Description: 
When the AVG function is called with a DATE argument, the validator returns the 
following error:

{noformat}
statement error: Cannot apply 'AVG' to arguments of type 'AVG(<DATE>)'. 
Supported form(s): 'AVG(<NUMERIC>)'
SELECT AVG('2011-01-01'::DATE)
{noformat}

But when AVG is called with UUID we get this: 

{noformat}
statement error: Unable to optimize plan due to internal error
SELECT AVG('c4a0327c-44be-416d-ae90-75c05079789f'::UUID)
{noformat}

This happens because the query is transformed into the following plan:

{noformat}
SELECT AVG('c4a0327c-44be-416d-ae90-75c05079789f' :: UUID)

LogicalAggregate(group=[{}], EXPR$0=[AVG($0)]), id = 1048
  LogicalProject($f0=[CAST(_UTF-8'c4a0327c-44be-416d-ae90-75c05079789f'):UUID 
NOT NULL]), id = 1047
LogicalValues(tuples=[[{ 0 }]]), id = 1044
{noformat}

Underlying cause:

{noformat}
java.lang.AssertionError: SUM is not supported for ANY
at 
org.apache.ignite.internal.sql.engine.exec.exp.agg.Accumulators.unsupportedAggregateFunction(Accumulators.java:809)
at 
org.apache.ignite.internal.sql.engine.exec.exp.agg.Accumulators.sumFactory(Accumulators.java:138)
at 
org.apache.ignite.internal.sql.engine.exec.exp.agg.Accumulators.accumulatorFunctionFactory(Accumulators.java:83)
at 
org.apache.ignite.internal.sql.engine.exec.exp.agg.Accumulators.accumulatorFactory(Accumulators.java:66)
at 
org.apache.ignite.internal.sql.engine.util.PlanUtils.addAccumulatorFields(PlanUtils.java:130)
at 
org.apache.ignite.internal.sql.engine.util.PlanUtils.createHashAggRowType(PlanUtils.java:118)
at 
org.apache.ignite.internal.sql.engine.rel.agg.IgniteMapHashAggregate.deriveRowType(IgniteMapHashAggregate.java:93)
at 
org.apache.calcite.rel.AbstractRelNode.getRowType(AbstractRelNode.java:174)
at 
org.apache.ignite.internal.sql.engine.rel.agg.MapReduceAggregates.buildAggregates(MapReduceAggregates.java:196)
at 
org.apache.ignite.internal.sql.engine.rule.HashAggregateConverterRule$MapReduceHashAggregateConverterRule.convert(HashAggregateConverterRule.java:157)
at 
org.apache.ignite.internal.sql.engine.rule.HashAggregateConverterRule$MapReduceHashAggregateConverterRule.convert(HashAggregateConverterRule.java:91)
at 
org.apache.ignite.internal.sql.engine.rule.AbstractIgniteConverterRule.convert(AbstractIgniteConverterRule.java:51)

{noformat}

*Expected behaviour*
It would be better to return a type error instead of an internal error.
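
A minimal sketch of the direction this suggests, assuming a pre-check in the accumulator factory; the class and method below are hypothetical and are not the actual Ignite code:

{code:java}
import org.apache.calcite.sql.type.SqlTypeName;

// Hypothetical sketch: report a descriptive type error instead of the
// "SUM is not supported for ANY" AssertionError that currently surfaces as
// "Unable to optimize plan due to internal error".
final class AccumulatorsSketch {
    static RuntimeException unsupportedAggregateFunction(String func, SqlTypeName argType) {
        return new IllegalArgumentException(
                func + " is not supported for arguments of type " + argType);
    }

    static void checkSumSupported(SqlTypeName argType) {
        switch (argType) {
            case TINYINT: case SMALLINT: case INTEGER: case BIGINT:
            case FLOAT: case REAL: case DOUBLE: case DECIMAL:
                return; // numeric argument types are supported
            default:
                // e.g. a UUID argument (typed as ANY in the plan) ends up here
                throw unsupportedAggregateFunction("SUM", argType);
        }
    }
}
{code}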




  was:
When the AVG function is called with a DATE argument, the validator returns the 
following error:

{noformat}
statement error: Cannot apply 'AVG' to arguments of type 'AVG(<DATE>)'. 
Supported form(s): 'AVG(<NUMERIC>)'
SELECT AVG('2011-01-01'::DATE)
{noformat}

But when AVG is called with UUID we get this: 

{noformat}
statement error: Unable to optimize plan due to internal error
SELECT AVG('c4a0327c-44be-416d-ae90-75c05079789f'::UUID)
{noformat}

This happens because the query is transformed into the following plan:

{noformat}
SELECT AVG('c4a0327c-44be-416d-ae90-75c05079789f' :: UUID)

LogicalAggregate(group=[{}], EXPR$0=[AVG($0)]), id = 1048
  LogicalProject($f0=[CAST(_UTF-8'c4a0327c-44be-416d-ae90-75c05079789f'):UUID 
NOT NULL]), id = 1047
LogicalValues(tuples=[[{ 0 }]]), id = 1044
{noformat}

Underlying cause:

{noformat}
java.lang.AssertionError: SUM is not supported for ANY
at 
org.apache.ignite.internal.sql.engine.exec.exp.agg.Accumulators.unsupportedAggregateFunction(Accumulators.java:809)
at 
org.apache.ignite.internal.sql.engine.exec.exp.agg.Accumulators.sumFactory(Accumulators.java:138)
at 
org.apache.ignite.internal.sql.engine.exec.exp.agg.Accumulators.accumulatorFunctionFactory(Accumulators.java:83)
at 
org.apache.ignite.internal.sql.engine.exec.exp.agg.Accumulators.accumulatorFactory(Accumulators.java:66)
at 
org.apache.ignite.internal.sql.engine.util.PlanUtils.addAccumulatorFields(PlanUtils.java:130)
at 
org.apache.ignite.internal.sql.engine.util.PlanUtils.createHashAggRowType(PlanUtils.java:118)
at 
org.apache.ignite.internal.sql.engine.rel.agg.IgniteMapHashAggregate.deriveRowType(IgniteMapHashAggregate.java:93)
at 
org.apache.calcite.rel.AbstractRelNode.getRowType(AbstractRelNode.java:174)
at 
org.apache.ignite.internal.sql.engine.rel.agg.MapReduceAggregates.buildAggregates(MapReduceAggregates.java:196)
at 
org.apache.ignite.internal.sql.engine.rule.HashAggregateConverterRule$MapReduceHashAggregateConverterRule.convert(HashAggregateConverterRule.java:157)
at 
org.apache.ignite.internal.sql.engine.rule.HashAggregateConverterRule$MapReduceHashAggregateConverterRule.convert(HashAggregateConverterRule.java:91)
at 
org.apache.ignite.internal.sql.engine.rule.AbstractIgniteConverterRule.convert(AbstractIgniteConverterRule.java:51

[jira] [Updated] (IGNITE-22448) Sql. Incorrect error message when aggregate function is called with UUID type

2024-06-07 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22448:
--
Description: 
When the AVG function is called with a DATE argument, the validator returns the 
following error:

{noformat}
statement error: Cannot apply 'AVG' to arguments of type 'AVG(<DATE>)'. 
Supported form(s): 'AVG(<NUMERIC>)'
SELECT AVG('2011-01-01'::DATE)
{noformat}

But when AVG is called with UUID we get this: 

{noformat}
statement error: Unable to optimize plan due to internal error
SELECT AVG('c4a0327c-44be-416d-ae90-75c05079789f'::UUID)
{noformat}

This happens because the query is transformed into the following plan:

{noformat}
SELECT AVG('c4a0327c-44be-416d-ae90-75c05079789f' :: UUID)

LogicalAggregate(group=[{}], EXPR$0=[AVG($0)]), id = 1048
  LogicalProject($f0=[CAST(_UTF-8'c4a0327c-44be-416d-ae90-75c05079789f'):UUID 
NOT NULL]), id = 1047
LogicalValues(tuples=[[{ 0 }]]), id = 1044
{noformat}

Underlying cause:

{noformat}
java.lang.AssertionError: SUM is not supported for ANY
at 
org.apache.ignite.internal.sql.engine.exec.exp.agg.Accumulators.unsupportedAggregateFunction(Accumulators.java:809)
at 
org.apache.ignite.internal.sql.engine.exec.exp.agg.Accumulators.sumFactory(Accumulators.java:138)
at 
org.apache.ignite.internal.sql.engine.exec.exp.agg.Accumulators.accumulatorFunctionFactory(Accumulators.java:83)
at 
org.apache.ignite.internal.sql.engine.exec.exp.agg.Accumulators.accumulatorFactory(Accumulators.java:66)
at 
org.apache.ignite.internal.sql.engine.util.PlanUtils.addAccumulatorFields(PlanUtils.java:130)
at 
org.apache.ignite.internal.sql.engine.util.PlanUtils.createHashAggRowType(PlanUtils.java:118)
at 
org.apache.ignite.internal.sql.engine.rel.agg.IgniteMapHashAggregate.deriveRowType(IgniteMapHashAggregate.java:93)
at 
org.apache.calcite.rel.AbstractRelNode.getRowType(AbstractRelNode.java:174)
at 
org.apache.ignite.internal.sql.engine.rel.agg.MapReduceAggregates.buildAggregates(MapReduceAggregates.java:196)
at 
org.apache.ignite.internal.sql.engine.rule.HashAggregateConverterRule$MapReduceHashAggregateConverterRule.convert(HashAggregateConverterRule.java:157)
at 
org.apache.ignite.internal.sql.engine.rule.HashAggregateConverterRule$MapReduceHashAggregateConverterRule.convert(HashAggregateConverterRule.java:91)
at 
org.apache.ignite.internal.sql.engine.rule.AbstractIgniteConverterRule.convert(AbstractIgniteConverterRule.java:51)

{noformat}



  was:
When AVG is called with DATE, the validator returns the following error:

{noformat}
statement error: Cannot apply 'AVG' to arguments of type 'AVG(<DATE>)'. 
Supported form(s): 'AVG(<NUMERIC>)'
SELECT AVG('2011-01-01'::DATE)
{noformat}

But when AVG is called with UUID we get this: 

{noformat}
statement error: Unable to optimize plan due to internal error
SELECT AVG('c4a0327c-44be-416d-ae90-75c05079789f'::UUID)
{noformat}

This happens because the query is transformed into the following plan:

{noformat}
SELECT AVG('c4a0327c-44be-416d-ae90-75c05079789f' :: UUID)

LogicalAggregate(group=[{}], EXPR$0=[AVG($0)]), id = 1048
  LogicalProject($f0=[CAST(_UTF-8'c4a0327c-44be-416d-ae90-75c05079789f'):UUID 
NOT NULL]), id = 1047
LogicalValues(tuples=[[{ 0 }]]), id = 1044
{noformat}

Underlying cause:

{noformat}
java.lang.AssertionError: SUM is not supported for ANY
at 
org.apache.ignite.internal.sql.engine.exec.exp.agg.Accumulators.unsupportedAggregateFunction(Accumulators.java:809)
at 
org.apache.ignite.internal.sql.engine.exec.exp.agg.Accumulators.sumFactory(Accumulators.java:138)
at 
org.apache.ignite.internal.sql.engine.exec.exp.agg.Accumulators.accumulatorFunctionFactory(Accumulators.java:83)
at 
org.apache.ignite.internal.sql.engine.exec.exp.agg.Accumulators.accumulatorFactory(Accumulators.java:66)
at 
org.apache.ignite.internal.sql.engine.util.PlanUtils.addAccumulatorFields(PlanUtils.java:130)
at 
org.apache.ignite.internal.sql.engine.util.PlanUtils.createHashAggRowType(PlanUtils.java:118)
at 
org.apache.ignite.internal.sql.engine.rel.agg.IgniteMapHashAggregate.deriveRowType(IgniteMapHashAggregate.java:93)
at 
org.apache.calcite.rel.AbstractRelNode.getRowType(AbstractRelNode.java:174)
at 
org.apache.ignite.internal.sql.engine.rel.agg.MapReduceAggregates.buildAggregates(MapReduceAggregates.java:196)
at 
org.apache.ignite.internal.sql.engine.rule.HashAggregateConverterRule$MapReduceHashAggregateConverterRule.convert(HashAggregateConverterRule.java:157)
at 
org.apache.ignite.internal.sql.engine.rule.HashAggregateConverterRule$MapReduceHashAggregateConverterRule.convert(HashAggregateConverterRule.java:91)
at 
org.apache.ignite.internal.sql.engine.rule.AbstractIgniteConverterRule.convert(AbstractIgniteConverterRule.java:51)

{noformat}




> Sql. Incorrect error message when aggregate function is called with UUID t

[jira] [Updated] (IGNITE-22448) Sql. Incorrect error message when aggregate function is called with UUID type

2024-06-07 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22448:
--
Description: 
When AVG is called with DATE, the validator returns the following error:

{noformat}
statement error: Cannot apply 'AVG' to arguments of type 'AVG(<DATE>)'. 
Supported form(s): 'AVG(<NUMERIC>)'
SELECT AVG('2011-01-01'::DATE)
{noformat}

But when AVG is called with UUID we get this: 

{noformat}
statement error: Unable to optimize plan due to internal error
SELECT AVG('c4a0327c-44be-416d-ae90-75c05079789f'::UUID)
{noformat}

The query is transformed into the following plan:

{noformat}
SELECT AVG('c4a0327c-44be-416d-ae90-75c05079789f' :: UUID)

LogicalAggregate(group=[{}], EXPR$0=[AVG($0)]), id = 1048
  LogicalProject($f0=[CAST(_UTF-8'c4a0327c-44be-416d-ae90-75c05079789f'):UUID 
NOT NULL]), id = 1047
LogicalValues(tuples=[[{ 0 }]]), id = 1044
{noformat}

Underlying cause:

{noformat}
java.lang.AssertionError: SUM is not supported for ANY
at 
org.apache.ignite.internal.sql.engine.exec.exp.agg.Accumulators.unsupportedAggregateFunction(Accumulators.java:809)
at 
org.apache.ignite.internal.sql.engine.exec.exp.agg.Accumulators.sumFactory(Accumulators.java:138)
at 
org.apache.ignite.internal.sql.engine.exec.exp.agg.Accumulators.accumulatorFunctionFactory(Accumulators.java:83)
at 
org.apache.ignite.internal.sql.engine.exec.exp.agg.Accumulators.accumulatorFactory(Accumulators.java:66)
at 
org.apache.ignite.internal.sql.engine.util.PlanUtils.addAccumulatorFields(PlanUtils.java:130)
at 
org.apache.ignite.internal.sql.engine.util.PlanUtils.createHashAggRowType(PlanUtils.java:118)
at 
org.apache.ignite.internal.sql.engine.rel.agg.IgniteMapHashAggregate.deriveRowType(IgniteMapHashAggregate.java:93)
at 
org.apache.calcite.rel.AbstractRelNode.getRowType(AbstractRelNode.java:174)
at 
org.apache.ignite.internal.sql.engine.rel.agg.MapReduceAggregates.buildAggregates(MapReduceAggregates.java:196)
at 
org.apache.ignite.internal.sql.engine.rule.HashAggregateConverterRule$MapReduceHashAggregateConverterRule.convert(HashAggregateConverterRule.java:157)
at 
org.apache.ignite.internal.sql.engine.rule.HashAggregateConverterRule$MapReduceHashAggregateConverterRule.convert(HashAggregateConverterRule.java:91)
at 
org.apache.ignite.internal.sql.engine.rule.AbstractIgniteConverterRule.convert(AbstractIgniteConverterRule.java:51)

{noformat}



  was:
When AVG is called with DATE, the validator returns the following error:

{noformat}
statement error: Cannot apply 'AVG' to arguments of type 'AVG(<DATE>)'. 
Supported form(s): 'AVG(<NUMERIC>)'
SELECT AVG('2011-01-01'::DATE)
{noformat}

But when AVG is called with UUID we get this: 

{noformat}
statement error: Unable to optimize plan due to internal error
SELECT AVG('c4a0327c-44be-416d-ae90-75c05079789f'::UUID)
{noformat}

Underlying cause:

{noformat}
java.lang.AssertionError: SUM is not supported for ANY
at 
org.apache.ignite.internal.sql.engine.exec.exp.agg.Accumulators.unsupportedAggregateFunction(Accumulators.java:809)
at 
org.apache.ignite.internal.sql.engine.exec.exp.agg.Accumulators.sumFactory(Accumulators.java:138)
at 
org.apache.ignite.internal.sql.engine.exec.exp.agg.Accumulators.accumulatorFunctionFactory(Accumulators.java:83)
at 
org.apache.ignite.internal.sql.engine.exec.exp.agg.Accumulators.accumulatorFactory(Accumulators.java:66)
at 
org.apache.ignite.internal.sql.engine.util.PlanUtils.addAccumulatorFields(PlanUtils.java:130)
at 
org.apache.ignite.internal.sql.engine.util.PlanUtils.createHashAggRowType(PlanUtils.java:118)
at 
org.apache.ignite.internal.sql.engine.rel.agg.IgniteMapHashAggregate.deriveRowType(IgniteMapHashAggregate.java:93)
at 
org.apache.calcite.rel.AbstractRelNode.getRowType(AbstractRelNode.java:174)
at 
org.apache.ignite.internal.sql.engine.rel.agg.MapReduceAggregates.buildAggregates(MapReduceAggregates.java:196)
at 
org.apache.ignite.internal.sql.engine.rule.HashAggregateConverterRule$MapReduceHashAggregateConverterRule.convert(HashAggregateConverterRule.java:157)
at 
org.apache.ignite.internal.sql.engine.rule.HashAggregateConverterRule$MapReduceHashAggregateConverterRule.convert(HashAggregateConverterRule.java:91)
at 
org.apache.ignite.internal.sql.engine.rule.AbstractIgniteConverterRule.convert(AbstractIgniteConverterRule.java:51)

{noformat}




> Sql. Incorrect error message when aggregate function is called with UUID type
> -
>
> Key: IGNITE-22448
> URL: https://issues.apache.org/jira/browse/IGNITE-22448
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>    Reporter: Maksim Zhuravkov
>Priorit

[jira] [Updated] (IGNITE-22448) Sql. Incorrect error message when aggregate function is called with UUID type

2024-06-07 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22448:
--
Description: 
When AVG is called with DATE, the validator returns the following error:

{noformat}
statement error: Cannot apply 'AVG' to arguments of type 'AVG(<DATE>)'. 
Supported form(s): 'AVG(<NUMERIC>)'
SELECT AVG('2011-01-01'::DATE)
{noformat}

But when AVG is called with UUID we get this: 

{noformat}
statement error: Unable to optimize plan due to internal error
SELECT AVG('c4a0327c-44be-416d-ae90-75c05079789f'::UUID)
{noformat}

This happens because the query is transformed into the following plan:

{noformat}
SELECT AVG('c4a0327c-44be-416d-ae90-75c05079789f' :: UUID)

LogicalAggregate(group=[{}], EXPR$0=[AVG($0)]), id = 1048
  LogicalProject($f0=[CAST(_UTF-8'c4a0327c-44be-416d-ae90-75c05079789f'):UUID 
NOT NULL]), id = 1047
LogicalValues(tuples=[[{ 0 }]]), id = 1044
{noformat}

Underlying cause:

{noformat}
java.lang.AssertionError: SUM is not supported for ANY
at 
org.apache.ignite.internal.sql.engine.exec.exp.agg.Accumulators.unsupportedAggregateFunction(Accumulators.java:809)
at 
org.apache.ignite.internal.sql.engine.exec.exp.agg.Accumulators.sumFactory(Accumulators.java:138)
at 
org.apache.ignite.internal.sql.engine.exec.exp.agg.Accumulators.accumulatorFunctionFactory(Accumulators.java:83)
at 
org.apache.ignite.internal.sql.engine.exec.exp.agg.Accumulators.accumulatorFactory(Accumulators.java:66)
at 
org.apache.ignite.internal.sql.engine.util.PlanUtils.addAccumulatorFields(PlanUtils.java:130)
at 
org.apache.ignite.internal.sql.engine.util.PlanUtils.createHashAggRowType(PlanUtils.java:118)
at 
org.apache.ignite.internal.sql.engine.rel.agg.IgniteMapHashAggregate.deriveRowType(IgniteMapHashAggregate.java:93)
at 
org.apache.calcite.rel.AbstractRelNode.getRowType(AbstractRelNode.java:174)
at 
org.apache.ignite.internal.sql.engine.rel.agg.MapReduceAggregates.buildAggregates(MapReduceAggregates.java:196)
at 
org.apache.ignite.internal.sql.engine.rule.HashAggregateConverterRule$MapReduceHashAggregateConverterRule.convert(HashAggregateConverterRule.java:157)
at 
org.apache.ignite.internal.sql.engine.rule.HashAggregateConverterRule$MapReduceHashAggregateConverterRule.convert(HashAggregateConverterRule.java:91)
at 
org.apache.ignite.internal.sql.engine.rule.AbstractIgniteConverterRule.convert(AbstractIgniteConverterRule.java:51)

{noformat}



  was:
When AVG is called with DATE, the validator returns the following error:

{noformat}
statement error: Cannot apply 'AVG' to arguments of type 'AVG(<DATE>)'. 
Supported form(s): 'AVG(<NUMERIC>)'
SELECT AVG('2011-01-01'::DATE)
{noformat}

But when AVG is called with UUID we get this: 

{noformat}
statement error: Unable to optimize plan due to internal error
SELECT AVG('c4a0327c-44be-416d-ae90-75c05079789f'::UUID)
{noformat}

The query is transformed into the following plan:

{noformat}
SELECT AVG('c4a0327c-44be-416d-ae90-75c05079789f' :: UUID)

LogicalAggregate(group=[{}], EXPR$0=[AVG($0)]), id = 1048
  LogicalProject($f0=[CAST(_UTF-8'c4a0327c-44be-416d-ae90-75c05079789f'):UUID 
NOT NULL]), id = 1047
LogicalValues(tuples=[[{ 0 }]]), id = 1044
{noformat}

Underlying cause:

{noformat}
java.lang.AssertionError: SUM is not supported for ANY
at 
org.apache.ignite.internal.sql.engine.exec.exp.agg.Accumulators.unsupportedAggregateFunction(Accumulators.java:809)
at 
org.apache.ignite.internal.sql.engine.exec.exp.agg.Accumulators.sumFactory(Accumulators.java:138)
at 
org.apache.ignite.internal.sql.engine.exec.exp.agg.Accumulators.accumulatorFunctionFactory(Accumulators.java:83)
at 
org.apache.ignite.internal.sql.engine.exec.exp.agg.Accumulators.accumulatorFactory(Accumulators.java:66)
at 
org.apache.ignite.internal.sql.engine.util.PlanUtils.addAccumulatorFields(PlanUtils.java:130)
at 
org.apache.ignite.internal.sql.engine.util.PlanUtils.createHashAggRowType(PlanUtils.java:118)
at 
org.apache.ignite.internal.sql.engine.rel.agg.IgniteMapHashAggregate.deriveRowType(IgniteMapHashAggregate.java:93)
at 
org.apache.calcite.rel.AbstractRelNode.getRowType(AbstractRelNode.java:174)
at 
org.apache.ignite.internal.sql.engine.rel.agg.MapReduceAggregates.buildAggregates(MapReduceAggregates.java:196)
at 
org.apache.ignite.internal.sql.engine.rule.HashAggregateConverterRule$MapReduceHashAggregateConverterRule.convert(HashAggregateConverterRule.java:157)
at 
org.apache.ignite.internal.sql.engine.rule.HashAggregateConverterRule$MapReduceHashAggregateConverterRule.convert(HashAggregateConverterRule.java:91)
at 
org.apache.ignite.internal.sql.engine.rule.AbstractIgniteConverterRule.convert(AbstractIgniteConverterRule.java:51)

{noformat}




> Sql. Incorrect error message when aggregate function is called with UUID t

[jira] [Created] (IGNITE-22448) Sql. Incorrect error message when aggregate function is called with UUID type

2024-06-07 Thread Maksim Zhuravkov (Jira)
Maksim Zhuravkov created IGNITE-22448:
-

 Summary: Sql. Incorrect error message when aggregate function is 
called with UUID type
 Key: IGNITE-22448
 URL: https://issues.apache.org/jira/browse/IGNITE-22448
 Project: Ignite
  Issue Type: Bug
  Components: sql
Reporter: Maksim Zhuravkov


When AVG is called with DATE, the validator returns the following error:

{noformat}
statement error: Cannot apply 'AVG' to arguments of type 'AVG(<DATE>)'. 
Supported form(s): 'AVG(<NUMERIC>)'
SELECT AVG('2011-01-01'::DATE)
{noformat}

But when AVG is called with UUID we get this: 

{noformat}
statement error: Unable to optimize plan due to internal error
SELECT AVG('c4a0327c-44be-416d-ae90-75c05079789f'::UUID)
{noformat}

Underlying cause:

{noformat}
java.lang.AssertionError: SUM is not supported for ANY
at 
org.apache.ignite.internal.sql.engine.exec.exp.agg.Accumulators.unsupportedAggregateFunction(Accumulators.java:809)
at 
org.apache.ignite.internal.sql.engine.exec.exp.agg.Accumulators.sumFactory(Accumulators.java:138)
at 
org.apache.ignite.internal.sql.engine.exec.exp.agg.Accumulators.accumulatorFunctionFactory(Accumulators.java:83)
at 
org.apache.ignite.internal.sql.engine.exec.exp.agg.Accumulators.accumulatorFactory(Accumulators.java:66)
at 
org.apache.ignite.internal.sql.engine.util.PlanUtils.addAccumulatorFields(PlanUtils.java:130)
at 
org.apache.ignite.internal.sql.engine.util.PlanUtils.createHashAggRowType(PlanUtils.java:118)
at 
org.apache.ignite.internal.sql.engine.rel.agg.IgniteMapHashAggregate.deriveRowType(IgniteMapHashAggregate.java:93)
at 
org.apache.calcite.rel.AbstractRelNode.getRowType(AbstractRelNode.java:174)
at 
org.apache.ignite.internal.sql.engine.rel.agg.MapReduceAggregates.buildAggregates(MapReduceAggregates.java:196)
at 
org.apache.ignite.internal.sql.engine.rule.HashAggregateConverterRule$MapReduceHashAggregateConverterRule.convert(HashAggregateConverterRule.java:157)
at 
org.apache.ignite.internal.sql.engine.rule.HashAggregateConverterRule$MapReduceHashAggregateConverterRule.convert(HashAggregateConverterRule.java:91)
at 
org.apache.ignite.internal.sql.engine.rule.AbstractIgniteConverterRule.convert(AbstractIgniteConverterRule.java:51)

{noformat}





--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22447) Sql. Numeric aggregate functions accept VARCHAR types

2024-06-07 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22447:
--
Description: 
Numeric aggregate functions accept character string values due to implicit 
casts added by type coercion:

{code:java}
SELECT AVG('100')
> Gets transformed into
SELECT AVG("100"::DECIMAL(MAX_PREC, MAX_SCALE)) 
{code}

*Explanation*
This particular cast is added because the AVG function uses FamilyOperandTypeChecker, 
which always coerces its arguments if possible (it calls 
TypeCoercion::builtinFunctionCoercion, which in turn calls 
TypeCoercion::implicitCast(t1, t2)).

*Expected behaviour*
Aggregate functions that accept only numeric types should be rejected by the 
validator, when called with arguments of other types.
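
A minimal sketch of the stricter check this implies, assuming Calcite's SqlTypeUtil is available; it is only an illustration of rejecting non-numeric arguments instead of coercing them, not the actual Ignite operand checker:

{code:java}
import org.apache.calcite.rel.type.RelDataType;
import org.apache.calcite.sql.type.SqlTypeUtil;

// Illustrative only: a strict numeric-operand check. FamilyOperandTypeChecker would
// instead let type coercion insert an implicit CAST ('100' -> DECIMAL), which is
// exactly what this ticket wants to avoid.
final class StrictNumericOperandCheck {
    static void checkNumericArgument(String functionName, RelDataType argType) {
        if (!SqlTypeUtil.isNumeric(argType)) {
            throw new IllegalArgumentException(functionName
                    + " expects a numeric argument, got " + argType.getSqlTypeName());
        }
    }
}
{code}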


  was:
Numeric aggregate functions accept character string values due to implicit 
casts added by type coercion:

{code:java}
SELECT AVG('100')
> Gets transformed into
SELECT AVG("100"::DECIMAL(MAX_PREC, MAX_SCALE)) 
{code}

*Explanation*
This particular cast is added because the AVG function uses FamilyOperandTypeChecker, 
which always coerces its arguments if possible.

*Expected behaviour*
Aggregate functions that accept only numeric types should be rejected by the 
validator, when called with arguments of other types.



> Sql. Numeric aggregate functions accept VARCHAR types
> -
>
> Key: IGNITE-22447
> URL: https://issues.apache.org/jira/browse/IGNITE-22447
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
>
> Numeric aggregate functions accept character string values due to implicit 
> casts added by type coercion:
> {code:java}
> SELECT AVG('100')
> > Gets transformed into
> SELECT AVG("100"::DECIMAL(MAX_PREC, MAX_SCALE)) 
> {code}
> *Explanation*
> This particular cast is added because the AVG function uses 
> FamilyOperandTypeChecker, which always coerces its arguments if possible (it 
> calls TypeCoercion::builtinFunctionCoercion, which in turn calls 
> TypeCoercion::implicitCast(t1, t2)).
> *Expected behaviour*
> Aggregate functions that accept only numeric types should be rejected by the 
> validator, when called with arguments of other types.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22447) Sql. Numeric aggregate functions accept VARCHAR types

2024-06-07 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22447:
--
Description: 
Numeric aggregate functions accept character string values due to implicit 
casts added by type coercion:

{code:java}
SELECT AVG('100')
> Gets transformed into
SELECT AVG("100"::DECIMAL(MAX_PREC, MAX_SCALE)) 
{code}

*Explanation*
This particular cast is added because the AVG function uses FamilyOperandTypeChecker, 
which always coerces its arguments if possible.

*Expected behaviour*
Aggregate functions that accept only numeric types should be rejected by the 
validator, when called with arguments of other types.


  was:
Numeric aggregate functions accept character string values due to implicit 
casts added by type coercion:

{code:java}
SELECT AVG('100')
> Gets transformed into
SELECT AVG("100"::DECIMAL(MAX_PREC, MAX_SCALE)) 
{code}

*Expected behaviour*
Aggregate functions that accept only numeric types should be rejected by the 
validator, when called with arguments of other types.



> Sql. Numeric aggregate functions accept VARCHAR types
> -
>
> Key: IGNITE-22447
> URL: https://issues.apache.org/jira/browse/IGNITE-22447
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
>
> Numeric aggregate functions accept character string values due to implicit 
> casts added by type coercion:
> {code:java}
> SELECT AVG('100')
> > Gets transformed into
> SELECT AVG("100"::DECIMAL(MAX_PREC, MAX_SCALE)) 
> {code}
> *Explanation*
> This particular cast is added because the AVG function uses 
> FamilyOperandTypeChecker, which always coerces its arguments if possible.
> *Expected behaviour*
> Aggregate functions that accept only numeric types should be rejected by the 
> validator, when called with arguments of other types.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-22447) Sql. Numeric aggregate functions accept VARCHAR types

2024-06-07 Thread Maksim Zhuravkov (Jira)
Maksim Zhuravkov created IGNITE-22447:
-

 Summary: Sql. Numeric aggregate functions accept VARCHAR types
 Key: IGNITE-22447
 URL: https://issues.apache.org/jira/browse/IGNITE-22447
 Project: Ignite
  Issue Type: Bug
  Components: sql
Reporter: Maksim Zhuravkov


Numeric aggregate functions accept character string values due to implicit 
casts added by type coercion:

{code:java}
SELECT AVG('100')
> Gets transformed into
SELECT AVG("100"::DECIMAL(MAX_PREC, MAX_SCALE)) 
{code}

*Expected behaviour*
Aggregate functions that accept only numeric types should be rejected by the 
validator, when called with arguments of other types.




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-22403) Algorithm improvement for kafka topic partition distribution

2024-06-06 Thread Maksim Davydov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-22403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17852832#comment-17852832
 ] 

Maksim Davydov commented on IGNITE-22403:
-

https://ci2.ignite.apache.org/buildConfiguration/IgniteExtensions_Tests_Cdc/7902745?buildTab=overview=false=false=true=false

> Algorithm improvement for kafka topic partition distribution
> 
>
> Key: IGNITE-22403
> URL: https://issues.apache.org/jira/browse/IGNITE-22403
> Project: Ignite
>  Issue Type: Improvement
>  Components: extensions
>    Reporter: Maksim Davydov
>Assignee: Maksim Davydov
>Priority: Major
>  Labels: IEP-59, ise
>
> Distribution of partitions from a Kafka topic over the requested threads in 
> AbstractKafkaToIgniteCdcStreamer (CDC Kafka to Ignite) is not uniform. 
> There may be cases with unequal load on the threads that read data from 
> Kafka, which can result in a bottleneck and slow down the whole 
> Kafka-based CDC process.
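
For illustration only (this is not the extension's actual algorithm), a round-robin assignment keeps per-thread partition counts within one of each other:

{code:java}
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of a uniform, round-robin assignment of Kafka partitions
// to consumer threads; partition and thread counts below are example values.
final class PartitionDistribution {
    static List<List<Integer>> assign(int partitions, int threads) {
        List<List<Integer>> assignment = new ArrayList<>();
        for (int t = 0; t < threads; t++) {
            assignment.add(new ArrayList<>());
        }
        for (int p = 0; p < partitions; p++) {
            assignment.get(p % threads).add(p);
        }
        return assignment;
    }

    public static void main(String[] args) {
        // 16 partitions over 3 threads -> groups of 6/5/5 instead of an uneven split.
        System.out.println(assign(16, 3));
    }
}
{code}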



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22421) Sql. Interval type. DDL statements should return a proper error

2024-06-06 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22421:
--
Description: 
DDL statements that use INTERVAL type in column definitions return the 
following errors:

*Current behaviour*

{noformat}
CREATE TABLE t (a INTERVAL MONTH, b INT, PRIMARY KEY(a))
Err: Precision definition is necessary for column 'A' of type 'PERIOD'

CREATE TABLE t (a INTERVAL SECOND, b INT, PRIMARY KEY(a))
Err:  Scale is not applicable for column 'A' of type 'DURATION'

ALTER TABLE t ADD COLUMN c INTERVAL YEAR TO MONTH
Precision definition is necessary for column 'C' of type 'PERIOD'
{noformat}

*Expected behaviour*

These statements should return an error that clearly indicates that INTERVAL 
types cannot be used in this context.
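
A minimal sketch of the kind of explicit rejection this asks for, assuming the column type is visible as a Calcite SqlTypeName; the class below is hypothetical, not the actual DDL validation code:

{code:java}
import org.apache.calcite.sql.type.SqlTypeName;

// Hypothetical sketch: fail DDL validation with a message that names the real
// restriction, instead of complaining about precision/scale of 'PERIOD'/'DURATION'.
final class DdlIntervalColumnCheck {
    static void validateColumnType(String columnName, SqlTypeName typeName) {
        if (SqlTypeName.INTERVAL_TYPES.contains(typeName)) {
            throw new IllegalArgumentException("Column '" + columnName
                    + "': INTERVAL types cannot be used in column definitions");
        }
    }
}
{code}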




  was:
DDL statements that use INTERVAL type in column definitions return the 
following errors:

*Current behaviour*

{noformat}
CREATE TABLE t (a INTERVAL MONTH, b INT, PRIMARY KEY(a))
Err: Precision definition is necessary for column 'A' of type 'PERIOD'

CREATE TABLE t (a INTERVAL SECOND, b INT, PRIMARY KEY(a))
Err:  Scale is not applicable for column 'A' of type 'DURATION'

ALTER TABLE t ADD COLUMN c INTERVAL YEAR TO MONTH
Precision definition is necessary for column 'C' of type 'PERIOD'
{noformat}

*Expected behaviour*

These statements should return an error that clearly indicates that INTERVAL 
types cannot be used in this context.





> Sql. Interval type. DDL statements should return a proper error
> ---
>
> Key: IGNITE-22421
> URL: https://issues.apache.org/jira/browse/IGNITE-22421
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>    Reporter: Maksim Zhuravkov
>Priority: Minor
>  Labels: ignite-3
>
> DDL statements that use INTERVAL type in column definitions return the 
> following errors:
> *Current behaviour*
> {noformat}
> CREATE TABLE t (a INTERVAL MONTH, b INT, PRIMARY KEY(a))
> Err: Precision definition is necessary for column 'A' of type 'PERIOD'
> CREATE TABLE t (a INTERVAL SECOND, b INT, PRIMARY KEY(a))
> Err:  Scale is not applicable for column 'A' of type 'DURATION'
> ALTER TABLE t ADD COLUMN c INTERVAL YEAR TO MONTH
> Precision definition is necessary for column 'C' of type 'PERIOD'
> {noformat}
> *Expected behaviour*
> These statements should return an error that clearly indicates that INTERVAL 
> types cannot be used in this context.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22421) Sql. Interval type. DDL statements should return a proper error

2024-06-06 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22421:
--
Description: 
DDL statements that use INTERVAL type in column definitions return the 
following errors:

*Current behaviour*

{noformat}
CREATE TABLE t (a INTERVAL MONTH, b INT, PRIMARY KEY(a))
Err: Precision definition is necessary for column 'A' of type 'PERIOD'

CREATE TABLE t (a INTERVAL SECOND, b INT, PRIMARY KEY(a))
Err:  Scale is not applicable for column 'A' of type 'DURATION'

ALTER TABLE t ADD COLUMN c INTERVAL YEAR TO MONTH
Precision definition is necessary for column 'C' of type 'PERIOD'
{noformat}

*Expected behaviour*

These statements should return an error that clearly indicates that INTERVAL 
types cannot be used in this context.




  was:
DDL statements that attempt to use INTERVAL type in column definitions should 
return a proper error that informs a user that this type cannot be used in DDL.

*Current behaviour*

{noformat}
CREATE TABLE t (a INTERVAL MONTH, b INT, PRIMARY KEY(a))
Err: Precision definition is necessary for column 'A' of type 'PERIOD'

CREATE TABLE t (a INTERVAL SECOND, b INT, PRIMARY KEY(a))
Err:  Scale is not applicable for column 'A' of type 'DURATION'

ALTER TABLE t ADD COLUMN c INTERVAL YEAR TO MONTH
Precision definition is necessary for column 'C' of type 'PERIOD'
{noformat}

*Expected behaviour*

These statements should return an error that clearly indicates that INTERVAL 
types cannot be used in this context.





> Sql. Interval type. DDL statements should return a proper error
> ---
>
> Key: IGNITE-22421
> URL: https://issues.apache.org/jira/browse/IGNITE-22421
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>    Reporter: Maksim Zhuravkov
>Priority: Minor
>  Labels: ignite-3
>
> DDL statements that use INTERVAL type in column definitions return the 
> following errors:
> *Current behaviour*
> {noformat}
> CREATE TABLE t (a INTERVAL MONTH, b INT, PRIMARY KEY(a))
> Err: Precision definition is necessary for column 'A' of type 'PERIOD'
> CREATE TABLE t (a INTERVAL SECOND, b INT, PRIMARY KEY(a))
> Err:  Scale is not applicable for column 'A' of type 'DURATION'
> ALTER TABLE t ADD COLUMN c INTERVAL YEAR TO MONTH
> Precision definition is necessary for column 'C' of type 'PERIOD'
> {noformat}
> *Expected behaviour*
> These statements should return an error that clearly indicates that INTERVAL 
> types cannot be used in this context.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-22421) Sql. Interval type. DDL statements should return a proper error

2024-06-06 Thread Maksim Zhuravkov (Jira)
Maksim Zhuravkov created IGNITE-22421:
-

 Summary: Sql. Interval type. DDL statements should return a proper 
error
 Key: IGNITE-22421
 URL: https://issues.apache.org/jira/browse/IGNITE-22421
 Project: Ignite
  Issue Type: Bug
  Components: sql
Reporter: Maksim Zhuravkov


DDL statements that attempt to use INTERVAL type in column definitions should 
return a proper error that informs a user that this type cannot be used in DDL.

*Current behaviour*

{noformat}
CREATE TABLE t (a INTERVAL MONTH, b INT, PRIMARY KEY(a))
Err: Precision definition is necessary for column 'A' of type 'PERIOD'

CREATE TABLE t (a INTERVAL SECOND, b INT, PRIMARY KEY(a))
Err:  Scale is not applicable for column 'A' of type 'DURATION'

ALTER TABLE t ADD COLUMN c INTERVAL YEAR TO MONTH
Precision definition is necessary for column 'C' of type 'PERIOD'
{noformat}

*Expected behaviour*

These statements should return an error that clearly indicates that INTERVAL 
types cannot be used in this context.






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-22420) Data replicated with a thin client fails if contains mixed expiry policy

2024-06-06 Thread Maksim Timonin (Jira)
Maksim Timonin created IGNITE-22420:
---

 Summary: Data replicated with a thin client fails if contains 
mixed expiry policy
 Key: IGNITE-22420
 URL: https://issues.apache.org/jira/browse/IGNITE-22420
 Project: Ignite
  Issue Type: Bug
Reporter: Maksim Timonin
Assignee: Maksim Timonin






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-21966) Extend test coverage for SQL E091-01(Set functions. AVG)

2024-06-06 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov reassigned IGNITE-21966:
-

Assignee: Maksim Zhuravkov

> Extend test coverage for SQL E091-01(Set functions. AVG)
> 
>
> Key: IGNITE-21966
> URL: https://issues.apache.org/jira/browse/IGNITE-21966
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Iurii Gerzhedovich
>Assignee: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
>
> Test coverage for SQL E091-01(Set functions. AVG) is poor.
> Let's increase the test coverage. 
>  
> ref - test/sql/aggregate/aggregates/test_avg.test



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22418) Sql. POSITION function does not support USING POSITION | OCTETS clause

2024-06-05 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22418:
--
Description: 
Both queries return parse error: Failed to parse query: Encountered "USING" 

{noformat}
SELECT POSITION('a' IN 'abc' USING CHARACTERS)
{noformat}

{noformat}
SELECT POSITION('a' IN 'abc' USING OCTETS)
{noformat}

*Expected behaviour*:
- The POSITION function for character strings supports the USING clause 
(semantics sketched below).
- The POSITION function for a binary string with a USING clause is still rejected 
(preferably with a validation error, since there is no POSITION (b IN binary 
string USING) form).
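
For reference, a small self-contained sketch of the difference between the two USING forms (assuming UTF-8 octets; this is illustrative only, not Ignite's implementation):

{code:java}
import java.nio.charset.StandardCharsets;

final class PositionUsingSemantics {
    // 1-based character position, i.e. POSITION(needle IN haystack USING CHARACTERS);
    // 0 means "not found", matching SQL semantics.
    static int positionUsingCharacters(String needle, String haystack) {
        return haystack.indexOf(needle) + 1;
    }

    // 1-based octet position, i.e. POSITION(needle IN haystack USING OCTETS),
    // computed here over the UTF-8 encoding of both strings.
    static int positionUsingOctets(String needle, String haystack) {
        byte[] n = needle.getBytes(StandardCharsets.UTF_8);
        byte[] h = haystack.getBytes(StandardCharsets.UTF_8);
        outer:
        for (int i = 0; i + n.length <= h.length; i++) {
            for (int j = 0; j < n.length; j++) {
                if (h[i + j] != n[j]) {
                    continue outer;
                }
            }
            return i + 1;
        }
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(positionUsingCharacters("b", "äb")); // 2: second character
        System.out.println(positionUsingOctets("b", "äb"));     // 3: 'ä' takes two UTF-8 octets
    }
}
{code}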


  was:
Both queries return parse error: Failed to parse query: Encountered "USING" 

{noformat}
SELECT POSITION('a' IN 'abc' USING CHARACTERS)
{noformat}

{noformat}
SELECT POSITION('a' IN 'abc' USING OCTETS)
{noformat}

*Expected behaviour*:
- POSITION function for character strings supports USING clause.
- POSITION function for binary string still return an error (preferably it 
should be validation error as there is no POSITION (b IN binary string USING) 
function).



> Sql. POSITION function does not support USING POSITION | OCTETS clause
> --
>
> Key: IGNITE-22418
> URL: https://issues.apache.org/jira/browse/IGNITE-22418
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Maksim Zhuravkov
>Priority: Minor
>  Labels: ignite-3
>
> Both queries return parse error: Failed to parse query: Encountered "USING" 
> {noformat}
> SELECT POSITION('a' IN 'abc' USING CHARACTERS)
> {noformat}
> {noformat}
> SELECT POSITION('a' IN 'abc' USING OCTETS)
> {noformat}
> *Expected behaviour*:
> - POSITION function for character strings supports USING clause.
> - POSITION function for binary string w/ USING clause still gets rejected 
> (preferably it should be validation error as there is no POSITION (b IN 
> binary string USING) function).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22418) Sql. POSITION function does not support USING POSITION | OCTETS clause

2024-06-05 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22418:
--
Description: 
Both queries return parse error: Failed to parse query: Encountered "USING" 

{noformat}
SELECT POSITION('a' IN 'abc' USING CHARACTERS)
{noformat}

{noformat}
SELECT POSITION('a' IN 'abc' USING OCTETS)
{noformat}

*Expected behaviour*:
- POSITION function for character strings supports USING clause.
- POSITION function for binary string w/ USING clause still gets rejected. 
Preferably it should be validation error as there is no POSITION (b IN binary 
string USING) function.


  was:
Both queries return parse error: Failed to parse query: Encountered "USING" 

{noformat}
SELECT POSITION('a' IN 'abc' USING CHARACTERS)
{noformat}

{noformat}
SELECT POSITION('a' IN 'abc' USING OCTETS)
{noformat}

*Expected behaviour*:
- POSITION function for character strings supports USING clause.
- POSITION function for binary string w/ USING clause still gets rejected 
(preferably it should be validation error as there is no POSITION (b IN binary 
string USING) function).



> Sql. POSITION function does not support USING POSITION | OCTETS clause
> --
>
> Key: IGNITE-22418
> URL: https://issues.apache.org/jira/browse/IGNITE-22418
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Maksim Zhuravkov
>Priority: Minor
>  Labels: ignite-3
>
> Both queries return parse error: Failed to parse query: Encountered "USING" 
> {noformat}
> SELECT POSITION('a' IN 'abc' USING CHARACTERS)
> {noformat}
> {noformat}
> SELECT POSITION('a' IN 'abc' USING OCTETS)
> {noformat}
> *Expected behaviour*:
> - POSITION function for character strings supports USING clause.
> - POSITION function for binary string w/ USING clause still gets rejected. 
> Preferably it should be validation error as there is no POSITION (b IN binary 
> string USING) function.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22418) Sql. POSITION function does not support USING POSITION | OCTETS clause

2024-06-05 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22418:
--
Description: 
Both queries return parse error: Failed to parse query: Encountered "USING" 

{noformat}
SELECT POSITION('a' IN 'abc' USING CHARACTERS)
{noformat}

{noformat}
SELECT POSITION('a' IN 'abc' USING OCTETS)
{noformat}

*Expected behaviour*:
- POSITION function for character strings supports USING clause.
- POSITION function for binary string still return an error (preferably it 
should be validation error as there is no POSITION (b IN binary string USING) 
function).


  was:
Both queries return parse error: Failed to parse query: Encountered "USING" 

{noformat}
SELECT POSITION('a' IN 'abc' USING CHARACTERS)
{noformat}

{noformat}
SELECT POSITION('a' IN 'abc' USING OCTETS)
{noformat}



> Sql. POSITION function does not support USING POSITION | OCTETS clause
> --
>
> Key: IGNITE-22418
> URL: https://issues.apache.org/jira/browse/IGNITE-22418
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Maksim Zhuravkov
>Priority: Minor
>  Labels: ignite-3
>
> Both queries return parse error: Failed to parse query: Encountered "USING" 
> {noformat}
> SELECT POSITION('a' IN 'abc' USING CHARACTERS)
> {noformat}
> {noformat}
> SELECT POSITION('a' IN 'abc' USING OCTETS)
> {noformat}
> *Expected behaviour*:
> - POSITION function for character strings supports USING clause.
> - POSITION function for binary string still return an error (preferably it 
> should be validation error as there is no POSITION (b IN binary string USING) 
> function).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-22418) Sql. POSITION function does not support USING POSITION | OCTETS clause

2024-06-05 Thread Maksim Zhuravkov (Jira)
Maksim Zhuravkov created IGNITE-22418:
-

 Summary: Sql. POSITION function does not support USING POSITION | 
OCTETS clause
 Key: IGNITE-22418
 URL: https://issues.apache.org/jira/browse/IGNITE-22418
 Project: Ignite
  Issue Type: Bug
Reporter: Maksim Zhuravkov


Both queries return parse error: Failed to parse query: Encountered "USING" 

{noformat}
SELECT POSITION('a' IN 'abc' USING CHARACTERS)
{noformat}

{noformat}
SELECT POSITION('a' IN 'abc' USING OCTETS)
{noformat}




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22418) Sql. POSITION function does not support USING POSITION | OCTETS clause

2024-06-05 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22418:
--
Component/s: sql

> Sql. POSITION function does not support USING POSITION | OCTETS clause
> --
>
> Key: IGNITE-22418
> URL: https://issues.apache.org/jira/browse/IGNITE-22418
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>    Reporter: Maksim Zhuravkov
>Priority: Minor
>  Labels: ignite-3
>
> Both queries return parse error: Failed to parse query: Encountered "USING" 
> {noformat}
> SELECT POSITION('a' IN 'abc' USING CHARACTERS)
> {noformat}
> {noformat}
> SELECT POSITION('a' IN 'abc' USING OCTETS)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22418) Sql. POSITION function does not support USING POSITION | OCTETS clause

2024-06-05 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22418:
--
Ignite Flags:   (was: Docs Required,Release Notes Required)

> Sql. POSITION function does not support USING POSITION | OCTETS clause
> --
>
> Key: IGNITE-22418
> URL: https://issues.apache.org/jira/browse/IGNITE-22418
> Project: Ignite
>  Issue Type: Bug
>Reporter: Maksim Zhuravkov
>Priority: Minor
>  Labels: ignite-3
>
> Both queries return parse error: Failed to parse query: Encountered "USING" 
> {noformat}
> SELECT POSITION('a' IN 'abc' USING CHARACTERS)
> {noformat}
> {noformat}
> SELECT POSITION('a' IN 'abc' USING OCTETS)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22418) Sql. POSITION function does not support USING POSITION | OCTETS clause

2024-06-05 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22418:
--
Labels: ignite-3  (was: )

> Sql. POSITION function does not support USING POSITION | OCTETS clause
> --
>
> Key: IGNITE-22418
> URL: https://issues.apache.org/jira/browse/IGNITE-22418
> Project: Ignite
>  Issue Type: Bug
>Reporter: Maksim Zhuravkov
>Priority: Minor
>  Labels: ignite-3
>
> Both queries return parse error: Failed to parse query: Encountered "USING" 
> {noformat}
> SELECT POSITION('a' IN 'abc' USING CHARACTERS)
> {noformat}
> {noformat}
> SELECT POSITION('a' IN 'abc' USING OCTETS)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22417) Sql. Validator accepts f(BIGINT) but f(long) but SQL runtime does not define f(long) it has f(int)

2024-06-05 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22417:
--
Description: 
Calcite uses TypeFamilies in a lot of places to define the types of function 
arguments.
This leads to a problem: a function declared to accept only the INTEGER type also 
accepts BIGINT at validation time, but the runtime does not have an implementation 
that accepts long (because there may be no point in providing one), causing a 
runtime error.

*Example*:

A function *f* can be called with TINYINT, SMALLINT, and INTEGER, but should be 
rejected by the validation when called with BIGINT:

{noformat}
Descriptor: F()
Runtime: Fs.f(int)
{noformat}

Validator accepts a call to f(BIGINT) since BIGINT is a part of INTEGER type 
family.
But f(long) is not defined in the runtime, so java.lang.NoSuchMethodException: 
Fs.f(long) is thrown when the query gets executed.

*Expected behaviour*: when a function does not accept BIGINTs, then the 
validator should return an error.
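
The Java-side failure can be reproduced with plain reflection; the following is a hypothetical minimal example whose names only mirror the Descriptor/Runtime sketch above, nothing here is Ignite code:

{code:java}
import java.lang.reflect.Method;

final class Fs {
    // The runtime provides an implementation only for int arguments.
    public static int f(int x) {
        return x + 1;
    }
}

final class ValidatorRuntimeMismatch {
    public static void main(String[] args) throws Exception {
        // Family-based validation treats BIGINT like INTEGER, so f(<BIGINT>) passes validation.
        // Execution then looks for an implementation taking long and fails:
        System.out.println(Fs.class.getMethod("f", int.class));  // found
        System.out.println(Fs.class.getMethod("f", long.class)); // java.lang.NoSuchMethodException: Fs.f(long)
    }
}
{code}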



  was:
Calcite uses TypeFamilies in a lot of places to define types of function 
arguments.
This leads to a problem when a function that only accepts INTEGER type also 
accepts BIGINT type, but runtime does not have an implementation of a function 
that accepts long (because there can be no sense in doing so), causing a 
runtime error.

*Example*:

Say function can be called with TINYINT, SMALLINT, and INTEGER, but should be 
rejected by the validation when called with BIGINT:

{noformat}
Descriptor: F()
Runtime: Fs.f(int)
{noformat}

Validator accepts a call to f(BIGINT) since BIGINT is a part of INTEGER type 
family.
But f(long) is not defined in the runtime, so java.lang.NoSuchMethodException: 
Fs.f(long) is thrown when the query gets executed.

*Expected behaviour*: when a function does not accept BIGINTs, then the 
validator should return an error.




> Sql. Validator accepts f(BIGINT) but f(long) but SQL runtime does not define 
> f(long) it has f(int)
> --
>
> Key: IGNITE-22417
> URL: https://issues.apache.org/jira/browse/IGNITE-22417
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
>
> Calcite uses TypeFamilies in a lot of places to define types of function 
> arguments.
> This leads to a problem when a function that only accepts INTEGER type also 
> accepts BIGINT type, but runtime does not have an implementation of a 
> function that accepts long (because there can be no sense in doing so), 
> causing a runtime error.
> *Example*:
> A function *f* can be called with TINYINT, SMALLINT, and INTEGER, but should 
> be rejected by the validation when called with BIGINT:
> {noformat}
> Descriptor: F()
> Runtime: Fs.f(int)
> {noformat}
> Validator accepts a call to f(BIGINT) since BIGINT is a part of INTEGER type 
> family.
> But f(long) is not defined in the runtime, so java.lang.NoSuchMethodException: 
> Fs.f(long) is thrown when the query gets executed.
> *Expected behaviour*: when a function does not accept BIGINTs, then the 
> validator should return an error.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22417) Sql. Validator accepts f(BIGINT) but f(long) but SQL runtime does not define f(long) it has f(int)

2024-06-05 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22417:
--
Description: 
Calcite uses TypeFamilies in a lot of places to define types of function 
arguments.
This leads to a problem when a function that only accepts INTEGER type also 
accepts BIGINT type, but runtime does not have an implementation of a function 
that accepts long (because there can be no sense in doing so), causing a 
runtime error.

*Example*:

Say function can be called with TINYINT, SMALLINT, and INTEGER, but should be 
rejected by the validation when called with BIGINT:

{noformat}
Descriptor: F()
Runtime: Fs.f(int)
{noformat}

Validator accepts a call to f(BIGINT) since BIGINT is a part of INTEGER type 
family.
But f(long) is not defined in the runtime, so java.lang.NoSuchMethodException: 
Fs.f(long) is thrown when the query gets executed.

*Expected behaviour*: when a function does not accept BIGINTs, then the 
validator should return an error.



  was:
Calcite uses TypeFamilies in a lot of places to define the types of 
function arguments.
This leads to a problem when a function that only accepts INTEGER type also 
accepts BIGINT type, but runtime does not have an implementation of a function 
that accepts long (because there can be no sense in doing so), causing a 
runtime error.

This function can be called with TINYINT, SMALLINT, and INTEGER, but should be 
rejected by the validation when called with BIGINT:

{noformat}
Descriptor: F()
Runtime: Fs.f(int)
{noformat}

Validator accepts a call to f(BIGINT) since BIGINT is a part of INTEGER type 
family.
But f(long) is not defined in the runtime, so java.lang.NoSuchMethodException: 
Fs.f(long) is thrown when the query gets executed.

*Expected behaviour*: when a function does not accept BIGINTs, then the 
validator should return an error.




> Sql. Validator accepts f(BIGINT) but f(long) but SQL runtime does not define 
> f(long) it has f(int)
> --
>
> Key: IGNITE-22417
> URL: https://issues.apache.org/jira/browse/IGNITE-22417
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
>
> Calcite uses TypeFamilies in a lot of places to define types of function 
> arguments.
> This leads to a problem when a function that only accepts INTEGER type also 
> accepts BIGINT type, but runtime does not have an implementation of a 
> function that accepts long (because there can be no sense in doing so), 
> causing a runtime error.
> *Example*:
> Say function can be called with TINYINT, SMALLINT, and INTEGER, but should be 
> rejected by the validation when called with BIGINT:
> {noformat}
> Descriptor: F()
> Runtime: Fs.f(int)
> {noformat}
> Validator accepts a call to f(BIGINT) since BIGINT is a part of INTEGER type 
> family.
> But f(long) is not defined in the runtime, so java.lang.NoSuchMethodException: 
> Fs.f(long) is thrown when the query gets executed.
> *Expected behaviour*: when a function does not accept BIGINTs, then the 
> validator should return an error.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-22417) Sql. Validator accepts f(BIGINT) but f(long) but SQL runtime does not define f(long) it has f(int)

2024-06-05 Thread Maksim Zhuravkov (Jira)
Maksim Zhuravkov created IGNITE-22417:
-

 Summary: Sql. Validator accepts f(BIGINT) but f(long) but SQL 
runtime does not define f(long) it has f(int)
 Key: IGNITE-22417
 URL: https://issues.apache.org/jira/browse/IGNITE-22417
 Project: Ignite
  Issue Type: Bug
  Components: sql
Reporter: Maksim Zhuravkov


Calcite uses TypeFamilies in a lot of places to define the types of 
function arguments.
This leads to a problem when a function that only accepts INTEGER type also 
accepts BIGINT type, but runtime does not have an implementation of a function 
that accepts long (because there can be no sense in doing so), causing a 
runtime error.

This function can be called with TINYINT, SMALLINT, and INTEGER, but should be 
rejected by the validation when called with BIGINT:

{noformat}
Descriptor: F()
Runtime: Fs.f(int)
{noformat}

Validator accepts a call to f(BIGINT) since BIGINT is a part of INTEGER type 
family.
But f(long) is not defined in the runtime, so java.lang.NoSuchMethodException: 
Fs.f(long) is thrown when the query gets executed.

*Expected behaviour*: when a function does not accept BIGINTs, then the 
validator should return an error.





--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (JAMES-3946) Proposal: DropLists (akka blacklists)

2024-06-05 Thread Maksim Meliashchuk (Jira)


[ 
https://issues.apache.org/jira/browse/JAMES-3946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17852564#comment-17852564
 ] 

Maksim Meliashchuk commented on JAMES-3946:
---

I believe that someone with the necessary permissions could merge the 
`droplist` branch onto the `master` branch. Following that, I would like to 
further enhance this topic, (JPA backend, postgresql backend).

> Proposal: DropLists (akka blacklists)
> -
>
> Key: JAMES-3946
> URL: https://issues.apache.org/jira/browse/JAMES-3946
> Project: James Server
>  Issue Type: New Feature
>  Components: data, webadmin
>Reporter: Benoit Tellier
>Priority: Major
>  Time Spent: 11h 10m
>  Remaining Estimate: 0h
>
> h3. What?
> Blacklist are a classical email related feature.
> Having an easy-to-activate core module to handle this feature would IMO be 
> nice.
> Ideally blacklist entries should be added globally, at the domain level, at 
> the user level and should concern individual addresses as well as entire 
> domains.
> h3. Disclaimer
> We identified this feature while working on TMail.
> I am convinced that this is generic enough to land on James. But should 
> consensus reject this, we could still make this a TMail module :-)
> Ideally I'd like to have this fully as an option, not activated by default.
> h3. How?
> Again, proposal here. My first shot was to think of RRTs but they do not take 
> sender into account (sd).
> Write a `DropList` interface in `/server/data/data-api`.
> A drop list entry is comprised of
>  - **ownerScope**: `global | domain | user`
>  - **owner**: String. 
> - For ownerScope global: this is always `ALL`.
> - For ownerScope domain: this is the domain, eg `domain.tld`
> - For ownerScope user, this is the user, eg `b...@domain.tld`
> - **deniedEntityType**: String. One of `address | domain`
> - **deniedEntity**: String. Either the domain or the address.
> {code:java}
> interface DropList {
> Mono add(DropListEntry entry);
> Mono remove(DropListEntry entry);
> Flux list(OwnerScope ownerScope, Owner owner);
> enum Status {
> ALLOWED,
> BLOCKED
> }
> Mono query(OwnerScope ownerScope, Owner owner, MailAddress 
> sender);
> }
> {code}
> And provide a memory + a Cassandra implementation of the DropList.
> Write a `IsInDropList` matcher: Given `attac...@evil.com` sends a mail to 
> `target@localhost`, the following queries are done:
>  - ownerScope all, owner All, deniedEntityType domain, deniedEntity evil.com
>  - ownerScope all, owner All, deniedEntityType address, deniedEntity 
> attac...@evil.com
>  - ownerScope domain, owner localhost, deniedEntityType domain, deniedEntity 
> evil.com
>  - ownerScope domain, owner localhost, deniedEntityType address, deniedEntity 
> attac...@evil.com
>  - ownerScope user, owner target@localhost, deniedEntityType domain, 
> deniedEntity evil.com
>  - ownerScope user, owner target@localhost, deniedEntityType address, 
> deniedEntity attac...@evil.com
> Manage to do only one set of queries at scope global. Manage to do one set of 
> queries at scope domain per domain!
> Webadmin APIs to manage the Drop List:
> {code:java}
> GET /droplist/global?deniedEntityType=null|domain|address
> [ "evil.com", "devil.com", "bad_...@crime.com", "hac...@murder.org" ]
> HEAD /droplist/global/evil.com
> HEAD /droplist/global/bad_...@murder.org
> 204 // 404
> PUT /droplist/global/evil.com
> PUT /droplist/global/bad_...@murder.org
> -> adds the entry into the droplist
> DELETE /droplist/global/evil.com
> DELETE /droplist/global/bad_...@murder.org
> -> removes the entry from the droplist
> 
> GET /droplist/domain/target.com?deniedEntityType=null|domain|address
> [ "evil.com", "devil.com", "bad_...@crime.com", "hac...@murder.org" ]
> HEAD /droplist/domain/target.com/evil.com
> HEAD /droplist/domain/target.com/bad_...@murder.org
> 204 // 404
> PUT /droplist/domain/target.com/evil.com
> PUT /droplist/domain/target.com/bad_...@murder.org
> -> adds the entry into the droplist
> DELETE /droplist/domain/target.com/evil.com
> DELETE /droplist/domain/target.com/bad_...@murder.org
> -> removes the entry from the droplist
> 
> GET /droplist/user/b...@target.com?deniedEntityType=null|domain|address
> [ "evil.com", "devil.com", "bad_...@crime.com", "hac...@murder.org" ]
> HEAD /droplist/user/b...@target.com/evil.com

Re: [dspace-tech] Re: Dspace-CRIS 2023.02.04 Handle links do not work

2024-06-05 Thread Maksim Donchenko
The strange thing here is that even the endpoint that is responsible for 
forwarding the handle  (server/api/pid) returns a 500 error.

Tuesday, June 4, 2024 at 13:00:08 UTC+3, elorenzo: 

> I think you have two ways of creating the dc.identifier.uri (usually 
> used to store the handle).
>
> Typically a repository uses:
>
> hdl.handle.net/prefix/suffix --> this only works with a valid prefix 
> number, provided by CNRI, AND a handle server well configured and 
> started.
>
> The other way to build the dc.identifier.uri is 
> http../yourrepositoryname.xxx.xxx/handle/prefix/suffix; any value for 
> prefix (123456789 is used by many repositories) will serve. These 
> URLs will always work (even without a handle server installation).
>
> Using one or the other is configurable.
>
> Best luck
> EMilio
>
> On 2024-06-04 11:45, Maksim Donchenko wrote:
> > Thanks for the answer, but in previous versions such as 2023.02.03 and
> > earlier, everything worked without installing and running the Handle
> > server.
> > 
> > Tuesday, June 4, 2024 at 12:40:06 UTC+3, Julio:
> > 
> >> Hello, if I'm not mistaken, with the default handle it will create a
> >> handle at the internal and metadata level, but the link will not
> >> work, because handle.net [1] has to be the one that redirects you to
> >> your server, if you are not registered on handle.net [1], this last
> >> part is not going to work for you.
> >> 
> >> On Tuesday, June 4, 2024 at 8:57:51 UTC+2, Maksim Donchenko
> >> wrote:
> >> No, I have not registered at https://handle.net/. I am using the
> >> default handle.
> >> 
> >> Tuesday, June 4, 2024 at 09:48:28 UTC+3, Julio:
> >> 
> >> Hello, have you registered at https://handle.net/? and completed the
> >> entire registration process.
> >> 
> >> Do you have the handle server started on your machine?
> >> 
> >> [dspace_dir]/bin/start-handle-server
> >> 
> >> Ports 2641 and 8000 have input and output access on your machine.
> >> 
> >> Review all these points.
> >> 
> >> Greetings
> >> 
> >> On Monday, June 3, 2024 at 16:41:43 UTC+2, Maksim Donchenko
> >> wrote:
> >> Hello, everyone. I decided to install the new version of Dspace-CRIS
> >> 2023.02.04 and found that the links with Handle do not work, the
> >> server just says “No item found for the identifier”. Maybe
> >> someone has encountered something similar. Thank you in advance for
> >> your response.
> >> Sincerely, Maksim.
> > 
> > --
> > All messages to this mailing list should adhere to the Code of
> > Conduct: https://www.lyrasis.org/about/Pages/Code-of-Conduct.aspx
> > ---
> > You received this message because you are subscribed to the Google
> > Groups "DSpace Technical Support" group.
> > To unsubscribe from this group and stop receiving emails from it, send
> > an email to dspace-tech...@googlegroups.com.
> > To view this discussion on the web visit
> > 
> https://groups.google.com/d/msgid/dspace-tech/27b53c8c-2762-49d5-a6c8-7aa263268b91n%40googlegroups.com
> > [2].
> > 
> > 
> > Links:
> > --
> > [1] http://handle.net
> > [2] 
> > 
> https://groups.google.com/d/msgid/dspace-tech/27b53c8c-2762-49d5-a6c8-7aa263268b91n%40googlegroups.com?utm_medium=email_source=footer
>

-- 
All messages to this mailing list should adhere to the Code of Conduct: 
https://www.lyrasis.org/about/Pages/Code-of-Conduct.aspx
--- 
You received this message because you are subscribed to the Google Groups 
"DSpace Technical Support" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dspace-tech+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dspace-tech/09101aa9-28d2-4b83-b777-d86e6c86cbb5n%40googlegroups.com.


[jira] [Assigned] (IGNITE-21958) Extend test coverage for SQL E021-11(POSITION function)

2024-06-05 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov reassigned IGNITE-21958:
-

Assignee: Maksim Zhuravkov

> Extend test coverage for SQL E021-11(POSITION function)
> ---
>
> Key: IGNITE-21958
> URL: https://issues.apache.org/jira/browse/IGNITE-21958
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Iurii Gerzhedovich
>Assignee: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
>
> Test coverage for SQL E021-11 (POSITION function) is poor.
> Let's increase the test coverage. 
> ref - 
> modules/runner/src/integrationTest/sql/function/string/test_position.test
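For illustration, a minimal sqllogictest-style sketch of the kind of cases such coverage could include (the expected values follow the SQL-standard definition of POSITION; support for table-free SELECT is assumed):

{noformat}
query I
SELECT POSITION('b' IN 'abc')
----
2

query I
SELECT POSITION('x' IN 'abc')
----
0

query I
SELECT POSITION('' IN 'abc')
----
1
{noformat}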



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-22204) Sql. Set operation. Incorrect query transformation for a query with limit / offset and sort

2024-06-04 Thread Maksim Zhuravkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-22204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17851993#comment-17851993
 ] 

Maksim Zhuravkov commented on IGNITE-22204:
---

The previous issue was moved to 
https://issues.apache.org/jira/browse/IGNITE-22392.

> Sql. Set operation. Incorrect query transformation for a query with limit / 
> offset and sort
> ---
>
> Key: IGNITE-22204
> URL: https://issues.apache.org/jira/browse/IGNITE-22204
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Critical
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Query 
> {code:java}
> SELECT a FROM
>   (SELECT a FROM
> (SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
> ORDER BY a OFFSET 1
>   ) t(a)
> {code}
> Should be transformed into
> {code:java}
>  Limit(offset=[1]), id = 201
>Limit(offset=[2], fetch=[3]), id = 200
>  Exchange(distribution=[single]), id = 199
>Sort(sort0=[$0], dir0=[ASC], offset=[2], fetch=[3]), id = 198
>  TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 168
> {code}
> But it gets rewritten as 
> {code:java}
>  Limit(offset=[1]), id = 201
>Limit(offset=[2], fetch=[3]), id = 200
>  Exchange(distribution=[single]), id = 199
>Sort(sort0=[$0], dir0=[ASC], offset=[1], fetch=[1]), id = 198
>  TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 168
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22204) Sql. Set operation. Incorrect query transformation for a query with limit / offset and sort

2024-06-04 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22204:
--
Description: 
Query 
{code:java}
SELECT a FROM
  (SELECT a FROM
(SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
ORDER BY a OFFSET 1
  ) t(a)
{code}

Should be transformed into

{code:java}
 Limit(offset=[1]), id = 201
   Limit(offset=[2], fetch=[3]), id = 200
 Exchange(distribution=[single]), id = 199
   Sort(sort0=[$0], dir0=[ASC], offset=[2], fetch=[3]), id = 198
 TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 168
{code}

But it gets rewritten as 

{code:java}
 Limit(offset=[1]), id = 201
   Limit(offset=[2], fetch=[3]), id = 200
 Exchange(distribution=[single]), id = 199
   Sort(sort0=[$0], dir0=[ASC], offset=[1], fetch=[1]), id = 198
 TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 168
{code}







  was:
Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
assumes that Sort's input always returns data from a single source.
This assumption is not correct for the cases, when a query splits into multiple 
fragments.
So transforming

{noformat}
Sort(ordering=ord, offset=o, fetch=f) 
>Scan
{noformat}

into 

{noformat}
Limit (offset=o, fetch=f) 
>Sort(ordering=ord, fetch=f + o)
>>Scan
{noformat}

Is not a valid transformation when there is an Exchange operator between a 
Limit and a Sort and should not be used. 





> Sql. Set operation. Incorrect query transformation for a query with limit / 
> offset and sort
> ---
>
> Key: IGNITE-22204
> URL: https://issues.apache.org/jira/browse/IGNITE-22204
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Critical
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Query 
> {code:java}
> SELECT a FROM
>   (SELECT a FROM
> (SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
> ORDER BY a OFFSET 1
>   ) t(a)
> {code}
> Should be transformed into
> {code:java}
>  Limit(offset=[1]), id = 201
>Limit(offset=[2], fetch=[3]), id = 200
>  Exchange(distribution=[single]), id = 199
>Sort(sort0=[$0], dir0=[ASC], offset=[2], fetch=[3]), id = 198
>  TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 168
> {code}
> But it gets rewritten as 
> {code:java}
>  Limit(offset=[1]), id = 201
>Limit(offset=[2], fetch=[3]), id = 200
>  Exchange(distribution=[single]), id = 199
>Sort(sort0=[$0], dir0=[ASC], offset=[1], fetch=[1]), id = 198
>  TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 168
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22204) Sql. Set operation. Incorrect query transformation for a query with limit / offset and sort

2024-06-04 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22204:
--
Summary: Sql. Set operation. Incorrect query transformation for a query 
with limit / offset and sort  (was: Sql. Set operation. Incorrect query 
transformation for a query with limit / offset that uses the same table (When 
RemoveSortInSubQuery is enabled))

> Sql. Set operation. Incorrect query transformation for a query with limit / 
> offset and sort
> ---
>
> Key: IGNITE-22204
> URL: https://issues.apache.org/jira/browse/IGNITE-22204
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Critical
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
> assumes that Sort's input always returns data from a single source.
> This assumption is not correct for the cases, when a query splits into 
> multiple fragments.
> So transforming
> {noformat}
> Sort(ordering=ord, offset=o, fetch=f) 
> >Scan
> {noformat}
> into 
> {noformat}
> Limit (offset=o, fetch=f) 
> >Sort(ordering=ord, fetch=f + o)
> >>Scan
> {noformat}
> Is not a valid transformation when there is an Exchange operator between a 
> Limit and a Sort and should not be used. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Reopened] (IGNITE-22392) Sql. Set operation. Incorrect query transformation for a query with limit / offset that uses the same table (When RemoveSortInSubQuery is enabled)

2024-06-04 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov reopened IGNITE-22392:
---

> Sql. Set operation. Incorrect query transformation for a query with limit / 
> offset that uses the same table (When RemoveSortInSubQuery is enabled)
> --
>
> Key: IGNITE-22392
> URL: https://issues.apache.org/jira/browse/IGNITE-22392
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Maksim Zhuravkov
>Priority: Minor
>  Labels: ignite-3
>
> A combination of LIMIT / OFFSET and a set operator results in an incorrect 
> transformation of the plan tree. This issue is caused by incorrect handling of 
> the `RemoveSortInSubQuery` flag inside SqlToRelConverter. ATM this issue is 
> mitigated by disabling that flag. 
> {noformat}
> statement ok
> CREATE TABLE test (a INTEGER);
> statement ok
> INSERT INTO test VALUES (1), (2), (3), (4);
> # query 1
> query I rowsort
> SELECT a FROM
>   (SELECT a FROM test ORDER BY a LIMIT 1 OFFSET 1) t(a)
> 
> 2
> # query 2
> query I rowsort
> SELECT a FROM
>   (SELECT a FROM
> (SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
> ORDER BY a OFFSET 1
>   ) t(a)
> 
> 4
> # combined query should return 2, 4
> # but it returns 2
> query I rowsort
> SELECT a FROM
>   (SELECT a FROM test ORDER BY a LIMIT 1 OFFSET 1) t(a)
> UNION ALL
> SELECT a FROM
>   (SELECT a FROM
> (SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
> ORDER BY a OFFSET 1
>   ) t(a)
> 
> 2
> 4
> {noformat}
> Query 1
> {noformat}
> SELECT a FROM
>   (SELECT a FROM test ORDER BY a LIMIT 1 OFFSET 1) t(a)
>  Limit(offset=[1], fetch=[1]), id = 80
> Exchange(distribution=[single]), id = 79
>Sort(sort0=[$0], dir0=[ASC], offset=[1], fetch=[1]), id = 78
>  TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 50
> {noformat}
> Query 2
> {noformat}
> SELECT a FROM
>   (SELECT a FROM
> (SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
> ORDER BY a OFFSET 1
>   ) t(a)
>  Limit(offset=[1]), id = 201
>Limit(offset=[2], fetch=[3]), id = 200
>  Exchange(distribution=[single]), id = 199
>Sort(sort0=[$0], dir0=[ASC], offset=[2], fetch=[3]), id = 198
>  TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 168
> {noformat}
> Combine queries using UNION ALL
> {noformat}
> SELECT a FROM
>   (SELECT a FROM test ORDER BY a LIMIT 1 OFFSET 1) t(a)
> UNION ALL
> SELECT a FROM
>   (SELECT a FROM
> (SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
> ORDER BY a OFFSET 1
>   ) t(a)
> UnionAll(all=[true]), id = 403
>   Limit(offset=[1], fetch=[1]), id = 400
> Exchange(distribution=[single]), id = 399 # subtree is duplicated in 
> another part of a query
>   Sort(sort0=[$0], dir0=[ASC], offset=[1], fetch=[1]), id = 398 # 
> TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 345
>   Limit(offset=[1]), id = 402
> Limit(offset=[2], fetch=[3]), id = 401
>   Exchange(distribution=[single]), id = 399 # duplicate
> Sort(sort0=[$0], dir0=[ASC], offset=[1], fetch=[1]), id = 398
>   TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 345
> {noformat}
> When tables are different, results are correct.
> Reproducible in vanilla Calcite:
> {noformat}
>  EnumerableUnion(all=[true])
> >   EnumerableCalc(expr#0..1=[{inputs}], A=[$t0])
> > EnumerableLimit(offset=[1], fetch=[1])
> >   EnumerableSort(sort0=[$0], dir0=[ASC])
> > EnumerableTableScan(table=[[BLANK, TEST]])
> >   EnumerableCalc(expr#0..1=[{inputs}], A=[$t0])
> > EnumerableLimit(offset=[2], fetch=[3])
> >   EnumerableSort(sort0=[$0], dir0=[ASC])
> > EnumerableTableScan(table=[[BLANK, TEST]])
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22204) Sql. Set operation. Incorrect query transformation for a query with limit / offset that uses the same table (When RemoveSortInSubQuery is enabled)

2024-06-04 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22204:
--
Summary: Sql. Set operation. Incorrect query transformation for a query 
with limit / offset that uses the same table (When RemoveSortInSubQuery is 
enabled)  (was: Sql. Sort operator cannot use offset parameter)

> Sql. Set operation. Incorrect query transformation for a query with limit / 
> offset that uses the same table (When RemoveSortInSubQuery is enabled)
> --
>
> Key: IGNITE-22204
> URL: https://issues.apache.org/jira/browse/IGNITE-22204
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Critical
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
> assumes that Sort's input always returns data from a single source.
> This assumption is not correct for the cases, when a query splits into 
> multiple fragments.
> So transforming
> {noformat}
> Sort(ordering=ord, offset=o, fetch=f) 
> >Scan
> {noformat}
> into 
> {noformat}
> Limit (offset=o, fetch=f) 
> >Sort(ordering=ord, fetch=f + o)
> >>Scan
> {noformat}
> Is not a valid transformation when there is an Exchange operator between a 
> Limit and a Sort and should not be used. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (IGNITE-22392) Sql. Set operation. Incorrect query transformation for a query with limit / offset that uses the same table (When RemoveSortInSubQuery is enabled)

2024-06-04 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov resolved IGNITE-22392.
---
Resolution: Duplicate

> Sql. Set operation. Incorrect query transformation for a query with limit / 
> offset that uses the same table (When RemoveSortInSubQuery is enabled)
> --
>
> Key: IGNITE-22392
> URL: https://issues.apache.org/jira/browse/IGNITE-22392
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Maksim Zhuravkov
>Priority: Minor
>  Labels: ignite-3
>
> A combination of LIMIT / OFFSET and a set operator results in an incorrect 
> transformation of the plan tree. This issue is caused by incorrect handling of 
> the `RemoveSortInSubQuery` flag inside SqlToRelConverter. ATM this issue is 
> mitigated by disabling that flag. 
> {noformat}
> statement ok
> CREATE TABLE test (a INTEGER);
> statement ok
> INSERT INTO test VALUES (1), (2), (3), (4);
> # query 1
> query I rowsort
> SELECT a FROM
>   (SELECT a FROM test ORDER BY a LIMIT 1 OFFSET 1) t(a)
> 
> 2
> # query 2
> query I rowsort
> SELECT a FROM
>   (SELECT a FROM
> (SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
> ORDER BY a OFFSET 1
>   ) t(a)
> 
> 4
> # combined query should return 2, 4
> # but it returns 2
> query I rowsort
> SELECT a FROM
>   (SELECT a FROM test ORDER BY a LIMIT 1 OFFSET 1) t(a)
> UNION ALL
> SELECT a FROM
>   (SELECT a FROM
> (SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
> ORDER BY a OFFSET 1
>   ) t(a)
> 
> 2
> 4
> {noformat}
> Query 1
> {noformat}
> SELECT a FROM
>   (SELECT a FROM test ORDER BY a LIMIT 1 OFFSET 1) t(a)
>  Limit(offset=[1], fetch=[1]), id = 80
> Exchange(distribution=[single]), id = 79
>Sort(sort0=[$0], dir0=[ASC], offset=[1], fetch=[1]), id = 78
>  TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 50
> {noformat}
> Query 2
> {noformat}
> SELECT a FROM
>   (SELECT a FROM
> (SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
> ORDER BY a OFFSET 1
>   ) t(a)
>  Limit(offset=[1]), id = 201
>Limit(offset=[2], fetch=[3]), id = 200
>  Exchange(distribution=[single]), id = 199
>Sort(sort0=[$0], dir0=[ASC], offset=[2], fetch=[3]), id = 198
>  TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 168
> {noformat}
> Combine queries using UNION ALL
> {noformat}
> SELECT a FROM
>   (SELECT a FROM test ORDER BY a LIMIT 1 OFFSET 1) t(a)
> UNION ALL
> SELECT a FROM
>   (SELECT a FROM
> (SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
> ORDER BY a OFFSET 1
>   ) t(a)
> UnionAll(all=[true]), id = 403
>   Limit(offset=[1], fetch=[1]), id = 400
> Exchange(distribution=[single]), id = 399 # subtree is duplicated in 
> another part of a query
>   Sort(sort0=[$0], dir0=[ASC], offset=[1], fetch=[1]), id = 398 # 
> TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 345
>   Limit(offset=[1]), id = 402
> Limit(offset=[2], fetch=[3]), id = 401
>   Exchange(distribution=[single]), id = 399 # duplicate
> Sort(sort0=[$0], dir0=[ASC], offset=[1], fetch=[1]), id = 398
>   TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 345
> {noformat}
> When tables are different, results are correct.
> Reproducible in vanilla Calcite:
> {noformat}
>  EnumerableUnion(all=[true])
> >   EnumerableCalc(expr#0..1=[{inputs}], A=[$t0])
> > EnumerableLimit(offset=[1], fetch=[1])
> >   EnumerableSort(sort0=[$0], dir0=[ASC])
> > EnumerableTableScan(table=[[BLANK, TEST]])
> >   EnumerableCalc(expr#0..1=[{inputs}], A=[$t0])
> > EnumerableLimit(offset=[2], fetch=[3])
> >   EnumerableSort(sort0=[$0], dir0=[ASC])
> > EnumerableTableScan(table=[[BLANK, TEST]])
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] (IGNITE-22204) Sql. Sort operator cannot use offset parameter

2024-06-04 Thread Maksim Zhuravkov (Jira)


[ https://issues.apache.org/jira/browse/IGNITE-22204 ]


Maksim Zhuravkov deleted comment on IGNITE-22204:
---

was (Author: JIRAUSER298618):
The previous issue was moved to 
https://issues.apache.org/jira/browse/IGNITE-22392.



> Sql. Sort operator cannot use offset parameter
> --
>
> Key: IGNITE-22204
> URL: https://issues.apache.org/jira/browse/IGNITE-22204
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Critical
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
> assumes that Sort's input always returns data from a single source.
> This assumption is not correct for the cases, when a query splits into 
> multiple fragments.
> So transforming
> {noformat}
> Sort(ordering=ord, offset=o, fetch=f) 
> >Scan
> {noformat}
> into 
> {noformat}
> Limit (offset=o, fetch=f) 
> >Sort(ordering=ord, fetch=f + o)
> >>Scan
> {noformat}
> Is not a valid transformation when there is an Exchange operator between a 
> Limit and a Sort and should not be used. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[dspace-tech] Re: Dspace-CRIS 2023.02.04 Handle links do not work

2024-06-04 Thread Maksim Donchenko
Thanks for the answer, but in previous versions such as 2023.02.03 and 
earlier, everything worked without installing and running the Handle server.

Tuesday, June 4, 2024 at 12:40:06 UTC+3, Julio: 

> Hello, if I'm not mistaken, with the default handle it will create a 
> handle at the internal and metadata level, but the link will not work, 
> because handle.net has to be the one that redirects you to your server, 
> if you are not registered on handle.net, this last part is not going to 
> work for you.
>
> On Tuesday, June 4, 2024 at 8:57:51 UTC+2, Maksim Donchenko 
> wrote:
>
>> No, I have not registered at https://handle.net/. I am using the default 
>> handle.
>>
> >> Tuesday, June 4, 2024 at 09:48:28 UTC+3, Julio: 
>>
>>> Hello, have you registered at https://handle.net/? and completed the 
>>> entire registration process.
>>>
>>> Do you have the handle server started on your machine?
>>>
>>> [dspace_dir]/bin/start-handle-server
>>>
>>> Ports 2641 and 8000 have input and output access on your machine.
>>>
>>> Review all these points.
>>>
>>> Greetings
>>>
> >>> On Monday, June 3, 2024 at 16:41:43 UTC+2, Maksim Donchenko 
> >>> wrote:
>>>
>>>> Hello, everyone. I decided to install the new version of Dspace-CRIS 
>>>> 2023.02.04 and found that the links with Handle do not work, the server 
>>>> just says “No item found for the identifier”. Maybe someone has 
>>>> encountered 
>>>> something similar. Thank you in advance for your response.
>>>> Sincerely, Maksim.
>>>>
>>>

-- 
All messages to this mailing list should adhere to the Code of Conduct: 
https://www.lyrasis.org/about/Pages/Code-of-Conduct.aspx
--- 
You received this message because you are subscribed to the Google Groups 
"DSpace Technical Support" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dspace-tech+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dspace-tech/27b53c8c-2762-49d5-a6c8-7aa263268b91n%40googlegroups.com.


[jira] [Updated] (IGNITE-22403) Algorithm improvement for kafka topic partition distribution

2024-06-04 Thread Maksim Davydov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Davydov updated IGNITE-22403:

Priority: Major  (was: Minor)

> Algorithm improvement for kafka topic partition distribution
> 
>
> Key: IGNITE-22403
> URL: https://issues.apache.org/jira/browse/IGNITE-22403
> Project: Ignite
>  Issue Type: Improvement
>  Components: extensions
>    Reporter: Maksim Davydov
>Assignee: Maksim Davydov
>Priority: Major
>  Labels: IEP-59, ise
>
> The distribution of partitions from a Kafka topic over the requested threads in 
> AbstractKafkaToIgniteCdcStreamer (CDC Kafka to Ignite) is not uniform. 
> There might be cases with unequal load on the threads that read data from 
> Kafka, which might result in a bottleneck and slow down the whole Kafka-based 
> CDC process.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-22403) Algorithm improvement for kafka topic partition distribution

2024-06-04 Thread Maksim Davydov (Jira)
Maksim Davydov created IGNITE-22403:
---

 Summary: Algorithm improvement for kafka topic partition 
distribution
 Key: IGNITE-22403
 URL: https://issues.apache.org/jira/browse/IGNITE-22403
 Project: Ignite
  Issue Type: Improvement
  Components: extensions
Reporter: Maksim Davydov
Assignee: Maksim Davydov


The distribution of partitions from a Kafka topic over the requested threads in 
AbstractKafkaToIgniteCdcStreamer (CDC Kafka to Ignite) is not uniform. 

There might be cases with unequal load on the threads that read data from Kafka, 
which might result in a bottleneck and slow down the whole Kafka-based CDC 
process.
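For illustration only — this is not the AbstractKafkaToIgniteCdcStreamer code, and the class and method names below are hypothetical — a minimal sketch of a round-robin assignment that keeps the per-thread partition counts within one of each other:

{code:java}
import java.util.ArrayList;
import java.util.List;

/** Hypothetical helper: spreads Kafka partition ids evenly over N consumer threads. */
public final class UniformPartitionDistribution {
    /** Returns, for each thread index, the list of partition ids that thread should poll. */
    public static List<List<Integer>> assign(int partitionCount, int threadCount) {
        List<List<Integer>> perThread = new ArrayList<>(threadCount);
        for (int t = 0; t < threadCount; t++) {
            perThread.add(new ArrayList<>());
        }
        // Round-robin: the sizes of any two per-thread lists differ by at most one.
        for (int p = 0; p < partitionCount; p++) {
            perThread.get(p % threadCount).add(p);
        }
        return perThread;
    }

    public static void main(String[] args) {
        // 16 partitions over 3 threads -> sizes 6, 5, 5 instead of an uneven split.
        System.out.println(assign(16, 3));
    }
}
{code}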



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[dspace-tech] Re: Dspace-CRIS 2023.02.04 Handle links do not work

2024-06-04 Thread Maksim Donchenko
No, I have not registered at https://handle.net/. I am using the default 
handle.

Tuesday, June 4, 2024 at 09:48:28 UTC+3, Julio: 

> Hello, have you registered at https://handle.net/? and completed the 
> entire registration process.
>
> Do you have the handle server started on your machine?
>
> [dspace_dir]/bin/start-handle-server
>
> Ports 2641 and 8000 have input and output access on your machine.
>
> Review all these points.
>
> Greetings
>
> On Monday, June 3, 2024 at 16:41:43 UTC+2, Maksim Donchenko 
> wrote:
>
>> Hello, everyone. I decided to install the new version of Dspace-CRIS 
>> 2023.02.04 and found that the links with Handle do not work, the server 
>> just says “No item found for the identifier”. Maybe someone has encountered 
>> something similar. Thank you in advance for your response.
>> Sincerely, Maksim.
>>
>

-- 
All messages to this mailing list should adhere to the Code of Conduct: 
https://www.lyrasis.org/about/Pages/Code-of-Conduct.aspx
--- 
You received this message because you are subscribed to the Google Groups 
"DSpace Technical Support" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dspace-tech+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dspace-tech/835fb292-f6f4-4af4-94ae-a29beaed4587n%40googlegroups.com.


[jira] [Assigned] (IGNITE-18556) Sql. TypeSystem. Default implementation of getDefaultPrecision for FLOAT and DOUBLE returns the same value.

2024-06-04 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov reassigned IGNITE-18556:
-

Assignee: Maksim Zhuravkov

> Sql. TypeSystem. Default implementation of getDefaultPrecision for FLOAT and 
> DOUBLE returns the same value.
> ---
>
> Key: IGNITE-18556
> URL: https://issues.apache.org/jira/browse/IGNITE-18556
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 3.0.0-beta1
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Minor
>  Labels: calcite2-required, calcite3-required, ignite-3
> Fix For: 3.0.0-beta1
>
>
> Default implementation of TypeSystem::getDefaultPrecision, provided by 
> Calcite, returns the same value for FLOAT and DOUBLE types. Such behaviour 
> causes TypeFactory::leastRestrictiveType to return different results for 
> (FLOAT, DOUBLE) and (DOUBLE, FLOAT).
> We fixed getDefaultPrecision to return different values to resolve the 
> problem with leastRestrictiveType in IGNITE-18163.
> 1) Investigate how this change affects behaviour of other operators.
> 2) Choose the appropriate value for default precision in IgniteTypeSystem for 
> FLOAT and DOUBLE if necessary.
>  
>  
>  
>  
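For illustration only — the precision values below are placeholders, not a proposed fix — a minimal sketch of a Calcite RelDataTypeSystemImpl override that gives FLOAT and DOUBLE distinct default precisions, so the two orderings no longer tie on precision:

{code:java}
import org.apache.calcite.rel.type.RelDataTypeSystemImpl;
import org.apache.calcite.sql.type.SqlTypeName;

/** Illustrative type system: FLOAT and DOUBLE report different default precisions. */
public class DistinctPrecisionTypeSystem extends RelDataTypeSystemImpl {
    @Override
    public int getDefaultPrecision(SqlTypeName typeName) {
        switch (typeName) {
            case REAL:
            case FLOAT:
                return 7;   // placeholder value, roughly binary32 decimal digits
            case DOUBLE:
                return 15;  // placeholder value, roughly binary64 decimal digits
            default:
                return super.getDefaultPrecision(typeName);
        }
    }
}
{code}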



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-21967) Extend test coverage for SQL E091-06(Set functions. ALL quantifier)

2024-06-04 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov reassigned IGNITE-21967:
-

Assignee: Maksim Zhuravkov

> Extend test coverage for SQL E091-06(Set functions. ALL quantifier)
> ---
>
> Key: IGNITE-21967
> URL: https://issues.apache.org/jira/browse/IGNITE-21967
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Iurii Gerzhedovich
>Assignee: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
>
> Test coverage for SQL E091-06(Set functions. ALL quantifier) is poor.
> Let's increase the test coverage. 
>  
> ref - modules/runner/src/integrationTest/sql/sqlite/aggregates/agg4.test_slow
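For illustration, a minimal sqllogictest-style sketch (table name and values are made up) of the ALL vs DISTINCT distinction such coverage could exercise:

{noformat}
statement ok
CREATE TABLE t (a INTEGER);

statement ok
INSERT INTO t VALUES (1), (2), (2), (NULL);

# ALL is the default set-function quantifier: duplicates are kept, NULLs are ignored
query I
SELECT SUM(ALL a) FROM t
----
5

query I
SELECT SUM(DISTINCT a) FROM t
----
3
{noformat}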



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-21975) Extend test coverage for SQL F302-01(INTERSECT table operator. INTERSECT DISTINCT table operator)

2024-06-04 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov reassigned IGNITE-21975:
-

Assignee: Maksim Zhuravkov

> Extend test coverage for SQL F302-01(INTERSECT table operator. INTERSECT 
> DISTINCT table operator)
> -
>
> Key: IGNITE-21975
> URL: https://issues.apache.org/jira/browse/IGNITE-21975
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Iurii Gerzhedovich
>Assignee: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
>
> Test coverage for SQL F302-01(INTERSECT table operator. INTERSECT DISTINCT 
> table operator) is poor.
> Let's increase the test coverage. 
> ref - test/sql/subquery/scalar/test_complex_correlated_subquery.test
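For illustration, a minimal sqllogictest-style sketch (tables and values are made up) showing that INTERSECT DISTINCT removes duplicates from the result:

{noformat}
statement ok
CREATE TABLE t1 (a INTEGER);

statement ok
CREATE TABLE t2 (a INTEGER);

statement ok
INSERT INTO t1 VALUES (1), (1), (2), (3);

statement ok
INSERT INTO t2 VALUES (1), (1), (2);

# INTERSECT and INTERSECT DISTINCT are equivalent; duplicate rows appear only once
query I rowsort
SELECT a FROM t1 INTERSECT DISTINCT SELECT a FROM t2
----
1
2
{noformat}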



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22204) Sql. Sort operator cannot use offset parameter

2024-06-03 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22204:
--
Description: 
Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
assumes that Sort's input always returns data from a single source.
This assumption is not correct for the cases, when a query splits into multiple 
fragments.
So transforming

{noformat}
Sort(ordering=ord, offset=o, fetch=f) 
>Scan
{noformat}

into 

{noformat}
Limit (offset=o, fetch=f) 
>Sort(ordering=ord, fetch=f + o)
>>Scan
{noformat}

Is not a valid transformation when there is an Exchange operator between a 
Limit and a Sort and should not be used. 




  was:
Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
assumes that Sort's input always returns data from a single source.
This assumption is not correct for the cases, when a query splits into multiple 
fragments.
So transforming

{noformat}
Sort(ordering=ord, offset=o, fetch=f) 
>Scan
{noformat}

into 

{noformat}
Limit (offset=o, fetch=f) 
>Sort(ordering=ord, fetch=f + o)
>>Scan
{noformat}

Is not a valid transformation when there is an exchange operator between Limit 
and parameterised Sort and should not be used. 





> Sql. Sort operator cannot use offset parameter
> --
>
> Key: IGNITE-22204
> URL: https://issues.apache.org/jira/browse/IGNITE-22204
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Critical
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
> assumes that Sort's input always returns data from a single source.
> This assumption is not correct for the cases, when a query splits into 
> multiple fragments.
> So transforming
> {noformat}
> Sort(ordering=ord, offset=o, fetch=f) 
> >Scan
> {noformat}
> into 
> {noformat}
> Limit (offset=o, fetch=f) 
> >Sort(ordering=ord, fetch=f + o)
> >>Scan
> {noformat}
> Is not a valid transformation when there is an Exchange operator between a 
> Limit and a Sort and should not be used. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22204) Sql. Sort operator cannot use offset parameter

2024-06-03 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22204:
--
Description: 
Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
assumes that Sort's input always returns data from a single source.
This assumption is not correct for the cases, when a query splits into multiple 
fragments.
So transforming

{noformat}
Sort(ordering=ord, offset=o, fetch=f) 
>Scan
{noformat}

into 

{noformat}
Limit (offset=o, fetch=f) 
>Sort(ordering=ord, fetch=f + o)
>>Scan
{noformat}

Is not a valid transformation when there is an exchange operator between Limit 
and parameterised Sort and should not be used. 




  was:
Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
assumes that Sort's input always returns data from a single source.
This assumption is not correct for the cases, when a query splits into multiple 
fragments.
So transforming

{noformat}
Sort(ordering=ord, offset=o, fetch=f) 
>Scan
{noformat}

into 

{noformat}
Limit (offset=o, fetch=f) 
>Sort(ordering=ord, fetch=f + o)
>>Scan
{noformat}

Is not a valid transformation if there is an exchange operator between Limit 
and parameterised Sort and should not be used. 





> Sql. Sort operator cannot use offset parameter
> --
>
> Key: IGNITE-22204
> URL: https://issues.apache.org/jira/browse/IGNITE-22204
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Critical
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
> assumes that Sort's input always returns data from a single source.
> This assumption is not correct for the cases, when a query splits into 
> multiple fragments.
> So transforming
> {noformat}
> Sort(ordering=ord, offset=o, fetch=f) 
> >Scan
> {noformat}
> into 
> {noformat}
> Limit (offset=o, fetch=f) 
> >Sort(ordering=ord, fetch=f + o)
> >>Scan
> {noformat}
> Is not a valid transformation when there is an exchange operator between 
> Limit and parameterised Sort and should not be used. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22204) Sql. Sort operator cannot use offset parameter

2024-06-03 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22204:
--
Description: 
Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
assumes that Sort's input always returns data from a single source.
This assumption is not correct for the cases, when a query splits into multiple 
fragments.
So transforming

{noformat}
Sort(ordering=ord, offset=o, fetch=f) 
>Scan
{noformat}

into 

{noformat}
Limit (offset=o, fetch=f) 
>Sort(ordering=ord, fetch=f + o)
>>Scan
{noformat}

Is not a valid transformation if there is an exchange operator between Limit 
and parameterised Sort and should not be used. 




  was:
Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
assumes that Sort's input always returns data from a single source.
This assumption is not correct for the cases, when a query splits into multiple 
fragments.
So transforming

{noformat}
Sort(ordering=ord, offset=o, fetch=f) 
>Scan
{noformat}

into 

{noformat}
Limit (offset=o, fetch=f) 
>Sort(ordering=ord, fetch=f + o)
>>Scan
{noformat}

Is not a valid transformation and should not be used. 





> Sql. Sort operator cannot use offset parameter
> --
>
> Key: IGNITE-22204
> URL: https://issues.apache.org/jira/browse/IGNITE-22204
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Critical
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
> assumes that Sort's input always returns data from a single source.
> This assumption is not correct for the cases, when a query splits into 
> multiple fragments.
> So transforming
> {noformat}
> Sort(ordering=ord, offset=o, fetch=f) 
> >Scan
> {noformat}
> into 
> {noformat}
> Limit (offset=o, fetch=f) 
> >Sort(ordering=ord, fetch=f + o)
> >>Scan
> {noformat}
> Is not a valid transformation if there is an exchange operator between Limit 
> and parameterised Sort and should not be used. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22204) Sql. Sort operator cannot use offset parameter

2024-06-03 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22204:
--
Description: 
Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
assumes that Sort's input always returns data from a single source.
This assumption is not correct for the cases, when a query splits into multiple 
fragments.
So transforming

{noformat}
Sort(ordering=ord, offset=o, fetch=f) 
>Scan
{noformat}

into 

{noformat}
Limit (offset=o, fetch=f) 
>Sort(ordering=ord, fetch=f + o)
>>Scan
{noformat}

Is not a valid transformation and should not be used. 




  was:
Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
assumes that Sort's input always returns data from a single source.
This assumption is not correct for the cases, when a query splits into multiple 
fragments.
So transforming

{noformat}
Sort(ordering=ord, offset=o, fetch=f) 
>Scan
{noformat}

into 

{noformat}
Limit (offset=o, fetch=f) 
>Sort(ordering=ord, fetch=f + o)
>>Scan
{noformat}

Is not a valid transformation and should not be used.




> Sql. Sort operator cannot use offset parameter
> --
>
> Key: IGNITE-22204
> URL: https://issues.apache.org/jira/browse/IGNITE-22204
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Critical
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
> assumes that Sort's input always returns data from a single source.
> This assumption is not correct for the cases, when a query splits into 
> multiple fragments.
> So transforming
> {noformat}
> Sort(ordering=ord, offset=o, fetch=f) 
> >Scan
> {noformat}
> into 
> {noformat}
> Limit (offset=o, fetch=f) 
> >Sort(ordering=ord, fetch=f + o)
> >>Scan
> {noformat}
> Is not a valid transformation and should not be used. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22392) Sql. Set operation. Incorrect query transformation for a query with limit / offset that uses the same table (When RemoveSortInSubQuery is enabled)

2024-06-03 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22392:
--
Description: 
A combination of LIMIT / OFFSET and a set operator results in an incorrect 
transformation of the plan tree. This issue is caused by incorrect handling of 
the `RemoveSortInSubQuery` flag inside SqlToRelConverter. ATM this issue is 
mitigated by disabling that flag. 

{noformat}
statement ok
CREATE TABLE test (a INTEGER);

statement ok
INSERT INTO test VALUES (1), (2), (3), (4);

# query 1
query I rowsort
SELECT a FROM
  (SELECT a FROM test ORDER BY a LIMIT 1 OFFSET 1) t(a)

2

# query 2
query I rowsort
SELECT a FROM
  (SELECT a FROM
(SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
ORDER BY a OFFSET 1
  ) t(a)

4

# combined query should return 2, 4
# but it returns 2
query I rowsort
SELECT a FROM
  (SELECT a FROM test ORDER BY a LIMIT 1 OFFSET 1) t(a)
UNION ALL
SELECT a FROM
  (SELECT a FROM
(SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
ORDER BY a OFFSET 1
  ) t(a)

2
4

{noformat}


Query 1
{noformat}
SELECT a FROM
  (SELECT a FROM test ORDER BY a LIMIT 1 OFFSET 1) t(a)

 Limit(offset=[1], fetch=[1]), id = 80
Exchange(distribution=[single]), id = 79
   Sort(sort0=[$0], dir0=[ASC], offset=[1], fetch=[1]), id = 78
 TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 50
{noformat}

Query 2

{noformat}
SELECT a FROM
  (SELECT a FROM
(SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
ORDER BY a OFFSET 1
  ) t(a)

 Limit(offset=[1]), id = 201
   Limit(offset=[2], fetch=[3]), id = 200
 Exchange(distribution=[single]), id = 199
   Sort(sort0=[$0], dir0=[ASC], offset=[2], fetch=[3]), id = 198
 TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 168
{noformat}

Combine queries using UNION ALL

{noformat}
SELECT a FROM
  (SELECT a FROM test ORDER BY a LIMIT 1 OFFSET 1) t(a)
UNION ALL
SELECT a FROM
  (SELECT a FROM
(SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
ORDER BY a OFFSET 1
  ) t(a)

UnionAll(all=[true]), id = 403
  Limit(offset=[1], fetch=[1]), id = 400
Exchange(distribution=[single]), id = 399 # subtree is duplicated in 
another part of a query
  Sort(sort0=[$0], dir0=[ASC], offset=[1], fetch=[1]), id = 398 # 
TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 345
  Limit(offset=[1]), id = 402
Limit(offset=[2], fetch=[3]), id = 401
  Exchange(distribution=[single]), id = 399 # duplicate
Sort(sort0=[$0], dir0=[ASC], offset=[1], fetch=[1]), id = 398
  TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 345
{noformat}


When tables are different, results are correct.


Reproducible in vanilla Calcite:

{noformat}
 EnumerableUnion(all=[true])
>   EnumerableCalc(expr#0..1=[{inputs}], A=[$t0])
> EnumerableLimit(offset=[1], fetch=[1])
>   EnumerableSort(sort0=[$0], dir0=[ASC])
> EnumerableTableScan(table=[[BLANK, TEST]])
>   EnumerableCalc(expr#0..1=[{inputs}], A=[$t0])
> EnumerableLimit(offset=[2], fetch=[3])
>   EnumerableSort(sort0=[$0], dir0=[ASC])
> EnumerableTableScan(table=[[BLANK, TEST]])
{noformat}




  was:
A combination of LIMIT / OFFSET and a set operator results in an incorrect 
transformation of the plan tree. This issue is caused by incorrect handling 
of the `RemoveSortInSubQuery` flag inside SqlToRelConverter internals. 
ATM this issue is mitigated by disabling that flag. 

{noformat}
statement ok
CREATE TABLE test (a INTEGER);

statement ok
INSERT INTO test VALUES (1), (2), (3), (4);

# query 1
query I rowsort
SELECT a FROM
  (SELECT a FROM test ORDER BY a LIMIT 1 OFFSET 1) t(a)

2

# query 2
query I rowsort
SELECT a FROM
  (SELECT a FROM
(SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
ORDER BY a OFFSET 1
  ) t(a)

4

# combined query should return 2, 4
# but it returns 2
query I rowsort
SELECT a FROM
  (SELECT a FROM test ORDER BY a LIMIT 1 OFFSET 1) t(a)
UNION ALL
SELECT a FROM
  (SELECT a FROM
(SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
ORDER BY a OFFSET 1
  ) t(a)

2
4

{noformat}


Query 1
{noformat}
SELECT a FROM
  (SELECT a FROM test ORDER BY a LIMIT 1 OFFSET 1) t(a)

 Limit(offset=[1], fetch=[1]), id = 80
Exchange(distribution=[single]), id = 79
   Sort(sort0=[$0], dir0=[ASC], offset=[1], fetch=[1]), id = 78
 TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 50
{noformat}

Query 2

{noformat}
SELECT a FROM
  (SELECT a FROM
(SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
ORDER BY a OFFSET 1
  ) t(a)

 Limit(offset=[1]), id = 201
   Limit(offset=[2], fetch=[3]), id = 200
 Exchange(distribution=[single]), id = 199
   Sort(sort0=[$0], dir0=[ASC], offset=[2], fetch=[3]), id = 198
 TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{

[jira] [Comment Edited] (IGNITE-22204) Sql. Sort operator cannot use offset parameter

2024-06-03 Thread Maksim Zhuravkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-22204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17851564#comment-17851564
 ] 

Maksim Zhuravkov edited comment on IGNITE-22204 at 6/3/24 4:29 PM:
---

The previous issue was moved to 
https://issues.apache.org/jira/browse/IGNITE-22392.




was (Author: JIRAUSER298618):
The previous issue is moved to 
https://issues.apache.org/jira/browse/IGNITE-22392.



> Sql. Sort operator cannot use offset parameter
> --
>
> Key: IGNITE-22204
> URL: https://issues.apache.org/jira/browse/IGNITE-22204
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Critical
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
> assumes that Sort's input always returns data from a single source.
> This assumption is not correct for the cases, when a query splits into 
> multiple fragments.
> So transforming
> {noformat}
> Sort(ordering=ord, offset=o, fetch=f) 
> >Scan
> {noformat}
> into 
> {noformat}
> Limit (offset=o, fetch=f) 
> >Sort(ordering=ord, fetch=f + o)
> >>Scan
> {noformat}
> Is not a valid transformation and should not be used.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22204) Sql. Sort operator cannot use offset parameter

2024-06-03 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22204:
--
Description: 
Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
assumes that Sort's input always returns data from a single source.
This assumption is not correct for the cases, when a query splits into multiple 
fragments.
So transforming

{noformat}
Sort(ordering=ord, offset=o, fetch=f) 
>Scan
{noformat}

into 

{noformat}
Limit (offset=o, fetch=f) 
>Sort(ordering=ord, fetch=f + o)
>>Scan
{noformat}

Is not a valid transformation and should not be used.



  was:
Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
assumes that Sort's input always returns data from a single source.
This assumption is not correct for the cases, when a query splits into multiple 
fragments.
So transforming

{noformat}
Sort(ordering=ord, offset=o, fetch=f) 
>Scan
{noformat}

into 

{noformat}
Limit (offset=o, fetch=f) 
>Sort(ordering=ord, offset=o, fetch=f)
>>Scan
{noformat}

Is not a valid transformation and should not be used.




> Sql. Sort operator cannot use offset parameter
> --
>
> Key: IGNITE-22204
> URL: https://issues.apache.org/jira/browse/IGNITE-22204
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Critical
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
> assumes that Sort's input always returns data from a single source.
> This assumption is not correct for the cases, when a query splits into 
> multiple fragments.
> So transforming
> {noformat}
> Sort(ordering=ord, offset=o, fetch=f) 
> >Scan
> {noformat}
> into 
> {noformat}
> Limit (offset=o, fetch=f) 
> >Sort(ordering=ord, fetch=f + o)
> >>Scan
> {noformat}
> Is not a valid transformation and should not be used.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22204) Sql. Sort operator cannot use offset parameter

2024-06-03 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22204:
--
Description: 
Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
assumes that Sort's input always returns data from a single source.
This assumption is not correct for the cases, when a query splits into multiple 
fragments.
So transforming

{noformat}
Sort(ordering=ord, offset=o, fetch=f) 
>Scan
{noformat}

into 

{noformat}
Limit (offset=o, fetch=f) 
>Sort(ordering=ord, offset=o, fetch=f)
>>Scan
{noformat}

Is not a valid transformation and should not be used.



  was:
Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
assumes that Sort's input always returns data from a single source.
This assumption is not correct for the cases, when a query splits into multiple 
fragments.
So

{noformat}
Sort(ordering=ord, offset=o, fetch=f) 
>Scan
{noformat}

into 

{noformat}
Limit (offset=o, fetch=f) 
>Sort(ordering=ord, offset=o, fetch=f)
>>Scan
{noformat}

Is not applicable.




> Sql. Sort operator cannot use offset parameter
> --
>
> Key: IGNITE-22204
> URL: https://issues.apache.org/jira/browse/IGNITE-22204
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Critical
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
> assumes that Sort's input always returns data from a single source.
> This assumption is not correct for the cases, when a query splits into 
> multiple fragments.
> So transforming
> {noformat}
> Sort(ordering=ord, offset=o, fetch=f) 
> >Scan
> {noformat}
> into 
> {noformat}
> Limit (offset=o, fetch=f) 
> >Sort(ordering=ord, offset=o, fetch=f)
> >>Scan
> {noformat}
> Is not a valid transformation and should not be used.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22204) Sql. Sort operator cannot use offset parameter

2024-06-03 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22204:
--
Description: 
Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
assumes that Sort's input always returns data from a single source.
This assumption is not correct for the cases, when a query splits into multiple 
fragments.
So

{noformat}
Sort(ordering=ord, offset=o, fetch=f) 
>Scan
{noformat}

into 

{noformat}
Limit (offset=o, fetch=f) 
>Sort(ordering=ord, offset=o, fetch=f)
>>Scan
{noformat}

Is not applicable.



  was:
Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
assumes that Sort's input always returns data from a single source.
This assumption is not correct for most cases, when query splits into multiple 
fragments.
So

{noformat}
Sort(ordering=ord, offset=o, fetch=f) 
>Scan
{noformat}

into 

{noformat}
Limit (offset=o, fetch=f) 
>Sort(ordering=ord, offset=o, fetch=f)
>>Scan
{noformat}

Is not applicable.




> Sql. Sort operator cannot use offset parameter
> --
>
> Key: IGNITE-22204
> URL: https://issues.apache.org/jira/browse/IGNITE-22204
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Critical
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
> assumes that Sort's input always returns data from a single source.
> This assumption is not correct for the cases, when a query splits into 
> multiple fragments.
> So
> {noformat}
> Sort(ordering=ord, offset=o, fetch=f) 
> >Scan
> {noformat}
> into 
> {noformat}
> Limit (offset=o, fetch=f) 
> >Sort(ordering=ord, offset=o, fetch=f)
> >>Scan
> {noformat}
> Is not applicable.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22204) Sql. Sort operator cannot use offset parameter

2024-06-03 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22204:
--
Description: 
Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
assumes that Sort's input always returns data from a single source.
That assumption is not correct for most cases, when query splits into multiple 
fragments.
So

{noformat}
Sort(ordering=ord, offset=o, fetch=f) 
>Scan
{noformat}

into 

{noformat}
Limit (offset=o, fetch=f) 
>Sort(ordering=ord, offset=o, fetch=f)
>>Scan
{noformat}

Is not applicable.



  was:
Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
assumes that Sort's input always returns data from a single source 
Which is not correct, because fetch parameter is applied incorrectly when query 
splits into multiple fragments.
So

{noformat}
Sort(ordering=ord, offset=o, fetch=f) 
>Scan
{noformat}

into 

{noformat}
Limit (offset=o, fetch=f) 
>Sort(ordering=ord, offset=o, fetch=f)
>>Scan
{noformat}

Is not applicable.




> Sql. Sort operator cannot use offset parameter
> --
>
> Key: IGNITE-22204
> URL: https://issues.apache.org/jira/browse/IGNITE-22204
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Critical
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
> assumes that Sort's input always returns data from a single source.
> That assumption is not correct for most cases, when query splits into 
> multiple fragments.
> So
> {noformat}
> Sort(ordering=ord, offset=o, fetch=f) 
> >Scan
> {noformat}
> into 
> {noformat}
> Limit (offset=o, fetch=f) 
> >Sort(ordering=ord, offset=o, fetch=f)
> >>Scan
> {noformat}
> Is not applicable.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22204) Sql. Sort operator cannot use offset parameter

2024-06-03 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22204:
--
Description: 
Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
assumes that Sort's input always returns data from a single source.
This assumption is not correct for most cases, when query splits into multiple 
fragments.
So

{noformat}
Sort(ordering=ord, offset=o, fetch=f) 
>Scan
{noformat}

into 

{noformat}
Limit (offset=o, fetch=f) 
>Sort(ordering=ord, offset=o, fetch=f)
>>Scan
{noformat}

is not applicable.



  was:
Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
assumes that Sort's input always returns data from a single source.
That assumption is not correct for most cases, when query splits into multiple 
fragments.
So

{noformat}
Sort(ordering=ord, offset=o, fetch=f) 
>Scan
{noformat}

into 

{noformat}
Limit (offset=o, fetch=f) 
>Sort(ordering=ord, offset=o, fetch=f)
>>Scan
{noformat}

Is not applicable.




> Sql. Sort operator cannot use offset parameter
> --
>
> Key: IGNITE-22204
> URL: https://issues.apache.org/jira/browse/IGNITE-22204
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Critical
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
> assumes that Sort's input always returns data from a single source.
> This assumption does not hold in most cases where the query is split into 
> multiple fragments.
> So the transformation of
> {noformat}
> Sort(ordering=ord, offset=o, fetch=f) 
> >Scan
> {noformat}
> into 
> {noformat}
> Limit (offset=o, fetch=f) 
> >Sort(ordering=ord, offset=o, fetch=f)
> >>Scan
> {noformat}
> is not applicable.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22204) Sql. Sort operator cannot use offset parameter

2024-06-03 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22204:
--
Description: 
Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
assumes that its input always returns data from a single source. 
This is not correct, because the fetch parameter is applied incorrectly when 
the query is split into multiple fragments.
So the transformation of

{noformat}
Sort(ordering=ord, offset=o, fetch=f) 
>Scan
{noformat}

into 

{noformat}
Limit (offset=o, fetch=f) 
>Sort(ordering=ord, offset=o, fetch=f)
>>Scan
{noformat}

is not applicable.



  was:
Patch IGNITE-16013 for AI-2 cannot be applied for AI-3, because that patch 
assumes that its inputs always returns data from a single source 
Which is not correct, because fetch parameter is applied incorrectly when query 
splits into multiple fragments.
So

{noformat}
Sort(ordering=ord, offset=o, fetch=f) 
>Scan
{noformat}

into 

{noformat}
Limit (offset=o, fetch=f) 
>Sort(ordering=ord, offset=o, fetch=f)
>>Scan
{noformat}

Is not applicable.




> Sql. Sort operator cannot use offset parameter
> --
>
> Key: IGNITE-22204
> URL: https://issues.apache.org/jira/browse/IGNITE-22204
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Critical
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
> assumes that its input always returns data from a single source. 
> This is not correct, because the fetch parameter is applied incorrectly 
> when the query is split into multiple fragments.
> So the transformation of
> {noformat}
> Sort(ordering=ord, offset=o, fetch=f) 
> >Scan
> {noformat}
> into 
> {noformat}
> Limit (offset=o, fetch=f) 
> >Sort(ordering=ord, offset=o, fetch=f)
> >>Scan
> {noformat}
> is not applicable.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22204) Sql. Sort operator cannot use offset parameter

2024-06-03 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22204:
--
Description: 
Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
assumes that Sort's input always returns data from a single source. 
This is not correct, because the fetch parameter is applied incorrectly when 
the query is split into multiple fragments.
So the transformation of

{noformat}
Sort(ordering=ord, offset=o, fetch=f) 
>Scan
{noformat}

into 

{noformat}
Limit (offset=o, fetch=f) 
>Sort(ordering=ord, offset=o, fetch=f)
>>Scan
{noformat}

is not applicable.



  was:
Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
assumes that its inputs always returns data from a single source 
Which is not correct, because fetch parameter is applied incorrectly when query 
splits into multiple fragments.
So

{noformat}
Sort(ordering=ord, offset=o, fetch=f) 
>Scan
{noformat}

into 

{noformat}
Limit (offset=o, fetch=f) 
>Sort(ordering=ord, offset=o, fetch=f)
>>Scan
{noformat}

Is not applicable.




> Sql. Sort operator cannot use offset parameter
> --
>
> Key: IGNITE-22204
> URL: https://issues.apache.org/jira/browse/IGNITE-22204
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Critical
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
> assumes that Sort's input always returns data from a single source. 
> This is not correct, because the fetch parameter is applied incorrectly 
> when the query is split into multiple fragments.
> So the transformation of
> {noformat}
> Sort(ordering=ord, offset=o, fetch=f) 
> >Scan
> {noformat}
> into 
> {noformat}
> Limit (offset=o, fetch=f) 
> >Sort(ordering=ord, offset=o, fetch=f)
> >>Scan
> {noformat}
> is not applicable.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22204) Sql. Sort operator cannot use offset parameter

2024-06-03 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22204:
--
Summary: Sql. Sort operator cannot use offset parameter  (was: Sql. Sort 
operator can not use offset parameter)

> Sql. Sort operator cannot use offset parameter
> --
>
> Key: IGNITE-22204
> URL: https://issues.apache.org/jira/browse/IGNITE-22204
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Critical
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
> assumes that its input always returns data from a single source. 
> This is not correct, because the fetch parameter is applied incorrectly 
> when the query is split into multiple fragments.
> So the transformation of
> {noformat}
> Sort(ordering=ord, offset=o, fetch=f) 
> >Scan
> {noformat}
> into 
> {noformat}
> Limit (offset=o, fetch=f) 
> >Sort(ordering=ord, offset=o, fetch=f)
> >>Scan
> {noformat}
> is not applicable.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22204) Sql. Sort operator cannot use offset parameter

2024-06-03 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22204:
--
Description: 
Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
assumes that its input always returns data from a single source. 
This is not correct, because the fetch parameter is applied incorrectly when 
the query is split into multiple fragments.
So the transformation of

{noformat}
Sort(ordering=ord, offset=o, fetch=f) 
>Scan
{noformat}

into 

{noformat}
Limit (offset=o, fetch=f) 
>Sort(ordering=ord, offset=o, fetch=f)
>>Scan
{noformat}

is not applicable.



  was:
Patch IGNITE-16013 for AI-2 can not be applied for AI-3, because that patch 
assumes that its inputs always returns data from a single source 
Which is not correct, because fetch parameter is applied incorrectly when query 
splits into multiple fragments.
So

{noformat}
Sort(ordering=ord, offset=o, fetch=f) 
>Scan
{noformat}

into 

{noformat}
Limit (offset=o, fetch=f) 
>Sort(ordering=ord, offset=o, fetch=f)
>>Scan
{noformat}

Is not applicable.




> Sql. Sort operator cannot use offset parameter
> --
>
> Key: IGNITE-22204
> URL: https://issues.apache.org/jira/browse/IGNITE-22204
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Critical
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
> assumes that its input always returns data from a single source. 
> This is not correct, because the fetch parameter is applied incorrectly 
> when the query is split into multiple fragments.
> So the transformation of
> {noformat}
> Sort(ordering=ord, offset=o, fetch=f) 
> >Scan
> {noformat}
> into 
> {noformat}
> Limit (offset=o, fetch=f) 
> >Sort(ordering=ord, offset=o, fetch=f)
> >>Scan
> {noformat}
> is not applicable.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22204) Sql. Sort operator can not use offset parameter

2024-06-03 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22204:
--
Description: 
Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
assumes that its input always returns data from a single source. 
This is not correct, because the fetch parameter is applied incorrectly when 
the query is split into multiple fragments.
So the transformation of

{noformat}
Sort(ordering=ord, offset=o, fetch=f) 
>Scan
{noformat}

into 

{noformat}
Limit (offset=o, fetch=f) 
>Sort(ordering=ord, offset=o, fetch=f)
>>Scan
{noformat}

is not applicable.



  was:
Patch IGNITE-16013 for AI-2 can not be applied for AI-3, because that patch 
either (a) assumes that its inputs always returns 

{noformat}
Sort(ordering=ord, offset=o, fetch=f) 
>Scan
{noformat}

into 

{noformat}
Limit (offset=o, fetch=f) 
>Sort(ordering=ord, offset=o, fetch=f)
>>Scan
{noformat}

Which is not correct, because fetch parameter is applied incorrectly when query 
splits into multiple fragments.





> Sql. Sort operator can not use offset parameter
> ---
>
> Key: IGNITE-22204
> URL: https://issues.apache.org/jira/browse/IGNITE-22204
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Critical
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Patch IGNITE-16013 for AI-2 cannot be applied to AI-3, because that patch 
> assumes that its input always returns data from a single source. 
> This is not correct, because the fetch parameter is applied incorrectly 
> when the query is split into multiple fragments.
> So the transformation of
> {noformat}
> Sort(ordering=ord, offset=o, fetch=f) 
> >Scan
> {noformat}
> into 
> {noformat}
> Limit (offset=o, fetch=f) 
> >Sort(ordering=ord, offset=o, fetch=f)
> >>Scan
> {noformat}
> is not applicable.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22204) Sql. Sort operator can not use offset parameter

2024-06-03 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22204:
--
Description: 
Patch IGNITE-16013 for AI-2 can not be applied for AI-3, because that patch 
either (a) assumes that its inputs always returns 

{noformat}
Sort(ordering=ord, offset=o, fetch=f) 
>Scan
{noformat}

into 

{noformat}
Limit (offset=o, fetch=f) 
>Sort(ordering=ord, offset=o, fetch=f)
>>Scan
{noformat}

Which is not correct, because fetch parameter is applied incorrectly when query 
splits into multiple fragments.




  was:
Patch IGNITE-16013 for AI-2 can not be applied for AI-3, because that patch 
either (a) assumes that its inputs always returns 

{noformat}
Sort(ordering=ord, offset=o, fetch=f) 
>Scan
{noformat}

into 

{noformat}
Limit (offset=o, fetch=f) 
>Sort(ordering=ord, offset=o, fetch=f)
>>Scan
{noformat}

Which is not correct, because fetch and offset are applied twice.





> Sql. Sort operator can not use offset parameter
> ---
>
> Key: IGNITE-22204
> URL: https://issues.apache.org/jira/browse/IGNITE-22204
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Critical
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Patch IGNITE-16013 for AI-2 can not be applied for AI-3, because that patch 
> either (a) assumes that its inputs always returns 
> {noformat}
> Sort(ordering=ord, offset=o, fetch=f) 
> >Scan
> {noformat}
> into 
> {noformat}
> Limit (offset=o, fetch=f) 
> >Sort(ordering=ord, offset=o, fetch=f)
> >>Scan
> {noformat}
> Which is not correct, because fetch parameter is applied incorrectly when 
> query splits into multiple fragments.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22204) Sql. Sort operator can not use offset parameter

2024-06-03 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22204:
--
Summary: Sql. Sort operator can not use offset parameter  (was: Sql. 
Incorrect Limit / Sort transformation)

> Sql. Sort operator can not use offset parameter
> ---
>
> Key: IGNITE-22204
> URL: https://issues.apache.org/jira/browse/IGNITE-22204
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Critical
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Patch IGNITE-16013 for AI-2 can not be applied for AI-3, because that patch 
> either (a) assumes that its inputs always returns 
> {noformat}
> Sort(ordering=ord, offset=o, fetch=f) 
> >Scan
> {noformat}
> into 
> {noformat}
> Limit (offset=o, fetch=f) 
> >Sort(ordering=ord, offset=o, fetch=f)
> >>Scan
> {noformat}
> Which is not correct, because fetch and offset are applied twice.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[dspace-tech] Dspace-CRIS 2023.02.04 Handle links do not work

2024-06-03 Thread Maksim Donchenko
Hello, everyone. I decided to install the new version of Dspace-CRIS 
2023.02.04 and found that the links with Handle do not work; the server 
just says “No item found for the identifier”. Maybe someone has encountered 
something similar. Thank you in advance for your response.
Sincerely, Maksim.

-- 
All messages to this mailing list should adhere to the Code of Conduct: 
https://www.lyrasis.org/about/Pages/Code-of-Conduct.aspx
--- 
You received this message because you are subscribed to the Google Groups 
"DSpace Technical Support" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dspace-tech+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dspace-tech/50247a31-f168-4a22-9b9f-85b2e5e73917n%40googlegroups.com.


[jira] [Updated] (IGNITE-22204) Sql. Incorrect Limit / Sort transformation

2024-06-03 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22204:
--
Description: 
Patch IGNITE-16013 for AI-2 can not be applied for AI-3, because that patch 
either (a) assumes that its inputs always returns 

{noformat}
Sort(ordering=ord, offset=o, fetch=f) 
>Scan
{noformat}

into 

{noformat}
Limit (offset=o, fetch=f) 
>Sort(ordering=ord, offset=o, fetch=f)
>>Scan
{noformat}

Which is not correct, because fetch and offset are applied twice.




  was:
IGNITE-16013  incorrectly handles Sort(offset, fetch) transformation. It 
transforms: 

{noformat}
Sort(ordering=ord, offset=o, fetch=f) 
>Scan
{noformat}

into 

{noformat}
Limit (offset=o, fetch=f) 
>Sort(ordering=ord, offset=o, fetch=f)
>>Scan
{noformat}

Which is not correct, because fetch and offset are applied twice.



> Sql. Incorrect Limit / Sort transformation
> --
>
> Key: IGNITE-22204
> URL: https://issues.apache.org/jira/browse/IGNITE-22204
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Critical
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Patch IGNITE-16013 for AI-2 can not be applied for AI-3, because that patch 
> either (a) assumes that its inputs always returns 
> {noformat}
> Sort(ordering=ord, offset=o, fetch=f) 
> >Scan
> {noformat}
> into 
> {noformat}
> Limit (offset=o, fetch=f) 
> >Sort(ordering=ord, offset=o, fetch=f)
> >>Scan
> {noformat}
> Which is not correct, because fetch and offset are applied twice.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22204) Sql. Incorrect Limit / Sort transformation

2024-06-03 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22204:
--
Description: 
IGNITE-16013  incorrectly handles Sort(offset, fetch) transformation. It 
transforms: 

{noformat}
Sort(ordering=ord, offset=o, fetch=f) 
..Scan
{noformat}

into 

{noformat}
Limit (offset=o, fetch=f) 
..Sort(ordering=ord, offset=o, fetch=f)
Scan
{noformat}

This is not correct, because fetch and offset are applied twice.
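
A small worked example may help (a sketch over hypothetical data; the table 
and its contents are assumptions, not part of this issue):

{noformat}
-- suppose a table test(a INTEGER) holds the values 1, 2, 3, 4
SELECT a FROM test ORDER BY a LIMIT 2 OFFSET 1
-- correct result: 2, 3
-- after the transformation above, the inner Sort already applies offset=1 and
-- fetch=2 and produces {2, 3}; the outer Limit then applies offset=1 and
-- fetch=2 again and returns only {3}
{noformat}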


  was:
IGNITE-16013  incorrectly handles Sort(offset, fetch) transformation. It 
transforms: 

{noformat}
Sort(ordering=ord, offset=o, fetch=f) 
-> Scan
{noformat}

into 

{noformat}
Limit (offset=o, fetch=f) 
-> Sort(ordering=ord, offset=o, fetch=f)
 -> Scan
{noformat}

Which is not correct, because fetch and offset are applied twice.



> Sql. Incorrect Limit / Sort transformation
> --
>
> Key: IGNITE-22204
> URL: https://issues.apache.org/jira/browse/IGNITE-22204
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Critical
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> IGNITE-16013  incorrectly handles Sort(offset, fetch) transformation. It 
> transforms: 
> {noformat}
> Sort(ordering=ord, offset=o, fetch=f) 
> ..Scan
> {noformat}
> into 
> {noformat}
> Limit (offset=o, fetch=f) 
> ..Sort(ordering=ord, offset=o, fetch=f)
> Scan
> {noformat}
> This is not correct, because fetch and offset are applied twice.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22204) Sql. Incorrect Limit / Sort transformation

2024-06-03 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22204:
--
Description: 
IGNITE-16013  incorrectly handles Sort(offset, fetch) transformation. It 
transforms: 

{noformat}
Sort(ordering=ord, offset=o, fetch=f) 
>Scan
{noformat}

into 

{noformat}
Limit (offset=o, fetch=f) 
>Sort(ordering=ord, offset=o, fetch=f)
>>Scan
{noformat}

This is not correct, because fetch and offset are applied twice.


  was:
IGNITE-16013  incorrectly handles Sort(offset, fetch) transformation. It 
transforms: 

{noformat}
Sort(ordering=ord, offset=o, fetch=f) 
..Scan
{noformat}

into 

{noformat}
Limit (offset=o, fetch=f) 
..Sort(ordering=ord, offset=o, fetch=f)
Scan
{noformat}

Which is not correct, because fetch and offset are applied twice.



> Sql. Incorrect Limit / Sort transformation
> --
>
> Key: IGNITE-22204
> URL: https://issues.apache.org/jira/browse/IGNITE-22204
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Critical
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> IGNITE-16013  incorrectly handles Sort(offset, fetch) transformation. It 
> transforms: 
> {noformat}
> Sort(ordering=ord, offset=o, fetch=f) 
> >Scan
> {noformat}
> into 
> {noformat}
> Limit (offset=o, fetch=f) 
> >Sort(ordering=ord, offset=o, fetch=f)
> >>Scan
> {noformat}
> This is not correct, because fetch and offset are applied twice.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22204) Sql. Incorrect Limit / Sort transformation

2024-06-03 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22204:
--
Summary: Sql. Incorrect Limit / Sort transformation  (was: Sql. Set 
operation. Incorrect query transformation for a query with limit / offset that 
uses the same table)

> Sql. Incorrect Limit / Sort transformation
> --
>
> Key: IGNITE-22204
> URL: https://issues.apache.org/jira/browse/IGNITE-22204
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Critical
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> IGNITE-16013 incorrectly handles the Sort(offset, fetch) transformation. 
> It transforms Sort(ordering=abc, offset=o, fetch=f) into Limit (offset=o, 
> fetch=f) -> Sort(ordering=abc, offset=o, fetch=f), which is not correct, 
> since fetch and offset are applied twice.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22204) Sql. Incorrect Limit / Sort transformation

2024-06-03 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22204:
--
Description: 
IGNITE-16013  incorrectly handles Sort(offset, fetch) transformation. It 
transforms: 

{noformat}
Sort(ordering=ord, offset=o, fetch=f) 
-> Scan
{noformat}

into 

{noformat}
Limit (offset=o, fetch=f) 
-> Sort(ordering=ord, offset=o, fetch=f)
 -> Scan
{noformat}

This is not correct, because fetch and offset are applied twice.


  was:

IGNITE-16013  incorrectly handles Sort(offset, fetch) transformation
It transforms  Sort(ordering=abc, offset=o, fetch=f) into Limit (offset=o, 
fetch=f)  -> Sort(ordering=abc, offset=o, fetch=f), which is not correct, since 
fetch and offset are applied twice.



> Sql. Incorrect Limit / Sort transformation
> --
>
> Key: IGNITE-22204
> URL: https://issues.apache.org/jira/browse/IGNITE-22204
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Critical
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> IGNITE-16013  incorrectly handles Sort(offset, fetch) transformation. It 
> transforms: 
> {noformat}
> Sort(ordering=ord, offset=o, fetch=f) 
> -> Scan
> {noformat}
> into 
> {noformat}
> Limit (offset=o, fetch=f) 
> -> Sort(ordering=ord, offset=o, fetch=f)
>  -> Scan
> {noformat}
> This is not correct, because fetch and offset are applied twice.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22204) Sql. Set operation. Incorrect query transformation for a query with limit / offset that uses the same table

2024-06-03 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22204:
--
Description: 

IGNITE-16013 incorrectly handles the Sort(offset, fetch) transformation. 
It transforms Sort(ordering=abc, offset=o, fetch=f) into Limit (offset=o, 
fetch=f) -> Sort(ordering=abc, offset=o, fetch=f), which is not correct, since 
fetch and offset are applied twice.


  was:
Combination of LIMIT / OFFSET and set operator results in incorrect 
transformation of a plan tree:

{noformat}
statement ok
CREATE TABLE test (a INTEGER);

statement ok
INSERT INTO test VALUES (1), (2), (3), (4);

# query 1
query I rowsort
SELECT a FROM
  (SELECT a FROM test ORDER BY a LIMIT 1 OFFSET 1) t(a)

2

# query 2
query I rowsort
SELECT a FROM
  (SELECT a FROM
(SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
ORDER BY a OFFSET 1
  ) t(a)

4

# combined query should return 2, 4
# but it returns 2
query I rowsort
SELECT a FROM
  (SELECT a FROM test ORDER BY a LIMIT 1 OFFSET 1) t(a)
UNION ALL
SELECT a FROM
  (SELECT a FROM
(SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
ORDER BY a OFFSET 1
  ) t(a)

2
4

{noformat}


Query 1
{noformat}
SELECT a FROM
  (SELECT a FROM test ORDER BY a LIMIT 1 OFFSET 1) t(a)

 Limit(offset=[1], fetch=[1]), id = 80
Exchange(distribution=[single]), id = 79
   Sort(sort0=[$0], dir0=[ASC], offset=[1], fetch=[1]), id = 78
 TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 50
{noformat}

Query 2

{noformat}
SELECT a FROM
  (SELECT a FROM
(SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
ORDER BY a OFFSET 1
  ) t(a)

 Limit(offset=[1]), id = 201
   Limit(offset=[2], fetch=[3]), id = 200
 Exchange(distribution=[single]), id = 199
   Sort(sort0=[$0], dir0=[ASC], offset=[2], fetch=[3]), id = 198
 TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 168
{noformat}

Combine queries using UNION ALL

{noformat}
SELECT a FROM
  (SELECT a FROM test ORDER BY a LIMIT 1 OFFSET 1) t(a)
UNION ALL
SELECT a FROM
  (SELECT a FROM
(SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
ORDER BY a OFFSET 1
  ) t(a)

UnionAll(all=[true]), id = 403
  Limit(offset=[1], fetch=[1]), id = 400
Exchange(distribution=[single]), id = 399 # subtree is duplicated in 
another part of a query
  Sort(sort0=[$0], dir0=[ASC], offset=[1], fetch=[1]), id = 398 # 
TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 345
  Limit(offset=[1]), id = 402
Limit(offset=[2], fetch=[3]), id = 401
  Exchange(distribution=[single]), id = 399 # duplicate
Sort(sort0=[$0], dir0=[ASC], offset=[1], fetch=[1]), id = 398
  TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 345
{noformat}



When tables are different, results are correct.




> Sql. Set operation. Incorrect query transformation for a query with limit / 
> offset that uses the same table
> ---
>
> Key: IGNITE-22204
> URL: https://issues.apache.org/jira/browse/IGNITE-22204
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Critical
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> IGNITE-16013 incorrectly handles the Sort(offset, fetch) transformation. 
> It transforms Sort(ordering=abc, offset=o, fetch=f) into Limit (offset=o, 
> fetch=f) -> Sort(ordering=abc, offset=o, fetch=f), which is not correct, 
> since fetch and offset are applied twice.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-22204) Sql. Set operation. Incorrect query transformation for a query with limit / offset that uses the same table

2024-06-03 Thread Maksim Zhuravkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-22204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17851564#comment-17851564
 ] 

Maksim Zhuravkov commented on IGNITE-22204:
---

The previous issue has been moved to 
https://issues.apache.org/jira/browse/IGNITE-22392.



> Sql. Set operation. Incorrect query transformation for a query with limit / 
> offset that uses the same table
> ---
>
> Key: IGNITE-22204
> URL: https://issues.apache.org/jira/browse/IGNITE-22204
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Critical
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Combining LIMIT / OFFSET with a set operator results in incorrect 
> transformation of the plan tree:
> {noformat}
> statement ok
> CREATE TABLE test (a INTEGER);
> statement ok
> INSERT INTO test VALUES (1), (2), (3), (4);
> # query 1
> query I rowsort
> SELECT a FROM
>   (SELECT a FROM test ORDER BY a LIMIT 1 OFFSET 1) t(a)
> 
> 2
> # query 2
> query I rowsort
> SELECT a FROM
>   (SELECT a FROM
> (SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
> ORDER BY a OFFSET 1
>   ) t(a)
> 
> 4
> # combined query should return 2, 4
> # but it returns 2
> query I rowsort
> SELECT a FROM
>   (SELECT a FROM test ORDER BY a LIMIT 1 OFFSET 1) t(a)
> UNION ALL
> SELECT a FROM
>   (SELECT a FROM
> (SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
> ORDER BY a OFFSET 1
>   ) t(a)
> 
> 2
> 4
> {noformat}
> Query 1
> {noformat}
> SELECT a FROM
>   (SELECT a FROM test ORDER BY a LIMIT 1 OFFSET 1) t(a)
>  Limit(offset=[1], fetch=[1]), id = 80
> Exchange(distribution=[single]), id = 79
>Sort(sort0=[$0], dir0=[ASC], offset=[1], fetch=[1]), id = 78
>  TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 50
> {noformat}
> Query 2
> {noformat}
> SELECT a FROM
>   (SELECT a FROM
> (SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
> ORDER BY a OFFSET 1
>   ) t(a)
>  Limit(offset=[1]), id = 201
>Limit(offset=[2], fetch=[3]), id = 200
>  Exchange(distribution=[single]), id = 199
>Sort(sort0=[$0], dir0=[ASC], offset=[2], fetch=[3]), id = 198
>  TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 168
> {noformat}
> Combine queries using UNION ALL
> {noformat}
> SELECT a FROM
>   (SELECT a FROM test ORDER BY a LIMIT 1 OFFSET 1) t(a)
> UNION ALL
> SELECT a FROM
>   (SELECT a FROM
> (SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
> ORDER BY a OFFSET 1
>   ) t(a)
> UnionAll(all=[true]), id = 403
>   Limit(offset=[1], fetch=[1]), id = 400
> Exchange(distribution=[single]), id = 399 # subtree is duplicated in 
> another part of a query
>   Sort(sort0=[$0], dir0=[ASC], offset=[1], fetch=[1]), id = 398 # 
> TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 345
>   Limit(offset=[1]), id = 402
> Limit(offset=[2], fetch=[3]), id = 401
>   Exchange(distribution=[single]), id = 399 # duplicate
> Sort(sort0=[$0], dir0=[ASC], offset=[1], fetch=[1]), id = 398
>   TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 345
> {noformat}
> When tables are different, results are correct.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22392) Sql. Set operation. Incorrect query transformation for a query with limit / offset that uses the same table (When RemoveSortInSubQuery is enabled)

2024-06-03 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22392:
--
Description: 
Combining LIMIT / OFFSET with a set operator results in incorrect 
transformation of the plan tree. This issue is caused by incorrect 
handling of the `RemoveSortInSubQuery` flag inside SqlToRelConverter internals. 
At the moment this issue is mitigated by disabling that flag. 

{noformat}
statement ok
CREATE TABLE test (a INTEGER);

statement ok
INSERT INTO test VALUES (1), (2), (3), (4);

# query 1
query I rowsort
SELECT a FROM
  (SELECT a FROM test ORDER BY a LIMIT 1 OFFSET 1) t(a)

2

# query 2
query I rowsort
SELECT a FROM
  (SELECT a FROM
(SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
ORDER BY a OFFSET 1
  ) t(a)

4

# combined query should return 2, 4
# but it returns 2
query I rowsort
SELECT a FROM
  (SELECT a FROM test ORDER BY a LIMIT 1 OFFSET 1) t(a)
UNION ALL
SELECT a FROM
  (SELECT a FROM
(SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
ORDER BY a OFFSET 1
  ) t(a)

2
4

{noformat}


Query 1
{noformat}
SELECT a FROM
  (SELECT a FROM test ORDER BY a LIMIT 1 OFFSET 1) t(a)

 Limit(offset=[1], fetch=[1]), id = 80
Exchange(distribution=[single]), id = 79
   Sort(sort0=[$0], dir0=[ASC], offset=[1], fetch=[1]), id = 78
 TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 50
{noformat}

Query 2

{noformat}
SELECT a FROM
  (SELECT a FROM
(SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
ORDER BY a OFFSET 1
  ) t(a)

 Limit(offset=[1]), id = 201
   Limit(offset=[2], fetch=[3]), id = 200
 Exchange(distribution=[single]), id = 199
   Sort(sort0=[$0], dir0=[ASC], offset=[2], fetch=[3]), id = 198
 TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 168
{noformat}

Combine queries using UNION ALL

{noformat}
SELECT a FROM
  (SELECT a FROM test ORDER BY a LIMIT 1 OFFSET 1) t(a)
UNION ALL
SELECT a FROM
  (SELECT a FROM
(SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
ORDER BY a OFFSET 1
  ) t(a)

UnionAll(all=[true]), id = 403
  Limit(offset=[1], fetch=[1]), id = 400
Exchange(distribution=[single]), id = 399 # subtree is duplicated in 
another part of a query
  Sort(sort0=[$0], dir0=[ASC], offset=[1], fetch=[1]), id = 398 # 
TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 345
  Limit(offset=[1]), id = 402
Limit(offset=[2], fetch=[3]), id = 401
  Exchange(distribution=[single]), id = 399 # duplicate
Sort(sort0=[$0], dir0=[ASC], offset=[1], fetch=[1]), id = 398
  TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 345
{noformat}


When tables are different, results are correct.


Reproducible in vanilla Calcite:

{noformat}
 EnumerableUnion(all=[true])
>   EnumerableCalc(expr#0..1=[{inputs}], A=[$t0])
> EnumerableLimit(offset=[1], fetch=[1])
>   EnumerableSort(sort0=[$0], dir0=[ASC])
> EnumerableTableScan(table=[[BLANK, TEST]])
>   EnumerableCalc(expr#0..1=[{inputs}], A=[$t0])
> EnumerableLimit(offset=[2], fetch=[3])
>   EnumerableSort(sort0=[$0], dir0=[ASC])
> EnumerableTableScan(table=[[BLANK, TEST]])
{noformat}




  was:
Combination of LIMIT / OFFSET and set operator results in incorrect 
transformation of a plan tree. This issue is caused by incorrect by incorrect 
handling of the `RemoveSortInSubQuery` flag inside SqlToRelConverter internals. 
ATM this issue is migrated by disabling that flag. 

{noformat}
statement ok
CREATE TABLE test (a INTEGER);

statement ok
INSERT INTO test VALUES (1), (2), (3), (4);

# query 1
query I rowsort
SELECT a FROM
  (SELECT a FROM test ORDER BY a LIMIT 1 OFFSET 1) t(a)

2

# query 2
query I rowsort
SELECT a FROM
  (SELECT a FROM
(SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
ORDER BY a OFFSET 1
  ) t(a)

4

# combined query should return 2, 4
# but it returns 2
query I rowsort
SELECT a FROM
  (SELECT a FROM test ORDER BY a LIMIT 1 OFFSET 1) t(a)
UNION ALL
SELECT a FROM
  (SELECT a FROM
(SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
ORDER BY a OFFSET 1
  ) t(a)

2
4

{noformat}


Query 1
{noformat}
SELECT a FROM
  (SELECT a FROM test ORDER BY a LIMIT 1 OFFSET 1) t(a)

 Limit(offset=[1], fetch=[1]), id = 80
Exchange(distribution=[single]), id = 79
   Sort(sort0=[$0], dir0=[ASC], offset=[1], fetch=[1]), id = 78
 TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 50
{noformat}

Query 2

{noformat}
SELECT a FROM
  (SELECT a FROM
(SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
ORDER BY a OFFSET 1
  ) t(a)

 Limit(offset=[1]), id = 201
   Limit(offset=[2], fetch=[3]), id = 200
 Exchange(distribution=[single]), id = 199
   Sort(sort0=[$0], dir0=[ASC], offset=[2], fetch=[3]), id = 198
 TableScan(table=[[PUBLIC, TEST]], re

[jira] [Created] (IGNITE-22392) Sql. Set operation. Incorrect query transformation for a query with limit / offset that uses the same table (When RemoveSortInSubQuery is enabled)

2024-06-03 Thread Maksim Zhuravkov (Jira)
Maksim Zhuravkov created IGNITE-22392:
-

 Summary: Sql. Set operation. Incorrect query transformation for a 
query with limit / offset that uses the same table (When RemoveSortInSubQuery 
is enabled)
 Key: IGNITE-22392
 URL: https://issues.apache.org/jira/browse/IGNITE-22392
 Project: Ignite
  Issue Type: Bug
  Components: sql
Reporter: Maksim Zhuravkov


Combining LIMIT / OFFSET with a set operator results in incorrect 
transformation of the plan tree. This issue is caused by incorrect 
handling of the `RemoveSortInSubQuery` flag inside SqlToRelConverter internals. 
At the moment this issue is mitigated by disabling that flag. 

{noformat}
statement ok
CREATE TABLE test (a INTEGER);

statement ok
INSERT INTO test VALUES (1), (2), (3), (4);

# query 1
query I rowsort
SELECT a FROM
  (SELECT a FROM test ORDER BY a LIMIT 1 OFFSET 1) t(a)

2

# query 2
query I rowsort
SELECT a FROM
  (SELECT a FROM
(SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
ORDER BY a OFFSET 1
  ) t(a)

4

# combined query should return 2, 4
# but it returns 2
query I rowsort
SELECT a FROM
  (SELECT a FROM test ORDER BY a LIMIT 1 OFFSET 1) t(a)
UNION ALL
SELECT a FROM
  (SELECT a FROM
(SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
ORDER BY a OFFSET 1
  ) t(a)

2
4

{noformat}


Query 1
{noformat}
SELECT a FROM
  (SELECT a FROM test ORDER BY a LIMIT 1 OFFSET 1) t(a)

 Limit(offset=[1], fetch=[1]), id = 80
Exchange(distribution=[single]), id = 79
   Sort(sort0=[$0], dir0=[ASC], offset=[1], fetch=[1]), id = 78
 TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 50
{noformat}

Query 2

{noformat}
SELECT a FROM
  (SELECT a FROM
(SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
ORDER BY a OFFSET 1
  ) t(a)

 Limit(offset=[1]), id = 201
   Limit(offset=[2], fetch=[3]), id = 200
 Exchange(distribution=[single]), id = 199
   Sort(sort0=[$0], dir0=[ASC], offset=[2], fetch=[3]), id = 198
 TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 168
{noformat}

Combine queries using UNION ALL

{noformat}
SELECT a FROM
  (SELECT a FROM test ORDER BY a LIMIT 1 OFFSET 1) t(a)
UNION ALL
SELECT a FROM
  (SELECT a FROM
(SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
ORDER BY a OFFSET 1
  ) t(a)

UnionAll(all=[true]), id = 403
  Limit(offset=[1], fetch=[1]), id = 400
Exchange(distribution=[single]), id = 399 # subtree is duplicated in 
another part of a query
  Sort(sort0=[$0], dir0=[ASC], offset=[1], fetch=[1]), id = 398 # 
TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 345
  Limit(offset=[1]), id = 402
Limit(offset=[2], fetch=[3]), id = 401
  Exchange(distribution=[single]), id = 399 # duplicate
Sort(sort0=[$0], dir0=[ASC], offset=[1], fetch=[1]), id = 398
  TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 345
{noformat}


When tables are different, results are correct.





--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22140) Possible pagination bug in GridCacheQueryManager#runQuery()

2024-06-03 Thread Maksim Timonin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Timonin updated IGNITE-22140:

Ignite Flags:   (was: Docs Required,Release Notes Required)

> Possible pagination bug in GridCacheQueryManager#runQuery()
> ---
>
> Key: IGNITE-22140
> URL: https://issues.apache.org/jira/browse/IGNITE-22140
> Project: Ignite
>  Issue Type: Task
>Reporter: Oleg Valuyskiy
>Assignee: Oleg Valuyskiy
>Priority: Major
>  Labels: ise
> Fix For: 2.17
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> It looks like there is a pagination bug in the 
> GridCacheQueryManager#runQuery() method caused by the fact that the ‘cnt’ counter 
> doesn’t get reset after sending the first page with query results.
> It is advised to find out whether the bug really exists and fix it if that’s 
> the case.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22140) Possible pagination bug in GridCacheQueryManager#runQuery()

2024-06-03 Thread Maksim Timonin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Timonin updated IGNITE-22140:

Fix Version/s: 2.17

> Possible pagination bug in GridCacheQueryManager#runQuery()
> ---
>
> Key: IGNITE-22140
> URL: https://issues.apache.org/jira/browse/IGNITE-22140
> Project: Ignite
>  Issue Type: Task
>Reporter: Oleg Valuyskiy
>Assignee: Oleg Valuyskiy
>Priority: Major
>  Labels: ise
> Fix For: 2.17
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> It looks like there is a pagination bug in the 
> GridCacheQueryManager#runQuery() method caused by the fact that the ‘cnt’ counter 
> doesn’t get reset after sending the first page with query results.
> It is advised to find out whether the bug really exists and fix it if that’s 
> the case.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: doveadm index segfaults after upgrade to 7.5

2024-06-02 Thread Maksim Rodin
I applied a patch and installed from ports:
# pkg_info | grep xapian
dovecot-fts-xapian-1.7.13 full text search plugin for Dovecot\
using Xapian
xapian-core-1.4.24  search engine library

It seems to have no effect.
Jun 03 08:01:39 doveadm(mail...@somedomain.org): Debug:\
Mailbox INBOX: UID 1048: Opened mail because: fts indexing
Segmentation fault


On Fri May 31 15:28:41 2024, Stuart Henderson wrote:
> On 2024/05/31 15:44, Maksim Rodin wrote:
> > Hello
> > After upgrading the machine to 7.5 amd64 doveadm command used for
> > indexing mailboxes does not work anymore:
> 
> Does 1.7.13 work any better? Here's a ports diff.
> 
> Index: Makefile
> ===
> RCS file: /cvs/ports/mail/dovecot-fts-xapian/Makefile,v
> diff -u -p -r1.19 Makefile
> --- Makefile  25 Feb 2024 11:36:11 -  1.19
> +++ Makefile  31 May 2024 14:28:15 -
> @@ -1,8 +1,9 @@
>  COMMENT= full text search plugin for Dovecot using Xapian
>  
> -DIST_TUPLE=  github grosjo fts-xapian 1.7.0 .
> -
> -PKGNAME= dovecot-${DISTNAME}
> +V=   1.7.13
> +DISTNAME=dovecot-fts-xapian-$V
> +SITES=   
> https://github.com/grosjo/fts-xapian/releases/download/$V/
> +WRKDIST= ${WRKDIR}/fts-xapian-$V
>  
>  CATEGORIES=  mail
>  
> Index: distinfo
> ===
> RCS file: /cvs/ports/mail/dovecot-fts-xapian/distinfo,v
> diff -u -p -r1.10 distinfo
> --- distinfo  25 Feb 2024 11:36:11 -  1.10
> +++ distinfo  31 May 2024 14:28:15 -
> @@ -1,2 +1,2 @@
> -SHA256 (grosjo-fts-xapian-1.7.0.tar.gz) = 
> ygkBoEvgrNRIxGfCa/MWaFMbrzwlkJXjD99dc/NeS/o=
> -SIZE (grosjo-fts-xapian-1.7.0.tar.gz) = 35121
> +SHA256 (dovecot-fts-xapian-1.7.13.tar.gz) = 
> MF60UgNoctNs3MQN0aI5qKBvE7zRh9RkWeMOm5aL6S4=
> +SIZE (dovecot-fts-xapian-1.7.13.tar.gz) = 37569

-- 
Best regards
Maksim Rodin

Respectfully,
Maksim Rodin



Re: proxy_cache_lock for content revalidation

2024-05-31 Thread Maksim Yevmenkin
[..]

> >
> > https://mailman.nginx.org/pipermail/nginx-devel/2018-December/011710.html
>
> thank you! it seems the original post mentioned this exact issue. it
> also seems that the patch was removed. i am curious if it would be
> possible to restore the patch.

please never mind, i'm blind, sorry!

thanks
max


Re: proxy_cache_lock for content revalidation

2024-05-31 Thread Maksim Yevmenkin
hello,

> > it seems that the proxy_cache_lock directive operates only for cache
> > misses (new content). while this behavior is documented, i am curious
> > about the reasoning behind it. there are scenarios where
> > proxy_cache_lock could be very beneficial for content revalidation.
> > what are the community's thoughts on this?
>
> The generic idea is that "proxy_cache_use_stale updating;" is a
> better option for existing cache items.  As such, current
> implementation of proxy_cache_lock doesn't try to handle existing
> cache items to reduce complexity.

right. there are instances where serving outdated content is not
permissible, yet overwhelming the upstream servers with a flood of
requests is highly undesirable. this situation occurs quite
frequently.
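
for reference, the two directives side by side (a minimal sketch; the
upstream name and cache zone are placeholders, not taken from this thread,
and a matching proxy_cache_path definition is assumed to exist elsewhere):

    location / {
        proxy_pass            http://backend;
        proxy_cache           my_cache;

        # collapses concurrent requests for new cache entries only
        proxy_cache_lock      on;

        # serves the existing (stale) copy while a single request revalidates it
        proxy_cache_use_stale updating;
    }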

> Just in case, at least one previous attempt to extend
> proxy_cache_lock to work with existing cache items can be found
> here:
>
> https://mailman.nginx.org/pipermail/nginx-devel/2018-December/011710.html

thank you! it seems the original post mentioned this exact issue. it
also seems that the patch was removed. i am curious if it would be
possible to restore the patch.

thanks,
max


doveadm index segfaults after upgrade to 7.5

2024-05-31 Thread Maksim Rodin
Hello
After upgrading the machine to 7.5 amd64 doveadm command used for
indexing mailboxes does not work anymore:

# doveadm -Dvv index -u somemail...@somedom.com '*'
... some usual diagnostic messages...
May 31 06:33:12 doveadm(somemail...@somedom.com): \
Debug: Mailbox INBOX: UID 1048: Opened mail because: fts indexing
Segmentation fault

There is also an entry in dovecot.log when mail indexing was to be done
automatically:
May 31 01:30:07 mail dovecot: indexer-worker(somemail...@somedom.com)\
<18492>:\
Fatal: master: service(indexer-worker): child 18492 killed \
with signal 11 (core not dumped -\
https://dovecot.org/bugreport.html#coredumps - set service \
indexer-worker { drop_priv_before_exec=yes })

# pkg_info -m | grep dovecot
dovecot-2.3.21v0compact IMAP/POP3 server
dovecot-fts-xapian-1.7.0 full text search plugin for Dovecot using Xapian
dovecot-ldap-2.3.21v0 LDAP authentication / dictionary support for Dovecot
dovecot-pigeonhole-0.5.21v1 Sieve mail filtering for Dovecot

Last configuration changes in dovecot were made long before upgrade and
I did not have problems with that configuration on 7.4


-- 
Best regards
Maksim Rodin



Stepping in debugger switches to interpretation mode

2024-05-31 Thread Maksim Zuev
Dear Sir/Madam,

I encountered a problem while debugging the code. I am attaching the
reproducer to this email in the Main.java file.

When running it with the debugger without stepping, the application runs in
less than a second (see jdb output in the jdb_run.txt file). However,
after performing a single step, the application runs in
interpretation mode and becomes very slow (see jdb output in the
jdb_step.txt file).

I assume it is running in interpreter mode, as I see
InterpreterRuntime::post_method_exit
calls in the profiler.

Could you please help me figure out what causes the application to run in
the interpreter mode? Is this a bug or an expected behavior? Are there any
ways to work around this issue?

Best regards,
Maksim Zuev
Software developer at JetBrains
Initializing jdb ...
> stop at Main:7
Deferring breakpoint Main:7.
It will be set after the class is loaded.
> run
run Main
Set uncaught java.lang.Throwable
Set deferred uncaught java.lang.Throwable
>
VM Started: Set deferred breakpoint Main:7

Breakpoint hit: "thread=main", Main.main(), line=7 bci=0
7int x = 1;

main[1] cont
> 3cc8b2e70ead788fba06f607b827bd8dcb06c6b3b234578b1200b793c75ef999
173ms

The application exited

Main.java
Description: Binary data
Initializing jdb ...
> stop at Main:7
Deferring breakpoint Main:7.
It will be set after the class is loaded.
> run
run Main
Set uncaught java.lang.Throwable
Set deferred uncaught java.lang.Throwable
>
VM Started: Set deferred breakpoint Main:7

Breakpoint hit: "thread=main", Main.main(), line=7 bci=0
7int x = 1;

main[1] next
>
Step completed: "thread=main", Main.main(), line=9 bci=2
9long start = System.currentTimeMillis();

main[1] cont
> 3cc8b2e70ead788fba06f607b827bd8dcb06c6b3b234578b1200b793c75ef999
51212ms

The application exited

[jira] [Resolved] (IGNITE-22390) Sql. Cursor::requestNextAsync returns stale results

2024-05-31 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov resolved IGNITE-22390.
---
Resolution: Invalid

> Sql. Cursor::requestNextAsync returns stale results
> ---
>
> Key: IGNITE-22390
> URL: https://issues.apache.org/jira/browse/IGNITE-22390
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Priority: Critical
>  Labels: ignite-3
>
> The following test case began to fail after (Sql. Avoid starting transaction 
> for KV operation) [ https://issues.apache.org/jira/browse/IGNITE-22263 ]
>  
> {code:java}
> package org.apache.ignite.internal.sql.engine;
> import static 
> org.apache.ignite.internal.catalog.CatalogService.DEFAULT_STORAGE_PROFILE;
> import static org.apache.ignite.internal.lang.IgniteStringFormatter.format;
> import static org.apache.ignite.internal.testframework.IgniteTestUtils.await;
> import static org.junit.jupiter.api.Assertions.assertEquals;
> import java.util.Objects;
> import org.apache.ignite.internal.app.IgniteImpl;
> import org.apache.ignite.internal.sql.BaseSqlIntegrationTest;
> import org.apache.ignite.internal.sql.engine.property.SqlProperties;
> import org.apache.ignite.internal.sql.engine.property.SqlPropertiesHelper;
> import org.apache.ignite.internal.tx.HybridTimestampTracker;
> import org.apache.ignite.internal.util.AsyncCursor.BatchedResult;
> import org.gridgain.internal.security.context.GridGainSecurity;
> import org.gridgain.internal.security.context.SecurityContext;
> import org.junit.jupiter.api.Test;
> public class ItCursor extends BaseSqlIntegrationTest {
> @Override
> protected int initialNodes() {
> return 1;
> }
> @Test
> public void testCursor() {
> int rowsCount = 2000;
> sql("create zone test_zone with partitions=1, replicas=1, 
> storage_profiles='" + DEFAULT_STORAGE_PROFILE + "'");
> sql("create table T (ID INT PRIMARY KEY, VAL INT) with 
> primary_zone='TEST_ZONE'");
> sql(format("insert into T select X, X from table(system_range(1, 
> {}))", rowsCount));
> String selectAll = "select * from T";
> AsyncSqlCursor cursor1 = openSqlCursor(selectAll);
> await(cursor1.onFirstPageReady());
> BatchedResult f = 
> await(cursor1.requestNextAsync(1000));
> assertEquals(1000, f.items().size()); // f.items().size() is zero
> }
> 
> private AsyncSqlCursor openSqlCursor(String sql) {
> IgniteImpl node = CLUSTER.node(0);
> SqlQueryProcessor qryProc = (SqlQueryProcessor) node.queryEngine();
> SqlProperties props = SqlPropertiesHelper.emptyProperties();
> SecurityContext securityCtx = GridGainSecurity.systemContext();
> return Objects.requireNonNull(await(qryProc.queryAsync(props, new 
> HybridTimestampTracker(), null, sql, securityCtx)));
> }
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22390) Sql. Cursor::requestNextAsync returns stale results

2024-05-31 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22390:
--
Summary: Sql. Cursor::requestNextAsync returns stale results  (was: Sql. 
Cursor::requestNextAsync returns no data)

> Sql. Cursor::requestNextAsync returns stale results
> ---
>
> Key: IGNITE-22390
> URL: https://issues.apache.org/jira/browse/IGNITE-22390
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Priority: Critical
>  Labels: ignite-3
>
> The following test case began to fail after 
> https://issues.apache.org/jira/browse/IGNITE-22263 
>  
> {code:java}
> package org.apache.ignite.internal.sql.engine;
> import static 
> org.apache.ignite.internal.catalog.CatalogService.DEFAULT_STORAGE_PROFILE;
> import static org.apache.ignite.internal.lang.IgniteStringFormatter.format;
> import static org.apache.ignite.internal.testframework.IgniteTestUtils.await;
> import static org.junit.jupiter.api.Assertions.assertEquals;
> import java.util.Objects;
> import org.apache.ignite.internal.app.IgniteImpl;
> import org.apache.ignite.internal.sql.BaseSqlIntegrationTest;
> import org.apache.ignite.internal.sql.engine.property.SqlProperties;
> import org.apache.ignite.internal.sql.engine.property.SqlPropertiesHelper;
> import org.apache.ignite.internal.tx.HybridTimestampTracker;
> import org.apache.ignite.internal.util.AsyncCursor.BatchedResult;
> import org.gridgain.internal.security.context.GridGainSecurity;
> import org.gridgain.internal.security.context.SecurityContext;
> import org.junit.jupiter.api.Test;
> public class ItCursor extends BaseSqlIntegrationTest {
> @Override
> protected int initialNodes() {
> return 1;
> }
> @Test
> public void testCursor() {
> int rowsCount = 2000;
> sql("create zone test_zone with partitions=1, replicas=1, 
> storage_profiles='" + DEFAULT_STORAGE_PROFILE + "'");
> sql("create table T (ID INT PRIMARY KEY, VAL INT) with 
> primary_zone='TEST_ZONE'");
> sql(format("insert into T select X, X from table(system_range(1, 
> {}))", rowsCount));
> String selectAll = "select * from T";
> AsyncSqlCursor cursor1 = openSqlCursor(selectAll);
> await(cursor1.onFirstPageReady());
> BatchedResult f = 
> await(cursor1.requestNextAsync(1000));
> assertEquals(1000, f.items().size()); // f.items().size() is zero
> }
> 
> private AsyncSqlCursor openSqlCursor(String sql) {
> IgniteImpl node = CLUSTER.node(0);
> SqlQueryProcessor qryProc = (SqlQueryProcessor) node.queryEngine();
> SqlProperties props = SqlPropertiesHelper.emptyProperties();
> SecurityContext securityCtx = GridGainSecurity.systemContext();
> return Objects.requireNonNull(await(qryProc.queryAsync(props, new 
> HybridTimestampTracker(), null, sql, securityCtx)));
> }
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22390) Sql. Cursor::requestNextAsync returns stale results

2024-05-31 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22390:
--
Description: 
The following test case began to fail after (Sql. Avoid starting transaction 
for KV operation) [ https://issues.apache.org/jira/browse/IGNITE-22263 ]
 
{code:java}
package org.apache.ignite.internal.sql.engine;

import static 
org.apache.ignite.internal.catalog.CatalogService.DEFAULT_STORAGE_PROFILE;
import static org.apache.ignite.internal.lang.IgniteStringFormatter.format;
import static org.apache.ignite.internal.testframework.IgniteTestUtils.await;
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.Objects;
import org.apache.ignite.internal.app.IgniteImpl;
import org.apache.ignite.internal.sql.BaseSqlIntegrationTest;
import org.apache.ignite.internal.sql.engine.property.SqlProperties;
import org.apache.ignite.internal.sql.engine.property.SqlPropertiesHelper;
import org.apache.ignite.internal.tx.HybridTimestampTracker;
import org.apache.ignite.internal.util.AsyncCursor.BatchedResult;
import org.gridgain.internal.security.context.GridGainSecurity;
import org.gridgain.internal.security.context.SecurityContext;
import org.junit.jupiter.api.Test;

public class ItCursor extends BaseSqlIntegrationTest {

@Override
protected int initialNodes() {
return 1;
}

@Test
public void testCursor() {
int rowsCount = 2000;

sql("create zone test_zone with partitions=1, replicas=1, 
storage_profiles='" + DEFAULT_STORAGE_PROFILE + "'");
sql("create table T (ID INT PRIMARY KEY, VAL INT) with 
primary_zone='TEST_ZONE'");
sql(format("insert into T select X, X from table(system_range(1, {}))", 
rowsCount));

String selectAll = "select * from T";

AsyncSqlCursor cursor1 = openSqlCursor(selectAll);
await(cursor1.onFirstPageReady());
BatchedResult f = await(cursor1.requestNextAsync(1000));
assertEquals(1000, f.items().size()); // f.items().size() is zero
}

private AsyncSqlCursor openSqlCursor(String sql) {
IgniteImpl node = CLUSTER.node(0);
SqlQueryProcessor qryProc = (SqlQueryProcessor) node.queryEngine();
SqlProperties props = SqlPropertiesHelper.emptyProperties();
SecurityContext securityCtx = GridGainSecurity.systemContext();

return Objects.requireNonNull(await(qryProc.queryAsync(props, new 
HybridTimestampTracker(), null, sql, securityCtx)));
}
}
{code}


  was:
The following test case begun to fail after 
https://issues.apache.org/jira/browse/IGNITE-22263 
 
{code:java}
package org.apache.ignite.internal.sql.engine;

import static 
org.apache.ignite.internal.catalog.CatalogService.DEFAULT_STORAGE_PROFILE;
import static org.apache.ignite.internal.lang.IgniteStringFormatter.format;
import static org.apache.ignite.internal.testframework.IgniteTestUtils.await;
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.Objects;
import org.apache.ignite.internal.app.IgniteImpl;
import org.apache.ignite.internal.sql.BaseSqlIntegrationTest;
import org.apache.ignite.internal.sql.engine.property.SqlProperties;
import org.apache.ignite.internal.sql.engine.property.SqlPropertiesHelper;
import org.apache.ignite.internal.tx.HybridTimestampTracker;
import org.apache.ignite.internal.util.AsyncCursor.BatchedResult;
import org.gridgain.internal.security.context.GridGainSecurity;
import org.gridgain.internal.security.context.SecurityContext;
import org.junit.jupiter.api.Test;

public class ItCursor extends BaseSqlIntegrationTest {

@Override
protected int initialNodes() {
return 1;
}

@Test
public void testCursor() {
int rowsCount = 2000;

sql("create zone test_zone with partitions=1, replicas=1, 
storage_profiles='" + DEFAULT_STORAGE_PROFILE + "'");
sql("create table T (ID INT PRIMARY KEY, VAL INT) with 
primary_zone='TEST_ZONE'");
sql(format("insert into T select X, X from table(system_range(1, {}))", 
rowsCount));

String selectAll = "select * from T";

AsyncSqlCursor cursor1 = openSqlCursor(selectAll);
await(cursor1.onFirstPageReady());
BatchedResult f = await(cursor1.requestNextAsync(1000));
assertEquals(1000, f.items().size()); // f.items().size() is zero
}

private AsyncSqlCursor openSqlCursor(String sql) {
IgniteImpl node = CLUSTER.node(0);
SqlQueryProcessor qryProc = (SqlQueryProcessor) node.queryEngine();
SqlProperties props = SqlPropertiesHelper.emptyProperties();
SecurityContext securityCtx = GridGainSecurity.systemContext();

return Objects.requireNonNull(await(qryProc.queryAsync(props, new 
HybridTimestampTracker(), null, sql, securityCtx)));
}
}
{code}

[jira] [Created] (IGNITE-22390) Sql. Cursor::requestNextAsync returns no data

2024-05-31 Thread Maksim Zhuravkov (Jira)
Maksim Zhuravkov created IGNITE-22390:
-

 Summary: Sql. Cursor::requestNextAsync returns no data
 Key: IGNITE-22390
 URL: https://issues.apache.org/jira/browse/IGNITE-22390
 Project: Ignite
  Issue Type: Bug
  Components: sql
Affects Versions: 3.0.0-beta2
Reporter: Maksim Zhuravkov


The following test case began to fail after 
https://issues.apache.org/jira/browse/IGNITE-22263 
 
{code:java}
package org.apache.ignite.internal.sql.engine;

import static 
org.apache.ignite.internal.catalog.CatalogService.DEFAULT_STORAGE_PROFILE;
import static org.apache.ignite.internal.lang.IgniteStringFormatter.format;
import static org.apache.ignite.internal.testframework.IgniteTestUtils.await;
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.Objects;
import org.apache.ignite.internal.app.IgniteImpl;
import org.apache.ignite.internal.sql.BaseSqlIntegrationTest;
import org.apache.ignite.internal.sql.engine.property.SqlProperties;
import org.apache.ignite.internal.sql.engine.property.SqlPropertiesHelper;
import org.apache.ignite.internal.tx.HybridTimestampTracker;
import org.apache.ignite.internal.util.AsyncCursor.BatchedResult;
import org.gridgain.internal.security.context.GridGainSecurity;
import org.gridgain.internal.security.context.SecurityContext;
import org.junit.jupiter.api.Test;

public class ItCursor extends BaseSqlIntegrationTest {

@Override
protected int initialNodes() {
return 1;
}

@Test
public void testCursor() {
int rowsCount = 2000;

sql("create zone test_zone with partitions=1, replicas=1, 
storage_profiles='" + DEFAULT_STORAGE_PROFILE + "'");
sql("create table T (ID INT PRIMARY KEY, VAL INT) with 
primary_zone='TEST_ZONE'");
sql(format("insert into T select X, X from table(system_range(1, {}))", 
rowsCount));

String selectAll = "select * from T";

AsyncSqlCursor cursor1 = openSqlCursor(selectAll);
await(cursor1.onFirstPageReady());
BatchedResult f = await(cursor1.requestNextAsync(1000));
assertEquals(1000, f.items().size()); // f.items().size() is zero
}

private AsyncSqlCursor openSqlCursor(String sql) {
IgniteImpl node = CLUSTER.node(0);
SqlQueryProcessor qryProc = (SqlQueryProcessor) node.queryEngine();
SqlProperties props = SqlPropertiesHelper.emptyProperties();
SecurityContext securityCtx = GridGainSecurity.systemContext();

return Objects.requireNonNull(await(qryProc.queryAsync(props, new 
HybridTimestampTracker(), null, sql, securityCtx)));
}
}
{code}




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


doveadm index segfaults after upgrade to 7.5

2024-05-30 Thread Maksim Rodin
Hello
After upgrading the machine to 7.5 amd64, the doveadm command used for
indexing mailboxes does not work anymore:

# doveadm -Dvv index -u somemail...@somedom.com '*'
... some usual diagnostic messages...
May 31 06:33:12 doveadm(somemail...@somedom.com): \
Debug: Mailbox INBOX: UID 1048: Opened mail because: fts indexing
Segmentation fault

There is also an entry in dovecot.log when mail indexing was to be done
automatically:
May 31 01:30:07 mail dovecot: indexer-worker(somemail...@somedom.com)\
<18492>:\
Fatal: master: service(indexer-worker): child 18492 killed \
with signal 11 (core not dumped -\
https://dovecot.org/bugreport.html#coredumps - set service \
indexer-worker { drop_priv_before_exec=yes })

# pkg_info -m | grep dovecot
dovecot-2.3.21v0 compact IMAP/POP3 server
dovecot-fts-xapian-1.7.0 full text search plugin for Dovecot using Xapian
dovecot-ldap-2.3.21v0 LDAP authentication / dictionary support for Dovecot
dovecot-pigeonhole-0.5.21v1 Sieve mail filtering for Dovecot

The last configuration changes in dovecot were made long before the upgrade,
and I did not have problems with that configuration on 7.4.

-- 
Best regards
Maksim Rodin



[llvm-branch-commits] [BOLT] Detect .warm split functions as cold fragments (PR #93759)

2024-05-30 Thread Maksim Panchenko via llvm-branch-commits

https://github.com/maksfb approved this pull request.

LGTM with the nit addressed.

https://github.com/llvm/llvm-project/pull/93759
___
llvm-branch-commits mailing list
llvm-branch-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-branch-commits


[llvm-branch-commits] [BOLT] Detect .warm split functions as cold fragments (PR #93759)

2024-05-30 Thread Maksim Panchenko via llvm-branch-commits


@@ -596,6 +597,9 @@ class RewriteInstance {
 
   NameResolver NR;
 
+  // Regex object matching split function names.
+  const Regex ColdFragment{"(.*)\\.(cold|warm)(\\.[0-9]+)?"};

maksfb wrote:

nit: s/ColdFragment/FunctionFragmentTemplate/

https://github.com/llvm/llvm-project/pull/93759
___
llvm-branch-commits mailing list
llvm-branch-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-branch-commits


[llvm-branch-commits] [BOLT] Detect .warm split functions as cold fragments (PR #93759)

2024-05-30 Thread Maksim Panchenko via llvm-branch-commits

https://github.com/maksfb edited https://github.com/llvm/llvm-project/pull/93759
___
llvm-branch-commits mailing list
llvm-branch-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-branch-commits


proxy_cache_lock for content revalidation

2024-05-30 Thread Maksim Yevmenkin
hello!

it seems that the proxy_cache_lock directive operates only for cache
misses (new content). while this behavior is documented, i am curious
about the reasoning behind it. there are scenarios where
proxy_cache_lock could be very beneficial for content revalidation.
what are the community's thoughts on this?

thanks!
max
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel



[jira] [Updated] (IGNITE-22189) Display Expiry Policy information in the system view

2024-05-30 Thread Maksim Davydov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Davydov updated IGNITE-22189:

Description: 
The {{CacheView#expiryPolicyFactory}} method returns the ExpiryPolicyFactory 
string representation, which at this point is simply className + '@' + hashCode 
in hex, i.e. the default {{Object#toString()}} behaviour. This is not 
informative for an end user of the API.

In addition, it is useful to have information about existing cache entries that 
are about to expire (eligible for cache expiry policy).

{*}TODO{*}:
 * To make the {{CacheView#expiryPolicyFactory}} method return readable, 
human-oriented output, refactor the method or the {{Factory}} child classes so 
that they expose the cache expiry policy settings (policy type and TTL) in a 
readable form (a minimal sketch follows this list).
 * Within the cache view ({{CacheView}}), check for the presence of entries 
eligible for the expiry policy. This can be done with O(log N) time complexity 
for in-memory mode, and O(number of partitions) for persistent mode.
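
A minimal sketch of the kind of "policy type + TTL" output meant above. The helper 
class and its name are hypothetical and only illustrate the formatting on top of the 
standard JSR-107 API; this is not the proposed Ignite implementation:

{code:java}
import javax.cache.configuration.Factory;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;
import javax.cache.expiry.ExpiryPolicy;

public class ExpiryPolicyFormatter {
    /** Renders an expiry-policy factory as "PolicyType[ttl=amount unit]" instead of className@hashCode. */
    static String describe(Factory<? extends ExpiryPolicy> factory) {
        if (factory == null) {
            return "none";
        }

        ExpiryPolicy policy = factory.create();
        Duration ttl = policy.getExpiryForCreation();

        return policy.getClass().getSimpleName()
                + "[ttl=" + ttl.getDurationAmount() + " " + ttl.getTimeUnit() + "]";
    }

    public static void main(String[] args) {
        // Prints "CreatedExpiryPolicy[ttl=1 MINUTES]" rather than the default Object#toString() form.
        System.out.println(describe(CreatedExpiryPolicy.factoryOf(Duration.ONE_MINUTE)));
    }
}
{code}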

  was:
The {{CacheView#expiryPolicyFactory}} method returns the ExpiryPolicyFactory 
string representation, which at this point is a simple className +@ + hashCode 
in hex, that is default {{Object#toString()}} behaviour. This is not 
informative for an end user of the API.

In addition, it is useful to have information about existing cache entries that 
are about to expire (eligible for cache expiry policy).

{*}TODO{*}:
 * To make the {{CacheView#expiryPolicyFactory}} method return readable, 
human-oriented output, one should refactor the method or 
{{Factory}} child classes to provide the cache expiry policy 
setting in readable form with policy type and ttl.
 * Within the cache group view ({{{}CacheGroupView{}}}), check the entries 
presence eligible for expiry policy. It can be done with O(1) time complexity 
for in-memory, and O(number of partitions) for persistent mode.
 * Within the cache view ({{{}CacheView{}}}), check the entries presence 
eligible for expiry policy. It can be done with O(logN) time complexity for 
in-memory, and O(number of partitions) for persistent mode.


> Display Expiry Policy information in the system view
> 
>
> Key: IGNITE-22189
> URL: https://issues.apache.org/jira/browse/IGNITE-22189
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Maksim Davydov
>    Assignee: Maksim Davydov
>Priority: Minor
>  Labels: ise
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> The {{CacheView#expiryPolicyFactory}} method returns the ExpiryPolicyFactory 
> string representation, which at this point is a simple className +@ + 
> hashCode in hex, that is default {{Object#toString()}} behaviour. This is not 
> informative for an end user of the API.
> In addition, it is useful to have information about existing cache entries 
> that are about to expire (eligible for cache expiry policy).
> {*}TODO{*}:
>  * To make the {{CacheView#expiryPolicyFactory}} method return readable, 
> human-oriented output, one should refactor the method or 
> {{Factory}} child classes to provide the cache expiry policy 
> setting in readable form with policy type and ttl.
>  * Within the cache view ({{{}CacheView{}}}), check the entries presence 
> eligible for expiry policy. It can be done with O(logN) time complexity for 
> in-memory, and O(number of partitions) for persistent mode.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HDDS-10952) Bump org.apache.derby:derby to mitigate CVE-2022-46337

2024-05-30 Thread Maksim Myskov (Jira)
Maksim Myskov created HDDS-10952:


 Summary: Bump org.apache.derby:derby to mitigate CVE-2022-46337
 Key: HDDS-10952
 URL: https://issues.apache.org/jira/browse/HDDS-10952
 Project: Apache Ozone
  Issue Type: Improvement
Reporter: Maksim Myskov
Assignee: Maksim Myskov


The safe version is 10.14.3.0



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@ozone.apache.org
For additional commands, e-mail: issues-h...@ozone.apache.org



[jira] [Assigned] (IGNITE-22204) Sql. Set operation. Incorrect query transformation for a query with limit / offset that uses the same table

2024-05-30 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov reassigned IGNITE-22204:
-

Assignee: Maksim Zhuravkov

> Sql. Set operation. Incorrect query transformation for a query with limit / 
> offset that uses the same table
> ---
>
> Key: IGNITE-22204
> URL: https://issues.apache.org/jira/browse/IGNITE-22204
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Assignee: Maksim Zhuravkov
>Priority: Critical
>  Labels: ignite-3
>
> A combination of LIMIT / OFFSET and a set operator results in an incorrect 
> transformation of the plan tree:
> {noformat}
> statement ok
> CREATE TABLE test (a INTEGER);
> statement ok
> INSERT INTO test VALUES (1), (2), (3), (4);
> # query 1
> query I rowsort
> SELECT a FROM
>   (SELECT a FROM test ORDER BY a LIMIT 1 OFFSET 1) t(a)
> 
> 2
> # query 2
> query I rowsort
> SELECT a FROM
>   (SELECT a FROM
> (SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
> ORDER BY a OFFSET 1
>   ) t(a)
> 
> 4
> # combined query should return 2, 4
> # but it returns 2
> query I rowsort
> SELECT a FROM
>   (SELECT a FROM test ORDER BY a LIMIT 1 OFFSET 1) t(a)
> UNION ALL
> SELECT a FROM
>   (SELECT a FROM
> (SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
> ORDER BY a OFFSET 1
>   ) t(a)
> 
> 2
> 4
> {noformat}
> Query 1
> {noformat}
> SELECT a FROM
>   (SELECT a FROM test ORDER BY a LIMIT 1 OFFSET 1) t(a)
>  Limit(offset=[1], fetch=[1]), id = 80
> Exchange(distribution=[single]), id = 79
>Sort(sort0=[$0], dir0=[ASC], offset=[1], fetch=[1]), id = 78
>  TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 50
> {noformat}
> Query 2
> {noformat}
> SELECT a FROM
>   (SELECT a FROM
> (SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
> ORDER BY a OFFSET 1
>   ) t(a)
>  Limit(offset=[1]), id = 201
>Limit(offset=[2], fetch=[3]), id = 200
>  Exchange(distribution=[single]), id = 199
>Sort(sort0=[$0], dir0=[ASC], offset=[2], fetch=[3]), id = 198
>  TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 168
> {noformat}
> Combine queries using UNION ALL
> {noformat}
> SELECT a FROM
>   (SELECT a FROM test ORDER BY a LIMIT 1 OFFSET 1) t(a)
> UNION ALL
> SELECT a FROM
>   (SELECT a FROM
> (SELECT a FROM test ORDER BY a LIMIT 3 OFFSET 2) i(a)
> ORDER BY a OFFSET 1
>   ) t(a)
> UnionAll(all=[true]), id = 403
>   Limit(offset=[1], fetch=[1]), id = 400
> Exchange(distribution=[single]), id = 399 # subtree is duplicated in 
> another part of a query
>   Sort(sort0=[$0], dir0=[ASC], offset=[1], fetch=[1]), id = 398 # 
> TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 345
>   Limit(offset=[1]), id = 402
> Limit(offset=[2], fetch=[3]), id = 401
>   Exchange(distribution=[single]), id = 399 # duplicate
> Sort(sort0=[$0], dir0=[ASC], offset=[1], fetch=[1]), id = 398
>   TableScan(table=[[PUBLIC, TEST]], requiredColumns=[{0}]), id = 345
> {noformat}
> When tables are different, results are correct.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-21965) Extend test coverage for SQL E071-02(Basic query expressions. UNION ALL table operator)

2024-05-29 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov reassigned IGNITE-21965:
-

Assignee: Maksim Zhuravkov

> Extend test coverage for SQL E071-02(Basic query expressions. UNION ALL table 
> operator)
> ---
>
> Key: IGNITE-21965
> URL: https://issues.apache.org/jira/browse/IGNITE-21965
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Iurii Gerzhedovich
>Assignee: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
>
> Test coverage for SQL E071-02(Basic query expressions. UNION ALL table 
> operator) is poor.
> Let's increase the test coverage. 
>  
> ref - test/sql/subquery/table/test_subquery_union.test



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22360) Sql. SUBSTRING function should not accept REAL/DOUBLE arguments in its numeric arguments.

2024-05-29 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22360:
--
Summary: Sql. SUBSTRING function should not accept REAL/DOUBLE arguments in 
its numeric arguments.  (was: Sql. SUBSTRING function should not accept 
REAL/DOUBLE in its numeric arguments.)

> Sql. SUBSTRING function should not accept REAL/DOUBLE arguments in its 
> numeric arguments.
> -
>
> Key: IGNITE-22360
> URL: https://issues.apache.org/jira/browse/IGNITE-22360
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
>
> The following queries should be rejected by the validator, because numeric 
> arguments other than INTs do not make any sense for this function:
> {noformat}
> SELECT SUBSTRING('aaa', 1.0);
> SELECT SUBSTRING('aaa', 1.0, 2.3)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22360) Sql. SUBSTRING function should not accept REAL/DOUBLE values in its numeric arguments.

2024-05-29 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22360:
--
Summary: Sql. SUBSTRING function should not accept REAL/DOUBLE values in 
its numeric arguments.  (was: Sql. SUBSTRING function should not accept 
REAL/DOUBLE arguments in its numeric arguments.)

> Sql. SUBSTRING function should not accept REAL/DOUBLE values in its numeric 
> arguments.
> --
>
> Key: IGNITE-22360
> URL: https://issues.apache.org/jira/browse/IGNITE-22360
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
>
> The following queries should be rejected by the validator, because numeric 
> arguments other than INTs do not make any sense for this function:
> {noformat}
> SELECT SUBSTRING('aaa', 1.0);
> SELECT SUBSTRING('aaa', 1.0, 2.3)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22360) Sql. SUBSTRING function should not accept REAL/DOUBLE in its numeric arguments.

2024-05-29 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22360:
--
Description: 
The following queries should be rejected by the validator, because numeric 
arguments other than INTs do not make any sense for this function:
{noformat}
SELECT SUBSTRING('aaa', 1.0);
SELECT SUBSTRING('aaa', 1.0, 2.3)
{noformat}
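
A minimal sketch of the kind of operand-type restriction the validator could apply, 
using Calcite's stock family-based checker. The operator name SUBSTR_STRICT and its 
definition are illustrative assumptions, not Ignite's actual implementation; with such 
a checker, non-integer numeric arguments fail validation (absent implicit coercion):

{code:java}
import org.apache.calcite.sql.SqlFunction;
import org.apache.calcite.sql.SqlFunctionCategory;
import org.apache.calcite.sql.SqlKind;
import org.apache.calcite.sql.type.OperandTypes;
import org.apache.calcite.sql.type.ReturnTypes;
import org.apache.calcite.sql.type.SqlTypeFamily;

public class StrictSubstringFunction {
    // Hypothetical operator definition: the 2nd and 3rd operands are restricted to the
    // INTEGER type family, so arguments such as 1.0 or 2.3 are rejected during
    // operand-type checking instead of reaching execution.
    public static final SqlFunction SUBSTR_STRICT = new SqlFunction(
            "SUBSTR_STRICT",
            SqlKind.OTHER_FUNCTION,
            ReturnTypes.ARG0_NULLABLE_VARYING,
            null,
            OperandTypes.family(SqlTypeFamily.CHARACTER, SqlTypeFamily.INTEGER, SqlTypeFamily.INTEGER),
            SqlFunctionCategory.STRING);
}
{code}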


  was:
The following queries should be rejected by the validator, because numeric 
arguments other than INTs do not make any sense for this function.
{noformat}
SELECT SUBSTRING('aaa', 1.0);
SELECT SUBSTRING('aaa', 1.0, 2.3)
{noformat}



> Sql. SUBSTRING function should not accept REAL/DOUBLE in its numeric 
> arguments.
> ---
>
> Key: IGNITE-22360
> URL: https://issues.apache.org/jira/browse/IGNITE-22360
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
>
> The following queries should be rejected by the validator, because numeric 
> arguments other than INTs do not make any sense for this function:
> {noformat}
> SELECT SUBSTRING('aaa', 1.0);
> SELECT SUBSTRING('aaa', 1.0, 2.3)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22360) Sql. SUBSTRING function should not accept REAL/DOUBLE in its numeric arguments.

2024-05-29 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22360:
--
Description: 
The following queries should be rejected by the validator, because numeric 
arguments other than INTs do not make any sense for this function.
{noformat}
SELECT SUBSTRING('aaa', 1.0);
SELECT SUBSTRING('aaa', 1.0, 2.3)
{noformat}


  was:
The following queries should be rejected by the validator, because numeric 
arguments other than INTEGER type do not make any sense for this function.
{noformat}
SELECT SUBSTRING('aaa', 1.0);
SELECT SUBSTRING('aaa', 1.0, 2.3)
{noformat}



> Sql. SUBSTRING function should not accept REAL/DOUBLE in its numeric 
> arguments.
> ---
>
> Key: IGNITE-22360
> URL: https://issues.apache.org/jira/browse/IGNITE-22360
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
>
> The following queries should be rejected by the validator, because numeric 
> arguments other than INTs do not make any sense for this function.
> {noformat}
> SELECT SUBSTRING('aaa', 1.0);
> SELECT SUBSTRING('aaa', 1.0, 2.3)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-22360) Sql. SUBSTRING function should not accept REAL/DOUBLE in its numeric arguments.

2024-05-29 Thread Maksim Zhuravkov (Jira)
Maksim Zhuravkov created IGNITE-22360:
-

 Summary: Sql. SUBSTRING function should not accept REAL/DOUBLE in 
its numeric arguments.
 Key: IGNITE-22360
 URL: https://issues.apache.org/jira/browse/IGNITE-22360
 Project: Ignite
  Issue Type: Bug
  Components: sql
Reporter: Maksim Zhuravkov


The following queries should be rejected by the validator, because numeric 
arguments other than INTEGER type do not make any sense for this function.
{noformat}
SELECT SUBSTRING('aaa', 1.0);
SELECT SUBSTRING('aaa', 1.0, 2.3)
{noformat}




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22358) Sql. Results of TypeFactory::leastRestrictiveType are incompatible with legal type transformations

2024-05-29 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22358:
--
Fix Version/s: 3.0.0-beta2

> Sql. Results of TypeFactory::leastRestrictiveType are incompatible with legal 
> type transformations
> --
>
> Key: IGNITE-22358
> URL: https://issues.apache.org/jira/browse/IGNITE-22358
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> {noformat}
> SELECT 1 UNION ALL SELECT '2000-01-01'::DATE
> # returns: DATE, DATE
> {noformat}
> Although a cast from INT to DATE should not be possible according to type 
> transformation rules:
> {noformat}
> SELECT 1::DATE 
> # Error: Cast function cannot convert value of type INTEGER to type DATE
> {noformat}
> This query should also return an error because it is not possible to convert 
> an integer into a date.
> This happens because Calcite's SqlTypeFactoryImpl::leastRestrictiveSqlType 
> contains explicit code that allows int to date conversion and that code 
> completely ignores Calcite's TypeConversion rules.
> {code:java}
>  else if (SqlTypeUtil.isExactNumeric(type)) {
> if (SqlTypeUtil.isExactNumeric(resultType)) {
>   // TODO: come up with a cleaner way to support
>   // interval + datetime = datetime
>   if (types.size() > (i + 1)) {
> RelDataType type1 = types.get(i + 1);
> if (SqlTypeUtil.isDatetime(type1)) {
>   resultType = type1;
>   return createTypeWithNullability(resultType,
>   nullCount > 0 || nullableCount > 0);
> }
>   }
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22358) Sql. Results of TypeFactory::leastRestrictiveType are incompatible with legal type transformations

2024-05-29 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22358:
--
Issue Type: Bug  (was: Improvement)

> Sql. Results of TypeFactory::leastRestrictiveType are incompatible with legal 
> type transformations
> --
>
> Key: IGNITE-22358
> URL: https://issues.apache.org/jira/browse/IGNITE-22358
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
>
> {noformat}
> SELECT 1 UNION ALL SELECT '2000-01-01'::DATE
> # returns: DATE, DATE
> {noformat}
> Although a cast from INT to DATE should not be possible according to type 
> transformation rules:
> {noformat}
> SELECT 1::DATE 
> # Error: Cast function cannot convert value of type INTEGER to type DATE
> {noformat}
> This query should also return an error because it is not possible to convert 
> an integer into a date.
> This happens because Calcite's SqlTypeFactoryImpl::leastRestrictiveSqlType 
> contains explicit code that allows int to date conversion and that code 
> completely ignores Calcite's TypeConversion rules.
> {code:java}
>  else if (SqlTypeUtil.isExactNumeric(type)) {
> if (SqlTypeUtil.isExactNumeric(resultType)) {
>   // TODO: come up with a cleaner way to support
>   // interval + datetime = datetime
>   if (types.size() > (i + 1)) {
> RelDataType type1 = types.get(i + 1);
> if (SqlTypeUtil.isDatetime(type1)) {
>   resultType = type1;
>   return createTypeWithNullability(resultType,
>   nullCount > 0 || nullableCount > 0);
> }
>   }
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22358) Sql. Results of TypeFactory::leastRestrictiveType are incompatible with legal type transformations

2024-05-29 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22358:
--
Description: 
{noformat}
SELECT 1 UNION ALL SELECT '2000-01-01'::DATE
# returns: DATE, DATE
{noformat}

Although a cast from INT to DATE should not be possible according to type 
transformation rules:
{noformat}
SELECT 1::DATE 
# Error: Cast function cannot convert value of type INTEGER to type DATE
{noformat}

This query should also return an error because it is not possible to convert an 
integer into a date.


This happens because Calcite's SqlTypeFactoryImpl::leastRestrictiveSqlType 
contains explicit code that allows int to date conversion and that code 
completely ignores its own TypeConversion rules.

{code:java}
 else if (SqlTypeUtil.isExactNumeric(type)) {
if (SqlTypeUtil.isExactNumeric(resultType)) {
  // TODO: come up with a cleaner way to support
  // interval + datetime = datetime
  if (types.size() > (i + 1)) {
RelDataType type1 = types.get(i + 1);
if (SqlTypeUtil.isDatetime(type1)) {
  resultType = type1;
  return createTypeWithNullability(resultType,
  nullCount > 0 || nullableCount > 0);
}
  }
{code}
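
A minimal standalone reproduction of the factory-level behaviour described above 
(a sketch against Calcite's public API; the exact output may vary with the Calcite 
version):

{code:java}
import java.util.Arrays;

import org.apache.calcite.rel.type.RelDataType;
import org.apache.calcite.rel.type.RelDataTypeFactory;
import org.apache.calcite.rel.type.RelDataTypeSystem;
import org.apache.calcite.sql.type.SqlTypeFactoryImpl;
import org.apache.calcite.sql.type.SqlTypeName;

public class LeastRestrictiveDemo {
    public static void main(String[] args) {
        RelDataTypeFactory typeFactory = new SqlTypeFactoryImpl(RelDataTypeSystem.DEFAULT);

        RelDataType intType = typeFactory.createSqlType(SqlTypeName.INTEGER);
        RelDataType dateType = typeFactory.createSqlType(SqlTypeName.DATE);

        // Expected: null (no common type) or an error, because INTEGER cannot be cast to DATE.
        // Observed: DATE, because of the special-case branch quoted above.
        RelDataType result = typeFactory.leastRestrictive(Arrays.asList(intType, dateType));
        System.out.println(result);
    }
}
{code}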



  was:
{noformat}
SELECT 1 UNION ALL SELECT '2000-01-01'::DATE
# returns: DATE, DATE
{noformat}

Although a cast from INT to DATE should not be possible according to type 
transformation rules:
{noformat}
SELECT 1::DATE 
# Error: Cast function cannot convert value of type INTEGER to type DATE
{noformat}

This query should also return an error because it is not possible to convert an 
integer into a date.




> Sql. Results of TypeFactory::leastRestrictiveType are incompatible with legal 
> type transformations
> --
>
> Key: IGNITE-22358
> URL: https://issues.apache.org/jira/browse/IGNITE-22358
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
>
> {noformat}
> SELECT 1 UNION ALL SELECT '2000-01-01'::DATE
> # returns: DATE, DATE
> {noformat}
> Although a cast from INT to DATE should not be possible according to type 
> transformation rules:
> {noformat}
> SELECT 1::DATE 
> # Error: Cast function cannot convert value of type INTEGER to type DATE
> {noformat}
> This query should also return an error because it is not possible to convert 
> an integer into a date.
> This happens because Calcite's SqlTypeFactoryImpl::leastRestrictiveSqlType 
> contains explicit code that allows int to date conversion and that code 
> completely ignores its own TypeConversion rules.
> {code:java}
>  else if (SqlTypeUtil.isExactNumeric(type)) {
> if (SqlTypeUtil.isExactNumeric(resultType)) {
>   // TODO: come up with a cleaner way to support
>   // interval + datetime = datetime
>   if (types.size() > (i + 1)) {
> RelDataType type1 = types.get(i + 1);
> if (SqlTypeUtil.isDatetime(type1)) {
>   resultType = type1;
>   return createTypeWithNullability(resultType,
>   nullCount > 0 || nullableCount > 0);
> }
>   }
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22358) Sql. Results of TypeFactory::leastRestrictiveType are incompatible with legal type transformations

2024-05-29 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22358:
--
Description: 
{noformat}
SELECT 1 UNION ALL SELECT '2000-01-01'::DATE
# returns: DATE, DATE
{noformat}

Although a cast from INT to DATE should not be possible according to type 
transformation rules:
{noformat}
SELECT 1::DATE 
# Error: Cast function cannot convert value of type INTEGER to type DATE
{noformat}

This query should also return an error because it is not possible to convert an 
integer into a date.


This happens because Calcite's SqlTypeFactoryImpl::leastRestrictiveSqlType 
contains explicit code that allows int to date conversion and that code 
completely ignores Calcite's TypeConversion rules.

{code:java}
 else if (SqlTypeUtil.isExactNumeric(type)) {
if (SqlTypeUtil.isExactNumeric(resultType)) {
  // TODO: come up with a cleaner way to support
  // interval + datetime = datetime
  if (types.size() > (i + 1)) {
RelDataType type1 = types.get(i + 1);
if (SqlTypeUtil.isDatetime(type1)) {
  resultType = type1;
  return createTypeWithNullability(resultType,
  nullCount > 0 || nullableCount > 0);
}
  }
{code}



  was:
{noformat}
SELECT 1 UNION ALL SELECT '2000-01-01'::DATE
# returns: DATE, DATE
{noformat}

Although a cast from INT to DATE should not be possible according to type 
transformation rules:
{noformat}
SELECT 1::DATE 
# Error: Cast function cannot convert value of type INTEGER to type DATE
{noformat}

This query should also return an error because it is not possible to convert an 
integer into a date.


This happens because Calcite's SqlTypeFactoryImpl::leastRestrictiveSqlType 
contains explicit code that allows int to date conversion and that code 
completely ignores its own TypeConversion rules.

{code:java}
 else if (SqlTypeUtil.isExactNumeric(type)) {
if (SqlTypeUtil.isExactNumeric(resultType)) {
  // TODO: come up with a cleaner way to support
  // interval + datetime = datetime
  if (types.size() > (i + 1)) {
RelDataType type1 = types.get(i + 1);
if (SqlTypeUtil.isDatetime(type1)) {
  resultType = type1;
  return createTypeWithNullability(resultType,
  nullCount > 0 || nullableCount > 0);
}
  }
{code}




> Sql. Results of TypeFactory::leastRestrictiveType are incompatible with legal 
> type transformations
> --
>
> Key: IGNITE-22358
> URL: https://issues.apache.org/jira/browse/IGNITE-22358
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
>
> {noformat}
> SELECT 1 UNION ALL SELECT '2000-01-01'::DATE
> # returns: DATE, DATE
> {noformat}
> Although a cast from INT to DATE should not be possible according to type 
> transformation rules:
> {noformat}
> SELECT 1::DATE 
> # Error: Cast function cannot convert value of type INTEGER to type DATE
> {noformat}
> This query should also return an error because it is not possible to convert 
> an integer into a date.
> This happens because Calcite's SqlTypeFactoryImpl::leastRestrictiveSqlType 
> contains explicit code that allows int to date conversion and that code 
> completely ignores Calcite's TypeConversion rules.
> {code:java}
>  else if (SqlTypeUtil.isExactNumeric(type)) {
> if (SqlTypeUtil.isExactNumeric(resultType)) {
>   // TODO: come up with a cleaner way to support
>   // interval + datetime = datetime
>   if (types.size() > (i + 1)) {
> RelDataType type1 = types.get(i + 1);
> if (SqlTypeUtil.isDatetime(type1)) {
>   resultType = type1;
>   return createTypeWithNullability(resultType,
>   nullCount > 0 || nullableCount > 0);
> }
>   }
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22358) Sql. Results of TypeFactory::leastRestrictiveType are incompatible with legal type transformations

2024-05-29 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-22358:
--
Description: 
{noformat}
SELECT 1 UNION ALL SELECT '2000-01-01'::DATE
# returns: DATE, DATE
{noformat}

Although a cast from INT to DATE should not be possible according to type 
transformation rules:
{noformat}
SELECT 1::DATE 
# Error: Cast function cannot convert value of type INTEGER to type DATE
{noformat}

This query should also return an error because it is not possible to convert an 
integer into a date.



  was:
{noformat}
SELECT 1 UNION ALL SELECT '2000-01-01'::DATE
# returns: DATE, DATE
{noformat}

Although a cast from an INT to DATE should not be possible according to type 
transformation rules:
{noformat}
SELECT 1::DATE 
# Error: Cast function cannot convert value of type INTEGER to type DATE
{noformat}

This query should also return an error because it is not possible to convert an 
INT into a DATE.




> Sql. Results of TypeFactory::leastRestrictiveType are incompatible with legal 
> type transformations
> --
>
> Key: IGNITE-22358
> URL: https://issues.apache.org/jira/browse/IGNITE-22358
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
>
> {noformat}
> SELECT 1 UNION ALL SELECT '2000-01-01'::DATE
> # returns: DATE, DATE
> {noformat}
> Although a cast from INT to DATE should not be possible according to type 
> transformation rules:
> {noformat}
> SELECT 1::DATE 
> # Error: Cast function cannot convert value of type INTEGER to type DATE
> {noformat}
> This query should also return an error because it is not possible to convert 
> an integer into a date.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

