[jira] [Created] (FLINK-23289) BinarySection should null check in constructor method

2021-07-06 Thread Terry Wang (Jira)
Terry Wang created FLINK-23289:
--

 Summary: BinarySection should null check in constructor method
 Key: FLINK-23289
 URL: https://issues.apache.org/jira/browse/FLINK-23289
 Project: Flink
  Issue Type: Improvement
Reporter: Terry Wang



{code:java}
Caused by: java.lang.NullPointerException
    at org.apache.flink.table.data.binary.BinarySegmentUtils.inFirstSegment(BinarySegmentUtils.java:411)
    at org.apache.flink.table.data.binary.BinarySegmentUtils.copyToBytes(BinarySegmentUtils.java:132)
    at org.apache.flink.table.data.binary.BinarySegmentUtils.copyToBytes(BinarySegmentUtils.java:118)
    at org.apache.flink.table.data.binary.BinaryStringData.copy(BinaryStringData.java:360)
    at org.apache.flink.table.runtime.typeutils.StringDataSerializer.copy(StringDataSerializer.java:59)
    at org.apache.flink.table.runtime.typeutils.StringDataSerializer.copy(StringDataSerializer.java:37)
    at org.apache.flink.table.runtime.typeutils.ArrayDataSerializer.copyGenericArray(ArrayDataSerializer.java:128)
    at org.apache.flink.table.runtime.typeutils.ArrayDataSerializer.copy(ArrayDataSerializer.java:86)
    at org.apache.flink.table.runtime.typeutils.ArrayDataSerializer.copy(ArrayDataSerializer.java:47)
    at org.apache.flink.table.runtime.typeutils.RowDataSerializer.copyRowData(RowDataSerializer.java:170)
    at org.apache.flink.table.runtime.typeutils.RowDataSerializer.copy(RowDataSerializer.java:131)
    at org.apache.flink.table.runtime.typeutils.RowDataSerializer.copy(RowDataSerializer.java:48)
    at org.apache.flink.table.runtime.operators.join.lookup.AsyncLookupJoinWithCalcRunner$CalcCollectionCollector.collect(AsyncLookupJoinWithCalcRunner.java:152)
    at org.apache.flink.table.runtime.operators.join.lookup.AsyncLookupJoinWithCalcRunner$CalcCollectionCollector.collect(AsyncLookupJoinWithCalcRunner.java:142)
{code}
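
A minimal sketch of the proposed check (a standalone illustration; field and constructor shapes follow the ticket's intent, not necessarily the actual BinarySection source): fail fast in the constructor with a clear message instead of surfacing the late NullPointerException above from deep inside BinarySegmentUtils.

{code:java}
import org.apache.flink.core.memory.MemorySegment;

// Hedged sketch: the real BinarySection keeps more state; only the proposed
// constructor guard is shown here.
public class BinarySectionSketch {
    private final MemorySegment[] segments;
    private final int offset;
    private final int sizeInBytes;

    public BinarySectionSketch(MemorySegment[] segments, int offset, int sizeInBytes) {
        if (segments == null || segments.length == 0) {
            throw new IllegalArgumentException("segments must not be null or empty");
        }
        this.segments = segments;
        this.offset = offset;
        this.sizeInBytes = sizeInBytes;
    }
}
{code}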




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-21972) Check whether TemporalTableSourceSpec can be serialized or not

2021-03-25 Thread Terry Wang (Jira)
Terry Wang created FLINK-21972:
--

 Summary: Check whether TemporalTableSourceSpec can be serialized or not
 Key: FLINK-21972
 URL: https://issues.apache.org/jira/browse/FLINK-21972
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Terry Wang
 Fix For: 1.13.0


Check whether TemporalTableSourceSpec can be serialized or not.
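
A hedged sketch of the kind of round-trip check this ticket asks for (the plain ObjectMapper wiring, the createSpec() factory, and the equals-based comparison are assumptions, not the planner's actual test harness):

{code:java}
import static org.junit.Assert.assertEquals;

import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.flink.table.planner.plan.nodes.exec.spec.TemporalTableSourceSpec; // package assumed
import org.junit.Test;

public class TemporalTableSourceSpecSerdeTest {

    @Test
    public void testJsonRoundTrip() throws Exception {
        TemporalTableSourceSpec spec = createSpec(); // hypothetical factory for a populated spec
        ObjectMapper mapper = new ObjectMapper();    // the planner configures its own mapper

        // Serialize to JSON, read it back, and check nothing was lost.
        String json = mapper.writeValueAsString(spec);
        TemporalTableSourceSpec deserialized = mapper.readValue(json, TemporalTableSourceSpec.class);

        assertEquals(spec, deserialized);
    }
}
{code}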



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-21868) Support StreamExecLookupJoin json serialization/deserialization

2021-03-19 Thread Terry Wang (Jira)
Terry Wang created FLINK-21868:
--

 Summary: Support StreamExecLookupJoin json 
serialization/deserialization
 Key: FLINK-21868
 URL: https://issues.apache.org/jira/browse/FLINK-21868
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Terry Wang
 Fix For: 1.13.0


Support StreamExecLookupJoin json serialization/deserialization



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-21864) Support StreamExecTemporalJoin json serialization/deserialization

2021-03-18 Thread Terry Wang (Jira)
Terry Wang created FLINK-21864:
--

 Summary: Support StreamExecTemporalJoin json 
serialization/deserialization
 Key: FLINK-21864
 URL: https://issues.apache.org/jira/browse/FLINK-21864
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Terry Wang
 Fix For: 1.13.0


Support StreamExecTemporalJoin json serialization/deserialization



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-21837) Support StreamExecIntervalJoin/StreamExecLookupJoin/StreamExecTemporalJoin json ser/des

2021-03-16 Thread Terry Wang (Jira)
Terry Wang created FLINK-21837:
--

 Summary: Support 
StreamExecIntervalJoin/StreamExecLookupJoin/StreamExecTemporalJoin json ser/des
 Key: FLINK-21837
 URL: https://issues.apache.org/jira/browse/FLINK-21837
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Terry Wang
 Fix For: 1.13.0


Support StreamExecIntervalJoin/StreamExecLookupJoin/StreamExecTemporalJoin json 
ser/des



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-21811) Support StreamExecJoin json serialization/deserialization

2021-03-16 Thread Terry Wang (Jira)
Terry Wang created FLINK-21811:
--

 Summary: Support StreamExecJoin json serialization/deserialization
 Key: FLINK-21811
 URL: https://issues.apache.org/jira/browse/FLINK-21811
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Terry Wang
 Fix For: 1.13.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-21802) LogicalTypeJsonDeserializer/Serializer custom RowType/MapType/ArrayType/MultisetType

2021-03-15 Thread Terry Wang (Jira)
Terry Wang created FLINK-21802:
--

 Summary: LogicalTypeJsonDeserializer/Serializer custom 
RowType/MapType/ArrayType/MultisetType
 Key: FLINK-21802
 URL: https://issues.apache.org/jira/browse/FLINK-21802
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Terry Wang
 Fix For: 1.13.0


We should customize the RowType/MapType/ArrayType/MultisetType serializer/deserializer 
methods so that special LogicalType attributes, such as TimestampType's kind field, survive the round trip.
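
A hedged illustration of the kind of customization meant here (the JSON field names are chosen for the example, not taken from the planner's actual serializer): a Jackson serializer that keeps the attributes a compact string rendering would drop, such as TimestampType's kind.

{code:java}
import java.io.IOException;

import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.SerializerProvider;
import com.fasterxml.jackson.databind.ser.std.StdSerializer;
import org.apache.flink.table.types.logical.TimestampType;

public class TimestampTypeJsonSerializer extends StdSerializer<TimestampType> {

    public TimestampTypeJsonSerializer() {
        super(TimestampType.class);
    }

    @Override
    public void serialize(TimestampType type, JsonGenerator gen, SerializerProvider provider)
            throws IOException {
        gen.writeStartObject();
        gen.writeStringField("type", "TIMESTAMP_WITHOUT_TIME_ZONE");
        gen.writeNumberField("precision", type.getPrecision());
        // The attribute a plain string form loses: ROWTIME / PROCTIME / REGULAR.
        gen.writeStringField("kind", type.getKind().name());
        gen.writeBooleanField("nullable", type.isNullable());
        gen.writeEndObject();
    }
}
{code}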



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-21744) Support StreamExecDeduplicate json serialization/deserialization

2021-03-12 Thread Terry Wang (Jira)
Terry Wang created FLINK-21744:
--

 Summary: Support StreamExecDeduplicate json 
serialization/deserialization
 Key: FLINK-21744
 URL: https://issues.apache.org/jira/browse/FLINK-21744
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Terry Wang






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-17553) Group by constant and window causes error: Unsupported call: TUMBLE_END(TIMESTAMP(3) NOT NULL)

2020-05-07 Thread Terry Wang (Jira)
Terry Wang created FLINK-17553:
--

 Summary: Group by constant and window causes error: Unsupported call: TUMBLE_END(TIMESTAMP(3) NOT NULL)
 Key: FLINK-17553
 URL: https://issues.apache.org/jira/browse/FLINK-17553
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Reporter: Terry Wang






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-17313) Validation error when inserting decimal/timestamp/varchar with precision into a sink using TypeInformation of row

2020-04-21 Thread Terry Wang (Jira)
Terry Wang created FLINK-17313:
--

 Summary: Validation error when inserting decimal/timestamp/varchar with precision into a sink using TypeInformation of row
 Key: FLINK-17313
 URL: https://issues.apache.org/jira/browse/FLINK-17313
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Reporter: Terry Wang


Test code like the following (in blink planner):
{code:java}
tEnv.sqlUpdate("create table randomSource (" +
"   a varchar(10)," 
+
"   b 
decimal(20,2)" +
"   ) with (" +
"   'type' = 
'random'," +
"   'count' = '10'" 
+
"   )");
tEnv.sqlUpdate("create table printSink (" +
"   a varchar(10)," 
+
"   b 
decimal(22,2)," +
"   c 
timestamp(3)," +
"   ) with (" +
"   'type' = 'print'" +
"   )");
tEnv.sqlUpdate("insert into printSink select *, 
current_timestamp from randomSource");
tEnv.execute("");
{code}

The print TableSink implements UpsertStreamTableSink and its getRecordType is as 
follows:


{code:java}
public TypeInformation<Row> getRecordType() {
    return getTableSchema().toRowType();
}
{code}


The varchar type exception is:

{code:java}
org.apache.flink.table.api.ValidationException: Type VARCHAR(10) of table field 'a' does not match with the physical type STRING of the 'a' field of the TableSink consumed type.
    at org.apache.flink.table.utils.TypeMappingUtils.lambda$checkPhysicalLogicalTypeCompatible$4(TypeMappingUtils.java:165)
    at org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:278)
    at org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:255)
    at org.apache.flink.table.types.logical.utils.LogicalTypeDefaultVisitor.visit(LogicalTypeDefaultVisitor.java:67)
    at org.apache.flink.table.types.logical.VarCharType.accept(VarCharType.java:157)
    at org.apache.flink.table.utils.TypeMappingUtils.checkIfCompatible(TypeMappingUtils.java:255)
    at org.apache.flink.table.utils.TypeMappingUtils.checkPhysicalLogicalTypeCompatible(TypeMappingUtils.java:161)
    at org.apache.flink.table.planner.sinks.TableSinkUtils$$anonfun$validateLogicalPhysicalTypesCompatible$1.apply$mcVI$sp(TableSinkUtils.scala:315)
    at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
    at org.apache.flink.table.planner.sinks.TableSinkUtils$.validateLogicalPhysicalTypesCompatible(TableSinkUtils.scala:308)
    at org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:195)
    at org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:191)
    at scala.Option.map(Option.scala:146)
    at org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:191)
    at org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:150)
    at org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:150)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.Iterator$class.foreach(Iterator.scala:891)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
    at scala.collection.AbstractTraversable.map(Traversable.scala:104)
    at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:150)
    at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:863)
    at org.apache.flink.table.api.internal.TableEnvironmentImpl.translateAndClearBuffer(TableEnvironmentImpl.java:855)
    at org.apache.flink.table.api.internal.TableEnvironmentImpl.execute(TableEnvironmentImpl.java:822)
{code}
The validation failures for the other types are similar. I dug into it and found it's caused by

[jira] [Created] (FLINK-17263) Remove RepeatFamilyOperandTypeChecker in blink planner and replace it with calcite's CompositeOperandTypeChecker

2020-04-20 Thread Terry Wang (Jira)
Terry Wang created FLINK-17263:
--

 Summary: Remove RepeatFamilyOperandTypeChecker in blink planner and replace it with calcite's CompositeOperandTypeChecker
 Key: FLINK-17263
 URL: https://issues.apache.org/jira/browse/FLINK-17263
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Affects Versions: 1.11.0
Reporter: Terry Wang


Remove RepeatFamilyOperandTypeChecker in blink planner and replace it with 
calcite's CompositeOperandTypeChecker.
CompositeOperandTypeChecker appears to cover a superset of what 
RepeatFamilyOperandTypeChecker can do, so this refactoring keeps the code 
easier to read.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-17152) FunctionDefinitionUtil generates wrong resultType and acc type of AggregateFunctionDefinition

2020-04-15 Thread Terry Wang (Jira)
Terry Wang created FLINK-17152:
--

 Summary: FunctionDefinitionUtil generates wrong resultType and acc type of AggregateFunctionDefinition
 Key: FLINK-17152
 URL: https://issues.apache.org/jira/browse/FLINK-17152
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Runtime
Affects Versions: 1.10.0
Reporter: Terry Wang


FunctionDefinitionUtil generates the wrong result type and accumulator type for 
AggregateFunctionDefinition. This bug leads to unexpected errors such as:
Field types of query result and registered TableSink do not match.
Query schema: [v: RAW(IAccumulator, ?)]
Sink schema:  [v: STRING]





--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-16924) TableEnvironment#sqlUpdate throw NPE when called in async thread

2020-04-01 Thread Terry Wang (Jira)
Terry Wang created FLINK-16924:
--

 Summary: TableEnvironment#sqlUpdate throw NPE when called in async 
thread
 Key: FLINK-16924
 URL: https://issues.apache.org/jira/browse/FLINK-16924
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.11.0
Reporter: Terry Wang
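
A hypothetical reproduction sketch based on the summary (table names and the executor wiring are invented for illustration; the TableEnvironment is assumed to have been created on the main thread):

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical reproduction: call sqlUpdate from a thread other than the one
// that created the TableEnvironment.
ExecutorService executor = Executors.newSingleThreadExecutor();
executor.submit(() -> tEnv.sqlUpdate("INSERT INTO sinkTable SELECT * FROM sourceTable"))
        .get(); // the NPE surfaces when the future is resolved
executor.shutdown();
{code}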






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-16506) Sql

2020-03-09 Thread Terry Wang (Jira)
Terry Wang created FLINK-16506:
--

 Summary: Sql
 Key: FLINK-16506
 URL: https://issues.apache.org/jira/browse/FLINK-16506
 Project: Flink
  Issue Type: Bug
Reporter: Terry Wang






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-16414) create udaf/udtf function using sql causing ValidationException: SQL validation failed. null

2020-03-03 Thread Terry Wang (Jira)
Terry Wang created FLINK-16414:
--

 Summary: create udaf/udtf function using sql causing ValidationException: SQL validation failed. null
 Key: FLINK-16414
 URL: https://issues.apache.org/jira/browse/FLINK-16414
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.10.0
Reporter: Terry Wang


When using TableEnvironment.sqlUpdate() to create a UDAF or UDTF function, 
creation succeeds even if the function doesn't override the getResultType() 
method. But when the function is then used in an insert statement, an exception 
like the following is thrown:

Exception in thread "main" org.apache.flink.table.api.ValidationException: SQL validation failed. null
    at org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:130)
    at org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:105)
    at org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:127)
    at org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlInsert(SqlToOperationConverter.java:342)
    at org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:142)
    at org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:66)
    at org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlUpdate(TableEnvironmentImpl.java:484)

The reason is in FunctionDefinitionUtil#createFunctionDefinition: we shouldn't 
call t.getResultType(), a.getAccumulatorType(), or a.getResultType() directly, 
but should use UserDefinedFunctionHelper#getReturnTypeOfTableFunction, 
UserDefinedFunctionHelper#getAccumulatorTypeOfAggregateFunction, and 
UserDefinedFunctionHelper#getReturnTypeOfAggregateFunction instead.
{code:java}
if (udf instanceof ScalarFunction) {
    return new ScalarFunctionDefinition(
        name,
        (ScalarFunction) udf);
} else if (udf instanceof TableFunction) {
    TableFunction t = (TableFunction) udf;
    return new TableFunctionDefinition(
        name,
        t,
        t.getResultType());
} else if (udf instanceof AggregateFunction) {
    AggregateFunction a = (AggregateFunction) udf;
    return new AggregateFunctionDefinition(
        name,
        a,
        a.getAccumulatorType(),
        a.getResultType());
} else if (udf instanceof TableAggregateFunction) {
    TableAggregateFunction a = (TableAggregateFunction) udf;
    return new TableAggregateFunctionDefinition(
        name,
        a,
        a.getAccumulatorType(),
        a.getResultType());
}
{code}
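
A hedged sketch of the suggested change (not the committed patch; the surrounding branches are paraphrased from the snippet above): derive the types through UserDefinedFunctionHelper so that functions which don't override the getters still resolve correctly.

{code:java}
} else if (udf instanceof TableFunction) {
    TableFunction t = (TableFunction) udf;
    return new TableFunctionDefinition(
        name,
        t,
        UserDefinedFunctionHelper.getReturnTypeOfTableFunction(t));
} else if (udf instanceof AggregateFunction) {
    AggregateFunction a = (AggregateFunction) udf;
    return new AggregateFunctionDefinition(
        name,
        a,
        UserDefinedFunctionHelper.getAccumulatorTypeOfAggregateFunction(a),
        UserDefinedFunctionHelper.getReturnTypeOfAggregateFunction(a));
}
{code}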






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15552) SQL Client can not correctly create kafka table using --library to indicate a kafka connector directory

2020-01-10 Thread Terry Wang (Jira)
Terry Wang created FLINK-15552:
--

 Summary: SQL Client can not correctly create kafka table using 
--library to indicate a kafka connector directory
 Key: FLINK-15552
 URL: https://issues.apache.org/jira/browse/FLINK-15552
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Client, Table SQL / Runtime
Reporter: Terry Wang


How to Reproduce:
First, I start the SQL client and use `-l` to point to a kafka connector 
directory.

`
 bin/sql-client.sh embedded -l /xx/connectors/kafka/

`

Then, I create a Kafka table like the following:
`
Flink SQL> CREATE TABLE MyUserTable (
>   content String
> ) WITH (
>   'connector.type' = 'kafka',
>   'connector.version' = 'universal',
>   'connector.topic' = 'test',
>   'connector.properties.zookeeper.connect' = 'localhost:2181',
>   'connector.properties.bootstrap.servers' = 'localhost:9092',
>   'connector.properties.group.id' = 'testGroup',
>   'connector.startup-mode' = 'earliest-offset',
>   'format.type' = 'csv'
>  );
[INFO] Table has been created.
`

Then I select from the just-created table and an exception is thrown:

`
Flink SQL> select * from MyUserTable;
[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a 
suitable table factory for 
'org.apache.flink.table.factories.TableSourceFactory' in
the classpath.

Reason: Required context properties mismatch.

The matching candidates:
org.apache.flink.table.sources.CsvBatchTableSourceFactory
Mismatched properties:
'connector.type' expects 'filesystem', but is 'kafka'

The following properties are requested:
connector.properties.bootstrap.servers=localhost:9092
connector.properties.group.id=testGroup
connector.properties.zookeeper.connect=localhost:2181
connector.startup-mode=earliest-offset
connector.topic=test
connector.type=kafka
connector.version=universal
format.type=csv
schema.0.data-type=VARCHAR(2147483647)
schema.0.name=content

The following factories have been considered:
org.apache.flink.table.sources.CsvBatchTableSourceFactory
org.apache.flink.table.sources.CsvAppendTableSourceFactory
`
Potential reason:
We currently use `TableFactoryUtil#findAndCreateTableSource` to convert a 
CatalogTable to a TableSource, but when calling `TableFactoryService.find` we 
don't pass the current classLoader to the method; the default loader will be 
the bootstrap ClassLoader, which cannot find our factory.
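
A hedged sketch of the suspected fix (the surrounding lookup code is paraphrased, not the actual SQL Client source): pass the user classloader explicitly when locating the factory.

{code:java}
// Hedged sketch: look up the factory with the classloader that loaded the
// jars added via --library/-l, instead of the default lookup that misses them.
Map<String, String> properties = catalogTable.toProperties();
TableSourceFactory factory = TableFactoryService.find(
    TableSourceFactory.class,
    properties,
    userClassLoader); // the classloader holding the -l jars (assumption)
TableSource<?> tableSource = factory.createTableSource(properties);
{code}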



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15544) Upgrade http-core version to avoid potential DeadLock problem

2020-01-09 Thread Terry Wang (Jira)
Terry Wang created FLINK-15544:
--

 Summary: Upgrade http-core version to avoid potential DeadLock 
problem
 Key: FLINK-15544
 URL: https://issues.apache.org/jira/browse/FLINK-15544
 Project: Flink
  Issue Type: Bug
  Components: Build System
Reporter: Terry Wang


Due to a bug in http-core 4.4.6 (the version we currently use), 
https://issues.apache.org/jira/browse/HTTPCORE-446, a deadlock may occur. We 
should upgrade the version.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15429) Reading a null value of timestamp type from a hive table throws an NPE

2019-12-27 Thread Terry Wang (Jira)
Terry Wang created FLINK-15429:
--

 Summary: Reading a null value of timestamp type from a hive table throws an NPE
 Key: FLINK-15429
 URL: https://issues.apache.org/jira/browse/FLINK-15429
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Hive
Affects Versions: 1.10.0
Reporter: Terry Wang
 Fix For: 1.10.0


When there is a null value of timestamp type in a hive table, an exception like 
the following occurs:


{code:java}
Caused by: org.apache.flink.table.api.TableException: Exception in writeRecord
    at org.apache.flink.table.filesystem.FileSystemOutputFormat.writeRecord(FileSystemOutputFormat.java:122)
    at org.apache.flink.streaming.api.functions.sink.OutputFormatSinkFunction.invoke(OutputFormatSinkFunction.java:87)
    at org.apache.flink.streaming.api.functions.sink.SinkFunction.invoke(SinkFunction.java:52)
    at org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:56)
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$ChainingOutput.pushToOperator(OperatorChain.java:550)
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$ChainingOutput.collect(OperatorChain.java:527)
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$ChainingOutput.collect(OperatorChain.java:487)
    at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:730)
    at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:708)
    at SinkConversion$1.processElement(Unknown Source)
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$ChainingOutput.pushToOperator(OperatorChain.java:550)
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$ChainingOutput.collect(OperatorChain.java:527)
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$ChainingOutput.collect(OperatorChain.java:487)
    at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:730)
    at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:708)
    at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
    at org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction.run(InputFormatSourceFunction.java:93)
    at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100)
    at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63)
    at org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:196)
Caused by: java.lang.NullPointerException
    at org.apache.flink.table.catalog.hive.client.HiveShimV100.ensureSupportedFlinkTimestamp(HiveShimV100.java:386)
    at org.apache.flink.table.catalog.hive.client.HiveShimV100.toHiveTimestamp(HiveShimV100.java:357)
    at org.apache.flink.table.functions.hive.conversion.HiveInspectors.lambda$getConversion$b054b59b$1(HiveInspectors.java:216)
    at org.apache.flink.table.functions.hive.conversion.HiveInspectors.lambda$getConversion$7f882244$1(HiveInspectors.java:172)
    at org.apache.flink.connectors.hive.HiveOutputFormatFactory$HiveOutputFormat.getConvertedRow(HiveOutputFormatFactory.java:190)
    at org.apache.flink.connectors.hive.HiveOutputFormatFactory$HiveOutputFormat.writeRecord(HiveOutputFormatFactory.java:206)
    at org.apache.flink.connectors.hive.HiveOutputFormatFactory$HiveOutputFormat.writeRecord(HiveOutputFormatFactory.java:178)
    at org.apache.flink.table.filesystem.SingleDirectoryWriter.write(SingleDirectoryWriter.java:52)
    at org.apache.flink.table.filesystem.FileSystemOutputFormat.writeRecord(FileSystemOutputFormat.java:120)
    ... 19 more
{code}



We should add a null check in HiveShimV100#ensureSupportedFlinkTimestamp and 
return a proper value.
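
A minimal sketch of the proposed check (method shape assumed from the stack trace, not copied from HiveShimV100): short-circuit on null before converting, so a NULL timestamp column propagates as SQL NULL instead of an NPE.

{code:java}
// Hedged sketch (signature assumed): guard the conversion entry point.
public Object toHiveTimestamp(Object flinkTimestamp) {
    if (flinkTimestamp == null) {
        return null; // propagate SQL NULL instead of dereferencing it
    }
    ensureSupportedFlinkTimestamp(flinkTimestamp); // existing check, now NPE-safe
    return doConvert(flinkTimestamp);              // hypothetical name for the existing conversion
}
{code}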



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15398) Correct catalog doc example mistake

2019-12-25 Thread Terry Wang (Jira)
Terry Wang created FLINK-15398:
--

 Summary: Correct catalog doc example mistake
 Key: FLINK-15398
 URL: https://issues.apache.org/jira/browse/FLINK-15398
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Affects Versions: 1.10.0
Reporter: Terry Wang


https://ci.apache.org/projects/flink/flink-docs-master/dev/table/catalogs.html#how-to-create-and-register-flink-tables-to-catalog
Currently `show tables` is not supported through the TableEnvironment.sqlQuery() 
method; we should correct the example.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15242) Add doc to introduce ddls or dmls supported by sql cli

2019-12-13 Thread Terry Wang (Jira)
Terry Wang created FLINK-15242:
--

 Summary: Add doc to introduce ddls or dmls supported by sql cli
 Key: FLINK-15242
 URL: https://issues.apache.org/jira/browse/FLINK-15242
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation
Affects Versions: 1.10.0
Reporter: Terry Wang
 Fix For: 1.10.0


The SQL Client documentation
(https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sqlClient.html)
currently has no section that introduces the supported DDLs/DMLs as a whole. We 
should complete it before the 1.10 release.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15148) Add doc for create/drop/alter database ddl

2019-12-09 Thread Terry Wang (Jira)
Terry Wang created FLINK-15148:
--

 Summary: Add doc for create/drop/alter database ddl
 Key: FLINK-15148
 URL: https://issues.apache.org/jira/browse/FLINK-15148
 Project: Flink
  Issue Type: Sub-task
Reporter: Terry Wang






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15147) Add doc for alter table set properties and rename table

2019-12-09 Thread Terry Wang (Jira)
Terry Wang created FLINK-15147:
--

 Summary: Add doc for alter table set properties and rename table
 Key: FLINK-15147
 URL: https://issues.apache.org/jira/browse/FLINK-15147
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation
Reporter: Terry Wang






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15114) Add execute result info for alter/create/drop database in sql client.

2019-12-06 Thread Terry Wang (Jira)
Terry Wang created FLINK-15114:
--

 Summary: Add execute result info for alter/create/drop database in 
sql client.
 Key: FLINK-15114
 URL: https://issues.apache.org/jira/browse/FLINK-15114
 Project: Flink
  Issue Type: Bug
Reporter: Terry Wang
 Fix For: 1.10.0


Add execution result info for alter/create/drop database statements in sql-client.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15061) create/alter table/databases properties should be case sensitive stored in catalog

2019-12-04 Thread Terry Wang (Jira)
Terry Wang created FLINK-15061:
--

 Summary: create/alter table/databases properties should be case 
sensitive stored in catalog
 Key: FLINK-15061
 URL: https://issues.apache.org/jira/browse/FLINK-15061
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Reporter: Terry Wang
 Fix For: 1.10.0


Currently, in the class `SqlToOperationConverter`, the create-table logic 
converts all property keys to lower case, which causes the properties stored in 
the catalog to lose their original casing and makes them unintuitive for users 
to read.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15005) Change CatalogTableStats.UNKNOWN and HiveStatsUtil stats default value

2019-12-01 Thread Terry Wang (Jira)
Terry Wang created FLINK-15005:
--

 Summary: Change CatalogTableStats.UNKNOWN and HiveStatsUtil stats default value
 Key: FLINK-15005
 URL: https://issues.apache.org/jira/browse/FLINK-15005
 Project: Flink
  Issue Type: Bug
Reporter: Terry Wang






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14965) CatalogTableStatistics UNKNOWN should be consistent with TableStats UNKNOWN

2019-11-26 Thread Terry Wang (Jira)
Terry Wang created FLINK-14965:
--

 Summary: CatalogTableStatistics UNKNOWN should be consistent with 
TableStats UNKNOWN
 Key: FLINK-14965
 URL: https://issues.apache.org/jira/browse/FLINK-14965
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.10.0
Reporter: Terry Wang


The UNKNOWN stats in
{code:java}
// org.apache.flink.table.catalog.stats
public class CatalogTableStatistics {
    public static final CatalogTableStatistics UNKNOWN =
            new CatalogTableStatistics(0, 0, 0, 0);
}
{code}
and
{code:java}
// org.apache.flink.table.plan.stats
public final class TableStats {
    public static final TableStats UNKNOWN = new TableStats(-1, new HashMap<>());
}
{code}
are not consistent, which will cause some unexpected CBO (cost-based optimizer) behavior.





--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14932) Support table related DDLs that need return values in TableEnvironment

2019-11-22 Thread Terry Wang (Jira)
Terry Wang created FLINK-14932:
--

 Summary: Support table related DDLs that need return values in TableEnvironment
 Key: FLINK-14932
 URL: https://issues.apache.org/jira/browse/FLINK-14932
 Project: Flink
  Issue Type: Sub-task
Reporter: Terry Wang


1. showTablesStatement:
SHOW TABLES

2. descTableStatement:
DESCRIBE [EXTENDED] [[catalogName.]databaseName.]tableName



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14879) Support database related DDLs that need return values in TableEnvironment

2019-11-20 Thread Terry Wang (Jira)
Terry Wang created FLINK-14879:
--

 Summary: Support database related DDLs that need return values in TableEnvironment
 Key: FLINK-14879
 URL: https://issues.apache.org/jira/browse/FLINK-14879
 Project: Flink
  Issue Type: Sub-task
Reporter: Terry Wang


1. showDatabasesStatement:
SHOW DATABASES

2. descDatabaseStatement:
DESCRIBE DATABASE [EXTENDED] [catalogName.]databaseName

The statements above should be supported in TableEnvironment after FLIP-84 is completed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14878) Support `use catalog` through sqlUpdate() method in TableEnvironment

2019-11-20 Thread Terry Wang (Jira)
Terry Wang created FLINK-14878:
--

 Summary: Support `use catalog` through sqlUpdate() method in TableEnvironment
 Key: FLINK-14878
 URL: https://issues.apache.org/jira/browse/FLINK-14878
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Reporter: Terry Wang


Support `USE CATALOG catalogName` through `sqlUpdate()` method in 
TableEnvironment
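
A hedged usage sketch of what this would look like once supported (the catalog name is illustrative):

{code:java}
// Illustrative only: switch the current catalog via SQL instead of
// tEnv.useCatalog("myHiveCatalog").
tEnv.sqlUpdate("USE CATALOG myHiveCatalog");
{code}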



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14721) HiveTableSource should implement LimitableTableSource interface

2019-11-12 Thread Terry Wang (Jira)
Terry Wang created FLINK-14721:
--

 Summary: HiveTableSource should implement LimitableTableSource 
interface
 Key: FLINK-14721
 URL: https://issues.apache.org/jira/browse/FLINK-14721
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Hive
Reporter: Terry Wang


Currently HiveTableSource doesn't implement LimitableTableSource, which causes 
a huge waste of resources and time in queries like `select * from 
hiveSourceTable limit 10`.
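
A hedged fragment of the two methods the LimitableTableSource interface requires (HiveTableSource's internal field and copy helper are assumptions for illustration): once implemented, the planner can push the limit into the source so the scan stops early instead of reading the whole table.

{code:java}
// Hedged fragment, not the actual implementation; a pushed-down
// limit of -1 means "no limit" here (assumption).
private long limit = -1L;

@Override
public boolean isLimitPushedDown() {
    return limit >= 0;
}

@Override
public TableSource<Row> applyLimit(long limit) {
    // Return a copy of the source configured to read at most `limit` rows.
    return copyWithLimit(limit); // hypothetical copy helper
}
{code}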



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14692) Support Table related DDLs in TableEnvironment

2019-11-09 Thread Terry Wang (Jira)
Terry Wang created FLINK-14692:
--

 Summary: Support Table related DDLs in TableEnvironment
 Key: FLINK-14692
 URL: https://issues.apache.org/jira/browse/FLINK-14692
 Project: Flink
  Issue Type: Sub-task
Reporter: Terry Wang






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14691) Support database related DDLs in TableEnvironment

2019-11-09 Thread Terry Wang (Jira)
Terry Wang created FLINK-14691:
--

 Summary: Support database related DDLs in TableEnvironment
 Key: FLINK-14691
 URL: https://issues.apache.org/jira/browse/FLINK-14691
 Project: Flink
  Issue Type: Sub-task
Reporter: Terry Wang






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14690) Support catalog related DDLs in TableEnvironment

2019-11-09 Thread Terry Wang (Jira)
Terry Wang created FLINK-14690:
--

 Summary: Support catalog related DDLs in TableEnvironment
 Key: FLINK-14690
 URL: https://issues.apache.org/jira/browse/FLINK-14690
 Project: Flink
  Issue Type: Sub-task
Reporter: Terry Wang






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14689) Add catalog related DDLs support in SQL Parser

2019-11-09 Thread Terry Wang (Jira)
Terry Wang created FLINK-14689:
--

 Summary: Add catalog related DDLs support in SQL Parser
 Key: FLINK-14689
 URL: https://issues.apache.org/jira/browse/FLINK-14689
 Project: Flink
  Issue Type: Sub-task
Reporter: Terry Wang


1. showCatalogsStatement
SHOW CATALOGS

2. describeCatalogStatement
DESCRIBE CATALOG catalogName

3. useCatalogStatement
USE CATALOG catalogName



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14688) Add table related

2019-11-09 Thread Terry Wang (Jira)
Terry Wang created FLINK-14688:
--

 Summary: Add table related
 Key: FLINK-14688
 URL: https://issues.apache.org/jira/browse/FLINK-14688
 Project: Flink
  Issue Type: Sub-task
Reporter: Terry Wang






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14686) Flink SQL DDL Enhancement

2019-11-09 Thread Terry Wang (Jira)
Terry Wang created FLINK-14686:
--

 Summary: Flink SQL DDL Enhancement
 Key: FLINK-14686
 URL: https://issues.apache.org/jira/browse/FLINK-14686
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / API, Table SQL / Client
Affects Versions: 1.10.0
Reporter: Terry Wang


We would like to achieve the following goals as part of FLIP-69.

 - Add Catalog DDL enhancement support
 - Add Database DDL enhancement support
 - Add Table DDL enhancement support

This is the parent Jira for subtasks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-13896) Scala 2.11 maven compile should target Java 1.8

2019-08-29 Thread Terry Wang (Jira)
Terry Wang created FLINK-13896:
--

 Summary: Scala 2.11 maven compile should target Java 1.8
 Key: FLINK-13896
 URL: https://issues.apache.org/jira/browse/FLINK-13896
 Project: Flink
  Issue Type: Bug
  Components: Build System
Affects Versions: 1.9.0
 Environment: When setting up a TableEnvironment in Scala as follows:

 
{code:java}
// We can reproduce this problem by putting the following code in
// org.apache.flink.table.api.scala.internal.StreamTableEnvironmentImplTest.

@Test
def testCreateEnvironment(): Unit = {
  val settings =
    EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build()
  val tEnv = TableEnvironment.create(settings)
}
{code}

Then mvn test fails with an error message like:

{code}
error: Static methods in interface require -target:JVM-1.8
{code}

We can fix this bug by adding

{code:xml}
<args>
  <arg>-target:jvm-1.8</arg>
</args>
{code}

to the scala-maven-plugin configuration.

Reporter: Terry Wang






--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (FLINK-13869) Hive built-in function can not work in blink planner stream mode

2019-08-27 Thread Terry Wang (Jira)
Terry Wang created FLINK-13869:
--

 Summary: Hive built-in function can not work in blink planner 
stream mode
 Key: FLINK-13869
 URL: https://issues.apache.org/jira/browse/FLINK-13869
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Hive, Table SQL / Planner
Affects Versions: 1.9.0
 Environment: method call stack:

!image-2019-08-27-15-37-11-230.png!
Reporter: Terry Wang
 Fix For: 1.10.0
 Attachments: image-2019-08-27-15-36-57-662.png, 
image-2019-08-27-15-37-11-230.png

When the StreamTableEnvironment is created through EnvironmentSettings with the 
blink planner and a Hive built-in UDF is used in the Table API, the error shown 
in the attached images is reported. Comparing the call stack with the flink 
planner shows that setArgumentTypeAndConstants is not called to initialize the 
function, which leads to the null pointer.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (FLINK-13610) Refactor HiveTableSourceTest to use sql query and remove HiveInputFormatTest

2019-08-07 Thread Terry Wang (JIRA)
Terry Wang created FLINK-13610:
--

 Summary: Refactor HiveTableSourceTest to use sql query and remove HiveInputFormatTest
 Key: FLINK-13610
 URL: https://issues.apache.org/jira/browse/FLINK-13610
 Project: Flink
  Issue Type: Test
  Components: Connectors / Hive
Affects Versions: 1.10.0
Reporter: Terry Wang


Since HiveTableSource is mainly used in SQL queries and the blink planner now 
supports running SQL queries, it's time to change HiveTableSourceTest to use 
SQL instead of the Table API.

HiveTableInputFormat is already exercised by HiveTableSourceTest, so there is 
redundant code; this ticket also aims to move some test code from 
HiveInputFormatTest and then remove that file.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)