[jira] [Created] (FLINK-20446) NoMatchingTableFactoryException

2020-12-01 Thread Ke Li (Jira)
Ke Li created FLINK-20446:
-

 Summary: NoMatchingTableFactoryException
 Key: FLINK-20446
 URL: https://issues.apache.org/jira/browse/FLINK-20446
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Client
Affects Versions: 1.11.2
 Environment: * Version:1.11.2
Reporter: Ke Li


When I start the SQL Client with a session environment file, an error is reported. The command is as follows:
{code:java}
./sql-client.sh embedded -e /root/flink-sql-client/sql-client-demo.yml
{code}
sql-client-demo.yml:
{code:java}
tables:
  - name: SourceTable
    type: source-table
    update-mode: append
    connector:
      type: datagen
      rows-per-second: 5
      fields:
        f_sequence:
          kind: sequence
          start: 1
          end: 1000
        f_random:
          min: 1
          max: 1000
        f_random_str:
          length: 10
    schema:
      - name: f_sequence
        data-type: INT
      - name: f_random
        data-type: INT
      - name: f_random_str
        data-type: STRING
{code}
The error is as follows:
{code:java}
No default environment specified.
Searching for '/data/data_gas/flink/flink-1.11.2/conf/sql-client-defaults.yaml'...found.
Reading default environment from: file:/data/data_gas/flink/flink-1.11.2/conf/sql-client-defaults.yaml
Reading session environment from: file:/root/flink-sql-client/sql-client-demo.yml
Exception in thread "main" org.apache.flink.table.client.SqlClientException: Unexpected exception. This is a bug. Please consider filing an issue.
	at org.apache.flink.table.client.SqlClient.main(SqlClient.java:213)
Caused by: org.apache.flink.table.client.gateway.SqlExecutionException: Could not create execution context.
	at org.apache.flink.table.client.gateway.local.ExecutionContext$Builder.build(ExecutionContext.java:870)
	at org.apache.flink.table.client.gateway.local.LocalExecutor.openSession(LocalExecutor.java:227)
	at org.apache.flink.table.client.SqlClient.start(SqlClient.java:108)
	at org.apache.flink.table.client.SqlClient.main(SqlClient.java:201)
Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.factories.TableSourceFactory' in the classpath.

Reason: Required context properties mismatch.

The matching candidates:
org.apache.flink.table.sources.CsvAppendTableSourceFactory
Mismatched properties:
'connector.type' expects 'filesystem', but is 'datagen'
'format.type' expects 'csv', but is 'json'

The following properties are requested:
connector.fields.f_random.max=1000
connector.fields.f_random.min=1
connector.fields.f_random_str.length=10
connector.fields.f_sequence.end=1000
connector.fields.f_sequence.kind=sequence
connector.fields.f_sequence.start=1
connector.rows-per-second=5
connector.type=datagen
format.type=json
schema.0.data-type=INT
schema.0.name=f_sequence
schema.1.data-type=INT
schema.1.name=f_random
schema.2.data-type=STRING
schema.2.name=f_random_str
update-mode=append

The following factories have been considered:
org.apache.flink.streaming.connectors.kafka.KafkaTableSourceSinkFactory
org.apache.flink.connector.jdbc.table.JdbcTableSourceSinkFactory
org.apache.flink.table.sources.CsvBatchTableSourceFactory
org.apache.flink.table.sources.CsvAppendTableSourceFactory
org.apache.flink.table.filesystem.FileSystemTableFactory
	at org.apache.flink.table.factories.TableFactoryService.filterByContext(TableFactoryService.java:322)
	at org.apache.flink.table.factories.TableFactoryService.filter(TableFactoryService.java:190)
	at org.apache.flink.table.factories.TableFactoryService.findSingleInternal(TableFactoryService.java:143)
	at org.apache.flink.table.factories.TableFactoryService.find(TableFactoryService.java:113)
	at org.apache.flink.table.client.gateway.local.ExecutionContext.createTableSource(ExecutionContext.java:384)
	at org.apache.flink.table.client.gateway.local.ExecutionContext.lambda$initializeCatalogs$7(ExecutionContext.java:638)
	at java.util.LinkedHashMap.forEach(LinkedHashMap.java:684)
	at org.apache.flink.table.client.gateway.local.ExecutionContext.initializeCatalogs(ExecutionContext.java:636)
	at org.apache.flink.table.client.gateway.local.ExecutionContext.initializeTableEnvironment(ExecutionContext.java:523)
	at org.apache.flink.table.client.gateway.local.ExecutionContext.<init>(ExecutionContext.java:183)
	at org.apache.flink.table.client.gateway.local.ExecutionContext.<init>(ExecutionContext.java:136)
	at org.apache.flink.table.client.gateway.local.ExecutionContext$Builder.build(ExecutionContext.java:859)
	... 3 more
{code}
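As the mismatch list shows, only the legacy factories (CSV, filesystem, Kafka, JDBC) are matched against the YAML environment file, and none of them accepts connector.type=datagen. In Flink 1.11 the datagen connector is exposed through the newer DDL-based factory stack, so a likely workaround (a sketch, assuming the 1.11 datagen option names) is to declare the table with DDL inside the SQL Client instead of the YAML file:
{code:sql}
-- Same table as in sql-client-demo.yml, declared via DDL;
-- the option keys mirror the connector.fields.* properties in the error output.
CREATE TABLE SourceTable (
  f_sequence INT,
  f_random INT,
  f_random_str STRING
) WITH (
  'connector' = 'datagen',
  'rows-per-second' = '5',
  'fields.f_sequence.kind' = 'sequence',
  'fields.f_sequence.start' = '1',
  'fields.f_sequence.end' = '1000',
  'fields.f_random.min' = '1',
  'fields.f_random.max' = '1000',
  'fields.f_random_str.length' = '10'
);
{code}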



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-20445) NoMatchingTableFactoryException

2020-12-01 Thread Ke Li (Jira)
Ke Li created FLINK-20445:
-

 Summary: NoMatchingTableFactoryException
 Key: FLINK-20445
 URL: https://issues.apache.org/jira/browse/FLINK-20445
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Client
Affects Versions: 1.11.2
 Environment: * Version:1.11.2
Reporter: Ke Li


When I start the SQL Client with a session environment file, an error is reported. The command is as follows:
{code:java}
./sql-client.sh embedded -e /root/flink-sql-client/sql-client-demo.yml
{code}
sql-client-demo.yml:
{code:java}
tables:
  - name: SourceTable
    type: source-table
    update-mode: append
    connector:
      type: datagen
      rows-per-second: 5
      fields:
        f_sequence:
          kind: sequence
          start: 1
          end: 1000
        f_random:
          min: 1
          max: 1000
        f_random_str:
          length: 10
    schema:
      - name: f_sequence
        data-type: INT
      - name: f_random
        data-type: INT
      - name: f_random_str
        data-type: STRING
{code}
The error is as follows:
{code:java}
No default environment specified.
Searching for '/data/data_gas/flink/flink-1.11.2/conf/sql-client-defaults.yaml'...found.
Reading default environment from: file:/data/data_gas/flink/flink-1.11.2/conf/sql-client-defaults.yaml
Reading session environment from: file:/root/flink-sql-client/sql-client-demo.yml
Exception in thread "main" org.apache.flink.table.client.SqlClientException: Unexpected exception. This is a bug. Please consider filing an issue.
	at org.apache.flink.table.client.SqlClient.main(SqlClient.java:213)
Caused by: org.apache.flink.table.client.gateway.SqlExecutionException: Could not create execution context.
	at org.apache.flink.table.client.gateway.local.ExecutionContext$Builder.build(ExecutionContext.java:870)
	at org.apache.flink.table.client.gateway.local.LocalExecutor.openSession(LocalExecutor.java:227)
	at org.apache.flink.table.client.SqlClient.start(SqlClient.java:108)
	at org.apache.flink.table.client.SqlClient.main(SqlClient.java:201)
Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.factories.TableSourceFactory' in the classpath.

Reason: Required context properties mismatch.

The matching candidates:
org.apache.flink.table.sources.CsvAppendTableSourceFactory
Mismatched properties:
'connector.type' expects 'filesystem', but is 'datagen'
'format.type' expects 'csv', but is 'json'

The following properties are requested:
connector.fields.f_random.max=1000
connector.fields.f_random.min=1
connector.fields.f_random_str.length=10
connector.fields.f_sequence.end=1000
connector.fields.f_sequence.kind=sequence
connector.fields.f_sequence.start=1
connector.rows-per-second=5
connector.type=datagen
format.type=json
schema.0.data-type=INT
schema.0.name=f_sequence
schema.1.data-type=INT
schema.1.name=f_random
schema.2.data-type=STRING
schema.2.name=f_random_str
update-mode=append

The following factories have been considered:
org.apache.flink.streaming.connectors.kafka.KafkaTableSourceSinkFactory
org.apache.flink.connector.jdbc.table.JdbcTableSourceSinkFactory
org.apache.flink.table.sources.CsvBatchTableSourceFactory
org.apache.flink.table.sources.CsvAppendTableSourceFactory
org.apache.flink.table.filesystem.FileSystemTableFactory
	at org.apache.flink.table.factories.TableFactoryService.filterByContext(TableFactoryService.java:322)
	at org.apache.flink.table.factories.TableFactoryService.filter(TableFactoryService.java:190)
	at org.apache.flink.table.factories.TableFactoryService.findSingleInternal(TableFactoryService.java:143)
	at org.apache.flink.table.factories.TableFactoryService.find(TableFactoryService.java:113)
	at org.apache.flink.table.client.gateway.local.ExecutionContext.createTableSource(ExecutionContext.java:384)
	at org.apache.flink.table.client.gateway.local.ExecutionContext.lambda$initializeCatalogs$7(ExecutionContext.java:638)
	at java.util.LinkedHashMap.forEach(LinkedHashMap.java:684)
	at org.apache.flink.table.client.gateway.local.ExecutionContext.initializeCatalogs(ExecutionContext.java:636)
	at org.apache.flink.table.client.gateway.local.ExecutionContext.initializeTableEnvironment(ExecutionContext.java:523)
	at org.apache.flink.table.client.gateway.local.ExecutionContext.<init>(ExecutionContext.java:183)
	at org.apache.flink.table.client.gateway.local.ExecutionContext.<init>(ExecutionContext.java:136)
	at org.apache.flink.table.client.gateway.local.ExecutionContext$Builder.build(ExecutionContext.java:859)
	... 3 more
{code}





[jira] [Created] (FLINK-13395) Add source and sink connector for Aliyun Log Service

2019-07-23 Thread Ke Li (JIRA)
Ke Li created FLINK-13395:
-

 Summary: Add source and sink connector for Aliyun Log Service
 Key: FLINK-13395
 URL: https://issues.apache.org/jira/browse/FLINK-13395
 Project: Flink
  Issue Type: New Feature
  Components: Connectors / Common
Reporter: Ke Li


 Aliyun Log Service is a storage service that is widely used within Alibaba 
Group and by many customers on Alibaba Cloud. Its core storage engine, called 
LogHub, is a large-scale distributed storage system that provides 
producer/consumer APIs, as Kafka and Kinesis do.

Many users rely on Log Service to collect data from on-premises and cloud 
environments and then consume it from Flink or Blink for stream processing.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)