[jira] [Created] (FLINK-35459) Use Incremental Source Framework in Flink CDC TiKV Source Connector

2024-05-26 Thread ouyangwulin (Jira)
ouyangwulin created FLINK-35459:
---

 Summary: Use Incremental Source Framework in Flink CDC TiKV Source 
Connector
 Key: FLINK-35459
 URL: https://issues.apache.org/jira/browse/FLINK-35459
 Project: Flink
  Issue Type: New Feature
  Components: Flink CDC
Affects Versions: cdc-3.2.0
Reporter: ouyangwulin
 Fix For: cdc-3.2.0


Use Incremental Source Framework in Flink CDC TiKV Source Connector



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35450) Introduce TiKV pipeline DataSource

2024-05-25 Thread ouyangwulin (Jira)
ouyangwulin created FLINK-35450:
---

 Summary: Introduce TiKV pipeline DataSource
 Key: FLINK-35450
 URL: https://issues.apache.org/jira/browse/FLINK-35450
 Project: Flink
  Issue Type: New Feature
  Components: Flink CDC
Affects Versions: cdc-3.2.0
Reporter: ouyangwulin
 Fix For: cdc-3.2.0


After adding host mapping (https://issues.apache.org/jira/browse/FLINK-35354) to 
TiKV, we can use Flink CDC to sync data from TiKV. We need more convenient and 
higher-performance data synchronization capabilities from TiKV.





[jira] [Created] (FLINK-35354) [discuss] Support host mapping in Flink TiKV CDC

2024-05-14 Thread ouyangwulin (Jira)
ouyangwulin created FLINK-35354:
---

 Summary: [discuss] Support host mapping in Flink TiKV CDC
 Key: FLINK-35354
 URL: https://issues.apache.org/jira/browse/FLINK-35354
 Project: Flink
  Issue Type: Improvement
  Components: Flink CDC
Affects Versions: cdc-3.1.0, cdc-3.2.0
Reporter: ouyangwulin
 Fix For: cdc-3.1.0, cdc-3.2.0


In TiDB production deployments, there are usually two kinds of network: an 
internal network and a public network. When we use PD-mode KV, we need to do 
network mapping, like `spark.tispark.host_mapping` in 
https://github.com/pingcap/tispark/blob/master/docs/userguide_3.0.md. So I 
think we need to support `host_mapping` in our Flink TiKV CDC connector.
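As a rough illustration of the idea (the mapping-string format below is an assumption modeled on TiSpark's `host_mapping`, not a committed Flink CDC API), the option could be parsed like this:

```python
def parse_host_mapping(mapping: str) -> dict:
    """Parse a host-mapping string of the assumed form
    'internal1:public1;internal2:public2' into a lookup dict."""
    result = {}
    for pair in filter(None, mapping.split(";")):
        internal, public = pair.split(":", 1)
        result[internal.strip()] = public.strip()
    return result

def remap(host: str, mapping: dict) -> str:
    """Return the mapped (public) address, or the host unchanged
    when no mapping entry exists for it."""
    return mapping.get(host, host)
```

With such a helper, addresses returned by PD could be rewritten before the connector opens gRPC connections to TiKV stores.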





[jira] [Created] (FLINK-35337) Keep up with the latest version of tikv client

2024-05-13 Thread ouyangwulin (Jira)
ouyangwulin created FLINK-35337:
---

 Summary: Keep up with the latest version of tikv client
 Key: FLINK-35337
 URL: https://issues.apache.org/jira/browse/FLINK-35337
 Project: Flink
  Issue Type: Improvement
Reporter: ouyangwulin








[jira] [Created] (FLINK-33491) Support JSON column validation

2023-11-09 Thread ouyangwulin (Jira)
ouyangwulin created FLINK-33491:
---

 Summary: Support JSON column validation
 Key: FLINK-33491
 URL: https://issues.apache.org/jira/browse/FLINK-33491
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / Runtime
Affects Versions: 1.8.4, 1.9.4
Reporter: ouyangwulin
 Fix For: 1.8.4, 1.9.4








[jira] [Created] (FLINK-24315) Flink native on K8s watcher thread goes down when the K8s API server does not work or the network times out

2021-09-16 Thread ouyangwulin (Jira)
ouyangwulin created FLINK-24315:
---

 Summary: Flink native on K8s watcher thread goes down when the K8s API 
server does not work or the network times out
 Key: FLINK-24315
 URL: https://issues.apache.org/jira/browse/FLINK-24315
 Project: Flink
  Issue Type: Bug
  Components: Deployment / Kubernetes
Affects Versions: 1.13.2, 1.14.0, 1.14.1
Reporter: ouyangwulin
 Fix For: 1.14.0, 1.14.1, 1.13.2


The JobManager uses the fabric8 Kubernetes client to watch the API server. When 
the API server fails or there are network problems, the watcher thread is 
closed; you can use "jstack 1 | grep -i 'websocket'" to check whether the 
watcher thread still exists.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15400) Elasticsearch sink: support dynamic index

2019-12-26 Thread ouyangwulin (Jira)
ouyangwulin created FLINK-15400:
---

 Summary: Elasticsearch sink: support dynamic index
 Key: FLINK-15400
 URL: https://issues.apache.org/jira/browse/FLINK-15400
 Project: Flink
  Issue Type: New Feature
  Components: Connectors / ElasticSearch
Affects Versions: 1.9.1, 1.9.0, 1.11.0
Reporter: ouyangwulin
 Fix For: 1.11.0


From 
user...@flink.apache.org ([https://lists.apache.org/thread.html/ac4e0c068baeb3b070f0213a2e1314e6552b226b8132a4c49d667ecd%40%3Cuser-zh.flink.apache.org%3E]): 
because Elasticsearch 6/7 does not support TTL, users need to clean up indices 
by timestamp, so dynamic index support is a useful feature. Add the property 
'dynamicIndex' as a switch to enable dynamic indexing, the property 
'indexField' to select the time field to extract, and the property 
'indexInterval' to choose the rollover cycle.

 
||With property||Description||Default||Required||
|dynamicIndex|Whether the index is dynamic|false (true/false)|false|
|indexField|Time field to extract the index from|none|Required when dynamicIndex is true; only the types "timestamp", "date" and "long" are supported|
|indexInterval|Rollover cycle mode|d|Required when dynamicIndex is true; optional values: d (day), w (week), m (month)|
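To illustrate the proposed behaviour, here is a minimal sketch (purely illustrative; the function name and suffix formats are assumptions, not the connector's actual implementation) of deriving an index name from the extracted `indexField` timestamp and the `indexInterval`:

```python
from datetime import datetime, timezone

def dynamic_index(base: str, ts_millis: int, interval: str) -> str:
    """Derive a time-based index name from a base index name, an
    epoch-millis timestamp (the extracted 'indexField' value) and an
    'indexInterval' of 'd' (day), 'w' (week) or 'm' (month)."""
    dt = datetime.fromtimestamp(ts_millis / 1000, tz=timezone.utc)
    if interval == "d":
        suffix = dt.strftime("%Y-%m-%d")
    elif interval == "w":
        # ISO week number keeps week-based indices unambiguous
        year, week, _ = dt.isocalendar()
        suffix = f"{year}-w{week:02d}"
    elif interval == "m":
        suffix = dt.strftime("%Y-%m")
    else:
        raise ValueError(f"unsupported indexInterval: {interval}")
    return f"{base}-{suffix}"
```

Old indices can then be dropped wholesale by name, which is what the TTL-less ES 6/7 clusters need.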

 





[jira] [Created] (FLINK-15378) StreamFileSystemSink: support multiple HDFS plugins

2019-12-23 Thread ouyangwulin (Jira)
ouyangwulin created FLINK-15378:
---

 Summary: StreamFileSystemSink: support multiple HDFS plugins
 Key: FLINK-15378
 URL: https://issues.apache.org/jira/browse/FLINK-15378
 Project: Flink
  Issue Type: Improvement
  Components: API / Core
Affects Versions: 1.9.2, 1.11.0
Reporter: ouyangwulin
 Fix For: 1.11.0


Request 1:  FileSystem plugins should not affect the default YARN dependencies.

Request 2:  StreamFileSystemSink should support multiple HDFS plugins.

 

 





[jira] [Created] (FLINK-15176) Add '--job-classname' to flink-container 'job-cluster-job.yaml.template'

2019-12-10 Thread ouyangwulin (Jira)
ouyangwulin created FLINK-15176:
---

 Summary: Add '--job-classname' to flink-container 
'job-cluster-job.yaml.template'
 Key: FLINK-15176
 URL: https://issues.apache.org/jira/browse/FLINK-15176
 Project: Flink
  Issue Type: Improvement
  Components: Deployment / Docker
Affects Versions: 1.11.0
Reporter: ouyangwulin
 Fix For: 1.11.0


As reported on 'user...@flink.apache.org': when 'job-cluster-job.yaml.template' 
is used to deploy a job, the template does not give a good sense of how to use 
'--job-classname'.





[jira] [Created] (FLINK-14855) Travis CI build error

2019-11-19 Thread ouyangwulin (Jira)
ouyangwulin created FLINK-14855:
---

 Summary: Travis CI build error
 Key: FLINK-14855
 URL: https://issues.apache.org/jira/browse/FLINK-14855
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Affects Versions: 1.11.0
Reporter: ouyangwulin
 Fix For: 1.11.0


08:18:03.845 [ERROR] Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time 
elapsed: 0.271 s <<< FAILURE! - in 
org.apache.flink.docs.configuration.ConfigOptionsDocsCompletenessITCase
08:18:03.845 [ERROR] 
testFullReferenceCompleteness(org.apache.flink.docs.configuration.ConfigOptionsDocsCompletenessITCase)
 Time elapsed: 0.151 s <<< FAILURE!
java.lang.AssertionError: Documentation contains distinct descriptions for 
host in influxdb_reporter_configuration.html and 
prometheus_push_gateway_reporter_configuration.html.
 at 
org.apache.flink.docs.configuration.ConfigOptionsDocsCompletenessITCase.lambda$parseDocumentedOptions$10(ConfigOptionsDocsCompletenessITCase.java:168)
 at 
org.apache.flink.docs.configuration.ConfigOptionsDocsCompletenessITCase.parseDocumentedOptions(ConfigOptionsDocsCompletenessITCase.java:156)
 at 
org.apache.flink.docs.configuration.ConfigOptionsDocsCompletenessITCase.testFullReferenceCompleteness(ConfigOptionsDocsCompletenessITCase.java:75)
08:18:04.168 [INFO]
08:18:04.168 [INFO] Results:
08:18:04.168 [INFO]
08:18:04.168 [ERROR] Failures:
08:18:04.168 [ERROR] 
ConfigOptionsDocsCompletenessITCase.testFullReferenceCompleteness:75->parseDocumentedOptions:156->lambda$parseDocumentedOptions$10:168
 Documentation contains distinct descriptions for host in 
influxdb_reporter_configuration.html and 
prometheus_push_gateway_reporter_configuration.html.


 

As the log above shows, the same `host` key is documented with distinct 
descriptions in influxdb_reporter_configuration.html and 
prometheus_push_gateway_reporter_configuration.html, which makes the Travis CI 
build fail.





[jira] [Created] (FLINK-14832) Support kafka metrics reporter

2019-11-16 Thread ouyangwulin (Jira)
ouyangwulin created FLINK-14832:
---

 Summary: Support kafka metrics reporter
 Key: FLINK-14832
 URL: https://issues.apache.org/jira/browse/FLINK-14832
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Metrics
Affects Versions: 1.11.0
Reporter: ouyangwulin
 Fix For: 1.11.0


We already support InfluxDB, Ganglia, etc. In some environments, support for 
reporting to Kafka is needed.





[jira] [Created] (FLINK-14831) When editing flink-metrics-influxdb, metrics.md must be updated by hand

2019-11-16 Thread ouyangwulin (Jira)
ouyangwulin created FLINK-14831:
---

 Summary: When editing flink-metrics-influxdb, metrics.md must be updated by 
hand
 Key: FLINK-14831
 URL: https://issues.apache.org/jira/browse/FLINK-14831
 Project: Flink
  Issue Type: Improvement
Reporter: ouyangwulin


When editing flink-metrics-influxdb, metrics.md needs to be updated by hand. 
Also, 
{code:java}
mvn package -Dgenerate-config-docs -pl flink-docs -am -nsu -DskipTests{code}
does not work.





[jira] [Created] (FLINK-14803) Support Consistency Level for InfluxDB metrics reporter

2019-11-14 Thread ouyangwulin (Jira)
ouyangwulin created FLINK-14803:
---

 Summary: Support Consistency Level for InfluxDB metrics reporter
 Key: FLINK-14803
 URL: https://issues.apache.org/jira/browse/FLINK-14803
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Metrics
Affects Versions: 1.10.0, 1.11.0
Reporter: ouyangwulin
 Fix For: 1.10.0, 1.11.0


Support a consistency level option for the InfluxDB metrics reporter. InfluxDB 
batch inserts use `ConsistencyLevel.ONE` by default; we need a config option so 
that other levels, such as `ConsistencyLevel.ANY`, can be used.
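A possible configuration sketch in flink-conf.yaml (the `consistency` key name is illustrative of the proposal, not a confirmed option):

```yaml
metrics.reporter.influxdb.class: org.apache.flink.metrics.influxdb.InfluxdbReporter
metrics.reporter.influxdb.host: localhost
metrics.reporter.influxdb.port: 8086
metrics.reporter.influxdb.db: flink
# proposed: write consistency level (ONE is the current default)
metrics.reporter.influxdb.consistency: ANY
```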





[jira] [Created] (FLINK-14210) Add connect timeout and write timeout config options to the InfluxDB reporter to reduce timeout exceptions

2019-09-25 Thread ouyangwulin (Jira)
ouyangwulin created FLINK-14210:
---

 Summary: Add connect timeout and write timeout config options to the InfluxDB 
reporter to reduce timeout exceptions
 Key: FLINK-14210
 URL: https://issues.apache.org/jira/browse/FLINK-14210
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Metrics
Affects Versions: 1.9.0, 1.10.0
Reporter: ouyangwulin
 Fix For: 1.10.0


When we run the InfluxDB reporter in a production environment, it logs timeout 
exceptions in the TaskManager log. 
The exception log is:
2019-07-23 14:05:22,044 WARN 
org.apache.flink.runtime.metrics.MetricRegistryImpl - Error while reporting 
metrics
java.lang.RuntimeException: java.net.SocketTimeoutException: timeout
at 
org.apache.flink.metrics.influxdb.shaded.org.influxdb.impl.InfluxDBImpl.lambda$new$0(InfluxDBImpl.java:197)
at 
org.apache.flink.metrics.influxdb.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at 
org.apache.flink.metrics.influxdb.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at 
org.apache.flink.metrics.influxdb.shaded.org.influxdb.impl.BasicAuthInterceptor.intercept(BasicAuthInterceptor.java:22)
at 
org.apache.flink.metrics.influxdb.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at 
org.apache.flink.metrics.influxdb.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at 
org.apache.flink.metrics.influxdb.shaded.org.influxdb.impl.GzipRequestInterceptor.intercept(GzipRequestInterceptor.java:42)
at 
org.apache.flink.metrics.influxdb.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at 
org.apache.flink.metrics.influxdb.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at 
org.apache.flink.metrics.influxdb.shaded.okhttp3.logging.HttpLoggingInterceptor.intercept(HttpLoggingInterceptor.java:144)
at 
org.apache.flink.metrics.influxdb.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at 
org.apache.flink.metrics.influxdb.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at 
org.apache.flink.metrics.influxdb.shaded.okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:200)
at 
org.apache.flink.metrics.influxdb.shaded.okhttp3.RealCall.execute(RealCall.java:77)
at 
org.apache.flink.metrics.influxdb.shaded.retrofit2.OkHttpCall.execute(OkHttpCall.java:180)
at 
org.apache.flink.metrics.influxdb.shaded.org.influxdb.impl.InfluxDBImpl.execute(InfluxDBImpl.java:1168)
at 
org.apache.flink.metrics.influxdb.shaded.org.influxdb.impl.InfluxDBImpl.write(InfluxDBImpl.java:821)
at 
org.apache.flink.metrics.influxdb.InfluxdbReporter.report(InfluxdbReporter.java:97)
at 
org.apache.flink.runtime.metrics.MetricRegistryImpl$ReporterTask.run(MetricRegistryImpl.java:430)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
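A configuration sketch of the proposed options (key names are illustrative of the proposal; values are in milliseconds):

```yaml
# proposed: raise the OkHttp client timeouts used by the reporter
metrics.reporter.influxdb.connectTimeout: 10000
metrics.reporter.influxdb.writeTimeout: 10000
```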





[jira] [Created] (FLINK-13873) Report the column family as a tag in the InfluxDB reporter

2019-08-27 Thread ouyangwulin (Jira)
ouyangwulin created FLINK-13873:
---

 Summary: Report the column family as a tag in the InfluxDB reporter
 Key: FLINK-13873
 URL: https://issues.apache.org/jira/browse/FLINK-13873
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Metrics
Affects Versions: 1.9.0
Reporter: ouyangwulin
 Fix For: 1.9.1


When we use the InfluxDB reporter to report RocksDB metric data, the reporter 
calls \{#getLogicalScope()} to get the measurement and \{#getAllVariables()} to 
get the tags. We need the column family as a tag so that we can GROUP BY it in 
InfluxQL, so the metric group should be built with 
addGroup("column_family", columnFamilyName).



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (FLINK-12147) Error when monitoring Kafka metrics with the InfluxDB metrics reporter

2019-04-09 Thread ouyangwulin (JIRA)
ouyangwulin created FLINK-12147:
---

 Summary: Error when monitoring Kafka metrics with the InfluxDB metrics 
reporter
 Key: FLINK-12147
 URL: https://issues.apache.org/jira/browse/FLINK-12147
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Metrics
Affects Versions: 1.8.0
Reporter: ouyangwulin


2019-04-09 17:17:12,654 WARN  
org.apache.flink.runtime.metrics.MetricRegistryImpl   - Error while 
reporting metrics
java.lang.RuntimeException: org.influxdb.InfluxDBException: ShardServer 
Internal error: \{"error":"partial write: unable to parse 
'taskmanager_job_task_operator_join_time_max,host=10.242.91.189,job_id=a75d76273048556c5059262ee3981277,job_name=Flink\\
 Streaming\\ 
Job,operator_id=e3dfc0d7e9ecd8a43f85f0b68ebf3b80,operator_name=Source:\\ 
Custom\\ 
Source,subtask_index=0,task_attempt_id=f57c884bd2ce337003b5c23dcadbfdde,task_attempt_num=0,task_id=e3dfc0d7e9ecd8a43f85f0b68ebf3b80,task_name=Source:\\
 Custom\\ Source\\ -\u003e\\ (Map\\,\\ Sink:\\ Print\\ to\\ Std.\\ 
Out),tm_id=ccaaf9cb625fbc432c169f441d9c1127 value=-∞ 155480143259200': 
invalid number\nunable to parse 
'taskmanager_job_task_operator_KafkaConsumer_join_time_max,host=10.242.91.189,job_id=a75d76273048556c5059262ee3981277,job_name=Flink\\
 Streaming\\ 
Job,operator_id=e3dfc0d7e9ecd8a43f85f0b68ebf3b80,operator_name=Source:\\ 
Custom\\ 
Source,subtask_index=0,task_attempt_id=f57c884bd2ce337003b5c23dcadbfdde,task_attempt_num=0,task_id=e3dfc0d7e9ecd8a43f85f0b68ebf3b80,task_name=Source:\\
 Custom\\ Source\\ -\u003e\\ (Map\\,\\ Sink:\\ Print\\ to\\ Std.\\ 
Out),tm_id=ccaaf9cb625fbc432c169f441d9c1127 value=-∞ 155480143259200': 
invalid number\nunable to parse 
'taskmanager_job_task_operator_KafkaConsumer_heartbeat_response_time_max,host=10.242.91.189,job_id=a75d76273048556c5059262ee3981277,job_name=Flink\\
 Streaming\\ 
Job,operator_id=e3dfc0d7e9ecd8a43f85f0b68ebf3b80,operator_name=Source:\\ 
Custom\\ 
Source,subtask_index=0,task_attempt_id=f57c884bd2ce337003b5c23dcadbfdde,task_attempt_num=0,task_id=e3dfc0d7e9ecd8a43f85f0b68ebf3b80,task_name=Source:\\
 Custom\\ Source\\ -\u003e\\ (Map\\,\\ Sink:\\ Print\\ to\\ Std.\\ 
Out),tm_id=ccaaf9cb625fbc432c169f441d9c1127 value=-∞ 155480143259200': 
invalid number\nunable to parse 
'taskmanager_job_task_operator_sync_time_max,host=10.242.91.189,job_id=a75d76273048556c5059262ee3981277,job_name=Flink\\
 Streaming\\ 
Job,operator_id=e3dfc0d7e9ecd8a43f85f0b68ebf3b80,operator_name=Source:\\ 
Custom\\ 
Source,subtask_index=0,task_attempt_id=f57c884bd2ce337003b5c23dcadbfdde,task_attempt_num=0,task_id=e3dfc0d7e9ecd8a43f85f0b68ebf3b80,task_name=Source:\\
 Custom\\ Source\\ -\u003e\\ (Map\\,\\ Sink:\\ Print\\ to\\ Std.\\ 
Out),tm_id=ccaaf9cb625fbc432c169f441d9c1127 value=-∞ 155480143259200': 
invalid number\nunable to parse 
'taskmanager_job_task_operator_KafkaConsumer_sync_time_max,host=10.242.91.189,job_id=a75d76273048556c5059262ee3981277,job_name=Flink\\
 Streaming\\ 
Job,operator_id=e3dfc0d7e9ecd8a43f85f0b68ebf3b80,operator_name=Source:\\ 
Custom\\ 
Source,subtask_index=0,task_attempt_id=f57c884bd2ce337003b5c23dcadbfdde,task_attempt_num=0,task_id=e3dfc0d7e9ecd8a43f85f0b68ebf3b80,task_name=Source:\\
 Custom\\ Source\\ -\u003e\\ (Map\\,\\ Sink:\\ Print\\ to\\ Std.\\ 
Out),tm_id=ccaaf9cb625fbc432c169f441d9c1127 value=-∞ 155480143259200': 
invalid number\nunable to parse 
'taskmanager_job_task_operator_heartbeat_response_time_max,host=10.242.91.189,job_id=a75d76273048556c5059262ee3981277,job_name=Flink\\
 Streaming\\ 
Job,operator_id=e3dfc0d7e9ecd8a43f85f0b68ebf3b80,operator_name=Source:\\ 
Custom\\ 
Source,subtask_index=0,task_attempt_id=f57c884bd2ce337003b5c23dcadbfdde,task_attempt_num=0,task_id=e3dfc0d7e9ecd8a43f85f0b68ebf3b80,task_name=Source:\\
 Custom\\ Source\\ -\u003e\\ (Map\\,\\ Sink:\\ Print\\ to\\ Std.\\ 
Out),tm_id=ccaaf9cb625fbc432c169f441d9c1127 value=-∞ 155480143259200': 
invalid number dropped=0"}
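The failure comes from metrics that report `-Infinity` (e.g. `join_time_max` before any sample arrives), which the InfluxDB line protocol cannot parse as a field value. A minimal sketch (a hypothetical helper, not the reporter's actual code) of dropping non-finite values before serializing a point:

```python
import math

def sanitize_fields(fields: dict) -> dict:
    """Drop NaN/Infinity field values, which the InfluxDB line
    protocol rejects as 'invalid number'."""
    return {k: v for k, v in fields.items()
            if not (isinstance(v, float) and not math.isfinite(v))}

def to_line(measurement: str, tags: dict, fields: dict) -> str:
    """Build a single InfluxDB line-protocol record, skipping
    non-finite field values instead of emitting 'value=-Inf'."""
    fields = sanitize_fields(fields)
    if not fields:
        return ""  # nothing parsable left; drop the point entirely
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str}"
```

With such filtering, a batch write no longer fails with a partial-write error when one Kafka consumer gauge happens to be `-∞`.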



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)