[jira] [Commented] (FLINK-17573) There is duplicate source data in ProcessWindowFunction

2020-05-11 Thread Tammy zhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17105007#comment-17105007
 ] 

Tammy zhang commented on FLINK-17573:
-

OK, I will post the question on the u...@flink.apache.org mailing list with 
more details. Please close this issue, thank you. [~rmetzger]

> There is duplicate source data in ProcessWindowFunction
> ---
>
> Key: FLINK-17573
> URL: https://issues.apache.org/jira/browse/FLINK-17573
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Reporter: Tammy zhang
>Priority: Major
>
> I consume data from a Kafka topic, key the stream with keyBy, and then apply 
> a ProcessWindowFunction to the keyed stream. A strange phenomenon appears: 
> the process function's source data becomes duplicated, like:
> Input Data iterator:[H2update 623.0 2020-05-08 15:19:25.14, H2update 297.0 
> 2020-05-08 15:19:28.501, H2update 832.0 2020-05-08 15:19:29.415]
>  data iterator end--
> Input Data iterator:[H1400 59.0 2020-05-08 15:19:07.087, H1400 83.0 
> 2020-05-08 15:19:09.521]
>  data iterator end--
> Input Data iterator:[H2insert 455.0 2020-05-08 15:19:23.066, H2insert 910.0 
> 2020-05-08 15:19:23.955, H2insert 614.0 2020-05-08 15:19:24.397, H2insert 
> 556.0 2020-05-08 15:19:27.389, H2insert 922.0 2020-05-08 15:19:27.761, 
> H2insert 165.0 2020-05-08 15:19:28.26]
>  data iterator end--
> Input Data iterator:[H1400 59.0 2020-05-08 15:19:07.087, H1400 83.0 
> 2020-05-08 15:19:09.521]
>  data iterator end--
> Input Data iterator:[H2update 623.0 2020-05-08 15:19:25.14, H2update 297.0 
> 2020-05-08 15:19:28.501, H2update 832.0 2020-05-08 15:19:29.415]
>  data iterator end--
> Input Data iterator:[H2insert 455.0 2020-05-08 15:19:23.066, H2insert 910.0 
> 2020-05-08 15:19:23.955, H2insert 614.0 2020-05-08 15:19:24.397, H2insert 
> 556.0 2020-05-08 15:19:27.389, H2insert 922.0 2020-05-08 15:19:27.761, 
> H2insert 165.0 2020-05-08 15:19:28.26]
>  data iterator end--
> I can confirm that there is no duplication in the Kafka data. Could you help 
> me point out where the problem is? Thanks a lot.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-17556) FATAL: Thread 'flink-metrics-akka.remote.default-remote-dispatcher-3' produced an uncaught exception. Stopping the process... java.lang.OutOfMemoryError: Direct b

2020-05-11 Thread Tammy zhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17104126#comment-17104126
 ] 

Tammy zhang edited comment on FLINK-17556 at 5/11/20, 6:37 AM:
---

Thank you for your reply. The version I used is Flink 1.10. I have already 
increased the direct memory, but it changed nothing. I am checking whether any 
code has high memory consumption; unfortunately, the true cause has not been 
found yet. Best regards! [~kevin.cyj]


was (Author: 1372114269):
Thank you for your reply. The version I used is Flink 1.10. I have already 
increased the direct memory, but it changed nothing. I am checking whether any 
code has high memory consumption; unfortunately, the true cause has not been 
found yet. Best regards! [~kevin.cyj]

> FATAL: Thread 'flink-metrics-akka.remote.default-remote-dispatcher-3' 
> produced an uncaught exception. Stopping the process... 
> java.lang.OutOfMemoryError: Direct buffer memory
> --
>
> Key: FLINK-17556
> URL: https://issues.apache.org/jira/browse/FLINK-17556
> Project: Flink
>  Issue Type: Bug
>Reporter: Tammy zhang
>Priority: Blocker
>
> My job consumes data from Kafka and then processes it. After the job has 
> been running for a while, the following error appears:
> ERROR org.apache.flink.runtime.util.FatalExitExceptionHandler - FATAL: Thread 
> 'flink-metrics-akka.remote.default-remote-dispatcher-3' produced an uncaught 
> exception. Stopping the process...
> java.lang.OutOfMemoryError: Direct buffer memory
> I have set the "max.poll.records" property to "250", but it does not help.





[jira] [Comment Edited] (FLINK-17556) FATAL: Thread 'flink-metrics-akka.remote.default-remote-dispatcher-3' produced an uncaught exception. Stopping the process... java.lang.OutOfMemoryError: Direct b

2020-05-11 Thread Tammy zhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17104126#comment-17104126
 ] 

Tammy zhang edited comment on FLINK-17556 at 5/11/20, 6:36 AM:
---

Thank you for your reply. The version I used is Flink 1.10. I have already 
increased the direct memory, but it changed nothing. I am checking whether any 
code has high memory consumption; unfortunately, the true cause has not been 
found yet. Best regards! [~kevin.cyj]


was (Author: 1372114269):
Thank you for your reply. The version I used is Flink 1.10. I have already 
increased the direct memory, but it changed nothing. I am checking whether any 
code has high memory consumption; unfortunately, the true cause has not been 
found yet. Best regards!

> FATAL: Thread 'flink-metrics-akka.remote.default-remote-dispatcher-3' 
> produced an uncaught exception. Stopping the process... 
> java.lang.OutOfMemoryError: Direct buffer memory
> --
>
> Key: FLINK-17556
> URL: https://issues.apache.org/jira/browse/FLINK-17556
> Project: Flink
>  Issue Type: Bug
>Reporter: Tammy zhang
>Priority: Blocker
>
> My job consumes data from Kafka and then processes it. After the job has 
> been running for a while, the following error appears:
> ERROR org.apache.flink.runtime.util.FatalExitExceptionHandler - FATAL: Thread 
> 'flink-metrics-akka.remote.default-remote-dispatcher-3' produced an uncaught 
> exception. Stopping the process...
> java.lang.OutOfMemoryError: Direct buffer memory
> I have set the "max.poll.records" property to "250", but it does not help.





[jira] [Commented] (FLINK-17556) FATAL: Thread 'flink-metrics-akka.remote.default-remote-dispatcher-3' produced an uncaught exception. Stopping the process... java.lang.OutOfMemoryError: Direct buffer

2020-05-11 Thread Tammy zhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17104126#comment-17104126
 ] 

Tammy zhang commented on FLINK-17556:
-

Thank you for your reply. The version I used is Flink 1.10. I have already 
increased the direct memory, but it changed nothing. I am checking whether any 
code has high memory consumption; unfortunately, the true cause has not been 
found yet. Best regards!

> FATAL: Thread 'flink-metrics-akka.remote.default-remote-dispatcher-3' 
> produced an uncaught exception. Stopping the process... 
> java.lang.OutOfMemoryError: Direct buffer memory
> --
>
> Key: FLINK-17556
> URL: https://issues.apache.org/jira/browse/FLINK-17556
> Project: Flink
>  Issue Type: Bug
>Reporter: Tammy zhang
>Priority: Blocker
>
> My job consumes data from Kafka and then processes it. After the job has 
> been running for a while, the following error appears:
> ERROR org.apache.flink.runtime.util.FatalExitExceptionHandler - FATAL: Thread 
> 'flink-metrics-akka.remote.default-remote-dispatcher-3' produced an uncaught 
> exception. Stopping the process...
> java.lang.OutOfMemoryError: Direct buffer memory
> I have set the "max.poll.records" property to "250", but it does not help.
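For context (this is not stated in the thread): "max.poll.records" only caps how many records a single Kafka consumer poll() call returns; it does not bound the direct (off-heap) memory used by the consumer's network buffers, so lowering it is unlikely to cure a direct-memory OOM. For reference, such consumer properties are passed to the Flink Kafka connector through a plain java.util.Properties object; the broker address and group id below are placeholders:

```java
import java.util.Properties;

public class KafkaProps {
    // Builds the consumer configuration typically handed to a Flink Kafka source.
    public static Properties consumerProperties() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder address
        props.setProperty("group.id", "my-group");                // placeholder group id
        // Caps records returned per poll() call; does NOT limit direct buffer memory.
        props.setProperty("max.poll.records", "250");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProperties().getProperty("max.poll.records"));
    }
}
```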





[jira] [Created] (FLINK-17573) There is duplicate source data in ProcessWindowFunction

2020-05-08 Thread Tammy zhang (Jira)
Tammy zhang created FLINK-17573:
---

 Summary: There is duplicate source data in ProcessWindowFunction
 Key: FLINK-17573
 URL: https://issues.apache.org/jira/browse/FLINK-17573
 Project: Flink
  Issue Type: Bug
Reporter: Tammy zhang


I consume data from a Kafka topic, key the stream with keyBy, and then apply a 
ProcessWindowFunction to the keyed stream. A strange phenomenon appears: 
the process function's source data becomes duplicated, like:

Input Data iterator:[H2update 623.0 2020-05-08 15:19:25.14, H2update 297.0 
2020-05-08 15:19:28.501, H2update 832.0 2020-05-08 15:19:29.415]
 data iterator end--
Input Data iterator:[H1400 59.0 2020-05-08 15:19:07.087, H1400 83.0 2020-05-08 
15:19:09.521]
 data iterator end--
Input Data iterator:[H2insert 455.0 2020-05-08 15:19:23.066, H2insert 910.0 
2020-05-08 15:19:23.955, H2insert 614.0 2020-05-08 15:19:24.397, H2insert 556.0 
2020-05-08 15:19:27.389, H2insert 922.0 2020-05-08 15:19:27.761, H2insert 165.0 
2020-05-08 15:19:28.26]
 data iterator end--
Input Data iterator:[H1400 59.0 2020-05-08 15:19:07.087, H1400 83.0 2020-05-08 
15:19:09.521]
 data iterator end--
Input Data iterator:[H2update 623.0 2020-05-08 15:19:25.14, H2update 297.0 
2020-05-08 15:19:28.501, H2update 832.0 2020-05-08 15:19:29.415]
 data iterator end--
Input Data iterator:[H2insert 455.0 2020-05-08 15:19:23.066, H2insert 910.0 
2020-05-08 15:19:23.955, H2insert 614.0 2020-05-08 15:19:24.397, H2insert 556.0 
2020-05-08 15:19:27.389, H2insert 922.0 2020-05-08 15:19:27.761, H2insert 165.0 
2020-05-08 15:19:28.26]
 data iterator end--

I can confirm that there is no duplication in the Kafka data. Could you help me 
point out where the problem is? Thanks a lot.
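A note not present in the report: if the job uses a *sliding* window, each element legitimately belongs to several windows (size / slide of them), so the same records appear in several ProcessWindowFunction invocations without any source duplication. A minimal stdlib sketch (independent of Flink, mimicking the window-start arithmetic Flink's TimeWindow uses with zero offset) shows the effect for an assumed 10-second window sliding by 5 seconds:

```java
import java.util.ArrayList;
import java.util.List;

public class SlidingWindowDemo {
    // Start of the last window containing the timestamp (offset 0),
    // analogous to Flink's TimeWindow.getWindowStartWithOffset.
    static long lastWindowStart(long timestamp, long slide) {
        return timestamp - (timestamp % slide);
    }

    // All window start times whose [start, start + size) interval contains ts.
    static List<Long> assignWindows(long ts, long size, long slide) {
        List<Long> starts = new ArrayList<>();
        for (long start = lastWindowStart(ts, slide); start > ts - size; start -= slide) {
            starts.add(start);
        }
        return starts;
    }

    public static void main(String[] args) {
        long size = 10_000, slide = 5_000; // 10 s windows sliding by 5 s
        // An element at t = 7 s belongs to two windows, [5 s, 15 s) and [0 s, 10 s):
        System.out.println(assignWindows(7_000, size, slide)); // [5000, 0]
    }
}
```

With a tumbling window (slide equal to size) each element is assigned to exactly one window, and the apparent duplication disappears.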





[jira] [Created] (FLINK-17556) FATAL: Thread 'flink-metrics-akka.remote.default-remote-dispatcher-3' produced an uncaught exception. Stopping the process... java.lang.OutOfMemoryError: Direct buffer m

2020-05-07 Thread Tammy zhang (Jira)
Tammy zhang created FLINK-17556:
---

 Summary: FATAL: Thread 
'flink-metrics-akka.remote.default-remote-dispatcher-3' produced an uncaught 
exception. Stopping the process... java.lang.OutOfMemoryError: Direct buffer 
memory
 Key: FLINK-17556
 URL: https://issues.apache.org/jira/browse/FLINK-17556
 Project: Flink
  Issue Type: Bug
Reporter: Tammy zhang


My job consumes data from Kafka and then processes it. After the job has been 
running for a while, the following error appears:

ERROR org.apache.flink.runtime.util.FatalExitExceptionHandler - FATAL: Thread 
'flink-metrics-akka.remote.default-remote-dispatcher-3' produced an uncaught 
exception. Stopping the process...
java.lang.OutOfMemoryError: Direct buffer memory

I have set the "max.poll.records" property to "250", but it does not help.
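A hedged aside, not from the thread: in Flink 1.10 the JVM's direct-memory limit for a TaskManager is derived from the unified memory configuration, so a java.lang.OutOfMemoryError: Direct buffer memory is usually addressed by enlarging the off-heap budgets in flink-conf.yaml rather than by Kafka consumer settings. The values below are purely illustrative, not recommendations:

```yaml
# flink-conf.yaml (Flink 1.10) -- illustrative values only
taskmanager.memory.task.off-heap.size: 256m       # direct memory for user code
taskmanager.memory.framework.off-heap.size: 128m  # direct memory for the framework
taskmanager.memory.network.fraction: 0.1          # network buffers (also direct memory)
```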





[jira] [Comment Edited] (FLINK-17250) interface akka.event.LoggingFilter is not assignable from class akka.event.slf4j.Slf4jLoggingFilter

2020-05-06 Thread Tammy zhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101350#comment-17101350
 ] 

Tammy zhang edited comment on FLINK-17250 at 5/7/20, 3:39 AM:
--

Thanks for paying attention to this question. I solved the problem another way. 
I guess the reason for this exception is that I mixed StreamExecutionEnvironment 
and ExecutionEnvironment in one job; now I use a single StreamExecutionEnvironment 
throughout the job, and the exception has disappeared. [~rmetzger]


was (Author: 1372114269):
I solved the problem another way. I guess the reason for this exception is that 
I mixed StreamExecutionEnvironment and ExecutionEnvironment in one job; now I 
use a single StreamExecutionEnvironment throughout the job, and the exception 
has disappeared.

> interface akka.event.LoggingFilter is not assignable from class 
> akka.event.slf4j.Slf4jLoggingFilter
> ---
>
> Key: FLINK-17250
> URL: https://issues.apache.org/jira/browse/FLINK-17250
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.10.0
>Reporter: Tammy zhang
>Priority: Blocker
>
> When I run a job on a Flink cluster, an exception occurs:
> interface akka.event.LoggingFilter is not assignable from class 
> akka.event.slf4j.Slf4jLoggingFilter. The job runs successfully in the IDE, 
> but when I package it into a jar, the job throws this exception. I do not 
> know what happened; please fix it as quickly as possible. Thanks.





[jira] [Commented] (FLINK-17250) interface akka.event.LoggingFilter is not assignable from class akka.event.slf4j.Slf4jLoggingFilter

2020-05-06 Thread Tammy zhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101350#comment-17101350
 ] 

Tammy zhang commented on FLINK-17250:
-

I solved the problem another way. I guess the reason for this exception is that 
I mixed StreamExecutionEnvironment and ExecutionEnvironment in one job; now I 
use a single StreamExecutionEnvironment throughout the job, and the exception 
has disappeared.

> interface akka.event.LoggingFilter is not assignable from class 
> akka.event.slf4j.Slf4jLoggingFilter
> ---
>
> Key: FLINK-17250
> URL: https://issues.apache.org/jira/browse/FLINK-17250
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.10.0
>Reporter: Tammy zhang
>Priority: Blocker
>
> When I run a job on a Flink cluster, an exception occurs:
> interface akka.event.LoggingFilter is not assignable from class 
> akka.event.slf4j.Slf4jLoggingFilter. The job runs successfully in the IDE, 
> but when I package it into a jar, the job throws this exception. I do not 
> know what happened; please fix it as quickly as possible. Thanks.





[jira] [Created] (FLINK-17250) interface akka.event.LoggingFilter is not assignable from class akka.event.slf4j.Slf4jLoggingFilter

2020-04-19 Thread Tammy zhang (Jira)
Tammy zhang created FLINK-17250:
---

 Summary: interface akka.event.LoggingFilter is not assignable from 
class akka.event.slf4j.Slf4jLoggingFilter
 Key: FLINK-17250
 URL: https://issues.apache.org/jira/browse/FLINK-17250
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.10.0
Reporter: Tammy zhang


When I run a job on a Flink cluster, an exception occurs:
interface akka.event.LoggingFilter is not assignable from class 
akka.event.slf4j.Slf4jLoggingFilter. The job runs successfully in the IDE, 
but when I package it into a jar, the job throws this exception. I do not 
know what happened; please fix it as quickly as possible. Thanks.
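A speculative note, not confirmed by this report: "interface X is not assignable from class Y" errors that appear only in a packaged jar are often a classloading conflict in the fat jar, e.g. when Akka classes or Akka's reference.conf are shaded incorrectly. One commonly suggested mitigation when building a fat jar with Maven is to merge the reference.conf files with an AppendingTransformer:

```xml
<!-- pom.xml excerpt: merge Akka's reference.conf when building a fat jar -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <transformers>
      <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
        <resource>reference.conf</resource>
      </transformer>
    </transformers>
  </configuration>
</plugin>
```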


