[jira] [Comment Edited] (FLINK-9009) Error| You are creating too many HashedWheelTimer instances. HashedWheelTimer is a shared resource that must be reused across the application, so that only a few instances are created.

2019-12-18 Thread Subramanyam Ramanathan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-9009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16999056#comment-16999056
 ] 

Subramanyam Ramanathan edited comment on FLINK-9009 at 12/18/19 11:52 AM:
--

Hi,

I'm seeing a similar issue when using Flink with a Pulsar source + sink.

I am using Flink 1.8.2 and Pulsar v2.4.2 on an 8-CPU, 16 GB RAM VM running 
CentOS 7.

I have 20 map transformations, each with its own source and sink, and 
parallelism set to 8.

If the source and sink are Kafka, I don't see any error, and the top command 
shows 4% memory usage.

When I use a Pulsar source + sink, the Java process consumes *40%* of memory, 
which is around 6 GB. This happens even if I have not streamed any data.

The TaskManager heap size was set to 1024M and I don't see any OutOfMemoryError. 
I think the increase in memory usage is because Flink uses off-heap memory, for 
which Flink sets -XX:MaxDirectMemorySize=8388607T, and something in the Pulsar 
source/sink is causing it to consume a lot of it.
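The flag above can be observed in isolation: direct (off-heap) buffers are charged against -XX:MaxDirectMemorySize rather than the -Xmx heap, which is why heap monitoring shows no pressure while the process memory grows. A minimal stand-alone sketch (plain JDK; nothing here is Flink- or Pulsar-specific):

```java
import java.nio.ByteBuffer;

public class DirectMemoryDemo {
    public static void main(String[] args) {
        // A direct buffer lives outside the Java heap and is counted
        // against -XX:MaxDirectMemorySize, not against -Xmx. With e.g.
        // -Xmx64m -XX:MaxDirectMemorySize=16m, allocating past 16 MB of
        // direct buffers throws OutOfMemoryError ("Direct buffer memory")
        // even though the heap itself is nearly empty.
        ByteBuffer direct = ByteBuffer.allocateDirect(8 * 1024 * 1024); // 8 MB off-heap
        ByteBuffer heap = ByteBuffer.allocate(8 * 1024 * 1024);         // 8 MB on-heap

        System.out.println(direct.isDirect()); // prints "true"
        System.out.println(heap.isDirect());   // prints "false"
    }
}
```

Running with a small -XX:MaxDirectMemorySize is an easy way to confirm whether a connector's off-heap usage, not heap usage, is what grows.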

I also see the message mentioned in the title in the logs: *"Error: You are 
creating too many HashedWheelTimer instances. HashedWheelTimer is a shared 
resource."*

 

Can you please help me understand the behaviour of the off-heap memory in this 
case, and why it grows so much?

Is there any fix planned for this, or any way I can work around it?

 


was (Author: subbu-ramanathan107):
Hi,

I'm seeing a similar issue when using Flink with a Pulsar source + sink.

I am using Flink 1.8.2 and Pulsar v2.4.2 on an 8-CPU, 16 GB RAM VM running 
CentOS 7.

I have 20 map transformations, each with its own source and sink, and 
parallelism set to 8.

If the source and sink are Kafka, I don't see any error, and the top command 
shows 4% memory usage.

When I use a Pulsar source + sink, the Java process consumes *40%* of memory. 
This happens even if I have not streamed any data.

The TaskManager heap size was set to 1024M and I don't see any OutOfMemoryError. 
I think the increase in memory usage is because Flink uses off-heap memory, for 
which Flink sets -XX:MaxDirectMemorySize=8388607T, and something in the Pulsar 
source/sink is causing it to consume a lot of it.

I also see the message mentioned in the title in the logs: *"Error: You are 
creating too many HashedWheelTimer instances. HashedWheelTimer is a shared 
resource."*

Can you please help me understand the behaviour of the off-heap memory in this 
case, and why it grows so much?

Is there any fix planned for this, or any way I can work around it?

 

> Error| You are creating too many HashedWheelTimer instances.  
> HashedWheelTimer is a shared resource that must be reused across the 
> application, so that only a few instances are created.
> -
>
> Key: FLINK-9009
> URL: https://issues.apache.org/jira/browse/FLINK-9009
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
> Environment: PaaS platform: OpenShift
> Reporter: Pankaj
> Priority: Major
>
> Steps to reproduce:
> 1- Flink with Kafka as a consumer -> writing the stream to Cassandra using 
> the Flink Cassandra sink.
> 2- In-memory JobManager and TaskManager with checkpointing every 5000 ms.
> 3- env.setParallelism(10) -> as the Kafka topic has 10 partitions.
> 4- There are around 13 unique streams in a single Flink runtime environment, 
> which read from Kafka -> process and write to Cassandra.
> Hardware: CPU 200 millicores. It is deployed on a PaaS platform on one node.
> Memory: 526 MB.
>
> When I start the server, it starts Flink and then all of a sudden stops with 
> the above error. It also shows an out-of-memory error.
>
> It would be nice if anybody can suggest whether something is wrong.
>  
> Maven:
> flink-connector-cassandra_2.11: 1.3.2
> flink-streaming-java_2.11: 1.4.0
> flink-connector-kafka-0.11_2.11: 1.4.0
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-9009) Error| You are creating too many HashedWheelTimer instances. HashedWheelTimer is a shared resource that must be reused across the application, so that only a few instances are created.

2018-03-19 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16404850#comment-16404850
 ] 

Chesnay Schepler edited comment on FLINK-9009 at 3/19/18 1:59 PM:
--

The error message comes from Netty's leak detector, which complains when one 
JVM creates more than 4 timers, which in this case is completely expected. See 
https://github.com/netty/netty/issues/6225. This, however, is just a log 
message and does not fail the job.

In other words, there is no leak and Flink works as expected; the job fails 
because not enough memory is given to the JVM. 500 MB for a job with a 
parallelism of 20 seems a bit low.
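The "shared resource" pattern the warning asks for can be sketched with plain JDK code. This is illustrative only: `SharedTimer` and `SharedTimerDemo` are made-up names, and a `ScheduledExecutorService` stands in for Netty's `HashedWheelTimer`, which is not part of the JDK. The point is one timer per JVM, handed to every component, instead of one timer per component instance:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

final class SharedTimer {
    // One daemon timer thread for the whole application.
    private static final ScheduledExecutorService INSTANCE =
            Executors.newScheduledThreadPool(1, r -> {
                Thread t = new Thread(r, "shared-timer");
                t.setDaemon(true);
                return t;
            });

    static ScheduledExecutorService get() { return INSTANCE; }

    private SharedTimer() {}
}

public class SharedTimerDemo {
    static int runDemo() throws InterruptedException {
        AtomicInteger fired = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(20);
        // 20 "components" share the single timer; no per-component
        // timer threads are created.
        for (int i = 0; i < 20; i++) {
            SharedTimer.get().schedule(() -> {
                fired.incrementAndGet();
                done.countDown();
            }, 10, TimeUnit.MILLISECONDS);
        }
        done.await(2, TimeUnit.SECONDS);
        return fired.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("tasks fired: " + runDemo()); // prints "tasks fired: 20"
    }
}
```

Netty's detector fires precisely when this sharing is absent, i.e. when many short-lived objects each construct their own `HashedWheelTimer`.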


was (Author: zentol):
The error message comes from Netty's leak detector, which complains when one 
JVM creates more than 4 timers, which in this case is completely expected. See 
https://github.com/netty/netty/issues/6225. This, however, is just a log 
message and does not fail the job.

In other words, there is no leak and Flink works as expected; the job fails 
because not enough memory is given to the JVM.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)



[jira] [Comment Edited] (FLINK-9009) Error| You are creating too many HashedWheelTimer instances. HashedWheelTimer is a shared resource that must be reused across the application, so that only a few instances are created.

2018-03-16 Thread Pankaj (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16402323#comment-16402323
 ] 

Pankaj edited comment on FLINK-9009 at 3/16/18 6:35 PM:


No, it is not related to Kafka. I have already tried and checked: the problem 
only occurs when we introduce more parallelism and Flink is writing to 
Cassandra with two clusters. In my case I introduced parallelism = 10 because 
I have 10 partitions in the Kafka topic.

I do not face any problem if I use parallelism = 1 with Cassandra writing from 
Flink, but it failed with more parallelism.

The problem can be replicated with the steps I shared in the description.

I'm not sure if Flink has the fixes for the two tickets below in the Cassandra 
connector API version I shared:

https://issues.apache.org/jira/browse/CASSANDRA-11243

https://issues.apache.org/jira/browse/CASSANDRA-10837

 


was (Author: pmishra01):
No, it is not related to Kafka. I have already tried and checked: the problem 
only occurs when we introduce more parallelism and Flink is writing to 
Cassandra with two clusters. In my case I introduced parallelism = 10 because 
I have 10 partitions in the Kafka topic.

I do not face any problem in the same scenario with no Cassandra writing from 
Flink.

The problem can be replicated with the steps I shared in the description.

I'm not sure if Flink has the fixes for the two tickets below in the Cassandra 
connector API version I shared:

https://issues.apache.org/jira/browse/CASSANDRA-11243

https://issues.apache.org/jira/browse/CASSANDRA-10837

 





