Flink Slow Execution

2024-01-17 Thread Dulce Morim
Hello,

In a single JVM, I'm running multiple Flink batch jobs locally using the 
MiniCluster (Java 17 and Flink 1.18).

At the beginning of the process, the MiniCluster starts almost instantly. 
However, I'm running into an issue where the more jobs I execute, the longer 
the MiniCluster takes to start.

Here's an example:

2024-01-17 17:07:26.989 [INFO ] MiniCluster - Starting Flink Mini Cluster
2024-01-17 17:07:27.165 [INFO ] MiniCluster - Starting Metrics Registry
2024-01-17 17:07:33.801 [INFO ] MetricRegistryImpl - No metrics reporter configured, no metrics will be exposed/reported.
2024-01-17 17:07:33.801 [INFO ] MiniCluster - Starting RPC Service(s)
2024-01-17 17:07:34.646 [INFO ] MiniCluster - Flink Mini Cluster started successfully

Has anyone faced this issue?
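In case it helps narrow things down: if each job execution ends up creating and 
starting its own MiniCluster (or other per-job heavyweight state accumulates in 
the shared JVM), the startup cost can grow with the number of jobs run. Below is 
a minimal sketch of the start-once, reuse-for-every-job pattern in plain Java; 
the `LocalCluster` class is a hypothetical stand-in for the real MiniCluster, 
since the actual setup code isn't shown here.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ClusterReuse {
    // Counts how many times the expensive startup actually ran.
    static final AtomicInteger STARTS = new AtomicInteger();

    /** Hypothetical stand-in for a MiniCluster-style local cluster. */
    static final class LocalCluster {
        LocalCluster() { STARTS.incrementAndGet(); } // expensive startup happens here
        int run(String jobName) { return jobName.length(); } // pretend job result
    }

    private static LocalCluster shared; // lazily created, reused for all jobs

    /** Start the cluster on first use and reuse it for every subsequent job. */
    static synchronized LocalCluster cluster() {
        if (shared == null) shared = new LocalCluster();
        return shared;
    }

    public static void main(String[] args) {
        for (String job : new String[] {"jobA", "jobB", "jobC"}) {
            cluster().run(job); // all three jobs share the one cluster
        }
        System.out.println(STARTS.get()); // prints 1: cluster started only once
    }
}
```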

Thanks.


Error flink 1.18 not found ExecutionConfig

2023-11-26 Thread Dulce Morim
:1.18.0]
  at deployment.teste.ear.teste.war/org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:356) ~[flink-runtime-1.18.0.jar:1.18.0]
  at deployment.teste.ear.teste.war/org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.internalCreateJobMasterService(DefaultJobMasterServiceFactory.java:128) ~[flink-runtime-1.18.0.jar:1.18.0]
  at deployment.teste.ear.teste.war/org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.lambda$createJobMasterService$0(DefaultJobMasterServiceFactory.java:100) ~[flink-runtime-1.18.0.jar:1.18.0]
  at deployment.teste.ear.teste.war/org.apache.flink.util.function.FunctionUtils.lambda$uncheckedSupplier$4(FunctionUtils.java:112) ~[flink-core-1.18.0.jar:1.18.0]
  at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1768) ~[?:?]
  ... 3 more

Thanks,
Dulce Morim


Control insert database with dataset

2018-06-18 Thread Dulce Morim
Hello,

I'm trying to catch a BatchUpdateException when inserting a DataSet via the 
output() method, because I need to handle duplicate-key inserts. How can I do 
this?



[2018-06-18 22:18:56,419] INFO DataSink (org.apache.flink.api.java.io.jdbc.JDBCOutputFormat@64aad6db) (1/1) (00a77c9e18f893cde9c62a3c9ca5c471) switched from RUNNING to FAILED. (org.apache.flink.runtime.executiongraph.ExecutionGraph)
java.lang.IllegalArgumentException: writeRecord() failed
  at org.apache.flink.api.java.io.jdbc.JDBCOutputFormat.writeRecord(JDBCOutputFormat.java:209)
  at org.apache.flink.api.java.io.jdbc.JDBCOutputFormat.writeRecord(JDBCOutputFormat.java:41)
  at org.apache.flink.runtime.operators.DataSinkTask.invoke(DataSinkTask.java:194)
  at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
  at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.BatchUpdateException: Violation of PRIMARY KEY constraint 'TEST_PK'. Cannot insert duplicate key in object 'TEST'. The duplicate key value is (37183).
  at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeBatch(SQLServerPreparedStatement.java:2303)
  at org.apache.flink.api.java.io.jdbc.JDBCOutputFormat.writeRecord(JDBCOutputFormat.java:205)
  ... 4 more


I only get a generic exception:
org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
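One workaround I'm considering (a sketch, not necessarily the recommended 
answer): since the BatchUpdateException is wrapped inside the sink's failure 
and only surfaces as the generic JobExecutionException, avoid the conflict 
before the sink instead, by deduplicating on the primary key (the DataSet API 
has distinct() on key fields for this). In plain Java, the keep-first-occurrence 
semantics per key look like the following; the Object[] row shape and key index 
are assumptions for illustration.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DedupByKey {
    /** Keep the first row seen for each primary key, dropping later duplicates. */
    static List<Object[]> distinctByKey(List<Object[]> rows, int keyIndex) {
        Map<Object, Object[]> firstSeen = new LinkedHashMap<>();
        for (Object[] row : rows) {
            firstSeen.putIfAbsent(row[keyIndex], row); // later duplicates are ignored
        }
        return new ArrayList<>(firstSeen.values());
    }

    public static void main(String[] args) {
        List<Object[]> rows = Arrays.asList(
            new Object[] {37183, "first"},
            new Object[] {37183, "duplicate"}, // would violate TEST_PK if inserted
            new Object[] {42, "other"});
        List<Object[]> unique = distinctByKey(rows, 0);
        System.out.println(unique.size()); // prints 2: the duplicate key is dropped
    }
}
```

If the duplicates need to be detected rather than silently dropped, the 
alternative would be a custom OutputFormat that wraps the JDBC write and walks 
the SQLException cause chain itself.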


Thanks,
Dulce Morim


Keyby connect for a one to many relationship - DataStream API - Ride Enrichment (CoProcessFunction)

2018-03-26 Thread Dulce Morim
Hello,

Following this exercise:
http://training.data-artisans.com/exercises/rideEnrichment-processfunction.html

I need to do something similar, but my data structure is something like:

A:
  Primary_key
  other fields

B:
  Primary_key
  Relation_Key
  other fields

where the A-to-B relationship is one-to-many, on B.Relation_key = A.Primary_key.

When using keyBy on both streams, with "A.Primary_key" as the key on the A 
stream and "B.Relation_key" on the B stream, the data that comes from B only 
shows the last occurrence among the records sharing the same "B.Relation_key".

Is it possible to connect these two streams? The exercise's solution assumes a 
one-to-one relationship, but we want one-to-many. Should this be solved via 
another process?
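For what it's worth, the behaviour described (only the last B per key survives) 
is what a single ValueState per key would produce; for one-to-many, the B side 
would need list-shaped state (e.g. ListState in a keyed CoProcessFunction) so 
every B record for a key is kept. Here is a plain-Java sketch of the per-key 
state such a function would hold; the maps stand in for keyed ValueState/ListState 
and the String records and "+" join output are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class OneToManyJoin {
    // Per-key "state": one A per key (like ValueState<A>), but a LIST of Bs
    // per key (like ListState<B>) so later Bs don't overwrite earlier ones.
    static final Map<String, String> aByKey = new HashMap<>();
    static final Map<String, List<String>> bufferedBs = new HashMap<>();
    static final List<String> joined = new ArrayList<>();

    /** A record arrives: store it and flush any Bs buffered before it. */
    static void processA(String key, String a) {
        aByKey.put(key, a);
        for (String b : bufferedBs.getOrDefault(key, List.of())) {
            joined.add(a + "+" + b);
        }
        bufferedBs.remove(key);
    }

    /** B record arrives: join immediately if A is known, else buffer it. */
    static void processB(String relationKey, String b) {
        String a = aByKey.get(relationKey);
        if (a != null) {
            joined.add(a + "+" + b);
        } else {
            bufferedBs.computeIfAbsent(relationKey, k -> new ArrayList<>()).add(b);
        }
    }

    public static void main(String[] args) {
        processB("k1", "b1"); // arrives before its A: buffered, not overwritten
        processB("k1", "b2"); // second B for the same key is kept as well
        processA("k1", "a1"); // emits a1+b1 and a1+b2
        processB("k1", "b3"); // late B joins immediately
        System.out.println(joined); // prints [a1+b1, a1+b2, a1+b3]
    }
}
```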

Thanks,
Dulce Morim