Re: Get savepoint status fails - Flink 1.6.2

2018-11-15 Thread PedroMrChaves
Hello, I've tried with different (jobId, triggerId) pairs but it doesn't work. Best Regards, Pedro Chaves -- Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/

Re: Get savepoint status fails - Flink 1.6.2

2018-11-15 Thread miki haiat
Can you share some logs? On Thu, Nov 15, 2018 at 10:46 AM PedroMrChaves wrote: > Hello, > > I've tried with different (jobId, triggerId) pairs but it doesn't work. > > Best Regards, > Pedro Chaves > -- > Sent from: > http://apache-flink-user-mailing-list

Re: Could not find previous entry with key.

2018-11-15 Thread Chesnay Schepler
Can you provide us with the implementation of your Event and IoTEvent classes? On 15.11.2018 06:10, Steve Bistline wrote: Any thoughts on where to start with this error would be appreciated. Caused by: java.lang.IllegalStateException: Could not find previous entry with key: first event, value:
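
One common cause of this IllegalStateException (an assumption here; the classes above would confirm it) is an event class without consistent equals()/hashCode(), since the CEP shared buffer looks previous events up by equality. A minimal sketch of a CEP-friendly event POJO, with hypothetical fields:

import java.util.Objects;

// Hypothetical event POJO; CEP's shared buffer relies on consistent
// equals()/hashCode() to find previously matched events.
public class IoTEvent {
    public String deviceId;
    public double value;
    public long timestamp;

    public IoTEvent() {} // default constructor so Flink treats this as a POJO

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof IoTEvent)) return false;
        IoTEvent that = (IoTEvent) o;
        return Double.compare(that.value, value) == 0
                && timestamp == that.timestamp
                && Objects.equals(deviceId, that.deviceId);
    }

    @Override
    public int hashCode() {
        return Objects.hash(deviceId, value, timestamp);
    }
}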

Re: Partitioning by composite key, But type and number of keys are dynamic

2018-11-15 Thread Chesnay Schepler
How do you determine which fields you want to use if you don't know the names and types beforehand? I would wrap the GenericRecord in my own type, implement the field selection logic in hashCode/equals, and unwrap them again in your functions. On 14.11.2018 10:57, Gaurav Luthra wrote: There
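
A rough sketch of that wrapper idea, assuming Avro GenericRecords and a user-supplied list of key field names (the class name is illustrative):

import java.util.List;
import java.util.Objects;
import org.apache.avro.generic.GenericRecord;

// Wraps a GenericRecord and bases equality only on the configured key fields,
// so the wrapper itself can serve as the key.
public class RecordKeyWrapper {
    private final GenericRecord record;
    private final List<String> keyFields; // field names supplied by the end user

    public RecordKeyWrapper(GenericRecord record, List<String> keyFields) {
        this.record = record;
        this.keyFields = keyFields;
    }

    public GenericRecord unwrap() {
        return record;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof RecordKeyWrapper)) return false;
        RecordKeyWrapper other = (RecordKeyWrapper) o;
        for (String field : keyFields) {
            if (!Objects.equals(record.get(field), other.record.get(field))) {
                return false;
            }
        }
        return true;
    }

    @Override
    public int hashCode() {
        int hash = 17;
        for (String field : keyFields) {
            hash = 31 * hash + Objects.hashCode(record.get(field));
        }
        return hash;
    }
}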

Re: Rescaling Flink job from an External Checkpoint

2018-11-15 Thread Chesnay Schepler
The docs are worded that way since not all backends support it. I believe rescaling does work for RocksDB checkpoints, but we do not provide any guarantee that this remains the case. Basically, use at your own risk. On 13.11.2018 13:24, suraj7 wrote: Hi, I'm using Flink 1.5 with Roc
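
For reference, restoring from a retained (externalized) checkpoint uses the same -s flag as a savepoint restore, e.g. flink run -s <checkpointPath> -p <newParallelism> job.jar. Retention is enabled in the job itself; a minimal sketch (the checkpoint path is illustrative):

import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.CheckpointConfig;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RetainedCheckpointSetup {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setStateBackend(new RocksDBStateBackend("hdfs:///flink/checkpoints"));
        env.enableCheckpointing(60_000); // checkpoint every 60 seconds
        // Keep the checkpoint on cancellation so it remains available for a restore.
        env.getCheckpointConfig().enableExternalizedCheckpoints(
                CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);
        // ... sources, transformations, sinks, then env.execute(...) ...
    }
}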

Re: Field could not be resolved by the field mapping when using kafka connector

2018-11-15 Thread Dominik Wosiński
Hey, could you please show some sample data that you want to process? This would help in verifying the issue. Best Regards, Dom. Tue, 13 Nov 2018 at 13:58 Jeff Zhang wrote: > Hi, > > I hit the following error when I try to use the kafka connector in the flink table > api. There's very little document
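
For context, a sketch of the Flink 1.6-style Table API Kafka source (tableEnv is assumed to be an existing StreamTableEnvironment; all names are illustrative). The "field could not be resolved" error typically appears when a Schema field name has no matching field in the format:

// Assumes imports of org.apache.flink.table.api.Types and
// org.apache.flink.table.descriptors.Json / Kafka / Schema.
tableEnv.connect(
        new Kafka()
            .version("0.11") // connector version, illustrative
            .topic("transactions")
            .property("bootstrap.servers", "localhost:9092"))
    .withFormat(new Json().deriveSchema()) // derive the JSON fields from the schema
    .withSchema(new Schema()
            .field("user", Types.STRING())   // must match a field in the JSON records
            .field("amount", Types.DOUBLE()))
    .inAppendMode()
    .registerTableSource("Transactions");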

Re: Field could not be resolved by the field mapping when using kafka connector

2018-11-15 Thread Chesnay Schepler
This issue was already resolved in another thread by the same author. On 15.11.2018 10:52, Dominik Wosiński wrote: Hey, could you please show some sample data that you want to process? This would help in verifying the issue. Best Regards, Dom. Tue, 13 Nov 2018 at 13:58 Jeff Zhang

Re: Flink with parallelism 3 is running locally but not on cluster

2018-11-15 Thread Dominik Wosiński
Hey, did you try to run any other job on your setup? Also, could you please tell us what sources you are trying to use? Do all messages come from Kafka? From the first look, it seems that the JobManager can't connect to one of the TaskManagers. Best Regards, Dom. Mon, 12 Nov 2018 at 17:1

Re: Flink with parallelism 3 is running locally but not on cluster

2018-11-15 Thread Dominik Wosiński
PS. Could you also post the whole log for the application run? Best Regards, Dom. Thu, 15 Nov 2018 at 11:04 Dominik Wosiński wrote: > Hey, > > Did you try to run any other job on your setup? Also, could you please > tell us what sources you are trying to use? Do all messages come fr

Re: Field could not be resolved by the field mapping when using kafka connector

2018-11-15 Thread Dominik Wosiński
Hey, thanks for the info, I hadn't noticed that. I was just going through older messages with no responses. Best Regards, Dom.

Re: Partitioning by composite key, But type and number of keys are dynamic

2018-11-15 Thread Gaurav Luthra
Hi Chesnay, My end user will know the fields of the "input records" (GenericRecord). In the configuration, the end user will only tell me the names and number of the fields; based on these fields of the GenericRecord I will have to partition the DataStream and make a keyed stream. Currently, I have impl

Re: Partitioning by composite key, But type and number of keys are dynamic

2018-11-15 Thread Chesnay Schepler
Why don't you calculate the hashCode for each field, combine them, and use that as the key? You won't get around calculating something for each field and combining the results. On 15.11.2018 11:02, Gaurav Luthra wrote: Hi Chesnay, My end user will know the fields of "input records"
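
In Flink terms, that combined hash can be returned from a KeySelector; a minimal sketch, again assuming Avro GenericRecords and user-configured field names:

import java.util.List;
import java.util.Objects;
import org.apache.avro.generic.GenericRecord;
import org.apache.flink.api.java.functions.KeySelector;

// Combines the hash of every configured field into one key, so the number
// and names of the key fields can stay dynamic.
public class DynamicFieldKeySelector implements KeySelector<GenericRecord, Integer> {
    private final List<String> keyFields; // field names from the user's configuration

    public DynamicFieldKeySelector(List<String> keyFields) {
        this.keyFields = keyFields;
    }

    @Override
    public Integer getKey(GenericRecord record) {
        int hash = 17;
        for (String field : keyFields) {
            hash = 31 * hash + Objects.hashCode(record.get(field));
        }
        return hash;
    }
}

Usage would be stream.keyBy(new DynamicFieldKeySelector(configuredFields)). Note that distinct field combinations can collide on the same hash, so records with different key values may end up grouped together.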

Re: FLINK Kinesis Connector Error - ProvisionedThroughputExceededException

2018-11-15 Thread Rafi Aroch
Hi Gordon, Thanks for the reply. So is it true to say that the KPL RateLimit would not be enforced when the sink parallelism is >1? If multiple subtasks are writing to the same shard and each has its own RateLimit, it is possible that the RateLimit is exceeded. If that's the case, can you sugg
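
If lowering the per-subtask limit is an acceptable workaround (an assumption, not a confirmed fix), the KPL RateLimit could be scaled down by the sink parallelism, since each parallel producer instance enforces it independently; a sketch with illustrative values:

import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisProducer;
import org.apache.flink.streaming.connectors.kinesis.config.AWSConfigConstants;

// ...
int sinkParallelism = 4; // illustrative

Properties producerConfig = new Properties();
producerConfig.put(AWSConfigConstants.AWS_REGION, "us-east-1");
// RateLimit is a KPL setting (a percentage of the shard limit); with N parallel
// producers writing to one shard, the aggregate can reach N * RateLimit,
// so divide the intended limit by the sink parallelism.
producerConfig.put("RateLimit", String.valueOf(100 / sinkParallelism));

FlinkKinesisProducer<String> sink =
        new FlinkKinesisProducer<>(new SimpleStringSchema(), producerConfig);
sink.setDefaultStream("my-stream"); // illustrative stream name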

Re: Run Time Exception

2018-11-15 Thread Mar_zieh
Hello guys, I have written this code. I could print the whole file, but I want to read the file line by line and process each line separately. Would you please help me with how I can do that? ExecutionEnvironment env = ExecutionEnvironment.createLocalEnvironment(); DataSet transactions = env.readTe
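
For what the question asks: readTextFile already produces one String element per line, so a map or flatMap over the DataSet processes each line separately. A minimal sketch (path and per-line logic are illustrative):

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

public class LineByLine {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.createLocalEnvironment();

        // One String element per line of the input file.
        DataSet<String> transactions = env.readTextFile("/path/to/transactions.txt");

        // Process each line separately, e.g. trim and tag it.
        DataSet<String> processed = transactions.map(line -> "processed: " + line.trim());

        processed.print(); // print() also triggers execution in the DataSet API
    }
}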

Re: Run Time Exception

2018-11-15 Thread Chesnay Schepler
I highly encourage you to read through the examples and the Batch API documentation: https://ci.apache.org/projects/flink/flink-docs-release-1.6/dev/batch/#example-program https://ci.apache.org/projects/flink/flink-docs-release-1.6/dev/batch/dataset_transformations.html#dataset-transformations

Re: Run Time Exception

2018-11-15 Thread marzieh ghasemi
Thank you! On Thu, Nov 15, 2018 at 4:18 PM Chesnay Schepler wrote: > I'm highly encouraging you to read through the examples and Batch API > documentation: > > > https://ci.apache.org/projects/flink/flink-docs-release-1.6/dev/batch/#example-program > > https://ci.apache.org/projects/flink/flink-

Enabling Flink’s checkpointing

2018-11-15 Thread Olga Luganska
Hello, From reading the Flink documentation I see that to enable checkpointing we need to: 1. Enable checkpointing on the execution environment. 2. Make sure that your source/sink implements either the CheckpointedFunction or the ListCheckpointed interface. Is #2 a must, and how the checkpointing mechanism is
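
For reference, step 1 is a single call on the environment; a minimal sketch (interval and mode are illustrative):

import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointingSetup {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // 1. Enable checkpointing on the execution environment (every 10 seconds).
        env.enableCheckpointing(10_000, CheckpointingMode.EXACTLY_ONCE);

        // ... sources, transformations, sinks ...
        env.execute("checkpointing-example");
    }
}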

Re: Flink with parallelism 3 is running locally but not on cluster

2018-11-15 Thread zavalit
Hey Dominik, thanks for getting back. I've also posted on Stack Overflow, and David Anderson gave a good tip on where to look: https://stackoverflow.com/questions/53282967/run-flink-with-parallelism-more-than-1/53289840 The issue is resolved, everything is running. Thanks again. -- Sent from: http://apache

Re: Join Dataset in stream

2018-11-15 Thread Ken Krugler
Hi Eric, This sounds like a use case for BroadcastProcessFunction. You’d use the Cassandra dataset as the source for the broadcast stream, which is distrib
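
A rough shape of that pattern, assuming mainStream and cassandraStream are existing DataStream<String> instances (all types and names are illustrative):

import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.streaming.api.datastream.BroadcastStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.co.BroadcastProcessFunction;
import org.apache.flink.util.Collector;

// ...
// Descriptor for the broadcast state that holds the Cassandra-derived lookup data.
final MapStateDescriptor<String, String> lookupDescriptor = new MapStateDescriptor<>(
        "cassandraLookup", BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO);

BroadcastStream<String> lookupStream = cassandraStream.broadcast(lookupDescriptor);

DataStream<String> enriched = mainStream
        .connect(lookupStream)
        .process(new BroadcastProcessFunction<String, String, String>() {
            @Override
            public void processElement(String event, ReadOnlyContext ctx,
                                       Collector<String> out) throws Exception {
                // Look the event up against the broadcast state.
                String match = ctx.getBroadcastState(lookupDescriptor).get(event);
                out.collect(match != null ? event + " -> " + match : event);
            }

            @Override
            public void processBroadcastElement(String value, Context ctx,
                                                Collector<String> out) throws Exception {
                // Update the broadcast state as Cassandra rows arrive.
                ctx.getBroadcastState(lookupDescriptor).put(value, value);
            }
        });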

Standalone HA cluster: Fatal error occurred in the cluster entrypoint.

2018-11-15 Thread Olga Luganska
Hello, I am running a Flink 1.6.1 standalone HA cluster. Today I am unable to start the cluster because of a "Fatal error in cluster entrypoint". (I used to see this error when running Flink 1.5; after upgrading to 1.6.1, which had a fix for this bug, everything worked well for a while.) Question:

Writing to S3

2018-11-15 Thread Steve Bistline
I am trying to write out to S3 from Flink with the following code and getting the error below. I tried adding the parser as a dependency, etc. Any help would be appreciated. Table result = tableEnv.sql("SELECT 'Alert ', t_sKey, t_deviceID, t_sValue FROM SENSORS WHERE t_sKey='TEMPERATURE' AND t_sValue

Re: Writing to S3

2018-11-15 Thread Ken Krugler
Hi Steve, This looks similar to https://stackoverflow.com/questions/52009823/flink-shaded-hadoop-s3-filesystems-still-requires-hdfs-default-and-hdfs-site-con I see th

Re: Writing to S3

2018-11-15 Thread Steve Bistline
Hi Ken, Thank you for the link... I had just found this, and when I removed the Hadoop dependencies (not used in this project anyway) things worked fine. Now I'm just trying to figure out the credentials. Thanks, Steve On Thu, Nov 15, 2018 at 7:12 PM Ken Krugler wrote: > Hi Steve, > > This loo
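
In case it helps with the credentials: assuming the shaded flink-s3-fs-hadoop (or -presto) filesystem is on the classpath, the access keys can go into flink-conf.yaml (values illustrative), though IAM roles are generally preferred where available:

s3.access-key: YOUR_ACCESS_KEY
s3.secret-key: YOUR_SECRET_KEY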

[no subject]

2018-11-15 Thread Steve Bistline
Well, hopefully the last problem with this project. Any thoughts would be appreciated. java.lang.RuntimeException: Exception occurred while processing valve output watermark: at org.apache.flink.streaming.runtime.io.StreamInputProcessor$Forwardin

Re:

2018-11-15 Thread Steve Bistline
More to the story: the cluster kicked out this error: Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "flink-7" Uncaught error from thread [flink-scheduler-1]: flink-akka.remote.default-remote-disp

Re: What if not to keep containers across attempts in HA setup?(Internet mail)

2018-11-15 Thread Paul Lam
Hi Devin, Thanks for your reasoning! It’s consistent with my observation, and I fully agree with you. Maybe we should create an issue for the Hadoop community if it is not fixed in the master branch. Best, Paul Lam > On 2018-11-15 at 11:59, devinduan(段丁瑞) wrote: > > Hi Paul: > I have reviewed

Flink Mini Cluster Performance?

2018-11-15 Thread jlist9
I'm developing a Flink application and I'm still learning. For simplicity, most of the time I test by running the main method of the entry class as a regular Java application. Would that be running on what's called a mini cluster? I find it quite convenient, and it makes debugging jobs really easy. My q
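
For reference: when the main method runs outside a cluster, getExecutionEnvironment() falls back to a local environment that starts an embedded mini cluster inside the same JVM; you can also request one explicitly. A minimal sketch:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class LocalRun {
    public static void main(String[] args) throws Exception {
        // Explicitly create a local environment (embedded mini cluster in this JVM).
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.createLocalEnvironment(4); // parallelism 4, illustrative

        env.fromElements(1, 2, 3)
           .map(i -> i * 2)
           .print();

        env.execute("local-test");
    }
}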

Re: Auto/Dynamic scaling in Flink

2018-11-15 Thread 罗齐
Hi Nauroz, If you’re using Flink 1.5 on YARN, it supports dynamic task manager allocation by default [1]. After skimming the code, it seems to me that, in general, if the requested parallelism is larger than the available task slots, new task managers will be requested via the ResourceManager (please correct