Ah thanks, so even though the method of describing it is exactly the same,
because you're using the maximum resolution it isn't useful for
out-of-orderness. OK, that's clear.
--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
Thanks Roman,
somehow I must have missed this in the documentation.
What is the difference (if any) between:
Ascending timestamps:
WATERMARK FOR rowtime_column AS rowtime_column - INTERVAL '0.001' SECOND.
Bounded out of orderness timestamps:
WATERMARK FOR rowtime_column AS rowtime_column - I
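As far as I understand it, both DDL forms use the same mechanism: the watermark is the maximum timestamp seen so far minus a fixed delay, and only the size of the delay differs. A plain-Python sketch (no Flink needed; the event timestamps and the 5-second delay are arbitrary choices for illustration):

```python
def watermarks(timestamps_ms, delay_ms):
    """Emit the watermark after each event: max timestamp seen so far minus the delay."""
    max_ts = float("-inf")
    out = []
    for ts in timestamps_ms:
        max_ts = max(max_ts, ts)
        out.append(max_ts - delay_ms)
    return out

events = [1000, 2000, 1500, 3000]  # the 1500 event arrives out of order

ascending = watermarks(events, delay_ms=1)   # rowtime - INTERVAL '0.001' SECOND
bounded = watermarks(events, delay_ms=5000)  # rowtime - INTERVAL '5' SECOND

print(ascending)  # [999, 1999, 1999, 2999]  -> 1500 is behind the watermark, i.e. late
print(bounded)    # [-4000, -3000, -3000, -2000]  -> 1500 is still "on time"
```

With a 1 ms delay any event that arrives even slightly out of order is already behind the watermark, which is why "ascending" only works for streams that are effectively in order.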
I worked out the row type input for the conversion to a datastream:
type_info = Types.ROW_NAMED(
    ['sender', 'stw', 'time'],
    [Types.STRING(), Types.DOUBLE(), Types.LONG()])
datastream = table_env.to_append_stream(my_table, type_info)
But if I try to assign rowtime and watermarks to the datastream and con
Or is this only possible with the DataStream API? I tried converting a table
to a datastream of rows, but being a noob, finding examples of how to do
this has been difficult, and I'm not sure how to provide the required
RowTypeInfo.
Can I set the watermark strategy to bounded out-of-orderness when using the
Table API and SQL DDL to assign watermarks?
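In Flink's SQL DDL the watermark strategy is implied by the watermark expression itself: subtracting a fixed interval from the rowtime column gives bounded out-of-orderness with that delay. A sketch (table name, columns, and the 5-second delay are hypothetical; the WITH clause is left as a placeholder):

```python
# Hypothetical DDL string: bounded out-of-orderness via the watermark expression.
ddl = """
CREATE TABLE readings (
    sender STRING,
    stw DOUBLE,
    rowtime_column TIMESTAMP(3),
    WATERMARK FOR rowtime_column AS rowtime_column - INTERVAL '5' SECOND
) WITH (...)
"""
print(ddl)
```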
I'm trying to calculate a simple rolling average using PyFlink, but somehow
the last rows streaming in seem to be excluded, which I expected to be the
result of data arriving out of order. However, I fail to understand why.
exec_env = StreamExecutionEnvironment.get_execution_environment()
exec_env
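One possible explanation for the "missing" last rows (a sketch in plain Python, with made-up window size, delay, and timestamps): an event-time window only fires once the watermark passes its end, and the watermark only advances when later events arrive, so the rows in the final window have nothing behind them to trigger it. On a bounded run Flink emits a final watermark at end of input, but on an unbounded stream with an out-of-orderness delay the last window simply stays open.

```python
WINDOW_MS = 10_000  # hypothetical 10 s tumbling window
DELAY_MS = 5_000    # hypothetical 5 s out-of-orderness delay

def fired_windows(timestamps_ms):
    """Return the start of each tumbling window whose end the watermark has passed."""
    watermark = max(timestamps_ms) - DELAY_MS
    windows = {ts // WINDOW_MS * WINDOW_MS for ts in timestamps_ms}
    return sorted(w for w in windows if w + WINDOW_MS <= watermark)

events = [1_000, 4_000, 12_000, 21_000, 26_000]
print(fired_windows(events))  # [0, 10000] -- the [20000, 30000) window never fires
```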
Hi Arvid,
I'm currently running PyFlink locally in the JVM with a parallelism of 1,
and the same file works fine if I direct it to a Kafka cluster (running in a
local Docker instance).
I assumed that the JAR pipeline definition in the Python file would make
sure they are made available on the cl
Hi Yun,
thanks for the help!
If I direct the Kafka connector in the DDL to a local Kafka cluster, it
works fine, so I assume access to the JAR files should not be the issue.
This is how i referred to the JAR files from Python:
t_env.get_config().get_configuration().set_string("pipeline.jars",
"
My JAR files, included in the same folder I run the Python code from:
flink-connector-kafka_2.11-1.13-SNAPSHOT.JAR
flink-sql-connector-kafka_2.11-1.13-SNAPSHOT.JAR
kafka-clients-2.7.0.JAR
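For reference, the value of `pipeline.jars` is a semicolon-separated list of `file://` URLs pointing at local jars. A sketch of building that string (the paths below are placeholders, not the poster's actual locations):

```python
# Hypothetical jar locations -- substitute the real paths on your machine.
jars = [
    "file:///path/to/flink-sql-connector-kafka_2.11-1.13-SNAPSHOT.jar",
    "file:///path/to/kafka-clients-2.7.0.jar",
]
pipeline_jars = ";".join(jars)
print(pipeline_jars)
# then: t_env.get_config().get_configuration().set_string("pipeline.jars", pipeline_jars)
```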
Traceback (most recent call last):
  File "streaming-dms.py", line 309, in <module>
    anomalies()
  File "streaming-dms.py", line 142, in anomalies
    t_env.sql_query(query).insert_into("ark_sink")
  File "/Users/jag002/anaconda3/lib/python3.7/site-packages/pyflink/table/table_environment.py", line 748,
I'm trying to read data from my Event Hub in Azure, but I end up with the
Flink error message 'findAndCreateTableSource failed'.
Using Flink 1.13-SNAPSHOT.
source_ddl = f"""CREATE TABLE dms_source(
x_value VARCHAR
) WITH (
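'findAndCreateTableSource failed' usually means Flink found no table factory matching the options in the WITH clause, e.g. the connector jar is not on the classpath or a required option is missing or misspelled. A hypothetical completed DDL, assuming Event Hubs' Kafka-compatible endpoint (topic, host, and format are placeholders, and the SASL credentials Event Hubs requires are omitted):

```python
# Hypothetical DDL -- all connection details are placeholders.
source_ddl = """
CREATE TABLE dms_source (
    x_value VARCHAR
) WITH (
    'connector' = 'kafka',
    'topic' = 'my-topic',
    'properties.bootstrap.servers' = 'my-namespace.servicebus.windows.net:9093',
    'format' = 'raw'
)
"""
print(source_ddl)
```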