flink TaskManager memory problem (container killed by YARN)
My flink job (1.11.0) keeps getting killed by YARN. I submit it with bin/flink run -m yarn-cluster -yjm 2048m -ytm 8192m -ys 2 xxx.jar, use the RocksDB state backend, and set taskmanager.memory.managed.fraction=0.6 and taskmanager.memory.jvm-overhead.fraction=0.2. After the job has been running for a while, the TaskManager process's native memory keeps growing until the container exceeds its limit. The TaskManager page in the web UI shows:

Free Slots / All Slots: 0 / 2
CPU Cores: 24
Physical Memory: 251 GB
JVM Heap Size: 1.82 GB
Flink Managed Memory: 4.05 GB

Memory - JVM (Heap/Non-Heap)
Type      Committed  Used     Maximum
Heap      1.81 GB    1.13 GB  1.81 GB
Non-Heap  169 MB     160 MB   1.48 GB
Total     1.98 GB    1.29 GB  3.30 GB

Outside JVM
Type    Count   Used    Capacity
Direct  24,493  718 MB  718 MB
Mapped  0       0 B     0 B

Network Memory Segments
Type       Count
Available  21,715
Total      22,118

Garbage Collection
Collector     Count  Time
PS_Scavenge   199    17,433
PS_MarkSweep  4      4,173
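For reference, the managed-memory and heap numbers in that UI snapshot can be reproduced from Flink 1.11's memory model. The sketch below assumes all other memory options are at their 1.11 defaults (taskmanager.memory.jvm-overhead.max = 1 GB, jvm-metaspace = 256 MB, network fraction = 0.1, framework off-heap = 128 MB); the class name is mine, not part of Flink:

```java
// Sketch: Flink 1.11 TaskManager memory breakdown for -ytm 8192m with
// taskmanager.memory.managed.fraction=0.6 and jvm-overhead.fraction=0.2.
// Assumes every other taskmanager.memory.* option is at its default.
public class TmMemoryBreakdown {
    // JVM overhead: fraction 0.2 of 8192 MB would be 1638.4 MB, but it is
    // clipped to the default taskmanager.memory.jvm-overhead.max of 1024 MB.
    static double jvmOverheadMb() { return Math.min(8192 * 0.2, 1024); }

    // Total Flink memory = total process - JVM overhead - metaspace (256 MB).
    static double totalFlinkMb() { return 8192 - jvmOverheadMb() - 256; } // 6912

    static double managedMb() { return totalFlinkMb() * 0.6; } // 4147.2 MB ~= 4.05 GB
    static double networkMb() { return totalFlinkMb() * 0.1; } // 691.2 MB

    // Heap = total Flink - managed - network - framework off-heap (128 MB).
    static double heapMb() {
        return totalFlinkMb() - managedMb() - networkMb() - 128; // 1945.6 MB
    }

    public static void main(String[] args) {
        System.out.printf("managed=%.1f MB, network=%.1f MB, heap=%.1f MB, segments=%d%n",
                managedMb(), networkMb(), heapMb(),
                (long) (networkMb() * 1024 * 1024 / 32768)); // 32 KB per segment
    }
}
```

This matches the UI: 4147.2 MB of managed memory is the 4.05 GB shown, 691.2 MB of network memory is exactly the 22,118 segments of 32 KB, and the 1945.6 MB heap (-Xmx) appears as ~1.82 GB because the JVM reports max heap minus one survivor space. With RocksDB, whether its native allocations stay inside managed memory depends on state.backend.rocksdb.memory.managed (default true in 1.10+); if native memory grows past the container limit, that is the first setting worth checking.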
Stop-job problem and DDL problem
Dear all, I have two problems now: 1. When I stop a Flink job using the command "yarn application -kill
Re: flink interval join state cleanup question
---- From: "Benchao Li"
flink interval join state cleanup question
Flink's interval join works differently from a windowed group-by's left join. Take this query as an example:

select * from a, b
where a.id = b.id
  and b.rowtime between a.rowtime and a.rowtime + INTERVAL '1' HOUR

Here leftRelativeSize = 1 hour and rightRelativeSize = 0. State for a row is cleaned up at

cleanUpTime = rowTime + leftRelativeSize + (leftRelativeSize + rightRelativeSize) / 2 + allowedLateness + 1

i.e. 1.5 hours (plus 1 ms) after the row's event time; for a left join, a left row that never found a match is emitted padded with nulls at that point. The join operator also holds back its output watermark by Math.max(leftRelativeSize, rightRelativeSize) + allowedLateness, which is 1 hour here. So when a null-padded row is emitted at cleanup time, its rowtime is already about 0.5 hours behind the current watermark, and a downstream event-time group by treats it as late data and drops it.

I'm on flink 1.10.0. My test code:

import org.apache.commons.net.ntp.TimeStamp;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.typeutils.RowTypeInfo;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.streaming.api.functions.source.SourceFunction;
import org.apache.flink.streaming.api.watermark.Watermark;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.table.functions.ScalarFunction;
import org.apache.flink.types.Row;
import org.apache.flink.util.Collector;
import org.apache.flink.util.IOUtils;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.Serializable;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.sql.Timestamp;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

public class TimeBoundedJoin {
    public static AssignerWithPeriodicWatermarks
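The cleanup arithmetic described above can be sketched directly for the 1-hour example (a standalone illustration, not Flink source; allowedLateness = 0 is assumed and the class name is mine):

```java
// Sketch of the interval-join state-retention arithmetic for
// b.rowtime BETWEEN a.rowtime AND a.rowtime + INTERVAL '1' HOUR.
public class IntervalJoinCleanupMath {
    static final long HOUR_MS = 3_600_000L;

    static final long leftRelativeSize = HOUR_MS; // from the join condition
    static final long rightRelativeSize = 0L;
    static final long allowedLateness = 0L;       // assumed default

    // cleanUpTime = rowTime + left + (left + right) / 2 + allowedLateness + 1
    static long cleanUpTime(long rowTime) {
        return rowTime + leftRelativeSize
                + (leftRelativeSize + rightRelativeSize) / 2
                + allowedLateness + 1;
    }

    // The join holds back its output watermark by max(left, right) + allowedLateness.
    static long watermarkDelay() {
        return Math.max(leftRelativeSize, rightRelativeSize) + allowedLateness;
    }

    public static void main(String[] args) {
        long rowTime = 0L;
        long cleanup = cleanUpTime(rowTime);           // rowTime + 1.5 h + 1 ms
        long wmAtCleanup = cleanup - watermarkDelay(); // rowTime + 0.5 h + 1 ms
        // A null-padded row emitted at cleanup still carries rowTime, which is
        // already 0.5 h behind the operator's watermark at that moment, so a
        // following event-time GROUP BY window counts it as late and drops it.
        System.out.println("cleanup at " + cleanup + " ms, watermark then " + wmAtCleanup + " ms");
    }
}
```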