Re: UDF:Type is not supported: ANY

2020-08-05 Thread Benchao Li
The code looks fine; that is exactly how we use it. Which Flink version are you on, and how did you register the UDF? zilong xiao wrote on Thu, Aug 6, 2020 at 12:06: > I tried it that way and it didn't seem to work. Here is my code, could you check whether it is feasible? > > public class Json2Map extends ScalarFunction { > >private static final Logger LOG = > LoggerFactory.getLogger(Json2Map.class); > >private static final ObjectMapper

Re: flink1.10.1/1.11.1: state keeps growing after SQL group-by and time-window operations

2020-08-05 Thread Congxian Qiu
Hi, I don't see the attachment on my side; I'm not sure whether it is a mail-client issue or something else, so could you double-check that the attachment was sent? Best, Congxian op <520075...@qq.com> wrote on Thu, Aug 6, 2020 at 10:36: > Thanks, the screenshots and configuration are in the attachment > I will try configuring the RocksDB StateBackend > > > -- Original message -- > *From:* "user-zh" ; > *Sent:* Wednesday, Aug 5, 2020, 17:43 > *To:*

Re: UDF:Type is not supported: ANY

2020-08-05 Thread zilong xiao
I tried it that way and it didn't seem to work. Here is my code, could you check whether it is feasible? public class Json2Map extends ScalarFunction { private static final Logger LOG = LoggerFactory.getLogger(Json2Map.class); private static final ObjectMapper OBJECT_MAPPER = new ObjectMapper(); public Json2Map(){} public Map eval(String param) {

flink1.11 es connector

2020-08-05 Thread Dream-底限
hi We would like to use Elasticsearch as a temporal table for lookups, but Flink does not expose an ES source connector, so we would need to implement one ourselves. Does using ES as a temporal table sound reasonable to you? (ps: we avoid HBase because rowkey design and maintaining secondary indexes are cumbersome, though HBase is also one of the options under evaluation)

Re: Seeking advice: implementing real-time alerting with Flink

2020-08-05 Thread ????????
(Body garbled by the archive encoding; the surviving fragments mention CEP, GroupPattern, the within clause, waiting, and running CEP on Flink 1.7.)

Re: UDF:Type is not supported: ANY

2020-08-05 Thread Benchao Li
You can return a Map type directly, for example: public class String2Map extends ScalarFunction { public Map eval(String param) throws Exception { Map map = new HashMap<>(); // ... return map; } @Override public TypeInformation getResultType(Class[] signature) { return

Re: Seeking advice: implementing real-time alerting with Flink

2020-08-05 Thread Jun Zhang
You can use broadcast state. I wrote an article about this myself, linked below for reference; you can replace the source with one that reads the MySQL config every few seconds. https://blog.csdn.net/zhangjun5965/article/details/106573528 samuel@ubtrobot.com wrote on Thu, Aug 6, 2020 at 10:26: > We need real-time alerting, and after some research Flink looks like a good fit, but a few things are unclear; please advise, thanks! > > Alerting has two parts: > 1. Alert rule configuration, with the data stored in MySQL as JSON, e.g. > {"times":5}
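The rule check that the broadcast side would run against each event can be sketched in plain Python. This is a toy stand-in under stated assumptions: the rule shapes ({"times": 5}, {"temperature": 80}) come from the question, while the function name, the per-key counters, and the dict-based rule store (which would be Flink broadcast state in a real job) are illustrative.

```python
from collections import defaultdict

def should_alert(event, rules, counts):
    """Check one event against the broadcast rules; counts tracks
    occurrences per key for the {"times": N} style rule."""
    key = event["key"]
    counts[key] += 1
    rule = rules.get(key, {})
    # occurrence-count rule: alert once the key has been seen N times
    if "times" in rule and counts[key] >= rule["times"]:
        return True
    # threshold rule: alert when the reading exceeds the limit
    if "temperature" in rule and event.get("temperature", 0) > rule["temperature"]:
        return True
    return False
```

In the actual job the rules dict would be kept up to date by the broadcast (MySQL-polling) stream, while events flow through the keyed side.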

Re: flink1.10.1/1.11.1: state keeps growing after SQL group-by and time-window operations

2020-08-05 Thread op
Thanks, I will try configuring the RocksDB StateBackend. -- Original message -- From: "user-zh"

Seeking advice: implementing real-time alerting with Flink

2020-08-05 Thread samuel....@ubtrobot.com
We need real-time alerting, and after some research Flink looks like a good fit. Alert rules are stored in MySQL as JSON, e.g. {"times":5} --- alert after 5 occurrences; {"temperature": 80} --- alert when the temperature exceeds 80

Re: UDF:Type is not supported: ANY

2020-08-05 Thread zilong xiao
Thanks Benchao for the answer. For Java users the Map container is also heavily used. A user currently needs to extract the value of a key from a JSON string; today we provide a getValueFromJson UDF, and every lookup deserializes the JSON into a Map and then calls get(key). Is there a way for a UDF to return a Map type directly, so we can avoid deserializing on every lookup? Benchao Li wrote on Wed, Aug 5, 2020 at 23:49: > Hi zilong, > > The SQL ARRAY type corresponds to the legacy types Types.PRIMITIVE_ARRAY or Types.OBJECT_ARRAY,
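The gain zilong is after is parsing once and then looking up many keys. The core of such a Map-returning UDF's eval can be sketched in plain Python as a stand-in for the Java/Jackson version in the thread (the function name and the empty-map fallback are my assumptions):

```python
import json

def json_to_map(param):
    """Parse a JSON object string into a dict once, so callers can
    look up many keys without re-deserializing."""
    if param is None:
        return {}
    try:
        value = json.loads(param)
    except ValueError:  # malformed JSON falls back to an empty map
        return {}
    # only a JSON object maps naturally to a SQL MAP type
    return value if isinstance(value, dict) else {}
```

As Benchao points out elsewhere in this digest, on Flink 1.10/1.11 the Java UDF must also override getResultType to declare the MAP type, otherwise the return type degrades to ANY.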

Re: Re: Re: Data reuse for FLINK SQL views

2020-08-05 Thread godfrey he
sql-client does not support it yet. As for statement set support in pure SQL text, the community has reached consensus on the syntax, so it should land gradually. kandy.wang wrote on Wed, Aug 5, 2020 at 22:43: > @godfrey > The StatementSet submission style you describe is not supported when submitting jobs via sql-client, right? Could that be added? > 在 2020-08-04 19:36:56, "godfrey he" wrote: >Call StatementSet#explain(), print the result, and check whether reuse fails because the Deduplicate digests differ

Re: Deploying pyflink on arm/centos7

2020-08-05 Thread 琴师
(Body garbled by the archive encoding.) -- Original message -- From: "user-zh"

Re: Deploying pyflink on arm/centos7

2020-08-05 Thread Xingbo Huang
Hi, definitely not; that is only the startup script for the Python shell. Remember that mvn compiles only the Java code, and none of pyflink's Python code is included. The pyflink packages your Python job imports would be unresolvable, so the job cannot run. If you want to try the Python shell, see the docs [1]. [1] https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/python_shell.html Best, Xingbo 琴师 <1129656...@qq.com> on Thu, Aug 6, 2020

Re: [DISCUSS] FLIP-133: Rework PyFlink Documentation

2020-08-05 Thread jincheng sun
Hi David, Thank you for sharing the problems with the current document, and I agree with you as I also got the same feedback from Chinese users. I am often contacted by users to ask questions such as whether PyFlink supports "Java UDF" and whether PyFlink supports "xxxConnector". The root cause of

Re: Deploying pyflink on arm/centos7

2020-08-05 Thread 琴师
Hi: I built flink-1.11.1-src.tgz with mvn clean package -DskipTests. Is flink-1.11.1/build-target/bin/pyflink-shell.sh the way to run pyflink? -- Original message -- From:

Re: Issue with single job yarn flink cluster HA

2020-08-05 Thread Ken Krugler
Hi Dinesh, Did updating to Flink 1.10 resolve the issue? Thanks, — Ken > Hi Andrey, > Sure We will try to use Flink 1.10 to see if HA issues we are facing is fixed > and update in this thread. > > Thanks, > Dinesh > > On Thu, Apr 2, 2020 at 3:22 PM Andrey Zagrebin

Flink Prometheus MetricsReporter Question

2020-08-05 Thread Avijit Saha
Hi, Have a general question about Flink support for Prometheus metrics. We already have a Prometheus setup in our cluster with ServiceMonitor-s monitoring ports like 8080 etc. for scraping metrics. In a setup like this, if we deploy Flink Job managers/Task managers in the cluster, is there any

Re: Flink CPU load metrics in K8s

2020-08-05 Thread Bajaj, Abhinav
Thanks Roman for providing the details. I have also made more observations that have increased my confusion about this topic. To simplify the calculations, I deployed a test cluster this time, giving each taskmanager container 1 CPU in K8s (with docker). When I check the taskmanager CPU load,

Re: Two Queries and a Kafka Topic

2020-08-05 Thread Marco Villalobos
Hi Theo, Thank you. I just read about the State Processor API in an effort to understand Option 1; though it seems I could just use a KeyedProcessFunction that loads the data once (maybe in the "open" method), serializes the values into MapState, and uses it from that point on. Another option

Re: Handle idle kafka source in Flink 1.9

2020-08-05 Thread bat man
Hello Arvid, Thanks for the suggestion/reference and my apologies for the late reply. With this I am able to process the data even when some topics have no regular data. Late data is handled in a side-output, which has its own processing. One challenge is to handle the back-fill as

Re: Status of a job when a kafka source dies

2020-08-05 Thread Piotr Nowojski
Hi Nick, Could you elaborate more on what event you mean and how you would like Flink to handle it? Is there some kind of Kafka API that can be used to listen for such events? Becket, do you maybe know something about this? As a side note, Nick, can't you configure some timeouts [1] in the

Re: Status of a job when a kafka source dies

2020-08-05 Thread Nick Bendtner
+user group. On Wed, Aug 5, 2020 at 12:57 PM Nick Bendtner wrote: > Thanks Piotr but shouldn't this event be handled by the FlinkKafkaConsumer > since the poll happens inside the FlinkKafkaConsumer. How can I catch this > event in my code since I don't have control over the poll. > > Best, >

Re: Status of a job when a kafka source dies

2020-08-05 Thread Piotr Nowojski
Hi Nick, What Aljoscha was trying to say is that Flink is not trying to do any magic. If `KafkaConsumer` - which is being used under the hood of `FlinkKafkaConsumer` connector - throws an exception, this exception bubbles up causing the job to failover. If the failure is handled by the

Re: Kafka transaction error lead to data loss under end to end exact-once

2020-08-05 Thread Piotr Nowojski
Hi Lu, In this case, as it looks from the quite fragmented log/error message that you posted, the job has failed so Flink indeed detected some issue and that probably means a data loss in Kafka (in such case you could probably recover some lost records by reading with `read_uncommitted` mode from

Re: Behavior for flink job running on K8S failed after restart strategy exhausted

2020-08-05 Thread Eleanore Jin
Hi Yang and Till, Thanks a lot for the help! I have the same question Till mentioned: if we do not fail Flink pods when the restart strategy is exhausted, it might be hard to monitor such failures. Today I get alerted if the k8s pods are restarted or in a crash loop, but if this will no longer

Re: UDF:Type is not supported: ANY

2020-08-05 Thread Benchao Li
Hi zilong, The SQL ARRAY type corresponds to the legacy types Types.PRIMITIVE_ARRAY or Types.OBJECT_ARRAY; any other type information is treated as the ANY type. This has nothing to do with generics; the implementation simply never mapped Types.LIST(Types.STRING) to the SQL ARRAY type. Support for List as ARRAY data will only arrive in 1.12 [1]. [1] https://issues.apache.org/jira/browse/FLINK-18417 zilong xiao

Re: Flink MySQL sink: sharding databases/tables by time

2020-08-05 Thread Leonard Xu
Hi, my understanding is that beyond specifying the table name, the key point is auto-creating the tables in the database. On the JDBC side there is a related issue I have been following [1], but the code has not been merged yet, so there is no good solution for now. The Es connector does support similar functionality; if the data can live in ES, you could use that. Best, Leonard [1] https://issues.apache.org/jira/browse/FLINK-16294 > On Aug 5, 2020, at 20:36, 张健 wrote: > > Hi all: > > I would like to ask whether

Re: Re: Re: Data reuse for FLINK SQL views

2020-08-05 Thread kandy.wang
@godfrey The StatementSet submission style you describe is not supported when submitting jobs via sql-client, right? Could that be added? 在 2020-08-04 19:36:56, "godfrey he" wrote: >Call StatementSet#explain(), print the result, and check whether reuse fails because the Deduplicate digests differ > >kandy.wang wrote on Tue, Aug 4, 2020 at 18:21: >> @godfrey

Flink MySQL sink: sharding databases/tables by time

2020-08-05 Thread 张健
Hi all: Is there any configuration option for the Flink MySQL sink to shard databases/tables by time? My pipeline is a simple ETL that writes the source data into MySQL for real-time business queries. The business only queries the current day's data (historical data is handled by a separate offline module), so I want each day's data written to its own table (named like xx_MMDD). But the Flink JDBC sink requires a fixed table name. Is there a simple way to implement this, or a better approach for this scenario? Thanks. -- 张健
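The xx_MMDD naming itself is trivial to compute in user code, for example in a custom JDBC sink that rebuilds its INSERT statement at day rollover. A small sketch of the naming scheme described above; the orders prefix is a made-up example:

```python
from datetime import date

def daily_table_name(prefix, day):
    """Build the per-day table name in the xx_MMDD scheme from the thread."""
    return "{}_{:02d}{:02d}".format(prefix, day.month, day.day)
```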

Re: Behavior for flink job running on K8S failed after restart strategy exhausted

2020-08-05 Thread Till Rohrmann
You are right Yang Wang. Thanks for creating this issue. Cheers, Till On Wed, Aug 5, 2020 at 1:33 PM Yang Wang wrote: > Actually, the application status shows in YARN web UI is not determined by > the jobmanager process exit code. > Instead, we use

Re: flink sql eos

2020-08-05 Thread Leonard Xu
Hi > Currently only Kafka implements TwoPhaseCommitSinkFunction, and the Kafka DDL has no property to set > the Semantic to EXACTLY_ONCE Besides Kafka, the filesystem connector also supports EXACTLY ONCE; for Kafka it is supported in 1.12 [1] > If global EXACTLY_ONCE is enabled and every connector in use supports EXACTLY_ONCE, can the whole > application achieve end-to-end exactly-once? Yes. Best, Leonard [1]
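For reference, the Kafka DDL property Leonard refers to landed in the Flink 1.12 Kafka SQL connector as 'sink.semantic'. A sketch with placeholder table and field names:

```sql
CREATE TABLE kafka_sink (
  id BIGINT,
  name STRING
) WITH (
  'connector' = 'kafka',
  'topic' = 'my-topic',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format' = 'json',
  -- requires Flink 1.12+; also raise the producer transaction.timeout.ms accordingly
  'sink.semantic' = 'exactly-once'
);
```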

Re: Behavior for flink job running on K8S failed after restart strategy exhausted

2020-08-05 Thread Yang Wang
Actually, the application status shows in YARN web UI is not determined by the jobmanager process exit code. Instead, we use "resourceManagerClient.unregisterApplicationMaster" to control the final status of YARN application. So although jobmanager exit with zero code, it still could show failed

Re: Two Queries and a Kafka Topic

2020-08-05 Thread Theo Diefenthal
Hi Marco, In general, I see three solutions here you could approach: 1. Use the StateProcessorAPI: You can run a program with the stateProcessorAPI that loads the data from JDBC and stores it into a Flink SavePoint. Afterwards, you start your streaming job from that savepoint which will load

Re: The bytecode of the class does not match the source code

2020-08-05 Thread Chesnay Schepler
Well of course these differ; on the left you have the decompiled bytecode, on the right the original source. If these were the same you wouldn't need source jars. On 05/08/2020 12:20, 魏子涵 wrote: I'm sure the two versions match up. Following is the pic comparing codes in IDEA

Re: The bytecode of the class does not match the source code

2020-08-05 Thread Jake
hi 魏子涵 The code IDEA decompiles does not match the Java source; you can download the Java sources in IDEA. /Volumes/work/maven_repository/org/apache/flink/flink-runtime_2.11/1.10.1/flink-runtime_2.11-1.10.1-sources.jar!/org/apache/flink/runtime/jobmaster/slotpool/SlotPoolImpl.java Jake > On Aug

Re: flink-1.11 (subject garbled in the archive)

2020-08-05 Thread kcz
(Body garbled by the archive encoding.) -- Original message -- From: "user-zh"

Re:Re: The bytecode of the class does not match the source code

2020-08-05 Thread 魏子涵
I'm sure the two versions match up. Following is the pic comparing codes in IDEA https://img-blog.csdnimg.cn/20200805180232929.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L0NTRE5yaG1t,size_16,color_FF,t_70 At 2020-08-05

Re: [DISCUSS] FLIP-133: Rework PyFlink Documentation

2020-08-05 Thread Wei Zhong
Hi Xingbo, Thanks for your information. I think the PySpark's documentation redesigning deserves our attention. It seems that the Spark community has also begun to treat the user experience of Python documentation more seriously. We can continue to pay attention to the discussion and

Re: flink1.10.1/1.11.1: state keeps growing after SQL group-by and time-window operations

2020-08-05 Thread Congxian Qiu
Hi, the RocksDB StateBackend only needs a small change in flink-conf [1]. Also, some details in your previous two mails puzzle me; could you paste the flink-conf you are using now, a screenshot of the checkpoint UI, and a screenshot of the checkpoint directory on HDFS? [1] https://ci.apache.org/projects/flink/flink-docs-release-1.11/zh/ops/state/state_backends.html#%E8%AE%BE%E7%BD%AE-state-backend Best,
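The flink-conf.yaml change Congxian mentions is on this order (values are illustrative; see the linked state_backends page for the full option list):

```yaml
state.backend: rocksdb
# enable incremental checkpoints so each checkpoint only uploads new SST files
state.backend.incremental: true
state.checkpoints.dir: hdfs:///flink/checkpoints
```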

Re: The bytecode of the class does not match the source code

2020-08-05 Thread Chesnay Schepler
Please make sure you have loaded the correct source jar, and aren't by chance still using the 1.11.0 source jar. On 05/08/2020 09:57, 魏子涵 wrote: Hi, everyone:       I found  the 【org.apache.flink.runtime.jobmaster.slotpool.SlotPoolImpl】 class in【flink-runtime_2.11-1.11.1.jar】does not match

Re: Re: Problem writing to Hive

2020-08-05 Thread air23
Hi, thanks. Removing the version number does indeed work. The version I used matches the installed Hive version, so I do not know what caused it. 在 2020-08-05 15:59:06, "wldd" wrote: >hi: >1. Check whether the Hive version configured in the hive catalog matches the Hive version you are actually using >2. You can also try leaving the Hive version unset when configuring the hive catalog >-- >Best, >wldd >在 2020-08-05 15:38:26, "air23" wrote: >>Hi

flink sql eos

2020-08-05 Thread sllence
Hi all, Is it true that Flink SQL currently has no way to enable global end-to-end exactly-once (eos)? At the moment only Kafka implements TwoPhaseCommitSinkFunction, and the Kafka DDL has no property to set the Semantic to EXACTLY_ONCE. Could we support more transactional connectors, allow enabling global end-to-end consistency at the Flink SQL level, and verify for each connector whether it supports EXACTLY_ONCE? If global EXACTLY_ONCE is enabled and all connectors in use support EXACTLY_ONCE, can the whole app

Re: Kafka transaction error lead to data loss under end to end exact-once

2020-08-05 Thread Khachatryan Roman
Hi Lu, AFAIK, it's not going to be fixed. As you mentioned in the first email, Kafka should be configured so that its transaction timeout is less than your max checkpoint duration. However, you should not only change transaction.timeout.ms in the producer but also transaction.max.timeout.ms on your
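Concretely, the two settings live in different places: the first is a producer property passed to the Flink Kafka producer, the second is a Kafka broker setting. The values below are illustrative (the broker-side default is 15 minutes):

```properties
# Producer side, in the Properties handed to the Flink Kafka producer:
# must cover the max checkpoint duration and stay <= the broker's transaction.max.timeout.ms
transaction.timeout.ms=900000

# Broker side, in server.properties:
transaction.max.timeout.ms=900000
```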

Re: flink1.10.1/1.11.1: state keeps growing after SQL group-by and time-window operations

2020-08-05 Thread op
Here is the TTL configuration: val settings = EnvironmentSettings.newInstance().inStreamingMode().build() val tableEnv = StreamTableEnvironment.create(bsEnv, settings) val tConfig = tableEnv.getConfig tConfig.setIdleStateRetentionTime(Time.minutes(1440), Time.minutes(1450))

Re: Problem writing to Hive

2020-08-05 Thread wldd
hi: 1. Check whether the Hive version configured in the hive catalog matches the Hive version you are actually using 2. You can also try leaving the Hive version unset when configuring the hive catalog -- Best, wldd 在 2020-08-05 15:38:26, "air23" wrote: >Hi >15:33:59,781 INFO org.apache.flink.table.catalog.hive.HiveCatalog > - Created HiveCatalog 'myhive1' >Exception in

The bytecode of the class does not match the source code

2020-08-05 Thread 魏子涵
Hi, everyone: I found that the org.apache.flink.runtime.jobmaster.slotpool.SlotPoolImpl class in flink-runtime_2.11-1.11.1.jar does not match the source code. Is it a problem we need to fix (if it is, what should we do)? Or just let it go?

Re: Behavior for flink job running on K8S failed after restart strategy exhausted

2020-08-05 Thread Till Rohrmann
Yes for the other deployments it is not a problem. A reason why people preferred non-zero exit codes in case of FAILED jobs is that this is easier to monitor than having to take a look at the actual job result. Moreover, in the YARN web UI the application shows as failed if I am not mistaken.

Problem writing to Hive

2020-08-05 Thread air23
Hi, 15:33:59,781 INFO org.apache.flink.table.catalog.hive.HiveCatalog - Created HiveCatalog 'myhive1' Exception in thread "main" org.apache.flink.table.catalog.exceptions.CatalogException: Failed to create Hive Metastore client at

Re: flink1.10.1/1.11.1: state keeps growing after SQL group-by and time-window operations

2020-08-05 Thread Congxian Qiu
Hi op, this is quite strange. Let me confirm: 1) Do all of your jobs see the checkpoint size growing, or only jobs of this type? 2) Have you tried the RocksDBStateBackend (both full and incremental)? How did it behave? Also, how are the rest of your TTL settings configured? In principle a checkpoint is just a snapshot of the state, so if checkpoints keep growing, the state keeps growing. Best, Congxian op <520075...@qq.com> wrote on Wed, Aug 5, 2020 at 14:46: > >

Re: Behavior for flink job running on K8S failed after restart strategy exhausted

2020-08-05 Thread Yang Wang
Hi Eleanore, Yes, I suggest to use Job to replace Deployment. It could be used to run jobmanager one time and finish after a successful/failed completion. However, using Job still could not solve your problem completely. Just as Till said, When a job exhausts the restart strategy, the jobmanager

Re: [DISCUSS] FLIP-133: Rework PyFlink Documentation

2020-08-05 Thread Xingbo Huang
Hi, I found that the spark community is also working on redesigning pyspark documentation[1] recently. Maybe we can compare the difference between our document structure and its document structure. [1] https://issues.apache.org/jira/browse/SPARK-31851

Re: flink1.10.1/1.11.1: state keeps growing after SQL group-by and time-window operations

2020-08-05 Thread op
(Body partly garbled by the archive encoding; the surviving fragments mention the FsStateBackend, checkpoints completing in about 300 ms, the 1440-minute TTL, and the checkpoint shared directory.)