Is the user submitting the job the same as the user you logged in with? I suspect that because the users differ, the metastore client is not picking up the correct credentials.
On Tue, Mar 10, 2020 at 6:42 PM 叶贤勋 wrote:
> Inside the doAs method it works. In my hive connector, all Hive operations that involve authentication now run inside doAs, which solves the authentication problem.
> The stacktrace mentioned earlier was printed with our company's own repackaged hive-exec jar, so it does not line up with the upstream source; I hit the same problem with the official hive-exec-2.1.1.jar as well.
>
>
> 叶贤勋
>
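The fix described above hinges on running privileged work inside doAs. Hadoop's UserGroupInformation.doAs wraps the call in a PrivilegedExceptionAction so the logged-in user's Kerberos credentials apply to everything executed inside it. A minimal stdlib-only sketch of that shape (DoAsSketch and its doAs helper are hypothetical stand-ins; the real Hadoop class also switches the security context):

```java
import java.security.PrivilegedExceptionAction;

public class DoAsSketch {
    // Hypothetical stand-in for UserGroupInformation.doAs: the real Hadoop
    // method attaches the login user's credentials to the wrapped call.
    static <T> T doAs(PrivilegedExceptionAction<T> action) throws Exception {
        // Hadoop would switch the security context here before running the action.
        return action.run();
    }

    public static void main(String[] args) throws Exception {
        // Metastore calls that need authentication go inside the action,
        // mirroring the hive connector fix discussed in this thread.
        String result = doAs(() -> "metastore-call-ok");
        System.out.println(result);
    }
}
```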
Hi community,
The Pravega connector provides both Batch and Streaming Table
API implementations. We use the descriptor API to build the Table source. When we
planned to upgrade to Flink 1.10, we found that the unit tests do not pass with our
existing Batch Table API. There is a type
Thanks Piotr and Yun for getting involved.
Hi Piotr and Yun, regarding the implementation:
FLINK-14254 [1] introduces the batch sink table world; it deals with partitions,
the metastore, and so on. And it just reuses the DataSet/DataStream
FileInputFormat and FileOutputFormat. Filesystem can not do without
Hi Sidney,
The WARN logging you saw was from the AbstractPartitionDiscoverer, which is
created by the FlinkKafkaConsumer itself. It has an internal consumer which
shares the client.id of the actual consumer fetching the data. This is a bug
that we should fix.
As Rong said, this won't affect the normal
For K8s deployments (including standalone session/per-job and the native
integration), Flink does
not support a port range for the REST options. You need to set
`rest.bind-port` to exactly the same value
as `rest.port`.
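For example, in flink-conf.yaml (a sketch; the port number is illustrative):

```yaml
# flink-conf.yaml: on K8s, pin the REST bind port to a single value
# that matches rest.port (8081 here is illustrative).
rest.port: 8081
rest.bind-port: 8081
```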
Hi LakeShen
I am trying to understand your problem and do not think it is about the
port
Dear community,
happy to share this week's community digest with an outlook on Apache Flink
1.11 & 1.10.1, an update on the recent development around Apache Flink
Stateful Functions, a couple of SQL FLIPs (planner hints, HBase catalog)
and a bit more.
Flink Development
==
*
We have also seen this issue before when running Flink apps in a shared cluster
environment.
Basically, Kafka is trying to register a JMX MBean [1] for application
monitoring.
This is only a WARN, suggesting that you are registering more than one MBean
with the same client id "consumer-1"; it should not
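The collision is plain JMX behavior: two MBeans cannot share one ObjectName, and Kafka derives that name from client.id. A self-contained sketch with the JDK's platform MBean server (the Dummy MBean and the exact ObjectName string are illustrative, not Kafka's real metric names):

```java
import java.lang.management.ManagementFactory;
import javax.management.InstanceAlreadyExistsException;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class MBeanCollision {
    // Minimal standard MBean: interface name must be <impl class> + "MBean".
    public interface DummyMBean { int getValue(); }
    public static class Dummy implements DummyMBean {
        public int getValue() { return 42; }
    }

    // Registers two MBeans under the same name, as two consumers sharing
    // client.id "consumer-1" would; returns true if the second one collides.
    static boolean demonstrateCollision() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name =
            new ObjectName("kafka.consumer:type=app-info,id=consumer-1");
        if (server.isRegistered(name)) {
            server.unregisterMBean(name); // clean slate for repeated runs
        }
        server.registerMBean(new Dummy(), name);
        try {
            server.registerMBean(new Dummy(), name); // same id, second consumer
            return false;
        } catch (InstanceAlreadyExistsException e) {
            return true; // this is the condition Kafka surfaces as a WARN
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("collision=" + demonstrateCollision());
    }
}
```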
Hey,
I've been using Flink for a while now without any problems when running apps
with a FlinkKafkaConsumer.
All my apps have the same overall logic (consume from Kafka -> transform event
-> write to file) and the only way they differ from each other is the topic
they read (remaining Kafka
Anyone?
On Wed, Mar 11, 2020 at 11:23 PM Yitzchak Lieberman <
yitzch...@sentinelone.com> wrote:
> Hi.
>
> Did anyone encounter a problem with sending metrics via the Datadog HTTP
> reporter?
> My setup is Flink version 1.8.2 deployed on k8s with 1 job manager and 10
> task managers.
> Every
Hi All,
I set up a transformation like this; my events in the stream have
sequential timestamps like 1, 2, 3, and I set the watermark to event time.
myStream
.keyBy(0)
.timeWindow(Time.of(1000, TimeUnit.MILLISECONDS))
.aggregate(new myAggregateFunction())
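With a 1000 ms tumbling window and no offset, an event at time ts is assigned to the window starting at ts - (ts % size), so timestamps 1, 2, 3 all land in the single window [0, 1000). A stdlib-only sketch of that assignment arithmetic (TumblingWindows and its helpers are illustrative, not Flink API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class TumblingWindows {
    static final long SIZE_MS = 1000L;

    // Tumbling-window start for an event timestamp, with zero offset:
    // the window [start, start + SIZE_MS) that contains ts.
    static long windowStart(long ts) {
        return ts - (ts % SIZE_MS);
    }

    // Group each timestamp under its window's start time.
    static Map<Long, List<Long>> assign(long[] timestamps) {
        Map<Long, List<Long>> windows = new TreeMap<>();
        for (long ts : timestamps) {
            windows.computeIfAbsent(windowStart(ts), k -> new ArrayList<>())
                   .add(ts);
        }
        return windows;
    }

    public static void main(String[] args) {
        // Timestamps 1, 2, 3 share one window keyed by start 0.
        System.out.println(assign(new long[]{1, 2, 3}));
    }
}
```

This is why the aggregate above fires once per key for those three events rather than three times: the window only closes when the event-time watermark passes 1000.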
Thanks Arvid and Kurt. That was a very helpful discussion.
For now we will continue with Lambda, but I'll definitely do an A-A
test between Lambda and Flink for this case.
Regards,
Jiawei
On Wed, Mar 11, 2020 at 5:40 PM Kurt Young wrote:
> > The second reason is this query need to scan the