Let me try this out on my standalone Hive. I remember reading something
similar on SO[1]. In this case, it was an external ORC generated by Spark
and an external table was created using CDH. The OP answered by referring
to a community post[2] on Cloudera; it may be worth checking.
[1]
-restored-state
-
Sivaprasanna
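For anyone who wants to reproduce that setup, the shape of it is an external Hive table over Spark-written ORC files; a hypothetical sketch (table name, columns, and location are made up for illustration):

```sql
-- Hypothetical external Hive table over ORC files written by Spark.
-- Table name, columns, and location are illustrative only.
CREATE EXTERNAL TABLE sensor_events (
  id STRING,
  temperature DOUBLE
)
STORED AS ORC
LOCATION '/warehouse/external/sensor_events';
```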
On Fri, Oct 23, 2020 at 10:48 AM Partha Mishra
wrote:
> Hi,
>
>
>
> We are trying to save checkpoints for one of the Flink jobs running on
> Flink version 1.9 and tried to resume the same job in Flink version
> 1.11.2. We are getting the be
checkpoint (let's say
chk-123). The new job is running fine and has made further checkpoints. Can
we delete chk-123?
Thanks,
Sivaprasanna
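For context, Flink itself manages how many completed checkpoints it keeps around; a minimal flink-conf.yaml sketch (path and values are illustrative):

```yaml
# Where checkpoint data is stored (illustrative path)
state.checkpoints.dir: hdfs:///flink/checkpoints
# Number of completed checkpoints to retain (default is 1)
state.checkpoints.num-retained: 3
```

One caveat, as far as I know: with incremental RocksDB checkpoints, newer checkpoints may reference files from older ones, so deleting old chk-* directories by hand can break later restores; where possible, let Flink's retention do the cleanup.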
gain : )
Sivaprasanna
On Fri, Jul 24, 2020 at 9:12 AM Congxian Qiu wrote:
> Hi Sivaprasanna
> I think state schema evolution can work for incremental checkpoints. And
> I tried with a simple POJO schema; it also works. Maybe you need to check
> the schema. From the exception stack, t
with state schema
changes?
BTW, I'm running it on Flink 1.10. I forgot to update it in the original
thread.
Thanks,
Sivaprasanna
On Thu, Jul 23, 2020 at 7:52 PM David Anderson
wrote:
> I believe this should work, with a couple of caveats:
>
> - You can't do this with unaligned checkpoints
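For what it's worth, the kind of change Flink's POJO schema evolution is meant to cover is adding or removing a field; a hypothetical sketch (class and field names are made up):

```java
// Hypothetical POJO used as Flink keyed state. Flink's POJO serializer
// supports schema evolution for added/removed fields, provided the class
// keeps a public no-arg constructor and public fields (or getters/setters).
public class SensorReading {
    public String sensorId;
    public double temperature;

    // Field added in the new job version; state restored from an older
    // checkpoint will see the Java default (null) for it.
    public String unit;

    public SensorReading() {} // required by Flink's POJO serializer
}
```

Renaming a field or changing its declared type is not covered by POJO evolution, so a restore would fail in those cases.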
Adding dev@ to get some traction. Any help would be greatly appreciated.
Thanks.
On Thu, Jul 23, 2020 at 11:48 AM Sivaprasanna
wrote:
> +user-zh@flink.apache.org
>
> A follow-up question. I tried taking a savepoint but the job failed
> immediately. It happens every time I take
missing some configurations.
Can you folks please help me with this?
[image: Screenshot 2020-07-23 at 10.34.29 AM.png]
On Wed, Jul 22, 2020 at 7:32 PM Sivaprasanna
wrote:
> Hi,
>
> We are trying out state schema migration for one of our stateful
> pipelines. We use a few Avro
the latest checkpoint available when we are dealing with state
schema changes?
Complete stacktrace is attached with this mail.
-
Sivaprasanna
job_manager.log
on the
organization data.
Cheers,
Sivaprasanna
On Thu, May 14, 2020 at 12:13 PM Jingsong Li wrote:
> Hi, Dhurandar,
>
> Can you describe your needs? Why do you need to modify file names
> flexibly? What kind of name do you want?
>
> Best,
> Jingsong Lee
>
> On Thu, May 14,
It is working as expected. If I'm right, the print operator will simply
call `.toString()` on the input element. If you want to visualize your
payload in JSON format, override `toString()` in the `SensorData` class
with code that forms your payload as a JSON representation using
ObjectMapper or
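A minimal sketch of that idea (class and fields are made up; the JSON here is hand-rolled so the example has no dependencies, but with Jackson on the classpath you would use ObjectMapper as suggested):

```java
// Hypothetical POJO; DataStream#print() simply calls toString() on each
// element, so overriding toString() controls what the print sink shows.
public class SensorData {
    public final String id;
    public final double temperature;

    public SensorData(String id, double temperature) {
        this.id = id;
        this.temperature = temperature;
    }

    @Override
    public String toString() {
        // Hand-rolled JSON for illustration; with Jackson this would be
        // roughly: new ObjectMapper().writeValueAsString(this)
        return "{\"id\":\"" + id + "\",\"temperature\":" + temperature + "}";
    }
}
```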
ube.com/playlist?list=PLDX4T_cnKjD0ngnBSU-bYGfgVv17MiwA7
>
> On Fri, Apr 24, 2020 at 3:23 PM Sivaprasanna
> wrote:
>
>> Cool. Thanks for the information.
>>
>> On Fri, 24 Apr 2020 at 11:20 AM, Marta Paes Moreira
>> wrote:
>>
>>> Hi, Sivaprasanna.
Cool. Thanks for the information.
On Fri, 24 Apr 2020 at 11:20 AM, Marta Paes Moreira
wrote:
> Hi, Sivaprasanna.
>
> The talks will be up on Youtube sometime after the conference ends.
>
> Today, the starting schedule is different (9AM CEST / 12:30PM IST / 3PM
> CST) and more
yet).
Where can I find the recorded sessions?
Thanks,
Sivaprasanna
I agree with Leonard. I have just tried the same in Scala 2.11 with Flink
1.10.0 and it works just fine.
Cheers,
Sivaprasanna
On Tue, Apr 21, 2020 at 12:53 PM Leonard Xu wrote:
> Hi, Averell
>
> I guess it’s none of `#withRollingPolicy` and `#withBucketAssigner` and
> may cause
Hi Averell,
Can you please share the complete stacktrace of the error?
On Mon, Apr 20, 2020 at 4:48 PM Averell wrote:
> Hi,
>
> I have the following code:
> StreamingFileSink
> .forRowFormat(new Path(path), myEncoder)
>
Hi,
To subscribe, you have to send a mail to user-subscr...@flink.apache.org
On Wed, 15 Apr 2020 at 7:33 AM, lamber-ken wrote:
> user@flink.apache.org
>
Ideally, if the underlying cluster where the job is deployed changes
(1.8.x to 1.10.x), it is better to update your project dependencies to the
new version (1.10.x) and recompile the jobs.
On Tue, Apr 14, 2020 at 3:29 PM Chesnay Schepler wrote:
> @Robert Why would he
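Concretely, that usually amounts to bumping the Flink version property in the job's pom.xml and rebuilding (the property name is a common convention, not mandated):

```xml
<properties>
  <!-- was 1.8.x; bump and recompile the job -->
  <flink.version>1.10.0</flink.version>
</properties>
```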
Hi Vitaliy,
Check for "flink-shaded-hadoop-2". It is published in builds for various
Hadoop versions.
https://search.maven.org/artifact/org.apache.flink/flink-shaded-hadoop-2
On Mon, Mar 30, 2020 at 10:13 PM Vitaliy Semochkin
wrote:
> Hi,
>
> I can not find flink-shaded-hadoop2 for flink 1.10 in
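For reference, the coordinates from that search page look roughly like this (check the page for the exact version matching your Hadoop distribution; the one below is illustrative):

```xml
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-shaded-hadoop-2</artifactId>
  <version>2.8.3-10.0</version>
</dependency>
```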
I think you can modify the operator's parallelism. It is only the
maxParallelism, if you have set it, that you shouldn't modify while
restoring from a checkpoint. Otherwise, I believe the state will be lost.
-
Sivaprasanna
On Fri, 13 Mar 2020 at 9:01 AM, LakeShen wrote:
>
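The reason maxParallelism must stay fixed, as I understand it, is Flink's key-group mechanism: keyed state is hashed into maxParallelism key groups, and each group maps to an operator subtask as keyGroup * parallelism / maxParallelism. Changing parallelism only re-slices the ranges, while changing maxParallelism re-buckets the keys themselves. A sketch mirroring that mapping (from memory, not Flink's actual class):

```java
// Sketch of the key-group -> operator-subtask mapping that (as I recall)
// Flink's KeyGroupRangeAssignment uses. State is stored per key group, so
// re-slicing ranges (new parallelism) is safe, but changing the number of
// key groups (maxParallelism) invalidates the stored assignment.
public class KeyGroupSketch {
    public static int operatorIndexForKeyGroup(int maxParallelism, int parallelism, int keyGroup) {
        return keyGroup * parallelism / maxParallelism;
    }
}
```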