Hi All
Does the current Flink release support setting checkpoint properties when using Flink SQL?
For example, the state backend choice, the checkpoint interval, and so on ...
Thanks,
Simon
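As far as I know, Flink 1.9 does not expose these settings through SQL itself; when the Table API is embedded in a Java program, they can be set on the underlying StreamExecutionEnvironment before the table environment is created. A minimal sketch, assuming the 1.9 API (the checkpoint interval, state backend path, and table names below are placeholder assumptions, not from this thread):

```java
// Sketch: configuring checkpointing for a job that runs Flink SQL (Flink 1.9 API).
// The interval, state backend path, and table names are placeholder values.
import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.java.StreamTableEnvironment;

public class CheckpointedSqlJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        env.enableCheckpointing(60_000L);  // checkpoint every 60 seconds
        env.setStateBackend(new FsStateBackend("file:///tmp/checkpoints"));

        // The table environment created from `env` inherits these settings,
        // so SQL submitted through it is checkpointed as configured above.
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
        tableEnv.sqlUpdate("INSERT INTO sinkTable SELECT * FROM sourceTable");
        env.execute("checkpointed-sql-job");
    }
}
```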
on a remote cluster from the IDE you need to first
build the jar containing your user code. This jar needs to be passed to
createRemoteEnvironment() so that the Flink client knows which jar to upload.
Hence, please make sure that /tmp/myudf.jar contains your user code.
Cheers,
Till
On Thu, O
Hi all
I want to test submitting a job from my local IDE, and I have deployed a Flink
cluster in my VM.
Here is my code, taken from the Flink 1.9 documentation with some of my own parameters added.
public static void main(String[] args) throws Exception {
ExecutionEnvironment env = ExecutionEnvironment
.createRemoteE
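For reference, a self-contained sketch of such a remote submission, using the /tmp/myudf.jar path mentioned in the reply above; the host, port, and the toy pipeline are placeholder assumptions:

```java
// Sketch: submitting a job from the IDE to a remote cluster (Flink 1.9 API).
// Host and port are placeholders; the jar must contain your user code,
// since the Flink client uploads it to the cluster on submission.
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.ExecutionEnvironment;

public class RemoteSubmitExample {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.createRemoteEnvironment(
                "192.168.0.10",      // JobManager host of the cluster in the VM
                8081,                // JobManager REST port
                "/tmp/myudf.jar");   // jar with the user code, uploaded on submit

        env.fromElements(1, 2, 3)
           .map(new MapFunction<Integer, Integer>() {
               @Override
               public Integer map(Integer i) {
                   return i * 2;    // trivial placeholder transformation
               }
           })
           .print();                // print() triggers execution on the cluster
    }
}
```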
Hi all
I’m trying to build the Flink 1.9 release branch, and it raises an error like:
Could not resolve dependencies for project
org.apache.flink:flink-s3-fs-hadoop:jar:1.9-SNAPSHOT: Could not find artifact
org.apache.flink:flink-fs-hadoop-shaded:jar:tests:1.9-SNAPSHOT in maven-ali
(http://maven.al
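A typical cause is that SNAPSHOT test-jar artifacts are not published to remote repositories, so they must come from the local Maven repository. A sketch of the usual workaround, assuming you build from the repository root so the reactor installs all modules locally first (branch name and flags are assumptions):

```shell
# Sketch: install all Flink modules (including flink-fs-hadoop-shaded and
# its test-jar) into the local ~/.m2 repository, so later builds resolve
# the 1.9-SNAPSHOT artifacts locally instead of from a remote mirror.
git checkout release-1.9
mvn clean install -DskipTests -Dcheckstyle.skip
```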
OK, Thanks Jark
Thanks,
Simon
On 08/13/2019 14:05, Jark Wu wrote:
Hi Simon,
This is a temporary workaround for 1.9 release. We will fix the behavior in
1.10, see FLINK-13461.
Regards,
Jark
On Tue, 13 Aug 2019 at 13:57, Simon Su wrote:
Hi Jark
Thanks for your reply.
It’s weird that
in the default catalog.
To create table in your custom catalog, you could use
tableEnv.sqlUpdate("create table ").
Thanks,
Xuefu
On Mon, Aug 12, 2019 at 6:17 PM Simon Su wrote:
> Hi Xuefu
>
> Thanks for your reply.
>
> Actually I have tried it as you advised. I have
stered catalog, you could call
tableEnv.useCatalog() and .useDatabase().
As an alternative, you could fully qualify your table name with a
"catalog.db.table" syntax without switching current catalog/database.
Please try those and let me know if you find new problems.
Thanks,
Xuefu
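The two alternatives above can be sketched as follows; only the catalog name "ca1" comes from this thread, while GenericInMemoryCatalog, the database/table names, and the connector properties are illustrative assumptions:

```java
// Sketch: the two ways to address tables in a custom catalog (Flink 1.9 Table API).
// Catalog name "ca1" is from this thread; everything else is a placeholder.
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.table.catalog.GenericInMemoryCatalog;

public class CatalogExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        tableEnv.registerCatalog("ca1", new GenericInMemoryCatalog("ca1"));

        // Option 1: switch the current catalog/database, then use short names.
        tableEnv.useCatalog("ca1");
        tableEnv.useDatabase("default");
        tableEnv.sqlUpdate(
            "CREATE TABLE t1 (id BIGINT) WITH (" +
            "  'connector.type' = 'filesystem'," +      // placeholder connector
            "  'connector.path' = 'file:///tmp/t1'," +
            "  'format.type' = 'csv')");

        // Option 2: keep the current catalog and fully qualify the name
        // with the "catalog.db.table" syntax.
        tableEnv.sqlUpdate("INSERT INTO ca1.`default`.t1 SELECT id FROM ca1.`default`.t1");
    }
}
```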
On Mon,
Hi All
I want to use a custom catalog named “ca1” and create a
database under this catalog. When I submit the
SQL, it raises an error like:
Exception in thread "main" org.apache.flink.table.api.ValidationException:
SQL validation failed. From line 1, column 98 to li
Hi Jingsong
Thanks for your reply.
It seems that wrapping fields is a feasible way for me for now, and there
is already another JIRA, FLINK-8921, that tries to improve this.
Thanks,
Simon
On 06/26/2019 19:21, JingsongLee wrote:
Hi Simon:
Does your code include the PR[1]?
If include: try set
Hi all,
Currently I am facing a problem caused by a long Flink SQL statement.
My SQL is like “insert into tableA select a, b, c …….from sourceTable”, and I
have more than 1000 columns in the select target. That’s the problem: Flink's code
generator will generate a RichMapFunction class that contains a map fu
, state is only carried forward if
you use a savepoint.
Can you check if that is what you are doing?
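For reference, a sketch of a stop-and-restore cycle with the Flink CLI as it worked in 1.8/1.9; the job id, savepoint directory, entry class, and jar path are all placeholders:

```shell
# Sketch: cancel a job while taking a savepoint, then restart from it.
# Job id, savepoint directory, entry class, and jar are placeholders.

# Cancel with a savepoint; the savepoint path is printed on success.
flink cancel -s /tmp/savepoints a5b6c7d8e9f0a1b2c3d4e5f6a7b8c9d0

# Resubmit the (possibly modified) job, restoring state from the savepoint.
# Restarting without -s starts from empty state instead.
flink run -s /tmp/savepoints/savepoint-a5b6c7-123456789abc \
          -c com.example.StateProcessJob /path/to/job.jar
```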
On Tue, Jun 25, 2019 at 2:21 PM Simon Su wrote:
Hi wanglei
Can you post how you restart the job?
Thanks,
Simon
On 06/25/2019 20:11, wangl...@geekplus.com.cn wrote:
public class StateProcessTest extends KeyedProcessFunction, String> {
private transient ValueState> state;
public void processElement(Tuple2 value, Context ctx,
Collector ou
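The generic type parameters in the snippet above were lost by the archive rendering; a self-contained sketch of the same shape, where the concrete types (String key, Tuple2&lt;String, Long&gt; input, String output) are guesses for illustration:

```java
// Sketch of a KeyedProcessFunction holding keyed ValueState (Flink 1.9 API).
// The concrete type parameters are assumptions; the original post's generics
// were stripped by the mailing-list archive.
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.TypeHint;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

public class StateProcessTest
        extends KeyedProcessFunction<String, Tuple2<String, Long>, String> {

    private transient ValueState<Tuple2<String, Long>> state;

    @Override
    public void open(Configuration parameters) {
        // State is registered in open(); it is restored automatically when
        // the job is resubmitted from a savepoint.
        ValueStateDescriptor<Tuple2<String, Long>> descriptor =
                new ValueStateDescriptor<>(
                        "last-seen",
                        TypeInformation.of(new TypeHint<Tuple2<String, Long>>() {}));
        state = getRuntimeContext().getState(descriptor);
    }

    @Override
    public void processElement(Tuple2<String, Long> value, Context ctx,
                               Collector<String> out) throws Exception {
        Tuple2<String, Long> previous = state.value();  // null on first element per key
        state.update(value);
        out.collect(value.f0 + ": " + (previous == null ? 0L : previous.f1));
    }
}
```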