Thanks a lot for the hard work, Till!
Shuyi
On Sat, Dec 1, 2018 at 4:07 AM Dominik Wosiński wrote:
> Thanks Till for being the release manager!
> Thanks Everyone and Great Job.
>
> Best Regards,
> Dom.
>
> On Fri, 30 Nov 2018 at 13:19, vino yang wrote:
>
> > Thanks Till for your great work, al
Jeff Zhang created FLINK-11060:
--
Summary: Unable to set the number of task managers and slots per task
manager in scala shell local mode
Key: FLINK-11060
URL: https://issues.apache.org/jira/browse/FLINK-11060
P
Hi Stephan:
I totally agree with you; this discussion covers too many topics, so we can split
it into the series of sub-discussions you proposed. First we can focus on
phase 1: “What Flink API Stack Should Be for a Unified Engine”.
Best,
Feng Wang
On Dec 3, 2018, at 19:36, Stephan Ewen
mailt
shuai.xu created FLINK-11059:
Summary: JobMaster may continue using an invalid slot if releasing an
idle slot meets a timeout
Key: FLINK-11059
URL: https://issues.apache.org/jira/browse/FLINK-11059
Project: Flink
cz created FLINK-11058:
--
Summary: FlinkKafkaProducer011 fails when a Kafka broker crashes
Key: FLINK-11058
URL: https://issues.apache.org/jira/browse/FLINK-11058
Project: Flink
Issue Type: Bug
Com
yuemeng created FLINK-11057:
---
Summary: WHERE IN grammar will cause stream inner join logical
Key: FLINK-11057
URL: https://issues.apache.org/jira/browse/FLINK-11057
Project: Flink
Issue Type: Bug
TisonKun created FLINK-11056:
Summary: Remove MesosApplicationMasterRunner
Key: FLINK-11056
URL: https://issues.apache.org/jira/browse/FLINK-11056
Project: Flink
Issue Type: Sub-task
Co
Galen Warren created FLINK-11055:
Summary: Allow Queryable State to be transformed on the
TaskManager before being returned to the client
Key: FLINK-11055
URL: https://issues.apache.org/jira/browse/FLINK-11055
Hi Till,
That is a good example. Just a minor correction: in this case, b, c, and d
will all consume from a non-cached a. This is because the cache will only be
created on the very first job submission that generates the table to be
cached.
If I understand correctly, this example is about whether .
Hi Addison,
Sorry for the late reply.
I agree that the documentation can be significantly improved
and that adding compression could be a nice thing to have.
There is already a PR open for supporting writing SequenceFiles with
the StreamingFileSink. When this gets merged, you will be able to use
Another argument for Piotr's point is that lazily changing properties of a
node affects all downstream consumers but does not necessarily have to
happen before these consumers are defined. From a user's perspective this
can be quite confusing:
b = a.map(...)
c = a.map(...)
a.cache()
d = a.map(...)
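The ordering issue described above can be sketched with a toy lazy-evaluation model in plain Python (hypothetical `Table` class, not Flink's API): because `cache()` mutates the shared node `a` in place and execution is deferred until job submission, consumers defined *before* the call are affected as well.

```python
# Toy model of lazy caching semantics (hypothetical classes, not Flink's API).
# cache() mutates the shared node in place, so at execution time every
# consumer reads the cached version, even those defined before the call.

class Table:
    def __init__(self, source=None, cached=False):
        self.source = source    # parent node in the lazily built plan
        self.cached = cached

    def map(self, fn):
        # Only records the dependency; nothing executes until submission.
        return Table(source=self)

    def cache(self):
        # In-place mutation of this plan node.
        self.cached = True
        return self

a = Table()
b = a.map(lambda x: x)
c = a.map(lambda x: x)
a.cache()
d = a.map(lambda x: x)

# All three consumers share the same (now cached) parent node,
# regardless of whether they were defined before or after cache().
assert all(t.source.cached for t in (b, c, d))
```

This is exactly why the call site of `cache()` can surprise users: the textual order of `b`, `c`, and `a.cache()` does not determine what `b` and `c` observe.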
Hi Giannis,
logically the resulting plans should be identical, meaning that they both
will use the custom partitioner to create the partitions and then co group
both inputs.
Physically, the latter plan adds an additional partition operator before
the coGroup operator. You can see this if you call
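The logical equivalence of the two plans can be sketched in plain Python (a toy model, not Flink's DataSet API): partitioning both inputs with the same custom partitioner before co-grouping does not change which records meet in a group.

```python
# Toy sketch (plain Python, not Flink) of why the two plans are logically
# equivalent: a deterministic partitioner sends equal keys of both inputs
# to the same partition, so partition-wise co-grouping yields the same
# groups as co-grouping the full inputs directly.

def custom_partitioner(key, num_partitions=4):
    # Stand-in for any deterministic custom partitioner.
    return hash(key) % num_partitions

def partition(records, num_partitions=4):
    parts = [[] for _ in range(num_partitions)]
    for key, value in records:
        parts[custom_partitioner(key, num_partitions)].append((key, value))
    return parts

def co_group(left, right):
    # Groups the records of both inputs by key, like DataSet.coGroup.
    keys = {k for k, _ in left} | {k for k, _ in right}
    return {k: ([v for lk, v in left if lk == k],
                [v for rk, v in right if rk == k])
            for k in keys}

left = [("a", 1), ("b", 2), ("a", 3)]
right = [("a", 10), ("c", 20)]

# Plan 1: co-group directly.
direct = co_group(left, right)

# Plan 2: partition both inputs first, then co-group partition-wise.
partitioned = {}
for lp, rp in zip(partition(left), partition(right)):
    partitioned.update(co_group(lp, rp))

assert direct == partitioned
```

Physically, of course, plan 2 pays for the extra partition operator, which is the difference visible in the execution plan.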
Fabian Hueske created FLINK-11054:
-
Summary: Ingest Long value as TIMESTAMP attribute
Key: FLINK-11054
URL: https://issues.apache.org/jira/browse/FLINK-11054
Project: Flink
Issue Type: Improvement
Hi all!
This is a great discussion to start and I agree with the idea behind it. We
should get started designing what the Flink stack should look like in the
future.
This discussion is very big, though, and from past experience, if the scope
is too big the discussions end up falling apart when e
Thanks, zhijiang.
For the optimization, such as cost-based estimation, we still want to keep it
in the data set layer,
but your suggestion is also a thought that can be considered.
As far as I know, these batch scenarios are currently covered by the DataSet
API, such as
the sort-merge join algorit
Hey Shaoxuan and Becket,
> Can you explain a bit more one what are the side effects? So far my
> understanding is that such side effects only exist if a table is mutable.
> Is that the case?
Not only that. There are also performance implications, and those are other
implicit side effects of usi
Hi Haibo,
Thank you for this great proposal!
Flink is a unified computing engine. It has been unified at the Table API
and SQL API levels (though not yet completely). It would be great if we
could unify the DataSet API and DataStream API.
I also want to convert to StreamTransformation in the SQL and Table API,
bec
Avi Levi created FLINK-11053:
Summary: Examples in documentation not compiling
Key: FLINK-11053
URL: https://issues.apache.org/jira/browse/FLINK-11053
Project: Flink
Issue Type: Bug
Com
Hi Wenhui,
Thanks for bringing the topics up. Both make sense to me. For higher-order
functions, I'd suggest you come up with a list of things you'd like to add.
Overall, Flink SQL is weak in handling complex types. Ideally we should have a
doc covering the gaps and provide a roadmap for enhanc
sunjincheng created FLINK-11052:
---
Summary: Add Bounded(Group Window) FlatAggregate operator to batch
Table API
Key: FLINK-11052
URL: https://issues.apache.org/jira/browse/FLINK-11052
Project: Flink
sunjincheng created FLINK-11051:
---
Summary: Add Bounded(Group Window) FlatAggregate operator to
streaming Table API
Key: FLINK-11051
URL: https://issues.apache.org/jira/browse/FLINK-11051
Project: Flink
Hi Haibo,
Thanks for bringing this discussion!
I reviewed the Google doc and really like the idea of unifying stream and
batch across all stacks. Currently only the network runtime stack is unified
for both stream and batch jobs, but the compilation, operator, and runtime
task stacks are all separa
Hello all,
Spark 2.4.0 was released last month. I noticed that Spark 2.4
“Add a lot of new built-in functions, including higher-order functions, to deal
with complex data types easier.”[1]
I wonder if it's necessary for Flink to add higher-order functions to enhance
its ability.
By the way, I
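For readers unfamiliar with the feature being referenced: the higher-order functions Spark 2.4 added (such as `transform` and `filter` over array columns) take a lambda and apply it element-wise. A rough sketch of their semantics in plain Python (not SQL, and not an actual Flink or Spark API):

```python
# Plain-Python sketch of the semantics of SQL higher-order functions
# like those added in Spark 2.4; function names here are illustrative.

def sql_transform(arr, fn):
    # SQL: transform(arr, x -> fn(x)) -- maps fn over an array column.
    return [fn(x) for x in arr]

def sql_filter(arr, pred):
    # SQL: filter(arr, x -> pred(x)) -- keeps elements matching the predicate.
    return [x for x in arr if pred(x)]

row = [1, 2, 3, 4]
assert sql_transform(row, lambda x: x + 1) == [2, 3, 4, 5]
assert sql_filter(row, lambda x: x % 2 == 0) == [2, 4]
```

Without such functions, the same logic typically requires exploding the array, processing the rows, and re-aggregating, which is both verbose and slower.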