yuemeng created FLINK-4827:
--
Summary: Sql on streaming example use scala with wrong variable
name
Key: FLINK-4827
URL: https://issues.apache.org/jira/browse/FLINK-4827
Project: Flink
Issue Type: Bug
Vijay Srinivasaraghavan created FLINK-4826:
--
Summary: Add keytab based kerberos support to run Flink in Mesos
environment
Key: FLINK-4826
URL: https://issues.apache.org/jira/browse/FLINK-4826
The chaining code is definitely related, I also have a pretty clear idea
how to fix it.
The odd thing is that the Java API doesn't catch this type mismatch; the
data types are
known when the plan is generated. This kind of error shouldn't even happen.
On 13.10.2016 21:15, Geoffrey Mon wrote:
Thank you very much. Disabling chaining with the Python API allows my
actual script to run properly. The division by zero must be an issue with
the job that I posted on gist.
Does that mean that the issue must be in the chaining part of the API?
Chaining, from the way I understand it, is an importan
Okay, this sounds prudent. Would this be the right time to implement
FLINK-2268 "Provide Flink binary release without Hadoop"?
On Thu, Oct 13, 2016 at 11:25 AM, Stephan Ewen wrote:
> +1 for dropping Hadoop1 support
>
> @greg There is quite some complexity in the build setup and release scripts
>
+1 for dropping Hadoop1 support
@greg There is quite some complexity in the build setup and release scripts
and testing to support Hadoop 1. Also, we have to prepare to add support
for Hadoop 3, and then supporting in addition Hadoop 1 seems very tough.
Stephan
On Thu, Oct 13, 2016 at 5:04 PM,
+1 to dropping Hadoop 1.x
I am fairly certain there are very few legacy Hadoop users. 2.x is heavily
used at the moment.
Spark actually changed not just Hadoop but Python versions as well.
Hadoop 3 would take a while to mature, so I would suggest holding off on
that until it is well baked in and us
The Flink PMC is pleased to announce the availability of Flink 1.1.3.
The official release announcement:
https://flink.apache.org/news/2016/10/12/release-1.1.3.html
Release binaries:
http://apache.lauf-forum.at/flink/flink-1.1.3/
Please update your Maven dependencies to the new 1.1.3 version and
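For example, a typical Maven dependency entry for the 1.1.3 release would look as follows (shown here with the batch Java API module as an illustration; the artifactId varies per module, e.g. flink-streaming-java_2.10 for the streaming API):

```xml
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-java</artifactId>
  <version>1.1.3</version>
</dependency>
```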
Hi Robert,
What are the benefits to Flink for dropping Hadoop 1 support? Is there
significant code cleanup or would we simply be publishing one less set of
artifacts?
Greg
On Thu, Oct 13, 2016 at 10:47 AM, Robert Metzger
wrote:
> Hi,
>
> The Apache Hadoop community has recently released the fi
I totally agree with Robert. From an industry point of view, none of our
clients are using Hadoop 1.x. Even in legacy systems, the software has
already been upgraded.
From: Robert Metzger [mailto:rmetz...@apache.org]
Sent: Thursday, 13 October 2016 16:48
To: dev@flink.apache.org; u...@fl
Hi,
The Apache Hadoop community has recently released the first alpha version
for Hadoop 3.0.0, while we are still supporting Hadoop 1. I think it's time
to finally drop Hadoop 1 support in Flink.
The last minor Hadoop 1 release was on 27 June 2014.
Apache Spark dropped Hadoop 1 support with thei
Timo Walther created FLINK-4825:
---
Summary: Implement a RexExecutor that uses Flink's code generation
Key: FLINK-4825
URL: https://issues.apache.org/jira/browse/FLINK-4825
Project: Flink
Issue T
A temporary workaround appears to be disabling chaining, which you can
do by commenting out L215 "self._find_chains()" in Environment.py.
Note that you then run into a division by zero error, but I can't tell
whether that is a problem of the job or not.
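For readers unfamiliar with chaining: it fuses consecutive one-to-one operators into a single task, so records are passed between user functions by direct calls instead of being serialized and shipped between tasks. A purely illustrative Python sketch of the idea (not Flink API code; names are made up):

```python
def chain(*operators):
    """Compose operators into one callable, mimicking a chained task:
    each record flows through all fused operators in a single call."""
    def chained(record):
        for op in operators:
            record = op(record)
        return record
    return chained

# Two "operators" that would otherwise run as separate tasks
# with serialization in between:
parse = lambda s: int(s)
double = lambda x: x * 2

task = chain(parse, double)
print(task("21"))  # -> 42
```

A bug in the chain-building step can therefore surface as a type mismatch at runtime, because fused operators exchange raw objects that were never checked against the planned serializers.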
On 13.10.2016 13:41, Chesnay Schepler wr
Robert Metzger created FLINK-4824:
-
Summary: CliFrontend shows misleading error message when main()
method returns before env.execute()
Key: FLINK-4824
URL: https://issues.apache.org/jira/browse/FLINK-4824
Hey Geoffrey,
I was able to reproduce the error and will look into it in more detail
tomorrow.
Regards,
Chesnay
On 12.10.2016 23:09, Geoffrey Mon wrote:
Hello,
Has anyone had a chance to look into this? I am currently working on the
problem but I have minimal understanding of how the intern
Sajeev Ramakrishnan created FLINK-4823:
--
Summary: org.apache.flink.types.NullFieldException: Field 0 is
null, but expected to hold a value
Key: FLINK-4823
URL: https://issues.apache.org/jira/browse/FLINK-4823
Hi Fabian:
What is the strategy for new syntax that Calcite does not support? Will
Calcite add support for it? For example, the row window syntax.
Thank you very much!
-----Original Message-----
From: Fabian Hueske [mailto:fhue...@gmail.com]
Sent: 13 October 2016 18:17
To: dev@flink.apache.org
Cc: Sean
Hi Juho,
Yes, FLINK-4557 is an umbrella issue for windowed aggregations (both for
stream and batch) for the Table API (not including SQL).
The features are described in FLIP-11 [1]. Note, there is currently a
discussion about certain aspects of the proposed syntax [2].
There is a pull request for
Robert Metzger created FLINK-4822:
-
Summary: Ensure that the Kafka 0.8 connector is compatible with
kafka-consumer-groups.sh
Key: FLINK-4822
URL: https://issues.apache.org/jira/browse/FLINK-4822
Proje
Hi Zhangrucong,
yes, we want to use Calcite's SQL parser including its window syntax, i.e.,
- the standard SQL OVER windows (in streaming with a few restrictions, such
as no differing partitionings or orders)
- the GroupBy window functions (TUMBLE, HOP, SESSION).
The GroupBy window functions are no
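For illustration, the two flavors mentioned above might look as follows in Calcite's grammar (a sketch of the proposed syntax, not yet supported by Flink at the time of this thread; table and column names are made up):

```sql
-- OVER window: a running aggregate emitted per input row
SELECT user_id,
       SUM(amount) OVER (PARTITION BY user_id ORDER BY rowtime
                         ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS running_sum
FROM payments;

-- GroupBy window function: one result row per tumbling window
SELECT user_id,
       TUMBLE_END(rowtime, INTERVAL '1' HOUR) AS w_end,
       SUM(amount) AS total
FROM payments
GROUP BY TUMBLE(rowtime, INTERVAL '1' HOUR), user_id;
```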
Hi everybody,
happy to see a good discussion here :-)
I'll reply to Shaoxuan's mail first and comment on Zhangrucong question in
a separate mail.
Shaoxuan, thanks for the suggestions! I think we all agree that for SQL we
should definitely follow the standard (batch) SQL syntax.
In my opinion, the
Hi shaoxuan:
I think the StreamSQL must be executed in a table environment, so I call
this the Table API's StreamSQL. What do you call this: stream Table API or
StreamSQL? It is fu
val env = StreamExecutionEnvironment.getExecutionEnvironment
val tblEnv = TableEnvironment.getTableEnvironment(env)
Hi zhangrucong,
I am not sure what you mean by "table API'S StreamSQL", I guess you mean
"stream TableAPI"?
The Table API should be compatible with Calcite SQL. (By compatible, my
understanding is that both the Table API and SQL will be translated to the
same logical plan, i.e., the same set of REL and REX nodes.)
BTW
Hi Fabian!
Is this the feature that will also add windowed aggregates to streaming SQL:
https://issues.apache.org/jira/browse/FLINK-4557 (Table API Stream
Aggregations)?
You wrote:
> However for the 1.2 release, we plan to focus on the streaming
> Table API and Stream SQL to add support for w
Tzu-Li (Gordon) Tai created FLINK-4821:
--
Summary: Implement rescalable non-partitioned state for Kinesis
Connector
Key: FLINK-4821
URL: https://issues.apache.org/jira/browse/FLINK-4821
Project: F