tyler fan created FLINK-15218:
------------------------------
Summary: java.lang.NoClassDefFoundError:
org/apache/flink/table/sources/TableSource
Key: FLINK-15218
URL: https://issues.apache.org/jira/browse/FLINK-15218
Project: Flink
xiaojin.wy created FLINK-15217:
------------------------------
Summary: 'java.time.LocalDate' should be supported by the CSV input
format.
Key: FLINK-15217
URL: https://issues.apache.org/jira/browse/FLINK-15217
Project: Flink
Thanks Hequn for being the release manager. Great work!
Best,
Wei
> On Dec 12, 2019, at 15:27, Jingsong Li wrote:
>
> Thanks Hequn for driving this release. 1.8.3 fixed a lot of issues and is very
> useful to users.
> Great work!
>
> Best,
> Jingsong Lee
>
> On Thu, Dec 12, 2019 at 3:25 PM jincheng sun wrote:
Thanks Hequn for driving this release. 1.8.3 fixed a lot of issues and is very
useful to users.
Great work!
Best,
Jingsong Lee
On Thu, Dec 12, 2019 at 3:25 PM jincheng sun wrote:
> Thanks for being the release manager and for the great work, Hequn :)
> Also thanks to the community making this release possible!
Thanks for being the release manager and for the great work, Hequn :)
Also thanks to the community making this release possible!
Best,
Jincheng
On Thu, Dec 12, 2019 at 3:23 PM Jark Wu wrote:
> Thanks Hequn for helping out with this release and being the release manager.
> Great work!
>
> Best,
> Jark
>
> On Thu, 12 Dec 2019 at 15:02, Jeff Zhang wrote:
Thanks Hequn for helping out with this release and being the release manager.
Great work!
Best,
Jark
On Thu, 12 Dec 2019 at 15:02, Jeff Zhang wrote:
> Great work, Hequn
>
> On Thu, Dec 12, 2019 at 2:32 PM Dian Fu wrote:
>
>> Thanks Hequn for being the release manager and everyone who contributed
>> to this release.
Tank created FLINK-15216:
Summary: Can't use rocksdb with hdfs filesystem with
flink-s3-fs-hadoop
Key: FLINK-15216
URL: https://issues.apache.org/jira/browse/FLINK-15216
Project: Flink
Issue Type: Bug
Hi Patrick,
The release has been announced.
+1 to integrating the publication of Docker images into the Flink release
process, so that we can leverage the current release procedure for the
Docker images.
Looking forward to the proposal.
Best, Hequn
On Thu, Dec 12, 2019 at 1:52 PM Yang Wang wrote:
>
Arjun Prakash created FLINK-15215:
------------------------------
Summary: Not able to provide a custom AWS credentials provider
with flink-s3-fs-hadoop
Key: FLINK-15215
URL: https://issues.apache.org/jira/browse/FLINK-15215
Project: Flink
Great work, Hequn
On Thu, Dec 12, 2019 at 2:32 PM Dian Fu wrote:
> Thanks Hequn for being the release manager and everyone who contributed to
> this release.
>
> Regards,
> Dian
>
> On Dec 12, 2019, at 2:24 PM, Hequn Cheng wrote:
>
> Hi,
>
> The Apache Flink community is very happy to announce the release of Apache
> Flink 1.8.3, which is the third bugfix release for the Apache Flink 1.8 series.
Thanks Hequn for being the release manager and everyone who contributed to this
release.
Regards,
Dian
> On Dec 12, 2019, at 2:24 PM, Hequn Cheng wrote:
>
> Hi,
>
> The Apache Flink community is very happy to announce the release of Apache
> Flink 1.8.3, which is the third bugfix release for the Apache Flink 1.8 series.
Yangze Guo created FLINK-15214:
------------------------------
Summary: Adding multiple submission e2e test for Flink's Mesos
integration
Key: FLINK-15214
URL: https://issues.apache.org/jira/browse/FLINK-15214
Project: Flink
Hi,
The Apache Flink community is very happy to announce the release of Apache
Flink 1.8.3, which is the third bugfix release for the Apache Flink 1.8
series.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.
Zhenghua Gao created FLINK-15213:
Summary: The conversion between java.sql.Timestamp and long is not
symmetric
Key: FLINK-15213
URL: https://issues.apache.org/jira/browse/FLINK-15213
Project: Flink
Hi Lucas,
It would be great if we could integrate the publication of the official Flink
Docker images into the Flink release process, since many users are using or
starting to use Flink in container environments.
Best,
Yang
On Wed, Dec 11, 2019 at 11:44 PM Patrick Lucas wrote:
> Thanks, Hequn!
>
> The Dockerfiles for the Flink images on Docker Hub for the 1.8.3 release
Hi Becket,
I also have some performance concerns.
If I understand correctly, SourceOutput will emit data per record into the
queue? I'm worried about the multithreading performance of this queue.
> One example is some batched messaging systems which only have an offset
for the entire batch i
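For illustration, a minimal sketch of the per-record hand-over pattern this
concern is about; the class and method names are hypothetical, not from the
FLIP:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Hypothetical sketch: a fetcher (I/O) thread hands every single record
    // to the task thread through a bounded queue, so each record pays one
    // lock/wakeup round-trip, the multithreading cost questioned above.
    public class PerRecordHandover<T> {
        private final BlockingQueue<T> queue = new ArrayBlockingQueue<>(1024);

        // Called by the fetcher thread once per record.
        public void produce(T record) throws InterruptedException {
            queue.put(record);
        }

        // Called by the task thread once per record.
        public T consume() throws InterruptedException {
            return queue.take();
        }
    }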
Thanks Jingsong for bringing up this discussion ~
After reviewing FLIP-63, it seems that we have reached a conclusion on the syntax:
- INSERT OVERWRITE ...
- INSERT INTO … PARTITION
This means that they should not have the Hive dialect limitation, so I'm
inclined to think that the behavior for SQL-CLI is un
Rockey Cui created FLINK-15212:
------------------------------
Summary: PROCTIME attribute causes problems with timestamp times
before 1900?
Key: FLINK-15212
URL: https://issues.apache.org/jira/browse/FLINK-15212
Project: Flink
Hi Jark,
> The dialect restriction is introduced on purpose, because OVERWRITE and
PARTITION syntax are not SQL standard.
My understanding is that watermark [1] is also non-standard grammar. We
can extend the SQL standard syntax.
> Even in the discussion of FLIP-63, the community have different op
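For reference, the watermark syntax mentioned above looks roughly like the
following in 1.10-era DDL; this is a sketch with made-up table, column, and
connector properties:

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class WatermarkDdlExample {
        public static void main(String[] args) {
            TableEnvironment tEnv = TableEnvironment.create(
                    EnvironmentSettings.newInstance()
                            .useBlinkPlanner().inStreamingMode().build());
            // WATERMARK FOR ... extends standard SQL DDL, just as
            // INSERT OVERWRITE / PARTITION extend standard DML.
            tEnv.sqlUpdate(
                    "CREATE TABLE orders ("
                            + "  user_id BIGINT,"
                            + "  ts TIMESTAMP(3),"
                            + "  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND"
                            + ") WITH ("
                            // Connector properties shortened for the sketch.
                            + "  'connector.type' = 'kafka'"
                            + ")");
        }
    }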
+1 to fix it in 1.10. If this feature doesn't work via SQL CLI, I guess it
doesn't work for most Hive users.
+0 to remove the dialect check. I don't see much benefit this check can
bring to users, except that it prevents users from accidentally using some
Hive features, which doesn't seem to be ver
fa zheng created FLINK-15211:
Summary: Web UI request URL for watermarks is too long under large
parallelism
Key: FLINK-15211
URL: https://issues.apache.org/jira/browse/FLINK-15211
Project: Flink
Danny Chen created FLINK-15210:
------------------------------
Summary: Move Java files in the flink-sql-parser module from package
org.apache.calcite.sql to org.apache.flink.sql.parser.type
Key: FLINK-15210
URL: https://issues.apache.org/jira/browse/FLINK-15210
Thanks for the feedback, Gary.
Regarding the WordCount test:
- True. There is no test coverage increment compared to the others.
However, I think it is better that each test case does not serve multiple
purposes, so that we can find the root cause quickly. As discussed in
FLINK-15135[1], I prefer only including W
Danny Chen created FLINK-15209:
------------------------------
Summary: DDL with computed column didn't work for some of the
connectors
Key: FLINK-15209
URL: https://issues.apache.org/jira/browse/FLINK-15209
Project: Flink
Hi Becket,
I think Dawid explained things clearly, and it makes a lot of sense.
I'm also in favor of #2, because #1 doesn't work for our future unified
environment.
You can see the vision in this documentation [1]. In the future, we would
like to
drop the global streaming/batch mode in SQL (i.e.
Envi
Bowen Li created FLINK-15208:
Summary: support client to submit both an online streaming job and
an offline batch job based on dynamic catalog table
Key: FLINK-15208
URL: https://issues.apache.org/jira/browse/FLINK-15208
Zili Chen created FLINK-15207:
------------------------------
Summary: japicmp reference version is stale
Key: FLINK-15207
URL: https://issues.apache.org/jira/browse/FLINK-15207
Project: Flink
Issue Type: Bug
Compon
Bowen Li created FLINK-15206:
Summary: support dynamic catalog table for unified SQL job
Key: FLINK-15206
URL: https://issues.apache.org/jira/browse/FLINK-15206
Project: Flink
Issue Type: New Feature
Thanks Jingsong,
OVERWRITE and PARTITION are very fundamental features for Hive users.
I'm sorry to hear that they don't work in the SQL CLI.
> Remove hive dialect limitation for these two grammars?
The dialect restriction is introduced on purpose, because OVERWRITE and
PARTITION syntax
are not SQL standard.
Bowen Li created FLINK-15205:
Summary: add doc and example of INSERT OVERWRITE and INSERT INTO
partitioned table for Hive connector
Key: FLINK-15205
URL: https://issues.apache.org/jira/browse/FLINK-15205
Bowen Li created FLINK-15204:
Summary: add documentation for Flink-Hive timestamp conversions in
tables and UDFs
Key: FLINK-15204
URL: https://issues.apache.org/jira/browse/FLINK-15204
Project: Flink
Bowen Li created FLINK-15203:
Summary: rephrase Hive's data types doc
Key: FLINK-15203
URL: https://issues.apache.org/jira/browse/FLINK-15203
Project: Flink
Issue Type: Task
Components:
Hi Jingsong,
Thanks a lot for reporting this issue.
IIRC, we added [INSERT OVERWRITE] and [PARTITION] clauses to support Hive
integration before FLIP-63 was proposed to introduce generic partition
support to Flink. Thus when we added this syntax, we were intentionally
conservative and limited th
Chris Gillespie created FLINK-15202:
------------------------------
Summary: Increment metric when Interval Join record is late
Key: FLINK-15202
URL: https://issues.apache.org/jira/browse/FLINK-15202
Project: Flink
Issu
Zili Chen created FLINK-15201:
------------------------------
Summary: Remove verifications in detached execution
Key: FLINK-15201
URL: https://issues.apache.org/jira/browse/FLINK-15201
Project: Flink
Issue Type: Improvement
Leonard Xu created FLINK-15200:
------------------------------
Summary: legacy planner cannot deal with types with precision like
DataTypes.TIMESTAMP(3) in TableSourceUtil
Key: FLINK-15200
URL: https://issues.apache.org/jira/browse/FLINK-15200
Thanks, Hequn!
The Dockerfiles for the Flink images on Docker Hub for the 1.8.3 release
are prepared[1] and I'll open a pull request upstream[2] once the release
announcement has gone out.
And stay tuned: I'm working on a proposal for integrating the publication
of these Docker images into the Flink release process.
Some comments on Chesnay's message:
- Changing the number of splits will not reduce the complexity.
- One can also use the Flink build machines by opening a PR to the
"flink-ci/flink" repo, no need to open crappy PRs :)
- On the number of builds being run: We currently use 4 out of 10 machines
offe
Piotr Nowojski created FLINK-15199:
------------------------------
Summary: Benchmarks are not compiling
Key: FLINK-15199
URL: https://issues.apache.org/jira/browse/FLINK-15199
Project: Flink
Issue Type: Bug
Co
I think the configuration "pipeline.jars" [1] will work for you, because the
SQL Client supports --jars to load user jars and uses this option internally.
But I'm not an expert on this; maybe Kostas and Aljoscha can give
a definitive answer.
Best,
Jark
[1]:
https://ci.apache.org/projects/flink/flink-do
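As a sketch, assuming the 1.10 configuration API, "pipeline.jars" can also be
set programmatically; the jar path below is a placeholder:

    import java.util.Collections;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.configuration.PipelineOptions;

    public class PipelineJarsExample {
        public static void main(String[] args) {
            // "pipeline.jars" lists the jars to ship to the cluster with
            // the job; the SQL Client fills it from its --jars option.
            Configuration conf = new Configuration();
            conf.set(PipelineOptions.JARS,
                    Collections.singletonList("file:///path/to/udf.jar"));
        }
    }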
Note that for B it's not strictly necessary to maintain the current
number of splits; 2 might already be enough to bring contributor builds
to a more reasonable level.
I don't think that a contributor build taking 3.5h is a viable option;
people will start disregarding their own instance and j
Hi Robert,
thank you very much for raising this issue and improving the build system.
For now, I'd like to stick to a lean solution (= option A).
While option B can greatly reduce build times, it also has the habit of
clogging up the build machines. Just some arbitrary numbers, but it
currently
Hey devs,
I need your opinion on something: As part of our migration from Travis to
Azure, I'm revisiting the build system of Flink. I currently see two
different ways of proceeding, and I would like to know your opinion on the
two options.
A) We build and test Flink in one "mvn clean verify" call
Andrey Zagrebin created FLINK-15198:
------------------------------
Summary: Remove deprecated mesos.resourcemanager.tasks.mem in 1.11
Key: FLINK-15198
URL: https://issues.apache.org/jira/browse/FLINK-15198
Project: Flink
Hi,
Regarding the:
Collection getNextRecords()
I'm pretty sure such a design would unfortunately impact the performance
(accessing and potentially creating the collection on the hot path).
Also the
InputStatus emitNext(DataOutput output) throws Exception;
or
Status pollNext(SourceOutput sourceOutput) throws Exception;
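Reconstructed for clarity, the two hand-over styles being compared look
roughly like this; the types are simplified stand-ins for the FLIP-27 draft,
not the actual API:

    import java.util.Collection;

    interface DataOutput<T> {
        void emitRecord(T record);
    }

    enum InputStatus { MORE_AVAILABLE, NOTHING_AVAILABLE, END_OF_INPUT }

    interface ReaderVariants<T> {
        // Push style: the reader writes into the output and reports a
        // status; no collection is touched on the per-record path.
        InputStatus emitNext(DataOutput<T> output) throws Exception;

        // Pull style: returning a collection per call accesses (and may
        // allocate) the collection on the hot path.
        Collection<T> getNextRecords() throws Exception;
    }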
Yang Wang created FLINK-15197:
------------------------------
Summary: Add resource related config options to dynamic
properties for Kubernetes
Key: FLINK-15197
URL: https://issues.apache.org/jira/browse/FLINK-15197
Project: Flink
Yangze Guo created FLINK-15196:
------------------------------
Summary: The mesos.resourcemanager.tasks.cpus configuration does
not work as expected
Key: FLINK-15196
URL: https://issues.apache.org/jira/browse/FLINK-15196
Project:
At least I hope it has been fixed. Which version and planner are you using?
On 11.12.19 11:47, Arujit Pradhan wrote:
Hi Timo,
Thanks for the bug reference.
You mentioned that this bug has been fixed. Is the fix available for
Flink 1.9+ and the default query planner?
Thanks and regards,
/Arujit
Kostas Kloudas created FLINK-15195:
------------------------------
Summary: Remove unu
Key: FLINK-15195
URL: https://issues.apache.org/jira/browse/FLINK-15195
Project: Flink
Issue Type: Sub-task
Reporter: Kostas Kloudas
Thanks for driving this effort. Also +1 from my side. I have left a few
questions below.
> - Wordcount end-to-end test. For verifying the basic process of Mesos
> deployment.
Would this add additional test coverage compared to the
"multiple submissions" test case? I am asking because the E2E test
Hi Becket,
quick clarification from my side because I think you misunderstood my
question. I did not suggest to let the SourceReader return only a single
record at a time when calling getNextRecords. As the return type indicates,
the method can return an arbitrary number of records.
Cheers,
Till
Hi Dev,
After cutting the 1.10 release branch, I tried the following features of the
SQL CLI and found that it does not support:
- insert overwrite
- PARTITION (partcol1=val1, partcol2=val2 ...)
The SQL pattern is:
INSERT { INTO | OVERWRITE } TABLE tablename1 [PARTITION (partcol1=val1,
partcol2=val2 ...)]
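For concreteness, a sketch of the same statement going through the Table API
under the Hive dialect, assuming 1.10-era APIs; the table, column, and
partition names are made up:

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.SqlDialect;
    import org.apache.flink.table.api.TableEnvironment;

    public class InsertOverwriteExample {
        public static void main(String[] args) {
            TableEnvironment tEnv = TableEnvironment.create(
                    EnvironmentSettings.newInstance()
                            .useBlinkPlanner().inBatchMode().build());
            // OVERWRITE and PARTITION are only parsed under the Hive
            // dialect, which is the limitation this thread is discussing.
            tEnv.getConfig().setSqlDialect(SqlDialect.HIVE);
            tEnv.sqlUpdate(
                    "INSERT OVERWRITE TABLE sink_table "
                            + "PARTITION (dt='2019-12-12') "
                            + "SELECT name, cnt FROM source_table");
        }
    }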
Hi Becket,
Issue #1 - Design of Source interface
I mentioned the lack of a method like
Source#createEnumerator(Boundedness boundedness, SplitEnumeratorContext
context), because without it the current proposal is not complete/does not
work.
If we say that boundedness is an intrinsic property of a so
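For readers following the thread, the signature in question looks roughly
like this; these are simplified stand-ins, not the final interfaces:

    // Simplified stand-ins for the draft FLIP-27 types.
    enum Boundedness { BOUNDED, CONTINUOUS_UNBOUNDED }

    interface SplitEnumeratorContext<SplitT> { }

    interface SplitEnumerator<SplitT> { }

    interface Source<SplitT> {
        // The variant under discussion: the environment passes boundedness
        // in, rather than boundedness being an intrinsic property of the
        // source.
        SplitEnumerator<SplitT> createEnumerator(
                Boundedness boundedness,
                SplitEnumeratorContext<SplitT> context);
    }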
+1 for building the image locally. If the need arises, we could always
change it later.
Cheers,
Till
On Wed, Dec 11, 2019 at 4:05 AM Xintong Song wrote:
> Thanks, Yangze.
>
> +1 for building the image locally.
> The time consumption for both building image locally and pulling it from
>
Wei Zhong created FLINK-15194:
------------------------------
Summary: Directories in distributed caches are not extracted in
Yarn Per Job Cluster Mode
Key: FLINK-15194
URL: https://issues.apache.org/jira/browse/FLINK-15194
Project: Flink
Jingsong Lee created FLINK-15193:
Summary: Move DDL to first tab in table connector page
Key: FLINK-15193
URL: https://issues.apache.org/jira/browse/FLINK-15193
Project: Flink
Issue Type: Task