Rui Li created FLINK-15546:
--
Summary: Obscure error message in ScalarOperatorGens::generateCast
Key: FLINK-15546
URL: https://issues.apache.org/jira/browse/FLINK-15546
Project: Flink
Issue Type:
Hi Zhenghua,
I think it's not just about the precision of types. Connectors don't validate
the types either.
There is currently a "SchemaValidator", but it is only used to validate
type properties, not which types a connector supports.
I think we could have something like a "DataTypeValidator" to help
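A minimal sketch of what such a "DataTypeValidator" could look like. This is purely illustrative and not Flink API: the type names, the interface, and the connector's precision limit are all assumptions made up for this example.

```java
// Hypothetical sketch of a "DataTypeValidator": a connector declares which
// column types (and precisions) it supports, and the validator rejects
// unsupported types with a clear error message. Not Flink API.
public class DataTypeValidatorSketch {

    // Simplified stand-in for Flink's logical type roots; illustrative only.
    enum TypeRoot { VARCHAR, DECIMAL, TIMESTAMP }

    static final class ColumnType {
        final TypeRoot root;
        final int precision;
        ColumnType(TypeRoot root, int precision) {
            this.root = root;
            this.precision = precision;
        }
    }

    interface DataTypeValidator {
        // Throws if the connector cannot handle the given column type.
        void validate(String columnName, ColumnType type);
    }

    // Example: an imaginary connector that supports TIMESTAMP only up to precision 3.
    static final DataTypeValidator EXAMPLE = (name, type) -> {
        if (type.root == TypeRoot.TIMESTAMP && type.precision > 3) {
            throw new IllegalArgumentException(
                "Column '" + name + "': TIMESTAMP(" + type.precision
                    + ") is not supported by this connector; max precision is 3");
        }
    };

    public static void main(String[] args) {
        EXAMPLE.validate("ts_ok", new ColumnType(TypeRoot.TIMESTAMP, 3)); // passes
        try {
            EXAMPLE.validate("ts_bad", new ColumnType(TypeRoot.TIMESTAMP, 9));
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The point of the sketch is that the connector, not the user, states its limits, so the framework can fail early with an informative message instead of silently truncating data.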
Hi Bowen, Thanks for driving this.
I think it would be very convenient to use tables in external DBs via a
JDBC Catalog.
I have one concern about "Flink-Postgres Data Type Mapping" part:
In Postgres, TIME/TIMESTAMP WITH TIME ZONE has java.time.Instant
semantics,
and should be mapped to
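A minimal Java illustration of the Instant semantics mentioned above, using only java.time (no Flink or Postgres involved): two TIMESTAMP WITH TIME ZONE values written with different zone offsets denote the same absolute point in time.

```java
import java.time.Instant;
import java.time.OffsetDateTime;

public class TimestamptzSemantics {
    public static void main(String[] args) {
        // The same absolute point in time written with two different zone
        // offsets, as Postgres would accept for TIMESTAMP WITH TIME ZONE.
        OffsetDateTime utc      = OffsetDateTime.parse("2020-01-10T03:08:00Z");
        OffsetDateTime shanghai = OffsetDateTime.parse("2020-01-10T11:08:00+08:00");

        Instant a = utc.toInstant();
        Instant b = shanghai.toInstant();

        // Both collapse to the same java.time.Instant, which is why the type
        // has Instant semantics rather than LocalDateTime semantics.
        System.out.println(a.equals(b)); // prints "true"
    }
}
```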
Bowen Li created FLINK-15545:
Summary: Separate runtime params and semantics params from Flink
DDL for easier integration with catalogs and better user experience
Key: FLINK-15545
URL:
+1 from my side and thanks for driving this.
*Best Regards,*
*Zhenghua Gao*
On Fri, Jan 10, 2020 at 11:10 AM Forward Xu wrote:
> Hi Danny,
> Thank you very much.
>
> Best,
> Forward
>
> Danny Chan wrote on Fri, Jan 10, 2020 at 11:08 AM:
>
> > Thanks Forward ~
> > +1 from my side and would review your
Hi dev,
I'd like to kick off a discussion on a mechanism to validate the precision
of columns for some connectors.
We have come to an agreement that the user should be informed if the connector
does not support the desired precision. And from the connector developer's
view, there are three levels
Hi Danny,
Thank you very much.
Best,
Forward
Danny Chan wrote on Fri, Jan 10, 2020 at 11:08 AM:
> Thanks Forward ~
> +1 from my side and would review your Calcite PR this weekend :) Overall
> it looks good, and I believe we can merge it soon ~
>
> Best,
> Danny Chan
> On Jan 10, 2020 at 11:04 AM +0800, Jark Wu wrote:
Hi Bowen, thanks for the reply and the update.
> I don't see much value in providing a builder for jdbc catalogs, as they
only have 4 or 5 required params, no optional ones. I prefer users just
provide a base url without default db, username, or password so we don't need to
parse url all around, as I mentioned
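A tiny sketch of the point above, under the assumption (mine, for illustration) that the base URL has the form `jdbc:postgresql://host:port/`: if users supply the base URL without a default database, a per-database connection URL can be built by simple concatenation instead of parsing and rewriting a full URL everywhere.

```java
// Illustrative only: composing a per-database JDBC URL from a base URL
// that deliberately omits the database name. The URL format shown is an
// assumption for this sketch, not a documented contract.
public class BaseUrlSketch {
    public static void main(String[] args) {
        String baseUrl = "jdbc:postgresql://localhost:5432/"; // assumed format
        String db = "mydb";

        // No URL parsing needed: just append the database name.
        String fullUrl = baseUrl + db;
        System.out.println(fullUrl); // prints "jdbc:postgresql://localhost:5432/mydb"
    }
}
```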
Thanks Forward ~
+1 from my side and would review your Calcite PR this weekend :) Overall it
looks good, and I believe we can merge it soon ~
Best,
Danny Chan
On Jan 10, 2020 at 11:04 AM +0800, Jark Wu wrote:
> Thanks Forward for driving this,
>
> The design doc looks very good to me.
> +1 from my side.
>
Thanks Forward for driving this,
The design doc looks very good to me.
+1 from my side.
Best,
Jark
On Thu, 9 Jan 2020 at 20:12, Forward Xu wrote:
> Hi all,
>
> Having listened to Timo's opinion since the last discussion, I updated the
> document [1] and optimized the passing of parameters for the JSON
Thanks Bowen for bringing up this discussion ~
I think the JDBC catalog is a useful feature.
Just one question about the "Flink-Postgres Metaspace Mapping" part:
Since PostgreSQL does not have catalogs, but schemas under a database, why not
map the PG database to the Flink catalog and the PG schema
+1 non-binding to the N-Ary Stream Operator. Thanks Piotr for driving.
Looks like the previous FLIP-92 did not change the "Next FLIP Number" on the
FLIP page.
Best,
Jingsong Lee
On Fri, Jan 10, 2020 at 8:40 AM Benchao Li wrote:
> Hi Piotr,
>
> It seems that we have the 'FLIP-92' already.
> see:
>
Terry Wang created FLINK-15544:
--
Summary: Upgrade http-core version to avoid potential DeadLock
problem
Key: FLINK-15544
URL: https://issues.apache.org/jira/browse/FLINK-15544
Project: Flink
Thanks Bowen for the reply,
A user-facing JDBCCatalog and 'catalog.type' = 'jdbc' sounds good to me.
I have some other minor comments when I went through the updated
documentation:
1) 'base_url' configuration: We are following the configuration format
guideline [1], which suggests using dashes
Hi Piotr,
It seems that we have the 'FLIP-92' already.
see:
https://cwiki.apache.org/confluence/display/FLINK/FLIP-92%3A+JDBC+catalog+and+Postgres+catalog
Piotr Nowojski wrote on Thu, Jan 9, 2020 at 11:25 PM:
> Hi,
>
> I would like to start a vote for adding the N-Ary Stream Operator in Flink
> as discussed
Hello Flink dev and user,
We have a pipeline that reads both bounded and unbounded sources, and our
understanding is that when the bounded sources complete they should get a
watermark of +inf, and then we should be able to take a savepoint and safely
restart the pipeline. However, we have source
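For context, the "+inf" watermark mentioned above is, in Flink, a final watermark with timestamp Long.MAX_VALUE (Watermark.MAX_WATERMARK) emitted when a source finishes. The sketch below is plain Java, not the Flink API, and just illustrates the convention downstream operators rely on.

```java
// Minimal illustration of the final-watermark convention: a finished
// (bounded) input is signalled by a watermark of Long.MAX_VALUE, which
// downstream logic reads as "no more data will arrive on this input".
public class FinalWatermark {
    static final long MAX_WATERMARK = Long.MAX_VALUE;

    static boolean inputFinished(long currentWatermark) {
        return currentWatermark == MAX_WATERMARK;
    }

    public static void main(String[] args) {
        // An ordinary event-time watermark (epoch millis): input still live.
        System.out.println(inputFinished(1_578_600_000_000L)); // prints "false"
        // The final watermark: input is done.
        System.out.println(inputFinished(MAX_WATERMARK));      // prints "true"
    }
}
```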
Hi Jark and Jingsong,
Thanks for your review. Please see my reply in line.
> why introducing a `PostgresJDBCCatalog`, not a generic `JDBCCatalog`
(catalog.type = 'postgres' vs 'jdbc') ?
Thanks for the reminder; I looked at JDBCDialect. A generic,
user-facing JDBCCatalog with catalog.type =
Hi,
I have started a vote on this topic [1], please cast your +1 or -1 there :)
Also I assigned FLIP-92 number to this design doc.
Piotrek
[1]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/VOTE-FLIP-92-Add-N-Ary-Stream-Operator-in-Flink-td36539.html
Hi,
I would like to start a vote for adding the N-Ary Stream Operator in Flink as
discussed in the discussion thread [1].
This vote will be opened at least until Wednesday, January 15th 8:00 UTC.
Piotrek
[1]
Chesnay Schepler created FLINK-15543:
Summary: Apache Camel not bundled but listed in flink-dist NOTICE
Key: FLINK-15543
URL: https://issues.apache.org/jira/browse/FLINK-15543
Project: Flink
Chesnay Schepler created FLINK-15542:
Summary: lz4-java licensing is incorrect
Key: FLINK-15542
URL: https://issues.apache.org/jira/browse/FLINK-15542
Project: Flink
Issue Type: Bug
Xintong Song created FLINK-15541:
Summary: FlinkKinesisConsumerTest.testSourceSynchronization is
unstable on travis.
Key: FLINK-15541
URL: https://issues.apache.org/jira/browse/FLINK-15541
Project:
Chesnay Schepler created FLINK-15540:
Summary: flink-shaded-hadoop-2-uber bundles wrong dependency
versions
Key: FLINK-15540
URL: https://issues.apache.org/jira/browse/FLINK-15540
Project: Flink
Rui Li created FLINK-15539:
--
Summary: Allow user to choose planner for scala shell
Key: FLINK-15539
URL: https://issues.apache.org/jira/browse/FLINK-15539
Project: Flink
Issue Type: Improvement
Hi all,
Having listened to Timo's opinion since the last discussion, I updated the
document [1]: optimized the passing of parameters in the JSON table API and
added the return type when describing each JSON function. This makes the
documentation clearer. So I am again calling a vote on FLIP-90 [2], since we have reached
Great! Thanks, guys, for the continued effort on this topic!
On Thu, Jan 9, 2020 at 5:27 AM Xintong Song wrote:
> Thanks all for the discussion. I believe we have reached consensus on all the
> open questions discussed in this thread.
>
> Since Andrey already created a JIRA ticket for renaming
Liya Fan created FLINK-15538:
Summary: Separate decimal implementations into separate sub-classes
Key: FLINK-15538
URL: https://issues.apache.org/jira/browse/FLINK-15538
Project: Flink
Issue
Shuo Cheng created FLINK-15537:
--
Summary: Type of keys should be `BinaryRow` when manipulating map
state with `BaseRow` as key type.
Key: FLINK-15537
URL: https://issues.apache.org/jira/browse/FLINK-15537
Hi all,
As described in FLINK-15145 [1], we decided to tune the default
configuration values of FLIP-49 with more jobs and cases.
After spending time analyzing and tuning the configurations, I've come up with
several findings. To be brief, I would suggest the following changes, and
for more details
Zili Chen created FLINK-15536:
-
Summary: Revert removal of
ConfigConstants.YARN_MAX_FAILED_CONTAINERS
Key: FLINK-15536
URL: https://issues.apache.org/jira/browse/FLINK-15536
Project: Flink
Hi all,
Yes, I agree. It would be good to have dedicated methods to check the
validity of SQL queries.
I would propose to have two validation methods:
1. syntactic and semantic validation of a SQL query, i.e., SQL keywords,
catalog information, types in expressions and functions, etc. This is a
vinoyang created FLINK-15535:
Summary: Add usage of ProcessFunctionTestHarnesses for testing
documentation
Key: FLINK-15535
URL: https://issues.apache.org/jira/browse/FLINK-15535
Project: Flink
Hi,
+1 to the general idea. Supporting SQL client gateway mode will bridge the
gap between Flink SQL and production environments. Also, the JDBC driver is
quite a good supplement for the usability of Flink SQL; users will have more
choices to try out Flink SQL, such as Tableau.
I went through
Hi all:
We are using Flink's iteration, and found that the
SpillingResettableMutableObjectIterator has a data overflow problem if
the number of elements in a single input exceeds Integer.MAX_VALUE.
The reason is that inside the SpillingResettableMutableObjectIterator, it
tracks the total number of elements
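The overflow described above can be reproduced in isolation: an element count kept in an int silently wraps to a negative value past Integer.MAX_VALUE, whereas a long counter keeps counting correctly. This sketch only demonstrates the arithmetic, not the iterator itself.

```java
// Demonstrates the int-counter overflow: incrementing past Integer.MAX_VALUE
// wraps to Integer.MIN_VALUE, while a long counter does not overflow.
public class CounterOverflow {
    public static void main(String[] args) {
        int intCount = Integer.MAX_VALUE;
        long longCount = Integer.MAX_VALUE;

        intCount++;   // wraps to a negative number
        longCount++;  // counts correctly

        System.out.println(intCount);  // prints "-2147483648"
        System.out.println(longCount); // prints "2147483648"
    }
}
```

Changing the element counter from int to long is the obvious fix for counts beyond Integer.MAX_VALUE.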
Yu Li created FLINK-15534:
-
Summary:
YARNSessionCapacitySchedulerITCase#perJobYarnClusterWithParallelism failed due
to NPE
Key: FLINK-15534
URL: https://issues.apache.org/jira/browse/FLINK-15534
Project: