Rui Li created FLINK-16767:
--
Summary: Failed to read Hive table with RegexSerDe
Key: FLINK-16767
URL: https://issues.apache.org/jira/browse/FLINK-16767
Project: Flink
Issue Type: Bug
Hequn Cheng created FLINK-16766:
---
Summary: Support create StreamTableEnvironment without passing
StreamExecutionEnvironment
Key: FLINK-16766
URL: https://issues.apache.org/jira/browse/FLINK-16766
Hi Becket,
I don't think we should discuss this purely from an engineering perspective. Your
proposal tries to let SQL connector developers understand as few SQL concepts as
possible. But quite the opposite, we are designing those interfaces to emphasize
the SQL concepts, to bridge high-level concepts
Hequn Cheng created FLINK-16765:
---
Summary: Replace all BatchTableEnvironment to
StreamTableEnvironment in the document of PyFlink
Key: FLINK-16765
URL: https://issues.apache.org/jira/browse/FLINK-16765
pine zhao created FLINK-16764:
-
Summary: Kafka topic discovery
Key: FLINK-16764
URL: https://issues.apache.org/jira/browse/FLINK-16764
Project: Flink
Issue Type: Improvement
Hequn Cheng created FLINK-16763:
---
Summary: Should not use BatchTableEnvironment for Python UDF in
the document of flink-1.10
Key: FLINK-16763
URL: https://issues.apache.org/jira/browse/FLINK-16763
sunjincheng created FLINK-16762:
---
Summary: Relocation Beam dependency of PyFlink
Key: FLINK-16762
URL: https://issues.apache.org/jira/browse/FLINK-16762
Project: Flink
Issue Type: Improvement
Hi Jark,
It is good to know that we do not expect the end users to touch those
interfaces.
Then the question boils down to whether the connector developers should be
aware of the interfaces that are only used by the SQL optimizer. It seems a
win if we can avoid that.
Two potential solutions off
Hequn Cheng created FLINK-16761:
---
Summary: Return JobExecutionResult for Python ExecutionEnvironment
and TableEnvironment
Key: FLINK-16761
URL: https://issues.apache.org/jira/browse/FLINK-16761
Yang Wang created FLINK-16760:
-
Summary: Support the yaml file submission for native Kubernetes
integration
Key: FLINK-16760
URL: https://issues.apache.org/jira/browse/FLINK-16760
Project: Flink
Dian Fu created FLINK-16759:
---
Summary: HiveModuleTest failed to compile on release-1.10
Key: FLINK-16759
URL: https://issues.apache.org/jira/browse/FLINK-16759
Project: Flink
Issue Type: Bug
Hi Becket,
Regarding Flavor1 and Flavor2, I want to clarify that users will never
use a table source like this:
{
MyTableSource myTableSource = MyTableSourceFactory.create();
myTableSource.setSchema(mySchema);
myTableSource.applyFilterPredicate(expression);
...
}
TableFactory and
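A minimal, self-contained sketch of the flow the mail describes: the factory and the planner, not end users, create and configure the table source. All names here (MyTableSource, MyTableSourceFactory, plan) are hypothetical stand-ins, not actual Flink API.

```java
import java.util.ArrayList;
import java.util.List;

public class FactoryFlow {

    // A trivial stand-in for a table source that the planner can configure.
    static class MyTableSource {
        final List<String> appliedFilters = new ArrayList<>();

        void applyFilterPredicate(String expression) {
            appliedFilters.add(expression);
        }
    }

    // The factory is the only entry point; user code never instantiates the source.
    static class MyTableSourceFactory {
        static MyTableSource create() {
            return new MyTableSource();
        }
    }

    // Planner-side code: discovers the factory, creates the source,
    // and pushes optimizations into it during planning.
    static MyTableSource plan(List<String> filtersFromOptimizer) {
        MyTableSource source = MyTableSourceFactory.create();
        for (String f : filtersFromOptimizer) {
            source.applyFilterPredicate(f);
        }
        return source;
    }

    public static void main(String[] args) {
        MyTableSource s = plan(List.of("a > 1", "b IS NOT NULL"));
        System.out.println(s.appliedFilters);
    }
}
```

The point is only about who calls `applyFilterPredicate`: the planner during optimization, never user code.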
Hi Timo and Dawid,
Thanks for the clarification; it really helps. You are right that we are
on the same page regarding the hierarchy. I think the only difference
between our views is the flavor of the interfaces. There are two flavors of
the source interface, for DataStream and Table sources.
Seth Wiesman created FLINK-16758:
Summary: Port StateFun Documentation to Jekyll
Key: FLINK-16758
URL: https://issues.apache.org/jira/browse/FLINK-16758
Project: Flink
Issue Type:
Igal Shilman created FLINK-16756:
Summary: Move Bootstrap API example to statefun-examples/
Key: FLINK-16756
URL: https://issues.apache.org/jira/browse/FLINK-16756
Project: Flink
Issue Type:
Jiayi Liao created FLINK-16755:
--
Summary: Savepoint docs should be updated
Key: FLINK-16755
URL: https://issues.apache.org/jira/browse/FLINK-16755
Project: Flink
Issue Type: Improvement
Andrey Zagrebin created FLINK-16754:
---
Summary: Consider refactoring of ProcessMemoryUtilsTestBase to
avoid inheritance
Key: FLINK-16754
URL: https://issues.apache.org/jira/browse/FLINK-16754
Jiayi Liao created FLINK-16753:
--
Summary: Exception from AsyncCheckpointRunnable should be wrapped
in CheckpointException
Key: FLINK-16753
URL: https://issues.apache.org/jira/browse/FLINK-16753
Project:
Igal Shilman created FLINK-16752:
Summary: Ridesharing example doesn't start
Key: FLINK-16752
URL: https://issues.apache.org/jira/browse/FLINK-16752
Project: Flink
Issue Type: Bug
Hi Dawid,
thanks for your design document.
LIKE vs. INHERITS:
I would also not start creating transitive dependencies for table
metadata. This is very complicated to maintain in a long-term, esp. when
we ALTER or DELETE a table. Instead the new table metadata should be
materialized
Hi Becket,
I really think we don't have differing opinions. We might not see the
changes in the same way yet. Personally, I think of the
DynamicTableSource as a factory for a Source implemented for the
DataStream API. The important fact about the DynamicTableSource and all
feature traits
Hi Becket,
it is true that concepts such as projection and filtering are worth
having in the DataStream API as well. And a SourceFunction can provide
interfaces for those concepts. In the table-related classes we will
generate runtime classes that adhere to those interfaces and deal with
RowData
Hi Jark,
> However, the interfaces proposed by FLIP-95 are mainly used during
> optimization (compiling), not runtime.
Yes, I am aware of that. I am wondering whether the SQL planner can use the
counterpart interface in the Source to apply the optimizations. It seems
that should also work, right?
If
Till Rohrmann created FLINK-16751:
-
Summary: Expose bind port for Flink metric query service
Key: FLINK-16751
URL: https://issues.apache.org/jira/browse/FLINK-16751
Project: Flink
Issue
Hey Kurt,
> I don't think DataStream should see some SQL specific concepts such as
> Filtering or ComputedColumn.
Projectable and Filterable seem not necessarily SQL concepts; they could be
applicable to a DataStream source as well, to reduce the network load. For
example, ORC and Parquet should
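To make the point concrete, here is a self-contained sketch of a format-agnostic filter-pushdown capability, in the spirit of what a DataStream source could expose. The names (FilterableSource, ListSource) and the in-memory data are purely illustrative assumptions; a real ORC/Parquet source would evaluate pushed predicates against row-group statistics instead.

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class PushdownSketch {

    // A generic "filterable" capability: the caller hands over predicates,
    // the source keeps the ones it can evaluate itself and returns the
    // remainder for the engine to re-apply.
    interface FilterableSource<T> {
        List<Predicate<T>> applyFilters(List<Predicate<T>> filters);
    }

    // A toy in-memory source that can evaluate every predicate itself.
    static class ListSource implements FilterableSource<Integer> {
        private final List<Integer> data;
        private Predicate<Integer> pushed = x -> true;

        ListSource(List<Integer> data) {
            this.data = data;
        }

        @Override
        public List<Predicate<Integer>> applyFilters(List<Predicate<Integer>> filters) {
            for (Predicate<Integer> f : filters) {
                pushed = pushed.and(f);
            }
            return List.of(); // nothing left for the engine to re-apply
        }

        List<Integer> read() {
            return data.stream().filter(pushed).collect(Collectors.toList());
        }
    }

    static List<Integer> demo() {
        ListSource source = new ListSource(List.of(1, 5, 10, 15));
        source.applyFilters(List.of(x -> x > 4, x -> x < 12));
        return source.read();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // [5, 10]
    }
}
```

Nothing in this contract is SQL-specific: any caller that can express predicates benefits from the reduced data volume.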
Thanks Timo for updating the formats section. That would be very helpful
for changelog support (FLIP-105).
I just left 2 minor comments about some method names. In general, I'm +1 to
start a vote.
Zhijiang created FLINK-16750:
Summary: Kerberized YARN on Docker test fails with staring Hadoop
cluster
Key: FLINK-16750
URL: https://issues.apache.org/jira/browse/FLINK-16750
Project: Flink
+1 to use LIKE and put it after the schema part.
I also prefer the keyword LIKE to INHERITS, because it's easier to type
and understand for a non-native English user :)
But I would like to limit the DDL to a single LIKE clause in the first
version. We can allow multiple LIKE clauses in the future if
Hi Becket,
I don't think DataStream should see SQL-specific concepts such as
Filtering or ComputedColumn. It's better to stay within the SQL area and
translate them to more generic concepts when translating to the
DataStream/Runtime layer, such as using a MapFunction to represent
computed column logic.
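A self-contained sketch of that translation, assuming a computed column like "c AS a + b": the planner would generate something equivalent to a plain map function at the runtime layer. `MapFunction` here is a local stand-in for Flink's interface of the same shape, and the int-array row is a hypothetical simplification.

```java
import java.util.Arrays;

public class ComputedColumnSketch {

    // Local stand-in mirroring the shape of a map function interface.
    interface MapFunction<I, O> {
        O map(I value);
    }

    // Generated equivalent of the computed column "c AS a + b":
    // append one field computed from the existing ones.
    static final MapFunction<int[], int[]> appendComputedColumn = row -> {
        int[] out = Arrays.copyOf(row, row.length + 1);
        out[row.length] = row[0] + row[1]; // c = a + b
        return out;
    };

    public static void main(String[] args) {
        int[] result = appendComputedColumn.map(new int[]{2, 3});
        System.out.println(Arrays.toString(result)); // [2, 3, 5]
    }
}
```

The runtime layer only ever sees the generic map function; the "computed column" concept stays on the SQL side.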
Best,
Sorry for a late reply, but I was on vacation.
As for putting the LIKE after the schema part: you're right, the SQL
standard allows it only in the schema part. I was misled by examples
for DB2 and MySQL, which differ from the standard in that respect. My
bad, sorry.
Nevertheless I'd still be in
Yang Wang created FLINK-16749:
-
Summary: Support to set node selector for JM/TM pod
Key: FLINK-16749
URL: https://issues.apache.org/jira/browse/FLINK-16749
Project: Flink
Issue Type: Sub-task
Hi Timo and Dawid,
It's really great that we have the same goal. I am actually wondering if we
can go one step further and avoid some of the interfaces in Table as well.
For example, if we have the FilterableSource, do we still need the
FilterableTableSource? Should DynamicTableSource just become
Dian Fu created FLINK-16747:
---
Summary: Performance improvements for Python UDF
Key: FLINK-16747
URL: https://issues.apache.org/jira/browse/FLINK-16747
Project: Flink
Issue Type: Improvement
+1. Thanks Timo for the design doc.
We could also consider @Experimental. But I am +1 to @PublicEvolving; we
should be confident in the current change.
Best,
Jingsong Lee
On Tue, Mar 24, 2020 at 4:30 PM Timo Walther wrote:
> @Becket: We totally agree that we don't need table specific
Andrey Zagrebin created FLINK-16742:
---
Summary: Extend and use BashJavaUtils to start JM JVM process and
pass JVM memory args
Key: FLINK-16742
URL: https://issues.apache.org/jira/browse/FLINK-16742
Hi Krzysztof,
from my past experience as a data engineer, I can safely say that users often
underestimate the optimization potential and techniques of the systems they
use. I implemented a similar thing in the past, where I parsed up to
500 rules reading from up to 10 data sources.
The basic idea was
@Becket: We totally agree that we don't need table-specific connectors
during runtime. As Dawid said, the interfaces proposed here are just for
communication with the planner. Once the properties (watermarks,
computed columns, filters, projection, etc.) are negotiated, we can
configure a
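The two-phase shape described above can be sketched as follows. This is a hypothetical illustration, not Flink's actual interfaces: a planning-time object accumulates the negotiated properties, then hands off a plain runtime reader with nothing table-specific left on it.

```java
import java.util.List;
import java.util.function.Supplier;
import java.util.stream.Collectors;

public class TwoPhaseSketch {

    // Runtime side: a plain supplier of records, nothing SQL-specific.
    interface RuntimeReader extends Supplier<List<String>> {}

    // Planning side: accumulates negotiated properties, then builds the reader.
    static class PlanTimeSource {
        private int projectedField = -1;

        void applyProjection(int field) {
            this.projectedField = field;
        }

        RuntimeReader createReader(List<String[]> rows) {
            final int field = projectedField; // property fixed at planning time
            return () -> rows.stream()
                    .map(r -> r[field])
                    .collect(Collectors.toList());
        }
    }

    static List<String> demo() {
        PlanTimeSource source = new PlanTimeSource();
        source.applyProjection(1); // negotiated during optimization
        RuntimeReader reader = source.createReader(List.of(
                new String[]{"a", "x"}, new String[]{"b", "y"}));
        return reader.get(); // runtime only sees the plain reader
    }

    public static void main(String[] args) {
        System.out.println(demo()); // [x, y]
    }
}
```

Once `createReader` has been called, the planner-facing methods are no longer needed: the runtime object carries only the already-negotiated behavior.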
Andrey Zagrebin created FLINK-16746:
---
Summary: Deprecate/remove legacy memory options for JM and expose
the new ones
Key: FLINK-16746
URL: https://issues.apache.org/jira/browse/FLINK-16746
Project:
Jingsong Lee created FLINK-16743:
Summary: Introduce datagen, print, blackhole connectors
Key: FLINK-16743
URL: https://issues.apache.org/jira/browse/FLINK-16743
Project: Flink
Issue Type:
Andrey Zagrebin created FLINK-16745:
---
Summary: Use JobManagerProcessUtils to start JM container and pass
JVM memory args
Key: FLINK-16745
URL: https://issues.apache.org/jira/browse/FLINK-16745
Roman Khachatryan created FLINK-16744:
-
Summary: Implement API to persist channel state: checkpointing
metadata
Key: FLINK-16744
URL: https://issues.apache.org/jira/browse/FLINK-16744
Project:
Hi Becket,
Answering your question: we have the same intention not to duplicate
connectors between the DataStream and Table APIs. The interfaces proposed in
the FLIP are a way to describe the relational properties of a source. The
intention is, as you described, to translate all of those expressed as
Hi all,
I created https://issues.apache.org/jira/browse/FLINK-16743 for follow-up
discussion. FYI.
Best,
Jingsong Lee
On Tue, Mar 24, 2020 at 2:20 PM Bowen Li wrote:
> I agree with Jingsong that sink schema inference and system tables can be
> considered later. I wouldn’t recommend to tackle
Yadong Xie created FLINK-16741:
--
Summary: add log list and read log by name for taskmanager in the
web
Key: FLINK-16741
URL: https://issues.apache.org/jira/browse/FLINK-16741
Project: Flink
I agree with Jingsong that sink schema inference and system tables can be
considered later. I wouldn't recommend tackling them for the sake of
simplifying the user experience to the extreme. Providing the above handy
source and sink implementations already offers users a ton of immediate
value.
On
On
Rui Li created FLINK-16740:
--
Summary: OrcSplitReaderUtil::logicalTypeToOrcType fails to create
decimal type with precision < 10
Key: FLINK-16740
URL: https://issues.apache.org/jira/browse/FLINK-16740