Leonard Xu created FLINK-15379:
--
Summary: JDBC connector returns wrong value if defined dataType
contains precision
Key: FLINK-15379
URL: https://issues.apache.org/jira/browse/FLINK-15379
Project: Flink
In the Flink community everybody is already a contributor, and we don't
grant extra permissions to contributors on JIRA.
Please check out the guide "How To Contribute" [1].
Best,
tison.
[1] https://flink.apache.org/contributing/how-to-contribute.html
On Tue, Dec 24, 2019 at 3:12 PM, 余贤圣 wrote:
> Hi,
>
> I want to
Hi,
I want to contribute to Apache Flink. Would you please give me contributor
permission? My JIRA ID is Stephen Yu (ashou...@163.com).
Thanks
Stephen Yu
Hi Jark,
I got you. We discussed this question at Flink Forward 2019.
I know that I can implement a custom operator to resolve this problem,
but there are also some other problems:
First,
this is a very common scenario that we often meet;
I would have to rewrite "BroadcastConnectedStream", "ConnectedStreams" ...
and "Tw
ouyangwulin created FLINK-15378:
---
Summary: StreamFileSystemSink supports multiple HDFS plugins.
Key: FLINK-15378
URL: https://issues.apache.org/jira/browse/FLINK-15378
Project: Flink
Issue Type:
Yu Li created FLINK-15377:
-
Summary: Mesos WordCount test fails on travis
Key: FLINK-15377
URL: https://issues.apache.org/jira/browse/FLINK-15377
Project: Flink
Issue Type: Bug
Components:
Bowen Li created FLINK-15376:
Summary: support "CREATE TABLE AS" in Flink SQL
Key: FLINK-15376
URL: https://issues.apache.org/jira/browse/FLINK-15376
Project: Flink
Issue Type: New Feature
Hi Ocean,
You can implement your custom operator with the "TwoInputStreamOperator"
interface.
The TwoInputStreamOperator interface provides "processWatermark1" and
"processWatermark2", which handle
watermarks for the left stream and the right stream. You can then ignore the
watermarks from the right stream and
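As a minimal self-contained sketch of the idea only (these are illustrative class and method names, not the actual Flink operator API): a two-input operator normally advances its output watermark to min(watermark1, watermark2), and a "left-only" variant simply ignores the right stream's watermark.

```java
// Illustrative sketch, not Flink code: models how a two-input operator
// combines watermarks, and how ignoring the right input changes the result.
public class LeftOnlyWatermarkSketch {

    static class TwoInputOperator {
        long wm1 = Long.MIN_VALUE;   // last watermark seen on input 1 (left)
        long wm2 = Long.MIN_VALUE;   // last watermark seen on input 2 (right)
        long output = Long.MIN_VALUE; // watermark emitted downstream
        final boolean leftOnly;

        TwoInputOperator(boolean leftOnly) { this.leftOnly = leftOnly; }

        void processWatermark1(long wm) { wm1 = wm; advance(); }
        void processWatermark2(long wm) { wm2 = wm; advance(); }

        private void advance() {
            // Default: held back by the slower input. Left-only: track input 1.
            long candidate = leftOnly ? wm1 : Math.min(wm1, wm2);
            if (candidate > output) output = candidate;
        }
    }

    public static void main(String[] args) {
        TwoInputOperator combined = new TwoInputOperator(false);
        combined.processWatermark1(100);
        combined.processWatermark2(50);
        System.out.println(combined.output); // 50: held back by the right stream

        TwoInputOperator leftOnly = new TwoInputOperator(true);
        leftOnly.processWatermark1(100);
        leftOnly.processWatermark2(50);
        System.out.println(leftOnly.output); // 100: right stream ignored
    }
}
```

In real Flink code the same effect comes from overriding processWatermark2 to do nothing in a custom TwoInputStreamOperator implementation.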
Xintong Song created FLINK-15375:
Summary: Improve MemorySize to print / parse with better
readability.
Key: FLINK-15375
URL: https://issues.apache.org/jira/browse/FLINK-15375
Project: Flink
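FLINK-15375 is about printing and parsing memory sizes with better readability. A hypothetical, self-contained sketch of such parse/print logic (names and unit spellings are illustrative only, not Flink's actual MemorySize implementation):

```java
// Illustrative sketch: parse "512 kb" / "1 gb" into bytes, and print bytes
// back using the largest unit that divides evenly.
public class MemorySizeSketch {
    static final String[] UNITS = {"bytes", "kb", "mb", "gb"};

    // Parse strings like "1024", "512 kb", "1 gb" into a byte count.
    static long parseBytes(String text) {
        String s = text.trim().toLowerCase();
        long multiplier = 1L;
        for (int i = UNITS.length - 1; i >= 1; i--) {
            if (s.endsWith(UNITS[i])) {
                multiplier = 1L << (10 * i); // kb=2^10, mb=2^20, gb=2^30
                s = s.substring(0, s.length() - UNITS[i].length()).trim();
                break;
            }
        }
        return Long.parseLong(s) * multiplier;
    }

    // Print a byte count with the largest unit that divides it evenly.
    static String print(long bytes) {
        for (int i = UNITS.length - 1; i >= 1; i--) {
            long unit = 1L << (10 * i);
            if (bytes >= unit && bytes % unit == 0) {
                return (bytes / unit) + " " + UNITS[i];
            }
        }
        return bytes + " bytes";
    }

    public static void main(String[] args) {
        System.out.println(parseBytes("512 kb")); // 524288
        System.out.println(print(1073741824L));   // 1 gb
    }
}
```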
Xintong Song created FLINK-15374:
Summary: Update descriptions for jvm overhead config options
Key: FLINK-15374
URL: https://issues.apache.org/jira/browse/FLINK-15374
Project: Flink
Issue Type:
Xintong Song created FLINK-15373:
Summary: Update descriptions for framework / task off-heap memory
config options
Key: FLINK-15373
URL: https://issues.apache.org/jira/browse/FLINK-15373
Project: Flink
Xintong Song created FLINK-15372:
Summary: Use shorter config keys for FLIP-49 total memory config
options
Key: FLINK-15372
URL: https://issues.apache.org/jira/browse/FLINK-15372
Project: Flink
Hi gteeice,
CC: godfrey and tsreaper.
I know they are considering and designing JDBC support. I believe there
will be a lot of progress in 1.11.
Best,
Jingsong Lee
On Tue, Dec 24, 2019 at 11:07 AM 唐晨阳 wrote:
> Is there any detailed plan or progress regarding jdbc support?
> On 12/3/2019 04:41,
Xintong Song created FLINK-15371:
Summary: Change FLIP-49 memory configurations to use the new
memory type config options
Key: FLINK-15371
URL: https://issues.apache.org/jira/browse/FLINK-15371
Project: Flink
Yun Tang created FLINK-15370:
Summary: Configured write buffer manager does not actually take effect
in RocksDB's DBOptions
Key: FLINK-15370
URL: https://issues.apache.org/jira/browse/FLINK-15370
Project: Flink
Xintong Song created FLINK-15369:
Summary: MiniCluster uses fixed network / managed memory sizes by
default
Key: FLINK-15369
URL: https://issues.apache.org/jira/browse/FLINK-15369
Project: Flink
Yu Li created FLINK-15368:
-
Summary: Add end-to-end test for controlling RocksDB memory usage
Key: FLINK-15368
URL: https://issues.apache.org/jira/browse/FLINK-15368
Project: Flink
Issue Type: Sub-task
Xintong Song created FLINK-15367:
Summary: Handle backwards compatibility of "taskmanager.heap.size"
differently for standalone / active setups
Key: FLINK-15367
URL: https://issues.apache.org/jira/browse/FLINK-15367
Is there any detailed plan or progress regarding jdbc support?
On 12/3/2019 04:41,Bowen Li (Jira) wrote:
Bowen Li created FLINK-15017:
Summary: add a thrift jdbc/odbc server for Flink
Key: FLINK-15017
URL: https://issues.apache.org/jira/browse/FLINK-15017
Project:
Hi Bowen,
I've updated the design doc, PTAL.
Btw the PR for catalog is https://github.com/apache/flink/pull/10455, could
you please take a look?
Best,
Yijie
On Mon, Dec 9, 2019 at 8:44 AM Bowen Li wrote:
> Hi Yijie,
>
> I took a look at the design doc. LGTM overall, left a few questions.
Leonard Xu created FLINK-15366:
--
Summary: Dimension table does not support computed columns
Key: FLINK-15366
URL: https://issues.apache.org/jira/browse/FLINK-15366
Project: Flink
Issue Type: Bug
Hi all,
It seems we have already reached consensus on most of the issues. Thanks
everyone for the good discussion.
While there are still open questions under discussion, I'd like to
summarize the discussion so far and list the action items that we have
already agreed on. In this way, we can alread
Yangze Guo created FLINK-15365:
--
Summary: Introduce streaming task using rocksDB backend e2e tests
for Mesos
Key: FLINK-15365
URL: https://issues.apache.org/jira/browse/FLINK-15365
Project: Flink
Yangze Guo created FLINK-15364:
--
Summary: Introduce streaming task using heap backend e2e tests for
Mesos
Key: FLINK-15364
URL: https://issues.apache.org/jira/browse/FLINK-15364
Project: Flink
>
> How about putting "taskmanager.memory.flink.size" in the configuration?
> Then new downloaded Flink behaves similar to the previous Standalone setups.
> If someone upgrades the binaries, but re-uses their old configuration,
> then they get the compatibility as discussed previously.
> We used th
How about putting "taskmanager.memory.flink.size" in the configuration?
Then new downloaded Flink behaves similar to the previous Standalone setups.
If someone upgrades the binaries, but re-uses their old configuration, then
they get the compatibility as discussed previously.
We used that approach
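Concretely, the suggestion above amounts to shipping a default entry in flink-conf.yaml, so that a fresh download pins total Flink memory the way previous standalone setups did. A sketch of such an entry (the exact value here is illustrative, not the proposed default):

```yaml
# Total Flink memory for the TaskManager (value is illustrative).
# Shipping this in the default flink-conf.yaml makes a new download behave
# like previous standalone setups; users who reuse an old configuration
# without this key fall back to the compatibility path discussed above.
taskmanager.memory.flink.size: 1280m
```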
Hi all:
Currently, the "TwoInputStreamOperator" implementations such as
"CoBroadcastWithKeyedOperator" and "KeyedCoProcessOperator", and the
(Co)streams such as "ConnectedStreams" and "BroadcastConnectedStream", only
support computing the watermark from both streams,
but in some cases we need only one stream to compute the watermark.
Leonard Xu created FLINK-15363:
--
Summary: HBase connector does not support data types with precision
like TIMESTAMP(9) and DECIMAL(10,4)
Key: FLINK-15363
URL: https://issues.apache.org/jira/browse/FLINK-15363
vinoyang created FLINK-15362:
Summary: Bump Kafka client version to 2.4.0 for universal Kafka
connector
Key: FLINK-15362
URL: https://issues.apache.org/jira/browse/FLINK-15362
Project: Flink
Issue Type: