Hi,
@Timo, thanks for your reply, and congratulations on your job.
@Fabian, no matter which way we achieve it, as long as the field
attributes are identified when the table is generated or created, that is
what we want. I think at this point we are on the same page. We can go ahead.
And very glad to hear T
Tzu-Li (Gordon) Tai created FLINK-5949:
--
Summary: Flink on YARN checks for Kerberos credentials for
non-Kerberos authentication methods
Key: FLINK-5949
URL: https://issues.apache.org/jira/browse/FLINK-5949
Hi,
Thanks @Fabian and @Xingcan for the explanation.
@Xingcan Here I mean I have a data analytics server that has *data tables*.
So my initial requirement is to make a client connector for Flink to access
those *data tables*. Then I started by implementing the Flink InputFormat
interface and that was
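For readers following the thread: implementing such a batch connector mostly
comes down to an InputFormat. Below is a minimal sketch extending
GenericInputFormat (so split handling is inherited); the class name and the
row-fetching logic are hypothetical stand-ins for the analytics server.

import java.io.IOException;
import java.util.Arrays;
import java.util.Iterator;
import org.apache.flink.api.common.io.GenericInputFormat;
import org.apache.flink.core.io.GenericInputSplit;

// Hypothetical format reading rows from an external analytics server.
public class TableRowInputFormat extends GenericInputFormat<String> {

    private transient Iterator<String> rows;

    @Override
    public void open(GenericInputSplit split) throws IOException {
        super.open(split);
        // Placeholder: a real connector would open a connection here and
        // fetch the rows that belong to this split.
        rows = Arrays.asList("row-1", "row-2").iterator();
    }

    @Override
    public boolean reachedEnd() {
        return !rows.hasNext();
    }

    @Override
    public String nextRecord(String reuse) {
        return rows.next();
    }
}

Such a format can then be plugged in with
env.createInput(new TableRowInputFormat()).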
Hi Greg,
The use case is to create a visualization of the topology.
So I don’t think there’s any reason to “act on the dot file from within the
user program”.
Regards,
— Ken
> On Feb 24, 2017, at 7:51am, Greg Hogan wrote:
>
> Ken and Fabian,
>
> Is the use case to generate and act on the do
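For anyone who wants the topology programmatically rather than as a dot
file: Flink can emit its dataflow plan as JSON without executing the job,
and external tools can render that. A minimal sketch (the map function is
just filler to give the plan some shape):

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.io.DiscardingOutputFormat;

public class PlanDump {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        env.fromElements(1, 2, 3)
           .map(new MapFunction<Integer, Integer>() {
               @Override
               public Integer map(Integer i) {
                   return i * 2;
               }
           })
           .output(new DiscardingOutputFormat<Integer>());

        // Prints the dataflow as JSON; the job itself is not executed.
        System.out.println(env.getExecutionPlan());
    }
}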
Geoffrey Mon created FLINK-5948:
---
Summary: Error in Python zip_with_index documentation
Key: FLINK-5948
URL: https://issues.apache.org/jira/browse/FLINK-5948
Project: Flink
Issue Type: Bug
Hi:
I want to use the ACL feature of ZooKeeper, so I set the following
configuration options:
1. high-availability.zookeeper.path.root: flink234
2. high-availability.zookeeper.client.acl: creator
3. zookeeper.sasl.disable: false
But when I use a ZK client to get the ACL, the result is: [inline image]
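For reference, one way to read the ACL back and check what was actually set
is the plain ZooKeeper Java client. This is only a sketch; the connection
string is an assumption, and "/flink234" matches the
high-availability.zookeeper.path.root value above.

import java.util.List;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.ACL;
import org.apache.zookeeper.data.Stat;

public class CheckFlinkAcl {
    public static void main(String[] args) throws Exception {
        // Connection string is an assumption; point it at your quorum.
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> { });
        try {
            Stat stat = new Stat();
            // Read back the ACL of the configured Flink root path.
            List<ACL> acls = zk.getACL("/flink234", stat);
            for (ACL acl : acls) {
                System.out.println(acl);
            }
        } finally {
            zk.close();
        }
    }
}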
Xiaojun Jin created FLINK-5947:
--
Summary: NullPointerException in
ContinuousProcessingTimeTrigger.clear()
Key: FLINK-5947
URL: https://issues.apache.org/jira/browse/FLINK-5947
Project: Flink
Is
Scott Kidder created FLINK-5946:
---
Summary: Kinesis Producer uses KPL that orphans threads that
consume 100% CPU
Key: FLINK-5946
URL: https://issues.apache.org/jira/browse/FLINK-5946
Project: Flink
Greg Hogan created FLINK-5945:
-
Summary: Close function in
OuterJoinOperatorBase#executeOnCollections
Key: FLINK-5945
URL: https://issues.apache.org/jira/browse/FLINK-5945
Project: Flink
Issue T
Ilya Ganelin created FLINK-5944:
---
Summary: Flink should support reading Snappy Files
Key: FLINK-5944
URL: https://issues.apache.org/jira/browse/FLINK-5944
Project: Flink
Issue Type: New Feature
Ted Yu created FLINK-5943:
-
Summary: Unprotected access to haServices in
YarnFlinkApplicationMasterRunner#shutdown()
Key: FLINK-5943
URL: https://issues.apache.org/jira/browse/FLINK-5943
Project: Flink
Flink’s stable API provides the frameworks (DataStream and DataSet). On top of
these frameworks Gelly provides additional models for iterative algorithms, but
there are algorithms such as Minimum Spanning Tree which do not easily map to
these models (in this instance requiring nested iterations;
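As a point of reference, this is what a single-level DataSet bulk iteration
looks like; nesting another iterate(...) inside the step function is not
supported, which is exactly where algorithms like Minimum Spanning Tree get
stuck. A minimal sketch with a trivial step function:

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.operators.IterativeDataSet;

public class BulkIterationSketch {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Start a bulk iteration with at most 10 supersteps.
        IterativeDataSet<Long> loop = env.fromElements(0L).iterate(10);

        // The step function; a nested .iterate(...) in here is not allowed.
        DataSet<Long> step = loop.map(new MapFunction<Long, Long>() {
            @Override
            public Long map(Long v) {
                return v + 1;
            }
        });

        loop.closeWith(step).print();
    }
}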
Till Rohrmann created FLINK-5942:
Summary: Harden ZooKeeperStateHandleStore to deal with corrupted
data
Key: FLINK-5942
URL: https://issues.apache.org/jira/browse/FLINK-5942
Project: Flink
I
On Fri, Feb 24, 2017 at 1:43 PM, Vasiliki Kalavri <vasilikikala...@gmail.com> wrote:
Hi Greg,
On 24 February 2017 at 18:09, Greg Hogan <c...@greghogan.com> wrote:
> Thanks, Vasia, for starting the discussion.
>
> I was expecting more changes from the recent discussion on restructuri
It is actually the Databricks Scala guide, meant to help contributions to
Apache Spark, so it is not really an official Spark Scala guide.
The style guide feels a bit more like asking people to write Scala in Java
mode, so I am -1 on following that style for Apache Flink Scala, if that is
what you are recommending.
If the "unifi
Chesnay Schepler created FLINK-5941:
---
Summary: Let handlers take part in job archiving
Key: FLINK-5941
URL: https://issues.apache.org/jira/browse/FLINK-5941
Project: Flink
Issue Type: New F
Till Rohrmann created FLINK-5940:
Summary: ZooKeeperCompletedCheckpointStore cannot handle broken
state handles
Key: FLINK-5940
URL: https://issues.apache.org/jira/browse/FLINK-5940
Project: Flink
Hi Pawan,
@Fabian was right; I thought it was a stream environment. Sorry for that.
What do you mean by `read the available records of my datasource`? How do
you implement the nextRecord() method in DASInputFormat?
Best,
Xingcan
On Wed, Mar 1, 2017 at 4:45 PM, Fabian Hueske wrote:
> Hi Pawa
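For context on the question: the contract an InputFormat has to satisfy is
roughly the loop below, where reachedEnd() is consulted before every
nextRecord() call. This is a simplified sketch, not the actual runtime code
(configure() and split creation are omitted).

import org.apache.flink.api.common.io.InputFormat;
import org.apache.flink.core.io.InputSplit;

// Simplified sketch of how the runtime drives an InputFormat per split.
public final class InputFormatDriver {
    public static <OT, S extends InputSplit> void drive(
            InputFormat<OT, S> format, S split) throws Exception {
        format.open(split);
        try {
            OT reuse = null;
            while (!format.reachedEnd()) {
                OT record = format.nextRecord(reuse);
                if (record != null) {      // null means "skip this record"
                    System.out.println(record);
                }
            }
        } finally {
            format.close();
        }
    }
}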
Hi,
@Xingcan
Yes, that is right. It is not (easily) possible to change the watermarks of
a stream. All attributes which are used as event-time timestamps must be
aligned with these watermarks; the only such attributes are those derived
from the original rowtime attribute, i.e., the one that was spe
Hi SunJincheng,
Our basic idea was to let the underlying API extract and handle time
correctly. Extracting timestamps and assigning watermarks is serious
business. More advanced users can create TableSources and define time
there (using the DataStream API), and less advanced users can simply use
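As a rough illustration of "defining time using the DataStream API", here is
a sketch of assigning timestamps and watermarks from a rowtime field; the
field layout and the 5-second out-of-orderness bound are assumptions.

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
import org.apache.flink.streaming.api.windowing.time.Time;

public class EventTimeSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

        DataStream<Tuple2<Long, String>> events =
                env.fromElements(Tuple2.of(1000L, "a"), Tuple2.of(2000L, "b"));

        // Derive timestamps from the rowtime field (f0) and tolerate
        // 5 seconds of out-of-orderness.
        DataStream<Tuple2<Long, String>> withTime = events.assignTimestampsAndWatermarks(
                new BoundedOutOfOrdernessTimestampExtractor<Tuple2<Long, String>>(Time.seconds(5)) {
                    @Override
                    public long extractTimestamp(Tuple2<Long, String> e) {
                        return e.f0;  // the original rowtime attribute
                    }
                });

        withTime.print();
        env.execute("event-time sketch");
    }
}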
Hi Pawan,
in the DataSet API, DataSet.print() will trigger the execution (you do not
need to call ExecutionEnvironment.execute()).
The DataSet will be printed to the standard output of the process that
submits the program. This only works for small DataSets.
In general, print() should only be used
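A minimal sketch of that behavior:

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

public class PrintSketch {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        DataSet<String> data = env.fromElements("a", "b", "c");

        // print() collects the DataSet to the client and triggers execution
        // itself, so no env.execute() call is needed; use it only for small
        // results, since everything is shipped to the submitting process.
        data.print();
    }
}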