[jira] [Created] (FLINK-27210) StateFun example link on datastream integration page is broken

2022-04-12 Thread Mans Singh (Jira)
Mans Singh created FLINK-27210:
--

 Summary: StateFun example link on datastream integration page is 
broken
 Key: FLINK-27210
 URL: https://issues.apache.org/jira/browse/FLINK-27210
 Project: Flink
  Issue Type: Improvement
  Components: Documentation, Stateful Functions
Affects Versions: statefun-3.2.0
Reporter: Mans Singh


The [example 
link|https://github.com/apache/flink-statefun/blob/master/statefun-examples/statefun-flink-datastream-example/src/main/java/org/apache/flink/statefun/examples/datastream/Example.java]
 on the page 
[statefun-datastream|https://nightlies.apache.org/flink/flink-statefun-docs-release-3.2/docs/sdk/flink-datastream/#sdk-overview]
 is broken.

Also, please let me know if the example code is available anywhere.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26099) Table connector proctime attributes has syntax error

2022-02-13 Thread Mans Singh (Jira)
Mans Singh created FLINK-26099:
--

 Summary: Table connector proctime attributes has syntax error
 Key: FLINK-26099
 URL: https://issues.apache.org/jira/browse/FLINK-26099
 Project: Flink
  Issue Type: Improvement
  Components: Documentation, Table SQL / API
Affects Versions: 1.14.3
 Environment: All
Reporter: Mans Singh
 Fix For: 1.15.0


The example for proctime attributes has a syntax error (a missing comma after the 3rd column) on the [table proctime|https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/overview/#proctime-attributes] page:

 
{noformat}
CREATE TABLE MyTable (
  MyField1 INT,
  MyField2 STRING,
  MyField3 BOOLEAN
  MyField4 AS PROCTIME() -- declares a proctime attribute
) WITH (...){noformat}
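Presumably the intended statement just adds the missing comma:
{noformat}
CREATE TABLE MyTable (
  MyField1 INT,
  MyField2 STRING,
  MyField3 BOOLEAN,
  MyField4 AS PROCTIME() -- declares a proctime attribute
) WITH (...){noformat}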
 

 

 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25900) Create view example does not assign alias to functions resulting in generated names like EXPR$5

2022-01-31 Thread Mans Singh (Jira)
Mans Singh created FLINK-25900:
--

 Summary: Create view example does not assign alias to functions 
resulting in generated names like EXPR$5
 Key: FLINK-25900
 URL: https://issues.apache.org/jira/browse/FLINK-25900
 Project: Flink
  Issue Type: Improvement
  Components: Documentation, Table SQL / API
Affects Versions: 1.14.3
Reporter: Mans Singh
 Fix For: 1.15.0


The create view example query:
{noformat}
Flink SQL> CREATE VIEW MyView1 AS SELECT LOCALTIME, LOCALTIMESTAMP, 
CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, CURRENT_ROW_TIMESTAMP(), NOW(), 
PROCTIME();
{noformat}
produces generated column names for CURRENT_ROW_TIMESTAMP() (EXPR$5), NOW() 
(EXPR$6), and PROCTIME() (EXPR$7) since it does not assign aliases, as shown 
below:

 

 
{code:java}
Flink SQL> CREATE VIEW MyView1 AS SELECT LOCALTIME, LOCALTIMESTAMP, CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, CURRENT_ROW_TIMESTAMP(), NOW(), PROCTIME();

Flink SQL> describe MyView1;
+-------------------+-----------------------------+-------+-----+--------+-----------+
|              name |                        type |  null | key | extras | watermark |
+-------------------+-----------------------------+-------+-----+--------+-----------+
|         LOCALTIME |                     TIME(0) | FALSE |     |        |           |
|    LOCALTIMESTAMP |                TIMESTAMP(3) | FALSE |     |        |           |
|      CURRENT_DATE |                        DATE | FALSE |     |        |           |
|      CURRENT_TIME |                     TIME(0) | FALSE |     |        |           |
| CURRENT_TIMESTAMP |            TIMESTAMP_LTZ(3) | FALSE |     |        |           |
|            EXPR$5 |            TIMESTAMP_LTZ(3) | FALSE |     |        |           |
|            EXPR$6 |            TIMESTAMP_LTZ(3) | FALSE |     |        |           |
|            EXPR$7 | TIMESTAMP_LTZ(3) *PROCTIME* | FALSE |     |        |           |
+-------------------+-----------------------------+-------+-----+--------+-----------+
8 rows in set
{code}
 

The documentation shows aliased names on the [Timezone|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/timezone/#decide-time-functions-return-value] page:

 

 
{code:java}
+-------------------------+-----------------------------+-------+-----+--------+-----------+
|                    name |                        type |  null | key | extras | watermark |
+-------------------------+-----------------------------+-------+-----+--------+-----------+
|               LOCALTIME |                     TIME(0) | false |     |        |           |
|          LOCALTIMESTAMP |                TIMESTAMP(3) | false |     |        |           |
|            CURRENT_DATE |                        DATE | false |     |        |           |
|            CURRENT_TIME |                     TIME(0) | false |     |        |           |
|       CURRENT_TIMESTAMP |            TIMESTAMP_LTZ(3) | false |     |        |           |
| CURRENT_ROW_TIMESTAMP() |            TIMESTAMP_LTZ(3) | false |     |        |           |
|                   NOW() |            TIMESTAMP_LTZ(3) | false |     |        |           |
|              PROCTIME() | TIMESTAMP_LTZ(3) *PROCTIME* | false |     |        |           |
+-------------------------+-----------------------------+-------+-----+--------+-----------+
{code}
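One way to avoid the generated EXPR$ names would be to alias the function calls in the example query; the aliases below are only illustrative, not necessarily how the docs example will be fixed:
{noformat}
CREATE VIEW MyView1 AS
SELECT
  LOCALTIME,
  LOCALTIMESTAMP,
  CURRENT_DATE,
  CURRENT_TIME,
  CURRENT_TIMESTAMP,
  CURRENT_ROW_TIMESTAMP() AS `CURRENT_ROW_TIMESTAMP`,
  NOW() AS `NOW`,
  PROCTIME() AS `PROCTIME`;{noformat}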
 

 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25763) Match Recognize Logical Offsets function table shows backticks

2022-01-22 Thread Mans Singh (Jira)
Mans Singh created FLINK-25763:
--

 Summary: Match Recognize Logical Offsets function table shows 
backticks
 Key: FLINK-25763
 URL: https://issues.apache.org/jira/browse/FLINK-25763
 Project: Flink
  Issue Type: Improvement
  Components: Documentation, Table SQL / API
Affects Versions: 1.14.3
 Environment: All
Reporter: Mans Singh
 Fix For: 1.15.0
 Attachments: MatchRecognizeLogicalOffsets.png

The match recognize logical offsets functions in the table are formatted with backticks, as shown below:

 

!MatchRecognizeLogicalOffsets.png|width=736,height=230!



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-24442) Flink Queries Docs markup does not show escape ticks

2021-10-03 Thread Mans Singh (Jira)
Mans Singh created FLINK-24442:
--

 Summary: Flink Queries Docs markup does not show escape ticks
 Key: FLINK-24442
 URL: https://issues.apache.org/jira/browse/FLINK-24442
 Project: Flink
  Issue Type: Improvement
  Components: Documentation, Table SQL / API
Affects Versions: 1.14.0
Reporter: Mans Singh
 Fix For: 1.14.0
 Attachments: Screen Shot 2021-10-03 at 7.01.41 PM.png

The [table query overview|https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/sql/queries/overview/#syntax] mentions:
{quote}Unlike Java, back-ticks allow identifiers to contain non-alphanumeric characters (e.g. {{"SELECT a AS my field FROM t"}}).{quote}
The "my field" identifier appears without escape back ticks as shown in 
screenshot below:

 

!Screen Shot 2021-10-03 at 7.01.41 PM.png!

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-24259) Exception thrown when Kafka topic is null for FlinkKafkaConsumer

2021-09-12 Thread Mans Singh (Jira)
Mans Singh created FLINK-24259:
--

 Summary: Exception thrown when Kafka topic is null for 
FlinkKafkaConsumer
 Key: FLINK-24259
 URL: https://issues.apache.org/jira/browse/FLINK-24259
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Kafka, kafka
Affects Versions: 1.14.0
 Environment: All
Reporter: Mans Singh
 Fix For: 1.15.0


If the topic name is null for the FlinkKafkaConsumer, we get a Kafka exception with the message:
{quote}{{Error computing size for field 'topics': Error computing size for field 'name': Missing value for field 'name' which has no default value.}}
{quote}
The KafkaTopicDescriptor could check whether the topic name is empty, blank, or null and provide a more informative message.
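A minimal sketch of the kind of validation that could produce a clearer message (the class, method, and message below are hypothetical, not the actual connector code):
{code:java}
import java.util.List;

// Hypothetical validation sketch, not the actual KafkaTopicDescriptor code:
// fail fast with a descriptive message instead of a generic Kafka serialization error.
final class TopicValidation {
    static List<String> checkTopics(List<String> topics) {
        if (topics == null || topics.isEmpty()) {
            throw new IllegalArgumentException("The list of Kafka topics must not be null or empty.");
        }
        for (String topic : topics) {
            if (topic == null || topic.trim().isEmpty()) {
                throw new IllegalArgumentException(
                        "Kafka topic names must not be null or blank, but got: '" + topic + "'.");
            }
        }
        return topics;
    }
}
{code}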
  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-24172) Table API join documentation for Java has missing end quote after table name

2021-09-06 Thread Mans Singh (Jira)
Mans Singh created FLINK-24172:
--

 Summary: Table API join documentation for Java has missing end 
quote after table name
 Key: FLINK-24172
 URL: https://issues.apache.org/jira/browse/FLINK-24172
 Project: Flink
  Issue Type: Improvement
  Components: Documentation, Table SQL / API
Affects Versions: 1.14.0
Reporter: Mans Singh
 Fix For: 1.15.0


The Table API join documentation is missing the closing quote after the table name:

 
{quote}{{Table left = tableEnv.from("MyTable).select($("a"), $("b"), $("c"));}}
{quote}
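Presumably the intended line is:
{code:java}
Table left = tableEnv.from("MyTable").select($("a"), $("b"), $("c"));
{code}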
 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-24042) DataStream printToErr doc indicates that it writes to standard output

2021-08-29 Thread Mans Singh (Jira)
Mans Singh created FLINK-24042:
--

 Summary: DataStream printToErr doc indicates that it writes to 
standard output
 Key: FLINK-24042
 URL: https://issues.apache.org/jira/browse/FLINK-24042
 Project: Flink
  Issue Type: Improvement
  Components: API / DataStream, Documentation
Affects Versions: 1.13.2
Reporter: Mans Singh
 Fix For: 1.14.0


The DataStream printToErr method's javadoc indicates that it writes to the standard output stream:

 
{quote}Writes a DataStream to the standard output stream (stderr).
{quote}
 

It should say the standard error stream instead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23910) RichAsyncFunction counter message inconsistent with counter type

2021-08-22 Thread Mans Singh (Jira)
Mans Singh created FLINK-23910:
--

 Summary: RichAsyncFunction counter message inconsistent with 
counter type 
 Key: FLINK-23910
 URL: https://issues.apache.org/jira/browse/FLINK-23910
 Project: Flink
  Issue Type: Improvement
  Components: API / DataStream
Affects Versions: 1.13.2
 Environment: All
Reporter: Mans Singh
 Fix For: 1.14.0


RichAsyncFunction does not support double counters, but the exception message says "Long counters...":
{code:java}
@Override
public DoubleCounter getDoubleCounter(String name) {
    throw new UnsupportedOperationException(
            "Long counters are not supported in rich async functions.");
}
{code}
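Presumably the message should refer to double counters, e.g.:
{code:java}
@Override
public DoubleCounter getDoubleCounter(String name) {
    throw new UnsupportedOperationException(
            "Double counters are not supported in rich async functions.");
}
{code}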
 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23549) Kafka table connector create table example in docs has syntax error

2021-07-29 Thread Mans Singh (Jira)
Mans Singh created FLINK-23549:
--

 Summary: Kafka table connector create table example in docs has 
syntax error
 Key: FLINK-23549
 URL: https://issues.apache.org/jira/browse/FLINK-23549
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Kafka, Documentation
Affects Versions: 1.13.1
Reporter: Mans Singh
 Fix For: 1.14.0


The create table example in the docs has a syntax error (an extra comma after the opening parenthesis):
{quote}CREATE TABLE KafkaTable (,
 `ts` TIMESTAMP(3) METADATA FROM 'timestamp',
 `user_id` BIGINT,
 `item_id` BIGINT,
 `behavior` STRING
 ) WITH (
 'connector' = 'kafka',
 ...
 )
{quote}
Executing it in the Flink SQL client produces an error:
{quote}[ERROR] Could not execute SQL statement. Reason:
 org.apache.flink.sql.parser.impl.ParseException: Encountered "," at line 1, 
column 26.
{quote}
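Removing the stray comma gives what is presumably the intended statement:
{noformat}
CREATE TABLE KafkaTable (
  `ts` TIMESTAMP(3) METADATA FROM 'timestamp',
  `user_id` BIGINT,
  `item_id` BIGINT,
  `behavior` STRING
) WITH (
  'connector' = 'kafka',
  ...
){noformat}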



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23490) Flink Table Example - StreamWindowSQLExample shows output in older format

2021-07-24 Thread Mans Singh (Jira)
Mans Singh created FLINK-23490:
--

 Summary: Flink Table Example - StreamWindowSQLExample shows output 
in older format
 Key: FLINK-23490
 URL: https://issues.apache.org/jira/browse/FLINK-23490
 Project: Flink
  Issue Type: Improvement
  Components: Examples, Table SQL / API
Affects Versions: 1.13.1
Reporter: Mans Singh
 Fix For: 1.14.0


The example's expected print output (shown in comments) uses the older format:
{quote}{{// 2019-12-12 00:00:00.000,3,10,3}}
 {{// 2019-12-12 00:00:05.000,3,6,2}}
{quote}
 
Executing the application prints the following:
{quote}+I[2019-12-12 00:00:00.000, 3, 10, 3]
 +I[2019-12-12 00:00:05.000, 3, 6, 2]
{quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23412) Improve sourceSink description

2021-07-16 Thread Mans Singh (Jira)
Mans Singh created FLINK-23412:
--

 Summary: Improve sourceSink description
 Key: FLINK-23412
 URL: https://issues.apache.org/jira/browse/FLINK-23412
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 1.13.1
Reporter: Mans Singh
 Fix For: 1.14.0


The table/sourcesink documentation indicates:

{quote} the sink can solely accept insert-only rows and write out bounded 
streams.{quote}

Perhaps it could be:

{quote} the sink can only accept insert-only rows and write out bounded 
streams.{quote}

Also, the full stack example bullet points could be improved.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23347) Datastream operators overview refers to DataStreamStream

2021-07-11 Thread Mans Singh (Jira)
Mans Singh created FLINK-23347:
--

 Summary: Datastream operators overview refers to DataStreamStream
 Key: FLINK-23347
 URL: https://issues.apache.org/jira/browse/FLINK-23347
 Project: Flink
  Issue Type: Improvement
  Components: API / DataStream, Documentation
Affects Versions: 1.13.1
Reporter: Mans Singh


The [operators 
overview|https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/datastream/operators/overview/#windowall]
 document refers to:

{quote}
h4. DataStreamStream → AllWindowedStream 
{quote}

Perhaps it should be:
{quote}
h4. DataStream → AllWindowedStream 
{quote}




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23280) Python ExplainDetails does not have JSON_EXECUTION_PLAN option

2021-07-06 Thread Mans Singh (Jira)
Mans Singh created FLINK-23280:
--

 Summary: Python ExplainDetails does not have JSON_EXECUTION_PLAN 
option
 Key: FLINK-23280
 URL: https://issues.apache.org/jira/browse/FLINK-23280
 Project: Flink
  Issue Type: Bug
  Components: API / Python, Table SQL / API
Affects Versions: 1.13.0
Reporter: Mans Singh
 Fix For: 1.14.0


Add the missing JSON_EXECUTION_PLAN option to the Python ExplainDetails class (https://github.com/apache/flink/blob/master/flink-python/pyflink/table/explain_detail.py).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23256) Explain string shows output of legacy planner

2021-07-05 Thread Mans Singh (Jira)
Mans Singh created FLINK-23256:
--

 Summary: Explain string shows output of legacy planner
 Key: FLINK-23256
 URL: https://issues.apache.org/jira/browse/FLINK-23256
 Project: Flink
  Issue Type: Improvement
  Components: Documentation, Table SQL / API
Affects Versions: 1.13.1
Reporter: Mans Singh
 Fix For: 1.14.0


On the [Concepts & Common API page|https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/common/#explaining-a-table/] documentation page, the output of:
{quote}table.explain()
{quote}
shows the result of the legacy planner.

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23162) Create table uses time_ltz in the column name and it's expression which results in exception

2021-06-26 Thread Mans Singh (Jira)
Mans Singh created FLINK-23162:
--

 Summary: Create table uses time_ltz in the column name and it's 
expression which results in exception 
 Key: FLINK-23162
 URL: https://issues.apache.org/jira/browse/FLINK-23162
 Project: Flink
  Issue Type: Improvement
  Components: Documentation, Examples, Table SQL / Client
Affects Versions: 1.13.1
Reporter: Mans Singh
 Fix For: 1.14.0


The create table example in [https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/concepts/time_attributes/] uses `time_ltz` in its own declaration:
{quote}CREATE TABLE user_actions (
 user_name STRING,
 data STRING,
 ts BIGINT,
 time_ltz AS TO_TIMESTAMP_LTZ(time_ltz, 3),
 -- declare time_ltz as event time attribute and use 5 seconds delayed watermark strategy
 WATERMARK FOR time_ltz AS time_ltz - INTERVAL '5' SECOND
 ) WITH (
 ...
 );
{quote}
When it is executed in the Flink SQL client, it throws an exception:
{quote}[ERROR] Could not execute SQL statement. Reason:
 org.apache.calcite.sql.validate.SqlValidatorException: Unknown identifier 
'time_ltz'
{quote}
The CREATE TABLE works if the expression uses {{ts}} as the argument when declaring {{time_ltz}}.
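For example, the following variant, which uses {{ts}} in the computed-column expression, does not hit the "Unknown identifier" error (a sketch of the presumed fix, not necessarily the exact wording the docs will use):
{noformat}
CREATE TABLE user_actions (
  user_name STRING,
  data STRING,
  ts BIGINT,
  time_ltz AS TO_TIMESTAMP_LTZ(ts, 3),
  -- declare time_ltz as event time attribute and use 5 seconds delayed watermark strategy
  WATERMARK FOR time_ltz AS time_ltz - INTERVAL '5' SECOND
) WITH (
  ...
);{noformat}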



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18484) RowSerializer arity error does not provide specific information about the mismatch

2020-07-03 Thread Mans Singh (Jira)
Mans Singh created FLINK-18484:
--

 Summary: RowSerializer arity error does not provide specific 
information about the mismatch
 Key: FLINK-18484
 URL: https://issues.apache.org/jira/browse/FLINK-18484
 Project: Flink
  Issue Type: Improvement
  Components: API / Core
Affects Versions: 1.10.1
 Environment: All
Reporter: Mans Singh
 Fix For: 1.11.1


The RowSerializer throws a RuntimeException when there is a mismatch between the serializer's field length and the input row's arity, but the exception message does not contain information about the difference in the lengths. E.g.:

{{java.lang.RuntimeException: Row arity of from does not match serializers.}}

Adding information about the mismatched lengths would be more helpful.
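A minimal sketch of the kind of check that would include both lengths in the message (hypothetical code, not the actual RowSerializer implementation):
{code:java}
import org.apache.flink.types.Row;

// Hypothetical sketch: report both the row arity and the expected number of serializers.
final class ArityCheck {
    static void checkArity(Row record, int numFieldSerializers) {
        if (record.getArity() != numFieldSerializers) {
            throw new RuntimeException(
                    "Row arity (" + record.getArity() + ") does not match the number of serializers ("
                            + numFieldSerializers + ").");
        }
    }
}
{code}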

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-17602) Document broadcast state correction

2020-05-10 Thread Mans Singh (Jira)
Mans Singh created FLINK-17602:
--

 Summary: Document broadcast state correction
 Key: FLINK-17602
 URL: https://issues.apache.org/jira/browse/FLINK-17602
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 1.10.0
Reporter: Mans Singh
 Fix For: 1.11.0


Broadcast state documentation mentions `processBroadcast()` which should be 
`processBroadcastElement()`



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15181) Minor doc correction

2019-12-10 Thread Mans Singh (Jira)
Mans Singh created FLINK-15181:
--

 Summary: Minor doc correction
 Key: FLINK-15181
 URL: https://issues.apache.org/jira/browse/FLINK-15181
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 1.9.1
Reporter: Mans Singh
 Fix For: 1.10.0


Minor documentation corrections.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-13542) Flink Datadog metrics reporter sends empty series if there is no metrics

2019-08-01 Thread Mans Singh (JIRA)
Mans Singh created FLINK-13542:
--

 Summary: Flink Datadog metrics reporter sends empty series if 
there is no metrics
 Key: FLINK-13542
 URL: https://issues.apache.org/jira/browse/FLINK-13542
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Metrics
Affects Versions: 1.8.1
Reporter: Mans Singh
 Fix For: 1.9.0


If there are no metrics, the Datadog reporter still sends an empty series array to Datadog. The reporter could check the size of the series and send only if metrics have been collected.
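A minimal sketch of the proposed guard (the names below are illustrative, not the actual reporter fields or API):
{code:java}
// Hypothetical sketch with illustrative names, not the actual DatadogHttpReporter code:
// only post to Datadog when at least one metric was collected in this report cycle.
if (!collectedSeries.isEmpty()) {
    client.send(collectedSeries);
}
{code}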



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (FLINK-13104) Flink Datadog metrics client callback does not check for errors on posting and fails silently

2019-07-04 Thread Mans Singh (JIRA)
Mans Singh created FLINK-13104:
--

 Summary: Flink Datadog metrics client callback does not check for 
errors on posting and fails silently
 Key: FLINK-13104
 URL: https://issues.apache.org/jira/browse/FLINK-13104
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Metrics
Affects Versions: 1.8.1
Reporter: Mans Singh
Assignee: Mans Singh
 Fix For: 1.9.0


Flink's DatadogHttpClient callback does not check whether the request was successful. In case of an unsuccessful post request, it should log a warning so that the error can be resolved.
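A minimal sketch of a callback that surfaces failed posts, assuming the client keeps using OkHttp (illustrative code, not the actual DatadogHttpClient implementation):
{code:java}
import java.io.IOException;

import okhttp3.Call;
import okhttp3.Callback;
import okhttp3.Response;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical sketch: log a warning when the metrics post fails or returns a non-2xx status.
final class LoggingCallback implements Callback {
    private static final Logger LOG = LoggerFactory.getLogger(LoggingCallback.class);

    @Override
    public void onFailure(Call call, IOException e) {
        LOG.warn("Failed to send request to Datadog.", e);
    }

    @Override
    public void onResponse(Call call, Response response) {
        if (!response.isSuccessful()) {
            LOG.warn("Datadog returned a non-successful response: HTTP {}.", response.code());
        }
        response.close();
    }
}
{code}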



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-13065) Document example snippet correction using KeySelector

2019-07-02 Thread Mans Singh (JIRA)
Mans Singh created FLINK-13065:
--

 Summary: Document example snippet correction using KeySelector
 Key: FLINK-13065
 URL: https://issues.apache.org/jira/browse/FLINK-13065
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Reporter: Mans Singh
Assignee: Mans Singh


The broadcast state [example|https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/stream/state/broadcast_state.html#provided-apis] states:

 
{noformat}
Starting from the stream of Items, we just need to key it by Color, as we want 
pairs of the same color. This will make sure that elements of the same color 
end up on the same physical machine.

// key the shapes by color
KeyedStream colorPartitionedStream = shapeStream
.keyBy(new KeySelector(){...});{noformat}
 

However, it uses the shape stream with a {{KeySelector}} typed for {{Shape}}, but it should use a {{KeySelector}} typed for {{Item}} to create the {{KeyedStream}}.
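A sketch of what the corrected snippet might look like, assuming the stream elements are of type {{Item}} with a {{getColor()}} accessor and a stream variable named {{itemStream}} (all assumptions for illustration, since the full example code is not shown here):
{code:java}
// key the items by color
KeyedStream<Item, Color> colorPartitionedStream = itemStream
        .keyBy(new KeySelector<Item, Color>() {
            @Override
            public Color getKey(Item value) throws Exception {
                return value.getColor();
            }
        });
{code}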



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-12952) Minor doc correction regarding incremental window functions

2019-06-22 Thread Mans Singh (JIRA)
Mans Singh created FLINK-12952:
--

 Summary: Minor doc correction regarding incremental window 
functions
 Key: FLINK-12952
 URL: https://issues.apache.org/jira/browse/FLINK-12952
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 1.8.0
Reporter: Mans Singh
Assignee: Mans Singh


The Flink documentation [Window 
Function|https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/stream/operators/windows.html#window-functions],
 mentions that 

bq. The window function can be one of ReduceFunction, AggregateFunction, 
FoldFunction or ProcessWindowFunction. The first two can be executed more 
efficiently

It should perhaps state (since FoldFunction, though deprecated, is also 
incremental):
bq.  The first *three* can be executed more efficiently





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-12784) Support retention policy for InfluxDB metrics reporter

2019-06-09 Thread Mans Singh (JIRA)
Mans Singh created FLINK-12784:
--

 Summary: Support retention policy for InfluxDB metrics reporter
 Key: FLINK-12784
 URL: https://issues.apache.org/jira/browse/FLINK-12784
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Metrics
Affects Versions: 1.8.0
Reporter: Mans Singh
Assignee: Mans Singh


The InfluxDB metrics reporter uses the default retention policy for saving metrics to InfluxDB. This enhancement will allow the user to specify a retention policy.
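A possible configuration shape for {{flink-conf.yaml}}: the {{retentionPolicy}} key is the proposed addition and its name here is only an assumption; the other keys follow the existing InfluxDB reporter options:
{noformat}
metrics.reporter.influxdb.class: org.apache.flink.metrics.influxdb.InfluxdbReporter
metrics.reporter.influxdb.host: localhost
metrics.reporter.influxdb.port: 8086
metrics.reporter.influxdb.db: flink
metrics.reporter.influxdb.retentionPolicy: one_week
{noformat}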



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-3967) Provide RethinkDB Sink for Flink

2016-05-24 Thread Mans Singh (JIRA)
Mans Singh created FLINK-3967:
-

 Summary: Provide RethinkDB Sink for Flink
 Key: FLINK-3967
 URL: https://issues.apache.org/jira/browse/FLINK-3967
 Project: Flink
  Issue Type: New Feature
  Components: Streaming, Streaming Connectors
Affects Versions: 1.0.3
 Environment: All
Reporter: Mans Singh
Assignee: Mans Singh
Priority: Minor
 Fix For: 1.1.0


Provide a sink to stream data from Flink to RethinkDB.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)