[jira] [Updated] (FLINK-13133) Correct error in PubSub documentation

2019-07-10 Thread Neelesh Srinivas Salian (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neelesh Srinivas Salian updated FLINK-13133:

Labels: newbie starter  (was: )

> Correct error in PubSub documentation
> -
>
> Key: FLINK-13133
> URL: https://issues.apache.org/jira/browse/FLINK-13133
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Google Cloud PubSub
>Affects Versions: 1.9.0
>Reporter: Richard Deurwaarder
>Assignee: Richard Deurwaarder
>Priority: Minor
>  Labels: newbie, starter
> Fix For: 1.9.0
>
>
> The documentation for PubsubSink 
> ([https://ci.apache.org/projects/flink/flink-docs-master/dev/connectors/pubsub.html#pubsub-sink])
>  incorrectly uses a *De*serializationSchema; it should use a SerializationSchema.
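For reference, a minimal sketch of a sink built with a SerializationSchema, loosely following the Flink 1.9 connector docs; project and topic names are placeholders:

{code:java}
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.gcp.pubsub.PubSubSink;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStream<String> messages = env.fromElements("a", "b", "c");

// The sink turns records into bytes, so it takes a SerializationSchema;
// a DeserializationSchema belongs on the source side.
messages.addSink(PubSubSink.newBuilder()
    .withSerializationSchema(new SimpleStringSchema())
    .withProjectName("my-project")   // placeholder
    .withTopicName("my-topic")       // placeholder
    .build());
{code}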



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (FLINK-13171) Add withParameters(Configuration) method to DataStream operators

2019-07-09 Thread Neelesh Srinivas Salian (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16881638#comment-16881638
 ] 

Neelesh Srinivas Salian edited comment on FLINK-13171 at 7/10/19 1:14 AM:
--

Can I work on this if no one else is working on it?


was (Author: nssalian):
Can I work on this if no one else i?

> Add withParameters(Configuration) method to DataStream operators
> 
>
> Key: FLINK-13171
> URL: https://issues.apache.org/jira/browse/FLINK-13171
> Project: Flink
>  Issue Type: New Feature
>  Components: API / DataStream
>Reporter: Gyula Fora
>Priority: Minor
>
> The DataStream API doesn't have the appropriate methods to configure user 
> functions so Rich functions cannot leverage the parameters passed in the open 
> methods similar to the DataSet API.
>  
> I think this would be a simple but valuable addition.
>  
> [https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/batch/#via-withparametersconfiguration]
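For context, this is the DataSet-side mechanism the issue refers to, adapted from the linked documentation page:

{code:java}
import org.apache.flink.api.common.functions.RichFilterFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.configuration.Configuration;

ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
DataSet<Integer> toFilter = env.fromElements(1, 2, 3);

Configuration config = new Configuration();
config.setInteger("limit", 2);

toFilter.filter(new RichFilterFunction<Integer>() {
    private int limit;

    @Override
    public void open(Configuration parameters) {
        // The DataSet API passes the Configuration into open();
        // the DataStream API currently has no withParameters() equivalent.
        limit = parameters.getInteger("limit", 0);
    }

    @Override
    public boolean filter(Integer value) {
        return value > limit;
    }
}).withParameters(config);
{code}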



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-13171) Add withParameters(Configuration) method to DataStream operators

2019-07-09 Thread Neelesh Srinivas Salian (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16881638#comment-16881638
 ] 

Neelesh Srinivas Salian commented on FLINK-13171:
-

Can I work on this if no one else is working on it?

> Add withParameters(Configuration) method to DataStream operators
> 
>
> Key: FLINK-13171
> URL: https://issues.apache.org/jira/browse/FLINK-13171
> Project: Flink
>  Issue Type: New Feature
>  Components: API / DataStream
>Reporter: Gyula Fora
>Priority: Minor
>
> The DataStream API doesn't have the appropriate methods to configure user 
> functions so Rich functions cannot leverage the parameters passed in the open 
> methods similar to the DataSet API.
>  
> I think this would be a simple but valuable addition.
>  
> [https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/batch/#via-withparametersconfiguration]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10619) config MemoryStateBackend default value

2019-07-08 Thread Neelesh Srinivas Salian (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880830#comment-16880830
 ] 

Neelesh Srinivas Salian commented on FLINK-10619:
-

Having a look at MemoryStateBackend, the javadoc says the following, and it 
seems like this isn't needed.
{code:java}
* Configuration
*
* As for all state backends, this backend can either be configured within 
the application (by creating
* the backend with the respective constructor parameters and setting it on the 
execution environment)
* or by specifying it in the Flink configuration.
*
* If the state backend was specified in the application, it may pick up 
additional configuration
* parameters from the Flink configuration. For example, if the backend is 
configured in the application
* without a default savepoint directory, it will pick up a default savepoint 
directory specified in the
* Flink configuration of the running job/cluster. That behavior is implemented 
via the
* {@link #configure(Configuration)} method.
{code}
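For illustration, the two configuration paths the javadoc describes could look like this (values are examples only):

{code:java}
import org.apache.flink.runtime.state.memory.MemoryStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

// 1) Configure the backend in the application, with an explicit max state size:
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setStateBackend(new MemoryStateBackend(10 * 1024 * 1024)); // 10 MB, example value

// 2) Or select it in flink-conf.yaml of the running cluster:
//      state.backend: jobmanager
// Settings not given in the application (e.g. a default savepoint directory)
// are then picked up from the Flink configuration via configure(Configuration).
{code}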

> config MemoryStateBackend default value
> ---
>
> Key: FLINK-10619
> URL: https://issues.apache.org/jira/browse/FLINK-10619
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Configuration
>Reporter: yangxiaoshuo
>Assignee: yangxiaoshuo
>Priority: Minor
>  Labels: starter
>
> The default MAX_STATE_SIZE of MemoryStateBackend is 5 * 1024 * 1024.
> If we want to change it, the only way is 
> {code:java}
> // code placeholder
> env.setStateBackend(new MemoryStateBackend(1024 * 1024 * 1024));{code}
> Is that right?
> So shall we add it to the configuration file?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10619) config MemoryStateBackend default value

2019-07-08 Thread Neelesh Srinivas Salian (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880522#comment-16880522
 ] 

Neelesh Srinivas Salian commented on FLINK-10619:
-

Can I work on this if no one else is working on it?

> config MemoryStateBackend default value
> ---
>
> Key: FLINK-10619
> URL: https://issues.apache.org/jira/browse/FLINK-10619
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Configuration
>Reporter: yangxiaoshuo
>Assignee: yangxiaoshuo
>Priority: Minor
>  Labels: starter
>
> The default MAX_STATE_SIZE of MemoryStateBackend is 5 * 1024 * 1024.
> If we want to change it, the only way is 
> {code:java}
> // code placeholder
> env.setStateBackend(new MemoryStateBackend(1024 * 1024 * 1024));{code}
> Is that right?
> So shall we add it to the configuration file?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-992) Create CollectionDataSets by reading (client) local files.

2019-04-17 Thread Neelesh Srinivas Salian (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neelesh Srinivas Salian reassigned FLINK-992:
-

Assignee: (was: Neelesh Srinivas Salian)

> Create CollectionDataSets by reading (client) local files.
> --
>
> Key: FLINK-992
> URL: https://issues.apache.org/jira/browse/FLINK-992
> Project: Flink
>  Issue Type: New Feature
>  Components: API / DataSet, API / Python
>Reporter: Fabian Hueske
>Priority: Minor
>  Labels: starter
>
> {{CollectionDataSets}} are a nice way to feed data into programs.
> We could add support to read a client-local file at program construction time 
> using a FileInputFormat, put its data into a CollectionDataSet, and ship its 
> data together with the program.
> This would remove the need to upload small files into DFS which are used 
> together with some large input (stored in DFS).
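For comparison, the manual route available today reads the file on the client and ships the data with the program; a sketch (the file path is a placeholder):

{code:java}
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

// Read a client-local file at program construction time (throws IOException)...
List<String> lines = Files.readAllLines(Paths.get("/path/on/client/small-input.txt"));

// ...and turn it into a DataSet; the data is shipped together with the
// program, so small side inputs need no upload to DFS.
DataSet<String> smallInput = env.fromCollection(lines);
{code}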



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-1890) Add note to docs that ReadFields annotations are currently not evaluated

2019-04-17 Thread Neelesh Srinivas Salian (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-1890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neelesh Srinivas Salian reassigned FLINK-1890:
--

Assignee: (was: Neelesh Srinivas Salian)

> Add note to docs that ReadFields annotations are currently not evaluated
> 
>
> Key: FLINK-1890
> URL: https://issues.apache.org/jira/browse/FLINK-1890
> Project: Flink
>  Issue Type: Wish
>  Components: API / DataSet
>Reporter: Stefan Bunk
>Priority: Minor
>  Labels: starter
>
> In the Scala API, you have the option to declare forwarded fields via the
> {{withForwardedFields}} method.
> It would be nice to have something similar for read fields, as otherwise one needs 
> to create a class, which I personally try to avoid for readability.
> Maybe grouping all annotations in one function and have a first parameter 
> indicating the type of annotation is also an option, if you plan on adding 
> more annotations and want to keep the interface smaller.
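For context, a sketch of the current state: forwarded fields can be declared inline, while read-field hints require an annotation on a named class:

{code:java}
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.functions.FunctionAnnotation;
import org.apache.flink.api.java.tuple.Tuple2;

// Read-field hints need an annotation on a class, which is what this
// issue would like to avoid:
@FunctionAnnotation.ReadFields("f1")
public static class Doubler implements MapFunction<Tuple2<Long, Double>, Tuple2<Long, Double>> {
    @Override
    public Tuple2<Long, Double> map(Tuple2<Long, Double> value) {
        return new Tuple2<>(value.f0, value.f1 * 2);
    }
}

// Forwarded fields, by contrast, can be declared inline on the operator:
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
DataSet<Tuple2<Long, Double>> result = env.fromElements(Tuple2.of(1L, 1.0))
    .map(new Doubler())
    .withForwardedFields("f0"); // field 0 passes through unchanged
{code}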



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-5595) Add links to sub-sections in the left-hand navigation bar

2019-04-17 Thread Neelesh Srinivas Salian (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-5595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neelesh Srinivas Salian reassigned FLINK-5595:
--

Assignee: (was: Neelesh Srinivas Salian)

> Add links to sub-sections in the left-hand navigation bar
> -
>
> Key: FLINK-5595
> URL: https://issues.apache.org/jira/browse/FLINK-5595
> Project: Flink
>  Issue Type: Improvement
>  Components: Project Website
>Reporter: Mike Winters
>Priority: Minor
>  Labels: newbie, website
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> Some pages on the Flink project site (such as 
> http://flink.apache.org/introduction.html) include a table of contents at the 
> top. The sections from the ToC are not exposed in the left-hand nav when the 
> page is active, but this could be a useful addition, especially for longer, 
> content-heavy pages. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-7391) Normalize release entries

2018-07-16 Thread Neelesh Srinivas Salian (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546060#comment-16546060
 ] 

Neelesh Srinivas Salian commented on FLINK-7391:


[~Zentol], is anyone working on this? I could take it up.

> Normalize release entries
> -
>
> Key: FLINK-7391
> URL: https://issues.apache.org/jira/browse/FLINK-7391
> Project: Flink
>  Issue Type: Improvement
>  Components: Project Website
>Reporter: Chesnay Schepler
>Priority: Major
>  Labels: starter
>
> The release list at http://flink.apache.org/downloads.html is inconsistent in 
> regards to the java/scala docs links. For 1.1.3 and below we only include a 
> docs link for the latest version (i.e 1.1.3, but not for 1.1.2), for higher 
> versions we have a docs link for every release.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (FLINK-6895) Add STR_TO_DATE supported in SQL

2018-07-16 Thread Neelesh Srinivas Salian (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-6895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546057#comment-16546057
 ] 

Neelesh Srinivas Salian edited comment on FLINK-6895 at 7/17/18 5:50 AM:
-

[~fhueske] [~twalthr] is this Jira being worked on?

I could assign it to myself and start working on it, along with what is needed for FLINK-6976.


was (Author: nssalian):
[~fhueske] [~twalthr] is this Jira being worked on?

I could assign it and start working on it. 

> Add STR_TO_DATE supported in SQL
> 
>
> Key: FLINK-6895
> URL: https://issues.apache.org/jira/browse/FLINK-6895
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: buptljy
>Priority: Major
>
> STR_TO_DATE(str,format) This is the inverse of the DATE_FORMAT() function. It 
> takes a string str and a format string format. STR_TO_DATE() returns a 
> DATETIME value if the format string contains both date and time parts, or a 
> DATE or TIME value if the string contains only date or time parts. If the 
> date, time, or datetime value extracted from str is illegal, STR_TO_DATE() 
> returns NULL and produces a warning.
> * Syntax:
> STR_TO_DATE(str,format) 
> * Arguments
> **str: -
> **format: -
> * Return Types
>   DATETIME/DATE/TIME
> * Example:
>   STR_TO_DATE('01,5,2013','%d,%m,%Y') -> '2013-05-01'
>   SELECT STR_TO_DATE('a09:30:17','a%h:%i:%s') -> '09:30:17'
> * See more:
> ** [MySQL| 
> https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_str-to-date]
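Until a built-in exists, the semantics could be prototyped as a user-defined scalar function. A rough sketch only: it accepts a SimpleDateFormat pattern, so MySQL's %-style patterns would still need translating, which is omitted here:

{code:java}
import java.sql.Date;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import org.apache.flink.table.functions.ScalarFunction;

// Hypothetical stopgap UDF, not the proposed built-in.
public class StrToDate extends ScalarFunction {
    public Date eval(String str, String format) {
        try {
            return new Date(new SimpleDateFormat(format).parse(str).getTime());
        } catch (ParseException e) {
            return null; // mimic MySQL: illegal input yields NULL
        }
    }
}
{code}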



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-6895) Add STR_TO_DATE supported in SQL

2018-07-16 Thread Neelesh Srinivas Salian (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-6895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16546057#comment-16546057
 ] 

Neelesh Srinivas Salian commented on FLINK-6895:


[~fhueske] [~twalthr] is this Jira being worked on?

I could assign it and start working on it. 

> Add STR_TO_DATE supported in SQL
> 
>
> Key: FLINK-6895
> URL: https://issues.apache.org/jira/browse/FLINK-6895
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: buptljy
>Priority: Major
>
> STR_TO_DATE(str,format) This is the inverse of the DATE_FORMAT() function. It 
> takes a string str and a format string format. STR_TO_DATE() returns a 
> DATETIME value if the format string contains both date and time parts, or a 
> DATE or TIME value if the string contains only date or time parts. If the 
> date, time, or datetime value extracted from str is illegal, STR_TO_DATE() 
> returns NULL and produces a warning.
> * Syntax:
> STR_TO_DATE(str,format) 
> * Arguments
> **str: -
> **format: -
> * Return Types
>   DATETIME/DATE/TIME
> * Example:
>   STR_TO_DATE('01,5,2013','%d,%m,%Y') -> '2013-05-01'
>   SELECT STR_TO_DATE('a09:30:17','a%h:%i:%s') -> '09:30:17'
> * See more:
> ** [MySQL| 
> https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_str-to-date]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-7243) Add ParquetInputFormat

2018-07-09 Thread Neelesh Srinivas Salian (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16536971#comment-16536971
 ] 

Neelesh Srinivas Salian commented on FLINK-7243:


[~ZhenqiuHuang], not particularly. I wasn't sure if there was any work done on 
it and was asking to see if I could help.

 

> Add ParquetInputFormat
> --
>
> Key: FLINK-7243
> URL: https://issues.apache.org/jira/browse/FLINK-7243
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: godfrey he
>Assignee: Zhenqiu Huang
>Priority: Major
>
> Add a {{ParquetInputFormat}} to read data from an Apache Parquet file. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-7243) Add ParquetInputFormat

2018-07-08 Thread Neelesh Srinivas Salian (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16536479#comment-16536479
 ] 

Neelesh Srinivas Salian commented on FLINK-7243:


What is the status of this [~godfreyhe] and [~ZhenqiuHuang]?

Shall I assign and start working on this?

> Add ParquetInputFormat
> --
>
> Key: FLINK-7243
> URL: https://issues.apache.org/jira/browse/FLINK-7243
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: godfrey he
>Assignee: Zhenqiu Huang
>Priority: Major
>
> Add a {{ParquetInputFormat}} to read data from an Apache Parquet file. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (FLINK-6976) Add STR_TO_DATE supported in TableAPI

2018-07-08 Thread Neelesh Srinivas Salian (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-6976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16536478#comment-16536478
 ] 

Neelesh Srinivas Salian edited comment on FLINK-6976 at 7/9/18 1:40 AM:


[~sunjincheng121], could I work on this if no one else has taken it? I realize it needs 
https://issues.apache.org/jira/browse/FLINK-6895


I could work on both as well.


was (Author: nssalian):
[~sunjincheng121], could I work on this if no one else has? I realize it needs 
[https://issues.apache.org/jira/browse/FLINK-6895](FLINK-6895)

> Add STR_TO_DATE supported in TableAPI
> -
>
> Key: FLINK-6976
> URL: https://issues.apache.org/jira/browse/FLINK-6976
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Priority: Major
>  Labels: starter
>
> See FLINK-6895 for detail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (FLINK-6976) Add STR_TO_DATE supported in TableAPI

2018-07-08 Thread Neelesh Srinivas Salian (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-6976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16536478#comment-16536478
 ] 

Neelesh Srinivas Salian edited comment on FLINK-6976 at 7/9/18 1:40 AM:


[~sunjincheng121], could I work on this if no one else has taken it? I realize it needs 
[https://issues.apache.org/jira/browse/FLINK-6895](FLINK-6895)


was (Author: nssalian):
[~sunjincheng121], could I work on this if no one else has?

> Add STR_TO_DATE supported in TableAPI
> -
>
> Key: FLINK-6976
> URL: https://issues.apache.org/jira/browse/FLINK-6976
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Priority: Major
>  Labels: starter
>
> See FLINK-6895 for detail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-6976) Add STR_TO_DATE supported in TableAPI

2018-07-08 Thread Neelesh Srinivas Salian (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-6976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16536478#comment-16536478
 ] 

Neelesh Srinivas Salian commented on FLINK-6976:


[~sunjincheng121], could I work on this if no one else has taken it?

> Add STR_TO_DATE supported in TableAPI
> -
>
> Key: FLINK-6976
> URL: https://issues.apache.org/jira/browse/FLINK-6976
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Priority: Major
>  Labels: starter
>
> See FLINK-6895 for detail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-5595) Add links to sub-sections in the left-hand navigation bar

2017-06-26 Thread Neelesh Srinivas Salian (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neelesh Srinivas Salian reassigned FLINK-5595:
--

Assignee: Neelesh Srinivas Salian  (was: Mike Winters)

> Add links to sub-sections in the left-hand navigation bar
> -
>
> Key: FLINK-5595
> URL: https://issues.apache.org/jira/browse/FLINK-5595
> Project: Flink
>  Issue Type: Improvement
>  Components: Project Website
>Reporter: Mike Winters
>Assignee: Neelesh Srinivas Salian
>Priority: Minor
>  Labels: newbie, website
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> Some pages on the Flink project site (such as 
> http://flink.apache.org/introduction.html) include a table of contents at the 
> top. The sections from the ToC are not exposed in the left-hand nav when the 
> page is active, but this could be a useful addition, especially for longer, 
> content-heavy pages. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (FLINK-992) Create CollectionDataSets by reading (client) local files.

2017-06-21 Thread Neelesh Srinivas Salian (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neelesh Srinivas Salian reassigned FLINK-992:
-

Assignee: Neelesh Srinivas Salian  (was: niraj rai)

> Create CollectionDataSets by reading (client) local files.
> --
>
> Key: FLINK-992
> URL: https://issues.apache.org/jira/browse/FLINK-992
> Project: Flink
>  Issue Type: New Feature
>  Components: DataSet API, Python API
>Reporter: Fabian Hueske
>Assignee: Neelesh Srinivas Salian
>Priority: Minor
>  Labels: starter
>
> {{CollectionDataSets}} are a nice way to feed data into programs.
> We could add support to read a client-local file at program construction time 
> using a FileInputFormat, put its data into a CollectionDataSet, and ship its 
> data together with the program.
> This would remove the need to upload small files into DFS which are used 
> together with some large input (stored in DFS).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-5595) Add links to sub-sections in the left-hand navigation bar

2017-06-21 Thread Neelesh Srinivas Salian (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058223#comment-16058223
 ] 

Neelesh Srinivas Salian commented on FLINK-5595:


[~wints], if you are not working on this currently, can I assign it to myself?

> Add links to sub-sections in the left-hand navigation bar
> -
>
> Key: FLINK-5595
> URL: https://issues.apache.org/jira/browse/FLINK-5595
> Project: Flink
>  Issue Type: Improvement
>  Components: Project Website
>Reporter: Mike Winters
>Assignee: Mike Winters
>Priority: Minor
>  Labels: newbie, website
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> Some pages on the Flink project site (such as 
> http://flink.apache.org/introduction.html) include a table of contents at the 
> top. The sections from the ToC are not exposed in the left-hand nav when the 
> page is active, but this could be a useful addition, especially for longer, 
> content-heavy pages. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (FLINK-1890) Add note to docs that ReadFields annotations are currently not evaluated

2017-06-21 Thread Neelesh Srinivas Salian (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neelesh Srinivas Salian reassigned FLINK-1890:
--

Assignee: Neelesh Srinivas Salian

> Add note to docs that ReadFields annotations are currently not evaluated
> 
>
> Key: FLINK-1890
> URL: https://issues.apache.org/jira/browse/FLINK-1890
> Project: Flink
>  Issue Type: Wish
>  Components: DataSet API
>Reporter: Stefan Bunk
>Assignee: Neelesh Srinivas Salian
>Priority: Minor
>  Labels: starter
>
> In the Scala API, you have the option to declare forwarded fields via the
> {{withForwardedFields}} method.
> It would be nice to have something similar for read fields, as otherwise one needs 
> to create a class, which I personally try to avoid for readability.
> Maybe grouping all annotations in one function and have a first parameter 
> indicating the type of annotation is also an option, if you plan on adding 
> more annotations and want to keep the interface smaller.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-1890) Add note to docs that ReadFields annotations are currently not evaluated

2017-06-21 Thread Neelesh Srinivas Salian (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058219#comment-16058219
 ] 

Neelesh Srinivas Salian commented on FLINK-1890:


[~fhueske], I can pick it up.

> Add note to docs that ReadFields annotations are currently not evaluated
> 
>
> Key: FLINK-1890
> URL: https://issues.apache.org/jira/browse/FLINK-1890
> Project: Flink
>  Issue Type: Wish
>  Components: DataSet API
>Reporter: Stefan Bunk
>Priority: Minor
>  Labels: starter
>
> In the Scala API, you have the option to declare forwarded fields via the
> {{withForwardedFields}} method.
> It would be nice to have something similar for read fields, as otherwise one needs 
> to create a class, which I personally try to avoid for readability.
> Maybe grouping all annotations in one function and have a first parameter 
> indicating the type of annotation is also an option, if you plan on adding 
> more annotations and want to keep the interface smaller.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-992) Create CollectionDataSets by reading (client) local files.

2017-06-21 Thread Neelesh Srinivas Salian (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058218#comment-16058218
 ] 

Neelesh Srinivas Salian commented on FLINK-992:
---

[~nrai], if you are not working on this at the moment, can I assign it to 
myself?

> Create CollectionDataSets by reading (client) local files.
> --
>
> Key: FLINK-992
> URL: https://issues.apache.org/jira/browse/FLINK-992
> Project: Flink
>  Issue Type: New Feature
>  Components: DataSet API, Python API
>Reporter: Fabian Hueske
>Assignee: niraj rai
>Priority: Minor
>  Labels: starter
>
> {{CollectionDataSets}} are a nice way to feed data into programs.
> We could add support to read a client-local file at program construction time 
> using a FileInputFormat, put its data into a CollectionDataSet, and ship its 
> data together with the program.
> This would remove the need to upload small files into DFS which are used 
> together with some large input (stored in DFS).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-4831) Implement a log4j metric reporter

2017-04-17 Thread Neelesh Srinivas Salian (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15971400#comment-15971400
 ] 

Neelesh Srinivas Salian commented on FLINK-4831:


Shall I work on this if no one has already begun?

> Implement a log4j metric reporter
> -
>
> Key: FLINK-4831
> URL: https://issues.apache.org/jira/browse/FLINK-4831
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Affects Versions: 1.1.2
>Reporter: Chesnay Schepler
>Priority: Minor
>  Labels: easyfix, starter
>
> For debugging purpose it would be very useful to have a log4j metric 
> reporter. If you don't want to setup a metric backend you currently have to 
> rely on JMX, which a) works a bit differently than other reporters (for 
> example it doesn't extend AbstractReporter) and b) makes it a bit tricky to 
> analyze results as metrics are cleaned up once a job finishes.
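A rough sketch of what such a reporter could look like, built on {{AbstractReporter}} and the {{Scheduled}} interface (illustrative only; the class name is made up, and Flink later shipped its own Slf4jReporter):

{code:java}
import java.util.Map;
import org.apache.flink.metrics.Counter;
import org.apache.flink.metrics.Gauge;
import org.apache.flink.metrics.MetricConfig;
import org.apache.flink.metrics.reporter.AbstractReporter;
import org.apache.flink.metrics.reporter.Scheduled;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative sketch: periodically dump registered metrics to the log,
// so no external metric backend is needed for debugging.
public class LoggingReporter extends AbstractReporter implements Scheduled {
    private static final Logger LOG = LoggerFactory.getLogger(LoggingReporter.class);

    @Override
    public void open(MetricConfig config) {}

    @Override
    public void close() {}

    @Override
    public String filterCharacters(String input) {
        return input;
    }

    @Override
    public void report() {
        // AbstractReporter maintains these maps as metrics are added/removed.
        for (Map.Entry<Gauge<?>, String> e : gauges.entrySet()) {
            LOG.info("{}: {}", e.getValue(), e.getKey().getValue());
        }
        for (Map.Entry<Counter, String> e : counters.entrySet()) {
            LOG.info("{}: {}", e.getValue(), e.getKey().getCount());
        }
    }
}
{code}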



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-5506) Java 8 - CommunityDetection.java:158 - java.lang.NullPointerException

2017-04-17 Thread Neelesh Srinivas Salian (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15971390#comment-15971390
 ] 

Neelesh Srinivas Salian commented on FLINK-5506:


Hi [~mcoimbra], is this still an issue?

> Java 8 - CommunityDetection.java:158 - java.lang.NullPointerException
> -
>
> Key: FLINK-5506
> URL: https://issues.apache.org/jira/browse/FLINK-5506
> Project: Flink
>  Issue Type: Bug
>  Components: Gelly
>Affects Versions: 1.1.4
>Reporter: Miguel E. Coimbra
>  Labels: easyfix, newbie
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> Reporting this here as per Vasia's advice.
> I am having the following problem while trying out the 
> org.apache.flink.graph.library.CommunityDetection algorithm of the Gelly API 
> (Java).
> Specs: JDK 1.8.0_102 x64
> Apache Flink: 1.1.4
> Suppose I have a very small (I tried an example with 38 vertices as well) 
> dataset stored in a tab-separated file 3-vertex.tsv:
> #id1 id2 score
> 0 1 0
> 0 2 0
> 0 3 0
> This is just a central vertex with 3 neighbors (disconnected between 
> themselves).
> I am loading the dataset and executing the algorithm with the following code:
> ---
> // Load the data from the .tsv file.
> final DataSet<Tuple3<Long, Long, Double>> edgeTuples = 
> env.readCsvFile(inputPath)
> .fieldDelimiter("\t") // fields are separated by tabs
> .ignoreComments("#")  // comments start with "#"
> .types(Long.class, Long.class, Double.class);
> // Generate a graph and add reverse edges (undirected).
> final Graph<Long, Long, Double> graph = Graph.fromTupleDataSet(edgeTuples, 
> new MapFunction<Long, Long>() {
> private static final long serialVersionUID = 8713516577419451509L;
> public Long map(Long value) {
> return value;
> }
> },
> env).getUndirected();
> // CommunityDetection parameters.
> final double hopAttenuationDelta = 0.5d;
> final int iterationCount = 10;
> // Prepare and trigger the execution.
> DataSet<Vertex<Long, Long>> vs = graph.run(new 
> org.apache.flink.graph.library.CommunityDetection<Long>(iterationCount, 
> hopAttenuationDelta)).getVertices();
> vs.print();
> ---
> Running this code throws the following exception (note the 
> CommunityDetection line below):
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$8.apply$mcV$sp(JobManager.scala:805)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$8.apply(JobManager.scala:751)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$8.apply(JobManager.scala:751)
> at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
> at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
> at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
> at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
> at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.pollAndExecAll(ForkJoinPool.java:1253)
> at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1346)
> at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.flink.graph.library.CommunityDetection$VertexLabelUpdater.updateVertex(CommunityDetection.java:158)
> at 
> org.apache.flink.graph.spargel.ScatterGatherIteration$GatherUdfSimpleVV.coGroup(ScatterGatherIteration.java:389)
> at 
> org.apache.flink.runtime.operators.CoGroupWithSolutionSetSecondDriver.run(CoGroupWithSolutionSetSecondDriver.java:218)
> at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:486)
> at 
> org.apache.flink.runtime.iterative.task.AbstractIterativeTask.run(AbstractIterativeTask.java:146)
> at 
> org.apache.flink.runtime.iterative.task.IterationTailTask.run(IterationTailTask.java:107)
> at org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:351)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:642)
> at java.lang.Thread.run(Thread.java:745)
> After a further look, I set a breakpoint (Eclipse IDE debugging) at that 
> line:
> org.apache.flink.graph.library.CommunityDetection.java (source code 

[jira] [Commented] (FLINK-5550) NotFoundException: Could not find job with id

2017-04-17 Thread Neelesh Srinivas Salian (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15971389#comment-15971389
 ] 

Neelesh Srinivas Salian commented on FLINK-5550:


Hi [~jeffreyji666], is this still an issue?

> NotFoundException: Could not find job with id
> -
>
> Key: FLINK-5550
> URL: https://issues.apache.org/jira/browse/FLINK-5550
> Project: Flink
>  Issue Type: Bug
>  Components: Webfrontend
>Affects Versions: 1.0.3
> Environment: centos
>Reporter: jiwengang
>Priority: Minor
>  Labels: newbie
>
> The job is canceled, but it still reports the following exception:
> 2017-01-18 10:35:18,677 WARN  
> org.apache.flink.runtime.webmonitor.RuntimeMonitorHandler - Error while 
> handling request
> org.apache.flink.runtime.webmonitor.NotFoundException: Could not find job 
> with id 3b98e734c868cc2b992743cfe8911ad0
> at 
> org.apache.flink.runtime.webmonitor.handlers.AbstractExecutionGraphRequestHandler.handleRequest(AbstractExecutionGraphRequestHandler.java:58)
> at 
> org.apache.flink.runtime.webmonitor.RuntimeMonitorHandler.respondAsLeader(RuntimeMonitorHandler.java:135)
> at 
> org.apache.flink.runtime.webmonitor.RuntimeMonitorHandler.channelRead0(RuntimeMonitorHandler.java:112)
> at 
> org.apache.flink.runtime.webmonitor.RuntimeMonitorHandler.channelRead0(RuntimeMonitorHandler.java:60)
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
> at io.netty.handler.codec.http.router.Handler.routed(Handler.java:62)
> at 
> io.netty.handler.codec.http.router.DualAbstractHandler.channelRead0(DualAbstractHandler.java:57)
> at 
> io.netty.handler.codec.http.router.DualAbstractHandler.channelRead0(DualAbstractHandler.java:20)
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
> at 
> org.apache.flink.runtime.webmonitor.HttpRequestHandler.channelRead0(HttpRequestHandler.java:104)
> at 
> org.apache.flink.runtime.webmonitor.HttpRequestHandler.channelRead0(HttpRequestHandler.java:65)
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
> at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:242)
> at 
> io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:147)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
> at 
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:847)
> at 
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
> at 
> io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
> at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
> at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
> at 
> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
> at 
> io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
> at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (FLINK-4627) Use Flink's PropertiesUtil in Kinesis connector to extract typed values from config properties

2016-10-16 Thread Neelesh Srinivas Salian (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neelesh Srinivas Salian reassigned FLINK-4627:
--

Assignee: Neelesh Srinivas Salian

> Use Flink's PropertiesUtil in Kinesis connector to extract typed values from 
> config properties 
> ---
>
> Key: FLINK-4627
> URL: https://issues.apache.org/jira/browse/FLINK-4627
> Project: Flink
>  Issue Type: Improvement
>  Components: Kinesis Connector
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Neelesh Srinivas Salian
>Priority: Trivial
> Fix For: 1.2.0
>
>
> Right now value extraction from config properties in the Kinesis connector is 
> using the plain methods from {{java.util.Properties}} with string parsing.
> We can use Flink's utility {{PropertiesUtil}} to do this, with fewer lines 
> of code and better readability.
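For illustration, the kind of change this implies (the config key is an example):

{code:java}
import java.util.Properties;
import org.apache.flink.util.PropertiesUtil;

Properties config = new Properties();
config.setProperty("flink.shard.getrecords.maxretries", "5");

// Before: plain java.util.Properties access plus manual string parsing.
int retries = Integer.parseInt(config.getProperty("flink.shard.getrecords.maxretries", "3"));

// After: typed extraction with a default value in one call.
int retriesTyped = PropertiesUtil.getInt(config, "flink.shard.getrecords.maxretries", 3);
{code}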



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (FLINK-4659) Potential resource leak due to unclosed InputStream in SecurityContext#populateSystemSecurityProperties()

2016-10-16 Thread Neelesh Srinivas Salian (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neelesh Srinivas Salian reassigned FLINK-4659:
--

Assignee: Neelesh Srinivas Salian

> Potential resource leak due to unclosed InputStream in 
> SecurityContext#populateSystemSecurityProperties()
> -
>
> Key: FLINK-4659
> URL: https://issues.apache.org/jira/browse/FLINK-4659
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Neelesh Srinivas Salian
>Priority: Minor
>
> {code}
> try {
> Path jaasConfPath = 
> Files.createTempFile(JAAS_CONF_FILENAME, "");
> InputStream jaasConfStream = 
> SecurityContext.class.getClassLoader().getResourceAsStream(JAAS_CONF_FILENAME);
> Files.copy(jaasConfStream, jaasConfPath, 
> StandardCopyOption.REPLACE_EXISTING);
> jaasConfFile = jaasConfPath.toFile();
> jaasConfFile.deleteOnExit();
> } catch (IOException e) {
> throw new RuntimeException("SASL auth is enabled for 
> ZK but unable to " +
> {code}
> jaasConfStream should be closed in finally block.
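One possible fix is try-with-resources, which closes the stream on every path; a sketch of just this block:

{code:java}
try (InputStream jaasConfStream =
        SecurityContext.class.getClassLoader().getResourceAsStream(JAAS_CONF_FILENAME)) {
    Path jaasConfPath = Files.createTempFile(JAAS_CONF_FILENAME, "");
    Files.copy(jaasConfStream, jaasConfPath, StandardCopyOption.REPLACE_EXISTING);
    jaasConfFile = jaasConfPath.toFile();
    jaasConfFile.deleteOnExit();
} catch (IOException e) {
    throw new RuntimeException("SASL auth is enabled for ZK but unable to " +
        "...", e); // message tail elided in the snippet above
}
{code}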



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4659) Potential resource leak due to unclosed InputStream in SecurityContext#populateSystemSecurityProperties()

2016-10-16 Thread Neelesh Srinivas Salian (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15580858#comment-15580858
 ] 

Neelesh Srinivas Salian commented on FLINK-4659:


I can work on this one [~tedyu], if no one else is already.

> Potential resource leak due to unclosed InputStream in 
> SecurityContext#populateSystemSecurityProperties()
> -
>
> Key: FLINK-4659
> URL: https://issues.apache.org/jira/browse/FLINK-4659
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
> try {
> Path jaasConfPath = 
> Files.createTempFile(JAAS_CONF_FILENAME, "");
> InputStream jaasConfStream = 
> SecurityContext.class.getClassLoader().getResourceAsStream(JAAS_CONF_FILENAME);
> Files.copy(jaasConfStream, jaasConfPath, 
> StandardCopyOption.REPLACE_EXISTING);
> jaasConfFile = jaasConfPath.toFile();
> jaasConfFile.deleteOnExit();
> } catch (IOException e) {
> throw new RuntimeException("SASL auth is enabled for 
> ZK but unable to " +
> {code}
> jaasConfStream should be closed in finally block.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-1091) Allow joins with the solution set using key selectors

2016-10-10 Thread Neelesh Srinivas Salian (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15564179#comment-15564179
 ] 

Neelesh Srinivas Salian commented on FLINK-1091:


Hi [~vkalavri], checking whether this has been resolved or is still being worked on.

> Allow joins with the solution set using key selectors
> -
>
> Key: FLINK-1091
> URL: https://issues.apache.org/jira/browse/FLINK-1091
> Project: Flink
>  Issue Type: Sub-task
>  Components: Iterations
>Reporter: Vasia Kalavri
>Assignee: Vasia Kalavri
>Priority: Minor
>  Labels: easyfix, features
>
> Currently, the solution set may only be joined using tuple field 
> positions.
> A possible solution can be providing explicit functions "joinWithSolution" 
> and "coGroupWithSolution" to make sure the keys used are valid. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-959) Automated bare-metal deployment of FLINK on Amazon EC2 and OpenStack instances

2016-10-10 Thread Neelesh Srinivas Salian (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15564177#comment-15564177
 ] 

Neelesh Srinivas Salian commented on FLINK-959:
---

Is this still applicable? [~rmetzger] or [~uce], do you know if this is needed?

> Automated bare-metal deployment of FLINK on Amazon EC2 and OpenStack instances
> --
>
> Key: FLINK-959
> URL: https://issues.apache.org/jira/browse/FLINK-959
> Project: Flink
>  Issue Type: New Feature
>  Components: Build System
>Affects Versions: pre-apache-0.5
>Reporter: Tobias
>Assignee: Tobias
>Priority: Minor
> Fix For: pre-apache-0.5
>
>
> This Python script starts Amazon EC2 or OpenStack instances and installs 
> Java and Hadoop and configures HDFS/YARN via Puppet, in order to run FLINK on 
> top of Hadoop YARN.
> To install Java and Hadoop, the binaries are downloaded by the script and 
> handed over to Puppet for automated provisioning.
> User-data scripts are used to install Puppet (Debian only) on the master and 
> slave instances. Security groups are created and configured accordingly. 
> The master instance then starts a self-configuration process, so that the 
> Puppet modules are set up according to the cluster structure. 
> The master detects whether the Hadoop YARN web interface is accessible and 
> waits for all expected nodes to be up and running. Then a Stratosphere YARN 
> session is started. TaskManager and JobManager memory allocations are set up 
> in instances.cfg.
> Notes:
> - Configuration reserves 600mb for the operating system and allocates the 
> rest for the YARN node.
> - The Flink web interface is not accessible because the yarn.web.proxy throws 
> a NullPointerException
> - Only runs on Debian derivatives because it uses apt-get 
> - Tested with ubuntu-13.08
> - FLINK is still named Stratosphere
> Code at: https://github.com/tobwiens/StratopshereBareMetalProvPuppet



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-1497) No documentation on MiniCluster

2016-10-10 Thread Neelesh Srinivas Salian (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15564170#comment-15564170
 ] 

Neelesh Srinivas Salian commented on FLINK-1497:


Checking whether this is still valid, unless it was fixed elsewhere, [~StephanEwen]

> No documentation on MiniCluster
> ---
>
> Key: FLINK-1497
> URL: https://issues.apache.org/jira/browse/FLINK-1497
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 0.9
>Reporter: Sergey Dudoladov
>Priority: Trivial
>  Labels: documentation
>
>  It looks like the Flink docs do not show how to run a MiniCluster. 
>  It might be worth documenting this feature and adding relevant scripts to the 
> /bin folder, e.g. start_mini.sh and stop_mini.sh
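For illustration, the programmatic route that already exists runs the job on an embedded local cluster in the same JVM; the request here is to document it (the start_mini.sh/stop_mini.sh scripts are only proposed):

{code:java}
import org.apache.flink.api.java.ExecutionEnvironment;

// Runs the program on an embedded local cluster inside the current JVM.
ExecutionEnvironment env = ExecutionEnvironment.createLocalEnvironment(2); // parallelism 2
env.fromElements(1, 2, 3)
   .map(x -> x * 2)
   .print();
{code}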



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4792) Update documentation - FlinkML/QuickStart Guide

2016-10-10 Thread Neelesh Srinivas Salian (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15564166#comment-15564166
 ] 

Neelesh Srinivas Salian commented on FLINK-4792:


Shall I work on this if no one has begun?

> Update documentation - FlinkML/QuickStart Guide
> ---
>
> Key: FLINK-4792
> URL: https://issues.apache.org/jira/browse/FLINK-4792
> Project: Flink
>  Issue Type: Improvement
>Reporter: Thomas FOURNIER
>Priority: Trivial
>
> Hi,
> I'm going through the first steps of FlinkML/QuickStart guide: 
> https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/libs/ml/quickstart.html
> When using env.readCsvFile:
> val env = ExecutionEnvironment.getExecutionEnvironment
> val survival = env.readCsvFile[(String, String, String, String)]("path/data")
> I encounter the following error:
> Error:(17, 69) could not find implicit value for evidence parameter of type 
> org.apache.flink.api.common.typeinfo.TypeInformation[(String, String, String, 
> String)]
> Error occurred in an application involving default arguments.
> val survival = env.readCsvFile[(String, String, String, String)]("path/data")
> To solve this issue, you need the following import:
> import org.apache.flink.api.scala._ (instead of import 
> org.apache.flink.api.scala.ExecutionEnvironment as documented).
> I think it would be relevant to update the documentation, even if this point 
> is mentioned in FAQ.
> Thanks
> Best
> Thomas



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4627) Use Flink's PropertiesUtil in Kinesis connector to extract typed values from config properties

2016-10-10 Thread Neelesh Srinivas Salian (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15564165#comment-15564165
 ] 

Neelesh Srinivas Salian commented on FLINK-4627:


Shall I work on this if no one has already begun?

> Use Flink's PropertiesUtil in Kinesis connector to extract typed values from 
> config properties 
> ---
>
> Key: FLINK-4627
> URL: https://issues.apache.org/jira/browse/FLINK-4627
> Project: Flink
>  Issue Type: Improvement
>  Components: Kinesis Connector
>Reporter: Tzu-Li (Gordon) Tai
>Priority: Trivial
> Fix For: 1.2.0
>
>
> Right now value extraction from config properties in the Kinesis connector is 
> using the plain methods from {{java.util.Properties}} with string parsing.
> We can use Flink's utility {{PropertiesUtil}} to do this, with fewer lines 
> of code and better readability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4260) Allow SQL's LIKE ESCAPE

2016-10-04 Thread Neelesh Srinivas Salian (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15547157#comment-15547157
 ] 

Neelesh Srinivas Salian commented on FLINK-4260:


Hey [~miaoever], have you had a chance to implement this?

> Allow SQL's LIKE ESCAPE
> ---
>
> Key: FLINK-4260
> URL: https://issues.apache.org/jira/browse/FLINK-4260
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.1.0
>Reporter: Timo Walther
>Assignee: Leo Deng
>Priority: Minor
>
> Currently, the SQL API does not support specifying an ESCAPE character in a 
> LIKE expression. The SIMILAR TO should also support that.
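For reference, the kind of standard-SQL predicate that currently cannot be expressed, shown only as a query string (table and column names are made up):

{code:java}
// '!' is declared as the escape character, so '!_' matches a literal
// underscore instead of the LIKE single-character wildcard.
String query = "SELECT name FROM products WHERE name LIKE '%!_%' ESCAPE '!'";
{code}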



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-3999) Rename the `running` flag in the drivers to `canceled`

2016-10-03 Thread Neelesh Srinivas Salian (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15543749#comment-15543749
 ] 

Neelesh Srinivas Salian commented on FLINK-3999:


Apologies. Haven't had the chance to work on this. Will post a PR soon.


> Rename the `running` flag in the drivers to `canceled`
> --
>
> Key: FLINK-3999
> URL: https://issues.apache.org/jira/browse/FLINK-3999
> Project: Flink
>  Issue Type: Bug
>  Components: Local Runtime
>Reporter: Gabor Gevay
>Assignee: Neelesh Srinivas Salian
>Priority: Trivial
>
> The name of the {{running}} flag in the drivers doesn't reflect its usage: 
> when the operator just stops normally, it is not running anymore, but the 
> {{running}} flag will still be true, since the flag is only cleared when 
> cancelling.
> It should be renamed, and the value inverted.
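A sketch of the rename the issue asks for (simplified; driver internals omitted):

{code:java}
// Before (sketch): the name suggests lifecycle state, but the flag is only
// ever flipped by cancel(), so it stays true after a normal stop.
private volatile boolean running = true;

// After (sketch): name and polarity match the actual semantics.
private volatile boolean canceled = false;

public void cancel() {
    this.canceled = true;
}
{code}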



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (FLINK-598) Add support for globalOrdering in DataSet API

2016-09-26 Thread Neelesh Srinivas Salian (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neelesh Srinivas Salian closed FLINK-598.
-
Resolution: Fixed

Closing since the sub-tasks are Closed and Resolved.

> Add support for globalOrdering in DataSet API
> -
>
> Key: FLINK-598
> URL: https://issues.apache.org/jira/browse/FLINK-598
> Project: Flink
>  Issue Type: Improvement
>Reporter: GitHub Import
>  Labels: github-import
>
> There is no support for globalOrdering at the moment. In the Record API, it 
> was possible to hand an Ordering and a Distribution to a FileDataSink. In the 
> DataSet API, such a feature is still missing.
>  Imported from GitHub 
> Url: https://github.com/stratosphere/stratosphere/issues/598
> Created by: [skunert|https://github.com/skunert]
> Labels: enhancement, java api, user satisfaction, 
> Assignee: [fhueske|https://github.com/fhueske]
> Created at: Mon Mar 17 14:08:05 CET 2014
> State: open



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (FLINK-4513) Kafka connector documentation refers to Flink 1.1-SNAPSHOT

2016-09-19 Thread Neelesh Srinivas Salian (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neelesh Srinivas Salian closed FLINK-4513.
--
Resolution: Not A Problem

> Kafka connector documentation refers to Flink 1.1-SNAPSHOT
> --
>
> Key: FLINK-4513
> URL: https://issues.apache.org/jira/browse/FLINK-4513
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.1.1
>Reporter: Fabian Hueske
>Assignee: Neelesh Srinivas Salian
>Priority: Trivial
> Fix For: 1.1.3
>
>
> The Kafka connector documentation: 
> https://ci.apache.org/projects/flink/flink-docs-release-1.1/apis/streaming/connectors/kafka.html
>  of Flink 1.1 refers to a Flink 1.1-SNAPSHOT Maven version. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (FLINK-4614) Kafka connector documentation refers to Flink 1.2-SNAPSHOT

2016-09-19 Thread Neelesh Srinivas Salian (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neelesh Srinivas Salian closed FLINK-4614.
--
Resolution: Not A Problem

> Kafka connector documentation refers to Flink 1.2-SNAPSHOT
> --
>
> Key: FLINK-4614
> URL: https://issues.apache.org/jira/browse/FLINK-4614
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.2.0
>Reporter: Neelesh Srinivas Salian
>Assignee: Neelesh Srinivas Salian
>Priority: Trivial
> Fix For: 1.2.0
>
>
> The Kafka connector documentation: 
> https://ci.apache.org/projects/flink/flink-docs-release-1.1/apis/streaming/connectors/kafka.html
>  of Flink 1.1 refers to a Flink 1.1-SNAPSHOT Maven version. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (FLINK-1151) CollectionDataSource does not provide statistics

2016-09-19 Thread Neelesh Srinivas Salian (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neelesh Srinivas Salian closed FLINK-1151.
--
Resolution: Information Provided

This can be solved by sampling the size of objects: serialize them to a 
DataOutputView that just counts bytes instead of writing the data anywhere.
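A hedged sketch of that sampling idea (names are made up; it uses Flink's DataOutputViewStreamWrapper over a byte-counting stream):

{code:java}
import java.io.IOException;
import java.io.OutputStream;
import org.apache.flink.api.common.typeutils.TypeSerializer;
import org.apache.flink.core.memory.DataOutputViewStreamWrapper;

/** Discards bytes and only counts them. */
final class CountingOutputStream extends OutputStream {
    long count;
    @Override public void write(int b) { count++; }
    @Override public void write(byte[] b, int off, int len) { count += len; }
}

/** Estimates the average serialized size of the sampled elements. */
static <T> long estimateAverageSize(Iterable<T> sample, TypeSerializer<T> serializer)
        throws IOException {
    CountingOutputStream counter = new CountingOutputStream();
    DataOutputViewStreamWrapper view = new DataOutputViewStreamWrapper(counter);
    long n = 0;
    for (T element : sample) {
        serializer.serialize(element, view);
        n++;
    }
    return n == 0 ? 0 : counter.count / n;
}
{code}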

> CollectionDataSource does not provide statistics
> 
>
> Key: FLINK-1151
> URL: https://issues.apache.org/jira/browse/FLINK-1151
> Project: Flink
>  Issue Type: Improvement
>  Components: Optimizer
>Affects Versions: 0.6.1-incubating, 0.7.0-incubating
>Reporter: Fabian Hueske
>Priority: Minor
>
> CollectionDataSources do not provide statistics for the optimizer although 
> the data type and the number of elements are known.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (FLINK-1135) Blog post with topic "Accessing Data Stored in Hive with Flink"

2016-09-14 Thread Neelesh Srinivas Salian (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491985#comment-15491985
 ] 

Neelesh Srinivas Salian edited comment on FLINK-1135 at 9/15/16 1:16 AM:
-

Checking to see if this is still open and/or in progress. [~twalthr]


was (Author: neelesh77):
Checking to see if this is still open. [~twalthr]

> Blog post with topic "Accessing Data Stored in Hive with Flink"
> ---
>
> Key: FLINK-1135
> URL: https://issues.apache.org/jira/browse/FLINK-1135
> Project: Flink
>  Issue Type: Improvement
>  Components: Project Website
>Reporter: Timo Walther
>Assignee: Robert Metzger
>Priority: Minor
> Attachments: 2014-09-29-querying-hive.md
>
>
> Recently, I implemented a Flink job that accessed Hive. Maybe someone else is 
> going to try this. I created a blog post for the website to share my 
> experience.
> You'll find the blog post file attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-1135) Blog post with topic "Accessing Data Stored in Hive with Flink"

2016-09-14 Thread Neelesh Srinivas Salian (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491985#comment-15491985
 ] 

Neelesh Srinivas Salian commented on FLINK-1135:


Checking to see if this is still open. [~twalthr]

> Blog post with topic "Accessing Data Stored in Hive with Flink"
> ---
>
> Key: FLINK-1135
> URL: https://issues.apache.org/jira/browse/FLINK-1135
> Project: Flink
>  Issue Type: Improvement
>  Components: Project Website
>Reporter: Timo Walther
>Assignee: Robert Metzger
>Priority: Minor
> Attachments: 2014-09-29-querying-hive.md
>
>
> Recently, I implemented a Flink job that accessed Hive. Maybe someone else is 
> going to try this. I created a blog post for the website to share my 
> experience.
> You'll find the blog post file attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-4614) Kafka connector documentation refers to Flink 1.2-SNAPSHOT

2016-09-12 Thread Neelesh Srinivas Salian (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neelesh Srinivas Salian updated FLINK-4614:
---
Fix Version/s: (was: 1.1.3)
   1.2.0

> Kafka connector documentation refers to Flink 1.2-SNAPSHOT
> --
>
> Key: FLINK-4614
> URL: https://issues.apache.org/jira/browse/FLINK-4614
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.2.0
>Reporter: Neelesh Srinivas Salian
>Assignee: Neelesh Srinivas Salian
>Priority: Trivial
> Fix For: 1.2.0
>
>
> The Kafka connector documentation: 
> https://ci.apache.org/projects/flink/flink-docs-release-1.1/apis/streaming/connectors/kafka.html
>  of Flink 1.1 refers to a Flink 1.1-SNAPSHOT Maven version. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-4614) Kafka connector documentation refers to Flink 1.2-SNAPSHOT

2016-09-12 Thread Neelesh Srinivas Salian (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neelesh Srinivas Salian updated FLINK-4614:
---
Summary: Kafka connector documentation refers to Flink 1.2-SNAPSHOT  (was: 
CLONE - Kafka connector documentation refers to Flink 1.2-SNAPSHOT)

> Kafka connector documentation refers to Flink 1.2-SNAPSHOT
> --
>
> Key: FLINK-4614
> URL: https://issues.apache.org/jira/browse/FLINK-4614
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.2.0
>Reporter: Neelesh Srinivas Salian
>Assignee: Neelesh Srinivas Salian
>Priority: Trivial
> Fix For: 1.1.3
>
>
> The Kafka connector documentation: 
> https://ci.apache.org/projects/flink/flink-docs-release-1.1/apis/streaming/connectors/kafka.html
>  of Flink 1.1 refers to a Flink 1.1-SNAPSHOT Maven version. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-4614) CLONE - Kafka connector documentation refers to Flink 1.2-SNAPSHOT

2016-09-12 Thread Neelesh Srinivas Salian (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neelesh Srinivas Salian updated FLINK-4614:
---
Affects Version/s: (was: 1.1.1)
   1.2.0

> CLONE - Kafka connector documentation refers to Flink 1.2-SNAPSHOT
> --
>
> Key: FLINK-4614
> URL: https://issues.apache.org/jira/browse/FLINK-4614
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.2.0
>Reporter: Neelesh Srinivas Salian
>Assignee: Neelesh Srinivas Salian
>Priority: Trivial
> Fix For: 1.1.3
>
>
> The Kafka connector documentation: 
> https://ci.apache.org/projects/flink/flink-docs-release-1.1/apis/streaming/connectors/kafka.html
>  of Flink 1.1 refers to a Flink 1.1-SNAPSHOT Maven version. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4614) CLONE - Kafka connector documentation refers to Flink 1.2-SNAPSHOT

2016-09-12 Thread Neelesh Srinivas Salian (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485380#comment-15485380
 ] 

Neelesh Srinivas Salian commented on FLINK-4614:


I see this is still present in the latest master branch as well.
https://github.com/apache/flink/blob/master/docs/_config.yml

> CLONE - Kafka connector documentation refers to Flink 1.2-SNAPSHOT
> --
>
> Key: FLINK-4614
> URL: https://issues.apache.org/jira/browse/FLINK-4614
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.1.1
>Reporter: Neelesh Srinivas Salian
>Assignee: Neelesh Srinivas Salian
>Priority: Trivial
> Fix For: 1.1.3
>
>
> The Kafka connector documentation: 
> https://ci.apache.org/projects/flink/flink-docs-release-1.1/apis/streaming/connectors/kafka.html
>  of Flink 1.1 refers to a Flink 1.1-SNAPSHOT Maven version. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4614) CLONE - Kafka connector documentation refers to Flink 1.2-SNAPSHOT

2016-09-12 Thread Neelesh Srinivas Salian (JIRA)
Neelesh Srinivas Salian created FLINK-4614:
--

 Summary: CLONE - Kafka connector documentation refers to Flink 
1.2-SNAPSHOT
 Key: FLINK-4614
 URL: https://issues.apache.org/jira/browse/FLINK-4614
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Affects Versions: 1.1.1
Reporter: Neelesh Srinivas Salian
Assignee: Neelesh Srinivas Salian
Priority: Trivial
 Fix For: 1.1.3


The Kafka connector documentation: 
https://ci.apache.org/projects/flink/flink-docs-release-1.1/apis/streaming/connectors/kafka.html
 of Flink 1.1 refers to a Flink 1.1-SNAPSHOT Maven version. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-3457) Link to Apache Flink meetups from the 'Community' section of the website

2016-09-09 Thread Neelesh Srinivas Salian (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15478687#comment-15478687
 ] 

Neelesh Srinivas Salian commented on FLINK-3457:


Hi [~xazax], are you working on this?
If not, shall I grab it and post a PR?

> Link to Apache Flink meetups from the 'Community' section of the website
> 
>
> Key: FLINK-3457
> URL: https://issues.apache.org/jira/browse/FLINK-3457
> Project: Flink
>  Issue Type: Task
>  Components: Documentation
>Reporter: Slim Baltagi
>Assignee: Gabor Horvath
>Priority: Trivial
>
> Now with the number of Apache Flink meetups increasing worldwide, it is 
> helpful to add a link to Apache Flink meetups 
> http://www.meetup.com/topics/apache-flink/ to the community section of 
> https://flink.apache.org/community.html so visitors can conveniently find 
> them  right from the Apache Flink website. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (FLINK-2859) Add links to docs and JIRA issues in FlinkML roadmap

2016-09-09 Thread Neelesh Srinivas Salian (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neelesh Srinivas Salian closed FLINK-2859.
--
Resolution: Fixed

This is fixed since the page now links to the FLINK-ML JIRA list.

> Add links to docs and JIRA issues in FlinkML roadmap
> 
>
> Key: FLINK-2859
> URL: https://issues.apache.org/jira/browse/FLINK-2859
> Project: Flink
>  Issue Type: Improvement
>  Components: Machine Learning Library
>Reporter: Theodore Vasiloudis
>Priority: Trivial
>  Labels: ML
>
> The FlinkML [vision and roadmap 
> doc|https://cwiki.apache.org/confluence/display/FLINK/FlinkML%3A+Vision+and+Roadmap]
>  lists a number of features we aim to have in FlinkML.
> It would be helpful to newcomers if each feature linked to its corresponding 
> JIRA issue, and already implemented features to their page in the docs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (FLINK-1914) Wrong FS while starting YARN session without correct HADOOP_HOME

2016-09-09 Thread Neelesh Srinivas Salian (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neelesh Srinivas Salian closed FLINK-1914.
--
Resolution: Fixed

> Wrong FS while starting YARN session without correct HADOOP_HOME
> 
>
> Key: FLINK-1914
> URL: https://issues.apache.org/jira/browse/FLINK-1914
> Project: Flink
>  Issue Type: Bug
>  Components: YARN Client
>Reporter: Zoltán Zvara
>Priority: Trivial
>  Labels: yarn, yarn-client
>
> When a YARN session is invoked ({{yarn-session.sh}}) without a correct 
> {{HADOOP_HOME}} (the AM is still deployed, for example to {{0.0.0.0:8032}}), 
> the deployed AM fails with an {{IllegalArgumentException}}:
> {code}
> java.lang.IllegalArgumentException: Wrong FS: 
> file:/home/.../flink-dist-0.9-SNAPSHOT.jar, expected: hdfs://localhost:9000
>   at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:642)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:181)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:92)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1106)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
>   at org.apache.flink.yarn.Utils.registerLocalResource(Utils.java:105)
>   at 
> org.apache.flink.yarn.ApplicationMasterActor$$anonfun$org$apache$flink$yarn$ApplicationMasterActor$$startYarnSession$2.apply(ApplicationMasterActor.scala:436)
>   at 
> org.apache.flink.yarn.ApplicationMasterActor$$anonfun$org$apache$flink$yarn$ApplicationMasterActor$$startYarnSession$2.apply(ApplicationMasterActor.scala:371)
>   at scala.util.Try$.apply(Try.scala:161)
>   at 
> org.apache.flink.yarn.ApplicationMasterActor$class.org$apache$flink$yarn$ApplicationMasterActor$$startYarnSession(ApplicationMasterActor.scala:371)
>   at 
> org.apache.flink.yarn.ApplicationMasterActor$$anonfun$receiveYarnMessages$1.applyOrElse(ApplicationMasterActor.scala:155)
>   at scala.PartialFunction$OrElse.apply(PartialFunction.scala:162)
>   at 
> org.apache.flink.runtime.ActorLogMessages$$anon$1.apply(ActorLogMessages.scala:37)
>   at 
> org.apache.flink.runtime.ActorLogMessages$$anon$1.apply(ActorLogMessages.scala:30)
>   at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
>   at 
> org.apache.flink.runtime.ActorLogMessages$$anon$1.applyOrElse(ActorLogMessages.scala:30)
>   at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
>   at 
> org.apache.flink.runtime.jobmanager.JobManager.aroundReceive(JobManager.scala:94)
> {code}
> IMO this {{IllegalArgumentException}} should get handled in 
> {{org.apache.flink.yarn.Utils.registerLocalResource}} or on an upper level to 
> provide a better error message. This needs to be looked up from YARN logs at 
> the moment, which is painful for a trivial mistake like missing 
> {{HADOOP_HOME}}.
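
For illustration, a minimal sketch of the kind of early check suggested above, using Hadoop's FileSystem API; the class and method names are hypothetical, not actual Flink code:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical helper, not the actual Utils.registerLocalResource code.
public class LocalResourceCheck {
    public static void checkPathMatchesDefaultFs(Path path, Configuration conf) throws Exception {
        FileSystem fs = FileSystem.get(conf);
        try {
            // Throws IllegalArgumentException ("Wrong FS") on a scheme mismatch.
            fs.getFileStatus(path);
        } catch (IllegalArgumentException e) {
            throw new RuntimeException("The file " + path + " does not belong to the "
                + "cluster's default file system (" + fs.getUri() + "). Check that "
                + "HADOOP_HOME points to a valid Hadoop configuration.", e);
        }
    }
}
{code}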



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-1914) Wrong FS while starting YARN session without correct HADOOP_HOME

2016-09-09 Thread Neelesh Srinivas Salian (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15478539#comment-15478539
 ] 

Neelesh Srinivas Salian commented on FLINK-1914:


Sounds good [~rmetzger]. Closing the JIRA.

> Wrong FS while starting YARN session without correct HADOOP_HOME
> 
>
> Key: FLINK-1914
> URL: https://issues.apache.org/jira/browse/FLINK-1914
> Project: Flink
>  Issue Type: Bug
>  Components: YARN Client
>Reporter: Zoltán Zvara
>Priority: Trivial
>  Labels: yarn, yarn-client
>
> When a YARN session is invoked ({{yarn-session.sh}}) without a correct 
> {{HADOOP_HOME}} (the AM is still deployed, for example to {{0.0.0.0:8032}}), 
> the deployed AM fails with an {{IllegalArgumentException}}:
> {code}
> java.lang.IllegalArgumentException: Wrong FS: 
> file:/home/.../flink-dist-0.9-SNAPSHOT.jar, expected: hdfs://localhost:9000
>   at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:642)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:181)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:92)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1106)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
>   at org.apache.flink.yarn.Utils.registerLocalResource(Utils.java:105)
>   at 
> org.apache.flink.yarn.ApplicationMasterActor$$anonfun$org$apache$flink$yarn$ApplicationMasterActor$$startYarnSession$2.apply(ApplicationMasterActor.scala:436)
>   at 
> org.apache.flink.yarn.ApplicationMasterActor$$anonfun$org$apache$flink$yarn$ApplicationMasterActor$$startYarnSession$2.apply(ApplicationMasterActor.scala:371)
>   at scala.util.Try$.apply(Try.scala:161)
>   at 
> org.apache.flink.yarn.ApplicationMasterActor$class.org$apache$flink$yarn$ApplicationMasterActor$$startYarnSession(ApplicationMasterActor.scala:371)
>   at 
> org.apache.flink.yarn.ApplicationMasterActor$$anonfun$receiveYarnMessages$1.applyOrElse(ApplicationMasterActor.scala:155)
>   at scala.PartialFunction$OrElse.apply(PartialFunction.scala:162)
>   at 
> org.apache.flink.runtime.ActorLogMessages$$anon$1.apply(ActorLogMessages.scala:37)
>   at 
> org.apache.flink.runtime.ActorLogMessages$$anon$1.apply(ActorLogMessages.scala:30)
>   at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
>   at 
> org.apache.flink.runtime.ActorLogMessages$$anon$1.applyOrElse(ActorLogMessages.scala:30)
>   at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
>   at 
> org.apache.flink.runtime.jobmanager.JobManager.aroundReceive(JobManager.scala:94)
> {code}
> IMO this {{IllegalArgumentException}} should get handled in 
> {{org.apache.flink.yarn.Utils.registerLocalResource}} or on an upper level to 
> provide a better error message. This needs to be looked up from YARN logs at 
> the moment, which is painful to a trivial mistake like missing 
> {{HADOOP_HOME}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (FLINK-710) Aggregate transformation works only with FieldPositionKeys

2016-09-09 Thread Neelesh Srinivas Salian (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neelesh Srinivas Salian closed FLINK-710.
-
Resolution: Duplicate

> Aggregate transformation works only with FieldPositionKeys
> --
>
> Key: FLINK-710
> URL: https://issues.apache.org/jira/browse/FLINK-710
> Project: Flink
>  Issue Type: Improvement
>Reporter: GitHub Import
>Assignee: Márton Balassi
>Priority: Trivial
>  Labels: github-import
>
> In the new Java API, Aggregate transformations can only be applied to 
> DataSets that are grouped using field positions and to ungrouped DataSets.
> DataSets grouped with KeySelector functions are not supported.
> Since Aggregations only work on Tuple DataSets which can be grouped using the 
> more convenient field positions, this might be OK.
> Or should we support KeySelector groupings as well for completeness? 
> Opinions?
>  Imported from GitHub 
> Url: https://github.com/stratosphere/stratosphere/issues/710
> Created by: [fhueske|https://github.com/fhueske]
> Labels: enhancement, java api, question, user satisfaction, 
> Milestone: Release 0.6 (unplanned)
> Created at: Tue Apr 22 14:42:52 CEST 2014
> State: open
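
To make the distinction concrete, a minimal sketch of the supported path (field-position keys on a Tuple DataSet); the KeySelector variant in the comment is the unsupported case this issue discusses:

{code:java}
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.aggregation.Aggregations;
import org.apache.flink.api.java.tuple.Tuple2;

public class AggregateExample {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        DataSet<Tuple2<String, Integer>> data = env.fromElements(
                Tuple2.of("a", 1), Tuple2.of("a", 2), Tuple2.of("b", 3));

        // Supported: grouping by field position on a Tuple DataSet.
        data.groupBy(0).aggregate(Aggregations.SUM, 1).print();

        // Not supported per this issue: grouping with a KeySelector, e.g.
        // data.groupBy(t -> t.f0).aggregate(Aggregations.SUM, 1);
    }
}
{code}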



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (FLINK-4513) Kafka connector documentation refers to Flink 1.1-SNAPSHOT

2016-09-06 Thread Neelesh Srinivas Salian (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neelesh Srinivas Salian reassigned FLINK-4513:
--

Assignee: Neelesh Srinivas Salian

> Kafka connector documentation refers to Flink 1.1-SNAPSHOT
> --
>
> Key: FLINK-4513
> URL: https://issues.apache.org/jira/browse/FLINK-4513
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.1.1
>Reporter: Fabian Hueske
>Assignee: Neelesh Srinivas Salian
>Priority: Trivial
> Fix For: 1.1.3
>
>
> The Kafka connector documentation: 
> https://ci.apache.org/projects/flink/flink-docs-release-1.1/apis/streaming/connectors/kafka.html
>  of Flink 1.1 refers to a Flink 1.1-SNAPSHOT Maven version. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (FLINK-4278) Unclosed FSDataOutputStream in multiple files in the project

2016-09-06 Thread Neelesh Srinivas Salian (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neelesh Srinivas Salian closed FLINK-4278.
--
Resolution: Not A Problem

> Unclosed FSDataOutputStream in multiple files in the project
> 
>
> Key: FLINK-4278
> URL: https://issues.apache.org/jira/browse/FLINK-4278
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.0.3
>Reporter: Neelesh Srinivas Salian
>Assignee: Neelesh Srinivas Salian
>
> After FLINK-4259, I did a check and found that the following files do not 
> close the FSDataOutputStream.
> The following are the files and the corresponding methods missing the close() 
> 1) FSDataOutputStream.java adding a close method in the abstract class
> 2) FSStateBackend flush() and write() - closing the FSDataOutputStream
> 3) StringWriter.java write()
> 4) FileSystemStateStore putState() -  closing the FSDataOutputStream
> 5) HadoopDataOutputStream.java: not too sure if this needs closing.
> 6) FileSystemStateStorageHelper.java store() needs closing for both the 
> outStream and the ObjectOutputStream
> The options to consider would be to either call close() explicitly or use 
> IOUtils.closeQuietly().
> Any thoughts?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4513) Kafka connector documentation refers to Flink 1.1-SNAPSHOT

2016-09-03 Thread Neelesh Srinivas Salian (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15461833#comment-15461833
 ] 

Neelesh Srinivas Salian commented on FLINK-4513:


Shall I work on this? All the connector pages would need to be changed 
accordingly.
Will check if this is true for other versions too.


> Kafka connector documentation refers to Flink 1.1-SNAPSHOT
> --
>
> Key: FLINK-4513
> URL: https://issues.apache.org/jira/browse/FLINK-4513
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.1.1
>Reporter: Fabian Hueske
>Priority: Trivial
> Fix For: 1.1.2
>
>
> The Kafka connector documentation: 
> https://ci.apache.org/projects/flink/flink-docs-release-1.1/apis/streaming/connectors/kafka.html
>  of Flink 1.1 refers to a Flink 1.1-SNAPSHOT Maven version. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2247) Improve the way memory is reported in the web frontend

2016-08-29 Thread Neelesh Srinivas Salian (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447477#comment-15447477
 ] 

Neelesh Srinivas Salian commented on FLINK-2247:


Hi [~tvas], please let me know if this is still valid. 

> Improve the way memory is reported in the web frontend
> --
>
> Key: FLINK-2247
> URL: https://issues.apache.org/jira/browse/FLINK-2247
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Reporter: Theodore Vasiloudis
>Priority: Trivial
>
> Currently in the taskmanager view of the web frontend, the memory available 
> to Flink is reported in a slightly confusing manner.
> In the worker summary, we get a report of the Flink Managed memory available, 
> and we get warnings when that is set too low.
> The warnings, though, do not seem to take the memory actually available to 
> Flink into account when they are issued.
> For example, in a machine with 7.5GB memory available it is normal to assign 
> ~6GB for the JVM, which under default settings gives ~4GB to Flink managed 
> memory.
> In this case however, we get a warning that 7500MB of memory is available, 
> but only ~4000MB is available to Flink, disregarding the total amount 
> available to the JVM.
> The reporting can then be improved by reporting the total amount available 
> for the JVM, the amount available for Flink's managed memory, and only issue 
> warnings when the settings are actually low compared to the available memory.
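
For reference, the arithmetic behind the numbers above, assuming the old default managed-memory fraction of 0.7 ({{taskmanager.memory.fraction}}); just a back-of-the-envelope sketch:

{code:java}
public class ManagedMemoryEstimate {
    public static void main(String[] args) {
        double machineGb = 7.5;       // physical memory of the machine
        double jvmHeapGb = 6.0;       // heap assigned to the TaskManager JVM
        double managedFraction = 0.7; // assumed default taskmanager.memory.fraction
        double managedGb = jvmHeapGb * managedFraction;
        // Prints roughly 4.2 GB, i.e. the "~4GB" cited in the description.
        System.out.printf("machine=%.1fGB jvmHeap=%.1fGB managed=~%.1fGB%n",
                machineGb, jvmHeapGb, managedGb);
    }
}
{code}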



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-3999) Rename the `running` flag in the drivers to `canceled`

2016-08-29 Thread Neelesh Srinivas Salian (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447356#comment-15447356
 ] 

Neelesh Srinivas Salian commented on FLINK-3999:


Thank you [~ggevay]. Will post a PR this week.



> Rename the `running` flag in the drivers to `canceled`
> --
>
> Key: FLINK-3999
> URL: https://issues.apache.org/jira/browse/FLINK-3999
> Project: Flink
>  Issue Type: Bug
>  Components: Local Runtime
>Reporter: Gabor Gevay
>Assignee: Neelesh Srinivas Salian
>Priority: Trivial
>
> The name of the {{running}} flag in the drivers doesn't reflect its usage: 
> when the operator just stops normally, then it is not running anymore, but 
> the {{running}} flag will still be true, since the flag is only set to false 
> when cancelling.
> It should be renamed, and the value inverted.
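
A minimal sketch of what the rename with the inverted value could look like (illustrative only, not the actual driver code):

{code:java}
// Illustrative sketch, not actual Flink driver code.
public class DriverSketch {
    // Before: private volatile boolean running = true; // only flipped in cancel()
    private volatile boolean canceled = false;

    public void run() {
        while (!canceled /* before: while (running) */) {
            // process one record ...
            break; // keep this sketch terminating
        }
    }

    public void cancel() {
        canceled = true; // before: running = false;
    }
}
{code}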



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4100) RocksDBStateBackend#close() can throw NPE

2016-08-25 Thread Neelesh Srinivas Salian (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437392#comment-15437392
 ] 

Neelesh Srinivas Salian commented on FLINK-4100:


Sounds good. Thanks [~aljoscha]

> RocksDBStateBackend#close() can throw NPE
> -
>
> Key: FLINK-4100
> URL: https://issues.apache.org/jira/browse/FLINK-4100
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing, Tests
>Affects Versions: 1.1.0
>Reporter: Chesnay Schepler
>Priority: Trivial
>
> When running the RocksDBStateBackendTest on Windows I ran into an NPE. The 
> tests are aborted in the @Before checkOperatingSystem method (which is 
> correct behaviour), but the test still calls dispose() in @After teardown().
> This led to an NPE since the lock object used is null; it was not 
> initialized since initializeForJob() was never called and there is no null 
> check.
> {code}
> testCopyDefaultValue(org.apache.flink.contrib.streaming.state.RocksDBStateBackendTest)
>   Time elapsed: 0 sec  <<< ERROR!
> java.lang.NullPointerException: null
> at 
> org.apache.flink.contrib.streaming.state.RocksDBStateBackend.dispose(RocksDBStateBackend.java:318)
> at 
> org.apache.flink.runtime.state.StateBackendTestBase.teardown(StateBackendTestBase.java:71)
> at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> {code}
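
For illustration, a sketch of the null guard that would avoid this (hypothetical field and structure, not the actual RocksDBStateBackend code):

{code:java}
public class StateBackendSketch {
    private Object lock; // set in initializeForJob(); null if that was never called

    public void dispose() {
        if (lock == null) {
            // initializeForJob() never ran (e.g. the test was skipped in @Before),
            // so there is nothing to clean up.
            return;
        }
        synchronized (lock) {
            // release native resources ...
        }
    }
}
{code}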



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-3999) Rename the `running` flag in the drivers to `canceled`

2016-08-24 Thread Neelesh Srinivas Salian (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15435037#comment-15435037
 ] 

Neelesh Srinivas Salian commented on FLINK-3999:


Shall I work on this JIRA, if no one else is working on it already?


> Rename the `running` flag in the drivers to `canceled`
> --
>
> Key: FLINK-3999
> URL: https://issues.apache.org/jira/browse/FLINK-3999
> Project: Flink
>  Issue Type: Bug
>  Components: Local Runtime
>Reporter: Gabor Gevay
>Priority: Trivial
>
> The name of the {{running}} flag in the drivers doesn't reflect its usage: 
> when the operator just stops normally, then it is not running anymore, but 
> the {{running}} flag will still be true, since the flag is only set to false 
> when cancelling.
> It should be renamed, and the value inverted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4100) RocksDBStateBackend#close() can throw NPE

2016-08-23 Thread Neelesh Srinivas Salian (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15433925#comment-15433925
 ] 

Neelesh Srinivas Salian commented on FLINK-4100:


Hi,
Shall I work on this, if no one has started already?


> RocksDBStateBackend#close() can throw NPE
> -
>
> Key: FLINK-4100
> URL: https://issues.apache.org/jira/browse/FLINK-4100
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing, Tests
>Affects Versions: 1.1.0
>Reporter: Chesnay Schepler
>Priority: Trivial
>
> When running the RocksDBStateBackendTest on Windows I ran into an NPE. The 
> tests are aborted in the @Before checkOperatingSystem method (which is 
> correct behaviour), but the test still calls dispose() in @After teardown().
> This led to an NPE since the lock object used is null; it was not 
> initialized since initializeForJob() was never called and there is no null 
> check.
> {code}
> testCopyDefaultValue(org.apache.flink.contrib.streaming.state.RocksDBStateBackendTest)
>   Time elapsed: 0 sec  <<< ERROR!
> java.lang.NullPointerException: null
> at 
> org.apache.flink.contrib.streaming.state.RocksDBStateBackend.dispose(RocksDBStateBackend.java:318)
> at 
> org.apache.flink.runtime.state.StateBackendTestBase.teardown(StateBackendTestBase.java:71)
> at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4148) incorrect calculation distance in QuadTree

2016-08-23 Thread Neelesh Srinivas Salian (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15433696#comment-15433696
 ] 

Neelesh Srinivas Salian commented on FLINK-4148:


[~humanoid], thank you for reporting this. Could you please open a Pull 
Request with this patch?

> incorrect calculation distance in QuadTree
> --
>
> Key: FLINK-4148
> URL: https://issues.apache.org/jira/browse/FLINK-4148
> Project: Flink
>  Issue Type: Bug
>Reporter: Alexey Diomin
>Priority: Trivial
> Attachments: 
> 0001-FLINK-4148-incorrect-calculation-minDist-distance-in.patch
>
>
> https://github.com/apache/flink/blob/master/flink-libraries/flink-ml/src/main/scala/org/apache/flink/ml/nn/QuadTree.scala#L105
> Because EuclideanDistanceMetric extends SquaredEuclideanDistanceMetric, we 
> always fall into the first case and never reach the case that applies 
> math.sqrt(minDist). The correct fix is to match EuclideanDistanceMetric 
> first and SquaredEuclideanDistanceMetric after it.
> P.S. Because EuclideanDistanceMetric is more expensive to compute and is the 
> default DistanceMetric, this can cause some performance degradation for KNN 
> under default parameters.
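
The same pitfall rendered in Java terms (the original is a Scala pattern match; the imports assume the flink-ml distance-metric classes are on the classpath):

{code:java}
import org.apache.flink.ml.metrics.distances.DistanceMetric;
import org.apache.flink.ml.metrics.distances.EuclideanDistanceMetric;
import org.apache.flink.ml.metrics.distances.SquaredEuclideanDistanceMetric;

public class MetricMatchOrder {
    static double adjust(DistanceMetric metric, double minDist) {
        // The subtype must be checked first: since EuclideanDistanceMetric
        // extends SquaredEuclideanDistanceMetric, testing the supertype first
        // would always match and the Math.sqrt branch would never be reached.
        if (metric instanceof EuclideanDistanceMetric) {
            return Math.sqrt(minDist);
        } else if (metric instanceof SquaredEuclideanDistanceMetric) {
            return minDist;
        }
        return minDist;
    }
}
{code}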



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (FLINK-4402) Wrong metrics parameter names in documentation

2016-08-17 Thread Neelesh Srinivas Salian (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neelesh Srinivas Salian reassigned FLINK-4402:
--

Assignee: Neelesh Srinivas Salian

> Wrong metrics parameter names in documentation 
> ---
>
> Key: FLINK-4402
> URL: https://issues.apache.org/jira/browse/FLINK-4402
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.1.1
> Environment: all
>Reporter: RWenden
>Assignee: Neelesh Srinivas Salian
>Priority: Trivial
> Fix For: 1.1.2
>
>   Original Estimate: 4h
>  Remaining Estimate: 4h
>
> On the page 
> https://ci.apache.org/projects/flink/flink-docs-master/apis/metrics.html
> the following metrics parameters must be corrected to make it work on Flink 
> 1.1.1:
> faulty: metrics.scope.tm.task, should be metrics.scope.task
> faulty: metrics.scope.tm.operator, should be metrics.scope.operator
> Alternatively, the constants in ConfigConstants.java could be changed to 
> match the documentation. Either way...
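
A minimal sketch of setting the corrected keys programmatically; the scope format strings below are placeholders for illustration, not the actual defaults:

{code:java}
import org.apache.flink.configuration.Configuration;

public class MetricsScopeConfig {
    public static void main(String[] args) {
        Configuration config = new Configuration();
        // Corrected keys: no "tm." segment.
        config.setString("metrics.scope.task",
                "<host>.taskmanager.<tm_id>.<job_name>.<task_name>.<subtask_index>");
        config.setString("metrics.scope.operator",
                "<host>.taskmanager.<tm_id>.<job_name>.<operator_name>.<subtask_index>");
    }
}
{code}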



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4402) Wrong metrics parameter names in documentation

2016-08-17 Thread Neelesh Srinivas Salian (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15425084#comment-15425084
 ] 

Neelesh Srinivas Salian commented on FLINK-4402:


Thank you. Posting a PR soon.

> Wrong metrics parameter names in documentation 
> ---
>
> Key: FLINK-4402
> URL: https://issues.apache.org/jira/browse/FLINK-4402
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.1.1
> Environment: all
>Reporter: RWenden
>Priority: Trivial
> Fix For: 1.1.2
>
>   Original Estimate: 4h
>  Remaining Estimate: 4h
>
> On the page 
> https://ci.apache.org/projects/flink/flink-docs-master/apis/metrics.html
> the following metrics parameters must be corrected to make it work on Flink 
> 1.1.1:
> faulty: metrics.scope.tm.task, should be metrics.scope.task
> faulty: metrics.scope.tm.operator, should be metrics.scope.operator
> Alternatively, the constants in ConfigConstants.java could be changed to 
> match the documentation. Either way...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4402) Wrong metrics parameter names in documentation

2016-08-16 Thread Neelesh Srinivas Salian (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15423395#comment-15423395
 ] 

Neelesh Srinivas Salian commented on FLINK-4402:


If you don't mind, shall I work on this JIRA? 

> Wrong metrics parameter names in documentation 
> ---
>
> Key: FLINK-4402
> URL: https://issues.apache.org/jira/browse/FLINK-4402
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.1.1
> Environment: all
>Reporter: RWenden
>Priority: Trivial
> Fix For: 1.1.2
>
>   Original Estimate: 4h
>  Remaining Estimate: 4h
>
> On the page 
> https://ci.apache.org/projects/flink/flink-docs-master/apis/metrics.html
> the following metrics parameters must be corrected to make it work on Flink 
> 1.1.1:
> faulty: metrics.scope.tm.task, should be metrics.scope.task
> faulty: metrics.scope.tm.operator, should be metrics.scope.operator
> Alternatively, the constants in ConfigConstants.java could be changed to 
> match the documentation. Either way...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4278) Unclosed FSDataOutputStream in multiple files in the project

2016-08-15 Thread Neelesh Srinivas Salian (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15421137#comment-15421137
 ] 

Neelesh Srinivas Salian commented on FLINK-4278:


Thanks [~tedyu]. I'll post a PR soon. 

> Unclosed FSDataOutputStream in multiple files in the project
> 
>
> Key: FLINK-4278
> URL: https://issues.apache.org/jira/browse/FLINK-4278
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.0.3
>Reporter: Neelesh Srinivas Salian
>Assignee: Neelesh Srinivas Salian
>
> After FLINK-4259, I did a check and found that the following files do not 
> close the FSDataOutputStream.
> The following are the files and the corresponding methods missing the close() 
> 1) FSDataOutputStream.java adding a close method in the abstract class
> 2) FSStateBackend flush() and write() - closing the FSDataOutputStream
> 3) StringWriter.java write()
> 4) FileSystemStateStore putState() -  closing the FSDataOutputStream
> 5) HadoopDataOutputStream.java: not too sure if this needs closing.
> 6) FileSystemStateStorageHelper.java store() needs closing for both the 
> outStream and the ObjectOutputStream
> The options to consider would be to either call close() explicitly or use 
> IOUtils.closeQuietly().
> Any thoughts?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4278) Unclosed FSDataOutputStream in multiple files in the project

2016-07-28 Thread Neelesh Srinivas Salian (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398081#comment-15398081
 ] 

Neelesh Srinivas Salian commented on FLINK-4278:


[~tedyu], any thoughts on how this can be achieved?
I can post a PR based on the discussion.

> Unclosed FSDataOutputStream in multiple files in the project
> 
>
> Key: FLINK-4278
> URL: https://issues.apache.org/jira/browse/FLINK-4278
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.0.3
>Reporter: Neelesh Srinivas Salian
>Assignee: Neelesh Srinivas Salian
>
> After FLINK-4259, I did a check and found that the following files don't have 
> the closing of the FSDataOutputStream.
> The following are the files and the corresponding methods missing the close() 
> 1) FSDataOutputStream.java adding a close method in the abstract class
> 2) FSStateBackend flush() and write() - closing the FSDataOutputStream
> 3) StringWriter.java write()
> 4) FileSystemStateStore putState() -  closing the FSDataOutputStream
> 5) HadoopDataOutputStream.java not too sure if this needs closing.
> 6) FileSystemStateStorageHelper.java store() need closing for both outStream 
> and the ObjectOutputStream
> The options to think would be to either close or use  IOUtils.closeQuietly() 
> Any thoughts?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4278) Unclosed FSDataOutputStream in multiple files in the project

2016-07-28 Thread Neelesh Srinivas Salian (JIRA)
Neelesh Srinivas Salian created FLINK-4278:
--

 Summary: Unclosed FSDataOutputStream in multiple files in the 
project
 Key: FLINK-4278
 URL: https://issues.apache.org/jira/browse/FLINK-4278
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.0.3
Reporter: Neelesh Srinivas Salian
Assignee: Neelesh Srinivas Salian


After FLINK-4259, I did a check and found that the following files do not 
close the FSDataOutputStream.

The following are the files and the corresponding methods missing the close() 
1) FSDataOutputStream.java adding a close method in the abstract class
2) FSStateBackend flush() and write() - closing the FSDataOutputStream
3) StringWriter.java write()
4) FileSystemStateStore putState() -  closing the FSDataOutputStream

5) HadoopDataOutputStream.java: not too sure if this needs closing.
6) FileSystemStateStorageHelper.java store() needs closing for both the 
outStream and the ObjectOutputStream

The options to consider would be to either call close() explicitly or use 
IOUtils.closeQuietly().

Any thoughts?
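
For illustration, a sketch of both options against a hypothetical write path (try-with-resources for the normal path, closeQuietly for cleanup):

{code:java}
import java.io.IOException;
import org.apache.flink.core.fs.FSDataOutputStream;
import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.core.fs.Path;

public class CloseStreamSketch {
    static void writeBytes(FileSystem fs, Path path, byte[] data) throws IOException {
        // Option 1: try-with-resources closes the stream even if write() fails.
        try (FSDataOutputStream out = fs.create(path, true)) {
            out.write(data);
            out.flush();
        }
        // Option 2, for cleanup paths where a failing close() should not mask
        // the original exception:
        //   org.apache.commons.io.IOUtils.closeQuietly(out);
    }
}
{code}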



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4047) Fix documentation about determinism of KeySelectors

2016-07-24 Thread Neelesh Srinivas Salian (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391342#comment-15391342
 ] 

Neelesh Srinivas Salian commented on FLINK-4047:


Hi [~fhueske], could I work on this JIRA?

> Fix documentation about determinism of KeySelectors
> ---
>
> Key: FLINK-4047
> URL: https://issues.apache.org/jira/browse/FLINK-4047
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.1.0, 1.0.3
>Reporter: Fabian Hueske
>  Labels: starter
> Fix For: 1.1.0, 1.0.3
>
>
> KeySelectors must return deterministic keys, i.e., if invoked multiple times 
> on the same object, the returned key must be the same.
> The documentation about this aspect is broken ("The key can be of any type 
> and be derived from arbitrary computations.").
> We need to fix the JavaDocs of the {{KeySelector}} interface and the web 
> documentation 
> (https://ci.apache.org/projects/flink/flink-docs-master/apis/common/index.html#specifying-keys).
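
To make the contract concrete, a small sketch of a deterministic KeySelector next to one that violates it:

{code:java}
import org.apache.flink.api.java.functions.KeySelector;

public class KeySelectors {
    // Fine: the same input object always yields the same key.
    static final KeySelector<String, Integer> BY_LENGTH =
            new KeySelector<String, Integer>() {
                @Override
                public Integer getKey(String value) {
                    return value.length();
                }
            };

    // Broken: repeated invocations on the same object return different keys,
    // which violates the determinism contract described above.
    static final KeySelector<String, Double> RANDOM_KEY =
            new KeySelector<String, Double>() {
                @Override
                public Double getKey(String value) {
                    return Math.random();
                }
            };
}
{code}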



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2032) Migrate integration tests from temp output files to collect()

2016-07-24 Thread Neelesh Srinivas Salian (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391343#comment-15391343
 ] 

Neelesh Srinivas Salian commented on FLINK-2032:


Hi, [~fhueske], checking if this issue is still valid. If yes, I would like to 
help.

> Migrate integration tests from temp output files to collect()
> -
>
> Key: FLINK-2032
> URL: https://issues.apache.org/jira/browse/FLINK-2032
> Project: Flink
>  Issue Type: Task
>  Components: Tests
>Affects Versions: 0.9
>Reporter: Fabian Hueske
>Priority: Minor
>  Labels: starter
>
> Most of Flink's integration tests that execute full Flink programs and check 
> their results are implemented by writing results to temporary output file and 
> comparing the content of the file to a provided set of expected Strings. 
> Flink's test utils make this quite comfortable and hide a lot of the 
> complexity of this approach. Nonetheless, this approach has a few drawbacks:
> - increased latency by going through disk
> - comparison is on String representation of objects
> - depends on the file system
> Since Flink's {{collect()}} feature was added, the temp file approach is not 
> the best approach anymore. Instead, tests can collect the result of a Flink 
> program directly as objects and compare these against a set of expected 
> objects.
> It would be good to migrate the existing test base to use {{collect()}} 
> instead of temporary output files.
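
A minimal sketch of the migration direction (illustrative test shape, not an actual Flink test):

{code:java}
import java.util.Arrays;
import java.util.List;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

public class CollectInsteadOfTempFile {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        DataSet<Integer> result = env.fromElements(1, 2, 3)
                .map(new MapFunction<Integer, Integer>() {
                    @Override
                    public Integer map(Integer value) {
                        return value * 2;
                    }
                });

        // Old style (omitted): result.writeAsText(tempPath); env.execute();
        // then read the temp file back and compare its lines as strings.

        // New style: collect() returns the result as objects, no file system involved.
        List<Integer> collected = result.collect();
        if (!collected.containsAll(Arrays.asList(2, 4, 6))) {
            throw new AssertionError("unexpected result: " + collected);
        }
    }
}
{code}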



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-992) Create CollectionDataSets by reading (client) local files.

2016-07-24 Thread Neelesh Srinivas Salian (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391284#comment-15391284
 ] 

Neelesh Srinivas Salian commented on FLINK-992:
---

Shall I begin to work on this if no one else is?


> Create CollectionDataSets by reading (client) local files.
> --
>
> Key: FLINK-992
> URL: https://issues.apache.org/jira/browse/FLINK-992
> Project: Flink
>  Issue Type: New Feature
>  Components: DataSet API, Python API
>Reporter: Fabian Hueske
>Assignee: niraj rai
>Priority: Minor
>  Labels: starter
>
> {{CollectionDataSets}} are a nice way to feed data into programs.
> We could add support to read a client-local file at program construction time 
> using a FileInputFormat, put its data into a CollectionDataSet, and ship its 
> data together with the program.
> This would remove the need to upload small files into DFS which are used 
> together with some large input (stored in DFS).
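
A sketch of roughly this behaviour with what exists today, reading a client-local file at plan-construction time (the path is hypothetical):

{code:java}
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

public class ClientLocalCollection {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Read on the client, before the plan is submitted.
        List<String> lines = Files.readAllLines(Paths.get("/tmp/small-input.txt"));

        // Shipped with the program; no DFS upload needed for a small side input.
        DataSet<String> smallSide = env.fromCollection(lines);
        smallSide.print();
    }
}
{code}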



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2042) Increase font size for stack figure on website home

2016-07-24 Thread Neelesh Srinivas Salian (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391283#comment-15391283
 ] 

Neelesh Srinivas Salian commented on FLINK-2042:


Hi [~StephanEwen], does this still need improving?


> Increase font size for stack figure on website home
> ---
>
> Key: FLINK-2042
> URL: https://issues.apache.org/jira/browse/FLINK-2042
> Project: Flink
>  Issue Type: Improvement
>  Components: Project Website
>Reporter: Fabian Hueske
>Priority: Minor
>  Labels: starter
>
> The font of the stack figure on the website home is quite small and could be 
> increased.
> The image is also a bit blurred.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4226) Typo: Define Keys using Field Expressions example should use window and not reduce

2016-07-24 Thread Neelesh Srinivas Salian (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391200#comment-15391200
 ] 

Neelesh Srinivas Salian commented on FLINK-4226:


Put in a fix. Requesting review.

> Typo: Define Keys using Field Expressions example should use window and not 
> reduce
> --
>
> Key: FLINK-4226
> URL: https://issues.apache.org/jira/browse/FLINK-4226
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.0.3
>Reporter: Ahmad Ragab
>Priority: Trivial
>  Labels: documentation
>
> ...
> {code:java}
> val words: DataStream[WC] = // [...]
> val wordCounts = words.keyBy("word").window(/*window specification*/)
> // or, as a case class, which is less typing
> case class WC(word: String, count: Int)
> val words: DataStream[WC] = // [...]
> val wordCounts = words.keyBy("word").reduce(/*window specification*/)
> {code}
> Should be: 
> val wordCounts = words.keyBy("word").-reduce- window(/*window specification*/)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)