Re: Submitting an app via API

2017-02-06 Thread Flavio Pompermaier
Ok thanks!
That's super useful

On 7 Feb 2017 08:51, "Ufuk Celebi"  wrote:

Hey Flavio!

Yes, the REST API call is the same and the environment for batch
is RemoteEnvironment.

– Ufuk

On 6 February 2017 at 19:18:31, Flavio Pompermaier (pomperma...@okkam.it)
wrote:
> Hi Ufuk,
> does it work also for batch?
>
> On Mon, Feb 6, 2017 at 5:06 PM, Ufuk Celebi wrote:
>
> > You can use RemoteStreamEnvironment or the REST APIs
> > (https://ci.apache.org/projects/flink/flink-docs-
> > release-1.3/monitoring/rest_api.html#submitting-programs).
> >
> > On Sun, Feb 5, 2017 at 4:43 PM, Luqman Ghani wrote:
> > > Hi,
> > >
> > > On quickstart page of Flink docs, it suggests starting a Flink app
with
> > > "bin/flink" command on command line. Is there any other way of
> > submitting to
> > > a cluster of flink, that is, through API call within a program, or
> > through
> > > server request?
> > >
> > > Thanks,
> > > Luqman
> >
>
>
>
> --
>
> Flavio Pompermaier
>
> *Development Department*___
> *OKKAM**Srl **- www.okkam.it *
>
> *Phone:* +(39) 0461 1823908
> *Fax:* + (39) 0461 186 6433
> *Email:* pomperma...@okkam.it
> *Headquarters:* Trento (Italy), via G.B. Trener 8
> *Registered office:* Trento (Italy), via Segantini 23
>
> Confidentiality notice. This e-mail transmission may contain legally
> privileged and/or confidential information. Please do not read it if you
> are not the intended recipient(s). Any use, distribution, reproduction or
> disclosure by any other person is strictly prohibited. If you have
received
> this e-mail in error, please notify the sender and destroy the original
> transmission and its attachments without reading or saving it in any
manner.
>



Re: Table API: java.sql.DateTime is not supported;

2017-02-06 Thread Fabian Hueske
Hi,

you can also use the CsvTableSource and read the DateTime fields as String.
This will directly give you a table. You can implement a user-defined
scalar function [1] to parse the String into a DateTime type.

The benefit is that you stay in the Table API / SQL and don't have to deal
with the DataSet or DataStream API and the conversion.

Best, Fabian

[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/table_api.html#user-defined-scalar-functions

2017-02-07 3:16 GMT+01:00 nsengupta :

> Hello Timo,
>
> Thanks for the clarification.
>
> This means that I *cannot use CsvTableSource*, as I have, in the example.
> Instead, I should:
>
>  *   Write custom Scalar function to convert STRINGs to other datatypes as
> required
>  *   Read the file as CsvInput, with all fields as STRINGs
>  *   Apply the Scalar function as appropriate and map() to a desired
> *DataSet* type
>  *   /Convert/ the DataSet to a Table
>  *   Use SQL to access the Table
>
> Is my understanding correct?
>
> -- Nirmalya
>
>
>
> --
> View this message in context: http://apache-flink-user-
> mailing-list-archive.2336050.n4.nabble.com/Table-API-java-
> sql-DateTime-is-not-supported-tp11439p11480.html
> Sent from the Apache Flink User Mailing List archive. mailing list archive
> at Nabble.com.
>
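
A minimal sketch of the parsing logic such a scalar function would wrap. This is a Python stand-in for illustration only — the actual Table API scalar function would be written in Java or Scala, and the format string is an assumption based on the sample record discussed in this thread:

```python
from datetime import datetime

def parse_event_time(raw: str) -> datetime:
    """Parse a CSV field read as String, e.g. "4/1/2014 0:11:00".

    In the Table API this logic would live inside a Java/Scala scalar
    function; the format string is assumed from the sample data."""
    return datetime.strptime(raw, "%m/%d/%Y %H:%M:%S")

ts = parse_event_time("4/1/2014 0:11:00")
print(ts.isoformat(sep=" "))  # 2014-04-01 00:11:00
```

Registering the equivalent Java/Scala function in the TableEnvironment then lets the conversion stay entirely inside Table API / SQL queries, as Fabian suggests.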



Re: Javadoc

2017-02-06 Thread Ted Yu
See this thread:
http://search-hadoop.com/m/Flink/VkLeQV4Igl2GAJjs?subj=JavaDoc+404

On Mon, Feb 6, 2017 at 2:08 PM, Colin Breame  wrote:

> Hello Ted,
>
> Thanks for the link.  That looks like it but it renders rather badly
> in my browser.
>
> There are two 404s on the page which is likely causing the issue:
>
>   https://ci.apache.org/projects/flink/flink-docs-
> master/api/java/stylesheet.css
>   https://ci.apache.org/projects/flink/flink-docs-
> master/api/java/script.js
>
> Any idea who to report to?
>
>
> On 6 February 2017 at 22:00, Ted Yu  wrote:
> > Have you looked under
> > https://ci.apache.org/projects/flink/flink-docs-master/api/java ?
> >
> > e.g. suppose you look for API for ByteArrayOutputStreamWithPos:
> >
> > https://ci.apache.org/projects/flink/flink-docs-
> master/api/java/org/apache/flink/core/memory/ByteArrayOutputStreamWithPos.
> html
> >
> > On Mon, Feb 6, 2017 at 1:24 PM, Colin Breame 
> wrote:
> >>
> >> Hello,
> >>
> >> I'm looking for the javadoc for the APIs.  Can someone tell me if this
> >> is hosted on the internet anywhere?
> >>
> >> Thanks
> >> Colin
> >
> >
>




Dealing with latency in Sink

2017-02-06 Thread Mohit Anchlia
What is the best way to dynamically adapt and tune down the number of tasks
created to write/read to a sink when the sink slows down or the latency to the
sink increases? I am looking at the sink interface but don't see a way to
influence flink to reduce the number of tasks or throttle the volume down
to the sink. What is the best way to deal with this scenario?
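
There is no built-in knob for this in the sink interface; the usual approach is to block inside the sink's invoke(), which propagates backpressure upstream. A rough sketch of latency-aware throttling — illustrative Python only, and the class name, parameters, and smoothing scheme are all invented for this example:

```python
import time

class ThrottlingSink:
    """Illustrative sketch: pause between writes when the observed sink
    latency rises above a target. All names and parameters are invented."""

    def __init__(self, write, target_latency_s=0.05, alpha=0.2):
        self.write = write          # the actual client call the sink makes
        self.target = target_latency_s
        self.alpha = alpha
        self.ewma = 0.0             # smoothed estimate of write latency

    def invoke(self, record):
        start = time.monotonic()
        self.write(record)
        latency = time.monotonic() - start
        self.ewma = self.alpha * latency + (1 - self.alpha) * self.ewma
        # Blocking here slows this operator down; in Flink that blocking
        # propagates backpressure to upstream tasks automatically.
        if self.ewma > self.target:
            time.sleep(min(self.ewma - self.target, 1.0))
```

The design choice is to let the runtime's backpressure mechanism do the scaling rather than changing the number of tasks at runtime, which the API does not support.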




Re: Bug in Table api CsvTableSink

2017-02-06 Thread Flavio Pompermaier
The error was just caused by a cut-and-paste into the shell instead of
opening the file directly, so the bug is solved :)

On Mon, Feb 6, 2017 at 12:20 PM, Fabian Hueske  wrote:

> Hi Flavio,
>
> I checked the records on the current master and the CsvTableSink seemed to
> work fine.
> However, I had some issues when converting the DataSet[Row] into a table.
> You have to make sure that the TypeInformation for DataSet[Row] is
> RowTypeInfo and not GenericType[Row].
>
> Can you check the type of your myDataSet?
>
> Best, Fabian
>
> 2017-01-31 17:51 GMT+01:00 Flavio Pompermaier :
>
>> These 2 rows if converted to Row[] of Strings should cause the problem:
>>
>> http://www.aaa.it/xxx/v/10002780063t/000/1,f/10001957530,cf/13,cpva/77,cf/13,,sit/A2046X,strp/408,10921957530,,1,5,1,2013-01-04T15:02:25,5,,10002780063,XXX,1,,3,,,2013-01-04T15:02:25,XXX,XXX,13,2013-01-04T15:02:25
>> http://www.aaa.it/xxx/v/10002780063t/000/1,f/10004002060,cf/3,cpva/7,cf/3,f/10164002060,sit/A15730L,strp/408,10164002060,10164002060,2,7,1,2008-05-29T11:47:35,1,,10002780063,XXX,1,,0,,,2008-05-29T11:47:35,XXX,XXX,3,2008-05-29T11:47:35
>>
>> Best,
>> Flavio
>>
>> On Tue, Jan 31, 2017 at 4:51 PM, Flavio Pompermaier > > wrote:
>>
>>> I hope to have time to write a test program :)
>>> Otherwise I hope someone else could give it a try in the meantime..
>>>
>>> Best,
>>> Flavio
>>>
>>> On Tue, Jan 31, 2017 at 4:49 PM, Fabian Hueske 
>>> wrote:
>>>
 Hi Flavio,

 I do not remember that such a bug was fixed. Maybe by chance, but I
 guess not.
 Can you open a JIRA and maybe provide input data to reproduce the
 problem?

 Thank you,
 Fabian

 2017-01-31 16:25 GMT+01:00 Flavio Pompermaier :

> Hi to all,
> I'm trying to read from a db and then writing to a csv.
> In my code I do the following:
>
> tableEnv.fromDataSet(myDataSet).writeToSink(new
> CsvTableSink(csvOutputDir, fieldDelim));
>
> If I use fieldDelim= "," everything is Ok, if I use "\t" some tab is
> not printed correctly...
> PS: myDataSet is a dataset of 32 String fields.
>
> Is this something that has been fixed in Flink > 1.1.1?
>
> Best,
> Flavio
>
>
>>>
>>


Re: Submitting an app via API

2017-02-06 Thread Luqman Ghani
Hi,

Thanks a lot.

On Mon, Feb 6, 2017 at 9:06 PM, Ufuk Celebi  wrote:

> You can use RemoteStreamEnvironment or the REST APIs
> (https://ci.apache.org/projects/flink/flink-docs-
> release-1.3/monitoring/rest_api.html#submitting-programs).
>
> On Sun, Feb 5, 2017 at 4:43 PM, Luqman Ghani  wrote:
> > Hi,
> >
> > On quickstart page of Flink docs, it suggests starting a Flink app with
> > "bin/flink" command on command line. Is there any other way of
> submitting to
> > a cluster of flink, that is, through API call within a program, or
> through
> > server request?
> >
> > Thanks,
> > Luqman
>
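
For reference, the REST flow in the linked docs is two calls: upload the jar, then trigger a run. A sketch of building the run URL — Python for illustration; the endpoint and query-parameter names are taken from the 1.3 monitoring REST API docs linked above and should be double-checked against your Flink version:

```python
from urllib.parse import urlencode

def build_run_url(host, port, jar_id, entry_class=None,
                  program_args=None, parallelism=None):
    """Build the URL for POST /jars/:jarid/run of the monitoring REST API.

    Endpoint and parameter names follow the 1.3 docs linked above; verify
    them against your Flink version before relying on this."""
    params = {}
    if entry_class:
        params["entry-class"] = entry_class
    if program_args:
        params["program-args"] = " ".join(program_args)
    if parallelism:
        params["parallelism"] = str(parallelism)
    query = "?" + urlencode(params) if params else ""
    return "http://{}:{}/jars/{}/run{}".format(host, port, jar_id, query)

url = build_run_url("localhost", 8081, "myjob.jar",
                    entry_class="com.example.Main", parallelism=4)
print(url)  # http://localhost:8081/jars/myjob.jar/run?entry-class=com.example.Main&parallelism=4
```

The jar itself would first be uploaded with a multipart POST to /jars/upload; the jar id returned there is what goes into the run URL.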



Re: Many streaming jobs vs one

2017-02-06 Thread Ufuk Celebi
As Jonas said, for job upgrades having a single job that "multiplexes"
multiple jobs means that all jobs will be offline at the same time. If
all jobs share a single Flink cluster, it should be fine to use
multiple jobs that share the resources. A downside of this will be
that managing multiple jobs is probably harder than managing just a
single job (keeping track of the state, monitoring, etc.).

On Sun, Feb 5, 2017 at 10:43 PM, Jonas  wrote:
> I recommend multiple Jobs. You can still share most of the code by creating
> Java / Scala packages. This makes it easier to update jobs.
>
>
>
> --
> View this message in context: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Many-streaming-jobs-vs-one-tp11449p11450.html
> Sent from the Apache Flink User Mailing List archive. mailing list archive at 
> Nabble.com.


Re: To get Schema for jdbc database in Flink

2017-02-06 Thread Ufuk Celebi
I'm not sure how well this works for the streaming API. Looping in
Chesnay, who worked on this.

On Mon, Feb 6, 2017 at 11:09 AM, Punit Tandel  wrote:
> Hi ,
>
> I was looking into the Flink streaming API and trying to implement a solution
> for reading data from a JDBC database and writing it to a JDBC database
> again.
>
> At the moment I can see the datastream is returning Row from the database.
> dataStream.getType().getGenericParameters() is returning an empty
> collection.
>
> I am right now manually creating a database connection and getting the
> schema from ResultMetadata and constructing the schema for the table which
> is a bit heavy operation.
>
> So is there any other way to get the schema for the table in order to create
> a new table and write those records in the database?
>
> Please let me know
>
> Thanks
> Punit
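
The ResultSetMetaData lookup Punit describes is the standard approach; most drivers expose per-column metadata cheaply once a statement is prepared, so it need not require a hand-rolled second connection. A Python/sqlite3 stand-in for the idea (JDBC itself is Java; this just mirrors the metadata lookup):

```python
import sqlite3

# Python/sqlite3 stand-in for the JDBC ResultSetMetaData lookup: prepare a
# query against the source table and read per-column metadata from the
# cursor instead of fetching any rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER, name TEXT)")
cur = conn.execute("SELECT * FROM person LIMIT 0")  # no rows, only metadata
columns = [d[0] for d in cur.description]
print(columns)  # ['id', 'name']
```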


Re: Parallelism and max-parallelism

2017-02-06 Thread Dmitry Golubets
Thanks!

Best regards,
Dmitry

On Mon, Feb 6, 2017 at 3:47 PM, Ufuk Celebi  wrote:

> Could you have a look at these PRs please:
>
> https://github.com/apache/flink/pull/3259
>
> https://github.com/apache/flink/pull/3258
>
> If you find that anything is missing, feel free to report it here.
>
> The PRs will be merged later today.
>
>
> On Mon, Feb 6, 2017 at 4:41 PM, Dmitry Golubets 
> wrote:
> > Hi guys,
> >
> > I would appreciate if someone could explain to me what's the difference
> > between those two.
> >
> > The current description refers to "dynamic scaling", and yet I can't find
> > anything about it in Flink's docs.
> >
> > Best regards,
> > Dmitry
>
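
For readers finding this thread later: parallelism is the actual number of parallel subtasks, while max-parallelism bounds future rescaling ("dynamic scaling") by fixing the number of key groups into which keyed state is partitioned. A simplified sketch of the assignment, modeled on Flink's KeyGroupRangeAssignment — the real code murmur-hashes the key's hashCode first, so treat this as an approximation:

```python
def key_group(key_hash: int, max_parallelism: int) -> int:
    # Each key maps to one of max_parallelism key groups (simplified:
    # Flink additionally murmur-hashes the key's hashCode first).
    return key_hash % max_parallelism

def operator_index(kg: int, parallelism: int, max_parallelism: int) -> int:
    # Key groups are split as evenly as possible over the running subtasks.
    return kg * parallelism // max_parallelism

# With max-parallelism 128 and parallelism 4, each subtask owns 32 key
# groups; rescaling to parallelism 8 just re-splits the same 128 groups.
owners = sorted({operator_index(kg, 4, 128) for kg in range(128)})
print(owners)  # [0, 1, 2, 3]
```

Because key groups are the unit of state redistribution, a job can later be rescaled to any parallelism up to max-parallelism without rehashing individual keys.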



Re: Chicago Hands on Apache Flink Workshop

2017-02-06 Thread Ufuk Celebi
Sounds very interesting. Thanks for sharing this and wish you all a great time.

On Thu, Feb 2, 2017 at 2:56 AM, Trevor Grant  wrote:
> Any one who is going to be in or around Chicago 2/21:
>
> Joe Olson is putting on a workshop for our local Flink meeup- drop by if you
> can!
>
> https://www.meetup.com/Chicago-Apache-Flink-Meetup-CHAF/events/237385428/
>
> Trevor Grant
> Data Scientist
> https://github.com/rawkintrevo
> http://stackexchange.com/users/3002022/rawkintrevo
> http://trevorgrant.org
>
> "Fortunate is he, who is able to know the causes of things."  -Virgil
>


Re: JavaDoc 404

2017-02-06 Thread Ufuk Celebi
Thanks for reporting this. I think Robert (cc'd) is working on fixing
this, correct?

On Sat, Feb 4, 2017 at 12:12 PM, Yassine MARZOUGUI
 wrote:
> Hi,
>
> The JavaDoc link of BucketingSink in this page[1] yields to a 404 error. I
> couldn't find the correct url.
> The broken link :
> https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/streaming/connectors/fs/bucketing/BucketingSink.html
>
> Other pages in the JavaDoc, like this one[2], seem to lack formatting,
> because
> https://ci.apache.org/projects/flink/flink-docs-master/api/java/stylesheet.css
> and
> https://ci.apache.org/projects/flink/flink-docs-master/api/java/script.js
> are not found (404).
>
> Best,
> Yassine
>
>
> [1] :
> https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/connectors/filesystem_sink.html
> [2] :
> https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/streaming/api/functions/sink/RichSinkFunction.html



stream clustering in flink

2017-02-06 Thread Jan Nehring

Hi,

we want to cluster a stream of Tweets using Flink. Every incoming tweet 
is compared to the last 100 tweets. After this comparison, a cluster ID 
is assigned to the tweet. We try to find out the best approach how to 
solve this:


1. Using a stream window of the last tweets seems to be difficult 
because we would need to cross join this window with every incoming 
tweet. According to my research the Flink API does not support cross 
joins on stream windows.
2. We could also store the last 100 tweets in one operator with 
parallelism=1. This would work but it introduces a bottleneck.
3. We could share the last 100 tweets as a "shared state" among the 
operator that assigns the cluster. But every tweet changes the state so 
there would be a lot of synchronization effort between the operators.


Are you aware of other possible solutions? Currently solution #2 seems 
the most promising to me but I do not like the bottleneck.


Best regards Jan
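
A sketch of option #2 from the list above: a single (parallelism = 1) operator keeping the last 100 tweets and assigning cluster IDs by nearest neighbour. The Jaccard similarity and the 0.5 threshold are placeholder assumptions, not part of the original question:

```python
from collections import deque

def jaccard(a: set, b: set) -> float:
    union = a | b
    return len(a & b) / len(union) if union else 0.0

class StreamClusterer:
    """Single-operator sketch (option #2): keep the last `window` tweets
    and reuse a neighbour's cluster ID when similarity is high enough."""

    def __init__(self, window: int = 100, threshold: float = 0.5):
        self.recent = deque(maxlen=window)   # (token set, cluster id) pairs
        self.threshold = threshold
        self.next_id = 0

    def assign(self, text: str) -> int:
        tokens = set(text.lower().split())
        best = max(self.recent, key=lambda t: jaccard(tokens, t[0]),
                   default=None)
        if best is not None and jaccard(tokens, best[0]) >= self.threshold:
            cluster_id = best[1]
        else:
            cluster_id = self.next_id
            self.next_id += 1
        self.recent.append((tokens, cluster_id))
        return cluster_id
```

Wrapped in a parallelism-1 map operator this is exactly the bottleneck Jan describes; the deque state is tiny (100 token sets), so in practice the comparison cost, not the state, is what limits throughput.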



Re: Compiler error while using 'CsvTableSource'

2017-02-06 Thread nsengupta
Thanks, Timo.

Do I need to add anything to the ticket? Please let me know. I will do the
needful.

-- N

On Mon, Feb 6, 2017 at 2:25 PM, Timo Walther [via Apache Flink User Mailing
List archive.]  wrote:

> I created an issue to make this a bit more user-friendly in the future.
>
> https://issues.apache.org/jira/browse/FLINK-5714
>
> Timo
>
>
> Am 05/02/17 um 06:08 schrieb nsengupta:
>
> Thanks, Till, for taking time to share your understanding.
>
> -- N
>
> On Sun, Feb 5, 2017 at 12:49 AM, Till Rohrmann [via Apache Flink User
> Mailing List archive.] <[hidden email]
> > wrote:
>
>> I think the problem is that there are actually two constructors with the
>> same signature. The one is defined with default arguments and the other has
>> the same signature as the one with default arguments when you leave all
>> default arguments out. I assume that this confuses the Scala compiler and
>> only works if you've specified the right types or at least one of the
>> parameters with a default argument.
>>
>> Cheers,
>> Till
>>
>> On Fri, Feb 3, 2017 at 12:49 PM, nsengupta <[hidden email]
>> > wrote:
>>
>>> Till,
>>>
>>> Many thanks. Just to confirm that it is working fine at my end, here's a
>>> screenshot.
>>>
>>> >> bble.com/file/n11427/Selection_258.png>
>>>
>>> This is Flink 1.1.4 but Flink-1.2/Flink-1.3 shouldn't be any problem.
>>>
>>> It never struck me that lack of covariance in Scala Arrays was the
>>> source of
>>> the problem. Bravo!
>>>
>>> BTW, I am just curious to know how the Testcases worked: just to add to
>>> my
>>> knowledge of Scala. We didn't pass any /typehint/ to the compiler there!
>>>
>>> Could you please put a hint of a line or two? TIA.
>>>
>>>
>>>
>>>
>>>
>>> --
>>> View this message in context: http://apache-flink-user-maili
>>> ng-list-archive.2336050.n4.nabble.com/Compiler-error-while-
>>> using-CsvTableSource-tp11412p11427.html
>>> Sent from the Apache Flink User Mailing List archive. mailing list
>>> archive at Nabble.com.
>>>
>>
>>
>>
>>
>
>
>
> --
> Software Technologist
> http://www.linkedin.com/in/nirmalyasengupta
> "If you have built castles in the air, your work need not be lost. That is
> where they should be.
> Now put the foundation under them."
>
> --
> View this message in context: Re: Compiler error while using
> 'CsvTableSource'
> 
> Sent from the Apache Flink User Mailing List archive. mailing list archive
>  at
> Nabble.com.
>
>
>
>
>



-- 
Software Technologist
http://www.linkedin.com/in/nirmalyasengupta
"If you have built castles in the air, your work need not be lost. That is
where they should be.
Now put the foundation under them."

RE: 1.2 release date

2017-02-06 Thread Anton Solovev
Hi,

Could you update the list of contributors after that? ☺

Anton Solovev
Software Engineer

Office: +7 846 200 09 70 x 55621   
Email: anton_solo...@epam.com
Samara, Russia (GMT+4)   epam.com

CONFIDENTIALITY CAUTION AND DISCLAIMER
This message is intended only for the use of the individual(s) or entity(ies) 
to which it is addressed and contains information that is legally privileged 
and confidential. If you are not the intended recipient, or the person 
responsible for delivering the message to the intended recipient, you are 
hereby notified that any dissemination, distribution or copying of this 
communication is strictly prohibited. All unintended recipients are obliged to 
delete this message and destroy any printed copies.

From: Till Rohrmann [mailto:trohrm...@apache.org]
Sent: Monday, February 6, 2017 12:20 PM
To: user@flink.apache.org
Subject: Re: 1.2 release date

Hi Tarandeep,

afaik, Flink 1.2 will be released today.

Cheers,
Till

On Sun, Feb 5, 2017 at 10:00 PM, Tarandeep Singh 
> wrote:
Hi,

Looking forward to 1.2 version of Flink (lots of exciting features have been 
added).
Has the date finalized yet?

Thanks,
Tarandeep




Re: How about Discourse (https://www.discourse.org/) for this mailing list

2017-02-06 Thread Jonas
Instead of Nabble I will use PonyMail now :) Thanks. Didn't know it existed.



--
View this message in context: 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/How-about-Discourse-https-www-discourse-org-for-this-mailing-list-tp11448p11457.html
Sent from the Apache Flink User Mailing List archive. mailing list archive at 
Nabble.com.


Re: How about Discourse (https://www.discourse.org/) for this mailing list

2017-02-06 Thread Fabian Hueske
Hi Jonas,

thanks for the suggestion.
Critical infrastructure (repository, dev mailing list) of Apache projects
must be hosted on Apache infrastructure.
For example, Github is just mirroring the ASF git repositories.

We integrated the mailing lists with Nabble (user [1], dev [2]) and there
is also an Apache project PonyMail [3].

AFAIK, it would be possible to host the user mailing list somewhere else
(there is also a StackOverflow tag for Flink).
However, I see the risk of splitting the community.
The chances that questions can be answered would drop if there are too many
places to ask questions.

Best, Fabian

[1] http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
[2] http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/
[3] https://lists.apache.org/list.html?d...@flink.apache.org

2017-02-05 22:09 GMT+01:00 Jonas :

> https://www.discourse.org/about/ for the features
>
>
>
> --
> View this message in context: http://apache-flink-user-
> mailing-list-archive.2336050.n4.nabble.com/How-about-
> Discourse-https-www-discourse-org-for-this-mailing-list-tp11448.html
> Sent from the Apache Flink User Mailing List archive. mailing list archive
> at Nabble.com.
>


Re: Table API: java.sql.DateTime is not supported;

2017-02-06 Thread Timo Walther

Hi,

java.sql.Timestamp values have to have a format like "yyyy-mm-dd 
hh:mm:ss[.fff...]". In your case you need to parse this as a String and 
write your own scalar function for parsing.


Regards,
Timo


Am 04/02/17 um 17:46 schrieb nsengupta:

"4/1/2014 0:11:00",40.769,-73.9549,"B02512"





Re: Improving Flink Performance

2017-02-06 Thread Fabian Hueske
Hi Jonas,

thanks for reporting back!
Glad you solve the issue.

Cheers, Fabian

2017-02-05 22:07 GMT+01:00 Jonas :

> Using a profiler I found out that the main performance problem (80%) was
> spent in a domain specific data structure. After implementing it with a
> more
> efficient one, the performance problems are gone.
>
>
>
> --
> View this message in context: http://apache-flink-user-
> mailing-list-archive.2336050.n4.nabble.com/Improving-Flink-
> Performance-tp11248p11447.html
> Sent from the Apache Flink User Mailing List archive. mailing list archive
> at Nabble.com.
>



Re: allowed lateness on windowed join?

2017-02-06 Thread Fabian Hueske
Hi,

Union is a super cheap operator in Flink. It does not scan the records, but
just merges the streams. So the effort is very low.
The built-in join operator works in the same way but does not expose
allowed lateness.

Cheers, Fabian
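
A sketch of the union-then-join pattern Fabian describes: tag each input, union the streams, then group by key and window and pair the two sides yourself. Plain Python to show the dataflow; in Flink this would be a union plus a keyed window function, which is where custom allowed-lateness handling could be applied:

```python
from collections import defaultdict

def windowed_join(left, right, window_size):
    """Tag both inputs, union them, then group by (key, window) and pair
    the sides manually. Records are (key, value, timestamp) triples."""
    union = [("L", k, v, t) for (k, v, t) in left] + \
            [("R", k, v, t) for (k, v, t) in right]
    buckets = defaultdict(lambda: ([], []))
    for side, key, value, ts in union:
        l, r = buckets[(key, ts // window_size)]
        (l if side == "L" else r).append(value)
    # Emit the cross product of the two sides per (key, window) bucket.
    return {kw: [(a, b) for a in l for b in r]
            for kw, (l, r) in buckets.items() if l and r}

joined = windowed_join([("k", 1, 5)], [("k", "x", 7)], window_size=10)
print(joined)  # {('k', 0): [(1, 'x')]}
```

Because the union itself just merges the streams, the extra cost over the built-in join is only the hand-written pairing logic.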

