[jira] [Created] (FLINK-7660) Support sideOutput in ProcessAllWindowFunction

2017-09-20 Thread Bowen Li (JIRA)
Bowen Li created FLINK-7660:
---

 Summary: Support sideOutput in ProcessAllWindowFunction
 Key: FLINK-7660
 URL: https://issues.apache.org/jira/browse/FLINK-7660
 Project: Flink
  Issue Type: Improvement
  Components: DataStream API, Scala API
Reporter: Bowen Li
Assignee: Bowen Li
 Fix For: 1.4.0


Add side output support to {{ProcessAllWindowFunction}}.

This is a sibling ticket to FLINK-7635.
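
A hedged sketch of the intended usage, mirroring what FLINK-7635 adds for 
{{ProcessWindowFunction}}. The {{ctx.output(...)}} call on the all-window 
{{Context}} is the method this ticket would introduce, and {{input}} is an 
assumed {{DataStream<Integer>}}:

{code:java}
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.functions.windowing.ProcessAllWindowFunction;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

final OutputTag<Integer> largeValues = new OutputTag<Integer>("large-values") {};

SingleOutputStreamOperator<Integer> result = input
    .windowAll(TumblingEventTimeWindows.of(Time.seconds(10)))
    .process(new ProcessAllWindowFunction<Integer, Integer, TimeWindow>() {
        @Override
        public void process(Context ctx, Iterable<Integer> elements, Collector<Integer> out) {
            for (Integer value : elements) {
                if (value > 100) {
                    ctx.output(largeValues, value); // side output: the method to be added
                } else {
                    out.collect(value);
                }
            }
        }
    });

// elements routed to the side output are consumed as a separate stream
DataStream<Integer> largeValueStream = result.getSideOutput(largeValues);
{code}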



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7659) Unprotected access to inProgress in JobCancellationWithSavepointHandlers#handleNewRequest

2017-09-20 Thread Ted Yu (JIRA)
Ted Yu created FLINK-7659:
-

 Summary: Unprotected access to inProgress in 
JobCancellationWithSavepointHandlers#handleNewRequest
 Key: FLINK-7659
 URL: https://issues.apache.org/jira/browse/FLINK-7659
 Project: Flink
  Issue Type: Bug
Reporter: Ted Yu


Here is the related code, where {{inProgress}} is accessed without holding {{lock}}:
{code}
  } finally {
inProgress.remove(jobId);
  }
{code}
A little lower, in another finally block, there is:
{code}
  synchronized (lock) {
if (!success) {
  inProgress.remove(jobId);
{code}
which is correct.
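
A minimal sketch of the likely fix, mirroring the locking used in the second 
block (an assumption, not a committed patch):
{code}
  } finally {
    synchronized (lock) {
      inProgress.remove(jobId);
    }
  }
{code}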



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7658) Support COLLECT Aggregate function in Flink TABLE API

2017-09-20 Thread Shuyi Chen (JIRA)
Shuyi Chen created FLINK-7658:
-

 Summary: Support COLLECT Aggregate function in Flink TABLE API
 Key: FLINK-7658
 URL: https://issues.apache.org/jira/browse/FLINK-7658
 Project: Flink
  Issue Type: New Feature
  Components: Table API & SQL
Reporter: Shuyi Chen
Assignee: Shuyi Chen






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [DISCUSS] Dropping Scala 2.10

2017-09-20 Thread Aljoscha Krettek
Hi Federico,

As far as I know, the Kafka client code has been rewritten in Java as of 
version 0.9, meaning there is no longer a Scala dependency in there. Only the 
server (broker) code still contains Scala, but it doesn't matter what Scala 
version a client uses, if any.

Best,
Aljoscha 
> On 20. Sep 2017, at 14:32, Federico D'Ambrosio wrote:
> 
> Hi, as far as I know some vendors like Hortonworks still use Kafka_2.10 as 
> part of their Hadoop distribution. 
> Could the use of a different Scala version cause issues with the Kafka 
> connector? I'm asking because we are using HDP 2.6 and we already ran into 
> issues with conflicting Scala versions concerning Kafka (though we were 
> using Storm; I still haven't tested the Flink connector in this context).
> 
> Regards,
> Federico
> 
> 2017-09-20 14:19 GMT+02:00 Ted Yu:
> +1
> 
>  Original message 
> From: Hai Zhou
> Date: 9/20/17 12:44 AM (GMT-08:00)
> To: Aljoscha Krettek, dev@flink.apache.org, user
> Subject: Re: [DISCUSS] Dropping Scala 2.10
> 
> +1
> 
> > On 19 Sep 2017, at 17:56, Aljoscha Krettek wrote:
> > 
> > Hi,
> > 
> > Talking to some people I get the impression that Scala 2.10 is quite 
> > outdated by now. I would like to drop support for Scala 2.10 and my main 
> > motivation is that this would allow us to drop our custom Flakka build of 
> > Akka that we use because newer Akka versions only support Scala 2.11/2.12 
> > and we need a backported feature.
> > 
> > Are there any concerns about this?
> > 
> > Best,
> > Aljoscha
> 
> 
> 
> 
> -- 
> Federico D'Ambrosio



[jira] [Created] (FLINK-7657) SQL Timestamps Converted To Wrong Type By Optimizer Causing ClassCastException

2017-09-20 Thread Kent Murra (JIRA)
Kent Murra created FLINK-7657:
-

 Summary: SQL Timestamps Converted To Wrong Type By Optimizer 
Causing ClassCastException
 Key: FLINK-7657
 URL: https://issues.apache.org/jira/browse/FLINK-7657
 Project: Flink
  Issue Type: Bug
  Components: Table API & SQL
Reporter: Kent Murra
Priority: Critical


I have a SQL statement using the Table API that contains a timestamp literal. 
When the execution environment tries to optimize the SQL, it causes an 
exception (attached below). As a result, any SQL query with a timestamp, date, 
or time literal is unexecutable if any table source is marked with 
{{FilterableTableSource}}.

{code:none}
Exception in thread "main" java.lang.RuntimeException: Error while applying rule PushFilterIntoTableSourceScanRule, args [rel#30:FlinkLogicalCalc.LOGICAL(input=rel#29:Subset#0.LOGICAL,expr#0..1={inputs},expr#2=2017-05-01,expr#3=>($t1, $t2),data=$t0,last_updated=$t1,$condition=$t3), Scan(table:[test_table], fields:(data, last_updated))]
	at org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:235)
	at org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:650)
	at org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:368)
	at org.apache.flink.table.api.TableEnvironment.runVolcanoPlanner(TableEnvironment.scala:266)
	at org.apache.flink.table.api.BatchTableEnvironment.optimize(BatchTableEnvironment.scala:298)
	at org.apache.flink.table.api.BatchTableEnvironment.translate(BatchTableEnvironment.scala:328)
	at org.apache.flink.table.api.BatchTableEnvironment.writeToSink(BatchTableEnvironment.scala:135)
	at org.apache.flink.table.api.Table.writeToSink(table.scala:800)
	at org.apache.flink.table.api.Table.writeToSink(table.scala:773)
	at com.remitly.flink.TestReproductionApp$.delayedEndpoint$com$remitly$flink$TestReproductionApp$1(TestReproductionApp.scala:27)
	at com.remitly.flink.TestReproductionApp$delayedInit$body.apply(TestReproductionApp.scala:22)
	at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
	at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
	at scala.App$$anonfun$main$1.apply(App.scala:76)
	at scala.App$$anonfun$main$1.apply(App.scala:76)
	at scala.collection.immutable.List.foreach(List.scala:381)
	at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
	at scala.App$class.main(App.scala:76)
	at com.remitly.flink.TestReproductionApp$.main(TestReproductionApp.scala:22)
	at com.remitly.flink.TestReproductionApp.main(TestReproductionApp.scala)
Caused by: java.lang.ClassCastException: java.util.GregorianCalendar cannot be cast to java.util.Date
	at org.apache.flink.table.expressions.Literal.dateToCalendar(literals.scala:107)
	at org.apache.flink.table.expressions.Literal.toRexNode(literals.scala:80)
	at org.apache.flink.table.expressions.BinaryComparison$$anonfun$toRexNode$1.apply(comparison.scala:35)
	at org.apache.flink.table.expressions.BinaryComparison$$anonfun$toRexNode$1.apply(comparison.scala:35)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.immutable.List.foreach(List.scala:381)
	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
	at scala.collection.immutable.List.map(List.scala:285)
	at org.apache.flink.table.expressions.BinaryComparison.toRexNode(comparison.scala:35)
	at org.apache.flink.table.plan.rules.logical.PushFilterIntoTableSourceScanRule$$anonfun$1.apply(PushFilterIntoTableSourceScanRule.scala:92)
	at org.apache.flink.table.plan.rules.logical.PushFilterIntoTableSourceScanRule$$anonfun$1.apply(PushFilterIntoTableSourceScanRule.scala:92)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.Iterator$class.foreach(Iterator.scala:893)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
	at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
	at scala.collection.AbstractTraversable.map(Traversable.scala:104)
	at org.apache.flink.table.plan.rules.logical.PushFilterIntoTableSourceScanRule.pushFilterIntoScan(PushFilterIntoTableSourceScanRule.scala:92)
	at org.apache.flink.table.plan.rules.logical.PushFilterIntoTableSourceScanRule.onMatch(PushFilterIntoTableSourceScanRule.scala:56)
	at org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:211)
	... 19 more
{code}

I've done quite a bit of debugging on this and tracked it down to a problem 
with the way a Calcite AST is translated into an Expression tree for the

Re: [DISCUSS] Dropping Scala 2.10

2017-09-20 Thread Kostas Kloudas
+1

> On Sep 20, 2017, at 2:26 PM, Stefan Richter wrote:
> 
> +1



[jira] [Created] (FLINK-7656) Switch to user ClassLoader when invoking initializeOnMaster finalizeOnMaster

2017-09-20 Thread Fabian Hueske (JIRA)
Fabian Hueske created FLINK-7656:


 Summary: Switch to user ClassLoader when invoking 
initializeOnMaster finalizeOnMaster
 Key: FLINK-7656
 URL: https://issues.apache.org/jira/browse/FLINK-7656
 Project: Flink
  Issue Type: Bug
  Components: Local Runtime
Affects Versions: 1.3.2
Reporter: Fabian Hueske
Assignee: Fabian Hueske


The contract that Flink provides to usercode is that the usercode classloader 
is the context classloader whenever usercode is called.

In {{OutputFormatVertex}}, usercode is called in the {{initializeOnMaster()}} 
and {{finalizeOnMaster()}} methods, but the context classloader is not set to 
the usercode classloader.
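
A minimal sketch of the usual fix pattern (illustrative only; 
{{userCodeClassLoader}}, {{outputFormat}}, and {{parallelism}} are assumed to 
be in scope):

{code:java}
final ClassLoader original = Thread.currentThread().getContextClassLoader();
try {
    // make the usercode classloader the context classloader before calling usercode
    Thread.currentThread().setContextClassLoader(userCodeClassLoader);
    outputFormat.initializeGlobal(parallelism); // usercode call, e.g. from initializeOnMaster()
} finally {
    // always restore the original context classloader
    Thread.currentThread().setContextClassLoader(original);
}
{code}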



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [DISCUSS] Dropping Scala 2.10

2017-09-20 Thread Federico D'Ambrosio
Hi, as far as I know some vendors like Hortonworks still use Kafka_2.10 as
part of their Hadoop distribution.
Could the use of a different Scala version cause issues with the Kafka
connector? I'm asking because we are using HDP 2.6 and we already ran into
issues with conflicting Scala versions concerning Kafka (though we were
using Storm; I still haven't tested the Flink connector in this context).

Regards,
Federico

2017-09-20 14:19 GMT+02:00 Ted Yu:

> +1
>
>  Original message 
> From: Hai Zhou 
> Date: 9/20/17 12:44 AM (GMT-08:00)
> To: Aljoscha Krettek , dev@flink.apache.org, user <
> u...@flink.apache.org>
> Subject: Re: [DISCUSS] Dropping Scala 2.10
>
> +1
>
> > On 19 Sep 2017, at 17:56, Aljoscha Krettek wrote:
> >
> > Hi,
> >
> > Talking to some people I get the impression that Scala 2.10 is quite
> outdated by now. I would like to drop support for Scala 2.10 and my main
> motivation is that this would allow us to drop our custom Flakka build of
> Akka that we use because newer Akka versions only support Scala 2.11/2.12
> and we need a backported feature.
> >
> > Are there any concerns about this?
> >
> > Best,
> > Aljoscha
>
>


-- 
Federico D'Ambrosio


[jira] [Created] (FLINK-7655) Revisit default non-leader id for FencedRpcEndpoints

2017-09-20 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-7655:


 Summary: Revisit default non-leader id for FencedRpcEndpoints
 Key: FLINK-7655
 URL: https://issues.apache.org/jira/browse/FLINK-7655
 Project: Flink
  Issue Type: Bug
  Components: Distributed Coordination
Affects Versions: 1.4.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann
Priority: Minor


Currently, when a {{FencedRpcEndpoint}} loses leadership, we set its leader id 
to a random value. Even though it is unlikely, this can be problematic because 
we might set it to a value which is used somewhere else (e.g. the currently 
valid leader id). I think it would be better to simply set the leader id to 
{{null}} in order to properly encode that the {{FencedRpcEndpoint}} is no 
longer a leader.
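
A tiny illustrative sketch of why {{null}} is safer than a random id (the 
names below are assumptions, not the actual {{FencedRpcEndpoint}} code):

{code:java}
import java.util.UUID;

// A random "non-leader" id could, in principle, collide with the currently
// valid leader id; null can never equal a real fencing token.
UUID expectedToken = null;              // the endpoint lost leadership
UUID incomingToken = UUID.randomUUID(); // token attached to an incoming message
boolean accept = expectedToken != null && expectedToken.equals(incomingToken);
// accept is always false while the endpoint is not the leader
{code}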



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [DISCUSS] Dropping Scala 2.10

2017-09-20 Thread Driesprong, Fokko
+1

2017-09-20 14:30 GMT+02:00 Kostas Kloudas:

> +1
>
> > On Sep 20, 2017, at 2:26 PM, Stefan Richter wrote:
> >
> > +1
>
>


Re: [DISCUSS] Dropping Scala 2.10

2017-09-20 Thread Stefan Richter
+1


Re: [DISCUSS] Dropping Scala 2.10

2017-09-20 Thread Ted Yu
+1
 Original message 
From: Hai Zhou
Date: 9/20/17 12:44 AM (GMT-08:00)
To: Aljoscha Krettek, dev@flink.apache.org, user
Subject: Re: [DISCUSS] Dropping Scala 2.10
+1

> On 19 Sep 2017, at 17:56, Aljoscha Krettek wrote:
> 
> Hi,
> 
> Talking to some people I get the impression that Scala 2.10 is quite outdated 
> by now. I would like to drop support for Scala 2.10 and my main motivation is 
> that this would allow us to drop our custom Flakka build of Akka that we use 
> because newer Akka versions only support Scala 2.11/2.12 and we need a 
> backported feature.
> 
> Are there any concerns about this?
> 
> Best,
> Aljoscha



[jira] [Created] (FLINK-7654) Update RabbitMQ Java client to 4.x

2017-09-20 Thread Hai Zhou_UTC+8 (JIRA)
Hai Zhou_UTC+8 created FLINK-7654:
-

 Summary: Update RabbitMQ Java client to 4.x
 Key: FLINK-7654
 URL: https://issues.apache.org/jira/browse/FLINK-7654
 Project: Flink
  Issue Type: Improvement
  Components: RabbitMQ Connector, Streaming Connectors
Reporter: Hai Zhou_UTC+8
Assignee: Hai Zhou_UTC+8
 Fix For: 1.4.0


*RabbitMQ Java Client*

Starting with 4.0, client releases are independent of RabbitMQ server releases.
These 4.x versions can still be used with RabbitMQ server 3.x.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7653) Properly implement DispatcherGateway methods on the Dispatcher

2017-09-20 Thread Tzu-Li (Gordon) Tai (JIRA)
Tzu-Li (Gordon) Tai created FLINK-7653:
--

 Summary: Properly implement DispatcherGateway methods on the 
Dispatcher
 Key: FLINK-7653
 URL: https://issues.apache.org/jira/browse/FLINK-7653
 Project: Flink
  Issue Type: Sub-task
Reporter: Tzu-Li (Gordon) Tai


Currently, {{DispatcherGateway}} methods such as {{listJobs}} and 
{{requestStatusOverview}} (as well as other methods that will be added as we 
port more existing REST handlers to the new endpoint) have only dummy 
placeholder implementations in the {{Dispatcher}}, marked with TODOs.

This ticket is to track that they are all eventually properly implemented.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7652) Port CurrentJobIdsHandler to new REST endpoint

2017-09-20 Thread Tzu-Li (Gordon) Tai (JIRA)
Tzu-Li (Gordon) Tai created FLINK-7652:
--

 Summary: Port CurrentJobIdsHandler to new REST endpoint
 Key: FLINK-7652
 URL: https://issues.apache.org/jira/browse/FLINK-7652
 Project: Flink
  Issue Type: Sub-task
  Components: REST, Webfrontend
Reporter: Tzu-Li (Gordon) Tai
 Fix For: 1.4.0


Port existing {{CurrentJobIdsHandler}} to new REST endpoint



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7650) Port JobCancellationHandler to new REST endpoint

2017-09-20 Thread Tzu-Li (Gordon) Tai (JIRA)
Tzu-Li (Gordon) Tai created FLINK-7650:
--

 Summary: Port JobCancellationHandler to new REST endpoint
 Key: FLINK-7650
 URL: https://issues.apache.org/jira/browse/FLINK-7650
 Project: Flink
  Issue Type: Sub-task
  Components: REST, Webfrontend
Reporter: Tzu-Li (Gordon) Tai
 Fix For: 1.4.0


Port existing {{JobCancellationHandler}} to new REST endpoint



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7651) RetryingRegistration does not wait between failed connection attempts

2017-09-20 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-7651:


 Summary: RetryingRegistration does not wait between failed 
connection attempts
 Key: FLINK-7651
 URL: https://issues.apache.org/jira/browse/FLINK-7651
 Project: Flink
  Issue Type: Bug
  Components: Distributed Coordination
Affects Versions: 1.4.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann


The {{RetryingRegistration}} does not wait between connection attempts if a 
connection could not be established to the remote peer. This effectively leads 
to a busy retry loop. Similar to exceptions which occur at registration time, 
we should also delay the retries in case of a connection error.
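
A generic sketch of the desired behavior (illustrative only; {{tryConnect()}} 
stands in for a single connection attempt, and the delay values are assumed):

{code:java}
static void connectWithRetries() throws InterruptedException {
    long delay = 100L;             // initial retry delay in milliseconds
    final long maxDelay = 30_000L; // upper bound for the delay
    while (!tryConnect()) {
        Thread.sleep(delay);       // wait instead of busy-looping
        delay = Math.min(delay * 2, maxDelay); // exponential backoff
    }
}
{code}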



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7649) Port JobStoppingHandler to new REST endpoint

2017-09-20 Thread Tzu-Li (Gordon) Tai (JIRA)
Tzu-Li (Gordon) Tai created FLINK-7649:
--

 Summary: Port JobStoppingHandler to new REST endpoint
 Key: FLINK-7649
 URL: https://issues.apache.org/jira/browse/FLINK-7649
 Project: Flink
  Issue Type: Sub-task
  Components: REST, Webfrontend
Reporter: Tzu-Li (Gordon) Tai
 Fix For: 1.4.0


Port existing {{JobStoppingHandler}} to new REST endpoint



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7648) Port TaskManagersHandler to new REST endpoint

2017-09-20 Thread Tzu-Li (Gordon) Tai (JIRA)
Tzu-Li (Gordon) Tai created FLINK-7648:
--

 Summary: Port TaskManagersHandler to new REST endpoint
 Key: FLINK-7648
 URL: https://issues.apache.org/jira/browse/FLINK-7648
 Project: Flink
  Issue Type: Sub-task
  Components: REST, Webfrontend
Reporter: Tzu-Li (Gordon) Tai
Assignee: Tzu-Li (Gordon) Tai
 Fix For: 1.4.0


Port existing {{TaskManagersHandler}} to the new REST endpoint



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7647) Port JobManagerConfigHandler to new REST endpoint

2017-09-20 Thread Tzu-Li (Gordon) Tai (JIRA)
Tzu-Li (Gordon) Tai created FLINK-7647:
--

 Summary: Port JobManagerConfigHandler to new REST endpoint
 Key: FLINK-7647
 URL: https://issues.apache.org/jira/browse/FLINK-7647
 Project: Flink
  Issue Type: Sub-task
  Components: Distributed Coordination, REST, Webfrontend
Reporter: Tzu-Li (Gordon) Tai
Assignee: Tzu-Li (Gordon) Tai
 Fix For: 1.4.0


Port existing {{JobManagerConfigHandler}} to new REST endpoint



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [DISCUSS] Dropping Scala 2.10

2017-09-20 Thread Hai Zhou
+1

> On 19 Sep 2017, at 17:56, Aljoscha Krettek wrote:
> 
> Hi,
> 
> Talking to some people I get the impression that Scala 2.10 is quite outdated 
> by now. I would like to drop support for Scala 2.10 and my main motivation is 
> that this would allow us to drop our custom Flakka build of Akka that we use 
> because newer Akka versions only support Scala 2.11/2.12 and we need a 
> backported feature.
> 
> Are there any concerns about this?
> 
> Best,
> Aljoscha



Re: got Warn message - "the expected leader session ID did not equal the received leader session ID " when using LocalFlinkMiniCluster to interpret scala code

2017-09-20 Thread Till Rohrmann
Hi XiangWei,

programmatically there is no nice tooling yet to cancel jobs on a dedicated
cluster. What you can do is use Flink's REST API to issue a cancel
command [1]. You have to send a GET request to the target URL
`/jobs/:jobid/cancel`. In the future we will improve the programmatic job
control, which will allow you to do this kind of thing more easily.

[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.3/monitoring/rest_api.html#job-cancellation
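
For illustration, a minimal sketch of issuing that call from Java (host, port,
and job id are placeholders):

    import java.net.HttpURLConnection;
    import java.net.URL;

    URL url = new URL("http://localhost:8081/jobs/" + jobId + "/cancel");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("GET");
    int status = conn.getResponseCode(); // 2xx means the cancel was accepted
    conn.disconnect();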

Cheers,
Till

On Wed, Sep 20, 2017 at 5:46 AM, XiangWei Huang 
wrote:

> Hi Till,
>
> Thanks for your answer; it worked when I used StandaloneMiniCluster.
> But another problem is that I can't find a way to cancel a running Flink
> job without shutting down the cluster. For LocalFlinkMiniCluster I can do
> it with the code below:
>
> for (job <- cluster.getCurrentlyRunningJobsJava()) {
>   cluster.stopJob(job)
> }
>
> Is it possible to cancel a running Flink job without shutting down a
> StandaloneMiniCluster?
>
> Best Regards,
> XiangWei
>
>
>
> On 14 Sep 2017, at 16:58, Till Rohrmann wrote:
>
> Hi XiangWei,
>
> the problem is that the LocalFlinkMiniCluster can no longer be used in
> combination with a RemoteExecutionEnvironment. The reason is that the
> LocalFlinkMiniCluster uses now an internal leader election service and
> assigns leader ids to its components. Since this is an internal service it
> is not possible to retrieve this information like it is the case with the
> ZooKeeper based leader election services.
>
> Long story short, the Flink Scala shell currently does not work with a
> LocalFlinkMiniCluster and would have to be fixed to work properly
> together with a local execution environment. Until then, I recommend
> starting a local standalone cluster and let the code run there.
>
> Cheers,
> Till
>
>
> On Wed, Sep 13, 2017 at 6:21 AM, XiangWei Huang 
> wrote:
>
>> dear all,
>>
>> *Below is the code i execute:*
>>
>> import java.io._
>> import java.net.{URL, URLClassLoader}
>> import java.nio.charset.Charset
>> import java.util.Collections
>> import java.util.concurrent.atomic.AtomicBoolean
>>
>> import com.netease.atom.common.util.logging.Logging
>> import com.netease.atom.interpreter.Code.Code
>> import com.netease.atom.interpreter.{Code, Interpreter, InterpreterResult, InterpreterUtils}
>> import io.netty.buffer._
>> import org.apache.flink.api.scala.FlinkILoop
>> import org.apache.flink.client.CliFrontend
>> import org.apache.flink.client.cli.CliFrontendParser
>> import org.apache.flink.client.program.ClusterClient
>> import org.apache.flink.configuration.{QueryableStateOptions, Configuration, ConfigConstants, GlobalConfiguration}
>> import org.apache.flink.runtime.akka.AkkaUtils
>> import org.apache.flink.runtime.minicluster.{StandaloneMiniCluster, LocalFlinkMiniCluster}
>>
>> import scala.Console
>> import scala.beans.BeanProperty
>> import scala.collection.JavaConversions._
>> import scala.collection.mutable
>> import scala.collection.mutable.{ArrayBuffer, ListBuffer}
>> import scala.runtime.AbstractFunction0
>> import scala.tools.nsc.Settings
>> import scala.tools.nsc.interpreter.{IMain, JPrintWriter, Results}
>>
>> class FlinkInterpreter extends Interpreter {
>>   private var bufferedReader: Option[BufferedReader] = None
>>   private var jprintWriter: JPrintWriter = _
>>   private val config = new Configuration;
>>   private var cluster: LocalFlinkMiniCluster = _
>>   @BeanProperty var imain: IMain = _
>>   @BeanProperty var flinkILoop: FlinkILoop = _
>>   private var out: ByteBufOutputStream = null
>>   private var outBuf: ByteBuf = null
>>   private var in: ByteBufInputStream = _
>>   private var isRunning: AtomicBoolean = new AtomicBoolean(false)
>>
>>   override def isOpen: Boolean = {
>> isRunning.get()
>>   }
>>
>>   def startLocalMiniCluster(): (String, Int, LocalFlinkMiniCluster) = {
>> config.toMap.toMap.foreach(println)
>> config.setInteger(ConfigConstants.LOCAL_NUMBER_JOB_MANAGER, 1)
>> config.setInteger(ConfigConstants.LOCAL_NUMBER_TASK_MANAGER, 1)
>> config.setInteger(ConfigConstants.LOCAL_NUMBER_RESOURCE_MANAGER, 1)
>> config.setInteger(ConfigConstants.JOB_MANAGER_IPC_PORT_KEY, 0)
>> config.setBoolean(QueryableStateOptions.SERVER_ENABLE.key(), true)
>> config.setBoolean(ConfigConstants.LOCAL_START_WEBSERVER, true)
>> val localCluster = new LocalFlinkMiniCluster(config, false)
>> localCluster.start(true)
>> val port = AkkaUtils.getAddress(localCluster.jobManagerActorSystems.get.head).port
>> println(s"Starting local Flink cluster (host: localhost,port: ${localCluster.getLeaderRPCPort}).\n")
>> ("localhost", localCluster.getLeaderRPCPort, localCluster)
>>   }
>>
>>
>>   /**
>>* Start flink cluster and create interpreter
>>*/
>>   override def open: Unit = {
>> outBuf = ByteBufAllocator.DEFAULT.heapBuffer(20480)
>> out = new 

Re: [DISCUSS] Dropping Scala 2.10

2017-09-20 Thread Bowen Li
+1 for dropping support for Scala 2.10

On Tue, Sep 19, 2017 at 3:29 AM, Sean Owen wrote:

> For the curious, here's the overall task in Spark:
>
> https://issues.apache.org/jira/browse/SPARK-14220
>
> and  most of the code-related changes:
>
> https://github.com/apache/spark/pull/18645
>
> and where it's stuck at the moment:
>
> https://mail-archives.apache.org/mod_mbox/spark-dev/201709.mbox/%3CCAMAsSdKe7Os80mX7jYaD2vNWLGWioBgCb4GG55eaN_iotFxZvw%40mail.gmail.com%3E
>
>
>
> On Tue, Sep 19, 2017 at 11:07 AM Márton Balassi wrote:
>
>> Hi Aljoscha,
>>
>> I am in favor of the change. No concerns on my side, just one remark: I
>> talked to Sean last week (cc'd) and he mentioned that he has faced
>> some technical issues while driving the transition from 2.10 to 2.12 for
>> Spark. It had to do with changes in the scope of implicits. You might end
>> up hitting the same.
>>
>> Best,
>> Marton
>>
>> On Tue, Sep 19, 2017 at 11:56 AM, Aljoscha Krettek 
>> wrote:
>>
>>> Hi,
>>>
>>> Talking to some people I get the impression that Scala 2.10 is quite
>>> outdated by now. I would like to drop support for Scala 2.10 and my main
>>> motivation is that this would allow us to drop our custom Flakka build of
>>> Akka that we use because newer Akka versions only support Scala 2.11/2.12
>>> and we need a backported feature.
>>>
>>> Are there any concerns about this?
>>>
>>> Best,
>>> Aljoscha
>>
>>
>>