+1

On Wed, Dec 16, 2015 at 7:19 PM, Patrick Wendell <pwend...@gmail.com> wrote:

> +1
>
> On Wed, Dec 16, 2015 at 6:15 PM, Ted Yu <yuzhih...@gmail.com> wrote:
>
>> Ran test suite (minus docker-integration-tests)
>> All passed
>>
>> +1
>>
>> [INFO] Spark Project External ZeroMQ ...................... SUCCESS [ 13.647 s]
>> [INFO] Spark Project External Kafka ....................... SUCCESS [ 45.424 s]
>> [INFO] Spark Project Examples ............................. SUCCESS [02:06 min]
>> [INFO] Spark Project External Kafka Assembly .............. SUCCESS [ 11.280 s]
>> [INFO] ------------------------------------------------------------------------
>> [INFO] BUILD SUCCESS
>> [INFO] ------------------------------------------------------------------------
>> [INFO] Total time: 01:49 h
>> [INFO] Finished at: 2015-12-16T17:06:58-08:00
>>
>> On Wed, Dec 16, 2015 at 4:37 PM, Andrew Or <and...@databricks.com> wrote:
>>
>>> +1
>>>
>>> The Mesos cluster mode regression in RC2 is now fixed (SPARK-12345
>>> <https://issues.apache.org/jira/browse/SPARK-12345> / PR 10332
>>> <https://github.com/apache/spark/pull/10332>).
>>>
>>> Also tested in standalone client and cluster mode. No problems.
>>>
>>> 2015-12-16 15:16 GMT-08:00 Rad Gruchalski <ra...@gruchalski.com>:
>>>
>>>> I also noticed that spark.replClassServer.host and
>>>> spark.replClassServer.port aren’t used anymore. The transport now happens
>>>> over the main RpcEnv.
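>>>>
>>>> For anyone still carrying those settings forward, here is a minimal
>>>> sketch of what that means in practice (the SparkConf setup and values
>>>> are purely illustrative):
>>>>
>>>>   import org.apache.spark.SparkConf
>>>>
>>>>   // These were the old REPL class server settings; per the change
>>>>   // above, 1.6 ships REPL-generated classes over the driver's main
>>>>   // RpcEnv, so setting them no longer has any effect.
>>>>   val conf = new SparkConf()
>>>>     .set("spark.replClassServer.host", "driver-host") // ignored in 1.6
>>>>     .set("spark.replClassServer.port", "51000")       // ignored in 1.6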
>>>>
>>>> Kind regards,
>>>> Radek Gruchalski
>>>> ra...@gruchalski.com <ra...@gruchalski.com>
>>>> de.linkedin.com/in/radgruchalski/
>>>>
>>>>
>>>> *Confidentiality:* This communication is intended for the above-named
>>>> person and may be confidential and/or legally privileged.
>>>> If it has come to you in error, you must take no action based on it, nor
>>>> must you copy or show it to anyone; please delete/destroy it and inform
>>>> the sender immediately.
>>>>
>>>> On Wednesday, 16 December 2015 at 23:43, Marcelo Vanzin wrote:
>>>>
>>>> I was going to say that spark.executor.port is not used anymore in
>>>> 1.6, but damn, there's still that akka backend hanging around there
>>>> even when netty is being used... we should fix this; it should be a
>>>> simple one-liner.
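>>>>
>>>> To make the overlap concrete, a rough sketch of the two settings
>>>> involved (assuming spark.rpc is still the switch between the two
>>>> backends; the port value is just an example):
>>>>
>>>>   import org.apache.spark.SparkConf
>>>>
>>>>   // netty is the default RPC implementation in 1.6, yet the leftover
>>>>   // akka code path still consults spark.executor.port, which is why
>>>>   // the setting lingers even when it shouldn't matter.
>>>>   val conf = new SparkConf()
>>>>     .set("spark.rpc", "netty")            // 1.6 default
>>>>     .set("spark.executor.port", "38000")  // akka-era setting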
>>>>
>>>> On Wed, Dec 16, 2015 at 2:35 PM, singinpirate <thesinginpir...@gmail.com> wrote:
>>>>
>>>> -0 (non-binding)
>>>>
>>>> I have observed that when we set spark.executor.port in 1.6, an NPE is
>>>> thrown in SparkEnv$.create(SparkEnv.scala:259). It used to work in
>>>> 1.5.2. Is anyone else seeing this?
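>>>>
>>>> A minimal repro sketch (app name, master URL, and port value are just
>>>> placeholders):
>>>>
>>>>   import org.apache.spark.{SparkConf, SparkContext}
>>>>
>>>>   // Fixing the executor port, which worked on 1.5.2, reportedly
>>>>   // triggers the NPE from SparkEnv$.create during context startup.
>>>>   val conf = new SparkConf()
>>>>     .setAppName("executor-port-repro")    // placeholder
>>>>     .setMaster("spark://master:7077")     // placeholder master URL
>>>>     .set("spark.executor.port", "38000")  // any fixed port
>>>>   val sc = new SparkContext(conf)         // NPE surfaces here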
>>>>
>>>>
>>>> --
>>>> Marcelo
>>>>
>>>>
>>>>
>>>>
>>>
>>
>
