Hey Kevin,

If you are upgrading from 1.0.X to 1.1.X, check out the upgrade notes
here [1] - it could be that the default changes caused a regression for
your workload. Do you still see the regression if you revert those
configuration changes?
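
For example, if I remember the upgrade notes correctly, two of the
defaults that changed in 1.1.0 were the compression codec and the
broadcast factory. A hypothetical spark-defaults.conf fragment
restoring the old values would look roughly like this - please verify
the exact keys and old values against [1] before relying on it:

```
# Sketch only: restores two pre-1.1 defaults mentioned in the 1.1.0
# upgrade notes; double-check keys/values against the notes first.
spark.io.compression.codec   lzf
spark.broadcast.factory      org.apache.spark.broadcast.HttpBroadcastFactory
```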

It's great to hear specifically about issues like this, so if you do
see a regression, please start a new thread and describe your workload.
The main focus of a patch release vote like this is to test for
regressions against the previous release on the same line (e.g. 1.1.1
vs. 1.1.0), though of course we still want to be cognizant of
1.0-to-1.1 regressions and make sure we can address them down the road.

[1] https://spark.apache.org/releases/spark-release-1-1-0.html

On Mon, Nov 17, 2014 at 2:04 PM, Kevin Markey <kevin.mar...@oracle.com> wrote:
> +0 (non-binding)
>
> Compiled Spark, recompiled and ran application with 1.1.1 RC1 with Yarn,
> plain-vanilla Hadoop 2.3.0. No regressions.
>
> However, there was a 12% to 22% increase in run time relative to the 1.0.0
> release.  (No other environment or configuration changes.)  I would have
> recommended +1 were it not for the added latency.
>
> Not sure if the added latency is a function of the 1.0-to-1.1.0 changes or
> the 1.1.0-to-1.1.1 changes, as we've never tested with 1.1.0. But I thought
> I'd share the results.  (This is somewhat disappointing.)
>
> Kevin Markey
>
>
> On 11/17/2014 11:42 AM, Debasish Das wrote:
>>
>> Andrew,
>>
>> I built the 1.1.1 branch and I am getting shuffle failures while doing a
>> flatMap followed by a groupBy. My cluster memory is less than the memory I
>> need: the flatMap does around 400 GB of shuffle, while cluster memory is
>> around 120 GB.
>>
>> 14/11/13 23:10:49 WARN TaskSetManager: Lost task 22.1 in stage 191.0 (TID
>> 4084, istgbd020.hadoop.istg.verizon.com): FetchFailed(null, shuffleId=4,
>> mapId=-1, reduceId=22)
>>
>> I searched the user list, and this issue has been reported there:
>>
>>
>> http://apache-spark-user-list.1001560.n3.nabble.com/Issues-with-partitionBy-FetchFailed-td14760.html
>>
>> I wanted to make sure that 1.1.1 does not have the same bug. -1 from me
>> until we figure out the root cause.
>>
>> Thanks.
>>
>> Deb
>>
>> On Mon, Nov 17, 2014 at 10:33 AM, Andrew Or <and...@databricks.com> wrote:
>>
>>> This seems like a legitimate blocker. We will cut another RC to include
>>> the
>>> revert.
>>>
>>> 2014-11-16 17:29 GMT-08:00 Kousuke Saruta <saru...@oss.nttdata.co.jp>:
>>>
>>>> I've now finished the revert for SPARK-4434 and opened a PR.
>>>>
>>>>
>>>> (2014/11/16 17:08), Josh Rosen wrote:
>>>>
>>>>> -1
>>>>>
>>>>> I found a potential regression in 1.1.1 related to spark-submit and
>>>>> cluster
>>>>> deploy mode: https://issues.apache.org/jira/browse/SPARK-4434
>>>>>
>>>>> I think that this is worth fixing.
>>>>>
>>>>> On Fri, Nov 14, 2014 at 7:28 PM, Cheng Lian <lian.cs....@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> +1
>>>>>>
>>>>>>
>>>>>> Tested HiveThriftServer2 against Hive 0.12.0 on Mac OS X. Known issues
>>>>>> are
>>>>>> fixed. Hive version inspection works as expected.
>>>>>>
>>>>>>
>>>>>> On 11/15/14 8:25 AM, Zach Fry wrote:
>>>>>>
>>>>>>> +0
>>>>>>>
>>>>>>>
>>>>>>> I expect to start testing on Monday but won't have enough results to
>>>>>>> change
>>>>>>> my vote from +0
>>>>>>> until Monday night or Tuesday morning.
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Zach
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> View this message in context: http://apache-spark-
>>>>>>> developers-list.1001551.n3.nabble.com/VOTE-Release-
>>>>>>> Apache-Spark-1-1-1-RC1-tp9311p9370.html
>>>>>>> Sent from the Apache Spark Developers List mailing list archive at
>>>>>>> Nabble.com.
>>>>>>>
>>>>>>> ---------------------------------------------------------------------
>>>>>>> To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
>>>>>>> For additional commands, e-mail: dev-h...@spark.apache.org
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>
>>>
>>
>
>

