I don't see a reason Spark 2.0 would need to support Python 2.6. At this
point, Python 3 should be the default that is encouraged.
Most organizations acknowledge that 2.7 is common, but lagging behind the
version they should theoretically be using. Dropping Python 2.6
support sounds very reasonable to me.

On Tue, Jan 5, 2016 at 5:45 AM, Nicholas Chammas <nicholas.cham...@gmail.com> wrote:

> +1
>
> Red Hat supports Python 2.6 on RHEL 5 until 2020
> <https://alexgaynor.net/2015/mar/30/red-hat-open-source-community/>, but
> otherwise yes, Python 2.6 is ancient history and the core Python developers
> stopped supporting it in 2013. RHEL 5 is not a good enough reason to
> continue support for Python 2.6 IMO.
>
> We should aim to support Python 2.7 and Python 3.3+ (which I believe we
> currently do).
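>
> As a rough illustration, that floor could be enforced at interpreter
> startup; a minimal sketch (the check and error text are illustrative,
> not anything Spark actually ships):
>
>     import sys
>
>     # Refuse to start on interpreters below the supported floor:
>     # Python 2.7 on the 2.x line, Python 3.3 on the 3.x line.
>     if sys.version_info < (2, 7) or (3,) <= sys.version_info < (3, 3):
>         raise RuntimeError(
>             "Python 2.7 or 3.3+ is required; found %d.%d"
>             % sys.version_info[:2])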
>
> Nick
>
> On Tue, Jan 5, 2016 at 8:01 AM Allen Zhang <allenzhang...@126.com> wrote:
>
>> plus 1,
>>
>> we are currently using python 2.7.2 in production environment.
>>
>>
>> On 2016-01-05 18:11:45, "Meethu Mathew" <meethu.mat...@flytxt.com> wrote:
>>
>> +1
>> We use Python 2.7
>>
>> Regards,
>>
>> Meethu Mathew
>>
>> On Tue, Jan 5, 2016 at 12:47 PM, Reynold Xin <r...@databricks.com> wrote:
>>
>>> Does anybody here care about us dropping support for Python 2.6 in Spark
>>> 2.0?
>>>
>>> Python 2.6 is ancient, and is pretty slow in many aspects (e.g. json
>>> parsing) when compared with Python 2.7. Some libraries that Spark depends
>>> on have stopped supporting 2.6. We could still convince the library
>>> maintainers to support 2.6, but it would be extra work. I'm curious if
>>> anybody still uses Python 2.6 to run Spark.
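>>>
>>> For anyone who wants to see the json gap on their own machines, here is
>>> a minimal timing sketch (the payload is arbitrary and absolute numbers
>>> will vary by machine):
>>>
>>>     import json
>>>     import timeit
>>>
>>>     # Decode the same document repeatedly with the stdlib json module;
>>>     # 2.7 shipped a noticeably faster json implementation than 2.6, so
>>>     # running this under both interpreters illustrates the gap.
>>>     doc = json.dumps({"id": 1, "values": list(range(1000))})
>>>     print(timeit.timeit(lambda: json.loads(doc), number=10000))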
>>>
>>> Thanks.
>>>
>>>
>>
