+1
Continuing to support Spark 1.4/1.6 for now while setting a cutover date
for 2.0 sounds like a great idea. This allows for the creation of a really
solid release for the 1.x line, which greatly benefits SystemML users on Spark
1.x. It also gives these users a general date they can use to plan their
migration to Spark 2.0.
Spark 2.0 has been released, and we need to support SystemML on Spark 2.0 to
stay up to date with the latest version of Spark. This presents a challenge:
we must continue supporting our consumers until they move to Spark 2.0. Based
on some brainstorming, I can propose the following options to keep SystemML
supported on the latest Spark version.
Hi Sourav,
Great question. Work is currently being performed by Alok Singh (see
https://issues.apache.org/jira/browse/SYSTEMML-860) regarding this topic.
Deron
On Mon, Aug 15, 2016 at 9:31 AM, Sourav Mazumder <
sourav.mazumde...@gmail.com> wrote:
> Hi,
>
> Is there any work going on to call
Great, thanks.
On Wed, Aug 17, 2016 at 2:32 PM, wrote:
> Thanks, Luciano for pointing this out. As you mentioned, the intent was
> definitely just to tag a commit that was known to be stable on the Spark
> 1.x line. I've deleted the existing tag, and created a new
>
Thanks, Luciano for pointing this out. As you mentioned, the intent was
definitely just to tag a commit that was known to be stable on the Spark 1.x
line. I've deleted the existing tag, and created a new "spark-1.x-stable" tag
simply pointing to a previous commit that was tested on Spark 1.x.
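The retagging described above can be sketched roughly as follows, using a throwaway local repository. The commit contents and the name of the deleted tag are illustrative stand-ins; only the "spark-1.x-stable" tag name comes from this thread.

```shell
set -e

# Set up a throwaway repo with two commits, the first one "known stable".
dir=$(mktemp -d)
cd "$dir"
git init -q
git -c user.email=a@b -c user.name=demo commit -q --allow-empty \
    -m "commit tested on Spark 1.x"
stable=$(git rev-parse HEAD)
git -c user.email=a@b -c user.name=demo commit -q --allow-empty \
    -m "later work"

# Delete the tag that looked like a release (this name is a stand-in):
git tag not-a-real-release
git tag -d not-a-real-release

# Create a plain, descriptively named tag at the known-stable commit:
git tag spark-1.x-stable "$stable"

# On a shared repo you would also propagate both changes, e.g.:
#   git push origin :refs/tags/not-a-real-release
#   git push origin spark-1.x-stable
git rev-parse spark-1.x-stable
```

A lightweight tag like this is just a named pointer to a commit, so it avoids any appearance of being a voted release artifact while still giving folks a stable point to return to on the 1.x line.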
-1
Sorry folks, this isn't a voted release, and thus creating a tag without
SNAPSHOT is not valid. Please delete this tag.
If what is wanted is a stable point in the codebase that folks can go back
to if a release is needed for 1.x, then just create a branch/tag with a
descriptive name.
Yes, I think this approach sounds great. To that end, I created a new tag
"0.11.0-incubating-preview" that points to a specific commit containing new
features that will be in the 0.11 release, with specific support for the Spark
1.x line.
- Mike
--
Mike Dusenberry
GitHub: