I agree with Arvind here, as the 8GB case would mostly run as single-node, in-memory operations and not test the Spark 2.x integration.

Regards,
Matthias

On 1/17/2017 5:33 AM, Arvind Surve wrote:
We are planning to have 80GB testing for the 0.13 release (to support Spark 2.0). 
It will add a couple of days to the overall performance testing, but it's worth 
trying before releasing SystemML on Spark 2.0. If we find issues that are not 
show-stopping, then we can release for Spark 2.0, but we should get the 80GB 
testing done.

 ------------------
 Arvind Surve
 Spark Technology Center
 http://www.spark.tc/

      From: "dusenberr...@gmail.com" <dusenberr...@gmail.com>
 To: dev@systemml.incubator.apache.org
 Sent: Monday, January 16, 2017 6:38 PM
 Subject: Time To Release 0.13

Hi all,

Now that Spark 2.x support has been merged [1], I think we should go ahead and 
start a release process for 0.13.  That way, we will have an official release 
that supports 2.x, in addition to 0.12 that supports 1.6.

I'd like to propose that as long as our tests pass, and a performance suite on 
*8GB* looks reasonably acceptable, we go ahead and release. We can save more 
detailed performance testing and any possible improvements for 0.13.x releases, 
or more importantly for the upcoming 1.0 release.  I think it would be a good 
idea to have one official release on Spark 2.x that the community can stress 
test before our 1.0 release.

Thoughts?

[1]: 
https://github.com/apache/incubator-systemml/commit/a4c7be78390d01a3194e726d7a184c182bd8b558

-Mike

--

Mike Dusenberry
GitHub: github.com/dusenberrymw
LinkedIn: linkedin.com/in/mikedusenberry

Sent from my iPhone.