Hi,
I customized the localRepository setting in MVN_HOME/conf/settings.xml to
manage Maven's local jars:
<localRepository>F:/Java/maven-build/.m2/repository</localRepository>
However, when I build Spark with SBT, it seems that it still uses the
default .m2 repository under
Path.userHome + "/.m2/repository"
How can I make the SBT build pick up my custom local repository instead?
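One workaround, as a sketch only (the resolver name is arbitrary and the
file URL must match your own localRepository value), is to add the custom
directory as an extra resolver in the sbt build definition:

    // Sketch: point sbt at a custom local Maven repository.
    // Adjust the file:// URL to your settings.xml localRepository value.
    resolvers += "Custom Local Maven" at "file:///F:/Java/maven-build/.m2/repository"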
I think the main concern is that this would require scanning the data
twice, and maybe the user should be aware of it ...
On Thu, Jun 5, 2014 at 10:29 AM, Andrew Ash and...@andrewash.com wrote:
I have a use case that would greatly benefit from RDDs having a .scanLeft()
method. Are the project
Is that something that documentation on the method can solve?
On Thu, Jun 5, 2014 at 10:47 AM, Reynold Xin r...@databricks.com wrote:
I think the main concern is that this would require scanning the data
twice, and maybe the user should be aware of it ...
On Thu, Jun 5, 2014 at 10:29 AM, Andrew
Hi community,
How should I change sbt to compile spark core with a different version
of Scala? I see the Maven POM files define dependencies on Scala 2.10.4. I
need to override/ignore the Maven dependencies and use Scala-virtualized,
which needs these lines in a build.sbt file:
scalaOrganization :=
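For reference, the Scala-virtualized overrides usually look roughly like
this (a sketch; the organization and version strings are assumptions that
depend on the scala-virtualized release you target):

    // Sketch: exact strings depend on the scala-virtualized release used.
    scalaOrganization := "org.scala-lang.virtualized"
    scalaVersion := "2.10.2"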
I can confirm that the patch fixed my issue. :-)
-
Cheers,
Stephanie
Awesome, thanks for testing!
On Thu, Jun 5, 2014 at 1:30 PM, dataginjaninja rickett.stepha...@gmail.com
wrote:
I can confirm that the patch fixed my issue. :-)
-
Cheers,
Stephanie
You can modify project/SparkBuild.scala and build Spark with sbt instead of
Maven.
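Concretely, the overrides could go into the shared settings in
project/SparkBuild.scala, roughly like this (a sketch against the sbt
0.13-era build; the setting values are assumptions, as in the earlier mail):

    // Sketch: add the overrides to the shared settings in project/SparkBuild.scala.
    def sharedSettings = Defaults.defaultSettings ++ Seq(
      scalaOrganization := "org.scala-lang.virtualized", // assumed organization
      scalaVersion := "2.10.2",                          // example release
      // ... existing settings ...
    )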
On Jun 5, 2014, at 12:36 PM, Meisam Fathi meisam.fa...@gmail.com wrote:
Hi community,
How should I change sbt to compile spark core with a different version
of Scala? I see the Maven POM files define
Hi Folks
My name is Steve Watt and I work in the CTO Office at Red Hat. I've recently
spent quite a bit of time designing single-rack and multi-rack infrastructures
for Spark for our own hardware procurement at Red Hat, and I thought the
diagrams and server specs for both Dell and HP would be
I'm in a situation where I have two compute nodes in Amazon EC2 and a third
node that is used to just execute queries. The third node is not part of the
cluster. It's also configured slightly differently. That is, the third node
runs Ubuntu 14.04 while the two cluster nodes run CentOS.
I launch
Hi,
We are adding a constrained ALS solver in Spark to solve matrix
factorization use cases that need additional constraints (bounds,
equality, inequality, quadratic constraints).
We are using a native version of a primal-dual SOCP solver due to its small
memory footprint and sparse CCS matrix
Stephen,
We are working through Dell configurations; we would be happy to review your
diagrams and offer feedback from our experience. Let me know the URLs.
Cheers
k/
On Thu, Jun 5, 2014 at 2:51 PM, Stephen Watt sw...@redhat.com wrote:
Hi Folks
My name is Steve Watt and I work in the CTO
Hi Deb,
Why do you want to make those methods public? If you only need to
replace the solver for the subproblems, you can try to make the solver
pluggable. It currently supports least squares and non-negative least
squares. You can define an interface for the subproblem solvers and
maintain the IPM solver
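Such an interface could look something like this (a sketch only; the trait
and method names are made up for illustration and are not an existing
MLlib API):

    // Hypothetical interface for pluggable ALS subproblem solvers (sketch).
    trait SubproblemSolver {
      /**
       * Solve the normal equations A^T A x = A^T b for one subproblem,
       * optionally subject to solver-specific constraints.
       * ata: packed A^T A matrix, atb: A^T b vector, rank: problem size.
       */
      def solve(ata: Array[Double], atb: Array[Double], rank: Int): Array[Double]
    }

    // The built-in least squares and non-negative least squares solvers would
    // implement this trait, and an external SOCP/IPM solver could plug in the
    // same way.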
Hi Xiangrui,
For orthogonality properties in the factors, we need a constraint solver
other than the usual ones (l1, upper and lower bounds, l2, etc.).
The interface of the constraint solver is standard and I can add it to the
mllib optimization package.
But I am not sure how I will call the GPL-licensed IPM solver