Hi,
I deployed apache/spark master today, and there have been many ALS-related
check-ins and enhancements recently.
I am running ALS with explicit feedback, and I recall that most of the
enhancements were related to implicit feedback.
With 25 factors my runs were successful, but with 50 factors I am getting
arr
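For reference, the run is shaped roughly like this (a minimal sketch; the
input path, rank, iteration count, and lambda are illustrative, not the
actual job):

import org.apache.spark.SparkContext
import org.apache.spark.mllib.recommendation.{ALS, Rating}

object AlsExplicit {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext("local[4]", "als-explicit")

    // Parse "user,product,rating" triples into MLlib Rating objects.
    val ratings = sc.textFile("ratings.csv").map { line =>
      val Array(user, product, rating) = line.split(',')
      Rating(user.toInt, product.toInt, rating.toDouble)
    }

    // Explicit-feedback ALS; `rank` is the number of latent factors.
    // Going from rank 25 to rank 50 doubles the factor-matrix sizes,
    // so memory pressure on the workers grows accordingly.
    val model = ALS.train(ratings, 50, 10, 0.01)

    // Score the observed (user, product) pairs back through the model.
    val predictions = model.predict(ratings.map(r => (r.user, r.product)))
    predictions.take(5).foreach(println)

    sc.stop()
  }
}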
With JDK 7 I could compile it fine:
java version "1.7.0_51"
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)
What happens if, say, I take that jar and try to deploy it on the ancient
CentOS 6 default JVM on the cluster?
java -version
java versi
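One quick check before deploying: read the class-file major version
(50 = Java 6, 51 = Java 7); a Java 6 VM refuses anything above 50 with an
UnsupportedClassVersionError. A small sketch that inspects one extracted
.class file:

import java.io.{DataInputStream, FileInputStream}

object ClassVersion {
  def main(args: Array[String]): Unit = {
    val in = new DataInputStream(new FileInputStream(args(0)))
    try {
      // Class files start with the magic number 0xCAFEBABE,
      // then minor and major version as unsigned shorts.
      require(in.readInt() == 0xCAFEBABE, "not a class file")
      val minor = in.readUnsignedShort()
      val major = in.readUnsignedShort()
      println(s"major=$major minor=$minor")
    } finally in.close()
  }
}

Note the converse trap: a jar compiled under JDK 7 with -target 1.6 will
show major version 50 and load fine, yet can still reference Java-7-only
APIs that blow up at runtime on Java 6 — which is the -bootclasspath issue
discussed later in this thread.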
No, I am thinking along the lines of writing to an accelerator card or a
dedicated card with its own memory.
Regards,
Mridul
On Apr 6, 2014 5:19 AM, "Haoyuan Li" wrote:
Hi Mridul,
Do you mean the scenario where different Spark applications need to read the
same raw data, which is stored in a remote cluster or on remote machines, and
the goal is to load the remote raw data only once?
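If it is only within a single application, a plain persist() already covers
that — a minimal sketch, spark-shell style, with a made-up HDFS URL:

import org.apache.spark.storage.StorageLevel

// Read the remote data once and keep the blocks on this cluster;
// later jobs in the same application reuse the cached copy.
val raw = sc.textFile("hdfs://remote-nn:8020/data/events")
  .persist(StorageLevel.MEMORY_AND_DISK)

raw.count()  // first action pulls the remote data and fills the cache
raw.count()  // later actions read the cached blocks instead

Sharing that loaded copy across separate Spark applications, though, needs
storage that lives outside any one application's executors.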
Haoyuan
On Sat, Apr 5, 2014 at 4:30 PM, Mridul Muralidharan wrote:
Hi,
We have a requirement to use (potentially) ephemeral storage which is not
within the VM but is strongly tied to a worker node. So the source of truth
for a block would still be within Spark; but to actually do the computation,
we would need to copy the data to the external device (where it might lie aro
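A hypothetical sketch of that pattern — runKernel is a placeholder, not any
Spark API, and a direct ByteBuffer stands in for the card's memory — just to
show where the copy would sit in an RDD computation:

import java.nio.ByteBuffer
import org.apache.spark.rdd.RDD

object DeviceOffload {
  // Placeholder for the accelerator computation; here it just echoes
  // the bytes back so the sketch is runnable.
  def runKernel(buf: ByteBuffer): Array[Byte] = {
    val out = new Array[Byte](buf.remaining())
    buf.get(out)
    out
  }

  // Stage each record's bytes into off-JVM-heap memory (standing in for
  // the device memory), run the external computation there, and hand the
  // result back through the normal RDD path. Spark's copy remains the
  // source of truth; the device copy is ephemeral.
  def offload(data: RDD[Array[Byte]]): RDD[Array[Byte]] =
    data.mapPartitions { iter =>
      iter.map { bytes =>
        val device = ByteBuffer.allocateDirect(bytes.length)
        device.put(bytes)
        device.flip()
        runKernel(device)
      }
    }
}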
Will do. I'm just finishing a recompile to check for anything else like this.
The reason is that the tests run with Java 7 (as lots of us do, including
me), so the build used the Java 7 classpath and found the class.
It's possible to use Java 7 with the Java 6 -bootclasspath, or just
use Java 6.
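Roughly, in the sbt build (the rt.jar path is machine-specific and only an
assumption here):

// build.sbt fragment: compile under JDK 7 while targeting Java 6.
// Note that -source/-target alone do not stop javac from resolving
// Java-7-only APIs; only -bootclasspath against a Java 6 rt.jar does.
javacOptions ++= Seq(
  "-source", "1.6",
  "-target", "1.6",
  "-bootclasspath", "/usr/lib/jvm/java-1.6.0/jre/lib/rt.jar"  // adjust locally
)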
@patrick our cluster still has Java 6 deployed, and I compiled using JDK 6...
Sean is looking into it; this API is in Java 7 but not in Java 6...
On Sat, Apr 5, 2014 at 3:06 PM, Patrick Wendell wrote:
If you want to submit a hot fix for this issue specifically, please do. I'm
not sure why it didn't fail our build...
On Sat, Apr 5, 2014 at 2:30 PM, Debasish Das wrote:
I verified this is happening for both CDH 4.5 and 1.0.4... My deploy
environment is Java 6, so Java 7 compilation is not going to help...
Is this the PR which caused it?
Andre Schumacher
fbebaed: Spark parquet improvements. A few improvements to the Parquet
support for SQL queries: - Instea
I can compile with Java 7...let me try that...
On Sat, Apr 5, 2014 at 2:19 PM, Sean Owen wrote:
That method was added in Java 7. The project is on Java 6, so I think
this was just an inadvertent error in a recent PR (it was the 'Spark
parquet improvements' one).
I'll open a hot-fix PR after looking for other stuff like this that
might have snuck in.
I am synced with apache/spark master but am getting an error in the spark/sql
compilation...
Is the master broken?
[info] Compiling 34 Scala sources to
/home/debasish/spark_deploy/sql/core/target/scala-2.10/classes...
[error]
/home/debasish/spark_deploy/sql/core/src/main/scala/org/apache/spark/sql/parquet
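The class of failure looks like this — a java.nio.file call that simply does
not exist on Java 6 (illustrative only; not necessarily the exact method from
the Parquet PR):

// Compiles and runs fine under JDK 7:
val dir7 = java.nio.file.Files.createTempDirectory("parquet-test").toFile

// Java 6 compatible equivalent using plain java.io:
val dir6 = java.io.File.createTempFile("parquet-test", "")
dir6.delete()   // createTempFile makes a file; replace it with a directory
dir6.mkdir()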
Thanks Patrick... I searched the archives and found the answer: tuning the
Akka and GC params.
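For the archives, the settings in question are of this shape (the values are
placeholders, and the exact keys depend on the Spark version in use):

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("als-tuned")
  // Bigger Akka frame size (MB) so large task results / factor blocks fit.
  .set("spark.akka.frameSize", "128")
  // Longer Akka timeout (seconds) so GC pauses aren't taken for dead nodes.
  .set("spark.akka.timeout", "300")
  // GC tuning on the executors, e.g. concurrent mark-sweep.
  .set("spark.executor.extraJavaOptions", "-XX:+UseConcMarkSweepGC")

val sc = new SparkContext(conf)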
On Fri, Apr 4, 2014 at 10:35 PM, Patrick Wendell wrote:
> I answered this over on the user list...
>
>
> On Fri, Apr 4, 2014 at 6:13 PM, Debasish Das wrote:
>
> > Hi,
> >
> > Also posted it on