Switched to immutable.Set and it works. This is odd, as the code in
ScalaReflection.scala seems to support scala.collection.Set.
cc: dev list, in case this is a bug
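A minimal runnable sketch of the working case reported above, assuming a
recent Spark where immutable.Set encodes (the case class, field, and object
names are made up for illustration):

import org.apache.spark.sql.SparkSession

// Declaring the field as scala.collection.immutable.Set is what worked
// above; declaring it as the general scala.collection.Set trait is what
// had failed.
case class Rec(id: Int, tags: Set[Int]) // unqualified Set = immutable.Set

object SetEncodingDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .master("local[*]")
      .appName("set-encoding-demo")
      .getOrCreate()
    import spark.implicits._

    val ds = Seq(Rec(1, Set(1, 2)), Rec(2, Set(3))).toDS()
    ds.show()
    spark.stop()
  }
}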
On Thu, Aug 8, 2019 at 8:41 PM Mohit Jaggi wrote:
> Is this not supported? I found this diff
> <https://github.com/apa
ala-function-from-a-task
>>
>> Sent with ProtonMail Secure Email.
>>
>> ‐‐‐ Original Message ‐‐‐
>>
>> On July 15, 2018 8:01 AM, Mohit Jaggi wrote:
>>
>> > Trying again…anyone know how to make this work?
> On Jul 9, 2018, at 3:45 PM, Mohit Jaggi wrote:
>
> Folks,
> I am writing some Scala/Java code and want it to be usable from pyspark.
>
> For example:
> class MyStuff(addend: Int) {
>   def myMapFunction(x: Int) = x + addend
> }
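One way to make that callable from PySpark (a sketch, not from the original
thread; the helper object and the UDF name are assumptions): register the
method as a Spark SQL UDF on the Scala side, so Python can invoke it by name
without touching py4j directly.

import org.apache.spark.sql.SparkSession

class MyStuff(addend: Int) extends Serializable {
  def myMapFunction(x: Int): Int = x + addend
}

// Hypothetical helper: run once per SparkSession to expose the
// function under a SQL-visible name.
object RegisterMyStuff {
  def register(spark: SparkSession): Unit = {
    val stuff = new MyStuff(5)
    spark.udf.register("myMapFunction", (x: Int) => stuff.myMapFunction(x))
  }
}

From PySpark the call is then spark.sql("SELECT myMapFunction(value) FROM t")
or expr("myMapFunction(value)"), assuming the registration ran in the same
session.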
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Process finished with exit code 137 (interrupted by signal 9: SIGKILL)
Mohit Jaggi
Founder,
Data Orchard LLC
www.dataorchardllc.com
new AA(1)
}
Mohit Jaggi
Founder,
Data Orchard LLC
www.dataorchardllc.com
On Aug 30, 2016, at 9:51 PM, Mohit Jaggi <mohitja...@gmail.com> wrote:
thanks Sean. I am cross posting on dev to see why the code was written that
way. Perhaps this.type doesn’t do what is needed.
Mohit Jaggi
Founder,
Data Orchard LLC
www.dataorchardllc.com
On Aug 30, 2016, at 2:08 PM, Sean Owen <so...@cloudera.com> wrote:
I think it's imitating, for e
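For context, a tiny illustration of what this.type buys in a fluent API (a
standalone sketch with made-up class names, not the Spark code under
discussion):

class Param {
  private var name: String = _
  // Returning this.type preserves the subclass's static type in a
  // fluent chain; an explicit return type of Param would widen it.
  def setName(n: String): this.type = { name = n; this }
}

class MyParam extends Param {
  private var extra: Int = _
  def setExtra(e: Int): this.type = { extra = e; this }
}

object ThisTypeDemo extends App {
  // Compiles only because setName returns this.type (here MyParam),
  // not the base type Param.
  val p = new MyParam().setName("a").setExtra(1)
}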
com.databricks.spark.csv.util.TextFile has hadoop imports.
I figured out that the answer to my question is just to add
libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.6.0".
But I still wonder where this 2.2.0 default comes from.
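In build.sbt form, with the quoting sbt requires (a minimal sketch; the
project name is invented, the versions are the ones mentioned in this
thread):

// build.sbt -- pin hadoop-client explicitly so the transitive
// Hadoop 2.2.0 default does not win at dependency resolution.
name := "spark-csv-hadoop26"   // hypothetical project name
scalaVersion := "2.10.4"
libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.6.0"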
From: Mohit Jaggi <mohitja...@gmail.com>
spark-csv should not depend on hadoop
On Sun, Aug 16, 2015 at 9:05 AM, Gil Vernik <g...@il.ibm.com> wrote:
I would like to build spark-csv with Hadoop 2.6.0
I noticed that when I build it with sbt/sbt ++2.10.4 package, it builds it
with Hadoop 2.2.0 (at least this is what I saw in the .ivy2
be moved to spark-core. not
sure if that happened ]
--- previous posts ---
http://spark.apache.org/docs/1.4.0/api/scala/index.html#org.apache.spark.mllib.rdd.RDDFunctions
On Fri, Jan 30, 2015 at 12:27 AM, Mohit Jaggi <mohitja...@gmail.com> wrote:
http://mail-archives.apache.org/mod_mbox/spark
key and value and then using combine,
however.
—
FG
On Tue, Jan 27, 2015 at 10:17 PM, Mohit Jaggi <mohitja...@gmail.com> wrote:
Hi All,
I have a use case where I have an RDD (not a k,v pair) where I want to do a
combineByKey() operation. I can do
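The reshaping FG describes above, as a runnable sketch for spark-shell
(where sc is predefined; the data and the key function are made up): derive
a key first with keyBy, which yields a pair RDD, and combineByKey is then
available.

val rdd = sc.parallelize(Seq(1, 2, 3, 4, 5))

// keyBy(_ % 2) turns the plain RDD into (key, value) pairs keyed by
// parity; combineByKey then comes from PairRDDFunctions.
val sums = rdd.keyBy(_ % 2).combineByKey(
  (v: Int) => v,                 // createCombiner: first value seen for a key
  (acc: Int, v: Int) => acc + v, // mergeValue: fold a value into the combiner
  (a: Int, b: Int) => a + b      // mergeCombiners: merge across partitions
)
sums.collect() // would give Array((0,6), (1,9)) here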
https://issues.apache.org/jira/browse/SPARK-3489
Folks,
I am Mohit Jaggi and I work for Ayasdi Inc. After experimenting with Spark
for a while and discovering its awesomeness(!) I made an attempt to
provide a wrapper API that looks like an R and/or pandas dataframe.
https://github.com