Hi,
I would like to register custom Catalyst expressions so they can be used from the SQL DSL:
https://stackoverflow.com/questions/51199761/spark-register-expression-for-sql-dsl
Can someone shed some light here? The documentation does not seem to contain
much information about Catalyst internals.
Thanks a lot.
Georg
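For what it's worth, one way to approach this is a minimal sketch like the following, assuming a Spark version that provides SparkSessionExtensions.injectFunction (older releases would have to reach the function registry through internal APIs instead). The DoubleLong expression and the double_long function name are made up for illustration:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.catalyst.FunctionIdentifier
import org.apache.spark.sql.catalyst.expressions.{Expression, ExpressionInfo, UnaryExpression}
import org.apache.spark.sql.catalyst.expressions.codegen.{CodegenContext, ExprCode}
import org.apache.spark.sql.types.{DataType, LongType}

// Toy Catalyst expression: doubles a LONG value (illustrative only).
case class DoubleLong(child: Expression) extends UnaryExpression {
  override def dataType: DataType = LongType
  override protected def nullSafeEval(input: Any): Any =
    input.asInstanceOf[Long] * 2L
  override protected def doGenCode(ctx: CodegenContext, ev: ExprCode): ExprCode =
    defineCodeGen(ctx, ev, c => s"($c) * 2L")
  // Needed on newer Spark versions with the new TreeNode API; harmless on older ones.
  protected def withNewChildInternal(newChild: Expression): DoubleLong =
    copy(child = newChild)
}

object RegisterDoubleLong {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .withExtensions { ext =>
        // Register the expression under a SQL-callable name.
        ext.injectFunction((
          FunctionIdentifier("double_long"),
          new ExpressionInfo(classOf[DoubleLong].getName, "double_long"),
          (children: Seq[Expression]) => DoubleLong(children.head)))
      }
      .getOrCreate()

    // The expression is now callable from plain SQL.
    spark.sql("SELECT double_long(id) FROM range(5)").show()
    spark.stop()
  }
}

The same registration can also be packaged in a class of type SparkSessionExtensions => Unit and activated via the spark.sql.extensions configuration, which keeps it out of application code.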
Hi,
I noticed that Spark standalone (run locally for development) no longer
supports the integrated Hive metastore, as some Derby driver classes seem
to be missing from 2.2.1 onwards (including 2.3.0). With 2.2.0 or earlier
versions it works just fine to execute the following script:
spark.sql("CREA
>
> The point is that Spark's prior usage of Akka was limited enough that it
> could fairly easily be removed entirely instead of forcing particular
> architectural decisions on Spark's users.
>
>
> On Sun, May 7, 2017 at 1:14 P
wrote on Sun., 7 May 2017 at 21:17:
> https://issues.apache.org/jira/browse/SPARK-5293
>
>
> On 05/07/2017 08:59 PM, geoHeil wrote:
>
> > Hi,
> >
> > I am curious why Spark (with 2.0, completely) removed all Akka
> > dependencies for RPC and switched
Hi,
I am curious why Spark (with 2.0, completely) removed all Akka dependencies
for RPC and switched entirely to (as far as I know) Netty.
regards,
Georg
Thanks a lot, Holden.
@Liang-Chi Hsieh did you try to run
https://gist.github.com/geoHeil/6a23d18ccec085d486165089f9f430f2? For me
it crashes at either line 51 or 58. Holden described the problem
pretty well. Is it clear to you now?
Cheers,
Georg
I am working on building a custom ML pipeline model / estimator to impute
missing values, e.g. I want to fill with the last known good value.
Using a window function is slow / will put the data into a single partition.
I built some sample code using the RDD API; however, it has some None / null
problems wi
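For reference, a sketch of the window-based variant, assuming the data has a key to partition by (without such a key the window indeed collapses everything into a single partition); the column names series, ts and value are made up:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, last}

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val df = Seq(
  ("a", 1, Some(1.0)), ("a", 2, None), ("a", 3, Some(3.0)),
  ("b", 1, None), ("b", 2, Some(2.0)), ("b", 3, None)
).toDF("series", "ts", "value")

// last(..., ignoreNulls = true) over an unbounded-preceding window carries
// the most recent non-null value forward within each series.
val w = Window
  .partitionBy("series")
  .orderBy("ts")
  .rowsBetween(Window.unboundedPreceding, Window.currentRow)

val filled = df.withColumn("value_filled", last(col("value"), ignoreNulls = true).over(w))
filled.orderBy("series", "ts").show()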
> ...ation if the value is already deserialized.
>
> On Wed, Jan 4, 2017 at 7:19 AM, geoHeil <[hidden email]> wrote:
>
> Hi, I would like to know more about typesafe aggregations in Spark.
>
>
> http://sta
Hi, I would like to know more about typesafe aggregations in Spark.
http://stackoverflow.com/questions/40596638/inquiries-about-spark-2-0-dataset/40602882?noredirect=1#comment70139481_40602882
An example of these is
https://blog.codecentric.de/en/2016/07/spark-2-0-datasets-case-classes/
ds.groupByK
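In case a concrete example helps, a minimal sketch of such a typed aggregation using the Aggregator API; the Purchase case class and the sample data are made up, and the sketch builds its own small Dataset:

import org.apache.spark.sql.{Encoder, Encoders, SparkSession}
import org.apache.spark.sql.expressions.Aggregator

case class Purchase(customer: String, amount: Double)

// Typed aggregator: sums the amount field while keeping compile-time types.
object SumAmount extends Aggregator[Purchase, Double, Double] {
  def zero: Double = 0.0
  def reduce(acc: Double, p: Purchase): Double = acc + p.amount
  def merge(a: Double, b: Double): Double = a + b
  def finish(acc: Double): Double = acc
  def bufferEncoder: Encoder[Double] = Encoders.scalaDouble
  def outputEncoder: Encoder[Double] = Encoders.scalaDouble
}

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val ds = Seq(
  Purchase("alice", 10.0),
  Purchase("alice", 5.0),
  Purchase("bob", 7.5)
).toDS()

// The result stays typed: a Dataset[(String, Double)], not an untyped DataFrame.
val totals = ds.groupByKey(_.customer).agg(SumAmount.toColumn.name("total"))
totals.show()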