I tracked this down in the end. It turns out the POJO was not actually
defined as 'public' for some reason. It seems like this should be detected
as an error prior to generating code though?
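For reference, a minimal sketch of what the bean encoder expects. The class and field names here are hypothetical; the point is that Encoders.bean requires a public class with a public no-arg constructor and public getters/setters, since Spark generates code against it via reflection:

```java
import java.io.Serializable;
import java.util.Arrays;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.SparkSession;

// The POJO must be declared public (a package-private or inner non-static
// class will fail at codegen time rather than with a clear error up front).
public class Person implements Serializable {
    private String name;

    public Person() {}                       // public no-arg constructor required
    public Person(String name) { this.name = name; }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .master("local[*]").appName("bean-example").getOrCreate();
        Dataset<Person> ds = spark.createDataset(
                Arrays.asList(new Person("Andy")), Encoders.bean(Person.class));
        ds.show();
        spark.stop();
    }
}
```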
Thanks,
Andy.
--
Andy Grove
Chief Architect
www.agildata.com
On Fri, Aug 5, 2016 at 8:28 AM, Andy
Ah, you still have to use the JavaSparkContext rather than using the
sparkSession.sparkContext ... that makes sense.
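To spell that out: sparkSession.sparkContext() returns the Scala SparkContext, but it can be wrapped back into a JavaSparkContext rather than constructing a second context. A minimal sketch (app name is mine):

```java
import java.util.Arrays;

import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.SparkSession;

public class ContextBridge {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .master("local[*]").appName("context-bridge").getOrCreate();

        // Wrap the underlying Scala SparkContext to recover the Java API.
        JavaSparkContext jsc =
                JavaSparkContext.fromSparkContext(spark.sparkContext());

        long n = jsc.parallelize(Arrays.asList(1, 2, 3)).count();
        System.out.println(n);
        spark.stop();
    }
}
```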
Thanks for your help.
Thanks,
Andy.
--
Andy Grove
Chief Architect
www.agildata.com
On Fri, Aug 5, 2016 at 12:03 PM, Everett Anderson <ever...@nuna.com> wrote:
> Hi
ile a JIRA for.
Thanks,
Andy.
--
Andy Grove
Chief Architect
www.agildata.com
From some brief experiments using Java with Spark 2.0 it looks like Java
developers should stick to SparkContext and SQLContext rather than using
the new SparkSession API?
It would be great if someone could confirm if that is the intention or not.
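For what it's worth, SparkSession itself does appear usable from Java for the Dataset/DataFrame side of the API; the gap seems to be only where a JavaSparkContext is needed for RDD work. A minimal sketch (names are mine):

```java
import java.util.Arrays;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.SparkSession;

public class SessionFromJava {
    public static void main(String[] args) {
        // SparkSession replaces SQLContext as the entry point in 2.0
        // and works directly from Java for Dataset operations.
        SparkSession spark = SparkSession.builder()
                .master("local[*]").appName("session-example").getOrCreate();

        Dataset<String> ds = spark.createDataset(
                Arrays.asList("a", "b", "c"), Encoders.STRING());
        System.out.println(ds.count());
        spark.stop();
    }
}
```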
Thanks,
Andy.
--
Andy Grove
Chief Architect
is better for sure.
Thanks,
Andy.
--
Andy Grove
Chief Architect
AgilData - Simple Streaming SQL that Scales
www.agildata.com
On Thu, Aug 4, 2016 at 10:25 PM, Andy Grove <andy.gr...@agildata.com> wrote:
> Hi,
>
> I have some working Java code with Spark 1.6 that I am upgrading to Sp
ing wrong or is this a regression?
Thanks,
Andy.
--
Andy Grove
Chief Architect
www.agildata.com
On Tue, May 17, 2016 at 11:48 AM, Andy Grove <andy.gr...@agildata.com>
> wrote:
>
>>
>> Hi,
>>
>> I have a requirement to create types dynamically in Spark and then
>> instantiate those types from Spark SQL via a UDF.
>>
>> I tried doing
Hi,
I have a requirement to create types dynamically in Spark and then
instantiate those types from Spark SQL via a UDF.
I tried doing the following:
val addressType = StructType(List(
  StructField("state", DataTypes.StringType),
  StructField("zipcode", DataTypes.IntegerType)
))
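A fuller sketch of the idea, assuming the schema above: functions.udf has an overload taking a function plus an explicit DataType, which lets a UDF return a Row conforming to a schema built at runtime. The object and column names here are mine:

```scala
import org.apache.spark.sql.{Row, SparkSession}
import org.apache.spark.sql.functions.udf
import org.apache.spark.sql.types._

object DynamicStructUdf {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]").appName("dynamic-udf").getOrCreate()
    import spark.implicits._

    // Schema constructed at runtime rather than from a case class.
    val addressType = StructType(List(
      StructField("state", StringType),
      StructField("zipcode", IntegerType)
    ))

    // The explicit DataType overload of udf() lets the UDF return a
    // generic Row matching the dynamic schema.
    val makeAddress = udf((state: String, zip: Int) => Row(state, zip), addressType)

    val df = Seq(("CO", 80301)).toDF("state", "zip")
    df.select(makeAddress($"state", $"zip").as("address")).show(false)
    spark.stop()
  }
}
```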
This blog post should be helpful
http://www.agildata.com/apache-spark-rdd-vs-dataframe-vs-dataset/
Thanks,
Andy.
--
Andy Grove
Chief Architect
AgilData - Simple Streaming SQL that Scales
www.agildata.com
On Tue, Feb 16, 2016 at 9:05 AM, Ashok Kumar <ashok34...@yahoo.com.invalid>
-dataframe-vs-dataset/
Thanks,
Andy.
--
Andy Grove
Chief Architect
AgilData - Simple Streaming SQL that Scales
www.agildata.com
On Wed, Jan 20, 2016 at 7:07 AM, raghukiran <raghuki...@gmail.com> wrote:
> Hi,
>
> I created a custom UserDefinedType in Java as follows:
>
I'm talking about implementing CustomerRecord as a Scala case class, rather
than as a Java class. Scala case classes implement the scala.Product trait,
which Catalyst is looking for.
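Concretely: the compiler makes every case class a Product, which gives Catalyst positional field access and arity for free. This much is plain Scala and needs no Spark at all (the field names are mine):

```scala
// Case classes automatically implement scala.Product, which is what
// Catalyst's encoder derivation relies on for schema inference.
case class CustomerRecord(id: Long, name: String)

object ProductCheck extends App {
  val rec = CustomerRecord(1L, "test")
  println(rec.isInstanceOf[Product])   // true
  println(rec.productArity)            // 2: Catalyst sees two fields
  println(rec.productElement(1))       // test: positional field access
}
```

In Spark you would then get a typed Dataset directly, e.g. `Seq(rec).toDS()` after importing `spark.implicits._`.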
Thanks,
Andy.
--
Andy Grove
Chief Architect
AgilData - Simple Streaming SQL that Scales
www.agildata.com
ome pointers.
>
> Thanks,
> Raghu
>
> On Wed, Jan 20, 2016 at 12:25 PM, Andy Grove <andy.gr...@agildata.com>
> wrote:
>
>> I'm talking about implementing CustomerRecord as a scala case class,
>> rather than as a Java class. Scala case classes implement th
Honestly, moving to Scala and using case classes is the path of least
resistance in the long term.
Thanks,
Andy.
--
Andy Grove
Chief Architect
AgilData - Simple Streaming SQL that Scales
www.agildata.com
On Wed, Jan 20, 2016 at 10:19 AM, Raghu Ganti <raghuki...@gmail.com> wrote: