emant9...@gmail.com> wrote:
> One approach could be to wrap your MutableRow in WrappedInternalRow, which is
> a child class of Row.
>
> Hemant
> www.snappydata.io
> linkedin.com/company/snappydata
>
>
> On Tue, Oct 6, 2015 at 3:21 PM, Ophir Cohen <oph...@gmail.com> wrote:
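For readers hitting the same wall, a hedged sketch of the wrapping idea above, assuming only Spark 1.5's public Row trait; this WrappedInternalRow is written from scratch for illustration (the real class Hemant refers to is not part of Spark itself) and assumes the mutable row's values were already extracted into an Array[Any]:

import org.apache.spark.sql.Row

// Hypothetical stand-in for the wrapper described above: a child class of
// Row backed by a plain values array. length, get and copy are the only
// abstract members of the Row trait, so a minimal wrapper needs no more.
class WrappedInternalRow(values: Array[Any]) extends Row {
  override def length: Int = values.length
  override def get(i: Int): Any = values(i)
  override def copy(): Row = new WrappedInternalRow(values.clone())
}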
Map and Array would need some handling.
>
> I will check with the author of this code, I think this code can be
> contributed to Spark.
>
> Hemant
> www.snappydata.io
> linkedin.com/company/snappydata
>
> On Wed, Oct 7, 2015 at 3:30 PM, Ophir Cohen <oph...@gmail.com> wrote:
Hi Guys,
I'm upgrading to Spark 1.5.
In our previous version (Spark 1.3, but it was OK on 1.4 as well) we created a
GenericMutableRow
(org.apache.spark.sql.catalyst.expressions.GenericMutableRow) and returned it
as an org.apache.spark.sql.Row.
Starting from Spark 1.5, GenericMutableRow no longer extends Row.
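One migration workaround, sketched under the assumption that the row's values are already materialized (the values below are purely illustrative):

import org.apache.spark.sql.Row

// In 1.5, GenericMutableRow is an InternalRow rather than a Row, so it can
// no longer be returned where a Row is expected. Row.fromSeq builds an
// external Row from plain Scala values instead.
val values: Array[Any] = Array(1, "a", 2.0)
val row: Row = Row.fromSeq(values)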
Hi,
I'm using Spark on top of Hive.
As I want to keep the old tables, I store the DataFrame into a tmp table in Hive
and when it finishes successfully I rename the table.
In the last few days I've upgraded to Spark 1.4.1, and as I'm using AWS EMR
I got Hive 1.0.
Now when I try to rename the table I get the
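For context, a minimal sketch of the write-then-rename flow described above, assuming a HiveContext hc and a DataFrame df (the table names are hypothetical):

// Write to a temporary table first, then rename it into place, so the old
// table is kept intact if the write fails midway.
df.write.saveAsTable("my_table_tmp")
hc.sql("ALTER TABLE my_table_tmp RENAME TO my_table")

The rename is the step that started failing after the move to Hive 1.0.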
Nope, I'm checking it out, thanks!
On Tue, Sep 29, 2015 at 3:30 PM, Ted Yu <yuzhih...@gmail.com> wrote:
> Have you seen this thread ?
> http://search-hadoop.com/m/q3RTtGwP431AQ2B41
>
> Plug in the metastore version for your deployment.
>
> Cheers
>
> On Sep 29, 2015
Hi,
I'm working on my company's system, which is built out of Spark, Zeppelin,
Hive and some other technologies, and I have a question about the ability to stop
contexts.
While working on the test framework for the system, when running tests I sometimes
would like to create a new SparkContext in order to run the tests
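A minimal sketch of recycling the context between tests, assuming local mode and that nothing else still holds the old context (the master and app name are illustrative):

import org.apache.spark.{SparkConf, SparkContext}

// Only one SparkContext may be active per JVM, so stop the old one before
// creating a fresh context for the next test run.
var sc = new SparkContext(new SparkConf().setMaster("local[2]").setAppName("tests"))
sc.stop()
sc = new SparkContext(new SparkConf().setMaster("local[2]").setAppName("tests"))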
A short update: eventually we manually upgraded to 1.3.1 and the problem
was fixed.
On Apr 26, 2015 2:26 PM, Ophir Cohen oph...@gmail.com wrote:
I happened to hit the following issue that prevents me from using UDFs with
case classes: https://issues.apache.org/jira/browse/SPARK-6054.
The issue is already fixed in 1.3.1, but we are working on Amazon, and it looks
like Amazon doesn't yet provide a deployment of Spark 1.3.1 via their scripts.
Did someone
I wrote a few mails here regarding this issue.
After further investigation I think there is a bug in Spark 1.3 in saving
Hive tables.
(hc is a HiveContext)
1. Verify the needed configuration exists:
scala> hc.sql("set hive.exec.compress.output").collect
res4: Array[org.apache.spark.sql.Row] = ...
...
spark.sql.sources.provider=org.apache.spark.sql.parquet},
viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE)
On Tue, Apr 21, 2015 at 12:40 PM, Ophir Cohen oph...@gmail.com wrote:
Sadly, I'm encountering too many issues migrating my code to Spark 1.3.
I wrote about one problem in another mail, but my main
Lately we upgraded our Spark to 1.3.
Not surprisingly, along the way I found a few incompatibilities between the
versions, which was quite expected.
I found one change whose origin I'm interested in understanding.
env: Amazon EMR, Spark 1.3, Hive 0.13, Hadoop 2.4
In Spark 1.2.1 I ran from the code a query such as:
BTW
This:
hc.sql("show tables").collect
Works great!
On Tue, Apr 21, 2015 at 10:49 AM, Ophir Cohen oph...@gmail.com wrote:
Hi,
Today I upgraded our code and cluster to 1.3.
We are using Spark 1.3 on Amazon EMR, ami 3.6, including the history server and
Ganglia.
I also migrated all deprecated SchemaRDDs into DataFrames.
Now when I'm trying to read parquet files from S3 I get the below
exception.
Actually it's not a problem if
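A minimal sketch of the kind of read that triggers it, assuming a SQLContext sqlContext (the bucket path is hypothetical):

// Spark 1.3-era API: parquetFile returns a DataFrame (formerly a SchemaRDD).
val df = sqlContext.parquetFile("s3n://my-bucket/my-data/")
df.count()  // forces the read, which is where the exception surfaces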
Interesting:
removing the history server and the '-a' option, and using ami 3.5, fixed the problem.
Now the question is: what made the change?...
I vote for the '-a', but let me update...
On Mon, Apr 20, 2015 at 5:43 PM, Ophir Cohen oph...@gmail.com wrote:
Hi,
Today I upgraded our code and cluster to 1.3
Hi Guys and great job!
I encountered a weird problem in local mode and I'd be glad to sort it
out...
When trying to save a SchemaRDD into a Hive table, it fails with
'TreeNodeException: Unresolved plan found'.
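A minimal sketch of the failing call, assuming a HiveContext is in use and rdd is the SchemaRDD being saved (the table name is hypothetical):

// Spark 1.2-era API: persist the SchemaRDD as a Hive-managed table.
// This is the call that dies with 'Unresolved plan found' in local mode.
rdd.saveAsTable("my_table")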
I have found a similar issue in Jira:
https://issues.apache.org/jira/browse/SPARK-4825 but I'm
basic_null_diluted_d was not resolved? Can you check if
basic_null_diluted_d is in your table?
On Tue, Mar 17, 2015 at 9:34 AM, Ophir Cohen oph...@gmail.com wrote:
Hi Guys,
I'm registering a function using:
sqlc.registerFunction("makeEstEntry", ReutersDataFunctions.makeEstEntry _)
Then I register the table
Ok, I managed to solve it.
As the issue in Jira suggests, it was fixed in 1.2.1; I probably had some old
jars in the classpath.
Cleaning everything and rebuilding eventually solved the problem.
On Mar 17, 2015 12:25 PM, Ophir Cohen oph...@gmail.com wrote:
Hi Guys and great job!
I encounter a weird
associated with it.
On Tue, Mar 17, 2015 at 2:08 PM, Ophir Cohen oph...@gmail.com wrote:
Interesting, I thought the problem was with the method itself.
I will check it soon and update.
Can you elaborate on what the # and the number mean? Is that a
reference to the field in the RDD
Hi Guys,
I'm registering a function using:
sqlc.registerFunction("makeEstEntry", ReutersDataFunctions.makeEstEntry _)
Then I register the table and try to query the table using that function
and I get:
org.apache.spark.sql.catalyst.errors.package$TreeNodeException: Unresolved
attributes:
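For reference, a minimal sketch of the register-and-query flow described above, assuming a SQLContext sqlc; the column name appears elsewhere in the thread, while the table name and the SchemaRDD data are hypothetical:

// Register the Scala function as a SQL UDF, expose the data as a temp
// table, then call the UDF from SQL.
sqlc.registerFunction("makeEstEntry", ReutersDataFunctions.makeEstEntry _)
data.registerTempTable("estimates")
val result = sqlc.sql("SELECT makeEstEntry(basic_null_diluted_d) FROM estimates")

The 'Unresolved attributes' error is what the thread is debugging: the planner could not resolve the referenced attribute against the table's schema.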