Re: Error while creating tables in Parquet format in 2.0.1 (No plan for InsertIntoTable)

2016-11-06 Thread Kiran Chitturi
…false +- Relation[id#4,stat_repository_type#5,stat_repository_id#6,stat_holder_type#7,stat_holder_id#8,stat_coverage_type#9,stat_coverage_id#10,stat_membership_type#11,stat_membership_id#12,context#13] JDBCRelation(stats) (state=,code=0) … JDBCRelation also extends BaseRelation. Is …

Error while creating tables in Parquet format in 2.0.1 (No plan for InsertIntoTable)

2016-11-06 Thread Kiran Chitturi
…implement in the SolrRelation class to be able to create Parquet tables from Solr tables. Looking forward to your suggestions. Thanks, -- Kiran Chitturi
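For context on the thread above: in Spark's DataSource API, `CREATE TABLE ... AS SELECT` / `INSERT INTO` against a relation only plans successfully when the relation implements a write-side interface; a plain read-only `BaseRelation` produces "No plan for InsertIntoTable". Below is a minimal, hypothetical sketch of what a `SolrRelation` could add (the class name comes from the thread; it needs the `spark-sql` artifact on the classpath, and all bodies are placeholders, not the actual lucidworks implementation):

```scala
import org.apache.spark.sql.{DataFrame, SQLContext}
import org.apache.spark.sql.sources.{BaseRelation, InsertableRelation}
import org.apache.spark.sql.types.StructType

// Hypothetical sketch: a relation that also accepts INSERT INTO /
// INSERT OVERWRITE by mixing in InsertableRelation.
class SolrRelation(/* connection params elided */) extends BaseRelation
    with InsertableRelation {

  override def sqlContext: SQLContext = ??? // supplied by the provider
  override def schema: StructType = ???     // derived from the Solr schema

  // Called by Spark when planning an insert into this relation.
  override def insert(data: DataFrame, overwrite: Boolean): Unit = {
    // placeholder: write `data` back to Solr here
  }
}
```

A `CreatableRelationProvider` on the data source's `DefaultSource` plays the analogous role for `CREATE TABLE ... USING ... AS SELECT`.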

Re: 2.0.0: AnalysisException when reading csv/json files with dots in periods

2016-08-05 Thread Kiran Chitturi
Never mind, there is already a JIRA open for this: https://issues.apache.org/jira/browse/SPARK-16698 On Fri, Aug 5, 2016 at 5:33 PM, Kiran Chitturi <kiran.chitt...@lucidworks.com> wrote: Hi, During our upgrade to 2.0.0, we found this issue with one of our failing tests …

2.0.0: Hive metastore uses a different version of derby than the Spark package

2016-08-05 Thread Kiran Chitturi
… Would it make sense to update so that the hive-metastore and the Spark package are on the same Derby version? Thanks, -- Kiran Chitturi
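One common way to work around this kind of metastore conflict, until the versions are aligned upstream, is to pin a single Derby version in the application build. A sketch in sbt (the coordinates are Derby's real ones, but the version number here is purely illustrative, not the one Spark 2.0.0 or the Hive metastore actually ships):

```scala
// build.sbt sketch: force one Derby version across hive-metastore
// and the Spark package (version shown is illustrative).
dependencyOverrides += "org.apache.derby" % "derby" % "10.12.1.1"
```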

2.0.0: AnalysisException when reading csv/json files with dots in periods

2016-08-05 Thread Kiran Chitturi
(QueryExecution.scala:83) at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:83) at org.apache.spark.sql.Dataset.withTypedCallback(Dataset.scala:2558) at org.apache.spark.sql.Dataset.head(Dataset.scala:1924) at org.apache.spark.sql.Dataset.take(Dataset.scala:2139) ... 48 elided scala> The same happens for json files too. Is this a known issue in 2.0.0? Removing the field with dots from the csv/json file fixes the issue :) Thanks, -- Kiran Chitturi
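Until the fix tracked in SPARK-16698 landed, a common workaround for column names containing dots was to reference them with backticks, so Spark SQL does not parse the dots as struct-field access. A minimal sketch of that escaping (a helper of my own naming, not a Spark API; it assumes the column name itself contains no backticks):

```scala
// Sketch: wrap a dotted column name in backticks so Spark SQL treats
// it as one identifier (`a.b`) rather than field access (a.b).
// Assumes the name contains no backtick characters.
def quoteColumn(name: String): String =
  if (name.contains(".")) s"`$name`" else name
```

Usage would look like `df.select(col(quoteColumn("user.name")))` instead of `col("user.name")`.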

Re: 2.0.0 packages for twitter streaming, flume and other connectors

2016-08-03 Thread Kiran Chitturi
…others have indeed been removed from Spark, and can be found at the Apache Bahir project: http://bahir.apache.org/ I don't think there's a release for Spark 2.0.0 yet, though (only for the preview version). On Wed, Aug 3, 2016 at 8:40 PM, Kiran Chitturi …
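Once Bahir publishes artifacts for a given Spark line, pulling in one of the moved connectors would look roughly like the sbt fragment below (the group and artifact follow Bahir's published naming convention; the version shown is illustrative and should be checked against what Bahir has actually released):

```scala
// build.sbt sketch: the Twitter streaming connector after its move
// from Spark to Apache Bahir (version shown is illustrative).
libraryDependencies += "org.apache.bahir" %% "spark-streaming-twitter" % "2.0.0"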

2.0.0 packages for twitter streaming, flume and other connectors

2016-08-03 Thread Kiran Chitturi
…packages? If so, how can we get someone to release and publish new versions officially? I would like to help in any way possible to get these packages released and published. Thanks, -- Kiran Chitturi

Spark executor crashes when the tasks are cancelled

2016-04-27 Thread Kiran Chitturi
… (executor 2 exited caused by one of the running tasks) Reason: Remote RPC client di… Is it possible for an executor to die when the jobs in the SparkContext are cancelled? Apart from https://issues.apache.org/jira/browse/SPARK-14234, I could not find any JIRAs that report this error. Sometimes, …

Re: Spark sql not pushing down timestamp range queries

2016-04-15 Thread Kiran Chitturi
… http://talebzadehmich.wordpress.com … On 14 April 2016 at 19:26, Josh Rosen <joshro...@databricks.com> wrote: …

Re: Spark sql not pushing down timestamp range queries

2016-04-15 Thread Kiran Chitturi
Thanks Hyukjin for the suggestion. I will take a look at implementing the Solr data source with CatalystScan.

Spark sql not pushing down timestamp range queries

2016-04-14 Thread Kiran Chitturi
…timestamp filters to be pushed down to the Solr query. Are there limitations on the types of filters that are pushed down for Timestamp columns? Is there something I should do in my code to fix this? Thanks, -- Kiran Chitturi
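For readers of this thread: when a data source implements `PrunedFilteredScan`, Spark hands `buildScan` an array of `org.apache.spark.sql.sources.Filter` values, and any filter the source cannot handle is re-applied by Spark; a source that never receives (or silently drops) timestamp comparisons will scan unfiltered. The sketch below uses a hypothetical mini ADT of my own that merely mirrors the shape of Spark's filter classes, to show how range filters could translate into Solr `fq` clauses (Solr's `{X TO *}` range syntax is exclusive, `[X TO *]` inclusive):

```scala
// Hypothetical stand-ins for Spark's sources.GreaterThan / LessThan,
// used only to sketch the filter-to-Solr translation.
sealed trait Filter
case class GreaterThan(attribute: String, value: Any) extends Filter
case class LessThan(attribute: String, value: Any) extends Filter

// Translate a pushed-down range filter into a Solr fq clause.
def toSolrFq(f: Filter): String = f match {
  case GreaterThan(a, v) => s"$a:{$v TO *}" // exclusive lower bound
  case LessThan(a, v)    => s"$a:{* TO $v}" // exclusive upper bound
}
```

The `CatalystScan` variant mentioned in the reply receives raw Catalyst expressions instead of this simplified `Filter` ADT, which is one way to see timestamp predicates that the simpler interfaces do not forward.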

supporting adoc files in spark-packages.org

2016-02-10 Thread Kiran Chitturi
… if spark-packages.org can support AsciiDoc files in addition to README.md files. Thanks, -- Kiran Chitturi