Hi,
Sorry to hear about your troubles. Not sure whether you are aware of the
ES-Hadoop docs [1]. I've raised an issue [2] to
better clarify the usage of elasticsearch-hadoop vs elasticsearch-spark jars.
Apologies for the delayed response; for ES-Hadoop questions/issues it's best to use the
# Disclaimer: I'm an employee of Elastic (the company behind Elasticsearch) and lead of the Elasticsearch Hadoop integration.
Some things to clarify on the Elasticsearch side:
1. Elasticsearch is a distributed, real-time search and analytics engine. Search is just one aspect of it and it can work
https://www.found.no/foundation/text-analysis-part-1/#optimizing-phrase-searches-with-shingles
If you need more help, do reach out to the Elasticsearch mailing list:
https://www.elastic.co/community
Cheers,
On 29 April 2015 at 20:03, Costin Leau costin.l...@gmail.com wrote:
Hi,
First off, for Elasticsearch questions it's worth pinging the Elastic mailing list, as that is more closely monitored than this one.
Back to your question, Jeetendra is right that the exception indicates no data is flowing back to the es-connector and Spark.
The default is 1m [1], which should be
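If the 1m default refers to the connector's HTTP timeout, it can be raised through the job configuration. A sketch, assuming the standard ES-Hadoop timeout settings apply to your version (verify the property names against the configuration chapter of the docs for your release):

```
# Hedged example: raising ES-Hadoop connector timeouts in the job configuration
es.http.timeout = 10m        # timeout for HTTP/REST requests to Elasticsearch (default 1m)
es.scroll.keepalive = 10m    # how long scroll contexts stay alive between query requests
```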
Aris, if you encountered a bug, it's best to raise an issue with the
es-hadoop/spark project, namely here [1].
When using SparkSQL the underlying data needs to be present - this is mentioned in the docs as well [2]. As for the order, that does look like a bug and shouldn't occur. Note the .jar
On Tue, Feb 10, 2015 at 10:05 PM, Costin Leau costin.l...@gmail.com wrote:
Sorry, but there's too little information in this email to make any type of assessment.
Can you please describe what you are trying to do and which versions of Elasticsearch and es-spark you are using, so we can diagnose what's going on?
On 2/10/15 7:24 PM, shahid ashraf wrote:
Hi Costin, I upgraded the ES-Hadoop connector, and at this point I can't use Scala, but I'm still getting the same error.
On Tue, Feb 10, 2015 at 10:34 PM, Costin Leau costin.l...@gmail.com wrote:
Hi,
Spark 1.2 changed the APIs a bit, which is what's causing the problem with es-spark 2.1.0.Beta3. This was addressed a while back in es-spark proper; you can get hold of the dev build (the upcoming 2.1.Beta4) here [1].
P.S. Do note that a lot of things have happened in
That indicates that you are using two different versions of es-hadoop (2.0.x) and es-spark (2.1.x). Have you considered aligning the two versions?
On 1/28/15 11:08 AM, aarthi wrote:
Hi
We have a Maven project which supports running Spark jobs and Pig jobs, but I could use only either one of
You can use the applySchema method provided in SQLContext/HiveContext to apply the defined schema to the RDD[Row]. The return value of applySchema is the SchemaRDD you want.
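For readers without a Spark shell handy, the pattern Yin describes - pairing untyped rows with a declared schema so fields become queryable by name - can be sketched in plain Python. This is a toy analogue, not the actual Spark/PySpark API; `apply_schema` and `Record` are hypothetical names:

```python
# Toy analogue of Spark SQL's applySchema: attach field names to plain
# tuples, much as applySchema attaches a StructType to an RDD[Row].
# apply_schema/Record are illustrative stand-ins, not Spark classes.
from collections import namedtuple

def apply_schema(rows, field_names):
    """Return records whose fields are accessible by name."""
    Record = namedtuple("Record", field_names)
    return [Record(*row) for row in rows]

rows = [("alice", 30), ("bob", 25)]
people = apply_schema(rows, ["name", "age"])

# Once the schema is applied, fields can be filtered by name,
# roughly like a SQL query against the resulting SchemaRDD
adults = [p.name for p in people if p.age > 28]
print(adults)  # -> ['alice']
```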
Thanks,
Yin
On Tue, Sep 30, 2014 at 5:05 AM, Costin Leau costin.l...@gmail.com wrote:
Hi,
I'm working on supporting SchemaRDD in Elasticsearch Hadoop [1] but I'm having some issues with the SQL API, in
particular in what the DataTypes translate to.
1. A SchemaRDD is composed of a Row and StructType - I'm using the latter to decompose a Row into primitives. I'm not clear
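The decomposition described above - walking a schema to break an untyped row into typed primitives - can be sketched roughly like this in plain Python. The `StructField`/`decompose_row` names mimic the shape of Spark's type API for illustration only; this is not the real Spark SQL code:

```python
# Hedged sketch: use a schema (field name + type) to decompose an untyped
# row into typed primitives, mimicking how a StructType can guide the
# traversal of a Row. StructField/decompose_row are illustrative names.
from dataclasses import dataclass

@dataclass
class StructField:
    name: str
    data_type: type  # e.g. str or int, standing in for StringType/IntegerType

def decompose_row(row, schema):
    """Coerce each cell of `row` to the primitive type its field declares."""
    if len(row) != len(schema):
        raise ValueError("row does not match schema arity")
    return {f.name: f.data_type(cell) for f, cell in zip(schema, row)}

schema = [StructField("name", str), StructField("age", int)]
print(decompose_row(("alice", "30"), schema))  # -> {'name': 'alice', 'age': 30}
```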