Re: What happens to data in an existing type if we update the mapping to specify 'path' values for _id and _routing

2014-10-28 Thread Preeti Raj - Buchhada
Any ideas on this?


On Monday, October 27, 2014 3:03:16 PM UTC+5:30, Preeti Raj - Buchhada 
wrote:
>
> We are using ES 1.3.2.
> We need to specify custom id and routing values when indexing.
> We've been doing this through the Java APIs; however, we would now like to 
> update the mapping to specify 'path' values for _id and _routing.
>
> The questions we have are:
> 1) Since this type already contains a huge number of documents, can we 
> change the mapping? When we tried, we got an '"acknowledged": true' 
> response, but the change did not take effect when we indexed new documents.
> 2) If there is a way to achieve this, will it affect only newly indexed 
> documents?
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/1b912d68-900f-4f6f-be5f-cbae83776e1b%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


What happens to data in an existing type if we update the mapping to specify 'path' values for _id and _routing

2014-10-27 Thread Preeti Raj - Buchhada
We are using ES 1.3.2.
We need to specify custom id and routing values when indexing.
We've been doing this through the Java APIs; however, we would now like to 
update the mapping to specify 'path' values for _id and _routing.

The questions we have are:
1) Since this type already contains a huge number of documents, can we change 
the mapping? When we tried, we got an '"acknowledged": true' response, but 
the change did not take effect when we indexed new documents.
2) If there is a way to achieve this, will it affect only newly indexed 
documents?
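For reference, the 'path' form of these mappings on ES 1.x looks roughly like the following (the type and field names here are hypothetical). Note that path-based _id and _routing only instruct Elasticsearch to extract values at index time for newly indexed documents; existing documents are never rewritten, and a mapping update that conflicts with the type's existing _id/_routing settings may be acknowledged yet silently ignored, which would match the behaviour described above. These 'path' options were deprecated in 1.5.0 and removed in 2.0.

```json
{
  "my_type": {
    "_id":      { "path": "order_id" },
    "_routing": { "path": "customer_id", "required": true },
    "properties": {
      "order_id":    { "type": "string", "index": "not_analyzed" },
      "customer_id": { "type": "string", "index": "not_analyzed" }
    }
  }
}
```

If the update does not take effect on the existing type, the reliable route on 1.x is generally to create a new index with the desired mapping and reindex the documents into it.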



Re: Is there a way to "update" ES records using Spark?

2014-10-13 Thread Preeti Raj - Buchhada
Thanks for your reply Costin.
However, we have a need to compute a custom ID based on concatenation of 
multiple fields values and then computing the hash value. So simply 
specifying 'es.mapping.id' will not help in our case.

Is there any other way?
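One workaround, since 'es.mapping.id' reads an existing field from each document: precompute the hash into a field before writing, then point 'es.mapping.id' at that field. A minimal sketch (the field names and the choice of SHA-1 are hypothetical):

```python
import hashlib

def custom_doc_id(doc, fields=("user_id", "event_type", "timestamp")):
    """Concatenate selected field values and hash the result to get a
    deterministic document ID (field names here are hypothetical)."""
    raw = "|".join(str(doc[f]) for f in fields)
    return hashlib.sha1(raw.encode("utf-8")).hexdigest()

doc = {"user_id": 42, "event_type": "click", "timestamp": 1413196800}
# Precompute the ID into the document itself, then set
# 'es.mapping.id' to 'es_id' in the elasticsearch-hadoop configuration.
doc["es_id"] = custom_doc_id(doc)
```

Because the ID is deterministic, writing the same logical record again targets the same Elasticsearch document, which gives overwrite/update semantics.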


On Monday, October 13, 2014 4:08:05 PM UTC+5:30, Costin Leau wrote:
>
> You can use the mapping options [1], namely `es.mapping.id`, to specify 
> the id field of your documents. 
>
> [1] 
> http://www.elasticsearch.org/guide/en/elasticsearch/hadoop/master/configuration.html#cfg-mapping
>  
>
> On Mon, Oct 13, 2014 at 12:55 PM, Preeti Raj - Buchhada 
> > wrote: 
> > Does anyone have an idea? 
> > Even just knowing whether this is possible or not would be a great help. 
> > 
> > Thanks. 
> > 
> > 
> > 
> > On Wednesday, October 1, 2014 3:46:51 PM UTC+5:30, Preeti Raj - Buchhada 
> > wrote: 
> >> 
> >> I am using ES version 1.3.2, and Spark 1.1.0. 
> >> I can successfully read and write records from/to ES using 
> >> newAPIHadoopRDD() and saveAsNewAPIHadoopDataset(). 
> >> However, I am struggling to find a way to update records. Even if I 
> >> specify a 'key' in EsOutputFormat, it gets ignored, as clearly documented. 
> >> So my question is: Is there a way to specify a document ID and custom 
> >> routing values when writing to ES using Spark? If yes, how? 
> > 



Re: Is there a way to "update" ES records using Spark?

2014-10-13 Thread Preeti Raj - Buchhada
Does anyone have an idea?
Even just knowing whether this is possible or not would be a great help.

Thanks.



On Wednesday, October 1, 2014 3:46:51 PM UTC+5:30, Preeti Raj - Buchhada 
wrote:
>
> I am using ES version 1.3.2, and Spark 1.1.0.
> I can successfully read and write records from/to ES using newAPIHadoopRDD() 
> and saveAsNewAPIHadoopDataset().
> However, I am struggling to find a way to update records. Even if I specify 
> a 'key' in EsOutputFormat, it gets ignored, as clearly documented.
> So my question is: Is there a way to specify a document ID and custom 
> routing values when writing to ES using Spark? If yes, how?
>



Is there a way to "update" ES records using Spark?

2014-10-01 Thread Preeti Raj - Buchhada
I am using ES version 1.3.2, and Spark 1.1.0.
I can successfully read and write records from/to ES using newAPIHadoopRDD() 
and saveAsNewAPIHadoopDataset().
However, I am struggling to find a way to update records. Even if I specify 
a 'key' in EsOutputFormat, it gets ignored, as clearly documented.
So my question is: Is there a way to specify a document ID and custom 
routing values when writing to ES using Spark? If yes, how?
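For completeness, a sketch of the elasticsearch-hadoop write configuration that carries a per-document ID and routing value from Spark, assuming each document already holds precomputed 'es_id' and 'route_key' fields (the index name, node address, and field names are hypothetical, and you should confirm your es-hadoop version supports `es.mapping.routing`):

```python
# elasticsearch-hadoop write configuration (a sketch, not a verified setup):
# 'es.mapping.id' tells the connector which document field to use as _id,
# so repeated writes overwrite the same document; 'es.mapping.routing'
# does the same for the routing value.
es_write_conf = {
    "es.resource": "my_index/my_type",   # hypothetical target index/type
    "es.nodes": "localhost:9200",        # hypothetical cluster address
    "es.mapping.id": "es_id",            # field holding the precomputed ID
    "es.mapping.routing": "route_key",   # field holding the routing value
}

# The conf is then passed to the write call, e.g. (not executed here):
# rdd.saveAsNewAPIHadoopFile(
#     path="-",
#     outputFormatClass="org.elasticsearch.hadoop.mr.EsOutputFormat",
#     keyClass="org.apache.hadoop.io.NullWritable",
#     valueClass="org.elasticsearch.hadoop.mr.LinkedMapWritable",
#     conf=es_write_conf)
```

With the ID mapped from a field, a second write of the same record replaces the existing document rather than creating a new one, which is one way to approximate updates from Spark.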
