Guys,

I'm evaluating a denormalized data structure in ES: essentially a Customer 
record holding a large number of transactions, each with a dollar amount 
and a date.  It looks roughly like this:

{
  "id": 123,
  "name": "Gavin",
  ...
  "transactions": {
    "txn_uid_1": { "date": "02-19-2013", "amount": 19.99 },
    "txn_uid_2": { "date": "02-20-2013", "amount": 23.00 },
    ...
    "txn_uid_N": { "date": "02-21-2013", "amount": 99.99 }
  }
}

The transactions in particular can arrive quite frequently and in batches. 
 I would like to apply a batch of partial updates all carrying changes to 
the same document.  But since the overall document can be quite beefy 
(customer name, characteristics, etc.), I want to avoid re-indexing the 
whole document for every single partial update.  Ideally I could turn off 
ES indexing, apply the batch in the hope that the updates get merged in 
the transaction log, and then turn indexing back on so that only the net 
result gets re-indexed once.
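
To make this concrete, the kind of batch I have in mind would be a Bulk 
API request carrying several partial updates against the same document, 
something like the sketch below (the index/type names and the new 
transactions are made up for illustration):

POST /_bulk
{ "update" : { "_index" : "customers", "_type" : "customer", "_id" : "123" } }
{ "doc" : { "transactions" : { "txn_uid_7" : { "date" : "02-22-2013", "amount" : 42.50 } } } }
{ "update" : { "_index" : "customers", "_type" : "customer", "_id" : "123" } }
{ "doc" : { "transactions" : { "txn_uid_8" : { "date" : "02-22-2013", "amount" : 10.00 } } } }

My understanding is that each "doc" gets merged into the stored source, 
but every single update still re-indexes the entire document, which is 
exactly the overhead I'm trying to avoid.
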
Is this not possible?  If not, what is the best way to solve this type of 
use case?  Would you normalize the data structure into a parent Customer 
with child Transaction documents instead?
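
If parent/child is the recommended route, the mapping I'm picturing is 
roughly the sketch below (the field types and date format are my guesses, 
not a tested mapping):

PUT /customers
{
  "mappings": {
    "customer": {
      "properties": {
        "name": { "type": "string" }
      }
    },
    "transaction": {
      "_parent": { "type": "customer" },
      "properties": {
        "date": { "type": "date", "format": "MM-dd-yyyy" },
        "amount": { "type": "double" }
      }
    }
  }
}

Each transaction would then be indexed as its own small child document, 
e.g. PUT /customers/transaction/txn_uid_1?parent=123, so adding a 
transaction never touches the big customer document.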

How efficiently can ES handle many partial updates to the same document if 
the document is, say, a few pages long, with 20-30 different fields, some 
of which are multi-valued arrays?

Thanks so much for your input!!
