so now I do all the filtering on the client side with excellent
performance.
Maybe you could just retrieve all the documents in a low-priority 'batch'
process, and store the intermediate results just as a blob of text, perhaps
put as a JSON file into Cloud Storage.
The front end can just fetch that file.
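A minimal sketch of the batch idea, assuming the documents have already been pulled from the index as plain dicts (the field names and the `index-dump.json` object name are made up for illustration):

```python
import json

def build_blob(documents):
    """Serialize a list of document dicts into one JSON text blob.

    `documents` is whatever the low-priority batch process pulled from
    the search index; the shape shown here is hypothetical.
    """
    return json.dumps(documents)

# The batch process would then write this string to Cloud Storage, e.g.
# with the google-cloud-storage client:
#   bucket.blob("index-dump.json").upload_from_string(
#       build_blob(docs), content_type="application/json")
# and the front end fetches that one file instead of querying the index.
```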
On Wednesday, 1 July 2015 at 14:35:04 UTC+2, barryhunter wrote:
On 1 July 2015 at 13:27, Filip Nilsson fill...@gmail.com wrote:
I’m trying to retrieve all documents in a search index in an efficient
manner.
Why? To be frank it just sounds like bad design. Try to do whatever
you're doing by accessing less data.
I’m trying to retrieve all documents in a search index in an efficient
manner. My current approach is something like this:
https://gist.github.com/filleokus/8941f0824bef0fd921a7, but this seems to
take about 50 ms per 100 item batch, which is way too slow. In this
particular index we have a couple
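The cursor-following loop the gist describes can be sketched generically; `fetch_page` here is a stand-in for the real search call (e.g. `index.search` with a `Cursor` in the App Engine Search API), and the fake in-memory page source exists only to show the shape of the loop:

```python
def fetch_all(fetch_page, batch_size=100):
    """Drain an index by following cursors, `batch_size` docs at a time.

    `fetch_page(cursor, limit)` must return (results, next_cursor),
    with next_cursor None once the index is exhausted.
    """
    docs, cursor = [], None
    while True:
        results, cursor = fetch_page(cursor, batch_size)
        docs.extend(results)
        if cursor is None:
            return docs

# Fake page source standing in for the real index.search call:
data = list(range(250))

def fake_page(cursor, limit):
    start = cursor or 0
    chunk = data[start:start + limit]
    nxt = start + limit if start + limit < len(data) else None
    return chunk, nxt

# fetch_all(fake_page) walks three pages (100 + 100 + 50) and
# returns all 250 items in order.
```

At ~50 ms per round trip, each page costs a full request/response cycle, which is why draining thousands of docs this way is slow; the blob approach amortizes that cost into one background job.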
On 1 July 2015 at 13:27, Filip Nilsson filleo...@gmail.com wrote:
I’m trying to retrieve all documents in a search index in an efficient
manner.
Why? To be frank it just sounds like bad design. Try to do whatever
you're doing by accessing less data.
Any suggestions on how to access all items?
Hi Filip, Barry.
Thank you Barry for the great help here :).
Filip, Barry is right: grabbing all 4000 docs is inefficient at best,
and unscalable if you ever grow to more docs.
The blob idea is definitely a good one: it will be quick to retrieve and
show. With a cron job you can regenerate the blob periodically.
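On App Engine, the cron part is a `cron.yaml` entry; the handler URL and schedule below are placeholders for whatever endpoint runs the batch rebuild:

```yaml
cron:
- description: "rebuild the document JSON blob in Cloud Storage"
  url: /tasks/rebuild_blob
  schedule: every 30 minutes
```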