> That's a really fantastic and useful design metric. Can I paraphrase it a bit
> and write it up on the Neo4j blog/my blog?
I'd be honoured.
___
Neo4j mailing list
User@lists.neo4j.org
https://lists.neo4j.org/mailman/listinfo/user
>>But in answering this, I wonder if there are actually two use cases here
Yes, I see the use cases as the design decision points you are forced
to make at various stages as data volumes grow:
1) 0-10s of gigabytes:
Slam in the RAM on a single server and all is plain sailing
2)
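The tiers above could be sketched as a simple lookup from data volume to deployment approach. This is purely illustrative: only tier 1 survives in the message, so the threshold and the description of what lies beyond it are assumptions, not the author's actual list.

```python
def deployment_tier(data_gb: float) -> str:
    """Map a data volume to a deployment approach.

    Hypothetical threshold paraphrasing tier 1 of the email
    ("0-10s of gigabytes: slam in the RAM on a single server");
    everything above that is assumed to need harder decisions.
    """
    if data_gb < 100:  # roughly "0-10s of gigabytes"
        return "single server, whole graph in RAM"
    return "beyond single-server RAM: sharding/cache-routing decisions"
```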
A nice, clear post. The choice of "Router" is obviously key. For the given
routing examples based on user or geo, it should be possible to map a request to
a server. For many other situations it may prove much harder to determine which
server has a warm cache because there is no user and there is
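For the user/geo cases where a routing key does exist, the "Router" idea could be sketched as a sticky hash from request key to server, so the same slice of the graph keeps hitting the same warm cache. The server names and the modulo-hash scheme here are illustrative assumptions, not anything from the post under discussion; a real deployment would more likely use consistent hashing so that adding a server remaps only a fraction of keys.

```python
import hashlib

def route(key: str, servers: list[str]) -> str:
    """Pick a server for a request key (e.g. a user id or geo bucket).

    Deterministic: the same key always lands on the same server, which
    keeps that server's cache warm for that portion of the graph.
    """
    h = int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)
    return servers[h % len(servers)]

servers = ["neo4j-a", "neo4j-b", "neo4j-c"]  # hypothetical cluster
assert route("user:42", servers) == route("user:42", servers)  # sticky
```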
Thanks for taking the time to look over my example, Johan.
I was hoping that the batch inserter's memory costs would not be
directly linear with the volume of data inserted - it sounds like they are?
My assumption was that the indexing service had the comparatively
hard task of random-
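One common reason a batch load's memory grows linearly with input is that the loader keeps an in-memory map from external keys to internal node ids for the entire run, so that later edges can be wired to earlier nodes. The sketch below illustrates that pattern generically; it is an assumption about why the costs scale this way, not the Neo4j BatchInserter's actual internals.

```python
def batch_load(edges):
    """Generic batch-load sketch: wire up edges by node key.

    The key -> node-id map must cover every distinct node seen so far
    and lives in memory for the whole load, so memory use grows roughly
    linearly with the number of distinct nodes inserted.
    """
    node_ids = {}   # grows with the number of distinct nodes
    next_id = 0
    out_edges = []
    for src, dst in edges:
        for key in (src, dst):
            if key not in node_ids:
                node_ids[key] = next_id
                next_id += 1
        out_edges.append((node_ids[src], node_ids[dst]))
    return node_ids, out_edges
```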
Hi Pablo,
>>Regarding the boiled down version of my code I guess I could prepare it but
>>it's quite a big project
Here's a boiled-down batch load demo I did earlier based on public
Wikipedia data: http://code.google.com/p/graphdb-load-tester/
It includes what I believe to be a faster Lucene bat