The 1.2 release is scheduled for Q4 (most likely November). Regarding
implementations running on large graphs with Neo4j, there have been
several mentions of that on the list, so you could try searching the
user archives (http://www.mail-archive.com/user@lists.neo4j.org/). For
example:
Thanks.
> Regarding scaling, the 1.0 and 1.1 releases have a limit of 4 billion
> records per store file, so if you need to store 4 billion strings you
> have to make sure every string fits in a single block (each block is
> one record). This limit will be increased to 32 billion or more in the
> 1.2 release.
Any timeline guidance on the 1.2 release?
When opening an existing store, the parameters will be ignored and the
values used at creation time will be used.
> Thanks,
> Rick
> Original Message
> Subject: Re: [Neo4j] Node creation limit
> From: Johan Svensson <jo...@neotechnology.com>
Do I need to pass the same parameters to open it?
Thanks,
Rick
Original Message
Subject: Re: [Neo4j] Node creation limit
From: Johan Svensson <jo...@neotechnology.com>
Date: Tue, June 08, 2010 5:15 am
To: Neo4j user discussions <u...@lists.neo4j.org>
I just added code in trunk so the block size for the string and array
stores can be configured when the store is created. This will be
available in the 1.1 release, but if you want to try it out now, use
1.1-SNAPSHOT and create a new store like this:
Map<String,String> config = new HashMap<String,String>();
config.put( "string_block_size", "60" ); // bytes per string block (example value)
config.put( "array_block_size", "300" ); // bytes per array block (example value)
new EmbeddedGraphDatabase( "path/to/db", config );
Similar issue on my side as well. Test data is OK, but production data
(100 million+ objects, 200 relationships per object and 10 properties
per object, with multi-million queries per day for search and
traversal) would need clear disk sizing calculations due to IOPS and
other hardware limits.
t;pre-load" of much of the graph would help mitigate the disk space and
I/O concerns.
Can you confirm whether or not the in-memory structures use a
fixed-record size model also?
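One way to approximate such a "pre-load" in Neo4j 1.x is to give the
memory-mapped store buffers enough room to hold the hot store files. A
sketch using the standard 1.x mapped-memory configuration keys; the
sizes here are purely illustrative and should be matched to the actual
store file sizes on disk:

    import java.util.HashMap;
    import java.util.Map;
    import org.neo4j.graphdb.GraphDatabaseService;
    import org.neo4j.kernel.EmbeddedGraphDatabase;

    public class PreloadConfig
    {
        public static void main( String[] args )
        {
            Map<String,String> config = new HashMap<String,String>();
            // Size each mmap window roughly like the store file it backs.
            config.put( "neostore.nodestore.db.mapped_memory", "500M" );
            config.put( "neostore.relationshipstore.db.mapped_memory", "4G" );
            config.put( "neostore.propertystore.db.mapped_memory", "1G" );
            config.put( "neostore.propertystore.db.strings.mapped_memory", "1G" );
            GraphDatabaseService graphDb = new EmbeddedGraphDatabase( "path/to/db", config );
            // ... warm-up traversals of the hot part of the graph would go here ...
            graphDb.shutdown();
        }
    }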
Original Message
Subject: Re: [Neo4j] Node creation limit
From: Mattias Persson
> Is there a specific constraint on disk space? Normally disk space isn't
> a problem... it's cheap and there's usually loads of it.
Actually for most of my use cases the disk space has been fine, except for
one data source that surprised me by expanding from less than a gig of
original binary data
2010/6/7 Craig Taverner :
> Seems that the string store is not optimal for the 'common' usage of
> properties for names or labels ...
Seems that the string store is not optimal for the 'common' usage of
properties for names or labels, which are typically 5 to 20 characters long,
leading to about 5x (or more) the space actually needed. By 'names or
labels' I mean things like username, tags, categorizations, product names,
etc.
Hi,
These are the current record sizes in bytes that can be used to
calculate the actual store size:
nodestore: 9
relationshipstore: 33
propertystore: 25
stringstore: 133
arraystore: 133
All properties except strings and arrays will take a single
propertystore record (25 bytes). A string or array property takes a
propertystore record plus one or more records in the string or array
store, one per block (so a node with a single short string property
costs 9 + 25 + 133 = 167 bytes).
That formula is correct regarding nodes and relationships, yes. When
properties come into play another formula would, of course, have to be
applied; it differs depending on property types and the length of
keys/string values. It would be good, though, to have a formula/tool to
calculate that.
2010/6/
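A rough sketch of such a tool, using the record sizes Johan lists above
(node 9, relationship 33, property 25, string block 133 bytes). It
assumes every string value fits in a single block and ignores id files,
index and logical-log overhead:

    public class StoreSizeEstimate
    {
        private static final long NODE = 9, REL = 33, PROP = 25, STRING_BLOCK = 133;

        // Estimated bytes on disk for the primary store files.
        static long bytes( long nodes, long rels, long props, long stringProps )
        {
            return nodes * NODE + rels * REL + props * PROP + stringProps * STRING_BLOCK;
        }

        public static void main( String[] args )
        {
            // The production shape mentioned above: 100M+ objects with
            // 200 relationships and 10 properties (say half of them strings) each.
            long nodes = 100000000L;
            long estimate = bytes( nodes, nodes * 200, nodes * 10, nodes * 5 );
            System.out.printf( "~%.2f TB%n", estimate / 1e12 );
        }
    }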
In that case, what are the ways to estimate storage capacity? A basic
formula of nodes*9 + edges*33 doesn't seem like a practical one.
On Wed, Jun 2, 2010 at 11:26 PM, Mattias Persson
wrote:
> String properties are stored in blocks so even if you have tiny string
> values each property value will occupy a full block
String properties are stored in blocks so even if you have tiny string
values each property value will occupy a full block (30 or 60 bytes,
can someone correct me here?). That's what's taking most of your space,
IMHO.
2010/6/3, Biren Gandhi :
> Here is some content from neostore.propertystore.db.strings
Here is some content from neostore.propertystore.db.strings - another huge
file. What is the maximum number of nodes/relationships that people have
tried with Neo4j so far? Can someone share disk space usage characteristics?
od -N 1000 -x -c neostore.propertystore.db.strings
000 8500 0
There is only 1 property - "n" (to store name of the node) - used as
follows:
Node node = graphDb.createNode(); // inside a transaction
node.setProperty( NAME_KEY, username ); // NAME_KEY is the string "n"
And the values of username are "Node-1", "Node-2" etc.
On Wed, Jun 2, 2010 at 3:14 PM, Mattias Persson
wrote:
> Only 4,4mb out
Only 4.4MB out of those 80MB is consumed by nodes, so you must be storing
some properties somewhere. Would you mind sharing your code so that it
would be easier to get better insight into your problem?
2010/6/2, Biren Gandhi :
> Thanks. Big transactions were indeed problematic. Splitting them down
Thanks. Big transactions were indeed problematic. Splitting them down into
smaller chunks did the trick.
I'm still disappointed by the on-disk size of a minimal node without any
relationships or attributes. For 500K nodes, it is taking 80MB of space (160
bytes/node) and for 1M objects it is consuming
Exactly, the problem is most likely that you try to insert all your
stuff in one transaction. All data for a transaction is kept in memory
until committed so for really big transactions it can fill your entire
heap. Try to group 10k operations or so for big insertions or use the
batch inserter.
Li
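A sketch of that grouping pattern with the 1.x transaction API (the 10k
chunk size and the "n" property key follow the numbers used elsewhere in
this thread):

    import org.neo4j.graphdb.GraphDatabaseService;
    import org.neo4j.graphdb.Node;
    import org.neo4j.graphdb.Transaction;

    public class ChunkedInsert
    {
        // Commit every ~10k operations so uncommitted transaction state
        // never fills the heap.
        public static void insertNodes( GraphDatabaseService graphDb, int count )
        {
            Transaction tx = graphDb.beginTx();
            try
            {
                for ( int i = 0; i < count; i++ )
                {
                    Node node = graphDb.createNode();
                    node.setProperty( "n", "Node-" + i );
                    if ( ( i + 1 ) % 10000 == 0 )
                    {
                        tx.success();
                        tx.finish(); // commit this chunk
                        tx = graphDb.beginTx();
                    }
                }
                tx.success(); // commit the remainder
            }
            finally
            {
                tx.finish();
            }
        }
    }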
On Wed, Jun 2, 2010 at 3:50 AM, Biren Gandhi wrote:
>
> Is there any limit on number of nodes that can be created in a neo4j
> instance? Any other tips?
I created hundreds of millions of nodes without problems, but it was
split into many transactions.
--
Laurent "ker2x" Laborde
Sysadmin & DBA
Correction - disk size 116K is applicable only in failure cases. Here are
the numbers for 100K node inserts (takes up 17MB):
4.0K  active_tx_log
12K   lucene
12K   lucene-fulltext
4.0K  neostore
4.0K  neostore.id
884K  neostore.nodestore.db
4.0K  neostore.nodestore.db.id
2.4M  neos
While trying to perform a create-only stress test for nodes, I'm constantly
getting Out of Memory errors even while running with these params (with the
default config, as no searching/optimizations are being exercised just
yet):
EXTRA_JVM_ARGUMENTS="-d64 -server -Xms256m -Xmx1024m"
Able to create 200