Re: [Neo4j] Meta Model component and Neo4J REST interface

2011-04-20 Thread Mattias Persson
Hi Davi,

There's currently nothing exposing meta model functionality in the REST API.
You can, however, write your own server plugins; see
http://docs.neo4j.org/chunked/stable/server-plugins.html for more
information. There you can write your own logic for using and
manipulating a meta model.
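
For illustration, here is a rough sketch of what such a plugin could look
like, following the pattern from that page (the class, plugin and operation
names are made up, and the body is just a placeholder for your own meta
model logic):

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.server.plugins.Description;
import org.neo4j.server.plugins.Name;
import org.neo4j.server.plugins.PluginTarget;
import org.neo4j.server.plugins.ServerPlugin;
import org.neo4j.server.plugins.Source;

@Description( "Example extension exposing custom (e.g. meta model) operations" )
public class MetaModelPlugin extends ServerPlugin
{
    @Name( "get_all_nodes" )
    @Description( "Return all nodes in the database" )
    @PluginTarget( GraphDatabaseService.class )
    public Iterable<Node> getAllNodes( @Source GraphDatabaseService graphDb )
    {
        // Replace this body with logic that reads or manipulates your meta model.
        return graphDb.getAllNodes();
    }
}

The class is then registered the usual way through a
META-INF/services/org.neo4j.server.plugins.ServerPlugin entry, as described
on that page, and the new operation shows up in the REST API.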

The meta model component isn't part of the official release; perhaps you're
using an older version of Neo4j?

2011/4/19 Davi Bandeira :
> Hello group,
>
> I've been using Neo4j for quite a while now, but I've reached a point where I
> need some help: I have an ontology and I would like to "model" my Neo4j nodes
> (the ones I will create) based on properties, classes, etc. from my ontology.
> I'm also using the Jena API for this purpose (to get information from my OWL
> file) to create instances, set property values, etc.
>
> My question is: is there any way to use the meta model component with
> Neo4j REST? On the snapshot site, there is an example using the meta
> model component with embedded Neo4j, with all the transaction stuff.
>
> Thank you in advance
> ___
> Neo4j mailing list
> User@lists.neo4j.org
> https://lists.neo4j.org/mailman/listinfo/user
>



-- 
Mattias Persson, [matt...@neotechnology.com]
Hacker, Neo Technology
www.neotechnology.com


Re: [Neo4j] REST results pagination

2011-04-20 Thread Michael Hunger
But wouldn't that really custom operation be more easily and much faster done
as a server plugin?

Otherwise all the data would have to be serialized to JSON and deserialized
again, and no streaming would be possible.

From a server extension you could even stream and gzip that data with ease.
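
Something along these lines, for example (just a rough sketch of an unmanaged
JAX-RS extension; the /export path, class name and CSV output are made up):

import java.io.IOException;
import java.io.OutputStream;
import java.io.PrintWriter;
import java.util.zip.GZIPOutputStream;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.StreamingOutput;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;

@Path( "/export" )
public class CsvExportResource
{
    private final GraphDatabaseService database;

    public CsvExportResource( @Context GraphDatabaseService database )
    {
        this.database = database;
    }

    @GET
    @Produces( "text/csv" )
    public Response exportNodes()
    {
        StreamingOutput stream = new StreamingOutput()
        {
            public void write( OutputStream output ) throws IOException
            {
                // Write rows while traversing, instead of building the whole
                // result in memory and serializing it to JSON at the end.
                PrintWriter writer = new PrintWriter( new GZIPOutputStream( output ) );
                writer.println( "nodeId" );
                for ( Node node : database.getAllNodes() )
                {
                    writer.println( node.getId() );
                }
                writer.close();
            }
        };
        return Response.ok( stream ).header( "Content-Encoding", "gzip" ).build();
    }
}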

Cheers

Michael

Am 20.04.2011 um 08:41 schrieb Tim McNamara:

> Data export, e.g. dumping everything as CSV, DOT or RDF?
> 
> On 20 April 2011 18:33, Michael Hunger 
> wrote:
> 
>> Hi Javier,
>> 
>> what would you need that for? I'm interested in the usecase.
>> 
>> Cheers
>> 
>> Michael
>> 
>> Am 20.04.2011 um 06:17 schrieb Javier de la Rosa:
>> 
>>> On Tue, Apr 19, 2011 at 10:25, Jim Webber  wrote:
 I've just checked and that's in our "list of stuff we really should do
>> because it annoys us that it's not there."
 No promises, but we do intend to work through at least some of that list
>> for the 1.4 releases.
>>> 
>>> If this finally is developed, it will possible to request for all
>>> nodes and all relationships in some URL?
>>> 
 
 Jim
 ___
 Neo4j mailing list
 User@lists.neo4j.org
 https://lists.neo4j.org/mailman/listinfo/user
>>> 
>>> 
>>> 
>>> --
>>> Javier de la Rosa
>>> http://versae.es
>>> ___
>>> Neo4j mailing list
>>> User@lists.neo4j.org
>>> https://lists.neo4j.org/mailman/listinfo/user
>> 
>> ___
>> Neo4j mailing list
>> User@lists.neo4j.org
>> https://lists.neo4j.org/mailman/listinfo/user
>> 
> ___
> Neo4j mailing list
> User@lists.neo4j.org
> https://lists.neo4j.org/mailman/listinfo/user



Re: [Neo4j] REST results pagination

2011-04-20 Thread Akhil
Won't dumping a graph database in a tabular format create a huge file,
even if the number of nodes is small, for a highly interconnected
graph?
On 4/20/2011 2:41 AM, Tim McNamara wrote:
> Data export, e.g. dumping everything as CSV, DOT or RDF?
>
> On 20 April 2011 18:33, Michael Hungerwrote:
>
>> Hi Javier,
>>
>> what would you need that for? I'm interested in the usecase.
>>
>> Cheers
>>
>> Michael
>>
>> Am 20.04.2011 um 06:17 schrieb Javier de la Rosa:
>>
>>> On Tue, Apr 19, 2011 at 10:25, Jim Webber  wrote:
 I've just checked and that's in our "list of stuff we really should do
>> because it annoys us that it's not there."
 No promises, but we do intend to work through at least some of that list
>> for the 1.4 releases.
>>> If this finally is developed, it will possible to request for all
>>> nodes and all relationships in some URL?
>>>
 Jim
 ___
 Neo4j mailing list
 User@lists.neo4j.org
 https://lists.neo4j.org/mailman/listinfo/user
>>>
>>>
>>> --
>>> Javier de la Rosa
>>> http://versae.es
>>> ___
>>> Neo4j mailing list
>>> User@lists.neo4j.org
>>> https://lists.neo4j.org/mailman/listinfo/user
>> ___
>> Neo4j mailing list
>> User@lists.neo4j.org
>> https://lists.neo4j.org/mailman/listinfo/user
>>
> ___
> Neo4j mailing list
> User@lists.neo4j.org
> https://lists.neo4j.org/mailman/listinfo/user



Re: [Neo4j] REST results pagination

2011-04-20 Thread Jacob Hansson
On Tue, Apr 19, 2011 at 10:17 PM, Michael DeHaan
wrote:

> On Tue, Apr 19, 2011 at 10:58 AM, Jim Webber 
> wrote:
> >>> I'd like to propose that we put this functionality into the plugin (
> https://github.com/skanjila/gremlin-translation-plugin) that Peter and I
> are currently working on, thoughts?
> >
> > I'm thinking that, if we do it, it should be handled through content
> negotiation. That is if you ask for application/atom then you get paged
> lists of results. I don't necessarily think that's a plugin, it's more
> likely part of the representation logic in server itself.
>
> This is something I've been wondering about as I may have the need to
> feed very large graphs into the system and am wondering how the REST
> API will hold up compared to the native interface.
>
> What happens if the result of an index query (or traversal, whatever)
> legitimately needs to return 100k results?
>
> Wouldn't that be a bit large for one request?   If anything, it's a
> lot of JSON to decode at once.
>
>
Yeah, we can't do this right now, and implementing it is harder than it
seems at first glance, since we first need to implement sorting of results,
otherwise the paged result will be useless. Like Jim said though, this is
another one of those *must be done* features.


> Feeds make sense for things that are feed-like, but do atom feeds
> really make sense for results of very dynamic queries that don't get
> subscribed to?
> Or, related question, is there a point where the result sets of
> operations get so large that things start to break down?   What do
> people find this to generally be?
>

I'm sure there are some awesome content types out there that we can look at
that will fit our uses. I don't feel confident saying whether Atom is a good
choice; I've never worked with it.

The point where this breaks down I'm gonna guess is in server-side
serialization, because we currently don't stream the serialized data, but
build it up in memory and ship it off when it's done. I'd say you'll run out
of memory after 1 nodes or so on a small server, which I think
underlines how important this is to fix.


>
> Maybe it's not an issue, but pointers to any problems REST API usage
> has with large data sets (and solutions?) would be welcome.
>

Not aware of anyone bumping into these limits yet, but I'm sure we'll start
hearing about it.. The only current solution I can think of is a server
plugin that emulates this, but it would have to sort the result, and I'm
afraid that it will be hard (probably not impossible, but hard) to implement
that in a memory-efficient way that far away from the kernel. You may just
end up moving the OutOfMemoryExceptions to the plugin instead of the
serialization system.


>
> --Michael
> ___
> Neo4j mailing list
> User@lists.neo4j.org
> https://lists.neo4j.org/mailman/listinfo/user
>



-- 
Jacob Hansson
Phone: +46 (0) 763503395
Twitter: @jakewins


Re: [Neo4j] REST results pagination

2011-04-20 Thread Craig Taverner
I think sorting would need to be optional, since it is likely to be a
performance and memory hog on large traversals. I think one of the key
benefits of the traversal framework in the Embedded API is being able to
traverse and 'stream' a very large graph without occupying much memory. If
this can be achieved in the REST API (through pagination), that is a very
good thing. I assume the main challenge is being able to freeze a traverser
and keep it on hold between client requests for the next page. Perhaps you
have already solved that bit?

In my opinion, I would code the sorting as a characteristic of the graph
itself, in order to avoid having to sort in the server (and incur the
memory/performance hit). So that means I would use a domain-specific
solution to sorting. Of course, generic sorting is nice also, but make it
optional.

On Wed, Apr 20, 2011 at 11:19 AM, Jacob Hansson wrote:

> On Tue, Apr 19, 2011 at 10:17 PM, Michael DeHaan
> wrote:
>
> > On Tue, Apr 19, 2011 at 10:58 AM, Jim Webber 
> > wrote:
> > >>> I'd like to propose that we put this functionality into the plugin (
> > https://github.com/skanjila/gremlin-translation-plugin) that Peter and I
> > are currently working on, thoughts?
> > >
> > > I'm thinking that, if we do it, it should be handled through content
> > negotiation. That is if you ask for application/atom then you get paged
> > lists of results. I don't necessarily think that's a plugin, it's more
> > likely part of the representation logic in server itself.
> >
> > This is something I've been wondering about as I may have the need to
> > feed very large graphs into the system and am wondering how the REST
> > API will hold up compared to the native interface.
> >
> > What happens if the result of an index query (or traversal, whatever)
> > legitimately needs to return 100k results?
> >
> > Wouldn't that be a bit large for one request?   If anything, it's a
> > lot of JSON to decode at once.
> >
> >
> Yeah, we can't do this right now, and implementing it is harder than it
> seems at first glance, since we first need to implement sorting of results,
> otherwise the paged result will be useless. Like Jim said though, this is
> another one of those *must be done* features.
>
>
> > Feeds make sense for things that are feed-like, but do atom feeds
> > really make sense for results of very dynamic queries that don't get
> > subscribed to?
> > Or, related question, is there a point where the result sets of
> > operations get so large that things start to break down?   What do
> > people find this to generally be?
> >
>
> I'm sure there are some awesome content types out there that we can look at
> that will fit our uses, I don't feel confident to say if Atom is a good
> choice, I've never worked with it..
>
> The point where this breaks down I'm gonna guess is in server-side
> serialization, because we currently don't stream the serialized data, but
> build it up in memory and ship it off when it's done. I'd say you'll run
> out
> of memory after 1 nodes or so on a small server, which I think
> underlines how important this is to fix.
>
>
> >
> > Maybe it's not an issue, but pointers to any problems REST API usage
> > has with large data sets (and solutions?) would be welcome.
> >
>
> Not aware of anyone bumping into these limits yet, but I'm sure we'll start
> hearing about it.. The only current solution I can think of is a server
> plugin that emulates this, but it would have to sort the result, and I'm
> afraid that it will be hard (probably not impossible, but hard) to
> implement
> that in a memory-efficient way that far away from the kernel. You may just
> end up moving the OutOfMemeoryExceptions' to the plugin instead of the
> serialization system.
>
>
> >
> > --Michael
> > ___
> > Neo4j mailing list
> > User@lists.neo4j.org
> > https://lists.neo4j.org/mailman/listinfo/user
> >
>
>
>
> --
> Jacob Hansson
> Phone: +46 (0) 763503395
> Twitter: @jakewins
> ___
> Neo4j mailing list
> User@lists.neo4j.org
> https://lists.neo4j.org/mailman/listinfo/user
>


Re: [Neo4j] Starting neo4j Server doesn't return to prompt

2011-04-20 Thread Chris Gioran
Hi Stephan,

please see inline

On Wed, Apr 20, 2011 at 12:58 AM, Stephan Hagemann
 wrote:
> Hi Mattias, hi List,
>
> I am running this on Linux... and I solved the problem (with a workaround).
>
> What confused me the whole time was that there were three files for the same
> purpose in the conf directory: log4j.properties, logback.xml,
> logging.properties (the first two aren't used - I am using 1.3 advanced -
> could they go away?). To make matters worse, even neo4j-wrapper.conf has
> some logging related settings. And then there are 'settings' in the neo4j
> executable about whether the startup should wait for success messages from
> the server.

The 3 files that pertain to the logging frameworks have not been
removed since there are lingering libraries that could use them -
log4j is a hard requirement for ZooKeeper and some dependencies of the
wrapper, for instance. This doesn't mean that they are necessary; it
means we are in the process of fixing this.

The neo4j-wrapper.conf does indeed contain a logging settings section,
since the wrapper prefers to do its logging setup on its own.
The reason things seem complicated is that YAJSW pipes stdout/stderr
of the wrapped process into itself and everything that is received
from there is printed to the console. This means that what actually
controls what is logged from the Neo4j server (in the latest
snapshots) is the logging.properties file, in the usual j.u.l. manner -
the neo4j-wrapper.conf logging settings control the logging for the
wrapper *only*, with the exception of the log statement format.

> I am pretty sure this line from logging.properties wasn't there in 1.3.M5,
> because I didn't have a whole lot of console logging before:
>  handlers=java.util.logging.FileHandler, java.util.logging.ConsoleHandler

This is because we want, by default, some interesting messages, such
as the server URI and the webadmin location, to be printed to the
console during startup. Essentially, we have set the console handler
to print statements of INFO level or greater. Removing the Console
handler will completely stop the logging from neo4j-server, but the
wrapper will continue outputting its statements.

> When I switched it to
>  handlers=java.util.logging.FileHandler
> most of the logging output in the console went away, but the process for some
> reason still held on to the console. E.g., when I do a 'neo4j dump' the
> result is shown in the original console. E.g., when I do a 'neo4j dump' the
> result is shown in the original console. I have not yet been able to find
> out where I can make the server fully detach.
>
> The workaround for me is to explicitly pipe the neo4j command's output to a
> file.

This is actually a common behavioral pattern in all *nix applications
- holding on to the pts it was started at. It can be annoying though,
so at some point it may have to be fixed.
Finally, the same applies to "not getting back to the prompt". The
shell script returns as soon as the neo4j-server process is forked,
but there are still some log statements to be written. This clutters up
the screen, as output from any background process does, making it seem
like the prompt isn't there. If you just press enter, you will get it
back. As a workaround, you can increase the WAIT_AFTER_STARTUP
variable in the neo4j script (measured in seconds, set to 5 in the
latest SNAPSHOT), making sure this way that there remains no more
output that could clutter up your console.

> Cheers
> Stephan

Hope the above clarifies things. Again, check out the latest builds;
most of this stuff is fixed/made sane.

cheers,
CG

>
> PS: here is my logging.properties:
>
> # Global logging properties.
> # --
> # The set of handlers to be loaded upon startup.
> # Comma-separated list of class names.
> # (? LogManager docs say no comma here, but JDK example has comma.)
> handlers=java.util.logging.FileHandler
>
> # Default global logging level.
> # Loggers and Handlers may override this level
> .level=INFO
>
> # Loggers
> # --
> # Loggers are usually attached to packages.
> # Here, the level for each package is specified.
> # The global level is used by default, so levels
> # specified here simply act as an override.
> org.neo4j.server.level=INFO
>
> # Handlers
> # -
>
> # --- ConsoleHandler ---
> # Override of global logging level
> java.util.logging.ConsoleHandler.level=INFO
> java.util.logging.ConsoleHandler.formatter=java.util.logging.SimpleFormatter
> java.util.logging.ConsoleHandler.filter=org.neo4j.server.logging.NeoLogFilter
>
> # --- FileHandler ---
> # Override of global logging level
> java.util.logging.FileHandler.level=ALL
>
> # Naming style for the output file:
> # (The output file is placed in the directory
> # defined by the "user.home" System property.)
> java.util.logging.FileHandler.pattern=data/log/neo4j.%u.log
>
> # Limiting size of output file in bytes:
> java.util.logging.FileHandler.limit=5
>
> # Number of output files to cycle through, by appendin

Re: [Neo4j] Strange performance difference on different machines

2011-04-20 Thread Tobias Ivarsson
I agree that 16 transactions / second is much slower than what I usually see
on Linux, even with a slow file system configuration. But I still believe
this is either the disk or the filesystem being slow. Could you please go through
the file system benchmarking outlined on this wiki page:
http://wiki.neo4j.org/content/Linux_Performance_Guide

Cheers,
Tobias
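
(As background for the quoted exchange below: each tiny transaction pays a
full transaction-log flush, so grouping many writes into one transaction
amortizes that cost. A rough embedded-API sketch, with a made-up store path
and counts:)

import org.neo4j.graphdb.DynamicRelationshipType;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Transaction;
import org.neo4j.kernel.EmbeddedGraphDatabase;

public class BatchedWrites
{
    public static void main( String[] args )
    {
        GraphDatabaseService db = new EmbeddedGraphDatabase( "target/batch-demo-db" );
        Transaction tx = db.beginTx();
        try
        {
            // One transaction, and hence one log flush, for many create
            // operations, instead of one flush per node pair.
            for ( int i = 0; i < 1000; i++ )
            {
                Node a = db.createNode();
                Node b = db.createNode();
                a.createRelationshipTo( b, DynamicRelationshipType.withName( "CONNECTED" ) );
            }
            tx.success();
        }
        finally
        {
            tx.finish();
        }
        db.shutdown();
    }
}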

On Tue, Apr 19, 2011 at 10:11 PM, Bob Hutchison wrote:

> Hi Tobias,
>
> On 2011-04-19, at 1:48 AM, Tobias Ivarsson wrote:
>
> > Hi Bob,
> >
> > What happens here is that you perform a tiny operation in each
> transaction,
> > so what you are really testing here is how fast your file system can
> flush,
> > because with such tiny transactions all of the time is going to be spent
> in
> > transactional overhead (i.e. flushing transaction logs to the disk).
> >
> > The reason you see such large differences between Mac OS X and Linux is
> > because Mac OS X cheats. Flushing a file (fdatasync) on Mac does pretty
> much
> > nothing. The only thing Mac OS X guarantees is that it will write the
> data
> > that you just flushed before it writes the next data block you flush, so
> > called "ordered writes". This means that you could potentially get
> data-loss
> > on hard failure, but never in a way that makes your data internally
> > inconsistent.
>
> Okay, that's makes some sense. Thanks for the information.
>
> >
> > So to give a short answer to your questions:
> > 1) The linux number is reasonable, Mac OS X cheats.
> > 2) What you are testing is the write speed of your disk for writing small
> > chunks of data.
>
> So you're thinking that 16 or 17 writes is what should be expected?
>
> Cheers,
> Bob
>
> >
> > Cheers,
> > Tobias
> >
> > On Mon, Apr 18, 2011 at 10:57 PM, Bob Hutchison <
> hutch-li...@recursive.ca>wrote:
> >
> >> Hi,
> >>
> >> Using Neo4j 1.3 and the Borneo (Clojure) wrapper I'm getting radically
> >> different performance numbers with identical test code.
> >>
> >> The test is a simple-minded: create two nodes and a relation between
> them.
> >> No properties, no indexes, all nodes and relations are different.
> >>
> >> On OS X, it takes about 50s to perform that operation 50,000 times, <
> 0.8s
> >> to do it 500 times. It uses roughly 30-40% of one core to do this.
> >>
> >> On linux it takes about 30s to perform that operation 500 times. The CPU
> >> usage is negligible (really negligible... almost none).
> >>
> >> I cannot explain the difference in behaviour.
> >>
> >> I have two questions:
> >>
> >> 1) is either of these a reasonable number? I hoping the OS X numbers are
> >> not too fast.
> >>
> >> 2) any ideas as to what might be the cause of this?
> >>
> >> The Computers are comparable. The OS X is a 2.8 GHz i7, the linux box is
> a
> >> 3.something GHz Xeon (I don't remember the details).
> >>
> >> Thanks in advance for any help,
> >> Bob
> >>
> >> ___
> >> Neo4j mailing list
> >> User@lists.neo4j.org
> >> https://lists.neo4j.org/mailman/listinfo/user
> >>
> >
> >
> >
> > --
> > Tobias Ivarsson 
> > Hacker, Neo Technology
> > www.neotechnology.com
> > Cellphone: +46 706 534857
> > ___
> > Neo4j mailing list
> > User@lists.neo4j.org
> > https://lists.neo4j.org/mailman/listinfo/user
>
> 
> Bob Hutchison
> Recursive Design Inc.
> http://www.recursive.ca/
> weblog: http://xampl.com/so
>
>
>
>
> ___
> Neo4j mailing list
> User@lists.neo4j.org
> https://lists.neo4j.org/mailman/listinfo/user
>



-- 
Tobias Ivarsson 
Hacker, Neo Technology
www.neotechnology.com
Cellphone: +46 706 534857


Re: [Neo4j] Strange performance difference on different machines

2011-04-20 Thread Tobias Ivarsson
Sorry I got a bit distracted when writing this. I should have added that I
then want you to send the results of running that benchmark to me so that I
can further analyze what the cause of these slow writes might be.

Thank you,
Tobias

On Wed, Apr 20, 2011 at 12:26 PM, Tobias Ivarsson <
tobias.ivars...@neotechnology.com> wrote:

> I agree that 16 transactions / second is much slower than what I usually
> see on linux, even with a slow file system configuration. But I still
> believe this is either disk or filesystem being slow. Could you please go
> through the file system benchmarking outlined on this wiki page:
> http://wiki.neo4j.org/content/Linux_Performance_Guide
>
> Cheers,
> Tobias
>
>
> On Tue, Apr 19, 2011 at 10:11 PM, Bob Hutchison 
> wrote:
>
>> Hi Tobias,
>>
>> On 2011-04-19, at 1:48 AM, Tobias Ivarsson wrote:
>>
>> > Hi Bob,
>> >
>> > What happens here is that you perform a tiny operation in each
>> transaction,
>> > so what you are really testing here is how fast your file system can
>> flush,
>> > because with such tiny transactions all of the time is going to be spent
>> in
>> > transactional overhead (i.e. flushing transaction logs to the disk).
>> >
>> > The reason you see such large differences between Mac OS X and Linux is
>> > because Mac OS X cheats. Flushing a file (fdatasync) on Mac does pretty
>> much
>> > nothing. The only thing Mac OS X guarantees is that it will write the
>> data
>> > that you just flushed before it writes the next data block you flush, so
>> > called "ordered writes". This means that you could potentially get
>> data-loss
>> > on hard failure, but never in a way that makes your data internally
>> > inconsistent.
>>
>> Okay, that's makes some sense. Thanks for the information.
>>
>> >
>> > So to give a short answer to your questions:
>> > 1) The linux number is reasonable, Mac OS X cheats.
>> > 2) What you are testing is the write speed of your disk for writing
>> small
>> > chunks of data.
>>
>> So you're thinking that 16 or 17 writes is what should be expected?
>>
>> Cheers,
>> Bob
>>
>> >
>> > Cheers,
>> > Tobias
>> >
>> > On Mon, Apr 18, 2011 at 10:57 PM, Bob Hutchison <
>> hutch-li...@recursive.ca>wrote:
>> >
>> >> Hi,
>> >>
>> >> Using Neo4j 1.3 and the Borneo (Clojure) wrapper I'm getting radically
>> >> different performance numbers with identical test code.
>> >>
>> >> The test is a simple-minded: create two nodes and a relation between
>> them.
>> >> No properties, no indexes, all nodes and relations are different.
>> >>
>> >> On OS X, it takes about 50s to perform that operation 50,000 times, <
>> 0.8s
>> >> to do it 500 times. It uses roughly 30-40% of one core to do this.
>> >>
>> >> On linux it takes about 30s to perform that operation 500 times. The
>> CPU
>> >> usage is negligible (really negligible... almost none).
>> >>
>> >> I cannot explain the difference in behaviour.
>> >>
>> >> I have two questions:
>> >>
>> >> 1) is either of these a reasonable number? I hoping the OS X numbers
>> are
>> >> not too fast.
>> >>
>> >> 2) any ideas as to what might be the cause of this?
>> >>
>> >> The Computers are comparable. The OS X is a 2.8 GHz i7, the linux box
>> is a
>> >> 3.something GHz Xeon (I don't remember the details).
>> >>
>> >> Thanks in advance for any help,
>> >> Bob
>> >>
>> >> ___
>> >> Neo4j mailing list
>> >> User@lists.neo4j.org
>> >> https://lists.neo4j.org/mailman/listinfo/user
>> >>
>> >
>> >
>> >
>> > --
>> > Tobias Ivarsson 
>> > Hacker, Neo Technology
>> > www.neotechnology.com
>> > Cellphone: +46 706 534857
>> > ___
>> > Neo4j mailing list
>> > User@lists.neo4j.org
>> > https://lists.neo4j.org/mailman/listinfo/user
>>
>> 
>> Bob Hutchison
>> Recursive Design Inc.
>> http://www.recursive.ca/
>> weblog: http://xampl.com/so
>>
>>
>>
>>
>> ___
>> Neo4j mailing list
>> User@lists.neo4j.org
>> https://lists.neo4j.org/mailman/listinfo/user
>>
>
>
>
> --
> Tobias Ivarsson 
> Hacker, Neo Technology
> www.neotechnology.com
> Cellphone: +46 706 534857
>



-- 
Tobias Ivarsson 
Hacker, Neo Technology
www.neotechnology.com
Cellphone: +46 706 534857


Re: [Neo4j] REST results pagination

2011-04-20 Thread Jacob Hansson
On Wed, Apr 20, 2011 at 11:25 AM, Craig Taverner  wrote:

> I think sorting would need to be optional, since it is likely to be a
> performance and memory hug on large traversals. I think one of the key
> benefits of the traversal framework in the Embedded API is being able to
> traverse and 'stream' a very large graph without occupying much memory. If
> this can be achieved in the REST API (through pagination), that is a very
> good thing. I assume the main challenge is being able to freeze a traverser
> and keep it on hold between client requests for the next page. Perhaps you
> have already solved that bit?
>

While I agree with you that the ability to effectively stream the results of
a traversal is a very useful thing, I don't like the persisted traverser
approach, for several reasons. I'm sorry if my tone below is a bit harsh, I
don't mean it that way, I simply want to make a strong case for why I think
the hard way is the right way in this case.

First, the only good restful approach I can think of for doing persisted
traversals would be to "create" a traversal resource (since it is an object
that keeps persistent state), and get back an id to refer to it. Subsequent
calls to paged results would then be to that traversal resource, updating
its state and getting results back. Assuming this is the correct way to
implement this, it comes with a lot of questions. Should there be a timeout
for these resources, or is the user responsible for removing them from
memory? What happens when the server crashes and the client can't find the
traversal resources it has ids for?

If we somehow solve that or find some better approach, we end up with an API
where a client can get paged results, but two clients performing the same
traversal on the same data may get back the same result in different order
(see my comments on sorting based on expected traversal behaviour below).
This means that the API is really only useful if you actually want to get
the entire result back. If that was the problem we wanted to solve, a
streaming solution is a much easier and faster approach than a paging
solution.

Second, being able to iterate over the entire result set is only half of the
use cases we are looking to solve. The other half are the ones I mentioned
examples of (the blog case, presenting lists of things to users and so on),
and those are not solved by this. Forcing users of our database to pull out
all their data over the wire and sort the whole thing, only to keep the
first 10 items, for each user that lands on their frontpage, is not ok.

Third, and most importantly to me, using this case to put more pressure on
ourselves to implement real sorting is a really good thing. Sorting is
something that *really* should be provided by us, anyone who has used a
modern database expects this to be our problem to solve. We have a really
good starting point for optimizing sorting algorithms, sitting as we are
inside the kernel with our caches and indexes :)


>
> In my opinion, I would code the sorting as a characteristic of the graph
> itself, in order to avoid having to sort in the server (and incur the
> memory/performance hit). So that means I would use a domain-specific
> solution to sorting. Of course, generic sorting is nice also, but make it
> optional.
>

I agree sorting should be an opt-in feature. Putting meta-data like sorting
order and similar things inside the graph I think is a matter of personal
preference, and for sure has its place as a useful optimization. I do,
however, think that the "official" approach to sorting needs to be based on
concepts familiar from other databases - define your query, and define how
you want the result sorted. If indexes are available the database can use
them to optimize the sorting, otherwise it will suck, but at least we're
doing what the user wants us to do. All lessons learned in YesSQL databases
(see what I did there?) should not be unlearned :)

Also, the approach of sorting via the traversal itself assumes knowledge of
which order the traverser will move through the graph, and that is not
necessarily something that will be the same in later releases. Tobias was
talking about cache-first traversals as an addition or even a replacement to
depth/breadth first ones, a major optimization we cannot do if we encourage
people to sort "inside" the graph.

/Jake


>
> On Wed, Apr 20, 2011 at 11:19 AM, Jacob Hansson  >wrote:
>
> > On Tue, Apr 19, 2011 at 10:17 PM, Michael DeHaan
> > wrote:
> >
> > > On Tue, Apr 19, 2011 at 10:58 AM, Jim Webber 
> > > wrote:
> > > >>> I'd like to propose that we put this functionality into the plugin
> (
> > > https://github.com/skanjila/gremlin-translation-plugin) that Peter and
> I
> > > are currently working on, thoughts?
> > > >
> > > > I'm thinking that, if we do it, it should be handled through content
> > > negotiation. That is if you ask for application/atom then you get paged
> > > lists of results. I don't necessarily think that's a plugin, it's 

Re: [Neo4j] REST results pagination

2011-04-20 Thread Javier de la Rosa
Wow, had I known the number of replies, I would have sent the e-mail much earlier ;)

Sorting is a very cool feature. I didn't know the hardcore
implications of pagination. The only thing I want is to avoid the
overhead of sending thousands of nodes through HTTP in JSON. Actually,
a workaround that splits results (using offset and limit) in the
server, without really cutting them, would be enough for me - sending
in JSON only a limited number of results. If this feature could be in
the core of Neo4j, perfect :-)


On Wed, Apr 20, 2011 at 08:01, Jacob Hansson  wrote:
> On Wed, Apr 20, 2011 at 11:25 AM, Craig Taverner  wrote:
>
>> I think sorting would need to be optional, since it is likely to be a
>> performance and memory hug on large traversals. I think one of the key
>> benefits of the traversal framework in the Embedded API is being able to
>> traverse and 'stream' a very large graph without occupying much memory. If
>> this can be achieved in the REST API (through pagination), that is a very
>> good thing. I assume the main challenge is being able to freeze a traverser
>> and keep it on hold between client requests for the next page. Perhaps you
>> have already solved that bit?
>>
>
> While I agree with you that the ability to effectively stream the results of
> a traversal is a very useful thing, I don't like the persisted traverser
> approach, for several reasons. I'm sorry if my tone below is a bit harsh, I
> don't mean it that way, I simply want to make a strong case for why I think
> the hard way is the right way in this case.
>
> First, the only good restful approach I can think of for doing persisted
> traversals would be to "create" a traversal resource (since it is an object
> that keeps persistent state), and get back an id to refer to it. Subsequent
> calls to paged results would then be to that traversal resource, updating
> its state and getting results back. Assuming this is the correct way to
> implement this, it comes with a lot of questions. Should there be a timeout
> for these resources, or is the user responsible for removing them from
> memory? What happens when the server crashes and the client can't find the
> traversal resources it has ids for?
>
> If we somehow solve that or find some better approach, we end up with an API
> where a client can get paged results, but two clients performing the same
> traversal on the same data may get back the same result in different order
> (see my comments on sorting based on expected traversal behaviour below).
> This means that the API is really only useful if you actually want to get
> the entire result back. If that was the problem we wanted to solve, a
> streaming solution is a much easier and faster approach than a paging
> solution.
>
> Second, being able to iterate over the entire result set is only half of the
> use cases we are looking to solve. The other half are the ones I mentioned
> examples of (the blog case, presenting lists of things to users and so on),
> and those are not solved by this. Forcing users of our database to pull out
> all their data over the wire and sort the whole thing, only to keep the
> first 10 items, for each user that lands on their frontpage, is not ok.
>
> Third, and most importantly to me, using this case to put more pressure on
> ourselves to implement real sorting is a really good thing. Sorting is
> something that *really* should be provided by us, anyone who has used a
> modern database expects this to be our problem to solve. We have a really
> good starting point for optimizing sorting algorithms, sitting as we are
> inside the kernel with our caches and indexes :)
>
>
>>
>> In my opinion, I would code the sorting as a characteristic of the graph
>> itself, in order to avoid having to sort in the server (and incur the
>> memory/performance hit). So that means I would use a domain-specific
>> solution to sorting. Of course, generic sorting is nice also, but make it
>> optional.
>>
>
> I agree sorting should be an opt-in feature. Putting meta-data like sorting
> order and similar things inside the graph I think is a matter of personal
> preference, and for sure has its place as a useful optimization. I do,
> however, think that the "official" approach to sorting needs to be based on
> concepts familiar from other databases - define your query, and define how
> you want the result sorted. If indexes are available the database can use
> them to optimize the sorting, otherwise it will suck, but at least we're
> doing what the user wants us to do. All lessons learned in YesSQL databases
> (see what I did there?) should not be unlearned :)
>
> Also, the approach of sorting via the traversal itself assumes knowledge of
> which order the traverser will move through the graph, and that is not
> necessarily something that will be the same in later releases. Tobias was
> talking about cache-first traversals as an addition or even a replacement to
> depth/breadth first ones, a major optimization we cannot do i

Re: [Neo4j] REST results pagination

2011-04-20 Thread Javier de la Rosa
Here is my motivation for this request. In my ideal world, everything
in the Neo4j REST server that returns a list should be something like a
RequestList or QuerySet, supporting pagination and even filtering
with lookups, so I could do things like the following (sorry for the Python
syntax, I'm always thinking of the Python REST client):

>>> gdb.nodes.all()[2:5]
# Perform the query on the server to get the nodes between the 2nd and
# 5th position; we assume Neo4j always returns ordered results in the same
# way.

>>> gdb.nodes.filter(name__contains="neo")[:10]
# Returns only nodes with a property called name which contains
# "neo", and returns only the first 10.

This is important for the integration of the Neo4j Python Rest Client
in Django, because I'm currently developing an application with lazy
and user-defined schemas on top of Django and Neo4j. The listing of
nodes and relationships is a requirement for me, so the pagination is
a must in my application. Performing this in the application layer
instead of on the Neo4j server side wastes a lot of time sending information
via REST.


On Wed, Apr 20, 2011 at 03:43, Michael Hunger
 wrote:
> But wouldn't that really custom operation not more easily and much faster 
> done as a server plugin?
>
> Otherwise all the data would have to be serialized to json and deserialized 
> again and no streaming possible.
>
> From a server extension you could even stream and gzip that data with ease.
>
> Cheers
>
> Michael
>
> Am 20.04.2011 um 08:41 schrieb Tim McNamara:
>
>> Data export, e.g. dumping everything as CSV, DOT or RDF?
>>
>> On 20 April 2011 18:33, Michael Hunger 
>> wrote:
>>
>>> Hi Javier,
>>>
>>> what would you need that for? I'm interested in the usecase.
>>>
>>> Cheers
>>>
>>> Michael
>>>
>>> Am 20.04.2011 um 06:17 schrieb Javier de la Rosa:
>>>
 On Tue, Apr 19, 2011 at 10:25, Jim Webber  wrote:
> I've just checked and that's in our "list of stuff we really should do
>>> because it annoys us that it's not there."
> No promises, but we do intend to work through at least some of that list
>>> for the 1.4 releases.

 If this finally is developed, it will possible to request for all
 nodes and all relationships in some URL?

>
> Jim
> ___
> Neo4j mailing list
> User@lists.neo4j.org
> https://lists.neo4j.org/mailman/listinfo/user



 --
 Javier de la Rosa
 http://versae.es
 ___
 Neo4j mailing list
 User@lists.neo4j.org
 https://lists.neo4j.org/mailman/listinfo/user
>>>
>>> ___
>>> Neo4j mailing list
>>> User@lists.neo4j.org
>>> https://lists.neo4j.org/mailman/listinfo/user
>>>
>> ___
>> Neo4j mailing list
>> User@lists.neo4j.org
>> https://lists.neo4j.org/mailman/listinfo/user
>
> ___
> Neo4j mailing list
> User@lists.neo4j.org
> https://lists.neo4j.org/mailman/listinfo/user
>



-- 
Javier de la Rosa
http://versae.es


Re: [Neo4j] REST results pagination

2011-04-20 Thread Michael DeHaan
>
> This is important for the integration of the Neo4j Python Rest Client
> in Django, because I'm currently developing an application with lazy
> and user-defined schemas on top of Django and Neo4j. The listing of
> nodes and relationships is a requirement for me, so the pagination is
> a must in my aplication. Performing this in the application layer
> instead of Neo4j server side, wastes a lot of time sending information
> via REST.

Well put about the listing of nodes and relationships.   That's the
use case where this comes up.

If I can't trust that my app's code indexed something correctly, or I
need to index old data later, I may need to walk the whole
graph to update the indexes, so large result sets become scary.   I
don't think I can rely on a traversal, as parts of the graph might be
disjoint.

New use cases on old data mean we'll have to do that, just like adding
a new index to a SQL db.   Or if I have an index that says "all nodes
of type", that result set could get very large.

In fact, I probably need to access all nodes in order to apply any new
indexes, if I can't just send a reindexing command that says
"for all nodes add to index like so, etc".

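Concretely, the kind of pass I mean would look something like the following
server-side Java (a rough sketch; the index name and property key are made
up):

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.index.Index;

public final class Reindexer
{
    // Walk every node and add it to an index keyed on a property,
    // wrapped in a transaction since index writes require one.
    public static void reindexByName( GraphDatabaseService db )
    {
        Index<Node> byName = db.index().forNodes( "byName" );
        Transaction tx = db.beginTx();
        try
        {
            for ( Node node : db.getAllNodes() )
            {
                if ( node.hasProperty( "name" ) )
                {
                    byName.add( node, "name", node.getProperty( "name" ) );
                }
            }
            tx.success();
        }
        finally
        {
            tx.finish();
        }
    }
}
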
If I'm understanding the "server plugin" thing correctly, I've got to
go write some Java classes to do that... which, while I *can* do, it
would be
better if it could be accessed in a language-agnostic way, with
something more or less resembling a database cursor (see MongoDB's
API).

--Michael


Re: [Neo4j] REST results pagination

2011-04-20 Thread Craig Taverner
To respond to your arguments it would be worth noting a comment by Michael
DeHaan later on in this thread. He asked for 'something more or less
resembling a database cursor (see MongoDB's API).' The trick is to achieve
this without having to store a lot of state on the server, so it is robust
against server restarts or crashes.

If we compare to the SQL situation, there are two numbers passed by the
client, the page size and the offset. The state can be re-created by the
database server entirely from this information. How this is implemented in a
relational database I do not know, but whether the database is relational or
a graph, certain behaviors would be expected, like robustness against
database content changes between the requests, and coping with very long
gaps between requests. In my opinion the database cursor could be achieved
by both of the following approaches:

   - Starting the traversal from the beginning, and only returning results
   after passing the cursor offset position
   - Keeping a live traverser in the server, and continuing it from the
   previous position

Personally I think the second approach is simply a performance optimization
of the first. So robustness is achieved by having both, with the second one
working when possible (no server restarts, timeout not expiring, etc.), and
falling back to the first in other cases. This achieves performance and
robustness. What we do not need to do with either case is keep an entire
result set in memory between client requests.
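
A rough sketch of the first approach, against the embedded traversal
framework (offset and pageSize being the two numbers passed by the client;
names are made up):

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.traversal.TraversalDescription;

public final class PagingUtil
{
    // Re-runs the traversal on every request and skips past the cursor
    // offset, so no traverser state has to survive between client requests.
    public static List<Node> page( TraversalDescription description, Node start,
                                   int offset, int pageSize )
    {
        Iterator<Node> nodes = description.traverse( start ).nodes().iterator();
        for ( int i = 0; i < offset && nodes.hasNext(); i++ )
        {
            nodes.next(); // discard everything before the requested page
        }
        List<Node> result = new ArrayList<Node>();
        for ( int i = 0; i < pageSize && nodes.hasNext(); i++ )
        {
            result.add( nodes.next() );
        }
        return result;
    }
}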

Now when you add sorting into the picture, then you need to generate the
complete result-set in memory, sort, paginate and return only the requested
page. If the entire process has to be repeated for every page requested,
this could perform very badly for large result sets. I must believe that
relational databases do not do this (but I do not know how they paginate
sorted results, unless the sort order is maintained in an index).

To avoid keeping everything in memory, or repeatedly reloading everything to
memory on every page request, we need sorted results to be produced on the
stream. This can be done by keeping the sort order in an index. This is very
hard to do in a generic way, which is why I thought it best done in a domain
specific way.

Finally, I think we are really looking at two, different but valid use
cases. The need for generic sorting combined with pagination, and the need
for pagination on very large result sets. The former use case can work with
re-traversing and sorting on each client request, is fully generic, but will
perform badly on large result sets. The latter can perform adequately on
large result sets, as long as you do not need to sort (and use the database
cursor approach to avoid loading the result set into memory).

On Wed, Apr 20, 2011 at 2:01 PM, Jacob Hansson  wrote:

> On Wed, Apr 20, 2011 at 11:25 AM, Craig Taverner  wrote:
>
> > I think sorting would need to be optional, since it is likely to be a
> > performance and memory hug on large traversals. I think one of the key
> > benefits of the traversal framework in the Embedded API is being able to
> > traverse and 'stream' a very large graph without occupying much memory.
> If
> > this can be achieved in the REST API (through pagination), that is a very
> > good thing. I assume the main challenge is being able to freeze a
> traverser
> > and keep it on hold between client requests for the next page. Perhaps
> you
> > have already solved that bit?
> >
>
> While I agree with you that the ability to effectively stream the results
> of
> a traversal is a very useful thing, I don't like the persisted traverser
> approach, for several reasons. I'm sorry if my tone below is a bit harsh, I
> don't mean it that way, I simply want to make a strong case for why I think
> the hard way is the right way in this case.
>
> First, the only good restful approach I can think of for doing persisted
> traversals would be to "create" a traversal resource (since it is an object
> that keeps persistent state), and get back an id to refer to it. Subsequent
> calls to paged results would then be to that traversal resource, updating
> its state and getting results back. Assuming this is the correct way to
> implement this, it comes with a lot of questions. Should there be a timeout
> for these resources, or is the user responsible for removing them from
> memory? What happens when the server crashes and the client can't find the
> traversal resources it has ids for?
>
> If we somehow solve that or find some better approach, we end up with an
> API
> where a client can get paged results, but two clients performing the same
> traversal on the same data may get back the same result in different order
> (see my comments on sorting based on expected traversal behaviour below).
> This means that the API is really only useful if you actually want to get
> the entire result back. If that was the problem we wanted to solve, a
> streaming solution is a much easier and

Re: [Neo4j] Question from Webinar - traversing a path with nodes of different types

2011-04-20 Thread David Montag
Hi Vipul,

Thanks for listening!

It's a very good question, and the short answer is: yes! I'm cc'ing our
mailing list so that everyone can take part in the answer.

Here's the long answer, illustrated by an example:

Let's assume you're modeling a network. You'll have some domain classes that
are all networked entities with peers:

@NodeEntity
public class NetworkEntity {
@RelatedTo(type = "PEER", direction = Direction.BOTH, elementClass =
NetworkEntity.class)
private Set<NetworkEntity> peers;

public void addPeer(NetworkEntity peer) {
peers.add(peer);
}
}

public class Server extends NetworkEntity {}
public class Router extends NetworkEntity {}
public class Client extends NetworkEntity {}

Then we can build a small network:

Client c = new Client().persist();
Router r1 = new Router().persist();
Router r21 = new Router().persist();
Router r22 = new Router().persist();
Router r3 = new Router().persist();
Server s = new Server().persist();

c.addPeer(r1);
r1.addPeer(r21);
r1.addPeer(r22);
r21.addPeer(r3);
r22.addPeer(r3);
r3.addPeer(s);

c.persist();

Note that after linking the entities, I only call persist() on the client.
You can read more about this in the reference documentation, but essentially
it will cascade in the direction of the relationships created, and will in
this case cascade all the way to the server entity.

You can now query this:

Iterable<EntityPath<?, ?>> paths =
c.findAllPathsByTraversal(Traversal.description());

The above code will get you an EntityPath per node visited during the
traversal from c. The example does however not use a very interesting
traversal description, but you can still print the results:

for (EntityPath path : paths) {
StringBuilder sb = new StringBuilder();
Iterator iter =
path.nodeEntities().iterator();
while (iter.hasNext()) {
sb.append(iter.next());
if (iter.hasNext()) sb.append(" -> ");
}
System.out.println(sb);
}

This will print each path, with all entities in the path. This is what it
looks like:

domain.Client@1
domain.Client@1 -> domain.Router@2
domain.Client@1 -> domain.Router@2 -> domain.Router@3
domain.Client@1 -> domain.Router@2 -> domain.Router@3 -> domain.Router@5
domain.Client@1 -> domain.Router@2 -> domain.Router@3 ->
domain.Router@5-> domain.Server@6
domain.Client@1 -> domain.Router@2 -> domain.Router@3 ->
domain.Router@5-> domain.Router@4

Let us know if this is what you looked for. If you want to only find paths
that end with a server, you'd use this query instead:

Iterable<EntityPath<?, ?>> paths =
c.findAllPathsByTraversal(Traversal.description().evaluator(new Evaluator()
{
@Override
public Evaluation evaluate(Path path) {
if (new ConvertingEntityPath(graphDatabaseContext, path).endEntity()
instanceof Server) {
return Evaluation.INCLUDE_AND_PRUNE;
}
return Evaluation.EXCLUDE_AND_CONTINUE;
}
}));

In the above code example, graphDatabaseContext is a bean of type
GraphDatabaseContext created by Spring Data Graph. This syntax will
dramatically improve in future releases. It will print:

domain.Client@1 -> domain.Router@2 -> domain.Router@3 ->
domain.Router@5-> domain.Server@6

Regarding your second question about types: If you want to convert a node
into an entity, you would use the TypeRepresentationStrategy configured
internally in Spring Data Graph. See the reference documentation for more
information on this. If you want to convert Neo4j paths to entity paths, you
can use the ConvertingEntityPath class as seen above. As an implementation
detail, the class name is stored on the node as a property.

Hope this helped!

David

On Wed, Apr 20, 2011 at 9:20 AM, Emil Eifrem  wrote:

> David / Michael, do you guys want to dig in and help out Vipul below?
>
> -EE
>
> On Wed, Apr 20, 2011 at 09:17, Vipul Gupta 
> wrote:
> > Hi Emil,
> > I would like to start by thanking you for the webinar. It was very useful
> .
> > This is the question I asked on the webinar as well.
> > How can we traverse a graph which consists of nodes of different types
> using
> > SDG?
> > Say first Node in relationship is of type A, next 3 nodes in path are of
> > type B, and next node is of type C.
> > Is there a way to traverse the path and keep getting the domain objects
> at
> > each stage of traversal.
> > answer mentioned on the webinar is to look at findAllPathsByTraversal in
> > NodeBacked.
> > I am not able to understand how to get an Iterator or a path that starts
> > with Type A, ends with Type C and has a number of Type B nodes in between.
> >
> > Also if I convert a domain POJO to a Node object, is there a way to know
> > what domain type it represents or is wrapped around so that it can be
> > converted back to that type.
> > Please let me know.
> > Best Regards,
> > Vipul Gupta
>
>
>
> --
> Emil Eifrém, CEO [e...@neotechnology.com]
> Neo Technology, www.neotechnology.com
> Cell: +46 733 462 271 | US: 206 403 8808
> http://blogs.neotechnology.com/emil
> http://twitter.com/emileifr

Re: [Neo4j] Question from Webinar - traversing a path with nodes of different types

2011-04-20 Thread Vipul Gupta
Perfect. Thanks, David. I have a very similar model with a slightly more
complicated structure :).
I will keep you updated on my project (totally based on Neo4j and SDG),
as I plan to open source it eventually.

I have one more question for you:
I see the latest release changes NodeGraphRepository from an interface to a
class (RC1 to RELEASE).
What necessitated this change, and are there any release notes with detailed
explanations of the changes that I can refer to?
I am finding it hard to map out the changes.



On Thu, Apr 21, 2011 at 12:09 AM, David Montag <
david.mon...@neotechnology.com> wrote:

> Hi Vipul,
>
> Thanks for listening!
>
> It's a very good question, and the short answer is: yes! I'm cc'ing our
> mailing list so that everyone can take part in the answer.
>
> Here's the long answer, illustrated by an example:
>
> Let's assume you're modeling a network. You'll have some domain classes
> that are all networked entities with peers:
>
> @NodeEntity
> public class NetworkEntity {
> @RelatedTo(type = "PEER", direction = Direction.BOTH, elementClass =
> NetworkEntity.class)
> private Set peers;
>
> public void addPeer(NetworkEntity peer) {
> peers.add(peer);
> }
> }
>
> public class Server extends NetworkEntity {}
> public class Router extends NetworkEntity {}
> public class Client extends NetworkEntity {}
>
> Then we can build a small network:
>
> Client c = new Client().persist();
> Router r1 = new Router().persist();
> Router r21 = new Router().persist();
> Router r22 = new Router().persist();
> Router r3 = new Router().persist();
> Server s = new Server().persist();
>
> c.addPeer(r1);
> r1.addPeer(r21);
> r1.addPeer(r22);
> r21.addPeer(r3);
> r22.addPeer(r3);
> r3.addPeer(s);
>
> c.persist();
>
> Note that after linking the entities, I only call persist() on the client.
> You can read more about this in the reference documentation, but essentially
> it will cascade in the direction of the relationships created, and will in
> this case cascade all the way to the server entity.
>
> You can now query this:
>
> Iterable> paths =
> c.findAllPathsByTraversal(Traversal.description());
>
> The above code will get you an EntityPath per node visited during the
> traversal from c. The example does however not use a very interesting
> traversal description, but you can still print the results:
>
> for (EntityPath path : paths) {
> StringBuilder sb = new StringBuilder();
> Iterator iter =
> path.nodeEntities().iterator();
> while (iter.hasNext()) {
> sb.append(iter.next());
> if (iter.hasNext()) sb.append(" -> ");
> }
> System.out.println(sb);
> }
>
> This will print each path, with all entities in the path. This is what it
> looks like:
>
> domain.Client@1
> domain.Client@1 -> domain.Router@2
> domain.Client@1 -> domain.Router@2 -> domain.Router@3
> domain.Client@1 -> domain.Router@2 -> domain.Router@3 ->
> domain.Router@5
> domain.Client@1 -> domain.Router@2 -> domain.Router@3 ->
> domain.Router@5 -> domain.Server@6
> domain.Client@1 -> domain.Router@2 -> domain.Router@3 ->
> domain.Router@5 -> domain.Router@4
>
> Let us know if this is what you looked for. If you want to only find paths
> that end with a server, you'd use this query instead:
>
> Iterable> paths =
> c.findAllPathsByTraversal(Traversal.description().evaluator(new Evaluator()
> {
> @Override
> public Evaluation evaluate(Path path) {
> if (new ConvertingEntityPath(graphDatabaseContext,
> path).endEntity() instanceof Server) {
> return Evaluation.INCLUDE_AND_PRUNE;
> }
> return Evaluation.EXCLUDE_AND_CONTINUE;
> }
> }));
>
> In the above code example, graphDatabaseContext is a bean of type
> GraphDatabaseContext created by Spring Data Graph. This syntax will
> dramatically improve in future releases. It will print:
>
> domain.Client@1 -> domain.Router@2 -> domain.Router@3 ->
> domain.Router@5 -> domain.Server@6
>
> Regarding your second question about types: If you want to convert a node
> into an entity, you would use the TypeRepresentationStrategy configured
> internally in Spring Data Graph. See the reference documentation for more
> information on this. If you want to convert Neo4j paths to entity paths, you
> can use the ConvertingEntityPath class as seen above. As an implementation
> detail, the class name is stored on the node as a property.
>
> Hope this helped!
>
> David
>
> On Wed, Apr 20, 2011 at 9:20 AM, Emil Eifrem wrote:
>
>> David / Michael, do you guys want to dig in and help out Vipul below?
>>
>> -EE
>>
>> On Wed, Apr 20, 2011 at 09:17, Vipul Gupta 
>> wrote:
>> > Hi Emil,
>> > I would like to start by thanking you for the webinar. It was very
>> useful .
>> > This is the question I asked on the webinar as well.
>> > How can we traverse a graph which consists of nodes of different types
>> using
>> > SDG?
>> > Say first Node in relationship is of type A, next 3 nodes in path are of
>>

[Neo4j] Great Webinar by Mark Pollack and Emil Eifrem on Introducing Spring Data Graph

2011-04-20 Thread Michael Hunger
Great job guys, you really got people interested.

 dr_pompeii: Just finished the Webinar "Getting Started with Spring Data 
Graph", very impressive Neo4j and SpringData, well done! #Neo4j #SpringData
 mstine: Polyglot persistence looks very compelling. #springdata #neo4j  

Good intro to Spring Data (see http://springsource.org/spring-data), an overview
of the NOSQL topic, an outline of Neo4j, and then a focus on Spring Data Graph
(http://springsource.org/spring-data/graph).

The webinar was recorded and will be available from Monday at
http://www.springsource.com/newsevents/webinars

If you have any questions feel free to ask them on the mailing lists or on the 
Spring Data Forums (http://forum.springsource.org/forumdisplay.php?f=80).

Stay tuned for more in depth tutorials on Spring Data and Spring Data Graph.

Cheers and thanks again to everyone who made it possible.

Michael

P.S: 
Please don't forget to check out the Spring Data Graph Guide Book "Good 
Relationships" http://bit.ly/sdg-book
And the tutorial social movie database running on Spring Data Graph and Neo4j 
http://cineasts.net
Release Blog Post: 
http://blog.springsource.com/2011/04/18/spring-data-graph-1-0-neo4j-support-released


Re: [Neo4j] Question from Webinar - traversing a path with nodes of different types

2011-04-20 Thread Michael Hunger
Vipul,

please keep us in the loop on your project; we love getting feedback from
first-hand users of Spring Data Graph.

The structure was changed due to an internal cleanup. You don't need those 
specialized Repositories anymore:

just do something like:

public interface MovieRepository extends GraphRepository<Movie>,
CustomRepository<Movie> {
}

You can have a look at the imdb sample project:
http://github.com/springsource/spring-data-graph-examples

The Node and Relationship repository names were thus freed up, and the concrete
implementation classes got those names instead of the previous (ugly) ones.

HTH

Michael

Am 20.04.2011 um 21:17 schrieb Vipul Gupta:

> Perfect. Thanks, David. I have a very similar model with a slightly more
> complicated structure :).
> I will keep you updated on my project (based entirely on Neo4j and SDG),
> as I plan to open source it eventually.
> 
> I have one more question for you:
> I see the latest release changes NodeGraphRepository from interfaces to
> classes (RC1 to RELEASE).
> What necessitated this change, and are there any release notes with detailed
> explanations of the changes that I can refer to?
> I am finding it hard to map out the changes.
> 
> 
> 
> On Thu, Apr 21, 2011 at 12:09 AM, David Montag <
> david.mon...@neotechnology.com> wrote:
> 
>> Hi Vipul,
>> 
>> Thanks for listening!
>> 
>> It's a very good question, and the short answer is: yes! I'm cc'ing our
>> mailing list so that everyone can take part in the answer.
>> 
>> Here's the long answer, illustrated by an example:
>> 
>> Let's assume you're modeling a network. You'll have some domain classes
>> that are all networked entities with peers:
>> 
>> @NodeEntity
>> public class NetworkEntity {
>>     @RelatedTo(type = "PEER", direction = Direction.BOTH, elementClass = NetworkEntity.class)
>>     private Set<NetworkEntity> peers;
>> 
>>     public void addPeer(NetworkEntity peer) {
>>         peers.add(peer);
>>     }
>> }
>> 
>> public class Server extends NetworkEntity {}
>> public class Router extends NetworkEntity {}
>> public class Client extends NetworkEntity {}
>> 
>> Then we can build a small network:
>> 
>> Client c = new Client().persist();
>> Router r1 = new Router().persist();
>> Router r21 = new Router().persist();
>> Router r22 = new Router().persist();
>> Router r3 = new Router().persist();
>> Server s = new Server().persist();
>> 
>> c.addPeer(r1);
>> r1.addPeer(r21);
>> r1.addPeer(r22);
>> r21.addPeer(r3);
>> r22.addPeer(r3);
>> r3.addPeer(s);
>> 
>> c.persist();
>> 
>> Note that after linking the entities, I only call persist() on the client.
>> You can read more about this in the reference documentation, but essentially
>> it will cascade in the direction of the relationships created, and will in
>> this case cascade all the way to the server entity.
>> 
>> You can now query this:
>> 
>> Iterable<EntityPath<?, ?>> paths = c.findAllPathsByTraversal(Traversal.description());
>> 
>> The above code will get you an EntityPath per node visited during the
>> traversal from c. The example does however not use a very interesting
>> traversal description, but you can still print the results:
>> 
>> for (EntityPath<?, ?> path : paths) {
>>     StringBuilder sb = new StringBuilder();
>>     Iterator<?> iter = path.nodeEntities().iterator();
>>     while (iter.hasNext()) {
>>         sb.append(iter.next());
>>         if (iter.hasNext()) sb.append(" -> ");
>>     }
>>     System.out.println(sb);
>> }
>> 
>> This will print each path, with all entities in the path. This is what it
>> looks like:
>> 
>>     domain.Client@1
>>     domain.Client@1 -> domain.Router@2
>>     domain.Client@1 -> domain.Router@2 -> domain.Router@3
>>     domain.Client@1 -> domain.Router@2 -> domain.Router@3 -> domain.Router@5
>>     domain.Client@1 -> domain.Router@2 -> domain.Router@3 -> domain.Router@5 -> domain.Server@6
>>     domain.Client@1 -> domain.Router@2 -> domain.Router@3 -> domain.Router@5 -> domain.Router@4
>> 
>> Let us know if this is what you looked for. If you want to only find paths
>> that end with a server, you'd use this query instead:
>> 
>> Iterable<EntityPath<?, ?>> paths =
>>     c.findAllPathsByTraversal(Traversal.description().evaluator(new Evaluator() {
>>         @Override
>>         public Evaluation evaluate(Path path) {
>>             if (new ConvertingEntityPath(graphDatabaseContext, path).endEntity() instanceof Server) {
>>                 return Evaluation.INCLUDE_AND_PRUNE;
>>             }
>>             return Evaluation.EXCLUDE_AND_CONTINUE;
>>         }
>>     }));
>> 
>> In the above code example, graphDatabaseContext is a bean of type
>> GraphDatabaseContext created by Spring Data Graph. This syntax will
>> dramatically improve in future releases. It will print:
>> 
>>     domain.Client@1 -> domain.Router@2 -> domain.Router@3 -> domain.Router@5 -> domain.Server@6
>> 
>> Regarding your second question about types: If you want to convert a node
>> into an entity, you would use the TypeRepresentationStrategy configured
>> internally in Spring Data Graph. See the reference doc
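
The quoted explanation above refers to ConvertingEntityPath and the TypeRepresentationStrategy. Below is a minimal sketch of that conversion, using only the calls already shown in this thread; imports are omitted as in the other snippets, graphDatabaseContext is the Spring Data Graph bean discussed above, and Server is the entity class from the example.

// Wraps a low-level Neo4j Path so the mapped entities become accessible again.
public class PathInspector {

    private final GraphDatabaseContext graphDatabaseContext; // provided by Spring Data Graph

    public PathInspector(GraphDatabaseContext graphDatabaseContext) {
        this.graphDatabaseContext = graphDatabaseContext;
    }

    // True if the raw traversal path ends at a node backing a Server entity.
    public boolean endsAtServer(Path path) {
        ConvertingEntityPath entityPath = new ConvertingEntityPath(graphDatabaseContext, path);
        return entityPath.endEntity() instanceof Server;
    }

    // Prints every entity along the path, mirroring the loop in David's example.
    public void printEntities(Path path) {
        ConvertingEntityPath entityPath = new ConvertingEntityPath(graphDatabaseContext, path);
        for (Object entity : entityPath.nodeEntities()) {
            System.out.println(entity);
        }
    }
}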

[Neo4j] Cannot launch neo4j on Mac OS 10.6.7

2011-04-20 Thread Kevin Moore
Latest java 1.6.

Any suggestions for debugging?

INFO|wrapper|11-04-20 14:53:35|init
INFO|wrapper|11-04-20 14:53:35|set state IDLE->STARTING
INFO|wrapper|11-04-20 14:53:35|starting Process
INFO|wrapper|11-04-20 14:53:36|Controller State: UNKNOWN -> WAITING
INFO|wrapper|11-04-20 14:53:36|working dir /Users/kevin/bin/neo4j/bin/..
INFO|wrapper|11-04-20 14:53:36|error initializing script
INFO|wrapper|11-04-20 14:53:36|spawning wrapped process
INFO|wrapper|11-04-20 14:53:36|exec:java -classpath
/Users/kevin/bin/neo4j/bin/wrapper.jar:/Users/kevin/bin/neo4j/lib/geronimo-jta_1.1_spec-1.1.1.jar:/Users/kevin/bin/neo4j/lib/neo4j-community-1.3.jar:/Users/kevin/bin/neo4j/lib/neo4j-graph-algo-1.3.jar:/Users/kevin/bin/neo4j/lib/neo4j-jmx-1.3.jar:/Users/kevin/bin/neo4j/lib/neo4j-kernel-1.3.jar:/Users/kevin/bin/neo4j/lib/neo4j-lucene-index-1.3.jar:/Users/kevin/bin/neo4j/lib/neo4j-shell-1.3.jar:/Users/kevin/bin/neo4j/lib/neo4j-udc-1.3.jar:/Users/kevin/bin/neo4j/lib/org.apache.servicemix.bundles.jline-0.9.94_1.jar:/Users/kevin/bin/neo4j/lib/org.apache.servicemix.bundles.lucene-3.0.1_2.jar:/Users/kevin/bin/neo4j/lib/server-api-1.3.jar:/Users/kevin/bin/neo4j/system/lib/antlr-2.7.7.jar:/Users/kevin/bin/neo4j/system/lib/asm-3.1.jar:/Users/kevin/bin/neo4j/system/lib/asm-analysis-3.2.jar:/Users/kevin/bin/neo4j/system/lib/asm-commons-3.2.jar:/Users/kevin/bin/neo4j/system/lib/asm-tree-3.2.jar:/Users/kevin/bin/neo4j/system/lib/asm-util-3.2.jar:/Users/kevin/bin/neo4j/system/lib/blueprints-
 
core-0.6.jar:/Users/kevin/bin/neo4j/system/lib/blueprints-neo4j-graph-0.6.jar:/Users/kevin/bin/neo4j/system/lib/commons-beanutils-1.8.0.jar:/Users/kevin/bin/neo4j/system/lib/commons-beanutils-core-1.8.0.jar:/Users/kevin/bin/neo4j/system/lib/commons-collections-3.2.1.jar:/Users/kevin/bin/neo4j/system/lib/commons-configuration-1.6.jar:/Users/kevin/bin/neo4j/system/lib/commons-digester-1.8.1.jar:/Users/kevin/bin/neo4j/system/lib/commons-io-1.4.jar:/Users/kevin/bin/neo4j/system/lib/commons-lang-2.4.jar:/Users/kevin/bin/neo4j/system/lib/commons-logging-1.1.1.jar:/Users/kevin/bin/neo4j/system/lib/de.huxhorn.lilith.3rdparty.rrd4j-2.0.5.jar:/Users/kevin/bin/neo4j/system/lib/gremlin-0.9.jar:/Users/kevin/bin/neo4j/system/lib/groovy-1.7.8.jar:/Users/kevin/bin/neo4j/system/lib/jackson-core-asl-1.6.1.jar:/Users/kevin/bin/neo4j/system/lib/jackson-jaxrs-1.6.1.jar:/Users/kevin/bin/neo4j/system/lib/jackson-mapper-asl-1.6.1.jar:/Users/kevin/bin/neo4j/system/lib/jansi-1.5.jar:/Users/kevin/bin/
 
neo4j/system/lib/jcl-over-slf4j-1.6.1.jar:/Users/kevin/bin/neo4j/system/lib/jersey-client-1.4.jar:/Users/kevin/bin/neo4j/system/lib/jersey-core-1.3.jar:/Users/kevin/bin/neo4j/system/lib/jersey-multipart-1.3.jar:/Users/kevin/bin/neo4j/system/lib/jersey-server-1.3.jar:/Users/kevin/bin/neo4j/system/lib/jetty-6.1.25.jar:/Users/kevin/bin/neo4j/system/lib/jetty-util-6.1.25.jar:/Users/kevin/bin/neo4j/system/lib/json-simple-1.1.jar:/Users/kevin/bin/neo4j/system/lib/jsr311-api-1.1.1.jar:/Users/kevin/bin/neo4j/system/lib/log4j-over-slf4j-1.6.1.jar:/Users/kevin/bin/neo4j/system/lib/mimepull-1.4.jar:/Users/kevin/bin/neo4j/system/lib/neo4j-server-1.3-static-web.jar:/Users/kevin/bin/neo4j/system/lib/neo4j-server-1.3.jar:/Users/kevin/bin/neo4j/system/lib/org.apache.felix.fileinstall-3.0.2.jar:/Users/kevin/bin/neo4j/system/lib/org.apache.felix.framework-3.0.2.jar:/Users/kevin/bin/neo4j/system/lib/org.apache.felix.main-3.0.2.jar:/Users/kevin/bin/neo4j/system/lib/org.osgi.compendium-4.1.0.jar
 
:/Users/kevin/bin/neo4j/system/lib/org.osgi.core-4.1.0.jar:/Users/kevin/bin/neo4j/system/lib/pipes-0.4.jar:/Users/kevin/bin/neo4j/system/lib/servlet-api-2.5-20081211.jar:/Users/kevin/bin/neo4j/system/lib/slf4j-api-1.6.1.jar:/Users/kevin/bin/neo4j/system/lib/slf4j-jdk14-1.6.1.jar:/Users/kevin/bin/neo4j/system/lib/sysout-over-slf4j-1.0.2.jar
-Dorg.neo4j.server.properties=conf/neo4j-server.properties
-Djava.util.logging.config.file=conf/logging.properties
-Dwrapper.config=/Users/kevin/bin/neo4j/conf/neo4j-wrapper.conf
-Dwrapper.port=15003 -Dwrapper.key=-8609977225017983781
-Dwrapper.teeName=-8609977225017983781$1303336416133
-Dwrapper.tmpPath=/var/folders/BK/BKrVYstQEzGlF7dlUf36tk+++TI/-Tmp-
org.rzo.yajsw.app.WrapperJVMMain
INFO|wrapper|11-04-20 14:53:36|starting
INFO|wrapper|11-04-20 14:53:37|started process 8920
INFO|wrapper|11-04-20 14:53:37|started process with pid 8920
INFO|wrapper|11-04-20 14:53:37|exit code linux process 5
INFO|wrapper|11-04-20 14:53:37|set state STARTING->RUNNING
INFO|8920/0|11-04-20 14:53:37|Controller State: WAITING -> PROCESS_KILLED
INFO|wrapper|11-04-20 14:53:37|set state RUNNING->STATE_ABORT
INFO|wrapper|11-04-20 14:53:37|set state STATE_ABORT->IDLE
INFO|wrapper|11-04-20 14:53:37|Shutting down Wrapper
___
Neo4j mailing list
User@lists.neo4j.org
https://lists.neo4j.org/mailman/listinfo/user


Re: [Neo4j] Cannot launch neo4j on Mac OS 10.6.7

2011-04-20 Thread Kevin Moore
Running from /Applications and ~/Applications both fail.

I'm running

./bin/neo4j start


On Wed, Apr 20, 2011 at 15:08, Saikat Kanjilal  wrote:

>  I have the same setup, try installing neo in the /Applications directory
> and running from there, that has worked for me.  Let me know how else to
> help.
>
> Regards
>
> > Date: Wed, 20 Apr 2011 15:04:44 -0700
> > From: ke...@thinkpixellab.com
> > To: user@lists.neo4j.org
> > Subject: [Neo4j] Cannot launch neo4j on Mac OS 10.6.7
>
> >
> > Latest java 1.6.
> >
> > Any suggestions for debugging?
> >
> > INFO|wrapper|11-04-20 14:53:35|init
> > INFO|wrapper|11-04-20 14:53:35|set state IDLE->STARTING
> > INFO|wrapper|11-04-20 14:53:35|starting Process
> > INFO|wrapper|11-04-20 14:53:36|Controller State: UNKNOWN -> WAITING
> > INFO|wrapper|11-04-20 14:53:36|working dir /Users/kevin/bin/neo4j/bin/..
> > INFO|wrapper|11-04-20 14:53:36|error initializing script
> > INFO|wrapper|11-04-20 14:53:36|spawning wrapped process
> > INFO|wrapper|11-04-20 14:53:36|exec:java -classpath
> >
> /Users/kevin/bin/neo4j/bin/wrapper.jar:/Users/kevin/bin/neo4j/lib/geronimo-jta_1.1_spec-1.1.1.jar:/Users/kevin/bin/neo4j/lib/neo4j-community-1.3.jar:/Users/kevin/bin/neo4j/lib/neo4j-graph-algo-1.3.jar:/Users/kevin/bin/neo4j/lib/neo4j-jmx-1.3.jar:/Users/kevin/bin/neo4j/lib/neo4j-kernel-1.3.jar:/Users/kevin/bin/neo4j/lib/neo4j-lucene-index-1.3.jar:/Users/kevin/bin/neo4j/lib/neo4j-shell-1.3.jar:/Users/kevin/bin/neo4j/lib/neo4j-udc-1.3.jar:/Users/kevin/bin/neo4j/lib/org.apache.servicemix.bundles.jline-0.9.94_1.jar:/Users/kevin/bin/neo4j/lib/org.apache.servicemix.bundles.lucene-3.0.1_2.jar:/Users/kevin/bin/neo4j/lib/server-api-1.3.jar:/Users/kevin/bin/neo4j/system/lib/antlr-2.7.7.jar:/Users/kevin/bin/neo4j/system/lib/asm-3.1.jar:/Users/kevin/bin/neo4j/system/lib/asm-analysis-3.2.jar:/Users/kevin/bin/neo4j/system/lib/asm-commons-3.2.jar:/Users/kevin/bin/neo4j/system/lib/asm-tree-3.2.jar:/Users/kevin/bin/neo4j/system/lib/asm-util-3.2.jar:/Users/kevin/bin/neo4j/system/lib/blueprint
 s-
> >
> core-0.6.jar:/Users/kevin/bin/neo4j/system/lib/blueprints-neo4j-graph-0.6.jar:/Users/kevin/bin/neo4j/system/lib/commons-beanutils-1.8.0.jar:/Users/kevin/bin/neo4j/system/lib/commons-beanutils-core-1.8.0.jar:/Users/kevin/bin/neo4j/system/lib/commons-collections-3.2.1.jar:/Users/kevin/bin/neo4j/system/lib/commons-configuration-1.6.jar:/Users/kevin/bin/neo4j/system/lib/commons-digester-1.8.1.jar:/Users/kevin/bin/neo4j/system/lib/commons-io-1.4.jar:/Users/kevin/bin/neo4j/system/lib/commons-lang-2.4.jar:/Users/kevin/bin/neo4j/system/lib/commons-logging-1.1.1.jar:/Users/kevin/bin/neo4j/system/lib/de.huxhorn.lilith.3rdparty.rrd4j-2.0.5.jar:/Users/kevin/bin/neo4j/system/lib/gremlin-0.9.jar:/Users/kevin/bin/neo4j/system/lib/groovy-1.7.8.jar:/Users/kevin/bin/neo4j/system/lib/jackson-core-asl-1.6.1.jar:/Users/kevin/bin/neo4j/system/lib/jackson-jaxrs-1.6.1.jar:/Users/kevin/bin/neo4j/system/lib/jackson-mapper-asl-1.6.1.jar:/Users/kevin/bin/neo4j/system/lib/jansi-1.5.jar:/Users/kevin/bin
 /
> >
> neo4j/system/lib/jcl-over-slf4j-1.6.1.jar:/Users/kevin/bin/neo4j/system/lib/jersey-client-1.4.jar:/Users/kevin/bin/neo4j/system/lib/jersey-core-1.3.jar:/Users/kevin/bin/neo4j/system/lib/jersey-multipart-1.3.jar:/Users/kevin/bin/neo4j/system/lib/jersey-server-1.3.jar:/Users/kevin/bin/neo4j/system/lib/jetty-6.1.25.jar:/Users/kevin/bin/neo4j/system/lib/jetty-util-6.1.25.jar:/Users/kevin/bin/neo4j/system/lib/json-simple-1.1.jar:/Users/kevin/bin/neo4j/system/lib/jsr311-api-1.1.1.jar:/Users/kevin/bin/neo4j/system/lib/log4j-over-slf4j-1.6.1.jar:/Users/kevin/bin/neo4j/system/lib/mimepull-1.4.jar:/Users/kevin/bin/neo4j/system/lib/neo4j-server-1.3-static-web.jar:/Users/kevin/bin/neo4j/system/lib/neo4j-server-1.3.jar:/Users/kevin/bin/neo4j/system/lib/org.apache.felix.fileinstall-3.0.2.jar:/Users/kevin/bin/neo4j/system/lib/org.apache.felix.framework-3.0.2.jar:/Users/kevin/bin/neo4j/system/lib/org.apache.felix.main-3.0.2.jar:/Users/kevin/bin/neo4j/system/lib/org.osgi.compendium-4.1.0.ja
 r
> >
> :/Users/kevin/bin/neo4j/system/lib/org.osgi.core-4.1.0.jar:/Users/kevin/bin/neo4j/system/lib/pipes-0.4.jar:/Users/kevin/bin/neo4j/system/lib/servlet-api-2.5-20081211.jar:/Users/kevin/bin/neo4j/system/lib/slf4j-api-1.6.1.jar:/Users/kevin/bin/neo4j/system/lib/slf4j-jdk14-1.6.1.jar:/Users/kevin/bin/neo4j/system/lib/sysout-over-slf4j-1.0.2.jar
> > -Dorg.neo4j.server.properties=conf/neo4j-server.properties
> > -Djava.util.logging.config.file=conf/logging.properties
> > -Dwrapper.config=/Users/kevin/bin/neo4j/conf/neo4j-wrapper.conf
> > -Dwrapper.port=15003 -Dwrapper.key=-8609977225017983781
> > -Dwrapper.teeName=-8609977225017983781$1303336416133
> > -Dwrapper.tmpPath=/var/folders/BK/BKrVYstQEzGlF7dlUf36tk+++TI/-Tmp-
> > org.rzo.yajsw.app.WrapperJVMMain
> > INFO|wrapper|11-04-20 14:53:36|starting
> > INFO|wrapper|11-04-20 14:53:37|started process 8920
> > INFO|wrapper|11-04-20 14:53:37|started process with pid 8920
> > INFO|wrapper|11-04-20 14:53:37|exit

Re: [Neo4j] Cannot launch neo4j on Mac OS 10.6.7

2011-04-20 Thread Jim Webber
Hi Kevin,

The install location shouldn't make any difference.

Can I ask when you downloaded the package? We had a snafu with our packaging 
mechanism just after we released. That was picked up and fixed, but there's a 
chance you might have a copy of the dodgy package.

Jim
___
Neo4j mailing list
User@lists.neo4j.org
https://lists.neo4j.org/mailman/listinfo/user


Re: [Neo4j] Cannot launch neo4j on Mac OS 10.6.7

2011-04-20 Thread Kevin Moore
Just downloaded the latest (again) and ran it from directly from the
extracted directory.

...and everything seems to work fine. :-/

I'll try moving it to see if something changes.

Thanks for the replies.

On Wed, Apr 20, 2011 at 16:47, Jim Webber  wrote:

> Hi Kevin,
>
> The install location shouldn't make any difference.
>
> Can I ask when you downloaded the package? We had a snafu with our
> packaging mechanism just after we released. That was picked up and fixed,
> but there's a chance you might have a copy of the dodgy package.
>
> Jim
> ___
> Neo4j mailing list
> User@lists.neo4j.org
> https://lists.neo4j.org/mailman/listinfo/user
>
___
Neo4j mailing list
User@lists.neo4j.org
https://lists.neo4j.org/mailman/listinfo/user


[Neo4j] Error building Neo4j

2011-04-20 Thread Kevin Moore
I've tried 1.3 tag, master, etc.

Always the same error.

Maven 3.0.2

Should I be using a different version?

[INFO] Unpacking /Users/kevin/source/github/neo4j/graph-algo/target/classes
to
  /Users/kevin/source/github/neo4j/neo4j/target/sources
   with includes null and excludes:null
org.codehaus.plexus.archiver.ArchiverException: The source must not be a
directory.
at
org.codehaus.plexus.archiver.AbstractUnArchiver.validate(AbstractUnArchiver.java:174)
at
org.codehaus.plexus.archiver.AbstractUnArchiver.extract(AbstractUnArchiver.java:107)
at
org.apache.maven.plugin.dependency.AbstractDependencyMojo.unpack(AbstractDependencyMojo.java:260)
at
org.apache.maven.plugin.dependency.UnpackDependenciesMojo.execute(UnpackDependenciesMojo.java:90)
at
org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:107)
at
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
at
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
at
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
at
org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
at
org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:319)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:534)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at
org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290)
at
org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230)
at
org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)
[INFO]

[INFO] Reactor Summary:
[INFO]
[INFO] Neo4j - Graph Database Kernel . SUCCESS
[3:53.536s]
[INFO] Neo4j - JMX support ... SUCCESS [1.291s]
[INFO] Neo4j - Usage Data Collection . SUCCESS [13.238s]
[INFO] Neo4j - Lucene Index .. SUCCESS [5.020s]
[INFO] Neo4j - Graph Algorithms .. SUCCESS [0.204s]
[INFO] Neo4j . FAILURE
[1:16.071s]
[INFO] Neo4j Community ... SKIPPED
[INFO] Neo4j - Generic shell . SKIPPED
[INFO] Neo4j Examples  SKIPPED
[INFO] Neo4j Server API .. SKIPPED
[INFO] Neo4j Server .. SKIPPED
[INFO] Neo4j Server Examples . SKIPPED
[INFO] Neo4j Community Build . SKIPPED
[INFO]

[INFO] BUILD FAILURE
[INFO]

[INFO] Total time: 6:57.812s
[INFO] Finished at: Wed Apr 20 18:58:58 PDT 2011
[INFO] Final Memory: 17M/81M
[INFO]

[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-dependency-plugin:2.1:unpack-dependencies
(get-sources) on project neo4j: Error unpacking file:
/Users/kevin/source/github/neo4j/graph-algo/target/classes to:
/Users/kevin/source/github/neo4j/neo4j/target/sources
[ERROR] org.codehaus.plexus.archiver.ArchiverException: The source must not
be a directory.
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please
read the following articles:
[ERROR] [Help 1]
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the
command
[ERROR]   mvn  -rf :neo4j
___
Neo4j mailing list
User@lists.neo4j.org
https://lists.neo4j.org/mailman/listinfo/user


Re: [Neo4j] Cannot launch neo4j on Mac OS 10.6.7

2011-04-20 Thread Michael Hunger
It might be that Mac OS permissions on the application folders don't allow
writing there at all, so writes to the log files and the graph DB fail.

Cheers Michael

Sent from my iBrick4


Am 21.04.2011 um 01:59 schrieb Kevin Moore :

> Just downloaded the latest (again) and ran it from directly from the
> extracted directory.
> 
> ...and everything seems to work fine. :-/
> 
> I'll try moving it to see if something changes.
> 
> Thanks for the replies.
> 
> On Wed, Apr 20, 2011 at 16:47, Jim Webber  wrote:
> 
>> Hi Kevin,
>> 
>> The install location shouldn't make any difference.
>> 
>> Can I ask when you downloaded the package? We had a snafu with our
>> packaging mechanism just after we released. That was picked up and fixed,
>> but there's a chance you might have a copy of the dodgy package.
>> 
>> Jim
>> ___
>> Neo4j mailing list
>> User@lists.neo4j.org
>> https://lists.neo4j.org/mailman/listinfo/user
>> 
> ___
> Neo4j mailing list
> User@lists.neo4j.org
> https://lists.neo4j.org/mailman/listinfo/user
___
Neo4j mailing list
User@lists.neo4j.org
https://lists.neo4j.org/mailman/listinfo/user


Re: [Neo4j] Question from Webinar - traversing a path with nodes of different types

2011-04-20 Thread Vipul Gupta
David/Michael,

Let me modify the example a bit.
What if my graph structure is like this:

domain.Client@1 -> domain.Router@2 -> domain.Router@3 -> domain.Router@5 -> domain.Server@6
                                   -> domain.Router@7 -> domain.Router@8 ->


Imagine a manufacturing line.
6 depends on both 3 and 8 and acts as a blocking point till 3 and 8 finish.

Is there a way to get a cleaner traversal for this kind of relationship? I
want to get a complete intermediate traversal from Client to Server.

Thanks a lot for helping out on this.

Best Regards,
Vipul




On Thu, Apr 21, 2011 at 12:09 AM, David Montag <
david.mon...@neotechnology.com> wrote:

> Hi Vipul,
>
> Thanks for listening!
>
> It's a very good question, and the short answer is: yes! I'm cc'ing our
> mailing list so that everyone can take part in the answer.
>
> Here's the long answer, illustrated by an example:
>
> Let's assume you're modeling a network. You'll have some domain classes
> that are all networked entities with peers:
>
> @NodeEntity
> public class NetworkEntity {
> @RelatedTo(type = "PEER", direction = Direction.BOTH, elementClass =
> NetworkEntity.class)
> private Set<NetworkEntity> peers;
>
> public void addPeer(NetworkEntity peer) {
> peers.add(peer);
> }
> }
>
> public class Server extends NetworkEntity {}
> public class Router extends NetworkEntity {}
> public class Client extends NetworkEntity {}
>
> Then we can build a small network:
>
> Client c = new Client().persist();
> Router r1 = new Router().persist();
> Router r21 = new Router().persist();
> Router r22 = new Router().persist();
> Router r3 = new Router().persist();
> Server s = new Server().persist();
>
> c.addPeer(r1);
> r1.addPeer(r21);
> r1.addPeer(r22);
> r21.addPeer(r3);
> r22.addPeer(r3);
> r3.addPeer(s);
>
> c.persist();
>
> Note that after linking the entities, I only call persist() on the client.
> You can read more about this in the reference documentation, but essentially
> it will cascade in the direction of the relationships created, and will in
> this case cascade all the way to the server entity.
>
> You can now query this:
>
> Iterable<EntityPath<?, ?>> paths = c.findAllPathsByTraversal(Traversal.description());
>
> The above code will get you an EntityPath per node visited during the
> traversal from c. The example does however not use a very interesting
> traversal description, but you can still print the results:
>
> for (EntityPath<?, ?> path : paths) {
> StringBuilder sb = new StringBuilder();
> Iterator<?> iter = path.nodeEntities().iterator();
> while (iter.hasNext()) {
> sb.append(iter.next());
> if (iter.hasNext()) sb.append(" -> ");
> }
> System.out.println(sb);
> }
>
> This will print each path, with all entities in the path. This is what it
> looks like:
>
> domain.Client@1
> domain.Client@1 -> domain.Router@2
> domain.Client@1 -> domain.Router@2 -> domain.Router@3
> domain.Client@1 -> domain.Router@2 -> domain.Router@3 ->
> domain.Router@5
> domain.Client@1 -> domain.Router@2 -> domain.Router@3 ->
> domain.Router@5 -> domain.Server@6
> domain.Client@1 -> domain.Router@2 -> domain.Router@3 ->
> domain.Router@5 -> domain.Router@4
>
> Let us know if this is what you looked for. If you want to only find paths
> that end with a server, you'd use this query instead:
>
> Iterable<EntityPath<?, ?>> paths =
> c.findAllPathsByTraversal(Traversal.description().evaluator(new Evaluator()
> {
> @Override
> public Evaluation evaluate(Path path) {
> if (new ConvertingEntityPath(graphDatabaseContext,
> path).endEntity() instanceof Server) {
> return Evaluation.INCLUDE_AND_PRUNE;
> }
> return Evaluation.EXCLUDE_AND_CONTINUE;
> }
> }));
>
> In the above code example, graphDatabaseContext is a bean of type
> GraphDatabaseContext created by Spring Data Graph. This syntax will
> dramatically improve in future releases. It will print:
>
> domain.Client@1 -> domain.Router@2 -> domain.Router@3 ->
> domain.Router@5 -> domain.Server@6
>
> Regarding your second question about types: If you want to convert a node
> into an entity, you would use the TypeRepresentationStrategy configured
> internally in Spring Data Graph. See the reference documentation for more
> information on this. If you want to convert Neo4j paths to entity paths, you
> can use the ConvertingEntityPath class as seen above. As an implementation
> detail, the class name is stored on the node as a property.
>
> Hope this helped!
>
> David
>
> On Wed, Apr 20, 2011 at 9:20 AM, Emil Eifrem wrote:
>
>> David / Michael, do you guys want to dig in and help out Vipul below?
>>
>> -EE
>>
>> On Wed, Apr 20, 2011 at 09:17, Vipul Gupta 
>> wrote:
>> > Hi Emil,
>> > I would like to start by thanking you for the webinar. It was very
>> useful .
>> > This is the question I asked on the webinar as well.
>> > How can we traverse a graph which consists of nodes of different types
>> using
>> > SDG?
>> > Say first Node in relationship i

Re: [Neo4j] Question from Webinar - traversing a path with nodes of different types

2011-04-20 Thread Vipul Gupta
My mistake - I meant "5" depends on both 3 and 8 and acts as a blocking
point till 3 and 8 finish.
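
With the corrected structure in mind, here is a quick sketch of how that shape could be wired up with the same NetworkEntity/Client/Router/Server classes from David's example earlier in the thread; the node numbers in the comments are illustrative, based on one reading of the ASCII diagram above.

Client c  = new Client().persist();   // 1
Router r2 = new Router().persist();   // 2
Router r3 = new Router().persist();   // 3
Router r7 = new Router().persist();   // 7
Router r8 = new Router().persist();   // 8
Router r5 = new Router().persist();   // 5 - the blocking/join point
Server s  = new Server().persist();   // 6

c.addPeer(r2);
r2.addPeer(r3);       // upper branch: 2 -> 3 -> 5
r3.addPeer(r5);
r2.addPeer(r7);       // lower branch: 2 -> 7 -> 8 -> 5
r7.addPeer(r8);
r8.addPeer(r5);
r5.addPeer(s);        // 5 joins both branches before 6

c.persist();          // cascades along the created relationships, as described above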

On Thu, Apr 21, 2011 at 11:19 AM, Vipul Gupta wrote:

> David/Michael,
>
> Let me modify the example a bit.
> What if my graph structure is like this
>
> domain.Client@1 -> domain.Router@2 -> domain.Router@3 -> domain.Router@5-> 
> domain.Server@6
>   -> domain.Router@7 -> domain.Router@8 ->
>
>
> Imagine a manufacturing line.
> 6 depends on both 3 and 8 and acts as a blocking point till 3 and 8
> finishes.
>
> Is there a way to get a cleaner traversal for such kind of relationship. I
> want to get a complete intermediate traversal from Client to Server.
>
> Thank a lot for helping out on this.
>
> Best Regards,
> Vipul
>
>
>
>
> On Thu, Apr 21, 2011 at 12:09 AM, David Montag <
> david.mon...@neotechnology.com> wrote:
>
>> Hi Vipul,
>>
>> Thanks for listening!
>>
>> It's a very good question, and the short answer is: yes! I'm cc'ing our
>> mailing list so that everyone can take part in the answer.
>>
>> Here's the long answer, illustrated by an example:
>>
>> Let's assume you're modeling a network. You'll have some domain classes
>> that are all networked entities with peers:
>>
>> @NodeEntity
>> public class NetworkEntity {
>> @RelatedTo(type = "PEER", direction = Direction.BOTH, elementClass =
>> NetworkEntity.class)
>> private Set peers;
>>
>> public void addPeer(NetworkEntity peer) {
>> peers.add(peer);
>> }
>> }
>>
>> public class Server extends NetworkEntity {}
>> public class Router extends NetworkEntity {}
>> public class Client extends NetworkEntity {}
>>
>> Then we can build a small network:
>>
>> Client c = new Client().persist();
>> Router r1 = new Router().persist();
>> Router r21 = new Router().persist();
>> Router r22 = new Router().persist();
>> Router r3 = new Router().persist();
>> Server s = new Server().persist();
>>
>> c.addPeer(r1);
>> r1.addPeer(r21);
>> r1.addPeer(r22);
>> r21.addPeer(r3);
>> r22.addPeer(r3);
>> r3.addPeer(s);
>>
>> c.persist();
>>
>> Note that after linking the entities, I only call persist() on the client.
>> You can read more about this in the reference documentation, but essentially
>> it will cascade in the direction of the relationships created, and will in
>> this case cascade all the way to the server entity.
>>
>> You can now query this:
>>
>> Iterable> paths =
>> c.findAllPathsByTraversal(Traversal.description());
>>
>> The above code will get you an EntityPath per node visited during the
>> traversal from c. The example does however not use a very interesting
>> traversal description, but you can still print the results:
>>
>> for (EntityPath path : paths) {
>> StringBuilder sb = new StringBuilder();
>> Iterator iter =
>> path.nodeEntities().iterator();
>> while (iter.hasNext()) {
>> sb.append(iter.next());
>> if (iter.hasNext()) sb.append(" -> ");
>> }
>> System.out.println(sb);
>> }
>>
>> This will print each path, with all entities in the path. This is what it
>> looks like:
>>
>> domain.Client@1
>> domain.Client@1 -> domain.Router@2
>> domain.Client@1 -> domain.Router@2 -> domain.Router@3
>> domain.Client@1 -> domain.Router@2 -> domain.Router@3 ->
>> domain.Router@5
>> domain.Client@1 -> domain.Router@2 -> domain.Router@3 ->
>> domain.Router@5 -> domain.Server@6
>> domain.Client@1 -> domain.Router@2 -> domain.Router@3 ->
>> domain.Router@5 -> domain.Router@4
>>
>> Let us know if this is what you looked for. If you want to only find paths
>> that end with a server, you'd use this query instead:
>>
>> Iterable> paths =
>> c.findAllPathsByTraversal(Traversal.description().evaluator(new Evaluator()
>> {
>> @Override
>> public Evaluation evaluate(Path path) {
>> if (new ConvertingEntityPath(graphDatabaseContext,
>> path).endEntity() instanceof Server) {
>> return Evaluation.INCLUDE_AND_PRUNE;
>> }
>> return Evaluation.EXCLUDE_AND_CONTINUE;
>> }
>> }));
>>
>> In the above code example, graphDatabaseContext is a bean of type
>> GraphDatabaseContext created by Spring Data Graph. This syntax will
>> dramatically improve in future releases. It will print:
>>
>> domain.Client@1 -> domain.Router@2 -> domain.Router@3 ->
>> domain.Router@5 -> domain.Server@6
>>
>> Regarding your second question about types: If you want to convert a node
>> into an entity, you would use the TypeRepresentationStrategy configured
>> internally in Spring Data Graph. See the reference documentation for more
>> information on this. If you want to convert Neo4j paths to entity paths, you
>> can use the ConvertingEntityPath class as seen above. As an implementation
>> detail, the class name is stored on the node as a property.
>>
>> Hope this helped!
>>
>> David
>>
>> On Wed, Apr 20, 2011 at 9:20 AM, Emil Eifrem wrote:
>>
>>> David / Michael, do you guys want to dig in and help out Vipul below?
>>>
>>> -EE
>>>
>>> On Wed, Apr 20, 201

Re: [Neo4j] Question from Webinar - traversing a path with nodes of different types

2011-04-20 Thread David Montag
Hi Vipul,

Zooming out a little bit, what are the inputs to your algorithm, and what do
you want it to do?

For example, given 1 and 6, do you want to find any points in the chain
between them that are join points of two (or more) subchains (5 in this
case)?

David
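
One way to answer that question directly on the underlying graph is sketched below, against the plain Neo4j traversal API rather than Spring Data Graph: collect the nodes between client and server that have more than one incoming PEER relationship. This assumes the PEER relationships all point from the client side towards the server, as in the persist-cascade discussion earlier in the thread; the JoinPointFinder class itself is illustrative only.

import org.neo4j.graphdb.Direction;
import org.neo4j.graphdb.DynamicRelationshipType;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Path;
import org.neo4j.graphdb.Relationship;
import org.neo4j.graphdb.RelationshipType;
import org.neo4j.kernel.Traversal;

import java.util.LinkedHashSet;
import java.util.Set;

public class JoinPointFinder {

    private static final RelationshipType PEER = DynamicRelationshipType.withName("PEER");

    // Returns the nodes reachable from 'from' (other than 'from' and 'to') that have
    // more than one incoming PEER relationship, i.e. the points where two or more
    // sub-chains join - node 5 in the example being discussed.
    public static Set<Node> findJoinPoints(Node from, Node to) {
        Set<Node> joinPoints = new LinkedHashSet<Node>();
        for (Path path : Traversal.description()
                .relationships(PEER, Direction.OUTGOING)
                .traverse(from)) {
            Node candidate = path.endNode();
            if (candidate.equals(from) || candidate.equals(to)) {
                continue; // only interested in intermediate nodes
            }
            int incoming = 0;
            for (Relationship ignored : candidate.getRelationships(PEER, Direction.INCOMING)) {
                incoming++;
            }
            if (incoming > 1) {
                joinPoints.add(candidate);
            }
        }
        return joinPoints;
    }
}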

On Wed, Apr 20, 2011 at 10:56 PM, Vipul Gupta wrote:

> my mistake - I meant "5" depends on both 3 and 8 and acts as a blocking
> point till 3 and 8 finishes
>
>
> On Thu, Apr 21, 2011 at 11:19 AM, Vipul Gupta wrote:
>
>> David/Michael,
>>
>> Let me modify the example a bit.
>> What if my graph structure is like this
>>
>> domain.Client@1 -> domain.Router@2 -> domain.Router@3 -> domain.Router@5-> 
>> domain.Server@6
>>   -> domain.Router@7 -> domain.Router@8 ->
>>
>>
>> Imagine a manufacturing line.
>> 6 depends on both 3 and 8 and acts as a blocking point till 3 and 8
>> finishes.
>>
>> Is there a way to get a cleaner traversal for such kind of relationship. I
>> want to get a complete intermediate traversal from Client to Server.
>>
>> Thank a lot for helping out on this.
>>
>> Best Regards,
>> Vipul
>>
>>
>>
>>
>> On Thu, Apr 21, 2011 at 12:09 AM, David Montag <
>> david.mon...@neotechnology.com> wrote:
>>
>>> Hi Vipul,
>>>
>>> Thanks for listening!
>>>
>>> It's a very good question, and the short answer is: yes! I'm cc'ing our
>>> mailing list so that everyone can take part in the answer.
>>>
>>> Here's the long answer, illustrated by an example:
>>>
>>> Let's assume you're modeling a network. You'll have some domain classes
>>> that are all networked entities with peers:
>>>
>>> @NodeEntity
>>> public class NetworkEntity {
>>> @RelatedTo(type = "PEER", direction = Direction.BOTH, elementClass =
>>> NetworkEntity.class)
>>> private Set peers;
>>>
>>> public void addPeer(NetworkEntity peer) {
>>> peers.add(peer);
>>> }
>>> }
>>>
>>> public class Server extends NetworkEntity {}
>>> public class Router extends NetworkEntity {}
>>> public class Client extends NetworkEntity {}
>>>
>>> Then we can build a small network:
>>>
>>> Client c = new Client().persist();
>>> Router r1 = new Router().persist();
>>> Router r21 = new Router().persist();
>>> Router r22 = new Router().persist();
>>> Router r3 = new Router().persist();
>>> Server s = new Server().persist();
>>>
>>> c.addPeer(r1);
>>> r1.addPeer(r21);
>>> r1.addPeer(r22);
>>> r21.addPeer(r3);
>>> r22.addPeer(r3);
>>> r3.addPeer(s);
>>>
>>> c.persist();
>>>
>>> Note that after linking the entities, I only call persist() on the
>>> client. You can read more about this in the reference documentation, but
>>> essentially it will cascade in the direction of the relationships created,
>>> and will in this case cascade all the way to the server entity.
>>>
>>> You can now query this:
>>>
>>> Iterable> paths =
>>> c.findAllPathsByTraversal(Traversal.description());
>>>
>>> The above code will get you an EntityPath per node visited during the
>>> traversal from c. The example does however not use a very interesting
>>> traversal description, but you can still print the results:
>>>
>>> for (EntityPath path : paths) {
>>> StringBuilder sb = new StringBuilder();
>>> Iterator iter =
>>> path.nodeEntities().iterator();
>>> while (iter.hasNext()) {
>>> sb.append(iter.next());
>>> if (iter.hasNext()) sb.append(" -> ");
>>> }
>>> System.out.println(sb);
>>> }
>>>
>>> This will print each path, with all entities in the path. This is what it
>>> looks like:
>>>
>>> domain.Client@1
>>> domain.Client@1 -> domain.Router@2
>>> domain.Client@1 -> domain.Router@2 -> domain.Router@3
>>> domain.Client@1 -> domain.Router@2 -> domain.Router@3 ->
>>> domain.Router@5
>>> domain.Client@1 -> domain.Router@2 -> domain.Router@3 ->
>>> domain.Router@5 -> domain.Server@6
>>> domain.Client@1 -> domain.Router@2 -> domain.Router@3 ->
>>> domain.Router@5 -> domain.Router@4
>>>
>>> Let us know if this is what you looked for. If you want to only find
>>> paths that end with a server, you'd use this query instead:
>>>
>>> Iterable> paths =
>>> c.findAllPathsByTraversal(Traversal.description().evaluator(new Evaluator()
>>> {
>>> @Override
>>> public Evaluation evaluate(Path path) {
>>> if (new ConvertingEntityPath(graphDatabaseContext,
>>> path).endEntity() instanceof Server) {
>>> return Evaluation.INCLUDE_AND_PRUNE;
>>> }
>>> return Evaluation.EXCLUDE_AND_CONTINUE;
>>> }
>>> }));
>>>
>>> In the above code example, graphDatabaseContext is a bean of type
>>> GraphDatabaseContext created by Spring Data Graph. This syntax will
>>> dramatically improve in future releases. It will print:
>>>
>>> domain.Client@1 -> domain.Router@2 -> domain.Router@3 ->
>>> domain.Router@5 -> domain.Server@6
>>>
>>> Regarding your second question about types: If you want to convert a node
>>> into an entity, you would use the TypeRepresentationStrategy configured
>>> internally in Spring Data Graph. See 

Re: [Neo4j] Question from Webinar - traversing a path with nodes of different types

2011-04-20 Thread Vipul Gupta
Hi David,

Inputs are 1 and 6, and the graph is acyclic.

domain.Client@1 -> domain.Router@2 -> domain.Router@3 -> domain.Router@5 -> domain.Server@6
                                   -> domain.Router@7 -> domain.Router@8 ->

I want a way to start from 1,

process the path through 2 till it reaches 5 (say in one thread),
process the path through 7 till it reaches 5 (in another thread),

then process 5 and eventually 6.
This step of processing an intermediate path and waiting at the blocking
point can happen over and over again in a more complex graph (there could
even be a number of loops in between), and the traversal stops only when
we reach 6.

I hope this makes it a bit clearer. I was working out something for this, but
it is turning out to be too complex a solution for this sort of graph
traversal, so I am hoping you can suggest something.

Best Regards,
Vipul
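
This is not something Spring Data Graph provides out of the box, but below is a rough sketch of the kind of scheduling described above, written against the plain Neo4j API: each node is handed to a worker thread as soon as all of its predecessors have finished, so the two branches run in parallel and a join point like 5 waits for both 3 and 8. It assumes an acyclic graph whose PEER relationships point from client towards server; ParallelChainProcessor and the process() callback are illustrative placeholders for the application-specific work.

import org.neo4j.graphdb.Direction;
import org.neo4j.graphdb.DynamicRelationshipType;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Relationship;
import org.neo4j.graphdb.RelationshipType;

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public abstract class ParallelChainProcessor {

    private static final RelationshipType PEER = DynamicRelationshipType.withName("PEER");

    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final ConcurrentMap<Long, AtomicInteger> remainingPredecessors =
            new ConcurrentHashMap<Long, AtomicInteger>();

    // Application-specific work for one node - left abstract in this sketch.
    protected abstract void process(Node node);

    // Kicks off processing at the client node.
    public void start(Node client) {
        submit(client);
    }

    private void submit(final Node node) {
        pool.submit(new Runnable() {
            public void run() {
                process(node);
                // Once this node is done, release each successor whose other
                // predecessors have also finished.
                for (Relationship rel : node.getRelationships(PEER, Direction.OUTGOING)) {
                    Node next = rel.getEndNode();
                    if (remaining(next).decrementAndGet() == 0) {
                        submit(next);
                    }
                }
            }
        });
    }

    // Lazily counts how many incoming PEER relationships a node still has to wait for.
    private AtomicInteger remaining(Node node) {
        AtomicInteger counter = remainingPredecessors.get(node.getId());
        if (counter == null) {
            int incoming = 0;
            for (Relationship ignored : node.getRelationships(PEER, Direction.INCOMING)) {
                incoming++;
            }
            AtomicInteger fresh = new AtomicInteger(incoming);
            counter = remainingPredecessors.putIfAbsent(node.getId(), fresh);
            if (counter == null) {
                counter = fresh;
            }
        }
        return counter;
    }
}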


On Thu, Apr 21, 2011 at 11:36 AM, David Montag <
david.mon...@neotechnology.com> wrote:

> Hi Vipul,
>
> Zooming out a little bit, what are the inputs to your algorithm, and what
> do you want it to do?
>
> For example, given 1 and 6, do you want to find any points in the chain
> between them that are join points of two (or more) subchains (5 in this
> case)?
>
> David
>
>
> On Wed, Apr 20, 2011 at 10:56 PM, Vipul Gupta wrote:
>
>> my mistake - I meant "5" depends on both 3 and 8 and acts as a blocking
>> point till 3 and 8 finishes
>>
>>
>> On Thu, Apr 21, 2011 at 11:19 AM, Vipul Gupta wrote:
>>
>>> David/Michael,
>>>
>>> Let me modify the example a bit.
>>> What if my graph structure is like this
>>>
>>> domain.Client@1 -> domain.Router@2 -> domain.Router@3 -> domain.Router@5-> 
>>> domain.Server@6
>>>   -> domain.Router@7 -> domain.Router@8 ->
>>>
>>>
>>> Imagine a manufacturing line.
>>> 6 depends on both 3 and 8 and acts as a blocking point till 3 and 8
>>> finishes.
>>>
>>> Is there a way to get a cleaner traversal for such kind of relationship. I
>>> want to get a complete intermediate traversal from Client to Server.
>>>
>>> Thank a lot for helping out on this.
>>>
>>> Best Regards,
>>> Vipul
>>>
>>>
>>>
>>>
>>> On Thu, Apr 21, 2011 at 12:09 AM, David Montag <
>>> david.mon...@neotechnology.com> wrote:
>>>
 Hi Vipul,

 Thanks for listening!

 It's a very good question, and the short answer is: yes! I'm cc'ing our
 mailing list so that everyone can take part in the answer.

 Here's the long answer, illustrated by an example:

 Let's assume you're modeling a network. You'll have some domain classes
 that are all networked entities with peers:

 @NodeEntity
 public class NetworkEntity {
 @RelatedTo(type = "PEER", direction = Direction.BOTH, elementClass =
 NetworkEntity.class)
 private Set peers;

 public void addPeer(NetworkEntity peer) {
 peers.add(peer);
 }
 }

 public class Server extends NetworkEntity {}
 public class Router extends NetworkEntity {}
 public class Client extends NetworkEntity {}

 Then we can build a small network:

 Client c = new Client().persist();
 Router r1 = new Router().persist();
 Router r21 = new Router().persist();
 Router r22 = new Router().persist();
 Router r3 = new Router().persist();
 Server s = new Server().persist();

 c.addPeer(r1);
 r1.addPeer(r21);
 r1.addPeer(r22);
 r21.addPeer(r3);
 r22.addPeer(r3);
 r3.addPeer(s);

 c.persist();

 Note that after linking the entities, I only call persist() on the
 client. You can read more about this in the reference documentation, but
 essentially it will cascade in the direction of the relationships created,
 and will in this case cascade all the way to the server entity.

 You can now query this:

 Iterable> paths =
 c.findAllPathsByTraversal(Traversal.description());

 The above code will get you an EntityPath per node visited during the
 traversal from c. The example does however not use a very interesting
 traversal description, but you can still print the results:

 for (EntityPath path : paths) {
 StringBuilder sb = new StringBuilder();
 Iterator iter =
 path.nodeEntities().iterator();
 while (iter.hasNext()) {
 sb.append(iter.next());
 if (iter.hasNext()) sb.append(" -> ");
 }
 System.out.println(sb);
 }

 This will print each path, with all entities in the path. This is what
 it looks like:

 domain.Client@1
 domain.Client@1 -> domain.Router@2
 domain.Client@1 -> domain.Router@2 -> domain.Router@3
 domain.Client@1 -> domain.Router@2 -> domain.Router@3 ->
 domain.Router@5
 domain.Client@1 -> domain.Router@2 -> domain.Router@3 ->
 domain.Router@5 -> domain.Server@6
 domain.Client@1 -> domain.Router@2 -> domain.Router@3 ->
 domain.Router@5 -> domain.Rou