While considering how to implement a domain-specific web crawler, 
I've come across a number of technologies, but I had an idea to implement 
it as a server extension in Neo4j.

The idea would be to use the graph database to implement the concepts of 
"already explored pages" and "frontier" as server-side algorithms and use 
them to feed the crawling algorithm, but, as you can see, you could go a 
step further and implement the crawling on the server side too. Could this 
be a bad idea? If so, why?
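To make the idea concrete, here is a minimal Cypher sketch of the two concepts, assuming pages are stored as `Page` nodes with a `url` property, links as `LINKS_TO` relationships, and a `visited` flag; all of these names are hypothetical, chosen only for illustration:

```cypher
// After fetching a page, mark it visited and record its outgoing links.
// $url and $outlinks would be passed in as parameters by the crawler.
MERGE (p:Page {url: $url})
SET p.visited = true
WITH p
UNWIND $outlinks AS link
MERGE (q:Page {url: link})
MERGE (p)-[:LINKS_TO]->(q);

// The frontier then falls out of the graph for free: every known
// page that has not yet been visited.
MATCH (q:Page)
WHERE coalesce(q.visited, false) = false
RETURN q.url
LIMIT 100;
```

Under this scheme the "already explored" set and the frontier are just two views of the same graph, which is part of the appeal; whether the fetching itself should also live in the server is a separate question.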

-- 
You received this message because you are subscribed to the Google Groups 
"Neo4j" group.