I support this proposal: a great idea, and something that has been missing in
the Spark world. I'm a data architect working primarily in banking, with many
years of experience designing and tuning relational database systems and, more
recently, Big Data solutions, often involving the integration of old and new
technologies.

The graph database model is becoming increasingly recognised and present in
the world of finance. The idea of being able to take a property graph view
over DataFrames and run graph queries makes a lot of sense from an integration
point of view, since we want to use graph databases/services alongside
existing investments in the Spark ecosystem (typically deployed on Hadoop
clusters, typically implementing relational workloads). I can see
relational-meets-graph use cases in my world: dependency graphs for
completeness/availability calculations; metadata and data management where
enterprise architecture meets BCBS 239 (taxonomy, provenance, lineage); and of
course unauthorised trading, fraud detection, and the like.

An additional bonus is that Cypher seems like a good choice, given its spread
beyond Neo4j and its contribution to the future ISO-standard Graph Query
Language (GQL).
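
To make the "property graph view over DataFrames" idea concrete, here is a
minimal sketch of the sort of query I have in mind, written against the
existing GraphFrames library rather than the API being proposed; the
account/transfer schema, column names, and the two-hop transfer pattern are
purely illustrative assumptions on my part.

import org.apache.spark.sql.SparkSession
import org.graphframes.GraphFrame

object PropertyGraphSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("property-graph-over-dataframes")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Vertex DataFrame: one row per account; GraphFrames expects an "id" column.
    val accounts = Seq(
      ("acc1", "retail"),
      ("acc2", "corporate"),
      ("acc3", "retail")
    ).toDF("id", "segment")

    // Edge DataFrame: one row per transfer; GraphFrames expects "src" and "dst".
    val transfers = Seq(
      ("acc1", "acc2", 12000.0),
      ("acc2", "acc3", 11500.0)
    ).toDF("src", "dst", "amount")

    // The property graph "view" is just the pair of DataFrames.
    val g = GraphFrame(accounts, transfers)

    // Motif query: two-hop chains of large transfers, a crude stand-in for the
    // kind of pattern (layering, unauthorised flows) one would write in Cypher.
    g.find("(a)-[t1]->(b); (b)-[t2]->(c)")
      .filter($"t1.amount" > 10000 && $"t2.amount" > 10000)
      .select(
        $"a.id".as("from_account"),
        $"b.id".as("via_account"),
        $"c.id".as("to_account"),
        $"t1.amount".as("first_amount"),
        $"t2.amount".as("second_amount"))
      .show()

    spark.stop()
  }
}

Under the proposal, I'd expect the same pattern to be expressible as a Cypher
MATCH over the same two DataFrames, which is exactly the integration appeal
for us.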


