Hi Tomas,

I'm really not an RDF processing expert, but since nobody has responded in
four days, I'll try to give you some pointers:
I know there have been discussions regarding RDF processing on this
mailing list before. Check out this one, for example:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Accessing-RDF-triples-using-Flink-td5935.html

Also, there seems to be a project that uses Flink:
http://sansa-stack.net/user-guide/
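
Just to make the general idea concrete: one simple way to get RDF into Flink is to read an N-Triples file line by line with readTextFile() and split each line into a (subject, predicate, object) tuple inside a map function. The parsing below is a deliberately naive sketch of mine, not something from the Flink docs -- real data with escaped literals should go through a proper parser such as Apache Jena's RIOT. It only shows the per-line logic you would put into a Flink MapFunction:

```java
// Hypothetical sketch: the per-line parsing you could run inside a Flink
// MapFunction after env.readTextFile("input.nt"), turning each N-Triples
// line into a 3-element array (subject, predicate, object).
// NOT a full N-Triples parser -- for production use Apache Jena's RIOT.
public class NTriplesSketch {

    /** Splits one N-Triples line into subject, predicate, and object. */
    public static String[] parseTriple(String line) {
        String trimmed = line.trim();
        // Drop the terminating " ." of the N-Triples statement.
        if (trimmed.endsWith(".")) {
            trimmed = trimmed.substring(0, trimmed.length() - 1).trim();
        }
        // Subject and predicate contain no whitespace; everything after
        // the second separator is the object (which may contain spaces).
        return trimmed.split("\\s+", 3);
    }

    public static void main(String[] args) {
        String[] t = parseTriple(
            "<http://example.org/s> <http://example.org/p> <http://example.org/o> .");
        System.out.println(t[0] + " | " + t[1] + " | " + t[2]);
    }
}
```

From there you would have a DataSet of triples to filter, join, or group with the usual Flink operators. How far that gets you toward full SPARQL support is a different question -- that's where the SANSA project above may help.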


Regards,
Robert


On Tue, Nov 15, 2016 at 3:47 PM, Tomas Knap <t.k...@semantic-web.at> wrote:

> Good afternoon,
>
> Currently we are using UnifiedViews (unifiedviews.eu) for RDF data
> processing. In UnifiedViews you may define various RDF data processing
> tasks, e.g.: 1) extract data from a certain SPARQL endpoint A, 2) extract
> data from a certain folder and convert it to RDF, 3) merge the RDF data
> output by these two sources, 4) execute a series of SPARQL Update queries
> on top of that, 5) load the result into the target repository.
>
> We are thinking about using Apache Flink as a backend for executing such
> RDF data processing tasks (to scale out). But what is still not clear to
> us is how tasks processing RDF data (like the one above) can be supported
> in Flink. For example, how would you read RDF data, and how would you
> support Flink plugins that execute SPARQL Update/CONSTRUCT queries?
>
> If you could share with us some materials regarding processing of RDF
> data in Flink, support for SPARQL (Update) queries in Flink, etc., that
> would be great and would help us decide on our future steps!
>
> Thanks,
> Tomas Knap
>
>
>
> --
> *Tomas Knap, PhD*
> Technical Consultant & Researcher
>
> Semantic Web Company GmbH
> EU: +43-1-4021235
> US: (415) 800-3776
> http://www.semantic-web.at
> http://www.poolparty.biz
>
>
>
>
