Hi Florian,

You actually can construct URIs for each solution by using the SPARQL
1.1 iri() function [1]:

CONSTRUCT {
   ?a ?b ?uri .
   ?uri ?c ?d .
   ?uri ?e ?f .
}
WHERE {
   ...
   BIND( iri( ... ) AS ?uri )
   ...
}

Unfortunately (and someone correct me if I'm wrong), I think you can
basically only build one by concatenating other variables or literals,
or by applying a hash function to them.  It would be nice if SPARQL
supported a UUID function to mint unique URIs, so you could do
something like:

BIND( iri( concat( "urn:uuid:", UUID() ) ) AS ?uri )  =>
<urn:uuid:efa77ac0-36fa-11e1-b86c-0800200c9a66>.
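
In the meantime, something along these lines should already work with
the stock SPARQL 1.1 built-ins; the http://example.org/conf/ namespace
and the choice of variables to hash are only placeholders, so adapt
them to your data:

CONSTRUCT {
   ?a ?b ?uri .
   ?uri ?c ?d .
   ?uri ?e ?f .
}
WHERE {
   ...
   # hash the values that identify a solution and append the digest
   BIND( iri( concat( "http://example.org/conf/",
                      sha1( concat( str(?a), str(?d), str(?f) ) ) ) ) AS ?uri )
   ...
}

Note that, unlike a blank node in the template, this mints the same URI
whenever two solutions agree on the hashed variables, which may or may
not be what you want.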

I don't know why the working group decided not to add this; maybe Andy
has a little more insight.  It's possible that blank nodes were
thought to be sufficient for this kind of task.  It wouldn't be hard
to add to Jena as a custom function extension, however.
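
And if blank nodes are good enough, SPARQL 1.1 already has a bnode()
function that returns a distinct blank node for each solution, so a
minimal sketch along these lines should behave just like the _:conf
template trick in your query below:

CONSTRUCT {
   ?a ?b ?node .
   ?node ?c ?d .
   ?node ?e ?f .
}
WHERE {
   ...
   # bnode() with no arguments yields a fresh blank node per solution
   BIND( bnode() AS ?node )
   ...
}

It still doesn't give you URIs, of course, just a second way to get
per-solution blank nodes.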

-Stephen

[1] http://www.w3.org/TR/sparql11-query/#func-iri


On Wed, Jan 4, 2012 at 1:45 AM, Florian Beuter <[email protected]> wrote:
> Hello Andy,
>
> thanks for your response; this should help me solve the problem by
> writing a utility method that restores my original blank node IDs so I can
> use them in the N-Triples format.
>
> For the reason I'm using blank nodes:
> The RDF graph I'm querying is the result of a complex Construct query
> containing a passage like:
>
> CONSTRUCT {
> ...
> ?a        ?b     _:conf .
> _:conf    ?c     ?d .
> _:conf    ?e     ?f .
> ...
> }
> WHERE ...
>
> This is the only way I found to create a new RDF node for each solution of
> the query. I would have liked to do this with URIs instead, but obviously
> it's impossible to assign a new URI to each solution automatically. Blank
> nodes seem to be the only way to achieve this.
>
> The resulting graph is serialized as N-Triples and then queried again (with
> a Select) so that I can bring the results into a sorted order and manually
> add some triples that specify the order as integer values, because I need
> this enumeration for another application that works on this data later on.
>
>
> On 03.01.2012 23:03, Andy Seaborne wrote:
>
>> On 03/01/12 16:57, Florian Beuter wrote:
>>>
>>> Hello all,
>>>
>>> I'm using Jena to process SPARQL queries and have a weird problem
>>> regarding blank nodes in the result of a Select query. My RDF graph
>>> contains blank nodes written like this:
>>> _:A10b7f677X3aX134a46e2d15X3aXX2dX7f6b
>>
>>
>> Serialized to N-Triples like that ... it's the internal id, converted to a
>> string but the label is really scoped to the file (if you read it in again,
>> you will get different bnodes).
>>
>>> And I want to retrieve this in the same way as a result of my Select
>>> query. Unfortunately I'm getting something different like:
>>> 10b7f677:134a46e2d15:-7f5b
>>
>>
>> .. this is the internal label.
>>
>> N-Triples does not allow:
>> 1/ a leading digit, so an "A" is added
>> 2/ ":" or "-", so these are encoded as "X" plus the hex.  X3a is ":"
>>
>>> I retrieve an RDFNode from the QuerySolution and make sure that in case
>>> of a blank node (by calling RDFNode.isAnon()) it is converted to a
>>> Resource. Now I can access the AnonId by calling Resource.getId(), which
>>> offers two ways to return a String representation of it:
>>> AnonId.toString() and AnonId.getLabelString().
>>> Both methods return 10b7f677:134a46e2d15:-7f5b in my case. Shouldn't one
>>> of them return the other representation
>>> _:A10b7f677X3aX134a46e2d15X3aXX2dX7f6b instead or did I get something
>>> wrong? Or is there another way to access this String representation of
>>> my blank nodes?
>>
>>
>> You can access blank nodes directly but first think about whether the app
>> really should be doing that.   They are blank and have no (real) name.  Yes,
>> Jena gives them internal ids, but they are internal, not URIs.
>>
>> You can access them in SPARQL when querying a local graph by using the
>> pseudo-URI scheme "_:" followed by the internal label
>>
>> <_:10b7f677:134a46e2d15:-7f5b>
>>
>> but it only works for local queries.  You have to get the server to
>> cooperate for it to work remotely and so it isn't portable.
>>
>>    Andy
>>
>>>
>>> Best regards,
>>>
>>> Florian Beuter
>>
>>
>>
>
