Re: getValue() of XMLLiteral throws DatatypeFormatException

2013-07-09 Thread Joshua TAYLOR
On Tue, Jul 9, 2013 at 8:01 PM, Joshua TAYLOR  wrote:
> Hi all,  I'm trying to create an XMLLiteral and get the value from it.
>  (This is a reduced version of a larger problem, where I'm reading a
> model that has some XMLLiterals, and I need to do some work with
> them.)  It appears that getValue() throws DatatypeFormatException
> when this happens, though.  Here's code that illustrates the problem:
>
>
> import com.hp.hpl.jena.datatypes.xsd.impl.XMLLiteralType;
> import com.hp.hpl.jena.rdf.model.Literal;
> import com.hp.hpl.jena.rdf.model.Model;
> import com.hp.hpl.jena.rdf.model.ModelFactory;
> import com.hp.hpl.jena.vocabulary.RDF;
>
> public class ReadXMLLiterals {
>
> public static void main( String[] args ) {
> // Some XML content with newlines.
> final String xmlContent = "<OMOBJ xmlns='http://www.openmath.org/OpenMath'\n" +
>   "   cdbase='http://www.openmath.org/cd'>\n" +
>   "</OMOBJ>";
>
> // Create a model, and an XMLLiteral with the given content, and
> // add a triple [] rdf:value xmlliteral so that something can be shown in the
> // model, and to demonstrate that the literal can be created without error.
> // Print the model.
> System.out.println( "=" );
> final Model model = ModelFactory.createDefaultModel();
> final Literal xmlLiteral = model.createTypedLiteral( xmlContent,
> XMLLiteralType.theXMLLiteralType );
> model.createResource().addProperty( RDF.value, xmlLiteral );
> model.write( System.out, "N3" );
>
> // Try to get the value of the literal and watch things fall apart.
> // The error is that "Lexical form ... is not a legal instance of
> // Datatype[http://www.w3.org/1999/02/22-rdf-syntax-ns#XMLLiteral] Bad rdf:XMLLiteral."
> // How come there was no complaint when the literal was created? And what's the
> // problem with the lexical form?
> System.out.println( "-" );
> try {
> xmlLiteral.getValue();
> }
> catch ( Exception e ) {
> e.printStackTrace( System.out );
> }
> }
> }
>
>
> The output shows that the model can be created (and thus that the
> XMLLiteral can be created) and serialized (showing the proper
> datatype).  However, when trying to use Literal#getValue to get a
> value from the XMLLiteral, things explode:
>
>
> =
> []
>   <http://www.w3.org/1999/02/22-rdf-syntax-ns#value> """<OMOBJ xmlns='http://www.openmath.org/OpenMath'
>    cdbase='http://www.openmath.org/cd'>
> </OMOBJ>"""^^<http://www.w3.org/1999/02/22-rdf-syntax-ns#XMLLiteral> .
> -
> com.hp.hpl.jena.datatypes.DatatypeFormatException: Lexical form
> '<OMOBJ xmlns='http://www.openmath.org/OpenMath'
>    cdbase='http://www.openmath.org/cd'>
> </OMOBJ>' is not a legal instance of
> Datatype[http://www.w3.org/1999/02/22-rdf-syntax-ns#XMLLiteral] Bad
> rdf:XMLLiteral
> at com.hp.hpl.jena.graph.impl.LiteralLabelImpl.getValue(LiteralLabelImpl.java:324)
> at com.hp.hpl.jena.graph.Node_Literal.getLiteralValue(Node_Literal.java:39)
> at com.hp.hpl.jena.rdf.model.impl.LiteralImpl.getValue(LiteralImpl.java:98)
> at ReadXMLLiterals.main(ReadXMLLiterals.java:33)
>
>
> Is something actually malformed with the lexical form?
> And if it is, should it have been caught when the literal was created?
>
> --
> Joshua Taylor, http://www.cs.rpi.edu/~tayloj/

OK, the last post was a first step toward trying to track down a
mysterious RIOT warning that I was getting in some circumstances
involving an XMLLiteral.  It seems that the warning is only triggered
if certain things have happened with the model beforehand.  Here's a
minimal working example:


import java.io.ByteArrayInputStream;

import com.hp.hpl.jena.datatypes.xsd.impl.XMLLiteralType;
import com.hp.hpl.jena.rdf.model.Literal;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.sparql.expr.NodeValue;

public class ReadXMLLiterals2 {
// Some N3 content that contains a resource :test with an
// rdf:value that is an XMLLiteral.
final static String n3Content =
    "@prefix :  <urn:ex:> .\n" +   // placeholder prefix URI
    "@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .\n" +
    ":test rdf:value \"\"\"\n" +
    "<OMOBJ xmlns=\"http://www.openmath.org/OpenMath\"\n" +
    "   version=\"2.0\"\n" +
    "   cdbase=\"http://www.openmath.org/cd\">\n" +
    "</OMOBJ>\n\"\"\"^^rdf:XMLLiteral .";

// Demo creates a model and reads the N3 content into it, and
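
An aside on the earlier question ("How come there was no complaint when
the literal was created?"): Jena validates typed literals lazily by
default.  A minimal sketch of turning eager validation on, assuming the
JenaParameters flag behaves as documented:

import com.hp.hpl.jena.shared.impl.JenaParameters;

// With eager validation enabled, createTypedLiteral should raise the
// DatatypeFormatException at creation time rather than at getValue().
JenaParameters.enableEagerLiteralValidation = true;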



SPARQL interaction of OPTIONAL, FILTER, and BIND

2013-07-09 Thread Joshua TAYLOR
This is completely unrelated to any recent questions about the rule
engine and builtins. :)  I've got this data:


@prefix : <urn:ex:> .   # placeholder prefix URI
:a :atIndex 0 .
:b :atIndex 1 .
:c :atIndex 2 .
:d :atIndex 3 .


Here's a query that finds an ?element at an ?index, binds ?index2 to
?index - 1, and optionally selects an ?element2 that is at ?index2.


prefix : <urn:ex:>   # placeholder prefix URI, matching the data
select ?element ?index ?element2 ?index2 where {
  ?element :atIndex ?index .
  BIND( ?index - 1 as ?index2 )
  OPTIONAL {
?element2 :atIndex ?index2 .
  }
}
order by ?index


The results are as expected:


$ arq --data data.n3 --query query.sparql
---------------------------------------
| element | index | element2 | index2 |
=======================================
| :a      | 0     |          | -1     |
| :b      | 1     | :a       | 0      |
| :c      | 2     | :b       | 1      |
| :d      | 3     | :c       | 2      |
---------------------------------------


Moving the BIND into the OPTIONAL, i.e.,


prefix : <urn:ex:>   # placeholder prefix URI, matching the data
select ?element ?index ?element2 ?index2 where {
  ?element :atIndex ?index .
  OPTIONAL {
BIND( ?index - 1 as ?index2 )
?element2 :atIndex ?index2 .
  }
}
order by ?index


produces lots more results;  it seems that there's no constraint on
the ?index within the OPTIONAL, so ?index2 can range over everything.

$ arq --data data.n3 --query query.sparql
---------------------------------------
| element | index | element2 | index2 |
=======================================
| :a      | 0     | :a       | 0      |
| :a      | 0     | :b       | 1      |
| :a      | 0     | :c       | 2      |
| :a      | 0     | :d       | 3      |
| :b      | 1     | :a       | 0      |
| :b      | 1     | :b       | 1      |
| :b      | 1     | :c       | 2      |
| :b      | 1     | :d       | 3      |
| :c      | 2     | :a       | 0      |
| :c      | 2     | :b       | 1      |
| :c      | 2     | :c       | 2      |
| :c      | 2     | :d       | 3      |
| :d      | 3     | :a       | 0      |
| :d      | 3     | :b       | 1      |
| :d      | 3     | :c       | 2      |
| :d      | 3     | :d       | 3      |
---------------------------------------


Now, using a FILTER in the OPTIONAL:


prefix : <urn:ex:>   # placeholder prefix URI, matching the data

select ?element ?index ?element2 ?index2 where {
  ?element :atIndex ?index .
  OPTIONAL {
FILTER( ?index - 1 = ?index2 )
?element2 :atIndex ?index2 .
  }
}
order by ?index


produces results that actually show that ?index2 is constrained, and
they are almost the same as those of the first query (except that the
row where ?index2 is -1 doesn't occur):


$ arq --data data.n3 --query query.sparql
---------------------------------------
| element | index | element2 | index2 |
=======================================
| :a      | 0     |          |        |
| :b      | 1     | :a       | 0      |
| :c      | 2     | :b       | 1      |
| :d      | 3     | :c       | 2      |
---------------------------------------


Is this expected?  Why can't the BIND inside the OPTIONAL reference
the outer binding of ?index?  In the spec, I do read about the
difference in scope between, e.g., FILTER NOT EXISTS { ?n ... } and
MINUS { ?n ... }, where in the former, ?n can refer to an outer
binding, but in the latter, it cannot.  I think that 18.2.1 Variable
Scope [1] is probably relevant here, but I'm not sure how to get an
answer from it.
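
For reference, the translation to SPARQL algebra (printable with ARQ's
qparse --print=op; shown roughly here, since ARQ may print an optimized
form) seems to explain the behavior.  For the second query it should be
approximately:

(leftjoin
  (bgp (triple ?element :atIndex ?index))
  (join
    (extend ((?index2 (- ?index 1)))
      (table unit))
    (bgp (triple ?element2 :atIndex ?index2))))

The extend sits over the unit table, where ?index is unbound, so the
expression errors and ?index2 stays unbound.  In the FILTER variant the
filter instead becomes the leftjoin condition, which is evaluated with
the left-hand bindings in scope, so ?index is visible there.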

Thanks in advance!
//JT

[1] http://www.w3.org/TR/2013/REC-sparql11-query-20130321/#variableScope

-- 
Joshua Taylor, http://www.cs.rpi.edu/~tayloj/


Re: Default and named graphs within TDB/Fuseki

2013-07-09 Thread Andy Seaborne

On 09/07/13 14:50, Rob Walpole wrote:

Hi,

I could use some help to understand default and named graphs within Jena
(TDB/Fuseki)...

In the following example, taken from the W3C SPARQL 1.1 spec, what needs to
go in place of...

FROM <http://example.org/dft.ttl>

...assuming I have loaded dft.ttl into the 'default' graph of TDB and am
querying via Fuseki?

PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX dc: <http://purl.org/dc/elements/1.1/>

SELECT ?who ?g ?mbox
FROM <http://example.org/dft.ttl>
FROM NAMED <http://example.org/alice>
FROM NAMED <http://example.org/bob>


Those FROM/FROM NAMED clauses load data for the purposes of the query ...
you already have the data in the dataset.  Remove them.


Did you load into <http://example.org/bob>?


WHERE
{
?g dc:publisher ?who .
GRAPH ?g { ?x foaf:mbox ?mbox }
}



Run this:

SELECT * {
  { ?s ?p ?o } UNION { GRAPH ?g { ?s ?p ?o } }
}

to see what you have.
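
If it helps, that probe can be run against the Fuseki endpoint with the
SOH scripts that ship with Fuseki (the service URL below is an assumption
about the local setup):

$ s-query --service http://localhost:3030/dataset/query \
    'SELECT * { { ?s ?p ?o } UNION { GRAPH ?g { ?s ?p ?o } } }'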


I have tried using the form <...>
but this has no effect (although I can download the data from here...)


Longer:

In TDB it picks FROM/FROM NAMED from the set of already loaded named graphs.

http://jena.apache.org/documentation/tdb/dynamic_datasets.html

But it is only from data already loaded.  You probably don't want to do 
that.
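
A minimal sketch of the pattern that does work, assuming a graph named
<http://example/g1> has already been loaded into the TDB dataset:

SELECT *
FROM NAMED <http://example/g1>
WHERE { GRAPH ?g { ?s ?p ?o } }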


Andy



Many thanks
Rob







Re: Inserted xsd:int and found xsd:integer

2013-07-09 Thread Enrico Daga
I see. The only drawback is that the extracted RDF graph would not really
match the loaded one, and this may be an issue in some cases (at least in
theory). It would be perfect if the original datatype could be used (only)
at serialization time, but I see the point of the cost at query time.
In my case it is not really a problem, since queries will match both, and I
am happy to benefit from it considering the added value that you mentioned.

Thank you for the insight :)

Enrico




-- 
Enrico Daga

--
http://www.enridaga.net
skype: enri-pan


Re: Inserted xsd:int and found xsd:integer

2013-07-09 Thread Andy Seaborne

Enrico,

Yes - two things are happening - derived types get rolled up to 
xsd:integer and also formats are canonicalized (well, the value is 
stored and the lexical form is remade if needed).


{  ?a ?b 28 } matches {  ?a ?b 0028 } and {  ?a ?b +28 }

The original datatype could have been kept (needs 4 bits of encoding - 
there are 13 derived types of xsd:decimal, from xsd:integer on down) but 
"28"^^xsd:int  would not match "28"^^xsd:integer without some cost.


Inlining the value really speeds up numerical filters like

   FILTER ( ?x < 56 )
   FILTER ( ?x > 4 && ?x < 56 )
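
A small sketch of the effect, using an in-memory TDB dataset and the Jena
2.x package names used elsewhere in this thread (the urn:ex:* names are
placeholders):

import com.hp.hpl.jena.query.Dataset;
import com.hp.hpl.jena.query.QueryExecution;
import com.hp.hpl.jena.query.QueryExecutionFactory;
import com.hp.hpl.jena.query.ResultSetFormatter;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.tdb.TDBFactory;

public class IntCanonicalization {
    public static void main(String[] args) {
        Dataset ds = TDBFactory.createDataset();   // in-memory TDB, for testing
        Model m = ds.getDefaultModel();
        // Insert a value typed as xsd:int (derived from xsd:integer).
        m.createResource("urn:ex:s")
         .addProperty(m.createProperty("urn:ex:p"),
                      m.createTypedLiteral("28", "http://www.w3.org/2001/XMLSchema#int"));
        // Matching by value works regardless of the derived type used on insert.
        QueryExecution qe = QueryExecutionFactory.create(
                "SELECT ?s WHERE { ?s <urn:ex:p> 28 }", ds);
        try {
            ResultSetFormatter.out(qe.execSelect());
        } finally {
            qe.close();
        }
        // Reading the literal back shows the canonical xsd:integer form.
        System.out.println(m.listStatements().nextStatement().getObject());
    }
}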

Andy

On 09/07/13 09:24, Enrico Daga wrote:

Yes, I am using TDB, your pointers clarified all, thank you!

In the end, any SPARQL expression asking for xsd:int would match values
with xsd:integer, and this applies to any canonicalized datatype.
Returning to my example, the following queries actually work:

  select * from <urn:x:g> where { ?a ?b 28 }   # <urn:x:g> is a placeholder graph name

or

  select * from <urn:x:g> where { ?a ?b "28"^^<http://www.w3.org/2001/XMLSchema#int> }

Thank you very much!

Enrico



On 8 July 2013 17:45, Rob Vesse  wrote:


Is this using TDB as a backend?

This is by design in TDB - see Value Canonicalization and TDB Design
(http://jena.apache.org/documentation/tdb/value_canonicalization.html and
http://jena.apache.org/documentation/tdb/architecture.html) - and not a
Fuseki issue but rather a feature of TDB.

Since TDB inlines certain datatypes into the Node IDs in order to speed up
common datatype computations it needs to normalize derived datatypes to
the appropriate base type.  So as in your example anything derived from
xsd:integer will be canonicalized to the xsd:integer form.

Rob


On 7/8/13 8:50 AM, "Enrico Daga"  wrote:


Hi,

I loaded some data in Fuseki and found some differences in an xsd
datatype.
Follows a test case:

insert data {
  graph <urn:x:g> {   # placeholder graph name
    _:ex <urn:x:p> "28"^^<http://www.w3.org/2001/XMLSchema#int>   # placeholder predicate
}}

Selecting data from the graph <urn:x:g> will show the
value typed as <http://www.w3.org/2001/XMLSchema#integer> instead.

While this is not a big issue and I could live with that in principle, in
my specific situation (back-end migration to Fuseki), clients relying on
the xsd:int datatype will break (and I want the data to be consistent with
the legacy back-end).

Any advice? Should I open a bug against 0.2.8? ;)

Thank you all,

Enrico


--
Enrico Daga

--
http://www.enridaga.net
skype: enri-pan












Re: Validation

2013-07-09 Thread Erdem Eser Ekinci
Thank you Dave,

I see exactly what I missed.

...




-- 
Galaksiya Bilişim Teknolojileri
Erdem Eser EKİNCİ


Re: Validation

2013-07-09 Thread Dave Reynolds

On 09/07/13 05:12, Erdem Eser Ekinci wrote:

Hi all,

I need some help to validate an ontology. The code is as follows. The
ontology is simple; there are three concepts, Role, Goal, and Task, and one
object property hasGoal whose domain is Role and range is Goal. I've tried
to restrict the Role concept on the hasGoal property with allValuesFrom,
someValuesFrom, and minCardinality restrictions separately. But none of them
affects the validity; validation always returns true.

What am I missing?


That OWL with the standard open world semantics is fundamentally 
unsuited to validation :)


It is possible to use OWL syntax to express constraints but interpret it
with closed world semantics. See the upcoming W3C workshop on this topic.
But the current Jena rule reasoners follow the official semantics.


In your example then a cardinality restriction on Role just means that 
each Role instance can be deduced to have a hasGoal, and indeed the 
reasoner will invent an existential variable (blank node) to represent 
the missing value.


Similarly a someValues restriction just means that the reasoner can 
deduce there is a hasGoal value and that it must be a Goal.


An allValues restriction allows you to deduce that all your hasGoal 
values must be Goals but tells you nothing about missing goals. If you 
made the value of a hasGoal instance a Task, and if you declare Task and 
Goal to be disjoint then your allValues (or a simple range declaration) 
would allow you to detect a problem.
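
A minimal sketch of that last point, reusing the shape of the quoted test
below (the namespace is a placeholder): declaring Goal and Task disjoint
lets the allValuesFrom restriction surface an actual clash.

import java.util.Iterator;

import com.hp.hpl.jena.ontology.Individual;
import com.hp.hpl.jena.ontology.ObjectProperty;
import com.hp.hpl.jena.ontology.OntClass;
import com.hp.hpl.jena.ontology.OntModel;
import com.hp.hpl.jena.ontology.OntModelSpec;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.reasoner.ValidityReport;

public class DisjointnessValidation {
    public static void main(String[] args) {
        String NS = "http://example.org/galaxia#";   // placeholder namespace
        OntModel schema =
            ModelFactory.createOntologyModel(OntModelSpec.OWL_DL_MEM_RULE_INF);
        OntClass goal = schema.createClass(NS + "Goal");
        OntClass task = schema.createClass(NS + "Task");
        OntClass role = schema.createClass(NS + "Role");
        ObjectProperty hasGoal = schema.createObjectProperty(NS + "hasGoal");
        role.addSuperClass(schema.createAllValuesFromRestriction(null, hasGoal, goal));
        goal.addDisjointWith(task);   // the extra ingredient Dave describes

        Individual r = schema.createIndividual(NS + "role1", role);
        Individual t = schema.createIndividual(NS + "task1", task);
        r.addProperty(hasGoal, t);    // a Task where only Goals are allowed

        // The allValuesFrom restriction forces t to be a Goal, which now
        // clashes with t being a Task, so validation should report a problem.
        ValidityReport report = schema.validate();
        System.out.println("Valid: " + report.isValid());
        for (Iterator<ValidityReport.Report> it = report.getReports(); it.hasNext(); ) {
            System.out.println(it.next());
        }
    }
}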


Dave



Thanks for your help...

import org.junit.Test;

import com.hp.hpl.jena.ontology.Individual;
import com.hp.hpl.jena.ontology.ObjectProperty;
import com.hp.hpl.jena.ontology.OntClass;
import com.hp.hpl.jena.ontology.OntModel;
import com.hp.hpl.jena.ontology.OntModelSpec;
import com.hp.hpl.jena.ontology.Restriction;
import com.hp.hpl.jena.rdf.model.InfModel;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.reasoner.Reasoner;
import com.hp.hpl.jena.reasoner.ReasonerRegistry;
import com.hp.hpl.jena.reasoner.ValidityReport;

public class OWLRestrictionTest {
    // uri defs
    private static String URI_BASE = "http://www.galaksiya.com/ontologies/galaxia.owl";
    private static String URI_INDV = "http://www.galaksiya.com/ontologies/indv.owl";
    private static String URI_ROLE = "Role";
    private static String URI_GOAL = "Goal";
    private static String URI_TASK = "Task";
    private static String URI_HAS_GOAL = "hasGoal";

    // concepts
    private OntClass clssRole;
    private OntClass clssGoal;
    private OntClass clssTask;
    private ObjectProperty prpHasGoal;

    @Test
    public void test() {
        OntModel schema =
            ModelFactory.createOntologyModel(OntModelSpec.OWL_DL_MEM_RULE_INF);
        // create concepts
        clssRole = schema.createClass(URI_BASE + "#" + URI_ROLE);
        clssGoal = schema.createClass(URI_BASE + "#" + URI_GOAL);
        clssTask = schema.createClass(URI_BASE + "#" + URI_TASK);
        // create properties
        prpHasGoal = schema.createObjectProperty(URI_BASE + "#" + URI_HAS_GOAL);
        // create restrictions
        // Restriction restriction = schema.createSomeValuesFromRestriction(null, prpHasGoal, clssGoal);
        Restriction restriction =
            schema.createAllValuesFromRestriction(null, prpHasGoal, clssGoal);
        // Restriction restriction = schema.createMinCardinalityRestriction(null, prpHasGoal, 2);
        clssRole.addSuperClass(restriction);

        OntModel data = ModelFactory.createOntologyModel();
        // create individuals
        Individual indvRole1 = data.createIndividual(URI_INDV + "#FetcherRole1", clssRole);
        Individual indvGoal1 = data.createIndividual(URI_INDV + "#FetchGoal1", clssGoal);
        Individual indvTask1 = data.createIndividual(URI_INDV + "#FetchTask1", clssTask);
        // indvRole1.addProperty(prpHasGoal, indvGoal1);
        indvRole1.addProperty(prpHasGoal, indvTask1);

        // validation
        schema.write(System.out, "N3");
        ValidityReport validityReport = schema.validate();
        if (validityReport != null)
            System.out.println("Model consistency: " + validityReport.isValid());
        else
            System.out.println("No consistency report");

        Reasoner reasoner = ReasonerRegistry.getOWLReasoner();
        reasoner = reasoner.bindSchema(schema);   // bindSchema returns the schema-bound reasoner
        InfModel infModel = ModelFactory.createInfModel(reasoner, data);
        infModel.write(System.out, "N3");
        ValidityReport validityReport1 = infModel.validate();
        if (validityReport1 != null)
            System.out.println("Model consistency: " + validityReport1.isValid());
        else
            System.out.println("No consistency report");
    }
}





Re: Ways to simulate recursive built-ins?

2013-07-09 Thread Dave Reynolds

On 08/07/13 20:47, Joshua TAYLOR wrote:

On Mon, Jul 8, 2013 at 2:52 PM, Dave Reynolds  wrote:

On 08/07/13 16:14, Joshua TAYLOR wrote:


Hi all,

First off, my impression is that having a builtin do non-trivial query
of the graph (i.e., that depends on more inference), or modification
of the graph, is not supported.  That said, I'd like to do something
akin to it, and I'm looking for suggestions.



Correct. Builtins are not supposed to modify the graph and there's no
provision for invoking the inference engine as part of executing a builtin.



I have a simple expression evaluator (essentially a simple
lambda-calculus interpreter) whose expressions can be encoded in RDF.
In the simplest case, I have a builtin called Eval which takes two
arguments, an expression, which should be concrete, and a result which
may be either a variable or concrete node.  If the result is a
variable, then Eval binds the result of the evaluation to it, and
matches, and if the result is concrete, then Eval compares the
evaluation result with it, and succeeds if they are the same
(approximately with the primitive equal builtin).

Now, the expression language that Eval can handle has some primitives
for querying RDF graphs.  The primitives are based on those presented
in a recent paper [1]. The idea here is to be able to write an
expression denoting a function of x, e.g., "the sum of the values of
the hasValue property for all the resources related to x by the
relatedTo property" and to store the result as a property of x.  E.g.,
with the data

x relatedTo y, z .
y hasValue 3 .
z hasValue 4 .

we would compute the value 7. This is generalized by a rule that says

(?s ?p ?o) <-
(?s hasComputableProperty ?p)
(?s hasComputableExpression ?e)
Eval(?e,?o) .

Now the problem of recursion comes up when the values of y and z are
not stated explicitly, but should also be computed by this same rule.
Eval doesn't have access to the InfModel directly, but through the
context can access the InfGraph, but simply querying it from within
Eval causes problems with recursive query.  (I assume that this is to
be expected.  If it is not, please let me know, and I can reply with
the particular errors that I'm encountering.)

Has anyone done anything like this, or does anyone have any
suggestions for handling things like this?  So far, I've considered
adding some preconditions to the rule that would require that
computable properties of related resources must be computed first
(which would require more explicit connections in the data), or having
Eval return some sort of continuation or promise/delay type object, or
a new rule that could finish the computation when more information
becomes available.

I think this is pushing the boundaries of the rule engine, but any and
all suggestions are welcome!



Pushing the boundaries indeed!

My inclination would be to use the precondition approach and forward rules.
So any computation whose values are available will fire the appropriate rule
which will assert new values. That in turn will unlock new rules. Thus you
get bottom up data flow rather than top down recursion.

Since your preconditions will amount to "all the values I need are
available" that will be easiest to implement via another builtin.

That combination will probably work (except where you have recursive
dependencies but that will be a problem anyway!). However, it's not obvious
to me what the rule engine is buying you in that case. You might want to
consider implementing a custom inference engine in code which can recurse
directly.


Thanks for the reply!  The reasons for using builtins and the rule
engine have more to do with integration with work we've already done,
as well as piggybacking on the Jena OWL reasoners so that we can use
restriction classes to define which expressions (also OWL individuals)
should be associated with which individuals.  E.g.,

   SportsCar SubClassOf (hasPerformanceExpression formula1)
   FamilyCar SubClassOf (hasPerformanceExpression formula2)

You said, "Correct. Builtins are not supposed to modify the graph and
there's no provision for invoking the inference engine as part of
executing a builtin." To be clear though, it is OK if builtins access
the raw and deductions graph (without modifying them) to get data
/from/ the graph, right?  (I think this is OK, but I'd rather get
explicit confirmation now than surprise experimental confirmation
later. :) )  I assume that this is OK because the suggested builtin
that would implement "are all functions of dependencies computed yet?"
would have to be examining the graph.


Yes, accessing the raw/deductions graph is legal, though if any explicit
or implicit negation is involved then you should mark the builtin as
non-monotonic.
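
A sketch of what such a graph-reading builtin might look like, using the
Jena 2.x rulesys API; the property URIs and the readiness test itself are
hypothetical:

import com.hp.hpl.jena.graph.Node;
import com.hp.hpl.jena.graph.Triple;
import com.hp.hpl.jena.reasoner.rulesys.RuleContext;
import com.hp.hpl.jena.reasoner.rulesys.builtins.BaseBuiltin;
import com.hp.hpl.jena.util.iterator.ClosableIterator;

public class AllPreconditionsComputed extends BaseBuiltin {
    @Override public String getName()   { return "allPreconditionsComputed"; }
    @Override public int getArgLength() { return 1; }

    // The builtin only reads the graph, but its test involves implicit
    // negation ("no uncomputed dependency exists"), so it is non-monotonic.
    @Override public boolean isMonotonic() { return false; }

    @Override
    public boolean bodyCall(Node[] args, int length, RuleContext context) {
        checkArgs(length, context);
        Node subject = getArg(0, args, context);
        // Hypothetical readiness test: every resource reachable via
        // urn:x:relatedTo already has a urn:x:hasValue triple.
        Node relatedTo = Node.createURI("urn:x:relatedTo");
        Node hasValue  = Node.createURI("urn:x:hasValue");
        ClosableIterator<Triple> deps = context.find(subject, relatedTo, null);
        try {
            while (deps.hasNext()) {
                Node dep = deps.next().getObject();
                ClosableIterator<Triple> vals = context.find(dep, hasValue, null);
                try {
                    if (!vals.hasNext()) return false;   // dependency not computed yet
                } finally { vals.close(); }
            }
        } finally { deps.close(); }
        return true;
    }
}

It would be registered once with
BuiltinRegistry.theRegistry.register(new AllPreconditionsComputed()).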



A question about this approach though;  if a forward rule has such a
precondition, e.g.,

   (?s hasComputableProperty ?p)
   (?s hasComputableExpression ?e)
   AllPreconditionsComputed( ?s )
   Eval(?e,?o)
   ->
   (?s ?p ?o )

I see where the