Re: Fuseki context path?

2022-02-14 Thread A. Soroka
I can if needed, but it seems like a simple thing for the standalone server to do.
If it can't be done now, I will put in a PR.

Adam

On Mon, Feb 14, 2022, 4:29 PM Martynas Jusevičius 
wrote:

> Adam,
>
> Why not use the WAR file then in a servlet container?
>
> On Mon, 14 Feb 2022 at 21.59,  wrote:
>
> > I'm afraid that doesn't work because I'm interested in proxying the
> entire
> > application, not a single dataset. I want to expose the whole UI, admin,
> > SPARQL editor and all.
> >
> > I've tried proxying as you describe using --localhost, but the static
> > resources and JavaScript that compose the UI don't come through properly
> > when I have a path fragment on the other side a la:
> >
> > ProxyPass /fuseki http://localhost:3030
> >
> >  I'd really rather not get into rewriting HTML! I was hoping for a
> simple:
> >
> > ProxyPass /fuseki http://localhost:3030/fuseki
> >
> > style of action.
> >
> > Does that make sense?
> >
> > Adam
> >
> >
> > On Mon, Feb 14, 2022, 2:27 PM Andy Seaborne  wrote:
> >
> > >
> > >
> > > On 14/02/2022 17:30, aj...@apache.org wrote:
> > > > I'm probably missing something obvious, because I haven't looked at
> > > Fuseki
> > > > in quite some time. I cannot seem to find any way to set the servlet
> > > > context path for Fuseki in its standalone (non-WAR) incarnation,
> which
> > I
> > > > want to do in order to get it proxied behind httpd.
> > >
> > > For Fuseki standalone server (in the download) and Fuseki Main:
> > >
> > > Set the name of the dataset to a path. The name can have a "/" in it
> but
> > > it seems to need the service name to help it distinguish between the
> > > "sparql" query service and /some/path/dataset thinking "dataset" is the
> > > service (routing has been decided before the named services are
> > > available to inspect).
> > >
> > > fuseki-server /some/path/dataset/sparql
> > >
> > > Is that enough for you?
> > >
> > > BTW:
> > >
> > > One way to proxy is to run it on a known port and then use --localhost
> -
> > > the Fuseki server then will only talk to HTTP traffic on the localhost
> > > interface (IPv4 or IPv6), not to traffic sent to it directly from other hosts.
> > >
> > >  Andy
> > >
> > > > Is there a setting here, or will I have to define a Jetty
> configuration
> > > (in
> > > > which case, do we have an example available?)?
> > > >
> > > > Thanks for any info!
> > > >
> > > > Adam
> > > >
> > >
> >
>


Re: Use command tdbquery

2022-01-03 Thread A. Soroka
Is it possible for you to make a copy of the database to query offline?
That can be expensive in storage, but it's really the simplest thing to do
in many ways.
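
If streaming over HTTP is the route taken instead (see Andy's notes below on RDFConnectionFuseki), a minimal sketch of writing a large SELECT result straight to CSV might look like the following; the endpoint URL, query, and output file are hypothetical:

import java.io.FileOutputStream;
import java.io.OutputStream;
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.ResultSet;
import org.apache.jena.query.ResultSetFormatter;
import org.apache.jena.rdfconnection.RDFConnection;
import org.apache.jena.rdfconnection.RDFConnectionFuseki;

public class StreamToCsv {
  public static void main(String[] args) throws Exception {
    String query = "SELECT ?s ?p ?o WHERE { ?s ?p ?o }";          // hypothetical query
    try (RDFConnection conn = RDFConnectionFuseki.create()
            .destination("http://localhost:3030/ds")              // hypothetical endpoint
            .build();
         QueryExecution qExec = conn.query(query);
         OutputStream out = new FileOutputStream("results.csv")) {
      ResultSet results = qExec.execSelect();
      ResultSetFormatter.outputAsCSV(out, results);               // rows are written as they are consumed
    }
  }
}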

Adam

On Mon, Jan 3, 2022, 1:09 PM Andy Seaborne  wrote:

>
>
> On 03/01/2022 17:44, robert.ba...@tiscali.it wrote:
> >
> >
> > Hi,
> >
> > you are right, I was not clear in the request. I try to
> > explain myself better.
> > I have a knowledge base of over a billion
> > triples.
> > I am testing a query that returns about 2 million results (in
> > the future I will have many queries that will return a lot of data)
> > On
> > the client side I have to allow the download of the results in CSV
> > format (on asynchronous request, not through batch).
>
> How long does it take?
>
> > But, with these
> > volumes of data, we can have 2 types of errors:
> > - OutOfMemory on the
> > Result (I can increase the heap size)
>
> How are you making the query? (what software?)
>
> Fuseki will stream results back and with the Jena client code, can
> provide an end-to-end streaming solution.
>
> The fastest result format is the binary Thrift encoding.
>
> RDFConnectionFuseki will use this.
>
> Some queries don't stream.
>
> > - Connection timeout on Fuseki
> > (can I increase the configuration timeout?)
>
> What is timing it out? Some intermediate?
>
> Fuseki by default does not have timeouts. Your configuration may set
> them but the default is unbounded.
>
> If you have set timeouts, you can create another service to the same
> database with different settings. It shares the TDB database safely.
>
> > For this reason I was
> > thinking of using the tdbquery command (takes 3 minutes to run with
> > tdbquery). But I can't stop fuseki to perform the download operation.
> > Fuseki must remain active at all times to answer all other
> > questions.
>
> You can't use tdbquery this way.
>
> It should cause an error saying "already in use" or some such message.
> There is locking on the file system to detect dual use.
>
> With virtualized setups it may be possible not to get the error because
> filing systems are weird, but all that has happened is that the locking
> has not seen the duplicate use, not that dual use has become possible.
>
> You will corrupt the database.
>
> Corrupt = permanently damage, not recoverable.
>
>  Andy
>
> >
> > Il 03.01.2022 17:25 Rinor Sefa ha scritto:
> >
> >> I think if
> > you describe your use case in more detail, it would be easier to get
> > help.
> >>
> >> For example, can you clarify
> >> - a query? What kind of query
> >
> >> - "many results", any number?
> >> - What do you consider slow and
> > inefficient, and what would you consider ideal?
> >>
> >> Also, why do
> > you think that the HTTP call is the bottleneck? I think that this is a
> > wrong assumption. Try to run a simple query and you will see that the
> > HTTP call is not the bottleneck.
> >>
> >> -Original Message-
> >> From:
> > robert.ba...@tiscali.it [1]
> >> Sent: Monday, 3 January 2022 12:59
> >> To:
> > users@jena.apache.org [3]
> >> Subject: Use command tdbquery
> >>
> >> Hi,
> >>
> >>
> > i am using a fuseki server and need to run a query which returns a lot
> > of results. The use of the HTTP call (http://localhost:3030/ds/query=myQuery)
> > is very slow and inefficient. I thought about using the
> > tdbquery command. But I don't want to stop fuseki. Is there any way to
> > do this?
> >>
> >
> >
> >
> >
> >
> >
>


Re: Software Site for Apache Jena OSGI

2020-03-20 Thread A. Soroka
No, my link worked fine for me, for the JAR. Not sure why it wouldn't for
you...

Adam

On Fri, Mar 20, 2020, 10:07 AM Andy Seaborne  wrote:

>
>
> On 20/03/2020 00:42, aj...@apache.org wrote:
> > I'm not quite sure what you mean by "Software Site". Are you looking for
> a
> > place from which to download that artifact?
> >
> > If so, I don't think we currently provide a direct download of the
> > jena-osgi module. We provide it via Maven repository publication. Are you
> > using a dependency manager as part of your work? If not, you can download
> > it directly here:
> >
> >
> https://repository.apache.org/service/local/repositories/releases/content/org/apache/jena/jena-osgi/3.14.0/jena-osgi-3.14.0.jar
>
> That seems to be XML-format history?
>
> https://repo1.maven.org/maven2/org/apache/jena/jena-osgi/
>
> has the jar itself.
>
> general note:
> https://repo1.maven.org
>
> Not http:
> Not "central.maven.org"
>
> These used to work, but Maven Central made some changes early this
> year which mean everyone must use https://repo1.maven.org
>
>  Andy
>
> >
> > Adam
> >
> > On Thu, Mar 19, 2020, 4:28 AM Georg Schmidt-Dumont <
> > georg.schmidtdumon...@gmail.com> wrote:
> >
> >> Good morning,
> >>
> >> I am busy setting up an OSGI Bundle which will use Apache Jena. I have
> >> found the OSGI Bundle provided for Apache Jena. Unfortunately I have not
> >> been able to find a Software Site for it. Do you provide a Software Site
> >> for Jena? If so, where can I find it?
> >>
> >> kind regards,
> >> Georg Schmidt-Dumont
> >>
> >
>


Re: Fuseki user-defined Web Services

2018-05-24 Thread Adam Soroka
Was there a PR associated with that suggestion?

Adam

On 2018/05/24 14:29:51, Martynas Jusevičius  wrote: 
> I had long ago suggested that Jena should build on JAX-RS, which is the
> RESTful API for Java.
> 
> You can see how that can be done here:
> https://github.com/AtomGraph/Core/blob/master/src/main/java/com/atomgraph/core/model/impl/QueriedResourceBase.java
> 
> On Thu, May 24, 2018 at 4:19 PM, Piotr Nowara  wrote:
> 
> > Hi,
> >
> > is there any documentation describing the new Fuseki capability of handling
> > the user-defined services?
> >
> > The 3.7.0 release info says: "JENA-1435: Provide extensibility of Fuseki
> > with new services. It is now possible to add custom services to a Fuseki
> > service, not just the services provided by the Fuseki distribution."
> >
> > Does this mean I can create my own REST Web Service and host it using
> > Fuseki?
> >
> > Thanks,
> > Piotr
> >
> 


Re: SPIN support

2017-09-08 Thread Adam Soroka
Could this be a thing to support in Fuseki? In other words, we don't want to package
every possible scripting language with Fuseki, but people will want to use this
kind of facility with it, so we might want to have some instructions available
on how to add your JSR 223 language of choice.
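
For reference, the JSR 223 machinery itself is just the standard javax.script API; a minimal sketch of looking up an engine by name and evaluating a script (the engine name and script are illustrative, and the Jena-side wiring discussed below is not shown):

import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class Jsr223Sketch {
  public static void main(String[] args) throws Exception {
    // Discovers any JSR 223 engine on the classpath (Nashorn, Groovy, ...)
    ScriptEngineManager manager = new ScriptEngineManager();
    ScriptEngine engine = manager.getEngineByName("nashorn");   // or "groovy" if Groovy is on the classpath
    // Define and call a function written in the scripting language
    Object result = engine.eval("function shout(s) { return s.toUpperCase(); } shout('hello');");
    System.out.println(result);                                 // HELLO
  }
}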


ajs6f 

On 2017-09-08 13:49, Andy Seaborne  wrote: 
> Once the machinery for one language is there, adding others is easy if
> the language has a javax.script.ScriptEngineManager (JSR 223). Groovy
> does. That means the custom functions can be loaded and run without
> the static compile/load steps for the customisations needed to get
> stuff into the server jar.  Just ensure the language is on the
> classpath.
> 
> You can call java from javascript in nashorn; it does some nice
> idiomatic stuff like "obj.getProp()" is "obj.prop". And being highly
> dynamic with reflection, a bit difficult to trace mistakes!
> 
> It is only a bit more complicated to create java objects but you can
> do that too.
> 
> Andy
> 
> On 8 September 2017 at 00:04, Bruno P. Kinoshita
>  wrote:
> > Maybe Groovy could be an option as well? I like the idea of being able to 
> > customize Jena with Groovy + Grape's.
> > Whenever I use JavaScript I always rely on a few dependencies (e.g. 
> > moment.js). If we allowed users to grab extra dependencies with npm that 
> > would work as well I think.
> >
> > In Jenkins, you can customize the server behaviour, and automate pretty 
> > much everything with Groovy. There is a Groovy console, and a few extension 
> > points where you can plug in Groovy code. The main advantage being that 
> > there is no translation between Groovy/Java objects. You simply call the 
> > Java objects from within Groovy. Which means we could even call utility 
> > classes and methods I think.
> >
> > Bruno
> >
> >   From: Andy Seaborne 
> >  To: users@jena.apache.org
> >  Sent: Friday, 8 September 2017 1:20 AM
> >  Subject: Re: SPIN support
> >
> The nice thing about JS functions is that they allow extension without Java
> programming, whether that means writing the custom function in Java or having to
> rebuild Fuseki to get the Java code in.  war files and jar+dependencies
> > (run with -jar) are sealed.  And it isn't as hard as embedded Java like
> > JSPs.
> >
> > Just restricting the thoughts to SPARQL functions in JS - list of args
> > in, single value out. (So not property functions, not modifying the
> > graph data itself.)
> >
> > I came up with 3 different designs of the function calling model based
> > on what is passed in.
> >
> > 1/ Pass in JS values - strings, numbers, booleans, null.
> >
> > Convert the function arguments as passed in from SPARQL into JS native
> > things.  Also, convert the return to an XSD values by inspection. The JS
> > writer is insulated from RDF details.
> >
> > Works really nicely for strings.
> > Everything could be a string, and the dynamic nature of JS will work
> > (caveat the overheads for simple functions called many, many times).
> > (Enough said about numbers in JS!)
> >
> > Other items (URIs, bNodes) can be some kind of object.  If they have a
> > "toString()", then it works in unaware JS code.
> >
> > I've mocked this up and got it working.
> >
> > 2/ Pass in JS-ish RDFterms - e.g. [A]
> >
> > This exposes the fact the arguments are RDF terms, with datatypes and
> > differentiates between URIs and literals. The function writer is more
> > aware of RDF, such as URIs (NamedNodes in the language of [A]).
> >
> > For custom functions, I think there is less usefulness per se because
> > the function is not manipulating the RDF data, its working on values.
> > On the other hand, one way to handle RDF terms in JS is better.
> >
> > 3/ Pass in Jena Nodes or NodeValues.
> >
> > This is the raw Jena-java-RDF view.  The JS function writer is exposed
> > to Jena details has full power. Probably not meeting the goals of ease
> > of use for a non-Java writing person. NodeValue.toString means the JS
> > writer can be semi-unaware of this.
> >
> >
> > Another design point is whether the JS function can call back into Jena,
> > if at all (well, it can't be stopped in Nashorn but that does not make
> > it a good idea.  The result of a good function is entirely defined by its
> > inputs.  No side effects, no state.)
> >
> > For Fuseki:
> >
> > We need a library of functions to be loaded and ideally compiled once.
> >
> > We could get the JS scripts into the Context by reading from URL, or a
> > literal string in the file. There is a Context that is server-wide, in
> > the server section of a configuration file,and the one used for
> > execution can be added to with dataset-specific Context settings.
> >
> > Andy
> >
> > [A] https://github.com/rdfjs/representation-task-force
> >
> > On 06/09/17 22:37, Holger Knublauch wrote:
> >>
> >>
> >> On 6/09/2017 19:45, Adrian Gschwend wrote:
> >>> On 06.09.17 00:21, Holger Knublauch wrote:
> >>>
> 

Re: where is/are the extension points for Jena ARQ to fit stream based framework

2017-08-07 Thread Adam Soroka
You may want to look at the work here:

https://www.w3.org/community/rsp/wiki/Main_Page

because one of the points they make is that queries over streams are 
fundamentally different from queries over complete datasets.

(Also this might fit better on Jena's dev@ list.)

ajs6f

On 2017-08-07 04:19, Qian Liu  wrote: 
> Hello,
> 
> How can I adjust the Jena query engine to fit the Akka stream framework? Based on 
> the detailed knowledge about Jena, could anyone please give me some 
> suggestions about which cut-point can I adjust Jena to fit? My previous plan 
> was to make different Ops as different transformation steps along the stream 
> pipeline to manipulate the dataset. But I found it was difficult to really 
> change different Jena Ops to fit. I really need helps, thanks a lot.
>  
> Best regards,
>  
> Qian Liu
> 


Re: riot not triggering ERROR on bad IRI

2017-04-18 Thread A. Soroka
Did you use the --strict flag?

---
A. Soroka
The University of Virginia Library

> On Apr 18, 2017, at 8:09 AM, Laura Morales <laure...@mail.com> wrote:
> 
> This is the RDF/XML: 
> https://svn.apache.org/repos/asf/hivemind/hivemind2/trunk/doap_Hivemind.rdf
> 
> The command `riot --quiet --output=nt xxx.rdf > xxx.nt` creates the .nt file 
> with the following 2 invalid triples (the objects' IRIs have a space). No ERRORs 
> are raised.
> 
> _:B4f7ecd79X3A15b80f07bfbX3AX2D7ffe <http://usefulinc.com/ns/doap#location> 
> <http://svn.apache.org/repos/asf/hivemind > .
> _:B4f7ecd79X3A15b80f07bfbX3AX2D7ffe <http://usefulinc.com/ns/doap#browse> 
> <http://svn.apache.org/viewcvs.cgi/hivemind > .
> 
> The command `riot --verbose --stop --validate xxx.rdf` also doesn't raise any 
> ERROR, only a WARN
> 
> WARN  riot :: [line: 2, col: 35] {W119} A processing instruction is in RDF 
> content. No processing was done.
> 
> 
> Is this a bug or am I missing something?



Re: tdbloader skip bad file

2017-04-18 Thread A. Soroka
One of the several advantages of N-Triples (and this is not an accident) is how 
easy it is to use standard Posix tools with it, e.g. cut, sed, grep, etc.

---
A. Soroka
The University of Virginia Library

> On Apr 18, 2017, at 11:46 AM, Laura Morales <laure...@mail.com> wrote:
> 
>> In the meantime, you can use something like sed for this, something like: 
>> sed -e "s|\(.*\)|\1 |"
> 
> ah, right! This is a good suggestion. This seems to work: sed "s/\(.*\) 
> \.$/\1  ./"  (all triples have a period at the end).
> I think I'll use this until RIOT has a --graph option that would be much more 
> easy to work with :)



Re: tdbloader skip bad file

2017-04-18 Thread A. Soroka
In the meantime, you can use something like sed for this, something like: sed  
-e "s|\(.*\)|\1 |"

---
A. Soroka
The University of Virginia Library

> On Apr 18, 2017, at 10:28 AM, Laura Morales <laure...@mail.com> wrote:
> 
>> Convert to something cheaper (preferably stream-able, like N-triples, as 
>> Andy says) as early as possible.
> 
> It would be very handy if riot had an "--graph=..." option as well, such that 
> I could immediately output all XML files into n-quads with a graph label (and 
> `cat` all of them into a single .nq file).



Re: tdbloader skip bad file

2017-04-18 Thread A. Soroka
You can file a ticket for that functionality at the Jena JIRA instance:

https://issues.apache.org/jira/browse/JENA

---
A. Soroka
The University of Virginia Library

> On Apr 18, 2017, at 10:28 AM, Laura Morales <laure...@mail.com> wrote:
> 
>> Convert to something cheaper (preferably stream-able, like N-triples, as 
>> Andy says) as early as possible.
> 
> It would be very handy if riot had an "--graph=..." option as well, such that 
> I could immediately output all XML files into n-quads with a graph label (and 
> `cat` all of them into a single .nq file).



Re: tdbloader skip bad file

2017-04-18 Thread A. Soroka
If you don't have a specific reason to use RDF/XML inside your workflow, you 
almost certainly shouldn't. It's one of the most expensive RDF serializations 
to process. Convert to something cheaper (preferably stream-able, like 
N-triples, as Andy says) as early as possible.

As for the costs of validation, depending on your operating resources, it might 
be worthwhile to use something like GNU parallel or xargs -P to run several 
riot invocations together. That will only be true if the startup time for riot 
is very small compared to the time it takes to run over a given file, which 
will depend on the size of your files. In this case it seems unlikely to help 
much, but it may be useful at a different time. You can only load one file at a 
time into TDB with tdbloader, because only one process at a time can act 
against a given TDB database.
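
A minimal Java sketch of the convert-early approach (file names hypothetical): parse the RDF/XML once and stream out N-Triples, so any later load works from the cheaper syntax.

import java.io.FileOutputStream;
import java.io.OutputStream;
import org.apache.jena.riot.Lang;
import org.apache.jena.riot.RDFDataMgr;
import org.apache.jena.riot.system.StreamRDF;
import org.apache.jena.riot.system.StreamRDFWriter;

public class ToNTriples {
  public static void main(String[] args) throws Exception {
    try (OutputStream out = new FileOutputStream("data.nt")) {
      // Triples are written as they are parsed; nothing is held in memory.
      StreamRDF writer = StreamRDFWriter.getWriterStream(out, Lang.NTRIPLES);
      RDFDataMgr.parse(writer, "data.rdf");                     // parse errors surface here, before any load
    }
  }
}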


---
A. Soroka
The University of Virginia Library

> On Apr 18, 2017, at 5:38 AM, Andy Seaborne <a...@apache.org> wrote:
> 
> 
> 
> On 18/04/17 10:19, Laura Morales wrote:
>>> riot sets the Unix return code to 0 on success and 1 on failure in the
>> usual Unix fashion.
>>> 
>>> So build up a list of valid files by looping on the input files then
>> load all the valid ones in one go with tdbloader.
>> 
>> Thank you.
>> Unfortunately however, running "riot --validate" on each file doesn't seem 
>> much faster than running tdbloader on each single file. Processing all files 
>> seem to take approximately the same time.
>> 
> 
> running tdbloader with bad data can corrupt the database.
> 
> It's a bulk loader - not a fix-up-the data tool.
> 
> If they take about the same time, then the parse costs dominate - which is 
> possible with RDF/XML on small data files.
> 
> If performance matters, parse/validate and output N-triples, then load the 
> N-triples.
> 
>Andy



Re: Delete/Insert single graph in dataset

2017-04-16 Thread A. Soroka
To load, yes. Just use the --graph=IRI (Act on a named graph) switch.

You'll find all the useful switches by executing tdbloader --help.

Neither of the loaders will delete anything, ever. The tdbupdate command can 
execute SPARQL Update, which you could use for the purpose. If your database is 
supporting a Fuseki instance, you can use the Graph Store protocol:

https://www.w3.org/TR/sparql11-http-rdf-update

and Fuseki includes convenient command-line scripts:

https://jena.apache.org/documentation/fuseki2/soh.html

in this case, s-delete.
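
From Java, the same Graph Store Protocol operations can be driven with RDFConnection; a minimal sketch against a hypothetical Fuseki dataset URL, graph name, and local file:

import org.apache.jena.rdfconnection.RDFConnection;
import org.apache.jena.rdfconnection.RDFConnectionFactory;

public class GraphMaintenance {
  public static void main(String[] args) {
    try (RDFConnection conn = RDFConnectionFactory.connect("http://localhost:3030/ds")) {
      conn.delete("http://example.org/graphs/old");             // GSP DELETE of one named graph
      conn.load("http://example.org/graphs/new", "new.ttl");    // GSP POST of a local file into a named graph
    }
  }
}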

---
A. Soroka
The University of Virginia Library

> On Apr 16, 2017, at 12:54 PM, Laura Morales <laure...@mail.com> wrote:
> 
> Can I use tdbloader to load or delete a single graph from a dataset? Or maybe 
> some other command line tool?



Re: Query performance over 1 dataset, many graphs

2017-04-16 Thread A. Soroka
To some extent it will depend on the dataset implementation in use, but the two 
most likely are TDB and TIM (transactional in-memory) and for either of those, 
there is no particular hit. Both use covering indexes that include orderings to 
prevent that.

If you are using some other (not core Jena) dataset implementation, it will 
depend on the specifics. 

---
A. Soroka
The University of Virginia Library

> On Apr 16, 2017, at 9:45 AM, Laura Morales <laure...@mail.com> wrote:
> 
> If I query my dataset as "SELECT * FROM  ...", is there a 
> performance hit if  is in the same dataset with many (10s or 
> 100s, more?) other graphs? Or is this fact completely irrelevant?



Re: Very slow tdbloader2 insertion

2017-04-15 Thread A. Soroka
To start with, tdbloader2 uses the assumption that the tuples are sorted 
(actually, it sorts them, then uses that assumption) as described in this old 
blog post of Andy's:

https://seaborne.blogspot.com/2010/12/repacking-btrees.html

That's one reason that you only want to use tdbloader2 to start from scratch. 
Andy, of course, can say more.

---
A. Soroka
The University of Virginia Library

> On Apr 15, 2017, at 2:58 PM, Laura Morales <laure...@mail.com> wrote:
> 
>> Use tdbloader for 10M quads.
> 
> I wonder how is tdbloader technically different from tdbloader2. What makes 
> tdbloader more suited for small/medium datasets and tdbloader2 more suited 
> for very large datasets? Do they implement different insertion algorithms?



Re: SELECTing s properties in the same query

2017-04-15 Thread A. Soroka
Jena's DESCRIBE behavior is pluggable. [1] If you can live with one "hop" on 
bnode traversal, you can use a plain query with isBlank. I'm sure this could be 
written better, but something like:

CONSTRUCT { ?sub ?pred ?obj } 

WHERE { 

  {  VALUES ?sub {  }
 ?sub ?pred ?obj . }
UNION
   {  ?somepred ?sub . 
?sub ?pred ?obj.  
FILTER isBlank(?sub) }
 }

One leg accounts for properties directly on the resource and the other for 
properties on bnodes connected to the resource. If you may have bnodes 
connecting to bnodes, you can also iterate a query like that to get the 
closure. For most people, one "hop" is likely enough.
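
A minimal sketch of running a query shaped like the one above from Java, using ParameterizedSparqlString to splice in the resource IRI (the IRI and the in-memory model are hypothetical):

import org.apache.jena.query.ParameterizedSparqlString;
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

public class OneHopDescribe {
  public static void main(String[] args) {
    Model data = ModelFactory.createDefaultModel();             // stands in for your real model
    ParameterizedSparqlString pss = new ParameterizedSparqlString(
        "CONSTRUCT { ?sub ?pred ?obj } WHERE {"
        + "  { VALUES ?sub { ?resource } ?sub ?pred ?obj . }"
        + "  UNION"
        + "  { ?resource ?somepred ?sub . ?sub ?pred ?obj . FILTER isBlank(?sub) }"
        + "}");
    pss.setIri("resource", "http://example.org/thing/1");       // hypothetical resource IRI
    try (QueryExecution qExec = QueryExecutionFactory.create(pss.asQuery(), data)) {
      Model description = qExec.execConstruct();
      description.write(System.out, "TURTLE");
    }
  }
}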

---
A. Soroka
The University of Virginia Library

[1] https://jena.apache.org/documentation/query/extension.html#describe-handlers

> On Apr 15, 2017, at 5:58 AM, james anderson <ja...@dydra.com> wrote:
> 
> good afternoon;
> 
>> On 2017-04-15, at 11:43, Laura Morales <laure...@mail.com> wrote:
>> 
>>> Blank node closure is non-trivial
>> 
>> wouldn't be this the same problem with URLs as well?
> 
> yes, in general.
> 
>> For example if a node is pointing to another resource's URL and I want to 
>> retrieve some properties of that linked resource from the same query?
> 
> although one often sees sparql’s ‘describe’ treated with antiseptic isolation 
> gloves, there is a thorough description of how one might implement it[1], 
> which is followed in various respects in several sparql implementations.
> by this approach, the distinction between iri and blank nodes bounds the 
> description.
> 
> best regards, from berlin,
> - - -
> [1] : https://www.w3.org/Submission/CBD/
> - - -
> 
> 



Re: Predicates with no vocabulary

2017-04-12 Thread A. Soroka
Perhaps what might be helpful is making up your own _namespace_. Call it 
"http://lauramorales.com/data/; (or use some domain you own). Then you can mint 
predicates as easily as:

http://lauramorales.com/data/myFirstPredicate
http://lauramorales.com/data/theNextPredicate
etc.
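
In Jena code, such predicates are just Property objects created in that namespace; a small sketch, reusing the made-up namespace above:

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;

public class MintedPredicates {
  static final String NS = "http://lauramorales.com/data/";     // the made-up namespace above

  public static void main(String[] args) {
    Model model = ModelFactory.createDefaultModel();
    Property myFirstPredicate = model.createProperty(NS, "myFirstPredicate");
    Resource subject = model.createResource(NS + "thing1");
    subject.addProperty(myFirstPredicate, "some value");
    model.write(System.out, "N-TRIPLES");
  }
}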

---
A. Soroka
The University of Virginia Library

> On Apr 12, 2017, at 5:04 AM, Andy Seaborne <a...@apache.org> wrote:
> 
> 
> 
> On 12/04/17 09:49, Laura Morales wrote:
>>> The question is a bit unclear. If there is no existing vocabulary that
>>> you can resp. want to reuse, then you have to use your own vocabulary
>>> which basically just means to use your own URIs for the predicates.
>> 
>> Right, so let's say I don't want to define any new vocabulary, but I just 
>> want to use some predicates. For example a predicate called "predicate1" and 
>> "predicate2". These are not meant to be shared, I use them for whatever 
>> reason and I take full responsibility to shooting myself in the foot. Is 
>> there any "catch-all" or "default/undefined" vocabulary that I can use? I 
>> mean something like a default vocabulary that parses as valid URIs, but 
>> whose meaning is undefined (= the interpretation is left to the user)? 
>> Something like "  " and " 
>>  "... I wonder if I should use " 
>> <_:predicate1> " but I'm not sure?!
>> 
> 
> Just use a predicate - make up a URI.
> 
> <http://example/s> <http://example/myPredicate> <http://example/o> .
> 
> Vocabularies are a way to organise predicates (etc) - basic RDF has URIs for 
> predicates, no notion of vocabularies.
> 
>> I wonder if I should use " <_:predicate1> " but I'm not 
>> sure?!
> 
> It has to be a URI and "_" isn't a valid URI scheme.
> 
>Andy
> 
> (RIOT treats <_:> as blank nodes but they still have to be in a legal 
> position.)



Re: ArrayIndexOutOfBounds exception on un-synchronized model modifications?

2017-04-11 Thread A. Soroka
Yes, you will want to exercise some control over concurrency here:

https://jena.apache.org/documentation/notes/concurrency-howto.html
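
A minimal sketch of guarding the update in the code below with the critical-section API described on that page:

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.shared.Lock;

public class GuardedAdd {
  public static void main(String[] args) {
    Model model = ModelFactory.createDefaultModel();
    Property p = model.createProperty("urn:ex:p");
    Runnable task = () -> {
      model.enterCriticalSection(Lock.WRITE);                   // one writer at a time
      try {
        Resource r = model.createResource();
        r.addLiteral(p, "value");
      } finally {
        model.leaveCriticalSection();
      }
    };
    task.run();                                                 // run from as many threads as needed
  }
}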

---
A. Soroka
The University of Virginia Library

> On Apr 11, 2017, at 1:16 PM, Joshua TAYLOR <joshuaaa...@gmail.com> wrote:
> 
> I expect the answer to my question is simply "make sure model access
> is synchronized", but just in case, I'm wondering whether this is
> expected behavior.  Here's some code that modifies a model from a
> bunch of different threads. This doesn't cause an error every time,
> but occasionally throws, as shown in the stacktrace following the
> code.
> 
> The class here is called OhDearTest, because I was getting a
> `jena.shared.BrokenException oh dear, already have a slot for ...`
> earlier, which I'm still trying to reproduce. This code doesn't seem
> to trigger it. I've included a bit of that stacktrace at the very end.
> 
> ## Code
> 
> import org.apache.jena.rdf.model.Model;
> import org.apache.jena.rdf.model.ModelFactory;
> import org.apache.jena.rdf.model.Property;
> import org.apache.jena.rdf.model.Resource;
> 
> public class OhDearTest {
>  public static void main(String[] args) throws InterruptedException {
>int n = 1000;
>Model model = ModelFactory.createDefaultModel();
>Property p = model.createProperty("urn:ex:p");
>Thread[] thread = new Thread[n];
>for (int i = 0; i < n; i++) {
>  thread[i] = new Thread(() -> {
>Resource r = model.createResource();
>r.addLiteral(p, "value");
>  });
>  thread[i].start();
>}
>for (int i = 0; i < n; i++) {
>  thread[i].join();
>}
>  }
> }
> 
> ## Stacktrace
> 
> Exception in thread "Thread-300" Exception in thread "Thread-299"
> Exception in thread "Thread-304" Exception in thread "Thread-282"
> Exception in thread "Thread-324" Exception in thread "Thread-331"
> Exception in thread "Thread-328"
> java.lang.ArrayIndexOutOfBoundsException: 905
>at org.apache.jena.mem.HashCommon.findSlot(HashCommon.java:164)
>at 
> org.apache.jena.mem.HashedTripleBunch.contains(HashedTripleBunch.java:40)
>at org.apache.jena.mem.NodeToTriplesMapMem.add(NodeToTriplesMapMem.java:52)
>at 
> org.apache.jena.mem.GraphTripleStoreBase.add(GraphTripleStoreBase.java:63)
>at org.apache.jena.mem.GraphMem.performAdd(GraphMem.java:37)
>at org.apache.jena.graph.impl.GraphBase.add(GraphBase.java:181)
>at org.apache.jena.rdf.model.impl.ModelCom.add(ModelCom.java:1191)
>at 
> org.apache.jena.rdf.model.impl.ResourceImpl.addLiteral(ResourceImpl.java:285)
>at OhDearTest.lambda$0(OhDearTest.java:15)
>at java.lang.Thread.run(Thread.java:745)
> java.lang.ArrayIndexOutOfBoundsException: 789
>at org.apache.jena.mem.HashCommon.findSlot(HashCommon.java:164)
>at 
> org.apache.jena.mem.HashedTripleBunch.contains(HashedTripleBunch.java:40)
>at org.apache.jena.mem.NodeToTriplesMapMem.add(NodeToTriplesMapMem.java:52)
>at 
> org.apache.jena.mem.GraphTripleStoreBase.add(GraphTripleStoreBase.java:63)
>at org.apache.jena.mem.GraphMem.performAdd(GraphMem.java:37)
>at org.apache.jena.graph.impl.GraphBase.add(GraphBase.java:181)
>at org.apache.jena.rdf.model.impl.ModelCom.add(ModelCom.java:1191)
>at 
> org.apache.jena.rdf.model.impl.ResourceImpl.addLiteral(ResourceImpl.java:285)
>at OhDearTest.lambda$0(OhDearTest.java:15)
>at java.lang.Thread.run(Thread.java:745)
> java.lang.ArrayIndexOutOfBoundsException: 827
>at org.apache.jena.mem.HashCommon.findSlot(HashCommon.java:164)
>at 
> org.apache.jena.mem.HashedTripleBunch.contains(HashedTripleBunch.java:40)
>at org.apache.jena.mem.NodeToTriplesMapMem.add(NodeToTriplesMapMem.java:52)
>at 
> org.apache.jena.mem.GraphTripleStoreBase.add(GraphTripleStoreBase.java:63)
>at org.apache.jena.mem.GraphMem.performAdd(GraphMem.java:37)
>at org.apache.jena.graph.impl.GraphBase.add(GraphBase.java:181)
>at org.apache.jena.rdf.model.impl.ModelCom.add(ModelCom.java:1191)
>at 
> org.apache.jena.rdf.model.impl.ResourceImpl.addLiteral(ResourceImpl.java:285)
>at OhDearTest.lambda$0(OhDearTest.java:15)
>at java.lang.Thread.run(Thread.java:745)
> java.lang.ArrayIndexOutOfBoundsException: 1061
>at org.apache.jena.mem.HashCommon.findSlot(HashCommon.java:164)
>at org.apache.jena.mem.HashedBunchMap.put(HashedBunchMap.java:66)
>at org.apache.jena.mem.NodeToTriplesMapMem.add(NodeToTriplesMapMem.java:51)
>at 
> org.apache.jena.mem.GraphTripleStoreBase.add(GraphTripleStoreBase.java:60)
>at org.apache.jena.mem.GraphMem.

Re: Jena native store indexes

2017-04-11 Thread A. Soroka
The Jena list can't really answer questions about "any RDF store", but for TDB, 
you begin with basic covering indexes, so you do not need to add anything (in 
fact you cannot add anything) to provide more indexing for standard SPARQL 
forms.

As has been pointed out, there are _extensions_ to SPARQL provided by Jena that 
can make use of additional indexes:

https://jena.apache.org/documentation/query/text-query.html

and

https://jena.apache.org/documentation/query/spatial-query.html
---
A. Soroka
The University of Virginia Library

> On Apr 11, 2017, at 1:30 PM, Laura Morales <laure...@mail.com> wrote:
> 
> But is Jena (or any RDF store for what matters) expected to perform well even 
> if I don't explicitly add any index?
> 
> 
>> You 'can' create text-indexes for selected properties of your data for
>> text search with a much better performance:
>> 
>> https://jena.apache.org/documentation/query/text-query.html



Re: Graphs multiple labels

2017-04-10 Thread A. Soroka
That depends a good bit on what you want to say about your graphs. One way to 
think about this kind of question is: with whom will you want to _share_ 
information about your graphs? If you are just recording this for your own 
internal purposes and you can keep your assertions minimal (e.g. where a graph 
came from and a datestamp) it might be easiest to just use a few minted 
predicates of your own. 

PROV-O:

https://www.w3.org/TR/prov-o/#prov-o-at-a-glance

is an example of a powerful but extremely general vocabulary for discussing 
provenance issues. There will be some overhead to using someone else's 
vocabulary, and that overhead is worth paying exactly to the extent that you 
need to share your assertions with other people.

---
A. Soroka
The University of Virginia Library

> On Apr 10, 2017, at 7:12 AM, Laura Morales <laure...@mail.com> wrote:
> 
>> You import LOV-1 and LOV-2 into different graphs, so they will have
>> different graph URIs and all the quads will be distinct (even if some of
>> the triples in the two graphs are the same).
>> 
>> Dave
> 
> Thank you.
> Last question: if I'm going to make my own graph to describe what graphs I've 
> imported, is there any recommended schema/vocabulary that I could use?



Re: Jena command-line tools documentation

2017-04-08 Thread A. Soroka
The CLI tool documentation is a bit scattered. Here are some useful pages:

https://jena.apache.org/documentation/query/cmds.html
https://jena.apache.org/documentation/tdb/commands.html


---
A. Soroka
The University of Virginia Library

> On Apr 8, 2017, at 6:22 PM, Laura Morales <laure...@mail.com> wrote:
> 
> Are the Jena command line tools (arq, infer, nquads, trig, utf8, etc...) 
> documented somewhere? I'm not talking of the --help page, but a more general 
> discussion of what they're supposed to do.



Re: DOAP retrieve license info

2017-04-07 Thread A. Soroka
It's hard to tell what you are doing without seeing your query. When you ask a 
question about SPARQL, it's a good idea to always include the query, some 
sample data and how you executed the query, even if it seems obvious.

Speculating about what you did, and assuming that you built your "license" 
column from the DOAP "license" property, then we examine that _predicate_ to 
find the meaning of the URI. In this case we get lucky because DOAP is a 
well-known vocabulary using an http:// namespace. (Once we expand any 
prefixing) doap:license becomes http://usefulinc.com/ns/doap#license. 
Retrieving that URI we get a schema describing the DOAP vocabulary. That schema 
is published in RDF/XML, but if we translate it to NTriples (using a tool like 
Jena's riot) we see a triple:

<http://usefulinc.com/ns/doap#license> 
<http://www.w3.org/2000/01/rdf-schema#comment> "The URI of an RDF description 
of the license the software is distributed under."@en .

So we've found what we can consider the meaning of the URI on the other end of 
that predicate. The fact that the link is broken is unfortunate, but not a new 
problem on the Web. Bringing semantics to the Web hasn't gotten and won't get 
rid of the classic problem of link rot.

As to whether it is used in some other graph somewhere as a subject, I would be 
surprised if it is not. But that's not the thing that matters for determining 
the meaning of its appearance in a graph that you are working with. That 
meaning comes entirely from the predicate with which it appears, as subject or 
object. That's how RDF works-- meaning is built _up_ out of triples, not _down_ 
from larger contexts, and the relationship that is being proposed in a triple 
is defined by the predicate used. To find the meaning of a subject or object, 
find the meaning of the predicate with which it is used.

In the case of a predicate in an HTTP namespace, you can start by simply 
dereferencing its URI, with a browser or other tool. You might find 
human-centered documentation or more RDF where the predicate's URI features as 
a subject in a triple that gives it a meaning, as we found above. In the case 
of other vocabularies, you will have to find documentation/semantics in some 
other way that will depend on the protocol and form of the URI in use. But HTTP 
URIs are by far the most common. This "find a URI in a graph, follow 
it, find another graph with more information" is the essential mechanism of 
linked data done with RDF.
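
The dereference-the-predicate step can also be done from Jena itself; a minimal sketch, assuming the DOAP namespace still serves RDF at that URI:

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.rdf.model.Statement;
import org.apache.jena.riot.RDFDataMgr;
import org.apache.jena.vocabulary.RDFS;

public class FollowThePredicate {
  public static void main(String[] args) {
    // Fetch the schema published at the DOAP namespace
    Model doap = RDFDataMgr.loadModel("http://usefulinc.com/ns/doap");
    Resource license = doap.getResource("http://usefulinc.com/ns/doap#license");
    // Read the rdfs:comment triple that documents the predicate
    Statement comment = license.getProperty(RDFS.comment);
    if (comment != null) {
      System.out.println(comment.getString());
    }
  }
}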

---
A. Soroka
The University of Virginia Library

> On Apr 7, 2017, at 5:33 AM, Laura Morales <laure...@mail.com> wrote:
> 
> I'm experimenting with Fuseki and the DOAP files of the Apache projects.
> I've run a query to return name/license/description about SpamAssassin, and 
> this is the result
> 
> {
>  "head": {
>"vars": [ "name" , "license" , "description" ]
>  } ,
>  "results": {
>"bindings": [
>  {
>"name": { "type": "literal" , "xml:lang": "en" , "value": "Apache 
> SpamAssassin" } ,
>"license": { "type": "uri" , "value": 
> "http://usefulinc.com/doap/licenses/asl20; } ,
>"description": { "type": "literal" , "xml:lang": "en" , "value": 
> "Apache SpamAssassin is an extensible email filter which is used to identify 
> spam. Using its rule base, it uses a wide range of advanced heuristic and 
> statistical analysis tests on mail headers and body text to identify 
> \"spam\", also known as unsolicited bulk email. Once identified, the mail can 
> then be optionally tagged as spam for later filtering. It provides a command 
> line tool to perform filtering, a client-server system to filter large 
> volumes of mail, and Mail::SpamAssassin, a set of Perl modules." }
>  }
>]
>  }
> }
> 
> and this is the corresponding DOAP file 
> https://spamassassin.apache.org/doap.rdf
> 
> I'm a bit confused about the meaning of the URI 
> <http://usefulinc.com/doap/licenses/asl20>. Does this mean that there used to 
> be the content of the asl20 license on that URL, and now the link is broken 
> since there is nothing there? Or does it represent the "subject" of some 
> other resource in some other graph where I can find more information about 
> that license (in which case, where do I find said graph)?



Re: /data endpoint

2017-04-06 Thread A. Soroka
The number and names of such endpoints are configurable in Fuseki's flexible 
RDF configuration language:

https://jena.apache.org/documentation/fuseki2/fuseki-configuration.html#defining-the-service-name-and-endpoints-available

---
A. Soroka
The University of Virginia Library

> On Apr 6, 2017, at 2:17 AM, Osma Suominen <osma.suomi...@helsinki.fi> wrote:
> 
> 06.04.2017, 03:37, Laura Morales kirjoitti:
>> Looking at Fuseki URL scheme: 
>> https://jena.apache.org/documentation/serving_data/#server-uri-scheme
>> 
>> - Is the "SPARQL Graph Store Protocol endpoint" available at "/data" 
>> reserved for update/delete operations on the dataset (add or remove nquads, 
>> add or remove graphs)?
> 
> It can also be used for read requests i.e. HTTP GET to get individual graphs 
> or the whole dataset. But generally yes, this is more of a maintenance API 
> than something you would likely want to expose to outsiders. It should be 
> safe to expose GET, though.
> 
>> - Is "/query" the only endpoint that users need to know if they want to 
>> query my graph?
> 
> Either that or "/sparql", in the default configuration they are defined 
> exactly the same way. Both accept SPARQL queries (not updates).
> 
> -Osma
> 
> 
> -- 
> Osma Suominen
> D.Sc. (Tech), Information Systems Specialist
> National Library of Finland
> P.O. Box 26 (Kaikukatu 4)
> 00014 HELSINGIN YLIOPISTO
> Tel. +358 50 3199529
> osma.suomi...@helsinki.fi
> http://www.nationallibrary.fi



Re: Named graphs

2017-04-06 Thread A. Soroka
For completeness: the core RDF recommendation

https://www.w3.org/TR/rdf11-concepts/#section-dataset

says "Each named graph is a pair consisting of an IRI or a blank node (the 
graph name), and an RDF graph." i.e. it is perfectly legal to name graphs with 
blank nodes. Obviously, that will have pretty strong effects on your ability to 
refer to named graphs from other contexts than the one in which you created 
them.

On the other hand, as the same recommendation says just a bit later: "SPARQL 
1.1 Query Language only allows RDF Graphs to be identified using an IRI." 

https://www.w3.org/TR/sparql11-query/#rdfDataset:

"An RDF Dataset comprises one graph, the default graph, which does not have a 
name, and zero or more named graphs, where each named graph is identified by an 
IRI."

In neither place is an absolute URI required. That comes in, as Conal Tuohy 
wrote, in the way you might use SPARQL Graph Store, but it's actually a bit 
subtle:

https://www.w3.org/TR/sparql11-http-rdf-update/#direct-graph-identification

---
A. Soroka
The University of Virginia Library

> On Apr 6, 2017, at 4:20 AM, Laura Morales <laure...@mail.com> wrote:
> 
> Thank you for the great response.
> 
> ==
> 
> Subject: Re: Named graphs
> Hi Laura
> 
> If you're asking "why did the W3 decide that graph names must be absolute
> IRIs?" then this is probably not the best forum to ask, but anyway ...
> 
> One reason is so that absolute IRIs can be used as part of the WWW; graph
> names may actually resolve to serializations of the graph content. This is
> what the SPARQL Graph Store Protocol calls "Direct Graph Identification". <
> https://www.w3.org/TR/sparql11-http-rdf-update/#direct-graph-identification>
> 
> There's also the useful possibility that RDF statements can refer to graphs
> by name, e.g. to store metadata about the provenance of graphs: <
> https://www.w3.org/TR/rdf11-datasets/#the-graph-name-denotes-the-named-graph-or-the-graph
>> 
> 
> In any case, that's the standard we have to live with, so you do need to
> use IRIs to name your graphs, and NB the SPARQL Graph Store Protocol also
> requires graph names to be absolute IRIs: <
> https://www.w3.org/TR/sparql11-http-rdf-update/#indirect-graph-identification
>> 
> 
> "The query string IRI MUST be an absolute IRI and the server MUST respond
> with a 400 Bad Request if it is not. "
> 
> Not all implementations enforce this properly; I've seen some systems in
> which relative URIs are allowed, but this is not compliant and will cause
> interoperability problems.
> 
> However, your graph names don't have to be HTTP IRIs; you could use "data:"
> or "tag:" or any other URI scheme. So you could call your graphs
> <data:,graph-1> and <data:,graph-2> and still comply with the specification
> and keep Fuseki happy.



Re: TDH disk toll

2017-04-06 Thread A. Soroka
Nothing that you wouldn't look at for any other server application. The size of 
the dataset may not make as big a difference as the character of your queries 
(how much scanning are they doing, are you using expansive property paths, that 
sort of thing). 

---
A. Soroka
The University of Virginia Library

> On Apr 6, 2017, at 7:34 AM, Laura Morales <laure...@mail.com> wrote:
> 
>> It caches as much as it can so should not thrash the disk unless you
>> have a very heavy update load (in which case, you really want the disk
>> copy to be updated safely).
> 
> In the case of a read-only dataset (few 10s of GBs), is there anything in 
> particular that I should keep an eye on, or is it going to be fine?



Re: tdbloader vs tdbloader2

2017-04-05 Thread A. Soroka
https://jena.apache.org/documentation/tdb/commands.html#tdbloader2

clarifies the differences pretty thoroughly. What is confusing about them?

---
A. Soroka
The University of Virginia Library

> On Apr 5, 2017, at 11:47 AM, Laura Morales <laure...@mail.com> wrote:
> 
> What's the difference between tdbloader and tdbloader2?



Re: DOAP

2017-04-04 Thread A. Soroka
Some (I think most) projects maintain their own DOAP, and it can be found in a 
different location of a project website per-project.

---
A. Soroka
The University of Virginia Library

> On Apr 4, 2017, at 1:43 PM, Laura Morales <laure...@mail.com> wrote:
> 
>> https://jena.apache.org/about_jena/jena.rdf
> 
> What I meant is DOAP files for all Apache projects, not only Jena.



Re: Why we need Fuseki

2017-04-04 Thread A. Soroka
> On Apr 4, 2017, at 10:25 AM, baran...@gmail.com wrote:
> 
>>> what kind of problems do you see, i have a local Fuseki server running 
>>> downloaded nt-Dbpedia datasets, which i regulary actualize.
>> That doesn't really help anyone compare Jena and Virtuoso, does it? :)
> Ofcourse it does, if you run those datasets as a public Fuseki-endpoint like 
> Virtuoso...

At which point you have the same problem I named before. Unless the resourcing 
for those public endpoints is the same, you don't have a real comparison at all.

>> I'm sorry, I am a bit confused; are you able to volunteer some time or 
>> resources to this purpose? What you would like the Jena team to do to help 
>> _you_ implement this idea?
> Jena Team should run A REFERENCE PUBLIC ENDPOINT and say to the world 'here 
> we are', this is not my job, it has something to do with 'credibility' of the 
> actual development...

I don't know if this is always quite clear, so it sometimes bears remarking: the 
Jena team (like all Apache efforts) is an all-volunteer group. No one is paid 
by Apache to work on Jena. If you would like to take your idea forward, let's 
talk about how to do that. If you just want someone else to implement it for 
you, it is not the job of anyone on this list to do so, so we can end this 
conversation.

---
A. Soroka
The University of Virginia Library



Re: Why we need Fuseki

2017-04-04 Thread A. Soroka
On Apr 4, 2017, at 10:03 AM, baran...@gmail.com wrote:
> 
>> I've got nothing against DBPedia, although I don't think it's particularly 
>> useful to make a comparison in that way between Virtuoso and Jena, unless 
>> you are ready to do the work to ensure that the actual resourcing for the 
>> two services is the same, forever.
> 
> what kind of problems do you see, i have a local Fuseki server running 
> downloaded nt-Dbpedia datasets, which i regulary actualize.

That doesn't really help anyone compare Jena and Virtuoso, does it? :)

>> Where would you be serving this data from? Do you have perhaps employer 
>> backing or other long-term backing for this?
> 
> Such a service should be A REFERENCE PUBLIC ENDPOINT run by Jena Development 
> like Virtuoso runs it with DBpedia, but the Jena Team can take another dataset 
> of course. Is there now such A REFERENCE PUBLIC ENDPOINT run by the Jena 
> Team? If you think this is not necessary, then ok...

I'm sorry, I am a bit confused; are you able to volunteer some time or 
resources to this purpose? What you would like the Jena team to do to help 
_you_ implement this idea?

---
A. Soroka
The University of Virginia Library



Re: Why we need Fuseki

2017-04-04 Thread A. Soroka
I've got nothing against DBPedia, although I don't think it's particularly 
useful to make a comparison in that way between Virtuoso and Jena, unless you 
are ready to do the work to ensure that the actual resourcing for the two 
services is the same, forever. 

Where would you be serving this data from? Do you have perhaps employer backing 
or other long-term backing for this? 

---
A. Soroka
The University of Virginia Library

> On Apr 4, 2017, at 9:34 AM, baran...@gmail.com wrote:
> 
> 
>> This sounds like an interesting idea. Do you have some time to devote to it? 
>> What database are you thinking of serving?
> 
> Well, we can take the same as Virtuoso, Dbpedia-dataset, THE BEST would be 
> EXACTLY the same as Virtuoso to make comparisons, but this is an old 'idea' 
> of mine, here in this listing about 5-6 years old, i think...
> 
> thanks, baran
> 
>> 
>> ---
>> A. Soroka
>> The University of Virginia Library
>> 
>>> On Apr 4, 2017, at 4:48 AM, baran...@gmail.com wrote:
>>> 
>>> On Mon, 03 Apr 2017 14:54:53 +0200, javed khan <javedbtk...@gmail.com> 
>>> wrote:
>>> 
>>>> Hi
>>>> 
>>>> Why we need fuseki server in semantic web applications. We can run SPARQL
>>>> queries without it, like we do using Jena syntax.
>>> 
>>> If Fuseki would have had (like Virtuoso) a reference public endpoint with a 
>>> well known database, then there would be no need for such a question...
>>> 
>>> baran
>>> 
>>> --
>>> Using Opera's mail client: http://www.opera.com/mail/
>> 
> 
> 
> -- 
> Using Opera's mail client: http://www.opera.com/mail/



Re: Why we need Fuseki

2017-04-04 Thread A. Soroka
>> If Fuseki would have had (like Virtuoso) a reference public endpoint with a 
>> well known database, then there would be no need for such a question...

This sounds like an interesting idea. Do you have some time to devote to it? 
What database are you thinking of serving? 

---
A. Soroka
The University of Virginia Library

> On Apr 4, 2017, at 4:48 AM, baran...@gmail.com wrote:
> 
> On Mon, 03 Apr 2017 14:54:53 +0200, javed khan <javedbtk...@gmail.com> wrote:
> 
>> Hi
>> 
>> Why we need fuseki server in semantic web applications. We can run SPARQL
>> queries without it, like we do using Jena syntax.
> 
> If Fuseki would have had (like Virtuoso) a reference public endpoint with a 
> well known database, then there would be no need for such a question...
> 
> baran
> 
> -- 
> Using Opera's mail client: http://www.opera.com/mail/



Re: Ontology Imports

2017-04-02 Thread A. Soroka
I cannot find a method read(InputStream stream, String Lang) on Model (from 
which OntModel inherits its "read" methods). Are you by chance using 
read(InputStream in, String base), which is a very different semantic?
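
For comparison, a minimal sketch of passing the language explicitly with the three-argument read (file name and base URI hypothetical); whether imports then get fetched is the separate question being discussed here:

import java.io.FileInputStream;
import java.io.InputStream;
import org.apache.jena.ontology.OntModel;
import org.apache.jena.ontology.OntModelSpec;
import org.apache.jena.rdf.model.ModelFactory;

public class ReadWithExplicitLang {
  public static void main(String[] args) throws Exception {
    OntModel model = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM);
    try (InputStream in = new FileInputStream("ExampleOntology.ttl")) {
      // Three-argument form: stream, base URI, language name.
      // The two-argument read(InputStream, String) treats the String as the base URI, not the language.
      model.read(in, "http://example.com/ExampleOntology", "TURTLE");
    }
  }
}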

---
A. Soroka
The University of Virginia Library

> On Mar 31, 2017, at 2:38 PM, Donald Smith <donald.sm...@argodata.com> wrote:
> 
> RDFDataMgr does fine while loading a given RDF file, but what I'm trying to 
> do is to use OntModel to read an ontology from local disk which would in turn 
> fetch the imported ontologies. For any imported ontology that is fetched via 
> HTTP that is returned as RDF/XML works fine. For any imported ontology that 
> is of any other type, such as turtle, it fails.
> 
> Does OntModel.read(InputStream stream, String Lang) not use RDFDataMgr itself 
> to load imported ontologies?
> 
> -Original Message-
> From: Dave Reynolds [mailto:dave.e.reyno...@gmail.com]
> Sent: Thursday, March 30, 2017 2:57 AM
> To: users@jena.apache.org
> Subject: Re: Ontology Imports
> 
> On 29/03/17 20:54, Donald Smith wrote:
>> 
>> Given I have an ontology that imports one or more other ontologies, when I 
>> read that ontology:
>> 
>> model.read("http://example.com/ExampleOntology.owl;, "TURTLE");
> 
> That should be "Turtle" or, better, RDFLanguages.strLangTurtle or better 
> still use RDFDataMgr and let it work out the language.
> 
> Dave
> --- Confidentiality Notice: 
> This electronic mail transmission is confidential, may be privileged and 
> should be read or retained only by the intended recipient. If you have 
> received this transmission in error, please immediately notify the sender and 
> delete it from your system.



Re: persistent inference on named graphs in Fuseki

2017-04-02 Thread A. Soroka
Datasets are covered very nicely in the RDF core recommendations:

https://www.w3.org/TR/rdf11-concepts/#section-dataset
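
A minimal sketch of the approach Dave describes below: pointing a rule-based reasoner at TDB's union graph (database location hypothetical, RDFS rules chosen only as an example):

import org.apache.jena.query.Dataset;
import org.apache.jena.query.ReadWrite;
import org.apache.jena.rdf.model.InfModel;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.tdb.TDBFactory;

public class InferenceOverUnion {
  public static void main(String[] args) {
    Dataset dataset = TDBFactory.createDataset("/path/to/tdb");  // hypothetical TDB location
    dataset.begin(ReadWrite.READ);
    try {
      // Pseudo graph name exposing the union of all named graphs
      Model union = dataset.getNamedModel("urn:x-arq:UnionGraph");
      InfModel inf = ModelFactory.createRDFSModel(union);        // the reasoner sees one graph, not the dataset
      System.out.println(inf.size());
    } finally {
      dataset.end();
    }
  }
}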

---
A. Soroka
The University of Virginia Library

> On Apr 2, 2017, at 5:38 AM, Dave Reynolds <dave.e.reyno...@gmail.com> wrote:
> 
> On 02/04/17 10:25, Laura Morales wrote:
>>>> - no inference over the whole graph, only inference on a single graph
>>> 
>>> No inference support over the whole *Dataset*.
>> 
>> "whole graph" I mean 2 or more graphs loaded into the server, that together 
>> make a larger graph. Isn't this the same thing as "dataset"? Or am I missing 
>> something?
>> 
> 
> A Dataset is a collection of graph comprising one default graph and zero or 
> more named graphs.
> 
> The default graph in a dataset may be completely distinct from the named 
> graphs or may contain some precomputed combination of them or (e.g. with TDB 
> union default) you can arrange for the default graph to give the appearance 
> of being the union of all the triples in all the named graphs. These are all 
> choices, the notion of a dataset doesn't enforce any particular 
> implementation for the default graph
> 
> My point is that Jena's rule-based inference engines don't know anything 
> about datasets, just about graphs.
> 
> However, you can point an inference engine at any graph in TDB including the 
> union graph (either by using union default and pointing to the default graph 
> or by pointing to the pseudo named graph urn:x-arq:UnionGraph). Then you are 
> indeed performing inference over the union of the data it's just that the 
> inference engine doesn't know that or care.
> 
> Dave
> 



Re: Documentation of Fuseki HTTP Admin Protocol

2017-03-28 Thread A. Soroka
I _think_ these could actually be "correctly inconsistent", although not 
necessarily in the way they are now. :grin:

Some of them are endpoints for collections of things, e.g. /$/datasets/, but 
some are not, e.g. /$/server. There is no /$/server/ because there is no 
collection of servers. But that depends on the interpretation of trailing slash 
as collection, which is hardly universal.


---
A. Soroka
The University of Virginia Library

> On Mar 28, 2017, at 6:25 AM, Bruno P. Kinoshita 
> <brunodepau...@yahoo.com.br.INVALID> wrote:
> 
> Ack. Turned off the computer. Let me try them and re-read that page with more 
> calm tomorrow.
> If you have any other suggestions for that page let me know, like add / 
> remove sections, re-word, clarify, etc.
> Thanks, Bruno
> 
> Sent from Yahoo Mail on Android 
> 
>  On Tue, Mar 28, 2017 at 23:16, Sweeney, Chris<chris.swee...@sepa.org.uk> 
> wrote:   Hi Bruno,
> 
> Spelling is correct and consistent now. Still showing the trailing slash.
> That gives a 404 on Fuseki 2.4.1 at least.
> 
> There's an inconsistency here:
> /$/datasets and /$/datasets/ both work.
> /$/stats and /$/stats/  both work.
> /$/tasks and /$/tasks/ both work.
> /$/server works and  /$/server/  404s.
> /$/backups-list works and /$/backups-list/ 404s.
> /$/ping works and /$/ping/  404s.
> 
> I only tried these as GETs via a browser.
> 
> 
> 
> 
> 
> 
> 
> -Original Message-
> From: Bruno P. Kinoshita [mailto:brunodepau...@yahoo.com.br.INVALID] 
> Sent: 28 March 2017 10:59
> To: users@jena.apache.org
> Subject: Re: Documentation of Fuseki HTTP Admin Protocol
> 
> Hi Chris,
> 
> You are correct. The correct link is /$/backups-list 
> (https://github.com/apache/jena/blob/347d7764dc9132e182e5f8c12de99c3c20938ce8/jena-fuseki2/jena-fuseki-core/src/main/webapp/WEB-INF/web.xml#L177)
> 
> Fixed in SVN. Will be updated once the site is published. You can check the 
> current staging web site to see what is going to look like
> 
> http://jena.staging.apache.org/documentation/fuseki2/fuseki-server-protocol.html
> 
> Thanks
> Bruno
> 
> From: "Sweeney, Chris" <chris.swee...@sepa.org.uk>
> To: "'users@jena.apache.org'" <users@jena.apache.org> 
> Sent: Tuesday, 28 March 2017 12:15 AM
> Subject: Documentation of Fuseki HTTP Admin Protocol
> 
> 
> 
> Hi,
> 
> 
> Just a minor correction to the documentation at 
> http://jena.apache.org/documentation/fuseki2/fuseki-server-protocol.html as 
> it confused me for a while.
> 
> 
> The URL pattern for listing backups is incorrect in both the table of 
> operations (/$/backup-lists/) and the later section on backup 
> (/$/backups-lists/).
> 
> The correct pattern appears to be /$/backups-list  
> 
> Note that both the spelling and the absence of the trailing slash are 
> significant.
> 
> 
> 
> Regards,
> 
> 
> chris
> 
> 
> Chris Sweeney
> 
> SEPA
> 
> t: 01698 839437
> 
> e: chris.swee...@sepa.org.uk
> 
> http://www.sepa.org.uk
> 



Re: Jena scalability

2017-03-26 Thread A. Soroka
TDB is a native store, with a next generation version in development [1]. SDB 
uses a SQL backend. It is not under active development. Claude Warren (one of 
the Jena committers) has been working on an Apache Cassandra backend, and he 
can say more about it if it seems relevant. 

---
A. Soroka
The University of Virginia Library

[1] https://github.com/afs/mantis

> On Mar 26, 2017, at 12:54 PM, Dick Murray <dandh...@gmail.com> wrote:
> 
> On 26 Mar 2017 5:20 pm, "Laura Morales" <laure...@mail.com> wrote:
> 
> - Is Jena a "native" store? Or does it use some other RDBMS/NoSQL backends?
> 
> 
> It has memory, TDB and SDB (I'm not sure of the current state)
> 
> - Has anybody ever done tests/benchmarks to see how well Jena scales with
> large datasets (billions or trillions of n-quads)?
> 
> 
> We have several 650GB TDB instances and some Mem instances at 128 GB. What queries
> are being performed? How many graphs do you have? Are you just querying or
> updating as well?
> 
> - Is it possible to start with a single machine, and later distribute the
> database over multiple machines as the graph grows?
> 
> 
> Not currently with TDB but i have code in production which aggregates
> across multiple DatasetGraph's. We create a DatasetGraphMosaic and add
> DatasetGraph's to it. TDB in other JVM's are supported via a Thrift based
> proxy. This allows simple sparql, otherwise use the service command in your
> query...



Re: How to select an entity which has a property with a certain value?

2017-03-26 Thread A. Soroka
Please show us sample data and the means by which you are executing the query.

https://stackoverflow.com/help/mcve

---
A. Soroka
The University of Virginia Library

> On Mar 26, 2017, at 9:52 AM, Dmitri Pisarenko <d...@altruix.co> wrote:
> 
> Hello!
> 
> FYI: Modifying the query to
> 
> SELECT ?x
> WHERE { 
>   ?x <http://mycompany.com/data/bp-2/batch/batchNumber> ?curBatchId .
>   FILTER (?curBatchId = 4)
> }
> 
> doesn't change anything.
> 
> 
> Best regards
> 
> Dmitri Pisarenko
> 
> 
> 
> 26.03.2017, 16:41, "Dmitri Pisarenko" <d...@altruix.co>:
>> Hello!
>> 
>> I run a query
>> 
>> SELECT ?x WHERE { ?x <http://mycompany.com/data/bp-2/batch/batchNumber> 4 }
>> 
>> Its purpose is to get the individual whose property 
>> http://mycompany.com/data/bp-2/batch/batchNumber is equal to 4.
>> 
>> When I run this query the result is a completely different individual and 
>> the condition (batchNumber==4) is not true for it.
>> 
>> What is the correct version of this query (select the individual whose 
>> property X is equal to value Y)?
>> 
>> Thanks in advance
>> 
>> Dmitri Pisarenko



Re: Limited HTTP API

2017-03-26 Thread A. Soroka
There is some documentation about combining Shiro with jena-permissions 
available here:

https://jena.apache.org/documentation/permissions/example.html

How useful that will be to you depends on how well you can fit the patterns of 
usage you want to control into the dataset/graph/triple framework over which 
jena-permissions works.

---
A. Soroka
The University of Virginia Library

> On Mar 26, 2017, at 9:58 AM, A. Soroka <aj...@email.virginia.edu> wrote:
> 
> You have Apache Shiro available for coarse authorization action on the 
> endpoint, but that will not do much for you if you need to act differently 
> according to the parsed query.
> 
> Claude, could jena-permissions be used here for some cases?
> 
> ---
> A. Soroka
> The University of Virginia Library
> 
>> On Mar 26, 2017, at 8:04 AM, Laura Morales <laure...@mail.com> wrote:
>> 
>> I'd like to make one of my SPARQL endpoints publicly accessible through a 
>> REST API. The problem however, is that SPARQL is a very expressive language, 
>> and it's too easy to abuse it with complex, unoptimized queries.
>> I'm wondering if there's any "filter" that can be applied on the HTTP 
>> request in order to limit what the user can do; for example "allow nodes 
>> traversal only" or "return MAX results at most" etc.
> 



Re: Limited HTTP API

2017-03-26 Thread A. Soroka
You have Apache Shiro available for coarse authorization action on the 
endpoint, but that will not do much for you if you need to act differently 
according to the parsed query.

Claude, could jena-permissions be used here for some cases?

---
A. Soroka
The University of Virginia Library

> On Mar 26, 2017, at 8:04 AM, Laura Morales <laure...@mail.com> wrote:
> 
> I'd like to make one of my SPARQL endpoints publicly accessible through a 
> REST API. The problem however, is that SPARQL is a very expressive language, 
> and it's too easy to abuse it with complex, unoptimized queries.
> I'm wondering if there's any "filter" that can be applied on the HTTP request 
> in order to limit what the user can do; for example "allow nodes traversal 
> only" or "return MAX results at most" etc.



Re: Understanding DatasetGraph getLock() (DatasetGraphInMem throwing a curve ball)...

2017-03-24 Thread A. Soroka
The lock from getLock() has the same semantics for every impl-- currently 
MRSW, with no expectation of that changing. It's a kind of "system lock" to keep 
the internal state of that class consistent. That's distinct from the 
transactional semantics of a given impl. In some cases, the semantics happen to 
coincide, when the actual transactional semantics are also MRSW. But sometimes 
they don't (actually, I think DatasetGraphInMem is the only example where they 
don't right now, but I am myself tinkering with another example and I am 
confident that we will have more). When they don't, you need to rely on the 
impl to manage its own transactionality, via the methods for that purpose.  I'm 
not actually sure we have a good non-blocking method for your use right now. We 
have inTransaction(), but that's not too helpful here.

But someone else can hopefully point to a technique that I am missing.
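
For what it's worth, here is a rough sketch (not anything in Jena, just an
illustration of the kind of wrapper described below; the class and method names
are made up) of approximating a non-blocking tryBegin() by guarding begin() with
a java.util.concurrent lock held outside the wrapped implementation. Whether the
inner begin() then blocks still depends on the impl, which is exactly the problem
above.

import java.util.concurrent.locks.ReentrantReadWriteLock;

import org.apache.jena.query.ReadWrite;
import org.apache.jena.sparql.core.DatasetGraph;
import org.apache.jena.sparql.core.DatasetGraphWrapper;

public class DatasetGraphTry extends DatasetGraphWrapper {

    // MRSW-style guard maintained outside the wrapped implementation.
    private final ReentrantReadWriteLock guard = new ReentrantReadWriteLock();

    public DatasetGraphTry(DatasetGraph dsg) { super(dsg); }

    // Try to start a transaction without blocking; returns true if acquired.
    public boolean tryBegin(ReadWrite mode) {
        boolean acquired = (mode == ReadWrite.READ)
                ? guard.readLock().tryLock()
                : guard.writeLock().tryLock();
        if (acquired)
            super.begin(mode); // may still block if the impl's own begin() does
        return acquired;
    }

    // End the transaction and release the guard taken in tryBegin().
    // (For a write transaction, call commit() or abort() before this.)
    public void endTry(ReadWrite mode) {
        try {
            super.end();
        } finally {
            if (mode == ReadWrite.READ) guard.readLock().unlock();
            else                        guard.writeLock().unlock();
        }
    }
}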


---
A. Soroka
The University of Virginia Library

> On Mar 24, 2017, at 6:51 AM, Dick Murray <dandh...@gmail.com> wrote:
> 
> Hi.
> 
> Is there a way to get what Transactional a DatasetGraph is using and
> specifically what Lock semantics are in force?
> 
> As part of a distributed DatasetGraph implementation I have a
> DatasetGraphTry wrapper which adds Boolean tryBegin(ReadWrite) and as the
> name suggests it will try to lock the given DatasetGraph and return
> immediately, i.e. not block. Internally if it acquires the lock it will
> call the wrapped void begin(ReadWrite) which "should" not block. This is
> useful because I can round robin the DatasetGraph's which constitute the
> distribution without blocking. Especially useful as some of the
> DatasetGraph's are running in other JVM's.
> 
> Currently I've reverted the mapping to the DatasetGraph class (requires I
> manually check the Jena code) but I'd like to understand why and possibly
> make the code neater...
> 
> To automate the wrapping I pulled the Lock via getLock() and used the class
> to lookup the appropriate wrapper. But after digging I noticed that the
> Lock from getLock() doesn't always match the Transactional locking
> semantics.
> 
> DatasetGraphInMem getLock() returns org.apache.jena.shared.LockMRSW but
> internally its Transactional implementation is
> using org.apache.jena.shared.LockMRPlusSW which is subtly different. This
> is noticeable because getLock() isn't overridden but inherits from
> DatasetGraphBase which declares LockMRSW.
> 
> A TDB-backed DatasetGraph masquerades as a:
> 
> DatasetGraphTransaction
> 
> DatasetGraphTrackActive
> 
> DatasetGraphWrapper
> 
> which wraps the DatasetGraphTDB
> 
> DatasetGraphTripleQuads
> 
> DatasetGraphBaseFind
> 
> DatasetGraphBase where the getLock() returns
> 
> 
> 
> INFO Thread[main,5,main] [class
> org.apache.jena.sparql.core.mem.DatasetGraphInMemory]
> INFO Thread[main,5,main] [class org.apache.jena.shared.LockMRSW]
> 
> INFO Thread[main,5,main] [class
> org.apache.jena.tdb.transaction.DatasetGraphTransaction]
> INFO Thread[main,5,main] [class org.apache.jena.shared.LockMRSW]
> INFO Thread[main,5,main] [class org.apache.jena.tdb.store.DatasetGraphTDB]
> INFO Thread[main,5,main] [class org.apache.jena.shared.LockMRSW]
> 
> Regards Dick.



Re: [MASSMAIL]Re: about TDB JENA

2017-03-20 Thread A. Soroka
OWL 2 certainly features profiles [https://www.w3.org/TR/owl2-profiles] and OWL 
2 RL, as Lorenz indicated, is explicitly intended for implementation via rules:

> The OWL 2 RL profile is aimed at applications that require scalable reasoning 
> without sacrificing too much expressive power. It is designed to accommodate 
> both OWL 2 applications that can trade the full expressivity of the language 
> for efficiency, and RDF(S) applications that need some added expressivity 
> from OWL 2. This is achieved by defining a syntactic subset of OWL 2 which is 
> amenable to implementation using rule-based technologies (see Section 4.2), 
> and presenting a partial axiomatization of the OWL 2 RDF-Based Semantics in 
> the form of first-order implications that can be used as the basis for such 
> an implementation (see Section 4.3).

See https://www.w3.org/TR/owl2-profiles/#OWL_2_RL

But if you mean to say that your ontology is known not to fit any of the OWL 2 
profiles, then you can still use rules (and SPARQL) for some of the problems 
you may wish to solve, but not others. For example, as Lorenz (and I) remarked, 
if your classes are atomic you can use SPARQL property paths to solve 
subsumption problems.

You may not be able to throw all of your data and problems into a single 
"inference machine", but you may very well be able to solve most or all of your 
problems using different techniques.
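
As a rough sketch of what that can look like (the names are mine; it assumes
atomic classes and that the class hierarchy is stored as ordinary rdfs:subClassOf
and owl:equivalentClass triples alongside the instance data), retrieving the
members of a class including its subclasses and equivalent classes can be written
as:

PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX owl:  <http://www.w3.org/2002/07/owl#>
PREFIX ex:   <http://example.org/>   # hypothetical namespace

# Individuals of ex:Person, directly or via subclasses,
# also following owl:equivalentClass in either direction.
SELECT DISTINCT ?ind
WHERE {
  ?ind rdf:type/(rdfs:subClassOf|owl:equivalentClass|^owl:equivalentClass)* ex:Person .
}

It is not a reasoner, but for schema-shaped questions like this it goes a long way.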

---
A. Soroka
The University of Virginia Library

> On Mar 20, 2017, at 7:50 AM, Manuel Enrique Puebla Martinez <mpue...@uci.cu> 
> wrote:
> 
> 
> I'm not sure I completely understood your answer. Please confirm my 
> interpretation.
> 
> I think I have understood that the solution is to write rule-based 
> materialization as a substitute for the reasoners. Apparently, with TDB it is 
> possible to execute those inference rules on my big ontology, is that right?
> 
> It seems that rule-based materialization is not applicable to OWL2 ontologies 
> (which do not fit into any of the profiles), is that right?
> 
> Greetings and thank you very much for your time.
> 
> 
> - Mensaje original -
> De: "Lorenz B." <buehm...@informatik.uni-leipzig.de>
> Para: users@jena.apache.org
> Enviados: Lunes, 20 de Marzo 2017 2:24:33
> Asunto: Re: [MASSMAIL]Re: about TDB JENA
> 
> 
> It totally depends on the reasoning that you want to apply. OWL 2 DL is
> not possible via simple rules, but for instance RDFS/OWL Horst and OWL
> RL can be done via rule-based materialization.
>> I keep going into details, thank you for responding.
>> 
>> Of the 13 million property assertions, almost 80% are assertions of object 
>> properties, ie relationships between individuals. In the last ontology I 
>> generated automatically, only for one of the municipalities in Cuba, I had 
>> 27 763 887 of object properties assertions, 105 054 data property assertions 
>> and 8 158 individuals.
>> 
>> The inference I need is basically the following:
>> 
>> 1) To know all the individuals that belong to a class directly and 
>> indirectly, taking into consideration the equivalence between classes and 
>> between individuals.
> Depends on the reasoning profile and the ontology schema, but might be
> covered by SPARQL 1.1 as long as you need only RDFS/OWL RL reasoning.
>> 
>> 2) Given an individual (Ind) and an object property (OP), know all 
>> individuals related to "Ind", through OP. Considering the following 
>> characteristics of OP: symmetry, functional, transitivity, inverse, 
>> equivalence.
>> 
>> 3) Search the direct and indirect subclasses of a class.
> SPARQL 1.1 property paths as long as the classes are atomic classes and
> not complex class expressions.
>> 
>> 4) Identify all classes equivalent to a class, considering that the 
>> equivalence relation is transitive.
>> 
>> 5) Identify the set of superclasses of a class.
> SPARQL 1.1 property paths as long as the classes are atomic classes and
> not complex class expressions.
>> 
>> Could JENA and TDB afford that kind of inference on my big ontologies?
>> 
>> Excuse me, but I'm not a deep connoisseur of the SPARQL language. I have 
>> only used it to access data that is explicit in the ontology, similar to SQL 
>> in relational databases. I have never used it (nor do I know if it is 
>> possible to do so) to infer implicit knowledge.
> The approach that people take is either query rewriting w.r.t. the schema
> or forward-chaining, i.e. materialization based on a set of inference
> rules. For RDFS, OWL Horst and OWL RL this is possible. Materialization
> has to be done only once (given that the dataset does not change).
>> 
>> I p

Re: Wikidata vs DBpedia

2017-03-19 Thread A. Soroka
This would be a much better question for either the Wikidata mailing list [1] 
or the DBpedia support system [2].

---
A. Soroka
The University of Virginia Library

[1] https://lists.wikimedia.org/mailman/listinfo/wikidata
[2] http://wiki.dbpedia.org/support

> On Mar 19, 2017, at 11:36 AM, kumar rohit <kumar.en...@gmail.com> wrote:
> 
> I am sorry if it is slightly off topic.
> 
> How does Wikidata differ from DBpedia, in terms of building semantic web
> applications? Wikidata, as I have studied, is a knowledge base which
> everyone can edit? How does it differ then from Wikipedia?
> 
> DBpedia extracts structured data from wikipedia infoboxes and publishes it
> as rdf.
> 
> If we need Berlin's population, we get it from DBpedia via SPARQL. If we can
> do that, why then do we need the Berlin resource in Wikidata?
> 
> This question will look strange to some, but I want to understand the
> concept.
> Thank you



Re: [MASSMAIL]Re: about TDB JENA

2017-03-19 Thread A. Soroka
Just a side note; Jena offers SPARQL 1.1, which includes property paths [1]. In 
some situations, they can be used to do some forms of inference (e.g. some 
kinds of problems involving subsumption) right in your SPARQL queries.
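
For instance, the subclass case of subsumption (assuming atomic classes and a
hypothetical ex: namespace) needs no reasoner at all:

PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX ex:   <http://example.org/>   # hypothetical namespace

# Direct and indirect subclasses of ex:Vehicle.
SELECT DISTINCT ?sub
WHERE {
  ?sub rdfs:subClassOf* ex:Vehicle .
}

(ex:Vehicle rdfs:subClassOf* ?super gives the superclasses instead.)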

---
A. Soroka
The University of Virginia Library

[1] https://www.w3.org/TR/sparql11-query/#propertypaths

> On Mar 19, 2017, at 2:45 PM, Dave Reynolds <dave.e.reyno...@gmail.com> wrote:
> 
> On 19/03/17 15:52, Manuel Enrique Puebla Martinez wrote:
>> 
>> I do not think I explained myself correctly in my previous 
>> email, so I repeat the two questions:
>> 
>> 
>> 1) I read the page https://jena.apache.org/documentation/tdb/assembler.html, 
>> I do not think it is what I need.
>> 
>>   I work with large OWL2 ontologies from the OWLAPI framework, generated 
>> automatically, with thousands of individuals and more than 13 million 
>> property assertions (data and objects). As one may assume, one of the 
>> limitations I have is that OWLAPI itself cannot manage these large 
>> ontologies, because OWLAPI loads the whole OWL file into RAM. Let alone 
>> dreaming that some classical reasoner (Pellet, HermiT, etc.) could infer new 
>> knowledge over these large ontologies.
>> 
>> Having explained the problem I have, here is the question: does Jena solve 
>> this? That is, with Jena and TDB can I generate my large ontologies in OWL2? 
>> With Jena and TDB can I use a reasoner to infer new implicit knowledge 
>> (unstated) on my big ontologies?
>> 
>> I do not think Jena will be able to solve this problem; it would be a 
>> pleasant surprise for me. Unfortunately, until now I had not read about TDB 
>> and Jena's capabilities with external (on-disk) storage.
> 
> Indeed Jena does not offer fully scalable reasoning, all inference is done in 
> memory.
> 
> That said, 13 million assertions is not *that* enormous; the cost of inference 
> depends on the complexity of the ontology as much as on its scale. So 13m triples 
> with some simple domain/range inferences might work in memory.
> 
> TDB storage itself scales just fine and querying does not load all the data 
> into memory. So if you don't actually need inference, or only need simple 
> inference that can be usefully expressed as part of the SPARQL query then you 
> are fine.
> 
> Dave
> 



Re: Missing file in the jena RDF API documentation

2017-03-12 Thread A. Soroka
As I said in my first reply, there is a file of that name at:

https://jena.apache.org/tutorials/sparql_data/vc-db-1.rdf

The site is normally updated at least for every release.

---
A. Soroka
The University of Virginia Library

> On Mar 12, 2017, at 4:22 PM, Aya Hamdy <aya.bad...@gmail.com> wrote:
> 
> Yes exactly as Nikalaos said.
> So how will I know when the site is next published, Soroka? Is there
> something like a notification that I can subscribe to?
> 
> Best Regards,
> Aya
> 
> 
> On Sun, Mar 12, 2017 at 7:07 PM, Nikolaos Beredimas <bere...@gmail.com>
> wrote:
> 
>> I think OP means the following:
>> 
>> 1. http://jena.apache.org/tutorials/rdf_api.html
>> Click on 4. Reading RDF >
>> 2. http://jena.apache.org/tutorials/rdf_api.html#ch-Reading RDF
>> Click on Tutorial 5. >
>> 3.
>> https://github.com/apache/jena/blob/master/jena-core/
>> src-examples/jena/examples/rdf/Tutorial05.java
>> 
>> There is a reference to vc-db-1.rdf inside the source code there.
>> 
>> OP wants to understand where that file is located/found.
>> 
>> On Sun, Mar 12, 2017 at 8:55 PM, A. Soroka <aj...@virginia.edu> wrote:
>> 
>>> The idea of "Improve this page" is not to register an issue (which you
>>> have already done very nicely) but to offer the wording you would
>> actually
>>> like to see. It's a way to offer a solution to the problem you are
>> raising
>>> as you raise it, which is one of the most fun parts of participating in
>>> open source!
>>> 
>>> When I look at section 5 of http://jena.apache.org/
>> tutorials/rdf_api.html,
>>> I don't see anything remotely like the wording you reported,
>> beginning
>>> with the fact that section five of the current public page is under the
>>> heading "Controlling Prefixes", not "Reading RDF" as you report. Are you
>>> looking at some kind of cached off-line version of the site?
>>> 
>>> ---
>>> A. Soroka
>>> The University of Virginia Library
>>> 
>>>> On Mar 12, 2017, at 2:51 PM, Aya Hamdy <aya.bad...@gmail.com> wrote:
>>>> 
>>>> I used "Improve this Page"  and commented at the beginning of tutorial
>> 5
>>>> that the Vcards database used in this tutorial is not provided here or
>> on
>>>> the GitHub file for the tutorial. Is that appropriate?  (I am
>> asking
>>>> for future reference because I tried it before but I saw that the
>>>> email went to d...@jena.apache.org, but I thought as a user I should
>>> contact
>>>> this email list?)
>>>> 
>>>> I am not sure how to be more specific without being confusing,  the
>>> problem
>>>> is that tutorial 5 that is documented on this link:
>>>> http://jena.apache.org/tutorials/rdf_api.html under heading *"Reading
>>>> RDF" *says
>>>> that the "vc-db-1.rdf" file that is used for this tutorial/exercise and
>>> the
>>>> subsequent ones is provided, supposedly, with the GitHub file or
>> something
>>>> on this link:
>>>> https://github.com/apache/jena/blob/master/jena-core/
>>> src-examples/jena/examples/rdf/Tutorial05.java
>>>> .
>>>> However, this file is not provided in either of the two links.
>>>> Does that help? or did I just make it harder for you?
>>>> 
>>>> Regards,
>>>> Aya
>>> 
>>> 
>> 



Re: Missing file in the jena RDF API documentation

2017-03-12 Thread A. Soroka
I added a link to that page. When the site is next published, you should see it 
update. 

---
A. Soroka
The University of Virginia Library

> On Mar 12, 2017, at 3:39 PM, Aya Hamdy <aya.bad...@gmail.com> wrote:
> 
> Reading RDF is section 4, but I am referring to *Tutorial 5*, not section
> 5. The first thing mentioned  under the headline Reading RDF (Section 4)is
> this:
> 
> "Tutorial 5
> <https://github.com/apache/jena/tree/master/jena-core/src-examples/jena/examples/rdf/Tutorial05.java>
> demonstrates
> reading the statements recorded in RDF XML form into a model. *With this
> tutorial, we have provided a small database of vcards in RDF/XML form*. The
> following code will read it in and write it out.etc"
> 
> The bold text in the above quote is what I am referring to, if you click on
> "Tutorial 5
> <https://github.com/apache/jena/tree/master/jena-core/src-examples/jena/examples/rdf/Tutorial05.java>"
> you will be redirected to the GitHub page that I gave you the link to
> before, you will see that it does not have the "*small database
> of vcards in RDF/XML form"*. It just has the code that is reading in the
> vcards database file via the statement: static final String inputFileName =
> "vc-db-1.rdf";
> 
> Regards,
> Aya
> 
> On Sun, Mar 12, 2017 at 6:55 PM, A. Soroka <aj...@virginia.edu> wrote:
> 
>> The idea of "Improve this page" is not to register an issue (which you
>> have already done very nicely) but to offer the wording you would actually
>> like to see. It's a way to offer a solution to the problem you are raising
>> as you raise it, which is one of the most fun parts of participating in
>> open source!
>> 
>> When I look at section 5 of http://jena.apache.org/tutorials/rdf_api.html,
>> I don't see anything remotely like the wording you reported, beginning
>> with the fact that section five of the current public page is under the
>> heading "Controlling Prefixes", not "Reading RDF" as you report. Are you
>> looking at some kind of cached off-line version of the site?
>> 
>> ---
>> A. Soroka
>> The University of Virginia Library
>> 
>>> On Mar 12, 2017, at 2:51 PM, Aya Hamdy <aya.bad...@gmail.com> wrote:
>>> 
>>> I used "Improve this Page"  and commented at the beginning of tutorial 5
>>> that the Vcards database used in this tutorial is not provided here or on
>>> the GitHub file for the tutorial. Is that appropriate?  (I am asking
>>> for future reference because I tried it before but I saw that the
>>> email went to d...@jena.apache.org, but I thought as a user I should
>> contact
>>> this email list?)
>>> 
>>> I am not sure how to be more specific without being confusing,  the
>> problem
>>> is that tutorial 5 that is documented on this link:
>>> http://jena.apache.org/tutorials/rdf_api.html under heading *"Reading
>>> RDF" *says
>>> that the "vc-db-1.rdf" file that is used for this tutorial/exercise and
>> the
>>> subsequent ones is provided, supposedly, with the GitHub file or something
>>> on this link:
>>> https://github.com/apache/jena/blob/master/jena-core/
>> src-examples/jena/examples/rdf/Tutorial05.java
>>> .
>>> However, this file is not provided in either of the two links.
>>> Does that help? or did I just make it harder for you?
>>> 
>>> Regards,
>>> Aya
>> 
>> 



Re: Missing file in the jena RDF API documentation

2017-03-12 Thread A. Soroka
The idea of "Improve this page" is not to register an issue (which you have 
already done very nicely) but to offer the wording you would actually like to 
see. It's a way to offer a solution to the problem you are raising as you raise 
it, which is one of the most fun parts of participating in open source!

When I look at section 5 of http://jena.apache.org/tutorials/rdf_api.html, I 
don't see anything remotely like the wording you reported, beginning with 
the fact that section five of the current public page is under the heading 
"Controlling Prefixes", not "Reading RDF" as you report. Are you looking at 
some kind of cached off-line version of the site?

---
A. Soroka
The University of Virginia Library

> On Mar 12, 2017, at 2:51 PM, Aya Hamdy <aya.bad...@gmail.com> wrote:
> 
> I used "Improve this Page"  and commented at the beginning of tutorial 5
> that the Vcards database used in this tutorial is not provided here or on
> the GitHub file for the tutorial. Is that appropriate?  (I am asking
> for future reference because I tried it before but I saw that the
> email went to d...@jena.apache.org, but I thought as a user I should contact
> this email list?)
> 
> I am not sure how to be more specific without being confusing,  the problem
> is that tutorial 5 that is documented on this link:
> http://jena.apache.org/tutorials/rdf_api.html under heading *"Reading
> RDF" *says
> that the "vc-db-1.rdf" file that is used for this tutorial/exercise and the
> subsequent ones is provided, supposedly, with the GitHub file or something
> on this link:
> https://github.com/apache/jena/blob/master/jena-core/src-examples/jena/examples/rdf/Tutorial05.java
> .
> However, this file is not provided in either of the two links.
> Does that help? or did I just make it harder for you?
> 
> Regards,
> Aya



Re: Missing file in the jena RDF API documentation

2017-03-12 Thread A. Soroka
There is what I suppose to be that file available at:

https://jena.apache.org/tutorials/sparql_data/vc-db-1.rdf

but I discover no reference to it in the document to which you have linked. 
Perhaps you can be more specific, or even better, use the link at the top of 
that page labelled "Improve this Page" to send a patch request.

---
A. Soroka
The University of Virginia Library

> On Mar 12, 2017, at 2:06 PM, Aya Hamdy <aya.bad...@gmail.com> wrote:
> 
> Hello,
> 
> I have been exploring the  Jena RDF API documentation found at the
> following URL:
> http://jena.apache.org/tutorials/rdf_api.html
> 
> Starting with Tutorial 5 the documentation is using a file of Vcards called "
> vc-db-1.rdf".
> 
> However, this file is provided neither with the documentation nor with the
> GitHub code file for Tutorial 5. Could you please make it available and
> direct me to it to be able to apply what is explained in the documentation
> and understand it better?
> 
> Your speedy response is much appreciated.
> 
> Best Regards,
> Aya



Re: Inference not working

2017-03-10 Thread A. Soroka
> On Mar 10, 2017, at 11:05 AM, Dave Reynolds <dave.e.reyno...@gmail.com> wrote:
> 
>> And how do I
>> perform inference over a bit of data and then persist it?
> 
> An inference model appears to the API as just another Model so can use 
> Model.add to copy all the inference-closure of a model back to a separate 
> TDB-backed Model.
> 
> If you only actually need certain entailments then it is sometimes possible 
> to use selective queries that return the results of those entailments use the 
> results of those to record the entailments as more triples in a persistent 
> model. This is highly application dependent.

Just as a side-note here, SPARQL property paths are really useful for this kind 
of "targeted inference". You can duplicate a lot of subsumption rules and the 
like with property paths.
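
A sketch of that targeted approach, using SPARQL Update to persist just the
rdf:type triples entailed by the subclass hierarchy (and nothing else; the query
is generic and assumes nothing about the data in the thread):

PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

# Materialize the class-membership entailments of rdfs:subClassOf.
INSERT { ?ind rdf:type ?super }
WHERE  { ?ind rdf:type/rdfs:subClassOf+ ?super }

Run against a TDB-backed dataset, the inserted triples are then persistent like
any others; rerun it if the data changes.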

---
A. Soroka
The University of Virginia Library




Re: Fuseki proxy settings for federated queries

2017-03-10 Thread A. Soroka
I've got a PR with what I hope is a fix linked at that ticket. Is it possible 
for you to confirm whether or not it does in fact fix your problem?

---
A. Soroka
The University of Virginia Library

> On Mar 9, 2017, at 8:32 AM, Dominique Vandensteen <domi@cogni.zone> wrote:
> 
> The ticket has been created here:
> https://issues.apache.org/jira/browse/JENA-1309
> 
> D.
> 
> On 9 March 2017 at 13:43, A. Soroka <aj...@virginia.edu> wrote:
> 
>> This sounds like a bug here:
>> 
>> https://github.com/apache/jena/blob/master/jena-arq/src/
>> main/java/org/apache/jena/riot/web/HttpOp.java#L208
>> 
>> because those system properties should be getting picked up. Can you file
>> a ticket on this with a description? I will take a look at it straightaway.
>> 
>> ---
>> A. Soroka
>> The University of Virginia Library
>> 
>>> On Mar 9, 2017, at 6:12 AM, Dominique Vandensteen <domi@cogni.zone>
>> wrote:
>>> 
>>> Hi,
>>> While trying to run a sparql with "service" I'm getting timeout errors.
>> It
>>> seems fuseki is not picking up the default java proxy properties
>>> (-Dhttp.proxyHost=... and such)
>>> 
>>> I can configure the proxy using the solution described here
>>> http://stackoverflow.com/a/25690647 but this doesn't look very clean to
>> me.
>>> Is there a "fuseki way" of doing this? Or will fuseki pick up the java
>>> proxy properties in a next version?
>>> 
>>> Tested with fuseki 2.5.0
>>> 
>>> D.
>> 
>> 
> 
> 
> -- 
> 
> 
> Dominique Vandensteen
> Head of development
> 
> + 32 474 870856
> domi@cogni.zone
> skype: domi.vds



Re: Fuseki proxy settings for federated queries

2017-03-09 Thread A. Soroka
This sounds like a bug here:

https://github.com/apache/jena/blob/master/jena-arq/src/main/java/org/apache/jena/riot/web/HttpOp.java#L208

because those system properties should be getting picked up. Can you file a 
ticket on this with a description? I will take a look at it straightaway.

---
A. Soroka
The University of Virginia Library

> On Mar 9, 2017, at 6:12 AM, Dominique Vandensteen <domi@cogni.zone> wrote:
> 
> Hi,
> While trying to run a sparql with "service" I'm getting timeout errors. It
> seems fuseki is not picking up the default java proxy properties
> (-Dhttp.proxyHost=... and such)
> 
> I can configure the proxy using the solution described here
> http://stackoverflow.com/a/25690647 but this doesn't look very clean to me.
> Is there a "fuseki way" of doing this? Or will fuseki pick up the java
> proxy properties in a next version?
> 
> Tested with fuseki 2.5.0
> 
> D.



Re: Converting a class into an individual or an individual into a class

2017-03-07 Thread A. Soroka
Not sure what you mean here. That document explicitly states (at 
https://www.w3.org/TR/swbp-n-aryRelations/#RDFReification) 

"It may be natural to think of RDF reification when representing n-ary 
relations. We do not want to use the RDF reification vocabulary to represent 
n-ary relations in general…"


---
A. Soroka
The University of Virginia Library

> On Mar 6, 2017, at 3:29 PM, Hlel Emna <emnah...@gmail.com> wrote:
> 
> hi
> to understand Reification, see this reference:
> https://www.w3.org/TR/swbp-n-aryRelations/



Re: Converting a class into an individual or an individual into a class

2017-03-06 Thread A. Soroka
That is not a Jena class. That is a class from a TopBraid product. It seems 
that the reification to which it refers is not the OWL "punning" about which I 
think you were asking but RDF reification:

https://jena.apache.org/documentation/notes/reification.html

---
A. Soroka
The University of Virginia Library

> On Mar 6, 2017, at 11:58 AM, Jos Lehmann <jos.lehm...@bauhaus-luftfahrt.net> 
> wrote:
> 
> Hi Lorenz, Emna
> 
> You're probably right about the feasibility of automatic solutions. 
> 
> I have found references, though, under the general heading "meta-modeling", 
> which provide various modeling solutions to represent a class as an 
> individual of a meta-ontology of choice. In this context they (informally?) 
> refer to what happens to the given class as "class reification". 
> 
> I have also found reification as a Jena class (link below) although I have 
> not yet looked into it:
> 
> http://download.topquadrant.com/composer/javadoc/index.html?org/topbraid/core/model/Reification.html
> 
> Thanks, Jos
> 
> -Ursprüngliche Nachricht-
> Von: Hlel Emna [mailto:emnah...@gmail.com] 
> Gesendet: Montag, 6. März 2017 12:31
> An: users@jena.apache.org
> Betreff: Re: Converting a class into an individual or an individual into a 
> class
> 
> To my knowledge, reification is used only if there exists an n-ary relation 
> which connects an individual to more than one individual. There is no 
> reification for a class.
> 
> 
> 
> -Ursprüngliche Nachricht-
> Von: Lorenz B. [mailto:buehm...@informatik.uni-leipzig.de] 
> Gesendet: Montag, 6. März 2017 12:32
> An: users@jena.apache.org
> Betreff: Re: Converting a class into an individual or an individual into a 
> class
> 
> As long as there are only declarations of individuals and classes, it's more or 
> less simply removing the old triple and adding the new one.
> 
> 
> But, once those entities are used in other OWL axioms, it's not that simple 
> and depends on the ontology. For example
> 
> 1. How to convert subClassOf relationships between classes when converting to 
> individuals?
> 2. How to handle relationships between individuals when converting to classes?
> 
> And this are just the most basic axioms.
> 
> I don't see any automatic approach.
> 
> 
>> Hi there
>> 
>> A general question: are there (implemented) operations of conversions in 
>> OWL-DL of:
>> 
>> 
>> 1.   An individual into a class
>> 
>> 2.   A class into an individual
>> 
>> If so, what are they called? Could you direct me to relevant references in 
>> the literature describing these operations?
>> 
>> My hunch at the moment is that in Protégé 1. and 2. above would be 
>> refactoring operations, although in the Refactoring Menu I can't see any 
>> obvious options that would suggest 1. or 2..
>> 
>> More generally I would think that the following terminology may have been 
>> used for such operations:
>> 
>> 
>> a.   Abstraction        or   Conceptualization
>>      vs.                     vs.
>> 
>> b.   Concretization     or   Reification
>> 
>> 
>> ( Note that reification would normally be applied to 
>> relations/properties rather than classes. Then again, if a class can 
>> be considered as a 1-ary relation (can it?), one could reify a class 
>> as well )
>> 
>> Thanks, Jos
>> 
>> 
> --
> Lorenz Bühmann
> AKSW group, University of Leipzig
> Group: http://aksw.org - semantic web research center
> 



Re: Fuseki support other query languages

2017-03-05 Thread A. Soroka
That is in no way a normal SPARQL query. I don't know where in particular you 
got it, but it is an example of Blazegraph/BigData's "GAS" API. It's not an 
example of idiomatic SPARQL at all.

https://wiki.blazegraph.com/wiki/index.php/RDF_GAS_API

That is a specialist extension API for one product's particular capability.

You can refer to any number of good tutorials for how to write normal SPARQL. 
Jena itself maintains one:

https://jena.apache.org/tutorials/sparql.html
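
For comparison, the question actually being asked below (children of the entity
the quoted query calls wd:Q720, via the wdt:P40 link) is, in plain SPARQL, just a
triple pattern. This is a sketch using Wikidata's usual prefixes, without the
labelling and graph-analytics extensions:

PREFIX wd:  <http://www.wikidata.org/entity/>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>

SELECT ?child
WHERE {
  wd:Q720 wdt:P40 ?child .
}

The multi-generation traversal the GAS service performs can usually be expressed
with a property path (wdt:P40+) instead.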


---
A. Soroka
The University of Virginia Library

> On Mar 5, 2017, at 11:37 AM, Laura Morales <laure...@mail.com> wrote:
> 
>> Could you be more specific about these intuitions and difficulties?
> 
> with other query languages such as Gremlin you start from a vertex (or set of 
> vertices), and follow links (predicates). This is very intuitive, because it 
> resembles the picture of a graph that I have in mind. For example, I can start 
> from the vertex "Bruce Springsteen" and follow its out-links 
> "some_namespace:song". Very easy to understand and work with. SPARQL, at 
> least to me, seems to be way more intricate, messy, verbose, and ultimately 
> difficult to understand. Just look at this query (copied from the Wikidata 
> examples) for a question as simple as "Children of Genghis Khan"; I barely 
> understand how to read it, to be honest.
> 
> #Children of Genghis Khan
> #added before 2016-10
> #defaultView:Graph
> 
> PREFIX gas: <http://www.bigdata.com/rdf/gas#>
> 
> SELECT ?item ?itemLabel ?pic ?linkTo
> WHERE
> {
>  SERVICE gas:service {
>gas:program gas:gasClass "com.bigdata.rdf.graph.analytics.SSSP" ;
>gas:in wd:Q720 ;
>gas:traversalDirection "Forward" ;
>gas:out ?item ;
>gas:out1 ?depth ;
>gas:maxIterations 4 ;
>gas:linkType wdt:P40 .
>  }
>  OPTIONAL { ?item wdt:P40 ?linkTo }
>  OPTIONAL { ?item wdt:P18 ?pic }
>  SERVICE wikibase:label {bd:serviceParam wikibase:language "en" }
> }



Re: Fuseki support other query languages

2017-03-04 Thread A. Soroka
In between TDB and Fuseki is ARQ, which is Jena's SPARQL implementation.

https://jena.apache.org/documentation/query/index.html

ARQ can be used with a variety of backends, including in-memory systems and 
on-disk databases like TDB. Fuseki is mostly responsible for HTTP management 
and handing queries and updates to ARQ. It is ARQ that talks to TDB.
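
A minimal sketch of that division of labour in code (the TDB directory is made
up for the example): TDB supplies the Dataset, ARQ executes the query.

import org.apache.jena.query.Dataset;
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.ReadWrite;
import org.apache.jena.query.ResultSetFormatter;
import org.apache.jena.tdb.TDBFactory;

public class ArqOverTdb {
    public static void main(String[] args) {
        // TDB: the on-disk storage layer (directory path is hypothetical).
        Dataset dataset = TDBFactory.createDataset("/tmp/example-tdb");

        // ARQ: the SPARQL engine, which works over any Dataset, in-memory or TDB-backed.
        String sparql = "SELECT (COUNT(*) AS ?triples) WHERE { ?s ?p ?o }";

        dataset.begin(ReadWrite.READ);
        try (QueryExecution qexec = QueryExecutionFactory.create(sparql, dataset)) {
            ResultSetFormatter.out(qexec.execSelect());
        } finally {
            dataset.end();
        }
    }
}

Fuseki does essentially the same thing, with the query arriving over HTTP.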

---
A. Soroka
The University of Virginia Library

> On Mar 4, 2017, at 10:10 AM, Laura Morales <laure...@mail.com> wrote:
> 
> OK if I get this right, TDB is the actual database storing all 
> triples/n-quads, and Fuseki is a layer on top of it whose purpose is to parse 
> SPARQL queries and retrieve triples from TDB.
> 
> Right?
> 
> 
>> Fuseki is not a database. It is a SPARQL server. Jena TDB is the usual 
>> database used with Fuseki. Using Fuseki without Jena is nonsensical. Fuseki 
>> is totally based on Jena.
>> 
>> https://jena.apache.org/documentation/index.html



Re: Fuseki support other query languages

2017-03-04 Thread A. Soroka
Fuseki is not a database. It is a SPARQL server. Jena TDB is the usual database 
used with Fuseki. Using Fuseki without Jena is nonsensical. Fuseki is totally 
based on Jena.

https://jena.apache.org/documentation/index.html

---
A. Soroka
The University of Virginia Library

> On Mar 4, 2017, at 9:43 AM, Laura Morales <laure...@mail.com> wrote:
> 
> I'm not looking at a framework, I'm only interested in the database 
> component. Like, say, MySQL, PostgreSQL, etc... That's why I'm interested in 
> Fuseki and not Jena.



Re: Fuseki support other query languages

2017-03-04 Thread A. Soroka
> well I don't have a specific use case in mind, I just find SPARQL very 
> counter-intuitive and difficult to reason with
...
> nope, never before. Now I'm even more confused about the purposes of 
> Fuseki/Elda/LDP

Then you will probably want to settle on a particular use case through which to 
investigate these tools. Asking about the generic use of a tool is often less 
helpful than planning to accomplish a concrete end and trying that tool in that 
context.

---
A. Soroka
The University of Virginia Library

> On Mar 4, 2017, at 8:49 AM, Laura Morales <laure...@mail.com> wrote:
> 
>> Presumably there is some sort of use case for which extending fuseki to 
>> support other query languages might solve.  Perhaps describing that use case 
>> would lead to an answer which describes how using jena or something that 
>> uses jena can solve that use case.
> 
> 
> well I don't have a specific use case in mind, I just find SPARQL very 
> counter-intuitive and difficult to reason with
> 
> 
>> Have you seen Elda?  http://epimorphics.github.io/elda/current/index.html
> 
> 
> nope, never before. Now I'm even more confused about the purposes of 
> Fuseki/Elda/LDP



Re: Fuseki support other query languages

2017-03-04 Thread A. Soroka
Yes, and I'm increasingly convinced (pretty totally convinced at this point) 
that an LDP piece for Fuseki would be a bad idea. I'm not going to pursue it.

---
A. Soroka
The University of Virginia Library

> On Mar 4, 2017, at 8:49 AM, Andy Seaborne <a...@apache.org> wrote:
> 
> On 04/03/17 12:45, A. Soroka wrote:
> 
> > It is not in any obvious way part of the current remit for the Jena
> > project.
> 
> Why not?!
> 
> Isn't LDP for RDF just another service over the data?
> 
>Andy
> 
>> 
>> ---
>> A. Soroka
>> The University of Virginia Library
>> 
>>> On Mar 4, 2017, at 7:40 AM, Laura Morales <laure...@mail.com> wrote:
>>> 
>>> This message is very confusing.
>>> I was asking whether it would be possible to add another (more friendly) 
>>> query language to Fuseki, or not?
>>> 
>>> 
>>>> Sent: Saturday, March 04, 2017 at 1:32 PM
>>>> From: baran...@gmail.com
>>>> To: users@jena.apache.org
>>>> Subject: Re: Fuseki support other query languages
>>>> 
>>>> 
>>>> I think it was a miscalculation to try to win SQL folks over to the
>>>> Semantic Web with SPARQL.
>>>> 
>>>>> SPARQL is rather cumbersome and counter-intuitive to work with...
>>>> 
>>>> and that was one of the important reasons why they ignored SPARQL. There
>>>> are also other reasons. But the most important one is: no revolution is
>>>> based on the help of the past.
>>>> 
>>>>> I was wondering whether it would be possible to support in Fuseki some
>>>>> other more friendly query language, such as graphql or gremlin.
>>>> 
>>>> I don't know much about graphql...
>>>> I don't know much about gremlin...
>>>> 
>>>> But I know that it would have been much better to try to develop a new
>>>> query language from scratch, supporting intuitive usage of a
>>>> simple RDFS design. Also for better performance...
>>>> 
>>>> But about ten years ago, confronted with SPARQL, I also thought: very
>>>> good idea, I have 2-3 years of experience with SQL and I have an open door
>>>> to the Semantic Web revolution...
>>>> 
>>>> thanks, baran
>>>> --
>>>> Using Opera's mail client: http://www.opera.com/mail/
>>>> 
>> 



Re: Fuseki support other query languages

2017-03-04 Thread A. Soroka
There are plenty of graph databases that provide the other languages you 
mentioned. Is there some reason why you want to use Jena? Perhaps, as John 
Fereira asked, you will describe your use case.

---
A. Soroka
The University of Virginia Library

> On Mar 4, 2017, at 8:44 AM, Laura Morales <laure...@mail.com> wrote:
> 
>> Certainly it would be _possible_ to write an extension for Fuseki that would 
>> do such a thing. It is not in any obvious way part of the current remit for 
>> the Jena project. Are you interested in undertaking that work?
> 
> I would if I knew how to do it, but I wouldn't even know how to approach such 
> a thing... I'm just interested in using a graph database, but I find SPARQL 
> very cumbersome... hence why I asked whether there were any chance that a more 
> friendly query language could be supported.



Re: Fuseki vs Marmotta

2017-03-04 Thread A. Soroka
> The big thing that LDP adds is its container model.
> 
>Andy

Yes. This is hugely useful, if it meets your use cases. It allows for a lot of 
automatic management for an important class of relationships.

https://www.w3.org/TR/ldp/#ldpc

---
A. Soroka
The University of Virginia Library

> On Mar 4, 2017, at 8:19 AM, Andy Seaborne <a...@apache.org> wrote:
> 
> 
> 
> On 04/03/17 11:27, Jean-Marc Vanel wrote:
>> 2017-03-04 12:08 GMT+01:00 Laura Morales <laure...@mail.com>:
>> 
>>> What problem is a "Linked Data Platform" trying to solve that can't
>>> already be accomplished with an RDF server like Fuseki?
>> 
>> Consider this use case:
>> 
>>   - manage a team's public FOAF profiles with URL prefix http://xx.com/
>>   - a team member X can upload her FOAF profile simply by an HTTP POST
>>   request on http://xx.com/members/ at URL /myName
>>   - then http://xx.com/myName is visible from Internet and contains
>>   triples about <http://xx.com/myName> , which is the member's URI.
>> 
>> 
>> If I'm correct, Fuseki also has a REST front end available.
> 
> Yes - it supports the SPARQL Graph Store Protocol.
> 
> PUT/GET/POST/DELETE whole graphs.
> 
> ... and it generalises it to the dataset+quads as well.
> 
>>> 
>> 
>> Well, the word REST used in
>> https://jena.apache.org/documentation/serving_data/ simply means that
>> a SPARQL-compliant HTTP server can be somehow considered a REST server.
>> 
>> But it's not what is generally meant by a REST server:
>> https://en.wikipedia.org/wiki/Representational_state_transfer
>> In this REST concept, one PUTs or POSTs data at some relative URLs, and
>> retrieves the data by HTTP GET at the same URL.
>> 
>> This is what an LDP server basically does.
>> An LDP server also can be viewed as similar to an FTP server that typically
>> stores RDF data (but also any binary data).
> 
> The big thing that LDP adds is its container model.
> 
>Andy
> 
>> 
>> 
>>> 
>>> 
>>>> Sent: Saturday, March 04, 2017 at 11:48 AM
>>>> From: "Jean-Marc Vanel" <jeanmarc.va...@gmail.com>
>>>> To: "Jena users" <users@jena.apache.org>
>>>> Subject: Re: Fuseki vs Marmotta
>>>> 
>>>> Apache Fuseki is a pure SPARQL server, with a native triple database.
>>>> 
>>>> Apache Marmotta is a complex beast, primarily an LDP server [1] , but it
>>>> mixes a lot of ingredients:
>>>> http://marmotta.apache.org/platform/index.html
>>>> 
>>>> Its persistence layer is only SQL databases.
>>>> It does offer a SPARQL service, but it seems loosely connected to the
>>> other
>>>> modules.
>>>> Especially, I'm not sure that after an LDP PUT or POST, the data will be
>>>> added to the underlying SPARQL database.
>>>> 
>>>> [1] LDP https://www.w3.org/TR/ldp/
>>> 
>> 
>> 
>> 



Re: Fuseki support other query languages

2017-03-04 Thread A. Soroka
Certainly it would be _possible_ to write an extension for Fuseki that would do 
such a thing. It is not in any obvious way part of the current remit for the 
Jena project. Are you interested in undertaking that work?

---
A. Soroka
The University of Virginia Library

> On Mar 4, 2017, at 7:40 AM, Laura Morales <laure...@mail.com> wrote:
> 
> This message is very confusing.
> I was asking whether it would be possible to add another (more friendly) 
> query language to Fuseki, or not?
> 
> 
>> Sent: Saturday, March 04, 2017 at 1:32 PM
>> From: baran...@gmail.com
>> To: users@jena.apache.org
>> Subject: Re: Fuseki support other query languages
>> 
>> 
>> I think it was a miscalculation to try to win SQL folks over to the
>> Semantic Web with SPARQL.
>> 
>>> SPARQL is rather cumbersome and counter-intuitive to work with...
>> 
>> and that was one of the important reasons why they ignored SPARQL. There  
>> are also other reasons. But the most important one is: no revolution is  
>> based on the help of the past.
>> 
>>> I was wondering whether it would be possible to support in Fuseki some  
>>> other more friendly query language, such as graphql or gremlin.
>> 
>> I don't know much about graphql...
>> I don't know much about gremlin...
>> 
>> But I know that it would have been much better to try to develop a new  
>> query language from scratch, supporting intuitive usage of a  
>> simple RDFS design. Also for better performance...
>> 
>> But about ten years ago, confronted with SPARQL, I also thought: very  
>> good idea, I have 2-3 years of experience with SQL and I have an open door to  
>> the Semantic Web revolution...
>> 
>> thanks, baran
>> -- 
>> Using Opera's mail client: http://www.opera.com/mail/
>> 



Re: Fuseki / ARQ DESCRIBE query to include ?s ?p

2017-03-03 Thread A. Soroka
No, Pubby [1] is designed to present a simple "Linked Data" representation of 
resources the data for which are recorded in some SPARQL-equipped service (e.g. 
Fuseki).

You set up Pubby aimed at your SPARQL endpoint, and then when people make 
requests for a particular resource from Pubby, it gives them the results of a 
DESCRIBE query on that resource.
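
As a side note on the "incoming triples" part of the original question: where
customizing execDescribe() is more than you want to take on, a CONSTRUCT query
can approximate a symmetric description. A sketch, with
<http://example.org/resource> standing in for the requested resource:

# Both the outgoing and the incoming triples of a resource.
CONSTRUCT {
  <http://example.org/resource> ?p ?o .
  ?s ?p2 <http://example.org/resource> .
}
WHERE {
  { <http://example.org/resource> ?p ?o }
  UNION
  { ?s ?p2 <http://example.org/resource> }
}

Pubby itself issues DESCRIBE, though, so this would need to be wired in wherever
you control the query that gets sent.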

---
A. Soroka
The University of Virginia Library

[1] http://wifo5-03.informatik.uni-mannheim.de/pubby/

> On Mar 3, 2017, at 9:37 AM, Martynas Jusevičius <marty...@graphity.org> wrote:
> 
> Isn't Fuseki already doing the same as Pubby?
> 
> On Fri, Mar 3, 2017 at 11:13 AM, Beetz, J. <j.be...@tue.nl> wrote:
>> Dear community,
>> 
>> I have set up a Pubby http://wifo5-03.informatik.uni-mannheim.de/pubby/ 
>> frontend to allow dereferencing/content-negotiation/browser-navigation of a 
>> vocabulary served by Fuseki 2.4.1 .
>> As per the SPARQL standard, the default behavior of DESCRIBE queries that 
>> are sent by the frontend to Fuseki is left to the implementation. Fusekis 
>> default behavior is to send back the graph containing all triples where 
>>  is the subject ( ?p ?o). I would also 
>> like to include (?s ?p  and probably (?s > resource> ?o).
>> 
>> From the list archive I have seen that I can probably implement my own 
>> execDescribe() similar to this thread from some time ago:
>> http://jena.markmail.org/message/5thzze4xhqhak34g?q=describe+query+%3Fs+%3Fp+%3Fo=2#query:describe%20query%20%3Fs%20%3Fp%20%3Fo+page:2+mid:5thzze4xhqhak34g+state:results
>> 
>> I wonder whether there is another (easier) way through a configuration 
>> setting or similar which allows me to achieve this.
>> 
>> Thank you very much in advance for your help
>> 
>> Jakob
>> 
>> ___
>> Dr. Jakob Beetz - Assistant Professor
>> Information Systems in the Built Environment (ISBE) Group
>> Department of the Built Environment | Bouwkunde
>> Eindhoven University of Technology, The Netherlands
>> phone: +31 (0)40 247 2288 on-campus location: VRT 9.J06
>> 



Re: Fuseki / ARQ DESCRIBE query to include ?s ?p

2017-03-03 Thread A. Soroka
I'm not sure what you mean by such triples as "(?s ?p <the requested resource>) and 
probably (?s <the requested resource> ?o)".

Can you give some concrete examples?

---
A. Soroka
The University of Virginia Library

> On Mar 3, 2017, at 5:13 AM, Beetz, J. <j.be...@tue.nl> wrote:
> 
> Dear community,
> 
> I have set up a Pubby http://wifo5-03.informatik.uni-mannheim.de/pubby/ 
> frontend to allow dereferencing/content-negotiation/browser-navigation of a 
> vocabulary served by Fuseki 2.4.1 .
> As per the SPARQL standard, the default behavior of DESCRIBE queries that are 
> sent by the frontend to Fuseki is left to the implementation. Fuseki's default 
> behavior is to send back the graph containing all triples where <the requested 
> resource> is the subject (<the requested resource> ?p ?o). I would also like to 
> include (?s ?p <the requested resource>) and probably (?s <the requested resource> ?o). 
> 
> From the list archive I have seen that I can probably implement my own 
> execDescribe() similar to this thread from some time ago: 
> http://jena.markmail.org/message/5thzze4xhqhak34g?q=describe+query+%3Fs+%3Fp+%3Fo=2#query:describe%20query%20%3Fs%20%3Fp%20%3Fo+page:2+mid:5thzze4xhqhak34g+state:results
> 
> I wonder whether there is another (easier) way through a configuration 
> setting or similar which allows me to achieve this.
> 
> Thank you very much in advance for your help
> 
> Jakob
> 
> ___
> Dr. Jakob Beetz - Assistant Professor 
> Information Systems in the Built Environment (ISBE) Group
> Department of the Built Environment | Bouwkunde
> Eindhoven University of Technology, The Netherlands
> phone: +31 (0)40 247 2288 on-campus location: VRT 9.J06
> 



Re: Extending Jena Text to Support ElasticSearch as Indexing/Querying Engine

2017-03-02 Thread A. Soroka
I do agree that trying to juggle different versions of Lucene libraries is 
probably not a realistic option right now. Luckily (if I understand the 
conversation thus far correctly) we have a solid alternative; getting our 
current Lucene dependency upgraded should allow us to (eventually) merge Anuj's 
work into the mainstream of development. Someone please tell me if I have that 
wrong! :grin:

Let me reiterate that this seems like very good work and speaking for myself, I 
certainly want to get it included into Jena. It's just a question of fitting it 
in correctly, which might take a bit of time. 

---
A. Soroka
The University of Virginia Library

> On Mar 1, 2017, at 1:27 PM, Osma Suominen <osma.suomi...@helsinki.fi> wrote:
> 
> Hi Anuj!
> 
> I have nothing against modularity in general. However, I cannot see how your 
> proposal could work in practice for the Fuseki build, due to the reasons I 
> mentioned in my previous message (and Adam seemed to concur).
> 
> In any case, I'll see what I can do to get the Lucene upgrade moving again. 
> If all current Jena modules (ie jena-text and jena-spatial) were upgraded to 
> Lucene 6.4.1, then you could just add your ES classes to jena-text, right? I 
> think that would be better for everyone than having to maintain your own 
> separate module.
> 
> -Osma
> 
> 01.03.2017, 16:59, anuj kumar kirjoitti:
>> I personally have no preference as to how the code in Jena should be
>> structured, as long as I am able to use it :).
>> I have a personal preference for doing it in a specific way because, IMO, it is
>> modular, which makes it much easier to maintain in the long run. But again,
>> it may not be the quickest one.
>> 
>> I have already been given a deadline by the company to have the ES extension
>> implemented in the next 15 days :). What this means is that I will be
>> maintaining the ES code extension to Jena Text, at least locally, for a
>> coming period of time. I would be more than happy to contribute to the Jena
>> community whatever is required to have a proper ElasticSearch
>> implementation in place, whether within the jena-text module or as a separate
>> module. Until Lucene and Solr are upgraded to the latest
>> version, I will have to maintain a separate module for jena-text-es.
>> 
>> Cheers!
>> Anuj Kumar
>> 
>> 
>> On Wed, Mar 1, 2017 at 3:36 PM, A. Soroka <aj...@virginia.edu> wrote:
>> 
>>> Osma--
>>> 
>>> The short answer is that yes, given the right tools you _can_ have
>>> different versions of code accessible in different ways. The longer answer
>>> is that it's probably not a viable alternative for Jena for this problem,
>>> at least not without a lot of other change.
>>> 
>>> You are right to point to the classloader mechanism as being at the heart
>>> of this question, but I must alter your remark just slightly. From "the
>>> Java classloader only sees a single, flat package/class namespace and a set
>>> of compiled classes" to "ANY GIVEN Java classloader only sees a single,
>>> flat package/class namespace and a set of compiled classes".
>>> 
>>> This is the fact that OSGi uses to make it possible to maintain strict
>>> module boundaries (and even dynamic module relationships at run-time). Each
>>> OSGi bundle sees its own classloader, and the framework is responsible for
>>> connecting bundles up to ensure that every bundle has what it needs in the
>>> way of types to function, based on metadata that the bundles provide to the
>>> framework. It's an incredibly powerful system (I use it every day and enjoy
>>> it enormously) but it's also very "heavy" and requires a good deal of
>>> investment to use. In particular, it's probably too large to put _inside_
>>> Jena. (I frequently put Jena inside an OSGi instance, on the other hand.)
>>> 
>>> Java 9 Jigsaw [1] offers some possibility for strong modularization of
>>> this kind, but it's really meant for the JDK itself, not application
>>> libraries. In theory, we could "roll our own" classloader management for
>>> this problem. That sounds like more than a bit of a rabbit hole to me.
>>> There might be another, more lightweight, toolkit out there to this
>>> purpose, but I'm not aware of any myself.
>>> 
>>> Otherwise, yes, you get into shading and the like. We have to do that for
>>> Guava for now because of HADOOP-10101 (grumble grumble) but it's hardly a
>>> thing we want to do any more of than needed, I don't think.
>>> 
>>> ---
>>&

Re: Wiki data

2017-03-02 Thread A. Soroka
That's a good question to ask the Protege support lists.

---
A. Soroka
The University of Virginia Library

> On Mar 2, 2017, at 3:31 PM, javed khan <javedbtk...@gmail.com> wrote:
> 
> Can we add wikidata in Protege like we do in DBpedia. Not sure if Protege
> and Jena allow us to use both wikidata and DBpedia in one application.?
> 
> On Thu, Mar 2, 2017 at 3:42 PM, Marco Neumann <marco.neum...@gmail.com>
> wrote:
> 
>> since wikidata.org provides canonical RDF dumps the data should behave
>> like any other data set. not particularly relevant to this list
>> though.
>> 
>> https://www.wikidata.org/wiki/Wikidata:Database_download#RDF_dumps
>> 
>> 
>> 
>> On Thu, Mar 2, 2017 at 7:35 AM, javed khan <javedbtk...@gmail.com> wrote:
>>> Is Jena support wikidata the same way as it support DBpedia? For example,
>>> we store DBpedia resources in our owl file and then access it from our
>> Jena
>>> code. Any example, if some one provide how to access a wikidata using
>> Jena
>>> code?
>>> 
>>> Thank you.
>> 
>> 
>> 
>> --
>> 
>> 
>> ---
>> Marco Neumann
>> KONA
>> 



Re: Extending Jena Text to Support ElasticSearch as Indexing/Querying Engine

2017-03-01 Thread A. Soroka
> On Feb 28, 2017, at 11:23 AM, Osma Suominen <osma.suomi...@helsinki.fi> wrote:
> 28.02.2017, 17:12, A. Soroka kirjoitti:
>> https://lists.apache.org/thread.html/dce0d502b11891c28e57bbcbb0cdef27d8374d58d9634076b8ef4cd7@1431107516@%3Cdev.jena.apache.org%3E
>> ? In other words, might it be better to factor out between -text and 
>> -spatial and _then_ try to upgrade the Lucene version?
> 
> I certainly wouldn't object to that, but somebody has to volunteer to do the 
> actual work!

Yes, you are right, and I admit I haven't got anything like the time for it now.

>> I don't use the Solr component now, but I could easily see so doing... 
>> that's pretty vague, I know, and I'm not in a position to do any work to 
>> maintain it, so consider that just a very small and blurry data point. :)
> 
> Last time I tried it (it was a while ago) I couldn't figure out how to get it 
> running... If you could just try that with some toy data, then your data 
> point would be a lot less blurry :) I haven't used Solr for anything, so I'm 
> not very familiar with how to set it up, and the jena-text instructions are 
> pretty vague unfortunately.

I will try to perform a test sometime in the next week or so. Hopefully I will 
at least get it running. If not, then we maybe don't need to worry about it so 
much! :grin:

ajs6f



Re: Extending Jena Text to Support ElasticSearch as Indexing/Querying Engine

2017-03-01 Thread A. Soroka
Osma--

The short answer is that yes, given the right tools you _can_ have different 
versions of code accessible in different ways. The longer answer is that it's 
probably not a viable alternative for Jena for this problem, at least not 
without a lot of other change.

You are right to point to the classloader mechanism as being at the heart of 
this question, but I must alter your remark just slightly. From "the Java 
classloader only sees a single, flat package/class namespace and a set of 
compiled classes" to "ANY GIVEN Java classloader only sees a single, flat 
package/class namespace and a set of compiled classes".

This is the fact that OSGi uses to make it possible to maintain strict module 
boundaries (and even dynamic module relationships at run-time). Each OSGi 
bundle sees its own classloader, and the framework is responsible for 
connecting bundles up to ensure that every bundle has what it needs in the way 
of types to function, based on metadata that the bundles provide to the 
framework. It's an incredibly powerful system (I use it every day and enjoy it 
enormously) but it's also very "heavy" and requires a good deal of investment 
to use. In particular, it's probably too large to put _inside_ Jena. (I 
frequently put Jena inside an OSGi instance, on the other hand.)

Java 9 Jigsaw [1] offers some possibility for strong modularization of this 
kind, but it's really meant for the JDK itself, not application libraries. In 
theory, we could "roll our own" classloader management for this problem. That 
sounds like more than a bit of a rabbit hole to me. There might be another, 
more lightweight, toolkit out there to this purpose, but I'm not aware of any 
myself. 

Otherwise, yes, you get into shading and the like. We have to do that for Guava 
for now because of HADOOP-10101 (grumble grumble) but it's hardly a thing we 
want to do any more of than needed, I don't think.

---
A. Soroka
The University of Virginia Library

[1] http://openjdk.java.net/projects/jigsaw/

> On Mar 1, 2017, at 9:03 AM, Osma Suominen <osma.suomi...@helsinki.fi> wrote:
> 
> Hi Anuj!
> 
> Thanks for the clarification.
> 
> However, I'm still not sure I understand the situation completely. I know 
> Maven can perform a lot of tricks, but Maven modules are just convenient ways 
> to structure a Java project. Maven cannot change the fact that at runtime, 
> module divisions don't really matter (except that they usually correspond to 
> package sub-namespaces) and the Java classloader only sees a single, flat 
> package/class namespace and a set of compiled classes (usually within JARs) 
> in the classpath that it needs to check to find the right classes, and if 
> there are two versions of the same library (eg Lucene) with overlapping class 
> names, that's going to cause trouble. The only way around that is to shade 
> some of the libraries, i.e. rename them so that they end up in another, 
> non-conflicting namespace. Apparently Elasticsearch also did some of that in 
> the past [1] but nowadays tries to avoid it.
> 
> Does your assumption 1 ("At a given point in time, only a single Indexing 
> Technology is used") imply that in the assembler configuration, you cannot 
> have ja:loadClass declarations for both Lucene and ES backends? Or how do you 
> run something like Fuseki that contains (in a single big JAR) both the 
> jena-text and jena-text-es modules with all their dependencies, one of which 
> requires the Lucene 4.x classes and the other one the Lucene 6.4.1 classes? 
> How do you ensure that only one of them is used at a time, and that the Java 
> classloader, even though it has access to both versions of Lucene, only loads 
> classes from the single, correct one and not the other? Or do you need to 
> have separate "Fuseki-Lucene" and "Fuseki-ES" packages, so that you don't end 
> up with two Lucene versions within the same Fuseki JAR?
> 
> -Osma
> 
> [1] https://www.elastic.co/blog/to-shade-or-not-to-shade
> 
> 01.03.2017, 11:03, anuj kumar kirjoitti:
>> Hi Osma,
>> 
>> I understand what you are saying. There are ways to mitigate risks and
>> balance the refactoring without affecting the existing modules. But I will
>> not delve into those now. I am not an expert in Jena to convincingly say
>> that it is possible, without any hiccups. But I can take a guess and say
>> that it is indeed possible :)
>> 
>> For the question: "is it even possible to mix modules that depend on
>> different versions of the Lucene libraries within the same project?"
>> 
>> I actually do not understand what you mean by mixing modules. I assume you
>> mean having jena-text and jena-text-es as dependencies in a build without
>> causing the build to conflict.

Re: Deleting the default graph from the TDB

2017-02-28 Thread A. Soroka
Do you have the default graph set up as the union graph?
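
I ask because that would explain the symptom: CLEAR DEFAULT and DROP DEFAULT empty the stored
default graph, but when tdb:unionDefaultGraph is on, the default graph that queries see is a
view over the union of the named graphs, so their triples keep showing up. A minimal assembler
fragment with that flag set (dataset name and location are just placeholders) looks like:

    @prefix tdb: <http://jena.hpl.hp.com/2008/tdb#> .

    <#dataset> a tdb:DatasetTDB ;
        tdb:location "DB" ;
        tdb:unionDefaultGraph true .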

---
A. Soroka
The University of Virginia Library

> On Feb 28, 2017, at 12:21 PM, Sandor Kopacsi <sandor.kopa...@univie.ac.at> 
> wrote:
> 
> Dear List-members!
> 
> I would like to delete the default graph from TDB using the web interface of 
> Fuseki, but for some reason it doesn't work.
> 
> In the config file of Fuseki the serviceUpdate is allowed, and I can perform 
> the CLEAR DEFAULT and DROP DEFAULT SPARQL updates successfully, but the 
> default graph is still there, and contains triples.
> 
> When I run Fuseki in the memory with the same setting and update above, I can 
> simply delete the default graph.
> 
> What is wrong?
> 
> Thanks and best regards,
> Sandor
> 
> -- 
> 
> Dr. Sandor Kopacsi
> IT Software Designer
> 
> Vienna University Computer Center
> 



Re: Extending Jena Text to Support ElasticSearch as Indexing/Querying Engine

2017-02-28 Thread A. Soroka
I second Osma's congrats!

Do we want to take this into account:

https://lists.apache.org/thread.html/dce0d502b11891c28e57bbcbb0cdef27d8374d58d9634076b8ef4cd7@1431107516@%3Cdev.jena.apache.org%3E

? In other words, might it be better to factor out between -text and -spatial 
and _then_ try to upgrade the Lucene version?

I don't use the Solr component now, but I could easily see so doing... that's 
pretty vague, I know, and I'm not in a position to do any work to maintain it, 
so consider that just a very small and blurry data point. :)


---
A. Soroka
The University of Virginia Library

> On Feb 28, 2017, at 3:20 AM, Osma Suominen <osma.suomi...@helsinki.fi> wrote:
> 
> Hi Anuj!
> 
> Congratulations for getting the PoC working!
> 
> I'm not sure I like the idea of having a separate jena-text-es module.
> 
> Am I right that your main concern with creating a separate module is that the 
> Elasticsearch client library requires a newer Lucene version than what 
> jena-text currently uses? In that case, I think the solution should be 
> upgrading the Lucene version everywhere, i.e. the current jena-text and 
> jena-spatial modules. This work has already started (see JENA-1250) but it 
> has recently stalled and has not yet been merged.
> 
> I don't think it should be a problem to have multiple implementations 
> (Lucene, Solr, ES) within the same module. Ideally a lot of the 
> infrastructure could be shared (which is of course possible also with 
> separate modules, as you have done), and I would hope that also the unit 
> tests could be reused for the different implementations, although that is 
> currently not the case (the unit tests only target Lucene).
> 
> The Solr side of jena-text has unfortunately bitrotted even more than the 
> Lucene support. I've previously suggested that it should be removed entirely 
> [1], but there were no responses to my suggestion at the time.
> 
> -Osma
> 
> [1] https://www.mail-archive.com/dev@jena.apache.org/msg16380.html
> 
> 27.02.2017, 14:08, anuj kumar kirjoitti:
>> Hi All,
>> 
>> *Apologies for the long email.*
>> 
>> As some of you know, I have been working on extending Jena to Support
>> ElasticSearch for Text Indexing (in addition to Lucene and Solr).
>> 
>> I have come to a point where I have a basic (read non-prod) code that can
>> index RDFS:label text data into ElasticSearch 5.2.1
>> The code is working and testable. You simply have to download elasticsearch
>> 5.2.1 and run it locally for executing the test within  the ES
>> implementation.
>> The code is NOT production Ready but just a PoC code.  You can find the
>> first cut of the code here: https://github.com/EaseTech/jena (look inside
>> the module jena-text-es)
>> 
>> I need feedback from Jena maintainers and community, in terms of the
>> structuring of the code as this is important for me to finalize before I
>> move to implement the full blown Production Ready code for Jean Text
>> ElasticSearch Integration.
>> 
>> Here is the short description of what I did and the reasoning behind it:
>> 
>> 1. Created a separate module : *jena-text-es *that extends from *jena-text*
>> AND excludes all the Lucene related and Solr related dependencies. The
>> reason I had to do it was that* jena-text* module depends on Lucene version
>> 4.9.1 whereas ElasticSearch 5.2.1 version depends on Lucene 6.4.1. This was
>> resulting in the conflicts of Lucene version if I created the code for
>> ElasticSearch support within the *jena-text *module. Thus the need to
>> create a separate module.
>> 2. A side effect of creating a separate module meant, I had to extend the
>> TextDataSetFactory.java class present in the *jena-text *module to include
>> methods for creating ElasticSearch index objects. I named it
>> ESTextDataSetFactory. At this point in time I do not know if this is the
>> right approach or if Jena ALWAYS instantiates Index objects using the
>> TextDataSetFactory.java class. My initial investigation showed it is fine,
>> but I want the people who are experts in Jena to please confirm.
>> 3. I have tested a simple integration with ElasticSearch by defining a test
>> class under
>> src/test/java/org/apache/jena/query/text/TestBuildTextDataSet.java. You can
>> run this test by first starting an instance of Elasticsearch 5.2.1 locally.
>> 
>> *My Queries*
>> 1. Is it acceptable by the Jena community that I create a separate module
>> for support of ElasticSearch and call it *jena-text-es*?
>> 2. Is it fine if I extend the TextDataSetFactory.java class within the
>> *jena-text-es
>> *module?
>> 
>> *Food for Thought*
>>

Re: SPARQL Update over model

2017-02-16 Thread A. Soroka
Try starting with:

https://jena.apache.org/documentation/javadoc/arq/org/apache/jena/query/DatasetFactory.html

(I probably should have said that to begin with.)
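
A minimal sketch of that route, reusing the file name and graph IRI from your message purely as
placeholders:

    import org.apache.jena.query.Dataset;
    import org.apache.jena.query.DatasetFactory;
    import org.apache.jena.riot.RDFDataMgr;
    import org.apache.jena.update.UpdateAction;

    // Your DELETE ... WHERE update, generalised here with ?s for brevity
    String update =
        "PREFIX dc: <http://purl.org/dc/elements/1.1/> " +
        "DELETE { GRAPH <http://3cixty.com/cotedazur/test> { ?s dc:identifier ?o } } " +
        "WHERE  { GRAPH <http://3cixty.com/cotedazur/test> { ?s dc:identifier ?o } }";

    // Put the parsed model into a dataset under the graph name the update targets
    Dataset dataset = DatasetFactory.create();
    dataset.addNamedModel("http://3cixty.com/cotedazur/test", RDFDataMgr.loadModel("file.ttl"));

    // Run the update against the dataset rather than against a single model
    UpdateAction.parseExecute(update, dataset);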

---
A. Soroka
The University of Virginia Library

> On Feb 16, 2017, at 11:11 AM, Julien Plu 
> <julien@redaction-developpez.com> wrote:
> 
> Thanks! I will try to use
> https://jena.apache.org/documentation/javadoc/arq/org/apache/jena/sparql/core/DatasetGraphBase.html
> seems to be what I'm looking for.
> 
> --
> Julien Plu
> 
> PhD Student at Eurecom.
> Personal webpage: http://jplu.developpez.com
> FOAF file : http://jplu.developpez.com/julien
> Email address : julien@eurecom.fr && *plu.jul...@gmail.com
> <plu.jul...@gmail.com>*
> Phone : +33493008103
> Twitter : @julienplu
> 
> 2017-02-16 17:01 GMT+01:00 A. Soroka <aj...@virginia.edu>:
> 
>> A model holds exactly one graph. Perhaps you want to be using a dataset
>> [1]?
>> 
>> ---
>> A. Soroka
>> The University of Virginia Library
>> 
>> [1] https://www.w3.org/TR/rdf11-concepts/#section-dataset
>> 
>>> On Feb 16, 2017, at 10:59 AM, Julien Plu <julien.plu@redaction-
>> developpez.com> wrote:
>>> 
>>> Hello,
>>> 
>>> I'm trying to make a SPARQL update query over a model, the problem is
>> that
>>> the query has to delete a triple belonging to a specific graph:
>>> 
>>> PREFIX dc:<http://purl.org/dc/elements/1.1/>
>>> DELETE {
>>>   GRAPH <http://3cixty.com/cotedazur/test> {
>>>   <
>>> http://data.linkedevents.org/event/51f5ecc8-55b4-3a1b-98de-55e4448ab7bf>
>>> dc:identifier ?o .
>>>   }
>>>   } WHERE {
>>>   GRAPH <http://3cixty.com/cotedazur/test> {
>>>   <
>>> http://data.linkedevents.org/event/51f5ecc8-55b4-3a1b-98de-55e4448ab7bf>
>>> dc:identifier ?o .
>>>   }
>>> }
>>> 
>>> which apparently has no effect when we proceed that way:
>>> 
>>> Model model = RDFDataMgr.loadModel("file.ttl");
>>> UpdateAction.parseExecute(sparql_query, model);
>>> 
>>> I suppose it is normal as I never created the graph in the model.
>>> Nevertheless when I do:
>>> 
>>> Model model = ModelFactory.createModelForGraph(new
>>> SimpleGraphMaker().createGraph("http://3cixty.com/cotedazur/test;));
>>> model.read("file.ttl");
>>> UpdateAction.parseExecute(sparql_query, model);
>>> 
>>> It has no effect as well. Can someone guide me on how to do such thing
>>> properly?
>>> 
>>> Thanks in advance.
>>> 
>>> Regards.
>>> --
>>> Julien Plu
>>> 
>>> PhD Student at Eurecom.
>>> Personal webpage: http://jplu.developpez.com
>>> FOAF file : http://jplu.developpez.com/julien
>>> Email address : julien@eurecom.fr && *plu.jul...@gmail.com
>>> <plu.jul...@gmail.com>*
>>> Phone : +33493008103
>>> Twitter : @julienplu
>> 
>> 



Re: SPARQL Update over model

2017-02-16 Thread A. Soroka
A model holds exactly one graph. Perhaps you want to be using a dataset [1]?

---
A. Soroka
The University of Virginia Library

[1] https://www.w3.org/TR/rdf11-concepts/#section-dataset

> On Feb 16, 2017, at 10:59 AM, Julien Plu 
> <julien@redaction-developpez.com> wrote:
> 
> Hello,
> 
> I'm trying to make a SPARQL update query over a model, the problem is that
> the query has to delete a triple belonging to a specific graph:
> 
> PREFIX dc:<http://purl.org/dc/elements/1.1/>
> DELETE {
>GRAPH <http://3cixty.com/cotedazur/test> {
><
> http://data.linkedevents.org/event/51f5ecc8-55b4-3a1b-98de-55e4448ab7bf>
> dc:identifier ?o .
>}
>} WHERE {
>GRAPH <http://3cixty.com/cotedazur/test> {
><
> http://data.linkedevents.org/event/51f5ecc8-55b4-3a1b-98de-55e4448ab7bf>
> dc:identifier ?o .
>}
> }
> 
> which apparently has no effect when we proceed that way:
> 
> Model model = RDFDataMgr.loadModel("file.ttl");
> UpdateAction.parseExecute(sparql_query, model);
> 
> I suppose it is normal as I never created the graph in the model.
> Nevertheless when I do:
> 
> Model model = ModelFactory.createModelForGraph(new
> SimpleGraphMaker().createGraph("http://3cixty.com/cotedazur/test;));
> model.read("file.ttl");
> UpdateAction.parseExecute(sparql_query, model);
> 
> It has no effect as well. Can someone guide me on how to do such thing
> properly?
> 
> Thanks in advance.
> 
> Regards.
> --
> Julien Plu
> 
> PhD Student at Eurecom.
> Personal webpage: http://jplu.developpez.com
> FOAF file : http://jplu.developpez.com/julien
> Email address : julien@eurecom.fr && *plu.jul...@gmail.com
> <plu.jul...@gmail.com>*
> Phone : +33493008103
> Twitter : @julienplu



Re: Remove class

2017-02-15 Thread A. Soroka
Please suggest to your teacher that when he or she gives such an assignment (to 
use Jena rules) it would be useful and helpful to contact this list first. 
There are many people here who would be happy to help advise your teacher and 
make the assignment as good as it can be.

---
A. Soroka
The University of Virginia Library

> On Feb 15, 2017, at 8:57 AM, tina sani <tinamadri...@gmail.com> wrote:
> 
> yes,
> 
> On Wed, Feb 15, 2017 at 4:55 PM, A. Soroka <aj...@virginia.edu> wrote:
> 
>> Can you tell us something about this project? Is this a school assignment?
>> 
>> ---
>> A. Soroka
>> The University of Virginia Library
>> 
>>> On Feb 15, 2017, at 8:54 AM, tina sani <tinamadri...@gmail.com> wrote:
>>> 
>>> Lorenz, using rules in my project is mandatory part so need to stick with
>>> it.
>>> 
>>> On Wed, Feb 15, 2017 at 4:50 PM, Chris Dollin <
>> chris.dol...@epimorphics.com>
>>> wrote:
>>> 
>>>> On 15 February 2017 at 13:06, tina sani <tinamadri...@gmail.com> wrote:
>>>> 
>>>>> Hello Lorenz, so no way to remove or replace these classes?
>>>>> setOntClass also not working, I have tried it.
>>>>> 
>>>> 
>>>> If you don't want inference to add back the statements
>>>> you have deleted
>>>> 
>>>> Then don't use inference to add the statements in
>>>> the first place
>>>> 
>>>> Then you can add and remove statements as you like
>>>> 
>>>> Chris
>>>> 
>>>> 
>>>>> On Wed, Feb 15, 2017 at 3:50 PM, Lorenz Buehmann <
>>>>> buehm...@informatik.uni-leipzig.de> wrote:
>>>>> 
>>>>>> In general, you cannot remove inferred statements - those are given by
>>>>>> data + rules.
>>>>>> 
>>>>>> Indeed, you can remove statements on a materialized inferred model,
>> but
>>>>>> implicitly the class assertion does still exist.
>>>>>> 
>>>>>> 
>>>>>> On 15.02.2017 13:11, tina sani wrote:
>>>>>>> For example, I have added some classes for an individual using rules.
>>>>>>> emplyee 1 is type of Manager, Programmer, Worker.
>>>>>>> 
>>>>>>> Can I replace these classes with one class like
>>>>>>> if (empl1.hasOntclass(manager) && (emp1.hasOntClass(programmer) &
>>>>>>> (emp1.hasOntClass(worker)  then emp1 should be type of one class
>>>>> Employee
>>>>>>> and replace/remove these three classes.?
>>>>>>> 
>>>>>>> There is one method, I dont know if it is suitable here to apply?
>>>>>>> 
>>>>>>> Individual.removeOntClass(Resource)
>>>>>>> <https://jena.apache.org/documentation/javadoc/jena/
>>>>>> org/apache/jena/ontology/Individual.html#removeOntClass-org.apache.
>>>>>> jena.rdf.model.Resource->
>>>>>>> 
>>>>>> 
>>>>>> 
>>>>> 
>>>> 
>> 
>> 



Re: Remove class

2017-02-15 Thread A. Soroka
Can you tell us something about this project? Is this a school assignment?

---
A. Soroka
The University of Virginia Library

> On Feb 15, 2017, at 8:54 AM, tina sani <tinamadri...@gmail.com> wrote:
> 
> Lorenz, using rules in my project is mandatory part so need to stick with
> it.
> 
> On Wed, Feb 15, 2017 at 4:50 PM, Chris Dollin <chris.dol...@epimorphics.com>
> wrote:
> 
>> On 15 February 2017 at 13:06, tina sani <tinamadri...@gmail.com> wrote:
>> 
>>> Hello Lorenz, so no way to remove or replace these classes?
>>> setOntClass also not working, I have tried it.
>>> 
>> 
>> If you don't want inference to add back the statements
>> you have deleted
>> 
>> Then don't use inference to add the statements in
>> the first place
>> 
>> Then you can add and remove statements as you like
>> 
>> Chris
>> 
>> 
>>> On Wed, Feb 15, 2017 at 3:50 PM, Lorenz Buehmann <
>>> buehm...@informatik.uni-leipzig.de> wrote:
>>> 
>>>> In general, you cannot remove inferred statements - those are given by
>>>> data + rules.
>>>> 
>>>> Indeed, you can remove statements on a materialized inferred model, but
>>>> implicitly the class assertion does still exist.
>>>> 
>>>> 
>>>> On 15.02.2017 13:11, tina sani wrote:
>>>>> For example, I have added some classes for an individual using rules.
>>>>> emplyee 1 is type of Manager, Programmer, Worker.
>>>>> 
>>>>> Can I replace these classes with one class like
>>>>> if (empl1.hasOntclass(manager) && (emp1.hasOntClass(programmer) &
>>>>> (emp1.hasOntClass(worker)  then emp1 should be type of one class
>>> Employee
>>>>> and replace/remove these three classes.?
>>>>> 
>>>>> There is one method, I dont know if it is suitable here to apply?
>>>>> 
>>>>> Individual.removeOntClass(Resource)
>>>>> <https://jena.apache.org/documentation/javadoc/jena/
>>>> org/apache/jena/ontology/Individual.html#removeOntClass-org.apache.
>>>> jena.rdf.model.Resource->
>>>>> 
>>>> 
>>>> 
>>> 
>> 



Re: Pull NamedModel from a Dataset didn't work since jena 3.1.1

2017-02-14 Thread A. Soroka
Thanks, I've made that change. Keep in mind that if you see an error on a doc 
page, you can always use the "Improve this Page" link at the top of the page to 
send a patch. 

---
A. Soroka
The University of Virginia Library

> On Feb 14, 2017, at 11:21 AM, marschelin...@web.de wrote:
> 
>> That link works fine for me and leads to appropriate documentation. Perhaps 
>> you can explain what you mean by "it is not a correct linking"?
> 
> I don't mean the link "http://jena.apache.org/documentation/rdfconnection/;. 
> I speak from the link at the bottom of this site in the section "Examples". 
> The link is a plain text and no correct html link. The link at the bottom is 
> "https://github.com/apache/jena/tree/master/jena-rdfconnection/src/main/java/rdfconnection/examples;.
>  And this leads me to an error page. ;)
> 
> I think the link for the example page must be 
> "https://github.com/apache/jena/tree/master/jena-rdfconnection/src/main/java/org/apache/jena/rdfconnection/examples;.
> 
> Greetings,
> 
> Roman
> 
>>> On Feb 14, 2017, at 3:56 AM, marschelin...@web.de wrote:
>>> 
>>> For your information the linking in the example section on the page 
>>> "http://jena.apache.org/documentation/rdfconnection/; don't work correct. 
>>> First it is not a correct linking and second the link > > leads me to an 
>>> error page.
>  



Re: Pull NamedModel from a Dataset didn't work since jena 3.1.1

2017-02-14 Thread A. Soroka
That link works fine for me and leads to appropriate documentation. Perhaps you 
can explain what you mean by "it is not a correct linking"?

---
A. Soroka
The University of Virginia Library

> On Feb 14, 2017, at 3:56 AM, marschelin...@web.de wrote:
> 
> For your information the linking in the example section on the page 
> "http://jena.apache.org/documentation/rdfconnection/; don't work correct. 
> First it is not a correct linking and second the link leads me to an error 
> page.



Re: [ANN] Apache Jena 3.2.0 released with Fuseki 2.5.0

2017-02-13 Thread A. Soroka
It is compatible for TDB; were that not the case, the release notes would normally say so.

I'm not aware of any particularly sharp changes in the API for 3.2.0. Some very 
old material has been deprecated (e.g. see JENA-1270).

---
A. Soroka
The University of Virginia Library

> On Feb 13, 2017, at 10:48 AM, Jean-Marc Vanel <jeanmarc.va...@gmail.com> 
> wrote:
> 
> If you don't say so, I assume that it's binary compatible for TDB database
> files.
> 
> About breaking 3.1.X API , is there something API breaking in 3.2.0 ?
> 
> 
> 
> 2017-02-10 16:59 GMT+01:00 A. Soroka <aj...@virginia.edu>:
> 
>> We are pleased to announce the release of Jena 3.2.0 (including Fuseki 2
>> 2.5.0)!
>> 
>> == Notable in this release:
>> 
>> * New facility for managing RDF Connections (JENA-1267)
>> 
>> * Quad/Triple/Node now Serializable (JENA-1233)
>> 
>> * @context overrides available for JsonLDReader (JENA-1279)
>> 
>> * jena-spatial queries no longer sort intermediate results for a big
>> performance improvement (JENA-1277)
>> 
>> * General maintenance
>> 
>>   A full listing of tickets addressed in this release is available at:
>> 
>>   https://issues.apache.org/jira/secure/ReleaseNote.jspa?
>> version=12338678&styleName=Text&projectId=12311220
>> 
>> * Dependency changes:
>> 
>> Updates:
>>   com.github.jsonld-java:jsonld-java  0.8.3 -> 0.9.0
>> 
>> == Obtaining Apache Jena 3.2.0
>> 
>> If migrating from Jena 2.x.x, please see
>> http://jena.staging.apache.org/documentation/migrate_jena2_jena3.html
>> 
>> * Via central.maven.org
>> 
>> The main jars and their dependencies can used with:
>> 
>> 
>> <dependency>
>>   <groupId>org.apache.jena</groupId>
>>   <artifactId>apache-jena-libs</artifactId>
>>   <type>pom</type>
>>   <version>3.2.0</version>
>> </dependency>
>> 
>> Full details of all maven artifacts are described at:
>> 
>>   http://jena.apache.org/download/maven.html
>> 
>> * As binary downloads
>> 
>> Apache Jena libraries are available as a binary distribution of
>> libraries. For details of a global mirror copy of Jena binaries please see:
>> 
>> http://jena.apache.org/download/
>> 
>> * Source code for the release
>> 
>> The signed source code of this release is available at:
>> 
>> http://www.apache.org/dist/jena/source/
>> 
>> and the signed master source for all Apache Jena releases is available
>> at: http://archive.apache.org/dist/jena/
>> 
>> 
>> == Contributing
>> 
>> If you would like to help out, a good place to look is the list of
>> unresolved JIRA at:
>> 
>> http://s.apache.org/jena-jira-current
>> 
>> or drop into the dev@ list.
>> 
>> We use github pull requests and other ways for accepting code:
>>https://github.com/apache/jena/blob/master/CONTRIBUTING.md
>> 
>> The Apache Jena development community
>> 
>> 
>> 
> 
> 
> -- 
> Jean-Marc Vanel
> http://www.semantic-forms.cc:9111/display?displayuri=http://jmvanel.free.fr/jmv.rdf%23me
> Déductions SARL - Consulting, services, training,
> Rule-based programming, Semantic Web
> +33 (0)6 89 16 29 52
> Twitter: @jmvanel , @jmvanel_fr ; chat: irc://irc.freenode.net#eulergui



[ANN] Apache Jena 3.2.0 released with Fuseki 2.5.0

2017-02-10 Thread A. Soroka
We are pleased to announce the release of Jena 3.2.0 (including Fuseki 2 2.5.0)!

== Notable in this release:

* New facility for managing RDF Connections (JENA-1267)

* Quad/Triple/Node now Serializable (JENA-1233)

* @context overrides available for JsonLDReader (JENA-1279)

* jena-spatial queries no longer sort intermediate results for a big 
performance improvement (JENA-1277)

* General maintenance

   A full listing of tickets addressed in this release is available at:

   
https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12338678&styleName=Text&projectId=12311220

* Dependency changes:

Updates:
   com.github.jsonld-java:jsonld-java  0.8.3 -> 0.9.0

== Obtaining Apache Jena 3.2.0

If migrating from Jena 2.x.x, please see
http://jena.staging.apache.org/documentation/migrate_jena2_jena3.html

* Via central.maven.org

The main jars and their dependencies can be used with:

 
<dependency>
  <groupId>org.apache.jena</groupId>
  <artifactId>apache-jena-libs</artifactId>
  <type>pom</type>
  <version>3.2.0</version>
</dependency>

Full details of all maven artifacts are described at:

   http://jena.apache.org/download/maven.html

* As binary downloads

Apache Jena libraries are available as a binary distribution of
libraries. For details of a global mirror copy of Jena binaries please see:

http://jena.apache.org/download/

* Source code for the release

The signed source code of this release is available at:

http://www.apache.org/dist/jena/source/

and the signed master source for all Apache Jena releases is available
at: http://archive.apache.org/dist/jena/


== Contributing

If you would like to help out, a good place to look is the list of
unresolved JIRA at:

http://s.apache.org/jena-jira-current

or drop into the dev@ list.

We use github pull requests and other ways for accepting code:
https://github.com/apache/jena/blob/master/CONTRIBUTING.md

 The Apache Jena development community




Re: Release vote : 3.2.0

2017-02-02 Thread A. Soroka
I think Andy has the right story here (I must have copied it from someone using 
Linux). In fact, I did the release candidate on a Mac, a fact which will be 
reflected in my forthcoming vote.

---
A. Soroka
The University of Virginia Library

> On Feb 2, 2017, at 5:40 AM, Andy Seaborne <a...@apache.org> wrote:
> 
> The Apache Jenkins installation has Linux slaves.
> 
> There are problems with Windows - there are general problems with temp files 
> getting left around and Jena uses a lot of temp space so it was not playing 
> nice with other jobs on those machines.
> 
> There aren't any Mac slaves.
> 
> But Java is portable, right? :-)
> 
> I think the text was copied from a vote call from someone who ran on Linux, 
> so that test was implicitly done already.
> 
>Andy
> 
> On 01/02/17 21:27, Dick Murray wrote:
>> ;-) Nothing implied from me and as I thought re Linux/Dev. Thanks (devs)
>> for the work.
>> 
>> On 1 Feb 2017 19:33, "A. Soroka" <aj...@virginia.edu> wrote:
>> 
>>> No, I should say that that exclusion is just a nod to the fact that so
>>> many of the Jena devs use Linux that it's just much less of an issue to
>>> find Linux testers. Windows seems to be generally the hardest platform to
>>> get results for. I certainly didn't intend any more than that, but I copied
>>> that list from earlier release vote announcements. (!)
>>> 
>>> But maybe I am missing some history?
>>> 
>>> ajs6f
>>> 
>>>> On Feb 1, 2017, at 2:30 PM, Dick Murray <dandh...@gmail.com> wrote:
>>>> 
>>>> Hi.
>>>> 
>>>> Under checking Windows and Mac OS's are listed but not Linux. Is Jena
>>>> assumed to pass? I'mean running Jena 3.2 snapshot on Ubuntu 16.04 and
>>>> Centos 7.
>>>> 
>>>> If you haven't broken anything in the snapshot then I vote release. ;-)
>>>> 
>>>> On 1 Feb 2017 16:09, "A. Soroka" <aj...@virginia.edu> wrote:
>>>> 
>>>>> Hello, Jena-folks!
>>>>> 
>>>>> Let's vote on a release of Jena 3.2.0.
>>>>> 
>>>>> Everyone, not just committers, is invited to test and vote. Three +1's
>>>>> from PMC members permit a release, but everyone is not just welcome but
>>>>> _needed_ to do really good full testing. If a non-committer turns up an
>>>>> issue, you can bet I will investigate fast.
>>>>> 
>>>>> This is a distribution of Jena and also of Fuseki 1 and 2.
>>>>> 
>>>>> Versions being released include: Jena @ 3.2.0 (RDF libraries, database
>>>>> gear, and utilities), Fuseki 1 @ 1.5.0 and Fuseki 2 @ 2.5.0 (SPARQL
>>>>> servers).
>>>>> 
>>>>> Staging repository:
>>>>> https://repository.apache.org/content/repositories/orgapachejena-1016/
>>>>> 
>>>>> Proposed distributions:
>>>>> https://dist.apache.org/repos/dist/dev/jena/binaries/
>>>>> 
>>>>> Keys:
>>>>> https://svn.apache.org/repos/asf/jena/dist/KEYS
>>>>> 
>>>>> Git tag:
>>>>> jena-3.2.0-rc1
>>>>> 4bdc528c788681b90acf341de0989ca7686bae8c
>>>>> https://git-wip-us.apache.org/repos/asf?p=jena.git;a=commit;h=
>>>>> 4bdc528c788681b90acf341de0989ca7686bae8c
>>>>> 
>>>>> 
>>>>> Please vote to approve this release:
>>>>> 
>>>>>   [ ] +1 Approve the release
>>>>>   [ ]  0 Don't care
>>>>>   [ ] -1 Don't release, because ...
>>>>> 
>>>>> This vote will be open to the end of
>>>>> 
>>>>>  Monday, 6 February, 23:59 UTC
>>>>> 
>>>>> Thanks to everyone who can help test and give feedback of every kind!
>>>>> 
>>>>> ajs6f (A. Soroka)
>>>>> 
>>>>> 
>>>>> Checking needed:
>>>>> 
>>>>> • Does everything work on MS Windows?
>>>>> • Does everything work on OS X?
>>>>> • Is the GPG signature okay?
>>>>> • Is there a source archive?
>>>>> • Can the source archive really be built?
>>>>> • Is there a correct LICENSE and NOTICE file in each artifact (both
>>> source
>>>>> and binary artifacts)?
>>>>> • Does the NOTICE file contain all necessary attributions?
>>>>> • Does the tag in the SCM contain reproducible sources?
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>> 
>>> 
>> 



Re: Release vote : 3.2.0

2017-02-01 Thread A. Soroka
No, I should say that that exclusion is just a nod to the fact that so many of 
the Jena devs use Linux that it's just much less of an issue to find Linux 
testers. Windows seems to be generally the hardest platform to get results for. 
I certainly didn't intend any more than that, but I copied that list from 
earlier release vote announcements. (!)

But maybe I am missing some history?

ajs6f 

> On Feb 1, 2017, at 2:30 PM, Dick Murray <dandh...@gmail.com> wrote:
> 
> Hi.
> 
> Under checking Windows and Mac OS's are listed but not Linux. Is Jena
> assumed to pass? I'mean running Jena 3.2 snapshot on Ubuntu 16.04 and
> Centos 7.
> 
> If you haven't broken anything in the snapshot then I vote release. ;-)
> 
> On 1 Feb 2017 16:09, "A. Soroka" <aj...@virginia.edu> wrote:
> 
>> Hello, Jena-folks!
>> 
>> Let's vote on a release of Jena 3.2.0.
>> 
>> Everyone, not just committers, is invited to test and vote. Three +1's
>> from PMC members permit a release, but everyone is not just welcome but
>> _needed_ to do really good full testing. If a non-committer turns up an
>> issue, you can bet I will investigate fast.
>> 
>> This is a distribution of Jena and also of Fuseki 1 and 2.
>> 
>> Versions being released include: Jena @ 3.2.0 (RDF libraries, database
>> gear, and utilities), Fuseki 1 @ 1.5.0 and Fuseki 2 @ 2.5.0 (SPARQL
>> servers).
>> 
>> Staging repository:
>> https://repository.apache.org/content/repositories/orgapachejena-1016/
>> 
>> Proposed distributions:
>> https://dist.apache.org/repos/dist/dev/jena/binaries/
>> 
>> Keys:
>> https://svn.apache.org/repos/asf/jena/dist/KEYS
>> 
>> Git tag:
>> jena-3.2.0-rc1
>> 4bdc528c788681b90acf341de0989ca7686bae8c
>> https://git-wip-us.apache.org/repos/asf?p=jena.git;a=commit;h=
>> 4bdc528c788681b90acf341de0989ca7686bae8c
>> 
>> 
>> Please vote to approve this release:
>> 
>>[ ] +1 Approve the release
>>[ ]  0 Don't care
>>[ ] -1 Don't release, because ...
>> 
>> This vote will be open to the end of
>> 
>>   Monday, 6 February, 23:59 UTC
>> 
>> Thanks to everyone who can help test and give feedback of every kind!
>> 
>>  ajs6f (A. Soroka)
>> 
>> 
>> Checking needed:
>> 
>> • Does everything work on MS Windows?
>> • Does everything work on OS X?
>> • Is the GPG signature okay?
>> • Is there a source archive?
>> • Can the source archive really be built?
>> • Is there a correct LICENSE and NOTICE file in each artifact (both source
>> and binary artifacts)?
>> • Does the NOTICE file contain all necessary attributions?
>> • Does the tag in the SCM contain reproducible sources?
>> 
>> 
>> 
>> 
>> 



Release vote : 3.2.0

2017-02-01 Thread A. Soroka
Hello, Jena-folks! 

Let's vote on a release of Jena 3.2.0.

Everyone, not just committers, is invited to test and vote. Three +1's from PMC 
members permit a release, but everyone is not just welcome but _needed_ to do 
really good full testing. If a non-committer turns up an issue, you can bet I 
will investigate fast.

This is a distribution of Jena and also of Fuseki 1 and 2. 

Versions being released include: Jena @ 3.2.0 (RDF libraries, database gear, 
and utilities), Fuseki 1 @ 1.5.0 and Fuseki 2 @ 2.5.0 (SPARQL servers).

Staging repository:
https://repository.apache.org/content/repositories/orgapachejena-1016/

Proposed distributions:
https://dist.apache.org/repos/dist/dev/jena/binaries/

Keys:
https://svn.apache.org/repos/asf/jena/dist/KEYS

Git tag:
jena-3.2.0-rc1
4bdc528c788681b90acf341de0989ca7686bae8c
https://git-wip-us.apache.org/repos/asf?p=jena.git;a=commit;h=4bdc528c788681b90acf341de0989ca7686bae8c


Please vote to approve this release:

[ ] +1 Approve the release
[ ]  0 Don't care
[ ] -1 Don't release, because ...

This vote will be open to the end of

   Monday, 6 February, 23:59 UTC

Thanks to everyone who can help test and give feedback of every kind!

  ajs6f (A. Soroka)


Checking needed:

• Does everything work on MS Windows?
• Does everything work on OS X?
• Is the GPG signature okay?
• Is there a source archive?
• Can the source archive really be built?
• Is there a correct LICENSE and NOTICE file in each artifact (both source and 
binary artifacts)?
• Does the NOTICE file contain all necessary attributions?
• Does the tag in the SCM contain reproducible sources?






Re: 10G loading file to fuseki

2017-01-19 Thread A. Soroka
Your procedure seems reasonable. I still don't understand what you mean by "I 
can query the small Lexo database but not the LinkedCT one." What exactly are 
you doing to send queries?

Please show the configuration you added for your new dataset.
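
For comparison, a minimal Fuseki 2 service description over a pre-built TDB database usually
looks something like this; the name and location below are illustrative guesses, not your file:

    @prefix fuseki: <http://jena.apache.org/fuseki#> .
    @prefix tdb:    <http://jena.hpl.hp.com/2008/tdb#> .

    <#service> a fuseki:Service ;
        fuseki:name         "LinkedCT" ;      # served at /LinkedCT
        fuseki:serviceQuery "sparql" ;        # SPARQL query endpoint
        fuseki:dataset      <#tdbDataset> .

    <#tdbDataset> a tdb:DatasetTDB ;
        tdb:location "run/databases/LinkedCT" .   # directory containing the .dat/.idn files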

---
A. Soroka
The University of Virginia Library
> On Jan 19, 2017, at 2:24 PM, Reihaneh Amini <amini.reiha...@gmail.com> wrote:
> 
> Sure!
> Thanks for helping!
> 
> 1. TDB laoding:
> 
> public static void main(String[] args) {
> 
> String file = "./linkedct-live-dump-latest.nt";   //source 10 GB file
>   String directory;
>   directory = "./Data";
> //target TDB files
> 
>   Dataset dataset = TDBFactory.createDataset(directory);
> 
>   Model model = dataset.getNamedModel("http://nameFile");
> 
>   TDBLoader.loadModel(model, file );
> 
> 
>   }
> 
> I loaded the big file by using TDB model and finally after 10 hours of
> execution, it gave me the Data folder that contains 28 files with .dat and
> .idn suffix.
> 
> 2. I have a directory that I downloaded from download part of fuseki-server
> naming: "apache-jena-fuseki-2.4.1". It contains a batch file that by
> running that file from command-line I get access to the server by
> localhost/3030.
> When I get access to the server and upload a small dataset (<100mb) there
> named "Lexo", automatically that dataset will be appeared in the
> "apache-jena-fuseki-2.4.1/run/databases/lexo". That dataset contains the
> .dat and .idn files.
> 
> So, by doing the reverse process. First I read my big dataset by step one.
> Then I save these generated files into
> "apache-jena-fuseki-2.4.1/run/databases/LinkedCT"  path under "LinkedCT"
> folder.
> 
> 3. I run the server again and now both databases appear in the server. ( I
> also took care of creating a config file for this new dataset in
> "Configuration" folder.
> 
> 4. this seems normal from my perspective.
> C:\Programs\apache-jena-fuseki-2.4.1>fuseki-server
> Picked up _JAVA_OPTIONS: -Xms2048m -Xmx4096m
> [2017-01-19 10:23:59] Server INFO  Fuseki 2.4.1
> [2017-01-19 10:23:59] Config INFO
> FUSEKI_HOME=C:\Programs\apache-jena-fuseki-2.4.1\.
> [2017-01-19 10:23:59] Config INFO
> FUSEKI_BASE=C:\Programs\apache-jena-fuseki-2.4.1\run
> [2017-01-19 10:23:59] ServletINFO  Initializing Shiro environment
> [2017-01-19 10:23:59] Config INFO  Shiro file:
> file://C:\Programs\apache-jena-fuseki-2.4.1\run\shiro.ini
> [2017-01-19 10:23:59] Config INFO  Configuration file:
> C:\Programs\apache-jena-fuseki-2.4.1\run\config.ttl
> [2017-01-19 10:23:59] riot   WARN  [line: 5, col: 9 ] Bad IRI:
> <C:\Programs\apache-jena-fuseki-2.4.1\run\config.ttl#> Code:
> 4/UNWISE_CHARACTER in PATH: The character matches no grammar rules of
> URIs/IRIs. These characters are permitted in RDF URI References, XML system
> identifiers, and XML Schema anyURIs.
> [2017-01-19 10:24:00] Config INFO  Load configuration:
> file:///C:/Programs/apache-jena-fuseki-2.4.1/run/configuration/Lexvo.ttl
> [2017-01-19 10:24:01] Config INFO  Load configuration:
> file:///C:/Programs/apache-jena-fuseki-2.4.1/run/configuration/LinkedCT.ttl
> [2017-01-19 10:24:04] Config INFO  Register: /Lexvo
> [2017-01-19 10:24:04] Config INFO  Register: /LinkedCT
> [2017-01-19 10:24:04] Server INFO  Started 2017/01/19 10:24:04 EST on
> port 3030
> 
> 
> 
> 5. When I go to the server, I can query the small Lexo database but not the
> LinkedCT one.
> 
> 
> Regards,
> Reihan
> 
> 
> 
> On Thu, Jan 19, 2017 at 1:40 PM, A. Soroka <aj...@virginia.edu> wrote:
> 
>>> However, the reasoner probably is not working because I cannot query the
>> data!
>> 
>> This isn't really an effective report of a problem. Can you describe what
>> you did (including the exact sequence of steps you followed to do the
>> load), what you then did to query, what you expected to get, and what you
>> actually got?
>> 
>> ---
>> A. Soroka
>> The University of Virginia Library
>> 
>> 
>> 
>>> On Jan 19, 2017, at 12:03 PM, Reihaneh Amini <amini.reiha...@gmail.com>
>> wrote:
>>> 
>>> I load the data by TDB loader and then upload them into the server with
>> no
>>> problem this time. However, the reasoner probably is not working because
>> I
>>> cannot query the data!
>>> 
>>> By loading by TDB I got several .dat and .idn file which I loaded them to
>>> fuseki server.
>>> 
>>> Any suggestion?
>>> 
>>> Regards,
>

Re: 10G loading file to fuseki

2017-01-19 Thread A. Soroka
> However, the reasoner probably is not working because I cannot query the data!

This isn't really an effective report of a problem. Can you describe what you 
did (including the exact sequence of steps you followed to do the load), what 
you then did to query, what you expected to get, and what you actually got?

---
A. Soroka
The University of Virginia Library



> On Jan 19, 2017, at 12:03 PM, Reihaneh Amini <amini.reiha...@gmail.com> wrote:
> 
> I load the data by TDB loader and then upload them into the server with no
> problem this time. However, the reasoner probably is not working because I
> cannot query the data!
> 
> By loading by TDB I got several .dat and .idn file which I loaded them to
> fuseki server.
> 
> Any suggestion?
> 
> Regards,
> Reihan
> 
> On Thu, Jan 19, 2017 at 11:33 AM, A. Soroka <aj...@virginia.edu> wrote:
> 
>> Using the UI is not a good idea for this. You would do _much_ better
>> either to work Osma's suggestion or to use the command-line tools.
>> 
>> ---
>> A. Soroka
>> The University of Virginia Library
>> 
>>> On Jan 19, 2017, at 11:32 AM, Reihaneh Amini <amini.reiha...@gmail.com>
>> wrote:
>>> 
>>> Hi Andy,
>>> 
>>> Thanks for your advice!
>>> I am using UI.
>>> 
>>> Do you mean I can still use UI if I split the data?
>>> By splitting you mean the simple splitting, right?
>>> 
>>> 
>>> Reihan
>> 
>> 
> 
> 
> -- 
> Regards,
> -Reihan



Re: 10G loading file to fuseki

2017-01-19 Thread A. Soroka
Using the UI is not a good idea for this. You would do _much_ better either to 
follow Osma's suggestion or to use the command-line tools.

---
A. Soroka
The University of Virginia Library

> On Jan 19, 2017, at 11:32 AM, Reihaneh Amini <amini.reiha...@gmail.com> wrote:
> 
> Hi Andy,
> 
> Thanks for your advice!
> I am using UI.
> 
> Do you mean I can still use UI if I split the data?
> By splitting you mean the simple splitting, right?
> 
> 
> Reihan



Re: 10g data loading in Fuseki

2017-01-18 Thread A. Soroka
It's surely true that a 10GB file is not appropriate for direct upload. You 
could, if you absolutely must, split your NTriples file into many pieces and 
make many SPARQL Updates with them. But Osma's suggestion is much better, 
especially because if you are starting from an empty dataset you will get 
proper data statistics automatically by the means he suggests.
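
To make that concrete, the command-line route is roughly the following (paths and the service
name are placeholders):

    # Build the TDB database offline; tdbloader streams the file, so it never holds it all in memory
    tdbloader --loc=/data/tdb/LinkedCT linkedct-live-dump-latest.nt

    # Then start Fuseki directly over the pre-built database
    fuseki-server --loc=/data/tdb/LinkedCT /LinkedCT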

---
A. Soroka
The University of Virginia Library

> On Jan 18, 2017, at 10:50 AM, Osma Suominen <osma.suomi...@helsinki.fi> wrote:
> 
> Hi Reihan,
> 
> You cannot upload files this big via Fuseki. Try tdbloader or tdbloader2 
> instead for batch loading your triples into TDB outside Fuseki.
> 
> -Osma
> 
> 
> 18.01.2017, 17:07, Reihaneh Amini kirjoitti:
>> Hi Dear sir or madam,
>> 
>> I have a frustrating problem which is not going well with Fuseki.
>> I have a .nt file size 10G and I want to upload it into fuseki server as
>> TDB structure not in-memory.
>> 
>> After running the server, if I upload it one-time I get SessionTimesOut
>> error, how can I address this problem?
>> 
>> Please help me what is your recommendation?
>> 
>> 
>> Regards,
>> Reihan
>> 
> 
> 
> -- 
> Osma Suominen
> D.Sc. (Tech), Information Systems Specialist
> National Library of Finland
> P.O. Box 26 (Kaikukatu 4)
> 00014 HELSINGIN YLIOPISTO
> Tel. +358 50 3199529
> osma.suomi...@helsinki.fi
> http://www.nationallibrary.fi



Re: Line Numbers

2017-01-17 Thread A. Soroka
There are several answers.

There is no reason to suppose that any given triple actually derives from a 
file at all. It might have been created programmatically, or by inference, or 
from SPARQL, amongst many possible other means.

You are suggesting the carriage of a really large amount of metadata all 
throughout Jena's internals. The performance implications would be big, and 
entirely negative.

Andy has given you a really good road to go down if what you want is more 
detailed parsing metadata for, as you say, "reporting issues with the content". 
You can take off that metadata and record it elsewhere or record it in RDF in 
various ways. Perhaps you can tell us a little more about your use case and we 
can help you find a more targeted technique for it.
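
As one concrete example of hooking into parsing: if the aim is to report problems in the content
together with their positions, an ErrorHandler is handed the line and column of every warning and
error the parser raises. This is only a rough sketch, and it assumes a recent Jena with the
RDFParser builder:

    import org.apache.jena.graph.Graph;
    import org.apache.jena.riot.RDFParser;
    import org.apache.jena.riot.system.ErrorHandler;
    import org.apache.jena.riot.system.StreamRDFLib;
    import org.apache.jena.sparql.graph.GraphFactory;

    Graph graph = GraphFactory.createDefaultGraph();
    ErrorHandler reporting = new ErrorHandler() {
        public void warning(String message, long line, long col) {
            System.err.printf("WARN  %d:%d %s%n", line, col, message); }
        public void error(String message, long line, long col) {
            System.err.printf("ERROR %d:%d %s%n", line, col, message); }
        public void fatal(String message, long line, long col) {
            System.err.printf("FATAL %d:%d %s%n", line, col, message); }
    };
    RDFParser.create()
             .source("file.ttl")
             .errorHandler(reporting)
             .parse(StreamRDFLib.graph(graph));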

---
A. Soroka
The University of Virginia Library

> On Jan 17, 2017, at 3:14 PM, Grahame Grieve 
> <grah...@healthintersections.com.au> wrote:
> 
> hi
> 
> Yes replacing a library is not simple, but I thought I'd still make the
> offer. Other advantages... no, it's just a JSON parser.
> 
>> You did seem to be asking for a way to get from a triple in a graph to
> the line where it was read, and that is not possible. There is no such
> association.
> 
> why not? the library could provide a way, and retain the association.
> 
> Grahame
> 
> 
> On Wed, Jan 18, 2017 at 6:58 AM, A. Soroka <aj...@virginia.edu> wrote:
> 
>> Replacing the JSON library in use is a considerably bigger proposition
>> than working with the one we now use in a different way. Are there other
>> advantages to using your custom code? We want to stick to well-supported
>> dependencies unless there is a convincing argument otherwise.
>> 
>> As for Turtle, I believe you can take a look at LangTurtleBase to see what
>> might be done. Keep in mind that there's not necessarily a precise way to
>> understand what line produces an error-- it might occur in the interaction
>> between tokens on more than one line.
>> 
>> ---
>> A. Soroka
>> The University of Virginia Library
>> 
>>> On Jan 17, 2017, at 2:42 PM, Grahame Grieve <
>> grah...@healthintersections.com.au> wrote:
>>> 
>>> well, I care about turtle and json-ld.  I can contribute a json library
>>> that preserves line numbers when the json is parsed, since the main
>> stream
>>> ones don't.
>>> 
>>> Grahame
>>> 
>>> 
>>> On Wed, Jan 18, 2017 at 5:38 AM, A. Soroka <aj...@virginia.edu> wrote:
>>> 
>>>> That will depend a bit on the language. For example, JSON parsing
>> doesn't
>>>> occur directly in Jena, Jena uses a library that parses from JSON to
>> Java
>>>> objects and then works with those objects:
>>>> 
>>>> org.apache.jena.riot.lang.JsonLDReader.read(InputStream, String,
>>>> ContentType, StreamRDF, Context)
>>>> 
>>>> In some other cases, it seems like it should be possible. Do you have a
>>>> specific language in mind?
>>>> 
>>>> ---
>>>> A. Soroka
>>>> The University of Virginia Library
>>>> 
>>>>> On Jan 16, 2017, at 6:48 AM, Grahame Grieve <
>>>> grah...@healthintersections.com.au> wrote:
>>>>> 
>>>>> Can the Jena parser maintain a link between the triples and the line
>>>> number
>>>>> from which are sourced in the original file? This is really useful for
>>>>> reporting issues with the content
>>>>> 
>>>>> Grahame
>>>>> 
>>>>> 
>>>>> --
>>>>> -
>>>>> http://www.healthintersections.com.au / grahame@healthintersections.
>>>> com.au
>>>>> / +61 411 867 065
>>>> 
>>>> 
>>> 
>>> 
>>> --
>>> -
>>> http://www.healthintersections.com.au / grahame@healthintersections.
>> com.au
>>> / +61 411 867 065
>> 
>> 
> 
> 
> -- 
> -
> http://www.healthintersections.com.au / grah...@healthintersections.com.au
> / +61 411 867 065



Re: Line Numbers

2017-01-17 Thread A. Soroka
Replacing the JSON library in use is a considerably bigger proposition than 
working with the one we now use in a different way. Are there other advantages 
to using your custom code? We want to stick to well-supported dependencies 
unless there is a convincing argument otherwise.

As for Turtle, I believe you can take a look at LangTurtleBase to see what 
might be done. Keep in mind that there's not necessarily a precise way to 
understand what line produces an error-- it might occur in the interaction 
between tokens on more than one line.

---
A. Soroka
The University of Virginia Library

> On Jan 17, 2017, at 2:42 PM, Grahame Grieve 
> <grah...@healthintersections.com.au> wrote:
> 
> well, I care about turtle and json-ld.  I can contribute a json library
> that preserves line numbers when the json is parsed, since the main stream
> ones don't.
> 
> Grahame
> 
> 
> On Wed, Jan 18, 2017 at 5:38 AM, A. Soroka <aj...@virginia.edu> wrote:
> 
>> That will depend a bit on the language. For example, JSON parsing doesn't
>> occur directly in Jena, Jena uses a library that parses from JSON to Java
>> objects and then works with those objects:
>> 
>> org.apache.jena.riot.lang.JsonLDReader.read(InputStream, String,
>> ContentType, StreamRDF, Context)
>> 
>> In some other cases, it seems like it should be possible. Do you have a
>> specific language in mind?
>> 
>> ---
>> A. Soroka
>> The University of Virginia Library
>> 
>>> On Jan 16, 2017, at 6:48 AM, Grahame Grieve <
>> grah...@healthintersections.com.au> wrote:
>>> 
>>> Can the Jena parser maintain a link between the triples and the line
>> number
>>> from which are sourced in the original file? This is really useful for
>>> reporting issues with the content
>>> 
>>> Grahame
>>> 
>>> 
>>> --
>>> -
>>> http://www.healthintersections.com.au / grahame@healthintersections.
>> com.au
>>> / +61 411 867 065
>> 
>> 
> 
> 
> -- 
> -
> http://www.healthintersections.com.au / grah...@healthintersections.com.au
> / +61 411 867 065



Re: Line Numbers

2017-01-17 Thread A. Soroka
You did seem to be asking for a way to get from a triple in a graph to the line 
where it was read, and that is not possible. There is no such association. Andy 
is pointing out that only during parsing can such information be managed (and I 
pointed out that even that is not the case all the time). If that is not what 
you are asking for, perhaps you can clarify.

---
A. Soroka
The University of Virginia Library

> On Jan 17, 2017, at 2:52 PM, Grahame Grieve 
> <grah...@healthintersections.com.au> wrote:
> 
> I'm not sure where that means it's not possible or of interest to trace the
> triples (or their parts) to source files
> 
> Grahame
> 
> 
> On Wed, Jan 18, 2017 at 6:47 AM, Andy Seaborne <a...@apache.org> wrote:
> 
>> RDF does not have the concept of an order to triples and indeed triples
>> can be added and deleted to the set of triples from different places.
>> 
>> What you can do is to add stages to the parsing process to produce
>> messages as parsing happens.
>> 
>>Andy
>> 
>> 
>> On 17/01/17 19:42, Grahame Grieve wrote:
>> 
>>> well, I care about turtle and json-ld.  I can contribute a json library
>>> that preserves line numbers when the json is parsed, since the main stream
>>> ones don't.
>>> 
>>> Grahame
>>> 
>>> 
>>> On Wed, Jan 18, 2017 at 5:38 AM, A. Soroka <aj...@virginia.edu> wrote:
>>> 
>>> That will depend a bit on the language. For example, JSON parsing doesn't
>>>> occur directly in Jena, Jena uses a library that parses from JSON to Java
>>>> objects and then works with those objects:
>>>> 
>>>> org.apache.jena.riot.lang.JsonLDReader.read(InputStream, String,
>>>> ContentType, StreamRDF, Context)
>>>> 
>>>> In some other cases, it seems like it should be possible. Do you have a
>>>> specific language in mind?
>>>> 
>>>> ---
>>>> A. Soroka
>>>> The University of Virginia Library
>>>> 
>>>> On Jan 16, 2017, at 6:48 AM, Grahame Grieve <
>>>>> 
>>>> grah...@healthintersections.com.au> wrote:
>>>> 
>>>>> 
>>>>> Can the Jena parser maintain a link between the triples and the line
>>>>> 
>>>> number
>>>> 
>>>>> from which are sourced in the original file? This is really useful for
>>>>> reporting issues with the content
>>>>> 
>>>>> Grahame
>>>>> 
>>>>> 
>>>>> --
>>>>> -
>>>>> http://www.healthintersections.com.au / grahame@healthintersections.
>>>>> 
>>>> com.au
>>>> 
>>>>> / +61 411 867 065
>>>>> 
>>>> 
>>>> 
>>>> 
>>> 
>>> 
> 
> 
> -- 
> -
> http://www.healthintersections.com.au / grah...@healthintersections.com.au
> / +61 411 867 065



Re: Line Numbers

2017-01-17 Thread A. Soroka
That will depend a bit on the language. For example, JSON parsing doesn't occur 
directly in Jena, Jena uses a library that parses from JSON to Java objects and 
then works with those objects:

org.apache.jena.riot.lang.JsonLDReader.read(InputStream, String, ContentType, 
StreamRDF, Context)

In some other cases, it seems like it should be possible. Do you have a 
specific language in mind?

---
A. Soroka
The University of Virginia Library

> On Jan 16, 2017, at 6:48 AM, Grahame Grieve 
> <grah...@healthintersections.com.au> wrote:
> 
> Can the Jena parser maintain a link between the triples and the line number
> from which are sourced in the original file? This is really useful for
> reporting issues with the content
> 
> Grahame
> 
> 
> -- 
> -
> http://www.healthintersections.com.au / grah...@healthintersections.com.au
> / +61 411 867 065



Re: Literal string to appropriate object

2017-01-11 Thread A. Soroka
You do know the type: http://www.w3.org/2001/XMSchema#anyURI

It is clearly written in your example.

---
A. Soroka
The University of Virginia Library

> On Jan 11, 2017, at 10:25 AM, George News <george.n...@gmx.net> wrote:
> 
> On 11/01/2017 15:59, A. Soroka wrote:
>> Perhaps parse it as a Jena Literal (e.g. using 
>> ResourceFactory.createTypedLiteral() ), then use Literal.getString() to get 
>> the value you seek.
> 
> then I need to know the type. The issue is that I wanted to know if
> there is any Jena function that directly parses the literal in the
> Turtle (or any other) form and get the object type.
> 
>> ---
>> A. Soroka
>> The University of Virginia Library
>> 
>>> On Jan 11, 2017, at 9:55 AM, George News <george.n...@gmx.net> wrote:
>>> 
>>> Hi,
>>> 
>>> I have this literal:
>>> http://hola^^http://www.w3.org/2001/XMSchema#anyURI
>>> 
>>> And I want to create a URI from it. Is there any way to do so?
>>> 
>>> I have tried
>>> URI z = (URI) XSDDatatype.XSDanyURI.parseValidated(literalString);
>>> 
>>> but I get:
>>> java.lang.ClassCastException: java.lang.String cannot be cast to
>>> java.net.URI
>>> 
>>> I don't know if I should take the shortcut, that is, remove everything
>>> after ^^ using substring, and then URI.create(shortenedLiteralString).
>>> 
>>> Any help is welcome.
>>> Jorge
>> 
>> 



Re: Literal string to appropriate object

2017-01-11 Thread A. Soroka
Perhaps parse it as a Jena Literal (e.g. using 
ResourceFactory.createTypedLiteral() ), then use Literal.getString() to get the 
value you seek.
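
A tiny sketch of that, using the anyURI type from your example:

    import java.net.URI;
    import org.apache.jena.datatypes.xsd.XSDDatatype;
    import org.apache.jena.rdf.model.Literal;
    import org.apache.jena.rdf.model.ResourceFactory;

    Literal lit = ResourceFactory.createTypedLiteral("http://hola", XSDDatatype.XSDanyURI);
    // getString() (or getLexicalForm()) hands back the lexical form, which java.net.URI can parse
    URI uri = URI.create(lit.getString());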

---
A. Soroka
The University of Virginia Library

> On Jan 11, 2017, at 9:55 AM, George News <george.n...@gmx.net> wrote:
> 
> Hi,
> 
> I have this literal:
> http://hola^^http://www.w3.org/2001/XMSchema#anyURI
> 
> And I want to create a URI from it. Is there any way to do so?
> 
> I have tried
> URI z = (URI) XSDDatatype.XSDanyURI.parseValidated(literalString);
> 
> but I get:
> java.lang.ClassCastException: java.lang.String cannot be cast to
> java.net.URI
> 
> I don't know if I should take the shortcut, that is, remove everything
> after ^^ using substring, and then URI.create(shortenedLiteralString).
> 
> Any help is welcome.
> Jorge



Re: Jena TDB indexing and stats building

2017-01-09 Thread A. Soroka
The layout of the statistics file is documented here:

https://jena.apache.org/documentation/tdb/optimizer.html#statistics-rule-file

tdbloader and tdbloader2 are the CLI utilities for building TDB databases, but 
they are written in Java and can be used in Java.

https://jena.apache.org/documentation/tdb/commands.html#tdbloader
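
As a concrete illustration (directory names are placeholders), the usual command-line round trip is:

    # Load the data and build the indexes
    tdbloader --loc=DB data.nt

    # Generate optimizer statistics to a temporary file, then move it into the database directory
    tdbstats --loc=DB > /tmp/stats.opt
    mv /tmp/stats.opt DB/stats.opt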

---
A. Soroka
The University of Virginia Library

> On Jan 9, 2017, at 2:36 PM, Ganesh Selvaraj <gsel...@aucklanduni.ac.nz> wrote:
> 
> Hi All,
> 
> I am using Jena TDB for my work. So far I could not find much documentation
> on data indexing and statistics building for Jena TDB.
> 
> I would prefer doing it via a Java API.
> 
> Any help/documentation is appreciated.
> 
> Thanks
> Ganesh



Re: Fuseki - how to release memory

2017-01-06 Thread A. Soroka
Can you give us your actual Fuseki config (i.e. assembler file)? Or are you 
repeatedly creating new datasets via the admin API?

---
A. Soroka
The University of Virginia Library

> On Jan 6, 2017, at 10:43 AM, Janda, Radim <radim.ja...@reporters.cz> wrote:
> 
> Hello,
> we use in-memory datasets.
> JVM is big enough but as we process thousands of small data sets the memory
> is allocated continuously.
> Actualy we restart Fuseki every hour to avoid out of memory error.
> However the performance is also decreasing in time (before restart) that's
> why we are looking for the possibility of memory cleanup.
> 
> Radim
> 
> On Fri, Jan 6, 2017 at 4:12 PM, Andy Seaborne <a...@apache.org> wrote:
> 
>> Are you using persistent or an in-memory datasets for your working storage?
>> 
>> If you really mean memory (RAM), are you sure the JVM is big enough?
>> 
>> Fuseki tries to avoid holding on to cache transactions but if the server
>> is under heavy read requests (Rob's point) then it can build up (solution -
>> reduce the read load for a short while) - also TDB does try to switch to
>> emergency measures after a while but maybe before then the RAM usage has
>> grown too much.
>> 
>>Andy
>> 
>> 
>> On 06/01/17 14:07, Rob Vesse wrote:
>> 
>>> Deleting data does not reclaim all the memory, exactly what is and isn’t
>>> reclaimed depends somewhat on your exact usage pattern.
>>> 
>>> The B+Tree’s which are the primary data structure for TDB, the default
>>> database used in Fuseki, does not reclaim the space. It is potentially
>>> subject fragmentation as well so memory used tends to grow over time. The
>>> node table portion of the database, the mapping from RDF terms to internal
>>> database identifiers is a sequential data structure that will only ever
>>> grow over time. It is also worth noting that many of the data structures
>>> are backed by memory mapped files which are off-heap and subject to the
>>> vagaries of how your OS handles this.
>>> 
>>> Additionally, if you place Fuseki under continuous load TDB maybe blocked
>>> from writing the in memory journal back to disk which can cause back to
>>> grow unbounded overtime and prevent memory being reclaimed. Adding
>>> occasional pauses between operations can help to alleviate this.
>>> 
>>> As Lorenz notes for this kind of use case you may not need Fuseki at all
>>> and could simply drive TDB programmatically instead.
>>> 
>>> As a general point creating a fresh database rather than reusing an
>>> existing one will much more efficiently use memory. However, if you’re
>>> running on Windows then there is a known OS specific JVM bug that can cause
>>> memory mapped files to not be properly deleted until after the process
>>> exits.
>>> 
>>> Rob
>>> 
>>> On 06/01/2017 12:23, "Janda, Radim" <radim.ja...@reporters.cz> wrote:
>>> 
>>>Hello Lorenz,
>>>yes I meant delete data from Fuseki using DELETE command.
>>>We have version 2.4 installed.
>>>We use two types of queries:
>>>1. Insert new triples based on existing triples rdf model (insert
>>> sparql)
>>>2. Find some results in the data (select sparql)
>>> 
>>>Thanks
>>> 
>>>Radim
>>> 
>>>On Fri, Jan 6, 2017 at 1:04 PM, Lorenz B. <
>>>buehm...@informatik.uni-leipzig.de> wrote:
>>> 
>>>> Hello Radim,
>>>> 
>>>> just to avoid confusion, with "Delete whole Fuseki" you mean the
>>> data
>>>> loaded into Fuseki, right?
>>>> 
>>>> Which Fuseki version do you use?
>>>> 
>>>> What kind of transformation do you do? I'm asking because I'm
>>> wondering
>>>> if it's necessary to use Fuseki.
>>>> 
>>>> 
>>>> 
>>>> Cheers,
>>>> Lorenz
>>>> 
>>>>> Hello,
>>>>> We use Jena Fuseki to process a lot of small data sets.
>>>>> 
>>>>> It works in the following way:
>>>>> 1. Delete whole Fuseki (using DELETE command)
>>>>> 2. Load data to Fuseki (using INSERT)
>>>>> 3. Tranform data and create output (sparql called from Python)
>>>>> 4. ad 1)2)3  delete Fuseki and Transform another data set
>>>>> 
>>>>> We have found out that memory is not released after delete in
>>> Fuseki.
>>>>> That means we have lack of memory after some data sets are
>>> transformed.
>>>>> Actually we restart Fuseki server after some number of data sets
>>> but we
>>>>> are looking for the better solution.
>>>>> 
>>>>> Can you please help us with memory releasing?
>>>>> 
>>>>> Many thanks
>>>>> 
>>>>> Radim
>>>>> 
>>>> --
>>>> Lorenz Bühmann
>>>> AKSW group, University of Leipzig
>>>> Group: http://aksw.org - semantic web research center
>>>> 
>>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
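
A minimal sketch of the "drive TDB programmatically" suggestion above (the directory, the data, and the update string are made-up placeholders; it assumes jena-tdb on the classpath and one short write transaction per data set, rather than a running Fuseki):

import org.apache.jena.query.Dataset;
import org.apache.jena.query.ReadWrite;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.tdb.TDBFactory;
import org.apache.jena.update.UpdateAction;

public class TransformOneDataSet {
    public static void main(String[] args) {
        // One dataset per batch; TDBFactory.createDataset() gives an in-memory TDB instead.
        Dataset ds = TDBFactory.createDataset("/tmp/batch-tdb");   // hypothetical location
        ds.begin(ReadWrite.WRITE);
        try {
            Model m = ds.getDefaultModel();
            m.createResource("http://example.org/s")               // stand-in for the load step
             .addProperty(m.createProperty("http://example.org/p"), "o");
            UpdateAction.parseExecute(                             // stand-in for the transform step
                "INSERT { ?s a <http://example.org/Thing> } WHERE { ?s ?p ?o }", ds);
            ds.commit();
        } finally {
            ds.end();
        }
        ds.close();                                                // done with this data set
    }
}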



Re: Compile Forked Version

2017-01-05 Thread A. Soroka
I get the same error.  I don't think there is any problem with Maven here and 
you should not be spending time with Maven. The fork just doesn't build. You 
will have to take this up with the maintainer of that fork, Jean-Marc Vanel 
(jmvanel), and the dev@ list might be more appropriate at this point, or 
perhaps the issue:

https://issues.apache.org/jira/browse/JENA-1250

---
A. Soroka
The University of Virginia Library

> On Jan 5, 2017, at 10:26 AM, Samur Araujo <s.ara...@geophy.com> wrote:
> 
> I follow your suggestion:
> 
> ---
> T E S T S
> ---
> Running org.apache.jena.web.TS_Web
> Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.493 sec -
> in org.apache.jena.web.TS_Web
> Running org.apache.jena.system.TS_System
> Tests run: 51, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.054 sec
> - in org.apache.jena.system.TS_System
> Running org.apache.jena.common.TS_Common
> Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.104 sec
> - in org.apache.jena.common.TS_Common
> Running org.apache.jena.query.TS_ParamString
> Tests run: 140, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.262 sec
> - in org.apache.jena.query.TS_ParamString
> Running org.apache.jena.riot.out.TS_Out
> Tests run: 128, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.062 sec
> - in org.apache.jena.riot.out.TS_Out
> Running org.apache.jena.riot.web.TS_RiotWeb
> Tests run: 39, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.009 sec
> - in org.apache.jena.riot.web.TS_RiotWeb
> Running org.apache.jena.riot.tokens.TS_Tokens
> Tests run: 144, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.039 sec
> <<< FAILURE! - in org.apache.jena.riot.tokens.TS_Tokens
> tokenUnit_iri18(org.apache.jena.riot.tokens.TestTokenizer)  Time elapsed:
> 0.004 sec  <<< ERROR!
> java.lang.Exception: Unexpected exception,
> expected but
> was
> at
> org.apache.jena.riot.tokens.TestTokenizer.tokenFirst(TestTokenizer.java:45)
> at
> org.apache.jena.riot.tokens.TestTokenizer.tokenUnit_iri18(TestTokenizer.java:205)
> 
> Running org.apache.jena.riot.system.TS_RiotSystem
> Tests run: 296, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.444 sec
> - in org.apache.jena.riot.system.TS_RiotSystem
> Running org.apache.jena.riot.resultset.TS_ResultSetRIOT
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.052 sec -
> in org.apache.jena.riot.resultset.TS_ResultSetRIOT
> Running org.apache.jena.riot.writer.TS_RiotWriter
> Tests run: 695, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.16 sec
> - in org.apache.jena.riot.writer.TS_RiotWriter
> Running org.apache.jena.riot.thrift.TS_RDFThrift
> Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.016 sec
> - in org.apache.jena.riot.thrift.TS_RDFThrift
> Running org.apache.jena.riot.TS_RiotGeneral
> Tests run: 122, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.066 sec
> - in org.apache.jena.riot.TS_RiotGeneral
> Running org.apache.jena.riot.stream.TS_IO2
> Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.027 sec
> - in org.apache.jena.riot.stream.TS_IO2
> Running org.apache.jena.riot.lang.TS_Lang
> Tests run: 458, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.736 sec
> - in org.apache.jena.riot.lang.TS_Lang
> Running org.apache.jena.riot.adapters.TS_RIOTAdapters
> Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.029 sec
> - in org.apache.jena.riot.adapters.TS_RIOTAdapters
> Running org.apache.jena.riot.TS_LangSuite
> Tests run: 861, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.502 sec
> - in org.apache.jena.riot.TS_LangSuite
> Running org.apache.jena.riot.process.TS_Process
> Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.008 sec
> - in org.apache.jena.riot.process.TS_Process
> Running org.apache.jena.atlas.web.TS_Web
> Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.009 sec
> - in org.apache.jena.atlas.web.TS_Web
> Running org.apache.jena.atlas.json.TS_JSON
> Tests run: 37, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.01 sec -
> in org.apache.jena.atlas.json.TS_JSON
> Running org.apache.jena.atlas.event.TS_Event
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.006 sec -
> in org.apache.jena.atlas.event.TS_Event
> Running org.apache.jena.atlas.data.TS_Data
> Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.603 sec
> - in org.apache.jena.atlas.data.TS_Data
> Running org.apache.jena.sparql.TC_DAWG
> Tests run: 467, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.867 sec
> - in org.apache.jena.sparql.TC_DAWG
> Running org.apache.

Re: Compile Forked Version

2017-01-05 Thread A. Soroka
If you have cloned that fork, try just doing a simple `mvn clean install` in 
the project root, then look in jena-fuseki2/apache-jena-fuseki/target. You 
should find a Fuseki distribution there with the forked code. 

---
A. Soroka
The University of Virginia Library

> On Jan 5, 2017, at 10:13 AM, Samur Araujo <s.ara...@geophy.com> wrote:
> 
> I want to run fuseki with lucene 5 or higher.
> 
> There is a fork for it here :
> 
> https://github.com/jmvanel/jena/commits/master
> 
> I download it and I am trying to compile/package it.
> 
> I did no change in the code of this fork. For know only trying to make it
> to work.
> 
> Any suggestion?
> 
> On 5 January 2017 at 16:05, A. Soroka <aj...@virginia.edu> wrote:
> 
>> Can you explain a little more about what you are trying to do? When you
>> say "compile a forked version of Jena", if you have actually forked the
>> entire codebase, you should be able to just compile the entire codebase to
>> get SNAPSHOT version artifacts of your forked code. Why are you trying to
>> compile a mix of modules? Are you trying to maintain both ordinary and
>> forked forms of Jena in your local Maven repo? What do you intend to do
>> with the forked artifacts? Are you going to integrate them into some other
>> application? There may be an easier way to do all this.
>> 
>> ---
>> A. Soroka
>> The University of Virginia Library
>> 
>>> On Jan 5, 2017, at 10:01 AM, Samur Araujo <s.ara...@geophy.com> wrote:
>>> 
>>> I run the sequence of commands below but it still failures.
>>> 
>>> mvn release:update-versions -DdevelopmentVersion=3.1.1-myfork-SNAPSHOT
>>> mvn clean
>>> mvn install -Pbootstrap
>>> mvn install -Pdev -DskipTests=true
>>> 
>>> [INFO]
>>> 
>>> [INFO] Building Apache Jena - Fuseki Server Engine 3.1.1-myfork-SNAPSHOT
>>> [INFO]
>>> 
>>> [WARNING] The POM for org.apache.jena:jena-text:jar:
>> 3.1.1-myfork-SNAPSHOT
>>> is missing, no dependency information available
>>> [WARNING] The POM for
>>> org.apache.jena:jena-spatial:jar:3.1.1-myfork-SNAPSHOT is missing, no
>>> dependency information available
>>> [INFO]
>>> 
>>> [INFO] Reactor Summary:
>>> [INFO]
>>> [INFO] Apache Jena - Parent ... SUCCESS [
>>> 1.552 s]
>>> [INFO] Apache Jena - Base Common Environment .. SUCCESS [
>>> 8.325 s]
>>> [INFO] Apache Jena - Core . SUCCESS [
>>> 11.643 s]
>>> [INFO] Apache Jena - ARQ (SPARQL 1.1 Query Engine)  SUCCESS [
>>> 19.718 s]
>>> [INFO] Apache Jena - TDB (Native Triple Store)  SUCCESS [
>>> 2.547 s]
>>> [INFO] Apache Jena - Libraries POM  SUCCESS [
>>> 0.234 s]
>>> [INFO] Apache Jena - Command line tools ... SUCCESS [
>>> 5.790 s]
>>> [INFO] Apache Jena - Fuseki - A SPARQL 1.1 Server . SUCCESS [
>>> 0.047 s]
>>> [INFO] Apache Jena - Fuseki Server Engine . FAILURE [
>>> 0.045 s]
>>> [INFO] Apache Jena - Fuseki Embedded Server ... SKIPPED
>>> [INFO] Apache Jena - Fuseki WAR File .. SKIPPED
>>> [INFO] Apache Jena - Fuseki Server Standalone Jar . SKIPPED
>>> [INFO] Apache Jena - Fuseki Binary Distribution ... SKIPPED
>>> [INFO] Apache Jena - Security Permissions . SKIPPED
>>> [INFO] Apache Jena  SKIPPED
>>> [INFO]
>>> 
>>> [INFO] BUILD FAILURE
>>> [INFO]
>>> 
>>> [INFO] Total time: 50.588 s
>>> [INFO] Finished at: 2017-01-05T15:56:51+01:00
>>> [INFO] Final Memory: 60M/1801M
>>> [INFO]
>>> 
>>> [ERROR] Failed to execute goal on project jena-fuseki-core: Could not
>>> resolve dependencies for project
>>> org.apache.jena:jena-fuseki-core:jar:3.1.1-myfork-SNAPSHOT: The
>> following
>>> artifacts could not be resolved:
>>> org.apache.jena:jena-text:jar:3.1.1-myfork-SNAPSHOT,
>

Re: Compile Forked Version

2017-01-05 Thread A. Soroka
Can you explain a little more about what you are trying to do? When you say 
"compile a forked version of Jena", if you have actually forked the entire 
codebase, you should be able to just compile the entire codebase to get 
SNAPSHOT version artifacts of your forked code. Why are you trying to compile a 
mix of modules? Are you trying to maintain both ordinary and forked forms of 
Jena in your local Maven repo? What do you intend to do with the forked 
artifacts? Are you going to integrate them into some other application? There 
may be an easier way to do all this.

---
A. Soroka
The University of Virginia Library

> On Jan 5, 2017, at 10:01 AM, Samur Araujo <s.ara...@geophy.com> wrote:
> 
> I run the sequence of commands below but it still failures.
> 
> mvn release:update-versions -DdevelopmentVersion=3.1.1-myfork-SNAPSHOT
> mvn clean
> mvn install -Pbootstrap
> mvn install -Pdev -DskipTests=true
> 
> [INFO]
> 
> [INFO] Building Apache Jena - Fuseki Server Engine 3.1.1-myfork-SNAPSHOT
> [INFO]
> 
> [WARNING] The POM for org.apache.jena:jena-text:jar:3.1.1-myfork-SNAPSHOT
> is missing, no dependency information available
> [WARNING] The POM for
> org.apache.jena:jena-spatial:jar:3.1.1-myfork-SNAPSHOT is missing, no
> dependency information available
> [INFO]
> 
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Apache Jena - Parent ... SUCCESS [
> 1.552 s]
> [INFO] Apache Jena - Base Common Environment .. SUCCESS [
> 8.325 s]
> [INFO] Apache Jena - Core . SUCCESS [
> 11.643 s]
> [INFO] Apache Jena - ARQ (SPARQL 1.1 Query Engine)  SUCCESS [
> 19.718 s]
> [INFO] Apache Jena - TDB (Native Triple Store)  SUCCESS [
> 2.547 s]
> [INFO] Apache Jena - Libraries POM  SUCCESS [
> 0.234 s]
> [INFO] Apache Jena - Command line tools ... SUCCESS [
> 5.790 s]
> [INFO] Apache Jena - Fuseki - A SPARQL 1.1 Server . SUCCESS [
> 0.047 s]
> [INFO] Apache Jena - Fuseki Server Engine . FAILURE [
> 0.045 s]
> [INFO] Apache Jena - Fuseki Embedded Server ... SKIPPED
> [INFO] Apache Jena - Fuseki WAR File .. SKIPPED
> [INFO] Apache Jena - Fuseki Server Standalone Jar . SKIPPED
> [INFO] Apache Jena - Fuseki Binary Distribution ... SKIPPED
> [INFO] Apache Jena - Security Permissions . SKIPPED
> [INFO] Apache Jena  SKIPPED
> [INFO]
> 
> [INFO] BUILD FAILURE
> [INFO]
> 
> [INFO] Total time: 50.588 s
> [INFO] Finished at: 2017-01-05T15:56:51+01:00
> [INFO] Final Memory: 60M/1801M
> [INFO]
> 
> [ERROR] Failed to execute goal on project jena-fuseki-core: Could not
> resolve dependencies for project
> org.apache.jena:jena-fuseki-core:jar:3.1.1-myfork-SNAPSHOT: The following
> artifacts could not be resolved:
> org.apache.jena:jena-text:jar:3.1.1-myfork-SNAPSHOT,
> org.apache.jena:jena-spatial:jar:3.1.1-myfork-SNAPSHOT: Failure to find
> org.apache.jena:jena-text:jar:3.1.1-myfork-SNAPSHOT in
> http://repository.apache.org/snapshots was cached in the local repository,
> resolution will not be reattempted until the update interval of
> apache.snapshots has elapsed or updates are forced -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions,
> please read the following articles:
> [ERROR] [Help 1]
> http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
> [ERROR]
> [ERROR] After correcting the problems, you can resume the build with the
> command
> [ERROR]   mvn  -rf :jena-fuseki-core
> 
> 
> Notice the jena spatial and text are there:
> 
> ~/tools/jena-master-forked$ find . | grep text | grep jar
> ./jena-text/target/jena-text-3.1.1-myfork-SNAPSHOT-javadoc.jar
> ./jena-text/target/jena-text-3.1.1-myfork-SNAPSHOT-sources.jar
> ./jena-text/target/jena-text-3.1.1-myfork-SNAPSHOT.jar
> 
> 
> ~/tools/jena-master-forked$ find . | grep spatial | grep jar
> ./jena-spatial/target/jena-spatial-3.1.1-myfork-SNAPSHOT.jar
> ./jena-spatial/target/jena-spati

Re: ClosedException on calling Individual.isClass()

2017-01-02 Thread A. Soroka
>> Since all your models seem to be in-memory then you could simply drop
>> the close(), it's not necessary here.
> 
> Are there other kinds of models available in the current Jena version? I was storing 
> the Dataset using TDB. Then I call getNamedModel() on the dataset and work 
> with it, or just with the in-memory model.
> 
> Is there any way to use something other than the file-based TDB?

If you actually need disk persistence for the models in your dataset, TDB is 
your best choice. If they are in-memory only, try using a TIM (transactional 
in-memory) dataset:

https://jena.apache.org/documentation/rdf/datasets.html

or if they are not part of a dataset, even just an in-memory Model.
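
For instance, a minimal sketch of the TIM route (the graph name and IRIs here are made up):

import org.apache.jena.query.Dataset;
import org.apache.jena.query.DatasetFactory;
import org.apache.jena.query.ReadWrite;
import org.apache.jena.rdf.model.Model;

public class TimSketch {
    public static void main(String[] args) {
        Dataset ds = DatasetFactory.createTxnMem();      // transactional, in-memory, nothing on disk
        ds.begin(ReadWrite.WRITE);
        try {
            Model m = ds.getNamedModel("http://example.org/graph");   // hypothetical graph name
            m.createResource("http://example.org/s")
             .addProperty(m.createProperty("http://example.org/p"), "o");
            ds.commit();
        } finally {
            ds.end();
        }
        // No close() on the model is needed; just end the transaction.
    }
}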

---
A. Soroka
The University of Virginia Library

> On Jan 2, 2017, at 7:37 AM, George News <george.n...@gmx.net> wrote:
> 
> 
> On 02/01/2017 13:00, Dave Reynolds wrote:
>>> 
>>> Yes it does. I want to speed up things, as the ontologies are not changing.
>>> 
>>> 
>>>
>>>true
>>>true
>>> 
>>> 
>>>> Does the code you haven't shown us close your OntModels somewhere?
>>> 
>>> I call OntModel close() on local model vars. But each time I'm creating a
>>> new model, so I don't understand why it's being closed.
>> 
>> It sounds like your OntModels include imports. These are implemented as
>> separate sub-models which are bound into a multi-union. If you close the
>> OntModel that will (I think) close all the imported models.
>> 
>> If some of those imported models are cached (which seems to be the case
>> with your setup) and thus shared with other OntModels then you will be
>> closing submodels of other concurrent OntModel instances. That would
>> explain your symptoms.
> 
> I have removed the cache and now it takes ages to perform stuff. As an
> example, something that took 20 sec now takes 170 sec. I cannot go
> for that solution.
> 
>> 
>> Since all your models seem to be in-memory then you could simply drop
>> the close(), it's not necessary here.
> 
> Are there other kinds of models available in the current Jena version? I was
> storing the Dataset using TDB. Then I call getNamedModel() on the
> dataset and work with it, or just with the in-memory model.
> 
> Is there any way to use something other than the file-based TDB?
> 
>>>> However, since FileManager caching doesn't look particularly thread-safe
>>>> to me, then unless you are doing your own synchronization around all the
>>>> operations that touch it, that could also cause problems.
>>> 
>>> Could you explain that further? I don't fully understand. What operations
>>> touch what? The local ontology files are there just in case the
>>> remote ones are not accessible.
>> 
>> FileManager doesn't say it is thread safe and from a quick look at the
>> code it doesn't seem to be. So if you have two different threads which
>> are concurrently importing models via the same FileManager then the
>> cache could become corrupted.
>> 
>> Possible ways round that would include:
>> 
>> - preloading your FileManager cache with all the relevant imported
>> models so there's no concurrent update to the cache
> 
> How can I do that? Sorry for the "silly" question. It seems the easiest
> way to solve the problem.
> 
> I'm also thinking on FileManager.setModelCaching(false). Would it make
> any sense?
> 
>> - build your own thread-safe filemanager (hmm, don't think the design
>> allows for that)
>> 
>> - perform all your local OntModel instantiation from within a suitably
>> protected critical section so that all the import processing is done by
>> a single thread
>> 
>> Dave
>> 
>> 
>> 
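
For the preloading question above, a rough sketch, assuming the (legacy) FileManager/OntDocumentManager API of this era; the ontology URI and file path are made up, and addCacheModel() should be checked against the Javadocs:

import org.apache.jena.ontology.OntDocumentManager;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.util.FileManager;

public class PreloadImports {
    public static void main(String[] args) {
        // Warm the shared cache once, before any worker threads build OntModels,
        // so concurrent import resolution only ever reads from it.
        FileManager fm = OntDocumentManager.getInstance().getFileManager();
        Model core = ModelFactory.createDefaultModel()
                                 .read("file:ontologies/core.owl");        // local copy; path is hypothetical
        fm.addCacheModel("http://example.org/ontology/core", core);        // URI the imports refer to (hypothetical)
    }
}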



Re: Custom SERVICE HTTP requests

2017-01-01 Thread A. Soroka
Just a bit more on how SERVICE-specific HTTP action is managed:

https://jena.apache.org/documentation/query/service.html#configuration-from-jena-version-311

The code is here:

https://github.com/apache/jena/blob/master/jena-arq/src/main/java/org/apache/jena/sparql/engine/http/QueryEngineHTTP.java#L613

It may or may not be a useful pattern for you, but if your desired 
customization can be packaged in an HTTP client, you _can_ inject HTTP clients 
on a per-service basis.
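
A rough sketch of what that injection looks like, assuming the context symbols described at the page above (Service.serviceContext holding a map from endpoint URL to a per-service Context carrying Service.queryClient); the endpoint URL is made up, and the symbol names should be checked against that documentation:

import java.util.HashMap;
import java.util.Map;

import org.apache.http.client.HttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.jena.query.ARQ;
import org.apache.jena.sparql.engine.http.Service;
import org.apache.jena.sparql.util.Context;

public class PerServiceHttpClient {
    public static void main(String[] args) {
        HttpClient custom = HttpClients.createDefault();         // stand-in for a client with the extra behaviour

        Context serviceCxt = new Context();
        serviceCxt.put(Service.queryClient, custom);             // assumed symbol; see the documentation link

        Map<String, Context> perService = new HashMap<>();
        perService.put("http://example.org/sparql", serviceCxt); // hypothetical SERVICE endpoint

        ARQ.getContext().put(Service.serviceContext, perService);
        // Subsequent SERVICE <http://example.org/sparql> calls pick up `custom`.
    }
}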


---
A. Soroka
The University of Virginia Library

> On Jan 1, 2017, at 12:59 PM, Andy Seaborne <a...@apache.org> wrote:
> 
> 
> 
> On 01/01/17 12:53, Martynas Jusevičius wrote:
>> Hey,
>> 
>> happy 2017 :)
>> 
>> I am wondering if there is a way to "intercept"
> 
> Any thing can be intercepted with a custom OpExecutor.
> 
>> the HTTP request that
>> is being generated by the SERVICE clause?
> 
> See Service.configureQuery for the use of serviceContext
> 
>> 
>> As you might know, some triplestores (RDF4J, Dydra) provide an
>> extension of the SPARQL protocol that allows sending query bindings
>> separately from the query string. (Too bad it's not standardized and
>> I'm not even able to find a good reference right now, but that is
>> another topic).
> 
> What semantics do they have? Specifically, is it replace name-by-value 
> regardless? (so inside nested SELECT where it can be a "different" variable)? 
> What about aggregates?
> 
>> 
>> We have implemented it with a custom SPARQL client, but how can we
>> plug it into the federated query execution?
> 
> You can have query string parameters in the SERVICE URL.
> 
>Andy
> 
>> 
>> Thanks.
>> 
>> Martynas
>> 



Re: listresourceswithproperty()

2016-12-29 Thread A. Soroka
Please, please read the Javadocs.

StmtIterator listStatements(Resource s, Property p, RDFNode o)

returns a StmtIterator, an iterator of _Statements_.

ResIterator listResourcesWithProperty(Property p, RDFNode o)

returns a ResIterator, an iterator of _Resources_.

Neither of the examples you give is sensible code. The literal "Student" is 
not a reasonable value for an rdf:type. Please go and actually try to write 
some code for your problem and then continue this discussion.
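
For reference, a concrete comparison of the two calls (the IRIs are made up):

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.ResIterator;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.rdf.model.StmtIterator;
import org.apache.jena.vocabulary.RDF;

public class ListStudents {
    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        Resource studentClass = model.createResource("http://example.org/Student");  // hypothetical class IRI
        model.createResource("http://example.org/alice").addProperty(RDF.type, studentClass);

        // Resources only: the subjects that have rdf:type Student.
        ResIterator students = model.listResourcesWithProperty(RDF.type, studentClass);
        students.forEachRemaining(r -> System.out.println(r.getURI()));

        // Whole statements: (subject, rdf:type, Student) triples.
        StmtIterator stmts = model.listStatements(null, RDF.type, studentClass);
        stmts.forEachRemaining(System.out::println);
    }
}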

---
A. Soroka
The University of Virginia Library

> On Dec 29, 2016, at 9:16 AM, neha gupta <neha.bang...@gmail.com> wrote:
> 
> What is the difference, then, between
> "model.listresourceswithproperty(RDF:type,
> Student )"  and
> 
> model.listStatements(null, RDF.type, "Student"); // Student is a class in our
> ontology
> 
> If we want to just retrieve the list of Students (rdf:type Student), which
> of the above statements is correct? Or should we write both of these
> statements?
> 
> Regards
> 
> On Thu, Dec 29, 2016 at 5:00 PM, A. Soroka <aj...@virginia.edu> wrote:
> 
>> No. The "resources in this model that have property p": the resource that
>> has a property is the subject of that property.
>> 
>> ---
>> A. Soroka
>> The University of Virginia Library
>> 
>>> On Dec 29, 2016, at 8:57 AM, neha gupta <neha.bang...@gmail.com> wrote:
>>> 
>>> Hi Soroka, it will list both the subjects and objects of the Property p,
>>> right?
>>> 
>>> 
>>> 
>>> On Thu, Dec 29, 2016 at 4:29 PM, A. Soroka <aj...@virginia.edu> wrote:
>>> 
>>>> Please consult the Javadocs.
>>>> 
>>>> https://jena.apache.org/documentation/javadoc/jena/
>>>> org/apache/jena/rdf/model/Model.html#listResourcesWithProperty-org.
>>>> apache.jena.rdf.model.Property-
>>>> 
>>>> "Answer an iterator [with no duplicates] over all the resources in this
>>>> model that have property p. remove() is not implemented on this
>> iterator."
>>>> 
>>>> ---
>>>> A. Soroka
>>>> The University of Virginia Library
>>>> 
>>>>> On Dec 29, 2016, at 8:23 AM, neha gupta <neha.bang...@gmail.com>
>> wrote:
>>>>> 
>>>>> Hello, I want to ask what is the function of this method:
>>>>> listresourceswithproperty()
>>>>> 
>>>>> And is it the same as when we query SPARQL like:
>>>>> 
>>>>> Select ?x
>>>>> where { ?x rdf:type ?someclass }
>>>>> 
>>>>> A simple example is highly appreciated as I did not find any solid
>>>> examples
>>>>> on web about it.
>>>>> 
>>>>> Thank you
>>>> 
>>>> 
>> 
>> 



Re: listresourceswithproperty()

2016-12-29 Thread A. Soroka
No. The "resources in this model that have property p": the resource that has a 
property is the subject of that property.

---
A. Soroka
The University of Virginia Library

> On Dec 29, 2016, at 8:57 AM, neha gupta <neha.bang...@gmail.com> wrote:
> 
> Hi Soroka, it will list both the subjects and objects of the Property p,
> right?
> 
> 
> 
> On Thu, Dec 29, 2016 at 4:29 PM, A. Soroka <aj...@virginia.edu> wrote:
> 
>> Please consult the Javadocs.
>> 
>> https://jena.apache.org/documentation/javadoc/jena/
>> org/apache/jena/rdf/model/Model.html#listResourcesWithProperty-org.
>> apache.jena.rdf.model.Property-
>> 
>> "Answer an iterator [with no duplicates] over all the resources in this
>> model that have property p. remove() is not implemented on this iterator."
>> 
>> ---
>> A. Soroka
>> The University of Virginia Library
>> 
>>> On Dec 29, 2016, at 8:23 AM, neha gupta <neha.bang...@gmail.com> wrote:
>>> 
>>> Hello, I want to ask what is the function of this method:
>>> listresourceswithproperty()
>>> 
>>> And is it the same as when we query SPARQL like:
>>> 
>>> Select ?x
>>> where { ?x rdf:type ?someclass }
>>> 
>>> A simple example is highly appreciated as I did not find any solid
>> examples
>>> on web about it.
>>> 
>>> Thank you
>> 
>> 



Re: listresourceswithproperty()

2016-12-29 Thread A. Soroka
Please consult the Javadocs.

https://jena.apache.org/documentation/javadoc/jena/org/apache/jena/rdf/model/Model.html#listResourcesWithProperty-org.apache.jena.rdf.model.Property-

"Answer an iterator [with no duplicates] over all the resources in this model 
that have property p. remove() is not implemented on this iterator."

---
A. Soroka
The University of Virginia Library

> On Dec 29, 2016, at 8:23 AM, neha gupta <neha.bang...@gmail.com> wrote:
> 
> Hello, I want to ask what is the function of this method:
> listresourceswithproperty()
> 
> And is it the same as when we query SPARQL like:
> 
> Select ?x
> where { ?x rdf:type ?someclass }
> 
> A simple example is highly appreciated as I did not find any solid examples
> on web about it.
> 
> Thank you



Re: adjunction of new Balise

2016-12-24 Thread A. Soroka
1) This isn't really a question about Jena at all.

2) You cannot add things to OWL itself. OWL is maintained by a carefully 
set-out and very public process. [1] You can devise your own extension to OWL 
(or your own independent language) and it may or may not acquire proponents and 
use.

3) In what way is this question substantially different than the one you 
recently asked (to which you received some very useful answers)? [2]

---
A. Soroka
The University of Virginia Library

[1] http://www.w3.org/Consortium/Process/

[2] 
https://lists.apache.org/thread.html/c236d7ebf3d8e1d223b6ee4bfd14b197a05b32d95b3589114a21f1e7@%3Cusers.jena.apache.org%3E

> On Dec 24, 2016, at 5:41 PM, Hlel Emna <emnah...@gmail.com> wrote:
> 
> hi,
> 
> 
> How can we add new tags to the OWL language, for example, the tag 
> to represent probabilistic individuals of the modeled domain?
> 
> thanks for the response.



Re: Very slow Geosparql with Jena

2016-12-23 Thread A. Soroka
If the results from TDB and from Lucene have to be joined, that can cause some 
overhead, but I am not familiar enough with that tooling to see from your query 
whether that is a potential issue.

---
A. Soroka
The University of Virginia Library

> On Dec 23, 2016, at 11:48 AM, Samur Araujo <s.ara...@geophy.com> wrote:
> 
> Hi Andy, I ran the query many times and it is still slow.
> 
> I observed that when I index the data directly on lucene/solr (version
> 5.5.3) the query takes 9ms.
> 
> Is it the Lucene version (4.1) used by Fuseki that is slow, or are there
> other potential overheads?
> 
> On 23 December 2016 at 17:42, Andy Seaborne <a...@apache.org> wrote:
> 
>> Quite possibly. jena-spatial is a lightweight solution for using geo data
>> via an external index - it is not GeoSPARQL.
>> 
>> (Just running a query once will incur a lot of start-up costs.)
>> 
>>Andy
>> 
>> 
>> On 22/12/16 14:39, Samur Araujo wrote:
>> 
>>> Hi all,
>>> 
>>> I loaded geonames (40 million triples) into Fuseki and indexed the data
>>> into Lucene.
>>> 
>>> The query below takes 4 seconds to execute. While a similar SQL one into
>>> postgis takes 13 ms.
>>> 
>>> PREFIX spatial: <http://jena.apache.org/spatial#>
>>> PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
>>> 
>>> SELECT ?place
>>> {
>>>?place spatial:withinCircle (32.55668 -117.128651  "km"  ) .
>>> 
>>> 
>>> }
>>> 
>>> SQL
>>> 
>>> select  geonameid from geoname where ST_Intersects(the_geom,
>>> ST_Buffer(ST_SetSRID(ST_MakePoint(-117.12865, 32.55668
>>> ),4326)::geography,
>>> 1000)::geometry)
>>> 
>>> Do I need to make any extra configuration on Fuseki to improve the
>>> performance?
>>> 
>>> 
>>> Best,
>>> 
>>> 
> 
> 
> -- 
> Senior Data Scientist
> Geophy
> www.geophy.com
> 
> Nieuwe Plantage 54-55
> 2611XK  Delft
> +31 (0)70 7640725
> 
> 1 Fore Street
> EC2Y 9DT  London
> +44 (0)20 37690760



Re: Jena with Lucene 5 or 6

2016-12-23 Thread A. Soroka
Work is ongoing on that front:

https://issues.apache.org/jira/browse/JENA-1250

but I will leave it to Osma, who has been closely involved, to comment as to 
its likely future.

---
A. Soroka
The University of Virginia Library

> On Dec 23, 2016, at 10:35 AM, Samur Araujo <s.ara...@geophy.com> wrote:
> 
> Is there any plan to migrate Jena/Fuseki for Lucene 5 or 6?
> 
> Any fork available that have done the migration already?
> Best,
> 
> -- 
> Senior Data Scientist
> Geophy
> www.geophy.com
> 
> Nieuwe Plantage 54-55
> 2611XK  Delft
> +31 (0)70 7640725
> 
> 1 Fore Street
> EC2Y 9DT  London
> +44 (0)20 37690760


