Re: Problems when querying the SPARQL with Jena

2024-03-11 Thread Daniel Hernandez


Hi Max,

What do you mean by namespace declarations? Do you mean query prefixes?
I don't understand why this query would need a prefix; a prefix only
matters when a query uses prefixed names, and this is a very simple
query without URIs:

SELECT * { ?s ?p ?o }

Best,
Daniel

Maximilian Behnert-Brodhun  writes:

> Dear Pan,
>
> did you try adding namespace declarations to your query?
>
> Best regards
>
> Max
>
> On Mon, 11 Mar 2024 at 14:45, Anna P <
> specialcookie...@gmail.com> wrote:
>
>> Dear Jena support team,
>>
>> I have just started working on a SPARQL project using Jena, and I ran
>> into a problem when querying a model.
>> I imported a Turtle file and ran a simple query; the code snippet is
>> shown below. However, I got the error shown further down.
>>
import org.apache.jena.query.*;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.riot.Lang;
import org.apache.jena.riot.RDFDataMgr;

public class App {
    public static void main(String[] args) {
        try {
            Model model = RDFDataMgr.loadModel("data.ttl", Lang.TURTLE);
            RDFDataMgr.write(System.out, model, Lang.TURTLE);
            String queryString = "SELECT * { ?s ?p ?o }";
            Query query = QueryFactory.create(queryString);
            QueryExecution qe = QueryExecutionFactory.create(query, model);
            ResultSet results = qe.execSelect();
            ResultSetFormatter.out(System.out, results, query);
            qe.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
>>
>> Here is the error message:
>>
>> org.apache.jena.riot.RiotException: Not registered as a SPARQL result set
>> output syntax: Lang:SPARQL-Results-JSON
>> at org.apache.jena.sparql.resultset.ResultsWriter.write(ResultsWriter.java:179)
>> at org.apache.jena.sparql.resultset.ResultsWriter.write(ResultsWriter.java:156)
>> at org.apache.jena.sparql.resultset.ResultsWriter.write(ResultsWriter.java:149)
>> at org.apache.jena.sparql.resultset.ResultsWriter$Builder.write(ResultsWriter.java:96)
>> at org.apache.jena.query.ResultSetFormatter.output(ResultSetFormatter.java:308)
>> at org.apache.jena.query.ResultSetFormatter.outputAsJSON(ResultSetFormatter.java:516)
>> at de.unistuttgart.ki.esparql.App.main(App.java:46)
>>
>>
>> Thank you for your time and help!
>>
>> Best regards,
>>
>> Pan
>>



Support of RDF-star quoted triples in query results

2022-12-05 Thread Daniel Hernandez


Hi everyone,

I loaded the following data in Fuseki 4.6.1:

@prefix : <http://example.org/> .
<< :Bob :hasFriend :Alice >> :accordingTo :Alice .

Then, the following query returns :Bob and :Alice, as expected.

PREFIX : <http://example.org/>
SELECT ?s ?o
WHERE {
  << ?s ?p ?o >> ?q ?v. 
}

However, the following query returns an error "e[t].value.replace is not
a function" in the Fuseki UI when selecting the output type as JSON:

PREFIX : <http://example.org/>
SELECT ?t
WHERE {
  ?t ?q ?v .
}

And the following query returns :Alice (without error):

PREFIX : <http://example.org/>
SELECT ?v
WHERE {
  ?t ?q ?v .
}

However, the problematic query works fine when run outside the Fuseki UI:

curl -G http://localhost:3030/ds/sparql --data-urlencode \
 query='SELECT ?t WHERE { ?t ?q ?v }'
{ "head": {
"vars": [ "t" ]
  } ,
  "results": {
"bindings": [
  { 
"t": {
  "type": "triple" , 
  "value": {
"subject":  { "type": "uri" , "value": "http://example.org/Bob; } ,
"predicate": { "type": "uri" , "value": 
"http://example.org/hasFriend; } ,
"object":   { "type": "uri" , "value": "http://example.org/Alice; }
  }
}
  }
]
  }
}
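
For completeness, here is a minimal Java sketch of the same check on the
client side. It is only a sketch: it assumes Jena 4.x, the service URL used
above, and an illustrative class name of my own choosing.

import org.apache.jena.graph.Node;
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QuerySolution;
import org.apache.jena.query.ResultSet;

public class TripleTermCheck {
    public static void main(String[] args) {
        String service = "http://localhost:3030/ds/sparql";
        String queryString = "SELECT ?t WHERE { ?t ?q ?v }";
        try (QueryExecution qe = QueryExecutionFactory.sparqlService(service, queryString)) {
            ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                QuerySolution row = results.next();
                Node n = row.get("t").asNode();
                if (n.isNodeTriple()) {
                    // RDF-star quoted triples come back as triple terms.
                    System.out.println("Triple term: " + n.getTriple());
                } else {
                    System.out.println("Plain term: " + n);
                }
            }
        }
    }
}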

It seems that this issue only affects the Fuseki UI.

Daniel


Re: Understanding the output of Jena TDB Loader

2021-02-25 Thread Daniel Hernandez


Hi,

Andy Seaborne writes:

> On 23/02/2021 17:55, Daniel Hernandez wrote:
>> Hi,
>>
>>>> The disk where I was loading the data was a local 7200 rpm rotating
>>>> disk. The machine also has an SSD, but it is too small for the
>>>> experiment.
>>>
>>> tdbloader2 may be the right choice for that setup - it was written
>>> with disks in mind. It uses Unix sort(1). What it needs is to tune the
>>> parameters to the runs of "sort"
>> Thanks, this information is very useful.
>>
>>> Wolfgang Fahl has loaded large datasets (several billion triples)
>>>
>>> https://issues.apache.org/jira/browse/JENA-1909
>>>
>>> and his notes are at:
>>>
>>>   http://wiki.bitplan.com/index.php/Get_your_own_copy_of_WikiData
>> I have also loaded Wikidata on a very small virtual machine with a
>> single core and a rotating, non-local disk. I remember it took more
>> than a week. I did not save the log, because the machine was running
>> other jobs at the same time. The next time I load a big dataset I will
>> share the machine specification and the loading log.
>>
>>>> I wonder if it is better to load the data using a fast disk, a lot of
>>>> RAM, or a lot of cores.
>>>
>>> A few years ago, I ran load tests of two machines, one 32G+SATA SSD,
>>> one 16G+ 1TB M2 SSD.  The 16G but faster SSD was quicker overall.
>> That is interesting.  I am considering getting a machine with an NVMe
>> SSD for the next load.
>>
>>> Database directories can be copied across machines after they have
>>> been built.
>> tdbloader2 generates some files with the .tmp extension. The file
>> data-triples.tmp can be very big. The name suggests that it is a
>> temporary file. Can I delete that file after the loading ends?
>
> Yes.
>
> The files are the triple ids from the parse/load nodes stage.
>
> Then comes the indexing, which is multiple passes over the tmp files,
> once per index to sort, using an external sort (in both senses:
> external program and external to disk), then building the indexes in a
> single pass per index.
>
> This is reusing the external sort capability of sorting data much
> larger than RAM.  sort(1) needs
>
> I found a previous load script (when wikidata was 2.2 B IIRC)
>
> Setting SORT_ARGS
>
> --
> #!/bin/bash
>
> echo "== $(date)"
>
> export TOOL_DIR="$PWD"
> export JENA_HOME="$HOME/jlib/apache-jena-3.5.0"
> export JVM_ARGS=""
> export GZIP="--fast"
> #export SORT_ARGS="--parallel=2 --compress-program=/bin/gzip
>  --temporary-directory=$PWD/tmp --buffer-size=75%"
>
> export SORT_ARGS="--temporary-directory=$PWD/tmp"
>
> ## -k : keep work files.
>
> # Logger:org.apache.jena.riot
>
> PHASE="--phase index"
> ARGS="--keep-work $PHASE --loc db2-all"
>
> tdbloader2 $ARGS "$@"
> echo "== $(date)"
> --
> IIRC not all sort(1) had "--parallel" back then.
>
>
> I also found a script replacement for the "sort" command in the scripts:
> --
> #!/bin/bash
> # Special.
> ## mysort $KEYS "$DATA" "$WORK"
>
> KEYS="$1"
> DATA="$2"
> WORK="$3"
>
> SORT_ARGS="--compress-program=/bin/gzip --temporary-directory=$PWD/tmp
> --buffer-size=80%"
> gzip -d < "$DATA.gz" | sort $SORT_ARGS -u $KEYS > "$WORK"
> --
>
>   HTH
>   Andy

Thanks Andy, you helped me a lot!

Best,
Daniel


Re: Understanding the output of Jena TDB Loader

2021-02-23 Thread Daniel Hernandez


Hi,

>> The disk where I was loading the data was a local 7200 rpm rotating
>> disk. The machine also has an SSD, but it is too small for the
>> experiment.
>
> tdbloader2 may be the right choice for that setup - it was written
> with disks in mind. It uses Unix sort(1). What it needs is to tune the
> parameters to the runs of "sort"

Thanks, this information is very useful.

> Wolfgang Fahl has loaded large datasets (several billion triples)
>
> https://issues.apache.org/jira/browse/JENA-1909
>
> and his notes are at:
>
>  http://wiki.bitplan.com/index.php/Get_your_own_copy_of_WikiData

I have also loaded Wikidata on a very small virtual machine with a
single core and a rotating, non-local disk. I remember it took more
than a week. I did not save the log, because the machine was running
other jobs at the same time. The next time I load a big dataset I will
share the machine specification and the loading log.

>> I wonder if it is better to load the data using a fast disk, a lot of
>> RAM, or a lot of cores.
>
> A few years ago, I ran load tests of two machines, one 32G+SATA SSD,
> one 16G+ 1TB M2 SSD.  The 16G but faster SSD was quicker overall.

That is interesting.  I am considering getting a machine with an NVMe
SSD for the next load.

> Database directories can be copied across machines after they have
> been built.

tdbloader2 generates some files with the .tmp extension. The file
data-triples.tmp can be very big. The name suggests that it is a
temporary file. Can I delete that file after the loading ends?

Best,
Daniel


Re: Understanding the output of Jena TDB Loader

2021-02-16 Thread Daniel Hernandez


Hi,

>>> tdbloader2 may not be the right choice. It is a bit niche, but if you
>>> have much less RAM than total data it can be better than tdbloader, and
>>> it is better if there is a rotating disk, not an SSD. It has been
>>> reported to be the right choice for several billion triples on SSD.
>> I have an SSD, a machine with 256 GB of RAM, and 32 cores. Do
>> you recommend using tdbloader in this setting?
>
> The rate you were getting seems low even for tdbloader2 - is it all SSD
> or could /tmp be on a disk? And is the SSD local or remote (e.g. EBS)?
>
> As a general point, because the hardware matters, it is a case of try
> a few cases and see.

Sorry, I was confused. The disk where I was loading the data was a
local 7200 rpm rotating disk. The machine also has an SSD, but it is too
small for the experiment.

> Does it have to be TDB1? "tdb2.tdbloader --loader=parallel" is the
> most aggressive loader. For TDB1, I'm not sure if "tdbloader2" or
> "tdbloader" will be faster end-to-end.

I have run some queries using TDB1 before, so I want to compare the
performance under similar conditions. Otherwise, I would have to run the
queries again on TDB2. So I have to evaluate which option is better.

> I'd be interested in what you found out. It's been a while since I had
> access to a large machine (which was on AWS ~240G RAM, local SSD). I
> used tdb2.tdbloader (i.e. TDB2).

I am sorry that my machine was not better; it has a rotating disk. I
have another machine, with a 1 TB local SSD, but with only 64 GB of RAM.
I am going to test the loading speed on that machine (once it finishes
the jobs it is currently running). I wonder whether it is better to load
the data using a fast disk, a lot of RAM, or a lot of cores.

Best,
Daniel


Re: Understanding the output of Jena TDB Loader

2021-02-13 Thread Daniel Hernandez


Hi,

Andy Seaborne writes:
> How much data are you loading?

I am loading a billion triples.

> Heap is only used for the node table cache, not index work, which is
> out of heap in memory-mapped files mapped by the virtual memory of the
> OS process, so caching is done by the OS filesystem cache machinery. It
> can make the OS process look very large even if the heap is only 1.2G.

So it is better not to modify the -Xmx parameter?

> tdbloader2 may not be the right choice. It is a bit niche, but if you
> have much less RAM than total data it can be better than tdbloader, and
> it is better if there is a rotating disk, not an SSD. It has been
> reported to be the right choice for several billion triples on SSD.

I have an SSD, a machine with 256 GB of RAM, and 32 cores. Do you
recommend using tdbloader in this setting?

Best regards,
Daniel


Re: Understanding the output of Jena TDB Loader

2021-02-13 Thread Daniel Hernandez


Hi,

Thanks Lorenz for your answer.  Regarding a possible spill to disk, my
machine has 256 GB of RAM, and the Java process is taking only 20 G.  I
am not sure if changing the -Xmx Java argument would speed up the
loading process.  I see that tdbloader2 started the process with the
argument -Xmx1200M.  Maybe that is too conservative.

Best,
Daniel

Lorenz Buehmann writes:

> On 13.02.21 12:00, Daniel Hernandez wrote:
>> Hi,
>>
>> I am loading an n-triples file using tdbloader2.  I am curious about
>> the meaning of the numbers in the loader output.  The loading output
>> started as follows:
>>
>>  09:54:15 INFO -- TDB Bulk Loader Start
>>  09:54:15 INFO Data Load Phase
>>  09:54:15 INFO Got 1 data files to load
>>  09:54:15 INFO Data file 1: /home/ubuntu/dataset.nq.gz
>> 09:54:59 INFO  loader  :: Load: /home/ubuntu/dataset.nq.gz -- 2021/02/08 09:54:59 UTC
>> 09:55:01 INFO  loader  :: Add: 50,000 Data (Batch: 19,912 / Avg: 19,912)
>> 09:55:03 INFO  loader  :: Add: 100,000 Data (Batch: 23,288 / Avg: 21,468)
>> 09:55:05 INFO  loader  :: Add: 150,000 Data (Batch: 26,123 / Avg: 22,824)
>> 09:55:07 INFO  loader  :: Add: 200,000 Data (Batch: 24,987 / Avg: 23,329)
>> 09:55:09 INFO  loader  :: Add: 250,000 Data (Batch: 25,641 / Avg: 23,757)
>> 09:55:11 INFO  loader  :: Add: 300,000 Data (Batch: 25,100 / Avg: 23,971)
>> 09:55:13 INFO  loader  :: Add: 350,000 Data (Batch: 24,213 / Avg: 24,005)
>> 09:55:15 INFO  loader  :: Add: 400,000 Data (Batch: 24,461 / Avg: 24,061)
>> 09:55:17 INFO  loader  :: Add: 450,000 Data (Batch: 25,667 / Avg: 24,230)
>> 09:55:19 INFO  loader  :: Add: 500,000 Data (Batch: 25,879 / Avg: 24,385)
>> 09:55:19 INFO  loader  ::   Elapsed: 20.50 seconds [2021/02/08 09:55:19 UTC]
>> 09:55:21 INFO  loader  :: Add: 550,000 Data (Batch: 25,484 / Avg: 24,481)
>> 09:55:23 INFO  loader  :: Add: 600,000 Data (Batch: 23,419 / Avg: 24,389)
>> 09:55:25 INFO  loader  :: Add: 650,000 Data (Batch: 25,012 / Avg: 24,436)
>> 09:55:27 INFO  loader  :: Add: 700,000 Data (Batch: 25,201 / Avg: 24,489)
>> 09:55:29 INFO  loader  :: Add: 750,000 Data (Batch: 26,288 / Avg: 24,601)
>> 09:55:31 INFO  loader  :: Add: 800,000 Data (Batch: 25,960 / Avg: 24,682)
>> 09:55:33 INFO  loader  :: Add: 850,000 Data (Batch: 24,330 / Avg: 24,661)
>> 09:55:35 INFO  loader  :: Add: 900,000 Data (Batch: 25,813 / Avg: 24,722)
>> 09:55:37 INFO  loader  :: Add: 950,000 Data (Batch: 26,164 / Avg: 24,794)
>> 09:55:39 INFO  loader  :: Add: 1,000,000 Data (Batch: 26,357 / Avg: 24,868)
>> 09:55:39 INFO  loader  ::   Elapsed: 40.21 seconds [2021/02/08 09:55:39 UTC]
>>
>> My first questions are:
>>
>> 1) I guess that 600,000 is the amount of data loaded by 09:55:23.  What
>> does "Data" mean?  Does it mean bytes or triples?
> #triples
>>
>> 2) What are the numbers Batch: 23,419 and Avg: 24,389?  I guess they are
>> associated with the loading speed.
>
> Batch means the number of triples per second over the last 50K triples
>
> Avg means the number of triples per second over all triples parsed so far
>
>>
>> After some days of loading the output shows different numbers:
>>
>> 10:21:45 INFO  loader  ::   Elapsed: 433,606.84 seconds [2021/02/13 10:21:45 UTC]
>> 10:21:48 INFO  loader  :: Add: 505,550,000 Data (Batch: 18,348 / Avg: 1,165)
>> 10:21:51 INFO  loader  :: Add: 505,600,000 Data (Batch: 18,656 / Avg: 1,166)
>> 10:22:55 INFO  loader  :: Add: 505,650,000 Data (Batch: 781 / Avg: 1,165)
>> 10:36:12 INFO  loader  :: Add: 505,700,000 Data (Batch: 62 / Avg: 1,163)
>> 10:36:14 INFO  loader  :: Add: 505,750,000 Data (Batch: 17,543 / Avg: 1,164)
>> 10:36:17 INFO  loader  :: Add: 505,800,000 Data (Batch: 17,385 / Avg: 1,164)
>> 10:36:20 INFO  loader  :: Add: 505,850,000 Data (Batch: 17,998 / Avg: 1,164)
>> 10:36:23 INFO  loader  :: Add: 505,900,000 Data (Batch: 17,170 / Avg: 1,164)
>> 10:37:12 INFO  loader  :: Add: 505,950,000 Data (Batch: 1,025 / Avg: 1,164)
>> 10:37:14 INFO  loader  :: Add: 506,000,000 Data (Batch: 18,301 / Avg: 1,164)
>> 10:37:14 INFO  loader  ::   Elapsed: 434,535.94 seconds [2021/02/13 10:37:14 UTC]

Understanding the output of Jena TDB Loader

2021-02-13 Thread Daniel Hernandez


Hi,

I am loading an n-triples file using tdbloader2.  I am curious about
the meaning of the numbers in the loader output.  The loading output
started as follows:

 09:54:15 INFO -- TDB Bulk Loader Start
 09:54:15 INFO Data Load Phase
 09:54:15 INFO Got 1 data files to load
 09:54:15 INFO Data file 1: /home/ubuntu/dataset.nq.gz
09:54:59 INFO  loader  :: Load: /home/ubuntu/dataset.nq.gz -- 2021/02/08 09:54:59 UTC
09:55:01 INFO  loader  :: Add: 50,000 Data (Batch: 19,912 / Avg: 19,912)
09:55:03 INFO  loader  :: Add: 100,000 Data (Batch: 23,288 / Avg: 21,468)
09:55:05 INFO  loader  :: Add: 150,000 Data (Batch: 26,123 / Avg: 22,824)
09:55:07 INFO  loader  :: Add: 200,000 Data (Batch: 24,987 / Avg: 23,329)
09:55:09 INFO  loader  :: Add: 250,000 Data (Batch: 25,641 / Avg: 23,757)
09:55:11 INFO  loader  :: Add: 300,000 Data (Batch: 25,100 / Avg: 23,971)
09:55:13 INFO  loader  :: Add: 350,000 Data (Batch: 24,213 / Avg: 24,005)
09:55:15 INFO  loader  :: Add: 400,000 Data (Batch: 24,461 / Avg: 24,061)
09:55:17 INFO  loader  :: Add: 450,000 Data (Batch: 25,667 / Avg: 24,230)
09:55:19 INFO  loader  :: Add: 500,000 Data (Batch: 25,879 / Avg: 24,385)
09:55:19 INFO  loader  ::   Elapsed: 20.50 seconds [2021/02/08 09:55:19 UTC]
09:55:21 INFO  loader  :: Add: 550,000 Data (Batch: 25,484 / Avg: 24,481)
09:55:23 INFO  loader  :: Add: 600,000 Data (Batch: 23,419 / Avg: 24,389)
09:55:25 INFO  loader  :: Add: 650,000 Data (Batch: 25,012 / Avg: 24,436)
09:55:27 INFO  loader  :: Add: 700,000 Data (Batch: 25,201 / Avg: 24,489)
09:55:29 INFO  loader  :: Add: 750,000 Data (Batch: 26,288 / Avg: 24,601)
09:55:31 INFO  loader  :: Add: 800,000 Data (Batch: 25,960 / Avg: 24,682)
09:55:33 INFO  loader  :: Add: 850,000 Data (Batch: 24,330 / Avg: 24,661)
09:55:35 INFO  loader  :: Add: 900,000 Data (Batch: 25,813 / Avg: 24,722)
09:55:37 INFO  loader  :: Add: 950,000 Data (Batch: 26,164 / Avg: 24,794)
09:55:39 INFO  loader  :: Add: 1,000,000 Data (Batch: 26,357 / Avg: 24,868)
09:55:39 INFO  loader  ::   Elapsed: 40.21 seconds [2021/02/08 09:55:39 UTC]

My first questions are:

1) I guess that 600,000 is the amount of data loaded by 09:55:23.  What
does "Data" mean?  Does it mean bytes or triples?

2) What are the numbers Batch: 23,419 and Avg: 24,389?  I guess they are
associated with the loading speed.

After some days of loading the output shows different numbers:

10:21:45 INFO  loader  ::   Elapsed: 433,606.84 seconds [2021/02/13 10:21:45 UTC]
10:21:48 INFO  loader  :: Add: 505,550,000 Data (Batch: 18,348 / Avg: 1,165)
10:21:51 INFO  loader  :: Add: 505,600,000 Data (Batch: 18,656 / Avg: 1,166)
10:22:55 INFO  loader  :: Add: 505,650,000 Data (Batch: 781 / Avg: 1,165)
10:36:12 INFO  loader  :: Add: 505,700,000 Data (Batch: 62 / Avg: 1,163)
10:36:14 INFO  loader  :: Add: 505,750,000 Data (Batch: 17,543 / Avg: 1,164)
10:36:17 INFO  loader  :: Add: 505,800,000 Data (Batch: 17,385 / Avg: 1,164)
10:36:20 INFO  loader  :: Add: 505,850,000 Data (Batch: 17,998 / Avg: 1,164)
10:36:23 INFO  loader  :: Add: 505,900,000 Data (Batch: 17,170 / Avg: 1,164)
10:37:12 INFO  loader  :: Add: 505,950,000 Data (Batch: 1,025 / Avg: 1,164)
10:37:14 INFO  loader  :: Add: 506,000,000 Data (Batch: 18,301 / Avg: 1,164)
10:37:14 INFO  loader  ::   Elapsed: 434,535.94 seconds [2021/02/13 10:37:14 UTC]

Now the Batch and Avg numbers are smaller.  Also, it is taking longer to
load each 500,000 triples.  At the beginning it took 20 seconds to load
500,000 triples; now it is taking 929 seconds.  Why has the load speed
degraded?  In my experience loading big datasets in Jena, I always see
that loading gets slower the more data has already been loaded.
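
(For reference, the 929 seconds figure follows from the two Elapsed lines
above: 434,535.94 - 433,606.84 = 929.10 seconds for the last 500,000
triples, i.e. roughly 540 triples per second in that window, compared with
about 24,400 triples per second at the start of the load.)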

Best regards,
Daniel


Re: Experience with large number (e.g. 1M) of Named Graphs in Fuseki?

2017-01-17 Thread Daniel Hernandez
I did a benchmark with different models of the same data. One model had
more than 65 million named graphs. Fuseki had no problems with named graphs
compared with storing the same data in a single graph. You can see the
results of the benchmark in:

https://users.dcc.uchile.cl/~dhernand/research/ssws-2015-reifying.pdf

Cheers,
Daniel


  On Tue, 17 Jan 2017 06:22:11 -0300 Nikolaos Beredimas wrote:
 > My first guess would be no, it wouldn't hurt performance. 
 > Although I have limited experience on Fuseki (just using it as a second 
 > test endpoint to verify SPARQL compatibility), 
 > I am using a similar approach on a different RDF store (currently at about 
 > 650,000 graphs on one deployment) 
 > I would imagine that Graphs are indexed the same as S, P, or O on Fuseki. 
 >  
 > One of my main problems with this approach has also been the lack of UI 
 > support for administration of a large number of graphs, as you pointed out. 
 > This is something not specific to Fuseki, apparently there isn't enough 
 > demand for this use case. 
 >  
 > On Tue, Jan 17, 2017 at 9:59 AM, Conal Tuohy  wrote: 
 >  
 > > I am working with Fuseki 2 using the SPARQL Named Graph protocol, and I 
 > > wondered if there are practical limits on the number of named graphs in 
 > > the 
 > > graph store? 
 > > 
 > > I know that many people use Jena only with a very small number of distinct 
 > > graphs, and I noticed that Fuseki's own user interface really only works 
 > > well when the number of named graphs is small (less than a thousand). That 
 > > is not a big problem in itself, since I don't need to use that UI, but I'm 
 > > more concerned about performance or other limitations when the number of 
 > > graphs is much higher; on the order of a million graphs, or a few million. 
 > > 
 > > Can anyone reassure me? Has anyone had problems with large numbers of named 
 > > graphs, and if so, were you able to fix them? 
 > > 
 > > Thanks! 
 > > 
 > > Conal 
 > > 
 > > -- 
 > > Conal Tuohy 
 > > http://conaltuohy.com/ 
 > > @conal_tuohy 
 > > +61-466-324297 
 > > 
 > 




status of the SSE grammar

2015-08-10 Thread Daniel Hernandez

Hi,

What is the current status of the SSE grammar? I see an "insert grammar
here" to-do message in the documentation [1]. Is there another place where I
can find the documentation of the SSE grammar?

[1]: https://jena.apache.org/documentation/notes/sse.html

Cheers,
Daniel