LIMIT 10 doesn't limit very much in this query. It applies after the
group and count have happened - it is a limit on the results returned, not
on the work done in the query.
The engine still has to evaluate the whole WHERE part in order to calculate the COUNT.
The query returns any old 10 items (there is no sort) - basically
random, except that ARQ tends to do the same thing each run.
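If what you actually want is the ten most frequent relation types, adding a
sort makes the choice deterministic, though it still does all the work.
A sketch (untested), reusing the patterns from your query:

SELECT ?rel (COUNT(?rel) AS ?co)
WHERE {
  # assumes the MKG: and owl: prefixes are declared as in your query
  ?object MKG:English_name 'Pyrilamine' .
  ?RelAttr owl:annotatedTarget ?object ;
           owl:annotatedProperty ?rel ;
           MKG:pyear '1967' .
}
GROUP BY ?rel
ORDER BY DESC(?co)
LIMIT 10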
What about finding ten items first, then running the extract/count?
Something like (untested, and not guaranteed to be exactly the same):
SELECT ?rel (COUNT(?rel) AS ?co)
WHERE {
  { SELECT ?RelAttr WHERE {
      ?object MKG:English_name 'Pyrilamine' .
      ?RelAttr owl:annotatedTarget ?object ;
               MKG:pyear '1967' .
    } LIMIT 10
  }
  ?RelAttr owl:annotatedProperty ?rel .
}
GROUP BY ?rel
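(Note the inner SELECT picks just ten ?RelAttr annotation nodes, so the
outer counts are spread across those ten and sum to at most 10, assuming
each annotation node carries a single owl:annotatedProperty. That is a
different result from limiting the grouped output.)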
----
Did you mean to include "?RelAttr owl:annotatedSource ?subject"? It
changes the count for ?rel, but it doesn't make a lot of sense because
?subject isn't passed out or used to group.
Andy
On 07/12/2018 17:16, HYP wrote:
OK. I will explain my project.
The KG schema is composed of a set of semantic types, like disease or drug, and
a set of relations, like treated_by(disease, drug).
Each instance of a relation, like treated_by(disease_1, drug_1), has an
annotation property 'year', which means that triple occurred in that year.
My query has two steps. First, query the triples related to some drug,
like Pyrilamine, group them by relation type, and give a count. Second,
query the related nodes in one relation type (see the sketch after the
first-step query below).
The first-step query looks like:
SELECT ?rel (COUNT(?rel) AS ?co)
WHERE {
  ?object MKG:English_name 'Pyrilamine' .
  ?RelAttr owl:annotatedTarget ?object ;
           owl:annotatedSource ?subject ;
           owl:annotatedProperty ?rel ;
           MKG:pyear '1967' .
}
GROUP BY ?rel
LIMIT 10
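A second-step query, pulling the related nodes for one relation type found
in step one, might look like this (a sketch; MKG:treated_by stands in for
whichever ?rel value was picked):

SELECT ?subject ?object
WHERE {
  ?object MKG:English_name 'Pyrilamine' .
  ?RelAttr owl:annotatedTarget ?object ;
           owl:annotatedSource ?subject ;
           owl:annotatedProperty MKG:treated_by ;
           MKG:pyear '1967' .
}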
On 12/8/2018 01:00, ajs6f <aj...@apache.org> wrote:
Let's slow down here a bit.
We can't give you any reasonable advice until you tell us _much_ more about
your work. What is the data like? What kinds of queries are you doing? How are
you running them? What do you expect to happen?
Please give us a great deal more context.
ajs6f
On Dec 7, 2018, at 11:45 AM, HYP <hyphy...@163.com> wrote:
I stored the 1.4B triples in two steps. First, I made 886 RDF files, each of
which contains 1,615,837 triples. Then I uploaded them into TDB using Fuseki.
That was a huge job. Are you sure that named graphs give better performance?
And how would I build named graphs?
On 12/7/2018 23:48, Vincent Ventresque <vincent.ventres...@ens-lyon.fr> wrote:
Do you mean -Xms = 64G?
N.B.: with 1.4B triples, you should have better performance using
named graphs.
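For example, data can be put into named graphs with SPARQL Update's LOAD
on the update endpoint, and then queried with a GRAPH clause - a sketch,
with illustrative file and graph IRIs:

LOAD <file:///data/part-001.rdf> INTO GRAPH <http://example/graphs/part-001>

SELECT ?s ?p ?o
WHERE { GRAPH <http://example/graphs/part-001> { ?s ?p ?o } }
LIMIT 10

(I believe the tdbloader command-line tool also takes a --graph option,
which should be faster for bulk loading at this scale.)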
On 07/12/2018 at 16:37, 胡云苹 wrote:
My memory is 64G, and my setting has no upper limit.
On 12/7/2018 23:34, Vincent Ventresque <vincent.ventres...@ens-lyon.fr> wrote:
Hello
How do you run Fuseki? You can increase the Java memory limit with
Java options:
java -Xms4096m -Xmx4096m -jar fuseki-server.jar
(where 4096m = 4 GB, but it could be 8192m or more)
N.B.: I'm not a specialist and don't know whether -Xms and -Xmx must be
the same.
If I remember correctly, the memory limit is 1.2 GB when you run
'./fuseki start' or './fuseki-server'.
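If you use those scripts, I think the limit can also be raised through the
JVM_ARGS environment variable (assuming your Fuseki version's scripts honor
it), e.g.:
JVM_ARGS="-Xmx8192m" ./fuseki-server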
Vincent
On 07/12/2018 at 16:23, 胡云苹 wrote:
Dear Jena,
I have built a graph with 1.4 billion triples and stored it as a
dataset in TDB through the Fuseki upload system. Now, when I run
some SPARQL searches, the speed is very slow.
For example, when I run the following SPARQL query in Fuseki,
it takes 50 seconds. How can I improve the speed?
-