Hello,
is there a way to get statistics on how many queries an endpoint is receiving?
Thanks,
Marc-Alexandre
Hi,
To push the loading capacity of the open-source Virtuoso I do two
things at Bio2RDF.
1) A good-sized server (24 cores, 128 GB RAM). I can't do much for you
here. The more RAM the better, and the more cores, the more parallel
loading... up to a certain point with the free version.
2) Exploit the
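The second point above is cut off, but the documented way to get parallel loading into Virtuoso is its bulk loader. A sketch, to be run from isql as dba; the directory, file mask, and graph IRI are placeholders:

```sql
-- Register the files to load (placeholders: path, mask, target graph)
ld_dir ('/data/bio2rdf', '*.n3', 'http://bio2rdf.org/graph');

-- Run one loader per call; start several from separate isql sessions
-- to load in parallel (roughly one per core)
rdf_loader_run ();

-- Persist the loaded data once the loaders finish
checkpoint;
```

The registered files must be under a directory listed in `DirsAllowed` in virtuoso.ini.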
Hello,
I've changed my dba password and I've lost it. Is there a way to
retrieve it, or to reset it when I restart my server? I've tried
starting my server with this to reset the dba password:
virtuoso-t -f +pwddba dba
But no success. When I try to connect using isql I get a "Bad login" error.
Hi,
Is there a Virtuoso query to remove all triples from a Virtuoso
server? The triples are spread across multiple graphs. I know I can
delete a single graph with the SPARQL query delete from graph ..., but
when faced with thousands of graphs, that is not a practical approach.
Thanks,
Marc-Alexandre
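For what it's worth, SPARQL 1.1 Update defines a single operation that empties every graph, and Virtuoso's endpoint accepts it; a minimal sketch (needs update/dba rights):

```sparql
# Remove all triples from all graphs (SPARQL 1.1 Update)
CLEAR ALL
```

Virtuoso also ships a stored procedure, DB.DBA.RDF_GLOBAL_RESET(), for wiping the quad store from isql; check the documentation before running either against production data.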
Segmentation fault
Also, the first message I received, though it no longer appears, was:
GPF: disk.c:1867 cannot write buffer to 0 page.
What does this mean? Is it because of the size of the virtuoso.db,
which is 1.1 TB?
Thanks for the help,
Marc-Alexandre Nolin
On Wed, 10 Mar 2010 at 10:09 -0500, Marc-Alexandre Nolin wrote:
I have an N3 dump I'm currently loading into a Virtuoso server (a
complete NCBI GenBank). One literal is always huge: the one attached
to the sequence predicate. Is it possible to compress
literals with a rule based on predicate
virtuoso-users-requ...@lists.sourceforge.net
wrote the following:
Message: 1
Date: Wed, 10 Mar 2010 10:09:08 -0500
From: Marc-Alexandre Nolin lo...@ieee.org
Subject: [Virtuoso-users] compression
To: virtuoso-users@lists.sourceforge.net
Message-ID
Hello,
I created a new triplestore and created the free-text index before
starting to load the triples. I loaded approximately 4 billion
triples in less than 2 days using TTLP. However, the free-text index
wasn't there afterwards. So the problem might be related to this
situation: no full-text index.
Thanks,
Marc-Alexandre Nolin
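If the index really is missing, Virtuoso's documented way to (re)enable free-text indexing of literals is a rule plus an incremental rebuild; a sketch, to be run from isql as dba:

```sql
-- Index string objects of all predicates in all graphs ('All' rule)
DB.DBA.RDF_OBJ_FT_RULE_ADD (null, null, 'All');

-- Build index entries for triples loaded before the rule existed
DB.DBA.VT_INC_INDEX_DB_DBA_RDF_OBJ ();
```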
rdf:type ncbi:Record .
?e ?predicate1 ?d .
?e rdf:type hhpid:P3 .
?e hhpid:P4 ?f .
?e hhpid:P5 ?g .
{
    ?f bio2rdf:P6 ?a .
}
UNION
{
    ?g bio2rdf:P6 ?a .
}
?a rdfs:label ?aLabel .
}
order by desc (count(?a))
;
Thanks for your help,
Marc-Alexandre Nolin
2010/1/21 Ivan Mikhailov imikhai
://biology.com/Protein .
bif:random(?s, 100) .
?s <http://biology.com/chromosomeNumber> ?cn .
filter(?cn = 10) .
}
Thanks,
Marc-Alexandre Nolin
Marc-Alexandre Nolin wrote:
Hello,
When I do the following query:
select distinct
?s
where
{
?s rdf:type <http://biology.com/Protein> .
}
limit 100
Let's say I have 2 objects of type protein in the triplestore. I
will receive a result containing 100 rows of 100 distinct proteins
Thanks everybody,
I've managed to accomplish exactly what I wanted using the query you
provided me and including it into another
select
?go count(?go)
where
{
{ select distinct ?protein
  where {
    ?protein rdf:type <http://biology.com/Protein> .
  }
order by asc
Thanks,
If you implement something with compression, tell me, I would be happy
to test it because it is a problem for me :)
Idea:
- It could be a rule to compress every literal (but indexing would
need to be done before it is compressed)
- It could be compression based on a selection of
Be aware that my script corrects some errors in triples that were
created by the first version of my rdfizer. So if you try to load it
in its current state, some triples will be missing the closing , but
since creating the N3 takes more than a week, I'm not redoing it now :)
Thanks,
Marc-Alexandre Nolin
Hello,
I'm receiving an error while trying to load an N3 version of PubMed:
Error SR175: Uniqueness violation : Violating unique index
RDF_QUAD_POGS on table DB.DBA.RDF_QUAD. Transaction killed.
What does it really mean? What I think it means is that the hash
function space isn't big enough and