Hey Andy, my mtab follows.

/dev/sda5 / ext4 rw,errors=remount-ro,commit=0 0 0

-jp

On Mon, Jun 20, 2011 at 3:45 PM, Andy Seaborne
<[email protected]> wrote:
> I was using ext4:
>
> /etc/mtab ==>
> /dev/sda1 / ext4 rw,errors=remount-ro,user_xattr,commit=0 0 0
>
> (default when I installed Ubuntu).
>
> What is the mtab entry for your setup?
>
>        Andy
>
> On 20/06/11 20:30, Andy Seaborne wrote:
>>
>> Hi there,
>>
>> I tried to recreate this but couldn't; I don't have an SSD to hand
>> at the moment (it's being fixed :-)
>>
>> I've put my test program and the data from the jamendo-rdf you sent me in:
>>
>> http://people.apache.org/~andy/
>>
>> so we can agree on an exact test case. This code is single-threaded.
>>
>> The conversion from .rdf to .nt wasn't exact.
>>
>> I also tried running it using the in-memory store.
>> downloads.dbpedia.org was down at the weekend - I'll try to get the
>> same dbpedia data.
>>
>> Could you run exactly what I was running? The file name needs changing.
>>
>> You can also try uncommenting
>>   SystemTDB.setFileMode(FileMode.direct) ;
>> and running it with non-mapped files in about 1.2G of heap (sketched below).
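>>
>> (A minimal sketch of where that call sits - assuming, as I believe is
>> the case, that the file mode must be set before any TDB dataset is
>> created or opened in the JVM; the path is a placeholder:)
>>
>> // Force direct (non-memory-mapped) file access; this must run before
>> // any TDB dataset is created or opened in this JVM.
>> SystemTDB.setFileMode(FileMode.direct) ;
>> DatasetGraphTDB dsg = TDBFactory.createDatasetGraph("/path/to/tdb") ; // placeholder path
>> // Then run the loader as before, e.g. with -Xmx1200m for ~1.2G of heap.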
>>
>> Looking through the stack trace, there is a point where the code has
>> passed an internal consistency check, then fails with something that
>> should have been caught by that check - and the code is sync'ed or
>> single-threaded. This is, to put it mildly, worrying.
>>
>> Andy
>>
>> On 18/06/11 16:38, jp wrote:
>>>
>>> Hey Andy,
>>>
>>> My entire program runs in one JVM, as follows:
>>>
>>> // Imports, assuming the Jena/TDB package layout of that era:
>>> import java.io.FileInputStream;
>>> import java.io.IOException;
>>> import java.io.InputStream;
>>>
>>> import com.hp.hpl.jena.graph.Node;
>>> import com.hp.hpl.jena.graph.Triple;
>>> import com.hp.hpl.jena.vocabulary.RDF;
>>> import com.hp.hpl.jena.tdb.TDBFactory;
>>> import com.hp.hpl.jena.tdb.store.DatasetGraphTDB;
>>> import com.hp.hpl.jena.tdb.store.bulkloader.BulkLoader;
>>>
>>> // tdbDir and dbpediaData are String paths (values not shown).
>>> public static void main(String[] args) throws IOException {
>>>     DatasetGraphTDB datasetGraph = TDBFactory.createDatasetGraph(tdbDir);
>>>
>>>     /* I saw the BulkLoader had two ways of loading data, based on
>>>        whether the dataset already existed. I did two runs, one with
>>>        the following two lines commented out, to test both ways the
>>>        BulkLoader runs (see the sketch below). Hopefully this had the
>>>        desired effect. */
>>>     datasetGraph.getDefaultGraph().add(new Triple(
>>>         Node.createURI("urn:hello"), RDF.type.asNode(),
>>>         Node.createURI("urn:house")));
>>>     datasetGraph.sync();
>>>
>>>     InputStream inputStream = new FileInputStream(dbpediaData);
>>>
>>>     BulkLoader bulkLoader = new BulkLoader();
>>>     bulkLoader.loadDataset(datasetGraph, inputStream, true);
>>> }
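>>>
>>> For the second run I commented the preload lines out, roughly like
>>> this (same variables as above; this exercises the empty-dataset code
>>> path of the BulkLoader):
>>>
>>> // Second-run variant (sketch): no preload, so the dataset is
>>> // empty when the BulkLoader starts.
>>> DatasetGraphTDB datasetGraph = TDBFactory.createDatasetGraph(tdbDir);
>>> // datasetGraph.getDefaultGraph().add(...);   // commented out
>>> // datasetGraph.sync();                       // commented out
>>> BulkLoader bulkLoader = new BulkLoader();
>>> bulkLoader.loadDataset(datasetGraph, new FileInputStream(dbpediaData), true);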
>>>
>>> The data can be found here:
>>> http://downloads.dbpedia.org/3.6/en/mappingbased_properties_en.nt.bz2
>>> I appended the ontology to the end of the file; it can be found here:
>>> http://downloads.dbpedia.org/3.6/dbpedia_3.6.owl.bz2
>>>
>>> The tdbDir is an empty directory.
>>> On my system the error starts occurring after about 2-3 minutes,
>>> with 8-12 million triples loaded.
>>>
>>> Thanks for looking over this and please let me know if I can be of
>>> further assistance.
>>>
>>> -jp
>>> [email protected]
>>>
>>>
>>> On Jun 17, 2011 at 9:29 AM, Andy Seaborne wrote:
>>>>
>>>> jp,
>>>>
>>>> How does this fit with running:
>>>>
>>>> datasetGraph.getDefaultGraph().add(new Triple(
>>>>     Node.createURI("urn:hello"), RDF.type.asNode(),
>>>>     Node.createURI("urn:house")));
>>>> datasetGraph.sync();
>>>>
>>>> Is the preload of one triple in a separate JVM or the same JVM as the
>>>> BulkLoader call? Could you provide a single, complete, minimal example?
>>>>
>>>> In attempting to reconstruct this, I don't want to hide the problem by
>>>> guessing how things are wired together.
>>>>
>>>> Also - exactly which dbpedia file are you loading (URL)? Though I
>>>> doubt the exact data is the cause here.
>
