[ https://issues.apache.org/jira/browse/STANBOL-433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13174319#comment-13174319 ]

Alessandro Adamou commented on STANBOL-433:
-------------------------------------------

I am making some improvements to the GraphContentInputSource implementation.

I can now load a 200 MB RDF graph on a 1 GB VM, though exporting it as an 
OWL API OWLOntology is still tricky.

The load currently takes about 100 seconds on my rig, but I have a plan that 
should roughly halve that time.

However, the OWL API export is still a bottleneck. I am trying to discuss a 
workaround with the OWLAPI/Manchester people.
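
For context, here is a minimal sketch of the graph-level load that 
GraphContentInputSource builds on. It assumes Clerezza's Parser API used 
standalone (Parser.getInstance(), SupportedFormat.RDF_XML, as in the Clerezza 
releases of that period); it illustrates the idea and is not the actual 
Stanbol code path.

    import java.io.FileInputStream;
    import java.io.InputStream;

    import org.apache.clerezza.rdf.core.Graph;
    import org.apache.clerezza.rdf.core.serializedform.Parser;
    import org.apache.clerezza.rdf.core.serializedform.SupportedFormat;

    public class GraphLevelLoadDemo {
        public static void main(String[] args) throws Exception {
            InputStream rdf = new FileInputStream(args[0]);
            try {
                // Parse the RDF/XML stream straight into a Clerezza triple
                // graph; no OWL API axiom objects are created at this stage.
                Parser parser = Parser.getInstance();
                Graph graph = parser.parse(rdf, SupportedFormat.RDF_XML);
                System.out.println(graph.size() + " triples loaded");
            } finally {
                rdf.close();
            }
        }
    }

Keeping the data as plain triples means memory stays proportional to the 
graph itself; the expensive step is the later conversion of that graph into 
an OWLOntology, which is the export bottleneck mentioned above.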
                
> Loading large ontology using Java API gives out-of-memory error
> ---------------------------------------------------------------
>
>                 Key: STANBOL-433
>                 URL: https://issues.apache.org/jira/browse/STANBOL-433
>             Project: Stanbol
>          Issue Type: Bug
>          Components: Ontology Manager
>            Reporter: Stephen Bayliss
>            Priority: Minor
>
> Loading a large ontology - in our case an RDF file on the order of hundreds 
> of megabytes - leads to an out-of-memory error.
> The ontology is being loaded into a custom space, using an 
> OntologyInputSource.
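
For context (added here, not part of the original report): a minimal sketch, 
assuming OWL API 3.x, of the kind of fully in-memory load where an error of 
this sort shows up for files of that size.

    import java.io.File;

    import org.semanticweb.owlapi.apibinding.OWLManager;
    import org.semanticweb.owlapi.model.OWLOntology;
    import org.semanticweb.owlapi.model.OWLOntologyCreationException;
    import org.semanticweb.owlapi.model.OWLOntologyManager;

    public class OwlApiLoadDemo {
        public static void main(String[] args) throws OWLOntologyCreationException {
            // The whole document is parsed into OWLAxiom objects held on the
            // heap, which is what exhausts memory for RDF files of hundreds
            // of megabytes.
            OWLOntologyManager mgr = OWLManager.createOWLOntologyManager();
            OWLOntology ont = mgr.loadOntologyFromOntologyDocument(new File(args[0]));
            System.out.println(ont.getAxiomCount() + " axioms loaded");
        }
    }

The graph-level ingestion described in the comment above avoids this path 
until an OWL API view of the ontology is actually requested.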


        
