So why don't you try a plain update and see how that performs?
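A "plain update" here would mean writing into a model with no reasoner attached. A minimal sketch with the Jena 2.6.4 API (the Pellet-backed alternative is shown commented out for contrast; `PelletReasonerFactory` comes from Pellet's Jena bindings):

```java
import com.hp.hpl.jena.ontology.OntModel;
import com.hp.hpl.jena.ontology.OntModelSpec;
import com.hp.hpl.jena.rdf.model.ModelFactory;

public class PlainModelSketch {
    public static void main(String[] args) {
        // In-memory OWL model with no inference attached: each insertion
        // is a cheap triple-store update, so population stays fast.
        OntModel plain = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM);

        // For contrast, a Pellet-backed model does reasoning work as the
        // A-box grows (import org.mindswap.pellet.jena.PelletReasonerFactory):
        // OntModel reasoned =
        //     ModelFactory.createOntologyModel(PelletReasonerFactory.THE_SPEC);

        System.out.println("plain model size: " + plain.size());
    }
}
```

If population is fast against the plain model, the slowdown is the reasoner, not Jena's storage.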

On Fri, Mar 30, 2012 at 8:59 AM, [email protected] <[email protected]> wrote:
> yes, I insert individuals in every step.
>
> thanks!
>
>>----Original message----
>>From: [email protected]
>>Date: 30/03/2012 14.47
>>To: "[email protected]"<[email protected]>
>>Cc: <[email protected]>
>>Subject: Re: Re: Re: Jena slow ontology population
>>
>>So all updates are A-box related?
>>
>>On Fri, Mar 30, 2012 at 8:22 AM, [email protected] <[email protected]> wrote:
>>> Thanks a lot for your reply!
>>> As you assumed, I don't use either TDB or SDB... I have an in-memory
>>> model at runtime.
>>> It seems to me that my model does not require complicated inferences... at
>>> every step, it extracts a tuple from the DB and generates the corresponding
>>> individuals with model.createIndividual; the properties are added using the
>>> addProperty method.
>>>
>>> The only potentially expensive part is that, if an individual has the same
>>> id as the last one processed, I retrieve the latter from the ontology to
>>> add some properties...
>>>
>>> thanks a lot again!
>>>
>>> Paola
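The re-lookup step Paola describes can skip searching the (growing) model entirely by caching a reference to the last created individual. A hypothetical sketch (the `http://example.org/onto#` namespace, class and property names, and the inline id array standing in for the DB cursor are all made up for illustration):

```java
import com.hp.hpl.jena.ontology.Individual;
import com.hp.hpl.jena.ontology.OntClass;
import com.hp.hpl.jena.ontology.OntModel;
import com.hp.hpl.jena.ontology.OntModelSpec;
import com.hp.hpl.jena.rdf.model.ModelFactory;

public class LastIndividualCache {
    static final String NS = "http://example.org/onto#"; // hypothetical namespace

    public static void main(String[] args) {
        OntModel model = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM);
        OntClass cls = model.createClass(NS + "Record");

        Individual last = null; // reference to the previously created individual
        String lastId = null;

        for (String id : new String[] { "a", "a", "b" }) { // stand-in for DB tuples
            if (id.equals(lastId)) {
                // Same id as the previous tuple: reuse the cached reference
                // instead of extracting the individual from the ontology again.
                last.addProperty(model.createProperty(NS + "extra"), "more");
            } else {
                last = model.createIndividual(NS + id, cls);
                lastId = id;
            }
        }
        System.out.println("triples: " + model.size());
    }
}
```

Holding the `Individual` reference keeps the per-tuple cost constant regardless of how large the model has become.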
>>>
>>>
>>>
>>>>----Original message----
>>>>From: [email protected]
>>>>Date: 30/03/2012 13.56
>>>>To: "[email protected]"<[email protected]>
>>>>Cc: <[email protected]>
>>>>Subject: Re: Re: Jena slow ontology population
>>>>
>>>>Your hardware looks good. What type of inference does your model
>>>>require for updates?
>>>>
>>>>Pellet is a very powerful reasoning system, which can impose
>>>>significant computational complexity depending on the level of
>>>>inference you require. So you should know exactly what level of
>>>>reasoning your application needs in advance in order to optimize the
>>>>system at run-time.
>>>>
>>>>TDB and SDB are persistent storage options; from what you have just
>>>>said, I assume you are using an in-memory model at run-time.
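For reference, a TDB-backed model looks roughly like this (TDBFactory lives in the separate TDB module, which works alongside Jena 2.x; the directory path is a hypothetical example):

```java
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.tdb.TDBFactory;

public class TdbSketch {
    public static void main(String[] args) {
        // Persistent triple store on disk: the data survives JVM restarts
        // and is not bounded by the Java heap the way an in-memory model is.
        Model model = TDBFactory.createModel("/tmp/tdb-store"); // hypothetical path
        System.out.println("size: " + model.size());
        model.close();
    }
}
```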
>>>>
>>>>
>>>>
>>>>
>>>>On Fri, Mar 30, 2012 at 7:41 AM, [email protected] <[email protected]> wrote:
>>>>> Hi Marco, thanks a lot for your help! :-)
>>>>>
>>>>> Version of Jena: 2.6.4
>>>>> OS: Linux 64 bit
>>>>> reasoner: Pellet
>>>>> TDB and SDB: I don't use them; I believed they were related to RDF while
>>>>> I'm using OWL, but I may be wrong...
>>>>>
>>>>> current hw:
>>>>>  *-CPU
>>>>>          description: CPU
>>>>>          product: Intel(R) Xeon(R) CPU
>>>>>          version: Intel(R) Xeon(R) CPU E5620
>>>>>          slot: CPU 1
>>>>>          size: 2400MHz
>>>>>          capacity: 2400MHz
>>>>>          width: 64 bits
>>>>>          clock: 133MHz
>>>>>  *-memory
>>>>>          description: System Memory
>>>>>          physical id: 24
>>>>>          slot: System board or motherboard
>>>>>          size: 8GiB
>>>>>
>>>>> If you need any other information, please tell me! And thanks a lot again!
>>>>>
>>>>> Paola
>>>>>
>>>>>>----Original message----
>>>>>>From: [email protected]
>>>>>>Date: 30/03/2012 11.36
>>>>>>To: <[email protected]>, "[email protected]"<[email protected]>
>>>>>>Subject: Re: Jena slow ontology population
>>>>>>
>>>>>>Can you please be a bit more specific about the configuration you
>>>>>>currently have installed? E.g. version of Jena, back-end type (TDB,
>>>>>>SDB, in-memory), current hardware, OS (32-bit, 64-bit, Linux,
>>>>>>Windows, Mac), and the type of reasoner you use.
>>>>>>
>>>>>>---
>>>>>>Marco Neumann
>>>>>>KONA
>>>>>>
>>>>>>Join us at SemTech Biz in San Francisco June 3-7 2012 and save 15%
>>>>>>with the lotico community discount code 'STMN'
>>>>>>http://www.lotico.com/evt/SemTechSF2012/
>>>>>>
>>>>>>On Fri, Mar 30, 2012 at 4:22 AM, [email protected] <[email protected]> wrote:
>>>>>>> Dear users,
>>>>>>>
>>>>>>> I am new to Jena and have a problem with the Jena OWL API. I have
>>>>>>> written some simple Java code that:
>>>>>>>
>>>>>>> 1) extracts some tuples (around 1 million) of data from a database
>>>>>>> 2) puts each of them in an ontology file using the Jena library. In
>>>>>>> particular, it takes every tuple and inserts every element in a class
>>>>>>> of the ontology file, using methods like createIndividual, addProperty,
>>>>>>> model.write.
>>>>>>>
>>>>>>> The insertion phase is very quick when my program starts, but the
>>>>>>> performance becomes worse and worse as time goes by. The DB
>>>>>>> administrator has told me that the database has no problems. I have run
>>>>>>> the program with 5 gigabytes of RAM... it simply ends up using them
>>>>>>> all. Is that normal? Do I have to use a more powerful PC to manage such
>>>>>>> data? I can give you more details about my problem if you tell me what
>>>>>>> I have to look for exactly.
>>>>>>>
>>>>>>> Thanks a lot,
>>>>>>>
>>>>>>> Paola
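If model.write is being invoked once per tuple, each call serializes the entire model so far, which by itself would make insertion degrade as the model grows. A common pattern is to populate in the loop and serialize once at the end. A minimal sketch (the namespace, class and property names, loop bound, and output path are illustrative, not Paola's actual ontology):

```java
import java.io.FileOutputStream;

import com.hp.hpl.jena.ontology.Individual;
import com.hp.hpl.jena.ontology.OntClass;
import com.hp.hpl.jena.ontology.OntModel;
import com.hp.hpl.jena.ontology.OntModelSpec;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.rdf.model.Property;

public class BulkPopulate {
    static final String NS = "http://example.org/onto#"; // illustrative namespace

    public static void main(String[] args) throws Exception {
        OntModel model = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM);
        OntClass tupleClass = model.createClass(NS + "Tuple");
        Property value = model.createProperty(NS + "value");

        for (int i = 0; i < 1000; i++) {           // stand-in for the DB cursor
            Individual ind = model.createIndividual(NS + "t" + i, tupleClass);
            ind.addProperty(value, Integer.toString(i));
        }

        // Serialize once, after all insertions, instead of once per tuple.
        FileOutputStream out = new FileOutputStream("/tmp/out.owl"); // illustrative path
        model.write(out, "RDF/XML");
        out.close();
    }
}
```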
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>
>>
>>
>>
>
>



-- 


---
Marco Neumann
KONA

