Jeff, the warning occurs when the size of the data is too large to cache. It seems the instance count is then skipped. You can always get an accurate instance count using the SPARQL aggregate COUNT().
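
For example, a query along these lines should do it (a sketch only; ex:Person and its prefix are placeholders, substitute a class URI from your own model):

    PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
    PREFIX ex:  <http://example.org/ns#>

    SELECT (COUNT(?instance) AS ?count)
    WHERE {
      ?instance rdf:type ex:Person .
    }

Running this directly against the database bypasses the caching layer, so the count stays accurate even when TBC skips caching.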
-- Scott

On Jun 2, 8:59 am, Jeff <[email protected]> wrote:
> I'm seeing no instance counts at all. I am also getting the following
> Information message in the Error Log when I open the model:
>
> Skipping Caching: ANY @rdf:type ANY in ((TBCCachingGraph))
>
> I don't see this message for smaller SDB graphs.
>
> Jeff
>
> On Jun 1, 8:42 pm, Holger Knublauch <[email protected]> wrote:
> > On Jun 2, 2010, at 12:25 AM, Schmitz, Jeffrey A wrote:
> > > On a related note, but more specific to TopBraid, I noticed that
> > > TopBraid seems to work a little differently when I am working on a
> > > large model (about 2,000,000 triples). For example, when the model is
> > > opened in TopBraid, it doesn't seem to perform the normal "default"
> > > inferencing it usually does automatically for smaller models:
> > > class/subclass inferencing isn't performed, nor do the instance
> > > counts exist. Is this correct?
> >
> > Hmm, not that I am aware of. I checked our source code and also didn't
> > find any such size limitations. Computing the number of triples is
> > already an expensive operation on some databases. There are some small
> > differences in the behavior with database projects in general, no
> > matter what size they have. One difference is that the system will
> > switch to "direct instances" mode only and not show the number of
> > instances of sub-classes. Also, the Instances view cannot be sorted by
> > columns (the latter has been relaxed for TDB from 3.4 onwards because
> > TDB is fast enough for large numbers of small transactions).
> >
> > > I'm guessing the answer as to why this would be is that to try and
> > > do otherwise would simply be too memory intensive.
> > >
> > > As I'm working with larger and larger models I'm starting to come to
> > > the conclusion that if you have models of any size (say on the order
> > > of 1,000,000 triples) and you want to do any inferencing (even
> > > OWL_MEM) you will need a 64-bit machine. Would you say this is a
> > > fair statement?
> > I have no experience with Jena inferencing (OWL_MEM etc.) of that
> > size, but my guess is that such things will work best if you are using
> > a local database (TDB), and TDB is optimized to work on 64-bit
> > machines.
> >
> > Holger

--
You received this message because you are subscribed to the Google Group
"TopBraid Suite Users", the topics of which include TopBraid Composer,
TopBraid Live, TopBraid Ensemble, SPARQLMotion and SPIN.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to
[email protected]
For more options, visit this group at
http://groups.google.com/group/topbraid-users?hl=en
