On Wed, Mar 25, 2015 at 09:54:02AM -0500, Tim Donohue wrote:
> Hi helix84,
>
> So, it seems like there's two possible routes to take here:
>
> 1. An event consumer writes directly to Solr. The "persistent store" is
> then simply a dump from Solr to CSV.
>
> 2. An event consumer writes directly to CSV. Solr then indexes those CSVs.
3. IMHO the natural route, given the way DSpace handles events: one
event consumer feeds the cache, and another exports event records
for persistent storage. I think we even have a simple example of
the latter, and of course we already have the former. Just stack
'em up.
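
To make that concrete, a rough sketch of the second consumer is below.
It assumes the org.dspace.event.Consumer interface (initialize /
consume / end / finish) and a couple of Event accessors whose exact
names I'm quoting from memory, so check them against the tree before
taking this literally; the class name and CSV path are made up for the
example and would really come from configuration.

    package org.dspace.statistics.export; // hypothetical package

    import java.io.FileWriter;
    import java.io.PrintWriter;

    import org.dspace.core.Context;
    import org.dspace.event.Consumer;
    import org.dspace.event.Event;

    /**
     * Sketch of a consumer that appends each event to a CSV file for
     * persistent storage, alongside the existing consumer that feeds
     * the Solr cache.
     */
    public class CsvExportConsumer implements Consumer
    {
        // Hard-coded here only for illustration; a real consumer
        // would read this from configuration.
        private static final String CSV_PATH = "/dspace/log/events.csv";

        private PrintWriter out;

        @Override
        public void initialize() throws Exception
        {
            // Open in append mode so every dispatch adds to the record.
            out = new PrintWriter(new FileWriter(CSV_PATH, true));
        }

        @Override
        public void consume(Context ctx, Event event) throws Exception
        {
            // One CSV row per event:  timestamp, event type, subject
            // type, subject ID.  Accessor names are from memory and
            // should be verified against the Event class.
            out.printf("%d,%s,%s,%s%n",
                    event.getTimeStamp(),
                    event.getEventTypeAsString(),
                    event.getSubjectTypeAsString(),
                    event.getSubjectID());
        }

        @Override
        public void end(Context ctx) throws Exception
        {
            out.flush();
        }

        @Override
        public void finish(Context ctx) throws Exception
        {
            out.close();
        }
    }

Wiring it in would then be a matter of listing it next to the existing
consumer in the dispatcher configuration (the event.dispatcher.*.consumers
and event.consumer.* properties in dspace.cfg, assuming that's where the
statistics consumer is registered) -- that's the stacking part.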
--
Mark H. Wood
Lead Technology Analyst
University Library
Indiana University - Purdue University Indianapolis
755 W. Michigan Street
Indianapolis, IN 46202
317-274-0749
www.ulib.iupui.edu
