On Thu, May 27, 2010 at 9:28 PM, Dan Wells <[email protected]> wrote:
> Hello,
>
>>> I also know from this whole process that you have developed very good
>>> instincts in working this stuff out, so if your gut is telling you we need
>>> to stick with the SRE table, that's good enough for me.
>
>> Well, it is, but I'd rather convince you on the merits, if I can. ;)
>
> How did I know you would? ;)
>
>> If we conceptually split SRE-based and SCAP-based holdings then we
>> have two code paths to maintain, and less (or, at least, more
>> difficult to code) options on what to display and when. If we leave
>> them "serialized" (heh, sorry, couldn't resist) then it's one code
>> path and simpler integrating logic AFAICT.
>
> I think this here gets at the root of any remaining disagreement. I *wanted*
> two code paths for the purposes of preservation and current development
> freedom; you want one path for better integration and future development
> convenience. Of course these generalizations are really more dramatic than
> the reality, as the same end results should be achievable in either case. In
> the end, I am really just trying to protect my data from myself, but as long
> as we agree (and I think we do) on the purpose and authority of the 'marc'
> field, I am happy to have a little more faith.
>
We do agree (though, possibly, still coming from different angles of attack) on the purpose and authority of the MFHD. If you think of the new serials tables and code as a direct extension of, rather than a peer of or replacement for, the MFHD-based functionality there now, I think you'll see the context that I'm working from.

From an output/display perspective, I'm looking at the whole set of functionality as a set of refinements providing more and better detail as we move from the bib record (a title) through the MFHD (for our purposes, general prediction source data and legacy coarse-grained holdings statements) and on to distributions (with holdings statements), streams (with lists of issuances), items (with physical locations) and units (with unique physical identifiers). Generally speaking, each of these steps depends on the one before, but the finer-grained details are not required for the coarser-grained data to be used. (That's not the direct reality for /setting up and creating/ the controlled data in all cases -- there are downstream dependencies on items for controlled holdings statements, for instance -- but the conceptual model is sound and correct for display logic.)

From a cataloging perspective, it's more or less the same. You start at a bib, and if you have an MFHD in hand you have the option of adding it -- or not, and just adding a subscription and the prediction data(*). Then, in the first case (adding the MFHD you have in hand), as needs dictate, you add the subscription+prediction information, receive items, and cause "controlled" holdings statements to be generated -- a linear, additive workflow instead of two separate processes.

Does restating it like that help explain what I'm imagining?
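To make the display-side fallback concrete, here's a minimal sketch in Python. The class and field names here are invented for illustration -- this is not the actual Evergreen schema -- but it shows the idea: each level refines the one before, and display logic returns the finest-grained statement available, falling back to coarser data when the finer data doesn't exist yet.

```python
# Illustrative sketch only: names are invented, not Evergreen's schema.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Stream:               # holds a list of issuances
    issuances: List[str] = field(default_factory=list)


@dataclass
class Distribution:         # carries a "controlled" holdings statement
    statement: Optional[str] = None
    streams: List[Stream] = field(default_factory=list)


@dataclass
class MFHD:                 # legacy, coarse-grained holdings statement
    statement: str


@dataclass
class Bib:                  # the title; each level below refines it
    title: str
    mfhd: Optional[MFHD] = None
    distributions: List[Distribution] = field(default_factory=list)


def holdings_display(bib: Bib) -> str:
    """Return the finest-grained holdings statement available,
    falling back to coarser-grained data when finer data is absent."""
    for dist in bib.distributions:
        if dist.statement:              # controlled ("serialized") data wins
            return dist.statement
    if bib.mfhd:                        # legacy MFHD statement
        return bib.mfhd.statement
    return bib.title                    # bare bib: title only


# A bib with only an MFHD displays the legacy statement ...
b = Bib("Journal of Examples", mfhd=MFHD("v.1-v.40 (1970-2009)"))
print(holdings_display(b))              # -> v.1-v.40 (1970-2009)

# ... until controlled data exists, which then takes precedence,
# with no separate code path for the MFHD-only case.
b.distributions.append(Distribution(statement="v.1-v.41 (1970-2010)"))
print(holdings_display(b))              # -> v.1-v.41 (1970-2010)
```

The point of the single path: the MFHD-only case isn't a separate branch of the system, just the same lookup terminating earlier.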
(*) Side note: because we'll be writing the code to perform the one-time migration from MFHD to "controlled" data anyway, there's no reason you couldn't add the MFHD and then instruct the system to extract, for that one record, the prediction data for a new subscription. This would make the MFHD "authoritative" only for the purpose of the initial prediction-info extraction, and nothing else after that -- it would then immediately become "legacy" data. This, of course, can come later from a UI perspective, but IMO it can be done clearly, cleanly, and with a good performance profile as a stored procedure in the db.

--
Mike Rylander
 | VP, Research and Design
 | Equinox Software, Inc. / The Evergreen Experts
 | phone:  1-877-OPEN-ILS (673-6457)
 | email:  [email protected]
 | web:  http://www.esilibrary.com
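P.S. A rough sketch of the one-time extraction from the side note above, in Python for readability (a real implementation would, as said, live in the db as a stored procedure). The field layout follows MARC 853 caption/pattern conventions, but the function names and record shapes are hypothetical, not actual Evergreen code:

```python
# Hypothetical sketch of the one-time extraction; function names and
# record shapes are invented, not Evergreen's actual code or schema.

def extract_prediction(mfhd_fields: dict) -> dict:
    """Pull caption/pattern data (MFHD 853-style subfields) out of a
    legacy record to seed a new subscription's prediction pattern."""
    caption = mfhd_fields.get("853", {})
    return {
        # $a-$c: enumeration captions; $i-$j: chronology; $w: frequency
        "enum_levels": [caption[k] for k in ("a", "b", "c") if k in caption],
        "chron_levels": [caption[k] for k in ("i", "j") if k in caption],
        "frequency": caption.get("w"),      # e.g. 'm' = monthly
    }


def migrate_one(record: dict) -> dict:
    """One-time use: seed the subscription, then flag the MFHD as legacy
    so it is never consulted as a prediction source again."""
    subscription = {"prediction": extract_prediction(record["mfhd"])}
    record["mfhd"]["legacy"] = True         # authoritative no longer
    return subscription


record = {"mfhd": {"853": {"a": "v.", "b": "no.", "i": "(year)",
                           "j": "(month)", "w": "m"}}}
sub = migrate_one(record)
print(sub["prediction"]["frequency"])       # -> m
print(record["mfhd"]["legacy"])             # -> True
```

The flag-on-extract step is the whole trick: the MFHD is consulted exactly once, and everything downstream works from the controlled data.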
