Re: [CODE4LIB] Tool for feedback on document
Just wanted to thank everyone for this feedback! I'm leaning toward using digress.it. --Dave - David Walker Director, Systemwide Digital Library Services California State University 562-355-4845 -Original Message- From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of McCanna, Terran Sent: Wednesday, October 16, 2013 11:34 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] Tool for feedback on document I've used http://a.nnotate.com/ for this several times. You can leave comments in line with the text, respond to other comments, display/print the comments in different ways, and one of my favorite things is that the people you send the link to don't have to create an account. Terran McCanna PINES Program Manager Georgia Public Library Service 1800 Century Place, Suite 150 Atlanta, GA 30345 404-235-7138 tmcca...@georgialibraries.org - Original Message - From: Ken Varnum var...@umich.edu To: CODE4LIB@LISTSERV.ND.EDU Sent: Wednesday, October 16, 2013 2:23:51 PM Subject: Re: [CODE4LIB] Tool for feedback on document Commentpress and digress.it are two Wordpress variants that offer paragraph-by-paragraph threaded commenting. Commentpress is quite old (we used it here: http://www.lib.umich.edu/islamic/ in a collaborative cataloging project sponsored by CLIR and funded by Mellon). -- Ken Varnum | Web Systems Manager | MLibrary - University of Michigan - Ann Arbor var...@umich.edu | @varnum | http://www.lib.umich.edu/users/varnum | 734-615-3287 On Wed, Oct 16, 2013 at 2:12 PM, Michael J. Giarlo leftw...@alumni.rutgers.edu wrote: Hi David, Google Drive (née Docs) will allow you to share your document with other users so that they can view and comment (and not edit), FWIW. There may be more elegant solutions that allow, say, nested/threaded comments. I know there is blog software out there that does this, but it's been a few years so I forget what it's called. 
-Mike On Wed, Oct 16, 2013 at 11:06 AM, Walker, David dwal...@calstate.edu wrote: Hi all, We're looking to put together a large policy document, and would like to be able to solicit feedback on the text from librarians and staff across two dozen institutions. We could just do that via email, of course. But I thought it might be better to have something web-based. A wiki is not the best solution here, as I don't want those providing feedback to be able to change the text itself, but rather just leave comments. My fallback plan is to just use Wordpress, breaking the document up into various pages or posts, which people can then comment on. But it seems to me there must be a better solution here -- maybe one where people can leave comments in line with the text? Any suggestions? Thanks, --Dave - David Walker Director, Systemwide Digital Library Services California State University 562-355-4845
[CODE4LIB] Tool for feedback on document
Hi all, We're looking to put together a large policy document, and would like to be able to solicit feedback on the text from librarians and staff across two dozen institutions. We could just do that via email, of course. But I thought it might be better to have something web-based. A wiki is not the best solution here, as I don't want those providing feedback to be able to change the text itself, but rather just leave comments. My fallback plan is to just use Wordpress, breaking the document up into various pages or posts, which people can then comment on. But it seems to me there must be a better solution here -- maybe one where people can leave comments in line with the text? Any suggestions? Thanks, --Dave - David Walker Director, Systemwide Digital Library Services California State University 562-355-4845
Re: [CODE4LIB] PHP HTTP Client preference
We're also using Guzzle, and really like it. --Dave - David Walker Director, Systemwide Digital Library Services California State University 562-355-4845 -Original Message- From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Karen Coombs Sent: Tuesday, September 03, 2013 3:52 PM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] PHP HTTP Client preference Thanks so much for all the feedback, guys. Keep it coming. I'll definitely check out Guzzle as an option. Karen On Tue, Sep 3, 2013 at 4:26 PM, Hagedon, Mike haged...@u.library.arizona.edu wrote: Guzzle++ -Original Message- From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Kevin S. Clarke Sent: Tuesday, September 03, 2013 8:37 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] PHP HTTP Client preference Another +1 for Guzzle Kevin On Tue, Sep 3, 2013 at 11:32 AM, Kevin Reiss reiss.ke...@yahoo.com wrote: I can second Guzzle. We have been using it for our in-house PHP applications that require HTTP interactions for about six months and it has worked out very well. Guzzle has also been incorporated as the new default HTTP client in the next version of Drupal. From: Ross Singer rossfsin...@gmail.com To: CODE4LIB@LISTSERV.ND.EDU Sent: Tuesday, September 3, 2013 10:59 AM Subject: Re: [CODE4LIB] PHP HTTP Client preference Hey Karen, We use Guzzle: http://guzzlephp.org/ It's nice, seems to work well for our needs, is available in Packagist, and is the HTTP client library in the official AWS SDK libraries (which was a big endorsement, in our view). We're still in the process of moving all of our clients over to it (we built a homegrown HTTP client on top of CURL first), but have been really impressed with it so far. -Ross. On Sep 3, 2013, at 10:49 AM, Coombs,Karen coom...@oclc.org wrote: One project I'm working on for OCLC right now is building a set of object-oriented client libraries in PHP that will assist developers with interacting with our web services.
The first of these libraries we'd like to release provides classes for authentication and authorization to our web services. You can read more about Authentication/Authorization and our web services on the Developer Network site: http://oc.lc/devnet The purpose of this project is to make a simple and easy-to-use object-oriented library that supports our various authentication methods. This library needs to make HTTP requests, and I've looked at a number of potential libraries and HTTP clients in PHP. Why am I not just considering using CURL natively? The standard CURL functions in PHP are not object-oriented. All of our code libraries (both our authentication/authorization library and future libraries for interacting with the REST services themselves) need to perform a robust set of HTTP interactions. Using the standard CURL functions would very likely increase the size of the code libraries and the potential for errors and inconsistencies within the code base because of how much we use HTTP. Given this, I believe there are three possible options and would like to get the community's feedback on which option you would prefer. Option 1. - Write my own HTTP client on top of the standard PHP CURL implementation. This means people using the code library need only download it and not worry about any dependencies. However, that means adding extra code to our library which, although essential, isn't at the core of what we're trying to support. My fear is that my client will never be as good as an existing client. Option 2. - Use the HTTPful code library (http://phphttpclient.com/). This is a well-developed and supported code base which is designed specifically to support REST interactions. It is easy to install via Composer, Phar, or manually. It is slim and trim and only does the HTTP client functions. It does create a dependency on an external (but small) library. Option 3. - Use the Zend 2 HTTP client. This is a well-developed and supported code base.
The biggest downside is that Zend is a massive code library to require. A developer could choose to download only the specific set of classes that we are dependent on, but asking people to do this may prove confusing to some developers. I'd appreciate your feedback so we can provide the most useful set of libraries to the community. Karen Karen A. Coombs Senior Product Analyst WorldShare Platform coom...@oclc.org 614-764-4068 Skype: librarywebchic
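For reference, a minimal sketch of the Guzzle option several posters endorse, assuming Guzzle 3 (the current release at the time) installed via Composer; the endpoint and path below are placeholders, not anything from the thread:

```php
<?php
require 'vendor/autoload.php';

use Guzzle\Http\Client;

// Placeholder base URL -- substitute the web service you are calling.
$client = new Client('https://api.example.com');

// Build the request, send it, and read the response.
$response = $client->get('/resource?q=test')->send();

echo $response->getStatusCode();
echo $response->getBody(true); // response body as a string
```

The same round trip with raw curl_* calls takes a dozen lines of setup and error handling, which is the bulk Karen's Option 1 would have to absorb.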
Re: [CODE4LIB] discovery layers question
I would point out, too, that Encore (the example given) is not one of the major discovery systems. The 'Big Four' discovery services, as they are often called, are those developed by OCLC, Ex Libris, Serials Solutions, and Ebsco. Encore is really a different kind of system. It has no aggregated article index of its own. Rather, the system is designed to integrate your local catalog results with article results from an external service, either via Innovative's own federated search system or, more recently, from Ebsco Discovery. In that way, Encore is a lot more like VUFind, in fact, than the Big Four discovery services, in so far as you can integrate article results from a discovery service (currently Summon and Primo Central) into VUFind as well. To my mind, then, there's little reason to run both VUFind and Encore (specifically). Damien, I think, provided a good case for why you might want to run VUFind in conjunction with a true discovery service: that is, to have greater control over the local catalog results and the interface. Other institutions are doing similar things with Blacklight or their own systems. --Dave - David Walker Director, Systemwide Digital Library Services California State University 562-355-4845 -Original Message- From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Jonathan Rochkind Sent: Monday, May 06, 2013 10:47 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] discovery layers question I think you first need to be clear about what you would be trying to do by using a hosted discovery product simultaneously with VuFind. What would be the goals, why would you be doing this, what are you trying to accomplish? Would you be offering both Encore and a VuFind implementation as alternate services for your users to use? Or would they be combined somehow? How would you want to combine them? You need to be clear on this internally, on what you're trying to do, to have any hope of success. 
Being clear about that when you ask a question on the list will also elicit more useful answers; I'm not really entirely sure what you're asking. On 5/6/2013 1:39 PM, Donna Campbell wrote: Dear Colleagues, Is anyone using VuFind as well as one of the major webscale discovery layers (e.g., Encore)? If so, what complications do you encounter? Cordially, Donna R. Campbell Technical Services Systems Librarian (215) 935-3872 (phone) (267) 295-3641 (fax) Mailing Address (via USPS): Westminster Theological Seminary Library P.O. Box 27009 Philadelphia, PA 19118 USA Shipping Address (via UPS or FedEx): Westminster Theological Seminary Library 2960 W. Church Rd. Glenside, PA 19038 USA
Re: [CODE4LIB] Video from the Conference
Ditto. It was almost like being there. I even had a beer each night. --Dave - David Walker Director, Systemwide Digital Library Services California State University 562-355-4845 -Original Message- From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Peter Schlumpf Sent: Friday, February 15, 2013 3:20 PM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] Video from the Conference Thanks Francis and everyone else who made the conference available via streaming video for those of us who could not attend. It was great! Peter -Original Message- From: Francis Kayiwa kay...@uic.edu Sent: Feb 15, 2013 4:56 PM To: CODE4LIB@LISTSERV.ND.EDU Subject: [CODE4LIB] Video from the Conference In order to keep myself honest and not use up Tara Robertson's generosity. I will be uploading the files to my YouTube account as they become available. Since the Lightning Talks work better with the YouTube 15 minute limit cap they will go up first. http://www.youtube.com/watch?v=LRVYmdXJ8OQ Cheers, ./fxk -- Don't hate yourself in the morning -- sleep till noon.
Re: [CODE4LIB] U of Baltimore, Final Usability Report, link resolvers -- MIA?
I've always preferred search engine-based spell checkers over other approaches. I've not seen a library application using a different strategy (dictionary or corpus based) that does nearly as well. We've used the Yahoo [1] and Bing [2] spell check APIs for years now in our applications. They used to be free (Microsoft just ended that last month). But even now they are very reasonably priced (e.g., Yahoo charges $0.10 per 1,000 queries), and well worth it in my experience. The only drawback is that they will suggest corrections that can result in zero hits in your application, especially if you are using it for a small collection like a local catalog. You can mitigate that by doing a quick pre-check for hits before showing the suggestion to users. --Dave [1] http://developer.yahoo.com/search/boss/ [2] http://www.bing.com/developers/ - David Walker Interim Director, Systemwide Digital Library Services California State University 562-355-4845 -Original Message- From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Jonathan Rochkind Sent: Thursday, September 06, 2012 6:45 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] U of Baltimore, Final Usability Report, link resolvers -- MIA? Solr has a feature to make spelling suggestions based on the actual terms in the corpus... but it's hardly a panacea. A straightforward naive implementation of the Solr feature, on top of a large library catalog corpus, in many of our experiences still gives odd and unuseful suggestions (including sometimes suggesting typos from the corpus, or taking an already 'correct' word and suggesting an entirely different but lexicographically similar word as a 'correction'). And then there's figuring out the right UI (and managing to make it work on top of the Solr feature) for multi-term queries where each independent part may or may not have a 'correction'. Turns out spell suggestion is kind of hard.
And it's kind of amazing that google does it so well (and they use some fairly complex techniques to do so, I think, based on a whole bunch of data and metadata they have including past searches and clickthroughs, not just the corpus). From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] on behalf of Ross Singer [rossfsin...@gmail.com] Sent: Thursday, September 06, 2012 9:37 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] U of Baltimore, Final Usability Report, link resolvers -- MIA? On Thu, Sep 6, 2012 at 9:06 AM, Cindy Harper char...@colgate.edu wrote: I was going to comment that some of the Encore shortcomings mentioned in the PDf do seem to be addressed in current Encore versions, although some of these issues have to be addressed - for instance, there is a spell-check, but it can give some surprising suggestions, though suggestions do clue the user in to the fact that they might have a misspelling/typo. I wrote about the woeful state of spelling suggestions a couple of years ago (among a lot of other things): http://www.inthelibrarywiththeleadpipe.org/2009/were-gonna-geek-this-mother-out/ (you can skip on down to the In the Absence of Suggestion, There is Always Search... - it's pretty TL;DR-worthy) Basically, the crux of it is, as long as spelling suggestions are based on standard dictionaries and not built /on the actual terms and phrases in the collection/ it's going to basically be a worthless feature. I do note there, though, that BiblioCommons apparently must build their dictionaries on the metadata in the system. -Ross. III's reaction to studies that report that users ignore the right-side panel of search options was to provide a skin that has only two columns - the facets on the left, and the search results on the middle-to-right. This pushes important facets like the tag cloud very far down the page, and causes a lot of scrolling, so I don't like this skin much. 
I recently asked a question on the encore users' list about how the tag cloud could be improved - currently it suggests the most common subfield a of the subject headings. I would think it should include the general, chronological, geographical subdivisions - subfields x,y,z. For instance, it doesn't provide good suggestions for improving the search civil war without these. A chronological subdivision would help a lot there. But then again, I haven't seen a prototype of how many relevant subdivisions this would result in - would the subdivisions drown out the main headings in the tag cloud? Cindy Harper, Systems Librarian Colgate University Libraries char...@colgate.edu 315-228-7363 On Wed, Sep 5, 2012 at 5:30 PM, Jonathan LeBreton lebre...@temple.eduwrote: Lucy Holman, Director of the U Baltimore Library, and a former colleague of mine at UMBC, got back to me about this. Her reply puts this particular document into context. It
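Dave's pre-check tip earlier in this thread -- only show a suggestion after confirming it returns hits -- fits in a few lines of PHP. Both callables here are hypothetical stand-ins: one wraps whatever spell-check API you call, the other runs a quick count query against your own index:

```php
<?php
// Return a suggestion only when it is worth showing the user.
// $fetchSuggestion and $countHits are stand-ins for your spell-check
// API call and a quick query against your local index, respectively.
function suggestionToShow($query, $fetchSuggestion, $countHits)
{
    $suggestion = $fetchSuggestion($query);

    // Suppress suggestions that are missing or identical to the query.
    if ($suggestion === null || $suggestion === $query) {
        return null;
    }

    // Suppress suggestions that would lead to a zero-hit results page.
    return ($countHits($suggestion) > 0) ? $suggestion : null;
}
```

The extra count query costs one round trip per search with a suggestion, which is usually cheap next to the search itself.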
Re: [CODE4LIB] Leader in MarcXML Files ( Record Length )
Whatever you do, downstream applications are probably just going to ignore that information anyway. I've never bothered to look at the leader length when parsing MARC-XML, anyway. I would just make it zeros. --Dave - David Walker Interim Director, Systemwide Digital Library Services California State University 562-355-4845 -Original Message- From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Sullivan, Mark V Sent: Friday, June 29, 2012 6:52 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: [CODE4LIB] Leader in MarcXML Files ( Record Length ) All, I received a question regarding a software library I have created and released as open source. The record length in the leader ( positions 0-4 ) was not being calculated correctly when writing as MarcXML. However, this raises a more philosophical and larger question. What is the point of the first five digits of the leader, outside of an ISO2709 / MARC21 encoded record? Should I calculate the record length AS IF it would be encoded in ISO2709? This would be computationally non-trivial and would likely double the time necessary for my software to write a MarcXML file. Should I just make the first five digits of the leader '0', since it means nothing in the context of a MarcXML file? Has anyone else pondered this question or have any input on how current systems work? Keep in mind I could be writing a MarcXML record for a record created or modified in memory, so just using a pre-existing record length is not an option. Many thanks for your consideration. Mark V Sullivan Digital Development and Web Coordinator Technology and Support Services University of Florida Libraries 352-273-2907 (office) 352-682-9692 (mobile) mars...@uflib.ufl.edu
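The zero-out option Mark floats (and Dave endorses) is trivial to implement; a sketch, where the 24-character leader string is just an illustrative example:

```php
<?php
// Overwrite leader positions 0-4 (the ISO 2709 record length) with
// zeros, since the value is meaningless in a MARCXML serialization.
function zeroLeaderLength($leader)
{
    return substr_replace($leader, '00000', 0, 5);
}

echo zeroLeaderLength('01234cam a2200301 a 4500');
// -> 00000cam a2200301 a 4500
```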
Re: [CODE4LIB] Best way to process large XML files
Since you mentioned SimpleXML, Kyle, I assume you're using PHP? If so, you might look at XMLReader [1], which is a pull parser, and should give you better performance on large files than SimpleXML . It is still based on libxml, though, so if that is still not fast enough for you, you can toss out my suggestion. :-) --Dave [1] http://php.net/manual/en/book.xmlreader.php - David Walker Interim Director, Systemwide Digital Library Services California State University 562-355-4845 -Original Message- From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Kyle Banerjee Sent: Friday, June 08, 2012 11:36 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: [CODE4LIB] Best way to process large XML files I'm working on a script that needs to be able to crosswalk at least a couple hundred XML files regularly, some of which are quite large. I've thought of a number of ways to go about this, but I wanted to bounce this off the list since I'm sure people here deal with this problem all the time. My goal is to make something that's easy to read/maintain without pegging the CPU and consuming too much memory. The performance and load I'm seeing from running the files through LibXML and SimpleXML on the large files is completely unacceptable. SAX is not out of the question, but I'm trying to avoid it if possible to keep the code more compact and easier to read. I'm tempted to streamedit out all line breaks since they occur in unpredictable places and put new ones at the end of each record into a temp file. Then I can read the temp file one line at a time and process using SimpleXML. That way, there's no need to load giant files into memory, create huge arrays, etc and the code would be easy enough for a 6th grader to follow. My proposed method doesn't sound very efficient to me, but it should consume predictable resources which don't increase with file size. How do you guys deal with large XML files? 
Thanks, kyle <rant>Why the heck does the XML spec require a root element, particularly since large files usually consist of a large number of records/documents? This makes it absolutely impossible to process a file of any size without resorting to SAX or string parsing -- which takes away many of the advantages you'd normally have with an XML structure.</rant> -- -- Kyle Banerjee Digital Services Program Manager Orbis Cascade Alliance baner...@uoregon.edu baner...@orbiscascade.org / 503.999.9787
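A sketch of the XMLReader approach Dave suggests, assuming the file wraps repeated record elements in a single root; the element name, file path, and function name here are placeholders, not anything from Kyle's actual crosswalk:

```php
<?php
// Stream through a large XML file one record at a time, handing each
// record to SimpleXML. Memory use stays flat regardless of file size.
function processRecords($file, $elementName, $handler)
{
    $reader = new XMLReader();
    $reader->open($file);

    // Advance to the first matching element.
    while ($reader->read() && $reader->name !== $elementName);

    while ($reader->name === $elementName) {
        // Only this one record is materialized in memory.
        $handler(simplexml_load_string($reader->readOuterXML()));

        // Jump to the next sibling record, skipping this subtree.
        $reader->next($elementName);
    }
    $reader->close();
}
```

Each record arrives as a SimpleXMLElement, so the per-record crosswalk code stays as readable as a whole-file SimpleXML approach, without the line-break gymnastics of the temp-file idea.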
Re: [CODE4LIB] Proquest dissertation XML?
We use this to transform the PQ XML into the format that DSpace uses for batch loading -- the elements here are qualified Dublin Core, but the format is unique to DSpace. http://library.calstate.edu/media/txt/diss-to-dc.xsl --Dave - David Walker Interim Director, Systemwide Digital Library Services California State University 562-355-4845 -Original Message- From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Reese, Terry Sent: Thursday, May 10, 2012 8:19 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] Proquest dissertation XML? I actually wrote a simple one for someone else and include it in MarcEdit, or, for download to MarcEdit from the xslt registry the program uses (wish I would have been paying attention realizing someone else did this work) -- but I've attached. This is fairly simplistic, but does the dissertation xml to marcxml. --tr -Original Message- From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Nick Ruest Sent: Thursday, May 10, 2012 8:14 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] Proquest dissertation XML? Hi Michele, This might be a helpful start: http://journal.code4lib.org/articles/1647 -nruest On 12-05-10 11:11 AM, Michele R Combs wrote: Hi all -- Has anyone written an XSL style sheet (or other script) to transform ProQuest's dissertation metadata XML into (a) Dublin Core or (b) MARCXML? Thanks Michele +++ Michele Combs Lead Archivist Special Collections Research Center Syracuse University 315-443-2081 mrrot...@syr.edu scrc.syr.edu library-blog.syr.edu/scrc
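Applying any of the stylesheets linked above takes only a few lines with PHP's XSLTProcessor (requires the xsl extension). The inline stylesheet here is a toy stand-in so the example is self-contained; in practice you would load() the ProQuest XML and the .xsl file from disk:

```php
<?php
$xml = new DOMDocument();
$xml->loadXML('<title>Sample Dissertation</title>');

// Toy stylesheet: wraps the title in a <dc> element. Replace with
// $xsl->load('diss-to-dc.xsl') for a real transformation.
$xsl = new DOMDocument();
$xsl->loadXML(
    '<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">'
    . '<xsl:template match="/title"><dc><xsl:value-of select="."/></dc></xsl:template>'
    . '</xsl:stylesheet>'
);

$proc = new XSLTProcessor();
$proc->importStylesheet($xsl);
$result = $proc->transformToXML($xml);

echo $result; // contains <dc>Sample Dissertation</dc>
```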
Re: [CODE4LIB] PHP SUSHI client
Hi Joshua, What do you see if you do: var_dump($client->__getFunctions()); That should show you the available methods and their parameters. --Dave - David Walker Library Web Services Manager California State University -Original Message- From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Joshua Welker Sent: Tuesday, February 28, 2012 6:00 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: [CODE4LIB] PHP SUSHI client Hi everyone, first post to this listserv. I just started working on a SUSHI harvester client as an addition to a data management program I've built primarily with PHP/MySQL. There are a few projects I've seen listed by the NISO SUSHI organization for doing this, but they are all in Java or .NET. I am not very familiar with those languages and want my app to stay in the realm of PHP for continuity's sake, so the roll-your-own approach is the only way to go. I am having trouble understanding how WSDL and SOAP object methods interact with my code. I know from reading the SUSHI protocol standard that there is a method called GetReport that is used to retrieve the report. What I am having a hard time understanding is how I can pass data into the GetReport method. For example, I have the following code in a class: $client = new SoapClient($this->sushiURL); $this->response = $client->GetReport(???); Where the ??? is, I need to be passing in the basic parameters of a SUSHI request: customer ID, requestor ID, date range, etc. As an alternative, I guess I could do it longhand and pass in an entire XML file, but I'd like to learn the standard way using the WSDL method as specified in the protocol definition. This is somewhat unrelated, but is it possible to limit the COUNTER data returned to a particular database? Most vendors such as EBSCO allow you to limit COUNTER reports to a particular database in their admin modules, but I don't see any way in the SUSHI standard to specify one.
If anyone with experience rolling a SUSHI client could give me some pointers here, I'd greatly appreciate it. Josh Welker Electronic/Media Services Librarian College Liaison University Libraries Southwest Baptist University 417.328.1624
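To make Dave's __getFunctions() advice concrete, here is a sketch of a GetReport call. The element names follow the SUSHI schema as best I can tell, but the authoritative parameter shapes are whatever the vendor's WSDL declares, so inspect that first; the URL and IDs are placeholders:

```php
<?php
$sushiUrl = 'https://example.com/sushi?wsdl'; // placeholder WSDL URL

$client = new SoapClient($sushiUrl, array('trace' => true));

// See what operations and types the WSDL actually defines.
var_dump($client->__getFunctions());
var_dump($client->__getTypes());

// PHP's SoapClient takes an operation's arguments as one nested array,
// keyed by the element names from the WSDL. The structure below is
// illustrative only -- adjust it to match the __getTypes() output.
$response = $client->GetReport(array(
    'Requestor'         => array('ID' => 'your-requestor-id'),
    'CustomerReference' => array('ID' => 'your-customer-id'),
    'ReportDefinition'  => array(
        'Name'    => 'JR1',
        'Release' => '3',
        'Filters' => array(
            'UsageDateRange' => array(
                'Begin' => '2012-01-01',
                'End'   => '2012-01-31',
            ),
        ),
    ),
));
```

With trace enabled, $client->__getLastRequest() shows the exact XML sent, which is invaluable when a vendor rejects a structurally off request.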
Re: [CODE4LIB] Experience with codeIgniter?
Are your 'folks' looking for a content management system, Karen? As Mark just mentioned, CodeIgniter is a web application development framework -- that is, a set of reusable programming code that makes it easier for programmers to build applications for the web. The key terms there being programmers and build. That is a very different kind of thing from Drupal or WordPress, which are systems (that have already been built) to manage content for a website. You don't have to be a programmer to use either of those. --Dave - David Walker Library Web Services Manager California State University -Original Message- From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Mark Jordan Sent: Wednesday, December 14, 2011 6:08 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] Experience with codeIgniter? Karen, I used CI for a project last summer, and thought it was easy to learn if you had done some PHP programming before and were familiar with MVC architecture, well documented, and had a fairly rich feature set. However, my impression is that it had a very small plugin/module ecosystem compared to Drupal or Wordpress. Before recommending it, you should review the categories under 'Contributions' at http://codeigniter.com/wiki to see if you can identify any glaring holes. But, overall, I'd say it's a pretty good PHP MVC framework (not that I've compared a lot of them). Mark Mark Jordan Head of Library Systems W.A.C. Bennett Library, Simon Fraser University Burnaby, British Columbia, V5A 1S6, Canada Voice: 778.782.5753 / Fax: 778.782.3023 / Skype: mark.jordan50 mjor...@sfu.ca - Original Message - I'm helping some folks find a new platform for their web site, and someone has suggested codeIgniter as being simpler than Drupal or Wordpress. Anyone here have anything to say about it, good or bad? The site is small and light weight but it does have a database that needs to be managed. 
Thanks, kc -- Karen Coyle kco...@kcoyle.net http://kcoyle.net ph: 1-510-540-7596 m: 1-510-435-8234 skype: kcoylenet
Re: [CODE4LIB] Experience with codeIgniter?
It seems to me that WordPress would be good for the simple and lightweight part of their website. It would allow them to easily create, delete, and update pages for the site. Plus, if they have press releases or other types of newsy content, WordPress is second to none for blogging. But you could say much the same for any CMS, really. The real trick here, it seems, is what to do with this database they have. In order to manage that, you really do need to create some kind of specialized application. Drupal, and some of the other CMSs, have tools for creating those kinds of applications. But, depending on what this database actually consists of, in some respects it can be easier just to build something from scratch. And if the consultant is going to do that for this organization using CodeIgniter (or any other programming framework), then that certainly makes sense. --Dave - David Walker Library Web Services Manager California State University -Original Message- From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Karen Coyle Sent: Wednesday, December 14, 2011 6:54 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] Experience with codeIgniter? Thanks, Dave and Mark -- this is exactly what I needed to hear. The folks are one of those extremely poor non-profits with almost no staff and zero technical skills. A consulting company is pushing them in this direction, saying that Drupal is buggy and WordPress is ... well, I don't know. Dang! I hate being in the middle of this. I still think they'd be better off going with one of the known CMS packages. kc Quoting Walker, David dwal...@calstate.edu: Are your 'folks' looking for a content management system, Karen? As Mark just mentioned, CodeIgniter is a web application development framework -- that is, a set of reusable programming code that makes it easier for programmers to build applications for the web. The key terms there being programmers and build.
That is a very different kind of thing from Drupal or WordPress, which are systems (that have already been built) to manage content for a website. You don't have to be a programmer to use either of those. --Dave - David Walker Library Web Services Manager California State University -Original Message- From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Mark Jordan Sent: Wednesday, December 14, 2011 6:08 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] Experience with codeIgniter? Karen, I used CI for a project last summer, and thought it was easy to learn if you had done some PHP programming before and were familiar with MVC architecture, well documented, and had a fairly rich feature set. However, my impression is that it had a very small plugin/module ecosystem compared to Drupal or Wordpress. Before recommending it, you should review the categories under 'Contributions' at http://codeigniter.com/wiki to see if you can identify any glaring holes. But, overall, I'd say it's a pretty good PHP MVC framework (not that I've compared a lot of them). Mark Mark Jordan Head of Library Systems W.A.C. Bennett Library, Simon Fraser University Burnaby, British Columbia, V5A 1S6, Canada Voice: 778.782.5753 / Fax: 778.782.3023 / Skype: mark.jordan50 mjor...@sfu.ca - Original Message - I'm helping some folks find a new platform for their web site, and someone has suggested codeIgniter as being simpler than Drupal or Wordpress. Anyone here have anything to say about it, good or bad? The site is small and light weight but it does have a database that needs to be managed. Thanks, kc -- Karen Coyle kco...@kcoyle.net http://kcoyle.net ph: 1-510-540-7596 m: 1-510-435-8234 skype: kcoylenet -- Karen Coyle kco...@kcoyle.net http://kcoyle.net ph: 1-510-540-7596 m: 1-510-435-8234 skype: kcoylenet
Re: [CODE4LIB] jQuery Ajax request to update a PHP variable
"I couldn't get json_encode() going on the server at work." This usually means your server is running an older version of PHP. If its OS is RHEL 5, then you've likely got PHP 5.1.6 installed. http://php.net/manual/en/function.json-encode.php json_encode (PHP 5 >= 5.2.0) --Dave - David Walker Library Web Services Manager California State University -Original Message- From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Nate Hill Sent: Tuesday, December 06, 2011 8:18 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] jQuery Ajax request to update a PHP variable I attached the app as it stands now. There's something wrong w/ the regex matching in catscrape.php so only some of the images are coming through. A bit more background info: Someone said 'it's not that much data'. Indeed it isn't, but that is because I intentionally gave myself an extremely simple data set to build/test with. I'd anticipate more complex data sets in the future. The .csv files are not generated automatically; we have a product called CollectionHQ that produces reports based on monthly data dumps from our ILS. I was planning to create a folder that the people who run these reports can simply save the csv files to, and then the web app would just work without them having to think about it. A bit of a side note, but I actually was taking the JSON approach briefly and it was working on my MAMP, but for some reason I couldn't get json_encode() going on the server at work. I fiddled around w/ the .ini file a little while thinking I might need to do something there, got bored, and decided to take a different approach. Also: should I be sweating the fact that basically every time someone mouses over one of these boxes they are hitting our library catalog with a query? It struck me that this might be unwise. But I don't know either way. Thanks all. Do with this what you will, even if that is nothing. Just following the conversation has been enlightening.
Nate On Tue, Dec 6, 2011 at 7:27 AM, Erik Hatcher erikhatc...@mac.com wrote: Again, with jrock... I was replying to the general "Ajax requests returning HTML is outdated" theme, not to Nate's actual application. Certainly returning objects as code or data to a component (like, say, SIMILE Timeline) is a reasonable use of data coming back from Ajax requests, and covered in my "it depends" response :) A defender of the old? Only in as much as the old is simpler, cleaner, and leaner than all the new wheels being invented. I'm pragmatic, not dogmatic. Erik On Dec 6, 2011, at 09:34, Godmar Back wrote: On Tue, Dec 6, 2011 at 8:38 AM, Erik Hatcher erikhatc...@mac.com wrote: I'm with jrock on this one. But maybe I'm a luddite that didn't get the memo either (but I am credited for being one of the instrumental folks in the Ajax world, heh - in one or more of the Ajax books out there, us old timers called it "remote scripting"). On the in-jest rhetorical front, I'm wondering if referring to oneself as oldtimer helps in defending against insinuations that opposing technological change makes one a defender of the old ;-) But: What I hate hate hate about seeing JSON being returned from a server for the browser to generate the view is stuff like: string = "<div>" + some_data_from_JSON + "</div>"; That embodies everything that is wrong about Ajax + JSON. That's exactly why you use new libraries such as knockout.js, to avoid just that. Client-side template engines with automatic data-bindings. Alternatively, AJAX frameworks use JSON and then interpret the returned objects as code. Take a look at the client/server traffic produced by ZK, for instance. "As Jonathan said, the server is already generating dynamic HTML... why have it return" It isn't. There is no "already generating anything" server; it's a new app Nate is writing. (Unless you count his work of the past two days). The dynamic HTML he's generating is heavily tailored to his JS.
There's extremely tight coupling, which now exists across multiple files written in multiple languages. Simply avoidable bad software engineering. That's not even making the computational cost argument that avoiding template processing on the server is cheaper. And with respect to Jonathan's argument of degradation, a degraded version of his app (presumably) would use <table> - or something like that; it'd look nothing like what he showed us yesterday. Heh - the proof of the pudding is in the eating. Why don't we create 2 versions of Nate's app, one with mixed server/client - like the one he's completing now - and I create the client-side based one, and then we compare side by side? I'll work with Nate on that. - Godmar [ I hope it's ok to snip off the rest of the email trail in my reply. ] -- Nate Hill
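The string-building anti-pattern Godmar quotes can be avoided even without a full framework like knockout.js. A minimal plain-JavaScript sketch (the function names are mine, not from the thread) that keeps the markup in one place and escapes JSON values before they reach the DOM:

```javascript
// Instead of concatenating raw JSON values into markup
// ("<div>" + data + "</div>"), escape each value and keep the
// template in a single function. Names here are hypothetical.
function escapeHtml(s) {
  return String(s).replace(/[&<>"']/g, function (c) {
    return { "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;", "'": "&#39;" }[c];
  });
}

function renderTitle(record) {
  // record is the parsed JSON object returned by the Ajax call
  return '<div class="title">' + escapeHtml(record.title) + "</div>";
}
```

In an app like Nate's, the Ajax success callback would pass the parsed record to renderTitle() rather than splicing raw strings inline, so escaping and markup live in one place.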
Re: [CODE4LIB] jQuery Ajax request to update a PHP variable
And I want to update 'Drawing' to be 'Cooking' w/ a jQuery hover effect on the client side then I need to make an Ajax request, correct? What you probably want to do here, Nate, is simply output the PHP variable in your HTML response, like this: <h1 id="foo"><?php echo $searchterm ?></h1> And then in your JavaScript code, you can manipulate the text through the DOM like this: $('#foo').html('Cooking'); --Dave - David Walker Library Web Services Manager California State University -Original Message- From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Nate Hill Sent: Monday, December 05, 2011 2:09 PM To: CODE4LIB@LISTSERV.ND.EDU Subject: [CODE4LIB] jQuery Ajax request to update a PHP variable If I have in my PHP script a variable... $searchterm = 'Drawing'; And I want to update 'Drawing' to be 'Cooking' w/ a jQuery hover effect on the client side then I need to make an Ajax request, correct? What I can't figure out is what that is supposed to look like... something like... $.ajax({ type: "POST", url: "myfile.php", data: ...not sure how to write what goes here to make it 'Cooking'... }); Any ideas? -- Nate Hill nathanielh...@gmail.com http://www.natehill.net
Re: [CODE4LIB] jQuery Ajax request to update a PHP variable
I gotcha. More information is, indeed, better. ;-) So, on the PHP side, you just need to grab the term from the query string, like this: $searchterm = $_GET['query']; And then in your JavaScript code, you'll send an AJAX request, like: http://www.natehill.net/vizstuff/catscrape.php?query=Cooking Is that what you're looking for? --Dave - David Walker Library Web Services Manager California State University -Original Message- From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Nate Hill Sent: Monday, December 05, 2011 3:00 PM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] jQuery Ajax request to update a PHP variable As always, I provided too little information. Dave, it's much more involved than that. I'm trying to make a kind of visual browser of popular materials from one of our branches from a .csv file. In order to display book covers for a series of searches by keyword, I query the catalog, scrape out only the syndetics images, and then display 4 of them. The problem is that I've hardcoded in a search for 'Drawing', rather than dynamically pulling the correct term and putting it into the catalog query. Here's the work in progress, and I believe it will only work in Chrome right now. http://www.natehill.net/vizstuff/donerightclasses.php I may have a solution; Jason's idea got me part way there. I looked all over the place for that little snippet he sent over! Thanks! On Mon, Dec 5, 2011 at 2:44 PM, Walker, David dwal...@calstate.edu wrote: And I want to update 'Drawing' to be 'Cooking' w/ a jQuery hover effect on the client side then I need to make an Ajax request, correct?
What you probably want to do here, Nate, is simply output the PHP variable in your HTML response, like this: <h1 id="foo"><?php echo $searchterm ?></h1> And then in your JavaScript code, you can manipulate the text through the DOM like this: $('#foo').html('Cooking'); --Dave - David Walker Library Web Services Manager California State University -Original Message- From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Nate Hill Sent: Monday, December 05, 2011 2:09 PM To: CODE4LIB@LISTSERV.ND.EDU Subject: [CODE4LIB] jQuery Ajax request to update a PHP variable If I have in my PHP script a variable... $searchterm = 'Drawing'; And I want to update 'Drawing' to be 'Cooking' w/ a jQuery hover effect on the client side then I need to make an Ajax request, correct? What I can't figure out is what that is supposed to look like... something like... $.ajax({ type: "POST", url: "myfile.php", data: ...not sure how to write what goes here to make it 'Cooking'... }); Any ideas? -- Nate Hill nathanielh...@gmail.com http://www.natehill.net -- Nate Hill nathanielh...@gmail.com http://www.natehill.net
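One detail worth making explicit in Walker's suggestion: the search term should be URL-encoded before it is appended to the query string, or terms with spaces or punctuation will break the request. A sketch (catscrape.php and the query parameter are from the thread; the helper function is hypothetical):

```javascript
// Build the Ajax URL for a given search term, encoding it so terms
// with spaces or punctuation survive the trip. The PHP side reads it
// back with $_GET['query'], as in the thread.
function buildQueryUrl(base, term) {
  return base + "?query=" + encodeURIComponent(term);
}

// With jQuery, the request itself would look something like:
// $.get(buildQueryUrl("catscrape.php", "Cooking"), function (html) {
//   $("#results").html(html);
// });
```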
Re: [CODE4LIB] marc-8
I know yaz-marcdump changes the encoding bit in MARC leaders. Does it also convert MARC-8 characters to UTF-8? Yes. We use it for that purpose all the time. --Dave - David Walker Library Web Services Manager California State University -Original Message- From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Eric Lease Morgan Sent: Monday, October 24, 2011 11:39 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] marc-8 On Oct 24, 2011, at 2:34 PM, Doran, Michael D wrote: In Perl, how do I specify MARC-8 when reading (decoding) and writing (encoding) data? You can't. MARC-8 is a character set that is unknown to the operating system. Your best bet is to convert MARC-8-encoded records into UTF-8. /me throws his hands up in the air and screams! Okay. How do I go about converting MARC-8 encoded records into UTF-8? I know yaz-marcdump changes the encoding bit in MARC leaders. Does it also convert MARC-8 characters to UTF-8? (I guess I could simply try it and see what happens.) -- Eric Morgan
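For the archives, the conversion being discussed is a one-liner. A sketch, assuming a file of binary MARC-8 records (the -l 9=97 option rewrites leader byte 9 to 'a', i.e. 0x61, so the output records declare themselves as Unicode):

```sh
# Convert MARC-8 records to UTF-8 and fix the leader's encoding byte.
yaz-marcdump -f marc8 -t utf8 -o marc -l 9=97 records-marc8.mrc > records-utf8.mrc
```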
Re: [CODE4LIB] Examples of Web Service APIs in Academic & Public Libraries
We use a number of web services provided by our (often vendor-supplied) library systems. Those include: Metalib, SFX, bX, and Voyager. We've also worked with Ebsco, Summon, Primo/Primo Central, and Worldcat APIs. --Dave - David Walker Library Web Services Manager California State University -Original Message- From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Michel, Jason Paul Sent: Saturday, October 08, 2011 10:34 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: [CODE4LIB] Examples of Web Service APIs in Academic & Public Libraries Hello all, I'm a lurker on this listserv and am interested in gaining some insight into your experiences of utilizing web service APIs in either an academic library or public library setting. I'm writing a book for ALA Editions on the use of Web Service APIs in libraries. Each chapter covers a specific API by delineating the technicalities of the API, discussing potential uses of the API in library settings, and step-by-step tutorials. I'm already including examples of how my library (Miami University in Oxford, Ohio) is utilizing these APIs but would like to give the reader more examples from a variety of settings. APIs covered in the book: Flickr, Vimeo, Google Charts, Twitter, Open Library, LibraryThing, Goodreads, OCLC. So, what are you folks doing with APIs? Thanks for any insight! Kind regards, Jason -- Jason Paul Michel User Experience Librarian Miami University Libraries Oxford, Ohio 45044 twitter: jpmichel
Re: [CODE4LIB] Code4Lib 2012 Seattle Update
I doubt anyone is particularly wedded to the particularities of the current theme. In fact, some of us dislike it entirely. ;-) --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Jonathan Rochkind [rochk...@jhu.edu] Sent: Wednesday, June 15, 2011 1:41 PM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] Code4Lib 2012 Seattle Update I doubt anyone is particularly wedded to the particularities of the current theme. It probably doesn't matter, as long as you can put the code4lib logo at the top with a banner-menu, if the theme changes, even significantly. As long as it has pretty much the same functionality exposed that it has now (and even that probably isn't that carefully thought out). On 6/15/2011 4:23 PM, Cary Gordon wrote: The theme looks like a minor hack of the Chameleon theme, so it should not be difficult to reproduce. On Wed, Jun 15, 2011 at 12:46 PM, Wick, Ryanryan.w...@oregonstate.edu wrote: Thanks for offering to help. I agree about the need to upgrade, and this is a pretty quiet time to do so. I'm guessing the theme will need to be done from scratch. It was already cobbled together. I'll try and send you some more information later today. If anyone else really wants in on this, let me know. Ryan Wick -Original Message- From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Cary Gordon Sent: Wednesday, June 15, 2011 12:31 PM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] Code4Lib 2012 Seattle Update That's me! It is probably a good time to move this to a newer version, perhaps Drupal 7, if for no other reason than security. The only downside is that the theme would either need to be recreated or change. No biggy, really. If someone wants to send me the code and a DB dump, I will do it in my less-than-ample spare time. 
Cary On Wed, Jun 15, 2011 at 12:21 PM, Rob Casson rob.cas...@gmail.com wrote: i've got admin rights on the code4lib drupal, so i went ahead and set the alias: http://code4lib.org/code4lib_2012_sponsorship cary: i'll look into getting you the correct privileges. you're highermath, correct? cheers, rob On Wed, Jun 15, 2011 at 3:15 PM, Cary Gordon listu...@chillco.com wrote: In a modern version of Drupal, you can set a path alias for any page. Unfortunately, C4L does not appear to be in a modern version of Drupal. It looks like 4.7 or earlier. I would be happy to volunteer to help manage it. Cary On Wed, Jun 15, 2011 at 11:13 AM, Anjanette Young youn...@u.washington.edu wrote: Hey Susan, Sweet! Language. Information. Social niceties. Here is the link to the 2012 sponsor page. http://code4lib.org/node/417 (Anyone know how to make that a nicer url on drupal?) There seems to be discussion on expanding options for sponsorship, but the options on the page are standard. Thank you for the words. Hope that it turns out that you are able to travel to Seattle for the conference. --Anj On Wed, Jun 15, 2011 at 9:51 AM, Susan Kane adarconsult...@gmail.com wrote: Hi Anj, Nice to see your name again after meeting briefly at UW when you were coming and I was leaving for Boston! I doubt I'll be able to attend the conference this year but I've put the word out to the group of Ex Libris and Endeavor alumni that I manage on LinkedIn. Many people now work for other library technology companies. Will let you know if anything useful comes back. Here's a copy of my promotional message, in case others on the list want to try their own networks. It might help our cause if someone could add a link about sponsorships to the conference section of the website. --- promotional blurb --- c4l -- code4lib is a unique conference that attracts a small but influential group of library technologists each year. Next year's conference is Feb 6-9, 2012 in Seattle, WA.
They are still seeking vendor sponsorships -- great visibility with influential folks for a fraction of the cost of ALA! If you can help, please contact me privately through your preferred contact method here. http://code4lib.org/conference --- promotional blurb --- Susan Kane Harvard University OIS -- Anjanette Young | Systems Librarian University of Washington Libraries Box 352900 | Seattle, WA 98195 Phone: 206.616.2867 -- Cary Gordon The Cherry Hill Company http://chillco.com -- Cary Gordon The Cherry Hill Company http://chillco.com
Re: [CODE4LIB] Google Book Search and Millennium
IUG has an area on their website called the Clearinghouse, which has a number of scripts and other things. It's behind a login, unfortunately, although any IUG member can get access. --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Patrick Berry [pbe...@gmail.com] Sent: Tuesday, April 26, 2011 7:18 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] Google Book Search and Millennium I think collecting and documenting these hacks would be a fabulous idea. I know I got a lot of help from a message sent to the IUG by one of our librarians. They may be way ahead of us (or not) but it will be a good place to check. On Mon, Apr 25, 2011 at 8:36 PM, Gabriel Farrell gsf...@gmail.com wrote: Nice work, Patrick. You reminded me I never mentioned on this list the III Refworks Export script I put up on GitHub (see the code for props to those who did most of the work). It's at https://github.com/gsf/refworksexport. Maybe we should start collecting these under an iiihacks GitHub org. On Mon, Apr 25, 2011 at 5:48 PM, Patrick Berry pbe...@gmail.com wrote: Hi, We're working on integrating links to Google Books from Millennium. I'm not a fan of rewriting things from scratch, so I've borrowed heavily from those that already have this working. Props to the gbsclasses.js folks, MSU, and Temple. One thing I noticed is that IE 9 (perhaps earlier versions as well) do not work with the code in use at MSU and Temple on the bib_display.html templates. I've done some clean-up on a static example: http://www.csuchico.edu/~pberry/google-books/ Questions? Comments? DMCA notices? Pat in Chico
Re: [CODE4LIB] Google Book Search and Millennium
I would prefer a more open place to collect these things, too -- Github sounds great. I've got my own III hack [1], and I've kept it out of the IUG clearinghouse on purpose. --Dave [1] http://code.google.com/p/shrew/ == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Gabriel Farrell [gsf...@gmail.com] Sent: Tuesday, April 26, 2011 10:18 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] Google Book Search and Millennium For some reason I assumed Github would be a better spot for code sharing than the IUG website, but I'm happy with any accessible place to collect these. On Tue, Apr 26, 2011 at 11:32 AM, Kyle Banerjee baner...@uoregon.edu wrote: IUG recently opened up stuff that has traditionally been passworded to everyone. You might ask if this area will be opened too as it may still be closed as an oversight. kyle On Tue, Apr 26, 2011 at 7:22 AM, Walker, David dwal...@calstate.edu wrote: IUG has an area on their website called the Clearinghouse, which has a number of scripts and other things. It's behind a login, unfortunately, although any IUG member can get access. --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Patrick Berry [pbe...@gmail.com] Sent: Tuesday, April 26, 2011 7:18 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] Google Book Search and Millennium I think collecting and documenting these hacks would be a fabulous idea. I know I got a lot of help from a message sent to the IUG by one of our librarians. They may be way ahead of us (or not) but it will be a good place to check. On Mon, Apr 25, 2011 at 8:36 PM, Gabriel Farrell gsf...@gmail.com wrote: Nice work, Patrick. 
You reminded me I never mentioned on this list the III Refworks Export script I put up on GitHub (see the code for props to those who did most of the work). It's at https://github.com/gsf/refworksexport. Maybe we should start collecting these under an iiihacks GitHub org. On Mon, Apr 25, 2011 at 5:48 PM, Patrick Berry pbe...@gmail.com wrote: Hi, We're working on integrating links to Google Books from Millennium. I'm not a fan of rewriting things from scratch, so I've borrowed heavily from those that already have this working. Props to the gbsclasses.js folks, MSU, and Temple. One thing I noticed is that IE 9 (perhaps earlier versions as well) do not work with the code in use at MSU and Temple on the bib_display.html templates. I've done some clean-up on a static example: http://www.csuchico.edu/~pberry/google-books/ Questions? Comments? DMCA notices? Pat in Chico -- -- Kyle Banerjee Digital Services Program Manager Orbis Cascade Alliance baner...@uoregon.edu / 503.877.9773
Re: [CODE4LIB] geo-locating email domains
Oh, I'm sure there is *a* contingent in the Bay Area. But Roy threw down the gauntlet, saying NorCal was more into Code4lib than SoCal. I ain't letting no gmail accounts inflate his numbers. ;-) --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Eric Lease Morgan [emor...@nd.edu] Sent: Thursday, March 24, 2011 10:08 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] geo-locating email domains On Mar 24, 2011, at 1:02 PM, Walker, David wrote: http://bit.ly/hdL55U But doesn't the large circle over the Bay Area come from all the gmail accounts hosted in Mountain View? No, not exactly. Yes, much of the area is centered around Mountain View (Gmail), but as you zoom in you see there is a contingent of folks in the Bay Area -- http://bit.ly/hZdAPN -- Eric Morgan
Re: [CODE4LIB] Simple Web-based Dublin Core search engine?
I wonder if you might be able to load the file in PKP Harvester. http://pkp.sfu.ca/?q=harvester It should already be able to parse and index OAI-DC, and would give you a nice, simple interface. It's based on a straight LAMP stack, which would make it easier to get up and running than some of the other suggestions so far. It's designed to harvest rather than load data, but that has got to be a fairly simple thing to work around. I've never done this myself, so I could be entirely wrong. --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Edward M. Corrado [ecorr...@ecorrado.us] Sent: Wednesday, March 16, 2011 8:00 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: [CODE4LIB] Simple Web-based Dublin Core search engine? Hi, I [will soon] have a small set (< 1000 records) of Dublin Core metadata published in OAI_DC format that I want to be searchable via a Web browser. Normally we would use Ex Libris's Primo for this, but this particular set of data may have some confidential information and our repository only has minimal built-in search functions. While we still may go with Primo for these records, I am looking at other possibilities. The requirements as I see them are: 1) Can ingest records in OAI_DC format 2) Allow remote end-users who are familiar with the collection to search these ingested records via a Web browser. 3) Search should be keyword anywhere or individual fields, although it does not need to have every whizzbang feature out there. In other words, basic search features are fine.
4) Should support the ability to link to the display copy in our repository (probably goes without saying) 5) Should be simple to install and maintain (Thus, at least in my mind, eliminating something like Blacklight) 6) Preferably a LAMP application although a Windows server based solution is a possibility as well 7) Preferably Open Source, or at least no- or low-cost I haven't been able to find anything searching the Web, but it seems like something people may have done before. Before I re-invent the wheel or shoe-horn something together, does anyone have any suggestions? Edward
Re: [CODE4LIB] dealing with Summon
Just out of curiosity, is there a Summon (API) developer listserv? Should there be? --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Godmar Back [god...@gmail.com] Sent: Wednesday, March 02, 2011 8:30 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] dealing with Summon On Wed, Mar 2, 2011 at 11:12 AM, Roy Tennant roytenn...@gmail.com wrote: Godmar, I'm surprised you're asking this. Most of the questions you want answered could be answered by a basic programming construct: an if-then-else statement and a simple decision about what you want to use in your specific application (for example, do you prefer text with the period, or not?). About the only question that such a solution wouldn't deal with is which fields are derived from which others, which strikes me as superfluous to your application if you know a hierarchy of preference. But perhaps I'm missing something here. I'm not asking how to code it, I'm asking for the algorithm I should use, given the fact that I'm not familiar with the provenance and status of the data Summon returns (which, I understand, is a mixture of original, harvested data, and cleaned-up, processed data.) Can you suggest such an algorithm, given the fact that each of the 8 elements I showed in the example (PublicationDateYear, PublicationDateDecade, PublicationDate, PublicationDateCentury, PublicationDate_xml.text, PublicationDate_xml.day, PublicationDate_xml.month, PublicationDate_xml.year) is optional? But wait: I think I've also seen records where there is a PublicationDateMonth, and records where some values have arrays of length 1. Can you suggest, or at least outline, such an algorithm? It would be helpful to know, for instance, if the presence of a PublicationDate_xml field supplants any other PublicationDate* fields (does it?)
If a PublicationDate_xml field is absent, which field would I want to look at next? Is PublicationDate more reliable than a combination of PublicationDateYear and PublicationDateMonth (and perhaps PublicationDateDay if it exists)? If the PublicationDate_xml is present, then: should I prefer the .text option? What's the significance of that dot? Is it spurious, like the identifier you mentioned you find in raw MARC records? If not, what, if anything, is known about the presence of the other fields? What if multiple fields are given in an array? Is the ordering significant (e.g., the first one is more trustworthy)? Or should I sort them based on a heuristic? (e.g., if 20100523 and 201005 are given, prefer the former?) What if the data is contradictory? These are the questions I'm seeking answers to; I know that those of you who have coded your own Summon front-ends must have faced the same questions when implementing your record displays. - Godmar
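Nobody in the thread answers Godmar's precedence question, but the kind of fallback algorithm he is asking for might be sketched like this. The field names are from his example; the precedence order and the shape of the _xml entries are assumptions on my part, not documented Summon behavior:

```javascript
// One possible heuristic for pulling a display date out of a Summon
// record: prefer the structured _xml field, then the preformatted
// PublicationDate string, then the year/month parts. This ordering is
// an assumption, not documented behavior.
function pickDate(rec) {
  var first = function (v) { return Array.isArray(v) ? v[0] : v; };
  var xml = rec.PublicationDate_xml && first(rec.PublicationDate_xml);
  if (xml && xml.text) return xml.text;              // e.g. "05/23/2010"
  if (rec.PublicationDate) return first(rec.PublicationDate);
  if (rec.PublicationDateYear) {
    var y = first(rec.PublicationDateYear);
    var m = rec.PublicationDateMonth && first(rec.PublicationDateMonth);
    return m ? y + "-" + m : y;                      // e.g. "2010-05"
  }
  return null;                                       // no usable date field
}
```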
Re: [CODE4LIB] exporting marc records from iii
Hey Eric, Is this an Innovative system you have access to (at Notre Dame)? And do you need to do this one time only, or does it need to be automated and ongoing? If it's a system you have access to, and you only need it once, then you might just have one of the staff there use the Millennium client to get these records. Innovative provides modules (Create Lists and Data Exchange) to search for and export MARC records. There is, of course, documentation for that. If it's an external system, or you want to automate the above task, then that's a much trickier question. We have some code here that might help with that, but I don't want to overly complicate your task. --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Eric Lease Morgan [emor...@nd.edu] Sent: Friday, February 18, 2011 7:48 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: [CODE4LIB] exporting marc records from iii How does a person go about exporting MARC records from a III system? As you may or may not know, I spend a lot of my time developing a thing colloquially called the Catholic Portal. It uses VuFind under the hood, and it requires me to ingest bibliographic data from a myriad of libraries. Suppose the records I desire have the letters crra saved in MARC field 590$a. What is the process for connecting to a III system, searching for crra in 590$a, and saving the result as a file of MARC records? Is there some sort of documentation I can read that will help me out in this regard? -- Eric Lease Morgan
Re: [CODE4LIB] MARCXML - What is it for?
I've been involved in several projects lambasted because managers think MARCXML is solving some imaginary problem It seems to me that this is really the heart of your argument. You had this experience, and now are projecting the opinions of these managers onto lots of people in the library world. I've worked in libraries for nearly a decade, and have never met anyone (manager or otherwise) who held the belief that XML in general, or MARC-XML in particular, somehow magically solves all metadata problems. I guess our two experiences cancel each other out, then. And, ultimately, none of that has anything to do with MARC-XML itself. --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Alexander Johannesen [alexander.johanne...@gmail.com] Sent: Monday, October 25, 2010 7:10 PM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] MARCXML - What is it for? On Tue, Oct 26, 2010 at 12:48 PM, Bill Dueber b...@dueber.com wrote: Here, I think you're guilty of radically underestimating lots of people around the library world. No one thinks MARC is a good solution to our modern problems, and no one who actually knows what MARC is has trouble understanding MARC-XML as an XML serialization of the same old data -- certainly not anyone capable of meaningful contribution to work on an alternative. Slow down, Tex. Lots of people in the library world is not the same as developers, or even good developers, or even good XML developers, or even good XML developers who knows what the document model imposes to a data-centric approach. The problem we're dealing with is *hard*. Mind-numbingly hard. This is no justification for not doing things better. 
(And I'd love to know what the hard bits are; always interesting to hear from various people as to what they think are the *real* problems of library problems, as opposed to any other problem they have) The library world has several generations of infrastructure built around MARC (by which I mean AACR2), and devising data structures and standards that are a big enough improvement over MARC to warrant replacing all that infrastructure is an engineering and political nightmare. Political? For sure. Engineering? Not so much. This is just that whole blinded by MARC issue that keeps cropping up from time to time, and rightly so; it is truly a beast - at least the way we have come to know it through AACR2 and all its friends and its death-defying focus on all things bibliographic - that has paralyzed library innovation, probably to the point of making libraries almost irrelevant to the world. I'm happy to take potshots at the RDA stuff from the sidelines, but I never forget that I'm on the sidelines, and that the people active in the game are among the best and brightest we have to offer, working on a problem that invariably seems more intractable the deeper in you go. Well, that's a pretty scary sentence, for all sorts of reasons, but I think I shall not go there. If you think MARC-XML is some sort of an actual problem What, because you don't agree with me the problem doesn't exist? :) and that people just need to be shouted at to realize that and do something about it, then, well, I think you're just plain wrong. 
Fair enough, although you seem to be under the assumption that all of the stuff I'm saying is a figment of my imagination (I've been involved in several projects lambasted because managers think MARCXML is solving some imaginary problem; this is not bullshit, but pain and suffering from the battlefields of library development), that I'm not one of those developers (or one of you, although judging from this discussion it's clear that I am not), that the things I say somehow doesn't apply because you don't agree with, umm, what I'm assuming is my somewhat direct approach to stating my heretic opinions. Alex -- Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps --- http://shelter.nu/blog/ -- -- http://www.google.com/profiles/alexander.johannesen ---
Re: [CODE4LIB] MARCXML - What is it for?
Crosswalking doesn't hold water as a justification for MARCXML. To be fair, though, most of us have simpler crosswalking needs than OCLC. And if I need to go from binary MARC to some XML schema (which I sometimes do), then MARC-XML and the XSLT stylesheets at LOC seem like a pretty good starting point to me. Better than starting from scratch. Which isn't to say that that approach is always the right one for every project. I very much agree with MJ: If it works for you, use it. If not, don't. But if someone else has a better, general-purpose solution to this problem, then by all means open source that puppy and let the rest of us have at it! --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Smith,Devon [smit...@oclc.org] Sent: Tuesday, October 26, 2010 7:44 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] MARCXML - What is it for? One way is to first transform the MARC into MARC-XML. Then you can use XSLT to crosswalk the MARC-XML into that other schema. Very handy. Your criticisms of MARC-XML all seem to presume that MARC-XML is the goal, the end point in the process. But MARC-XML is really better seen as a utility, a middle step between binary MARC and the real goal, which is some other useful and interesting XML schema. Unless "useful and interesting" is a euphemism for Dublin Core, then using XSLT for crosswalking is not really an option. Well, not a good option. On the other end of the spectrum, assume ONIX for "useful and interesting" and XSLT simply won't work. Crosswalking doesn't hold water as a justification for MARCXML.
/dev -- Devon Smith Consulting Software Engineer OCLC Research http://www.oclc.org/research/people/smith.htm -Original Message- From: Code for Libraries [mailto:code4...@listserv.nd.edu] On Behalf Of Walker, David Sent: Monday, October 25, 2010 8:57 PM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] MARCXML - What is it for? b) expanding it to be actual useful and interesting. But here I think you've missed the very utility of MARC-XML. Let's say you have a binary MARC file (the kind that comes out of an ILS) and want to transform that into MODS, Dublin Core, or maybe some other XML schema. How would you do that? One way is to first transform the MARC into MARC-XML. Then you can use XSLT to crosswalk the MARC-XML into that other schema. Very handy. Your criticisms of MARC-XML all seem to presume that MARC-XML is the goal, the end point in the process. But MARC-XML is really better seen as a utility, a middle step between binary MARC and the real goal, which is some other useful and interesting XML schema. --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Alexander Johannesen [alexander.johanne...@gmail.com] Sent: Monday, October 25, 2010 12:38 PM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] MARCXML - What is it for? Hiya, On Tue, Oct 26, 2010 at 6:26 AM, Nate Vack njv...@wisc.edu wrote: Switching to an XML format doesn't help with that at all. I'm willing to take it further and say that MARCXML was the worst thing the library world ever did. Some might argue it was a good first step, and that it was better with something rather than nothing, to which I respond ; Poppycock! MARCXML is nothing short of evil. 
Not only does it go against every principle of good XML anywhere (don't rely on whitespace, structure over code, namespace conventions, identity management, document control, separation of entities and properties, and on and on), it breaks the ontological commitment that a better treatment of the MARC data could bring, deterring people from actually a) using the darn thing as anything but a bare minimal crutch, and b) expanding it to be actually useful and interesting. The quicker the library world can get rid of this monstrosity, the better, although I doubt that will ever happen; it will hang around like a foul stench for as long as there is MARC in the world. A long time. A long sad time. A few extra notes: http://shelterit.blogspot.com/2008/09/marcxml-beast-of-burden.html Can you tell I'm not a fan? :) Kind regards, Alex -- Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps --- http://shelter.nu/blog/ -- -- http://www.google.com/profiles/alexander.johannesen ---
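Dave's "middle step" pipeline (binary MARC to MARC-XML to some other schema) can be sketched end to end. In practice the binary-MARC-to-MARCXML step would come from a tool like pymarc plus the LOC XSLT stylesheets for the crosswalk; the stdlib-only sketch below substitutes a hand-written MARCXML fragment for that first step and does the crosswalk with ElementTree instead of XSLT, so the toy Dublin Core mapping is illustrative only.

```python
# Sketch of the "MARCXML as a middle step" pipeline described above:
# binary MARC -> MARCXML -> some other schema (a toy Dublin Core here).
# Stdlib-only illustration; real projects would use pymarc to produce
# the MARCXML and the LOC XSLT stylesheets for the crosswalk.
import xml.etree.ElementTree as ET

MARCXML = """<record xmlns="http://www.loc.gov/MARC21/slim">
  <datafield tag="245" ind1="1" ind2="0">
    <subfield code="a">A sample title</subfield>
  </datafield>
  <datafield tag="100" ind1="1" ind2=" ">
    <subfield code="a">Doe, Jane</subfield>
  </datafield>
</record>"""

NS = {"marc": "http://www.loc.gov/MARC21/slim"}

def subfield(root, tag, code):
    """Return the first matching subfield value, or None."""
    for df in root.findall("marc:datafield", NS):
        if df.get("tag") == tag:
            for sf in df.findall("marc:subfield", NS):
                if sf.get("code") == code:
                    return sf.text
    return None

def marcxml_to_dc(xml_string):
    """Crosswalk a MARCXML record into a simple Dublin Core dict."""
    root = ET.fromstring(xml_string)
    return {
        "dc:title": subfield(root, "245", "a"),
        "dc:creator": subfield(root, "100", "a"),
    }

record = marcxml_to_dc(MARCXML)
```

The point stands regardless of tooling: MARCXML is never the destination, just the transfer format that the crosswalk step (XSLT, or Python here) consumes.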
Re: [CODE4LIB] Help with DLF-ILS GetAvailability
Hey Owen, Seems like you could use the dlf:holdings element to hold this kind of individual library information. The DLF-ILS documentation doesn't seem to think that you would use dlf:simpleavailability here, though, but rather MARC or ISO holdings schemas. But if you're controlling both ends of the communication, I don't know if it really matters. --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Owen Stephens [o...@ostephens.com] Sent: Wednesday, October 20, 2010 12:22 PM To: CODE4LIB@LISTSERV.ND.EDU Subject: [CODE4LIB] Help with DLF-ILS GetAvailability I'm working with the University of Oxford to look at integrating some library services into their VLE/Learning Management System (Sakai). One of the services is something that will give availability for items on a reading list in the VLE (the Sakai 'Citation Helper'), and I'm looking at the DLF-ILS GetAvailability specification to achieve this. For physical items, the availability information I was hoping to use is expressed at the level of a physical collection. For example, if several college libraries within the University hold an item, I have aggregated information that tells me the availability of the item in each of the college libraries. However, I don't have item level information. I can see how I can use simpleavailability to say over the entire institution whether (e.g.) a book is available or not. However, I'm not clear I can express this in a more granular way (say availability on a library by library basis) except by going to item level. Also although it seems you can express multiple locations in simpleavailability, and multiple availabilitymsg, there is no way I can see to link these, so although I could list each location OK, I can't attach an availabilitymsg to a specific location (unless I only express one location). Am I missing something, or is my interpretation correct?
Any other suggestions? Thanks, Owen PS I also looked at DAIA, which I like, but this (as far as I can tell) only allows availability to be specified at the level of items. Owen Stephens Owen Stephens Consulting Web: http://www.ostephens.com Email: o...@ostephens.com Telephone: 0121 288 6936
Re: [CODE4LIB] Help with DLF-ILS GetAvailability
Yes - my reading was that dlf:holdings was for pure 'holdings' as opposed to 'availability'. I would agree with Jonathan that putting a summary of item availability in dlf:holdings is not an abuse. For example, ISO Holdings -- one of the schemas the DLF-ILS document suggests using here -- has elements for things like: holdings:copiesSummary holdings:status holdings:availableCount Very much the kind of summary information you are using. Those are different from its holdings:copyInformation element, which describes individual items. So IMO it wouldn't be (much of) a stretch to express this in dlf:simpleavailability instead. --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Jonathan Rochkind [rochk...@jhu.edu] Sent: Thursday, October 21, 2010 1:26 PM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] Help with DLF-ILS GetAvailability I don't think that's an abuse. I consider dlf:holdings to be for information about a holdingset, or some collection of items, while dlf:item is for information about an individual item. I think regardless of what you do you are being over-optimistic in thinking that if you just do dlf, your stuff will be interchangeable with any other clients or servers doing dlf. The spec is way too open-ended for that; it leaves a whole bunch of details not specified and up to the implementer. For better or worse. I made more comments about this in the blog post I referenced earlier. Jonathan Owen Stephens wrote: Thanks Dave, Yes - my reading was that dlf:holdings was for pure 'holdings' as opposed to 'availability'. We could put the simpleavailability in there I guess but as you say since we are controlling both ends then there doesn't seem any point in abusing it like that.
The downside is we'd hoped to do something that could be taken by other sites - the original plan was to use the Juice framework - developed by Talis using jQuery to parse a standard availability format so that this could then be applied easily in other environments. Obviously we can still achieve the outcome we need for the immediate requirements of the project by using a custom format. Thanks again Owen On Thu, Oct 21, 2010 at 4:28 PM, Walker, David dwal...@calstate.edu wrote: Hey Owen, Seems like you could use the dlf:holdings element to hold this kind of individual library information. The DLF-ILS documentation doesn't seem to think that you would use dlf:simpleavailability here, though, but rather MARC or ISO holdings schemas. But if you're controlling both ends of the communication, I don't know if it really matters. --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Owen Stephens [o...@ostephens.com] Sent: Wednesday, October 20, 2010 12:22 PM To: CODE4LIB@LISTSERV.ND.EDU Subject: [CODE4LIB] Help with DLF-ILS GetAvailability I'm working with the University of Oxford to look at integrating some library services into their VLE/Learning Management System (Sakai). One of the services is something that will give availability for items on a reading list in the VLE (the Sakai 'Citation Helper'), and I'm looking at the DLF-ILS GetAvailability specification to achieve this. For physical items, the availability information I was hoping to use is expressed at the level of a physical collection. For example, if several college libraries within the University hold an item, I have aggregated information that tells me the availability of the item in each of the college libraries. However, I don't have item level information. I can see how I can use simpleavailability to say over the entire institution whether (e.g.) a book is available or not.
However, I'm not clear I can express this in a more granular way (say availability on a library by library basis) except by going to item level. Also although it seems you can express multiple locations in simpleavailability, and multiple availabilitymsg, there is no way I can see to link these, so although I could list each location OK, I can't attach an availabilitymsg to a specific location (unless I only express one location). Am I missing something, or is my interpretation correct? Any other suggestions? Thanks, Owen PS I also looked at DAIA, which I like, but this (as far as I can tell) only allows availability to be specified at the level of items. Owen Stephens Owen Stephens Consulting Web: http://www.ostephens.com Email: o...@ostephens.com Telephone: 0121 288 6936
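For what it's worth, the per-library summary Owen wants could look something like this if you control both ends. The element names below are an implementer's guess borrowing the ISO holdings vocabulary mentioned in the thread (copiesSummary, availableCount), and the namespace URI is an assumption; nothing in the DLF-ILS spec mandates any of it.

```python
# Sketch of a per-library availability summary under dlf:holdings.
# The DLF-ILS spec is open-ended here, so (as Jonathan notes) these
# element names are an implementer's choice, not normative; they
# borrow the ISO holdings terms copiesSummary / availableCount.
import xml.etree.ElementTree as ET

DLF = "http://diglib.org/ilsdi/1.1"  # namespace URI assumed; check your schema
ET.register_namespace("dlf", DLF)

def qn(name):
    """Qualify a local name in the (assumed) DLF namespace."""
    return f"{{{DLF}}}{name}"

def holdings_summary(libraries):
    """Build a holdings element with one availability summary per library.

    `libraries` is a list of (location, copies, available) tuples.
    """
    holdings = ET.Element(qn("holdings"))
    for location, copies, available in libraries:
        h = ET.SubElement(holdings, qn("holding"))
        ET.SubElement(h, qn("location")).text = location
        cs = ET.SubElement(h, qn("copiesSummary"))
        ET.SubElement(cs, qn("copiesCount")).text = str(copies)
        ET.SubElement(cs, qn("availableCount")).text = str(available)
    return holdings

# Hypothetical per-college data, library-by-library rather than item-level.
summary = holdings_summary([
    ("Balliol College Library", 2, 1),
    ("Merton College Library", 1, 0),
])
xml_out = ET.tostring(summary, encoding="unicode")
```

Since both ends of the exchange are under your control, the only real requirement is that the producer and the jQuery/Juice consumer agree on the shape.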
[CODE4LIB] Old MARC 007 field
I have some old MARC records that appear to have 007 fields that follow the pre-1981 structure. The LOC MARC pages mention this older structure in a note at the bottom of the page [1], but don't give a whole lot of information on it. I'm curious if others have run into this, and what you've done to work around it? I'm using the 007 -- in part anyway -- to determine the format of the item it describes. --Dave [1] http://www.loc.gov/marc/bibliographic/bd007.html == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu
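For the common case, the current 007/00 category-of-material byte maps cleanly to a format label; the safe move with a suspected pre-1981 007 is to flag it rather than guess, since the old structure assigned different meanings to the bytes. A sketch (code table abbreviated from the current MARC 21 documentation; verify against the LOC page before relying on it):

```python
# Sketch of deriving a format label from 007/00 ("category of
# material"), flagging anything unrecognized instead of guessing,
# since a pre-1981 007 used a different byte layout.
CATEGORY_OF_MATERIAL = {
    "a": "map",
    "c": "electronic resource",
    "d": "globe",
    "f": "tactile material",
    "g": "projected graphic",
    "h": "microform",
    "k": "nonprojected graphic",
    "m": "motion picture",
    "o": "kit",
    "q": "notated music",
    "r": "remote-sensing image",
    "s": "sound recording",
    "t": "text",
    "v": "videorecording",
    "z": "unspecified",
}

def format_from_007(field_007):
    """Map the first byte of an 007 field to a format label."""
    if not field_007:
        return None
    return CATEGORY_OF_MATERIAL.get(
        field_007[0], "unknown (possibly pre-1981 007)"
    )
```

In practice you would probably fall back to the Leader/06-07 and 008 when the 007 looks pre-1981, rather than trusting the field at all.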
Re: [CODE4LIB] Innovative's Synergy
Hi Cindy, Both the Ebsco and Proquest APIs are definitely available to customers. We're using the Ebsco one in our Xerxes application, for example. ( I'll send you a link off-list, Cindy.) --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Cindy Harper [char...@colgate.edu] Sent: Wednesday, June 30, 2010 9:11 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: [CODE4LIB] Innovative's Synergy Hi All - III is touting their web-services based Synergy product as having the efficiency of a pre-indexed service and the timeliness of a just-in-time service. Does anyone know if the agreements they have made with database vendors to use these web services preclude an organization developing an open-source client to take advantage of those web services? Just curious. I suppose I should direct my question to EBSCO and Proquest directly. Cindy Harper, Systems Librarian Colgate University Libraries char...@colgate.edu 315-228-7363
Re: [CODE4LIB] DIY aggregate index
You might also need to factor an extra server or three (in the cloud or otherwise) into that equation, given that we're talking 100s of millions of records that will need to be indexed. companies like iii and Ex Libris are the only ones with enough clout to negotiate access I don't think III is doing any kind of aggregated indexing, hence their decision to try and leverage APIs. I could be wrong. --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Jonathan Rochkind [rochk...@jhu.edu] Sent: Wednesday, June 30, 2010 1:15 PM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] DIY aggregate index Cory Rockliff wrote: Do libraries opt for these commercial 'pre-indexed' services simply because they're a good value proposition compared to all the work of indexing multiple resources from multiple vendors into one local index, or is it that companies like iii and Ex Libris are the only ones with enough clout to negotiate access to otherwise-unavailable database vendors' content? A little bit of both, I think. A library probably _could_ negotiate access to that content... but it would be a heck of a lot of work. When the staff time for negotiations comes in, it becomes a good value proposition, regardless of how much the licensing would cost you. And yeah, then there's the staff time to actually ingest and normalize and troubleshoot data-flows for all that stuff on a regular basis -- I've heard stories of libraries that tried to do that in the early 90s and it was nightmarish. So, actually, I guess I've arrived at convincing myself it's mostly a good value proposition, in that a library probably can't afford to do that on their own, with or without licensing issues. But I'd really love to see you try anyway, maybe I'm wrong.
:) Can I assume that if a database vendor has exposed their content to me as a subscriber, whether via z39.50 or a web service or whatever, I'm free to cache and index all that metadata locally if I so choose? Is this something to be negotiated on a vendor-by-vendor basis, or is it an impossibility? I doubt you can assume that. I don't think it's an impossibility. Jonathan
Re: [CODE4LIB] WorldCat as an OpenURL endpoint ?
It seems like the more productive path if the goal of a user is simply to locate a copy, wherever it is held. But I don't think users have *locating a copy* as their goal. Rather, I think their goal is to *get their hands on the book*. If I discover a book via COinS, and you drop me off at Worldcat.org, that allows me to see which libraries own the book. But, unless I happen to be affiliated with those institutions, that's kinda useless information. I have no real way of actually getting the book itself. If, instead, you drop me off at your institution's link resolver menu, and provide me an ILL option in the event you don't have the book, the library can get the book for me, which is really my *goal*. That seems like the more productive path, IMO. --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Tom Keays [tomke...@gmail.com] Sent: Tuesday, June 15, 2010 8:43 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] WorldCat as an OpenURL endpoint ? On Mon, Jun 14, 2010 at 3:47 PM, Jonathan Rochkind rochk...@jhu.edu wrote: The trick here is that traditional library metadata practices make it _very hard_ to tell if a _specific volume/issue_ is held by a given library. And those are the most common use cases for OpenURL. Yep. That's true even for individual libraries with link resolvers. OCLC is not going to be able to solve that particular issue until the local libraries do. If you just want to get to the title level (for a journal or a book), you can easily write your own thing that takes an OpenURL, and either just redirects straight to worldcat.org on isbn/lccn/oclcnum, or actually does a WorldCat API lookup to ensure the record exists first and/or looks up on author/title/etc too. I was mainly thinking of sources that use COinS.
If you have a rarely held book, for instance, then OpenURLs resolved against random institutional endpoints are going to mostly be unproductive. However, a union catalog such as OCLC already has the information about libraries in the system that own it. It seems like the more productive path if the goal of a user is simply to locate a copy, wherever it is held. Umlaut already includes the 'naive' just link to worldcat.org based on isbn, oclcnum, or lccn approach, functionality that was written before the WorldCat API existed. That is, Umlaut takes an incoming OpenURL, and provides the user with a link to a worldcat record based on isbn, oclcnum, or lccn. Many institutions have chosen to do this. MPOW, however, represents a counter-example and does not link out to OCLC. Tom
Re: [CODE4LIB] A call for your OPAC (or other system) statistics! (Browse interfaces)
Here are some stats from Cal State San Marcos for the past 6 1/2 years (2003-10). All searches other than keyword are browse searches.

keyword = 596,111
title = 158,761
author = 59,293
subject = 23,692
call number = 9,477
form / genre = 4,838
other numbers = 14,636

So:

keyword = 596,111
browse = 270,697

These stats only tracked searches that were performed from the catalog home page [1] or that of the library website [2]. Any subsequent searches performed inside the catalog itself are not counted here. I'm not sure if this is really showing that a browse display is popular here, though. I suspect a good number of users (other than librarians) were expecting the title and author searches to behave like the keyword search. But those options are browse searches, so they generate hits in favor of the browse. --Dave [1] http://library.csusm.edu/catalog/ [2] http://biblio.csusm.edu/ == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Bill Dueber [b...@dueber.com] Sent: Monday, May 03, 2010 11:08 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: [CODE4LIB] A call for your OPAC (or other system) statistics! (Browse interfaces) I got email from a person today saying, and I quote, I must say that [the lack of a browse interface] come as a shock (*which interface cannot browse??*) [Emphasis mine] Here, a browse interface is one where you can get a giant list of all the titles/authors/subjects whatever -- a view on the data devoid of any searching. Will those of you out there with browse interfaces in your system take a couple minutes to send along a guesstimate of what percentage of patron sessions involve their use? [Note that for right now, I'm excluding type-ahead search boxes although there's an obvious and, in my mind, strong argument to be made that they're substantially similar for many types of data] We don't have a browse interface on our (VuFind) OPAC right now.
But in the interest of paying it forward, I can tell you that Mirlyn, our OPAC, has numbers like this: Pct of Mirlyn sessions, Feb/March/April 2010, which included at least one basic search and also:

Go to full record view: 46% (we put a lot of info in search results)
Select/favorite an item: 15%
Add a facet: 13%
Export record(s) to email/refworks/RIS/etc.: 3.4%
Send to phone (sms): 0.21%
Click on faq/help/AskUs in footer: 0.17% (324 total)

Based on 187,784 sessions, 2010.02.01 to 2010.04.31 So...anyone out there able to tell me anything about browse interfaces? -- Bill Dueber Library Systems Programmer University of Michigan Library
Re: [CODE4LIB] Twitter annotations and library software
We're using maybe 1% of the spec for 99% of our practice, probably because librarians weren't imaginative (as Jim Weinheimer would say) enough to think of other use cases beyond that most pressing one. I would suggest it's more because, once you step outside of the primary use case for OpenURL, you end up bumping into *other* standards. Dorothea's blog post that Jakob referenced in his message is a good example of that. She was trying to use OpenURL (via COinS) to get data into Zotero. Mid-way through the post she wonders if maybe she should have gone with unAPI instead. And, in fact, I think that would have been a better approach. unAPI is better at doing that particular task than OpenURL. And I think that may explain why OpenURL hasn't become the One Standard to Rule Them All, even though it kind of presents itself that way. --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of MJ Suhonos [...@suhonos.ca] Sent: Thursday, April 29, 2010 5:17 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] Twitter annotations and library software Okay, I know it's cool to hate on OpenURL, but I feel I have to clarify a few points: OpenURL is of no use if you separate it from the existing infrastructure which is mainly held by companies. No sane person will try to build an open alternative infrastructure because OpenURL is a crappy library-standard like MARC etc. OpenURL is mostly implemented by libraries, yes, but it isn't necessarily *just* a library standard - this is akin to saying that Dublin Core is a library standard. Only sort of. The other issue I have is that — although Jonathan used the term to make a point — OpenURL is *not* an infrastructure, it is a protocol.
Condemning the current OpenURL infrastructure (which is mostly a vendor-driven oligopoly) is akin to saying in 2004 that HTTP and HTML suck because Firefox hadn't been released yet and all we had was IE6. Don't condemn the standard because of the implementation. The OpenURL specification is a 119 page PDF - that alone is a reason to run away as fast as you can. The main reason for this is because OpenURL can do much, much, much more than the simple resolve a unique copy use case that libraries use it for. We're using maybe 1% of the spec for 99% of our practice, probably because librarians weren't imaginative (as Jim Weinheimer would say) enough to think of other use cases beyond that most pressing one. I'd contend that OpenURL, like other technologies (cough XML) is greatly misunderstood, and therefore abused, and therefore discredited. I think there is also often confusion between the KEV schemas and OpenURL itself (which is really what Dorothea's blog rant is about); I'm certainly guilty of this myself, as Jonathan can attest. You don't *have* to use the KEVs with OpenURL, you can use anything, including eg. Dublin Core. If a twitter annotation setup wants to get adopted then it should not be built on a crappy complex library standard like OpenURL. I don't quite understand this (but I think I agree) — twitter annotation should be built on a data model, and then serialized via whatever protocols make sense (which may or may not include OpenURL). I must admit that this solution is based on the open assumption that CSL record format contains all information needed for OpenURL which may not be the case. … A good example. And this is where you're exactly right that we need better tools, namely OpenURL resolvers which can do much more than they do now. I've had the idea for a number of years now that OpenURL functionality should be merged into aggregation / discovery layer (eg.
OAI harvester)-type systems, because, like OAI-PMH, OpenURL can *transport metadata*, we just don't use it for that in practice. A ContextObject is just a triple that makes a single assertion about two entities (resources): that A references B. Just like an RDF statement using http://purl.org/dc/terms/references, but with more focus on describing the entities rather than the assertion. Maybe if I put it that way, OpenURL sounds a little less crappy. MJ
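MJ's "ContextObject as metadata transport" point is easy to see in the KEV serialization: a context object is just key/encoded-value pairs on a URL. A minimal sketch (the resolver base URL is a placeholder; key names follow the Z39.88-2004 KEV journal format):

```python
# Sketch of serializing article metadata as an OpenURL 1.0 KEV
# context object. The resolver base URL is a placeholder; the keys
# follow the Z39.88-2004 journal format.
from urllib.parse import urlencode

def context_object_url(resolver_base, metadata):
    """Serialize article metadata as an OpenURL 1.0 KEV query string."""
    kev = {
        "url_ver": "Z39.88-2004",
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
    }
    kev.update(metadata)
    return resolver_base + "?" + urlencode(kev)

url = context_object_url("http://resolver.example.edu/openurl", {
    "rft.atitle": "True Crime Radio and Listener Disenchantment",
    "rft.jtitle": "American Quarterly",
    "rft.volume": "58",
    "rft.spage": "137",
})
```

Nothing in the pairs says "resolve me"; they simply assert that the referring context references this article, which is exactly the RDF-style reading MJ suggests.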
Re: [CODE4LIB] Twitter annotations and library software
I was also just working on DOI with RIS. It looks like both Endnote and Refworks recognize 'DO' for DOIs. But apparently Zotero does not. If Zotero supported it, I'd say we'd have a de facto standard on our hands. In fact, I couldn't figure out how to pass a DOI to Zotero using RIS. Or, at least, in my testing I never saw the DOI show-up in Zotero. I don't really use Zotero, so I may have missed it. --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Owen Stephens [o...@ostephens.com] Sent: Wednesday, April 28, 2010 2:26 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] Twitter annotations and library software We've had problems with RIS on a recent project. Although there is a specification (http://www.refman.com/support/risformat_intro.asp), it is (I feel) lacking enough rigour to ever be implemented consistently. The most common issue in the wild that I've seen is use of different tags for the same information (which the specification does not nail down enough to know when each should be used): Use of TI or T1 for primary title Use of AU or A1 for primary author Use of UR, L1 or L2 to link to 'full text' Perhaps more significantly the specification doesn't include any field specifically for a DOI, but despite this EndNote (owned by ISI ResearchSoft, who are also responsible for the RIS format specification) includes the DOI in a DO field in its RIS output - not to specification. Owen On Wed, Apr 28, 2010 at 9:17 AM, Jakob Voss jakob.v...@gbv.de wrote: Hi it's funny how quickly you vote against BibTeX, but at least it is a format that is frequently used in the wild to create citations. If you call BibTeX undocumented and garbage then how do you call MARC which is far more difficult to make use of? My assumption was that there is a specific use case for bibliographic data in twitter annotations: I. 
Identify publication = this can *only* be done seriously with identifiers like ISBN, DOI, OCLCNum, LCCN etc. II. Deliver a citation = use a citation-oriented format (BibTeX, CSL, RIS) I was not voting explicitly for BibTeX but at least there is a large community that can make use of it. I strongly favour CSL ( http://citationstyles.org/) because: - there is a JavaScript CSL-Processor. JavaScript is kind of a punishment but it is the natural environment for the Web 2.0 Mashup crowd that is going to implement applications that use Twitter annotations - there are dozens of CSL citation styles so you can display a citation in any way you want As Ross pointed out RIS would be an option too, but I miss the easy open source tools that create citations from RIS data. Any other relevant format that I know (Bibont, MODS, MARC etc.) does not aim at identification or citation in the first place but tries to model the full variety of bibliographic metadata. If your use case is III. Provide semantic properties and connections of a publication Then you should look at the Bibliographic Ontology. But III does *not* just subsume use case II. - it is a different story that is not being told by normal people but only by metadata experts, semantic web gurus, library system developers etc. (I would count myself among these groups). If you want such complex data then you should use other systems than Twitter for data exchange anyway.
A list of CSL metadata fields can be found at http://citationstyles.org/downloads/specification.html#appendices and the JavaScript-Processor (which is also used in Zotero) provides more information for developers: http://groups.google.com/group/citeproc-js Cheers Jakob P.S.: An example of a CSL record from the JavaScript client:

{
  "title": "True Crime Radio and Listener Disenchantment with Network Broadcasting, 1935-1946",
  "author": [ { "family": "Razlogova", "given": "Elena" } ],
  "container-title": "American Quarterly",
  "volume": "58",
  "page": "137-158",
  "issued": { "date-parts": [ [2006, 3] ] },
  "type": "article-journal"
}

-- Jakob Voß jakob.v...@gbv.de, skype: nichtich Verbundzentrale des GBV (VZG) / Common Library Network Platz der Goettinger Sieben 1, 37073 Göttingen, Germany +49 (0)551 39-10242, http://www.gbv.de -- Owen Stephens Owen Stephens Consulting Web: http://www.ostephens.com Email: o...@ostephens.com
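On the RIS side of this thread, Owen's tag-variance complaint (TI vs T1, AU vs A1, UR vs L1/L2, plus EndNote's off-spec DO field for DOIs) can be absorbed with a synonym table at parse time. A sketch, using a made-up record based on Jakob's CSL example (the DOI is a placeholder):

```python
# Sketch of reading RIS defensively: the spec allows TI/T1, AU/A1,
# and UR/L1/L2 to carry the same information, and EndNote emits DOIs
# in a DO tag the spec never defined, so we normalize synonyms and
# keep DO as-is.
SYNONYMS = {
    "T1": "TI",   # primary title
    "A1": "AU",   # primary author
    "L1": "UR",   # link to full text
    "L2": "UR",
}

def parse_ris(text):
    """Parse one RIS record into a dict of tag -> list of values."""
    record = {}
    for line in text.splitlines():
        # RIS lines look like "TY  - JOUR": 2-char tag, "  - ", value.
        if len(line) < 6 or line[2:6] != "  - ":
            continue
        tag = SYNONYMS.get(line[:2], line[:2])
        if tag == "ER":  # end of record
            break
        record.setdefault(tag, []).append(line[6:].strip())
    return record

sample = """TY  - JOUR
T1  - True Crime Radio and Listener Disenchantment
A1  - Razlogova, Elena
DO  - 10.1234/example
ER  - 
"""
rec = parse_ris(sample)
```

A consumer then only ever looks for TI, AU, and UR, no matter which spelling the producer chose, which is about the best one can do until the spec itself nails the tags down.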
Re: [CODE4LIB] Code4Lib North planning continues
I think a good compromise is to have local meeting conversations on the code4libcon google group. this! --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Smith,Devon [smit...@oclc.org] Sent: Thursday, April 08, 2010 6:35 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] Code4Lib North planning continues I think a good compromise is to have local meeting conversations on the code4libcon google group. It keeps the conversations in a central place initially created to facilitate face to face meetings. /dev -Original Message- From: Code for Libraries on behalf of Ed Summers Sent: Wed 4/7/2010 10:53 PM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] Code4Lib North planning continues On Wed, Apr 7, 2010 at 10:43 PM, William Denton w...@pobox.com wrote: So far there are just three people with ideas for talks (me, Walter Lewis, Art Rhyno). Have the other local chapters found it works well to have more time for informal stuff, or lightning talks, or Ask Anything like I see NYC is doing? Sometimes with a smaller group people don't talk so much, but sometimes they do. The thing that bums me out is that this discussion list was largely created because there were all these discussions going on in niches like xml4lib, web4lib, perl4lib, php4lib, oss4lib, etc ... and not enough conversation about computing and libraries and cross-fertilization between projects/environments. Now we're seeing the code4lib discussion list itself fragment into code4libmdc, code4lib-north, code4libnyc, code4lib-northwest, etc. I guess an argument could be made that the conversations going on in these sublists would overwhelm code4lib proper with all sorts of local noise. But I think ideally we should have crossed that bridge when we came to it. I think if folks on code4lib saw what was going on in different locales it would inspire people to do stuff where they are too.
//Ed
Re: [CODE4LIB] Code4Lib North planning continues
I'm not on that conference list, so don't really know how much traffic it gets. But it seems to me that, since these regional conferences are mostly being held at different times of the year from the main conference, the overlap would be minimal. Or not. I don't know. --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of William Denton [...@pobox.com] Sent: Thursday, April 08, 2010 7:45 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] Code4Lib North planning continues On 8 April 2010, Walker, David quoted: I think a good compromise is to have local meeting conversations on the code4libcon google group. That list is for organizing the main conference, with details about getting rooms, food, shuttle buses, hotel booking agents, who can MC Thursday afternoon, etc. Mixing that with organizational details *and* general discussion about all local chapter meetings would confuse everything, I think. Bill -- William Denton, Toronto : miskatonic.org www.frbr.org openfrbr.org
Re: [CODE4LIB] newbie
Google code has project feeds in Atom, too. --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Aaron Rubinstein [arubi...@library.umass.edu] Sent: Thursday, March 25, 2010 10:21 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] newbie On 3/25/2010 12:47 PM, Ross Singer wrote: I disagreed with this back in the day, and I still disagree with running our own code repository. There are too many good code hosting solutions out there for this to be justifiable. We used to run an SVN repo at code4lib.org, but we never bothered rebuilding it after our server got hacked. Actually I think GitHub/Google Code and their ilk are a much better solution -- especially for pastebins/gists/etc. What would be useful, though, is an aggregation of the Code4lib community spread across these sites, sort of like what the Planet does for blog postings, etc. or what Google Buzz does for the people I follow (i.e. I see their gists). I'd buy in to that (and help support it), but I'm not sure how one would go about it. -Ross. I think the old discussion was looking more for a way to host code snippets as opposed to version controlled projects, which I agree GitHub and the like already do nicely. Would we really need more than a code4lib.pastebin.com? That being said, a code planet would be really cool. I know that GitHub and BitBucket publish ATOM feeds of a user's activity but I'm not so sure about other code hosting sites. Anyways, just a thought... Aaron
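The "code planet" idea above is mostly a matter of merging per-user Atom feeds. A minimal sketch using only the Python standard library; the feed content is an inlined sample so it runs offline, standing in for real responses from the user activity feeds GitHub publishes (e.g. https://github.com/username.atom) -- the sample entries and the merging policy are illustrative assumptions, not any existing planet's behavior:

```python
# Merge entries from per-user Atom activity feeds into one
# date-sorted list, planet-style. In practice each feed would be
# fetched with urllib.request; here a sample feed is inlined.
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

SAMPLE_FEED = """<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>activity</title>
  <entry>
    <title>pushed to repo-a</title>
    <updated>2010-03-25T12:00:00Z</updated>
  </entry>
  <entry>
    <title>opened issue in repo-b</title>
    <updated>2010-03-26T09:30:00Z</updated>
  </entry>
</feed>"""

def entries(feed_xml):
    """Yield (updated, title) pairs from one Atom feed document."""
    root = ET.fromstring(feed_xml)
    for entry in root.findall(ATOM_NS + "entry"):
        yield (entry.findtext(ATOM_NS + "updated"),
               entry.findtext(ATOM_NS + "title"))

# ISO-8601 timestamps in the same zone sort lexicographically,
# so a plain string sort gives newest-first ordering.
merged = sorted(entries(SAMPLE_FEED), reverse=True)
for updated, title in merged:
    print(updated, title)
```

The same `entries()` call would be run over every aggregated user's feed before the single `sorted()`; per-feed error handling is omitted for brevity.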
Re: [CODE4LIB] ignore my last message
That was not a reply but a new message. --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Mike Taylor [m...@indexdata.com] Sent: Monday, March 08, 2010 9:14 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] ignore my last message On 8 March 2010 17:04, Jonathan Rochkind rochk...@jhu.edu wrote: As usual, I'm great at sending the WORST messages to the wrong list. My email client is messing up all over. Please do not reply to that one on list, please ignore it, and Eric please remove it from the archives if possible. Man, today is not my day. I've got to stop using email for a year or something. This kind of thing is always going to happen from time to time on a list configured to fail maximally hard when it fails at all. See Reply-To Munging Considered Harmful: http://www.unicom.com/pw/reply-to-harmful.html
Re: [CODE4LIB] Code4Lib 2011 Proposals
ALL of that said, where are the San Diego gang la_jolla++ BigD? --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Walter Lewis [lew...@hhpl.on.ca] Sent: Wednesday, March 03, 2010 11:20 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] Code4Lib 2011 Proposals On 3 Mar 10, at 9:52 AM, Julia Bauder wrote: Also, the farther north we go, the more likely that snow+airplane incompatibilities will foil speakers' (and attendees'!) travel plans at the last minute, which isn't fun for anyone. somewhere_out_of_nor'easter_and_lake_effect_range_in_february++ Actually there is a clear line (at least on the eastern half of the continent) where the further north you go, the *less* snow you got this winter. Buffalo is trailing a number of places on the east coast in total snow accumulation and Toronto has been dusted a few times this winter, with nothing of real substance. Detroit and Chicago were well below seasonal averages last time I checked. ALL of that said, where are the San Diego gang or the folks from Miami? Walter who can only dream of pubs with open patios in February
Re: [CODE4LIB] Transferring an article bib data From Article Linker page to ILL form
Sarah, Are you using ILLiad, or perhaps just an email-based ILL form? Either way, your link resolver should be able to send the OpenURL to that system. ILLiad can accept OpenURLs and auto-populate its form. It's not too difficult to do the same for a home-grown email ILL form. This is usually a pretty standard feature for any link resolver. I would contact Serials Solutions to see what they offer in this regard. --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Park,Go-Woon [gop...@nwmissouri.edu] Sent: Wednesday, March 03, 2010 2:07 PM To: CODE4LIB@LISTSERV.ND.EDU Subject: [CODE4LIB] Transferring an article bib data From Article Linker page to ILL form I am wondering if anybody already has solutions for an automated interlibrary loan form from an Article Linker or an OpenURL resolver page. We have Serials Solutions' 360 Link. I would like to have one button that copies the contents of the Article Linker result and pastes into our ILL form when no full-text article is found in other databases. It is painful to retype all information. Any suggestion is welcome. Thank you, Sarah G. Park Web/Reference Librarian B. D. Owens Library | Northwest Missouri State University (660) 562-1534 | gop...@nwmissouri.edu
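The hand-off Dave describes rides on OpenURL: the resolver serializes the citation as a key/encoded-value query string appended to the ILL form's URL, and the form pre-fills its fields from those keys. A hedged sketch using standard OpenURL 1.0 (Z39.88-2004) journal-article keys -- the base URL and citation values are made up for illustration, and ILLiad's actual parameter handling may differ:

```python
# Build an OpenURL 1.0 KEV query for a journal article and append it
# to an (illustrative) ILL form URL. rft.* keys are the standard
# Z39.88-2004 journal metadata keys.
from urllib.parse import urlencode

citation = {
    "url_ver": "Z39.88-2004",
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
    "rft.genre": "article",
    "rft.atitle": "Roman art",          # article title
    "rft.jtitle": "Journal of Examples",  # journal title
    "rft.volume": "12",
    "rft.spage": "34",                  # start page
    "rft.date": "2010",
    "rft.issn": "1234-5678",
}

# urlencode percent-encodes values, so spaces and punctuation in
# titles survive the round trip to the form.
openurl = "https://ill.example.edu/form?" + urlencode(citation)
print(openurl)
```

The receiving form then reads each `rft.*` parameter back out of the query string to populate the matching input field, which is exactly what saves the user from retyping the citation.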
Re: [CODE4LIB] Sunday in Asheville
There is a Duke basketball game on then, and they do love them some college basketball in North Carolina. NASCAR should be over by 7:00. I think you're good. --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of David Fiander [da...@fiander.info] Sent: Wednesday, February 17, 2010 11:53 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] Sunday in Asheville Seriously, are any other sports going to be broadcast during that time slot? On Wed, Feb 17, 2010 at 14:51, Julia Bauder julia.bau...@gmail.com wrote: Ooh! Ooh! I want to watch the hockey game! (As long as y'all won't throw things at me if I root for Canada) They have an NHL team in North Carolina--there have to be SOME hockey fans in the state. The Bier Garden is listed as a sports bar on Yelp, and their Web site says they have 16 televisions -- I'm sure we can convince them to tune a measly one TV to the hockey game. Julia * Julia Bauder Data Services Librarian Grinnell College Libraries Sixth Ave. Grinnell, IA 50112 641-269-4431 On Wed, Feb 17, 2010 at 1:45 PM, Andrew Darby darby.li...@gmail.com wrote: There's also the Canada/US Olympic men's hockey game on Sunday night at 7:30 EST. Finding an establishment willing to turn it on might be a challenge, though . . . . On Wed, Feb 17, 2010 at 1:41 PM, Tania Fersenheim tan...@brandeis.edu wrote: I emailed them a few questions awhile ago at he...@monkpub.com and they answered within a few hours, from the address ba...@monkpub.com. They seem to have a decent non-Belgian tap list as well. Tania -- Tania Fersenheim Manager of Library Systems Brandeis University Library and Technology Services 415 South Street, (MS 017/P.O. 
Box 549110) Waltham, MA 02454-9110 Phone: 781.736.4698 Fax: 781.736.4577 email: tan...@brandeis.edu -Original Message- From: Code for Libraries [mailto:code4...@listserv.nd.edu] On Behalf Of Doran, Michael D Sent: Wednesday, February 17, 2010 11:06 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] Sunday in Asheville Hi Mike, the Thirsty Monk [1]. It's a half-mile from the conference hotel, so it's easily walkable/stumbleable. 1. http://www.yelp.com/biz/thirsty-monk-pub-asheville The Yelp entry has their address being 50 Commerce St, Asheville, NC 28801. However their website (http://www.monkpub.com/) has them at 92 Patton Ave, Asheville, NC 28801 (which is even closer to the conference hotel). Google maps now has Hookah Joe's at the 50 Commerce St address, so perhaps the Thirsty Monk has moved. They are not answering their phone (828-254-5470) this early, but I will try them later on to get clarification. I hope to run into some of you folks there. If you're into Belgian beer and a different pub atmosphere, do join me. Belgian beer is my favorite, so I plan on going (even if you are going to be there -- just teasing!). I didn't notice any Atomium on draft, though (previewing the beer menu is how I happened to notice the address discrepancy). -- Michael # Michael Doran, Systems Librarian # University of Texas at Arlington # 817-272-5326 office # 817-688-1926 mobile # do...@uta.edu # http://rocky.uta.edu/doran/ -Original Message- From: Code for Libraries [mailto:code4...@listserv.nd.edu] On Behalf Of Michael J. Giarlo Sent: Wednesday, February 17, 2010 8:39 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: [CODE4LIB] Sunday in Asheville Folks, We have a fabulous slate of social activities lined up for this year's conference in Asheville (thanks to, well, y'all). But those of you arriving on Sunday will notice there are no planned outings that night! Oh noez! Well, I'm planning to spend my post-dinner time at the Thirsty Monk [1]. 
It's a half-mile from the conference hotel, so it's easily walkable/stumbleable. I hope to run into some of you folks there. If you're into Belgian beer and a different pub atmosphere, do join me. -Mike P.S. If you'd like to reach me via phone, my number is: the NJ area code beginning with seven, followed by the numerically lower Santa Monica (CA) area code, followed by the sum of the prior value added to the number of the beast, padded with one zero. 1. http://www.yelp.com/biz/thirsty-monk-pub-asheville -- Andrew Darby Web Services Librarian Ithaca College Library http://www.ithaca.edu/library/ ada...@ithaca.edu
Re: [CODE4LIB] Sunday in Asheville
The hockey game is on MSNBC. --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Sean Hannan [shan...@jhu.edu] Sent: Wednesday, February 17, 2010 11:58 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] Sunday in Asheville If I'm remembering correctly, NBC is opting to show Ice Dancing over the USA/Canada game. Yay, NBC. -Sean On Feb 17, 2010, at 2:53 PM, David Fiander wrote: Seriously, are any other sports going to be broadcast during that time slot?
Re: [CODE4LIB] change management system
Thanks to everyone who responded. The comments have been very helpful! Is anyone using RT? [1] Also, I'm curious how many academic libraries are following a formal change management process? By that, I mean: Do you maintain a strict separation between developers and operations staff (the people who put the changes into production)? And do you have something like a Change Advisory Board that reviews changes before they can be put into production? Just as background to these questions: We've been asked to come up with a change management procedure/system for a variety of academic technology groups here that have not previously had such (at least nothing formal). But we find the process that the business (i.e., PeopleSoft) folks here follow to be a bit too elaborate for our purposes. They use Remedy. --Dave [1] http://bestpractical.com/rt == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Mark A. Matienzo [m...@matienzo.org] Sent: Thursday, February 11, 2010 5:47 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] change management system I'm inclined to say that any sort of tracking software could be used for this - it's mostly an issue of creating and sticking with policy decisions about what the various workflow states are, how things become triaged, etc. I believe if you define that up front, you could find Trac or any other tracking/issue system adaptable to what you want to do. Mark A. Matienzo Digital Archivist, Manuscripts and Archives Yale University Library
Re: [CODE4LIB] change management system
Hey Declan, Does that process only apply to applications you develop yourselves? How about the Innovative system, or open source applications developed elsewhere? --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Fleming, Declan [dflem...@ucsd.edu] Sent: Thursday, February 11, 2010 9:31 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] change management system Hey Dave! We need to go grab lunch sometime... We use JIRA for our bug tracking and tracking feature requests (to some extent). UCSD Libraries IT has a strict Development/Operations split, with a weak Test phase in the middle - weak because I don't have a QA or config manager, and I'm teaching academics the processes I learned while working in the software industry. We follow a 2 week deploy process where Dev can submit any packages to Ops every other Friday. On Monday or Tuesday (depending on what's on fire in Ops), these packages are then staged to a Test server that only Ops has admin privs on. If the project people have a test plan, they have the rest of the week to say whether the package passes or not. If yes, we roll the package to production on the next Monday or Tuesday. If not, we kick the package back to Dev and they do their fixes and unit tests and wait for the next cycle. This system keeps production (and thus, customers) from being thrashed with not-quite-ready builds. There is a lot of natural tension in our system, especially with the lack of a QA manager, and most of the config management being done by Ops. We require a high degree of communication between the Ops and Dev managers on dates, test pass/fail conditions, code quality, process mgt, etc. This can be a challenge as Ops and Dev have different missions at times. 
D
Re: [CODE4LIB] change management system
What are you using for that ticketing system? --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Fleming, Declan [dflem...@ucsd.edu] Sent: Thursday, February 11, 2010 11:52 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] change management system Hi - it's primarily designed for things we develop. We have a Change Management ticketing system following ITIL principles that tracks change requests for anything in production, from working apps we've developed, to III, to the public infestations, and even account adds/moves/changes. Tickets from this system will sometimes be moved into JIRA when they ask for a change to something we've developed. D
[CODE4LIB] change management system
Can anyone here recommend an open source system for change management? Not version control, per se. But the process of requesting, reviewing, and approving changes to production systems. Does Trac fit into this category? --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu
Re: [CODE4LIB] urldecode problem and CAS
So a user arrives at your app. You see that they are not logged in, and so redirect them to the CAS server with a return URL back to your application. Do you have an example of that URL? --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Jimmy Ghaphery [jghap...@vcu.edu] Sent: Wednesday, January 27, 2010 9:18 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: [CODE4LIB] urldecode problem and CAS CODE4LIB, I'm looking for some urldecode help if possible. I have an app that gets a call through a url which looks like this in order to pull up a specific record: http://../app.cfm?id=15 It is password protected and we have recently moved to CAS for authentication. After it gets passed from CAS back to our server it looks like this and tosses an error: http://../app.cfm?id%3d15 The equals sign translated to %3d Any ideas are appreciated. thanks -Jimmy -- Jimmy Ghaphery Head, Library Information Systems VCU Libraries http://www.library.vcu.edu --
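The `%3d` symptom in Jimmy's report is the classic sign that the application's own query string (`?id=15`) was not percent-encoded when it was embedded as the CAS `service=` parameter, so one of the hops decoded or re-encoded the nested `=` incorrectly. A small sketch of the correct round trip (the CAS and application hostnames are illustrative):

```python
# Embed a return URL -- which has its own query string -- inside a
# CAS login URL without the nested "=" getting mangled.
from urllib.parse import quote, unquote

service = "http://app.example.edu/app.cfm?id=15"

# quote(..., safe="") percent-encodes every reserved character,
# including ":", "/", "?" and "=", so the nested query survives
# being a parameter value itself.
login_url = ("https://cas.example.edu/cas/login?service="
             + quote(service, safe=""))
print(login_url)

# On the way back, the application decodes once and recovers the
# original target URL exactly.
assert unquote(login_url.split("service=", 1)[1]) == service
```

If the service URL is embedded unencoded, whichever component decodes it a second time (or HTML-escapes it in a redirect page) can turn `?id=15` into `?id%3d15`, which is the error seen above.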
Re: [CODE4LIB] marc documentation?
Do you mean just the 'CONTENT DESIGNATOR HISTORY' at the bottom of each page? http://www.loc.gov/marc/bibliographic/bd4xx.html --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Jonathan Rochkind [rochk...@jhu.edu] Sent: Wednesday, January 27, 2010 11:59 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: [CODE4LIB] marc documentation? I know I've seen documetnation on the LC site before for since-abandoned marc bib tags, like 400. But I can't for the life of me find it now navigating around the website or googling. does anyone know where this is? Jonathan
Re: [CODE4LIB] image maps + lightbox/thickbox/ibox/etc
I've taken to using Shadowbox: http://www.shadowbox-js.com/ Since one of the things you can bring up in the box is any external web page, it might meet your need for an image map? --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Ken Irwin [kir...@wittenberg.edu] Sent: Thursday, January 07, 2010 2:20 PM To: CODE4LIB@LISTSERV.ND.EDU Subject: [CODE4LIB] image maps + lightbox/thickbox/ibox/etc Hi all, Does anyone have an AJAX pop-up window style tool that works with image maps? I'm thinking of something in the lightbox, thickbox, ibox family. I've found a bunch of references to people online *looking* for this functionality, but no one finding it. Any ideas? Thanks! Ken
Re: [CODE4LIB] Online PHP course?
what's the problem(s) with PHP? I fear this thread may never end. And I like PHP. --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Thomas Krichel [kric...@openlib.org] Sent: Tuesday, January 05, 2010 2:13 PM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] Online PHP course? Joe Hourcle writes ps. yes, I could've used this response as an opportunity to bash PHP ... and I didn't, because they might be learning PHP to migrate it to something else. controversial ;-) what's the problem(s) with PHP? Cheers, Thomas Krichel http://openlib.org/home/krichel http://authorclaim.org/profile/pkr1 skype: thomaskrichel
Re: [CODE4LIB] ipsCA Certs
Hi John, I also got this email. We also recently installed an ipsCA wildcard cert for a test EZProxy install. Looking at the details of our ipsCA wildcard certificate in Firefox, though, I can see the chain of certificates going up to the root ipsCA cert. Firefox says that that root certificate -- ipsCA CLASEA1 Certificate Authority -- is good until 2025. I see the same thing in IE, Safari, and I assume every other browser I might check. Do you see that too? --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of John Wynstra [john.wyns...@uni.edu] Sent: Thursday, December 17, 2009 1:02 PM To: CODE4LIB@LISTSERV.ND.EDU Subject: [CODE4LIB] ipsCA Certs Out of curiosity, did anyone else using ipsCA certs receive notification that due to the coming expiration of their root CA (December 29, 2009), they would need a reissued cert under a new root CA? I am uncertain as to how this new Root CA will become a part of the browsers' trusted roots without some type of user action including a software upgrade, but the following library website instructions lead me to believe that this is not going to be smooth. http://bit.ly/53Npel We are just about to go live with EZProxy in January with an ipsCA cert issued a few months ago, and I am not about to do that if I have serious browser support issues. -- John Wynstra Library Information Systems Specialist Rod Library University of Northern Iowa Cedar Falls, IA 50613 wyns...@uni.edu (319)273-6399
Re: [CODE4LIB] ipsCA Certs
I see now that I'm looking at the intermediate certificate. The root does expire in 2009. Nevermind. :-) --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu
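The expiry check done by eyeballing Firefox's certificate viewer above can also be scripted. A minimal sketch using only Python's standard library: `ssl.cert_time_to_seconds()` parses the `notBefore`/`notAfter` date strings found in the dict returned by `SSLSocket.getpeercert()`; the hard-coded date below is the ipsCA root expiry from the thread, standing in for a value pulled from a live connection:

```python
# Check whether a certificate's notAfter date has already passed,
# given the date string format used by SSLSocket.getpeercert().
import ssl
import time

# Expiry of the old ipsCA root discussed in the thread.
not_after = "Dec 29 23:59:59 2009 GMT"

# Convert the certificate time string to seconds since the epoch.
expires = ssl.cert_time_to_seconds(not_after)

if expires < time.time():
    print("certificate expired")
else:
    print("certificate still valid")
```

On a real deployment you would run this over every certificate in the chain, not just the leaf -- which is exactly the distinction between the intermediate and the root that tripped things up above.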
Re: [CODE4LIB] holdings standards/protocols
Innovative does too. Like Ben mentioned with Voyager Z39.50, simply set the record type to 'OPAC' in your yaz client to get the holdings.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of B.C.Charlton [b.c.charl...@kent.ac.uk]
Sent: Monday, November 16, 2009 6:30 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] holdings standards/protocols

Can anyone give me an idea if any/many/all (ILS) Z implementations have implemented the holdings information? Is there a way of testing this using a client such as yaz (e.g. a worked example of seeing holdings via Z)?

Voyager certainly can -- see example below. I've also got some perl that pulls back opac-xml using the ZOOM module. If that's of any use, let me know off-list.

Ben

Z> open nemesis.kent.ac.uk:7090
Connecting...OK.
Sent initrequest.
Connection accepted by v3 target.
ID     : 34
Name   : Voyager LMS - Z39.50 Server Version: 2007.0.4
Options: search present
Elapsed: 0.698674
Z> base voyager
Z> format opac
Z> find 0714120766
Sent searchRequest.
Received SearchResponse.
Search was a success.
Number of hits: 1, records returned: 0
Elapsed: 0.030849
Z> show 1
Sent presentRequest (1+1).
Records: 1
[VOYAGER] Record type: OPAC
Record type: USmarc
00763cam 2200229 a 4500
001 318575
005 20080123143630.0
008 010720s1991xxkabc eng
015    $a 0527672 $a 0527673 $a 0527674 $a 0527675 $a 0686148 $a F210884
020    $a 0714120766 (pbk.)
035    $9 8000527672
050  4 $a N 5760
100 1  $a Walker, Susan.
245 10 $a Roman art / $c Susan Walker.
260    $a London : $b British Museum Press for Trustees of the British Museum, $c 1991 $g (repr. 1994).
300    $a 72 p. : $b ill. (some col.), col. maps ; $c 22 cm.
500    $a Includes bibliographical references (p. 71) and index.
561    $a Copy F210884 from the collection of Colin Renfrew.
650  0 $a Art, Roman.
710 2  $a British Museum.
990    $a CL335
990    $a CL609
Data holdings 0
 typeOfRecord: x
 encodingLevel: 3
 receiptAcqStatus: 0
 generalRetention: 8
 completeness: 4
 dateOfReport: 00
 nucCode: TCTCOWL
 localLocation: Templeman - Core Text Collection [1 Week Loan]
 callNumber: N 5760
 circulation 0
  availableNow: 1
  itemId: 442858
  renewable: 0
  onHold: 0
 circulation 1
  availableNow: 1
  itemId: 800513
  renewable: 0
  onHold: 0
Data holdings 1
 typeOfRecord: x
 encodingLevel: 3
 receiptAcqStatus: 0
 generalRetention: 8
 completeness: 4
 dateOfReport: 00
 nucCode: TMORD
 localLocation: Templeman - Main Collection [Ordinary Loan]
 callNumber: N 5760
 circulation 0
  availableNow: 1
  itemId: 442855
  renewable: 0
  onHold: 0
  temporaryLocation: Medway - Tonbridge [Ordinary Loan]
 circulation 1
  availableNow: 1
  itemId: 442856
  renewable: 0
  onHold: 0
  temporaryLocation: Medway - Tonbridge [Ordinary Loan]
 circulation 2
  availableNow: 1
  itemId: 442857
  renewable: 0
  onHold: 0
  temporaryLocation: Medway - Tonbridge [Ordinary Loan]
Data holdings 2
 typeOfRecord: x
 encodingLevel: 4
 receiptAcqStatus: 0
 generalRetention: 8
 completeness: 1
 dateOfReport: 00
 nucCode: XTONUKCORD
 localLocation: Medway - Tonbridge [Ordinary Loan]
 callNumber: N 5760
 circulation 0
  availableNow: 1
  itemId: 692092
  renewable: 0
  onHold: 0
nextResultSetPosition = 2
Elapsed: 0.013220
Re: [CODE4LIB] Library Website Redesign Info and Project Plans
My wife really likes Web Redesign: Workflow that Works, by Kelly Goto and Emily Cotler. The second edition is called Web Redesign 2.0.

http://www.web-redesign.com/
http://www.worldcat.org/oclc/57641137

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Jason Stirnaman [jstirna...@kumc.edu]
Sent: Wednesday, September 16, 2009 11:36 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Library Website Redesign Info and Project Plans

I just came across this yesterday: http://johncrenshaw.net/blog/web-development-project-process-workflow/ Very high-level and a usual systems-design approach, but with some good web-specific tips thrown in.

Sean Hannan shan...@jhu.edu 09/16/09 10:20 AM

We're currently in the middle of a library website redesign as well. For the most part, we have framed our project using Jesse James Garrett's The Elements of User Experience (https://wiki.library.jhu.edu/download/attachments/30737/elements.pdf). It has been immensely helpful in plotting out our work from the User Experience touchy-feely end to the Information Architecture to the visual design and implementation.

-Sean
---
Sean Hannan
Web Developer
Sheridan Libraries
Johns Hopkins University

On Sep 16, 2009, at 10:52 AM, Rosalyn Metz wrote:

Hi All,

I'm about to embark on a library website redesign. I've started thinking about creating a project plan, but I honestly don't know where to start. I saw this website redesign presentation Lorcan Dempsey tweeted about: http://www.ucd.ie/library/guides/powerpoint/rpan_ppt2/index.swf And started thinking, I wonder if anyone else has similar slides or project plans or advice. I of course asked the Google but I didn't really find any project plans. (If you're curious what I did find, take a look here: http://delicious.com/rosy1280/library+website-redesign) I do of course realize that every library is different, but I'm hoping that any information you all might be able to provide could help get the juices flowing.

Thanks for your help in advance.

Rosalyn
[CODE4LIB] EzProxy and recaptcha
Casting a net far and wide on this, sorry.

We're using EZproxy to proxy a website that also happens to have reCAPTCHA on it. I guess reCAPTCHA keys are tied to domain names, so when the Javascript is brought into the page via the <script /> tag, it sees that the page is 'proxy.example.edu' instead of 'www.vendorsite.com', and we end up with an error from reCAPTCHA saying:

This reCAPTCHA key isn't authorized for the given domain.

That all makes sense. But can anyone fathom a workaround?

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu
Re: [CODE4LIB] EzProxy and recaptcha
Is this something that can be done using Find/Replace with the ^A modifier?

I don't think so, after reading the documentation. But thank you for those links, Albert, I really appreciate it.

I think the ultimate issue is that, when the browser fetches the reCAPTCHA Javascript, it sends a referrer that says this is from my proxy server instead of the vendor site. So, unless EZproxy is set up to manipulate the HTTP Referer header -- which I'm thinking is unlikely, if not impossible -- then it's not something we can fix on the EZproxy side.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Bertram, Albert [bertr...@umich.edu]
Sent: Tuesday, August 25, 2009 1:08 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] EzProxy and recaptcha

Hi Dave,

Is this something that can be done using Find/Replace with the ^A modifier?

Find NAME=_PRIORREFERER VALUE=http://
Replace NAME=_PRIORREFERER VALUE=http://^A

The documentation says it only works after http:// or https://, so it may not work if you're only passing the hostname around.

http://pluto.potsdam.edu/ezproxywiki/index.php/Find_And_Replace
http://www.oclc.org/us/en/support/documentation/ezproxy/cfg/find/
http://www.oclc.org/support/documentation/ezproxy/db/lexisnexis.htm

Cheers,
Albert
Re: [CODE4LIB] EzProxy and recaptcha
I'm thinking this may be the only solution. I will mention it to the vendor. Ryan, thanks!

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Wick, Ryan [ryan.w...@oregonstate.edu]
Sent: Tuesday, August 25, 2009 1:22 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] EzProxy and recaptcha

reCAPTCHA keys are tied to a domain name by default, but they also offer global keys. From an admin page:

If you wish to use your key across a large number of domains (e.g., if you are a hosting provider, OEM, etc.), select the global key option. You may want to use a descriptive domain name such as global-key.mycompany.com

Ryan Wick
Information Technology Consultant
Special Collections
Oregon State University Libraries
http://osulibrary.oregonstate.edu/specialcollections
Re: [CODE4LIB] EzProxy and recaptcha
So I return to the lists here somewhat sheepishly to admit that the problem was solved simply by adding the reCAPTCHA domain to our EZproxy stanza with the magic Javascript directives:

DJ recaptcha.net
HJ recaptcha.net

One of those must, I'm guessing, fetch the reCAPTCHA Javascript without an HTTP referrer, or possibly even using the original vendor site's domain in the referrer, since that's the only way it will work. Either way, it solved our problem. ezproxy++

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu
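For anyone hitting the same wall, a sketch of what the full database stanza might look like with those directives in place. The vendor domain here is hypothetical; DJ/HJ are EZproxy's DomainJavaScript/HostJavaScript directives, which tell it to proxy Javascript served from the named hosts:

```
Title Vendor Site (hypothetical stanza)
URL http://www.vendorsite.com
DJ vendorsite.com
HJ www.vendorsite.com
# The fix: also proxy the reCAPTCHA domain's Javascript
DJ recaptcha.net
HJ recaptcha.net
```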
Re: [CODE4LIB] tricky mod_rewrite
Is it possible to write a .htaccess file that works *no matter* where it is located?

I don't believe so. If the .htaccess file lives in a directory inside of the Apache root directory, then you _don't_ need to specify a RewriteBase. It's really only necessary when .htaccess lives in a virtual directory outside of the Apache root.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Godmar Back [god...@gmail.com]
Sent: Wednesday, July 01, 2009 6:20 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] tricky mod_rewrite

On Wed, Jul 1, 2009 at 9:13 AM, Peter Kiraly pkir...@tesuji.eu wrote:

From: Godmar Back god...@gmail.com
is it possible to write this without hardwiring the RewriteBase in it? So that it can be used, for instance, in an .htaccess file from within any /path?

Yes, you can put it into a .htaccess file, and the URL rewrite will apply on that directory only.

You misunderstood the question; let me rephrase it: Can I write a .htaccess file without specifying the path where the script will be located in RewriteBase? For instance, consider http://code.google.com/p/tictoclookup/source/browse/trunk/standalone/.htaccess

Here, anybody who wishes to use this code has to adapt the .htaccess file to their path and change the RewriteBase entry. Is it possible to write a .htaccess file that works *no matter* where it is located, entirely based on where it is located relative to the Apache root or an Apache directory?

- Godmar
Re: [CODE4LIB] tricky mod_rewrite
They can create .htaccess files, but don't always have control of the main Apache httpd.conf or the root directory.

Just to be clear, I didn't mean just the root directory itself. If .htaccess lives within a sub-directory of the Apache root, then you _don't_ need RewriteBase. RewriteBase is only necessary when you're in a virtual directory, which is physically located outside of Apache's DocumentRoot path. Correct me if I'm wrong.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Godmar Back [god...@gmail.com]
Sent: Wednesday, July 01, 2009 7:23 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] tricky mod_rewrite

On Wed, Jul 1, 2009 at 10:18 AM, Walker, David dwal...@calstate.edu wrote:

Is it possible to write a .htaccess file that works *no matter* where it is located?

I don't believe so. If the .htaccess file lives in a directory inside of the Apache root directory, then you _don't_ need to specify a RewriteBase. It's really only necessary when .htaccess lives in a virtual directory outside of the Apache root.

I see. Unfortunately, that's the common deployment case for non-administrators (many librarians). They can create .htaccess files, but don't always have control of the main Apache httpd.conf or the root directory.

- Godmar
Re: [CODE4LIB] tricky mod_rewrite
How can I write an .htaccess that's path-independent if I like to exclude certain files in that directory, such as index.html?

This is what the Zend Framework uses. I think it's pretty clever:

RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^.*$ index.php

It basically says that, if the request is for a real, physical file or directory within your application, then Apache should go ahead and serve it up directly. If it's not, then go ahead and rewrite the request through your script (index.php).

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Godmar Back [god...@gmail.com]
Sent: Wednesday, July 01, 2009 8:47 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] tricky mod_rewrite

On Wed, Jul 1, 2009 at 10:38 AM, Walker, David dwal...@calstate.edu wrote:

They can create .htaccess files, but don't always have control of the main Apache httpd.conf or the root directory.

Just to be clear, I didn't mean just the root directory itself. If .htaccess lives within a sub-directory of the Apache root, then you _don't_ need RewriteBase. RewriteBase is only necessary when you're in a virtual directory, which is physically located outside of Apache's DocumentRoot path. Correct me if I'm wrong.

You are correct! If I omit the RewriteBase, it still works in this case. Let's have some more of that sendmail koolaid and up the challenge. How can I write an .htaccess that's path-independent if I like to exclude certain files in that directory, such as index.html? So far, I've been doing:

RewriteCond %{REQUEST_URI} !^/services/tictoclookup/standalone/index.html

to avoid running my script for index.html. How would I do that?
(Hint: the use of SERVER variables on the right-hand side in the CondPattern of a RewriteCond is not allowed, but some trickery may be possible, according to http://www.issociate.de/board/post/495372/Server-Variables_in_CondPattern_of_RewriteCond_directive.html) - Godmar
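Putting the Zend-style rules from this thread together with Godmar's goal, one path-independent sketch looks like this. Because %{REQUEST_FILENAME} is resolved against the directory where the .htaccess lives, no RewriteBase or absolute path is needed; index.php stands in for whatever script handles the rewritten requests:

```
RewriteEngine On

# Real files (index.html included) and real directories are served as-is;
# everything else is routed through the front script.
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^.*$ index.php [L]
```

Since index.html is a real file, the !-f condition already excludes it without naming any path, which sidesteps the SERVER-variables-in-CondPattern restriction entirely.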
Re: [CODE4LIB] How to access environment variables in XSL
Michael,

What XSLT processor and programming language are you using?

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Doran, Michael D [do...@uta.edu]
Sent: Friday, June 19, 2009 12:44 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] How to access environment variables in XSL

I am working with some XSL pages that serve up HTML on the web. I'm new to XSL. In my prior web development, I was accustomed to being able to access environment variables (and their values, natch) in my CGI scripts and/or via Server Side Includes. Is there an equivalent mechanism for accessing those environment variables within an XSL page? These are examples of the variables I'm referring to:

SERVER_NAME
SERVER_PORT
HTTP_HOST
DOCUMENT_URI
REMOTE_ADDR
HTTP_REFERER

In a Perl CGI script, I would do something like this:

my $server = $ENV{'SERVER_NAME'};

Or in an SSI, I could do something like this:

<!--#echo var="REMOTE_ADDR" -->

If it matters, I'm working in: Solaris/Apache/Tomcat. I've googled this but not found anything useful yet (except for other people asking the same question). Maybe I'm asking the wrong question. Any help would be appreciated.

-- Michael

# Michael Doran, Systems Librarian
# University of Texas at Arlington
# 817-272-5326 office
# 817-688-1926 mobile
# do...@uta.edu
# http://rocky.uta.edu/doran/
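For reference: XSLT itself has no built-in access to environment variables. The usual pattern is for the calling layer (a Java servlet, CGI script, etc.) to read them and hand them to the stylesheet as top-level parameters. A minimal sketch, with parameter names of my own choosing:

```xml
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <!-- Supplied by the caller, e.g. Transformer.setParameter() in Java -->
  <xsl:param name="server-name" select="'unknown'"/>
  <xsl:param name="remote-addr" select="'unknown'"/>

  <xsl:template match="/">
    <p>Served by <xsl:value-of select="$server-name"/>
       for <xsl:value-of select="$remote-addr"/></p>
  </xsl:template>

</xsl:stylesheet>
```

In Michael's Tomcat case, the servlet would call Transformer.setParameter("server-name", request.getServerName()) before running the transform; most other processors (Perl's XML::LibXSLT, PHP's XSLTProcessor, xsltproc's --stringparam) have an equivalent way to pass stylesheet parameters in.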
[CODE4LIB] openurl.info ?
It appears that the openurl.info domain name has expired. I get an error from the host:

http://www.openurl.info/registry/docs/mtx/info:ofi/fmt:kev:mtx:ctx

I've been using the registry at OCLC as a reference source for OpenURL. But all of the identifiers and links pointing to openurl.info no longer work.

http://alcme.oclc.org/openurl/servlet/OAIHandler?verb=ListSets

Is there a different place I should be going now for OpenURL info instead? Or maybe this is just a snafu?

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu
Re: [CODE4LIB] exact title searches with z39.50
I'm not sure it's a _big_ mess, though, at least for metasearching. I was just looking at our metasearch logs this morning, so did a quick count: 93% of the searches were keyword searches. Not a lot of exactness required there. It's mostly in the 7% who are doing more specific searches (author, title, subject) where the bulk of the problems lie, I suspect.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Ray Denenberg, Library of Congress [r...@loc.gov]
Sent: Tuesday, April 28, 2009 8:32 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] exact title searches with z39.50

Right, Mike. There is a long and rich history of the debate between loose and strict interpretation, in the world at large, and in particular within Z39.50, where this debate raged from the late 1980s throughout the 90s. The faction that said "If you can't give the client what it asks for, at least give them something; make them happy" was almost religious in its zeal. Those who said "If you can't give the client what it asks for, be honest about it; give them good diagnostic information, tell them a better way to formulate the request, etc. But don't pretend the transaction was a success if it wasn't" were shouted down most every time. I can't predict, but I'm just hoping that lessons have been learned from the mess that that mentality got us into.
--Ray

- Original Message -
From: Mike Taylor m...@indexdata.com
To: CODE4LIB@LISTSERV.ND.EDU
Sent: Tuesday, April 28, 2009 10:43 AM
Subject: Re: [CODE4LIB] exact title searches with z39.50

Ray Denenberg, Library of Congress writes:

The irony is that Z39.50 actually makes _much_ more effort to specify semantics than most other standards -- and yet still finds itself in the situation where many implementations do not respond correctly to the BIB-1 attribute 6=3 (completeness = complete field), which is how Eric should be able to do what he wants here.

Not that I have any good answers to this problem ... but I DO know that inventing more and more replacement standards is NOT the answer. Everything that's come along since Z39.50 has suffered from exactly the same problem, but more so.

I think this remains to be seen for SRU/CQL, in particular for the example at hand: how to search for exact title. There are two related issues: one, how arcane the standard is, and two, how closely implementations conform to the intended semantics. And clearly the first has a bearing on the second. And even I would say that Z39.50 is a bit on the arcane side when it comes to formulating a query for exact title. With SRU/CQL there is an exact relation ('exact' in 1.1, '==' in 1.2). So I would think there is less excuse for a server to apply a creative interpretation. If it cannot support exact title it should fail the search.

IMHO, this is where it breaks down 90% of the time. Servers that can't do what they're asked should say "I can't do that," but -- for reasons that seem good at the time -- nearly no server fails requests that it can sort of fulfil. Nine out of ten Z39.50 servers asked to do a whole-field search and which can't do it will instead do a word search, because it's better to give the user SOMETHING. I bet the same is true of SRU servers. (I am as guilty as anyone else; I've written servers like that.)
The idea that it's better to give the user SOMETHING might -- might -- have been true when we mostly used Z39.50 servers for interactive sessions. Now that they are mostly used as targets in metasearching, that approach is disastrous.

Mike Taylor  m...@indexdata.com  http://www.miketaylor.org.uk
"I try to take one day at a time, but sometimes several days attack me at once" -- Ashleigh Brilliant
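To make the exact-title example under discussion concrete, the request would look something like this in each protocol. In BIB-1, use attribute 1=4 is Title and completeness attribute 6=3 is "complete field"; the title string is illustrative:

```
# Z39.50, via yaz-client PQF: ask for complete-field matching on title
Z> find @attr 1=4 @attr 6=3 "roman art"

# SRU/CQL 1.2 equivalent, using the exact-match relation
dc.title == "roman art"
```

As the thread notes, a strictly compliant server that can't honor 6=3 (or ==) should fail the search with a diagnostic rather than quietly fall back to word matching.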
[CODE4LIB] Ebsco Discovery Service WAS: [CODE4LIB] Serials Solutions Summon
I see today that Ebsco announced their Discovery Service, similar to Summon. Not surprising, of course -- although note the fact that WorldCat will be included in the local index. Interesting, no?

http://www.ebscohost.com/thisTopic.php?marketID=1&topicID=1245

Anyway, nothing in the press release mentions an API. Hopefully folks will impress upon Ebsco the need for such access, as OCLC and Serials Solutions have done for their systems. Ebsco does have an API for their basic platform, so maybe it's just not mentioned.

We've taken to calling this class of systems "super databases," for what it's worth.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Yitzchak Schaffer [yitzc...@touro.edu]
Sent: Wednesday, April 22, 2009 10:21 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Serials Solutions Summon

Jonathan Rochkind wrote:

It _would_ be great if SerSol would actually give you (if you were subscribed) a feed of their harvested and normalized metadata, so you could still pay them to collect and normalize it, but then use it for your own purposes outside of Summon. I hope SerSol will consider this down the line, if Summon is successful.

This is available as a dump for their traditional holdings product, which makes it possible to do just this (i.e. use SerSol-cleaned holdings/access info in a local system). My working with SerSol has brought me to see them as essentially a great data aggregation service with some OK software bundled in. We are looking ahead to possibly using this technique by loading their data into a local ERMS. Agreed that such an availability of data would be a great service with the Summon metadata as well.

--
Yitzchak Schaffer
Systems Manager
Touro College Libraries
33 West 23rd Street
New York, NY 10010
Tel (212) 463-0400 x5230
Fax (212) 627-3197
Email yitzc...@touro.edu
Twitter /torahsyslib
Re: [CODE4LIB] Ebsco Discovery Service WAS: [CODE4LIB] Serials Solutions Summon
No, it's like Summon, where they are also going out to publishers and other orgs to harvest and index their stuff, kinda following what Google Scholar did. They'll be harvesting and indexing WorldCat, for example. This is kinda the in thing, you know.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Jonathan Rochkind [rochk...@jhu.edu]
Sent: Friday, April 24, 2009 12:58 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Ebsco Discovery Service WAS: [CODE4LIB] Serials Solutions Summon

That page is timing out for me. Does EBSCO's service include only what is held by EBSCO in the first place? (And I'm still pushing for "aggregated index" as what to call these things! Thanks to whoever suggested that.)
Re: [CODE4LIB] Serials Solutions Summon
Even though Summon is marketed as a Serials Solutions system, I tend to think of it more as coming from Proquest (the parent company, of course). Summon goes a bit beyond what Proquest and CSA have done in the past, loading outside publisher data, your local catalog records, and some other nice data (no small thing, mind you). But, like Rob and Mike, I tend to see this as an evolutionary step for a database aggregator like Proquest rather than a revolutionary one.

Obviously, database aggregators like Proquest, OCLC, and Ebsco are well positioned to do this kind of work. The problem, though, is that they are also competitors. At some point, if you want to have a truly unified local index of _all_ of your databases, you're going to have to cross aggregator lines. What happens then?

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Dr R. Sanderson [azar...@liverpool.ac.uk]
Sent: Tuesday, April 21, 2009 8:14 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Serials Solutions Summon

On Tue, 21 Apr 2009, Eric Lease Morgan wrote:

On Apr 21, 2009, at 10:55 AM, Dr R. Sanderson wrote:

How is this 'new type' of index any different from an index of OAI-PMH harvested material? Which in turn is no different from any other local search, just a different method of ingesting the data?

This new type of index is not any different in functionality from a well-implemented OAI service provider, with the exception of the type of content it contains.

Not even the type of content, just the source of the content. Eg SS have come to an agreement with the publishers to use their content, and they've stuffed it all in one big index with a nice interface. NTSH, Move Along...

Rob
Re: [CODE4LIB] Serials Solutions Summon
I've noticed that reference and instructional librarians (at least in published literature) tend to use the term federated search more often than others. And by that they mean a broadcast search, not what Ray and many others mean by that term. Library technology folk tend to use the other terms more often.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Ray Denenberg, Library of Congress [r...@loc.gov]
Sent: Tuesday, April 21, 2009 8:28 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Serials Solutions Summon

From: Thomas Dowling tdowl...@ohiolink.edu

You can define differences between meta-, federated, and broadcast search, but every discussion on the topic will be punctuated by people asking, "Wait, what's the difference again?"

Leaving aside metasearch and broadcast search (terms invented more recently), it is a shame if federated has really lost its distinction from distributed. Historically, a federated database is one that integrates multiple (autonomous) databases, so it is in effect a virtual distributed database, though a single database. I don't think that's a hard concept, and I don't think it is a trivial distinction.

--Ray
Re: [CODE4LIB] Something completely different
I know that a large percentage of the data in our MARC records is not being used for finding/gathering or even display, so in that case, what good is it?

This is, of course, a chicken-and-egg thing. The reason why a lot of MARC data remains inconsistent is precisely because it is not being used for finding or display. Anyone who has worked with a faceted search application has seen this in action. As soon as you aggregate subject headings, genre designations, etc., into facets, you begin to see all kinds of data problems that you never noticed before, because they are scattered among thousands of records that previously could only be viewed individually. Of course, fixing bad or inconsistent data is probably an order of magnitude easier than adding data to records after the fact.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Alex Dolski [alex.dol...@unlv.edu]
Sent: Monday, April 06, 2009 10:38 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Something completely different

I think Dublin Core XML is an excellent attempt at what you're talking about, if you want to consider it a bibliographic data format, which I guess could be one of its many uses. I know that a large percentage of the data in our MARC records is not being used for finding/gathering or even display, so in that case, what good is it? There is a lot of richness in those records, but it's so all-over-the-place that whatever value it might have had gets killed by all the inconsistency. In my experience, good, consistent metadata that captures the essence of an object is more useful than highly-detailed, inconsistent metadata (which all highly-detailed metadata tends to be) in a fine-grained element set.
I think there may be a cultural element to this as well, in that IR people think of metadata in terms of its utility for IR purposes (at which DC tends to be extremely practical) and catalogers think of it as a thorough-as-possible description of an object (at which DC is quite inadequate). Alex Cloutman, David wrote: I'm open to seeing new approaches to the ILS in general. A related question I had the other day, speaking of MARC, is what would an alternative bibliographic data format look like if it was designed with the intent of opening access to the data in our ILS systems to developers in a more informal manner? I was thinking of an XML format that a developer could work with without formal training, the basics of which could be learned in an hour, and could reasonably represent the essential fields of the 90% of records that are most likely to be viewed by a public library patron. In my mind, such a format would allow creators of community-based web sites to pull data from their local library, and repurpose it without having to learn a lot of arcane formats (e.g. MARC) or esoteric protocols (e.g. Z39.50). The sacrifice, of course, would be losing some of the richness MARC allows, but I think in many common situations the really complex records are not what patrons are interested in. You may want to consider prototyping this in your application. I see such an effort to be vital in making our systems relevant in future computing environments, and I am skeptical that a simple, workable solution would come out of the initial efforts of a standardization committee. Just my 2 cents. - David --- David Cloutman dclout...@co.marin.ca.us Electronic Services Librarian Marin County Free Library -Original Message- From: Code for Libraries [mailto:code4...@listserv.nd.edu] On Behalf Of Peter Schlumpf Sent: Sunday, April 05, 2009 8:40 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: [CODE4LIB] Something completely different Greetings!
I have been lurking on (or ignoring) this forum for years. And libraries too. Some of you may know me. I am the Avanti guy. I am, perhaps, the first person to try to produce an open source ILS back in 1999, though there is a David Duncan out there who tried before I did. I was there when all this stuff was coming together. Since then I have seen a lot of good things happen. There's Koha. There's Evergreen. They are good things. I have also seen first hand how libraries get screwed over and over by commercial vendors with their crappy software. I believe free software is the answer to that. I have neglected Avanti for years, but now I am ready to return to it. I want to get back to simple things. Imagine if there were no Marc records. Minimal layers of abstraction. No politics. No vendors. No SQL straightjacket. What would an ILS look like without those things? Sometimes the biggest prison is between the ears. I am in a position to do this now, and that's what I have decided to do. I am getting busy. Peter Schlumpf
Re: [CODE4LIB] Free cover images?
However, my understanding is that Worldcat forbids any use of those cover images _at all_. OCLC does return the cover image URL as part of its Z39.50 response, so I'm guessing that it is intended to be used by external applications, or at least those that are actually searching Worldcat. --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Jonathan Rochkind [rochk...@jhu.edu] Sent: Monday, March 16, 2009 1:30 PM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] Free cover images? It would be hard for them to turn it off just for services that do not have the principal purpose of driving traffic to the Amazon website and driving sales of products and services on the Amazon website. Libraries are not alone in users of AWS who do not have this principal purpose. Despite that language, it's not clear to me that Amazon actually has any particular interest in preventing such use. But they wouldn't need to switch me off technologically, if I received any communications from Amazon suggesting my use violates their ToS, I'd immediately comply with their requests. My further thoughts on this can be found here: http://bibwild.wordpress.com/2008/03/19/think-you-can-use-amazon-api-for-library-service-book-covers/ However, my understanding is that Worldcat forbids any use of those cover images _at all_. This is much more clear cut, and OCLC is much more likely to care, than Amazon's more bizarre restrictions as to purpose. It's of course up to the individual implementer, perhaps in consultation with the service provider and/or legal counsel, to decide if they are complying or not, but that's my own evaluation. I don't even know of any WorldCat APIs that would allow you to get WorldCat cover images other than through a screen-scrape though, so I'm curious how anyone is doing it, if anyone is doing it. 
Jonathan Kyle Banerjee wrote: Yah, but same could be said for Amazon. From http://aws.amazon.com/agreement/ 5.1.3. You are not permitted to use Amazon Associates Web Service with any Application or for any use that does not have, as its principal purpose, driving traffic to the Amazon Website and driving sales of products and services on the Amazon Website. Maybe libraries are under the radar, and maybe Amazon doesn't care, but getting addicted to this stuff is not without risk. If the load ever became something they cared about, they could turn it off in a snap. kyle On Mon, Mar 16, 2009 at 12:53 PM, Jonathan Rochkind rochk...@jhu.edu wrote: You can get cover images from worldcat? How? I'm pretty sure the worldcat ToS specifically disallow you from re-using those covers, even if you are managing to get them via machine access somehow. Lynch,Katherine wrote: Going along with Jonathan Rochkind, Amazon does a good job of supplying some movie images. Also in general, WorldCat, if that's an option to you. For a good example of wealth/response time, check out Gabe's video search: http://www.library.drexel.edu/video/search --- Katherine Lynch Library Webmaster Drexel University Libraries 215.895.1344 (p) 215.895.2070 (f) -Original Message- From: Code for Libraries [mailto:code4...@listserv.nd.edu] On Behalf Of Edward M. Corrado Sent: Monday, March 16, 2009 2:38 PM To: CODE4LIB@LISTSERV.ND.EDU Subject: [CODE4LIB] Free cover images? Hello all, We are reevaluating our source of cover images. At this point I have identified four possible sources of free images: 1. Amazon 2. Google Books 3. LibraryThing 4. OpenLibrary I know that there is some question whether the Amazon and Google Books images will allow this (although I've also yet to hear Amazon or Google telling libraries that use their Web services for this to cease and desist). However, besides that issue, has anyone noticed any technical problems with any of these four? 
I'm especially concerned about slow and/or inconsistent performance. Edward
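Of the four sources Edward lists, OpenLibrary is the one with no restrictive terms of service. Its cover API is URL-based: you ask for a cover by ISBN and a size letter. A minimal sketch of building those URLs (the `covers.openlibrary.org` pattern and the S/M/L size codes are OpenLibrary's scheme; verify against their current documentation before relying on it):

```python
def openlibrary_cover_url(isbn: str, size: str = "M") -> str:
    """Build an OpenLibrary cover-image URL for an ISBN.

    size is one of "S", "M", or "L" (small, medium, large).
    Hyphens and whitespace in the ISBN are stripped first.
    """
    isbn = isbn.replace("-", "").strip()
    if size not in ("S", "M", "L"):
        raise ValueError("size must be S, M, or L")
    return "http://covers.openlibrary.org/b/isbn/%s-%s.jpg" % (isbn, size)
```

In an OPAC template you would drop the returned URL straight into an `img` tag, falling back to a placeholder image when OpenLibrary returns a blank cover.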
[CODE4LIB] MARC-XML - Qualified Dublin Core XSLT
Hi All, Anyone have an XSLT style sheet to convert from MARC-XML to Qualified Dublin Core? I'm looking to load these into DSpace, if that makes a difference. Looks like LOC only has MARC-XML to Simple Dublin Core. This page [1] mentions a 'MARCXML to Qualified DC style sheets' developed at the University of Illinois, but the links are dead. --Dave [1] http://cicharvest.grainger.uiuc.edu/schemas.asp == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu
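Whatever stylesheet is used, the core of the transform is a small tag/subfield-to-element crosswalk. A sketch of that mapping outside XSLT, in Python with ElementTree (the four mappings shown are illustrative, not the full LOC or Illinois crosswalk; the MARCXML namespace is the standard LOC one):

```python
import xml.etree.ElementTree as ET

MARC_NS = "http://www.loc.gov/MARC21/slim"

# Illustrative subset of a MARC -> Dublin Core crosswalk:
# (datafield tag, subfield code) -> DC element name.
FIELD_MAP = {
    ("245", "a"): "title",
    ("100", "a"): "creator",
    ("260", "b"): "publisher",
    ("020", "a"): "identifier",
}

def marcxml_to_dc(record_xml: str) -> dict:
    """Extract a flat dict of DC elements from one MARCXML record."""
    root = ET.fromstring(record_xml)
    dc = {}
    for df in root.iter("{%s}datafield" % MARC_NS):
        tag = df.get("tag")
        for sf in df.iter("{%s}subfield" % MARC_NS):
            element = FIELD_MAP.get((tag, sf.get("code")))
            if element and sf.text:
                dc.setdefault(element, []).append(sf.text.strip())
    return dc
```

For DSpace ingest you would then serialize the dict into the `dublin_core.xml` format of a simple archive package, with a `qualifier` attribute where the qualified element calls for one.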
Re: [CODE4LIB] Dutch Code4Lib
Some of us can barely afford to get to the east coast of the United States, let alone Europe. Not that you have to cater to us poor state university folk, or anything. ;-) --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Edward M. Corrado [ecorr...@ecorrado.us] Sent: Thursday, January 22, 2009 10:07 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] Dutch Code4Lib I know there was a talk about a code4lib Europe in Portugal before. I'd love to see a European conference, but I am a little torn between making it a separate conference from Code4lib or as a location to host for Code4lib. What do people think? Edward On Thu, Jan 22, 2009 at 1:02 PM, Ed Summers e...@pobox.com wrote: Wow, this sounds too good to be true. Perhaps this is premature, but do you think there might be interest in hosting a code4lib2010 in the Netherlands? (he asks selfishly). I see you started a wiki page [1]. If at any point you want nl.code4lib.org (or something) to point somewhere just say the word. //Ed [1] http://wiki.code4lib.org/index.php/NL On Wed, Jan 21, 2009 at 12:40 PM, Posthumus, Etienne e.posthu...@tudelft.nl wrote: At various tech related library gatherings here in the Netherlands there have been discussions about setting up a regional Code4Lib (or something similar). So as a start, if there are any subscribers on this list who are in the vicinity of the Netherlands/Belgium and interested, please give me a shout. We can then argue about a date/venue/agenda for an inaugural meeting. Etienne Posthumus TU Delft Library - Digital Product Development t: +31 (0) 15 27 81 949 m: e.posthu...@tudelft.nl skype: eposthumus twitter: http://twitter.com/epoz http://www.library.tudelft.nl/ Prometheusplein 1, 2628 ZC, Delft, Netherlands
Re: [CODE4LIB] MODS-to-citation stylesheets
Do you know if it is specifically geared toward Zotero's SQLite data structures? I don't believe so, since the CSL standard, such as it is, predates Zotero. I've worked with CSL a little bit in trying to create a PHP CSL rules engine. Not a trivial task, to say the least, but long-term it would serve you very well. I think there are Python and Ruby CSL libraries, but my impression, like Jonathan's, is that they are still somewhat in the works. --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Andrew Ashton [andrew_ash...@brown.edu] Sent: Monday, January 12, 2009 8:06 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] MODS-to-citation stylesheets Thanks, I have heard of CSL but never really worked with it. Do you know if it is specifically geared toward Zotero's SQLite data structures? We're interested in generating citations in web apps from a native XML database of MODS, preferably without going through Zotero. I've experimented with generating the citations from our eXist-based system by way of Zotero, but we run into genre-authority issues. Zotero's default MODS import translator expects mods:genre authority=marcgt and our system uses other authorities. Granted, I'm working off of some older Zotero code, so I may not have the most recent info. -Andy On 1/12/09 10:51 AM, Jonathan Rochkind rochk...@jhu.edu wrote: What I've been meaning to investigate more fully is the Citation Style Language (CSL) which is used by Zotero for citation outputting--there are some other non-Zotero engines for CSL, but I'm not sure how mature/ready for production any of them are. The Zotero engine is of course in Javascript, so inconvenient (although not impossible) to re-use that code in a server-side app. I haven't really investigated what's going on with CSL, but that seems to be the 'right' way to deal with this to me. 
Once you have a CSL engine incorporated in your app, you can output not just in Chicago or MLA, but any citation style now or in the future that Zotero (or anyone else) provides a CSL file for. Thanks to Zotero (and its partners?) for developing this reusable CSL format instead of just a custom Zotero solution. Jonathan Andrew Ashton wrote: Can someone point me at any good, freely-available stylesheets to convert MODS to Chicago or MLA formatted citations? It seems like something that should be readily available, but I can't seem to find it. I'd rather not reinvent the wheel if possible... Thanks, Andy
[CODE4LIB] Good advanced search screens
I'm working on an advanced search screen as part of our WorldCat API project. WorldCat has dozens of indexes and a ton of limiters. So many, in fact, that it's rather daunting trying to design it all in a way that isn't just a big dump of fields and check boxes that only a cataloger could decipher. So I'm looking for examples of good advanced search screens (for bibliographic databases or otherwise) to gain some inspiration. Thanks! --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu
Re: [CODE4LIB] Google books js api, oclc/lccn, any problems?
So, I would assume that the 2416076 record was merged into the 24991049 record Or maybe this is an example of WorldCat's FRBR work set grouping at work? I've been struggling to wrap my mind around this recently. --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [EMAIL PROTECTED] On Behalf Of Custer, Mark [EMAIL PROTECTED] Sent: Wednesday, October 15, 2008 12:45 PM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] Google books js api, oclc/lccn, any problems? I haven't noticed any problems like that myself lately, but I have had some trouble/confusion with OCLC numbers and GBS in the past. If you do a search for OCLC number 2416076 in worldcat.org, you get directed to the book: http://www.worldcat.org/search?q=no%3A2416076&qt=advanced [which, of course, features no link to Google Books right now] but the OCLC number listed is 24991049, and no longer 2416076. So, I would assume that the 2416076 record was merged into the 24991049 record (which was just recently updated on 2008-08-28), and so I would also assume that the records retained by Google would not reflect this update? In any event, I would suspect that it might not be a problem with the GBS api, but rather with the change in metadata that is tracked by OCLC but not by Google (since they wouldn't have known). Do you have any other suspect examples, though, that might provide other evidence? Mark Custer -Original Message- From: Code for Libraries [mailto:[EMAIL PROTECTED] On Behalf Of Jonathan Rochkind Sent: Wednesday, October 15, 2008 3:04 PM To: CODE4LIB@LISTSERV.ND.EDU Subject: [CODE4LIB] Google books js api, oclc/lccn, any problems? Anyone else noticed any problems with the GBS javascript api? It seems to have stopped returning hits for me for LCCN or OCLCnum, where it used to work. Seems to work now only for ISBN. 
Here's a URL call that used to return hits, and now doesn't: http://books.google.com/books?jscmd=viewapi&callback=gbscallback&bibkeys=OCLC%3A2416076%2CLCCN%3A34025476 Jonathan --- Jonathan Rochkind Digital Services Software Engineer The Sheridan Libraries Johns Hopkins University 410.516.8886 [EMAIL PROTECTED]
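The call Jonathan shows takes a comma-separated list of `ISBN:`/`OCLC:`/`LCCN:` bibkeys in a single parameter. A sketch of assembling it with proper query encoding (the parameter names come from his example URL; the helper itself is illustrative):

```python
from urllib.parse import urlencode

def gbs_viewapi_url(bibkeys, callback="gbscallback"):
    """Build a Google Book Search Dynamic Links (viewapi) URL.

    bibkeys is a list of keys like "OCLC:2416076" or "LCCN:34025476";
    the API expects them comma-separated in one parameter.
    """
    params = {
        "jscmd": "viewapi",
        "callback": callback,
        "bibkeys": ",".join(bibkeys),
    }
    return "http://books.google.com/books?" + urlencode(params)
```

Encoding the colons and commas through `urlencode` rather than string concatenation avoids the stray characters that make calls like this silently return nothing.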
Re: [CODE4LIB] OAI-PMH Harvester in PHP?
Thanks everyone! --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [EMAIL PROTECTED] On Behalf Of Mark Jordan [EMAIL PROTECTED] Sent: Monday, October 06, 2008 2:02 PM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] OAI-PMH Harvester in PHP? David, if you need a harvester with a web GUI for administration and searching, check out the PKP Metadata Harvester at http://pkp.sfu.ca/harvester, which runs on PHP and mysql. We've got a development version in cvs that uses Lucene for indexing, if you want more info let me know. Small world, I was looking at some of your WorldCat presentations just now Mark Mark Jordan Head of Library Systems W.A.C. Bennett Library, Simon Fraser University Burnaby, British Columbia, V5A 1S6, Canada Voice: 778.782.5753 / Fax: 778.782.3023 [EMAIL PROTECTED] - David Walker [EMAIL PROTECTED] wrote: Hi all, Anyone know of any OAI-PMH harvesting software written in PHP? I've seen the code that can serve as a provider, but I'm looking for a harvester. Thanks! --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu
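For anyone who ends up rolling their own harvester, the protocol loop itself is small: issue a ListRecords request, collect the records, and re-request with the resumptionToken until the server stops returning one. A sketch of the response-parsing half (the namespace is the standard OAI-PMH 2.0 one; fetching is left to whatever HTTP client you prefer):

```python
import xml.etree.ElementTree as ET

OAI_NS = "http://www.openarchives.org/OAI/2.0/"

def parse_list_records(xml_text):
    """Parse one OAI-PMH ListRecords response.

    Returns (identifiers, resumption_token); the token is None when
    the server has no further pages to return.
    """
    root = ET.fromstring(xml_text)
    ids = [
        h.findtext("{%s}identifier" % OAI_NS)
        for h in root.iter("{%s}header" % OAI_NS)
    ]
    token_el = root.find(".//{%s}resumptionToken" % OAI_NS)
    token = token_el.text if token_el is not None and token_el.text else None
    return ids, token
```

The driver loop is then just: request with `verb=ListRecords&metadataPrefix=oai_dc` the first time, and `verb=ListRecords&resumptionToken=...` thereafter, stopping when the token comes back empty.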
Re: [CODE4LIB] creating call number browse
a decent UI is probably going to be a bigger job I've always felt that the call number browse was a really useful option, but the most disastrously implemented feature in most ILS catalog interfaces. I think the problem is that we're focusing on the task -- browsing the shelf -- as opposed to the *goal*, which is, I think, simply to show users books that are related to the one they are looking at. If you treat it like that (here are books that are related to this book) and dispense with the notion of call numbers and shelves in the interface (even if what you're doing behind the scenes is in fact a call number browse) then I think you can arrive at a much simpler and straight-forward UI for users. I would treat it little differently from Amazon's recommendations feature, for example. --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [EMAIL PROTECTED] On Behalf Of Stephens, Owen [EMAIL PROTECTED] Sent: Wednesday, September 17, 2008 9:17 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] creating call number browse I'm not sure, but my guess would be that the example you give isn't really a 'browse index' function, but rather creates a search result set and presents it in a specific way (i.e. via cover images) sorted by call number (by the look of it, it has an ID of the bib record as input, and it displays this book and 10 before it, and 10 after it, in call number order. Whether this is how bibliocommons achieves it or not is perhaps beside the point - this is how I think I would approach it. I'm winging it here, but if I was doing some quick and very dirty here: A simple db table with fields: Database ID (numeric counter auto-increment) Bib record ID URIs to book covers (or more likely the relevant information to create the URIs such as ISBN) Call number To start, get a report from your ILS with this info in it, sorted by Call Number. 
To populate the table, import your data (sorted in Call Number order). The Database ID will be created on import, automatically in call number order (there are other, almost certainly better, ways of handling this, but this is simple I think) To create your shelf browse given a Bib ID select that record and get the database ID. Then requery selecting all records which have database IDs +-10 of the one you have just retrieved. Output results in appropriate format (e.g. html) using book cover URIs to display the images. Obviously with this approach, you'd need to recreate your data table regularly to keep it up to date (resetting your Database ID if you want). Well - just how I'd do it if I wanted something up and running quickly. As Andy notes, a decent UI is probably going to be a bigger job ;) Owen Owen Stephens Assistant Director: eStrategy and Information Resources Central Library Imperial College London South Kensington Campus London SW7 2AZ t: +44 (0)20 7594 8829 e: [EMAIL PROTECTED] -Original Message- From: Code for Libraries [mailto:[EMAIL PROTECTED] On Behalf Of Emily Lynema Sent: 17 September 2008 16:46 To: CODE4LIB@LISTSERV.ND.EDU Subject: [CODE4LIB] creating call number browse Hey all, I would love to tackle the issue of creating a really cool call number browse tool that utilizes book covers, etc. However, I'd like to do this outside of my ILS/OPAC. What I don't know is whether there are any indexing / SQL / query techniques that could be used to browse forward and backward in an index like this. Has anyone else worked on developing a tool like this outside of the OPAC? I guess I would be perfectly happy even if it was something I could build directly on top of the ILS database and its indexes (we use SirsiDynix Unicorn). I wanted to throw a feeler out there before trying to dream up some wild scheme on my own. -emily P.S. 
The version of BiblioCommons released at Oakville Public Library has a sweet call number browse function accessible from the full record page. I would love to know how that was accomplished. http://opl.bibliocommons.com/item/show/1413841_mars -- Emily Lynema Systems Librarian for Digital Projects Information Technology, NCSU Libraries 919-513-8031 [EMAIL PROTECTED]
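Owen's quick-and-dirty approach above reduces to one presorted table plus one window query. A sketch with SQLite (the table and column names are illustrative; in practice the records iterable would come from the ILS report he describes):

```python
import sqlite3

def build_shelf_table(conn, records):
    """records: iterable of (bib_id, call_number), presorted by call number.

    The autoincrementing integer key then doubles as the shelf position,
    exactly as in Owen's Database ID scheme.
    """
    conn.execute(
        "CREATE TABLE shelf (pos INTEGER PRIMARY KEY AUTOINCREMENT,"
        " bib_id TEXT, call_number TEXT)"
    )
    conn.executemany(
        "INSERT INTO shelf (bib_id, call_number) VALUES (?, ?)", records
    )

def shelf_neighbors(conn, bib_id, span=10):
    """Return the call-number neighborhood around one bib record."""
    (pos,) = conn.execute(
        "SELECT pos FROM shelf WHERE bib_id = ?", (bib_id,)
    ).fetchone()
    rows = conn.execute(
        "SELECT bib_id, call_number FROM shelf"
        " WHERE pos BETWEEN ? AND ? ORDER BY pos",
        (pos - span, pos + span),
    )
    return rows.fetchall()
```

The nightly rebuild Owen mentions is then just dropping and recreating the table from a fresh sorted export.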
[CODE4LIB] Innovative DLF ILS-DI code WAS: [CODE4LIB] Update: DLF ILS-DI Developers' Workshop Aug 7
Hi all, I'm working on converting a screen-scraping class I have, written in PHP, for looking up bib and availability information in Innovative systems to the new ILS-DI specification, and had a couple of questions: 1. Is there a place (other than the workshop) to discuss issues or questions I might have? A listserv perhaps? 2. Is anyone else thinking about, or currently working on, an implementation for Innovative? Since the company has not agreed to work with the library community on this, we're kind of on our own. I've got a pretty good scraper that can accommodate most of the abstract functions in the spec. But wanted to see if others did too, so we might combine efforts. Thanks! --Dave == David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries [EMAIL PROTECTED] On Behalf Of Emily Lynema [EMAIL PROTECTED] Sent: Thursday, July 10, 2008 9:22 PM To: CODE4LIB@LISTSERV.ND.EDU Subject: [CODE4LIB] Update: DLF ILS-DI Developers' Workshop Aug 7 Now that the DLF technical recommendation is officially published [1], DLF is trying to help maintain momentum and build a community of implementation around this project. Toward that end, an ILS-DI Developers' Workshop has been organized in August for folks to hash out questions and answers about implementing the first level of the recommendation, Basic Discovery Interfaces. While this meeting is invitation only to keep the size down, feel free to let me know if you are involved in this type of implementation and think you could contribute to this meeting. Of course, a summary of the outcome of the meeting will be made available in its aftermath. It is even possible there may be some suggested revisions or clarifications to the recommendation as we actually begin to write code. I've included the text of the original invitation below for all to see. 
We hope to keep this topic of APIs and interoperability for our integrated library systems fresh on your mind, especially as many of you are building these types of APIs literally as we speak. -emily lynema [1] http://diglib.org/architectures/ilsdi/ - Greetings - As you may know, the Digital Library Federation has released the technical recommendation of its ILS Discovery Interface (ILS-DI) Task Group. This document recommends basic, standard interfaces -- known as the Berkeley Accord -- for integrating the data and services of integrated library systems (ILS) with new applications supporting user discovery. The documentation is available at : http://diglib.org/architectures/ilsdi/ . The basic discovery interfaces permit libraries to deploy new discovery services to meet ever-growing user expectations in the Web 2.0 era, take full advantage of advanced ILS data management and services, and encourage a strong, innovative community and marketplace in next-generation library management and discovery applications. DLF is planning a developer's workshop for Thursday, August 7, at the Berkeley Faculty Club on the UC Berkeley campus, in which parties supporting the Basic Discovery Interfaces can learn more about the interfaces and how they should be implemented, meet with potential development partners, and begin the formation of a community building effective software services. Because of the nature of this meeting, we recommend that staff with a high degree of technical knowledge of your platform and bibliographic standards and protocols receive priority for attendance. The Berkeley Accord and the DLF ILS-DI recommendation are important first steps in building advanced, interoperable architectures for bibliographic discovery and use in the networked world.
[CODE4LIB] Ser Sol 360 Search
Hi All, I'm giving a conference presentation later this month on metasearch. If your library licenses Serial Solutions' metasearch system, would you mind contacting me off-list? I'd like to ask a couple of questions. Thanks! --Dave --- David Walker Library Web Services Manager California State University http://xerxes.calstate.edu
Re: [CODE4LIB] III SIP server
Brilliant. Thanks Mark! --Dave --- David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries on behalf of Mark Ellis Sent: Thu 6/12/2008 10:14 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] III SIP server All, I've attached two versions of a SIP client script that retrieves patron information--one for telnet based servers and the other for sockets based ones. All the functions excepting PatronInformation() are applicable to other SIP messages, so while this isn't the full client library you're dreaming about, it could still save you some head banging. (you may still want to bang my head after looking at it though) The 3M SIP2 SDK (http://www.yourlibrary.ca/mark/SIP2_SDK.ZIP) includes the protocol definition along with a Windows client and server you can use for testing. The client is particularly useful as you can use it interactively with your ILS. HTH, Mark -Original Message- From: Code for Libraries [mailto:[EMAIL PROTECTED] On Behalf Of Walker, David Sent: Wednesday, June 11, 2008 3:00 PM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] III SIP server I'd like to see the PHP code, Mark. Would you mind sending it to me, or perhaps posting it somewhere where we all might download it? Thanks! --Dave --- David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries on behalf of Mark Ellis Sent: Wed 6/11/2008 8:42 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] III SIP server Wayne, What are you using for a client? I have some PHP for getting patron information, but there's nothing III specific about it, so I don't know if it'd be helpful. Do you have the 3M SIP SDK? 
Mark Mark Ellis Manager, Information Technology Richmond Public Library Richmond, BC (604) 231-6410 www.yourlibrary.ca -Original Message- From: Code for Libraries [mailto:[EMAIL PROTECTED] On Behalf Of Schneider, Wayne Sent: Tuesday, June 10, 2008 4:29 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: [CODE4LIB] III SIP server Has anyone out there attempted to code to III's SIP server? We're new to III, having just merged with another library system that is a III customer, and were hoping to be able to use SIP for some basic customer account information - nothing too fancy, just basically some of what is supported in version 2.00 of the protocol. Name and address would be nice (name we seem to get, but no address), items out, items on hold, fines and fees, etc. Our other ILS, SirsiDynix Horizon, has pretty good support for SIP 2.00 features, only somewhat idiosyncratic, with a few fairly well-documented extensions, and we were hoping to find the same level of support in III's server. Is this an entirely unreasonable expectation? wayne -- Wayne Schneider ILS System Administrator Hennepin County Library 952.847.8656 [EMAIL PROTECTED]
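For anyone writing a SIP2 client like Mark's from scratch: every SIP2 message can end with an error-detection checksum, computed by summing the ASCII values of all characters up to and including the AZ checksum-field label, taking the two's complement of the low 16 bits, and appending the result as four uppercase hex digits. A sketch (the protocol definition is in the 3M SIP2 SDK Mark links to; check your message framing against it):

```python
def sip2_checksum(msg: str) -> str:
    """Compute the 4-hex-digit SIP2 checksum for a message.

    msg should be everything up to and including the "AZ" checksum
    field label; the returned digits are appended directly after it,
    followed by a carriage return.
    """
    total = sum(ord(c) for c in msg)  # sum of ASCII values
    return "%04X" % ((-total) & 0xFFFF)  # two's complement, low 16 bits
```

The same routine verifies incoming responses: recompute over the received message minus its last four digits and compare.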
Re: [CODE4LIB] Life after Expect
Has anyone on this list considered this option or tried it out? We (and no doubt many others) have been using Innovative's Z39.50 server with our metasearch and link resolver systems -- not with a yaz-proxy/SRU interface on top, but just directly using Z39.50 clients. It works well with our link resolver, since in that context we are doing known item searches, usually with a standard number. And it gives us back availability and journal holdings data probably better than the other ILS systems we work with. It doesn't work as well in our metasearch environment, however. There are a number of bugs with how the keyword search is implemented in the Z39.50 server, which I've never been able to resolve (would love for someone to correct me here). It reports erroneous hit counts and sorts the records in system id order (so oldest records come back first), and is sometimes flaky when implemented for an Innreach system. It really is quite horrible. --Dave --- David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries on behalf of Karen Coombs Sent: Thu 5/15/2008 10:03 AM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] Life after Expect There are some very useful things one can get back from III via Z39.50. Terry Reese from OSU showed me that if you send a Z39.50 request for the opac syntax you can actually get back holdings information and whether or not something is checked out. I had no trouble playing around with this via the Z39.50 piece of MarcEdit and getting back the data I wanted. Based on this info and reading about yaz-proxy, I've been hoping that by implementing yaz-proxy that I might setup an SRW/U service for III. Has anyone on this list considered this option or tried it out? Karen
Re: [CODE4LIB] Latest OpenLibrary.org release
Nobody in the *library world* uses it, much less non-libraries. Ironically, I was just checking email in between using the WorldCat SRU server. In addition to the systems Rob mentioned, there are also article databases like JSTOR and Springerlink that implement SRU, and every metasearch system in use in libraries today consumes SRU web services. But I think the folks at OpenLibrary should implement an OpenSearch interface. I mean come on! OpenLibrary, OpenSearch. A match made in heaven! ;-) --Dave --- David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries on behalf of Casey Durfee Sent: Wed 5/7/2008 1:12 PM To: CODE4LIB@LISTSERV.ND.EDU Subject: Re: [CODE4LIB] Latest OpenLibrary.org release SRU is crap, in my opinion -- overengineered and under-thought, incomprehensible to non-librarians and burdened by the weight of history. The notion that it was designed to be used by all kinds of clients on all kinds of data is irrelevant in my book. Nobody in the *library world* uses it, much less non-libraries. APIs are for use. You don't get any points for ideological correctness. A non-librarian could look at that API document, understand it all, and start working with it right away. There is no way you can say that about SRU. Kudos to the OpenLibrary team, whatever the reason was, for coming up with something better that people outside the library world might actually be willing to use. On Wed, May 7, 2008 at 12:55 PM, Dr R. Sanderson [EMAIL PROTECTED] wrote: I'm the only non-techie on the team, so I don't know that much about SRU. (Our head programmer lives in India, and is presumably asleep at the moment, otherwise I'd ask him!) Is it an interface that is used primarily by libraries? We are definitely hoping that our API will be used by all kinds, so perhaps that's the reasoning. 
It's designed to be used by all kinds of clients on all kinds of data, but is from the library world so perhaps the most well defined use cases are in this arena. Have a look at: http://www.loc.gov/standards/sru/ But this is an Open Source project, so if anyone would like to volunteer to build an SRU interface... you can! Please do! :-) I feel a student project coming on. :) Rob
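For anyone weighing the two sides of this thread, the request mechanics of SRU are at least simple to demonstrate: a searchRetrieve operation is an ordinary GET with a CQL query in the URL. A minimal sketch -- the base URL is a placeholder, but the parameter names come from the SRU 1.1 specification:

```python
from urllib.parse import urlencode

# Sketch: compose an SRU 1.1 searchRetrieve request URL from a CQL query.
def sru_url(base: str, cql: str, max_records: int = 10) -> str:
    params = {
        "version": "1.1",
        "operation": "searchRetrieve",
        "query": cql,                    # CQL, e.g. dc.title = "open library"
        "maximumRecords": max_records,
    }
    return base + "?" + urlencode(params)

url = sru_url("http://example.org/sru", 'dc.title = "open library"')
```
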
Re: [CODE4LIB] Using the balancer webapp in Jakarta tomcat
If there's something better I've had good success with UrlRewriteFilter, which is very much like mod_rewrite. http://tuckey.org/urlrewrite/ --Dave --- David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries on behalf of Simon Huggard Sent: Sun 1/13/2008 10:53 PM To: CODE4LIB@listserv.nd.edu Subject: [CODE4LIB] Using the balancer webapp in Jakarta tomcat Hi all. Has anyone used the balancer in Jakarta-Tomcat? http://tomcat.apache.org/tomcat-5.5-doc/balancer-howto.html I'm probably trying to get it to do something that it wasn't designed to do, but has anyone got it to do a redirect to a new URL but with the full URL - including the data at the end of the URL? For example I'm trying to get it to do a simple redirect from the OLD URL: http://myurl.edu.au/balancer/somethinghere to NEW URL: http://mynewurl.edu.au/balancer/somethinghere I can't get this to work. I've tried using the general redirect in the rules.xml file as follows:

<?xml version="1.0" encoding="UTF-8"?>
<rules>
  <rule className="org.apache.webapp.balancer.rules.AcceptEverythingRule"
        redirectUrl="http://newurl.edu.au/balancer/" />
</rules>

But that just redirects everything to the static URL: http://newurl.edu.au/balancer/ And if I try to be clever, using the string match rule:

<rule className="org.apache.webapp.balancer.rules.URLStringMatchRule"
      targetString="*"
      redirectUrl="http://mynewurl.edu.au/balancer/somethinghere/" />

this fails as well. I've tried putting asterisks, variables (such as $(URI)) and any other wildcards I can think of, but nothing works. Has anyone been able to get the balancer application to just change the URL stem and filter that through to a new base URL with the rest of the URL intact? If there's something better in Tomcat or another java-based container out there that people are using (not Apache) I'd like to know what it is. 
Many thanks, Simon -- ___ Simon Huggard Systems Manager Information Systems Department Monash University Library Monash University Building 4, Monash University, Vic, Australia 3800 Phone: +61 3 9905 9138 Fax: + 61 3 9905 2610 Mobile: 0425 774 260 (International : +61 425 774 260)
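The stem-preserving redirect Simon is after boils down to a single capture-group rewrite, which is what UrlRewriteFilter (or mod_rewrite) does with a rule along the lines of ^/balancer/(.*)$. A language-neutral sketch of that same logic, using the example URLs from the thread:

```python
import re

# Sketch: a regex stem rewrite -- swap the host/stem, keep the trailing
# path intact via the capture group, exactly what the balancer rules
# above fail to do.
OLD_STEM = re.compile(r"^http://myurl\.edu\.au/balancer/(.*)$")

def rewrite(url: str) -> str:
    """Rewrite the stem; URLs that don't match pass through unchanged."""
    return OLD_STEM.sub(r"http://mynewurl.edu.au/balancer/\1", url)

rewrite("http://myurl.edu.au/balancer/somethinghere")
# -> "http://mynewurl.edu.au/balancer/somethinghere"
```
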
[CODE4LIB] Job: Web Librarian
- WEB LIBRARIAN - The California State University San Marcos Library is seeking a creative, energetic librarian to provide vision and leadership in designing, developing, and supporting the Library's Web sites. Senior Assistant Librarian, tenure-track position with a beginning salary of $55,944. CSU San Marcos is an Equal Opportunity/Title IX Employer. The University has a strong commitment to the principles of diversity and, in that spirit, seeks a broad spectrum of candidates including women, members of minority groups and people with disabilities. Send letter of application addressing the qualifications, resume, URL of a live sample Web site built by the candidate submitted with the initial application, and names of three references to: Marion T. Reid, Dean, Library, CSU San Marcos, San Marcos, CA, 92096-0001 or electronically to [EMAIL PROTECTED] Review of applications begins January 18, 2008. Position open until filled. Full position announcement: http://library.csusm.edu/about/jobs/ --- David Walker Library Web Services Manager California State University http://xerxes.calstate.edu
Re: [CODE4LIB] Z39.50 for III Database?
Here is the class code for my screen scraper. Rather simple, but can be useful in certain cases. http://xerxes.calstate.edu/source/iii/InnopacWeb.zip --Dave --- David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries on behalf of Godmar Back Sent: Tue 5/1/2007 3:32 PM To: CODE4LIB@listserv.nd.edu Subject: Re: [CODE4LIB] Z39.50 for III Database? If I may follow up on an earlier discussion [ relevant parts are included below ] regarding how to extract holdings information from III or other catalogs. I have one thing to offer and one thing to request. I'll start with the offering: MAJAX. MAJAX is a JavaScript library that screenscrapes III catalogs and can include the results so obtained into any document served from the same domain. (The URL of the current code is http://libx.org/majax/majax.html ; a demo is at http://libx.org/majax/majaxtest4.html ) After an initial, somewhat clumsy approach, we've now adopted an approach that's similar to COinS. For instance, to include holdings information for a book into a website, all you have to do is include a <span class="majax-showholdings" title="iXXX"></span> in your HTML, and include MAJAX via a single <script> element, which will result in that SPAN being replaced with the holdings of the book with ISBN XXX. It also supports bib record number and title. It's so easy a cave librarian could do it. It can be done directly from the WebBridge management panel for those of you who are damned to use WebBridge. Of course, the underlying JavaScript API is still available for more advanced users. MAJAX has been released under the LGPL. Now for the thing to request. Are there any reusable, open source scripts out there that implement a REST interface that screenscrapes or otherwise efficiently accesses a III catalog? David and James have provided links, but no code. I would be grateful for anything I could reuse and don't have to reimplement. 
Here's what I envision:

Interface: REST
Input: search terms/type -- maybe OpenURL v0.1 syntax, or another adopted standard, or something custom, but ideally simple.
Output: XML -- maybe MARC XML with 852 (or whatever the number is) holdings records -- similar to what David's screen scrape test provides. Ideally XML that comes with a schema and validates against it. Maybe JSON like James's scripts (?)
Implementation: Something that a cave librarian could deploy -- good candidates are PHP and possibly Perl-based CGI, but one could conceive of others. Nothing that requires elaborate server setups or installing custom frameworks.

Thank you for any pointers/suggestions you may have. - Godmar On 3/4/07, Birkin James Diana [EMAIL PROTECTED] wrote: On Mar 1, 2007, at 5:23 PM, Walker, David wrote: http://walkertr.csusm.edu/scrape/test.htm Very cool; works on our III catalog! Nathan Mealey -- I also used the screenscrape method to get info we needed for a couple of ISBN-based projects, not knowing at the time about the yaz-z39.50-OPAC option. By implementing this in the form of a web service, I can switch the work-horse code without affecting other apps, and minimize session concerns. http://dl.lib.brown.edu/soa_services/josiah_status/examples.php http://dl.lib.brown.edu/soa_services/josiah_status/tests/InfoHolderTest.php (The returned JSON info is more comprehensible via view-source.) --- Birkin James Diana Programmer, Web Services Brown University Library [EMAIL PROTECTED]
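The output half of what Godmar envisions -- MARC XML with holdings attached as 852 fields -- is easy to mock up. A minimal sketch: the namespace is the real MARCXML one, but the holdings dict keys and the subfield mapping ($b location, $h call number) are my own illustrative choices, not III's or David's actual output format:

```python
import xml.etree.ElementTree as ET

MARCXML_NS = "http://www.loc.gov/MARC21/slim"

def add_852_holdings(record: ET.Element, holdings: list) -> None:
    """Append one 852 datafield per holding ($b location, $h call number)."""
    for h in holdings:
        df = ET.SubElement(record, f"{{{MARCXML_NS}}}datafield",
                           tag="852", ind1=" ", ind2=" ")
        for code, key in (("b", "location"), ("h", "call_number")):
            sf = ET.SubElement(df, f"{{{MARCXML_NS}}}subfield", code=code)
            sf.text = h[key]

# Build a bare record and attach one holding, as scraped from the OPAC.
record = ET.Element(f"{{{MARCXML_NS}}}record")
add_852_holdings(record, [{"location": "Newman 4th Floor",
                           "call_number": "QA9.58 .A43"}])
```
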
Re: [CODE4LIB] Z39.50 for III Database?
Nathan, You may want to consider some alternatives to Z39.50. As Marc and Godmar have pointed out, you can set the record format to 'OPAC' in order to get holdings information in a Z39.50 search, but the III Z39.50 server has some key limitations. First, the III Z39 Server only allows you to search on a handful of indexes. If you have an exported list of records, you will likely want to be able to check item status using the bibliographic record id number, since that is the unique identifier for each record. But the III Z39 server does not support a search for bib id. That forces you to look up the record using one of the supported indexes, which may not be sufficiently unique for your purposes. Second, the III Z39 Server does not return results in the same fashion as the web interface. In particular, any type of keyword search with a boolean operator behaves erratically, returning different results than you would expect. That may not be crucial for your project, but it is something that has plagued our metasearch initiative here in the CSU for years. Some alternatives to consider: (1) The Innovative catalog does allow you to get an XML representation of any bibliographic or item record using the 'xrecord' parameter, like this: your.catalog.edu/xrecord=b1235478a You don't need to have XML Server for this, it is installed by default. However, it does not attach item records to a request for a bib record. So the only way to check the status of your holdings is to know in advance the item record id numbers for each bib record, and query each in turn. (2) Screen scraping. I know, I know, you'll say that's ugly. It is, but honestly it is the approach we are leaning towards here for a number of things. Here is an example using some PHP code I have. http://walkertr.csusm.edu/scrape/test.htm Type in the server address and a bib record number. 
It will return the record to you in MARC-XML with the item records attached as a series of 852 fields (you could change that behavior if you need to). --Dave --- David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries on behalf of Godmar Back Sent: Wed 2/28/2007 9:34 AM To: CODE4LIB@listserv.nd.edu Subject: Re: [CODE4LIB] Z39.50 for III Database? Use yaz. This will give you basic holdings information. Example:

yaz-client
Z> open addison.vt.edu:210/innopac
Connecting...OK.
Sent initrequest.
Connection accepted by v2 target.
ID     : Z39.50-III
Name   : z39-innopac
Version: 1
UserInformationfield: { OCTETSTRING(len=52) Innovative Interfaces Inc. Z39.50 SERVER version 1.1 }
Guessing visiblestring: 'Innovative Interfaces Inc. Z39.50 SERVER version 1.1'
Options: search present scan namedResultSets
Elapsed: 0.083577
Z> format opac
Z> find @attr 1=1003 @attr 4=1 knuth donald
Sent searchRequest.
Received SearchResponse.
Search was a success. Number of hits: 19, setno 1
records returned: 0
Elapsed: 0.007483
Z> show 1
Sent presentRequest (1+1).
Records: 1
[INNOPAC]Record type: OPAC
Record type: USmarc
01013nam 22002658a 4500
001 ocm07948639
008 820506s1981 gw b 101 0 eng
010    $a 81018418
020    $a 0387111573 (U.S.)
035    $a 0501-40660
040    $a DLC $c DLC
049    $a VPII
050 0  $a QA9.58 $b .A43
245 00 $a Algorithms in modern mathematics and computer science : $b proceedings, Urgench, Uzbek SSR, September 16-22, 1979 / $c edited by A.P. Ershov and D.E. Knuth.
260    $a Berlin ; $a New York : $b Springer-Verlag, $c 1981.
263    $a 8111.
300    $a xi, 487 p. : $b ill. ; $c 24 cm.
440  0 $a Lecture notes in computer science ; $v 122.
500    $a The symposium was organized by the Academy of Sciences of the Uzbek S.S.R.--Pref.
504    $a Includes bibliographies and index.
650  0 $a Algorithms $v Congresses.
650  0 $a Computer programming $v Congresses.
700 1  $a Ershov, A. P. $q (Andreĭ Petrovich)
700 1  $a Knuth, Donald Ervin, $d 1938-
710 2  $a Ŭzbekiston SSR fanlar akademii͡asi.
Data holdings 0
localLocation: Newman 4th Floor
callNumber: QA9.58 .A43
publicNote: AVAILABLE
nextResultSetPosition = 2
Elapsed: 0.192699

Caveat: If you want XML, *do not* ask III's Z39.50 for the Z39.50 xml format. Their XML is ill-formed. Instead, ask III's server for records with holdings information using the opac format, as shown above, then have yaz convert that into well-formed XML. See http://lists.indexdata.dk/pipermail/yazlist/2005-December/001485.html for how to do that using yaz's PHP binding. - Godmar On 2/28/07, Nathan Mealey [EMAIL PROTECTED] wrote: Simmons is a III-based library. I'm in the midst of developing an application that uses a subset of our records that I've exported by creating a list and then pushing it into a MySQL database. We did not purchase the XMLServer module during the time that it was available, and so I don't have that as an option for querying our III database. What I'm
[CODE4LIB] Job: Web Librarian, California State University San Marcos
Excuse the cross-posting: -- Web Librarian -- Tenure-track appointment at the Senior Assistant Librarian level (beginning salary $51,852). The California State University San Marcos Library is seeking a creative, energetic librarian to provide vision and leadership in designing, developing, and supporting the Library's virtual presence. The Web Librarian is responsible for all aspects of the Library's website. The goal is a dynamic, interactive environment that integrates a variety of disparate library technologies seamlessly into a highly usable website focused on student learning and research. Currently, the position primarily supports provision of traditional library services and resources in the online environment. Growing areas of interest are facilitating design of effective online instruction, providing user customization and social features, and increased support of multimedia environments. Cal State San Marcos is a young, growing campus, and the Library is at the cutting-edge of technology. This position offers a great deal of creative freedom, and the Library is looking for a visionary, self-motivated individual who is, at the same time, able to build consensus and communicate effectively with librarians and staff to collaboratively design and offer innovative, new digital library services to our community. The position involves understanding principles of library science, information architecture, pedagogy, and human/computer interface issues, particularly user search behaviors. Technical responsibilities include web design, site development and maintenance, some programming and database work, graphic design, and multimedia development. The position has a variety of project management responsibilities including coordinating task groups, facilitating communication, developing project specifications, and conducting project evaluations. Staff training and active involvement with students in a reference or instructional role is expected. Specific Duties: 1. 
Collaborate with library faculty and staff to articulate a vision and develop strategic plans for the library websites and web-based services. Effectively manage projects with the involvement of colleagues, users and other stakeholders. 2. Design websites to enable students to effectively use library services and resources for their learning and research needs. Apply principles of usability and accessibility to develop effective site interfaces and navigation structures. Make effective use of graphics and multimedia to convey information in a complex environment. Support design of online instruction based on current learning theories and instructional technologies. 3. As webmaster for library websites, coordinate content development, evaluate content and services, develop guidelines and standards, and manage the daily maintenance of the websites. Instruct and support library faculty and staff in their work on the web. Develop written tutorials and other documentation to support use of the website. 4. Build a robust website infrastructure using web authoring, scripting, mark-up languages and relational database tools. Provide a seamless web environment and integrate various library applications and services using current linking, metasearch, and other web technologies. 5. Anticipate web trends, investigate their application to the academic library and develop new web-based services. 6. Work on a regular basis with students in a reference or instructional capacity. 7. Meet faculty tenure obligations, including developing a research agenda, publishing, and serving on library, university, and/or national committees. Qualifications Required: * ALA-accredited MLS or equivalent. * Demonstrated knowledge of effective web interface design, information architecture, human/computer interface (HCI) theory and accessibility. Experience conducting usability studies and other user interface design testing. 
* Demonstrated ability to build effective websites using state-of-the-art development tools. * Ability to adapt in fast-changing environments. Innovative with a willingness to take risks and consider new and untested approaches and use creative problem-solving skills. Demonstrated ability to seek out and learn new technology. * Knowledge of best practices, standards, issues and trends relevant to web and information technology in academic libraries. * Demonstrated commitment to working with and serving people. Ability to inspire a shared vision and work effectively with colleagues, students and faculty in a collegial environment. * Excellent written and oral communication skills. * Ability to meet tenure requirements and desire to play a visible role in the academic community. Qualifications Desired * Demonstrated experience in web interface design, site management, or content development. * Demonstrated skill with web authoring tools, scripting and
Re: [CODE4LIB] Getting data from Voyager into XML?
One thing I am hoping that can come out of the preconference is a standard XSLT doc. Is there XSLT to do this sort of thing with dates available? I am, however, skeptical of a purely MARC -> XSLT -> Solr solution. Most XSLT processors do, of course, allow you to write your own extension functions -- typically in something like Javascript -- and embed those within your stylesheet. Although that would neatly package both the transformation abilities of XSLT and the need for simple string manipulation from a scripting language in one file, that whole approach is rather processor-specific, defeating the goal of a 'standard' XSLT doc. --Dave --- David Walker Library Web Services Manager California State University http://xerxes.calstate.edu From: Code for Libraries on behalf of Jonathan Gorman Sent: Fri 1/19/2007 7:11 AM To: CODE4LIB@listserv.nd.edu Subject: Re: [CODE4LIB] Getting data from Voyager into XML? On Fri, 19 Jan 2007, Erik Hatcher wrote: Tod, Great information. I apologize for being a latecomer to the game and bringing up FAQs. What about date normalization? One thing that must be considered when doing faceted browsing is that it works best with some pre-processed data, such as years rather than full dates. The question becomes where does the logic for stripping out the years belong? Solr could do it if configured with a custom analyzer for certain fields, or the client could do it. Is there XSLT to do this sort of thing with dates available? I know XSLT 2.0 can handle them far better due to the support for types. However, MARC still has oddities which would probably need to be addressed directly. If doing it entirely in XSLT I'd probably actually pipeline it and do several transformations in a row. There's also been work done to provide libraries and the like in XSLT. EXSLT comes to mind right away. One example of a MARC oddity I had recently is that a report required the 260 |c field. I got complaints that the dates were malformed. Why? 
They appeared like 1922]. Those with some catalog experience can guess the problem. The whole 260 field looks like this: $a [Chicago: $b some publisher $c 1922]. I'm not entirely sure how that would get parsed into MARCXML in the first place. There are techniques to deal with this in XSLT, but the string manipulations are generally more cumbersome in that language than in a scripting language, as you mention. In XSLT 2.0 I'd probably have a template/function to parse out punctuation, then something to possibly normalize dates. Which reminds me, I need to start reviewing some XSLT/Cocoon for the pre-conference ;). Jonathan T. Gorman Research Information Specialist University of Illinois at Champaign-Urbana 216 Main Library - MC522 1408 West Gregory Drive Urbana, IL 61801 Phone: (217) 244-4688 Erik On Jan 19, 2007, at 5:58 AM, Tod Olson wrote: On Jan 19, 2007, at 4:07 AM, Erik Hatcher wrote: On Jan 17, 2007, at 3:26 PM, Andrew Nagy wrote: One thing I am hoping that can come out of the preconference is a standard XSLT doc. I sat down with my metadata librarian to develop our XSLT doc -- determining what fields are to be searchable, what fields should be left out to help speed up results, etc. It's pretty easy, I think you will be amazed how fast you can have a functioning system with very little effort. You're quite right with that last statement. I am, however, skeptical of a purely MARC -> XSLT -> Solr solution. The MARC data I've seen requires some basic cleanup (removing dots at the end of subjects, normalizing dates, etc) in order to be useful as facets. While XSLT is powerful, this type of data manipulation is better (IMO) done with scripting languages that allow for easy tweaking in a succinct way. I'm sure XSLT could do everything that you'd want done; you can also drive screws in with a hammer :) So the punctuation stripping has already been done in XSLT. LoC has a MARCXML -> MODS XSLT stylesheet [1] which strips out the evil ISBD punctuation. 
I've generally found mapping from MODS to be more convenient than mapping from MARC, so while it's an extra step, it does save a little programmer time since some of the hidden hierarchy in the MARC data is made explicit in the MODS structure. If hopping through MODS is unacceptable, the LoC has the punctuation-stripping nicely tucked away in a MARC Conversion Utility Stylesheet that you could use directly in a MARC XML -> Solr transformation. [2] [1] http://www.loc.gov/standards/mods/v3/MARC21slim2MODS.xsl [2] http://www.loc.gov/marcxml/xslt/MARC21slimUtils.xsl Tod Olson [EMAIL PROTECTED] Programmer/Analyst University of Chicago Library
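The date-normalization step debated in this thread is a good example of logic that is painless in a scripting language and clumsy in XSLT 1.0. A sketch of pulling a facet-ready year out of messy 260 $c values like the ones discussed above -- the helper and its year range are my own illustrative choices, not code from any of the posters:

```python
import re

# Sketch: extract a four-digit year from raw 260 $c data such as
# "1922]." or "c1981." so it can serve as a browse facet.
# Lookarounds keep us from matching part of a longer digit run.
YEAR = re.compile(r"(?<!\d)(1[5-9]\d\d|20\d\d)(?!\d)")

def facet_year(field_260c: str):
    """Return the first plausible publication year, or None."""
    m = YEAR.search(field_260c)
    return m.group(1) if m else None

facet_year("1922].")  # -> "1922"
facet_year("c1981.")  # -> "1981"
```
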