[CODE4LIB] Code4Lib Journal: Issue 34, Call for Proposals

2016-07-11 Thread Andrew Darby
Hi, all.  Just a reminder that we are reviewing proposals for the next
issue of the Code4Lib Journal. Deadline is next Friday.



Call for Papers (and apologies for cross-posting):

The Code4Lib Journal (C4LJ) exists to foster community and share
information among those interested in the intersection of libraries,
technology, and the future.

We are now accepting proposals for publication in our 34th issue.  Don't
miss out on this opportunity to share your ideas and experiences. To be
included in the 34th issue, which is scheduled for publication in mid
October 2016, please submit articles, abstracts, or proposals at
http://journal.code4lib.org/submit-proposal or to jour...@code4lib.org by
Friday, July 22, 2016.  When submitting, please include the title or
subject of the proposal in the subject line of the email message.

C4LJ encourages creativity and flexibility, and the editors welcome
submissions across a broad variety of topics that support the mission of
the journal.  Possible topics include, but are not limited to:

* Practical applications of library technology (both actual and
hypothetical)
* Technology projects (failed, successful, or proposed), including how they
were done and challenges faced
* Case studies
* Best practices
* Reviews
* Comparisons of third party software or libraries
* Analyses of library metadata for use with technology
* Project management and communication within the library environment
* Assessment and user studies

C4LJ strives to promote professional communication by minimizing the
barriers to publication.  While articles should be of a high quality, they
need not follow any formal structure.  Writers should aim for the middle
ground between blog posts and articles in traditional refereed journals.
Where appropriate, we encourage authors to submit code samples, algorithms,
and pseudo-code.  For more information, visit C4LJ's Article Guidelines or
browse articles from the first 32 issues published on our website:
http://journal.code4lib.org.

Remember, for consideration for the 34th issue, please send proposals,
abstracts, or draft articles to jour...@code4lib.org no later than Friday,
July 22, 2016.

Send in a submission.  Your peers would like to hear what you are doing.


Code4Lib Journal Editorial Committee

-- 
Andrew Darby
Head, Web & Application Development
University of Miami Libraries


[CODE4LIB] Code4Lib Journal: Call for Proposals, Issue 34

2016-06-20 Thread Andrew Darby
Call for Papers (and apologies for cross-posting):

The Code4Lib Journal (C4LJ) exists to foster community and share
information among those interested in the intersection of libraries,
technology, and the future.

We are now accepting proposals for publication in our 34th issue.  Don't
miss out on this opportunity to share your ideas and experiences. To be
included in the 34th issue, which is scheduled for publication in mid
October 2016, please submit articles, abstracts, or proposals at
http://journal.code4lib.org/submit-proposal or to jour...@code4lib.org by
Friday, July 22, 2016.  When submitting, please include the title or
subject of the proposal in the subject line of the email message.

C4LJ encourages creativity and flexibility, and the editors welcome
submissions across a broad variety of topics that support the mission of
the journal.  Possible topics include, but are not limited to:

* Practical applications of library technology (both actual and
hypothetical)
* Technology projects (failed, successful, or proposed), including how they
were done and challenges faced
* Case studies
* Best practices
* Reviews
* Comparisons of third party software or libraries
* Analyses of library metadata for use with technology
* Project management and communication within the library environment
* Assessment and user studies

C4LJ strives to promote professional communication by minimizing the
barriers to publication.  While articles should be of a high quality, they
need not follow any formal structure.  Writers should aim for the middle
ground between blog posts and articles in traditional refereed journals.
Where appropriate, we encourage authors to submit code samples, algorithms,
and pseudo-code.  For more information, visit C4LJ's Article Guidelines or
browse articles from the first 32 issues published on our website:
http://journal.code4lib.org.

Remember, for consideration for the 34th issue, please send proposals,
abstracts, or draft articles to jour...@code4lib.org no later than Friday,
July 22, 2016.

Send in a submission.  Your peers would like to hear what you are doing.


Code4Lib Journal Editorial Committee


-- 
Andrew Darby
Head, Web & Application Development
University of Miami Libraries


[CODE4LIB] Application Developer at the University of Miami Libraries

2016-06-13 Thread Andrew Darby
Hello, all.  We’re looking for a developer to join us at the University of
Miami Libraries.   Web & Application Development consists of the department
head, one Digital Preservation & Application Development Librarian, two
programmers (once you join us!), one web designer, and at times a student
worker.  We are responsible, broadly, for all the public interfaces to
library systems, selected backend systems, and for providing application
development and support for our colleagues in the Libraries.  In addition,
we lead user research on web products, and do research and development on
technologies and applications that might be beneficial to the Libraries or
the library community.

*Apply Here: * http://um.hodesiq.com/job_detail.asp?JobID=5230660

The UM Libraries is undergoing a growth spurt in the area of Digital
Strategies, with new hires in GIS Services, Digital Humanities, Data
Scholarship and a Digital Infrastructure Librarian, so we expect a lot of
new and interesting projects to come our way.  We have a couple of existing
open source projects out there (SubjectsPlus [1] and the Remixing Archival
Metadata Project [2]), and are generally a pretty congenial group.

Feel free to drop me a line if you'd like more information.

-- 
Andrew Darby
Head, Web & Application Development
University of Miami Libraries

[1] https://github.com/subjectsplus/SubjectsPlus
[2] https://github.com/UMiamiLibraries/RAMP


Re: [CODE4LIB] Language codes

2016-06-01 Thread Andrew Cunningham
It is better to refer to BCP-47 instead.

https://tools.ietf.org/html/bcp47

An RFC can be updated; when it is, it receives a new number. For language
tagging, the relevant information is split across two RFCs. BCP-47 is a
permanent IETF identifier referencing the latest versions of the two RFCs
relating to language tagging.

Andrew

On 2 Jun 2016 9:24 am, "Stuart A. Yeates" <syea...@gmail.com> wrote:
>
> I recommend reading https://tools.ietf.org/html/rfc5646 which seems to do
> what you need.
>
> cheers
> stuart
>
> --
> ...let us be heard from red core to black sky
>
> On Thu, Jun 2, 2016 at 10:59 AM, Greg Lindahl <lind...@pbm.com> wrote:
>
> > Some of the Internet Archive's library partners are asking us about
> > language metadata for regional languages that don't have standard
> > codes.  Is there a standard way of dealing with this situation?
> >
> > Overall we use MARC codes https://www.loc.gov/marc/languages/ which
> > were last updated in 2007. LOC also maintains ISO639-2
> > https://www.loc.gov/standards/iso639-2/php/code_list.php last updated
> > in 2014.
> >
> > The languages in question are regional languages which are currently
> > lumped together in both standards. With the recent rise in interest
> > and funding for regional languages, it's no surprise that some
> > catalogers want to split these languages out into separate codes.
> >
> > Thanks!
> >
> > -- greg
> >


Re: [CODE4LIB] Language codes

2016-06-01 Thread Andrew Cunningham
On 2 Jun 2016 9:40 am, "Andrew Cunningham" <lang.supp...@gmail.com> wrote:
>
>
> Ultimately it depends on what a library is working on: if you are
> cataloguing then all you have is ISO-639-3/B
>

Oops, meant to input ISO-639-2/B

Andrew


Re: [CODE4LIB] Language codes

2016-06-01 Thread Andrew Cunningham
Outside the library sector, the most common approach to language tagging
and matching isn't ISO-639-2 or ISO-639-3, but rather BCP-47.

Quite a number of ISO-639-2 language tags represent what ISO-639-3 refers
to as macrolanguages. For instance, 'kar' in ISO-639-2 resolves to 20
language codes in ISO-639-3.

But ISO-639-3 by itself isn't sufficient to fully identify a written
language.

E.g. you could have sr-Cyrl for Serbian in the Cyrillic script, sr-Latn to
represent Serbian written in the Latin orthography, and sr-Latn-alalc97 for
romanised Cyrillic Serbian based on the ALA-LC Cyrillic romanisation table
published in 1997.

It's worth noting that the only ALA-LC romanisation tables that can be
specified in BCP-47 are the 1997 editions.
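
As a rough illustration, a minimal Python sketch of how the subtags in tags
like sr-Cyrl and sr-Latn-alalc97 compose; it ignores extended language
subtags, extensions, and private-use sequences, so it is only a sketch:

# Minimal sketch: split a simple BCP-47 tag into subtags by their shape.
# Handles only primary language, script, region, and variant subtags.
def parse_bcp47(tag):
    parts = tag.split("-")
    result = {"language": parts[0].lower()}
    for part in parts[1:]:
        if len(part) == 4 and part.isalpha():
            result["script"] = part.title()              # e.g. Cyrl, Latn
        elif len(part) == 2 and part.isalpha():
            result["region"] = part.upper()              # e.g. RS
        else:
            result.setdefault("variants", []).append(part.lower())  # e.g. alalc97
    return result

for tag in ("sr-Cyrl", "sr-Latn", "sr-Latn-alalc97"):
    print(tag, "->", parse_bcp47(tag))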

Ultimately it depends on what a library is working on: if you are
cataloguing then all you have is ISO-639-3/B

If you are working on a digitisation or linked data project it is much
better to use BCP-47 correctly, which would align your resources more
accurately with the broader information ecosystem in which they exist.

Andrew
On 2 Jun 2016 9:15 am, "Craig Franklin" <cfrank...@halonetwork.net> wrote:

> We've never had any problems sticking to ISO639-2 codes (in cases there
> isn't a shorter ISO639-1 code available).  I'm interested in what sort of
> regional languages you might be dealing with where there are significant
> gaps in that standard?
>
> You might also look at ISO 639-3, which is quite comprehensive but also
> introduces a fair chunk of complexity:
>
> http://www-01.sil.org/iso639-3/download.asp
>
> Cheers,
> Craig Franklin
>
> On 2 June 2016 at 08:59, Greg Lindahl <lind...@pbm.com> wrote:
>
> > Some of the Internet Archive's library partners are asking us about
> > language metadata for regional languages that don't have standard
> > codes.  Is there a standard way of dealing with this situation?
> >
> > Overall we use MARC codes https://www.loc.gov/marc/languages/ which
> > were last updated in 2007. LOC also maintains ISO639-2
> > https://www.loc.gov/standards/iso639-2/php/code_list.php last updated
> > in 2014.
> >
> > The languages in question are regional languages which are currently
> > lumped together in both standards. With the recent rise in interest
> > and funding for regional languages, it's no surprise that some
> > catalogers want to split these languages out into separate codes.
> >
> > Thanks!
> >
> > -- greg
> >
>


Re: [CODE4LIB] Preserving Digital Objects with Descriptive Metadata

2016-05-16 Thread Andrew Weidner
Hi Rosalyn,

Yes, I see the RDF and EAD preservation ingests as text files. The key
affordance of this model, in my mind, is flexibility. Metadata producers
can create and maintain their data in systems designed for that purpose on
a schedule that is decoupled from preservation ingest for digital object
bitstreams, and then export that metadata for preservation ingest and
versioning as deemed appropriate for the local environment.

Andrew


On Thu, May 12, 2016 at 10:13 PM, Rosalyn Metz <rosalynm...@gmail.com>
wrote:

> in the preservation space you describe, the files would presumably be small
> text files.  if that's the case it might be useful to ask, what would
> keeping the metadata separate from the content files afford you?
>
> On Mon, May 9, 2016 at 9:26 AM, Andrew Weidner <metaweid...@gmail.com>
> wrote:
>
> > Thanks for your replies, Stuart and Brian.
> >
> > The information you provided got me to thinking more generally about what
> > comprehensive preservation could look like for the digitized cultural
> > heritage materials we are managing. As a content administrator for an
> > access repository, I am primarily concerned with the ability to retrieve
> > bitstreams and metadata from preservation cold storage in the event that
> we
> > lose data and our intermediate backup systems fail. I've visualized one
> > possible data restoration model in the following slides:
> >
> >
> >
> https://docs.google.com/presentation/d/1x3YlQbQUitaRLH1qVVWYHZVonrffzoAKJTY905PpjK8/edit?usp=sharing
> >
> > I'm curious what the C4L collective brain thinks about such an approach,
> > especially any pitfalls we should watch out for if we should choose to
> > implement this model.
> >
> > Andy Weidner
> >
> >
> >
> > On Wed, Mar 23, 2016 at 8:59 AM, Brian Kennison <kennis...@wcsu.edu>
> > wrote:
> >
> > > >>
> > > >> How do others approach this problem? Are there recognized best
> > > practices to
> > > >> adhere to?
> > > >>
> > >
> > > I’m still trying to put the CDL model into practice. <
> > > https://confluence.ucop.edu/display/Curation/D-flat >
> > > And Stanford has similar but different model <
> > > http://journal.code4lib.org/articles/8482 >
> > >
> > > —Brian
> > >
> >
>


Re: [CODE4LIB] Preserving Digital Objects with Descriptive Metadata

2016-05-09 Thread Andrew Weidner
Thanks for your replies, Stuart and Brian.

The information you provided got me to thinking more generally about what
comprehensive preservation could look like for the digitized cultural
heritage materials we are managing. As a content administrator for an
access repository, I am primarily concerned with the ability to retrieve
bitstreams and metadata from preservation cold storage in the event that we
lose data and our intermediate backup systems fail. I've visualized one
possible data restoration model in the following slides:

https://docs.google.com/presentation/d/1x3YlQbQUitaRLH1qVVWYHZVonrffzoAKJTY905PpjK8/edit?usp=sharing

I'm curious what the C4L collective brain thinks about such an approach,
especially any pitfalls we should watch out for if we should choose to
implement this model.

Andy Weidner



On Wed, Mar 23, 2016 at 8:59 AM, Brian Kennison  wrote:

> >>
> >> How do others approach this problem? Are there recognized best
> practices to
> >> adhere to?
> >>
>
> I’m still trying to put the CDL model into practice. <
> https://confluence.ucop.edu/display/Curation/D-flat >
> And Stanford has similar but different model <
> http://journal.code4lib.org/articles/8482 >
>
> —Brian
>


Re: [CODE4LIB] Form fill from URL

2016-04-25 Thread Andrew Gordon
Hi Teague,

It sounds like you want to query an API given some kind of handle (e.g.,
DOI) or perhaps a URL to the paper itself?

For the former case, we use the CrossRef API if the submitter knows the
DOI of the related journal article. As for sending it just a URL to the
paper, I don't know whether it or other API services can handle that.
CrossRef returns the title, authors, and a citation.
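
For example, a minimal Python sketch of that kind of DOI lookup against the
public CrossRef REST works endpoint (the DOI below is only a placeholder,
and error handling is omitted):

import json
import urllib.parse
import urllib.request

def crossref_lookup(doi):
    # Query the public CrossRef works endpoint for a known DOI.
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi)
    with urllib.request.urlopen(url) as resp:
        record = json.loads(resp.read().decode("utf-8"))["message"]
    title = (record.get("title") or [""])[0]
    authors = ["%s %s" % (a.get("given", ""), a.get("family", ""))
               for a in record.get("author", [])]
    return title, authors

# Placeholder DOI for illustration only.
print(crossref_lookup("10.1234/example.doi"))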

As others have suggested, there are other APIs you might use, depending on
your needs. MIT provides a handy list of other scholarly publication APIs
you might connect to: http://libguides.mit.edu/apis

Hope that's helpful,

drew

On Mon, Apr 25, 2016 at 10:08 AM, Park, Sarah  wrote:

> Teague,
>
> If you want to auto-populate a form from a URL, see this PPT file.
>
> Some years ago, I created an auto-populated ILL form using JavaScript and
> the SerialsSolutions API (because I didn't have access to a PHP server).
>
> http://www.nwmissouri.edu/library/brickandclick/Presentations2012/SarahPark.pptx
>
> The PPT includes a sample JavaScript code. The actual working page is at
> http://www.nwmissouri.edu/library/ill/photocopy.htm
>
> Cheers!
> Sarah
>
> -Original Message-
> From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of
> Matt Bernhardt
> Sent: Friday, April 22, 2016 8:51 PM
> To: CODE4LIB@LISTSERV.ND.EDU
> Subject: Re: [CODE4LIB] Form fill from URL
>
> If you're talking about a URL of the form:
>
> example.com/form?title=foo&author=bar
>
> ...which then loads a form with "foo" placed in the "title" field and "bar"
> placed in the "author" field, then yes - this is something that's
> relatively easy to do depending on how you're building the form. Your
> programming language of choice should be able to parse the querystring for
> fields and their associated values. The same language can then define field
> values based on those parsed values. It is then up to the user to make any
> necessary adjustments, and hit the submit button on the form.
>
> Or is your question more about a system that would take a URL of the form:
>
> example.com/form?handle=1721.1/90974
> or
> example.com/form?issn=1234-5678
>
> and then look up that identifier and fill the form with details such as
> author, etc? In that case, then yes - it would depend on the quality of the
> APIs to which you have access. I've never worked with any, but provided
> they exist (and that you have access), then what you describe is definitely
> possible.
>
> Do either of these describe what you're thinking of? Or did I completely
> miss the mark?
>
> Thanks,
> Matt Bernhardt
> MIT Libraries
>
> On Fri, Apr 22, 2016 at 7:58 PM, Teague Allen 
> wrote:
>
> > Hello collective,
> >
> > I've been given the opportunity to replace a much-detested PDF form
> > used to request cataloging for items by our researchers that are
> > published outside our organization. My hope is to create a web form
> > that will automatically populate with title, author(s), and
> > appropriate citation information if a URL/URI is entered. I imagine
> > this can by done with a citation manager API, but would love to hear
> > from someone who's already gotten beyond imagination.
> >
> > Any insight is very much appreciated,
> >
> > Teague Allen
> > Librarian III (Metadata)
> > RAND  Knowledge Services
> >
>


Re: [CODE4LIB] Google can give you answers, but librarians give you the right answers

2016-04-01 Thread Andrew Anderson
On Apr 1, 2016, at 0:31, Cornel Darden Jr. <corneldarde...@gmail.com> wrote:

> "Google can give you answers, but librarians give you the right answers."
> 
> Library: "because not everything on the internet is true"
> 
> Some people applauded the statement and were like: "yay librarians!"
> 
> Others thought it was a very ignorant statement. And many patrons caused a 
> huge backlash. It was interesting as the library responded to the irritated 
> patrons. 

While I understand the motivation behind these statements, it also presents as 
“You’re doing it wrong!”, which is likely part of the reason for the backlash.  

Some of the more effective materials that I’ve seen created to communicate this 
concept show sample search engine results with millions of hits of varying 
quality juxtaposed against commercial databases with dozens of high quality 
hits, letting the user draw their own conclusion that they would rather look 
through a few dozen relevant items than all the chaff from the search engine 
results.

Don’t tell them they’re doing it wrong; let them see that there’s a better way 
and let them choose the better option willingly.

-- 
Andrew Anderson, President & CEO, Library and Information Resources Network, 
Inc.
http://www.lirn.net/ | http://www.twitter.com/LIRNnotes | 
http://www.facebook.com/LIRNnotes


Re: [CODE4LIB] Internet of Things

2016-03-31 Thread Andrew Anderson
For those who were not previously aware of IoT, here’s a primer focused 
specifically on the library space:

https://www.oclc.org/publications/nextspace/articles/issue24/librariesandtheinternetofthings.en.html

IMHO this is still a very young concept, and not even fully imagined yet, so 
there is no reason to feel like you’ve missed the boat, when the ship hasn’t 
even reached the dock yet.

-- 
Andrew Anderson, President & CEO, Library and Information Resources Network, 
Inc.
http://www.lirn.net/ | http://www.twitter.com/LIRNnotes | 
http://www.facebook.com/LIRNnotes

On Mar 30, 2016, at 22:16, Lesli M <les...@gmail.com> wrote:

> I feel compelled to pipe up about the comment "Very sad that a librarian 
> didn't know what it was."
> 
> Librarians come in all flavors and varieties. Until I worked in a medical 
> library, I had no idea what a systematic review was. I had no idea there was 
> a variety of librarian called "clinical librarian."
> 
> Do you know the hot new interest for law libraries? Medical libraries? 
> Science libraries?
> 
> The IoT is a specific area of interest. Just like every other special 
> interest out there.
> 
> Is it really justified to expect all librarians of all flavors and varieties 
> to know this very tech-ish thing called IoT?
> 
> Lesli


Re: [CODE4LIB] ISO: State of the art in video annotation

2016-03-23 Thread Andrew Gordon
I would also be interested to hear the results of this, Stuart.

Not that it adds very much more to what everyone has already provided, but
I remember bookmarking this page from Annotations work at Harvard:
http://www.annotations.harvard.edu/icb/icb.do?keyword=k80243=icb.page466612

Seemed like a good primer, though it might not reflect the most current
work.

There was also this which I bookmarked around the same time:
https://code.google.com/archive/p/annotation-ontology/

Again, not too sure if these are all abandoned projects, but at least gives
a sense where things were at some point in the not too distant past.

On Fri, Mar 18, 2016 at 4:04 AM, Erwin Verbruggen <
everbrug...@beeldengeluid.nl> wrote:

> Hi Stuart, all,
>
> Very interested in the IIIF-developments as well. A colleague from the
> University of Amsterdam recently did a post on Digital Film Historiography
> <
>
> http://filmhistoryinthemaking.com/2016/03/16/update-digital-film-historiography-a-bibliography/
> >
> and when I asked about the tools in reference to this conversation replied:
>
> Anvil was used by Adelheid Heftberger in the Digital Formalism project in
> > Vienna with really good results. In addition, the French tool Lignes de
> > temps developed by IRI at the Pompidou center has been used by several
> film
> > scholars and in education on several levels for video annotation, (it
> also
> > exists in English) and I think it might be relevant/useful for the
> purposes
> > described though it is not web-based from what I can see:
> >
> > http://www.iri.centrepompidou.fr/outils/lignes-de-temps/
> >
> > Stuart, hope all this brings you somewhat further to your original goal -
> would be curious to hear the results of your quest.
>
> Kind regards,
> Erwin
>
> On Thu, Mar 17, 2016 at 5:31 AM, Tom Cramer <tcra...@stanford.edu> wrote:
>
> > Stuart,
> >
> > It may be useful to also cross-post this question to the IIIF-discuss
> list
> > [1]. There is a lot of interest in developing a IIIF-like approach to
> > presenting video via a common API, and one that lends itself to web-based
> > annotation. This would theoretically allow users to annotate videos
> > with their tool of choice, and to be able to reuse / export the
> annotations
> > to any other tool.
> >
> > I expect this will be a topic at the next IIIF meetings, in New York City
> > (May 10-13, 2016). [2]
> >
> > - Thomas
> >
> >
> > [1] iiif-disc...@googlegroups.com
> > [2] http://iiif.io/event/2016/newyork/
> >
> > On Mar 16, 2016, at 8:33 PM, Greg Lindahl <lind...@pbm.com> wrote:
> >
> > This may or may not be relevant to the "annotation" that the original
> > poster had in mind, but the Internet Archive embedded video player
> > takes subtitles in the common SubRip .srt format, which is apparently
> > supported by many video players & subtitling programs.
> >
> > Instead of using this for closed captioning, you could use it for
> > annotations. Each video can have multiple .srt files, with the user
> > being able to pick which one is shown. I'm not 100% sure if our embed
> > code allows the embedder to choose one .srt to be shown by default,
> > that's where my knowledge ends.
> >
> > https://archive.org/help/video.php
> > https://en.wikipedia.org/wiki/SubRip
> >
> > -- greg
> >
> > On Wed, Mar 16, 2016 at 02:06:46PM +0100, Gregory Markus wrote:
> > Hi Stuart,
> >
> > A colleague of mine has just recently recommended Clipper (
> > http://blog.clippertube.com/index.php/clipper-prototype-3/) they're
> > currently experimenting with it in the EUscreenXL project.
> >
> > Might be worth checking out for you as well.
> >
> > Curious as to what others will suggest as well.
> >
> > Cheers,
> >
> > greg
> >
> > On Tue, Mar 15, 2016 at 11:11 PM, Andrew Gordon <drew.s.gor...@gmail.com
> >
> > wrote:
> >
> > Thanks for sending out that document, Erwin.
> >
> > This is a really interesting topic and I feel like video annotation on
> the
> > web should be more of a thing.
> >
> > On top of what Erwin already provided (OVA looks particularly like A
> > project that might be good to look at for your needs) there are also:
> >
> > http://mith.us/OACVideoAnnotator/ - which is a proof of concept using
> the
> > open annotation specification (http://www.openannotation.org/). The
> > specification is format agnostic, intending annotation of objects with
> > 

[CODE4LIB] Preserving Digital Objects with Descriptive Metadata

2016-03-21 Thread Andrew Weidner
Hi Code4Lib,

We are in the process of designing new workflows for preservation and
access of our digital stuff, and I'd like to get a sense of how people
understand digital objects in the preservation space.

My gut tells me that it might be useful for future digital
archivists/archaeologists to have an object's descriptive metadata closely
associated with the object's files in the same directory, in a
human-readable plain text format, so that one directory would contain all
of the object's files and descriptive metadata in an easy-to-read package.
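
A minimal sketch of the layout I have in mind, with hypothetical paths and
fields, would be one directory per object holding the bitstreams plus a
plain text metadata file alongside them:

import os

def write_descriptive_metadata(obj_dir, metadata):
    # Hypothetical layout: the object's bitstreams live in obj_dir, and a
    # human-readable metadata.txt sits alongside them in the same directory.
    os.makedirs(obj_dir, exist_ok=True)
    with open(os.path.join(obj_dir, "metadata.txt"), "w", encoding="utf-8") as f:
        for field, value in metadata.items():
            f.write("%s: %s\n" % (field, value))

write_descriptive_metadata("objects/obj_0001",
                           {"Title": "Example photograph",
                            "Creator": "Unknown",
                            "Date": "circa 1950"})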

Alternatively, descriptive metadata for many objects could be stored in a
single external file, say at the root of a preservation accession
directory, according to a recognized standard like METS. That requires more
work to reconstruct an object, and the linkage between an object's files
and descriptive metadata is looser, but it seems more efficient.

How do others approach this problem? Are there recognized best practices to
adhere to?

Thanks,

Andy Weidner


Re: [CODE4LIB] List of Database Subjects

2016-03-19 Thread Andrew Darby
Like Oakland U, we use SubjectsPlus for our A-Z list:

http://sp.library.miami.edu/subjects/databases.php?letter=bysub

and like others, we started out trying to match departments at the
university, but I don't think anyone has tried to keep things in sync
recently.

Andrew

On Thu, Mar 17, 2016 at 2:59 PM, David J. Fiander <da...@fiander.info>
wrote:

> The problem with "programs/departments" is that there are things that
> aren't under the umbrella of a single program/department. Where does
> "planetary science" go, for example, when it's a strange mix of Earth
> Sciences, Geography, and Astronomy?
>
> The other problem is that there are some very LARGE departments (civil
> engineering is 1/4 of a faculty, for example), which would benefit from
> multiple database subjects.
>
> On 2016/03/17 13:09, Salazar, Christina wrote:
> > I'm curious because I wanted to do a better job with our db
> categorization, other than program/majors/departments, HOW did you(s) come
> up with your categories? Any usability/card sorting/etc
> >
> >
> > Christina Salazar
> > Systems Librarian
> > John Spoor Broome Library
> > California State University, Channel Islands
> > 805/437-3198
> >
> >
> > -Original Message-
> > From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of
> Jeremy C. Shellhase
> > Sent: Thursday, March 17, 2016 10:07 AM
> > To: CODE4LIB@LISTSERV.ND.EDU
> > Subject: Re: [CODE4LIB] List of Database Subjects
> >
> > Hi,
> > Subjects we're using are
> http://library.humboldt.edu/search/articles.html
> > Based pretty much on our programs/depts.
> >
> > "Whenever you find yourself on the side of the majority, it is time to
> pause and reflect." *-- **Mark Twain*
> >
> > Jeremy C. Shellhase
> > Systems Librarian
> > Library room 206
> > Humboldt State University Library
> > One Harpst Street
> > Arcata, California 95521
> > 707-826-3144 (voice)
> > 707-826-3441 (fax)
> > jeremy.shellh...@humboldt.edu
> >
> > On Thu, Mar 17, 2016 at 7:17 AM, Ian Chan <ic...@csusm.edu> wrote:
> >
> >> Hi,
> >> The subjects we use are listed on
> >> https://biblio.csusm.edu/research_portal/databases.
> >>
> >> Best,
> >>
> >> Ian Chan
> >> Systems Coordinator
> >> University Library
> >> California State University San Marcos ic...@csusm.edu | 760-750-4385
> >> | biblio.csusm.edu | Skype: ian.t.chan
> >>
> >>
> >>
> >>
> >> -Original Message-
> >> From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf
> >> Of Mitchell B. Roe
> >> Sent: Thursday, March 17, 2016 5:38 AM
> >> To: CODE4LIB@LISTSERV.ND.EDU
> >> Subject: Re: [CODE4LIB] List of Database Subjects
> >>
> >> On 2016/03/15 14:26, Burrell, Matthew wrote:
> >>> Hello all,
> >>> I am looking for examples of lists of database subjects similar to
> >>> one
> >> we are using, https://www.lib.fsu.edu/eresources/subjects , as a
> >> comparative model. We would like to limit the number of subjects and
> >> searching for examples. Thanks in advance! I appreciate it.
> >>> Matt
> >>>
> >>> Matt Burrell
> >>> Web Developer
> >>> The Florida State University Libraries Tallahassee, Florida
> >>> (850) 814-9634
> >>> Or Schedule an Appointment<http://fsu.libcal.com/appointment/656>
> >>>
> >>
> >> Here's Oakland University Libraries':
> >> https://research.library.oakland.edu/sp/subjects/databases.php
> >>
> >> --
> >> Mitchell B. Roe
> >> Medical Library Technology Specialist
> >>
> >> Oakland University William Beaumont School of Medicine
> >> 130 Kresge Library
> >> 2200 N Squirrel Rd
> >> Rochester, MI 48309
> >>
> >> mb...@oakland.edu
> >>
>



-- 
Andrew Darby
Head, Web & Application Development
University of Miami Libraries


Re: [CODE4LIB] ISO: State of the art in video annotation

2016-03-15 Thread Andrew Gordon
Thanks for sending out that document, Erwin.

This is a really interesting topic and I feel like video annotation on the
web should be more of a thing.

On top of what Erwin already provided (OVA in particular looks like a
project that might be good to look at for your needs) there are also:

http://mith.us/OACVideoAnnotator/ - which is a proof of concept using the
open annotation specification (http://www.openannotation.org/). The
specification is format agnostic, intending annotation of objects with
text, media, web resources etc. - the genius.com folks seem to be involved.

http://cowlog.org/ - pretty basic, but appears to get the job done and is
web based.

There are scads of proprietary and open source desktop video
coding/annotating software that I will spare you the burden of going
through. Full disclosure, I work on a project whose sibling project is a
desktop video coding tool for psychology researchers.

From my vantage point, video annotation software generally seems to be
developed around a specific set of user needs (a type of researcher and
research subject, for example). More specific target audience gets a more
robust set of tools targeted at those needs.

The biggest issues come down to the diversity of video encodings and the
ability of operating systems to play them back. This said,
the web has even more limitations around what video formats it will
support, but if you control the source of the video, this might not be such
a big deal.

It would really be great to see video annotation specifically for DH
projects warm up.

Have a look at all the resources and determine whether you think it might
be useful just to roll your own annotator using HTML5, some sophisticated
JS libraries for handling media, and hopefully wrapping it in a standard
like the Open Annotation Data Model (linked above).
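
As a rough sketch of what that could look like, here is an annotation on a
span of video time expressed in the JSON-LD style of the W3C Web Annotation
vocabulary (which grew out of Open Annotation), using a Media Fragments
selector; the video URL is a placeholder:

import json

annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "body": {
        "type": "TextualBody",
        "value": "Speaker introduces the main argument."
    },
    "target": {
        # Placeholder video URL; "t=30,45" selects seconds 30 through 45.
        "source": "https://example.org/video/lecture.mp4",
        "selector": {
            "type": "FragmentSelector",
            "conformsTo": "http://www.w3.org/TR/media-frags/",
            "value": "t=30,45"
        }
    }
}

print(json.dumps(annotation, indent=2))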

Would love to hear what others think/may have experienced.

Drew





On Tue, Mar 15, 2016 at 5:04 PM, Erwin Verbruggen <
everbrug...@beeldengeluid.nl> wrote:

> Dear Stuart,
>
> A few years ago we started an overview of video annotation projects and
> tools for the EUscreen network. We haven't been able to turn it into a
> state of the art document as of yet, but I'm hoping it would be useful for
> such an endeavour:
>
> https://docs.google.com/document/d/1t6CIL8oQjkAtUe2LGInrUgxpNzj5k9s17Mihz6UotIM/edit?usp=sharing
>
> Kind regards,
> Erwin
>
> Erwin Verbruggen
Project lead R&D
>
> Netherlands Institute for Sound and Vision
> Media Parkboulevard 1, 1217 WE  Hilversum | Postbus 1060, 1200 BB
> Hilversum | beeldengeluid.nl
>
>
> On Tue, Mar 15, 2016 at 9:38 PM, Stuart Snydman 
> wrote:
>
> > I am doing some discovery for a DH project that, at its center, needs to
> > annotate digital video (locally produced videos that will be hosted and
> > streamed on the web in our local environment).  We are still gathering
> > requirements, but it needs to:
> >
> >
> >   *   have a user friendly interface for creating annotations, better on
> > the web but not an absolute requirement
> >   *   create annotations at specific timestamps, or across spans of time,
> > and have those annotations associated with regions of the video image.
> >   *   annotations could include, text, audio, video, image, URL, etc.
> >
> > We’d prefer open source solutions that can be integrated into a web app,
> > but aren’t fully closed to alternatives.  We’d strongly prefer a solution
> > that supports open standards for annotation or is at least capable of
> > supporting open standards.
> >
> > I know there are many, many video annotation projects.  What is the
> > current state of the art in web-based video annotation making and
> viewing?
> >
> > Many thanks,
> >
> > Stu
> >
> >
>


Re: [CODE4LIB] Code4Lib NYS Meeting: Aug. 4-5, 2016 at Cornell

2016-02-24 Thread Andrew Woods
Thanks Christina and Esmé,
If there is interest, a Fedora workshop, a "how we use Git/GitHub on the
Fedora project" session, or even a "Semantic Web book club in review" could
all be sessions/workshops.
I have the dates scratched on the calendar.
Andrew


On Wed, Feb 24, 2016 at 9:37 AM, Esmé Cowles <escow...@ticklefish.org>
wrote:

> Christina-
>
> It's really cool to see this shaping up!  The timing's great for me
> because it's right before my mother-in-law's birthday when we usually go up
> to Ithaca anyway.
>
> I could lead a Fedora 4 workshop with Andrew/David, or solo if they can't
> make it.
>
> -Esmé
>
> > On Feb 24, 2016, at 9:23 AM, Christina Harlow <cmh...@cornell.edu>
> wrote:
> >
> > Hi Code4Lib:
> >
> > We’re planning a 2-day Code4Lib New York State Unconference at Mann
> Library, here at Cornell University, on August 4-5 (first Thursday-Friday
> in August). This will be in the style of the 2014 Code4Lib DC Unconference
> [1] and the various Code4Lib Midwest Meetings [2].
> >
> > We’re in the planning stages now of this event, but I’d like to both let
> the community know this is happening (dates and place have been confirmed),
> and send out a call for anyone interested in volunteering. Just send me a
> quick email indicating your interest (or questions). You can see our
> proposal and notes so far on our shared Google doc [3], and there will be
> an organizers email sent out in the next few days.
> >
> > Otherwise, expect to hear more about this event in the Spring.
> >
> > Thanks!
> > Christina Harlow
> >
> > 1. http://library.gwu.edu/code4lib-dc-2014
> > 2. http://wiki.code4lib.org/2015_Code4Lib_Midwest_Meeting
> > 3. http://bit.ly/c4lNYS16
>


[CODE4LIB] Fwd: [camms-ccaam] Common encoding errors

2016-02-22 Thread Andrew Cunningham
On behalf of Charles Riley:

-- Forwarded message --
From: Riley, Charles <charles.ri...@yale.edu>
Date: 23 February 2016 at 05:37
Subject: [camms-ccaam] Common encoding errors
To: "voyage...@listserv.nd.edu" <voyage...@listserv.nd.edu>, "
lit...@lists.ala.org" <lit...@lists.ala.org>, "camms-cc...@lists.ala.org" <
camms-cc...@lists.ala.org>, "ol-tech-boun...@archive.org" <
ol-tech-boun...@archive.org>, "ole.technical.usergr...@kuali.org" <
ole.technical.usergr...@kuali.org>, "auto...@listserv.syr.edu" <
auto...@listserv.syr.edu>


Hi all,



This is something I’ve noticed happening with some regularity, and
probably increasing frequency, lately: a class of problems with records
containing either escaped entity references from HTML or XML (like
‘’), or accented characters that have become corrupted in a data
migration (like ‘français
<https://openlibrary.org/works/OL10004281W/Les_archets_français>‘).  I was
asked by another librarian if I could point them to any resources that deal
with this class of issues, and rounded up a few that I thought would be
good to share.  Here’s what I came across, in terms of examples and
explanations for some of the more common cases:



http://markmcb.com/2011/11/07/replacing-ae%E2%80%9C-ae%E2%84%A2-aeoe-etc-with-utf-8-characters-in-ruby-on-rails/



https://en.wikipedia.org/wiki/List_of_XML_and_HTML_character_entity_references
(But treat this list with caution in using it to search; there will be
false positives for a search for ‘amp;’, for example.)



http://www.i18nqa.com/debug/utf8-debug.html (See also associated links on
this page.)
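
For what it's worth, both classes of error can often be repaired
programmatically; a small Python 3 sketch with illustrative strings:

import html

# 1. Escaped entity references left behind in a record.
print(html.unescape("Voil&agrave; &amp; more"))    # -> Voilà & more

# 2. Mojibake from a bad migration: UTF-8 bytes that were read as Latin-1.
garbled = "franÃ§ais"
print(garbled.encode("latin-1").decode("utf-8"))   # -> français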



Hope this helps!



Charles Riley



Charles Riley

Interim Librarian for African Studies and Catalog Librarian

Sterling Memorial Library

Yale University


charles.ri...@yale.edu

(203)432-7566 or (203)432-9301







-- 
Andrew Cunningham
lang.supp...@gmail.com


Re: [CODE4LIB] Best way to handle non-US keyboard chars in URLs?

2016-02-21 Thread Andrew Cunningham
Hi,
On Monday, 22 February 2016, Chris Moschini <ch...@brass9.com> wrote:
> On Feb 20, 2016 9:33 PM, "Stuart A. Yeates" <syea...@gmail.com> wrote:
>>
>> 1) With Unicode 8, sign writing and ASL,  the American / international
>> dichotomy is largely specious. Before that there were American indigenous
>> languages (Cheyenne etc.), but in my experience Americans don't usually
>> think of them them as American.
>
> It's not about the label, so don't get too hung up on that. It's about
> what's easy to type on a typical US keyboard.
>

If you are accessing a non-English resource, then having characters outside
the basic latin block would seem to be perfectly acceptable to me.

There are two types of users involved: those that can read the target
language and those that can't.

Those who can should be able to work with keyboards other than a US English
layout. On most devices this is fairly trivial. Not to mention the user may
not actually have the US English keyboard layout as their default input
system.

On a multilingual site I prefer the access points to be in the language of
the resource.

Obviously there are cases where people who cannot read the language need
to access a resource. In those cases I would look at APIs that expose the
resource in a different way, maybe through a transliteration mapping,
rather than having a second URL.
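
As a rough illustration of the mechanics (standard library only; the URL is
hypothetical), a non-ASCII access point can be carried in the URL itself via
percent-encoding:

from urllib.parse import quote

title = "français"                       # a resource titled in its own language
print(quote(title))                      # -> fran%C3%A7ais
print("https://example.org/resource/" + quote(title))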

Ultimately it comes down to who the users are and why they are accessing
the resource.

It seems to me your primary concern is for users who cannot read the
resource in any event.

Andrew

-- 
Andrew Cunningham
lang.supp...@gmail.com


[CODE4LIB]

2016-02-08 Thread Andrew Cunningham
Thanks, I will look into them.

On 9 February 2016 at 03:56, Han, Yan - (yhan) <y...@email.arizona.edu>
wrote:

> Yes. Use iText or PDFBox
>
> These are common PDF libraries.
>
>
>
>
>
> On 2/6/16, 2:24 PM, "Code for Libraries on behalf of Andrew Cunningham" <
> CODE4LIB@LISTSERV.ND.EDU on behalf of lang.supp...@gmail.com> wrote:
>
> >Hi all,
> >
> >I am working with PDF files in some South Asian and South East Asian
> >languages. Each PDF has ActualText added for each tag in the PDF. Each PDF
> >has ActualText as an alternative forvthe visible text layer in the PDF.
> >
> >Is anyone aware of tools the will allow me to index and search PDFs based
> >on the ActualText content rather than the visible text layers in the PDF?
> >
> >Andrew
> >
> >--
> >Andrew Cunningham
> >lang.supp...@gmail.com
>



-- 
Andrew Cunningham
lang.supp...@gmail.com


[CODE4LIB]

2016-02-08 Thread Andrew Cunningham
Thanks Levy, I will look at PDFBox and see what I can leverage from it.

Andrew


On 9 February 2016 at 04:33, Levy, Michael <ml...@ushmm.org> wrote:

> There is a method named getActualText() in PDFBox, but there are some listserv
> postings (circa 2012) that indicate that the command-line PDFBox did not
> support extraction of the ActualText contents at that time. That may have
> changed. I'd like to know more.
>
> Thank you Andrew for sending me scurrying to learn about ActualText. I
> don't think we have any in any of the PDFs that I'm indexing, but I
> wouldn't have known it existed without your posting.
>
>
> On Mon, Feb 8, 2016 at 11:56 AM, Han, Yan - (yhan) <y...@email.arizona.edu
> >
> wrote:
>
> > Yes. Use iText or PDFBox
> >
> > These are common PDF libraries.
> >
> >
> >
> >
> >
> > On 2/6/16, 2:24 PM, "Code for Libraries on behalf of Andrew Cunningham" <
> > CODE4LIB@LISTSERV.ND.EDU on behalf of lang.supp...@gmail.com> wrote:
> >
> > >Hi all,
> > >
> > >I am working with PDF files in some South Asian and South East Asian
> > >languages. Each PDF has ActualText added for each tag in the PDF. Each
> PDF
> > >has ActualText as an alternative for the visible text layer in the PDF.
> > >
> > >Is anyone aware of tools that will allow me to index and search PDFs
> based
> > >on the ActualText content rather than the visible text layers in the
> PDF?
> > >
> > >Andrew
> > >
> > >--
> > >Andrew Cunningham
> > >lang.supp...@gmail.com
> >
>



-- 
Andrew Cunningham
lang.supp...@gmail.com


[CODE4LIB]

2016-02-06 Thread Andrew Cunningham
Hi all,

I am working with PDF files in some South Asian and South East Asian
languages. Each PDF has ActualText added for each tag in the PDF. Each PDF
has ActualText as an alternative for the visible text layer in the PDF.

Is anyone aware of tools that will allow me to index and search PDFs based
on the ActualText content rather than the visible text layers in the PDF?

Andrew

-- 
Andrew Cunningham
lang.supp...@gmail.com


Re: [CODE4LIB] [patronprivacy] Let's Encrypt and EZProxy

2016-01-16 Thread Andrew Anderson
On Jan 15, 2016, at 13:20, Salazar, Christina <christina.sala...@csuci.edu> 
wrote:

> Something that I also see implied here is why aren’t vendors doing a better 
> job collaborating with the developers of EZProxy, instead of only putting the 
> pressure on Let’s Encrypt to support wildcard certs (although I kind of think 
> that’s the better way to go).


Because it’s easier than actually taking the time to fully understand the 
platforms and how all the pieces fit together.  

I’ve lost track of how many discussions I have had with various vendors 
recently over:

* Why they need to encode URLs before trying to pass them to another service 
like EZproxy's login handler
* Why they really do need to pay attention to what RFC 2616 Section 3.2.2 and 
RFC 2396 Section 2.2 have to say regarding the use of the reserved character in 
URLs
* Why it’s a bad idea to add “DJ google.com” in the EZproxy stanza
* Why it’s a bad idea to add “DJ ” in the EZproxy stanza
* Why it’s a bad idea to add “DJ ” in the EZproxy 
stanza

Instead of trying to understand how proxied access works, someone just keeps 
slapping “DJ ” or “HJ ” into the service stanza 
until the service starts working, and then never revisits the final product to 
see if those additions were really necessary.  Do this for a few platform 
iterations, and the resulting stanza can become insane.

The conversations typically go something like this:

Me: “Why are you trying to proxy google.com services?” 
Vendor: “Because we’re loading the jQuery JavaScript library from their CDN."
Me: “And how are you handling registering all your customer’s IP addresses with 
Google?” 
…  … 
Vendor: “We don’t”.
Me: “Then why do you think you need that in your proxy stanza?”. 
…  …
Vendor: “We . . . don’t?”
Me: “Exactly. And how are you reaping the performance benefits of a CDN service 
if you’re funneling all of the unauthenticated web traffic through a proxy 
server instead of allowing the CDN to do what it does best and keeping the 
proxy server out of the middle of that transaction?"
Vendor: “We . . . aren’t?”
Me: “That’s right, by adding ‘DJ ’ to your stanza, you have 
successfully negated the performance benefits of using a CDN service.”

-- 
Andrew Anderson, President & CEO, Library and Information Resources Network, 
Inc.
http://www.lirn.net/ | http://www.twitter.com/LIRNnotes | 
http://www.facebook.com/LIRNnotes


Re: [CODE4LIB] Let's Encrypt and EZProxy

2016-01-14 Thread Andrew Anderson
Eric,

Check out Startcom’s StartSSL service (https://www.startssl.com); for $120 you 
can generate 3-year wildcard certificates with their Organizational Validation 
level of service.

Andrew

-- 
Andrew Anderson, President & CEO, Library and Information Resources Network, 
Inc.
http://www.lirn.net/ | http://www.twitter.com/LIRNnotes | 
http://www.facebook.com/LIRNnotes

On Jan 14, 2016, at 21:33, Eric Hellman <e...@hellman.net> wrote:

> I would also go with the $120 3 year wildcard cert for ezproxy. What vendor 
> are you using?
>> On Jan 14, 2016, at 7:23 PM, Cary Gordon <listu...@chillco.com> wrote:
>> 
>> I love the idea of Let’s Encrypt, but I recently bought a three year 
>> wildcard cert subscription for about $120. I would need to fall firmly into 
>> the true believer category to go the route you suggest.
>> 
>> Cary
>> 
>>> On Jan 14, 2016, at 11:20 AM, Eric Hellman <e...@hellman.net> wrote:
>>> 
>>> A while back, the issue of needing a wildcard certificate (not supported by 
>>> Lets Encrypt) for EZProxy was discussed.
>>> 
>>> In my discussions with publishers about switching to HTTPS, EZProxy 
>>> compatibility has been the most frequently mentioned stumbling block 
>>> preventing a complete switch to HTTPS for some HTTPS-ready  publishers. In 
>>> two cases that I know of, a publisher which has been HTTPS-only was asked 
>>> by a library customer to provide insecure service (oh the horror!) for this 
>>> reason.
>>> 
>>> It's been pointed out to me that while Lets Encrypt is not supporting 
>>> wildcard certificates, up to 100 hostnames can be supported on a single LE 
>>> certificate. A further limit on certificates issued per week per domain 
>>> would mean that up to 500 hostnames can be registered with LE in a week.
>>> 
>>> Are there EZProxy instances out there that need more than 500 hostnames, 
>>> assuming that all services are switched to HTTPS?
>>> 
>>> Also, I blogged my experience talking to people about privacy at #ALAMW16.
>>> http://go-to-hellman.blogspot.com/2016/01/not-using-https-on-your-website-is-like.html
>>>  
>>> <http://go-to-hellman.blogspot.com/2016/01/not-using-https-on-your-website-is-like.html>
>>> 
>>> Eric
>>> 
>>> 
>>> Eric Hellman
>>> President, Free Ebook Foundation
>>> Founder, Unglue.it https://unglue.it/
>>> https://go-to-hellman.blogspot.com/
>>> twitter: @gluejar
>>> 
> 


Re: [CODE4LIB] Library Juice - thoughts?

2015-11-05 Thread Andrew Weidner
I've taken a number of courses with Library Juice and had really good
learning experiences.

The series on XML and RDF based systems is excellent:

http://libraryjuiceacademy.com/certificate-xml-rdf.php

As are the courses on metadata design and implementation:

http://libraryjuiceacademy.com/105-metadata-design.php
http://libraryjuiceacademy.com/106-metadata-implementation.php

The time commitment is substantial, but the instructors have been very
flexible about turning in assignments after the deadlines.

Andrew

On Wed, Nov 4, 2015 at 9:38 PM, Patricia Farnan <patricia.far...@nd.edu.au>
wrote:

> The managers in my library wanted to try out the courses to see what they
> were like, so I was able to be one of the 'experiments' when I did the PHP
> & APIs course. Some others did a management course which they didn't find
> nearly as good as I found mine. So I guess I was the lucky one to get a
> good instructor!
>
> -Original Message-
> From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of
> davesgonechina
> Sent: Wednesday, 28 October 2015 12:55 PM
> To: CODE4LIB@LISTSERV.ND.EDU
> Subject: Re: [CODE4LIB] Library Juice - thoughts?
>
> I've not taken any classes on LibraryJuice mainly because I find their
> course descriptions too thin. The Data Management course has a better
> description than most, but perhaps I've been spoiled by Coursera where I
> can see a syllabus, schedule, and materials before deciding to pay any
> fees. I'm wondering, those of you who have taken a LibraryJuice course,
> what attracted you to it and how did the experience match or differ from
> your expectations?
>
> Dave
>
> On Wed, Oct 28, 2015 at 2:58 AM, Folds, Dusty <dfo...@montevallo.edu>
> wrote:
>
> > Yes, I concur with these comments. Just be aware of the time
> > commitment that will be involved. That's where I ran into problems, too.
> >
> > Dusty
> >
> > --
> > Dusty Folds, MLIS
> > Information Literacy and Digital Learning Librarian Assistant
> > Professor University of Montevallo Carmichael Library Station 6108
> > Montevallo, AL 35115
> > P: 205-665-6108
> > F: 205-665-6112
> > E: dfo...@montevallo.edu
> >
> >
> >
> > -Original Message-
> > From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf
> > Of REESE-HORNSBY, TWYLA
> > Sent: Tuesday, October 27, 2015 1:47 PM
> > To: CODE4LIB@LISTSERV.ND.EDU
> > Subject: Re: [CODE4LIB] Library Juice - thoughts?
> >
> > I started a course (Introduction to XML) through Library Juice a year
> > ago.  I wasn't able to finish it due to some personal challenges but I
> > still have access to the archived class which is great.  Like
> > Patricia, I found the content very useful but underestimated how much
> > time I needed to read and study the material.  Four weeks goes fast!
> > The instructor also scheduled times to meet online for questions.
> >
> > I did have trouble getting used to the Moodle platform but I think it
> > has since been upgraded to be more user friendly.
> >
> > I am seriously considering taking another course in the near future.
> >
> > Best,
> >
> > Twyla Reese-Hornsby
> > Public Service Librarian | J. Ardis Bell Library Tarrant County
> > College Northeast Campus | Office: NLIB 2127A
> > 828 W. Harwood Rd. |Hurst, TX 76054
> > 817-515-6365 | Fax: 817-515-6275
> > twyla.reese-horn...@tccd.edu | www.tccd.edu
> >
> > -Original Message-
> > From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf
> > Of Patricia Farnan
> > Sent: Monday, October 26, 2015 9:40 PM
> > To: CODE4LIB@LISTSERV.ND.EDU
> > Subject: Re: [CODE4LIB] Library Juice - thoughts?
> >
> > I recently did a course through Library Juice on PHP & APIs, and I
> > found it really useful and easy to follow (well, easy for my poor
> > brain to follow. I still had to re-read my notes and re-listen to
> > certain parts of each video, to really let things sink in). The
> > instructor was very good at staying in touch with students and
> interacting.
> >
> > -Original Message-
> > From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf
> > Of BWS Johnson
> > Sent: Tuesday, 27 October 2015 4:14 AM
> > To: CODE4LIB@LISTSERV.ND.EDU
> > Subject: Re: [CODE4LIB] Library Juice - thoughts?
> >
> > Salvete!
> >
> >  I'm going to be exceedingly naughty in replying to this. I used
> > to teach a course on Koha for Rory, so obviously I'm heavily biased.
> >

[CODE4LIB] Call for comment, Principles for Evaluating Metadata Standards

2015-11-02 Thread Andrew Weidner
The ALCTS/LITA Metadata Standards Committee of the American Library
Association (ALA) invites the library, archives, and museums metadata
communities to comment on the document, *Principles for Evaluating Metadata
Standards*. This document is meant to assist in the development,
maintenance, selection, use, and assessment of metadata standards. The
current draft incorporates feedback given on an earlier version of the
document.

The committee encourages written feedback through one of the following
options: leave a comment on the Principles page
<http://metaware.buzz/2015/10/27/draft-principles-for-evaluating-metadata-standards/>[1]
or fill out the comment form <http://goo.gl/forms/QE4J7ZIG66>[2].

In addition, the Committee will be presenting at the ALA Midwinter Meeting
in Boston. There will be two opportunities for face-to-face discussion of
this draft, one at the Metadata Interest Group meeting on Sunday, January
10, 2016 at 8:30 am EST, and one at the regular Metadata Standards
Committee meeting on Sunday, January 10, 2016 at 1:00 pm EST. Registered
conference attendees are welcome to attend.



Best,


Andrew Weidner
LITA representative, ALCTS/LITA Metadata Standards Committee


[1]
http://metaware.buzz/2015/10/27/draft-principles-for-evaluating-metadata-standards/
[2] http://goo.gl/forms/QE4J7ZIG66


[CODE4LIB] New Release: Diva.js 4.0 (with IIIF support)

2015-09-09 Thread Andrew Hankinson
We’re pleased to announce a new version of our open-source document image 
viewer, Diva.js. Diva is ideal for archival book digitization initiatives 
where viewing high-resolution images is a crucial part of the user experience. 
Using Diva, libraries, archives, and museums can present high-resolution 
document page images in a user-friendly “instant-on” interface that has been 
optimized for speed and flexibility.

In version 4.0 we’re introducing support for the International Image 
Interoperability Framework (IIIF). Through IIIF, Diva becomes part of a larger 
movement to enhance archival image collections through promoting sharing of 
these resources.

With 4.0 we’re also introducing the “Book Layout” view, presenting document 
images as openings, or facing pages. This will provide our users with a 
valuable way of visualizing document openings, providing more tools for viewing 
and understanding the structure of a digitized document.

Several demos are available at http://ddmal.github.io/diva.js/try/ 


Other improvements in 4.0 include:
• Improved integration with existing web applications
• New plugins: Autoscroll (animated page scrolling), Page Alias (pages 
may have multiple identifiers), IIIF Metadata (displays document metadata from 
IIIF manifest), IIIF Highlight (displays annotations from a IIIF manifest)
• Improved build system with Gulp
• Support for switching documents without reloading the viewer
• Numerous bug fixes and optimizations

For more information, demos, and documentation visit 
http://ddmal.github.io/diva.js/.

Diva.js is developed by the Distributed Digital Music Archives and Libraries 
laboratory, part of the Music Technology Area of the Schulich School of Music 
at McGill University and is funded by the Social Sciences and Humanities 
Research Council of Canada.


Re: [CODE4LIB] New Release: Diva.js 4.0 (with IIIF support)

2015-09-09 Thread Andrew Hankinson
Thanks, John. I've fixed it now. Can't have a major release announcement 
without *something* going wrong. ;)

-Andrew

> On Sep 9, 2015, at 1:16 PM, Scancella, John <j...@loc.gov> wrote:
> 
> Andrew,
> 
> I am getting this error when trying out the default on the website (see 
> attached)
> 
> -Original Message-
> From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Andrew Hankinson
> Sent: Wednesday, September 09, 2015 4:54 AM
> To: CODE4LIB@LISTSERV.ND.EDU
> Subject: [CODE4LIB] New Release: Diva.js 4.0 (with IIIF support)
> 
> We’re pleased to announce a new version of our open-source document image 
> viewer, Diva.js. Diva is an ideal for archival book digitization initiatives 
> where viewing high-resolution images is a crucial part of the user 
> experience. Using Diva, libraries, archives, and museums can present 
> high-resolution document page images in a user-friendly “instant-on” 
> interface that has been optimized for speed and flexibility.
> 
> In version 4.0 we’re introducing support for the International Image 
> Interoperability Framework (IIIF). Through IIIF, Diva becomes part of a 
> larger movement to enhance archival image collections through promoting 
> sharing of these resources.
> 
> With 4.0 we’re also introducing the “Book Layout” view, presenting document 
> images as openings, or facing pages. This will provide our users with a 
> valuable way of visualizing document openings, providing more tools for 
> viewing and understanding the structure of a digitized document.
> 
> Several demos are available at http://ddmal.github.io/diva.js/try/
> 
> Other improvements in 4.0 include:
>   • Improved integration with existing web applications
>   • New plugins: Autoscroll (animated page scrolling), Page Alias (pages 
> may have multiple identifiers), IIIF Metadata (displays document metadata 
> from IIIF manifest), IIIF Highlight (displays annotations from a IIIF 
> manifest)
>   • Improved build system with Gulp
>   • Support for switching documents without reloading the viewer
>   • Numerous bug fixes and optimizations
> 
> For more information, demos, and documentation visit 
> http://ddmal.github.io/diva.js/.
> 
> Diva.js is developed by the Distributed Digital Music Archives and Libraries 
> laboratory, part of the Music Technology Area of the Schulich School of Music 
> at McGill University and is funded by the Social Sciences and Humanities 
> Research Council of Canada.
> 
> 


Re: [CODE4LIB] FOSS recommendations for online-only library

2015-08-23 Thread Andrew Anderson
I would recommend Apache’s mod_proxy over Squid for a library setting, as it 
can be morphed into a general rewriting proxy for off-site access more easily 
than Squid can.

It’s true that both can be made to perform the rewriting function, but the bar 
for entry is lower for Apache and it supports a broader set of authentication 
options than Squid does.

-- 
Andrew Anderson, President & CEO, Library and Information Resources Network, 
Inc.
http://www.lirn.net/ | http://www.twitter.com/LIRNnotes | 
http://www.facebook.com/LIRNnotes

On Aug 23, 2015, at 0:45, Cornel Darden Jr. corneldarde...@gmail.com wrote:

 Hello,
 
 There are open-source proxies available. I would give Squid a try. 
 http://wiki.squid-cache.org/Features/Authentication
 
 At such a library, public domain materials are awesome! I would look into 
 calibre as an ebook server and manager. http://calibre-ebook.com
 
 Of course, Project Gutenberg and the Internet Archive will supply calibre 
 with thousands of free books. Also, look into DRM-free publishers. With Squid 
 active, many non-DRM options can be realized for eBooks too. Do not allow 
 access to databases without authentication. 
 
 Sent from my iPhone
 
 On Aug 22, 2015, at 11:06 PM, Nicole Askin nask...@alumni.ubc.ca wrote:
 
 1. We don't currently have such technology, though we are definitely
 looking at it beyond this project as well
 2. Either. From my understanding there aren't many/any comprehensive free
 discovery products. We're currently making do with a Google custom search
 engine, which is a very suboptimal solution
 3. Yes. I'm working on learning what I can, and we're working on tech
 support options.
 Thanks,
 Nicole
 
 On Fri, Aug 21, 2015 at 2:11 PM, Kevin Hawkins 
 kevin.s.hawk...@ultraslavonic.info wrote:
 
 We should probably clarify your needs a bit.
 
 Will you need technology that manages authentication of authorized users,
 or does your non-profit already have some tool (like a user login or proxy
 server) that can decide which users should be able to get access to your
 resources?
 
 You mention discovery options ... are you thinking of a discovery
 product or old-fashioned federated search that provides a single user
 search interface that searches across many or all of your licensed
 products?  And a link resolver?
 
 As a general rule of thumb, you can either have limited tech support or
 use open-source software but not both.  :(
 
 Kevin
 
 
 On 8/20/15 5:04 PM, Nicole Askin wrote:
 
 Hello all,
 I'm working with a non-profit that is offering access to research
 databases
 for patrons that do not otherwise have it. We are hoping to develop a
 library portal to support users, ideally including both article- and
 journal-level search. We'd like to do this as much as possible using
 *only*
 free and open source software, so I'm looking for recommendations on what
 to use and, crucially, what works well together.
 Some parameters:
 -We have no physical location or physical holdings - don't need
 circulation
 or anything in that category, although access stats would be nice
 -We do not have our own hosted materials - no need for a CMS
 -We have very limited tech support
 
 Any thoughts? I've been playing around with VuFind and reSearcher so far
 but am definitely open to other possibilities, particularly if there are
 good discovery options available.
 
 Thanks,
 Nicole
 
 


Re: [CODE4LIB] Protocol-relative URLs in MARC

2015-08-17 Thread Andrew Anderson
There are multiple questions embedded in this:

1) What does the MARC standard have to say about 856$u?

$u - Uniform Resource Identifier

Uniform Resource Identifier (URI), which provides standard syntax for locating 
an object using existing Internet protocols. Field 856 is structured to allow 
for the creation of a URL from the concatenation of other separate 856 
subfields. Subfield $u may be used instead of those separate subfields or in 
addition to them.

Subfield $u may be repeated only if both a URN and a URL or more than one URN 
are recorded.

Used for automated access to an electronic item using one of the Internet 
protocols or by resolution of a URN. Subfield $u may be repeated only if both a 
URN and a URL or more than one URN are recorded. Field 856 is repeated if more 
than one URL needs to be recorded.

Here, it is established that $u uses a URI, which leads to….

2) What do the RFCs say about protocol-relative URIs?

http://tools.ietf.org/html/rfc3986#section-4.1

  URI-reference is used to denote the most common usage of a resource
   identifier.

  URI-reference = URI / relative-ref

   A URI-reference is either a URI or a relative reference.  If the
   URI-reference's prefix does not match the syntax of a scheme followed
   by its colon separator, then the URI-reference is a relative
   reference.

So by the stated use of URIs in the MARC standard, and the RFC definition of 
a relative reference, there is no standards basis for treating 
protocol-relative URLs as invalid in 856.

Expanding out to software support: most of the tools I have used for general 
URL manipulation have no problems with this format, but I have only 
used PyMARC for manipulating MARC records, not any of the other MARC editors. 
If they try to be too clever about data validation and not quite clever enough 
about standards and patterns, there could be issues at this level.
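For a quick sanity check before and after any batch edit, a short PyMARC pass 
over a file of records can at least report what is already sitting in 856$u.  
This is only a sketch, and the filename is a placeholder:

# sketch only: list 856$u values and flag the scheme-relative ones
from pymarc import MARCReader

with open('records.mrc', 'rb') as fh:        # 'records.mrc' is a placeholder
    for record in MARCReader(fh):
        for field in record.get_fields('856'):
            for url in field.get_subfields('u'):
                kind = 'scheme-relative' if url.startswith('//') else 'absolute'
                print(kind, url)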

As for browser support, IE7 & IE8 have issues with double-loading some 
resources when used in this manner, but those browsers are becoming nearly 
extinct, so I would not anticipate client-side issues as long as the 
intermediate system that consumed the 856 record and render it for display can 
handle this.  Our web properties switched to using this pattern several years 
ago to avoid the “insecure content” warnings and we have had no issues on the 
client side.  

Then the other consumers of MARC data come into play — title lists, link 
resolvers, proxy servers, etc.  A lot of what I’ve seen in this space are 
lipstick wearing dinosaurs of a code base, so unless the vendor is particularly 
good about keeping up with current web patterns, this is where I would expect 
the most challenges.  There may be implicit or explicit assumptions built into 
systems that would break with protocol-relative URLs, e.g. if the value is 
passed directly to a proxy server, it may not know what to do without a scheme 
prefixed to the URI, and attempt to serve local content instead.

That said, there is a big push recently for dropping non-SSL connections in 
general (going so far as to call the protocol relative URIs an anti-pattern), 
so is it really worth all the potential pain and suffering to make your links 
scheme-agnostic, when maybe it would be a better investment in time to switch 
them all to SSL instead?  This dovetails nicely with some of the discussions I 
have had recently with electronic services librarians about how to protect 
patron privacy in an online world by using SSL as an arrow in that quiver.

Andrew

-- 
Andrew Anderson, President & CEO, Library and Information Resources Network, 
Inc.
http://www.lirn.net/ | http://www.twitter.com/LIRNnotes | 
http://www.facebook.com/LIRNnotes

On Aug 17, 2015, at 16:41, Stuart A. Yeates syea...@gmail.com wrote:

 I'm in the middle of some work which includes touching the 856s in lots of
 MARC records pointing to websites we control. The websites are available on
 both https://example.org/ and http://example.org/
 
 Can I put //example.org/ in the MARC or is this contrary to the standard?
 
 Note that there is a separate question about whether various software
 systems support this, but that's entirely secondary to the question of the
 standard.
 
 cheers
 stuart
 --
 ...let us be heard from red core to black sky


Re: [CODE4LIB] quick question: CloudFlare

2015-06-19 Thread Andrew Anderson
We have had good experience with it so far, yes.  Do you have a specific use 
case that you’re concerned about?

-- 
Andrew Anderson, Director of Development, Library and Information Resources 
Network, Inc.
http://www.lirn.net/ | http://www.twitter.com/LIRNnotes | 
http://www.facebook.com/LIRNnotes

On Jun 19, 2015, at 12:58, Kun Lin l...@whitman.edu wrote:

 Quick question:
 
 
 
 Who is using CloudFlare for their library website? Are they very
 accommodating in using CNAME?
 
 
 
 Thanks
 
 Kun Lin


Re: [CODE4LIB] quick question: CloudFlare

2015-06-19 Thread Andrew Anderson
That’s a bit sub-optimal regarding how they handle domain setup, I agree.  You 
can get partial functionality by adding an NS record in your existing DNS 
servers to point specific records to their DNS servers, even without going 
through the full domain delegation process.  After some testing, we were 
sufficiently happy with their service to move forward with the full delegation, 
but this technique worked well for kicking the tires without making the full 
commitment to their DNS service.

The down side to using the NS trick is that their SSL handling will not be 
fully active unless you do the whole domain.  Depending on what you hope to 
accomplish, that may be the make-or-break decision for using their service or 
not.  You can still do SSL on the host under some circumstances, but I believe 
all entries in the top level domain must use their certificates when 
acceleration is active.  Subdomains can still use the SSL certificate on the 
host even without full delegation.

Another reason to consider letting them handle your DNS (if you can) is that 
they have some pretty interesting plans for adding DNSSEC support for later 
this year.

At any rate, what I would suggest you consider is something like this:

test    IN  NS  ns1.ns.cloudflare.com
        IN  NS  ns2.ns.cloudflare.com

and replace ns1 and ns2 with the name servers assigned to your account.

Of course, you need a “test” record created on the CloudFlare end to serve the 
appropriate DNS entries.  This configuration will send all DNS queries for the 
test host to CloudFlare’s servers and through their acceleration infrastructure.

Hope this helps,
Andrew

-- 
Andrew Anderson, Director of Development, Library and Information Resources 
Network, Inc.
http://www.lirn.net/ | http://www.twitter.com/LIRNnotes | 
http://www.facebook.com/LIRNnotes

On Jun 19, 2015, at 18:29, Kun Lin l...@whitman.edu wrote:

 In most cases, CloudFlare will want you to delegate the whole domain to their
 DNS servers. This is impossible for us to do. Therefore, I am trying to
 figure out the CNAME option.
 
 Thanks
 Kun
 
 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of
 Andrew Anderson
 Sent: Friday, June 19, 2015 3:24 PM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] quick question: CloudFlare
 
 We have had good experience with it so far, yes.  Do you have a specific
 use case that you're concerned about?
 
 --
 Andrew Anderson, Director of Development, Library and Information
 Resources Network, Inc.
 http://www.lirn.net/ | http://www.twitter.com/LIRNnotes |
 http://www.facebook.com/LIRNnotes
 
 On Jun 19, 2015, at 12:58, Kun Lin l...@whitman.edu wrote:
 
 Quick question:
 
 
 
 Who is using CloudFlare for their library website? Are they very
 accommodating in using CNAME?
 
 
 
 Thanks
 
 Kun Lin


Re: [CODE4LIB] Let's implement the referrer meta tag

2015-06-12 Thread Andrew Anderson
Or just SSL-enable your library web site.  Few vendors support SSL today, so 
crossing the HTTP/HTTPS barrier is supposed to automatically disable referring 
URL passing.

http://www.w3.org/Protocols/rfc2616/rfc2616-sec15.html#sec15.1.3

15.1.3 Encoding Sensitive Information in URI's

Because the source of a link might be private information or might reveal an 
otherwise private information source, it is strongly recommended that the user 
be able to select whether or not the Referer field is sent. For example, a 
browser client could have a toggle switch for browsing openly/anonymously, 
which would respectively enable/disable the sending of Referer and From 
information.

Clients SHOULD NOT include a Referer header field in a (non-secure) HTTP 
request if the referring page was transferred with a secure protocol.

Authors of services which use the HTTP protocol SHOULD NOT use GET based forms 
for the submission of sensitive data, because this will cause this data to be 
encoded in the Request-URI. Many existing servers, proxies, and user agents 
will log the request URI in some place where it might be visible to third 
parties. Servers can use POST-based form submission instead

-- 
Andrew Anderson, Director of Development, Library and Information Resources 
Network, Inc.
http://www.lirn.net/ | http://www.twitter.com/LIRNnotes | 
http://www.facebook.com/LIRNnotes

On Jun 12, 2015, at 0:24, Conal Tuohy conal.tu...@gmail.com wrote:

 Assuming your library web server has a front-end proxy (I guess this is
 pretty common) or at least runs inside Apache httpd or something, then
 rather than use the HTML meta tag, it might be easier to set the referer
 policy via the Content-Security-Policy HTTP header field.
 
 https://w3c.github.io/webappsec/specs/content-security-policy/#content-security-policy-header-field
 
 e.g. in Apache httpd with mod_headers:
 
 Header set Content-Security-Policy "referrer 'no-referrer'"
 
 
 
 On 12 June 2015 at 13:55, Frumkin, Jeremy A - (frumkinj) 
 frumk...@email.arizona.edu wrote:
 
 Eric -
 
 Many thanks for raising awareness of this. It does feel like encouraging
 good practice re: referrer meta tag would be a good thing, but I would not
 know where to start to make something like this required practice. Did you
 have some thoughts on that?
 
 — jaf
 
 ---
 Jeremy Frumkin
 Associate Dean / Chief Technology Strategist
 University of Arizona Libraries
 
 +1 520.626.7296
 j...@arizona.edu
 ——
 A person who never made a mistake never tried anything new. - Albert
 Einstein
 
 
 
 
 
 
 
 
 
 On 6/11/15, 8:25 AM, Eric Hellman e...@hellman.net wrote:
 
 
 http://go-to-hellman.blogspot.com/2015/06/protect-reader-privacy-with-referrer.html
 
 
 I hope this is easy to deploy on library websites, because the privacy
 enhancement is significant.
 
 I'd be very interested to know of sites that are using it; I know Thomas
 Dowling implemented a referrer policy on http://oatd.org/
 
 Would it be a good idea to make it a required practice for libraries?
 
 
 Eric Hellman
 President, Gluejar.Inc.
 Founder, Unglue.it https://unglue.it/
 http://go-to-hellman.blogspot.com/
 twitter: @gluejar
 


Re: [CODE4LIB] Help with Auto Hot Key

2015-05-06 Thread Andrew Weidner
Hi Eddie,

AutoHotkey can probably do what you want to do. I am not familiar with the
Sierra interface, although I have successfully used AHK to automate
workflows in a variety of applications.

Here's an example of a subroutine with key commands that copy the contents
of a CONTENTdm text input box:
https://github.com/metaweidner/UHDL_SubjectTopical_CDM/blob/master/UHDL_SubjectTopical_CDM.ahk#L295-303

And a check to see if there was actually any text on the clipboard as a
result:
https://github.com/metaweidner/UHDL_SubjectTopical_CDM/blob/master/UHDL_SubjectTopical_CDM.ahk#L152-158

I'd be happy to pass along more examples.

Best,

Andrew Weidner
ajweid...@uh.edu



On Tue, May 5, 2015 at 5:00 PM, Karl Holten khol...@switchinc.org wrote:

 This doesn't involve AutoHotkey, but maybe it would be easier to use SQL
 to pull that data from the Sierra database rather than screen scraping from
 the Sierra application. You wouldn't need to worry about where stuff
 displays in the interface, just where its stored on the backend. This
 solution would probably be cleaner to maintain as well.

 Excel has ways to pull in data from external sources like SQL databases,
 it looks like Microsoft Publisher does too. I can't speak to how easy it
 would be to set that up, but hopefully it would give you a start:

 https://support.office.com/en-ie/article/Import-data-into-Office-Publisher--Visio-or-Word-by-using-the-Data-Connection-Wizard-65295a62-8da3-49bc-8dd8-1f77d0a05127

 Anyway, that's my 2 cents on an alternative tack you might want to try.

 Hope that helps,
 Karl Holten
 Systems Integration Specialist
 SWITCH Inc
 414-382-6711

 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of
 Eddie Clem
 Sent: Monday, May 4, 2015 1:50 PM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: [CODE4LIB] Help with Auto Hot Key

 Hi there! I'm hoping someone here is a guru at AutoHotKey! :)



 We have a clerk that pays our invoices in Sierra. She will write the bib
 number on a sticky note, as well as the list price and the locations (that
 each copy will go to). I want to have Sierra copy the bib number, list
 price, locations, and order record notes onto a receipt and then this clerk
 would put this receipt with the first copy of the material, rather than
 hand write on sticky notes all day! Since I had looked and couldn't find a
 way to do this easily from Sierra, I had another brilliant idea: we
 could have AutoHotkey copy the fields I want into a template (say, in
 Publisher) and have the bib number turned into a barcode, and list the
 other fields that we want that travel around the tech services department.
 This barcoded bib number would be used by catalogers to enter the bib
 number in the 949 for overlay in Connexion, and then again by our barcoding
 clerk to search by bib number in Sierra. At this point, I'm thinking that
 Autohotkey is my best bet.



 Here is my prototype of what the routing slip would look like when it's
 done. The Thickety 2 is a note in the order record put in by our
 selectors for our catalogers to add that series to the bib record. The
 978... is just a placeholder for where the list price will go once we get
 that field added to our order records:



 [embedded image: mock-up of the routing slip]



 Here is the corresponding order record. Part of my problem with AutoHotkey
 is that not all order records will contain a note (in field z) and the
 locations may be different (fewer or more) on the LOCATIONS line. I have to
 include the multi line, because if it's just our Main Library that's
 receiving the item, then the LOCATIONS at the bottom don't show up at
 all...just the LOCATION fixed field (under ACQ TYPE).



 [embedded image: screenshot of the corresponding order record]



 Any thoughts would be greatly appreciated!



 Thanks!

 Eddie


 Eddie Clem, MLS
 Cataloging Librarian
 ec...@khcpl.orgmailto:ec...@khcpl.org | www.KHCPL.org
 http://www.khcpl.org/

 Kokomo-Howard County Public Library
 Collection Management Department
 305 East Mulberry Street
 Kokomo, IN 46901
 765.626.0853|765.450.6290 (fax)



Re: [CODE4LIB] Anyone analyzed SirsiDynix Symphony transaction logs?

2015-03-19 Thread Andrew Nisbet
Elasticsearch is a NoSQL database 
(http://www.slideshare.net/DmitriBabaev1/elastic-search-moscow-bigdata-cassandra-sept-2013-meetup)
 and much easier to install and manage than Mongo or CouchDB. 

Why 'boggle'? It's a 'hello world' sketch: no exception guarding, hard-coded 
URLs, and other embarrassing no-nos... 

... ok, fine https://github.com/anisbet/hist

Edmonton Public Library
Andrew Nisbet
ILS Administrator

T: 780.496.4058   F: 780.496.8317

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Cary 
Gordon
Sent: March-19-15 1:15 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Anyone analyzed SirsiDynix Symphony transaction logs?

Has anyone considered using a NoSQL database to store their logs? With enough 
memory, Redis might be interesting, and it would be fast.

The concept of "too experimental to post to Github" boggles the mind.

Cary


 On Mar 19, 2015, at 9:38 AM, Andrew Nisbet anis...@epl.ca wrote:
 
 Hi Bill,
 
 I have been doing some work with Symphony logs using Elasticsearch. It is 
 simple to install and use, though I recommend Elasticsearch: The Definitive 
 Guide (http://shop.oreilly.com/product/0636920028505.do). The main problem is 
 the size of the history logs, ours being on the order of 5,000,000 lines per 
 month. 
 
 Originally I used a simple python script to load each record. The script 
 broke down each line into the command code, then all the data codes, then 
 loaded them using curl. This failed initially because Symphony writes 
 extended characters to title fields. I then ported the script to python 3.3 
 which was not difficult, and everything loaded fine -- but took more than a 
 to finish a month's worth of data. I am now experimenting with Bulk 
 (http://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html)
  to improve performance.
 
 I would certainly be willing to share what I have written if you would like. 
 The code is too experimental to post to Github however.
 
 Edmonton Public Library
 Andrew Nisbet
 ILS Administrator
 
 T: 780.496.4058   F: 780.496.8317
 
 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of 
 William Denton
 Sent: March-18-15 3:55 PM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: [CODE4LIB] Anyone analyzed SirsiDynix Symphony transaction logs?
 
 I'm going to analyze a whack of transaction logs from our Symphony ILS so 
 that we can dig into collection usage.  Any of you out there done this?  
 Because the system is so closed and proprietary I understand it's not easy 
 (perhaps
 impossible?) to share code (publicly?), but if you've dug into it I'd be 
 curious to know, not just about how you parsed the logs but then what you did 
 with it, whether you loaded bits of data into a database, etc.
 
 Looking around, I see a few examples of people using the system's API, but 
 that's it.
 
 Bill
 --
 William Denton ↔  Toronto, Canada ↔  https://www.miskatonic.org/


Re: [CODE4LIB] Anyone analyzed SirsiDynix Symphony transaction logs?

2015-03-19 Thread Andrew Nisbet
Hi Bill,

I have been doing some work with Symphony logs using Elasticsearch. It is 
simple to install and use, though I recommend Elasticsearch: The Definitive 
Guide (http://shop.oreilly.com/product/0636920028505.do). The main problem is 
the size of the history logs, ours being on the order of 5,000,000 lines per 
month. 

Originally I used a simple python script to load each record. The script broke 
down each line into the command code, then all the data codes, then loaded them 
using curl. This failed initially because Symphony writes extended characters 
to title fields. I then ported the script to python 3.3 which was not 
difficult, and everything loaded fine -- but took more than a to finish a 
month's worth of data. I am now experimenting with Bulk 
(http://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html) 
to improve performance.
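In case it helps anyone heading down the same path, the Bulk approach with the 
official Python client looks roughly like the sketch below.  The index name, 
the caret-delimited parsing, and the field names are stand-ins for whatever 
your own history logs actually contain:

# rough sketch only -- log format and field names are stand-ins
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch(['http://localhost:9200'])

def actions(path):
    with open(path, encoding='utf-8', errors='replace') as fh:
        for line in fh:
            parts = line.rstrip('\n').split('^')  # caret-delimited (assumed)
            yield {
                '_index': 'symphony-hist',
                '_source': {'command_code': parts[0], 'data_codes': parts[1:]},
            }

helpers.bulk(es, actions('hist.log'), chunk_size=5000)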

I would certainly be willing to share what I have written if you would like. 
The code is too experimental to post to Github however.

Edmonton Public Library
Andrew Nisbet
ILS Administrator

T: 780.496.4058   F: 780.496.8317

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of William 
Denton
Sent: March-18-15 3:55 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] Anyone analyzed SirsiDynix Symphony transaction logs?

I'm going to analyze a whack of transaction logs from our Symphony ILS so that 
we can dig into collection usage.  Any of you out there done this?  Because the 
system is so closed and proprietary I understand it's not easy (perhaps
impossible?) to share code (publicly?), but if you've dug into it I'd be 
curious to know, not just about how you parsed the logs but then what you did 
with it, whether you loaded bits of data into a database, etc.

Looking around, I see a few examples of people using the system's API, but 
that's it.

Bill
--
William Denton ↔  Toronto, Canada ↔  https://www.miskatonic.org/


[CODE4LIB] Access 2015 Call for peer-reviewers

2015-03-13 Thread Andrew Lockhart
Hi all:

The planning team for the Access 2015 Conference is now looking for
volunteers to serve as peer-reviewers. As noted in the Call for Proposals
http://accessconference.ca/program/call-for-proposals/, we’ll be
peer-reviewing submissions for proposals that are longer than 15 minutes.

If you’re interested, shoot us an email at accesslib...@gmail.com (by
Friday, March 27th), attaching your most recent CV/resume and answers to
the following questions:

Name

Current Position (including whether you are a student)

Institution

Have you been to Access before?

Have you presented at Access before?

Have you done peer review for Access before?

As always, feel free to contact us with any questions you might have!

Andrew Lockhart
Reference Librarian
Moncton Public Library

On behalf of the Access 2015 Planning Committee


Re: [CODE4LIB] making EZproxy http/https transparent

2015-03-03 Thread Andrew Anderson
https://pluto.potsdam.edu/ezproxywiki/index.php/SSL#Wildcard_certificate

(You can safely ignore the SSL warning, pluto uses self-signed certificates)

-- 
Andrew Anderson, Director of Development, Library and Information Resources 
Network, Inc.
http://www.lirn.net/ | http://www.twitter.com/LIRNnotes | 
http://www.facebook.com/LIRNnotes

On Mar 3, 2015, at 11:46, Karl Holten khol...@switchinc.org wrote:

 If you're using proxy by hostname, it's my understanding that you need to 
 purchase a SSL certificate for each secure domain, otherwise you get security 
 errors. Depending on how many domains you have, the cost of this can add up. 
 Maintaining it is a headache too because it seems like vendors often don't 
 bother to notify you they're making a switch.
 
 If there's some way to avoid doing this, I would love to know!
 
 Karl Holten
 Systems Integration Specialist
 SWITCH Inc
 414-382-6711
 
 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of 
 Stuart A. Yeates
 Sent: Monday, March 2, 2015 5:27 PM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: [CODE4LIB] making EZproxy http/https transparent
 
 In the last couple of months we've had to update a number of EZproxy stanzas 
 as either tools migrate to HTTPS-only or people try to access HTTP/HTTPS 
 parallel resources using browsers that automatically detect HTTP/HTTPS 
 parallel resources and switch users to the HTTPS version (think current 
 Chrome, anything with the HTTPSeverywhere plugin).
 
 We'd like to avoid updating our config.txt piecemeal on the basis of 
 user-generated error reports.
 
 We're thinking of going through our EZproxy config.txt and adding an H 
 https:// for every H or URL entry. (Domain and DomainJavascript already work 
 for both HTTP and HTTPS).
 
 Has anyone tried anything like this? Are there pitfalls?
 
 cheers
 stuart
 --
 ...let us be heard from red core to black sky


[CODE4LIB] Call for proposals for this year's Access Conference

2015-02-21 Thread Andrew Lockhart
*APOLOGIES FOR ANY DUPLICATION.*

Access is Canada’s premier library technology conference, bringing together
librarians, technicians, developers, and programmers to discuss
cutting-edge library technologies. This is the 22nd year of Access and
we’re thrilled to bring the conference to Toronto, Ontario! AccessYYZ will
be held at the beautiful Bram and Bluma Appel Salon at the Toronto
Reference Library from September 8-11, 2015.

This year, we’re looking for the most innovative and creative ways that
tech has been or could be implemented in your library – whether it’s big,
small, or completely theoretical! Are you using a Raspberry Pi to build
custom hardware? How might linked open data change your day-to-day
library services in the future? Have you been working to foster a renewed
culture of tech exploration among your staff? We want to hear all about it!
We would especially love to see proposals from public libraries, special
libraries, and other places outside the ivory tower that are using library
technologies in new and innovative ways.

In time-honoured Access tradition, we want to take advantage of the
flexibility of single track conference planning by letting you propose your
preferred session length and format. Let us know if you want to do a
traditional talk, a poster presentation, a demo, Pecha Kucha, a lightning
talk, a panel of experts or something completely different. Be creative!

To apply, please fill out the form here
http://accessconference.ca/program/call-for-proposals/ by March 20, 2015.
If you need some extra inspiration, you can check out the 2014 conference
program here
http://accessconference.ca/about/past-conferences/2014-calgary-mn/schedule/.
Make sure you take a gander at the Code of Conduct
http://accessconference.ca/access-code-of-conduct/ too, so you know
what’ll fly.

If you have any questions, check out our site at accessconference.ca or
shoot us an email at accesslib...@gmail.com.

We’re looking forward to hearing from you!

Andrew Lockhart
On behalf of the Access 2015 Organizing Committee


Re: [CODE4LIB] IT Refresh Cycles

2015-02-02 Thread Andrew LIVESAY
Hi Brian,

There are a lot of IT Service Management (ITSM) frameworks available that
cover hardware asset lifecycle management in some way.  Information
Technology Infrastructure Library (ITIL) Software Asset Management (SAM) is
one.  In my experience these aren't really effective approaches taken
piecemeal, and there's a lot of overhead in subscribing to the whole
approach.

For some general advice, this article seems pretty decent:
http://www.archstoneconsulting.com/services/it-strategy-opeations/white-papers/optimizing-it-technology.jsp

The points about taking a holistic approach and avoiding a
one-size-fits-all refresh policy are spot-on.

-a

Andrew Livesay
andr...@multco.us


On Mon, Feb 2, 2015 at 5:30 PM, Riley Childs rchi...@cucawarriors.com
wrote:

 That is the general feel I got. Taking into account the idea that your
 tech will be in pieces after 3-5 years (at least ours is), it is less about
 a need and more about fiscal planning (if we don't replace on a regular
 schedule, the money won't be there for it).
 We lease because the TCO is much lower at the volume we lease at. I highly
 recommend calling CDW-G; they focus on edu and government and are fantastic.
 //Riley


 Sent from my Windows Phone

 --
 Riley Childs
 Senior
 Charlotte United Christian Academy
 Library Services Administrator
 IT Services Administrator
 (704) 537-0331x101
 (704) 497-2086
 rileychilds.net
 @rowdychildren
 I use Lync (select External Contact on any XMPP chat client)

 CONFIDENTIALITY NOTICE:  This email and any files transmitted with it are
 the property of Charlotte United Christian Academy.  This e-mail, and any
 attachments thereto, is intended only for use by the addressee(s) named
 herein and may contain confidential information that is privileged and/or
 exempt from disclosure under applicable law.  If you are not one of the
 named original recipients or have received this e-mail in error, please
 permanently delete the original and any copy of any e-mail and any printout
 thereof. Thank you for your compliance.  This email is also subject to
 copyright. No part of it nor any attachments may be reproduced, adapted,
 forwarded or transmitted without the written consent of the copyright
 ow...@cucawarriors.com

 
 From: Rogers, Brianmailto:brian-rog...@utc.edu
 Sent: ‎2/‎2/‎2015 8:22 PM
 To: Riley Childsmailto:rchi...@cucawarriors.com
 Subject: RE: IT Refresh Cycles

 Hi Riley -

 Thanks for the email. When you came up with those numbers, did you pull
 from any research to inform that decision? I'm having a difficult time
 finding consistent, authoritative justification.

 - Brian
 
 From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] on behalf of Riley
 Childs [rchi...@cucawarriors.com]
 Sent: Monday, February 02, 2015 8:13 PM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: IT Refresh Cycles

 We lease everything, 3-5 years is the average refresh for our stuff.

 //Riley

 Sent from my Windows Phone

 --
 Riley Childs
 Senior
 Charlotte United Christian Academy
 Library Services Administrator
 IT Services Administrator
 (704) 537-0331x101
 (704) 497-2086
 rileychilds.net
 @rowdychildren
 I use Lync (select External Contact on any XMPP chat client)

 CONFIDENTIALITY NOTICE:  This email and any files transmitted with it are
 the property of Charlotte United Christian Academy.  This e-mail, and any
 attachments thereto, is intended only for use by the addressee(s) named
 herein and may contain confidential information that is privileged and/or
 exempt from disclosure under applicable law.  If you are not one of the
 named original recipients or have received this e-mail in error, please
 permanently delete the original and any copy of any e-mail and any printout
 thereof. Thank you for your compliance.  This email is also subject to
 copyright. No part of it nor any attachments may be reproduced, adapted,
 forwarded or transmitted without the written consent of the copyright
 ow...@cucawarriors.com

 
 From: Brian Rogersmailto:brian-rog...@utc.edu
 Sent: ‎2/‎2/‎2015 6:58 PM
 To: CODE4LIB@LISTSERV.ND.EDUmailto:CODE4LIB@LISTSERV.ND.EDU
 Subject: [CODE4LIB] IT Refresh Cycles

 Hi - Is anyone responsible for IT refresh cycles in their library? This
 could be anything from computers to projectors, switches to routers,
 servers to media players. Looking to see who people turn to for industry
 standards (within libraries, or within colleges/universities, or the
 outside world), in researching recommendations and policies.

 Thanks,
 Brian Rogers



Re: [CODE4LIB] lita

2015-01-06 Thread Hickner, Andrew
As another tech person in a medical library, I will second this.  It does pose 
a bit of a dilemma, since I'm not interested in paying 2 sets of membership 
dues either.  IMO, there's a particular need for a web user experience group 
within MLA - after all, every medical library has a website.  If you ever 
decide to make another push for a section within MLA, I'd be game to help out.



Andy Hickner
Web Services Librarian
Cushing/Whitney Medical Library
Yale University
p 203.785.3969
andrew.hick...@yale.edu 
http://library.medicine.yale.edu 

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Jason 
Bengtson
Sent: Tuesday, January 06, 2015 8:34 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] lita

It's funny; I work in medical libraries, although I've been considering 
attempting a move to regular academic libraries for a while now. In the Medical 
Library Association we don't really have a LITA. We may have some kind of 
technology interest group in there somewhere, but I find tech interest on this 
side of the discipline to be very spotty. I approached the responsible party at 
MLA about creating a technology section and couldn't even get them to return my 
emails. I turned away from my AHIP (MLA's
Academy) membership a while back in disgust at some of their policies, although 
before I did I noticed with some interest that you could (and I
did) get points for developing some apps or digital tutorials and the like; 
nevertheless it felt very tacked on. The one case study I submitted to JMLA 
(the association's main journal), on an XML/XSLT based staff list tool I 
created for the website of the University of New Mexico's medical library, was 
flatly rejected as being too technical for the journal.

Best regards,
*Jason Bengtson, MLIS, MA*

Head of Library Computing and Information Systems Assistant Professor, Graduate 
College Department of Health Sciences Library and Information Management 
University of Oklahoma Health Sciences Center 405-271-2285, opt. 5
405-271-3297 (fax)
jason-bengt...@ouhsc.edu
http://library.ouhsc.edu
www.jasonbengtson.com

NOTICE:
This e-mail is intended solely for the use of the individual to whom it is 
addressed and may contain information that is privileged, confidential or 
otherwise exempt from disclosure. If the reader of this e-mail is not the 
intended recipient or the employee or agent responsible for delivering the 
message to the intended recipient, you are hereby notified that any 
dissemination, distribution, or copying of this communication is strictly 
prohibited. If you have received this communication in error, please 
immediately notify us by replying to the original message at the listed email 
address. Thank You.
j.bengtson...@gmail.com

On Mon, Jan 5, 2015 at 2:51 PM, KLINGLER, THOMAS t...@kent.edu wrote:

 ...and maybe a little influence from the current ALA membership payment 
 options.  You used to have to pay your base membership and a division (or
 two?).  Recently, you can go cheap and pay ONLY the base membership cost!
 No forced division membership.

 TK


 Tom Klingler
 Assistant Dean for Technical Services
 University Libraries, Rm 300
 1125 Risman Drive
 Kent State University
 Kent, Ohio 44242-0001
 330-672-1646 office



 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf 
 Of Roy Tennant
 Sent: Monday, January 05, 2015 11:42 AM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] lita

 Also, I would point out that libraries increasingly hire 
 non-librarians in technology positions. That likely means that even if 
 said persons might eventually find Code4Lib, their allegiance to a 
 profession as epitomized by ALA is unlikely.
 Roy

 On Mon, Jan 5, 2015 at 8:37 AM, Debra Shapiro dsshap...@wisc.edu wrote:

  LITA is now the smallest ALA division.
 
  Personally, as someone who’s been involved with LITA for 20 years, I 
  think the decrease is due to all the reasons Kevin cites below, and 
  also because of something of an identity crisis - related to the 
  advent of the Internet, as Eric says.
 
  LITA is the technology division of the ALA. *Everything* in 
  libraries is done with technology now, so ALA members who once 
  might’ve chosen to join the technology division choose instead to 
  join other divisions, related to their other interests. Look at the 
  list of ALCTS (the cataloging division) programs for any given ALA 
  conference, or ALCTS list of CE webinars, and it’s all topics that 
  might’ve once been
 more the purview of LITA.
 
  Of course I ran for LITA prez on that platform 6 years ago and lost 
  so what do I know …
 
  deb
 
 
  On Jan 5, 2015, at 10:28 AM, Kevin Ford k...@3windmills.com wrote:
 
I think 

Re: [CODE4LIB] [RESOLVED] Re: HTTPS EZproxy question / RFC 6125

2014-12-24 Thread Andrew Anderson
There are 3 basic approaches to rewriting proxy servers that I have seen in the 
wild, each with their own strengths and weaknesses:

1) Proxy by port

This is the original EZproxy model, where each proxied resource gets its own 
port number.  This runs afoul of firewall rules for ports other than 80/443, 
and it creates a problem for SSL access, as clients try both HTTP and HTTPS on 
the same port number, and EZproxy is not set up to differentiate the two 
protocols accessing the same port.  With more and more resources moving to 
HTTPS, the end of this solution as a viable option is in sight.

2) Proxy by hostname

This is the current preferred EZproxy model, as it addresses the HTTP(S) port 
issue, but as you have identified, it instead creates a hostname mangling 
issue, and now I’m curious myself about how EZproxy will handle a hyphenated 
SSL site as well with HttpsHyphens enabled.  I /think/ it does the right thing 
by mapping the hostname back to the original internally, as a “-“ in hostnames 
for release versioning is how the Google App Engine platform works, but I have 
not explicitly investigated that.

3) Proxy by path

A different proxy product that we use, Muse Proxy from Edulib, leverages proxy 
by path, where the original website URL is deconstructed and passed to the 
proxy server as query arguments.  This approach has worked fairly well as it 
cleanly avoids the hostname mangling issues, though some of the new “single 
page web apps” that use JavaScript routing patterns can be interesting, so the 
vendor has added proxy by hostname support as an option for those sites as a 
fallback.

So there is no perfect solution, but some work better than others.  I’m looking 
forward to expanding our use of the proxy by path approach, as that is a very 
clean approach to this problem, and it seems to have fewer caveats than the 
other two approaches.
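To make the difference between the last two models concrete, here is a toy 
illustration in Python of how the same origin URL might be rewritten under 
proxy by hostname and proxy by path.  The proxy hostname and the /proxy?url= 
path are made up, and real products handle far more edge cases than this:

# toy illustration only; not how any particular product implements it
from urllib.parse import urlsplit, quote

PROXY_HOST = 'ezproxy.yourlib.org'           # placeholder proxy hostname

def by_hostname(url):
    # flatten the target hostname into a subdomain of the proxy
    parts = urlsplit(url)
    flat = parts.hostname.replace('.', '-')  # www.somedb.com -> www-somedb-com
    return '%s://%s.%s%s' % (parts.scheme, flat, PROXY_HOST, parts.path or '/')

def by_path(url):
    # pass the original URL to the proxy as a query argument
    return 'https://%s/proxy?url=%s' % (PROXY_HOST, quote(url, safe=''))

print(by_hostname('https://www.somedb.com/search'))
# https://www-somedb-com.ezproxy.yourlib.org/search
print(by_path('https://www.somedb.com/search'))
# https://ezproxy.yourlib.org/proxy?url=https%3A%2F%2Fwww.somedb.com%2Fsearch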

-- 
Andrew Anderson, Director of Development, Library and Information Resources 
Network, Inc.
http://www.lirn.net/ | http://www.twitter.com/LIRNnotes | 
http://www.facebook.com/LIRNnotes

On Dec 18, 2014, at 17:04, Stuart A. Yeates syea...@gmail.com wrote:

 It appears that the core of my problem was that I was unaware of
 
 Option HttpsHyphens / NoHttpsHyphens
 
 which toggle between proxying on
 
 https://www.somedb.com.ezproxy.yourlib.org
 
 and
 
 https://www-somedb-com.ezproxy.yourlib.org
 
 and allows infinitely nested domains to be proxied using a simple
 wildcard cert by compressing things.
 
 The paranoid in me is screaming that there's an interesting brokenness
 in here when a separate hosted resource is at https://www-somedb.com/,
 but I'm trying to overlook that.
 
 cheers
 stuart
 --
 ...let us be heard from red core to black sky
 
 
 On Mon, Dec 15, 2014 at 9:24 AM, Stuart A. Yeates syea...@gmail.com wrote:
 Some resources are only available only via HTTPS. Previously we used a
 wildcard certificate, I can't swear that it was ever tested as
 working, but we weren't getting any complaints.
 
 Recently browser security has been tightened and RFC 6125 has appeared
 and been implemented and proxing of https resources with a naive
 wildcard cert no longer works (we're getting complaints and are able
 to duplicate the issues).
 
 At 
 https://security.stackexchange.com/questions/10538/what-certificates-are-needed-for-multi-level-subdomains
 there is an interesting solution with multiple wildcards in the same
 cert:
 
 foo.com
 *.foo.com
 *.*.foo.com
 ...
 
 There is also the possibility that we can just grep the logs for every
 machine name ever accessed and generate a huge list.
 
 Has anyone tried these options? Successes? Failures? Thoughts?
 
 cheers
 stuart
 
 
 --
 ...let us be heard from red core to black sky


Re: [CODE4LIB] Functional Archival Resource Keys

2014-12-11 Thread Andrew Anderson
I’m not commenting on whether inflections are good, bad, or ugly, but simply 
looking at this from the perspective of the real-world hurdles, unexpected 
interactions, and implementation challenges that will come from selecting an 
existing reserved character as an inflection indicator.  It looks like we 
disagree on the claim that “no one is using it”: the “?” has a clearly defined 
role in the URI specification, and it is not uncommon to use a bare “?” as a 
cache-busting mechanism when clearly no one intends to fetch an object’s 
metadata by doing so.

Taking a step back, this seems like a false economy vs a more expressive and 
human-friendly mechanism for defining access to metadata and policy for the 
object in question.

There are a number of different approaches that could be taken to achieve the 
stated goals of ARK without overloading the purpose of an existing defined 
reserved character, and I think that the project would be doing itself a favor 
by exploring the alternatives to find an approach that does not have the 
potential to slow adoption due to technical and political reasons.
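As a purely hypothetical sketch of why the terminal "?" is awkward to honor: a 
resolver has to look at the raw request target, because by the time most web 
frameworks hand you a parsed request, an empty query string and a bare 
trailing "?" look the same.  At the string level the check amounts to this:

# hypothetical sketch: classify a raw request target string
def classify(request_target):
    if request_target.endswith('?'):
        return 'inflection (metadata request)'
    if '?' in request_target:
        return 'ordinary query string'
    return 'plain object request'

print(classify('/ark:/13030/tf5p30086k?'))     # inflection (metadata request)
print(classify('/ark:/13030/tf5p30086k?q=1'))  # ordinary query string
print(classify('/ark:/13030/tf5p30086k'))      # plain object request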

-- 
Andrew Anderson, Director of Development, Library and Information Resources 
Network, Inc.
http://www.lirn.net/ | http://www.twitter.com/LIRNnotes | 
http://www.facebook.com/LIRNnotes

On Dec 10, 2014, at 14:28, John Kunze j...@ucop.edu wrote:

 I don't know the precise constraints you're working under, but Henry
 Thompson of the W3C TAG (Technical Architecture Group) has advocated for
 consideration of the ARK approach to the TAG's meetings.
 
 The terminal '?' is sort of a no-brainer, but clearly it stretches the URI
 spec; on the plus side, it's ripe for definition since no one else is using
 it.  It was Jonathan Rees (also of the W3C TAG) who pointed out the need
 for an additional response header, just in case some service actually was
 responding query strings that overlapped with inflections.
 
 Just to be clear, the ARKs don't own the inflections concept (in fact the
 ARK scheme is unusual in not owning things, such as a resolver).  If you
 think inflections are a good/bad idea for ARKs, chances are you'll think
 the same for other kinds of identifiers.  As Clifford Lynch once said, the
 '?' inflection should work for all URLs.
 
 On Tue, Dec 9, 2014 at 10:09 PM, Andrew Anderson and...@lirn.net wrote:
 
 RFC and expectation violations make my brain hurt.
 
 Overloading an operator that has a clearly defined role in HTTP URIs (
 https://tools.ietf.org/html/rfc7230#section-2.7.1) creates the potential
 for /so/ many unexpected interactions between browsers (
 https://code.google.com/p/chromium/issues/detail?id=108690), HTTP caches,
 URL rewriting servers, etc. that implementations, adopters, and users are
 going to be playing a long game of whack-a-mole working around them.
 
 The proposal is already carving out a URI namespace in the form of “ark:”:
 
  http://ark.cdlib.org/ark:/13030/tf5p30086k?
 
 So why not take advantage of the fact that any system processing the
 “ark:” namespace is already going to have to be a custom application and
 adopt a RESTful path to communicate the service requested instead?
 
  http://ark.cdlib.org/ark:metadata/13030/tf5p30086k
  http://ark.cdlib.org/ark:policy/13030/tf5p30086k
 
 If a web services style implementation is undesired, what about creating
 another reserved character or overloading a character that is already used in
 URIs but not part of the HTTP URI specification, such as "!"?
 
 Or, if a standard approach for HTTP header implementation were proposed
 and adopted, it is not unreasonable to imagine that browsers might adopt
 methods that would allow the average user access to the inflections without
 jumping through hoops once adoption reaches critical mass.
 
 There are many approaches and techniques that could be employed here that
 would not require overloading “?” in HTTP URIs that there really is no
 excuse for trying to do so.
 
 --
 Andrew Anderson, Director of Development, Library and Information
 Resources Network, Inc.
 http://www.lirn.net/ | http://www.twitter.com/LIRNnotes |
 http://www.facebook.com/LIRNnotes
 
 On Dec 9, 2014, at 9:25, Ethan Gruber ewg4x...@gmail.com wrote:
 
 I'm using a few applications in Tomcat, so inflections are much more
 difficult to implement than content negotiation. I can probably tweak the
 Apache settings to do a proxypass for inflections by modifying the
 examples
 above.
 
 I agree with Conal, though. Inflections are puzzling at best and bad
 architecture at worst, and the sooner the community puts forward a more
 standard solution, the better.
 
 On Mon, Dec 8, 2014 at 7:21 PM, John Kunze j...@ucop.edu wrote:
 
 Just as a URL permits an ordinary user with a web browser to get to an
 object, inflections permit an ordinary user to see metadata (without
 curl
 or code).
 
 There's nothing to prevent a server from supporting both the HTTP Accept
 header (content negotiation) and inflections.  If you can do

[CODE4LIB] Jobs: Vacancy at Imperial College London - Systems Librarian

2014-11-17 Thread Andrew Preater
I'd like to draw your attention to this exciting opportunity at Imperial 
College London. We're seeking to recruit an enthusiastic, experienced 
individual to support our key library information systems and assist in 
developing new systems and services to meet the changing needs of our users for 
learning, teaching, and research.

Full details and application information can be found at: 
https://www4.ad.ic.ac.uk/OA_HTML/OA.jsp?OAFunc=IRC_VIS_VAC_DISPLAY&OAMC=R&p_svid=44962&p_spid=1698171&p_lang_code=US

The closing date is 1st December.

If you'd like an informal discussion about the post, please contact me.

Andrew

-- 
Andrew Preater
Team Leader Systems and Innovation Support Services
Central Library
Imperial College London
South Kensington Campus
London SW7 2AZ

Email: a.prea...@imperial.ac.uk
Tel: 020 7594 5667
Web: www.imperial.ac.uk/library
Twitter: @preater


Re: [CODE4LIB] Stack Overflow

2014-11-04 Thread Andrew Anderson
On Nov 4, 2014, at 9:42, Joshua Welker wel...@ucmo.edu wrote:

 3. Libraries have a culture of
 protecting vendors from criticism. Sure, we do lots of criticism behind
 closed doors, but nowhere that leaves an online footprint.

Oops.  Someone should have told me that rule before I openly and repeatedly 
criticized EBSCO for having a broken DNS configuration that is celebrating the 
2-year anniversary of my in-depth bug report to them, along with a specific 
resolution path that their IT department has demonstrated an amazing resolve to 
ignore despite repeated pings to their customer service representatives to keep 
the issue active over the past 2 years.

-- 
Andrew Anderson, Director of Development, Library and Information Resources 
Network, Inc.
http://www.lirn.net/ | http://www.twitter.com/LIRNnotes | 
http://www.facebook.com/LIRNnotes


Re: [CODE4LIB] Why learn Unix?

2014-10-27 Thread Andrew Anderson
There is something of a natural symbiosis between *NIX and libraries.  If you 
have not already found it, read "Unix as Literature" for some background on why 
those who like the written word are drawn to *NIX naturally.

-- 
Andrew Anderson, Director of Development, Library and Information Resources 
Network, Inc.
http://www.lirn.net/ | http://www.twitter.com/LIRNnotes | 
http://www.facebook.com/LIRNnotes

On Oct 27, 2014, at 10:02, Siobhain Rivera siori...@indiana.edu wrote:

 Hi everyone,
 
 I'm part of the ASIST Student Chapter and Indiana University, and we're
 putting together a series of workshops on Unix. We've noticed that a lot of
 people don't seem to have a good idea of why they should learn Unix,
 particularly the reference/non technology types. We're going to do some
 more research to make a fact sheet about the uses of Unix, but I thought
 I'd pose the question to the list - what do you think are reasons
 librarians need to know Unix, even if they aren't in particularly tech
 heavy jobs?
 
 I'd appreciate any input. Have a great week!
 
 Siobhain Rivera
 Indiana University Bloomington
 Library Science, Digital Libraries Specialization
 ASIST-SC, Webmaster


Re: [CODE4LIB] Requesting a Little IE Assistance

2014-10-13 Thread Andrew Anderson
I’ve never attempted this, but instead of linking to the text files directly, 
can you include the text files in an iframe and leverage that to apply 
sizing/styling information to the iframe content?

Something like:

<html>
<body>
<iframe src="/path/to/file.txt"></iframe>
</body>
</html>

That structure, combined with some javascript tricks might get you where you 
need to be:

http://stackoverflow.com/questions/4612374/iframe-inherit-from-parent

Of course, if you’re already going that far, you’re not too far removed from 
just pulling the text file into a nicely formatted container via AJAX, and 
styling that container as needed, without the iframe hackery.

-- 
Andrew Anderson, Director of Development, Library and Information Resources 
Network, Inc.
http://www.lirn.net/ | http://www.twitter.com/LIRNnotes | 
http://www.facebook.com/LIRNnotes

On Oct 13, 2014, at 9:59, Matthew Sherman matt.r.sher...@gmail.com wrote:

 For anyone who knows Internet Explorer, is there a way to tell it to use
 word wrap when it displays txt files?  This is an odd question but one of
 my supervisors exclusively uses IE and is going to try to force me to
 reupload hundreds of archived permissions e-mails as text files to a
 repository in a different, less preservable, file format if I cannot tell
 them how to turn on word wrap.  Yes it is as crazy as it sounds.  Any
 assistance is welcome.
 
 Matt Sherman


Re: [CODE4LIB] Requesting a Little IE Assistance

2014-10-13 Thread Andrew Berger
In IE 11, at least, when you view source on a text file, you get a window
that has the option of turning word wrap on or off. I think it's probably
embedding notepad or wordpad's viewing capabilities.

Andrew Berger

On Mon, Oct 13, 2014 at 1:40 PM, Matthew Sherman matt.r.sher...@gmail.com
wrote:

 Thanks for the insights.  I was really hoping IE had a setting.  The
 problem is that these are txt files with copies of the permissions e-mails
 for our institutional repository that we store in the backend of the record
 in DSpace.  So I do not know that I can edit the HTML to make them display
 properly in IE.  The real frustration is that they do display, and
 Firefox, Chrome, Safari, etc. display them fine, but IE does not and this
 supervisor only seems to use IE.




Re: [CODE4LIB] Forwarding blog post: Apple, Android and NFC – how should libraries prepare? (RFID stuffs)

2014-10-08 Thread Andrew Anderson
On Oct 8, 2014, at 4:54, Ross Singer rossfsin...@gmail.com wrote:

 We’re generally in need of a spec, not a standard, I’ve found (although 
 they’re definitely not mutually exclusive!).


The wonderful thing about standards is that there are so many to choose from.

-- 
Andrew Anderson, Director of Development, Library and Information Resources 
Network, Inc.
http://www.lirn.net/ | http://www.twitter.com/LIRNnotes | 
http://www.facebook.com/LIRNnotes


Re: [CODE4LIB] Library app basics

2014-10-07 Thread Andrew Anderson
Before launching into a native app, start with the functional requirements to 
see if what you want to accomplish could be done in a well designed mobile web 
site, or if you actually need the advanced features that native development 
would make available.

For example, there is a _lot_ that you can do in jQuery Mobile backed by a 
strong AJAX backend that looks like a native app, yet does not subject you to 
the stringent requirements of having to do multi-platform development and worry 
about submitting to multiple vendors for approval.  

There is already some support for media capture for photos/video/sound in HTML5 
on some devices that you can use for interactive experiences like snapping a 
photo, sending it to the server for processing, and having the server send back 
something relevant.  See 
http://www.html5rocks.com/en/tutorials/getusermedia/intro/ for some information 
on what is possible currently, and then imagine what you could do with book 
covers, bar codes, maybe even tapping into the NFC chips in smartphones to 
tickle those RFID chips everyone is talking about this week.

As a data point, I have seen estimates that put mobile app development costs 
between $5,000 and $50,000, depending on their complexity, amount of UI/UX 
design and testing, graphics development, etc, so if you are operating without 
a budget and are having to scrounge for devices just to test with, a smart 
mobile web site may be a better starting point anyway.  It’s less of an 
unknown, using familiar tools, doesn’t require testing hardware, and doesn’t 
have an onerous vendor approval step to deal with.
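As a rough sketch of how small the AJAX backend for a mobile web view can be, 
here is a hypothetical hours endpoint in Python/Flask (the route and the data 
are made up) that a jQuery Mobile page could call:

# hypothetical sketch of a tiny JSON backend for a mobile web view
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/api/hours')
def hours():
    # in practice this would come from a database or a calendar feed
    return jsonify({'today': {'open': '08:00', 'close': '22:00'}})

if __name__ == '__main__':
    app.run()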

-- 
Andrew Anderson, Director of Development, Library and Information Resources 
Network, Inc.
http://www.lirn.net/ | http://www.twitter.com/LIRNnotes | 
http://www.facebook.com/LIRNnotes

On Oct 7, 2014, at 14:51, Will Martin w...@will-martin.net wrote:

 My boss has directed me to start looking into producing a phone app for the 
 library, or better yet finding a way to integrate with the existing 
 campus-wide app.  Could I pick the list's brains?
 
 1) Is there some tolerably decent cross-platform app language, or am I going 
 to be learning 3 different languages for iOS, Android, and Windows phone?  
 I've dabbled in all kinds of things, but my bread-and-butter work has been 
 PHP on a LAMP stack.  Apps aren't written in that, so new language time.
 
 2) The library's selection of mobile devices consists of 2 iPads and a Galaxy 
 tablet.  We don't have phones for testing.  My personal phone is a 
 12-year-old flip phone which doesn't run apps.  Can I get by with emulators?  
 What are some good ones?  The budget for the project is zero, so I don't 
 think dedicated testing devices are in the cards unless I upgrade my own 
 phone, which I probably ought to anyway.
 
 3) What are some best practices for library app design?  We were thinking the 
 key functionality would be personal account management (what have I got 
 checked out, renew my stuff, etc), hours, lab availability, search the 
 catalog, and ask a librarian.  Anything missing?  Too much stuff?
 
 Will Martin
 
 Web Services Librarian
 Chester Fritz Library
 
 P.S.  I sent this a couple days ago and wondered why it hadn't shown up -- 
 only to realize I accidentally sent it to j...@code4lib.org rather than the 
 actual listserv address.  Whoops, embarrassing!


Re: [CODE4LIB] What is the real impact of SHA-256? - Updated

2014-10-06 Thread Andrew Anderson
My concern would be more that given proven weaknesses in MD5, do I want to risk 
that 1 in a billion chance that the “right” bit error creeps into an archive 
that manages to not impact the checksum, thus creating the illusion that the 
archive integrity has not been violated?
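
If anyone wants to measure the overhead on their own storage, a rough hashlib
sketch along these lines (the file name is only a placeholder) times both
digests over the same file, read in chunks:

import hashlib
import time

def digest_file(path, algorithm, chunk_size=1024 * 1024):
    # Stream the file in chunks so memory use stays flat for large archives.
    h = hashlib.new(algorithm)
    with open(path, 'rb') as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()

for algorithm in ('md5', 'sha256'):
    start = time.time()
    checksum = digest_file('sample-archive.tar', algorithm)
    print(algorithm, checksum, round(time.time() - start, 2), 'seconds')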

-- 
Andrew Anderson, Director of Development, Library and Information Resources 
Network, Inc.
http://www.lirn.net/ | http://www.twitter.com/LIRNnotes | 
http://www.facebook.com/LIRNnotes

On Oct 2, 2014, at 18:34, Jonathan Rochkind rochk...@jhu.edu wrote:

 For checksums for ensuring archival integrity, are cryptographic flaws 
 relevant? I'm not sure; is part of the point of a checksum to ensure against 
 _malicious_ changes to files?  I honestly don't know. (But in most systems, 
 I'd guess anyone who had access to maliciously change the file would also 
 have access to maliciously change the checksum!)
 
 Rot13 is not suitable as a checksum for ensuring archival integrity, however, 
 because its output is no smaller than its input, which is kind of what 
 you're looking for. 
 
 
 From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] on behalf of Cary Gordon 
 [listu...@chillco.com]
 Sent: Thursday, October 02, 2014 5:51 PM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] What is the real impact of SHA-256? - Updated
 
 +1
 
 MD5 is little better than ROT13. At least with ROT13, you have no illusions.
 
 We use SHA 512 for most work. We don't do finance or national security, so it 
 is a good fit for us.
 
 Cary
 
 On Oct 2, 2014, at 12:30 PM, Simon Spero sesunc...@gmail.com wrote:
 
 Intel Skylake processors have dedicated SHA instructions.
 See: https://software.intel.com/en-us/articles/intel-sha-extensions
 
 Using a tree hash approach (which is inherently embarrassingly parallel)
 will leave I/O time dominant. This approach is used by Amazon Glacier - see
 http://docs.aws.amazon.com/amazonglacier/latest/dev/checksum-calculations.html
 
 MD5 is broken, and cannot be used for any security purposes. It cannot be
 used for deduplication if any of the files are in the directories of
 security researchers!
 
 If security is not a concern then there are many faster hashing algorithms
 that avoid the costs imposed by the need to defend against adversaries.
 See siphash, murmur, cityhash, etc.
 
 Simon
 On Oct 2, 2014 11:18 AM, Alex Duryee a...@avpreserve.com wrote:
 
 Despite some of its relative flaws, MD5 is frequently selected over SHA-256
 in archives as the checksum algorithm of choice. One of the primary factors
 here is the longer processing time required for SHA-256, though there have
 been no empirical studies calculating that time difference and its overall
 impact on checksum generation and verification in a preservation
 environment.
 
 AVPreserve Consultant Alex Duryee recently ran a series of tests comparing
 the real time and CPU time used by each algorithm. His newly updated white
 paper, "What Is the Real Impact of SHA-256?", presents the results and comes
 to some interesting conclusions regarding the actual time difference
 between the two and what other factors may have a greater impact on your
 selection decision and file monitoring workflow. The paper can be
 downloaded for free at
 
 http://www.avpreserve.com/papers-and-presentations/whats-the-real-impact-of-sha-256/
 .
 __
 
 Alex Duryee
 *AVPreserve*
 350 7th Ave., Suite 1605
 New York, NY 10001
 
 office: 917-475-9630
 
 http://www.avpreserve.com
 Facebook.com/AVPreserve http://facebook.com/AVPreserve
 twitter.com/AVPreserve
 


Re: [CODE4LIB] Library community web standards (was: LibGuides v2 - Templates and Nav)

2014-09-30 Thread Andrew Cunningham
Hi Brad,

An interesting idea, but many potential failure points.

I have been in the position of spending considerable time developing best
practice materials on web internationalisation for our state government,
without any prospect of being able to roll them out within our own library.

Whether we are discussing corporate or open source solutions, web
technologies within the library sector are in the long tail of
implementation.

But best practice should be encouraged.

Andrew

On 01/10/2014 12:23 AM, Brad Coffield bcoffield.libr...@gmail.com wrote:

 I agree that it would be a bad idea to endeavor to create our own special
 standards that deviate from accepted web best practices and standards. My
 own thought was more towards a guide for librarians, curated by
librarians,
 that provides a summary of best practices. On the one hand, something to
 help those without a deep tech background to quickly get up to speed with
 best practices instead of needing to conduct a lot of research and
reading.
 But beyond that, it would also be a resource that went deeper for those
who
 wanted to explore the literature.

 So, bullet points and short lists of information accompanied by links to
 additional resources etc. (So, right now, it sounds like a libguide lol)

 Though I do think there would potentially be additional information that
 did apply mostly/only to libraries and our particular sites etc. Off the
 top of my head: a thorough treatment and recommendations regarding
 libguides v2 and accessibility, customizing common library-used products
 (like Serial Solutions 360 link, Worldcat Local and all their competitors)
 so that they are most usable and accessible.

 At its core, though, what I'm picturing is something where librarians get
 together and cut through the noise, pull out best web practices, and
 display them in a quickly digested format. Everything else would be the
 proverbial gravy.

 On Tue, Sep 30, 2014 at 10:01 AM, Michael Schofield mschofi...@nova.edu
 wrote:

  I am interested but I am a little hazy about what kind of standards you
  all are suggesting. I would warn against creating standards that
conflict
  with any actual web standards, because I--and, I think, many
others--would
  honestly recommend that the #libweb should aspire to and adhere more
firmly
  to larger web standards and best practices that conflict with something
  that's more, ah, librarylike. Although that might not be what you folks
  have in mind at all : ).
 
  Michael S.
 
  -Original Message-
  From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of
  Brad Coffield
  Sent: Tuesday, September 30, 2014 9:30 AM
  To: CODE4LIB@LISTSERV.ND.EDU
  Subject: Re: [CODE4LIB] Library community web standards (was: LibGuides
v2
  - Templates and Nav)
 
  Josh, thanks for separating this topic out and starting this new
thread. I
  don't know of any such library standards that exist on the web. I agree
  that this sounds like a great idea. As for this group or not... why not!
  It's 2014 and they don't exist yet and they would be incredibly useful
for
  many libraries, if not all. Now all we need is a cool 'working group'
title
  for ourselves and we're halfway done! Right???
 
  But seriously, I'd love to help.
 
  Brad
 
 
 
 
  --
  Brad Coffield, MLIS
  Assistant Information and Web Services Librarian Saint Francis
University
  814-472-3315
  bcoffi...@francis.edu
 



 --
 Brad Coffield, MLIS
 Assistant Information and Web Services Librarian
 Saint Francis University
 814-472-3315
 bcoffi...@francis.edu


[CODE4LIB] GIS Librarian Position at the University of Miami Libraries

2014-09-25 Thread Andrew Darby
GEOSPATIAL INFORMATION SYSTEMS (GIS) SERVICES LIBRARIAN:



The University of Miami Libraries seeks nominations and applications for a
Geospatial Information Systems (GIS) Services Librarian, to serve as a key
member of our emerging digital scholarship program. The GIS Services
Librarian will build and curate our growing spatial data collection, and
review and recommend the acquisition of relevant application software
programs. The GIS Librarian will collaborate with and provide services to
various schools and departments across UM in support of their geospatial
research needs. The Librarian will work directly with liaison librarians in
the Richter Library’s Education and Outreach Department and the UM subject
specialty libraries (architecture, business, marine and atmospheric
sciences, music) as well as a range of staff involved in metadata, web
applications, emerging technologies and digital scholarship support
throughout the Library organization. The GIS Services Librarian will liaise
with UM staff in related academic technology support roles, especially
Academic Technology and the Center for Computational Sciences.

For additional information please see the full job description:

https://library.miami.edu/wp-content/uploads/2014/08/Geospatial-Information-Systems-GIS-Services-Librarian1.pdf


-- 
Andrew Darby
Head, Web & Emerging Technologies
University of Miami Libraries


[CODE4LIB] Diva.js 3.0: High-resolution document image viewer

2014-09-24 Thread Andrew Hankinson
We’re pleased to announce a new version of our open-source document image 
viewer, Diva.js. Diva.js is especially suited for use in rare and archival book 
digitization initiatives where viewing high-resolution images can show even the 
smallest detail present on the physical object. Using Diva, libraries, 
archives, and museums can present high-resolution document page images in an 
“instant-on”, user-friendly interface that has been optimized 
for speed and flexibility.

New features in Diva.js 3.0:

 • Several speed optimizations – Documents load and scroll faster.
 • In-browser (JavaScript) image manipulation – Adjust page brightness, 
contrast, and rotation.
 • Improved mobile device support – Tap and pinch to navigate through documents.
 • Horizontal orientation – Switch between the default vertical page layout and 
a horizontal scrolling layout.
 • Events system – Allows you to pass streaming data from the document viewer 
into your own website and plugins.
 • Improved and updated documentation: https://github.com/DDMAL/diva.js/wiki.
 • A new website.
 • Numerous bug fixes.

For more information, demos, and documentation visit 
http://ddmal.github.io/diva.js/.

Diva.js is developed by the Distributed Digital Music Archives and Libraries 
laboratory, part of the Music Technology Area of the Schulich School of Music 
at McGill University.


Re: [CODE4LIB] Diva.js 3.0: High-resolution document image viewer

2014-09-24 Thread Andrew Hankinson
Hi Todd,

We’ve got someone working on it as we speak. ;)

https://github.com/DDMAL/diva.js/issues/136

-Andrew

On Sep 24, 2014, at 11:23 AM, todd.d.robb...@gmail.com 
todd.d.robb...@gmail.com wrote:

 Solid work Andrew and team!
 
 Is there IIIF http://iiif.io/ integration already or is that on the
 roadmap?
 
 
 Cheers!
 
 
 
 On Wed, Sep 24, 2014 at 7:26 AM, Andrew Hankinson 
 andrew.hankin...@gmail.com wrote:
 
 We’re pleased to announce a new version of our open-source document image
 viewer, Diva.js. Diva.js is especially suited for use in rare and archival
 book digitization initiatives where viewing high-resolution images can show
 even the smallest detail present on the physical object. Using Diva,
 libraries, archives, and museums can present high-resolution document page
 images in an “instant-on”, user-friendly interface that has
 been optimized for speed and flexibility.
 
 New features in Diva.js 3.0:
 
 • Several speed optimizations – Documents load and scroll faster.
 • In-browser (JavaScript) image manipulation – Adjust page brightness,
 contrast, and rotation.
 • Improved mobile device support – Tap and pinch to navigate through
 documents.
 • Horizontal orientation – Switch between the default vertical page
 layout and a horizontal scrolling layout.
 • Events system – Allows you to pass streaming data from the document
 viewer into your own website and plugins.
 • Improved and updated documentation:
 https://github.com/DDMAL/diva.js/wiki.
 • A new website.
 • Numerous bug fixes.
 
 For more information, demos, and documentation visit
 http://ddmal.github.io/diva.js/.
 
 Diva.js is developed by the Distributed Digital Music Archives and
 Libraries laboratory, part of the Music Technology Area of the Schulich
 School of Music at McGill University.
 
 
 
 
 -- 
 Tod Robbins
 Digital Asset Manager, MLIS
 todrobbins.com | @todrobbins http://www.twitter.com/#!/todrobbins


Re: [CODE4LIB] LibGuides v2 - Templates and Nav

2014-09-18 Thread Andrew Anderson
There are ways around this, e.g. http://api.jquerymobile.com/taphold/

-- 
Andrew Anderson, Director of Development, Library and Information Resources 
Network, Inc.
http://www.lirn.net/ | http://www.twitter.com/LIRNnotes | 
http://www.facebook.com/LIRNnotes

On Sep 17, 2014, at 21:17, Jonathan Rochkind rochk...@jhu.edu wrote:

 Mouse hover is not available to anyone using a touch device rather than a 
 mouse, as well as being problematic for keyboard access.
 
 While there might be ways to make the on-hover UI style keyboard accessible 
 (perhaps in some cases activating on element focus in addition to on hover), 
 there aren't really any good ones I can think of for purely touch devices (which 
 don't really trigger focus state either).
 
 An increasing amount of web use, of course, is mobile touch devices, and 
 probably will continue to be and to increase for some time, including on 
 library properties.
 
 So I think on-hover UI should probably simply be abandoned at this point; 
 even if some people love it, it will be inaccessible to an increasing portion 
 of our users, with no good accommodations.
 
 Jonathan
 
 On 9/17/14 4:25 PM, Jesse Martinez wrote:
 By the same token, we're making it a policy not to use mouse hover-over
 effects to display database/asset descriptions in LG2 until this can become
 keyboard accessible. This is a beloved feature from LG1 so I'm hoping
 SpringShare read my pestering emails about this...
 
 Jesse
 
 On Wed, Sep 17, 2014 at 3:38 PM, Brad Coffield bcoffield.libr...@gmail.com
 wrote:
 
 Jonathan,
 
 That point is well taken. Accessibility, to me, shouldn't be a tacked-on,
 "we'll do the best we can" sort of thing. It's an essential part of a
 library being open to all users. Unfortunately I know our site has a lot of
 work to be done regarding accessibility. I'll also pay attention to that
 when/if I make mods to the v2 templates.
 
 On Wed, Sep 17, 2014 at 1:49 PM, Jonathan LeBreton lebre...@temple.edu
 wrote:
 
 I might mention here that we (Temple University)  found LibGuides 2.0  to
 offer some noteworthy improvements in section 508 accessibility
 when compared with version 1.0.   Accessibility is a particular point of
 concern for the whole institution as we look across the city, state, and
 country at other institutions that have been called out and settled with
 various disability advocacy groups.
 So we moved to v. 2.0 during the summer in order to have those
 improvements in place for the fall semester, as well as to get the value
 from some other developments in v. 2.0 that benefit all customers.
 
 When I see email on list about making  modifications to templates and
 such, it gives me a bit of concern on this score that by doing so,  one
 might easily begin to make the CMS framework for content less accessible.
 I thought I should voice that.  This is not to say that one shouldn't
 customize and explore enhancements etc., but one should do so with some
 care if you are operating with similar mandates or concerns.  Unless I am
 mistaken, several of the examples noted are now throwing 508 errors that
 are not in the out-of-the-box LibGuide templates and which are not the
 result of an individual content contributor/author inserting bad stuff
 like images without alt tags.
 
 
 
 
 Jonathan LeBreton
 Senior Associate University Librarian
 Editor:  Library & Archival Security
 Temple University Libraries
 Paley M138,  1210 Polett Walk, Philadelphia PA 19122
 voice: 215.204.8231
 fax: 215.204.5201
 mobile: 215.284.5070
 email:  lebre...@temple.edu
 email:  jonat...@temple.edu
 
 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of
 Cindi Blyberg
 Sent: Wednesday, September 17, 2014 12:03 PM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] LibGuides v2 - Templates and Nav
 
 Hey everyone!
 
 Not to turn C4L into Support4LibGuides, but... :)
 
 The infrastructure for all the APIs is in place; currently, the Guides API
 and the Subjects API are functioning.  Go to Tools > API > Get Guides to
 see the general structure of the URL.  Replace "guides" with "subjects" to
 retrieve your subjects.  You will need your LibGuides site ID, which you
 can get from the LibApps Dashboard screen.
 
 Word is that it will not take long to add other API calls on the back
 end;
 if you need these now, please do email supp...@springshare.com and
 reference this conversation.
 
 As for v1, we are planning on supporting it for 2 more years--that said,
 we would never leave anyone hanging, so if it takes longer than that to
 get
 everyone moved over, we're ready for that.
 
 Best,
  -Cindi
 
 On Wed, Sep 17, 2014 at 10:46 AM, Nadaleen F Tempelman-Kluit 
 n...@nyu.edu
 
 wrote:
 
 Hi all-
 While we're on the topic of LibGuides V2, when will the GET subjects
 API (and other API details) be in place? We're in a holding pattern
 until we get those details and we've not been able to get any timeline
 as to when those assets

Re: [CODE4LIB] Open source alternative to LibAnswers as the library IT KB for library staff?

2014-09-15 Thread Andrew Darby
I'm not sure what all is in LibAnswers, but SubjectsPlus has a talkback
module for publicly answering questions from patrons, e.g.,

http://library.miami.edu/sp/subjects/talkback.php

and then an FAQ module, e.g.,

http://library.miami.edu/sp/subjects/faq.php

which is sprinkled with things that we think might be useful to patrons.



On Mon, Sep 15, 2014 at 1:55 PM, Jonathan Bloy jb...@edgewood.edu wrote:

 The new version of LibAnswers (we're currently playing around with a v2
 beta site) allows for separate knowledgebases; you can also set a
 knowledgebase to only be accessible by certain groups.  In our LA v2 beta
 site, we've set up a group for library staff FAQs.


 --
 Jonathan Bloy
 Librarian, Head of Digital Initiatives
 Edgewood College
 Madison, Wisconsin
 http://library.edgewood.edu



 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of
 Kim, Bohyun
 Sent: Friday, September 12, 2014 9:42 AM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: [CODE4LIB] Open source alternative to LibAnswers as the library
 IT KB for library staff?

 Hi all

 Does anyone have a suggestion for the free open-source Q/A board + easily
 searchable KB comparable to LibAnswers? We already have LibAnswers for
 patrons. This is more for the library staff who submit a lot of similar or
 identical questions to the Library IT help desk.

 It is an option to use the SharePoint Discussion Board but I am looking
 for an alternative since SP tends to get lukewarm responses from users in
 my experience.

 Any suggestions or feedback would be appreciated.

 Thanks,
 Bohyun




-- 
Andrew Darby
Head, Web & Emerging Technologies
University of Miami Libraries


Re: [CODE4LIB] Anybody know a way to add a MARC tag en masse to a file of MARC records

2014-08-28 Thread Andrew Anderson
I’ve had a lot of success with pymarc for this.
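
For the 918 $a DELETE case specifically, a minimal sketch looks something like
this (file names are placeholders, and the list-style subfields argument below
follows the older pre-5.x pymarc API, so check your installed version):

from pymarc import MARCReader, MARCWriter, Field

# Read an existing file of records, add a 918 $a DELETE field to each
# one, and write the result out to a new file.
with open('records.mrc', 'rb') as marc_in:
    writer = MARCWriter(open('records-flagged.mrc', 'wb'))
    for record in MARCReader(marc_in):
        record.add_field(
            Field(tag='918', indicators=[' ', ' '], subfields=['a', 'DELETE'])
        )
        writer.write(record)
    writer.close()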

-- 
Andrew Anderson, Director of Development, Library and Information Resources 
Network, Inc.
http://www.lirn.net/ | http://www.twitter.com/LIRNnotes | 
http://www.facebook.com/LIRNnotes

On Aug 28, 2014, at 14:37, Schwartz, Raymond schwart...@wpunj.edu wrote:

 I need to automate this in a script.  As far as I can tell, you cannot do 
 this with MarcEdit.
 
 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Jane 
 Costanza
 Sent: Thursday, August 28, 2014 2:33 PM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] Anybody know a way to add a MARC tag en masse to a 
 file of MARC records
 
 MarcEdit is a free MARC editing utility.
 
 http://marcedit.reeset.net/
 
 Jane Costanza
 Associate Professor/Head of Discovery Services Trinity University San 
 Antonio, Texas
 210-999-7612
 jcost...@trinity.edu
 http://digitalcommons.trinity.edu/
 http://lib.trinity.edu/
 
 
 On Thu, Aug 28, 2014 at 1:26 PM, Schwartz, Raymond schwart...@wpunj.edu
 wrote:
 
 Anybody know a way to add a MARC tag en masse to a file of MARC 
 records?  I need to add the tag 918 $a with the contents DELETE to 
 each of the records.
 
 Thanks in advance. /Ray
 
 Ray Schwartz
 Systems Specialist Librarian schwart...@wpunj.edu
 David and Lorraine Cheng Library   Tel: +1 973 720-3192
 William Paterson University Fax: +1 973 720-2585
 300 Pompton RoadMobile: +1 201
 424-4491
 Wayne, NJ 07470-2103 USA
 http://nova.wpunj.edu/schwartzr2/
 http://euphrates.wpunj.edu/faculty/schwartzr2/
 


[CODE4LIB] Job: Digital Infrastructure Librarian--correction

2014-08-22 Thread Andrew Rouner
With apologies for reposting: the previous announcement had the incorrect
job # listed.  The correct # is 28774.


*Digital Infrastructure Librarian*



Washington University Libraries is seeking a creative and enthusiastic
individual to design and implement a new digital library application
infrastructure using the Hydra repository framework and related
technologies. Reporting to the Director of the Digital Library, the Digital
Infrastructure Librarian will work collaboratively with Libraries’ staff
and campus partners to lead all aspects of system design and
implementation, including gathering requirements, establishing coding
standards, and participating in system testing, resulting in the delivery
of a functioning digital asset management system based on the Hydra
repository framework.



*RESPONSIBILITIES*

DIGITAL LIBRARY DESIGN AND IMPLEMENTATION

Lead the design and/or implementation of a set of Hydra-based digital
library applications for the preservation and delivery of Washington
University Libraries’ digital assets (through development of new or re-use
of existing resources) in collaboration with the Libraries’ Scholarly
Publishing, Special Collections, Systems and other library personnel.
Gather requirements and develop specifications for digital library
architectures. Implement workflow tools and functionality for the deposit,
storage and delivery of digital assets and associated metadata. Participate
in iterative testing and integration of user feedback throughout the
development process. Collaborate with Libraries’ Systems Operations and
Support Office and campus-wide technology services to ensure proper
implementation and management of security policies and authentication/
authorization procedures, and write and maintain documentation for systems
architecture and application code for internal and external users and
stakeholders.

COMMUNITY ENGAGEMENT

Maintain awareness of national and international best practices and
advances in digital library applications, frameworks, and implementations.
Actively engage in the Fedora and Hydra development communities, including
the development and contribution of new interfaces and code customizations.



*QUALIFICATIONS*

REQUIRED

Master’s degree in library or information science from an ALA-accredited
institution or related field, or a combination of relevant experience and
education; demonstrated experience with programming languages such as Ruby
on Rails, PHP, Perl or Javascript; familiarity with a variety of digital
library standards (e.g. TEI, MODS, METS, EAD, VRA Core, Dublin Core) and
file formats.

PREFERRED

Experience establishing and customizing open source software applications
and digital library software platforms (e.g., Fedora, DSpace, etc.) in a
production environment; experience with digital library software
development in an academic library or higher education setting; project
management experience on grant-funded projects; familiarity with the Hydra
development community and related technologies. Ability to work effectively
with a culturally diverse population of staff, faculty, students, and
community members; to communicate effectively on technology issues with
technical and non-technical staff, and to work in a team environment.
Excellent written and verbal communication skills.





EXCELLENT BENEFITS PACKAGE:  including 22 VACATION DAYS, TIAA-CREF, etc.



APPLICATION INFORMATION:  Applications must be submitted online at
https://jobs.wustl.edu.  Reference job # 28774.  For full consideration,
attach a letter of application, resume, and the names of three references
(including e-mail & phone number).  Review of applications will begin
immediately and continue until the position is filled.



*Employment eligibility verification required upon hire.  Washington
University is an equal opportunity/affirmative action employer.*


[CODE4LIB] Job: Digital Infrastructure Librarian

2014-08-21 Thread Andrew Rouner
*Digital Infrastructure Librarian*



Washington University Libraries is seeking a creative and enthusiastic
individual to design and implement a new digital library application
infrastructure using the Hydra repository framework and related
technologies. Reporting to the Director of the Digital Library, the Digital
Infrastructure Librarian will work collaboratively with Libraries’ staff
and campus partners to lead all aspects of system design and
implementation, including gathering requirements, establishing coding
standards, and participating in system testing, resulting in the delivery
of a functioning digital asset management system based on the Hydra
repository framework.



*RESPONSIBILITIES*

DIGITAL LIBRARY DESIGN AND IMPLEMENTATION

Lead the design and/or implementation of a set of Hydra-based digital
library applications for the preservation and delivery of Washington
University Libraries’ digital assets (through development of new or re-use
of existing resources) in collaboration with the Libraries’ Scholarly
Publishing, Special Collections, Systems and other library personnel.
Gather requirements and develop specifications for digital library
architectures. Implement workflow tools and functionality for the deposit,
storage and delivery of digital assets and associated metadata. Participate
in iterative testing and integration of user feedback throughout the
development process. Collaborate with Libraries’ Systems Operations and
Support Office and campus-wide technology services to ensure proper
implementation and management of security policies and authentication/
authorization procedures, and write and maintain documentation for systems
architecture and application code for internal and external users and
stakeholders.

COMMUNITY ENGAGEMENT

Maintain awareness of national and international best practices and
advances in digital library applications, frameworks, and implementations.
Actively engage in the Fedora and Hydra development communities, including
the development and contribution of new interfaces and code customizations.



*QUALIFICATIONS*

REQUIRED

Master’s degree in library or information science from an ALA-accredited
institution or related field, or a combination of relevant experience and
education; demonstrated experience with programming languages such as Ruby
on Rails, PHP, Perl or Javascript; familiarity with a variety of digital
library standards (e.g. TEI, MODS, METS, EAD, VRA Core, Dublin Core) and
file formats.

PREFERRED

Experience establishing and customizing open source software applications
and digital library software platforms (e.g., Fedora, DSpace, etc.) in a
production environment; experience with digital library software
development in an academic library or higher education setting; project
management experience on grant-funded projects; familiarity with the Hydra
development community and related technologies. Ability to work effectively
with a culturally diverse population of staff, faculty, students, and
community members; to communicate effectively on technology issues with
technical and non-technical staff, and to work in a team environment.
Excellent written and verbal communication skills.





EXCELLENT BENEFITS PACKAGE:  including 22 VACATION DAYS, TIAA-CREF, etc.



APPLICATION INFORMATION:  Applications must be submitted online at
https://jobs.wustl.edu.  Reference job # 28354.  For full consideration,
attach a letter of application, resume, and the names of three references
(including e-mail & phone number).  Review of applications will begin
immediately and continue until the position is filled.



*Employment eligibility verification required upon hire.  Washington
University is an equal opportunity/affirmative action employer.*


[CODE4LIB] Autumn Code4LibNYC Event - Sept. 10th and Winter input

2014-08-20 Thread Andrew Gordon
Hi All,

This is just a quick reminder to register for the Code4Lib NYC 
event http://metro.org/events/534/ (free and open to all) taking place at 
METRO, September 10, 2:00PM - 4:00PM and also a reminder that if you haven't 
yet and are interested, to please give your input 
here http://libguides.metro.org/vote (or at the Google 
form https://docs.google.com/forms/d/1QUT54yFRQO-g3kw_RXEemK-NzoDd3JjjyIxdy9YJqFM/viewform)
 for a 1-day Code4LibNYC workshop in the Winter. The poll will stay open until 
the September meeting.

The list of topics and speakers for this upcoming meeting is:

  *   Eric Glass and Jeremiah Trinidad-Christensen, librarians at Columbia 
University, on GeoData@Columbia http://culspatial.cul.columbia.edu/.
  *   Matthew Lipper, a developer for the NYC Department of Planning, on the 
NYC GeoClient API https://developer.cityofnewyork.us/api/geoclient-api (which 
I'm personally a huge fan of for metadata enhancement!).
  *   Eric Hellman, president of Gluejar http://www.gluejar.com/, Web Privacy 
http://tinyurl.com/kjxs6r6*
  *   Jennifer Anderson, senior UX designer at NYPL, on her work on a rapid 
prototyping package for NYPL (described as 'kind of our own Twitter Bootstrap').
*please note that Eric Hellman's talk has changed, no doubt in response to 
easily the most popular discussion topic this month on the Code4Lib listserv.
Hope to see you all there and as mentioned in the first e-mail, for those who 
can stick around after the meeting, we will decamp to an establishment in the 
area for drinks and snacks.

Best,
Drew


__
Drew Gordon
Systems Librarian
Center for the History of Medicine and Public Health
New York Academy of Medicine | 1216 Fifth Avenue | New York, NY 10029
212.822.7324 | http://nyamcenterforhistory.org/


Re: [CODE4LIB] Baggit specification question

2014-08-06 Thread Andrew Hankinson
I suspect the first example you give is correct. The newline character is the 
field delimiter. If you’re reading this into a structured representation (e.g., 
a Python dictionary) you could parse the presence of nothing between the colon 
and the newline as “None”, but in a text file there is no representation of 
“nothing” except for actually having nothing.
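
For what it's worth, a minimal parsing sketch of that idea (it ignores the
spec's rule about indented continuation lines, so it is illustrative only):

def read_bag_info(path):
    # Map each "Label: value" line to a dict entry; an empty value
    # (nothing between the colon and the newline) becomes None.
    info = {}
    with open(path, encoding='utf-8') as fh:
        for line in fh:
            if ':' not in line:
                continue  # skip blank lines in this simplified sketch
            label, _, value = line.partition(':')
            info[label.strip()] = value.strip() or None
    return info

# e.g. read_bag_info('bag-info.txt') might return
# {'Source-Organization': None, 'Bag-Size': '42 MB'}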


On Aug 6, 2014, at 7:11 PM, Rosalyn Metz rosalynm...@gmail.com wrote:

 Hi all,
 
 Below is a question one of my colleagues posted to digital curation, but
 I'm also posting here (because the more info the merrier).
 
 Thanks!
 Rosy
 
 
 
 Hi,
 
 I am working for the Digital Preservation Network and our current
 specification requires that we use BagIt bags.
 
 Our current spec for bag-info.txt reads in part:
 
 DPN requires the presence of the following fields, although they may be
 empty.  Please note that the values of null and/or nil should not be
 used.  The colon (:) should still be present.
 
 
 From my reading of the BagIt spec, section 2.2.2:
 
 A metadata element MUST consist of a label, a colon, and a value, each
 separated by optional whitespace. 
 
 
 2.2.2 is for the bag-info.txt, but it seems that this is the general rule.
 
 Question: Are values required for all? Which below is correct or both? Ex:
 
 Source-Organization:
 
 or
 
 
 Source-Organization: nil
 
 
 I appreciate any clarification,
 
 Thanks
 James
 Stanford Digital Repository


[CODE4LIB] Recommendations for Audio Transcription Services for Historical Radio Programming

2014-07-29 Thread Andrew Gordon
Hello,

I just wanted to poll you all on recommendations for any tried and true Audio 
Transcription services that preferably have experience doing English-language 
radio broadcasts circa the 1950s.

We have about 3000 minutes of audio files we hope to have transcribed within a 
reasonable amount of time (~1mo - 2mo. or so).

Located in or around the NYC area would be nice, but I am sure for stuff like 
this it can all be done remotely.

Thanks in advance,
drew
__
Drew Gordon
Systems Librarian
Center for the History of Medicine and Public Health
New York Academy of Medicine | 1216 Fifth Avenue | New York, NY 10029
212.822.7324 | http://nyamcenterforhistory.org/


Re: [CODE4LIB] [WEB4LIB] Interactive content for digital signage

2014-07-18 Thread Andrew Nisbet
Hello Paul,



Richard Loomis has a project he presented at ALA 2014: 
http://somerset.lib.nj.us/rpisign.htm. I hope this helps.



Edmonton Public Library
Andrew Nisbet
ILS Administrator

T: 780.496.4058   F: 780.496.8317



-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Paul Go
Sent: July-18-14 11:24 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] [WEB4LIB] Interactive content for digital signage



We implemented a very inexpensive digital signage solution using TVs and 
Raspberry Pis.  The Pis connect to the server to automatically display images 
in certain drives, making changing signs simple.  We could also do RSS but have 
not implemented that as of now.  The Pis are around $35 (additional costs 
include the storage card, wifi adapter or networking) and are very easy to 
program.



We have discussed having touch screen kiosks using iPads or Kindle Fires but 
have not attempted to do so yet.



Paul Go



Systems Librarian /

Library Technology Manager /

CS and ITM Liaison

Paul V. Galvin Library

Illinois Institute of Technology

35 West 33rd Street

Chicago, IL  60616

312.567.7997

p...@iit.edu



*Driving Innovation through Knowledge and Scholarship*





On Fri, Jul 18, 2014 at 11:58 AM, Michael Schofield 
 mschofi...@nova.edu

wrote:



 My friend Amanda Goodman (@godaisies on Twitter) is building and

 designing a touch kiosk right now. She's been sharing pictures about

 the design and the process. I'd pick her brain.



 Also,



 At this stage I too would balk at a $30,000 price tag. There are

 some legit reasons [I guess] for the cost of the hardware, etc. - but

 based on how you and other libraries intend to use this it really

 shouldn't cost that much. What you need is a large touch screen with

 internet access, then you can essentially do what OSU [and Amanda] are

 doing and build a responsive website for the kiosk. It can be on top

 of a CMS or pull from RSS or JSON feeds to make it painless to update.

 You might even use a framework like jQuery Mobile (which isn't just

 for small hand screens) that adds a nice layer of interactive transitions, 
 modals, etc.



 I'm x-posting this to code4lib because I think folks might like to

 weigh in. Good topic!



 // Michael

 // ns4lib.com

 // @gollydamn





 -Original Message-

 From: Web technologies in libraries [mailto:web4...@listserv.nd.edu]

 On Behalf Of Thomas Edelblute

 Sent: Friday, July 18, 2014 12:23 PM

 To: web4...@listserv.nd.edu

 Subject: Re: [WEB4LIB] Interactive content for digital signage



 When we did a remodel of the library a few years ago, I first looked

 at a server that would feed the content to various digital signs that

 we could change on the fly and pull content from RSS feeds.  But

 management balked at the $30,000 price tag on that.  So we went with a

 company that provides large television-like monitors that read JPG

 files off USB drives and are turned on and off by a Christmas tree

 timer.  The company also supports these setups with auto-dealerships in the 
 area.



 Thomas Edelblute

 Public Access Systems Coordinator

 Anaheim Public Library



 -Original Message-

 From: Web technologies in libraries [mailto:web4...@listserv.nd.edu]

 On Behalf Of David S Vose

 Sent: Friday, July 18, 2014 7:36 AM

 To: web4...@listserv.nd.edu

 Subject: [WEB4LIB] Interactive content for digital signage



 We will be installing interactive digital signs in our main library

 this fall. One sign will be at our entrance and one will be in the

 lobby. The draft plan is to provide interactivity that will allow

 patrons to browse to floor plans, hours and schedules, directories, a

 campus map, and an about the libraries section.



 I would be interested to learn what type of interactive content others

 have found to be most popular and useful to students and what

 interactive content did not turn out to be particularly successful.



 Thanks,



 David Vose | Geography, Data, Government Information, Law Binghamton

 University Libraries, POB 6012, Binghamton, NY 13902-6012

 dv...@binghamton.edu | 607.777.4907 | Downtown 
 Center: 607.777.9275



 




Re: [CODE4LIB] Have file will share

2014-07-17 Thread Andrew Shuping
Unfortunately Ariel is pretty much dead.  Infotrieve is the parent company
and they only had one support person as of 4 years ago (no idea if they
even have that any more) and they quit updating it as of 10 years ago.  The
last OS that it definitively worked with was Windows XP.  Vista was a
crapshoot and 7 could be jiggered if you ran it in XP mode, but I know a
lot of folks had difficulty with it.

Andrew Shuping

Robert Frost - In three words I can sum up everything I've learned about
life: it goes on.


On Thu, Jul 17, 2014 at 10:30 AM, Francis Kayiwa fkay...@colgate.edu
wrote:


 On 7/17/2014 10:20 AM, Karen Coyle wrote:
 
 https://web.archive.org/web/20070209042706/http://www4.infotrieve.com/ariel/downloads.asp
 
 
 Thanks, but no cigar: still "HTTP Error 403.6 - Forbidden: IP address of
 the client has been rejected."

 ./fxk


 - --
 Don't stop to stomp ants when the elephants are stampeding.



Re: [CODE4LIB] Software to track website changes?

2014-07-11 Thread Andrew Shuping
Hey Elizabeth,

I know my library's systems department uses The Trac project:
http://trac.edgewall.org/, which lets them do exactly what you're asking
about.  I can't remember how easy/difficult the installation process is,
but using it is easy for almost anyone.  Our building maintenance person
has even started using it as a way to track what she needs to do.

Andrew Shuping

Robert Frost - In three words I can sum up everything I've learned about
life: it goes on.


On Fri, Jul 11, 2014 at 9:30 AM, Elizabeth Leonard 
elizabeth.leon...@shu.edu wrote:

 Does anyone have a good way to track requests to make changes to your
 website(s)? I would like to be able to put in requests and be able to track
 if they are done and when, so there are fewer emails flying about.

 E

 Elizabeth Leonard
 Assistant Dean of Information Technologies, Resources Acquisition and
 Description
 Seton Hall University
 400 South Orange Avenue
 South Orange, NJ 07079
 973-761-9445



Re: [CODE4LIB] 'automation' tools

2014-07-04 Thread Andrew Weidner
Great idea for a workshop, Owen.

My staff and I use AutoHotkey every day. We have some apps for data
cleaning in the CONTENTdm Project Client that I presented on recently:
http://scholarcommons.sc.edu/cdmusers/cdmusersMay2014/May2014/13/. I'll be
talking about those in more detail at the Upper Midwest Digital Collections
Conference http://www.wils.org/news-events/wilsevents/umdcc/ if anyone is
interested.

I did an in-house training session for our ILS and database management
folks on a simple AHK app that they now use for repetitive data entry:
https://github.com/metaweidner/AutoType. When I was working with digital
newspapers I developed a suite of tools for making repetitive quality
review tasks easier: https://github.com/drewhop/AutoHotkey/wiki/NDNP_QR

Basic AHK scripts are really great for text wrangling. Just yesterday I
wrote a script to grab some values from a spreadsheet, remove commas from
the numbers, and dump them into a tab delimited file in the format that we
need. That script will become part of our regular workflow. Wrote another
one-off script to transform labels on our wiki into links. It wrapped the
labels in the wiki link syntax, and then I copied and pasted the unique
URLs into the appropriate spots.
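
(For comparison, that comma-stripping step is only a few lines of Python as
well; the file names here are made up:)

import csv

# Read values from a CSV export, drop the thousands separators from the
# numbers, and write a tab-delimited copy.
with open('report.csv', newline='') as src, open('report.tsv', 'w', newline='') as dst:
    writer = csv.writer(dst, delimiter='\t')
    for row in csv.reader(src):
        writer.writerow(cell.replace(',', '') for cell in row)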

It's also useful for keeping things organized. I have a set of scripts that
open up frequently used network drive folders and applications, and I
packaged them as drop down menu choices in a little GUI that's always open
on the desktop. We have a few search scripts that either grab values from a
spreadsheet or input box and then run a search for those terms in a web
database (e.g. id.loc.gov).

You might check out Selenium IDE for working with web forms:
http://docs.seleniumhq.org/projects/ide/. The recording feature makes it
really easy to get started with as an automation tool. I've used it
extensively for automated metadata editing:
http://digital.library.unt.edu/ark:/67531/metadc86138/m1/1/

Cheers!

Andrew


On Fri, Jul 4, 2014 at 6:54 AM, Riley Childs ri...@tfsgeo.com wrote:

 Don't forget AutoIT (auto IT, pretty clever eh?)
 http://www.autoitscript.com/site/autoit/

 Riley Childs
 Student
 Asst. Head of IT Services
 Charlotte United Christian Academy
 (704) 497-2086
 RileyChilds.net
  Sent from my Windows Phone, please excuse mistakes

 -Original Message-
 From: Owen Stephens o...@ostephens.com
 Sent: ‎7/‎4/‎2014 4:55 AM
 To: CODE4LIB@LISTSERV.ND.EDU CODE4LIB@LISTSERV.ND.EDU
 Subject: [CODE4LIB] 'automation' tools

 I'm doing a workshop in the UK at a library tech unconference-style event
 (Pi and Mash http://piandmash.info) on automating computer based tasks.
 I want to cover tools that are usable by non-programmers and that would
 work in a typical library environment. The types of tools I'm thinking of
 are:

 MacroExpress
 AutoHotKey
 iMacros for Firefox

 While I'm hoping workshop attendees will bring ideas about tasks they
 would like to automate the type of thing I have in mind are things like:

 Filling out a set of standard data on a GUI or Web form (e.g. standard set
 of budget codes for an order)
 Processing a list of item barcodes from a spreadsheet and doing something
 with them on the library system (e.g. change loan status, check for holds)
 Similarly for User IDs
 Navigating to a web page and doing some task

 Clearly some of these tasks would be better automated with appropriate
 APIs and scripts, but I want to try to introduce those without programming
 skills to some of the concepts and tools and essentially how they can work
 around problems themselves to some extent.

 What tools do you use for this kind of automation task, and what kind of
 tasks do they best deal with?

 Thanks,

 Owen

 Owen Stephens
 Owen Stephens Consulting
 Web: http://www.ostephens.com
 Email: o...@ostephens.com
 Telephone: 0121 288 6936



Re: [CODE4LIB] Natural language programming

2014-07-01 Thread Andrew Cunningham
Since you may be looking at Drupal integration down the path, I would look
at using Python and the NLTK, and develop a web service that could then be
used by Drupal.
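
For the term-weighting piece in step 2, a quick sketch with scikit-learn's
TfidfVectorizer (shown only as an illustration; the docs list stands in for
the text you have already pulled out of the PDFs in step 1):

from sklearn.feature_extraction.text import TfidfVectorizer

# Placeholder input: one string of extracted text per PDF.
docs = ['text of the first pdf ...', 'text of the second pdf ...']

vectorizer = TfidfVectorizer(stop_words='english', max_features=5000)
tfidf = vectorizer.fit_transform(docs)       # rows are documents, columns are terms
terms = vectorizer.get_feature_names_out()   # get_feature_names() on older versions

# Print the ten most heavily weighted terms per document (step 3).
for row in tfidf.toarray():
    top = row.argsort()[::-1][:10]
    print([terms[i] for i in top if row[i] > 0])
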
On 01/07/2014 11:13 PM, Katie konrad.ka...@gmail.com wrote:

 Hello,

 Has anyone here experience in the world of natural language programming
 (while applying information retrieval techniques)?

 I'm currently trying to develop a tool that will:

 1. take a pdf and extract the text (paying no attention to images or
 formatting)
 2. analyze the text via term weighting, inverse document frequency, and
 other natural language processing techniques
 3. assemble a list of suggested terms and concepts that are weighted
 heavily in that document

 Step 1 is straightforward and I've had much success there. Step 2 is the
 problem child. I've played around with a few APIs (like AlchemyAPI) but
 they have character length limitations or other shortcomings that keep me
 looking.

 The background behind this project is that I work for a digital library
 with a large pre-existing collection of pdfs with rudimentary metadata. The
 aforementioned tool will be used to classify and group the pdfs according
 to the themes of the library. Our CMS is Drupal so depending on my level of
 ambition, this *might* develop into a module.

 Does this sound like a project that has been done/attempted before? Any
 suggested tools or reading materials?



Re: [CODE4LIB] Does 'Freedom to Read' require us to systematically privilege HTTPS over HTTP?

2014-06-18 Thread Andrew Anderson
On Jun 17, 2014, at 17:09, Stuart Yeates stuart.yea...@vuw.ac.nz wrote:

 On 06/17/2014 08:49 AM, Galen Charlton wrote:
 On Sun, Jun 15, 2014 at 4:03 PM, Stuart Yeates stuart.yea...@vuw.ac.nz 
 wrote:
 As I read it, 'Freedom to Read' means that we have to take active steps to
 protect that rights of our readers to read what they want and  in private.
 [snip]
 * building HTTPS Everywhere-like functionality into LMSs (such functionality
 may already exist, I'm not sure)
 
 Many ILSs can be configured to require SSL to access their public
 interfaces, and I think it would be worthwhile to encourage that as a
 default expectation for discovery interfaces.
 
 However, I think that's only part of the picture for ILSs.  Other
 parts would include:
 
 * staff training on handling patron and circulation data
 * ensuring that the ILS has the ability to control (and let users
 control) how much circulation and search history data gets retained
 * ensuring that the ILS backup policy strikes the correct balance
 between having enough for disaster recovery while not keeping
 individually identifiable circ history forever
 * ensuring that contracts with ILS hosting providers and services that
 access patron data from the ILS have appropriate language concerning
 data retention and notification of subpoenas.
 
 Compared to other contributors to this thread, I appear to be (a) less 
 worried about state actors than our commercial partners and (b) keener to see 
 relatively straight forward technical fixes that just work 'for free' across 
 large classes of library systems. Things like:
 
 * An ILS module that pulls the HTTPS Everywhere ruleset from 
 https://gitweb.torproject.org/https-everywhere.git/tree/HEAD:/src/chrome/content/rules
  and applies those rules as a standard data-cleanup step on all imported data 
 (MARC, etc).
 
 * A plugin to the CMS that drives the library's websites / blogs / whatever 
 and uses the same rulesets to default all links to HTTPS.
 
 * An EzProxy plugin (or howto) on silently redirecting users to HTTPS over 
 HTTP sites.
 
 cheers
 stuart

This is something that I have been interested in as well, and I have been 
asking our content providers when they will make their content available via 
HTTPS, but so far with very little uptake.  Perhaps if enough customers start 
asking, it will get enough exposure internally to drive adoption of HTTPS for 
the content side.

I looked into what EZproxy offers for the user side, and that product does not 
currently have the ability to do HTTPS to HTTP proxying, even though there is 
no technical reason why it could not be done (look at how many HTTPS sites run 
Apache in a reverse proxy to HTTP servers internally for load balancing, etc.)  

EZproxy makes the assumption that an HTTP resource will always be accessed over 
HTTP, and you cannot configure an HTTPS entry point to HTTP services to at least 
secure the side of the communication channel that is going to contain more 
identifiable information about the user, before it becomes aggregated into the 
general proxy stream.

-- 
Andrew Anderson, Director of Development, Library and Information Resources 
Network, Inc.
http://www.lirn.net/ | http://www.twitter.com/LIRNnotes | 
http://www.facebook.com/LIRNnotes


Re: [CODE4LIB] Does 'Freedom to Read' require us to systematically privilege HTTPS over HTTP?

2014-06-18 Thread Andrew Anderson
EZproxy already handles HTTPS connections for HTTPS enabled services today, and 
on modern hardware (i.e. since circa 2005), cryptographic processing far 
surpasses the speed of most network connections, so I do not accept the “it’s 
too heavy” argument against it supporting the HTTPS to HTTP functionality.  
Even embedded systems with 500MHz CPUs can terminate SSL VPNs at over 100Mb/s 
these days.

All I am saying is that the model where you expose HTTPS to the patron and 
still continue to use HTTP for the vendor is not possible with EZproxy today, 
and there is no technical reason why it could not do so, but rather a policy 
decision.  While HTTPS to HTTP translation would not completely solve the 
entire point of the original posting, it would be a step in the right direction 
until the rest of the world caught up.

As an aside, the lightweight nature of EZproxy seems to be becoming its 
Achilles Heel these days, as modern web development methods seem to be pushing 
the boundaries of its capabilities pretty hard.  The stance that EZproxy only 
supports what it understands is going to be a problem when vendors adopt 
HTTP/2.0, SDCH encoding, web sockets, etc., just as AJAX caused issues 
previously.  Most vendor platforms are Java based, and once Jetty starts 
supporting these features, the performance chasm between dumbed-down proxy 
connections and direct connections is going to become even more significant 
than it is today.

-- 
Andrew Anderson, Director of Development, Library and Information Resources 
Network, Inc.
http://www.lirn.net/ | http://www.twitter.com/LIRNnotes | 
http://www.facebook.com/LIRNnotes

On Jun 18, 2014, at 11:20, Cary Gordon listu...@chillco.com wrote:

 One of the reasons that EZProxy is so fast and resource-efficient is that
 it is very lightweight. HTTPS to HTTP processing would require that
 EZProzy, or another proxy layer behind it, provide an HTTPS endpoint.
 Building this into EZProxy, I think, would not be a good fit for
 their model.
 
 I think that it would be simpler to just do everything in nginx, or
 possibly node.
 
 Cary
 
 On Wednesday, June 18, 2014, Andrew Anderson and...@lirn.net wrote:
 
 On Jun 17, 2014, at 17:09, Stuart Yeates stuart.yea...@vuw.ac.nz
 javascript:; wrote:
 
 On 06/17/2014 08:49 AM, Galen Charlton wrote:
 On Sun, Jun 15, 2014 at 4:03 PM, Stuart Yeates stuart.yea...@vuw.ac.nz
 javascript:; wrote:
 As I read it, 'Freedom to Read' means that we have to take active
 steps to
 protect that rights of our readers to read what they want and  in
 private.
 [snip]
 * building HTTPS Everywhere-like functionality into LMSs (such
 functionality
 may already exist, I'm not sure)
 
 Many ILSs can be configured to require SSL to access their public
 interfaces, and I think it would be worthwhile to encourage that as a
 default expectation for discovery interfaces.
 
 However, I think that's only part of the picture for ILSs.  Other
 parts would include:
 
 * staff training on handling patron and circulation data
 * ensuring that the ILS has the ability to control (and let users
 control) how much circulation and search history data gets retained
 * ensuring that the ILS backup policy strikes the correct balance
 between having enough for disaster recovery while not keeping
 individually identifiable circ history forever
 * ensuring that contracts with ILS hosting providers and services that
 access patron data from the ILS have appropriate language concerning
 data retention and notification of subpoenas.
 
 Compared to other contributors to this thread, I appear to be (a) less
 worried about state actors than our commercial partners and (b) keener to
 see relatively straight forward technical fixes that just work 'for free'
 across large classes of library systems. Things like:
 
 * An ILS module that pulls the HTTPS Everywhere ruleset from
 https://gitweb.torproject.org/https-everywhere.git/tree/HEAD:/src/chrome/content/rules
 and applies those rules as a standard data-cleanup step on all imported
 data (MARC, etc).
 
 * A plugin to the CMS that drives the library's websites / blogs /
 whatever and uses the same rulesets to default all links to HTTPS.
 
 * An EzProxy plugin (or howto) on silently redirecting users to HTTPS
 over HTTP sites.
 
 cheers
 stuart
 
 This is something that I have been interested in as well, and I have been
 asking our content providers when they will make their content available
 via HTTPS, but so far with very little uptake.  Perhaps if enough customers
 start asking, it will get enough exposure internally to drive adoption of
 HTTPS for the content side.
 
 I looked into what EZproxy offers for the user side, and that product does
 not currently have the ability to do HTTPS to HTTP proxying, even though
 there is no technical reason why it could not be done (look at how many
 HTTPS sites run Apache in a reverse proxy to HTTP servers internally for
 load balancing, etc.)
 
 EZproxy makes the assumption that a HTTP resource

[CODE4LIB] Hiring Python/Javascript developers

2014-06-04 Thread Andrew Sallans
The Center for Open Science (http://cos.io) is a non-profit tech start-up
in Charlottesville, VA, aimed at promoting integrity, reproducibility, and
transparency in science.  We're now a little bit over a year old, and we
are continuing to grow rapidly.  We are still adding more Python/Javascript
developers to our existing team, and are eager to see applicants from the
Code4Lib community.

This call for applicants may be of particular interest now as we just began
a partnership with the Association of Research Libraries to build the SHARE
Notification Service (http://centerforopenscience.org/pr/2014-06-02/) over
the next 18 months.  If working on 100% free, open source software to
connect systems and tools across the entire research workflow appeals to
you, then please apply now:  http://centerforopenscience.org/jobs/

Happy to answer any questions!

Andrew Sallans

-- 
Center for Open Science (http://centerforopenscience.org)
Partnerships, Collaborations, and Funding
Twitter (asallans)
Skype (andrew.sallans)


Re: [CODE4LIB] Solr and Koha

2014-05-29 Thread Andrew Gordon
You can export your records from Koha in MARC or XML format by using the 
`Export bibliographic and holdings` option in the tools module in the staff 
client (quick tutorial here: 
http://bywatersolutions.com/2012/07/04/exporting-marc-records-in-koha-3-8/). 
For 1 bib records you should be able to do it in one shot. This is all for 
hosted Koha installs; if you are running your own, you should be able to run the 
export.pl script from the terminal. 

With the .mrc file, it's pretty much ready to go to upload into a fresh install 
of Blacklight with a rake task. So it's totally doable. 
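If you would rather skip Blacklight's rake task and push the exported .mrc
straight into Solr, here is a minimal sketch in Python, assuming the pymarc and
pysolr packages; the core URL, filename, and field names are hypothetical and
would need to match your own Solr/Blacklight schema.

# Sketch only: index a Koha .mrc export directly into a Solr core.
import pymarc
import pysolr

solr = pysolr.Solr("http://localhost:8983/solr/blacklight-core")  # hypothetical core URL

docs = []
with open("koha-export.mrc", "rb") as fh:   # hypothetical filename
    for record in pymarc.MARCReader(fh):
        titles = record.get_fields("245")
        authors = record.get_fields("100")
        doc = {
            # records without an 001 would need some other identifier
            "id": record["001"].value() if record["001"] else None,
            "title_t": titles[0].value() if titles else None,
            "author_t": authors[0].value() if authors else None,
        }
        docs.append({k: v for k, v in doc.items() if v})

solr.add(docs)
solr.commit()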

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Justin 
Coyne
Sent: Thursday, May 29, 2014 12:20 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Solr and Koha

If you can get the records from Koha in any format, it's not difficult to 
import them into Solr.  When you get Blacklight set up you can do:

Blacklight.solr.add(id: 12345, title_t: "One flew over the cuckoo's nest",
author_t: "Ken Kesey")

Blacklight.solr.commit


And you've added your first record into solr.  Feel free to ask more on the 
blacklight-development google group or #blacklight on IRC.

-Justin




On Wed, May 28, 2014 at 10:21 PM, Riley Childs rchi...@cucawarriors.comwrote:

 Does anybody have any direction about how to get a Koha export into Solr 
 so it can be utilized by Blacklight? Server power is not an issue (if 
 needed I can dedicate one to it). Has anyone done this, and if so, what are the 
 benefits and disadvantages? We have a collection of 1 books (small but 
 growing).

 Honestly this is just something to do as a learning experience, but if 
 I commit, I commit!


 Thx!
 //Riley

 Riley Childs
 Student
 Asst. Head of IT Services
 Charlotte United Christian Academy
 (704) 497-2086
 RileyChilds.net
 Sent from my Windows Phone, please excuse mistakes



Re: [CODE4LIB] Any good introduction to SPARQL workshops out there?

2014-05-02 Thread Andrew Gordon
Not sure if you're still looking for info, but I found these slides (below) the 
other day trying to get a better grasp of the basics:

Lots of slides here, but I found the slow, step-by-step build-up a good jump 
start: https://www.cambridgesemantics.com/semantic-university/sparql-by-example

Also the answer here for generally exploring a graph: 
http://stackoverflow.com/questions/2930246/exploratory-sparql-queries
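As a taste of the kind of exploratory query that answer walks through, here is a
small sketch using the SPARQLWrapper package against a public endpoint (DBpedia,
purely as an example; a large public endpoint may be slow or time out on a query
like this):

from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
# "what predicates exist, and how often are they used?" -- a classic first look
sparql.setQuery("""
    SELECT ?predicate (COUNT(*) AS ?uses)
    WHERE { ?s ?predicate ?o }
    GROUP BY ?predicate
    ORDER BY DESC(?uses)
    LIMIT 20
""")
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["predicate"]["value"], row["uses"]["value"])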

Hope it helps.
drew

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Hutt, 
Arwen
Sent: Friday, May 2, 2014 11:39 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Any good introduction to SPARQL workshops out there?

Thanks to both Owen and Deb!
These are some great resources; I'm going to explore them more.  I really 
appreciate the help!
Arwen

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Debra 
Shapiro
Sent: Thursday, May 01, 2014 9:33 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Any good introduction to SPARQL workshops out there?

I organized a SPARQL webinar that LITA put on in February. The instructor was 
Bob DuCharme, who also wrote an O'Reilly book - 
http://www.worldcat.org/oclc/752976161

You may be able to view it at the link below; I expect DuCharme would be 
willing to contract with UCSD to tailor something for you -
 
HTH,
deb

 Thank you for participating in today's LITA webinar "SKOS, SPARQL, and 
 vocabulary management", part three of a three-part series of webinars on 
 Linked Data. 
 
 You may access the recording of today's session here: 
 http://ala.adobeconnect.com/p1n8obr32vd/
 
On May 1, 2014, at 11:23 AM, Hutt, Arwen ah...@ucsd.edu wrote:

 We're interested in an introduction to SPARQL workshop for a smallish group 
 of staff.  Specifically an introduction for fairly tech comfortable 
 non-programmers (in our case metadata librarians), as well as a refresher for 
 programmers who aren't using it regularly.
 
 Ideally (depending on cost) we'd like to bring the workshop to our staff, 
 since it'll allow more people to attend, but any recommendations for good 
 introductory workshops or tutorials would be welcome!
 
 Thanks!
 Arwen
 
 
 Arwen Hutt
 Head, Digital Object Metadata Management Unit Metadata Services, 
 Geisel Library University of California, San Diego
 

dsshap...@wisc.edu
Debra Shapiro
UW-Madison SLIS
Helen C. White Hall, Rm. 4282
600 N. Park St.
Madison WI 53706
608 262 9195
mobile 608 712 6368
FAX 608 263 4849


Re: [CODE4LIB] SubjectsPlus themes

2014-04-29 Thread Andrew Darby
I'm not aware of any themes, but you could post to the list.  People
generally modify the header, footer and css for localization of the
front-end. Some sites have customized a lot, but the customizations tend to
hew to the parent site's look and feel.  Others haven't customized at all,
which has led us to rethink the very vanilla default theme.

We're just (re)starting a version 3 sprint, but haven't gotten to the front
end yet.  We're hoping to pretty it up a bit, but I'm not sure we'll have a
templating system more than css files to monkey with.  If you have
suggestions or ideas, please send them to the list, or me, or add as issues
in GitHub.

Andrew




On Tue, Apr 29, 2014 at 1:22 PM, Tom Keays tomke...@gmail.com wrote:

 I searched briefly in the SubjectsPlus group archive but found no mention
 of themes.

 https://groups.google.com/forum/?hl=en#!forum/subjectsplus




 On Tue, Apr 29, 2014 at 11:54 AM, Wilhelmina Randtke rand...@gmail.com
 wrote:

  Does anyone have a theme for SubjectsPlus up on github?
 
  I'm playing around with the CMS, and I can't find themes.  Surely they
  must exist.
 
  -Wilhelmina Randtke
 




-- 
Andrew Darby
Head, Web & Emerging Technologies
University of Miami Libraries


Re: [CODE4LIB] Cataloguing Telugu

2014-04-07 Thread Andrew Cunningham
Stuart, I had a quick look at the proposal; I'm not sure cataloguing is an
appropriate term, nor are they citations.

I suspect that a simple database, web interface, simple search interface
and Telugu collation should suffice. No specific tools would be needed. We
are talking about fairly common web infrastructure requirements; the
challenge will be integrating it with Wikimedia platforms.

Best to discuss that with the internationalisation team at WMF.

On 08/04/2014 7:02 AM, Stuart Yeates stuart.yea...@vuw.ac.nz wrote:

 Currently there is a funding proposal for cataloguing Telugu works up
 before the Wikimedia foundation. If anyone has experience with Telugu or
 knows of any tools that are likely to be useful, please give your input:

 https://meta.wikimedia.org/wiki/Grants:IEG/Making_telugu_
 content_accessible

 cheers
 stuart



Re: [CODE4LIB] Carolina Regional Group?

2014-03-28 Thread Andrew Shuping
I'm from GA and I'd be interested in joining in as well.

Andrew Shuping

Robert Frost - In three words I can sum up everything I've learned about
life: it goes on.


On Fri, Mar 28, 2014 at 3:29 PM, Akerman, Laura lib...@emory.edu wrote:

 Would you mind if some Georgia people came?

 Laura

 Laura Akerman
 Technology and Metadata Librarian
 Robert W. Woodruff Library
 Emory University
 Atlanta, Ga. 30322
 (404) 373-8241
 lib...@emory.edu
 
 From: Code for Libraries CODE4LIB@LISTSERV.ND.EDU on behalf of Forrest,
 Stuart sforr...@bcgov.net
 Sent: Friday, March 28, 2014 1:58 PM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] Carolina Regional Group?

 Yeah, something in South Carolina would be really cool.

 Stuart Forrest PhD
 Library Systems Specialist
 Beaufort County Library
 311 Scott Street
 Beaufort SC, 29902
 843 255 6450
 sforr...@bcgov.net
 www.beaufortcountylibrary.org
 For Leisure - For Learning - For Life
 
 From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] on behalf of Kevin S.
 Clarke [kscla...@gmail.com]
 Sent: Friday, March 28, 2014 1:21 PM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] Carolina Regional Group?

 I'd be interested in a regional meetup anywhere near NC/SC.

 Kevin

 On Fri, Mar 28, 2014 at 12:52 PM, Riley Childs rchi...@cucawarriors.com
 wrote:
  I live in Charlotte, but would trudge out to SC…
 
  //Riley
 
  On 3/28/14, 12:31 PM, Sarah Shealy sarah.she...@outlook.com wrote:
 
 I'm in Columbia as well Colin, so at the very least we can do a Columbia
 meetup.
 
 Sent from my iPad
 
  On Mar 28, 2014, at 12:01 PM, WILDER, COLIN wilde...@mailbox.sc.edu
 
 wrote:
 
  Sarah et al.,
 
  I wasn't able to get up to the conference. I work at USC and would be
 interested in a regional group. My sense is that there would be
 sufficient interest to make what you suggest a reality. Happy to help.
 
  -Colin Wilder, Center for Digital Humanities at the University of South
 Carolina
 
 
 
 
 
  -Original Message-
  From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf
 Of
 Sarah Shealy
  Sent: Friday, March 28, 2014 11:52 AM
  To: CODE4LIB@LISTSERV.ND.EDU
  Subject: [CODE4LIB] Carolina Regional Group?
 
 
 
  Having the conference in NC and seeing how many institutions sent
 people got me to thinking we should create a regional group. Anyone from
 NC and/or SC want to set up a regional meeting with me? It'll be fun. :)
 
 
 
  NCSU folks, I get it if you're tired of planning, but I know other
 places were represented!
 
 
 
  Thanks,
 
  Sarah
 
 
 
  Sent from my iPad



 --
 There are two kinds of people in this world: those who believe there
 are two kinds of people in this world and those who know better.



Re: [CODE4LIB] tool for finding close matches in vocabular list

2014-03-21 Thread Andrew Gordon
Ken, 

A group in Chicago has been working for a few years now on a deduplication 
toolkit that might do what you are looking for; they also have a couple of 
versions that work with an Excel or .csv file. 

https://github.com/datamade/dedupe
https://github.com/datamade/dedupe-web
https://github.com/datamade/csvdedupe

I have not worked with them extensively, but I have heard others find these 
very useful for entity recognition and resolution.
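If you just want a quick first pass before wiring up the dedupe toolkit, a rough
sketch using only Python's standard difflib can surface candidate clusters; the
cutoff value is something you would tune by eye against your own data.

# Quick-and-dirty sketch (standard library only, not the dedupe toolkit):
# group subject terms whose normalized forms are close string matches.
import difflib
import re

def normalize(term):
    # lowercase and strip punctuation so "Basketball - Women's"
    # and "Basketball-Women" compare cleanly
    return " ".join(re.sub(r"[^a-z0-9 ]+", " ", term.lower()).split())

def cluster(terms, cutoff=0.8):
    clusters = []
    for term in sorted(terms):
        key = normalize(term)
        for members in clusters:
            if any(difflib.SequenceMatcher(None, key, normalize(m)).ratio() >= cutoff
                   for m in members):
                members.append(term)
                break
        else:
            clusters.append([term])
    return [m for m in clusters if len(m) > 1]

sample = ["Irwin, Ken", "Irwin, Kenneth", "Irwin, Kenneth R.",
          "Basketball - Women", "Basketball-Women's", "Chemistry"]
for group in cluster(sample):
    print(group)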






-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Ken 
Irwin
Sent: Friday, March 21, 2014 2:25 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] tool for finding close matches in vocabular list

Hi folks,

I'm looking for a tool that can look at a list of all the subject terms in a 
poorly-controlled index and flag possible candidates for term consolidation. Our 
student newspaper index has about 16,000 subject terms, and they include a lot 
of meaningless typographical and nomenclatural differences, e.g.:

Irwin, Ken
Irwin, Kenneth
Irwin, Mr. Kenneth
Irwin, Kenneth R.

Basketball - Women
Basketball - Women's
Basketball-Women
Basketball-Women's

I would love to have some sort of pattern-matching tool that's smart about this 
sort of thing that could go through the list of terms (as a text list, 
database, xml file, or whatever structure it wants to ingest) and spit out some 
clusters of possible matches.

Does anyone know of a tool that's good for that sort of thing?

The index is just a bunch of MySQL tables - there is no real controlled-vocab 
system, though I've recently built some systems to suggest known SH's to reduce 
this sort of redundancy.

Any ideas?

Thanks!
Ken


[CODE4LIB] .cpd file format head scratcher

2014-03-11 Thread Andrew Gordon
Hey All,

For a set of digitized pharmaceutical cards, I am coming up against an image 
file format that seems to be locked in time. It's supposedly a Compressed 
PhotoDefiner (?) lossless (.cpd) file (http://www.photodefiner.com/home/). 
Though when I try to load up the software, I can't get it to take on any of our 
windows machines (running 8 and 7). Don't have a mac on hand so don't know if 
that works or not, currently.

In my experience, though, I've always been able to find some rogue third party 
file converter (or imagemagick) to be helpful in these scenarios but this 
format  is just not something that appears to have been accounted for. 
Additionally, it's one of those file formats that seem to only pop randomly 
generated answer sites with questionable downloads in a google search, such as  
http://www.solvusoft.com/en/file-extensions/file-extension-cpd/

Just wanted to see if anyone has come across this format and whether there 
might be any tools to convert it.

Thanks,
Drew




Andrew Gordon, MSI
Systems Librarian
Center for the History of Medicine and Public Health
New York Academy of Medicine
1216 Fifth Avenue
New York, NY, 10029
212.822.7324
http://nyamcenterforhistory.org/


Re: [CODE4LIB] .cpd file format head scratcher

2014-03-11 Thread Andrew Gordon
Thanks for the quick and helpful responses, everyone. Kyle and Rachel, I think 
the fact that it's CONTENTdm is what it comes down to. I should have mentioned that 
I am trying to extract these images from CONTENTdm. 

So I am still a little confused, though, about how the .cpd file in CONTENTdm 
works. In the export I am noticing that the `cdmfile` variously points to .cpd 
files in some records and .jpg files in other records. It does not seem to 
correspond to whether there is a front and back to the image or not (I think 
they all have front and back in the same object, though I may be wrong). I am 
relatively green with CONTENTdm, so this is a learning moment for me.

If this is the case, how do I go about extracting the image file information 
from the .cpd files? If they are XML, I am not sure how to read their contents 
as opening them up in a text editor or browser shows 'invalid file'.

Thanks,
d


-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Kyle 
Banerjee
Sent: Tuesday, March 11, 2014 11:42 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] .cpd file format head scratcher

Just out of curiosity, are these files in a DAM or did you get them elsewhere? 
The reason I ask is that you appear to have a bunch of pharmaceutical cards in 
a CONTENTdm system at
http://nyam.contentdm.oclc.org/cdm/search/collection/p4129coll16

If you're trying to extract the image files from CDM, the cpd file is an XML 
file defining a compound object with object identifiers, filenames, and 
descriptions. You can iterate through all the images separately.
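For what it's worth, if the .cpd really is the usual CONTENTdm compound-object
XML, a short sketch like this (Python standard library only) will list the page
files; the <page>/<pagetitle>/<pagefile>/<pageptr> element names are an
assumption based on typical exports, and the filename is hypothetical.

import xml.etree.ElementTree as ET

def list_pages(cpd_path):
    root = ET.parse(cpd_path).getroot()
    for page in root.iter("page"):
        yield {
            "title": page.findtext("pagetitle"),
            "file": page.findtext("pagefile"),
            "pointer": page.findtext("pageptr"),
        }

for page in list_pages("record_0001.cpd"):  # hypothetical filename
    print(page)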

kyle


On Tue, Mar 11, 2014 at 8:15 AM, Andrew Gordon agor...@nyam.org wrote:

 Hey All,

 For a set of digitized pharmaceutical cards, I am coming up against an 
 image file format that seems to be locked in time. It's supposedly a 
 Compressed PhotoDefiner (?) lossless (.cpd) file ( 
 http://www.photodefiner.com/home/). Though when I try to load up the 
 software, I can't get it to take on any of our windows machines 
 (running 8 and 7). Don't have a mac on hand so don't know if that 
 works or not, currently.

 In my experience, though, I've always been able to find some rogue 
 third party file converter (or imagemagick) to be helpful in these 
 scenarios but this format  is just not something that appears to have been 
 accounted for.
 Additionally, it's one of those file formats that seem to only pop 
 randomly generated answer sites with questionable downloads in a 
 google search, such as  
 http://www.solvusoft.com/en/file-extensions/file-extension-cpd/

 Just wanted to see if anyone has come across this format and whether 
 there might be any tools to convert it.

 Thanks,
 Drew



 
 Andrew Gordon, MSI
 Systems Librarian
 Center for the History of Medicine and Public Health New York Academy 
 of Medicine
 1216 Fifth Avenue
 New York, NY, 10029
 212.822.7324
 http://nyamcenterforhistory.org/



Re: [CODE4LIB] Windows XP EOL

2014-03-05 Thread Andrew Anderson
You’d be amazed at what you can do with port 80/443 access, so while that is a 
deterrent, it is not a solution that will make any guarantees that the machines 
cannot do anything nefarious.

Adding a proxy server in front of the machines with a whitelist of allowed web 
sites instead of NAT would go further, but at the end of the day you’re still 
talking about taking a 14-year-old operating system that is no longer supported 
and connecting it to the internet.

-- 
Andrew Anderson, Director of Development, Library and Information Resources 
Network, Inc.
http://www.lirn.net/ | http://www.twitter.com/LIRNnotes | 
http://www.facebook.com/LIRNnotes

On Mar 5, 2014, at 7:20, Michael Bond mb...@the-forgotten.org wrote:

 Why not set up your XP boxes to use a private network (10.x.x.x or 
 192.168.x.x) and put them behind a heavily firewalled NAT solution. It could be 
 set up on the network level or with a router and a Linux box running IP 
 tables. Lots of ways to do it. 
 
 Install and keep updated Firefox or Chrome, lock down the machines so that 
 users don’t have permissions to install anything, and set up a whitelist of 
 programs that are allowed to be run (takes a little bit of work, but it's very 
 doable. We did this in WVU Libraries on all our machines [500 or so], public 
 and staff, until we got our virtualized desktops in place). 
 
 You can’t disallow Internet Explorer from running, but you can limit the 
 websites that it is allowed to visit. You could even go as far as only 
 allowing it to connect to the local host, but likely anything ‘on campus’ 
 would be fine.
 
 I’m assuming you are using some sort of image management solution (Ghost, at 
 the very least). So once you get an image set up it shouldn’t be that bad to 
 maintain and deploy. And if something does become exploited, you can 
 re-image the machine. 
 
 Configure the NAT to not allow any traffic to come from that private network 
 other than ports 80 and 443 (and any other legitimate port that you need). 
 That way, if a machine does become compromised it can’t do (much) harm outside 
 of your private XP network. 
 
 If you need AD authentication you can set that all up in the ACLs for the 
 network as well so that they can only contact a specific authentication 
 server. If you absolutely needed to you could even put an auth server on the 
 same private network that has a trust back to your main auth servers. Put 2 
 network interfaces in it and it can live on 2 networks so you don’t have to 
 poke a hole through your private network's ACLs to get back to the main auth 
 servers. 
 
 It's not an ideal situation, but if you can’t afford new machines and you 
 absolutely need to keep your XP machines running there are ways of doing it. 
 But at what point does it become cost prohibitive with your time compared to 
 investing in new hardware?
 
 If you don’t do something though, you’ll be spending all your time rebuilding 
 compromised XP boxes eventually. 
 
 Michael Bond
 mb...@the-forgotten.org
 
 
 
 On Mar 4, 2014, at 4:55 PM, Riley Childs rchi...@cucawarriors.com wrote:
 
 Not to stomp around, but 1 hour is a LONG time for an unpatched computer, 
 especially when in close proximity to other unpatched computers! DeepFreeze 
  is great, but it is not a long-term solution; also, starting next week you 
  will get a nag screen every time you log in telling you about the EOL.
 
 Riley Childs
 Student
 Asst. Head of IT Services
 Charlotte United Christian Academy
 (704) 497-2086
 RileyChilds.net
 Sent from my Windows Phone, please excuse mistakes
 
 From: Benjamin Stewartmailto:benjamin.stew...@unbc.ca
 Sent: ‎3/‎4/‎2014 4:46 PM
 To: CODE4LIB@LISTSERV.ND.EDUmailto:CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] Windows XP EOL
 
 Hello everyone
 
 (I have been in IT for 25+ years, k-7 for 15 years and now 10 months UNBC
 Library)
 
 
 If I worked for an organization that did not have the money to go to either a
 replacement Win7 desktop or a Linux desktop (for usability reasons), here is what I would do:
 
 I would contact Faronics and get a deal on educational licenses to
 install Deepfreeze.
 Then set up all workstations with basic accounts, and have them reboot if idle for 1
 hour (and shut down and start up between set times).
 Deepfreeze also has a remote console to unfreeze and refreeze workstations for
 maintenance (e.g. browser updates, Flash, Adobe).
 This, in hand with PDQ Deploy/Inventory, works very nicely (the basic version is
 free).
 
 
 The last option (not possible for most places) would be to contact the Dell official
 lease site directly or via eBay (there are Canadian and US suppliers).

 You can buy a nice Dell 780 with Win7 Pro for about $140 with shipping.
 Some companies like Dell or HP have been known to also donate to non-profits.
 
 ~Ben
 
 System Administrator
 Geoffrey R. Weller library
 UNBC, BC Canada
 PH (250) 960-6605
 benjamin.stew...@unbc.ca
 
 
 
 
 
 
 
 On 2014-03-04, 11:12 AM, Ingraham Dwyer, Andy adw...@library.ohio.gov
 wrote:
 
 I would

Re: [CODE4LIB] Windows XP EOL

2014-03-05 Thread Andrew Anderson
On Mar 5, 2014, at 15:37, Marc Truitt mtru...@mta.ca wrote:

 Perhaps that's why several contributors to this thread have suggested
 that M$' EOL declaration aside, why give it up?  XP, I'll miss ya...

XP: The new DOS 3.3?

-- 
Andrew Anderson, Director of Development, Library and Information Resources 
Network, Inc.
http://www.lirn.net/ | http://www.twitter.com/LIRNnotes | 
http://www.facebook.com/LIRNnotes


Re: [CODE4LIB] Proquest search api?

2014-02-17 Thread Andrew Anderson
The document you want to request from ProQuest support was called 
Federated-Search.docx when they sent it to me.  This will address many of your 
documentation needs.

ProQuest used to have an Excel spreadsheet with all of the product codes for 
the databases available for download from 
http://support.proquest.com/kb/article?ArticleId=3698&source=article&c=12&cid=26,
 but it appears to no longer be available from that source.  ProQuest support 
should be able to answer where it went when you request the federated search 
document.

You may receive multiple 856 fields for Citation/Abstract, Full Text, and 
Scanned PDF:

=856  41$3Citation/Abstract$uhttp://search.proquest.com/docview/...
=856  40$3Full Text$uhttp://search.proquest.com/docview/...
=856  40$3Scanned PDF$uhttp://search.proquest.com/docview/...

I would suggest that rather than relying on the 2nd indicator, you should parse 
subfield 3 instead to find the format that you prefer.  You see the multiple 
856 fields in the MARC records for ProQuest holdings as well, as that is how 
ProQuest handles coverage gaps in titles, so if you have ever processed 
ProQuest MARC records before, you should already be prepared for this.
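To make that concrete, here is a rough sketch (assuming the requests and pymarc
packages) that runs the example query from this thread and picks links out of
the 856 fields by their $3 label; pymarc can usually pull the MARCXML records
straight out of the SRU envelope as long as they carry the MARC21/slim
namespace, otherwise you would need to unwrap the recordData elements first.

import io
import requests
import pymarc

# The endpoint and database code are the ones mentioned in this thread;
# your site's access may differ.
params = {
    "operation": "searchRetrieve",
    "version": "1.2",
    "maximumRecords": "30",
    "startRecord": "1",
    "query": 'title="global warming" AND author=Castet',
}
resp = requests.get("http://fedsearch.proquest.com/search/sru/pqdtft",
                    params=params, timeout=60)

for record in pymarc.parse_xml_to_array(io.BytesIO(resp.content)):
    titles = record.get_fields("245")
    title = titles[0].value() if titles else ""
    links = {}
    for field in record.get_fields("856"):
        labels = field.get_subfields("3")
        urls = field.get_subfields("u")
        if urls:
            links[labels[0] if labels else "(unlabeled)"] = urls[0]
    # prefer full text when present, otherwise fall back to the citation link
    print(title, links.get("Full Text") or links.get("Citation/Abstract"))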

-- 
Andrew Anderson, Director of Development, Library and Information Resources 
Network, Inc.
http://www.lirn.net/ | http://www.twitter.com/LIRNnotes | 
http://www.facebook.com/LIRNnotes

On Feb 17, 2014, at 10:28, Jonathan Rochkind rochk...@jhu.edu wrote:

 I still haven't managed to get info from Proquest support, but thanks to off 
 list hints from another coder, I have discovered the Proquest SRU endpoint, 
 which I think is the thing they call the XML gateway.
 
 Here's an example query:
 
  http://fedsearch.proquest.com/search/sru/pqdtft?operation=searchRetrieve&version=1.2&maximumRecords=30&startRecord=1&query=title%3D%22global%20warming%22%20AND%20author%3DCastet
 
 For me, coming from an IP address recognized as 'on campus' for our general 
 Proquest access, no additional authentication is required to use this API. 
 I'm not sure if we at some point prior had them activate the XML Gateway 
 for us, likely for a federated search product, or if it's just this way for 
 everyone.
 
 The path component after /sru, pqdtft is the database code for Proquest 
 Dissertations and Theses. I'm not sure where you find a list of these 
  database codes in general; if you've made a successful API request to that 
 endpoint, there will be a diagnosticMessage element near the end of the 
 response listing all database codes you have access to (but without 
 corresponding full English names, you kind of have to guess).
 
 The value of the 'query' parameter is a valid CQL query, as usual for SRU. 
 Unfortunately, there seems to be no SRU explain response to tell you what 
 fields/operators are available. But guessing often works, title, author, 
 and date are all available -- I'm not sure exactly how 'date' works, need 
 to experiment more. The CQL query param above un-escaped is:
 
 title=global warming AND author=Castet
 
 Responses seem to be in MARCXML, and that seems to be the only option.
 
 It looks like you can tell if a full text is available (on Proquest platform) 
 for a given item, based on whether there's an 856 field with second indicator 
 set to 0 -- that will be a URL to full text. I think. It looks like. Did I 
 mention if there are docs for any of this, I haven't found them?
 
 So, there you go, a Proquest search API!
 
 Jonathan
 
 
 
 On 2/12/14 3:44 PM, Jonathan Rochkind wrote:
  Aha, thinking to Google search for "proquest z3950" actually got me some
 additional clues!
 
  "Sites that are currently using Z39.50 to search ProQuest are advised to
  consider moving to the XML gateway."
 
 in Google snippets for:
 
 http://www.proquest.com/assets/downloads/products/techrequirements_np.pdf
 
  Also: "If you are using the previous XML
  gateway for access other than with a federated search vendor, please
  contact our support center at
  www.proquest.com/go/migrate and we can get you the new XML gateway
  implementation documentation."
 
 Okay, so now I at least know that something called the XML Gateway
 exists, and that's what I want info on or ask about!  (Why are our
 vendors so reluctant to put info on their services online?)
 
 I am not a huge fan of z3950, and am not ordinarily optimistic about
  its ability to actually do what I need, but I'd use it if it was all
 that was available; in this case, it seems like Proquest is recommending
 you do NOT use it, but use this mysterious 'XML gateway'.
 
 
 
 On 2/12/14 3:29 PM, Eric Lease Morgan wrote:
 On Feb 12, 2014, at 3:22 PM, Jonathan Rochkind rochk...@jhu.edu wrote:
 
 I feel like at some point I heard there was a search API for the
 Proquest content/database platform.
 
 
 While it may not be the coolest, I’d be willing to bet Proquest
 supports Z39.50. I used it lately to do some interesting queries
 against the New York Times Historical Newspapers Database (index). [1]
 Okay. I know

Re: [CODE4LIB] Python CMSs

2014-02-13 Thread Andrew Hankinson
I have a small anecdote on my experience with Drupal, Django, and custom 
development.

I was writing a site that required a number of custom content types, some of 
them fairly complex, and a Solr back-end for full-text and faceted search. I 
had developed a number of Drupal sites up to that point, but this was probably 
the most complex one.

I tore my hair out for a month or two, trying to get all of the different 
Drupal modules to talk to each other, and writing lots of glue code to go 
between the custom modules using the (sometimes undocumented) hooks for each 
module. 

One day I became so frustrated that I decided that I would give myself 24 hours 
to re-do the site in Django. If I could get the Django site up to par with the 
Drupal site in that amount of time, I would move forward with Django. 
Otherwise, I would keep going with Drupal. Up to that point, I had done the 
Django tutorial a couple times, and implemented a few test sites, but not much 
else.

Within 24 hours I had re-implemented the content type models, hooked up the 
Solr search, worked out a few of the templates, and was well on my way to 
actually making progress with the site. More than that, I was enjoying the 
coding rather than staring in frustration at hooks and wondering why something 
wasn’t getting called when it should be.

Since then I haven’t touched Drupal.

Cheers,
-Andrew

On Feb 13, 2014, at 9:59 PM, Riley Childs rchi...@cucawarriors.com wrote:

 WordPress is easy for content creators, but don't let the blog part fool you: 
 it is a fully developed framework that is easy to develop for. It is intended 
 to make it easy to get started, but from the base upward it is 100% customizable. I 
 don't know what your particular needs are, but I would give WP a serious 
 look! Plus WP integrates well with any web app you could shake a stick at. In 
 summary, choose a CMS that fits YOUR needs; my rants are what made WP a good 
 fit for me. Yours are different, so make a decision based on what YOU need, 
 not my needs!
 
 Riley Childs
 Student
 Asst. Head of IT Services
 Charlotte United Christian Academy
 (704) 497-2086
 RileyChilds.net
 Sent from my Windows Phone, please excuse mistakes
 
 From: Daron Dierkesmailto:daron.dier...@gmail.com
 Sent: ‎2/‎13/‎2014 9:52 PM
 To: CODE4LIB@LISTSERV.ND.EDUmailto:CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] Python CMSs
 
 If you're new to python and django there will be a steep learning curve for
 you, but probably a much steeper one for people after you who may not do
 python at all.  Drupal and Wordpress are limited, but non-technical
  librarians can still get in pretty easily to fix typos and add links at
  least.  Codecademy has a decent intro python course:
 http://www.codecademy.com/tracks/python
 Udemy has a few python courses with some django as well.
 
 A big reason why I've been learning django is to try to understand how our
 library can work with the various DH projects that use our collections. If
 we need to at some point take on permanent ownership of these projects or
 if we want to develop them further, a basic familiarity on the part of our
 library staff seems like a good idea.


[CODE4LIB] Information Systems Coordinator at the University of Miami Libraries

2014-02-06 Thread Andrew Darby

The University of Miami is an Equal Opportunity Affirmative Action
Employer. The University has a strong commitment to diversity and
encourages applications from candidates of diverse cultural backgrounds.



-- 
Andrew Darby
Head, Web & Emerging Technologies
University of Miami Libraries


Re: [CODE4LIB] EZProxy changes / alternatives ?

2014-02-03 Thread Andrew Anderson
For me it’s a little more concrete, and a little less abstract when it comes to 
why a viable alternative to EZproxy is necessary.  It has very little to do 
with the cost of EZproxy itself, and much more to do with support, features, 
and functionality.

There exists a trivial DoS attack against EZproxy that I reported to OCLC about 
2 years ago, and has not been addressed yet.

Native IPv6 support in EZproxy has slipped by years now.  I have patrons using 
IPv6 for access today, and I want to provide them a better experience than forcing 
them to use a 6to4 gateway at their ISP.

You cannot proxy HTTPS to HTTP with EZproxy to secure the patron-to-proxy side 
of the communication, which would increase your patrons' privacy.

I have requested that OCLC make a minor change to their existing AD 
authentication support to enable generic LDAP/Kerberos authentication, which was 
denied because “no one wants it”.  Since they support AD, 95% of the code 
required already exists, and would make a lot more sense than some of the other 
authentication schemes that EZproxy already supports.  This closes the door on 
integration with eDirectory, IPA, SUN Directory Server, OpenLDAP, etc. for no 
good reason.

OCLC has been the steward of EZproxy for over 5 years now, and in that time, 
they have yet to fully document the software.  Every few months some new obscure 
configuration option gets discussed on the EZproxy list that I’ve never seen 
before, and I have been working with this software for over a decade now.  This 
is not only limited to existing configuration options, either — there was no 
documentation on the new MimeFilter option when it was first introduced.  I 
would have expected that the IT staff at OCLC that is managing the EZproxy 
service would have demanded full documentation by now, and that documentation 
would have been released to customers as well.

EZproxy does not cluster well.  The peering support is functional, but not 
seamless when there is a failure.  When a proxy in the server pool goes down, 
the patron is prompted for authentication again when they land on a new proxy 
server, since EZproxy does not share session state.  External load balancers 
cannot fix this problem, either, for the same reason.

EZproxy does not support gzip compression, causing library access to use an 
additional 80-90% bandwidth for textual content (HTML, CSS, JS, etc.).

EZproxy does not support caching, causing library access to use an additional 
30-50% bandwidth for cacheable web assets. (And yes, you can park a 
cache in front of EZproxy to offset this, which is how I collected the 30-50% 
numbers, but doing so breaks the “it’s easy and just works” model that EZproxy 
promises.)

Combine the lack of gzip support with the lack of caching support, and you are 
looking at around a 60-80% overall increase in bandwidth consumption.  When you 
have a user community measured in hundreds of users, things like gzip 
compression and caching may not matter as much, but when your user community is 
measured in the hundreds of thousands of patrons, these things really do 
matter, and mean the difference between doubling your bandwidth costs this 
year, or deferring that expense 5-7 years down the road.

So it’s not _just_ $500 per year when you take a step back and look at the 
bigger picture.  It’s $500 per year, plus the per Mb cost of your internet 
connection — both inbound and outbound — which can be measured in hundreds of 
dollars per month for larger sites.  If you could cut that by 2/3 just by 
switching to a different proxy solution, that might get your attention, even if 
you shifted the $500/yr support costs to a different entity.  

Imagine never hearing “wow this library network is slow” again because a web 
page that used to load 1MB of content was able to gzip that down to 600KB, and 
300KB of that content was served off the local proxy server, leaving just 300KB 
to pull off the remote server.  How much is a better user experience worth to 
you?

Bottom line: competition is good.  Just look at how Internet Explorer is almost 
a sane browser now, thanks largely to competition from Firefox and Chrome.  If 
coming up with a viable alternative to EZproxy using open source tools causes a 
security, features, and functionality arms race, then everyone wins.

-- 
Andrew Anderson, Director of Development, Library and Information Resources 
Network, Inc.
http://www.lirn.net/ | http://www.twitter.com/LIRNnotes | 
http://www.facebook.com/LIRNnotes

On Jan 31, 2014, at 18:43, Kyle Banerjee kyle.baner...@gmail.com wrote:

 On Fri, Jan 31, 2014 at 3:10 PM, Salazar, Christina 
 christina.sala...@csuci.edu wrote:
 
 I think though that razor thin budgets aside, the EZProxy using community
 is vulnerable to what amounts to a monopoly. Don't get any ideas, OCLC
 peeps (just kiddin') but now we're so captive to EZProxy, what are our
 options if OCLC wants to gradually (or not so gradually) jack up the price?
 
 Does

Re: [CODE4LIB] EZProxy changes / alternatives ?

2014-01-31 Thread Andrew Anderson
EZproxy is a self-installing statically compiled single binary download, with a 
built-in administrative interface that makes most common administrative tasks 
point-and-click, that works on Linux and Windows systems, and requires very 
little in the way of resources to run.  It also has a library of a few hundred 
vendor stanzas that can be copied and pasted and work the majority of the time.

To successfully replace EZproxy in this setting, it would need to be packaged 
in such a way that it is equally easy to install and maintain, and the library 
of vendor stanzas would need to be developed as apache conf.d files.

Re: nginx from another reply in this thread, I am keeping my eye on it for 
future projects, but one thing it does not have currently is the wealth of 
Apache modules.  Some of the authentication that is commonly used in a library 
setting are supported by existing Apache modules, while nginx does not support 
them. Since it was developed with a different set of priorities, supporting 
things like Athens/CAS/SAML were not the main focus of nginx historically.

-- 
Andrew Anderson, Director of Development, Library and Information Resources 
Network, Inc.
http://www.lirn.net/ | http://www.twitter.com/LIRNnotes | 
http://www.facebook.com/LIRNnotes

On Jan 31, 2014, at 12:43, Timothy Cornwell tc...@cornell.edu wrote:

 I have an IT background and some apache proxy experience, and it seems fairly 
 easy - for me.  I understand it may not be for libraries with limited IT 
 resources.  I am not at all familiar with EZProxy, so I have to ask:
 
 What is it about EZProxy that makes it attractive for those libraries with 
 limited IT resources?
 
 -T
 
 
 
 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Kyle 
 Banerjee
 Sent: Friday, January 31, 2014 12:14 PM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] EZProxy changes / alternatives ?
 
 Many good ideas in this thread.
 
 One thing I'd just like to throw out there is that there are some ideas that 
 may be good to distribute in the form of virtual machines and this might be 
 one of them.
 
 Proxying is needed by practically all libraries and takes little in terms of 
 systems resources. But many libraries with limited IT resources would have 
 trouble implementing alternatives to ezproxy -- especially if they have to 
 use authentication features not supported by Apache HTTPD. Even for those who 
 do have enough staff time, it seems kind of nuts to have everyone spending 
 time solving the same problems.
 
 kyle
 
 
 On Fri, Jan 31, 2014 at 5:43 AM, Ryan Eby ryan...@gmail.com wrote:
 
 There was actually a breakout at the 2011(?) Code4lib discussing Apache and 
 using it as a proxy. I believe Terry Reese and Jeremy Frumkin, then 
 from Oregon?, were the ones leading it. There was lots of interest but 
 I'm not sure if anything took off or if they have documentation 
 somewhere of how far they got. I remember it being about getting 
 something a consortia of libraries could use together so may have been 
 more complex requirements than what is looked for here.
 
 
 http://wiki.code4lib.org/index.php/Can_we_hack_on_this:_Open_Extensibl
 e_Proxy:_going_beyond_EZProxy%3F
 
 --
 Ryan Eby
 


Re: [CODE4LIB] EZProxy changes / alternatives ?

2014-01-29 Thread Andrew Anderson
 for testing:

<Location "/badpath">
ProxyHTMLEnable Off
SetOutputFilter INFLATE;dummy-html-to-plain
ExtFilterOptions LogStdErr Onfail=remove
</Location>
ExtFilterDefine dummy-html-to-plain mode=output intype=text/html 
outtype=text/plain cmd="/bin/cat -"

So what’s currently missing in the Apache HTTPd solution?

- Services that use an authentication token (predominantly ebook vendors) need 
special support written.  I have been entertaining using mod_lua for this to 
make this support relatively easy for someone who is not hard-core technical to 
maintain.

- Services that are not IP authenticated, but use one of the Form-based 
authentication variants.  I suspect that an approach that injects a script tag 
into the page pointing to javascript that handles the form fill/submission 
might be a sane approach here.  This should also cleanly deal with the ASP.net 
abominations that use __PAGESTATE to store sessions client-side instead of 
server-side.

- EZproxy’s built-in DNS server (enabled with the “DNS” directive) would need 
to be handled using a separate DNS server (there are several options to choose 
from).

- In this setup, standard systems-level management and reporting tools would be 
used instead of the /admin interface in EZproxy

- In this setup, the functionality of the EZproxy /menu URL would need to be 
handled externally.  This may not be a real issue, as many academic sites 
already use LMS or portal systems instead of the EZproxy to direct students to 
resources, so this feature may not be as critical to replicate.

- And of course, extensive testing.  While the above ProQuest stanza works for 
the main ProQuest search interface, it won’t work for everyone, everywhere just 
yet.

Bottom line: Yes, Apache HTTPd is a viable EZproxy alternative if you have a 
system administrator who knows their way around Apache HTTPd, and are willing 
to spend some time getting to know your vendor services intimately.

All of this testing was done on Fedora 19 for the 2.4 version of HTTPd, which 
should be available in RHEL7/CentOS7 soon, so about the time that hard 
decisions are to be made regarding EZproxy vs something else, that something 
else may very well be Apache HTTPd with vendor-specific configuration files.

-- 
Andrew Anderson, Director of Development, Library and Information Resources 
Network, Inc.
http://www.lirn.net/ | http://www.twitter.com/LIRNnotes | 
http://www.facebook.com/LIRNnotes

On Jan 29, 2014, at 14:42, Margo Duncan mdun...@uttyler.edu wrote:

 Would you *have* to be hosted? We're in a rural part of the USA and network 
 connections from here to anywhere aren't great, so we try to host most 
 everything we can.  EZProxy really is EZ to host yourself.
 
 Margo
 
 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of 
 stuart yeates
 Sent: Wednesday, January 29, 2014 1:40 PM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] EZProxy changes / alternatives ?
 
  The text I've seen talks about "[e]xpanded reporting capabilities to support 
  management decisions" in forthcoming versions and encourages moving towards the 
  hosted solution.
 
 Since we're in .nz, they'd put our hosted proxy server in .au, but the 
 network connection between .nz and .au is via the continental .us, which puts 
 an extra trans-pacific network loop in 99% of our proxied network connections.
 
 cheers
 stuart
 
 On 30/01/14 03:14, Ingraham Dwyer, Andy wrote:
 OCLC announced in April 2013 the changes in their license model for North 
 America.  EZProxy's license moves from requiring a one-time purchase of 
 US$495 to an *annual* fee of $495, or through their hosted service, with the 
 fee depending on scale of service.  The old one-time purchase license is no 
 longer offered for sale as of July 1, 2013.  I don't have any details about 
 pricing for other parts of the world.
 
 An important thing to recognize here, is that they cannot legally change the 
 terms of a license that is already in effect.  The software you have 
 purchased under the old license is still yours to use, indefinitely.  OCLC 
 has even released several maintenance updates during 2013 that are available 
 to current license-holders.  In fact, they released V5.7 in early January 
 2014, and made that available to all license-holders.  However, all updates 
 after that version are only available to holders of the yearly subscription. 
  The hosted product is updated to the most current version automatically.
 
 My recommendation is:  If your installation of EZProxy works, don't change 
 it.  Yet.  Upgrade your installation to the last version available under the 
 old license, and use that for as long as you can.  At this point, there are 
 no world-changing new features that have been added to the product.  There 
 is speculation that IPv6 support will be the next big feature-add, but I 
 haven't heard anything official.  Start

Re: [CODE4LIB] Digital Collections Browser Kiosk Software Options

2014-01-22 Thread Andrew Gordon
Very cool, did not consider these approaches but they are worth looking into. 
Out of curiosity, would there be good recommendations if we were to forego the 
touch screen requirement? Just plain ole' dumb mouse and keyboard?

Thanks again,
drew

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Derek 
Merleaux
Sent: Wednesday, January 22, 2014 3:27 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Digital Collections Browser Kiosk Software Options

On a similar note to Sam's suggestion, I saw a demo by Open Exhibits 
http://openexhibits.org/category/software/ of their multi-touch image browser - 
they just released a new version of their open-source sdk that allows use of 
the Leap Motion controller (or several of them for more
users) - that way you can use a less expensive non-touch screen and get the 
same or better effect. Been meaning to try this out but time keeps 
getting away from me. Would love to hear of someone making it work.
-Derek

Derek Merleaux
@dmer





On Wed, Jan 22, 2014 at 12:45 PM, Andrew Gordon agor...@nyam.org wrote:

 Hi All,

 We are looking into options for setting up a physical kiosk 
 (touchscreen monitor and computer) in our lobby to allow visitors to 
 our building to browse digital versions of some items from our 
 collection. I see that Turning The Pages (e.g. 
 http://archive.nlm.nih.gov/proj/ttp/) provides a nice solution for 
 this but I just wanted to see if anyone else had worked with something 
 similar and might know of any other options (open source?) so that we 
 can do a little comparing and contrasting. For some reason I am 
 thinking there was a discussion a little while back about 3D digital 
 collections browsing but can't seem to locate it and don't know if it was 
 like the above scenario.

 I think since it's a kiosk style implementation and we are looking for 
 apples-to-apples comparisons, we are interested in the physical, 
 touch-screen turning of the page interaction rather than a browser 
 pointed at a more pragmatic digital collections browser, at least at 
 this point in the exploration.

 Thanks in advance for anyone that might have potential suggestions,

 -d

 
 Andrew Gordon, MSI
 Systems Librarian
 Center for the History of Medicine and Public Health New York Academy 
 of Medicine
 1216 Fifth Avenue
 New York, NY, 10029
 212.822.7324
 http://nyamcenterforhistory.org/



Re: [CODE4LIB] archiving web pages

2014-01-15 Thread Andrew Darby
If it's doable, I think preserving the whole enchilada is desirable.  For
instance, at my last library, there was a regular assignment where students
needed the print version of old periodicals because they were tasked with
analysing the ads and layouts.  Someone might be interested in web layouts
from the 2000s, and there might be content (again, ads, but also masthead
logos, ???) that might not otherwise be captured.

Andrew


On Wed, Jan 15, 2014 at 10:29 AM, Wilhelmina Randtke rand...@gmail.comwrote:

 Agreed, don't focus too much on preserving the presentation for an online
 newspaper.  The text and images are important, but the layout isn't so
 important.

 -Wilhelmina Randtke


 On Tue, Jan 14, 2014 at 10:59 AM, Kyle Banerjee kyle.baner...@gmail.com
 wrote:

  IMO, there are many web archiving situations where it is more appropriate
  to just focus on the content rather than the manifestation of the
 content.
  Just as you wouldn't expect a 1995 article from the NYT to be displayed
 as
  the website was in 1995 or an article in an online database to actually
  appear like it originally appeared online, it's the content rather than
 the
  skin that's relevant in the case of a newspaper. If you make sure it's
 in a
  format that can be migrated forward and added to standalone or union
  systems that provide access to this sort of stuff, you'll be fine.
 
  kyle
 
 
  On Tue, Jan 14, 2014 at 8:48 AM, Kathryn Frederick (Library) 
  kfred...@skidmore.edu wrote:
 
   Hi,
   I'm trying to develop a strategy for preserving issues of our school's
  online
   newspaper. Creating a WARC file of the content seems straightforward,
 but
   how will that content fare long-term? Also, how is the WARC served to
 an
   end-user? Is there some other method I should look at?
   Thanks in advance for any advice!
   Kathryn
  
 




-- 
Andrew Darby
Head, Web  Emerging Technologies
University of Miami Libraries


Re: [CODE4LIB] EZcheck: a java-based catalog link checking program

2014-01-08 Thread Andrew Nisbet
Hi Joshua,

I have been looking for, and thinking about building, one of these myself. I 
will be interested to see your detection technique. In the meantime, Java 
applications are distributed as jar files which are then made executable with 
the inclusion of a manifest as shown at 
http://www.cis.upenn.edu/~matuszek/cit597-2002/Pages/executable-jar-files.html. 
Then users need only execute the jar as they would any other executable on 
their OS.

Edmonton Public Library
Andrew Nisbet
ILS Administrator, IT Services

T: 780.496.4058  F: 780.496.8317

anis...@epl.ca  Spread the words.
 
 
-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Joshua 
Welker
Sent: January-08-14 3:48 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] EZcheck: a java-based catalog link checking program

I put together a link checking program that searches through MARC 856 fields 
for broken links and generates a report spreadsheet. It was originally in 
Python, but I switched to Java for a variety of reasons (don't want to start a 
language flame war). The result is the EZcheck program above. If anyone would 
like to give it a whirl, feel free and let me know how it works for you. There 
is no documentation right now, but it is fairly self-explanatory and has 
tooltips if you mouse over each field.
We used this at my library and found several thousand broken links in our 
catalog.



https://github.com/jswelker/ezcheck

https://github.com/jswelker/ezcheck/releases (It's an executable JAR file, 
which you should be able to double-click to launch on any platform, assuming 
you have Java 7.)



The program accepts input in several formats. You can feed it a tab-delimited 
text file containing a record number, an 856 field, a title, and an author. You 
can feed it .MRC records. If you use the Sierra ILS and have appropriate user 
permissions, you can use the Direct SQL Access interface to connect straight to 
the ILS without having to generate MARC files or create lists.



This is my first multi-threaded program and my first major Java/JavaFX project, 
so if you have any feedback on the code quality or find any bugs, please let me 
know. I also have no clue how to package a Java app into a .exe file or the Mac 
equivalent, so I'd welcome any pointers on that subject.



Josh Welker

Information Technology Librarian

James C. Kirkpatrick Library

University of Central Missouri

Warrensburg, MO 64093

JCKL 2260

660.543.8022


Re: [CODE4LIB] mass convert jpeg to pdf

2013-11-12 Thread Andrew Hankinson
Just thought I might plug some software we're developing to solve the book 
image navigation misery that Kyle mentions.

http://ddmal.music.mcgill.ca/diva/

and a demo:

http://ddmal.music.mcgill.ca/newdiva/demo/single.html

We developed it because we were frustrated with the image gallery paradigm 
for book image viewing, and wanted something more like Google Books' viewer, 
but with access to the highest resolution possible. We also were frustrated 
with having to download large PDFs to just view a couple pages.

Diva uses IIP on the back-end to serve out image tiles, so you're only ever 
downloading the part of the image that's viewable -- the rest is auto-loaded as 
the user scrolls. 

We've used it to display a manuscript that's ~80GB (total), with each image 
around 200MB.

http://coltrane.music.mcgill.ca/salzinnes/experiments/diva-cci-tif/

It's also got a couple other neat features, like in-browser 
brightness/contrast/rotation adjustments via canvas. (Click the little gear 
icon in the top left of each page image).

Cheers,
-Andrew

On 2013-11-08, at 4:22 PM, Kyle Banerjee kyle.baner...@gmail.com wrote:

 It is sad to me that converting to PDF for viewing off the Web seems like
 the answer. Isn’t there a tiling viewer (like Leaflet) that could be used
 to render jpeg derivatives of the original tif files in Omeka?
 
 
 This should be pretty easy. But the issue with tiling is that the nav
 process is miserable for all but the shortest books. Most of the people who
 want to download want are looking for jpegs rather than source tiffs and
 one pdf instead of a bunch of tiffs (which is good since each one is
 typically over 100MB). Of course there are people who want the real deal,
 but that's actually a much less common use case.
 
 As Karen observes, downloading and viewing serve different use cases so of
 course we will provide both. IIP Image Server looks intriguing. But most of
 our users who want the full res stuff really just want to download the
 source tiffs which will be made available.
 
 kyle


Re: [CODE4LIB] Loris

2013-11-08 Thread Andrew Hankinson
So what’s the difference between IIIF and IIP? (the protocol, not the server 
implementation)

-Andrew

On Nov 8, 2013, at 9:05 PM, Jon Stroop jstr...@princeton.edu wrote:

 It aims to do the same thing...serve big JP2s (and other images) over the 
 web, so from that perspective, yes. But, beyond that, time will tell. One 
  nice thing about coding against a well-thought-out spec is that there are lots of 
 implementations from which you can choose[1]--though as far as I know Loris 
 is the only one that supports the IIIF syntax natively (maybe IIP?). We still 
 have Djatoka floating around in a few places here, but, as many people have 
 noted over the years, it takes a lot of shimming to scale it up, and, as far 
 as I know, the project has more or less been abandoned.
 
 I haven't done too much in the way of benchmarking, but to date don't have 
 any reason to think Loris can't perform just as well. The demo I sent earlier 
  is working against a very large jp2 with small tiles[2], which means a lot of 
 rapid hits on the server, and between that, (a little bit of) JMeter and ab 
 testing, and a fair bit of concurrent use from the c4l community this 
 afternoon, I feel fairly confident about it being able to perform as well as 
 Djatoka in a production environment.
 
 By the way, you can page through some other images here: 
 http://libimages.princeton.edu/osd-demo/
 
 Not much of an answer, I realize, but, as I said, time and usage will tell.
 
 -Js
 
 1. http://iiif.io/apps-demos.html
 2. 
 http://libimages.princeton.edu/loris/pudl0052%2F6131707%2F0001.jp2/info.json
 
 
 On 11/8/13 8:07 PM, Peter Murray wrote:
 A clarifying question: is Loris effectively a Python-based replacement for 
 the Java-based djatoka [1] server?
 
 
 Peter
 
 [1] http://sourceforge.net/apps/mediawiki/djatoka/index.php?title=Main_Page
 
 
 On Nov 8, 2013, at 3:05 PM, Jon Stroop jstr...@princeton.edu wrote:
 
 c4l,
 I was reminded earlier this week at DLF (and a few minutes ago by Tom
 and Simeon) that I hadn't ever announced a project I've been working for
 the least year or so to this list. I showed an early version in a
 lightning talk at code4libcon last year.
 
 Meet Loris: https://github.com/pulibrary/loris
 
 Loris is a Python based image server that implements the IIIF Image API
 version 1.1 level 2[1].
 
 http://www-sul.stanford.edu/iiif/image-api/1.1/
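  For anyone new to the spec, IIIF Image API requests are just structured URLs of
  roughly the shape {base}/{identifier}/{region}/{size}/{rotation}/{quality}.{format};
  a tiny illustrative helper follows (the server and identifier below are placeholders):

  def iiif_url(base, identifier, region="full", size="full",
               rotation="0", quality="native", fmt="jpg"):
      # IIIF Image API 1.1 uses "native" for the default quality
      return "/".join([base.rstrip("/"), identifier, region, size, rotation,
                       quality + "." + fmt])

  # e.g. a 500px-wide JPEG of the whole image
  print(iiif_url("http://example.org/loris", "mybook%2F0001.jp2", size="500,"))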
 
 It can take JP2 (if you make Kakadu available to it), TIFF, or JPEG
 source images, and hand back JPEG, PNG, TIF, and GIF (why not...).
 
 Here's a demo of the server directly: http://goo.gl/8XEmjp
 
 And here's a sample of the server backing OpenSeadragon[2]:
 http://goo.gl/Gks6lR
 
 -Js
 
 1. http://www-sul.stanford.edu/iiif/image-api/1.1/
 2. http://openseadragon.github.io/
 
 -- 
 Jon Stroop
 Digital Initiatives Programmer/Analyst
 Princeton University Library
 jstr...@princeton.edu
 --
 Peter Murray
 Assistant Director, Technology Services Development
 LYRASIS
 peter.mur...@lyrasis.org
 +1 678-235-2955
 800.999.8558 x2955


Re: [CODE4LIB] Usability Person?

2013-10-31 Thread Andrew Darby
Thanks, everyone for your examples and suggestions.  Super helpful.

Andrew


On Thu, Oct 31, 2013 at 12:42 AM, Ranti Junus ranti.ju...@gmail.com wrote:

  Our library has a User Experience group. This is not a unit, but consists
  of 4 people, part of whose work is related to user experience. This group's
  main focus is primarily the online experience: website, catalog,
  e-resources, and accessibility. We did quite a number of usability tests,
  shared the results with the stakeholders, and recommended changes. The
  changes that we recommended on our web presence tend to be small. The idea
  is not to make one big, very noticeable change, but to make changes incremental
  so users won't get disoriented. Hence the frequent tests. For the
  accessibility part, I hired a blind student to assist me in assessing our web
  presence and e-resources.

 We just hired a dedicated user experience librarian whose work would also
 include customer service assessments and user space area.


 ranti.


 On Wed, Oct 30, 2013 at 11:24 AM, Andrew Darby darby.li...@gmail.com
 wrote:

  Hello, all.  This is perhaps a bit off-topic, but I was wondering how
 many
  of you have a dedicated usability person as part of your development
 team.
  Right now, we have a sort of ad hoc Usability Team, and I'd like to make
 a
  pitch for hiring someone who will have the time and inclination to manage
  this effort more effectively.
 
  Anything you'd care to share (on-list or off-) would be welcome.  I'm
  especially curious about whether or not this is a full-time
 responsibility
  for someone in your organization or if it's shared with another job
  function; if you find this position is working out well or you wish you'd
  spent the money on more robots instead; where this person resides in your
  org chart; what sort of qualifications you looked for when hiring; etc.
 
  Thanks,
 
  Andrew
 
  --
  Andrew Darby
  Head, Web & Emerging Technologies
  University of Miami Libraries
 



 --
 Bulk mail.  Postage paid.




-- 
Andrew Darby
Head, Web & Emerging Technologies
University of Miami Libraries


[CODE4LIB] Usability Person?

2013-10-30 Thread Andrew Darby
Hello, all.  This is perhaps a bit off-topic, but I was wondering how many
of you have a dedicated usability person as part of your development team.
Right now, we have a sort of ad hoc Usability Team, and I'd like to make a
pitch for hiring someone who will have the time and inclination to manage
this effort more effectively.

Anything you'd care to share (on-list or off-) would be welcome.  I'm
especially curious about whether or not this is a full-time responsibility
for someone in your organization or if it's shared with another job
function; if you find this position is working out well or you wish you'd
spent the money on more robots instead; where this person resides in your
org chart; what sort of qualifications you looked for when hiring; etc.

Thanks,

Andrew

-- 
Andrew Darby
Head, Web & Emerging Technologies
University of Miami Libraries


Re: [CODE4LIB] pdf2txt

2013-10-11 Thread Andrew Cunningham
You may want to consider how best to handle PDF files where the extracted
text contains ligatures or glyph IDs rather than the underlying characters.
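
For the ligature half of that problem, Unicode compatibility normalization
will recover the underlying characters; text extracted as raw glyph IDs has
no such easy fix. A quick Python illustration, not tied to any particular
extraction tool:

    import unicodedata

    extracted = "e\ufb00ective work\ufb02ow"   # U+FB00 (ff) and U+FB02 (fl) ligatures
    print(unicodedata.normalize("NFKC", extracted))
    # -> effective workflow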

A.
On 12/10/2013 4:58 AM, Eric Lease Morgan emor...@nd.edu wrote:

 On Oct 11, 2013, at 1:49 PM, Matthew Sherman matt.r.sher...@gmail.com
 wrote:

  For a limited period of time I am making publicly available a Web-based
  program called PDF2TXT -- http://bit.ly/1bJRyh8
 
  Very slick, good work.  I can see where this tool can be very helpful.
  It
  does have some issues with some characters, but this is rather common
 with
  most systems.

 Again, thank you for the support. Yes, there are some escaping issues to
 be resolved. Release early. Release often. I need help with the graphic
 design in general.

 Here's an enhancement I thought of:

   1. allow readers to authenticate
   2. allow readers to upload documents
   3. documents get saved in readers' cache
   4. allow interface to list documents in the cache
   5. provide text mining services against reader-selected documents
   6. go to Step #1

 It would also be cool if I could figure out how to finish the installation
 of Tesseract to enable OCRing. [1]

 [1] OCRing -
 http://serials.infomotions.com/code4lib/archive/2013/201303/1554.html
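
 (In case it helps: once Tesseract is installed, the simplest wiring is two
 shell calls -- rasterize the pages with pdftoppm from poppler-utils, then
 hand each image to tesseract. A rough sketch, assuming both tools are on
 the PATH; file names are illustrative.)

     import glob
     import subprocess

     # 1. Rasterize the PDF to 300-dpi PNGs: page-1.png, page-2.png, ...
     subprocess.check_call(["pdftoppm", "-png", "-r", "300", "input.pdf", "page"])

     # 2. OCR each page image; tesseract writes page-N.txt next to it.
     for image in sorted(glob.glob("page-*.png")):
         subprocess.check_call(["tesseract", image, image[:-4], "-l", "eng"])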

 --
 Eric Morgan



Re: [CODE4LIB] pdf2txt

2013-10-11 Thread Andrew Cunningham
Hi Mark,

I suspect the tool will only be able to handle select languages, and it is
very doubtful you could develop a tool to handle non-LCG (Latin/Cyrillic/Greek)
text.

For a fully internationalised tool, you would have to ignore all text
layers in a PDF and run all PDFs through OCR to generate text.

Then you'd need to apply very sophisticated word boundary identification
routines.
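
On the word-boundary point: for scripts that don't separate words with
spaces, the usual approach is a UAX #29 break iterator rather than splitting
on whitespace. A sketch using PyICU, assuming ICU is available (any UAX #29
implementation would do; the Thai sample string is just an illustration, and
the iteration pattern may need adjusting for your PyICU version):

    from icu import BreakIterator, Locale   # PyICU

    text = u"ภาษาไทยไม่มีช่องว่างระหว่างคำ"   # Thai: no spaces between words
    bi = BreakIterator.createWordInstance(Locale("th"))
    bi.setText(text)

    # Iterating the break iterator yields successive boundary offsets;
    # pair them up to slice the string into words.
    boundaries = [0] + list(bi)
    words = [text[s:e] for s, e in zip(boundaries, boundaries[1:])]
    print(words)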

A.
On 12/10/2013 9:40 AM, Mark Pernotto mark.perno...@gmail.com wrote:

 Very cool tool, thank you!

 Putting my devil's advocate hat on, it doesn't parse foreign documents well
 (I got it to break!).  I also got inconsistent results feeding it PDF files
 with tables embedded (but haven't been able to figure out what it is about
 them it doesn't like).

 Just out of curiosity, what encoding is being used?  I know nothing about
 Perl.  It seemed to have no problem parsing a dash (-) when it was up against
 another character (2007-2012), but it barfs when the dash stands alone
 (2007 – 2012). I'm only referring to 'extracted text' mode.
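
 If I had to guess -- and it is only a guess, since I haven't seen the code --
 the standalone dash in a range like that is usually an en dash, U+2013, and
 the failure looks like a UTF-8 byte sequence being read somewhere as a
 single-byte encoding. A tiny Python illustration of that mismatch:

     s = u"2007 \u2013 2012"            # en dash, U+2013
     raw = s.encode("utf-8")            # b'2007 \xe2\x80\x93 2012'

     print(raw.decode("utf-8"))         # round-trips cleanly: 2007 – 2012
     print(raw.decode("cp1252"))        # misread as cp1252:   2007 â€“ 2012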

 If it helps, I can send along *most* of my test PDF files used.

 Thank you!
 .m





 On Fri, Oct 11, 2013 at 10:58 AM, Eric Lease Morgan emor...@nd.edu
 wrote:

  On Oct 11, 2013, at 1:49 PM, Matthew Sherman matt.r.sher...@gmail.com
  wrote:
 
   For a limited period of time I am making publicly available a
 Web-based
   program called PDF2TXT -- http://bit.ly/1bJRyh8
  
   Very slick, good work.  I can see where this tool can be very helpful.
   It
   does have some issues with some characters, but this is rather common
  with
   most systems.
 
  Again, thank you for the support. Yes, there are some escaping issues to
  be resolved. Release early. Release often. I need help with the graphic
  design in general.
 
  Here's an enhancement I thought of:
 
1. allow readers to authenticate
2. allow readers to upload documents
3. documents get saved in readers' cache
4. allow interface to list documents in the cache
5. provide text mining services against reader-selected documents
6. go to Step #1
 
  It would also be cool if I could figure out how to finish the
 installation
  of Tesseract to enable OCRing. [1]
 
  [1] OCRing -
  http://serials.infomotions.com/code4lib/archive/2013/201303/1554.html
 
  --
  Eric Morgan
 



Re: [CODE4LIB] pdf2txt

2013-10-11 Thread Andrew Cunningham
Perl has its own encoding model: strings can be Unicode or a legacy
encoding, and Unicode is indicated by the presence of a flag on the
string. It's decided on a string-by-string basis.

If it is a legacy encoding, then it could be any legacy encoding.

If your data is truly multilingual, multiscript and in a variety of
encodings, it becomes a challenge to manage it in Perl.

In our own projects we found the existing Perl modules to be inadequate and
needed our own internal modules to handle encoding issues, especially when
you factor in the fact that some CPAN modules have the nasty habit of
stripping the Unicode flag from strings.

That said, Perl still has better Unicode support than most languages.

A.


[CODE4LIB] Online survey on Project Management Software Adoption in Libraries

2013-09-11 Thread Andrew Tweet
Last Call! This survey will close Friday, Sept. 13. Thanks for putting up
with our cross- and re-posting!



Dear Colleagues,

 

Please take the survey linked below to help us gather data on how libraries
manage their many projects. We want to know how libraries manage, keep
track of progress, and collaborate on projects. Survey results will show a
snapshot of project management techniques used, project management software
strengths and weaknesses, and what types of library projects are a good fit
for which project management software.




Please help us answer these questions by taking an online survey (estimated
10 minutes to complete). Findings will be reported at the Internet
Librarian 2013 and CARL 2014 conferences, with the potential for future
journal publications. Your responses will be anonymous, your participation
is voluntary, and there are no foreseen risks in volunteering for this
study.




To take the survey please click on this link
https://www.surveymonkey.com/s/WW86ZV3
 

 

In case you are still on the fence about taking our survey, let us define
what we mean by project management software and techniques. Project
Management is a set of techniques used heavily in business, construction,
and software development to describe and monitor work on large projects
that involve multiple people over a long period of time. The various
techniques help keep track of goals, tasks, deadlines, responsible
individuals, progress toward completion, budget, and many more factors that
contribute to project success.




Within the library, a project might be implementing a discovery service,
marketing a program to freshmen, renovating the building, redesigning the
website, or weeding the humanities section. We want to hear from
individuals who have contributed to projects in libraries. Please take our
survey so we can learn from your collective experience.




Thank you for your participation!




Margot Hanson, Instruction & Outreach Librarian, California Maritime Academy


Annis Lee Adams, E-Resources Librarian, Golden Gate University

Andrew Tweet, Librarian, William Jessup University

Kevin Pischke, Library Director, William Jessup University




If you have any questions about the survey please contact:

Margot Hanson: mhan...@csum.edu, 707-654-1091

or

California Maritime Academy Institutional Review Board

IRB # CMA-IRB2013-014 (Exempt status)


[CODE4LIB]

2013-09-04 Thread Andrew Tweet
Kari & Rosalyn,
We are grateful for your interest in our research. I hope you don't mind,
but I will be sharing my responses to your questions on the web4lib list.
Several others have been asking similar questions off-list or on other
lists so I think there is value in sharing these answers with everybody.

You asked why we did not include specific project management techniques
(waterfall, agile, etc.). We wanted the survey to assume little formal
knowledge of project management techniques, so we used the most general
terms we could. Respondents can get more specific by selecting "other" and
filling in a short answer.

You also asked whether we are interested in individuals' use of project
management software or institutional use. We're focused on what tools
institutions or groups use, not so much on individual librarians.

We will not be able to distinguish between the respondents'
libraries/institutions. To balance that, we will include case studies from
our own institutions in our conference presentation.

We are still gathering data so I don't have any results to share yet. We
will share our results with all the lists to which we sent the survey. It
will have to be after the Internet Librarian Conference, so look for the
results in November.

We included a number of software programs in the survey, but it is by no
means an exhaustive list. If your library tested another program (including
the many open-source/GPL products), please select the "other" category and
specify the name of the product.

Best regards,
Andrew


[CODE4LIB] Online survey on Project Management Software Adoption

2013-09-03 Thread Andrew Tweet
Dear Colleagues,

 

Please take the survey linked below to help us gather data on how libraries
manage their many projects. We want to know how libraries manage, keep
track of progress, and collaborate on projects. Survey results will show a
snapshot of project management techniques used, project management software
strengths and weaknesses, and what types of library projects are a good fit
for which project management software.




Please help us answer these questions by taking an online survey (estimated
10 minutes to complete). Findings will be reported at the Internet
Librarian 2013 and CARL 2014 conferences, with the potential for future
journal publications. Your responses will be anonymous, your participation
is voluntary, and there are no foreseen risks in volunteering for this
study.




To take the survey please click on this link
https://www.surveymonkey.com/s/WW86ZV3
 

 

In case you are still on the fence about taking our survey, let us define
what we mean by project management software and techniques. Project
Management is a set of techniques used heavily in business, construction,
and software development to describe and monitor work on large projects
that involve multiple people over a long period of time. The various
techniques help keep track of goals, tasks, deadlines, responsible
individuals, progress toward completion, budget, and many more factors that
contribute to project success.




Within the library, a project might be implementing a discovery service,
marketing a program to freshmen, renovating the building, redesigning the
website, or weeding the humanities section. We want to hear from
individuals who have contributed to projects in libraries. Please take our
survey so we can learn from your collective experience.




Thank you for your participation!




Margot Hanson, Instruction & Outreach Librarian, California Maritime Academy


Annis Lee Adams, E-Resources Librarian, Golden Gate University

Andrew Tweet, Librarian, William Jessup University

Kevin Pischke, Library Director, William Jessup University




If you have any questions about the survey please contact:

Margot Hanson: mhan...@csum.edu, 707-654-1091

or

California Maritime Academy Institutional Review Board

IRB # CMA-IRB2013-014 (Exempt status)


[CODE4LIB] Online survey on Project Management Software Adoption in Libraries

2013-08-23 Thread Andrew Tweet
Dear Colleagues,

 

Please take the survey linked below to help us gather data on how libraries
manage their many projects. We want to know how libraries manage, keep
track of progress, and collaborate on projects. Survey results will show a
snapshot of project management techniques used, project management software
strengths and weaknesses, and what types of library projects are a good fit
for which project management software.




Please help us answer these questions by taking an online survey (estimated
10 minutes to complete). Findings will be reported at the Internet
Librarian 2013 and CARL 2014 conferences, with the potential for future
journal publications. Your responses will be anonymous, your participation
is voluntary, and there are no foreseen risks in volunteering for this
study.




To take the survey please click on this link
https://www.surveymonkey.com/s/WW86ZV3
 

 

In case you are still on the fence about taking our survey, let us define
what we mean by project management software and techniques. Project
Management is a set of techniques used heavily in business, construction,
and software development to describe and monitor work on large projects
that involve multiple people over a long period of time. The various
techniques help keep track of goals, tasks, deadlines, responsible
individuals, progress toward completion, budget, and many more factors that
contribute to project success.




Within the library, a project might be implementing a discovery service,
marketing a program to freshmen, renovating the building, redesigning the
website, or weeding the humanities section. We want to hear from
individuals who have contributed to projects in libraries. Please take our
survey so we can learn from your collective experience.




Thank you for your participation!




Margot Hanson, Instruction & Outreach Librarian, California Maritime Academy


Annis Lee Adams, E-Resources Librarian, Golden Gate University

Andrew Tweet, Librarian, William Jessup University

Kevin Pischke, Library Director, William Jessup University




If you have any questions about the survey please contact:

Margot Hanson: mhan...@csum.edu, 707-654-1091

or

California Maritime Academy Institutional Review Board

IRB # CMA-IRB2013-014 (Exempt status)

