[CODE4LIB] Local search index with Alma and Primo

2015-09-17 Thread Jason Stirnaman
Has anyone used the Alma and Primo APIs in combination with a local index of the
same collection? For example, indexing bib records in Solr (+ Blacklight,
VuFind, RYO) and fetching additional info from the APIs. If so, are you
happy? Has it been an improvement over using only the APIs or over Primo's
UI?
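
For reference, here is a minimal sketch of the API half of such a setup: fetching a single bib record from the Alma Bibs REST API to supplement a locally indexed Solr document. The regional base URL, MMS ID, query parameters, and API-key handling are assumptions rather than a tested recipe:

  require 'net/http'
  require 'json'
  require 'uri'

  # Assumed Alma Bibs API pattern; adjust the regional base URL and supply a real key.
  ALMA_BASE = 'https://api-na.hosted.exlibrisgroup.com/almaws/v1'
  API_KEY   = ENV['ALMA_API_KEY']

  def fetch_bib(mms_id)
    uri = URI("#{ALMA_BASE}/bibs/#{mms_id}?apikey=#{API_KEY}&format=json")
    JSON.parse(Net::HTTP.get(uri))
  end

  # e.g. merge live Alma data into a Solr document keyed by MMS ID
  bib = fetch_bib('991234567890123')   # hypothetical MMS ID
  puts bib['title']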

Apologies if this is a rehash, but I've only found this 2013 thread on the
same topic:
http://serials.infomotions.com/code4lib/archive/2013/201312/4587.html

Contact me off list if you prefer.

Thanks,
Jason

Jason Stirnaman
Galter Health Sciences Library, Northwestern University


Re: [CODE4LIB] API to retrieve scholarly publications by author

2015-05-25 Thread Jason Stirnaman
Thanks for catching that. I haven't done much testing yet. Send a pull request?
Jason

On May 24, 2015 10:59 PM, Wataru Ono ono.wataru.p...@gmail.com wrote:
Hi,
Thank you for your kind advice:
using jQuery with the ORCID Public API to fetch publications for specific IDs:
https://github.com/jstirnaman/Orcid-Profiles-jQuery-Widget

If ['work-external-identifiers'] is null, it throws an error!
Line 81:  var extids =
value['work-external-identifiers']['work-external-identifier'];
_
 小野 亘 (Ono, Wataru)
 E-Mail: ono.wataru.p...@gmail.com
 (work: onow...@u-gakugei.ac.jp)
   @wonox  http://orcid.org/-0002-6398-9317
 Tokyo Gakugei University Library (Academic Information Section, Education and Research Support Department)
 Tel: 042-329-7217 Fax: 042-323-5994


2015-05-20 23:16 GMT+09:00 Jason Stirnaman jstirna...@kumc.edu:
 Hi again.
 Here are some examples implementing the ORCID API:

 using jQuery with the ORCID Public API to fetch publications for specific IDs:
 https://github.com/jstirnaman/Orcid-Profiles-jQuery-Widget

 a Ruby client for Public and Member:
 https://github.com/jstirnaman/BibApp/blob/kumc/lib/orcid_client.rb

 Jason

 Jason Stirnaman, MLS
 Application Development, Library and Information Services, IR
 University of Kansas Medical Center
 jstirna...@kumc.edu
 913-588-7319

 On May 20, 2015, at 9:06 AM, Jason Stirnaman jstirna...@kumc.edu wrote:

 Hi, Alex.
 re: ORCID, available author info  depends on what Bio information the ID 
 owner makes publicly visible. See the READ section at 
 https://members.orcid.org/api/api-calls

 I was about to send some old Ruby code for searching NLM Eutils (PubMed) 
 until I saw your last message.

 If you want to manage a local bibliography, then try Zotero and their API: 
 https://www.zotero.org/support/dev/web_api/v3/start


 Jason

 Jason Stirnaman, MLS
 Application Development, Library and Information Services, IR
 University of Kansas Medical Center
 jstirna...@kumc.edu
 913-588-7319

 On May 20, 2015, at 5:59 AM, Alex Armstrong alehand...@gmail.com wrote:

 Hi list,

 What are some good API options for retrieving a list of scholarly 
 publications by author?

 I would like to be able to grab them and display them on a website along 
 with other information about each author.

 Google Scholar does not have a public API as far as I can tell.

 CrossRef metadata search does not allow searching by author.

 Orcid seems promising. I would have to ask the users I have in mind to add 
 or import their publications to Orcid, as most of them are not on there 
 already. That's doable, but I'm not sure if I'll be able to do what I 
 described above with their public (as opposed to their member) API.

 Any other ideas or thoughts?

 Best,
 Alex Armstrong



Re: [CODE4LIB] API to retrieve scholarly publications by author

2015-05-20 Thread Jason Stirnaman
Hi, Alex. 
re: ORCID, available author info  depends on what Bio information the ID owner 
makes publicly visible. See the READ section at 
https://members.orcid.org/api/api-calls

I was about to send some old Ruby code for searching NLM Eutils (PubMed) until 
I saw your last message.

If you want to manage a local bibliography, then try Zotero and their API: 
https://www.zotero.org/support/dev/web_api/v3/start


Jason

Jason Stirnaman, MLS
Application Development, Library and Information Services, IR
University of Kansas Medical Center
jstirna...@kumc.edu
913-588-7319

On May 20, 2015, at 5:59 AM, Alex Armstrong alehand...@gmail.com wrote:

 Hi list,
 
 What are some good API options for retrieving a list of scholarly 
 publications by author?
 
 I would like to be able to grab them and display them on a website along with 
 other information about each author.
 
 Google Scholar does not have a public API as far as I can tell.
 
 CrossRef metadata search does not allow searching by author.
 
 Orcid seems promising. I would have to ask the users I have in mind to add or 
 import their publications to Orcid, as most of them are not on there already. 
 That's doable, but I'm not sure if I'll be able to do what I described above 
 with their public (as opposed to their member) API.
 
 Any other ideas or thoughts?
 
 Best,
 Alex Armstrong


Re: [CODE4LIB] API to retrieve scholarly publications by author

2015-05-20 Thread Jason Stirnaman
Hi again.
Here are some examples implementing the ORCID API:

using jQuery with the ORCID Public API to fetch publications for specific IDs:
https://github.com/jstirnaman/Orcid-Profiles-jQuery-Widget

a Ruby client for Public and Member:
https://github.com/jstirnaman/BibApp/blob/kumc/lib/orcid_client.rb
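
For anyone evaluating these, a bare-bones Ruby sketch of the same idea, pulling the works for a single iD from the ORCID Public API; the endpoint version and response keys are assumptions (they have changed across API versions), so check the current docs:

  require 'net/http'
  require 'json'
  require 'uri'

  # Assumed public-API endpoint pattern; public data needs no authentication.
  def orcid_works(orcid_id)
    uri = URI("https://pub.orcid.org/v2.0/#{orcid_id}/works")
    req = Net::HTTP::Get.new(uri, 'Accept' => 'application/json')
    res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
    JSON.parse(res.body)
  end

  works = orcid_works('0000-0002-1825-0097')     # ORCID's well-known example iD
  puts works['group'].length if works['group']   # 'group' key assumed from the v2.0 schema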

Jason

Jason Stirnaman, MLS
Application Development, Library and Information Services, IR
University of Kansas Medical Center
jstirna...@kumc.edu
913-588-7319

On May 20, 2015, at 9:06 AM, Jason Stirnaman jstirna...@kumc.edu wrote:

 Hi, Alex. 
 re: ORCID, available author info  depends on what Bio information the ID 
 owner makes publicly visible. See the READ section at 
 https://members.orcid.org/api/api-calls
 
 I was about to send some old Ruby code for searching NLM Eutils (PubMed) 
 until I saw your last message.
 
 If you want to manage a local bibliography, then try Zotero and their API: 
 https://www.zotero.org/support/dev/web_api/v3/start
 
 
 Jason
 
 Jason Stirnaman, MLS
 Application Development, Library and Information Services, IR
 University of Kansas Medical Center
 jstirna...@kumc.edu
 913-588-7319
 
 On May 20, 2015, at 5:59 AM, Alex Armstrong alehand...@gmail.com wrote:
 
 Hi list,
 
 What are some good API options for retrieving a list of scholarly 
 publications by author?
 
 I would like to be able to grab them and display them on a website along 
 with other information about each author.
 
 Google Scholar does not have a public API as far as I can tell.
 
 CrossRef metadata search does not allow searching by author.
 
 Orcid seems promising. I would have to ask the users I have in mind to add 
 or import their publications to Orcid, as most of them are not on there 
 already. That's doable, but I'm not sure if I'll be able to do what I 
 described above with their public (as opposed to their member) API.
 
 Any other ideas or thoughts?
 
 Best,
 Alex Armstrong
 


Re: [CODE4LIB] Anyone analyzed SirsiDynix Symphony transaction logs?

2015-03-22 Thread Jason Stirnaman
Francis,

I was able to use Logstash's existing patterns for what I needed.

Depending on how you configure the logging, the format can be identical to 
Apache's.

I may have some custom expressions for query params, but you can also do a lot 
with ES' dynamic fields, which will keep the index smaller.

I have the template on Github, but I'm not sure it's the latest. I'll check and 
post the link.
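
In the meantime, here is a generic illustration (not the template referred to above) of pushing an index template with a dynamic mapping rule to an Elasticsearch 1.x-era node; the index pattern, field rule, and host are placeholders:

  require 'net/http'
  require 'json'
  require 'uri'

  # Placeholder template: map dynamically added string fields as not_analyzed
  # so they can be aggregated without growing the analyzed portion of the index.
  template = {
    'template' => 'ezproxy-*',
    'mappings' => {
      '_default_' => {
        'dynamic_templates' => [
          { 'strings_not_analyzed' => {
              'match'              => '*',
              'match_mapping_type' => 'string',
              'mapping'            => { 'type' => 'string', 'index' => 'not_analyzed' } } }
        ]
      }
    }
  }

  uri = URI('http://localhost:9200/_template/ezproxy')
  req = Net::HTTP::Put.new(uri, 'Content-Type' => 'application/json')
  req.body = template.to_json
  puts Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }.body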



Jason

-- Original message --
From: Francis Kayiwa
Date: 03/21/2015 8:53 AM
To: CODE4LIB@LISTSERV.ND.EDU;
Subject:Re: [CODE4LIB] Anyone analyzed SirsiDynix Symphony transaction logs?

On 3/19/15 3:53 PM, Jason Stirnaman wrote:
 I've been using the ELK (elastic + logstash(1) + kibana)(2) stack for EZProxy 
 log analysis.
 Yes, the index can grow really fast with log data, so I have to be selective 
 about what I store. I'm not familiar with the Symphony log format, but 
 Logstash has filters to handle just about any data that you want to parse, 
 including multiline. Maybe for some log entries, you don't need to store the 
 full entry at all but only a few bits or a single tag?

 And because it's Ruby underneath, you can filter using custom Ruby. I use 
 that to do LDAP lookups on user names so we can get department and user-type 
 stats.

Hey Jason,

Did you have to create customized grok filters for EZProxy logs format?
It has been something on my mind and if you've done the work... ;-)

Cheers,

./fxk

--
Your analyst has you mixed up with another patient.  Don't believe a
thing he tells you.


Re: [CODE4LIB] Anyone analyzed SirsiDynix Symphony transaction logs?

2015-03-19 Thread Jason Stirnaman
I've been using the ELK (elastic + logstash(1) + kibana)(2) stack for EZProxy 
log analysis.
Yes, the index can grow really fast with log data, so I have to be selective 
about what I store. I'm not familiar with the Symphony log format, but Logstash 
has filters to handle just about any data that you want to parse, including 
multiline. Maybe for some log entries, you don't need to store the full entry 
at all but only a few bits or a single tag?

And because it's Ruby underneath, you can filter using custom Ruby. I use that 
to do LDAP lookups on user names so we can get department and user-type stats.

1. http://logstash.net/
2. https://www.elastic.co/downloads
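
To illustrate the LDAP-lookup idea (not our exact filter; the gem, host, base DN, and attribute names below are placeholders), something along these lines can be dropped into a Logstash ruby filter:

  require 'net/ldap'   # net-ldap gem

  # Look up department and user type for a username pulled from a log event.
  def ldap_attrs(username)
    ldap   = Net::LDAP.new(host: 'ldap.example.edu', port: 389)
    filter = Net::LDAP::Filter.eq('uid', username)
    entry  = (ldap.search(base: 'ou=people,dc=example,dc=edu',
                          filter: filter,
                          attributes: %w[ou employeeType]) || []).first
    return {} unless entry
    { 'department' => entry['ou'].first, 'user_type' => entry['employeeType'].first }
  end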


Jason

Jason Stirnaman, MLS
Application Development, Library and Information Services, IR
University of Kansas Medical Center
jstirna...@kumc.edu
913-588-7319

On Mar 19, 2015, at 2:15 PM, Cary Gordon listu...@chillco.com wrote:

 Has anyone considered using a NoSQL database to store their logs? With enough 
 memory, Redis might be interesting, and it would be fast.
 
 The concept of "too experimental to post to Github" boggles the mind.
 
 Cary
 
 
 On Mar 19, 2015, at 9:38 AM, Andrew Nisbet anis...@epl.ca wrote:
 
 Hi Bill,
 
 I have been doing some work with Symphony logs using Elasticsearch. It is 
 simple to install and use, though I recommend Elasticsearch: The Definitive 
 Guide (http://shop.oreilly.com/product/0636920028505.do). The main problem 
 is the size of the history logs, ours being on the order of 5,000,000 lines 
 per month. 
 
 Originally I used a simple python script to load each record. The script 
 broke down each line into the command code, then all the data codes, then 
 loaded them using curl. This failed initially because Symphony writes 
 extended characters to title fields. I then ported the script to python 3.3 
 which was not difficult, and everything loaded fine -- but took more than a 
 to finish a month's worth of data. I am now experimenting with Bulk 
 (http://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html)
  to improve performance.
 
 I would certainly be willing to share what I have written if you would like. 
 The code is too experimental to post to Github however.
 
 Edmonton Public Library
 Andrew Nisbet
 ILS Administrator
 
 T: 780.496.4058   F: 780.496.8317
 
 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of 
 William Denton
 Sent: March-18-15 3:55 PM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: [CODE4LIB] Anyone analyzed SirsiDynix Symphony transaction logs?
 
 I'm going to analyze a whack of transaction logs from our Symphony ILS so 
 that we can dig into collection usage.  Any of you out there done this?  
 Because the system is so closed and proprietary I understand it's not easy 
 (perhaps
 impossible?) to share code (publicly?), but if you've dug into it I'd be 
 curious to know, not just about how you parsed the logs but then what you 
 did with it, whether you loaded bits of data into a database, etc.
 
 Looking around, I see a few examples of people using the system's API, but 
 that's it.
 
 Bill
 --
 William Denton ↔  Toronto, Canada ↔  https://www.miskatonic.org/


[CODE4LIB] Assignment planner-calculator use

2015-01-23 Thread Jason Stirnaman
One of our librarians came across K-State's Assignment Planner 
http://www.lib.k-state.edu/apps/ap/
which is based on Minnesota/Minitex's 
http://sourceforge.net/projects/research-calc/
We're curious to hear:
  1. some anecdotes as to how much use this kind of service gets and
  2. if there are worthy alternatives (free or fee)?

Contact me directly if you prefer.

Thanks,
Jason

Jason Stirnaman, MLS
Application Development, Library and Information Services, IR
University of Kansas Medical Center
jstirna...@kumc.edu
913-588-7319


Re: [CODE4LIB] Past Conference T-Shirts?

2014-11-06 Thread Jason Stirnaman
Riley,
Could you fix the spelling of "More then just books" in the store? It should be
"More than just books".

Thanks,
Jason

Jason Stirnaman
Lead, Library Technology Services
University of Kansas Medical Center
jstirna...@kumc.edu
913-588-7319

On Nov 6, 2014, at 1:04 PM, Riley Childs rchi...@cucawarriors.com wrote:

 Yes, but I have been unsuccessful thus far in getting a vector file/high res 
 transparent image.
 If you have one and can send please do so and I will put it up on the 
 code4lib store (code4lib.spreadshirt.com).
 Thanks
 //Riley
 
 
 --
 Riley Childs  
 Senior
 Charlotte United Christian Academy
 IT Services Administrator
 Library Services Administrator
 https://rileychilds.net
 cell: +1 (704) 497-2086
 office: +1 (704) 537-0331x101
 twitter: @rowdychildren
 Checkout our new Online Library Catalog: https://catalog.cucawarriors.com
 
 Proudly sent in plain text 
 
 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of 
 Goben, Abigail
 Sent: Thursday, November 06, 2014 1:10 PM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: [CODE4LIB] Past Conference T-Shirts?
 
 My Metadata t-shirt, from C4L 2013, has been getting some interest/requests 
 of where others can purchase. I thought we'd talked about that here.  Was 
 there a store ever finally set up that I could refer people to?
 
 Abigail
 
 --
 Abigail Goben, MLS
 Assistant Information Services Librarian and Assistant Professor Library of 
 the Health Sciences University of Illinois at Chicago
 1750 W. Polk (MC 763)
 Chicago, IL 60612
 ago...@uic.edu


Re: [CODE4LIB] Stack Overflow

2014-11-04 Thread Jason Stirnaman
I think those are the reasons Jeff Atwood of Stack Exchange started Discourse 
(https://github.com/discourse/discourse | http://www.discourse.org/faq/). Of 
course, it comes with hosting and/or maintenance overhead.
The listserv is archaic and easy and backed by inertia.
I think the only way you're going to know is to scratch your itch, experiment, 
and see if anyone else joins in.

Jason

Jason Stirnaman
Lead, Library Technology Services
University of Kansas Medical Center
jstirna...@kumc.edu
913-588-7319

On Nov 4, 2014, at 9:34 AM, Schulkins, Joe joseph.schulk...@liverpool.ac.uk 
wrote:

 To be honest I absolutely hate the whole reputation and badge system for 
 exactly the reasons you outline, but I can't deny that I do find the family 
 of Stack Exchange sites extremely useful and by comparison Listservs just 
 seem very archaic to me as it's all too easy for a question (and/or its 
 answer) to drop through the cracks of a popular discussion. Are Listservs 
 really the best way to deal with help? I would even prefer a Drupal site...   
 
 
 Joseph Schulkins| Systems Librarian| University of Liverpool Library| PO Box 
 123 | Liverpool L69 3DA | joseph.schulk...@liverpool.ac.uk| T 0151 794 3844 
 
 Follow us: @LivUniLibrary Like us: LivUniLibrary Visit us: 
 http://www.liv.ac.uk/library 
 Special Collections & Archives blog: http://manuscriptsandmore.liv.ac.uk
 
 
 
 
 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of 
 Joshua Welker
 Sent: 04 November 2014 14:43
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] Stack Overflow
 
 The concept of a library technology Stack Exchange site as a google-able 
 repository of information sounds great. However, I do have quite a few 
 reservations.
 
 1. Stack Exchange sites seem to naturally lead to gatekeeping, snobbishness, 
 and other troll behaviors. The reputation system built into those sites 
 really goes to a lot of folks' heads. High-ranking users seem to take pleasure
 in shutting down questions as off-topic, redundant, etc.
 Argument and one-upmanship are actively promoted -- "The previous answer sucks.
 Here's my better answer!" This tends to attract certain (often
 male) personalities and to repel certain (often female) personalities.
 This seems very contrary to the direction the Code4Lib community has tried to 
 move in the last few years of being more inclusive and inviting to women 
 instead of just promoting the stereotypical IT guy qualities that dominate 
 most IT-related discussions on the Internet. More here:
 
 http://www.banane.com/2012/06/20/there-are-no-women-on-stackoverflow-or-ar
 e-there/
 http://michael.richter.name/blogs/why-i-no-longer-contribute-to-stackoverf
 low
 
 2. Having a Stack Exchange site might fragment the already quite small and 
 nascent library technology community. This might be an unfounded worry, but 
 it's worth consideration. A lot of QA takes place on this listserv, and it 
 would be awkward to try to have all this information in both places. That 
 said, searching StackExchange is much easier than searching a listserv.
 
 3. I echo your concerns about vendors. Libraries have a culture of protecting 
 vendors from criticism. Sure, we do lots of criticism behind closed doors, 
 but nowhere that leaves an online footprint. Often, our contracts include a 
 clause that we have to keep certain kinds of information private. I don't 
 think this is a very positive aspect of librarian culture, but it is there.
 
 I think a year or two ago that there was a pretty long discussion on this 
 listserv about creating a Stack Exchange site.
 
 Josh Welker
 
 
 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of 
 Schulkins, Joe
 Sent: Tuesday, November 04, 2014 8:12 AM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: [CODE4LIB] Stack Overflow
 
 Presumably I'm not alone in this, but I find Stack Overflow a valuable 
 resource for various bits of web development and I was wondering whether 
 anyone has given any thought about proposing a Library Technology site to 
 Stack Exchange's Area 51 (http://area51.stackexchange.com/)? Doing a search 
 of the proposals shows there was one for 'Libraries and Information Science' 
 but this closed 2 years ago as it didn't reach the required levels during the 
 beta phase.
 
 The reason I think this might be useful is that instead of individual places 
 to go for help or raise questions (i.e. various mailing lists) there could be 
 a 'one-stop' shop approach from which we could get help with LMSs, discovery 
 layers, repository software etc. I appreciate though that certain vendors 
 aren't particularly open (yes, Innovative I'm looking at you here) and might 
 not like these things being discussed on an open forum.
 
 Does anybody else think this might be useful? Would such a forum be shot down 
 by all the vendors' legalese wrapped up in their Terms and Conditions?
 Or are you happy with the way

Re: [CODE4LIB] ruby-marc: how to sort fields after append?

2014-09-12 Thread Jason Stirnaman
Thanks, Steve! Thought I had tried that, but it's exactly what I was looking 
for.
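
Applied to the loop from the original question (quoted below), the in-place sort makes the copy-to-a-new-record step unnecessary. A sketch, with reader, writer, and add_control_number assumed to be defined as in that script:

  reader.each do |record|
    newrecord = add_control_number(record)
    newrecord.fields.sort_by! { |f| f.tag }   # re-sequence fields in place
    writer.write(newrecord)
  end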

Jason

Jason Stirnaman
Lead, Library Technology Services
University of Kansas Medical Center
jstirna...@kumc.edu
913-588-7319

On Sep 12, 2014, at 8:06 AM, Steve Meyer steve.e.me...@gmail.com wrote:

 Since the fields property of a MARC::Record is a MARC::FieldMap, which is a
 subclass of Array, I use the Array.sort_by! method:
 
 record.fields.sort_by! {|f| f.tag}
 
 
 
 On Fri, Sep 12, 2014 at 12:28 AM, Jason Stirnaman jstirna...@kumc.edu
 wrote:
 
 Ruby-marc sages,
 What's the best way to re-sequence fields in a record after appending to
 it?  This seems to work ok, but feels wrong.
 
 
  for record in reader
    # Return a record with new field appended.
    newrecord = add_control_number(record)

    ### Re-sort fields by tag and copy them to a new record. ###
    sortedrecord = MARC::Record.new
    sortedrecord.leader = newrecord.leader
    newrecord.sort_by { |f| f.tag }.each { |field| sortedrecord.append(field) }

    writer.write(sortedrecord)
  end
 
 
 Thanks,
 Jason
 
 Jason Stirnaman
 Lead, Library Technology Services
 University of Kansas Medical Center
 jstirna...@kumc.edu
 913-588-7319
 


Re: [CODE4LIB] ruby-marc: how to sort fields after append?

2014-09-12 Thread Jason Stirnaman
Thanks, Terry, Kyle, et al. To Terry's point, I was too lazy to review the 
rules for sorting fields, but hoping someone wiser would chime in. Yeah, I'm 
going to keep sorting indiscriminately until I see a problem or someone 
complains. 

In my example it's an 035. I considered not re-sorting at all, but it just 
looks so wrong, even if I am busting any field order magic in the process. 

Jason

Jason Stirnaman
Lead, Library Technology Services
University of Kansas Medical Center
jstirna...@kumc.edu
913-588-7319

On Sep 12, 2014, at 12:11 PM, Terry Reese ree...@gmail.com wrote:

 I was so hoping someone would bring up position of MARC fields.  Everything 
 Kyle says is true -- and I would follow that up by saying, no one will care, 
 even most catalogers.  In fact, I wouldn't even resort the data to begin with 
 -- outside of aesthetics, the sooner we can get away from prescribing some 
 kind of magical meaning to field order (have you ever read the book on 
 determining 5xx field order, I have -- it's depressing; again, who but a 
 cataloger would know) we'll all be better off.  :)
 
 --tr
 
 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Kyle 
 Banerjee
 Sent: Friday, September 12, 2014 12:44 PM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] ruby-marc: how to sort fields after append?
 
 On Fri, Sep 12, 2014 at 9:20 AM, Galen Charlton g...@esilibrary.com wrote:
 
 ...
 One caveat though -- at least in MARC21, re-sorting a MARC record 
 strictly by tag number can be incorrect for certain fields...
 
 
 This is absolutely true. In addition to the fields you mention, 4XX, 7XX, and 
 8XX are also not necessarily in numerical order even if most records contain 
 them this way.  There is no way to programmatically determine the correct
 sort. While this may sound totally cosmetic, it sometimes has use 
 implications. Depending on how the sort mechanism works, you could 
 conceivably reorder fields with the same number in the wrong order.
 
 The original question was how to resort a MARC record after appending a field 
 which appears to be a control number. I would think it preferable to iterate 
 through the fields and place it in the correct position (I'm assuming it's 
 not an 001) rather than append and sort.
 
 However, record quality is such a mixed bag nowadays and getting much worse 
 that tag order is the least of the corruption issues. Besides, most displays 
 normalize fields so heavily that these type of distinctions simply aren't 
 supported anymore.
 
 kyle


[CODE4LIB] ruby-marc: how to sort fields after append?

2014-09-11 Thread Jason Stirnaman
Ruby-marc sages,
What's the best way to re-sequence fields in a record after appending to it?  
This seems to work ok, but feels wrong.


   for record in reader
     # Return a record with new field appended.
     newrecord = add_control_number(record)

     ### Re-sort fields by tag and copy them to a new record. ###
     sortedrecord = MARC::Record.new
     sortedrecord.leader = newrecord.leader
     newrecord.sort_by { |f| f.tag }.each { |field| sortedrecord.append(field) }

     writer.write(sortedrecord)
   end


Thanks,
Jason

Jason Stirnaman
Lead, Library Technology Services
University of Kansas Medical Center
jstirna...@kumc.edu
913-588-7319


[CODE4LIB] OpenRefine survey results: Librarians are largest percentage of OR user base

2014-09-02 Thread Jason Stirnaman
Martin Magdinier just published the 2014 OpenRefine user survey results: 
http://openrefine.org/2014/08/29/2014-survey-results.html

 One in four OpenRefine users identified themselves as librarians, making this
 group the largest of the OpenRefine user base. Researchers and Open Data
 enthusiasts represent the next two largest groups, each representing over
 15% of the user base. Finally, Data journalists and Semantic web users each
 represent around 10%.

The OpenRefine team is investigating how to better engage the user community, 
encourage problem reports, collect feature requests, etc. I know they'd 
appreciate any feedback. 
They're also wanting to validate possible business models, e.g. OR in the 
cloud. 

Jason

Jason Stirnaman
Lead, Library Technology Services
University of Kansas Medical Center
jstirna...@kumc.edu
913-588-7319


Re: [CODE4LIB] Anybody know a way to add a MARC tag on-mass to a file of MARC records

2014-08-28 Thread Jason Stirnaman
Ray,
There are also good MARC libraries for Ruby, Python, Perl...
Here's an example implementing ruby-marc within a pipeline-able command-line 
script:
https://github.com/jstirnaman/ebook_bibs
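
For the specific 918 $a DELETE case in this thread, the core of such a script is only a few lines of ruby-marc; the file names here are placeholders:

  require 'marc'

  # Append 918 $a DELETE to every record in a file of binary MARC records.
  reader = MARC::Reader.new('records.mrc')
  writer = MARC::Writer.new('records-with-918.mrc')

  reader.each do |record|
    record.append(MARC::DataField.new('918', ' ', ' ', ['a', 'DELETE']))
    writer.write(record)
  end
  writer.close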

Jason

Jason Stirnaman
Lead, Library Technology Services
University of Kansas Medical Center
jstirna...@kumc.edu
913-588-7319

On Aug 28, 2014, at 1:39 PM, Schwartz, Raymond schwart...@wpunj.edu wrote:

 But how can you automate that in a script?  /Ray
 
 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Blake 
 Galbreath
 Sent: Thursday, August 28, 2014 2:37 PM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] Anybody know a way to add a MARC tag on-mass to a 
 file of MARC records
 
 Yep, then you just go MarcEditor > Tools > Add/Delete Field (F7).
 
 
 On Thu, Aug 28, 2014 at 11:33 AM, Jane Costanza jcost...@trinity.edu
 wrote:
 
 MarcEdit is a free MARC editing utility.
 
 http://marcedit.reeset.net/
 
 Jane Costanza
 Associate Professor/Head of Discovery Services Trinity University San 
 Antonio, Texas
 210-999-7612
 jcost...@trinity.edu
 http://digitalcommons.trinity.edu/
 http://lib.trinity.edu/
 
 
 On Thu, Aug 28, 2014 at 1:26 PM, Schwartz, Raymond 
 schwart...@wpunj.edu
 wrote:
 
 Anybody know a way to add a MARC tag on-mass to a file of MARC records.
 I
 need to add the tag 918 $a with the contents DELETE to each of the 
 records.
 
 Thanks in advance. /Ray
 
 Ray Schwartz
 Systems Specialist Librarian
 schwart...@wpunj.edu
 blocked::mailto:schwart...@wpunj.edu
 David and Lorraine Cheng Library   Tel: +1 973 720-3192
 William Paterson University Fax: +1 973 720-2585
 300 Pompton RoadMobile: +1 201
 424-4491
 Wayne, NJ 07470-2103 USA
 http://nova.wpunj.edu/schwartzr2/
 http://euphrates.wpunj.edu/faculty/schwartzr2/
 
 
 
 
 
 --
 Blake L. Galbreath
 Systems Librarian
 Eastern Oregon University
 One University Boulevard
 La Grande, OR 97850
 (541) 962.3017
 bgalbre...@eou.edu


Re: [CODE4LIB] Anybody know a way to add a MARC tag on-mass to a file of MARC records

2014-08-28 Thread Jason Stirnaman
MARCEdit can also be invoked on the command-line: 
http://blog.reeset.net/archives/1078

Jason

Jason Stirnaman
Lead, Library Technology Services
University of Kansas Medical Center
jstirna...@kumc.edu
913-588-7319

On Aug 28, 2014, at 1:44 PM, John Mignault jmigna...@metro.org wrote:

 You don't need to write a script. The MARCEditor will add the field to
 every record in the file. Take a look. --j
 
 
 On Thu, Aug 28, 2014 at 2:39 PM, Schwartz, Raymond schwart...@wpunj.edu
 wrote:
 
 But how can you automate that in a script?  /Ray
 
 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of
 Blake Galbreath
 Sent: Thursday, August 28, 2014 2:37 PM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] Anybody know a way to add a MARC tag on-mass to a
 file of MARC records
 
 Yep, then you just go MarcEditor > Tools > Add/Delete Field (F7).
 
 
 On Thu, Aug 28, 2014 at 11:33 AM, Jane Costanza jcost...@trinity.edu
 wrote:
 
 MarcEdit is a free MARC editing utility.
 
 http://marcedit.reeset.net/
 
 Jane Costanza
 Associate Professor/Head of Discovery Services Trinity University San
 Antonio, Texas
 210-999-7612
 jcost...@trinity.edu
 http://digitalcommons.trinity.edu/
 http://lib.trinity.edu/
 
 
 On Thu, Aug 28, 2014 at 1:26 PM, Schwartz, Raymond
 schwart...@wpunj.edu
 wrote:
 
 Anybody know a way to add a MARC tag on-mass to a file of MARC records.
 I
 need to add the tag 918 $a with the contents DELETE to each of the
 records.
 
 Thanks in advance. /Ray
 
 Ray Schwartz
 Systems Specialist Librarian
 schwart...@wpunj.edu
 blocked::mailto:schwart...@wpunj.edu
 David and Lorraine Cheng Library   Tel: +1 973 720-3192
 William Paterson University Fax: +1 973
 720-2585
 300 Pompton RoadMobile: +1 201
 424-4491
 Wayne, NJ 07470-2103 USA
 http://nova.wpunj.edu/schwartzr2/
 http://euphrates.wpunj.edu/faculty/schwartzr2/
 
 
 
 
 
 --
 Blake L. Galbreath
 Systems Librarian
 Eastern Oregon University
 One University Boulevard
 La Grande, OR 97850
 (541) 962.3017
 bgalbre...@eou.edu
 
 
 
 
 -- 
 *John Mignault, Empire State Digital Network Technology Specialist*
 Metropolitan New York Library Council (METRO http://metro.org/)
 212.228.2320 x129
 http://www.mnylc.org/esdn


Re: [CODE4LIB] Creating a Linked Data Service

2014-08-07 Thread Jason Stirnaman
Mike,
Check out 
http://json-ld.org/,
http://json-ld.org/primer/latest/, and
https://github.com/digitalbazaar/pyld

But, if you haven't yet sketched out a model for *your* data, then the LD stuff 
will just be a distraction. The information on Linked Data seems overly complex 
because trying to represent data for the Semantic Web gets complex - and 
verbose. 

As others have suggested, it's never a bad idea to just do the simplest thing 
that could possibly work.[1] Mark recommended writing a simple API. That would 
be a good start to understanding your data model and to eventually serving LD. 
And, you may find that it's enough for now.

1. http://www.xprogramming.com/Practices/PracSimplest.html
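
As a deliberately minimal illustration of what such a service might return, here is a sketch of a laptop-availability document expressed as JSON-LD with schema.org terms; the vocabulary choices, type, and counts are assumptions, not a prescribed model:

  require 'json'

  # Hypothetical availability payload for a laptop pool, expressed as JSON-LD.
  availability = {
    '@context' => 'http://schema.org',
    '@type'    => 'Product',
    'name'     => 'Circulating laptops',
    'offers'   => {
      '@type'          => 'AggregateOffer',
      'availability'   => 'http://schema.org/LimitedAvailability',
      'offerCount'     => 10,   # available right now
      'inventoryLevel' => 12    # total owned
    }
  }

  puts JSON.pretty_generate(availability)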

Jason

Jason Stirnaman
Lead, Library Technology Services
University of Kansas Medical Center
jstirna...@kumc.edu
913-588-7319

On Aug 6, 2014, at 1:45 PM, Michael Beccaria mbecca...@paulsmiths.edu wrote:

 I have recently had the opportunity to create a new library web page and host 
 it on my own servers. One of the elements of the new page that I want to 
 improve upon is providing live or near live information on technology 
 availability (10 of 12 laptops available, etc.). That data resides on my ILS 
 server and I thought it might be a good time to upgrade the bubble gum and 
 duct tape solution I now have by creating a real linked data service that
 would provide that availability information to the web server.
 
 The problem is there is a lot of overly complex and complicated information 
 out there on linked data, RDF, the semantic web, etc., and I'm looking for
 a simple guide to creating a very simple linked data service with php or 
 python or whatever. Does such a resource exist? Any advice on where to start?
 Thanks,
 
 Mike Beccaria
 Systems Librarian
 Head of Digital Initiative
 Paul Smith's College
 518.327.6376
 mbecca...@paulsmiths.edu
 Become a friend of Paul Smith's Library on Facebook today!


Re: [CODE4LIB] Recommendations for IT Department Management Resouces

2014-07-03 Thread Jason Stirnaman
I'm a big fan of Johanna Rothman: http://www.jrothman.com/ 
Most of her writing centers on teams, project management, and hiring in 
technology.  It doesn't always translate easily to our glacial, 
resource-strapped academic environments though.

Jason

Jason Stirnaman
Lead, Library Technology Services
University of Kansas Medical Center
jstirna...@kumc.edu
913-588-7319

On Jul 3, 2014, at 1:45 PM, Hagedon, Mike - (mhagedon) 
mhage...@email.arizona.edu wrote:

 Code4Lib is awesome. I probably wouldn't have thought to ask Mike's question 
 here, but I just bought Leading Snowflakes because of Francis' answer. Thanks!
 
 Mike
 
 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of 
 Francis Kayiwa
 Sent: Friday, May 02, 2014 5:35 AM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] Recommendations for IT Department Management Resouces
 
 
 On 5/2/2014 8:25 AM, Michael Beccaria wrote:
 I'm looking for resources on managing IT departments and 
 infrastructure in an academic environment. Resources that go over high 
 level organization stuff like essential job roles, policies, standard 
 operating procedures, etc.? Anyone know of any good resources out 
 there that they consider useful or essential?
 
 
 I quite enjoyed reading this book -it comes with video packages etc.,- and I 
 wish it on all my past and future managers. ;-)
 
 http://leadingsnowflakes.com/
 
 Cheers,
 ./fxk
 
 --
 Sent from a computer using a keyboard and software.


Re: [CODE4LIB] orcid

2014-06-10 Thread Jason Stirnaman
http://orcid.org/faq-page#n110

Jason

Jason Stirnaman
Lead, Library Technology Services
University of Kansas Medical Center
jstirna...@kumc.edu
913-588-7319

On Jun 10, 2014, at 3:04 PM, Eric Lease Morgan emor...@nd.edu wrote:

  Is ORCID an acronym, and if it is then what does it stand for? —ELM


Re: [CODE4LIB] Anyone working with iPython?

2013-12-19 Thread Jason Stirnaman
Yes, I just started playing with it, too, and would like to hear ideas. The 
notebook model is really cool and, I think, would at least be helpful for 
teaching others to code.



There's also an iRuby port.



Jason

-- Original message --
From: Roy Tennant
Date: 12/19/2013 11:54 AM
To: CODE4LIB@LISTSERV.ND.EDU;
Subject:[CODE4LIB] Anyone working with iPython?


Our Wikipedian in Residence, Max Klein brought iPython [1] to my attention
recently and even in just the little exploration I've done with it so far
I'm quite impressed. Although you could call it "interactive Python," that
doesn't begin to put across the full range of capabilities, as when I first
heard that I thought "Great, a Python shell where you enter a command, hit
the return, and it executes. Great. Just what I need. NOT." But I was SO
WRONG.

It certainly can and does do that, but also so much more. You can enter
blocks of code that then execute. Those blocks don't even have to be
Python. They can be Ruby or Perl or bash. There are built-in functions of
various kinds that it (oddly) calls "magic". But perhaps the killer bit is
the idea of Notebooks that can capture all of your work in a way that is
also editable and completely web-ready. This last part is probably
difficult to understand until you experience it.

Anyway, i was curious if others have been working with it and if so, what
they are using it for. I can think of all kinds of things I might want to
do with it, but hearing from others can inspire me further, I'm sure.
Thanks,
Roy

[1] http://ipython.org/


Re: [CODE4LIB] rdf triplestores

2013-11-11 Thread Jason Stirnaman
Eric,
We just did a workshop at C4LMidwest on getting up and running with Fuseki and 
RDF/XML. Here's the 3-part tutorial (for OS X, but translates easily to Linux):
http://jstirnaman.wordpress.com/2013/10/11/installing-fuseki-with-jena-and-tdb-on-os-x/
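
On the import question: once Fuseki is running, RDF/XML can be pushed into a dataset over HTTP via the SPARQL Graph Store protocol. A sketch, with the dataset name ('ds') and file name as placeholders:

  require 'net/http'
  require 'uri'

  # POST an RDF/XML file into the default graph of a Fuseki dataset named 'ds'.
  uri = URI('http://localhost:3030/ds/data?default')
  req = Net::HTTP::Post.new(uri, 'Content-Type' => 'application/rdf+xml')
  req.body = File.read('records.rdf')

  res = Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }
  puts res.code   # expect 200/201 on success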

Jason

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Eric 
Lease Morgan
Sent: Sunday, November 10, 2013 11:12 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] rdf triplestores

What is your favorite RDF triplestore?

I am able to convert numerous library-related metadata formats into RDF/XML. In 
a minimal way, I can then contribute to the Semantic Web by simply putting the 
resulting files on an HTTP file system. But if I were to import my RDF/XML into 
a triplestore, then I could do a lot more. Jena seems like a good option. So 
does Openlink Virtuoso. 

What experience do y'all have with these tools, and do you know how to import 
RDF/XML into them?

-- 
Eric Lease Morgan


Re: [CODE4LIB] Faculty publication database

2013-10-28 Thread Jason Stirnaman
To affirm what Eric and Tom said, we use BibApp but collecting publication data 
and disambiguating authors is a huge, person-intensive chore. And that's 
speaking as a small-ish medical center that can rely on PubMed to source 80-90% 
of our data.

We're hoping that ORCID will make a big difference. We're just getting ready to 
push our verified publication data to ORCID profiles for our researchers.

Cornell has done some work on author-name disambiguation: 
http://www.youtube.com/watch?v=44PVULsDk24

Jason Stirnaman, MLS
Biomedical Librarian, Digital Projects and Research

A.R. Dykes Library
University of Kansas Medical Center

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Eric 
Larson
Sent: Friday, October 25, 2013 2:32 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Faculty publication database

Hi all,

I was the lead developer for BibApp at UW-Madison.  BibApp is a neat tool and 
worth consideration for Ruby/Solr folk.

However, the project lost momentum at UW because we could not capture enough 
data to approach faculty expectations that the database be _truly 
comprehensive_.  We harvested citation data via APIs, collected paper CVs, 
brokered our way into obtaining copies of annual merit review exercises, but 
still we could not capture enough publication data.  Ultimately, seeing the 
amount of staff cost for data collection, for building a non-comprehensive 
tool, the library decided to back away.

In the sciences you'll have far better luck, but in the humanities it's a 
complete mess.  Good luck finding citations for all the public radio 
appearances the Chair of the English department expects to see on their 
profile...

It's an unwinnable war.  I still cry at night.

Cheers,
- Eric



On Fri, Oct 25, 2013 at 1:30 PM, Michael J. Giarlo  
leftw...@alumni.rutgers.edu wrote:

 Have you looked at VIVO yet?  http://vivoweb.org/

 It's an open-source project that was initially developed by Cornell 
 and is now being incubated by DuraSpace.

 -Mike


 On Fri, Oct 25, 2013 at 8:35 AM, Alevtina Verbovetskaya  
 alevtina.verbovetsk...@mail.cuny.edu wrote:

  Hi guys,
 
  Does your library maintain a database of faculty publications? How 
  do you do it?
 
  Some things I've come across in my (admittedly brief) research:
  - RSS feeds from the major databases
  - RefWorks citation lists
 
  These options do not necessarily work for my university, made up of 
  24 colleges/institutions, 6,700+ FT faculty, and 270,000+ 
  degree-seeking students.
 
  Does anyone have a better solution? It need not be searchable: we 
  are
 just
  interested in pulling a periodical report of articles written by our 
  faculty/students without relying on them self-reporting 
  days/weeks/months/years after the fact.
 
  Thanks!
  Allie
 
  --
  Alevtina (Allie) Verbovetskaya
  Web and Mobile Systems Librarian
  Office of Library Services
  City University of New York
  555 W 57th St, Ste. 1325
  New York, NY 10019
  1-646-313-8158
  alevtina.verbovetsk...@cuny.edumailto:alevtina.verbovetskaya@cuny.e
  du
 



Re: [CODE4LIB] Ruby on Windows

2013-10-02 Thread Jason Stirnaman
Josh,

If you're wanting to deploy a Ruby app to Windows desktops then you might look 
at Shoes or jRuby (as others suggested):

http://www.slideshare.net/anisniit/jruby-15867973




For web apps...what everyone else said, but if you're adventurous you might 
look at Ironpython and Ironruby: https://github.com/IronLanguages



Jason
-- Original message --
From: Jonathan Rochkind
Date: 10/01/2013 7:04 PM
To: CODE4LIB@LISTSERV.ND.EDU;
Subject:Re: [CODE4LIB] Ruby on Windows


So, when my desktop workstation was Windows, i developed ruby by actually 
running it on a seperate box which was a linux box. I'd just ssh in for a 
command line, and I used ExpanDrive[1] to mount the linux box's file system as 
a G:// drive on Windows, so I could still edit files there with the text 
editor of my choice.




So it barely mattered that it was a separate machine, right?  Even if it had 
somehow been on my local machine, I'd still be opening up some kind of shell 
(whether CMD.exe or more likely some kind of Cygwin thing) to start up my app 
or run the automated tests etc.  It's a window with a command line in it, what 
does it matter if it's actually running things on my local machine, or is a 
putty window to a linux machine?




So, if you don't have a separate linux machine available, you might be able to 
do something very similar using VirtualBox[2] to run a linux machine in a VM on 
your windows machine.  With VirtualBox, you can share file systems so you can 
just open up files 'in' your linux VM on your Windows machine. There's probably 
a way to ssh into the local linux VM, from the Windows host, even if the linux 
VM doesn't have it's own externally available IP address.




It would end up being quite similar to what I did, which worked fine for me for 
many years (eventually I got an OSX box cause I just like it better, but my 
development process is not _substantially_ different).




But here's the thing, even if you manage to do actual Windows ruby development 
without a linux VM... assuming you're writing a web app... what the heck are 
you going to actually deploy it on?  If you're planning on deploying it on a  
Windows server, I think you're in for a _world_ of hurt; deploying a production 
ruby web app on a Windows server is going to be much _more_ painful than 
getting a ruby dev environment going on a Windows server. And really that's not 
unique to ruby, it's true of just about any non-Microsoft 
interpreted/virtual-machine language, or compiled language not supported by 
Microsoft compilers.  There are reasons that almost everyone running non-MS 
languages deploys on linux (and a virtuous/viscious circle where since most 
people deploy on linux, most open source deployment tools are for linux).




If you really have to deploy on a Windows server, you should probably stick to 
MS languages. Or, contrarily, if you want to develop in non-MS languages, you 
should find a way to get linux servers into your infrastructure.








[1] http://www.expandrive.com/
[2] https://www.virtualbox.org/

From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] on behalf of Ross Singer 
[rossfsin...@gmail.com]
Sent: Tuesday, October 01, 2013 7:06 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Ruby on Windows




If you absolutely must have a Windows development environment, you may want
to consider a JVM-based scripting language, like Groovy or JRuby. All the
cross-platform advantages, none of the woe. Or, not as much, at
least (there's always a modicum of woe with anything you decide on).




-Ross.




On Tuesday, October 1, 2013, Joshua Welker wrote:




 I'm using Windows 7 x64 SP1. I am using the most recent RubyInstaller
 (2.0.0-p247 x64) and DevKit (DevKit-mingw64-64-4.7.2-2013022-1432-sfx).

 That's disappointing to hear that most folks use Ruby exclusively in *nix
 environments. That really limits its utility for me. I am trying Ruby
 because dealing with HTTP in Java is a huge pain, and I was having
 difficulties setting up a Python environment in Windows, too (go figure).

 Josh Welker


 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU]
 On Behalf Of
 David Mayo
 Sent: Tuesday, October 01, 2013 3:44 PM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] Ruby on Windows

 DevKit is a MingW/MSYS wrapper for Windows Ruby development.  It might not
 be finding it, but he does have a C dev environment.

 I know you cut them out earlier, but would you mind sending some of the C
 Header Blather our way?  It's probably got some clues as to what's going
 on.

 Also - which versions of Windows, RubyInstaller, and DevKit are you using?




 On Tue, Oct 1, 2013 at 4:38 PM, Ross Singer 
 rossfsin...@gmail.com
 wrote:

  It's probably also possible to get 

[CODE4LIB] Discovery services (Summon, EDS) at medical centers

2013-08-28 Thread Jason Stirnaman
Would anyone from a medical/healthcare institution that has implemented or 
evaluated Serials Solutions' Summon, EBSCO's Discovery Service, or similar 
search service be willing to share their opinions?
Any opinions on- or off-list with regard to overall value, source coverage, 
usability, or API integration with VuFind or Blacklight are appreciated.

Thanks,
Jason

Jason Stirnaman
Digital Projects Librarian
A.R. Dykes Library
University of Kansas Medical Center
913-588-7319


Re: [CODE4LIB] Python and Ruby

2013-07-30 Thread Jason Stirnaman
I recommend going through 
http://pragprog.com/book/btlang/seven-languages-in-seven-weeks 
No, of course it's not exhaustive, but it offers an appreciation of some modern 
 languages, their differences, and the roots they derived from.
Every coder [their] language. Every language its coder :)

Jason

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Marc 
Chantreux
Sent: Tuesday, July 30, 2013 10:14 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Python and Ruby

On Tue, Jul 30, 2013 at 02:21:55PM +, Rich Wenger wrote:
 The proliferation of boutique languages is a cancer on our community. 

sure ... but I don't want to be stuck on PHP or Python when I have the power of
Perl in my hands. Others would argue Perl is too hard for librarians and go with
Python; someone else will tell us all that, yeah, his Go server is 30 times
faster than our dynamic-language-based ones. Guess what? They are all right, and
it's a matter of what you need and how those languages will taste to you.

There is no silver bullet, so don't expect a cancer cure for the moment.
Sorry about that :)

regards
--
Marc Chantreux
Université de Strasbourg, Direction Informatique
14 Rue René Descartes,
67084  STRASBOURG CEDEX
☎: 03.68.85.57.40
http://unistra.fr
Don't believe everything you read on the Internet
-- Abraham Lincoln


Re: [CODE4LIB] Python and Ruby

2013-07-30 Thread Jason Stirnaman
As of Ruby 1.9, I would dispute the "Ruby is slower than everything" case.
There's lots of evidence to the contrary, e.g.
http://www.unlimitednovelty.com/2012/06/ruby-is-faster-than-python-php-and-perl.html

Jason

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Marc 
Chantreux
Sent: Tuesday, July 30, 2013 11:25 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Python and Ruby

On Tue, Jul 30, 2013 at 10:25:14AM -0500, Matthew Sherman wrote:
 Ok folks, we have veered into nonconstructive territory.  How about we 
 come back to the original question and help this person figure out 
 what they need to about Ruby and Python so they can do well with what 
 they want to work on.

comparing languages on objective criteria (especially when they are as close
as Ruby and Python) isn't constructive.

but ok, let's try

* both claim to be very easy to learn (Ruby by having a very nice
  syntax, Python by limiting the features of the syntax)
* writing Python code is very boring when you come from featureful
  languages like Ruby or Perl; nothing can be expressed in a simple way.
* Ruby is slow ... I mean: even for a dynamic language.
* both languages have lots of libraries but lack
  something as robust and useful as CPAN (and related tools)
* Python has an equivalent of the Perl PDL (SciPy)
* Python has the Natural Language Toolkit (equivalent in other languages?)

your basic goal       | your language
----------------------|-------------------------------------
write/maintain faster | perl
reuse existing faster | python
learn faster          | ruby
execute faster        | you're probably screwed; experiment
                      | with lua, go, haskell, rust

regards
--
Marc Chantreux
Université de Strasbourg, Direction Informatique
14 Rue René Descartes,
67084  STRASBOURG CEDEX
☎: 03.68.85.57.40
http://unistra.fr
Don't believe everything you read on the Internet
-- Abraham Lincoln


Re: [CODE4LIB] Anyone have access to well-disambiguated sets of publication data?

2013-07-09 Thread Jason Stirnaman
Paul,

Not a huge set, but I will offer up our BibApp data, 
http://experts.kumc.edu/faq. It should mostly meet your requirements and is 
already in VIVO form (and other flavors) for you.

Contact me off-list of you have questions.



Jason Stirnaman

913-588-7319


-- Original message --
From: Paul Albert
Date: 7/9/2013 10:33 AM
To: CODE4LIB@LISTSERV.ND.EDU;
Subject:[CODE4LIB] Anyone have access to well-disambiguated sets of publication 
data?

I am exploring methods for author disambiguation, and I would like to have 
access to one or more set of well-disambiguated data set containing:
– a unique author identifier (email address, institutional identifier)
– a unique article identifier (PMID, DOI, etc.)
– a unique journal identifier (ISSN)

Definition for well-disambiguated – for a given set of authors, you know the 
identity of their journal articles to a precision and recall of greater than 
90-95%.

Any ideas?

thanks,
Paul


Paul Albert
Project Manager, VIVO
Weill Cornell Medical Library
646.962.2551


Re: [CODE4LIB] Visualizing (public) library statistics

2013-06-05 Thread Jason Stirnaman
Cab,
I realize you asked for examples, not tools, and this may be overkill for what 
you're wanting, but http://ushahidi.com/products/ushahidi-platform. 
Ushahidi would be good if you wanted a geographic, time-series visualization 
mashed-up with social media.
e.g. 
http://community.ushahidi.com/uploads/documents/c_Ushahidi-Practical_Considerations.pdf
I imagine that could be a worthwhile project on a large scale for many 
libraries.

A Google Fusion Table would be a simpler mapping/charting alternative. e.g. 
https://www.google.com/fusiontables/DataSource?docid=1JRSvdVxym2lKiM2cnfB7vmY735l58GSxD5O7-g0

Jason
Jason Stirnaman
Digital Projects Librarian
A.R. Dykes Library
University of Kansas Medical Center
913-588-7319


From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] on behalf of Francis Kayiwa 
[kay...@uic.edu]
Sent: Wednesday, June 05, 2013 3:38 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Visualizing (public) library statistics

On Wed, Jun 05, 2013 at 03:40:29PM -0400, Cab Vinton wrote:
 Come budget time, I invariably find myself working with the most
 recent compilation of public library statistics put out by our State
 Library -- comparing our library to peer institutions along a variety
 of measures (support per capita, circulation per capita, staffing
 levels, etc.) so I can make the best possible case for increasing/
 maintaining our funding.

 The raw data is in a Excel spreadsheet --
 http://www.nh.gov/nhsl/lds/public_library_stats.html -- so this seems
 ripe for mashing up, data visualization, online charting, etc.

 Does anyone know of any examples where these types of library stats
 have been made available online in a way that meets my goals of being
 user-friendly, visually informative/ clear, and just plain cool?

 If not, examples from the non-library world and/ or pointers to
 dashboards of note would be equally welcome, particularly if there's
 an indication of how things work on the back end.

YMMV but I've used infogr.am [0]

Granted the type of data I was using doesn't compare to the kind you are
trying to tame above.

Failing that, there's lots listed at datavisualization.ch [1] that could
help solve your problem. Here some assembly will be required.

Cheers,
./fxk

[0] http://infogr.am/
[1] http://selection.datavisualization.ch/

 Cheers,

 Cab Vinton, Director
 Sanbornton Public Library
 Sanbornton, NH


--
i'm living so far beyond my income that we may almost be said to be
living apart.
-- e. e. cummings


Re: [CODE4LIB] GitHub Myths (was thanks and poetry)

2013-02-20 Thread Jason Stirnaman
Another option might be to set it up like the Planet. Where individuals just 
post their poetry to their own blogs, Tumblrs, etc., tag them, and have 
$PLANET_NERD_POETS aggregate them.

Git and Github are great. But while I get the argument for utility, there does 
seem to be barrier-to-entry there for someone just wanting to submit a poem.

Jason

Jason Stirnaman
Digital Projects Librarian
A.R. Dykes Library
University of Kansas Medical Center
913-588-7319


From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] on behalf of Karen Coyle 
[li...@kcoyle.net]
Sent: Wednesday, February 20, 2013 10:42 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] GitHub Myths (was thanks and poetry)

Shaun, you cannot decide whether github is a barrier to entry FOR ME (or
anyone else), any more than you can decide whether or not my foot hurts.
I'm telling you github is NOT what I want to use. Period.

I'm actually thinking that a blog format would be nice. It could be
pretty (poetry and beauty go together). Poems tend to be short, so
they'd make a nice blog post. They could appear in the Planet blog roll.
They could be coded by author and topic. There could be comments! Even
poems as comments! The only down-side is managing users. Anyone have
ideas on that?

kc


On 2/20/13 8:20 AM, Shaun Ellis wrote:
  (As a general rule, for every programmer who prefers tool A, and says
  that everybody should use it, there’s a programmer who disparages tool
  A, and advocates tool B. So take what we say with a grain of salt!)

 It doesn't matter what tools you use, as long as you and your team are
 able to participate easily, if you want to.  But if you want to
 attract  contributions from a given development community, then
 choices should be balanced between the preferences of that community
 and what best serve the project.

 From what I've been hearing, I think there is a lot of confusion about
 GitHub.  Heck, I am constantly learning about new GitHub features,
 APIs, and best practices myself. But I find it to be an incredibly
 powerful platform for moving open source, distributed software
 development forward.  I am not telling anyone to use GitHub if they
 don't want to, but I want to dispel a few myths I've heard recently:

 

 * Myth #1 : GitHub creates a barrier to entry.
 * To contribute to a project on GitHub, you need to use the
 command-line. It's not for non-coders.

 GitHub != git.  While GitHub was initially built for publishing and
 sharing code via integration with git, all GitHub functionality can be
 performed directly through the web gui.  In fact, GitHub can even be
 used as your sole coding environment. There are other tools in the
 eco-system that allow non-coders to contribute documentation, issue
 reporting, and more to a project.

 

 * Myth #2 : GitHub is for sharing/publishing code.
 * It would be fun to have a wiki for more durable poetry (github
 unfortunately would be a barrier to many).

 GitHub can be used to collaborate on and publish other types of
 content as well.  For example, GitHub has a great wiki component* (as
 well as a website component).  In a number of ways, has less of a
 barrier to entry than our Code4Lib wiki.

 While the path of least resistance requires a repository to have a
 wiki, public repos cost nothing and can consist of a simple README
 file.  The wiki can be locked down to a team, or it can be writable by
 anyone with a github account.  You don't need to do anything via
 command-line, don't need to understand git-flow, and you don't even
 need to learn wiki markup to write content. All you need is an account
 and something to say, just like any wiki. Log in, go to the
 anti-harassment policy wiki, and see for yourself:
 https://github.com/code4lib/antiharassment-policy/wiki

 * The github wiki even has an API (via Gollum) that you can use to
 retrieve raw or formatted wiki content, write new content, and collect
 various meta data about the wiki as a whole:
 https://github.com/code4lib/antiharassment-policy/wiki/_access

 

 * Myth #3 : GitHub is person-centric.
  (And as a further aside, there’s plenty to dislike about github as
  well, from it’s person-centric view of projects (rather than
  team-centric)...

 Untrue. GitHub is very team centered when using organizational
 accounts, which formalize authorization controls for projects, among
 other things: https://github.com/blog/674-introducing-organizations

 

 * Myth #4 : GitHub is monopolizing open source software development.
  ... to its unfortunate centralizing of so much free/open
  source software on one platform.)

 Convergence is not always a bad thing. GitHub provides a great, free
 service with lots of helpful collaboration tools beyond version
 control.  It's natural that people would flock there, despite having
 lots of other options.

 

 -Shaun







 On 2/19/13 5:35 PM, Erik Hetzner wrote:
 At Sat, 16

[CODE4LIB] Getting started with Ruby and library-ish data (was RE: [CODE4LIB] You *are* a coder. So what am I?)

2013-02-18 Thread Jason Stirnaman
This is a terribly distorted view of Ruby: "If you want to make web pages, 
learn Ruby." You don't need to learn Rails to get the benefit of Ruby's 
awesomeness. But, everyone will have their own opinions. There's no accounting 
for taste. 

For anyone interested in learning to program and hack around with library data 
or linked data, here are some places to start (heavily biased toward the 
elegance of Ruby):

http://wiki.code4lib.org/index.php/Working_with_MaRC
https://delicious.com/jstirnaman/ruby+books
https://delicious.com/jstirnaman/ruby+tutorials
http://rdf.rubyforge.org/
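
To go with the Working with MaRC link above, here's a tiny taste of the
ruby-marc gem (gem install marc). It's just a minimal sketch; records.mrc is a
placeholder for any file of binary MARC records you have lying around:

    require 'marc'

    # Read a file of binary MARC records and print each title
    # (field 245, subfield a) when present.
    reader = MARC::Reader.new('records.mrc')
    reader.each do |record|
      title = record['245']
      puts title['a'] if title
    end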

Jason

Jason Stirnaman
Digital Projects Librarian
A.R. Dykes Library
University of Kansas Medical Center
913-588-7319


From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] on behalf of Joe Hourcle 
[onei...@grace.nascom.nasa.gov]
Sent: Sunday, February 17, 2013 12:52 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] You *are* a coder. So what am I?

On Feb 17, 2013, at 11:43 AM, John Fereira wrote:

 I have been writing software professionally since around 1980 and first 
 encountered perl in the early 1990s or so and have *always* disliked it.   
 Last year I had to work on a project that was mostly developed in perl and it 
 reminded me how much I disliked it.  As a utility language, and one that I 
 think is good for beginning programmers (especially for those working in a 
 library) I'd recommend PHP over perl every time.

I'll agree that there are a few aspects of Perl that can be confusing, as some 
functions will change behavior depending on context, and there was a lot of bad 
code examples out there.*

... but I'd recommend almost any current mainstream language before 
recommending that someone learn PHP.

If you're looking to make web pages, learn Ruby.

If you're doing data cleanup, Perl if it's lots of text, Python if it's mostly 
numbers.

I should also mention that in the early 1990s would have been Perl 4 ... and 
unfortunately, most people who learned Perl never learned Perl 5.  It's changed 
a lot over the years.  (just like PHP isn't nearly as insecure as it used to be 
... and actually supports placeholders so you don't end up with SQL injections)

-Joe


Re: [CODE4LIB] Getting started with Ruby and library-ish data (was RE: [CODE4LIB] You *are* a coder. So what am I?)

2013-02-18 Thread Jason Stirnaman
I've heard similar good things about Codecademy from a friend who recently 
wanted to start learning programming along with his teenage son. It seems like 
a good gateway drug :) I introduced my 11-year-old to the Javascript-based 
animation tutorials on Khan Academy and he found them really fun. I have him 
use IRB to calculate his math homework. I don't care which, if any, language he 
prefers. It's more important to me that he's able to think under the hood a 
bit about computers, data, and what's possible.

I've been thinking a lot about how to introduce not only my kids, but some of 
our cataloging/technical staff to thinking programmatically or 
computationally[1] or whatever you want to call it. For me, Ruby will likely 
be the tool - especially since it's so easy to install on Windows now, too. 

In her wisdom, Diane Hillmann (I think) pointed out the need for catalogers to 
be able to talk to programmers. Personally, that's what I'm after... to equip 
people to think about problems, data, and networks differently, e.g. No, you 
really don't have to look up each record individually in the catalog and check 
the link, etc.


1. http://www.google.com/edu/computational-thinking/

Jason Stirnaman
Digital Projects Librarian
A.R. Dykes Library
University of Kansas Medical Center
913-588-7319


From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] on behalf of Matthew 
Sherman [matt.r.sher...@gmail.com]
Sent: Monday, February 18, 2013 10:18 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Getting started with Ruby and library-ish data (was RE: 
[CODE4LIB] You *are* a coder. So what am I?)

Getting back to the original point and noting some nice starting tools, I
find http://www.codecademy.com to be a decent starting spot for those of us
without much computer science background.  I am not sure what professional
developers think of the site, but I find it a helpful tutorial for starting
to get a basic understanding of scripting, Ruby, JavaScript, Python,
jQuery, APIs, etc.  Hope that helps.

Matt Sherman


On Mon, Feb 18, 2013 at 7:52 AM, Jason Stirnaman jstirna...@kumc.eduwrote:

 This is a terribly distorted view of Ruby: If you want to make web pages,
 learn Ruby, and you don't need to learn Rails to get the benefit of Ruby's
 awesomeness. But, everyone will have their own opinions. There's no
 accounting for taste.

 For anyone interested in learning to program and hack around with library
 data or linked data, here are some places to start (heavily biased toward
 the elegance of Ruby):

 http://wiki.code4lib.org/index.php/Working_with_MaRC
 https://delicious.com/jstirnaman/ruby+books
 https://delicious.com/jstirnaman/ruby+tutorials
 http://rdf.rubyforge.org/

 Jason

 Jason Stirnaman
 Digital Projects Librarian
 A.R. Dykes Library
 University of Kansas Medical Center
 913-588-7319

 
 From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] on behalf of Joe
 Hourcle [onei...@grace.nascom.nasa.gov]
 Sent: Sunday, February 17, 2013 12:52 PM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] You *are* a coder. So what am I?

 On Feb 17, 2013, at 11:43 AM, John Fereira wrote:

  I have been writing software professionally since around 1980 and
 first encounterd perl in the early 1990s of so and have *always* disliked
 it.   Last year I had to work on a project that was mostly developed in
 perl and it reminded me how much I disliked it.  As a utility language, and
 one that I think is good for beginning programmers (especially for those
 working in a library) I'd recommend PHP over perl every time.

 I'll agree that there are a few aspects of Perl that can be confusing, as
 some functions will change behavior depending on context, and there was a
 lot of bad code examples out there.*

 ... but I'd recommend almost any current mainstream language before
 recommending that someone learn PHP.

 If you're looking to make web pages, learn Ruby.

 If you're doing data cleanup, Perl if it's lots of text, Python if it's
 mostly numbers.

 I should also mention that in the early 1990s would have been Perl 4 ...
 and unfortunately, most people who learned Perl never learned Perl 5.  It's
 changed a lot over the years.  (just like PHP isn't nearly as insecure as
 it used to be ... and actually supports placeholders so you don't end up
 with SQL injections)

 -Joe



Re: [CODE4LIB] Getting started with Ruby and library-ish data (was RE: [CODE4LIB] You *are* a coder. So what am I?)

2013-02-18 Thread Jason Stirnaman
I'm not advocating the Google CT lessons as the best way to learn Python. 
Karen, I really like your hacker space idea. Anyone else know of an online 
environment like that?  Another option is maybe a Python IRC channel or a local 
meetup discussion list. For example, we have a really good Ruby meetup group 
here in KC that meets once a month. I also know that between meetings I can go 
to the mailing list to get help with my Rails questions.

I am interested more in the Google CT lessons in the Data Analysis and 
English-Language subjects as entry points into how to think differently about 
your work and about this thing you're hunched over for 8 hours a day. Sure, 
those lessons focus heavily on spreadsheet functions, but that's a familiar way 
to introduce the concepts. I think it could also be adapted to Ruby, Python, 
whatever.

Jason

Jason Stirnaman
Digital Projects Librarian
A.R. Dykes Library
University of Kansas Medical Center
913-588-7319


From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] on behalf of Karen Coyle 
[li...@kcoyle.net]
Sent: Monday, February 18, 2013 3:25 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Getting started with Ruby and library-ish data (was RE: 
[CODE4LIB] You *are* a coder. So what am I?)

On 2/18/13 12:53 PM, Jonathan Rochkind wrote:
 On 2/18/2013 2:04 PM, Jason Stirnaman wrote:
 I've been thinking alot about how to introduce not only my kids, but
 some of our cataloging/technical staff to thinking programmatically
 or computationally[1] or whatever you want to call it.

 Do you have an opinion of the google 'computational thinking'
 curriculum pieces linked off of that page you cite? For instance, at:

 http://www.google.com/edu/computational-thinking/lessons.html

I looked at the Beginning Python one[1], and I have to say that any
intro to programming that begins with a giant table of mathematical
functions is a #FAIL. Wow - how wrong can you get it?

On the other hand, I've been going through the Google online python
class [2] and have found it very easy to follow (it's youtubed), and the
exercises are interesting. What I want next is more exercises, and
someone to talk to about any difficulties I run into. I want a hands-on
hacker space learning environment that has a live expert (and you
wouldn't have to be terribly expert to answer a beginner's questions).
It's very hard to learn programming alone because there are always
multiple ways to solve a problem, and an infinite number of places to
get stuck.

kc
[1] http://tinyurl.com/bcj894s
[2] https://developers.google.com/edu/python/

 Or at:

 http://www.iste.org/learn/computational-thinking/ct-toolkit

--
Karen Coyle
kco...@kcoyle.net http://kcoyle.net
ph: 1-510-540-7596
m: 1-510-435-8234
skype: kcoylenet


Re: [CODE4LIB] Rdio playlist

2013-02-06 Thread Jason Stirnaman
++
I was bummed not to find any Big Black in rdio.

Jason

Jason Stirnaman
Digital Projects Librarian
A.R. Dykes Library
University of Kansas Medical Center
913-588-7319


From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] on behalf of Gabriel 
Farrell [gsf...@gmail.com]
Sent: Wednesday, February 06, 2013 1:15 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Rdio playlist

Glad to see Tortoise on here. If we run out of music we can just dip into
the catalogs of Thrill Jockey, Drag City, and all the
rest (http://en.wikipedia.org/wiki/Chicago_record_labels).


On Tue, Feb 5, 2013 at 12:08 PM, Matt Schultz
matt.schu...@metaarchive.orgwrote:

 This is great - loved the way the mix shaped up! Getting a taste of some
 new music.

 Thanks especially to the I Fight Dragons rec that surfaced on the thread.
 Love. It. Rock. On.

 On Tue, Feb 5, 2013 at 10:33 AM, William Denton w...@pobox.com wrote:

  There are 70 songs on the playlist [1] now, including Little Walter,
 Styx,
  Liz Phair, Tortoise, Lupe Fiasco, Cheap Trick, Herbie Hancock, Ministry,
  Sam Prekop and Screeching Weasel.  Great listening!  Nine busy people
 have
  added songs so far.
 
  It costs $5 or more per month if you want to subscribe to Rdio, but you
  can sign up free for a week if you just want to try it out.
 
  There's an API [2], and with it or by hand I'll make a record of the
 songs
  on the playlist so they're not lost and people can listen to them
 elsewhere.
 
  Bill
 
  [1] http://www.rdio.com/people/wdenton/playlists/2229053/Code4Lib_2013_in_Chicago/
 
  [2] http://developer.rdio.com/
 
  --
  William Denton
  Toronto, Canada
  http://www.miskatonic.org/
 



 --
 Matt Schultz
 Program Manager
 Educopia Institute, MetaArchive Cooperative
 http://www.metaarchive.org
 matt.schu...@metaarchive.org
 616-566-3204



Re: [CODE4LIB] Group Decision Making (was Zoia)

2013-01-25 Thread Jason Stirnaman
Ian +1

I like that direction and I'll sign it.

I think it would be good to offer an occasional reminder in C4L channels (e.g. 
link in the IRC greeting, mail list signup, etc.) that this is the sort of 
*community* you're entering and here's what you should expect.

Jason

Jason Stirnaman
Digital Projects Librarian
A.R. Dykes Library
University of Kansas Medical Center
913-588-7319


From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] on behalf of Ian Walls 
[iwa...@library.umass.edu]
Sent: Friday, January 25, 2013 12:46 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Group Decision Making (was Zoia)

+1

Perhaps, instead of a policy document (which is inherently rules-based), we
have a statement of belief and a pledge to stand by it (which is more of a
good-faith social contract).  Those of us who believe in it could sign it in
some way, perhaps through GitHub  This way we'd still have a document to
point people at, but we wouldn't have to worry about coding up rules that
work for every conceivable situation.

A basic statement of belief:

We don't believe that people should harm each other.

The basic situations we'd need to cover are:

a) I am harmed by someone - a pledge to speak up, either to the person
directly or to someone else in the community
b) someone is harmed by me - a pledge to review my behavior and take
appropriate action (apologize, or explain why I feel the behavior is
justified)
c) someone is harmed by someone else - a pledge to be willing to listen to
both parties, and form our opinions of the situation in light of the
statement of belief

Do you all think something like this would work for the whole community?


-Ian

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of
Jonathan Rochkind
Sent: Friday, January 25, 2013 1:25 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Group Decision Making (was Zoia)

  The best way, in my mind,
is to somehow create a culture where someone can say: "you know, I'm not ok
with that kind of remark" and the person spoken to can respond "OK, I'll
think about that."

I think that's a really good culture to try to create, and Karen says it just
right. Note that "OK, I'll think about it" is neither "No, you must be mistaken"
nor "Okay, I will immediately do whatever you ask of me."  But it does need
to be a legitimate, actual "I'll think about it," seriously.

The flip side is that the culture is also one where, when someone says "you
know, I'm not ok with that kind of remark," it often means "And I'd like you
to think about that, in a real serious way" rather than "And I expect you to
immediately change your behavior to accede to my demands."

Of course, what creates that, from both ends, is a culture of trust.  Which
I think code4lib actually has a pretty decent dose of already; let's
try to keep it that way. (In my opinion, one way we keep it that way is by
continuing to resist becoming a formal rules-based bureaucratic
organization, rather than a community based on social ties and good faith.)

Now, at some times it might really be necessary to say "And I expect you to
immediately stop what you're doing and do it exactly like I say."  Other
times it's not.  But in our society as a whole, we are so trained to think
that everything must be rules-based rather than based on good faith trust
between people who care about each other, that we're likely to assume that
"you know, i'm not ok with that remark" ALWAYS implies "And therefore I
think you are an awful person, and your only hope of no longer being an
awful person is to immediately do exactly what I say."  Rather than "And I
expect you to think about this seriously, and maybe get back to me on what
you think."  So if you do mean the second one when saying "you know, i'm not
ok with that remark," it can be helpful to say so, to elicit the
self-reflection you want, rather than defensiveness.  And of course, on the
flip-side, it is obviously helpful if you can always respond to "you know,
i'm really not okay with that!" with reflection, rather than defensiveness.

From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] on behalf of Karen Coyle
[li...@kcoyle.net]
Sent: Friday, January 25, 2013 12:22 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Group Decision Making (was Zoia)

On 1/24/13 3:09 PM, Shaun Ellis wrote:


 To be clear, I am only uncomfortable with "uncomfortable" being used
 in the policy because I wouldn't support it being there. Differing
 opinions can make people uncomfortable.  Since I am not going to stop
 sharing what may be a dissenting opinion, should I be banned?

I can't come up with a word for it that is unambiguous, but I can propose a
scenario. Imagine a room at a conference full of people -- and that there
are only a few people of color. A speaker gets up and shows or says
something racist. It may be light-hearted in nature, but the people of color
in that almost-all

Re: [CODE4LIB] Anyone have a SUSHI client?

2013-01-24 Thread Jason Stirnaman
It looks like the Metridoc project might have one: 
https://code.google.com/p/metridoc/source/search?q=sushi&origq=sushi&btnG=Search+Trunk

No idea if it's working, but I'd be really interested in hearing an update on 
Metridoc - if Thomas or anyone else involved is listening.

Jason Stirnaman
Digital Projects Librarian
A.R. Dykes Library
University of Kansas Medical Center
913-588-7319


From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] on behalf of Bill Dueber 
[b...@dueber.com]
Sent: Thursday, January 24, 2013 9:36 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Anyone have a SUSHI client?

Yeah -- I found that right away. Most of what's there appears to be
abandonware.


On Thu, Jan 24, 2013 at 9:10 AM, Tom Keays tomke...@gmail.com wrote:

 Hey. NISO has a list of SUSHI tools.

 http://www.niso.org/workrooms/sushi/tools/

 Tom




--
Bill Dueber
Library Systems Programmer
University of Michigan Library


Re: [CODE4LIB] Library CDNs

2013-01-04 Thread Jason Stirnaman
Tom,
We use Rackspace's Cloud Files for Cloud Server backup, not CDN, but it is 
built for that. You can use it with Akamai to serve content. It has versioning 
and mobile UIs:
http://www.rackspace.com/cloud/public/files/technology/

Jason

Jason Stirnaman
Digital Projects Librarian
A.R. Dykes Library
University of Kansas Medical Center
913-588-7319


From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] on behalf of Tom Keays 
[tomke...@gmail.com]
Sent: Friday, January 04, 2013 9:48 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] Library CDNs

Is anybody out there using a CDN[1] that is separate from their website to
host JavaScript, CSS, and image files? I'm looking for a one place where I
can consolidate and organize these files that is reliable (good uptime and
good response time) and affordable (less expensive than hosting a complete
website). Inasmuch as non-technical folks may need to access it, the interface
for managing the files and directories needs to be friendly. E.g., AWS's
native interface is too convoluted for newbies, but a program or web app
built as a front-end designed to have simple management functions is the
kind of thing I'm looking for (and something that mirrored AWS's built-in
versioning would be awesome).

Tom

[1] CDN: http://en.wikipedia.org/wiki/Content_delivery_network


Re: [CODE4LIB] Help with WordPress for Code4Lib Journal

2012-12-04 Thread Jason Stirnaman
It might be worth considering the Annotum theme for Wordpress, meant to do just 
that.
http://wordpress.org/extend/themes/annotum-base

Jason

Jason Stirnaman
Digital Projects Librarian
A.R. Dykes Library
University of Kansas Medical Center
913-588-7319


From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] on behalf of Tom Keays 
[tomke...@gmail.com]
Sent: Tuesday, December 04, 2012 9:27 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] Help with WordPress for Code4Lib Journal

On Tue, Dec 4, 2012 at 9:53 AM, Ross Singer rossfsin...@gmail.com wrote:

 Seriously, folks, if we can't even figure out how to upgrade our Drupal
 instance to a version that was released this decade, we shouldn't be
 discussing *new* implementations of *anything* that we have to host
 ourselves.


Not being one to waste a perfectly good segue...

The Code4Lib Journal runs on WordPress. This was a decision made by the
editorial board at the time (2007) and by and large it was a good one. Over
time, one of the board members offered his technical expertise to build a
few custom plugins that would streamline the workflow for publishing the
journal. Out of the box, WordPress is designed to publish a string of
individual articles, but we wanted to publish issues in a more traditional
model, with all the articles in an issue published at one time and arranged
in a specific order. We could do (and have done) all this manually, but having
the plugin has been a real boon for us.

The Issue Manager plugin that he wrote provided the mechanism for:
a) preventing articles from being published prematurely,
b) identifying and arranging a set of final (pending) articles into an
issue, and
c) publishing that issue at the desired time.

That person is no longer on the Journal editorial board and upkeep of the
plugin has not been maintained since he left. We're now several
WordPress releases
behind, mainly because we delayed upgrading until we could test if doing so
would break the plugins. We have now tested, and it did. I won't bore you
with the details, but if we want to continue using the plugin to manage our
workflow, we need help.

Is there anybody out there with experience writing WordPress plugins that
would be willing to work with me to diagnose what has changed in the
WordPress codex that is causing the problems and maybe help me understand
how to prevent this from happening again with future releases?

Thanks,
Tom Keays / tomke...@gmail.com


Re: [CODE4LIB] OpenURL linking but from the content provider's point of view

2012-11-21 Thread Jason Stirnaman
Sorry. I guess I misunderstood. I thought David meant resolving OpenURLs 
pointed at his content.

Jason

Jason Stirnaman
Digital Projects Librarian
A.R. Dykes Library
University of Kansas Medical Center
913-588-7319


From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] on behalf of Young,Jeff 
(OR) [jyo...@oclc.org]
Sent: Wednesday, November 21, 2012 9:08 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] OpenURL linking but from the content provider's point 
of view

If the referent has a DOI, then I would argue that
rft_id=http://dx.doi.org/10.1145/2132176.2132212 is all you need. The
descriptive information that typically goes in the ContextObject can be
obtained (if necessary) by content-negotiating for application/rdf+xml.
OTOH, if someone pokes this same URI from a browser instead, you will
generally get redirected to the publisher's web site with the full-text
close at hand.

The same principle should apply for any bibliographic resource that has
a Linked Data identifier.
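
To make this concrete, here's a minimal Ruby sketch of that content
negotiation. It's a sketch only: it assumes the resolver for this DOI honors
the Accept header (CrossRef- and DataCite-registered DOIs generally do) and
simply follows redirects until the RDF/XML comes back.

    require 'net/http'
    require 'uri'

    # Ask the DOI resolver for RDF/XML instead of the publisher landing page,
    # following redirects by hand.
    def fetch_rdf(url, limit = 5)
      raise 'too many redirects' if limit.zero?
      uri = URI(url)
      request = Net::HTTP::Get.new(uri)
      request['Accept'] = 'application/rdf+xml'
      response = Net::HTTP.start(uri.host, uri.port,
                                 use_ssl: uri.scheme == 'https') do |http|
        http.request(request)
      end
      return fetch_rdf(response['Location'], limit - 1) if response.is_a?(Net::HTTPRedirection)
      response.body
    end

    puts fetch_rdf('http://dx.doi.org/10.1145/2132176.2132212')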

Jeff

 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf
Of
 Owen Stephens
 Sent: Wednesday, November 21, 2012 9:55 AM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] OpenURL linking but from the content
provider's
 point of view

 The only difference between COinS and a full OpenURL is the addition
of
 a link resolver address. Most databases that provide OpenURL links
 directly (rather than simply COinS) use some profile information -
 usually set by the subscribing library, although some based on
 information supplied by an individual user. If set by the library this
 is then linked to specific users by IP or by login.

 There are a couple(?) of generic base URLs you can use which will try
 to redirect to an appropriate link resolver based on IP range of the
 requester, with fallback options if it can't find an appropriate
 resolver (I think this is how the WorldCat resolver works? The
'OpenURL
 Router' in the UK definitely works like this)

 The LibX toolbar allows users to set their link resolver address, and
 then translates COinS into OpenURLs when you view a page - all user
 driven, no need for the data publisher to do anything beyond COinS

 There is also the 'cookie pusher' solution which ArXiv uses - where
the
 user can set a cookie containing the base URL, and this is picked up
 and used by ArXiV (http://arxiv.org/help/openurl)

 Owen

 PS it occurs to me that the other part of the question is 'what
 metadata should be included in the OpenURL to give it the best chance
 of working with a link resolver'?

 Owen Stephens
 Owen Stephens Consulting
 Web: http://www.ostephens.com
 Email: o...@ostephens.com
 Telephone: 0121 288 6936

 On 20 Nov 2012, at 19:39, David Lawrence david.lawre...@sdsu.edu
 wrote:

  I have some experience with the library side of link resolver code.
  However, we want to implement OpenURL hooks on our open access
  literature database and I can not find where to begin.
 
  SafetyLit is a free service of San Diego State University in
  cooperation with the World Health Organization. We already provide
  embedded metadata in both COinS and unAPI formats to allow its
 capture
  by Mendeley, Papers, Zotero, etc. Over the past few months, I have
  emailed or talked with many people and read everything I can get my
  hands on about this but I'm clearly not finding the right people or
 information sources.
 
  Please help me to find references to examples of the code that is
  required on the literature database server that will enable library
  link resolvers to recognize the SafetyLit.org metadata and allow
  appropriate linking to full text.
 
  SafetyLit.org receives more than 65,000 unique (non-robot) visitors
  and the database responds to almost 500,000 search queries every
 week.
  The most frequently requested improvement is to add link resolver
 capacity.
 
  I hope that code4lib users will be able to help.
 
  Best regards,
 
  David
 
  David W. Lawrence, PhD, MPH, Director
  Center for Injury Prevention Policy and Practice San Diego State
  University, School of Public Health
  6475 Alvarado Road, Suite 105
  San Diego, CA  92120  USA   david.lawre...@sdsu.edu
  V 619 594 1994   F 619 594 1995   Skype: DWL-SDCA   www.CIPPP.org  --
  www.SafetyLit.org


Re: [CODE4LIB] OpenURL linking but from the content provider's point of view

2012-11-20 Thread Jason Stirnaman
David,
The short answer is that your application needs to map keys and values from the 
OpenURL query to your application's specific query interface.
DSpace supports OpenURL requests. You can find some of the relevant code at:
https://github.com/DSpace/DSpace/blob/master/dspace-xmlui/src/main/java/org/dspace/app/xmlui/cocoon/OpenURLReader.java
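
For what it's worth, the core of that mapping step can be pretty small. Here's
a minimal Ruby sketch: the OpenURL keys on the left are standard Z39.88
journal-format keys, while the local field names and the sample query string
are just placeholders, not SafetyLit's real ones.

    require 'cgi'

    # Translate the OpenURL (KEV) keys we care about into local search fields.
    OPENURL_TO_LOCAL = {
      'rft.atitle' => 'title',
      'rft.jtitle' => 'journal',
      'rft.issn'   => 'issn',
      'rft.date'   => 'year',
      'rft.aulast' => 'author'
    }

    def local_query(openurl_query_string)
      params = CGI.parse(openurl_query_string)
      OPENURL_TO_LOCAL.each_with_object({}) do |(openurl_key, local_key), query|
        value = Array(params[openurl_key]).first
        query[local_key] = value if value
      end
    end

    p local_query('url_ver=Z39.88-2004&rft.atitle=Example+article&rft.jtitle=Example+Journal&rft.date=2011')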

Jason Stirnaman
Digital Projects Librarian
A.R. Dykes Library
University of Kansas Medical Center
913-588-7319


From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] on behalf of David Lawrence 
[david.lawre...@sdsu.edu]
Sent: Tuesday, November 20, 2012 1:39 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] OpenURL linking but from the content provider's point of 
view

I have some experience with the library side of link resolver code.
However, we want to implement OpenURL hooks on our open access literature
database and I can not find where to begin.

SafetyLit is a free service of San Diego State University in cooperation
with the World Health Organization. We already provide embedded metadata in
both COinS and unAPI formats to allow its capture by Mendeley, Papers,
Zotero, etc. Over the past few months, I have emailed or talked with many
people and read everything I can get my hands on about this but I'm clearly
not finding the right people or information sources.

Please help me to find references to examples of the code that is required
on the literature database server that will enable library link resolvers
to recognize the SafetyLit.org metadata and allow appropriate linking to
full text.

SafetyLit.org receives more than 65,000 unique (non-robot) visitors and the
database responds to almost 500,000 search queries every week. The most
frequently requested improvement is to add link resolver capacity.

I hope that code4lib users will be able to help.

Best regards,

David

David W. Lawrence, PhD, MPH, Director
Center for Injury Prevention Policy and Practice
San Diego State University, School of Public Health
6475 Alvarado Road, Suite 105
San Diego, CA  92120  USA   david.lawre...@sdsu.edu
V 619 594 1994   F 619 594 1995   Skype: DWL-SDCA   www.CIPPP.org  --
www.SafetyLit.org


Re: [CODE4LIB] Transcription/dictation software?

2012-02-27 Thread Jason Stirnaman
That's what I hear, too. You might also look at 
http://support.google.com/youtube/bin/answer.py?hl=en&answer=100076 

Jason

 On 2/27/2012 at 12:56 PM, in message cb713bfa.df25%shan...@jhu.edu, Sean 
 Hannan shan...@jhu.edu wrote:


Mechanical Turk it.

(I hear that's what all the hipsters do while they watch Downton Abbey.)

-Sean


On 2/27/12 1:52 PM, Suchy, Daniel dsu...@ucsd.edu wrote:

 Hello all,

 At my campus we offer podcasts of course lectures, recorded in class and then
 delivered via iTunes and as a plain Mp3 download (http://podcast.ucsd.edu).  I
 have the new responsibility of figuring out how to transcribe text versions of
 these audio podcasts for folks with hearing issues.

 I was wondering if any of you are using or have played with
 dictation/transcription software and can recommend or de-recommend any?   My
 first inclination is to go with open-source, but I'm open to anything that
 works well and can scale to handle hundreds of courses.

 Thanks in advance!
 Dan

 *
 Daniel Suchy
 User Services Technology Analyst
 University of California, San Diego Libraries
 858.534.6819
 dsu...@ucsd.edu


Re: [CODE4LIB] Get Lamp showing at cod4libcon

2012-01-10 Thread Jason Stirnaman
/me waves dongle 
I have a Mac mini displayport to DVI adaptor 

Jason

 On 1/9/2012 at 04:01 PM, in message 
 CABqCXLTTT+C=l3jcspwozyr-gqoffqyf8kmvmmabi92a4dq...@mail.gmail.com, 
 Michael B. Klein mbkl...@gmail.com wrote:


DVD ordered! Do we know what kind of large-screen viewing/projector device
we'll have in the hospitality/hostility suite? I can currently handle VGA
and HDMI, but I'm not sure about DVI.

Michael

On Mon, Jan 9, 2012 at 11:21 AM, Adam Wead aw...@rockhall.org wrote:

 Hi all,

 There's been some discussion on IRC about having a viewing of the movie
 Get Lamp [1] at the code4lib conference.  Michael Klein has agreed to
 spring for the movie, which costs about $45, and I can look at coordinating
 a showtime in the hospitality suite.

 Is there any interest from conference attendees out there?  Is it
 agreeable to chip in $1 or $2 to Mike for his trouble?

 Respond off-list if you have interest, and if there's enough I'll send
 another message with details.

 thanks,

 ...adam

 Adam Wead | Systems and Digital Collection Librarian
 ROCK AND ROLL HALL OF FAME + MUSEUM
 Library and Archives
 2809 Woodland Avenue | Cleveland, Ohio 44115-3216
 216-515-1960 | FAX 216-515-1964
 Email: aw...@rockhall.org
 Follow us: rockhall.com | Membership | e-news | e-store | Facebook |
 Twitter

 [1] http://www.getlamp.com/

 [http://donations.rockhall.com/Logo_WWR.gif]
 http://rockhall.com/exhibits/women-who-rock/
 This communication is a confidential and proprietary business
 communication. It is intended solely for the use of the designated
 recipient(s). If this communication is received in error, please contact
 the sender and delete this communication.

 '



Re: [CODE4LIB] Data Mining / Business Analytics in libraries

2011-12-15 Thread Jason Stirnaman
Cindy, 
I asked this same question a few months ago[1].  
We've been working with our campus Enterprise Analytics group to help us 
prioritize what we to measure and develop a BI strategy. They use QlikView 
http://www.qlikview.com/ as their analysis tool of choice. 
I like the idea of possibly using the Metridoc project for harvesting and 
modeling the data. I'm not sure we have enough data or flux to warrant fully 
automating the harvesting. 

1. See thread 
http://www.mail-archive.com/code4lib@listserv.nd.edu/msg11280.html 

Jason

 On 12/15/2011 at 08:54 AM, in message 
 CANc3e05ARd9J_tq49=b_dy-tfvjxg+nidp6f7dxrmrhi7s7...@mail.gmail.com, Cindy 
 Harper char...@colgate.edu wrote:


Are there any listservs, blogs, forums addressing data mining in
libraries?  I've taken some courses, and am now exploring software - I just
tried our RapidMiner, which integrates with R and Weka, and has facility
for data cleaning and storage. I'm interested to see if anyone is sharing
their experiences with Business Analytics type products in libraries.

Cindy Harper, Systems Librarian
Colgate University Libraries
char...@colgate.edu
315-228-7363


Re: [CODE4LIB] Automatic Content Classification recommendations?

2011-11-28 Thread Jason Stirnaman
ConceptSearch http://www.conceptsearching.com/web/ is a commercial search 
engine and classification tool. Maybe similar to TemaTres, it doesn't use 
machine-learning but extracts concepts out of your documents that can be 
mapped to vocabulary terms. The vocabulary is then exposed to the end-user as 
search results facet. It's all driven by MS SQL Server and exposed as web 
services. 
We've used it here to map medical school lectures to the licensing exam 
outlines and have experimented a little with autoclassifying the same lecture 
content by MeSH. 

Jason


Jason Stirnaman
Biomedical Librarian, Digital Projects
A.R. Dykes Library, University of Kansas Medical Center
jstirna...@kumc.edu
913-588-7319


 On 11/28/2011 at 12:00 AM, in message 
 of1513ea09.c0a3aa92-onca257956.001e9bcd-ca257956.00210...@parliament.vic.gov.au,
  Peter Neish peter.ne...@parliament.vic.gov.au wrote:


Hi there,

Just wondering if anyone has any recommendations for systems that will do
automatic content classification through machine learning? We want to
classify newspaper articles using terms from our existing thesaurus and
have a fairly big set of articles already tagged that could be used as a
training set.. Services like OpenCalais don't really fit our need because
we want to use our own thesaurus. Happy to look at both open source and
commercial software.

Thanks,

Peter

--
Peter Neish
Systems Officer
Victorian Parliamentary Library
Ph: 03 9651 8638
peter.ne...@parliament.vic.gov.au








Re: [CODE4LIB] Code4Lib 2012 Registration Cost?

2011-11-15 Thread Jason Stirnaman
You have no idea. Whenever I make it, some of this is coming with me:

http://www.kickstarter.com/projects/1693254250/wilderness-brewing-co 

Jason


Jason Stirnaman
Biomedical Librarian, Digital Projects
A.R. Dykes Library, University of Kansas Medical Center
jstirna...@kumc.edu
913-588-7319


 On 11/15/2011 at 11:16 AM, in message
c4bb34c7c52a1c4ba828b699fc6cc9a320a67...@xmail-mbx-bt1.ad.ucsd.edu,
Fleming, Declan dflem...@ucsd.edu wrote:


tl;dr except that maybe we'll miss out on KC beer?  SAY IT ISN'T SO! 
:)

D

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of
Kevin S. Clarke
Sent: Thursday, November 10, 2011 11:49 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Code4Lib 2012 Registration Cost?

+1 to the below...

On Thu, Nov 10, 2011 at 10:12 AM, Jason Stirnaman jstirna...@kumc.edu
wrote:
 I agree that conference-ish things are mostly not broken, and that
we
 should keep talking about this in order to make them better. I also
 think Tim makes an important point here:
  then let's not separate the two parts of C4L conf like a
 traditional conference Why do we need to define workshops or
hackfests
 as pre-conference?
 Why not just say this is the conference: hackfests, workshops,
talks,
 great people, and brew-tasting? And maybe more people would
 participate or benefit if hackfest was middle-conference or
 post-conference, i.e. after they're spurred by  presentations and
ready to dive in.
 And couch-surfing ++
 I'm expecting I won't get to spend my library's entire travel budget
 to attend C4L this year and lodging obviously accounts for a huge
 chunk of that. If not, I'll just save up my bottles of KC's finest
for
 next year and hope some newcomer replaces me in the registration
rush.

 Jason


 Jason Stirnaman
 Biomedical Librarian, Digital Projects A.R. Dykes Library,
University
 of Kansas Medical Center jstirna...@kumc.edu
 913-588-7319


 On 11/10/2011 at 11:01 AM, in message

cajajqvrgw8ogxojo3+zzwgz1e2ozyfafgx5dw2y1ajumhdj...@mail.gmail.com,
 Cary Gordon listu...@chillco.com wrote:


 In the grand scheme of things, $150 for a conference is very low,
and
 $320 for a pre conference is low, as well. This doesn't make them
 cheap or mean that everyone who would like to go, can go. I fully
 understand that $25 is an inconsequential amount of money… Unless it
 is your $25.

 I think that there is consensus that we want to keep c4l at its
 current size, and I think that money shouldn't be the primary factor
 determining attendance.

 Perhaps it would be good if we raised our price to $200 or $250,
 increased our sponsorship fundraising efforts, and used the excess
to
 provide partial and full (including travel and lodging) scholarships
 for those who need them.

 Maybe we should encourage local couch surfing hosts.

 We should keep talking about this.

 Cary

 On Wed, Nov 9, 2011 at 9:11 PM, Timothy McGeary
timmcge...@gmail.com
 wrote:

 At $150 for registration, I agree with Kyle, that this is a very
 good price
 in comparison to most technical conferences. Perhaps you could
 consider the
 extra airfare and hotel room night as the price of the
 pre-conference.


 The extra airfare and hotel, in this case, is $320 per person.
 Hardly a
 reasonable comparison.

 I realize that I'm now looking at this from a different perspective
 than
 when I was a first time Code4Libber, when I was simply trying to soak
it
 all
 in and build a network of people I could work with on projects that
I
 and
 my library were interested in.  Now I have to be concerned with
 budgets,
 and getting people other than myself to C4L so they can join the
 community
 and contribute.

 If the price points of rentals go down because of the preconf day
 where
 the costs are mostly a wash, then that's great, but then let's not
 separate
 the two parts of C4L conf like a traditional conference, or put
such
 emphasis on the participation in a preconf that undermines or
 undervalues
 the participation of someone coming to the conference days
 themselves.  We
 can't have it both ways.  Code4Lib conferences *ARE* unique and
they
 are
 invaluable to many, many, many people who are fortunate enough to
A)
 register in time and B) can afford to come at all.  So let's not
 diminish
 this by presuming or assuming anything, rather take extra care in
 protecting this event as a treasure, lest all of the tireless
efforts
 the
 conference planners put forth be for naught.

 The last thing I'd want to see is C4L be under attended because
 people
 couldn't justify reasonable costs to their organization due to lack
 of
 information, openness, or mere confusion.

 Tim

 --
 Tim McGeary
 Team Leader, Library Technology
 Lehigh University
 610-758-4998
 tim.mcge...@lehigh.edu

 timmcge...@gmail.com
 GTalk/Yahoo/Skype/Twitter: timmcgeary 484-938-TMCG (Google Voice)

 On Wed, Nov 9, 2011 at 6:26 PM, Fowler, Jason jason.fow...@ubc.ca
 wrote:

   let people completely fend for themselves w/r/t to food

Re: [CODE4LIB] Code4Lib 2012 Registration Cost?

2011-11-10 Thread Jason Stirnaman
I agree that conference-ish things are mostly not broken, and that we
should keep talking about this in order to make them better. I also
think Tim makes an important point here: 
  then let's not separate the two parts of C4L conf like a
traditional conference 
Why do we need to define workshops or hackfests as pre-conference?
Why not just say this is the conference: hackfests, workshops, talks,
great people, and brew-tasting? And maybe more people would participate
or benefit if hackfest was middle-conference or post-conference,
i.e. after they're spurred by  presentations and ready to dive in. 
And couch-surfing ++ 
I'm expecting I won't get to spend my library's entire travel budget to
attend C4L this year and lodging obviously accounts for a huge chunk of
that. If not, I'll just save up my bottles of KC's finest for next year
and hope some newcomer replaces me in the registration rush. 

Jason 


Jason Stirnaman
Biomedical Librarian, Digital Projects
A.R. Dykes Library, University of Kansas Medical Center
jstirna...@kumc.edu
913-588-7319


 On 11/10/2011 at 11:01 AM, in message
cajajqvrgw8ogxojo3+zzwgz1e2ozyfafgx5dw2y1ajumhdj...@mail.gmail.com,
Cary Gordon listu...@chillco.com wrote:


In the grand scheme of things, $150 for a conference is very low, and
$320 for a pre conference is low, as well. This doesn't make them
cheap or mean that everyone who would like to go, can go. I fully
understand that $25 is an inconsequential amount of money… Unless it
is your $25.

I think that there is consensus that we want to keep c4l at its
current size, and I think that money shouldn't be the primary factor
determining attendance.

Perhaps it would be good if we raised our price to $200 or $250,
increased our sponsorship fundraising efforts, and used the excess to
provide partial and full (including travel and lodging) scholarships
for those who need them.

Maybe we should encourage local couch surfing hosts.

We should keep talking about this.

Cary

On Wed, Nov 9, 2011 at 9:11 PM, Timothy McGeary timmcge...@gmail.com
wrote:

 At $150 for registration, I agree with Kyle, that this is a very
good price
 in comparison to most technical conferences. Perhaps you could
consider the
 extra airfare and hotel room night as the price of the
pre-conference.


 The extra airfare and hotel, in this case, is $320 per person. 
Hardly a
 reasonable comparison.

 I realize that I'm now looking at this from a different perspective
than
 when I was a first time Code4Libber, when I was simply trying to soak it
all
 in and build a network of people I could work with on projects that I
and
 my library were interested in.  Now I have to be concerned with
budgets,
 and getting people other than myself to C4L so they can join the
community
 and contribute.

 If the price points of rentals go down because of the preconf day
where
 the costs are mostly a wash, then that's great, but then let's not
separate
 the two parts of C4L conf like a traditional conference, or put such
 emphasis on the participation in a preconf that undermines or
undervalues
 the participation of someone coming to the conference days
themselves.  We
 can't have it both ways.  Code4Lib conferences *ARE* unique and they
are
 invaluable to many, many, many people who are fortunate enough to A)
 register in time and B) can afford to come at all.  So let's not
diminish
 this by presuming or assuming anything, rather take extra care in
 protecting this event as a treasure, lest all of the tireless efforts
the
 conference planners put forth be for naught.

 The last thing I'd want to see is C4L be under attended because
people
 couldn't justify reasonable costs to their organization due to lack
of
 information, openness, or mere confusion.

 Tim

 --
 Tim McGeary
 Team Leader, Library Technology
 Lehigh University
 610-758-4998
 tim.mcge...@lehigh.edu

 timmcge...@gmail.com
 GTalk/Yahoo/Skype/Twitter: timmcgeary
 484-938-TMCG (Google Voice)

 On Wed, Nov 9, 2011 at 6:26 PM, Fowler, Jason jason.fow...@ubc.ca
wrote:

   let people completely fend for themselves w/r/t to food/drink on
the
 preconference day.

 Coders consume from the flat food group.  Anything that fits
beneath a
 door…

 ..jason


 On 11-11-09 3:12 PM, Kevin S. Clarke kscla...@gmail.com wrote:

 On Wed, Nov 9, 2011 at 1:41 PM, Kyle Banerjee baner...@uoregon.edu wrote:
 Yes, we have.  We've even had people want to come to the
preconference
 (and pay the preconference charge) but not attend the regular
 conference.  :-)


 They've wanted to do it, but have they actually been able to? What
makes
 c4l worthwhile is the ability to mix it up. If people attend with
the
 intention of just receiving specific training, they're not
contributing and
 that undermines the experience for everyone.

 Yes... at least at the Asheville conference (though there were just
a
 couple).

 C4l has seen both free and paid preconferences and it's easy enough
to
 rationalize

Re: [CODE4LIB] Code4Lib 2012 Registration Cost?

2011-11-08 Thread Jason Stirnaman
Second that. It was mentioned earlier on the list that cost would be $150. Are 
the pre-conference sessions extra? 

Jason


Jason Stirnaman
Biomedical Librarian, Digital Projects
A.R. Dykes Library, University of Kansas Medical Center
jstirna...@kumc.edu
913-588-7319


 On 11/8/2011 at 10:46 AM, in message 
 CALBScSQVFmz9RH7wFBX8CHwZ6DHUEbRVO0mHd9G1A_x=bpj...@mail.gmail.com, 
 Timothy McGeary timmcge...@gmail.com wrote:


For having registration opening next week, I am surprised the registration
cost isn't listed on the C4L 2012 page:
http://code4lib.org/conference/2012/

I can't authorize my team members to register next week without first
submitting cost estimates for the trip, and I can't do that without a
registration cost.  Airfare is already over $400, so maybe it's a moot
point.  But I'm not sure it's wise to open registration without publishing
this information very soon, like today.

Just a thought...
Tim

--
Tim McGeary
Team Leader, Library Technology
Lehigh University
610-758-4998
tim.mcge...@lehigh.edu

timmcge...@gmail.com
GTalk/Yahoo/Skype/Twitter: timmcgeary
484-938-TMCG (Google Voice)


Re: [CODE4LIB] Usage and financial data aggregation

2011-09-14 Thread Jason Stirnaman
When all else fails, Wikipedia 
http://en.wikipedia.org/wiki/Business_intelligence_tools#Open_source_free_products
 

RapidMiner and Pentaho Community Editions both look appealing. I hope to try 
them out soon. 
I also found the Ruby ActiveWarehouse and ActiveWarehouse-ETL projects which 
look pretty cool for Rails projects, but maybe a bit stale. 

Jason


Jason Stirnaman
Biomedical Librarian, Digital Projects
A.R. Dykes Library, University of Kansas Medical Center
jstirna...@kumc.edu
913-588-7319


 On 9/13/2011 at 04:37 PM, in message 4E6FCD17.CF3 : 5 : 23711, Jason 
 Stirnaman wrote:


Thanks, Shirley! I remember seeing that before but I'll look more closely now. 
I know what I'm describing is also known, typically, as a data warehouse. I 
guess I'm trying to steer around the usual solutions in that space. We do have 
an Oracle-driven data warehouse on campus, but the project is in heavy 
transition right now and we still had to do a fair amount of work ourselves 
just to get a few data sources into it.


Jason Stirnaman
Biomedical Librarian, Digital Projects
A.R. Dykes Library, University of Kansas Medical Center
jstirna...@kumc.edu
913-588-7319


 On 9/13/2011 at 04:25 PM, in message 
 can7tqjapw78rpgzpu1l5qvoj6iw9rrkmzl+yeygqbov-gzo...@mail.gmail.com, 
 Shirley Lincicum shirley.linci...@gmail.com wrote:


Jason,

Check out: http://www.needlebase.com/

It was not developed specifically for libraries, but it supports data
aggregation, analysis, web scraping, and does not require programming
skills to use.

Shirley

Shirley Lincicum
Librarian, Western Oregon University
linc...@wou.edu

On Tue, Sep 13, 2011 at 2:08 PM, Jason Stirnaman jstirna...@kumc.edu wrote:
 Does anyone have suggestions or recommendations for platforms that can 
 aggregate usage data from multiple sources, combine it with financial data, 
 and then provide some analysis, graphing, data views, etc?
 From what I can tell, something like Ex Libris' Alma would require all 
 fulfillment transactions to occur within the system.
 I'm looking instead for something like Splunk that would accept log data, 
 circulation data, usage reports, costs, and Sherpa/Romeo authority data but 
 then schematize it for data analysis and maybe push out reporting dashboards 
 (nods to Brown Library http://library.brown.edu/dashboard/widgets/all/) 
 I'd also want to automate the data retrieval, so that might consist of 
 scraping, web services, and FTP, but that could easily be handled separately.
 I'm aware there are many challenges, such as comparing usage stats, shifts in 
 journal aggregators, etc.
 Does anyone have any cool homegrown examples or ideas they've cooked up for 
 this? Pie in the sky?


 Jason
 Jason Stirnaman
 Biomedical Librarian, Digital Projects
 A.R. Dykes Library, University of Kansas Medical Center
 jstirna...@kumc.edu
 913-588-7319



Re: [CODE4LIB] Usage and financial data aggregation

2011-09-14 Thread Jason Stirnaman
Thanks, Jonathan. I vaguely recall the presentation. Looks like their code is 
available at http://code.google.com/p/metridoc/ and active. Definitely worth 
trying.


Jason Stirnaman
Biomedical Librarian, Digital Projects
A.R. Dykes Library, University of Kansas Medical Center
jstirna...@kumc.edu
913-588-7319


 On 9/14/2011 at 11:13 AM, in message 4e70d292.7000...@jhu.edu, Jonathan 
 Rochkind rochk...@jhu.edu wrote:


Yeah, I think it ends up being pretty hard to create general-purpose
solutions to this sort of thing that are both not-monstrous-to-use and
flexible enough to do what everyone wants.  Which is why most of the
'data warehouse' solutions you see end up being so terrible, in my
analysis.

I am not sure if there is any product specifically focused on library
usage/financial data -- that might end up being somewhat less monstrous,
it seems the more you focus your use case (instead of trying to provide
for general data warehouse and analysis), the more likely a software
provider can come up with something that isn't insane.

At the 2011 Code4Lib Conf,  Thomas Barker from UPenn presented on some
open source software they were developing (based on putting together
existing open source packages to be used together) to provide
library-oriented 'data warehousing'.  I was interested that he talked
about how their _first_ attempt at this ended up being the sort of
monstrous flexible-but-impossible-to-use sort of solution we're talking
about, but they tried to learn from their experience and start over,
thinking they could do better. I'm not sure what the current status of
that project is.  I'm not sure if any 2011 code4lib conf video is
available online? If it is, it doesn't seem to be linked to from the
conf presentation pages like it was in past years:

http://code4lib.org/conference/2011/barker

On 9/13/2011 5:37 PM, Jason Stirnaman wrote:
 Thanks, Shirley! I remember seeing that before but I'll look more closely now.
 I know what I'm describing is also known, typically, as a data warehouse. I 
 guess I'm trying to steer around the usual solutions in that space. We do 
 have an Oracle-driven data warehouse on campus, but the project is in heavy 
 transition right now and we still had to do a fair amount of work ourselves 
 just to get a few data sources into it.


 Jason Stirnaman
 Biomedical Librarian, Digital Projects
 A.R. Dykes Library, University of Kansas Medical Center
 jstirna...@kumc.edu
 913-588-7319


 On 9/13/2011 at 04:25 PM, in 
 messagecan7tqjapw78rpgzpu1l5qvoj6iw9rrkmzl+yeygqbov-gzo...@mail.gmail.com,
  Shirley Lincicumshirley.linci...@gmail.com  wrote:

 Jason,

 Check out: http://www.needlebase.com/

 It was not developed specifically for libraries, but it supports data
 aggregation, analysis, web scraping, and does not require programming
 skills to use.

 Shirley

 Shirley Lincicum
 Librarian, Western Oregon University
 linc...@wou.edu

 On Tue, Sep 13, 2011 at 2:08 PM, Jason Stirnamanjstirna...@kumc.edu  wrote:
 Does anyone have suggestions or recommendations for platforms that can 
 aggregate usage data from multiple sources, combine it with financial data, 
 and then provide some analysis, graphing, data views, etc?
  From what I can tell, something like Ex Libris' Alma would require all 
 fulfillment transactions to occur within the system.
 I'm looking instead for something like Splunk that would accept log data, 
 circulation data, usage reports, costs, and Sherpa/Romeo authority data but 
 then schematize it for data analysis and maybe push out reporting 
 dashboards (nods to Brown Library 
 http://library.brown.edu/dashboard/widgets/all/)
 I'd also want to automate the data retrieval, so that might consist of 
 scraping, web services, and FTP, but that could easily be handled separately.
 I'm aware there are many challenges, such as comparing usage stats, shifts 
 in journal aggregators, etc.
 Does anyone have any cool homegrown examples or ideas they've cooked up for 
 this? Pie in the sky?


 Jason
 Jason Stirnaman
 Biomedical Librarian, Digital Projects
 A.R. Dykes Library, University of Kansas Medical Center
 jstirna...@kumc.edu
 913-588-7319



Re: [CODE4LIB] Usage and financial data aggregation

2011-09-14 Thread Jason Stirnaman
PS, Here's the link for jumping to Thomas Browning's Metridoc talk: 
http://www.indiana.edu/~video/stream/launchflash.html?format=MP4&folder=vic&filename=C4L2011_session_2_20110208.mp4&starttime=3600


Jason Stirnaman
Biomedical Librarian, Digital Projects
A.R. Dykes Library, University of Kansas Medical Center
jstirna...@kumc.edu
913-588-7319


 On 9/14/2011 at 11:13 AM, in message 4e70d292.7000...@jhu.edu, Jonathan 
 Rochkind rochk...@jhu.edu wrote:


Yeah, I think it ends up being pretty hard to create general-purpose
solutions to this sort of thing that are both not-monstrous-to-use and
flexible enough to do what everyone wants.  Which is why most of the
'data warehouse' solutions you see end up being so terrible, in my
analysis.

I am not sure if there is any product specifically focused on library
usage/financial data -- that might end up being somewhat less monstrous,
it seems the more you focus your use case (instead of trying to provide
for general data warehouse and analysis), the more likely a software
provider can come up with something that isn't insane.

At the 2011 Code4Lib Conf,  Thomas Barker from UPenn presented on some
open source software they were developing (based on putting together
existing open source packages to be used together) to provide
library-oriented 'data warehousing'.  I was interested that he talked
about how their _first_ attempt at this ended up being the sort of
monstrous flexible-but-impossible-to-use sort of solution we're talking
about, but they tried to learn from their experience and start over,
thinking they could do better. I'm not sure what the current status of
that project is.  I'm not sure if any 2011 code4lib conf video is
available online? If it is, it doesn't seem to be linked to from the
conf presentation pages like it was in past years:

http://code4lib.org/conference/2011/barker

On 9/13/2011 5:37 PM, Jason Stirnaman wrote:
 Thanks, Shirley! I remember seeing that before but I'll look more closely now.
 I know what I'm describing is also known, typically, as a data warehouse. I 
 guess I'm trying to steer around the usual solutions in that space. We do 
 have an Oracle-driven data warehouse on campus, but the project is in heavy 
 transition right now and we still had to do a fair amount of work ourselves 
 just to get a few data sources into it.


 Jason Stirnaman
 Biomedical Librarian, Digital Projects
 A.R. Dykes Library, University of Kansas Medical Center
 jstirna...@kumc.edu
 913-588-7319


 On 9/13/2011 at 04:25 PM, in 
 messagecan7tqjapw78rpgzpu1l5qvoj6iw9rrkmzl+yeygqbov-gzo...@mail.gmail.com,
  Shirley Lincicumshirley.linci...@gmail.com  wrote:

 Jason,

 Check out: http://www.needlebase.com/

 It was not developed specifically for libraries, but it supports data
 aggregation, analysis, web scraping, and does not require programming
 skills to use.

 Shirley

 Shirley Lincicum
 Librarian, Western Oregon University
 linc...@wou.edu

 On Tue, Sep 13, 2011 at 2:08 PM, Jason Stirnamanjstirna...@kumc.edu  wrote:
 Does anyone have suggestions or recommendations for platforms that can 
 aggregate usage data from multiple sources, combine it with financial data, 
 and then provide some analysis, graphing, data views, etc?
  From what I can tell, something like Ex Libris' Alma would require all 
 fulfillment transactions to occur within the system.
 I'm looking instead for something like Splunk that would accept log data, 
 circulation data, usage reports, costs, and Sherpa/Romeo authority data but 
 then schematize it for data analysis and maybe push out reporting 
 dashboards (nods to Brown Library: http://library.brown.edu/dashboard/widgets/all/)
 I'd also want to automate the data retrieval, so that might consist of 
 scraping, web services, and FTP, but that could easily be handled separately.
 I'm aware there are many challenges, such as comparing usage stats, shifts 
 in journal aggregators, etc.
 Does anyone have any cool homegrown examples or ideas they've cooked up for 
 this? Pie in the sky?


 Jason
 Jason Stirnaman
 Biomedical Librarian, Digital Projects
 A.R. Dykes Library, University of Kansas Medical Center
 jstirna...@kumc.edu
 913-588-7319



Re: [CODE4LIB] Usage and financial data aggregation

2011-09-14 Thread Jason Stirnaman
Correction: Thomas Barker. Sorry. 


PS, Here's the link for jumping to Thomas Browning's Metridoc talk: 
http://www.indiana.edu/~video/stream/launchflash.html?format=MP4&folder=vic&filename=C4L2011_session_2_20110208.mp4&starttime=3600


Jason Stirnaman
Biomedical Librarian, Digital Projects
A.R. Dykes Library, University of Kansas Medical Center
jstirna...@kumc.edu
913-588-7319


 On 9/14/2011 at 11:13 AM, in message 4e70d292.7000...@jhu.edu, Jonathan 
 Rochkind rochk...@jhu.edu wrote:


Yeah, I think it ends up being pretty hard to create general-purpose
solutions to this sort of thing that are both not-monstrous-to-use and
flexible enough to do what everyone wants.  Which is why most of the
'data warehouse' solutions you see end up being so terrible, in my
analysis.

I am not sure if there is any product specifically focused on library
usage/financial data -- that might end up being somewhat less monstrous,
it seems the more you focus your use case (instead of trying to provide
for general data warehouse and analysis), the more likely a software
provider can come up with something that isn't insane.

At the 2011 Code4Lib Conf,  Thomas Barker from UPenn presented on some
open source software they were developing (based on putting together
existing open source packages to be used together) to provide
library-oriented 'data warehousing'.  I was interested that he talked
about how their _first_ attempt at this ended up being the sort of
monstrous flexible-but-impossible-to-use sort of solution we're talking
about, but they tried to learn from their experience and start over,
thinking they could do better. I'm not sure what the current status of
that project is.  I'm not sure if any 2011 code4lib conf video is
available online? If it is, it doesn't seem to be linked to from the
conf presentation pages like it was in past years:

http://code4lib.org/conference/2011/barker

On 9/13/2011 5:37 PM, Jason Stirnaman wrote:
 Thanks, Shirley! I remember seeing that before but I'll look more closely now.
 I know what I'm describing is also known, typically, as a data warehouse. I 
 guess I'm trying to steer around the usual solutions in that space. We do 
 have an Oracle-driven data warehouse on campus, but the project is in heavy 
 transition right now and we still had to do a fair amount of work ourselves 
 just to get a few data sources into it.


 Jason Stirnaman
 Biomedical Librarian, Digital Projects
 A.R. Dykes Library, University of Kansas Medical Center
 jstirna...@kumc.edu
 913-588-7319


 On 9/13/2011 at 04:25 PM, in 
 messagecan7tqjapw78rpgzpu1l5qvoj6iw9rrkmzl+yeygqbov-gzo...@mail.gmail.com,
  Shirley Lincicumshirley.linci...@gmail.com  wrote:

 Jason,

 Check out: http://www.needlebase.com/

 It was not developed specifically for libraries, but it supports data
 aggregation, analysis, web scraping, and does not require programming
 skills to use.

 Shirley

 Shirley Lincicum
 Librarian, Western Oregon University
 linc...@wou.edu

 On Tue, Sep 13, 2011 at 2:08 PM, Jason Stirnamanjstirna...@kumc.edu  wrote:
 Does anyone have suggestions or recommendations for platforms that can 
 aggregate usage data from multiple sources, combine it with financial data, 
 and then provide some analysis, graphing, data views, etc?
  From what I can tell, something like Ex Libris' Alma would require all 
 fulfillment transactions to occur within the system.
 I'm looking instead for something like Splunk that would accept log data, 
 circulation data, usage reports, costs, and Sherpa/Romeo authority data but 
 then schematize it for data analysis and maybe push out reporting 
 dashboardsnods to Brown Library 
 http://library.brown.edu/dashboard/widgets/all/
 I'd also want to automate the data retrieval, so that might consist of 
 scraping, web services, and FTP, but that could easily be handled separately.
 I'm aware there are many challenges, such as comparing usage stats, shifts 
 in journal aggregators, etc.
 Does anyone have any cool homegrown examples or ideas they've cooked up for 
 this? Pie in the sky?


 Jason
 Jason Stirnaman
 Biomedical Librarian, Digital Projects
 A.R. Dykes Library, University of Kansas Medical Center
 jstirna...@kumc.edu
 913-588-7319


Jason Stirnaman
Biomedical Librarian, Digital Projects
A.R. Dykes Library, University of Kansas Medical Center
jstirna...@kumc.edu
913-588-7319


Re: [CODE4LIB] Usage and financial data aggregation

2011-09-14 Thread Jason Stirnaman
I'll try to update a few more. You can adjust the starttime parameter (in 
seconds) in the URL accordingly for each talk. Of course, you have to watch to 
figure out where they start.
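If you want to script the links rather than hand-edit them, here's a rough 
sketch in Python (the one-hour offset below is only an example - substitute 
whatever start time you find by watching):

    # Build a jump-to-talk link for the C4L2011 session recordings.
    # The base URL and parameter names come from the link above; the
    # example start time (3600 seconds) is just a placeholder.
    from urllib.parse import urlencode

    BASE = "http://www.indiana.edu/~video/stream/launchflash.html"

    def talk_url(filename, starttime_seconds):
        # starttime is the offset into the session video, in seconds
        params = {"format": "MP4", "folder": "vic",
                  "filename": filename, "starttime": starttime_seconds}
        return BASE + "?" + urlencode(params)

    # e.g. a talk that starts one hour into session 2:
    print(talk_url("C4L2011_session_2_20110208.mp4", 3600))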


Jason Stirnaman
Biomedical Librarian, Digital Projects
A.R. Dykes Library, University of Kansas Medical Center
jstirna...@kumc.edu
913-588-7319


 On 9/14/2011 at 12:58 PM, in message 4e70eb3b.9090...@jhu.edu, Jonathan 
 Rochkind rochk...@jhu.edu wrote:


Thanks, I added this as a comment on the code4lib talk page from the conf.

If anyone else happens to be looking for a video and finds it, and you
want to add it to the code4lib talk page in question, it would probably
be useful for findability.

In the past I think someone bulk added the URLs to all the talk pages,
but I guess that didn't happen this time, I guess actually cause there
aren't split videos of each talk but just video of the whole session?
Hmm, I guess that makes it harder to figure out what the URL to the
right minute of the talk should be, unless you're Jason. Oh well. Thanks
Jason!


Re: [CODE4LIB] Usage and financial data aggregation

2011-09-13 Thread Jason Stirnaman
Thanks, Shirley! I remember seeing that before but I'll look more closely now. 
I know what I'm describing is also known, typically, as a data warehouse. I 
guess I'm trying to steer around the usual solutions in that space. We do have 
an Oracle-driven data warehouse on campus, but the project is in heavy 
transition right now and we still had to do a fair amount of work ourselves 
just to get a few data sources into it.
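
For the "automate the retrieval, then schematize it" piece, I'm picturing 
something as simple as scripted pulls into SQLite - a rough sketch, where the 
FTP host, credentials, file name, and column layout are all invented:

    # Rough sketch: fetch a vendor usage report over FTP and load it into
    # a local SQLite table for later reporting. The host, credentials,
    # file name, and column names are all made-up placeholders.
    import csv
    import io
    import sqlite3
    from ftplib import FTP

    buf = io.BytesIO()
    with FTP("ftp.example-vendor.com") as ftp:
        ftp.login("user", "password")
        ftp.retrbinary("RETR usage_2011-08.csv", buf.write)

    rows = csv.DictReader(io.StringIO(buf.getvalue().decode("utf-8")))

    db = sqlite3.connect("library_stats.db")
    db.execute("""CREATE TABLE IF NOT EXISTS journal_usage
                  (title TEXT, month TEXT, fulltext_requests INTEGER)""")
    db.executemany("INSERT INTO journal_usage VALUES (?, ?, ?)",
                   [(r["title"], r["month"], int(r["requests"]))
                    for r in rows])
    db.commit()

Each source (proxy logs, circulation exports, invoice data) would get its own 
little loader like this, all landing in the same database for the reporting 
layer to query.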


Jason Stirnaman
Biomedical Librarian, Digital Projects
A.R. Dykes Library, University of Kansas Medical Center
jstirna...@kumc.edu
913-588-7319


 On 9/13/2011 at 04:25 PM, in message 
 can7tqjapw78rpgzpu1l5qvoj6iw9rrkmzl+yeygqbov-gzo...@mail.gmail.com, 
 Shirley Lincicum shirley.linci...@gmail.com wrote:


Jason,

Check out: http://www.needlebase.com/

It was not developed specifically for libraries, but it supports data
aggregation, analysis, web scraping, and does not require programming
skills to use.

Shirley

Shirley Lincicum
Librarian, Western Oregon University
linc...@wou.edu

On Tue, Sep 13, 2011 at 2:08 PM, Jason Stirnaman jstirna...@kumc.edu wrote:
 Does anyone have suggestions or recommendations for platforms that can 
 aggregate usage data from multiple sources, combine it with financial data, 
 and then provide some analysis, graphing, data views, etc?
 From what I can tell, something like Ex Libris' Alma would require all 
 fulfillment transactions to occur within the system.
 I'm looking instead for something like Splunk that would accept log data, 
 circulation data, usage reports, costs, and Sherpa/Romeo authority data but 
 then schematize it for data analysis and maybe push out reporting dashboards 
 (nods to Brown Library: http://library.brown.edu/dashboard/widgets/all/)
 I'd also want to automate the data retrieval, so that might consist of 
 scraping, web services, and FTP, but that could easily be handled separately.
 I'm aware there are many challenges, such as comparing usage stats, shifts in 
 journal aggregators, etc.
 Does anyone have any cool homegrown examples or ideas they've cooked up for 
 this? Pie in the sky?


 Jason
 Jason Stirnaman
 Biomedical Librarian, Digital Projects
 A.R. Dykes Library, University of Kansas Medical Center
 jstirna...@kumc.edu
 913-588-7319



Re: [CODE4LIB] Survey on Uses of Cloud Computing and Virtualization in Libraries

2011-08-26 Thread Jason Stirnaman
Hey, Erik. I'd be happy to complete the survey, but I feel I should let you 
know that it doesn't jibe with our environment and we're probably not the only 
ones. We have a mix of support staffing scenarios. Nearly everything is 
virtualized now. In some cases, campus IT spins up a virtual server 
specifically for our use and we manage it from there. In other cases, the web 
site for example, they manage it completely. In other cases, we have things 
hosted in the cloud with varying levels of management responsibility. As I 
began the survey, I got stuck immediately and questioned what value my response 
would have and how accurate it would be.  
It's just not as clear-cut as the survey assumes.  

Regards, 
Jason


Jason Stirnaman
Biomedical Librarian, Digital Projects
A.R. Dykes Library, University of Kansas Medical Center
jstirna...@kumc.edu
913-588-7319


 On 8/25/2011 at 02:04 PM, in message 
 camgfwb97+xemuvwvjepcke0sjffvnsz8c+-ygpou0m2g254...@mail.gmail.com, Erik 
 Mitchell mitch...@gmail.com wrote:


Please forgive cross posting

This research study is about cloud computing and virtualization
adoption in libraries.  It asks questions about the level of adoption
and  factors that enable or inhibit the use of these technologies in
library environments.

The survey is open to anyone who works with IT related to libraries
(e.g., systems departments, desktop support, campus IT department
supporting the library, etc.).

Even if your library does not use cloud computing or virtualization
technologies your input is still valuable for understanding the
landscape of this technology adoption in libraries.

To take the survey please follow this link
https://uncodum.qualtrics.com/SE/?SID=SV_6mmaLbFa2El3trK

Erik Mitchell
Assistant Professor
College of Information Studies
University of Maryland College Park


Re: [CODE4LIB] Do you enjoy the Code4Lib conference? Why not sponsor it?

2011-01-07 Thread Jason Stirnaman
Kevin, 
Marketing genius to send this out the morning of payday!  Thanks for running 
with it! 

Jason


Jason Stirnaman
Biomedical Librarian, Digital Projects
A.R. Dykes Library, University of Kansas Medical Center
jstirna...@kumc.edu
913-588-7319


 On 1/7/2011 at 09:38 AM, in message 
 aanlktin6zxxpeltzk6shifchy3vgchjkn6wvbspwa...@mail.gmail.com, Kevin S. 
 Clarke kscla...@gmail.com wrote:


Hi all,

On the Code4Lib Conference planning list, Dan Chudnov suggested that
we should create a Code4Lib Community sponsorship opportunity.  We
all appreciate the sponsorships of our corporate sponsors, but since
the Code4Lib Conference is for, by, and of the people of Code4Lib,
wouldn't it be a great idea if we were also sponsors for it?  I'm not
sure if we're too late to get something like The Code4Lib Community
on the t-shirt as a sponsor, but I've created a pool at ChipIn.com for
those of us who would like to pool our resources to become a
collective sponsor of the conference.

If this idea interests you, you can contribute at:
  http://code4lib.chipin.com/code4lib-conference

If you've enjoyed attending in the past and would like to express your
love (and/or appreciation) for Code4Lib and the annual conference,
this is a great way.  I'm acting as the coordinator of the ChipIn fund
and will pass the collected money onto this year's local hosts so that
it can be used or, at the very least, be passed onto next year's host.

So far we have 4 contributors pitching in $300 (30% of the way to our
goal of $1000!).  Feel free to contribute any amount you'd like ($5,
$10, $20, $100)... that's the beauty of pooling our resources, every
bit adds to the total.

Thanks for considering this!
Kevin


Re: [CODE4LIB] Hotel reservations

2010-12-13 Thread Jason Stirnaman
Me too when confirming, after it shows the list of rooms available.


Jason Stirnaman
Biomedical Librarian, Digital Projects
A.R. Dykes Library, University of Kansas Medical Center
jstirna...@kumc.edu
913-588-7319


 On 12/13/2010 at 11:54 AM, in message 
 aanlkti=c+xq_-znr8=cencg3p35p+gesjgkefhwdl...@mail.gmail.com, Mark A. 
 Matienzo m...@matienzo.org wrote:


I seem to be getting a ROOM UNAVAILABLE for just about every rate
listed for the Biddle Hotel using the online reservation system.

Mark A. Matienzo
Digital Archivist, Manuscripts and Archives
Yale University Library


[CODE4LIB] Job posting: Applications Administrator, University of Kansas Medical Center

2010-11-29 Thread Jason Stirnaman
Job Ad
Applications Administrator 

A.R. Dykes Library at the University of Kansas Medical Center is recruiting for 
an Application Administrator with an emphasis on planning, implementing, and 
supporting new and existing applications.  This position (position #J0184705) 
is unclassified and will work closely with the Digital Projects Librarian, 
other application/system managers, and library staff to ensure that technology 
supports the goals of the library and University.   

Interested applicants must apply online at 
https://jobs.kumc.edu/applicants/jsp/shared/Welcome_css.jsp. Go to Information 
Technology, Application Administrator, position #J0184705

Regards,
Jason


Jason Stirnaman
Biomedical Librarian, Digital Projects
A.R. Dykes Library, University of Kansas Medical Center
jstirna...@kumc.edu
913-588-7319


[CODE4LIB] Job Posting: Web Developer / Designer

2010-07-19 Thread Jason Stirnaman
University of Kansas Medical Center, Kansas City, KS
 
Web Developer/Designer
 
This person will work closely with Dykes Library (http://library.kumc.edu) 
staff, faculty and other Information Resources units to raise the visibility of 
expertise, research, publications, grey literature, and collections on the KUMC 
web site. Requires working closely with librarians and library personnel to 
understand their information needs and then seek solutions for meeting those 
needs. Requires working with diverse web applications and data 
sources to provide seamless user services. 
 
Duties:
- Extend BibApp (http://www.bibapp.org), a Ruby on Rails and Apache Solr web application, to better meet the needs of KUMC.
- Customize DSpace (http://www.dspace.org) and assist with XML/XSLT theme development.
- Customize sites in a content management system using XSLT.
- Assist departments, faculty, and staff with integrating RSS feeds, Solr web services, or other application data into their websites.
- Provide guidance for integrating repository and discovery applications, including the library catalog, journal holdings, online journals, DSpace, digital asset management, bibliographic databases, and search engines.
 
Visit 
https://jobs.kumc.edu/applicants/jsp/shared/position/JobDetails_css.jsp?postingId=371769
 for a complete description and application
 
 
 
 
Jason Stirnaman
Biomedical Librarian, Digital Projects
A.R. Dykes Library, University of Kansas Medical Center
jstirna...@kumc.edu
913-588-7319


[CODE4LIB] Job Posting: Web Developer / Designer

2010-05-19 Thread Jason Stirnaman
Web Developer / Designer
A.R. Dykes Library/Internet Development
University of Kansas Medical Center, Kansas City, KS

See the full posting at 
https://jobs.kumc.edu/applicants/jsp/shared/position/JobDetails_css.jsp?postingId=371066

Position Summary: This person will work closely with Dykes Library staff, 
faculty and other Information Resources units to raise the visibility of 
expertise, research, publications, grey literature, and collections on the KUMC 
web site. Requires working closely with librarians and library personnel to 
understand their information needs and then seek solutions for meeting those 
needs.  Requires working with diverse web applications and data 
sources to provide seamless user services.

Required Qualifications: 2 or more years experience developing database-driven 
web applications
Degree from accredited college or university. An equivalent combination of 
education and experience may be considered - each year of additional experience 
may be substituted for one year of education.
Experience programming with Ruby
Experience working with XSLT and XPath
Experience developing real-world applications using Ruby on Rails, Java, .NET, 
or PHP and Postgresql, MySQL, SQL Server, or Oracle
Experience working in a Unix environment
Willingness to work in a collaborative team setting
Excellent communication, analytical, and problem-solving skills
  


Re: [CODE4LIB] PHP bashing (was: newbie)

2010-03-26 Thread Jason Stirnaman
Those things cost a (relative) fortune. You can find cheaper versions at
Amazon.
Oh, and please never use duck tape for stage applications like taping
extension cords and mic cables to the floor. Gaff tape is tougher and
leaves no sticky residue.
 
Jason
 
Jason Stirnaman
Biomedical Librarian, Digital Projects
A.R. Dykes Library, University of Kansas Medical Center
jstirna...@kumc.edu
913-588-7319


 On 3/25/2010 at 1:58 PM, in message
9bd043651003251158k767fc446he67589cebd5f2...@mail.gmail.com, Rosalyn
Metz rosalynm...@gmail.com wrote:

Simon you can purchase the dongles at the Mac store (did it for
another conference the week after code4lib).

Also thank you all for the duck tape info.  This explains why the duck
tape i used to attach the dryer vent ducts didn't work.  i shall now
go by the proper tape.

and now this conversation has completely devolved.




On Thu, Mar 25, 2010 at 2:42 PM, Simon Spero sesunc...@gmail.com
wrote:
 The proper name is actually Duck Tape
(http://www.nytimes.com/2003/03/02/magazine/the-way-we-live-now-3-02-03-on-language-why-a-duck.html?sec=&spon=&pagewanted=all),
 yet unlike Duck Typing, it makes everything it touches more
reliable.
 Discuss.

 However,  C4L10  exposed a major gap in my meeting-tech go-bag;  I
don't
 have any new style mac dongles- just the DVI to VGA one.  People need
to
 send me one of each of the new generation of macbooks, so I can be
prepared.

 Simon



Re: [CODE4LIB] Code4Lib Midwest?

2010-03-06 Thread Jason Stirnaman
I think maybe I started the chatter in IRC, but now it looks as if I
should redefine my region.  Being in KC, I don't see myself or other MO/KS
folks driving to Ohio and back in a day or a weekend. I could be wrong.
Thoughts? Any interest in a Code4Lib-Big-12-ish? 
Kudos to Jonathan for running with this. I'm a drummer too, BTW...in
the event of a C4L11 Battle of the Bands.

Jason


-- 
Jason Stirnaman
Biomedical Librarian, Digital Projects
A.R. Dykes Library, University of Kansas Medical Center
jstirna...@kumc.edu
913-588-7319


 On 3/5/2010 at 9:37 AM, in message
acf139531003050737m54303c1sc9fa64096e9c9...@mail.gmail.com, Bill
Dueber
b...@dueber.com wrote:
 I'm pretty sure I could make it from Ann Arbor!
 
 On Fri, Mar 5, 2010 at 10:12 AM, Ken Irwin kir...@wittenberg.edu
wrote:
 
 I would come from Ohio to wherever we choose. Kalamazoo would suit
me just
 fine; I've not been back there in entirely too long!
 Ken

  -Original Message-
  From: Code for Libraries [mailto:code4...@listserv.nd.edu] On
Behalf Of
  Scott Garrison
  Sent: Friday, March 05, 2010 8:37 AM
  To: CODE4LIB@LISTSERV.ND.EDU 
  Subject: Re: [CODE4LIB] Code4Lib Midwest?
 
  +1
 
  ELM, I'm happy to help coordinate in whatever way you need.
 
  Also, if we can find a drummer, we could do a blues trio (count me
in on
 bass). I
  could bring our band's drummer (a HUGE ND fan) down for a day or
two if
  needed--he's awesome.
 
  --SG
  WMU in Kalamazoo
 
  - Original Message -
  From: Eric Lease Morgan emor...@nd.edu
  To: CODE4LIB@LISTSERV.ND.EDU 
  Sent: Thursday, March 4, 2010 4:38:53 PM
  Subject: Re: [CODE4LIB] Code4Lib Midwest?
 
  On Mar 4, 2010, at 3:29 PM, Jonathan Brinley wrote:
 
2. share demonstrations
  
   I'd like to see this be something like a blend between lightning
talks
   and the ask anything session at the last conference
 
  This certainly works for me, and the length of time of each
talk
 would/could be
  directly proportional to the number of people who attend.
 
 
4. give a presentation to library staff
  
   What sort of presentation did you have in mind, Eric?
  
   This also raises the issue of weekday vs. weekend. I'm game for
   either. Anyone else have a preference?
 
  What I was thinking here was a possible presentation to library
 faculty/staff
  and/or computing faculty/staff from across campus. The
presentation could
 be
  one or two cool hacks or solutions that solved wider, less geeky
 problems.
  Instead of tweaking Solr's term-weighting algorithms to index
 OAI-harvested
  content it would be making journal articles easier to find.
This would
 be an
  opportunity to show off the good work done by institutions outside
Notre
 Dame.
  A prophet in their own land is not as convincing as the expert
from afar.
 
  I was thinking it would happen on a weekday. There would be more
stuff
 going
  on here on campus, as well as give everybody a break from their
normal
 work
  week. More specifically, I would suggest such an event take place
on a
 Friday
  so the poeple who stayed over night would not have to take so many
days
 off of
  work.
 
 
5. have a hack session
  
   It would be good to have 2 or 3 projects we can/should work on
decided
   ahead of time (in case no one has any good ideas at the time),
and
   perhaps a couple more inspired by the earlier presentations.
 
 
 
  True.
 
  --
  ELM
  University of Notre Dame

 


Re: [CODE4LIB] exercising at code4libcon next week

2010-02-19 Thread Jason Stirnaman
Cary,
Where are you renting from?

jason

-- 
Jason Stirnaman
Biomedical Librarian, Digital Projects
A.R. Dykes Library, University of Kansas Medical Center
jstirna...@kumc.edu
913-588-7319


 On 2/18/2010 at 5:08 PM, in message
17c28acc1002181508w52810797m1a2daa5b5a9ef...@mail.gmail.com, Cary
Gordon
listu...@chillco.com wrote:
 I haven't given any thought to route. I am renting a road bike, so I
 may look for something challenging so that I can get a good ride in
a
 shorter time. I may hedge by renting a trainer.
 
 Warning, I am a vision in lycra. Shield your eyes.
 
 Cary
 
 On Thu, Feb 18, 2010 at 12:54 PM, Mitchell, Erik mitch...@wfu.edu
wrote:
 Excellent - not sure what the weather holds but perhaps a bike ride
can be
 had.  Did you have a route in mind?

 E

 On Thu, Feb 18, 2010 at 3:20 PM, Cary Gordon listu...@chillco.com
wrote:

 BTW, I will ride in light rain, but not if there is ice on the
road. I
 will be leaving the hotel at 7 AM.

 On Thu, Feb 18, 2010 at 12:16 PM, Cary Gordon
listu...@chillco.com
 wrote:
  I am going to be attempting to ride a bike Tue-Fri mornings,
early.
  Let me know if you want to join. Probably about 15-20 miles. I
ride
  about 18 mph, but I can go faster or slower, NP.
 
  Caru
 
  On Wed, Feb 17, 2010 at 7:04 AM, Erik Hatcher
erikhatc...@mac.com
 wrote:
  code4libcon is about here, yay!
 
  I'm kinda in a fitness craze right now, and will be doing some
training
 in
  Asheville.
 
  Monday night, 6:30pm, I'm going to the CrossFit Asheville gym -
  http://www.crossfitasheville.com/ 
  I contacted them and they said that was a good time to come. 
I'll
 likely go
  back on Wednesday night at the same time.  (dunno if they'll
charge some
  fee, though).  There were a couple of folks that mentioned
interest, and
 I
  can carpool up to 3 others.  If you've never done it before,
now's not
 the
  time to start and I'm sure they'll only let experienced folks
partake,
 but I
  imagine those curious about the insanity are welcome to
spectate.
 
  Jogging - what say folks up for runs meet in the hotel lobby at
6:30am
 any
  day next week.  I'm game for a relatively short run (2-3 miles)
both
 Monday
  and Wednesday.  I fleshed out a daily signup on the wiki.  If
it's too
 cold
  or treacherous out, I'll just hit the treadmill or rowing
machine if
 they
  have it.
 
 
 

http://wiki.code4lib.org/index.php/C4L2010_social_activities#Working_Out

 
  I'm still debating how many pushups folks must do at the Black
Belt
  preconference, and which kata to teach ;)
 
 Erik
 
 
 
 
  --
  Cary Gordon
  The Cherry Hill Company
  http://chillco.com 
 



 --
 Cary Gordon
 The Cherry Hill Company
 http://chillco.com 




 --
 Erik Mitchell, Ph.D.
 Assistant Director for Technology Services
 Z. Smith Reynolds Library
 Wake Forest University
 http://erikmitchell.info 

 


Re: [CODE4LIB] exercising at code4libcon next week

2010-02-19 Thread Jason Stirnaman
NM. Just saw the previous message. <dork/>

-- 
Jason Stirnaman
Biomedical Librarian, Digital Projects
A.R. Dykes Library, University of Kansas Medical Center
jstirna...@kumc.edu
913-588-7319


 


Re: [CODE4LIB] preconference proposals - solr

2009-11-13 Thread Jason Stirnaman
+1 I'd never leave the dojo.

 Agreed on morning and afternoon black belt sessions for all those
who
 desire dark Solr.

-- 

Jason Stirnaman
 


 On 11/13/2009 at 11:01 AM, in message
20091113170101.gd4...@manheim.library.drexel.edu, Gabriel Farrell
g...@rc98.net wrote:
 On Fri, Nov 13, 2009 at 11:42:53AM -0500, Walter Lewis wrote:
 On 13 Nov 09, at 11:25 AM, Bess Sadler wrote:
 
  1. Morning session - solr white belt
  [delightful descriptions snipped]
  2. Morning session - solr black belt
  3. Afternoon session - Blacklight
 
 Is there any chance that the black belt session needs to be/should
be a two 
 parter and run through the afternoon as well?  ... or repeat for
those who 
 have just acquired their white belts but are headed in different
directions?
 
 Agreed on morning and afternoon black belt sessions for all those
who
 desire dark Solr.
 
 
 Gabriel


Re: [CODE4LIB] preconference proposals

2009-11-10 Thread Jason Stirnaman
+1
I'm managing a Solr-powered app, but haven't had much time to learn how
to use Solr better. Hands-on would be great as long as the attendees
understood the prereqs and were ready to jump in and work - not
expecting time and assistance with downloading, installing, etc. 
Speaking as one of the problem children (Windows users), I think that
pretty much derailed the VuFind session last year.
 
Jason
 
Jason Stirnaman
 


 Bess Sadler eo...@virginia.edu 11/10/2009 8:41 AM 
+1 from me on this, no surprise. :)

What if we did a next gen catalog day thing? We could spend the  
morning on solr, which many projects have in common, in the morning,  
and then in the afternoon have sessions that build on top of solr  
(vufind, blacklight, kochief, etc.) We were going to submit a proposal 

for a blacklight pre-conference regardless, but it makes a lot of  
sense to do something more coordinated, and it particularly makes  
sense to ensure that as many people as possible can take advantage of 

Erik's presence and expertise.

One goal I also have for this conference is to solicit community  
feedback on how to improve solrmarc and marc4j, which are used by both 

blacklight and vufind (and a few other projects at this point...  
yes?). A solr-focused session might be a good venue for some of that  
discussion as well... as we discuss neat things to do with solr we  
might ask how do we do that with solrmarc. Or maybe that's a separate 

discussion. It's up to the community.

Bess

On 10-Nov-09, at 5:38 AM, Erik Hatcher wrote:

 I'm interested presenting something Solr+library related at c4l10.
 I'm soliciting ideas from the community on what angle makes the most
 sense.  At first I was thinking a regular conference talk proposal,
 but perhaps a preconference session would be better.  I could be
game
 for a half day session.  It could be either an introductory Solr
 class, get up and running with Solr (+ Blacklight, of course).  Or
 maybe a more advanced session on topics like leveraging dismax, Solr
 performance and scalability tuning, and so on, or maybe a freer form
 Solr hackathon session where I'd be there to help with hurdles or
 answer questions.

 Thoughts?  Suggestions?   Anything I can do to help the library
world
 with Solr is fair game - let me know.

 Thanks,
 Erik

 On Nov 9, 2009, at 9:55 PM, Kevin S. Clarke wrote:

 Hi all,

 It's time again to collect proposals for Code4Lib 2010
preconference
 sessions.  We have space for six full day sessions (or 12 half day
 sessions (or some combination of the two)).  If we get more than we
 can accommodate, we'll vote... but I don't think we will (take that 

 as
 a challenge to propose lots of interesting preconference sessions).
 Like last year, attendees will pay $12.50 for a half day or $25 for
 the whole day.  The preconference space will be in the hotel so
we'll
 have wireless available.  If you have a preconference idea, send it 

 to
 this list, to me, or to the code4libcon planning list.  We'll put  
 them
 up on the wiki once we start receiving them.  Some possible ideas? 
A
 Drupal in libraries session? LOD part two?  An OCLC webservices
 hackathon?  Send the proposals along...

 Thanks,
 Kevin


Re: [CODE4LIB] openphi and/or healthlibrarian

2009-07-10 Thread Jason Stirnaman
Thanks, Eric. I hadn't heard of these. We'll check it out.
Another interesting one is Mednar (who named this thing?) - a medical
open access federated search engine launched by fed search veterans Deep
Web.

http://mednar.com/mednar/

Jason
-- 



 On 7/9/2009 at 6:18 PM, in message
bb97ff1f-b194-4519-9723-4a73480de...@nd.edu, Eric Lease Morgan
emor...@nd.edu wrote:
 How many people here work in a library where medicine is a topic of 

 interest, and how many of those are familiar with OpenPHI [1] and/or 

 HealthLibrarian [2] ?
 
 OpenPHI is a start-up company that is using open source software to  
 harvest and index open access content for the purposes of creating  
 useful indexes to medical information. For example, they have   
 collected content from MEDLINE, Biomed, and other peer-reviewed sites
 
 to create a pretty comprehensive and competitive index called  
 HealthLibrarian. Lots o' content!
 
 I'm not sure of all the details, but the folks at OpenPHI are looking
 
 for librarians like ourselves (hackers) to integrate HealthLibrarian 

 into their library offerings -- a possible alternative or supplement 

 to the indexes we are already providing. I think they are offering  
 free trials to their index through Web Services interfaces sans the 

 advertising, etc.
 
 Being a new company and very open to all things... open, I believe
the  
 folks at OpenPHI are open to constructive criticism on how to provide
 
 viable library services to... libraries.
 
 [1] http://www.openphi.com/ 
 [2] http://www.healthlibrarian.net/


Re: [CODE4LIB] Digital imaging questions

2009-06-19 Thread Jason Stirnaman
 and then write a parser (in Python or Ruby) that
 will read the values from that spreadsheet and produce a dublin_core.xml

Sai,

That work has already been done in PHP (http://tds.terkko.helsinki.fi/utils/). I 
just used it for a small project. I tweaked it a tiny bit and tried to clarify 
the documentation, but otherwise it works really well.
Regarding the DSpace metadata registry, I'd recommend sending your question to 
the dspace-tech mailing list.
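
If you would rather script it yourself along the lines Andrew suggests below, 
here is a minimal sketch in Python of the CSV-to-dublin_core.xml idea; the 
column names ('title', 'creator'), the qualifier choices, and the paths are 
placeholders for whatever is actually in your spreadsheet:

    # Minimal CSV -> DSpace dublin_core.xml sketch. Column names, the
    # qualifier choices, and the directory layout are placeholders.
    import csv
    import os
    from xml.etree import ElementTree as ET

    def write_dublin_core(row, item_dir):
        dc = ET.Element("dublin_core")
        fields = [("title", "none", row.get("title")),
                  ("contributor", "author", row.get("creator"))]
        for element, qualifier, value in fields:
            if value:
                node = ET.SubElement(dc, "dcvalue",
                                     element=element, qualifier=qualifier)
                node.text = value
        os.makedirs(item_dir, exist_ok=True)
        ET.ElementTree(dc).write(os.path.join(item_dir, "dublin_core.xml"),
                                 encoding="utf-8", xml_declaration=True)

    with open("herbarium.csv", newline="", encoding="utf-8") as f:
        for i, row in enumerate(csv.DictReader(f)):
            write_dublin_core(row, os.path.join("import", "item_%03d" % i))

You would still need to drop the image files and a contents file into each 
item directory before handing the batch to the DSpace importer (see the 
import-format guide Andrew links below).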

Jason
-- 



 On 6/18/2009 at 1:38 PM, in message
97d9c0c70906181138x4e15e044q939ed862f9e11...@mail.gmail.com, Andrew Hankinson
andrew.hankin...@gmail.com wrote:
 I'm pretty sure you can add extra fields to the dublin_core.xml file and
 import it. I think I did something like this a few years ago, but I'm a bit
 fuzzy on the details.
 For the metadata creation, it might be worth your while to save the Excel
 spreadsheet to a CSV file and then write a parser (in Python or Ruby) that
 will read the values from that spreadsheet and produce a dublin_core.xml
 file. If you gather the photo files in the same location,
 you can then use the DSpace bulk importer to import everything into
 your collection.
 
 See here:
 http://www.tdl.org/wp-content/uploads/2009/04/DSpaceBatchImportFormat.pdf 
 
 You may be able to add extra fields to the search index. See here:
 http://wiki.dspace.org/index.php/Modify_search_fields 
 
 On Thu, Jun 18, 2009 at 1:32 PM, Deng, Sai sai.d...@wichita.edu wrote:
 
 Andrew and Yan,
 Thanks for the reply and the information!

 About DSpace metadata registry, we can add new schema or new elements to
 it, but the elements won’t be searchable, right? (We can change the
 input-forms.xml to make it display in the submission workflow if we will
 have item by item submission.)

 In our case, we already have the herbarium metadata in an Excel sheet created
 by the Biology Dept. It is currently in loose Darwin Core and kind of free-style.
 If I would like to do data transformation (transforming it to a mixture of DC
 and Darwin Core, possibly) and batch import the XML into DSpace, how should I
 proceed? Where should I add the Darwin Core metadata (in the dublin_core.xml
 as well)? It seems that it only has the dcvalue element.

 Sai

 -Original Message-
 From: Code for Libraries [mailto:code4...@listserv.nd.edu] On Behalf Of
 Andrew Hankinson
 Sent: Thursday, June 18, 2009 11:03 AM
 To: CODE4LIB@LISTSERV.ND.EDU 
 Subject: Re: [CODE4LIB] Digital imaging questions

 Hi Sai,
 Archival Quality Images has some meaning, but it might be helpful to look
 up a standard and start your investigation for a new camera based on the
 recommendations of that standard. You might find this page from the Library
 of Congress helpful:

 http://www.digitalpreservation.gov/formats/content/still.shtml 

 I think your indication that RAW/TIFF is a pretty safe bet, but being able
 to point to an actual standard might make your case for a new camera a bit
 more convincingly.  Other factors to take into account (other than
 megapixels and format) are color reproduction, image 'noise'
 specifications,
 DPI, lighting, (and probably many other things).

 For DSpace you don't even need to map the elements of Dublin Core to
 DarwinCore. Dspace has the ability to input different schema in its
 metadata
 registry. You can then modify the inputforms.xml file in the Dspace
 config
 directory to add the appropriate fields for the additional metadata fields.

 Hope this helps!
 -Andrew

 On Thu, Jun 18, 2009 at 10:33 AM, Deng, Sai sai.d...@wichita.edu wrote:

  Hi, list,
 
 
 
  A while ago, I read on this list some interesting discussion on how to use a
  camera to produce archival-quality images. Now, I have some imaging
  questions and I think this might be a good list to turn to. Thank you in
  advance! We are trying to add some herbarium images to our DSpace. The
  specimen pictures will be taken at the Biology department and the library
 is
  responsible for depositing the images and transferring/mapping/adding
  metadata. On the testing stage, they use Fujifilm FinePix S8000fd digital
  camera
 
  (
 
 
  http://www.fujifilmusa.com/support/ServiceSupportSoftwareContent.do?dbid=874716&prodcat=871639&sscucatid=664260
 ).
  It produces 8 megapixel images, and it doesn't have raw/tiff support. It
  seems that it cannot produce archival quality images. Before we persuade
 the
  Biology department to switch their camera, I want to make sure it is
  absolutely necessary. The pictures they took look fine to the human eye;
 see
  an example at:
 
 http://library.wichita.edu/techserv/test/herbarium/Astenophylla1-02710.jpg 
 
  In order to make master images from a camera, should it be capable of
  producing RAW or TIFF images of 12 megapixels or above?
 
 
 
  A related archiving question, the biology field standard is DarwinCore,
  however, DSpace doesn't support it. The Biology Dept. already has some
 data
  in spreadsheet. In this case, when it is impossible to map all the
 elements
  to Dublin Core, is it a 

Re: [CODE4LIB] Serials Solutions Summon

2009-04-21 Thread Jason Stirnaman
Agree. When you step outside libraryland and into corporate/enterprise
IT (thinking Autonomy, FAST, etc.), federated search is often used
to refer to aggregated local indexing of distinct databases.

Jason
-- 

Jason Stirnaman
Digital Projects Librarian/School of Medicine Support
A.R. Dykes Library, University of Kansas Medical Center
jstirna...@kumc.edu
913-588-7319


 On 4/21/2009 at 12:56 PM, in message 49ee08d3.7010...@jhu.edu,
Jonathan
Rochkind rochk...@jhu.edu wrote:
 I think I like your term aggregated index even better than local 
 index, thanks Peter. You're right that local can be confusing as
far 
 as local to WHAT.
 
 So that's my new choice of terminology with the highest chance of
being 
 understood and least chance of being misconstrued: broadcast search

 vs. aggregated index.
 
 As we've discovered in this thread, if you say federated search 
 without qualification, different people _will_ have different ideas
of 
 what you're talking about, as apparently the phrase has been 
 historically used differently by different people/communities.
 
 I think broadcast search and aggregated index are specific enough

 that it would be harder for reasonable people to misconstrue -- and 
 don't (yet?) have a history of being used to refer to different
things 
 by different people. So it's what I'm going to use.
 
 Jonathan
 
 Peter Noerr wrote:
 From one of the Federated Search vendor's perspective... 

 It seems in the broader web world we in the library world have lost

 metasearch. That has become the province of those systems (mamma,
dogpile, 
 etc.) which search the big web search engines (G,Y,M, etc.) primarily
for 
 shoppers and travelers (kayak, mobissimo, etc.) and so on. One of the

 original differences between these engines and the
library/information world 
 ones was that they presented results by Source - not combined. This
is still 
 evident in a fashion in the travel sites where you can start multiple
search 
 sessions on the individual sites.

 We use Federated Search for what we do in the library/information
space. 
 It equates directly to Jonathan's Broadcast Search which was the
original 
 term I used when talking about it about 10 years ago. Broadcast is
more 
 descriptive, and I prefer it, but it seems an uphill struggle to get
it 
 accepted.

 Fed Search has the problem of Ray's definition of Federated, to mean
a 
 bunch of things brought together. It can be broadcast search (real
time 
 searching of remote Sources and aggregation of a virtual result set),
or 
 searching of a local (to the searcher) index which is composed of
material 
 federated from multiple Sources at some previous time. We tend to use
the 
 term Aggregate Index for this (and for the Summon-type index) Mixed
content 
 is almost a given, so that is not an issue. And Federated Search
systems have 
 to undertake in real time the normalization and other tasks that
Summon will 
 be (presumably) putting into its aggregate index.

 A problem in terminology we come across is the use of local
(notice my 
 careful caveat in its use above). It is used to mean local to the
searcher 
 (as in the aggregate/meta index above), or it is used to mean local
to the 
 original documents (i.e. at the native Source).

 I can't imagine this has done more than confirm that there is no
agreed 
 terminology - which we sort of all knew. So we just do a lot of
explaining - 
 with pictures - to people.

 Peter Noerr


 Dr Peter Noerr
 CTO, MuseGlobal, Inc.

 +1 415 896 6873 (office)
 +1 415 793 6547 (mobile)
 www.museglobal.com 




   
 -Original Message-
 From: Code for Libraries [mailto:code4...@listserv.nd.edu] On
Behalf Of
 Jonathan Rochkind
 Sent: Tuesday, April 21, 2009 08:59
 To: CODE4LIB@LISTSERV.ND.EDU 
 Subject: Re: [CODE4LIB] Serials Solutions Summon

 Ray Denenberg, Library of Congress wrote:
 
 Leaving aside metasearch and broadcast search (terms invented
more
   
 recently)
 
 it  is a shame if federated has really lost its distinction
 from distributed.  Historically, a federated database is one
that
 integrates multiple (autonomous) databases so it is in effect a
   
 virtual
 
 distributed database, though a single database.I don't think
   
 that's a
 
 hard concept and I don't think it is a trivial distinction.

   
 For at least 10 years vendors in the library market have been
selling
 us
 products called federated search which are in fact
 distributed/broadcast search products.

 If you want to reclaim the term federated to mean a local index,
I
 think you have a losing battle in front of you.

 So I'm sticking with broadcast search and local index. 
Sometimes
 you need to use terms invented more recently when the older terms
have
 been used ambiguously or contradictorily.  To me, understanding the
two
 different techniques and their differences is more important than
the
 terminology -- it's just important that the terminology be
understood.
 

   


Re: [CODE4LIB] Solr for Internal Searching

2008-08-06 Thread Jason Stirnaman
I haven't used it, but there is an AJAX extension for GSA and Mini that
does faceting - they refer to it as parametric search:
http://code.google.com/p/parametric/
We use the Mini on our university web site, but not for intranet (yet).
 
Jason
-- 

Jason Stirnaman
Digital Projects Librarian/School of Medicine Support
A.R. Dykes Library, University of Kansas Medical Center
[EMAIL PROTECTED]
913-588-7319


 On 8/6/2008 at 4:00 AM, in message
[EMAIL PROTECTED], Stephens,
Owen
[EMAIL PROTECTED] wrote:
 Google Mini didn't facet when I used it (about a year ago). It is
very
 simple to setup, and required very little maintenance.
 
 The Mini had some restrictions compared to the Google Appliance, if
 these apply it would be worth looking at the differences - the
Appliance
 certainly was significantly more expensive than the Mini.
 
 Overall I'd recommend the Mini if you want something cheap, very
quick
 to get going, with a brand that users will recognise and (generally)
 trust, and you are happy to sacrifice some flexibility for these
 features.
 
 Owen
 
 Owen Stephens
 Assistant Director: eStrategy and Information Resources
 Central Library
 Imperial College London
 South Kensington Campus
 London
 SW7 2AZ
  
 t: +44 (0)20 7594 8829
 e: [EMAIL PROTECTED] 
 -Original Message-
 From: Code for Libraries [mailto:[EMAIL PROTECTED] On
Behalf
 Of
 Tim Spalding
 Sent: 06 August 2008 05:08
 To: CODE4LIB@LISTSERV.ND.EDU 
 Subject: Re: [CODE4LIB] Solr for Internal Searching
 
 Does Google Mini facet? It seems to have a concept of collections,
but
 does it facet by them?
 
 T
 
 On Wed, Aug 6, 2008 at 12:05 AM, Bill Dueber [EMAIL PROTECTED]
wrote:
  At UMich, we use space on a Google Appliance as our site search
  (different setups for internal vs. public pages) and have been
 pretty
  happy. I've been able to abuse the google ads space to our
benefit
  -- e.g., go to http://lib.umich.edu/ and search Web Pages for
 grad
  (get today's hours) or 'dueber' (find me).
 
  On Tue, Aug 5, 2008 at 11:24 PM, Nate Vack [EMAIL PROTECTED]
wrote:
  I know this is code4lib, not buystuff4lib, but the Google Mini
is
  reputed to be rather quick, bulletproof and configurable, and
 starts
  at $3k. For example, it works nicely with lots of file formats
  (including Office documents) out of the box. And works with LDAP
 and
  NTLM for authentication and authorization.
 
  I suspect it'll probably be challenging to deliver a quality
search
  solution for a lower total cost.
 
  Of course, this all depends on what your intranet looks like on
the
  inside. I've seen 'intranet' mean a things that would call for
 wildly
  different search solutions.
 
  So... solr is great, but this question doesn't contain nearly
 enough
  information to answer whether it's a good fit for your task at
 hand.
 
  Cheers
  -Nate
 
  On Tue, Aug 5, 2008 at 6:03 PM, Cloutman, David
  [EMAIL PROTECTED] wrote:
  Today my boss asked me to come up with a solution that would
let
 us
  index and search our intranet. I was already thinking of using
 Solr
 on
  our public Web site we are building, and thought this might be
a
 good
  opportunity to knock two items off the to-do list with the same
  technology. I know there was a preconference session on Solr
this
 year,
  and I have the sense that this is gaining traction in the
library
  community. Is there any reason why I shouldn't do this?
 
  Thanks,
 
  - David
 
 
 
  ---
  David Cloutman [EMAIL PROTECTED]
  Electronic Services Librarian
  Marin County Free Library
 
  Email Disclaimer:
 http://www.co.marin.ca.us/nav/misc/EmailDisclaimer.cfm 
 
 
 
 
 
  --
  Bill Dueber
  Library Systems Programmer
  University of Michigan Library
 
 
 
 
 --
 Check out my library at
 http://www.librarything.com/profile/timspalding


Re: [CODE4LIB] Digital Collections management software

2008-07-17 Thread Jason Stirnaman
Harish,

OpenCollection (http://www.opencollection.org/ ) is open source.  It
was mentioned at JASIG.  I'm hoping to install and try it out here
soon.

Jason
-- 

Jason Stirnaman
Digital Projects Librarian/School of Medicine Support
A.R. Dykes Library, University of Kansas Medical Center
[EMAIL PROTECTED]
913-588-7319


 On 7/17/2008 at 11:12 AM, in message
[EMAIL PROTECTED], Harish
Maringanti [EMAIL PROTECTED] wrote:
 Hi all,
 
 I've heard of Contentdm from OCLC that many institutions are using to
manage
 their digital collections. If you are using Contentdm would you mind
sharing
 some of the pros & cons of using it (either to the group or off the
list).
 
 Are there any other viable products either commercial or open source
that
 can be considered to manage digital collections. Particularly in the
open
 source domain are there any good applications to manage image
collections?
 
 Thanks in advance,
 Harish
 
 
 Harish Maringanti
 Systems Analyst
 K-State Libraries
 (785)532-3261


Re: [CODE4LIB] Enterprise Search and library collection

2008-07-11 Thread Jason Stirnaman
 In short, I think a Google Appliance is an expensive but viable
option.
Relative to other commercial products in the space, the GA or G-mini is
actually very inexpensive.  Another option to add to Eric's list is the
All Access Connector which adds MuseGlobal's fed search technology to
the Google appliance.  Of course, it also add $40K or more to the total
price.
http://wire.jstirnaman.com/2008/05/23/federated-search-for-google-search-appliance/

Jason
-- 

Jason Stirnaman
Digital Projects Librarian/School of Medicine Support
A.R. Dykes Library, University of Kansas Medical Center
[EMAIL PROTECTED]
913-588-7319


 On 7/10/2008 at 10:25 PM, in message
[EMAIL PROTECTED], Eric Lease Morgan
[EMAIL PROTECTED] wrote:
 At the risk of interpreting the original question incorrectly, we  
 have had decent success using the Google Search Appliance to  
 facilitate search across the enterprise (university):
 
* Buy the Appliance.
* Feed it one or more URLs.
* Wait for it to crawl.
* Customize the user interface.
* Allow people to use it.
 
 While we haven't done so, it would not be too difficult to implement 

 a sort of federated search within the Appliance's interface. This  
 could be done in a number of ways:
 
1. Acquire bibliographic data and
   feed to directly to the Appliance
   via the (poorly) documented SQL
   interface.
 
2. Acquire bibliographic data, save
   it as HTML files, and allow the
   Appliance to crawl the HTML.
 
3. License access to bibliographic
   making sure it is accessible through
   some sort of API, and write a Google
   OneBox module that queries the data,
   and returns results as a part of a
   normal Google Appliance search.
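
Option #2 above can be as simple as a little script along these lines - a 
rough sketch, where the bibliographic records and output directory are 
invented placeholders:

    # Rough sketch of "save bibliographic data as HTML and let the
    # Appliance crawl it". The records and paths below are made up.
    import html
    import os

    records = [
        {"id": "b1234", "title": "Example Title One", "author": "Author, A."},
        {"id": "b5678", "title": "Example Title Two", "author": "Author, B."},
    ]

    os.makedirs("crawlable", exist_ok=True)
    for rec in records:
        page = ("<html><head><title>%s</title></head>"
                "<body><h1>%s</h1><p>%s</p></body></html>"
                % (html.escape(rec["title"]), html.escape(rec["title"]),
                   html.escape(rec["author"])))
        path = os.path.join("crawlable", rec["id"] + ".html")
        with open(path, "w", encoding="utf-8") as f:
            f.write(page)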
 
 The larger Google Appliance costs about $30,000 but you purchase it, 

 not license it. No annual fees. That will buy you the ability to  
 index 500,000 documents. When it comes to a bibliographic database  
 (such as a subject index or a library catalog) that is not really  
 very much.
 
 We here at Notre Dame did implement Option #3, but it queries the  
 local LDAP sever to return names and addresses of people, not  
 bibliographic citations. [1, 2] I did write a OneBox module to query 

 our catalog, but we haven't implemented it, yet. It will probably  
 appear as a part of the library's Search This Site functionality.
 
 In short, I think a Google Appliance is an expensive but viable
option.
 
 [1] Search for a name (ex: Hesburgh) at http://search.nd.edu/ 
 [2] OneBox source code - http://tinyurl.com/6ktxot


Re: [CODE4LIB] Enterprise Search and library collection [SEC=UNCLASSIFIED]

2008-07-08 Thread Jason Stirnaman
Renata,

We haven't implemented anything yet, but we did recently issue an RFI
for exactly this and evaluated the vendors who responded.  Our biggest
challenge is still getting sufficient institutional buy-in.  So, we will
likely be conducting small, focused pilots with two different vendors. 
I would be happy to share our RFI and some of our evaluation results
with you off list.

Jason
-- 

Jason Stirnaman
Digital Projects Librarian/School of Medicine Support
A.R. Dykes Library, University of Kansas Medical Center
[EMAIL PROTECTED]
913-588-7319


 On 7/8/2008 at 1:11 AM, in message
[EMAIL PROTECTED],
Dyer,
Renata [EMAIL PROTECTED] wrote:
 Our organisation is looking into getting an enterprise search and I
was
 wondering how many libraries out there have incorporated their library
 collection into a 'federated' search that would retrieve the whole lot:
 library collection items, external sources (websites, databases),
 internal documents (available on share drives and/or records
systems),
 maybe even records from other internal applications, etc.?
 
  
 I would like to hear about your experience and what is good or bad
about
 it.
  
 Please reply on or offline whichever more convenient.
  
 I'll collate answers.
  
 Thanks,
  
 Renata Dyer
 Systems Librarian
 Information Services
 The Treasury 
 Langton Crescent, Parkes ACT 2600 Australia
 (p) 02 6263 2736
 (f) 02 6263 2738
 (e) [EMAIL PROTECTED] 
 
 https://adot.sirsidynix.net.au/uhtbin/cgisirsi/ruzseo2h7g/0/0/49 
 
 

**
 Please Note: The information contained in this e-mail message 
 and any attached files may be confidential information and 
 may also be the subject of legal professional privilege.  If you are
 not the intended recipient, any use, disclosure or copying of this
 e-mail is unauthorised.  If you have received this e-mail by error
 please notify the sender immediately by reply e-mail and delete all
 copies of this transmission together with any attachments.

**


Re: [CODE4LIB] Exporting RSS Source from a Blog

2008-05-06 Thread Jason Stirnaman
Couple of thoughts:
I'm not a Blojsom user, but shouldn't you be able to set the number of
posts that Blojsom puts in the feed?  I see a Blojsom blog at
http://www.entropy.ch/blog/?flavor=rss that seems to spit out all the
entries as Atom, RSS 2.0, RSS, etc.
Also, the Blojsom export pushes out a package of XML.  You could use
XSLT to convert to RSS:
http://www.ejlife.net/blogs/john/2006/04/25/1146023865263.html
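
For example, a rough Ruby/REXML version of that kind of transform might look like the sketch below. The element names (entry, title, link, description, date) are only guesses at the Blojsom export schema, so adjust them to match the real file:

# Rough Ruby/REXML equivalent of the XSLT approach: walk a Blojsom export
# file and emit RSS 2.0. The element names (entry, title, link, description,
# date) are guesses at the export schema -- adjust to match the real file.
require 'rexml/document'

export  = REXML::Document.new(File.read('blojsom-export.xml'))
rss     = REXML::Document.new('<rss version="2.0"><channel/></rss>')
channel = rss.root.elements['channel']
channel.add_element('title').text       = 'Migrated blog'
channel.add_element('link').text        = 'http://example.org/blog/'
channel.add_element('description').text = 'Blojsom export converted to RSS 2.0'

export.elements.each('//entry') do |entry|
  item = channel.add_element('item')
  %w[title link description].each do |name|
    src = entry.elements[name]
    item.add_element(name).text = src ? src.text : ''
  end
  date = entry.elements['date']
  item.add_element('pubDate').text = date.text if date
end

File.open('rss2.xml', 'w') { |f| REXML::Formatters::Pretty.new(2).write(rss, f) }

The resulting rss2.xml can then be handed to the WordPress RSS import mentioned below.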

Jason
--

Jason Stirnaman
OME/Biomedical & Digital Projects Librarian
A.R. Dykes Library
The University of Kansas Medical Center
Kansas City, Kansas
Work: 913-588-7319
Email: [EMAIL PROTECTED]


 On 5/6/2008 at 10:23 AM, in message
[EMAIL PROTECTED], The Ford
Library
at Fuqua [EMAIL PROTECTED] wrote:
 Hi John,

 Thanks for the quick response. I tried accessing the feed with lynx
to no
 avail. It's been quite a while since I worked w/ lynx. I'll take a
quick look
 at wget as well and see if it's deployed here and usable.

 Can you spare a few moments to send an example of your quick and
dirty
 method?

 Feel free to do this on- or off-list if you have the time.
 Thanks again!
 --
 Carlton Brown
 Associate Director & IT Services Manager
 Ford Library - Fuqua School of Business
 Duke University


 On Tue, May 6, 2008 at 10:11 AM, Jonathan Gorman [EMAIL PROTECTED]
wrote:

 The quick and dirty way I've done something similar in the past is
to
 download individual rss pages by running something like wget. Other
 command-line browsers/spiders could do something similar.

 After all, the mechanisms for pulling rss feeds are really at the
base the
 same mechanisms for pulling web pages of any type.

 Jon Gorman
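
As a rough illustration of the quick-and-dirty approach Jon describes, here is a Ruby stand-in for the wget loop. The ?page= parameter and the page count are only guesses -- check how the blog actually paginates its feed before running it:

# Rough Ruby stand-in for the wget approach: pull successive pages of the
# feed and save each one. The page parameter and the page count are
# guesses -- check how the blog actually paginates its feed first.
require 'open-uri'

base = 'http://example.org/blog/?flavor=rss2'

(1..20).each do |page|
  url = "#{base}&page=#{page}"
  begin
    xml = URI.open(url).read
  rescue OpenURI::HTTPError => e
    warn "Stopping at page #{page}: #{e.message}"
    break
  end
  File.open(format('feed-page-%02d.xml', page), 'w') { |f| f.write(xml) }
  sleep 1   # be polite to the server
end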

  Original message 
 Date: Tue, 6 May 2008 10:01:48 -0400
 From: The Ford Library at Fuqua [EMAIL PROTECTED]
 Subject: [CODE4LIB] Exporting RSS Source from a Blog
 To: CODE4LIB@LISTSERV.ND.EDU
 
  Hello All,
 
 We're attempting to migrate our java-based Blojsom blog to the
more
 user-friendly WordPress software. WordPress has built import
wizards for
 many popular blog platforms; but there isn't one for Blojsom which
is
 different from *Blosxom*, which does have an import wizard. Blojsom
does
 have
 an export blog plugin; but the data is not in RSS 2.0 and would
require
 more
 Perl than I know to convert.
 
 WP can import data in RSS 2.0, and I can grab the RSS source of
some
 posts
 by simply viewing/copying the source in my browser. But I need to
migrate
 more than the limited number of posts that can be extracted by
viewing
 the
 RSS source in the browser.
 
 Does anyone know of a tool or hack to extract - export the entire
 contents,
 or a large fixed number of posts from a blog as RSS 2.0? Google
Reader
 and
 some others will grab a large number of posts; but I can't view the
RSS
 source.
 
 I've done considerable googling already and the few scripts/tools
I've
 located call for PHP or Ruby -- neither of which are deployed in
our
 environment.
 
 Thanks in advance for any tips or pointers.
 
 --
 Carlton Brown
 Associate Director & IT Services Manager
 Ford Library - Fuqua School of Business
 Duke University



[CODE4LIB] Serials Solutions 360 API - classes?

2008-04-02 Thread Jason Stirnaman
Or .NET classes for the same?
--

Jason Stirnaman
OME/Biomedical & Digital Projects Librarian
A.R. Dykes Library
The University of Kansas Medical Center
Kansas City, Kansas
Work: 913-588-7319
Email: [EMAIL PROTECTED]


 On 4/2/2008 at 12:39 PM, in message
[EMAIL PROTECTED], Pons,
Lisa
(ponslm) [EMAIL PROTECTED] wrote:
 I'd be interested as well

 

 From: Code for Libraries on behalf of Yitzchak Schaffer
 Sent: Wed 4/2/2008 12:28 PM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: [CODE4LIB] Serials Solutions 360 API - PHP classes?



 All:

 Does anyone have/know of PHP classes for searching the Serials
Solutions
 360 APIs, particularly Search?

 Thanks,

 --
 Yitzchak Schaffer
 Systems Librarian
 Touro College Libraries
 33 West 23rd Street
 New York, NY 10010
 Tel (212) 463-0400 x230
 Fax (212) 627-3197
 [EMAIL PROTECTED]


[CODE4LIB] Distributed Models & the Library (was: Re: [CODE4LIB] RFC 5005 ATOM extension and OAI)

2007-10-25 Thread Jason Stirnaman
 not, for instance, an entire library catalog?  If I could check out the
 library catalog onto my computer & use whatever tools I wished to search,

Peter,

You might be interested in Art Rhyno's experiment.  Here's Jon Udell's summary:

Art Rhyno’s science project
Art Rhyno’s title is Systems Librarian but he should consider adding Mad 
Scientist to his business card because he is full of wild and crazy and — to 
me, at least — brilliant ideas. Last year, when I was a judge for the Talis 
“Mashing up the Library” competition, one of my favorite entries was this one 
from Art. The project mirrors a library catalog to the desktop and integrates 
it with desktop search. The searcher in this case is Google Desktop, but could 
be another, and the integration is accomplished by exposing the catalog as a 
set of Web Folders, which Art correctly describes as “Microsoft’s in-built and 
oft-overlooked WebDAV option.”

http://blog.jonudell.net/2007/03/16/art-rhynos-science-project/
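
To give a flavor of the mirroring half of that idea, here is a minimal Ruby sketch that harvests Dublin Core records over OAI-PMH and drops each one into a folder a desktop indexer can crawl. The endpoint URL is a placeholder, the script ignores resumption tokens, and Art's actual project goes further by serving the records back out over WebDAV:

# Rough sketch of the "catalog on the desktop" idea: harvest Dublin Core
# records over OAI-PMH and write each one into a folder that a desktop
# search tool can index. The endpoint URL is a placeholder, and this
# ignores resumption tokens, so it only grabs the first batch.
require 'net/http'
require 'rexml/document'
require 'fileutils'

OUT_DIR = 'catalog_mirror'
FileUtils.mkdir_p(OUT_DIR)

uri = URI('http://catalog.example.edu/oai?verb=ListRecords&metadataPrefix=oai_dc')
doc = REXML::Document.new(Net::HTTP.get(uri))
formatter = REXML::Formatters::Default.new

doc.elements.each('//record') do |rec|
  id = rec.elements['header/identifier']
  next unless id
  safe_name = id.text.gsub(/[^A-Za-z0-9]+/, '_')
  File.open(File.join(OUT_DIR, "#{safe_name}.xml"), 'w') { |f| formatter.write(rec, f) }
end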

Jason
--

Jason Stirnaman
OME/Biomedical & Digital Projects Librarian
A.R. Dykes Library
The University of Kansas Medical Center
Kansas City, Kansas
Work: 913-588-7319
Email: [EMAIL PROTECTED]


 On 10/25/2007 at 10:47 AM, in message
[EMAIL PROTECTED], pkeane
[EMAIL PROTECTED] wrote:
 Hi Jakob-

 Yes, I think you are correct that a distributed archiving model is a bit
 much for libraries to even consider now, but I do think there are useful
 insights to be gained here.

 As it stands now, linux developers using Git can carry around the entire
 change history of the linux kernel (well, I think they just included the
 2.6 kernel when they moved to Git) on their laptop, make changes, create
 patches, etc., and then make that available to others.  Well, undoubtedly
 change history is a bit much for the library to think about, but why
 not, for instance, an entire library catalog?  If I could check out the
 library catalog onto my computer & use whatever tools I wished to search,
 organize, annotate, etc., then perhaps mix in data (say holdings data
 from other libraries that are near me) OR even create the sort of relationships
 between records that the Open Library folks are talking about
 (http://www.hyperorg.com/blogger/mtarchive/berkman_lunch_aaron_swartz_on.html)
 then share that added data, we have quite a powerful distributed
 development model.  It may seem a bit far-fetched, but I think that some
 of the pieces (or at least a better understanding of how this might all
 work) are beginning to take shape.

 -Peter

 On Thu, 25 Oct 2007, Jakob Voss wrote:

 Peter wrote:

 Also, re: blog mirroring, I highly recommend the current discussions
 floating around the blogosphere regarding distributed source control (Git,
 Mercurial, etc.).  It's a fundamental paradigm shift from centralized
 control to distributed control that points the way toward the future of
 libraries as they (we) become less and less the gatekeepers for the
 stuff, be it digital or physical, and more and more the facilitators of
 the bidirectional replication that assures ubiquitous access and
 long-term preservation.  The library becomes (actually it has already
 happened) simply a node on a network of trust and should act accordingly.

 See the thoroughly entertaining/thought-provoking Google tech talk by
 Linus Torvalds on Git:  http://www.youtube.com/watch?v=4XpnKHJAok8

 Thanks for pointing to this interesting discussion. This goes even
 further than the current paradigm shift from the old model
 (author - publisher - distributor - reader) to a world of
 user-generated content and collaboration! I would be glad if we finally got
 to model and archive Weblogs and Wikis - modelling and archiving the
 whole process of content copying, changing, remixing, and
 republication is far beyond libraries' capabilities!

 Greetings,
 Jakob

 --
 Jakob Voß [EMAIL PROTECTED], skype: nichtich
 Verbundzentrale des GBV (VZG) / Common Library Network
 Platz der Goettinger Sieben 1, 37073 Göttingen, Germany
 +49 (0)551 39-10242, http://www.gbv.de