[CODE4LIB] Announcing the 2014 AMIA / DLF Hack Day

2014-08-06 Thread Lauren Sorensen
In association with the annual conference, the Association of Moving Image
Archivists will host its second annual hack day on October 8, 2014 in
Savannah, GA. The event will be a unique opportunity for practitioners and
managers of digital audiovisual collections to join with developers and
engineers for an intense day of collaboration to develop solutions for
digital audiovisual preservation and access. It will be fun and
practical…and there will be prizes!

For those of you who want to participate in another way, we’ll be hosting a
concurrent Wikipedia edit-a-thon
http://outreach.wikimedia.org/wiki/GLAM/Model_projects/Edit-a-thon_How-To,
which will focus on topics related to digital preservation and access for
audiovisual materials. While we encourage non-engineers to participate in
the hack day portion, there’s a lot of work to be done to describe topics
relevant to our community on Wikipedia as well.

We are very excited to be collaborating with the Digital Library Federation
once again. A robust and diverse community of practitioners who advance
research, teaching and learning through the application of digital library
research, technology and services, DLF brings years of experience creating
and hosting events designed to foster collaboration and develop shared
solutions for common challenges. DLF is generously funding two Cross-Pollinator
Travel Awards http://www.diglib.org/archives/6240/ for developers
interested in attending the AMIA conference and participating in the hack
day.

What is a hack day?

A hack day or hackathon is an event that brings together computer
technologists and practitioners for an intense period of collaborative
problem solving. Within digital preservation and curation communities, hack
days provide an opportunity for archivists, collection managers,
technologists, and others to work together to develop software solutions,
documentation or training materials, and more for digital collections
management needs.

The manifesto
http://ptsefton.com/2012/09/05/open-repositories-developer-challenge-draft-manifesto-v0-1.htm
of a recent event at the Open Repositories conference framed the benefits
this way:

“Transparent, fun, open collaboration in diversely constituted teams...The
creation of new professional networks over the ossification of old
ones. Effective
engagement of non-developers (researchers, repository managers) in
development...Work done at the conference over presentation of something
prepared earlier.”

What happened at last year’s hack day?

Last year’s AMIA/DLF Hack Day was an incredible success. Over 30
participants formed 6 teams who worked intensively over the day to create
innovative solutions to problems submitted by the participants themselves.
The outcomes ranged from working software to guidelines for common tools.
See the results on last year’s wiki
http://wiki.curatecamp.org/index.php/Association_of_Moving_Image_Archivists_%26_Digital_Library_Federation_Hack_Day_2013
.

What will be the format of the event?

In advance of the hack day, project ideas and a Wikipedia editing topic
list will be collected through the registration form and the event wiki
http://wiki.curatecamp.org/index.php/Association_of_Moving_Image_Archivists_%26_Digital_Library_Federation_Hack_Day_2014.
On the morning of the event, participants will review and discuss submitted
project ideas. We’ll then break into groups consisting of technologists and
practitioners, selecting an idea to work on together for the day and (if
desired) throughout the duration of the AMIA conference in the developers
lounge.

Projects will be presented during the conference, on Friday, October 10 at
3:30pm. Projects will be judged by a panel as well as by conference
attendees.

How can I participate?

Sign up! As this will be a highly participatory event, registration is
limited to those willing to get their hands dirty, so no onlookers please.
You may participate even if you do not write code or have an engineering
background. We welcome metadata hacking, ideas for programs to build with the
engineers who will be present, and Wikipedia editing on digital preservation
and access for moving image and sound materials.

Ready to sign up and join the fun?

REGISTER HERE
https://docs.google.com/forms/d/1P8iQfCPub8abaWGUcl-WGPYnEvr7CxIFKK0dYA3VHaA/viewform.
It’s free.


Re: [CODE4LIB] Bandwidth control

2014-08-06 Thread Carol Bean
Appreciate the offer!  I am willing to get my hands dirty, and it has,
likewise, been a while.  The problem comes in handing it off to others who
aren't willing or can't. :)

Definitely a project worth considering!

Thanks,
Carol

Carol Bean
beanwo...@gmail.com


On Wed, Aug 6, 2014 at 2:27 AM, Francis Kayiwa kay...@pobox.com wrote:

 On 2014-08-04 16:07, Carol Bean wrote:

 Thanks, Scott.  I appreciate the details.  I hadn't thought of
 investigating firmware hacks.  I have heard Cisco routers are being
 used to manage bandwidth, and are, as expected, a pricey solution.



 If you are willing to get your hands dirty, you never need to deal with Cisco
 unless you have deep pockets, as you correctly point out. Depending on how much
 of your network you control, you should consider using OpenBSD's PF. Yes, I am
 a well-known shill for this OS, so grab ya grain of salt. ;-) That said, this
 is a tale of savings and performance. Sometimes you can have both.

 http://www.skeptech.org/blog/2013/01/13/unscrewed-a-story-about-openbsd/

 As always YMMV but I actually enjoy this sort of thing so if you need
 someone who has done this *granted a good while back* I'm your Huckleberry.
 ;-)

 ./fxk



Re: [CODE4LIB] Bandwidth control

2014-08-06 Thread Carol Bean
Yeah, gigabits seem to disappear fast with a few dedicated video users plus
Skype users (yep - Skype is allowed, too).  Then it gets really challenging
trying to also have a library program involving something like Watchitoo.

Thanks,
Carol

Carol Bean
beanwo...@gmail.com


On Wed, Aug 6, 2014 at 3:50 AM, Riley Childs rchi...@cucawarriors.com
wrote:

 20 users streaming HD YouTube is a big strain on the network itself,
 regardless of the pipe size.
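 (For rough scale, using typical figures rather than anything measured here:
 a 1080p YouTube stream runs on the order of 4-5 Mbps, so 20 concurrent
 viewers would account for roughly 80-100 Mbps of sustained traffic before
 Skype or anything else is added. Even with a gigabit pipe, that can saturate
 individual access-layer links, wireless access points, or a firewall doing
 per-packet inspection.)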

 Sent from my Windows Phone
 
 From: Cary Gordon listu...@chillco.com
 Sent: ‎8/‎5/‎2014 8:33 PM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] Bandwidth control

 With a gigabit pipe, I don't think that Youtube would be an issue :)


 On Aug 5, 2014, at 3:54 PM, Stuart Yeates stuart.yea...@vuw.ac.nz wrote:

  We had complaints from students about other students using the limited
 resource (in this case student computers) to do facebook / youtube.
 
  We negotiated with the students union that certain sites would be
 blocked from those machines for a certain busy period during the day.
 Negotiation with the students union appeared to be hugely important in
 deflating any protests.
 
  cheers
  stuart
 
  On 05/08/14 02:20, Carol Bean wrote:
  A quick and dirty search of the list archives turned up this topic from
 5
  years ago.  I am wondering what libraries (especially those with limited
  resources) are doing today to control or moderate bandwidth, e.g., where
  viewing video sites uses up excessive amounts of bandwidth?
 
  Thanks for any help,
  Carol
 
  Carol Bean
  beanwo...@gmail.com
 



Re: [CODE4LIB] Bandwidth control

2014-08-06 Thread Cary Gordon
Oops, I meant to type Facebook, not Youtube.

Cary

On Tuesday, August 5, 2014, Cary Gordon listu...@chillco.com wrote:

 With a gigabit pipe, I don't think that Youtube would be an issue :)


 On Aug 5, 2014, at 3:54 PM, Stuart Yeates stuart.yea...@vuw.ac.nz wrote:

  We had complaints from students about other students using the limited
 resource (in this case student computers) to do facebook / youtube.
 
  We negotiated with the students union that certain sites would be
 blocked from those machines for a certain busy period during the day.
 Negotiation with the students union appeared to be hugely important in
 deflating any protests.
 
  cheers
  stuart
 
  On 05/08/14 02:20, Carol Bean wrote:
  A quick and dirty search of the list archives turned up this topic from
 5
  years ago.  I am wondering what libraries (especially those with limited
  resources) are doing today to control or moderate bandwidth, e.g., where
  viewing video sites uses up excessive amounts of bandwidth?
 
  Thanks for any help,
  Carol
 
  Carol Bean
  beanwo...@gmail.com
 



-- 
Cary Gordon
The Cherry Hill Company
http://chillco.com


[CODE4LIB] DuraSpace and Artefactual Partner to Offer New Hosted Service

2014-08-06 Thread Carol Minton Morris
Contact: Michele Kimpton mkimp...@duraspace.org, Evelyn McLellan 
eve...@artefactual.com

Read it online: http://bit.ly/1srx8y4

DuraSpace and Artefactual Partner to Offer New Hosted Service
New End-to-End Digital Preservation Service is Designed for Universities, 
Archives and Cultural Heritage Organizations
Winchester, MA - Universities, archives and cultural heritage organizations want
it all when it comes to ensuring that their digital holdings remain both safe
and accessible for future generations. Artefactual, designer of the Archivematica
preservation workflow tool, and DuraSpace, provider of the DuraCloud archival
cloud storage and preservation service, are pleased to announce that they have
teamed up to provide just that: an end-to-end open-source digital preservation
solution based on Archivematica and DuraCloud that will set the standard for
one-stop durable, safe, and cost-effective long-term preservation and storage.

“We are extremely enthusiastic about our new strategic partnership with
Artefactual Systems,” said DuraSpace CEO Michele Kimpton. “Artefactual are
experts in archiving digital material and we are experts in managing open
source projects and running software in cloud infrastructure. With our teams
working together we can achieve a truly robust, open, easy to use digital
archiving solution I think the community will be excited about.”

Archivematica and DuraCloud are unique among long-term preservation and storage 
solutions. They are both built on open-source software which is documented and 
freely available. Users can download their data at any point. This means that 
users of the new service do not have to worry about data lock-in and the 
service can be run locally at any time. AVPreserve has called DuraCloud “unique 
among the services covered” in their Cloud Storage Vendor Profiles series [1] 
because users can download the entirety of data at any point and/or host the 
system locally without additional cost.[2]

“The launch of an Archivematica DuraCloud hosted solution is a timely addition
to the digital preservation community, offering a configurable preservation
planning option at the intersection of OAIS-based workflows (Archivematica) and
archival storage services (DuraCloud),” said Nancy McGovern, Director of DPM
Workshops. “When providers choose collaboration over competition, the gains to
our community can be significant. A partnership like this that brings together
open-source providers each with a solid track record promises to result in just
that kind of benefit.”

Users of the service will have access to a robust suite of digital preservation 
functions via the online dashboard. Archivematica is well known for its ability 
to produce highly standardized and interoperable Archival Information Packages; 
these packages will automatically be placed into DuraCloud for long-term secure 
archival storage. Some of the key features of Archivematica include assigning 
permanent identifiers and checksums, virus checking, identifying and validating 
file formats, extracting technical metadata, normalizing files to 
preservation-friendly formats, and generating detailed PREMIS metadata to 
facilitate inter-repository data exchange.  Key features of DuraCloud include 
automated health checking of the content, reporting, and the choice to store 
multiple copies at multiple storage providers.
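
To make the fixity portion of that feature list concrete, here is a minimal
sketch in Python of checksum-based health checking. The manifest layout, file
path, and placeholder digest are invented for illustration only and are not
Archivematica's or DuraCloud's actual internals or APIs:

import hashlib
from pathlib import Path

def sha256_of(path, chunk_size=1 << 20):
    # Stream the file through SHA-256 so large AV files never load into memory.
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def check_fixity(manifest):
    # manifest: {relative path: hex digest recorded at ingest}
    # Returns (path, status) pairs where status is 'ok', 'missing', or 'changed'.
    results = []
    for rel_path, expected in manifest.items():
        p = Path(rel_path)
        if not p.exists():
            results.append((rel_path, "missing"))
        elif sha256_of(p) != expected:
            results.append((rel_path, "changed"))
        else:
            results.append((rel_path, "ok"))
    return results

if __name__ == "__main__":
    # Hypothetical package contents; the digest is the SHA-256 of an empty
    # file, used here only as a placeholder value.
    manifest = {
        "objects/interview.mov":
            "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    }
    for path, status in check_fixity(manifest):
        print(status, path)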

If your organization is interested in learning more about this offering please 
contact Michele Kimpton (mkimp...@duraspace.org) or Evelyn McLellan 
(eve...@artefactual.com), or complete the inquiry form at 
http://duracloud.org/archivematica

About DuraSpace
DuraCloud (http://duracloud.org) is a service from DuraSpace 
(http://duraspace.org), an independent 501(c)(3) not-for-profit organization 
providing leadership and innovation for open technologies that promote durable, 
persistent access to digital data. We collaborate with academic, scientific, 
cultural, and technology communities by supporting projects (DSpace, Fedora, 
VIVO) and creating services (DuraCloud, DSpaceDirect) to help ensure that 
current and future generations have access to our collective digital heritage. 
Our values are expressed in our organizational byline, “Committed to our
digital future.”

About Artefactual Systems
Artefactual's (http://artefactual.com) mission is to provide the heritage 
community with vital expertise and technology in the domains of digital 
preservation and online access. We develop open-source software 
(Archivematica and AtoM) and promote open standards as the best means of
enabling archives, libraries and museums to preserve and provide access to 
society's cultural assets. We are archivists, librarians, software developers, 
systems administrators and systems technicians, all working together to advance 
the capacity of heritage institutions to meet their mandates in a rapidly 
changing world.

[1] Cloud Storage Vendor Profiles: 

[CODE4LIB] Creating a Linked Data Service

2014-08-06 Thread Michael Beccaria
I have recently had the opportunity to create a new library web page and host 
it on my own servers. One of the elements of the new page that I want to 
improve upon is providing live or near live information on technology 
availability (10 of 12 laptops available, etc.). That data resides on my ILS 
server, and I thought it might be a good time to upgrade the bubble gum and duct
tape solution I now have by creating a real linked data service that would
provide that availability information to the web server.

The problem is there is a lot of overly complex and complicated information out
there on linked data, RDF, the semantic web, etc., and I'm looking for a simple
guide to creating a very simple linked data service with PHP or Python or
whatever. Does such a resource exist? Any advice on where to start?
Thanks,

Mike Beccaria
Systems Librarian
Head of Digital Initiative
Paul Smith's College
518.327.6376
mbecca...@paulsmiths.edu
Become a friend of Paul Smith's Library on Facebook today!


Re: [CODE4LIB] Creating a Linked Data Service

2014-08-06 Thread Mark Jordan
Mike, 

If you want to create Linked Data, check out EasyLOD, 
https://github.com/mjordan/easyLOD. It's not a guide, but it does provide a 
toolkit. You'd need to write a data source plugin in PHP that scrapes your ILS 
but the EasyLOD framework will take care of most of the other bits involved in 
publishing Linked Data, assuming you're happy to provide only RDF/XML 
representations of your data (I never got around to providing other formats). 

If you decide that Linked Data is overkill, you may want to consider providing 
an API to your data. Check out http://api.lib.sfu.ca/equipment as an example. 
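
If the API route appeals, a bare-bones version can be very small. Here's a rough
sketch in Python with Flask (only because it keeps the example short; the same
thing works in PHP). The get_laptop_counts() function is a placeholder for
whatever query or scrape actually pulls the numbers out of your ILS, and the
URL and field names are invented for illustration:

from flask import Flask, jsonify

app = Flask(__name__)

def get_laptop_counts():
    # Placeholder: replace with the query/scrape that reads your ILS.
    # Returns (available, total) for the laptop item type.
    return 10, 12

@app.route("/equipment/laptops")
def laptop_availability():
    available, total = get_laptop_counts()
    # Plain JSON; the web page can poll this with a bit of JavaScript.
    return jsonify({"type": "laptop", "available": available, "total": total})

if __name__ == "__main__":
    app.run(port=5000)

Caching the ILS lookup for a minute or so keeps the load on the ILS negligible
while still looking live from the web page.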

Mark 

- Original Message -

 I have recently had the opportunity to create a new library web page
 and host it on my own servers. One of the elements of the new page
 that I want to improve upon is providing live or near live
 information on technology availability (10 of 12 laptops available,
 etc.). That data resides on my ILS server and I thought it might be
 a good time to upgrade the bubble gum and duct tape solution I now
 have to creating a real linked data service that would provide that
 availability information to the web server.

 The problem is there is a lot of overly complex and complicated
 information out there on linked data and RDF and the semantic web
 etc. and I'm looking for a simple guide to creating a very simple
 linked data service with php or python or whatever. Does such a
 resource exist? Any advice on where to start?
 Thanks,

 Mike Beccaria
 Systems Librarian
 Head of Digital Initiative
 Paul Smith's College
 518.327.6376
 mbecca...@paulsmiths.edu
 Become a friend of Paul Smith's Library on Facebook today!


Re: [CODE4LIB] Creating a Linked Data Service

2014-08-06 Thread Cary Gordon
Drupal has a variety of tools for working with RDF. These are best supported in 
Drupal 7, but there are also some tools for Drupal 6, the version that your 
school — except for the library — uses for their website.

Thanks,

Cary

On Aug 6, 2014, at 11:45 AM, Michael Beccaria mbecca...@paulsmiths.edu wrote:

 I have recently had the opportunity to create a new library web page and host 
 it on my own servers. One of the elements of the new page that I want to 
 improve upon is providing live or near live information on technology 
 availability (10 of 12 laptops available, etc.). That data resides on my ILS 
 server and I thought it might be a good time to upgrade the bubble gum and 
 duct tape solution I now have to creating a real linked data service that 
 would provide that availability information to the web server.
 
 The problem is there is a lot of overly complex and complicated information 
 out there on linked data and RDF and the semantic web etc. and I'm looking for
 a simple guide to creating a very simple linked data service with php or 
 python or whatever. Does such a resource exist? Any advice on where to start?
 Thanks,
 
 Mike Beccaria
 Systems Librarian
 Head of Digital Initiative
 Paul Smith's College
 518.327.6376
 mbecca...@paulsmiths.edu
 Become a friend of Paul Smith's Library on Facebook today!


Re: [CODE4LIB] Creating a Linked Data Service

2014-08-06 Thread Justin Coyne
Fedora 4 (https://github.com/fcrepo4/fcrepo4/releases) is based on the
Linked Data Platform standard (http://www.w3.org/TR/ldp/). This enables you
to just push linked data to it (using curl or the ldp gem in Ruby, with
more languages to follow), and it's published. It's quite easy if you can
get your head around RDF (Turtle serialization).
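
Here is roughly what that push looks like from Python with the requests
library; it's a sketch that assumes a local Fedora 4 instance with its REST
endpoint at http://localhost:8080/rest/ (adjust to wherever yours is deployed),
and the example vocabulary is made up for illustration:

import requests

FEDORA = "http://localhost:8080/rest/"  # adjust to your Fedora 4 base URL

# A few example triples describing laptop availability, in Turtle.
turtle = """
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix ex: <http://example.org/vocab#> .

<> dcterms:title "Library laptops" ;
   ex:available 10 ;
   ex:total 12 .
"""

# POST to the container; the server creates a new LDP resource and returns
# its URI in the Location header. The Slug header suggests a name.
resp = requests.post(
    FEDORA,
    data=turtle.encode("utf-8"),
    headers={"Content-Type": "text/turtle", "Slug": "laptop-availability"},
)
resp.raise_for_status()
print("Created:", resp.headers.get("Location"))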

The Hydra Project and Islandora are working on Fedora 4 front ends for your
patrons, who presumably do not read RDF, to use.


- Justin



On Wed, Aug 6, 2014 at 1:45 PM, Michael Beccaria mbecca...@paulsmiths.edu
wrote:

 I have recently had the opportunity to create a new library web page and
 host it on my own servers. One of the elements of the new page that I want
 to improve upon is providing live or near live information on technology
 availability (10 of 12 laptops available, etc.). That data resides on my
 ILS server and I thought it might be a good time to upgrade the bubble gum
 and duct tape solution I now have to creating a real linked data service
 that would provide that availability information to the web server.

 The problem is there is a lot of overly complex and complicated
 information out there on linked data and RDF and the semantic web etc. and
 I'm looking for a simple guide to creating a very simple linked data
 service with php or python or whatever. Does such a resource exist? Any
 advice on where to start?
 Thanks,

 Mike Beccaria
 Systems Librarian
 Head of Digital Initiative
 Paul Smith's College
 518.327.6376
 mbecca...@paulsmiths.edu
 Become a friend of Paul Smith's Library on Facebook today!



[CODE4LIB] Reminder: Call for Survey Participation: DAMS Migration

2014-08-06 Thread Stein, Ayla
Please excuse cross-postings



Greetings:



This is a friendly reminder that our survey, Identifying Motivations for DAMS 
Migration: A Survey, concludes on October 1, 2014.



We are soliciting survey responses from information professionals at 
institutions which are migrating, have migrated, or will migrate to a new 
digital asset management system.  The title of the survey is Identifying 
Motivations for DAMS Migration: A Survey.



For the purposes of this survey, a digital asset management system (DAMS) is 
software that supports the ingest, description, tracking, discovery, 
retrieval, searching, and distribution of collections of digital objects [1].  
Some examples of commonly used DAMS are: CONTENTdm, DSpace, Islandora, 
DigiTool, Fedora, etc.



Please note that this survey does not focus on systems used exclusively as 
institutional repositories, which we consider to be repositories that only 
provide access to the intellectual output of an institution [2].



The results from our survey may lead to a publication in a
professional journal and/or presentations at relevant professional conferences.



If your institution meets these parameters, we would appreciate your 
participation in this survey.  The survey will take approximately 20 minutes to 
complete and will not ask for or obtain any personally identifying information.



You can access the survey here: 
https://qtrial2013.qualtrics.com/SE/?SID=SV_3JgGpZH0UNRSnlP 
https://uiuc.qualtrics.com/SE/?SID=SV_3aw56frpWbGLlgV



If you have any questions, please feel free to contact us (information provided 
below).



We look forward to seeing your responses and sharing the results of our 
research.



Thank you.



Santi Thompson

sathomps...@uh.edu



Ayla Stein
ast...@illinois.edu

[1] http://www2.archivists.org/glossary/terms/d/digital-assets-management-system
[2] 
http://en.wikipedia.org/wiki/Institutional_repository#cite_note-eprints.ecs.soton.ac.uk-1



Ayla Stein
Metadata Librarian
Assistant Professor, University Library
220 Main Library
University of Illinois at Urbana-Champaign
1408 W. Gregory Drive (MC-522)
Urbana, Illinois 61801
(217) 300-2958
ast...@illinois.edu


Re: [CODE4LIB] Baggit specification question

2014-08-06 Thread Andrew Hankinson
I suspect the first example you give is correct. The newline character is the 
field delimiter. If you’re reading this into a structured representation (e.g., 
a Python dictionary) you could parse the presence of nothing between the colon 
and the newline as “None”, but in a text file there is no representation of 
“nothing” except for actually having nothing.
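
A minimal sketch of that parsing in Python, treating an empty value as None.
Note that it ignores the spec's rule that indented lines continue the previous
value, so treat it as an illustration rather than a complete bag-info.txt
reader:

def parse_bag_info(text):
    # Parse bag-info.txt content into a dict; empty values become None.
    # Keeps only the last occurrence of a repeated label.
    fields = {}
    for line in text.splitlines():
        if not line.strip():
            continue
        label, _, value = line.partition(":")
        value = value.strip()
        fields[label.strip()] = value if value else None
    return fields

sample = "Source-Organization:\nBag-Size: 42 MB\n"
print(parse_bag_info(sample))
# {'Source-Organization': None, 'Bag-Size': '42 MB'}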


On Aug 6, 2014, at 7:11 PM, Rosalyn Metz rosalynm...@gmail.com wrote:

 Hi all,
 
 Below is a question one of my colleagues posted to digital curation, but
 I'm also posting here (because the more info the merrier).
 
 Thanks!
 Rosy
 
 
 
 Hi,
 
 I am working for the Digital Preservation Network and our current
 specification requires that we use BagIt bags.
 
 Our current spec for bag-info.txt reads in part:
 
 DPN requires the presence of the following fields, although they may be
 empty.  Please note that the values of null and/or nil should not be
 used.  The colon (:) should still be present.
 
 
 From my reading of the BagIt spec, section 2.2.2:
 
 “A metadata element MUST consist of a label, a colon, and a value, each
 separated by optional whitespace.”
 
 
 2.2.2 is for the bag-info.txt, but it seems that this is the general rule.
 
 Question: Are values required for all? Which below is correct or both? Ex:
 
 Source-Organization:
 
 or
 
 
 Source-Organization: nil
 
 
 I appreciate any clarification,
 
 Thanks
 James
 Stanford Digital Repository


[CODE4LIB] Baggit specification question

2014-08-06 Thread Rosalyn Metz
Hi all,

Below is a question one of my colleagues posted to digital curation, but
I'm also posting here (because the more info the merrier).

Thanks!
Rosy



Hi,

I am working for the Digital Preservation Network and our current
specification requires that we use BagIt bags.

Our current spec for bag-info.txt reads in part:

DPN requires the presence of the following fields, although they may be
empty.  Please note that the values of null and/or nil should not be
used.  The colon (:) should still be present.


From my reading of the BagIt spec, section 2.2.2:

“A metadata element MUST consist of a label, a colon, and a value, each
separated by optional whitespace.”


2.2.2 is for the bag-info.txt, but it seems that this is the general rule.

Question: Are values required for all? Which below is correct or both? Ex:

Source-Organization:

or


Source-Organization: nil


I appreciate any clarification,

Thanks
James
Stanford Digital Repository


Re: [CODE4LIB] Creating a Linked Data Service

2014-08-06 Thread Roy Tennant
I'm puzzled about why you want to use linked data for this. At first glance
the requirement simply seems to be to fetch data from your ILS server,
which likely could be sent in any number of simple packages that don't
require an RDF wrapper. If you are the only one consuming this data then
you can use whatever (simplistic, proprietary) format you want. I just
don't see what benefits you would get by creating linked data in this
case that you wouldn't get by doing something much more straightforward and
simple. And don't be harshing on duct tape. Duct tape is a perfectly fine
solution for many problems.
Roy


On Wed, Aug 6, 2014 at 2:45 PM, Michael Beccaria mbecca...@paulsmiths.edu
wrote:

 I have recently had the opportunity to create a new library web page and
 host it on my own servers. One of the elements of the new page that I want
 to improve upon is providing live or near live information on technology
 availability (10 of 12 laptops available, etc.). That data resides on my
 ILS server and I thought it might be a good time to upgrade the bubble gum
 and duct tape solution I now have to creating a real linked data service
 that would provide that availability information to the web server.

 The problem is there is a lot of overly complex and complicated
 information out there on linked data and RDF and the semantic web etc. and
 I'm looking for a simple guide to creating a very simple linked data
 service with php or python or whatever. Does such a resource exist? Any
 advice on where to start?
 Thanks,

 Mike Beccaria
 Systems Librarian
 Head of Digital Initiative
 Paul Smith's College
 518.327.6376
 mbecca...@paulsmiths.edu
 Become a friend of Paul Smith's Library on Facebook today!



Re: [CODE4LIB] Creating a Linked Data Service

2014-08-06 Thread Riley-Huff, Debra
I agree with Roy. Seems like something that could be easily handled with
PHP or Python scripts. Someone on the list may even have a homegrown
solution (improved duct tape) they would be happy to share. I fail to see
what the project has to do with linked data or why you would go that route.
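
For what it's worth, the improved duct tape could be as small as a script run
from cron on the ILS server that writes a static JSON file where the web page
can fetch it. A rough Python sketch, where count_available() stands in for
whatever query actually works against your ILS and the item types and output
path are invented for illustration:

import json
import time

def count_available(item_type):
    # Placeholder: replace with the SQL query or screen-scrape that counts
    # available items of this type in your ILS.
    return {"laptop": (10, 12), "ipad": (3, 4)}.get(item_type, (0, 0))

def write_availability(path="availability.json"):
    # In production, point path at a file under your web root.
    data = {"generated": int(time.time()), "equipment": []}
    for item_type in ("laptop", "ipad"):
        available, total = count_available(item_type)
        data["equipment"].append(
            {"type": item_type, "available": available, "total": total}
        )
    with open(path, "w") as fh:
        json.dump(data, fh)

if __name__ == "__main__":
    write_availability()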

Debra Riley-Huff
Head of Web Services & Associate Professor
JD Williams Library
University of Mississippi
University, MS 38677
662-915-7353
riley...@olemiss.edu


On Wed, Aug 6, 2014 at 9:33 PM, Roy Tennant roytenn...@gmail.com wrote:

 I'm puzzled about why you want to use linked data for this. At first glance
 the requirement simply seems to be to fetch data from your ILS server,
 which likely could be sent in any number of simple packages that don't
 require an RDF wrapper. If you are the only one consuming this data then
 you can use whatever (simplistic, proprietary) format you want. I just
 don't see what benefits you would get by creating linked data in this
 case that you wouldn't get by doing something much more straightforward and
 simple. And don't be harshing on duct tape. Duct tape is a perfectly fine
 solution for many problems.
 Roy


 On Wed, Aug 6, 2014 at 2:45 PM, Michael Beccaria mbecca...@paulsmiths.edu
 
 wrote:

  I have recently had the opportunity to create a new library web page and
  host it on my own servers. One of the elements of the new page that I
 want
  to improve upon is providing live or near live information on technology
  availability (10 of 12 laptops available, etc.). That data resides on my
  ILS server and I thought it might be a good time to upgrade the bubble
 gum
  and duct tape solution I now have to creating a real linked data service
  that would provide that availability information to the web server.
 
  The problem is there is a lot of overly complex and complicated
  information out there on linked data and RDF and the semantic web etc. and
  I'm looking for a simple guide to creating a very simple linked data
  service with php or python or whatever. Does such a resource exist? Any
  advice on where to start?
  Thanks,
 
  Mike Beccaria
  Systems Librarian
  Head of Digital Initiative
  Paul Smith's College
  518.327.6376
  mbecca...@paulsmiths.edu
  Become a friend of Paul Smith's Library on Facebook today!