[CODE4LIB] Job: Design Lab Specialist at University of Nevada, Las Vegas

2017-08-28 Thread Code4Lib Jobs
Hi all,

Please consider applying for our Design Lab Specialist position and join an 
exciting new department at the University of Nevada, Las Vegas Libraries.

Summary of Position:

The University of Nevada, Las Vegas University Libraries seeks applications for 
a Design Lab Specialist to join a newly-formed department dedicated to 
facilitating innovative knowledge creation through curricular integration, 
community-driven learning spaces, and specialized technology. The individual 
hired will have an opportunity to contribute to this new enterprise and gain 
valuable experience in a fast-paced and emerging area at the UNLV Libraries. 
Professionals at all stages in their career are encouraged to apply.

As a key member of the newly-formed Department of Knowledge Production, the 
Design Lab Specialist will plan, provide, and maintain efficient access to the 
UNLV Libraries Design Lab. This new, multimillion dollar space will be 
dedicated to connecting students, faculty, staff, and community users with 
hardware and software to support knowledge production across disciplines and 
beyond.

The Design Lab Specialist serves as a consultant to individuals, project teams, 
and classes. Additionally, the Specialist will investigate and evaluate 
emerging technologies for potential adoption, develop online learning objects, 
coordinate projects, and supervise students working as peer technology 
consultants.



Brought to you by code4lib jobs: 
https://jobs.code4lib.org/jobs/27788-design-lab-specialist


Re: [CODE4LIB] Job: PBCore Cataloging Tool Development Contractor at WGBH Educational Foundation

2017-08-28 Thread Rebecca Fraimow
Hi all,

As an FYI, we are extending the deadline to send us an RFP for the PBCore 
Cataloging Tool Development. We will now be accepting proposals through 
September 6th.

Thanks!

Best,
Rebecca Fraimow, WGBH

On 8/7/17, 9:37 PM, "Code for Libraries on behalf of Code4Lib Jobs" 
 wrote:

PBCore Cataloging Tool Development RFP

WGBH Educational Foundation

August 1, 2017

I. Project Overview

WGBH is seeking a qualified developer or development team (“the 
Contractor”) to create a Ruby-based web application tool for the American 
Archive of Public Broadcasting’s PBCore Development and Training Project. 

The goal of the National Endowment for the Humanities-funded PBCore 
Development and Training Project is to develop tools, methodologies, workflows 
and training to enhance and extend the adoption of the Public Broadcasting 
Metadata Dictionary (“PBCore”), a metadata schema for the management of audio 
and audiovisual collections.

PBCore can be used as:

A guideline for cataloging or describing audiovisual content (as a content 
standard)
A model for building custom databases/applications
A guideline for identifying a set of vocabularies for fields describing AV 
assets
A data model for a configurable collection management system (Omeka, 
Collective Access, etc.)
A guideline for creating inventory spreadsheets
An exchange (import or export) mechanism between applications 

PBCore records can easily be shared, allowing information about media 
assets and collections to be exchanged between organizations and media systems.

Public Broadcasting in the United States developed PBCore so producers and 
local stations can better share, manage and preserve the media they produce. 
Because it is so useful in describing media assets, a growing number of film 
archives and media organizations outside of public broadcasting have adopted 
PBCore to manage their audiovisual assets and collections. PBCore is used by 
the American Archive of Public Broadcasting as an exchange format and data 
model for metadata about public broadcasting collections.

As part of the NEH project, WGBH seeks to create an open-source graphical 
user interface (“GUI”) cataloging tool. We envision the tool as a simple GUI 
built in Ruby. The tool will have a function that allows for creation, editing, 
import and export of PBCoreXML 2.1 documents (and CSVs formatted in accordance 
with a PBCore data model), which are then stored externally to the app. The 
tool will also allow for search and discovery of the stored XML documents. The 
tool should be easy for both novice and experienced PBCore users to implement 
and use.
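As a rough illustration of the kind of document the tool would manage, a 
minimal PBCore-style record can be built with Ruby's standard-library REXML. 
This is only a sketch: the element names follow PBCore 2.1, but the exact set 
of required elements (and the namespace URI) should be verified against the 
published schema, and the identifier and title values here are invented.

```ruby
require 'rexml/document'

doc = REXML::Document.new
doc << REXML::XMLDecl.new('1.0', 'UTF-8')

# PBCore 2.1 namespace; verify against the published XSD before relying on it.
root = doc.add_element('pbcoreDescriptionDocument',
                       'xmlns' => 'http://www.pbcore.org/PBCore/PBCoreNamespace.html')

root.add_element('pbcoreIdentifier', 'source' => 'local').add_text('WGBH-001')
root.add_element('pbcoreTitle').add_text('Example Program')
root.add_element('pbcoreDescription').add_text('A sample record.')

out = String.new
doc.write(out, 2)  # serialize with 2-space indentation
puts out
```

A real implementation would validate the serialized document against the 
PBCore XSD before writing it to external storage.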

II. Project Activities

a. Process overview

i. WGBH will provide the contractor with background materials and 
preliminary requirements for the app development, including information about 
the metadata structure of PBCore, existing PBCore tools, and the prototype 
FileMaker-based tool.

ii. The contractor will create a work plan and a more detailed proposal for 
the product based on the background materials.

iii. The work plan and detailed proposal will be discussed and approved in a 
meeting with WGBH staff, launching a six-month development phase.

iv. Over the course of the development phase, the contractor will perform 
weekly code review with WGBH in-house developers to ensure that WGBH 
development staff are comfortable and familiar with the tool.

v. The contractor will hold bi-weekly update meetings with WGBH project 
staff to demo development and discuss the direction of the project.

vi. After the six-month development phase, WGBH staff will demo the tool 
with a Test User group for review and feedback.

vii. WGBH will share feedback with the contractor and jointly determine a 
proposal for a further three months of development based on user feedback and 
feasibility.

viii. After the second development phase, all code and documentation will be 
turned over to WGBH for release under an MIT open-source license.

b. Project Requirements/Desired Features

Required:

User can easily install the tool.
User can easily create a new PBCore XML document.
User can add any allowed PBCore XML attribute or element to an existing 
document.
User can see definitions of each element incorporated into the tool.
User can see options for elements or attributes to add to an existing 
document.
User can import pre-existing batch or single PBCore XML.
User can export batch or single PBCore XML.
User can import PBCore-compliant CSV.
User can export PBCore-compliant CSV.
User can view created XML documents.
User can conduct a keyword search across fields in XML documents.
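The keyword-search requirement above could be met for externally stored XML 
with something as simple as a scan over each document's text nodes. A sketch 
in Ruby (the RFP's requested language), using REXML from the standard library 
and invented sample documents; a production tool would more likely index the 
files rather than re-parse them per query:

```ruby
require 'rexml/document'

# Search a set of PBCore XML strings for a keyword across all text nodes.
# Returns the indices of matching documents (stand-ins for filenames).
def keyword_search(xml_docs, term)
  needle = term.downcase
  xml_docs.each_index.select do |i|
    doc = REXML::Document.new(xml_docs[i])
    REXML::XPath.match(doc, '//text()').any? do |node|
      node.value.downcase.include?(needle)
    end
  end
end

docs = [
  '<pbcoreDescriptionDocument><pbcoreTitle>Jazz Hour</pbcoreTitle></pbcoreDescriptionDocument>',
  '<pbcoreDescriptionDocument><pbcoreTitle>Evening News</pbcoreTitle></pbcoreDescriptionDocument>'
]
puts keyword_search(docs, 'jazz').inspect  # indices of matching documents
```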
   

Re: [CODE4LIB] Are MarcEdit Alma API requests multi-threaded?

2017-08-28 Thread Steve Meyer
Terry, thanks for the quick response and explanation. This design is good
for our large consortium (University of Wisconsin System) because it
reduces the likelihood of batch updates impacting more time-sensitive API
requests, such as displaying a patron account, placing requests for items,
or processing patron record updates.



Re: [CODE4LIB] Are MarcEdit Alma API requests multi-threaded?

2017-08-28 Thread Terry Reese
I can answer that question -- they are not multi-threaded.  Search is done via 
Z39.50 or SRU (though the record itself must be pulled via the API in order to 
get the IDs), and then updates must happen on a record-by-record basis (so 
again, one call per record) via the API.  Each record is handled individually 
and sequentially, because the ability for Alma to handle threaded updates 
seemed to vary by institution.  Taking the slower (but more reliable) approach 
seemed best.

You can see the API code here: https://github.com/reeset/alma_api_library

FYI -- I'll be changing it slightly to accommodate a few changes to the Alma 
API happening in September.

--tr
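The sequential pattern described here (fetch one bib record, modify it, write 
it back, move on) looks roughly like the following in Ruby. This is not 
MarcEdit's actual code: the host is region-specific, the endpoint path and 
pause interval are illustrative assumptions, and the MARCXML edit step is left 
as a placeholder.

```ruby
require 'net/http'
require 'uri'

# Region-specific base URL; an assumption, not taken from MarcEdit.
ALMA_BASE = 'https://api-na.hosted.exlibrisgroup.com/almaws/v1'

def bib_uri(mms_id, api_key)
  URI("#{ALMA_BASE}/bibs/#{mms_id}?apikey=#{api_key}")
end

# One GET plus one PUT per record, strictly sequential, with optional
# client-side pacing between calls to stay under an API quota.
def update_bibs_sequentially(mms_ids, api_key, pause: 0.5)
  mms_ids.each do |mms_id|
    uri = bib_uri(mms_id, api_key)
    response = Net::HTTP.get_response(uri)  # pull the bib record
    marcxml = response.body
    # ... edit the MARCXML payload here ...
    put = Net::HTTP::Put.new(uri, 'Content-Type' => 'application/xml')
    put.body = marcxml
    Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(put) }
    sleep pause
  end
end
```

Because each update waits for the previous one to finish, total throughput is 
bounded by one request at a time, which is exactly what keeps a large batch 
from crowding out other API consumers on the same account.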



[CODE4LIB] Are MarcEdit Alma API requests multi-threaded?

2017-08-28 Thread Steve Meyer
We are investigating using MarcEdit to batch update MARC records in Alma
via the Alma APIs.

I realize this question is likely more appropriate for the MarcEdit
listserv, but I am asking from the perspective of our systems and
development departments, not as the MarcEdit users in our cataloging
departments. We are concerned about API throttling, quotas and the impact
on multiple API requests against Alma.

The Alma APIs are rate limited; so before embarking on this approach, we'd
like to make sure we don't hit our API quotas. Can anyone tell us if
MarcEdit's Alma bib record write operations are multi-threaded, and if so,
is there any way to configure the number of threads spawned?

We could end up in a situation where other critical systems that
communicate with Alma via its APIs could become unavailable due to our API
account being flooded with what is actually a lower priority set of
requests.

Thanks,
Steve


[CODE4LIB] fiscal continuity

2017-08-28 Thread Eric Lease Morgan
This is just a “keep you in the loop” message regarding fiscal continuity of 
Code4Lib.

The Code4Lib Fiscal Continuity Interest Group (the folks investigating 
possible financial options for our community) sees its work as almost done. 
More specifically, because of the recent discussions, we took it upon 
ourselves to investigate a couple more fiscal agent possibilities and update 
our report accordingly. When that work is done (we anticipate finishing 
within a few weeks), a vote will take place. For more detail, see the 
Group’s home page:

  https://wiki.code4lib.org/Fiscal_Continuity

FYI

—
Eric Morgan, Just One of Many