See NLM's Fedora/Blacklight implementation for serials:
http://collections.nlm.nih.gov/catalog/nlm:nlmuid-52420700R-root
Root object metadata is from the MARC record for the whole work; leaf object
metadata is volume-specific (it's hand-crafted).
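That root/leaf split could be sketched like this (field names and values here are hypothetical illustrations, not NLM's actual schema): the root carries work-level fields derived from the MARC record, and each hand-crafted leaf supplies its volume-specific fields on top.

```python
# Sketch of root/leaf metadata for a serial.
# Field names and values are hypothetical, not NLM's actual schema.
root = {
    "title": "Example Serial Title",  # from the MARC record for the whole work
    "publisher": "Example Press",
}

leaves = [
    {"volume": "v. 1 (1900)"},        # hand-crafted, volume-specific
    {"volume": "v. 2 (1901)"},
]

def display_metadata(root, leaf):
    """Leaf fields supplement (and could override) work-level fields."""
    merged = dict(root)
    merged.update(leaf)
    return merged

print(display_metadata(root, leaves[0])["volume"])  # v. 1 (1900)
```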
John P. Rees
Archivist and Digital Resources
Thanks, everyone, and keep them coming.
There can be a lot of different components to a single item, and we're
looking at ways of fitting it all together in a user-friendly and elegant
way. I'm hoping that when a person clicks on something after searching or
browsing, they don't get a
Our digital collections contain multi-page works of scanned imagery, single
documents, and sometimes a combination of both. Below is an example of a letter
containing both scanned images and document transcriptions.
The descriptive metadata is applied to the parent work, but each
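A letter like that could be modeled, as a rough sketch (the structure and field names are assumptions for illustration, not our actual schema), as a parent work carrying the descriptive metadata, with ordered components that each pair a page image with its transcription:

```python
# Hypothetical model of a compound letter: descriptive metadata on the
# parent, each component pairing a scanned image with a transcription.
letter = {
    "title": "Letter from A to B",  # descriptive metadata on the parent work
    "components": [
        {"order": 1, "image": "page1.jp2", "transcription": "page1.txt"},
        {"order": 2, "image": "page2.jp2", "transcription": "page2.txt"},
    ],
}

# Render in page order so a viewer can show image and text side by side.
pages = sorted(letter["components"], key=lambda c: c["order"])
```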
Sarah-
We developed our own RDF ontology[1] to model our data, based roughly on MODS
and MADS, and we store our files and metadata in a custom repository[2] which
implements the core of the Fedora 3 REST API. We developed a Hydra head[3] for
searching, display, etc.
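To give a feel for the kind of calls a Fedora 3-style REST API involves, here is a minimal sketch of URL construction against such a repository; the base URL and PID are made-up examples, and only the core object/datastream endpoints are shown.

```python
from urllib.parse import quote

# Sketch of core Fedora 3 REST API URL construction.
# Base URL and PID are hypothetical examples.
BASE = "http://repository.example.org/fedora"

def object_profile_url(pid):
    """URL for an object's profile as XML."""
    return f"{BASE}/objects/{quote(pid, safe='')}?format=xml"

def datastream_content_url(pid, dsid):
    """URL for one datastream's content (e.g. a MODS record or an image)."""
    return f"{BASE}/objects/{quote(pid, safe='')}/datastreams/{dsid}/content"

print(datastream_content_url("demo:1", "MODS"))
# http://repository.example.org/fedora/objects/demo%3A1/datastreams/MODS/content
```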
There is currently an
Esme,
Your examples are similar to what I am hoping for. Can you explain a little
bit more what system you used for backend to store image URLs and Object
descriptions?
Sarah
-----Original Message-----
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Esmé
Cowles
Sent:
The short answer is that what we have right now is document-type items
(pages of books or letters, front and back of a map), but that might grow
in the future to include video or multiple views of art objects. The
documents are the main concern right now.
Most of our compound objects in CONTENTdm
The best way to display compound objects really depends on the nature of
the compound objects. For example, the optimal display for a book stored as
a compound object will be very different from that for an art object
photographed from various vantage points, or for a dataset. Likewise,
whether you can get away with
Islandora has a compound image model that allows for objects in the repository
to be related to each other. An example in the Islandora Foundation's sandbox:
http://sandbox.islandora.ca/islandora/object/islandora%3A105
This is made up of two large image objects:
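The pattern behind that model can be sketched as children that point at their parent and carry a sequence number; the relation name below echoes Islandora's isConstituentOf predicate, but the PIDs and data structure are purely illustrative.

```python
# Sketch of compound-object relations: each child points at its parent
# and carries a sequence number. PIDs are hypothetical examples.
relations = [
    {"pid": "islandora:106", "isConstituentOf": "islandora:105", "sequence": 1},
    {"pid": "islandora:107", "isConstituentOf": "islandora:105", "sequence": 2},
]

def constituents(parent_pid):
    """Return a parent's child objects in display order."""
    children = [r for r in relations if r["isConstituentOf"] == parent_pid]
    return sorted(children, key=lambda r: r["sequence"])

print([c["pid"] for c in constituents("islandora:105")])
# ['islandora:106', 'islandora:107']
```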
Laura, is it an option to migrate the literary content into a TEI form? You
could consolidate the objects that make up a single text into a single
complex object, with embedded metadata (at whatever level you like), and
then wheel in some existing TEI content management / presentation system.
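As a rough sketch of that consolidation (titles and content here are invented, and a real migration would carry over far richer metadata), each constituent object becomes a div in the body of one TEI document, with work-level metadata embedded in the teiHeader:

```python
import xml.etree.ElementTree as ET

# Sketch of consolidating a multi-part text into one TEI document.
# Titles and content are illustrative placeholders.
tei = ET.Element("TEI", xmlns="http://www.tei-c.org/ns/1.0")
header = ET.SubElement(tei, "teiHeader")
title = ET.SubElement(
    ET.SubElement(ET.SubElement(header, "fileDesc"), "titleStmt"), "title"
)
title.text = "Collected Letters"  # work-level metadata lives in the header

body = ET.SubElement(ET.SubElement(tei, "text"), "body")
for n, content in enumerate(["First letter...", "Second letter..."], start=1):
    # Each formerly separate object becomes one div in the single document.
    div = ET.SubElement(body, "div", n=str(n), type="letter")
    ET.SubElement(div, "p").text = content

xml_bytes = ET.tostring(tei)
```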
On
Laura-
At UCSD, we have complex objects which range from a flat list of files (e.g.
page images):
http://library.ucsd.edu/dc/object/bb59054559
all the way up to pretty involved hierarchy modeling a filesystem:
http://library.ucsd.edu/dc/object/bb9796611k
Many of these have a hierarchy with
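The two shapes — a flat list of files versus a nested, filesystem-like hierarchy — could be sketched like this (field names and a traversal are illustrative assumptions, not UCSD's actual data model):

```python
# Sketch of the two shapes of complex object: a flat file list vs. a
# nested, filesystem-like hierarchy. Field names are illustrative.
flat = {
    "title": "Scrapbook",
    "files": ["page-001.jp2", "page-002.jp2"],
}

nested = {
    "title": "Papers",
    "children": [
        {"title": "Correspondence",
         "children": [{"title": "1969", "files": ["letter-01.pdf"]}]},
    ],
}

def count_files(node):
    """Walk either shape and count the files it contains."""
    total = len(node.get("files", []))
    for child in node.get("children", []):
        total += count_files(child)
    return total
```

A display layer can treat both uniformly by recursing over `children` wherever it is present.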
We're migrating from CONTENTdm and trying to figure out how to display
compound objects (or the things formerly known as compound objects) and
metadata for the end user. Can anyone point me to really good examples of
displaying items like this, especially where the user can see metadata for
parts