Re: [CODE4LIB] Patents and open source projects

2011-12-06 Thread MJ Ray
Erik Hetzner erik.hetz...@ucop.edu
 MJ Ray wrote:
  Will people please stop suggesting that PTFS's attempts to register
  Koha trademarks in various jurisdictions are somehow because of
  inattention on the part of the Koha users and developers?
 
 It was my intention only to suggest that trademark issues were
 something that one needs to pay attention to, not that the Koha
 community had not paid attention to trademark issues. Thanks for
 clarifying the issue: I was unclear.

OK, sorry, I'm probably a bit sensitive because of some of the crazier
press coverage that we've had, suggesting that users or developers
should have done various things - often contradictory - but like the
old saying goes: the price of freedom is eternal vigilance.

My personal opinion is that it wouldn't matter if friendly people had
already registered it as a NZ trademark for whatever class covers
software (and I understand someone has a similar trademark for it).
Some ratbags could still come along, register it for another class
(books, perhaps), slip past the regulator by mistake and screw with
the community for a while.

Trademarks aren't quite as awful as patents, but they're not far off.
Neither are as narrow and straightforward as copyright can be and are
much more expensive to defend.  They're a bottomless pit for resources
and ideally private trademarks and patents should not be allowed for
FOSS.

Regards,
-- 
MJ Ray (slef), member of www.software.coop, a for-more-than-profit co-op.
http://koha-community.org supporter, web and LMS developer, statistician.
In My Opinion Only: see http://mjr.towers.org.uk/email.html
Available for hire for Koha work http://www.software.coop/products/koha


Re: [CODE4LIB] Patents and open source projects

2011-12-06 Thread Emily Lynema
Just wanted to say thanks for the many responses. You are all right that
this issue is not specific to library software. It's not often
that I hear such resounding agreement from all responders!!

-emily

On Mon, Dec 5, 2011 at 12:53 PM, Erik Hetzner erik.hetz...@ucop.edu wrote:

 At Mon, 5 Dec 2011 08:17:26 -0500,
 Emily Lynema wrote:
 
  A colleague approached me this morning with an interesting question that
 I
  realized I didn't know how to answer. How are open source projects in the
  library community dancing around technologies that may have been patented
  by vendors? We were particularly wondering about this in light of open
  source ILS projects, like Kuali OLE, Koha, and Evergreen. I know OLE is
  still in the early stages, but did the folks who created Koha and
 Evergreen
  ever run into any problems in this area? Have library vendors
 historically
  pursued patents for their systems and solutions?

 I don’t think libraries have a particularly unique perspective on
 this: most free/open source software projects have the same issues
 with patents.

 The Software Freedom Law Center has some basic information about these
 issues. As I recall, the “Legal basics for developers” edition of
 their podcasts is useful [1], but other editions may be helpful as
 well.

 Basically, the standard advice for patents is what Mike Taylor gave:
 ignore them. Pay attention to copyright and trademark issues (as the
 Koha problem shows), but patents really don’t need to be on your
 radar.

 best, Erik

 1.
 http://www.softwarefreedom.org/podcast/2011/aug/16/Episode-0x16-Legal-Basics-for-Developers/

 Sent from my free software system http://fsf.org/.




Re: [CODE4LIB] jQuery Ajax request to update a PHP variable

2011-12-06 Thread Erik Hatcher
I'm with jrock on this one.   But maybe I'm a luddite that didn't get the memo 
either (but I am credited for being one of the instrumental folks in the Ajax 
world, heh - in one or more of the Ajax books out there, us old timers called 
it remote scripting).

What I hate hate hate about seeing JSON being returned from a server for the 
browser to generate the view is stuff like:

   string = "<div>" + some_data_from_JSON + "</div>";

That embodies everything that is wrong about Ajax + JSON.
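To make the complaint concrete, here is a hypothetical sketch (function and variable names are illustrative, not from the thread) contrasting the string-concatenation anti-pattern with the server-rendered-fragment approach being defended:

```javascript
// Anti-pattern: the server returns JSON and the client rebuilds markup by
// hand, duplicating tags, classes, and escaping rules in JavaScript.
function renderFromJson(item) {
  return '<div class="title">' + item.title + '</div>';
}

// Alternative defended here: the server returns a finished HTML fragment
// and the client only inserts it, e.g. $('#results').html(fragment).
function insertFragment(container, fragment) {
  container.innerHTML = fragment;
}
```

With the second approach the markup lives in one server-side template instead of being scattered through client code.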

As Jonathan said, the server is already generating dynamic HTML... why have it 
return JSON and move processing/templating to the client for some things but 
not other things?  Rhetorical question... of course it depends on the 
application.  If everything is entirely client-side generated, then sure.  But 
for traditional webapps, JSON to the client to simply piece it together as 
HTML is hideous.

I spoke to this a bit at my recent ApacheCon talk, slides are here: 
http://www.slideshare.net/erikhatcher/solr-flair-10173707 slides 4 and 8 
particularly on this topic.

So in short, opinions differ on the right way to do Ajax obviously.  It 
depends, no question, on the bigger picture and architectural pieces in play, 
but there is absolutely nothing wrong with having HTML being returned from the 
server for partial pieces of the page.  And in many cases it's the cleanest way 
to do it anyway.

Erik



On Dec 5, 2011, at 18:45 , Jonathan Rochkind wrote:

 I still like sending HTML back from my server. I guess I never got the 
 message that that was out of style, heh.
 
 My server application already has logic for creating HTML from templates, and 
 quite possibly already creates this exact same piece of HTML in some other 
 place, possibly for use with non-AJAX fallbacks, or some other context where 
 that snippet of HTML needs to be rendered. I prefer to re-use this logic 
 that's already on the server, rather than have a duplicate HTML 
 generating/templating system in the javascript too.  It's working fine for 
 me, in my use patterns.
 
 Now, certainly, if you could eliminate any PHP generation of HTML at all, as 
 I think Godmar is suggesting, and basically have a pure Javascript app -- 
 that would be another approach that avoids duplication of HTML generating 
 logic in both JS and PHP. That sounds fine too. But I'm still writing apps 
 that degrade if you have no JS (including for web spiders that have no JS, 
 for instance), and have nice REST-ish URLs, etc.   If that's not a 
 requirement and you can go all JS, then sure.  But I wouldn't say that making 
 apps that use progressive enhancement with regard to JS and degrade fine if 
 you don't have it is out of style, or if it is, it ought not to be!
 
 Jonathan
 
 On 12/5/2011 6:31 PM, Godmar Back wrote:
 FWIW, I would not send HTML back to the client in an AJAX request - that
 style of AJAX fell out of favor years ago.
 
 Send back JSON instead and keep the view logic client-side. Consider using
 a library such as knockout.js. Instead of your current (difficult to
 maintain) mix of PHP and client-side JavaScript, you'll end up with a
 static HTML page, a couple of clean JSON services (for checked-out per
 subject, and one for the syndetics ids of the first 4 covers), and clean
 HTML templates.
 
 You had earlier asked the question whether to do things client or server
 side - well in this example, the correct answer is to do it client-side.
 (Yours is a read-only application, where none of the advantages of
 server-side processing applies.)
 
  - Godmar
 
 On Mon, Dec 5, 2011 at 6:18 PM, Nate Hillnathanielh...@gmail.com  wrote:
 
 Something quite like that, my friend!
 Cheers
 N
 
 On Mon, Dec 5, 2011 at 3:10 PM, Walker, Daviddwal...@calstate.edu
 wrote:
 
 I gotcha.  More information is, indeed, better. ;-)
 
 So, on the PHP side, you just need to grab the term from the  query
 string, like this:
 
  $searchterm = $_GET['query'];
 
 And then in your JavaScript code, you'll send an AJAX request, like:
 
  http://www.natehill.net/vizstuff/catscrape.php?query=Cooking
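 On the JavaScript side, a small hypothetical helper (the name catalogUrl is made up for illustration) can build that URL so the term arrives server-side as $_GET['query']:

```javascript
// Build the request URL the PHP snippet above expects; the encoded term
// is read server-side as $_GET['query'].
function catalogUrl(term) {
  return 'http://www.natehill.net/vizstuff/catscrape.php?query='
      + encodeURIComponent(term);
}

// e.g. in jQuery: $.get(catalogUrl('Cooking'), function (html) { /* ... */ });
```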
 
 Is that what you're looking for?
 
 --Dave
 
 -
 David Walker
 Library Web Services Manager
 California State University
 
 
 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of
 Nate Hill
 Sent: Monday, December 05, 2011 3:00 PM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] jQuery Ajax request to update a PHP variable
 
 As always, I provided too little information.  Dave, it's much more
 involved than that
 
 I'm trying to make a kind of visual browser of popular materials from one
 of our branches from a .csv file.
 
 In order to display book covers for a series of searches by keyword, I
 query the catalog, scrape out only the syndetics images, and then
 display 4
 of them.  The problem is that I've hardcoded in a search for 'Drawing',
 rather than dynamically pulling the correct term and putting it into the
 catalog 

Re: [CODE4LIB] jQuery Ajax request to update a PHP variable

2011-12-06 Thread Andreas Orphanides
jrock... I like it! I think Mr. Rochkind has a new nickname. And a new
imperative: better get the band together for Seattle, Jonathan!

Is it too late to dedicate a presentation slot to a performance?

(Whoa, actually, seriously, a Code4Lib talent show would be AWESOME.)



On Tue, Dec 6, 2011 at 8:38 AM, Erik Hatcher erikhatc...@mac.com wrote:

 I'm with jrock on this one.   But maybe I'm a luddite that didn't get the
 memo either (but I am credited for being one of the instrumental folks in
 the Ajax world, heh - in one or more of the Ajax books out there, us old
 timers called it remote scripting).

 What I hate hate hate about seeing JSON being returned from a server for
 the browser to generate the view is stuff like:

   string = "<div>" + some_data_from_JSON + "</div>";

 That embodies everything that is wrong about Ajax + JSON.

 As Jonathan said, the server is already generating dynamic HTML... why
 have it return JSON and move processing/templating to the client for some
 things but not other things?  Rhetorical question... of course it depends
 on the application.  If everything is entirely client-side generated, then
 sure.  But for traditional webapps, JSON to the client to simply piece it
 together as HTML is hideous.

 I spoke to this a bit at my recent ApacheCon talk, slides are here: 
 http://www.slideshare.net/erikhatcher/solr-flair-10173707 slides 4 and 8
 particularly on this topic.

 So in short, opinions differ on the right way to do Ajax obviously.  It
 depends, no question, on the bigger picture and architectural pieces in
 play, but there is absolutely nothing wrong with having HTML being returned
 from the server for partial pieces of the page.  And in many cases it's the
 cleanest way to do it anyway.

Erik



 On Dec 5, 2011, at 18:45 , Jonathan Rochkind wrote:

  I still like sending HTML back from my server. I guess I never got the
 message that that was out of style, heh.
 
  My server application already has logic for creating HTML from
 templates, and quite possibly already creates this exact same piece of HTML
 in some other place, possibly for use with non-AJAX fallbacks, or some
 other context where that snippet of HTML needs to be rendered. I prefer to
 re-use this logic that's already on the server, rather than have a
 duplicate HTML generating/templating system in the javascript too.  It's
 working fine for me, in my use patterns.
 
  Now, certainly, if you could eliminate any PHP generation of HTML at
 all, as I think Godmar is suggesting, and basically have a pure Javascript
 app -- that would be another approach that avoids duplication of HTML
 generating logic in both JS and PHP. That sounds fine too. But I'm still
 writing apps that degrade if you have no JS (including for web spiders that
 have no JS, for instance), and have nice REST-ish URLs, etc.   If that's
 not a requirement and you can go all JS, then sure.  But I wouldn't say
 that making apps that use progressive enhancement with regard to JS and
 degrade fine if you don't have it is out of style, or if it is, it ought not
 to be!
 
  Jonathan
 
  On 12/5/2011 6:31 PM, Godmar Back wrote:
  FWIW, I would not send HTML back to the client in an AJAX request - that
  style of AJAX fell out of favor years ago.
 
  Send back JSON instead and keep the view logic client-side. Consider
 using
  a library such as knockout.js. Instead of your current (difficult to
  maintain) mix of PHP and client-side JavaScript, you'll end up with a
  static HTML page, a couple of clean JSON services (for checked-out per
  subject, and one for the syndetics ids of the first 4 covers), and clean
  HTML templates.
 
  You had earlier asked the question whether to do things client or server
  side - well in this example, the correct answer is to do it client-side.
  (Yours is a read-only application, where none of the advantages of
  server-side processing applies.)
 
   - Godmar
 
  On Mon, Dec 5, 2011 at 6:18 PM, Nate Hillnathanielh...@gmail.com
  wrote:
 
  Something quite like that, my friend!
  Cheers
  N
 
  On Mon, Dec 5, 2011 at 3:10 PM, Walker, Daviddwal...@calstate.edu
  wrote:
 
  I gotcha.  More information is, indeed, better. ;-)
 
  So, on the PHP side, you just need to grab the term from the  query
  string, like this:
 
   $searchterm = $_GET['query'];
 
  And then in your JavaScript code, you'll send an AJAX request, like:
 
   http://www.natehill.net/vizstuff/catscrape.php?query=Cooking
 
  Is that what you're looking for?
 
  --Dave
 
  -
  David Walker
  Library Web Services Manager
  California State University
 
 
  -Original Message-
  From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf
 Of
  Nate Hill
  Sent: Monday, December 05, 2011 3:00 PM
  To: CODE4LIB@LISTSERV.ND.EDU
  Subject: Re: [CODE4LIB] jQuery Ajax request to update a PHP variable
 
  As always, I provided too little information.  Dave, it's much more
  involved than that
 
  I'm trying to make a kind of 

Re: [CODE4LIB] jQuery Ajax request to update a PHP variable

2011-12-06 Thread Nate Vack
On Mon, Dec 5, 2011 at 8:04 PM, Godmar Back god...@gmail.com wrote:

 That said, I'm genuinely interested in what others are thinking/have
 experienced.

I've heard the don't send HTML argument, but in my experience,
writing HTML template code and then Javascript code to generate the
same HTML is bad. They either have slight inconsistencies, or I change
one and forget about the other, or something else.

My rule of thumb: If a UI element relies on JS for its functionality,
rendering it with JS is fine. If it's supposed to show up and operate
without JS, render it with your templating engine. Render each thing
in only one place.
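That rule of thumb can be sketched as follows (a hypothetical illustration; the names are made up):

```javascript
// A widget that only functions with JS may be rendered by JS.
function buildTooltip(text) {
  return '<span class="tooltip">' + text + '</span>';
}

// A piece that must also work without JS comes from the server's template
// engine; the client merely swaps the ready-made fragment in, so that
// markup exists in exactly one place.
function refreshResults(container, htmlFromServer) {
  container.innerHTML = htmlFromServer;
}
```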

Whether an app should require javascript or use progressive
enhancement depends on the app -- but this just needs javascript is
not a decision to make lightly; it's hard to change your mind two
months in.

-n


Re: [CODE4LIB] jQuery Ajax request to update a PHP variable

2011-12-06 Thread Nate Vack
On Mon, Dec 5, 2011 at 5:00 PM, Nate Hill nathanielh...@gmail.com wrote:

 Here's the work in process, and I believe it will only work in Chrome right
 now.
 http://www.natehill.net/vizstuff/donerightclasses.php

In this case, it looks like there really isn't that much data. I'd
preprocess everything into a big JSON object and just have all the
data instead of using AJAX for this task.

And actually, if you have some automated process writing the .CSV
file, I might have that same process (or a follow-on process) generate
your JSON and serve it statically instead of doing it on the fly in
PHP.

-n


Re: [CODE4LIB] Patents and open source projects

2011-12-06 Thread Nate Vack
On Mon, Dec 5, 2011 at 5:18 PM, Erik Hetzner erik.hetz...@ucop.edu wrote:

 It was my intention only to suggest that trademark issues were
 something that one needs to pay attention to, not that the Koha
 community had not paid attention to trademark issues.

Additionally, in the case of trademark in particular, it's ABSOLUTELY
ESSENTIAL that once you find out about some ratbags, you begin to
apply your attention with substantial vigor -- as it sounds like Koha
has. Trademarks must be actively used and defended to remain valid.

This can be hard for open-source projects, because defending
trademarks involves lawyers and costs money.

Not allowing trademarks and patents for FOSS is complex if they're
allowed for software at all -- should someone reading a patent and
providing a free implementation invalidate that patent? That's the
exact opposite intent of patents. (Note: I think software patents
should not exist at all.)

If FOSS projects are immune to trademark suits, should I be able to
start a competing open-source catalog and call it Koha or Evergreen?
That seems like an undesirable outcome.

-n


Re: [CODE4LIB] jQuery Ajax request to update a PHP variable

2011-12-06 Thread Godmar Back
On Tue, Dec 6, 2011 at 8:38 AM, Erik Hatcher erikhatc...@mac.com wrote:

 I'm with jrock on this one.   But maybe I'm a luddite that didn't get the
 memo either (but I am credited for being one of the instrumental folks in
 the Ajax world, heh - in one or more of the Ajax books out there, us old
 timers called it remote scripting).


On the in-jest rhetorical front, I'm wondering if referring to oneself as
old-timer helps in defending against insinuations that opposing
technological change makes one a defender of the old ;-)

But:


 What I hate hate hate about seeing JSON being returned from a server for
 the browser to generate the view is stuff like:

   string = "<div>" + some_data_from_JSON + "</div>";

 That embodies everything that is wrong about Ajax + JSON.


That's exactly why you use new libraries such as knockout.js, to avoid just
that. Client-side template engines with automatic data-bindings.
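The automatic data-binding idea can be illustrated with a toy observable (this is a simplified sketch for illustration, not knockout.js's actual API):

```javascript
// Toy observable: reads via obs(), writes via obs(v), and notifies any
// subscribed render functions on each write.
function observable(initial) {
  let value = initial;
  const subscribers = [];
  function obs(newValue) {
    if (arguments.length === 0) return value;   // read
    value = newValue;                           // write
    subscribers.forEach(fn => fn(value));       // re-render bound views
    return value;
  }
  obs.subscribe = fn => subscribers.push(fn);
  return obs;
}

// A view is "bound" once and re-renders itself whenever the data changes,
// with no hand-written string concatenation at each call site.
const subject = observable('Drawing');
let view = '';
subject.subscribe(v => { view = '<h2>' + v + '</h2>'; });
subject('Cooking');   // view is now '<h2>Cooking</h2>'
```

Real libraries like knockout.js add declarative bindings in the markup on top of this core mechanism.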

Alternatively, AJAX frameworks use JSON and then interpret the returned
objects as code. Take a look at the client/server traffic produced by ZK,
for instance.


 As Jonathan said, the server is already generating dynamic HTML... why
 have it return


It isn't. There is no server "already generating" anything; it's a new app
Nate is writing. (Unless you count his work of the past two days). The
dynamic HTML he's generating is heavily tailored to his JS. There's
extremely tight coupling, which now exists across multiple files written in
multiple languages. Simply avoidable bad software engineering. That's not
even making the computational cost argument that avoiding template
processing on the server is cheaper. And with respect to Jonathan's
argument of degradation, a degraded version of his app (presumably) would
use a <table> or something like that; it'd look nothing like what he
showed us yesterday.

Heh - the proof of the pudding is in the eating. Why don't we create 2
versions of Nate's app, one with mixed server/client - like the one he's
completing now, and I create the client-side based one, and then we compare
side by side?  I'll work with Nate on that.

  - Godmar

[ I hope it's ok to snip off the rest of the email trail in my reply. ]


Re: [CODE4LIB] Patents and open source projects

2011-12-06 Thread Eric Lease Morgan
Ironically, I had (or there was) some trouble with the term 
MyLibrary@NCState. Granted, the term was originally a variation of My 
Netscape, My Yahoo, and My Deja News, but all sorts of things followed it, like 
MyiLibrary, the Google Books My Library, and then there was an ALA thing. I'm 
not necessarily saying MyLibrary was the leader here, but an example of how 
trademarks (monikers) can be used, abused, and morphed. --Eric Morgan


[CODE4LIB] Job posting: Metadata Librarian, Princeton, NJ

2011-12-06 Thread Christine Schwartz
Metadata Librarian

Princeton Theological Seminary is seeking a Metadata Librarian to
assist with the production of metadata for a variety of digital
projects.

Responsibilities

Responsibilities include analyzing metadata requirements and
specifications, creating and editing metadata documents, developing
metadata crosswalks, facilitating outsourcing workflows, and
performing quality control.

Requirements

The successful candidate will be enthusiastic about collaborating with
a small, cross-functional, agile team using the scrum framework. This
position requires a master’s degree in Library Science, Information
Science, or the equivalent, as well as understanding a wide variety of
metadata standards (including AACR2, DOI, Dublin Core, EAD, HTML,
MARCXML, METS, MODS, and TEI). Preferred qualifications include
experience with XML and related standards such as CSS, DTD, SRU, XML
Schema, XPath, XQuery, and XSLT as well as familiarity with relational
and XML-native databases.

Application Information

We offer a pleasant, academic work environment in an attractive campus
setting and a benefits package that includes 20 days vacation after
one year of service. Interested candidates should send resume with
salary requirement to:

PRINCETON THEOLOGICAL SEMINARY
Human Resources Office
64 Mercer Street
Princeton, NJ  08540
Fax: (609) 924-2973
E-Mail: ap...@ptsem.edu

We Are An Equal Opportunity Employer

Christine Schwartz
XML Database Administrator
Princeton Theological Seminary Library
christine.schwa...@ptsem.edu


Re: [CODE4LIB] jQuery Ajax request to update a PHP variable

2011-12-06 Thread Jonathan Rochkind
 Is it too late to dedicate a presentation slot to a performance?
 (Whoa, actually, seriously, a Code4Lib talent show would be AWESOME.)

The rails conf in baltimore a couple years ago had an evening jam session slot. 

Sadly, it's really a pain bringing the accordion on an airplane. 


Re: [CODE4LIB] Patents and open source projects

2011-12-06 Thread Roy Tennant
I once got a cease and desist letter from a legal firm defending someone's 
trademark for metadata. I mean, seriously. Perhaps obviously, I ignored it. 
It's still in my files somewhere.
Roy



On Dec 6, 2011, at 6:31 AM, Eric Lease Morgan emor...@nd.edu wrote:

 Ironically, I had (or there was) some trouble with the term 
 MyLibrary@NCState. Granted, the term was originally a variation of My 
 Netscape, My Yahoo, and My Deja News, but all sorts of things followed it, 
 like MyiLibrary, the Google Books My Library, and then there was an ALA thing. 
 I'm not necessarily saying MyLibrary was the leader here, but an example of 
 how trademarks (monikers) can be used, abused, and morphed. --Eric Morgan


Re: [CODE4LIB] Patents and open source projects

2011-12-06 Thread Eric Lease Morgan
I too got a cease and desist letter almost twenty years ago. I wrote a CGI 
script that would calculate the phase of the moon. I called it LunaTick. The 
letter was from a lawyer defending a trademark for a fishing lure. --Eric Morgan


Re: [CODE4LIB] Patents and open source projects

2011-12-06 Thread Mike Taylor
I have heard that it's best not to acknowledge receipt of such letters
at all. Can anyone confirm or deny that?

-- Mike.


On 6 December 2011 14:46, Roy Tennant roytenn...@gmail.com wrote:
 I once got a cease and desist letter from a legal firm defending someone's 
 trademark for metadata. I mean, seriously. Perhaps obviously, I ignored it. 
 It's still in my files somewhere.
 Roy



 On Dec 6, 2011, at 6:31 AM, Eric Lease Morgan emor...@nd.edu wrote:

 Ironically, I had (or there was) some trouble with the term 
 MyLibrary@NCState. Granted, the term was originally a variation of My 
 Netscape, My Yahoo, and My Deja News, but all sorts of things followed it, 
 like MyiLibrary, the Google Books My Library, and then there was an ALA 
 thing. I'm not necessarily saying MyLibrary was the leader here, but an 
 example of how trademarks (monikers) can be used, abused, and morphed. 
 --Eric Morgan



Re: [CODE4LIB] Patents and open source projects

2011-12-06 Thread Wilfred Drew
Over 15 years ago I got a threatening letter because I created a guide called 
Library Jargon and offered it up via FTP, gopher and email. Some rinky-dink 
company claimed they had a trademark and copyright to it.  I wrote them back 
after doing a search via gopher on the phrase in question and found over 200 
other documents with the same title.  I sent the search results to them and 
never heard from them again.

Bill Drew

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Roy 
Tennant
Sent: Tuesday, December 06, 2011 9:46 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Patents and open source projects

I once got a cease and desist letter from a legal firm defending someone's 
trademark for metadata. I mean, seriously. Perhaps obviously, I ignored it. 
It's still in my files somewhere.
Roy



On Dec 6, 2011, at 6:31 AM, Eric Lease Morgan emor...@nd.edu wrote:

 Ironically, I had (or there was) some trouble with the term 
 MyLibrary@NCState. Granted, the term was originally a variation of My 
 Netscape, My Yahoo, and My Deja News, but all sorts of things followed it, 
 like MyiLibrary, the Google Books My Library, and then there was an ALA thing. 
 I'm not necessarily saying MyLibrary was the leader here, but an example of 
 how trademarks (monikers) can be used, abused, and morphed. --Eric Morgan


Re: [CODE4LIB] jQuery Ajax request to update a PHP variable

2011-12-06 Thread Jonathan Rochkind
I'll admit I haven't spent a lot of time investigating/analyzing this 
particular application -- it's quite possible an all-JS app is the right choice 
here. 

I was just responding to the suggestion that returning HTML to AJAX was out of 
style and shouldn't be done anymore; with the implication I picked up that 
(nearly) ALL apps should be almost all JS, use JS templating engines, etc., 
that this is the right new way to write web apps.  I think this sends the 
wrong message to newbies. 

It's true that it is very trendy these days to write all JS apps, which if they 
function at all without JS do so with a completely separate codepath (this is 
NOT progressive enhancement, although it is a way of ensuring non-JS 
accessibility).   Yeah, it's trendy, but I think it's frequently (though not 
always) the wrong choice when it's done.  If you do provide a completely 
separate codepath for non-JS, this can be harder to maintain than actual 
progressive enhancement.  And pure JS either way can easily make your app a 
poor citizen of the web, harder to screen-scrape or spider, harder to find 
URLs to link to certain parts of the app, etc.  (eg, 
http://www.tbray.org/ongoing/When/201x/2011/02/09/Hash-Blecch ) 

But, sure, maybe in this particular case pure-JS is a good way to go, I haven't 
spent enough time looking at or thinking about it to have an opinion. Sure, if 
you've already started down the path of using a JS templating/view-rendering 
engine, and that's something you/want need to do anyway, you might as well 
stick to it, I guess.  

I just reacted to the suggestion that doing anything _but_ this is out of 
style, or an old bad way of doing things. If writing apps that produce HTML 
with progressive enhancement is out of style, then I don't want to be 
fashionable! 

From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] on behalf of Godmar Back 
[god...@gmail.com]
Sent: Tuesday, December 06, 2011 9:34 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] jQuery Ajax request to update a PHP variable

On Tue, Dec 6, 2011 at 8:38 AM, Erik Hatcher erikhatc...@mac.com wrote:

 I'm with jrock on this one.   But maybe I'm a luddite that didn't get the
 memo either (but I am credited for being one of the instrumental folks in
 the Ajax world, heh - in one or more of the Ajax books out there, us old
 timers called it remote scripting).


On the in-jest rhetorical front, I'm wondering if referring to oneself as
old-timer helps in defending against insinuations that opposing
technological change makes one a defender of the old ;-)

But:


 What I hate hate hate about seeing JSON being returned from a server for
 the browser to generate the view is stuff like:

   string = "<div>" + some_data_from_JSON + "</div>";

 That embodies everything that is wrong about Ajax + JSON.


That's exactly why you use new libraries such as knockout.js, to avoid just
that. Client-side template engines with automatic data-bindings.

Alternatively, AJAX frameworks use JSON and then interpret the returned
objects as code. Take a look at the client/server traffic produced by ZK,
for instance.


 As Jonathan said, the server is already generating dynamic HTML... why
 have it return


It isn't. There is no server "already generating" anything; it's a new app
Nate is writing. (Unless you count his work of the past two days). The
dynamic HTML he's generating is heavily tailored to his JS. There's
extremely tight coupling, which now exists across multiple files written in
multiple languages. Simply avoidable bad software engineering. That's not
even making the computational cost argument that avoiding template
processing on the server is cheaper. And with respect to Jonathan's
argument of degradation, a degraded version of his app (presumably) would
use a <table> or something like that; it'd look nothing like what he
showed us yesterday.

Heh - the proof of the pudding is in the eating. Why don't we create 2
versions of Nate's app, one with mixed server/client - like the one he's
completing now, and I create the client-side based one, and then we compare
side by side?  I'll work with Nate on that.

  - Godmar

[ I hope it's ok to snip off the rest of the email trail in my reply. ]


Re: [CODE4LIB] Patents and open source projects

2011-12-06 Thread Walter Lewis
On 6 December 2011, at 9:46 AM, Roy Tennant wrote:

 I once got a cease and desist letter from a legal firm defending someone's 
 trademark for metadata. I mean, seriously. Perhaps obviously, I ignored it. 
 It's still in my files somewhere.

We had a variation in Ontario back in the 90s when a businessman working with 
libraries heard the phrase "virtual library" pass my lips in conversation.  
Next thing I knew, he thought he had trademarked it.  

I try never to use the phrase these days, and he left the library market.

I can't begin to recall which of you I heard it from first.

Walter Lewis
Halton Hills


Re: [CODE4LIB] Models of MARC in RDF

2011-12-06 Thread Karen Coyle

Quoting Fleming, Declan dflem...@ucsd.edu:

Hi - I'll note that the mapping decisions were made by our metadata  
services (then Cataloging) group, not by the tech folks making it  
all work, though we were all involved in the discussions.  One idea  
that came up was to do a, perhaps, lossy translation, but also stuff  
one triple with a text dump of the whole MARC record just in case we  
needed to grab some other element out we might need.  We didn't do  
that, but I still like the idea.  Ok, it was my idea.  ;)


I like that idea! Now that disk space is no longer an issue, it  
makes good sense to keep around the original state of any data that  
you transform, just in case you change your mind. I hadn't thought  
about incorporating the entire MARC record string in the  
transformation, but as I recall the average size of a MARC record is  
somewhere around 1K, which really isn't all that much by today's  
standards.


(As an old-timer, I remember running the entire Univ. of California  
union catalog on 35 megabytes, something that would now be considered  
a smallish email attachment.)


kc



D

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf  
Of Esme Cowles

Sent: Monday, December 05, 2011 11:22 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Models of MARC in RDF

I looked into this a little more closely, and it turns out it's a  
little more complicated than I remembered.  We built support for  
transforming to MODS using the MARC21slim2MODS.xsl stylesheet, but  
don't use that.  Instead, we use custom Java code to do the mapping.


I don't have a lot of public examples, but there's at least one  
public object which you can view the MARC from our OPAC:


http://roger.ucsd.edu/search/.b4827884/.b4827884/1,1,1,B/detlmarc~1234567FF=1,0,

The public display in our digital collections site:

http://libraries.ucsd.edu/ark:/20775/bb0648473d

The RDF for the MODS looks like:

<mods:classification rdf:parseType="Resource">
  <mods:authority>local</mods:authority>
  <rdf:value>FVLP 222-1</rdf:value>
</mods:classification>
<mods:identifier rdf:parseType="Resource">
  <mods:type>ARK</mods:type>
  <rdf:value>http://libraries.ucsd.edu/ark:/20775/bb0648473d</rdf:value>
</mods:identifier>
<mods:name rdf:parseType="Resource">
  <mods:namePart>Brown, Victor W</mods:namePart>
  <mods:type>personal</mods:type>
</mods:name>
<mods:name rdf:parseType="Resource">
  <mods:namePart>Amateur Film Club of San Diego</mods:namePart>
  <mods:type>corporate</mods:type>
</mods:name>
<mods:originInfo rdf:parseType="Resource">
  <mods:dateCreated>[196-]</mods:dateCreated>
</mods:originInfo>
<mods:originInfo rdf:parseType="Resource">
  <mods:dateIssued>2005</mods:dateIssued>
  <mods:publisher>Film and Video Library, University of California, San Diego, La Jolla, CA 92093-0175 http://orpheus.ucsd.edu/fvl/FVLPAGE.HTM</mods:publisher>
</mods:originInfo>
<mods:physicalDescription rdf:parseType="Resource">
  <mods:digitalOrigin>reformatted digital</mods:digitalOrigin>
  <mods:note>16mm; 1 film reel (25 min.) :; sd., col. ;</mods:note>
</mods:physicalDescription>
<mods:subject rdf:parseType="Resource">
  <mods:authority>lcsh</mods:authority>
  <mods:topic>Ranching</mods:topic>
</mods:subject>

etc.


There is definitely some loss in the conversion process -- I don't  
know enough about the MARC leader and control fields to know if they  
are captured in the MODS and/or RDF in some way.  But there are  
quite a few local and note fields that aren't present in the RDF.   
Other fields (e.g. 300 and 505) are mapped to MODS, but not  
displayed in our access system (though they are indexed for  
searching).


I agree it's hard to quantify lossy-ness.  Counting fields or  
characters would be the most objective, but has obvious problems  
with control characters sometimes containing a lot of information,  
and then the relative importance of different fields to the overall  
description.  There are other issues too -- some fields in this  
record weren't migrated because they duplicated collection-wide  
values, which are formulated slightly differently from the MARC  
record.  Some fields weren't migrated because they concern the  
physical object, and therefore don't really apply to the digital  
object.  So that really seems like a morass to me.
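Even Esme's "most objective" option — counting fields — is easy to sketch, which makes its limits easy to see. Below is a minimal, assumption-laden JavaScript version: tag lists are invented, every tag is weighted equally, and the control-field and collection-level caveats above are exactly what it ignores.

```javascript
// Crude lossiness metric: fraction of source MARC tags that survive the
// mapping. Tag lists are illustrative; a real record has many more, and
// equal weighting ignores that some fields matter far more than others.
function retainedFraction(sourceTags, mappedTags) {
  const mapped = new Set(mappedTags);
  const kept = sourceTags.filter(t => mapped.has(t)).length;
  return kept / sourceTags.length;
}

const source = ["100", "245", "260", "300", "505", "590", "650"];
const mapped = ["100", "245", "260", "300", "505", "650"]; // 590 (local note) dropped
console.log(retainedFraction(source, mapped)); // 6/7 ≈ 0.857
```

A record could score 0.95 here while losing its single most useful note field — which is the "morass" in one line.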


-Esme
--
Esme Cowles escow...@ucsd.edu

Necessity is the plea for every infringement of human freedom. It  
is the  argument of tyrants; it is the creed of slaves. -- William  
Pitt, 1783


On 12/3/2011, at 10:35 AM, Karen Coyle wrote:


Esme, let me second Owen's enthusiasm for more detail if you can
supply it. I think we also need to start putting these efforts along a
loss continuum - MODS is already lossy vis-a-vis MARC, and my guess
is that some of the other 

Re: [CODE4LIB] Patents and open source projects

2011-12-06 Thread MJ Ray
Nate Vack njv...@wisc.edu [...]
 Not allowing trademarks and patents for FOSS is complex if they're
 allowed for software at all -- should someone reading a patent and
 providing a free implementation invalidate that patent? That's the
 exact opposite intent of patents. (Note: I think software patents
 should not exist at all.)

Mathematics is not patentable, at least here and at least so far, so
yes, if the full implementation in software alone is obvious, it
clearly isn't a valid patent.

 If FOSS projects are immune to trademark suits, should I be able to
 start a competing open-source catalog and call it Koha or Evergreen?
 That seems like an undesirable outcome.

As I understand it, if you did, even without a trademark, you would
still probably be committing a range of civil offences, including
passing off and various advertising or trade descriptions offences,
in English law at least.

The main thing a registered trademark brings to that party is
criminalisation (and so the ability of government agents to prosecute
autonomously, at the taxpayers' expense and regardless of the wishes
of project contributors) and I feel that's neither necessary nor
desirable.

Hasn't this happened already, though, with Liblime starting some
competing Kohas and using trademark registrations to back up their
failure to rename their forks?  (Although most of us call them LAK,
LEK and LK, to try to reduce the confusion.)

Which brings me to a question which probably people here can help to
answer: are there similar civil offences of passing-off, misleading
advertising and trade misdescriptions in the US?

Thanks,
-- 
MJ Ray (slef), member of www.software.coop, a for-more-than-profit co-op.
http://koha-community.org supporter, web and LMS developer, statistician.
In My Opinion Only: see http://mjr.towers.org.uk/email.html
Available for hire for Koha work http://www.software.coop/products/koha


Re: [CODE4LIB] Models of MARC in RDF

2011-12-06 Thread Owen Stephens
I'd suggest that rather than shove it in a triple it might be better to point 
at alternative representations, including MARC if desirable (keep meaning to 
blog some thoughts about progressively enhanced metadata...)

Owen

Owen Stephens
Owen Stephens Consulting
Web: http://www.ostephens.com
Email: o...@ostephens.com
Telephone: 0121 288 6936

On 6 Dec 2011, at 15:44, Karen Coyle wrote:

 Quoting Fleming, Declan dflem...@ucsd.edu:
 
 Hi - I'll note that the mapping decisions were made by our metadata services 
 (then Cataloging) group, not by the tech folks making it all work, though we 
 were all involved in the discussions.  One idea that came up was to do a, 
 perhaps, lossy translation, but also stuff one triple with a text dump of 
 the whole MARC record just in case we needed to grab some other element out 
 we might need.  We didn't do that, but I still like the idea.  Ok, it was my 
 idea.  ;)
 
 I like that idea! Now that disk space is no longer an issue, it makes good 
 sense to keep around the original state of any data that you transform, 
 just in case you change your mind. I hadn't thought about incorporating the 
 entire MARC record string in the transformation, but as I recall the average 
 size of a MARC record is somewhere around 1K, which really isn't all that 
 much by today's standards.
 
 (As an old-timer, I remember running the entire Univ. of California union 
 catalog on 35 megabytes, something that would now be considered a smallish 
 email attachment.)
 
 kc
 
 
 D
 
 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Esme 
 Cowles
 Sent: Monday, December 05, 2011 11:22 AM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] Models of MARC in RDF
 
 I looked into this a little more closely, and it turns out it's a little 
 more complicated than I remembered.  We built support for transforming to 
 MODS using the MARC21slim2MODS.xsl stylesheet, but don't use that.  Instead, 
 we use custom Java code to do the mapping.
 
 I don't have a lot of public examples, but there's at least one public 
 object which you can view the MARC from our OPAC:
 
 http://roger.ucsd.edu/search/.b4827884/.b4827884/1,1,1,B/detlmarc~1234567FF=1,0,
 
 The public display in our digital collections site:
 
 http://libraries.ucsd.edu/ark:/20775/bb0648473d
 
 The RDF for the MODS looks like:
 
 <mods:classification rdf:parseType="Resource">
   <mods:authority>local</mods:authority>
   <rdf:value>FVLP 222-1</rdf:value>
 </mods:classification>
 <mods:identifier rdf:parseType="Resource">
   <mods:type>ARK</mods:type>
   <rdf:value>http://libraries.ucsd.edu/ark:/20775/bb0648473d</rdf:value>
 </mods:identifier>
 <mods:name rdf:parseType="Resource">
   <mods:namePart>Brown, Victor W</mods:namePart>
   <mods:type>personal</mods:type>
 </mods:name>
 <mods:name rdf:parseType="Resource">
   <mods:namePart>Amateur Film Club of San Diego</mods:namePart>
   <mods:type>corporate</mods:type>
 </mods:name>
 <mods:originInfo rdf:parseType="Resource">
   <mods:dateCreated>[196-]</mods:dateCreated>
 </mods:originInfo>
 <mods:originInfo rdf:parseType="Resource">
   <mods:dateIssued>2005</mods:dateIssued>
   <mods:publisher>Film and Video Library, University of California, San Diego, La Jolla, CA 92093-0175 http://orpheus.ucsd.edu/fvl/FVLPAGE.HTM</mods:publisher>
 </mods:originInfo>
 <mods:physicalDescription rdf:parseType="Resource">
   <mods:digitalOrigin>reformatted digital</mods:digitalOrigin>
   <mods:note>16mm; 1 film reel (25 min.) :; sd., col. ;</mods:note>
 </mods:physicalDescription>
 <mods:subject rdf:parseType="Resource">
   <mods:authority>lcsh</mods:authority>
   <mods:topic>Ranching</mods:topic>
 </mods:subject>
 
 etc.
 
 
 There is definitely some loss in the conversion process -- I don't know 
 enough about the MARC leader and control fields to know if they are captured 
 in the MODS and/or RDF in some way.  But there are quite a few local and 
 note fields that aren't present in the RDF.  Other fields (e.g. 300 and 505) 
 are mapped to MODS, but not displayed in our access system (though they are 
 indexed for searching).
 
 I agree it's hard to quantify lossy-ness.  Counting fields or characters 
 would be the most objective, but has obvious problems with control 
 characters sometimes containing a lot of information, and then the relative 
 importance of different fields to the overall description.  There are other 
 issues too -- some fields in this record weren't migrated because they 
 duplicated collection-wide values, which are formulated slightly differently 
 from the MARC record.  Some fields weren't migrated because they concern the 
 physical object, and therefore don't really apply to the digital object.  So 
 that really seems like a morass to me.
 
 -Esme
 --
 Esme Cowles escow...@ucsd.edu
 
 Necessity 

Re: [CODE4LIB] jQuery Ajax request to update a PHP variable

2011-12-06 Thread Doran, Michael D
 You had earlier asked the question whether to do things client or server
 side - well in this example, the correct answer is to do it client-side.
 (Yours is a read-only application, where none of the advantages of
 server-side processing applies.)

One thing to take into consideration when weighing the advantages of 
server-side vs. client-side processing, is whether the web app is likely to be 
used on mobile devices.  Douglas Crockford, speaking about the fact that 
JavaScript has become the de facto universal runtime, cautions: "Which I think 
puts even more pressure on getting JavaScript to go fast. Particularly as we're 
now going into mobile. Moore's Law doesn't apply to batteries. So how much time 
we're wasting interpreting stuff really matters there. The cycles count."[1]  
Personally, I don't know enough to know how significant the impact would be.  
However, I understand Douglas Crockford knows a little something about 
JavaScript and JSON.

-- Michael

[1] Quoted in Coders at Work by Peter Seibel,  pg. 100

# Michael Doran, Systems Librarian
# University of Texas at Arlington
# 817-272-5326 office
# 817-688-1926 mobile
# do...@uta.edu
# http://rocky.uta.edu/doran/


 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of
 Godmar Back
 Sent: Monday, December 05, 2011 5:31 PM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] jQuery Ajax request to update a PHP variable
 
 FWIW, I would not send HTML back to the client in an AJAX request - that
 style of AJAX fell out of favor years ago.
 
 Send back JSON instead and keep the view logic client-side. Consider using
 a library such as knockout.js. Instead of your current (difficult to
 maintain) mix of PhP and client-side JavaScript, you'll end up with a
 static HTML page, a couple of clean JSON services (for checked-out per
 subject, and one for the syndetics ids of the first 4 covers), and clean
 HTML templates.
 
 You had earlier asked the question whether to do things client or server
 side - well in this example, the correct answer is to do it client-side.
 (Yours is a read-only application, where none of the advantages of
 server-side processing applies.)
 
  - Godmar
 
 On Mon, Dec 5, 2011 at 6:18 PM, Nate Hill nathanielh...@gmail.com wrote:
 
  Something quite like that, my friend!
  Cheers
  N
 
  On Mon, Dec 5, 2011 at 3:10 PM, Walker, David dwal...@calstate.edu
  wrote:
 
   I gotcha.  More information is, indeed, better. ;-)
  
   So, on the PHP side, you just need to grab the term from the  query
   string, like this:
  
$searchterm = $_GET['query'];
  
   And then in your JavaScript code, you'll send an AJAX request, like:
  
http://www.natehill.net/vizstuff/catscrape.php?query=Cooking
  
   Is that what you're looking for?
  
   --Dave
  
   -
   David Walker
   Library Web Services Manager
   California State University
  
  
   -Original Message-
   From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of
   Nate Hill
   Sent: Monday, December 05, 2011 3:00 PM
   To: CODE4LIB@LISTSERV.ND.EDU
   Subject: Re: [CODE4LIB] jQuery Ajax request to update a PHP variable
  
   As always, I provided too little information.  Dave, it's much more
   involved than that
  
   I'm trying to make a kind of visual browser of popular materials from
 one
   of our branches from a .csv file.
  
   In order to display book covers for a series of searches by keyword, I
   query the catalog, scrape out only the syndetics images, and then
  display 4
   of them.  The problem is that I've hardcoded in a search for 'Drawing',
   rather than dynamically pulling the correct term and putting it into
 the
   catalog query.
  
   Here's the work in process, and I believe it will only work in Chrome
   right now.
   http://www.natehill.net/vizstuff/donerightclasses.php
  
   I may have a solution, Jason's idea got me part way there.  I looked
 all
   over the place for that little snippet he sent over!
  
   Thanks!
  
  
  
   On Mon, Dec 5, 2011 at 2:44 PM, Walker, David dwal...@calstate.edu
   wrote:
  
 And I want to update 'Drawing' to be 'Cooking'  w/ a jQuery hover
 effect on the client side then I need to make an Ajax request,
  correct?
   
What you probably want to do here, Nate, is simply output the PHP
variable in your HTML response, like this:
   
  <h1 id="foo"><?php echo $searchterm ?></h1>
   
And then in your JavaScript code, you can manipulate the text through
the DOM like this:
   
 $('#foo').html('Cooking');
   
--Dave
   
-
David Walker
Library Web Services Manager
California State University
   
   
-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf
Of Nate Hill
Sent: Monday, December 05, 2011 2:09 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] jQuery Ajax request to update a PHP variable
   
If I have in my 

Re: [CODE4LIB] Models of MARC in RDF

2011-12-06 Thread Owen Stephens
I think the strength of adopting RDF is that it doesn't tie us to a single 
vocab/schema. That isn't to say it isn't desirable for us to establish common 
approaches, but that we need to think slightly differently about how this is 
done - more application profiles than 'one true schema'.

This is why RDA worries me - because it (seems to?) suggest that we define a 
schema that stands alone from everything else and that is used by the library 
community. I'd prefer to see the library community adopting the best of what 
already exists and then enhancing where the existing ontologies are lacking. If 
we are going to have a (web of) linked data, then re-use of ontologies and IDs 
is needed. For example, in the work I did at the Open University in the UK we 
ended up using only a single property from a specific library ontology (the draft 
ISBD http://metadataregistry.org/schemaprop/show/id/1957.html has place of 
publication, production, distribution).

I think it is interesting that many of the MARC-RDF mappings so far have 
adopted many of the same ontologies (although no doubt partly because there is 
a 'follow the leader' element to this - or at least there was for me when 
looking at the transformation at the Open University).

Owen

Owen Stephens
Owen Stephens Consulting
Web: http://www.ostephens.com
Email: o...@ostephens.com
Telephone: 0121 288 6936

On 5 Dec 2011, at 18:56, Jonathan Rochkind wrote:

 On 12/5/2011 1:40 PM, Karen Coyle wrote:
 
 This brings up another point that I haven't fully grokked yet: the use of 
 MARC kept library data consistent across the many thousands of libraries 
 that had MARC-based systems. 
 
 Well, only somewhat consistent, but, yeah.
 
 What happens if we move to RDF without a standard? Can we rely on linking to 
 provide interoperability without that rigid consistency of data models?
 
 Definitely not. I think this is a real issue.  There is no magic to linking 
 or RDF that provides interoperability for free; it's all about the 
 vocabularies/schemata -- whether in MARC or in anything else.   (Note 
 different national/regional  library communities used different schemata in 
 MARC, which made interoperability infeasible there. Some still do, although 
 gradually people have moved to Marc21 precisely for this reason, even when 
 Marc21 was less powerful than the MARC variant they started with).
 
 That is to say, if we just used MARC's own implicit vocabularies, but output 
 them as RDF, sure, we'd still have consistency, although we wouldn't really 
 _gain_ much.  On the other hand, if we switch to a new better vocabulary -- 
 we've got to actually switch to a new better vocabulary.  If it's just 
 whatever anyone wants to use, we've made it VERY difficult to share data, 
 which is something pretty darn important to us.
 
 Of course, the goal of the RDA process (or one of em) was to create a new 
 schema for us to consistently use. That's the library community effort to 
 maintain a common schema that is more powerful and flexible than MARC.  If 
 people are using other things instead, apparently that failed, or at least 
 has not yet succeeded.


Re: [CODE4LIB] jQuery Ajax request to update a PHP variable

2011-12-06 Thread Walker, David
 I couldn't get json_encode() going on the server at work.

This usually means your server is running an older version of PHP.  If it's OS 
is RHEL 5, then you've likely got PHP 5.1.6 installed.

  http://php.net/manual/en/function.json-encode.php
  json_encode
  (PHP 5 >= 5.2.0)

--Dave
-
David Walker
Library Web Services Manager
California State University


-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Nate 
Hill
Sent: Tuesday, December 06, 2011 8:18 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] jQuery Ajax request to update a PHP variable

I attached the app as it stands now.  There's something wrong w/ the regex 
matching in catscrape.php so only some of the images are coming through.

A bit more background info:

Someone said 'it's not that much data'.  Indeed it isn't, but that is because I 
intentionally gave myself an extremely simple data set to build/test with.  I'd 
anticipate more complex data sets in the future.

The .csv files are not generated automatically, we have a product called 
CollectionHQ that produces reports based on monthly data dumps from our ILS.  I 
was planning to create a folder that the people who run these reports can 
simply save the csv files to, and then the web app would just work without them 
having to think about it.

A bit of a side note, but I actually was taking the JSON approach briefly and 
it was working on my MAMP but for some reason I couldn't get
json_encode() going on the server at work.  I fiddled around w/ the .ini file a 
little while thinking I might need to do something there, got bored, and 
decided to take a different approach.

Also: should I be sweating the fact that basically every time someone mouses 
over one of these boxes they are hitting our library catalog with a query?  It 
struck me that this might be unwise.  But I don't know either way.

Thanks all.  Do with this what you will, even if that is nothing.  Just 
following the conversation has been enlightening.

Nate

On Tue, Dec 6, 2011 at 7:27 AM, Erik Hatcher erikhatc...@mac.com wrote:

 Again, with jrock... I was replying to the general "Ajax requests 
 returning HTML is outdated" theme, not to Nate's actual application.

 Certainly returning objects as code or data to a component (like, say, 
 SIMILE Timeline) is a reasonable use of data coming back from Ajax 
 requests, and covered in my "it depends" response :)

 A defender of the old?  Only in as much as the old is simpler, 
 cleaner, and leaner than all the new wheels being invented.  I'm 
 pragmatic, not dogmatic.

Erik


 On Dec 6, 2011, at 09:34 , Godmar Back wrote:

  On Tue, Dec 6, 2011 at 8:38 AM, Erik Hatcher erikhatc...@mac.com
 wrote:
 
  I'm with jrock on this one.   But maybe I'm a luddite that didn't get
 the
  memo either (but I am credited for being one of the instrumental 
  folks
 in
  the Ajax world, heh - in one or more of the Ajax books out there, 
  us old timers called it remote scripting).
 
 
  On the in-jest rhetorical front, I'm wondering if referring to 
  oneself as oldtimer helps in defending against insinuations that 
  opposing technological change makes one a defender of the old ;-)
 
  But:
 
 
  What I hate hate hate about seeing JSON being returned from a 
  server for the browser to generate the view is stuff like:
 
  string = "<div>" + some_data_from_JSON + "</div>";
 
  That embodies everything that is wrong about Ajax + JSON.
 
 
  That's exactly why you use new libraries such as knockout.js, to 
  avoid
 just
  that. Client-side template engines with automatic data-bindings.
 
  Alternatively, AJAX frameworks use JSON and then interpret the 
  returned objects as code. Take a look at the client/server traffic 
  produced by ZK, for instance.
 
 
  As Jonathan said, the server is already generating dynamic HTML... 
  why have it return
 
 
  It isn't. There is no already generating anything server, it's a new 
  app Nate is writing. (Unless you count his work of the past two 
  days). The dynamic HTML he's generating is heavily tailored to his 
  JS. There's extremely tight coupling, which now exists across 
  multiple files written
 in
  multiple languages. Simply avoidable bad software engineering. 
  That's not even making the computational cost argument that avoiding 
  template processing on the server is cheaper. And with respect to 
  Jonathan's argument of degradation, a degraded version of his app 
  (presumably) would use table - or something like that, it'd look 
  nothing like what's he showed us yesterday.
 
  Heh - the proof of the pudding is in the eating. Why don't we create 
  2 versions of Nate's app, one with mixed server/client - like the 
  one he's completing now, and I create the client-side based one, and 
  then we
 compare
  side by side?  I'll work with Nate on that.
 
   - Godmar
 
  [ I hope it's ok to snip off the rest of the email trail in my 
  reply. ]




--
Nate Hill

Re: [CODE4LIB] jQuery Ajax request to update a PHP variable

2011-12-06 Thread Dave Caroline
I don't understand the thinking and waste of time scanning entire csv
files where a database table with good indexing can be a lot faster
and use less server memory.

Do the work once up front when the data becomes available not on every
page draw.

I subscribe to the read/send and mangle as little as possible (server
and client) on a web page view

Dave Caroline
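Dave's "do the work once up front" point doesn't strictly require a database; even parsing the CSV a single time into an in-memory index changes every later page draw from a file scan into a lookup. The sketch below makes several assumptions — the column layout is invented, and the naive `split(",")` breaks on quoted commas, which a real CSV parser (or a database load) would handle.

```javascript
// "Do the work once up front": parse the CSV once into an index keyed by
// subject, then answer each request with a fast lookup instead of
// rescanning the file. Field layout is made up; split(",") is a
// simplification that ignores quoted commas.
function buildIndex(csvText) {
  const index = new Map();
  for (const line of csvText.trim().split("\n").slice(1)) { // skip header
    const [subject, title, checkouts] = line.split(",");
    if (!index.has(subject)) index.set(subject, []);
    index.get(subject).push({ title, checkouts: Number(checkouts) });
  }
  return index;
}

const csv = "subject,title,checkouts\n" +
            "Drawing,Figure Drawing,42\n" +
            "Cooking,Joy of Cooking,97\n" +
            "Drawing,Perspective Made Easy,31";
const idx = buildIndex(csv);   // built once, when the report file lands
console.log(idx.get("Drawing").length); // 2
```

A database table with an index on `subject` is the same idea made durable and shareable across processes, which is why it wins for anything beyond a small demo.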


Re: [CODE4LIB] jQuery Ajax request to update a PHP variable

2011-12-06 Thread Nate Hill
csv files are what I have- they are easy for the not-technically inclined
staff to create and then save to a folder.  I was really just hoping to
make this easy on the people who make the reports.


On Tue, Dec 6, 2011 at 10:21 AM, Dave Caroline
dave.thearchiv...@gmail.comwrote:

 I don't understand the thinking and waste of time scanning entire csv
 files where a database table with good indexing can be a lot faster
 and use less server memory.

 Do the work once up front when the data becomes available not on every
 page draw.

 I subscribe to the read/send and mangle as little as possible (server
 and client) on a web page view

 Dave Caroline




-- 
Nate Hill
nathanielh...@gmail.com
http://www.natehill.net


Re: [CODE4LIB] jQuery Ajax request to update a PHP variable

2011-12-06 Thread Godmar Back
On Tue, Dec 6, 2011 at 11:18 AM, Nate Hill nathanielh...@gmail.com wrote:

 I attached the app as it stands now.  There's something wrong w/ the regex
 matching in catscrape.php so only some of the images are coming through.


No, it's not the regexp. You're simply scraping syndetics links without
checking whether syndetics has an image for those ISBNs. The searches
whose first four hits have jackets display covers; the others don't.


 Also: should I be sweating the fact that basically every time someone
 mouses over one of these boxes they are hitting our library catalog with a
 query?  It struck me that this might be unwise.  But I don't know either
 way.


Yes, it's unwise, especially since the results won't change (much).

 - Godmar
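Since, as Godmar says, the results "won't change (much)", the usual fix is to cache each subject's response instead of querying the catalog on every mouseover. A minimal memoizing wrapper with a time-to-live might look like this — `fetchCovers` is a stand-in for the real catalog call, not anything from Nate's app.

```javascript
// Cache a function's results per key for ttlMs milliseconds, so repeated
// mouseovers reuse the stored answer instead of re-hitting the catalog.
// fetchCovers below is a hypothetical stand-in for the real query.
function cached(fn, ttlMs) {
  const store = new Map();
  return key => {
    const hit = store.get(key);
    if (hit && Date.now() - hit.at < ttlMs) return hit.value;
    const value = fn(key);
    store.set(key, { at: Date.now(), value });
    return value;
  };
}

let calls = 0;
const fetchCovers = subject => { calls++; return ["cover1.gif", "cover2.gif"]; };
const getCovers = cached(fetchCovers, 60 * 60 * 1000); // 1-hour TTL

getCovers("Drawing");
getCovers("Drawing"); // served from cache; no second catalog hit
console.log(calls); // 1
```

The same idea can live server-side (cache the scraped page in PHP) so that many users share one catalog query per subject per hour, rather than one per mouseover.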


Re: [CODE4LIB] Models of MARC in RDF

2011-12-06 Thread Owen, Will
This is a *very* tangential rant, but it makes me mental when I hear
people say that 'disk space' is no longer an issue.  While it's true that
the costs of disk drives continue to drop, my experience is that the cost
of managing storage and backups is rising almost exponentially as
libraries continue to amass enormous quantities of digital data and
metadata.  Again, I recognize that text files are a small portion of our
library storage these days, but to casually suggest that doubling any
amount of data storage is an inconsiderable consideration strikes me as
the first step down a dangerous path.  Sorry for the interruption to an
interesting thread.

Will



On 12/6/11 10:44 AM, Karen Coyle li...@kcoyle.net wrote:

Quoting Fleming, Declan dflem...@ucsd.edu:

Hi - I'll note that the mapping decisions were made by our metadata
services (then Cataloging) group, not by the tech folks making it
all work, though we were all involved in the discussions.  One idea
that came up was to do a, perhaps, lossy translation, but also stuff
one triple with a text dump of the whole MARC record just in case we
needed to grab some other element out we might need.  We didn't do
that, but I still like the idea.  Ok, it was my idea.  ;)

I like that idea! Now that disk space is no longer an issue, it
makes good sense to keep around the original state of any data that
you transform, just in case you change your mind. I hadn't thought
about incorporating the entire MARC record string in the
transformation, but as I recall the average size of a MARC record is
somewhere around 1K, which really isn't all that much by today's
standards.


Re: [CODE4LIB] jQuery Ajax request to update a PHP variable

2011-12-06 Thread Dave Caroline
PHP has some nice and fast csv parsing abilities; use them as a source
for your database.
You can then remove any regexp need and
it's still simple for the users.

snippet taken from one of my csv readers showing the prints in
comments so you can see the data in an array
this also keeps memory footprint down

$row = 1;
$fp = fopen($fromdir . $file, "r");
while ($data = fgetcsv($fp, 1000, ",")) { // read lines in csv
    $num = count($data);
    //print "<p> $num fields in line $row: <br>";
    $row++;
    //for ($c = 0; $c < $num; $c++) {
    //    print "'" . $data[$c] . "' ";
    //}
    //print "<br>";
}
fclose($fp);

Dave Caroline

On Tue, Dec 6, 2011 at 6:32 PM, Nate Hill nathanielh...@gmail.com wrote:
 csv files are what I have- they are easy for the not-technically inclined
 staff to create and then save to a folder.  I was really just hoping to
 make this easy on the people who make the reports.


 On Tue, Dec 6, 2011 at 10:21 AM, Dave Caroline
 dave.thearchiv...@gmail.comwrote:

 I don't understand the thinking and waste of time scanning entire csv
 files where a database table with good indexing can be a lot faster
 and use less server memory.

 Do the work once up front when the data becomes available not on every
 page draw.

 I subscribe to the read/send and mangle as little as possible (server
 and client) on a web page view

 Dave Caroline




 --
 Nate Hill
 nathanielh...@gmail.com
 http://www.natehill.net


Re: [CODE4LIB] jQuery Ajax request to update a PHP variable

2011-12-06 Thread Godmar Back
On Tue, Dec 6, 2011 at 11:22 AM, Doran, Michael D do...@uta.edu wrote:

  You had earlier asked the question whether to do things client or server
  side - well in this example, the correct answer is to do it client-side.
  (Yours is a read-only application, where none of the advantages of
  server-side processing applies.)

 One thing to take into consideration when weighing the advantages of
 server-side vs. client-side processing, is whether the web app is likely to
 be used on mobile devices.  Douglas Crockford, speaking about the fact that
 JavaScript has become the de facto universal runtime, cautions: "Which I
 think puts even more pressure on getting JavaScript to go fast.
 Particularly as we're now going into mobile. Moore's Law doesn't apply to
 batteries. So how much time we're wasting interpreting stuff really matters
 there. The cycles count."[1]  Personally, I don't know enough to know how
 significant the impact would be.  However, I understand Douglas Crockford
 knows a little something about JavaScript and JSON.


It's certainly true that limited energy motivates the need to minimize
client processing, but the conclusion that this then means server
generation of static HTML is not clear.

Current trends certainly go in the opposite direction, look at jQuery
Mobile.

 - Godmar


Re: [CODE4LIB] Models of MARC in RDF

2011-12-06 Thread Mark Jordan
Well said Will,

Mark

- Original Message -
 This is a *very* tangential rant, but it makes me mental when I hear
 people say that 'disk space' is no longer an issue. While it's true
 that
 the costs of disk drives continue to drop, my experience is that the
 cost
 of managing storage and backups is rising almost exponentially as
 libraries continue to amass enormous quantities of digital data and
 metadata. Again, I recognize that text files are a small portion of
 our
 library storage these days, but to casually suggest that doubling any
 amount of data storage is an inconsiderable consideration strikes me
 as
 the first step down a dangerous path. Sorry for the interruption to an
 interesting thread.
 
 Will
 
 
 
 On 12/6/11 10:44 AM, Karen Coyle li...@kcoyle.net wrote:
 
 Quoting Fleming, Declan dflem...@ucsd.edu:
 
 Hi - I'll note that the mapping decisions were made by our metadata
 services (then Cataloging) group, not by the tech folks making it
 all work, though we were all involved in the discussions. One idea
 that came up was to do a, perhaps, lossy translation, but also stuff
 one triple with a text dump of the whole MARC record just in case we
 needed to grab some other element out we might need. We didn't do
 that, but I still like the idea. Ok, it was my idea. ;)
 
 I like that idea! Now that disk space is no longer an issue, it
 makes good sense to keep around the original state of any data that
 you transform, just in case you change your mind. I hadn't thought
 about incorporating the entire MARC record string in the
 transformation, but as I recall the average size of a MARC record is
 somewhere around 1K, which really isn't all that much by today's
 standards.


Re: [CODE4LIB] jQuery Ajax request to update a PHP variable

2011-12-06 Thread Jonathan Rochkind

On 12/6/2011 1:42 PM, Godmar Back wrote:

Current trends certainly go in the opposite direction, look at jQuery
Mobile.


Hmm, JQuery mobile still operates on valid and functional HTML delivered 
by the server. In fact, one of the designs of JQuery mobile is indeed to 
degrade to a non-JS version in feature phones (you know, eg, flip 
phones with a web browser but probably no javascript).  The non-JS 
version it degrades to is the same HTML that was delivered to the 
browser in either way, just not enhanced by JQuery Mobile.


If I were writing AJAX requests for an application targeted mainly at 
JQuery Mobile... I'd be likely to still have the server deliver HTML to 
the AJAX request, then have js insert it into the page and trigger 
JQuery Mobile enhancements on it.




  - Godmar



Re: [CODE4LIB] jQuery Ajax request to update a PHP variable

2011-12-06 Thread Godmar Back
On Tue, Dec 6, 2011 at 1:57 PM, Jonathan Rochkind rochk...@jhu.edu wrote:

 On 12/6/2011 1:42 PM, Godmar Back wrote:

 Current trends certainly go in the opposite direction, look at jQuery
 Mobile.


 Hmm, jQuery Mobile still operates on valid and functional HTML delivered
 by the server. In fact, one of the design goals of jQuery Mobile is to
 degrade to a non-JS version on feature phones (you know, e.g., flip phones
 with a web browser but probably no JavaScript).  The non-JS version it
 degrades to is the same HTML that was delivered to the browser either
 way, just not enhanced by jQuery Mobile.


My argument was that current platforms, such as jQuery Mobile, heavily rely
on JavaScript on the very platforms where Crockford's statement points out
it would be wise to save energy. Look at the A-grade platforms list in the
jQuery Mobile documentation:
http://jquerymobile.com/demos/1.0/docs/about/platforms.html



If I were writing AJAX requests for an application targeted mainly at
 jQuery Mobile... I'd be likely to still have the server deliver HTML to
 the AJAX request, then have JS insert it into the page and trigger jQuery
 Mobile enhancements on it.


I wouldn't. Return JSON and interpret or template the result.

 - Godmar
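A minimal sketch of the JSON-and-template approach Godmar describes (the endpoint, result fields, and markup are all hypothetical, not from any project discussed here): the server returns JSON and the client builds the listview markup itself.

```javascript
// Render a JSON result set into jQuery Mobile listview markup.
// The result shape ({ url, title }) is an assumption for illustration.
function renderResults(results) {
  var items = results.map(function (r) {
    return '<li><a href="' + r.url + '">' + r.title + '</a></li>';
  });
  return '<ul data-role="listview">' + items.join('') + '</ul>';
}

// In the browser it might be wired up like this (shown as a comment,
// since it needs jQuery and a live endpoint):
// $.getJSON('/search.json', { q: q }, function (data) {
//   $('#results').html(renderResults(data)).trigger('create');
// });
```

The trade-off against Rochkind's server-rendered-HTML approach is that the markup knowledge moves to the client, at the cost of losing the no-JS fallback.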


Re: [CODE4LIB] Patents and open source projects

2011-12-06 Thread Thomas Krichel
  Joann Ransom writes

 LibLime Koha is not Koha. The rest of the community use Koha.

  Misunderstanding of this issue is widespread. Case in point:

http://lists.webjunction.org/wjlists/web4lib/2010-September/052195.html


  Cheers,

  Thomas Krichelhttp://openlib.org/home/krichel
  http://authorprofile.org/pkr1
   skype: thomaskrichel


Re: [CODE4LIB] jQuery Ajax request to update a PHP variable

2011-12-06 Thread Doran, Michael D
 It's certainly true that limited energy motivates the need to minimize
 client processing, but the conclusion that this then means server
 generation of static HTML is not clear.

I'm not sure anyone was drawing that conclusion. It was offered up as a factor 
to consider.
 
 Current trends certainly go in the opposite direction, look at jQuery
 Mobile.

I agree that jQuery Mobile is very popular now.  However, that in no way 
negates the caution.  One could consider it a tragedy of the commons [1] in 
which a user's iPhone battery is the shared resource.  Why should I as a 
developer (rationally consulting my own self-interest) conserve battery power 
that doesn't belong to me, just so some other developer's app can use that 
resource?  I'm just playing the devil's advocate here. ;-) 

-- Michael

[1] A dilemma arising from the situation in which multiple 
individuals, acting independently and rationally consulting 
their own self-interest, will ultimately deplete a shared 
limited resource, even when it is clear that it is not in 
anyone's long-term interest for this to happen.

http://en.wikipedia.org/wiki/Tragedy_of_the_commons


 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of
 Godmar Back
 Sent: Tuesday, December 06, 2011 12:43 PM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] jQuery Ajax request to update a PHP variable
 
 On Tue, Dec 6, 2011 at 11:22 AM, Doran, Michael D do...@uta.edu wrote:
 
   You had earlier asked the question whether to do things client or
 server
   side - well in this example, the correct answer is to do it client-
 side.
   (Yours is a read-only application, where none of the advantages of
   server-side processing applies.)
 
  One thing to take into consideration when weighing the advantages of
  server-side vs. client-side processing, is whether the web app is likely
 to
  be used on mobile devices.  Douglas Crockford, speaking about the fact
 that
  JavaScript has become the de facto universal runtime, cautions: Which I
  think puts even more pressure on getting JavaScript to go fast.
  Particularly as we're now going into mobile. Moore's Law doesn't apply to
  batteries. So how much time we're wasting interpreting stuff really
 matters
  there. The cycles count.[1]  Personally, I don't know enough to know how
  significant the impact would be.  However, I understand Douglas Crockford
  knows a little something about JavaScript and JSON.
 
 
 It's certainly true that limited energy motivates the need to minimize
 client processing, but the conclusion that this then means server
 generation of static HTML is not clear.
 
 Current trends certainly go in the opposite direction, look at jQuery
 Mobile.
 
  - Godmar


[CODE4LIB] EADitor December 2011 (.1112) beta released

2011-12-06 Thread Ethan Gruber
EADitor is a free, open-source cross-platform XForms framework for
creating, editing, and publishing Encoded Archival Description (EAD)
finding aids using Orbeon, an enterprise-level XForms Java application,
which runs in Apache Tomcat.  I have released the latest stable code in
downloadable packages on our Google Code site (
http://code.google.com/p/eaditor/downloads/list).  This release is a major
advancement over the June 2011 release, especially in terms of performance
and stability.  I call EADitor a beta because there is much I have left to
improve, but this is the first production-ready release, an example of
which is the American Numismatic Society Archives site, Archer
(http://numismatics.org/archives/).

Features in a nutshell:

   - Public interface with faceted search results and facet-based
   OpenLayers mapping
   - Linked data and geographic services: OAI-PMH feed, Solr-based Atom
   feed (embedded with geographic points) and search results in the form of KML
   - Geonames, LCSH, VIAF APIs for geographic, subject term, personal, and
   corporate name controlled vocabulary
   - Upload finding aids from the wild (if they adhere to EAD 2002).
   - Interface for reordering and setting permissions of components
   - Flickr API integration, attach Flickr images as a daogrp
   - Simple template controls for EAD finding aids
   - Introduction of simple themes: select facet orientation on search page
   and from a selection of jQuery UI themes (theme controls will be enhanced
   over time)


One of the most important recent advancements in the project is the
introduction of our documentation wiki (
http://wiki.numismatics.org/eaditor:eaditor).  Documentation is an ongoing
process, but the wiki contains enough information to get you started with
installation and use.

Blog: http://eaditor.blogspot.com/
Google Group: http://groups.google.com/group/eaditor
SAA 2010 slideshow: http://people.virginia.edu/~ewg4x/saa10_eaditor.ppt
code4lib article (XForms for Libraries, an Introduction):
http://journal.code4lib.org/articles/3916

Feedback is welcome!

Best,
Ethan Gruber
American Numismatic Society


Re: [CODE4LIB] Models of MARC in RDF

2011-12-06 Thread Fleming, Declan
Well, we didn't end up doing it (although we still could).

When I look across the storage load that our asset management system is 
overseeing, metadata space pales in comparison to the original data file 
itself.  Even access derivatives like display JPGs are tiny compared to their 
TIFF masters.  WAV files are even bigger.

I agree that we shouldn't just assume disk is free, but when looking at the 
orders of magnitude of metadata to originals, I'd err on the side of keeping 
all the metadata.

Do you really feel that the cost of management of storage is going up?  I do 
find that the bulk of the ongoing cost of digital asset management is in the 
people to manage the assets, but over time I'm seeing the management cost per 
asset drop as we need about the same number of people to run ten racks of 
storage as it takes to run two.  And all of those racks are getting denser as 
storage media costs go down (Lord willin' and the creek don't flood.  Again).  
I expect at some point the cost to store the assets in the cloud, rather than 
in local racks, will hit a sweet spot, and we'll move to that.  We'll still 
need good management of the assets, but the policies it takes to track 300k 
assets will probably scale to millions, especially if the metadata is stored in 
a very accessible, linkable way.

D

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Mark 
Jordan
Sent: Tuesday, December 06, 2011 10:51 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Models of MARC in RDF

Well said Will,

Mark

- Original Message -
 This is a *very* tangential rant, but it makes me mental when I hear 
 people say that 'disk space' is no longer an issue. While it's true 
 that the costs of disk drives continue to drop, my experience is that 
 the cost of managing storage and backups is rising almost 
 exponentially as libraries continue to amass enormous quantities of 
 digital data and metadata. Again, I recognize that text files are a 
 small portion of our library storage these days, but to casually 
 suggest that doubling any amount of data storage is an inconsiderable 
 consideration strikes me as the first step down a dangerous path. 
 Sorry for the interruption to an interesting thread.
 
 Will
 
 
 
 On 12/6/11 10:44 AM, Karen Coyle li...@kcoyle.net wrote:
 
 Quoting Fleming, Declan dflem...@ucsd.edu:
 
 Hi - I'll note that the mapping decisions were made by our metadata 
 services (then Cataloging) group, not by the tech folks making it 
 all work, though we were all involved in the discussions. One idea 
 that came up was to do a, perhaps, lossy translation, but also stuff 
 one triple with a text dump of the whole MARC record just in case we 
 needed to grab some other element out we might need. We didn't do 
 that, but I still like the idea. Ok, it was my idea. ;)
 
 I like that idea! Now that disk space is no longer an issue, it 
 makes good sense to keep around the original state of any data that 
 you transform, just in case you change your mind. I hadn't thought 
 about incorporating the entire MARC record string in the 
 transformation, but as I recall the average size of a MARC record is 
 somewhere around 1K, which really isn't all that much by today's 
 standards.


Re: [CODE4LIB] Models of MARC in RDF

2011-12-06 Thread Fleming, Declan
Hi - point at it where?  We could point back to the library catalog that we 
harvested in the MARC to MODS to RDF process, but what if that goes away?  Why 
not write ourselves a 1K insurance policy that sticks with the object for its 
life?

D
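Declan's "1K insurance policy" might look something like this sketch (the predicate name and triple shape are made up for illustration, not an actual vocabulary or UCSD's implementation): emit the mapped triples, plus one extra triple carrying the raw MARC string.

```javascript
// Build triples for an object, stashing the raw MARC string as one
// extra "insurance" triple. 'ex:sourceMarc' is a hypothetical
// predicate, not a real vocabulary term.
function toTriples(subjectUri, mappedFields, rawMarc) {
  var triples = mappedFields.map(function (f) {
    return [subjectUri, f.predicate, f.object];
  });
  triples.push([subjectUri, 'ex:sourceMarc', rawMarc]);
  return triples;
}
```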

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Owen 
Stephens
Sent: Tuesday, December 06, 2011 8:06 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Models of MARC in RDF

I'd suggest that rather than shove it in a triple it might be better to point 
at alternative representations, including MARC if desirable (keep meaning to 
blog some thoughts about progressively enhanced metadata...)

Owen

Owen Stephens
Owen Stephens Consulting
Web: http://www.ostephens.com
Email: o...@ostephens.com
Telephone: 0121 288 6936

On 6 Dec 2011, at 15:44, Karen Coyle wrote:

 Quoting Fleming, Declan dflem...@ucsd.edu:
 
 Hi - I'll note that the mapping decisions were made by our metadata 
 services (then Cataloging) group, not by the tech folks making it all 
 work, though we were all involved in the discussions.  One idea that 
 came up was to do a, perhaps, lossy translation, but also stuff one 
 triple with a text dump of the whole MARC record just in case we 
 needed to grab some other element out we might need.  We didn't do 
 that, but I still like the idea.  Ok, it was my idea.  ;)
 
 I like that idea! Now that disk space is no longer an issue, it makes good 
 sense to keep around the original state of any data that you transform, 
 just in case you change your mind. I hadn't thought about incorporating the 
 entire MARC record string in the transformation, but as I recall the average 
 size of a MARC record is somewhere around 1K, which really isn't all that 
 much by today's standards.
 
 (As an old-timer, I remember running the entire Univ. of California 
 union catalog on 35 megabytes, something that would now be considered 
 a smallish email attachment.)
 
 kc
 
 
 D
 
 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf 
 Of Esme Cowles
 Sent: Monday, December 05, 2011 11:22 AM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] Models of MARC in RDF
 
 I looked into this a little more closely, and it turns out it's a little 
 more complicated than I remembered.  We built support for transforming to 
 MODS using the MARC21slim2MODS.xsl stylesheet, but don't use that.  Instead, 
 we use custom Java code to do the mapping.
 
 I don't have a lot of public examples, but there's at least one public 
 object which you can view the MARC from our OPAC:
 
 http://roger.ucsd.edu/search/.b4827884/.b4827884/1,1,1,B/detlmarc~1234567FF=1,0,
 
 The public display in our digital collections site:
 
 http://libraries.ucsd.edu/ark:/20775/bb0648473d
 
 The RDF for the MODS looks like:
 
<mods:classification rdf:parseType="Resource">
  <mods:authority>local</mods:authority>
  <rdf:value>FVLP 222-1</rdf:value>
</mods:classification>
<mods:identifier rdf:parseType="Resource">
  <mods:type>ARK</mods:type>
  <rdf:value>http://libraries.ucsd.edu/ark:/20775/bb0648473d</rdf:value>
</mods:identifier>
<mods:name rdf:parseType="Resource">
  <mods:namePart>Brown, Victor W</mods:namePart>
  <mods:type>personal</mods:type>
</mods:name>
<mods:name rdf:parseType="Resource">
  <mods:namePart>Amateur Film Club of San Diego</mods:namePart>
  <mods:type>corporate</mods:type>
</mods:name>
<mods:originInfo rdf:parseType="Resource">
  <mods:dateCreated>[196-]</mods:dateCreated>
</mods:originInfo>
<mods:originInfo rdf:parseType="Resource">
  <mods:dateIssued>2005</mods:dateIssued>
  <mods:publisher>Film and Video Library, University of California, San Diego, La Jolla, CA 92093-0175 http://orpheus.ucsd.edu/fvl/FVLPAGE.HTM</mods:publisher>
</mods:originInfo>
<mods:physicalDescription rdf:parseType="Resource">
  <mods:digitalOrigin>reformatted digital</mods:digitalOrigin>
  <mods:note>16mm; 1 film reel (25 min.) :; sd., col. ;</mods:note>
</mods:physicalDescription>
<mods:subject rdf:parseType="Resource">
  <mods:authority>lcsh</mods:authority>
  <mods:topic>Ranching</mods:topic>
</mods:subject>
 
 etc.
 
 
 There is definitely some loss in the conversion process -- I don't know 
 enough about the MARC leader and control fields to know if they are captured 
 in the MODS and/or RDF in some way.  But there are quite a few local and 
 note fields that aren't present in the RDF.  Other fields (e.g. 300 and 505) 
 are mapped to MODS, but not displayed in our access system (though they are 
 indexed for searching).
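 The kind of field-level mapping involved might look like this toy sketch (UCSD's real mapping is custom Java code, so this simplified function is only an illustration of the idea): a MARC 100 personal-name field mapped to a MODS name element.

```javascript
// Toy sketch of MARC-to-MODS field mapping (hypothetical; not the
// actual UCSD Java code). Maps a MARC 100 (personal name) field,
// represented as a plain object, to a MODS name element string.
function marc100ToModsName(field) {
  // field: { tag: '100', subfields: { a: 'Brown, Victor W' } }
  if (field.tag !== '100' || !field.subfields.a) return null;
  return '<mods:name rdf:parseType="Resource">' +
    '<mods:namePart>' + field.subfields.a + '</mods:namePart>' +
    '<mods:type>personal</mods:type>' +
    '</mods:name>';
}
```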
 
 I agree it's hard to quantify lossy-ness.  Counting fields or characters 
 would be the most objective, but has obvious problems with control 
 characters sometimes containing a lot of information, and then the relative 
 

[CODE4LIB] Namespace management, was Models of MARC in RDF

2011-12-06 Thread Karen Coyle

Quoting Owen Stephens o...@ostephens.com:




This is why RDA worries me - because it (seems to?) suggest that we  
define a schema that stands alone from everything else and that is  
used by the library community. I'd prefer to see the library  
community adopting the best of what already exists and then  
enhancing where the existing ontologies are lacking.


I've been ruminating a bit on the ad/dis-advantages of re-use vs.  
create your own then link to others. In the end, I wonder how easy it  
will be to manage a metadata scheme that has cherry-picked from  
existing ones, so something like:


dc:title
bibo:chapter
foaf:depiction

but NOT including all properties in those namespaces. It requires any  
application to have detailed knowledge about the particular selections  
made. On the other hand, something like:


myNS:title -- sameas -- dc:title
myNS:chapter -- sameas -- bibo:chapter
myNS:depiction -- sameas -- foaf:depiction

allows you to easily identify your properties, but at the same time  
gives you the equivalents to other properties in other namespaces for  
sharing. It also gives you greater stability. If the FOAF community  
should (rudely) change the meaning of depiction, you could find  
yourself using a property that no longer means what it should.  
Instead, if you have your own namespace you can change your link to  
foaf (or remove it altogether) to indicate that you now fork from that  
property.
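The "own namespace plus equivalence links" pattern above can be sketched as a simple translation table (the prefixes are illustrative; a real schema would publish the links as owl:sameAs or similar):

```javascript
// Local properties with declared equivalents in external vocabularies.
var equivalents = {
  'myNS:title': 'dc:title',
  'myNS:chapter': 'bibo:chapter',
  'myNS:depiction': 'foaf:depiction'
};

// Rewrite a triple's predicate to its public equivalent for sharing;
// predicates without a declared equivalent pass through unchanged.
// Dropping or changing a table entry is the "fork" move: the local
// property keeps its meaning even if FOAF changes theirs.
function externalize(triple) {
  var p = triple[1];
  return [triple[0], equivalents[p] || p, triple[2]];
}
```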


Perhaps what I perceive is that properties persist over time and  
relationships can be more easily treated as relating to now.


kc

p.s. I agree with you about RDA, but think that links could be made to  
remedy that.



If we are going to have a (web of) linked data, then re-use of  
ontologies and IDs is needed. For example in the work I did at the  
Open University in the UK we ended up using only a single property from a  
specific library ontology (the draft ISBD  
http://metadataregistry.org/schemaprop/show/id/1957.html has place  
of publication, production, distribution).


I think it is interesting that many of the MARC-RDF mappings so far  
have adopted many of the same ontologies (although no doubt partly  
because there is a 'follow the leader' element to this - or at least  
there was for me when looking at the transformation at the Open  
University)


Owen

Owen Stephens
Owen Stephens Consulting
Web: http://www.ostephens.com
Email: o...@ostephens.com
Telephone: 0121 288 6936

On 5 Dec 2011, at 18:56, Jonathan Rochkind wrote:


On 12/5/2011 1:40 PM, Karen Coyle wrote:


This brings up another point that I haven't fully grokked yet: the  
use of MARC kept library data consistent across the many  
thousands of libraries that had MARC-based systems.


Well, only somewhat consistent, but, yeah.

What happens if we move to RDF without a standard? Can we rely on  
linking to provide interoperability without that rigid consistency  
of data models?


Definitely not. I think this is a real issue.  There is no magic to  
linking or RDF that provides interoperability for free; it's all  
about the vocabularies/schemata -- whether in MARC or in anything  
else.   (Note different national/regional  library communities used  
different schemata in MARC, which made interoperability infeasible  
there. Some still do, although gradually people have moved to  
Marc21 precisely for this reason, even when Marc21 was less  
powerful than the MARC variant they started with).


That is to say, if we just used MARC's own implicit vocabularies,  
but output them as RDF, sure, we'd still have consistency, although  
we wouldn't really _gain_ much.On the other hand, if we switch  
to a new better vocabulary -- we've got to actually switch to a new  
better vocabulary.  If it's just whatever anyone wants to use,  
we've made it VERY difficult to share data, which is something  
pretty darn important to us.


Of course, the goal of the RDA process (or one of em) was to create  
a new schema for us to consistently use. That's the library  
community effort to maintain a common schema that is more powerful  
and flexible than MARC.  If people are using other things instead,  
apparently that failed, or at least has not yet succeeded.






--
Karen Coyle
kco...@kcoyle.net http://kcoyle.net
ph: 1-510-540-7596
m: 1-510-435-8234
skype: kcoylenet


Re: [CODE4LIB] Models of MARC in RDF

2011-12-06 Thread Montoya, Gabriela
One critical thing to consider with MARC records (or any metadata, for that 
matter) is that they are not stagnant, so what is the value of storing 
entire record strings in one triple if we know that metadata is volatile? As 
an example, UCSD has over 200,000 art images that had their metadata records 
ingested into our local DAMS over five years ago. Since then, many of these 
records have been edited/massaged in our OPAC (and ARTstor), but these updated 
records have not been refreshed in our DAMS. Now we find ourselves desperately 
needing to have the "What is our database of record?" conversation.

I'd much rather see resources invested in data synching than in saving text 
dumps that will most likely not be referred to again.

Dream Team for Building a MARC  RDF Model: Karen Coyle, Alistair Miles, Diane 
Hillman, Ed Summers, Bradley Westbrook.

Gabriela

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Karen 
Coyle
Sent: Tuesday, December 06, 2011 7:44 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Models of MARC in RDF

Quoting Fleming, Declan dflem...@ucsd.edu:

 Hi - I'll note that the mapping decisions were made by our metadata 
 services (then Cataloging) group, not by the tech folks making it all 
 work, though we were all involved in the discussions.  One idea that 
 came up was to do a, perhaps, lossy translation, but also stuff one 
 triple with a text dump of the whole MARC record just in case we 
 needed to grab some other element out we might need.  We didn't do 
 that, but I still like the idea.  Ok, it was my idea.  ;)

I like that idea! Now that disk space is no longer an issue, it makes good 
sense to keep around the original state of any data that you transform, just 
in case you change your mind. I hadn't thought about incorporating the entire 
MARC record string in the transformation, but as I recall the average size of a 
MARC record is somewhere around 1K, which really isn't all that much by today's 
standards.

(As an old-timer, I remember running the entire Univ. of California union 
catalog on 35 megabytes, something that would now be considered a smallish 
email attachment.)

kc


 D

 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf 
 Of Esme Cowles
 Sent: Monday, December 05, 2011 11:22 AM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] Models of MARC in RDF

 I looked into this a little more closely, and it turns out it's a 
 little more complicated than I remembered.  We built support for 
 transforming to MODS using the MARC21slim2MODS.xsl stylesheet, but 
 don't use that.  Instead, we use custom Java code to do the mapping.

 I don't have a lot of public examples, but there's at least one public 
 object which you can view the MARC from our OPAC:

 http://roger.ucsd.edu/search/.b4827884/.b4827884/1,1,1,B/detlmarc~1234567FF=1,0,

 The public display in our digital collections site:

 http://libraries.ucsd.edu/ark:/20775/bb0648473d

 The RDF for the MODS looks like:

 <mods:classification rdf:parseType="Resource">
   <mods:authority>local</mods:authority>
   <rdf:value>FVLP 222-1</rdf:value>
 </mods:classification>
 <mods:identifier rdf:parseType="Resource">
   <mods:type>ARK</mods:type>
   <rdf:value>http://libraries.ucsd.edu/ark:/20775/bb0648473d</rdf:value>
 </mods:identifier>
 <mods:name rdf:parseType="Resource">
   <mods:namePart>Brown, Victor W</mods:namePart>
   <mods:type>personal</mods:type>
 </mods:name>
 <mods:name rdf:parseType="Resource">
   <mods:namePart>Amateur Film Club of San Diego</mods:namePart>
   <mods:type>corporate</mods:type>
 </mods:name>
 <mods:originInfo rdf:parseType="Resource">
   <mods:dateCreated>[196-]</mods:dateCreated>
 </mods:originInfo>
 <mods:originInfo rdf:parseType="Resource">
   <mods:dateIssued>2005</mods:dateIssued>
   <mods:publisher>Film and Video Library, University of California, San Diego, La Jolla, CA 92093-0175 http://orpheus.ucsd.edu/fvl/FVLPAGE.HTM</mods:publisher>
 </mods:originInfo>
 <mods:physicalDescription rdf:parseType="Resource">
   <mods:digitalOrigin>reformatted digital</mods:digitalOrigin>
   <mods:note>16mm; 1 film reel (25 min.) :; sd., col. ;</mods:note>
 </mods:physicalDescription>
 <mods:subject rdf:parseType="Resource">
   <mods:authority>lcsh</mods:authority>
   <mods:topic>Ranching</mods:topic>
 </mods:subject>

 etc.


 There is definitely some loss in the conversion process -- I don't 
 know enough about the MARC leader and control fields to know if they 
 are captured in the MODS and/or RDF in some way.  But there are
 quite a few local and note fields that aren't present in the RDF.   
 Other fields (e.g. 300 and 505) are mapped to MODS, but not displayed 
 in our access system (though they are 

Re: [CODE4LIB] Models of MARC in RDF

2011-12-06 Thread BRIAN TINGLE
On Dec 6, 2011, at 5:52 PM, Montoya, Gabriela wrote:

 ...
 I'd much rather see resources invested in data synching than in 
 saving text dumps that will most likely not be referred to again.
 ...

In a MARC-as-the-record-of-record scenario, storing the original raw MARC might 
be helpful for the syncing -- when a sync was happening, the new MARC of record 
could maybe be compared against the old MARC of record to know whether RDF 
triples needed to be updated?


Re: [CODE4LIB] Models of MARC in RDF

2011-12-06 Thread Michael J. Giarlo
On Tue, Dec 6, 2011 at 20:52, Montoya, Gabriela gamont...@ucsd.edu wrote:
 One critical thing to consider with MARC records (or any metadata, for that 
 matter) is that they are not stagnant, so what is the value of storing 
 entire record strings into one triple if we know that metadata is volatile? 
 As an example, UCSD has over 200,000 art images that had their metadata 
 records ingested into our local DAMS over five years ago. Since then, many of 
 these records have been edited/massaged in our OPAC (and ARTstor), but these 
 updated records have not been refreshed in our DAMS. Now we find ourselves 
 needing to desperately have the What is our database of record? 
 conversation.

  I'd much rather see resources invested in data synching than in 
 saving text dumps that will most likely not be referred to again.


I don't disagree with your rationale, and I love your Dream Team, but
there's a false equivalence here between the cost of sucking in a
record and stuffing it away, and the cost of dealing with the very
tricky problem of interop with the OPAC, ARTstor, and other systems.

-Mike


[CODE4LIB] Job Opening: Anythink Systems Administrator

2011-12-06 Thread Matthew Hamilton
*Apologies for Cross-Posting*

This is an exciting time for us as Anythink Libraries, as we are about to 
embark on a VuFind project and have just received an IMLS grant to develop a 
digital learning lab inspired by Chicago Public Library's YouMedia. This 
position will be part of the team helping to make both of those projects 
happen….


Job Description/Duties:
Anythink libraries are hubs for innovation – and our staff and customers need 
tools to create and innovate. That’s where you come in. You use your tech savvy 
to find solutions and keep these tools running. As part of the team that 
manages the district's technology infrastructure, you’re an ace at managing 
server applications, databases, virtual machines and backups. You use your 
quick problem-solving skills to keep every server, application, and network at 
Anythink up and running because you know how critical it is that everything 
runs smoothly for our staff and our community.

Ready to join in a bold opportunity to help us take community library services 
to an entirely new dynamic realm?

Who you are:
• You are a tech guru; you understand the magic behind the technology and 
provide expertise for our district.
• You have always wanted to work for an organization that strives to innovate 
through technology.
• You understand that collaboration is essential for innovation and that staff 
are also your customers.
• You are an effective communicator, able to relay pertinent information and are 
a patient and active listener.
• You engage well with others and are passionate about providing an exemplary 
customer experience.
• You inspire fun in the people around you.

A position you’ll love:
•You’re great at making connections; you can manage and understand how Active 
Directory – including Group Policy, account management and MS Exchange – all 
work together.
•You are an MS-SQL expert, able to write queries that help our team do their 
jobs better.
•You are the genius behind the Anythink IT infrastructure. You install, modify, 
and maintain virtual hosts and machines, SAN, and enterprise applications 
(MS-SCCM, MS-SQL) to help keep things running smoothly.
•You enjoy keeping things tidy and efficient; you perform maintenance in a 
VMware vSphere 4 environment hosting Windows Server 2008 and Ubuntu Linux 
virtual machines.
•You know that Anythink is a place of big ideas – and we want to keep our 
information safe; you help lead the charge in implementing security procedures.
•You are the person that signals “Help is on the way!” as part of the IT team 
supporting district tech operations at Anythink’s seven locations around Adams 
County.
•You love a good challenge, utilizing your awesome problem-solving skills to 
look at things from every angle; you’re all about efficiency and finding the 
best solutions for the best value.
•You understand the importance of getting everyone on the same page and help to 
write and maintain IT documentation and policies.
•You are flexible and reliable to be on call for possible after-hours 
maintenance, upgrades and emergency weekend needs.
•You are an explorer of new and emerging technologies to support library 
services.
•You are a wizard at leading various projects and coordinate various tasks with 
vendors and consultants.
•You have the energy to juggle many tasks with a smile, and you realize you 
have many people depending on you.
•You demonstrate excellent communication skills with staff, customers, vendors, 
and district leadership.
•You do the right thing. Every decision you make and action you take is an 
opportunity to demonstrate our collective integrity.

Preferred Education: Bachelor's Degree in Computer Science or Information 
Systems

Other Requirements:
Do you have what it takes?
• Bachelor’s degree in computer information systems, computer science or 
equivalent experience
• 2-5 years experience with increasing responsibility working with Active 
Directory and Group Policy
• 1-2 years SQL experience and/or training
• Ability to support library-specific applications, including Integrated Library 
Systems, print and PC management systems and open-source applications
• Knowledge of filtering, security and firewall applications
• Experience working in a Windows 2008 R2 Active Directory environment
• Working knowledge of Microsoft operating systems and applications, including 
Windows 7, Exchange 2010, and Office 2010
• Experience with wide area networks and remote access solutions
• Valid Driver’s license and use of personal vehicle (mileage reimbursed)

Community:

Why Rangeview Library District? A job with Rangeview Library District is a 
chance to use your knowledge and experience to enable transformations every 
day. You will be instrumental in helping our customers access technology, 
whether they’re sitting by the fireplace with a laptop, surfing the Internet, 
gaming in the teen room or learning something new at one of our many intriguing 
programs. Your vision for emerging technology will help pave 

Re: [CODE4LIB] Models of MARC in RDF

2011-12-06 Thread stuart yeates

On 07/12/11 14:52, Montoya, Gabriela wrote:


Dream Team for Building a MARC  RDF Model: Karen Coyle, Alistair Miles, Diane 
Hillman, Ed Summers, Bradley Westbrook.


As much as I have nothing against anyone on this list, isn't it a little 
US-centric? Didn't we make that mistake before?


cheers
stuart
--
Stuart Yeates
Library Technology Services http://www.victoria.ac.nz/library/


Re: [CODE4LIB] Patents and open source projects

2011-12-06 Thread Ross Singer
On Tue, Dec 6, 2011 at 3:36 PM, Thomas Krichel kric...@openlib.org wrote:
  Joann Ransom writes

 LibLime Koha is not Koha. The rest of the community use Koha.

   Misunderstanding of this issue is widespread. Case in point:

 http://lists.webjunction.org/wjlists/web4lib/2010-September/052195.html

That was pretty unrelated to this issue, Thomas.

-Ross.


Re: [CODE4LIB] Models of MARC in RDF

2011-12-06 Thread Alexander Johannesen
On Wed, Dec 7, 2011 at 1:49 PM, stuart yeates stuart.yea...@vuw.ac.nz wrote:
 As much as I have nothing against anyone on this list, isn't it a little
 US-centric? Didn't we make that mistake before?

I wouldn't worry. A dream team has no basis in reality, hence the
dream part. I'd like to see a Real Team instead: an international
collaboration of people, including international smarts and
non-librarians. (Realistically, an international [or semi] library
conference should have a three-day session with smart people first on
this very issue, and that would make a fine place to get this thing
working, even to some degree of speed)


Alex
-- 
 Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps
--- http://shelter.nu/blog/ --
-- http://www.google.com/profiles/alexander.johannesen ---


Re: [CODE4LIB] Models of MARC in RDF

2011-12-06 Thread Mark A. Matienzo
On Tue, Dec 6, 2011 at 10:23 PM, Alexander Johannesen
alexander.johanne...@gmail.com wrote:

 A dream team has no basis in reality, hence the dream part.

Tell that to the 1992 U.S. Men's Olympic Basketball Team.

Mark


Re: [CODE4LIB] Models of MARC in RDF

2011-12-06 Thread Stuart Yeates
  A dream team has no basis in reality, hence the dream part.
 
 Tell that to the 1992 U.S. Men's Olympic Basketball Team.

So, the response to my suggestion of an unhelpful US bias is a US-based 
metaphor? 

I'll just consider my point proved.

cheers
stuart


Re: [CODE4LIB] Models of MARC in RDF

2011-12-06 Thread Michael J. Giarlo

I mean, have you *seen* Drexler dunk?

-Original message-
From: Stuart Yeates stuart.yea...@vuw.ac.nz
To: CODE4LIB@listserv.nd.edu
Sent: Wed, Dec 7, 2011 06:50:28 GMT+00:00
Subject: Re: [CODE4LIB] Models of MARC in RDF


 A dream team has no basis in reality, hence the dream part.

Tell that to the 1992 U.S. Men's Olympic Basketball Team.


So, the response to my suggestion of an unhelpful US bias is a US-based  
metaphor? 


I'll just consider my point proved.

cheers
stuart