-H 'Application/rdf+xml'
http://bmcr.brynmawr.edu/2014/2014-02-18.html | grep schema
I tried a few variations, such as removing the .html from the end of the
URL etc. Nada.
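For reference, content negotiation needs a header of the form `Accept: application/rdf+xml`; a bare media type with no header name won't negotiate anything. A minimal stdlib sketch (the URL is the one from the thread; actually issuing the request is left commented out):

```python
# Minimal sketch of HTTP content negotiation for RDF/XML.
# The header must be "Accept: application/rdf+xml".
from urllib.request import Request

req = Request(
    "http://bmcr.brynmawr.edu/2014/2014-02-18.html",
    headers={"Accept": "application/rdf+xml"},
)
print(req.get_header("Accept"))  # application/rdf+xml
# To actually fetch:
#   from urllib.request import urlopen
#   body = urlopen(req).read()
```

Whether the server honors the header is another matter, which is what the thread is probing.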
On 03/31/2016 08:39 AM, Brian Kennison wrote:
On Mar 29, 2016, at 12:46 PM, Kevin Ford
mailto:k...@3windmill
You Can’t Read Latin Yet.
Oxford; New York: Oxford University Press, 2013. Pp. ix, 278. ISBN
9780199657865. $35.00.
This is indeed why I wanted a "before and after" test - to see if schema
markup did improve SEO. Now we don't know.
kc
On 3/29/16 7:48 AM, Kevin Ford wrote:
Hi Karen,
I took a look at those Bryn Mawr hits and I don't see the schema.org
markup used in the page. Am I missing it? Perhaps I found the wrong thing.
If indeed it's not there, it just goes to show how using schema is not a
panacea. Loads of factors go into search ranking, relevancy, and displ
Hi Cindy,
Deduping can happen in any number of ways, but making use of shared
identifiers is the preferred way to address this issue. You could adopt
a shared identifier or you can indicate that your Thing is the same
as this other Thing. In schema.org's vocabulary, you'd use
schema:sa
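To make the shared-identifier idea concrete, here is a hypothetical sketch that emits a single N-Triples statement linking two identifiers for the same Thing via schema:sameAs. Both resource URIs are made up for illustration:

```python
# Hypothetical sketch: asserting that two identifiers describe the same
# Thing using schema:sameAs, serialized as one N-Triples statement.
# Both resource URIs below are invented for illustration.
SCHEMA_SAME_AS = "http://schema.org/sameAs"

def same_as_triple(subject_uri: str, object_uri: str) -> str:
    """Return an N-Triples statement asserting the two URIs are equivalent."""
    return f"<{subject_uri}> <{SCHEMA_SAME_AS}> <{object_uri}> ."

triple = same_as_triple(
    "http://example.org/catalog/record/123",
    "http://id.loc.gov/resources/works/456",
)
print(triple)
```

A consumer that trusts the assertion can then merge ("smush") the two descriptions when deduping.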
It's probably not safe to say that "all search is local" but there is
most certainly a strong local component considered for every search.
For me, every hit on the first page of Google's results for a search for
"ice cream parlor" is related to Chicago, which is where I executed the
search. A
I think it is technically permissible, but unwise for a host of reasons,
a number of which have been noted in this thread.
It boils down to this: at the end of the day - and putting aside the
whole SSL/non-SSL tangent - it is a "relative reference" according to
the RFC and that begs the quest
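For what it's worth, the resolution behavior of a scheme-relative ("network-path") reference under RFC 3986 is easy to see with the standard library; the URLs below are placeholders. The reference inherits the scheme of the base page, which is exactly why it behaves differently on SSL and non-SSL pages:

```python
# Sketch: per RFC 3986, a network-path reference (//host/path) inherits
# the scheme of the base URL. URLs are placeholders.
from urllib.parse import urljoin

https_result = urljoin("https://example.org/page", "//cdn.example.com/lib.js")
http_result = urljoin("http://example.org/page", "//cdn.example.com/lib.js")
print(https_result)  # https://cdn.example.com/lib.js
print(http_result)   # http://cdn.example.com/lib.js
```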
e modern data analysis software. My db is 136,256 kb. But adding that extra query will
probably put it over the 2GB mark. I've tried extracting to a CSV, and that didn't work. Maybe I'll
try a Make Table query to a separate db.
Or the OpenRefine suggestion sounds good too.
Cindy Harper
-
Hi Cindy,
This doesn't quite address your issue, but, unless you've hit the 2 GB
Access size limit [1], Access can handle a good deal more than the 250,000
item records ("rows," yes?) you cited.
What makes you think you've hit the limit? Slowness, something else?
All the best,
Kevin
[1]
https
Hi Rodney,
By whom, or how, is the scheduling system being replaced? (Assuming it is
changing.)
Do *you* need to replace the scheduling system (and that's what you
would potentially have to write from scratch)?
OR
Is a scheduling system being procured that will obsolete the current
system an
I know for a fact that new ones are being worked on, but cannot tell
you an ETA.
Yours,
Kevin
On 2/16/15 12:29 AM, Peter Green wrote:
Does anyone happen to know how often LC's bulk downloads are generated?
The date currently displayed on the downloads page[1] for each file is
'27 Oct 2014',
> I think this just goes to show, with the advent of the
> Internet, centralized authorities are not as necessary/useful
> as they once
> used to be. —ELM
>
-- Maybe. I think it is recession-related. The high water mark for
nearly all of the groups on that list is 2007 (2006 for one or two).
There is also this:
http://www.loc.gov/z3950/
Yours,
Kevin
On 08/28/2014 06:40 PM, Habing, Thomas Gerald wrote:
Index Data maintains a searchable list: http://irspy.indexdata.com/
Tom
-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Jay
Ga
ns.
Those inquiries are forwarded to us, we answer them, and then the information
is posted publicly so that everyone interested in the opportunity has access to
the same information.
Yours,
Kevin
--
Kevin Ford
Network Development and MARC Standards Office
Library of Congress
Washington, DC
Dear All,
This position - though hard to tell from the below - is chiefly a
developer position in the Library of Congress's Network Development and
MARC Standards Office, also known as NetDev for short. Our office, as
its name suggests, manages the MARC Format standards, but we also
prom
I fully second Josh's comments. A nice job and a big thanks!
--Kevin
On 08/13/2014 12:59 PM, Joshua Westgard wrote:
A big, public thank you is in order to Laura Wrubel, Dan Chudnov, and their
whole team for organizing and running the C4L regional meeting in DC over the
past two days, to GWU
> Anything that will remodel MARC to (decent) RDF is going be:
>
> - Non-trivial to install
> - Non-trivial to use
> - Slow
> - Require massive amounts of memory/disk space
>
> Choose any two.
-- I'll second this.
> Frankly, I don't see how you can generate RDF that anybody would
>* BIBFRAME Tools [6] - sports nice ontologies, but
> the online tools won’t scale for large operations
-- The code running the transformation at [6] is available here:
https://github.com/lcnetdev/marc2bibframe
We've run several million records through it at one time. As with
everythi
> A key (haha) thing that keys also provide is an opportunity
> to have a conversation with the user of your api: who are they,
> how could you get in touch with them, what are they doing with
> the API, what would they like to do with the API, what doesn’t
> work? These questions are difficult to
t.
Jonathan
On 12/2/13 12:18 PM, Ross Singer wrote:
I'm not going to defend API keys, but not all APIs are open or free. You
need to have *some* way to track usage.
There may be alternative ways to implement that, but you can't just hand
wave away the rather large use case for API keys
Though I have some quibbles with Seth's post, I think it's worth
drawing attention to his repeatedly calling out API keys as a very
significant barrier to use, or at least entry. Most of the posts here
have given little attention to the issue API keys present. I can say
that I have quite ofte
Have you tried
-u username/password
?
Yours,
Kevin
On 11/13/2013 02:12 PM, Eric Lease Morgan wrote:
How do I authenticate to a z39.50 server using yaz-client?
There is a ProQuest database/index I want to search via their z39.50 interface.
I have a valid username/password combination that I
I'll second Richard on this. 4store is fairly quick to set up and get
going. It comes with command-line tools and an HTTP option.
FWIW, ID.LOC.GOV uses 4store in its stack.
Yours,
Kevin
On 11/11/2013 01:17 AM, Richard Wallis wrote:
I've had some success with 4Store: http://4store.org
Use
ries (as well as publishing
an extension) is part of that. I could see this extending to best
practices for "naming" (e.g. URI/IRIs), and perhaps even a bit about
documenting.
Great topic!
kc
On 9/2/13 1:25 AM, Kevin Ford wrote:
Dear Karen,
I think that "how extensible RDF is"
Dear Karen,
I think that "how extensible RDF is" would be a very good topic. I'm
not talking about the theoretical extensibility of RDF, but how to do it
in a practical manner. That is, if you have a role, or some other
relationship, for example, and you want to use it. Linked Data provides
> My (erroneous) assumption was that if a record did not have a broader
term (i.e. a 550 $wg value) then it would sit at the top of the subject
tree, and that they would be the very general subjects headings. As I
found this obviously not the case.
-- You're correct - LCSH doesn't work like th
which has been available for years.
Roy
On Tue, Jul 10, 2012 at 2:08 PM, Kevin Ford wrote:
The use case clarifies perfectly.
Totally feasible. Well, I should say "totally feasible" with the caveat
that I've never used the Worldcat Search API. Not letting that stop me, so
long
Assume
that the author is prolific enough that one wouldn't want to look up all
of the records by hand.
kc
On 7/10/12 1:43 PM, Kevin Ford wrote:
As for someone who might want to do this programmatically, he/she
should take a look at the "Programming languages" section of the
be gathering use cases to support the
need? And have a place to post various solutions, even ones that are not
OCLC-specific? (Because I am hoping that the use of microformats will
increase in general.)
kc
On 7/10/12 12:12 PM, Kevin Ford wrote:
> is there an open search to get one to the
(12.04 - precise pangolin).
I write because the inclusion of a MARC21 rule in file(1)'s magic
database (magic.db) stems from a Code4lib exchange that started in March
2011 [1] (it ends in April if you want to go crawling for the entire
thread).
Rgds,
Kevin
[1]
https://listser
t ends in April
if you want to go crawling for the entire thread).
Rgds,
Kevin
[1] https://listserv.nd.edu/cgi-bin/wa?A2=ind1103&L=CODE4LIB&T=0&F=&S=&P=112728
--
Kevin Ford
Network Development and MARC Standards Office
Library of Congress
Washington, DC
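For anyone curious what such a magic rule keys on, here is a rough Python analogue of a MARC21 test on the 24-byte leader. The heuristic is illustrative only; the actual rule shipped in magic.db may check different bytes:

```python
# Heuristic MARC21 check on the 24-byte leader, similar in spirit to a
# file(1) magic rule. Illustrative only; the real magic rule may differ.
def looks_like_marc21(data: bytes) -> bool:
    if len(data) < 24:
        return False
    leader = data[:24]
    return (leader[:5].isdigit()           # record length
            and leader[10:12] == b"22"     # indicator count / subfield code length
            and leader[20:24] == b"4500")  # entry map

sample_leader = b"00714cam a2200205 a 4500"  # a typical bibliographic leader
print(looks_like_marc21(sample_leader + b"..."))            # True
print(looks_like_marc21(b"not a MARC record at all...."))   # False
```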
> (and am
> looking into a java triplestore to run in Tomcat)
-- I don't know if the parenthetical was simply a statement or a
solicitation - apologies if it was the former.
Take a look at Mulgara. Drops right into Tomcat.
http://mulgara.org/
--Kevin
On 05/08/2012 02:01 PM, Ethan Gruber w
> I was
> told by the project manager that Apache, Java, and Tomcat were "showing
> signs of age."
-- Taking this statement at face value, and taking it to its logical end
(that you'll have to migrate your application), I'm extremely doubtful
that Apache, Java, and Tomcat are so near their ends
want to do metadata synchronization. The
> advantage of Atom is that it fits into the syndication world so
> nicely, and its ecosystem of tools and services.
>
> //Ed
>
> [1] http://www.openarchives.org/ore/1.0/atom
>
>
> On Thu, May 13, 2010 at 4:53 PM, Kevin Ford
The short answer to your question is "no," there's no way to query terms based
on last modification date. However - and this feature has yet to be
publicized on the website - there is an Atom feed that exposes the
change activities for the subject headings:
subject headings:
http://id.loc.gov/authorities/feed/
You can
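Consuming such a feed takes only the standard library. A sketch that parses Atom and pulls each entry's title and updated timestamp, run here against an inline sample rather than the live feed (point it at http://id.loc.gov/authorities/feed/ for real data):

```python
# Sketch: extracting change activity from an Atom feed with the stdlib.
# Parsed against an inline sample; the live feed has the same shape.
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

SAMPLE_FEED = """<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Change activity (sample)</title>
  <entry>
    <title>Example Heading</title>
    <updated>2010-05-12T09:00:00Z</updated>
  </entry>
</feed>"""

def changes(feed_xml: str):
    """Yield (title, updated) for each entry in an Atom feed."""
    root = ET.fromstring(feed_xml)
    for entry in root.findall(ATOM + "entry"):
        yield (entry.findtext(ATOM + "title"),
               entry.findtext(ATOM + "updated"))

for title, updated in changes(SAMPLE_FEED):
    print(title, updated)
```

Polling the feed and filtering entries by the `updated` element gives you a workable "changed since" query even though the API itself doesn't offer one.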
Thanks for the heads up about this.
Was this announced on a code4libmdc list? If so, I'm having no luck
locating it (I did find the local group's wiki page at code4lib, but
that also does not mention tomorrow's mtg). Provided there is a list,
how might one get on it?
--Kevin
>>> Ed Summers