On Feb 26, 2007, at 12:38 PM, John D Thiesen wrote:
This is presumably an obvious question to most of you, but where do
I get MARC::Batch? I used it about a year ago in a couple of Perl
scripts to prepare Netlibrary records to upload into our catalog
server. Since then I've changed desktop
On Jan 18, 2007, at 2:48 PM, Mike Rylander wrote:
So it is written, so it shall be done.
Thanks, Brian, for making this happen...and mikery for allowing M::R to
pass into your very capable hands. It's pretty awesome to see M::R is
at the heart of systems like Evergreen and Koha--especially
On Jul 13, 2006, at 7:22 PM, [EMAIL PROTECTED] wrote:
However, can anyone here tell me of any tool or tutorial on how to
create a CIP block out of a MARC 21 record?
It's not a CIP block, but Thom Hickey at OCLC created a sweet XSL
stylesheet for turning MARCXML into something like a catalog
This is all fine, but let's talk in unit tests for MARC::Record if we
can. They will make plain what the actual behavior is, and will let
us talk about what the preferred behavior could be. Sorry to be so
short, but there's only so much time in the day.
//Ed
On Jul 13, 2006, at 2:55 PM,
On Jul 13, 2006, at 12:41 PM, Paul POULAIN wrote:
sometimes, I parse XML that contains an invalid subfield code (like a
capital letter)
M::F::X definitely dies in this case.
MARC::Field seems to allow a subfield with a capital letter--as it
should since there really is no requirement that
On Jun 22, 2006, at 5:34 AM, [EMAIL PROTECTED] wrote:
I'm using MARC::Charset::marc8_to_utf8() v0.95 to transcode some
Library of Congress data to utf8, however I'm finding a problem with
character 'ø' (hex 0xB2 - lowercase Scandinavian o / LATIN SMALL
LETTER O WITH STROKE), this character is
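For reference, a minimal sketch of the transcoding call under discussion, assuming a MARC::Charset recent enough to export marc8_to_utf8(); the sample byte string is illustrative, not taken from the original post:

```perl
use strict;
use warnings;
use MARC::Charset qw(marc8_to_utf8);

# In ANSEL/MARC-8, byte 0xB2 is LATIN SMALL LETTER O WITH STROKE.
# Transcode a raw MARC-8 byte string containing it to UTF-8.
my $marc8 = "S\xB2rensen";
my $utf8  = marc8_to_utf8($marc8);
print $utf8, "\n";   # expect "Sørensen" if the mapping table covers 0xB2
```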
On Jun 7, 2006, at 9:37 AM, Aaron Huber wrote:
# create new record object
my $rec = $rs->record($i);
# read in raw MARC to record
my $mrec = MARC::Record->new_from_usmarc($rec->rawdata());
I would just like to skip those items without records or suppress
this error
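One way to get that behavior is to trap the per-record failure instead of letting it propagate. A hedged sketch, following the snippet above: $rs is assumed to be a result-set object with size(), record(), and rawdata(), and get_result_set() is a hypothetical stand-in for however you obtain it.

```perl
use strict;
use warnings;
use MARC::Record;

my $rs = get_result_set();   # hypothetical: however you build the result set

for my $i (0 .. $rs->size - 1) {
    my $rec = $rs->record($i);
    next unless defined $rec;    # skip items that have no record at all

    # Trap a die() from parsing so one bad record doesn't stop the run.
    my $mrec = eval { MARC::Record->new_from_usmarc($rec->rawdata) };
    if ($@ or !defined $mrec) {
        warn "skipping item $i: $@";
        next;
    }
    # ... work with $mrec ...
}
```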
On May 28, 2006, at 2:25 PM, Joshua Ferraro wrote:
Maybe a note somewhere in the MARC::File::XML documentation to point
these issues out would be useful. Also, it wouldn't be too bad to have
a few tests to make sure that the system's default SAX parser is
capable
of handling these cases. Just
On May 19, 2006, at 7:59 PM, Joshua Ferraro wrote:
I've attached a small script that reproduces the same error we're
getting in the new_from_xml() method. Try it out and see what
it does for you.
Works ok for me, at least it doesn't crash :-)
So ... Is there a workaround that we can use to
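One workaround sketch, until the underlying problem is fixed, is to trap the death from new_from_xml() per record; read_marcxml() here is a hypothetical source of a single record's MARCXML.

```perl
use strict;
use warnings;
use MARC::Record;
use MARC::File::XML;

my $xml = read_marcxml();   # hypothetical: one record's MARCXML as a string

# new_from_xml() dies on a parse failure; eval contains the damage.
my $record = eval { MARC::Record->new_from_xml($xml, 'UTF-8') };
warn "could not parse record: $@" if $@;
```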
On May 18, 2006, at 6:48 AM, Joshua Ferraro wrote:
Anyway, if anyone can shed some light on this I'd be grateful.
I believe the data loss you are seeing is due to your source records,
not to character translation. Just running marcdump on them
generates a ton of errors (see
On May 18, 2006, at 10:03 AM, Joshua Ferraro wrote:
http://liblime.com/public/sample.mrc
Could you try downloading from there and running marcdump again?
Yes, that one has the same number of records but now passes through
marcdump fine. Now, when running your script I get a lot of warnings
So I got curious (thanks to your convo in #code4lib). I isolated the
problem to one record:
http://www.inkdroid.org/tmp/one.dat
Your roundtrip conversion complains:
--
no mapping found at position 8 in Price : 9c 7.99;Inv.# B
476913;Date 06/03/98; Supplier : Dawson UK;
On May 1, 2006, at 4:41 PM, Leif Andersson wrote:
+1
count can possibly be complemented or replaced with occurrence as
suggested.
It'd be nice to be able to denote last occurrence [-1].
And I suppose the indexing should be based on ordinary perl
subscript indexing - i.e. governed by the
On May 3, 2006, at 6:28 AM, Edward Summers wrote:
$field->delete_subfield(pos => 2);
won't work because 'pos' is a perl keyword--
I should've tried it before I said this -- it works fine in that
context, even though my perl syntax highlighter indicates otherwise.
So I've changed
On May 3, 2006, at 8:55 AM, Mark Jordan wrote:
I think it should mean the zeroth occurrence of subfield 'u',
since specifying which of a repeated group of subfields to act on
is a realistic task, as you say. For example, each record has two 'u's
but all of the first ones are garbage.
Actually
On May 3, 2006, at 11:25 AM, Mark Jordan wrote:
For example, in a given batch, most but not all records have an 856
subfield 3, followed by multiple subfield u's. If you ask to delete
the first u using pos, then your target will differ depending
on the presence of subfield 3. If
+1
:-)
//Ed
On May 1, 2006, at 1:24 PM, Brad Baxter wrote:
# delete first two subfield u
$field->delete_subfield(code => 'u', count => 2);
I don't think I like it this way. How would you delete just the
second one?
I'd rather see 'count' mean 'occurrence', so the above would mean
delete the second
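For readers finding this thread later: the delete_subfield() that eventually shipped in MARC::Record takes code/pos/match arguments rather than count (check your installed version's docs). A sketch of deleting just the second $u under that API; the field contents are illustrative.

```perl
use strict;
use warnings;
use MARC::Field;

my $field = MARC::Field->new('856', '4', '0',
    u => 'http://example.com/first',
    u => 'http://example.com/second',
);

# subfields() returns [code, value] pairs in order; find where the
# 'u' subfields sit, then delete the second one by its position.
my @subs = $field->subfields;
my @u_at = grep { $subs[$_][0] eq 'u' } 0 .. $#subs;
$field->delete_subfield(pos => $u_at[1]) if @u_at >= 2;
```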
On Apr 29, 2006, at 10:31 AM, Mark Jordan wrote:
Maybe other people should verify the usefulness of a delete
subfield function before anyone does anything about it, though.
Would a half dozen +1 votes from perl4libers validate its usefulness?
Yes it would...but to get the changes out on
On Apr 29, 2006, at 1:08 AM, Mark Jordan wrote:
Edward Summers wrote:
Deleting subfields is a bit tricky since subfields may repeat, and
sometimes people just want to delete one of them. An unfortunate
state of affairs perhaps.
Yeah, I can see what you're saying, but doesn't that also
On Apr 28, 2006, at 8:20 PM, Michael Kreyche wrote:
my $new856f = MARC::Field->new('856', $i1, $i2, @new856s);
$field->replace_with($new856f);
If there's an easier way, I'd like to know!
Creating a new field and replacing the old one with the new one is
the way to go. Deleting subfields is a
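A slightly fuller, hedged sketch of that build-and-replace approach: rebuild the field without its unwanted subfields, then swap it in. The field tag and the filtered subfield code are illustrative, and load_record() is a hypothetical stand-in for however you obtain a MARC::Record.

```perl
use strict;
use warnings;
use MARC::Record;
use MARC::Field;

my $record = load_record();   # hypothetical: an existing MARC::Record

for my $field ($record->field('856')) {
    # keep every subfield except $x (illustrative filter)
    my @keep = grep { $_->[0] ne 'x' } $field->subfields;
    my $new = MARC::Field->new(
        '856',
        $field->indicator(1), $field->indicator(2),
        map { @$_ } @keep,    # flatten [code, value] pairs back to a list
    );
    $field->replace_with($new);
}
```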
It's not an editor, but I imagine this XML file [1] might be useful
for creating an editor, since it lists the MARC fields, their labels,
indicators, and which subfields are valid. If you are interested, this
file is generated by crawling the LoC documentation with this [2].
//Ed
[1]
On Feb 19, 2006, at 10:14 AM, Tony Bowden wrote:
So, it presumably is an issue with the Library of Congress server.
Is there some sort of automatic throttling there? Or is there
likely to be some sort of option that I should be setting, but am not?
Yeah, I could well imagine there to be
exciting news from the net-z3950 list:
I am very pleased to announce the availability of ZOOM-Perl:
http://search.cpan.org/~mirk/Net-Z3950-ZOOM/
This is an implementation of the ZOOM Abstract API for Perl, enabling
information retrieval using a de-facto standard API over any of the
On Dec 28, 2005, at 8:25 PM, Bryan Baldus wrote:
I was able to successfully compile and test MARC::Charset v. 0.8
from CPAN with the following minor modifications, while using
MacPerl 5.8.0a2 on MacOS 9.2.2:
Thanks for trying it out and for emailing the list. If anyone else is
On Dec 5, 2005, at 8:33 PM, Brad Baxter wrote:
I think you're correct to be conservative. I've been spoiled
by servers with lots of memory, so my judgement may be in
question. :-)
Wow, AnyDBM_File looks perfect. It'll use ndbm, then Berkeley DB,
GDBM, and then fall back on SDBM. Like you
On Nov 29, 2005, at 1:35 PM, Walter Lewis wrote:
I'm going to break a personal rule and leave most of the rest of
this thread attached at the bottom for those who missed it as they
returned to work on Monday.
Eric's solution, to hard code an explain package for each SRU
application, was
On Nov 14, 2005, at 10:15 AM, Ed Sanchez wrote:
Would you point me in the right direction for help?
I'd recommend you upgrade to using MARC::Record and co. One advantage
to doing so is that you can use the strict_off() method on a
MARC::Batch object to ignore errors like this. Of course
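A minimal sketch of the error-tolerant loop described here (the method in released MARC::Batch is spelled strict_off(); the filename is illustrative):

```perl
use strict;
use warnings;
use MARC::Batch;

my $batch = MARC::Batch->new('USMARC', 'records.mrc');
$batch->strict_off;      # keep reading past records with structural errors
$batch->warnings_off;    # optionally silence the warnings as well

while (my $record = $batch->next) {
    print $record->title, "\n";
}
```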
Please find attached the file I'm trying to parse. It is extracted
from an OAI Data Provider in oai_dc format. The challenge is to
preserve the Thai characters encoded in UTF-8.
I see these are the result of OAI-PMH GetRecord requests. If you like you
can use the SAX handler in Net::OAI::Harvester
Thanks for the update Bryan.
On Oct 24, 2005, at 4:33 PM, Bryan Baldus wrote:
Any suggestions, comments, and assistance are welcome.
I committed a few changes to make the test suite run clean. I
documented the slight alterations in Changes. Are you planning to
release
On Oct 18, 2005, at 5:16 PM, Bryan Baldus wrote:
Would it be possible to add or have this package added to CVS in
SourceForge, as marc-marcmaker at
http://cvs.sourceforge.net/viewcvs.py/marcpm/? Is there objection
to this?
Other suggestions?
Good idea to get it into CVS. It's not currently
On Sep 27, 2005, at 7:29 AM, Sperr, Edwin wrote:
I'm attempting to use XSL (on a Windows server) to transform XML
that I
generated using MARC::File::XML. However, I keep running into
errors because of illegal characters.
Well part of the problem is that MARC::File::XML does not do
On Sep 1, 2005, at 10:47 AM, KEVIN ZEMBOWER wrote:
Further research on the PubMed site now leads me to believe that
Endnote must have written a client that allows it to search PubMed
on port 80 and get answers in XML format. I'm going to ask this
question of the PubMed support list, but can
Bareword qr not allowed while strict subs in use at C:\progra~1\perl\lib\MARC/Record.pm line 20
Please send your MARC::Record version, script, and data file if
possible. You may have uncovered a bug in MARC::Record.
If you like you can send the data file (and anything else) privately.
On Aug 24, 2005, at 5:08 AM, [EMAIL PROTECTED] wrote:
Can someone confirm from experience with such field tags that I'll
have no
problems doing so (or inform me that it's not yet possible ..).
Yeah, good memory :-) A good place to look for info like this is in
the Changes file included