A few years back we selected FogBugz (http://www.fogcreek.com/fogbugz/) as the
best issue tracking system available. It integrates well with version control
systems and has an integrated wiki and project management features. We are
very satisfied. It is not an open source product, however, but not
An even bigger list is at
http://en.wikipedia.org/wiki/Comparison_of_project_management_software
Dave Caroline
Hiya,
--What project management software are you using?
Semantic MediaWiki, xSiteable
--What made you choose the system?
Most project management software is written by geeks, not for humans. They
all propose some methodology to go with their model, but either their model
is inflexible (and
Great thread!
At WFU we used reserved AWS instances, which lowered our overall costs
but committed us to the Amazon platform for a year. We also wound up
grouping most of our services on a large server (~$87 per month after
reservation fee) so that we could take advantage of all of that
capacity.
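For anyone weighing the same trade-off, the reservation arithmetic is simple
enough to sketch in a few lines of Python; the fee and hourly rate below are
made-up numbers chosen to land near that ~$87 figure, not our actual contract:

HOURS_PER_MONTH = 730  # average hours in a month

def reserved_effective_monthly(one_time_fee, term_months, discounted_hourly):
    """Effective monthly cost of a reserved instance, amortizing the fee."""
    return one_time_fee / term_months + discounted_hourly * HOURS_PER_MONTH

# e.g. a 1-year reservation: $276 up front plus ~$0.088/hr, running full time
print(reserved_effective_monthly(276, 12, 0.088))  # ~ $87/month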
The Avanti Nova semantic mapping system finally has a web-based interface. You
can try out the demo on the web site: http://www.avantilibrarysystems.com .
It implements a simple catalog of a small collection of title records plus
other resources from elsewhere. It is an example of a
On 23.02.12 04:04, Brian McBride wrote:
Question for all the code4lib developers out there:
--What project management software are you using?
We're using Redmine for several projects, see http://www.redmine.org/
--What made you choose the system?
easy to use (even for non-geeks), easy
On Feb 22, 2012, at 11:52 PM, Cary Gordon wrote:
EC2 works for a lot of models, but one that it does not work for is
small traffic apps that need to be available 24/7. If you have a small
instance (AWS term) running full time with a fixed IP, it costs about
$75 a month. If you turn it on for
I have a single co-located host, and I get ping, power, pipe, and
air-conditioned comfort for $75/month. I haven't seen or touched my (Linux)
server in four or five years, and I might have restarted it four times.
--
Eric Lease Morgan
That's him on the right.
-Mike
On Wed, Feb 22, 2012 at 21:58, Cary Gordon listu...@chillco.com wrote:
Is that you on the left?
http://www.thestranger.com/images/blogimages/2012/02/13/1329169799-fc-11.jpg
On Wed, Feb 22, 2012 at 6:02 PM, Michael B. Klein mbkl...@gmail.com wrote:
...the
FaerieCon and code4lib may not be as different as we think. Quoth the
Faeries Three:
You need to have a fun, free, and open spirit in order to belong
here. Alcohol helps, too. And shiny things. Did we mention alcohol?
-Mike
P.S. jk.
On Thu, Feb 23, 2012 at 09:38, Michael J. Giarlo
Hi all,
Just wanted to pass on some info folks may know already. Sorry about the
cross-post.
The Linux box running our EZproxy instances crashed hard last night.
(Not sure why yet, I don't admin the hardware.)
When this happens, the ezproxy.lck file remains. EZproxy will not start
with
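For what it's worth, removing a stale ezproxy.lck at boot can be scripted; a
minimal sketch (the install path is an assumption, adjust to your setup, and
it only deletes the lock when no ezproxy process is actually running):

import os
import subprocess

LOCK = "/usr/local/ezproxy/ezproxy.lck"  # assumed install path

# pgrep exits non-zero when it finds no matching process
ezproxy_running = subprocess.run(
    ["pgrep", "-x", "ezproxy"], stdout=subprocess.DEVNULL
).returncode == 0

if not ezproxy_running and os.path.exists(LOCK):
    os.remove(LOCK)
    print("removed stale lock file:", LOCK)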
Simple todos beat complex task management every time.
I was checking out Backbone.js the other day, and the site lists a number of
interesting, lean project/task management apps built with it.
I haven't tried any of these, but they seem interesting and light:
https://www.blossom.io/
On Wed, Feb 22, 2012 at 7:04 PM, Brian McBride brian.mcbr...@utah.edu wrote:
Question for all the code4lib developers out there:
--What project management software are you using?
We're getting into Asana.
--What made you choose the system?
Other departments had tried it and actually
You need to ask?
On Wed, Feb 22, 2012 at 6:24 PM, Nick Ruest rue...@mcmaster.ca wrote:
Is there a declicorn bounty on that last image?
-nruest
On 12-02-22 09:02 PM, Michael B. Klein wrote:
...the Faerie Convention moved into our conference space.
EC2 works for a lot of models, but one that it does not work for is
small traffic apps that need to be available 24/7. If you have a small
instance (AWS term) running full time with a fixed IP, it costs about
$75 a month. If you turn it on for 2 hours a day, it costs about
$15/month. A large
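Back-of-the-envelope, those figures pencil out once you add some fixed monthly
charges (EBS volume, elastic IP) on top of the hourly rate; the numbers in
this sketch are guesses for illustration, not AWS's actual price list:

def monthly_cost(hourly_rate, hours_per_day, fixed_monthly):
    """On-demand instance cost for a given daily duty cycle."""
    return hourly_rate * hours_per_day * 30 + fixed_monthly

# assumed: ~$0.09/hr small instance, ~$9/month of EBS + elastic IP
print(monthly_cost(0.09, 24, 9.0))  # ~ $74/month running full time
print(monthly_cost(0.09, 2, 9.0))   # ~ $14/month at 2 hours/day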
There's been some recent discussion at our site about revi(s|v)ing URL checking
in our catalog, and I was wondering if other sites have any strategies that
they have found to be effective.
We used to run some home-grown link checking software. It fit nicely into a
shell pipeline, so it was
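The pipeline-friendly idea is easy to reconstruct; here is an illustrative
Python stand-in (not our actual script) that reads one URL per line on stdin
and prints the failures to stdout:

import sys
import urllib.request

for line in sys.stdin:
    url = line.strip()
    if not url:
        continue
    try:
        # HEAD keeps it cheap; urlopen raises HTTPError on 4xx/5xx
        req = urllib.request.Request(url, method="HEAD")
        urllib.request.urlopen(req, timeout=10)
    except Exception as exc:
        print(f"{url}\t{exc}")

Some servers reject HEAD outright, so a real checker would retry with GET
before declaring a link dead.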
At UCLA, we've been trying to get a better handle on project management,
and have developed a set of practices using a suite of tools.
We begin projects with a One-Pager, a project proposal or description.
It includes description of the problem, proposed solution, scope,
deliverables, risks,
On 2/23/12 11:02 AM, Gary Thompson wrote:
At UCLA, we've been trying to get a better handle on project management,
and have developed a set of practices using a suite of tools.
This could totally be a paper in the Code4Lib Journal.
It would be easier to bookmark then. ;-)
./fxk
--
No animal
On Thu, Feb 23, 2012 at 12:10 PM, Francis Kayiwa kay...@uic.edu wrote:
On 2/23/12 11:02 AM, Gary Thompson wrote:
At UCLA, we've been trying to get a better handle on project management,
and have developed a set of practices using a suite of tools.
This could totally be a paper in the Code4Lib
We use LinkScan to check the 856 fields for those monograph and serials
records that have them. Every week, the catalog dumps an export of record
numbers, titles, and 856 fields into a big file for each category of
record. LinkScan runs through them, reporting on broken links for
monographs and
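If anyone wants to approximate that without LinkScan, pulling the URLs out of
such an export is the easy half. A sketch, assuming a tab-delimited dump of
record number, title, and 856 URL (the real export format is site-specific):

import csv
import sys

# print just the URL column so it can be piped into a link checker
with open(sys.argv[1], newline="") as export:
    for record_number, title, url in csv.reader(export, delimiter="\t"):
        print(url)

Pipe the output into whatever checker you like, then join the failures back
to the record numbers for cleanup reports.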
Gary,
This is very, very interesting. Thank you for sharing this with the list. Would
it be possible for you to share screenshots of your Confluence views using the
metadata-report macro? We use Confluence for the same purpose, but maintain an
independent list of projects in its own table.
Matt--
I have the same question for you as for Gary: could you possibly share
screenshots of your Confluence templates and the projects dashboard? Is the latter
generated automatically, or through human data entry?
- Tom
On Feb 23, 2012, at 9:48 AM, Critchlow, Matt wrote:
Hi Brian,
Our Blacklight-powered catalog (https://catalyst.library.jhu.edu/) comes up
a lot in Google search results (try "gil scott heron circle of stone").
Some numbers:
59% of our total catalog traffic comes from Google searches
0.04% of our total catalog traffic comes from Yahoo searches
0.03% of our
Sean,
That's awesome. Any idea why Blacklight seems to be more discoverable? I
live in Seattle and the KCLS (public system) catalog is powered by
Evergreen but doesn't show up at all.
Tod
This is really interesting. Do you have evidence (anecdotally or
otherwise) that the people coming to you via search engines found what
they were looking for? Sorry, I don't know exactly how to phrase this.
To put it another way - are your patrons finding you this way?
wayne
Appears that KCLS doesn't allow crawlers:
http://catalog.kcls.org/robots.txt
User-agent: *
Disallow: /
Jason
On Thu, Feb 23, 2012 at 1:43 PM, todd.d.robb...@gmail.com wrote:
Sean,
That's awesome. Any idea why Blacklight seems to be more discoverable? I
live in
We don't allow crawlers because it has caused serious performance issues in the
past.
-----Original Message-----
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Jason
Ronallo
Sent: Thursday, February 23, 2012 1:55 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB]
On 2/23/2012 1:37 PM, Sean Hannan wrote:
Anecdotally, it would appear that Bing (and Bing-using Yahoo) drastically
plays down catalog records in their results. We're not doing
anything to favor a particular search engine; we have a completely open
robots.txt file.
I think they're
We have similar results to Sean's with our VuFind instance for the same
reasons (interlinking, stable URLs, etc.). Additionally, VuFind has a
sitemap generator [1] that spits out sitemap.org sitemaps for all the
record pages and some predefined static pages. Submitting the sitemaps to
the various
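(For anyone unfamiliar, the generated files are plain sitemap.org XML, one
<url> entry per record page; the host and record ID here are placeholders:)

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://library.example.edu/vufind/Record/000123456</loc>
  </url>
</urlset>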
That's true, but since Blacklight/Vufind often sit over
digital/institutional repositories as well as ILS systems and subscription
resources, at least some public domain content gets found that otherwise
wouldn't be. As you said, even if the item isn't available digitally, for
Special Collections
This links to thoughts I've had about linked data and finding a way to
use library holdings over the Web. Obviously, bibliographic data alone
is a full service: people want to get the stuff once they've found out
that such stuff exists. So how do we get users from the retrieval of a
Karen,
I had a similar thought. How hard would it be, even as an experiment, to
associate geolocation with local records? Then the records of
institutions (I'm speaking primarily of physical resources) would be
relevant to a person's query based upon their location, which of course
would play
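To sketch the idea in code: if each record carried coordinates for the
holding institution, Solr's spatial support could do the distance part.
Everything below (field names, core URL, the 25 km radius) is made up for
illustration:

import requests

# filter a catalog search to records held within 25 km of the user,
# sorting nearest-first; location_p holds the institution's lat,lon
params = {
    "q": "circle of stone",
    "fq": "{!geofilt sfield=location_p pt=47.61,-122.33 d=25}",
    "sort": "geodist(location_p,47.61,-122.33) asc",
    "wt": "json",
}
resp = requests.get("http://localhost:8983/solr/catalog/select", params=params)
for doc in resp.json()["response"]["docs"]:
    print(doc["id"], doc.get("title"))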
On Thu, Feb 23, 2012 at 2:07 PM, Baksik, Corinna M.
corinna_bak...@harvard.edu wrote:
We don't allow crawlers because it has caused serious performance issues in
the past.
There are simple solutions that may help with the performance problem.
You can use Crawl-delay:
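Something like this throttles crawlers instead of banning them outright (the
disallowed path is a placeholder for whatever endpoint is expensive; note
that Google ignores Crawl-delay, so googlebot's rate gets set in Webmaster
Tools instead):

User-agent: *
Crawl-delay: 10
Disallow: /Search/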
On 2/23/2012 2:45 PM, Karen Coyle wrote:
This links to thoughts I've had about linked data and finding a way to
use library holdings over the Web. Obviously, bibliographic data alone
is a full service: people want to get the stuff once they've found out
that such stuff exists. So how do we get
Changing the subject line, because this is an interesting topic on its own.
On 2/23/2012 2:45 PM, Karen Coyle wrote:
This links to thoughts I've had about linked data and finding a way to
use library holdings over the Web. Obviously, bibliographic data alone
is a full service: people want to
That was obviously meant to read: bibliographic data alone is NOT a
full service - kc
On 2/23/12 11:45 AM, Karen Coyle wrote:
This links to thoughts I've had about linked data and finding a way to
use library holdings over the Web. Obviously, bibliographic data alone
is a full service:
I've been influenced lately by a great talk on Project Management that
Delphine Khanna gave at THATCamp a few months ago. She stressed the
need for lightweight solutions to handle the more common case where we
have multiple small library projects rather than one massive endeavor.
The core piece of
On 2/23/2012 3:53 PM, Karen Coyle wrote:
Jonathan, while having these thoughts your Umlaut service did come to
mind. If you ever have time to expand on how it could work in a wide
open web environment, I'd love to hear it. (I know you explain below,
but I don't know enough about link
Speaking of, I gave a poster at LITA last fall on how we're using Google
Docs to manage projects at VCU - poster and sample docs available here:
http://www.people.vcu.edu/~erwhite/posters/nimble_pm.html
--
Erin White
Web Systems Librarian, VCU Libraries
804-827-3552 | erwh...@vcu.edu |
Jonathan, while having these thoughts your Umlaut service did come to
mind. If you ever have time to expand on how it could work in a wide
open web environment, I'd love to hear it. (I know you explain below,
but I don't know enough about link resolvers to understand what it
really means from
why local library catalog records do not show up in search results?
Basically, most OPACs are crap. :-) There are still some that
don't provide persistent links to record pages, and most are designed
so that the user has a session and gets kicked out after 10 minutes
or so.
These issues
I tend to agree with Jonathan Rochkind that having every library's bib
record turn up as a Google snippet would be unwelcome. Better to
mediate the access to local library copies with something more
generic.
OCLC's WorldCat.org does get crawled and indexed in Google, though
WorldCat.org hits
To avoid sessions and other silliness, just expose a search-engine-friendly
format without them.
As I don't have local visitors, Google traffic matters.
86.62% Search Traffic
2.41% Referral Traffic
10.98% Direct Traffic
For my tiny corner on the web
Dave Caroline
On 2/23/2012 5:35 PM, Stephen Hearn wrote:
But there's a catch--when WorldCat redirects a search to the selected
local library catalog, it targets the OCLC record number. If the
holding library has included the OCLC record number in its indexed
data, the user goes right to the desired record. If
Hmm. I wonder how much Google juice we could generate if we all started linking
to the WorldCat.org OCLC number permalink. Wouldn't that tend to drive up the
relevance of WorldCat.org?
Any SEO specialists out there care to speculate?
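(For reference, the permalinks in question follow the pattern
http://www.worldcat.org/oclc/<OCLC number>, e.g.
http://www.worldcat.org/oclc/123456789 with a placeholder number.)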
Peter
On Feb 23, 2012, at 5:32 PM, Stephen Hearn
Hi all
--What project management software are you using?
We are using ActiveCollab (e.g., to share documents and discuss
requirements) and Trac for development.
--What made you choose the system?
ActiveCollab is simple even for non-technical people; it can be connected
with Subversion, and it has a lot
Michigan Technological University's Van Pelt and Opie Library seeks an
energetic, user-focused, and collegial Web Developer who enjoys working on a
wide variety of projects with library and IT staff, faculty, and students. The
position requires commitment to the completion of reliable and
Jonathan --
I suspect a message sent to the developers network mailing list would have the
best chance of reaching the right people. (Perhaps the only higher
action-to-frustration route than posting it here on code4lib itself.)
Peter
On Feb 23, 2012, at 5:52 PM, Jonathan
I tend to agree with Jonathan Rochkind that having every library's bib
record turn up as a Google snippet would be unwelcome. Better to
mediate the access to local library copies with something more
generic.
So when someone searches for a book in Google they should see every
online