[CODE4LIB] TIFF Metadata to XML?

2011-07-18 Thread Edward M. Corrado
Hello All,

Before I re-invent the wheel or try many different programs, does
anyone have a suggestion on a good way to extract embedded metadata
added by cameras and (more importantly) photo-editing programs such as
Photoshop from TIFF files and save it as XML? I have > 60k photos
that have metadata including keywords, descriptions, creator, and
other fields embedded in them, and I need to extract the metadata so I
can load them into our digital archive.

Right now, after looking at a few tools and doing a number of Google
searches, I haven't found anything that seems to do what I want. As of
now I am leaning towards extracting the metadata using exiv2 and
creating a script (shell, Perl, whatever) to put the fields I need
into a pseudo-Dublin Core XML format. I say pseudo because I have a
few fields that are not Dublin Core. I am assuming there is a better
way. (Although part of me thinks it might be easier to do that than
exporting to XML and using XSLT to transform the file, since I might
need to do a lot of cleanup of the data regardless.)
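
(For illustration only -- a minimal sketch of that script approach. It uses
exiftool rather than exiv2 simply because exiftool's per-tag output is easy
to script; the tag names, element names, and paths are placeholders, the
output skips XML escaping, and 60k files would call for batching rather
than three exiftool calls per image.)

#!/bin/sh
# Sketch: pull a few assumed tags per image and wrap them in a
# pseudo-Dublin Core record; -s3 prints the bare tag value only.
for f in /path/to/photos/*.tif; do
  title=$(exiftool -s3 -XMP-dc:Title "$f")
  creator=$(exiftool -s3 -XMP-dc:Creator "$f")
  desc=$(exiftool -s3 -XMP-dc:Description "$f")
  cat > "${f%.tif}.xml" <<EOF
<record>
  <dc:title>$title</dc:title>
  <dc:creator>$creator</dc:creator>
  <dc:description>$desc</dc:description>
</record>
EOF
done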

Anyway, before I go any further, does anyone have any
thoughts/ideas/suggestions?

Edward


Re: [CODE4LIB] TIFF Metadata to XML?

2011-07-18 Thread Jon Stroop

Edward,
JHOVE (1)  should be able to do this, and I believe you can pass the 
included shell script a directory and have it extract data for 
everything it finds and can parse inside.
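
For example, something along these lines (illustrative only; the exact
script name, module, and handler flags depend on the JHOVE 1.x release
and its configuration):

jhove -m TIFF-hul -h XML -o tiff-metadata.xml /path/to/photos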

-Jon

On 07/18/2011 09:18 AM, Edward M. Corrado wrote:

Hello All,

Before I re-invent the wheel or try many different programs, does
anyone have a suggestion on a good way to extract embedded metadata
added by cameras and (more importantly) photo-editing programs such as
Photoshop from TIFF files and save it as XML? I have > 60k photos
that have metadata including keywords, descriptions, creator, and
other fields embedded in them, and I need to extract the metadata so I
can load them into our digital archive.

Right now, after looking at a few tools and doing a number of Google
searches, I haven't found anything that seems to do what I want. As of
now I am leaning towards extracting the metadata using exiv2 and
creating a script (shell, Perl, whatever) to put the fields I need
into a pseudo-Dublin Core XML format. I say pseudo because I have a
few fields that are not Dublin Core. I am assuming there is a better
way. (Although part of me thinks it might be easier to do that than
exporting to XML and using XSLT to transform the file, since I might
need to do a lot of cleanup of the data regardless.)

Anyway, before I go any further, does anyone have any
thoughts/ideas/suggestions?

Edward


Re: [CODE4LIB] Seeking feedback on database design for an open source software registry

2011-07-18 Thread Kevin S. Clarke
You might also talk to the http://oss4lib.org/ folks to see what they did.

Kevin



On Sun, Jul 17, 2011 at 11:22 PM, Nate Vack  wrote:
> On Fri, Jul 15, 2011 at 2:09 PM, Peter Murray  
> wrote:
>> On Jul 15, 2011, at 2:59 PM, Mike Taylor wrote:
>>>
>>> Isn't this pretty much what FreshMeat is for?
>>>        http://freshmeat.net/
>>
>> It is similar in concept to Freshmeat, but the scope is limited to 
>> library-oriented software (which might be too use-specific for Freshmeat and 
>> certainly harder to find among the vast expanse of non-library-oriented 
>> stuff).
>
> You might look at NITRC[1], which has tried very hard to do the same
> thing for neuroscience software in addition to providing project
> hosting like Sourceforge. They get funded by some federal grant
> thing[2].
>
> Unfortunately, they've also found that the world wasn't really looking
> for a site to review and host a small subset of open-source projects,
> so their usage isn't high. They've convinced some projects to come
> live in their domain, so they seem to attract enough funding to stay
> online, but they've never succeeded in becoming much of a community.
> And the "people who do neuroscience" crowd is probably two orders of
> magnitude larger than the "people who do open-source in libraries"
> crowd -- so building a vibrant community will be even harder in this
> case.
>
> The real problem for me is that their site doesn't seem to warrant
> enough attention to really be made usable or stay up reliably. So if
> you want to get software that's hosted only by them, it can be really
> frustrating. It's like a crappy FreshMeat combined with a crappy,
> unreliable Sourceforge.
>
> My ultimate take: you can probably do something more interesting with
> your grant money than building a FreshMeat-alike.
>
> Either way, you might talk to the NITRC folks about their experiences
> -- I'm speaking as an end-user, not as one of their team.
>
> Cheers,
> -Nate
>
> 1: http://www.nitrc.org/
>
> 2: The National Institutes of Health Blueprint for Neuroscience Research
>


Re: [CODE4LIB] TIFF Metadata to XML?

2011-07-18 Thread Sheila M. Morrissey
JHOVE2 (www.jhove2.org) will work as well.
Sheila


From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Jon Stroop 
[jstr...@princeton.edu]
Sent: Monday, July 18, 2011 9:23 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] TIFF Metadata to XML?

Edward,
JHOVE (1)  should be able to do this, and I believe you can pass the
included shell script a directory and have it extract data for
everything it finds and can parse inside.
-Jon

On 07/18/2011 09:18 AM, Edward M. Corrado wrote:
> Hello All,
>
> Before I re-invent the wheel or try many different programs, does
> anyone have a suggestion on a good way to extract embedded metadata
> added by cameras and (more importantly) photo-editing programs such as
> Photoshop from TIFF files and save it as XML? I have > 60k photos
> that have metadata including keywords, descriptions, creator, and
> other fields embedded in them, and I need to extract the metadata so I
> can load them into our digital archive.
>
> Right now, after looking at a few tools and doing a number of Google
> searches, I haven't found anything that seems to do what I want. As of
> now I am leaning towards extracting the metadata using exiv2 and
> creating a script (shell, Perl, whatever) to put the fields I need
> into a pseudo-Dublin Core XML format. I say pseudo because I have a
> few fields that are not Dublin Core. I am assuming there is a better
> way. (Although part of me thinks it might be easier to do that than
> exporting to XML and using XSLT to transform the file, since I might
> need to do a lot of cleanup of the data regardless.)
>
> Anyway, before I go any further, does anyone have any
> thoughts/ideas/suggestions?
>
> Edward


Re: [CODE4LIB] TIFF Metadata to XML?

2011-07-18 Thread Sheila M. Morrissey
Hello, Jon --
Should have added -- thanks for the pointer to JHOVE/JHOVE2 --
There are still some modules in JHOVE for which there is not yet one in JHOVE2 
(though coming to a Bitbucket repository near you soon!!) and vice versa -- but 
for TIFF -- folks might prefer using the later code.
Best,
Sheila

From: Jon Stroop [jstr...@princeton.edu]
Sent: Monday, July 18, 2011 9:41 AM
To: Sheila M. Morrissey
Subject: Re: [CODE4LIB] TIFF Metadata to XML?
Oops!  I wasn't trying to specify a version of JHOVE, I meant to add a
footnote with a link and forgot. For what it's worth, I was going to
link to JHOVE2 :-) .
Hope all is well with you,
Jon
On 07/18/2011 09:36 AM, Sheila M. Morrissey wrote:
> JHOVE2 (www.jhove2.org) will work as well.
> Sheila
>
> 
> From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Jon Stroop 
> [jstr...@princeton.edu]
> Sent: Monday, July 18, 2011 9:23 AM
> To: CODE4LIB@LISTSERV.ND.EDU
> Subject: Re: [CODE4LIB] TIFF Metadata to XML?
>
> Edward,
> JHOVE (1)  should be able to do this, and I believe you can pass the
> included shell script a directory and have it extract data for
> everything it finds and can parse inside.
> -Jon
>
> On 07/18/2011 09:18 AM, Edward M. Corrado wrote:
>> Hello All,
>>
>> Before I re-invent the wheel or try many different programs, does
>> anyone have a suggestion on a good way to extract embedded metadata
>> added by cameras and (more importantly) photo-editing programs such as
>> Photoshop from TIFF files and save it as XML? I have > 60k photos
>> that have metadata including keywords, descriptions, creator, and
>> other fields embedded in them, and I need to extract the metadata so I
>> can load them into our digital archive.
>>
>> Right now, after looking at a few tools and doing a number of Google
>> searches, I haven't found anything that seems to do what I want. As of
>> now I am leaning towards extracting the metadata using exiv2 and
>> creating a script (shell, Perl, whatever) to put the fields I need
>> into a pseudo-Dublin Core XML format. I say pseudo because I have a
>> few fields that are not Dublin Core. I am assuming there is a better
>> way. (Although part of me thinks it might be easier to do that than
>> exporting to XML and using XSLT to transform the file, since I might
>> need to do a lot of cleanup of the data regardless.)
>>
>> Anyway, before I go any further, does anyone have any
>> thoughts/ideas/suggestions?
>>
>> Edward


Re: [CODE4LIB] TIFF Metadata to XML?

2011-07-18 Thread Ford, Kevin
Exiftool [1] and trusty ImageMagick [2] will work.  With ImageMagick it is as 
easy as:

convert image.tiff image.xmp
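
To run that over a whole directory, one could wrap it in a loop along
these lines (a sketch; it assumes the files end in .tiff):

for f in /path/to/photos/*.tiff; do
  convert "$f" "${f%.tiff}.xmp"
done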

Members of the Visual Resources Association (VRA) have been working on/with 
embedded metadata for a few years now.  There may be something more to glean 
from the working group's wiki [3].

Cordially,

Kevin


[1] http://www.sno.phy.queensu.ca/~phil/exiftool/
[2] http://www.imagemagick.org/script/index.php
[3] http://metadatadeluxe.pbworks.com/w/page/20792238/FrontPage
 

> -Original Message-
> From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of
> Edward M. Corrado
> Sent: Monday, July 18, 2011 9:18 AM
> To: CODE4LIB@LISTSERV.ND.EDU
> Subject: [CODE4LIB] TIFF Metadata to XML?
> 
> Hello All,
> 
> Before I re-invent the wheel or try many different programs, does
> anyone have a suggestion on a good way to extract embedded metadata
> added by cameras and (more importantly) photo-editing programs such as
> Photoshop from TIFF files and save it as XML? I have > 60k photos
> that have metadata including keywords, descriptions, creator, and
> other fields embedded in them, and I need to extract the metadata so I
> can load them into our digital archive.
>
> Right now, after looking at a few tools and doing a number of Google
> searches, I haven't found anything that seems to do what I want. As of
> now I am leaning towards extracting the metadata using exiv2 and
> creating a script (shell, Perl, whatever) to put the fields I need
> into a pseudo-Dublin Core XML format. I say pseudo because I have a
> few fields that are not Dublin Core. I am assuming there is a better
> way. (Although part of me thinks it might be easier to do that than
> exporting to XML and using XSLT to transform the file, since I might
> need to do a lot of cleanup of the data regardless.)
> 
> Anyway, before I go any further, does anyone have any
> thoughts/ideas/suggestions?
> 
> Edward


Re: [CODE4LIB] TIFF Metadata to XML?

2011-07-18 Thread Joe Hourcle
On Jul 18, 2011, at 9:18 AM, Edward M. Corrado wrote:

> Hello All,
> 
> Before I re-invent the wheel or try many different programs, does
> anyone have a suggestion on a good way to extract embedded metadata
> added by cameras and (more importantly) photo-editing programs such as
> Photoshop from TIFF files and save it as XML? I have > 60k photos
> that have metadata including keywords, descriptions, creator, and
> other fields embedded in them, and I need to extract the metadata so I
> can load them into our digital archive.
>
> Right now, after looking at a few tools and doing a number of Google
> searches, I haven't found anything that seems to do what I want. As of
> now I am leaning towards extracting the metadata using exiv2 and
> creating a script (shell, Perl, whatever) to put the fields I need
> into a pseudo-Dublin Core XML format. I say pseudo because I have a
> few fields that are not Dublin Core. I am assuming there is a better
> way. (Although part of me thinks it might be easier to do that than
> exporting to XML and using XSLT to transform the file, since I might
> need to do a lot of cleanup of the data regardless.)
> 
> Anyway, before I go any further, does anyone have any
> thoughts/ideas/suggestions?

I haven't (yet) used it myself, but Exiv2 ( http://www.exiv2.org )
supports reading and writing XMP, EXIF and IPTC metadata from
a large number of file formats.
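
A couple of illustrative invocations (from memory -- check the exiv2 man
page for the exact options in your build):

exiv2 -px image.tif    # print the XMP properties
exiv2 -eX image.tif    # extract the XMP packet to an image.xmp sidecar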

-Joe


Re: [CODE4LIB] TIFF Metadata to XML?

2011-07-18 Thread Dave Rice
Try exiftool with the -X flag to get RDF XML output.
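
For example (the single-file form just redirects the RDF/XML output; the
-r and -w flags are one way it might be driven over a large directory):

exiftool -X image.tif > image.xml      # RDF/XML for a single file
exiftool -X -r -w xml /path/to/photos  # recurse; write one .xml per image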
Dave Rice
avpreserve.com

On Jul 18, 2011, at 9:18 AM, Edward M. Corrado wrote:

> Hello All,
> 
> Before I re-invent the wheel or try many different programs, does
> anyone have a suggestion on a good way to extract embedded metadata
> added by cameras and (more importantly) photo-editing programs such as
> Photoshop from TIFF files and save it as XML? I have > 60k photos
> that have metadata including keywords, descriptions, creator, and
> other fields embedded in them, and I need to extract the metadata so I
> can load them into our digital archive.
>
> Right now, after looking at a few tools and doing a number of Google
> searches, I haven't found anything that seems to do what I want. As of
> now I am leaning towards extracting the metadata using exiv2 and
> creating a script (shell, Perl, whatever) to put the fields I need
> into a pseudo-Dublin Core XML format. I say pseudo because I have a
> few fields that are not Dublin Core. I am assuming there is a better
> way. (Although part of me thinks it might be easier to do that than
> exporting to XML and using XSLT to transform the file, since I might
> need to do a lot of cleanup of the data regardless.)
> 
> Anyway, before I go any further, does anyone have any
> thoughts/ideas/suggestions?
> 
> Edward


Re: [CODE4LIB] TIFF Metadata to XML?

2011-07-18 Thread Edward M. Corrado
Thanks for all the suggestions. I now have multiple ways to get an
XML file... now I only need to figure out which fields map to what.

Edward

On Mon, Jul 18, 2011 at 9:57 AM, Dave Rice  wrote:
> Try exiftool with the -X flag to get RDF XML output.
> Dave Rice
> avpreserve.com
>
> On Jul 18, 2011, at 9:18 AM, Edward M. Corrado wrote:
>
>> Hello All,
>>
>> Before I re-invent the wheel or try many different programs, does
>> anyone have a suggestion on a good way to extract embedded metadata
>> added by cameras and (more importantly) photo-editing programs such as
>> Photoshop from TIFF files and save it as XML? I have > 60k photos
>> that have metadata including keywords, descriptions, creator, and
>> other fields embedded in them, and I need to extract the metadata so I
>> can load them into our digital archive.
>>
>> Right now, after looking at a few tools and doing a number of Google
>> searches, I haven't found anything that seems to do what I want. As of
>> now I am leaning towards extracting the metadata using exiv2 and
>> creating a script (shell, Perl, whatever) to put the fields I need
>> into a pseudo-Dublin Core XML format. I say pseudo because I have a
>> few fields that are not Dublin Core. I am assuming there is a better
>> way. (Although part of me thinks it might be easier to do that than
>> exporting to XML and using XSLT to transform the file, since I might
>> need to do a lot of cleanup of the data regardless.)
>>
>> Anyway, before I go any further, does anyone have any
>> thoughts/ideas/suggestions?
>>
>> Edward
>


Re: [CODE4LIB] Seeking feedback on database design for an open source software registry

2011-07-18 Thread Peter Murray
Nate --

Thanks for the pointer to NITRC.  There are some good interface elements there 
that might be useful to emulate.

I want to be clear that our grant mandate extends only to the FreshMeat 
registry functionality.  Source code hosting is definitely out of scope for 
what we are doing.

Building community will be hard, particularly because the registry is intended 
not just for developers themselves but also for any library that is interested 
in applying open source solutions to its needs.  That doesn't mean the library 
will be developing or running the software itself (that is where the "Provider" 
entity comes in, and it is a point that distinguishes this registry from 
FreshMeat and NITRC).


Peter

On Jul 17, 2011, at 11:22 PM, Nate Vack wrote:
> 
> On Fri, Jul 15, 2011 at 2:09 PM, Peter Murray  
> wrote:
>> On Jul 15, 2011, at 2:59 PM, Mike Taylor wrote:
>>> 
>>> Isn't this pretty much what FreshMeat is for?
>>>http://freshmeat.net/
>> 
>> It is similar in concept to Freshmeat, but the scope is limited to 
>> library-oriented software (which might be too use-specific for Freshmeat and 
>> certainly harder to find among the vast expanse of non-library-oriented 
>> stuff).
> 
> You might look at NITRC[1], which has tried very hard to do the same
> thing for neuroscience software in addition to providing project
> hosting like Sourceforge. They get funded by some federal grant
> thing[2].
> 
> Unfortunately, they've also found that the world wasn't really looking
> for a site to review and host a small subset of open-source projects,
> so their usage isn't high. They've convinced some projects to come
> live in their domain, so they seem to attract enough funding to stay
> online, but they've never succeeded in becoming much of a community.
> And the "people who do neuroscience" crowd is probably two orders of
> magnitude larger than the "people who do open-source in libraries"
> crowd -- so building a vibrant community will be even harder in this
> case.
> 
> The real problem for me is that their site doesn't seem to warrant
> enough attention to really be made usable or stay up reliably. So if
> you want to get software that's hosted only by them, it can be really
> frustrating. It's like a crappy FreshMeat combined with a crappy,
> unreliable Sourceforge.
> 
> My ultimate take: you can probably do something more interesting with
> your grant money than building a FreshMeat-alike.
> 
> Either way, you might talk to the NITRC folks about their experiences


-- 
Peter Murray    peter.mur...@lyrasis.org    tel:+1-678-235-2955
 
Ass't Director, Technology Services Development   http://dltj.org/about/
LYRASIS   --Great Libraries. Strong Communities. Innovative Answers.
The Disruptive Library Technology Jester    http://dltj.org/
Attrib-Noncomm-Share   http://creativecommons.org/licenses/by-nc-sa/2.5/ 


Re: [CODE4LIB] Seeking feedback on database design for an open source software registry

2011-07-18 Thread Peter Murray
On Jul 18, 2011, at 9:34 AM, Kevin S. Clarke wrote:
> 
> You might also talk to the http://oss4lib.org/ folks to see what they did.

I had some early conversations with Dan Chudnov about six months ago as early 
plans were being drawn up.  I haven't reached out to Dan specifically with the 
latest message, and that is a good suggestion.


Peter
-- 
Peter Murray    peter.mur...@lyrasis.org    tel:+1-678-235-2955
 
Ass't Director, Technology Services Development   http://dltj.org/about/
LYRASIS   --Great Libraries. Strong Communities. Innovative Answers.
The Disruptive Library Technology Jester    http://dltj.org/
Attrib-Noncomm-Share   http://creativecommons.org/licenses/by-nc-sa/2.5/ 


[CODE4LIB] Python/Django/jQuery dev job for Ibis Reader - remote in U.S.

2011-07-18 Thread Jodi Schneider
Develop a great ebook reader with Threepress:
http://blog.threepress.org/2011/07/18/threepress-job-opening-work-on-ibis-reader/


[CODE4LIB] yet another test

2011-07-18 Thread LeVan,Ralph
test


Re: [CODE4LIB] TIFF Metadata to XML?

2011-07-18 Thread Genny Engel
Guess it depends on whether they actually followed any kind of standard in 
encoding the data in the TIFF files.

http://www.metadataworkinggroup.com/pdf/mwg_guidance.pdf


Genny Engel
Internet Librarian
Sonoma County Library
gen...@sonoma.lib.ca.us
www.sonomalibrary.org
707 545-0831 x581


-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Edward 
M. Corrado
Sent: Monday, July 18, 2011 7:40 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] TIFF Metadata to XML?

Thanks for all the suggestions. I now have multiple ways to get an
XML file... now I only need to figure out which fields map to what.

Edward

On Mon, Jul 18, 2011 at 9:57 AM, Dave Rice  wrote:
> Try exiftool with the -X flag to get RDF XML output.
> Dave Rice
> avpreserve.com
>
> On Jul 18, 2011, at 9:18 AM, Edward M. Corrado wrote:
>
>> Hello All,
>>
>> Before I re-invent the wheel or try many different programs, does
>> anyone have a suggestion on a good way to extract embedded metadata
>> added by cameras and (more importantly) photo-editing programs such as
>> Photoshop from TIFF files and save it as XML? I have > 60k photos
>> that have metadata including keywords, descriptions, creator, and
>> other fields embedded in them, and I need to extract the metadata so I
>> can load them into our digital archive.
>>
>> Right now, after looking at a few tools and doing a number of Google
>> searches, I haven't found anything that seems to do what I want. As of
>> now I am leaning towards extracting the metadata using exiv2 and
>> creating a script (shell, Perl, whatever) to put the fields I need
>> into a pseudo-Dublin Core XML format. I say pseudo because I have a
>> few fields that are not Dublin Core. I am assuming there is a better
>> way. (Although part of me thinks it might be easier to do that than
>> exporting to XML and using XSLT to transform the file, since I might
>> need to do a lot of cleanup of the data regardless.)
>>
>> Anyway, before I go any further, does anyone have any
>> thoughts/ideas/suggestions?
>>
>> Edward
>


[CODE4LIB] UIUC jobs

2011-07-18 Thread Jodi Schneider
https://jobs.illinois.edu/default.cfm?page=job&jobID=8424
via
http://twitter.com/ranti/status/93057230454276096


Re: [CODE4LIB] Trends with virtualization

2011-07-18 Thread Richmond,Ian
I have seen the pendulum swing back and forth several times over the last 20 
years between dumb terminals and complete PCs, each with its own set of apps. 
 Philosophically, the tension is between control and anarchy; cost is just 
brought in to justify your position.  If you love control, then dumb terminals 
are what you want.  Since this means things are centralized, it requires 
important hardware and backup systems to make sure it never goes down.  I think 
of this as the "nuclear aircraft carrier mentality" - sinking a nuclear carrier 
would be such a catastrophe (to both sides) that you need umpteen other ships 
to protect it from ever happening. 

I am more of an anarchist: I have faith in people's innate ability to muddle 
through okay for themselves. It doesn't bother me so much that people make 
mistakes and do dumb things; I try to set things up to blunt that, but other 
people's mistakes are really not my responsibility.  I try to set things up more on 
the side of "boppo the clown" - the weighted blow-up figure that you can keep 
hitting forever and still have it come back without effort.  So I love being 
able to snapshot VMs before doing anything new; no longer are you risking 
rebuilding the whole machine every time you update/install something new.  VMs 
let me give people the leeway to shoot themselves in the foot without hurting 
others.  This is a great confidence-builder for people; they will come up with 
new ways of doing things far more often when the penalties for mistakes are not 
so severe.
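
(One illustration of that snapshot workflow, assuming a libvirt/KVM host
since the post doesn't name a hypervisor; the domain and snapshot names
are placeholders:)

virsh snapshot-create-as webvm pre-upgrade   # snapshot before the risky change
# ...do the upgrade/install; if it goes badly, roll back:
virsh snapshot-revert webvm pre-upgrade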

The second thing I love about VMs is that you can delete them. This is because 
you can afford to use them for just one or two things.  In the old days 
(pre-2006) when everything was on bare metal, you bought a big machine 
(aircraft carrier) and put all the business processes on it until there were 
too many to ever have the server go down.  In practical terms, security was 
non-existent, because no one could ever keep up with which task needed to do 
what after a while, and no one wanted to screw up some important process that 
everyone had forgotten needed rights to some files somewhere obscure.  So the 
longer a server lasted, the more extra rights were left over from previous 
business processes that no one even quite remembered any more.  But a VM you 
can delete when the main business process on it stops.  You will have had some 
security creep unless you really named your groups well, but that all goes away 
when you kill the VM.

I brought up the security aspect because it is an argument which can actually 
appeal to those worried about loss of control and proliferating VMs.  (I 
realize I probably have had a sheltered life, but I have only once been in a 
place that had more groups than people, with the groups controlling file access 
named so everyone knew what the main business process was and what the sub-task 
was.)  

--Ian Richmond

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Genny 
Engel
Sent: Monday, July 11, 2011 2:51 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Trends with virtualization

I *had* the entire computer lab go down when the network failed once.  That's 
when I switched it all to local desktops.  The security was way easier to 
manage with a hosted desktop (I basically didn't have to manage it at all) but 
we weren't set up to offer any alternative when the network server hiccupped.   
It took me a lot of time to learn how to set up adequate security on an 
individual desktop, but once I got a good profile set up, I copied the image to 
all the other PCs and we were set.  There weren't any equipment cost 
differences either way, as I recall.

On moving things to the cloud, I'm still leery, especially after that Amazon 
thing a few months ago.
http://www.dailymail.co.uk/sciencetech/article-1379474/Web-chaos-Amazon-cloud-failure-crashes-major-websites-Playstation-Network-goes-AGAIN.html



Genny Engel
Internet Librarian
Sonoma County Library
gen...@sonoma.lib.ca.us
www.sonomalibrary.org
707 545-0831 x581

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of 
Madrigal, Juan A
Sent: Monday, July 11, 2011 8:21 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Trends with virtualization

It's true what they say: history does repeat itself! I don't see how
virtualization is much different from a dumb terminal connected to a
mainframe. I'd hate to see an entire computer lab go down should the
network fail.

The only real promise is for making web development and server management
easier.

VMware is looking to make things easier with CloudFoundry
http://cloudfoundry.org/ along
with Activestate and Stackato http://www.activestate.com/cloud

I definitely want to take those two out for a test run. Deployment looks
dead simple.

Juan Madrigal


Web Developer
Web and Emerging Technologies
University of Miami
Richter Library





On 7/11/11 10:38 AM, "Nate Vac