Hi everyone.

Last week Thomas Breuel from the Image Understanding and Pattern Recognition
group at the Technical University of Kaiserslautern, and John Burns from
JSTOR, visited the ATRC in Toronto for three days of meetings on Fluid
Decapod. The meetings were extremely productive and informative, covering
a wide range of topics: user workflow, use cases, backend API, and
accessibility.

Thomas also demonstrated an early build of the system, in which a
Decapod-controlled Canon G10 camera captured a page of content and then
corrected its perspective. (I wish I had taken a video of this
demonstration, because it was far more impressive in person.)

I have posted the meeting notes to the Fluid Wiki:
http://wiki.fluidproject.org/x/NgSY

There is a lot of information to digest on that wiki page, but it gives you
a sense of the direction the project is heading.

Notable take-aways:
* New design direction for the main UI
* Addition of a "Project Manager" style interface to make exporting and
managing multiple Decapod projects easier
* Refined user and system workflow
* UI Development to begin shortly
* Back-end development continues in all areas

Decapod is an open source project, so we invite everyone interested to get
involved. Currently we welcome participation in:
* JavaScript UI development (to be coordinated with the lead UI developer)
* User testing (administer a test, or be a test user)
* Domain experts in paper collections interested in digitization and
preservation (i.e. librarians, curators, administrators, etc.)

Have a great weekend!

Sincerely,

- Jonathan

---
Jonathan Hung / [email protected]
Designer Fluid Decapod Project - ATRC at University of Toronto
Tel: (416) 946-3002
_______________________________________________________
fluid-work mailing list - [email protected]
To unsubscribe, change settings or access archives,
see http://fluidproject.org/mailman/listinfo/fluid-work