We have also been working on semantic tagging of image parts in the microscopy realm, through our Web Image Browser. This system is designed for annotating extremely large images from light and electron microscopy. A demo can be seen at:

http://ccdb-portal.crbs.ucsd.edu/WebImageBrowser/cgi-bin/start.pl?imagePath=http://ccdb-portal.crbs.ucsd.edu:8081/ZoomifyDataServer/data/MP_23_rec

The annotation creates a simple entity-quality (EQ) model, drawing on the NIF ontologies through an autocomplete function. In the "View xml" function, you can see the XML we use to relate the annotations to the geometry. The next step will be to specify these geometries in brain coordinates when the data are registered to our Whole Brain Catalog (http://wholebraincatalog.org) or some other coordinate system.
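For readers who want a feel for the data model, here is a rough Turtle sketch of how an EQ annotation might be tied to an image region. Every name in the ex: namespace is invented for illustration; this is not the actual XML the Web Image Browser emits:

  @prefix ex: <http://example.org/wib#> .

  # One annotation: an entity term plus a quality term from an ontology,
  # anchored to a geometric region of a specific image
  ex:annotation1
      ex:onImage     <http://ccdb-portal.crbs.ucsd.edu:8081/ZoomifyDataServer/data/MP_23_rec> ;
      ex:entity      ex:DendriticSpine ;   # entity term (e.g., from a NIF ontology)
      ex:quality     ex:Swollen ;          # quality term paired with the entity
      ex:hasGeometry ex:polygon1 .

  # Geometry given as a vertex list in pixel coordinates of the source image;
  # a registered image could carry brain coordinates instead
  ex:polygon1 a ex:Polygon ;
      ex:points "120,45 180,45 180,110 120,110" ;
      ex:unit   "pixel" .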

Maryann


On Jan 10, 2011, at 12:48 PM, Tim Clark wrote:

Hi Michael,

The default is assumed to be pixels. Thanks very much for the suggestion regarding GelML.

Subclasses of AO's Selector class have been worked through in some detail for annotating text, but have not received comparably detailed attention for images.
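As a concrete (if simplified) illustration, a text annotation in the AO style can anchor a phrase by its exact text plus surrounding context. The sketch below is Turtle; the selector property names are placeholders rather than quotes from the AO vocabulary, so consult the Selectors wiki page for the normative terms:

  @prefix ao: <http://purl.org/ao/core/> .
  @prefix ex: <http://example.org/> .

  ex:ann1 a ao:Annotation ;
      ao:context  ex:sel1 ;                  # where in the document
      ao:hasTopic ex:Microcalcification .    # what the selected text is about

  # Illustrative prefix/exact/suffix text selector
  ex:sel1 a ao:Selector ;
      ex:exact  "microcalcifications" ;
      ex:prefix "presence of " ;
      ex:suffix " in the upper quadrant" .

A comparably worked-out set of selectors for image regions is the open piece.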

This is something we are hoping to do in the future, particularly for biomedical imaging. Collaborators interested in working with us on this topic are welcome.

Kind regards,

Tim


On Jan 10, 2011, at 3:28 PM, Michael Miller wrote:

hi tim and scott,

in looking at the ImageSelector, i'm surprised there are no units specified, either as a default or as a property. are they assumed to be pixels?

also, you might want to take a look at GelML (http://psidev.info/index.php?q=node/448) for a somewhat more sophisticated way to specify a position. the specification allows four basic shape types: BoundaryChain, BoundaryPointSet, Circle, and Rectangle. altho it's an XML Schema spec, it should be easy enough to translate to RDF.
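for instance, a GelML-style Rectangle might come over to RDF along these lines, with the units made explicit as a property rather than left implied (the vocabulary below is invented for the sketch, not taken from GelML itself):

  @prefix ex:  <http://example.org/shape#> .
  @prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

  # A rectangular region with explicit units
  ex:region1 a ex:Rectangle ;
      ex:x      "102"^^xsd:integer ;   # top-left corner
      ex:y      "240"^^xsd:integer ;
      ex:width  "64"^^xsd:integer ;
      ex:height "48"^^xsd:integer ;
      ex:unit   "pixel" .              # explicit, instead of an assumed default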

cheers,
michael


-----Original Message-----
From: public-semweb-lifesci-requ...@w3.org [mailto:public-semweb-lifesci-requ...@w3.org] On Behalf Of Tim Clark
Sent: Monday, January 10, 2011 11:35 AM
To: M. Scott Marshall
Cc: HCLS IG; public-...@w3.org; Daniel Rubin; John F. Madden; Vasiliy Faronov; Toby Inkster; Peter DeVries; Tim Berners-Lee; Paolo Ciccarese;
Anita de Waard; Maryann Martone
Subject: Re: best practice relation for linking to image/machine-opaque docs? biomedical use case

Hi Scott,

For referring to a portion of an image, let me point you to work in my group done in collaboration with the HCLS Scientific Discourse Task, UCSD, Elsevier, and one of the major pharmas. Paolo Ciccarese is the main author, and this work is based on the earlier W3C project Annotea.

AO, the Annotation Ontology, is here: http://code.google.com/p/annotation-ontology/. It was presented at Bio Ontologies 2010, and a full-length paper is in press at BMC Bioinformatics.

Bio Ontologies 2010 slides here:
http://www.slideshare.net/paolociccarese/ao-annotation-ontology-for-science-on-the-web

AO uses a special subclass of Selector to specify the part of the
document (image) being referred to.

see here for Selectors: http://code.google.com/p/annotation-ontology/wiki/Selectors

and here for an example of image annotation:
http://code.google.com/p/annotation-ontology/wiki/AnnotationTypes
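As a rough sketch of the pattern in Turtle (the property names here are from memory and may not match the released AO vocabulary exactly; the wiki pages above are normative):

  @prefix ao: <http://purl.org/ao/core/> .
  @prefix ex: <http://example.org/> .

  # An annotation whose context is a rectangular region of an image
  ex:imgAnn a ao:Annotation ;
      ao:context  ex:regionSel ;          # selects the region of interest
      ao:hasTopic ex:Mitochondrion .      # ontology term the region denotes

  # Illustrative image selector in pixel coordinates
  ex:regionSel a ao:Selector ;
      ex:x 310 ; ex:y 220 ;
      ex:width 55 ; ex:height 40 .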

Best

Tim

On Jan 10, 2011, at 11:30 AM, M. Scott Marshall wrote:

[Scott dusts off an old use case and pulls it from the shelf. Adjusts subject of thread. Was: best practice for referring to PDF]

In the Health Care and Life Science domains, image data is a common form of data under discussion, so a best practice for referring to an image, or to an (extractable) feature *within* an image, would cover a fundamental need in biomedicine: pointing to 'raw' data as evidence (as well as giving meaning to the raw data!).

A clinical example from breast cancer: a scan produces an image containing features referred to by the radiologist as 'microcalcifications', which can be indicative of the presence of a tumor.

I can think of a few scenarios that would refer to the image data (mammogram). There are probably more:
1) The radiology report (in RDF) asserts the presence of microcalcifications and refers to the entire image as evidence.
2) The radiology report (in RDF) asserts the presence of microcalcifications and refers to the entire image as evidence, along with an image processing/feature extraction program that will highlight the phenomenon in the image.
3) The radiology report (in RDF) asserts the presence of microcalcifications and refers to a specific region in the image as evidence, using some function of a 2D coordinate system such as a polyline.

The question: How can we refer to the microcalcifications as an indication of a certain type of tumor in each of cases 1, 2, and 3 in RDF?

I am especially interested in the 'structural' aspects: How do we refer to the image document as containingEvidence? How can we refer to a *region* of the image in the document? How can we refer to the software that will extract the relevant features with statistical confidence, etc.? (A strawman sketch for case 3 follows below.)
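To make the question concrete, here is one strawman Turtle encoding of case 3; everything in the ex: namespace is invented, and the point is the shape of the graph rather than the particular vocabulary:

  @prefix ex: <http://example.org/radiology#> .

  # The assertion: a finding that indicates a tumor type
  ex:finding1 a ex:Microcalcification ;
      ex:indicates   ex:DuctalCarcinomaInSitu ;   # hypothesized tumor type
      ex:hasEvidence ex:region1 .

  # Case 3: the evidence is a specific region of the mammogram
  ex:region1 a ex:ImageRegion ;
      ex:withinImage ex:mammogram1 ;
      ex:polyline    "102,240 166,240 166,288 102,288" ;  # 2D pixel coords
      ex:unit        "pixel" .

  # Case 2 would instead point at the whole image plus the software
  # that highlights the phenomenon:
  ex:finding1   ex:hasEvidence  ex:mammogram1 .
  ex:mammogram1 ex:processedBy  ex:featureExtractorV2 .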

Any ideas or pointers to existing practices would be appreciated. I'm aware of some related work in multimedia on referring to temporal regions, but I am specifically interested in spatial regions.

Note that an analogous question of practice exists for textual
documents such as literature in PubMed that can be text-mined for
(evidence of) assertions.

* Note: 2D is a simplification that should come in handy in implementations and is often deemed necessary, e.g., for thumbnails.

-Scott

--
M. Scott Marshall, W3C HCLS IG co-chair, http://www.w3.org/blog/hcls
Leiden University Medical Center / University of Amsterdam
http://staff.science.uva.nl/~marshall

On Mon, Jan 10, 2011 at 4:01 PM, Tim Berners-Lee <ti...@w3.org> wrote:
It is well to look at, and make best practices for, the things we already have.

It was the FOAF folks who, initially, instead of using linked data, used an Inverse Functional Property to uniquely identify someone, and then rdfs:seeAlso to find the data about them. So any FOAF browser has to look up the seeAlso, or it won't follow any links.

So Tabulator, when looking up x and finding x rdfs:seeAlso y, will always load y. So must any similar client or any crawler.
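In FOAF terms, the pattern looks like this (a minimal sketch; foaf:mbox_sha1sum is the classic inverse functional property, and the hash value is a placeholder):

  @prefix foaf: <http://xmlns.com/foaf/0.1/> .
  @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

  # The person is identified by an IFP, and seeAlso points at
  # where machine-readable data about them lives
  [] a foaf:Person ;
     foaf:mbox_sha1sum "ab12cd34..." ;                       # placeholder hash
     rdfs:seeAlso <http://example.org/people/alice.rdf> .    # client must fetch this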

So there is a lot of existing use we would throw away if we allowed rdfs:seeAlso to point to things which do not provide data. (It isn't a question of conneg or MIME type; that is a red herring. It is whether there is machine-readable, standards-based stuff about x.)

Further, we should not make any weaker properties like seeDocumentation subproperties of rdfs:seeAlso, or they would imply it. We maybe need a very weak top property like

mayHaveSomeKindOfInfoAboutThis

to be the superproperty of all the others.

One thing which could be stronger than seeAlso is definedBy, if it is normally used for data, to point to the definitive ontology. That would then imply seeAlso.
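Expressed as a Turtle sketch (rdfs:seeAlso and rdfs:isDefinedBy are real RDFS terms; the other property names are placeholders from the discussion above):

  @prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
  @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
  @prefix ex:   <http://example.org/links#> .

  # Very weak top property: promises no machine-readable data at the target
  ex:mayHaveSomeKindOfInfoAboutThis a rdf:Property .

  # seeAlso and the weaker seeDocumentation both sit under the top property,
  # but seeDocumentation is deliberately NOT a subproperty of seeAlso
  rdfs:seeAlso        rdfs:subPropertyOf ex:mayHaveSomeKindOfInfoAboutThis .
  ex:seeDocumentation rdfs:subPropertyOf ex:mayHaveSomeKindOfInfoAboutThis .

  # A stronger property, normally pointing at data (the definitive ontology),
  # implies seeAlso; RDFS in fact already declares this subproperty relation
  rdfs:isDefinedBy    rdfs:subPropertyOf rdfs:seeAlso .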

Tim

Maryann Martone
Professor-In-Residence
Dept of Neurosciences
University of California, San Diego
San Diego, CA  92093-0446
Tel:  858 822 0745
Fax:  858 822 3610
