Revision: 17413
http://sourceforge.net/p/gate/code/17413
Author: ian_roberts
Date: 2014-02-24 17:30:43 +0000 (Mon, 24 Feb 2014)
Log Message:
-----------
Documentation for crowd tools.
Modified Paths:
--------------
userguide/trunk/Makefile
userguide/trunk/tao_main.tex
Added Paths:
-----------
userguide/trunk/crowdsourcing.tex
userguide/trunk/example-cf-annotation-job.png
userguide/trunk/new-annotation-job-dialog.png
userguide/trunk/new-classification-job-dialog.png
Modified: userguide/trunk/Makefile
===================================================================
--- userguide/trunk/Makefile 2014-02-24 17:30:09 UTC (rev 17412)
+++ userguide/trunk/Makefile 2014-02-24 17:30:43 UTC (rev 17413)
@@ -12,7 +12,7 @@
tao_main.tex intro.tex gettingstarted.tex developer.tex creole-model.tex \
corpora.tex annie.tex api.tex jape.tex annic.tex evaluation.tex \
gate_development.tex gazetteers.tex ontologies.tex language-creole.tex machine-learning.tex \
-alignment.tex parsers.tex uima.tex misc-creole.tex changes.tex \
+alignment.tex parsers.tex crowdsourcing.tex uima.tex misc-creole.tex changes.tex \
plugin-name-map.tex design.tex ant-tasks.tex negram.tex \
postag.tex mlconfig.tex iaa-kappa.tex shortcuts.tex colophon.tex \
recent-changes.tex cloud.tex teamware.tex mimir.tex domain-creole.tex
Added: userguide/trunk/crowdsourcing.tex
===================================================================
--- userguide/trunk/crowdsourcing.tex (rev 0)
+++ userguide/trunk/crowdsourcing.tex 2014-02-24 17:30:43 UTC (rev 17413)
@@ -0,0 +1,439 @@
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+% crowdsourcing.tex
+%
+% Ian Roberts, February 2014
+%
+% $Id: uima.tex,v 1.3 2006/10/21 11:44:47 ian Exp $
+%
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+\chapt[chap:crowd]{Crowdsourcing Data with GATE}
+\markboth{Crowdsourcing Data with GATE}{Crowdsourcing Data with GATE}
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+\normalsize
+
+To develop high-performance language processing applications, you need training
+data. Traditionally that means recruiting a small team of experts in your
+chosen domain, then spending several iterations developing annotation
+guidelines, training your annotators, doing a test run, examining the results,
+and refining the guidelines until you reach an acceptable level of
+inter-annotator agreement, before finally letting the annotators loose on the
+full corpus and cross-checking their results\ldots Clearly this can be a
+time-consuming and expensive process.
+
+An alternative approach for some annotation tasks is to \emph{crowdsource} some
+or all of your training data. If the task can be defined tightly enough and
+broken down into sufficiently small self-contained chunks, then you can take
+advantage of services such as Amazon Mechanical
+Turk\footnote{\htlinkplain{https://www.mturk.com/}} to farm out the tasks to a
+much larger pool of users over the Internet, paying each user a small fee per
+completed task. For the right kinds of annotation tasks crowdsourcing can be
+much more cost-effective than the traditional approach, as well as giving a
+much faster turn-around time (since the job is shared among many more people
+working in parallel).
+
+This \chapthing{} describes the tools that GATE Developer provides to assist in
+crowdsourcing data for training and evaluation. GATE provides tools for two
+main types of crowdsourcing task:
+
+\begin{itemize}
+\item \emph{annotation} -- present the user with a snippet of text (e.g.
+ a sentence) and ask them to mark all the mentions of a particular annotation
+ type.
+\item \emph{classification} -- present the user with a snippet of text
+ containing an existing annotation with several possible labels, and ask them
+ to select the most appropriate label (or ``none of the above'').
+\end{itemize}
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+\sect[sec:crowd:basics]{The Basics}
+
+The GATE crowdsourcing tools are based on the \emph{CrowdFlower}
+platform\footnote{\htlinkplain{http://crowdflower.com}}. To get the most out
+of the GATE tools it is first necessary to understand a few pieces of
+CrowdFlower terminology.
+
+\begin{itemize}
+\item a \emph{job} is the container which represents a single end-to-end
+ crowdsourcing process. It defines the input form you want to present to your
+ workers, and holds a number of units of work.
+\item a \emph{unit} is a single item of work, i.e. a single snippet (for
+ annotation jobs) or a single entity (for classification jobs). CrowdFlower
+ presents several units at a time to the user as a single \emph{task}, and
+ users are paid for each task they successfully complete.
+\item a \emph{gold} unit is one where the correct answer is already known in
+ advance. Gold units are the basis for determining whether a task has been
+ completed ``successfully'' -- when a job includes gold units, CrowdFlower
+ includes one gold unit in each task but does not tell the user which one it
+ is, and if they get the gold unit wrong then the whole task is disregarded.
+ You can track users' performance through the CrowdFlower platform and ignore
+ results from users who get too many gold units wrong.
+\end{itemize}
+
+CrowdFlower provides a web interface to build jobs in a browser, and also a
+REST API for programmatic access. The GATE tools use the REST API, so you will
+need to sign up for a CrowdFlower account and generate an API key which you
+will use to configure the various processing resources.
+
+To access the GATE crowdsourcing tools, you must first load the
+\verb!Crowd_Sourcing! plugin. This plugin provides four PR types, a ``job
+builder'' and ``results importer'' for each of the two supported styles of
+crowdsourcing job.
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+\sect[sec:crowd:classification]{Entity classification}
+
+The ``entity classification'' job builder and results importer PRs are intended
+for situations where you have pre-annotated entities but each entity could have
+one of several different labels. Examples could be:
+
+\begin{itemize}
+\item a term recognition system that has established which spans of text are
+ candidate terms but not what class of term each annotation represents.
+\item annotation with respect to an ontology, when the same string could match
+ one of several different ontology concepts.
+\end{itemize}
+
+In the first case, the set of available labels would be constant, with the same
+set of options presented for every unit. In the second case each annotation
+would supply its own set of options (there may also be ``common options''
+available for every annotation, such as ``none of the above'').
+
+\subsect[sec:crowd:classification:create]{Creating a classification job}
+
+To start a new classification job, first load the \verb!Crowd_Sourcing! plugin,
+then create a new instance of the ``Entity Classification Job Builder'' PR.
+The PR requires your CrowdFlower API key as an init-time parameter.
+
+Right-clicking on the newly-created PR in the resources tree will offer the
+option to ``Create a new CrowdFlower job'', which presents a dialog to
+configure the settings of the new job (see
+figure~\ref{fig:crowd:new-classification-job}). The available options are as
+follows:
+
+\begin{figure}[tb]
+ \centering
+ \includegraphics[width=0.5\textwidth]{new-classification-job-dialog.png}
+ \caption{Setting options to create a new classification job}
+ \label{fig:crowd:new-classification-job}
+\end{figure}
+
+\begin{description}
+\item[Job title] a descriptive title for this job. %TODO check whether this appears to the workers
+\item[Task caption] the ``question'' that the user will be asked. This is
+ shown above the snippet showing the entity in context, and may include the
+ placeholder \verb!{{entity}}! (including the double braces) which will be
+ replaced by the text covered by the target entity annotation.
+\item[Instructions] detailed instructions that will be shown to workers. In
+ contrast to the caption, which is shown as part of each unit, the
+ instructions appear just once on each task page, and are in a collapsible
+ panel so the user can hide them once they are confident that they understand
+ the task. The instructions are rendered as HTML, which allows them to
+ include markup but also means that characters such as \verb!&! and \verb!<!
+ must be escaped as HTML entity references.
+\item[Common options] options that will be available for \emph{all} units, in
+ addition to unit-specific options taken from the target annotation. These
+ common options appear below the unit-specific options (if any) and are
+  presented in the order specified here. Use the \verb!+! and \verb!-! buttons
+  to add and remove options, and the arrows to change the order. For each row
+  in the table, the ``Value'' column is the value that will be submitted as the
+  answer if the user selects this option, and the ``Description'' column is the
+  string that will be shown to the user. It is a good idea to explain the
+  common options in the instructions.
+\end{description}
+
+Clicking ``OK'' will make calls to the CrowdFlower REST API to create a job
+with the given settings, and store the resulting job ID so the PR can be used
+to load units into the job.
+
+\subsect[sec:crowd:classification:data]{Loading data into a job}
+
+When added to a corpus pipeline application, the PR will read annotations from
+documents and use them to create units of work in the CrowdFlower job. It is
+highly recommended that you store your documents in a persistent corpus in a
+serial datastore, as the PR will add additional features to the source
+annotations which can be used at a later date to import the results of the
+crowdsourcing job and turn them back into GATE annotations.
+
+The job builder PR has a few runtime parameters:
+\begin{description}
+\item[contextASName/contextAnnotationType] the annotation set and type
+ representing the snippets of text that will be shown as the ``context''
+ around an entity. Typically the ``context'' annotation will be something
+ like ``Sentence'', or possibly ``Tweet'' if you are working with Twitter
+ data.
+\item[entityASName/entityAnnotationType] the annotation set and type
+ representing the individual entities to be classified. Every ``entity''
+ annotation must fall within the span of exactly one ``context'' annotation.
+\item[jobId] the unique identifier of the CrowdFlower job that is to be
+ populated. This parameter is filled in automatically when you create a job
+ with the dialog described above.
+\end{description}
+
+The number and format of the options presented to the user, and the marking of
+annotations as ``gold'', are handled by a number of conventions governing the
+features that each entity annotation is expected to have. Getting the
+annotations into the required format is beyond the scope of the
+\verb!Crowd_Sourcing! plugin itself, and will probably involve the use of
+custom JAPE grammars and/or Groovy scripts.
+
+The job builder expects the following features on each entity annotation:
+\begin{description}
+\item[options] the classification options that are specific to this unit. If
+ this feature is supplied its value must take one of two forms, either:
+ \begin{enumerate}
+ \item a \verb!java.util.Collection! of values (typically strings, but any
+ object with a sensible \verb!toString()! representation can be used).
+ \item a \verb!java.util.Map! where a \emph{key} in the map is the value to be
+ submitted by the form if this option is selected, and the corresponding
+ \emph{value} is the description of the option that will be displayed to the
+ user. For example, if the task is to select an appropriate URI from an
+ ontology then the key would be the ontology URI and the value could be an
+ \verb!rdfs:label! for that ontology resource in a suitable language.
+ \end{enumerate}
+ If this feature is omitted, then only the ``common options'' configured for
+ the job will be shown.
+\item[correct] the ``correct answer'' if this annotation represents a gold
+ unit, which must match one of the ``options'' for this unit (a \emph{key} if
+ the options are given as a map) or one of the job's configured ``common
+ options''. If omitted the unit is not marked as gold.
+\end{description}
+
+Note that the options will be presented to the workers in the order they are
+returned by the collection (or the map's \verb!entrySet()!) iterator.
+If this matters then you should consider using a collection or map type with
+predictable iteration order (e.g. a \verb!List! or \verb!LinkedHashMap!). In
+particular it is often a good idea to randomize the ordering of options -- if
+you always put the most probable option first then users will learn this and
+may try to ``beat the system'' by always selecting option 1 for every unit.
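
The conventions above can be illustrated with a short sketch (Python is used
here purely for illustration; in a real pipeline this logic would typically
live in a Groovy script or JAPE grammar, and the helper name
\verb!make_options! is hypothetical, not part of the plugin):

```python
import random

def make_options(choices, seed=None):
    """Build an insertion-ordered mapping of submitted value -> displayed
    description, shuffling the entries so workers cannot learn a fixed
    position for the most probable answer.  Python dicts preserve insertion
    order, playing the role of Java's LinkedHashMap here."""
    items = list(choices.items())
    random.Random(seed).shuffle(items)
    return dict(items)

# Illustrative features for one gold entity annotation: the "correct"
# value must match one of the option keys (or a common option).
features = {
    "options": make_options({
        "http://example.org/onto#City":   "a city or town",
        "http://example.org/onto#Person": "a person's name",
        "http://example.org/onto#Org":    "an organisation",
    }, seed=42),
    "correct": "http://example.org/onto#City",
}
```

The same keys survive the shuffle; only their presentation order changes from
unit to unit.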
+
+The ID of the created unit will be stored as an additional feature named
+\verb!cf_unit! on the entity annotation.
+
+\subsect[sec:crowd:classification:import]{Importing the results}
+
+Once you have populated your job and gathered judgments from human workers, you
+can use the ``Entity Classification Results Importer'' PR to turn those
+judgments back into GATE annotations in your original documents.
+
+As with the job builder, the results importer PR has just one initialization
+parameter, which is your CrowdFlower API key, and the following runtime
+parameters:
+\begin{description}
+\item[entityASName/entityAnnotationType] the annotation set and type
+ representing the entities that have been classified. Each entity annotation
+ should have a \verb!cf_unit! feature created by the job builder PR.
+\item[resultASName/resultAnnotationType] the annotation set and type where
+ annotations corresponding to the judgments of your annotators should be
+ created.
+\item[jobId] the ID of the CrowdFlower job whose results are being imported
+ (copy the value from the corresponding job builder PR).
+\end{description}
+
+When run, the results importer PR will call the CrowdFlower REST API to
+retrieve the list of judgments for each unit in turn, and then create one
+annotation of the target type in the target annotation set (as configured by
+the ``result'' runtime parameters) for each judgment -- so if your job required
+three annotators to judge each unit then the unit will generate three output
+annotations, all with the same span (as each other and as the original input
+entity annotation). Each generated annotation will have the following
+features:
+\begin{description}
+\item[cf\_judgment] the ``judgment ID'' -- the unique identifier assigned to
+ this judgment by CrowdFlower.
+\item[answer] the answer selected by the user -- this will be one of the option
+ values (a map key if the options were provided as a map) or one of the common
+ options configured when the job was created.
+\item[worker\_id] the CrowdFlower identifier for the worker who provided this
+ judgment. There is no way to track this back directly to a specific human
+ being, but it is guaranteed that two judgments with the same worker ID were
+ performed by the same person.
+\item[trust] the worker's ``trust score'' assigned by CrowdFlower based on the
+ proportion of this job's gold units they answered correctly. The higher the
+ score, the more reliable this worker's judgments.
+\end{description}
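
As a sketch of how these features might be used downstream, you could discard
judgments from low-trust workers before aggregating answers per unit (Python
for illustration; the threshold value is an arbitrary choice, not a plugin
default):

```python
def reliable_answers(judgments, min_trust=0.8):
    """Keep only judgments from workers whose CrowdFlower trust score meets
    a threshold, then tally the remaining answers.  `judgments` is a list of
    feature dicts shaped like those produced by the results importer."""
    tally = {}
    for j in judgments:
        if j["trust"] >= min_trust:
            tally[j["answer"]] = tally.get(j["answer"], 0) + 1
    return tally

judgments = [
    {"cf_judgment": 101, "answer": "Person", "worker_id": "w1", "trust": 0.95},
    {"cf_judgment": 102, "answer": "Person", "worker_id": "w2", "trust": 0.90},
    {"cf_judgment": 103, "answer": "Org",    "worker_id": "w3", "trust": 0.40},
]
# Only the two trusted workers are counted.
counts = reliable_answers(judgments)
```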
+
+Since each generated annotation tracks the judgment ID it was created from,
+this PR is idempotent -- if you run it again over the same corpus, new
+annotations will be created only for new judgments; you will not get duplicate
+annotations for judgments that have already been processed.
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+\sect[sec:crowd:annotation]{Entity annotation}
+
+The ``entity annotation'' job builder and results importer PRs are intended
+for situations where you want people to mark occurrences of named entities in
+plain text. A number of simplifying assumptions are made to make this task
+suitable for crowdsourcing:
+\begin{itemize}
+\item Text is presented in short snippets (e.g. one sentence or Tweet at a
+ time).
+\item Each job focuses on one specific entity type (if you want to annotate
+ different entities you can do this by running a number of different jobs over
+ the same corpus).
+\item Entity annotations are constrained to whole tokens only, and there are no
+ adjacent annotations (i.e. a contiguous sequence of marked tokens represents
+ \emph{one} target annotation, and different annotations must be separated
+ by at least one intervening token). This is a reasonable assumption to make
+ given the previous point, as adjacent entities of the same type will usually
+ be separated by something (a comma, the word ``and'', etc.).
+\end{itemize}
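
The third assumption means a worker's selections can be decoded purely from
token positions. A minimal sketch of the run-merging idea (Python for
illustration; the real importer works on GATE annotation offsets rather than
bare indices):

```python
def runs_of_selected(selected):
    """Group a sorted list of selected token indices into maximal runs of
    adjacent tokens; each run corresponds to one entity annotation."""
    runs, current = [], []
    for i in selected:
        if current and i == current[-1] + 1:
            current.append(i)          # extend the current run
        else:
            if current:
                runs.append((current[0], current[-1]))
            current = [i]              # start a new run
    if current:
        runs.append((current[0], current[-1]))
    return runs

# Tokens 1-2 form one annotation, token 5 another.
runs = runs_of_selected([1, 2, 5])
```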
+
+\subsect[sec:crowd:annotation:create]{Creating an annotation job}
+
+To start a new annotation job, first load the \verb!Crowd_Sourcing! plugin,
+then create a new instance of the ``Entity Annotation Job Builder'' PR.
+The PR requires your CrowdFlower API key as an init-time parameter.
+
+Right-clicking on the newly-created PR in the resources tree will offer the
+option to ``Create a new CrowdFlower job'', which presents a dialog to
+configure the settings of the new job (see
+figure~\ref{fig:crowd:new-annotation-job}). The available options are as
+follows:
+
+\begin{figure}[tb]
+ \centering
+ \includegraphics[width=0.5\textwidth]{new-annotation-job-dialog.png}
+ \caption{Setting options to create a new annotation job}
+ \label{fig:crowd:new-annotation-job}
+\end{figure}
+
+\begin{description}
+\item[Job title] a descriptive title for this job. %TODO check whether this appears to the workers
+\item[Task caption] the ``question'' that the user will be asked, which should
+ include the kind of annotations they are being asked to find.
+\item[Caption for ``no entities'' checkbox] if the user does not select any
+ tokens to annotate, they must explicitly click a checkbox to confirm that
+ they believe there are no mentions in this unit. This is done to distinguish
+ between units that have not been attempted and units which have been
+ attempted but for which the correct answer is ``nothing''. This parameter is
+ the caption shown for this checkbox, and should include the kind of
+ annotations the user is being asked to find.
+\item[Instructions] detailed instructions that will be shown to workers. In
+ contrast to the caption, which is shown as part of each unit, the
+ instructions appear just once on each task page, and are in a collapsible
+ panel so the user can hide them once they are confident that they understand
+ the task. The instructions are rendered as HTML, which allows them to
+ include markup but also means that characters such as \verb!&! and \verb!<!
+ must be escaped as HTML entity references.
+\end{description}
+
+The defaults assume a job to annotate person names within the context of a
+single sentence, where the selection is done at the level of words (i.e. Token
+annotations). Figure~\ref{fig:crowd:sample-annotation-job} shows how the units
+are presented to users.
+\begin{figure}[tb]
+ \centering
+ \includegraphics[width=0.5\textwidth]{example-cf-annotation-job.png}
+ \caption{Example of how an annotation job is presented to workers}
+ \label{fig:crowd:sample-annotation-job}
+\end{figure}
+
+
+Clicking ``OK'' will make calls to the CrowdFlower REST API to create a job
+with the given settings, and store the resulting job ID so the PR can be used
+to load units into the job.
+
+\subsect[sec:crowd:annotation:data]{Loading data into a job}
+
+When added to a corpus pipeline application, the PR will read annotations from
+documents and use them to create units of work in the CrowdFlower job. It is
+highly recommended that you store your documents in a persistent corpus in a
+serial datastore, as the PR will add additional features to the source
+annotations which can be used at a later date to import the results of the
+crowdsourcing job and turn them back into GATE annotations.
+
+The job builder PR has a few runtime parameters:
+\begin{description}
+\item[snippetASName/snippetAnnotationType] the annotation set and type
+ representing the snippets of text that will be shown to the user. Each
+ snippet is one unit of work, and typical examples would be ``Sentence'' or
+ ``Tweet''.
+\item[tokenASName/tokenAnnotationType] the annotation set and type representing
+ ``tokens'', i.e. the atomic units that users will be asked to select when
+ marking annotations. The token annotations should completely cover all the
+ non-whitespace characters within every snippet, and when presented to the
+ user the tokens will be rendered with a single space between each pair. In
+ the vast majority of cases, the default value of ``Token'' will be
+ the appropriate one to use.
+\item[entityASName/entityAnnotationType] the annotation set and type
+ representing the annotations that the user is being asked to create. Any
+ already-existing annotations of this type can be treated as gold-standard
+ data.
+\item[goldFeatureName/goldFeatureValue] a feature name/value pair that is used
+ to mark snippets that should become gold units in the job. Any snippet
+ annotation that has the matching feature is considered gold, and its
+ contained entity annotations are used to construct the correct answer. Note
+ that it is possible for the correct answer to be that the snippet contains
+ \emph{no} annotations, which is why we need an explicit trigger for gold
+ snippets rather than simply marking as gold any snippet that contains at
+ least one pre-annotated entity. The default trigger feature is
+ \verb!gold=yes!.
+\item[jobId] the unique identifier of the CrowdFlower job that is to be
+ populated. This parameter is filled in automatically when you create a job
+ with the dialog described above.
+\end{description}
+
+When executed, the PR will create one unit from each snippet annotation in the
+corpus and store the ID of the newly created unit on the annotation as a
+feature named for the \verb!entityAnnotationType! with \verb!_unit_id! appended
+to the end (e.g. \verb!Person_unit_id!). This allows you to build several
+different jobs from the same set of documents for different types of
+annotation.
+
+\subsect[sec:crowd:annotation:import]{Importing the results}
+
+Once you have populated your job and gathered judgments from human workers, you
+can use the ``Entity Annotation Results Importer'' PR to turn those
+judgments back into GATE annotations in your original documents.
+
+As with the job builder, the results importer PR has just one initialization
+parameter, which is your CrowdFlower API key, and the following runtime
+parameters:
+\begin{description}
+\item[jobId] the ID of the CrowdFlower job whose results are being imported
+ (copy the value from the corresponding job builder PR).
+\item[resultASName/resultAnnotationType] the annotation set and type where
+ annotations corresponding to the judgments of your annotators should be
+ created. This annotation type \emph{must} be the same as the
+ \verb!entityAnnotationType! you specified when creating the job, since the
+ ``\texttt{\emph{resultAnnotationType}\_unit\_id}'' feature provides the link
+ between the snippet and its corresponding CrowdFlower unit.
+\item[snippetASName/snippetAnnotationType] the annotation set and type
+ containing the snippets whose results are to be imported. Each snippet
+ annotation must have an appropriate unit ID feature.
+\item[tokenASName/tokenAnnotationType] the annotation set and type representing
+ tokens. The encoding of results from CrowdFlower is based on the order of
+ the tokens within each snippet, so it is imperative that the tokens used to
+ import the results are the same as those used to create the units in the
+ first place (or at least, that there are the same \emph{number} of tokens
+ in the same order within each snippet as there were when the unit was
+ created).
+\end{description}
+
+When run, the results importer PR will call the CrowdFlower REST API to
+retrieve the list of judgments for each unit in turn, and then create
+annotations of the target type in the target annotation set (as configured by
+the ``result'' runtime parameters) for each judgment, matching the tokens that
+the annotator selected. A run of adjacent tokens will be treated as a single
+annotation spanning from the start of the first to the end of the last token in
+the sequence. Each generated annotation will have the following features:
+\begin{description}
+\item[cf\_judgment] the ``judgment ID'' -- the unique identifier assigned to
+ this judgment by CrowdFlower.
+\item[worker\_id] the CrowdFlower identifier for the worker who provided this
+ judgment. There is no way to track this back directly to a specific human
+ being, but it is guaranteed that two judgments with the same worker ID were
+ performed by the same person.
+\item[trust] the worker's ``trust score'' assigned by CrowdFlower based on the
+ proportion of this job's gold units they answered correctly. The higher the
+ score, the more reliable this worker's judgments.
+\end{description}
+
+Since each generated annotation tracks the judgment ID it was created from,
+this PR is idempotent -- if you run it again over the same corpus, new
+annotations will be created only for new judgments; you will not get duplicate
+annotations for judgments that have already been processed.
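
This idempotence boils down to remembering which judgment IDs have already
produced annotations. A sketch of that bookkeeping (Python for illustration;
the data shapes are illustrative, not the plugin's internal representation):

```python
def import_new_judgments(existing_ids, fetched):
    """Return only the fetched judgments whose cf_judgment ID has not
    already produced an annotation, mirroring the importer's idempotent
    behaviour on repeated runs."""
    return [j for j in fetched if j["cf_judgment"] not in existing_ids]

existing = {101, 102}          # judgment IDs seen on a previous run
fetched = [
    {"cf_judgment": 101, "worker_id": "w1"},  # already imported, skipped
    {"cf_judgment": 103, "worker_id": "w2"},  # new judgment, imported
]
new = import_new_judgments(existing, fetched)
```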
+
+% vim:ft=tex
Added: userguide/trunk/example-cf-annotation-job.png
===================================================================
(Binary files differ)
Index: userguide/trunk/example-cf-annotation-job.png
===================================================================
--- userguide/trunk/example-cf-annotation-job.png 2014-02-24 17:30:09 UTC
(rev 17412)
+++ userguide/trunk/example-cf-annotation-job.png 2014-02-24 17:30:43 UTC
(rev 17413)
Property changes on: userguide/trunk/example-cf-annotation-job.png
___________________________________________________________________
Added: svn:mime-type
## -0,0 +1 ##
+image/png
\ No newline at end of property
Added: userguide/trunk/new-annotation-job-dialog.png
===================================================================
(Binary files differ)
Index: userguide/trunk/new-annotation-job-dialog.png
===================================================================
--- userguide/trunk/new-annotation-job-dialog.png 2014-02-24 17:30:09 UTC
(rev 17412)
+++ userguide/trunk/new-annotation-job-dialog.png 2014-02-24 17:30:43 UTC
(rev 17413)
Property changes on: userguide/trunk/new-annotation-job-dialog.png
___________________________________________________________________
Added: svn:mime-type
## -0,0 +1 ##
+image/png
\ No newline at end of property
Added: userguide/trunk/new-classification-job-dialog.png
===================================================================
(Binary files differ)
Index: userguide/trunk/new-classification-job-dialog.png
===================================================================
--- userguide/trunk/new-classification-job-dialog.png 2014-02-24 17:30:09 UTC
(rev 17412)
+++ userguide/trunk/new-classification-job-dialog.png 2014-02-24 17:30:43 UTC
(rev 17413)
Property changes on: userguide/trunk/new-classification-job-dialog.png
___________________________________________________________________
Added: svn:mime-type
## -0,0 +1 ##
+image/png
\ No newline at end of property
Modified: userguide/trunk/tao_main.tex
===================================================================
--- userguide/trunk/tao_main.tex 2014-02-24 17:30:09 UTC (rev 17412)
+++ userguide/trunk/tao_main.tex 2014-02-24 17:30:43 UTC (rev 17413)
@@ -680,6 +680,7 @@
\input{parsers} %final for book
\input{machine-learning} %final for book
\input{alignment} %final for book
+\input{crowdsourcing}
\input{uima} %final for book
\input{misc-creole} %final for book
_______________________________________________
GATE-cvs mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/gate-cvs