RE: CfC: to add Speech API to Charter; deadline January 24

2012-01-23 Thread Deborah Dahl
Hi Art,
That's a very good point about IP commitments. I think it's likely to speed
up the process of getting something standardized if companies don't have to
make the broad IP commitments to all of a WG's activities that would be
required if the work was entirely done within an existing WG. 
As far as existing Working Groups go, I think that the Voice Browser WG
would be a better choice than the MMIWG, because the HTML-Speech work is
focused on the details of a speech-specific API, which is the expertise of
the Voice Browser WG. However, I think a new group would be better, because
the group could concentrate entirely on the HTML-Speech work and would not
have to prioritize it against other specs. Also, both VB and MMI are
member-confidential, and it would be easier to work with a joint WebApps
task force in a new, public WG. 
Regards,
Debbie


> -Original Message-
> From: Arthur Barstow [mailto:art.bars...@nokia.com]
> Sent: Monday, January 23, 2012 12:39 PM
> To: ext Charles McCathieNevile; Glen Shires; Deborah Dahl; Scott McGlashan;
> Kazuyuki Ashimura
> Cc: public-webapps; public-xg-htmlspe...@w3.org
> Subject: Re: CfC: to add Speech API to Charter; deadline January 24
> 
> On 1/23/12 12:17 PM, ext Charles McCathieNevile wrote:
> > On Fri, 20 Jan 2012 18:37:35 +0100, Glen Shires 
> > wrote:
> >
> >> 2. WebApps provides a balanced web-centric view for new JavaScript APIs.
> >>  The XG group consisted of a large number of speech experts, but only
> >> a few with broad web API expertise. We believe the formation of a new WG
> >> would have a similar imbalance,
> >
> > I'm not sure this is necessarily the case, and the reverse
> > possibility, that the Web Apps group would not have enough speech
> > experts, should also be considered a potential risk.
> >
> >> whereas the WebApps WG can provide valuable, balanced guidance and
> >> feedback.
> >
> > (FWIW I don't have a strong opinion on whether this is likely to be a
> > real problem as opposed to a risk, and I think this conversation helps
> > us work that out).
> 
> Another way to help us get the broadest set of stakeholders possible is
> for the Speech work to be done in a new WG, or in an existing WG with
> some speech experts (Voice Browser WG or MMI WG?), and then to create
> some type of joint task force with WebApps.
> 
> This would have the advantage that WebApps members would only have to
> make an IP commitment for the specs created by the task force (and none
> of the other WG's specs) and the other WG would not have to make an IP
> commitment for any of WebApps' other specs. (Note we are already doing
> this for the Web Intents spec and the Dev-API WG).
> 
> Is the VBWG or MMIWG interested in taking the lead on the speech spec?
> 
> -AB
> 
> 
> 
> 





RE: to add Speech API to Charter; deadline January 19

2012-01-15 Thread Deborah Dahl
It's entirely up to the Working Group that takes on this work to decide how
to proceed with prioritization. 
It's my belief that they would be interested in any public comments on the
proposals and the XG's prioritization, though.

> -Original Message-
> From: Dave Bernard [mailto:dbern...@intellectiongroup.com]
> Sent: Friday, January 13, 2012 3:19 PM
> To: 'Deborah Dahl'; 'Satish S'; 'Young, Milan'
> Cc: 'Arthur Barstow'; 'public-webapps'; public-xg-htmlspe...@w3.org
> Subject: RE: to add Speech API to Charter; deadline January 19
> 
> Based on the link below, it looks like there is already a prioritized list.
> So what *could* happen next is that the Strong Interest items would be
> designated "good enough" for the first pass; what then?
> 
> -Original Message-
> From: Deborah Dahl [mailto:d...@conversational-technologies.com]
> Sent: Friday, January 13, 2012 2:17 PM
> To: dbern...@intellectiongroup.com; 'Satish S'; 'Young, Milan'
> Cc: 'Arthur Barstow'; 'public-webapps'; public-xg-htmlspe...@w3.org
> Subject: RE: to add Speech API to Charter; deadline January 19
> 
> How prioritization works in practice depends on how a specific Working
> Group
> decides to organize its work, but generally, the W3C is very
> consensus-oriented and tries to make sure that all opinions are respected.
> 
> > -Original Message-
> > From: Dave Bernard [mailto:dbern...@intellectiongroup.com]
> > Sent: Friday, January 13, 2012 1:39 PM
> > To: 'Deborah Dahl'; 'Satish S'; 'Young, Milan'
> > Cc: 'Arthur Barstow'; 'public-webapps'; public-xg-htmlspe...@w3.org
> > Subject: RE: to add Speech API to Charter; deadline January 19
> >
> > Deborah-
> >
> > So how would a good "democratic" prioritization work, in practice? Is
> > that something that is rare/common in similar W3C endeavors?
> >
> > Dave
> >
> >
> > -Original Message-
> > From: Deborah Dahl [mailto:d...@conversational-technologies.com]
> > Sent: Friday, January 13, 2012 12:00 PM
> > To: dbern...@intellectiongroup.com; 'Satish S'; 'Young, Milan'
> > Cc: 'Arthur Barstow'; 'public-webapps'; public-xg-htmlspe...@w3.org
> > Subject: RE: to add Speech API to Charter; deadline January 19
> >
> > I agree that getting "good enough" out there sooner is an excellent
> > goal, although in practice there's always a lot of room for
> > disagreement about what's "good enough".
> > There isn't a draft priority list now, although the XG final report
> > does include prioritized requirements [1]. However, the requirements
> > in the list
> > are just prioritized into very general classes, like "strong
> > interest", so they only provide a general guide to possible priorities
> > for the standardization work.
> >
> > [1]
> > http://www.w3.org/2005/Incubator/htmlspeech/XGR-htmlspeech-20111206/#prioritized
> 
> >
> > > -Original Message-
> > > From: Dave Bernard [mailto:dbern...@intellectiongroup.com]
> > > Sent: Friday, January 13, 2012 11:14 AM
> > > To: 'Deborah Dahl'; 'Satish S'; 'Young, Milan'
> > > Cc: 'Arthur Barstow'; 'public-webapps'; public-xg-htmlspe...@w3.org
> > > Subject: RE: to add Speech API to Charter; deadline January 19
> > >
> > > Deborah-
> > >
> > > Is there a draft priority list in existence? I like the idea of
> > > getting "good enough" out there sooner, especially as an implementer
> > > with real projects in the space.
> > >
> > > Dave
> > >
> > > -Original Message-
> > > From: Deborah Dahl [mailto:d...@conversational-technologies.com]
> > > Sent: Friday, January 13, 2012 10:43 AM
> > > To: 'Satish S'; 'Young, Milan'
> > > Cc: 'Arthur Barstow'; 'public-webapps'; public-xg-htmlspe...@w3.org
> > > Subject: RE: to add Speech API to Charter; deadline January 19
> > >
> > > Olli has a good point that it makes sense to implement the SpeechAPI
> > > in pieces. That doesn't mean that the WebApps WG only has to look at
> > > one proposal in deciding how to proceed with the work. Another
> > > option would be to start off the Speech API work in the Web Apps
> > > group with both proposals (the Google proposal and the SpeechXG
> > > rep

RE: to add Speech API to Charter; deadline January 19

2012-01-13 Thread Deborah Dahl
How prioritization works in practice depends on how a specific Working Group
decides to organize its work, but generally, the W3C is very
consensus-oriented and tries to make sure that all opinions are respected.

> -Original Message-
> From: Dave Bernard [mailto:dbern...@intellectiongroup.com]
> Sent: Friday, January 13, 2012 1:39 PM
> To: 'Deborah Dahl'; 'Satish S'; 'Young, Milan'
> Cc: 'Arthur Barstow'; 'public-webapps'; public-xg-htmlspe...@w3.org
> Subject: RE: to add Speech API to Charter; deadline January 19
> 
> Deborah-
> 
> So how would a good "democratic" prioritization work, in practice? Is that
> something that is rare/common in similar W3C endeavors?
> 
> Dave
> 
> 
> -Original Message-
> From: Deborah Dahl [mailto:d...@conversational-technologies.com]
> Sent: Friday, January 13, 2012 12:00 PM
> To: dbern...@intellectiongroup.com; 'Satish S'; 'Young, Milan'
> Cc: 'Arthur Barstow'; 'public-webapps'; public-xg-htmlspe...@w3.org
> Subject: RE: to add Speech API to Charter; deadline January 19
> 
> I agree that getting "good enough" out there sooner is an excellent goal,
> although in practice there's always a lot of room for disagreement about
> what's "good enough".
> There isn't a draft priority list now, although the XG final report does
> include prioritized requirements [1]. However, the requirements in the list
> are just prioritized into very general classes, like "strong interest", so
> they only provide a general guide to possible priorities for the
> standardization work.
> 
> [1]
> http://www.w3.org/2005/Incubator/htmlspeech/XGR-htmlspeech-20111206/#prioritized
> 
> > -Original Message-
> > From: Dave Bernard [mailto:dbern...@intellectiongroup.com]
> > Sent: Friday, January 13, 2012 11:14 AM
> > To: 'Deborah Dahl'; 'Satish S'; 'Young, Milan'
> > Cc: 'Arthur Barstow'; 'public-webapps'; public-xg-htmlspe...@w3.org
> > Subject: RE: to add Speech API to Charter; deadline January 19
> >
> > Deborah-
> >
> > Is there a draft priority list in existence? I like the idea of
> > getting "good enough" out there sooner, especially as an implementer
> > with real projects in the space.
> >
> > Dave
> >
> > -Original Message-
> > From: Deborah Dahl [mailto:d...@conversational-technologies.com]
> > Sent: Friday, January 13, 2012 10:43 AM
> > To: 'Satish S'; 'Young, Milan'
> > Cc: 'Arthur Barstow'; 'public-webapps'; public-xg-htmlspe...@w3.org
> > Subject: RE: to add Speech API to Charter; deadline January 19
> >
> > Olli has a good point that it makes sense to implement the SpeechAPI
> > in pieces. That doesn't mean that the WebApps WG only has to look at
> > one proposal in deciding how to proceed with the work. Another option
> > would be to start off the Speech API work in the Web Apps group with
> > both proposals (the Google proposal and the SpeechXG report) and let
> > the editors prioritize the order that the different aspects of the API
> > are worked out and published as specs.
> >
> > > -Original Message-
> > > From: Satish S [mailto:sat...@google.com]
> > > Sent: Thursday, January 12, 2012 5:01 PM
> > > To: Young, Milan
> > > Cc: Arthur Barstow; public-webapps; public-xg-htmlspe...@w3.org
> > > Subject: Re: to add Speech API to Charter; deadline January 19
> > >
> > > Milan,
> > > It looks like we fundamentally agree on several things:
> > > *  That we'd like to see the JavaScript Speech API included in the
> > >    WebApps' charter.
> > > *  That we believe the wire protocol is best suited for another
> > >    organization, such as IETF.
> > > *  That we believe the markup bindings may be excluded.
> > > Our only difference seems to be whether to start with the extensive
> > > Javascript API proposed in [1] or the simplified subset of it
> > > proposed in [2], which supports the majority of the use cases in the
> > > XG’s Final Report.
> > >
> > > Art Barstow asked for “a relatively specific proposal” and provided
> > > some precedence examples regarding the level of detail. [3] Olli
> > > Pettay wrote in [4] “Since from practical point of view the
> > > API+protocol XG defined is a huge thing to implement at once, it makes
> > > sense to implement it in pieces.”
> > > Sta

RE: to add Speech API to Charter; deadline January 19

2012-01-13 Thread Deborah Dahl
I agree that getting "good enough" out there sooner is an excellent goal,
although in practice there's always a lot of room for disagreement about
what's "good enough".
There isn't a draft priority list now, although the XG final report does
include prioritized requirements [1]. However, the requirements in the list
are just prioritized into very general classes, like "strong interest", so
they only provide a general guide to possible priorities for the
standardization work.

[1]
http://www.w3.org/2005/Incubator/htmlspeech/XGR-htmlspeech-20111206/#prioritized

> -Original Message-
> From: Dave Bernard [mailto:dbern...@intellectiongroup.com]
> Sent: Friday, January 13, 2012 11:14 AM
> To: 'Deborah Dahl'; 'Satish S'; 'Young, Milan'
> Cc: 'Arthur Barstow'; 'public-webapps'; public-xg-htmlspe...@w3.org
> Subject: RE: to add Speech API to Charter; deadline January 19
> 
> Deborah-
> 
> Is there a draft priority list in existence? I like the idea of getting
> "good enough" out there sooner, especially as an implementer with real
> projects in the space.
> 
> Dave
> 
> -Original Message-
> From: Deborah Dahl [mailto:d...@conversational-technologies.com]
> Sent: Friday, January 13, 2012 10:43 AM
> To: 'Satish S'; 'Young, Milan'
> Cc: 'Arthur Barstow'; 'public-webapps'; public-xg-htmlspe...@w3.org
> Subject: RE: to add Speech API to Charter; deadline January 19
> 
> Olli has a good point that it makes sense to implement the SpeechAPI in
> pieces. That doesn't mean that the WebApps WG only has to look at one
> proposal in deciding how to proceed with the work. Another option would be
> to start off the Speech API work in the Web Apps group with both proposals
> (the Google proposal and the SpeechXG report) and let the editors
> prioritize the order that the different aspects of the API are worked
> out and published as specs.
> 
> > -Original Message-
> > From: Satish S [mailto:sat...@google.com]
> > Sent: Thursday, January 12, 2012 5:01 PM
> > To: Young, Milan
> > Cc: Arthur Barstow; public-webapps; public-xg-htmlspe...@w3.org
> > Subject: Re: to add Speech API to Charter; deadline January 19
> >
> > Milan,
> > It looks like we fundamentally agree on several things:
> > *  That we'd like to see the JavaScript Speech API included in the
> >    WebApps' charter.
> > *  That we believe the wire protocol is best suited for another
> >    organization, such as IETF.
> > *  That we believe the markup bindings may be excluded.
> > Our only difference seems to be whether to start with the extensive
> > Javascript API proposed in [1] or the simplified subset of it proposed
> > in [2], which supports the majority of the use cases in the XG’s Final
> > Report.
> >
> > Art Barstow asked for “a relatively specific proposal” and provided
> > some precedence examples regarding the level of detail. [3] Olli
> > Pettay wrote in [4] “Since from practical point of view the
> > API+protocol XG defined is a huge thing to implement at once, it makes
> > sense to implement it in pieces.”
> > Starting with a baseline that supports the majority of use cases will
> > accelerate implementation, interoperability testing, standardization
> > and ultimately developer adoption.
> > Cheers
> > Satish
> >
> > [1] http://www.w3.org/2005/Incubator/htmlspeech/XGR-htmlspeech/
> > [2] http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/att-1696/speechapi.html
> > [3] http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/1474.html
> > [4] http://lists.w3.org/Archives/Public/public-webapps/2012JanMar/0068.html
> >
> > On Thu, Jan 12, 2012 at 5:46 PM, Young, Milan wrote:
> > >
> > > I've made the point a few times now, and would appreciate a response.
> > > Why are we preferring to seed WebApps speech with [2] when we
> > > already have [3] that represents industry consensus as of a month
> > > ago (Google notwithstanding)?  Proceeding with [2] would almost
> > > surely delay the resulting specification as functionality would be
> > > patched and haggled over to meet consensus.
> > >
> > > My counter proposal is to open the HTML/speech marriage in WebApps
> > > essentially where we left off at [3].  The only variants being: 1)
> > > Dropping the markup bindings in sections 7.1.2/7.1.3 because its
> > > primary supporter has since expressed non-interest, and 2) Spin the
> > > protocol specification in 7.2 out to the IETF.  If I

RE: to add Speech API to Charter; deadline January 19

2012-01-13 Thread Deborah Dahl
Olli has a good point that it makes sense to implement the SpeechAPI in
pieces. That doesn't mean that the WebApps WG only has to look at one
proposal in deciding how to proceed with the work. Another option would be
to start off the Speech API work in the Web Apps group with both proposals
(the Google proposal and the SpeechXG report) and let the editors prioritize
the order that the different aspects of the API are worked out and published
as specs.
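
As a rough illustration of the kind of JavaScript Speech API being discussed,
here is a minimal usage sketch. It is not taken from either proposal: the
interface names (SpeechRecognition, onresult, start, createRecognizer) are
assumptions chosen only to make the discussion concrete, loosely modeled on
the shape such an API could take.

```typescript
// Hypothetical usage sketch; SpeechRecognition, onresult, start, etc.
// are illustrative names, not identifiers from either proposal.
interface SpeechResult {
  transcript: string;   // recognized text
  confidence: number;   // recognizer confidence, 0..1
}

interface SpeechRecognition {
  lang: string;
  onresult: ((results: SpeechResult[]) => void) | null;
  onerror: ((message: string) => void) | null;
  start(): void;
  stop(): void;
}

// Assume the browser exposes some way to obtain a recognizer.
declare const createRecognizer: () => SpeechRecognition;

const recognizer = createRecognizer();
recognizer.lang = "en-US";

// Fill a form field with the top recognition hypothesis.
recognizer.onresult = (results) => {
  const field = document.querySelector<HTMLInputElement>("#date");
  if (field && results.length > 0) {
    field.value = results[0].transcript;
  }
};

recognizer.onerror = (message) => console.warn("recognition failed:", message);
recognizer.start();  // typically triggered by a user gesture, e.g. a click
```

Either starting point could support usage of roughly this shape; the question
in the thread is how much of the XG report's functionality the first
specification should cover.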

> -Original Message-
> From: Satish S [mailto:sat...@google.com]
> Sent: Thursday, January 12, 2012 5:01 PM
> To: Young, Milan
> Cc: Arthur Barstow; public-webapps; public-xg-htmlspe...@w3.org
> Subject: Re: to add Speech API to Charter; deadline January 19
> 
> Milan,
> It looks like we fundamentally agree on several things:
> *  That we'd like to see the JavaScript Speech API included in the
>    WebApps' charter.
> *  That we believe the wire protocol is best suited for another
>    organization, such as IETF.
> *  That we believe the markup bindings may be excluded.
> Our only difference seems to be whether to start with the extensive
> Javascript API proposed in [1] or the simplified subset of it proposed
> in [2], which supports the majority of the use cases in the XG’s Final
> Report.
> 
> Art Barstow asked for “a relatively specific proposal” and provided
> some precedence examples regarding the level of detail. [3]
> Olli Pettay wrote in [4] “Since from practical point of view the
> API+protocol XG defined is a huge thing to implement at once, it makes
> sense to implement it in pieces.”
> Starting with a baseline that supports the majority of use cases will
> accelerate implementation, interoperability testing, standardization
> and ultimately developer adoption.
> Cheers
> Satish
> 
> [1] http://www.w3.org/2005/Incubator/htmlspeech/XGR-htmlspeech/
> [2] http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/att-1696/speechapi.html
> [3] http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/1474.html
> [4] http://lists.w3.org/Archives/Public/public-webapps/2012JanMar/0068.html
>
> On Thu, Jan 12, 2012 at 5:46 PM, Young, Milan wrote:
> >
> > I've made the point a few times now, and would appreciate a response.
> > Why are we preferring to seed WebApps speech with [2] when we already
> > have [3] that represents industry consensus as of a month ago (Google
> > notwithstanding)?  Proceeding with [2] would almost surely delay the
> > resulting specification as functionality would be patched and haggled
> > over to meet consensus.
> >
> > My counter proposal is to open the HTML/speech marriage in WebApps
> > essentially where we left off at [3].  The only variants being: 1)
> > Dropping the markup bindings in sections 7.1.2/7.1.3 because its primary
> > supporter has since expressed non-interest, and 2) Spin the protocol
> > specification in 7.2 out to the IETF.  If I need to formalize all of
> > this in a document, please let me know.
> >
> > Thank you
> >
> > [3] http://www.w3.org/2005/Incubator/htmlspeech/XGR-htmlspeech/
> >
> >
> >
> > -Original Message-
> > From: Arthur Barstow [mailto:art.bars...@nokia.com]
> > Sent: Thursday, January 12, 2012 4:31 AM
> > To: public-webapps
> > Cc: public-xg-htmlspe...@w3.org
> > Subject: CfC: to add Speech API to Charter; deadline January 19
> >
> > Glen Shires and some others at Google proposed [1] that WebApps add
> > Speech API to WebApps' charter and they put forward the Speech
> > Javascript API Specification [2] as a starting point. Members of
> > Mozilla and Nuance have voiced various levels of support for this
> > proposal. As such, this is a Call for Consensus to add Speech API to
> > WebApps' charter.
> >
> > Positive response to this CfC is preferred and encouraged and silence
> > will be considered as agreeing with the proposal. The deadline for
> > comments is January 19 and all comments should be sent to
> > public-webapps at w3.org.
> >
> > -AB
> >
> > [1]
> > http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/1696.html
> > [2]
> > http://lists.w3.org/Archives/Public/public-webapps/2011OctDec/att-1696/speechapi.html
> >
> >





RE: Overview of W3C technologies for mobile Web applications

2011-02-24 Thread Deborah Dahl
Hi Dom,
This looks like a very useful document. 
On the voice/multimodal side, in addition to the HTML-Speech XG, you will 
definitely want to add some of the Voice Browser Working Group and Multimodal 
Interaction Working Group specs, specifically:
1. Multimodal Architecture and Interfaces, for integrating multiple modalities 
into an application
http://www.w3.org/TR/mmi-arch/
2. InkML for representing traces from pointing devices (stylus, finger, mouse)
http://www.w3.org/TR/InkML/
Also see an interesting prototype for displaying and capturing traces in a web 
browser at
http://lists.w3.org/Archives/Public/www-multimodal/2011Feb/0004.html
3. EMMA for representing user inputs from different modalities (for example, 
speech, ink, haptics, biometrics)
http://www.w3.org/TR/emma/
4. VoiceXML (especially VoiceXML 3.0) for speech interaction
http://www.w3.org/TR/voicexml30/

Regards,
Debbie Dahl

> -Original Message-
> From: public-html-requ...@w3.org [mailto:public-html-requ...@w3.org] On
> Behalf Of Dominique Hazael-Massieux
> Sent: Thursday, February 24, 2011 10:04 AM
> To: public-webapps
> Subject: Overview of W3C technologies for mobile Web applications
> 
> (bcc to public-html and public-device-apis; please follow-up on
> public-webapps)
> 
> Hi,
> 
> As part of a European research project I'm involved in [1], I've
> compiled a report on the existing technologies in development (or in
> discussion) at W3C for building Web applications and that are
> particularly relevant on mobile devices:
> http://www.w3.org/2011/02/mobile-web-app-state.html
> 
> It is meant as a picture of the current state as of today, based on my
> own (necessarily limited) knowledge of the specifications and their
> current implementations.
> 
> I'm very much looking for feedback on the document, the mistakes it most
> probably contains, its overall organization, its usefulness.
> 
> I can also look into moving it in a place where a larger community could
> edit it (dvcs.w3.org, or www.w3.org/wiki/ for instance) if anyone is
> interested in contributing.
> 
> I'll likely publish regular updates to the document (e.g. every 3
> months?), esp. if it helps sufficiently many people to understand our
> current ongoing activities in this space.
> 
> Thanks,
> 
> Dom
> 
> 1. http://mobiwebapp.eu/
> 
> 





RE: Multimodal Interaction WG questions for WebApps (especially WebAPI)

2009-10-23 Thread Deborah Dahl
That's very interesting, thanks! 

> -Original Message-
> From: w3c-mmi-wg-requ...@w3.org 
> [mailto:w3c-mmi-wg-requ...@w3.org] On Behalf Of Jonas Sicking
> Sent: Friday, October 23, 2009 4:18 PM
> To: Deborah Dahl
> Cc: ingmar.kli...@telekom.de; olli.pet...@helsinki.fi; 
> public-webapps@w3.org; w3c-mmi...@w3.org
> Subject: Re: Multimodal Interaction WG questions for WebApps 
> (especially WebAPI)
> 
> On Fri, Oct 23, 2009 at 11:17 AM, Deborah Dahl
>  wrote:
> > Just a quick follow-up about WebSockets -- do you have
> > any sense of when implementations might start to
> > be available in browsers?
> 
> There's a patch for Firefox already. It'll probably take in the order
> of a couple of weeks to get it reviewed and landed. And initially
> we'll land it preffed off so that we can more easily change the
> implementation in case the spec changes.
> 
> / Jonas
> 
> 




RE: Multimodal Interaction WG questions for WebApps (especially WebAPI)

2009-10-23 Thread Deborah Dahl
Just a quick follow-up about WebSockets -- do you have
any sense of when implementations might start to
be available in browsers?

> -Original Message-
> From: w3c-mmi-wg-requ...@w3.org 
> [mailto:w3c-mmi-wg-requ...@w3.org] On Behalf Of 
> ingmar.kli...@telekom.de
> Sent: Friday, October 23, 2009 10:08 AM
> To: olli.pet...@helsinki.fi
> Cc: public-webapps@w3.org; w3c-mmi...@w3.org
> Subject: Re: Multimodal Interaction WG questions for WebApps 
> (especially WebAPI)
> 
> Olli,
> 
> thanks for pointing this out. The Multimodal WG has looked into what's
> available on WebSockets, and indeed it seems to be a good candidate to
> be used as a transport mechanism for distributed multimodal
> applications.
> 
> -- Ingmar. 
> 
> > -Original Message-
> > From: Olli Pettay [mailto:olli.pet...@helsinki.fi] 
> > Sent: Thursday, September 24, 2009 10:19 AM
> > To: Deborah Dahl
> > Cc: public-webapps@w3.org; 'Kazuyuki Ashimura'
> > Subject: Re: Multimodal Interaction WG questions for WebApps 
> > (especially WebAPI)
> > 
> > On 9/24/09 4:51 PM, Deborah Dahl wrote:
> > > Hello WebApps WG,
> > >
> > > The Multimodal Interaction Working Group is working on
> > > specifications that will support distributed applications that
> > > include inputs from different modalities, such as speech, graphics
> > > and handwriting. We believe there's some applicability of specific
> > > WebAPI specs such as XMLHttpRequest and Server-sent Events to our
> > > use cases and we're hoping to get some comments/feedback/suggestions
> > > from you.
> > >
> > > Here's a brief overview of how Multimodal Interaction and WebAPI
> > > specs might interact.
> > >
> > > The Multimodal Architecture [1] is a loosely coupled architecture
> > > for multimodal user interfaces, which allows for co-resident and
> > > distributed implementations. The aim of this design is to provide a
> > > general and flexible framework providing interoperability among
> > > modality-specific components from different vendors - for example,
> > > speech recognition from one vendor and handwriting recognition from
> > > another. This framework focuses on providing a general means for
> > > allowing these components to communicate with each other, plus basic
> > > infrastructure for application control and platform services.
> > >
> > > The basic components of an application conforming to the Multimodal
> > > Architecture are (1) a set of components which provide
> > > modality-related services, such as GUI interaction, speech
> > > recognition and handwriting recognition, as well as more specialized
> > > modalities such as biometric input, and (2) an Interaction Manager
> > > which coordinates inputs from different modalities with the goal of
> > > providing a seamless and well-integrated multimodal user experience.
> > > One use case of particular interest is a distributed one, in which a
> > > server-based Interaction Manager (using, for example SCXML [2])
> > > controls a GUI component based on a (mobile or desktop) web browser,
> > > along with a distributed speech recognition component. "Authoring
> > > Applications for the Multimodal Architecture" [3] describes this
> > > type of an application in more detail. If, for example, speech
> > > recognition is distributed, the Interaction Manager receives results
> > > from the recognizer and will need to inform the browser of a spoken
> > > user input so that the graphical user interface can reflect that
> > > information. For example, the user might say "November 2, 2009" and
> > > that information would be displayed in a text field in the browser.
> > > However, this requires that the server be able to send an event to
> > > the browser to tell it to update the display. Current
> > > implementations do this by having the browser poll the server for
> > > possible updates on a frequent basis, but we believe that a better
> > > approach would be for the browser to actually be able to receive
> > > events from the server.
> > > So our main question is, what mechanisms are or will be available to
> > > support efficient communication among distributed components (for
> > > example, speech recognizers, interaction managers, and web browsers)
> > > that interact to create a multimodal application (hence our interest
> > > in server-sent events and XMLHttpRequest)?
> > 
> > I believe WebSockets could work a lot better than XHR or server-sent
> > events. IM would be a WebSocket server and it would have
> > bi-directional connection to modality components.
> > 
> > -Olli
> > 
> > 
> > > [1] MMI Architecture: http://www.w3.org/TR/mmi-arch/
> > > [2] SCXML: http://www.w3.org/TR/scxml/
> > > [3] MMI Example: http://www.w3.org/TR/mmi-auth/
> > >
> > > Regards,
> > >
> > > Debbie Dahl
> > > MMIWG Chair
> > >
> > >
> > >
> > 
> > 
> 
> 




Multimodal Interaction WG questions for WebApps (especially WebAPI)

2009-09-24 Thread Deborah Dahl
Hello WebApps WG,

The Multimodal Interaction Working Group is working on specifications
that will support distributed applications that include inputs from
different modalities, such as speech, graphics and handwriting. We
believe there's some applicability of specific WebAPI specs such
as XMLHttpRequest and Server-sent Events to our use cases and we're
hoping to get some comments/feedback/suggestions from you.

Here's a brief overview of how Multimodal Interaction and WebAPI
specs might interact.

The Multimodal Architecture [1] is a loosely coupled architecture for
multimodal user interfaces, which allows for co-resident and distributed
implementations. The aim of this design is to provide a general and flexible
framework providing interoperability among modality-specific components from
different vendors - for example, speech recognition from one vendor and
handwriting recognition from another. This framework focuses on providing a
general means for allowing these components to communicate with each other,
plus basic infrastructure for application control and platform services.

The basic components of an application conforming to the Multimodal
Architecture are (1) a set of components which provide modality-related
services, such as GUI interaction, speech recognition and handwriting
recognition, as well as more specialized modalities such as biometric input,
and (2) an Interaction Manager which coordinates inputs from different
modalities with the goal of providing a seamless and well-integrated
multimodal user experience. One use case of particular interest is a
distributed one, in which a server-based Interaction Manager (using, for
example SCXML [2]) controls a GUI component based on a (mobile or desktop)
web browser, along with a distributed speech recognition component.
"Authoring Applications for the Multimodal Architecture" [3] describes this
type of an application in more detail. If, for example, speech recognition
is distributed, the Interaction Manager receives results from the recognizer
and will need to inform the browser of a spoken user input so that the
graphical user interface can reflect that information. For example, the user
might say "November 2, 2009" and that information would be displayed in a
text field in the browser. However, this requires that the server be able to
send an event to the browser to tell it to update the display. Current
implementations do this by having the browser poll the server for
possible updates on a frequent basis, but we believe that a better approach
would be for the browser to actually be able to receive events from the
server. 
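
As a concrete illustration of the alternative described above, here is a
minimal browser-side sketch contrasting the two patterns: polling the
Interaction Manager with XMLHttpRequest versus receiving pushed events over a
WebSocket (the approach suggested elsewhere in this thread). The endpoint
URLs and the message shape are assumptions for illustration only.

```typescript
// Hypothetical endpoints and message shape, for illustration only.
const IM_POLL_URL = "https://im.example.org/updates";
const IM_SOCKET_URL = "wss://im.example.org/events";

interface ImUpdate {
  field: string;   // id of the GUI element to update
  value: string;   // e.g. the recognized utterance "November 2, 2009"
}

function applyUpdate(update: ImUpdate): void {
  const el = document.getElementById(update.field) as HTMLInputElement | null;
  if (el) el.value = update.value;
}

// Pattern 1: the browser polls the Interaction Manager on a timer.
function pollForUpdates(): void {
  setInterval(() => {
    const xhr = new XMLHttpRequest();
    xhr.open("GET", IM_POLL_URL);
    xhr.onload = () => {
      if (xhr.status === 200 && xhr.responseText) {
        applyUpdate(JSON.parse(xhr.responseText) as ImUpdate);
      }
    };
    xhr.send();
  }, 1000); // frequent requests, even when nothing has changed
}

// Pattern 2: the Interaction Manager pushes events over a WebSocket.
function listenForUpdates(): void {
  const socket = new WebSocket(IM_SOCKET_URL);
  socket.onmessage = (event: MessageEvent<string>) => {
    applyUpdate(JSON.parse(event.data) as ImUpdate);
  };
  // The same connection can carry browser-to-IM messages, e.g. GUI events.
  socket.onopen = () => socket.send(JSON.stringify({ event: "guiReady" }));
}
```
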
So our main question is, what mechanisms are or will be available to 
support efficient communication among distributed components (for 
example, speech recognizers, interaction managers, and web browsers) 
that interact to create a multimodal application (hence our interest 
in server-sent events and XMLHttpRequest)?

[1] MMI Architecture: http://www.w3.org/TR/mmi-arch/
[2] SCXML: http://www.w3.org/TR/scxml/
[3] MMI Example: http://www.w3.org/TR/mmi-auth/

Regards,

Debbie Dahl
MMIWG Chair