On Wed, Dec 2, 2009 at 10:20 PM, Jonas Sicking jo...@sicking.cc wrote:
On Wed, Dec 2, 2009 at 11:17 AM, Bjorn Bringert bring...@google.com wrote:
I agree that being able to capture and upload audio to a server would
be useful for a lot of applications, and it could be used to do speech
I agree 100%. Still, I think the access to the mic and the speech
recognition could be separated.
--
Diogo Resende drese...@thinkdigital.pt
ThinkDigital
On Thu, 2009-12-03 at 12:06 +, Bjorn Bringert wrote:
On Wed, Dec 2, 2009 at 10:20 PM, Jonas Sicking jo...@sicking.cc wrote:
On Wed, Dec
On Thu, 03 Dec 2009 03:31:27 +0100, Kit Grose k...@iqmultimedia.com.au
wrote:
On 28/10/2009, at 1:10 PM, Aryeh Gregor wrote:
On Tue, Oct 27, 2009 at 7:40 PM, Kit Grose k...@iqmultimedia.com.au
wrote:
Can I get some sort of understanding of why this behaviour (nondescript error in
I agree. The application should be able to choose a source for speech
commands, or give the user a choice of options for a speech source. It also
provides a much better separation of APIs, allowing the development of a
speech API that doesn't depend on or interfere in any way with the
development
Can someone explain to me how this works, given Aryeh's response
above? Surely if the iPhone can determine whether it can play
a video file, other UAs could do likewise and fall back on
the content accordingly as UAs with zero video support do?
I know nothing about the iPhone,
It seems counterintuitive to me that having produced fallback content
already, I still need to use Javascript to test for compatibility
(even if I *did* generate two formats, there's obviously no guarantee
IE9 won't come out requiring WMV or a similar issue with a different
UA).
Are there
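The script-based compatibility test these messages allude to is HTMLMediaElement.canPlayType(), which returns "", "maybe", or "probably". A minimal sketch of picking a playable source that way (the MIME strings and helper are illustrative, not from the thread):

```javascript
// Pick the first source the UA reports it may be able to play.
// `player` is anything exposing canPlayType() (e.g. a <video> element);
// canPlayType() returns "", "maybe", or "probably".
function pickPlayableSource(player, candidates) {
  for (const { src, type } of candidates) {
    if (player.canPlayType(type) !== "") return src;
  }
  return null; // nothing playable: show the fallback content instead
}
```

In a page this would be called as `pickPlayableSource(document.querySelector("video"), [...])`.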
On Thu, 03 Dec 2009 14:31:41 +0100, Kornel Lesiński kor...@geekhood.net
wrote:
Can someone explain to me how this works, given Aryeh's response
above? Surely if the iPhone can determine whether it can
play a video file, other UAs could do likewise and fall back on the
content
On Thu, 03 Dec 2009 14:29:19 +0100, Kit Grose k...@iqmultimedia.com.au
wrote:
It seems counterintuitive to me that having produced fallback content
already, I still need to use Javascript to test for compatibility
(even if I *did* generate two formats, there's obviously no guarantee
IE9 won't
On 04/12/2009, at 1:13 AM, Philip Jägenstedt wrote:
I'll freely admit that the most important reason I oppose this is
because
I don't want to implement it
And I'll admit that the main reason I support it is selfish on my part
too :).
Basically I don't want to be producing OGG files (given
It's not clear to me that every possible attribute is intended to be in that
table. Autofocus, for example, is missing as well.
However, the first time I read through that table I did do a double-take.
Mike T.
On Thu, Dec 3, 2009 at 1:48 AM, Futomi Hatano i...@html5.jp wrote:
Hi all
Could
On Dec 3, 2009, at 4:06 AM, Bjorn Bringert wrote:
On Wed, Dec 2, 2009 at 10:20 PM, Jonas Sicking jo...@sicking.cc wrote:
On Wed, Dec 2, 2009 at 11:17 AM, Bjorn Bringert bring...@google.com wrote:
I agree that being able to capture and upload audio to a server would
be useful for a lot of
On Thu, Dec 3, 2009 at 4:06 AM, Bjorn Bringert bring...@google.com wrote:
On Wed, Dec 2, 2009 at 10:20 PM, Jonas Sicking jo...@sicking.cc wrote:
On Wed, Dec 2, 2009 at 11:17 AM, Bjorn Bringert bring...@google.com wrote:
I agree that being able to capture and upload audio to a server would
be
On Thu, Dec 3, 2009 at 7:32 AM, Diogo Resende drese...@thinkdigital.ptwrote:
I agree 100%. Still, I think the access to the mic and the speech
recognition could be separated.
While it would be possible to separate access to the microphone and speech
recognition, combining them allows the API
On 02.12.2009, at 23:46, Fumitoshi Ukai (鵜飼文敏) wrote:
If server sends back handshake response and a data frame, and close
immediately, fast enough to run JavaScript on browser, how
readyState should be?
I'd expect it to work in the same way it works for XMLHttpRequest -
e.g., in an
I was not thinking of raw access to the mic. I was just thinking of a
2-step method to do it so you could just do 1 step :)
I was thinking of something like:
1. Call Sound API and ask to record (maybe something like the
geolocation on Firefox [1]).
2. Pass it to
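A sketch of what that two-step flow could look like; every name in it is hypothetical, invented only to illustrate the proposal (no UA implements such an API):

```javascript
// Hypothetical two-step capture-and-upload flow. `recordAudio` does not
// exist in any UA; it stands in for step 1 (a permission prompt plus
// recording, like geolocation's getCurrentPosition), and `upload` is
// step 2. In a browser the recorder would live on e.g. navigator.
function captureAndUpload(recordAudio, upload) {
  recordAudio(
    function onRecorded(audioData) {
      upload(audioData); // step 2: hand the captured audio to the server
    },
    function onError(err) {
      // user denied the prompt, or no microphone available
    }
  );
}
```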
Hey List:
I was just following up to the composition thread:
http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-October/023706.html
Where in the spec did this more explicit definition get added?
From the thread, Ian's post.
... Safari and Chrome currently do bound composition
On 03.12.2009, at 9:50, Alexey Proskuryakov wrote:
If server sends back handshake response and a data frame, and close
immediately, fast enough to run JavaScript on browser, how
readyState should be?
I'd expect it to work in the same way it works for XMLHttpRequest -
e.g., in an
On Fri, 6 Nov 2009, Yuzo Fujishima wrote:
I see both US-ASCII and ASCII are used in:
http://tools.ietf.org/html/draft-hixie-thewebsocketprotocol-54
If they mean the same thing, one should be used consistently.
In the document, US-ASCII seems to mean the encoding while ASCII means the
charset.
On Fri, 6 Nov 2009, Yuzo Fujishima wrote:
Section 4.4 of
http://tools.ietf.org/html/draft-hixie-thewebsocketprotocol-54 specifies
how erroneous UTF-8 must be handled on the client side.
Does the same apply for the server side?
It does not; the server-side processing of these errors is
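The client-side behavior browsers settled on for erroneous UTF-8 is lenient decoding: malformed sequences are replaced with U+FFFD rather than aborting. A sketch using the WHATWG TextDecoder (a later API, used here purely for illustration):

```javascript
// Lenient UTF-8 decoding: malformed byte sequences come out as U+FFFD
// (REPLACEMENT CHARACTER) instead of raising an error. TextDecoder's
// default (non-fatal) mode does exactly this substitution.
function decodeFrame(bytes) {
  return new TextDecoder("utf-8").decode(Uint8Array.from(bytes));
}
```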
On Fri, 6 Nov 2009, Yuzo Fujishima wrote:
As far as I can read from
http://tools.ietf.org/html/draft-hixie-thewebsocketprotocol-54#section-5.2
the server should (or must?) accept requests starting with, say:
POSTSPACE/some/resourceSPACEHTTP/1.0
or, even
SPACE/some/resourceSPACE
Is
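For context, the client side of the handshake in the hixie drafts always sends a request line of the form `GET <resource> HTTP/1.1`, so a server that only speaks Web Socket could reject anything else at the request line. A hypothetical strict check:

```javascript
// Strict validation of the opening handshake line. A conforming client
// sends exactly "GET <resource> HTTP/1.1", so POST, HTTP/1.0, or a
// missing method can all be rejected outright. This is a sketch of one
// possible server-side policy, not a rule taken from the draft.
function isValidRequestLine(line) {
  return /^GET \/\S* HTTP\/1\.1$/.test(line);
}
```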
On Tue, 17 Nov 2009, Christian Biesinger wrote:
Is it intentional that it is impossible to implement this spec over an
existing HTTP stack, as currently specified?
Only to the same extent that it is intentional that it's impossible to
implement Telnet or SSH over an existing HTTP stack.
On Thu, 3 Dec 2009 09:31:18 -0500
Mike Taylor michaelaarontay...@gmail.com wrote:
It's not clear to me that every possible attribute is intended to be in that
table. Autofocus, for example, is missing as well.
Exactly.
The table seems to list attributes defined for only input.
Attributes
On Wed, 2 Dec 2009, Alexey Proskuryakov wrote:
Currently, the Web Sockets API spec says that the WebSocket.URL
attribute must just return a value that was passed to the WebSocket
constructor. This doesn't match how many other url accessors work, and
consequentially, it doesn't match what
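The contrast being drawn: other url accessors (e.g. HTMLAnchorElement.href) return a resolved absolute URL, with the host lowercased, the scheme's default port dropped, and dot segments removed, rather than the raw string that was set. A sketch of that difference using the WHATWG URL class (a later API, for illustration only):

```javascript
// What "resolving" does to a raw URL string: the host is lowercased,
// the scheme's default port is dropped, and dot segments (./ and ../)
// are removed before the absolute form is returned.
function resolveWsUrl(raw, base) {
  return new URL(raw, base).href;
}
```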
On Thu, 3 Dec 2009, Fumitoshi Ukai (鵜飼文敏) wrote:
I have a question about the task that runs Web Socket feedback from the protocol.
If server sends back handshake response and a data frame, and close
immediately, fast enough to run JavaScript on browser, how readyState
should be?
1)
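One reading of the timing question (this is exactly what the thread is debating, so treat it as a sketch, not the spec's answer): the open, message, and close events are queued as separate tasks, so a script that attaches handlers right after construction still observes readyState as CONNECTING first, then sees each event in order, even if the network side finished instantly. A toy model of those queued transitions:

```javascript
// Toy model of queued readyState transitions. The constants match the
// Web Socket API; everything else is a simulation, not a real socket.
const CONNECTING = 0, OPEN = 1, CLOSED = 3;

function simulateInstantServerClose(log) {
  let readyState = CONNECTING;
  // The server already answered and closed, but each event still gets
  // its own queued task, run only after the current script finishes.
  const queued = [
    () => { readyState = OPEN; log.push(["open", readyState]); },
    () => { log.push(["message", readyState]); },
    () => { readyState = CLOSED; log.push(["close", readyState]); },
  ];
  log.push(["after-construct", readyState]); // still CONNECTING here
  for (const task of queued) task();
  return readyState;
}
```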
On Fri, Dec 4, 2009 at 10:55 AM, Ian Hickson i...@hixie.ch wrote:
On Wed, 2 Dec 2009, Alexey Proskuryakov wrote:
Currently, the Web Sockets API spec says that the WebSocket.URL
attribute must just return a value that was passed to the WebSocket
constructor. This doesn't match how many