Re: [whatwg] BroadcastChannel in Firefox Nightly
On Wed, Jan 14, 2015 at 2:58 PM, Janusz Majnert j.majn...@samsung.com wrote:
> On 14.01.2015 14:01, Anne van Kesteren wrote:
>> Andrea just landed the last patch for the BroadcastChannel API:
>> https://bugzilla.mozilla.org/show_bug.cgi?id=966439
>> https://html.spec.whatwg.org/multipage/comms.html#broadcasting-to-other-browsing-contexts
>> Assuming everything sticks, it'll start appearing in Firefox Nightly soon. We thought we'd give everyone a heads-up that we're doing this, since it hasn't been discussed much.
> From quickly skimming the linked spec, it looks like the same functionality (minus sending objects) is already available using Web Storage:
> https://html.spec.whatwg.org/multipage/webstorage.html#the-storage-event

It is possible, but not trivial (see e.g. http://blog.fastmail.com/2012/11/26/inter-tab-communication-using-local-storage/). Paving such cowpaths is a good thing.
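For readers of the archive, a minimal sketch of the two approaches being compared above; the channel/key name 'app-bus' is illustrative, not from the thread. The Web Storage version needs the unique-envelope trick described in the FastMail post, because a `storage` event only fires when the stored value actually changes:

```javascript
// BroadcastChannel: structured clones delivered directly to other
// same-origin browsing contexts, with no persistence side effects.
function broadcastViaChannel(data) {
  const channel = new BroadcastChannel('app-bus');
  channel.postMessage(data);
  channel.close();
}

// Web Storage workaround: strings only, and a repeated value fires no
// event, so every payload is wrapped in a unique envelope.
function makeEnvelope(data) {
  return JSON.stringify({
    id: Math.random().toString(36).slice(2), // uniqueness forces a new value
    ts: Date.now(),
    data,
  });
}

function broadcastViaStorage(data) {
  localStorage.setItem('app-bus', makeEnvelope(data));
}

// Receivers (browser only): channel.onmessage = e => ...   versus
// window.addEventListener('storage', e => { if (e.key === 'app-bus') ... });
```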
Re: [whatwg] Splash screen proposal for web apps ?
On Wed, Jul 31, 2013 at 9:15 PM, Ian Hickson i...@hixie.ch wrote:
> On Wed, 31 Jul 2013, Laurent Perez wrote:
>> Is there work going on on a splash screen specification?
> What's the use case? Generally speaking, web pages load incrementally, so by the time you've downloaded an image, you should be able to just show the web page itself, at least in a state good enough for the user. (For example, even really large and expensive pages like Google+ render in a usable state quickly, even though they continue to load assets and scripts in the background and thus don't actually present an interactive UI straight away.)

On Fri, Aug 2, 2013 at 12:48 AM, Laurent Perez l.lauren...@gmail.com wrote:
> The use case is to show a "please wait, loading..." message until all resources of an index page (JS, CSS, HTML, images, fonts) are downloaded. When the message is dismissed, the index page is ready for non-blocking UI navigation, since the JS has already loaded. We plan to implement it in our own user agent, and I was wondering whether I should go the Apple meta-tag way or use the W3C Widgets spec and a web-app descriptor. I know the Widgets spec has been implemented by some (Opera, and PhoneGap to describe a hybrid application); I was wondering if work was still going on on the splash proposal.

Your exact use case is unclear as to whether you want this for web pages in general (in which case you have the problem that you need to load some resources before the page itself loads) or for some kind of packaged web application scenario. For the latter, there has been some discussion around splash screens [1] in the context of the ongoing work on the web manifest specification [2]. Check that out and direct your query to W3C SysApps [3]?

[1] https://github.com/sysapps/sysapps/issues/41
[2] http://manifest.sysapps.org
[3] http://www.w3.org/2012/sysapps/
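For context from later developments: manifest-driven splash screens did eventually ship, though via the Web App Manifest as it was later standardized, not the sysapps-era draft discussed here. A sketch of the relevant members (values illustrative): browsers such as Chrome for Android synthesize a splash screen from `name`, `background_color`, and the largest icon.

```json
{
  "name": "Example App",
  "background_color": "#2196f3",
  "theme_color": "#2196f3",
  "display": "standalone",
  "icons": [
    { "src": "icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```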
Re: [whatwg] Device proximity and light events
Scott González wrote:
> On Tuesday, May 8, 2012, Doug Turner wrote:
>> You don't. This API doesn't have device detection. Don't assume that onXXX means that the UA supports an event.
> I thought this was the preferred way to check. I seem to recall a discussion about this and agreement that this was the best way to detect events.

The WHATWG spec [1] itself says the following:

> When support for a feature is disabled (e.g. as an emergency measure to mitigate a security problem, or to aid in development, or for performance reasons), user agents must act as if they had no support for the feature whatsoever, and as if the feature was not mentioned in this specification. For example, if a particular feature is accessed via an attribute in a Web IDL interface, the attribute itself would be omitted from the objects that implement that interface — leaving the attribute on the object but making it return null or throw an exception is insufficient.

If the feature is not yet implemented, then the same principle should apply, IMO (if that's not implicit in the above).

- Rich

[1] http://www.whatwg.org/specs/web-apps/current-work/multipage/infrastructure.html#extensibility
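The extensibility rule quoted above is what makes a plain interface-presence check reliable: if a UA must omit disabled features from the interface entirely, then presence of the `on*` attribute is a trustworthy signal. A sketch (helper name and event name are illustrative):

```javascript
// Feature-detect an event by checking for its on* attribute on the target
// interface; per the quoted rule, a disabled feature must be absent, not a
// null-returning stub.
function supportsEvent(target, eventName) {
  return ('on' + eventName) in target;
}

// Browser usage (illustrative):
// if (supportsEvent(window, 'deviceproximity')) { /* wire up the handler */ }
```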
[whatwg] Cross-document messaging setup on form submit
During some experimentation today, I wanted to establish a cross-document messaging channel for when a user submits a form with a target of _blank (or _someframe). I couldn't find any native way to do this when target == _blank, so I had to produce my own workaround [1].

In the course of doing this, I wondered if HTMLFormElement.submit() could actually just return a WindowProxy object in the same way that window.open() currently does [2], to grease this whole process. If the form is blocked (i.e. Event.preventDefault has been called, the form is running in a sandboxed iframe without 'enable-forms' permission, or some other condition has prevented actual form submission from happening), then this method would return null. If the form's target == '_self', then this method could return its own WindowProxy, since the current script will simply run to completion and then get overridden by the new page load anyway.

This change would be backward compatible with all existing web content and shouldn't affect any existing web pages. So: is this a reasonable addition, are the security considerations manageable, and is it implementable?

br/ Rich

[1] http://fiddle.jshell.net/GZWYE/10/show/light/
[2] http://www.whatwg.org/specs/web-apps/current-work/multipage/browsers.html#dom-open
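For reference, the general shape of such a workaround (written from the description above, not taken from the fiddle; helper names are mine): open a named browsing context first so you already hold its WindowProxy, then point the form's target at that name.

```javascript
// Hypothetical helper: a fresh name guarantees window.open() creates a
// new browsing context rather than reusing an old one.
function uniqueTargetName(prefix) {
  return prefix + '-' + Date.now().toString(36) + '-' +
      Math.random().toString(36).slice(2);
}

function submitWithProxy(form) {
  const name = uniqueTargetName('form-target');
  const proxy = window.open('', name); // grab the WindowProxy up front
  if (!proxy) return null;             // popup blocked
  form.target = name;                  // route the submission into that context
  form.submit();
  return proxy;                        // postMessage() here once the new page signals readiness
}
```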
Re: [whatwg] Proposal to extend registerProtocolHandler
Olli Pettay wrote:
> On 07/06/2011 07:51 AM, Robert O'Callahan wrote:
>> I don't think browsers need to prompt for registerProtocolHandler. Instead, I would simply allow any site to register as a protocol handler for almost anything, and remember all such registrations.
> So all the ad sites (which are embedded via iframe or something) would just register all the possible protocols so that some user might accidentally use their site for protocol handling - especially if the site was the first one to register the protocol.

For registration, we could allow _auto-registration_ of protocol handlers only if (a) this is the first time the protocol is being registered, and (b) the registration request is coming directly from the top-most window context (i.e. from a web page that users are actually visiting). In all other cases (if you're the nth provider attempting to register an already-registered protocol handler, or you're not requesting from window.top), the UA would ask the user whether they wish to install this handler as the default for the given protocol (replacing the previous default handler).

For usage, the ideal situation would be for the user to click a link and be auto-directed somewhere without having to select from multiple providers each time. When the user wants to override the default protocol handler, the UA could allow e.g. ctrl-shift-click to force-show the protocol handler dialog. There the user would be free to select any handler that the UA has come into contact with during the course of the user's previous browsing sessions (i.e. those handlers that the user decided not to set as defaults at the time). Users should be able to easily detach protocol handlers from this list with either [delete] or [delete all handlers for this domain] on this interface.

- Rich
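As a concrete anchor for the discussion: registration itself is a one-liner, and the HTML spec already limits which schemes a page may claim, which bounds the ad-site concern somewhat. A sketch (handler URL is illustrative; the first-registration/top-window policy proposed above is not part of any shipped API):

```javascript
// The shipped call shape at the time: scheme, handler URL template, title.
// navigator.registerProtocolHandler('web+coffee',
//     'https://example.com/handle?uri=%s', 'Example Coffee Handler');

// The spec safelists a fixed set of schemes and otherwise requires a
// 'web+' prefix; a standalone check along those lines:
const SAFELISTED = new Set(['mailto', 'irc', 'magnet', 'sms', 'tel']); // subset, for illustration
function isRegisterableScheme(scheme) {
  return SAFELISTED.has(scheme) || /^web\+[a-z]+$/.test(scheme);
}
```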
Re: [whatwg] Proposal to extend registerProtocolHandler
Peter Kasting wrote:
> On Fri, Jul 1, 2011 at 1:59 PM, Ojan Vafai o...@chromium.org wrote:
>> Do any browser vendors agree with this or have objections?
> From my work on the Chrome UI side of this, I would very much like to see something like isRegistered(). This would allow sites to conditionalize requests for the protocol handler. This is important to me because I would also like to experiment (after that point) with requiring a user gesture for this request (much like browsers typically require user gestures for window.open()), so sites cannot hammer the user with requests outside of some sort of interaction-based workflow.

I've also been thinking along these lines for things that require some user interaction at the chrome level. The idea would be to emulate window.open functionality, to the point that something happens only if a _user-initiated_ click event occurs. If an event is invoked by a script synthesizing it via e.g. anchor.click(), then that should not invoke any UI. Only if window.open is invoked by a user does the popup blocker not kick in and the popup page open. It would be nice to have that same principle work for registerProtocolHandler.

FWIW, I proposed something to the effect you are describing in the W3C Contacts API [1]. window.open seems a little under-defined when it comes to the pseudo-standard behavior of blocking window.open calls outside of a user-initiated event.

br/ Rich

[1] http://www.w3.org/TR/contacts-api/#api-invocation-via-dom-events
Re: [whatwg] Proposal to extend registerProtocolHandler
Rich Tibbett wrote:
> Peter Kasting wrote:
>> On Fri, Jul 1, 2011 at 1:59 PM, Ojan Vafai o...@chromium.org wrote:
>>> Do any browser vendors agree with this or have objections?
>> From my work on the Chrome UI side of this, I would very much like to see something like isRegistered(). This would allow sites to conditionalize requests for the protocol handler. This is important to me because I would also like to experiment (after that point) with requiring a user gesture for this request (much like browsers typically require user gestures for window.open()), so sites cannot hammer the user with requests outside of some sort of interaction-based workflow.
> I've also been thinking along these lines for things that require some user interaction at the chrome level. The idea would be to emulate window.open functionality, to the point that something happens only if a _user-initiated_ click event occurs. If an event is invoked by a script synthesizing it via e.g. anchor.click(), then that should not invoke any UI. Only if window.open is invoked by a user does the popup blocker not kick in and the popup page open. It would be nice to have that same principle work for registerProtocolHandler. FWIW, I proposed something to the effect you are describing in the W3C Contacts API [1]. window.open seems a little under-defined when it comes to the pseudo-standard behavior of blocking window.open calls outside of a user-initiated event.

The behavior implied in [1] can be demoed against window.open here: http://jsfiddle.net/nRJrz/3/

Should we document this pseudo-standard UA behavior wrt window.open? It would be nice to have a reference for this behavior on which to build other interactive UI experiences in the future. Tested in Opera 11.5, Chrome 12, Firefox 5 and Safari 5.

br/ Rich

[1] http://www.w3.org/TR/contacts-api/#api-invocation-via-dom-events
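The trusted-vs-synthetic distinction being demoed here later got a standard hook: Event.isTrusted, which is false for events dispatched by script (e.g. via anchor.click()). A sketch of the gating pattern (the helper name is mine, not from the thread):

```javascript
// Wrap a UI-triggering handler so only user-initiated events get through,
// mirroring the popup-blocker heuristic described above.
function guardUserGesture(handler) {
  return (event) => {
    if (!event || !event.isTrusted) {
      return false; // synthesized events (e.g. anchor.click()) are ignored
    }
    handler(event);
    return true;
  };
}

// Browser usage (illustrative):
// link.addEventListener('click', guardUserGesture(() => window.open('/popup')));
```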
Re: [whatwg] Geolocation - Browser usability issues with regards to asking user permission
Biju wrote:
> What I want from browser vendors is to make navigator.geolocation.getCurrentPosition and navigator.geolocation.watchPosition ONLY callable from a CLICK event. I thought we all learned a lesson from window.open popups.

window.open is still entirely underspecified in standards when it comes to its behavior under trusted vs. non-trusted click event invocation. The first thing somebody would need to do is document how window.open works according to different modes of invocation. Then at least we would have a model to discuss for other similar scenarios.

What you're suggesting, then, is that we allow *trusted* click events (according to the DOM Level 3 Events definition) to trigger a non-blocking dialog to enable geolocation, in the same way that window.open only opens a popup window if it has been called from a trusted user-initiated click event. That seems logical to me, and it's something we attempted to define in a blue-sky proposal elsewhere: http://dev.w3.org/2009/dap/contacts/Overview.html#api-invocation-via-dom-events. The same model could potentially be applied to geolocation and other upcoming Web APIs.

The ability for a user to click something and receive a response that fits their workflow, instead of an async bar that the OP suggested users frequently miss, seems like something we should look into more. Some interaction that's a little like the file-picker interaction, but with a non-modal output notification.

- Rich
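A sketch of the click-gated model applied to geolocation, as discussed above. The geolocation object is injected here purely to keep the sketch self-contained (in a page it would be navigator.geolocation); function names are mine:

```javascript
// Build a click handler that only requests a position for trusted
// (user-initiated) clicks; synthetic clicks are ignored.
function makeClickToLocate(geolocation, onPosition, onError) {
  return (event) => {
    if (!event || !event.isTrusted) return false;
    geolocation.getCurrentPosition(onPosition, onError, { timeout: 10000 });
    return true;
  };
}

// Browser wiring (illustrative):
// button.addEventListener('click',
//     makeClickToLocate(navigator.geolocation, showMap, showError));
```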
Re: [whatwg] Device element and the lifetime of the stream objects
Andrei Popescu wrote:
> Hi Anne,
> On Wed, Feb 16, 2011 at 12:36 PM, Anne van Kesteren ann...@opera.com wrote:
>> On Tue, 15 Feb 2011 17:48:24 +0100, Leandro Graciá Gil leandrogra...@chromium.org wrote:
>>> All feedback will be greatly appreciated.
>> This is just a thought. Instead of acquiring a Stream object asynchronously, there always is one available, showing transparent black or some such, e.g. navigator.cameraStream. It also inherits from EventTarget. Then on the Stream object you have methods to request camera access, which triggers some asynchronous UI.
> I thought we were all trying to avoid asynchronous UI (dialogs, infobars, popups, etc.), which is a solution that does not scale very well when many different APIs require it. This was one of the main reasons for trying a different approach.

We are also trying a different approach, but we're not really coming up with anything other than modal dialogs, no-authorization models, or policies; none of which are suitable, for different reasons. One option that works is, on device click, presenting some kind of async authorization request. However, if we're going to do that then we might as well just implement a JavaScript API to call the async authorization request in the first place (in the process saving one user click).

>> Once granted, an appropriately named event is dispatched on Stream indicating you now have access to an actual stream. When the user decides it is enough and turns off the camera (or something else happens), some other appropriately named event is dispatched on Stream, turning it transparent black again. This also removes the need for the <device> element, as has been mentioned off-list. Basically, the idea was that <device> does not really help anyone.
> What do you mean exactly by this? The use cases are pretty clear.
>> It makes custom in-page UI harder, it does not prevent the need for scripting, and it does not help with fallback.
> I was never under the impression we need to prevent the need for scripting. Why is that a goal?

The <device> element requires JavaScript to do anything useful. Without JavaScript, all the authorization interfaces will still work and users will still go through them to authorize access to their device, but then... nothing will happen if the page is not running JavaScript. That's not great. There's no fallback or interaction within non-scripted environments, so an HTML element seems to be the wrong level for integration. As also mentioned above, in-page UI is harder if we force a particular style of device element (a button or otherwise) onto developers.

> This is somewhat weird though :-)

Agreed.

> And, as I said earlier, I thought the goal was to try something else than asynchronous permission dialogs. What has changed?

We've been looking at usability and user interface for <device> vs. an async JS API, and we are finding better ways to make multiple async JS APIs work in our interfaces than we are for <device> element-based authorizations. It would also be easier for prototyping if we didn't make assumptions about a <device> element. We can always jump to implementing a <device> element later on, but initially it might be good to prototype in JavaScript with vendor API prefixes in preparation for a final standard approach. We're doing due diligence on both options, so it's worth having this discussion and seeing what other implementers think.

Cheers,
Rich
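For readers of the archive: the async-JS-API direction argued for here is essentially what later shipped as getUserMedia. A sketch of that flow in its modern form (not what existed at the time of this thread); mediaDevices is injected only to keep the sketch self-contained, and in a page it is navigator.mediaDevices:

```javascript
// The "async JS API" option: one call that both triggers the authorization
// UI and, once granted, resolves with a live stream.
async function startCamera(mediaDevices, videoElement) {
  const stream = await mediaDevices.getUserMedia({ video: true, audio: false });
  videoElement.srcObject = stream; // render the live stream in a <video>
  return stream;
}

// Browser usage (illustrative):
// startCamera(navigator.mediaDevices, document.querySelector('video'));
```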
Re: [whatwg] File inputs, @accept, and expected behavior
Tab Atkins Jr. wrote:
> The file input gained the @accept attribute a little while ago, to indicate what type of file should be accepted. It has three special values: image/*, video/*, and audio/*. I believe one intent of these special values is that browsers may offer the user the ability to capture an image/video/audio with the webcam/mic and automatically set it as the value of the <input>, without the user having to create an intermediary file themselves. The spec doesn't give any indication of this, though, and I've surprised some people (browser devs, internally) when I tell them about @accept after they ask me about accessing the webcam/mic.

That is possible, yes. It's about providing a video/image/audio file, or capturing from the webcam/mic by creating an on-the-fly file to return. For explicitly requesting a webcam or microphone 'file' from a web page, we have produced the W3C Media Capture spec [1]. For streaming webcam and/or microphone, we're working on and around the <device> element.

> Could we get a note added to the File Input section describing this intention?

It's entirely an interface option for UAs to provide (e.g. [2]), but the primary intention is sharing normal video/audio/image files, so a note in the spec seems a little unnecessary, IMO.

- Rich

[1] http://www.w3.org/TR/capture-api/
[2] http://www.w3.org/TR/capture-api/#uiexamples
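A sketch of the @accept hint combined with the `capture` attribute that HTML Media Capture (the spec referenced above) attaches to file inputs, shown here with the attribute value in its later-standardized form. The document is injected so the sketch is self-contained; in a page it is the global `document`:

```javascript
// Build a file input scoped to images, hinting that the UA may offer the
// camera directly rather than only a file picker.
function makeImagePicker(doc) {
  const input = doc.createElement('input');
  input.setAttribute('type', 'file');
  input.setAttribute('accept', 'image/*');
  input.setAttribute('capture', 'environment'); // rear camera, where supported
  return input;
}

// Browser usage (illustrative):
// document.body.appendChild(makeImagePicker(document));
```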
Re: [whatwg] Pressure API?
Jens Müller wrote:
> Hi, now that device orientation, geolocation, camera etc. have been spec'ed: is there any intent to provide an API for pressure sensors? This might well be the next hip feature in smartphones... Oh, and while we are at it: humidity probably belongs to the same group.

Could this be modeled as an extension [1] to the System Info API [2]? This spec is still at the Working Draft phase. Perhaps renaming it to 'Generic Sensors API' and removing a whole bunch of the sensors included in there at the moment would be appropriate. Getting and monitoring pressure and humidity within that framework would work.

- Rich

[1] http://www.w3.org/TR/system-info-api/#extensibility
[2] http://www.w3.org/TR/system-info-api
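A purely hypothetical sketch of the generic start/reading/stop shape such a 'Generic Sensors API' might take; `PressureSensor` is not a real interface, and `value` is an invented property. (As it happens, the later W3C Generic Sensor API did adopt roughly this constructor-plus-reading-events shape.)

```javascript
// Works against any object with the generic sensor surface assumed here:
// addEventListener('reading'), start(), stop(), and a current value.
function watchSensor(sensor, onReading) {
  sensor.addEventListener('reading', () => onReading(sensor.value));
  sensor.start();
  return () => sensor.stop(); // caller invokes this to stop monitoring
}

// Hypothetical usage, if a PressureSensor interface existed:
// const stop = watchSensor(new PressureSensor(), hPa => console.log(hPa));
```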
Re: [whatwg] Live video streaming with html5.
Henri Sivonen wrote:
> In HTML5, a URL (or a set of URLs) points at what you want the user agent to display. From the spec's point of view, you can insert any protocol (that can be described by a URL) in there. You'll need it to be supported by your user agent, of course. In practice, live streaming works with HTTP and either Ogg or WebM in at least Firefox and Opera (maybe Chromium, too), since Ogg and WebM don't require the length of the video to be known in advance.

If you're interested in live streaming from your webcam, you should check out Ericsson's experimentation with <device> and <video>: https://labs.ericsson.com/blog/beyond-html5-conversational-voice-and-video-implemented-webkit-gtk

The toolchain with HTML5 technologies is <device> to <video> to Web Sockets. The HTML Device draft [1] is still very much in its infancy, but hopefully not too far around the corner.

- Rich

[1] http://dev.w3.org/html5/html-device/
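The "no length known in advance" point above is the whole trick: to the page, a live source is just an ordinary src whose container permits unbounded duration. A minimal sketch (URL and wiring illustrative):

```javascript
// Point a <video> at a live stream served over HTTP; Ogg and WebM work
// because neither container requires a duration to be declared up front.
function attachLiveStream(video, url) {
  video.src = url;        // e.g. 'http://example.com/live.webm'
  video.autoplay = true;  // start as soon as enough data has buffered
  return video;
}

// Browser usage (illustrative):
// attachLiveStream(document.querySelector('video'), streamUrl);
```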