Re: CORS performance proposal

2015-02-19 Thread Dale Harvey
> The cache would be on a per requesting origin basis as per the headers
> above. The Origin and Access-Control-Allow-Origin would not take part
> in this exchange, to make it very clear what this is about.

I don't want to conflate what could be separate proposals, but they seem
closely related. This would improve the situation by reducing the number of
preflight requests that have to be made; however, it still requires servers
to follow what is a fairly complicated process of setting up the appropriate
headers.

What if we allowed one of the response fields to denote "this url is on the
public internet, please don't bother with CORS restrictions"? The process of
setting up CORS could then be reduced to ensuring a single response returns
with the appropriate headers, and servers would no longer need to worry about
every possible header clients can send to each particular url.

(Clients would have to set a custom header to ensure the preflight
optimisation was skipped, I believe.)

This would be very much in line with how it was implemented for Flash:
http://www.adobe.com/devnet/articles/crossdomain_policy_file_spec.html
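
For reference, the Flash policy file linked above is just a small XML
document served from a well-known location on the origin; a minimal
wide-open policy (per the Adobe spec, shown here only as an illustrative
sketch) looks roughly like:

```xml
<?xml version="1.0"?>
<!-- Served from http://example.com/crossdomain.xml (example domain);
     domain="*" opts the whole origin into cross-domain access. -->
<cross-domain-policy>
  <allow-access-from domain="*"/>
</cross-domain-policy>
```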


On 19 February 2015 at 13:29, Anne van Kesteren  wrote:

> When the user agent is about to make its first preflight to an origin
> (timeout up to the user agent), it first makes a preflight that looks
> like:
>
>   OPTIONS *
>   Access-Control-Request-Origin-Wide-Cache: [origin]
>   Access-Control-Request-Method: *
>   Access-Control-Request-Headers: *
>
> If the response is
>
>   2xx
>   Access-Control-Allow-Origin-Wide-Cache: [origin]
>   Access-Control-Allow-Methods: *
>   Access-Control-Allow-Headers: *
>   Access-Control-Max-Age: [max-age]
>
> then no more preflights will be made for the duration of [max-age] (or
> shortened per user agent preference). If the response includes
>
>   Access-Control-Allow-Credentials: true
>
> the cache scope is increased to requests that include credentials.
>
> I think this has a reasonable tradeoff between security and opening up
> all the power of the HTTP APIs on the server without the performance
> hit. It still makes the developer very conscious about the various
> features involved.
>
> The cache would be on a per requesting origin basis as per the headers
> above. The Origin and Access-Control-Allow-Origin would not take part
> in this exchange, to make it very clear what this is about.
>
> (This does not affect Access-Control-Expose-Headers or any of the
> other headers required as part of non-preflight responses.)
>
>
> --
> https://annevankesteren.nl/
>
>


Re: CORS performance

2015-02-19 Thread Dale Harvey
> If the cache is against the url, and we are sending requests to different
> urls, won't requests to different urls always trigger a preflight?

I just realised my mistake: GETs without custom headers shouldn't need to
trigger preflight requests, sorry

On 19 February 2015 at 13:31, Dale Harvey  wrote:

> Will take a look at the content-type on GET requests, thanks
>
> > I believe none of these require preflight unless a mistake is being
> > made (probably setting Content-Type on GET requests).
>
> http://www.w3.org/TR/cors/#preflight-result-cache-0
>
> If the cache is against the url, and we are sending requests to different
> urls, won't requests to different urls always trigger a preflight?
>
> > Also, regardless, you can use the CouchDB bulk document API to fetch
> > all these documents in one request, instead of 70,000 requests.
>
> CouchDB has no bulk document fetch API; it has all_docs, but that isn't
> appropriate for this case. There is talk about introducing one
> (https://issues.apache.org/jira/browse/COUCHDB-2310), however it's going to
> take a while (I would personally rather we replace it with a streaming API)
>
> > I agree that things can be improved here. I think the solution may be
> > better developer tools. In particular, devtools should tell you
> > exactly why a request triggered preflight.
>
> What's wrong with 'This origin is part of the public internet and doesn't
> need any complications or restrictions due to CORS', i.e. Anne's proposal?
>
>
> On 19 February 2015 at 13:21, Brian Smith  wrote:
>
>> On Thu, Feb 19, 2015 at 4:49 AM, Dale Harvey  wrote:
>> >> so presumably it is OK to set the Content-Type to text/plain
>> >
>> > That's not ok, but may explain my confusion; is Content-Type considered a
>> > Custom Header that will always trigger a preflight?
>>
>> To be clear, my comment was about POST requests to the bulk document
>> API, not about other requests.
>>
>> I ran your demo and observed the network traffic using Wireshark.
>> Indeed, OPTIONS requests are being sent for every GET. But, that is
>> because you are setting the Content-Type header field on your GET
>> requests. Since GET requests don't have a request body, you shouldn't
>> set the Content-Type header field on them. And, if you do, then
>> browsers will treat it as a custom header field. That is what forces
>> the preflight for those requests.
>>
>> Compare the network traffic for these two scripts:
>>
>>   <script>
>> xhr=new XMLHttpRequest();
>> xhr.open("GET",
>> "http://skimdb.iriscouch.com/registry/_changes?timeout=25000&style=all_docs&since=209&limit=100&_nonce=xhGtdb3XqOaYCWh4",
>> true);
>> xhr.setRequestHeader("Accept","application/json");
>> xhr.setRequestHeader("Content-Type","application/json");
>> xhr.send();
>>   </script>
>>
>>   <script>
>> xhr=new XMLHttpRequest();
>> xhr.open("GET",
>> "http://skimdb.iriscouch.com/registry/_changes?timeout=25000&style=all_docs&since=209&limit=100&_nonce=xhGtdb3XqOaYCWh4",
>> true);
>> xhr.setRequestHeader("Accept","application/json");
>> xhr.send();
>>   </script>
>>
>> They are the same, except the second one doesn't set the Content-Type
>> header, and thus it doesn't cause the preflight to be sent.
>>
>> > if so then none of the
>> > caching will apply, CouchDB requires sending the appropriate content-type
>>
>> CouchDB may require sending "Accept: application/json", but that isn't
>> considered a custom header field, so it doesn't trigger preflight.
>>
>> > The /_changes requests are only part of the problem, once we receive the
>> > changes information we then have to request information about individual
>> > documents which all have a unique id
>> >
>> >   GET /registry/mypackagename
>> >
>> > We do one of those per document (70,000 npm docs), all trigger a
>> > preflight (whether or not custom headers are involved)
>>
>> I believe none of these require preflight unless a mistake is being
>> made (probably setting Content-Type on GET requests).

Re: CORS performance

2015-02-19 Thread Dale Harvey
Will take a look at the content-type on GET requests, thanks

> I believe none of these require preflight unless a mistake is being
> made (probably setting Content-Type on GET requests).

http://www.w3.org/TR/cors/#preflight-result-cache-0

If the cache is against the url, and we are sending requests to different
urls, won't requests to different urls always trigger a preflight?

> Also, regardless, you can use the CouchDB bulk document API to fetch
> all these documents in one request, instead of 70,000 requests.

CouchDB has no bulk document fetch API; it has all_docs, but that isn't
appropriate for this case. There is talk about introducing one
(https://issues.apache.org/jira/browse/COUCHDB-2310), however it's going to
take a while (I would personally rather we replace it with a streaming API)

> I agree that things can be improved here. I think the solution may be
> better developer tools. In particular, devtools should tell you
> exactly why a request triggered preflight.

What's wrong with 'This origin is part of the public internet and doesn't
need any complications or restrictions due to CORS', i.e. Anne's proposal?


On 19 February 2015 at 13:21, Brian Smith  wrote:

> On Thu, Feb 19, 2015 at 4:49 AM, Dale Harvey  wrote:
> >> so presumably it is OK to set the Content-Type to text/plain
> >
> > That's not ok, but may explain my confusion; is Content-Type considered a
> > Custom Header that will always trigger a preflight?
>
> To be clear, my comment was about POST requests to the bulk document
> API, not about other requests.
>
> I ran your demo and observed the network traffic using Wireshark.
> Indeed, OPTIONS requests are being sent for every GET. But, that is
> because you are setting the Content-Type header field on your GET
> requests. Since GET requests don't have a request body, you shouldn't
> set the Content-Type header field on them. And, if you do, then
> browsers will treat it as a custom header field. That is what forces
> the preflight for those requests.
>
> Compare the network traffic for these two scripts:
>
>   <script>
> xhr=new XMLHttpRequest();
> xhr.open("GET",
> "http://skimdb.iriscouch.com/registry/_changes?timeout=25000&style=all_docs&since=209&limit=100&_nonce=xhGtdb3XqOaYCWh4",
> true);
> xhr.setRequestHeader("Accept","application/json");
> xhr.setRequestHeader("Content-Type","application/json");
> xhr.send();
>   </script>
>
>   <script>
> xhr=new XMLHttpRequest();
> xhr.open("GET",
> "http://skimdb.iriscouch.com/registry/_changes?timeout=25000&style=all_docs&since=209&limit=100&_nonce=xhGtdb3XqOaYCWh4",
> true);
> xhr.setRequestHeader("Accept","application/json");
> xhr.send();
>   </script>
>
> They are the same, except the second one doesn't set the Content-Type
> header, and thus it doesn't cause the preflight to be sent.
>
> > if so then none of the
> > caching will apply, CouchDB requires sending the appropriate content-type
>
> CouchDB may require sending "Accept: application/json", but that isn't
> considered a custom header field, so it doesn't trigger preflight.
>
> > The /_changes requests are only part of the problem, once we receive the
> > changes information we then have to request information about individual
> > documents which all have a unique id
> >
> >   GET /registry/mypackagename
> >
> > We do one of those per document (70,000 npm docs), all trigger a
> > preflight (whether or not custom headers are involved)
>
> I believe none of these require preflight unless a mistake is being
> made (probably setting Content-Type on GET requests).
>
> Also, regardless, you can use the CouchDB bulk document API to fetch
> all these documents in one request, instead of 70,000 requests.
>
> > Also, performance details aside, every week somebody has a library or
> > proxy that sends some custom header, or they just missed a step when
> > configuring CORS; it's a constant source of confusion for our users. We
> > try to get around it by providing helper scripts, but Anne's proposal
> > mirroring Flash's crossdomain.xml sounds vastly superior to the current
> > implementation from the developers' perspective.
>
> I agree that things can be improved here. I think the solution may be
> better developer tools. In particular, devtools should tell you
> exactly why a request triggered preflight.
>
> Cheers,
> Brian
>


Re: CORS performance

2015-02-19 Thread Dale Harvey
> so presumably it is OK to set the Content-Type to text/plain

That's not ok, but may explain my confusion; is Content-Type considered a
Custom Header that will always trigger a preflight? If so then none of the
caching will apply; CouchDB requires sending the appropriate content-type

I tried setting up a little demo here, it will replicate the npm registry
for 5 seconds - http://paste.pouchdb.com/paste/q8n610/#output

You can see in the network logs various OPTIONS requests for
http://skimdb.iriscouch.com/registry/_changes?timeout=25000&style=all_docs&since=209&limit=100&_nonce=xhGtdb3XqOaYCWh4
http://skimdb.iriscouch.com/registry/_changes?timeout=25000&style=all_docs&since=311&limit=100&_nonce=UIZRQHrUG1Gjbm6S
etc etc

The /_changes requests are only part of the problem; once we receive the
changes information we then have to request information about individual
documents, which all have a unique id

  GET /registry/mypackagename

We do one of those per document (70,000 npm docs), and all trigger a
preflight (whether or not custom headers are involved)

We can and are doing a lot of things to try and improve performance / reduce
the number of HTTP requests, but in our particular case we are dealing with
10 years of established server protocols. There isn't 'a server'; there are
at least 10 server implementations across all platforms by various projects /
companies that all need / try to interoperate. We can't just make ad hoc
changes to the protocol to get around CORS limitations.

Also, performance details aside, every week somebody has a library or proxy
that sends some custom header, or they just missed a step when configuring
CORS; it's a constant source of confusion for our users. We try to get
around it by providing helper scripts, but Anne's proposal mirroring Flash's
crossdomain.xml sounds vastly superior to the current implementation from
the developers' perspective.

On 19 February 2015 at 12:05, Brian Smith  wrote:

> Dale Harvey  wrote:
> > The REST api pretty much by design means a unique url per request
>
> CouchDB has http://wiki.apache.org/couchdb/HTTP_Bulk_Document_API,
> which allows you to fetch or edit and create multiple documents at
> once, with one HTTP request. CouchDB's documentation says you're
> supposed to POST a JSON document for editing, but the example doesn't
> set the Content-Type on the request so presumably it is OK to set the
> Content-Type to text/plain. This means that you'd have ONE request and
> ZERO preflights to edit N documents.
>
> > in this case a lot of the requests look like
> >
> >   GET origin/_change?since=0
> >   GET origin/_change?since=the last id
>
> A GET like this won't require preflight unless you set custom header
> fields on the request. Are you setting custom headers? If so, which
> ones and why? I looked at the CouchDB documentation and it doesn't
> mention any custom header fields. Thus, it seems to me like none of
> the GET requests should require preflight.
>
> Also, if your server is SPDY or HTTP/2, you should be able to
> configure it so that when the server receives a request "GET
> /whatever/123", it replies with the response for that request AND
> pushes the response for the not-even-yet-sent "OPTIONS /whatever/123"
> request. In that case, even if you don't use the preflight-less bulk
> document API and insist on using PUT, there's zero added latency from
> the preflight.
>
> Cheers,
> Brian
>
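
Brian's preflight-free bulk edit described above can be sketched as follows
(a hedged illustration, not a definitive recipe; the base URL and the helper
name are hypothetical). The key detail is that text/plain is one of the
CORS-safelisted Content-Type values, unlike application/json, so a POST
carrying it counts as a "simple request" and the browser sends no OPTIONS
preflight:

```javascript
// Build a description of a preflight-free POST to CouchDB's _bulk_docs
// endpoint. text/plain keeps the Content-Type within the CORS-safelisted
// values, so no OPTIONS preflight is triggered.
function buildBulkDocsRequest(baseUrl, docs) {
  return {
    method: "POST",                    // simple methods: GET, HEAD, POST
    url: baseUrl + "/_bulk_docs",      // CouchDB bulk document endpoint
    headers: { "Content-Type": "text/plain" },
    body: JSON.stringify({ docs: docs })
  };
}

// Browser usage (sketch):
//   var req = buildBulkDocsRequest("http://db.example/registry", docsArray);
//   var xhr = new XMLHttpRequest();
//   xhr.open(req.method, req.url, true);
//   xhr.setRequestHeader("Content-Type", req.headers["Content-Type"]);
//   xhr.send(req.body);
```

This gives one request and zero preflights to edit N documents, at the cost
of the server having to accept JSON bodies labelled text/plain.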


Re: CORS performance

2015-02-19 Thread Dale Harvey
> What is it about PouchDB and CouchDB that causes them to require
> preflight for all of these requests in the first place? What is
> difficult about changing them to not require preflight for all of
> these requests?

The REST api pretty much by design means a unique url per request, in this
case a lot of the requests look like

  GET origin/_change?since=0
  GET origin/_change?since=the last id

It's unlikely to change, since it's 10 years old and standardized across
several different products, and it works well in most cases aside from being
kinda slow when you try to use it over CORS.

> If declaring this policy through a header is not acceptable, we could
> attempt a double preflight fetch for the very first CORS fetch against
> an origin (that requires a preflight). Try OPTIONS * before OPTIONS
> /actual-request. If that handshake succeeds (details TBD) no more
> preflights necessary for the entire origin.

This is very much what I expected when I first used CORS, similar to the
Flash cross-domain.xml file. I would just like to mark an origin I control
as being accessible from any host; as the only thing CORS protects is data
behind a firewall, I think there should be a simple mechanism to say "this
domain is not behind a firewall, have at it"
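
For comparison, the closest a server can get today is answering every
preflight with wide-open CORS headers, but that still happens per URL and
per cached preflight; a typical wide-open preflight response (values
illustrative) looks like:

```
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, PUT, DELETE
Access-Control-Allow-Headers: Content-Type
Access-Control-Max-Age: 86400
```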


On 19 February 2015 at 11:30, Brian Smith  wrote:

> Dale Harvey  wrote:
> > With Couch / PouchDB we are working with an existing REST API wherein
> > every request is to a different url (which is unlikely to change); the
> > performance impact is significant since most of the time is used up by
> > latency, and the CORS preflight requests essentially double the time it
> > takes to do anything
>
> I understand that currently the cost of this API is 2*N and you want
> to reduce the 2 to 1 instead of reducing the N, even though N is
> usually much larger than 2.
>
> What is it about PouchDB and CouchDB that causes them to require
> preflight for all of these requests in the first place? What is
> difficult about changing them to not require preflight for all of
> these requests?
>
> Cheers,
> Brian
>


Re: CORS performance

2015-02-19 Thread Dale Harvey
With Couch / PouchDB we are working with an existing REST API wherein every
request is to a different url (which is unlikely to change); the performance
impact is significant since most of the time is used up by latency, and the
CORS preflight requests essentially double the time it takes to do anything

On 19 February 2015 at 10:50, Brian Smith  wrote:

> On Thu, Feb 19, 2015 at 2:45 AM, Anne van Kesteren 
> wrote:
> > On Thu, Feb 19, 2015 at 11:43 AM, Brian Smith 
> wrote:
> >> 1. Preflight is only necessary for a subset of CORS requests.
> >> Preflight is never done for GET or HEAD, and you can avoid preflight
> >> for POST requests by making your API accept data in a format that
> >> matches what HTML forms post. Therefore, we're only talking about PUT,
> >> DELETE, less common forms of POST, and other less commonly-used
> >> methods.
> >
> > Euh, if you completely ignore headers, sure. But most HTTP APIs will
> > use some amount of custom headers, meaning *all* methods require a
> > preflight.
>
> Is it really true that most HTTP APIs will use some amount of custom
> headers? And is it necessary for these APIs to be designed such that the
> custom headers are required?
>
> Cheers,
> Brian
>


Re: Starting work on Indexed DB v2 spec - feedback wanted

2014-04-18 Thread Dale Harvey
Our current performance suite is @
https://github.com/pouchdb/pouchdb/tree/master/tests/performance

It's at a fairly abstract level above idb, and right now it's not
particularly clean, but it should be easy enough to get running;
instructions @
https://github.com/pouchdb/pouchdb/blob/master/CONTRIBUTING.md

We have only just started and the tests may not be great representations,
but early signs are that chrome and firefox are quite comparable, with
chrome being noticeably faster for keyrange queries, and safari being orders
of magnitude faster


On 18 April 2014 07:48, Kyle Huey  wrote:

> On Thu, Apr 17, 2014 at 5:16 PM, Ben Kelly  wrote:
> > On 4/17/2014 5:41 PM, Kyle Huey wrote:
> >>
> >> On Thu, Apr 17, 2014 at 2:10 PM, Dale Harvey 
> wrote:
> >>>
> >>> No features that slow it down, as with Tim I also implemented the same
> >>> thing in node.js and see much better performance against straight
> >>> leveldb, with websql still being ~5x faster than idb
> >>
> >>
> >> Do you have benchmarks for this?  When we've profiled IndexedDB
> >> performance for Gaia apps in the past the issue is invariably that the
> >> main thread event loop is busy and IndexedDB's responses have to go to
> >> the end of a long line.
> >
> >
> > I would hazard a guess that some of SQL's more feature-rich constructs
> > allow you to do more in a single API call.  This could mean you need to
> > hit the event loop less often to accomplish the same amount of work in
> > many cases.
> >
> > Just a theory.
> >
> > Ben
>
> Yes, that's entirely possible.  Which is why I would like to see a
> testcase ;-)
>
> - Kyle
>


Re: Starting work on Indexed DB v2 spec - feedback wanted

2014-04-17 Thread Dale Harvey
My IndexedDB wishlist:

Ability to enumerate databases. I don't particularly want or care about the
transactional integrity of the API; if someone deletes a database while I am
in a callback in which I think it exists, meh

Change events / Observers; right now I have to fake them via localStorage

A transactional model that isn't tied to the event loop. Sometimes I want to
do async things inside the transaction, like converting to an ArrayBuffer
etc; I would like to open with an option to have the transaction stay open
till it's explicitly closed

No features that slow it down. As with Tim, I also implemented the same
thing in node.js and see much better performance against straight leveldb,
with websql still being ~5x faster than idb

I don't have any concrete suggestions, but making the transactional states
more visible would help. I have seen pretty much everyone errantly use
.put().onsuccess(function() { /* yay, do stuff */ }) and either lose data by
assuming the write is completed, or get into a confusing state trying to
open new transactions. (See step 3 -
http://www.html5rocks.com/en/tutorials/indexeddb/todo/)
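
A hedged sketch of the pattern that avoids that trap (the helper name is
hypothetical): treat request.onsuccess as "the operation was queued and
returned a result", and only treat the write as committed once the owning
transaction fires oncomplete:

```javascript
// Put a value and report completion only when the transaction commits.
// request.onsuccess does NOT mean the data is durable yet.
function putAndWait(store, value, done) {
  var tx = store.transaction;  // the transaction the store was opened from
  var req = store.put(value);
  req.onsuccess = function () {
    // The put was queued and has a result; do not report success here.
  };
  tx.oncomplete = function () { done(null); };   // write actually committed
  tx.onerror = function () { done(tx.error); };
  return req;
}
```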

I think it's also worthwhile (and fairly trivial) to implement sugar syntax
for key-value storage like https://github.com/mozilla/localForage, as a
large amount of usage I have seen just wants to store some simple data and
not deal with transactions, object stores and schema migrations
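
Such a sugar layer can be quite small; here is a hedged sketch (promise-based,
hiding transactions and object stores behind getItem/setItem; all names are
hypothetical, and this assumes an already-open database handle):

```javascript
// Wrap an open IDBDatabase in a tiny key-value API.
function makeKeyValueStore(db, storeName) {
  function withStore(mode, fn) {
    return new Promise(function (resolve, reject) {
      var tx = db.transaction(storeName, mode);
      var req = fn(tx.objectStore(storeName));
      // Resolve on transaction completion, not request success, so callers
      // never see a value before the write has committed.
      tx.oncomplete = function () { resolve(req.result); };
      tx.onerror = function () { reject(tx.error); };
    });
  }
  return {
    getItem: function (key) {
      return withStore("readonly", function (s) { return s.get(key); });
    },
    setItem: function (key, value) {
      return withStore("readwrite", function (s) { return s.put(value, key); });
    }
  };
}
```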

Cheers
Dale



On 17 April 2014 21:22, Tim Caswell  wrote:

> Personally, the main thing I want to see is exposing simpler and lower-level
> APIs.  For my uses (backend to git server), the leveldb API is plenty
> powerful.  Most of the time I'm using IndexedDB, I'm getting frustrated
> because it's way more complex than I need and gets in the way and slows
> things down.
>
> Would it make sense to standardize on a simpler set of APIs similar to
> what levelDB offers and expose that in addition to what indexedDB currently
> exposes?  Or would this make sense as a new API apart from IDB?
>
> As a JS developer, I'd much rather see fast, simple, yet powerful
> primitives over application-level databases with indexes and transactions
> baked in.  Chrome implements IDB on top of LevelDB, so it has just enough
> primitives to make more complex systems.
>
> But for applications like mine that use immutable storage and hashes for
> all lookups don't need or want the advanced features added on top.  IDB is
> a serious performance bottleneck in my apps and when using LevelDB in
> node.js, my same logic runs a *lot* faster and using a lot less code.
>
> -Tim Caswell
>
>
> On Wed, Apr 16, 2014 at 1:49 PM, Joshua Bell  wrote:
>
>> At the April 2014 WebApps WG F2F [1] there was general agreement that
>> moving forward with an Indexed Database "v2" spec was a good idea. Ali
>> Alabbas (Microsoft) has volunteered to co-edit the spec with me.
>> Maintaining compatibility is the highest priority; this will not break the
>> existing API.
>>
>> We've been tracking additional features for quite some time now, both on
>> the wiki [2] and bug tracker [3]. Several are very straightforward
>> (continuePrimaryKey, batch gets, binary keys, ...) and have already been
>> implemented in some user agents, and it will be helpful to document these.
>> Others proposals (URLs, Promises, full text search, ...) are much more
>> complex and will require additional implementation feedback; we plan to add
>> features to the v2 spec based on implementer acceptance.
>>
>> This is an informal call for feedback to implementers on what is missing
>> from v1:
>>
>> * What features and functionality do you see as important to include?
>> * How would you prioritize the features?
>>
>> If there's anything you think is missing from the wiki [2], or want to
>> comment on the importance of a particular feature, please call it out -
>> replying here is great. This will help implementers decide what work to
>> prioritize, which will drive the spec work. We'd also like to keep the v2
>> cycle shorter than the v1 cycle was, so timely feedback is appreciated -
>> there's always room for a "v3".
>>
>> [1] http://www.w3.org/2014/04/10-webapps-minutes.html
>> [2] http://www.w3.org/2008/webapps/wiki/IndexedDatabaseFeatures
>> [3]
>> https://www.w3.org/Bugs/Public/buglist.cgi?bug_status=RESOLVED&component=Indexed%20Database%20API&list_id=34841&product=WebAppsWG&query_format=advanced&resolution=LATER
>>
>> PS: Big thanks to Zhiqiang Zhang for his Indexed DB implementation
>> report, also presented at the F2F.
>>
>
>


Re: IndexedDB, what were the issues? How do we stop it from happening again?

2013-03-06 Thread Dale Harvey
I wrote a quick overview of the issues I have had using the indexedDB API

http://arandomurl.com/2013/02/21/thoughts-on-indexeddb.html

Most of them are just implementation details; however, I still haven't met a
webdev who understands the transaction model without having one of the
IndexedDB implementors explain it to them, and beyond being confusing it
also makes things much harder to implement due to being tied to the event
loop.

And I agree with pretty much everything that Alex wrote, at least the parts
that I understood.

Cheers
Dale

On 6 March 2013 15:01, Alex Russell  wrote:

> Comments inline. Adding some folks from the IDB team at Google to the
> thread as well as public-webapps.
>
> On Sunday, February 17, 2013, Miko Nieminen wrote:
>
>>
>>
>> 2013/2/15 Shwetank Dixit 
>>
>>>  Why did you feel it was necessary to write a layer on top of IndexedDB?

>>>
>>> I think this is the main issue here.
>>>
>>> As it stands, IDB is great in terms of the features and power it offers,
>>> but the feedback I received from other devs was that writing raw IndexedDB
>>> requires an uncomfortable amount of verbosity even for some simple tasks
>>> (this can be disputed, but that is the view I got from some of the
>>> developers I interacted with). Adding that much code (once again, I'm
>>> talking of raw IndexedDB) makes it less readable and understandable. For
>>> beginners, this all seemed very intimidating, and for some people more
>>> experienced, it was a bit frustrating.
>>>
>>>
>> After my experiments with IDB, I don't feel that it is particularly
>> verbose. I have to admit that often I prefer slightly verbose syntax over
>> shorter one when it makes reading the code easier. In IDB's case, I think
>> this is the case.
>>
>>
>>
>>>  For the latter bit, I reckon it would be a good practice for groups
>>>  working on low-level APIs to more or less systematically produce a library
>>>  that operates at a higher level. This would not only help developers in
>>>  that they could pick that up instead of the lower-level stuff, but more
>>>  importantly (at least in terms of goals) it would serve to validate that
>>>  the lower-level design is indeed appropriate for librarification.

>>>
>>> I think that would be a good idea. Also, people making those low level
>>> APIs should still keep in mind that the resulting code should not be too
>>> verbose or complex. Librarification should be an advantage, but not a de
>>> facto requirement for developers when it comes to such APIs. It should
>>> still be feasable for them to write code in the raw low level API without
>>> writing uncomfortably verbose or complex code for simple tasks. Spec
>>> designers of low level APIs should not take this as a license to make
>>> things so complex that only they and a few others understand it, and then
>>> hope that some others will go ahead and make it simple for the 'common
>>> folk' through an abstraction library.
>>
>>
>> I don't quite see how to simplify IDB syntax much more.
>>
>
> I've avoided weighing in on this thread until I had more IDB experience.
> I've been wrestling with it on two fronts of late:
>
>
>    - A re-interpretation of the API based on Futures:
>      https://github.com/slightlyoff/DOMFuture/tree/master/reworked_APIs/IndexedDB
>    - A new async LocalStorage design + p(r)olyfill that's bootstrapped on
>      IDB: https://github.com/slightlyoff/async-local-storage
>
> While you might be right that it's unlikely that the API can be
> "simplified", I think it's trivial to extend it in ways that make it easier
> to reason about and use.
>
> This thread started out with a discussion of what might be done to keep
> IDB's perceived mistakes from reoccurring. Here's a quick stab at both an
> outline of the mistakes and what can be done to avoid them:
>
>
>- *Abuse of events*
>    The current IDB design models one-time operations using events. This
>    *can* make sense insofar as events can occur zero or more times in the
>future, but it's not a natural fit. What does it mean for oncomplete to
>happen more than once? Is that an error? Are onsuccess and onerror
>exclusive? Can they both be dispatched for an operation? The API isn't
>clear. Events don't lead to good design here as they don't encapsulate
>these concerns. Similarly, event handlers don't chain. This is natural, as
>they could be invoked multiple times (conceptually), but it's not a good
>fit for data access. It's great that IDB as async, and events are the
>existing DOM model for this, but IDB's IDBRequest object is calling out for
>a different kind of abstraction. I'll submit Futures for the job, but
>others might work (explicit callback, whatever) so long as they maintain
>chainability + async.
>
>- *Implicitness*
>IDB is implicit in a number of places that cause confusion for folks
>not intimately familiar with the contract(s) that IDB expects you to enter
>into. First, t

Re: IndexedDB events for object storage add, put and delete

2013-02-05 Thread Dale Harvey
They can, I was just saying that they won't do that by default (as I assume
a native implementation would); you need to write your own messaging system
out of band

Cheers
Dale

On 5 February 2013 22:12, pira...@gmail.com  wrote:

> Why can't it propagate over tabs, if all of them are accessing the
> same database?
>
> 2013/2/5 Dale Harvey :
> > The problem with emitting change notifications on writes is that they
> > don't propagate across tabs; my library has to use localStorage to emit
> > events across tabs and keep track of a change sequence in each tab
> >
> > This would be a welcome addition to the spec (after we get to enumerate
> > databases) :)
> >
> >
> > On 5 February 2013 21:59, pira...@gmail.com  wrote:
> >>
> >> One solution would be to not call IndexedDB methods directly but
> >> instead use custom wrappers that fit better with your application
> >> (this is what I'm doing), but definitely I totally agree with you
> >> that IndexedDB should raise events when a row has been
> >> inserted/updated/deleted. I think it was argued that it would cause an
> >> explosion of events, but I'm not sure about this... having events
> >> would be useful to develop triggers to maintain database consistency,
> >> for example :-)
> >>
> >> 2013/2/5 Miko Nieminen :
> >> > Hi,
> >> >
> >> > I'm new to this forum and I'm not completely sure if I'm posting to the
> >> > right list. I hope I am.
> >> >
> >> > I've been playing with IndexedDB to learn how to use it, and around
> >> > this experiment I wrote a blog article about my experiences.
> >> >
> >> > While writing my article, I realized there is no way to add event
> >> > listeners on an object store to get notifications when a new object is
> >> > added, an existing one is modified, or one is deleted. I think the lack
> >> > of these events makes some use cases much more complicated than one
> >> > would hope: use cases like keeping local data in sync with a remote
> >> > database, synchronizing views between multiple windows, or creating
> >> > generic data indexers or manipulation libraries. I know there are ways
> >> > to work around the lack of these events, but having them would make
> >> > things much easier.
> >> >
> >> > Is there any reason why these are not included in the specification? It
> >> > just feels a bit strange when a similar mechanism is included in the
> >> > WebStorage API, but not in IDB. I suppose the right moment to emit
> >> > these events would be just after emitting transaction complete.
> >> >
> >> > I wasn't able to find any references from the archives and I hope I'm
> >> > not
> >> > asking same question again. Also I hope I'm not asking this question
> too
> >> > late.
> >> >
> >> > My blog article talks about this in a bit more detailed level under
> >> > header
> >> > "Shortcomings of IndexedDB".. The whole article is quite long so you
> >> > might
> >> > want to skip most of it. You can find it from
> >> >
> >> >
> http://mini-thinking.blogspot.co.uk/2013/02/web-app-example-using-indexeddb.html
> >> >
> >> > Thanks,
> >> > --
> >> > Miko Nieminen
> >> > miko.niemi...@iki.fi
> >> > miko.niemi...@gmail.com
> >> >
> >>
> >>
> >>
> >> --
> >> "Si quieres viajar alrededor del mundo y ser invitado a hablar en un
> >> monton de sitios diferentes, simplemente escribe un sistema operativo
> >> Unix."
> >> – Linus Tordvals, creador del sistema operativo Linux
> >>
> >
>
>
>
> --
> "Si quieres viajar alrededor del mundo y ser invitado a hablar en un
> monton de sitios diferentes, simplemente escribe un sistema operativo
> Unix."
> – Linus Tordvals, creador del sistema operativo Linux
>


Re: IndexedDB events for object storage add, put and delete

2013-02-05 Thread Dale Harvey
The problem with emitting change notifications on writes is that they don't
propagate across tabs; my library has to use localStorage to emit events
across tabs and keep track of a change sequence in each tab

This would be a welcome addition to the spec (after we get to enumerate
databases) :)
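
The cross-tab workaround described above (localStorage as a signaling channel plus a per-tab change sequence) might look roughly like this sketch; the names and shape are illustrative assumptions, not code from any real library:

```javascript
// Sketch of the localStorage workaround described above: the writing
// tab bumps a change-sequence counter in localStorage; other tabs pick
// the change up via the window 'storage' event (which fires only in
// tabs that did not do the write). All names here are illustrative.
function createChangeBus(storage, key) {
  var listeners = [];
  var bus = {
    // Called by the tab that performed the IndexedDB write.
    notify: function (change) {
      var seq = Number(storage.getItem(key + '_seq') || 0) + 1;
      storage.setItem(key + '_seq', String(seq));
      storage.setItem(key, JSON.stringify({ seq: seq, change: change }));
    },
    // In a real page, wire this to the 'storage' event:
    //   window.addEventListener('storage', function (e) {
    //     if (e.key === key) bus.dispatch(JSON.parse(e.newValue));
    //   });
    dispatch: function (payload) {
      listeners.forEach(function (fn) { fn(payload); });
    },
    onChange: function (fn) { listeners.push(fn); }
  };
  return bus;
}
```

The change sequence is what lets a tab notice it has missed updates (its last-seen sequence is behind the stored one) and re-read from IndexedDB.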

On 5 February 2013 21:59, pira...@gmail.com  wrote:

> One solution is not to call IndexedDB methods directly, but instead to
> use custom wrappers that fit your application better (this is what I'm
> doing). That said, I definitely agree with you that IndexedDB should
> raise events when a row has been inserted/updated/deleted. I think the
> concern raised was that it would cause an explosion of events, but I'm
> not sure about that... having events would be useful for building
> triggers to maintain database consistency, for example :-)
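
The wrapper approach described above could be sketched like this; the helper and its names are editorial assumptions for illustration, not anyone's actual library code:

```javascript
// Sketch of the wrapper approach described above: route writes through
// one helper so the application can observe them, since IndexedDB
// itself fires no per-record events. The callback fires only on the
// transaction's 'complete' event, so listeners never observe a write
// that may still roll back. All names are illustrative.
function observedPut(db, storeName, value, onChange) {
  var tx = db.transaction(storeName, 'readwrite');
  var req = tx.objectStore(storeName).put(value);
  tx.oncomplete = function () {
    onChange({ store: storeName, type: 'put', key: req.result });
  };
  return tx;
}
```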
>
> 2013/2/5 Miko Nieminen :
> > Hi,
> >
> > I'm new to this forum and I'm not completely sure if I'm posting to
> > the right list. I hope I am.
> >
> > I've been playing with IndexedDB to learn how to use it, and around
> > this experiment I wrote a blog article about my experiences.
> >
> > While writing my article, I realized there is no way to add event
> > listeners to an object store to get notifications when a new object
> > is added, or an existing one is modified or deleted. I think the lack
> > of these events makes some use cases much more complicated than one
> > would hope: use cases like keeping local data in sync with a remote
> > database, synchronizing views between multiple windows, or creating
> > generic data indexers or manipulation libraries. I know there are
> > ways to work around the lack of these events, but having them would
> > make things much easier.
> >
> > Is there any reason why these are not included in the specification?
> > It just feels a bit strange that a similar mechanism is included in
> > the WebStorage API but not in IDB. I suppose the right moment to emit
> > these events would be just after emitting transaction complete.
> >
> > I wasn't able to find any references in the archives, and I hope I'm
> > not asking the same question again. I also hope I'm not asking this
> > question too late.
> >
> > My blog article talks about this in a bit more detail under the
> > header "Shortcomings of IndexedDB". The whole article is quite long,
> > so you might want to skip most of it. You can find it at
> >
> > http://mini-thinking.blogspot.co.uk/2013/02/web-app-example-using-indexeddb.html
> >
> > Thanks,
> > --
> > Miko Nieminen
> > miko.niemi...@iki.fi
> > miko.niemi...@gmail.com
> >
>
>
>
> --
> "Si quieres viajar alrededor del mundo y ser invitado a hablar en un
> monton de sitios diferentes, simplemente escribe un sistema operativo
> Unix."
> – Linus Tordvals, creador del sistema operativo Linux
>
>
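
The ordering Miko suggests (emit change events only once the transaction has completed) can be sketched as a small buffer. This is purely illustrative; nothing like it exists in the IndexedDB specification under discussion, and all names are hypothetical:

```javascript
// Sketch of the suggested ordering: buffer per-record change events
// while a transaction runs and emit them only once the transaction
// completes, dropping them on abort so listeners never see rolled-back
// changes.
function createChangeBuffer(emit) {
  var pending = [];
  return {
    record: function (change) { pending.push(change); }, // during the tx
    complete: function () {                              // on tx 'complete'
      var batch = pending;
      pending = [];
      batch.forEach(emit);
    },
    abort: function () { pending = []; }                 // on tx 'abort'
  };
}
```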