Re: [indexeddb] Calling update on a cursor index with a unique value constraint

2011-07-08 Thread Jeremy Orlow
On Thu, Jul 7, 2011 at 1:46 PM, Jonas Sicking  wrote:

> On Wed, Jul 6, 2011 at 9:41 PM, Jeremy Orlow  wrote:
> > On Wed, Jul 6, 2011 at 10:06 AM, Israel Hilerio 
> >> We believe an error should be thrown because of the violation of the
> >> unique value index constraint and the error code should be set to
> >> CONSTRAINT_ERR.  What do you think?
> >
> > IIRC, we decided update should essentially be an alias to delete and then
> an
> > add on the parent object store--probably an atomic one.  So by that logic
> it
> > does seem to me CONSTRAINT_ERR would be the right error.
>
> Hmm.. it's not exactly a delete and a add since if the add produces an
> error but the error handler calls .preventDefault, you don't want only
> the delete to be executed.
>
> I'd rather say that a .update is the same as a .put.
>
> > Btw, ObjectStore.add()'s exception section doesn't mention CONSTRAINT_ERR
> > though it probably should.
>
> IDBObjectStore.add never throws CONSTRAINT_ERR since that's detected
> asynchronously, so the spec seems fine here. However
> IDBObjectStoreSync.add and IDBObjectStoreSync.put should and do list
> it as an exception.
>

Oops, yeah... just got mixed up.
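Jonas's distinction above can be sketched with a toy in-memory store (hypothetical helpers, not the real IndexedDB API): if update were literally delete-then-add, a constraint failure on the add would still leave the old record deleted, whereas put-style semantics leave it untouched.

```javascript
// Toy in-memory store illustrating the point above (hypothetical helpers,
// not the IndexedDB API): why cursor.update should behave like put(),
// not like delete-then-add, when a unique-index constraint fires.

// update modelled as delete-then-add: a failed add leaves the record deleted.
function updateAsDeleteThenAdd(store, uniqueIndex, key, value) {
  store.delete(key);
  if (uniqueIndex.has(value)) return false; // CONSTRAINT_ERR -- but the delete already happened
  store.set(key, value);
  return true;
}

// update modelled as put(): on a constraint violation nothing changes.
function updateAsPut(store, uniqueIndex, key, value) {
  if (uniqueIndex.has(value)) return false; // CONSTRAINT_ERR -- record untouched
  store.set(key, value);
  return true;
}

const dup = new Set(['taken']); // values already claimed by the unique index

const a = new Map([['k', 'old']]);
updateAsDeleteThenAdd(a, dup, 'k', 'taken');
console.log(a.has('k')); // false -- the old record was lost

const b = new Map([['k', 'old']]);
updateAsPut(b, dup, 'k', 'taken');
console.log(b.get('k')); // "old" -- the old record survives
```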


Re: [indexeddb] Calling update on a cursor index with a unique value constraint

2011-07-06 Thread Jeremy Orlow
On Wed, Jul 6, 2011 at 10:06 AM, Israel Hilerio wrote:

> What is the expected behavior when calling update() in a cursor index that
> requires unique values.  Firefox allows the update, even when it results in
> a duplicate value.  Chrome throws an error event with the code set to
> UNKNOWN_ERR.
>

Most (if not all?) of the times Chrome throws an UNKNOWN_ERR, it's because
the functionality simply hasn't been implemented yet.


> We believe an error should be thrown because of the violation of the unique
> value index constraint and the error code should be set to CONSTRAINT_ERR.
>  What do you think?
>

IIRC, we decided update should essentially be an alias to delete and then an
add on the parent object store--probably an atomic one.  So by that logic it
does seem to me CONSTRAINT_ERR would be the right error.

Btw, ObjectStore.add()'s exception section doesn't mention CONSTRAINT_ERR
though it probably should.

J


Re: [indexeddb] IDBRequest.transaction property set to null

2011-07-06 Thread Jeremy Orlow
I'd be OK with it.  Jonas, what do you think?

J

On Wed, Jul 6, 2011 at 10:27 AM, Israel Hilerio wrote:

> On Tuesday, June 28, 2011 11:21 AM, Israel Hilerio wrote:
> > On Monday, June 27, 2011 11:59 PM, Jeremy Orlow wrote:
> >
> > > On Thu, Jun 23, 2011 at 2:21 PM, Israel Hilerio  >
> > wrote:
> > >> In the definition of IDBRequest.transaction it stipulates that "This
> > >> property can be null for certain requests, such as for request
> returned from
> > IDBFactory.open and IDBDatabase.setVersion."  Based on this we understand
> > that the following handlers will set the transaction property to null:
> > >> * setVersion onsuccess handler
> > >> * setVersion onerror handler
> > >> * setVersion onblock handler
> > >> * open onsuccess handler
> > >> * open onerror handler
> > >> Are there any other times when this property should be set to null or
> is this
> > the complete list?  We couldn't think of any other times when this
> applied but
> > wanted to check.
> >
> > > I believe this is correct.
> >
> > >> Also, in the setVersion case, if we're setting the result property to
> its active
> > transaction, why are we setting the transaction property to null instead
> of the
> > same active transaction?
> >
> > > I know Jonas and I talked about this, but I don't remember the
> > > reasoning for sure.  One thing I can think of off the top of my head is
> that
> > it's weird that it'd start off null and then be set later.  Also, it
> would be
> > duplicate data given that .result is also set to the transaction.  Is
> there any
> > strong reason to set it?
> >
> > > J
> >
> > The main reason was to keep a consistent calling pattern inside our event
> > handlers:
> > * event.target.transaction.oncomplete
> >
> > The only exception to this pattern are the open and setVersion APIs.  In
> the
> > case of the setVersion handler we have to use:
> > * event.target.result.oncomplete
> >
> > It would be nice to use only one pattern all the time.
> >
> > Israel
> >
>
> What do you think about the idea of having a consistent/common access
> pattern for accessing the transaction inside most of our event handlers
> (i.e. inside setVersion but not open)?  This would always guarantee a good
> (non-null) transaction handle that developers can count on.
>
> Israel
>


Re: [indexeddb] IDBRequest.transaction property set to null

2011-06-27 Thread Jeremy Orlow
On Thu, Jun 23, 2011 at 2:21 PM, Israel Hilerio wrote:

> In the definition of IDBRequest.transaction it stipulates that "This
> property can be null for certain requests, such as for request returned from
> IDBFactory.open and IDBDatabase.setVersion."  Based on this we understand
> that the following handlers will set the transaction property to null:
> * setVersion onsuccess handler
> * setVersion onerror handler
> * setVersion onblock handler
> * open onsuccess handler
> * open onerror handler
> Are there any other times when this property should be set to null or is
> this the complete list?  We couldn't think of any other times when this
> applied but wanted to check.
>

I believe this is correct.


> Also, in the setVersion case, if we're setting the result property to its
> active transaction, why are we setting the transaction property to null
> instead of the same active transaction?
>

I know Jonas and I talked about this, but I don't remember the reasoning for
sure.  One thing I can think of off the top of my head is that it's weird
that it'd start off null and then be set later.  Also, it would be duplicate
data given that .result is also set to the transaction.  Is there any strong
reason to set it?

J


Re: [indexeddb] Behavior when calling IDBCursor.continue multiple times

2011-06-27 Thread Jeremy Orlow
I thought it already was in there (or in some bug).  But, if not, yeah it
should just be documented.

On Thu, Jun 23, 2011 at 2:32 PM, Israel Hilerio wrote:

>  We noticed that the spec doesn’t say anything about what needs to happen
> if IDBCursor.continue is called multiple times.  We noticed that both FF and
> Chrome throw a NOT_ALLOWED_ERR exception.  If the exception is not caught,
> the cursor doesn’t continue to iterate, an error event is triggered
> (errorCode = ABORT_ERR), and the transaction is aborted.  However, if the
> exception is caught, the cursor will iterate normally.  This model makes
> sense to us.
>
>
> It seems this is something we should document in the spec.  Do you agree?
>
>
> Israel
>
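The behavior both implementations converge on can be sketched as a small state machine (a simulation, not the real API; the error name is the 2011 draft's NOT_ALLOWED_ERR):

```javascript
// Simulation of the cursor behavior described above (not the real API):
// calling continue() a second time before the next success event has
// delivered a value throws NOT_ALLOWED_ERR; if the exception is caught,
// iteration proceeds normally.
class CursorSim {
  constructor() { this.hasValue = true; }   // a success event has delivered a value
  continue() {
    if (!this.hasValue) throw new Error('NOT_ALLOWED_ERR');
    this.hasValue = false;                  // pending until the next success event
  }
  _fireSuccess() { this.hasValue = true; }  // implementation delivers the next value
}

const cursor = new CursorSim();
cursor.continue();              // first call: fine
let threw = false;
try {
  cursor.continue();            // second call before the success event: throws
} catch (e) {
  threw = (e.message === 'NOT_ALLOWED_ERR');
}
console.log(threw);             // true
cursor._fireSuccess();
cursor.continue();              // fine again -- the cursor keeps iterating
```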


Re: [Bug 12111] New: spec for Storage object getItem(key) method does not match implementation behavior

2011-06-13 Thread Jeremy Orlow
On Sun, Jun 12, 2011 at 2:58 PM, Aryeh Gregor wrote:

> On Sat, Jun 11, 2011 at 3:10 PM, Ian Hickson  wrote:
> > The particular issue in question isn't a particularly important one. The
> > spec describes a superset of implementations, and is a logical direction
> > for the spec to go. (Even within the process, there's no reason we
> > couldn't go to LC with it as is.) Implementations are the ultimate guide
> > here, when this issue bubbles up to the top of the priority list then
> > it'll get resolved one way or the other based on what they do and want.
>
> The spec does not describe a superset of implementations.  It
> describes behavior that contradicts what implementations actually do.
> For instance, if you set localStorage.foo = false, the spec requires
> localStorage.foo to return boolean false.  In implementations, it will
> return the string "false", which evaluates to boolean true.  It is not
> realistically going to be possible for implementations to change to
> what the spec currently says.
>
> Furthermore, we have some implementers from each of IE, Firefox, and
> Chrome saying that they don't intend to change to match the spec, and
> no implementers saying they intend to change to match the spec.  That
> should serve to indicate that the spec is broken and needs to change,
> process issues aside.
>
> I don't see what would take a few hours to change here.  Change all
> the relevant types from any to DOMString, remove all the stuff about
> structured clones, and let WebIDL do the work.  That's immediately
> much closer to browser behavior than the current spec.
>

I was about to write an email that said about the exact same thing.  Then I
saw Aryeh beat me to it.

J
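Aryeh's example can be checked against a tiny stand-in for what implementations actually do (hypothetical `localStorageSim`, since the real Storage object is browser-only): values are stored as DOMStrings, so boolean false round-trips as a truthy string.

```javascript
// Simulation of shipping implementations, per Aryeh's example above:
// Storage values are DOMStrings, so a stored boolean false comes back as
// the string "false", which is truthy. (Hypothetical stand-in for
// browser localStorage, not the real object.)
const localStorageSim = {
  _data: Object.create(null),
  setItem(key, value) { this._data[key] = String(value); }, // DOMString coercion
  getItem(key) { return key in this._data ? this._data[key] : null; },
};

localStorageSim.setItem('foo', false);
const stored = localStorageSim.getItem('foo');
console.log(typeof stored);    // "string"
console.log(stored);           // "false"
console.log(Boolean(stored));  // true -- the string "false" evaluates to boolean true
```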


Re: [indexeddb] IDBDatabase.setVersion non-nullable parameter has a default for null

2011-06-06 Thread Jeremy Orlow
We should probably just remove the special case.  I believe WebIDL specifies
that a null would then turn into the string "null".  This is what we've done
pretty much everywhere else I believe.

J

On Mon, Jun 6, 2011 at 7:23 PM, Israel Hilerio wrote:

> The parameter of IDBDatabase.setVersion is defined in the WebIDL as
> [TreatNullAs=EmptyString] but in the method definition it says that the
> parameter cannot be nullable.  Do we want to enable null values?
>
> Israel
>
>


Re: [indexeddb] Section 4.1 - Opening the database (error codes)

2011-06-06 Thread Jeremy Orlow
Unknown err might make sense for implementation specific bugs/issues.  (If
it's not deeply tied to an implementation, it shouldn't be "unknown"
though.)

On Mon, Jun 6, 2011 at 9:43 AM, Jonas Sicking  wrote:

> On Mon, Jun 6, 2011 at 9:29 AM, Israel Hilerio 
> wrote:
> > The first step in section 4.1 "Opening the database" stipulates:
> >
> 1. If these steps fail for any reason, return an error with the
> appropriate code and abort this algorithm.
> >
> > What are the expected error codes for IDBFactory.open?
>
> I think it's mostly QUOTA_ERR that makes sense there.
>
> / Jonas
>
>


Re: [indexeddb] Auto-Generated Key Max Error

2011-05-31 Thread Jeremy Orlow
Not sure what's right, but nothing should be specced to return UNKNOWN_ERR.
 It's just there as a catch-all for weird implementation-specific issues and
such.

J

On Tue, May 31, 2011 at 2:18 PM, Israel Hilerio wrote:

> What should happen when the auto-generated key reaches its max size?  What
> error should we throw, UNKNOWN_ERR?
>
> Israel
>
>


Re: [IndexedDB] IDBDatabase.transaction needs to specify exception for invalid mode parameter (Bug# 11406)

2011-05-31 Thread Jeremy Orlow
Yes in this case, but by default no.  :-)

On Tue, May 31, 2011 at 11:18 AM, Jonas Sicking  wrote:

> On Tue, May 31, 2011 at 10:56 AM, Israel Hilerio 
> wrote:
> > On Tue, May 17, 2011 10:57 AM, Israel Hilerio wrote:
> >> -Original Message-
> >> From: public-webapps-requ...@w3.org [mailto:public-webapps-
> >> requ...@w3.org] On Behalf Of Israel Hilerio
> >> Sent: Tuesday, May 17, 2011 10:57 AM
> >> To: public-webapps@w3.org
> >> Subject: [IndexedDB] IDBDatabase.transaction needs to specify exception
> for
> >> invalid mode parameter (Bug# 11406)
> >>
> >> Can we update the spec with Jeremy's proposal to throw a
> >> NON_TRANSIENT_ERR for invalid mode parameters on
> >> IDBDatabase.transaction [1]?  That seems reasonable to us.
> >>
> >> Israel
> >> [1] http://www.w3.org/Bugs/Public/show_bug.cgi?id=11406
> >>
> >
> > Should I interpret the silence to mean we agree?
>
> Yes :)
>
> / Jonas
>
>
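The agreed behavior can be sketched as a simple argument check (a simulation, not the real API; the numeric constant values here are placeholders, not the spec's):

```javascript
// Sketch of the behavior agreed above (simulation, not the real API;
// the numeric values of the mode constants are placeholders):
// an invalid mode passed to IDBDatabase.transaction throws NON_TRANSIENT_ERR.
const READ_WRITE = 0, READ_ONLY = 1, VERSION_CHANGE = 2;

function checkTransactionMode(mode) {
  if (mode !== READ_WRITE && mode !== READ_ONLY && mode !== VERSION_CHANGE) {
    throw new Error('NON_TRANSIENT_ERR'); // invalid mode parameter
  }
  return mode;
}

console.log(checkTransactionMode(READ_ONLY)); // valid modes pass through
let badMode = false;
try {
  checkTransactionMode(42);                   // not a defined mode
} catch (e) {
  badMode = (e.message === 'NON_TRANSIENT_ERR');
}
console.log(badMode);                         // true
```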


Re: [IndexedDB] Bug# 11401 - We should disallow .transaction() from within setVersion transactions

2011-05-17 Thread Jeremy Orlow
On Tue, May 17, 2011 at 1:26 PM, Jonas Sicking  wrote:

> On Mon, May 16, 2011 at 6:04 PM, Israel Hilerio 
> wrote:
> > Pablo explained to me that the main issue with allowing transactions
> > from being created inside a SetVersion handler is identifying which
> > objectstores the new transaction is binding to. That is bug# 11401 [1].
> >
> > Using Jeremy's example:
> > db.setVersion('1').onsuccess(function () {
> >   db.createObjectStore('a');   //objectstore a
> >   trans = db.transaction('a');
> >   db.removeObjectStore('a');
> >   db.createObjectStore('a');   //objectstore a'
> >   trans.objectStore('a').put('foo', 'bar'); });
> >
> > It is unclear which of the two objectstores a or a' is associated with
> > the newly created READ_ONLY transaction inside the setVersion handler.
> > To echo Jeremy's proposal, would it be okay if we were not to support
> > this scenario and just throw an exception?
> >
> > We would like to modify the spec to say something like:
> >
> > IDBDatabase.transaction:
> > Throws an IDBDatabaseException of NOT_ALLOWED_ERR when the
> > transaction() method is called within the onsuccess handler of a
> > setVersion request.
>
> We also need to throw a NOT_ALLOWED_ERR if .transaction() is called
> when there is a pending setVersion call. So .transaction() needs to
> throw from the time when .setVersion is called, to the point when the
> "complete" event is fired on the resulting transaction. Otherwise code
> like the following would suffer the same problem as Jeremy describes:
>
> 
> db.setVersion('1').onsuccess = function(a) {
>   db.removeObjectStore('a');
>  db.createObjectStore('a');
> };
> db.transaction(['a'], READ_WRITE).objectStore('a').put('foo', 'bar');
>
> Note that in the above code the .put() call will happen (though not
> complete) before the setVersion transaction starts.
>
> You can construct similar cases when a .transaction() call happens
> between the asynchronous callbacks during a setVersion transaction.
> Hence .transaction() needs to be blocked all until the setVersion
> transaction completes and the "complete" event is fired.
>

All of this sounds good to me.

J
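The rule Jonas spells out above can be sketched as a simulation (not the real API): transaction() is blocked for the whole window from the setVersion() call until the version-change transaction's "complete" event fires.

```javascript
// Simulation of the rule described above (not the real API):
// transaction() throws NOT_ALLOWED_ERR from the moment setVersion() is
// called until the "complete" event fires on the resulting transaction.
class DbSim {
  constructor() { this.versionChangePending = false; }
  setVersion() { this.versionChangePending = true; }
  fireComplete() { this.versionChangePending = false; } // "complete" on the setVersion transaction
  transaction() {
    if (this.versionChangePending) throw new Error('NOT_ALLOWED_ERR');
    return { /* transaction handle */ };
  }
}

const db = new DbSim();
db.setVersion();
let blocked = false;
try {
  db.transaction();     // inside the blocked window: throws
} catch (e) {
  blocked = (e.message === 'NOT_ALLOWED_ERR');
}
console.log(blocked);   // true
db.fireComplete();
db.transaction();       // after "complete": allowed again
```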


Re: [IndexedDB] deleteObjectStore method and updates to IDBDatabase.objectStoreNames on the client

2011-05-04 Thread Jeremy Orlow
On Wed, May 4, 2011 at 9:40 PM, Israel Hilerio wrote:

> The reason I was thinking that deleteObjectStore was async was because it
> returns an IDBRequest interface and the pattern implies that the onsuccess
> handler needs to be called for me to be sure that the operation happened
> successfully.
>

I'm pretty sure this is just an oversight.  There's no mention of things
being async in the description, it returns exceptions, and deleteIndex
returns void.  I don't see any reason why deleteObjectStore would be
different.


> Regarding the createObjectStore, it returns immediately but the actual
> object store creation could happen asynchronously in the background. At
> least there is language in the spec that alludes to that fact:
>
> "In some implementations it's possible for the implementation to
> asynchronously run into problems creating the object store after the
> createObjectStore function has returned. Such implementations must still
> create and return a IDBObjectStore object. Instead, once the implementation
> realizes that creating the objectStore has failed, it must abort the
> transaction using the steps for aborting a transaction."
>

Sure, but besides the fact that the transaction may spuriously abort (which
is unfortunate), all other side effects should be hidden from the user.  We
should just spec that the name should immediately show up in the list.


> If we believe that the actual object store creation needs to happen
> synchronously to establish a common behavior between platforms, we should
> stipulate that in the spec.
>

The entire design revolves around ensuring you'll never block on disk I/O.
 So we definitely should not do this.

Back to the original issue, I like your statement of ensuring that the
> deleteObjectStore removes the objectStore name from the
> IDBDatabase.objectStoreNames immediately after it executes.
> If everyone else agrees, we should add some text or a note to the spec to
> capture this.
>

Making it return void should be enough.


>
> Israel
>
> On Wed, May 4, 2011 at 9:17 PM, Jeremy Orlow wrote:
> > Well, createObjectStore is synchronous, so that one's easy.  Everything
> happens at once in terms of side effects.
> >
> > As for delete: why is this asynchronous again?  It seems easiest just to
> make it sync unless there's some major problem with doing so.
> >
> > Either way, it seems that the change to objectStoreNames should either
> happen immediately or when firing the onsuccess event (i.e. not just at some
> random time in between).
> >
> > J
> > > On Wed, May 4, 2011 at 7:13 AM, Israel Hilerio 
> wrote:
> > > In looking at createObjectStore on IDBDatabase, it seems that we would
> have to update the IDBDatabase.objectStoreNames attribute on the client side
> after returning the IDBObjectStore.  Otherwise, it would be difficult
> > > to detect that an objectStore with the same name already exists and
> throw a CONSTRAINT_ERR exception.
> > >
> > > Following this pattern, would it make sense to update the
> IDBDatabase.objectStoreNames attribute on the client side after executing
> deleteObjectStore before the async operation is executed.  This would allow
> us to
> > > support scenarios like:
> > >
> > > var b = db.createObjectStore(B);
> > > var req = db.deleteObjectStore(B);
> > > b = db.createObjectStore(B);
> > >
> > > What do you think?
> > >
> > > Israel
>
>


Re: [IndexedDB] deleteObjectStore method and updates to IDBDatabase.objectStoreNames on the client

2011-05-03 Thread Jeremy Orlow
Well, createObjectStore is synchronous, so that one's easy.  Everything
happens at once in terms of side effects.

As for delete: why is this asynchronous again?  It seems easiest just to
make it sync unless there's some major problem with doing so.

Either way, it seems that the change to objectStoreNames should either
happen immediately or when firing the onsuccess event (i.e. not just at some
random time in between).

J

On Wed, May 4, 2011 at 7:13 AM, Israel Hilerio wrote:

> In looking at createObjectStore on IDBDatabase, it seems that we would have
> to update the IDBDatabase.objectStoreNames attribute on the client side
> after returning the IDBObjectStore.  Otherwise, it would be difficult to
> detect that an objectStore with the same name already exists and throw a
> CONSTRAINT_ERR exception.
>
> Following this pattern, would it make sense to update the
> IDBDatabase.objectStoreNames attribute on the client side after executing
> deleteObjectStore before the async operation is executed.  This would allow
> us to support scenarios like:
>
> var b = db.createObjectStore(B);
> var req = db.deleteObjectStore(B);
> b = db.createObjectStore(B);
>
> What do you think?
>
> Israel
>
>
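Israel's create/delete/create sequence works if the name bookkeeping is synchronous, which can be sketched as a simulation (not the real API):

```javascript
// Simulation of the synchronous bookkeeping discussed above (not the real
// API): createObjectStore and deleteObjectStore update objectStoreNames
// immediately, so the create/delete/create sequence succeeds.
class DbSim {
  constructor() { this.objectStoreNames = new Set(); }
  createObjectStore(name) {
    if (this.objectStoreNames.has(name)) throw new Error('CONSTRAINT_ERR'); // duplicate name
    this.objectStoreNames.add(name);
    return { name };
  }
  deleteObjectStore(name) {
    this.objectStoreNames.delete(name); // removal is visible immediately
  }
}

const db = new DbSim();
let b = db.createObjectStore('B');
db.deleteObjectStore('B');
b = db.createObjectStore('B');              // succeeds: name removed synchronously
console.log(db.objectStoreNames.has('B'));  // true
```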


Re: [IndexedDB] Closing on bug 9903 (collations)

2011-05-03 Thread Jeremy Orlow
On Wed, May 4, 2011 at 5:27 AM, Jonas Sicking  wrote:

> On Tue, May 3, 2011 at 12:19 AM, Keean Schupke  wrote:
> > The more I think about it, the more I want a user-specified comparison
> > function. Efficiency should not be an issue here - the engines should
> > tweak the JIT compiler to fix any efficiency issues. Just let the user
> > pass a closure (remember functions are first-class in JavaScript so this
> > is not a callback nor an event).
>
> I don't think we should do callbacks in the first version of this API.
> It gets very messy since we can't rely on the script function returning
> stable values.
>
> Additionally we'd either have to ask that the callback function is
> re-registered each time the database is opened, or somehow store a
> serialized copy of the callback function in the browser so that it's
> available the next time the database is opened. Neither of these
> things have been done in other APIs in the past, so if we hold up v1
> until we solve the challenges involved I think it will delay the
> release of a stable spec.
>
> So the choice here really is between only supporting some form of
> binary sorting, or supporting a built-in set of collations. Anything
> else will have to wait for version 2 in my opinion.
>

Agreed.  And I also agree with the logic behind punting to v2.

J


Re: [indexeddb] result attribute for IDBRequest is set to undefined when calling IDBObjectStore.clear()

2011-05-03 Thread Jeremy Orlow
Undefined is also the return value for void functions.  The result is
essentially the return value of our async methods.  And in most cases, the
behavior of each async method is just a transformation of the sync method
and vice versa.  So my thinking is that it should stay as it is.

J

On Mon, May 2, 2011 at 11:22 PM, Israel Hilerio wrote:

> After calling the clear() method on IDBObjectStore, the result of the
> IDBRequest is set to undefined according to the "steps for clearing an
> object store".
>
> However, the result property in IDBRequest says that the result value is
> undefined when the request results in an error: "This is undefined when the
> request resulted in an error."
>
> In IE, we've been using undefined to signal properties that are not
> available to developers and null to signal unassigned values.  It seems that
> null would be a better result value when the object store has been cleared.
>
> This would follow the same pattern we use in the deleteDatabase method
> where we return a null value for the result of the IDBRequest: "If the steps
> above are successful, the implementation must set the result of the request
> to null and fire a success event at the request."
>
> What do you think?
>
> Israel
>
>


Re: [IndexedDB] Design Flaws: Not Stateless, Not Treating Objects As Opaque

2011-03-31 Thread Jeremy Orlow
On Thu, Mar 31, 2011 at 11:24 AM, Keean Schupke  wrote:

> On 31 March 2011 18:17, Jeremy Orlow  wrote:
>
>> On Thu, Mar 31, 2011 at 11:09 AM, Keean Schupke  wrote:
>>
>>> On 31 March 2011 17:41, Jonas Sicking  wrote:
>>>
>>>> On Thu, Mar 31, 2011 at 1:32 AM, Joran Greef  wrote:
>>>> > On 31 Mar 2011, at 9:53 AM, Jonas Sicking wrote:
>>>> >
>>>> >> I previously have asked for a detailed proposal, but so far you have
>>>> >> not supplied one but instead keep referring to other unnamed database
>>>> >> APIs.
>>>> >
>>>> > I have already provided an adequate interface proposal for putObject
>>>> and deleteObject.
>>>>
>>>> That is hardly a comprehensive proposal, but rather just one small part
>>>> of it.
>>>>
>>>
>>> I wanted to make a few comments about these points :-
>>>
>>>
>>>>
>>>> I do really think the idea of not having the implementation keep track
>>>> of the set of indexes for a objectStore is a really interesting one.
>>>> As is the idea of not even having a fixed set of objectStores. However,
>>>> there are several problems that needs to be solved. In particular how
>>>> do you deal with collations?
>>>>
>>>
>>> no indexes, no object stores... well I for one prefer the
>>> "validate_object_store", "validate_index" approach, in that it can hide
>>> statefulness if necessary (like I do with RelationalDB) whilst presenting a
>>> stateless API. It also keeps the size of the put statements down.
>>>
>>>
>>>>
>>>> I.e. we have concluded that there are important use cases which
>>>> require using different collations for different indexes and
>>>> objectStores. Even for different indexes attached to the same
>>>> objectStore.
>>>>
>>>> Additionally, if we're getting rid of setVersion, how do we expect
>>>> pages dealing with the (application managed) schema changing while the
>>>> page has a connection open to the database?
>>>>
>>>
>>> 1 - there is no schema
>>> 2 - don't allow it to change whilst the database is open
>>>
>>> In reality a schema is implicitly tied to a code version. In other words
>>> the source code of the application assumes a certain schema. If the assumed
>>> schema and the schema in the DB do not match things are going to go very
>>> wrong very quickly. Schema changes _always_ accompany code changes
>>> (otherwise they are not schema changes just data changes). As such they
>>> never happen when a DB is open. The way I handle this in RelationalDB, by
>>> validating the actual schema against the source-code schema in the db-open
>>> (actually the method is called validate), is probably the best way to handle
>>> this. If the database does not exist we create it according to the schema.
>>> If it exists we check it matches the schema. If there is a difference we see
>>> if we can 'upgrade' the database automatically (certain changes like adding
>>> a new column with a default value can be done automatically), if we cannot
>>> automatically upgrade, we exit with an error - as allowing the program to run
>>> will result in corruption of the data already in the database. At this point
>>> it is up to the application to figure out how to upgrade the database (by
>>> opening one database with an old schema and another with a new schema)...
>>> There is no point in ever allowing a database to be opened with the wrong
>>> schema.
>>>
>>>
>>>> So pretty please, with sugar on top, please come up with a proposal
>>>> for the full API rather than bits and pieces.
>>>>
>>>> And I should mention that I have as an absolute requirement that you
>>>> should be able to specify collation by simply saying that you want to
>>>> use "en-US" or "sv-SV" sorting. Using callbacks or other means is ok
>>>> *in addition to this*, but callback mechanisms tend to be a lot more
>>>> complex since they have to deal with the callback doing all sorts of
>>>> evil things such as returning inconsistent results (think "return
>>>> Math.random()"), or simply do evil things like navigate the current
>>>> page, deleting the database, or modifying the record that is in the
>>>> process of being stored.
>>>>
>>>
>>> The core API only needs to deal with sorting binary-blob sort orders. A
>>> library wrapper could provide all the collation ordering goodness that
>>> people want. For example RelationalDB will have to deal with sorting orders,
>>> it does not need the browser to provide that functionality. In fact browser
>>> provided functionality may limit what can be done in libraries on top.
>>>
>>
>> This is difficult if not impossible to do.  See previous threads on the
>> matter.
>>
>> J
>>
>
> I can find a lot of stuff on collation, but not a lot about why it could
> not be done in a library. Could you summarise the reasons why this needs to
> be core functionality for me?
>

Sorry, but that stuff is paged out of my brain.  Pablo, can you?


> A library could chose to use an object store as meta-data to store the
> collation orders that it is using for various indexes for example.
>
>
> Cheers,
> Keean.
>
>


Re: [IndexedDB] Design Flaws: Not Stateless, Not Treating Objects As Opaque

2011-03-31 Thread Jeremy Orlow
On Thu, Mar 31, 2011 at 11:09 AM, Keean Schupke  wrote:

> On 31 March 2011 17:41, Jonas Sicking  wrote:
>
>> On Thu, Mar 31, 2011 at 1:32 AM, Joran Greef  wrote:
>> > On 31 Mar 2011, at 9:53 AM, Jonas Sicking wrote:
>> >
>> >> I previously have asked for a detailed proposal, but so far you have
>> >> not supplied one but instead keep referring to other unnamed database
>> >> APIs.
>> >
>> > I have already provided an adequate interface proposal for putObject and
>> deleteObject.
>>
>> That is hardly a comprehensive proposal, but rather just one small part of
>> it.
>>
>
> I wanted to make a few comments about these points :-
>
>
>>
>> I do really think the idea of not having the implementation keep track
>> of the set of indexes for a objectStore is a really interesting one.
> As is the idea of not even having a fixed set of objectStores. However,
>> there are several problems that needs to be solved. In particular how
>> do you deal with collations?
>>
>
> no indexes, no object stores... well I for one prefer the
> "validate_object_store", "validate_index" approach, in that it can hide
> statefulness if necessary (like I do with RelationalDB) whilst presenting a
> stateless API. It also keeps the size of the put statements down.
>
>
>>
>> I.e. we have concluded that there are important use cases which
>> require using different collations for different indexes and
>> objectStores. Even for different indexes attached to the same
>> objectStore.
>>
>> Additionally, if we're getting rid of setVersion, how do we expect
>> pages dealing with the (application managed) schema changing while the
>> page has a connection open to the database?
>>
>
> 1 - there is no schema
> 2 - don't allow it to change whilst the database is open
>
> In reality a schema is implicitly tied to a code version. In other words
> the source code of the application assumes a certain schema. If the assumed
> schema and the schema in the DB do not match things are going to go very
> wrong very quickly. Schema changes _always_ accompany code changes
> (otherwise they are not schema changes just data changes). As such they
> never happen when a DB is open. The way I handle this in RelationalDB, by
> validating the actual schema against the source-code schema in the db-open
> (actually the method is called validate), is probably the best way to handle
> this. If the database does not exist we create it according to the schema.
> If it exists we check it matches the schema. If there is a difference we see
> if we can 'upgrade' the database automatically (certain changes like adding
> a new column with a default value can be done automatically), if we cannot
> automatically upgrade, we exit with an error - as allowing the program to run
> will result in corruption of the data already in the database. At this point
> it is up to the application to figure out how to upgrade the database (by
> opening one database with an old schema and another with a new schema)...
> There is no point in ever allowing a database to be opened with the wrong
> schema.
>
>
>> So pretty please, with sugar on top, please come up with a proposal
>> for the full API rather than bits and pieces.
>>
>> And I should mention that I have as an absolute requirement that you
>> should be able to specify collation by simply saying that you want to
>> use "en-US" or "sv-SV" sorting. Using callbacks or other means is ok
>> *in addition to this*, but callback mechanisms tend to be a lot more
>> complex since they have to deal with the callback doing all sorts of
>> evil things such as returning inconsistent results (think "return
>> Math.random()"), or simply do evil things like navigate the current
>> page, deleting the database, or modifying the record that is in the
>> process of being stored.
>>
>
> The core API only needs to deal with sorting binary-blob sort orders. A
> library wrapper could provide all the collation ordering goodness that
> people want. For example RelationalDB will have to deal with sorting orders,
> it does not need the browser to provide that functionality. In fact browser
> provided functionality may limit what can be done in libraries on top.
>

This is difficult if not impossible to do.  See previous threads on the
matter.

J


>
>
>>
>> / Jonas
>>
>>
>
> Cheers,
> Keean.
>
>


Re: [IndexedDB] Design Flaws: Not Stateless, Not Treating Objects As Opaque

2011-03-31 Thread Jeremy Orlow
On Thu, Mar 31, 2011 at 5:41 AM, Joran Greef  wrote:

> On 31 Mar 2011, at 12:52 PM, Keean Schupke wrote:
>
> > I totally agree with everything so far...
> >
> >> 3. This requires an adjustment to the putObject and deleteObject
> interfaces (see previous threads).
> >
> > I disagree that a simple API change is the answer. The problem is
> architectural, not just a superficial API issue.
>
> Yes, for IndexedDB to be stateless with respect to application schema, one
> would need to:
>
> 1. Provide the application with a first-class means to manage indexes at
> time of putting/deleting objects.
>

I'm OK with doing this for v1 if the others are.  It doesn't seem like that
big of an addition and it would give a decent amount of additional
flexibility.


> 2. Treat objects as opaque (remove key path,


Key paths are quite useful.  I agree that making it possible to use them
statelessly is good, but I don't see any reason why making it 100% stateless
should be a goal.


> structured clone mechanisms


For sure, not going to happen.


> , application must provide an id and JSON value to put/delete calls,
> reduces serialization/deserialization overhead where application already has
> the object as a string).
>

I'm not sure why you think this would reduce overhead.


> 3. Remove setVersion (redundant, application migrates objects and indexes
> using transactions as it needs to).
> 4. Remove createIndex.
>

Like I said above, although I think we should make it possible to operate
more statelessly, I don't see a reason we need to remove stuff like this.
 Some users will find it more convenient to work this way.

J
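[Editorial note: a rough in-memory sketch of the pattern item 1 asks for — index entries maintained by the application at put/delete time, the way a library would do inside a single transaction. The function names and the single email index are illustrative only, not a proposed API.]

```javascript
// In-memory sketch of application-managed indexing: putObject and
// deleteObject keep a manual index in step with the data store.
const data = new Map();        // id -> object
const byEmail = new Map();     // email -> id (the manual index)

function putObject(id, obj) {
  const old = data.get(id);
  if (old) byEmail.delete(old.email);  // drop the stale index entry
  data.set(id, obj);
  byEmail.set(obj.email, id);
}

function deleteObject(id) {
  const old = data.get(id);
  if (old) {
    byEmail.delete(old.email);
    data.delete(id);
  }
}

putObject(1, { email: "a@example.com" });
putObject(1, { email: "b@example.com" }); // re-put: old entry removed
// byEmail now maps only "b@example.com" -> 1
```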


Re: [IndexedDB] Design Flaws: Not Stateless, Not Treating Objects As Opaque

2011-03-31 Thread Jeremy Orlow
On Thu, Mar 31, 2011 at 1:38 AM, Joran Greef  wrote:

> On 31 Mar 2011, at 9:34 AM, Jeremy Orlow wrote:
>
> > We have made an effort to understand other "contributions to the field".
> >
> > I'm not convinced that these are "essential database concepts" and having
> personally spent quite some time working with the API in JS and implementing
> it, I feel pretty confident that what we have for v1 is pretty solid.  There
> are definitely some things I wouldn't mind re-visiting or looking at closer,
> possibly even for v1, but they all seem reasonable to study further for v2
> as well.
> >
> > We've spent a lot of time over the last year and a half talking about
> IndexedDB.  But now it's shipping in Firefox 4 and soon Chrome 11.  So
> realistically v1 is not going to change much unless we are convinced that
> what's there is fundamentally broken.
> >
> > We intentionally limited the scope of v1, which is why we know there'll
> be a v2.  We can't solve all the problems at once, and the difficulty of
> speccing something is typically exponential to the size of the API.
> >
> > Maybe a constructive way to discuss this would be to look at what use
> cases will be difficult or impossible to achieve with the current design?
>
> Application-managed indices, for starters.


That's not a use case.


> I would consider that to be essential when designing indexed key/value
> stores, and I would consider that to be the contribution made by almost
> every other indexed key/value store to date. If we have to use IDB the way
> FriendFeed used MySQL to achieve application-managed indices then I would
> argue that the API is in fact "fundamentally broken" and we would be better
> off with an embedding of SQLite by Mozilla.
>
> Regarding "the difficulty of speccing something is typically exponential to
> the size of the API", if people want to build a Rube Goldberg device then
> they must deal with the spec issues of that.
>
> If we were provided with the primitives for an indexed key/value store with
> application-managed indices (as Nikunj suggested at the time), we would have
> been well out of the starting blocks by now, and issues such as "computed
> indexes", "indexing array values" etc. would have been non-issues.
>
> Summary:
>
> 1. There's a problem.
> 2. It can still be fixed with a minimum of fuss.
> 3. This requires an adjustment to the putObject and deleteObject interfaces
> (see previous threads).


Re: [IndexedDB] Design Flaws: Not Stateless, Not Treating Objects As Opaque

2011-03-31 Thread Jeremy Orlow
On Thu, Mar 31, 2011 at 12:16 AM, Joran Greef  wrote:

> On 31 Mar 2011, at 1:01 AM, Jonas Sicking wrote:
>
> > Anyhow, I do think that the idea of passing in index values at the
> > same time as a entry is created/modified is an interesting idea. And I
> > have said so in the past on this list. It's definitely something we
> > should consider for v2.
>
> > Oh, and if we did this, I wouldn't really know how to support things
> > like collations. Neither if you did collations using built in sets of
> > locales (like in Pablo's recent proposal), nor if you used some sort
> > of callback to do collation.
> >
> > / Jonas
>
> That's fine. You don't need to figure it out. Just look at how stateless
> databases have done it (or not done it) and do likewise.
>
> I submit to you that there is inadequate understanding of the concerns
> raised, hence the lack of urgency in trying to address them. That there is
> even a need for a "V2" is symptomatic of this.
>
> It may be a good idea to start looking at these things not as "interesting
> ideas" but as essential database concepts.
>
> If someone were trying to build some kind of transactional indexed key
> value store for the web, and they wanted to do a truly great job of it, they
> would certainly want to learn everything they could from databases that have
> made contributions to the field.
>

We have made an effort to understand other "contributions to the field".

I'm not convinced that these are "essential database concepts" and having
personally spent quite some time working with the API in JS and implementing
it, I feel pretty confident that what we have for v1 is pretty solid.  There
are definitely some things I wouldn't mind re-visiting or looking at closer,
possibly even for v1, but they all seem reasonable to study further for v2
as well.

We've spent a lot of time over the last year and a half talking about
IndexedDB.  But now it's shipping in Firefox 4 and soon Chrome 11.  So
realistically v1 is not going to change much unless we are convinced that
what's there is fundamentally broken.

We intentionally limited the scope of v1, which is why we know there'll be a
v2.  We can't solve all the problems at once, and the difficulty of speccing
something is typically exponential to the size of the API.

Maybe a constructive way to discuss this would be to look at what use cases
will be difficult or impossible to achieve with the current design?

J


Re: [IndexedDB] Any particular reason built-in properties are not indexable?

2011-03-21 Thread Jeremy Orlow
Indexing toString or general getters seems like a bad idea since they can
run arbitrary code.  Wouldn't removing the restriction allow for that?

J

On Mon, Mar 21, 2011 at 11:51 AM, Pablo Castro
wrote:

>  The spec today requires that the properties that key paths point at need to be
> enumerated (see 3.1.2 “Object Store”). Any particular reason for that? It
> would be reasonable to allow an index on say the “length” property of a
> string. Perhaps we’re opening the door for too much, so I wanted to double
> check so we make an explicit call one way or the other. Thoughts?
>
>
>
> Thanks
>
> -pablo
>
>
>


Re: [Bug 12321] New: Add compound keys to IndexedDB

2011-03-18 Thread Jeremy Orlow
On Fri, Mar 18, 2011 at 1:45 AM, Keean Schupke  wrote:

> I like BDB's solution. You have one primary key you cannot mess with (say
> an integer for fast comparisons) you can then add any number of secondary
> indexes. With a secondary index there is a callback to generate a binary
> blob that is used for indexing. The callback has access to all the fields of
> the object plus any info in the closure and can use that to generate the
> index data any way it likes.


We discussed this a while ago.  IIRC, we decided to look at something like
it for v2.  It sounds like a good, general way to solve the problem though.
 And given the other discussion in this thread, it sounds like maybe this
isn't a super important use case to fix in the mean time.

J
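[Editorial note: as an illustration of Keean's proposal quoted below, here is a rough in-memory sketch of a callback-defined secondary index. The lowercase transform stands in for real collation logic, and all names are illustrative.]

```javascript
// Sketch of a BDB-style secondary index defined by a callback: the
// callback derives the index key from each stored object. Lowercasing
// is a placeholder for a genuine collation mapping.
function lexOrder(field) {
  return (object) => String(object[field]).toLowerCase();
}

function createIndex(store, deriveKey) {
  // store: Map of primaryKey -> object; returns [indexKey, primaryKey]
  // pairs sorted by the derived key.
  return [...store.entries()]
    .map(([pk, obj]) => [deriveKey(obj), pk])
    .sort((a, b) => (a[0] < b[0] ? -1 : a[0] > b[0] ? 1 : 0));
}

const store = new Map([
  [1, { name: "Zysk" }],
  [2, { name: "andersson" }],
  [3, { name: "Brown" }],
]);
const idx = createIndex(store, lexOrder("name"));
// idx: [["andersson", 2], ["brown", 3], ["zysk", 1]]
```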


> This has the advantage of supporting any indexing schemes the user may
> wish to implement (by writing a custom callback), whilst allowing a few
> common options to be provided for the user (say a hash of all fields, or a
> field name, international char set, and direction captured in a closure).
> The user gets the power, the core implementation is simple, and common cases
> can be implemented in an easy to use way.
>
> var lex_order = function(field, charset, direction) {return
> function(object) {/* map indexed 'field' to blob in required order */ return
> key;};};
>
> Then create a new index:
>
> object_store.validate_index(1, lex_order('name', 'us',
> 'ascending')).on_done(function(status) {/* status ok or error */})
>
> validate index checks if the requested secondary index (1) exists, if it
> does not it creates the index and calls the done callback (with a status
> code indicating successful creation), if it does and it passes some
> validation checks it also calls the done callback (with a status code
> indicating successful validation). If anything goes wrong with either the
> creation or validation of the secondary index if would call the done
> callback with an error status code.
>
>
> Cheers,
> Keean.
>
>
> On 18 March 2011 02:03, Jeremy Orlow  wrote:
>
>> Here's one ugliness with A: There's no way to specify ascending
>> or descending for the individual components of the key.  So there's no way
>> for me to open a cursor that looks at one field ascending and the other
>> field descending.  In addition, I can't think of any easy/good ways to hack
>> around this.
>>
>> Any thoughts on how we could address this use case?
>>
>> J
>>
>> On Wed, Mar 16, 2011 at 4:50 PM,  wrote:
>>
>>> http://www.w3.org/Bugs/Public/show_bug.cgi?id=12321
>>>
>>>   Summary: Add compound keys to IndexedDB
>>>   Product: WebAppsWG
>>>   Version: unspecified
>>>  Platform: PC
>>>OS/Version: All
>>>Status: NEW
>>>  Severity: normal
>>>  Priority: P2
>>> Component: Indexed Database API
>>>AssignedTo: dave.n...@w3.org
>>>ReportedBy: jor...@chromium.org
>>> QAContact: member-webapi-...@w3.org
>>>CC: m...@w3.org, public-webapps@w3.org
>>>
>>>
>>> From the thread "[IndexedDB] Compound and multiple keys" by Jonas
>>> Sicking,
>>> we're going to go with both options A and B.
>>>
>>> =
>>>
>>> Hi IndexedDB fans (yay!!),
>>>
>>> Problem description:
>>>
>>> One of the current shortcomings of IndexedDB is that it doesn't
>>> support compound indexes. I.e. indexing on more than one value. For
>>> example it's impossible to index on, and therefore efficiently search
>>> for, firstname and lastname in an objectStore which stores people. Or
>>> index on to-address and date sent in an objectStore holding emails.
>>>
>>> The way this is traditionally done is that multiple values are used as
>>> key for each individual entry in an index or objectStore. For example
>>> the CREATE INDEX statement in SQL can list multiple columns, and
>>> CREATE TABLE statement can list several columns as PRIMARY KEY.
>>>
>>> There have been a couple of suggestions how to do this in IndexedDB
>>>
>>> Option A)
>>> When specifying a key path in createObjectStore and createIndex, allow
>>> an array of key-paths to be specified. Such as
>>>
>>> store = db.createObjectStore("mystore", ["firstName", "lastName"]);
>>> store.add({firstName: "Benny", las

Re: [Bug 12321] New: Add compound keys to IndexedDB

2011-03-17 Thread Jeremy Orlow
- jessica

On Thu, Mar 17, 2011 at 7:03 PM, Jeremy Orlow  wrote:

> Here's one ugliness with A: There's no way to specify ascending
> or descending for the individual components of the key.  So there's no way
> for me to open a cursor that looks at one field ascending and the other
> field descending.  In addition, I can't think of any easy/good ways to hack
> around this.
>
> Any thoughts on how we could address this use case?
>
> J
>
> On Wed, Mar 16, 2011 at 4:50 PM,  wrote:
>
>> http://www.w3.org/Bugs/Public/show_bug.cgi?id=12321
>>
>>   Summary: Add compound keys to IndexedDB
>>   Product: WebAppsWG
>>   Version: unspecified
>>  Platform: PC
>>OS/Version: All
>>Status: NEW
>>  Severity: normal
>>  Priority: P2
>> Component: Indexed Database API
>>AssignedTo: dave.n...@w3.org
>>ReportedBy: jor...@chromium.org
>> QAContact: member-webapi-...@w3.org
>>CC: m...@w3.org, public-webapps@w3.org
>>
>>
>> From the thread "[IndexedDB] Compound and multiple keys" by Jonas
>> Sicking,
>> we're going to go with both options A and B.
>>
>> =
>>
>> Hi IndexedDB fans (yay!!),
>>
>> Problem description:
>>
>> One of the current shortcomings of IndexedDB is that it doesn't
>> support compound indexes. I.e. indexing on more than one value. For
>> example it's impossible to index on, and therefore efficiently search
>> for, firstname and lastname in an objectStore which stores people. Or
>> index on to-address and date sent in an objectStore holding emails.
>>
>> The way this is traditionally done is that multiple values are used as
>> key for each individual entry in an index or objectStore. For example
>> the CREATE INDEX statement in SQL can list multiple columns, and
>> CREATE TABLE statement can list several columns as PRIMARY KEY.
>>
>> There have been a couple of suggestions how to do this in IndexedDB
>>
>> Option A)
>> When specifying a key path in createObjectStore and createIndex, allow
>> an array of key-paths to be specified. Such as
>>
>> store = db.createObjectStore("mystore", ["firstName", "lastName"]);
>> store.add({firstName: "Benny", lastName: "Zysk", age: 28});
>> store.add({firstName: "Benny", lastName: "Andersson", age: 63});
>> store.add({firstName: "Charlie", lastName: "Brown", age: 8});
>>
>> The records are stored in the following order
>> "Benny", "Andersson"
>> "Benny", "Zysk"
>> "Charlie", "Brown"
>>
>> Similarly, createIndex accepts the same syntax:
>> store.createIndex("myindex", ["lastName", "age"]);
>>
>> Option B)
>> Allowing arrays as an additional data type for keys.
>> store = db.createObjectStore("mystore", "fullName");
>> store.add({fullName: ["Benny", "Zysk"], age: 28});
>> store.add({fullName: ["Benny", "Andersson"], age: 63});
>> store.add({fullName: ["Charlie", "Brown"], age: 8});
>>
>> Also allows out-of-line keys using:
>> store = db.createObjectStore("mystore");
>> store.add({age: 28}, ["Benny", "Zysk"]);
>> store.add({age: 63}, ["Benny", "Andersson"]);
>> store.add({age: 8}, ["Charlie", "Brown"]);
>>
>> (the sort order here is the same as in option A).
>>
>> Similarly, if an index used a keyPath which points to an
>> array, this would create an entry in the index which used a compound
>> key consisting of the values in the array.
>>
>> There are of course advantages and disadvantages with both options.
>>
>> Option A advantages:
>> * Ensures that at objectStore/index creation time the number of keys
>> are known. This allows the implementation to create and optimize the
>> index using this information. This is especially useful in situations
>> when the indexedDB implementation is backed by a SQL database which
>> uses columns as a way to represent multiple keys.
>> * Easy to use when key values appear as separate properties on the
>> stored object.
>> * Obvious how to sort entries.
>>
>> Option A disadvantages:
>> * Doesn't allow compound out-of-line keys.
>> * Requires multiple pro

Re: [Bug 12321] New: Add compound keys to IndexedDB

2011-03-17 Thread Jeremy Orlow
Here's one ugliness with A: There's no way to specify ascending
or descending for the individual components of the key.  So there's no way
for me to open a cursor that looks at one field ascending and the other
field descending.  In addition, I can't think of any easy/good ways to hack
around this.

Any thoughts on how we could address this use case?

J
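[Editorial note: one library-level workaround for the mixed-direction gap described above — illustrative only, not part of either option — is to invert the descending component when encoding the compound key, so the index's single ascending order produces the mixed ordering. Numeric components can simply be negated; strings would need per-code-unit complementing, omitted here.]

```javascript
// Hypothetical encoding of [lastName asc, age desc] as a compound key
// whose plain ascending sort yields the mixed order: the numeric
// component is negated to reverse its direction.
function encodeKey(lastName, age) {
  return [lastName, -age];
}

const rows = [
  { lastName: "Andersson", age: 63 },
  { lastName: "Andersson", age: 28 },
  { lastName: "Brown", age: 8 },
];

const sorted = rows.slice().sort((a, b) => {
  const ka = encodeKey(a.lastName, a.age);
  const kb = encodeKey(b.lastName, b.age);
  return ka[0] < kb[0] ? -1 : ka[0] > kb[0] ? 1 : ka[1] - kb[1];
});
// sorted: Andersson/63, Andersson/28, Brown/8 (age descending per name)
```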

On Wed, Mar 16, 2011 at 4:50 PM,  wrote:

> http://www.w3.org/Bugs/Public/show_bug.cgi?id=12321
>
>   Summary: Add compound keys to IndexedDB
>   Product: WebAppsWG
>   Version: unspecified
>  Platform: PC
>OS/Version: All
>Status: NEW
>  Severity: normal
>  Priority: P2
> Component: Indexed Database API
>AssignedTo: dave.n...@w3.org
>ReportedBy: jor...@chromium.org
> QAContact: member-webapi-...@w3.org
>CC: m...@w3.org, public-webapps@w3.org
>
>
> From the thread "[IndexedDB] Compound and multiple keys" by Jonas Sicking,
> we're going to go with both options A and B.
>
> =
>
> Hi IndexedDB fans (yay!!),
>
> Problem description:
>
> One of the current shortcomings of IndexedDB is that it doesn't
> support compound indexes. I.e. indexing on more than one value. For
> example it's impossible to index on, and therefore efficiently search
> for, firstname and lastname in an objectStore which stores people. Or
> index on to-address and date sent in an objectStore holding emails.
>
> The way this is traditionally done is that multiple values are used as
> key for each individual entry in an index or objectStore. For example
> the CREATE INDEX statement in SQL can list multiple columns, and
> CREATE TABLE statement can list several columns as PRIMARY KEY.
>
> There have been a couple of suggestions how to do this in IndexedDB
>
> Option A)
> When specifying a key path in createObjectStore and createIndex, allow
> an array of key-paths to be specified. Such as
>
> store = db.createObjectStore("mystore", ["firstName", "lastName"]);
> store.add({firstName: "Benny", lastName: "Zysk", age: 28});
> store.add({firstName: "Benny", lastName: "Andersson", age: 63});
> store.add({firstName: "Charlie", lastName: "Brown", age: 8});
>
> The records are stored in the following order
> "Benny", "Andersson"
> "Benny", "Zysk"
> "Charlie", "Brown"
>
> Similarly, createIndex accepts the same syntax:
> store.createIndex("myindex", ["lastName", "age"]);
>
> Option B)
> Allowing arrays as an additional data type for keys.
> store = db.createObjectStore("mystore", "fullName");
> store.add({fullName: ["Benny", "Zysk"], age: 28});
> store.add({fullName: ["Benny", "Andersson"], age: 63});
> store.add({fullName: ["Charlie", "Brown"], age: 8});
>
> Also allows out-of-line keys using:
> store = db.createObjectStore("mystore");
> store.add({age: 28}, ["Benny", "Zysk"]);
> store.add({age: 63}, ["Benny", "Andersson"]);
> store.add({age: 8}, ["Charlie", "Brown"]);
>
> (the sort order here is the same as in option A).
>
> Similarly, if an index used a keyPath which points to an
> array, this would create an entry in the index which used a compound
> key consisting of the values in the array.
>
> There are of course advantages and disadvantages with both options.
>
> Option A advantages:
> * Ensures that at objectStore/index creation time the number of keys
> are known. This allows the implementation to create and optimize the
> index using this information. This is especially useful in situations
> when the indexedDB implementation is backed by a SQL database which
> uses columns as a way to represent multiple keys.
> * Easy to use when key values appear as separate properties on the
> stored object.
> * Obvious how to sort entries.
>
> Option A disadvantages:
> * Doesn't allow compound out-of-line keys.
> * Requires multiple properties to be added to stored objects if the
> components of the key isn't available there (for example if it's
> out-of-line or stored in an array).
>
> Option B advantages:
> * Allows compound out-of-line keys.
> * Easy to use when the key values are handled as an array by other
> code. Both when using in-line and out-of-line keys.
> * Maximum flexibility since you can combine single-value keys and
> compound keys in one objectStore, as well as arrays of different
> length (we couldn't come up with use cases for this though).
>
> Option B disadvantages:
> * Requires defining sorting between single values and arrays, as well
> as between arrays of different length.
> * Requires a single property to be added to stored objects if the key
> isn't available there (for example if it's stored as separate
> properties).
>
> There is of course a third alternative: Do both Option A and Option B.
> This brings most of the advantages of both options, but also many of
> the disadvantages of both. It also adds a lot of API surface which
> could conflict with future features, so it's something I'd really like
> to avoid.
>
>
> Questions:
>
>
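[Editorial note: Option B's main cost is defining a total order across scalars and arrays. A rough sketch of the comparison this message describes; the type ranks follow the ordering the IndexedDB spec eventually adopted (number < date < string < array, with binary keys omitted here), with arrays compared element-wise and a shorter prefix sorting first.]

```javascript
// Rank each key type; lower rank sorts first regardless of value.
function keyRank(k) {
  if (typeof k === "number") return 0;
  if (k instanceof Date) return 1;
  if (typeof k === "string") return 2;
  if (Array.isArray(k)) return 3;
  throw new TypeError("not a valid key");
}

function compareKeys(a, b) {
  const ra = keyRank(a);
  const rb = keyRank(b);
  if (ra !== rb) return ra - rb;   // different types: rank decides
  if (ra === 3) {                  // arrays: element-wise, prefix first
    const n = Math.min(a.length, b.length);
    for (let i = 0; i < n; i++) {
      const c = compareKeys(a[i], b[i]);
      if (c !== 0) return c;
    }
    return a.length - b.length;
  }
  if (ra === 1) return a.getTime() - b.getTime(); // dates by timestamp
  return a < b ? -1 : a > b ? 1 : 0;              // numbers, strings
}
```

Sorting the example records from Option B with this comparator yields the Andersson / Zysk / Brown order given in the message.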

Re: [IndexedDB] Spec changes for international language support

2011-03-17 Thread Jeremy Orlow
FWIW, this maybe would have been better off as its own thread.  :-)

On Thu, Mar 17, 2011 at 3:37 PM, Pablo Castro wrote:

>
> From: Jonas Sicking [mailto:jo...@sicking.cc]
> Sent: Tuesday, March 08, 2011 1:11 PM
>
> >> All in all, is there anything preventing adding the API Pablo suggests
> >> in this thread to the IndexedDB spec drafts?
>
> I wanted to propose a couple of specific tweaks to the initial proposal and
> then unless I hear pushback start editing this into the spec.
>
> From reading the details on this thread I'm starting to realize that
> per-database collations won't do it. What did it for me was the example that
> has a fuzzier matching mode (case/accent insensitive). This is exactly the
> kind of index I would want to sort people's names in my address book, but
> most likely not the index I'll want to use for my primary key.
>
> Refactoring the API to accommodate for this would mean to move the
> setCollation() method and the collation property to the object store and
> index objects. If we were willing to live without the ability to change them
> we could take collation as one of the optional parameters to
> createObjectStore()/createIndex() and reduce a bit of surface area...I don't
> have a strong preference there. In any case both would use BCP47 names as
> discussed in this thread (as Jonas pointed out, implementations can also do
> their thing as long as they don't interfere with BCP47).
>

I'm fine with this.  Another (I believe) related use case I ran into today
is wanting collation to be case insensitive.


> Another piece of feedback I heard consistently as I discussed this with
> various folks at Microsoft is the need to be able to pick up what the UA
> would consider the collation that's most appropriate for the user
> environment (derived from settings, page language or whatever). We could
> support this by introducing a special value that  you can pass to
> setCollation that indicates "pick whatever is the right for the
> environment's language right now". Given that there is no other way for
> people to discover the user preference on this, I think this is pretty
> important.
>

This seems useful even outside of the context of IndexedDB.  It should
probably be added to some other spec.  I'm fine adding it to ours for now
and adding an "issue" along with it.  But if so, please do shop it around.

J
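[Editorial note: for context, BCP47-tagged collation is already exposed to scripts via Intl.Collator (an ICU-enabled runtime is assumed), which is roughly the ordering facility under discussion. Passing no locale resolves the environment's default, much like the sentinel value Pablo proposes.]

```javascript
// The same strings order differently per BCP47 locale: Swedish sorts
// "ä" after "z", German sorts it with "a" -- which is why per-index
// collation matters.
const sv = new Intl.Collator("sv");
const de = new Intl.Collator("de");
const words = ["ärlig", "zebra", "apa"];

const svOrder = [...words].sort(sv.compare); // ["apa", "zebra", "ärlig"]
const deOrder = [...words].sort(de.compare); // ["apa", "ärlig", "zebra"]

// Omitting the locale argument resolves the environment's default,
// analogous to a "use the user's locale" sentinel.
const environmentLocale = new Intl.Collator().resolvedOptions().locale;
```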


[IndexedDB] Enabling multiple values in a single index to correspond to a single ObjectStore entry

2011-03-16 Thread Jeremy Orlow
We've talked about this off and on for a while now, but given that we've
made a decision on how to handle compound keys, I think we can finally come
to closure on this.

There are several basic use cases.
1) You have a "names" field in the object that you're storing and you want
to be able to search for any one of the names in the same index.  For
example, I can search for "Rose" and I'll get people whose first, middle, or
even last name is Rose.
2) Similarly, you might have multiple phone numbers that apply to a single
person and thus you'd want to search all of them at the same time.
3) You're implementing something (for example Gmail) which allows multiple
labels to correspond to a particular object, and where you want to be able
to efficiently look it up.

All of these can be worked around by creating another ObjectStore, and this
is what one would do in SQL (create another table).  But given that this is
a common thing to do, I think we should add it to the API itself.

I talked to Jonas and Ben Turner a bit about this, and I believe we were
leaning towards adding a new option to createIndex to enable this.  The best
name we could come up with was "multi index" (so multiIndex?) but maybe
someone else can come up with something better?

Thanks,
Jeremy
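[Editorial note: the option proposed here is what the IndexedDB spec ultimately shipped as the multiEntry flag on createIndex. Below is a minimal in-memory sketch of the behavior — not the IndexedDB API itself — in which each element of an array value gets its own index entry pointing back at the record's primary key.]

```javascript
// Build a "multi" index: array values contribute one index entry per
// element, so a record is findable under any of its names/tags.
function buildMultiIndex(records, keyPath) {
  const index = new Map(); // indexed value -> Set of primary keys
  for (const [primaryKey, record] of records) {
    const value = record[keyPath];
    const entries = Array.isArray(value) ? value : [value];
    for (const v of entries) {
      if (!index.has(v)) index.set(v, new Set());
      index.get(v).add(primaryKey);
    }
  }
  return index;
}

const people = new Map([
  [1, { names: ["Rose", "Marie", "Tyler"] }],
  [2, { names: ["Jack", "Rose"] }],
  [3, { names: ["Martha"] }],
]);
const byName = buildMultiIndex(people, "names");
// byName.get("Rose") -> Set {1, 2}
```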


Re: Indexed Database API

2011-03-15 Thread Jeremy Orlow
Filed: http://www.w3.org/Bugs/Public/show_bug.cgi?id=12310

On Fri, Mar 4, 2011 at 5:45 PM, Jeremy Orlow  wrote:

> On Fri, Mar 4, 2011 at 5:36 PM, Jonas Sicking  wrote:
>
>> A few observations:
>>
>> 1. It seems like a fairly rare use case to have to jump to item #100
>> without first also observing item 1-99. When showing a paged view
>> which lets the user to jump directly to, say, page 5 it can certainly
>> happen, but the page could optimize for the case when the user first
>> goes through page 1-4.
>> 2. Since it's not a common case, adding support for it just on
>> cursors, rather than cursors and objectStores, seems enough. Would be
>> as simple as adding a .advance (or similarly named function) which
>> simply takes an integer. I don't see that we need to support jumping
>> in a arbitrary direction since we don't allow continue() in an
>> arbitrary direction.
>> 3. We do have a bit of a hole in our index-cursor API. While you can
>> start the cursor at an arbitrary key, you can only start it at the
>> first entry with that key in the case when there are duplicate keys.
>> So if you iterate an index 10 records at a time, even if you never
>> need to skip any entries, you can't always resume where you left off,
>> even if you know the exact key+primaryKey for the record you want to
>> resume at.
>>
>
> I agree with all of this reasoning.
>
>
>> 4. While I agree that count() seems like a useful function, my concern
>> is that people might think it's a cheap operation.
>
>
> This is my concern with your "getAll" function, btw.  :-)
>
>
>> Getting the count
>> for a full objectStore or index should be quick, but getting the count
>> for a given key range (such as on a cursor) seems like it could be
>> expensive. My b-tree knowledge isn't the best, but isn't there a risk
>> that you have to linearly walk the full keyrange? Or does b-trees keep
>> an exact count of record in each node? Even if linear walking is
>> required, there might not be much we can do, and the best we can do is
>> to document that this is a slow operation.
>>
>
> I don't think we should limit our thinking to btrees, but it seems as
> though implementations could keep track of the number of children under a
> particular node, in which case it should be faster than linear.
>
> COUNT(*) is a very popular function in SQL (even with WHERE clauses).  It
> seems like there will be some cases where the implementor truly does need a
> count but not the data.  And given that at least some implementations should
> be able to optimize this, I think we should give them an API call.
>
> J
>
> On Fri, Mar 4, 2011 at 2:32 PM, Jeremy Orlow  wrote:
>> > On Fri, Mar 4, 2011 at 1:38 PM, ben turner 
>> wrote:
>> >>
>> >> Firefox does lazily deserialize cursor values, so the slowdown you're
>> >> noticing is most likely due to us preserving the order of request
>> >> callbacks by queuing every continue() call in line with the rest of
>> >> the transaction. Jonas had proposed a faster, high performance cursor
>> >> that did not respect this ordering, maybe that's all that you'd need.
>> >>
>> >> However, a few thoughts:
>> >>
>> >> 1. How do you know Page 5 even exists? We haven't exposed a count()
>> >> function yet...
>> >> 2. I think we should expose a count() function!
>> >> 3. Maybe we should expose a getAt(long index,  direction);
>> >> function on indexes and objectStores?
>> >
>> > A count function might make sense.
>> > But in this case, you could just jump forward to page 5 and see if you
>> get
>> > an error or not.
>> > I'd lean towards just adding jumping forward to cursors for now though.
>>  If
>> > getting a single item at some position is popular, then we can always
>> add
>> > it.
>> > Let's avoid adding prioritization of cursor.continue calls unless we
>> have
>> > absolutely no other choice.
>> > J
>> >>
>> >> On Fri, Mar 4, 2011 at 12:11 PM, Olli Pettay 
>> >> wrote:
>> >> > On 03/02/2011 09:02 AM, Ben Dilts wrote:
>> >> >>
>> >> >> Why is there no mechanism for paging results, a la SQL's "limit"?
>>  If I
>> >> >> want entries in positions 140-159 from an index, I have to call
>> >> >> continue() on a cursor 139 times, which in turn unserializes 139
>> >> >> objects
>> >> >> from my store that I don't care about, which in FF4 is making a
>> lookup
>> >> >> in IndexedDB sometimes take many seconds for even a few records.
>> >> >
>> >> > Sounds like there is something to optimize in the implementation.
>> >> > Have you filed a bug
>> >> > https://bugzilla.mozilla.org/enter_bug.cgi?product=Core
>> >> > component DOM ?
>> >> > If not, please do so and attach a *minimal* testcase.
>> >> >
>> >> >
>> >> > Thanks,
>> >> >
>> >> >
>> >> > -Olli
>> >> >
>> >> >
>> >> >  It
>> >> >>
>> >> >> makes no sense--am I just missing something in the spec?
>> >> >>
>> >> >>
>> >> >> Ben Dilts
>> >> >
>> >> >
>> >>
>> >
>> >
>>
>
>
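[Editorial note: to make the cost question above concrete — once an index keeps its keys in sorted order, a range count needs only two boundary searches rather than a linear walk, and a B-tree augmented with per-node record counts achieves the same logarithmic bound. A sketch over a flat sorted array standing in for the index:]

```javascript
// Index of the first element not less than `key` (binary search).
function lowerBound(keys, key) {
  let lo = 0;
  let hi = keys.length;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (keys[mid] < key) lo = mid + 1;
    else hi = mid;
  }
  return lo;
}

// Count keys k with from <= k < to in O(log n): two boundary
// searches, no walk over the matching records.
function countRange(keys, from, to) {
  return lowerBound(keys, to) - lowerBound(keys, from);
}

const keys = [1, 3, 3, 7, 9, 12, 15]; // sorted ascending
// countRange(keys, 3, 10) -> 4  (matches 3, 3, 7, 9)
```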


Re: [IndexedDB] Compound and multiple keys

2011-03-09 Thread Jeremy Orlow
Keean/Charles:

I definitely think the more people involved the better, but let's not get
too hung up on the specifics of PostgreSQL, BDB, etc.  Our goal here should
be to make a great API for web developers while balancing practical
considerations like how difficult it'll be to implement and/or use
efficiently.

That said, I'm not understanding what your comments have to do with this
proposal.  Do you have specific concerns?

J

On Wed, Mar 9, 2011 at 12:55 AM, Keean Schupke  wrote:

> Getting pgsql people involved sounds a great idea. Having some more people
> to argue for formalised and standardised database APIs like SQL, and
> experience with relational operations and optimisation would be good (That
> is an assumption on my part, but then they are writing PostgreSQL not
> CouchDB). Do you know some people you could invite?
>
> More generally though, I think BerkeleyDB would make a much better target
> for IDB. I don't think it should be trying to be PostgreSQL or MySQL. I
> think it should implement a good low-level API like BerkeleyDB, with enough
> functionality to allow SQL to be implemented over the top.
>
> The problem with trying to implement IDB on top of PostgreSQL is that IDB
> has a very narrow interface, that does not support any of the powerful
> features of pgsql. It would give you the worst of both. BDB would make a
> much better implementation.
>
> Far more sensible would be to target the feature set of BDB for IDB, then
> PostgreSQL could be re-implemented in JavaScript on top.  (a massive and
> impractical task, but I am trying to express the relationship between high
> level and low level database APIs).
>
>
> If we wanted to go fully relational, and avoid the correctness problems
> with string processing SQL commands, take a look at my relational library,
> currently implemented on top of WebSQL but an IDB version is in the works:
> https://github.com/keean/RelationalDB
>
>
> Cheers,
> Keean.
>
>
> On 9 March 2011 04:10, Charles Pritchard  wrote:
>
>>  On 3/8/2011 6:12 PM, Jeremy Orlow wrote:
>>
>> On Tue, Mar 8, 2011 at 5:55 PM, Pablo Castro 
>> wrote:
>>
>>>
>>> From: public-webapps-requ...@w3.org [mailto:
>>> public-webapps-requ...@w3.org] On Behalf Of Keean Schupke
>>> Sent: Tuesday, March 08, 2011 3:03 PM
>>>
>>> >> No objections here.
>>> >>
>>> >> Keean.
>>> >>
>>> >> On 8 March 2011 21:14, Jonas Sicking 
>>> >> wrote:
>>> >> On Mon, Mar 7, 2011 at 10:43 PM, Jeremy Orlow 
>>> wrote:
>>> >> > On Fri, Jan 21, 2011 at 1:41 AM, Jeremy Orlow 
>>> wrote:
>>>
>>> >> > After thinking about it a bunch and talking to others, I'm actually
>>> leaning
>>> >> > towards both option A and B.  Although this will be a little harder
>>> for
>>> >> > implementors, it seems like there are solid reasons why some users
>>> would
>>> >> > want to use A and solid reasons why others would want to use B.
>>> >> > Any objections to us going that route?
>>> >> Not from me. If I don't hear objections I'll write up a spec draft and
>>> >> attach it here before committing to the spec.
>>>
>>>  Option A is pretty well understood, I like that one.
>>>
>>> For option B, at some point we had a debate on whether when indexing an
>>> array value we should consider it a single key value or we should unfold it
>>> into multiple index records. The first option makes it very similar to A in
>>> that an array is just a composite value (it is quite a bit more painful to
>>> implement...), the second option is interesting in that it allows for new
>>> scenarios such as objects with an array for tags, where you want to look up
>>> by tag (even after doing options A and B as currently defined, in order to
>>> support multiple tags you'd need a second store that keeps the tags + key
>>> for the objects you want to tag). Is there any interest in that scenario?
>>>
>>
>>  Yes.  Once we're settled on this, I'm going to send an email on that.
>>  :-)  Option B won't get in the way of my proposal.
>>
>>  J
>>
>>
>> At some point, I really would like to get people from the PostgreSQL
>> project involved with indexeddb.
>>
>> They have a wealth of experience to bring to the discussion. For the
>> moment, like many "server-side" packages, they're at quite a distance from
>> the w3.
>>
>> FWIW, pgsql is a perfectly valid 'host' for idb calls.
>>
>>
>>
>


Re: [IndexedDB] Compound and multiple keys

2011-03-08 Thread Jeremy Orlow
On Tue, Mar 8, 2011 at 5:55 PM, Pablo Castro wrote:

>
> From: public-webapps-requ...@w3.org [mailto:public-webapps-requ...@w3.org]
> On Behalf Of Keean Schupke
> Sent: Tuesday, March 08, 2011 3:03 PM
>
> >> No objections here.
> >>
> >> Keean.
> >>
> >> On 8 March 2011 21:14, Jonas Sicking  wrote:
> >> On Mon, Mar 7, 2011 at 10:43 PM, Jeremy Orlow 
> wrote:
> >> > On Fri, Jan 21, 2011 at 1:41 AM, Jeremy Orlow 
> wrote:
> >> >>
> >> >> On Thu, Jan 20, 2011 at 6:29 PM, Tab Atkins Jr. <
> jackalm...@gmail.com>
> >> >> wrote:
> >> >>>
> >> >>> On Thu, Jan 20, 2011 at 10:12 AM, Keean Schupke 
> wrote:
> >> >>> > Compound primary keys are commonly used afaik.
> >> >>>
> >> >>> Indeed.  It's one of the common themes in the debate between natural
> >> >>> and synthetic keys.
> >> >>
> >> >> Fair enough.
> >> >> Should we allow explicit compound keys?  I.e myOS.put({...}, ['first
> >> >> name', 'last name'])?  I feel pretty strongly that if we do, we
> should
> >> >> require this be specified up-front when creating the objectStore.
>  I.e. add
> >> >> some additional parameter to the optional options object.  Otherwise,
> we'll
> >> >> force implementations to handle variable compound keys for just this
> one
> >> >> case, which seems kind of silly.
> >> >> The other option is to just disallow them.
> >> >
> >> > After thinking about it a bunch and talking to others, I'm actually
> leaning
> >> > towards both option A and B.  Although this will be a little harder
> for
> >> > implementors, it seems like there are solid reasons why some users
> would
> >> > want to use A and solid reasons why others would want to use B.
> >> > Any objections to us going that route?
> >> Not from me. If I don't hear objections I'll write up a spec draft and
> >> attach it here before committing to the spec.
>
> Option A is pretty well understood, I like that one.
>
> For option B, at some point we had a debate on whether when indexing an
> array value we should consider it a single key value or we should unfold it
> into multiple index records. The first option makes it very similar to A in
> that an array is just a composite value (it is quite a bit more painful to
> implement...), the second option is interesting in that it allows for new
> scenarios such as objects with an array for tags, where you want to look up
> by tag (even after doing options A and B as currently defined, in order to
> support multiple tags you'd need a second store that keeps the tags + key
> for the objects you want to tag). Is there any interest in that scenario?
>

Yes.  Once we're settled on this, I'm going to send an email on that.  :-)
 Option B won't get in the way of my proposal.

J


Re: [IndexedDB] Compound and multiple keys

2011-03-07 Thread Jeremy Orlow
On Fri, Jan 21, 2011 at 1:41 AM, Jeremy Orlow  wrote:

> On Thu, Jan 20, 2011 at 6:29 PM, Tab Atkins Jr. wrote:
>
>> On Thu, Jan 20, 2011 at 10:12 AM, Keean Schupke  wrote:
>> > Compound primary keys are commonly used afaik.
>>
>> Indeed.  It's one of the common themes in the debate between natural
>> and synthetic keys.
>>
>
> Fair enough.
>
> Should we allow explicit compound keys?  I.e myOS.put({...}, ['first name',
> 'last name'])?  I feel pretty strongly that if we do, we should require this
> be specified up-front when creating the objectStore.  I.e. add some
> additional parameter to the optional options object.  Otherwise, we'll force
> implementations to handle variable compound keys for just this one case,
> which seems kind of silly.
>
> The other option is to just disallow them.
>

After thinking about it a bunch and talking to others, I'm actually leaning
towards both option A and B.  Although this will be a little harder for
implementors, it seems like there are solid reasons why some users would
want to use A and solid reasons why others would want to use B.

Any objections to us going that route?

Thanks,
J
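[Editorial sketch: explicit compound keys ("option A") eventually entered the IndexedDB spec as an array keyPath declared up front when the object store is created, e.g. db.createObjectStore("people", { keyPath: ["first", "last"] }). The comparator below is a hypothetical illustration of element-wise array-key ordering, not the spec's key-comparison algorithm (which also covers dates and binary keys).]

```javascript
// Hypothetical sketch of how array (compound) keys sort element-wise.
// Simplified type order: numbers < strings < arrays.
function compareKeys(a, b) {
  var rank = { number: 0, string: 1 };
  var ra = Array.isArray(a) ? 2 : rank[typeof a];
  var rb = Array.isArray(b) ? 2 : rank[typeof b];
  if (ra !== rb) return ra - rb;             // different key types
  if (Array.isArray(a)) {
    for (var i = 0; i < Math.min(a.length, b.length); i++) {
      var c = compareKeys(a[i], b[i]);
      if (c !== 0) return c;                 // first differing element wins
    }
    return a.length - b.length;              // a prefix sorts first
  }
  return a < b ? -1 : a > b ? 1 : 0;
}

compareKeys(["Ada", "Lovelace"], ["Ada", "Turing"]); // negative
```

So records keyed on ["first", "last"] sort by first name, then by last name, which is the behavior compound primary keys are wanted for.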


[IndexedDB] What should be allowed as a key path?

2011-03-07 Thread Jeremy Orlow
As far as I recall, we never settled on how key path should be specified.
 Right now in Chrome, we allow any combination of .'s and static array
lookups.  So, for example, we allow "foo.bar[1][2].baz".  I don't remember
any specific use cases for the array lookups though, so I'm wondering if we
should just specify "<identifier> ["." <identifier>]*" to be the key path
specification.  Where <identifier> is whatever's allowed in JavaScript.  It
seems to me that this should support most use cases, should be easy to
understand and implement, and should leave us a lot of flexibility to
support other use cases as we may come up with them.
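[Editorial sketch: the grammar proposed above is easy to model. evaluateKeyPath is a hypothetical helper, not part of the IndexedDB API; it shows how an implementation might validate and resolve a dotted key path with no array lookups.]

```javascript
// Hypothetical key-path resolver for the grammar: identifier ("." identifier)*
function evaluateKeyPath(value, keyPath) {
  // Reject anything outside the proposed grammar (e.g. "foo.bar[1]").
  if (!/^[A-Za-z_$][\w$]*(\.[A-Za-z_$][\w$]*)*$/.test(keyPath)) {
    throw new Error("Invalid key path: " + keyPath);
  }
  // Walk the object one identifier at a time.
  return keyPath.split(".").reduce(function (obj, part) {
    return obj === undefined ? undefined : obj[part];
  }, value);
}

var record = { name: { first: "Ada", last: "Lovelace" } };
evaluateKeyPath(record, "name.last"); // "Lovelace"
```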

Thoughts?

J


Re: Indexed Database API

2011-03-04 Thread Jeremy Orlow
On Fri, Mar 4, 2011 at 5:36 PM, Jonas Sicking  wrote:

> A few observations:
>
> 1. It seems like a fairly rare use case to have to jump to item #100
> without first also observing item 1-99. When showing a paged view
> which lets the user to jump directly to, say, page 5 it can certainly
> happen, but the page could optimize for the case when the user first
> goes through page 1-4.
> 2. Since it's not a common case, adding support for it just on
> cursors, rather than cursors and objectStores, seems enough. Would be
> as simple as adding a .advance (or similarly named function) which
> simply takes an integer. I don't see that we need to support jumping
> in a arbitrary direction since we don't allow continue() in an
> arbitrary direction.
> 3. We do have a bit of a hole in our index-cursor API. While you can
> start the cursor at an arbitrary key, you can only start it at the
> first entry with that key in the case when there are duplicate keys.
> So if you iterate an index 10 records at a time, even if you never
> need to skip any entries, you can't always resume where you left off,
> even if you know the exact key+primaryKey for the record you want to
> resume at.
>

I agree with all of this reasoning.


> 4. While I agree that count() seems like a useful function, my concern
> is that people might think it's a cheap operation.


This is my concern with your "getAll" function, btw.  :-)


> Getting the count
> for a full objectStore or index should be quick, but getting the count
> for a given key range (such as on a cursor) seems like it could be
> expensive. My b-tree knowledge isn't the best, but isn't there a risk
> that you have to linearly walk the full keyrange? Or does b-trees keep
> an exact count of records in each node? Even if linear walking is
> required, there might not be much we can do, and the best we can do is
> to document that this is a slow operation.
>

I don't think we should limit our thinking to btrees, but it seems as though
implementations could keep track of the number of children under a
particular node, in which case it should be faster than linear.

COUNT(*) is a very popular function in SQL (even with WHERE clauses).  It
seems like there will be some cases where the implementor truly does need a
count but not the data.  And given that at least some implementations should
be able to optimize this, I think we should give them an API call.
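[Editorial sketch: the "keep track of the number of children under a particular node" idea above is the classic order-statistic augmentation. A plain BST stands in for a B-tree here for brevity; this is a hypothetical illustration, not any engine's actual structure.]

```javascript
// Each node tracks the size of its subtree.
function size(node) { return node ? node.size : 0; }

function insert(node, key) {
  if (!node) return { key: key, left: null, right: null, size: 1 };
  if (key < node.key) node.left = insert(node.left, key);
  else node.right = insert(node.right, key);
  node.size = 1 + size(node.left) + size(node.right);
  return node;
}

// Count keys strictly less than `bound` in O(height), not O(n):
// whenever a node is below the bound, its whole left subtree qualifies.
function countBelow(node, bound) {
  if (!node) return 0;
  if (node.key < bound) {
    return size(node.left) + 1 + countBelow(node.right, bound);
  }
  return countBelow(node.left, bound);
}

var root = null;
[5, 2, 8, 1, 3, 7, 9].forEach(function (k) { root = insert(root, k); });
countBelow(root, 7); // → 4 (the keys 1, 2, 3 and 5)
```

A count over a key range then decomposes into two such rank queries, which is why an implementation that tracks child counts can beat the linear walk of the key range.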

J

On Fri, Mar 4, 2011 at 2:32 PM, Jeremy Orlow  wrote:
> > On Fri, Mar 4, 2011 at 1:38 PM, ben turner 
> wrote:
> >>
> >> Firefox does lazily deserialize cursor values, so the slowdown you're
> >> noticing is most likely due to us preserving the order of request
> >> callbacks by queuing every continue() call in line with the rest of
> >> the transaction. Jonas had proposed a faster, high performance cursor
> >> that did not respect this ordering, maybe that's all that you'd need.
> >>
> >> However, a few thoughts:
> >>
> >> 1. How do you know Page 5 even exists? We haven't exposed a count()
> >> function yet...
> >> 2. I think we should expose a count() function!
> >> 3. Maybe we should expose a getAt(long index,  direction);
> >> function on indexes and objectStores?
> >
> > A count function might make sense.
> > But in this case, you could just jump forward to page 5 and see if you
> get
> > an error or not.
> > I'd lean towards just adding jumping forward to cursors for now though.
>  If
> > getting a single item at some position is popular, then we can always add
> > it.
> > Let's avoid adding prioritization of cursor.continue calls unless we have
> > absolutely no other choice.
> > J
> >>
> >> On Fri, Mar 4, 2011 at 12:11 PM, Olli Pettay 
> >> wrote:
> >> > On 03/02/2011 09:02 AM, Ben Dilts wrote:
> >> >>
> >> >> Why is there no mechanism for paging results, a la SQL's "limit"?  If
> I
> >> >> want entries in positions 140-159 from an index, I have to call
> >> >> continue() on a cursor 139 times, which in turn unserializes 139
> >> >> objects
> >> >> from my store that I don't care about, which in FF4 is making a
> lookup
> >> >> in IndexedDB sometimes take many seconds for even a few records.
> >> >
> >> > Sounds like there is something to optimize in the implementation.
> >> > Have you filed a bug
> >> > https://bugzilla.mozilla.org/enter_bug.cgi?product=Core
> >> > component DOM ?
> >> > If not, please do so and attach a *minimal* testcase.
> >> >
> >> >
> >> > Thanks,
> >> >
> >> >
> >> > -Olli
> >> >
> >> >
> >> >  It
> >> >>
> >> >> makes no sense--am I just missing something in the spec?
> >> >>
> >> >>
> >> >> Ben Dilts
> >> >
> >> >
> >>
> >
> >
>


Re: Indexed Database API

2011-03-04 Thread Jeremy Orlow
On Fri, Mar 4, 2011 at 1:38 PM, ben turner  wrote:

> Firefox does lazily deserialize cursor values, so the slowdown you're
> noticing is most likely due to us preserving the order of request
> callbacks by queuing every continue() call in line with the rest of
> the transaction. Jonas had proposed a faster, high performance cursor
> that did not respect this ordering, maybe that's all that you'd need.
>
> However, a few thoughts:
>
> 1. How do you know Page 5 even exists? We haven't exposed a count()
> function yet...
> 2. I think we should expose a count() function!
> 3. Maybe we should expose a getAt(long index,  direction);
> function on indexes and objectStores?
>

A count function might make sense.

But in this case, you could just jump forward to page 5 and see if you get
an error or not.

I'd lean towards just adding jumping forward to cursors for now though.  If
getting a single item at some position is popular, then we can always add
it.

Let's avoid adding prioritization of cursor.continue calls unless we have
absolutely no other choice.

J

On Fri, Mar 4, 2011 at 12:11 PM, Olli Pettay 
> wrote:
> > On 03/02/2011 09:02 AM, Ben Dilts wrote:
> >>
> >> Why is there no mechanism for paging results, a la SQL's "limit"?  If I
> >> want entries in positions 140-159 from an index, I have to call
> >> continue() on a cursor 139 times, which in turn unserializes 139 objects
> >> from my store that I don't care about, which in FF4 is making a lookup
> >> in IndexedDB sometimes take many seconds for even a few records.
> >
> > Sounds like there is something to optimize in the implementation.
> > Have you filed a bug
> https://bugzilla.mozilla.org/enter_bug.cgi?product=Core
> > component DOM ?
> > If not, please do so and attach a *minimal* testcase.
> >
> >
> > Thanks,
> >
> >
> > -Olli
> >
> >
> >  It
> >>
> >> makes no sense--am I just missing something in the spec?
> >>
> >>
> >> Ben Dilts
> >
> >
>
>


Re: Indexed Database API

2011-03-04 Thread Jeremy Orlow
On Fri, Mar 4, 2011 at 11:33 AM, Ben Dilts  wrote:

> Jeremy,
>
> Thanks for the reply!  However, my indices are not typically unique,
> contiguous numbers.  For example, I have an index on an item's "saved" date,
> as a MySQL-style date/time string.  These dates are not necessarily unique,
> and are certainly not contiguous.  So if a user is currently viewing the
> first 20 items in this object store, and would like to jump to page 5 (items
> 81-100), how would I go about that?  I don't know what key value is in the
> 81st position in the index.  In fact, the key value in position 81 may also
> occupy positions 80 and 82--if I skip to that key value, I may end up in a
> slightly wrong place.
>

If you're jumping beyond what you've already looked at, then yeah...the
current API is probably not sufficient.

I wouldn't mind adding an option to openCursor to start the cursor some
number of items forward of the first element in the key range.  I also
wouldn't mind adding some sort of "jumpForward" method to IDBCursor.
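[Editorial sketch of the problem described above: with duplicate index keys, continue(key) cannot express "jump to position N". The entries array is a hypothetical stand-in for an index's ordered records.]

```javascript
// An index ordered by a non-unique "saved" date key.
var entries = [
  { key: "2011-03-01", primaryKey: 1 },
  { key: "2011-03-02", primaryKey: 2 },
  { key: "2011-03-02", primaryKey: 3 },
  { key: "2011-03-04", primaryKey: 4 }
];

// continue(key) lands on the FIRST record whose key >= the given key...
function seekByKey(key) {
  return entries.findIndex(function (e) { return e.key >= key; });
}

// ...so seeking to the key stored at position 2 actually lands at
// position 1, because position 1 shares the same key.
seekByKey(entries[2].key); // → 1, not 2
```

This is why a positional skip (what later shipped in the spec as IDBCursor.advance) is the cleaner tool for paging than key-based continue.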

J


> On Fri, Mar 4, 2011 at 11:33 AM, Jeremy Orlow  wrote:
>
>> On Tue, Mar 1, 2011 at 11:02 PM, Ben Dilts  wrote:
>>
>>> Why is there no mechanism for paging results, a la SQL's "limit"?  If I
>>> want entries in positions 140-159 from an index, I have to call continue()
>>> on a cursor 139 times, which in turn unserializes 139 objects from my store
>>> that I don't care about, which in FF4 is making a lookup in IndexedDB
>>> sometimes take many seconds for even a few records.  It makes no sense--am I
>>> just missing something in the spec?
>>
>>
>> Just use cursor.continue() with a key parameter to skip the cursor ahead
>> to where you care about.
>>
>> J
>>
>
>


Re: Indexed Database API

2011-03-04 Thread Jeremy Orlow
On Tue, Mar 1, 2011 at 11:02 PM, Ben Dilts  wrote:

> Why is there no mechanism for paging results, a la SQL's "limit"?  If I
> want entries in positions 140-159 from an index, I have to call continue()
> on a cursor 139 times, which in turn unserializes 139 objects from my store
> that I don't care about, which in FF4 is making a lookup in IndexedDB
> sometimes take many seconds for even a few records.  It makes no sense--am I
> just missing something in the spec?


Just use cursor.continue() with a key parameter to skip the cursor ahead to
where you care about.

J


Re: [IndexedDB] Two Real World Use-Cases

2011-03-01 Thread Jeremy Orlow
On Tue, Mar 1, 2011 at 7:34 AM, Joran Greef  wrote:

> I have been following the development behind IndexedDB with interest. Thank
> you all for your efforts.
>
> I understand that the initial version of IndexedDB will not support
> indexing array values.
>
> May I suggest an alternative derived from my home-brew server database
> evolved from experience using MySql, WebSql, LocalStorage, CouchDb, Tokyo
> Cabinet and Redis?
>
> 1. Be able to put an object and pass an array of index names which must
> reference the object. This may remove the need for a complicated indexing
> spec (perhaps the reason why this issue has been pushed into the future) and
> give developers all the flexibility they need.
>

You're talking about having multiple entries in a single index that point
towards the same primary key?  If so, then I strongly agree, and I think
others agree as well.  It's mostly a question of syntax.  A while ago we
brainstormed a couple possibilities.  I'll try to send out a proposal this
week.  I think this + compound keys should probably be our last v1 features
though.  (Though they almost certainly won't make Chrome 11 or Firefox 4,
unfortunately, hopefully they'll be done in the next version of each, and
hopefully that release will be fairly soon after for both.)
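[Editorial sketch: "multiple entries in a single index that point towards the same primary key" later took shape in the spec as multiEntry indexes, e.g. store.createIndex("by_tag", "tags", { multiEntry: true }). indexEntriesFor is a hypothetical illustration of the unfolding, not a spec algorithm.]

```javascript
// Hypothetical: compute the index records produced for one stored object.
function indexEntriesFor(primaryKey, indexedValue, multiEntry) {
  if (Array.isArray(indexedValue) && multiEntry) {
    // Unfold: one index record per array element, all pointing at the
    // same primary key, so a lookup by any one tag finds the object.
    return indexedValue.map(function (element) {
      return { key: element, primaryKey: primaryKey };
    });
  }
  // Otherwise the whole value (array or not) is a single index record.
  return [{ key: indexedValue, primaryKey: primaryKey }];
}

indexEntriesFor(7, ["work", "urgent"], true);
// → [{ key: "work", primaryKey: 7 }, { key: "urgent", primaryKey: 7 }]
```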


> 2. Be able to intersect and union indexes. This covers a tremendous amount
> of ground in terms of authorization and filtering.
>

Our plan was to punt some sort of join language to v2.  Could you give a
more concrete proposal for what we'd add?  It'd make it easier to see if
it's something realistic for v1 or not.

As you mention below, you can always do this in JS if necessary.  And
although I know it's not ideal, I think it's the right tradeoff in terms of
making it practical for browser vendors to get v1 out the door fairly fast.


> These two needs are critical.
>
> Without them, I will either carry on using WebSql for as long as possible,
> or be forced to use IndexedDb as a simple key value store and layer my own
> indexing on top.
>
> I am writing an email application and have to deal with secondary indexes
> of up to 4,000,000 keys. It would not be ideal to do intersects and unions
> on these indexes in the application layer.
>
> Regards
>
> Joran Greef
>


Re: [IndexedDB] success/error events

2011-02-23 Thread Jeremy Orlow
Jonas: any idea?  I assume this is just a typo?

On Wed, Feb 23, 2011 at 3:30 PM, Glenn Maynard  wrote:

> Just wanted to ping this question: why does IDB 3.2.2 fire success and
> error events "at each Window object", rather than at the IDBRequest
> itself?
>
> (It would make sense for error events on an IDBRequest that aren't
> cancelled to be re-fired at the Window--or rather, the
> IDBEnvironment--for global error trapping, but it seems like they're
> not fired at the IDBRequest at all.)
>
>
> On Wed, Feb 9, 2011 at 11:21 PM, Glenn Maynard  wrote:
> > I was looking over the spec on success and error events some more while
> > considering this, and I'm deeply confused: 3.2.2 describes the success
> and
> > error events being fired at the Window ("at each Window object") rather
> then
> > at the request, which seems bizarre.  I feel like this is an embarrassing
> > spec-novice question, but could someone clue me in on what's happening
> here?
>
> --
> Glenn Maynard
>
>


Re: [Bug 11348] New: [IndexedDB] Overhaul of the event model

2011-02-22 Thread Jeremy Orlow
On Sat, Feb 19, 2011 at 8:46 PM, Jonas Sicking  wrote:

> On Fri, Feb 18, 2011 at 5:58 PM, Jeremy Orlow  wrote:
> > If an exception is unhandled in an IDB event handler, we abort the
> > transaction.  Should we continue firing the other handlers when this
> > happens, though?
>
> What do you mean by "other handlers"? The other handlers for that same
> event? If so, I would say we should so that we're sticking with the
> DOM Events spec.
>
> > And should preventDefault prevent the abort?
>
> preventDefault usually prevents the default action of the event. The
> abort isn't the default action, so I would say no. (It also seems a
> bit weird that calling preventDefault on a success event would prevent
> an abort).
>

So if any of the event handlers doesn't catch an exception, there's no way
to keep the transaction from aborting?

J


Re: [Bug 11348] New: [IndexedDB] Overhaul of the event model

2011-02-18 Thread Jeremy Orlow
If an exception is unhandled in an IDB event handler, we abort the
transaction.  Should we continue firing the other handlers when this
happens, though?  And should preventDefault prevent the abort?

J

On Tue, Feb 15, 2011 at 11:52 AM, David Grogan  wrote:

>
>
> On Mon, Feb 14, 2011 at 11:15 PM, Jonas Sicking  wrote:
>
>> On Mon, Feb 14, 2011 at 7:53 PM, Jeremy Orlow 
>> wrote:
>> > On Mon, Feb 14, 2011 at 7:36 PM, David Grogan 
>> wrote:
>> >>
>> >>
>> >> On Thu, Feb 10, 2011 at 5:58 PM, Jeremy Orlow 
>> wrote:
>> >>>
>> >>> On Thu, Jan 27, 2011 at 5:14 PM, Jonas Sicking 
>> wrote:
>> >>>>
>> >>>> On Wed, Jan 26, 2011 at 11:47 AM, Jeremy Orlow 
>> >>>> wrote:
>> >>>> > What's the current thinking in terms of events that we're firing?
>>  I
>> >>>> > remember we talked about this a bit, but I don't remember the
>> >>>> > conclusion and
>> >>>> > I can't find it captured anywhere.
>> >>>> > Here's a brain dump of the requirements as I remember them:
>> >>>> > * Everything should have a source attribute.
>> >>>> > * Everything done in the context of a transaction should have a
>> >>>> > transaction
>> >>>> > attribute.  (Probably even errors, which I believe is not the
>> current
>> >>>> > case.)
>> >>>> > * Only success events should have a result.
>> >>>> > * Only error events should have a code and a message... or should
>> they
>> >>>> > just
>> >>>> > have an error attribute which holds an IDBDatabaseError object?
>>  (If
>> >>>> > it's
>> >>>> > the former, then do we even need an interface for IDBDatabaseError
>> to
>> >>>> > be
>> >>>> > defined?)
>> >>>> > * IIRC, Jonas suggested at some point that maybe there should be
>> >>>> > additional
>> >>>> > attributes beyond just the source and/or objects should link to
>> their
>> >>>> > parents.  (The latter probably makes the most sense, right?  If so,
>> >>>> > I'll bug
>> >>>> > it.)
>> >>>> > Is there anything I'm missing?
>> >>>> > As far as I can tell, this means we need 5 events: an IDBEvent
>> (with
>> >>>> > source)
>> >>>> > and then error with transaction, error without, success with, and
>> >>>> > success
>> >>>> > without.  That seems kind of ugly though.
>> >>>> > Another possibility is that we could put a transaction attribute on
>> >>>> > IDBEvent
>> >>>> > that's null when there's no transaction.  And then error and
>> success
>> >>>> > would
>> >>>> > have their own subclasses.  To me, this sounds best.
>> >>>> > Thoughts?
>> >>>>
>> >>>> Actually, I was proposing something entirely different.
>> >>>>
>> >>>> IDBRequest should look like this:
>> >>>>
>> >>>> interface IDBRequest : EventTarget {
>> >>>
>> >>> For each, what do we do when it's not available?  Throw exception?
>> >>>  Return undefined?  Null?  Particularly in the errorCode case, it's
>> not
>> >>> clear to me what the right thing to do is.
>> >>>
>> >>
>> >> How do IDBVersionChangeEvent and its version attribute fit in to this
>> new
>> >> model?  Should we add a nullable version attribute to IDBRequest and
>> let the
>> >> function handling a blocked event check event.target.version?  Could we
>> add
>> >> a version attribute just to IDBVersionChangeRequest?
>> >
>> > Adding a "newVersion", "nextVersion", or something similar to
>> > IDBVersionChangeRequest seems like the best answer to me.  Simply adding
>> > "version" to it seems kind of confusing though.
>>
>> Adding it to the request won't help as the versionchange event is
>> fired at other databases, not at the request.
>
>
> It's fired at the request if the version_change transaction is blocked
> because other connections to the database remain open after receiving
> versionchange events, but I see what you mean.
>
>
>> Adding it to the request
>> is also not needed since the new version isn't something that is the
>> result of the request, it's something you specify when creating the
>> request.
>>
>> I think we can leave IDBVersionChangeEvent as it is, it's an entirely
>> different beast from success/error.
>>
>
> I'm on board with this.
>
>
>> / Jonas
>>
>
>


Re: [IndexedDB] More questions about IDBRequests always firing (WAS: Reason for aborting transactions)

2011-02-17 Thread Jeremy Orlow
On Thu, Feb 17, 2011 at 3:58 PM, Pablo Castro wrote:

>
> From: public-webapps-requ...@w3.org [mailto:public-webapps-requ...@w3.org]
> On Behalf Of Jeremy Orlow
> Sent: Thursday, February 17, 2011 11:51 AM
>
> >> On Thu, Feb 17, 2011 at 11:12 AM, Jonas Sicking 
> wrote:
> >> On Thu, Feb 17, 2011 at 11:02 AM, ben turner 
> wrote:
> >> >>> Also, what should we do when you enqueue a setVersion transaction
> and then
> >> >>> close the database handle?  Maybe an ABORT_ERR there too?
> >> >>
> >> >> Yeah, that'd make sense to me. Just like if you enque any other
> >> >> transaction and then close the db handle.
> >> >
> >> > We don't abort transactions that are already in progress when you call
> >> > db.close()... We just set a flag and prevent further transactions from
> >> > being created.
> >> Doh! Of course.
> >>
> >> If the setVersion transaction has started then we should definitely
> >> allow it finish, just like all other transactions. I don't have a
> >> strong opinion on if we should let the setVersion transaction start if
> >> it hasn't yet. Seems most consistent to let it, but if there's a
> >> strong reason not to I could be convinced.
> >>
> >> What if you have two database connections open and both do a setVersion
> transaction and one calls .close (to yield to the other)?  Neither can start
> until one or the other actually is closed.  If a database is closed (not
> just close pending) then I think we need to abort any blocked setVersion
> calls.  If one is already running, it should certainly be allowed to finish
> before we close the database.
>
> This sounds reasonable to me (special case and abort the transaction only
> for blocked setVersion transactions). We should capture it explicitly on the
> spec, it's the kind of little detail that's easy to forget.
>

Captured in http://www.w3.org/Bugs/Public/show_bug.cgi?id=12114


Re: [IndexedDB] More questions about IDBRequests always firing (WAS: Reason for aborting transactions)

2011-02-17 Thread Jeremy Orlow
On Thu, Feb 17, 2011 at 11:12 AM, Jonas Sicking  wrote:

> On Thu, Feb 17, 2011 at 11:02 AM, ben turner 
> wrote:
> >>> Also, what should we do when you enqueue a setVersion transaction and
> then
> >>> close the database handle?  Maybe an ABORT_ERR there too?
> >>
> >> Yeah, that'd make sense to me. Just like if you enque any other
> >> transaction and then close the db handle.
> >
> > We don't abort transactions that are already in progress when you call
> > db.close()... We just set a flag and prevent further transactions from
> > being created.
>
> Doh! Of course.
>
> If the setVersion transaction has started then we should definitely
> allow it finish, just like all other transactions. I don't have a
> strong opinion on if we should let the setVersion transaction start if
> it hasn't yet. Seems most consistent to let it, but if there's a
> strong reason not to I could be convinced.
>

What if you have two database connections open and both do a setVersion
transaction and one calls .close (to yield to the other)?  Neither can start
until one or the other actually is closed.  If a database is closed (not
just close pending) then I think we need to abort any blocked setVersion
calls.  If one is already running, it should certainly be allowed to finish
before we close the database.

J


[IndexedDB] More questions about IDBRequests always firing (WAS: Reason for aborting transactions)

2011-02-16 Thread Jeremy Orlow
On Wed, Feb 9, 2011 at 4:30 PM, Jonas Sicking  wrote:

> On Wed, Feb 9, 2011 at 4:03 PM, Jeremy Orlow  wrote:
> > Gotcha.  Does this mean that _every_ async request will fire an onerror
> or
> > onsuccess?  I guess I had forgotten about that (and assumed it was that
> it'd
> > fire either 0 or 1 times.)
>
> Yes. That's the idea. It's always nice to be able to rely on that
> you'll *always* get one of the two callbacks. That way you can put
> cleanup code in there and be sure that it always runs.
>

Will the IDBRequest always fire before the IDBTransaction's abort/complete
event fires?  (It seems like it should.)

Also, what should we do when you enqueue a setVersion transaction and then
close the database handle?  Maybe an ABORT_ERR there too?

J


Re: [Bug 11348] New: [IndexedDB] Overhaul of the event model

2011-02-14 Thread Jeremy Orlow
On Mon, Feb 14, 2011 at 7:36 PM, David Grogan  wrote:

>
>
> On Thu, Feb 10, 2011 at 5:58 PM, Jeremy Orlow  wrote:
>
>> On Thu, Jan 27, 2011 at 5:14 PM, Jonas Sicking  wrote:
>>
>>> On Wed, Jan 26, 2011 at 11:47 AM, Jeremy Orlow 
>>> wrote:
>>> > What's the current thinking in terms of events that we're firing?  I
>>> > remember we talked about this a bit, but I don't remember the
>>> conclusion and
>>> > I can't find it captured anywhere.
>>> > Here's a brain dump of the requirements as I remember them:
>>> > * Everything should have a source attribute.
>>> > * Everything done in the context of a transaction should have a
>>> transaction
>>> > attribute.  (Probably even errors, which I believe is not the current
>>> case.)
>>> > * Only success events should have a result.
>>> > * Only error events should have a code and a message... or should they
>>> just
>>> > have an error attribute which holds an IDBDatabaseError object?  (If
>>> it's
>>> > the former, then do we even need an interface for IDBDatabaseError to
>>> be
>>> > defined?)
>>> > * IIRC, Jonas suggested at some point that maybe there should be
>>> additional
>>> > attributes beyond just the source and/or objects should link to their
>>> > parents.  (The latter probably makes the most sense, right?  If so,
>>> I'll bug
>>> > it.)
>>> > Is there anything I'm missing?
>>> > As far as I can tell, this means we need 5 events: an IDBEvent (with
>>> source)
>>> > and then error with transaction, error without, success with, and
>>> success
>>> > without.  That seems kind of ugly though.
>>> > Another possibility is that we could put a transaction attribute on
>>> IDBEvent
>>> > that's null when there's no transaction.  And then error and success
>>> would
>>> > have their own subclasses.  To me, this sounds best.
>>> > Thoughts?
>>>
>>> Actually, I was proposing something entirely different.
>>>
>>> IDBRequest should look like this:
>>>
>>> interface IDBRequest : EventTarget {
>>>
>>
>> For each, what do we do when it's not available?  Throw exception?  Return
>> undefined?  Null?  Particularly in the errorCode case, it's not clear to me
>> what the right thing to do is.
>>
>>
>
> How do IDBVersionChangeEvent and its version attribute fit in to this new
> model?  Should we add a nullable version attribute to IDBRequest and let the
> function handling a blocked event check event.target.version?  Could we add
> a version attribute just to IDBVersionChangeRequest?
>

Adding a "newVersion", "nextVersion", or something similar to
IDBVersionChangeRequest seems like the best answer to me.  Simply adding
"version" to it seems kind of confusing though.

J


>>>attribute any result;
>>>attribute unsigned long errorCode;
>>>attribute DOMObject source;
>>>attribute IDBTransaction transaction;
>>>
>>
>>>const unsigned short LOADING = 1;
>>>const unsigned short DONE = 2;
>>>readonly attribute unsigned short readyState;
>>>
>>> attribute Function   onsuccess;
>>> attribute Function   onerror;
>>> };
>>>
>>> "success" and "error" events are plain Event objects, i.e. no
>>> indexedDB-specific properties.
>>>
>>> The advantage of this is:
>>> 1. Request objects are actually useful as representing the request.
>>> Consumers of a request can check what the readystate is and either get
>>> the .result or attach a event listener as appropriate. (As things
>>> stand now you can't really rely on .readyState. The only thing it
>>> tells you is if you got to the request too late or not. If you risk
>>> getting there too late you better rewrite your code)
>>> 2. Easier to implement a promises-style API on top of indexedDB.
>>> 3. More similar to for example XMLHttpRequest
>>>
>>> The downside is:
>>> 1. Have to use a bigger syntax to get to the result. "event.result"
>>> vs. "event.target.result".
>>>
>>> / Jonas
>>>
>>
>>
>
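[Editorial sketch of advantage 2 above: with result, errorCode, and readyState hanging off the request itself, a promises-style wrapper is direct. This assumes the proposed IDBRequest shape (DONE === 2); promisify is hypothetical, and a plain object stands in for a real IDBRequest.]

```javascript
// Hypothetical wrapper over the proposed IDBRequest shape.
function promisify(request) {
  return new Promise(function (resolve, reject) {
    if (request.readyState === 2 /* DONE */) {
      // Observed after completion: readyState + errorCode are enough,
      // no race with events that have already fired.
      if (request.errorCode) reject(request.errorCode);
      else resolve(request.result);
      return;
    }
    request.onsuccess = function () { resolve(request.result); };
    request.onerror = function () { reject(request.errorCode); };
  });
}

// Stand-in for a request observed after it already succeeded:
promisify({ readyState: 2, errorCode: 0, result: "row" })
  .then(function (value) { /* value === "row" */ });
```

Under the event-only model this wrapper cannot be written reliably, since a consumer that gets the request "too late" has no way to recover the result.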


Re: [Bug 11348] New: [IndexedDB] Overhaul of the event model

2011-02-11 Thread Jeremy Orlow
On Fri, Feb 11, 2011 at 11:38 AM, Jonas Sicking  wrote:

> On Fri, Feb 11, 2011 at 11:30 AM, ben turner 
> wrote:
> > It looks like I was wrong. Our current impl throws NOT_ALLOWED_ERR for
> > getting errorCode *and* result before readyState is set to DONE.
> >
> > And now that I think about it I think I like that best. If we returned
> > NO_ERR from errorCode before DONE then it seems to imply that the
> > request succeeded when the reality is we don't yet know. Checking
> > errorCode before DONE is most likely a bug in the page script just as
> > calling result before DONE, so I'm happy with throwing here.
> >
> > Sound ok?
>
> Ah, I thought that's what you were saying in your previous email :-)
>
> I.e. throw when it's almost surely a bug in the script (reading too
> early), and return 0/undefined once there is a result of some sort.
>
> Sounds ok to me.
>

Sounds good to me.
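As a quick sketch of the rule being agreed on here (against a hypothetical request object shaped like the spec of the time; `inspect` is an illustrative helper, not part of any API):

```javascript
// Reading .result/.errorCode before DONE throws (almost surely a page
// bug); once DONE, errorCode is 0 (the proposed NO_ERR) on success.
function inspect(request) {
  if (request.readyState !== request.DONE) {
    // Gecko's behavior discussed above: too-early access throws.
    throw new Error("NOT_ALLOWED_ERR: request not DONE");
  }
  return request.errorCode ? "error " + request.errorCode : request.result;
}
```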

J


Re: [Bug 11348] New: [IndexedDB] Overhaul of the event model

2011-02-11 Thread Jeremy Orlow
On Thu, Feb 10, 2011 at 7:06 PM, ben turner  wrote:

> > I think generally avoiding throwing exceptions is a good thing. So for
> > .errorCode I would say returning unidentified or 0 is the way to go.
>
> I would say we should add a code to IDBDatabaseException, NO_ERR = 0.
> Or else indicate somehow that 0 is reserved for "no exception". Then
> return that from errorCode.
>
> > But it does seem like a
> > pretty bad bug if you do access these properties before having a
> > result. So maybe exception is in fact better here.
>
> Definitely agreed. People will want to know that they're checking a
> result too early.
>

Is this the behavior shipping in ff4?

J


Re: [Bug 11348] New: [IndexedDB] Overhaul of the event model

2011-02-10 Thread Jeremy Orlow
On Thu, Jan 27, 2011 at 5:14 PM, Jonas Sicking  wrote:

> On Wed, Jan 26, 2011 at 11:47 AM, Jeremy Orlow 
> wrote:
> > What's the current thinking in terms of events that we're firing?  I
> > remember we talked about this a bit, but I don't remember the conclusion
> and
> > I can't find it captured anywhere.
> > Here's a brain dump of the requirements as I remember them:
> > * Everything should have a source attribute.
> > * Everything done in the context of a transaction should have a
> transaction
> > attribute.  (Probably even errors, which I believe is not the current
> case.)
> > * Only success events should have a result.
> > * Only error events should have a code and a messageor should they
> just
> > have an error attribute which holds an IDBDatabaseError object?  (If it's
> > the former, then do we even need an interface for IDBDatabaseError to be
> > defined?)
> > * IIRC, Jonas suggested at some point that maybe there should be
> additional
> > attributes beyond just the source and/or objects should link to their
> > parents.  (The latter probably makes the most sense, right?  If so, I'll
> bug
> > it.)
> > Is there anything I'm missing?
> > As far as I can tell, this means we need 5 events: an IDBEvent (with
> source)
> > and then error with transaction, error without, success with, and success
> > without.  That seems kind of ugly though.
> > Another possibility is that we could put a transaction attribute on
> IDBEvent
> > that's null when there's no transaction.  And then error and success
> would
> > have their own subclasses.  To me, this sounds best.
> > Thoughts?
>
> Actually, I was proposing something entirely different.
>
> IDBRequest should look like this:
>
> interface IDBRequest : EventTarget {
>

For each, what do we do when it's not available?  Throw exception?  Return
undefined?  Null?  Particularly in the errorCode case, it's not clear to me
what the right thing to do is.


>attribute any result;
>attribute unsigned long errorCode;
>attribute DOMObject source;
>attribute IDBTransaction transaction;
>
>const unsigned short LOADING = 1;
>const unsigned short DONE = 2;
>readonly attribute unsigned short readyState;
>
> attribute Function   onsuccess;
> attribute Function   onerror;
> };
>
> "success" and "error" events are plain Event objects, i.e. no
> indexedDB-specific properties.
>
> The advantage of this is:
> 1. Request objects are actually useful as representing the request.
> Consumers of a request can check what the readystate is and either get
> the .result or attach a event listener as appropriate. (As things
> stand now you can't really rely on .readyState. The only thing it
> tells you is if you got to the request too late or not. If you risk
> getting there too late you better rewrite your code)
> 2. Easier to implement a promises-style API on top of indexedDB.
> 3. More similar to for example XMLHttpRequest
>
> The downside is:
> 1. Have to use a bigger syntax to get to the result. "event.result"
> vs. "event.target.result".
>
> / Jonas
>
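Point 2 of the quoted proposal (layering a promises-style API on top) would look roughly like this, using today's Promise purely for illustration. It works because .readyState, .result, and .errorCode stay readable after DONE, so a consumer that attaches "too late" can still recover the outcome:

```javascript
// Hypothetical wrapper around a proposed-style IDBRequest.
function promisify(request) {
  return new Promise(function (resolve, reject) {
    if (request.readyState === request.DONE) {
      // Already finished: no further events will fire, read state directly.
      if (request.errorCode) reject(request.errorCode);
      else resolve(request.result);
      return;
    }
    request.onsuccess = function () { resolve(request.result); };
    request.onerror = function () { reject(request.errorCode); };
  });
}
```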


Re: [IndexedDB] Reason for aborting transactions

2011-02-09 Thread Jeremy Orlow
On Wed, Feb 9, 2011 at 5:54 PM, Jonas Sicking  wrote:

> On Wed, Feb 9, 2011 at 5:43 PM, Jeremy Orlow  wrote:
> > On Wed, Feb 9, 2011 at 5:37 PM, ben turner 
> wrote:
> >>
> >> > Normal exceptions have error messages that are not consistent across
> >> > implementations and are not localized.  What's the difference?
> >>
> >> These messages aren't part of any exception though, it's just some
> >> property on a transaction object. (None of our DOM exceptions, IDB or
> >> otherwise, have message properties btw, they're only converted to some
> >> message if they make it to the error console).
> >>
> >> > For stuff like internal errors, they seem especially important.
> >>
> >> You're thinking of having multiple messages for the INTERAL_ERROR_ABORT
> >> code?
> >
> > I think that'd be ideal, yes.  Since internal errors will be UA specific,
> > string matching wouldn't be so bad there.
> > If no one likes this idea, I'm happy hiding away the message in some
> > webkitAbortMessage attribute so it's super clear it's just us who
> implements
> > this.  (Speaking of which, maybe you guys should do that with getAll.)
>
> We'll definitely put getAll under a vendor prefix once we drop the
> "front door" prefix on .indexeddb.
>
> I'm with Ben here. I'd prefer to hide the message away under a vendor
> prefix (either now or once you drop the front door one) for now to
> gather feedback on how it'll be used.
>

It's common for people to do something like this:
indexeddb = indexeddb || moz_indexedDB || mozIndexedDB || webkitIndexedDB;
and then pretty much ignore which vendor's implementation they're using from
then on whenever possible, so I think it's worth doing a prefix on every
level of vendor-specific stuff.  So we'll definitely use a prefix for the
abort message.  And I'd encourage you to do the same with getAll if you can
before FF4 ships.
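The pattern above can be formalized as a small helper (a hypothetical sketch; the names here are illustrative, not from the spec):

```javascript
// Pick the first defined vendor-prefixed property off an object.
function vendor(obj, names) {
  for (var i = 0; i < names.length; i++) {
    if (obj[names[i]] !== undefined) return obj[names[i]];
  }
  return undefined;
}
// In a page: var idb = vendor(window, ["indexedDB", "mozIndexedDB", "webkitIndexedDB"]);
// The catch: after this line, the code no longer knows which vendor's
// implementation it got -- which is why per-feature prefixes
// (e.g. webkitGetAll) still matter.
```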

J


Re: [IndexedDB] Reason for aborting transactions

2011-02-09 Thread Jeremy Orlow
On Wed, Feb 9, 2011 at 5:37 PM, ben turner  wrote:

> > Normal exceptions have error messages that are not consistent across
> > implementations and are not localized.  What's the difference?
>
> These messages aren't part of any exception though, it's just some
> property on a transaction object. (None of our DOM exceptions, IDB or
> otherwise, have message properties btw, they're only converted to some
> message if they make it to the error console).
>
> > For stuff like internal errors, they seem especially important.
>
> You're thinking of having multiple messages for the INTERAL_ERROR_ABORT
> code?
>

I think that'd be ideal, yes.  Since internal errors will be UA specific,
string matching wouldn't be so bad there.

If no one likes this idea, I'm happy hiding away the message in some
webkitAbortMessage attribute so it's super clear it's just us who implements
this.  (Speaking of which, maybe you guys should do that with getAll.)

J


Re: [IndexedDB] Reason for aborting transactions

2011-02-09 Thread Jeremy Orlow
On Wed, Feb 9, 2011 at 5:17 PM, ben turner  wrote:

> Hm, Jeremy is right, If you want to look just at the transaction and
> see why it aborted you can't rely on errorCode. Ick.
>
> The only thing I'd change then is the abortMessage property. It's
> easier to tell why your transaction aborted with the error code, and
> I'd hate people doing string comparisons instead of checking the error
> code. And what about localization?
>

Normal exceptions have error messages that are not consistent across
implementations and are not localized.  What's the difference?

For stuff like internal errors, they seem especially important.

J


Re: [Bug 11948] New: index.openCursor's cursor should have a way to access the index's "value" (in addition to the index's key and objectStore's value)

2011-02-09 Thread Jeremy Orlow
On Wed, Feb 9, 2011 at 4:00 PM, Jonas Sicking  wrote:

> On Mon, Feb 7, 2011 at 3:55 PM, Jeremy Orlow  wrote:
> > On Mon, Feb 7, 2011 at 3:47 PM, Jonas Sicking  wrote:
> >>
> >> On Sat, Feb 5, 2011 at 11:02 AM, Jeremy Orlow 
> wrote:
> >> > On Fri, Feb 4, 2011 at 11:50 PM, Jonas Sicking 
> wrote:
> >> >>
> >> >> On Fri, Feb 4, 2011 at 3:30 PM, Jeremy Orlow 
> >> >> wrote:
> >> >> > We haven't used the term primary key too much in the spec, but I
> >> >> > think a
> >> >> > lot
> >> >> > might actually be more clear if we used it more.  And I think it'd
> >> >> > also
> >> >> > make
> >> >> > a good name here.  So I'm OK with that being the name we choose.
> >> >> > Here's another question: what do we set primaryKey to for cursors
> >> >> > opened
> >> >> > via
> >> >> > index.openKeyCursor and objectStore.openCursor?  It seems as though
> >> >> > setting
> >> >> > them to null/undefined could be confusing.  One possibility is to
> >> >> > have
> >> >> > .value and .primaryKey be the same thing for the former and .key
> and
> >> >> > .primaryKey be the same for the latter, but that too could be
> >> >> > confusing.
> >> >> >  (I
> >> >> > think we have this problem no matter what we name it, but if there
> >> >> > were
> >> >> > some
> >> >> > name that was more clear in these contexts, then that'd be a good
> >> >> > reason
> >> >> > to
> >> >> > consider it instead.)
> >> >> > J
> >> >> >
> >> >> > For objectStore.openCursor, if we went with primaryKey, then would
> we
> >> >> > set
> >> >> > both key and primaryKey to be the same thing?  Leaving it
> >> >> > undefined/null
> >> >> > seems odd.
> >> >>
> >> >> I've been pondering the same questions but so far no answer seems
> >> >> obviously best.
> >> >>
> >> >> One way to think about it is that it's good if you can use the same
> >> >> code to iterate over an index cursor as a objectStore cursor. For
> >> >> example to display a list of results in a table. This would indicate
> >> >> that for objectStore cursors .key and .primaryKey should have the
> same
> >> >> value. This sort of makes sense too since it means that a objectStore
> >> >> cursor is just a special case of an index cursor where the iterated
> >> >> index just happens to be the primary index.
> >> >>
> >> >> This would leave the index key-cursor. Here it would actually make
> >> >> sense to me to let .key be the index key, .primaryKey be the key in
> >> >> the objectStore, and .value be empty. This means that index cursors
> >> >> and index key-cursors work the same, with just .value being empty for
> >> >> the latter.
> >> >>
> >> >> So in summary
> >> >>
> >> >> objectStore.openCursor:
> >> >> .key = entry key
> >> >> .primaryKey = entry key
> >> >> .value = entry value
> >> >>
> >> >> index.openCursor:
> >> >> .key = index key
> >> >> .primaryKey = entry key
> >> >> .value = entry value
> >> >>
> >> >> index.openKeyCursor:
> >> >> .key = index key
> >> >> .primaryKey = entry key
> >> >> .value = undefined
> >> >>
> >> >>
> >> >> There are two bad things with this:
> >> >> 1. for an objectStore cursor .key and .primaryKey are the same. This
> >> >> does seem unneccesary, but I doubt it'll be a source of bugs or
> >> >> require people to write more code. I'm less worried about confusion
> >> >> since both properties are in fact keys.
> >> >
> >> > As long as we're breaking backwards compatibility in the name of
> >> > clarity, we
> >> > might as well change key to indexKey and keep it null undefined for
> >> > objectStore.openCursor I think.  This would eliminate the confusion.
> >> > If we do break
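Jonas's summary quoted above, written out as data (attribute names per the thread's proposal, not necessarily what the spec will settle on):

```javascript
// Proposed cursor attribute values for each cursor type.
var cursorAttributes = {
  "objectStore.openCursor": { key: "entry key", primaryKey: "entry key", value: "entry value" },
  "index.openCursor":       { key: "index key", primaryKey: "entry key", value: "entry value" },
  "index.openKeyCursor":    { key: "index key", primaryKey: "entry key", value: undefined }
};
```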

Re: [IndexedDB] Reason for aborting transactions

2011-02-09 Thread Jeremy Orlow
On Wed, Feb 9, 2011 at 3:22 PM, Jonas Sicking  wrote:

> On Tue, Feb 8, 2011 at 10:48 AM, Jeremy Orlow  wrote:
> > On Tue, Feb 8, 2011 at 10:38 AM, Jonas Sicking  wrote:
> >>
> >> On Tue, Feb 8, 2011 at 9:16 AM, Jeremy Orlow 
> wrote:
> >> > On Tue, Feb 8, 2011 at 2:21 AM, Jonas Sicking 
> wrote:
> >> >>
> >> >> On Mon, Feb 7, 2011 at 8:05 PM, Jeremy Orlow 
> >> >> wrote:
> >> >> > On Mon, Feb 7, 2011 at 7:36 PM, Jonas Sicking 
> >> >> > wrote:
> >> >> >>
> >> >> >> On Fri, Jan 28, 2011 at 4:33 PM, Jeremy Orlow <
> jor...@chromium.org>
> >> >> >> wrote:
> >> >> >> > We do that as well.
> >> >> >> > What's the best way to do it API wise?  Do we need to add an
> >> >> >> > IDBTransactionError object with error codes and such?
> >> >> >>
> >> >> >> I don't actually know. I can't think of a precedence. Usually you
> >> >> >> use
> >> >> >> different error codes for different errors, but here we want to
> >> >> >> distinguish a particular type of error (aborts) into several sub
> >> >> >> categories.
> >> >> >
> >> >> > I don't see how that's any different than what we're doing with the
> >> >> > onerror
> >> >> > error codes though?
> >> >>
> >> >> Hmm.. true.
> >> >>
> >> >> >> To make this more complicated, I actually think we're going to end
> >> >> >> up
> >> >> >> having to change a lot of error handling when things are all said
> >> >> >> and
> >> >> >> done. Error handling right now is sort of a mess since DOM
> >> >> >> exceptions
> >> >> >> are vastly different from JavaScript exceptions. Also DOM
> exceptions
> >> >> >> have a messy situation of error codes overlapping making it very
> >> >> >> easy
> >> >> >> to confuse a IDBDatabaseException with a DOMException with an
> >> >> >> overlapping error code.
> >> >> >>
> >> >> >> For details, see
> >> >> >>
> >> >> >>
> >> >> >>
> >> >> >>
> http://lists.w3.org/Archives/Public/public-script-coord/2010OctDec/0112.html
> >> >> >>
> >> >> >> So my gut feeling is that we'll have to revamp exceptions quite a
> >> >> >> bit
> >> >> >> before we unprefix our implementation. This is very unfortunate,
> but
> >> >> >> shouldn't be as big deal of a deal as many other changes as most
> of
> >> >> >> the time people don't have error handling code. Or at least not
> >> >> >> error
> >> >> >> handling code that differentiates the various errors.
> >> >> >>
> >> >> >> Unfortunately we can't make any changes to the spec here until
> >> >> >> WebIDL
> >> >> >> prescribes what the new exceptions should look like :(
> >> >> >>
> >> >> >> So to loop back to your original question, I think that the best
> way
> >> >> >> to expose the different types of aborts is by adding a .reason (or
> >> >> >> better named) property which returns a string or enum which
> >> >> >> describes
> >> >> >> the reason for the abort.
> >> >> >
> >> >> > Could we just add .abortCode, .abortReason, and constants for each
> >> >> > code
> >> >> > to
> >> >> > IDBTransaction?
> >> >>
> >> >> Why both? How are they different. I'd just go with the former to
> align
> >> >> with error codes.
> >> >
> >> > Sorry, I meant .abortMessage instead of .abortReason.  This would be
> >> > much
> >> > like normal error messages where we have a code that's standardized
> and
> >> > easy
> >> > for scripts to understand and then the message portion which is easy
> for
> >> > humans to understand but more ad-hoc.
> >> >
> >> >>
> >> >> > And 

Re: [IndexedDB] setVersion blocked on uncollected garbage IDBDatabases

2011-02-09 Thread Jeremy Orlow
On Wed, Feb 9, 2011 at 11:05 AM, Drew Wilson  wrote:

> This discussion reminds me of a similar issue with MessagePorts. The
> original MessagePort spec exposed GC behavior through the use of onclose
> events/closed attributes on MessagePorts. It turns out that on Chromium,
> there are situations where it's very difficult for us to GC MessagePorts (a
> port's reachability depends on the reachability of the entangled port on an
> entirely separate process), and so we just don't.
>
> My concern is that there may be situations like this with IDB - if at some
> point it's possible for events to be fired on an IDB instance (if we support
> triggers), you'll have a situation where the reachability of an IDB instance
> may depend on the reachability of that same DB in other processes. The net
> effect is that on multi-process/multi-heap platforms, we may not be able to
> GC databases, while on other platforms (which have a unified heap) we will
> be able to GC them. This will end up being a source of cross-browser
> incompatibility, because code will work just fine on platforms that are able
> to deterministically GC databases, but then will break on other platforms
> that cannot.
>
> (As an aside, Jeremy mentions that there may already be situations where we
> cannot GC databases today - I don't know the spec well enough to comment,
> though, so perhaps he can elaborate).
>

Yeah.  Talking to Drew made me realize that we (WebKit) already have a cycle
so that we probably can't collect IDBDatabase objects with event listeners
attached to it.  When there's a listener, we have to hold a reference to the
JavaScript wrapper since it's what holds onto the JavaScript function we
call.  But the wrapper holds a reference to our WebCore object.  We can
break the cycle only when we know that we're not going to call any more
events on it.  We know that when .close() is called.

Working around this as-is will be tricky, but it isn't really a spec problem.
 It does mean, though, that the developer will need to always call .close() or
ask the user to close the tab in order to ever be able to run a setVersion
transaction, at least for the time being in any WebKit browser.
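A sketch of the workaround implied here: close the connection explicitly so the listener/wrapper cycle can be broken and another tab's setVersion can proceed. This assumes a versionchange-style event as discussed elsewhere in the thread; `registerAutoClose` is a hypothetical helper.

```javascript
// Close the database as soon as another connection wants to upgrade it.
function registerAutoClose(db) {
  db.onversionchange = function () {
    db.close(); // after this, WebKit can collect the IDBDatabase wrapper
  };
}
```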


> In any case, I don't think that IDB should be the first place in the entire
> web platform where we expose GC behavior to clients.
>
> -atw
>
> On Tue, Feb 8, 2011 at 4:43 PM, Jonas Sicking  wrote:
>
>> On Tue, Feb 8, 2011 at 3:31 PM, Glenn Maynard  wrote:
>> > On Tue, Feb 8, 2011 at 4:01 PM, Jeremy Orlow 
>> wrote:
>> >>
>> >> I talked it over with Darin (Fisher), and he largely agreed with you
>> guys.
>> >>  I'll file a bug saying that after unload, all IDBDatabases attached to
>> that
>> >> document should be closed.
>> >
>> > What happens if a database is open in a page in the back-forward cache?
>> > That's incompatible with onunload having side-effects.
>> >
>> > I know the BF-cache is off-spec, but it's extremely useful and will
>> > hopefully find its way into the standard some day, so it'd be nice to
>> keep
>> > it in mind.
>> >
>> > I suppose the browser would discard whole pages from the BF-cache on
>> demand
>> > if required by a setVersion call.
>>
>> That's exactly what we do in Firefox. Implementations have to be able
>> to throw things out of the BF cache on command anyway (since you
>> generally want to limit the number of pages living in BF cache, and so
>> loading a new page often causes other pages to be thrown out), so it's
>> just a matter of calling into the same code here.
>>
>> / Jonas
>>
>>
>


Re: [IndexedDB] setVersion blocked on uncollected garbage IDBDatabases

2011-02-08 Thread Jeremy Orlow
On Tue, Feb 8, 2011 at 3:31 PM, Glenn Maynard  wrote:

> On Tue, Feb 8, 2011 at 4:01 PM, Jeremy Orlow  wrote:
>
>> I talked it over with Darin (Fisher), and he largely agreed with you guys.
>>  I'll file a bug saying that after unload, all IDBDatabases attached to that
>> document should be closed.
>
>
> What happens if a database is open in a page in the back-forward cache?
> That's incompatible with onunload having side-effects.
>
> I know the BF-cache is off-spec, but it's extremely useful and will
> hopefully find its way into the standard some day, so it'd be nice to keep
> it in mind.
>

As long as the behavior, as far as developers can tell, matches the spec, you
can do just about whatever you'd like behind the scenes.


> I suppose the browser would discard whole pages from the BF-cache on demand
> if required by a setVersion call.
>

That sounds like it'd more or less work.

J


Re: [IndexedDB] setVersion blocked on uncollected garbage IDBDatabases

2011-02-08 Thread Jeremy Orlow
I talked it over with Darin (Fisher), and he largely agreed with you guys.
 I'll file a bug saying that after unload, all IDBDatabases attached to that
document should be closed.

J

On Tue, Feb 8, 2011 at 11:51 AM, ben turner  wrote:

> I'm actually fine with keeping the setVersion from proceeding until
> the old database is collected. First, this is probably a bug in the
> web page, and the page should be fixed. Second, the new database that
> is waiting for setVersion to proceed will get an onblocked event, so
> the page should know that something is wrong.
>
> I really don't think this is that big of a deal, and certainly not
> worth changing the opt-in vs. opt-out behavior that we've settled on.
>
> -Ben
>


Re: [IndexedDB] Reason for aborting transactions

2011-02-08 Thread Jeremy Orlow
On Tue, Feb 8, 2011 at 11:37 AM, ben turner  wrote:

> > I think that's what Ben was suggesting.
>
> Yes. We already have ABORT_ERR, no reason we can't subdivide that
> since it's being overloaded. In fact I think it makes perfect sense.
>

That part of the spec seems completely broken (there are no "steps to abort
a transaction" in the spec and it's not clear how ABORT_ERR would be plugged
in).  Either way, ABORT_ERR should probably be removed.


> > Add the following to IDBTransaction:
>
> I'm really not a fan of making IDBTransaction more complicated. We
> already have a generic "tell me when something goes wrong, and why"
> mechanism via errorCode


errorCode is something on the IDBRequest object these days, right?  Clearly
we can't use that when we're in an abort handler to figure out why we were
aborted.  I'm pretty sure we need to add a code (no matter what we name it
and where the code enums live) to IDBTransaction.


> and onError/onAbort. Why add another one? If
> the name is confusing we could rename it to exceptionCode perhaps?
>
> >   readonly attribute abortMessage;
>
> We dropped errorMessage already, let's not add abortMessage.
>

We did?  When and why?  I think the text-based messages can be very useful
for debugging.  And they're there for exceptions.


> > And just set the message/code right before firing an abort event.
>
> And what happens when someone calls .abortCode before the transaction
> has finished? Or if the transaction succeeded?


What happens when you try to access the error code on IDBRequest today?
 (Serious question, I don't remember.)


> Or if the transaction
> failed for some other reason?


If it fails for any reason, it'll result in an abort.


> I think we'd probably want the same
> behavior as calling .errorCode, so again I think we should just roll
> the specific abort reasons into error codes and stick with our
> existing mechanism.
>

The existing mechanism doesn't fix the use case of being in an abort event
handler and wanting to know why you aborted.

J


Re: [IndexedDB] Reason for aborting transactions

2011-02-08 Thread Jeremy Orlow
On Tue, Feb 8, 2011 at 10:38 AM, Jonas Sicking  wrote:

> On Tue, Feb 8, 2011 at 9:16 AM, Jeremy Orlow  wrote:
> > On Tue, Feb 8, 2011 at 2:21 AM, Jonas Sicking  wrote:
> >>
> >> On Mon, Feb 7, 2011 at 8:05 PM, Jeremy Orlow 
> wrote:
> >> > On Mon, Feb 7, 2011 at 7:36 PM, Jonas Sicking 
> wrote:
> >> >>
> >> >> On Fri, Jan 28, 2011 at 4:33 PM, Jeremy Orlow 
> >> >> wrote:
> >> >> > We do that as well.
> >> >> > What's the best way to do it API wise?  Do we need to add an
> >> >> > IDBTransactionError object with error codes and such?
> >> >>
> >> >> I don't actually know. I can't think of a precedence. Usually you use
> >> >> different error codes for different errors, but here we want to
> >> >> distinguish a particular type of error (aborts) into several sub
> >> >> categories.
> >> >
> >> > I don't see how that's any different than what we're doing with the
> >> > onerror
> >> > error codes though?
> >>
> >> Hmm.. true.
> >>
> >> >> To make this more complicated, I actually think we're going to end up
> >> >> having to change a lot of error handling when things are all said and
> >> >> done. Error handling right now is sort of a mess since DOM exceptions
> >> >> are vastly different from JavaScript exceptions. Also DOM exceptions
> >> >> have a messy situation of error codes overlapping making it very easy
> >> >> to confuse a IDBDatabaseException with a DOMException with an
> >> >> overlapping error code.
> >> >>
> >> >> For details, see
> >> >>
> >> >>
> >> >>
> http://lists.w3.org/Archives/Public/public-script-coord/2010OctDec/0112.html
> >> >>
> >> >> So my gut feeling is that we'll have to revamp exceptions quite a bit
> >> >> before we unprefix our implementation. This is very unfortunate, but
> >> >> shouldn't be as big deal of a deal as many other changes as most of
> >> >> the time people don't have error handling code. Or at least not error
> >> >> handling code that differentiates the various errors.
> >> >>
> >> >> Unfortunately we can't make any changes to the spec here until WebIDL
> >> >> prescribes what the new exceptions should look like :(
> >> >>
> >> >> So to loop back to your original question, I think that the best way
> >> >> to expose the different types of aborts is by adding a .reason (or
> >> >> better named) property which returns a string or enum which describes
> >> >> the reason for the abort.
> >> >
> >> > Could we just add .abortCode, .abortReason, and constants for each
> code
> >> > to
> >> > IDBTransaction?
> >>
> >> Why both? How are they different. I'd just go with the former to align
> >> with error codes.
> >
> > Sorry, I meant .abortMessage instead of .abortReason.  This would be much
> > like normal error messages where we have a code that's standardized and
> easy
> > for scripts to understand and then the message portion which is easy for
> > humans to understand but more ad-hoc.
> >
> >>
> >> > And maybe revisit in the future?
> >>
> >> Yes. I think we need to wait for webidl to solidify a bit here before
> >> we do anything.
> >
> > I think we should put something in our spec in the mean time, but once
> > WebIDL solidifies then we can revisit and try to match what's decided
> there.
> >
> > On Tue, Feb 8, 2011 at 8:07 AM, ben turner wrote:
> >>
> >> Why not just expand our list of error codes to have multiple ABORT_
> >> variants for each situation, and then always fire the "abort" event
> >> with a slightly different errorCode?
> >>
> >> That seems far simpler IMO.
> >
> > If that is OK spec wise, I'm fine with it.  To be honest, hanging
> > ABORT_BLAHs off IDBDatabaseException seems a bit odd though.
>
> I think at this point I've sort of lost track of what the proposal is.
> Is it simply making abort events look like error events, but obviously
> with .type set to "abort". And give them codes which live side-by-side
> with the error codes?
>
> If so, that would be ok with me.
>

I think that's what Ben was suggesting.  I was suggesting that it seemed
kind of odd though, and I'd prefer the following:

Add the following to IDBTransaction:
  const unsigned short EXPLICIT_ABORT = 1;
  const unsigned short INTERNAL_ERROR_ABORT = 2;
  const unsigned short QUOTA_ERROR_ABORT = 3;
  ... etc
  readonly attribute DOMString abortMessage;
  readonly attribute unsigned short abortCode;

And just set the message/code right before firing an abort event.
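Hypothetical usage of the proposed attributes, to make the shape concrete. Every name here (abortCode, abortMessage, EXPLICIT_ABORT, ...) is from the proposal above, not from the spec:

```javascript
// Map a proposed abort code to something a page could act on.
function describeAbort(tx) {
  if (tx.abortCode === tx.EXPLICIT_ABORT) return "page called abort()";
  if (tx.abortCode === tx.QUOTA_ERROR_ABORT) return "quota: " + tx.abortMessage;
  return "aborted (code " + tx.abortCode + ")";
}
// e.g. transaction.onabort = function () { log(describeAbort(transaction)); };
```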

J


Re: [IndexedDB] setVersion blocked on uncollected garbage IDBDatabases

2011-02-08 Thread Jeremy Orlow
On Tue, Feb 8, 2011 at 10:36 AM, Jonas Sicking  wrote:

> On Tue, Feb 8, 2011 at 9:26 AM, Jeremy Orlow  wrote:
> > On Tue, Feb 8, 2011 at 3:36 AM, João Eiras  wrote:
> >>
> >> > Unless by "certain GC behavior" mean
> >>
> >> I referred to
> >>
> >> # The only solution I can think of is to require (or recommend) that
> >> implementations run the garbage collector
> >>
> >> The GC is transparent and a spec cannot expect that it runs at
> >> specific times or demand it.
> >
> > Yeah, Jonas.  We can't reasonably expect any behavior from the garbage
> > collectors.  I can't think of any other precedent for this.  And as
> > collectors become more complicated, doing a gc that catches _every_ piece
> of
> > garbage is becoming harder or even impossible (not aware of any GC's
> where
> > it is "impossible" in specific cases, but it wouldn't surprise me).  The
> v8
> > guys simply won't let us do this.  :-)
> > And saying that at the worst case your setVersion transaction will stall
> > possibly forever just doesn't seem like a good solution either.
>
> Huh? It seems like a very strange GC implementation that not only
> doesn't allow you to do a full search for garbage, even
> asynchronously, but then can't even guarantee that a given object will
> eventually be freed.
>
> I'm all for not relying on GC behavior, but not even relying on it to
> collect garbage? That seems a bit extreme to me. It's also not a GC
> strategy I've ever heard of, so yes, it would surprise me if there are
> GC strategies out there that doesn't free up objects sooner or later.
> How is that different from a GC strategy that is simply leaking?
>

I meant that it wouldn't be able to collect on demand like that.  Or that it
would at least be prohibitively expensive.


> > What if we made the default for onsetversion to be calling close?  I.e.
> > instead of the close behavior being opt-out, it'd be opt-in?  I know we
> made
> > a conscious decision originally of it being opt-in, but I don't see how
> > that'll work.
>
> This flips the model completely on its head since you're now forced to
> implement a more advanced version upgrade strategies or suffer your
> pages breaking.
>
> The worst case scenario isn't even that bad IMHO. Say that you have a
> GC strategy which truly never frees the unreferenced DB object. That
> is no worse than the page simply holding a reference DB object and
> making the version upgrade wait for the user to close old tabs.
> Something that is bound to happen anyway if authors take the simple
> path of not listening to "blocked" or "versionchange" events.
>

You're assuming implementations have one heap per tab.

We could cheat and kill things when the document goes away, I suppose.
 Still not very excited about that tho.  (Especially since even those
semantics can be a bit tricky, at least in WebKit.)

I'd like to hear other peoples' thoughts on this.

J


Re: [IndexedDB] setVersion blocked on uncollected garbage IDBDatabases

2011-02-08 Thread Jeremy Orlow
On Tue, Feb 8, 2011 at 3:36 AM, João Eiras  wrote:

> > Unless by "certain GC behavior" mean
>
> I referred to
>
> # The only solution I can think of is to require (or recommend) that
> implementations run the garbage collector
>
> The GC is transparent and a spec cannot expect that it runs at
> specific times or demand it.
>

Yeah, Jonas.  We can't reasonably expect any behavior from the garbage
collectors.  I can't think of any other precedent for this.  And as
collectors become more complicated, doing a GC that catches _every_ piece of
garbage is becoming harder or even impossible (I'm not aware of any GCs where
it is "impossible" in specific cases, but it wouldn't surprise me).  The v8
guys simply won't let us do this.  :-)

And saying that at the worst case your setVersion transaction will stall
possibly forever just doesn't seem like a good solution either.

What if we made the default for onsetversion to be calling close?  I.e.
instead of the close behavior being opt-out, it'd be opt-in?  I know we made
a conscious decision originally of it being opt-in, but I don't see how
that'll work.

J


Re: [IndexedDB] Reason for aborting transactions

2011-02-08 Thread Jeremy Orlow
On Tue, Feb 8, 2011 at 2:21 AM, Jonas Sicking  wrote:

> On Mon, Feb 7, 2011 at 8:05 PM, Jeremy Orlow  wrote:
> > On Mon, Feb 7, 2011 at 7:36 PM, Jonas Sicking  wrote:
> >>
> >> On Fri, Jan 28, 2011 at 4:33 PM, Jeremy Orlow 
> wrote:
> >> > We do that as well.
> >> > What's the best way to do it API wise?  Do we need to add an
> >> > IDBTransactionError object with error codes and such?
> >>
> >> I don't actually know. I can't think of a precedence. Usually you use
> >> different error codes for different errors, but here we want to
> >> distinguish a particular type of error (aborts) into several sub
> >> categories.
> >
> > I don't see how that's any different than what we're doing with the
> onerror
> > error codes though?
>
> Hmm.. true.
>
> >> To make this more complicated, I actually think we're going to end up
> >> having to change a lot of error handling when things are all said and
> >> done. Error handling right now is sort of a mess since DOM exceptions
> >> are vastly different from JavaScript exceptions. Also DOM exceptions
> >> have a messy situation of error codes overlapping making it very easy
> >> to confuse an IDBDatabaseException with a DOMException with an
> >> overlapping error code.
> >>
> >> For details, see
> >>
> >>
> http://lists.w3.org/Archives/Public/public-script-coord/2010OctDec/0112.html
> >>
> >> So my gut feeling is that we'll have to revamp exceptions quite a bit
> >> before we unprefix our implementation. This is very unfortunate, but
> >> shouldn't be as big of a deal as many other changes since most of
> >> the time people don't have error handling code. Or at least not error
> >> handling code that differentiates the various errors.
> >>
> >> Unfortunately we can't make any changes to the spec here until WebIDL
> >> prescribes what the new exceptions should look like :(
> >>
> >> So to loop back to your original question, I think that the best way
> >> to expose the different types of aborts is by adding a .reason (or
> >> better named) property which returns a string or enum which describes
> >> the reason for the abort.
> >
> > Could we just add .abortCode, .abortReason, and constants for each code
> to
> > IDBTransaction?
>
> Why both? How are they different? I'd just go with the former to align
> with error codes.
>

Sorry, I meant .abortMessage instead of .abortReason.  This would be much
like normal error messages where we have a code that's standardized and easy
for scripts to understand and then the message portion which is easy for
humans to understand but more ad-hoc.


> > And maybe revisit in the future?
>
> Yes. I think we need to wait for webidl to solidify a bit here before
> we do anything.
>

I think we should put something in our spec in the mean time, but once
WebIDL solidifies then we can revisit and try to match what's decided there.
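As a hedged illustration only (the `.abortCode`/`.abortMessage` names are proposals from this thread, not shipped API, and `QUOTA_ABORT` is a hypothetical constant), an abort handler under this shape might look like:

```javascript
tx.onabort = function (event) {
  var t = event.target;
  // .abortCode would be a standardized constant; .abortMessage a
  // human-readable, ad-hoc description (hypothetical names).
  if (t.abortCode === IDBTransaction.QUOTA_ABORT) {
    console.warn("transaction aborted for quota:", t.abortMessage);
  }
};
```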


On Tue, Feb 8, 2011 at 8:07 AM, ben turner  wrote:

> Why not just expand our list of error codes to have multiple ABORT_
> variants for each situation, and then always fire the "abort" event
> with a slightly different errorCode?
>
> That seems far simpler IMO.


If that is OK spec-wise, I'm fine with it.  To be honest, hanging
ABORT_BLAHs off IDBDatabaseException seems a bit odd though.

J


[IndexedDB] setVersion blocked on uncollected garbage IDBDatabases

2011-02-07 Thread Jeremy Orlow
We're currently implementing the onblocked/setVersion semantics and ran into
an interesting problem: if you don't call .close() on a database and simply
expect it to be collected, then whether you can ever run a setVersion
transaction is at the mercy of the garbage collector doing a collection.
 Otherwise implementations will assume the database is still open...right?

If so, this seems bad.  But I can't think of any way to get around it.
 Thoughts?
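A hedged sketch of the explicit-close pattern under discussion (API names as in the draft spec of the era; the `onversionchange` handler name is an assumption):

```javascript
var openRequest = indexedDB.open("mydb");
openRequest.onsuccess = function (event) {
  var db = event.target.result;
  // Without an explicit close, the connection lingers until GC,
  // blocking any setVersion() attempt from another page.
  db.onversionchange = function () {
    db.close();
  };
};
```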

J


Re: [IndexedDB] Reason for aborting transactions

2011-02-07 Thread Jeremy Orlow
On Mon, Feb 7, 2011 at 7:36 PM, Jonas Sicking  wrote:

> On Fri, Jan 28, 2011 at 4:33 PM, Jeremy Orlow  wrote:
> > We do that as well.
> > What's the best way to do it API wise?  Do we need to add an
> > IDBTransactionError object with error codes and such?
>
> I don't actually know. I can't think of a precedent. Usually you use
> different error codes for different errors, but here we want to
> distinguish a particular type of error (aborts) into several sub
> categories.
>

I don't see how that's any different than what we're doing with the onerror
error codes though?


> To make this more complicated, I actually think we're going to end up
> having to change a lot of error handling when things are all said and
> done. Error handling right now is sort of a mess since DOM exceptions
> are vastly different from JavaScript exceptions. Also DOM exceptions
> have a messy situation of error codes overlapping making it very easy
> to confuse an IDBDatabaseException with a DOMException with an
> overlapping error code.
>
> For details, see
>
> http://lists.w3.org/Archives/Public/public-script-coord/2010OctDec/0112.html
>
> So my gut feeling is that we'll have to revamp exceptions quite a bit
> before we unprefix our implementation. This is very unfortunate, but
> shouldn't be as big of a deal as many other changes since most of
> the time people don't have error handling code. Or at least not error
> handling code that differentiates the various errors.
>
> Unfortunately we can't make any changes to the spec here until WebIDL
> prescribes what the new exceptions should look like :(
>
> So to loop back to your original question, I think that the best way
> to expose the different types of aborts is by adding a .reason (or
> better named) property which returns a string or enum which describes
> the reason for the abort.
>

Could we just add .abortCode, .abortReason, and constants for each code to
IDBTransaction?  And maybe revisit in the future?

J


Re: [Bug 11948] New: index.openCursor's cursor should have a way to access the index's "value" (in addition to the index's key and objectStore's value)

2011-02-07 Thread Jeremy Orlow
On Mon, Feb 7, 2011 at 3:47 PM, Jonas Sicking  wrote:

> On Sat, Feb 5, 2011 at 11:02 AM, Jeremy Orlow  wrote:
> > On Fri, Feb 4, 2011 at 11:50 PM, Jonas Sicking  wrote:
> >>
> >> On Fri, Feb 4, 2011 at 3:30 PM, Jeremy Orlow 
> wrote:
> >> > We haven't used the term primary key too much in the spec, but I think
> a
> >> > lot
> >> > might actually be more clear if we used it more.  And I think it'd
> also
> >> > make
> >> > a good name here.  So I'm OK with that being the name we choose.
> >> > Here's another question: what do we set primaryKey to for cursors
> opened
> >> > via
> >> > index.openKeyCursor and objectStore.openCursor?  It seems as though
> >> > setting
> >> > them to null/undefined could be confusing.  One possibility is to have
> >> > .value and .primaryKey be the same thing for the former and .key and
> >> > .primaryKey be the same for the latter, but that too could be
> confusing.
> >> >  (I
> >> > think we have this problem no matter what we name it, but if there
> were
> >> > some
> >> > name that was more clear in these contexts, then that'd be a good
> reason
> >> > to
> >> > consider it instead.)
> >> > J
> >> >
> >> > For objectStore.openCursor, if we went with primaryKey, then would we
> >> > set
> >> > both key and primaryKey to be the same thing?  Leaving it
> undefined/null
> >> > seems odd.
> >>
> >> I've been pondering the same questions but so far no answer seems
> >> obviously best.
> >>
> >> One way to think about it is that it's good if you can use the same
> >> code to iterate over an index cursor as a objectStore cursor. For
> >> example to display a list of results in a table. This would indicate
> >> that for objectStore cursors .key and .primaryKey should have the same
> >> value. This sort of makes sense too since it means that a objectStore
> >> cursor is just a special case of an index cursor where the iterated
> >> index just happens to be the primary index.
> >>
> >> This would leave the index key-cursor. Here it would actually make
> >> sense to me to let .key be the index key, .primaryKey be the key in
> >> the objectStore, and .value be empty. This means that index cursors
> >> and index key-cursors work the same, with just .value being empty for
> >> the latter.
> >>
> >> So in summary
> >>
> >> objectStore.openCursor:
> >> .key = entry key
> >> .primaryKey = entry key
> >> .value = entry value
> >>
> >> index.openCursor:
> >> .key = index key
> >> .primaryKey = entry key
> >> .value = entry value
> >>
> >> index.openKeyCursor:
> >> .key = index key
> >> .primaryKey = entry key
> >> .value = undefined
> >>
> >>
> >> There are two bad things with this:
> >> 1. for an objectStore cursor .key and .primaryKey are the same. This
> >> does seem unneccesary, but I doubt it'll be a source of bugs or
> >> require people to write more code. I'm less worried about confusion
> >> since both properties are in fact keys.
> >
> > As long as we're breaking backwards compatibility in the name of clarity,
> we
> > might as well change key to indexKey and keep it null/undefined for
> > objectStore.openCursor I think.  This would eliminate the confusion.
> > If we do break compat, is it possible for FF4 to include these changes?
>  If
> > not, then I would actually lean towards leaving .key and .value as is and
> > having .primaryKey duplicate info for index.openKeyCursor and
> > objectStore.openCursor.
>
> Actually, I quite like the idea of having objectStore-cursors just be
> a special case of index-cursors. Which also allows us to keep the nice
> and short name "key" of being the key that you are iterating (be that
> a primary key or an index key).


Can you explain further?  I don't fully understand you.

Here's another proposal (which is maybe what you meant?):

objectStore.openCursor:
.key = entry key
.value = entry value

index.openCursor:
.indexKey = index key
.key = entry key
.value = entry value

index.openKeyCursor:
.indexKey = index key
.key = entry key

Note that I'm thinking we should probably sub-class IDBCursor for each type
so that attributes don't show up if we're not going to populate them.

Which we maybe should do for IDBRequest as well?

J


Re: [Bug 11351] New: [IndexedDB] Should we have a maximum key size (or something like that)?

2011-02-07 Thread Jeremy Orlow
On Mon, Feb 7, 2011 at 2:49 PM, Jonas Sicking  wrote:

> On Sun, Feb 6, 2011 at 11:41 PM, Jeremy Orlow  wrote:
> > On Sun, Feb 6, 2011 at 11:38 PM, Jonas Sicking  wrote:
> >>
> >> On Sun, Feb 6, 2011 at 2:31 PM, Jeremy Orlow 
> wrote:
> >> > On Sun, Feb 6, 2011 at 2:03 PM, Shawn Wilsher 
> >> > wrote:
> >> >>
> >> >> On 2/6/2011 12:42 PM, Jeremy Orlow wrote:
> >> >>>
> >> >>> My current thinking is that we should have some relatively large
> >> >>> limit...maybe on the order of 64k?  It seems like it'd be very
> >> >>> difficult
> >> >>> to
> >> >>> hit such a limit with any sort of legitimate use case, and the
> chances
> >> >>> of
> >> >>> some subtle data-dependent error would be much less.  But a 1GB key
> is
> >> >>> just
> >> >>> not going to work well in any implementation (if it doesn't simply
> oom
> >> >>> the
> >> >>> process!).  So despite what I said earlier, I guess I think we
> should
> >> >>> have
> >> >>> some limit...but keep it an order of magnitude or two larger than
> what
> >> >>> we
> >> >>> expect any legitimate usage to hit just to keep the system as
> flexible
> >> >>> as
> >> >>> possible.
> >> >>>
> >> >>> Does that sound reasonable to people?
> >> >>
> >> >> Are we thinking about making this a MUST requirement, or a SHOULD?
>  I'm
> >> >> hesitant to spec an exact size as a MUST given how technology has a
> way
> >> >> of
> >> >> changing in unexpected ways that makes old constraints obsolete.  But
> >> >> then,
> >> >> I may just be overly concerned about this too.
> >> >
> >> > If we put a limit, it'd be a MUST for sure.  Otherwise people would
> >> > develop
> >> > against one of the implementations that don't place a limit and then
> >> > their
> >> > app would break on the others.
> >> > The reason that I suggested 64K is that it seems outrageously big for
> >> > the
> >> > data types that we're looking at.  But it's too small to do much with
> >> > base64
> >> > encoding binary blobs into it or anything else like that that I could
> >> > see
> >> > becoming rather large.  So it seems like a limit that'd avoid major
> >> > abuses
> >> > (where someone is probably approaching the problem wrong) but would
> not
> >> > come
> >> > close to limiting any practical use I can imagine.
> >> > With our architecture in Chrome, we will probably need to have some
> >> > limit.
> >> >  We haven't decided what that is yet, but since I remember others
> saying
> >> > similar things when we talked about this at TPAC, it seems like it
> might
> >> > be
> >> > best to standardize it--even though it does feel a bit dirty.
> >>
> >> One problem with putting a limit is that it basically forces
> >> implementations to use a specific encoding, or pay a hefty price. For
> >> example if we choose a 64K limit, is that of UTF8 data or of UTF16
> >> data? If it is of UTF8 data, and the implementation uses something
> >> else to store the data, you risk having to convert the data just to
> >> measure the size. Possibly this would be different if we measured size
> >> using UTF16 as javascript more or less enforces that the source string
> >> is UTF16 which means that you can measure utf16 size on the cheap,
> >> even if the stored data uses a different format.
> >
> > That's a very good point.  What's your suggestion then?  Spec unlimited
> > storage and have non-normative text saying that most implementations will
> > likely have some limit?  Maybe we can at least spec a minimum limit in
> terms
> > of a particular character encoding?  (Implementations could translate
> this
> > into the worst case size for their own native encoding and then ensure
> their
> > limit is higher.)
>
> I'm fine with relying on UTF16 encoding size and specifying a 64K
> limit. Like Shawn points out, this API is fairly geared towards
> JavaScript anyway (and I personally don't think that's a bad thing).
> One thing that I just thought of is that even if i

Re: [Bug 11351] New: [IndexedDB] Should we have a maximum key size (or something like that)?

2011-02-06 Thread Jeremy Orlow
On Sun, Feb 6, 2011 at 11:38 PM, Jonas Sicking  wrote:

> On Sun, Feb 6, 2011 at 2:31 PM, Jeremy Orlow  wrote:
> > On Sun, Feb 6, 2011 at 2:03 PM, Shawn Wilsher 
> wrote:
> >>
> >> On 2/6/2011 12:42 PM, Jeremy Orlow wrote:
> >>>
> >>> My current thinking is that we should have some relatively large
> >>> limit...maybe on the order of 64k?  It seems like it'd be very
> difficult
> >>> to
> >>> hit such a limit with any sort of legitimate use case, and the chances
> of
> >>> some subtle data-dependent error would be much less.  But a 1GB key is
> >>> just
> >>> not going to work well in any implementation (if it doesn't simply oom
> >>> the
> >>> process!).  So despite what I said earlier, I guess I think we should
> >>> have
> >>> some limit...but keep it an order of magnitude or two larger than what
> we
> >>> expect any legitimate usage to hit just to keep the system as flexible
> as
> >>> possible.
> >>>
> >>> Does that sound reasonable to people?
> >>
> >> Are we thinking about making this a MUST requirement, or a SHOULD?  I'm
> >> hesitant to spec an exact size as a MUST given how technology has a way
> of
> >> changing in unexpected ways that makes old constraints obsolete.  But
> then,
> >> I may just be overly concerned about this too.
> >
> > If we put a limit, it'd be a MUST for sure.  Otherwise people would
> develop
> > against one of the implementations that don't place a limit and then
> their
> > app would break on the others.
> > The reason that I suggested 64K is that it seems outrageously big for the
> > data types that we're looking at.  But it's too small to do much with
> base64
> > encoding binary blobs into it or anything else like that that I could see
> > becoming rather large.  So it seems like a limit that'd avoid major
> abuses
> > (where someone is probably approaching the problem wrong) but would not
> come
> > close to limiting any practical use I can imagine.
> > With our architecture in Chrome, we will probably need to have some
> limit.
> >  We haven't decided what that is yet, but since I remember others saying
> > similar things when we talked about this at TPAC, it seems like it might
> be
> > best to standardize it--even though it does feel a bit dirty.
>
> One problem with putting a limit is that it basically forces
> implementations to use a specific encoding, or pay a hefty price. For
> example if we choose a 64K limit, is that of UTF8 data or of UTF16
> data? If it is of UTF8 data, and the implementation uses something
> else to store the data, you risk having to convert the data just to
> measure the size. Possibly this would be different if we measured size
> using UTF16 as javascript more or less enforces that the source string
> is UTF16 which means that you can measure utf16 size on the cheap,
> even if the stored data uses a different format.
>

That's a very good point.  What's your suggestion then?  Spec unlimited
storage and have non-normative text saying that most implementations will
likely have some limit?  Maybe we can at least spec a minimum limit in terms
of a particular character encoding?  (Implementations could translate this
into the worst case size for their own native encoding and then ensure their
limit is higher.)
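Measuring in UTF-16 code units is indeed cheap from JavaScript, since `String.prototype.length` already counts them. A small sketch of the point (the 64K figure is the proposal in this thread, not spec; `Buffer` is Node-specific and used only to show the UTF-8 byte count diverging):

```javascript
// A hypothetical 64K limit, measured in UTF-16 code units.
const MAX_KEY_CODE_UNITS = 64 * 1024;

function fitsKeyLimit(key) {
  // .length is the UTF-16 code-unit count — no re-encoding needed.
  return key.length <= MAX_KEY_CODE_UNITS;
}

const ascii = "abc";        // 3 code units, 3 UTF-8 bytes
const astral = "\u{1F600}"; // 1 character, 2 UTF-16 code units, 4 UTF-8 bytes

console.log(ascii.length);                      // 3
console.log(astral.length);                     // 2
console.log(Buffer.byteLength(astral, "utf8")); // 4
console.log(fitsKeyLimit("x".repeat(MAX_KEY_CODE_UNITS)));     // true
console.log(fitsKeyLimit("x".repeat(MAX_KEY_CODE_UNITS + 1))); // false
```

Note the astral character: a UTF-8-based limit would charge it 4 bytes, while the UTF-16 measure charges 2 code units, which is exactly the mismatch being discussed.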

J


Re: [Bug 11351] New: [IndexedDB] Should we have a maximum key size (or something like that)?

2011-02-06 Thread Jeremy Orlow
On Sun, Feb 6, 2011 at 2:03 PM, Shawn Wilsher  wrote:

> On 2/6/2011 12:42 PM, Jeremy Orlow wrote:
>
>> My current thinking is that we should have some relatively large
>> limit...maybe on the order of 64k?  It seems like it'd be very difficult
>> to
>> hit such a limit with any sort of legitimate use case, and the chances of
>> some subtle data-dependent error would be much less.  But a 1GB key is
>> just
>> not going to work well in any implementation (if it doesn't simply oom the
>> process!).  So despite what I said earlier, I guess I think we should have
>> some limit...but keep it an order of magnitude or two larger than what we
>> expect any legitimate usage to hit just to keep the system as flexible as
>> possible.
>>
>> Does that sound reasonable to people?
>>
> Are we thinking about making this a MUST requirement, or a SHOULD?  I'm
> hesitant to spec an exact size as a MUST given how technology has a way of
> changing in unexpected ways that makes old constraints obsolete.  But then,
> I may just be overly concerned about this too.
>

If we put a limit, it'd be a MUST for sure.  Otherwise people would develop
against one of the implementations that don't place a limit and then their
app would break on the others.

The reason that I suggested 64K is that it seems outrageously big for the
data types that we're looking at.  But it's too small to do much with base64
encoding binary blobs into it or anything else like that that I could see
becoming rather large.  So it seems like a limit that'd avoid major abuses
(where someone is probably approaching the problem wrong) but would not come
close to limiting any practical use I can imagine.

With our architecture in Chrome, we will probably need to have some limit.
 We haven't decided what that is yet, but since I remember others saying
similar things when we talked about this at TPAC, it seems like it might be
best to standardize it--even though it does feel a bit dirty.

J


Re: [Bug 11351] New: [IndexedDB] Should we have a maximum key size (or something like that)?

2011-02-06 Thread Jeremy Orlow
On Tue, Dec 14, 2010 at 4:26 PM, Pablo Castro wrote:

>
> From: jor...@google.com [mailto:jor...@google.com] On Behalf Of Jeremy
> Orlow
> Sent: Tuesday, December 14, 2010 4:23 PM
>
> >> On Wed, Dec 15, 2010 at 12:19 AM, Pablo Castro <
> pablo.cas...@microsoft.com> wrote:
> >>
> >> From: public-webapps-requ...@w3.org [mailto:
> public-webapps-requ...@w3.org] On Behalf Of Jonas Sicking
> >> Sent: Friday, December 10, 2010 1:42 PM
> >>
> >> >> On Fri, Dec 10, 2010 at 7:32 AM, Jeremy Orlow 
> wrote:
> >> >> > Any more thoughts on this?
> >> >>
> >> >> I don't feel strongly one way or another. Implementation wise I don't
> >> >> really understand why implementations couldn't use keys of unlimited
> >> >> size. I wouldn't imagine implementations would want to use fixed-size
> >> >> allocations for every key anyway, right (which would be a strong
> >> >> reason to keep maximum size down).
> >> I don't have a very strong opinion either. I don't quite agree with the
> guideline of "having something working slowly is better than not working at
> all"...as having something not work at all sometimes may help developers hit
> a wall and think differently about their approach for a given problem. That
> said, if folks think this is an instance where we're better off not having a
> limit I'm fine with it.
> >>
> >> My only concern is that the developer might not hit this wall, but then
> some user (doing things the developer didn't fully anticipate) could hit
> that wall.  I can definitely see both sides of the argument though.  And
> elsewhere we've headed more in the direction of forcing the developer to
> think about performance, but this case seems a bit more non-deterministic
> than any of those.
>
> Yeah, that's a good point for this case, avoiding data-dependent errors is
> probably worth the perf hit.


My current thinking is that we should have some relatively large
limit...maybe on the order of 64k?  It seems like it'd be very difficult to
hit such a limit with any sort of legitimate use case, and the chances of
some subtle data-dependent error would be much less.  But a 1GB key is just
not going to work well in any implementation (if it doesn't simply oom the
process!).  So despite what I said earlier, I guess I think we should have
some limit...but keep it an order of magnitude or two larger than what we
expect any legitimate usage to hit just to keep the system as flexible as
possible.

Does that sound reasonable to people?

J


Re: [Bug 11948] New: index.openCursor's cursor should have a way to access the index's "value" (in addition to the index's key and objectStore's value)

2011-02-05 Thread Jeremy Orlow
On Fri, Feb 4, 2011 at 11:50 PM, Jonas Sicking  wrote:

> On Fri, Feb 4, 2011 at 3:30 PM, Jeremy Orlow  wrote:
> > We haven't used the term primary key too much in the spec, but I think a
> lot
> > might actually be more clear if we used it more.  And I think it'd also
> make
> > a good name here.  So I'm OK with that being the name we choose.
> > Here's another question: what do we set primaryKey to for cursors opened
> via
> > index.openKeyCursor and objectStore.openCursor?  It seems as though
> setting
> > them to null/undefined could be confusing.  One possibility is to have
> > .value and .primaryKey be the same thing for the former and .key and
> > .primaryKey be the same for the latter, but that too could be confusing.
>  (I
> > think we have this problem no matter what we name it, but if there were
> some
> > name that was more clear in these contexts, then that'd be a good reason
> to
> > consider it instead.)
> > J
> >
> > For objectStore.openCursor, if we went with primaryKey, then would we set
> > both key and primaryKey to be the same thing?  Leaving it undefined/null
> > seems odd.
>
> I've been pondering the same questions but so far no answer seems
> obviously best.
>
> One way to think about it is that it's good if you can use the same
> code to iterate over an index cursor as a objectStore cursor. For
> example to display a list of results in a table. This would indicate
> that for objectStore cursors .key and .primaryKey should have the same
> value. This sort of makes sense too since it means that a objectStore
> cursor is just a special case of an index cursor where the iterated
> index just happens to be the primary index.
>
> This would leave the index key-cursor. Here it would actually make
> sense to me to let .key be the index key, .primaryKey be the key in
> the objectStore, and .value be empty. This means that index cursors
> and index key-cursors work the same, with just .value being empty for
> the latter.
>
> So in summary
>
> objectStore.openCursor:
> .key = entry key
> .primaryKey = entry key
> .value = entry value
>
> index.openCursor:
> .key = index key
> .primaryKey = entry key
> .value = entry value
>
> index.openKeyCursor:
> .key = index key
> .primaryKey = entry key
> .value = undefined
>
>
> There are two bad things with this:
> 1. for an objectStore cursor .key and .primaryKey are the same. This
> does seem unneccesary, but I doubt it'll be a source of bugs or
> require people to write more code. I'm less worried about confusion
> since both properties are in fact keys.
>

As long as we're breaking backwards compatibility in the name of clarity, we
might as well change key to indexKey and keep it null/undefined for
objectStore.openCursor I think.  This would eliminate the confusion.

If we do break compat, is it possible for FF4 to include these changes?  If
not, then I would actually lean towards leaving .key and .value as is and
having .primaryKey duplicate info for index.openKeyCursor and
objectStore.openCursor.


> 2. You can't use the same code to iterate over a key-cursor and a
> "normal" cursor and display the result in a table. However I suspect
> that in most cases key cursors will be used for different things, such
> as joins, rather than reusing code that would normally use values.
>

I'm not super worried about this.  I think it it's more important to be
clear than make it easy to share code between the different types of
cursors.

On the other hand, it would be nice if there were some way for code to be
able to figure out what type of cursor they're working with.  Since values
can be undefined, they won't be able to just look at .key, .primaryKey, and
.value to figure it out though.  Maybe we need some attribute that says what
type of cursor it is?

J


Re: [Bug 11948] New: index.openCursor's cursor should have a way to access the index's "value" (in addition to the index's key and objectStore's value)

2011-02-04 Thread Jeremy Orlow
We haven't used the term primary key too much in the spec, but I think a lot
might actually be more clear if we used it more.  And I think it'd also make
a good name here.  So I'm OK with that being the name we choose.

Here's another question: what do we set primaryKey to for cursors opened via
index.openKeyCursor and objectStore.openCursor?  It seems as though setting
them to null/undefined could be confusing.  One possibility is to have
.value and .primaryKey be the same thing for the former and .key and
.primaryKey be the same for the latter, but that too could be confusing.  (I
think we have this problem no matter what we name it, but if there were some
name that was more clear in these contexts, then that'd be a good reason to
consider it instead.)

J

For objectStore.openCursor, if we went with primaryKey, then would we set
both key and primaryKey to be the same thing?  Leaving it undefined/null
seems odd.

On Fri, Feb 4, 2011 at 1:36 PM, Jonas Sicking  wrote:

> On Fri, Feb 4, 2011 at 11:14 AM, Shawn Wilsher 
> wrote:
> > On 2/1/2011 11:00 AM, bugzi...@jessica.w3.org wrote:
> >>
> >> As discussed in the mailing list thread from bug 11257, we should add
> some
> >> way
> >> for index.openCursor cursors to access the primary key for the
> >> objectStore.
> >> .indexValue, .objectStoreKey, or .primaryKey might be good names to use
> >> for it.
> >
> > .objectStoreKey seems to be the most clear way to express this to me.
>
> Oh, I missed that the original bug included a few suggestions. Given
> that both me and Jeremy independently thought of "indexValue" and
> "primaryKey" I think that's a decent sign that they are intuitive
> names. I happen to like "primaryKey" the most as it's really a key
> rather than a value that we've got here.
>
> For some reason objectStoreKey makes me think that it's connected to
> the objectStore rather than the entry in it.
>
> / Jonas
>
>


Re: [Bug 11348] New: [IndexedDB] Overhaul of the event model

2011-02-02 Thread Jeremy Orlow
On Wed, Feb 2, 2011 at 3:21 PM, Jonas Sicking  wrote:

> On Wed, Feb 2, 2011 at 3:19 PM, Jeremy Orlow  wrote:
> > I don't know much about window.onerror (I'm finding out what the story is
> in
> > WebKit), but overall sounds fine to me.
> > What about complete events?  Should we make those non-bubbling as well?
>
> Good question. I think so yeah. Don't have a strong opinion either way.
>
> The only argument I can think of that if it bubbles then we might want
> to add .oncomplete on IDBDatabase, which would be somewhat confusing.
>

That was my same line of thought.

J


Re: [Bug 11348] New: [IndexedDB] Overhaul of the event model

2011-02-02 Thread Jeremy Orlow
I don't know much about window.onerror (I'm finding out what the story is in
WebKit), but overall sounds fine to me.

What about complete events?  Should we make those non-bubbling as well?

J

On Wed, Feb 2, 2011 at 2:28 PM, Jonas Sicking  wrote:

> On Wed, Feb 2, 2011 at 2:10 PM, Jeremy Orlow  wrote:
> > Just to confirm, we don't want the events to propagate to the window
> itself,
> > right?
>
> Correct. Sort of. Here's what we did in gecko:
>
> The event propagation path is request->transaction->database. This
> goes for both "success" and "error" events. However "success" doesn't
> bubble so "normal" event handlers doesn't fire on the transaction or
> database for "success". But if you really want you can attach a
> capturing listener using .addEventListener and listen to them there.
> This matches events fired on nodes.
>
> For "abort" events the propagation path is just transaction->database
> since the target of "abort" events is the transaction.
>
> So far this matches what you said.
>
> However, we also wanted to integrate the window.onerror feature in
> HTML5. So after we've fired an "error" event, if .preventDefault() was
> never called on the event, we fire an error event on the window (can't
> remember if this happens before or after we abort the transaction).
> This is a separate event, which for example means that even if you
> attach a capturing "error" handler on window, you won't see any events
> unless an error really went unhandled. And you also can't call
> .preventDefault on the error event fired on the window in order to
> prevent the transaction from being aborted. It's purely there for
> error reporting and distinctly different from the event propagating to
> the window.
>
> This is similar to how "error" events are handled in workers.
>
> (I think that so far webkit hasn't implemented the window.onerror
> feature yet, so you probably don't want to fire the separate error
> event on the window until that has been implemented).
>
> I hope this makes sense and sounds like a good idea?
>
> / Jonas
>
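A hedged sketch of the capturing-listener point above (assuming the described Gecko propagation path of request → transaction → database; browser-only code):

```javascript
// "success" doesn't bubble, so db.onsuccess never fires — but a
// capturing listener on the database still observes every request:
db.addEventListener("success", function (event) {
  console.log("request completed:", event.target);
}, true /* useCapture */);

// "error" does bubble, so a plain listener works, and calling
// preventDefault() keeps the transaction from being aborted:
db.addEventListener("error", function (event) {
  event.preventDefault();
});
```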


Re: [Bug 11348] New: [IndexedDB] Overhaul of the event model

2011-02-02 Thread Jeremy Orlow
Just to confirm, we don't want the events to propagate to the window itself,
right?

On Fri, Nov 19, 2010 at 3:44 AM,  wrote:

> http://www.w3.org/Bugs/Public/show_bug.cgi?id=11348
>
>   Summary: [IndexedDB] Overhaul of the event model
>   Product: WebAppsWG
>   Version: unspecified
>  Platform: PC
>OS/Version: All
>Status: NEW
>  Severity: normal
>  Priority: P2
> Component: Indexed Database API
>AssignedTo: dave.n...@w3.org
>ReportedBy: jor...@chromium.org
> QAContact: member-webapi-...@w3.org
>CC: m...@w3.org, public-webapps@w3.org
>
>
> We talked about this for a while at TPAC.  Here's what I think we agreed
> upon
> at the time:
>
> * All events should propagate from the IDBRequest to the IDBTransaction to
> the
> IDBDatabase.
>
> * For error events, preventDefault must be called in order to avoid a
> transaction aborting.  (When you use onerror, you'd of course use false to
> do
> so.)
>
> * If you throw within an event handler, the transaction will abort.  (Catch
> errors that you don't want to implicitly abort the transaction.)
>
> * The success event will be non-bubbling (because having onsuccess on
> IDBTransaction and IDBDatabase would be confusing).
>
> * The error event should be added to IDBTransaction and IDBDatabase and
> should
> bubble.
>
> * createObjectStore should remain sync and simply abort the transaction on
> errors (which are pretty much constrained to quota and internal errors).
>
> * createIndex is the same, except that indexes with a uniqueness constraint
> and
> existing data that doesn't satisfy it will present another (and more
> common)
> case that'll cause the transaction to abort.  The spec should have a red
> note
> that reminds people of this.
>
> --
> Configure bugmail: http://www.w3.org/Bugs/Public/userprefs.cgi?tab=email
> --- You are receiving this mail because: ---
> You are on the CC list for the bug.
>
>


Re: IndexedDB: updates through cursors on indexes that change the key

2011-02-01 Thread Jeremy Orlow
Please look at the mail archives.  IIRC, it seemed confusing that you could
be looking at old data.  Iterating on live data seems more consistent with
run to completion semantics.

J

On Tue, Feb 1, 2011 at 5:26 PM, Keean Schupke  wrote:

> So whats the benefit of allowing a cursor to modify the data under it?
>
> Cheers,
> Keean.
>
>
> On 2 February 2011 01:17, Jonas Sicking  wrote:
>
>> On Tue, Feb 1, 2011 at 4:48 PM, Keean Schupke  wrote:
>> > Sorry, sent that before I was finished.
>> >
>> > Seems prone to problems in environments with multiple parallel accesses
>> to
>> > the same database.
>>
>> As long as you're inside a transaction, no other environments (be they
>> separate tabs running in a separate process, workers running in a
>> separate thread, or separate components running in the same page) will
>> be able to mutate the data under you.
>>
>> / Jonas
>>
>
>


Re: IndexedDB: updates through cursors on indexes that change the key

2011-02-01 Thread Jeremy Orlow
On Tue, Feb 1, 2011 at 2:56 PM, Jonas Sicking  wrote:

> On Tue, Feb 1, 2011 at 11:44 AM, Jeremy Orlow  wrote:
> > On Tue, Feb 1, 2011 at 10:07 AM, Hans Wennborg 
> wrote:
> >>
> >> For cursors on object stores, we disallow updates that change the key:
> >> one cannot provide an explicit key, and for object stores with a key
> >> path, the spec says that "If the effective object store of this cursor
> >> uses in-line keys and evaluating the key path of the value parameter
> >> results in a different value than the cursor's effective key, this
> >> method throws DATA_ERR."
> >>
> >> I suppose the reason is that an implementation may have trouble
> >> handling such updates, i.e. changing the keys that the cursor iterates
> >> over during the iteration is a bad idea.
> >>
> >> A similar situation can occur with cursors over indexes:
> >>
> >> Say that there is an object store with objects like {fname: 'John',
> >> lname: 'Doe', phone: 1234}, and an index with 'fname' as key path.
> >> When iterating over the index with a cursor, should it be allowed to
> >> update the objects so that the key in the index, in this case the
> >> 'fname', of an object is changed? The situation seems analogous to the
> >> one above, but as far as I can see, the spec does not mention this.
> >> Should it be allowed?
> >>
> >> I would be interested to hear your thoughts on this.
> >
> > I think we should remove the original limitation instead.  While a cursor
> is
> > happening, anyone can call .remove() and .put() which is essentially the
> > same as doing an .update() which changes a key.  So implementations will
> > already need to handle this case one way or another.  What's there seems
> > like a fairly artificial limitation.
>
> The tricky part if you allow modifying the primary key is defining the
> exact semantics around that, especially going forward if we add things
> like events or audit logs or anything like that (something like that
> is likely going to be needed for syncing). As things stand now, a call
> to cursor.update() is semantically equivalent to a call to
> objectStore.put(). If we allow modifying the key that is no longer the
> case.
>
> So we would then need to define things like does cursor.update() equal
> objectStore.remove() and then objectStore.add() always? Or just when
> the key is actually changed? And what happens if the new key already
> exists in the database? Does that undo both the remove() and the
> add(), fail, or do you risk losing the entry? Or does cursor.update()
> equal objectStore.remove() and then objectStore.put() such that if an
> entry with the key already exists, it is overwritten?
>
> So I don't think the concern here is about confusing the cursor
> object. Like Jeremy points out, cursors have to deal with the iterated
> data changing anyway. I think the main reason for the current
> restriction is to keep the set of operations that you can perform on
> the data simpler.
>
> Of course, if there is reason to allow modifying the primary key, then
> we'll just have to deal with the more complex set of allowed
> operations. But then it would probably also make sense to allow
> modifying the primary key of an existing entry directly on the
> objectStore, without having to go through a cursor.


Good points (against having it remove the original key if it changes).

After some more thought: The original idea behind cursor.delete() and
cursor.update() was that they would basically just be aliases for
objectStore.delete() and objectStore.put().  Maybe calling .update() with a
changed primary key should simply have the same behavior as .put().  Thus
the value corresponding to the original key would be left unmodified and the
new key would then correspond to the new value.
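A toy in-memory sketch of that difference (Maps standing in for object stores; the function names are mine, not the spec's):

```javascript
// Contrast the two proposed semantics for cursor.update() when the
// primary key changes. Illustrative only — not the IndexedDB API.

function putSemantics(store, oldKey, newValue, newKey) {
  // ".update() is an alias for objectStore.put()": the entry at the
  // old key is left untouched; the new key is created or overwritten.
  store.set(newKey, newValue);
}

function deleteThenAddSemantics(store, oldKey, newValue, newKey) {
  // ".update() is delete() + add()": the old entry vanishes, and
  // add() fails if an entry with the new key already exists.
  if (store.has(newKey) && newKey !== oldKey) {
    throw new Error('CONSTRAINT_ERR: key already exists');
  }
  store.delete(oldKey);
  store.set(newKey, newValue);
}

const a = new Map([[1, 'old']]);
putSemantics(a, 1, 'new', 2);
console.log([...a.entries()]); // [[1, 'old'], [2, 'new']] — old entry survives

const b = new Map([[1, 'old']]);
deleteThenAddSemantics(b, 1, 'new', 2);
console.log([...b.entries()]); // [[2, 'new']] — old entry removed
```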

I can't think of any examples where the current behavior would get in
someone's way though.  So I guess maybe we should just leave it as is.  But
I still hate the idea of it being subtly different from being a straight up
alias to put.

J


Re: IndexedDB: updates through cursors on indexes that change the key

2011-02-01 Thread Jeremy Orlow
On Tue, Feb 1, 2011 at 10:07 AM, Hans Wennborg  wrote:

> For cursors on object stores, we disallow updates that change the key:
> one cannot provide an explicit key, and for object stores with a key
> path, the spec says that "If the effective object store of this cursor
> uses in-line keys and evaluating the key path of the value parameter
> results in a different value than the cursor's effective key, this
> method throws DATA_ERR."
>
> I suppose the reason is that an implementation may have trouble
> handling such updates, i.e. changing the keys that the cursor iterates
> over during the iteration is a bad idea.
>
> A similar situation can occur with cursors over indexes:
>
> Say that there is an object store with objects like {fname: 'John',
> lname: 'Doe', phone: 1234}, and an index with 'fname' as key path.
> When iterating over the index with a cursor, should it be allowed to
> update the objects so that the key in the index, in this case the
> 'fname', of an object is changed? The situation seems analogous to the
> one above, but as far as I can see, the spec does not mention this.
> Should it be allowed?
>
> I would be interested to hear your thoughts on this.
>

I think we should remove the original limitation instead.  While a cursor is
happening, anyone can call .remove() and .put() which is essentially the
same as doing an .update() which changes a key.  So implementations will
already need to handle this case one way or another.  What's there seems
like a fairly artificial limitation.

J
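One way to picture how an implementation copes with mutation under a cursor: each continue() looks up the smallest key strictly past the current position, so records added or removed mid-iteration are handled for free. A toy sketch, not an actual implementation:

```javascript
// "Live" cursor over a Map with comparable keys: iteration re-queries
// the store at each step rather than snapshotting it up front.
function* liveCursor(store) {
  let pos = -Infinity;
  for (;;) {
    const next = [...store.keys()]
      .filter(k => k > pos)
      .sort((x, y) => x - y)[0];
    if (next === undefined) return;
    pos = next;
    yield [next, store.get(next)];
  }
}

const store = new Map([[1, 'a'], [3, 'c']]);
const seen = [];
for (const [key] of liveCursor(store)) {
  seen.push(key);
  if (key === 1) store.set(2, 'b'); // mutation mid-iteration is observed
}
console.log(seen); // [1, 2, 3]
```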


Re: [IndexedDB] Reason for aborting transactions

2011-01-28 Thread Jeremy Orlow
We do that as well.

What's the best way to do it API wise?  Do we need to add an
IDBTransactionError object with error codes and such?

J

On Fri, Jan 28, 2011 at 4:31 PM, Jonas Sicking  wrote:

> On Fri, Jan 28, 2011 at 2:35 PM, Jeremy Orlow  wrote:
> > Given that transactions can be aborted because of explicit action,
> internal
> > errors, quota errors, and possibly other things in the future, I'm
> wondering
> > if we should add some way for people to find out why the transaction was
> > aborted.
> > Thoughts?
>
> Hmm.. not a bad idea. We also abort transactions if the user leaves
> the current page in the middle of a transaction. Don't remember if the
> spec requires this or not, but it might make sense to do so.
>
> / Jonas
>


[IndexedDB] Reason for aborting transactions

2011-01-28 Thread Jeremy Orlow
Given that transactions can be aborted because of explicit action, internal
errors, quota errors, and possibly other things in the future, I'm wondering
if we should add some way for people to find out why the transaction was
aborted.

Thoughts?

J


Re: [Bug 11348] New: [IndexedDB] Overhaul of the event model

2011-01-27 Thread Jeremy Orlow
On Thu, Jan 27, 2011 at 5:48 PM, Jonas Sicking  wrote:

> On Thu, Jan 27, 2011 at 5:30 PM, Axel Rauschmayer 
> wrote:
> > I am really sorry to bring this up again, but: Why not separate concerns?
> Why not separate input data and output data?
> >
> > If onsuccess and onerror were handed in as an input parameters, would
> there be any need for readyState, LOADING, and DONE?
>
> We decided a long long time ago, based on input from web developers,
> to use DOM-Events as notification mechanism. We went through the same
> thing in the FileReader API where I initially suggested using a
> different type of callback, but got the feedback that developers
> preferred to use DOM-Events.
>
> Also note that the reason that your suggestion removes the need for
> readyState is that your proposal decides to drop support for the
> use-case that readyState aims to help solve. I.e. the ability to
> register additional event handlers sometime after the request is
> created.
>

I'm still not convinced this use case is necessary either, but we've already
argued that to death, so let's not start up again.

Is all of this what was implemented in FF4b9?  If so, I'll do it in
Chromium, though the event.target syntax really is kind of horrible.

Lastly, let's say you're doing cursor.continue() on an index cursor, how can
you get a handle to the objectStore?  I believe you can't.  Should we add in
something for that?  (Most likely give the index a link to its object store?
 And maybe even give a cursor a link back up as well?)

J


> > Then IDBRequest would be more like an event, right? It would be sent to
> the onsuccess and onerror event handlers.
>
> I don't understand what you mean here. But in the current model (both
> the one that's in the spec right now, and the one that I'm proposing)
> we're using real DOM-Events. Can't really get more "like events" than
> that?
>
> / Jonas
>


Re: [chromium-html5] LocalStorage inside Worker

2011-01-27 Thread Jeremy Orlow
On Thu, Jan 27, 2011 at 12:47 PM, João Eiras wrote:

> On Thursday 27 January 2011 20:39:50 you wrote:
> > On Thu, Jan 27, 2011 at 12:06 PM, Charles Pritchard  > wrote:
> > FWIW: websql is mostly abandoned, though super handy on ios mobile
> devices.
> >
> > It's been around for a while in everything other than FF and IE.
> >
> > IndexedDB is live in Chrome, Firefox and the MS interop team released a
> prototype for IE.
> >
> > For the record, we haven't shipped it to stable yet, though we do have a
> version in the dev channel.  We're hoping to ship before long though (once
> we get the API back up to date).
> >
> > Moz and webkit both just implement IDB atop their internal sqlite
> processes. That is, they create a simple websql schema.
> >
> > For now, yes.  It's actually pretty fast though!
> >
> >
> > On Thu, Jan 27, 2011 at 12:31 PM, João Eiras  > wrote:
> >
> > > Afaik, websql does not support blobs.
> > >
> >
> > If stored as strings, it does. sqlite treats TEXT as an opaque buffer.
> >
> > Not all binary can be expressed as UTF-16.  Note that this is a
> limitation of LocalStorage as well.
> >
>
> UTF16 represents the character table user agents use when displaying a
> buffer of text, because nothing prevents you from doing:
>
> # localStorage.setItem('foobar', "\0\xff\ufeff");
> # alert(escape(localStorage.getItem('foobar')));
>
> Works in Opera at least, including in web sql dbs.
>

Works in Chrome as well.  (Didn't try WebSQLDatabase.)

Nevertheless, I would expect any code doing stuff like this to be fairly
fragile.  And in general, I'd probably recommend not doing it unless you
really need to.
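For code that genuinely needs binary in a string-only store, a more robust approach than smuggling raw bytes into a JS string is to base64-encode first. A sketch, using Node's Buffer as a stand-in for the browser's btoa/atob:

```javascript
// Base64 round-trip for arbitrary bytes, safe for any string-only
// store (localStorage, WebSQL TEXT columns, cookies, ...).
const bytes = [0x00, 0xff, 0xfe, 0x80];

const encoded = Buffer.from(bytes).toString('base64');
// In a browser you'd then do: localStorage.setItem('blob', encoded);

const decoded = [...Buffer.from(encoded, 'base64')];
console.log(decoded); // [0, 255, 254, 128] — round-trips exactly
```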

J


Re: [chromium-html5] LocalStorage inside Worker

2011-01-27 Thread Jeremy Orlow
On Thu, Jan 27, 2011 at 12:06 PM, Charles Pritchard  wrote:

> FWIW: websql is mostly abandoned, though super handy on ios mobile devices.
>

It's been around for a while in everything other than FF and IE.


> IndexedDB is live in Chrome, Firefox and the MS interop team released a
> prototype for IE.
>

For the record, we haven't shipped it to stable yet, though we do have a
version in the dev channel.  We're hoping to ship before long though (once
we get the API back up to date).


> Moz and webkit both just implement IDB atop their internal sqlite
> processes. That is, they create a simple websql schema.
>

For now, yes.  It's actually pretty fast though!


On Thu, Jan 27, 2011 at 12:31 PM, João Eiras 
 wrote:

>
> > Afaik, websql does not support blobs.
> >
>
> If stored as strings, it does. sqlite treats TEXT as an opaque buffer.
>

Not all binary can be expressed as UTF-16.  Note that this is a
limitation of LocalStorage as well.

J


Re: [chromium-html5] LocalStorage inside Worker

2011-01-27 Thread Jeremy Orlow
On Thu, Jan 27, 2011 at 12:39 AM, Felix Halim  wrote:

> 2011/1/7 Jonas Sicking :
> > On Thu, Jan 6, 2011 at 7:14 PM, Boris Zbarsky  wrote:
> >> On 1/6/11 5:25 PM, João Eiras wrote:
> >>>
> >>> Not different from two different tabs/windows running the same code.
> >>
> >> In which current browsers do same-origin tabs/windows end up
> interleaving
> >> their JS (that is, one runs JS before the other has returned to the
> event
> >> loop)?
> >
> > I'm fairly sure it happens both in Chrome and IE. One way it can happen
> is:
> >
> > Tab 1 opens with a page from site A
> > Tab 2 opens with a page from site B
> > The page in tab 2 contains an iframe with a page from site A.
> >
> > But I'm not even sure that Chrome and IE makes an effort to use the
> > same process if you open two tabs for the same site.
>
> It seemed that Chrome doesn't interleave the JS when the same page is
> opened in multiple tabs.
>
> Try running this script in multiple tabs and monitor the console output:
>
> http://felix-halim.net/interleave.html
>
> In Chrome console log, you will see many "FAIL", but not in Firefox.
>
> So does this mean localStorage in Chrome is broken? or this is an
> intended behavior?
>

Although the updates to localStorage are "interleaved", the two tabs are
not: both are running in different processes/event loops.  I.e. they run
in parallel, so localStorage (and cookies) don't support "run to
completion" semantics.

We don't implement the storage mutex as specced because it'd severely limit
this concurrency.  If you search the archives, you'll find a lot of
discussions about this.

If this is a problem for you, then I suggest you look at WebSQLDatabase or
(soon) IndexedDB.

J
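The classic symptom of losing run-to-completion is the lost update. A deterministic toy sketch, where a generator's yield marks the point at which another tab could be scheduled:

```javascript
// Two "tabs" doing a read-modify-write on a shared string store.
// Without run-to-completion, both can read before either writes.
function* increment(storage) {
  const n = storage.get('count'); // read
  yield;                          // another tab may run here
  storage.set('count', n + 1);    // write back a possibly stale value
}

const storage = new Map([['count', 0]]);
const tabA = increment(storage);
const tabB = increment(storage);
tabA.next(); tabB.next(); // both tabs read 0 before either writes...
tabA.next(); tabB.next(); // ...so both write back 1
console.log(storage.get('count')); // 1, not 2 — one update was lost
```

Transactions in WebSQLDatabase or IndexedDB exist precisely to make this read-modify-write atomic.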


Re: [Bug 11348] New: [IndexedDB] Overhaul of the event model

2011-01-26 Thread Jeremy Orlow
What's the current thinking in terms of events that we're firing?  I
remember we talked about this a bit, but I don't remember the conclusion and
I can't find it captured anywhere.

Here's a brain dump of the requirements as I remember them:
* Everything should have a source attribute.
* Everything done in the context of a transaction should have a transaction
attribute.  (Probably even errors, which I believe is not the current case.)
* Only success events should have a result.
* Only error events should have a code and a message, or should they just
have an error attribute which holds an IDBDatabaseError object?  (If it's
the former, then do we even need an interface for IDBDatabaseError to be
defined?)
* IIRC, Jonas suggested at some point that maybe there should be additional
attributes beyond just the source and/or objects should link to their
parents.  (The latter probably makes the most sense, right?  If so, I'll bug
it.)

Is there anything I'm missing?

As far as I can tell, this means we need 5 events: an IDBEvent (with source)
and then error with transaction, error without, success with, and success
without.  That seems kind of ugly though.

Another possibility is that we could put a transaction attribute on IDBEvent
that's null when there's no transaction.  And then error and success would
have their own subclasses.  To me, this sounds best.

Thoughts?

J
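A toy model of the propagation under discussion (illustrative only, not the spec's processing model): errors bubble IDBRequest → IDBTransaction → IDBDatabase, and the transaction aborts unless some handler calls preventDefault():

```javascript
// Minimal bubbling chain: each target has a parent and an onerror slot.
class Target {
  constructor(parent) { this.parent = parent; this.onerror = null; }
}

function dispatchError(request) {
  const event = {
    type: 'error',
    defaultPrevented: false,
    preventDefault() { this.defaultPrevented = true; },
  };
  // Bubble request -> transaction -> database.
  for (let t = request; t; t = t.parent) {
    if (t.onerror) t.onerror(event);
  }
  return event.defaultPrevented; // if false, the transaction would abort
}

const db = new Target(null);
const txn = new Target(db);
const request = new Target(txn);

const log = [];
request.onerror = () => log.push('request');
db.onerror = (e) => { log.push('database'); e.preventDefault(); };

const prevented = dispatchError(request);
console.log(log, prevented); // [ 'request', 'database' ] true
```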

On Fri, Nov 19, 2010 at 3:44 AM,  wrote:

> http://www.w3.org/Bugs/Public/show_bug.cgi?id=11348
>
>   Summary: [IndexedDB] Overhaul of the event model
>   Product: WebAppsWG
>   Version: unspecified
>  Platform: PC
>OS/Version: All
>Status: NEW
>  Severity: normal
>  Priority: P2
> Component: Indexed Database API
>AssignedTo: dave.n...@w3.org
>ReportedBy: jor...@chromium.org
> QAContact: member-webapi-...@w3.org
>CC: m...@w3.org, public-webapps@w3.org
>
>
> We talked about this for a while at TPAC.  Here's what I think we agreed
> upon
> at the time:
>
> * All events should propagate from the IDBRequest to the IDBTransaction to
> the
> IDBDatabase.
>
> * For error events, preventDefault must be called in order to avoid a
> transaction aborting.  (When you use onerror, you'd of course use false to
> do
> so.)
>
> * If you throw within an event handler, the transaction will abort.  (Catch
> errors that you don't want to implicitly abort the transaction.)
>
> * The success event will be non-bubbling (because having onsuccess on
> IDBTransaction and IDBDatabase would be confusing).
>
> * The error event should be added to IDBTransaction and IDBDatabase and
> should
> bubble.
>
> * createObjectStore should remain sync and simply abort the transaction on
> errors (which are pretty much constrained to quota and internal errors).
>
> * createIndex is the same, except that indexes with a uniqueness constraint
> and
> existing data that doesn't satisfy it will present another (and more
> common)
> case that'll cause the transaction to abort.  The spec should have a red
> note
> that reminds people of this.
>
> --
> Configure bugmail: http://www.w3.org/Bugs/Public/userprefs.cgi?tab=email
> --- You are receiving this mail because: ---
> You are on the CC list for the bug.
>
>


Re: [IndexedDB] Compound and multiple keys

2011-01-21 Thread Jeremy Orlow
On Thu, Jan 20, 2011 at 6:29 PM, Tab Atkins Jr. wrote:

> On Thu, Jan 20, 2011 at 10:12 AM, Keean Schupke  wrote:
> > Compound primary keys are commonly used afaik.
>
> Indeed.  It's one of the common themes in the debate between natural
> and synthetic keys.
>

Fair enough.

Should we allow explicit compound keys?  I.e myOS.put({...}, ['first name',
'last name'])?  I feel pretty strongly that if we do, we should require this
be specified up-front when creating the objectStore.  I.e. add some
additional parameter to the optional options object.  Otherwise, we'll force
implementations to handle variable compound keys for just this one case,
which seems kind of silly.

The other option is to just disallow them.

J
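For what it's worth, the sort order in Jonas's compound-key examples falls out of element-wise array comparison. A sketch (the comparison function is mine, not spec text):

```javascript
// Compare two array keys element by element; shorter arrays sort first
// when they are a prefix of the longer one.
function compareCompound(a, b) {
  for (let i = 0; i < Math.min(a.length, b.length); i++) {
    if (a[i] < b[i]) return -1;
    if (a[i] > b[i]) return 1;
  }
  return a.length - b.length;
}

const keys = [
  ['Charlie', 'Brown'],
  ['Benny', 'Zysk'],
  ['Benny', 'Andersson'],
];
keys.sort(compareCompound);
console.log(keys.map(k => k.join(', ')));
// ['Benny, Andersson', 'Benny, Zysk', 'Charlie, Brown']
```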


Re: [IndexedDB] Compound and multiple keys

2011-01-20 Thread Jeremy Orlow
FWIW, I share your concern that A is kind of forcing a schema upon users in
a bad way.  But I think all the other arguments point towards A.  I really
don't like the idea of having duplicate data.  And if we go with B, then if
I want to index on both (firstname, lastname) and (lastname, firstname)
I'll have to duplicate the information.  This gets worse as your compound
indexes get more complicated.

So, after thinking about this for a bit, I think we should just go with A.

Also, I don't think we should allow primary keys to be compound.  I can't
think of much precedent in other databases and it'd make things considerably
more complicated for no great benefit.

J


On Thu, Jan 20, 2011 at 11:07 AM, Keean Schupke  wrote:

> Out of line keys (B) for me. You can have a key that is not an object
> property that way... and you can include the key in the object optionally.
> There is also no need to give the key fields in advance. These two things
> together make this the best option IMHO.
>
> Keean
>  On 20 Jan 2011 10:52, "Jeremy Orlow"  wrote:
> > Ok. So what's the resolution? Let's bug it!
> >
> > On Fri, Dec 10, 2010 at 12:34 PM, Jeremy Orlow 
> wrote:
> >
> >> Any other thoughts on this issue?
> >>
> >>
> >> On Thu, Dec 2, 2010 at 7:19 AM, Keean Schupke  wrote:
> >>
> >>> I think I prefer A. Declaring the keys in advance is starting to sound a
> >>> little like a schema, and when you go down that route you end up at SQL
> >>> schemas (which is a good thing in my opinion). I understand however
> that
> >>> some people are not so comfortable with the idea of a schema, and these
> >>> people seem to be the kind of people that like IndexedDB. So, although
> I
> >>> prefer A for me, I would have to say B for IndexedDB.
> >>>
> >>> So in conclusion: I think "B" is the better choice for IndexedDB, as it
> is
> >>> more consistent with the design of IDB.
> >>>
> >>> As for the cons of "B", sorting an array is just like sorting a string,
> >>> and it already supports string types.
> >>>
> >>> Surely there is also option "C":
> >>>
> >>> store.add({firstName: "Benny", lastName: "Zysk", age: 28},
> ["firstName",
> >>> "lastName"]);
> >>> store.add({firstName: "Benny", lastName: "Andersson", age:
> >>> 63}, ["firstName", "lastName"]);
> >>>
> >>> Like "A", but listing the properties to include in the composite index
> >>> with each add, therefore avoiding the "schema"...
> >>>
> >>>
> >>> As for layering the Relational API over the top, It doesn't make any
> >>> difference, but I would prefer whichever has the best performance.
> >>>
> >>>
> >>> Cheers,
> >>> Keean.
> >>>
> >>>
> >>> On 2 December 2010 00:57, Jonas Sicking  wrote:
> >>>
> >>>> Hi IndexedDB fans (yay!!),
> >>>>
> >>>> Problem description:
> >>>>
> >>>> One of the current shortcomings of IndexedDB is that it doesn't
> >>>> support compound indexes. I.e. indexing on more than one value. For
> >>>> example it's impossible to index on, and therefor efficiently search
> >>>> for, firstname and lastname in an objectStore which stores people. Or
> >>>> index on to-address and date sent in an objectStore holding emails.
> >>>>
> >>>> The way this is traditionally done is that multiple values are used as
> >>>> key for each individual entry in an index or objectStore. For example
> >>>> the CREATE INDEX statement in SQL can list multiple columns, and
> >>>> CREATE TABLE statement can list several columns as PRIMARY KEY.
> >>>>
> >>>> There have been a couple of suggestions how to do this in IndexedDB
> >>>>
> >>>> Option A)
> >>>> When specifying a key path in createObjectStore and createIndex, allow
> >>>> an array of key-paths to be specified. Such as
> >>>>
> >>>> store = db.createObjectStore("mystore", ["firstName", "lastName"]);
> >>>> store.add({firstName: "Benny", lastName: "Zysk", age: 28});
> >>>> store.add({firstName: "Benny", lastName: "An

Re: [IndexedDB] Auto increment and spec inconsistency

2011-01-20 Thread Jeremy Orlow
Sounds good to me.  Please file a bug?

On Mon, Jan 17, 2011 at 5:06 PM, Hans Wennborg  wrote:

> Reading http://dvcs.w3.org/hg/IndexedDB/raw-file/tip/Overview.html,
> there seems to be some inconsistency around how an object store with
> key generator is supposed to behave.
>
> In 5.1 Object Store Storage Operation, step 1 it says: "If store uses
> a key generator and key is undefined, set key to the next generated
> key. If store also uses in-line keys, then set the property in value
> pointed to by store's key path to the new value for key".
>
> But in the object store example in 3.3.3, there is the following:
>
> A second put operation will overwrite the record stored by the first
> put operation.
> var abraham = {id: 1, name: 'Abraham', number: '2107'};
> store.put(abraham);
>
> However, the way I read the specification, the key generator will
> generate the key 2, and then set the "id" property in the value to 2.
> So this operation does not at all overwrite the first record, and the
> next statement in the example: "Now when the object store is read with
> the same key, the result is different compared to the object read
> earlier." is false.
>
>
> It seems to me that for an object store with a key generator, it is
> never possible to specify the key for a put or add operation: Using a
> key parameter is disallowed (in 3.2.5 under "add" and "put"), and
> in-line keys get overwritten.
>
> This means that it is not possible to update a record with a put
> operation if the object store uses a key generator, which seems
> counter-intuitive to me.
>
> To me, it would make sense that:
>
> 1. If a user provides an explicit key to an operation on an object
> store that has a key generator, then the explicit key takes
> precedence, and the key generator doesn't do anything.
>
> 2. If a user provides an in-line key, then that key takes precedence,
> and the key generator doesn't do anything.
>
> I would be interested to read your thoughts about this.
>
> Thanks,
> Hans
>
>
>
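Hans's proposed precedence rules could be sketched with a toy store (illustrative names, not the spec's algorithm): an explicit key wins, then an in-line key, and the generator fires only when neither is supplied.

```javascript
// Toy object store with a key generator and an in-line key path.
class ToyStore {
  constructor(keyPath) {
    this.keyPath = keyPath;
    this.next = 1;               // key generator state
    this.records = new Map();
  }
  put(value, explicitKey) {
    const inline = value[this.keyPath];
    const key = explicitKey !== undefined ? explicitKey // rule 1
              : inline !== undefined ? inline           // rule 2
              : this.next++;                            // generator
    value[this.keyPath] = key;   // reflect the chosen key in the value
    this.records.set(key, value);
    return key;
  }
}

const store = new ToyStore('id');
store.put({ name: 'Abraham', number: '2107' });            // generated key 1
const key = store.put({ id: 1, name: 'Abraham', number: '2107' });
console.log(key, store.records.size); // 1 1 — the second put overwrites,
                                      // as the spec's 3.3.3 example intends
```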


Re: [IndexedDB] Compound and multiple keys

2011-01-20 Thread Jeremy Orlow
Ok.  So what's the resolution?  Let's bug it!

On Fri, Dec 10, 2010 at 12:34 PM, Jeremy Orlow  wrote:

> Any other thoughts on this issue?
>
>
> On Thu, Dec 2, 2010 at 7:19 AM, Keean Schupke  wrote:
>
>> I think I prefer A. Declaring the keys in advance is starting to sound a
>> little like a schema, and when you go down that route you end up at SQL
>> schemas (which is a good thing in my opinion). I understand however that
>> some people are not so comfortable with the idea of a schema, and these
>> people seem to be the kind of people that like IndexedDB. So, although I
>> prefer A for me, I would have to say B for IndexedDB.
>>
>> So in conclusion: I think "B" is the better choice for IndexedDB, as it is
>> more consistent with the design of IDB.
>>
>> As for the cons of "B", sorting an array is just like sorting a string,
>> and it already supports string types.
>>
>> Surely there is also option "C":
>>
>> store.add({firstName: "Benny", lastName: "Zysk", age: 28}, ["firstName",
>> "lastName"]);
>> store.add({firstName: "Benny", lastName: "Andersson", age:
>> 63}, ["firstName", "lastName"]);
>>
>> Like "A", but listing the properties to include in the composite index
>> with each add, therefore avoiding the "schema"...
>>
>>
>> As for layering the Relational API over the top, It doesn't make any
>> difference, but I would prefer whichever has the best performance.
>>
>>
>> Cheers,
>> Keean.
>>
>>
>> On 2 December 2010 00:57, Jonas Sicking  wrote:
>>
>>> Hi IndexedDB fans (yay!!),
>>>
>>> Problem description:
>>>
>>> One of the current shortcomings of IndexedDB is that it doesn't
>>> support compound indexes. I.e. indexing on more than one value. For
>>> example it's impossible to index on, and therefore efficiently search
>>> for, firstname and lastname in an objectStore which stores people. Or
>>> index on to-address and date sent in an objectStore holding emails.
>>>
>>> The way this is traditionally done is that multiple values are used as
>>> key for each individual entry in an index or objectStore. For example
>>> the CREATE INDEX statement in SQL can list multiple columns, and
>>> CREATE TABLE statement can list several columns as PRIMARY KEY.
>>>
>>> There have been a couple of suggestions how to do this in IndexedDB
>>>
>>> Option A)
>>> When specifying a key path in createObjectStore and createIndex, allow
>>> an array of key-paths to be specified. Such as
>>>
>>> store = db.createObjectStore("mystore", ["firstName", "lastName"]);
>>> store.add({firstName: "Benny", lastName: "Zysk", age: 28});
>>> store.add({firstName: "Benny", lastName: "Andersson", age: 63});
>>> store.add({firstName: "Charlie", lastName: "Brown", age: 8});
>>>
>>> The records are stored in the following order
>>> "Benny", "Andersson"
>>> "Benny", "Zysk"
>>> "Charlie", "Brown"
>>>
>>> Similarly, createIndex accepts the same syntax:
>>> store.createIndex("myindex", ["lastName", "age"]);
>>>
>>> Option B)
>>> Allowing arrays as an additional data type for keys.
>>> store = db.createObjectStore("mystore", "fullName");
>>> store.add({fullName: ["Benny", "Zysk"], age: 28});
>>> store.add({fullName: ["Benny", "Andersson"], age: 63});
>>> store.add({fullName: ["Charlie", "Brown"], age: 8});
>>>
>>> Also allows out-of-line keys using:
>>> store = db.createObjectStore("mystore");
>>> store.add({age: 28}, ["Benny", "Zysk"]);
>>> store.add({age: 63}, ["Benny", "Andersson"]);
>>> store.add({age: 8}, ["Charlie", "Brown"]);
>>>
>>> (the sort order here is the same as in option A).
>>>
>>> Similarly, if an index pointed used a keyPath which points to an
>>> array, this would create an entry in the index which used a compound
>>> key consisting of the values in the array.
>>>
>>> There are of course advantages and disadvantages with both options.
>>>
>>>

Re: [Bug 11398] New: [IndexedDB] Methods that take multiple optional parameters should instead take an options object

2011-01-13 Thread Jeremy Orlow
OK.  Let's leave it then.

On Wed, Jan 12, 2011 at 11:48 PM, Jonas Sicking  wrote:

> On Wed, Jan 12, 2011 at 2:29 PM, Jeremy Orlow  wrote:
> > On Wed, Jan 12, 2011 at 10:13 PM, Jonas Sicking 
> wrote:
> >>
> >> On Tue, Jan 11, 2011 at 2:22 AM, Jeremy Orlow 
> wrote:
> >> > On Mon, Jan 10, 2011 at 9:40 PM, Jonas Sicking 
> wrote:
> >> >>
> >> >> On Tue, Dec 14, 2010 at 12:13 PM, Jeremy Orlow 
> >> >> wrote:
> >> >> >
> >> >> > On Tue, Dec 14, 2010 at 7:50 PM, Jonas Sicking 
> >> >> > wrote:
> >> >> >>
> >> >> >> On Tue, Dec 14, 2010 at 8:47 AM, Jeremy Orlow <
> jor...@chromium.org>
> >> >> >> wrote:
> >> >> >>>
> >> >> >>> Btw, I forgot to mention IDBDatabase.transaction which I
> definitely
> >> >> >>> think should take an options object as well.
> >> >> >>
> >> >> >> Hmm.. I think we should make the first argument required, I
> actually
> >> >> >> thought it was until I looked just now. I don't see what the use
> >> >> >> case is for
> >> >> >> opening all tables.
> >> >> >
> >> >> > FWIW I'm finding that the majority of the IndexedDB code I read and
> >> >> > write does indeed need to lock everything.  I'm also finding that
> >> >> > most of
> >> >> > the code I'm writing/reading won't be helped at all by defaulting
> to
> >> >> > READ_ONLY...
> >> >> >
> >> >> >>
> >> >> >> In fact, it seems rather harmful that the syntax which will result
> >> >> >> in
> >> >> >> more lock contention is simpler than the syntax which is better
> >> >> >> optimized.
> >> >> >
> >> >> > But you're right about this.  So, if we're trying to force users to
> >> >> > write highly parallelizable code, then yes the first arg probably
> >> >> > should be
> >> >> > required.  But if we're trying to make IndexedDB easy to use then
> >> >> > actually
> >> >> > the mode should probably be changed back to defaulting to
> READ_WRITE.
> >> >> > I know I argued for the mode default change earlier, but I'm having
> >> >> > second thoughts.  We've spent so much effort making the rest of the
> >> >> > API easy
> >> >> > to use that having points of abrasion like this seem a bit wrong.
> >> >> >  Especially if (at least in my experience) the abrasion is only
> going
> >> >> > to
> >> >> > help a limited number of cases--and probably ones where the
> >> >> > developers will
> >> >> > pay attention to this without us being heavy-handed.
> >> >>
> >> >> I think "ease of use" is different from "few characters typed". For
> >> >> example it's important that the API discourages bugs, for example by
> >> >> making the code easy and clear to read. Included in that is IMHO to
> >> >> make it easy to make the code fast.
> >> >
> >> > It won't make the code fast.  It'll allow parallel
> execution.
> >> >  Which will only matter if a developer is trying to do multiple reads
> at
> >> > once and you have significant latency to your backend and/or it's
> >> > heavily
> >> > disk bound.  Which will only be true in complex web apps--the kind
> where
> >> > a
> >> > developer is going to be more conscious of various performance
> >> > bottlenecks.
> >> >  In other words, most of the time, defaulting to READ_ONLY will almost
> >> > certainly have no visible impact in speed.
> >>
> >> It also matters for the use cases of having background workers reading
> >> from the same table,
> >
> > Workers are a pretty advanced use case.  One where I'd expect the
> developer
> > to be mindful of something like this.
> >
> >> as well as any time the user opens two tabs to
> >> the same page. The latter is something that I expect every web app
> >> would care about.
> >
> > A user will generally only be using one page at a time.  The f

Re: [Bug 11398] New: [IndexedDB] Methods that take multiple optional parameters should instead take an options object

2011-01-12 Thread Jeremy Orlow
On Wed, Jan 12, 2011 at 10:13 PM, Jonas Sicking  wrote:

> On Tue, Jan 11, 2011 at 2:22 AM, Jeremy Orlow  wrote:
> > On Mon, Jan 10, 2011 at 9:40 PM, Jonas Sicking  wrote:
> >>
> >> On Tue, Dec 14, 2010 at 12:13 PM, Jeremy Orlow 
> >> wrote:
> >> >
> >> > On Tue, Dec 14, 2010 at 7:50 PM, Jonas Sicking 
> wrote:
> >> >>
> >> >> On Tue, Dec 14, 2010 at 8:47 AM, Jeremy Orlow 
> >> >> wrote:
> >> >>>
> >> >>> Btw, I forgot to mention IDBDatabase.transaction which I definitely
> >> >>> think should take an options object as well.
> >> >>
> >> >> Hmm.. I think we should make the first argument required, I actually
> >> >> thought it was until I looked just now. I don't see what the use case
> is for
> >> >> opening all tables.
> >> >
> >> > FWIW I'm finding that the majority of the IndexedDB code I read and
> >> > write does indeed need to lock everything.  I'm also finding that most
> of
> >> > the code I'm writing/reading won't be helped at all by defaulting to
> >> > READ_ONLY...
> >> >
> >> >>
> >> >> In fact, it seems rather harmful that the syntax which will result in
> >> >> more lock contention is simpler than the syntax which is better
> optimized.
> >> >
> >> > But you're right about this.  So, if we're trying to force users to
> >> > write highly parallelizable code, then yes the first arg probably
> should be
> >> > required.  But if we're trying to make IndexedDB easy to use then
> actually
> >> > the mode should probably be changed back to defaulting to READ_WRITE.
> >> > I know I argued for the mode default change earlier, but I'm having
> >> > second thoughts.  We've spent so much effort making the rest of the
> API easy
> >> > to use that having points of abrasion like this seem a bit wrong.
> >> >  Especially if (at least in my experience) the abrasion is only going
> to
> >> > help a limited number of cases--and probably ones where the developers
> will
> >> > pay attention to this without us being heavy-handed.
> >>
> >> I think "ease of use" is different from "few characters typed". For
> >> example it's important that the API discourages bugs, for example by
> >> making the code easy and clear to read. Included in that is IMHO to
> >> make it easy to make the code fast.
> >
> > It won't make the code fast.  It'll just allow parallel execution.
> >  Which will only matter if a developer is trying to do multiple reads at
> > once and you have significant latency to your backend and/or it's heavily
> > disk bound.  Which will only be true in complex web apps--the kind where
> a
> > developer is going to be more conscious of various performance
> bottlenecks.
> >  In other words, most of the time, defaulting to READ_ONLY will almost
> > certainly have no visible impact in speed.
>
> It also matters for the use cases of having background workers reading
> from the same table,


Workers are a pretty advanced use case.  One where I'd expect the developer
to be mindful of something like this.


> as well as any time the user opens two tabs to
> the same page. The latter is something that I expect every web app
> would care about.
>

A user will generally only be using one page at a time.  The few apps that I
can think of where this isn't true would be fairly advanced use cases where
the developer is going to need to consciously optimize their app anyway.

I doubt that you're going to save the world more grief than you're going to
cause them by defaulting to READ_ONLY.

J
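The options-object style this thread's subject line proposes can be sketched with a hypothetical standalone helper (not the actual spec API; the real createObjectStore hangs off an IDBDatabase):

```javascript
// Sketch of the options-object pattern under discussion. Positional
// optionals like f(name, keyPath, autoIncrement) force callers to pass
// placeholders; a dictionary keeps call sites self-describing.
function createObjectStore(name, options = {}) {
  const { keyPath = null, autoIncrement = false } = options;
  return { name, keyPath, autoIncrement };
}

// Call sites read clearly even when only one optional is set.
const books = createObjectStore('books', { keyPath: 'isbn' });
const log = createObjectStore('log', { autoIncrement: true });
```

Unspecified members simply fall back to their defaults, which is what makes the dictionary shape cheap to extend later.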


Re: [chromium-html5] LocalStorage inside Worker

2011-01-12 Thread Jeremy Orlow
Agreed.

There's lots of stuff in the web platform that you basically just shouldn't
use/do but that we have to leave in.  In my opinion, localStorage is just
yet another one.  (And yes, this is coming from the person who implemented
it in Chromium. :-)

J

On Wed, Jan 12, 2011 at 11:36 AM, Keean Schupke  wrote:

> So, it may be acceptable to say you can't use localStorage from a worker,
> use IndexedDB instead. But is it acceptable to leave localStorage broken
> with multiple tabs/windows.  As the spec says there should be a global lock,
> that seems to be an implementation problem though.
>
>
> Cheers,
> Keean.
>
>
> On 12 January 2011 11:29, Jeremy Orlow  wrote:
>
>> On Wed, Jan 12, 2011 at 11:14 AM, Glenn Maynard  wrote:
>>
>>> On Wed, Jan 12, 2011 at 6:00 AM, Keean Schupke  wrote:
>>> > IMHO, if the global lock on localStorage implemented, then I think it
>>> is
>>> > acceptable to say localStorage may have poor performance with multiple
>>> > windows/tabs open. If you want better then use IndexedDB.
>>>
>>> Performance isn't the problem.  The problems, as I understand them, are:
>>>
>>> 1: the global lock is simply not being implemented; it's too hard to
>>> implement this sort of locking from within a running UI thread
>>> properly, and
>>> 2: unlike scripts in the main thread, a worker thread may not return
>>> to caller regularly; that's when the storage mutex is unlocked, which
>>> means there's no proper way to unlock the storage mutex from a thread.
>>>
>>> The callback API addresses both of these problems.
>>>
>>> On 12 January 2011 10:21, Jeremy Orlow  wrote:
>>> > Why not just use a small library (like lawnchair) on top of IndexedDB
>>> > instead?  This doesn't seem like it's worth the surface area at all...
>>>
>>> This sounds more like an argument for deprecating the entire Storage API.
>>>
>>
>> It is, but the thing with the web platform is that once you add something
>> you can pretty much never remove it.
>>
>> But I'll be doing everything in my power to push people away from
>> LocalStorage once IndexedDB is a bit more mature.
>>
>> J
>>
>
>


Re: [chromium-html5] LocalStorage inside Worker

2011-01-12 Thread Jeremy Orlow
On Wed, Jan 12, 2011 at 11:14 AM, Glenn Maynard  wrote:

> On Wed, Jan 12, 2011 at 6:00 AM, Keean Schupke  wrote:
> > IMHO, if the global lock on localStorage implemented, then I think it is
> > acceptable to say localStorage may have poor performance with multiple
> > windows/tabs open. If you want better then use IndexedDB.
>
> Performance isn't the problem.  The problems, as I understand them, are:
>
> 1: the global lock is simply not being implemented; it's too hard to
> implement this sort of locking from within a running UI thread
> properly, and
> 2: unlike scripts in the main thread, a worker thread may not return
> to caller regularly; that's when the storage mutex is unlocked, which
> means there's no proper way to unlock the storage mutex from a thread.
>
> The callback API addresses both of these problems.
>
> On 12 January 2011 10:21, Jeremy Orlow  wrote:
> > Why not just use a small library (like lawnchair) on top of IndexedDB
> > instead?  This doesn't seem like it's worth the surface area at all...
>
> This sounds more like an argument for deprecating the entire Storage API.
>

It is, but the thing with the web platform is that once you add something
you can pretty much never remove it.

But I'll be doing everything in my power to push people away from
LocalStorage once IndexedDB is a bit more mature.

J


Re: [chromium-html5] LocalStorage inside Worker

2011-01-12 Thread Jeremy Orlow
On Tue, Jan 11, 2011 at 8:58 PM, Jonas Sicking  wrote:

> With localStorage being the way it is, I personally don't think we can
> ever allow localStorage access in workers.
>
> However I do think we can and should provide access to a separate
> storage area (or several named storage areas) which can only be
> accessed from callbacks.


So basically you want to create yet another storage API?  Didn't we decide a
while ago this was a bad idea?

Why not just use a small library (like lawnchair) on top of IndexedDB
instead?  This doesn't seem like it's worth the surface area at all...


> On the main thread those callbacks would be
> asynchronous. In workers those callbacks can be either synchronous or
> asynchronous. Here is the API I'm proposing:
>
> getNamedStorage(in DOMString name, in Function callback);
> getNamedStorageSync(in DOMString name, in Function callback);
>
> The latter is only available in workers. The former is available in
> both workers and in windows. When the callback is called it's given a
> reference to the Storage object which has the exact same API as
> localStorage does.
>
> Also, you're not allowed to nest getNamedStorageSync and/or
> IDBDatabaseSync.transaction calls.
>
> This has the added advantage that it's much more implementable without
> threading hazards than localStorage already is.
>
> / Jonas
>
> On Tue, Jan 11, 2011 at 6:40 AM, Jeremy Orlow  wrote:
> > So what's the plan for localStorage in workers?
> > J
> >
> > On Tue, Jan 11, 2011 at 9:10 AM, Keean Schupke  wrote:
> >>
> >> I think I already came to the same conclusion... JavaScript has no
> control
> >> over effects, which devalues STM. In the absence of effect control,
> apparent
> >> serialisation (of transactions) is the best you can do.
> >> What we need is a purely functional JavaScript, it makes threading so
> much
> >> easier ;-)
> >>
> >> Cheers,
> >> Keean.
> >>
> >> On 10 January 2011 23:42, Robert O'Callahan 
> wrote:
> >>>
> >>> STM is not a panacea. Read
> >>>
> http://www.bluebytesoftware.com/blog/2010/01/03/ABriefRetrospectiveOnTransactionalMemory.aspx
> >>> if you haven't already.
> >>>
> >>> In Haskell, where you have powerful control over effects, it may work
> >>> well, but Javascript isn't anything like that.
> >>>
> >>> Rob
> >>> --
> >>> "Now the Bereans were of more noble character than the Thessalonians,
> for
> >>> they received the message with great eagerness and examined the
> Scriptures
> >>> every day to see if what Paul said was true." [Acts 17:11]
> >>
> >
> >
>
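The proposed getNamedStorage can be approximated with a toy in-memory sketch (hypothetical: this API was never standardized, a real implementation would persist data, hold a lock for the duration of the callback, and deliver the callback asynchronously on the main thread):

```javascript
// Toy in-memory sketch of the proposed getNamedStorage(name, callback).
// The callback is invoked synchronously here for brevity; the actual
// proposal delivers it asynchronously on the main thread.
const areas = new Map();

function getNamedStorage(name, callback) {
  if (!areas.has(name)) areas.set(name, new Map());
  const backing = areas.get(name);
  // Storage-like surface, only usable inside the callback per the proposal.
  callback({
    getItem: (key) => (backing.has(key) ? backing.get(key) : null),
    setItem: (key, value) => { backing.set(key, String(value)); },
    removeItem: (key) => { backing.delete(key); },
  });
}

let theme = null;
getNamedStorage('prefs', (storage) => {
  storage.setItem('theme', 'dark');
  theme = storage.getItem('theme');
});
```

Confining access to a callback is what gives the implementation a natural point to acquire and release the lock without the run-to-completion hazards localStorage has.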


Re: [chromium-html5] LocalStorage inside Worker

2011-01-11 Thread Jeremy Orlow
So what's the plan for localStorage in workers?

J

On Tue, Jan 11, 2011 at 9:10 AM, Keean Schupke  wrote:

> I think I already came to the same conclusion... JavaScript has no control
> over effects, which devalues STM. In the absence of effect control, apparent
> serialisation (of transactions) is the best you can do.
>
> What we need is a purely functional JavaScript, it makes threading so much
> easier ;-)
>
>
> Cheers,
> Keean.
>
>
> On 10 January 2011 23:42, Robert O'Callahan  wrote:
>
>> STM is not a panacea. Read
>> http://www.bluebytesoftware.com/blog/2010/01/03/ABriefRetrospectiveOnTransactionalMemory.aspx
>> if you haven't already.
>>
>> In Haskell, where you have powerful control over effects, it may work
>> well, but Javascript isn't anything like that.
>>
>> Rob
>> --
>> "Now the Bereans were of more noble character than the Thessalonians, for
>> they received the message with great eagerness and examined the Scriptures
>> every day to see if what Paul said was true." [Acts 17:11]
>>
>
>


Re: [Bug 11398] New: [IndexedDB] Methods that take multiple optional parameters should instead take an options object

2011-01-11 Thread Jeremy Orlow
On Mon, Jan 10, 2011 at 9:40 PM, Jonas Sicking  wrote:

> On Tue, Dec 14, 2010 at 12:13 PM, Jeremy Orlow 
> wrote:
> >
> > On Tue, Dec 14, 2010 at 7:50 PM, Jonas Sicking  wrote:
> >>
> >> On Tue, Dec 14, 2010 at 8:47 AM, Jeremy Orlow 
> wrote:
> >>>
> >>> Btw, I forgot to mention IDBDatabase.transaction which I definitely
> think should take an options object as well.
> >>
> >> Hmm.. I think we should make the first argument required, I actually
> thought it was until I looked just now. I don't see what the use case is for
> opening all tables.
> >
> > FWIW I'm finding that the majority of the IndexedDB code I read and write
> does indeed need to lock everything.  I'm also finding that most of the code
> I'm writing/reading won't be helped at all by defaulting to READ_ONLY...
> >
> >>
> >> In fact, it seems rather harmful that the syntax which will result in
> more lock contention is simpler than the syntax which is better optimized.
> >
> > But you're right about this.  So, if we're trying to force users to write
> highly parallelizable code, then yes the first arg probably should be
> required.  But if we're trying to make IndexedDB easy to use then actually
> the mode should probably be changed back to defaulting to READ_WRITE.
> > I know I argued for the mode default change earlier, but I'm having
> second thoughts.  We've spent so much effort making the rest of the API easy
> to use that having points of abrasion like this seem a bit wrong.
>  Especially if (at least in my experience) the abrasion is only going to
> help a limited number of cases--and probably ones where the developers will
> pay attention to this without us being heavy-handed.
>
> I think "ease of use" is different from "few characters typed". For
> example it's important that the API discourages bugs, for example by
> making the code easy and clear to read. Included in that is IMHO to
> make it easy to make the code fast.
>

It won't make the code fast.  It'll just allow parallel execution.
 Which will only matter if a developer is trying to do multiple reads at
once and you have significant latency to your backend and/or it's heavily
disk bound.  Which will only be true in complex web apps--the kind where a
developer is going to be more conscious of various performance bottlenecks.
 In other words, most of the time, defaulting to READ_ONLY will almost
certainly have no visible impact in speed.

I'm pretty sure this is decreasing ease of use any way you measure it.  And
that's mostly based on me personally having spent several days coding up
stuff with IndexedDB + talking to others who have done the same.

J
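The trade-off being argued here -- explicit scope plus a READ_ONLY default versus an everything/READ_WRITE default -- can be sketched with a stub (not a real IndexedDB implementation; the numeric mode constants follow the 2010-era drafts):

```javascript
// Stub illustrating the two call shapes under debate.
const READ_ONLY = 0;
const READ_WRITE = 1;

function makeDb(allStores) {
  return {
    transaction(stores, mode = READ_ONLY) {
      // Omitting `stores` locks everything -- the "simple" shape that
      // the thread worries will cause needless lock contention.
      const locked = stores === undefined ? allStores.slice() : stores.slice();
      return { locked, mode };
    },
  };
}

const db = makeDb(['books', 'authors']);

// Scoped and read-only: lets other readers run in parallel.
const readTx = db.transaction(['books']);

// Unscoped and read-write: easiest to type, most contention.
const writeTx = db.transaction(undefined, READ_WRITE);
```

Whether the easy shape should also be the contended one is exactly the ergonomics question the thread is circling.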


Re: [chromium-html5] LocalStorage inside Worker

2011-01-06 Thread Jeremy Orlow
public-webapps is probably the better place for this email

On Sat, Jan 1, 2011 at 4:22 AM, Felix Halim  wrote:

> I know this has been discussed > 1 year ago:
>
> http://www.mail-archive.com/whatwg@lists.whatwg.org/msg14087.html
>
> I couldn't find the follow up, so I guess localStorage is still
> inaccessible from Workers?
>

Yes.


> I have one other option aside from what mentioned by Jeremy:
>
> http://www.mail-archive.com/whatwg@lists.whatwg.org/msg14075.html
>
> 5: Why not make localStorage accessible from the Workers as "read only" ?
>
> The use case is as following:
>
> First, the user in the main window page (who has read/write access to
> localStorage), dumps a big data to localStorage. Once all data has
> been set, then the main page spawns Workers. These workers read the
> data from localStorage, process it, and returns via message passing
> (as they cannot alter the localStorage value).
>
> What are the benefits?
> 1. No lock, no deadlock, no data race, fast, and efficient (see #2 below).
> 2. You only set the data once, read by many Worker threads (as opposed
> to give the big data again and again from the main page to each of the
> Workers via message).
> 3. It is very easy to use compared to using IndexedDB (i'm the big
> proponent in localStorage).
>
> Note: I was not following the discussion on the spec, and I don't know
> if my proposal has been discussed before? or is too late to change
> now?
>

I don't think it's too late or has had much discussion any time recently.
 It's probably worth re-exploring.


> Thanks,
>
> Felix Halim
>
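Felix's option 5 -- handing workers a read-only view of what the main page wrote -- could look something like this sketch (hypothetical API; nothing like it was ever specced):

```javascript
// Hypothetical read-only snapshot of data the main page dumped into
// localStorage, as a worker might receive under option 5 above.
function makeReadOnlySnapshot(entries) {
  const snapshot = new Map(Object.entries(entries));
  return Object.freeze({
    get length() { return snapshot.size; },
    getItem: (key) => (snapshot.has(key) ? snapshot.get(key) : null),
    // No setItem/removeItem: writes stay impossible, so no locking.
  });
}

const view = makeReadOnlySnapshot({ config: '{"lang":"en"}' });
```

Because the view is immutable, many workers can read it concurrently without a storage mutex, which is the whole appeal of the proposal.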


Re: [chromium-html5] LocalStorage inside Worker

2011-01-06 Thread Jeremy Orlow
(oops, apologies for not cleaning up the subject line before sending this!)

On Thu, Jan 6, 2011 at 8:01 PM, Jeremy Orlow  wrote:

> public-webapps is probably the better place for this email
>
> On Sat, Jan 1, 2011 at 4:22 AM, Felix Halim  wrote:
>
>> I know this has been discussed > 1 year ago:
>>
>> http://www.mail-archive.com/whatwg@lists.whatwg.org/msg14087.html
>>
>> I couldn't find the follow up, so I guess localStorage is still
>> inaccessible from Workers?
>>
>
> Yes.
>
>
>> I have one other option aside from what mentioned by Jeremy:
>>
>> http://www.mail-archive.com/whatwg@lists.whatwg.org/msg14075.html
>>
>> 5: Why not make localStorage accessible from the Workers as "read only" ?
>>
>> The use case is as following:
>>
>> First, the user in the main window page (who has read/write access to
>> localStorage), dumps a big data to localStorage. Once all data has
>> been set, then the main page spawns Workers. These workers read the
>> data from localStorage, process it, and returns via message passing
>> (as they cannot alter the localStorage value).
>>
>> What are the benefits?
>> 1. No lock, no deadlock, no data race, fast, and efficient (see #2 below).
>> 2. You only set the data once, read by many Worker threads (as opposed
>> to give the big data again and again from the main page to each of the
>> Workers via message).
>> 3. It is very easy to use compared to using IndexedDB (i'm the big
>> proponent in localStorage).
>>
>> Note: I was not following the discussion on the spec, and I don't know
>> if my proposal has been discussed before? or is too late to change
>> now?
>>
>
> I don't think it's too late or has had much discussion any time recently.
>  It's probably worth re-exploring.
>
>
>> Thanks,
>>
>> Felix Halim
>>
>


Re: [IndexedDB] Do we need a timeout for VERSION_CHANGE?

2010-12-17 Thread Jeremy Orlow
On Fri, Dec 17, 2010 at 5:56 PM, Jonas Sicking  wrote:

> On Thu, Dec 16, 2010 at 4:39 PM, Jeremy Orlow  wrote:
> > On Thu, Dec 16, 2010 at 10:09 PM, Pablo Castro <
> pablo.cas...@microsoft.com>
> > wrote:
> >>
> >> From: jor...@google.com [mailto:jor...@google.com] On Behalf Of Jeremy
> >> Orlow
> >> Sent: Thursday, December 16, 2010 2:35 AM
> >>
> >> >>In another thread (in the last couple days) we actually decided to
> >> >> remove timeouts from normal transactions since they can be
> implemented as a
> >> >> setTimeout+abort.
> >> >>
> >> >>But I agree that we need a way to abort setVersion transactions before
> >> >> getting the callback (so that we implement timeouts for them as
> well).
> >> >>  Unfortunately, I don't immediately have any good ideas on how to do
> that
> >> >> though.
> >>
> >> Sorry, forgot to qualify it, context == sync api. I assume that the sync
> >> versions of the API will truly block, so setTimeout won't do as code
> won't
> >> just reenter into the timeout callback while blocked on a sync IndexedDB
> >> call, are we all on the same page on that? If that's the case, then I
> don't
> >> think we can remove the timeout parameter from the sync versions of
> >> transaction() and setVersion(). Does that sound reasonable? I'll add
> them
> >> for now, we can adjust if somebody comes up with a better approach.
> >>
> >> As for setVersion in async...that's actually a problem as well now that
> I
> >> think about it because you don't have access to the (version)
> transaction
> >> object until it actually was able to start. One option besides having a
> >> timeout parameter in the method would be to have an abort() method in
> >> IDBVersionChangeRequest.
> >
> > Very good points
> > Given the fact that we will need timeouts for the sync version, I'm
> starting
> > to wonder if it makes sense to just leave in for the async version.
>  Hm...
> >  Jonas, what do you think?
>
> I'm fine with that, but I still don't think we should introduce the
> options object, given how rarely the timeout parameter is likely to be
> used.
>

I'm still not sure we should keep it.  It seems like a decent amont of API +
implementation for a feature that can be easily emulated.  I guess we can
keep it for now though.

What about timeout/abort for async setVersion?  How should we allow those?
 Have abort() and ontimeout on the IDBRequest for just setVersion + a second
optional parameter to setVersion for the timeout?  Seems like a heavyweight
solution, but I'm not sure how else to do it.

J


Re: [IndexedDB] KeyRange factory methods

2010-12-17 Thread Jeremy Orlow
Filed: http://www.w3.org/Bugs/Public/show_bug.cgi?id=11567

On Fri, Dec 17, 2010 at 12:07 AM, Jonas Sicking  wrote:

> On Thu, Dec 16, 2010 at 2:52 PM, Pablo Castro
>  wrote:
> > I was going to file a bug on this but wanted to make sure I'm not missing
> something first.
> >
> > All the factory methods for ranges (e.g. bound, lowerBound, etc.) are in
> the IDBKeyRangeConstructors interface now, but I don't see the interface
> referenced anywhere. Who implements this interface, the Window object,
> IDBFactory[Sync], something else?
>
> From the spec:
>
> "To construct a key range a set of constructors are available. In
> languages with interface objects [WEBIDL], these constructors are
> available on the IDBKeyRange interface object. In other languages
> these constructors are available through language specific means, for
> example as static functions"
>
> But I see that I forgot to say that the IDBKeyRangeConstructors
> interface implements these constructors.
>
> However WebIDL nowadays have support for "static" so there is no
> longer a need for a separate interface. We should just do
>
> interface IDBKeyRange {
>readonly attribute any lower;
>readonly attribute any upper;
>readonly attribute boolean lowerOpen;
>readonly attribute boolean upperOpen;
>
>static IDBKeyRange only (in any value);
>static IDBKeyRange lowerBound (in any bound, in optional boolean open);
>static IDBKeyRange upperBound (in any bound, in optional boolean open);
>static IDBKeyRange bound (in any lower, in any upper,
> in optional boolean lowerOpen,
> in optional boolean upperOpen);
> };
>
> / Jonas
>
>
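With statics on the interface object, usage would look like IDBKeyRange.bound('B', 'F', false, true). A minimal stand-in sketch (string/number keys only; the real spec also defines key ordering for dates and arrays, and has no includes() method):

```javascript
// Minimal stand-in for the spec's IDBKeyRange static factories.
class KeyRange {
  constructor(lower, upper, lowerOpen, upperOpen) {
    this.lower = lower; this.upper = upper;
    this.lowerOpen = !!lowerOpen; this.upperOpen = !!upperOpen;
  }
  static only(value) { return new KeyRange(value, value, false, false); }
  static lowerBound(bound, open) { return new KeyRange(bound, undefined, open, true); }
  static upperBound(bound, open) { return new KeyRange(undefined, bound, true, open); }
  static bound(lower, upper, lowerOpen, upperOpen) {
    return new KeyRange(lower, upper, lowerOpen, upperOpen);
  }
  // Convenience not in the spec: does a key fall inside the range?
  includes(key) {
    if (this.lower !== undefined &&
        (this.lowerOpen ? key <= this.lower : key < this.lower)) return false;
    if (this.upper !== undefined &&
        (this.upperOpen ? key >= this.upper : key > this.upper)) return false;
    return true;
  }
}

const r = KeyRange.bound('B', 'F', false, true); // B <= key < F
```

Static factories keep range construction off of instances, which is what the WebIDL `static` keyword buys over the separate IDBKeyRangeConstructors interface.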


Re: [IndexedDB] Do we need a timeout for VERSION_CHANGE?

2010-12-16 Thread Jeremy Orlow
On Thu, Dec 16, 2010 at 10:09 PM, Pablo Castro
wrote:

>
> From: jor...@google.com [mailto:jor...@google.com] On Behalf Of Jeremy
> Orlow
> Sent: Thursday, December 16, 2010 2:35 AM
>
> >>In another thread (in the last couple days) we actually decided to remove
> timeouts from normal transactions since they can be implemented as a
> setTimeout+abort.
> >>
> >>But I agree that we need a way to abort setVersion transactions before
> getting the callback (so that we implement timeouts for them as well).
>  Unfortunately, I don't immediately have any good ideas on how to do that
> though.
>
> Sorry, forgot to qualify it, context == sync api. I assume that the sync
> versions of the API will truly block, so setTimeout won't do as code won't
> just reenter into the timeout callback while blocked on a sync IndexedDB
> call, are we all on the same page on that? If that's the case, then I don't
> think we can remove the timeout parameter from the sync versions of
> transaction() and setVersion(). Does that sound reasonable? I'll add them
> for now, we can adjust if somebody comes up with a better approach.
>
> As for setVersion in async...that's actually a problem as well now that I
> think about it because you don't have access to the (version) transaction
> object until it actually was able to start. One option besides having a
> timeout parameter in the method would be to have an abort() method in
> IDBVersionChangeRequest.
>

Very good points

Given the fact that we will need timeouts for the sync version, I'm starting
to wonder if it makes sense to just leave in for the async version.  Hm...
 Jonas, what do you think?

J


Re: [IndexedDB] Do we need a timeout for VERSION_CHANGE?

2010-12-16 Thread Jeremy Orlow
In another thread (in the last couple days) we actually decided to remove
timeouts from normal transactions since they can be implemented as a
setTimeout+abort.

But I agree that we need a way to abort setVersion transactions before
getting the callback (so that we implement timeouts for them as well).
 Unfortunately, I don't immediately have any good ideas on how to do that
though.

J

On Wed, Dec 15, 2010 at 10:46 PM, Pablo Castro
wrote:

> Regular transactions take a timeout parameter when started, which ensures
> that we eventually make progress one way or the other if there's an
> un-cooperating script that won't let go of an object store or something like
> that.
>
> I'm not sure if we discussed this before, it seems that we need to add a
> similar thing for setVersion(), and it's basically a way of starting a
> transaction.
>
> I was thinking we could have an optional timeout argument in setVersion
> with a UA-specific default. In the async case we would fire the onerror
> event and in the sync case just throw, both with TIMEOUT_ERR.
>
> Thanks
> -pablo
>
>
>
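The setTimeout+abort emulation referenced in this thread amounts to the following sketch (the `tx` here is a stub standing in for a real IDBTransaction, which would come from db.transaction(...) and fire onabort/oncomplete itself):

```javascript
// Emulating a transaction timeout with setTimeout + abort(), as decided
// for normal transactions. `tx` is any object with abort() and the
// oncomplete/onabort handler slots of an IDBTransaction.
function withTimeout(tx, ms) {
  const timer = setTimeout(() => tx.abort(), ms);
  const settle = () => clearTimeout(timer);
  // Cancel the timer once the transaction settles either way.
  tx.oncomplete = settle;
  tx.onabort = settle;
  return tx;
}

// Stub transaction demonstrating the abort path.
const calls = [];
const tx = withTimeout({ abort() { calls.push('abort'); } }, 5);
```

The gap the thread identifies is that setVersion() gives you no transaction object until its callback fires, so there is nothing to call abort() on in the meantime.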


Re: [Bug 11375] New: [IndexedDB] Error codes need to be assigned new numbers

2010-12-15 Thread Jeremy Orlow
On Wed, Dec 15, 2010 at 3:42 AM, Jonas Sicking  wrote:

> > Speaking of which, we use UNKNOWN_ERR for a bunch of other
> > internal consistency issues.  Is this OK by everyone, should we use
> another,
> > or should we create a new one?  (Ideally these issues will be few and far
> > between as we make things more robust.)
>
> Is this things like disk IO errors and the like?
>

Yeah and other "impossible" conditions.


>  > We also use UNKNOWN_ERR for when things are not yet implemented.  Any
> > concerns?
>
> Ideal is if you can leave out the function entirely if you don't
> implement it yet.
>

An example is autoIncrement.  There isn't really any way to "leave it out"
because we otherwise support creating object stores and silently ignoring it
means that we don't behave the way a user would expect.

If there were mature IndexedDB implementations, we would have blocked
releasing anything until we were feature complete, but until then I think
it's better for everyone that we release early and often so we can get
feedback.

When this isn't possible, I would say that you should throw something
> different from what could be legitimately thrown from the function. I
> know gecko has a special exception we throw from various places when
> functionality isn't implemented.
>
> I don't think we should put anything in the spec about this as the
> spec should specify that everything should be implemented :)
>

I agree.  Wasn't asking because I wanted to add anything to the spec.


>  > What error code should we use for IDBCursor.update/delete when the
> cursor is
> > not currently on an item
>
> It's currently specced to throw NOT_ALLOWED_ERR. I think this is
> consistent with other uses of that exception.
>

I see now.  In the text at the top it does actually say this clearly, but in
the table below it only talks about hitting the end of results.


> > (or that item has been deleted)?
>
> I brought up this a while back in bug 11257. Note that you can't throw
> since you can't synchronously know if an item has been deleted.
>
> IMHO the simplest thing is to just treat IDBCursor.update the same as
> IDBObjectStore.put and IDBCursor.delete as IDBObjectStore.delete. I
> can't think of a use case for deleting and then wanting to ensure that
> IDBCursor.update doesn't create a new entry, so we might as well keep
> things simple and reuse the implementation and spec logic.
>
> In short, I think the spec is fine in this area.
>
> > TRANSIENT_ERR doesn't seem to be used anywhere in the spec.  Should it be
> > removed?
>
> Yes, along with RECOVERABLE_ERR, NON_TRANSIENT_ERR and DEADLOCK_ERR.
>
> We should also remove the .message property. DOMException doesn't have
> that.
>
> > As for the numbering: does anyone object to me just starting from 1 and
> > going sequentially?  I.e. does anyone have a problem with them all
> getting
> > new numbers, or should I keep the numbers the same when possible.  (i.e.
> > only UNKNOWN_ERR, RECOVERABLE_ERR, TRANSIENT_ERR, TIMEOUT_ERR,
> DEADLOCK_ERR
> > would change number, but the ordering of those on the page would change.)
> > I intend to tackle this early next week unless there are major areas of
> > contention.
>
> Sounds great. The only possible thing that we might want to do
> differently is that we might want to get rid of IDBDatabaseException
> entirely and just add values to DOMException. I don't have an opinion
> on this but I'm currently checking with JS developers what is easiest
> for them. (In general I'm not a fan of how exceptions in the DOM are
> so different from JS-exceptions).
>

Other APIs have their own exception classes.  What's the benefit of putting
our exceptions in DOMException?  The downside is that other specs need to
coordinate with our exception codes.  Unless there's a pretty compelling
reason to do this, I don't think we should.

J


Re: [Bug 11553] New: Ensure indexedDBSync is on the right worker interface

2010-12-15 Thread Jeremy Orlow
I believe the instance of WorkerUtils is much like window in a page.  I.e.
you put stuff on there that you want in the global scope.  Thus I'm pretty
sure that WorkerUtils is the right place for both.

J

On Wed, Dec 15, 2010 at 1:54 AM,  wrote:

> http://www.w3.org/Bugs/Public/show_bug.cgi?id=11553
>
>   Summary: Ensure indexedDBSync is on the right worker interface
>   Product: WebAppsWG
>   Version: unspecified
>  Platform: All
>OS/Version: All
>Status: NEW
>  Severity: normal
>  Priority: P2
> Component: Indexed Database API
>AssignedTo: dave.n...@w3.org
>ReportedBy: pablo.cas...@microsoft.com
> QAContact: member-webapi-...@w3.org
>CC: m...@w3.org, public-webapps@w3.org
>
>
> I just noticed that the async part of the spec has indexedDB in Worker
> (i.e.
> "Worker implements IDBEnvironment"), whereas the sync API has it on
> WorkerUtils.
> The second one is probably just old, so for my current editing pass
> (bringing
> sync/async in sync) I'll just change it to "Worker" for consistency.
>
> However, from looking at the Web Workers spec [1] it seems that
> IDBEnvironmentSync should be implemented by AbstractWorker so it's
> available in
> regular and shared workers. Is that right? If not, what's the right spot?
>
> [1] http://dev.w3.org/html5/workers
>
> --
> Configure bugmail: http://www.w3.org/Bugs/Public/userprefs.cgi?tab=email
> --- You are receiving this mail because: ---
> You are on the CC list for the bug.
>
>

