Re: IndexedDB: Syntax for specifying persistent/temporary storage

2013-12-17 Thread Kyaw Tun
Option C), using a numeric priority, is good enough for most web
applications. It has the advantage of being easy to implement.

Option C) can be combined with option B) by defining that 0 means
temporary, 1 means persistent, and undefined means the default. Any other
value should throw an error.
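
A minimal sketch of the combined form (hypothetical API; an
options-dictionary style of open() is assumed here):

// Hypothetical: numeric storage priority in an options-style open().
// 0 = temporary, 1 = persistent, undefined = default; anything else throws.
indexedDB.open({name: 'db', version: 1, storage: 1});   // persistent
indexedDB.open({name: 'db', version: 1, storage: 0});   // temporary
indexedDB.open({name: 'db', version: 1});               // default
indexedDB.open({name: 'db', version: 1, storage: 0.5}); // should throw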

Since we can query the allocated quota and know the estimated app data
size, persistence can be guaranteed.

Kyaw


Re: [IndexedDB] MultiEntry compound index proposal

2013-11-16 Thread Kyaw Tun
Sorry, the multiEntry attribute of the 'clad-name' index in the proposal
below should be true.


[IndexedDB] MultiEntry compound index proposal

2013-11-16 Thread Kyaw Tun
The current IndexedDB spec does not allow a multiEntry index on an array
keyPath (compound index). Implementing a multiEntry compound index is
rather straightforward, but the ambiguous specification is an issue, as
explained in this bug report
<https://www.w3.org/Bugs/Public/show_bug.cgi?id=21836#c2>. MultiEntry
compound indexes are required for efficient key joins that involve a
multiEntry key.

The behaviour of the multiEntry attribute of an array keyPath index can be
defined by taking into account the multiEntry attributes of its
constituting indexes. Each item (string) of an array keyPath is the name
of an index on the object store; such an index is called a constituting
index.

Creating an array keyPath index with the optional multiEntry attribute is
allowed and does not throw an Error. When creating an array keyPath index
with the multiEntry attribute set to true, if none of its constituting
indexes exists, or none of them has its multiEntry attribute set to true,
throw a ConstraintError.

If the multiEntry attribute of an array keyPath index is false or absent,
the algorithm for storing a record and evaluating the keyPath value is the
same as currently defined, except that each item in the keyPath sequence
may additionally be an index name; in that case the index key is taken
from the constituting index.

If the multiEntry attribute of an array keyPath index is true, the
algorithm for storing a record is modified so that one index record is
added for each combination of the constituting indexes' keys, with that
combination as its key and the record's primary key as its value.

As an illustrative example, suppose we have the following record.

var human = {
  taxon: 9606,
  classification: ['Animalia', 'Mammalia', 'Primates'],
  name: {
    genus: 'Homo',
    species: 'sapiens'
  },
  habitat: ['asia', 'americas', 'africa', 'europe', 'oceania']
}

store = db.createObjectStore('specie', {keyPath: 'taxon'});
store.createIndex('clade', 'classification', {multiEntry: true});
store.createIndex('habitat', 'habitat', {multiEntry: true});
store.createIndex('binomen', ['name.genus', 'name.species']);


The following composite index is used to list the 'specie' table.

store.createIndex('specie', ['classification', 'binomen'],
    {unique: true, multiEntry: false});

It should create an index value of:

[['Animalia', 'Mammalia', 'Primates'], ['Homo', 'sapiens']]

Notice that 'binomen' is an index name, not a keyPath into the record value.

The following composite index is used to query a specific clade, ordered
by name.

store.createIndex('clad-name', ['clade', 'binomen'], {multiEntry: false});

It should create index values of:

['Animalia', ['Homo', 'sapiens']]
['Mammalia', ['Homo', 'sapiens']]
['Primates', ['Homo', 'sapiens']]


The following composite index is used to query by habitat and clade.

store.createIndex('clad-habitat', ['clade', 'habitat', 'binomen'],
    {multiEntry: true});

It should create index values of:
['Animalia', 'africa', ['Homo', 'sapiens']]
['Animalia', 'americas', ['Homo', 'sapiens']]
['Animalia', 'asia', ['Homo', 'sapiens']]
['Animalia', 'europe', ['Homo', 'sapiens']]
['Animalia', 'oceania', ['Homo', 'sapiens']]
['Mammalia', 'africa', ['Homo', 'sapiens']]
['Mammalia', 'americas', ['Homo', 'sapiens']]
['Mammalia', 'asia', ['Homo', 'sapiens']]
['Mammalia', 'europe', ['Homo', 'sapiens']]
['Mammalia', 'oceania', ['Homo', 'sapiens']]
['Primates', 'africa', ['Homo', 'sapiens']]
['Primates', 'americas', ['Homo', 'sapiens']]
['Primates', 'asia', ['Homo', 'sapiens']]
['Primates', 'europe', ['Homo', 'sapiens']]
['Primates', 'oceania', ['Homo', 'sapiens']]

Alternatively, this problem could also be solved by Bug 1
<https://www.w3.org/Bugs/Public/show_bug.cgi?id=1>, using index keys
created by an expression. Expression-based indexes are very powerful and
can solve many problems, including full-text search indexes.

However, the multiEntry compound index is a common use case, this proposal
describes the expected behaviour of such an index, and it should be baked
into the IndexedDB API. If web developers or wrapper libraries have to
generate multiEntry compound indexes using expression indexes, handling
schema changes will be a nightmare.
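
For comparison, a rough sketch of the emulation a wrapper has to do today
(field and helper names are made up; it also assumes the implementation
accepts array items as multiEntry subkeys):

// Precompute the Cartesian product and store it on the record, then
// index the derived field with a plain multiEntry index.
function addCladHabitatKeys(record) {
  record._cladHabitat = [];
  record.classification.forEach(function (clade) {
    record.habitat.forEach(function (habitat) {
      record._cladHabitat.push(
          [clade, habitat, [record.name.genus, record.name.species]]);
    });
  });
  return record;
}

// In a version-change transaction:
store.createIndex('clad-habitat', '_cladHabitat', {multiEntry: true});

// Every write must recompute the derived keys, which is why schema
// changes become painful:
store.put(addCladHabitatKeys(human));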


Best,

Kyaw


Re: Sync API for workers

2013-10-13 Thread Kyaw Tun
Actually, only IDBRequest needs to be sync; requests are the error-prone
part that complicates the workflow. An async workflow for database opening
and transaction creation is fine.

Kyaw


[IndexedDB] blocked event should have default operation to close the connection

2013-10-09 Thread Kyaw Tun
An application receives a blocked event on an IndexedDB database instance
when another app opens the database with a newer version.

The receiving application must close the connection so that the other tab
receives the open success event. Otherwise, the database open request
stays pending. Most developers are unaware of this fact, and the reason is
hard to figure out.

My suggestion is to make the close method the default operation of the
blocked event. An app that needs to save data first should listen for the
blocked event, invoke preventDefault(), and finally close the connection
itself, as sketched below.
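
A sketch of the proposed opt-out (hypothetical; in the current spec the
existing connection is notified via a versionchange event, and there is no
default close to prevent):

// db: an open IDBDatabase connection (assumed).
db.onversionchange = function (e) {
  e.preventDefault();           // proposed: suppress the default close
  saveUnsavedData(function () { // assumed app helper
    db.close();                 // then release the connection
  });
};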

After the connection is closed, any transaction request will throw an
InvalidStateError, just as when the closePending flag is set. The final
effect is the same, but this time the developer can figure out the problem.

Best,
Kyaw


Re: Updating Quota API: Promise, Events and some more

2013-08-14 Thread Kyaw Tun
> That still feels like an odd mix of two APIs. An approach that we (Moz +
> Google) have talked about would be to extend the IDBFactory.open() call
> with an options dictionary, e.g.
>
> request = indexedDB.open({ name: ..., version: ..., storage: "temporary" });
>
> On a tangent...
>
> An open question is if the storage type (1) can be assigned only when an
> IDB database is created, or (2) can be changed, allowing an IDB database to
> be moved while retaining data, or (3) defines a namespace between origin
> and database, i.e. "example.com" / "permanent" / "db-1" and "example.com" /
> "temporary" / "db-1" co-exist as separate databases.

Specifying the StorageType when opening the database is good enough.
Basically, we can think of a database as a transaction group.

I don't see that changing the storage type is necessary. The most likely
operation is that when storage gets low, we delete less important data;
the onstoragelow event pretty much covers that. Another problem is that we
don't actually know the size of the objects we are storing.

Option 3 is very interesting.

Kyaw


Re: Updating Quota API: Promise, Events and some more

2013-08-14 Thread Kyaw Tun
How does an IndexedDB database use persistent storage?


[IndexedDB] Feature detection pattern for array key support

2013-08-06 Thread Kyaw Tun
Some browsers do not support array keys. I use indexedDB.cmp([1, 2], [1,
2]); it works because unsupporting browsers throw an error. Is there a
cleaner way to detect it?
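
For reference, the check I describe, wrapped as a function (browsers
without array-key support throw from cmp()):

function supportsArrayKeys() {
  try {
    indexedDB.cmp([1, 2], [1, 2]); // throws DataError where unsupported
    return true;
  } catch (e) {
    return false;
  }
}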

Thanks,
Kyaw


Re: [IndexedDB] Inform script of corruption recovery

2013-06-11 Thread Kyaw Tun
Yes, for v2.
Kyaw


On Wed, Jun 12, 2013 at 2:25 AM, Arthur Barstow wrote:

> Hi - your comment is considered a "Last Call comment" and it was included
> in the LC's comment tracking document [1].
>
> In [2], Joshua proposed this comment be addressed/resolved as a feature
> request and as such, it was added to the IDB feature request list [3].
>
> For the purposes of tracking your comment, please indicate if this
> resolution is acceptable or not.
>
> -Thanks, ArtB
>
> [1] <https://dvcs.w3.org/hg/IndexedDB/raw-file/default/Comments-16-May-2013-LCWD.html>
> [2] <http://lists.w3.org/Archives/Public/public-webapps/2013AprJun/0817.html>
> [3] <http://www.w3.org/2008/webapps/wiki/IndexedDatabaseFeatures>


Re: [IndexedDB] IDBRequest.onerror for DataCloneError and DataError

2013-06-11 Thread Kyaw Tun
Yes, thanks.
Kyaw


On Wed, Jun 12, 2013 at 2:25 AM, Arthur Barstow wrote:

> Hi - your comment is considered a "Last Call comment" and it was included
> in the LC's comment tracking document [1].
>
> In [2], Joshua proposed this comment be addressed/resolved as a feature
> request and as such, it was added to the IDB feature request list [3].
>
> For the purposes of tracking your comment, please indicate if this
> resolution is acceptable or not.
>
> -Thanks, ArtB
>
> [1] <https://dvcs.w3.org/hg/IndexedDB/raw-file/default/Comments-16-May-2013-LCWD.html>
> [2] <http://lists.w3.org/Archives/Public/public-webapps/2013AprJun/0817.html>
> [3] <http://www.w3.org/2008/webapps/wiki/IndexedDatabaseFeatures>


Re: [IndexedDB] request feedback on IDBKeyRange.inList([]) enhancement

2013-05-21 Thread Kyaw Tun
Thank you.

  - 1000 get() calls in single txn: ~1600ms
  - getAll for all 1000 keys: ~1200ms

I would expect getAll to do better than that.

It seems context switching between JS and the database is cheap. In that
case, a cursor walk could perform even better.

nsIDOMContact objects are fine; they are typical of the objects found in a
web app.


On Tue, May 21, 2013 at 10:05 PM, Ben Kelly  wrote:

> On May 20, 2013, at 3:18 PM, Joshua Bell  wrote:
> > Cool. Knowing what performance difference you see between multi-get and
> just a bunch of gets in parallel (for time to delivery of the last value)
> will be interesting. A multi-get of any sort should avoid a bunch of
> messaging overhead and excursions into script to deliver individual values,
> so it will almost certainly be faster, but I wonder how significantly the
> duration from first-get to last-success will differ.
>
> Here are some rough wall-clock numbers for previous testing I've done.  In
> all cases we are loading 1000 nsIDOMContact objects in batches.  Time is
> essentially wall-clock to load the entire 1000 values in the data set.
>
>   - 20 get() calls per txn:               ~2000ms
>   - 1000 get() calls in single txn:       ~1600ms
>   - getAll + inList for 20 keys per txn:  ~1500ms
>   - getAll + inList for 64 keys per txn:  ~1050ms
>   - getAll + inList for 256 keys per txn: ~950ms
>   - getAll for all 1000 keys:             ~1200ms
>
> I've hesitated to post these because they are somewhat specific to my
> workload, test case, and device.  I'll try to pull out a more isolated test
> case while I work on the optimization for parallel get() calls.


Re: [IndexedDB] request feedback on IDBKeyRange.inList([]) enhancement

2013-05-20 Thread Kyaw Tun
Sorry, I always have trouble expressing myself clearly.

Your experiments on this kind of problem are very interesting. I would
like to see what the optimal number of requests in a given transaction is.
These two concepts (cursor walk and multi-request) can be employed together.

In my library implementation (ydn-db), multi-request is so commonly useful
that I have a pattern for it:

var b = db.branch('multi', false); // multiple-request parallel transaction thread
b.get('store1', key1);
b.get('store1', key2); // parallel request
b.get('store2', key1); // parallel transaction
b.values('store2', keyrange1); // parallel request on second tx

On the other hand, we do not want overly large batch requests, because
they consume a lot of memory. Conserving memory is more important than CPU
on mobile.

I am excited that batch queries are planned for v2:

index.getAll(key_range, limit)
objectStore.getAll(key_range, limit)

A lot of optimization could be done with that API.

Regarding a query by an array of keys or key ranges, I think it is NOT
very useful:

index.getAll(array_of_key_or_key_range, limit)

because we cannot match the results to their respective query keys.

Regarding a prefix query, it is not very useful either. It is trivial to
create a key range for the desired prefix, as shown below. Additionally,
we would also need a start marker key (as in the AWS S3 API), reverse
order, and a limit. This method would clutter the currently graceful API.
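
For reference, a prefix key range is easily built today (a common trick;
'\uffff' acts as a high sentinel for string keys):

// All string keys starting with 'foo': the range ['foo', 'foo\uffff')
var prefixRange = IDBKeyRange.bound('foo', 'foo' + '\uffff', false, true);
store.openCursor(prefixRange); // iterate matching records as usual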





On Mon, May 20, 2013 at 9:37 PM, Ben Kelly  wrote:

> Thanks for the feedback!
>
> On May 19, 2013, at 9:25 PM, Kyaw Tun  wrote:
> > IDBKeyRange.inList looks practically useful, but it can be achieved with
> > continue (continuePrimaryKey) cursor iteration. Performance would be
> > comparable, apart from the multiple round trips between JS and the database.
>
> I'm sorry, but I don't understand this bit.  How do you envision getting
> the cursor in the first place here without a way to form a query based on
> an arbitrary key list?  I'm sure I'm just missing an obvious part of the
> API here.
>
> > Querying with multiple parallel gets in a single transaction should be
> > fast as well.
>
> Yes, Jonas Sicking did recommend a possible optimization for the multiple
> get() within a transaction.  It would seem to me, however, that this will
> likely impose some cost on the general single get() case.  It would be nice
> if the client had the ability to be explicit about their use case vs using
> a heuristic to infer it.
>
> In any case, I plan to prototype this in the next week or two.
>
> > Additionally, IDBKeyRange.inList violates the contiguous nature of an
> > IDBKeyRange, which some use cases assume, such as checking whether a key
> > is in the range or not. If this feature is to be implemented, it should
> > not change IDBKeyRange, but be handled directly by an index batch request.
>
> Good point.  I suppose an IDBKeySet or IDBKeyList type could be added.
>
> > Ignoring duplicate keys is not a useful feature in a query. In fact, I
> > would like the results in the respective order of the given key list.
>
> Well, I would prefer to respect ordering as well.  I just assumed that
> people would prefer not to impose that on all calls.  Perhaps the cases
> could be separated:
>
>   IDBKeyList.inList([]) // unordered
>   IDBKeyList.inOrderedList([])  // ordered
>
> I would be happy to include duplicate keys as well.
>
> Thanks again.
>
> Ben
>
>


Re: [IndexedDB] Inform script of corruption recovery

2013-05-19 Thread Kyaw Tun
It would be good if we could specify data priority per database and/or per
object store.

Web apps already assume IndexedDB data is temporary; recovery processes
run at the beginning, right after the database is successfully opened. So
silently dropping all data and setting the version to 0 is a good way to
go. I think a detailed reason is not necessary.

After opening, the database should not become corrupt. But quota-exceeded
errors do happen, and they are very difficult and messy to handle.

If corruption does happen, losing data according to its priority would be
good enough for most situations. It is easy for both sides (developer and
browser implementation).

Kyaw


[IndexedDB] IDBRequest.onerror for DataCloneError and DataError

2013-05-19 Thread Kyaw Tun
Sorry for reposting
http://lists.w3.org/Archives/Public/public-webapps/2013AprJun/0422.html
again. Perhaps I did not explain it well enough.

In the put and add methods of object store and index, DataCloneError and
DataError are thrown immediately, before an IDBRequest is executed. It
seems good that the exceptions are thrown immediately, but in practical
use these exceptions arise inside an async workflow (inside a transaction
callback). An exception breaks the async workflow (depending, of course,
on the usage design pattern).

DataCloneError and DataError are preventable in most situations, but
sometimes that is tricky. We may even want the database to handle these
errors like a database constraint. The logic would be much simpler if
DataCloneError and DataError invoked IDBRequest.onerror rather than
throwing, as sketched below.
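
A sketch of the asymmetry (assumed object store with keyPath 'id'; the
invalid key throws synchronously and never reaches onerror):

var tx = db.transaction('store', 'readwrite');
var store = tx.objectStore('store');
try {
  var req = store.put({id: {}}); // {} is not a valid key: DataError thrown here
  req.onerror = function (e) {   // never assigned for DataError
    console.log('handled async:', e.target.error.name);
  };
} catch (e) {
  console.log('async flow broken:', e.name); // 'DataError'
}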


Re: [IndexedDB] request feedback on IDBKeyRange.inList([]) enhancement

2013-05-19 Thread Kyaw Tun
IDBKeyRange.inList looks practically useful, but it can be achieved with
continue (continuePrimaryKey) cursor iteration, as sketched below.
Performance would be comparable, apart from the multiple round trips
between JS and the database.
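
A sketch of that emulation (assumes a non-empty key list, sorted ascending
with no duplicates; the cursor jumps between requested keys instead of
walking every record):

function getInList(store, keys, callback) {
  var results = [];
  var i = 0;
  store.openCursor(IDBKeyRange.lowerBound(keys[0])).onsuccess = function (e) {
    var cursor = e.target.result;
    if (!cursor) { callback(results); return; }
    // Skip requested keys that are absent from the store.
    while (i < keys.length && indexedDB.cmp(keys[i], cursor.key) < 0) i++;
    if (i < keys.length && indexedDB.cmp(keys[i], cursor.key) === 0) {
      results.push(cursor.value);
      i++;
    }
    if (i < keys.length) {
      cursor.continue(keys[i]); // jump directly to the next wanted key
    } else {
      callback(results);
    }
  };
}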

Querying with multiple parallel gets in a single transaction should be
fast as well.

Additionally, IDBKeyRange.inList violates the contiguous nature of an
IDBKeyRange, which some use cases assume, such as checking whether a key
is in the range or not. If this feature is to be implemented, it should
not change IDBKeyRange, but be handled directly by an index batch request.

Ignoring duplicate keys is not a useful feature in a query. In fact, I
would like the results in the respective order of the given key list.

Kyaw Tun


Re: [IndexedDB] Does "Abort this algorithm" mean "Abort transaction"?

2013-05-15 Thread Kyaw Tun
Oh yes, it works great. What a nice twist! I have been learning IndexedDB
for a year (literally) and still don't get everything.


On Thu, May 16, 2013 at 12:03 PM, Jonas Sicking  wrote:

> On Wed, May 15, 2013 at 7:45 PM, Kyaw Tun wrote:
> > When a ConstraintError occurs in the 'steps for storing a record into
> > an object store', the spec says 'Abort this algorithm without taking
> > any further steps.' I assume the transaction can still be used, except
> > that this request fails. However, both the Chrome and Firefox
> > implementations abort the transaction. Is that the correct behavior?
>
> The steps run by, for example, objectStore.add is actually "steps for
> asynchronously executing a request".
>
> This is the algorithm that eventually ends up running the "steps for
> storing a record into an object store".
>
> Then in step 7 of the "steps for asynchronously executing a request",
> if the database operation failed we run the "fire an error event"
> algorithm.
>
> When this algorithm runs, unless the error Event is cancelled through
> Event.preventDefault(), the transaction is aborted.
>
> So the transaction isn't always aborted. But it is aborted if the
> error goes unhandled which sounds like is the case for you.
>
> / Jonas
>
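
A sketch of the behaviour Jonas describes (assumed object store with
out-of-line keys and an already-used key; cancelling the error event keeps
the transaction usable):

var req = store.add(value, existingKey); // fails with ConstraintError
req.onerror = function (e) {
  e.preventDefault(); // cancel the default action: no transaction abort
  // Only this request failed; other requests on the transaction proceed.
};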


[IndexedDB] Does "Abort this algorithm" mean "Abort transaction"?

2013-05-15 Thread Kyaw Tun
When a ConstraintError occurs in the 'steps for storing a record into an
object store', the spec says 'Abort this algorithm without taking any
further steps.' I assume the transaction can still be used, except that
this request fails. However, both the Chrome and Firefox implementations
abort the transaction. Is that the correct behavior?

In WebSQL, if a request fails, we have the option to abort or to continue
using the transaction.

Kyaw


Prefer error event over exception for DataCloneError and DataError

2013-04-29 Thread Kyaw Tun
In the put and add methods of object store and index, DataCloneError and
DataError are thrown immediately, before an IDBRequest is executed. It
seems good that the exceptions are thrown immediately, but in practical
use these exceptions arise inside an async workflow (inside a transaction
callback). An exception breaks the async workflow, depending on the usage
design pattern.

Alternatively, these exceptions could be transformed into
IDBRequest.onerror events. In this way, we could gracefully handle such
unexpected errors.


Why not allow multiEntry and array keyPath together?

2013-04-25 Thread Kyaw Tun
 
The createIndex API specification states that "If keyPath is an Array and
the multiEntry property in the optionalParameters is true, then a
DOMException of type NotSupportedError must be thrown".

I believe the NotSupportedError is unnecessary. A multiEntry index value
is no different from a non-multiEntry index value, except that the
referenced value is repeated. This restriction limits the generalized use
of composite indexes in key-joining algorithms.

Google App Engine datastore also has multiEntry (ListProperty). It is not
treated specially in indexing, other than limiting the number of entries
and warning about the possibility of exploding indexes.

A composite index with multiEntry is very useful, e.g. for modelling graph
data and many-to-many relationships. Currently, queries on such models are
limited to a single index.
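
For example, the following index, forbidden by the current spec, would be
a natural way to model tagged articles (illustrative schema; hypothetical
once the restriction is lifted):

// Throws NotSupportedError under the current spec:
var store = db.createObjectStore('article', {keyPath: 'id'});
store.createIndex('tag-date', ['tags', 'published'], {multiEntry: true});
// Intent: one (tag, published) index entry per tag of each article.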

It is also very unlikely that web developers will use excessive indexing.
I propose that the NotSupportedError be left out of the specification.

Best regards,
Kyaw


Re: [IndexedDB] How to recover data from IndexedDB if the origin domain don't exist anymore?

2013-01-15 Thread Kyaw Tun
It would need to address quota limits.

What about an ad firm collecting data from multiple origins?

Since the user agent can delete IndexedDB data at any time, duplicating
IndexedDB data to make it more persistent is not very meaningful.

I think it is too complex in terms of security and privacy issues, with
very limited use cases.


[IndexedDB] better way of deleting records

2013-01-15 Thread Kyaw Tun
From a developer's point of view, the IDBObjectStore.delete method cannot
be used directly in most use cases, since IDBObjectStore.delete returns
undefined. IDBObjectStore.delete(random_key) always receives an onsuccess
event, even when nothing happened. Currently I use a cursor or the count
method before deleting to make sure that something will actually be
deleted. My suggestion is that IDBObjectStore.delete return the number of
records deleted and IDBObjectStore.clear return undefined; accordingly,
IDBObjectStore.clear would take an optional key or key range.

There is no efficient way to delete records by a secondary (index) key:
IDBIndex has no delete method. Currently we have to use openCursor and
delete records one by one, as sketched below. Interestingly, we cannot
delete via the more efficient openKeyCursor; deleting from an
openKeyCursor should be allowed.
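
The workaround today, for reference (assumed 'status' index;
cursor.delete() requires a value cursor, which is exactly the inefficiency
described above):

var index = store.index('status');
index.openCursor(IDBKeyRange.only('archived')).onsuccess = function (e) {
  var cursor = e.target.result;
  if (cursor) {
    cursor.delete();   // delete the record at the cursor's position
    cursor.continue(); // one round trip per record
  }
};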


Re: [IndexedDB] How to recover data from IndexedDB if the origin domain don't exist anymore?

2013-01-09 Thread Kyaw Tun
In this situation, I would go with CouchDB.

If using IndexedDB, I would send the JSON data back to a blob storage
server like Google Cloud Storage or S3.


Re: IDBCursor should walk on secondary ordering of index value

2012-12-12 Thread Kyaw Tun
> If you have that information from your other filtering, then why not just
> fetch them directly? Like IDBObjectStore.get(primary_key)?

The use case here is key joining without serialization, so that it is very
fast. We also want a single scan. The get method involves serialization
and rescanning.

The use case I talked about can be found at
http://dev.yathit.com/ydn-db/nosql-query.html in the sorted-merge join
section.


Re: IDBCursor should walk on secondary ordering of index value

2012-12-05 Thread Kyaw Tun
On Thu, Dec 6, 2012 at 6:47 AM, Joshua Bell  wrote:

>
>
> On Wed, Dec 5, 2012 at 7:50 AM, Kyaw Tun  wrote:
>
>> Index records are stored in ascending order of key (index key) followed
>> by ascending order of value (primary key).
>>
>>
>> However, the current IndexedDB API exposes retrieval only by index key.
>>
>>
>> For example, the following operation on the 'tag' index of the 'article'
>> object store will retrieve the first occurrence of the index key
>> 'javascript'.
>>
>>
>> IDBCursor_article_tag.continue('javascript')
>>
>>
>> Suppose we have thousands of articles tagged 'javascript'; to find the
>> match we have to walk step by step.
>>
>>
>> IDBCursor_article_tag.continue()
>>
>>
>> This takes linear time, whereas log time is possible in the database
>> engine. Additionally, we make unnecessary callbacks back and forth
>> between JS and the database engine.
>>
>>
>> Such use cases are common in filtered queries and join operations. In
>> these cases, the index key is held constant while walking the primary keys.
>>
>>
>> It would be good if the IndexedDB API provided something like:
>>
>>
>> IDBCursor_article_tag.continue(index_key, primary_key)
>>
>>
> Agreed. We've also had several requests for this sort of
> 'continuePrimaryKey' method.
>

While continuing on the primary key, these use cases require that the
index key not change.


>
> You've done a great job at explaining the use case. Can you file a bug at
> https://www.w3.org/Bugs under Product: WebAppsWG and component: Indexed
> Database API?
>
>> It is a bit strange, since the result is again the primary_key; we
>> already know the primary_key from the other filter condition.
>
>>
>> This method is also useful for the cursor resume process.
>>
>>
>> Probably IDBCursor.advance(count) should take a negative integer value
>> as well.
>>
> Do you have a scenario in mind?
>

No, I have not needed it so far. I wanted to peek before continuing, but
it is not an important use case.


>
> The request makes sense, I just haven't heard this one before. It would be
> the first time we have a cursor "change direction", and while I don't think
> it's difficult in our implementation it could use some additional
> justification. How this plays with "prevunique" / "nextunique" also needs
> defining.
>
>
I think the cursor direction should not change; the cursor position just
steps backward. If a direction change is required, a new cursor should be
created instead.


IDBCursor should walk on secondary ordering of index value

2012-12-05 Thread Kyaw Tun
Index records are stored in ascending order of key (index key) followed by
ascending order of value (primary key).


However, the current IndexedDB API exposes retrieval only by index key.


For example, the following operation on the 'tag' index of the 'article'
object store will retrieve the first occurrence of the index key
'javascript'.


IDBCursor_article_tag.continue('javascript')


Suppose we have thousands of articles tagged 'javascript'; to find the
match we have to walk step by step.


IDBCursor_article_tag.continue()


This takes linear time, whereas log time is possible in the database
engine. Additionally, we make unnecessary callbacks back and forth between
JS and the database engine.


Such use cases are common in filtered queries and join operations. In
these cases, the index key is held constant while walking the primary keys.


It would be good if the IndexedDB API provided something like:


IDBCursor_article_tag.continue(index_key, primary_key)


It is a bit strange, since the result is again the primary_key; we already
know the primary_key from the other filter condition.


This method is also useful for the cursor resume process, as sketched below.
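
A sketch of the resume use case (hypothetical two-argument continue;
savedIndexKey and savedPrimaryKey are the position recorded from a
previous transaction, and process() is an assumed app callback):

var resumed = false;
var req = article_store.index('tag')
    .openCursor(IDBKeyRange.lowerBound(savedIndexKey));
req.onsuccess = function (e) {
  var cursor = e.target.result;
  if (!cursor) return;
  if (!resumed && indexedDB.cmp(cursor.key, savedIndexKey) === 0 &&
      indexedDB.cmp(cursor.primaryKey, savedPrimaryKey) < 0) {
    resumed = true;
    cursor.continue(savedIndexKey, savedPrimaryKey); // proposed form
    return;
  }
  resumed = true;
  process(cursor.value);
  cursor.continue();
};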


Probably IDBCursor.advance(count) should take a negative integer value as
well.


Best regards,

Kyaw


RE: [IndexedDB] coupled transactions

2012-11-20 Thread Kyaw Tun
>
> I don't understand why the two transactions need to be coupled.


I want to use a more efficient cursor walk in a single transaction, or to
reuse the transaction, since I know the keys will be coming in order.

>
> The producer object has a read only transaction, so it won't commit any 
> changes.  The consumer object has a read write transaction.  If it's 
> modifying the same object stores the producer object is reading from, that 
> transaction will block until the producer's transaction is inactive, so the 
> producer will receive a coherent snapshot of data.


I got it, thanks. I must decouple them from the write transaction.

>
>
>
> - Kyle


Buffering keys in the consumer object may give efficient retrieval. Let me
know if there is any more efficient way of retrieving them.



[IndexedDB] coupled transactions

2012-11-18 Thread Kyaw Tun
Coupled transactions exist when two or more transactions should be
committed together but the transactions have different scopes or modes.
Currently I find this problem challenging to solve with the IndexedDB API.

This can be solved by merging the transactions into a single transaction,
but it would be sub-optimal and would require sharing transaction objects.

The use case appears when we want to use a producer-consumer pattern, as
follows:

In the producer object, a read transaction is created and index cursors
scan for matching keys. Whenever it finds a matching key, it sends it to
the consumer object.

In the consumer object, a read or write transaction is created when it
first receives a key. The cursor value is used to render UI or to update
the database. In general, we expect to receive an ordered sequence of
keys. For optimal performance, the transaction should be kept active.

Concrete example:

Consumer side
--

var out = new ydn.db.Streamer(db, 'animals', 'id');
out.setSink(function(key, value) {
  console.log(['receiving', key, value]); // should be ['cow', 'cow']
});


Producer side

var q1 = ydn.db.Iterator.where('animals', 'color', '=', 'spots');
var q2 = ydn.db.Iterator.where('animals', 'horn', '=', 1);
var q3 = ydn.db.Iterator.where('animals', 'legs', '=', 4);
var solver = new ydn.db.algo.NestedLoop(out);
var req = db.scan([q1, q2, q3], solver);


data
--
var animals = [
  {id: 'rat', color: 'brown', horn: 0, legs: 4},
  {id: 'cow', color: 'spots', horn: 1, legs: 4},
  {id: 'galon', color: 'gold', horn: 1, legs: 2},
  {id: 'snake', color: 'spots', horn: 0, legs: 0},
  {id: 'chicken', color: 'red', horn: 0, legs: 2}
];






Ref: test_31_scan_mutli_query_match in
https://bitbucket.org/ytkyaw/ydn-db/raw/0e1e33582cfed54c9baf1d5bb134cae58bac45c8/test/iteration_test.js



[IndexedDB] Can IDBTransaction.oncomplete callback be used as active state?

2012-11-18 Thread Kyaw Tun
A transaction stays active as long as I keep sending requests from an
IDBRequest callback. Is there any other way to prevent it from committing?

Is there any way to detect the transaction's active flag?

I expected the IDBTransaction.oncomplete callback could be used to flag
the inactive state, but according to my tests it cannot: the transaction
is already inactive and cannot be used even before the oncomplete callback
is received.
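
For reference, the only pattern I know that keeps a transaction active
(issue the next request from within a request callback; db, store name,
and keys are assumed):

var tx = db.transaction('store', 'readonly');
var store = tx.objectStore('store');
store.get(key1).onsuccess = function () {
  // Still inside a request callback, so the transaction is active here.
  store.get(key2).onsuccess = function () {
    // Once control returns to the event loop with no pending requests,
    // the transaction commits; oncomplete fires only after that.
  };
};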


Put request need created flag

2012-11-14 Thread Kyaw Tun
I find it hard to understand how to use the add method effectively.

In my IndexedDB wrapper library, the wrapper database instance dispatches
events for creating, deleting, and updating a record. Interested
components register and listen in order to update the UI or sync to a
server. That requires differentiating created from updated on a put call.
On the other hand, the add method throws an Error rather than firing an
onerror event on conflict, so its usage will be very rare.

I wish the put method's request indicated some flag to differentiate
between created and updated.

I could forget about put and use a cursor directly, but that still
requires an extra existence-test request, as sketched below.
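
The extra existence test I mean, for reference (assumed store with an
in-line key and an assumed wrapper dispatch helper; two requests where one
put should do):

var key = record.id; // assumed in-line key
var countReq = store.count(key);
countReq.onsuccess = function () {
  var existed = countReq.result > 0;
  store.put(record).onsuccess = function () {
    notify(existed ? 'updated' : 'created', key); // assumed wrapper dispatch
  };
};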

Best 
Kyaw



IDBObjectStore requires an openKeyCursor method

2012-11-13 Thread Kyaw Tun
In contrast to IDBIndex, IDBObjectStore does not have an openKeyCursor
method. This method fetches the list of keys in a given range without the
cost of serialization. There is no other way to iterate over keys only
from an IDBObjectStore, yet efficient fetching of keys in a specific range
is required in high-performance web apps.

Use case 1: Suppose a 'note' object store uses the in-line key 'title'
(possibly adding a nonce). The developer wishes to list titles without
retrieving full records.

Use case 2: Suppose an 'article' object store uses array keys whose first
element is an author id (a key into the 'author' object store) and whose
second element is a timestamp. Efficient retrieval of the list of articles
by an author is possible by fetching IDBKeyRange.bound([author_id],
[author_id, []]) (an empty array sorts after any timestamp) over an index
on the primary key.

Use case 3: I am developing an IndexedDB database wrapper, which uses key
scanning to run join algorithms with sorting and constraints. Sometimes
the use case requires iterating over primary keys only.

The workaround suggested in use case 2 is not applicable for out-of-line
keys. Also, indexing the primary key is counter-intuitive.

Since the primary key is already indexed in the database engine, an
openKeyCursor method should be provided on the IDBObjectStore object, as
sketched below. At the same time, we get a consistent and symmetric API.
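
A sketch of the proposed symmetry (hypothetical method on an object store,
mirroring IDBIndex.openKeyCursor; use case 1 as the example):

// List note titles (primary keys) without deserializing the records.
var titles = [];
note_store.openKeyCursor().onsuccess = function (e) { // proposed method
  var cursor = e.target.result;
  if (cursor) {
    titles.push(cursor.key); // no cursor.value, so no serialization cost
    cursor.continue();
  }
};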

Best regards,
Kyaw