Your method is right, but the limit is there whatever method we use.
There's always going to be a limit for scalable applications -
appengine just exposes it.
Because the offset is limited to 1000, I cannot sort data by fields in
results beyond some limited number of items.
Don't sort in your code. Use an ordering on an indexed property instead.
Your method is right, but the limit is there whatever method we use.
And you didn't get what I mean.
Because the offset is limited to 1000, I cannot sort data by fields in
results beyond some limited number of items.
Without the offset limit, we could do it easily.
On Dec 24, 2:10 am, Andy Freeman
Any application that requires fetching an unbounded amount of data for a
single page view is not scalable, no matter what technology you use to
build it, so this problem is not appengine-specific.
If you need aggregations (average, median, total, etc.), you have to
compute them incrementally or with a background process.
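For a concrete, purely hypothetical illustration of the incremental
approach in the Java SDK: the sketch below keeps one running "Stats"
entity and updates it in a transaction whenever an item is written, so
no query ever has to count everything. The kind name, property names,
and recordItem() helper are all made up for the example.

import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.EntityNotFoundException;
import com.google.appengine.api.datastore.Key;
import com.google.appengine.api.datastore.KeyFactory;
import com.google.appengine.api.datastore.Transaction;

public class StatsUpdater {
    // Call this in the same request that writes an item, so the
    // running totals stay in step with the data.
    public static void recordItem(long itemValue) {
        DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
        Transaction txn = ds.beginTransaction();
        try {
            Key statsKey = KeyFactory.createKey("Stats", "global");
            Entity stats;
            try {
                stats = ds.get(txn, statsKey);
            } catch (EntityNotFoundException e) {
                // First write: start the totals at zero.
                stats = new Entity(statsKey);
                stats.setProperty("count", 0L);
                stats.setProperty("total", 0L);
            }
            stats.setProperty("count", (Long) stats.getProperty("count") + 1);
            stats.setProperty("total", (Long) stats.getProperty("total") + itemValue);
            ds.put(txn, stats);
            txn.commit();
        } finally {
            if (txn.isActive()) {
                txn.rollback();
            }
        }
    }
}

A single counter entity serializes its writers, so if the write rate is
high the usual refinement is to shard it into N counter entities and sum
the shards when reading.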
When even a datetime equality filter still gives you a big set, how can
you handle it?
For example, you get one item with the most specific filter query,
and for this filter query you should still have statistics info, like
how many items it matched.
How do you expect appengine to handle this problem?
What statistics are you talking about?
You're claiming that one can't page through an entity type without
fetching all instances and sorting them. That claim is wrong, because
an order by constraint gives you the ordering without fetching everything.
For example, suppose that you want to page through by a date/time
field named
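(The excerpt cuts off before the field name, so the hypothetical sketch
below just calls it "created", on a kind called "Item"; neither name
comes from the thread. The technique is to remember the last value shown
and filter past it, instead of paging with an offset.)

import java.util.Date;
import java.util.List;
import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.FetchOptions;
import com.google.appengine.api.datastore.Query;

public class DatePager {
    private static final int PAGE_SIZE = 20;

    // Pass null for the first page; for each following page, pass the
    // "created" value of the last entity on the previous page.
    public static List<Entity> nextPage(Date lastSeen) {
        DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
        Query q = new Query("Item").addSort("created");
        if (lastSeen != null) {
            q.addFilter("created", Query.FilterOperator.GREATER_THAN, lastSeen);
        }
        // No offset anywhere, so the 1000-offset cap never comes into play.
        return ds.prepare(q).asList(FetchOptions.Builder.withLimit(PAGE_SIZE));
    }
}

Because each page starts from a filter rather than an offset, page 500
costs the same as page 1. If "created" values can repeat, you also need
a tie-breaker on __key__; see the sketch a few messages down.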
You misunderstand.
If you have an ordering based on one or more indexed properties, you
can page efficiently wrt that ordering, regardless of the number of
data items. (For the purposes of this discussion, __key__ is an
indexed property, but you don't have to use it, or can use it just to
break ties.)
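(As a hedged sketch of what breaking ties with __key__ can look like,
reusing the made-up "Item"/"created" names from the paging sketch above:
the datastore has no OR, so resuming a page after a (lastSeenDate,
lastSeenKey) pair takes two queries, assumed to be names you carry over
from the previous page.)

// Query 1: remaining items that share the last date, with a later key.
Query sameDate = new Query("Item")
    .addFilter("created", Query.FilterOperator.EQUAL, lastSeenDate)
    .addFilter(Entity.KEY_RESERVED_PROPERTY,
               Query.FilterOperator.GREATER_THAN, lastSeenKey)
    .addSort(Entity.KEY_RESERVED_PROPERTY);
// Query 2: everything with a strictly later date.
Query laterDates = new Query("Item")
    .addFilter("created", Query.FilterOperator.GREATER_THAN, lastSeenDate)
    .addSort("created")
    .addSort(Entity.KEY_RESERVED_PROPERTY);
// Fill the page from sameDate first, then top it up from laterDates.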
You misunderstand.
If not, show me a site with statistics on many fields,
with more than 1000 pages, please.
Thanks.
On Dec 21, 9:06 am, Andy Freeman ana...@earthlink.net wrote:
You misunderstand.
If you have an ordering based on one or more indexed properties, you
can page efficiently wrt that ordering, regardless of the number of data items.
It is simple, I know.
But what I am concerned about is statistics:
counting the different fields for different usages
means we must count all the data and get the statistics info in a single
query.
Moreover, this statistics info may cover more than one field, and
have different orderings between them.
Obviously, if you have to page through a data set of more than 5,000
items which is not ordered by __key__,
you may find that __key__ is of no use, because the filtered data is
ordered not by key but by the field values, and for that reason you need
to loop queries as you may like to do.
And if the kind holds more than 1,000 items, you need to re-index
this result,
and re-count each time to get the proper item.
What kind of reindexing are you talking about?
Global reindexing is only required when you change the index definitions
in index.yaml. It doesn't occur when you add data.
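(For reference: in the Python SDK the composite index definitions live
in index.yaml, and it is edits to entries like the following made-up one
that trigger an index rebuild, not adding rows; the Java SDK equivalent
is datastore-indexes.xml.)

indexes:
- kind: Item
  properties:
  - name: category
  - name: created
    direction: desc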
I do bulk operations (including delete) using the queue.
Here is some pseudocode:

List<Item> itemsToDelete = loadOneHundredItems();
for (Item i : itemsToDelete) {
    delete(i);
}
// Re-enqueue only while there is work left, so the chain stops
// when it gets to the end of the data:
if (itemsToDelete.size() > 0) {
    enqueueThisTaskAgain();
}
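(To make the pseudocode concrete, here is a rough sketch against the
low-level Java datastore and task queue APIs; the kind name "Item", the
/tasks/purge URL, and the batch size of 100 are all assumptions, and
when this thread was written the task queue package still carried a
"labs" prefix.)

import java.util.ArrayList;
import java.util.List;
import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.FetchOptions;
import com.google.appengine.api.datastore.Key;
import com.google.appengine.api.datastore.Query;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;

public class PurgeWorker {
    // Delete one batch of "Item" entities; re-enqueue while any remain.
    public static void purgeBatch() {
        DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
        // Keys-only keeps the fetch cheap: deleting needs only keys.
        Query q = new Query("Item").setKeysOnly();
        List<Entity> batch =
            ds.prepare(q).asList(FetchOptions.Builder.withLimit(100));
        List<Key> keys = new ArrayList<Key>();
        for (Entity e : batch) {
            keys.add(e.getKey());
        }
        ds.delete(keys);
        if (!batch.isEmpty()) {
            // More work left: chain another task instead of looping here,
            // so each request stays well under the deadline.
            QueueFactory.getDefaultQueue().add(
                TaskOptions.Builder.withUrl("/tasks/purge"));
        }
    }
}

Each task deletes at most one small batch and then chains the next task,
so no single request runs long enough to hit the deadline.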
It is too complicated for most of us,
and it can still result in a timeout if the data is really big.
Not of much use to most of us if we really have big data to sort and
page.
On Dec 15, 11:35 pm, Stephen sdea...@gmail.com wrote:
On Dec 15, 8:04 am, ajaxer calid...@gmail.com wrote:
also 1000
You can check out this product we have developed. It takes care of most
of the intricacies you have mentioned transparently.
http://www.cloud2db.com
On Wed, Dec 16, 2009 at 11:20 AM, Andy Freeman ana...@earthlink.net wrote:
it still can result in a timeout if the data is really big
How so? If you don't request too many items with a page query, it won't
time out.
Of course the time is related to the kind of data you are fetching in
one query.
If the kind holds more than 1,000 items, you need to re-index
this result,
and re-count each time to get the proper item.
It seems you have not encountered such a problem.
And do you know that the offset is limited to 1000?
On Dec 17, 12:20 am, Andy Freeman ana...@earthlink.net wrote:
it still can result in a timeout if the data is really big
How so? If you don't request too many items with a page query, it
won't time out. You will run into the offset limit only if you page with
offsets instead of filters.
Thanks for the explanation,
but I have no interest in learning things like Bigtable.
The only reason I keep an eye on this project is that it may bring me
convenience in my web development,
not that it will bring me some knowledge of science or technology.
IMHO you do need to understand any new platform to a certain
degree if you really want to take advantage of it.
It is completely different from SQL/RDBMS, which means that if you don't
change your thinking and adapt to the platform,
it can only be a toy for you.
Yes, I have tried,
but I always get timeouts,
and the amount is more than a million, maybe.
I uploaded this data by bulk upload over more than a day.
I think it is very important to be able to delete a table in a single
act on the panel.
On Dec 1, 3:05 pm, Tim Hoffman zutes...@gmail.com wrote:
Hi
On Sun, Dec 6, 2009 at 7:36 PM, ajaxer calid...@gmail.com wrote:
Yes, I have tried,
but I always get timeouts,
and the amount is more than a million, maybe.
I uploaded this data by bulk upload over more than a day.
I think it is very important to be able to delete a table in a single
act on the panel.
But there are thousands of instances in the table.
It is impossible for me to delete all of them manually in a short time,
and it is not very reliable to delete them through a program.
And the site is live now;
some of the data I won't delete.
On Nov 28, 3:11 am, Jorge athenas...@gmail.com wrote:
Hi
You will need to write a process to delete them, or if you can easily
identify all the items (i.e. you know the keys, or a repeated query will
find the ones to delete),
you can just use the remote console.
For instance, in the console you could do something like this (with your
model class in place of MyModel):
keys_to_delete = db.Query(MyModel, keys_only=True).fetch(500)
db.delete(keys_to_delete)
In the production GAE, go to the Datastore/Data Viewer. Delete all the
instances. In the development GAE, as far as I know the only way is
rebuilding the application (Clean and Build in NetBeans, for
instance), which will create a new dev datastore.
Jorge Gonzalez
On Nov 27, 1:29 am, ajaxer calid...@gmail.com wrote: