On Wednesday, October 8, 2014 1:00:56 PM UTC+2, Narūnas Krasauskas wrote:
>
>
> I have never said anything like what you quoted; what I said was: "users
> who can get to the search page ideally would be able to search/see all the
> records". Meaning that users have access to the 1+m records, ho
On Wednesday, October 8, 2014 10:39:37 AM UTC+1, Niphlod wrote:
>
>
>
>> 4) If all goes well and I close my eyes to the fact that the user will
>> retrieve slightly more records than I defined in *cache_count*, there is
>> still a problem. If the user were to click the table headers to change the sort orde
given you can pass whatever you want to cache_count, I'd say you can use
whatever technique you'd like. Of course an "approximation" will never
amount to "perfection" ;-P
On Wednesday, October 8, 2014 12:45:28 PM UTC+2, Johann Spies wrote:
This thread triggered a thought regarding very large tables: Would it be
practical to use the estimates from the backend's query planner to
determine cache_count? If a whole table is selected then one could use the
stats from the backend to estimate the total number of rows, but if you
execute a q
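Johann's idea can be tried outside of web2py: Postgres's EXPLAIN prints the planner's row estimate in its plan line, and since ANALYZE keeps that figure roughly current, reading it is far cheaper than a real COUNT(*). Below is a minimal sketch; the function name and the web2py wiring shown in the comment are my assumptions, not tested grid code.

```python
import re

def planner_estimate(explain_output):
    """Pull the planner's row estimate out of Postgres EXPLAIN text.

    EXPLAIN prints a top plan line such as:
        Seq Scan on ad  (cost=0.00..458.00 rows=1000000 width=97)
    The rows= figure is only an estimate, but it is essentially free
    compared to an exact COUNT(*) over a large table.
    """
    match = re.search(r'rows=(\d+)', explain_output)
    return int(match.group(1)) if match else None

# Hypothetical web2py wiring (untested sketch):
#   plan = db.executesql("EXPLAIN SELECT * FROM ad")[0][0]
#   grid = SQLFORM.grid(db.ad,
#                       cache_count=lambda *a, **b: planner_estimate(plan))

print(planner_estimate(
    "Seq Scan on ad  (cost=0.00..458.00 rows=1000000 width=97)"))  # 1000000
```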
On Tuesday, October 7, 2014 4:00:56 PM UTC+2, Narūnas Krasauskas wrote:
Hi,
Thanks for the reply.
I just tested my notorious 38 sec query directly on the MySQL, and it also
took nearly 40 sec to complete, so after all it was not web2py's fault :-)
Further, I also tested the *cache_count* option, and indeed, like you said, it
does something very similar to what I need, t
Those are not identical queries (constraints DO change, not only
pagination). Again,
select a from table limit 10 offset 0
and
select a from table limit 10 offset 10
SHOULD DEFINITELY NOT have different execution times.
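The claim is easy to sanity-check on any backend; here is a throwaway sqlite3 sketch (table name and data are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER)")
conn.executemany("INSERT INTO t (a) VALUES (?)", [(i,) for i in range(25)])

# The two queries from the discussion: same shape, only the offset differs.
page1 = conn.execute("SELECT a FROM t LIMIT 10 OFFSET 0").fetchall()
page2 = conn.execute("SELECT a FROM t LIMIT 10 OFFSET 10").fetchall()

print(len(page1), len(page2))  # 10 10: both pages carry 10 rows
print(page1[0], page2[0])      # (0,) (10,): consecutive, non-overlapping slices
```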
And again, if you want 10 pages of 50 records, just pass paginate=50.
Also, I wonder, why would you say something like this:
On Saturday, October 4, 2014 8:08:50 PM UTC+1, Niphlod wrote:
>
> The thing you posted
> """
> Same story here. I like *grid*, I like pagination, however I'm missing
> the feature of limiting the total number of rows in the output.
> In my c
Hi, thanks for the reply.
Yes you are right, users who can get to the search page ideally would be
able to search/see all the records.
However, you are not right in saying that there is no performance difference
between fetching the first 50, the next 50, the next 1000, etc.
I did some benchmarking, so please hav
ok, let's reset the question a bit.
The "original" issue made sense because the grid makes TWO queries by
default. One retrieves the data for a particular page (and there's
NOTHING we can do about that); the other counts the total of the
records, disregarding the pagination.
Now, if
Hi,
Just to make sure we speak the same language, below is the code snippet that
causes the problems described in my previous post. Please note search_widget=*None*.
I use my custom search form instead of the grid's default, but that's
just different HTML, which should not interfere with the matter.
Als
if your backend needs tons of time to extract 50 records (no matter what
the page is) there's a problem. The grid doesn't. If you want to show your users
a subset of records, just pass the grid a query instead of the entire table.
On Friday, October 3, 2014 12:50:57 PM UTC+2, Narūnas Krasauskas wrote:
>
Hi,
Same story here. I like *grid*, I like pagination, however I miss the
feature of limiting the total number of rows in the output.
In my case *grid* can easily generate 1m+ rows in the output, which,
when paginating by 50 rows per page, would produce 20k+ pages...
Who needs that many page
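The arithmetic behind that complaint, for the record (a hypothetical helper, not grid API):

```python
import math

def page_count(total_rows, paginate):
    """Number of pager links the grid ends up rendering."""
    return math.ceil(total_rows / paginate)

print(page_count(1_000_000, 50))  # 20000 pages -- nobody browses that many
print(page_count(101, 50))        # 3: a partial last page still counts
```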
boys, I don't know if you'll use this as a feature or at least as a workaround
for your requirements, but trunk has a "cache_count" parameter for grid.
check this thread
https://groups.google.com/d/msg/web2py/yr2z1M2tzIg/iMLP0sKKbjIJ for the
docs.
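What cache_count buys can be sketched in plain Python: cache the expensive count for time_expire seconds and reuse it on repeat requests. This is only an illustration of the idea, not web2py's implementation; all names here are mine.

```python
import time

def make_cached_count(count_fn, time_expire=60):
    """Wrap an expensive count so calls within time_expire seconds reuse
    the cached figure -- a sketch of the idea behind grid's cache_count."""
    state = {"value": None, "stamp": 0.0}

    def cached_count():
        now = time.time()
        if state["value"] is None or now - state["stamp"] > time_expire:
            state["value"] = count_fn()  # the slow COUNT(*) happens only here
            state["stamp"] = now
        return state["value"]

    return cached_count

calls = []
def slow_count():
    calls.append(1)        # record each time the real count runs
    return 1_000_000

count = make_cached_count(slow_count, time_expire=60)
print(count(), count())    # 1000000 1000000
print(len(calls))          # 1: the second call hit the cache
```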
Interesting:
http://stackoverflow.com/questions/13855304/slow-count-on-postgresql-9-2
Richard
I read Niphlod's link that was broken, and it seems that the slow
counting is gone in Postgres 9.2.
Richard
http://wiki.postgresql.org/wiki/Slow_Counting
Villas,
On Mon, May 6, 2013 at 5:52 AM, villas wrote:
If you don't need the main features of the grid (e.g. pagination) then it is
better not to use it. I mean, it is a trivial task to produce a nice table
without pagination.
The other option would be to hide the pagination with jQuery, which should
also be easy to do.
On Friday, May 3, 2013 5:5
On Monday, April 23, 2012 7:57:23 PM UTC+5:30, Bruce Wade wrote:
>
> Yeah see that was my original question how to tell grid to only
> select/count the first 50,000 records instead of it just generating pages
> for all the records.
>
I understand that this is a year-old thread - but I too ha
Bruce,
can you try {{=response.toolbar()}}? You will be able to see each query
performed by the system and how long it took. The count may be the culprit if
there is no index associated with the count query.
On Sunday, 22 April 2012 13:35:28 UTC-5, Martin.Mulone wrote:
>
> Perhaps the slowdown co
Yeah, see, that was my original question: how to tell grid to only
select/count the first 50,000 records instead of it just generating pages
for all the records. A single ad on our site has had over 60,000 views.
I am planning to move the adviewer to mongodb if I can't resolve this issue.
On Mon, Apr
perhaps we can pass some lambda to do the count, but to make sure that it is
the count() that's the problem, run some benchmarks.
2012/4/23 Niphlod
> unfortunately counting on postgres is a heavy operation, and it's pretty
> "famous" for this http://wiki.postgresql.org/wiki/Slow_Counting.
> maybe you can try limit
unfortunately counting on postgres is a heavy operation, and it's pretty
"famous" for this http://wiki.postgresql.org/wiki/Slow_Counting.
maybe you can try limiting the amount to, let's say, 2 (if it's
possible as per requirements of your app) and see if there are differences.
In the
Yeah, I will add an index. I have another table with the same problem, so I
think you might be right about the counting taking the most amount of time. Is
there any way to index a count?
The other table:
db.cash_journal.distributor_id.readable =
db.cash_journal.transaction_id.readable =
db.cash_journal.
Perhaps it is the sorting that is taking the time. Might be worth adding an
index on ad_id.
On Sunday, 22 April 2012 17:37:14 UTC+1, Bruce Wade wrote:
>
> The grid is limiting the amount on the page being displayed. however it is
> still querying all the records then creating the pages so you c
Perhaps the slowdown comes from counting such an amount of records.
2012/4/22 Massimo Di Pierro
> This is the logic in sqlhtml.py SQLFORM.grid
>
> if paginate and paginate
>     limitby = (paginate*page, paginate*(page+1))
>     rows = dbset.select(left=left, orderby=orderb
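The limitby arithmetic quoted above is easy to check in isolation (the helper name is mine, not the web2py source):

```python
def limitby_for(page, paginate):
    """Rows slice the grid asks the DAL for on a given zero-based page,
    mirroring the quoted sqlhtml.py logic."""
    return (paginate * page, paginate * (page + 1))

print(limitby_for(0, 50))  # (0, 50): first page
print(limitby_for(2, 50))  # (100, 150): third page
```

So the per-page data fetch is already bounded; only the separate total-count query touches every row.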
This is the logic in sqlhtml.py SQLFORM.grid
if paginate and paginate
> The grid is limiting the amount on the page being displayed. however it is
> still querying all the records then creating the pages so you can page
> between them. The query still queries all of the data instead of using
The grid is limiting the amount shown on the page being displayed. However, it is
still querying all the records and then creating the pages so you can page
between them. The query still queries all of the data instead of using
limits and/or ajax. For 50,000 records it takes almost 3-8 seconds to load
the pag
This is what I do not understand. If the query is done by the grid, it
should limit the number of records to the number of records displayed.
Is that not happening? Are you calling the query outside of the grid?
On Sunday, 22 April 2012 11:22:32 UTC-5, Bruce Wade wrote:
>
> Yes it has built in pa
Yes, it has built-in pagination, however some of my grids will return over
500,000+ records, which causes queries to be very, very slow. I would
rather limit the returned data because the average person is not going to
go through all those records or care about them. For example, currently there
ha
grid has built-in pagination. Perhaps I do not understand the question. Can
you show us your code?
On Sunday, 22 April 2012 10:40:37 UTC-5, Bruce Wade wrote:
>
> Hi,
>
> Is there a way to limit the returned results for the grid? I am finding
> that my site has so much data already that most area