Re: Data grid: fetching/scrolling data on user demand

2017-10-17 Thread Dave Page
On Tue, Oct 17, 2017 at 1:44 PM, Tomek  wrote:

> Hi,
>
>  It is not exactly true... In v3 the query is executed, fetched and
> all rows are displayed,
> >>>
> >>> No they're not, though they are all transferred to the client which is
> why it's slower.
> >>
> >> They are not what?
> >
> > The handling of rows in pgAdmin 3 is not as you described.
>
> So please tell me how it is done.
>

You can find the code here:
https://git.postgresql.org/gitweb/?p=pgadmin3.git;a=tree;h=216f52a5135db3124268a32c553b812547d4b918;hb=216f52a5135db3124268a32c553b812547d4b918

I don't have the spare cycles to start explaining how pgAdmin 3 worked
internally.


>
> >> What is slower is the "display" part in both versions. You have the data
> >> from the server and then you push it to the display.
> >> I've done a quick test - table 65 rows / 45 columns, query SELECT *
> >> from table limit 10.
> >> With default ON_DEMAND_RECORD_COUNT around 5 seconds, with
> >> ON_DEMAND_RECORD_COUNT = 10 25 seconds...
> >> It is 20 seconds spent only on displaying...
> >
> > So? No human can read that quickly.
>
> I get the joke but it doesn't add anything to this discussion...
>

It wasn't a joke.


>
>  For me this idea of "load on demand" (which in reality is "display on
> demand") is pointless. It
> >>> is done only because the main lag of v4 comes from the interface. I don't
> see any other purpose for
> >>> it... If You know (and You do) that v4 can't handle big results add
> pagination like every other
> >>> webapp...
> >>>
> >>> We did that in the first beta, and users overwhelmingly said they
> didn't like or want pagination.
> >>>
> >>> What we have now gives users the interface they want, and presents the
> data to them quickly - far
> >>> more quickly than pgAdmin 3 ever did when working with larger
> resultsets.
> >>>
> >>> If that's pointless for you, then that's fine, but other users
> appreciate the speed and
> >>> responsiveness.
>
> Part of the data...
> Please explain to me what is the point of requesting 10 and getting
> 1000 without possibility to access the rest?
>

Of course you can get all 10, you just scroll down. Saying that you can
only get 1000 "without possibility to access the rest" is 100% factually
incorrect.


> >> I don't know of any users (we are the users) who are happy that
> selecting 1 rows requires
> >> dragging the scrollbar five times to see 5001 record...
> >
> >> Saying pointless I meant that if I want 1 rows I should get 1
> rows, if I want to limit my
> >> data I'll use LIMIT. But if the UI can't handle big results just give
> me the easiest/fastest way to get
> >> to my data.
> >
> > Then increase ON_DEMAND_RECORD_COUNT to a higher value if that suits the
> way you work. Very few
> > people scroll as you suggest - if you know you want to see the 5001
> record, it's common to use
> > limit/offset. If you don't know, then you're almost certainly going to
> scroll page by page or
> > similar, reading the results as you go - in which case, the batch
> loading will speed things up for
> > you as you'll have a much quicker "time to view first page".
>
> You can read, yes? I wrote about LIMIT earlier... I'll put what I mean in
> simpler terms, in case "you don't know".
>
> You understand that if a user selects data, the user wants to get the data? - that
> is why he is doing the select.
> You understand that to get the data is to see the data? - that is why he
> is doing the select.
> You understand that the data can be more than 1000 records?
> You understand that You hide the data without the possibility to access it (at
> least quickly)?
> You understand that in this example the user didn't get the data?
>
> I hope that the above clarifies the pointlessness of "display on demand" without
> some other option to speed up the browsing process.
>
> As for increasing ON_DEMAND_RECORD_COUNT - I would gladly do it, but then
> displaying 10 records takes more time and about 500 MB of memory (v3
> used around 80)... What is more, in v4 I can't get 30 records even with
> the default ON_DEMAND_RECORD_COUNT (canceled query after 5 minutes) while v3
> managed to do it in 80 seconds.
>
> I understand that large results are rare, but this is a management tool, not
> some office app... It should do what I want...
>
> And again You are telling me what I want and what I need, and how I should
> do it... You decided to make this tool in a way that no other db management
> tool works,


No - I'm telling you what nearly 20 years of interacting with pgAdmin users
has indicated they likely want. If it's not what you want, then you are
free to look at other tools. We do our best to meet the requirements of the
majority of our users, but it's obviously impossible to meet everyone's
requirements.


> You cut out a lot of v3 features,


Yes. Did you ever create an operator family? Or use the graphical query
designer that crashed all the time? Those and many other features were
intentionally removed. Some have been put back in 

Re: Data grid: fetching/scrolling data on user demand

2017-10-17 Thread Dave Page
On Tue, Oct 17, 2017 at 11:35 AM, Tomek  wrote:

> Hi,
>
> >> It is not exactly true... In v3 the query is executed, fetched and all
> rows are displayed,
> >
> > No they're not, though they are all transferred to the client which is
> why it's slower.
>
> They are not what?


The handling of rows in pgAdmin 3 is not as you described.


> What is slower is the "display" part in both versions. You have the data
> from the server and then you push it to the display.
> I've done a quick test - table 65 rows / 45 columns, query SELECT * from
> table limit 10.
> With default ON_DEMAND_RECORD_COUNT around 5 seconds, with
> ON_DEMAND_RECORD_COUNT = 10 25 seconds...
> It is 20 seconds spent only on displaying...
>

So? No human can read that quickly.


>
> >> For me this idea of "load on demand" (which in reality is "display on
> demand") is pointless. It
> > is done only because the main lag of v4 comes from the interface. I don't
> see any other purpose for
> > it... If You know (and You do) that v4 can't handle big results add
> pagination like every other
> > webapp...
> >
> > We did that in the first beta, and users overwhelmingly said they didn't
> like or want pagination.
> >
> > What we have now gives users the interface they want, and presents the
> data to them quickly - far
> > more quickly than pgAdmin 3 ever did when working with larger resultsets.
> >
> > If that's pointless for you, then that's fine, but other users
> appreciate the speed and
> > responsiveness.
>
> I don't know of any users (we are the users) who are happy that selecting
> 1 rows requires dragging the scrollbar five times to see 5001 record...


> Saying pointless I meant that if I want 1 rows I should get 1
> rows, if I want to limit my data I'll use LIMIT. But if the UI can't handle
> big results just give me the easiest/fastest way to get to my data.
>

Then increase ON_DEMAND_RECORD_COUNT to a higher value if that suits the
way you work. Very few people scroll as you suggest - if you know you want
to see the 5001 record, it's common to use limit/offset. If you don't know,
then you're almost certainly going to scroll page by page or similar,
reading the results as you go - in which case, the batch loading will speed
things up for you as you'll have a much quicker "time to view first page".
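For illustration, a minimal psycopg2 sketch of that limit/offset approach (the
connection string and the big_table table with an id column are hypothetical;
this is a sketch, not pgAdmin code):

    import psycopg2

    # Hypothetical connection string and table - adjust for your environment.
    conn = psycopg2.connect("dbname=test user=postgres")
    cur = conn.cursor()

    # Jump straight to rows 5001-5010 instead of scrolling batch by batch.
    cur.execute(
        "SELECT * FROM big_table ORDER BY id LIMIT %s OFFSET %s",
        (10, 5000),
    )
    print(cur.fetchall())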


-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Data grid: fetching/scrolling data on user demand

2017-10-17 Thread Tomek
Hi,

>> It is not exactly true... In v3 the query is executed, fetched and all rows 
>> are displayed,
> 
> No they're not, though they are all transferred to the client which is why 
> it's slower.

They are not what? What is slower is the "display" part in both versions. You
have the data from the server and then you push it to the display.
I've done a quick test - table 65 rows / 45 columns, query SELECT * from
table limit 10.
With default ON_DEMAND_RECORD_COUNT around 5 seconds, with 
ON_DEMAND_RECORD_COUNT = 10 25 seconds...
It is 20 seconds spent only on displaying...

>> For me this idea of "load on demand" (which in reality is "display on 
>> demand") is pointless. It
> is done only because the main lag of v4 comes from the interface. I don't see any 
> other purpose for
> it... If You know (and You do) that v4 can't handle big results add 
> pagination like every other
> webapp...
> 
> We did that in the first beta, and users overwhelmingly said they didn't like 
> or want pagination.
> 
> What we have now gives users the interface they want, and presents the data 
> to them quickly - far
> more quickly than pgAdmin 3 ever did when working with larger resultsets.
> 
> If that's pointless for you, then that's fine, but other users appreciate the 
> speed and
> responsiveness.

I don't know of any users (we are the users) who are happy that selecting 1 
rows requires dragging the scrollbar five times to see 5001 record...

Saying pointless I meant that if I want 1 rows I should get 1 rows, if 
I want to limit my data I'll use LIMIT. But if the UI can't handle big results 
just give me the easiest/fastest way to get to my data.

--
Tomek



Re: Data grid: fetching/scrolling data on user demand

2017-10-17 Thread Murtuza Zabuawala
Hi Tomek,

On Tue, Oct 17, 2017 at 3:21 PM, Tomek  wrote:

> Hi,
>
> > As I mentioned in my previous email, we do not use a server-side
> cursor, so it won't add any
> > limit on the query.
> >
> > The delay is from the database driver itself, it has nothing to do with
> pgAdmin4.
> > Try executing the same query in 'psql', 'pgAdmin3' and any third-party tool
> which uses the libpq library as a
> > backend, you will observe the same behaviour.
>
> It is not exactly true... In v3 the query is executed, fetched and all
> rows are displayed; in v4 the query is executed and fetched but only 1000 records
> are displayed.
>
My answer was not in regard to fetching & displaying the data in the UI; it
was about the execution time of a query.
My point was that unless we get the result from the database driver itself, we
cannot fetch it.


> For me this idea of "load on demand" (which in reality is "display on
> demand") is pointless. It is done only because the main lag of v4 comes
> from interface. I don't see any other purpose for it... If You know (and
> You do) that v4 can't handle big results add pagination like every other
> webapp...
>
> And by the way You have a big leak in the query tool - execute a query with
> several thousand rows, scroll above the 1000 mark a few times, execute the same
> query, scroll above the 1000 mark a few times - repeat until You run out of
> memory or v4 crashes...
>
Thanks for reporting, I'll look into this.


>
> --
> Tomek
>
>


Re: Data grid: fetching/scrolling data on user demand

2017-10-17 Thread Dave Page
On Tue, Oct 17, 2017 at 10:51 AM, Tomek  wrote:

> Hi,
>
> > As I mentioned in my previous email, we do not use a server-side
> cursor, so it won't add any
> > limit on the query.
> >
> > The delay is from the database driver itself, it has nothing to do with
> pgAdmin4.
> > Try executing the same query in 'psql', 'pgAdmin3' and any third-party tool
> which uses the libpq library as a
> > backend, you will observe the same behaviour.
>
> It is not exactly true... In v3 the query is executed, fetched and all
> rows are displayed,


No they're not, though they are all transferred to the client which is why
it's slower.

> For me this idea of "load on demand" (which in reality is "display on
demand") is pointless. It is done only because the main lag of v4 comes
from the interface. I don't see any other purpose for it... If You know (and
You do) that v4 can't handle big results add pagination like every other
webapp...

We did that in the first beta, and users overwhelmingly said they didn't
like or want pagination.

What we have now gives users the interface they want, and presents the data
to them quickly - far more quickly than pgAdmin 3 ever did when working
with larger resultsets.

If that's pointless for you, then that's fine, but other users appreciate
the speed and responsiveness.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Data grid: fetching/scrolling data on user demand

2017-10-17 Thread Tomek
Hi,

> As I mentioned in my previous email, we do not use a server-side cursor, so 
> it won't add any
> limit on the query.
> 
> The delay is from the database driver itself, it has nothing to do with pgAdmin4.
> Try executing the same query in 'psql', 'pgAdmin3' and any third-party tool which 
> uses the libpq library as a
> backend, you will observe the same behaviour.

It is not exactly true... In v3 the query is executed, fetched and all rows 
are displayed; in v4 the query is executed and fetched but only 1000 records are 
displayed.

For me this idea of "load on demand" (which in reality is "display on demand") 
is pointless. It is done only because the main lag of v4 comes from the interface. 
I don't see any other purpose for it... If You know (and You do) that v4 can't 
handle big results, add pagination like every other webapp...

And by the way You have a big leak in the query tool - execute a query with several 
thousand rows, scroll above the 1000 mark a few times, execute the same query, 
scroll above the 1000 mark a few times - repeat until You run out of memory or v4 
crashes...

--
Tomek



Re: Data grid: fetching/scrolling data on user demand

2017-10-17 Thread Dave Page
On Tue, Oct 17, 2017 at 9:36 AM, legrand legrand <
legrand_legr...@hotmail.com> wrote:

> pgAdmin doesn't have to wait for all the data,
> as it should only load/fetch the first 1000 rows.
>
> Loading all the data in memory will not be possible for big datasets.
>
> This is a design error from my point of view.
>

pgAdmin doesn't wait for all the data. It does load/fetch only the first
1000 rows.

However, it cannot do anything until Postgres makes data available.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Data grid: fetching/scrolling data on user demand

2017-10-17 Thread legrand legrand
pgAdmin doesn't have to wait for all the data,
as it should only load/fetch the first 1000 rows.

Loading all the data in memory will not be possible for big datasets.

This is a design error from my point of view.

PAscal
SQLeo project manager








Re: Data grid: fetching/scrolling data on user demand

2017-10-17 Thread Dave Page
On Tue, Oct 17, 2017 at 6:36 AM, Murtuza Zabuawala <
murtuza.zabuaw...@enterprisedb.com> wrote:

>
> On Tue, Oct 17, 2017 at 2:22 AM, legrand legrand <
> legrand_legr...@hotmail.com> wrote:
>
>> How long does it take in your environment
>> to fetch the 1000 first records from
>>
>> select * from information_schema.columns a,information_schema.columns b
>>
> I didn't run it because on my environment just a count
> from information_schema.columns gave me 7325 records (the cross join will be
> 7325 * 7325 records, and obviously that will take a huge amount of time).
>

Right - exactly as it does in psql (the command line interface). pgAdmin
can't make the database engine faster - all it can do is retrieve and
display the results as efficiently as possible, once the server makes them
available.


-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Data grid: fetching/scrolling data on user demand

2017-10-16 Thread Murtuza Zabuawala
On Tue, Oct 17, 2017 at 2:22 AM, legrand legrand <
legrand_legr...@hotmail.com> wrote:

> How long does it take in your environment
> to fetch the 1000 first records from
>
> select * from information_schema.columns a,information_schema.columns b
>
I didn't run it because on my environment just a count
from information_schema.columns gave me 7325 records (the cross join will be
7325 * 7325 records, and obviously that will take a huge amount of time).



Re: Data grid: fetching/scrolling data on user demand

2017-10-16 Thread legrand legrand
How long does it take in your environment
to fetch the 1000 first records from

select * from information_schema.columns a,information_schema.columns b 








Re: Data grid: fetching/scrolling data on user demand

2017-10-16 Thread Murtuza Zabuawala
On Tue, Oct 17, 2017 at 12:40 AM, legrand legrand <
legrand_legr...@hotmail.com> wrote:

> maybe this behavior is related to fetching records using a server-side
> cursor?
>
> https://wiki.postgresql.org/wiki/Using_psycopg2_with_PostgreSQL#Fetch_Records_using_a_Server-Side_Cursor

No, we are not using a server-side cursor.
pgAdmin4 executes queries in async mode and polls the result as it becomes
available on the connection file descriptor, so the delay you are facing on
the cross join is from the psycopg2 driver itself. pgAdmin4 will start polling
result data in batches (1000 records by default) incrementally as soon as they
become available on the connection file descriptor.
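For illustration, a minimal psycopg2 sketch of the async-execute-and-poll
pattern described above (this is not the actual pgAdmin4 code, and the DSN is
hypothetical):

    import select
    import psycopg2
    import psycopg2.extensions

    def wait(conn):
        # Block until the asynchronous connection/query is ready, as
        # recommended in the psycopg2 documentation.
        while True:
            state = conn.poll()
            if state == psycopg2.extensions.POLL_OK:
                return
            elif state == psycopg2.extensions.POLL_READ:
                select.select([conn.fileno()], [], [])
            elif state == psycopg2.extensions.POLL_WRITE:
                select.select([], [conn.fileno()], [])
            else:
                raise psycopg2.OperationalError("poll() returned %s" % state)

    # Hypothetical DSN; async_=1 opens the connection in asynchronous mode.
    conn = psycopg2.connect("dbname=test user=postgres", async_=1)
    wait(conn)

    cur = conn.cursor()  # client-side cursor: libpq buffers the whole result
    cur.execute("SELECT * FROM information_schema.columns a, "
                "information_schema.columns b")
    wait(conn)           # returns only once the server has produced all rows

    first_batch = cur.fetchmany(1000)  # cheap: rows are already client-side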

>
>
> I met the same problem using pgjdbc with Oracle SQL Developer as described
> here
> https://stackoverflow.com/questions/46066882/autocommit-always-turns-on-in-oracle-sqldeveloper-when-connecting-to-postgresql
> or
> https://github.com/pgjdbc/pgjdbc/issues/976


Re: Data grid: fetching/scrolling data on user demand

2017-10-16 Thread legrand legrand
maybe this behavior is related to fetching records using a server-side cursor?

https://wiki.postgresql.org/wiki/Using_psycopg2_with_PostgreSQL#Fetch_Records_using_a_Server-Side_Cursor
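For comparison, a minimal sketch of what fetching through a psycopg2
server-side (named) cursor would look like (the reply above notes pgAdmin4
does not use this; the connection string is hypothetical):

    import psycopg2

    conn = psycopg2.connect("dbname=test user=postgres")

    # A named cursor is a server-side cursor (DECLARE ... CURSOR under the
    # hood), so rows are streamed from the server in chunks instead of being
    # buffered in client memory all at once.
    with conn.cursor(name="grid_cursor") as cur:
        cur.itersize = 1000
        cur.execute("SELECT * FROM information_schema.columns a, "
                    "information_schema.columns b")
        first_batch = cur.fetchmany(1000)  # only ~1000 rows cross the wire
    conn.rollback()  # named cursors live inside a transaction; end it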

I met the same problem using pgjdbc with Oracle SQL Developer as described
here
https://stackoverflow.com/questions/46066882/autocommit-always-turns-on-in-oracle-sqldeveloper-when-connecting-to-postgresql
or
https://github.com/pgjdbc/pgjdbc/issues/976






Re: Data grid: fetching/scrolling data on user demand

2017-10-16 Thread legrand legrand
Sorry, why is

select * from information_schema.columns a,information_schema.columns b

on a newly created db never ending?

when

select * from information_schema.columns a,information_schema.columns b limit 1000

takes less than one second?

Is pgadmin4 really fetching only the first 1000 records?

PAscal
SQLeo project manager






Re: Data grid: fetching/scrolling data on user demand

2017-10-16 Thread Dave Page
On Sat, Oct 14, 2017 at 8:24 PM, legrand legrand <
legrand_legr...@hotmail.com> wrote:

> Hello,
>
> The data grid is populated without any limit by default;
> it could be a problem with very big datasets ...
>
> To avoid this, it is possible to limit the number of rows retrieved,
> but that limit is fixed, even if the user tries to scroll more data ...
>
> Is scrolling data on demand supported?
>

pgAdmin 4 loads data on demand (i.e. as the user scrolls) already. That's
why it's significantly faster than pgAdmin 3. The batch size can be changed
(from the default 1000) in the application config file by overriding
the ON_DEMAND_RECORD_COUNT option in web/config_local.py in your
installation.
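For example, an override in web/config_local.py might look like this (5000 is
just an illustrative value):

    # web/config_local.py - overrides the defaults in web/config.py
    # Number of rows the query tool fetches/renders per batch as you scroll.
    ON_DEMAND_RECORD_COUNT = 5000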

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company