Hi,

>>>> That is not exactly true... In v3 the query is executed, fetched, and
>>>> all rows are displayed,
>>>
>>> No they're not, though they are all transferred to the client, which is
>>> why it's slower.
>>
>> They are not what?
>
> The handling of rows in pgAdmin 3 is not as you described.
Hi,

>> That is not exactly true... In v3 the query is executed, fetched, and all
>> rows are displayed,
>
> No they're not, though they are all transferred to the client, which is
> why it's slower.

They are not what? What is slower is the "display" part, in both versions. You
have data from the server...
As I mentioned in my previous email, we do not use a server-side cursor, so
it won't add any limit on the query.
The delay is from the database driver itself; it has nothing to do with
pgAdmin4.
Try executing the same query in 'psql', 'pgAdmin3', and a third-party tool
which uses the libpq library as its backend...
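The point about the driver is easy to see in code. Below is a minimal sketch, in plain Python, of how a client-side (default) cursor behaves: everything is materialised at execute() time, so a later fetchmany() only pages through memory. The ClientSideCursor class is a stand-in written for illustration; it is not psycopg2's actual implementation.

```python
# Illustrative stand-in for a client-side (default) cursor: execute()
# transfers the WHOLE result set into client memory, so the delay happens
# before the application sees a single row. Not psycopg2's real internals.

class ClientSideCursor:
    def __init__(self, server_rows):
        self._server_rows = server_rows  # what the server would send
        self._buffer = []

    def execute(self, query):
        # The entire result set is buffered before execute() returns --
        # this transfer is the driver-side delay, regardless of how many
        # rows the application later displays.
        self._buffer = list(self._server_rows)

    def fetchmany(self, size):
        batch = self._buffer[:size]
        del self._buffer[:size]
        return batch


cur = ClientSideCursor(range(100_000))    # e.g. a large result set
cur.execute("SELECT * FROM big_table")    # all 100,000 rows buffered here
first_page = cur.fetchmany(1000)          # displaying a page is then fast
print(len(first_page), len(cur._buffer))  # → 1000 99000
```

With a server-side (named) cursor, by contrast, the transfer itself happens in batches, which is the alternative discussed elsewhere in this thread.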
On Tue, Oct 17, 2017 at 9:51 AM, legrand legrand <legrand_legr...@hotmail.com> wrote:
> The first 1000 rows are available in less than one second.
> See the query with limit 1000.
>
We cannot add arbitrary limits/offsets to users' queries. They would affect
timing for those who are trying to tune queries...
The first 1000 rows are available in less than one second.
See the query with limit 1000.
Monitoring memory usage or pg_stat_activity shows that all the data is
fetched.
--
Sent from:
http://www.postgresql-archive.org/PostgreSQL-pgadmin-support-f2191615.html
pgAdmin doesn't have to wait for all the data,
as it should only load/fetch the first 1000 rows.
Loading all the data in memory will not be possible for big datasets.
This is a design error, from my point of view.
PAscal
SQLeo projection manager
On Tue, Oct 17, 2017 at 2:22 AM, legrand legrand <legrand_legr...@hotmail.com> wrote:
> How long does it take in your environment
> to fetch the first 1000 records from
>
> select * from information_schema.columns a, information_schema.columns b
>
I didn't run it, because on my environment just c...
How long does it take in your environment
to fetch the first 1000 records from

select * from information_schema.columns a, information_schema.columns b
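For context on why the unlimited version appears to never finish: the statement is a cross join of information_schema.columns with itself, so its result has N² rows for N rows in the catalog view. A rough back-of-envelope, assuming (hypothetically) about 2,000 rows in information_schema.columns on a fresh database:

```python
# Assumed, not measured: row count of information_schema.columns on a
# newly created database.
n = 2_000

full_result = n * n         # cross join pairs every row with every row
limited_result = 1_000      # what the LIMIT 1000 variant transfers

print(full_result)                    # → 4000000
print(full_result // limited_result)  # → 4000 (times more rows to move)
```

So the LIMIT 1000 query being sub-second while the full query seems to hang is consistent with the client transferring the entire multi-million-row result.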
Maybe this behavior is related to fetching records using a server-side
cursor?
https://wiki.postgresql.org/wiki/Using_psycopg2_with_PostgreSQL#Fetch_Records_using_a_Server-Side_Cursor
I met the same problem using pgjdbc with Oracle SQL Developer, as described
here:
https://stackoverflow.com/questio
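For reference, the pattern that wiki page describes looks roughly like the sketch below: giving conn.cursor() a name makes psycopg2 declare the cursor on the server, and iteration then fetches itersize rows per round trip instead of transferring everything up front. The stream_rows() helper name is mine, and the fake connection at the bottom only stands in for a real psycopg2 connection so the snippet runs without a database.

```python
def stream_rows(conn, query, batch_size=2000):
    """Iterate over a result set in batches via a server-side cursor."""
    # With psycopg2, a *named* cursor is a server-side cursor; unnamed
    # cursors buffer the whole result set client-side on execute().
    cur = conn.cursor(name="stream_rows_cursor")
    cur.itersize = batch_size  # rows fetched per network round trip
    cur.execute(query)
    for row in cur:
        yield row
    cur.close()


# Minimal fakes so the sketch is runnable without PostgreSQL; they model
# only the parts of the psycopg2 API that stream_rows() touches.
class FakeCursor:
    def __init__(self):
        self.itersize = 0
        self._rows = []

    def execute(self, query):
        self._rows = [(i,) for i in range(5)]

    def __iter__(self):
        return iter(self._rows)

    def close(self):
        pass


class FakeConnection:
    def cursor(self, name=None):
        return FakeCursor()


print(list(stream_rows(FakeConnection(), "SELECT ...")))
# → [(0,), (1,), (2,), (3,), (4,)]
```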
Sorry, why is

select * from information_schema.columns a, information_schema.columns b

on a newly created db never-ending, when

select * from information_schema.columns a, information_schema.columns b
limit 1000

takes less than one second? Is pgadmin4 really fetching only the first 1000
rows?
Hello,
The data grid is populated without any limit by default;
it could be a problem with very big datasets...
To avoid this, it is possible to limit the number of rows retrieved,
but that limit is fixed, even if the user tries to scroll for more data...
Is scrolling data on demand supported?
Thanks in advance