On Tue, Oct 17, 2017 at 1:44 PM, Tomek wrote:
> Hi,
>
> >>> It is not exactly true... In v3 the query is executed, fetched and
> >>> all rows are displayed,
> >>
> >> No they're not, though they are all transferred to the client which is
> >> why it's slower.
> >
> > They are
On Tue, Oct 17, 2017 at 11:35 AM, Tomek wrote:
> Hi,
>
> >> It is not exactly true... In v3 the query is executed, fetched and all
> >> rows are displayed,
> >
> > No they're not, though they are all transferred to the client which is
> > why it's slower.
>
> They are not what?
Hi,
>> It is not exactly true... In v3 the query is executed, fetched and all rows
>> are displayed,
>
> No they're not, though they are all transferred to the client which is why
> it's slower.
They are not what? What is slower is the "display" part in both versions. You
have data from
Hi Tomek,
On Tue, Oct 17, 2017 at 3:21 PM, Tomek wrote:
> Hi,
>
> > As I mentioned in my previous email, we do not use a server-side
> > cursor, so it won't add any limit on the query.
> >
> > The delay is from the database driver itself; it has nothing to do with
> > pgAdmin4.
>
On Tue, Oct 17, 2017 at 10:51 AM, Tomek wrote:
> Hi,
>
> > As I mentioned in my previous email, we do not use a server-side
> > cursor, so it won't add any limit on the query.
> >
> > The delay is from the database driver itself; it has nothing to do with
> > pgAdmin4.
> > Try
Hi,
> As I mentioned in my previous email, we do not use a server-side cursor, so
> it won't add any limit on the query.
>
> The delay is from the database driver itself; it has nothing to do with pgAdmin4.
> Try executing the same query in 'psql', 'pgAdmin3', and a third-party tool which
> uses libpq
On Tue, Oct 17, 2017 at 9:36 AM, legrand legrand <
legrand_legr...@hotmail.com> wrote:
> pgAdmin doesn't have to wait for all the data,
> as it should only load/fetch the first 1000 rows.
>
> Loading all the data in memory will not be possible for big datasets.
>
> This is a design error at my
pgAdmin doesn't have to wait for all the data,
as it should only load/fetch the first 1000 rows.
Loading all the data in memory will not be possible for big datasets.
This is a design error from my point of view.
PAscal
SQLeo projection manager
On Tue, Oct 17, 2017 at 6:36 AM, Murtuza Zabuawala <
murtuza.zabuaw...@enterprisedb.com> wrote:
>
> On Tue, Oct 17, 2017 at 2:22 AM, legrand legrand <
> legrand_legr...@hotmail.com> wrote:
>
>> How long does it take in your environment
>> to fetch the first 1000 records from
>>
>> select * from
On Tue, Oct 17, 2017 at 2:22 AM, legrand legrand <
legrand_legr...@hotmail.com> wrote:
> How long does it take in your environment
> to fetch the first 1000 records from
>
> select * from information_schema.columns a, information_schema.columns b
>
I didn't run it because on my environment just
How long does it take in your environment
to fetch the first 1000 records from
select * from information_schema.columns a, information_schema.columns b
--
Sent from:
http://www.postgresql-archive.org/PostgreSQL-pgadmin-support-f2191615.html
On Tue, Oct 17, 2017 at 12:40 AM, legrand legrand <
legrand_legr...@hotmail.com> wrote:
> maybe this behavior is related to fetching records using a server-side
> cursor?
>
> https://wiki.postgresql.org/wiki/Using_psycopg2_with_PostgreSQL#Fetch_Records_using_a_Server-Side_Cursor
No we are
maybe this behavior is related to fetching records using a server-side cursor?
https://wiki.postgresql.org/wiki/Using_psycopg2_with_PostgreSQL#Fetch_Records_using_a_Server-Side_Cursor
I met the same problem using pgjdbc with Oracle SQL Developer as described
here
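The wiki page linked above describes psycopg2's named (server-side) cursors, which keep the result set on the server and stream it to the client in batches instead of transferring everything at once. Here is a minimal sketch of that batched-fetch idea, written against the stdlib sqlite3 module so it runs without a PostgreSQL server; with psycopg2 the pattern is the same, except the cursor is opened with a name (e.g. `conn.cursor(name='stream')`) to make it server-side:

```python
import sqlite3

# Throwaway in-memory table standing in for a large result set.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(5000)])

cur = conn.cursor()
cur.execute("SELECT id FROM t ORDER BY id")

# Instead of cur.fetchall() (all 5000 rows in client memory at once),
# pull one screenful at a time, the way a grid that scrolls on demand would.
first_page = cur.fetchmany(1000)   # rows 0..999
second_page = cur.fetchmany(1000)  # rows 1000..1999

print(len(first_page), first_page[0][0], second_page[0][0])  # 1000 0 1000
```

Each `fetchmany()` call pulls only the next batch, so client memory stays bounded regardless of how large the full result is.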
Sorry, why is
*select * from information_schema.columns a, information_schema.columns b*
on a newly created db never ending,
when
*select * from information_schema.columns a, information_schema.columns b
limit 1000*
takes less than one second?
Is pgadmin4 really fetching only the 1000
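The asymmetry is expected: the statement is a cross join of `information_schema.columns` with itself, so the result grows quadratically with the row count of that view. A back-of-envelope sketch, assuming a fresh database exposes roughly 1,800 rows in `information_schema.columns` (the exact count is hypothetical here; it varies with server version and installed extensions):

```python
# Hypothetical row count for information_schema.columns on a fresh database;
# the real number depends on the PostgreSQL version and installed extensions.
n = 1800

# Each row of alias a pairs with every row of alias b.
cross_join_rows = n * n
print(cross_join_rows)          # 3240000 rows to transfer without a LIMIT

# A LIMIT 1000 touches only a tiny fraction of the join, hence sub-second.
print(1000 / cross_join_rows)
```

That is why the unlimited query appears to hang while the LIMIT 1000 variant returns almost instantly.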
On Sat, Oct 14, 2017 at 8:24 PM, legrand legrand <
legrand_legr...@hotmail.com> wrote:
> Hello,
>
> The data grid is populated without any limit by default,
> which could be a problem with very big datasets ...
>
> To avoid this, it is possible to limit the number of rows retrieved,
> but that limit is
Hello,
The data grid is populated without any limit by default,
which could be a problem with very big datasets ...
To avoid this, it is possible to limit the number of rows retrieved,
but that limit is fixed, even if the user tries to scroll for more data ...
Is scrolling data on demand supported?
Thanks