Hello Dimitri,
>Rainer, looking at the psqlODBC source code it seems to work in a similar
>way and has an option "SQL_ROWSET_SIZE" to execute the FETCH query in the
>same way as "FETCH_COUNT" in psql. Try setting it to 100 and let's see
>if it'll be better...
But that is only for bulk fetching with SQLExtendedFetch
Rainer, looking at the psqlODBC source code it seems to work in a similar
way and has an option "SQL_ROWSET_SIZE" to execute the FETCH query in the
same way as "FETCH_COUNT" in psql. Try setting it to 100 and let's see
if it'll be better...
Rgds,
-Dimitri
On 6/22/07, Rainer Bauer <[EMAIL PROTECTED]> wrote:
Hello Joshua,
>That opens up some questions. What ODBC driver are you using (with exact
>version please).
psqlODBC 8.2.4.2 (built locally).
I have restored 8.2.4.0 from the official MSI installer, but the results
are the same.
Rainer
Rainer Bauer wrote:
Hello Dimitri,
>Hope it's more clear now and at least there is a choice :))
>As well, if your query result will be 500 (for ex.) I think the
>difference will be less important between non-CURSOR and "FETCH 500"
>execution...
The problem is that I am using ODBC and not libpq directly.
Hello Dimitri,
>Hope it's more clear now and at least there is a choice :))
>As well, if your query result will be 500 (for ex.) I think the
>difference will be less important between non-CURSOR and "FETCH 500"
>execution...
The problem is that I am using ODBC and not libpq directly.
I will have
>I did not find a solution so far; and for bulk data transfers I now
>programmed a workaround.
But that is surely based on some component installed on the server, isn't it?
Correct. I use a pyro-remote server. On request this remote server copies
the relevant rows into a temporary table, uses a copy_to call to push them into a StringIO object
PFC,
> Correct. I use a pyro-remote server. On request this remote server copies
> the relevant rows into a temporary table, uses a copy_to call to push
> them into a StringIO object (that's Python's version of an "in-memory
> file"), serializes that StringIO object, does a bz2 compression and
> transfers it.
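The workaround quoted above can be sketched in pure Python. In the real setup, psycopg2's cursor.copy_to fills the buffer from the temporary table and pyro carries the payload over the wire; here that step is simulated with tab-separated text (COPY's default text format), so this is only a sketch of the packing scheme, not the actual server code:

```python
import bz2
import pickle
from io import StringIO

def pack_rows(rows):
    """Dump rows into an in-memory file, then compress for transfer.

    In the real workaround, psycopg2's cursor.copy_to(buf, "temp_table")
    fills the buffer from the temporary table; here we simulate that with
    plain tab-separated text.
    """
    buf = StringIO()
    for row in rows:
        buf.write("\t".join(str(col) for col in row) + "\n")
    # Serialize the buffer contents and bz2-compress them, as described
    # in the pyro-based workaround above.
    return bz2.compress(pickle.dumps(buf.getvalue()))

def unpack_rows(payload):
    """Client side: decompress, deserialize, and split back into rows."""
    text = pickle.loads(bz2.decompress(payload))
    return [line.split("\t") for line in text.splitlines()]
```

The point of the scheme is that the whole result set moves in a single round trip, so a 150 ms ping is paid once rather than once per row.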
Hello Rainer,
initially I was surprised you did not match the non-CURSOR time with FETCH
100, but thinking about it a little the explanation is very simple -
let's analyze what's going on in both cases:
Without CURSOR:
1.) app calls PQexec() with "Query" and waiting for the result
2.) PG sends the result
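The two flows differ mainly in the number of client-server round trips. A minimal model (a sketch, assuming one extra exchange each for the DECLARE and the CLOSE in cursor mode):

```python
import math

def round_trips(total_rows, fetch_count=None):
    """Estimate client-server round trips for a SELECT.

    fetch_count=None models the plain PQexec() case: a single exchange
    returns the whole result. Otherwise the query runs as DECLARE CURSOR
    plus repeated FETCH <fetch_count>; the +2 is an assumed cost of one
    exchange each for the DECLARE and the CLOSE.
    """
    if fetch_count is None:
        return 1
    return math.ceil(total_rows / fetch_count) + 2
```

For the 50-row result discussed in this thread, FETCH 1 needs 52 exchanges against 1 for the plain query; at a 150 ms ping that is roughly 7.8 s of latency alone.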
Hello Dimitri,
>Let's stay optimistic - at least now you know the main source of your problem!
>:))
>
>Let's see now with CURSOR...
>
>Firstly try this:
>munnin=>\timing
>munnin=>\set FETCH_COUNT 1;
>munnin=>select * from "tblItem";
>
>what's the time you see here? (I think your application is working in this manner)
Rainer,
>I did not find a solution so far; and for bulk data transfers I now
>programmed a workaround.
But that is surely based on some component installed on the server, isn't it?
Correct. I use a pyro-remote server. On request this remote server copies
the relevant rows into a temporary table
Dave Page wrote:
I don't see why pgAdmin should be slow though - it should be only
marginally slower than psql I would think (assuming there are no thinkos
in our code that none of us ever noticed).
Nevermind...
/D
Rainer Bauer wrote:
It's not immediately clear why pgAdmin would have the same issue,
though, because AFAIK it doesn't rely on ODBC.
No it doesn't. That's the reason I used it to verify the behaviour.
But I remember Dave Page mentioning using a virtual list control to display
the results and t
Hello Dimitri,
>Rainer, but did you try initial query with FETCH_COUNT equal to 100?...
Yes, I tried it with different values and it's as you suspected:
FETCH_COUNT   1   Time: 8642,000 ms
FETCH_COUNT   5   Time: 2360,000 ms
FETCH_COUNT  10   Time: 1563,000 ms
FETCH_COUNT  25   Time: 1329,000 ms
FETCH_COUN
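Those timings are consistent with a latency-dominated model: the total time is roughly a fixed base cost plus one ~150 ms round trip per FETCH batch. The 50 rows and the 150 ms ping come from this thread; the 400 ms base cost is a guess, so the fit is rough but the per-round-trip term dominates:

```python
import math

PING_MS = 150   # average ping reported earlier in the thread
ROWS = 50       # size of the result set being fetched
BASE_MS = 400   # assumed fixed cost (planning, final transfer, client work)

def predicted_ms(fetch_count):
    """One round trip per FETCH batch, plus a fixed base cost."""
    return BASE_MS + math.ceil(ROWS / fetch_count) * PING_MS

for fc, measured in [(1, 8642), (5, 2360), (10, 1563), (25, 1329)]:
    print(f"FETCH_COUNT {fc:2d}: predicted ~{predicted_ms(fc)} ms, "
          f"measured {measured} ms")
```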
Rainer, but did you try initial query with FETCH_COUNT equal to 100?...
Rgds,
-Dimitri
On 6/22/07, Rainer Bauer <[EMAIL PROTECTED]> wrote:
Hello Dimitri,
>Let's stay optimistic - at least now you know the main source of your
>problem! :))
>
>Let's see now with CURSOR...
>
>Firstly try this:
>munn
Hello Tom,
>This previous post says that someone else solved an ODBC
>performance problem with UseDeclareFetch=1:
I thought about that too, but enabling UseDeclareFetch will slow down the
query: it takes 30 seconds instead of 8.
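For reference, psqlODBC exposes cursor-based fetching through DSN options; an illustrative odbc.ini fragment (option names as documented for psqlODBC, host and DSN names are placeholders):

```
; odbc.ini excerpt (illustrative)
[munnin]
Driver          = PostgreSQL Unicode
Servername      = db.example.com
Database        = munnin
UseDeclareFetch = 1     ; run SELECTs via DECLARE CURSOR / FETCH
Fetch           = 100   ; rows per FETCH when UseDeclareFetch is on
```

With Fetch at 100, the cursor path should need only a few round trips for a 50-row result.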
>It's not immediately clear why pgAdmin would have the same issue,
Tom,
seems to me the problem here is rather simple: the current issue depends
completely on the low-level implementation of the SELECT query in the
application. In case it's implemented using "DECLARE ...
CURSOR ..." and then "FETCH NEXT" by default (the most common case), it
brings the application into
Rainer Bauer <[EMAIL PROTECTED]> writes:
> Fetching the 50 rows takes 12 seconds (without logging 8 seconds) and
> examining the log I found what I suspected: the performance is directly
> related to the ping time to the server since fetching one tuple requires a
> round trip to the server.
Hm, bu
Rainer Bauer wrote:
Hello Dimitri,
>but did you try to execute your query directly from 'psql'?...
munnin=>\timing
munnin=>select * from "tblItem";
(50 rows)
Time: 391,000 ms
>Why I'm asking: seems to me your case is probably just network latency
>dependent, and what I noticed dur
Let's stay optimistic - at least now you know the main source of your problem! :))
Let's see now with CURSOR...
Firstly try this:
munnin=>\timing
munnin=>\set FETCH_COUNT 1;
munnin=>select * from "tblItem";
what's the time you see here? (I think your application is working in
this manner)
Now, c
Hello Dimitri,
>but did you try to execute your query directly from 'psql'?...
munnin=>\timing
munnin=>select * from "tblItem";
(50 rows)
Time: 391,000 ms
>Why I'm asking: seems to me your case is probably just network latency
>dependent, and what I noticed during last benchmarks with PostgreS
Hi Rainer,
but did you try to execute your query directly from 'psql'?...
Why I'm asking: it seems to me your case is probably just network-latency
dependent, and what I noticed during recent benchmarks with PostgreSQL is
that the SELECT query becomes very traffic-hungry if you are using a CURSOR.
Program 'psq
I wrote:
>Hello Harald,
>
>>I do not have a solution, but I can confirm the problem :)
>
>At least that rules out any misconfiguration issues :-(
I did a quick test with my application and enabled the ODBC logging.
Fetching the 50 rows takes 12 seconds (without logging 8 seconds) and
examining t
Hello Harald,
>I do not have a solution, but I can confirm the problem :)
At least that rules out any misconfiguration issues :-(
>I did not find a solution so far; and for bulk data transfers I now
>programmed a workaround.
But that is surely based on some component installed on the server, isn't it?
Hello Tom,
>I seem to recall that we've seen similar reports before, always
>involving Windows :-(. Check whether you have any nonstandard
>components hooking into the network stack on that machine.
I just repeated the test by booting into "Safe Mode with Network Support", but
the results are the same.
Rainer Bauer <[EMAIL PROTECTED]> writes:
> one of my customers installed Postgres on a public server to access the data
> from several places. The problem is that it takes _ages_ to transfer data from
> the database to the client app. At first I suspected a problem with the ODBC
> driver and my ap
Hello Rainer,
>The database computer is connected via a 2MBit SDSL connection. I myself
>have a 768/128 KBit ADSL connection and pinging the server takes 150ms on
>average.
I do not have a solution, but I can confirm the problem :)
One PostgreSQL-Installation: Server 8.1 and 8.2 on Windows in th
Hello all,
one of my customers installed Postgres on a public server to access the data
from several places. The problem is that it takes _ages_ to transfer data from
the database to the client app. At first I suspected a problem with the ODBC
driver and my application, but using pgAdminIII 1.6.3