On Sep 13, 2008, at 4:39 AM, Tomasz Ostrowski wrote:
> On 2008-09-12 15:52, Jack Orenstein wrote:
>> Sorry, I misspoke. I have an index, but preferred doing a scan without
>> the index in this case.
> Why?
> The only reason I can think of is that you'd like to avoid disk seeking.
> But you get at most 1 row in 30 seconds, so disk latency (only several
> milliseconds...

Tomasz Ostrowski wrote:
> On 2008-09-11 17:21, Jack Orenstein wrote:
>> The id > last_id trick doesn't work for me -- I don't have an index
>> that would support it efficiently.
> You do not have a primary key? If you do then you have an index as it
> is automatically created.
Sorry, I misspoke. I have an index, but preferred doing a scan without
the index in this case.

Tomasz Ostrowski wrote:
> On 2008-09-11 18:03, Jack Orenstein wrote:
> When you do:
>     result = query("select something from sometable")
> then all rows of a result will be cached by a client program.
I am very sure this is not happening. Maybe some rows are being cached
(specifying fetch size), but certainly not...

On 2008-09-11 17:21, Jack Orenstein wrote:
> The id > last_id trick doesn't work for me -- I don't have an index
> that would support it efficiently.
You do not have a primary key? If you do then you have an index as it is
automatically created.
Watch this:
test=> create temporary table tes...

On 2008-09-11 18:03, Jack Orenstein wrote:
>> When you do:
>>     result = query("select something from sometable")
>> then all rows of a result will be cached by a client program.
>
> I am very sure this is not happening. Maybe some rows are being
> cached (specifying fetch size), but certainly not...
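
Whether the client buffers the entire result set is the crux of this exchange. With the PostgreSQL JDBC driver, rows are only streamed incrementally when a fetch size is set and autocommit is off; otherwise the whole result is materialized client-side. The distinction can be sketched with a small, hedged illustration — it uses Python's standard-library sqlite3 instead of JDBC, and the table and column names are made up for the example:

```python
import sqlite3

# Toy table standing in for "sometable" from the thread.
conn = sqlite3.connect(":memory:")
conn.execute("create table sometable (something integer)")
conn.executemany("insert into sometable values (?)",
                 [(i,) for i in range(1000)])
conn.commit()

# Buffer-everything style: the entire result set is held in client memory,
# which is what happens in pgjdbc without a fetch size / with autocommit on.
all_rows = conn.execute("select something from sometable").fetchall()

# Chunked style: pull a bounded number of rows at a time, analogous in
# spirit to Statement.setFetchSize() with autocommit disabled.
cur = conn.execute("select something from sometable")
seen = 0
while True:
    chunk = cur.fetchmany(100)  # at most 100 rows in memory per step
    if not chunk:
        break
    seen += len(chunk)

assert seen == len(all_rows) == 1000
```

Both styles visit the same 1000 rows; only the peak client-side memory differs.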

Tomasz Ostrowski wrote:
On 2008-09-11 17:21, Jack Orenstein wrote:
>> Then do the processing in separate transactions like this (in pseudocode):
> The id > last_id trick doesn't work for me -- I don't have an index
> that would support it efficiently.
>
> Turning on autocommit seems to work, I'm just not clear on th...

On 2008-09-10 16:46, Jack Orenstein wrote:
> Application requirement. We need to do something for each row retrieved
> from BIG and the something is expensive. We do the scan slowly (30-second
> sleep inside the loop) to amortize the cost.
Then do the processing in separate transactions like...
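
The "id > last_id trick" being advised here — scanning BIG in small batches, each in its own short transaction, so no snapshot stays open for the whole slow scan — can be sketched as follows. This is a minimal, hedged illustration using Python's standard-library sqlite3 rather than PostgreSQL/JDBC; the table, batch size, and `process` helper are invented for the example:

```python
import sqlite3

# Toy stand-in for the BIG table; in PostgreSQL the id column would be
# the primary key, whose index makes the "id > last_id" predicate cheap.
conn = sqlite3.connect(":memory:")
conn.execute("create table big (id integer primary key, payload text)")
conn.executemany("insert into big values (?, ?)",
                 [(i, "row-%d" % i) for i in range(1, 101)])
conn.commit()

def process(row):
    """Placeholder for the expensive per-row work from the thread
    (the original loop slept 30 seconds per row)."""
    pass

BATCH = 10
last_id = 0
processed = 0
while True:
    # Each batch is fetched and handled in its own short transaction:
    # grab the next BATCH rows beyond the last id already handled.
    rows = conn.execute(
        "select id, payload from big where id > ? order by id limit ?",
        (last_id, BATCH)).fetchall()
    if not rows:
        break
    for row in rows:
        process(row)
        processed += 1
    last_id = rows[-1][0]
    conn.commit()  # close the transaction so vacuum is never held up for long

print(processed)  # 100
```

Because every iteration commits, the database never sees a transaction older than one batch, which is exactly what keeps vacuum effective on the other tables.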

Martijn van Oosterhout wrote:
On Wed, Sep 10, 2008 at 09:45:04AM -0400, Jack Orenstein wrote:
> Am I on the right track -- does autocommit = false for the BIG scan force
> versions of TINY to accumulate? I played around with a JDBC test program,
> and so far cannot see how the autocommit mode causes...

Martijn van Oosterhout wrote:
> Vacuum can only clean up stuff older than the oldest open transaction.
> So if you have a transaction which is open for hours, then stuff made
> since then can't be vacuumed. The solution is: don't do that.
Actually it's worse than that: older than the oldest tra...
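
The claim above — that vacuum can only reclaim row versions older than the oldest open transaction — can be checked on a live server by looking for long-running transactions. A possible monitoring query, hedged as a sketch: it assumes the 8.3-era `pg_stat_activity` column names (`procpid`, `current_query`; later releases renamed these to `pid` and `query`):

```sql
-- Transactions open the longest; rows deleted or updated after the
-- oldest xact_start cannot yet be reclaimed by VACUUM.
SELECT procpid, now() - xact_start AS xact_age, current_query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start
LIMIT 5;
```

A BIG scan left uncommitted for hours would show up at the top of this list, pinning dead versions of TINY's rows for its whole duration.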

On Wed, Sep 10, 2008 at 09:45:04AM -0400, Jack Orenstein wrote:
> Am I on the right track -- does autocommit = false for the BIG scan force
> versions of TINY to accumulate? I played around with a JDBC test program,
> and so far cannot see how the autocommit mode causes variations in what
> is s...

I'm trying to understand the effect of autocommit on vacuum behavior
(Postgres 8.3, if it matters). Let's suppose you have two tables, BIG and
TINY, in a database accessed through JDBC. BIG has lots of rows. There are
inserts, updates, and every so often there is a scan of the entire table.
The...