On Sun, 2007-09-02 at 13:04 -0500, Kenneth Marshall wrote:
> 2. Evaluate the performance of different hash index implementations
>and/or changes to the current implementation. My current plan is
>to keep the implementation as simple as possible and still provide
>the desired performance
On Thu, Sep 06, 2007 at 01:08:59PM -0500, Kenneth Marshall wrote:
> Since we already have to check the actual tuple values for any index
> lookup in postgresql, we could only store the full hash value and the
> corresponding TIDs in the bucket. Then when we lookup an item by
> calculating its hash,
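The bucket layout Kenneth describes can be sketched in a few lines (Python purely for illustration; `heap`, `build`, and `lookup` are invented stand-ins for the real structures): the index stores only (full hash, TID) pairs, and every probe rechecks the candidate tuples' actual keys, precisely because a matching hash does not prove a matching key.

```python
# Sketch of a hash index whose buckets store only (full_hash, tid) pairs.
# 'heap' stands in for the table: a dict mapping tid -> key. A real
# implementation would fetch tuples by TID and recheck the key, since
# distinct keys can share a hash value.

NBUCKETS = 8

def build(heap):
    """Build the bucket array from the heap."""
    buckets = [[] for _ in range(NBUCKETS)]
    for tid, key in heap.items():
        h = hash(key)
        buckets[h % NBUCKETS].append((h, tid))
    return buckets

def lookup(buckets, heap, key):
    """Return all TIDs whose tuple actually matches 'key'."""
    h = hash(key)
    candidates = [tid for (fh, tid) in buckets[h % NBUCKETS] if fh == h]
    # Recheck against the heap: equal hashes do not imply equal keys.
    return [tid for tid in candidates if heap[tid] == key]
```

The point of the scheme is that the index stays narrow (one word of hash plus a TID per entry) regardless of key size; the recheck cost is already being paid in PostgreSQL anyway.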
Tom Lane wrote:
"Florian G. Pflug" <[EMAIL PROTECTED]> writes:
So I believe you're right, and we can skip taking the lock in the no
xid case - I actually think with just a little bit of more work, we
can go even further, and get rid of the ReadNewTransactionId() call
completely during snapshotting.
"Florian G. Pflug" <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> The lazy-XID patch, as committed, doesn't help that situation at all,
> I think the comment is correct in principle - If we remove the oldest
> xmin without locking, then two concurrent OldestXmin calculations
> will get two different results.
Tom Lane wrote:
Simon was complaining a bit ago that we still have problems with
excessive contention for the ProcArrayLock, and that much of this stems
from the need for transaction exit to take that lock exclusively.
The lazy-XID patch, as committed, doesn't help that situation at all,
saying
On Thu, 6 Sep 2007, Kevin Grittner wrote:
I thought that the bgwriter_lru_percent was scanned from the lru end
each time; I would not expect that it would ever get beyond the oldest
10%.
You're correct; I stated that badly. What I should have said is that your
LRU writer could potentially s
While reading one of the recent -perform threads, it occurred to me to
check, and the 8.2.4 Win32 release binaries aren't marked "large
address aware". This means the process gets a 2GB VM space, which is
normal for 32bit Windows. On x64, my understanding is that each 32-bit
process can actually get a full 4GB address space when the image is
marked large-address-aware.
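The "large address aware" bit lives in the Characteristics field of the PE/COFF file header (IMAGE_FILE_LARGE_ADDRESS_AWARE, value 0x0020). As a sketch of how one could verify release binaries, here is a minimal checker; it is not a full PE parser and assumes a well-formed image:

```python
import struct

IMAGE_FILE_LARGE_ADDRESS_AWARE = 0x0020  # COFF Characteristics flag

def is_large_address_aware(pe_bytes):
    """Check the LARGEADDRESSAWARE bit in a PE image's COFF header."""
    assert pe_bytes[:2] == b"MZ", "not a DOS/PE executable"
    # e_lfanew at offset 0x3C points to the 'PE\0\0' signature.
    pe_off = struct.unpack_from("<I", pe_bytes, 0x3C)[0]
    assert pe_bytes[pe_off:pe_off + 4] == b"PE\0\0", "missing PE signature"
    # The COFF header follows the 4-byte signature; Characteristics is
    # 18 bytes in (Machine 2 + NumberOfSections 2 + TimeDateStamp 4 +
    # PointerToSymbolTable 4 + NumberOfSymbols 4 + SizeOfOptionalHeader 2).
    chars = struct.unpack_from("<H", pe_bytes, pe_off + 4 + 18)[0]
    return bool(chars & IMAGE_FILE_LARGE_ADDRESS_AWARE)
```

In an MSVC environment, `dumpbin /headers` reports the flag ("Application can handle large (>2GB) addresses") and `editbin /LARGEADDRESSAWARE` sets it on an existing binary.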
Simon was complaining a bit ago that we still have problems with
excessive contention for the ProcArrayLock, and that much of this stems
from the need for transaction exit to take that lock exclusively.
The lazy-XID patch, as committed, doesn't help that situation at all,
saying
/*
"Kevin Grittner" <[EMAIL PROTECTED]> writes:
> On Thu, Sep 6, 2007 at 11:27 AM, in message
> <[EMAIL PROTECTED]>, Greg Smith
> <[EMAIL PROTECTED]> wrote:
>> With the default delay of 200ms, this has the LRU-writer scanning the
>> whole pool every 1 second,
>
> Whoa! Apparently I've totally mis
>>> On Thu, Sep 6, 2007 at 7:03 PM, in message
<[EMAIL PROTECTED]>, Jeff Davis <[EMAIL PROTECTED]>
wrote:
>
> I think ... there's still room for a simple tool that can zero out
> the meaningless data in a partially-used WAL segment before compression.
> It seems reasonable to me, so long as you
>>> On Thu, Sep 6, 2007 at 11:27 AM, in message
<[EMAIL PROTECTED]>, Greg Smith
<[EMAIL PROTECTED]> wrote:
> On Thu, 6 Sep 2007, Kevin Grittner wrote:
>
> I have been staring carefully at your configuration recently, and I would
> wager that you could turn off the LRU writer altogether and still
On Thu, 2007-09-06 at 21:50 +0100, Gregory Stark wrote:
> > - Improve scalability of recovery for large I/O bound databases
>
> That seems too vague for the TODO. Did you have specific items in mind?
I think we should parallelise recovery. Heikki wanted to do this another
way, so I worded it vaguely.
"Simon Riggs" <[EMAIL PROTECTED]> writes:
> High Availability
> -
> - Allow a warm standby system to also allow read-only statements
> - Allow WAL traffic to be streamed to another server for stand-by
> replication (synchronous/asynchronous options)
Asynchronous streaming of WAL
On Thu, 2007-09-06 at 19:23 -0400, Tom Lane wrote:
> Jeff Davis <[EMAIL PROTECTED]> writes:
> > On Thu, 2007-09-06 at 12:08 +0100, Heikki Linnakangas wrote:
> >> With file-based log shipping, you can get down to 1 second, by using the
> >> archive_timeout setting. It will produce a lot of log files with very
> >> little content in them, but they will compress well.
Hi Korry,
Can you e-mail the VC++ projects that you created? (You can do that
off-list). I have VC++ here but I haven't tried to do a PG build in
that environment yet (guess it's time to learn).
Done.
However, I can't set any breakpoints using PgAdmin. I know the dlls
are loaded via Pro
Charlie Savage wrote:
> Hi Magnus,
>
>> Yeah, this is a known problem, and it's fixed for 8.3. Really, the msvc
>> build process in 8.2 has a *lot* of rough edges ;-) The code itself builds
>> fine, but the buildsystem needed a lot more work.
>>
> Great - figured that might be the case.
>
>>>
Hi Korry,
I am having problems with getting the debugger to work. Setup:
OS: Windows XP
Postgresql Version: 8.2.4 compiled with VC++ 2005
Since my version of pg is built with VC++ I thought it wise to also use
VC++ for the debugger plugin. So I converted the Makefile into 3
different VC++ projects - pldbgapi, plugin_debugger and targetinfo.
>>> On Thu, Sep 6, 2007 at 3:25 PM, in message
<[EMAIL PROTECTED]>, Jeff Davis <[EMAIL PROTECTED]>
wrote:
> On Thu, 2007-09-06 at 12:08 +0100, Heikki Linnakangas wrote:
>> With file-based log shipping, you can get down to 1 second, by using the
> >> archive_timeout setting. It will produce a lot of log files with very
> >> little content in them, but they will compress well.
Since my version of pg is built with VC++ I thought it wise to also
use VC++ for the debugger plugin. So I converted the Makefile into 3
different VC++ projects - pldbgapi, plugin_debugger and targetinfo.
Note that targetinfo is not mentioned at all in the comments at the
top of Makefile - y
Oh, didn't realise you'd stick it in the actual contrib tree. I thought
it would build with pgxs or something.
You used to have to build postgis from the contrib directory. That
changed a while back, but I guess it's just habit. It has its own
configure and makefile scripts, so doesn't seem t
Jeff Davis <[EMAIL PROTECTED]> writes:
> On Thu, 2007-09-06 at 12:08 +0100, Heikki Linnakangas wrote:
>> With file-based log shipping, you can get down to 1 second, by using the
>> archive_timeout setting. It will produce a lot of log files with very
> >> little content in them, but they will compress well.
On Thu, Sep 06, 2007 at 09:20:31AM -0500, Kevin Grittner wrote:
> >>> On Wed, Sep 5, 2007 at 10:31 PM, in message
> <[EMAIL PROTECTED]>, Greg Smith
> <[EMAIL PROTECTED]> wrote:
> >
> > -There are two magic constants in the code:
> >
> > int smoothing_samples = 16;
> > float scan_whole_pool_seconds = 120.0;
Michael Glaesemann wrote:
On Sep 6, 2007, at 10:53 , Mark Mielke wrote:
I don't like the truncating hash suggestion because it limits the
ability of a hash code to uniquely identify a key.
AIUI, a hash can't be used as a unique identifier: it always needs to
be rechecked due to the chance of collisions.
On Thu, 2007-09-06 at 19:33, apoc9009 wrote:
> Simon Riggs schrieb:
> > I'm not clear whether you are looking for Backup, or High Availability
> > Replication.
> >
> > There is no data loss with the online backup technique described in the
> > manual.
> >
> No, there is a loss of data
On Thu, 2007-09-06 at 12:08 +0100, Heikki Linnakangas wrote:
> With file-based log shipping, you can get down to 1 second, by using the
> archive_timeout setting. It will produce a lot of log files with very
> little content in them, but they will compress well.
I tried doing a couple of pg_switch_xlog() calls.
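For context, the file-based shipping setup being discussed is driven by a couple of postgresql.conf settings; the values and archive path below are illustrative, not recommendations:

```ini
# postgresql.conf (8.2)
archive_timeout = 60      # force a WAL segment switch at least once a minute,
                          # so low-traffic periods still ship a file promptly
archive_command = 'cp "%p" /mnt/standby/archive/"%f"'  # %p = path, %f = file name
```

Pushing archive_timeout toward 1 second ships files faster but, as noted above, floods the archive with nearly empty 16 MB segments unless they are cleaned and compressed.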
On Thu, 2007-09-06 at 17:53 +0300, Hannu Krosing wrote:
> On Thu, 2007-09-06 at 16:15, apoc9009 wrote:
> > > So you want the user to still be connected to the failed machine, but at
> > > the same time be connected to the new live failover machine ?
> > >
> > > -
> > > Hannu
On Thu, Sep 06, 2007 at 04:08:10PM +0200, apoc9009 wrote:
> >archive_timeout setting. It will produce a lot of log files with very
> >little content in them, but they will compress well.
> >
> Yes, it is possible but not recommended. My backup server's filesystem
> will explode :D
. . .
> Corr
On 9/6/07, apoc9009 <[EMAIL PROTECTED]> wrote:
> Backup 12/24/2008 Version 2
> /pg/backup/12_24_2008/base/rcvry.rcv <--- Basebackup
> /pg/backup/12_24_2008/changes/0001.chg <--- Changed Data
> /changes/0002.chg <--- Changed Data
>
On Thu, Sep 06, 2007 at 11:53:45AM -0400, Mark Mielke wrote:
> Hannu Krosing wrote:
One approach is not to mix hashes, but to partition the hash, so that
each column gets its N bits in the hash.
>>> How does that help? You still need all the keys to find out which
>>> bucket to look in.
On Thu, 2007-09-06 at 09:38, Tom Lane wrote:
> Hannu Krosing <[EMAIL PROTECTED]> writes:
> > On Mon, 2007-09-03 at 19:55, Tom Lane wrote:
> >> No, because part of the deal is that you can do lookups using only the
> >> leading index columns. At least, all the existing multicolumn index
> >> types can do that.
Hi Korry,
On Sep 6, 2007, at 10:23 AM, korry.douglas wrote:
John, I started writing up the API documentation and then noticed
that most of what I intended to write is already described in the
pldbgapi.c module.
Take a look at that module and let me know if you have any
questions (you can
Simon Riggs schrieb:
On Thu, 2007-09-06 at 12:53 +0200, apoc9009 wrote:
You've either not read 23.4.4 or haven't understood it. If the text is
unclear, documentation additions/changes are always welcome.
I have read this:
PostgreSQL directly supports file-based log shipping as described above.
Heikki Linnakangas schrieb:
apoc9009 wrote:
"Without data losses" is utopia. For that, you'd need something like
synchronous replication, otherwise you're always going to have a window
where you have something committed in the server, but not yet in the
backup. So it's just a question of how wide that window is.
On Thu, 2007-09-06 at 12:53 +0200, apoc9009 wrote:
> > You've either not read 23.4.4 or haven't understood it. If the text is
> > unclear, documentation additions/changes are always welcome.
> I have read this:
>
> PostgreSQL directly supports file-based log shipping as described above.
> It is also possible to implement record-based log shipping.
On Sep 6, 2007, at 10:53 , Mark Mielke wrote:
I don't like the truncating hash suggestion because it limits the
ability of a hash code to uniquely identify a key.
AIUI, a hash can't be used as a unique identifier: it always needs to
be rechecked due to the chance of collisions. There might
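Michael's point — a hash match can never identify a key on its own — is easy to demonstrate. The sketch below (helper names invented) uses CRC32 truncated to 8 bits as a stand-in for a shortened stored hash; by the pigeonhole principle a collision between distinct keys always exists:

```python
from zlib import crc32

def truncated_hash(key, bits=8):
    """Deterministic hash truncated to 'bits' bits: smaller stored
    hashes mean more false matches, never fewer."""
    return crc32(key.encode()) & ((1 << bits) - 1)

def find_collision(bits=8):
    """Return two distinct keys with the same truncated hash, showing
    why every index probe must recheck the actual key."""
    seen = {}
    i = 0
    while True:
        key = f"key{i}"
        h = truncated_hash(key, bits)
        if h in seen and seen[h] != key:
            return seen[h], key
        seen[h] = key
        i += 1
```

With 8 bits, at most 257 distinct keys are needed before two of them land on the same hash value; the full-width case differs only in how often the recheck rejects a candidate, not in whether the recheck is needed.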
On Thu, 2007-09-06 at 16:15, apoc9009 wrote:
> > So you want the user to still be connected to the failed machine, but at
> > the same time be connected to the new live failover machine ?
> >
> > -
> > Hannu
> No.
> The User should be connected to the running db without restrictions
>>> On Wed, Sep 5, 2007 at 10:31 PM, in message
<[EMAIL PROTECTED]>, Greg Smith
<[EMAIL PROTECTED]> wrote:
>
> -There are two magic constants in the code:
>
> int smoothing_samples = 16;
> float scan_whole_pool_seconds = 120.0;
>
> I personally
> don't feel like these
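The two constants quoted above come from Greg Smith's patch; here is one plausible reading (Python, invented function names — not the patch's actual code) of how a smoothed allocation estimate and a whole-pool scan-rate floor interact:

```python
def smoothed_estimate(prev, latest, smoothing_samples=16):
    """Weighted moving average: each new per-interval observation
    (e.g. buffers allocated since the last bgwriter round) carries
    1/smoothing_samples weight, so spikes decay gradually."""
    return prev + (latest - prev) / smoothing_samples

def min_buffers_per_round(shared_buffers, bgwriter_delay_ms,
                          scan_whole_pool_seconds=120.0):
    """Lower bound on buffers examined per round so that the LRU scan
    still covers the whole pool within scan_whole_pool_seconds."""
    rounds = scan_whole_pool_seconds * 1000.0 / bgwriter_delay_ms
    return shared_buffers / rounds
```

Under this reading, raising smoothing_samples makes the writer slower to react to load spikes, and scan_whole_pool_seconds caps how stale the least-recently-used end of the pool can get — which is why exposing it as a GUC was suggested downthread.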
Is there any documentation that describes how to use the SQL
functions? Some are obvious enough, but a simple example showing a
debugging session would be helpful.
John, I started writing up the API documentation and then noticed that
most of what I intended to write is already described in the pldbgapi.c module.
Hannu Krosing wrote:
One approach is not to mix hashes, but to partition the hash, so that
each column gets its N bits in the hash.
How does that help? You still need all the keys to find out which
bucket to look in.
No, you need to look at only the buckets where that part of the hash matches.
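Hannu's partitioned-hash idea can be sketched as follows (bit widths, names, and the CRC32 stand-in hash are all invented for illustration). Because each column owns its own bit range in the combined hash, a lookup supplying only the leading column can enumerate just the buckets whose fixed bits match:

```python
from zlib import crc32

BITS_PER_COL = 8  # each column owns a fixed slice of the combined hash

def combined_hash(cols):
    """Pack each column's own hash into its private bit range."""
    h = 0
    for i, col in enumerate(cols):
        part = crc32(str(col).encode()) & ((1 << BITS_PER_COL) - 1)
        h |= part << (i * BITS_PER_COL)
    return h

def candidate_buckets(leading, ncols, nbuckets):
    """Buckets that may hold rows matching the leading column(s):
    fix the leading columns' bits and enumerate every value of the
    remaining columns' bits. This pruning only works because the
    columns' bit ranges are independent of each other."""
    fixed = combined_hash(leading)
    free_bits = (ncols - len(leading)) * BITS_PER_COL
    return {(fixed | (rest << (len(leading) * BITS_PER_COL))) % nbuckets
            for rest in range(1 << free_bits)}
```

This is the property Tom asked about: a mixed (combined) hash loses the ability to search on leading columns alone, while a partitioned hash keeps it at the cost of using fewer hash bits per column.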
On Thu, 6 Sep 2007, Kevin Grittner wrote:
If you exposed the scan_whole_pool_seconds as a tunable GUC, that would
allay all of my concerns about this patch. Basically, our problems were
resolved by getting all dirty buffers out to the OS cache within two
seconds
Unfortunately it wouldn't make
So you want the user to still be connected to the failed machine, but at
the same time be connected to the new live failover machine ?
-
Hannu
No.
The User should be connected to the running db without restrictions
while backup is in progress
Apoc
On Thu, 2007-09-06 at 12:53, apoc9009 wrote:
> I wish to have a solution that backs up my database without data
> losses,
Then you need the backup record to be completed on the backup machine
before the transaction commit returns on master.
This is guaranteed to be really slow.
Hannu Krosing <[EMAIL PROTECTED]> writes:
> On Mon, 2007-09-03 at 19:55, Tom Lane wrote:
>> No, because part of the deal is that you can do lookups using only the
>> leading index columns. At least, all the existing multicolumn index
>> types can do that.
> One approach is not to mix hashes, but to partition the hash, so that
> each column gets its N bits in the hash.
On Thu, 2007-09-06 at 12:21 +0200, apoc9009 wrote:
> If the system crashes, you have a data loss of over 16 MBytes; that's
> fatal and not acceptable! A 1 MByte loss of ASCII data is also
> not acceptable!
Is any data loss acceptable in the case of a disaster? How much?
--
Simon Riggs
2ndQuadrant
On Thu, 2007-09-06 at 12:53, apoc9009 wrote:
> > You've either not read 23.4.4 or haven't understood it. If the text is
> > unclear, documentation additions/changes are always welcome.
> I have read this:
>
> PostgreSQL directly supports file-based log shipping as described above.
On Mon, 2007-09-03 at 19:55, Tom Lane wrote:
> Gregory Stark <[EMAIL PROTECTED]> writes:
> > "Kenneth Marshall" <[EMAIL PROTECTED]> writes:
> >> - What about multi-column indexes? The current implementation
> >> only supports 1 column.
>
> > That seems kind of weird. It seems ob
apoc9009 wrote:
You've either not read 23.4.4 or haven't understood it. If the text is
unclear, documentation additions/changes are always welcome.
I have read this:
PostgreSQL directly supports file-based log shipping as described above.
It is also possible to implement record-based log shipping, though this
requires custom development.
Hi,
On 9/6/07, apoc9009 <[EMAIL PROTECTED]> wrote:
> File-based log-shipping backups have a big problem for doing continuous
> backups: you have to wait until the postmaster has written the WAL file,
> and only after that can you save it to the backup server. But one WAL
> file has a size of 16 MByte by default! (tha
apoc9009 wrote:
> I wish to have a solution that backs up my database without data
> losses, without locking tables, without a shutdown, and without any
> user being forced to log out (backup in production state, online,
> without data losses).
"Without data losses" is utopia. For that, you'd need something like
synchronous replication.
You've either not read 23.4.4 or haven't understood it. If the text is
unclear, documentation additions/changes are always welcome.
I have read this:
PostgreSQL directly supports file-based log shipping as described above.
It is also possible to implement record-based log shipping, though this
requires custom development.
apoc9009 wrote:
http://www.postgresql.org/docs/8.2/static/warm-standby.html
Particularly section 23.4.4
23.4.4 is what I am using right now, but this is not enough for me!
No versioning, no way to prevent data losses.
You have to wait until a WAL file is written (the default WAL file size
is 16 MByte).
http://www.postgresql.org/docs/8.2/static/warm-standby.html
Particularly section 23.4.4
23.4.4 is what I am using right now, but this is not enough for me!
No versioning, no way to prevent data losses.
You have to wait until a WAL file is written (the default WAL file size
is 16 MByte).
http://www.postgresql.org/docs/8.2/static/warm-standby.html
Particularly section 23.4.4
That is replication, NOT backup.
---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
apoc9009 wrote:
Hi Hackers,
In my project I have to handle a database with 600 GBytes of text only,
distributed over 4 tablespaces on multiple hard disks and remote SANs
connected via Gigabit Ethernet to the remote SAN storage.
I need more flexibility in doing backups of my big database, but the
built-in online backup
John DeSoi wrote:
> Hi Dave,
>
> On Sep 5, 2007, at 3:54 AM, Dave Page wrote:
>
>> That's odd - I cannot reproduce that on OS X using beta 4 (which has no
>> important changes in the debugger over beta 3).
>>
>> Can you provide a simple test case?
>
> I get the same error with this:
>
> create
Hi Hackers,
In my project I have to handle a database with 600 GBytes of text only,
distributed over 4 tablespaces on multiple hard disks and remote SANs
connected via Gigabit Ethernet to the remote SAN storage.
I need more flexibility in doing backups of my big database, but the
built-in online backup
On Wed, 2007-09-05 at 20:11 +0400, Teodor Sigaev wrote:
> I found two queries which do the same thing but they are very different in
> time.
> For the test suite it's about 10^3 times, but on real data it can be 10^5 times.
> It's observed on 8.1-current, 8.2-current and CVS HEAD versions. Interesti