Re: [HACKERS] stuck spinlock

2013-12-17 Thread bricklen
On Mon, Dec 16, 2013 at 6:46 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 Andres Freund and...@2ndquadrant.com writes:
  Hard to say, the issues fixed in the release are quite important as
  well. I'd tend to say they are more important. I think we just need to
  release 9.3.3 pretty soon.

 Yeah.


Has there been any talk about when a 9.3.3 (and/or 9.2.7?) patch might be
released?


Re: [HACKERS] Why we are going to have to go DirectIO

2013-12-05 Thread bricklen
On Thu, Dec 5, 2013 at 12:43 PM, Josh Berkus j...@agliodbs.com wrote:

 On 12/05/2013 12:41 PM, Robert Haas wrote:
  Do drunks lurch differently in cathedrals than they do elsewhere?

 Yeah, because they lurch from one column to another.


Row by row?


Re: [HACKERS] pg_system_identifier()

2013-08-22 Thread bricklen
On Thu, Aug 22, 2013 at 6:42 AM, Andres Freund and...@2ndquadrant.com wrote:


 FWIW I've wished for that function repeatedly. Mostly just to make sure
 I am actually connected to the same network of replicas and not some
 other.
 It's also useful if you're providing support for a limited number of
 machines and you want some form of identifying a node.


There's a hostname function at PGXN which serves some use-cases:
http://pgxn.org/dist/hostname/
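
(A minimal usage sketch, assuming the hostname() function name given in that dist's docs:)

CREATE EXTENSION hostname;
SELECT hostname();  -- returns the server's hostname, e.g. 'db1.example.com'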


Re: [HACKERS] pg_dump --snapshot

2013-05-07 Thread bricklen
On Tue, May 7, 2013 at 10:02 AM, Dimitri Fontaine dimi...@2ndquadrant.fr wrote:

 Rather than take some locks, you can now prevent the database objects
 from changing with an event trigger. pg_dump could install that event
 trigger in a preparing transaction, then do its work as currently, then
 when done either remove or disable the event trigger.

 All the event trigger has to do is unconditionally raise an exception
 with a message explaining that no DDL command is accepted while a
 dump is in progress.
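
(For illustration, a minimal sketch of such a DDL-blocking event trigger, using the event-trigger syntax from 9.3; the object names are mine:)

CREATE FUNCTION forbid_ddl_during_dump()
RETURNS event_trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- reject every DDL command while the dump runs
    RAISE EXCEPTION 'no DDL command is accepted while a dump is in progress';
END;
$$;

CREATE EVENT TRIGGER no_ddl_during_dump
    ON ddl_command_start
    EXECUTE PROCEDURE forbid_ddl_during_dump();

-- when the dump finishes:
DROP EVENT TRIGGER no_ddl_during_dump;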



I'm thinking of a case where a hot standby is executing a pg_dump and DDL
is issued on the master -- would that cause any unexpected problems on the
hot standby?


Re: [HACKERS] pg_dump --exclude-table-data

2011-08-24 Thread bricklen
On Wed, Aug 24, 2011 at 2:01 PM, Josh Berkus j...@agliodbs.com wrote:

 FWIW, I have immediate use for this in creating cut-down versions of
 databases for testing purposes.  It'll eliminate a couple pages of shell
 scripts for me.

Speaking of cut-down versions, I have recently been using pg_sample,
and have been happy with the resulting subset database. I created a db 10
GB in size from a ~600 GB source db in a few minutes.

https://github.com/mla/pg_sample



Re: [HACKERS] pg_upgrade bug found!

2011-04-08 Thread bricklen
On Fri, Apr 8, 2011 at 4:00 PM, Jeff Davis pg...@j-davis.com wrote:
 On Fri, 2011-04-08 at 15:03 -0400, Bruce Momjian wrote:
 A fix will be included in upcoming Postgres releases 8.4.8 and 9.0.4.
 These releases will remove the need for the above script by correctly
 updating all TOAST tables in the migrated databases.

 You might want to clarify that the fix may be required if you ever used
 pg_upgrade before. Using the new version of pg_upgrade/dump when you
 still have a bad relfrozenxid doesn't help.

 Regards,
        Jeff Davis


I've been noticing in my logs for the past few days the message you
note in the wiki. It seems to occur during a vacuum around 7:30am
every day. I will be running the suggested script shortly, but can
anyone tell me how bad a shape my db is in? This is our production
db with two hot standbys running off it.

grep -i 'could not access status of transaction' postgresql-2011-04*.log
postgresql-2011-04-06.log:2011-04-06 07:28:27 PDT [15882]: [1-1]
(user=postgres) (rhost=[local]) ERROR:  could not access status of
transaction 1273385235
postgresql-2011-04-07.log:2011-04-07 07:27:14 PDT [29790]: [1-1]
(user=postgres) (rhost=[local]) ERROR:  could not access status of
transaction 1273385235
postgresql-2011-04-08.log:2011-04-08 07:26:35 PDT [2402]: [1-1]
(user=postgres) (rhost=[local]) ERROR:  could not access status of
transaction 1273385235



Re: [HACKERS] pg_upgrade bug found!

2011-04-08 Thread bricklen
On Fri, Apr 8, 2011 at 4:51 PM, bricklen brick...@gmail.com wrote:
 I've been noticing in my logs for the past few days the message you
 note in the wiki. It seems to occur during a vacuum around 7:30am
 every day. I will be running the suggested script shortly, but can
 anyone tell me how bad a shape my db is in? This is our production
 db with two hot standbys running off it.

 grep -i 'could not access status of transaction' postgresql-2011-04*.log
 postgresql-2011-04-06.log:2011-04-06 07:28:27 PDT [15882]: [1-1]
 (user=postgres) (rhost=[local]) ERROR:  could not access status of
 transaction 1273385235
 postgresql-2011-04-07.log:2011-04-07 07:27:14 PDT [29790]: [1-1]
 (user=postgres) (rhost=[local]) ERROR:  could not access status of
 transaction 1273385235
 postgresql-2011-04-08.log:2011-04-08 07:26:35 PDT [2402]: [1-1]
 (user=postgres) (rhost=[local]) ERROR:  could not access status of
 transaction 1273385235

version 9.0.3, if that helps



Re: [HACKERS] pg_upgrade bug found!

2011-04-08 Thread bricklen
Hi Stephen,

On Fri, Apr 8, 2011 at 6:57 PM, Stephen Frost sfr...@snowman.net wrote:
 bricklen,

 * bricklen (brick...@gmail.com) wrote:
 I've been noticing in my logs for the past few days the message you
 note in the wiki. It seems to occur during a vacuum around 7:30am
 every day. I will be running the suggested script shortly, but can
 anyone tell me how bad a shape my db is in? This is our production
 db with two hot standbys running off it.

 Unfortunately, I don't think the script that Bruce posted will help if
 the clog files have been removed (which appears to be the case here).
 Do you have a backup which includes older files which existed under the
 'pg_clog' directory under your database's root?  Hopefully you do and
 can restore those and restart the database.  If you restore and then
 restart then Bruce's script could be run and hopefully would clear out
 these errors.

 Bruce, please correct me if I got any part of this wrong.

        Thanks,

                Stephen

I looked deeper into our backup archives, and it appears that I do
have the clog file referenced in the error message (DETAIL:  Could not
open file "pg_clog/04BE": No such file or directory).

It exists in an untouched backup directory that I originally made when
I set up the backup and ran pg_upgrade. I'm not sure if it is from
version 8.4 or 9.0.2 though. Is it safe to just copy it into my
production pg_clog dir and restart?



Re: [HACKERS] pg_upgrade bug found!

2011-04-08 Thread bricklen
On Fri, Apr 8, 2011 at 7:11 PM, Stephen Frost sfr...@snowman.net wrote:
 bricklen,

 * bricklen (brick...@gmail.com) wrote:
 I looked deeper into our backup archives, and it appears that I do
 have the clog file referenced in the error message (DETAIL:  Could not
 open file "pg_clog/04BE": No such file or directory).

 Great!  And there's no file in pg_clog which matches that name (or
 any which are smaller in value), right?

 It exists in an untouched backup directory that I originally made when
 I set up the backup and ran pg_upgrade. I'm not sure if it is from
 version 8.4 or 9.0.2 though. Is it safe to just copy it into my
 production pg_clog dir and restart?

 It should be, provided you're not overwriting any files or putting a
 clog file in place which is greater than the other clog files in that
 directory.

It appears that there are no files lower.

Missing clog: 04BE

production pg_clog dir:
ls -lhrt 9.0/data/pg_clog
total 38M
-rw------- 1 postgres postgres 256K Jan 25 21:04 04BF
-rw------- 1 postgres postgres 256K Jan 26 12:35 04C0
-rw------- 1 postgres postgres 256K Jan 26 20:58 04C1
-rw------- 1 postgres postgres 256K Jan 27 13:02 04C2
-rw------- 1 postgres postgres 256K Jan 28 01:00 04C3
...

old backup pg_clog dir (possibly v8.4)
...
-rw------- 1 postgres postgres 256K Jan 23 21:11 04BB
-rw------- 1 postgres postgres 256K Jan 24 08:56 04BC
-rw------- 1 postgres postgres 256K Jan 25 06:32 04BD
-rw------- 1 postgres postgres 256K Jan 25 10:58 04BE
-rw------- 1 postgres postgres 256K Jan 25 20:44 04BF
-rw------- 1 postgres postgres 8.0K Jan 25 20:54 04C0


So, if I have this right, my steps to take are:
- copy the backup 04BE to production pg_clog dir
- restart the database
- run Bruce's script

Does that sound right? Has anyone else experienced this? I'm leery of
testing this on my production db, as our last pg_dump was from early
this morning, so I apologize for being so cautious.

Thanks,

Bricklen



Re: [HACKERS] pg_upgrade bug found!

2011-04-08 Thread bricklen
On Fri, Apr 8, 2011 at 7:20 PM, bricklen brick...@gmail.com wrote:
 On Fri, Apr 8, 2011 at 7:11 PM, Stephen Frost sfr...@snowman.net wrote:
 bricklen,

 * bricklen (brick...@gmail.com) wrote:
 I looked deeper into our backup archives, and it appears that I do
 have the clog file referenced in the error message (DETAIL:  Could not
 open file "pg_clog/04BE": No such file or directory).

 Great!  And there's no file in pg_clog which matches that name (or
 any which are smaller in value), right?

 It exists in an untouched backup directory that I originally made when
 I set up the backup and ran pg_upgrade. I'm not sure if it is from
 version 8.4 or 9.0.2 though. Is it safe to just copy it into my
 production pg_clog dir and restart?

 It should be, provided you're not overwriting any files or putting a
 clog file in place which is greater than the other clog files in that
 directory.

 It appears that there are no files lower.

 Missing clog: 04BE

 production pg_clog dir:
 ls -lhrt 9.0/data/pg_clog
 total 38M
 -rw------- 1 postgres postgres 256K Jan 25 21:04 04BF
 -rw------- 1 postgres postgres 256K Jan 26 12:35 04C0
 -rw------- 1 postgres postgres 256K Jan 26 20:58 04C1
 -rw------- 1 postgres postgres 256K Jan 27 13:02 04C2
 -rw------- 1 postgres postgres 256K Jan 28 01:00 04C3
 ...

 old backup pg_clog dir (possibly v8.4)
 ...
 -rw------- 1 postgres postgres 256K Jan 23 21:11 04BB
 -rw------- 1 postgres postgres 256K Jan 24 08:56 04BC
 -rw------- 1 postgres postgres 256K Jan 25 06:32 04BD
 -rw------- 1 postgres postgres 256K Jan 25 10:58 04BE
 -rw------- 1 postgres postgres 256K Jan 25 20:44 04BF
 -rw------- 1 postgres postgres 8.0K Jan 25 20:54 04C0


 So, if I have this right, my steps to take are:
 - copy the backup 04BE to production pg_clog dir
 - restart the database
 - run Bruce's script

 Does that sound right? Has anyone else experienced this? I'm leery of
 testing this on my production db, as our last pg_dump was from early
 this morning, so I apologize for being so cautious.

 Thanks,

 Bricklen

What I've tested and current status:

When I saw the announcement a few hours ago, I started setting up a
9.0.3 hot standby. I brought it live a few minutes ago.
- I copied over the 04BE clog from the original backup,
- restarted the standby cluster,
- ran the script against the main database, which turned up a bunch
of other transactions whose clog files were missing:

psql:pg_upgrade_tmp.sql:539: ERROR:  could not access status of
transaction 1248683931
DETAIL:  Could not open file "pg_clog/04A6": No such file or directory.

psql:pg_upgrade_tmp.sql:540: ERROR:  could not access status of
transaction 1249010987
DETAIL:  Could not open file "pg_clog/04A7": No such file or directory.

psql:pg_upgrade_tmp.sql:541: ERROR:  could not access status of
transaction 1250325059
DETAIL:  Could not open file "pg_clog/04A8": No such file or directory.

psql:pg_upgrade_tmp.sql:542: ERROR:  could not access status of
transaction 1252759918
DETAIL:  Could not open file "pg_clog/04AA": No such file or directory.

psql:pg_upgrade_tmp.sql:543: ERROR:  could not access status of
transaction 1254527871
DETAIL:  Could not open file "pg_clog/04AC": No such file or directory.

psql:pg_upgrade_tmp.sql:544: ERROR:  could not access status of
transaction 1256193334
DETAIL:  Could not open file "pg_clog/04AD": No such file or directory.

psql:pg_upgrade_tmp.sql:556: ERROR:  could not access status of
transaction 1268739471
DETAIL:  Could not open file "pg_clog/04B9": No such file or directory.

I checked, and found that each one of those files exists in the
original backup location.

- scp'd those files to the hot standby clog directory,
- pg_ctl stop -m fast
- pg_ctl start
- ran the script

Hit a bunch more missing clog file errors like the above, and repeated
the scp + bounce + script process 4 or 5 more times until no more
missing clog file messages surfaced.

Now, is this safe to run against my production database?

**Those steps again, to run against prod:

cp the clog files from the original backup dir to my production pg_clog dir
bounce the database
run the script against all databases in the cluster

Anyone have any suggestions or changes before I commit myself to this
course of action?

Thanks,

Bricklen



Re: [HACKERS] pg_upgrade bug found!

2011-04-08 Thread bricklen
On Fri, Apr 8, 2011 at 8:07 PM, Bruce Momjian br...@momjian.us wrote:
 Stephen Frost wrote:
 -- Start of PGP signed section.
 bricklen,

 * bricklen (brick...@gmail.com) wrote:
  Now, is this safe to run against my production database?

 Yes, with a few caveats.  One recommendation is to also increase
 autovacuum_freeze_max_age to 500000000 (500m), which will hopefully
 prevent autovacuum from 'butting in' and causing issues during the
 process.  Also, a database-wide 'VACUUM FREEZE;' should be lower-risk,
 if you can afford it (it will cause a lot of i/o on the system).  The
 per-table 'VACUUM FREEZE table;' that the script does can end up
 removing clog files prematurely.

  Anyone have any suggestions or changes before I commit myself to this
  course of action?

 If you run into problems, and perhaps even before starting, you may want
 to pop in to #postgresql on irc.freenode.net, there are people there who
 can help you with this process who are very familiar with PG.

 Stephen is 100% correct and we have updated the wiki to explain recovery
 details:

        http://wiki.postgresql.org/wiki/20110408pg_upgrade_fix


Thanks guys, I really appreciate your help. For the vacuum freeze, you
say database-wide; should I run vacuumdb -a -v -F? Will freezing the
tables in the cluster's other databases help (I'm not sure how that
works with the template0/1 databases)?



Re: [HACKERS] pg_upgrade bug found!

2011-04-08 Thread bricklen
On Fri, Apr 8, 2011 at 9:28 PM, Alvaro Herrera
alvhe...@commandprompt.com wrote:

 Why is it important to have the original pg_clog files around?  Since
 the transactions in question are below the freeze horizon, surely the
 tuples that involve those transactions have all been visited by vacuum
 and thus removed if they were leftover from aborted transactions or
 deleted, no?  So you could just fill those files with the 0x55 pattern
 (signalling all transactions are committed) and the net result should
 be the same.  No?

 Forgive me if I'm missing something.  I haven't been following this
 thread and I'm more than a little tired (but wanted to shoot this today
 because I'm not gonna be able to until Monday).

 --
 Álvaro Herrera alvhe...@commandprompt.com
 The PostgreSQL Company - Command Prompt, Inc.
 PostgreSQL Replication, Consulting, Custom Development, 24x7 support
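
(For reference, the arithmetic behind the 0x55 pattern, as I understand the clog format: each transaction gets two status bits, and 01 means committed; 0x55 is binary 01010101, i.e. four "committed" slots per byte, so a 256K segment filled with 0x55 marks all of its ~1 million transactions committed.)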

Update on the status of the steps we took, which were:
- test on a hot standby by bringing it live, running the script,
determining the missing clog files, copying them into the live (hot
standby) pg_clog dir

Now, on the master: copied the same old clog files into the production
*master*, ran vacuumdb -a -v -F. The step I should have taken on the
master before the vacuumdb -F would have been to run the
http://wiki.postgresql.org/wiki/20110408pg_upgrade_fix script to see
if I was missing any clog files on the master.
That vacuum freeze step pointed out a missing clog file; I copied it into
the master pg_clog dir and ran the aforementioned script. It didn't fail
on any of the clog files this time, so now I am rerunning the vacuum
freeze command and hoping like hell it works!

If the current run of the vacuum freeze fails, I'll report back.

Thanks again for everyone's help.



Re: [HACKERS] pg_upgrade bug found!

2011-04-08 Thread bricklen
On Fri, Apr 8, 2011 at 10:01 PM, bricklen brick...@gmail.com wrote:
 Update on the status of the steps we took, which were:
 - test on a hot standby by bringing it live, running the script,
 determining the missing clog files, copying them into the live (hot
 standby) pg_clog dir

 Now, on the master: copied the same old clog files into the production
 *master*, ran vacuumdb -a -v -F. The step I should have taken on the
 master before the vacuumdb -F would have been to run the
 http://wiki.postgresql.org/wiki/20110408pg_upgrade_fix script to see
 if I was missing any clog files on the master.
 That vacuum freeze step pointed out a missing clog file; I copied it into
 the master pg_clog dir and ran the aforementioned script. It didn't fail
 on any of the clog files this time, so now I am rerunning the vacuum
 freeze command and hoping like hell it works!

 If the current run of the vacuum freeze fails, I'll report back.

 Thanks again for everyone's help.


The vacuumdb -a -v -F completed successfully this time.

Cheers!



Re: [HACKERS] Re: [ADMIN] PD_ALL_VISIBLE flag was incorrectly set happened during repeatable vacuum

2011-03-02 Thread bricklen
On Wed, Mar 2, 2011 at 3:53 PM, daveg da...@sonic.net wrote:
  PostgreSQL version is 8.4.4.

 I don't see how this could be related, but since you're running on NFS,
 maybe it is, somehow:
 http://archives.postgresql.org/message-id/4d40ddb7.1010...@credativ.com
 (for example what if the visibility map fork's last page is overwritten?)

 Running on iSCSI, not NFS. But it is still a NetApp, so who knows. I'll look.
 Also, we are not seeing any of the 'unexpected data beyond EOF' errors,
 just thousands per day of the PD_ALL_VISIBLE error.

 -dg

FWIW, we had a couple of occurrences of that message about a month ago on 9.0.2:

http://archives.postgresql.org/pgsql-general/2011-01/msg00887.php

Haven't seen it since we ran a cluster-wide vacuum.



Re: [HACKERS] ALTER TABLE ... REPLACE WITH

2010-12-16 Thread bricklen
On Wed, Dec 15, 2010 at 2:39 AM, Simon Riggs si...@2ndquadrant.com wrote:
 Perhaps a more useful definition would be

 EXCHANGE TABLE target WITH source;

 which just swaps the heap and indexes of each table.

At the risk of stating the obvious, this would work with partition exchange too?
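
(Hypothetical usage under the proposed syntax, for the partition roll-in case; the statement was never committed, and the table names are illustrative:)

-- load and index the new data offline, then swap it in atomically
EXCHANGE TABLE sales_2010q4 WITH sales_2010q4_staging;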



Re: [HACKERS] leaky views, yet again

2010-10-05 Thread bricklen
On Tue, Oct 5, 2010 at 1:25 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
 Right now this is managed by query classes in our Java applications,
 but as we're moving to a variety of new and different technologies
 it's getting harder for the DBAs to ensure that nothing is leaking
 to inappropriate recipients.  :-(  I think we're going to need to
 move more of the enforcement to database views and/or security
 restrictions based on database roles.

Does Veil cover some of those needs?
http://veil.projects.postgresql.org/curdocs/index.html
I've never used it, but from what I recall it does something similar.



Re: [HACKERS] Getting to 8.3 beta1

2007-09-28 Thread Bricklen Anderson

Simon Riggs wrote:

...knock-on...
tackle


Been watching the Rugby World Cup?  :)



Re: [HACKERS] UPSERT

2007-03-02 Thread Bricklen Anderson

Simon Riggs wrote:

I'm a bit surprised the TODO didn't mention the MERGE statement, which
is the SQL:2003 syntax for specifying this as an atomic statement.


http://archives.postgresql.org/pgsql-hackers/2004-05/thrd5.php#00497

There is a thread there entitled "Adding MERGE to the TODO list".



Re: [HACKERS] UPSERT

2007-03-02 Thread Bricklen Anderson

Tom Lane wrote:

Bricklen Anderson [EMAIL PROTECTED] writes:

http://archives.postgresql.org/pgsql-hackers/2004-05/thrd5.php#00497
There is a thread there entitled "Adding MERGE to the TODO list".


The more interesting discussion is the one that got it taken off TODO again,
from Nov 2005.  Try these threads:
http://archives.postgresql.org/pgsql-hackers/2005-11/msg00501.php
http://archives.postgresql.org/pgsql-hackers/2005-11/msg00536.php

regards, tom lane


Yeah, that's a better set of threads.



Re: [HACKERS] Anyone want to admit to being presinet.com?

2006-02-13 Thread Bricklen Anderson

Tom Lane wrote:

And if so, would you mind stopping your mail system from regurgitating
copies of pghackers traffic?  It's especially bad that you're sending
the stuff with a fraudulent envelope From, ie, one not pointing back
at yourself.



That would be me. I've notified one of our admins about the problem. It 
appears we are testing some new software on our mail system, and 
obviously there is a misconfiguration.


Thanks for the heads-up, and sorry about the noise.

Where did you see the emails? In this list? I haven't seen any show up 
here, or I would have gotten on this earlier.




Re: [HACKERS] New project launched : PostgreSQL GUI Installer for

2006-01-31 Thread Bricklen Anderson

J. Andrew Rogers wrote:
snip
A graphical installer for Unix is fine, but please, do not make it
anything like Oracle's graphical installer.  Oracle's graphical install
process gives command line installs a good name for ease of use.



J. Andrew Rogers


I heartily second that!



Re: [HACKERS] Rollback Mountain

2006-01-26 Thread Bricklen Anderson

Michael Fuhr wrote:

Rollback Mountain

A raw, powerful story of two young transactions, one serializable
and the other read-committed, who meet in the summer of 2005 updating
tables in the harsh, high-volume environment of a contemporary
online trading system and form an unorthodox yet session-long bond --
by turns ecstatic, bitter, and conflicted.



ENCORE ENCORE!

:)



Re: [HACKERS] prefix btree implementation

2005-10-05 Thread Bricklen Anderson
Qingqing Zhou wrote:
 I am not sure if this idea was mentioned before.
 
 The basic prefix btree idea is quite straightforward, i.e., try to
 compress the key items within a data page by sharing the common prefix.
 Thus the fanout of the page is increased and the benefit is obvious
 theoretically.
 
snip
 
 So together, there are basically four types of possible sharing:
 column-wise (case 1), character-wise (case 2), column-character-wise (case
 3), and byte-wise (case 4).
 
Oracle implements something similar called index compression, but I believe it
is only for common column values. I haven't checked in versions after 9r1, so
maybe there are other options implemented by now.

Jonathan Lewis describes some pros and cons here:
http://www.jlcomp.demon.co.uk/faq/compress_ind.html
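
(For comparison, a hedged sketch of Oracle's column-wise form, with illustrative names; compression is declared per index:)

-- Oracle: share the leading customer_id value across keys in each leaf block
CREATE INDEX orders_cust_dt_idx ON orders (customer_id, order_date) COMPRESS 1;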




Re: R: [HACKERS] Table Partitioning is in 8.1

2005-09-23 Thread Bricklen Anderson
Paolo Magnoli wrote:
 Hi, I seem to recall that in Oracle you can load into specific partitions
 without specifically naming them in insert statements (in other words you
 insert into the table and the engine redirects the data to the corresponding
 partition),

This is correct.
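
(A hedged illustration in Oracle syntax, with made-up names; the first form routes rows automatically, while naming the partition is optional:)

INSERT INTO sales VALUES (1001, DATE '2005-09-23', 42.00);  -- engine picks the partition
INSERT INTO sales PARTITION (sales_2005q3)
    VALUES (1001, DATE '2005-09-23', 42.00);                -- partition named explicitly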



Re: [HACKERS] Tablespace-level Block Size Definitions

2005-05-31 Thread Bricklen Anderson

Jonah H. Harris wrote:

Hey everyone,

 I'm sure this has been thought of, but I was wondering whether anyone had
 discussed the allowance of run-time block size specifications at the
 tablespace level.  I know that a change such as this would substantially
impact buffer operations, transactions, access methods, the storage 
manager, and a lot of other stuff, however it would give an 
 administrator the ability to enhance performance for specific applications.


Arguably, one can set the block size at compile-time, but for a system 
running multiple databases it *may* be a nice feature.  Would it be used 
a lot?  Probably not.  Would I use it?  Certainly!  Would some of my 
clients use it?  Yes.


Perhaps a TODO item for some advantageous company to fund?

-Jonah


Have you used Oracle's version as well?
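
(For reference, a hedged sketch of Oracle's variant, from memory: a non-default block size needs a matching buffer cache configured before the tablespace can be created. Names and sizes are illustrative:)

ALTER SYSTEM SET db_32k_cache_size = 64M;
CREATE TABLESPACE ts_32k DATAFILE '/u01/oradata/ts_32k_01.dbf' SIZE 1G
    BLOCKSIZE 32K;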




Re: [HACKERS] Adding MERGE to the TODO list (resend with subject)

2004-05-11 Thread Bricklen
Alvaro Herrera wrote:

snip
Oh, I see.  Complex stuff ... I wonder how will it work with sequences
-- if one insertion fails and we have to try again, there's a chance a
sequence could be advanced more than once.  Note the article skips the
signal-statement symbol (is it present in SQL99? What does it do?)
I also wonder if there will be a corresponding RULE implementation ...

The full DB2 reference is at
http://publib.boulder.ibm.com/infocenter/db2help/index.jsp?topic=/com.ibm.db2.udb.doc/admin/r0010873.htm
(signal-statement is something to raise an exception, apparently)

(I wonder why they don't use BNF syntax anymore ...)

Just to add to this information, Oracle 9i and 10g have also implemented 
the MERGE command. 9i offers an update/insert, whereas 10g adds a delete 
option as well, which is rather handy.
'Purpose', quoted from: 
http://download-west.oracle.com/docs/cd/B13789_01/server.101/b10759/statements_9016.htm#sthref7014

(note, to view this link, you will need to sign up for a free OTN acct.)

Use the MERGE statement to select rows from one or more sources for 
update or insertion into one or more tables. You can specify conditions 
to determine whether to update or insert into the target tables.

This statement is a convenient way to combine multiple operations. It 
lets you avoid multiple INSERT, UPDATE, and DELETE DML statements.

MERGE is a deterministic statement. That is, you cannot update the same 
row of the target table multiple times in the same MERGE statement.

Point being, I've found the delete option very useful too, rather than 
having to do the same procedurally.
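
(A hedged sketch of the 10g form with illustrative names, showing the delete option alongside update/insert:)

MERGE INTO bonuses d
USING (SELECT employee_id, salary FROM employees) s
   ON (d.employee_id = s.employee_id)
 WHEN MATCHED THEN
      UPDATE SET d.bonus = d.bonus + s.salary * 0.01
      -- 10g addition: prune rows right after they are updated
      DELETE WHERE (s.salary > 8000)
 WHEN NOT MATCHED THEN
      INSERT (d.employee_id, d.bonus)
      VALUES (s.employee_id, s.salary * 0.01);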


