Re: [GENERAL] Postgres 9.3 beta log

2013-08-10 Thread Stefan Kaltenbrunner
On 08/10/2013 04:21 PM, Bruce Momjian wrote:
 On Sat, Aug 10, 2013 at 06:38:57AM +, Arvind Singh wrote:
 as you can see it is for a single record login88193 it places

 PARSER STATISTICS
 PARSE ANALYSIS STATISTICS
 REWRITER STATISTICS

 I know it appears because I have enabled the log_parser_* options in the log

 But what I would like to know is the meaning of these statistics and how
 useful they can be for server load evaluation
 
 You should look at the getrusage manual pages for the meaning of the
 values, and perhaps this for the meaning of these stages of a query:
 
   http://www.postgresql.org/developer/backend/

true - but it seems a bit dubious that all of them show basically zero
(or a constant time)



Stefan




Re: [GENERAL] Roadmap for Postgres on AIX

2013-04-21 Thread Stefan Kaltenbrunner
On 04/17/2013 07:45 PM, Thomas Munro wrote:
 On 19 March 2013 01:00, Tom Lane t...@sss.pgh.pa.us wrote:
 
 Wasim Arif wasima...@gmail.com writes:
  What is the road map for Postgres on the AIX platform? I
 understand that
  the pg build farm contains an AIX 5.3 server; are there any plans to
  upgrade to 6.1 and 7.1?
 
 The reason there's an AIX 5.3 buildfarm member is that someone cares
 enough about preserving portability to that platform to provide a
 buildfarm member.  If you're worried about other AIX releases, I suggest
 you do likewise.  It's not a huge burden.  Basic info about it is here:
 http://buildfarm.postgresql.org/cgi-bin/register-form.pl
 
 By and large, our approach to AIX is the same as to any other platform:
 we'll support it as long as users of the platform provide testing and
 any necessary portability fixes.  That burden might fall more directly
 on you, as one of a relatively small number of users, than it would on
 somebody using say Linux or BSD.  But as long as you are willing to do
 some of the work you need not fear that we're planning to abandon you.
 
 
 For those organisations running PostgreSQL on AIX, I wonder if it would
 be possible for a member of 'IBM PartnerWorld' to use the free 'Virtual
 Loaner' programme[1] to set up build farm members on a range of AIX
 versions and POWER chips.
 
 My employer is in this category, but I am not in a position to
 contribute company time, patches, or resources currently (although that
 might change).  From memory we've had to make a couple of local tweaks
 to makefiles for our particular versions of things, although we are
 mostly focussed on clients on AIX, not the server.  But perhaps some of
 the other (rare!) AIX users from the list might be able to look into VPL
 farm animals?
 
 [1] http://www-304.ibm.com/partnerworld/wps/pub/systems/vlp/index

hmm looks kinda interesting and better POWER coverage on the BF would be
nice - if your company cannot donate time, maybe they could give someone
from the community access to those systems for setting up bf animals?


Stefan






Re: [GENERAL] What's wrong with postgresql.org domain?

2013-04-18 Thread Stefan Kaltenbrunner
On 04/18/2013 01:08 PM, Eduardo Morras wrote:
 On Thu, 18 Apr 2013 20:35:37 +1000
 Chris Angelico ros...@gmail.com wrote:
 
 That one's clearly fine. What about www.postgresql.org? It's possible
 you have a poisoned cache for just that one record.

 ChrisA
 
 I have cleaned all caches (dns, web cache, etc) and get:
 
 camibar% nslookup www.postgresql.org
 Server: 62.42.230.24
 Address:62.42.230.24#53
 
 Non-authoritative answer:
 www.postgresql.org  canonical name = www.mirrors.postgresql.org.
 Name:   www.mirrors.postgresql.org
 Address: 87.238.57.232
 Name:   www.mirrors.postgresql.org
 Address: 98.129.198.126
 Name:   www.mirrors.postgresql.org
 Address: 217.196.149.50

those are the correct IPv4 addresses of all currently active postgresql
web frontends.


 
 In 87.238.57.232 (Sweden), 98.129.198.126 (San Antonio, TX) and
 217.196.149.50 (Salzburg) I get the lighttpd default page.

this is expected if you are not sending an HTTP Host header for one of
our domains.
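
For example, a quick way to check that from the command line (using one of
the frontend IPs from the nslookup output above; just a sketch):

    curl http://87.238.57.232/                                # no Host header
    curl -H 'Host: www.postgresql.org' http://87.238.57.232/  # with Host header

the first request hits the lighttpd default page, the second one should
return the real website.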


Stefan




Re: [GENERAL] Too far out of the mainstream

2012-09-01 Thread Stefan Kaltenbrunner
On 09/01/2012 01:24 PM, Peter Bex wrote:
 On Sat, Sep 01, 2012 at 12:43:15AM +0200, Geert Mak wrote:
 There is this case studies section as well -

 http://www.postgresql.org/about/casestudies/

 Which appear to me a little old and a little too little, one could try to 
 add more, perhaps.
 
 I noticed that the Share Your Story link is broken.
 I don't know how long it's been broken, but this might be a reason
 there are no new ones.

hmm thanks for mentioning that - we will look into fixing that soon.


 
 What kind of success story would be accepted for this page?
 We're also running Postgres for most our projects at work, some of them
 being rather large databases.  Of course large is subjective... some
 people might call it kids' stuff.  Also, how well known does a company
 need to be in order for it to be on the list?

I don't think there are any formal requirements; in fact I think people
would welcome an interesting case study any day - so please write one :)


Stefan




Re: [GENERAL] Need some information about postgresql products and community

2012-08-28 Thread Stefan Kaltenbrunner
On 08/28/2012 04:47 PM, Merlin Moncure wrote:
 On Tue, Aug 28, 2012 at 9:29 AM, mithun
 mithun.chicklore.yogen...@huawei.com wrote:
 Dear Sir,

 We are gathering information regarding PostgreSQL open source product
 quality and its community. Can you kindly help me to find the following
 data?
 
 Your questions are incredibly broad, so you're going to get some broad 
 answers.

indeed - not sure what the purpose is but well...

 
 1.   Number of releases in the last 12 months, along with their
 version numbers.
 
 look here: http://archives.postgresql.org/pgsql-announce/
 
 2.   Number of bugs fixed in the last 12 months and the last 6 months.
 
 look here: http://archives.postgresql.org/pgsql-bugs/

i think that looking at the git shortlog of a stable branch (or the
release notes of a stable branch) actually paints a more realistic
picture of the bug-fixing activity:

http://git.postgresql.org/gitweb/?p=postgresql.git;a=shortlog;h=refs/heads/REL9_1_STABLE
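
From a local clone the same information is available on the command line -
a sketch (the branch name matches the URL above):

    git clone git://git.postgresql.org/git/postgresql.git
    cd postgresql
    git shortlog --since='12 months ago' origin/REL9_1_STABLE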



Stefan




Re: [GENERAL] NASA needs Postgres - Nagios help

2010-07-14 Thread Stefan Kaltenbrunner

On 07/13/2010 10:44 PM, Thom Brown wrote:

On 13 July 2010 21:25, Magnus Hagander mag...@hagander.net wrote:

On Tue, Jul 13, 2010 at 20:10, Thom Brown thombr...@gmail.com wrote:

On 13 July 2010 17:14, Duncavage, Daniel P. (JSC-OD211)
daniel.p.duncav...@nasa.gov  wrote:

We are implementing Nagios on Space Station and want to use PostgreSQL to
store the data on orbit and then replicate that db on the ground.  The
problem is, most people use MySQL with Nagios.  We need an addon to ingest
Nagios data into PostgreSQL.  It looks like the most reasonable
implementation is to update the NDOUtils addon to support PostgreSQL.  Does
anyone have such an addon, or want to write one?



I'm the NASA project manager for the set of computers on Space Station and
we plan to deploy this capability this year.  If we have to write our own
addon, we will, but I'd rather use something already out there.


This looks like it hasn't been worked on in a while, but is this any
use?: http://nagiosplugins.projects.postgresql.org/


Those are plugins to monitor postgresql using nagios. For that, you
should really be looking at check_postgres. I think what the OP is
looking for is a way to store Nagios metadata in postgres, which is
something else.



Ah yes, I see.  The documentation suggests PostgreSQL is supported in
version 1.0 under the Database Support section:
http://nagios.sourceforge.net/docs/1_0/xdata-db.html

Is that no longer the case then?  They actually *removed* support? :(


well - there was direct database support in nagios ages ago (nagios 1.x 
is ancient); it was replaced with a plugin-based approach built on their 
eventbroker architecture, called NDOutils. Based on tracking internal 
state it can be used to export current and historical monitoring data 
from nagios for later postprocessing (or for use in a GUI or whatever).
NDOutils however has no real working support for PostgreSQL; IDOutils 
(which I mentioned elsewhere in the thread) from the icinga fork does 
have basic support.



Stefan




Re: [GENERAL] NASA needs Postgres - Nagios help

2010-07-13 Thread Stefan Kaltenbrunner

Duncavage, Daniel P. (JSC-OD211) wrote:
We are implementing Nagios on Space Station and want to use PostgreSQL 
to store the data on orbit and then replicate that db on the ground.  
The problem is, most people use MySQL with Nagios.  We need an addon to 
ingest Nagios data into PostgreSQL.  It looks like the most reasonable 
implementation is to update the NDOUtils addon to support PostgreSQL.  
Does anyone have such an addon, or want to write one? 


Cool project :) I once did some work on adding proper PostgreSQL support 
to NDOutils but the problem is that the current code is really not too 
well structured for a real RDBMS (prepared statements, transactions, ...).
However the http://www.icinga.org/ fork of NDOutils (IDOutils) does have 
some basic PostgreSQL support - maybe that will get you started.


 

I'm the NASA project manager for the set of computers on Space Station 
and we plan to deploy this capability this year.  If we have to write our 
own addon, we will, but I'd rather use something already out there. 


Yeah, reusing code is always easier, and you also don't have to maintain 
it on your own :)



Stefan



Re: [GENERAL] An issue with max() and order by ... limit 1 in postgresql8.3-beta3

2010-01-09 Thread Stefan Kaltenbrunner

Andreas Kretschmer wrote:

zxo102 ouyang zxo...@gmail.com wrote:

Hi everyone, 
   I am using postgresql 8.3-beta3. I have a table 'test' with three fields:


I'm guessing you mean 8.4-beta3, right?


either of those is unsuitable for any kind of production use...



Stefan



Re: [GENERAL] schema proxying virtual database

2009-09-13 Thread Stefan Kaltenbrunner

Rajesh Kumar Mallah wrote:

Dear List,

 There are many opensource applications that support postgresql
 (eg gforge, bricolage, dspam, ...) but do not use schemas (namespaces),
 as a result of which you are forced to use/create a new database and
 lose the advantage of linking the application data with your existing
 database, as no cross-database queries can be done in PG.


 my question is: is it a feasible idea to have some special kind of database
 in the postgresql cluster that mimics a schema of an existing database?

 say rt3 is a special database that links to the existing maindb database's
 rt3 namespace; then any DDL/DML done to the public schema of the rt3
 database is redirected to the rt3 schema of the existing database maindb.


well in most cases it is enough to give those apps their own user and 
set the default search_path for that user to its own schema. That way 
you usually don't have to modify the app at all and still get it (more 
or less) running in its own schema.
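
For example, a minimal sketch with hypothetical names (rt3 as both the
schema and the app user):

    CREATE SCHEMA rt3;
    CREATE USER rt3 PASSWORD 'secret';
    GRANT ALL ON SCHEMA rt3 TO rt3;
    ALTER USER rt3 SET search_path = rt3;

everything the app then creates through that user ends up in the rt3
schema of maindb, without touching the app itself.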



Stefan



Re: [GENERAL] FYI: Load times for a largish DB in 8.2 vs. 8.3 vs. 8.4

2009-06-19 Thread Stefan Kaltenbrunner

Tom Lane wrote:

Todd A. Cook tc...@blackducksoftware.com writes:

First, the numbers:



   PG Version   Load time   pg_database_size   autovac
   ----------   ---------   ----------------   -------------
   8.2.13       179 min     92,807,992,820     on
   8.3.7        180 min     84,048,744,044     on (defaults)
   8.4b2        206 min     84,028,995,344     on (defaults)
   8.4b2        183 min     84,028,839,696     off



The bulk of the data is in 16 tables, each having about 55 million rows of
the form (int, int, smallint, smallint, int, int, int).  Each table has a
single partial index on one of the integer columns.


Given that it's multiple tables, it'd be possible for autovacuum to
kick in and ANALYZE the data inserted into earlier tables while the
later ones were still being loaded.  If so, the discrepancy might be
explained by 8.4's more-aggressive statistics target, which means that
a background ANALYZE will take about 10x more work than before.

If you have time to repeat the experiments, it would be interesting to
see what happens with consistent default_statistics_target across 8.3
and 8.4.


given that this was likely a single-threaded restore and therefore 
WAL-logged, I wonder if the 206min one might be affected by the issue 
discussed here:
http://archives.postgresql.org/pgsql-hackers/2009-06/msg01133.php



Stefan



Re: [GENERAL] Switching databases over JDBC/ODBC

2009-05-30 Thread Stefan Kaltenbrunner

Scott Bailey wrote:

John R Pierce wrote:

Scott Bailey wrote:
Well at work we've got Oracle, MySQL, MS SQL and Postgres. So I 
generally use Aqua Data Studio because it works with all of them. For 
MySQL and MS SQL you register a single connection to the server and 
can switch to any database. But with Postgres, you have to register a 
new connection for every database because you can't switch once 
connected.


I just figured that if even Microsoft can do it, surely we can.



 you could always use SCHEMA instead of DATABASE if that's what you want.

btw, in Oracle, you need a different connection for different database 
instances, too.


Well, that's true. The difference with Oracle is that you can only have 
a single database per server instance. (Which is really dumb IMO) But 
that being the case, schema are regularly used to partition the server 
 in Oracle. On DBMSs that support multiple databases you never see 
SCHEMA being used this way.


Well the point here is that at least in MySQL (maybe also in MSSQL, no 
idea about that one) a database is really much more like a schema in 
PostgreSQL. In the former you basically have

instance - databases - objects (tables, whatever); in the latter you have
cluster - databases - schemas - objects.

In general the need to switch between several databases in the same app 
is sometimes a sign that one should have used schemas instead during the 
design phase.



Stefan



Re: [GENERAL] Server Performance

2009-04-01 Thread Stefan Kaltenbrunner

chris.el...@shropshire.gov.uk wrote:


Stefan Kaltenbrunner ste...@kaltenbrunner.cc wrote on 01/04/2009 06:53:07:

  chris.el...@shropshire.gov.uk wrote:
  
   Scott Marlowe scott.marl...@gmail.com wrote on 31/03/2009 15:53:34:
  
 On Tue, Mar 31, 2009 at 8:21 AM, 
 chris.el...@shropshire.gov.uk wrote:

 
  Scott Marlowe scott.marl...@gmail.com wrote on 31/03/2009 
15:16:01:

 
  I'd call IBM and ask them to come pick up their boat anchors.
 
  My sentiments exactly, unfortunately, I seem stuck with them :(

 Can you at least source your own RAID controllers?
  
   Yes I will be, I never really did trust IBM and I certainly don't now!
  
   I just need to choose the correct RAID card now, good performance at the
   right price.
 
  you are jumping to conclusions too quickly - while the 8k is not the
  world's fastest raid card available it is really not (that) bad at all.
  we have plenty of x3650 in production and last time I tested I was
  easily able to get 2000tps even on an untuned postgresql install and
  with fewer disks.

Could you provide any more information about your configurations if 
possible, please?


x3650, dual quadcore Xeon 5430, ServeRAID 8k with 256MB BBWC and likely 
RAID6 during that testing. OS was/is debian etch/amd64. Don't have the 
exact (pgbench) test parameters handy anymore though...




 
  So I really think you are looking at another problem here (be it
  defective hardware or a driver/OS level issue).

Hardware is always a possibility; finally managed to get hold of IBM too.
I have tried two different Linux distros with different kernels; my 
current Mandriva test uses a fairly up-to-date kernel.

I may try a custom kernel.


also test with different IO schedulers (especially deadline and noop).
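
For example, on Linux something like the following (device name is
illustrative; this sketch was not part of the original mail):

    cat /sys/block/sda/queue/scheduler               # current one in brackets
    echo deadline > /sys/block/sda/queue/scheduler   # switch at runtime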


Stefan



Re: [GENERAL] Server Performance

2009-03-31 Thread Stefan Kaltenbrunner

chris.el...@shropshire.gov.uk wrote:


Scott Marlowe scott.marl...@gmail.com wrote on 31/03/2009 15:53:34:

  On Tue, Mar 31, 2009 at 8:21 AM,  chris.el...@shropshire.gov.uk wrote:
  
   Scott Marlowe scott.marl...@gmail.com wrote on 31/03/2009 15:16:01:
  
   I'd call IBM and ask them to come pick up their boat anchors.
  
   My sentiments exactly, unfortunately, I seem stuck with them :(
 
  Can you at least source your own RAID controllers?

Yes I will be, I never really did trust IBM and I certainly don't now!

I just need to choose the correct RAID card now, good performance at the 
right price.


you are jumping to conclusions too quickly - while the 8k is not the 
world's fastest raid card available it is really not (that) bad at all. 
we have plenty of x3650 in production and last time I tested I was 
easily able to get 2000tps even on an untuned postgresql install and 
with fewer disks.


So I really think you are looking at another problem here (be it 
defective hardware or a driver/OS level issue).


is your SLES10 install updated to the latest patch levels available and 
are you running the recommended driver version for that version of SLES?




Stefan



Re: [GENERAL] problem with locale :

2009-03-28 Thread Stefan Kaltenbrunner

Eugenio Modesti wrote:

Hello,
I received a copy of a postgresql data directory and I need to read it 
(and do a regular dump).

the datadir version is 8.1 so I installed that version of the server.
When trying to start the server I got this:
Error: The server must be started under the locale : which does not 
exist anymore


how can I know which locale that datadir is based on?
what is : as a locale?

the server is running under ubuntu server.


are you sure that the data directory you got was not created on a 32bit 
box and you are running a 64bit OS (or the other way round)?



Stefan



Re: [GENERAL] Maximum transaction rate

2009-03-16 Thread Stefan Kaltenbrunner

Tom Lane wrote:

Jack Orenstein jack.orenst...@hds.com writes:

The transaction rates I'm getting seem way too high: 2800-2900 with
one thread, 5000-7000 with ten threads. I'm guessing that writes
aren't really reaching the disk. Can someone suggest how to figure out
where, below postgres, someone is lying about writes reaching the
disk?


AFAIK there are two trouble sources in recent Linux machines: LVM and
the disk drive itself.  LVM is apparently broken by design --- it simply
fails to pass fsync requests.  If you're using it you have to stop.
(Which sucks, because it's exactly the kind of thing DBAs tend to want.)
Otherwise you need to reconfigure your drive to not cache writes.
I forget the incantation for that but it's in the PG list archives.


hmm are you sure this is what is happening?
In my understanding LVM is not passing down barriers (generally - it 
seems to in some limited circumstances), which means it is not safe on 
any storage drive that has its write cache enabled. This seems to be the 
very same issue linux had for ages before ext3 got barrier support (not 
sure if even today all filesystems have that).
So in my understanding LVM is safe on disks that have the write cache 
disabled or behave as one (like a controller with a battery backed cache).
For storage with write caches it seems to be unsafe, even if the 
filesystem supports barriers and has them enabled (which I don't think 
all have) - which is basically where all of linux was not too long ago.
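
As for the incantation to turn off the drive write cache on Linux, the
usual candidates are something like this (a sketch; device names
illustrative):

    hdparm -W 0 /dev/sda                          # ATA/SATA: disable write cache
    hdparm -I /dev/sda | grep -i 'write cache'    # verify the setting
    sdparm --clear WCE /dev/sda                   # SCSI equivalent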



Stefan



Re: [GENERAL] deployment query

2009-03-16 Thread Stefan Kaltenbrunner

Nagalingam, Karthikeyan wrote:

Hi,
 we are in the process of finding the best solution for Postgresql 
 deployment with a storage controller. I have some queries; please give 
 some suggestions for the below
 
1) Can we get customer deployment scenarios for postgresql with storage 
controller. Any flow diagram, operation diagram and implementation 
diagram are welcome.


well deployment is the same as for deploying it to plain old direct 
attached storage - so all the docs available on www.postgresql.org are 
more or less valid for this.




2) Which protocol is mostly used in production. [NFS,ISCSi,FCP,etc...]


all of those are used - however NFS is quite often discouraged due to 
various reliability issues (mostly on the client side) and operational 
complexity that have caused issues in the past. iSCSI and Fibre Channel 
deployments (both on netapp-based storage and others) have worked very 
well for me.





3) What kind of application Mostly used with Postgresql.


that is an extremely broad question - if in doubt, it is always the 
application the customer uses.




4) What is the business and technical issues for Postgresql with storage 
controller at present stage.


not sure what a business issue would be here - but as for technical 
issues, postgresql is comparable to other (commercial) databases in its 
demands. I personally found that general tuning guidelines for storage 
arrays written for oracle are pretty well suitable (within limits, 
obviously) for postgresql too.




5) In which area Postgresql most wanted.


it's the customer that counts :)



6) What kind of DR solution customer using for Postgresql with storage 
controller.


not sure what the question here is - maybe you can explain that in more 
detail?



Stefan



Re: [GENERAL] Maximum transaction rate

2009-03-16 Thread Stefan Kaltenbrunner

Scott Marlowe wrote:

On Mon, Mar 16, 2009 at 2:03 PM, Stefan Kaltenbrunner
ste...@kaltenbrunner.cc wrote:

So in my understanding LVM is safe on disks that have write cache disabled
or behave as one (like a controller with a battery backed cache).
For storage with write caches it seems to be unsafe, even if the filesystem
supports barriers and it has them enabled (which I don't think all have)
which is basically what all of linux was not too long ago.


I definitely didn't have this problem with SCSI drives directly
attached to a machine under pgsql on ext2 back in the day (way back,
like 5 to 10 years ago).  IDE / PATA drives, on the other hand,
definitely suffered with having write caches enabled.


I guess that's likely because most SCSI drives (at least back in the 
day) had write caches turned off by default (whereas IDE drives had 
them turned on).
The Linux kernel docs actually have some stuff on the barrier 
implementation (
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob_plain;f=Documentation/block/barrier.txt;hb=HEAD) 
which seems to explain some of the issues related to that.



Stefan



Re: [GENERAL] Postgres Cookbook

2009-03-07 Thread Stefan Kaltenbrunner

Raymond O'Donnell wrote:

On 07/03/2009 05:50, Artacus wrote:

Stefan Kaltenbrunner wrote:

Tino Wildenhain wrote:
what about a cookbook section on the main wiki?

That would be great. Who do we need to talk to for that?


AFAIK once you register a community account you can just go ahead
and do it - it's a wiki, after all. :-)

That's my understanding, anyway - haven't actually tried it.


Yes that is how it works :-)


Stefan



Re: [GENERAL] Postgres Cookbook

2009-03-05 Thread Stefan Kaltenbrunner

Tino Wildenhain wrote:

Greg Smith wrote:

On Wed, 4 Mar 2009, Artacus wrote:

So it looks like at one time we had a cookbook. But the links are 
dead now.


I'm not sure why Roberto Mello stopped hosting that, but you can see 
the last content posted there at 
http://web.archive.org/web/20031207045017/http://www.brasileiro.net/postgres/cookbook/ 



Even though that is mainly aimed at older versions, there are a lot of 
neat PL/PGSQL examples there that you might wrangle into working 
against a current one.


I just found that I registered a matching named domain at some time...
if there is content to host, I could probably jump in.


what about a cookbook section on the main wiki?


Stefan



Re: [GENERAL] RCA for MemoryContextAlloc: invalid request size(Known Issue)

2009-01-08 Thread Stefan Kaltenbrunner

Yogvinder Singh wrote:

What I am looking for is the version in which this problem has been
resolved.

I can't find it anywhere in the documentation.


well there was by far not enough information in your original mail (not 
even the exact version you are on) to even verify that this is a data 
corruption issue.


it could simply be a query (or the OS) running out of memory.
As for what changed in the years since 7.3.0 was released, simply read up on:

http://www.postgresql.org/docs/current/static/release.html


Stefan



Re: [GENERAL] Postgres mail list traffic over time

2008-11-22 Thread Stefan Kaltenbrunner

Alvaro Herrera wrote:

Sam Mason wrote:


the following has links to more:

  http://markmail.org/search/?q=list:org.postgresql


Wow, the spanish list is the 3rd in traffic after hackers and general!


yeah, and that tom lane guy sent over 77000 (!!!) mails to the lists up 
to now ...



Stefan



Re: [GENERAL] DUPS in tables columns ERROR: column . . . does not exist

2008-08-30 Thread Stefan Kaltenbrunner

Albretch Mueller wrote:

 Hi,
~
 I am trying to get dups from some data from files which md5sums I
previously calculated
~
 Here is my mere mortal SQL
~
SELECT md5, COUNT(md5) AS md5cnt
FROM jdk1_6_0_07_txtfls_md5
WHERE (md5cnt > 1)
GROUP BY md5
ORDER BY md5cnt DESC;


I think you are looking for HAVING as in:

SELECT md5, COUNT(md5)
FROM jdk1_6_0_07_txtfls_md5
GROUP BY md5
HAVING count(md5) > 1
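
(The original query fails because a SELECT-list alias like md5cnt is not
visible in the WHERE clause, and conditions on aggregate results belong
in HAVING anyway, since WHERE is evaluated before grouping.)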


Stefan



Re: [GENERAL] [Q] DNS(bind) ER model

2008-08-23 Thread Stefan Kaltenbrunner

Andrew Sullivan wrote:

On Fri, Aug 15, 2008 at 09:54:26PM +0200, Tino Wildenhain wrote:

looks like you want to write your own powerdns ? :-)
http://www.powerdns.com/


Oh, right, I forgot they use a SQL back end.  They do EDNS0, too :)

(Note, however, that if you plan to deploy DNSSEC you're out of luck
with them.  Bert is hostile to it.)


recent (upcoming 2.9.22) powerdns code actually does have dnssec support 
(a bit limited though).
On the other hand, getting your typical database-backed DNS-management 
GUI to grok DNSSEC is probably turning out to be a challenge.



Stefan



Re: [GENERAL] Initdb problem on debian mips cobalt: Bus error

2008-08-06 Thread Stefan Kaltenbrunner

Glyn Astill wrote:

http://privatepaste.com/cbY2S4JhtA

Very little difference with the -O0


FWIW: there also seems to be a fairly in-depth discussion on the 
cobalt-related netbsd list from last year about a problem that looks very 
similar (at least to your issue with etch):



http://www.nabble.com/Strange-segmentation-fault-trying-to-run-postgresql-on-current-to9997129.html


Stefan



Re: [GENERAL] Initdb problem on debian mips cobalt: Bus error

2008-07-19 Thread Stefan Kaltenbrunner

Tom Lane wrote:

Glyn Astill [EMAIL PROTECTED] writes:

No. Will recompile with debug info and post back when done.


FWIW, the most likely issue here is the MIPS-specific assembly code in
src/include/storage/s_lock.h --- I'm not sure how many MIPS platforms
that's really been exercised on, but it may not work on yours.  While
you're waiting for the rebuild you might try to find a MIPS guru to
show that code to.


hmm well - lionfish (which is now offline due to a broken power supply) 
is actually a cobalt cube too (and is running debian). So if we really 
managed to break mipsel it must have happened in the last few months:



http://www.pgbuildfarm.org/cgi-bin/show_history.pl?nm=lionfish&br=HEAD


Stefan



Re: [GENERAL] Mediawiki 1.10 and PG 8.3 upgrade

2008-05-30 Thread Stefan Kaltenbrunner

Jon Lapham wrote:

Hello all,

I'm sure I did something wrong during my upgrade of mediawiki from 
Postgresql 8.2 to 8.3.  Any help on how to recover would be appreciated.


[...]

I would like to say thanks to the people that put in so much time and 
effort in developing tsearch2 (and postgresql too), your work is greatly 
 appreciated.  However, I have never had a recent postgresql upgrade work 
 seamlessly, and it is almost always due to issues with tsearch2.  :(


yeah - upgrades with tsearch2 have always been a bit of a problem - this 
is one of many reasons why tsearch2 got fully integrated into the 
backend starting with 8.3.




While searching the internet for magic recipes on how to perform these 
upgrades that involve tsearch2, I find long complex instruction sets 
detailing brain-numbingly complex operations (hand edit the schema!):


http://julien.danjou.info/blog/index.php/post/2008/05/19/Upgrading-mediawiki-from-PostgreSQL-82-to-83 



Also, I have found this blog entry from Greg Mullane which makes it all 
seem so simple: (Note Tom Lane's comment, which seems to directly relate 
to the error message I am seeing).


http://people.planetpostgresql.org/greg/index.php?/archives/123-MediaWiki-is-Postgres-8.3-compatible.html 



 These are just 2 examples; there are others.  The one common thread 
 shared by all the instructions is that they say different things.  How 
are we, Joe PostgreSQL users on the street, supposed to know which 
instructions to follow?  Have I (very possible) missed some official 
PostgreSQL instructions?  How do we go about triaging tsearch2 problems?


http://www.postgresql.org/docs/8.3/static/textsearch-migration.html has 
some discussion - and also see the backwards-compatibility module 
provided in contrib/tsearch2.



Stefan



Re: [GENERAL] page is uninitialized --- fixing

2008-03-30 Thread Stefan Kaltenbrunner

Tom Lane wrote:


This is not entirely out of the question, because of the designed-in
property that a freshly initialized page is only inserted into by
the backend that got it --- no one else will know there is any
free space in it until VACUUM first passes over it.  So if there
are a lot of different sessions writing into this table you don't
need to assume more than about one tuple per page.  Still, it's
kinda hard to believe that the first two backends could remain stuck
for so long as to let ~800 other insertions happen.


depending on how the multipathing and recovery works on that particular 
SAN/OS combination it might very well be that some processes are getting 
their IO held up much longer than some other processes.
Maybe the first two backends had IO in flight and the OS needed time to 
requeue/resend those after the SAN recovered, while new backends were 
able to do IO immediately?



Stefan



Re: [GENERAL] Getting all tables into memory

2008-01-24 Thread Stefan Kaltenbrunner

Robert Fitzpatrick wrote:

I have a couple of servers running Postfix with amavisd-maia+SA+clamav
on FreeBSD 6.2 which use a central db server running PgSQL 8.2.4. My
issue is the bayes database causing SA TIMED OUT in the logs and want to
make sure I am getting everything into memory. The disk activity is high
on the db server, this is the average systat status...


The SA SQL-bayes implementation is one prime example of a real-life 
application that can benefit from HOT because it is basically updating a 
fairly limited set of non-indexed columns at an insane rate.
I have seen real live installations that could barely keep up with bloat 
even on a tight 3min vacuum cycle, and the 8.3B4 test instance I have 
here can take at least 4 times the load that 8.1 could under that kind 
of workload.



Stefan



Re: [GENERAL] Error while compiling PostgreSQL with Java

2007-11-25 Thread Stefan Kaltenbrunner
Никоноров Григорий wrote:
 Hi Kris
 
 Thanks for your answer. When I download this JDBC4 PostgreSQL driver,
 version 8.2-506, what should I do next to configure PostgreSQL
 with it?

there is nothing you need on the backend side (except for maybe setting
up authentication as with every other client).
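
For example, a TCP client just needs a matching pg_hba.conf entry and the
usual JDBC URL - a sketch with illustrative names and addresses:

    # pg_hba.conf: allow the JDBC client's subnet with md5 passwords
    host    mydb    myuser    192.168.1.0/24    md5

    jdbc:postgresql://dbhost:5432/mydb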


Stefan



Re: [GENERAL] Read-only availability of a standby server?

2007-11-21 Thread Stefan Kaltenbrunner
Garber, Mikhail wrote:
 In the high-availability situation with a warm standby, is it possible (or 
 planned) to be able to make the standby readable?
 This is a new feature in Oracle 11 and it is very important for a project I 
 am working on.

yeah there are plans to support this in 8.4 (and some basic groundwork
already happened in 8.3) ...


Stefan



Re: [GENERAL] Postgresql performance in production environment

2007-08-19 Thread Stefan Kaltenbrunner
Phoenix Kiula wrote:
 On 19/08/07, Phoenix Kiula [EMAIL PROTECTED] wrote:

[... ]

 Well based on some past posts, I looked into my pg_log stuff and found
 a number of these lines:
 
 
 [
 LOG:  could not fork new process for connection: Resource temporarily
 unavailable
 LOG:  could not fork new process for connection: Resource temporarily
 unavailable
 LOG:  could not fork new process for connection: Resource temporarily
 unavailable
 LOG:  could not fork new process for connection: Resource temporarily
 unavailable
 LOG:  could not fork new process for connection: Resource temporarily
 unavailable
 LOG:  could not fork new process for connection: Resource temporarily
 unavailable
 ]
 
 
 Which suggests that our guess of running out of connections is the right one.
 
 So, we have three options (to begin with) --
 
 1. Increase the number of max_connections. This seems to be a voodoo
 art and a complex calculation of database size (which in our case is
 difficult to predict; it grows very fast), hardware, and such. I
 cannot risk other apps running on this same machine.

this error is a sign that the OS(!) is running out of resources (or at
least won't allow pg to fork another process) - either you hit an ulimit
for the user postgresql runs under or you need to flip some kernel
setting to increase the number of processes. increasing max_connections
will NOT help because you are not even hitting the current limit yet ...
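
A quick way to check those OS-side limits (a sketch; run as the user
postgresql runs under, paths are Linux-specific):

    ulimit -u                        # max user processes for this user
    cat /proc/sys/kernel/pid_max     # system-wide pid limit on Linux
    ps -u postgres | wc -l           # processes postgres already has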

 
 2. Use connection pooling. I've found pgpool2 and pgbouncer from the
 Skype group. Does anyone have experience using either? The latter
 looks good, although we're usually skeptical about connection pooling
 in general (or is that just the mysqli_pconnect() hangover?)

pgbouncer works quite fine here.
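
For reference, a minimal pgbouncer.ini along those lines (all names,
paths and sizes illustrative):

    [databases]
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = session
    max_client_conn = 500
    default_pool_size = 40

the app then connects to port 6432 instead of 5432 and pgbouncer
multiplexes those clients onto at most 40 real backend connections.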

Stefan



Re: [GENERAL] [pgsql-advocacy] European users mailing list

2007-07-30 Thread Stefan Kaltenbrunner
Robert Treat wrote:
 On Monday 30 July 2007 03:47, Dave Page wrote:
 Joshua D. Drake wrote:
 Peter Eisentraut wrote:
 Dave Page wrote:
 As discussed at the first meeting of what will become the European
 PostgreSQL Users Group after pgDay in Prato, we now have a mailing
 list setup at [EMAIL PROTECTED]

 This is for the European users group, so is not really intended as a
 technical list but as a place to discuss events, advocacy and other
 topics relevant to our work in Europe.
 If it's not a general list, why did you name it general?  That will
 certainly cause confusion.  It sounds like -eu-advocacy might have
 been better.
 +1 -eu-advocacy seems to make more sense.
 With all due respect JD, you were not at the meeting at which this was
 discussed and are presumably unaware that we discussed more than just
 'advocacy'. I expect there to be 3 or 4 -eu lists eventually, one of
 which may well be -advocacy.

 
 *shrug*  I wasn't there either, but it seems all the emails I have seen 
 referred to it as the European PostgreSQL Users Group, so I expected it to 
 look more like other users groups, ie. [EMAIL PROTECTED]

I was at that meeting and for the record: there is no european
postgresql user group (yet) - so a general list seems vastly appropriate
for discussing general european stuff.
If we ever get a european user group we can simply add a list for that
but what we now need is a list to discuss general topics and only time
will tell if we ever need eupug@, pgsql-eu-advocacy@ or nothing at all.

 
 But for now, I was asked to arrange a general mailing list, which I have
 done.

 
 Except we already had a general european mailing list, so I'm really not 
 clear on what the above is meant to accomplish.

that list was not running on official infrastructure and only meant as a
short term solution until there was enough momentum and interest to put
it on official infrastructure.


Stefan



Re: [GENERAL] createing indexes on large tables and int8

2007-07-16 Thread Stefan Kaltenbrunner
Tom Lane wrote:
 Janning Vygen [EMAIL PROTECTED] writes:
 After this i create the index and it took 10 hours just for one index 
 (primary 
 key). I have 100.000.000 rows with one PK (int8), two integer data values, 
 and two FK (int8)
 
 What PG version is this?  We did a fair amount of work on sort speed
 for 8.2.

yeah - back when i tested that during the 8.2 development cycle I got a
5-6x speedup with the external sort improvements.
ie sorting 1.8B rows (integer) went down from over 12h to about 2h10min
- but 10h sounds like a lot for only 100M rows - I wonder what kind of
hardware that is and how much concurrent activity is going on ...


Stefan



Re: [GENERAL] Limit number connections by IP

2007-07-13 Thread Stefan Kaltenbrunner
tabai wrote:
 Hi
 
 I know that I can limit the total number of connections in
 postgresql.conf with max_connections, but... can I limit the max
 number of connections from a specific IP?
 
 For example I would like a total max of 40 connections, but the same IP
 can't have more than 5 connections open.
 
 Is it possible?

no - you can limit the maximum number of connections on a per-database
and also a per-role basis. If you really need a per-source-address
limitation, look into using whatever firewall solution is available on
your OS.


Stefan



Re: [GENERAL] Limit connections per username

2007-07-13 Thread Stefan Kaltenbrunner
Nik wrote:
 PostgreSQL 8.2 on Windows 2003 Server.
 
 Is it possible to limit the number of connections per username?

yes - look for CONNECTION LIMIT on:

http://www.postgresql.org/docs/8.2/static/sql-createrole.html
and
http://www.postgresql.org/docs/8.2/static/sql-alterrole.html
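
For example (role and database names illustrative):

    CREATE ROLE webuser LOGIN CONNECTION LIMIT 5;
    ALTER ROLE webuser CONNECTION LIMIT 10;      -- change it later
    ALTER DATABASE mydb CONNECTION LIMIT 40;     -- per-database variant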


Stefan



Re: [GENERAL] pointer to feature comparisons, please

2007-06-13 Thread Stefan Kaltenbrunner
Kevin Hunter wrote:

[...]
 I originally had him code his project for Postgres, but for reasons
 beyond our control we've had to move to Oracle.  In designing the schema
 we have need of a constraint that checks values in other tables.  The
 way that I currently know how to do this in Postgres is with PLpgSQL
 functions.  Then I add something like
 
 CONSTRAINT away_team_is_playing CHECK ( NOT teamIsPlaying( awayteamid,
 timeid ) )
 
 to the table schema.  No big deal, except that it seems Oracle can't use
 anything other than a simple column constraint.  He can't use any custom
 functions like he could in Postgres, and we've yet to find a solution to
 do what he needs.

well doing it that way is usually not a good idea at all (you cannot
actually use arbitrary queries in a CHECK constraint in pg either -
using a function to hide that is cheating the database - oracle might
actually be more (!) clever here, not less ...). this is why you can get
into all kinds of weird situations with losing the integrity of your data
or running into serious issues during dump/restore for example.

What you need to do here is to use a trigger.
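
A minimal sketch of that, modeled on the constraint quoted above
(hypothetical games table and columns):

    CREATE FUNCTION check_team_not_playing() RETURNS trigger AS $$
    BEGIN
        IF EXISTS (SELECT 1 FROM games g
                   WHERE g.timeid = NEW.timeid
                     AND (g.hometeamid = NEW.awayteamid
                          OR g.awayteamid = NEW.awayteamid)) THEN
            RAISE EXCEPTION 'team % is already playing at time %',
                            NEW.awayteamid, NEW.timeid;
        END IF;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER away_team_is_playing
        BEFORE INSERT ON games
        FOR EACH ROW EXECUTE PROCEDURE check_team_not_playing();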


Stefan



Re: [GENERAL] pointer to feature comparisons, please

2007-06-13 Thread Stefan Kaltenbrunner
Kevin Hunter wrote:
 At 3:26p -0400 on 13 Jun 2007, Stefan Kaltenbrunner wrote:
 The way that I currently know how to do this in Postgres is with
 PLpgSQL functions.  Then I add something like

 CONSTRAINT away_team_is_playing CHECK ( NOT teamIsPlaying(
 awayteamid, timeid ) )

 to the table schema.

 well doing it that way is usually not a good idea at all (you cannot
 actually use arbitrary queries in a CHECK constraint in pg either -
 using a function to hide that is cheating the database - oracle might
 actually be more(!) clever here not less ...). this why you can get
 into all kind of weird situations with losing the integrity of your
 data or running into serious issues during dump/restore for example.
 
 I was /hoping/ for a response like this!  Thanks!  Okay.  I'll bite. 
 Why can't they be used in general?  Is it the same problem that the
 trigger has (below)?

ok, consider the following scenario:

you have table A with a check constraint saying that a given column
depends on the result of some arbitrary queries on table B (maybe
something as simple as this flag can only be set to true if there are
at least 5 items of this kind in table B).
So after a while you have added some data to both tables and A has set
that column to true for a few rows. Now you manipulate B in a way that
would cause one constraint to fail in A IF and only IF it gets
reevaluated.
One day later your server decides to commit suicide - so you get the
spare one, dig out the backup from the day before and try to restore
that dump.
First you will get the schema restored from the backup - next all the
data will be put into the tables, and then? Boom.
The third major step in restoring the database is readding all
constraints - when the CHECK constraint gets readded it will get
evaluated by the database fully (ie row-by-row) - and because your data
is inconsistent you will get a constraint violation on the now
inconsistent data ...
The trigger would avoid that restore problem because it would not be
invoked in a similar fashion (ie triggers fire on UPDATE/DELETE/INSERT,
which are already done at that point of the restore).
That is just one of the most obvious failures - trying to implement that
kind of arbitrarily complex, query-based, foreign-key-like constraint is
always very difficult.

 
 What you need to do here is to use a trigger.
 
 From online docs regarding Oracle, this is not 100% safe either:
 
 (http://download-east.oracle.com/docs/cd/B14117_01/appdev.101/b10795/adfns_co.htm)
 
 'To enforce this rule without integrity constraints, you can use a
 trigger to query the department table and test that each new employee's
 department is valid. But this method is less reliable than the integrity
 constraint. SELECT in Oracle Database uses consistent read, so the
 query might miss uncommitted changes from other transactions.'
 
 It seems to me that there are certain situations where, especially in a
 highly normalized data model, that you'd /have/ to have multiple checks
 of even other tables.  What theory am I missing if this is not the case?

well - the main point why the trigger is better is that it will cause
you fewer operational issues; what it cannot guarantee either is that
you cannot manipulate table B in a way that would violate the checking
logic of your constraint in table A (well, you could use a trigger on B
to validate that in the other direction - but that opens up another big
can of worms).
The best way is to design your schema so that you can use real foreign
key constraints to enforce various things, or CHECK constraints that
only deal with data in the very same row (ie they neither do queries on
other tables nor depend on OTHER rows than the one that is manipulated
in the same table).
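
For example (hypothetical tables, continuing the scheduling schema from
upthread) - constraints like these restore cleanly:

    CREATE TABLE games (
        hometeamid int NOT NULL REFERENCES teams(teamid),
        awayteamid int NOT NULL REFERENCES teams(teamid),
        timeid     int NOT NULL REFERENCES times(timeid),
        CHECK (hometeamid <> awayteamid)    -- depends only on this row
    );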



Stefan



Re: [GENERAL] why postgresql over other RDBMS

2007-05-26 Thread Stefan Kaltenbrunner
Tom Lane wrote:
 Alvaro Herrera [EMAIL PROTECTED] writes:
 Erik Jones wrote:
 And, to finish up, is there any reason that pg_restore couldn't  
 already work with separate processes working in parallel?
 
 The problem is that the ordering of objects in the dump is the only
 thing that makes the dump consistent with regards to the dependencies of
 objects.  So pg_restore cannot make any assumptions of parallelisability
 of the restoration process of objects in the dump.
 
 That's true at the level of DDL operations, but AFAIK we could
 parallelize table-loading and index-creation steps pretty effectively
 --- and that's where all the time goes.

yes, loading the data and creating the indexes is the most time-consuming
part of a large dump and reload cycle.

 
 A more interesting question is what sort of hardware you need for that
 actually to be a win, though.  Loading a few tables in parallel sounds
 like an ideal recipe for oversaturating your disk bandwidth...

you don't actually need that much disk bandwidth: both COPY and CREATE
INDEX are CPU-bottlenecked on modern boxes and reasonable disk
subsystems - spreading their work over multiple cores/processes can give
big benefits.
For example I have managed to load ~2B rows (5 integer columns - no
indexes) at a rate of about 32 rows/s on a modern (but already 1.5
years old) 4-core 2.6GHz Opteron box (with 12 disks and BBWC iirc).
Using 4 concurrent processes to load the data resulted in about 93
rows/s loaded (hitting the disk-io limit at that rate).
So having the ability to parallelize those operations at both the dump
and the restore level would be a huge win.
A manual experiment I did a while back with doing that by hand (ie
splitting the dump manually and feeding it in parallel with a
concurrency of 2) on a copy of a production database brought down the
restore time from 3h+ to a bit less than 2 hours.
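
Roughly, that manual experiment looked like this (a sketch with
illustrative file and table names, not the actual commands used):

    # split the COPY data for one big table into chunks ...
    split -l 5000000 bigtable.copydata chunk_
    # ... and feed the chunks to the server concurrently
    for f in chunk_*; do
        psql -d mydb -c "\copy bigtable from '$f'" &
    done
    wait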


Stefan



Re: [GENERAL] why postgresql over other RDBMS

2007-05-26 Thread Stefan Kaltenbrunner
Tom Lane wrote:
 Stefan Kaltenbrunner [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 A more interesting question is what sort of hardware you need for that
 actually to be a win, though.  Loading a few tables in parallel sounds
 like an ideal recipe for oversaturating your disk bandwidth...
 
 you don't actually need that much of disk bandwidth both COPY and CREATE
 INDEX are CPU bottlenecked on modern boxes and reasonable disk
 subsystems - spreading their work over multiple cores/processes can give
 big benefits.
 
 Hmm ... I wonder if that's true for COPY BINARY ...

not sure on that - I was simply trying to say that even a simple
parallel dump & restore capability could result in a serious improvement
for people running large databases (especially considering that one can
now buy 1U boxes with 8+ cores or 2U boxes that can hold 14+ disks)  ;-)


Stefan



Re: [GENERAL] Technical Documentation and Community Login

2007-04-20 Thread Stefan Kaltenbrunner
Magnus Hagander wrote:
 Guy Rouillier wrote:
 George Weaver wrote:
 Original Message From Guy Rouillier

 http://www.postgresql.org/docs/techdocs.71.  Color me dumb, but I
 can't figure a way to download the document Apache 2.0, Tomcat 5.5,
 WARs  PostgreSQL 8.1 JDBC DataSources on Windows XP.  When I click
 on the link to the document, it just brings me back to the same page.
 If you back up the url chain to  Community Generated Articles, Guides,
 and Documentation you will be taken to the following page:

 http://www.postgresql.org/docs/techdocs.2

 If you follow the Apache 2.0, Tomcat 5.5, WARs & PostgreSQL 8.1 JDBC
 DataSources on Windows link at the bottom under Web Development you
 should be able to access the articles you're interested in.
 Thanks, George, that does indeed work.  I got to the original article I
 quoted by entering tomcat jdbc in the search function.  That article
 is top on the list.  Does this indicate that the search index is outdated?
 
 Not really. The page on the 71 link is there, and it's empty. The search
 engine has indexed the title of it, and the summary. I have no idea
 where the content really is supposed to be - Rob, do you by any chance know?
 
 
 I also still haven't received an email in response to my userid submittal.
 
 Strange, but you could've been caught up in the server change we did a
 couple of days ago.  Please try the lost password feature at
 http://www.postgresql.org/community/lostpwd to get a new copy of it.

and if that does not work either, please tell us what email address you
used to sign up (and an exact date if possible) - the one you are using
here is not in the logs for the last few days.

Stefan



Re: [GENERAL] unexpected data beyond EOF and character encoding

2007-03-30 Thread Stefan Kaltenbrunner
Jaime Silvela wrote:
 I completed another migration from 8.1.3 to 8.2.3, and again the problem
 with unexpected data beyond EOF, exactly twice like before, but in two
 tables different from the last time.
 The kernel bug hypothesis seems a strong one. I told Unix Ops about the
 possible bug, and one of the guys said 2.6.5-7.244 was well known to be
 trouble on our hardware. We should be upgrading soon. I'll try to get
 more info regarding the particular Linux bug.

2.6.5-7.244 looks like an older SLES kernel, which makes this bug
suspiciously look like one we got reported a while ago:

http://archives.postgresql.org/pgsql-admin/2006-09/msg00092.php

and Tom's analysis:

http://archives.postgresql.org/pgsql-hackers/2006-09/msg01899.php


Stefan



Re: [GENERAL] Configure can't find com_err on OpenBSD for --with-krb5

2007-03-04 Thread Stefan Kaltenbrunner
Jim Rosenberg wrote:
 --On Saturday, March 3, 2007 11:24 PM -0500 Tom Lane [EMAIL PROTECTED]
 wrote:
 
 I am trying to build PostgreSQL 8.2.3 on OpenBSD 4.0. When I give the
 command

 ./configure -v --with-openssl --with-krb5
 --with-includes=/usr/include/kerberosV

 I get:

 checking for library containing com_err... no
 configure: error: could not find function 'com_err' required for
 Kerberos 5

 Perhaps you need a --with-libs switch too?  Usually, if the package
 didn't put its include files right in /usr/include, it likely didn't
 put the libraries right in /usr/lib either.
 
 Hmm. libkrb5 is in /usr/lib -- with all the rest of the libs -- not
 somewhere special.
 
 I also have /usr/lib/libcom_err.a too, and it seems to have com_err.o
 also. Putting in an explicit
 
 ./configure --with-openssl --with-krb5
 --with-includes=/usr/include/kerberosV
 --with-libs=/usr/lib


FWIW - we have a buildfarm box running OpenBSD 4.0/AMD64 - and it is
using the following configuration:

http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=zebra&dt=2007-03-02%2013:18:04

I vaguely remember I had some kerberos-related issues too when I set up
that box but the above configuration is working :-)

Stefan



Re: [GENERAL] real multi-master replication?

2007-03-04 Thread Stefan Kaltenbrunner
Bill Moran wrote:
 hubert depesz lubaczewski [EMAIL PROTECTED] wrote:
 hi,
 i read about some replication systems for postgresql, but - as far as i
 know there is no real multi-master replication system for
 postgresql.
 all i have seen are based on query replication with various hacks
 for specific constructions (like now()).
 my question is - is there any (even fully commercial) multi-master
 replication system for postgresql that will work with all possible
 constructs, triggers, random data and so on?
 i mean - i don't want to bother with choosing to 'note' somehow that
 'this particular query' has to be replicated somehow.
 i'm thinking about a working solution that will allow multi-master
 connections.

 anything? anywhere?
 
 Have you looked at pgpool?

afaik pgpool is statement-based and not really multi-master either ...


Stefan


Re: [GENERAL] real multi-master replication?

2007-03-04 Thread Stefan Kaltenbrunner
Bill Moran wrote:
 Stefan Kaltenbrunner [EMAIL PROTECTED] wrote:
 Bill Moran wrote:
 hubert depesz lubaczewski [EMAIL PROTECTED] wrote:
 hi,
 i read about some replication system for postgresql, but - as far as i
 know there is none real multi-master replication system for
 postgresql.
 all i have seen are based on query replication with various hacks
 for specific constructions (like now()).
 my question is - is there any (even fully commercial) multi-master
 replication system for postgresql that will work with all possible
 constructs, triggers, random data and so on?
 i mean - i dont want to bother with choosing to 'note' somehow that
 'this particular query' has to be replicated somehow.
 i'm thinking about working solution that will allow multi-master 
 connections.

 anything? anywhere?
 Have you looked at pgpool?
 afaik pgpool is statement based and not really multimaster either ...
 
 Well, it's multi-master to the degree that all servers are read/write,
 and therefore any server can take over.

not sure I follow - pgpool simply replays the queries that go through it
to each backend server.
You cannot write to the servers directly (well, you can - but that will
likely cause inconsistent data) - and you have all the problems of
non-deterministic queries, as well as the problem of getting a node back
in sync after downtime or a connection loss.
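
the classic example of the non-deterministic problem (illustrative only):

INSERT INTO log (ts) VALUES (now());

replay that on two backends and now() evaluates to a slightly different
time on each node - the copies silently diverge.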

 
 How would you define multi-master?

for true multi-master one would expect to be able to write to all the
nodes and either keep the data consistent/in sync, or have some sort of
conflict resolution in an async solution.

Stefan


Re: [GENERAL] Warning TupleDesc reference leak

2007-02-20 Thread Stefan Kaltenbrunner
Marek Lewczuk wrote:
 Hello,
 after upgrade to 8.2 version, PostgreSQL throws following warnings:
 WARNING:  TupleDesc reference leak: TupleDesc 0x42051d90 (16425,-1)
 still referenced
 WARNING:  TupleDesc reference leak: TupleDesc 0x41f60ad0 (16425,-1)
 still referenced
 WARNING:  TupleDesc reference leak: TupleDesc 0x4203d908 (16425,-1)
 still referenced
 WARNING:  TupleDesc reference leak: TupleDesc 0x41fdc410 (16425,-1)
 still referenced
 WARNING:  TupleDesc reference leak: TupleDesc 0x41fbb568 (16425,-1)
 still referenced
 WARNING:  TupleDesc reference leak: TupleDesc 0x42044bf0 (16425,-1)
 still referenced
 WARNING:  TupleDesc reference leak: TupleDesc 0x42038e60 (16425,-1)
 still referenced
 WARNING:  TupleDesc reference leak: TupleDesc 0x41feebc0 (16425,-1)
 still referenced
 WARNING:  TupleDesc reference leak: TupleDesc 0x41fa0018 (16425,-1)
 still referenced
 WARNING:  TupleDesc reference leak: TupleDesc 0x41fd9c30 (16425,-1)
 still referenced
 
 What it means ?

there is at least one known cause for that, but it is fixed in 8.2.3; it
involves plpgsql and subtransactions/exception blocks - what version are
you running exactly?
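
to check the exact server version you can run:

SELECT version();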


Stefan


Re: [GENERAL] Out of memory on vacuum analyze

2007-02-20 Thread Stefan Kaltenbrunner
Jim Nasby wrote:
 On Feb 19, 2007, at 1:19 PM, Jeff Davis wrote:
 You told PostgreSQL that you have 900MB available for
 maintenance_work_mem, but your OS is denying the request. Try *lowering*
 that setting to something that your OS will allow. That seems like an
 awfully high setting to me.
 
 900MB isn't that unreasonable if you're building indexes on a restore or
 something similar. I have run into issues when trying to set it much
 over 1G, though... on various OSes and platforms.

versions before 8.2 have some issues (mostly reporting bogus errors) with
very large settings for maintenance_work_mem. 8.2 and up behave more
sanely, but I don't think they can actually make anything better with
values in the GB range.
Have you actually measured a performance improvement going beyond
250-350MB or so (that seemed to be the sweet spot when I last tested)
for index creation and friends?
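
a quick way to test that yourself, assuming 8.2-style unit syntax (table
and index names are made up):

SET maintenance_work_mem = '350MB';
CREATE INDEX foo_idx ON foo (bar);

time the CREATE INDEX (e.g. with \timing in psql) at a few different
settings and see where the curve flattens.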


Stefan


Re: [GENERAL] [HACKERS] getting status transaction error

2007-02-06 Thread Stefan Kaltenbrunner
Merlin Moncure wrote:
 On 2/6/07, Merlin Moncure [EMAIL PROTECTED] wrote:
 around 6:30 this morning, I started getting the following messages in
 my log:

 Feb  6 06:33:34 mojo postgres[1117]: [2-1] :: ERROR:  could not access
 status of transaction 51911
 Feb  6 06:34:35 mojo postgres[1128]: [2-1] :: ERROR:  could not access
 status of transaction 51911
 [...]

 repeated roughly once a minute.  I've never seen this before. this is
 on postgres 8.1.1 running on fedora core 4 smp.  I don't have any
 contextual information yet but I'm getting ready to turn statement
 logging on.

 Anybody know what this is?
 [x-posting to -hackers]
 
 actually, here is some more relevant bits from the log.
 Feb  6 06:31:33 mojo postgres[1088]: [1-1] :: LOG:  autovacuum:
 processing database template0
 Feb  6 06:31:33 mojo postgres[1088]: [2-1] :: ERROR:  could not access
 status of transaction 51911
 Feb  6 06:31:33 mojo postgres[1088]: [2-2] :: DETAIL:  could not open
 file pg_clog/0207: No such file or directory

hmm, I first thought it could have been
http://archives.postgresql.org/pgsql-committers/2006-01/msg00288.php
which affects 8.1.1, but that's not quite the same error as the one
created by the above bug.


Stefan


Re: [GENERAL] Index bloat of 4x

2007-01-17 Thread Stefan Kaltenbrunner
Tom Lane wrote:
 Bill Moran [EMAIL PROTECTED] writes:
 The entire database was around 28M prior to the upgrades, etc.  Immediately
 after the upgrades, it was ~270M.  Following a vacuum full, it dropped to
 165M.  Following a database-wide reindex, it dropped to 30M.
 
 As Alvaro said, vacuum full doesn't shrink indexes but in fact bloats them.
 (Worst case, they could double in size, if the vacuum moves every row;
 there's an intermediate state where there have to be index entries for
 both old and new copies of each moved row, to ensure things are
 consistent if the vacuum crashes right there.)
 
 So the above doesn't sound too unlikely.  Perhaps we should recommend
 vac full + reindex as standard cleanup procedure.  Longer term, maybe
 teach vac full to do an automatic reindex if it's moved more than X% of
 the rows.  Or forget the current vac full implementation entirely, and
 go over to something acting more like CLUSTER ...

on IRC we quite often have to recommend the CLUSTER approach for fixing
overly bloated databases, because vacuum full is unreasonably slow on
highly fragmented databases.
Doing something like that internally for vacuum full sounds like a
reasonable idea, except for the additional disk usage during the process,
which might cause issues for some people ...
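
for reference, the CLUSTER approach we usually suggest looks roughly like
this (table and index names are made up; note that CLUSTER takes an
exclusive lock and needs about the table's size in free disk space):

CLUSTER foo_pkey ON foo;  -- rewrites foo in foo_pkey order, rebuilding its indexes
ANALYZE foo;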



Stefan


Re: [GENERAL] Reducing pg_dump pg_restore times

2006-10-26 Thread Stefan Kaltenbrunner
Ron Johnson wrote:
 On 10/26/06 01:20, Chris wrote:
 Coder At Heart wrote:
 Hi!
  
 Restore and backup take time to the tune of 6 hrs on a Linux, 4-proc, 32
 G RAM machine for a 60GB database.
 Please always cc the mailing list.

 I've never played with databases that large so I don't know if that's
 good or bad.. others on the list will be able to offer advice/suggestions.
 
 RAM certainly helps, but pg_dump is single-threaded, and, by
 definition, backing up and restoring 60GB of data is *extremely* IO
 constrained.

well, from what I have seen, on large databases with a sufficiently fast
disk-IO subsystem it is actually the CPU (or rather the speed of a single
core) that starts to be the bottleneck with current postgresql versions.
That is true for COPY and, to a much greater extent, for index creation
(I have seen restores that took days, and most of that was purely index
creation).
8.2 has improved considerably on that due to the massive improvements in
the external sorting code, but it will still be bottlenecked by the
single-process nature of psql.

 
 So,
 - What OS?
 - What version of PG?
 - what kind of disk system does the DB live on?
 - How many controllers?
 - What kind of tape drive?
 - Is it on it's own SCSI controller?

in addition to the basic information we need, there are a few knobs worth
turning (a sketch follows below):

*) increasing maintenance_work_mem and work_mem massively
*) increasing the number of checkpoint segments
*) disabling fsync during the load
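
a rough sketch of what that could look like in postgresql.conf for a
one-off restore (8.2-style unit syntax; the values are illustrative only -
tune them to your hardware):

maintenance_work_mem = 512MB    # speeds up index creation
work_mem = 256MB                # per sort/hash operation - be careful
checkpoint_segments = 64        # fewer checkpoint-induced IO spikes during COPY
fsync = off                     # ONLY during the load - re-enable afterwards!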


Stefan


Re: [GENERAL] Majordomo drops multi-line Subject:

2006-09-14 Thread Stefan Kaltenbrunner
Michelle Konzack wrote:
 Hello Marc,
 
 Am 2006-09-09 12:50:36, schrieb Marc G. Fournier:
 
 And I missed this one from Joshua ... but, we aren't running Majordomo 
 from GreatCircle, we are running Majordomo2 (http://www.mj2.org) which is 
 very much being actively supported ...
 
 I have seen...
 
 Majordomo2 support a nomail option as well ...
 
 I know, since some Mailinglist I am on are migrated to Majordomo2.
 Afaik does Mailman not support the nomail Option.
 
 Same for ezmlm, sympha, and smartlist.

fwiw: sympa - and that one has had a nomail option for ages ...


Stefan


Re: [GENERAL] [pgsql-advocacy] Thought provoking piece on

2006-09-01 Thread Stefan Kaltenbrunner

Alvaro Herrera wrote:

Bruce Momjian wrote:

Tom Lane wrote:



It's pointless to suppose that individual developers would really be
answerable to any project-wide management, since that's not who they're
paid by.  So I tend to think that a project roadmap would be more of an
exercise in wishful thinking than a useful management tool.  OTOH it
*could* be useful, if there are any developers out there wondering what
they should work on next.  Are there any ... and would they listen to a
roadmap if they had one, rather than scratching their own itches?


I would certainly listen to a roadmap if it talked to me ...


I think the longer someone is with the project the more they start
working on what is good for the project, rather than what interests
them.  I think we have seen many cases of that.


On my particular case, I generally grab some problem that I perceive as
important and unhandled, and try to do something to remedy it.  This is
how I got here in the first place, by fixing some problems in the
CLUSTER implementation.  This is how I got to doing shared dependencies,
shared row locks and autovacuum -- neither of them were problems that
affected me in any way.  Savepoints were a different matter.  I chose to
work on them because Bruce and other people on this list suggested them
to me, back when I was looking for something to do my undergrad project
in.

So yes, I'd probably work on something the community considered
important.



heh, if this is a request for a wishlist, then I would suggest that we 
should finally tackle one of the things most databases (including MySQL) 
are doing better than we are - that is, better charset/locale/collate support.
Especially for new users, or users converting from other databases, this is 
one of the major stumbling blocks (at least as seen on irc regularly).



Stefan


Re: [GENERAL] [pgsql-advocacy] Thought provoking piece on

2006-09-01 Thread Stefan Kaltenbrunner
Martijn van Oosterhout wrote:
 On Fri, Sep 01, 2006 at 12:40:53PM +0200, Stefan Kaltenbrunner wrote:
 heh if this is a request for a wishlist then I would suggest that we 
 should finally tackle one of the things most databases are doing better 
 then we (including MySQL) - that is better charset/locale/collate support.
 especially for new users or users converting from other database this is 
 one of the major stumbling blocks (at least as seen on irc regulary)
 
 Yeah well, I got reasonably far on that. To the point of being able to
 have different collations on different columns, creating indexes with
 different collations and having collation-sensetive comparisons:
 
 http://archives.postgresql.org/pgsql-hackers/2005-12/msg01121.php
 
 Where I got stuck is teaching the planner how to use the collation info
 to produce appropriate plans. There wasn't a lot of feedback on the
 patch itself, so I didn't know how to proceed. I don't have time for it
 anymore but if someone wants to pick it up and run with it...
 
 Note however that it's not easy, there are a number of related issues
 which need to be solved at the same time:

yeah I had some hopes for this getting done - and what you have seems
like a nice start - but the whole thing is quite difficult and I expect
that project to need quite a lot of further work :-(

 
 Supporting SORTFUNC_LT/GT is going to get much harder, but there no
 idea as to how much it's used anyway:
 
 http://archives.postgresql.org/pgsql-hackers/2005-12/msg01154.php
 
 The concept of operator class needs to be expanded into something
 more general, into something that's actually describes the type, rather
 than just how btrees work.
 
 Do we want to keep relying on the system libraries for collation, or do
 we want to use a cross-platform library like ICU or do we want to
 create our own collation library?

that is probably something we really need to decide on - system
libraries do seem to be the easy route, but I have some doubts about
portability and the quality of implementations (like getting different
behaviour on different platforms), and some of our supported platforms
(like the BSDs) have rather limited support for collation anyway.
On the ICU vs. our own library question I'm not sure what would be a good
thing to do - ICU is _LARGE_, and we already have some perfectly fine and
proven code in core for things like character conversion and timezone
handling ...


Stefan


Re: [GENERAL] Cutting the Gborg throat

2006-08-31 Thread Stefan Kaltenbrunner

Dave Page wrote:
 


-Original Message-
From: [EMAIL PROTECTED] 
[mailto:[EMAIL PROTECTED] On Behalf Of Jan Wieck

Sent: 31 August 2006 13:59
To: Joshua D. Drake
Cc: Dave Cramer; Greg Sabino Mullane; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Cutting the Gborg throat

On 8/28/2006 9:36 PM, Joshua D. Drake wrote:


Hello,

O.k. so how about a phased approach?

1. Contact maintainers to create their new projects on 
pgfoundry and 

begin moving tickets

2. Migrate CVS

3. Migrate mailing lists

Apparently something cut the throat first. GBorg is down since Sunday.


Neptune lost 2 disks at once... But what's more interesting is that
yours is the first complaint I've seen.


fwiw we had a number of requests on irc at least ...


Stefan


Re: [GENERAL] undefined reference to `PQprepare' with postgresql-dev

2006-08-24 Thread Stefan Kaltenbrunner
Alexander Farber wrote:
 Hello,
 
 I'm trying to compile a libpq program under Debian 3.1r2 with these
 packages installed:
 
 $ dpkg -l | grep postgres
 ii  postgresql 7.4.7-6sarge2  object-relational SQL database
 management sy
 ii  postgresql-cli 7.4.7-6sarge2  front-end programs for PostgreSQL
 ii  postgresql-con 7.4.7-6sarge2  additional facilities for PostgreSQL
 ii  postgresql-dev 7.4.7-6sarge2  development files for libpq
 (PostgreSQL libr
 ii  postgresql-doc 7.4.7-6sarge2  documentation for the PostgreSQL
 database ma
 
 That program compiles and works fine with OpenBSD
 and Cygwin (I'm using PostgreSQL 8.x there though...)
 
 On Debian it unfortunately doesn't link:
 
 $ gcc build/pref.o build/message.o build/pgsql.o build/user.o
 build/util.o build/table.o build/common.o build/array.o
 build/xstring.o build/strlcpy.o build/strlcat.o build/daemon.o -o pref
 -L /usr/lib -L  /usr/lib/postgresql/lib -lpq
 build/pgsql.o(.text+0x15b): In function `db_prepare':
 server/pgsql.c:57: undefined reference to `PQprepare'
 collect2: ld returned 1 exit status
 
 $ pg_config --version
 PostgreSQL 7.4.7
 
 $ pg_config --libdir
 /usr/lib
 
 $ objdump -x /usr/lib/libpq.a | grep -i PQprepare
  *UND*   pqPrepareAsyncResult
 1974 R_386_PLT32   pqPrepareAsyncResult
 0490 g F .text  007d pqPrepareAsyncResult
 12e5 R_386_PLT32   pqPrepareAsyncResult
 130a R_386_PLT32   pqPrepareAsyncResult
  *UND*   pqPrepareAsyncResult
 1841 R_386_PLT32   pqPrepareAsyncResult
 
 There are very few hits for the undefined reference PQprepare
 on Google (which I take a good sign :-)
 
 Does anybody have an idea please, what could I be doing wrong?


there is no PQprepare in 7.4 - it got introduced in 8.0 (it is not
mentioned in the 7.4 manual either) ...
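
if you are stuck on 7.4 you can get much the same effect by sending the
SQL-level PREPARE/EXECUTE statements through plain PQexec() - a minimal
sketch (statement, table and parameter names are made up):

PREPARE getuser (text) AS
    SELECT * FROM users WHERE name = $1;
EXECUTE getuser('bob');
DEALLOCATE getuser;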


Stefan


Re: [GENERAL] Postgres on 64bit Windows Server with WOW64

2006-07-31 Thread Stefan Kaltenbrunner
Joshua D. Drake wrote:
 Mont Rothstein wrote:
 I wouldn't know that one way or the other :-)

 I was referring to Windows on Windows 64
 (http://en.wikipedia.org/wiki/WoW64).

 Enables 32bit apps to run on 64bit Windows OSs, in case anyone wasn't
 familiar with it (I wasn't until today).
 
 In theory you should be able to compile PostgreSQL to 64bit windows. We
 support 64bit everywhere else.
 
 Does mingw support 64bit?

no, it does not.
To get 64bit on windows we will first need to get VC fully supported ...

Stefan


Re: [GENERAL] Backslash as ordinary char vs. not; set via a connection/session

2006-07-27 Thread Stefan Kaltenbrunner
Ken Johanson wrote:
 Tom and folks,
 
 Will it be possible to set this more standard backslash handling
 behavior -- and possibly similar conformance modes... in a way similar
 to how mysql allows? They allow one to issue commands on the connection
 like:
 
 SET SESSION sql_mode = 'NO_BACKSLASH_ESCAPES,IGNORE_SPACE,ANSI'
 
 The advantage to this is that ISPs, etc can, by default, run the
 database with the old/incorrect mode (which is more compatible with the
 correspondingly legacy/broken apps.. and for newer apps to issue that
 command to make the DB act more standards compliant..

postgresql can do that in an even more powerful way - though people tend
not to notice it. In your case that would be:

ALTER ROLE foo SET standard_conforming_strings='off'

or even:

ALTER DATABASE bar SET standard_conforming_strings='off'

you can do that for nearly all GUCs (like logging, client_encoding,
search_path, ...)


Stefan


Re: [GENERAL] Backslash as ordinary char vs. not; set via a connection/session

2006-07-27 Thread Stefan Kaltenbrunner
Ken Johanson wrote:
 Stefan Kaltenbrunner wrote:
 

 postgresql can do that in an even more powerful way - but people tend to
 not notice much of it in your case that would be:

 ALTER ROLE foo SET standard_conforming_strings='off'

 or even:

 ALTER DATABASE bar SET standard_conforming_strings='off'

 you can do that for nearly all GUCs (like
 logging,client_encoding,search_path,)


 Stefan
 
 Stefan and Alvaro,
 
 Thank you!!! Yes, that is the feature I'd like... and yes, setting it on
 a per role or per database level is something I personally would prefer
 over the connection level. But, is there also a way to set it on the
 connection? Just because, one can imagine scenarios where two APIs share
 the same role  database, but one API forces backslashes 'on' during its
 statement-prepare just playing devil's advocate :-)

foo=# create table backslash(baz text);
CREATE TABLE
foo=# set standard_conforming_strings to on;
SET
foo=# insert into backslash values ('\\');
INSERT 0 1
foo=# set standard_conforming_strings to off;
SET
foo=# insert into backslash values ('\\');
WARNING:  nonstandard use of \\ in a string literal
LINE 1: insert into backslash values ('\\');
  ^
HINT:  Use the escape string syntax for backslashes, e.g., E'\\'.
INSERT 0 1
foo=# select * from backslash;
 baz
-
 \\
 \
(2 rows)


like that ? :-)


 
 So is this 'standard_conforming_strings' variable already set-able in a
 recent build, at the role or db level? Or will that need to wait for 8.2?

it's already in -HEAD and will therefore be in 8.2 when that gets released.


Stefan


Re: [GENERAL] pgsql vs mysql

2006-07-11 Thread Stefan Kaltenbrunner
Jan Wieck wrote:
 On 7/11/2006 1:08 PM, Scott Marlowe wrote:
 
 On Tue, 2006-07-11 at 11:04, Jan Wieck wrote:
 On 6/30/2006 1:07 PM, Merlin Moncure wrote:

  * mysql has a few features here and there which are nice...just to
  name a few, flush tables with lock, multiple insert, etc

 The multiple insert stuff is not only non-standard, it also
 encourages the bad practice of using literal values directly in the
 SQL string versus prepared statements with place holders.

 I thought it was in the SQL 99 standard...  
 
 The SQL bible doesn't say SQL99, it says it is a DB2 specific feature.

hmm:

http://troels.arvin.dk/db/rdbms/#insert

says otherwise - or are we talking about a different multiple insert?
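
for reference, a minimal sketch of that multi-row form (table and values
are made up; PostgreSQL itself supports it as of 8.2):

INSERT INTO foo (a, b) VALUES
    (1, 'one'),
    (2, 'two'),
    (3, 'three');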


Stefan
