Re: [PERFORM] OpenMP in PostgreSQL-8.4.0

2009-12-05 Thread Denis Lussier
Sounds more like a school project than a proper performance question.

On 11/28/09, Reydan Cankur  wrote:
> Hi,
>
> I am trying to run postgresql functions with threads by using OpenMP.
> I tried to parallelize slot_deform_tuple function(src/backend/access/
> common/heaptuple.c) and added below lines to the code.
>
> #pragma omp parallel
> {
>   #pragma omp sections
>   {
>   #pragma omp section
>   values[attnum] = fetchatt(thisatt, tp + off);
>
>   #pragma omp section
>   off = att_addlength_pointer(off, thisatt->attlen, tp + off);
>   }
> }
>
> During ./configure I saw the information message for  heaptuple.c as
> below:
> "OpenMP defined section was parallelized."
>
> Below is the configure that I have run:
> ./configure CC="/path/to/icc -openmp" CFLAGS="-O2" --prefix=/path/to/
> pgsql --bindir=/path/to/pgsql/bin --datadir=/path/to/pgsql/share --
> sysconfdir=/path/to/pgsql/etc --libdir=/path/to/pgsql/lib --
> includedir=/path/to/pgsql/include --mandir=/path/to/pgsql/man --with-
> pgport=65432 --with-readline --without-zlib
>
> After configure I ran gmake and gmake install and I saw "PostgreSQL
> installation complete."
>
> When I begin to configure for initdb and run below command:
>   /path/to/pgsql/bin/initdb -D /path/to/pgsql/data
>
> I get the following error:
>
> The files belonging to this database system will be owned by user
> "reydan.cankur".
> This user must also own the server process.
>
> The database cluster will be initialized with locale en_US.UTF-8.
> The default database encoding has accordingly been set to UTF8.
> The default text search configuration will be set to "english".
>
> fixing permissions on existing directory /path/to/pgsql/data ... ok
> creating subdirectories ... ok
> selecting default max_connections ... 100
> selecting default shared_buffers ... 32MB
> creating configuration files ... ok
> creating template1 database in /path/to/pgsql/data/base/1 ... FATAL:
> could not create unique index "pg_type_typname_nsp_index"
> DETAIL:  Table contains duplicated values.
> child process exited with exit code 1
> initdb: removing contents of data directory "/path/to/pgsql/data"
>
> I could not see the connection between the initdb process and the change
> that I have made.
> I need your help in solving this issue.
>
> Thanks in advance,
> Reydan
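
For what it's worth, the failure is explainable from the snippet itself:
fetchatt() reads the current value of "off" while att_addlength_pointer()
computes and writes the next one, so the two statements form a strict
serial dependency.  Splitting them into OpenMP sections lets them run in
either order (or concurrently), which corrupts the deformed tuples -- and
corrupted catalog tuples during bootstrap are consistent with the
duplicate-value failure initdb reports.  A self-contained sketch of the
same hazard, illustrative only (not PostgreSQL source; compile with
gcc -fopenmp and the output varies from run to run):

    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        int data[10] = {10, 20, 30, 40, 50, 60, 70, 80, 90, 100};
        int values[4];
        int off = 0;
        int i;

        for (i = 0; i < 4; i++)
        {
            /* Same shape as the patched loop: one section reads "off",
             * the other advances it.  Nothing orders the two sections. */
            #pragma omp parallel sections shared(off)
            {
                #pragma omp section
                values[i] = data[off];      /* reads "off" ...          */

                #pragma omp section
                off = off + 2;              /* ... while this writes it */
            }
        }
        for (i = 0; i < 4; i++)
            printf("values[%d] = %d\n", i, values[i]);
        return 0;
    }

(There is also the separate problem that PostgreSQL backends are not
written to be multithreaded, so spawning OpenMP threads inside the
backend is risky even where no data race exists.)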



Re: [PERFORM] Server Freezing

2009-12-05 Thread Denis Lussier
Perhaps make your select explicitly part of a read-only
transaction rather than letting Java use an implicit
transaction (which may be in autocommit mode).
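
The OP's loop is Java, but in libpq terms the suggestion amounts to the
sketch below -- the connection string, table, and column names are just
placeholders taken from the pseudocode in the quoted message:

    #include <stdio.h>
    #include <libpq-fe.h>

    int main(void)
    {
        PGconn   *conn = PQconnectdb("dbname=mydb");   /* placeholder */
        PGresult *res;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            return 1;
        }

        /* explicit read-only transaction instead of driver autocommit */
        res = PQexec(conn, "BEGIN TRANSACTION READ ONLY");
        PQclear(res);

        res = PQexec(conn, "SELECT field1 FROM table1 WHERE field2 = '10'");
        if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) > 0)
            printf("field1 = %s\n", PQgetvalue(res, 0, 0));
        PQclear(res);

        res = PQexec(conn, "COMMIT");
        PQclear(res);

        PQfinish(conn);
        return 0;
    }

In JDBC the equivalent is Connection.setAutoCommit(false) plus
Connection.setReadOnly(true) before issuing the SELECT.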

On 11/30/09, Waldomiro  wrote:
> Hi everybody,
>
> I have a Java application like this:
>
> while ( true ) {
>  Thread.sleep( 1000 ) // sleeps 1 second
>
>   SELECT field1
>   FROM TABLE1
>   WHERE field2 = '10'
>
>   if ( field1 != null ) {
>   BEGIN;
>
>   processSomething( field1 );
>
>   UPDATE TABLE1
>   SET field2 = '20'
>   WHERE field1 = '10';
>
>   COMMIT;
>  }
> }
>
> This is a simple program which is waiting for a record inserted by
> another workstation, after I process that record I update to an
> processed status.
>
> That table receives about 3000 inserts and 6 updates each day, but
> every night I do a TRUNCATE TABLE1, so the table is very
> small. There is an index on field1 too.
>
> Some days it works very well all day, but some days I get a 7-second
> freeze, I mean, my server delays 7 seconds on this statement:
>   SELECT field1
>   FROM TABLE1
>   WHERE field2 = '10'
>
> Last Friday, it happened about 4 times: once at 9:50, again at 13:14,
> again at 17:27 and again at 17:57.
>
> I looked at the statistics for that table, and the statistics say
> that postgres is reading memory, not disk, because the table is very
> small and I do a select every second, so postgres keeps the table in
> shared buffers.
>
> Why this 7 seconds delay? How could I figure out what is happening?
>
> I know:
>
> It is not disk, because the statistics show it is reading memory.
> It is not internet delay, because it is a local network.
> It is not the workstations, because there are 2 workstations, and both
> freeze at the same time.
> It is not the processors, because my server has 8 processors.
> It is not memory, because my server has 32 GB, with about 200 MB free.
> It is not another big process (or maybe it is), because I think postgres
> would not stop my simple process for 7 seconds to do a big process,
> and I can't see any big process at that time.
> It's not a lock, because the simple select freezes, and it does not have
> a "FOR UPDATE".
> It's not that a vacuum is needed, because I do a TRUNCATE every night.
>
> Is it possible the checkpoint is doing that? Or the archiving? How can I
> tell?
>
> Does anyone have any idea?
>
> Thank you
>
> Waldomiro Caraiani
>
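
One way to test the checkpoint theory is to time the polling SELECT and
log the outliers, then match those timestamps against checkpoint activity
in the server log (log_checkpoints = on, available in 8.3 and later; on
older versions checkpoint_warning can at least flag overly frequent
checkpoints).  A sketch, with the conninfo and query as placeholders:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/time.h>
    #include <libpq-fe.h>

    static double now_sec(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec + tv.tv_usec / 1e6;
    }

    int main(void)
    {
        PGconn *conn = PQconnectdb("dbname=mydb");     /* placeholder */

        if (PQstatus(conn) != CONNECTION_OK)
            return 1;

        for (;;)                      /* same shape as the polling loop */
        {
            double    t0 = now_sec();
            PGresult *res = PQexec(conn,
                "SELECT field1 FROM table1 WHERE field2 = '10'");
            double    elapsed = now_sec() - t0;

            PQclear(res);
            if (elapsed > 1.0)        /* log anything slower than 1s */
                fprintf(stderr, "slow select: %.1fs at unix time %.0f\n",
                        elapsed, t0);
            sleep(1);
        }
    }

If the slow selects line up with checkpoint entries in the log, tuning the
checkpoint/background-writer settings is the usual next step.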



Re: [PERFORM] Best suiting OS

2009-10-03 Thread Denis Lussier
I'm a BSD license fan, but, I don't know much about *BSD otherwise (except
that many advocates say it runs PG very nicely).
On the Linux side, unless you're a dweeb, go with a newer, popular & well
supported release for Production.  IMHO, that's RHEL 5.x or CentOS 5.x.  Of
course the latest SLES & Ubuntu schtuff are also fine.

In other words, unless you've got a really good reason for it, stay away
from Fedora & OpenSuse for production usage.

On Thu, Oct 1, 2009 at 3:10 PM,  wrote:

> On Thu, 1 Oct 2009, S Arvind wrote:
>
>> Hi everyone,
>> What is the best Linux flavor for a server which runs postgres alone?
>> The postgres must handle a great number of databases, around 200+.
>> Performance on speed is the vital factor.
>> Is it FreeBSD, CentOS, Fedora, Redhat xxx??
>
>
> as noted by others *BSD is not linux
>
> among the linux options, the best option is the one that you as a company
> are most comfortable with (and have the support/upgrade processes in place
> for)
>
> in general, the newer the kernel the better things will work, but it's far
> better to have an 'old' system that your sysadmins understand well and can
> support easily than a 'new' system that they don't know well and therefore
> have trouble supporting.
>
> David Lang
>


Re: [PERFORM] Benchmark comparing PostgreSQL, MySQL and Oracle

2009-02-21 Thread Denis Lussier
Hi all,

As the author of BenchmarkSQL and the founder of EnterpriseDB, I
can assure you that BenchmarkSQL was NOT written specifically for
PostgreSQL.  It is intended to be a completely database-agnostic,
TPC-C-like, Java-based benchmark.

However, as Jonah correctly points out in painstaking detail:
PostgreSQL is good, but...  Oracle, MySQL/InnoDB, and the rest don't
necessarily suck.  :-)

--Luss

PS:   Submit a patch to BenchmarkSQL and I'll update it.


On 2/20/09, Sergio Lopez  wrote:
> On Fri, 20 Feb 2009 16:54:58 -0500,
> Robert Haas  wrote:
>
>> On Fri, Feb 20, 2009 at 4:34 PM, Jonah H. Harris
>>  wrote:
>> > On Fri, Feb 20, 2009 at 3:40 PM, Merlin Moncure
>> >  wrote:
>> >>
>> >> ISTM you are the one throwing out unsubstantiated assertions
>> >> without data to back them up.  The OP ran a benchmark, showed
>> >> hardware/configs, and demonstrated the result.  He was careful to
>> >> hedge expectations and gave a rationale for his analysis methods.
>> >
>> > As I pointed out in my last email, he makes claims about PG being
>> > faster than Oracle and MySQL based on his results.  I've already
>> > pointed out significant tuning considerations, for both Postgres
>> > and Oracle, which his benchmark did not take into account.
>> >
>> > This group really surprises me sometimes.  For such a smart group
>> > of people, I'm not sure why everyone seems to have a problem
>> > pointing out design flaws, etc. in -hackers, yet when we want to
>> > look good, we'll overlook blatant flaws where benchmarks are
>> > concerned.
>>
>> The biggest flaw in the benchmark by far has got to be that it was
>> done with a ramdisk, so it's really only measuring CPU consumption.
>> Measuring CPU consumption is interesting, but it doesn't have a lot to
>> do with throughput in real-life situations.  The benchmark was
>> obviously constructed to make PG look good, since the OP even mentions
>> on the page that the reason he went to ramdisk was that all of the
>> databases, *but particularly PG*, had trouble handling all those
>> little writes.  (I wonder how much it would help to fiddle with the
>> synchronous_commit settings.  How do MySQL and Oracle alleviate this
>> problem, and can we usefully imitate any of it?)
>>
>
> The benchmark is NOT constructed to make PostgreSQL look good, that
> never was my intention. All databases suffered the I/O bottleneck for
> their redo/xlog/binary_log files, especially PostgreSQL but closely
> followed by Oracle. For some reason MySQL seems to deal better with I/O
> contention, but it still gives numbers that are less than half of what
> it gives with tmpfs.
>
> While using the old array (StorageTek T3), I've played with
> synchronous_commit, wal_sync_method, commit_delay... and only setting
> wal_sync_method = open_datasync (which, in Solaris, completely disables
> I/O syncing) gave better results, for obvious reasons.
>
> Anyway, I think that in the next few months I'll be able to repeat the
> tests with a nice SAN, and then we'll have new numbers that will be
> closer to "real-world situations" (but synthetic benchmarks always
> are synthetic benchmarks), and we'll also be able to compare them with
> these to see how I/O contention impacts each database.
>
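
Robert's synchronous_commit aside is easy to test, since the setting can
be flipped per session (8.3 and later) without touching postgresql.conf.
A minimal sketch in libpq -- the conninfo and target table are
placeholders:

    #include <libpq-fe.h>

    int main(void)
    {
        PGconn   *conn = PQconnectdb("dbname=bench");  /* placeholder */
        PGresult *res;
        int       i;

        if (PQstatus(conn) != CONNECTION_OK)
            return 1;

        /* trade bounded data loss on crash for not waiting on WAL flush;
         * affects only this session, so it is easy to A/B in a benchmark */
        res = PQexec(conn, "SET synchronous_commit = off");
        PQclear(res);

        for (i = 0; i < 10000; i++)    /* stand-in for the write mix */
        {
            res = PQexec(conn, "INSERT INTO t(v) VALUES (1)");
            PQclear(res);
        }

        PQfinish(conn);
        return 0;
    }

Comparing throughput with the setting on and off would show how much of
the "little writes" pain is just commit-time WAL flushing.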



Re: [PERFORM] Simple join optimized badly?

2006-10-07 Thread Denis Lussier

Wouldn't PG supporting simple optimizer hints get around this kinda
problem?   Seems to me that at least one customer posting per week
would be solved via the use of simple hints.

If the community is interested...  EnterpriseDB has added support for
a few different simple types of hints (optimize for speed, optimize
for first rows, use particular indexes) for our upcoming 8.2 version.
We are glad to submit them into the community process if there is any
chance they will eventually be accepted for 8.3.

I don't think there is an ANSI standard for hints, but, that doesn't
mean they are not occasionally extremely useful.  All hints are
effectively harmless/helpful suggestions; the planner is free to
ignore them if they are not feasible.

--Denis Lussier
 Founder
 http://www.enterprisedb.com

On 10/7/06, Tom Lane <[EMAIL PROTECTED]> wrote:

"Craig A. James" <[EMAIL PROTECTED]> writes:
> There are two plans below.  The first is before an ANALYZE HITLIST_ROWS, and 
it's horrible -- it looks to me like it's sorting the 16 million rows of the 
SEARCH table.  Then I run ANALYZE HITLIST_ROWS, and the plan is pretty decent.

It would be interesting to look at the before-ANALYZE cost estimate for
the hash join, which you could get by setting enable_mergejoin off (you
might have to turn off enable_nestloop too).  I recall though that
there's a fudge factor in costsize.c that penalizes hashing on a column
that no statistics are available for.  The reason for this is the
possibility that the column has only a small number of distinct values,
which would make a hash join very inefficient (in the worst case all
the values might end up in the same hash bucket, making it no better
than a nestloop).  Once you've done ANALYZE it plugs in a real estimate
instead, and evidently the cost estimate drops enough to make hashjoin
the winner.

You might be able to persuade it to use a hashjoin anyway by increasing
work_mem enough, but on the whole my advice is to do the ANALYZE after
you load up the temp table.  The planner really can't be expected to be
very intelligent when it has no stats.

   regards, tom lane

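Tom's closing advice translates directly into the load-then-ANALYZE
pattern below -- a libpq sketch, with table and column names loosely
borrowed from the thread (illustrative, not the OP's actual schema):

    #include <libpq-fe.h>

    int main(void)
    {
        PGconn *conn = PQconnectdb("dbname=mydb");     /* placeholder */
        int     i;

        const char *steps[] = {
            "CREATE TEMP TABLE hitlist_rows (objectid int)",
            "INSERT INTO hitlist_rows SELECT objectid FROM search LIMIT 1000",
            "ANALYZE hitlist_rows",   /* the key step: stats before planning */
            NULL
        };

        if (PQstatus(conn) != CONNECTION_OK)
            return 1;

        for (i = 0; steps[i] != NULL; i++)
            PQclear(PQexec(conn, steps[i]));

        /* any join against hitlist_rows planned after this point sees
         * real statistics rather than the no-stats fudge factors */
        PQfinish(conn);
        return 0;
    }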





Re: [PERFORM] recommended benchmarks

2006-09-23 Thread Denis Lussier

If the real-world applications you'll be running on the box are Java
(or use lots of prepared statements and no stored procedures)...   try
BenchmarkSQL from pgFoundry.  It's extremely easy to set up and use.
Like DBT2, it's an OLTP benchmark that is similar to the TPC-C.

--Denis Lussier
http://www.enterprisedb.com

On 9/22/06, Bucky Jordan <[EMAIL PROTECTED]> wrote:

> On Fri, 2006-09-22 at 13:14 -0400, Charles Sprickman wrote:
> > Hi all,
> >
> > I still have a dual dual-core opteron box with a 3Ware 9550SX-12 sitting
> > here and I need to start getting it ready for production.  I also have to
> > send back one processor since we were mistakenly sent two.  Before I do
> > that, I would like to record some stats for posterity and post to the list
> > so that others can see how this particular hardware performs.
> >
> > It looks to be more than adequate for our needs...
> >
> > What are the standard benchmarks that people here use for comparison
> > purposes?  I know all benchmarks are flawed in some way, but I'd at least
> > like to measure with the same tools that folks here generally use to get a
> > ballpark figure.
>
> Check out the OSDL stuff.
>
> http://www.osdl.org/lab_activities/kernel_testing/osdl_database_test_suite/
>
> Brad.
>

Let me know what tests you end up using and how difficult they are to
set up/run -- I have a Dell 2950 (2 dual-core Woodcrest) that I could
probably run the same tests on. I'm looking into DBT2 (OLTP, similar to
TPC-C) to start with, then probably DBT-3 since it's more OLAP style
(and more like the application I'll be dealing with).

What specific hardware are you testing? (CPU, RAM, raid setup, etc?)

- Bucky






Re: [PERFORM] XFS filessystem for Datawarehousing -2

2006-08-04 Thread Denis Lussier
I agree that OCFS 2.0 is NOT a general-purpose PG (or any other) solution.
My recollection is that OCFS gave about 15% performance improvements (same
as setting some aggressive switches on ext3).  I assume OCFS has excellent
crash safety with its default settings, but we have not tested this as of
yet.  OCFS now ships as one of the optional filesystems with SUSE.  That
takes care of some of the FUD created by Oracle's disclaimer below.

OCFS 2 is much more POSIX-compliant than OCFS 1.  The BenchmarkSQL, DBT2, &
regression tests we ran on OCFS 2 all worked well.  The lack of full POSIX
compliance did cause some problems for configuring PITR.

--Denis
  http://www.enterprisedb.com

On 8/3/06, Chris Browne <[EMAIL PROTECTED]> wrote:
> Of course, with a big warning sticker of "what is required for Oracle
> to work properly is implemented, anything more is not a guarantee" on
> it, who's going to trust it?


Re: [PERFORM] XFS filessystem for Datawarehousing -2

2006-08-02 Thread Denis Lussier
I was kinda thinking that making the block size configurable at initdb time
would be a nice & simple enhancement for PG 8.3.  My own personal rule of
thumb for sizing is 8k for OLTP, 16k for mixed use, & 32k for DWH.

I have no personal experience with XFS, but I've seen numerous internal
edb-postgres test results that show that of all filesystems... OCFS 2.0
seems to be quite good for PG update-intensive apps (especially on 64-bit
machines).

On 8/1/06, Luke Lonergan <[EMAIL PROTECTED]> wrote:
> Milen,
>
> On 8/1/06 3:19 PM, "Milen Kulev" <[EMAIL PROTECTED]> wrote:
> > Sorry, forgot to ask:
> > What is the recommended/best PG block size for a DWH database?  16k, 32k, 64k?
> > What should be the relation between XFS/RAID stripe size and PG block size?
>
> We have found that the page size in PG starts to matter only at very high
> disk performance levels, around 1000MB/s.  Other posters have talked about
> maintenance tasks improving in performance, but I haven't seen it.
>
> - Luke


Re: [PERFORM] PITR performance overhead?

2006-08-02 Thread Denis Lussier
If your server is heavily I/O bound AND you care about your data AND you
are throwing out your WAL files in the middle of the day...  you are headed
for a cliff.  I'm sure this doesn't apply to anyone on this thread, just a
general reminder to all you DBAs out there who sometimes are too busy to
implement PITR until after a disaster strikes.  I know that in the past
I've personally been guilty of this on several occasions.

--Denis
  EnterpriseDB (yeah, rah, rah...)

On 8/1/06, Merlin Moncure <[EMAIL PROTECTED]> wrote:
> On 8/1/06, George Pavlov <[EMAIL PROTECTED]> wrote:
> > I am looking for some general guidelines on what is the performance
> > overhead of enabling point-in-time recovery (archive_command config) on
> > an 8.1 database. Obviously it will depend on a multitude of factors, but
> > some broad-brush statements and/or anecdotal evidence will suffice.
> > Should one worry about its performance implications? Also, what can one
> > do to mitigate it?
>
> pitr is extremely cheap both in performance drag and administration
> overhead for the benefits it provides.  it comes almost for free, just
> make sure you can handle all the wal files and do sane backup
> scheduling.  in fact, pitr can actually reduce the load on a server
> due to running less frequent backups.  if your server is heavy i/o
> loaded, it might take a bit of planning.
>
> merlin


Re: [PERFORM] Performances with new Intel Core* processors

2006-08-02 Thread Denis Lussier
My theory, based entirely on what I have read in this thread, is that a
low-end server (really a small workstation) with an Intel Dual Core CPU is
likely an excellent PG choice for the lowest end.

I'll try to snag an Intel Dual Core workstation in the near future and
report back DBT2 scores comparing it to a similarly equipped 1-socket AMD
dual-core workstation.  I'll keep the data size small to fit entirely in
RAM so the DBT2 isn't its usual disk-bound dog when you run it the "right"
way (according to tpc-c guidelines).

--Denis
  Dweeb from EnterpriseDB

On 8/1/06, Florian Weimer <[EMAIL PROTECTED]> wrote:
> * Arjen van der Meijden:
>
> > For a database system, however, processors hardly ever are the main
> > bottleneck, are they?
>
> Not directly, but the choice of processor influences which
> chipsets/mainboards are available, which in turn has some impact on
> the number of RAM slots.  (According to our hardware supplier, beyond
> 8 GB, the price per GB goes up sharply.)  Unfortunately, it seems that
> the Core 2 Duo mainboards do not change that much in this area.
>
> --
> Florian Weimer  <[EMAIL PROTECTED]>
> BFK edv-consulting GmbH       http://www.bfk.de/
> Durlacher Allee 47            tel: +49-721-96201-1
> D-76131 Karlsruhe             fax: +49-721-96201-99


Re: [PERFORM] Performance with 2 AMD/Opteron 2.6Ghz and 8gig

2006-07-29 Thread Denis Lussier
Not sure that EnterpriseDB's Dynatune is the general-purpose answer that
the PG community has been searching for.  Actually, I think it could be,
but...  the community process will decide.

We are presently planning to create a site that will be called
http://gforge.enterprisedb.com that will be similar in spirit to BizGres.
By this I mean that we will be open sourcing many key small "improvements"
(in the eye of the beholder) for PG that will potentially make it into PG
(likely in some modified format) depending on the reactions and desires of
the general Postgres community.

In case anyone is wondering...  NO, EnterpriseDB won't be open sourcing the
legacy Horacle stuff we've added to our product (at least not yet).  This
stuff is distributed under our Commercial Open Source license (similar to
SugarCRM's).  Our Commercial Open Source license simply means that if you
buy a Platinum Subscription to our product, then you can keep the source
code under your pillow and use it internally at your company however you
see fit.

--Denis Lussier
  CTO
  http://www.enterprisedb.com

On 7/29/06, Luke Lonergan <[EMAIL PROTECTED]> wrote:
> Denis,
>
> On 7/29/06 11:09 AM, "Denis Lussier" <[EMAIL PROTECTED]> wrote:
> > We do something we call "Dynatune" at db startup time.
>
> Sounds great - where do we download it?
>
> - Luke


Re: [PERFORM] Performance with 2 AMD/Opteron 2.6Ghz and 8gig

2006-07-29 Thread Denis Lussier
> systems could send me their bonnie + benchmarksql results!

I am one of the authors of BenchmarkSQL; it is similar to DBT2, but it's
very easy to use (&/or abuse).  It's a multithreaded Java Swing client that
can run the exact same benchmark (uses JDBC prepared statements) against
Postgres/EnterpriseDB/Bizgres, MySQueeL, Horacle, Microsloth, etc, etc.
You can find BenchmarkSQL on pgFoundry and SourceForge.

As expected, Postgres is good on this benchmark and is getting better all
the time.

If you run an EnterpriseDB install right out of the box versus a PG install
right out of the box, you'll notice that EnterpriseDB outperforms PG by
better than 2x.  This does NOT mean that EnterpriseDB is 2x faster than
Postgres...  EnterpriseDB is the same speed as Postgres.  We do something
we call "Dynatune" at db startup time.  The algorithm is pretty simple in
our current GA version and really only considers the amount of RAM, shared
memory, and machine usage pattern.  Manual tuning is required to really
optimize performance.

For great insight into the basics of quickly tuning PostgreSQL for a
reasonable starting point, check out the great instructions offered by
Josh Berkus and Joe Conway at http://www.powerpostgresql.com/PerfList/.

The moral of this unreasonably verbose email is that you shouldn't abuse
BenchmarkSQL and measure runs without making sure that, at least,
quick/simple best practices have been applied to tuning the db's you are
choosing to test.

--Denis Lussier
  CTO
  http://www.enterprisedb.com


Re: [PERFORM] Savepoint performance

2006-07-27 Thread Denis Lussier

My understanding of EDB's approach is that our prototype just
implicitly does a savepoint before each INSERT, UPDATE, or DELETE
statement inside of PLpgSQL.   We then roll back to that savepoint if a
SQL error occurs.  I don't believe our prelim approach changes any
transaction start/end semantics on the server side and it doesn't
change any PLpgSQL syntax either (although it does allow you to
optionally code commits &/or rollbacks inside stored procs).
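
This isn't EDB's code -- just a client-side libpq illustration of the same
savepoint-per-statement pattern described above (table names and
statements are placeholders).  psql's \set ON_ERROR_ROLLBACK on does the
equivalent on the client side:

    #include <stdio.h>
    #include <libpq-fe.h>

    static void exec_guarded(PGconn *conn, const char *sql)
    {
        PGresult *res;

        PQclear(PQexec(conn, "SAVEPOINT stmt_sp"));
        res = PQexec(conn, sql);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
        {
            /* the failed statement is undone; the transaction survives */
            fprintf(stderr, "rolled back: %s", PQerrorMessage(conn));
            PQclear(PQexec(conn, "ROLLBACK TO SAVEPOINT stmt_sp"));
        }
        else
            PQclear(PQexec(conn, "RELEASE SAVEPOINT stmt_sp"));
        PQclear(res);
    }

    int main(void)
    {
        PGconn *conn = PQconnectdb("dbname=mydb");     /* placeholder */

        if (PQstatus(conn) != CONNECTION_OK)
            return 1;

        PQclear(PQexec(conn, "BEGIN"));
        exec_guarded(conn, "INSERT INTO t VALUES (1)");      /* placeholder */
        exec_guarded(conn, "INSERT INTO nosuch VALUES (1)"); /* fails; tx survives */
        PQclear(PQexec(conn, "COMMIT"));
        PQfinish(conn);
        return 0;
    }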

Can anybody point me to a thread on the 7.3 disastrous experiment?

I personally think that doing commits or rollbacks inside stored
procedures is usually bad coding practice AND can be avoided...   It's
a backward-compatibility thing for non-ANSI legacy stuff and this is
why I was previously guessing that the community wouldn't be
interested in this for PLpgSQL.  Actually...  does anybody know
offhand if the ANSI standard for stored procs allows for explicit
transaction control inside of a stored procedure?

--Luss

On 7/27/06, Tom Lane <[EMAIL PROTECTED]> wrote:

"Denis Lussier" <[EMAIL PROTECTED]> writes:
> Would the community be potentially interested in this feature if we created
> a BSD Postgres patch of this feature for PLpgSQL (likely for 8.3)??

Based on our rather disastrous experiment in 7.3, I'd say that fooling
around with transaction start/end semantics on the server side is
unlikely to fly ...

regards, tom lane





Re: [PERFORM] Savepoint performance

2006-07-27 Thread Denis Lussier
We actually did some prelim benchmarking of this feature about six months
ago and we are actively considering adding it to our "closer to Oracle"
version of PLpgSQL.  I certainly don't want to suggest that it's a good
idea to do this because it's Oracle compatible.  :-)

I'll get someone to post our performance results on this thread.  As
Alvaro correctly alludes, it has an overhead impact that is measurable,
but likely acceptable for situations where the feature is desired (as long
as it doesn't negatively affect performance in the "normal" case).  I
believe the impact was something around a 12% average slowdown for the
handful of PLpgSQL functions we tested when this feature is turned on.

Would the community be potentially interested in this feature if we
created a BSD Postgres patch of this feature for PLpgSQL (likely for 8.3)?

--Luss

Denis Lussier
CTO
http://www.enterprisedb.com
On 7/27/06, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> Mark Lewis wrote:
> > So my question is, how expensive is setting a savepoint in PG?  If it's
> > not too expensive, I'm wondering if it would be feasible to add a config
> > parameter to psql or other client interfaces (thinking specifically of
> > jdbc here) to do it automatically.  Doing so would make it a little
> > easier to work with PG in a multi-db environment.
>
> It is moderately expensive.  It's cheaper than starting/committing a
> transaction, but certainly much more expensive than not setting a
> savepoint.
>
> In psql you can do what you want using \set ON_ERROR_ROLLBACK on.  This
> is clearly a client-only issue, so the server does not provide any
> special support for it (just like autocommit mode).
>
> --
> Alvaro Herrera                http://www.CommandPrompt.com/
> PostgreSQL Replication, Consulting, Custom Development, 24x7 support


Re: [PERFORM] postgres benchmarks

2006-07-23 Thread Denis Lussier
At EnterpriseDB we make extensive use of the OSDB's OLTP benchmark.  We
also use the Java-based benchmark called BenchmarkSQL from SourceForge.
Both of these benchmarks are update-intensive OLTP tests that closely
mimic the Transaction Processing Council's TPC-C benchmark.

Postgres also ships with pgbench, which is a simpler OLTP benchmark that I
believe is similar to a TPC-B.

--Denis Lussier
  CTO
  http://www.enterprisedb.com
On 7/21/06, Petronenko D.S. <[EMAIL PROTECTED]> wrote:
> Hello,
>
> does anybody use OSDB benchmarks for postgres?
> if not, which kind of benchmarks are used for postgres?
>
> Thanks,
> Denis.