> d) self-join with a function ;)
> EXPLAIN ANALYZE SELECT * FROM (SELECT n, id2username(n) AS username
> FROM (SELECT DISTINCT n FROM aaa) AS values) AS v_lookup RIGHT JOIN
> aaa USING (n);
That's pretty clever.
It sure seems like the server was not caching the results of the
function...maybe
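The distinct-then-join trick above is easy to demonstrate outside Postgres. Below is a small sketch using Python's sqlite3 module as a stand-in server; `id2username` here is a hypothetical expensive lookup registered as a SQL function, and the counter shows how many times each query form invokes it (whether the second form really collapses to one call per distinct value depends on whether the optimizer materializes the subquery, which is exactly what the thread is probing):

```python
import sqlite3

calls = 0

def id2username(n):
    """Stand-in for an expensive per-id lookup; counts invocations."""
    global calls
    calls += 1
    return f"user_{n}"

conn = sqlite3.connect(":memory:")
conn.create_function("id2username", 1, id2username)
conn.execute("CREATE TABLE aaa (n INTEGER)")
# 1000 rows but only 10 distinct values of n
conn.executemany("INSERT INTO aaa VALUES (?)", [(i % 10,) for i in range(1000)])

# Naive form: the function is evaluated once per output row
rows = conn.execute("SELECT n, id2username(n) FROM aaa").fetchall()
naive_calls = calls

# Thread's trick: evaluate per DISTINCT n, then join the results back
calls = 0
rows2 = conn.execute("""
    SELECT aaa.n, v.username
    FROM aaa
    JOIN (SELECT n, id2username(n) AS username
          FROM (SELECT DISTINCT n FROM aaa)) AS v ON v.n = aaa.n
""").fetchall()
distinct_calls = calls  # may still be per-row if the subquery gets flattened
print(naive_calls, distinct_calls, len(rows2))
```

Both forms return the same 1000 (n, username) pairs; the interesting part is comparing the two call counters on your own engine.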
> John A Meinel <[EMAIL PROTECTED]> writes:
> > Joel Fradkin wrote:
> >> Postgres was on the second run
> >> Total query runtime: 17109 ms.
> >> Data retrieval runtime: 72188 ms.
> >> 331640 rows retrieved.
>
> > How were you measuring "data retrieval time"?
>
> I suspect he's using pgadmin. We'
> In practice, we have watched Windows evolve in such a fashion with
> respect to multiuser support, and, in effect, it has never really
> gotten it. Microsoft started by hacking something on top of MS-DOS,
> and by the time enough applications had enough dependencies on the way
> that worked, it
> I am waiting to hear back from Josh on using cursors and trying to
> flatten long running views.
>
> I am a little disappointed I have not understood enough to get my
> analyzer to use the proper plan, we had to set seqscan off to get the
> select from response_line to work fast and I had to tur
Joel wrote:
I have been following threads (in case you don't know I bought a 4 proc
Dell recently) and the Opteron seems the way to go.
I just called HP for a quote, but don't want to make any mistakes.
[snip]
At your level of play it's the DL585.
Have you checked out http://www.swt.com?
Merlin
Justin wrote:
I have 6 Windows PCs in a test environment accessing a very small Postgres DB on
a 2003 Server. The PCs access the database with a COBOL app via ODBC. 3 of
the PCs operate very efficiently and quickly. 3 of them do not. The 3 that
do not are all new Dell XP Pro with SP2. They
> There are some immediate questions from our engineers about
> performance
>
> "- Oracle has one particular performance enhancement that Postgres is
> missing. If you do a select that returns 100,000 rows in a given
> order, and all you want are rows 99101 to 99200, then Oracle can do
> that very
>
> Hi !
>
> My company is evaluating making our system (developed in C++)
> compatible with PostgreSQL.
>
> Our programmer made a lot of tests and he informed me that the
> performance using ODBC is very similar to using libpq, even with a
> big number of simultaneous connections/queries. Of cour
[moved to pgsql-performance]
> > Currently I want to take a TPC-H test on postgresql-8.0.2. I have
> > downloaded the DBGEN and QGEN from the homepage of TPC. But I
> > encountered many problems which forced me to request some help.
> > 1. How to load the data from flat file generated by db
I need a fast way (sql only preferred) to solve the following problem:
I need the smallest integer that is greater than zero that is not in the
column of a table. In other words, if an 'id' column has values
1,2,3,4,6 and 7, I need a query that returns the value of 5.
I've already worked out a q
> Not so bad. Try something like this:
>
> SELECT min(id+1) as id_new FROM table
> WHERE (id+1) NOT IN (SELECT id FROM table);
>
> Now, this requires probably a sequential scan, but I'm not sure how
> you can get around that.
> Maybe if you got trickier and did some ordering and limits. The a
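The `min(id+1) ... NOT IN` query quoted above can be checked quickly with Python's sqlite3 module (a sketch, not Postgres, but the SQL is portable). Note one caveat: since it only ever returns `id+1`, it can never answer `1`, so the case where 1 itself is missing needs special handling:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
# the example from the thread: 1,2,3,4,6,7 -- the first gap is 5
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in (1, 2, 3, 4, 6, 7)])

# min over ids whose successor is absent gives the first gap
row = conn.execute("""
    SELECT min(id + 1) FROM t
    WHERE (id + 1) NOT IN (SELECT id FROM t)
""").fetchone()
print(row[0])  # 5
```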
John Meinel wrote:
> See my follow up post, which enables an index scan. On my system with
> 90k rows, it takes no apparent time.
> (0.000ms)
> John
> =:->
Confirmed. Hats off to you, the above is some really wicked querying.
IIRC I posted the same question several months ago with no response and
ha
> Merlin Moncure wrote:
>
> > I need a fast way (sql only preferred) to solve the following
> > problem: I need the smallest integer that is greater than zero that
> > is not in the column of a table.
> >
> > I've already worked out a query using genera
> On Tue, Jun 28, 2005 at 12:02:09 -0400,
> Merlin Moncure <[EMAIL PROTECTED]> wrote:
> >
> > Confirmed. Hats off to you, the above is some really wicked
> > querying. IIRC I posted the same question several months ago with
> > no response and had given up on
Cosimo wrote:
> I'm very interested in this "tricky query".
> Sorry John, but if I populate the `id_test' relation
> with only 4 tuples with id values (10, 11, 12, 13),
> the result of this query is:
>
>cosimo=> create table id_test (id integer primary key);
>NOTICE: CREATE TABLE / PRIMAR
> Milan Sekanina <[EMAIL PROTECTED]> writes:
> > We are running an application that uses psqlodbc driver on Windows
> > XP to connect to a server and for some reason the download of data
> > from the server is very slow. We have created a very simple test
> > application that inserts a larger amoun
> Stuart,
>
> > I'm putting together a road map on how our systems can scale as
> > our load increases. As part of this, I need to look into setting up
> > some fast read only mirrors of our database. We should have more
> > than enough RAM to fit everything into memory. I would like to find
> > out i
> I'm not sure how much this has been discussed on the list, but wasn't
> able to find anything relevant in the archives.
>
> The new Spamassassin is due out pretty soon. They are currently
> testing 3.1.0pre4. One of the things I hope to get out of this
> release is bayes word stats moved to a
Kari Lavikka wrote:
> shared_buffers = 15000
you can play around with this one but in my experience it doesn't make
much difference anymore (it used to).
> work_mem = 1536 # min 64, size in KB
this seems low. are you sure you are not getting sorts swapped to disk?
> fsy
> Kari Lavikka <[EMAIL PROTECTED]> writes:
> > samples   %        symbol name
> > 13513390  16.0074  AtEOXact_CatCache
>
> That seems quite odd --- I'm not used to seeing that function at the
> top of a profile. What is the workload being profiled, exactly?
He is running a commit_delay of 8.
> Actually, it seems to me that with the addition of the WAL in
> PostgreSQL and the subsequent decreased need to fsync the data files
> themselves (only during checkpoints?), that the only time a
> battery-backed write cache would make a really large performance
> difference would be on the drive(
Hello,
Doing some testing on upcoming 8.1 devel and am having serious issues
with new bitmap index scan feature. It is easy to work around (just
disable it) but IMO the planner is using it when a regular index scan
should be strongly favored. The performance of the bitmapscan in my
usage is actua
Christopher
> You could use a 1 column/1 row table perhaps. Use some sort of
> locking mechanism.
>
> Also, check out contrib/userlock
userlock is definitely the way to go for this type of problem.
They are really the only way to provide locking facilities that live
outside transactions.
You
> Kari Lavikka <[EMAIL PROTECTED]> writes:
> > However, those configuration changes didn't have a significant
> > effect on oprofile results. AtEOXact_CatCache consumes even more
> > cycles.
>
> I believe I've fixed that for 8.1.
Relative to 8.0, I am seeing a dramatic, almost miraculous reduction in
C
> Cool --- we've done a fair amount of work on squeezing out internal
> inefficiencies during this devel cycle, but it's always hard to
> predict just how much anyone will notice in the real world.
>
> Care to do some oprofile or gprof profiles to see where it's still
bad?
>
Since release of 8.0
> Bill of Materials Traversal ( ~ 62k records).
>
>                 ISAM*     pg 8.0    pg 8.1 devel   delta 8.0->8.1
> running time    63 sec    90 secs   71 secs        21%
> cpu load        17%       45%       32%            29%
> loadsecs**      10.71     40.5     22.72          44%
> recs/sec
> That seems quite peculiar; AFAICS the pgstat code shouldn't be any
> slower than before. At first I thought it might be because we'd
> increased PGSTAT_ACTIVITY_SIZE, but actually that happened before
> 8.0 release, so it shouldn't be a factor in this comparison.
Just FYI the last time I looked
> Hello,
> i have a pg-8.0.3 running on Linux kernel 2.6.8, CPU Sempron 2600+,
> 1Gb RAM on IDE HD ( which could be called a "heavy desktop" ),
> measuring this performance with pgbench ( found on /contrib ) it gave
> me an average ( after several runs ) of 170 transactions per second;
170 tps
> Ok, there is always a lot of talk about tuning PostgreSQL on linux and
> how PostgreSQL uses the linux kernel cache to cache the tables and
> indexes.
[...]
>
> 1. Implement a partition type layout using views and rules - This
> will allow me to have one table in each view with the "active" dat
> Putting pg_xlog on the IDE drives gave about 10% performance
> improvement. Would faster disks give more performance?
>
> What my application does:
>
> Every five minutes a new logfile will be imported. Depending on the
> source of the request it will be imported in one of three "raw click"
> t
Mark Kirkwood
> > The 'desc' seems to be the guy triggering the sort, e.g:
>
> Oh; really an accident that I didn't notice myself, I was actually
> going to remove all instances of "desc" in my simplification, but it
> seems like I forgot.
If desc is the problem you can push the query into a subqu
> Hello all,
>
> I was hoping someone could explain the plan for a statement.
>
> We have a table with a column of longs being used as an index. The
> query plan in 8.0 was like this:
>
> # explain select distinct timeseriesid from tbltimeseries where
> timeseriesid > 0 order by timeseriesid;
> Does such a solution exist now. To me this appears to be in entirety
> of what should constitute a database cluster. Only the search needs
> to be done on all the servers simultaneously at the low level. Once
> you get the results, the writing can be determined by the upper level
> logic (which ca
Tobias wrote:
> Splendid :-) Unfortunately we will not be upgrading for some months
> still, but anyway I'm happy. This provides yet another good argument
> for upgrading sooner. I'm also happy to see such a perfect match:
>
> - A problem that can be reduced from being complex and
>pr
Ulrich wrote:
> Hi again,
>
> first I want to say ***THANK YOU*** for everyone who kindly shared
> their thoughts on my hardware problems. I really appreciate it. I
> started to look for a new server and I am quite sure we'll get a
> serious hardware "update". As suggested by some people I would
> Hi Merlin,
> > Just a thought: have you considered having apache logs write to a
> > process that immediately makes insert query(s) to postgresql?
>
> Yes we have considered that, but dismissed the idea very soon. We need
> Apache to be as responsive as possible. It's a two server setup with
> l
> I'm having performance issues with a table consisting of 2,043,133
> rows. The schema is:
> locality_1 has 16650 distinct values and locality_2 has 1156 distinct
> values.
Just so you know I have a 2GHz p4 workstation with similar size (2M
rows), several keys, and can find and fetch 2k rows b
>        Table "public.address"
>      Column      |         Type         | Modifiers
> -----------------+----------------------+-----------
>  postcode_top    | character varying(2) | not null
>  postcode_middle | character varying(4) | not null
>  postcode_b
> -Original Message-
> From: Alvaro Herrera [mailto:[EMAIL PROTECTED]
> Sent: Thursday, September 01, 2005 3:34 PM
> To: Merlin Moncure
> Cc: Matthew Sackman; [email protected]
> Subject: Re: [PERFORM] Massive performance issues
>
> On Thu, Sep 01,
Carlos wrote:
SELECT * FROM SSIRRA where
(YEAR = 2004 and CUSTOMER = 04 and CODE = 00 and PART >= 00) or
(YEAR = 2004 and CUSTOMER = 04 and CODE > 00) or
(YEAR = 2004 and CUSTOMER > 04) or
(YEAR > 2004)
[snip]
ah, the positional query. You can always rewrite this query
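The rewrite Merlin is presumably alluding to is SQL row-value comparison, which expresses the whole OR chain as a single lexicographic test. A sketch with Python's sqlite3 (row values need SQLite 3.15+; hypothetical sample data) showing the two forms return the same rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ssirra (year INT, customer INT, code INT, part INT)")
rows = [(y, c, k, p) for y in (2003, 2004, 2005)
                     for c in (3, 4, 5)
                     for k in (0, 1)
                     for p in (0, 1)]
conn.executemany("INSERT INTO ssirra VALUES (?,?,?,?)", rows)

# The positional OR chain from the thread
positional = conn.execute("""
    SELECT * FROM ssirra WHERE
      (year = 2004 AND customer = 4 AND code = 0 AND part >= 0) OR
      (year = 2004 AND customer = 4 AND code > 0) OR
      (year = 2004 AND customer > 4) OR
      (year > 2004)
""").fetchall()

# Same predicate as one lexicographic row comparison
row_wise = conn.execute("""
    SELECT * FROM ssirra
    WHERE (year, customer, code, part) >= (2004, 4, 0, 0)
""").fetchall()
print(len(positional), len(row_wise))  # 20 20
```

(In PostgreSQL of this era the row-comparison form existed but wasn't index-backed until later releases, so the OR chain was sometimes still the faster plan.)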
> > Carlos wrote:
> > SELECT * FROM SSIRRA where
> > (YEAR = 2004 and CUSTOMER = 04 and CODE = 00 and PART >= 00) or
> > (YEAR = 2004 and CUSTOMER = 04 and CODE > 00) or
> > (YEAR = 2004 and CUSTOMER > 04) or
> > (YEAR > 2004)
> > [snip]
> >
> > ah, the positional query. Yo
> Hi,
>
> I usually use PostgreSQL coupled with Linux, but I have to use
> Windows for a particular project.
>
> So I wanted to do some tests to know if the performance will be
> acceptable (I don't need PostgreSQL to be as fast with windows as
> with linux, but it has to be usable...).
In
> > In my experience win32 is on par with linux generally with a few
> > gotchas on either side. Are your times with fsync=no? It's much
> > harder to give an apples-apples comparison with fsync=on for
> > various reasons.
> It is with fsync=off on windows, fsync=on on linux
well, inside a transaction this
> This makes me wonder if we are looking in the wrong place. Maybe the
> problem is coming from psql? More results to follow.
problem is not coming from psql.
One thing I did notice that in a 250k insert transaction the insert time
grows with #recs inserted. Time to insert first 50k recs is
> > One thing I did notice that in a 250k insert transaction the insert
> > time grows with #recs inserted. Time to insert first 50k recs is
> > about 27 sec and last 50k recs is 77 sec. I also confirmed that
> > size of table is not playing a role here.
> >
> > Marc, can you do select timeofday(
> On Tuesday 06 September 2005 19:11, Merlin Moncure wrote:
> Here's the timeofday ... i'll do the gprof as soon as I can.
> Every 5 rows...
>
> Wed Sep 07 13:58:13.860378 2005 CEST
> Wed Sep 07 13:58:20.926983 2005 CEST
> Wed Sep 07 13:58:27.928385 2005 CEST
&g
> Hello.
>
> I would like to build a shared repository for Enterprise Architect
> (http://www.sparxsystems.com.au/ea.htm) using PostgreSQL. I have done
> it before with Linux and FreeBSD servers and everything was working
> out of the box. The repository is a pretty simple database with less
> than
> On Tue, Sep 13, 2005 at 07:58:20AM -0400, Merlin Moncure wrote:
> This command is executed while a model is loaded from the repository.
>
> The table definition is:
> CREATE TABLE t_umlpattern (
> PatternID INTEGER DEFAULT nextval('"patternid_seq&quo
> Did you run the select remotely on a Windows server?
yes.
> Yes the server load is practically 0. Note the difference between
> local and remote execution of the command. I think you are right
> about the network problem possibility. But it is bound to PostgreSQL.
> MySQL on the same machine is in the 10 ms range. Definitely not 800
> ms. The 8.1 has the same problem.
>
> Just for the record: the server PC is Dell Precision 330 with 3Com
> 3C920 integrated network card. OS MS Windows Professional 2002 with
> service pack 2. There is Symantec Antivirus installed - which I have
> (hopefully)
> On Wed, Sep 14, 2005 at 11:28:43AM -0700, Jeffrey W. Baker wrote:
> > On Wed, 2005-09-14 at 11:25 -0700, Peter Darley wrote:
> > > I'm getting a new server for our database, and I have a quick
> > > question about RAID controllers with a battery backed cache. I
> > > understand that the
> > > ca
Well, pg being a multi-process architecture, on a 64 bit system you get
the advantages of extra memory for cache all day long. I don't think a
2GB memory limit per backend is a wall people are hitting very often,
even on high end systems. Performance wise, 32 vs. 64 bit is a tug of
war betwee
> >I previously posted the following as a sequel to my SELECT DISTINCT
> >Performance Issue question. We would most appreciate any clue or
> >suggestions on how to overcome this show-stopping issue. We are
> >using 8.0.3 on Windows.
> >
> >Is it a known limitation when using a view with SELECT ...
> >Here is a trick I use sometimes with views, etc. This may or may not
> >be effective to solve your problem but it's worth a shot. Create one
> >small SQL function taking date, etc. and returning the values and
> >define it immutable. Now in-query it is treated like a constant.
>
> We don't u
> At 02:07 05/09/23, Merlin Moncure wrote:
> > > >Here is a trick I use sometimes with views, etc. This may or may
> > > >not be effective to solve your problem but it's worth a shot.
> > > >Create one small SQL function taking
[to K C:] sorry, was out on vacation all last week. I was visualizing
the problem incorrectly anyways...
Jim wrote:
> That function is not immutable, it should be defined as stable.
That is 100% correct: however now and then I declare stable functions as
immutable in some cases because the plan
> It's a quad opteron system. RAID controller is a 4 channel LSILogic
> Megaraid 320 connected to 10 15k 36.7G SCSI disks. The disks are
> configured in 5 mirrored partitions. The pg_xlog is on one mirror and
> the data and indexes are spread over the other 4 using tablespaces.
> These numbers from
Chris wrote:
> [EMAIL PROTECTED] (Dan Harris) writes:
> > On Oct 3, 2005, at 5:02 AM, Steinar H. Gunderson wrote:
> >
> >> I thought this might be interesting, not the least due to the
> >> extremely low
> >> price ($150 + the price of regular DIMMs):
> >
> > Replying before my other post came thro
> What's the current status of how much faster the Opteron is compared
> to the Xeons? I know the Opterons used to be close to 2x faster, but
> is that still the case? I understand much work has been done to
> reduce the context switching storms on the Xeon architecture, is this
> correct?
Up
On 10/7/05, Cestmir Hybl <[EMAIL PROTECTED]> wrote:
Isn't it possible (and reasonable) for these environments to keep track
of whether there is a transaction in progress with update to given table
and if not, use an index scan (count(*) where) or cached value
(count(*)) to perform this kind of quer
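Short of planner support, the usual workaround for the cached-count idea above is to maintain the count yourself with triggers. A sketch using Python's sqlite3 (table and trigger names are illustrative); note that under real concurrency every writer serializes on the counter row, which is the classic drawback of this pattern:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (id INTEGER PRIMARY KEY, payload TEXT);
CREATE TABLE item_count (n INTEGER NOT NULL);
INSERT INTO item_count VALUES (0);

-- keep the counter in sync on every write
CREATE TRIGGER items_ins AFTER INSERT ON items
BEGIN
    UPDATE item_count SET n = n + 1;
END;

CREATE TRIGGER items_del AFTER DELETE ON items
BEGIN
    UPDATE item_count SET n = n - 1;
END;
""")
conn.executemany("INSERT INTO items (payload) VALUES (?)", [("x",)] * 100)
conn.execute("DELETE FROM items WHERE id <= 10")

cached = conn.execute("SELECT n FROM item_count").fetchone()[0]
real = conn.execute("SELECT count(*) FROM items").fetchone()[0]
print(cached, real)  # 90 90
```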
KC wrote:
>
> So I guess it all comes back to the basic question:
>
> For the query select distinct on (PlayerID) * from Player a where
> PlayerID='0' order by PlayerId Desc, AtDate Desc;
> can the optimizer recognise the fact the query is selecting by the
> primary key (PlayerID,AtDate), so
> The disk used for the data is an external raid array, I don't know
> much about that right now except I think is some relatively fast IDE
> stuff. In any case the operations should be cache friendly, we don't
> scan over and over the big tables...
Maybe you are I/O bound. Do you know if your RA
>
> Would it not be faster to do a dump/reload of the table than reindex,
> or is it about the same?
>
reindex is probably faster, but that's not the point. you can reindex a
running system whereas dump/restore requires downtime unless you work
everything into a transaction, which is a headache, and
Rodrigo wrote:
$$
As I understand it, the client needs to put the data into the server
using a textual-based command. This makes the 5MB data grow up to 5x,
making it 25MB in the worst case. (Example: 0x01 -> \\001).
My question is:
1) Is there any way for me to send the binary field directly wit
Kuba wrote:
> is there an easy way to flush all cached query plans in pl/pgsql
> functions? I've long running sessions where data are changing and the
> plans become inaccurate after a while. I can imagine something like
> recreating all pl/pgsql functions. I can recreate them from sql source
> fi
> If I turn on stats_command_string, how much impact would it have on
> PostgreSQL server's performance during a period of massive data
> INSERTs? I know that the answer to the question I'm asking will
> largely depend upon different factors so I would like to know in which
> situations it would b
> look at this:
> select count(*) from fotos where archivo not in (select archivo from
> archivos)
> Aggregate  (cost=4899037992.36..4899037992.37 rows=1 width=0)
>   ->  Seq Scan on fotos  (cost=22598.78..4899037338.07 rows=261716 width=0)
>         Filter: (NOT (subplan))
>         SubPlan
>
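Planners of this vintage handled `NOT IN` with a large subquery very badly; the standard rewrites are `NOT EXISTS` or an anti-join via `LEFT JOIN ... IS NULL`. A sketch with Python's sqlite3 showing the three forms agree (caveat not shown: `NOT IN` changes meaning if the subquery can return NULLs):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fotos (archivo TEXT)")
conn.execute("CREATE TABLE archivos (archivo TEXT)")
conn.executemany("INSERT INTO fotos VALUES (?)", [(f"f{i}",) for i in range(1000)])
# only the even-numbered files are registered
conn.executemany("INSERT INTO archivos VALUES (?)",
                 [(f"f{i}",) for i in range(0, 1000, 2)])

not_in = conn.execute("""
    SELECT count(*) FROM fotos
    WHERE archivo NOT IN (SELECT archivo FROM archivos)
""").fetchone()[0]

# Anti-join formulations that old planners handled much better
not_exists = conn.execute("""
    SELECT count(*) FROM fotos f
    WHERE NOT EXISTS (SELECT 1 FROM archivos a WHERE a.archivo = f.archivo)
""").fetchone()[0]

left_join = conn.execute("""
    SELECT count(*) FROM fotos f
    LEFT JOIN archivos a ON a.archivo = f.archivo
    WHERE a.archivo IS NULL
""").fetchone()[0]
print(not_in, not_exists, left_join)  # 500 500 500
```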
Christopher
> > - Present a nifty date selector to choose the records from any
> > day, hour, minute, second
> > - show them, with "next day" and "previous day" buttons
> >
> > - It's more useful to the user (most likely he wants to know
> > what happened on 01/05/2005 rather than vie
There are a few ways to do this...thinking about it a bit, I would add a
timestamp column to your log table (indexed) and keep a control table which
keeps track of the last log print sweep operation.
The print operation would just do
select * from log where logtime > (select lastlogtime());
Th
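The control-table sweep described above can be sketched end to end with Python's sqlite3 (the `lastlogtime` lookup is inlined as a subquery rather than the function named in the thread; names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE log (logtime REAL, msg TEXT);
CREATE INDEX log_time_idx ON log (logtime);
CREATE TABLE sweep_control (lastlogtime REAL NOT NULL);
INSERT INTO sweep_control VALUES (0);
""")

def append(t, msg):
    conn.execute("INSERT INTO log VALUES (?, ?)", (t, msg))

def sweep():
    """Fetch everything newer than the last sweep, then advance the marker."""
    rows = conn.execute("""
        SELECT logtime, msg FROM log
        WHERE logtime > (SELECT lastlogtime FROM sweep_control)
        ORDER BY logtime
    """).fetchall()
    if rows:
        conn.execute("UPDATE sweep_control SET lastlogtime = ?", (rows[-1][0],))
    return rows

append(1.0, "a"); append(2.0, "b")
first = sweep()          # picks up both rows, marker moves to 2.0
append(3.0, "c")
second = sweep()         # picks up only the new row
print(len(first), len(second))  # 2 1
```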
Kelly wrote:
> We are running some performance tests in which we are attempting to
> insert about 100,000,000 rows in a database at a sustained rate.
> About 50M rows in, our performance drops dramatically.
>
> This test is with data that we believe to be close to what we will
> encounter in produ
> > if that index is causing the problem, you may want to consider
> > setting up a partial index to exclude null values.
>
> This is a single column index. I assumed that null column values were
> not indexed. Is my assumption incorrect?
>
> -K
It turns out it is, or it certainly seems to be. I
> On Mon, Oct 31, 2005 at 12:32:03PM -0500, Merlin Moncure wrote:
> > if that index is causing the problem, you may want to consider
> > setting up a partial index to exclude null values.
>
> Hey all.
>
> Pardon my ignorance. :-)
>
> I've been trying to figu
> [EMAIL PROTECTED] writes:
> > I've been trying to figure out whether null values are indexed or
> > not from the documentation. I was under the impression, that null
> > values are not stored in the index.
>
> You're mistaken, at least with regard to btree indexes.
hmm. I tried several differ
> select * from sometable where somefield IS NULL won't work because
> IS is not a normally indexable operator.
Ah, I didn't know that. So there is no real reason not to exclude null
values from all your indexes :). Reading Tom's recent comments
everything is clear now.
Instead of using your two
hello performance minded administrators:
We have recently converted a number of routines that walk a bill of
materials (which is a nested structure) from the application side to the
server side via recursive plpgsql functions. The performance is
absolutely fantastic but I have to maintain a speci
Geoffrey wrote:
> We are going live with an application in a few months that is a
> complete rewrite of an existing application. We are moving from an
> existing proprietary database to Postgresql. We are looking for some
> insight/suggestions as to how folks test Postgresql in such a
> situation.
> The body of callit() need be little more than OidFunctionCall1()
> plus whatever error checking and security checking you want to
> include.
esp=# create table test(f text);
CREATE TABLE
esp=# create function test() returns void as
$$
begin
insert into test values ('c
> I've done the tests with rc1. This is still as slow on windows ...
> about 6-10 times slower than linux (via Ip socket). (depending on
> using prepared queries, etc...)
>
> By the way, we've tried to insert into the windows database from a
> linux psql client, via the network. In this confi
> Would you be willing to write up an example of this? We often get
> asked about support for WITH, so I bet there's other people who would
> be very interested in what you've got.
Sure. In fact, I had already decided this to be the next topic on my
blog. I'm assuming you are asking about tools t
> Would you be willing to write up an example of this? We often get
> asked about support for WITH, so I bet there's other people who would
> be very interested in what you've got.
>
You can see my blog on the subject here:
http://www.postgresql.org/docs/8.0/interactive/plpgsql.html#PLPGSQL-ADVANTAGES
oops. my blog is here: :-)
http://people.planetpostgresql.org/merlin/
>
> http://www.postgresql.org/docs/8.0/interactive/plpgsql.html#PLPGSQL-ADVANTAGES
---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?
http://www.
> select    when_stopped as when_happened,
>           1 as order_hint
> from transaction t
> where '2005-10-25 15:00:00' <= when_stopped
>   and when_stopped <= '2005-10-26 10:00:00'
> union all
> select    when_stopped as when_happened,
>
> Postgresql 8.0.4 using plpgsql
>
> The basic function is set up as:
> CREATE FUNCTION add_data(t_row mytable) RETURNS VOID AS $func$
> DECLARE
> newtable text;
> thesql text;
> BEGIN
> INSERT INTO newtable thename from mytable where lookup.id =
> t_row.id;
> thesql := 'INSERT INT
> Merlin Moncure wrote:
> > hmm, try pushing the union into a subquery...this is better style
> > because it's kind of ambiguous if the ordering will apply
> > before/after the union.
>
> Seems to be a little slower. There's a new "subquery scan" st
> On Wed, 2 Nov 2005, Merlin Moncure wrote:
> If you put client/server on the same machine, then we don't know how
> the CPU is split. Can you take a look at the approximate numbers by
> observing the task manager data while running?
ok, I generated a test case which was 250k
> Wow. I hadn't known about generate_series, but there are a bunch of
> places I've needed it.
It's a wonder tool :).
> But I think there is something I can do: I can just do a query of the
> transaction table sorted by start time. My graph tool can keep a
Reading the previous paragraphs I was
> The ANSI/ISO specs are not at all ambiguous on this. An
> ORDER BY is not allowed for the SELECT statements within
> a UNION. It must come at the end and applied to the resulting
> UNION.
Interesting :/
Merlin
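The spec point above (ORDER BY binds to the whole UNION, never to one branch) is easy to demonstrate; a sketch with Python's sqlite3, which follows the same rule, including the subquery trick for ordering/limiting a single branch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (v INT)")
conn.execute("CREATE TABLE t2 (v INT)")
conn.executemany("INSERT INTO t1 VALUES (?)", [(3,), (1,)])
conn.executemany("INSERT INTO t2 VALUES (?)", [(2,), (4,)])

# ORDER BY at the end applies to the result of the whole UNION
whole = conn.execute("""
    SELECT v FROM t1
    UNION ALL
    SELECT v FROM t2
    ORDER BY v
""").fetchall()

# To order/limit one branch independently, push it into a subquery;
# the outer UNION ALL makes no ordering promise, but LIMIT inside the
# branch now works:
branch = conn.execute("""
    SELECT v FROM (SELECT v FROM t1 ORDER BY v LIMIT 1)
    UNION ALL
    SELECT v FROM t2
""").fetchall()
print(whole)  # [(1,), (2,), (3,), (4,)]
print(branch)
```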
> Both win32 send/recv have pgwin32_poll_signals() in them. This is
> glorified WaitForSingleObjectEx on global pgwin32_signal_event. This
> is probably part of the problem. Can we work some of the same magic
> you put into the check interrupts macro?
Whoop! following a cvs update I see this is alr
> > Sorry, I don't follow you here - what do you mean to do? Remove the
> > event completely so we can't wait on it?
>
> I'd like to use the win32 provided recv(), send() functions instead
> of redirecting them to pgwin32_recv()/pgwin32_send(), just like libpq
> does. If we do this, we will lose
> You mean something like the attached?
not quite: attached is a file to generate test.
to do it:
psql yadda
\i timeit.sql
\t
\o dump.sql
select make_dump(5, false);
\q
cat dump.sql | psql -q yadda
and see what pops out. I had to do it that way because redirecting psql
to dump file caused
> > You mean something like the attached?
oh, btw I ran timeit.c and performance is flat and fairly fast. I'm
pretty sure psql is the culprit here.
Merlin
> Hello everyone.
>
> We are facing a performance problem with views consisting of several
> unioned tables. The simplified schema is as follows:
>
> CREATE TABLE foo (
>     foo_object_id   bigint,
>     link_id         bigint,
>     somedata        text,
>     PRIMARY KEY (foo_object_id
> > not quite: attached is a file to generate test.
>
> > cat dump.sql | psql -q yadda
>
> Ah. Does your psql have readline support? if so, does adding -n to
> that command change anything?
>
It doesn't, and it doesn't. :/ Ok, here's where it gets interesting. I
removed all the newlines fr
> That's bizarre ... I'd have thought a very long line would be more
> likely to trigger internal performance problems than the original.
>
> What happens if you read the file with "psql -f dump.sql" instead
> of cat/stdin?
non-flat. Also ran via \i and got non flat times.
> BTW, I get flat tim
>
> "Merlin Moncure" <[EMAIL PROTECTED]> writes:
> > yeah. I'm guessing the problem is in the mingw flex/bison (which I
> > really, really hope is not the case) or some other win32 specific
> > block of code. I'm snooping around there...
>
> M
ok, here is gprof output from newlines/no newlines
[newlines]
  %   cumulative    self               self    total
 time   seconds    seconds    calls  s/call   s/call  name
 19.03     0.67       0.67        1    0.67     3.20  MainLoop
 17.61     1.29       0.62   500031    0.00
Nailed it.
problem is in mainloop.c -> setup_cancel_handler. Apparently you can
have multiple handlers and windows keeps track of them all, even if they
do the same thing. Keeping track of so many system handles would
naturally slow the whole process down. Commenting that line times are
flat as
> "Merlin Moncure" <[EMAIL PROTECTED]> writes:
> > Nailed it.
>
> > problem is in mainloop.c -> setup_cancel_handler. Apparently you
> > can have multiple handlers and windows keeps track of them all,
> > even if they do the same thin
> The point Gentlemen, was that Good Architecture is King. That's what
> I was trying to emphasize by calling proper DB architecture step 0.
> All other things being equal (and they usually aren't, this sort of
> stuff is _very_ context dependent), the more of your critical schema
> that you can
>
> Hardware-wise I'd say dual core opterons. One dual-core-opteron
> performs better than two single-core at the same speed. Tyan makes
> some boards that have four sockets, thereby giving you 8 cpu's (if
> you need that many). Sun and HP also make nice hardware although the
> Tyan board is more co