On Thu, 2006-02-16 at 12:35 +0100, Steinar H. Gunderson wrote:
> glibc-2.3.5/stdlib/qsort.c:
>
> /* Order size using quicksort. This implementation incorporates
> four optimizations discussed in Sedgewick:
>
> I can't see any references to merge sort in there at all.
stdlib/qsort.c defin
On Wed, 2006-02-15 at 18:28 -0500, Tom Lane wrote:
> It seems clear that our qsort.c is doing a pretty awful job of picking
> qsort pivots, while glibc is mostly managing not to make that mistake.
> I haven't looked at the glibc code yet to see what they are doing
> differently.
glibc qsort is act
On Fri, 2006-01-20 at 18:14 +0900, James Russell wrote:
> I am looking to speed up performance, and since each page executes a
> static set of queries where only the parameters change, I was hoping
> to take advantage of stored procedures since I read that PostgreSQL
> caches the execution plans
On Fri, 2006-01-13 at 15:10 -0500, Michael Stone wrote:
> OIDs seem to be on their way out, and most of the time you can get a
> more helpful result by using a serial primary key anyway, but I wonder
> if there's any extension to INSERT to help identify what unique id a
> newly-inserted key will ge
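A minimal sketch of the usual pre-8.2 approach (INSERT ... RETURNING did not
exist yet): read the serial column's sequence with currval() in the same
session. All names below are invented for illustration:

    CREATE TABLE items (id serial PRIMARY KEY, name text);

    INSERT INTO items (name) VALUES ('widget');

    -- currval() is session-local, so this returns the id generated by the
    -- INSERT above even if other sessions are inserting concurrently.
    SELECT currval('items_id_seq');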
On Mon, 2005-12-05 at 09:42 +0200, Howard Oblowitz wrote:
> I am trying to run a query that selects 26 million rows from a
> table with 68 byte rows.
>
> When run on the Server via psql the following error occurs:
>
> calloc : Cannot allocate memory
That's precisely what I'd expect: the backend
On Mon, 2005-07-11 at 19:07 +0100, Enrico Weigelt wrote:
> I've got a similar problem: I have to match different datatypes,
> ie. bigint vs. integer vs. oid.
>
> Of course I tried to use casted index (aka ON (foo::oid)), but
> it didn't work.
Don't include the cast in the index definition, inc
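A sketch of that advice with hypothetical names: keep the index on the bare
column and put the cast on the comparison value instead, so both sides of the
predicate have the same type:

    -- Plain index, no cast in the definition:
    CREATE INDEX files_parent_idx ON files (parent);

    -- Cast the literal (or the other column) to the indexed column's type
    -- in the query, so the planner sees a same-type comparison:
    SELECT * FROM files WHERE parent = 42::bigint;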
On Mon, 2005-10-31 at 17:16 -0600, PostgreSQL wrote:
> We're running 8.1beta3 on one server and are having ridiculous performance
> issues. This is a 2 cpu Opteron box and both processors are staying at 98
> or 99% utilization processing not-that-complex queries. Prior to the
> upgrade, our I/
On Sun, 2005-10-23 at 21:36 -0700, Josh Berkus wrote:
> SELECT id INTO v_check
> FROM some_table ORDER BY id LIMIT 1;
>
> IF id > 0 THEN
>
> ... that says pretty clearly to code maintainers that I'm only interested in
> finding out whether there are any rows in the table, while making sure I
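For that specific intent -- just checking whether any row exists -- an EXISTS
test is a common alternative; a sketch with a placeholder table name:

    -- True as soon as one row is found; no ORDER BY or LIMIT needed.
    SELECT EXISTS (SELECT 1 FROM some_table);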
On Fri, 2005-10-21 at 07:34 -0500, Martin Nickel wrote:
> Let's say I do the same thing in Postgres. I'm likely to have my very
> fastest performance for the first few queries until memory gets filled up.
No, you're not: if a query doesn't hit the cache (both the OS cache and
the Postgres userspa
On Mon, 2005-09-26 at 12:54 -0500, Announce wrote:
> Is there a performance benefit to using int2 (instead of int4) in cases
> where I know I will be well within its numeric range?
int2 uses slightly less storage space (2 bytes rather than 4). Depending
on alignment and padding requirements, as w
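A contrived pair of tables to illustrate the alignment point (int4 needs
4-byte alignment, int2 only 2-byte; names are made up):

    -- 'a' is followed by 2 bytes of padding so that 'b' starts on a
    -- 4-byte boundary; the int2 saves nothing here.
    CREATE TABLE mixed (a int2, b int4);

    -- Two int2 columns pack into the same 4 bytes with no padding.
    CREATE TABLE packed (a int2, b int2);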
Cristian Prieto wrote:
Anyway, do you know where I could get more info and theory about
database optimizer plans (in general)?
Personally I like this survey paper on query optimization:
http://citeseer.csail.mit.edu/371707.html
The paper also cites a lot of other papers that cover specific
Pryscila B Guttoski wrote:
In my master's course, I'm studying PostgreSQL's optimizer.
I don't know whether anyone on this list has participated in the
development of PostgreSQL's optimizer, but maybe someone can help me
with this question.
pgsql-hackers might be more appropriate.
PostgreSQL
Jignesh Shah wrote:
Now the question is why there are so many calls to MemoryContextSwitchTo
in a single SELECT query command? Can it be minimized?
I agree with Tom -- if profiling indicates that MemoryContextSwitchTo()
is the bottleneck, I would be suspicious that your profiling setup is
mis
Jim C. Nasby wrote:
Actually, from what I've read 4.2BSD actually took priority into account
when scheduling I/O.
FWIW, you can set I/O priority in recent versions of the Linux kernel
using ionice, which is part of RML's schedutils package (which was
recently merged into util-linux).
-Neil
Gnanavel S wrote:
reindex the tables separately.
Reindexing should not affect this problem, anyway.
-Neil
Tom Arthurs wrote:
I just pushed 8.0.3 to production on Sunday, and haven't had time to
really monitor it under load, so I can't tell if it's helped the context
switch problem yet or not.
8.0 is unlikely to make a significant difference -- by "current sources"
I meant the current CVS HEAD so
Tom Arthurs wrote:
Yes, shared buffers in postgres are not used for caching
Shared buffers in Postgres _are_ used for caching, they just form a
secondary cache on top of the kernel's IO cache. Postgres does IO
through the filesystem, which is then cached by the kernel. Increasing
shared_buff
Mark Rinaudo wrote:
I'm running the Redhat Version of Postgresql which came pre-installed
with Redhat ES. It's version number is 7.3.10-1. I'm not sure what
options it was compiled with. Is there a way for me to tell?
`pg_config --configure` in recent releases.
Should I just compile my own p
Mark Stosberg wrote:
I've used PQA to analyze my queries and am happy overall with how they are
running. About 55% of the query time is going to variations of the pet
searching query, which seems like where it should be going. The query is
frequent and complex. It has already been combed over for ap
On Sun, 2005-05-29 at 16:17 -0400, Eric Lauzon wrote:
> So OID can be beneficial on static tables
OIDs aren't beneficial on "static tables"; unless you have unusual
requirements[1], there is no benefit to having OIDs on user-created
tables (see the default_with_oids GUC var, which will default to
mark durrant wrote:
PostgreSQL Machine:
"Aggregate (cost=140122.56..140122.56 rows=1 width=0)
(actual time=24516.000..24516.000 rows=1 loops=1)"
" -> Index Scan using "day" on mtable
(cost=0.00..140035.06 rows=35000 width=0) (actual
time=47.000..21841.000 rows=1166025 loops=1)"
"Inde
Tom Lane wrote:
Performance?
I'll run some benchmarks tomorrow, as it's rather late in my time zone.
If anyone wants to post some benchmark results, they are welcome to.
I disagree completely with the idea of forcing this behavior for all
datatypes. It could only be sensible for fairly wide valu
Josh Berkus wrote:
The other problem, as I was told it at OSCON, was that these were not
high-availability clusters; it's impossible to add a server to an existing
cluster
Yeah, that's a pretty significant problem.
a server going down is liable to take the whole cluster down.
That's news to me. D
Joshua D. Drake wrote:
Neil Conway wrote:
Oh? What's wrong with MySQL's clustering implementation?
Ram only tables :)
Sure, but that hardly makes it not "usable". Considering the price of
RAM these days, having enough RAM to hold the database (distributed over
the entire c
Josh Berkus wrote:
Don't hold your breath. MySQL, to judge by their first "clustering"
implementation, has a *long* way to go before they have anything usable.
Oh? What's wrong with MySQL's clustering implementation?
-Neil
Tom Lane wrote:
I have a gut reaction against that: it makes hash indexes fundamentally
subservient to btrees.
I wouldn't say "subservient" -- if there is no ordering defined for the
index key, we just do a linear scan.
However: what about storing the things in hashcode order? Ordering uint32s
d
Tom Lane wrote:
On the other hand, once you reach the target index page, a hash index
has no better method than linear scan through all the page's index
entries to find the actually wanted key(s)
I wonder if it would be possible to store the keys in a hash bucket in
sorted order, provided that the
Jim C. Nasby wrote:
>> No, hash joins and hash indexes are unrelated.
I know they are now, but does that have to be the case?
I mean, the algorithms are fundamentally unrelated. They share a bit of
code such as the hash functions themselves, but they are really solving
two different problems (dis
Jim C. Nasby wrote:
Having indexes that people shouldn't be using does add confusion for
users, and presents the opportunity for foot-shooting.
Emitting a warning/notice on hash-index creation is something I've
suggested in the past -- that would be fine with me.
Even if there is some kind of adv
Christopher Petrilli wrote:
This being the case, is there ever ANY reason for someone to use it?
Well, someone might fix it up at some point in the future. I don't think
there's anything fundamentally wrong with hash indexes, it is just that
the current implementation is a bit lacking.
If not, t
Ying Lu wrote:
May I know for simple "=" operation query, for "Hash index" vs. "B-tree"
index, which can provide better performance please?
I don't think we've found a case in which the hash index code
outperforms B+-tree indexes, even for "=". The hash index code also has
a number of additional
Keith Worthington wrote:
-> Seq Scan on tbl_current (cost=0.00..1775.57 rows=76457
width=31) (actual time=22.870..25.024 rows=605 loops=1)
This rowcount is way off -- have you run ANALYZE recently?
-Neil
Tom Lane wrote:
The larger point is that writing an estimator for an SRF is frequently a
task about as difficult as writing the SRF itself
True, although I think this doesn't necessarily kill the idea. If
writing an estimator for a given SRF is too difficult, the user is no
worse off than they ar
Tom Lane wrote:
Not too many releases ago, there were several columns in pg_proc that
were intended to support estimation of the runtime cost and number of
result rows of set-returning functions. I believe in fact that these
were the remains of Joe Hellerstein's thesis on expensive-function
evalua
Adam Palmblad wrote:
can I actually look at the call tree that occurs when my function is
being executed or will I be limited to viewing calls to functions in
the postmaster binary?
You're the one with the gprof data, you tell us :)
It wouldn't surprise me if gprof didn't get profiling data for dlo
Bruno Wolff III wrote:
Functions are just black boxes to the planner.
... unless the function is a SQL function that is trivial enough for the
planner to inline it into the plan of the invoking query. Currently, we
won't inline set-returning SQL functions that are used in the query's
rangetable,
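A sketch of the kind of function the planner can see through -- a trivial
scalar SQL function (the function, table, and column names are invented):

    -- The planner can inline the body, so the WHERE clause below behaves
    -- as if "price * 1.1 > 100" had been written directly.
    CREATE FUNCTION with_tax(numeric) RETURNS numeric AS
        'SELECT $1 * 1.1' LANGUAGE sql IMMUTABLE;

    SELECT * FROM products WHERE with_tax(price) > 100;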
Magnus Hagander wrote:
Yes, fsync=false is very good for bulk loading *IFF* you can live with
data loss in case you get a crash during load.
It's not merely data loss -- you could encounter potentially
unrecoverable database corruption.
There is a TODO item about allowing the delaying of WAL writ
Magnus Hagander wrote:
You can *never* get above 80 without using write cache, regardless of
your OS, if you have a single disk.
Why? Even with, say, a 15K RPM disk? Or the ability to fsync() multiple
concurrently-committing transactions at once?
-Neil
On Sat, 2005-02-05 at 14:42 -0500, Tom Lane wrote:
> Marinos Yannikos <[EMAIL PROTECTED]> writes:
> > Some more things I tried:
>
> You might try the attached patch (which I just applied to HEAD).
> It cuts down the number of acquisitions of the BufMgrLock by merging
> adjacent bufmgr calls during
On Fri, 2004-11-26 at 14:37 +1300, Andrew McMillan wrote:
> In PostgreSQL the UPDATE will result
> internally in a new record being written, with the old record being
> marked as deleted. That old record won't be re-used until after a
> VACUUM has run, and this means that the on-disk tables will h
Josh Berkus wrote:
I was under the impression that work_mem would be used for the index if there
was an index for the RI lookup. Wrong?
Yes -- work_mem is not used for doing index scans, whether for RI
lookups or otherwise.
-Neil
Dawid Kuroczko wrote:
Side question: Do TEMPORARY tables operations end up in PITR log?
No.
-Neil
On Fri, 2004-11-05 at 02:47, Chris Browne wrote:
> Another thing that would be valuable would be to have some way to say:
>
> "Read this data; don't bother throwing other data out of the cache
>to stuff this in."
This is similar, although not exactly the same thing:
http://www.opengroup.or
On Thu, 2004-11-04 at 23:29, Pierre-Frédéric Caillaud wrote:
> There is also the fact that syncing after every transaction could be
> changed to syncing every N transactions (N fixed or depending on the data
> size written by the transactions) which would be more efficient than the
> cu
On Fri, 2004-11-05 at 06:20, Steinar H. Gunderson wrote:
> You mean, like, open(filename, O_DIRECT)? :-)
This disables readahead (at least on Linux), which is certainly not what we
want: for the very case where we don't want to keep the data in cache
for a while (sequential scans, VACUUM), we also want
On Mon, 2004-11-01 at 11:01, Josh Berkus wrote:
> > Gist indexes take a long time to create as compared
> > to normal indexes is there any way to speed them up ?
> >
> > (for example by modifying sort_mem or something temporarily )
>
> More sort_mem will indeed help.
How so? sort_mem improves ind
On Mon, 2004-10-25 at 17:17, Curt Sampson wrote:
> When you select all the columns, you're going to force it to go to the
> table. If you select only the indexed column, it ought to be able to use
> just the index, and never read the table at all.
Perhaps in other database systems, but not in Post
Matt Clark wrote:
I'm thinking along the lines of an FS that's aware of PG's strategies and
requirements and therefore optimised to make those activities as efficient
as possible - possibly even being aware of PG's disk layout and treating
files differently on that basis.
As someone else noted, thi
On Fri, 2004-10-15 at 04:38, Igor Maciel Macaubas wrote:
> I have around 100 tables, and divided them in 14 different schemas,
> and then adapted my application to use schemas as well.
> I could see that the query / insert / update times got quite a bit
> faster than when I was using the old un
On Thu, 2004-10-14 at 04:57, Mark Wong wrote:
> I have some DBT-3 (decision support) results using Gavin's original
> futex patch fix.
I sent an initial description of the futex patch to the mailing lists
last week, but it never appeared (from talking to Marc I believe it
exceeded the size limit
On Thu, 2004-10-07 at 08:26, Paul Ramsey wrote:
> The shared_buffers are shared (go figure) :). It is all one pool shared
> by all connections.
Yeah, I thought this was pretty clear. Doug, can you elaborate on where
you saw the misleading docs?
> The sort_mem and vacuum_mem are *per*connection*
On Tue, 2004-09-28 at 08:42, Gaetano Mendola wrote:
> Now I'm reading an article, written by the same author who inspired the magic "300"
> in analyze.c, about "Self-tuning Histograms". If this were implemented, I understand
> we could get rid of "vacuum analyze" for keeping the statistics up to date.
On Mon, 2004-09-20 at 17:57, Guy Thornley wrote:
> According to the manpage, O_DIRECT implies O_SYNC:
>
> File I/O is done directly to/from user space buffers. The I/O is
> synchronous, i.e., at the completion of the read(2) or write(2)
> system call, data is guaranteed to
On Thu, 2004-09-23 at 05:59, Tom Lane wrote:
> I think this would allow the problems of cached plans to bite
> applications that were previously not subject to them :-(.
> An app that wants plan re-use can use PREPARE to identify the
> queries that are going to be re-executed.
I agree; if you want
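For reference, the PREPARE route mentioned above looks like this (the
statement, table, and column names are placeholders):

    -- Plan once, execute many times with different parameters:
    PREPARE get_order (integer) AS
        SELECT * FROM orders WHERE order_id = $1;

    EXECUTE get_order(42);
    EXECUTE get_order(43);

    -- The prepared statement persists for the rest of the session unless
    -- explicitly deallocated:
    DEALLOCATE get_order;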
Tom Lane wrote:
Markus Schaber <[EMAIL PROTECTED]> writes:
So, now my question is, why does the query optimizer not recognize that
it can throw away those "non-unique" Sort/Unique passes?
Because the issue doesn't come up often enough to justify expending
cycles to check for it.
How many cycles are
Christopher Browne wrote:
One of our sysadmins did all the "configuring OS stuff" part; I don't
recall offhand if there was a need to twiddle something in order to
get it to have great gobs of shared memory.
FWIW, the section on configuring kernel resources under various
Unixen[1] doesn't have any
Rosser Schwarz wrote:
PostgreSQL uses the operating system's disk cache.
... in addition to its own buffer cache, which is stored in shared
memory. You're correct though, in that the best practice is to keep the
PostgreSQL cache small and give more memory to the operating system's
disk cache.
P
Eugeny Balakhonov wrote:
I tried to run a simple query:
select * from files_t where parent =
Use this instead:
select * from files_t where parent = '';
("parent = ::int8" would work as well.)
PostgreSQL (< 7.5) won't consider using an indexscan when the predicate
involves an integer lit
On Fri, 2004-05-14 at 17:08, Jaime Casanova wrote:
> is there any diff. in performance if i use smallint in place of integer?
Assuming you steer clear of planner deficiencies, smallint should be
slightly faster (since it consumes less disk space), but the performance
difference should be very smal
On Wed, 2004-05-12 at 05:02, Shridhar Daithankar wrote:
> I agree. For shared buffers, start with 5000 and increase in batches of 1000. Or
> set it to a high value and check with ipcs for maximum shared memory usage. If
> shared memory usage peaks at 100MB, you don't need more than say 120MB of buf
On Mon, 2004-04-05 at 11:36, Josh Berkus wrote:
> Unfortunately, these days only Tom and Neil seem to be seriously working on
> the query planner (beg pardon in advance if I've missed someone)
Actually, Tom is the only person actively working on the planner --
while I hope to contribute to it in
Andrew Sullivan wrote:
"Intended", no. "Expected", yes. This topic has had the best
Postgres minds work on it, and so far nobody's come up with a
solution.
Actually, this has already been fixed in CVS HEAD (as I mentioned in
this thread yesterday). To wit:
nconway=# create table t1 (a int8);
CR
Steven Butler wrote:
I've recently converted a database to use bigint for the indices. Suddenly
simple queries like
select * from new_test_result where parent_id = 2
are doing full table scans instead of using the index.
This is fixed in CVS HEAD. In the mean time, you can enclose the
integer li
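Spelled out against the query quoted above, the two usual workarounds are:

    -- Quote the literal so its type is resolved against the column:
    SELECT * FROM new_test_result WHERE parent_id = '2';

    -- Or cast the literal to the column's type explicitly:
    SELECT * FROM new_test_result WHERE parent_id = 2::bigint;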
Mike Nolan wrote:
Is there a way to copy a table INCLUDING the check constraints? If not,
then that information is lost, unlike varchar(n).
"pg_dump -t" should work fine, unless I'm misunderstanding you.
-Neil
Simon Riggs wrote:
On the other hand, I was just about to change the wal_debug behaviour to
allow better debugging of PITR features as they're added.
That's a development activity. Enabling the WAL_DEBUG #ifdef by
default during the 7.5 development cycle would be uncontroversial, I
think.
I thin
Josh Berkus wrote:
Hmmm. I was told that it was this way for 7.4 as well; that's why it's in
the docs that way.
No such statement is made in the docs AFAIK: they merely say "If
nonzero, turn on WAL-related debugging output."
I invented a new #ifdef symbol when making this change in CVS HEAD, s
Simon Riggs wrote:
Josh Berkus wrote
Simon Riggs wrote
Please set WAL_DEBUG to 1 so we can see a bit more info: thanks.
I'm pretty sure that WAL_DEBUG requires a compile-time option.
I'm surprised, but you are right, the manual does SAY this requires a
compile time option; it is unfortunately not
Shridhar Daithankar <[EMAIL PROTECTED]> writes:
> Right now, it is hotly debated on HACKERS about adding a NOWAIT
> clause to SELECT FOR UPDATE. If you think your application
> deployment is away for months and can try CVS head, you can expect
> some action on it in coming few days.
You can also t
"Loeke" <[EMAIL PROTECTED]> writes:
> do views exist physically as a separate "table", or are they generated
> on the fly whenever they are queried?
Views are implemented by rewriting queries into the appropriate query
on the view's base tables.
http://www.postgresql.org/docs/current/static/rules-vi
Richard Huxton <[EMAIL PROTECTED]> writes:
> I didn't think they'd be meaningful for a statement-level
> trigger. Surely OLD/NEW are by definition row-level details.
Granted; the feature in question is *some* means of accessing the
result set of a statement-level trigger -- it probably would not u
Harald Fuchs <[EMAIL PROTECTED]> writes:
> Does anyone know how to access the affected values for
> statement-level triggers? I mean what the "old" and "new"
> pseudo-records are for row-level triggers.
Yeah, I didn't get around to implementing that. If anyone wants this
feature, I'd encourage th
"scott.marlowe" <[EMAIL PROTECTED]> writes:
> Yes, a previously run query should be faster, if it fits in the kernel
> cache.
Or the PostgreSQL buffer cache.
> Plus, the design of Postgresql is such that it would have to do a
> LOT of cache checking to see if there were any updates to the
> underlying
John Siracusa <[EMAIL PROTECTED]> writes:
> 1. The query "select max(foo) from bar" where the column foo has an index.
> Aren't indexes ordered? If not, an "ordered index" would be useful in this
> situation so that this query, rather than doing a sequential scan of the
> whole table, would just "
David Shadovitz <[EMAIL PROTECTED]> writes:
> What could account for this difference?
Lots of things -- disk fragmentation, expired tuples that aren't being
cleaned up by VACUUM due to a long-lived transaction, the state of the
kernel buffer cache, the configuration of the kernel, etc.
> How can
"David Shadovitz" <[EMAIL PROTECTED]> writes:
> I'm running PG 7.2.2 on RH Linux 8.0.
Note that this version of PostgreSQL is quite old.
> I'd like to know why "VACUUM ANALYZE " is extemely slow (hours) for
> certain tables.
Is there another concurrent transaction that has modified the table
bu
"Sean P. Thomas" <[EMAIL PROTECTED]> writes:
> 1. Is there any performance difference for declaring a primary or
> foreign key as a column or table constraint? From the documentation,
> which way is faster and/or scales better:
>
> CREATE TABLE distributors (
> did integer,
> namev
Tom Lane <[EMAIL PROTECTED]> writes:
> I don't believe anyone has proposed removing the facility
> altogether. There's a big difference between making the default
> behavior be not to have OIDs and removing the ability to have OIDs.
Right, that's what I had meant to say. Sorry for the inaccuracy.
Sai Hertz And Control Systems <[EMAIL PROTECTED]> writes:
> I have created my tables without OIDs; now my questions are:
> 1. Will this speed up the data insertion process
Slightly. It means that each inserted row will be 4 bytes smaller (on
disk), which in turn means you can fit more tuples on a p
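For reference, OIDs can also be omitted explicitly at table-creation time;
the column list below is only a placeholder:

    CREATE TABLE samples (
        id      integer,
        payload text
    ) WITHOUT OIDS;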
Mark Kirkwood <[EMAIL PROTECTED]> writes:
> Note : The Pgbench runs were conducted using -s 10 and -t 1000 -c
> 1->64, 2 - 3 runs of each setup were performed (averaged figures
> shown).
FYI, the pgbench docs state:
NOTE: scaling factor should be at least as large as the largest
numbe
stephen farrell <[EMAIL PROTECTED]> writes:
> With the indexes created it worked. It took about 4 hours, but it
> inserted all of the records.
Has this been satisfactorily resolved?
If not, can you post an EXPLAIN ANALYZE for the failing query, as Tom
asked earlier?
-Neil
Shridhar Daithankar <[EMAIL PROTECTED]> writes:
> This is not a bug. It is just that people find it confusing when the
> postgresql planner considers seemingly identical types as different.
It certainly is a bug, or at least a deficiency: the PostgreSQL planner
*could* use the index to process the query, but the
LIANHE SHAO <[EMAIL PROTECTED]> writes:
> Hello, I use php as front-end to query our database. When I use
> System Monitor to check the usage of cpu and memory, I noticed that
> the cpu very easily gets up to 100%. Is that normal? If not, could
> someone point out a possible reason?
You haven't giv
Steve Wampler <[EMAIL PROTECTED]> writes:
> PG: 7.2.3 (RedHat 8.0)
You're using PG 7.2.3 with the PG 7.1 JDBC driver; FWIW, upgrading to
newer software is highly recommended.
> The two sites were performing at comparable speeds until a few days
> ago, when we deleted several million records from
Josh Berkus <[EMAIL PROTECTED]> writes:
> Oh, good. Was this a 7.4 improvement?
No, it was in 7.3
-Neil
Josh Berkus <[EMAIL PROTECTED]> writes:
> 1) to keep it working, you will probably need to run ANALYZE more
>often than you have been;
I'm not sure why this would be the case -- can you elaborate?
> 4) Currently, pg_dump does *not* back up statistics settings.
Yes, it does.
-Neil
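Assuming the statistics settings in question are the per-column targets,
those are the ones set like this (table and column names invented):

    -- Raise the statistics target for one column; takes effect at the
    -- next ANALYZE.
    ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 200;
    ANALYZE orders;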
LIANHE SHAO <[EMAIL PROTECTED]> writes:
> We will have a very large database to store microarray data (may
> exceed 80-100G some day). now we have 1G RAM, 2G Hz Pentium 4, 1
> CPU. and enough hard disk.
> Could anybody tell me that our hardware is an issue or not?
IMHO the size of the DB is less
Stefan Champailler <[EMAIL PROTECTED]> writes:
> So here's my trouble: some DELETE statements take up to 1 minute to
> complete (but not always, sometimes it's fast, sometimes it's that
> slow). Here's a typical one : DELETE FROM response_bool WHERE
> response_id = '125' The response_bool table has
<[EMAIL PROTECTED]> writes:
> But it was not this bad in 7.3 as far as I understand.
No, I believe this behavior is present in any recent release of
PostgreSQL.
-Neil
Torsten Schulz <[EMAIL PROTECTED]> writes:
> Our Server:
> Dual-CPU with 1.2 GHz
> 1.5 GB RAM
What kind of I/O subsystem is in this machine? This is an x86 machine,
right?
> Does anyone have an idea of the best configuration for that server?
It is difficult to say until you provide some informatio
Tom Lane <[EMAIL PROTECTED]> writes:
> (I believe the previous discussion also agreed that we wanted to
> postpone the freezing of now(), which currently also happens at
> BEGIN rather than the first command after BEGIN.)
That doesn't make sense to me: from a user's perspective, the "start
of the
Josh Berkus <[EMAIL PROTECTED]> writes:
> The only thing you're adding to the query is a second SORT step, so it
> shouldn't require any more time/memory than the query's first SORT
> did.
Interesting -- I wonder if it would be possible for the optimizer to
detect this and avoid the redundant inn
Suchandra Thapa <[EMAIL PROTECTED]> writes:
> I was thinking using about using a raid 1+0 array to hold the
> database but since I can use different array types, would it be
> better to use 1+0 for the wal logs and a raid 5 for the database?
It has been recommended on this list that getting a RAID
"Marc G. Fournier" <[EMAIL PROTECTED]> writes:
> -> Index Scan using tl_month on traffic_logs ts (cost=0.00..30763.02 rows=8213
> width=16) (actual time=0.29..5562.25 rows=462198 loops=1)
> Index Cond: (month_trunc(runtime) = '2003-10-01 00:00:00'::timestamp without
> time zone)
Interest
"Patrick Hatcher" <[EMAIL PROTECTED]> writes:
> Do you have an index on ts.bytes? Josh had suggested this and after I put
> it on my summed fields, I saw a speed increase.
What's the reasoning behind this? ISTM that sum() should never use an
index, nor would it benefit from using one.
-Neil
--
<[EMAIL PROTECTED]> writes:
> The \timing psql command gives different time for the same query executed
> repeatedly.
That's probably because executing the query repeatedly results in
different execution times, as one would expect. \timing returns the
"exact" query response time, nevertheless.
-N
On Tue, 2003-11-04 at 09:49, [EMAIL PROTECTED] wrote:
> How do we measure the response time in postgresql?
In addition to EXPLAIN ANALYZE, the log_min_duration_statement
configuration variable and the \timing psql command might also be
useful.
-Neil
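A quick illustration of those three options (the table name and threshold
are placeholders):

    -- In psql: report client-side elapsed time for each statement.
    \timing

    -- Server-side timing with per-plan-node detail:
    EXPLAIN ANALYZE SELECT count(*) FROM orders;

    -- In postgresql.conf: log any statement slower than 250 ms.
    -- log_min_duration_statement = 250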
On Fri, 2003-10-31 at 11:37, Greg Stark wrote:
> My understanding is that the case where HT hurts is precisely your case. When
> you have two real processors with HT the kernel will sometimes schedule two
> jobs on the two virtual processors on the same real processor leaving the two
> virtual proc
On Fri, 2003-10-31 at 13:27, Allen Landsidel wrote:
> I had no idea analyze was playing such a big role in this sense. I really
> thought that other than saving space, it wasn't doing much for tables that
> don't have indexes on them.
ANALYZE doesn't save any space at all -- VACUUM is probably w
On Mon, 2003-10-27 at 13:52, Tom Lane wrote:
> Greg is correct. int8 is a pass-by-reference datatype and so every
> aggregate state-transition function cycle requires at least one palloc
> (to return the function result).
Interesting. Is there a reason why int8 is pass-by-reference? (ISTM that
pa
On Mon, 2003-10-27 at 12:56, Greg Stark wrote:
> Neil Conway <[EMAIL PROTECTED]> writes:
> > Uh, what? Why would an int8 need to be "dynamically allocated
> > repeatedly"?
>
> Perhaps I'm wrong; I'm extrapolating from a comment Tom Lane made that