Michael Glaesemann <[EMAIL PROTECTED]> writes:
> I took startup time to be the time to return the first row *of the
> first loop*. But it's actually the average startup time to return the
> first row *in each loop*, right?
Correct, just as the total time and tuples returned are averages over all
the loops.
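Since the reported figures are per-loop averages, a node's overall contribution is its per-loop total time multiplied by `loops`. A quick illustrative sketch (the timing numbers here are invented, not from the thread):

```python
def node_total_ms(actual_total_time_ms, loops):
    """Total wall-clock time a node accounted for across all its loops."""
    return actual_total_time_ms * loops

# Hypothetical inner index scan reported as:
#   actual time=0.050..0.120 rows=10 loops=100
# Its overall contribution is the per-loop total times the loop count,
# about 12 ms in this example.
print(node_total_ms(0.120, 100))
```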
On Dec 2, 2007, at 19:56, Tom Lane wrote:
> IOW the actual time to get in and out of a node is going to be a
> shade more than is reported.
Thanks, Tom. Should be close enough for jazz.
When I was first going over the Using Explain section, I stumbled a
bit on the startup time/total time/loops figures.
Michael Glaesemann <[EMAIL PROTECTED]> writes:
> I'd like to get confirmation that I'm correctly understanding the
> times given in EXPLAIN ANALYZE.
> ...
> Is this correct?
Looks about right to me. Note that some of what you are calling
"executor overhead" might also be classed as "gettimeofday() overhead".
I'd like to get confirmation that I'm correctly understanding the
times given in EXPLAIN ANALYZE. Taking the example given in the Using
Explain section of the docs,
http://www.postgresql.org/docs/current/static/using-explain
EXPLAIN ANALYZE SELECT * FROM tenk1 t1, tenk2 t2 WHERE t1.unique1 <
>
> Your development system is probably running inexpensive IDE disks that
> cache writes, while the test server is not caching. If you loop over
> single inserts, PostgreSQL's default configuration will do a physical
> commit to disk after every one of them, which limits performance to how
> fast the disk can physically complete each commit.
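The commit-per-INSERT behaviour described above is why batching the loop into one transaction helps: there is then one physical flush instead of one per row. A minimal, self-contained sketch of the pattern, using Python's sqlite3 purely for illustration (the table name `items` is made up); in PostgreSQL the equivalent fix is wrapping the loop in BEGIN/COMMIT, or using COPY:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER, payload TEXT)")

# Anti-pattern: committing after every single INSERT forces a flush each time.
# Better: run the whole loop inside one transaction and commit once.
with conn:  # opens a transaction, commits once on exit
    conn.executemany(
        "INSERT INTO items VALUES (?, ?)",
        ((i, f"row {i}") for i in range(10_000)),
    )

count = conn.execute("SELECT count(*) FROM items").fetchone()[0]
print(count)  # 10000
```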
Mindaugas wrote:
And I cannot use an index-organized table or a table partitioned by From :)
because there are at least 2 similar indexes by which queries can be
executed - From and To.
This makes things a bit tough. One trick is to vertically partition the
table into two new tables
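A sketch of that vertical split, with made-up names: a narrow table carrying only the searchable From/To keys (so its indexes stay small and hot) and a wide table holding the bulky unindexed columns, joined back by id. SQLite is used here only so the example runs self-contained:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Narrow table: just the keys queries filter on, plus a row id.
conn.execute(
    "CREATE TABLE msg_narrow (id INTEGER PRIMARY KEY, "
    "from_addr TEXT, to_addr TEXT)"
)
conn.execute("CREATE INDEX idx_from ON msg_narrow (from_addr)")
conn.execute("CREATE INDEX idx_to ON msg_narrow (to_addr)")
# Wide table: the rarely-filtered bulk columns.
conn.execute("CREATE TABLE msg_wide (id INTEGER PRIMARY KEY, body TEXT)")

conn.execute("INSERT INTO msg_narrow VALUES (1, 'alice', 'bob')")
conn.execute("INSERT INTO msg_wide VALUES (1, 'a long message body...')")

# Filter on the narrow table's index, then join back for the payload.
row = conn.execute(
    "SELECT w.body FROM msg_narrow n JOIN msg_wide w ON w.id = n.id "
    "WHERE n.from_addr = ?",
    ("alice",),
).fetchone()
print(row[0])
```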
On Sun, 2 Dec 2007, Beyers Cronje wrote:
Initially I tested this on my development PC, an old P4 system with 2GB
RAM and 10,000 INSERTs took ~12 secs on average, which I was fairly
satisfied with. I then moved everything over to our test server, a new
Dell 1950 server with quad core Xeon processors.
On 02/12/2007, Beyers Cronje <[EMAIL PROTECTED]> wrote:
> Hi all,
>
> I'm busy evaluating PostgreSQL and I'm having performance problems on one of
> my servers. I have a very simple one table database, and the client using
> Mono 1.2.5.1 is running a loop doing INSERTs on the table. Initially I
> tested this on my development PC, an old P4 system with 2GB RAM, and
> 10,000 INSERTs took ~12 secs on average.
Hi all,
I'm busy evaluating PostgreSQL and I'm having performance problems on one of
my servers. I have a very simple one table database, and the client using
Mono 1.2.5.1 is running a loop doing INSERTs on the table. Initially I
tested this on my development PC, an old P4 system with 2GB RAM and
10,000 INSERTs took ~12 secs on average, which I was fairly satisfied with.
Robert Treat wrote:
On Wednesday 28 November 2007 11:20, Usama Munir Dar wrote:
EnterpriseDB (www.enterprisedb.com), of course
lame :-P
Have you or anyone you know tried the training offerings? Or do you think
it's lame because I top posted, which of course would be a very poor
criterion.
On 02.12.2007, at 06:30, Merlin Moncure wrote:
I've been dying to know if anyone has ever done PostgreSQL training at
'the big nerd ranch'.
There are a couple of reviews floating around the web:
http://www.linux.com/articles/48870
http://www.linuxjournal.com/article/7847
I was in the course
On Nov 30, 2007 4:15 AM, Robert Treat <[EMAIL PROTECTED]> wrote:
> Never take advice from a guy who top posts... A friend of mine just went
> through an OTG course and had good things to say, and I've heard others speak
> well of it too, so I'd probably recommend them, but there are several
> options.
> What exactly is your goal? Do you need this query to respond in under a
> specific limit? What limit? Do you need to be able to execute many instances
> of this query in less than 5s * the number of executions? Or do you have more
> complex queries that you're really worried about?
I'd like t
"Mindaugas" <[EMAIL PROTECTED]> writes:
> I execute simple query "select * from bigtable where From='something'".
> Query returns like 1000 rows and takes 5++ seconds to complete.
As you pointed out that's not terribly slow for 1000 random accesses. It
sounds like your drive has nearly 5ms seek times.
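That estimate is simple arithmetic: 1000 uncached rows fetched in random heap order at roughly 5 ms per seek accounts for the entire observed runtime:

```python
# Back-of-envelope: each row needs one random heap access.
rows = 1000
seek_ms = 5.0  # assumed per-access latency for a typical 7200rpm disk

total_s = rows * seek_ms / 1000.0
print(total_s)  # 5.0 -- matches the ~5 second query time reported
```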
On Dec 2, 2007 11:26 AM, Mindaugas <[EMAIL PROTECTED]> wrote:
> I execute simple query "select * from bigtable where From='something'".
> Query returns like 1000 rows and takes 5++ seconds to complete. As far as I
> understand the query is slow because:
Can you post an EXPLAIN ANALYZE? Which version of PostgreSQL is this?
> my answer may be off-topic since you might be looking for a
> postgres-only solution... But just in case
I'd like to stay with SQL.
> What are you trying to achieve exactly ? Is there any way you could
> re-work your algorithms to avoid selects and use a sequential scan
> (consider your postgres data as one big file)?
Hi,
my answer may be off-topic since you might be looking for a
postgres-only solution... But just in case
What are you trying to achieve exactly ? Is there any way you could
re-work your algorithms to avoid selects and use a sequential scan
(consider your postgres data as one big file) to
Hello,
I started working with big tables (around 300GB) and performance problems
started to appear. :(
To simplify things - the table has an index on From and an index on To. It
also has several other non-indexed columns. There are 10+ different values
for From and the same for To.