Hi. I have a performance problem with this simple query:
SELECT collect_time FROM n_traffic JOIN n_logins USING (login_id)
WHERE n_logins.account_id = '1655' ORDER BY collect_time LIMIT 1;
Limit (cost=0.00..2026.57 rows=1 width=8) (actual
I must add that it occurs when there are no rows in n_traffic for these
login_ids. Where there are at
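For this ORDER BY collect_time LIMIT 1 pattern, the usual fix is a composite index, so the planner can fetch the rows for each login_id already ordered by collect_time and stop early, instead of walking a collect_time index hoping to hit a matching login_id (a sketch; the index name is my own, the table and column names come from the query above):

```sql
-- Serves "WHERE login_id = ? ORDER BY collect_time" directly,
-- so the no-matching-rows case fails fast instead of scanning.
CREATE INDEX n_traffic_login_collect_idx
    ON n_traffic (login_id, collect_time);
```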
Hi Alexandru,
Alexandru Coseru schrieb:
[...]
Question 1:
The RAID layout should be:
a) 2 hdd in raid 1 for system and pg_xlog and 6 hdd in
raid10 for data ?
b) 8 hdd in raid10 for all ?
c) 2 hdd in raid1 for system , 2 hdd in raid1 for pg_xlog ,
Thanks to all for the feedback on this issue.. After reviewing your comments,
I'm thinking of changing to the following values:
shared_buffers = 786432 - if I've done my math right, this is 6GB, which
is 25% of 24GB (I want to preserve the other 8GB for OS and App)
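For reference, shared_buffers is counted in 8 kB pages, so the arithmetic checks out (a postgresql.conf sketch, not a recommendation):

```
# shared_buffers is in 8 kB pages:
# 786432 pages * 8 kB/page = 6291456 kB = 6 GB (25% of 24 GB)
shared_buffers = 786432
```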
On Tue, Dec 05, 2006 at 01:21:38AM -0500, Alex Turner wrote:
My other and most important point is that I can't find any solid
recommendations for a SCSI card that can perform optimally in Linux or
*BSD. Off by a factor of 3x is pretty sad IMHO. (and yes, we know the
Adaptec cards suck worse,
The problem I see with software raid is the issue of a battery backed unit:
If the computer loses power, then the 'cache' which is held in system
memory, goes away, and fubars your RAID.
Alex
On 12/5/06, Michael Stone [EMAIL PROTECTED] wrote:
On Tue, Dec 05, 2006 at 01:21:38AM -0500, Alex
Alex Turner wrote:
The problem I see with software raid is the issue of a battery backed
unit: If the computer loses power, then the 'cache' which is held in
system memory, goes away, and fubars your RAID.
I'm not sure I see the difference. If data are cached, they're not written
whether it
On Tue, Dec 05, 2006 at 07:57:43AM -0500, Alex Turner wrote:
The problem I see with software raid is the issue of a battery backed unit:
If the computer loses power, then the 'cache' which is held in system
memory, goes away, and fubars your RAID.
Since the Adaptec doesn't have a BBU, it's a
Hi,
I have to refactor a 'DELETE FROM x WHERE y IN (...)' because IN got
too many parameters => 'stack depth limit exceeded'.
I don't want to just increase the max_stack_depth parameter. It is
better to refactor because the number of arguments to IN may increase
in the future.
My
Sven Geisler wrote:
I have to refactor a 'DELETE FROM x WHERE y IN (...)' because IN got
too many parameters => 'stack depth limit exceeded'.
I don't want to just increase the max_stack_depth parameter. It is
better to refactor because the number of arguments to IN may increase
in the
Hi Heikki
Heikki Linnakangas schrieb:
Sven Geisler wrote:
I have to refactor a 'DELETE FROM x WHERE y IN (...)' because IN got
too many parameters => 'stack depth limit exceeded'.
I don't want to just increase the max_stack_depth parameter. It is
better to refactor because the number
Sven Geisler wrote:
I have to insert my arguments into a temporary table first, because the
arguments are only known in the application tier.
Is a multiple insert into a temporary table plus a final 'DELETE FROM x
WHERE y IN (SELECT z FROM tmp)' faster than multiple deletes?
If the number of
Hi,
Many thanks for the fast response.
I will use temporary table with copy.
Another issue I have is the connection between app and db.
I guess the approach with COPY will also hold down the network I/O
between app and db, keeping in mind that I produce 10,000+ statements.
Thx
Sven
Heikki
Heikki Linnakangas [EMAIL PROTECTED] writes:
Sven Geisler wrote:
I have to refactor a 'DELETE FROM x WHERE y IN (...)' because IN got
too many parameters => 'stack depth limit exceeded'.
The number of arguments is around 10,000.
...
BTW: The arguments are generated in the application tier. I
I am wanting some ideas about improving the performance of ORDER BY in
our use. I have a DB on the order of 500,000 rows and 50 columns.
The results are always sorted with ORDER BY. Sometimes, the users end up
with a search that matches most of the rows. In that case, I have a
LIMIT 5000 to
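If the sort is spilling to disk, raising the per-sort memory for just that session is a cheap experiment (a sketch; the table, filter, and column names are placeholders for the real search query, the value is only illustrative, and - as noted later in the thread - very high settings could backfire on the pre-8.2 sorting code):

```sql
-- Raise sort memory for this session only, then run the sorted query.
SET work_mem = '64MB';
SELECT * FROM mytable WHERE some_filter ORDER BY some_col LIMIT 5000;
```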
Glenn,
On 12/5/06 9:12 AM, Glenn Sullivan [EMAIL PROTECTED] wrote:
I am wanting some ideas about improving the performance of ORDER BY in
our use. I have a DB on the order of 500,000 rows and 50 columns.
The results are always sorted with ORDER BY. Sometimes, the users end up
with a search
Hello,
Is there a relation between database size and PostgreSQL restart duration?
If so, I'm looking for a law predicting how much time is required to
restart PostgreSQL, depending on the DB size.
Does anyone know the behavior of restart time?
Thanks
--
-- Jean Arnaud
Glenn Sullivan [EMAIL PROTECTED] writes:
I am wanting some ideas about improving the performance of ORDER BY in
our use. I have a DB on the order of 500,000 rows and 50 columns.
The results are always sorted with ORDER BY. Sometimes, the users end up
with a search that matches most of the
On Tue, Dec 05, 2006 at 01:02:06PM -0500, Tom Lane wrote:
In 8.0 that might be counterproductively high --- we have seen cases
where more sort_mem = slower with the older sorting code. I concur
with Luke's advice that you should update to 8.2 (not 8.1) to get the
improved sorting code.
By
On Tue, 05 Dec 2006 at 13:02:06 -0500, Tom Lane wrote:
In 8.0 that might be counterproductively high --- we have seen cases
where more sort_mem = slower with the older sorting code. I concur
with Luke's advice that you should update to 8.2 (not 8.1) to get the
improved sorting
Steinar H. Gunderson [EMAIL PROTECTED] writes:
By the way, is the new sorting code any better for platforms that already
have a decent qsort() (like Linux)?
It seemed better to us. Linux' qsort() is really mergesort, which is
better sometimes but often worse --- mergesort tends to have a less
On 12/5/06, Tom Lane [EMAIL PROTECTED] wrote:
Jean Arnaud [EMAIL PROTECTED] writes:
Is there a relation between database size and PostgreSQL restart
duration?
No.
Does anyone now the behavior of restart time ?
It depends on how many updates were applied since the last checkpoint
before
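Crash recovery replays WAL from the last checkpoint, so checkpoint frequency, rather than database size, is what bounds restart time after an unclean shutdown. The relevant 8.x-era knobs (a postgresql.conf sketch; values are illustrative, not recommendations):

```
# More time/segments between checkpoints => less checkpoint I/O,
# but potentially longer crash recovery at startup.
checkpoint_timeout = 300      # seconds between automatic checkpoints
checkpoint_segments = 16      # WAL segments allowed between checkpoints
```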
Rajesh Kumar Mallah [EMAIL PROTECTED] writes:
Startup time of a cleanly shut down database is constant. But we still
face a problem when it comes to shutting down. PostgreSQL waits
for clients to finish gracefully. To date I have never been able to
shut down quickly (web application scenario) and
Steinar H. Gunderson wrote:
On Tue, Dec 05, 2006 at 01:02:06PM -0500, Tom Lane wrote:
In 8.0 that might be counterproductively high --- we have seen cases
where more sort_mem = slower with the older sorting code. I concur
with Luke's advice that you should update to 8.2 (not 8.1) to get the
Hi,
PostgreSQL offers some proprietary SQL parameters and commands which
easily solve such problems.
If you are sure PostgreSQL is the DB for your app forever ;), you could
use these parameters and commands.
Here is a possible resolution for your problem:
DELETE FROM x USING tmp WHERE
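The DELETE above is cut off; the complete pattern, following the thread's COPY suggestion, presumably looks like this (a sketch with the assumed table/column names x, y, tmp, z from the thread):

```sql
-- Load the ~10,000 ids from the application in one round trip,
-- then delete with a join instead of a huge IN (...) list.
CREATE TEMP TABLE tmp (z integer);
COPY tmp FROM STDIN;                     -- stream the ids from the app
DELETE FROM x USING tmp WHERE x.y = tmp.z;
DROP TABLE tmp;
```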
Thanks to Luke and Tom for the input. I guess this was good timing
given that it looks like
8.2 was just released today. I will upgade to that before doing
anything else.
Glenn
Tom Lane wrote:
Glenn Sullivan [EMAIL PROTECTED] writes:
I am wanting some ideas about improving the
Hello..
Thanks for the advice.
Actually, I'm waiting for the Clovertown to show up on the market...
Regards
Alex
- Original Message -
From: Sven Geisler [EMAIL PROTECTED]
To: Alexandru Coseru [EMAIL PROTECTED]
Cc: pgsql-performance@postgresql.org
Sent: Tuesday, December 05,
I wanted to post a follow-up to the list regarding a high
shared_buffers value on OS X 10.4.8.
Thanks to Tom's help we successfully compiled PostgreSQL 8.1.5 as
64-bit on OS X Server 10.4.8 (you can find info for this on pgports).
shared_buffers can now be set as high as shmmax without
Brian Wipf [EMAIL PROTECTED] writes:
shared_buffers can now be set as high as shmmax without getting the
error message "could not create shared memory segment". Now,
however, when shared_buffers is set greater than 279212, a
segmentation fault occurs on startup of PostgreSQL.
Stack
On 5-Dec-06, at 4:10 PM, Tom Lane wrote:
Brian Wipf [EMAIL PROTECTED] writes:
shared_buffers can now be set as high as shmmax without getting the
error message "could not create shared memory segment". Now,
however, when shared_buffers is set greater than 279212, a
segmentation fault occurs on
We're using 8.1 - I thought such a construct was safe in pg 8.1:
select max(indexed_value) from huge_table;
while earlier we had to use:
select indexed_value from huge_table order by indexed_value desc limit 1;
seems like I was wrong:
mydb= explain analyze select indexed_value1 from
[Tobias Brox - Wed at 04:01:56AM +0100]
We're using 8.1 - I thought such a construct was safe in pg 8.1:
select max(indexed_value) from huge_table;
while earlier we had to use:
select indexed_value from huge_table order by indexed_value desc limit 1;
seems like I was wrong:
The
Tobias Brox [EMAIL PROTECTED] writes:
We're using 8.1 - I thought such a construct was safe in pg 8.1:
select max(indexed_value) from huge_table;
while earlier we had to use:
select indexed_value from huge_table order by indexed_value desc limit 1;
These are not actually exactly the same
[Tom Lane - Tue at 10:29:53PM -0500]
These are not actually exactly the same thing. In particular, I suppose
your table contains a lot of nulls?
Yes; I'm sorry I was a bit quick with the first posting.
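The NULLs are exactly why the two forms differ: max() ignores NULLs, while a plain ORDER BY ... DESC puts NULLs first by default, so its LIMIT 1 can return a NULL. The index-using rewrite that 8.1 performs for max() is roughly equivalent to (a sketch, using the names from the thread):

```sql
-- What "SELECT max(indexed_value) FROM huge_table" is approximately
-- rewritten to internally; the IS NOT NULL condition is what lets it
-- skip the NULLs at the front of the descending index scan.
SELECT indexed_value FROM huge_table
 WHERE indexed_value IS NOT NULL
 ORDER BY indexed_value DESC LIMIT 1;
```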
The test that I did - which was somewhat limited - showed no benefit
from splitting disks into separate partitions for large bulk loads.
The program read from one very large file and wrote the input out to two
other large files.
The total throughput on a single partition was close to the maximum
On Tue, 5 Dec 2006, Craig A. James wrote:
I'm not familiar with the inner details of software RAID, but the only
circumstance I can see where things would get corrupted is if the RAID driver
writes a LOT of blocks to one disk of the array before synchronizing the
others...
You're talking
On 12/6/06, Tom Lane [EMAIL PROTECTED] wrote:
Rajesh Kumar Mallah [EMAIL PROTECTED] writes:
Startup time of a cleanly shut down database is constant. But we still
face a problem when it comes to shutting down. PostgreSQL waits
for clients to finish gracefully. To date I have never been able to
Does PostgreSQL lock the entire row in a table if I update only 1
column?
--
Groeten,
Joost Kraaijeveld
Askesis B.V.
Molukkenstraat 14
6524NB Nijmegen
tel: 024-3888063 / 06-51855277
fax: 024-3608416
web: www.askesis.nl