- Indexes
- Data size growth
- If writes are slowing down, then it could be because of slow disks
Are you saying that queries are slowing down when there are heavy writes?
Are you referring to SELECTs or all types of queries?
Regards,
Venkata B N
Fujitsu Australia
https://medium.com/@c2c/nodejs-a-quick-optimization-advice-7353b820c92e
100% performance boost, for mysterious reasons that may be worth knowing about…
Graeme Bell
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
> I don't think inserts can cause contention on the server. Inserts do not lock
> tables during the transaction. You may have contention on sequence but it
> won't vary with transaction size.
Perhaps there could be a trigger on inserts which creates some lock contention?
Sounds like a locking problem, but assuming you aren't Sherlock Holmes and
simply want to get the thing working as soon as possible:
Stick a fast SSD in there (whether you stay on VM or physical). If you have
enough I/O, you may be able to solve the problem with brute force.
SSDs are a lot
>> First the database was on a partition where compression was enabled, I
>> changed it to an uncompressed one to see if it makes a difference thinking
>> maybe the cpu couldn't handle the load.
> It made little difference in my case.
>
> My regular gmirror partition seems faster:
> dd bs=8k
Is postgres running all the time, or do you start it before this test?
Perhaps check if any background tasks are running when you use postgres -
autovacuum, autoanalyze etc.
Graeme Bell
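One quick way to check for that (a sketch; column names assume PostgreSQL 9.2 or later, where autovacuum workers report themselves in pg_stat_activity):

```sql
-- Spot autovacuum/autoanalyze workers running right now
SELECT pid, query_start, query
FROM pg_stat_activity
WHERE query LIKE 'autovacuum:%';
```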
> On 08 Oct 2015, at 11:17, Bram Van Steenlandt <b...@diomedia.be> wrote:
>
> Hi,
>
> I use pos
>>
>>
> Like this ?
>
> gmirror (iozone -s 4 -a /dev/mirror/gm0s1e) = 806376 (faster drives)
> zfs uncompressed (iozone -s 4 -a /datapool/data) = 650136
> zfs compressed (iozone -s 4 -a /datapool/data) = 676345
If you can get the complete tables (as in the images on the blog post) with
> On 08 Oct 2015, at 11:17, Bram Van Steenlandt <b...@diomedia.be> wrote:
>
> The database (9.2.9) on the server (freebsd10) runs on a zfs mirror.
> If I copy a file to the mirror using scp I get 37MB/sec
> My script achieves something like 7 or 8MB/sec on large (+100MB) f
> On 08 Oct 2015, at 13:50, Bram Van Steenlandt <b...@diomedia.be> wrote:
>>> 1. The part is "fobj = lobject(db.db,0,"r",0,fpath)", I don't think there
>>> is anything there
Re: lobject
http://initd.org/psycopg/docs/usage.html#large-objects
&
>>
>> http://initd.org/psycopg/docs/usage.html#large-objects
>>
>>
>> "Psycopg large object support *efficient* import/export with file system
>> files using the lo_import() and lo_export() libpq functions.”
>>
>> See *
>>
> I was under the impression they meant that the lobject was using
I previously posted about par_psql, but I recently found another PG parallelism
project which can do a few extra things that par_psql can’t:
https://github.com/moat/pmpp
pmpp: Poor Man's Parallel Processing.
Corey Huinker had the idea of using dblink async as a foundation for
distributing
On 28 Jul 2015, at 22:29, Graeme B. Bell graeme.b...@nibio.no wrote:
Entering production, availability 2016
1000x faster than nand flash/ssd , eg dram-latency
10x denser than dram
1000x write endurance of nand
Priced between flash and dram
Manufactured by intel/micron
Non-volatile
http
QUERY
SELECT COUNT(*) FROM occurrences WHERE (lat >= -27.91550355958 AND lat
<= -27.015680440420002 AND lng >= 152.13307044728307 AND lng <=
153.03137355271693 AND category_id = 1 AND (ST_Intersects(
ST_Buffer(ST_PointFromText('POINT(152.58 -27.465592)')::geography,
Some of you may have had annoying problems in the past with autofreeze or
autovacuum running at unexpected moments and dropping the performance of your
server randomly.
On our SSD-RAID10 based system we found a 20GB table finished its vacuum
freeze in about 100 seconds. There were no
Guess what's going in my 2016 db servers :-)
Please, don't be vapourware...
Hi all,
1. For those that don't like par_psql (http://github.com/gbb/par_psql), here's
an alternative approach that uses the Gnu Parallel command to organise
parallelism for queries that take days to run usually. Short script and
GIS-focused, but may give you a few ideas about how to
On 23 Jul 2015, at 13:37, domenico febbo mimmopastic...@gmail.com wrote:
is the problem also in PostgreSQL 9.4.x?
I'm going to buy a production server with 4 sockets E7-4850 12 cores
so 12*4 = 48 cores (and 96 threads using HT).
What do you suggest?
Using or not HT?
BR
1. If you
No, of course it doesn't. It appears that you didn't look at the repo or
read my previous mail before you wrote this.
FFS, I *ran* some of the tests and reported on results. With you in CC.
Just checked back. So you did. I'm sorry, I made the mistake I accused you of.
But... why then
On 09 Jul 2015, at 15:22, Thomas Kellerer spam_ea...@gmx.net wrote:
Graeme B. Bell schrieb am 09.07.2015 um 11:44:
I don't recall seeing a clear statement telling me I should mark pl/pgsql
functions nonvolatile wherever possible or throw all performance and
scalability out the window
3. I don't disagree that the benchmark code is objectively 'bad' in the
sense that it is missing an important optimisation.
Particularly as regards documentation, a patch improving things is
much more likely to improve the situation than griping. Also,
conversation on this list gets
On 09 Jul 2015, at 17:42, Merlin Moncure mmonc...@gmail.com wrote:
The community maintains its own mailing list archives in
postgresql.org. Short of an array of tactical nuclear strikes this is
going to be preserved
Good to know, I've seen a lot of dead software projects throughout my
On 08 Jul 2015, at 22:27, Andres Freund and...@anarazel.de wrote:
On 2015-07-08 13:46:53 -0500, Merlin Moncure wrote:
On Wed, Jul 8, 2015 at 12:48 PM, Craig James cja...@emolecules.com wrote:
Well, right, which is why I mentioned even with dozens of clients.
Shouldn't that scale to at least
On 08 Jul 2015, at 13:20, Andres Freund and...@anarazel.de wrote:
On 2015-07-08 11:13:04 +, Graeme B. Bell wrote:
I'm guessing you are maybe pressed for time at the moment because I
already clearly included this on the last email, as well as the links
to the alternative benchmarks
On 09 Jul 2015, at 05:38, Tom Lane t...@sss.pgh.pa.us wrote:
If you
write your is_prime function purely in plpgsql, and don't bother to mark
it nonvolatile, *it will not scale*.
much for properly written plpgsql; but there's an awful lot of bad plpgsql
code out there, and it can make a
This is a reply to Andres's post on the #13495 documentation thread in
-bugs.
I am responding to it here because it relates to #13493 only.
Andres wrote, re: #13493
This issue is absolutely critical for performance and scalability of code,
Pft. In most cases it doesn't actually matter
On 07 Jul 2015, at 22:52, Merlin Moncure mmonc...@gmail.com wrote:
On Tue, Jul 7, 2015 at 3:33 PM, Graeme B. Bell graeme.b...@nibio.no wrote:
Hi Merlin,
Long story short - thanks for the reply, but you're not measuring anything
about the parallelism of code running in a pl/pgsql
On 07/07/2015 08:05 PM, Craig James wrote:
No ideas, but I ran into the same thing. I have a set of C/C++ functions
that put some chemistry calculations into Postgres as extensions (things
like, calculate the molecular weight of this molecule). As SQL
functions, the whole thing bogged
Technology
-Original Message-
From: Graeme B. Bell [mailto:graeme.b...@nibio.no]
Sent: Tuesday, July 07, 2015 8:26 AM
To: Merlin Moncure
Cc: Wes Vaske (wvaske); Craig James; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] New server: SSD/RAID recommendations?
As I have
, at 18:56, Wei Shan weishan@gmail.com wrote:
Hi Graeme,
Why would you think that you don't need RAID for ZFS?
Reason I'm asking is because we are moving to ZFS on FreeBSD for our future
projects.
Regards,
Wei Shan
On 8 July 2015 at 00:46, Graeme B. Bell graeme.b...@nibio.no
RAID controllers are completely unnecessary for SSD as they currently
exist.
Agreed. The best solution is not to buy cheap disks and not to buy RAID
controllers now, imho.
In my own situation, I had a tight budget, high performance demand and a newish
machine with RAID controller and HDDs
drives from *any*
company from consideration for use with Postgres.
So it lies about fsync()... The next question is, does it nevertheless
enforce the correct ordering of persisting fsync'd data? If you write to file
A and fsync it, then write to another file B and fsync it too, is it
guaranteed that if B is persisted, A is as well? Because if it isn't, you can
end up with filesystem (or database) corruption anyway.
- Heikki
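The write-A-then-B sequence in question can be sketched with dd, whose conv=fsync flag issues an fsync() after the write (GNU dd assumed; the paths are throwaway temp files):

```shell
TMP=$(mktemp -d)
# Write file A and fsync it...
echo first  | dd of="$TMP/A" conv=fsync status=none
# ...then write file B and fsync it.
echo second | dd of="$TMP/B" conv=fsync status=none
# The open question above: if the drive acknowledges flushes it has not
# actually performed, B may reach stable storage while A does not, in
# spite of this ordering -- and that reordering is exactly what corrupts
# filesystems and databases on power loss.
cat "$TMP/A" "$TMP/B"
```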
On 07 Jul 2015, at 19:47, Scott Marlowe scott.marl...@gmail.com wrote:
[I know that using a shingled disk sounds crazy (it sounds crazy to me) but
you can bet there are people that just want to max out the disk bays in
their server... ]
Let's just say no online backup companies are using
Cache flushing isn't an atomic operation though. Even if the ordering is right,
you are likely to have a partial fsync on the disk when the lights go out -
isn't your FS still corrupt?
On 07 Jul 2015, at 21:53, Heikki Linnakangas hlinn...@iki.fi wrote:
On 07/07/2015 09:01 PM, Wes Vaske
suppose any RAID
controller removes data from BBU cache after it was fsynced by the drive. As far
as I know, there is no other magic command for the drive to tell the controller
that the data is safe now and can be removed from BBU cache.
Tue, 7 Jul 2015 at 11:59, Graeme B. Bell graeme.b...@nibio.no wrote:
features in some marketing material that were only present on the H710P)
And I see UBER (unrecoverable bit error) rates for SSDs and HDDs, but has
anyone ever seen them for the flash-based cache on their raid controller?
Sleep well, friends.
Graeme.
On 07 Jul 2015, at 18:54, Graeme B. Bell graeme.b
://github.com/gbb/t,
and I'm going to submit it as a bug to the pg bugs list.
Graeme.
On 06 Jul 2015, at 18:40, Merlin Moncure mmonc...@gmail.com wrote:
On Fri, Jul 3, 2015 at 9:48 AM, Graeme B. Bell graeme.b...@nibio.no wrote:
Hi everyone,
I've written a new open source tool
, Mkrtchyan, Tigran tigran.mkrtch...@desy.de wrote:
- Original Message -
From: Graeme B. Bell graeme.b...@nibio.no
To: Mkrtchyan, Tigran tigran.mkrtch...@desy.de
Cc: Graeme B. Bell graeme.b...@nibio.no, Steve Crawford
scrawf...@pinpointresearch.com, Wes Vaske (wvaske)
wva
Hi Karl,
Great post, thanks.
Though I don't think it's against conventional wisdom to aggregate writes into
larger blocks rather than rely on 4k performance on ssds :-)
128kb blocks + compression certainly makes sense. But it might make less sense
I suppose if you had some incredibly high
, Mkrtchyan, Tigran tigran.mkrtch...@desy.de wrote:
Thanks for the Info.
So if RAID controllers are not an option, what one should use to build
big databases? LVM with xfs? BtrFs? Zfs?
Tigran.
- Original Message -
From: Graeme B. Bell graeme.b...@nibio.no
To: Steve Crawford scrawf
Completely agree with Steve.
1. Intel NVMe looks like the best bet if you have modern enough hardware for
NVMe. Otherwise e.g. S3700 mentioned elsewhere.
2. RAID controllers.
We have e.g. 10-12 of these here and e.g. 25-30 SSDs, among various machines.
This might give people an idea about
Hi everyone,
I've written a new open source tool for easily parallelising SQL scripts in
postgres. [obligatory plug: https://github.com/gbb/par_psql ]
Using it, I'm seeing a problem I've seen in other postgres projects involving
parallelisation in the last 12 months.
Basically:
- I
Thanks, this is very useful to know about the 730. When you say 'tested it with
plug-pulls', you were using diskchecker.pl, right?
Graeme.
On 07 Jul 2015, at 14:39, Karl Denninger k...@denninger.net wrote:
Incidentally while there are people who have questioned the 730 series power
loss
with postgres is:
a) disable the disk cache, which will cripple performance to about 3-5% of
normal.
b) use a battery backed or cap-backed RAID controller, which will generally
hurt performance, by limiting you to the peak performance of the flash on the
raid controller.
If you are buying such a drive, I
Hi everyone,
I've written a new open source tool for easily parallelising SQL scripts in
postgres. [obligatory plug: https://github.com/gbb/par_psql ]
Using it, I'm seeing a problem that I've also seen in other postgres projects
involving high degrees of parallelisation in the last 12
I previously mentioned on the list that nvme drives are going to be a very big
thing this year for DB performance.
This video shows what happens if you get an 'enthusiast'-class motherboard and
5 of the 400GB intel 750 drives.
https://www.youtube.com/watch?v=-hE8Vg1qPSw
Total transfer speed:
Images/data here
http://www.pcper.com/reviews/Storage/Five-Intel-SSD-750s-Tested-Two-Million-IOPS-and-10-GBsec-Achievement-Unlocked
On 04 Jun 2015, at 13:07, Graeme Bell g...@skogoglandskap.no wrote:
I previously mentioned on the list that nvme drives are going to be a very
big thing this
with unlogged than to just get faster drives + logged tables?)
On Thu, Jun 4, 2015 at 1:23 PM, Graeme B. Bell g...@skogoglandskap.no wrote:
Images/data here
http://www.pcper.com/reviews/Storage/Five-Intel-SSD-750s-Tested-Two-Million-IOPS-and-10-GBsec-Achievement-Unlocked
On 04 Jun 2015
I believe yes / 0 are the default settings for synchronous commit and
commit_delay. ** (Interestingly the manual pages do not specify.) **
Sorry, I've just spotted the settings in the text. The statement (marked **) is
incorrect.
Defaults are yes/0.
On Sun, May 31, 2015 at 7:53 PM, Yves Dorfsman y...@zioup.com wrote:
That's the thing, even on an old laptop with a slow IDE disk, 273
individual
inserts should not take more than a second.
I think that would depend on settings such as synchronous_commit, commit_delay,
or whether 2-phase
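As a sketch of the two settings under discussion (the table name t is invented; the defaults shown are for the 9.x era):

```sql
SHOW synchronous_commit;   -- default: on
SHOW commit_delay;         -- default: 0

-- Option 1: batch the 273 inserts in one transaction, paying for one
-- WAL flush instead of 273:
BEGIN;
INSERT INTO t (v) VALUES (1);
-- ... remaining inserts ...
COMMIT;

-- Option 2: relax durability per-session; a crash can lose the last
-- few commits, but cannot corrupt the database:
SET synchronous_commit = off;
```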
Josh, there seems to be an inconsistency in your blog. You say 3.10.X is
safe, but the graph you show with the poor performance seems to be from
3.13.X which as I understand it is a later kernel. Can you clarify which
3.X kernels are good to use and which are not?
Sorry to cut in -
So
2015-04-09 13:01 GMT+02:00 Graeme B. Bell g...@skogoglandskap.no:
From a measurement I took back when we did the upgrade:
performance with 2.6: (pgbench, size 100, 32 clients)
48 651 transactions per second (read only)
6 504 transactions per second
faster it was?
Przemek Deć
2015-04-09 11:04 GMT+02:00 Graeme B. Bell g...@skogoglandskap.no:
Josh, there seems to be an inconsistency in your blog. You say 3.10.X is
safe, but the graph you show with the poor performance seems to be from
3.13.X which as I understand it is a later
A tangent to the performance testing thread here, but an important issue that
you will see come up in your work this year or next.
PCIe SSD may include AHCI PCI SSD or NVMe PCI SSD.
AHCI = old style, basically it's faster than SATA3 but quite similar in terms
of how the operating system
Hi Nico,
No one has mentioned the elephant in the room, but a database can
be very I/O intensive and you may not be getting the performance
you need from your virtual disk running on your VMware disk subsystem.
What do IOmeter or other disk performance evaluation software report?
/7364/memory-scaling-on-haswell/3
Graeme Bell
On 11 Feb 2015, at 01:31, Mark Kirkwood mark.kirkw...@catalyst.net.nz wrote:
On 10/02/15 10:29, Gavin Flower wrote:
On 10/02/15 08:30, Luis Antonio Dias de Sá Junior wrote:
Hi,
A survey: with pgbench using TPC-B, what is the maximum TPS your
postgres can reach? No matter the OS or other variables.
Gavin, you got more than 12000 TPS?
2015-02-09 19:29 GMT-02:00 Gavin Flower gavinflo...@archidevsys.co.nz:
On 10/02/15 08:30, Luis Antonio Dias de Sá Junior wrote:
Hi,
A survey: with pgbench using TPC-B, what is the maximum TPS you've ever
I have a beast of a Dell server with the following specifications:
• 4x Xeon E5-4657LV2 (48 cores total)
• 196GB RAM
• 2x SCSI 900GB in RAID1 (for the OS)
• 8x Intel S3500 SSD 240GB in RAID10
• H710p RAID controller, 1GB cache
Centos 6.6, RAID10 SSDs uses XFS
I don't understand the logic behind using drives,
which are best for random IO, for sequential IO workloads.
Because they are also best for sequential IO. I get 1.3-1.4GB/second from 4
SSDs in RAID or 500MB/s for single disk systems, even with cheap models.
Are you getting more than that from
Very much agree with this. Just because SSD is fast doesn't make it suited for
certain things, and a streaming sequential 100% write workload is one of
them. I've worked with everything from local disk to high-end SAN and even
at the high end we've always put any DB logs on spinning disk.
Hi Roberto,
Hardware etc. is a solution; but you have not yet characterised the problem.
You should investigate if the events are mostly...
- reads
- writes
- computationally intensive
- memory intensive
- I/O intensive
- network I/O intensive
- independent? (e.g. does it matter if you
Can/should/does postgres ever attempt
two strategies in parallel, in cases where strategy A is generally good but
strategy B prevents bad worst case behaviour? Kind of like a Schrödinger's Cat
approach to scheduling. What problems would it raise?
Graeme.
like.
Graeme
On 30 Sep 2014, at 18:32, Tom Lane t...@sss.pgh.pa.us wrote:
Graeme B. Bell g...@skogoglandskap.no writes:
Every year or two the core count goes up. Can/should/does postgres ever
attempt two strategies in parallel, in cases where strategy A is generally
good but strategy B
= 48GB
From: Graeme B. Bell [g...@skogoglandskap.no]
Sent: Friday, September 26, 2014 9:55 AM
To: Burgess, Freddie
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Very slow postgreSQL 9.3.4 query
A good way to start would be to introduce
HT off is common knowledge for better benchmarking results
It's wise to use the qualifer 'for better benchmarking results'.
It's worth keeping in mind here that a benchmark is not the same as normal
production use.
For example, where I work we do lots of long-running queries in parallel over
Following are the tables
---
CREATE TABLE equipment (
contract_nr varchar(32) COLLATE C NULL DEFAULT NULL,
name varchar(64) COLLATE C
On 04 Apr 2014, at 18:29, Nicolas Paris nipari...@gmail.com wrote:
Hello,
My question is about multiprocess and materialized View.
http://www.postgresql.org/docs/9.3/static/sql-creatematerializedview.html
I (will) have something like 3600 materialised views, and I would like to
know the
and fast.
It's concurrent writes or concurrent read/write of the same data item that
causes problems with locking. That shouldn't be happening here, judging by your
description.
If possible, try to make sure nothing is modifying those source tables
A/B/C/D/E/F when you are doing your view refresh
these routines will run at night, and need to be finished
quickly.
Thanks
Nicolas PARIS
2014-04-07 14:59 GMT+02:00 Graeme B. Bell g...@skogoglandskap.no:
Hi again Nick.
Glad it helped.
Generally, I would expect that doing all the A's first, then all the B's, and
so on, would
Postgresql rsync backups require the DB to be shutdown during the 'second'
rsync.
1. rsync the DB onto the backup filesystem (produces e.g. 95-99.99% consistent
DB on the backup filesystem)
2. shut down the DB
3. rsync the shut down DB onto the backup filesystem (synchronises the last
few
Start off, I'm new to postgres. I'm running Ubuntu 10.04.04 with postgres 9.1
on a VM with 32 GB of RAM.
I'm trying to increase the response time on submitted queries. I'm comparing
the same queries to a SQL Server instance with the same data sets.
The queries are used in our Analytics
Dear All,
Thanks a lot for all the invaluable comments.
Regards,
Sreejith.
On Jul 14, 2012 2:19 PM, Craig Ringer ring...@ringerc.id.au wrote:
On 07/14/2012 09:26 AM, B Sreejith wrote:
Dear Robert,
We need to scale up both size and load.
Could you please provide steps I need to follow
Dear Friends,
Is there a tool available to perform Data Model review, from a performance
perspective?
One which can be used to check if the data model is optimal or not.
Thanks,
Sreejith.
Dear Sergev,
We have around 15 to 18 separate products. What we are told to do is to
check the scalability of the underlying DB of each product (application).
That's the requirement. Nothing more was explained to us. That's why I said
earlier that I am confused on how to approach this.
Regards,
Dear Robert,
We need to scale up both size and load.
Could you please provide steps I need to follow.
Warm regards,
Sreejith.
On Jul 14, 2012 1:37 AM, Robert Klemme shortcut...@googlemail.com wrote:
On Tue, Jul 10, 2012 at 10:21 AM, Sreejith Balakrishnan
sreejith.balakrish...@tcs.com wrote:
Hi All,
I am trying to compile Postgres Source code for ARM cortex A8 architecture.
While compiling, I got an error message which read selected processor does not
support `swpb r4,r4,[r3]'
One of the Postgres forums at the location
Of Heikki Linnakangas
Sent: Saturday, January 28, 2012 1:27 AM
To: Jayashankar K B
Cc: Andy Colson; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Postgress is taking lot of CPU on our embedded hardware.
On 27.01.2012 20:30, Jayashankar K B wrote:
Hi Heikki Linnakangas: We are using series
and Regards
Jayashankar
-Original Message-
From: Claudio Freire [mailto:klaussfre...@gmail.com]
Sent: Saturday, January 28, 2012 7:54 AM
To: Heikki Linnakangas
Cc: Jayashankar K B; Andy Colson; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Postgress is taking lot of CPU on our embedded
Hi,
We have an embedded system with a Freescale m68k architecture-based
microcontroller, 256MB RAM, running a customized version of Slackware 12 Linux.
It's relatively modest hardware.
We have installed postgres 9.1 as our database engine. While testing, we found
that the Postgres
...@squeakycode.net]
Sent: Friday, January 27, 2012 10:45 PM
To: Heikki Linnakangas
Cc: Jayashankar K B; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Postgress is taking lot of CPU on our embedded hardware.
On 1/27/2012 10:47 AM, Heikki Linnakangas wrote:
On 27.01.2012 15:34, Jayashankar K B wrote:
Hi
Hi there.
If you just wanted PostgreSQL to go as fast as possible WITHOUT any
care for your data (you accept 100% dataloss and datacorruption if any
error should occur), what settings should you use then?
Turn off fsync and full_page_writes (i.e. running with scissors).
Also depends on what you mean by as fast as possible. Fast at doing
what? Bulk inserts, selecting from massive tables?
I guess some tuning has to be done to make it work well with the
particular workload (in this case most
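Collecting the "running with scissors" settings from this thread into one postgresql.conf sketch (synchronous_commit is an addition of mine alongside the two named above; every one of these trades crash safety for speed):

```
# DANGER: speed over safety; a crash can mean total data loss/corruption
fsync = off
full_page_writes = off
synchronous_commit = off
```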
If you just wanted PostgreSQL to go as fast as possible WITHOUT any
care for your data (you accept 100% dataloss and datacorruption if any
error should occur), what settings should you use then?
I'm just curious, what do you need that for?
regards
Szymon
I was just thinking about the
If you just wanted PostgreSQL to go as fast as possible WITHOUT any
care for your data (you accept 100% dataloss and datacorruption if any
error should occur), what settings should you use then?
Others have suggested appropriate parameters (running with scissors).
I'd like to add something
Hi there
We are running Postgres 8.3.7 on a
We have a problem with Explain Analyze that we haven't seen before.
We run an Explain Analyze on a query.
Nested Loop (cost=1256.32..2097.31 rows=198 width=1120) (actual
time=12.958..20.846 rows=494 loops=1)
- HashAggregate
4) Use software raid unless you have the money to buy a raid
controller, in which case here is the ranking of them
list of brand/modells
Areca and 3ware/Escalade are the two best controllers for the money
out right now. They tend to take turns being the absolute best as
they release
So, the eternal problem with what hardware to buy. I really miss a
hardware buying guide for database servers now that I'm about to buy
one..
Some general guidelines mixed with ranked lists of what hardware that
is best, shouldn't that be on the wiki?
This is of course very difficult to advise
, but that is
what I have right now, in the future I can throw more money on
hardware.
Will I see a general improvement in performance in 8.3.X over 8.1.11?
2008/4/29 A B [EMAIL PROTECTED]:
Right now, version 8.1.11 on centos.x86-64, intel dual core cpu with 2
sata discs (mirror raid
So, it is time to improve performance, it is running too slow.
AFAIK (as a novice) there are a few general areas:
1) hardware
2) rewriting my queries and table structures
3) using more predefined queries
4) tweak parameters in the db conf files
Of these points:
1) is nothing I can do about right
Hi all,
I'm running the following query to match a supplied text string to an actual
place name which is recorded in a table with extra info like coordinates,
etc.
SELECT ts_rank_cd(textsearchable_index_col , query, 32 /* rank/(rank+1) */)
AS rank,*
FROM gazetteer,
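For readers unfamiliar with the pattern, the usual shape of such a query is roughly the following (the gazetteer schema details and search term are invented for illustration):

```sql
SELECT ts_rank_cd(textsearchable_index_col, query, 32 /* rank/(rank+1) */)
       AS rank, *
FROM gazetteer, to_tsquery('english', 'springfield') AS query
WHERE query @@ textsearchable_index_col
ORDER BY rank DESC
LIMIT 10;
```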
Looking at the EXPLAIN for the query no sequential scans are going on
and everything has an index that points directly at its search criteria.
Example:
Select sum(whatever) from a inner join b on a.something=b.something
WHERE b.day=1 and b.hour=1
Select sum(whatever) from a inner join b
, 2007 2:18 PM
To: Parks, Aaron B.
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Intermitent slow queries
On 2-May-07, at 11:24 AM, Parks, Aaron B. wrote:
My pg 8.1 install on an AMD-64 box (4 processors) with 9 gigs of ram
running RHEL4 is acting kind of odd and I thought I
, Aaron B.
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Intermitent slow queries
Among other possibilities, there's a known problem with slow memory
leaks in various JVM's under circumstances similar to those you are
describing.
The behavior you are describing is typical
In what directory in my linux server will I find these 3 tables?
-Original Message-
From: Alvaro Nunes Melo [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 26, 2005 10:49 AM
To: Christian Paul B. Cosinas
Subject: Re: [PERFORM] Temporary Table
Christian Paul B. Cosinas wrote:
I am
I try to run this command in my linux server.
VACUUM FULL pg_class;
VACUUM FULL pg_attribute;
VACUUM FULL pg_depend;
But it give me the following error:
-bash: VACUUM: command not found
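VACUUM is an SQL command, not a shell command, so it has to be issued through a client session such as psql rather than at the bash prompt (the database name here is illustrative):

```sql
-- From the shell: psql -d mydb -c 'VACUUM FULL pg_class;'
-- Or inside a psql session:
VACUUM FULL pg_class;
VACUUM FULL pg_attribute;
VACUUM FULL pg_depend;
```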
I choose Polesoft Lockspam to fight spam, and you?
http://www.polesoft.com/refer.html
: Tuesday, November 08, 2005 2:11 AM
To: Christian Paul B. Cosinas
Cc: 'Alvaro Nunes Melo'; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Temporary Table
Christian Paul B. Cosinas wrote:
I try to run this command in my linux server.
VACUUM FULL pg_class;
VACUUM FULL pg_attribute
It affects my application since the
database server starts to slow down. Hence functions return very slowly.
Any more ideas about this everyone?
Please.
From:
[EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Alex Turner
Sent: Friday, October 21, 2005
3:42 PM
. We only have a
full server vacuum once a day.
-Original Message-
From: Mark Kirkwood [mailto:[EMAIL PROTECTED]
Sent: Monday, October 24, 2005 3:14 AM
To: Christian Paul B. Cosinas
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Used Memory
I just noticed that as long
Does creating temporary tables
in a function and NOT dropping them affect the performance of the
database?