Hi group,
I could really use your help with this one. I don't have all the
details right now (I can provide more descriptions tomorrow and logs
if needed), but maybe this will be enough:
I have written a PG (8.3.8) module which uses the Flex lexical analyser.
It takes text from a database field and
Option 2:
App Server and Postgres: Dual Xeon 5520 quad core with 12GB ram and
2x 146GB 15k RPM SAS (RAID1) disks
you didn't mention your dataset size, but the second option would
be preferable in most situations since it gives more of the OS memory
for disk caching: 12 GB vs 4 GB for
EXPLAIN ANALYSE on smaller query:
Seq Scan on teksty (cost=0.00..1353.50 rows=1 width=695) (actual
time=0.220..12.354 rows=368 loops=1)
Filter: (id = 1)
Total runtime: 12.488 ms
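If id is reasonably selective, a plain btree index would let the planner avoid the seq scan shown above. A hedged sketch, assuming only the table and column names visible in the plan:

```sql
-- Index the filtered column, then refresh statistics so the
-- planner actually considers the new access path.
CREATE INDEX teksty_id_idx ON teksty (id);
ANALYZE teksty;
```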
Memory config:
# - Memory -
shared_buffers = 24MB
temp_buffers = 8MB
max_prepared_transactions = 5
work_mem =
Hi,
I changed the date comparison to be based on year alone:
extract(YEAR FROM sc.taken_start) = 1900 AND
extract(YEAR FROM sc.taken_end) = 2009 AND
The indexes are now always used. If someone wants to explain why using the
numbers works (a constant) but using a date (another constant?)
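One plausible explanation (an assumption, since the index definitions are not shown in the thread): if the indexes are expression indexes on the extracted year, the planner can only use them when the predicate contains that expression verbatim; a comparison on the raw date column never matches. A sketch with assumed table and index names:

```sql
-- Assumed: sc is a table with taken_start/taken_end date columns.
CREATE INDEX sc_taken_start_year_idx ON sc (extract(YEAR FROM taken_start));

-- Matches the index expression verbatim, so the index is usable:
--   WHERE extract(YEAR FROM sc.taken_start) = 1900
-- Does not match it, so the planner cannot use this index:
--   WHERE sc.taken_start >= '1900-01-01'
```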
Greetings,
in
http://archives.postgresql.org/message-id/1056648218.7041.11.ca...@jester,
it is stated that the performance of temporary tables is the same as a
regular table but without
WAL on the table contents.
I have a datamining-type application which makes heavy use of temporary
temporary tables are handled pretty much like regular tables. The
magic happens at the schema level: a new schema is set up per connection,
so that each connection can access its own temporary tables.
Temporary tables also are not autovacuumed.
And those are pretty much all of the differences.
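The per-connection schema is easy to observe; a small sketch (the table name is invented). Since autovacuum skips temporary tables, an explicit ANALYZE after a bulk load is usually worthwhile:

```sql
CREATE TEMPORARY TABLE scratch (id integer, payload text);

-- The table lives in a session-private pg_temp_N schema:
SELECT n.nspname, c.relname
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
 WHERE c.relname = 'scratch';

ANALYZE scratch;  -- autovacuum will not do this for you
```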
--
Sent via
On 25.05.2010 10:49, Grzegorz Jaśkiewicz wrote:
temporary tables are handled pretty much like regular tables. The
magic happens at the schema level: a new schema is set up per connection,
so that each connection can access its own temporary tables.
Temporary tables also are not autovacuumed.
And those are pretty
2010/5/25 Joachim Worringen joachim.worrin...@iathh.de:
On 25.05.2010 10:49, Grzegorz Jaśkiewicz wrote:
temporary tables are handled pretty much like regular tables. The
magic happens at the schema level: a new schema is set up per connection,
so that each connection can access its own temporary tables.
On 25.05.2010 11:15, Thom Brown wrote:
2010/5/25 Joachim Worringen joachim.worrin...@iathh.de:
And, is there anything like RAM-only tables? I really don't care whether the
staging data is lost in the rare event of a machine crash, or whether the
query crashes due to lack of memory (I make sure
WAL does the same thing for the DB that journaling does for the FS.
Plus it allows point-in-time recovery (PITR).
As for the RAM, it will stay in RAM as long as the OS decides to keep it in
the RAM cache, and/or it's in the shared buffers memory.
Unless you have a lot of doubt about the two, I don't think it makes
too much
We're using a function whose query, when run as a SELECT statement outside of
the function, takes roughly 1.5s to complete, whereas running the identical
query within the function takes around 55s to complete.
We are lost as to why placing this query within a function, as opposed to
substituting the
On 25.05.2010 11:38, Grzegorz Jaśkiewicz wrote:
WAL does the same thing for the DB that journaling does for the FS.
Plus it allows point-in-time recovery (PITR).
As for the RAM, it will stay in RAM as long as the OS decides to keep it in
the RAM cache, and/or it's in the shared buffers memory.
Or until I commit the
In response to Tyler Hildebrandt :
We're using a function whose query, when run as a SELECT statement outside of
the function, takes roughly 1.5s to complete, whereas running the identical
query within the function takes around 55s to complete.
select * from fn_medirota_get_staff_leave_summary(6);
On Tuesday 25 May 2010 11:00:24 Joachim Worringen wrote:
On 25.05.2010 10:49, Grzegorz Jaśkiewicz wrote:
temporary tables are handled pretty much like regular tables. The
magic happens at the schema level: a new schema is set up per connection,
so that each connection can access its own temporary tables.
I think your problem is here:
SELECT INTO current_user * FROM
fn_medirota_validate_rota_master(in_currentuser);
The planner has no knowledge of how many rows this function returns
if it doesn't know the actual parameter. Because of this, the query
enforces a seq-scan. Try to rewrite
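One common rewrite (a sketch, not the thread's confirmed fix) is to run the call through EXECUTE, so the query is planned at run time with the actual value instead of the generic prepared plan. Note that EXECUTE ... USING needs PostgreSQL 8.4 or later; the function body below is illustrative only:

```sql
CREATE OR REPLACE FUNCTION fn_example(in_currentuser integer)
RETURNS void AS $$
DECLARE
    rec record;
BEGIN
    -- Planned at run time with the real parameter value:
    EXECUTE 'SELECT * FROM fn_medirota_validate_rota_master($1)'
       INTO rec
      USING in_currentuser;
    -- ... use rec as before ...
END;
$$ LANGUAGE plpgsql;
```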
Good day list
I would appreciate some comments to the following:
I have a Dell PowerEdge SC1420 server with 2 GB of RAM, 1 hard disk (73 GB
SCSI Ultra320), and 2 Xeons (4 cache), with PGSQL 7.3.7
running GNU/Linux Red Hat Enterprise 4.0 for 32-bit (kernel 2.6.9-5.ELsmp)
Nahant (ES)
and another server
On Tue, May 25, 2010 at 02:04:07PM +, Juan Pablo Sandoval Rivera wrote:
Please give me recommendations on the configuration...
The subject line of this message said you're trying to run PostgreSQL 7.3.7. I
hope that's a typo, and you really mean 8.3.7, in which case this suggestion
boils
On Tue, May 25, 2010 at 9:41 AM, Tyler Hildebrandt
ty...@campbell-lange.net wrote:
I think, your problem is here:
SELECT INTO current_user * FROM
fn_medirota_validate_rota_master(in_currentuser);
The planner has no knowledge of how many rows this function returns
if it doesn't know the
On Tue, May 25, 2010 at 10:55 AM, Merlin Moncure mmonc...@gmail.com wrote:
On Tue, May 25, 2010 at 9:41 AM, Tyler Hildebrandt
ty...@campbell-lange.net wrote:
I think, your problem is here:
SELECT INTO current_user * FROM
fn_medirota_validate_rota_master(in_currentuser);
The planner has no
Have you read this?
http://blog.endpoint.com/2008/12/why-is-my-function-slow.html
99% of the 'function is slow' problems are caused by this.
Have you checked the difference between explain and prepare + explain execute?
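The check suggested above can be sketched like this, with staff_leave standing in for the query inside the function (the real query is not shown in the thread):

```sql
-- Plan the optimizer builds when it can see the literal value:
EXPLAIN ANALYZE SELECT * FROM staff_leave WHERE staff_id = 6;

-- Plan it builds for a prepared statement, i.e. what pl/pgsql gets,
-- where the parameter value is unknown at plan time:
PREPARE p(integer) AS SELECT * FROM staff_leave WHERE staff_id = $1;
EXPLAIN EXECUTE p(6);
DEALLOCATE p;
```

If the two plans differ, the function is almost certainly hitting the generic-plan problem described in the linked post.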
Tyler Hildebrandt ty...@campbell-lange.net 05/25/10 4:59 AM
We're
Sorry Jesper, I thought I had mentioned it: our dataset is 18 GB.
Pedro Axelrud
http://mailee.me
http://softa.com.br
http://flavors.me/pedroaxl
On Tue, May 25, 2010 at 03:21, Jesper Krogh jes...@krogh.cc wrote:
Option 2:
App Server and Postgres: Dual Xeon 5520 quad core with 12GB ram and 2x
On May 21, 2010, at 8:26 PM, Matthew Wakeling wrote:
On Fri, 21 May 2010, Richard Yen wrote:
Any ideas why the query planner chooses a different query plan when using
prepared statements?
This is a FAQ. Preparing a statement makes Postgres create a plan, without
knowing the values that
David Jarvis thanga...@gmail.com wrote:
The value for *random_page_cost* was at 2.0; reducing it to 1.1
had a massive performance improvement (nearly an order of
magnitude). While the results now return in 5 seconds (down from
~85 seconds)
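For anyone reproducing this, the change is a single planner setting and can be tried per session before touching postgresql.conf. The query below is illustrative, reusing the climate.measurement table mentioned elsewhere in this digest:

```sql
-- Lowering random_page_cost tells the planner that random I/O is nearly
-- as cheap as sequential I/O (true for well-cached or fast-seek storage).
SET random_page_cost = 1.1;  -- per-session trial; the default is 4.0

-- Re-run the slow query and compare plans, e.g.:
EXPLAIN ANALYZE
SELECT count(*) FROM climate.measurement
 WHERE taken BETWEEN '1900-01-01' AND '2009-12-31';
```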
It sounds as though the active portion of your
On Tue, May 25, 2010 at 11:27:08AM -0700, Scott Carey wrote:
On May 21, 2010, at 8:26 PM, Matthew Wakeling wrote:
On Fri, 21 May 2010, Richard Yen wrote:
Any ideas why the query planner chooses a different query plan when using
prepared statements?
This is a FAQ. Preparing a statement
On Tue, May 25, 2010 at 5:58 AM, Konrad Garus konrad.ga...@gmail.com wrote:
2010/5/24 Merlin Moncure mmonc...@gmail.com:
*) a page fault to disk is a much bigger deal than a fault to pg cache
vs os/ cache.
That was my impression. That's why I did not touch our 2/16 GB setting
right away. I
Hi, Kevin.
Thanks for the response.
It sounds as though the active portion of your database is pretty
much cached in RAM. True?
I would not have thought so; there are seven tables, each with 39 to 43
million rows as:
CREATE TABLE climate.measurement (
id bigserial NOT NULL,
taken date
David Jarvis thanga...@gmail.com writes:
It sounds as though the active portion of your database is pretty
much cached in RAM. True?
I would not have thought so; there are seven tables, each with 39 to 43
million rows as: [ perhaps 64 bytes per row ]
The machine has 4GB of RAM, donated to
On Tue, May 25, 2010 at 4:26 PM, David Jarvis thanga...@gmail.com wrote:
shared_buffers = 1GB
temp_buffers = 32MB
work_mem = 32MB
maintenance_work_mem = 64MB
effective_cache_size = 256MB
Shouldn't effective_cache_size be significantly larger?
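As a rule of thumb (an assumption, not stated in the thread): effective_cache_size allocates nothing; it only tells the planner how much of the database the OS is likely to be caching, so it is commonly set to roughly half to three-quarters of RAM. For the 4 GB machine described above, a sketch:

```
# postgresql.conf sketch for a 4 GB machine
shared_buffers = 1GB
effective_cache_size = 3GB  # planner hint only; no memory is reserved
```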
--
Rob Wultsch
wult...@gmail.com
Hi, Tom.
Yes, that is what happened, making the tests rather meaningless and giving
me the false impression that the indexes were being used. They were, but only
because of cached results. With multiple users making different queries, the
performance will return to ~80s per query.
I also tried
On Tue, May 11, 2010 at 2:00 PM, Carlo Stonebanks
stonec.regis...@sympatico.ca wrote:
I am concerned that there is such a lag between all the index and function
scan start/complete times and the nested loops starting. I have
reformatted the SLOW PLAN results below to make them easier to
On Wed, May 12, 2010 at 1:45 AM, venu madhav venutaurus...@gmail.com wrote:
[Venu] Yes, autovacuum is running every hour; I can see it in the log
messages. All the autovacuum settings are disabled except that it
should run every hour. This application runs on an embedded box, so