TATEISHI KOJI wrote:
> Issuing exactly the same query on PostgreSQL 9.3.2 as on PostgreSQL 8.2.21
> slows the response by 6.4 ms on average.
> What could be the cause?
> The measurement method is as follows:
> - Install PostgreSQL 8.2.21
> - ★ Measure
> - Export a PostgreSQL dump
rverghese wrote:
> We are on Postgres 9.5, and have been running a daily vacuum analyze on the
> entire database since 8.2
> The data has grown exponentially since, and we are seeing that queries are
> now being significantly affected while the vacuum analyze runs. The query
> database is a Slony
Akihiko Odaki wrote:
> On 2017-06-23 20:20, Albe Laurenz wrote:
>> You could either try to do something like
>>
>> SELECT *
>> FROM (SELECT "posts".*
>>       FROM "posts"
>>       JOIN "follows" ON "follows"."
Akihiko Odaki wrote:
> I am having a problem with nested loop join.
>
> A database has 2 tables: "posts" and "follows".
> Table "posts" have two columns: "timestamp" and "account".
> Table "follows" have two columns: "target_account" and "owner_account".
> The database also has an index on
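A sketch of the query shape described, reconstructed from the schema above (the poster's exact SQL is truncated, so the column usage and LIMIT are assumptions):
SELECT "posts".*
FROM "posts"
JOIN "follows" ON "follows"."target_account" = "posts"."account"
WHERE "follows"."owner_account" = $1  -- the account whose timeline is built
ORDER BY "posts"."timestamp" DESC
LIMIT 100;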
Armand Pirvu wrote:
> Running 9.5.2
>
> I have the following update and ran into a bit of trouble. I realize the
> tables
> involved have quite a lot of data, but here goes
>
>
> UPDATE
> tf_transaction_item_person TRANS
> SET
> general_ledger_code = PURCH.general_ledger_code,
>
Pat Maddox wrote:
> I’ve been asked to help with a project dealing with slow queries. I’m brand
> new to the project, so I
> have very little context. I’ve gathered as much information as I can.
>
> I’ve put the schema, query, and explain info in gists to maintain their
> formatting.
>
> We
Philippe Ivaldi wrote:
> The explain analyze of the following code is https://explain.depesz.com/s/VhOv
>
> [OTHER CTEs - TRUNCATED CODE]
> SELECT
> count(*)
> FROM dossier d
> LEFT JOIN vp ON vp.dossier_id = d.id
> LEFT JOIN affected_ccial ON affected_ccial.dossier_id = d.id
> LEFT JOIN
Sven Kerkling wrote:
> This one is the burden, running at least 100 seconds:
>
> SELECT b.id, b.status
> FROM export b, masterNew mb
> WHERE mb.sperre IS NULL
> AND mb.status IS NULL
> AND b.id = mb.id
> LIMIT 100;
>
> http://explain.depesz.com/s/eAqG
I think the
Artem Tomyuk wrote:
> Does Postgres use shared_buffers during a seq_scan?
> In what way can I optimize a seq_scan on big tables?
If the estimated table size is less than a quarter of shared_buffers,
the whole table will be read to the shared buffers during a sequential scan.
If the table is larger
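To check that threshold for a given table, something like this works on 9.6 and later (the table name is a placeholder):
SELECT pg_table_size('big_table') <
       pg_size_bytes(current_setting('shared_buffers')) / 4
       AS whole_table_goes_to_shared_buffers;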
Tomas Vondra wrote:
> Yes, that's clearly the culprit here. In both cases we estimate there are
> only ~4000 tuples in the hash, and 9.3 sizes the hash table to have at
> most ~10 tuples per bucket (in a linked list).
>
> However we actually get ~3M rows, so there will be ~3000 tuples per
>
Tomas Vondra wrote:
> On 01/29/2016 04:17 PM, Albe Laurenz wrote:
>> I have a query that runs *slower* if I increase work_mem.
>>
>> The execution plans are identical in both cases, except that a temp file
>> is used when work_mem is smaller.
>
I have a query that runs *slower* if I increase work_mem.
The execution plans are identical in both cases, except that a temp file
is used when work_mem is smaller.
The relevant lines of EXPLAIN ANALYZE output are:
With work_mem='100MB':
-> Hash Join (cost=46738.74..285400.61 rows=292
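The effect is easy to reproduce with a self-contained sort: the plan stays identical, only the sort method changes with work_mem.
SET work_mem = '4MB';
EXPLAIN (ANALYZE, BUFFERS)
SELECT g FROM generate_series(1, 2000000) g ORDER BY g DESC;
-- reports "Sort Method: external merge  Disk: ...kB"
SET work_mem = '256MB';
-- the same query now reports "Sort Method: quicksort  Memory: ...kB"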
Phil S wrote:
> I am running Postgresql on a Windows Server 2008 server. I have noticed that
> queries have very high
> planning times now and then. Planning times go down for the same query
> immediately after the query
> runs the first time, but then go up again if the query is not
Adam Brusselback wrote:
> I appreciate the response Tom, and you are correct that the workaround would
> not work in my case.
>
> So no index expression can return its value without recomputing, without
> that workaround? I
> learn something new every day, it seems.
> Thank you for the
Jonathan Rogers wrote:
>> Look at the EXPLAIN ANALYZE output for both the custom plan (one of the
>> first five executions) and the generic plan (the one used from the sixth
>> time on) and see if you can find and fix the cause for the misestimate.
>
> Yes, I have been looking at both plans and
Jonathan Rogers wrote:
> I have a very complex SELECT for which I use PREPARE and then EXECUTE.
> The first five times I run "explain (analyze, buffers) execute ..." in
> psql, it takes about 1s. Starting with the sixth execution, the plan
> changes and execution time doubles or more. The slower
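A minimal way to watch that switch, and to override it on PostgreSQL 12 or later (the statement itself is only illustrative):
PREPARE q(int) AS SELECT count(*) FROM pg_class WHERE relpages > $1;
EXPLAIN EXECUTE q(10);  -- repeat: from the sixth execution a generic plan may be used
SET plan_cache_mode = force_custom_plan;  -- PostgreSQL 12+: always re-plan with the actual parameter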
Duane Murphy wrote:
We are trying to improve performance by avoiding the temp file creation.
LOG: temporary file: path base/pgsql_tmp/pgsql_tmp8068.125071, size 58988604
STATEMENT: SELECT iiid.installed_item__id, item_detail.id,
item_detail.model_id, item_detail.type
FROM
avpro avpro wrote:
In the pgsql documentation
(http://www.postgresql.org/docs/9.1/static/sql-createtrigger.html)
I haven't seen anything about how data inserted into the new table is
affected by a BEFORE INSERT trigger compared with an AFTER INSERT trigger,
or anything related.
Tory M Blue wrote:
I've got some beefy hardware but have some tables that are over 57GB raw and
end up at 140GB size
after indexes are applied. One index creation took 7 hours today. So it's
time to dive in and see
where I'm lacking and what I should be tweaking.
I looked at pgtune again
Alex Goncharov wrote:
Thank you, Kevin -- this is helpful.
But it still leaves questions for me.
Alex Goncharov alex.goncharov@gmail.com wrote:
The whole thing is aborted then, and the good 99 records are not
making it into the target table.
Right. This is one reason people often
[about loading large amounts of data]
Felipe Santos wrote:
This might also help:
http://www.postgresql.org/docs/9.1/static/populate.html
Bulk loading tables from text files is log-free in almost all RDBMSs
(Postgres' COPY is one of them).
The reason is that the database doesn't need to
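The biggest win is when the table is created or truncated in the same transaction that loads it; with wal_level = minimal, COPY can then skip WAL entirely. A sketch (table and file path are placeholders, and server-side COPY needs appropriate privileges):
BEGIN;
CREATE TABLE staging_load (id integer, payload text);
COPY staging_load FROM '/tmp/data.csv' (FORMAT csv);
COMMIT;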
Please keep the list on CC: in your responses.
Benjamin Dugast wrote:
2014-07-18 13:11 GMT+02:00 Albe Laurenz laurenz.a...@wien.gv.at:
This sounds a lot like checkpoint I/O spikes.
Check with the database server log if the freezes coincide with checkpoints.
You can increase
Benjamin Dugast wrote:
I'm working on Postgres 9.3.4 for a project.
We are using Scala, Akka and JDBC to insert data into the database; we have
around 25M inserts to do,
which are basically lines from 5000 files. We issue a DELETE according to the
file (mandatory) and
then a COPY each 1000
Gezeala M. Bacuño II wrote:
Does anybody have a similar setup:
[a] 1 physical machine with half a terabyte of RAM, Xeon E7-8837 @ 2.67GHz,
huge ZFS pools + ZIL +
L2ARC
[b] master DB pg9.3 postgres_fdw with read/write capabilities, with
tablespaces and WAL on separate
zpools, archiving
I wrote:
Jeff Janes wrote:
All that said, there has always been a recommendation of caution around
using NFS as a backing store for PG, or any RDBMS..
I know that Oracle recommends it - they even built an NFS client
into their database server to make the most of it.
Last I heard
Сурен Арустамян wrote:
I'm using postgresql 9.3.4 on Red Hat Enterprise Linux Server release 6.5
(Santiago)
Linux 193-45-142-74 2.6.32-431.17.1.el6.x86_64 #1 SMP Fri Apr 11 17:27:00 EDT
2014 x86_64 x86_64
x86_64 GNU/Linux
Server specs:
4x Intel(R) Xeon(R) CPU E7-4870 @ 2.40GHz (40
John Melesky wrote:
I just learned that NFS does not use a file system cache on the client side.
That's ... incorrect. NFS is cache-capable. NFSv3 (I think? It may have been
v2) started sending
metadata on file operations that was intended to allow for client-side
caches. NFSv4 added all
Jeff Janes wrote:
All that said, there has always been a recommendation of caution around
using NFS as a backing store for PG, or any RDBMS..
I know that Oracle recommends it - they even built an NFS client
into their database server to make the most of it.
Last I heard (which
I just learned that NFS does not use a file system cache on the client side.
On the other hand, PostgreSQL relies on the file system cache for performance,
because beyond a certain amount of shared_buffers performance will suffer.
Together these things seem to indicate that you cannot get good
Stephen Frost wrote:
All that said, there has always been a recommendation of caution around
using NFS as a backing store for PG, or any RDBMS..
I know that Oracle recommends it - they even built an NFS client
into their database server to make the most of it.
Yours,
Laurenz Albe
Karl Denninger wrote:
I've been doing a bit of benchmarking and real-world performance
testing, and have found some curious results.
[...]
The odd thing is that I am getting better performance with a 128k record
size on this application than I get with an 8k one!
[...]
What I am curious
Hans Drexler wrote:
We are porting an application to PostgreSQL. The application already
runs with DB2 (LUW version) and Oracle. One query in particular executes
slower on Postgres than it does on other Database platforms, notably DB2
LUW and Oracle. (Please understand, we are not comparing
desmodemone wrote:
max is 1024mb.
That must be a typo.
It can surely be much higher.
Yours,
Laurenz Albe
kosalram Babu Chellappa wrote:
We have a requirement to store images/documents with an average size of
1-2MB in a PostgreSQL database.
We have PostgreSQL 9.2.4 running on Red Hat Linux 64-bit. We decided to set up
a standalone PostgreSQL
server without streaming replication to host the
monalee_dba wrote:
Eg. SELECT col1, col2, col3, col10 FROM table1;
For the above query, if I don't specify an ORDER BY clause, in what order
will the query planner return the selected data?
Because I have a huge table, and when I apply ORDER BY col1, col2... in the
query the
Johan Loubser wrote:
I am tasked with getting specs for a postgres database server for the
core purpose of running moodle at our university.
The main question at the moment is: 12-core AMD or 6/8-core (E series)
Intel.
What would be the most important metric in planning an enterprise level
amul sul wrote:
I am trying to run DBT5 to test the performance of PG 9.2.4,
but execution failed due to an "undefined symbol: PQescapeLiteral" error in
output_dir/bh/bh.out
Full error as follow:
[...]
BrokerageHouseMain: symbol lookup error: BrokerageHouseMain: undefined
symbol: PQescapeLiteral
Niels Kristian Schjødt wrote:
I am experiencing a similar issue as the one mentioned in this post
http://stackoverflow.com/questions/3100072/postgresql-slow-on-a-large-table-with-arrays-and-lots-of-
updates/3100232#3100232
However the post is written for an 8.3 installation, so I'm wondering if
Jayadevan M wrote:
If a table takes 100 MB while on disk, approximately how much space will it
take in RAM/database
buffer?
100 MB. A block in memory has the same layout as a block on disk.
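If you want to see how much of a table currently sits in shared buffers, the pg_buffercache extension can tell you (table name assumed):
CREATE EXTENSION IF NOT EXISTS pg_buffercache;
SELECT pg_size_pretty(count(*) * current_setting('block_size')::bigint) AS cached
FROM pg_buffercache
WHERE relfilenode = pg_relation_filenode('mytable')
  AND reldatabase = (SELECT oid FROM pg_database
                     WHERE datname = current_database());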
Yours,
Laurenz Albe
Tory M Blue wrote:
My postgres db ran out of space. I have 27028 files in the pg_xlog directory.
I'm unclear what
happened; this has been running flawlessly for years. I do have archiving
turned on and run an archive
command every 10 minutes.
I'm not sure how to go about cleaning this up, I
AJ Weber wrote:
What is the unit-of-measure used for default_statistics_target?
The number of entries in pg_stats.histogram_bounds or pg_stats.most_common_vals.
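To raise it for a single column rather than globally (table and column names assumed):
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 500;
ANALYZE orders;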
Yours,
Laurenz Albe
classical_89 wrote:
Hi everyone, I have a question. I have a table with large data (I used the
bytea datatype and inserted binary content into the table), so Postgres
created a TOAST table to store the out-of-line values.
Assume that my table is tbl_test and the TOAST table's OID is 16816
suhas.basavaraj12 wrote:
Can I delete the contents of this folder? I have observed a couple of
times that
this folder got cleaned automatically.
These files are in use and you should not delete them.
If you need them to go right now, cancel the queries that
create temporary files.
If there are any
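The number embedded in each file name (pgsql_tmpPID.N) is the PID of the backend that created it, so on 9.2 and later you can find and cancel the query (the PID here is just an example taken from a file name):
SELECT pid, query FROM pg_stat_activity WHERE pid = 8068;
SELECT pg_cancel_backend(8068);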
David Popiashvili wrote:
I have a database with a few hundred million rows. I'm running the
following query:
select * from Payments as p
inner join PaymentOrders as po
on po.Id = p.PaymentOrderId
inner join Users as u
On u.Id = po.UserId
INNER JOIN Roles as r
on u.RoleId = r.Id
Where
K P Manoj wrote:
I am facing a query performance problem on one of my test servers.
How can I create an index with a table column name?
EXPLAIN select xxx.* from xxx xxx where exists (select 1 from tmp
where mdc_domain_reverse like
xxx.reverse_pd || '.%');
QUERY
K P Manoj wrote:
Please find the details of table description
test=# \d xxx
        Table "public.xxx"
   Column   | Type | Modifiers
------------+------+-----------
 crawler_id
Eildert Groeneveld wrote:
I am currently implementing a compressed binary storage scheme for
genotyping data. These are basically vectors of binary data which may be
megabytes in size.
Our current implementation uses the data type bit varying.
What we want to do is very simple: we want
Wu Ming wrote:
I had installed postgreSQL v9.2 in Windows XP SP3.
My PC specs:
Processor: Pentium Dual Core 2.09 GHz
RAM: 2GB
The postgreSQL is run as windows service (manual).
The problem is that the postgreSQL service uses a lot of memory and lags
the OS when running for a long time (about 2
Pedro Jiménez Pérez wrote:
Subject: [PERFORM] help with too slow query
Hello,
I have this table definition:
CREATE TABLE ism_floatvalues
(
id_signal bigint NOT NULL, -- indicates which signal this value belongs to
AndyG wrote:
Increasing the statistics for test_result.id_recipe_version
had no effect?
I increased the statistics in steps up to 5000 (with vacuum analyse); it
seems
to be as good as it gets.
http://explain.depesz.com/s/z2a
Just out of curiosity, do you get a better plan with
But why? Is there a way to force the planner into this?
I don't know enough about the planner to answer the why,
but the root of the problem seems to be the mis-estimate
for the join between test_result and recipe_version
(1348 instead of 21983 rows).
That makes the planner think that a nested
Andy wrote:
I have been pulling my hair out over the last few days trying to get
any useful performance out of the
following
painfully slow query.
The query is JPA created, I've just cleaned the aliases to make it
more readable.
Using 'distinct' or 'group by' delivers about the same results,
On Mon, Oct 29, 2012 at 6:05 AM, Albe Laurenz
laurenz.a...@wien.gv.at wrote:
I am configuring streaming replication with hot standby
with PostgreSQL 9.1.3 on RHEL 6 (kernel 2.6.32-220.el6.x86_64).
PostgreSQL was compiled from source.
It works fine, except that starting the standby took
Sean Woolcock wrote:
I have a large (3 million row) table called tape that represents
files,
which I join to a small (100 row) table called filesystem that
represents
filesystems. I have a web interface that allows you to sort by a
number of
fields in the tape table and view
Heikki Linnakangas wrote:
Why does WAL replay read much more than it writes?
I thought that pretty much every block read during WAL
replay would also get dirtied and hence written out.
Not necessarily. If a block is modified and written out of the buffer
cache before the next checkpoint, the
k...@rice.edu wrote:
If you do not have good random I/O performance, log replay is nearly
unbearable.
also, what io scheduler are you using? if it is cfq change that to
deadline or noop.
that can make a huge difference.
We use the noop scheduler.
As I said, an identical system performed
AndyG wrote:
A marginal improvement.
http://explain.depesz.com/s/y63
That's what I thought.
Increasing the statistics for test_result.id_recipe_version
had no effect?
I am going to normalize the table some more before partitioning.
How do you think that partitioning will help?
Yours,
I am configuring streaming replication with hot standby
with PostgreSQL 9.1.3 on RHEL 6 (kernel 2.6.32-220.el6.x86_64).
PostgreSQL was compiled from source.
It works fine, except that starting the standby took forever:
it took the system more than 80 minutes to replay 48 WAL files
and connect to
Alvaro Herrera wrote:
I am configuring streaming replication with hot standby
with PostgreSQL 9.1.3 on RHEL 6 (kernel 2.6.32-220.el6.x86_64).
PostgreSQL was compiled from source.
It works fine, except that starting the standby took forever:
it took the system more than 80 minutes to replay
Sam Wong wrote:
I am investigating a performance issue involving LIKE '%'
on an index in a complex query with joins.
Q1.
SELECT * FROM shipments WHERE shipment_id LIKE '12345678%'
Q2.
> SELECT * FROM shipments WHERE shipment_id >= '12345678' AND
> shipment_id < '12345679'
[Q1 and Q2
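For the prefix form to use a btree index under a non-C locale, the index typically needs text_pattern_ops; a sketch with the obvious column:
CREATE INDEX shipments_id_pattern_idx
    ON shipments (shipment_id text_pattern_ops);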
Strahinja Kustudic wrote:
I have a Postgresql 9.1 dedicated server with 16 cores, 96GB RAM and RAID10
15K SCSI drives
which is running CentOS 6.2 x64. This server is mainly used for
inserting/updating large amounts of
data via copy/insert/update commands, and seldom for running select
John Nash wrote:
We have been doing some testing with an ISD transaction and we had
some problems that we posted here.
The answers we got were very kind and useful but we couldn't solve the
problem.
Could you refer to the threads so that you don't get the same advice again?
We have
henk de wit wrote:
I'm using Postgres 9.1 on Debian Lenny and via a Java server (JBoss AS
6.1) I'm executing a simple
select ... for update query:
SELECT
importing
FROM
customer
WHERE
id = :customer_id
FOR UPDATE NOWAIT
Once every 10 to 20 times Postgres fails to
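If the intermittent failures come from the row being locked for just a moment, one alternative on 9.3 and later is a bounded wait instead of NOWAIT (42 stands in for :customer_id):
SET lock_timeout = '2s';
SELECT importing FROM customer WHERE id = 42 FOR UPDATE;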
Jon Leighton wrote:
I'm one of the developers of the Ruby on Rails web framework.
In some situations, the framework generates an empty transaction
block.
I.e. we sent a BEGIN and then later a COMMIT, with no other queries in
the middle.
We currently can't avoid doing this, because a user
AI Rumman wrote:
I execued the query:
ALTER TABLE entity ADD COLUMN owner_type char(1) NOT NULL default 'U';
The db is stuck. The entity table has 2064740 records.
Watching locks:
[all locks are granted]
Any idea for the db stuck?
To add the column, PostgreSQL has to modify all rows in
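On versions before 11, a common workaround is to split the operation so the full-table rewrite is avoided (the backfill is best done in batches):
ALTER TABLE entity ADD COLUMN owner_type char(1);           -- catalog-only, instant
ALTER TABLE entity ALTER COLUMN owner_type SET DEFAULT 'U'; -- affects only new rows
UPDATE entity SET owner_type = 'U' WHERE owner_type IS NULL; -- backfill existing rows
-- finally: ALTER TABLE entity ALTER COLUMN owner_type SET NOT NULL;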
Aryan Ariel Rodriguez Chalas wrote:
I'm working with an application that connects to a remote server database
using the libpq library over
the internet, but even a simple query is really slow, even though I've done
PostgreSQL tuning and the tables
are indexed, so I want to know:
-Why is
Eileen wrote:
I have written some Java code which builds a postgresql function.
That function calls approximately 6
INSERT statements with a RETURNING clause. I recreate and re-run the
function about 900,000 times. I
use JDBC to execute these functions on postgresql 8.3 on Windows.
When I
Markus Innerebner wrote:
I am doing some runtime experiments in my implementation, which is
computing multi-modal range queries
for a query point (if you want to know details check the website:
www.isochrones.inf.unibz.it).
The network is explored using Dijkstra's shortest path algorithm that
Jayadevan M wrote:
I have a plpgsql function that takes a few seconds (less than 5) when
executed from psql. The same
function, when invoked from java via a prepared statement takes a few
minutes. There are a few queries
in the function. Out of these, the first query takes input parameters
for
codevally wrote:
I have a question regarding PostgreSQL 9.1 indexing.
I have a table and want to create an index for a column, and I want to
store the data with time zone in that column. The questions are:
1. Can I create an index on a column which stores timestamp with time
zone?
If
Yan Chunlu wrote:
I have logged one day of data and found that checkpoints occur rather
frequently (details:
https://gist.github.com/3088338). Not sure if it is normal, but the
average time of a checkpoint is
about 100sec~200sec, which seems related to my settings:
checkpoint_segments = 64
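To see what triggers them, enable checkpoint logging (shown for 9.4+; older versions need an edit to postgresql.conf) and check whether the log says "checkpoint starting: xlog" (segments exhausted) or "checkpoint starting: time":
ALTER SYSTEM SET log_checkpoints = on;
SELECT pg_reload_conf();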
Yan Chunlu wrote:
I have grabbed one day's slow query log and analyzed it with pgFouine; to
my surprise, the slowest query
is just a simple select statement:
select diggcontent_data_message.thing_id,
diggcontent_data_message.KEY,
diggcontent_data_message.value, diggcontent_data_message.kind FROM
Hermann Matthes wrote:
I want to implement a paged query feature, where the user can enter in
a dialog how many rows he wants to see. After displaying the first
page
of rows, he can push a button to display the next/previous page.
At the database level I could use LIMIT to implement this
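LIMIT/OFFSET is the simplest implementation, but it re-reads all skipped rows, so later pages get slower; keyset pagination stays fast (table and column names assumed):
-- first page
SELECT id, title FROM documents ORDER BY id LIMIT 50;
-- next page: pass the last id the user saw
SELECT id, title FROM documents WHERE id > 12345 ORDER BY id LIMIT 50;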
Stanislaw Pankevich wrote:
PostgreSQL db, 30 tables with number of rows < 100 (not huge) - the
fastest way to clean each
non-empty table and reset the unique identifier column of empty ones
I wonder, what is the fastest way to accomplish this kind of task in
PostgreSQL. I am
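For a fixed list of small tables, a single TRUNCATE is usually fastest and resets the sequences in the same statement (table names assumed):
TRUNCATE TABLE t1, t2, t3 RESTART IDENTITY;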
Josh Berkus wrote:
On 6/20/12 3:27 PM, Midge Brown wrote:
I need to move a postgres 9.0 database -- with tables, indexes, and WAL
associated with 16
tablespaces on 12 logical drives -- to an existing raid 10 drive in another
volume on the same server.
Once I get the data off the initial
Cédric Villemain wrote:
- The statistics gathered by ANALYZE are independent of the tablespace
containing the table.
yes.
- The tablespace containing the table has no influence on query planning
unless seq_page_cost or random_page_cost has been set on the
tablespace.
yes.
Could somebody confirm or refute the following statements, please?
- The statistics gathered by ANALYZE are independent of the tablespace
containing the table.
- The tablespace containing the table has no influence on query planning
unless seq_page_cost or random_page_cost has been set on the
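The per-tablespace override mentioned looks like this on 9.0 and later (tablespace name assumed):
ALTER TABLESPACE fast_ssd SET (random_page_cost = 1.1, seq_page_cost = 1.0);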
Ivan Voras wrote:
I have a SQL function (which I've pasted below) and while testing its
code directly (outside a function), this is the normal, default
plan:
http://explain.depesz.com/s/vfP (67 ms)
and this is the plan with enable_seqscan turned off:
http://explain.depesz.com/s/EFP (27
Martin Grotzke wrote:
You could try different kernel I/O elevators and see if that improves
something.
I have had good experiences with elevator=deadline and
elevator=noop.
Ok, great info.
Honestly, I'm not sure which device to look at to check the current
configuration.
mount/fstab
Martin Grotzke wrote:
we want to see if we can gain better performance with our postgresql
database. In the last year the amount of data grew from ~25G to now
~140G and we're currently developing a new feature that needs to get
data faster from the database. The system is both read and write
Rafael Martinez wrote:
I am wondering why the time reported by \timing in psql is not the same
as the time reported by duration in the log file when log_duration or
log_min_duration_statement are on? I cannot find any information about
this in the documentation.
\timing measures the time on
Heikki Linnakangas wrote:
Disabling OpenSSL compression in the source (which
is possible since OpenSSL 1.0.0) does not give me any performance
improvement.
If it doesn't give you any performance improvement then you haven't
disabled compression. Modern CPUs can easily saturate 1 GbitE with
Marti Raudsepp wrote:
I can't get oprofile to run on this RHEL6 box, it doesn't record
anything, so all I can test is total query duration.
Maybe this helps you with OProfile?
http://people.planetpostgresql.org/andrew/index.php?/archives/224-The-joy-of-Vx.html
Dang, you're right, I wasn't
Marti Raudsepp wrote:
Disabling OpenSSL compression in the source (which
is possible since OpenSSL 1.0.0) does not give me any performance
improvement.
If it doesn't give you any performance improvement then you haven't
disabled compression. Modern CPUs can easily saturate 1 GbitE with
Merlin Moncure wrote:
We selected a 30MB bytea with psql connected with
-h localhost and found that it makes a huge
difference whether we have SSL encryption on or off.
Without SSL the SELECT finished in about a second,
with SSL it took over 23 seconds (measured with
\timing in psql).
Heikki Linnakangas wrote:
We selected a 30MB bytea with psql connected with
-h localhost and found that it makes a huge
difference whether we have SSL encryption on or off.
Without SSL the SELECT finished in about a second,
with SSL it took over 23 seconds (measured with
\timing in psql).
We selected a 30MB bytea with psql connected with
-h localhost and found that it makes a huge
difference whether we have SSL encryption on or off.
Without SSL the SELECT finished in about a second,
with SSL it took over 23 seconds (measured with
\timing in psql).
During that time, the CPU is 100%
Carlo Stonebanks wrote:
The whole topic of messing with stats makes my head spin but I am concerned
about some horridly performing queries that have had bad row estimates and
others which always choose seq scans when indexes are available. Reading up
on how to improve planner estimates, I
Robert Haas wrote:
[explanation of how Oracle locks on Updates involving foreign keys]
Yeah, that seems odd. I assume they know what they're doing; they're
Oracle, after all. It does sound, too, like they have column level
locks based on your comment about an EXCLUSIVE lock on the modified
Greg Smith wrote:
Recently I've made a number of unsubstantiated claims that the deadline
scheduler on Linux does bad things compared to CFQ when running
real-world mixed I/O database tests. Unfortunately every time I do one
of these I end up unable to release the results due to client
Robert Haas wrote:
Just for kicks I tried this out and the behavior is as the OP
describes: after a little poking around, it sees that the INSERT grabs
a share-lock on the referenced row so that a concurrent update can't
modify the referenced column.
It's not really clear how to get around
waldomiro wrote:
I need to know how often Postgres goes to disk to get
blocks and how often it goes to the cache. Which is the
statistics table, and which field indicates blocks
read from disk versus from the memory cache?
The view pg_statio_all_tables will show you the number
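For example (note that heap_blks_read counts reads that missed shared buffers; those blocks may still have come from the OS cache rather than disk):
SELECT relname,
       heap_blks_read AS blocks_read_from_disk,
       heap_blks_hit  AS blocks_found_in_cache
FROM pg_statio_all_tables
ORDER BY heap_blks_read DESC
LIMIT 10;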
SHIVESH WANGRUNGVICHAISRI wrote:
The main question is: how do I configure Postgresql such that
it's most efficient for storing large BLOB at a high-rate?
Referring to what you wrote on the web site you quoted,
I would guess that neither tuning WAL nor tuning logging
will have much effect.
My
Raji Sridar wrote:
We use a typical counter within a transaction to generate
order sequence number and update the next sequence number.
This is a simple next counter - nothing fancy about it. When
multiple clients are concurrently accessing this table and
updating it, under extremely
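If gaps in the numbering are acceptable, a sequence sidesteps the contention entirely, because nextval never blocks concurrent callers (sequence name assumed):
CREATE SEQUENCE order_number_seq;
SELECT nextval('order_number_seq') AS next_order_number;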
Eric Haszlakiewicz wrote:
I noticed a bit of a performance regression in embedded sql queries when
moving from the client libraries in version 8.2.4 to 8.3.7. My
application does a whole lot of queries, many of which don't return any
data. When we moved to the new libraries the time of
Alberto Dalmaso wrote:
[...]
in the explanation I'll see that the db use nasted loop.
[...]
Sorry for the remark off topic, but I *love* the term
nasted loop. It should not go to oblivion unnoticed.
Yours,
Laurenz Albe
Matthew Hartman wrote:
To determine the available slots, the algorithm finds the earliest slot
that has an available chair and a count of the required concurrent
intervals afterwards. So a 60 minute regimen requires 12 concurrent
rows. This is accomplished by joining the table on itself. A