On 10/12/2015 11:14 PM, Shaun Thomas wrote:
On Mon, Oct 12, 2015 at 1:28 PM, Andres Freund wrote:
Any chance
you could provide profiles of such a run?
This is as simple as I could make it reliably. With one copy running,
the thread finishes in about 1 second. With 2, it's 1.5s each, and
with
On 07/07/2015 10:59 PM, Graeme B. Bell wrote:
Cache flushing isn't an atomic operation though. Even if the ordering
is right, you are likely to have a partial fsync on the disk when the
lights go out - isn't your FS still corrupt?
If the filesystem is worth its salt, no. Journaling filesystems
On 07/07/2015 09:01 PM, Wes Vaske (wvaske) wrote:
Regarding:
“lie about their fsync status.”
This is mostly semantics but it might help google searches on the issue.
A drive doesn't support fsync(); that's a filesystem/kernel process. A drive will do
a FLUSH CACHE. Before kernels 2.6. the fsyn
On 07/07/2015 05:15 PM, Wes Vaske (wvaske) wrote:
The M500/M550/M600 are consumer class drives that don't have power
protection for all inflight data.* (like the Samsung 8x0 series and
the Intel 3x0 & 5x0 series).
The M500DC has full power protection for inflight data and is an
enterprise-class
On 05/04/2015 12:23 AM, Anton Bushmelev wrote:
Hello Postgres gurus, is it possible to tune a query with a join on a random
string?
I know that it is not a real-life example, but I need it for tests.
soe=# explain
soe-# SELECT ADDRESS_ID,
soe-# CUSTOMER_ID,
soe-# DATE_CREATED
On 02/26/2015 12:31 AM, Josh Berkus wrote:
On 02/14/2015 10:35 AM, Sven R. Kunze wrote:
Thanks for the immediate reply.
I understand the use case is quite limited.
On the other hand, I see potential when it comes to applications which
use PostgreSQL. There, programmers would have to change a l
On 12/22/2014 10:53 PM, Robert DiFalco wrote:
This may fall into the category of over-optimization but I've become
curious.
I have a user table with about 14 columns that are all 1:1 data - so they
can't be normalized.
When I insert a row all columns need to be set. But when I update, I
sometim
On 11/25/2014 10:36 PM, Scott Marlowe wrote:
OK so there's a simple set of tree functions we use at work. They're
quite fast in 8.4 and they've gotten about 40% slower in 9.2. They're
a simple mix of SQL and PL/pgSQL functions which are at
http://pastebin.com/SXTnNhd5 and which I've attached.
Her
On 10/02/2014 07:43 AM, Rodrigo Barboza wrote:
Hello, I have a table that receives lots of updates and inserts.
Autovacuum is always being cancelled on that table.
One day the database went on standby and I had to act manually to recover.
What should I do to avoid the autovacuum cancellations?
Cancella
On 09/23/2014 11:37 PM, Emi Lu wrote:
Hello list,
For a big table with more than 1,000,000 records, which of the following updates is
quicker?
(1) update t1
set c1 = a.c1
from a
where pk and
t1.c1 <> a.c1;
..
update t1
set
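For reference, a sketch of form (1) written out in full, with an assumed primary-key join and a NULL-safe comparison (table and key names are illustrative only):

    UPDATE t1
    SET    c1 = a.c1
    FROM   a
    WHERE  t1.id = a.id                    -- assumed primary-key join
    AND    t1.c1 IS DISTINCT FROM a.c1;    -- skip rows whose value would not change (NULL-safe)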
On 09/17/2014 07:56 AM, Alexander Hill wrote:
Hello,
I have a table of tree nodes with a tsquery column. To get a subtree's
tsquery, I need to OR all of its nodes' tsqueries together.
I defined a custom aggregate using tsquery_or:
CREATE AGGREGATE tsquery_or_agg (tsquery)
(
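The aggregate definition is cut off above; a minimal sketch of how such an aggregate is typically declared, using the built-in tsquery_or function:

    CREATE AGGREGATE tsquery_or_agg (tsquery) (
        SFUNC = tsquery_or,   -- combines two tsqueries with OR
        STYPE = tsquery
    );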
On 06/30/2014 05:46 PM, Soni M wrote:
Here's what 'perf top' said on the streaming replica:
Samples: 26K of event 'cpu-clock', Event count (approx.): 19781
95.97% postgres [.] 0x002210f3
OK, so it's stuck doing something. Can you get a build with debug symbol
On 06/29/2014 03:43 PM, Soni M wrote:
top and sar say 100% CPU usage on one core, no sign of I/O wait.
Hmm, I wonder what it's doing then... If you have "perf" installed on
the system, you can do "perf top" to get a quick overview of where the
CPU time is spent.
- Heikki
On 06/29/2014 11:14 AM, Soni M wrote:
Everything worked fine until Thursday, when we had high load on the master; after
that, every streaming replica lags further and further behind the master, even at
night and on weekends when all server load is low. The Slony slave, however, is fine.
What does 'top' on the stand
On 05/28/2014 12:59 PM, Grzegorz Olszewski wrote:
random_page_cost = 4.0
seq_page_cost = 1.0
There is about 500,000 rows and about 500 new rows each business day.
About 96% of the rows meet the given conditions, that is, the count should be about 480,000.
When such a large percentage of the rows match, a
On 05/27/2014 02:06 PM, Albe Laurenz wrote:
I just learned that NFS does not use a file system cache on the client side.
On the other hand, PostgreSQL relies on the file system cache for performance,
because beyond a certain amount of shared_buffers performance will suffer.
Together these thing
On 04/28/2014 09:07 PM, Karl Denninger wrote:
The WAL is fsync'd frequently. My guess is that that causes a lot of
extra work to repeatedly recompress the same data, or something like
that.
It shouldn't, as ZFS re-writes on change, and what's showing up is not
high I/O *count* but rather percent
On 04/28/2014 06:47 PM, Karl Denninger wrote:
What I am curious about, however, is the xlog -- that appears to suffer
pretty badly from 128k record size, although it compresses even
more-materially; 1.94x (!)
The files in the xlog directory are large (16MB each) and thus "first
blush" would be t
On 04/24/2014 10:57 PM, Sergey Konoplev wrote:
On Thu, Apr 24, 2014 at 5:34 AM, Ivan Voras wrote:
On 24 April 2014 13:34, Heikki Linnakangas wrote:
As the docs say, the GIN index does not store the weights. As such, there is
no need to strip them. A recheck would be necessary if your query
On 04/24/2014 01:56 AM, Sergey Konoplev wrote:
On Wed, Apr 23, 2014 at 4:08 AM, Ivan Voras wrote:
Ok, I found out what is happening, quoting from the documentation:
"GIN indexes are not lossy for standard queries, but their performance
depends logarithmically on the number of unique words. (Ho
On 04/22/2014 10:57 AM, Ivan Voras wrote:
On 22 April 2014 08:40, Heikki Linnakangas wrote:
On 04/20/2014 02:15 AM, Ivan Voras wrote:
More details: after thinking about it some more, it might have
something to do with tsearch2 and indexes: the large data in this case
is a tsvector, indexed
On 04/20/2014 02:15 AM, Ivan Voras wrote:
Hello,
If a table contains simple fields as well as large (hundreds of KiB)
text fields, will accessing only the simple fields cause the entire
record data, including the large fields, to be read and unpacked?
(e.g. SELECT int_field FROM table_with_large
On 04/20/2014 07:46 AM, Oleg Bartunov wrote:
btw, 9.4 should be wiser in case of rare+common terms, thanks to GIN
fast scan feature.
Indeed, although we didn't actually do anything to the planner to make
it understand when fast scan helps. Doing something about cost
estimation is still on the
On 04/16/2014 06:13 PM, Linos wrote:
I thought that PostgreSQL would optimize out joins on columns I
don't ask for when I use the view, but it doesn't; this query:
It doesn't, because it would be wrong. It still has to check that the
tables have a matching row (or multiple matching rows).
If
On 04/14/2014 09:36 PM, Stefan Keller wrote:
Who's the elephant in the room who is reluctant to introduce explicit hints?
Please read some of the previous discussions on this. Like this, in this
very same thread:
http://www.postgresql.org/message-id/15381.1395410...@sss.pgh.pa.us
I'd like t
On 04/07/2014 03:06 PM, Albe Laurenz wrote:
Hans Drexler wrote:
Postgres needs close to 50
minutes to process the same query on the same data. Sometimes, Postgres
needs more than 2 hours.
The application performs an update query on every row
of the table. The exact SQL of this query is:
update
On 04/06/2014 05:25 AM, Ryan Johnson wrote:
I've tried linux perf, but all it says is that lots of time is going to
LWLock (but callgraph tracing doesn't work in my not-bleeding-edge
kernel).
Make sure you compile with the "-fno-omit-frame-pointer" flag.
- Heikki
On 04/03/2014 08:39 PM, Reza Taheri wrote:
Hello PGSQL performance community,
You might remember that I pinged you in July 2012 to introduce the TPC-V
benchmark. I am now back with more data, and a question about checkpoints. As
far as the plans for the benchmark, we are hoping to release a ben
On 03/22/2014 02:59 AM, Erik van Zijst wrote:
Is there any way I can get postgres to perform the hash calculations
on the *result* of the other parts of the where clause, instead of the
other way around? Or else rewrite the query?
The planner doesn't know that the crypt function is expensive. T
On 02/27/2014 04:06 PM, Jean-Max Reymond wrote:
I am running the latest version, PostgreSQL 9.3.3.
I have two tables, detm and corm, and a lot of data in the column
cormdata of the corm table (1.4 GB).
I have a GIN index on cormdata:
CREATE INDEX ix_corm_fulltext_cormdata ON corm
USING gin (to_ts
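The index definition is cut off above; it presumably continues along these lines (the text search configuration name is an assumption):

    CREATE INDEX ix_corm_fulltext_cormdata ON corm
        USING gin (to_tsvector('simple', cormdata));   -- configuration name assumed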
On 02/18/2014 12:14 AM, David Wall wrote:
I am running PG 9.2.4 and I am trying to figure out why my database size
shows one value, but the sum of my total relation sizes is so much less.
Basically, I'm told my database is 188MB, but the sum of my total
relation sizes adds up to just 8.7MB, whic
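For reference, a sketch of how the two numbers being compared can be obtained (illustrative queries, not from the original thread):

    SELECT pg_size_pretty(pg_database_size(current_database()));        -- reported database size
    SELECT pg_size_pretty(sum(pg_total_relation_size(oid))::bigint)
    FROM pg_class
    WHERE relkind = 'r';                                                 -- tables incl. indexes and TOAST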
On 02/10/2014 09:52 PM, M Putz wrote:
Hello,
While analyzing performance, we encountered the following phenomenon,
SELECT sum(pow(.5*generate_series,.5))
FROM generate_series(1,100);
is much much (a hundred times) slower than
SELECT sum(pow(random()*generate_series,.5))
F
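The likely explanation (the usual one for this phenomenon) is that the constant .5 is numeric, so the slow numeric pow() is chosen, while random() forces the cheap double precision variant; a sketch of the cast that puts the first query on the fast path:

    SELECT sum(pow(.5::float8 * generate_series, .5::float8))   -- float8 pow(), like the random() case
    FROM generate_series(1,100);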
On 01/21/2014 08:26 AM, Katharina Koobs wrote:
Hi,
We have a PostgreSQL DB, version 8.4 on a Suse Linux system.
Every night a script runs several updates and inserts. Over approximately 3 weeks,
the daytime query time increases from a few minutes to about an hour.
Does it get gradually slo
On 25.11.2013 22:01, Lee Nguyen wrote:
Hi,
Having attended a few PGCons, I've always heard the remark from a few
presenters and attendees that Postgres shouldn't be run inside a VM. That
bare metal is the only way to go.
Here at work we were entertaining the idea of running our Postgres databas
On 13.08.2013 23:47, Daniel Cristian Cruz wrote:
Hello,
I'm trying to simplify a schema, where I had many ranges floating around.
My idea is to put them all in an array field and query like this:
SELECT
event.*
FROM event
JOIN participant_details
USING (participant_id)
WHERE
tsrange(event
On 06.05.2013 04:51, Mark Kirkwood wrote:
On 05/05/13 00:49, Simon Riggs wrote:
On 3 May 2013 13:41, Simon Riggs wrote:
(3) to make the check on TransactionIdIsInProgress() into a heuristic,
since we don't *need* to check that, so if we keep checking the same
xid repeatedly we can reduce the
On 03.05.2013 15:41, Simon Riggs wrote:
What appears to be happening is we're spending a lot of time in
TransactionIdIsInProgress() so we can set hints and then when we find
it is still in progress we then spend more time in XidIsInSnapshot()
while we check that it is still invisible to us. Even
On 30.05.2013 15:09, itishree sukla wrote:
On our server the checkpointer process is consuming 8 GB of memory; what could
be the reason? Can anyone please help?
Are you sure you're measuring the memory correctly? The RES field in top
output, for example, includes shared memory, ie. the
On 15.05.2013 18:31, Shaun Thomas wrote:
I've seen conversations on this since at least 2005. There were even
proposed patches every once in a while, but never any consensus. Anyone
care to comment?
Well, as you said, there has never been any consensus.
There are basically two pieces to the pu
On 14.05.2013 12:23, Sameer Thakur wrote:
Hello,
I am trying to find predicate information for a given SQL query plan as
provided by Oracle using DBMS_XPLAN. I am looking at the EXPLAIN command
for getting this query plan information, with no luck so far.
Does the EXPLAIN command provide predica
On 25.04.2013 02:56, Kelphet Xiong wrote:
In all the experiments, the lineitem and partsupp tables reside in memory,
because no I/O activity is observed in iotop.
Since there is enough network bandwidth (1Gb/s or 128MB/s) between
client and server,
I would like to know what determines the
On 22.04.2013 19:48, Anne Rosset wrote:
Hi,
We are seeing some overall performance degradation in our application since we
installed the security release. Other commits were also done at the same time
in the application so we don't know yet if the degradation has any relationship
with the secu
On 22.03.2013 02:05, Josh Berkus wrote:
Well, no. <@ is not a btree-indexable operator.
Yes, but it's equivalent to ((a >= b1 or b1 is null) and (a < b2 or
b2 is null)), which *is* btree-indexable and can use an index. So it
seems like the kind of optimization we could eventually make.
Y
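Until such an optimization exists, the rewrite has to be done by hand; a sketch with an assumed table and column, relying on the default '[)' bounds of the range:

    -- instead of:  WHERE ts <@ tstzrange('2013-01-01', '2013-02-01')
    SELECT *
    FROM   events                     -- hypothetical table
    WHERE  ts >= '2013-01-01'
    AND    ts <  '2013-02-01';        -- a plain btree index on ts can be used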
On 21.03.2013 17:55, Alexander Korotkov wrote:
On Thu, Mar 21, 2013 at 12:52 PM, Heikki Linnakangas wrote:
The immediate fix is attached, but this made me realize that rangesel() is
still missing estimation for the "element <@ range" operator. It shouldn't
be hard to implement, I
On 21.03.2013 06:07, Vasilis Ventirozos wrote:
On Thu, Mar 21, 2013 at 5:58 AM, Tom Lane wrote:
What I find more disturbing is that this is what I get from the example
in HEAD:
regression=# explain SELECT * FROM a WHERE ts <@
tstzrange('2013-01-01','2013-01-01 00:10:00');
ERROR: XX000: type 11
On 15.03.2013 19:31, Florian Weimer wrote:
Does it make sense to pre-sort COPY FROM input to produce long runs of
increasing values of an indexed column, or does PostgreSQL perform
this optimization on its own?
PostgreSQL doesn't do that sort of an optimization itself, so yeah, if
the random I
On 22.02.2013 20:10, Markus Schulz wrote:
On Friday, 22 February 2013 at 14:35:25, Heikki Linnakangas wrote:
You could check what the generic plan looks like by taking the query
used in the java program, with the parameter markers, and running
EXPLAIN on that.
How can I do this?
I've
On 22.02.2013 10:25, Markus Schulz wrote:
I can run the query four times with good performance, and after that PostgreSQL
starts the strange lseek() behavior.
By default, the JDBC driver re-plans the prepared statement for the
first 4 invocations of the query. On the fifth invocation, it s
On 14.02.2013 12:49, Tory M Blue wrote:
My Postgres DB ran out of space. I have 27028 files in the pg_xlog
directory. I'm unclear what happened; this has been running flawlessly for
years. I do have archiving turned on and run an archive command every 10
minutes.
I'm not sure how to go about cleani
On 05.02.2013 05:45, Will Platnick wrote:
We upgraded from PG 9.1 to 9.2. Since the upgrade, the number of active queries has
risen significantly, especially during our peak time when lots of users are
logging in. According to New Relic, this query is now taking up the most
time during
On 21.01.2013 17:29, AJ Weber wrote:
I was under the impression that the default_statistics_target was a
percentage of rows to analyze. Maybe this is not the case?
Nope.
I ran an analyze during a "quiet point" last night and for a few of my
large tables, I didn't get what I consider a reasona
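The setting is an absolute target (the number of most-common values and histogram buckets collected per column), not a percentage, and it can also be raised for individual columns; a sketch with placeholder names:

    ALTER TABLE my_large_table
        ALTER COLUMN my_col SET STATISTICS 1000;   -- placeholder names; the default target is 100
    ANALYZE my_large_table;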
On 11.01.2013 12:36, Matheus de Oliveira wrote:
On Fri, Jan 11, 2013 at 8:19 AM, Heikki Linnakangas
wrote:
One thing that caught my eye:
CREATE OR REPLACE FUNCTION partition_insert_trigger_spi()
RETURNS trigger
LANGUAGE C
VOLATILE STRICT
AS 'partition_insert_trigge
On 10.01.2013 21:48, Matheus de Oliveira wrote:
I have made a small modification to keep the plans, and it went from
33957.768 ms to 43782.376 ms.
If I'm reading results.txt correctly, the avg runtimes are:
C and SPI_execute_with_args: 58567.708 ms
C and SPI_(prepare/keepplan/execute_plan): 43782
On 10.01.2013 21:11, Matheus de Oliveira wrote:
On Thu, Jan 10, 2013 at 4:54 PM, Heikki Linnakangas
wrote:
The right way to do this with SPI is to prepare each insert statement on
first invocation (SPI_prepare + SPI_keepplan), and reuse the plan after
that (SPI_execute_plan).
If you
On 10.01.2013 20:45, Matheus de Oliveira wrote:
Inspired by Charles' thread and the work of Emmanuel [1], I have made some
experiments trying to create a trigger to make partitioning using C
language.
The first attempt was not good, I tried to use SPI [2] to create a query to
insert into the cor
On 03.01.2013 15:30, Daniel Westermann wrote:
What additionally makes me wonder is that the same table in Oracle takes
much less space than in PostgreSQL:
SQL> select sum(bytes) from dba_extents where segment_name = 'TEST1';
SUM(BYTES)
--
1610612736
select pg_relation_size('mgmt
On 27.11.2012 09:47, Syed Asif Tanveer wrote:
I am using PostgreSQL 9.1.5 for data warehousing and OLAP purposes. Data size
is around 100 GB and I have tuned my PostgreSQL accordingly, but I am still
facing performance issues. The query performance is too low despite tables
being properly indexed and a
On 25.11.2012 18:30, Catalin Iacob wrote:
So it seems we're just doing too many connections and too many
queries. Each page view from a user translates to multiple requests to
the application server and each of those translates to a connection
and at least a few queries (which are done in middlew
On 22.11.2012 09:54, Eildert Groeneveld wrote:
OK, I did have a look at the libpq library, and you are right, there is a
way to obtain binary data from the backend through PQexecParams:
res = PQexecParams(conn,
"DECLARE myportal CURSOR FOR select genotype_bits
fro
On 22.11.2012 02:53, Jeff Janes wrote:
That gives the planner the information it needs to choose the right plan on
its own. That kind of hint would be much less implementation-specific and
much more likely to still be useful, or at least not outright
counter-productive, in a future version with
On 21.11.2012 15:42, Kevin Grittner wrote:
Better, IMV, would be to identify what sorts of hints people actually
find useful, and use that as the basis for TODO items for optimizer
improvement as well as inventing clear ways to specify the desired
coercion. I liked the suggestion that a CTE which
On 21.11.2012 17:42, Gavin Flower wrote:
On 22/11/12 04:32, Andres Freund wrote:
On 2012-11-21 10:21:16 -0500, Andrew Dunstan wrote:
I wasn't talking about removing it. My point was that if the
optimization
fence around CTEs is removed a lot of people will need to rework apps
where
they have us
On 21.11.2012 01:53, Tom Lane wrote:
I think the more interesting question is what cases wouldn't be covered
by such a rule. Typically you need to use OFFSET 0 in situations where
the planner has guessed wrong about costs or rowcounts, and I think
people are likely using WITH for that as well.
On 13.11.2012 21:13, Jon Nelson wrote:
I was working on a data warehousing project where a fair number of files
could be COPY'd more or less directly into tables. I have a somewhat nice
machine to work with, and I ran on 75% of the cores I have (75% of 32 is
24).
Performance was pretty bad. With
On 05.11.2012 16:32, Claudio Freire wrote:
On Sun, Nov 4, 2012 at 7:23 PM, Dong Ye wrote:
You are using prepared statements; this makes me think that this
regression might be due to support for parameter-specific plans for
prepared statements. [1] Can you run the test on both versions without
p
On 30.10.2012 10:50, Albe Laurenz wrote:
Why does WAL replay read much more than it writes?
I thought that pretty much every block read during WAL
replay would also get dirtied and hence written out.
Not necessarily. If a block is modified and written out of the buffer
cache before next checkp
On 01.10.2012 19:49, pg noob wrote:
Hi all,
I have a question about the deadlock_timeout in regards to performance.
Right now we have this timeout set at its default of 1s.
My understanding of it is that this means that every 1 second the server
will check for deadlocks.
Not quite. It means th
On 31.08.2012 15:27, John Nash wrote:
Program 1: dbtransfromfile: this program creates a simple table
consisting of a single int column. After the creation, the program
inserts 1000 tuples in the table, which are never deleted, after that
the program reads a transaction pattern from a given fi
On 27.01.2012 20:30, Jayashankar K B wrote:
Hi Heikki Linnakangas: We are using a series of INSERT statements to insert the
records into the database.
Sending data in binary is not an option, as the module that writes into the DB has
been finalized.
We do not have control over that.
That certainly
cs/9.1/static/release-9-1.html#AEN107416
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
data in binary format instead of text
might shave some cycles.
If you can run something like oprofile on the system, that would be
helpful to pinpoint the expensive part.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
AND m.changedate_id < $2 ORDER BY m.emailaddress,
m.websiteid;
$query$ USING p_emailaddress, l_sysdateid;
end if;
end if;
Return ref;
EXCEPTION
WHEN NO_DATA_FOUND THEN
Return null;
END;
$function$
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
nough, then you probably want
autovacuum to run.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
only works with OpenSSL 1.0.0 - did you upgrade? I thought you were
using 0.9.7a earlier.
FWIW, it would be better to test "#ifdef SSL_OP_NO_COMPRESSION"
directly, rather than the version number.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
2.9795 vmlinux-3.0.0-1-amd64 put_mems_allowed
Maybe your data is very expensive to compress for some reason?
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
Yep, increase checkpoint_segments. And you probably want to raise
checkpoint_timeout too.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
needed for hash joins and hash
aggregation. And those things *are* big wins, even if hash indexes
themselves never become so.
We could drop the hash indexam code but keep the opclasses etc. I'm not
sure what that would gain us, though.
--
Heikki Linnakangas
EnterpriseDB http://www.enterpris
On 14.09.2011 09:39, Stefan Keller wrote:
Should I open a ticket?
What ticket? With whom?
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
l.org/wiki/SlowQueryQuestions)
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
the planner
might start to pick the faster plan, because the seqscan on icecream
will start to look more expensive compared to the index scan and nested
loop join in the faster plan.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
ts from the kind of large cache that Infinite
Cache provides.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
On 06.06.2011 12:43, Heikki Linnakangas wrote:
Also, even when safe, it's not clear that the transformation is always a
win. The left-hand expression could be expensive, in which case having
to evaluate it multiple times could hurt performance. Maybe yo
Sorry, hit "send" too ea
This transformation would also need to be done in the planner, after
checking that the left-hand expression is not volatile.
Also, even when safe, it's not clear that the transformation is always a
win. The left-hand expression could be expensive, in which case having
to evaluat
On 28.04.2011 12:20, Rishabh Kumar Jain wrote:
How must the tables be ordered in the table list of the FROM clause?
There is no difference in performance, if that's what you mean. (If not,
then the pgsql-novice or pgsql-sql mailing list would be more appropriate.)
--
Heikki L
billing data at the end would mean that I not only have to
pull all the data down to the User Client, I must also push the data back up
to the server for writing the billing records.
Yeah, that sounds right.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
e average tuple width. I don't see O(n^2) behavior,
though, it seems linear.
I'd like to know if this is a known behavior ? And if I could mitigate it
somehow ?
I'm out of ideas on how to make it faster, I'm afraid.
--
Heikki Linnakangas
EnterpriseDB http://www.
servername make installcheck
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
int, but I believe
the KNN gist code is flexible enough that it could be used for distance
to the edge of a shape as well. Someone just needs to write the
operators and support functions.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
what
values can be stored, it doesn't affect how the strings are stored. In
both cases, the strings are stored in a variable-length format.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
libpq. In all cases the client code ran on the same
box as the server. We have experimented with Postgres 8.1, 8.3 and 9.0.
Try COPY, ie. "COPY bulk_performance.counts TO STDOUT BINARY".
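With the newer COPY option syntax, the same request is usually spelled:

    COPY bulk_performance.counts TO STDOUT (FORMAT binary);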
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
ually a good idea to put an ORDER BY in a view
or subquery anyway unless you also have volatile functions in there, or
you want to coerce the optimizer to choose a certain plan.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
On 12/07/10 14:15, A. Kretschmer wrote:
Use timeofday() instead; now() returns the transaction start time.
timeofday() is a legacy function kept only for backwards-compatibility.
It returns a string, which is quite awkward. Use clock_timestamp() instead.
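A quick way to see the difference within a single transaction:

    BEGIN;
    SELECT now() AS txn_start, clock_timestamp() AS wall_clock;   -- now() is frozen at transaction start
    SELECT pg_sleep(1);
    SELECT now() AS txn_start, clock_timestamp() AS wall_clock;   -- same now(), wall_clock about 1 s later
    COMMIT;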
--
Heikki Linnakangas
binary heap when merging tapes in the
tuplesort code, for example, but that's tiny.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
h int4, date int4, hour int4, minute
int4, second int4, milliseconds int4, timezone text)
Now that we have named parameter notation, callers can use it to
conveniently fill in only the fields needed:
SELECT construct_timestamp(year := 1999, month := 10, date := 22);
--
Heikki Linnakangas
E
segfaults:
select dateserial( 2007, 1, 3 )
Any ideas why?
The C function returns a DateADT, which is a typedef for int32, but the
CREATE FUNCTION statement claims that it returns 'text'.
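In other words, the SQL-level declaration has to match the C return type; a sketch of a corrected declaration (the library path and symbol name are placeholders):

    CREATE OR REPLACE FUNCTION dateserial(integer, integer, integer)
        RETURNS date                           -- matches the DateADT the C function actually returns
        AS 'MODULE_PATHNAME', 'dateserial'     -- placeholder library path / symbol
        LANGUAGE C IMMUTABLE STRICT;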
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
On 10/06/10 23:08, Anne Rosset wrote:
Heikki Linnakangas wrote:
On 10/06/10 22:47, Craig James wrote:
Postgres normally doesn't index NULL values even if the column is
indexed, so it has to do a table scan when your query includes an IS
NULL condition.
That was addressed in version 8.3
NULLs were stored in the index in earlier releases too,
they just couldn't be searched for.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
log messages do you
normally get in the PostgreSQL log related to bulk loading?
Autovacuum or autoanalyze should not interfere with loading data, even
if it runs simultaneously.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
On 10/06/10 11:47, Ambarish Bhattacharya wrote:
It would be helpful if you could suggest appropriate autovacuum settings
for handling this large data volume, as my autovacuum settings are hanging the
entire process.
What do you mean by "hanging the entire process"?
--
Heikki L
has been out for some time and people have experience with the
load-distributed checkpoints.
I'm not sure how the spreading of the fsync()s should work; it's hard to
estimate how long each fsync() is going to take, for example, but surely
something would be better than nothing.
--
Heik