Hi! I am trying to implement a mechanism that preserves the last remaining row
for every distinct value in column "c1".
CREATE TABLE table1 (c1 INTEGER, c2 INTEGER, PRIMARY KEY (c1, c2));
CREATE OR REPLACE FUNCTION keep1() RETURNS TRIGGER AS $$
BEGIN
IF (SELECT COUNT(*) FROM table1 WHERE c1=OLD.c1) = 1
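The message is cut off above; a minimal sketch of how such a trigger might be
finished, assuming the intent is a BEFORE DELETE trigger that refuses to remove
the last remaining row for a given c1 (my guess at the shape, not the poster's
actual code):

-- sketch only: block deletion of the last remaining row for each c1
CREATE OR REPLACE FUNCTION keep1() RETURNS TRIGGER AS $$
BEGIN
    IF (SELECT COUNT(*) FROM table1 WHERE c1 = OLD.c1) = 1 THEN
        RAISE EXCEPTION 'cannot delete the last row for c1 = %', OLD.c1;
    END IF;
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER keep1_trigger
    BEFORE DELETE ON table1
    FOR EACH ROW EXECUTE PROCEDURE keep1();

Note that without extra locking, two concurrent deletes can each see a count of
two and still remove the last rows between them.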
I've run into a weird query performance problem. I have a large, complex
query which joins the results of several set-returning functions with
some tables and filters them by calling another function, which involves
PostGIS calls (ST_DWithin). This used to run in about 10 seconds until I
change
On 05/15/2012 02:44 PM, Robert James wrote:
Okay, I understand why we still need VACUUM and why we can't always
CLUSTER. But my question remains: assuming I have some down time, do
I lose anything by CLUSTER? Your answer is, I believe: Not normally,
but there is one case where you do. That's
On Tue, May 15, 2012 at 4:44 PM, Robert James wrote:
> On 5/15/12, Steve Crawford wrote:
>> On 05/15/2012 02:02 PM, Robert James wrote:
>>> Besides the one time spent CLUSTERing, do I lose anything by doing it
>>> for every table? Does a CLUSTER slow anything down?
>
>> Cluster should have bett
So this is purely anecdotal but I'm curious, what's with all the
different naming conventions? There's psql (for database
connections), pgsql (used for some dirs like /usr/pgsql-9.1 and this
mailing list), postgres (user and other references), and postgresql
(startup scripts).
Cheers,
Scott
--
On 5/15/12, Steve Crawford wrote:
> On 05/15/2012 02:02 PM, Robert James wrote:
>> Besides the one time spent CLUSTERing, do I lose anything by doing it
>> for every table? Does a CLUSTER slow anything down?
> Cluster should have better performance but it depends on the index you
> choose relat
Hi, can someone please explain the purpose of archive_command on both
the master and slave when it comes to streaming replication? From
what I understand so far, what really matters is how many pg_xlog
files are kept when it comes to reestablishing replication when it
breaks for some reason.
Let'
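For what it's worth, the usual division of labour is: archive_command on the
master copies each completed WAL segment somewhere durable, and restore_command
in the standby's recovery.conf fetches segments back when streaming breaks or
falls too far behind, so you are not limited by how many files happen to remain
in pg_xlog. A minimal sketch with purely illustrative paths and hosts:

# master, postgresql.conf (illustrative values)
wal_level = hot_standby
archive_mode = on
archive_command = 'cp %p /mnt/wal_archive/%f'

# standby, recovery.conf (illustrative values)
standby_mode = 'on'
primary_conninfo = 'host=master port=5432 user=replicator'
restore_command = 'cp /mnt/wal_archive/%f %p'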
On 05/15/2012 02:02 PM, Robert James wrote:
Besides the one time spent CLUSTERing, do I lose anything by doing it
for every table? Does a CLUSTER slow anything down?
It would seem to me that a) a CLUSTER should never have worse
performance than a random order b) may have better performance and
On Tue, May 15, 2012 at 3:58 PM, Robert James wrote:
> I see how CLUSTER can speed up a range query (eg WHERE val < 30),
> because it groups those records in contiguous pages.
>
> What about where I'm only pulling one record back? Eg WHERE user_id =
> 100. Is there any benefit to a CLUSTER in tha
On Tue, May 15, 2012 at 4:02 PM, Robert James wrote:
> Besides the one time spent CLUSTERing, do I lose anything by doing it
> for every table? Does a CLUSTER slow anything down?
>
> It would seem to me that a) a CLUSTER should never have worse
> performance than a random order b) may have bette
Besides the one time spent CLUSTERing, do I lose anything by doing it
for every table? Does a CLUSTER slow anything down?
It would seem to me that a) a CLUSTER should never have worse
performance than a random order b) may have better performance and c)
has the benefits of a VACUUM and REINDEX.
I see how CLUSTER can speed up a range query (eg WHERE val < 30),
because it groups those records in contiguous pages.
What about where I'm only pulling one record back? Eg WHERE user_id =
100. Is there any benefit to a CLUSTER in that case? Is there
anything lost if I CLUSTER on a different ind
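As a small concrete illustration of the workflow under discussion (table and
index names invented): CLUSTER rewrites the table in index order but does not
refresh planner statistics, so an ANALYZE afterwards is the usual companion.

-- illustrative names; cluster on one chosen index, then re-analyze
CLUSTER users USING users_pkey;
ANALYZE users;

-- a later bare CLUSTER re-clusters the table on the index it remembered
CLUSTER users;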
On 05/15/2012 01:46 PM, Robert James wrote:
I can run clusterdb -a from the command line to cluster all databases.
After clustering, it's recommended to run ANALYZE. But there doesn't
seem to be any way to do this from the command line, and even in SQL,
there doesn't seem to be any way to do thi
I've never done that in PG before, but I've used named connections
with Oracle. Is it the same sort of deal? There's a file on the disk
somewhere with the connection info? Either way, I'm sure it's a RTFM
thing so I'll look into it.
>>>
>>> yeah, there's a good example in the
I can run clusterdb -a from the command line to cluster all databases.
After clustering, it's recommended to run ANALYZE. But there doesn't
seem to be any way to do this from the command line, and even in SQL,
there doesn't seem to be any way to do this for all databases.
1. What's the recommende
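One possibility from the shell, assuming vacuumdb is recent enough to have the
--analyze-only switch (9.0 and later):

clusterdb -a                  # cluster every previously clustered table in every database
vacuumdb -a --analyze-only    # run ANALYZE (without vacuuming) in every database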
On Tue, May 15, 2012 at 3:16 PM, Mike Christensen wrote:
>>> I've never done that in PG before, but I've used named connections
>>> with Oracle. Is it the same sort of deal? There's a file on the disk
>>> somewhere with the connection info? Either way, I'm sure it's a RTFM
>>> thing so I'll loo
>> I've never done that in PG before, but I've used named connections
>> with Oracle. Is it the same sort of deal? There's a file on the disk
>> somewhere with the connection info? Either way, I'm sure it's a RTFM
>> thing so I'll look into it.
>
> yeah, there's a good example in the docs here:
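(Assuming the "named connections" in this thread are contrib/dblink's, the
pattern looks roughly like this; the connection string, query and column list
are all illustrative:)

-- requires the dblink extension (CREATE EXTENSION is available from 9.1)
CREATE EXTENSION IF NOT EXISTS dblink;
SELECT dblink_connect('prod', 'host=prod-db dbname=app user=readonly');
SELECT * FROM dblink('prod', 'SELECT id, name FROM users')
         AS t(id integer, name text);
SELECT dblink_disconnect('prod');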
On Tue, May 15, 2012 at 2:51 PM, Mike Christensen wrote:
> Thanks!
>
> I've never done that in PG before, but I've used named connections
> with Oracle. Is it the same sort of deal? There's a file on the disk
> somewhere with the connection info? Either way, I'm sure it's a RTFM
> thing so I'll
Thanks!
I've never done that in PG before, but I've used named connections
with Oracle. Is it the same sort of deal? There's a file on the disk
somewhere with the connection info? Either way, I'm sure it's a RTFM
thing so I'll look into it.
Mike
On Tue, May 15, 2012 at 12:45 PM, Merlin Moncur
On Tue, May 15, 2012 at 2:28 PM, Mike Christensen wrote:
> I often manually pull in production data into my test database so I
> can test new code on realistic data, as well as test upgrade scenarios
> or repro data specific bugs. To do this, I've setup a `VIEW` for each
> production table in my
I often manually pull in production data into my test database so I
can test new code on realistic data, as well as test upgrade scenarios
or repro data specific bugs. To do this, I've setup a `VIEW` for each
production table in my test database. These views look something like
this:
CREATE
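The view definition is truncated above; a hedged guess at its general shape,
assuming dblink is what backs these views (table, columns and connection string
are invented for illustration):

-- illustrative only: one read-only view per production table, backed by dblink
CREATE OR REPLACE VIEW prod_users AS
    SELECT *
    FROM dblink('host=prod-db dbname=app user=readonly',
                'SELECT id, name, created_at FROM users')
         AS t(id integer, name text, created_at timestamptz);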
On 05/15/2012 05:30 AM, Condor wrote:
I use vacuum full because I have huge tables that are deleted (truncated)
every night
and I want my space back.
Truncate does reclaim space. Bulk deletes do not. If you are doing bulk
deletes because you need to delete *almost* everything, consider using
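The reply is cut off; as a plain illustration of the distinction already drawn
above (invented table name):

TRUNCATE TABLE nightly_stats;   -- files are removed, space goes back to the OS immediately
DELETE FROM nightly_stats;      -- rows become dead tuples; plain VACUUM makes the
                                -- space reusable but does not shrink the table files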
I have just restored a database about 347GB in size
postgres=# select * from pg_database_size('dbname');
 pg_database_size
-------------------
      346782483256
using this command:
pg_restore -d dbname -O -Fc dbname.backup
It started at 13/5 21:28 and at 14/5 18:47 all user tables were imported
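Not an answer to the timing itself, but since the dump is custom-format,
pg_restore can run the data load and index builds in parallel (the -j option,
available since 8.4); the job count below is purely illustrative:

pg_restore -d dbname -O -Fc -j 4 dbname.backup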
On Tue, May 15, 2012 at 1:21 AM, Martijn van Oosterhout
wrote:
> On Tue, May 15, 2012 at 05:38:27AM +0530, Samba wrote:
>> Hi,
>>
>> Does postgresql support Global Prepared Statements, which are prepared only
>> once per server and not per every connection?
>
> As pointed out, no.
>
>> Problem wit
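For context, PREPARE in PostgreSQL is strictly per-session, which is why the
answer is "no"; every connection has to prepare its own statements, e.g.
(illustrative table):

PREPARE get_user (integer) AS
    SELECT * FROM users WHERE id = $1;
EXECUTE get_user(42);
DEALLOCATE get_user;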
On 15.05.2012 14:07, Tom Lane wrote:
Condor writes:
today when I do select relname, last_autovacuum, last_vacuum from
pg_stat_all_tables I see the last_vacuum and last_autovacuum fields are empty.
This seems strange to me, because every night crontab starts a vacuum
script at 01:10 am that does:
reinde
Hello.
When I discovered the wonders of partitioning I quickly jumped on-board.
I had big tables used for statistics and a split was needed.
I created the parent, I linked the big table with this new parent and I
added other childs, per month.
Example:
new_stats - parent (empty)
old_s
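A rough sketch of the layout being described, using the inheritance-based
partitioning of this era (all names, columns and date ranges are illustrative):

-- empty parent that queries will target
CREATE TABLE new_stats (stat_time timestamptz NOT NULL, value integer);

-- attach the existing big table as one child (its columns must already match)
ALTER TABLE old_stats INHERIT new_stats;

-- add per-month children with CHECK constraints so constraint_exclusion can prune
CREATE TABLE stats_2012_05 (
    CHECK (stat_time >= DATE '2012-05-01' AND stat_time < DATE '2012-06-01')
) INHERITS (new_stats);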
Condor writes:
> today when I do select relname, last_autovacuum, last_vacuum from
> pg_stat_all_tables I see the last_vacuum and last_autovacuum fields are empty.
> This seems strange to me, because every night crontab starts a vacuum
> script at 01:10 am that does:
> reindex, vacuum full and vacuum an
It only shows up when you search for it in lowercase, e.g. "postgresql", not
"PostgreSQL".
You could also just use sudo apt-get install postgresql-9.1
postgresql-contrib-9.1 ("postgresql" is a meta-package)
hth,
WBL
On Tue, May 15, 2012 at 8:41 AM, Lee Hachadoorian <
lee.hachadooria...@gmail.com>
Adding to the previous question:
should recovery on the standby using restore_command be done only from the
base backup that was taken the first time,
or
can the standby be recovered from any later point in time using
restore_command?
Hello everyone,
today when I do select relname, last_autovacuum, last_vacuum from
pg_stat_all_tables I see the last_vacuum and last_autovacuum fields are empty.
This seems strange to me, because every night crontab starts a vacuum
script at 01:10 am that does:
reindex, vacuum full and vacuum analyze.
On 14 May 2012 17:28, Paulo Correia wrote:
> Hello all!
> Having a Postgres 9.0 with asynchronous streaming replication to a
> hot-standby slave, both with CentOS 5.6, how can I use both DB instances for
> query load balancing?
> I've tried with pgPool-II but the pooling mechanism is disruptive
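For reference, the pgPool-II settings that usually drive read load balancing
over a streaming pair look roughly like this (hosts and weights are
hypothetical; this is not a claim about why pooling was disruptive in the setup
above):

# pgpool.conf sketch (illustrative values)
backend_hostname0 = 'primary.example.com'
backend_port0     = 5432
backend_weight0   = 1
backend_hostname1 = 'standby.example.com'
backend_port1     = 5432
backend_weight1   = 1
master_slave_mode     = on
master_slave_sub_mode = 'stream'
load_balance_mode     = on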
On Tue, May 15, 2012 at 4:04 PM, björn lundin wrote:
> Hmm, I was under the impression that if you create the table with quoted
> field names, you get case-sensitive names,
> But if you create the tables without quotes around the field names, pg
> will make them lowercase,
> But case-insensi
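A tiny demonstration of the folding rule being described (illustrative table):

CREATE TABLE t ("UserId" integer, userid2 integer);

SELECT userid2  FROM t;   -- fine: unquoted names fold to lower case
SELECT USERID2  FROM t;   -- also fine, folds to userid2
SELECT UserId   FROM t;   -- ERROR: column "userid" does not exist
SELECT "UserId" FROM t;   -- fine: a quoted name keeps its exact case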
Configured server1 as the primary with archiving on, and server2 as a standby
(base backup taken only the first time) replicating from the primary. If the
primary (server1) goes down, the standby (server2) is reconfigured as primary
and starts archiving to the same WAL archive.
If the previous primary (server1) comes back up it will
Basil Bourque wrote:
> Today's announcement of 9.2 beta said installers were available...
> -
> Get PostgreSQL 9.2 beta, including binaries and installers for
> Windows, Linux and Mac from our
> download page: http://www.postgresql.org/download
> -
>
> But I cannot find any installers at all for the