[SQL] creating a new trigger for a table on a slave host

2012-06-05 Thread rverghese
I need to modify my database schema and create a new trigger function. I know
that triggers are suppressed on the slave once Slony starts up; however, since
Slony is already running on the slave, a new trigger created there now will not
be suppressed. What is the best practice for creating a trigger on the slave?
Should I take the table out of replication, create the trigger, then add the
table back in? Is there some other way to do it?
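For concreteness, the kind of DDL I need to apply on the slave is roughly the
following (function, trigger and table names here are just placeholders, not my
actual schema):

CREATE OR REPLACE FUNCTION audit_my_table()
RETURNS trigger AS $$
BEGIN
    -- hypothetical body: record the change in a local audit table on the slave
    INSERT INTO my_table_audit (id, changed_at) VALUES (NEW.id, now());
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER my_table_audit_trg
    AFTER INSERT OR UPDATE ON my_table
    FOR EACH ROW EXECUTE PROCEDURE audit_my_table();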
Thanks
RV



Re: [SQL] map row in one table with random row in another table

2012-03-07 Thread rverghese
That's cool, thanks!
Yeah, I just want to create some test data, so it's not something I would run
often.



[SQL] map row in one table with random row in another table

2012-03-06 Thread rverghese
Hi, I am trying to map every row in one table to a random row in another table.
So, for example, for each network in one table I am trying to map a random
segment from the other table. I have the SQL below, but it always applies the
same random segment it picks to every row for the network; I want each row to
get its own random segment value. (I'm just using the generate_series function
to generate IDs as an example.)
Any suggestions?

My query:

select id, seg_list
from (select generate_series(1,10) as id) as X,
     (select segment as seg_list from segments order by random() limit 1) as Y

I get

1;'cob0002'
2;'cob0002'
3;'cob0002'
4;'cob0002'
5;'cob0002'
6;'cob0002'
7;'cob0002'
8;'cob0002'
9;'cob0002'
10;'cob0002'

What I want is 

1;'cob0002'
2;'cob0008'
3;'cob0006'
4;'cob0004'
5;'cob0002'
6;'cob0007'
7;'cob0003'
8;'cob0004'
9;'cob0009'
10;'cob0001'

I also tried:

select generate_series(1,10),
       (select segment from segments order by random() limit 1)
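One more variant I am considering (untested sketch): reference the outer column
inside the subquery so that it becomes correlated, which, as I understand it,
should force it to be re-evaluated for every row instead of being run once as
an init plan:

select X.id,
       (select segment
          from segments
         where X.id = X.id          -- always true; only there to correlate the subquery
         order by random()
         limit 1) as seg_list
  from generate_series(1,10) as X(id);

I am not sure whether that behaviour is guaranteed, though.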





Re: [SQL] Concurrent Reindex on Primary Key for large table

2012-02-22 Thread rverghese
Hi,
I had another question: what about when the primary key is referenced by a
foreign key in another table? Is the only option to drop the FK and recreate it
after the primary key has been rebuilt on the new index?
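In other words (constraint, table and column names here are made up), I assume
the sequence would have to be something like:

-- drop the dependent foreign key first
ALTER TABLE child_table DROP CONSTRAINT child_table_parent_id_fkey;

-- ... rebuild the primary key / its index on parent_table here ...

-- then recreate the foreign key
ALTER TABLE child_table ADD CONSTRAINT child_table_parent_id_fkey
    FOREIGN KEY (parent_id) REFERENCES parent_table (id);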

Thanks!
RV



Re: [SQL] Concurrent Reindex on Primary Key for large table

2012-02-09 Thread rverghese
Thanks! That worked.

Any thoughts on containing index bloat? I thought autovacuum would clean it up
a bit more. Would any tweaks to my settings improve autovacuum performance? I
am still doing a couple of concurrent reindexes per week; otherwise performance
degrades over a couple of days.
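For instance, would per-table overrides along these lines help? (The table name
and the values are just guesses on my part, not something I have tested.)

-- make autovacuum trigger sooner on the heavily churned table
ALTER TABLE my_big_table SET (
    autovacuum_vacuum_scale_factor  = 0.02,
    autovacuum_analyze_scale_factor = 0.02,
    autovacuum_vacuum_threshold     = 1000
);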

Thanks
RV



[SQL] Concurrent Reindex on Primary Key for large table

2012-02-08 Thread rverghese
I have a large table with about 60 million rows; every day I add 3-4 million
rows, remove 3-4 million, and update 1-2 million. I have a script that
reindexes concurrently a couple of times a week, since I see significant bloat.
I have autovacuum on, with the settings below. I can't concurrently reindex the
primary key, since there can be only one primary key on a table, and I can't
take the table offline to do a drop/recreate. I assumed autovacuum would take
care of the bloat there, but I checked the size of the index, dropped and
recreated it, and the size went from 2.5GB to 1.3GB; my daily import into that
table then took almost half the time.
I was wondering what the options are for reindexing it. Should I drop the
primary key and keep just a unique index, so that I can reindex concurrently?
Do I lose anything by replacing the primary key with a unique index? I prefer
to have a primary key on the table for Slony, but I suppose I could make the
OID column the primary key just for Slony purposes.
Any thoughts about the best approach would be appreciated. Note that this bloat
on the primary key is from about 2-3 weeks of daily deletes/updates/inserts,
and I have multiple tables with similar structure and data size.
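For what it's worth, the swap I have in mind (untested; table, column and index
names are placeholders) would rely on 9.1's PRIMARY KEY USING INDEX clause:

-- build a replacement unique index without blocking writes
CREATE UNIQUE INDEX CONCURRENTLY my_table_pkey_new ON my_table (id);

-- swap it in as the primary key (brief exclusive lock, but no long rebuild)
ALTER TABLE my_table DROP CONSTRAINT my_table_pkey;
ALTER TABLE my_table ADD CONSTRAINT my_table_pkey
    PRIMARY KEY USING INDEX my_table_pkey_new;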

Here are my autovac settings. I am on PostgreSQL 9.1.1

#autovacuum_max_workers = 3              # max number of autovacuum subprocesses
#autovacuum_naptime = 1min               # time between autovacuum runs
autovacuum_vacuum_threshold = 2          # min number of row updates before vacuum (default 50)
autovacuum_analyze_threshold = 2         # min number of row updates before analyze (default 50)
autovacuum_vacuum_scale_factor = 0.1     # fraction of table size before vacuum
autovacuum_analyze_scale_factor = 0.1    # fraction of table size before analyze
#autovacuum_freeze_max_age = 2           # maximum XID age before forced vacuum
autovacuum_vacuum_cost_delay = 5ms
#autovacuum_vacuum_cost_limit = -1

Thanks
RV



Re: [SQL] ignore unique violation OR check row exists

2012-01-04 Thread rverghese
Thanks for the recommendations. Unfortunately I have to clean out the data
before I insert, so I cannot do a bulk copy straight from a CSV into the
destination table. I will try the option of inserting into a src (staging)
table and then copying the relevant data to the dest table, and see if that
works faster for me. I suppose I could bulk insert, clean out the data in the
staging table, and only then insert into the destination table; I'll have to
see how long that takes.
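Roughly what I have in mind (table names, columns and the file path are
placeholders, not my real schema):

-- bulk load the raw feed into a staging table
COPY staging_table FROM '/path/to/feed.csv' WITH CSV;

-- ... clean out staging_table here ...

-- then move only the rows that are not already in the destination
INSERT INTO dest_table (id, val)
SELECT s.id, s.val
FROM staging_table s
WHERE NOT EXISTS (SELECT 1 FROM dest_table d WHERE d.id = s.id);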




[SQL] ignore unique violation OR check row exists

2012-01-03 Thread rverghese
I want to insert a bunch of records and do nothing if a record already exists.
The two options I considered are: 1) check whether the row exists before
inserting, and 2) insert and ignore the unique violation if the row already
exists.
Any opinions on whether it is faster to INSERT and then catch and ignore the
unique_violation exception in plpgsql, versus checking whether the row exists
and INSERTing only if it doesn't?
I can't seem to ignore the unique violation from PHP, since the exception
handling is a plpgsql construct. So either I do the check-then-insert, or I use
a plpgsql function that tries the insert and ignores the violation. I was
wondering if one way was better than the other.
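The function I mean is along these lines (a minimal sketch; the table and
column names are placeholders):

CREATE OR REPLACE FUNCTION insert_ignore_dup(p_id integer, p_val text)
RETURNS void AS $$
BEGIN
    INSERT INTO dest_table (id, val) VALUES (p_id, p_val);
EXCEPTION
    WHEN unique_violation THEN
        NULL;  -- row already exists; silently ignore it
END;
$$ LANGUAGE plpgsql;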
Thanks
