I need to modify my database schema and create a new trigger function. I know
that triggers are suppressed on the slave once Slony starts up. However,
since Slony is already running on the slave, if I create a new trigger
there, it will not be suppressed. What is the best practice for creating a
new trigger in this situation?
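For what it's worth, the usual recommendation with Slony-I is to apply DDL through slonik's EXECUTE SCRIPT, which replays the script on every node and lets Slony manage trigger suppression consistently. A minimal sketch; the cluster name, conninfo strings, node IDs, and file path are all placeholders:

```
# slonik script -- names, IDs, and paths are hypothetical
cluster name = mycluster;
node 1 admin conninfo = 'dbname=mydb host=master';
node 2 admin conninfo = 'dbname=mydb host=slave';

EXECUTE SCRIPT (
    SET ID = 1,
    FILENAME = '/tmp/add_trigger.sql',  # file containing the CREATE TRIGGER DDL
    EVENT NODE = 1
);
```

This way the trigger is created on both nodes through Slony rather than by hand on the slave.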
That's cool, thanks!
Yeah I just want to create some test data, so it's not something I would run
often.
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/map-row-in-one-table-with-random-row-in-another-table-tp5542231p5545510.html
Sent from the PostgreSQL - sql mailing list archive at Nabble.com.
Hi, I am trying to map every row in one table with a random row in another.
For example, for each network in one table I am trying to map random segments
from the other table. I have the SQL below, but it always applies the same
random segment that it picks to all the rows for the network. I want each
network to get a different random segment.
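One way to get a different random segment per outer row is a correlated LATERAL subquery (PostgreSQL 9.3+). The reference to the outer row inside the subquery matters: without it, the planner may evaluate the subquery once and reuse the same result for every row, which is exactly the symptom described. Table and column names below are invented for illustration:

```sql
-- Hypothetical schema: networks(network_id), segments(segment_id).
SELECT n.network_id, s.segment_id
FROM networks n
CROSS JOIN LATERAL (
    SELECT seg.segment_id
    FROM segments seg
    -- Referencing n correlates the subquery, forcing it to be
    -- re-evaluated once per network row.
    WHERE n.network_id IS NOT NULL
    ORDER BY random()
    LIMIT 1
) s;
```

On versions before LATERAL, the same correlation trick inside a scalar subquery in the SELECT list achieves a similar effect.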
Hi
I had another question: what about when the primary key is referenced by a
foreign key in another table? Is the only option to drop the FK and recreate
it after the primary key has been recreated with the new index?
Thanks!
RV
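As far as I know, yes: a primary key that is referenced by foreign keys cannot be dropped without first dropping those FK constraints. The usual concurrent swap looks roughly like the sketch below (table, column, and constraint names invented); on PostgreSQL 12 and later, REINDEX CONCURRENTLY avoids the whole dance:

```sql
-- Hypothetical schema: parent(id) referenced by child(parent_id).
CREATE UNIQUE INDEX CONCURRENTLY parent_pkey_new ON parent (id);

BEGIN;
-- Dropping the old PK requires dropping the FKs that reference it.
ALTER TABLE child DROP CONSTRAINT child_parent_id_fkey;
ALTER TABLE parent DROP CONSTRAINT parent_pkey;
-- Promote the freshly built index to be the primary key (9.1+).
ALTER TABLE parent ADD CONSTRAINT parent_pkey
    PRIMARY KEY USING INDEX parent_pkey_new;
-- Recreate the foreign key.
ALTER TABLE child ADD CONSTRAINT child_parent_id_fkey
    FOREIGN KEY (parent_id) REFERENCES parent (id);
COMMIT;
```

The transaction keeps the window without the FK short, but the FK recreation still has to scan the referencing table while holding its locks.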
Thanks! That worked.
Any thoughts about containing index bloat? I thought the autovac would clean
it up a bit more. Would any tweaks to my settings improve autovac
performance? I am still doing a couple of concurrent reindexes per week,
otherwise performance degrades over a couple of days.
Thanks!
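For a table with that much churn, the global autovacuum thresholds are often too lax; per-table storage parameters can make vacuum kick in much sooner. The numbers below are illustrative starting points, not tuned values, and the table name is made up:

```sql
-- Vacuum after ~1% of rows change instead of the 20% default,
-- analyze after ~0.5% instead of 10%, and throttle vacuum less.
ALTER TABLE big_table SET (
    autovacuum_vacuum_scale_factor  = 0.01,
    autovacuum_analyze_scale_factor = 0.005,
    autovacuum_vacuum_cost_delay    = 10
);
```

Note that autovacuum reclaims dead tuples but does not shrink an already-bloated index; more aggressive settings mostly slow further bloat rather than undo it.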
I have a large table with about 60 million rows; every day I add 3-4 million,
remove 3-4 million and update 1-2 million. I have a script that reindexes
concurrently a couple of times a week, since I see significant bloat. I have
autovac on and the settings are below. I can't concurrently reindex the
primary key index, though.
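For ordinary (non-constraint) indexes, that weekly rebuild can be scripted as build-new / drop-old / rename, all without blocking writes; index, table, and column names here are invented:

```sql
-- Build a fresh, unbloated copy of the index without blocking writes.
CREATE INDEX CONCURRENTLY big_table_col_idx_new ON big_table (col);
-- DROP INDEX CONCURRENTLY is available from PostgreSQL 9.2.
DROP INDEX CONCURRENTLY big_table_col_idx;
ALTER INDEX big_table_col_idx_new RENAME TO big_table_col_idx;
```

Indexes backing PRIMARY KEY or UNIQUE constraints can't be dropped this way, which is where the constraint-swap approach (or, on PostgreSQL 12+, REINDEX CONCURRENTLY) comes in.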
Thanks for the recommendations. Unfortunately I have to clean out the data
before I insert, so I cannot do a bulk copy from a CSV. I will try the
option of inserting into the src table and then copying relevant data to the
dest table and see if that works faster for me. I suppose I could bulk insert
and then clean the data afterwards.
I want to insert a bunch of records and not do anything if the record already
exists. The two options I considered are: 1) check whether the row exists
before inserting, or 2) ignore the unique violation on insert if the row
exists.
Any opinions on whether it is faster to INSERT and then catch the
unique-violation exception?
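For comparison, a sketch of both approaches, with invented table names. On PostgreSQL 9.5+ `INSERT ... ON CONFLICT DO NOTHING` supersedes both; on older versions the anti-join form is usually faster for bulk loads because trapping an exception costs a subtransaction per row:

```sql
-- Option 1: set-based anti-join; insert only rows not already present.
INSERT INTO dest (id, payload)
SELECT s.id, s.payload
FROM   staging s
WHERE  NOT EXISTS (SELECT 1 FROM dest d WHERE d.id = s.id);

-- Option 2: trap the unique violation row by row in PL/pgSQL.
DO $$
BEGIN
    BEGIN
        INSERT INTO dest (id, payload) VALUES (42, 'x');
    EXCEPTION WHEN unique_violation THEN
        NULL;  -- row already present, ignore it
    END;
END $$;
```

One caveat: the anti-join form can still race with concurrent inserts into `dest`; the exception handler (or ON CONFLICT) is the only variant that is safe under concurrency.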