dear all,
i have a question about a problem we recently faced and are trying to pin down.
let's suppose we have a function A, and a function B that at some point calls function A.

function A ->
    ...
    insert into table1 (col1, col2) values ($1, $2)
    ...

function B ->
    ...
    select ...
    update ...
    insert ...
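for what it's worth, the shape described above can be sketched in plpgsql like this (all names here are illustrative, not from the original post; the select/update work in B is elided since the point is only the nested call):

```sql
-- hypothetical sketch of a function B that calls a function A which inserts
create table table1 (col1 integer, col2 text);

create or replace function func_a(p1 integer, p2 text) returns void as $$
begin
    insert into table1 (col1, col2) values (p1, p2);
end;
$$ language plpgsql;

create or replace function func_b() returns void as $$
begin
    -- ... select / update work elided ...
    perform func_a(1, 'example');  -- the call into function A
end;
$$ language plpgsql;
```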
thx a lot for your help. it worked great :)
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/upgrade-postgres-to-8-4-8-centos-5-3-tp4822762p4841782.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
i searched on the net and didn't find this rpm. anyway, the output is the
following ->
baseurl=http://yum.pgsqlrpms.org/8.4/redhat/rhel-$releasever-$basearch
baseurl=http://yum.pgsqlrpms.org/srpms/8.4/redhat/rhel-$releasever-$basearch
baseurl=http://yum.pgsqlrpms.org/8.4/redhat/rhel-$releasever-$bas
thx for your answer.
do you mean something like that? -> yum list | grep *PGDG*rpm
or shouldn't i search in the yum repos?
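a sketch of the commands i'd run on the box to check for PGDG packages (note that an unquoted glob like *PGDG* gets expanded by the shell against the current directory before grep ever sees it, so quote the pattern or use a plain string):

```shell
# what rpm thinks is installed from the PGDG repo, if anything
rpm -qa | grep -i pgdg
# what yum thinks is installed (pattern quoted so the shell leaves it alone)
yum list installed 'postgresql*'
```

these depend on the state of the server, so they are shown as a sketch rather than a runnable example.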
hello all,
i have a centos 5.3 box with postgres 8.4.4 installed from the repos. i want
to upgrade to 8.4.8, but when i try to install the .bin file of 8.4.8 it
behaves as a fresh installation, and when i run yum check-update nothing new
shows up. any ideas? tnx in advance
just another update, since the system is up and running, and one more question
:p
the secondary server is able to restore the wal archives practically
immediately after they arrive. i have set up an rsync cron job to send the new
wals every 5 minutes. the procedure to transfer the files and to restore
t
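a minimal sketch of the kind of cron entry described above, assuming the master archives wal into a local directory and the standby reads from a matching one (paths and hostname are illustrative, not from the original post):

```
# hypothetical crontab entry on the master: push newly archived wal
# segments to the standby every 5 minutes
*/5 * * * * rsync -a /var/lib/pgsql/wal_archive/ standby:/var/lib/pgsql/wal_archive/
```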
just an update from my tests:
i restored from the backup. the db is about 2.5TB and the wal archives were
about 300GB; the recovery of the db completed after 3 hours. thx to all
for your help
the network transfer does not bother me for now. i will first try to do the
whole procedure without compression, so as not to waste any cpu time
compressing and decompressing. through the 4Gbps ethernet, the 200GB of the
day can be transferred in a matter of minutes (200GB at 4Gbps is roughly
200*8/4 = 400 seconds of raw transfer time, under 7 minutes). so i will try it
a
The network bandwidth between the servers is definitely not an issue. What is
bothering me is the big size of the wal archives, which goes up to 200GB per
day, and whether the standby server will be able to replay all these files. The
argument that, since the master can do it and also do various other ta
the nodes communicate through 4Gbps ethernet, so i don't think there is an
issue there. probably some kind of misconfiguration of DRBD has occurred. i
will check on that tomorrow. thx a lot :)
thx a lot for your answer.
actually DRBD is the solution i am trying to avoid, since i think it degrades
performance a lot (i've used it in the past). and i also have serious doubts
about whether the data ends up corrupted in case of the master's failure, if
not all blocks have been replicated to the second
my bad...
i read in the manual that the recovery process is continuous and runs all the
time. so the question now is:
how many wals can this procedure handle? for example, can it handle 100-200GB
every day? if it cannot, any other suggestions for HA? thx in advance
hello all,
i would like your advice on the following matter. if i am not wrong, by
implementing a warm standby (pg 8.4) the wal archives are sent to the
failover server, and when the time comes the failover, which already has a
copy of the /data of the primary and all the wal archives, starts
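on 8.4 a warm standby of this shape is usually driven by pg_standby in the standby's recovery.conf; a minimal sketch, with illustrative paths (the trigger file is what you touch to promote the standby):

```
# hypothetical recovery.conf on the 8.4 standby
# pg_standby waits for each archived segment, keeping the server in
# continuous recovery until the trigger file appears
restore_command = 'pg_standby -t /tmp/pgsql.trigger /var/lib/pgsql/wal_archive %f %p %r'
```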
hello all,
i came across a strange finding the other day and i would appreciate any ideas
on the matter (if any). while checking the locks on the server i found a
tuple indicating that a prepared transaction had requested an exclusive lock
on a relation. in general, i am aware of the situations w
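a sketch of the query i'd use to tie such lock tuples back to their prepared transactions; it relies on the fact that in pg_locks a lock held by a prepared (two-phase) transaction shows a virtualtransaction of the form '-1/<xid>':

```sql
-- locks held by prepared transactions, joined via the xid embedded in
-- the virtualtransaction column
select p.gid,
       p.prepared,
       l.locktype,
       l.mode,
       l.relation::regclass as relation
from pg_prepared_xacts p
join pg_locks l
  on l.virtualtransaction = '-1/' || p.transaction;
```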
i looked into data partitioning and it is definitely something we will use
soon. but, as far as the backups are concerned, how can i take a backup
incrementally? if i get it correctly, the idea is to partition a big table
(using a date field, for example) and then, each night for example, take a dump
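that is the usual shape of it: with date-based partitions only the current child table changes, so a nightly dump of just that partition acts as the incremental piece, while older partitions only need to be dumped once. a sketch with illustrative names:

```shell
# dump only the current month's partition each night; older partitions
# are static and already backed up (table/db names are hypothetical)
pg_dump -Fc -t big_table_2011_07 -f /backups/big_table_2011_07.dump mydb
```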
thx a lot. i will definitely look into that option.
in the meantime, if there are any other suggestions, i'd love to hear them
hello to all,
i am trying to find an "acceptable" solution for a backup strategy on one of
our servers (will be 8.4.8 soon, now it's 8.4.7). i am familiar with both
logical (dump/restore) and physical backups (pg_start_backup, wal archives
etc.) and have tried both in some other cases.
the issue her
thx a lot for the answers
we will upgrade to 8.4.8 and i will monitor the situation :)
i will provide some info in case it is helpful:
- the error msg is -> WARNING: PD_ALL_VISIBLE flag was incorrectly set in
relation "summary_data" page 54
(the same thing appears for many tables and many pages in each table)
- in my internet searching i found some cases where this issue was related
hello to all,
the last few days i have observed tons of msgs like the above on one of our db
servers. i have searched a lot on the internet but didn't find much relevant
information. i read that this could mean some kind of serious data
corruption, but didn't find more info in that direction. on the
thank you all for your help. in the end, the big table had many more rows (2
billion) than the stats showed, so there is no "weird" thing going on.
i mentioned the sequence numbers only b/c it seemed strange and i didn't know
if it could be related to the "weird" sizes.
now i found something even weirder... autovacuum is ON, but in
pg_stat_user_tables for this specific table the last_vacuum and
last_autovacuum are both NULL... how can this happen
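a sketch of the queries i'd run to compare the planner's estimate against the stats view and check the vacuum timestamps (the table name is just the one mentioned earlier in the thread, substitute your own):

```sql
-- live-tuple estimate and last (auto)vacuum / analyze times
select relname, n_live_tup, last_vacuum, last_autovacuum, last_analyze
from pg_stat_user_tables
where relname = 'summary_data';

-- planner's row estimate and the table's total on-disk size
select reltuples::bigint as estimated_rows,
       pg_size_pretty(pg_total_relation_size('summary_data')) as total_size
from pg_class
where relname = 'summary_data';
```

note that NULL timestamps can also simply mean the statistics were reset more recently than the last vacuum ran.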
thx for the reply :)
the tables are identical, and i mean that they have the same columns, the
same constraints, the same indexes etc.
1) the small table (65GB) is on version 8.4.7 and the big one (430GB) on 8.4.4
2) the small is on Red Hat 4.1.2-50 and the big on Red Hat 4.1.2-46
3) the 2nd was rest
hello to all,
i would like your help in the following matter ->
we have 2 identical databases. the 1st was built from scratch, while the 2nd
was 'restored' from a dump of another database (without the data), so the
sequences on the 2nd, for instance, started from very big numbers. in these
databases