Hi All,
Just wondering whether someone might recognize the problem I am
having. First the intro: PostgreSQL 8.1.0 running on FreeBSD
6.0-RELENG, on a dual Xeon 3.0 GHz machine with 2 GB memory / 2 GB swap.
The problem is we have been experiencing random disconnects from our
main application and a
Quick question.
I am looking at top on my RH AS 4.0 server that runs my PG 8.1
database. When I see a db process in top, it says postgres: user
db ipaddr(number) condition (i.e. postgres: user1 db1 10.4.10.10(23456)
SELECT).
What does the number after the IP address represent (the 23456)?
Chris Hoover [EMAIL PROTECTED] writes:
What does the number after the IP address represent (the 23456)?
It's the TCP port number on the client end. Some people requested that
so they could tell apart multiple connections from the same client
machine (you can use lsof or equivalent on the
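To make that concrete: the number in parentheses is the client-side TCP port, so it can be pulled straight out of the process title and then handed to lsof on the client machine. The title string below is just the example from the question; the lsof line is left commented out since it needs a live connection (and usually root):

```shell
# Example process title from the question above; the number in
# parentheses is the client-side TCP port.
title='postgres: user1 db1 10.4.10.10(23456) SELECT'

# Extract the port with sed.
port=$(printf '%s\n' "$title" | sed -n 's/.*(\([0-9][0-9]*\)).*/\1/p')
echo "$port"    # prints 23456

# On the client machine, lsof can then identify the owning process:
#   lsof -i tcp:23456
```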
In java, this big fat exception:
org.postgresql.util.PSQLException: An I/O error occurred while
sending to the backend.
Call stack:
org.postgresql.util.PSQLException: An I/O error occurred while
sending to the backend.
at
What are the recommended ulimit settings? A single postgres connection
that does an insert (with no other database activity) of, say, 6 million
rows into a 15-column table (all integer columns) cannot complete due to
an out of memory error.
This happens on a 4 GB Linux box with shmmax set to 1 GB and
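For what it's worth, a common workaround when one huge load statement runs out of memory is to split the input and load it in bounded chunks (ideally with COPY, each chunk in its own transaction). This is only a sketch; the file, table, and database names are made up:

```shell
# Hypothetical sketch: load millions of rows in batches instead of one
# giant INSERT.  Out-of-memory on a single huge insert is often
# accumulated per-row state; COPY in bounded chunks avoids it.
seq 1 10 > rows.csv                  # stand-in for the real 6M-row dump
split -l 4 rows.csv batch_           # 4 lines per chunk -> batch_aa, ab, ac
ls batch_* | wc -l                   # prints 3 (number of chunks)

# Then load each chunk in its own transaction, e.g.:
#   for f in batch_*; do
#       psql -d mydb -c "\copy bigtable FROM '$f' WITH CSV"
#   done
```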
Hi,
I am in the process of implementing failover for our production
database. In order to be able to restore to the last archive log, I have a
cron job from the production database that runs every 5-10 min to check
if there are any new archive logs and copy new archive logs to the
remote standby failover machine.
...I have a
cron job from the production database that runs every 5-10min to check
if there are any new archive logs and copy new archive logs to the
remote standby failover machine.
The problem with this scenario is that I might scp a partially filled
Steve Crawford wrote:
...I have a
cron job from the production database that runs every 5-10min to
check if there are any new archive logs and copy new archive logs to
the remote standby failover machine.
The problem with this scenario is that I might scp
Hi Steve,
Thanks for the quick reply! I thought about rsync too, but wasn't
completely sure how it handles partial files. I use rsync for all
the backups; it works fine for all the applications except our mail
application: it copies the files but at the end of the job it gives me
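Whatever the transport, the standard way around the partial-file problem is to copy under a temporary name and rename only once the copy has finished; a rename within one filesystem is atomic, so the standby never sees a half-written segment. A minimal local sketch, with hypothetical paths:

```shell
# Copy-then-rename so a reader never sees a partially written file.
# All paths here are made up for illustration.
src=/tmp/demo_wal/000000010000000000000001
dst_dir=/tmp/demo_standby
mkdir -p /tmp/demo_wal "$dst_dir"
echo 'fake wal segment' > "$src"

cp "$src" "$dst_dir/$(basename "$src").tmp" &&
    mv "$dst_dir/$(basename "$src").tmp" "$dst_dir/$(basename "$src")"

cat "$dst_dir/000000010000000000000001"   # prints: fake wal segment
```

Over the network the same idea works with scp to a temporary name followed by a remote mv, or with rsync, which already transfers into a temporary file and renames it only after the transfer completes.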
So, I've run a number of PG databases for a number of years... and I've
now run into something I've never seen before in the most trivial
of places.
A couple of months back my girlfriend installed a music player called
amarok on her system... I don't know much about it, but it stores its
Gregory Maxwell wrote:
I recently noticed that this database has grown to a huge size. ...
Which I found to be somewhat odd because none of the tables have more
than around 1000 rows. I hadn't been vacuuming because I didn't
think that anything would ever be deleted so I performed a
On 3/23/06, Alvaro Herrera [EMAIL PROTECTED] wrote:
Gregory Maxwell wrote:
I recently noticed that this database has grown to a huge size. ...
Which I found to be somewhat odd because none of the tables have more
than around 1000 rows. I hadn't been vacuuming because I didn't
think
Gregory Maxwell [EMAIL PROTECTED] writes:
When I VACUUM FULLed, nothing else was connected... I just restarted PG
and vacuumed again... no obvious change of disk size (still 6.4
gigs)... but this changed:
amarokcollection=# select relname, pg_relation_size(oid) FROM
pg_class ORDER BY 2 DESC
On 3/23/06, Tom Lane [EMAIL PROTECTED] wrote:
amarokcollection=# select relname, pg_relation_size(oid) FROM
pg_class ORDER BY 2 DESC LIMIT 20;
             relname             | pg_relation_size
---------------------------------+------------------
 pg_attribute_relid_attnam_index |
Gregory Maxwell [EMAIL PROTECTED] writes:
So it's by design that these now-bloated indexes won't shrink if left
unvacuumed? I didn't expect to hit something like that.
Well, the VACUUM FULL algorithm is incapable of shrinking indexes ---
the only way is REINDEX, or something else that reconstructs
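So the practical fix for the bloated indexes in this thread is REINDEX. A small sketch that just generates the statements — the index names here are hard-coded for illustration (they happen to be real pg_attribute catalog indexes); in practice they would come from a pg_relation_size query like the one earlier in the thread, and reindexing system catalog indexes generally requires superuser:

```shell
# Generate REINDEX statements for a hard-coded example list of index
# names; normally these would be selected from pg_class by size.
stmts=$(printf '%s\n' \
    pg_attribute_relid_attnam_index \
    pg_attribute_relid_attnum_index |
    awk '{ printf "REINDEX INDEX %s;\n", $1 }')
echo "$stmts"

# Run against the database with, e.g.:
#   echo "$stmts" | psql -d amarokcollection
```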