Hello,
On Tue, 2010-06-22 at 04:50 -0700, Deven wrote:
Hi all,
I am using the PostgreSQL database for our system storage and I am running
the autovacuum daemon on my entire database. But one of the tables in my
database never undergoes autovacuuming. I always need to do the manual
Hello all,
I have a hard situation on my hands: my autovacuum does not seem to be able
to get its job done;
database is under active INSERTs/UPDATEs;
CPU has been at approx. 50% iowait for the past 5 hours;
I've tried turning off autovacuum and the effect goes away; I turn it back
on and it goes back to
Hello all,
I need to write an application in C to read the list of databases
currently in the server, very much like a psql -l...
but I need it in C! I have never used C before to access PG.
the libpq API seems a bit scary! Is there anything, shipped with
postgresql, other than libpq that would
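For what it's worth, psql -l in the end just runs a catalog query; a libpq program would connect and execute the same statement. A minimal sketch of the query (the psql call is commented out since it assumes a running local server):

```shell
# "psql -l" boils down to this catalog query; a libpq program would
# PQconnectdb(), PQexec() the same string and loop over the PQntuples() rows.
sql="SELECT datname FROM pg_database WHERE NOT datistemplate ORDER BY 1;"
# with a running local server (assumption), e.g.:
#   psql -At -U postgres -c "$sql" postgres
echo "$sql"
```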
On Fri, 2010-03-05 at 10:03 -0500, akp geek wrote:
Hi All -
I am still having the issue, even after I turned on
autovacuum. I have a quick question: how do I know that the autovacuum
process is running? When I restarted my database, I got the message
auto vacuum launcher
On Wed, 2010-03-03 at 12:46 -0500, akp geek wrote:
Hi All -
I need some help from you. This question is a follow-up
to my earlier questions. I turned on autovacuum and restarted the
db, and the settings I have are as follows. It seems the autovacuum process
has not been turned on.
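Two quick checks that can be scripted; the psql call is commented out since it assumes a running local server and the postgres OS user. Note that on 8.1/8.2 autovacuum additionally requires stats_start_collector and stats_row_level to be on:

```shell
# Is the daemon actually on? On 8.3+, "ps ax | grep '[a]utovacuum'" shows
# the launcher process. The GUC itself can be read with SHOW:
check_sql="SHOW autovacuum;"
#   su postgres -c "psql -Atc '$check_sql'"    # should print: on
echo "$check_sql"
```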
Hello,
Considering the CLUSTER operation on a frequently updated table, if I
have 2 indexes on the table how do I choose one of them? Or is it
possible to have CLUSTER take both into consideration...
my table is read from based on two columns: a 'timestamp' integer column
(actually a UTC
On Fri, 2010-02-12 at 18:43 +0100, Marcin Krol wrote:
Amitabh Kant wrote:
You need to do VACUUM FULL ANALYZE to reclaim the disk space, but this
creates an exclusive lock on the tables.
See http://www.postgresql.org/docs/8.3/static/sql-vacuum.html
Aha!
OK but why did the performance
A strange behaviour is observed in the physical files with respect to
this table. The size of the file is growing abnormally, in GBs. Suppose
the file name (OID of the relation) with respect to the table is 18924; I
could find entries of 1 GB files like 18924, 18924.1, 18924.2,
I'd suggest:
pg_dumpall --clean > dump.sql
edit the dump.sql file by hand replacing database name and owners and
so...
then reload into the new DB with psql -f dump.sql postgres
this does all the work except creation of users and databases
should give you an exact replica with all data inside
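The "edit the dump.sql file by hand" step can itself be scripted with sed; olddb/newdb and olduser/newuser below are hypothetical stand-ins for the real database and owner names:

```shell
# sed-scripted version of the "edit by hand" step (hypothetical names).
printf 'CREATE DATABASE olddb OWNER olduser;\nALTER DATABASE olddb OWNER TO olduser;\n' > /tmp/dump.sql
sed -e 's/olddb/newdb/g' -e 's/olduser/newuser/g' /tmp/dump.sql > /tmp/dump.edited.sql
cat /tmp/dump.edited.sql
# then reload: psql -f /tmp/dump.edited.sql postgres
```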
On Wed, 2009-11-18 at 08:39 -0700, Scott Marlowe wrote:
On Wed, Nov 18, 2009 at 8:12 AM, Joao Ferreira gmail
joao.miguel.c.ferre...@gmail.com wrote:
I'd suggest:
pg_dumpall --clean > dump.sql
I'd think he'd be much better off with pg_dump, not pg_dumpall.
yes, agree. sorry.
joao
Hello all,
How can I safely erase (with the rm command in Linux) files or dirs
concerning a specific database?
assuming I wish to eliminate data belonging to database A but I do not
wish to disturb or cause any injury to database B
Is there documentation on how to do this or on what exactly am
was considering
rm
thx
Joao
On Sat, 2009-11-14 at 14:35 -0500, Bill Moran wrote:
Joao Ferreira gmail joao.miguel.c.ferre...@gmail.com wrote:
Hello all,
How can I safely erase (with the rm command in Linux) files or dirs
concerning a specific database?
What do you mean
On Thu, 2009-05-28 at 16:43 +0100, Grzegorz Jaśkiewicz wrote:
On Thu, May 28, 2009 at 4:24 PM, inf200...@ucf.edu.cu wrote:
hi, sorry my english
I need to copy a database from Windows to Linux. How can I save my
database from Windows with pg_dump, and where is the file??
and after
hello,
as a perl addict I am... I recommend checking this out:
http://search.cpan.org/~cmungall/DBIx-DBStag/DBIx/DBStag/Cookbook.pm
it's pretty flexible and allows you to specify, to some extent, just how
the database structure is inferred from the XML...
check it out
Joao
On Wed, 2009-05-06
On Wed, 2009-05-06 at 16:53 +0100, Joao Ferreira gmail wrote:
hello,
as a perl addict I am... I recommend checking this out:
http://search.cpan.org/~cmungall/DBIx-DBStag/DBIx/DBStag/Cookbook.pm
it's pretty flexible and allows you to specify to some extent just how
the database structure
pg 8.1.4 has a very ugly bug which prevents VACUUM and AUTOVACUUM from
performing well
In certain situations AUTOVACUUM will start failing, and any VACUUM
operations will fail too.
the solution I found was to periodically REINDEX my tables and indexes.
the major effect of this bug is Pg
On Wed, 2009-04-22 at 22:12 +0530, S Arvind wrote:
Our company wants to move from 8.1 to the latest 8.3. In IRC they told me
to check the release notes for issues while upgrading. But there are lots
of release notes. Can anyone tell me the most noticeable changes or
places-of-error while upgrading?
one I
Coming in loud and clear!
joao
On Wed, 2009-04-22 at 13:21 -0400, Atul Chojar wrote:
Could someone reply to this email? I am testing my subscription; joined over
2 months ago, but never get any response to questions
Thanks!
Atul
-Original Message-
From:
On Sun, 2009-04-12 at 09:27 -0700, Irene Barg wrote:
Hi,
We are running postgresql-8.1.9 and plan to upgrade to 8.2 or even 8.3
but can't just yet. I need to run analyze periodically (like hourly),
but before I write a script to loop through the tables in each schema
and run analyze, I
On Wed, 2009-01-28 at 09:09 -0800, David Miller wrote:
pg_dump does not include the schema name on INSERT statements generated
with the -d option when exporting data for a particular table using
-t schema.table in version 8.3. I believe this same bug exists in 8.4 but
have not
hello all,
I have 2 dumps of the same Pg database at different instants.
I'd like to merge the two dumps into one single dump in order to
restore all data at one time.
Is this possible ? are there any helper tools to aid in dealing with
text dump files ?
thanks
Joao
--
Sent via
On Tue, 2008-11-11 at 11:16 +, Richard Huxton wrote:
Joao Ferreira gmail wrote:
hello all,
I have 2 dumps of the same Pg database at different instants.
I'd like to merge the two dumps into one single dump in order to
restore all data at one time.
Is there any overlap
On Wed, 2008-11-05 at 15:08 -0600, Tony Fernandez wrote:
Hello all,
I am in the process of updating my DB on Postgres 8.1.11 to 8.3.4. I
also use Slony 1.2.14 for replication.
Is there a safe path on how to accomplish this? Please advise on what
steps I will need to consider.
On Fri, 2008-10-31 at 17:31 +0200, Devrim GÜNDÜZ wrote:
Hi,
Have you considered installing directly from CPAN?
# perl -MCPAN -e 'install DBD::Pg;'
joao
On Fri, 2008-10-31 at 09:20 -0400, Kevin Murphy wrote:
My life would be complete if it offered perl-DBD-Pg for CentOS 5!
We had
Hello,
I've been searching the docs on a simple way to convert a time
_duration_ in seconds to the format dd:hh:mm:ss, but I can't find it.
90061 -- 1d 1h 1m 1s
(90061=24*3600+3600+60+1)
any ideas?
I've been using to_char and to_timestamp to format dates/timestamps...
but this is different...
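For what it's worth, a duration in seconds can be formatted with plain shell arithmetic outside the database; inside it, justify_hours() gets close. A sketch:

```shell
# Plain-shell formatting of a duration given in seconds; inside Pg,
# "SELECT justify_hours(90061 * interval '1 second');" gets close
# (it prints: 1 day 01:01:01).
secs=90061
d=$((secs / 86400)); r=$((secs % 86400))
h=$((r / 3600));     r=$((r % 3600))
m=$((r / 60));       s=$((r % 60))
dur="${d}d ${h}h ${m}m ${s}s"
echo "$dur"    # 1d 1h 1m 1s
```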
On Thu, 2008-10-30 at 13:08 -0700, Alan Hodgson wrote:
On Thursday 30 October 2008, Joao Ferreira gmail
During restore:
# vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 3  1 230204
Hello all,
I've been trying to speed up the restore operation of my database without
success.
I have a 200MB dump file obtained with 'pg_dumpall --clean --oids'.
After restore it produces a database with one single table (1,000,000
rows). I have also some indexes on that table. that's it.
It
On Thu, 2008-10-30 at 11:39 -0700, Alan Hodgson wrote:
On Thursday 30 October 2008, Joao Ferreira gmail
[EMAIL PROTECTED] wrote:
What other cfg parameters should I touch?
work_mem set to most of your free memory might help.
I've raised work_mem to 128MB.
I still get the same 20 minutes
Hello Eduardo
On Tue, 2008-10-14 at 15:40 -0500, Eduardo Arévalo wrote:
I installed the 8.3 postgres
by giving the command:
bash-3.2$ /usr/local/postgres_8.3/bin/initdb -D /base/data
that command only initializes the underlying filesystem database files,
directories and
Hello all,
I need to print to a file a simple list of all the databases on my
postgresql.
I need to do this from a shell script to be executed without human
intervention
I guess something like:
su postgres -c 'psql ...whatever /tmp/my_databases.txt'
but I don't know exactly to what
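One way to script it; the real invocation is commented out, since it assumes a local server and passwordless (ident/trust) access for the postgres OS user:

```shell
# Unattended sketch: dump the database names to a file via the catalog.
list_cmd="psql -At -c 'SELECT datname FROM pg_database ORDER BY 1;'"
#   su postgres -c "$list_cmd" > /tmp/my_databases.txt
echo "$list_cmd"
```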
Hello all,
I have an ASCII dump file based on the COPY operation.
let's say I restore this dump into a live database with applications
doing INSERTs and UPDATEs on it.
in case the COPY of a record causes a primary key (or UNIQUE, or FK)
violation, does the psql restore command try to continue
thank you depesz
it seems a pretty good fix for my problem. Actually yesterday I came up
with something similar but yours is better.
cheers
joao
On Tue, 2008-09-23 at 09:26 +0200, hubert depesz lubaczewski wrote:
On Mon, Sep 22, 2008 at 05:59:25PM +0100, Joao Ferreira gmail wrote:
I'm
hello all,
I'm unable to build a LIKE or SIMILAR TO expression for matching an IP
address
192.168.90.3
10.3.2.1
any help please...
thanks
joao
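A POSIX regex does the trick; the same pattern works with grep -E in the shell and with Pg's ~ operator. A quick sketch, testable without a server:

```shell
# Loose dotted-quad pattern (it also accepts e.g. 999.999.999.999); in Pg
# the equivalent test is:  value ~ '^([0-9]{1,3}\.){3}[0-9]{1,3}$'
ip_re='^([0-9]{1,3}\.){3}[0-9]{1,3}$'
m1=$(echo "192.168.90.3" | grep -Ec "$ip_re" || true)   # 1 -> an IP matches
m2=$(echo "joao"         | grep -Ec "$ip_re" || true)   # 0 -> a username does not
echo "$m1 $m2"
```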
well...
my IP addresses are stored in a TEXT type field. that field can actually
contain usernames like 'joao' or 'scott' and it can contain IP
addresses
:(
joao
On Mon, 2008-09-22 at 11:13 -0600, Scott Marlowe wrote:
On Mon, Sep 22, 2008 at 10:59 AM, Joao Ferreira gmail
[EMAIL
case.
any hints.
thx
j
On Sat, 2008-09-13 at 16:48 -0400, Robert Treat wrote:
On Thursday 11 September 2008 07:47:00 Joao Ferreira gmail wrote:
Hello all,
my application is coming to a point at which 'partitioning' seems to be
the solution for many problems:
- query speed up
Hello all,
my application is coming to a point at which 'partitioning' seems to be
the solution for many problems:
- query speed up
- data elimination speed up
I'd like to get a feeling for it by talking to people who use
partitioning, in general..
- good, bad,
- hard to manage, easy to
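For reference, on the 8.x series 'partitioning' means table inheritance plus CHECK constraints and constraint_exclusion; a minimal sketch with made-up table names, printed rather than executed here:

```shell
# Inheritance-based partitioning sketch for 8.x (hypothetical names).
part_sql=$(cat <<'SQL'
CREATE TABLE measurements (ts integer NOT NULL, var text);
CREATE TABLE measurements_2008_09 (
    CHECK (ts >= 1220227200 AND ts < 1222819200)  -- Sep 2008, UTC epoch
) INHERITS (measurements);
-- data elimination speed up: DROP TABLE measurements_2008_09;
SET constraint_exclusion = on;  -- planner skips non-matching children
SQL
)
echo "$part_sql"
```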
Is there a date for the release of 8.4 ?
joao
On Thu, 2008-09-04 at 10:09 -0400, Alvaro Herrera wrote:
paul tilles wrote:
Where can I find a list of changes for Version 8.4 of postgres?
It's not officially written anywhere. As a starting point you can look
here:
Hello all,
in which system tables can I find the effective run-time values of the
autovacuum configuration parameters...
naptime, thresholds, scale factors, etc
thx
joao
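The effective values are exposed through the pg_settings view; per-table overrides, if any, live in the pg_autovacuum catalog on the 8.x series. A sketch (the psql call is commented out since it assumes a running server):

```shell
# pg_settings holds the effective run-time autovacuum values.
av_sql="SELECT name, setting FROM pg_settings WHERE name LIKE 'autovacuum%';"
#   psql -Atc "$av_sql"
echo "$av_sql"
```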
Hello all,
I'm getting this error after installing pg-8.3.3 on a test system which
had 8.1.4:
shell> su postgres -c '/usr/bin/postmaster -D /var/pgsql/data'
FATAL: database files are incompatible with server
DETAIL: The data directory was initialized by PostgreSQL version 8.1,
which is not
On Thu, 2008-08-28 at 19:53 +0800, Phoenix Kiula wrote:
On our database of about 5GB we vacuum all of our 12 tables (only one
is huge, all others have about 100,000 rows or so) every hour or so.
if you refer to manual VACUUM or VACUUM FULL, every hour is probably too
much. You should aim your
, Bill Moran wrote:
In response to Joao Ferreira gmail [EMAIL PROTECTED]:
On Thu, 2008-08-28 at 19:53 +0800, Phoenix Kiula wrote:
On our database of about 5GB we vacuum all of our 12 tables (only one
is huge, all others have about 100,000 rows or so) every hour or so.
if you refer
Any suggestions? Is my procedure correct? Would I need to also copy
the transaction logs or something like that?
the 'by the book' procedure for this operation is to use
pg_dumpall > dump_file.sql
and later
psql -f dump_file.sql postgres
pg_dumpall gives you a transaction
Hello all
While debugging my autovacuum I increased the level of logging to
debug3 and got this:
# cat /var/pgsql/data/logfile | grep vac | egrep "mydb|mytable"
LOG: autovacuum: processing database mydb
DEBUG: mytbl: vac: 10409 (threshold 20), anl: -183366 (threshold
5)
LOG:
Hello all,
a few days ago I bumped into this:
-
# vacuumdb -f -z -a
vacuumdb: vacuuming database postgres
VACUUM
vacuumdb: vacuuming database rtdata
vacuumdb: vacuuming of database rtdata failed: ERROR: failed to
re-find parent key in
Hello all,
I have a big database in which much information is stored in TEXT type
columns (I did this initially because I did not want to limit the
maximum size of the string to be stored)... but...
.. let's say I choose an upper limit (e.g. 200) for the string sizes
and I start a fresh
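If the goal is only to enforce a maximum length: since 8.0 a column can be narrowed in place with ALTER TABLE ... TYPE, which fails if any existing value exceeds the limit. Note that text and varchar(n) are stored identically in Pg, so no disk space is gained. A sketch with hypothetical table/column names (the psql call assumes a live server, hence commented out):

```shell
# Narrow a TEXT column to varchar(200) in place (hypothetical names).
alter_sql="ALTER TABLE mytable ALTER COLUMN msg TYPE varchar(200);"
#   psql -d mydb -c "$alter_sql"
echo "$alter_sql"
```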
Because VACUUM FULL needs to move stuff around in the table, which means it
needs to mess around with the indexes (adding new entries). Ordinary
VACUUM only needs to delete stuff, so it doesn't cause anywhere near as
many problems.
so in the event that I really end up running VACUUM FULL once
| 3673 kB
edgereporting | 3617 kB
template1 | 3617 kB
template0 | 3537 kB
(7 rows)
postgres=#
V.
Joao Ferreira gmail wrote:
Hello all,
I'm finding it very strange that my pg takes 9Giga on disk but
pg_dumpall produces a 250Mega dump
)
timeslots_timestamp_index btree (timestamp)
timeslots_var_index btree (var)
egbert=#
On Mon, 2008-08-11 at 12:45 -0400, Greg Smith wrote:
On Mon, 11 Aug 2008, Joao Ferreira gmail wrote:
I'm finding it very strange that my pg takes 9Giga
On Mon, 2008-08-11 at 12:45 -0400, Greg Smith wrote:
On Mon, 11 Aug 2008, Joao Ferreira gmail wrote:
I'm finding it very strange that my pg takes 9Giga on disk but
pg_dumpall produces a 250Mega dump. 'VACUUM FULL' was executed
yesterday.
If you've been running VACUUM FULL, it's
On Mon, 2008-08-11 at 10:58 -0600, Scott Marlowe wrote:
It's likely you've got index bloat. If you reload a pg_dump of the
database in question into another server how much space does that take
up?
right. just loaded the dump into a clean database and everything came
down about 10 times...
Hello all
[[[ while dealing with a disk size problem I realised my REINDEX cron
script was not really being called every week :( so... ]]]
I executed REINDEX by hand and the disk occupation immediately dropped 6
Giga...!!!
is there a way to configure postgres to automatically execute the
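There is no built-in scheduler for REINDEX in the 8.x series; cron is the usual answer. A sketch of a crontab entry (binary path, log path and the Sunday-03:00 schedule are all assumptions):

```shell
# crontab fragment: run "crontab -e" as the postgres user and add, e.g.:
# 0 3 * * 0  /usr/bin/reindexdb --all >> /var/log/pg_reindex.log 2>&1
```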
On Tue, 2008-08-12 at 11:53 -0400, Tom Lane wrote:
BTW, more aggressive routine vacuuming does NOT mean use vacuum
full.
Vacuum full tends to make index bloat worse, not better.
regards, tom lane
Ok. so what does it mean ?
I'm a bit lost here. I'm currently
Hi guys,
I found the reason for this whole problem.
explanation: vacuum reindex cron scripts were not being executed.
I executed the operations by hand and the values became normal.
thank you all for the fine discussion.
joao
On Tue, 2008-08-12 at 13:49 +0200, Tommy Gildseth wrote:
Joao
Hello all,
could you please recommend tools to make diagnostic, admin and
maintenance work easier...
I imagine there are tools (maybe graphical, or browser based) that allow
me to connect to postgres and receive diagnostic data and
pointers/recommendations on how to solve specific problems or
Hello all,
I'm finding it very strange that my pg takes 9Giga on disk but
pg_dumpall produces a 250Mega dump. 'VACUUM FULL' was executed
yesterday.
Is this normal ? Should I be worried ?
details below:
Hello all,
we are using PostgreSQL in a situation where I think we could try and
run two separate instances:
- 1 application can grow up to 10 Giga of data; needs 'frequent' vacuuming
and re-indexing (lots of inserts, updates and deletes per minute): we
use it for 'near-real-time' application logs