Martijn van Oosterhout writes:
It's still a reasonable suggestion. The maximum offset is the number of
rows in the table. You'll notice when the output is empty.
Once I find the point where the output is empty, then what?
Do you have
an idea how much data it contains?
Yes. Around 87
Looking at the archives seems to indicate that missing pg_clog files are
a symptom of row or page corruption.
In an old thread from back in 2003 Tom Lane recommended
(http://tinyurl.com/jushf):
If you want to try to narrow down where the corruption is, you can
experiment with commands like
select
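The quoted command is cut off here, but the technique being described is to bisect the table with OFFSET/LIMIT until the damaged rows are isolated. A minimal sketch of that idea (table name and row counts are hypothetical, not from the original post):

```sql
-- If the first half succeeds, the damage is in the second half; recurse.
SELECT * FROM damaged_table OFFSET 0     LIMIT 50000;  -- first half
SELECT * FROM damaged_table OFFSET 50000 LIMIT 50000;  -- second half
-- Repeat with ever-smaller windows until the failing range is pinned down.
```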
Jaime Casanova writes:
so you want a different logfile for every database you connect to?
An option to specify a log per database.
where will you log database-shared operations like autovacuum, role
creation, maybe even database creation, tablespace creation, etc.?
In a global
I am currently using a log with the file name format:
log_filename = 'postgresql-%Y-%m.log'
Is there any way to change the filename to start with the database name?
For now I just added the database name to each line, but it would be
useful to have each DB written to its own file. Or
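There is no per-database log_filename, but tagging each line with the database name can be done in postgresql.conf with the %d escape of log_line_prefix (available in the 8.0-era servers under discussion). A sketch:

```
# postgresql.conf
log_filename    = 'postgresql-%Y-%m.log'
log_line_prefix = '%t %d '   # %t = timestamp, %d = database name
```

Lines can then be split into per-database files after the fact by filtering on that prefix.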
Chris writes:
SELECT * from pg_locks ;
And this is per DB right?
No, this is per system.
On a DB doing no/little work I always see two records returned. One has a
value in the 'database' column. How can I find what database it is?
Looking for it in pg_database did not yield any databases
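The database column of pg_locks holds the database's OID, not its name, which is why eyeballing pg_database may not turn up a match. A sketch of the join that resolves it:

```sql
SELECT d.datname, l.*
FROM pg_locks AS l
JOIN pg_database AS d ON d.oid = l.database;
```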
Tom Lane writes:
CREATE INDEX shouldn't block any concurrent SELECT, regardless of which
index AM is involved.
The problem was that the table needed a vacuum full. It was a large table
on which a massive update had been done. It is not that it was blocked, but
that it was just taking a very long
Chris writes:
Is there a way to tell what tables have locks on them?
SELECT * from pg_locks ;
(version 7.4 and above at least, don't have an install earlier than that).
And this is per DB right?
Any way to tell locks in all DBs?
In particular if planning to bounce back the DB would be
The release notes for 8.1, http://www.postgresql.org/docs/whatsnew, state
that the GiST indexing mechanism has been improved to support high-speed
concurrency, recoverability and update performance.
As I write this I am creating an index with GiST, and a select I tried
on the table froze.
I have a test .sql file of the form:
insert into testtable values ('1');
insert into testtable values ('2');
insert into testtable values ('3');
100 Million
Right before I call the file with \i I do a begin transaction.
At some point during the load the process stops.
After some 5+ minutes
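For a load of this size, a single transaction around individual INSERTs works, but COPY is usually far faster. A sketch, with a hypothetical data-file path:

```sql
BEGIN;
\i test.sql          -- the generated INSERT file
COMMIT;

-- Or, much faster for bulk data (server-side path; needs superuser):
COPY testtable FROM '/tmp/testdata.txt';
```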
Tom Lane writes:
I don't have the patience to run this for 10^8 rows, but the test case
I got suspicious of my 'test' file so I took 1000 rows. That had problems
and pointed out problems with the file.
It seems I had mismatched single quotes.. my guess is that psql got confused
and went
What resource do I need to increase to avoid the error above?
Trying to do a straight select against a table with 6 million records.
So far I have tried increasing SHMMAX to 512MB
---(end of broadcast)---
TIP 6: explain analyze is your friend
[EMAIL PROTECTED] writes:
Usage is to match data from the key and val tables to fetch the data
value from the sid table.
What is the relation between key and val tables?
Will key.id and val.id be equal?
I have never quite fully understood the output of analyze, but I wonder why
you have:
Merlin Moncure writes:
escalade is a fairly full featured raid controller for the price.
consider it the ford taurus of raid controllers, it's functional and
practical but not sexy. Their S line is not native sata but operates
over a pata-sata bridge. Stay away from raid 5.
Do you know if
Merlin Moncure writes:
there are reasons to go with raid 5 or other raids. where I work we
often do 14 drive raid 6 plus 1 hot swap on a 15 drive tray.
RAID 5 is different from RAID 6. To say that there are times it's OK to
use RAID 5, and then say you use RAID 6... well... doesn't really
[EMAIL PROTECTED] writes:
I have made some minor changes and sped things up to around 15-20
lookups/sec; good enough, but not exciting :-)
hmm, let me understand this.
You went from 1 query in 3 to 4 seconds to 45 to 60 queries in the same
amount of time... 45 to 60 times faster... and that
Alex Turner writes:
Suggests that the 9550SX is at least competitive with the others.
Thanks for the links.
I know I like the 3ware/AMCC cards because of their very good RAID 10
performance.
Raid 10 is what I used on my last server and likely what I will use on the
next.
I wish we
Emi Lu writes:
One more thing to consider. If you have a column with lots of repeated
values and a handful of selective values, you could use a partial index.
http://www.postgresql.org/docs/8.0/interactive/indexes-partial.html
For example imagine you have an accounts table like
Accounts
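The accounts example is cut off above; a minimal sketch of a partial index along the lines described (the column name and predicate are hypothetical):

```sql
CREATE TABLE accounts (id int, status varchar);

-- Index only the handful of selective values; rows with the very common
-- value ('closed' here) are left out of the index entirely.
CREATE INDEX accounts_open_idx ON accounts (status)
    WHERE status <> 'closed';
```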
Michael Fuhr writes:
On Sun, Dec 18, 2005 at 11:29:13PM -0500, Francisco Reyes wrote:
Any reason why a database would not get dumped by pg_dumpall?
Is there a way to check the successful completion of pg_dumpall?
Losing 3 databases is not an experience I want to repeat.
Perhaps
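One way to check for successful completion is to run pg_dumpall as a database superuser and test its exit status in the nightly job. A sketch (paths and user name are hypothetical):

```shell
#!/bin/sh
# Fail loudly instead of silently keeping a partial dump.
if pg_dumpall -U postgres > /backups/all.sql 2> /backups/all.err; then
    echo "pg_dumpall completed OK"
else
    echo "pg_dumpall FAILED; see /backups/all.err" >&2
    exit 1
fi
```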
Jaime Casanova writes:
- you still have the server where these databases exists?
No. I lost 3 databases.
- what version of pgsql, is this?
It was 8.0.4
I was upgrading to 8.1.
I checked the nightly jobs had been running, then ran a manual one and
proceeded to do the upgrade.
Michael Fuhr writes:
On Sun, Dec 18, 2005 at 11:29:13PM -0500, Francisco Reyes wrote:
Any reason why a database would not get dumped by pg_dumpall?
Always run pg_dumpall as the superuser.
As the operating system superuser or as a database superuser?
There's a difference.
As the database
Michael Fuhr writes:
On Sun, Dec 18, 2005 at 11:29:13PM -0500, Francisco Reyes wrote:
Any reason why a database would not get dumped by pg_dumpall?
Always run pg_dumpall as the superuser.
Researched what was lost. It seems that all databases after a particular
database, called test, were
Any reason why a database would not get dumped by pg_dumpall?
Always run pg_dumpall as the superuser.
I do a nightly dump and have checked several days so far and the database is
missing in all so far. :-(
The only thing, for that DB, that got backed up was the database, but not a
single
On Mon, 17 Oct 2005, Richard Huxton wrote:
At a guess, PHPWiki is using persistent connections to PG, so you'll get one
connection per Apache backend.
Thanks for the info. That may well be the problem.
Alternatively, a small change in PHPWiki's code should clear it too (start
with a search
Ever since I installed a particular program, PHPWiki, I am seeing idle
postgres sessions.. even days old. Is it safe to delete them?
For example:
postmaster: wiki simplicato_wiki [local] idle (postgres)
Ultimately I will either switch wiki or take the time and find the piece
of code that is
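Before killing anything, the idle backends can be listed from pg_stat_activity. A sketch using the 8.0-era column names (stats_command_string must be on for current_query to be populated):

```sql
SELECT procpid, datname, usename, current_query
FROM pg_stat_activity
WHERE current_query = '<IDLE>';
```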
On Wed, 30 Mar 2005, Carlos Roberto Chamorro Mostac wrote:
Hola a todos (Hello everyone)
Replied to the user off-list.
Isn't there a Spanish list?
Trying the following simple sql file:
\set proc_date 6/30/2004
\echo Date is :proc_date
select * from feeds where date = :proc_date limit 20;
If I start psql with the -a option I see the output:
\set proc_date 6/30/2004
\echo Date is :proc_date
Date is 6/30/2004
select * from feeds where date =
On Thu, 7 Oct 2004, Tom Lane wrote:
It's fairly painful to get single quotes into a psql variable;
AFAIK you have to do it like this:
\set proc_date '\'6/30/2004\''
Thanks that worked.
I figured I needed to escape the single quotes, but I had tried
\'6/30/2004\', which did not work.
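For reference, the working and failing forms side by side; the difference is that the whole value, backslash-escaped quotes included, must itself sit inside an outer quoted string so psql reads it as one token:

```sql
\set proc_date '\'6/30/2004\''   -- works: value becomes '6/30/2004'
\set proc_date \'6/30/2004\'     -- does not work as intended
SELECT * FROM feeds WHERE date = :proc_date LIMIT 20;
```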
Today I was using \e to edit the buffer, something I don't commonly do.
Somehow that screwed up my help file for psql.
When I do \?
instead of getting help for the slash commands I get
General
General
General
General
General
ull and not nosend;
Any thoughts?
The last line reads: ull and not nosend;
On Fri, 19 Mar 2004, Anton Nikiforov wrote:
Or were you talking about something else like storing different data in
different media speeds? (Like Hierarchical Storage Management)
I do not exactly know how to deal with such a huge amount of data. The disk
subsystem is a must and I do
I have a comment field in a table that I want populated if another field
has a certain value. Is it possible to set a check constraint for this?
Example:
Let's say we have fields
Purchase_type smallint check(purchase_type 4)
comment varchar
I need a check constraint to do something like (pseudo
On Wed, 17 Mar 2004, Stephan Szabo wrote:
Actually, shouldn't a table level check constraint be able to do this with
something like:
check (purchase_type!=3 or comment is not null)
That worked, Stephan.
Gregory, I think yours would work too. I saw Stephan's answer and tested
it before I saw your
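Putting the pieces of the thread together, a minimal sketch of the table (the table name is hypothetical and the bound on purchase_type is illustrative):

```sql
CREATE TABLE purchases (
    purchase_type smallint,
    comment       varchar,
    -- a comment is required whenever purchase_type is 3
    CHECK (purchase_type != 3 OR comment IS NOT NULL)
);
```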
On Tue, 10 Feb 2004, Tom Lane wrote:
Francisco Reyes [EMAIL PROTECTED] writes:
Is there a way to change a schema owner other than dump/restore?
How about changing the nspowner in pg_namespace? Will that do the trick
without any negative consequences?
I think that will work
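A sketch of the direct catalog update discussed (role and schema names are hypothetical; on later releases ALTER SCHEMA ... OWNER TO is the safer route):

```sql
-- Point the schema's nspowner at the new owner's sysid from pg_user.
UPDATE pg_namespace
   SET nspowner = (SELECT usesysid FROM pg_user WHERE usename = 'newowner')
 WHERE nspname = 'myschema';
```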
On Wed, 14 Jan 2004, Tom Lane wrote:
The line from the sql file that is failing, a dump, is
SELECT pg_catalog.setval('invoicesdetails_invoicesdetailsid_seq', 18,
true);
You have not given us any context to interpret this report in, but
I do not think it has anything to do with the
On Fri, 25 Jul 2003 [EMAIL PROTECTED] wrote:
The data will be stored on an external raid,
SCSI based 2.5TB with IDE disks. Configured as 1 large volume, RAID5. (
We already have this hardware)
How come you did not go with SCSI disks?
Especially 15K RPM ones.
Performance will be much better with
How does line completion get into psql?
At my FreeBSD machines when I build the PostgreSQL port I have always had
line completion. Now I need to do some work on a Linux SUSE machine (which
I don't administer) and psql doesn't have line completion.
The person that manages the machine installed from
Hi there,
I must migrate a PostgreSQL database that I created initially with
PostgreSQL 7.0 on Linux to a system that has PostgreSQL 6.5.3 running on
FreeBSD. I have no chance to update the PostgreSQL engine to a newer
version, so the only way
unsubscribe
Daniel Francisco Sachet
Director of Information Technology
IFX do Brasil - www.ifx.com.br
+55 11 3365-5860
+55 11 9119-0083
[EMAIL PROTECTED]
On Fri, 2 Feb 2001, Michael Miyabara-McCaskey wrote:
Francisco,
Excellent idea.
Thanks for the info. Is this what you are doing now? And if so, since the
current version does not appear to have replication, have you found a
workaround?
-Michael
I am new to PostgreSQL so I haven't
On Sun, 4 Feb 2001, Boris wrote:
That sounds good; the only question left is the memory requirement of
Apache per client. I do not completely understand the process-spawning
behavior of Apache. Under high load there are always a minimum of 10
processes left, but where is the limit? Interesting thing.
I would
What is this?
ERROR: index_formtuple: data takes 16468 bytes: too big
when I try VACUUM my_table;
Can somebody help?
[]'s
Eriko
does postgresql compile and run on macosx server?
if so, does anyone know if binaries for it exist?
thanks..
Does PostgreSQL 6.5.2 have a record limit?
I've read a few posts that say older versions (6.3.x) had a limit of 8k,
and they said it would change soon. Does this limit still exist?
Thanks.