My PG server is still going down. After spending the weekend doing a
CLUSTER of my largest table (it's a RAID 1 system with SATA hard disks
and 4 GB memory, mostly devoted to PG) I still have this issue.
When I do a top command, 99% of the CPU and about 15% of the memory
is being taken by PG.
= on
log_min_duration_statement = 1000
Much appreciate any further ideas!
On Sun, Aug 3, 2014 at 9:29 PM, Adrian Klaver adrian.kla...@aklaver.com wrote:
On 08/02/2014 07:37 PM, Phoenix Kiula wrote:
In your original post you said it was stopping on pg_class so now I am
confused.
No need
Thanks...comments below.
assuming you installed 9.0 from the yum.postgresql.com repositories, then
`yum update postgresql90-server` and restarting the postgresql-9.0 service
should do nicely.
This worked. Took me to 9.0.17 for some reason. I'm OK with this.
But the vacuum full was a
Hi. I've been patient. PG is 9.0.17, updated via Yum yesterday.
One of my large tables (101 GB on disk, about 1.1 billion rows) used
to take too long to vacuum. Not sure if it's an index corruption
issue. But I tried VACUUM FULL ANALYZE as recommended in another
thread yesterday, which took 5
(iostat excerpt; the first device line is cut off)
...                                    30101692    46962204
sda8      2.77    44.88    20.07      86661146    38754800
sda9    267.11 43478.67  4603.61   83952607992  8889065916
On Sun, Aug 3, 2014 at 9:56 AM, John R Pierce pie...@hogranch.com wrote:
On 8/2/2014 6:20 PM, Phoenix Kiula wrote:
PS: CentOS 6 64 bit
In your original post you said it was stopping on pg_class so now I am
confused.
No need to be confused. The vacuum thing is a bit tricky for laymen
like myself. The pg_class entry seemed to be associated with this table.
Anyway, even before the upgrade, the vacuum was stopping at this table
and
Hello,
I have a Postgresql install from a few years ago: 9.0.11.
During vacuum the server basically slows to a crawl. While googling
for this (it stops at pg_class forever) I saw Tom Lane suggest
upgrading.
So now I must. In doing so, can I follow these instructions?
Thank you John.
you can upgrade to 9.0.18 painlessly. 9.1, 9.2, or 9.3: not quite so
painless.
What's the best way to get to 9.0.18, as a start? Is there a simple
single command I can use? I'm on CentOS 6, 64bit.
have you tried a vacuum full of the whole cluster, with your applications
Hi. This is hard to find in the documentation: how should I alter a
table to REMOVE the ON DELETE CASCADE constraint? Thanks.
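For what it's worth, a sketch of the usual approach (untested here; the constraint and table names are illustrative, echoing the fk_table2/table1 example elsewhere in these threads). In 9.0 there is no way to change the ON DELETE action in place, so you drop the foreign key and re-add it without CASCADE:

```
-- Find the constraint name first, if unsure:
-- SELECT conname FROM pg_constraint WHERE conrelid = 'table2'::regclass;

BEGIN;
ALTER TABLE table2 DROP CONSTRAINT fk_table2;
ALTER TABLE table2
  ADD CONSTRAINT fk_table2
  FOREIGN KEY (id) REFERENCES table1 (id);  -- defaults to ON DELETE NO ACTION
COMMIT;
```

Doing both statements in one transaction keeps the table constrained throughout.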
On Thu, Oct 4, 2012 at 2:50 AM, Wolf Schwurack w...@uen.org wrote:
I use pgpool, but some of the problems you listed are the same as I had with pgpool
Thanks Wolf, for the thoughts.
I would not run pgbouncer in /var/run/pgbouncer. Every time you reboot the
directory will get deleted. I set my
On Sat, Oct 6, 2012 at 12:01 AM, Wolf Schwurack w...@uen.org wrote:
You need to have a pgbouncer directory in /var/log owned by
pgbouncer. This is easy to test: try creating a file in /var/log as the user
pgbouncer. It should fail because pgbouncer does not have write
On Sat, Oct 6, 2012 at 10:07 AM, Adrian Klaver adrian.kla...@gmail.com wrote:
...snip...
What are the contents of your pgbouncer.ini file?
Thanks Adrian.
I mentioned the full ini file details above in the thread, but here
they are again. (Please do not comment about port numbers. This is a
On Sat, Oct 6, 2012 at 10:24 AM, Adrian Klaver adrian.kla...@gmail.com wrote:
One thing I see above:
http://pgbouncer.projects.postgresql.org/doc/config.html
\* acts as fallback database
Notice the backslash.
Ok, but:
(1) The exact same INI file was working so far.
(2) Why do I need a
On Tue, Oct 2, 2012 at 5:27 PM, Phoenix Kiula phoenix.ki...@gmail.com wrote:
On Tue, Oct 2, 2012 at 11:29 AM, Phoenix Kiula phoenix.ki...@gmail.com
wrote:
On Tue, Oct 2, 2012 at 12:59 AM, Phoenix Kiula phoenix.ki...@gmail.com
wrote:
Could you please check permission of /var/run/pgbouncer
On Tue, Oct 2, 2012 at 11:29 AM, Phoenix Kiula phoenix.ki...@gmail.com wrote:
On Tue, Oct 2, 2012 at 12:59 AM, Phoenix Kiula phoenix.ki...@gmail.com
wrote:
Could you please check permission of /var/run/pgbouncer/ directory. If
pgbouncer directory does not have postgres user permissions, please
Hi,
- PG 9.0.10
- Pgbouncer version 1.4.2
Not long ago, during the last server reboot for us, we had fixed the
really painful (and largely mysterious) process of setting up
pgbouncer.
File permissions and other mysteries were solved with help from Raghavendra:
Could you please check permission of /var/run/pgbouncer/ directory. If
pgbouncer directory does not have postgres user permissions, please assign
it and then start the pgbouncer.
The /var/run/pgbouncer/ directory has
chown -R postgres:postgres ..
The port number everywhere is already
On Tue, Oct 2, 2012 at 12:59 AM, Phoenix Kiula phoenix.ki...@gmail.com wrote:
Could you please check permission of /var/run/pgbouncer/ directory. If
pgbouncer directory does not have postgres user permissions, please assign
it and then start the pgbouncer.
The /var/run/pgbouncer/ directory
On Fri, Apr 13, 2012 at 2:59 PM, Raghavendra
raghavendra@enterprisedb.com wrote:
.
Add it in pgbouncer.auth file as per your .ini file parameter.
auth_file = /var/lib/pgsql/pgbouncer.txt
Seems you already did this. I believe you are connecting as postgres user
not from
I had pgbouncer working somehow, but we have switched servers recently
and now I cannot for the life of me figure out again how to set it up.
Online guides say things like create a user ID. Well, where? Inside
PG the database? Or in my CentOS system?
Here's my /etc/pgbouncer.ini:
On Fri, Apr 13, 2012 at 2:41 AM, Scott Marlowe scott.marl...@gmail.com wrote:
On Thu, Apr 12, 2012 at 11:12 AM, Phoenix Kiula phoenix.ki...@gmail.com
wrote:
WARNING: password file /root/.pgpass has group or world access;
permissions should be u=rw (0600) or less
psql: ERROR
Hi. Hope I'm articulating the question correctly.
I currently have a foreign key like this:
fk_table2 FOREIGN KEY (id) REFERENCES table1(id) ON DELETE CASCADE;
Given the peculiar needs of this db, it seems that in some cases we
will need to track the id of table2 against a concatenation of
On Fri, Jan 6, 2012 at 10:38 PM, John R Pierce pie...@hogranch.com wrote:
you should check your attitude at the door. this isn't Microsoft Pay per
Incident Tech Support.
I saw the door. I also found some other attitudes that were let
in, like asking me to write my own patch. You see,
Hi. I'm using Postgresql 9.0.5, and the connection is made via pgbouncer.
I create a temporary table, and then want to import data into it
via a COPY command. Yet this just-created table is not
recognized. What's up?
From my terminal:
mydb=#
mydb=# create temporary table vl (
On Fri, Jan 6, 2012 at 11:46 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Phoenix Kiula phoenix.ki...@gmail.com writes:
Hi. I'm using Postgresql 9.0.5, and the connection is made via pgbouncer.
Perhaps pgbouncer is redirecting the second command to a different
session?
Thanks Tom. I'm
On Fri, Jan 6, 2012 at 4:24 PM, Steve Crawford
scrawf...@pinpointresearch.com wrote:
On 01/06/2012 01:11 PM, Phoenix Kiula wrote:
On Fri, Jan 6, 2012 at 11:46 AM, Tom Lanet...@sss.pgh.pa.us wrote:
Phoenix Kiulaphoenix.ki...@gmail.com writes:
Hi. I'm using Postgresql 9.0.5
On Fri, Jan 6, 2012 at 6:20 PM, Adrian Klaver adrian.kla...@gmail.com wrote:
http://www.postgresql.org/docs/9.0/interactive/sql-copy.html
Search for
NULL
Thanks Adrian.
Without examples, it's hard to predict syntax. If the value after a
pipe is missing altogether, I suppose the missing
On Fri, Jan 6, 2012 at 6:53 PM, Steve Crawford
scrawf...@pinpointresearch.com wrote:
On 01/06/2012 03:42 PM, Phoenix Kiula wrote:
On Fri, Jan 6, 2012 at 6:20 PM, Adrian Klaveradrian.kla...@gmail.com
wrote:
http://www.postgresql.org/docs/9.0/interactive/sql-copy.html
Search for
NULL
On Fri, Jan 6, 2012 at 6:54 PM, Adrian Klaver adrian.kla...@gmail.com wrote:
Try:
copy vl from 'data.txt' WITH CSV DELIMITER '|';
Doesn't work. I can't see how CSV differs from a plain text file here.
The same errors are thrown.
If that doesn't work take a look at pgloader:
On Sun, Dec 4, 2011 at 7:41 PM, Phoenix Kiula phoenix.ki...@gmail.com wrote:
On Sun, Dec 4, 2011 at 7:14 PM, Alban Hertroys haram...@gmail.com wrote:
On 4 Dec 2011, at 11:19, Phoenix Kiula wrote:
INSERTs in the parent table don't need to check for any reference from the
child table
Hi.
I have a foreign key as such:
ALTER TABLE child_table
ADD CONSTRAINT fk_child
FOREIGN KEY (stringid) REFERENCES parent_table (stringid) MATCH FULL
ON DELETE CASCADE ;
Questions:
1. Is MATCH FULL adding any value here? If the foreign key is just
on an id column, what purpose does it
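As far as I understand (worth verifying against the docs): with a single referencing column, MATCH FULL and the default MATCH SIMPLE behave identically, so it adds nothing here. The difference only appears with multi-column keys, as a sketch:

```
-- Single column: these two declarations behave the same.
ALTER TABLE child_table
  ADD CONSTRAINT fk_child
  FOREIGN KEY (stringid) REFERENCES parent_table (stringid)
  ON DELETE CASCADE;   -- MATCH SIMPLE is the implicit default

-- MATCH FULL matters only for composite keys, where it rejects
-- rows that are partly NULL (e.g. a IS NULL while b IS NOT NULL):
-- FOREIGN KEY (a, b) REFERENCES parent (a, b) MATCH FULL
```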
Hi.
I have deleted a row from a table, confirmed by SELECT. None of the
associated child tables has this key value either.
Yet, when I insert this row back again, the primary key index on this
table gives me a duplicate error.
As demonstrated below. PGSQL version is 9.0.5.
Is this common? I
On Sun, Dec 4, 2011 at 7:14 PM, Alban Hertroys haram...@gmail.com wrote:
On 4 Dec 2011, at 11:19, Phoenix Kiula wrote:
INSERTs in the parent table don't need to check for any reference from the
child table, since they're new; there can't be a reference. UPDATEs and
DELETEs do though
On Sun, Dec 4, 2011 at 7:55 PM, Szymon Guz mabew...@gmail.com wrote:
.
and then show us the whole table structure, especially any rules or
triggers.
Not many rules or triggers. See below.
I ran a REINDEX on the index allegedly being violated, and it finished
in 30 mins or so, but still
Hi.
(My pgbouncer is finally working and has resulted in at least a 3-fold
site speed increase! YAY! Thanks to everyone who helped.)
Now, a new small problem.
In my PHP code I have a condition that checks for the existence of a
record, and if not found, it INSERTs a new one.
Here's the first
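Since this is 9.0 (no INSERT ... ON CONFLICT until 9.5), the race-free way to do check-then-insert is the retry loop from the PostgreSQL documentation. A sketch with hypothetical table and column names ("records(key, val)" with a primary key on "key"):

```
CREATE OR REPLACE FUNCTION upsert_record(k bigint, v text) RETURNS void AS
$$
BEGIN
    LOOP
        UPDATE records SET val = v WHERE key = k;
        IF found THEN
            RETURN;
        END IF;
        BEGIN
            INSERT INTO records (key, val) VALUES (k, v);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- another session inserted concurrently; loop and retry the UPDATE
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```

A plain SELECT-then-INSERT in PHP can fail under concurrency: two requests can both see "not found" and both INSERT, and one hits a duplicate-key error.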
On Thu, Nov 24, 2011 at 10:42 AM, Adrian Klaver adrian.kla...@gmail.com wrote:
Also, how can I tell the pgbouncer log not to log successful connections
and disconnections? Right now it's filling up with noise. I only
want it to log warnings or errors.
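If the pgbouncer version in use supports them (worth checking against its config docs), the connection-logging switches live in the [pgbouncer] section of pgbouncer.ini; a sketch:

```ini
; [pgbouncer] section of pgbouncer.ini
log_connections = 0      ; stop logging every successful connection
log_disconnections = 0   ; and every clean disconnect
log_pooler_errors = 1    ; keep errors in the log
```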
On Mon, Nov 21, 2011 at 6:14 PM, Tomas Vondra t...@fuzzy.cz wrote:
.
An index on (a, b) can be used for queries involving only a but not for
those involving only b.
That is not true since 8.2 - a multi-column index may be used even for
queries without conditions on leading columns. It
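A sketch of the distinction being made (table and column names hypothetical):

```
CREATE INDEX idx_ab ON t (a, b);

-- Efficient: the condition is on the leading column.
-- SELECT * FROM t WHERE a = 1;

-- Since 8.2 the planner *can* still use idx_ab here, but it has to
-- scan the whole index, so a dedicated index on (b) is usually faster:
-- SELECT * FROM t WHERE b = 2;
```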
On Wed, Nov 23, 2011 at 6:13 AM, Alex Thurlow alex-repo...@blastro.com wrote:
On 11/22/2011 3:28 PM, Merlin Moncure wrote:
..
How long is this backup taking? I have a ~100GB database that I back up
with pg_dump (which compresses as it dumps if you want it to) and that only
takes 35
On Thu, Nov 24, 2011 at 11:53 PM, Benjamin Henrion b...@udev.org wrote:
On Thu, Nov 24, 2011 at 4:49 PM, Phoenix Kiula phoenix.ki...@gmail.com
wrote:
On Wed, Nov 23, 2011 at 6:13 AM, Alex Thurlow alex-repo...@blastro.com
wrote:
On 11/22/2011 3:28 PM, Merlin Moncure wrote:
..
How long
On Tue, Nov 22, 2011 at 2:13 AM, Steve Crawford
scrawf...@pinpointresearch.com wrote:
..
The information in the pgbouncer pseudo-database is helpful, here (psql -U
youradminuser -h 127.0.0.1 pgbouncer).
Thanks, I finally got it connecting.
Where's the pgbouncer database? Do I need to
On Thu, Nov 24, 2011 at 9:18 AM, Phoenix Kiula phoenix.ki...@gmail.com wrote:
On Tue, Nov 22, 2011 at 2:13 AM, Steve Crawford
...
Thanks, I finally got it connecting.
Where's the pgbouncer database? Do I need to install it? It's not
installed. (How else should I tell the load and utilization
On Sun, Nov 20, 2011 at 4:49 PM, John R Pierce pie...@hogranch.com wrote:
On 11/19/11 11:42 PM, Phoenix Kiula wrote:
does this firewall block localhost at all? many don't. (I'm not at all
familiar with this CSF/LFD thing)
if you enable a port for TCP_IN, does it automatically allow replies
On Sun, Nov 20, 2011 at 7:43 PM, Scott Mead sco...@openscg.com wrote:
On Sun, Nov 20, 2011 at 6:21 AM, Marko Kreen mark...@gmail.com wrote:
On Sun, Nov 20, 2011 at 8:32 AM, Phoenix Kiula phoenix.ki...@gmail.com
wrote:
The password I am entering in the terminal is right for sure. I've
On Sun, Nov 20, 2011 at 7:52 PM, Phoenix Kiula phoenix.ki...@gmail.com wrote:
On Sun, Nov 20, 2011 at 7:43 PM, Scott Mead sco...@openscg.com wrote:
On Sun, Nov 20, 2011 at 6:21 AM, Marko Kreen mark...@gmail.com wrote:
On Sun, Nov 20, 2011 at 8:32 AM, Phoenix Kiula phoenix.ki...@gmail.com
On Sun, Nov 20, 2011 at 2:11 PM, Tomas Vondra t...@fuzzy.cz wrote:
Dne 18.11.2011 13:30, Phoenix Kiula napsal(a):
Full DB: 32GB
The big table referenced above: 28 GB
It's inserts into this one that are taking time.
Hm, in that case the shared_buffers is probably too low. It'd be nice
Hi.
Want to start another thread, loosely related to the performance
problems thread I have going.
Need some DB design guidance from the gurus here.
My big table now has about 70 million rows, with the following columns:
alias | character varying(35)
url | text
On Sun, Nov 20, 2011 at 9:33 PM, Phoenix Kiula phoenix.ki...@gmail.com wrote:
Hi.
Want to start another thread, loosely related to the performance
problems thread I have going.
Need some DB design guidance from the gurus here.
My big table now has about 70 million rows, with the following
On Mon, Nov 21, 2011 at 4:13 AM, Adrian Klaver adrian.kla...@gmail.com wrote:
On Sunday, November 20, 2011 3:56:18 am Phoenix Kiula wrote:
Any ideas?
Just to add, the connection string I try for pgbouncer is EXACTLY the
same as the one I use to connect directly to PG, but I add the port
On Mon, Nov 21, 2011 at 10:18 AM, Tomas Vondra t...@fuzzy.cz wrote:
Dne 20.11.2011 03:33, Amitabh Kant napsal(a):
On Sun, Nov 20, 2011 at 4:14 AM, Phoenix Kiula phoenix.ki...@gmail.com
Does the auth file (/var/lib/pgsql/pgbouncer.txt) contain valid login
credentials to your database? If I
On Mon, Nov 21, 2011 at 7:26 AM, Gavin Flower
gavinflo...@archidevsys.co.nz wrote:
How about having 2 indexes: one on each of ip and url_md5? Pg will combine the
indexes as required, or will just use one if that is best.
Thanks Gavin. Question: what if I have a joined index? If from a
joined
On Mon, Nov 21, 2011 at 10:54 AM, Tomas Vondra t...@fuzzy.cz wrote:
Dne 21.11.2011 02:44, Phoenix Kiula napsal(a):
Thanks Tomas and everyone.
I have the following passwords:
1. Pgbouncer.ini file
[databases]
MYDB = host=127.0.0.1 dbname=MYDB user=MYDB_MYDB client_encoding=utf8
port=5432
1. Do I need to set up the /etc/pgbouncer.ini.rpmnew as
/etc/pgbouncer.ini and then change settings in it? What do I change?
How? The FAQ is super geeky and unhelpful, as is the sparse info on
the PG wiki on pgbouncer. How can I tune pgbouncer settings?
Just a quick update. By googling for
On Sun, Nov 20, 2011 at 2:13 AM, Adrian Klaver adrian.kla...@gmail.com wrote:
http://pgbouncer.projects.postgresql.org/doc/config.html
I have never used pgbouncer, but from above it would seem you need to set up a
[databases] section to tie pgbouncer to the Postgres server.
See:
SECTION
On Sun, Nov 20, 2011 at 2:39 AM, Phoenix Kiula phoenix.ki...@gmail.com wrote:
On Sun, Nov 20, 2011 at 2:13 AM, Adrian Klaver adrian.kla...@gmail.com
wrote:
http://pgbouncer.projects.postgresql.org/doc/config.html
I have never used pgbouncer, but from above it would seem you need to set up
On Fri, Nov 18, 2011 at 10:41 AM, Tomas Vondra t...@fuzzy.cz wrote:
That has nothing to do with the inserts, it means the number of connection
requests exceeds the max_connections. You've set it to 350, and that seems
too high - the processes are going to struggle for resources (CPU, I/O and
Hi. PG 9.0.5, on CentOS 5 with 64 bit.
Here's the logging related items from my config file:
log_directory = 'pg_log'
log_filename = 'pglog.postgresql-%Y-%m-%d_%H%M%S.log'
log_rotation_age = 1d
log_min_duration_statement = 5000 # In
On Sat, Nov 19, 2011 at 1:29 AM, Steve Crawford
scrawf...@pinpointresearch.com wrote:
On 11/18/2011 04:30 AM, Phoenix Kiula wrote:
On Fri, Nov 18, 2011 at 9:25 AM, Steve Crawford
scrawf...@pinpointresearch.com wrote:
Database only? Or is it also your webserver?
It's my webserver and DB
On Sun, Nov 20, 2011 at 3:35 AM, Adrian Klaver adrian.kla...@gmail.com wrote:
Well in the .ini file you posted there is no [databases] section. From what I
read lack of one would explain the problem you are seeing.
Yes, because that's too private to post on a public mailing list like this.
On Sun, Nov 20, 2011 at 3:38 AM, Tomas Vondra t...@fuzzy.cz wrote:
What about log_destination and logging_collector?
Thanks. This was it!
Much appreciated.
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
On Sun, Nov 20, 2011 at 8:08 AM, Adrian Klaver adrian.kla...@gmail.com wrote:
You have pgbouncer listening on 127.0.0.1. In your psql connection string you
are not specifying a host, so if you are on a Unix platform it is trying to
connect to a socket which would account for the error. I found
On Sun, Nov 20, 2011 at 10:33 AM, Amitabh Kant amitabhk...@gmail.com wrote:
I am assuming the difference in the port numbers between your config file
and php code is a typing error.
Does the auth file (/var/lib/pgsql/pgbouncer.txt) contain valid login
credentials to your database? If I
On Sun, Nov 20, 2011 at 10:37 AM, Phoenix Kiula phoenix.ki...@gmail.com wrote:
On Sun, Nov 20, 2011 at 10:33 AM, Amitabh Kant amitabhk...@gmail.com wrote:
I am assuming the difference in the port numbers between your config file
and php code is a typing error.
Does the auth file (/var/lib
On Sun, Nov 20, 2011 at 10:51 AM, Adrian Klaver adrian.kla...@gmail.com wrote:
I don't see a user specified. You sure you are connecting as correct user?
Remember absent a -U the user will be either your system user name or what is
specified in a ENV variable.
Adrian, all this is not
On Sun, Nov 20, 2011 at 1:57 PM, Amitabh Kant amitabhk...@gmail.com wrote:
On Sun, Nov 20, 2011 at 12:12 PM, Tomas Vondra t...@fuzzy.cz wrote:
Dne 20.11.2011 04:21, Phoenix Kiula napsal(a):
snip
My guess is that you actually require a password when connecting to the
database, but you
On Sun, Nov 20, 2011 at 2:16 PM, Phoenix Kiula phoenix.ki...@gmail.com wrote:
On Sun, Nov 20, 2011 at 1:57 PM, Amitabh Kant amitabhk...@gmail.com wrote:
On Sun, Nov 20, 2011 at 12:12 PM, Tomas Vondra t...@fuzzy.cz wrote:
Dne 20.11.2011 04:21, Phoenix Kiula napsal(a):
snip
My guess
On Sun, Nov 20, 2011 at 2:45 PM, Amitabh Kant amitabhk...@gmail.com wrote:
On Sun, Nov 20, 2011 at 12:02 PM, Phoenix Kiula phoenix.ki...@gmail.com
wrote:
Just a trial: try password without quotes in your pgbouncer config file.
That's how I have specified in mine, and it is working.
Already
On Sun, Nov 20, 2011 at 2:32 PM, Phoenix Kiula phoenix.ki...@gmail.com wrote:
On Sun, Nov 20, 2011 at 2:16 PM, Phoenix Kiula phoenix.ki...@gmail.com
wrote:
On Sun, Nov 20, 2011 at 1:57 PM, Amitabh Kant amitabhk...@gmail.com wrote:
On Sun, Nov 20, 2011 at 12:12 PM, Tomas Vondra t...@fuzzy.cz
On Fri, Nov 18, 2011 at 9:25 AM, Steve Crawford
scrawf...@pinpointresearch.com wrote:
Database only? Or is it also your webserver?
It's my webserver and DB. Webserver is nginx, proxying all PHP
requests to apache in the backend.
What version of PostgreSQL? What OS? What OS tuning, if any,
Hi.
I use CentOS 5, 64bit.
PG is 9.0.5.
I did yum install pgbouncer and got this:
---
Running Transaction
  Installing : libevent     1/2
  Installing : pgbouncer    2/2
On Mon, Nov 14, 2011 at 1:45 PM, Venkat Balaji venkat.bal...@verse.in wrote:
Question: what can I do to rsync only the new additions in every table
starting 00:00:01 until 23:59:59 for each day?
A table level replication (like Slony) should help here.
Slony needs more than one physical
Hi. I have a massive traffic website.
I keep getting "FATAL: Sorry, too many clients already" errors.
It's a quad-core machine with dual servers, 4 SCSI disks with RAID 10,
with RAM of 8GB.
Server is Nginx backed by Apache for the php.
Postgresql just has to do about 1000 SELECTs a minute, and
Hi.
I currently have a cronjob doing a full pg_dump of the database every
day, then gzipping it for saving to my backup drive.
However, my db is now 60GB in size, so this daily operation is making
less and less sense. (Some of you may think this is foolish to begin
with).
Question: what can I do
On Sun, Nov 13, 2011 at 8:42 PM, Robins Tharakan
robins.thara...@comodo.com wrote:
Hi,
Well, the 'complex' stuff is only there for larger or high-traffic DBs.
Besides, at 60GB that is a largish DB in itself and you should begin to try
out a few other backup methods nonetheless. That is
On Sun, Nov 13, 2011 at 10:45 PM, Andy Colson a...@squeakycode.net wrote:
On 11/13/2011 07:51 AM, Gregg Jaskiewicz wrote:
pg_dump -Fc already compresses, no need to pipe through gzip
I don't think that'll use two cores if you have 'em. The pipe method will
use two cores, so it should be
Hi,
I bet this is a simple solution but I have been racking my brains.
I have a column in my table:
user_id varchar(100) ;
This can be NULL, or it can have a value. If it has a value during
INSERT or UPDATE, I want to check that the user exists against my
Users table. Otherwise, NULL is
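One possibly relevant point: a plain foreign key on a nullable column already gives exactly this behaviour, since a NULL referencing value passes the constraint and any non-NULL value must exist in the referenced table. A sketch with hypothetical table names (users.user_id must be a primary key or unique, with a matching type):

```
-- NULL user_id is accepted; a non-NULL user_id must exist in users.
ALTER TABLE mytable
  ADD CONSTRAINT fk_user
  FOREIGN KEY (user_id) REFERENCES users (user_id);
```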
On Wed, May 11, 2011 at 11:18 PM, Phoenix Kiula phoenix.ki...@gmail.com wrote:
I have a text column in a table, which I want to search through --
seeking the occurrence of about 300 small strings in it.
Let's say the table is like this:
table1 (
id bigint primary key
Hi
Been reading some old threads (pre 9.x version) and it seems that the
consensus is to avoid doing massive deletes from a table as it'll
create so much unrecoverable space/gaps that vacuum full would be
needed. Etc.
Instead, we might as well do a dump/restore. Faster, cleaner.
This is all
[snip]
You can easily install it as a contrib module. Just read the installation guide or
the man page.
Thanks Eric. How though?
The instructions here --
http://reorg.projects.postgresql.org/pg_reorg.html -- are woefully
incomplete.
I have a standard PG install on WHM/Cpanel type server. I
I have a text column in a table, which I want to search through --
seeking the occurrence of about 300 small strings in it.
Let's say the table is like this:
table1 (
id bigint primary key
,mytext text
,mydate timestamp without time zone
);
I am using this
On Thu, Apr 28, 2011 at 10:59 PM, Simon Riggs si...@2ndquadrant.com wrote:
On Wed, Apr 27, 2011 at 5:24 PM, Phoenix Kiula phoenix.ki...@gmail.com
wrote:
Possibly a dumb question but there isn't much about this.
http://www.google.com/search?sourceid=chrome&ie=UTF-8&q=postgresql+null+value+disk
Hi.
Just want to check why, in an UPDATE sql, the JOIN condition is not making
use of the index?
In both tables being joined, the column in question is in fact the primary
key!
Table structure and query below. All I want is to take values from a smaller
accesscount table and update from it the
Possibly a dumb question but there isn't much about this.
http://www.google.com/search?sourceid=chrome&ie=UTF-8&q=postgresql+null+value+disk+space
I have some BOOLEAN columns. 90% of the values in these columns are FALSE. Do I
save disk space by having them as NULL instead of FALSE? So my application
On Tue, Apr 26, 2011 at 8:28 PM, Greg Smith g...@2ndquadrant.com wrote:
On 04/25/2011 10:10 AM, Vick Khera wrote:
Basically, you create your partitions and set up the necessary triggers
you want (I re-write the app to insert directly into the correct partition).
Then all new data starts
On Tue, Apr 26, 2011 at 3:24 PM, Scott Marlowe scott.marl...@gmail.com wrote:
On Mon, Apr 25, 2011 at 8:50 PM, Phoenix Kiula phoenix.ki...@gmail.com
wrote:
On Tuesday, April 26, 2011, Tomas Vondra t...@fuzzy.cz wrote:
Dne 25.4.2011 18:16, Phoenix Kiula napsal(a):
Sorry, spoke too soon.
I
Hi,
Is there any place I can download the default postgresql.conf that
comes with 9.0?
Thanks!
On Wed, Apr 27, 2011 at 1:18 AM, Magnus Hagander mag...@hagander.net wrote:
On Tue, Apr 26, 2011 at 19:11, Phoenix Kiula phoenix.ki...@gmail.com wrote:
Hi,
Is there any place I can download the default postgresql.conf that
comes with 9.0?
http://git.postgresql.org/gitweb?p=postgresql.git
On Wed, Apr 27, 2011 at 1:53 AM, Phoenix Kiula phoenix.ki...@gmail.com wrote:
On Wed, Apr 27, 2011 at 1:18 AM, Magnus Hagander mag...@hagander.net wrote:
On Tue, Apr 26, 2011 at 19:11, Phoenix Kiula phoenix.ki...@gmail.com wrote:
Hi,
Is there any place I can download the default
Hi.
The partitioning documentation in PG is very clear on how to partition
a new table. Create child tables, and have triggers that manage
INSERT, UPDATE and DELETE commands.
How about doing this with existing massive tables? (Over 120 million rows)
I could create a new parent table with child
On Fri, Apr 22, 2011 at 8:35 PM, t...@fuzzy.cz wrote:
On Fri, Apr 22, 2011 at 8:20 PM, t...@fuzzy.cz wrote:
On Fri, Apr 22, 2011 at 7:07 PM, t...@fuzzy.cz wrote:
In the pg_dumpall backup process, I get this error. Does this help?
Well, not really - it's just another incarnation of the
On Mon, Apr 25, 2011 at 9:19 PM, Phoenix Kiula phoenix.ki...@gmail.com wrote:
On Fri, Apr 22, 2011 at 8:35 PM, t...@fuzzy.cz wrote:
On Fri, Apr 22, 2011 at 8:20 PM, t...@fuzzy.cz wrote:
On Fri, Apr 22, 2011 at 7:07 PM, t...@fuzzy.cz wrote:
In the pg_dumpall backup process, I get this error
On Tue, Apr 26, 2011 at 1:56 AM, Tomas Vondra t...@fuzzy.cz wrote:
Dne 25.4.2011 19:31, Alban Hertroys napsal(a):
On 25 Apr 2011, at 18:16, Phoenix Kiula wrote:
If I COPY each individual file back into the table, it works. Slowly,
but seems to work. I tried to combine all the files into one
On Tuesday, April 26, 2011, Tomas Vondra t...@fuzzy.cz wrote:
Dne 25.4.2011 18:16, Phoenix Kiula napsal(a):
Sorry, spoke too soon.
I can COPY individual chunks to files. Did that by year, and at least
the dumping worked.
Now I need to pull the data in at the destination server.
If I COPY
On Fri, Apr 22, 2011 at 7:07 PM, t...@fuzzy.cz wrote:
On Fri, Apr 22, 2011 at 12:06 PM, Phoenix Kiula phoenix.ki...@gmail.com
wrote:
On Fri, Apr 22, 2011 at 12:51 AM, Tomas Vondra t...@fuzzy.cz wrote:
Dne 21.4.2011 07:16, Phoenix Kiula napsal(a):
Tomas,
I did a crash log with the strace
On Fri, Apr 22, 2011 at 8:20 PM, t...@fuzzy.cz wrote:
On Fri, Apr 22, 2011 at 7:07 PM, t...@fuzzy.cz wrote:
In the pg_dumpall backup process, I get this error. Does this help?
Well, not really - it's just another incarnation of the problem we've
already seen. PostgreSQL reads the data, and
On Fri, Apr 22, 2011 at 12:51 AM, Tomas Vondra t...@fuzzy.cz wrote:
Dne 21.4.2011 07:16, Phoenix Kiula napsal(a):
Tomas,
I did a crash log with the strace for PID of the index command as you
suggested.
Here's the output:
http://www.heypasteit.com/clip/WNR
Also including below
On Fri, Apr 22, 2011 at 12:06 PM, Phoenix Kiula phoenix.ki...@gmail.com wrote:
On Fri, Apr 22, 2011 at 12:51 AM, Tomas Vondra t...@fuzzy.cz wrote:
Dne 21.4.2011 07:16, Phoenix Kiula napsal(a):
Tomas,
I did a crash log with the strace for PID of the index command as you
suggested.
Here's
On a fast network it should only take a few minutes. Now rsyncing
live 2.4 TB databases, that takes time. :) Your raptors, if they're
working properly, should be able to transfer at around 80 to
100Megabytes a second. 10 to 15 seconds a gig. 30 minutes or so via
gig ethernet. I'd run
On Thu, Apr 21, 2011 at 7:27 AM, Tomas Vondra t...@fuzzy.cz wrote:
Dne 20.4.2011 22:11, Tomas Vondra napsal(a):
There's a very nice guide on how to do that
http://blog.endpoint.com/2010/06/tracking-down-database-corruption-with.html
It sure seems like the problem you have (invalid alloc
On Thu, Apr 21, 2011 at 11:49 AM, Phoenix Kiula phoenix.ki...@gmail.com wrote:
On Thu, Apr 21, 2011 at 7:27 AM, Tomas Vondra t...@fuzzy.cz wrote:
Dne 20.4.2011 22:11, Tomas Vondra napsal(a):
There's a very nice guide on how to do that
http://blog.endpoint.com/2010/06/tracking-down-database
While I fix some bigger DB woes, I have learned a lesson. Huge indexes
and tables are a pain.
Which makes me doubly keen on looking at partitioning.
Most examples I see online are partitioned by date. As in months, or
quarter, and so on. This doesn't work for me as I don't have too much
logic