# turns forced synchronization on or off
checkpoint_segments = 128 # was 128
checkpoint_timeout = 1000 # was 1000
enable_indexscan = on
log_min_duration_statement = 1000
I'd much appreciate any further ideas!
On Sun, Aug 3, 2014 at
My PG server is still going down. After spending the weekend doing a
CLUSTER of my largest table (it's a RAID 1 system with SATA hard disks
and 4 GB memory, mostly devoted to PG) I still have this issue.
When I run a "top" command, PG is taking 99% of the CPU and about 15%
of the memory. Wh
> In your original post you said it was stopping on pg_class so now I am
> confused.
No need to be confused. The vacuum thing is a bit tricky for laymen
like myself. The "pg_class" seemed to be associated with this table.
Anyway, even before the upgrade, the vacuum was stopping at this table
and t
24.32 30101692 46962204
sda8     2.77    44.88    20.07    86661146   38754800
sda9   267.11 43478.67  4603.61 83952607992 8889065916
On Sun, Aug 3, 2014 at 9:56 AM, John R Pierce wrote:
> On 8/2/2014 6:20 PM, Phoenix Kiula wrote:
>>
>> PS:
Hi. I've been patient. PG is 9.0.17, updated via Yum yesterday.
One of my large tables (101 GB on disk, about 1.1 billion rows) used
to take too long to vacuum. Not sure if it's an index corruption
issue. But I tried VACUUM FULL ANALYZE as recommended in another
thread yesterday, which took 5 hour
Thanks...comments below.
> assuming you installed 9.0 from the yum.postgresql.com repositories, then,
> `yum update postgresql90-server` and restart the postgresql-9.0 service
> should do nicely.
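A minimal sketch of that upgrade path on CentOS, assuming the PGDG 9.0
repository is already configured (the exact service name can vary by
install):

yum update postgresql90-server
service postgresql-9.0 restart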
This worked. Took me to 9.0.17 for some reason. I'm OK with this.
But the "vacuum full" was a
Thank you John.
> you can upgrade to 9.0.18 painlessly. 9.1 or .2 or .3, not quite so
> painless.
What's the best way to get to 9.0.18, as a start? Is there a simple
single command I can use? I'm on CentOS 6, 64bit.
> have you tried a vacuum full of the whole cluster, with your applicatio
Hello,
I have Postgresql from a few years ago. That's 9.0.11.
During the vacuum it basically slows to a crawl. While googling
for this (it stops at "pg_class" forever) I see Tom Lane suggested
upgrading.
So now I must. In doing so, can I follow these instructions?
https://www.digitaloce
Hi. Hard to find this command in the documentation - how should I alter a
table to REMOVE its "on delete cascade" constraint? Thanks.
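A sketch of the usual approach; the constraint and table names here are
illustrative (find the real constraint name with \d tablename in psql):

ALTER TABLE child_table DROP CONSTRAINT fk_child;
-- re-add the key without the cascade, if the reference is still wanted
ALTER TABLE child_table
  ADD CONSTRAINT fk_child
  FOREIGN KEY (stringid) REFERENCES parent_table (stringid);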
On Sat, Oct 6, 2012 at 10:24 AM, Adrian Klaver wrote:
>
> One thing I see above:
> http://pgbouncer.projects.postgresql.org/doc/config.html
> ""\*" acts as fallback database"
>
> Notice the backslash.
>
Ok, but:
(1) The exact same INI file had been working until now.
(2) Why do I need a fallback datab
On Sat, Oct 6, 2012 at 10:07 AM, Adrian Klaver wrote:
...snip...
> What are the contents of your pgbouncer.ini file?
>>
Thanks Adrian.
I mentioned the full ini file details above in the thread, but here
they are again. (Please do not comment about port numbers. This is a
public list so I cha
On Sat, Oct 6, 2012 at 12:01 AM, Wolf Schwurack wrote:
> You need to have a pgbouncer directory in /var/log and have the owner
> pgbouncer. This is easy to test: try creating a file in /var/log as the user
> pgbouncer. It should fail because pgbouncer does not have write permissions
> to /v
On Thu, Oct 4, 2012 at 2:50 AM, Wolf Schwurack wrote:
> I use pgpool but some of the problem you listed are same as I had with pgpool
Thanks Wolf, for the thoughts.
> I would not run pgbouncer in /var/run/pgbouncer. Every time you reboot the
> directory will get deleted. I set my parameter to an
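One common workaround on CentOS, sketched under the assumption that the
run directory is recreated at boot (e.g. from /etc/rc.local):

mkdir -p /var/run/pgbouncer
chown postgres:postgres /var/run/pgbouncer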
On Tue, Oct 2, 2012 at 5:27 PM, Phoenix Kiula wrote:
> On Tue, Oct 2, 2012 at 11:29 AM, Phoenix Kiula
> wrote:
>> On Tue, Oct 2, 2012 at 12:59 AM, Phoenix Kiula
>> wrote:
>>>> Could you please check permission of /var/run/pgbouncer/ directory. If
>>
On Tue, Oct 2, 2012 at 11:29 AM, Phoenix Kiula wrote:
> On Tue, Oct 2, 2012 at 12:59 AM, Phoenix Kiula
> wrote:
>>> Could you please check permission of /var/run/pgbouncer/ directory. If
>>> pgbouncer directory does not have "postgres" user permissions, pleas
On Tue, Oct 2, 2012 at 12:59 AM, Phoenix Kiula wrote:
>> Could you please check permission of /var/run/pgbouncer/ directory. If
>> pgbouncer directory does not have "postgres" user permissions, please assign
>> it and then start the pgbouncer.
>
>
>
> Could you please check permission of /var/run/pgbouncer/ directory. If
> pgbouncer directory does not have "postgres" user permissions, please assign
> it and then start the pgbouncer.
The /var/run/pgbouncer/ directory has
chown -R postgres:postgres ..
The port number everywhere is already
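A quick sketch of verifying the ownership being discussed:

ls -ld /var/run/pgbouncer/
# expected owner and group, per the advice in this thread: postgres postgres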
Hi,
- PG 9.0.10
- Pgbouncer version 1.4.2
Not long ago, during the last server reboot for us, we had fixed the
really painful (and largely mysterious) process of setting up
pgbouncer.
File permissions and other mysteries were solved with help from Raghavendra:
http://permalink.gmane.org/gman
On Fri, Apr 13, 2012 at 2:59 PM, Raghavendra
wrote:
.
>> Add it in pgbouncer.auth file as per your .ini file parameter.
>>
>> auth_file = /var/lib/pgsql/pgbouncer.txt
>>
>
> Seems you already did this. I believe you are connecting as postgres user
> not from root, if yes, then check .
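For reference, the auth_file is a plain text file with one quoted
user/password pair per line; the credentials below are placeholders:

"someuser" "somepassword"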
On Fri, Apr 13, 2012 at 2:41 AM, Scott Marlowe wrote:
> On Thu, Apr 12, 2012 at 11:12 AM, Phoenix Kiula
> wrote:
>>
>> WARNING: password file "/root/.pgpass" has group or world access;
>> permissions should be u=rw (0600) or less
>> psql: ERROR: N
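The fix the warning itself suggests, as a one-liner:

chmod 0600 /root/.pgpass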
I had pgbouncer working somehow, but we have switched servers recently
and now I cannot for the life of me figure out again how to set it up.
Online guides say things like "create a user ID". Well, where? Inside
PG the database? Or in my CentOS system?
Here's my "/etc/pgbouncer.ini":
[datab
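A minimal pgbouncer.ini of the usual shape, with placeholder values
rather than the poster's actual settings:

[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /var/lib/pgsql/pgbouncer.txt
logfile = /var/log/pgbouncer/pgbouncer.log
pidfile = /var/run/pgbouncer/pgbouncer.pid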
Hi. Hope I'm articulating the question correctly.
I currently have a foreign key like this:
"fk_table2" FOREIGN KEY (id) REFERENCES table1(id) ON DELETE CASCADE;
Given the peculiar needs of this db, it seems that in some cases we
will need to track the "id" of table2 against a concatenation
On Fri, Jan 6, 2012 at 10:38 PM, John R Pierce wrote:
> you should check your attitude at the door. this isn't Microsoft Pay per
> Incident Tech Support.
I saw the door. Found some other attitudes that were allowed to be let
in. Like asking me to write my own patch. You see, attitudes come in
On Fri, Jan 6, 2012 at 8:19 PM, Adrian Klaver wrote:
> On Friday, January 06, 2012 4:16:09 pm Phoenix Kiula wrote:
>> On Fri, Jan 6, 2012 at 6:54 PM, Adrian Klaver
>> wrote:
>> > Try:
>> > copy vl from 'data.txt' WITH CSV DELIMITER '|';
>&
On Fri, Jan 6, 2012 at 6:54 PM, Adrian Klaver wrote:
>
> Try:
> copy vl from 'data.txt' WITH CSV DELIMITER '|';
Doesn't work. Can't see what the difference is between CSV and a plain
text file. Same errors are thrown.
> If that doesn't work take a look at pgloader:
> http://pgfoundry.org/projects/pgloade
On Fri, Jan 6, 2012 at 6:53 PM, Steve Crawford
wrote:
> On 01/06/2012 03:42 PM, Phoenix Kiula wrote:
>>
>> On Fri, Jan 6, 2012 at 6:20 PM, Adrian Klaver
>> wrote:
>>
>>> http://www.postgresql.org/docs/9.0/interactive/sql-copy.html
>>>
>
On Fri, Jan 6, 2012 at 6:20 PM, Adrian Klaver wrote:
> http://www.postgresql.org/docs/9.0/interactive/sql-copy.html
>
> Search for
> NULL
Thanks Adrian.
Without examples, it's hard to predict syntax. If the value after a
pipe is missing altogether, I suppose the missing value is "\n"
(newline
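A sketch of making the null marker explicit; in CSV mode an unquoted
empty field already loads as NULL by default:

COPY vl FROM 'data.txt' WITH CSV DELIMITER '|' NULL '';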
On Fri, Jan 6, 2012 at 4:24 PM, Steve Crawford
wrote:
> On 01/06/2012 01:11 PM, Phoenix Kiula wrote:
>>
>> On Fri, Jan 6, 2012 at 11:46 AM, Tom Lane wrote:
>>>
>>> Phoenix Kiula writes:
>>>>
>>>> Hi. I'm using Postgresql 9.0.5,
On Fri, Jan 6, 2012 at 11:46 AM, Tom Lane wrote:
> Phoenix Kiula writes:
>> Hi. I'm using Postgresql 9.0.5, and the connection is made via pgbouncer.
>
> Perhaps pgbouncer is redirecting the second command to a different
> session?
>
Thanks Tom. I'm in the ex
Hi. I'm using Postgresql 9.0.5, and the connection is made via pgbouncer.
I create a temporary table, and then want to import data into this
table via a COPY command. Yet, this just created table is not being
recognized. What's up?
From my terminal:
mydb=#
mydb=# create temporary table vl (
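If pgbouncer is in transaction pooling mode, each statement can land on
a different server session, so the temp table vanishes between commands;
a sketch keeping everything in one transaction (the column list here is
illustrative, since the original was cut off):

BEGIN;
CREATE TEMPORARY TABLE vl (id bigint, val text);  -- illustrative columns
COPY vl FROM 'data.txt' WITH CSV DELIMITER '|';
COMMIT;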
On Sun, Dec 4, 2011 at 7:41 PM, Phoenix Kiula wrote:
> On Sun, Dec 4, 2011 at 7:14 PM, Alban Hertroys wrote:
>> On 4 Dec 2011, at 11:19, Phoenix Kiula wrote:
>
>
>>
>> INSERTs in the parent table don't need to check for any reference from the
>> child
On Sun, Dec 4, 2011 at 7:55 PM, Szymon Guz wrote:
.
> and then show us the whole table structure, especially any rules or
> triggers.
Not many rules or triggers. See below.
I ran a REINDEX on the key allegedly being violated, and it finished
in 30 mins or so, but still the same problem:
On Sun, Dec 4, 2011 at 7:14 PM, Alban Hertroys wrote:
> On 4 Dec 2011, at 11:19, Phoenix Kiula wrote:
>
> INSERTs in the parent table don't need to check for any reference from the
> child table, since they're new; there can't be a reference. UPDATEs and
>
Hi.
I have deleted a row from a table. Confirmed by "SELECT". None of the
associated child tables have this key value either.
Yet, when I insert this row back again, the primary key index on this
table gives me a duplicate error.
As demonstrated below. PGSQL version is 9.0.5.
Is this common? I
Hi.
I have a foreign key as such:
ALTER TABLE child_table
ADD CONSTRAINT fk_child
FOREIGN KEY (stringid) REFERENCES parent_table (stringid) MATCH FULL
ON DELETE CASCADE ;
Questions:
1. Is "MATCH FULL" adding any value here? If the foreign key is just
on an "id" column, what purpose does it se
Hi.
(My pgbouncer is finally working and has resulted in at least a 3-fold
site speed increase! YAY! Thanks to everyone who helped.)
Now, a new small problem.
In my PHP code I have a condition that checks for the existence of a
record, and if not found, it INSERTs a new one.
Here's the first SQL
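One way the check-and-insert is often collapsed into a single statement,
sketched with made-up table and column names since the original SQL was
cut off; note it is still racy under concurrency without a unique
constraint on the key:

INSERT INTO mytable (id, val)
SELECT 123, 'abc'
WHERE NOT EXISTS (SELECT 1 FROM mytable WHERE id = 123);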
On Thu, Nov 24, 2011 at 11:53 PM, Benjamin Henrion wrote:
> On Thu, Nov 24, 2011 at 4:49 PM, Phoenix Kiula
> wrote:
>> On Wed, Nov 23, 2011 at 6:13 AM, Alex Thurlow
>> wrote:
>>> On 11/22/2011 3:28 PM, Merlin Moncure wrote:
>> ..
>>> How lo
On Wed, Nov 23, 2011 at 6:13 AM, Alex Thurlow wrote:
> On 11/22/2011 3:28 PM, Merlin Moncure wrote:
..
> How long is this backup taking? I have a ~100GB database that I back up
> with pg_dump (which compresses as it dumps if you want it to) and that only
> takes 35 minutes. Granted, I have i
On Mon, Nov 21, 2011 at 6:14 PM, Tomas Vondra wrote:
.
>>
>> An index on (a, b) can be used for queries involving only a but not for
>> those involving only b.
>
> That has not been true since 8.2 - a multi-column index may be used even for
> queries without conditions on the leading columns. It won't b
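A sketch of what the reply describes, with illustrative names: the
planner may still use this index for a predicate on b alone, though far
less efficiently than for one on a:

CREATE INDEX idx_a_b ON table1 (a, b);
-- may use idx_a_b since 8.2, typically via a full scan of the index:
SELECT * FROM table1 WHERE b = 42;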
On Thu, Nov 24, 2011 at 10:42 AM, Adrian Klaver wrote:
>>
>> Also, how can I tell the pgbouncer log not to log proper connections
>> and their closing. Right now it's filling up with nonsense. I only
>> want it to log when there's a warning or error.
>
> http://pgbouncer.projects.postgresql.org/do
On Thu, Nov 24, 2011 at 9:18 AM, Phoenix Kiula wrote:
> On Tue, Nov 22, 2011 at 2:13 AM, Steve Crawford
...
>
Thanks, I finally got it connecting.
Where's the "pgbouncer" database? Do I need to install it? It's not
installed. (How else should I tell the load and util
On Tue, Nov 22, 2011 at 2:13 AM, Steve Crawford
wrote:
>
..
> The information in the pgbouncer pseudo-database is helpful, here (psql -U
> youradminuser -h 127.0.0.1 pgbouncer).
Thanks, I finally got it connecting.
Where's the "pgbouncer" database? Do I need to install it? It's not install
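No install needed: the "pgbouncer" database is virtual and is served by
pgbouncer itself. A sketch, assuming the common listen port 6432:

psql -U youradminuser -h 127.0.0.1 -p 6432 pgbouncer
pgbouncer=# SHOW POOLS;
pgbouncer=# SHOW STATS;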
On Mon, Nov 21, 2011 at 10:54 AM, Tomas Vondra wrote:
> Dne 21.11.2011 02:44, Phoenix Kiula napsal(a):
>> Thanks Tomas and everyone.
>>
>> I have the following passwords:
>>
>> 1. Pgbouncer.ini file
>>
>> [databases]
>> MYDB = host=127.0.
On Mon, Nov 21, 2011 at 7:26 AM, Gavin Flower
wrote:
> How about having 2 indexes: one on each of ip & url_md5? Pg will combine the
> indexes as required, or will just use one if that is best.
Thanks Gavin. Question: what if I have a joined index? If from a
joined index I only use the first co
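A sketch of the two-index suggestion, using the column names from the
post (the table name is illustrative); Postgres can bitmap-AND the two
indexes when a query filters on both columns:

CREATE INDEX idx_ip ON table1 (ip);
CREATE INDEX idx_url_md5 ON table1 (url_md5);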
On Mon, Nov 21, 2011 at 10:18 AM, Tomas Vondra wrote:
> Dne 20.11.2011 03:33, Amitabh Kant napsal(a):
>> On Sun, Nov 20, 2011 at 4:14 AM, Phoenix Kiula
>>
>> Does the auth file (/var/lib/pgsql/pgbouncer.txt) contain valid login
>> credentials to your database? If I re
On Mon, Nov 21, 2011 at 4:13 AM, Adrian Klaver wrote:
> On Sunday, November 20, 2011 3:56:18 am Phoenix Kiula wrote:
>
>> > Any ideas?
>>
>> Just to add, the connection string I try for pgbouncer is EXACTLY the
>> same as the one I use to connect directly to
On Sun, Nov 20, 2011 at 9:33 PM, Phoenix Kiula wrote:
> Hi.
>
> Want to start another thread, loosely related to the performance
> problems thread I have going.
>
> Need some DB design guidance from the gurus here.
>
> My big table now has about 70 million rows, wi
Hi.
Want to start another thread, loosely related to the performance
problems thread I have going.
Need some DB design guidance from the gurus here.
My big table now has about 70 million rows, with the following columns:
alias | character varying(35)
url | text
modify_
On Sun, Nov 20, 2011 at 2:11 PM, Tomas Vondra wrote:
> Dne 18.11.2011 13:30, Phoenix Kiula napsal(a):
>> Full DB: 32GB
>> The big table referenced above: 28 GB
>>
>> It's inserts into this one that are taking time.
>
> Hm, in that case the shared_buffer
On Sun, Nov 20, 2011 at 7:52 PM, Phoenix Kiula wrote:
> On Sun, Nov 20, 2011 at 7:43 PM, Scott Mead wrote:
>>
>>
>> On Sun, Nov 20, 2011 at 6:21 AM, Marko Kreen wrote:
>>>
>>> On Sun, Nov 20, 2011 at 8:32 AM, Phoenix Kiula
>>> wrote:
>>&
On Sun, Nov 20, 2011 at 7:43 PM, Scott Mead wrote:
>
>
> On Sun, Nov 20, 2011 at 6:21 AM, Marko Kreen wrote:
>>
>> On Sun, Nov 20, 2011 at 8:32 AM, Phoenix Kiula
>> wrote:
>> > The password I am entering in the terminal is right for sure. I've
>>
On Sun, Nov 20, 2011 at 4:49 PM, John R Pierce wrote:
> On 11/19/11 11:42 PM, Phoenix Kiula wrote:
>
> does this firewall block localhost at all? many don't. (I'm not at all
> familiar with this CSF/LFD thing)
>
> if you enable a port for TCP_IN, does it auto
On Sun, Nov 20, 2011 at 2:32 PM, Phoenix Kiula wrote:
> On Sun, Nov 20, 2011 at 2:16 PM, Phoenix Kiula
> wrote:
>> On Sun, Nov 20, 2011 at 1:57 PM, Amitabh Kant wrote:
>>>
>>> On Sun, Nov 20, 2011 at 12:12 PM, Tomas Vondra wrote:
>>>>
>>
On Sun, Nov 20, 2011 at 2:45 PM, Amitabh Kant wrote:
> On Sun, Nov 20, 2011 at 12:02 PM, Phoenix Kiula
> wrote:
>
> Just a trial: try password without quotes in your pgbouncer config file.
> That's how I have specified in mine, and it is working.
Already done. Same problem.
On Sun, Nov 20, 2011 at 2:16 PM, Phoenix Kiula wrote:
> On Sun, Nov 20, 2011 at 1:57 PM, Amitabh Kant wrote:
>>
>> On Sun, Nov 20, 2011 at 12:12 PM, Tomas Vondra wrote:
>>>
>>> Dne 20.11.2011 04:21, Phoenix Kiula napsal(a):
>>>
>>> My
On Sun, Nov 20, 2011 at 1:57 PM, Amitabh Kant wrote:
>
> On Sun, Nov 20, 2011 at 12:12 PM, Tomas Vondra wrote:
>>
>> Dne 20.11.2011 04:21, Phoenix Kiula napsal(a):
>>
>> My guess is that you actually require a password when connecting to the
>> database, but
On Sun, Nov 20, 2011 at 10:51 AM, Adrian Klaver wrote:
>
> I don't see a user specified. You sure you are connecting as correct user?
> Remember absent a -U the user will be either your system user name or what is
> specified in a ENV variable.
Adrian, all this is not helping.
To be sure, I tri
On Sun, Nov 20, 2011 at 10:37 AM, Phoenix Kiula wrote:
> On Sun, Nov 20, 2011 at 10:33 AM, Amitabh Kant wrote:
>>
>> I am assuming the difference in the port numbers between your config file
>> and php code is a typing error.
>> Does the auth file (/var/lib/pgsql/p
On Sun, Nov 20, 2011 at 10:33 AM, Amitabh Kant wrote:
>
> I am assuming the difference in the port numbers between your config file
> and php code is a typing error.
> Does the auth file (/var/lib/pgsql/pgbouncer.txt) contain valid login
> credentials to your database? If I remember correctly, it
On Sun, Nov 20, 2011 at 8:08 AM, Adrian Klaver wrote:
>
> You have pgbouncer listening on 127.0.0.1. In your psql connection string you
> are not specifying a host, so if you are on a Unix platform it is trying to
> connect to a socket which would account for the error. I found when working
> wit
On Sun, Nov 20, 2011 at 3:38 AM, Tomas Vondra wrote:
>
> What about log_destination and logging_collector?
Thanks. This was it!
Much appreciated.
On Sun, Nov 20, 2011 at 3:35 AM, Adrian Klaver wrote:
>
> Well in the .ini file you posted there is no [databases] section. From what I
> read lack of one would explain the problem you are seeing.
Yes. That part is private, and this is a public mailing list.
Here's my INI file below,
On Sat, Nov 19, 2011 at 1:29 AM, Steve Crawford
wrote:
> On 11/18/2011 04:30 AM, Phoenix Kiula wrote:
>>
>> On Fri, Nov 18, 2011 at 9:25 AM, Steve Crawford
>> wrote:
>>
>>
>>>> Database only? Or is it also your webserver?
>>
>> It
Hi. PG 9.0.5, on CentOS 5 with 64 bit.
Here's the logging related items from my config file:
log_directory = 'pg_log'
log_filename = 'pglog.postgresql-%Y-%m-%d_%H%M%S.log'
log_rotation_age = 1d
log_min_duration_statement = 5000 # In milliseconds
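The two settings that turned out to matter (per the reply quoted earlier
in this digest), sketched alongside the above:

log_destination = 'stderr'
logging_collector = on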
On Fri, Nov 18, 2011 at 10:41 AM, Tomas Vondra wrote:
>
> That has nothing to do with the inserts, it means the number of connection
> requests exceeds the max_connections. You've set it to 350, and that seems
> too high - the processes are going to struggle for resources (CPU, I/O and
> memory) a
On Sun, Nov 20, 2011 at 2:39 AM, Phoenix Kiula wrote:
> On Sun, Nov 20, 2011 at 2:13 AM, Adrian Klaver
> wrote:
>>
>> http://pgbouncer.projects.postgresql.org/doc/config.html
>>
>> I have never used pgbouncer, but from above it would seem you need to set up
>
On Sun, Nov 20, 2011 at 2:13 AM, Adrian Klaver wrote:
>
> http://pgbouncer.projects.postgresql.org/doc/config.html
>
> I have never used pgbouncer, but from above it would seem you need to set up a
> [databases] section to tie pgbouncer to the Postgres server.
> See:
> SECTION [databases]
Thank
> 1. Do I need to set up the "/etc/pgbouncer.ini.rpmnew" as
> "/etc/pgbouncer.ini" and then change settings in it? What do I change?
> How? The FAQ is super geeky and unhelpful. As is the sparse info on
> the PG Wiki on pgbouncer. How can I tune pgbouner settings?
Just a quick update. By googlin
Hi.
I use CentOS 5, 64bit.
PG is 9.0.5.
I did "yum install pgbouncer" and got this:
---
Running Transaction
  Installing : libevent     1/2
  Installing : pgbouncer    2/2
war
On Fri, Nov 18, 2011 at 9:25 AM, Steve Crawford
wrote:
>> Database only? Or is it also your webserver?
It's my webserver and DB. Webserver is nginx, proxying all PHP
requests to apache in the backend.
> What version of PostgreSQL? What OS? What OS tuning, if any, have you done?
> (Have you i
Hi. I have a massive traffic website.
I keep getting "FATAL: Sorry, too many clients already" problems.
It's a quad-core machine with dual servers, 4 SCSI disks in RAID 10,
and 8GB of RAM.
Server is Nginx backed by Apache for the php.
Postgresql just has to do about 1000 SELECTs a minute, an
On Mon, Nov 14, 2011 at 1:45 PM, Venkat Balaji wrote:
>> Question: what can I do to rsync only the new additions in every table
>> starting 00:00:01 until 23:59:59 for each day?
>
> A table level replication (like Slony) should help here.
Slony needs more than one physical server, right?
On Sun, Nov 13, 2011 at 10:45 PM, Andy Colson wrote:
> On 11/13/2011 07:51 AM, Gregg Jaskiewicz wrote:
>>
>> pg_dump -Fc already compresses, no need to pipe through gzip
>>
>
> I don't think that'll use two cores if you have 'em. The pipe method will
> use two cores, so it should be faster. (ass
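The two approaches being compared, side by side as sketches (database
name is a placeholder):

pg_dump -Fc mydb > mydb.dump        # one process; compresses as it dumps
pg_dump mydb | gzip > mydb.sql.gz   # dump and gzip can run on separate cores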
On Sun, Nov 13, 2011 at 8:42 PM, Robins Tharakan
wrote:
> Hi,
>
> Well, the 'complex' stuff is only there for larger or high-traffic DBs.
> Besides, at 60GB that is a largish DB in itself and you should begin to try
> out a few other backup methods nonetheless. That is more so if you are
> takin
Hi.
I currently have a cronjob to do a full pgdump of the database every
day. And then gzip it for saving to my backup drive.
However, my db is now 60GB in size, so this daily operation is making
less and less sense. (Some of you may think this is foolish to begin
with).
Question: what can I do
Hi,
I bet this has a simple solution, but I have been racking my brain.
I have a column in my table:
user_id varchar(100) ;
This can be NULL, or it can have a value. If it has a value during
INSERT or UPDATE, I want to check that the user exists against my
"Users" table. Otherwise, NULL is
[snip]
> You can easily install it as a contrib module. Just read the installation
> guide or the man page.
Thanks Eric. How though?
The instructions here --
http://reorg.projects.postgresql.org/pg_reorg.html -- are woefully
incomplete.
I have a standard PG install on a WHM/cPanel-type server. I kn
Hi
Been reading some old threads (pre 9.x version) and it seems that the
consensus is to avoid doing massive deletes from a table as it'll
create so much unrecoverable space/gaps that vacuum full would be
needed. Etc.
Instead, we might as well do a dump/restore. Faster, cleaner.
This is all well
On Wed, May 11, 2011 at 11:18 PM, Phoenix Kiula wrote:
> I have a text column in a table, which I want to search through --
> seeking the occurrence of about 300 small strings in it.
>
> Let's say the table is like this:
>
> table1 (
> id bigint primary k
I have a text column in a table, which I want to search through --
seeking the occurrence of about 300 small strings in it.
Let's say the table is like this:
table1 (
id bigint primary key
,mytext text
,mydate timestamp without time zone
);
I am using this
On Thu, Apr 28, 2011 at 10:59 PM, Simon Riggs wrote:
> On Wed, Apr 27, 2011 at 5:24 PM, Phoenix Kiula
> wrote:
>> Possibly a dumb question but there isn't much about this.
>> http://www.google.com/search?sourceid=chrome&ie=UTF-8&q=postgresql+null+value+disk+space
On Tue, Apr 26, 2011 at 8:28 PM, Greg Smith wrote:
> On 04/25/2011 10:10 AM, Vick Khera wrote:
>>
>> Basically, you create your partitions and set up the necessary triggers
>> you want (I re-write the app to insert directly into the correct partition).
>> Then all new data starts going into the p
Possibly a dumb question but there isn't much about this.
http://www.google.com/search?sourceid=chrome&ie=UTF-8&q=postgresql+null+value+disk+space
I have some BOOLEAN columns. In 90% of cases the value is FALSE. Do I
save disk space by having them as NULL instead of FALSE? So my application
Hi.
Just want to check why, in an UPDATE SQL, the JOIN condition is not making
use of the index.
In both tables being joined, the column in question is in fact the primary
key!
Table structure and query below. All I want is to take values from a smaller
"accesscount" table and update from it the
On Wed, Apr 27, 2011 at 1:53 AM, Phoenix Kiula wrote:
> On Wed, Apr 27, 2011 at 1:18 AM, Magnus Hagander wrote:
>> On Tue, Apr 26, 2011 at 19:11, Phoenix Kiula wrote:
>>> Hi,
>>>
>>> Is there any place I can download the default postgresql.conf
On Wed, Apr 27, 2011 at 1:18 AM, Magnus Hagander wrote:
> On Tue, Apr 26, 2011 at 19:11, Phoenix Kiula wrote:
>> Hi,
>>
>> Is there any place I can download the default postgresql.conf that
>> comes with 9.0?
>
> http://git.postgresql.org/gitweb?p=postgresql.git
Hi,
Is there any place I can download the default postgresql.conf that
comes with 9.0?
Thanks!
On Tue, Apr 26, 2011 at 3:24 PM, Scott Marlowe wrote:
> On Mon, Apr 25, 2011 at 8:50 PM, Phoenix Kiula
> wrote:
>>> On Tuesday, April 26, 2011, Tomas Vondra wrote:
>>>> Dne 25.4.2011 18:16, Phoenix Kiula napsal(a):
>>>>> Sorry, spoke too soon.
>>
> On Tuesday, April 26, 2011, Tomas Vondra wrote:
>> Dne 25.4.2011 18:16, Phoenix Kiula napsal(a):
>>> Sorry, spoke too soon.
>>>
>>> I can COPY individual chunks to files. Did that by year, and at least
>>> the dumping worked.
>>>
>&g
On Tue, Apr 26, 2011 at 1:56 AM, Tomas Vondra wrote:
> Dne 25.4.2011 19:31, Alban Hertroys napsal(a):
>> On 25 Apr 2011, at 18:16, Phoenix Kiula wrote:
>>
>>> If I COPY each individual file back into the table, it works. Slowly,
>>> but seems to work. I tried to
On Mon, Apr 25, 2011 at 9:19 PM, Phoenix Kiula wrote:
> On Fri, Apr 22, 2011 at 8:35 PM, wrote:
>>> On Fri, Apr 22, 2011 at 8:20 PM, wrote:
>>>>> On Fri, Apr 22, 2011 at 7:07 PM, wrote:
>>>>> In the pg_dumpall backup process, I get this error.
On Fri, Apr 22, 2011 at 8:35 PM, wrote:
>> On Fri, Apr 22, 2011 at 8:20 PM, wrote:
On Fri, Apr 22, 2011 at 7:07 PM, wrote:
In the pg_dumpall backup process, I get this error. Does this help?
>>>
>>> Well, not really - it's just another incarnation of the problem we've
>>> alrea
Hi.
The partitioning documentation in PG is very clear on how to partition
a new table. Create child tables, and have triggers that manage
INSERT, UPDATE and DELETE commands.
How about doing this with existing massive tables? (Over 120 million rows)
I could create a new parent table with child t
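A minimal sketch of the 9.0-era trigger pattern mentioned above, with
illustrative names and a single yearly child; existing rows would then be
moved with INSERT INTO child SELECT ... plus a DELETE from the parent:

CREATE TABLE parent_2011 (
  CHECK (mydate >= '2011-01-01' AND mydate < '2012-01-01')
) INHERITS (parent);

CREATE OR REPLACE FUNCTION parent_insert() RETURNS trigger AS $$
BEGIN
  INSERT INTO parent_2011 VALUES (NEW.*);
  RETURN NULL;  -- row already routed; suppress the insert into the parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER parent_insert_trg
  BEFORE INSERT ON parent
  FOR EACH ROW EXECUTE PROCEDURE parent_insert();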
On Fri, Apr 22, 2011 at 8:20 PM, wrote:
>> On Fri, Apr 22, 2011 at 7:07 PM, wrote:
>> In the pg_dumpall backup process, I get this error. Does this help?
>>
>
> Well, not really - it's just another incarnation of the problem we've
> already seen. PostgreSQL reads the data, and at some point it
On Fri, Apr 22, 2011 at 7:07 PM, wrote:
>> On Fri, Apr 22, 2011 at 12:06 PM, Phoenix Kiula
>> wrote:
>>> On Fri, Apr 22, 2011 at 12:51 AM, Tomas Vondra wrote:
>>>> Dne 21.4.2011 07:16, Phoenix Kiula napsal(a):
>>>>> Tomas,
>>>>&g
On Fri, Apr 22, 2011 at 12:06 PM, Phoenix Kiula wrote:
> On Fri, Apr 22, 2011 at 12:51 AM, Tomas Vondra wrote:
>> Dne 21.4.2011 07:16, Phoenix Kiula napsal(a):
>>> Tomas,
>>>
>>> I did a crash log with the strace for PID of the index command as you
>
On Fri, Apr 22, 2011 at 12:51 AM, Tomas Vondra wrote:
> Dne 21.4.2011 07:16, Phoenix Kiula napsal(a):
>> Tomas,
>>
>> I did a crash log with the strace for PID of the index command as you
>> suggested.
>>
>> Here's the output:
>> http://www.heypa
On Thu, Apr 21, 2011 at 11:49 AM, Phoenix Kiula wrote:
> On Thu, Apr 21, 2011 at 7:27 AM, Tomas Vondra wrote:
>> Dne 20.4.2011 22:11, Tomas Vondra napsal(a):
>>> There's a very nice guide on how to do that
>>>
>>> http://blog.endpoint.com/2010/06/tr
On Thu, Apr 21, 2011 at 7:27 AM, Tomas Vondra wrote:
> Dne 20.4.2011 22:11, Tomas Vondra napsal(a):
>> There's a very nice guide on how to do that
>>
>> http://blog.endpoint.com/2010/06/tracking-down-database-corruption-with.html
>>
>> It sure seems like the problem you have (invalid alloc request
> On a fast network it should only take a few minutes. Now rsyncing
> live 2.4 TB databases, that takes time. :) Your raptors, if they're
> working properly, should be able to transfer at around 80 to
> 100 megabytes a second. 10 to 15 seconds a gig. 30 minutes or so via
> gig ethernet. I'd run