PostgreSQL so that all that is needed is a dump of the DB for a complete
backup...
On 12/07/2010 19:26, Scott Marlowe wrote:
On Mon, Jul 12, 2010 at 7:57 AM, Thom Brown wrote:
On 12 July 2010 14:50, Tom Wilcox wrote:
Hi Thom,
I am performing update statements that are applied to a single table that is
about 96GB in size. These updates are grouped together in a single
transaction. This transaction runs until the machine runs out of disk space.
What I am trying to achieve is for
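The thread snippet cuts off before any fix appears, but the usual way to keep an update on a ~96GB table from exhausting disk space is to split the one huge transaction into key-range batches, vacuuming between them so dead tuples get reclaimed. A hedged sketch — the table and key names are taken from later messages in this digest, and the SET clause is a placeholder:

```sql
-- Batch 1: update only a slice of the table, then commit.
BEGIN;
UPDATE nlpg.match_data
SET    org = upper(org)              -- placeholder for the real update
WHERE  match_data_id >= 0
AND    match_data_id <  1000000;
COMMIT;

-- Reclaim dead tuples before the next slice (VACUUM cannot run
-- inside a transaction block).
VACUUM nlpg.match_data;

-- Batch 2: next slice, and so on.
BEGIN;
UPDATE nlpg.match_data
SET    org = upper(org)
WHERE  match_data_id >= 1000000
AND    match_data_id <  2000000;
COMMIT;
```

Each slice rewrites only a fraction of the table, so the worst-case on-disk bloat is one slice rather than a second full copy of the table.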
Hi,
Is it possible to configure postgres from SQL?
I am interested in turning off fsync for a set of queries (that take
ages to run) and then turn fsync back on again afterwards.
Cheers,
Tom
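For reference: in 8.4, fsync is a server-wide setting and cannot be changed with a plain SET — only by editing postgresql.conf and reloading. What *can* be toggled per session is synchronous_commit, which gives much of the same speed-up with far less risk (a crash can lose the last few commits but cannot corrupt the cluster). A sketch of both routes:

```sql
-- Per-session alternative to disabling fsync:
SET synchronous_commit = off;
-- ... run the long batch of queries ...
SET synchronous_commit = on;

-- fsync itself must be set in postgresql.conf (fsync = off),
-- after which the reload can at least be triggered from SQL:
SELECT pg_reload_conf();
SHOW fsync;   -- verify the active value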
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
1:20, Dave Page wrote:
On Wed, Jun 23, 2010 at 10:19 AM, Tom Wilcox wrote:
Hi,
If I use the apt-get facility on Ubuntu is that supposed to get the
appropriate postgres version for my OS setup (i.e. 64-bit and Python 2.6) ?
I would assume so. I'm more of a Redhat guy though.
rsion).
T
On 23/06/2010 10:01, Dave Page wrote:
On Wed, Jun 23, 2010 at 9:52 AM, Tom Wilcox wrote:
Hi,
I have not used python2.3 on this machine at all (to my knowledge). It is a
fresh install of the latest Ubuntu 64 which comes with Python 2.6 and 3.1.
The build I want to run is the 6
n Klaver writes:
On 06/22/2010 09:52 AM, Tom Wilcox wrote:
ERROR: could not load library
"/opt/PostgreSQL/8.4/lib/postgresql/plpython.so": libpython2.3.so.1.0:
cannot open shared object file: No such file or directory
I have python2.6 installed.
It
this out
and get plpython running my functions in postgresql on this linux OS.
Please help me!
Tom
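The error above means that this particular 8.4 build's plpython.so was linked against Python 2.3; having 2.6 installed does not help, because the dynamic linker is looking for libpython2.3.so.1.0 specifically. A quick way to confirm what the module expects versus what the machine provides (paths taken from the error message):

```shell
# Which libpython was the PL/Python module linked against?
ldd /opt/PostgreSQL/8.4/lib/postgresql/plpython.so | grep libpython

# Which libpython versions are actually installed?
ldconfig -p | grep libpython
```

If ldd reports libpython2.3.so.1.0 as "not found", the clean fix is a plpython build linked against the installed Python (e.g. the distribution's own postgresql-plpython package) rather than symlinking a newer libpython into place.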
On 16/06/2010 15:02, Tom Wilcox wrote:
Interesting:
This works:
postgres@ubuntu64-bertha:~$ createlang -d cse plpythonu
Password:
postgres@ubuntu64-bertha:~$
But this doesn't:
postg
Hi,
I have got PostgreSQL 8.4 up and running on my Ubuntu Server 64-bit OS, but
I am having difficulty with the PL/Python functions from my database. In
particular, I am running this:
---
drop language plpythonu cascade;
create language plpython
stgresql/plpython.so": libpython2.3.so.1.0:
cannot open shared object file: No such file or directory
Any suggestions? It feels like I don't have the correct env vars in psql, but
TBH I'm getting somewhat dazed and confused from staring at this too long..
T
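The snippet cuts off before showing what failed, but for reference: createlang -d cse plpythonu is only a wrapper around the SQL below, so once the library problem is solved the two should behave identically. A minimal smoke test that the interpreter actually loads (the function name is illustrative):

```sql
CREATE LANGUAGE plpythonu;   -- same effect as: createlang -d cse plpythonu

-- Smoke test: if this runs, plpython.so loaded successfully.
CREATE OR REPLACE FUNCTION py_ping() RETURNS text AS $$
    return "plpython ok"
$$ LANGUAGE plpythonu;

SELECT py_ping();
```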
On 16 June 2010 14:58, Tom Wilcox
memory can be used for *OS-level cache*, not the PostgreSQL
buffer cache.
//Magnus
On Wed, Jun 2, 2010 at 16:08, Tom Wilcox wrote:
Hi Stephen,
The impression I was getting from Magnus Hagander's blog was that a 32-bit
version of Postgres could make use of >4GB RAM when running on 64-bit
erver over Postgres
(less pain to get the gain). Perhaps this is an argument in favour of 64-bit
Windows port to be added to this list:
http://wiki.postgresql.org/wiki/64bit_Windows_port
Thanks again for all your help.
Tom
On 2 June 2010 15:27, Stephen Frost wrote:
> Tom,
>
> *
Here's where I'm getting this from:
http://blog.hagander.net/archives/73-PostgreSQL-vs-64-bit-windows.html
Thanks,
Tom
On 2 June 2010 15:04, Stephen Frost wrote:
> * Tom Wilcox (hungry...@googlemail.com) wrote:
> > My question now becomes.. Since it works now, do those memory usage stats
>
ls_4e();
COMMIT;
My plan now is to try increasing the shared_buffers, work_mem,
maintenance_work_mem and apparently checkpoint_segments and see if that
fixes it. However I am still stuck in a situation where the postgres
service isn't visible to Windows services and I fear that now I really am
j
n the same machine (although not actually doing
anything at the moment - just idle server running in background).
Tangent: Is there any way to increase the memory allocated to postgres
by Windows using Job Objects?
Cheers,
Tom
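The plan quoted above (raise shared_buffers, work_mem, maintenance_work_mem and checkpoint_segments) maps onto a postgresql.conf fragment like the following. The figures are illustrative for a single-user 96GB Windows box of the 8.4 era, not values recommended anywhere in the thread:

```
# postgresql.conf sketch -- illustrative values only
shared_buffers = 512MB         # Windows builds of this era gained little beyond ~512MB
work_mem = 256MB               # per sort/hash node; safe here only because single-user
maintenance_work_mem = 1GB     # speeds up VACUUM and CREATE INDEX
checkpoint_segments = 32       # fewer, larger checkpoints during bulk updates
effective_cache_size = 64GB    # planner hint: the OS cache does the heavy lifting
```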
On 29/05/2010 18:55, Bill Moran wrote:
On 5/28/10 8:43:48 PM, Tom Wi
d replace
those keywords..
Thanks,
Tom
I ran this query:
EXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org;
And I got this result:
"Seq Scan on match_data (cost=0.00..9762191.68 rows=32205168 width=206)
(actual time=76873.592..357450.519 rows=2961 loops=1)"
"Total runtime: 8028212.367 ms"
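Worth noting in that plan: the planner estimated 32,205,168 rows but the scan actually produced 2,961, a gap of four orders of magnitude that usually points at stale statistics, and the total runtime (~8,028 s) dwarfs the scan itself (~357 s), suggesting most of the time went into per-row index and trigger overhead. A cheap first step is refreshing the stats and re-checking (a sketch):

```sql
-- Refresh planner statistics for the table, then compare plans.
ANALYZE nlpg.match_data;
EXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org;
```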
On 28 May 20
nlpg.match_data;
CREATE TABLE nlpg.match_data
(
premise_id integer,
usrn bigint,
org text,
sao text,
"level" text,
pao text,
"name" text,
street text,
town text,
pc postcode,
postcode text,
match_data_id integer NOT NULL DEFAULT
nextval('nlpg.
virtualisation options..
Cheers,
Tom
On 28/05/2010 15:44, Stephen Frost wrote:
* Tom Wilcox (hungry...@googlemail.com) wrote:
Can anyone tell me what might be going on and how I can fix it so that
postgres uses as much memory and processing power as poss... in a stable
manner?
I realize
Hi,
I am fighting with Postgres on a 64-bit Windows (Server 2008) machine with
96GB trying to get it to use as much memory as possible (I am the only user
and I am running complex queries on large tables). [See my previous thread
for details "Out of Memory and Configuration Problems (Big Computer)
Hi,
I am fairly new to postgres and I have been using it with Python to
develop a set of functions that operate on some moderately large tables
(30 million rows with 13-15 columns).
I have spent some time messing with the configuration file to get PG to
use indexes when planning queries, etc
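For the planner tuning mentioned above, the knobs that usually coax 8.4 into choosing index scans are the cost parameters rather than anything forcing. A hedged sketch of session-level experimentation — the table and column names below are placeholders, not from the thread:

```sql
-- Session-level experiments to see when the planner picks an index:
SET random_page_cost = 2.0;        -- default 4.0; lower favours index scans
SET effective_cache_size = '8GB';  -- tell the planner how much the OS caches

EXPLAIN SELECT * FROM some_table WHERE some_col = 42;

-- For diagnosis only (never in production config):
SET enable_seqscan = off;          -- forces the comparison plan to surface
```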