Re: [GENERAL] db size and VACUUM ANALYZE

2010-02-13 Thread Scott Marlowe
On Sat, Feb 13, 2010 at 12:19 AM, Greg Smith g...@2ndquadrant.com wrote:
 Amitabh Kant wrote:

 You need to do VACUUM FULL ANALYZE to reclaim the disk space, but this
 creates an exclusive lock on the tables.
 See http://www.postgresql.org/docs/8.3/static/sql-vacuum.html

 First off, you don't need the ANALYZE in there.

 Second, VACUUM FULL is a terrible way to fix a table that's seriously
 screwed up--it will take forever to run.  Use CLUSTER to accomplish the same
 thing much faster; it basically does the same thing as the dump/restore step
 that's restoring good performance to the database.

This is a bit of an oversimplification.  I've found that selecting the
contents of the table out, truncating the table, and inserting them
back in from a select with an order by can be orders of magnitude
faster than CLUSTER IF the data in the table is basically random.
After that, CLUSTER can perform reasonably well to keep the table
clustered, because it's mostly in order already.  Basically, unless
it's been fixed in 9.0, CLUSTER reads the table by index entry one row
at a time and builds the new table.  This is very, very slow for a
randomly ordered table.
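
To make that concrete, the two approaches look roughly like this (table,
index, and column names here are made up for illustration):

-- CLUSTER rewrites the table in index order, holding an exclusive lock:
CLUSTER bloated_table USING bloated_table_pkey;

-- The select-out / truncate / reinsert-in-order variant (also takes an
-- exclusive lock for the duration; done in one transaction):
BEGIN;
CREATE TEMP TABLE bloated_copy AS SELECT * FROM bloated_table;
TRUNCATE bloated_table;
INSERT INTO bloated_table SELECT * FROM bloated_copy ORDER BY id;
COMMIT;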

 Before doing that, I would run a VACUUM VERBOSE on the whole cluster and see
 if there are any max_fsm_pages warnings in there.  Those settings might be
 too low, for example if large deletions are done in batches, and ultimately
 be the true cause of this problem.

Good point; if he's blowing out the FSM regularly then the fix above
will be temporary at best.

Since setting max_fsm_pages / max_fsm_relations is basically very cheap, it's a
good idea to set them several times higher than what you need: if you
need 1M, set it to 10M to give yourself a big buffer in case things get worse
over time.  Especially since max_fsm_pages is a change that requires a restart.
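
For example, something like this in postgresql.conf (numbers are only
illustrative; check the totals VACUUM VERBOSE reports for what you actually
need, and note these settings only exist up through 8.3, since the FSM moved
to disk in 8.4):

max_fsm_pages = 10000000       # ~10x the page slots actually needed
max_fsm_relations = 10000      # both need a server restart to take effect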



Re: [GENERAL] COPY FROM wish list

2010-02-13 Thread Greg Smith

Marc Mamin wrote:


Looking at the TODO List, I feel that only some aspects of the COPY 
FROM command are addressed.

Could a discussion trigger some activity on this topic  :o)  ?



(Sounds only of crickets chirping)...guess not.  I would love to have 
FEEDBACK added.


The TODO list doesn't have every COPY extension idea around on it 
though.  http://wiki.postgresql.org/wiki/COPY for example is an implicit 
wishlist:  fix all these things documented as troublesome.  Your 
suggestions might make a good addition to that, perhaps in a section 
specifically addressing Missing Features Common to Other Database Loaders.


If you don't mind a speed loss in the process, I've found pgloader to be 
a nicer interface for dealing with slightly odd data imports that don't 
match the built-in COPY restrictions; it does some of the things you're 
looking for:  http://pgfoundry.org/projects/pgloader/


And pg_bulkload aims to handle some of the high-performance features:  
http://pgbulkload.projects.postgresql.org/


The problem with working on the COPY code, from the perspective of 
finding people to pay for the job, is that bulk loading is a one-time 
operation for many people.  It's easier to just suck it up and write a set of 
one-off data massage tools than to try to fix the core to add these 
capabilities.
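
For what it's worth, those one-off massage steps are usually just a bit of
shell in front of \copy, along these lines (file, table, and column layout
are made up here):

# drop a trailing junk column and blank lines, then load the cleaned file
cut -d',' -f1-5 raw_export.csv | grep -v '^$' > cleaned.csv
psql -d mydb -c "\copy target_table FROM 'cleaned.csv' WITH CSV"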


--
Greg Smith    2ndQuadrant    Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com




[GENERAL] Problems with the PostgreSQL backup

2010-02-13 Thread sergio barrera

Hello everyone!

I have a problem that I can't make sense of. My goal is to take a daily
backup of a PostgreSQL database on Linux (Ubuntu 9). As a first step, I ran
the following command in the terminal and got the expected result:

$ pg_dump -h localhost -p 5432 -U postgres -F c -b -v -f /home/sbarrera/BACKUP/copia_BD_Openbravo/fechaC.backup openbravo

Running that command produces a file named fechaC.backup, which is the
expected result. The second step is to put it in a script that can be
launched daily by cron, and that is where I have the problem. The script I
wrote (respaldoBD.sh) is as simple as:

#!/bin/sh
pg_dump -h localhost -p 5432 -U postgres -F c -b -v -f /home/sbarrera/BACKUP/copia_BD_Openbravo/fechaC.backup openbravo

The problem is that when I run, in the terminal:

$ source respaldoBD.sh

it reports the following error:

»: FATAL:  no existe la base de datos «openbravoa base de datos «openbravo
pg_dump: *** se abortó por un error

(i.e. FATAL: database «openbravo» does not exist ... pg_dump: aborted because of an error)

I don't understand this, since I am running the same command, just from a
script this time. Why could this be happening?

Note: I have a .pgpass file with permissions 0600 and the following contents:

*:*:*:postgres:postgres
*:*:*:*:postgres

I hope you can help, because I no longer know what to try. Many thanks.

COMPUTER ENGINEER
ANALYST/PROGRAMMER

Re: [GENERAL] Memory Usage and OpenBSD

2010-02-13 Thread Jeff Ross

Greg Smith wrote:

Jeff Ross wrote:
I think I'm doing it right.  Here's the whole script.  I run it from 
another server on the lan.


That looks basically sane--your description was wrong, not your 
program, which is always better than the other way around.


Note that everything your script is doing and way more is done quite 
easily with pgbench-tools:  
http://git.postgresql.org/gitweb?p=pgbench-tools.git;a=summary


You can just dump a list of scales and client counts you want to test 
and let that loose; it will generate graphs showing TPS vs. 
scale/clients and everything if gnuplot is available.




Cool!  I'll get gnuplot installed and have some runs going before long.


transaction type: TPC-B (sort of)
scaling factor: 70
query mode: simple
number of clients: 70
number of transactions per client: 2
number of transactions actually processed: 140/140
tps = 293.081245 (including connections establishing)
tps = 293.124705 (excluding connections establishing)


This is way more clients than your server is going to handle well on 
pgbench's TPC-B test, which is primarily a test of hard disk write 
speed, but it will get bogged down with client contention in many 
conditions.  Performance degrades considerably as the number of 
clients increases much past the number of cores in the server; 
typically 1.5 to 2X as many clients as cores gives peak throughput.
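
On, say, an 8-core box (just an example to illustrate the ratio), runs in 
this range are more likely to show peak numbers:

pgbench -i -s 70 pgbench        # initialize once at the chosen scale
pgbench -c 12 -t 1000 pgbench   # ~1.5x clients per core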


I'm not sure what's causing your panic--not enough BSD practice.  But 
I think Tom's suggestion of vastly decreasing this setting:


maintenance_work_mem = 240MB

is worth trying.  Reducing it won't hurt pgbench performance on quick 
tests, just how long it takes to get the tests set up.




Okay, I'll try that.  Hopefully if I can get it to run well under 
pgbench, the same setup will work well with Drupal.  The site I was 
worried about when I went to this bigger server has started out a little 
slower than originally projected, so the old server is handling the load.


Sorry about pgtune being a bit aggressive in what it suggests--it's on the 
TODO list to scale it back, and hopefully to provide more helpful 
suggestions for kernel tuning too.


One of my goals (once I get all this figured out) is to provide some 
guidelines on how to best configure OpenBSD to run PostgreSQL.


Thanks for all of your work and tools, Greg! 


-- 
Greg Smith    2ndQuadrant    Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com



Jeff Ross



Re: [GENERAL] Memory Usage and OpenBSD

2010-02-13 Thread Greg Smith

Jeff Ross wrote:
Hopefully if I can get it to run well under pgbench the same setup 
will work well with drupal.  The site I was worried about when I went 
to this bigger server has started a little slower than originally 
projected so the old server is handling the load.


The standard TPC-B-like test pgbench runs is not even remotely like what 
Drupal will do on your server.  It's almost a write-only test--the few 
things that are read in are also being written.


I'd suggest you explore pgbench using the -S flag (or what 
pgbench-tools calls select.sql) to generate select-only loads at various 
sizes and client counts, to get an idea of what your server is going to do 
on loads more like a web application.
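
A minimal example of that kind of run (client and transaction counts are 
just placeholders):

pgbench -S -c 8 -t 10000 yourdb   # select-only workload, 8 clients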


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us




Re: [GENERAL] could not read block 0... error followed by database does not exist

2010-02-13 Thread Janet S Jacobsen
Thanks, Tom.  I will give this a try and let you know what happens.

I don't see anything in the logfile prior to the first "could not read
block 0..." error.

Thanks,
Janet


Janet S Jacobsen jsjacob...@lbl.gov writes:
  Hi.  What I see when I do ls on the current (corrupt)
  $PGDATA/global is

  ...
  -rw--- 1 jsjacobs deepsky 0 Feb  8 18:51 1262
  ...
  -rw--- 1 jsjacobs deepsky   602 Feb 12 17:42 pg_auth
  -rw--- 1 jsjacobs deepsky  8192 Feb 12 17:42 pg_control
  -rw--- 1 jsjacobs deepsky 0 Feb 12 17:42 pg_database
  -rw--- 1 jsjacobs deepsky 10927 Feb 12 21:57 pgstat.stat

Looks about as I'd expect from your description.  Something clobbered
1262, and then the flat file pg_database got updated from that.
You might want to look around at what was happening Feb 8 18:51.

  I have a pgdump from a month ago.  Are you saying to restore
  that to a different location and then copy over
  $PGDATA/global/1262?  Do I also need to copy over
  $PGDATA/global/pg_database?

Right on both.  Of course, it'd be a good idea to first make a backup of
what you have in $PGDATA now (all of it) --- you want to be able to get
back to where you are if this makes things worse.
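
In shell terms, the sequence being described is roughly this (paths are
placeholders; the scratch cluster is a fresh initdb that you've restored the
month-old dump into):

pg_ctl -D $PGDATA stop
cp -a $PGDATA $PGDATA.broken-copy           # full safety copy first

cp /scratch/pgdata/global/1262        $PGDATA/global/1262
cp /scratch/pgdata/global/pg_database $PGDATA/global/pg_database

pg_ctl -D $PGDATA start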

regards, tom lane




[GENERAL] Hosting Account with PostgreSQL and PHP?

2010-02-13 Thread Andre Lopes
Hi,

I need a hosting account with PostgreSQL and PHP. I have signed up for an account
with HostNine.com, but the PostgreSQL version is 8.1 and it doesn't allow creating
languages and triggers, so I can't get the website working...

Please tell me if you know of a good host with PostgreSQL that allows
triggers and all the functionality of PostgreSQL.

Best Regards,


Re: [GENERAL] Problems with the PostgreSQL backup

2010-02-13 Thread Diego Schulz
2010/2/13 sergio barrera xpbarr...@hotmail.com

  Hello everyone!

 I have a problem that I can't make sense of. My goal is to take a daily
 backup of a PostgreSQL database on Linux (Ubuntu 9). As a first step, I ran
 the following command in the terminal and got the expected result:

 $ pg_dump -h localhost -p 5432 -U postgres -F c -b -v -f
 /home/sbarrera/BACKUP/copia_BD_Openbravo/fechaC.backup openbravo

 Running that command produces a file named fechaC.backup, which is the
 expected result. The second step is to put it in a script that can be launched
 daily by cron, and that is where I have the problem. The script I wrote
 (respaldoBD.sh) is as simple as:

 #!/bin/sh
 pg_dump -h localhost -p 5432 -U postgres -F c -b -v -f
 /home/sbarrera/BACKUP/copia_BD_Openbravo/fechaC.backup openbravo

 The problem is that when I run, in the terminal:
 $ source respaldoBD.sh
 it reports the following error:
 »: FATAL:  no existe la base de datos «openbravoa base de datos «openbravo
 pg_dump: *** se abortó por un error
 (i.e. FATAL: database «openbravo» does not exist ... pg_dump: aborted because of an error)

 I don't understand this, since I am running the same command, just from a
 script this time. Why could this be happening?

 Note: I have a .pgpass file with permissions 0600 and the following contents:

 *:*:*:postgres:postgres
 *:*:*:*:postgres

 I hope you can help, because I no longer know what to try. Many thanks.



From which user's crontab is the script being run?  root's?

Could you show us the line you have in the crontab?

Regards,

diego


Re: [GENERAL] Problems with the PostgreSQL backup

2010-02-13 Thread Diego Schulz
2010/2/13 sergio barrera xpbarr...@hotmail.com

  Hello everyone!

 I have a problem that I can't make sense of. My goal is to take a daily
 backup of a PostgreSQL database on Linux (Ubuntu 9). As a first step, I ran
 the following command in the terminal and got the expected result:

 $ pg_dump -h localhost -p 5432 -U postgres -F c -b -v -f
 /home/sbarrera/BACKUP/copia_BD_Openbravo/fechaC.backup openbravo

 Running that command produces a file named fechaC.backup, which is the
 expected result. The second step is to put it in a script that can be launched
 daily by cron, and that is where I have the problem. The script I wrote
 (respaldoBD.sh) is as simple as:

 #!/bin/sh
 pg_dump -h localhost -p 5432 -U postgres -F c -b -v -f
 /home/sbarrera/BACKUP/copia_BD_Openbravo/fechaC.backup openbravo

 The problem is that when I run, in the terminal:
 $ source respaldoBD.sh
 it reports the following error:
 »: FATAL:  no existe la base de datos «openbravoa base de datos «openbravo
 pg_dump: *** se abortó por un error
 (i.e. FATAL: database «openbravo» does not exist ... pg_dump: aborted because of an error)

 I don't understand this, since I am running the same command, just from a
 script this time. Why could this be happening?

 Note: I have a .pgpass file with permissions 0600 and the following contents:

 *:*:*:postgres:postgres
 *:*:*:*:postgres

 I hope you can help, because I no longer know what to try. Many thanks.



If you use root's crontab, you can invoke pg_dump through the su command.
It would look more or less like this:

(inside the script)

# dump in 'archive' format
# (assumes NOW has been set earlier, e.g. NOW=$(date +%Y%m%d))
su sbarrera -c '/usr/bin/pg_dump -Fc mibasededatos' > /algun/lugar/backus/backup-${NOW}.dmp

# also dump as plain-text sql
su sbarrera -c '/usr/bin/pg_dump mibasededatos' > /algun/lugar/backus/backup-${NOW}.sql
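
And the crontab entry itself (in root's crontab; the time and path are only
examples) would be something like:

# run the backup script every day at 02:30
30 2 * * * /bin/sh /home/sbarrera/respaldoBD.sh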



Regards,

diego


Re: [GENERAL] how to create a new composite type using already existing composite types

2010-02-13 Thread Jeff Davis
On Wed, 2010-02-10 at 03:46 +, Iain Barnett wrote:

 CREATE TYPE inventory_item2 AS (
     name        text,
     supplier_id integer,
     price       numeric,
     size        integer
 );
 
 
 but it would be handy if I could reuse inventory_item instead of
 having to retype the whole lot. I can't work out or find the right
 syntax; can anyone show me how? Any help would be much appreciated.

What about:

  CREATE TYPE inventory_item2 AS (
ii inventory_item,
size integer
  );

or:

  CREATE TABLE inventory_item2 (
LIKE inventory_item,
size integer
  );

and that will automatically create the type.
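
A quick (made-up) example of using the nested-type version, to show how the
wrapped fields are reached:

  CREATE TABLE stock (item inventory_item2, on_hand integer);

  -- composite literal; the inner inventory_item value is itself quoted
  INSERT INTO stock VALUES ('("(widget,42,9.99)",3)', 100);

  -- fields of the inner type go through the wrapper column
  SELECT ((item).ii).name, ((item).ii).price, (item).size, on_hand FROM stock;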

Regards,
Jeff Davis





[GENERAL] unable to restore from a pgdump file

2010-02-13 Thread Janet S Jacobsen
Hi.  I am trying to restore a database from a pgdump file, something
that I've successfully done before.  

I created a new database cluster, created the database that I want
to restore using

create database subptf with template = template0;

and then I tried to restore the database using

./psql subptf < subptf.pgdump

The first part of what I get is

ERROR:  syntax error at or near PGDMP
LINE 1: PGDMP
^
GRANT
GRANT
ERROR:  syntax error at or near 
LINE 1:  ptffield integer,
^
ERROR:  syntax error at or near 
LINE 1:  f1 integer,
^
ERROR:  syntax error at or near 
LINE 1: AS $$
^
ERROR:  syntax error at or near 
LINE 1: AS $_X$
^
ERROR:  syntax error at or near 
LINE 1: AS $_X$
^
ERROR:  syntax error at or near 
LINE 1: AS $$
...

The last part of what I get is 

...
vJ?N'g...@??q7?2g?0k??$??slz^p???k??|A?ʞ?A??f-j
\p: extra argument 
zr_?q«+_???c?ܦ?e??...@???ߒqy+?v?*?x?d}x?r?ǯ?67??e???y?/4o?f?u??ڇw?5???.??an...@??_???--kd??i?satb?akƳ?ҕz?k???{?zUt_???{
 ?Tt
???Au.?V9?rpZ?g?l??8???0³???3
 ޒ}?-??Vӭf+,?mo?HN??+?
  
?R??z?:?x[bk˯?R_?8ed?ЀI???=%?2H??B??? ignored
\p: extra argument :P ignored
\p: extra argument ;???P?+?1?i?)??a=? ignored
unterminated quoted string
invalid command \??c?
  ??Փ%??U???3+??U?SƳǕ???9?.???e+
invalid command 
\W??_tUÁ+MK??z?ܦ?H??`o??-/s??n?s?3?sy?6{7??^grp?...@?l#??g7hw??j?ӣ?t30={??
invalid command \??g.???
invalid command \AJ??E?$?TC?r?f]?B??
invalid command \??Hzp?Y2??8?@
invalid command \Ā?(?%?O
\connect: FATAL:  database ???6??$?^ does not exist

I feel like I'm making some beginner's error here, but I haven't
run into this before.

I am running 8.3.7 on a Linux system.

Thanks,
Janet





Re: [GENERAL] unable to restore from a pgdump file

2010-02-13 Thread Steve Atkins

On Feb 13, 2010, at 7:36 PM, Janet S Jacobsen wrote:

 Hi.  I am trying to restore a database from a pgdump file, something
 that I've sucessfully done before.  
 
 I created a new database cluster, created the database that I want
 to restore using
 
 create database subptf with template = template0;
 
 and then I tried to restore the database using
 
 ./psql subptf < subptf.pgdump
 
 The first part of what I get is
 
 ERROR:  syntax error at or near PGDMP

I'd guess that you created the dump in custom format, rather than plain SQL format.

If so, you'd need to use pg_restore to restore it rather than psql.
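
Something like this, using the names from your message (adjust connection
options as needed):

# a custom-format dump starts with the magic string PGDMP
head -c 5 subptf.pgdump

# restore into the already-created database
pg_restore -d subptf subptf.pgdump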

Cheers,
  Steve

