Re: [GENERAL] Recover corrupted data

2017-04-19 Thread Alexandre
It appears to be just one table; I'm trying to back up the database without that table.

But is there no solution for this kind of error?
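
A last-resort sketch for salvaging what is still readable, assuming the
damage is confined to a few data pages (table and database names are
hypothetical). zero_damaged_pages is a real server setting, but it
permanently discards the bad pages, so copy the data directory at the
filesystem level first:

psql -d mydb <<'SQL'
SET zero_damaged_pages = on;        -- superuser only; bad pages read as empty
CREATE TABLE broken_table_salvage AS
    SELECT * FROM broken_table;     -- rows on damaged pages are lost
SQL

pg_dump -t broken_table_salvage mydb > salvage.sql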

On Wed, Apr 19, 2017 at 1:11 PM, Moreno Andreo <moreno.and...@evolu-s.it>
wrote:

> On 19/04/2017 17:49, Vick Khera wrote:
>
> 1) restore from backup
> 2) fix whatever configuration you made to let Windows (or your hardware)
> destroy your data on a crash. Is there some RAID cache that is not backed up
> by a battery?
>
>
> IMHO, if there's no hurry, it'd be better to start with point 2, because if
> your filesystem (or hardware) is somehow badly broken, it will happen
> again...
> If you have more than one database in your cluster, you can find which
> database has been corrupted and restore just that one, instead of
> the whole cluster.
> Cheers
> Moreno
>
>
>
> On Wed, Apr 19, 2017 at 10:18 AM, Alexandre <psy...@gmail.com> wrote:
>
>> Hello,
>>
>> The computer had an unexpected shutdown; it is a Windows machine.
>> Now some data appears to be corrupted; I am receiving exceptions like
>> this:
>>
>> ERROR: could not read block 0 in file "base/16393/16485": read only
>> 0 of 8192 bytes
>>
>> Is there some way to correct this?
>>
>
>
>


Re: [GENERAL] Recover corrupted data

2017-04-19 Thread Alexandre
1) We have a backup, but it's from last month; I will try to back up the
data without the table that raises the exception.
2) We don't use RAID.

Thank you

On Wed, Apr 19, 2017 at 12:49 PM, Vick Khera <vi...@khera.org> wrote:

> 1) restore from backup
> 2) fix whatever configuration you made to let Windows (or your hardware)
> destroy your data on a crash. Is there some RAID cache that is not backed up
> by a battery?
>
>
> On Wed, Apr 19, 2017 at 10:18 AM, Alexandre <psy...@gmail.com> wrote:
>
>> Hello,
>>
>> The computer had an unexpected shutdown; it is a Windows machine.
>> Now some data appears to be corrupted; I am receiving exceptions like
>> this:
>>
>> ERROR: could not read block 0 in file "base/16393/16485": read only
>> 0 of 8192 bytes
>>
>> Is there some way to correct this?
>>
>
>


[GENERAL] Recover corrupted data

2017-04-19 Thread Alexandre
Hello,

The computer had an unexpected shutdown; it is a Windows machine.
Now some data appears to be corrupted; I am receiving exceptions like this:

ERROR: could not read block 0 in file "base/16393/16485": read only
0 of 8192 bytes

Is there some way to correct this?


[GENERAL] plpythonu

2008-01-17 Thread Alexandre da Silva
Hello,
Can someone tell me whether it is safe to create external Python modules and
import them into functions/procedures/triggers?

I am doing the following:

function/procedure/trigger:

CREATE OR REPLACE FUNCTION tabela_be_i_tg_fx() RETURNS trigger AS $body$
from dbfunctions.postgres.pg_trigger import TestTrigger as fx
fe = fx()
return fe.execute(args=TD["args"], event=TD["event"], when=TD["when"],
    level=TD["level"], name=TD["name"], relid=TD["relid"], new=TD["new"],
    old=TD["old"], pl_py=plpy)
$body$
LANGUAGE plpythonu;


at python module I have:


class TestTrigger(object):
    def execute(self, args, event, when, level, name, relid, new, old,
                pl_py):
        new["group_name"] = '__modified__'
        return 'MODIFY'


all this works properly (on Windows and Linux), but I don't know if this is
the correct way to use plpythonu, or whether I will have problems later.
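
For completeness, a minimal sketch of wiring the function to a table so the
TD dictionary is filled in (the table name "tabela" and the BEFORE INSERT
timing are assumptions read off the function's name):

psql -d mydb <<'SQL'
CREATE TRIGGER tabela_be_i_tg
    BEFORE INSERT ON tabela
    FOR EACH ROW
    EXECUTE PROCEDURE tabela_be_i_tg_fx();
SQL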

Another question: I have read on some discussion list (an old message from
2003) about the possibility of plpython being removed from
PostgreSQL; is this information still valid?

Sorry for my bad English.

Thanks for all.

-- 
Alexandre da Silva
Analista de Sistemas - Bacharel em Sistemas de Informação (2003-2007)


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] Ordering problem with varchar (DESC)

2007-01-31 Thread Alexandre Leclerc
Hi all,

We have a column (varchar) that holds a plain-text time, and it is indexed.
When I do a query with the index, all the data is in the right order,
but when I use ORDER BY ... DESC, the order is messed up. Example:

By index 1: (date, time, data)
SELECT * from t1;
date (date type)  time (varchar)  data
2007-01-17         8h40           d1
2007-01-30         9h30           d2
2007-01-30        12h00           d3
2007-01-30        13h45           d4
2007-01-30        17h20           d5

SELECT * from t1 ORDER BY date, time DESC;
date (date type)  time (varchar)  data
2007-01-30         9h30           d2
2007-01-30        17h20           d5
2007-01-30        13h45           d4
2007-01-30        12h00           d3
2007-01-17         8h40           d1

I don't know why; it is as if the 'time' varchar were trimmed and then
used for the ordering.

How can I fix this so that the result is exactly like the first one but
perfectly reversed in its order?

Best regards.

-- 
Alexandre Leclerc

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Ordering problem with varchar (DESC)

2007-01-31 Thread Alexandre Leclerc
Brandon Aiken wrote:
 As others have said, VARCHAR is the incorrect data type to be using
 here.  You should either be using INTERVAL or TIMESTAMP depending on
 what you want.  You can even combine date and time into a single
 TIMESTAMP field.  Only use VARCHAR when no other data type will do.

I dearly would like to do that, but it is impossible (because of the
software/technology that uses the database). I would have used a
TIMESTAMP for that.

 Try SELECT * from t1 ORDER BY date, time;, and I suspect you will get:
 date (date type)  time (varchar)  data
 2007-01-17         8h40           d1
 2007-01-30        12h00           d3
 2007-01-30        13h45           d4
 2007-01-30        17h20           d5
 2007-01-30         9h30           d2
 
 To use your current schema, you need to zero-fill your hours, so 9h30
 needs to be 09h30 and so forth.

Exactly, it is sorted that way. This is what I'll do: insert a leading 0.
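
A sketch of that one-time fix, assuming the short hours carry a leading
space as in the example above (table and column names from the thread):

psql -d mydb <<'SQL'
UPDATE t1
   SET "time" = lpad(ltrim("time"), 5, '0')   -- ' 9h30' -> '09h30'
 WHERE "time" !~ '^[0-9][0-9]h';              -- leaves '12h00' etc. alone
SQL

With fixed-width zero-filled values, the plain ORDER BY date, time DESC
reverses the index order exactly.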

Best regards.

-- 
Alexandre Leclerc

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Ordering problem with varchar (DESC)

2007-01-31 Thread Alexandre Leclerc
Daniel Verite wrote:
   Alexandre Leclerc wrote:
 
 SELECT * from t1 ORDER BY date, time DESC;
 date (date type)  time (varchar)  data
 2007-01-30         9h30           d2
 2007-01-30        17h20           d5
 2007-01-30        13h45           d4
 2007-01-30        12h00           d3
 2007-01-17         8h40           d1

 I don't know why; it is as if the 'time' varchar were trimmed and then
 used for the ordering.

 How can I fix this so that the result is exactly like the first one but
 perfectly reversed in its order?
 
 I believe ORDER BY date, replace(time,'h',':')::time DESC would work.

That worked perfectly. Unfortunately I can't control the SQL query in
the situation I am in. But... I know it is the white space that causes
the issue.

 Or just use directly a time datatype instead of varchar, or only one datetime
 column instead of the two, and order by that column.
 
 Or use a leading '0' instead of a leading space when the hour is less than 
 10...

Yep, this is the only solution that will work for that situation right
now: inserting a leading '0' instead of a white space.

Thank you for your help.
Best regards.

-- 
Alexandre Leclerc

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Ordering problem with varchar (DESC)

2007-01-31 Thread Alexandre Leclerc
Hi all,

We have a column (varchar) that holds a plain-text time, and it is indexed.
When I do a query with the index, all the data is in the right order,
but when I use ORDER BY ... DESC, the order is messed up. Example:

By index 1: (date, time, data)
SELECT * from t1;
date (date type)  time (varchar)  data
2007-01-17         8h40           d1
2007-01-30         9h30           d2
2007-01-30        12h00           d3
2007-01-30        13h45           d4
2007-01-30        17h20           d5

SELECT * from t1 ORDER BY date, time DESC;
date (date type)  time (varchar)  data
2007-01-30         9h30           d2
2007-01-30        17h20           d5
2007-01-30        13h45           d4
2007-01-30        12h00           d3
2007-01-17         8h40           d1

I don't know why; it is as if the 'time' varchar were trimmed and then
used for the ordering.

How can I fix this so that the result is exactly like the first one but
perfectly reversed in its order?

Best regards.

-- 
Alexandre Leclerc
Special Projects

Ipso Systèmes Stratégiques inc.
176 boul. Harwood (suite 10), Vaudreuil-Dorion, QC, J7V 1Y2
Tel: 450-424-6847 ext. 108 / 1-800-879-4776
Fax: 450-424-8439
http://www.ipso.ca/

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] pg_locks: who is locking ? (SOLVED!)

2006-10-17 Thread Alexandre Arruda


Tom Lane wrote:

Alexandre Arruda [EMAIL PROTECTED] writes:
But pg_stat_activity joined with pg_locks only gives me information
about the lock itself.

Really, I want a (possibly) simple piece of information: who is locking me?


You need a self-join to pg_locks to find the matching lock that is held
(not awaited) by some process, then join that to pg_stat_activity to
find out who that is.


Tom, thanks for explanation !!!
And if someone needs them, here are my views (sorry if I did this the
long and complicated way)... ;)


1) For transaction locks

create or replace view locks_tr_aux as
SELECT a.transaction, a.pid as pid_locked, b.pid as pid_locker,
       c.usename as user_locked
FROM pg_locks a, pg_locks b, pg_stat_activity c
WHERE b.granted = true and a.granted = false
  and a.transaction = b.transaction and a.pid = c.procpid;


create or replace view locks_tr as
SELECT a.*, c.usename as user_locker
FROM locks_tr_aux a, pg_stat_activity c
WHERE a.pid_locker = c.procpid;



2) For tables locks

create or replace view locks_tb_aux as
SELECT a.relation::regclass as table, a.transaction, a.pid as pid_locked,
       b.pid as pid_locker, c.usename as user_locked
FROM pg_locks a, pg_locks b, pg_stat_activity c
WHERE b.granted = true and a.granted = false
  and a.relation = b.relation and a.pid = c.procpid;


create or replace view locks_tb as
SELECT a.*, c.usename as user_locker
FROM locks_tb_aux a, pg_stat_activity c
WHERE a.pid_locker = c.procpid;



3) For transactionid locks

create or replace view locks_trid_aux as
SELECT a.transaction, a.pid as pid_locked, b.pid as pid_locker,
       c.usename as user_locked
FROM pg_locks a, pg_locks b, pg_stat_activity c
WHERE b.granted = true and a.granted = false
  and a.transactionid = b.transactionid and a.pid = c.procpid
  and a.locktype = 'transactionid';


create or replace view locks_trid as
SELECT a.*, c.usename as user_locker
FROM locks_trid_aux a, pg_stat_activity c
WHERE a.pid_locker = c.procpid;



select * from locks_tr;
select * from locks_tb;
select * from locks_trid;


Best Regards,


Alexandre
Aldeia Digital

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] pg_locks: who is locking ?

2006-10-16 Thread Alexandre Arruda

Hi,

My database has a lot of locks that are not granted at many moments of the day.

Can I create a view that returns something like this?

User    Granted  Table  Who_is_locking_me  PID
-----   -------  -----  -----------------  ----
joe     f        foo    frank              1212
jeff    f        foo    frank              1313
ann     f        foo    frank              1414
frank   t        foo
(...)

(Or the locked transactions, if the table couldn't be retrieved)

pg_locks view does not give me WHO is locking...


Best regards,

Alexandre



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] pg_locks: who is locking ?

2006-10-16 Thread Alexandre Arruda

Alvaro Herrera wrote:

Alexandre Arruda wrote:

Hi,

My database has a lot of locks that are not granted at many moments of the day.

Can I create a view that returns something like this?

User    Granted  Table  Who_is_locking_me  PID
-----   -------  -----  -----------------  ----
joe     f        foo    frank              1212
jeff    f        foo    frank              1313
ann     f        foo    frank              1414
frank   t        foo
(...)

(Or the locked transactions, if the table couldn't be retrieved)


You can look up more data about a backend by joining pg_locks to
pg_stat_activity, using the PID (I think it's called procpid on one view
and pid on the other).



Hi,

But pg_stat_activity joined with pg_locks only gives me information
about the lock itself.

Really, I want a (possibly) simple piece of information: who is locking me?

Today, I *presume* this information by manually searching pg_locks:

1) Search for the locked tables
2) Search for all locks GRANTED on these tables
3) Generally, the oldest PID is the locker

Not so smart, I think. :)

Best regards,

Alexandre

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] pg_dump pg_restore suggestion

2006-07-24 Thread Alexandre Arruda
Hi,

My wishlist for pg_{dump,restore} has only one item:

1) That pg_restore -t table file.bkp should return not only the table,
but also the related PK and indexes, like pg_dump -t table database
does with an operational database.
If you drop a table from a database, this drops the whole table and,
obviously, its indexes. If you want to restore one table from a file,
you currently have to know beforehand which indexes and PK the table
has and restore each manually.

Thanks,

Alexandre

Pavel Golub wrote:
 Hello, pgsql-general.
 
 Is it possible to create not only executable versions of pg_dump and
 pg_restore, but also libraries (.so, .dll) so developers can use them
 freely, the same as the libpq client library?
 
 I suppose it would be very useful. For me it's for sure.
 

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] pg_dump -t pg_restore -t

2006-04-27 Thread alexandre - aldeia digital
Richard Huxton wrote:
 alexandre - aldeia digital wrote:
 Hi,

 If I do:

 pg_dump -Fc -t TABLE database > table-custom.sql

 The pg_dump returns the DDL of the table, the data, and the DDL for
 indexes and PK. If I use -s, only the structure is returned, but it
 includes all elements.

 But if I do:

 pg_dump -Fc database > backup-custom.file
 pg_restore -Fc -t TABLE backup-custom.file > table-plain.sql

 Only the DDL of the table and the data are returned, but not the indexes, etc.

 Question:

 How can I restore a table with all its other dependencies from a
 custom-format database file?
 
 The -l / -L options let you create/read back a control file listing all
 the database objects. You can comment out/edit this to control pretty
 much everything.

Ok. But I have a lot of tables, and these tables have a lot of indexes.
How can I get all the objects related to a table?

Thanks,

Alexandre
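
A sketch of the -l/-L workflow Richard describes; the grep filter is only a
heuristic, since it assumes the index and constraint names contain the
table name:

pg_restore -l backup-custom.file > objects.list
# keep the TOC entries that mention the table; review the result by hand,
# as indexes named differently must be re-added manually
grep -i 'mytable' objects.list > mytable.list
pg_restore -L mytable.list -d database backup-custom.file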



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] pg_dump -t pg_restore -t

2006-04-26 Thread alexandre - aldeia digital
Hi,

If I do:

pg_dump -Fc -t TABLE database > table-custom.sql

The pg_dump returns the DDL of the table, the data, and the DDL for
indexes and PK. If I use -s, only the structure is returned, but it
includes all elements.

But if I do:

pg_dump -Fc database > backup-custom.file
pg_restore -Fc -t TABLE backup-custom.file > table-plain.sql

Only the DDL of the table and the data are returned, but not the indexes, etc.

Question:

How can I restore a table with all its other dependencies from a
custom-format database file?

PG 8.1.3

Thanks,

Alexandre


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] differences between pg_dump and pg_restore with -t

2006-02-13 Thread alexandre - aldeia digital

Hi,

If I do:

pg_dump -Fc -t table database > table.sql

The pg_dump returns the DDL of the table, the data, and the DDL for
indexes and PK. If I use -s, only the structure is returned, but everything
is included.

But if I do:

pg_dump -Fc database > backup.file
pg_restore -Fc -t table backup.file > table.sql

Only the DDL of the table and the data are returned, but not the indexes, etc.

Question:

How can I restore a table with all its other dependencies from a
custom-format database file?

PG: 8.1.2 and 8.0.6

Thanks,

Alexandre




---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] pg_dump error codes

2006-01-03 Thread alexandre - aldeia digital


Michael Fuhr wrote:

On Mon, Jan 02, 2006 at 08:45:28AM -0200, alexandre - aldeia digital wrote:
In my Linux bash backup scripts, I wish to send an e-mail when an error
occurs in the pg_dump process. And if possible, I want to send the error
output via e-mail.


Does anybody know how to capture the output and send it by e-mail ONLY
if an error occurs?


This is more of a shell scripting question than a PostgreSQL question.
See your shell's documentation and read about I/O redirection and
control structures like if.


I know, but I couldn't find out whether pg_dump returns an error code after
a problem.




Here's a simple but only minimally-tested example that might give
you some ideas:

#!/bin/sh

dumpout=/tmp/dump.out.$$
dumperr=/tmp/dump.err.$$
erruser=root

trap 'rm -f $dumperr $dumpout; exit 1' 2 15

if ! pg_dump "$@" > $dumpout 2> $dumperr
then
    rm -f $dumpout
    mail -s "Dump errors" $erruser < $dumperr
fi

rm -f $dumperr



Thank you very much!  :)
This is all I need...


Alexandre


---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] pg_dump error codes

2006-01-02 Thread alexandre - aldeia digital

Hi,

(maybe an idiot question)

In my Linux bash backup scripts, I wish to send an e-mail when an error
occurs in the pg_dump process. And if possible, I want to send the error
output via e-mail.


Does anybody know how to capture the output and send it by e-mail ONLY
if an error occurs?


Thanks

Alexandre

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Start up script for Fedora Core 3

2005-08-24 Thread Alexandre Barros

Chris Guo wrote:


Dear all,

 

We are using Fedora Core 3 in our company, and we installed PostgreSQL
8.0.3 as the database system. I wonder if anybody has the startup
script for this version so we can start the postgresql service
automatically after we reboot the server.


Any help will be highly appreciated.

 


Chris


personally i use something like this in my rc.local:

rm -f /opt/pgsql-8.0.3/dbdata/postmaster.pid
sudo -u pgsql /opt/pgsql-8.0.3/bin/pg_ctl -o "-i" -D \
    /opt/pgsql-8.0.3/dbdata/ -l /opt/pgsql-8.0.3/dbdata/logfile start


does the job, but i'm not sure i'd recommend it... ;)
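
A sketch of the more conventional route on Fedora (paths assumed): the
PostgreSQL source tarball ships an init script that can be adapted, which
also avoids blindly deleting postmaster.pid while a postmaster might still
be running.

cp postgresql-8.0.3/contrib/start-scripts/linux /etc/init.d/postgresql
chmod +x /etc/init.d/postgresql
chkconfig --add postgresql
chkconfig postgresql on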


best regards.





---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] PostgreSQL 8.0.3 limiting max_connections to 64 ?

2005-08-18 Thread Alexandre Barros




First of all, thanks in advance for all the replies.

Joshua D. Drake wrote:

>> max_connections, shared_buffers, shmmax
>> were tweaked, but the server does not seem to respect the 500
>> max_connections...
>>
>> i *know* i'm missing something obvious, but, what could be wrong ?...
>> i'm lost... any help would be most appreciated... please.
>
> A completely stupid response but don't take it personally.
> Did you remove the # from in front of the max_connections parameter and
> do a COMPLETE restart?

no worries, i didn't take it personally. ;)

---
Tom Lane wrote:

> Alexandre Barros [EMAIL PROTECTED] writes:
>> max_connections, shared_buffers, shmmax were tweaked, but the server
>> does not seem to respect the 500 max_connections...
>
> Er ... did you restart the postmaster after increasing those parameters?
>
>			regards, tom lane

Er... at least twice: once because of a power failure with a defective
no-break, and the second time i restarted all services (and before that,
i kept using -- over and over -- the "kill -HUP" on postmaster, and
trusted the "reloading parameters" message in the logfile...);

---
Sebastian Hennebrueder wrote:

> 500 parallel connections is a very, very large number.

i agree... but i needed to test how (and why) things were (not)
working... and the "sorry, too many clients already" message was
driving me crazy...

> You should verify whether one application is failing to close
> connections, or whether you can create a connection pool.

the connection pool (i was testing pgpool for that) was a possibility,
but i need things at least "barely working" first... and an obscene
value for "max_connections" was my best try.


> Use select * from pg_stat_activity to see which connections are open
> from which client and whether they are idle or not. See the PostgreSQL
> docs for more information on these queries.

i have lots of idle connections showing on a "ps|grep": squirrel (apache
with persistent connections), amavisd and courier-authlib all keep
idle connections for a long time, but that pg_stat query only shows
stuff like this:

postfix=# select * from pg_stat_activity ;
 datid | datname | procpid | usesysid | usename |       current_query        | query_start
-------+---------+---------+----------+---------+----------------------------+-------------
 17230 | postfix |   29852 |      100 | postfix | command string not enabled |
 (...)
(47 rows)

i'm not yet sure what this means; we'll be looking over the
postgresql docs...
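
The "command string not enabled" placeholder just means query texts are not
being collected; on 8.0 that is governed by a postgresql.conf setting. A
sketch, with the data directory path assumed:

psql -d postfix -c "SHOW stats_command_string;"
# in postgresql.conf:
#   stats_command_string = on
pg_ctl reload -D /var/lib/pgsql/data

With that enabled, current_query shows which of the 47 sessions are truly
sitting idle.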

---
Sven Willenberger wrote:

> Can you post the relevant portions of your postgresql.conf file? Do you
> see any error messages when starting the database up (perhaps about
> shared memory or such)?

everything not commented out from the postgresql.conf:

max_connections = 500
        # note: increasing max_connections costs about 500 bytes of shared
        # memory per connection slot, in addition to costs from shared_buffers
        # and max_locks_per_transaction.
superuser_reserved_connections = 2
shared_buffers = 1004           # min 16, at least max_connections*2, 8KB each

debug_print_parse = false
debug_print_rewritten = false
debug_print_plan = false
debug_pretty_print = false
log_statement = 'none'

log_parser_stats = false 
log_planner_stats = false 
log_executor_stats = false 
log_statement_stats = false 

statement_timeout = 6 # 0 is disabled, in milliseconds


/proc/sys/kernel/shmmax == 33554432

and no error messages at all on the startup logfile...


> We also run a postfix mailserver (with maildrop,
> courier-imap/vauthchkpw, etc) against a postgresql database with
> max_connections set to 512 (FreeBSD 5.x machine). On the delivery end of
> things we pool the connections from the postfix server using proxymap(8)
> (which helped ease the connections load).
>
> Sven

aha! 512 connections! someone as "exaggerated" as me; i feel happy
now! ;)

i will also study proxymap, thanks!

Alexandre Barros





Re: [GENERAL] postgresql 8 abort with signal 10

2005-06-02 Thread Alexandre Biancalana
I changed from postgresql to mysql and everything now is great ;)

Same machine, same os, etc...

On 6/2/05, Roman Neuhauser [EMAIL PROTECTED] wrote:
 # [EMAIL PROTECTED] / 2005-05-03 17:56:53 -0300:
  The FreeBSD is the last STABLE version. I can try to change some
  hardware, I already changed memory, what can I try now ? the processor
  ? motherboard ??
 
  On 5/3/05, Scott Marlowe [EMAIL PROTECTED] wrote:
   On Tue, 2005-05-03 at 15:04, Alexandre Biancalana wrote:
Thank you for the detailed explanation Scott, they are very handy !!
   
I reduced the shared_buffers to 32768, but the problem still occurs.
   
Any other idea ??
  
   Yeah, I had a sneaking suspicion that shared_buffers wasn't causing the
   issue really.
  
   Sounds like either a hardware fault, or a BSD bug.  I'd check the BSD
   mailing lists for mention of said bug, and see if you can grab a spare
   drive and install the last stable version of FreeBSD 4.x and if that
   fixes the problem.
  
   If you decide to try linux, avoid the 2.6 kernel, it's still got
   issues...  2.4 is pretty stable.
  
   I really doubt it's a problem in postgresql itself though.
 
 For the sake of archives, what was causing the SIGBUSes?
 
 --
 How many Vietnam vets does it take to screw in a light bulb?
 You don't know, man.  You don't KNOW.
 Cause you weren't THERE. http://bash.org/?255991


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Connecting to Postgres from LAN

2005-05-31 Thread Alexandre Lollini
on 30/05/05 20:47, Andrus [EMAIL PROTECTED] wrote:

 I installed Postgres 8 on Windows XP and added the line
 
 host    all     all     168.179.0.1/32    trust
 
 to pg_hba.conf file
 
 
 When connecting from the LAN to Postgres with the user name postgres, I got the error
 
 no pg_hba.conf entry for host 168.179.0.10, user postgres, database
 mydb, SSL off
 
 
 How to enable connections from the LAN?
 
 Andrus 

The solution is to edit the postgresql.conf file and then enable LAN
connections.

But before doing so, I urge you:

Your line in pg_hba.conf, as is, is EXTREMELY INSECURE.
I suggest being tighter than "all" and "trust":

Use a comma-separated list of users (exclude postgres).
Use a comma-separated list of databases (excluding templates and pg_).

Use password instead of trust, at least.

So create some users with limited grants, with a password, even if these are
script users.

No network can be trusted.

User postgres is (and should stay) the only user capable of DROP
DATABASE x.

I strongly recommend not using user postgres for runtime remote
connections.
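
A sketch of the tightened setup on 8.0 (the user, database and network
values are examples only):

# postgresql.conf -- accept TCP connections beyond localhost:
#   listen_addresses = '*'
# pg_hba.conf -- one named user, one database, password auth instead of trust:
#   host    mydb    appuser    168.179.0.0/24    md5
pg_ctl restart -D /path/to/data   # listen_addresses needs a full restart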


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] Problem with void integer

2005-05-30 Thread Alexandre Lollini
I have a big problem jumping from 7.2 to 8.0.3.
In my application I made major use of int and float,
and of the difference between '0' and an empty NULL '' content.

Now it seems to me that an empty value is not allowed for an integer?

What should I do?


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


[GENERAL] postgresql 8 abort with signal 10

2005-05-03 Thread Alexandre Biancalana
May  3 06:58:54 e-filter postgres[13484]: [24-1] FATAL:  the database
system is starting up
May  3 06:58:54 e-filter postgres[13485]: [24-1] FATAL:  the database
system is starting up
May  3 06:58:55 e-filter postgres[13488]: [24-1] FATAL:  the database
system is starting up
May  3 06:58:57 e-filter postgres[13478]: [32-1] LOG:  database system is ready


and some time later it occurs again:
May  3 09:59:38 e-filter postgres[250]: [24-1] LOG:  server process
(PID 34743) was terminated by signal 10
May  3 09:59:38 e-filter postgres[250]: [25-1] LOG:  terminating any
other active server processes
May  3 09:59:38 e-filter postgres[35215]: [24-1] WARNING:  terminating
connection because of crash of another server process
May  3 09:59:38 e-filter postgres[35215]: [24-2] DETAIL:  The
postmaster has commanded this server process to roll back the current
transaction and exit, because another server
May  3 09:59:38 e-filter postgres[35215]: [24-3]  process exited
abnormally and possibly corrupted shared memory.
May  3 09:59:38 e-filter postgres[35215]: [24-4] HINT:  In a moment
you should be able to reconnect to the database and repeat your
command.
May  3 09:59:38 e-filter postgres[34744]: [24-1] WARNING:  terminating
connection because of crash of another server process
May  3 09:59:38 e-filter postgres[34744]: [24-2] DETAIL:  The
postmaster has commanded this server process to roll back the current
transaction and exit, because another server
May  3 09:59:38 e-filter postgres[33592]: [24-1] WARNING:  terminating
connection because of crash of another server process
May  3 09:59:38 e-filter postgres[34744]: [24-3]  process exited
abnormally and possibly corrupted shared memory.


This is my postgresql.conf

max_connections = 70
superuser_reserved_connections = 2
shared_buffers = 81920
work_mem = 10240
maintenance_work_mem = 51200
fsync = true
checkpoint_segments = 8
effective_cache_size = 10
log_destination = 'syslog'
silent_mode = true
lc_messages = 'C'
lc_monetary = 'C'
lc_numeric = 'C'
lc_time = 'C'


and the shared memory configuration:

kern.ipc.shmmax: 7
kern.ipc.shmmin: 1
kern.ipc.shmmni: 192
kern.ipc.shmseg: 256
kern.ipc.shmall: 7


Do I have some configuration error that could result in this kind of problem?

Any ideas ? Any thoughts ?

Best Regards,
Alexandre

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] postgresql 8 abort with signal 10

2005-05-03 Thread Alexandre Biancalana
You need to find out what's triggering that.  Turning on query logging
would be a good way of investigating.

 Which directives can I use to enable this?
debug_print_parse? debug_print_rewritten? debug_print_plan?
debug_pretty_print?


Rather large shared buffers for a machine with only 1 gig of RAM. 640
meg of RAM means the kernel is basically double-buffering everything.
Have you tested with smaller settings, and was this setting the best?

I had 256 MB of RAM, then I increased it to 1 GB thinking this could be a
problem of running out of memory or buggy memory... After this upgrade
I increased the number of shared buffers, etc.

It's important to say that the maximum memory usage reaches only 80%.

What values do you suggest?

You might want to look in your signal man page on BSD and see what
signal 10 means.  On solaris it's a bus error.  Not a clue what it is in
FreeBSD myself though.

The FreeBSD man page says: 10 SIGBUS

The system does not generate a core dump file for this error.

Regards,

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] postgresql 8 abort with signal 10

2005-05-03 Thread Alexandre Biancalana
On 5/3/05, Scott Marlowe [EMAIL PROTECTED] wrote:
 On Tue, 2005-05-03 at 11:36, Alexandre Biancalana wrote:
  You need to find out what's triggering that.  Turning on query logging
  would be a good way of investigating.
 
   Which directives can I use to enable this ?
  debug_print_parse ? debug_print_rewritten ? debug_print_plan ?
  debug_pretty_print ?
 
 
  Rather large, shared buffers for a machine with only 1 gig of ram.  640
  Meg of RAM means the kernel is basically double buffering everything.
  have you tested with smaller settings and this setting was the best?
 
  I had 256 of RAM then I increase to 1GB thinking this could be a
  problem of out of memory or a buggy memory.. After this upgrade
  I increase the numbers of shared buffers,etc
 
  It's important to say that the max memory usage reach to only 80%.
 
  What values do you suggest ?
 
 Generally 25% of the memory or 256 Megs, whichever is less. In your
 case, they're the same.  The Reasoning being that the kernel caches,
 while postgresql only really holds onto data as long as it needs it,
 then frees it, so having a really huge buffer space lets postgresql
 flush the kernel cache, then the next access, after postgresql has freed
 the memory that was holding the data, now has to go to disk.
 
 The kernel is generally a lot better at caching than most apps.
 
 So, 32768 is about as big as i'd normally go, and even that may be more
 than you really need.  Note that there's overhead in managing such a
 large buffer as well.  With pgsql 8.x and the new caching algorithms in
 place, such overhead may be lower, and larger buffer settings may be in
 order.  But if testing hasn't shown them to be faster, i'd avoid them
 for now and see if your signal 10 errors start going away.
 
 If they do, then you've likely got a kernel bug in there somewhere.  If
 they don't, I'd suspect bad hardware.
 
  You might want to look in your signal man page on BSD and see what
  signal 10 means.  On solaris it's a bus error.  Not a clue what it is in
  FreeBSD myself though.
 
  FreeBSD man page say: 10SIGBUS
 
  The system does not generate core dump file for this error.
 


Hi Michael,

Here is my /etc/sysctl.conf:

kern.corefile=/var/coredumps/%N.%P.core
kern.sugid_coredump=1

and as I said before, there is not a single core file in /var/coredumps.
I should say that this setup for storing core files is OK; in the past
I used it a lot.

Thanks Scott, I will lower shared_buffers to 32768 and try again, but
what about work_mem, maintenance_work_mem, effective_cache_size??

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] postgresql 8 abort with signal 10

2005-05-03 Thread Alexandre Biancalana
Thank you for the detailed explanation Scott, they are very handy !!

I reduced the shared_buffers to 32768, but the problem still occurs.

Any other idea ??

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] postgresql 8 abort with signal 10

2005-05-03 Thread Alexandre Biancalana
Ohhh god :(

The FreeBSD is the last STABLE version. I can try to change some
hardware, I already changed memory, what can I try now ? the processor
? motherboard ??



On 5/3/05, Scott Marlowe [EMAIL PROTECTED] wrote:
 On Tue, 2005-05-03 at 15:04, Alexandre Biancalana wrote:
  Thank you for the detailed explanation Scott, they are very handy !!
 
  I reduced the shared_buffers to 32768, but the problem still occurs.
 
  Any other idea ??
 
 Yeah, I had a sneaking suspicion that shared_buffers wasn't causing the
 issue really.
 
 Sounds like either a hardware fault, or a BSD bug.  I'd check the BSD
 mailing lists for mention of said bug, and see if you can grab a spare
 drive and install the last stable version of FreeBSD 4.x and if that
 fixes the problem.
 
 If you decide to try linux, avoid the 2.6 kernel, it's still got
 issues...  2.4 is pretty stable.
 
 I really doubt it's a problem in postgresql itself though.


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Simplified (a-la [G|N]DBM) DB access

2005-04-21 Thread Alexandre
Chris (and others), thank you for the good explanation! =)
I will try to use database as you recommend, thank you again for the 
advices!

Regards,
/Alexandre.
On Apr 20, 2005, at 17:39, Chris Browne wrote:
I think you're missing two points:
 1.  With careful design, the ISAM wrapper can _avoid_ most of the
 costs you suggest.
 For instance, one might set up a prepared query which would only
 parse, plan, and compile the query _once_.
 Further, I'd expect that most of the behaviour could be
 hidden in stored procedures which would further hide the need to
 parse, plan, and compile things.  The ISAM-congruent abstraction
 would presumably make it easier to use, to boot.
 2.  Dan Sugalski indicated that he actually found the overhead to be
 ignorable.
 As a datapoint, that's pretty useful.  He actually went thru
 the effort of building the ISAM wrapper, and discovered that
 the overhead wasn't material.
 You ought to consider the possibility that perhaps he is right,
 and that perhaps you are trying to optimize something that does
 not need to be optimized.
Remember Michael Jackson's _First Rule of Software Optimization_,
which is expressed in one word:
Don't.
(And then there's his second rule, for experts: Don't do it yet.)
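
A sketch of the prepared-query route from Chris's point 1, reusing the
key/value lookup mentioned elsewhere in the thread (the table "words" is
from that example):

psql -d mydb <<'SQL'
PREPARE get_word (int) AS
    SELECT * FROM words WHERE id = $1;  -- parsed and planned once per session
EXECUTE get_word(21);
SQL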

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Simplified (a-la [G|N]DBM) DB access

2005-04-20 Thread Alexandre
Tom, yes, something like Berkeley DB, but inside PostgreSQL... It may
sound crazy, but some data just doesn't need all the SQL functionality and
overhead, and at the same time I don't need a second db, so the best
possible solution (in my opinion): two ways of accessing the data inside
one db.

Regards,
/Alexandre.
On Apr 19, 2005, at 22:37, Tom Lane wrote:
Alexandre [EMAIL PROTECTED] writes:
Lane, thank you, but it is not: PostISAM generates SQL statements on
the fly from traditional ISAM (read, write, start) statements, so it
just adds overhead, and is not what I'm looking for.
Well, if you don't want any SQL capability at all, I think you are
looking for something more like Berkeley DB ...
regards, tom lane

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Simplified (a-la [G|N]DBM) DB access

2005-04-20 Thread Alexandre
On Apr 19, 2005, at 23:24, Dan Sugalski wrote:
At 9:40 PM +0400 4/19/05, Alexandre wrote:
Lane, thank you, but it is not: PostISAM generates SQL statements on 
the fly from traditional ISAM (read, write, start) statements, so it 
just adds overhead, and is not what I'm looking for.
Speaking from experience, as I have a system which hides Postgres 
behind an ISAM interface (though not PostISAM -- I rolled my own DB 
library) as part of a legacy 4GL migration, the overhead's ignorable. 
Dismissing it for that reason's not a good idea.
Does your own ISAM interface also convert ISAM calls to SQL
queries? If so, then it also adds overhead: make a call to some
wrapper, which generates an SQL query and sends it to the server, which
parses, plans and compiles it and executes it only after that. Desirable:
make a call which connects to the server and gets the row.

But if you wrote an ISAM interface which doesn't use SQL, can you tell us
more about it, please?

I don't have any legacy application that I need to move to a new
DB; I just need simpler and faster access.

Regards,
/Alexandre.

---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [GENERAL] Simplified (a-la [G|N]DBM) DB access

2005-04-20 Thread Alexandre
On Apr 19, 2005, at 22:56, Dann Corbit wrote:
Have you actually seen any problem in simple get/put/delete/update
situations?
Not sure that I understand your question.
It's a shame when people spend a great deal of effort to invent a cure
for a disease that does not exist.
If you have problems with any of these things, there are usually good
solutions.
Not always. Please keep in mind that not all problems need such
complicated queries.

Using the copy API, you can insert like a raving madman.  PostgreSQL  
has
a prepared insert.
Yes, but it will only stay prepared until the end of the session.
Safe, transacted inserts will be hard to do much
faster than that.
INSERT is not the main problem; SELECT is. Why do I need a complicated
SELECT * FROM words WHERE id = 21 just to make a simple key/value search?

You can also bracket bunches of operations in a
single transaction if you like.
In real life, how fast can you update records?  If you have a titanic
pile of users all trying to update, then the MVCC model is probably
close to optimal anyway.
Again, INSERT and UPDATE is not a main problem.
Do you have some measurement that shows PostgreSQL is not performing up
to a real business case time requirement?
And again, business can be different.
Sometimes, we can be guilty of 1980's batch oriented mind-set, if we
have been doing data processing for a long time.  The old paradigms no
longer apply for the most part.
Recommended reading:
http://research.microsoft.com/research/pubs/view.aspx?msr_tr_id=MSR-TR-2005-39
Thank you for the reading, but it is not the case for me.
First of all, I will repeat: in some situations you don't need such a
complicated solution, and nothing comes for free. In some situations you
may need faster access with a more complicated API; in some, of course,
SQL is more suitable.

As Tom said, absolutely correctly, in such situations Berkeley DB is
something more suitable, but it would add another database, so
ISAM-like (native, not translating its calls to SQL)
access to PostgreSQL would be just fine.

Regards,
/Alexandre.

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faq


Re: [GENERAL] Simplified (a-la [G|N]DBM) DB access

2005-04-20 Thread Alexandre
On Apr 20, 2005, at 17:09, Dan Sugalski wrote:
Does your own ISAM interface also convert ISAM calls to the SQL 
queries?
Yes of course -- it has to, there's no other way to access Postgres.
Some server specific API?
If so, then it also adds overhead: make a call to some wrapper,
which will generate an SQL query and send it to the server, which will
parse, plan and compile it and execute it only after that. Desirable:
make a call which will connect to the server and get the row.
Right. And, as I said, the overhead is ignorable. It just doesn't 
matter.

I don't have any legacy application that I need to move to the new
DB; I just need simpler and faster access.
Then either use PostISAM or write your own wrapper code to present an 
ISAM interface.

You really need to understand that there's *already* a lot of overhead 
involved in Postgres, or any other relational database. ACID 
guarantees aren't free. Compared to the costs involved in Postgres, as 
well as in any app that's using the ISAM interface, the extra costs in 
the wrapping are negligible.

Bluntly, you're worrying about the wrong stuff. Just write the 
wrappers, prepare the SQL statements (hell, pregenerate them if you 
want -- they don't have to be created on the fly), and ignore the 
library until it ends up being worth worrying about. You'll be 
ignoring it for a very long time.
I see no reason to write another ISAM wrapper if it will not invent
anything new.
I understand that there is already a lot of overhead, but Berkeley DB
also provides strict ACID transaction semantics. Would you like to
say that PostgreSQL will be faster on simple operations than Berkeley
DB?

Regards,
/Alexandre.

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


[GENERAL] Simplified (a-la [G|N]DBM) DB access

2005-04-19 Thread Alexandre
Good day,
excuse me if my question is lame, but is it possible to get some
simplified access to PostgreSQL?

What I mean is: currently, to get/put/delete/edit any data I have to
compose an SQL query, which has to be parsed, compiled, optimized and
so on. For some tasks a simple interface a-la [G|N]DBM would be
more than enough, but it would be preferable to store all the data in
one database, which supports concurrent access, transactions, etc.

For example, let me have some textual data in PostgreSQL, and let me 
wish to have an inverted index for some statistical analyses purpose 
(for example, search, but without using tsearch2). For now, to make any
operations on all that data, I have to use SQL, which makes such
things really slow (take a look at sql-mode mnogosearch and others).
Yes, I can store all that data outside the db, but then I would have to
reinvent all the features which are already done well in PostgreSQL (for
example, recovery, transactions), and I would get harder administrative
support, backup and so on.

Thank you in advance,
Regards,
/Alexandre.

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Simplified (a-la [G|N]DBM) DB access

2005-04-19 Thread Alexandre
Lane, thank you, but it is not: PostISAM generates SQL statements on
the fly from traditional ISAM (read, write, start) statements, so it
just adds overhead, and is not what I'm looking for.

Anyway, thank you for the information.
Regards,
/Alexandre.
On Apr 19, 2005, at 18:16, Tom Lane wrote:
Alexandre [EMAIL PROTECTED] writes:
What I mean is: currently, to get/put/delete/edit any data I have to
compose an SQL query, which has to be parsed, compiled, optimized and
so on. For some tasks a simple interface a-la [G|N]DBM would be
more than enough, but it would be preferable to store all the data in
one database, which supports concurrent access, transactions, etc.
I seem to recall that someone has written an ISAM-style interface
library, which might be more or less what you are asking for.  Check
the archives, and/or look at gborg and pgfoundry.
regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faq


[GENERAL] Blob Fields

2005-03-04 Thread Alexandre da Siva



Blobs are not implemented in PostgreSQL,
but I need this field type in
PostgreSQL databases; how can I use it?
I'm using Delphi...

ps: I read the PostgreSQL manual and other lists and
sites, but did not get an answer to my specific problem.
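
For the record, a sketch of the two binary options that do exist (table name
hypothetical): the bytea column type for in-row binary data, and server-side
large objects driven through the lo_* functions; which one a given Delphi
component expects depends on the driver.

psql -d mydb <<'SQL'
CREATE TABLE docs (
    id   serial PRIMARY KEY,
    body bytea     -- in-row binary, dumped and restored like any other column
);
SQL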



[GENERAL] Killing process through of a function

2005-02-17 Thread alexandre::aldeia digital
Hi,
Can I kill a postgres user process inside a function (SP)?
Sometimes the Java program that our company uses stays idle in
transaction, and every day I need to import text data into some tables
in the DB.
This import does a TRUNCATE on these tables, and I need to kill the
remaining processes before executing it, to avoid the lock wait.

Thanks for any help
Alexandre de Arruda Paes
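
A sketch of one route, assuming a server version that provides
pg_cancel_backend() (the PID below is hypothetical):

# find the idle-in-transaction backends, then cancel them one by one
psql -d mydb -c "SELECT procpid, usename FROM pg_stat_activity;"
psql -d mydb -c "SELECT pg_cancel_backend(12345);"

pg_cancel_backend() only interrupts a running query, though; ending a truly
idle session may still require killing the process at the OS level.
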
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [GENERAL] Error al Subir base de datos

2005-01-20 Thread alexandre::aldeia digital
Hi,
You need to create your database with an encoding like LATIN1:
createdb -E latin1 mydb

Alexandre
Juan Jose Siles Salinas wrote:
When I load the database with pg_restore -d mydb mydb.tar, it
restores all the information, but the accents and ñ show characters in
another encoding. How can I solve this?

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[GENERAL] Index, Tablespace and performance

2005-01-02 Thread alexandre::aldeia digital
Hi,
I have 4 x SCSI in RAID 10 (PG 7.4.6) with regular performance.
My database has 55 GB of data.
In PG 8, can moving the indexes to a separate disk (or array)
(generally) improve performance if the system makes heavy use of
indexes?

Thanks
Alexandre
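
A sketch of the PG 8.0 mechanics, in case it helps (the tablespace name and
mount point are assumptions):

psql -d mydb <<'SQL'
CREATE TABLESPACE idx_disk LOCATION '/mnt/array2/pg_indexes';
ALTER INDEX some_big_index SET TABLESPACE idx_disk;
SQL

Whether this pays off depends on the workload; moving WAL off the data
array is often the bigger win.
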
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


[GENERAL] pgdump blob

2000-07-13 Thread Alexandre FLAMENT

Is there a version of pg_dump that saves blobs?
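
Later releases grew exactly this; a sketch, assuming a pg_dump new enough
to have the -b/--blobs switch and a non-plain output format:

pg_dump -Fc -b mydb > mydb.dump
pg_restore -d mydb mydb.dump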
