Re: [GENERAL] Generic timestamp function for updates where field

2007-01-03 Thread novnov



Adrian Klaver wrote:
> 
> On Sunday 31 December 2006 8:48 am, novnov wrote:
>> OK. python would be the preference, if anyone is interested in showing me
>> how it would be done, I've never used one of the dynamic languages with
>> postgres.
>>
>> Why would it not be possible in plpgsql? It has loops etc.; the only part
>> I'm not sure it can do is use a variable as a field name.
>>
> 
>> >http://archives.postgresql.org/
> Here is a function I wrote in python to do something similar.  My
> timestamp 
> fields are of the form tc_ts_update where tc is  a table code that can be 
> found by looking up the table name in the table_code table. In pl/pythonu 
> that ships with 8.2 it is no longer necessary to do the relid look up. 
> There 
> is a TD["table_name"] variable that returns the table name directly.
> 
> CREATE OR REPLACE FUNCTION public.ts_update()
> RETURNS trigger AS
> $Body$
> table_oid=TD["relid"]
> plan_name=plpy.prepare("SELECT relname FROM pg_class WHERE oid=$1",["oid"])
> plan_code=plpy.prepare("SELECT tc_table_code FROM table_code WHERE tc_table_name=$1",["text"])
> rs_name=plpy.execute(plan_name,[table_oid])
> rs_code=plpy.execute(plan_code,[rs_name[0]["relname"]])
> fld_name="_ts_update"
> tbl_code=rs_code[0]["tc_table_code"]
> full_name=tbl_code+fld_name
> TD["new"][full_name]="now()"
> return "MODIFY"
> $Body$
> LANGUAGE plpythonu SECURITY DEFINER;
> -- 
> Adrian Klaver
> [EMAIL PROTECTED]
> 
> 

Here is what I have tried; it fails on the TD["NEW"][varFieldName]="now()"
line.
Do I need the Return?
I'm passing in the table prefix as a param.
I set it to VOLATILE, not SECURITY DEFINER (I wasn't sure what that was).

CREATE OR REPLACE FUNCTION "public"."datem_update"()
RETURNS trigger AS
$BODY$
varPrefix=TG_ARGV[0]
varFieldName=varPrefix+"_datem"
TD["NEW"][varFieldName]="now()" 
RETURN "Modify"
$BODY$
LANGUAGE 'plpythonu' VOLATILE;
-- 
View this message in context: 
http://www.nabble.com/Generic-timestamp-function-for-updates-where-field-names-vary-tf2899327.html#a8137098
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Geographical redundancy

2007-01-03 Thread Markus Schiltknecht

Hello Dennis,

Dennis wrote:

Is there any feasible way to achieve geographical redundancy of a PostgreSQL
database?


As nobody mentioned it up until now: please check the very nice 
documentation about High Availability and Failover here:


http://www.postgresql.org/docs/8.2/static/high-availability.html

It explains most options and their pros and cons.

Regards

Markus


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] queueing via database table?

2007-01-03 Thread Richard Huxton

Steve Atkins wrote:


Holding a lock while generating the thumbnail doesn't
sound like a great idea, and I think that the select
for update will end up serialising the requests.

I'd add a "rendering" field, text, defaulting
to an empty string.

Then do a "select for update where ... and rendering = '' limit 1",
update the rendering field to the hostname of the box doing the
work and commit. Render the thumbnail. Delete the record.


Assuming each processing host keeps its connection open, I'd store the 
process-id instead (get via function pg_backend_pid()). Also have a 
separate status (pending|processing|done) and timestamps to track when 
each status is set. Only delete rows that have been marked "done" for a 
certain length of time.


This will let you spot when a host has stopped processing (e.g. crashed) 
and also let you measure throughput on particular hosts.


The other thing to be aware of is that queries of the SELECT FOR UPDATE 
LIMIT 1 form can return 0 rows. Open two psql connections and try it to 
see what happens. You'll need to have the processing clients retry the 
query in this case.
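
To make that concrete, here is a sketch of the kind of table I mean (all
names invented for illustration):

CREATE TABLE thumb_queue (
    id           serial PRIMARY KEY,
    filename     text NOT NULL,
    status       text NOT NULL DEFAULT 'pending',  -- pending|processing|done
    worker_pid   integer,                          -- pg_backend_pid() of the claimer
    queued_at    timestamp NOT NULL DEFAULT now(),
    started_at   timestamp,
    finished_at  timestamp
);

-- Spotting workers that have stopped processing (e.g. crashed):
SELECT id, worker_pid, started_at
  FROM thumb_queue
 WHERE status = 'processing'
   AND started_at < now() - interval '10 minutes';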


HTH
--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] queueing via database table?

2007-01-03 Thread Dawid Kuroczko

On 1/3/07, Mark Harrison <[EMAIL PROTECTED]> wrote:

Does the following sound like a reasonable approach?
If not, what's a good way to go?

The processes generating the requests will insert into
a queue table.  They may add a priority and timestamp.

The several processes servicing the requests will do a
SELECT FOR UPDATE where ... limit 1, generate thumbnail,
delete the record and commit.


Well, this will block, so only one thumbnail at a time can be
processed while the transaction is running.

You may want to rather use SELECT FOR UPDATE NOWAIT,
probably "wrapped" into a PL/PgSQL function.  I did that and
I'm quite satisfied with this approach.

A simple implementation would be something like this:

CREATE OR REPLACE FUNCTION get_next() RETURNS int AS $$
DECLARE
    r RECORD;
BEGIN
    FOR r IN SELECT id FROM foo_table LIMIT 100 LOOP
        BEGIN
            -- try to lock this row; skip it if another worker has it
            PERFORM id FROM foo_table WHERE id = r.id FOR UPDATE NOWAIT;
            RETURN r.id;
        EXCEPTION
            WHEN lock_not_available THEN
                NULL;  -- do nothing, move on to the next candidate
        END;
    END LOOP;
    RETURN NULL;  -- everything is locked (or the table is empty)
END;
$$ LANGUAGE plpgsql;

Of course you should customize the query and use a better-tuned limit.
I think a good rule of thumb for the LIMIT is twice the number of
simultaneous processing nodes.  An ORDER BY might be worth it or not,
etc, etc.

Another approach might be something like:
 LOOP
   BEGIN
     SELECT id INTO i FROM foo_table LIMIT 1 OFFSET n FOR UPDATE NOWAIT;
     RETURN i;
   EXCEPTION
     WHEN lock_not_available THEN
       NULL;  -- do nothing
   END;
   n := n + 1;
 END LOOP;

But I feel it will be slower most of the time.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] queueing via database table?

2007-01-03 Thread Gregory S. Williamson
Mark --

As others have indicated, there may be some blocking issues with the approach 
you outlined.

A variant I have seen used in the past uses a table with a unique id for the 
job, the work queue it is in, a status flag, priority and at least one time 
stamp (and perhaps space for a process id).

Each client that wants work issues a request (SELECT FOR UPDATE) to get the 
next job in its queue that has a status flag of "Available", ordered by priority 
or initial time of creation, etc.; it then updates that entry with the current 
timestamp (and perhaps the process id of the client) and sets the status flag to 
show the job is now being worked on, all in one transaction.

This releases the lock on the job, but with a changed status flag, so other 
processes pulling work from the same queue won't see it anymore.

When the job finishes it selects its entry and updates the status flag and 
timestamp (and probably clears its process id). Logic for how to bump a job to 
the next step can be embedded in the client or in another process, depending on 
your needs.

It is useful to have a daemon or some other process to sweep the queue table 
and at least send an alert about stale or frozen jobs.
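
In rough SQL, the claim step looks something like this (table and column
names are invented for illustration):

BEGIN;
-- grab the next available job in this queue, locking its row
SELECT job_id
  FROM work_queue
 WHERE queue_name = 'thumbnails'
   AND status = 'Available'
 ORDER BY priority, created_at
 LIMIT 1
 FOR UPDATE;

-- then, with the job_id the SELECT returned (shown here as 42):
UPDATE work_queue
   SET status = 'InProgress',
       claimed_at = now(),
       claimed_pid = pg_backend_pid()
 WHERE job_id = 42;
COMMIT;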

HTH,

Greg Williamson
DBA
GlobeXplorer LLC
-Original Message-
From:   [EMAIL PROTECTED] on behalf of Mark Harrison
Sent:   Tue 1/2/2007 10:34 PM
To: pgsql-general@postgresql.org
Cc: 
Subject:[GENERAL] queueing via database table?

I have a cluster of CPUs generating thumbnails for
a render farm.  I would like to place thumbnail
requests on a queue, and have the cluster of clients
dequeue the requests and process them.

Of course, each request should be only dequeued once...
if a thumbnail is being processed by one CPU, it
shouldn't be processed by another CPU.

Does the following sound like a reasonable approach?
If not, what's a good way to go?

The processes generating the requests will insert into
a queue table.  They may add a priority and timestamp.

The several processes servicing the requests will do a
SELECT FOR UPDATE where ... limit 1, generate thumbnail,
delete the record and commit.

Comments and suggestions welcome,
Mark

-- 
Mark Harrison
Pixar Animation Studios

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly








---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Index Scanning

2007-01-03 Thread Enrico
Hi, I have this query

select TB.id_int,TR.codice_art,importo,cod_iva,prezzo,qta as qta 
from bolle_carico_testata TB inner join tmp_righe_bolle_carico TR on 
(TB.id_bolla_rem=TR.id_bolla_rem); 

and these are indexes on tables.

# \d tmp_righe_bolle_carico
Indexes:
"tmpidx1" btree (id_bolla_rem)

gepv=# \d bolle_carico_testata
"idxbct4" btree (id_bolla_rem)


My query plan is:

QUERY PLAN 
-
 Hash Join  (cost=7.17..888.67 rows=22420 width=67) (actual time=1.059..118.090 
rows=22420 loops=1)
   Hash Cond: (tr.id_bolla_rem = tb.id_bolla_rem)
   ->  Seq Scan on tmp_righe_bolle_carico tr  (cost=0.00..545.20 rows=22420 
width=67) (actual time=0.030..32.963 rows=22420 loops=1)
   ->  Hash  (cost=6.54..6.54 rows=254 width=16) (actual time=0.980..0.980 
rows=254 loops=1)
 ->  Seq Scan on bolle_carico_testata tb  (cost=0.00..6.54 rows=254 
width=16) (actual time=0.025..0.500 rows=254 loops=1)
 Total runtime: 141.864 ms
(6 rows)

Can anybody tell me why I have two seq. scans instead of two index scans?
And what can I do to get two index scans?

Thanks in advance.

Enrico

-- 
If Bill Gates had a penny for everytime Windows crashed,he'd be a 
multi-billionaire by now ...oh look, he already is 
[EMAIL PROTECTED] - Skype:sscotty71
http://www.linuxtime.it/enricopirozzi

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Index Scanning

2007-01-03 Thread Richard Huxton

Enrico wrote:

Hi, I have this query

select TB.id_int,TR.codice_art,importo,cod_iva,prezzo,qta as qta 
from bolle_carico_testata TB inner join tmp_righe_bolle_carico TR on (TB.id_bolla_rem=TR.id_bolla_rem); 

[snip plan]

Can anybody tell me why I have two seq. scans instead of two index scans?
And what can I do to get two index scans?


You're fetching all the rows from both tables - what would an index scan 
gain you?


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] database design and refactoring

2007-01-03 Thread Luca Ferrari
Hi all,
in my database I've got a table with a key that is a char string, since it was 
supposed to hold string values. Running the database, the users 
decided to enter numbers as strings, so values like 00110002 and so on.
Now I'm thinking of refactoring my database and changing the char field into a 
numeric one, but here comes the problem: the field is referenced by other tables 
and views.
I guess there's not, but you never know until you try: is there a tool or a 
way to easily do such a refactoring, or should I write a program of my own to do 
this?
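
For reference, here is the kind of thing I'd otherwise have to script by hand
for every dependent object (all names invented; the USING clause converts the
existing string values, and the foreign key constraint may need to be dropped
and re-added as well):

BEGIN;
-- every dependent view has to be dropped first...
DROP VIEW item_summary;

-- ...both sides of the foreign key changed in step...
ALTER TABLE item_detail
  ALTER COLUMN item_code TYPE numeric USING item_code::numeric;
ALTER TABLE item
  ALTER COLUMN item_code TYPE numeric USING item_code::numeric;

-- ...and then every view recreated.
CREATE VIEW item_summary AS
  SELECT item_code, count(*) FROM item_detail GROUP BY item_code;
COMMIT;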

And moreover a database design question: is it a better idea to always choose 
(when possible) numeric keys? I'm thinking about database portability, 
since not all databases support sequences (which can be used with a concat to 
provide an automatic string key).

Thanks,
Luca


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] queueing via database table?

2007-01-03 Thread Ron Johnson
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 01/03/07 00:34, Mark Harrison wrote:
> I have a cluster of CPUs generating thumbnails for
> a render farm.  I would like to place thumbnail
> requests on a queue, and have the cluster of client
> dequeue the requests and process them.
> 
> Of course, each request should be only dequeued once...
> if a thumbnail is being processed by one CPU, it
> shouldn't be processed by another CPU.
> 
> Does the following sound like a reasonable approach?
> If not, what's a good way to go?
> 
> The processes generating the requests will insert into
> a queue table.  They may add a priority and timestamp.
> 
> The several processes servicing the requests will do a
> SELECT FOR UPDATE where ... limit 1, generate thumbnail,
> delete the record and commit.
> 
> Comments and suggestions welcome,

That's not what relational tables are good at.  Instead, use a
message queuing library with a file backing-store (so that if the
machine goes down for any reason, the messages are still in the queue).

In a message-passing system, a network-aware daemon manages a set of
named FIFO queues.  Some processes call in_q(), and other processes
(same machine, or not) call de_q().  If nothing is calling de_q(),
messages just pile up in the queue until such time as something
*does* start calling de_q().
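
The idea, in a minimal Python sketch (the in_q/de_q names are just the ones
used above; a real implementation would be a network-aware daemon with
locking, not a local class, but the persistence idea is the same):

```python
import json
import os
import tempfile

class FileBackedQueue:
    """Minimal file-backed FIFO: in_q() appends, de_q() pops the oldest.
    The backing file means queued messages survive a crash/restart."""

    def __init__(self, path):
        self.path = path
        if not os.path.exists(path):
            self._save([])

    def _load(self):
        with open(self.path) as f:
            return json.load(f)

    def _save(self, items):
        tmp = self.path + ".tmp"
        with open(tmp, "w") as f:
            json.dump(items, f)
        os.replace(tmp, self.path)  # atomic swap, never half-written

    def in_q(self, msg):
        self._save(self._load() + [msg])

    def de_q(self):
        items = self._load()
        if not items:
            return None  # nothing queued right now
        msg = items.pop(0)
        self._save(items)
        return msg
```

If nothing calls de_q(), messages simply accumulate in the file until a
consumer comes back.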


-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFFm504S9HxQb37XmcRAmj6AKDWa7Sx15wygoTc+/wOfLZIpqi4awCg34SZ
rkq1IEjdqu1zx0B5QyFW/n0=
=V0oF
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Index Scanning

2007-01-03 Thread Richard Huxton

Don't forget to cc the list

Enrico wrote:

On Wed, 03 Jan 2007 10:55:03 +
Richard Huxton  wrote:

You're fetching all the rows from both tables - what would an index scan 
gain you?


there is a join


Yes, there is a join. Between all the rows in tmp_righe_bolle_carico 
(tr) and all the matches in bolle_carico_testata (tb). It looks like 
there is one row in (tb) that matches each in (tr).


What do you expect two index scans over both tables would gain you?
You seem to be saying that you think:
  fetching 22420 index entries on tr + fetching 22420 rows in tr
+ fetching 22420 index entries on tb + fetching 22420 rows in tb
+ merge-join
would be faster than the hash-join you've got. I think it's unlikely 
that's the case.


If you issue "set enable_seqscan = off" before the query that should 
force it to use the indexes first. What does that plan show you?


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] "no unpinned buffers available" ? why? (hstore and plperl involved)

2007-01-03 Thread hubert depesz lubaczewski

hi,
i got this situation: i'm using 8.3devel checked out from cvs about a week
ago. if this is necessary i can rerun the tests in 8.2 or something else.

i wrote this code:
CREATE TYPE srf_get_old_cf_for_advert AS (
   codename TEXT,
   value TEXT
);
CREATE OR REPLACE FUNCTION get_old_cf_for_advert(INT8) RETURNS setof
srf_get_old_cf_for_advert AS $BODY$
my $advert_id = shift;
my $cf_map = {};
my $sth = spi_query("SELECT v.* FROM adverts a JOIN v_category_custom_fields
v ON a.category_id = v.category_id WHERE a.id = $advert_id");
while (my $row = spi_fetchrow($sth)) {
   $cf_map->{ $row->{'codename'} } = $row->{'custom_field_name'};
}

my $old_cf = spi_query("SELECT acf.* FROM advert_custom_fields acf WHERE
acf.advert_id = $advert_id");
my $row = spi_fetchrow($old_cf);
return unless $row;

for my $key (keys %{ $cf_map }) {
   my $cf_name = $cf_map->{ $key };
   my $cf_value = $row->{ $cf_name };
   next unless defined $cf_value;
   return_next(
   {
   'codename' => $key,
   'value'=> $cf_value,
   }
   );
}
return;
$BODY$ LANGUAGE 'plperl';

CREATE OR REPLACE FUNCTION migrate_cf_old_to_hstore(in_advert_id INT8)
RETURNS hstore AS $BODY$
declare
   temprec RECORD;
   use_cf hstore;
BEGIN
   use_cf := '';
   for temprec in SELECT * FROM get_old_cf_for_advert(in_advert_id) LOOP
   use_cf := use_cf || ( temprec.codename => temprec.value );
   END LOOP;
   RETURN use_cf;
END;
$BODY$ language 'plpgsql';

CREATE TABLE hstore_migration as SELECT id as advert_id,
migrate_cf_old_to_hstore(id) as hstore_cf FROM adverts;

to give some more details:
- in both tables (advert_custom_fields and adverts) we have 308428 adverts.
- computer i was running it on is just a workstation - 1g of memory, 5400
rpm sata hdd (laptop)

memory settings:
# - Memory -

shared_buffers = 2kB# min 128kB or max_connections*16kB
   # (change requires restart)
#temp_buffers = 8000kB  # min 800kB
#max_prepared_transactions = 5  # can be 0 or more
   # (change requires restart)
# Note: increasing max_prepared_transactions costs ~600 bytes of shared
memory
# per transaction slot, plus lock space (see max_locks_per_transaction).
#work_mem = 1MB # min 64kB
#maintenance_work_mem = 16MB# min 1MB
#max_stack_depth = 2MB  # min 100kB

and - after some time of this "create table", postmaster process eats all
the memory (over 1.8g), and dies with:
psql:133.sql:125: ERROR:  error from Perl function: no unpinned buffers
available at line 5.
CONTEXT:  PL/pgSQL function "migrate_cf_old_to_hstore" line 6 at for over
select rows

my questions are:
1. is it a bug and will it be fixed?
2. if it is a bug - is it in hstore? plperl? my code?
3. i can do the migration using small parts - let's say 100 records at a
time, disconnect, reconnect, convert next 100 records. but - will i be safe
later on during standard work?

best regards,

hubert

--
http://www.depesz.com/ - nowy, lepszy depesz


Re: [GENERAL] Generic timestamp function for updates where field

2007-01-03 Thread Adrian Klaver
On Wednesday 03 January 2007 12:13 am, novnov wrote:
> Adrian Klaver wrote:
> > On Sunday 31 December 2006 8:48 am, novnov wrote:
> >> OK. python would be the preference, if anyone is interested in showing
> >> me how it would be done, I've never used one of the dynamic languages
> >> with postgres.
> >>
> >> Why would it not be possible in plpgsql? It has loops etc.; the only part
> >> I'm not sure it can do is use a variable as a field name.
> >>
> >> >http://archives.postgresql.org/
> >
> > Here is a function I wrote in python to do something similar.  My
> > timestamp
> > fields are of the form tc_ts_update where tc is  a table code that can be
> > found by looking up the table name in the table_code table. In pl/pythonu
> > that ships with 8.2 it is no longer necessary to do the relid look up.
> > There
> > is a TD["table_name"] variable that returns the table name directly.
> >
> > CREATE OR REPLACE FUNCTION public.ts_update()
> > RETURNS trigger AS
> > $Body$
> > table_oid=TD["relid"]
> > plan_name=plpy.prepare("SELECT relname FROM pg_class WHERE oid=$1",["oid"])
> > plan_code=plpy.prepare("SELECT tc_table_code FROM table_code WHERE tc_table_name=$1",["text"])
> > rs_name=plpy.execute(plan_name,[table_oid])
> > rs_code=plpy.execute(plan_code,[rs_name[0]["relname"]])
> > fld_name="_ts_update"
> > tbl_code=rs_code[0]["tc_table_code"]
> > full_name=tbl_code+fld_name
> > TD["new"][full_name]="now()"
> > return "MODIFY"
> > $Body$
> > LANGUAGE plpythonu SECURITY DEFINER;
> > --
> > Adrian Klaver
> > [EMAIL PROTECTED]
>
> Here is what I have tried, it fails on the TD["NEW"][varFieldName]="now()"
> line.
> Do I need the Return?
> I'm passing in the table prefix as a param.
> I set to VOLATILE not SECURITY DEFINER (wasn't sure what that was)
>
> CREATE OR REPLACE FUNCTION "public"."datem_update"()
> RETURNS trigger AS
> $BODY$
> varPrefix=TG_ARGV[0]
> varFieldName=varPrefix+"_datem"
> TD["NEW"][varFieldName]="now()"
> RETURN "Modify"
Try return "Modify". I believe the problem is actually the uppercase RETURN, 
since Python keywords are lowercase.
> $BODY$
> LANGUAGE 'plpythonu' VOLATILE;
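
As a side note, the other things to watch for in pl/pythonu: the new-row
dictionary is TD["new"] (lower case), and trigger arguments arrive in
TD["args"], not TG_ARGV (that's plpgsql). Here is a plain-Python sketch of
the corrected body, using a mock TD so it can be run outside the server
(column and argument names are made up):

```python
# Mock of the trigger body; inside PostgreSQL, TD is supplied automatically.
def datem_update(TD):
    var_prefix = TD["args"][0]            # first trigger argument, e.g. 'item'
    var_field_name = var_prefix + "_datem"
    TD["new"][var_field_name] = "now()"   # TD["new"], not TD["NEW"]
    return "MODIFY"                       # lowercase return; "MODIFY" keeps the changes

# Simulated call, the way a trigger created with argument ('item') would fire:
TD = {"args": ["item"], "new": {"item_name": "widget", "item_datem": None}}
assert datem_update(TD) == "MODIFY"
assert TD["new"]["item_datem"] == "now()"
```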

-- 
Adrian Klaver
[EMAIL PROTECTED]

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] "no unpinned buffers available" ? why? (hstore and

2007-01-03 Thread Richard Huxton

hubert depesz lubaczewski wrote:

hi,
i got this situation: i'm using 8.3devel checked out from cvs about a week
ago. if this is necessary i can rerun the tests in 8.2 or something else.



CREATE OR REPLACE FUNCTION get_old_cf_for_advert(INT8) RETURNS setof
srf_get_old_cf_for_advert AS $BODY$



$BODY$ LANGUAGE 'plperl';

CREATE OR REPLACE FUNCTION migrate_cf_old_to_hstore(in_advert_id INT8)
RETURNS hstore AS $BODY$
declare
   temprec RECORD;
   use_cf hstore;
BEGIN
   use_cf := '';
   for temprec in SELECT * FROM get_old_cf_for_advert(in_advert_id) LOOP
   use_cf := use_cf || ( temprec.codename => temprec.value );
   END LOOP;
   RETURN use_cf;
END;
$BODY$ language 'plpgsql';

CREATE TABLE hstore_migration as SELECT id as advert_id,
migrate_cf_old_to_hstore(id) as hstore_cf FROM adverts;



and - after some time of this "create table", postmaster process eats all
the memory (over 1.8g), and dies with:
psql:133.sql:125: ERROR:  error from Perl function: no unpinned buffers
available at line 5.



my questions are:
1. is it a bug and will it be fixed?
2. if it is a bug - is it in hstore? plperl? my code?


My guess would be that plperl isn't freeing it's result set storage 
until the end of the transaction. Might not be classed as a bug, but 
certainly an inefficiency.



3. i can do the migration using small parts - let's say 100 records at a
time, disconnect, reconnect, convert next 100 records. but - will i be safe
later on during standard work?


Once the connection is closed, all memory should be freed.

But, it looks to me like you might be able to replace the plperl 
function by just a straight query. That should be faster too.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


[GENERAL] superuser authentication?

2007-01-03 Thread woger151
I'm setting up postgresql (8.1) on what I hope to be a very secure server 
(SUSE Linux 10.1).


The only authentication method allowed for anyone is 'local' (Unix-domain sockets).

Most users I plan on authenticating by PASSWORD (web connections are made to 
an apache webserver over SSL; the actual postgresql connections are 
themselves all local via pg_connect).


What I'm not sure about is how to authenticate the postgresql superuser 
(user 'postgres' on my system).  I'm considering:


1.  Using ident (supposedly secure because of the SO_PEERCRED mechanism; and 
I've made a lot of effort to secure the server at the OS level)

2.  Using password (_not_ stored on disk in e.g. pgpass)
3.  Using reject

My questions:
* Is 3 overly paranoid in the context of a production server?
* Would 2 or 3 hobble some kind of daemons?  (A cursory search led me to 
think that maybe pg_autovacuum wouldn't work, and I'm not sure if there are 
other such daemons.)
* If the choice came down to 1 vs 2, is there much argument for one over the 
other in terms of security?  (I realize that there might not be a clear 
answer to that.)
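
In pg_hba.conf terms, what I'm considering looks roughly like this (8.1
syntax; option 1 shown for the superuser):

```
# TYPE  DATABASE  USER      METHOD
local   all       postgres  ident sameuser    # option 1: trust the OS identity
local   all       all       md5               # everyone else: password (md5-hashed)
```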


TIA 



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] superuser authentication?

2007-01-03 Thread Tom Lane
"woger151" <[EMAIL PROTECTED]> writes:
> What I'm not sure about is how to authenticate the postgresql superuser 
> (user 'postgres' on my system).  I'm considering:

> 1.  Using ident (supposedly secure because of the SO_PEERCRED mechanism; and 
> I've made a lot of effort to secure the server at the OS level)
> 2.  Using password (_not_ stored on disk in e.g. pgpass)
> 3.  Using reject

How are you going to do backups?

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] "no unpinned buffers available" ? why? (hstore and plperl involved)

2007-01-03 Thread hubert depesz lubaczewski

On 1/3/07, Richard Huxton  wrote:


> my questions are:
> 1. is it a bug and will it be fixed?
> 2. if it is a bug - is it in hstore? plperl? my code?
My guess would be that plperl isn't freeing it's result set storage
until the end of the transaction. Might not be classed as a bug, but
certainly an inefficiency.



not good - but - if it is pl/perl only issue - i can live with it.



> 3. i can do the migration using small parts - let's say 100 records at a
> time, disconnect, reconnect, convert next 100 records. but - will i be
safe
> later on during standard work?
Once the connection is closed, all memory should be freed.
But, it looks to me like you might be able to replace the plperl
function by just a straight query. That should be faster too.



really? i was thinking really hard about how to do it in sql, but didn't find
any way to achieve it. actually - i don't really think it would be possible
at all in standard sql. but then - maybe i'm wrong.

best regards,

depesz


--
http://www.depesz.com/ - nowy, lepszy depesz


Re: [GENERAL] superuser authentication?

2007-01-03 Thread Bill Moran
In response to Tom Lane <[EMAIL PROTECTED]>:

> "woger151" <[EMAIL PROTECTED]> writes:
> > What I'm not sure about is how to authenticate the postgresql superuser 
> > (user 'postgres' on my system).  I'm considering:
> 
> > 1.  Using ident (supposedly secure because of the SO_PEERCRED mechanism; 
> > and 
> > I've made a lot of effort to secure the server at the OS level)
> > 2.  Using password (_not_ stored on disk in e.g. pgpass)
> > 3.  Using reject
> 
> How are you going to do backups?

Additionally ...

While I would never caution someone _against_ more security, keep some 
things in mind.

There's a user on your system that PostgreSQL runs under (probably called
"postgres").  That user owns all the files where Postgres stores the tables
and everything else.  None of that data is encrypted by Postgres (except
passwords) so any user who can su to the postgres user can bypass the
database to access the data, corrupt it, and even (if they're very clever)
modify it.

My point being, that if an attacker gets a shell on your system, they're
already very close to being able to access your PostgreSQL data.

Personally, I'd set auth to password, then keep the password in a file in
root's home directory and set it readable by root only.  If an attacker can
read that file, he already doesn't need to.

This does mean that you'll have to carefully secure the script you use to
make backups, since they'll need to have the password in them.  But you'll
need to carefully secure your backups anyway or all the other security is
rather pointless.

-- 
Bill Moran
Collaborative Fusion Inc.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] "no unpinned buffers available" ? why? (hstore and plperl involved)

2007-01-03 Thread Tom Lane
"hubert depesz lubaczewski" <[EMAIL PROTECTED]> writes:
> and - after some time of this "create table", postmaster process eats all
> the memory (over 1.8g), and dies with:
> psql:133.sql:125: ERROR:  error from Perl function: no unpinned buffers
> available at line 5.

Could you reduce this to a self-contained example please?  Your
functions depend on a bunch of tables that you have not provided
definitions or data for ...

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] granting SELECT on _all_ database objects (even non existing objects)

2007-01-03 Thread Joao Miguel Ferreira
Hello All,

my database contains a big table. on this table I create VIEWs.

The problem is this: the VIEWs are created dynamically by an external
program (depending on some configurations). Nevertheless I would like to
GRANT SELECT privileges to my readOnlyUser, in a simple way!

Summary: is it possible to allow read-only access to some user, in some
database, even if new database objects (VIEWs in my case) are created
dynamically?

something like
GRANT SELECT ON * TO readOnlyUser;

I'm confused..

thx
jmf




---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] "no unpinned buffers available" ? why? (hstore and

2007-01-03 Thread Richard Huxton

hubert depesz lubaczewski wrote:

But, it looks to me like you might be able to replace the plperl
function by just a straight query. That should be faster too.



really? i was thinking really hard on how to do it in sql, but didn't found
any way to achieve it. actually - i dont really think it would be possible
at all in standard sql. but then - maybe i'm wrong.


Well, it looks to me like the main problem is you're trying to convert a 
table like: (advert_id, cust1, cust2, cust3) to: (advert_id, codename, 
codevalue).
If you do that separately at the start of the process (one query per 
custom column in the old table), then it becomes straightforward.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] superuser authentication?

2007-01-03 Thread Richard Huxton

Bill Moran wrote:


Personally, I'd set auth to password, then keep the password in a file in
root's home directory and set it readable by root only.  If an attacker can
read that file, he already doesn't need to.

This does mean that you'll have to carefully secure the script you use to
make backups, since they'll need to have the password in them.  But you'll
need to carefully secure your backups anyway or all the other security is
rather pointless.


I'd run it as a non-root backup-specific user. That way if someone 
compromises the backup process they're limited in the amount of damage 
they can do (since the user will only have write access to a few 
directories). Also makes auditing easier if you're that way inclined.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] queueing via database table?

2007-01-03 Thread Scott Marlowe
On Tue, 2007-01-02 at 22:34 -0800, Mark Harrison wrote:
> I have a cluster of CPUs generating thumbnails for
> a render farm.  I would like to place thumbnail
> requests on a queue, and have the cluster of client
> dequeue the requests and process them.
> 
> Of course, each request should be only dequeued once...
> if a thumbnail is being processed by one CPU, it
> shouldn't be processed by another CPU.
> 
> Does the following sound like a reasonable approach?
> If not, what's a good way to go?
> 
> The processes generating the requests will insert into
> a queue table.  They may add a priority and timestamp.
> 
> The several processes servicing the requests will do a
> SELECT FOR UPDATE where ... limit 1, generate thumbnail,
> delete the record and commit.

Here's what I'd do.  Create two sequences.  Sequence one is used to
assign ids to the thumbnail records when they're placed into the control
table.  The other is used to "check out" the records. 

A process selects nextval from sequence two, and then selects the
corresponding record info from the control table, and marks the record
as being in work.  When it's done, it marks it as done, and selects
another value from the second sequence and repeats the process.

Every now and then run a check program to look for thumbs that have been
missed or skipped and process them or assign them a new id from sequence
one to put them back into the queue.
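A toy in-memory sketch of this two-sequence scheme (plain Python counters standing in for the database sequences; all names invented for illustration):

```python
from itertools import count

enqueue_seq = count(1)   # sequence one: assigns ids as requests are inserted
checkout_seq = count(1)  # sequence two: hands each worker the next id to claim

control = {}             # stands in for the control table

def enqueue(request):
    rid = next(enqueue_seq)
    control[rid] = {"request": request, "state": "queued"}
    return rid

def checkout():
    # each caller draws a distinct id, so no two workers claim the same row
    rid = next(checkout_seq)
    row = control.get(rid)
    if row is not None:
        row["state"] = "in work"
    return rid, row

enqueue("thumb-a")
enqueue("thumb-b")
rid, row = checkout()
```

Because every checkout consumes a fresh value from sequence two, dequeueing never hands the same record to two workers; the periodic checker handles ids that were drawn but never finished.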

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Database versus filesystem for storing images

2007-01-03 Thread Scott Ribe
Personally, I'd put them on the file system, because then backup software
can perform incremental backups. In the database, that becomes more of a
difficulty. One suggestion, don't use a file name from a hash to store the
image, just use the serial id, and break them up by hundreds or thousands,
in other words, image 1123 might be in images/000/01/01123.
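That layout can be sketched as a small path function (a sketch only; the exact split by thousands and hundred-thousands is a design choice, not prescribed in the thread):

```python
def image_path(image_id: int) -> str:
    # break serial ids into buckets so no single directory
    # accumulates an unmanageable number of files
    return "images/{:03d}/{:02d}/{:05d}".format(
        image_id // 100_000,          # top-level bucket
        (image_id // 1_000) % 100,    # second-level bucket
        image_id,                     # zero-padded serial id as the file name
    )

print(image_path(1123))  # → images/000/01/01123
```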

-- 
Scott Ribe
[EMAIL PROTECTED]
http://www.killerbytes.com/
(303) 722-0567 voice



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] "no unpinned buffers available" ? why? (hstore and plperl involved)

2007-01-03 Thread hubert depesz lubaczewski

On 1/3/07, Richard Huxton  wrote:


If you do that separately at the start of the process, (one query per
custom column in the old table) then it becomes straightforward.



no, because meaning of "col1" in advert_custom_fields is different for each
record.
for one record its codename might be "email", for another record it might be
"engine size".

hubert

--
http://www.depesz.com/ - nowy, lepszy depesz


Re: [GENERAL] "no unpinned buffers available" ? why? (hstore and plperl involved)

2007-01-03 Thread hubert depesz lubaczewski

On 1/3/07, Tom Lane <[EMAIL PROTECTED]> wrote:


Could you reduce this to a self-contained example please?  Your
functions depend on a bunch of tables that you have not provided
definitions or data for ...



I'll try. It will take some time, though.

hubert

--
http://www.depesz.com/ - nowy, lepszy depesz


Re: [GENERAL] Index Scanning

2007-01-03 Thread Enrico
On Wed, 03 Jan 2007 12:21:31 +
Richard Huxton  wrote:

> Don't forget to cc the list

Ok thanks :)
> 

> If you issue "set enable_seqscan = off" before the query that should 
> force it to use the indexes first. What does that plan show you?

I'll try it and then post the results.

Regards Enrico

 


-- 
If Bill Gates had a penny for every time Windows crashed, he'd be a 
multi-billionaire by now ...oh look, he already is 
[EMAIL PROTECTED] - Skype:sscotty71
http://www.linuxtime.it/enricopirozzi

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Generic timestamp function for updates where field

2007-01-03 Thread novnov



Adrian Klaver wrote:
> 
> On Wednesday 03 January 2007 12:13 am, novnov wrote:
>> Adrian Klaver wrote:
>> > On Sunday 31 December 2006 8:48 am, novnov wrote:
>> >> OK. python would be the preference, if anyone is interested in showing
>> >> me how it would be done, I've never used one of the dynamic languages
>> >> with postgres.
>> >>
>> >> Why would it not be possible in plpgsql? It has loops etc.; the only part
>> >> I'm not sure it can do is use the variable as a field name.
>> >>
>> >> >http://archives.postgresql.org/
>> >
>> > Here is a function I wrote in python to do something similar.  My
>> > timestamp
>> > fields are of the form tc_ts_update where tc is  a table code that can
>> be
>> > found by looking up the table name in the table_code table. In
>> pl/pythonu
>> > that ships with 8.2 it is no longer necessary to do the relid look up.
>> > There
>> > is a TD["table_name"] variable that returns the table name directly.
>> >
>> > CREATE OR REPLACE FUNCTION public.ts_update()
>> > RETURNS trigger AS
>> > $Body$
>> > table_oid=TD["relid"]
>> > plan_name=plpy.prepare("SELECT relname FROM pg_class WHERE
>> > oid=$1",["oid"])
>> > plan_code=plpy.prepare("SELECT tc_table_code FROM table_code WHERE
>> > tc_table_name=$1",["text"])
>> > rs_name=plpy.execute(plan_name,[table_oid])
>> > rs_code=plpy.execute(plan_code,[rs_name[0]["relname"]])
>> > fld_name="_ts_update"
>> > tbl_code=rs_code[0]["tc_table_code"]
>> > full_name=tbl_code+fld_name
>> > TD["new"][full_name]="now()"
>> > return "MODIFY"
>> > $Body$
>> > LANGUAGE plpythonu SECURITY DEFINER;
>> > --
>> > Adrian Klaver
>> > [EMAIL PROTECTED]
>>
>> Here is what I have tried, it fails on the
>> TD["NEW"][varFieldName]="now()"
>> line.
>> Do I need the Return?
>> I'm passing in the table prefix as a param.
>> I set to VOLATILE not SECURITY DEFINER (wasn't sure what that was)
>>
>> CREATE OR REPLACE FUNCTION "public"."datem_update"()
>> RETURNS trigger AS
>> $BODY$
>> varPrefix=TG_ARGV[0]
>> varFieldName=varPrefix+"_datem"
>> TD["NEW"][varFieldName]="now()"
>> RETURN "Modify"
> Try return "Modify". I believe the problem is actually the upper case
> RETURN.
>> $BODY$
>> LANGUAGE 'plpythonu' VOLATILE;
> -- 
> Adrian Klaver
> [EMAIL PROTECTED]
> 

Thanks Adrian, 'return' works better. But there may be a namespace issue
with TG_ARGV. The error I get is "exceptions.NameError: global name TG_ARGV
is not defined." I have been unable to find anything on this by googling the
web or usenet. Do the postgres names like TG_ARGV need special treatment
inside a python function? tg_argv[0] (i.e. lowercase) did no better.

As an experiment I replaced tg_argv with a hard-coded prefix value, and
found that it didn't like NEW either; 'new' is better. But with that change
the function works, so the TG_ARGV issue is the last one. 
-- 
View this message in context: 
http://www.nabble.com/Generic-timestamp-function-for-updates-where-field-names-vary-tf2899327.html#a8144550
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] "no unpinned buffers available" ? why? (hstore and

2007-01-03 Thread Richard Huxton

hubert depesz lubaczewski wrote:

On 1/3/07, Richard Huxton  wrote:


If you do that separately at the start of the process, (one query per
custom column in the old table) then it becomes straightforward.



no, because the meaning of "col1" in advert_custom_fields is different for
each record.
for one record its codename might be "email", for another record it might be
"engine size".
And is that not what's stored in "v_category_custom_fields"? So you can 
do the transformation and get (advert_id=1, codename='col1', 
value='vvv') then use v_category_custom_fields to update the 'col1' part.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Backup Restore

2007-01-03 Thread Bob Pawley
Looking through PGAdmin, where would I find the spatial references that the 
error message references?


Bob


- Original Message - 
From: "Richard Huxton" 

To: "Bob Pawley" <[EMAIL PROTECTED]>
Cc: "Dave Page" <[EMAIL PROTECTED]>; "Shoaib Mir" <[EMAIL PROTECTED]>; 
"Postgresql" 

Sent: Tuesday, January 02, 2007 2:00 AM
Subject: Re: [GENERAL] Backup Restore



Bob Pawley wrote:

Following is the error message on pg_restore:-

"pg_restore: ERROR:  duplicate key violates unique constraint 
"spatial_ref_sys_pkey"
CONTEXT:  COPY spatial_ref_sys, line 1: "2000 EPSG 2000 PROJCS["Anguilla 
1957 / British West Indies Grid",GEOGCS["Anguilla 1957",DATUM["Angui..."
pg_restore: [archiver (db)] error returned by PQendcopy: ERROR: 
duplicate key violates unique constraint "spatial_ref_sys_pkey"
CONTEXT:  COPY spatial_ref_sys, line 1: "2000 EPSG 2000 PROJCS["Anguilla 
1957 / British West Indies Grid",GEOGCS["Anguilla 1957",DATUM["Angui..."

pg_restore: *** aborted because of error

Process returned exit code 1."

The GIS feature was removed from the PostgreSQL application before the 
project dump.


Not sure what you mean by that - you removed all GIS related types and 
functions from the source database?



Anyone have any thoughts on how to get around this??


Remove the constraint if you no longer have that requirement. It looks 
like you have a primary-key defined on spatial_ref_sys and want to remove 
it. See the SQL Reference section of the manuals for how to use ALTER 
TABLE to drop primary keys and other constraints.


I'm curious as to how this can happen though. Are the definitions of table 
spatial_ref_sys the same in the source and target database?


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster




---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Backup Restore

2007-01-03 Thread Richard Huxton

Bob Pawley wrote:
Looking through PGAdmin where would I find the spatial references that 
the error message references?



Bob Pawley wrote:

Following is the error message on pg_restore:-

"pg_restore: ERROR:  duplicate key violates unique constraint 
"spatial_ref_sys_pkey"
CONTEXT:  COPY spatial_ref_sys, line 1: "2000 EPSG 2000 
PROJCS["Anguilla 1957 / British West Indies Grid",GEOGCS["Anguilla 
1957",DATUM["Angui..."


Looking at this, you should have a table "spatial_ref_sys" with a 
primary key constraint "spatial_ref_sys_pkey"


Find the table via the left-hand tree control and its details should be 
listed on the right-hand side. The primary key will be detailed at the 
top and bottom.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Geographical redundancy

2007-01-03 Thread Dennis
Well, I am mainly concerned with catastrophic  failure. If 1st (main) 
datacenter fails majorly (say fire, earthquake, db server dies etc), I need to 
be able to restore websites/data quickly in another location. If I get a data 
loss of say 6-12 hours during a major failure (which should never occur), I am 
ok with that.

Ben <[EMAIL PROTECTED]> wrote: On Sat, 30 Dec 2006, Dennis wrote:

> I was thinking of maybe just having 2nd location receive a PG dump (full 
> or incremental) every so often (an hour to 6 hours) and if the main 
> location fails majorly, restore the PG cluster from the dump and switch 
> DNS settings on the actual sites. I can make sure all website files are 
> always in sync on both locations.

Well, first off, you can just rsync your archived WAL files. That may be 
easier than playing with pg_dump:

http://www.postgresql.org/docs/8.2/interactive/continuous-archiving.html

But second, and more important given your data-loss desires, if you do it 
this way you have a window where you can experience data loss. 
Specifically, after a transaction is committed, that commit will be at 
risk until the next transfer has completed.
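The WAL-shipping approach Ben links to is driven from postgresql.conf; a minimal fragment might look like this (hostname and archive path are illustrative, not from the thread):

```
# postgresql.conf on the primary (in 8.2, setting archive_command enables archiving)
archive_command = 'rsync %p standby:/wal_archive/%f'   # %p = WAL file path, %f = file name
```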



[GENERAL] pg_dump problems

2007-01-03 Thread JTyrrell

Hi everyone,

I'm having problems using pg_dump.


When I run the command

pg_dump -a -d testrig > testrig.data

on a machine running postgresql 7.4.13 with database testrig I have no
problem.


Then, I want to do the same but on a different machine running postgresql
7.4.6 with database root. So...

pg_dump -a -d root > root.data


For some reason this doesn't work! The command is running without error, but
the file root.data is always empty. The database does exist and all the data
is there, and I can run psql and do everything I normally do. I've tried
pg_dump with other databases and still get an empty file.


Could it be the commands are different for the different versions? If that's
the case, does anyone know what command I should be running?

-- 
View this message in context: 
http://www.nabble.com/pg_dump-problems-tf2912789.html#a8138723
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] About auto_increment

2007-01-03 Thread Yesh

Hi,

 Thanks guys, I am now able to increment that field by using SERIAL.

Richard Huxton wrote:
> 
> Yesh wrote:
>> Hi,
>> 
>>   I need to know how to increment a primary key field automatically in
>> run
>> time.
> 
> Not sure precisely what you want to know, but read up on the SERIAL 
> type, sequences, nextval() and currval().
> 
> -- 
>Richard Huxton
>Archonet Ltd
> 
> ---(end of broadcast)---
> TIP 1: if posting/reading through Usenet, please send an appropriate
>subscribe-nomail command to [EMAIL PROTECTED] so that your
>message can get through to the mailing list cleanly
> 
> 

-- 
View this message in context: 
http://www.nabble.com/About-auto_increment-tf2906530.html#a8135523
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] pg_dump problems

2007-01-03 Thread Richard Huxton

JTyrrell wrote:


pg_dump -a -d testrig > testrig.data

on a machine running postgresql 7.4.13 with database testrig I have no
problem.

Then, I want to do the same but on a different machine running postgresql
7.4.6 with database root. So...

pg_dump -a -d root > root.data

For some reason this doesn't work! The command is running without error, but
the file root.data is always empty. The database does exist and all the data
is there, and I can run psql and do everything I normally do. I've tried
pg_dump with other databases and still get an empty file.


Should work just fine.
1. Does "pg_dump -s" work?
2. Does the user you run this as have permission to dump data from "root"?

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] queueing via database table?

2007-01-03 Thread Vivek Khera


On Jan 3, 2007, at 2:00 AM, Steve Atkins wrote:


Holding a lock while generating the thumbnail doesn't
sound like a great idea, and I think that the select
for update will end up serialising the requests.

I'd add a "rendering" field, text, defaulting
to an empty string.

Then do a "select for update where ... and rendering = '' limit 1",
update the rendering field to the hostname of the box doing the
work and commit. Render the thumbnail. Delete the record.

That'll also give you an easy way to show status of which
box is rendering which scene.

Depending on what else you're putting into the where clause
a partial index on something for records where rendering=''
might be helpful.


this is more or less how we do it, so i second this.

we also use NOTIFY/LISTEN to "wake up" the job processors when new  
work is added.
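In SQL the claim step is a SELECT ... FOR UPDATE plus an UPDATE; as a rough in-memory sketch of that step (Python dicts standing in for queue rows, all values invented):

```python
queue = [
    {"id": 1, "scene": "shot-01", "rendering": ""},
    {"id": 2, "scene": "shot-02", "rendering": ""},
]

def claim(queue, hostname):
    # rough equivalent of:
    #   SELECT ... WHERE rendering = '' LIMIT 1 FOR UPDATE;
    #   UPDATE ... SET rendering = <hostname>;  COMMIT;
    for row in queue:
        if row["rendering"] == "":
            row["rendering"] = hostname   # records which box owns the job
            return row
    return None   # queue drained

job = claim(queue, "render-box-1")
```

Because the lock is held only long enough to mark the row, the slow rendering work happens outside any transaction, and the non-empty `rendering` field doubles as a status display.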






Re: [GENERAL] Backup Restore

2007-01-03 Thread Bob Pawley
Found it in template1. This seems strange as both servers and pgadmins are 
the same version and I haven't opened the template until today.


Bob

- Original Message - 
From: "Richard Huxton" 

To: "Bob Pawley" <[EMAIL PROTECTED]>
Cc: "Dave Page" <[EMAIL PROTECTED]>; "Shoaib Mir" <[EMAIL PROTECTED]>; 
"Postgresql" 

Sent: Wednesday, January 03, 2007 10:44 AM
Subject: Re: [GENERAL] Backup Restore



Bob Pawley wrote:
Looking through PGAdmin where would I find the spatial references that 
the error message references?



Bob Pawley wrote:

Following is the error message on pg_restore:-

"pg_restore: ERROR:  duplicate key violates unique constraint 
"spatial_ref_sys_pkey"
CONTEXT:  COPY spatial_ref_sys, line 1: "2000 EPSG 2000 
PROJCS["Anguilla 1957 / British West Indies Grid",GEOGCS["Anguilla 
1957",DATUM["Angui..."


Looking at this, you should have a table "spatial_ref_sys" with a primary 
key constraint "spatial_ref_sys_pkey"


Find the table via the left-hand tree control and its details should be 
listed on the right-hand side. The primary key will be detailed at the top 
and bottom.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster




---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] [pgsql-es-ayuda] Update to 8.2 in openSUSE 10.2

2007-01-03 Thread Juan Martínez

Romulo Hunter wrote:

Hello everyone.
As we know, openSUSE comes with many packages preinstalled or available
to install. The point is that it ships with PostgreSQL 8.1..., and when
I download 8.2 from the PostgreSQL site it installs into /usr/local/pgsql,
while the initial installation of the engine that ships with openSUSE is
in /var/lib/pgsql. Now: how do I make the new version replace the old one,
so that only one version (8.2) is left on my machine?


Easy. Uninstall 8.1 with your package manager!

1. Make a backup with pg_dumpall,
2. Uninstall postgres 8.1
3. Install postgres 8.2
4. Restore the backup and you're done...

Enjoy!

:-D

--
Juan Martinez G.   Mac Iver # 370
Departamento de Informatica4997900 - 4997950
Universidad Miguel de CervantesSantiago - Chile
http://download.bblug.usla.org.ar/netiquette.png

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] superuser authentication?

2007-01-03 Thread woger151


- Original Message - 
From: "Tom Lane" <[EMAIL PROTECTED]>

To: "woger151" <[EMAIL PROTECTED]>
Cc: 
Sent: Wednesday, January 03, 2007 9:52 AM
Subject: Re: [GENERAL] superuser authentication?



"woger151" <[EMAIL PROTECTED]> writes:

What I'm not sure about is how to authenticate the postgresql superuser
(user 'postgres' on my system).  I'm considering:


1.  Using ident (supposedly secure because of the SO_PEERCRED mechanism; 
and

I've made a lot of effort to secure the server at the OS level)
2.  Using password (_not_ stored on disk in e.g. pgpass)
3.  Using reject


How are you going to do backups?


Hadn't thought about that yet, though I know that periodic backups are 
mandatory.


Easy to switch the authentication method back to something like password or 
ident if one is doing things manually anyway, but it _would_ make it hard to 
script things.


I'll have to think more about that...



regards, tom lane 



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] superuser authentication?

2007-01-03 Thread woger151


- Original Message - 
From: "Bill Moran" <[EMAIL PROTECTED]>

To: "woger151" <[EMAIL PROTECTED]>
Cc: 
Sent: Wednesday, January 03, 2007 10:09 AM
Subject: Re: [GENERAL] superuser authentication?



In response to Tom Lane <[EMAIL PROTECTED]>:


"woger151" <[EMAIL PROTECTED]> writes:
> What I'm not sure about is how to authenticate the postgresql superuser
> (user 'postgres' on my system).  I'm considering:

> 1.  Using ident (supposedly secure because of the SO_PEERCRED 
> mechanism; and

> I've made a lot of effort to secure the server at the OS level)
> 2.  Using password (_not_ stored on disk in e.g. pgpass)
> 3.  Using reject

How are you going to do backups?


Additionally ...

While I would never caution someone _against_ more security, keep some
things in mind.

There's a user on your system that PostgreSQL runs under (probably called
"postgres").  That user owns all the files where Postgres stores the 
tables

and everything else.  None of that data is encrypted by Postgres (except
passwords) so any user who can su to the postgres user can bypass the
database to access the data, corrupt it, and even (if they're very clever)
modify it.

My point being, that if an attacker gets a shell on your system, they're
already very close to being able to access your PostgreSQL data.


Right, which is why "ident" seems pretty secure.  The only reason I don't 
just go ahead with "ident" is that one can always wonder, "what if there's a 
security hole in the implementation of SO_PEERCRED?"



Personally, I'd set auth to password, then keep the password in a file in
root's home directory and set it readable by root only.  If an attacker 
can

read that file, he already doesn't need to.

This does mean that you'll have to carefully secure the script you use to
make backups, since they'll need to have the password in them.  But you'll
need to carefully secure your backups anyway or all the other security is
rather pointless.


Right.



--
Bill Moran
Collaborative Fusion Inc. 



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Generic timestamp function for updates where field

2007-01-03 Thread Adrian Klaver
On Wednesday 03 January 2007 9:10 am, novnov wrote:
> Adrian Klaver wrote:
> > On Wednesday 03 January 2007 12:13 am, novnov wrote:
> >> Adrian Klaver wrote:
> >> > On Sunday 31 December 2006 8:48 am, novnov wrote:
> >> >> OK. python would be the preference, if anyone is interested in
> >> >> showing me how it would be done, I've never used one of the dynamic
> >> >> languages with postgres.
> >> >>
> >> >> Why would it not be possible in plpgsql? It has loops etc.; the only
> >> >> part I'm not sure it can do is use the variable as a field name.
> >> >>
> >> >> >http://archives.postgresql.org/
> >> >
> >> > Here is a function I wrote in python to do something similar.  My
> >> > timestamp
> >> > fields are of the form tc_ts_update where tc is  a table code that can
> >>
> >> be
> >>
> >> > found by looking up the table name in the table_code table. In
> >>
> >> pl/pythonu
> >>
> >> > that ships with 8.2 it is no longer necessary to do the relid look up.
> >> > There
> >> > is a TD["table_name"] variable that returns the table name directly.
> >> >
> >> > CREATE OR REPLACE FUNCTION public.ts_update()
> >> > RETURNS trigger AS
> >> > $Body$
> >> > table_oid=TD["relid"]
> >> > plan_name=plpy.prepare("SELECT relname FROM pg_class WHERE
> >> > oid=$1",["oid"])
> >> > plan_code=plpy.prepare("SELECT tc_table_code FROM table_code WHERE
> >> > tc_table_name=$1",["text"])
> >> > rs_name=plpy.execute(plan_name,[table_oid])
> >> > rs_code=plpy.execute(plan_code,[rs_name[0]["relname"]])
> >> > fld_name="_ts_update"
> >> > tbl_code=rs_code[0]["tc_table_code"]
> >> > full_name=tbl_code+fld_name
> >> > TD["new"][full_name]="now()"
> >> > return "MODIFY"
> >> > $Body$
> >> > LANGUAGE plpythonu SECURITY DEFINER;
> >> > --
> >> > Adrian Klaver
> >> > [EMAIL PROTECTED]
> >>
> >> Here is what I have tried, it fails on the
> >> TD["NEW"][varFieldName]="now()"
> >> line.
> >> Do I need the Return?
> >> I'm passing in the table prefix as a param.
> >> I set to VOLATILE not SECURITY DEFINER (wasn't sure what that was)
> >>
> >> CREATE OR REPLACE FUNCTION "public"."datem_update"()
> >> RETURNS trigger AS
> >> $BODY$
> >> varPrefix=TG_ARGV[0]
> >> varFieldName=varPrefix+"_datem"
> >> TD["NEW"][varFieldName]="now()"
> >> RETURN "Modify"
> >
> > Try return "Modify". I believe the problem is actually the upper case
> > RETURN.
> >
> >> $BODY$
> >> LANGUAGE 'plpythonu' VOLATILE;
> >
> > --
> > Adrian Klaver
> > [EMAIL PROTECTED]
>
> Thanks Adrian, 'return' works better. But there may be a namespace issue
> with TG_ARGV. The error I get is "exceptions.NameError: global name TG_ARGV
> is not defined." I have been unable to find anything on this by googling
> the web or usenet. Do the postgres names like TG_ARGV need special
> treatment inside a python function? tg_argv[0] (ie lowercase) did no
> better.
>
> As an experiment I replaced tg_argv with a hard-coded prefix value, and
> found that it didn't like NEW either, 'new' is better. But with that change
> the function works, so the TG_ARGV issue is the last one.
Replace TG_ARGV[0] with TD["args"][0]
For complete documentation see
http://www.postgresql.org/docs/8.2/interactive/plpython.html
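To see why the fix works, here is the trigger body's logic run against a mock of the TD dictionary that pl/pythonu provides (the "args" and "new" keys are lower case, per the docs; the table prefix and column names here are invented):

```python
# mock of the trigger-data dictionary pl/pythonu exposes as TD
TD = {
    "args": ["item"],                                   # trigger arguments (the TG_ARGV equivalent)
    "new": {"item_name": "widget", "item_datem": None}, # the row being written
}

var_prefix = TD["args"][0]          # not TG_ARGV: that name only exists in plpgsql
field_name = var_prefix + "_datem"
TD["new"][field_name] = "now()"     # lower-case "new", not "NEW"
result = "MODIFY"                   # lower-case return value, as Adrian noted
```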
-- 
Adrian Klaver
[EMAIL PROTECTED]

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] Tabbed data in tab-separated output

2007-01-03 Thread felix
I have a table which has a few VARCHAR columns whose first character
is a tab.  If I run pg_dump on this table, it outputs data like this:

43158  \t555-12123

where the two embedded white spaces are actually tabs.

If I use psql to execute SQL to dump parts of the table, like this:

psql -qtA -f '\t' -U xyzzy xyzzy -c 'select ... >/tmp/xxx

I get this:

43158   310-319-1333, x1070 3

where that initial embedded white space represents two tabs.  When I
use psql to restore this data, it thinks the 2nd column is empty and
complains that the third column is the wrong type.

pg_dump apparently is smart enough to print embedded tabs as escaped
chars, but not psql.  Is there a fix for this?  I thought of reverting
to standard output, without the -t option, and analyzing the first two
lines to tell exactly how many spaces are assigned to each column, but
that gives me the shudders.
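The ambiguity is easy to reproduce: unaligned psql output joins fields with the raw separator, while pg_dump/COPY escape embedded tabs. A quick illustration (Python; the field values are invented):

```python
row = ("43158", "\t555-12123", "3")

# psql -A -F '\t' style: a raw join, so the embedded tab is
# indistinguishable from a field separator on the way back in
raw = "\t".join(row)
assert raw.split("\t") != list(row)   # round-trip splits the row wrongly

# COPY/pg_dump style: escape embedded tabs as the two characters \ and t,
# so splitting on real tabs recovers the original fields
escaped = "\t".join(f.replace("\t", r"\t") for f in row)
assert escaped.split("\t") == ["43158", r"\t555-12123", "3"]
```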

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman & rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] could not open file xxxx for writing: Permission denied

2007-01-03 Thread Tomas Lanczos
Hello,

Using PostgreSQL 8.1.4, pgadmin III 1.6.0 in WinXP I tried to export a table
using COPY (first time in postgresql, did many times in Sybase):

COPY ml50jtsk_datum_v TO 'c:/postgresql/ml50jtsk.out';

I got the following errormessage:

ERROR: could not open file "c:/postgresql/ml50jtsk.out" for writing:
Permission denied
SQL state: 42501

What's wrong? Does it mean that the database user has no permission to write
outside the database? How did I manage a pg_dump a few weeks ago, then?

Many thanks for help and/or explanation, best regards and Happy New Year

Tomas


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] could not open file xxxx for writing: Permission

2007-01-03 Thread mike
That usually means that the database user doesn't have Windows file
level permissions to write to that folder.

Maybe when you upgraded to pgAdmin 1.6 you started using a different
user id to log into it?  Perhaps the destination directory was c:
\postgresql\8.1\data before?

Mike

On Thu, 2007-01-04 at 03:28 +0100, Tomas Lanczos wrote:
> Hello,
> 
> Using PostgreSQL 8.1.4, pgadmin III 1.6.0 in WinXP I tried to export a table
> using COPY (first time in postgresql, did many times in Sybase):
> 
> COPY ml50jtsk_datum_v TO 'c:/postgresql/ml50jtsk.out';
> 
> I got the following errormessage:
> 
> ERROR: could not open file "c:/postgresql/ml50jtsk.out" for writing:
> Permission denied
> SQL state: 42501
> 
> What's wrong? Does it mean that the database user has no permission to write
> outside the database? How did I manage a pg_dump a few weeks ago, then?
> 
> Many thanks for help and/or explanation, best regards and Happy New Year
> 
> Tomas
> 
> 
> ---(end of broadcast)---
> TIP 3: Have you checked our extensive FAQ?
> 
>http://www.postgresql.org/docs/faq


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Tabbed data in tab-separated output

2007-01-03 Thread mike
How are you attempting to restore the table after using psql?  Psql
insert statements? Pgdump? COPY FROM?

Mike
On Wed, 2007-01-03 at 17:59 -0800, [EMAIL PROTECTED] wrote:
> I have a table which has a few VARCHAR columns whose first character
> is a tab.  If I run pg_dump on this table, it outputs data like this:
> 
> 43158  \t555-12123
> 
> where the two embedded white spaces are actually tabs.
> 
> If I use psql to execute SQL to dump parts of the table, like this:
> 
> psql -qtA -f '\t' -U xyzzy xyzzy -c 'select ... >/tmp/xxx
> 
> I get this:
> 
> 43158 310-319-1333, x1070 3
> 
> where that initial embedded white space represents two tabs.  When I
> use psql to restore this data, it thinks the 2nd column is empty and
> complains that the third column is the wrong type.
> 
> pg_dump apparently is smart enough to print embedded tabs as escaped
> chars, but not psql.  Is there a fix for this?  I thought of reverting
> to standard output, without the -t option, and analyzing the first two
> lines to tell exactly how many spaces are assigned to each column, but
> that gives me the shudders.
> 


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] could not open file xxxx for writing: Permission denied

2007-01-03 Thread Magnus Hagander
> Hello,
> 
> Using PostgreSQL 8.1.4, pgadmin III 1.6.0 in WinXP I tried to export a table
> using COPY (first time in postgresql, did many times in Sybase):
> 
> COPY ml50jtsk_datum_v TO 'c:/postgresql/ml50jtsk.out';
> 
> I got the following errormessage:
> 
> ERROR: could not open file "c:/postgresql/ml50jtsk.out" for writing:
> Permission denied
> SQL state: 42501
> 
> What's wrong? Does it mean that the database user has no permission to write
> outside the database? How did I manage a pg_dump a few weeks ago, then?

Yes, that's what it means. pg_dump works because it uses the client's account, 
but COPY uses the server's. To use client permissions with COPY in psql, use 
\copy 
instead.

/Magnus


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Generic timestamp function for updates where field

2007-01-03 Thread novnov

Thanks, that did it, and I'd not seen that set of docs yet, should be
helpful.

-- 
View this message in context: 
http://www.nabble.com/Generic-timestamp-function-for-updates-where-field-names-vary-tf2899327.html#a8155138
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/