Re: [GENERAL] Business days

2007-04-25 Thread A. Kretschmer
On Wed, 25 Apr 2007 at 21:01:13 -0400, Eddy D. Sanchez wrote the following:
> Hello...
> I need to get the total number of business days (from Monday to
> Friday) between two dates.
> Can someone help me, please?

For instance the number of business days between 2007-04-01 and
2007-04-30:

select sum(case when extract(dow from foo) in (1,2,3,4,5) then 1 else 0 end)
  from (select '2007-04-01'::date
               + (generate_series(0, '2007-04-30'::date - '2007-04-01'::date)
                  || ' days')::interval as foo) foo;

Note that this does not account for holidays such as Easter!
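Outside the database, the same Monday-to-Friday count can be cross-checked with a short script (a hedged sketch in Python; note that Python's weekday() numbers Monday as 0, whereas PostgreSQL's extract(dow ...) numbers Sunday as 0, which is why the SQL tests for 1-5):

```python
from datetime import date, timedelta

def business_days(start: date, end: date) -> int:
    """Count Monday-Friday days in the inclusive range [start, end].
    Holidays (e.g. Easter) are NOT excluded, matching the SQL above."""
    total = (end - start).days + 1
    # date.weekday(): Monday == 0 ... Sunday == 6
    return sum(1 for i in range(total)
               if (start + timedelta(days=i)).weekday() < 5)

print(business_days(date(2007, 4, 1), date(2007, 4, 30)))  # prints 21
```

April 2007 starts on a Sunday and ends on a Monday, so the 30-day range contains 21 weekdays, matching the SQL query's result.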



Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] a math question

2007-04-25 Thread Chris

tom wrote:

In pgsql I have to modify this a bit with 'cast(s_msgs as double
precision)' or 'cast(s_msgs as real)' in order to get floating-point math.

( cast(s_msgs as double precision)/S_msgs)  and so on...

Question:  Is there a better way to get floating point math out of a set 
of integers?


Nope.

This is mentioned in the docs: 
http://www.postgresql.org/docs/8.2/static/functions-math.html


division (integer division truncates results)

I'm sure it's because of the SQL spec, but someone else will throw their 
2c in if that's wrong ;)


You only need one real or double precision field in there for the result 
not to be truncated; you don't need to cast everything.
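The rule Chris describes is not PostgreSQL-specific: in most languages one floating-point operand is enough to promote the whole division. A small Python illustration (s_msgs and total are hypothetical counters standing in for the integer columns):

```python
s_msgs, total = 7, 2  # hypothetical integer values, as in the columns above

# Integer division truncates the fraction (Python's // floors, which for
# positive operands matches SQL's truncating integer division): 7/2 -> 3
print(s_msgs // total)        # prints 3

# One floating-point operand suffices, just as one cast (or one
# real/double precision column) suffices in SQL: 7.0/2 -> 3.5
print(float(s_msgs) / total)  # prints 3.5
```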


--
Postgresql & php tutorials
http://www.designmagick.com/

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Feature request - have postgresql log warning when new sub-release comes out.

2007-04-25 Thread Bill Moran
Ron Mayer <[EMAIL PROTECTED]> wrote:
>
> Carlos Moreno wrote:
> > Tom Lane wrote:
> >> Well, if you can't update major versions that's understandable; that's
> >> why we're still maintaining the old branches.  But there is no excuse
> >> for not running a reasonably recent sub-release within your branch.
> > 
> > Slammer... bug in Microsucks SQL Server... fix had been available
> 
> Feature request.
> 
> How about if PostgreSQL periodically checked for updates on the
> internet and logged WARNINGs as soon as it saw it was not running
> the newest minor version for its branch?  Ideally, the time of day
> could be made configurable to avoid setting off pagers in the
> middle of the night.
> 
> I might not lurk on the mailing lists enough to notice every
> dot release; but I sure would notice if pagers went off with
> warnings in the log files from production servers.
> 
> Is that a possible TODO?

If you switch to FreeBSD, you can easily have this done automatically
with existing tools.

...

Actually, I've a feeling that it would be trivial to do with just
about any existing packaging system ...

-- 
Bill Moran
http://www.potentialtech.com

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Business days

2007-04-25 Thread brian

Eddy D. Sanchez wrote:

Hello...
I need to get the total number of business days (from Monday to
Friday) between two dates.

Can someone help me, please?



Whether you're using PHP or not, this page may be of some help:

http://www.php.net/manual/en/function.date.php

There are a couple of examples there of how to do it. Maybe something 
there can be of some use.


b

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Business days

2007-04-25 Thread Rich Shepard

On Wed, 25 Apr 2007, Eddy D. Sanchez wrote:


I need to get the total number of business days (from Monday to Friday)
between two dates. Can someone help me, please?


  Joe Celko's "SQL for Smarties, 2nd Edition" has exactly this solution.
Check it out!

Rich

--
Richard B. Shepard, Ph.D.   |The Environmental Permitting
Applied Ecosystem Services, Inc.|  Accelerator(TM)
 Voice: 503-667-4517  Fax: 503-667-8863

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Business days

2007-04-25 Thread Eddy D. Sanchez

Hello...
I need to get the total number of business days (from Monday to
Friday) between two dates.

Can someone help me, please?


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Vacuum-full very slow

2007-04-25 Thread Steve Crawford
Tom Lane wrote:
> Alvaro Herrera <[EMAIL PROTECTED]> writes:
>> Steve Crawford wrote:
>>> Am I reading that what it actually does is to thrash around keeping
>>> indexes unnecessarily updated, bloating them in the process?
> 
>> Yes.
> 
> Just for the record, it's not "unnecessary".  The point of that is to
> not leave a corrupted table behind if VACUUM FULL fails midway through.
> The algorithm is:...

Yes, dig far enough under the covers and it all makes sense.

Just curious: would it be wise, or even possible, to create the
functionality of an (oxymoronic) "vacuum full partial"? In other words,
provide the ability to set a max-tuples or max-time parameter. Since you
are looking for active tuples at the physical end of the file, moving
them to unused space within the file, and then lopping off the end, it
might be possible to do this in chunks to give control over how long a
table is locked at any one time. Of course, this doesn't improve the
index-bloat issue.

> CLUSTER avoids all this thrashing by recopying the whole table, but
> of course that has peak space requirements approximately twice the
> table size (and is probably not a win anyway unless most of the table
> rows need to be moved).  You pays your money, you takes your choice.

That's certainly our case, as we are archiving and purging tens of
millions of tuples from the prior year and have plenty of reserve
disk space. I killed the vacuum full after it passed the 5-hour mark.
Cluster took 20 minutes, with nice, compact indexes included.

Thanks for the advice, everyone.

Cheers,
Steve

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Feature request - have postgresql log warning when new sub-release comes out.

2007-04-25 Thread Joshua D. Drake

Ron Mayer wrote:

Carlos Moreno wrote:

Tom Lane wrote:

Well, if you can't update major versions that's understandable; that's
why we're still maintaining the old branches.  But there is no excuse
for not running a reasonably recent sub-release within your branch.

Slammer... bug in Microsucks SQL Server... fix had been available


Feature request.

How about if PostgreSQL periodically checked for updates on the
internet and logged WARNINGs as soon as it saw it was not running
the newest minor version for its branch?  Ideally, the time of day
could be made configurable to avoid setting off pagers in the
middle of the night.


uhmmm, gah, errm, no... er, why? :)

I could see a contrib module that was an agent that did that but not as 
part of actual core.


Joshua D. Drake




I might not lurk on the mailing lists enough to notice every
dot release; but I sure would notice if pagers went off with
warnings in the log files from production servers.

Is that a possible TODO?



(The thread started on the performance mailing lists but
I moved it to general since it drifted off topic).





--

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Schema as versioning strategy

2007-04-25 Thread Reece Hart
On Wed, 2007-04-25 at 12:47 -0400, Owen Hartnett wrote:
> I want to "freeze" a snapshot of the database every year (think of 
> end of year tax records).  However, I want this frozen version (and 
> all the previous frozen versions) available to the database user as 
> read-only. 

First, I'd rename the current-year schema to a more meaningful name (eg,
taxes2006).  Each year you could do a schema-only dump of the current
year, tweak the schema name in the dump to reflect the new year, and
restore just the schema into the same database. The benefit of this
approach is that the data stay in place (ie, you don't dump public and
restore into a new schema). Conceptually, something as simple as the
following pipe might suffice to dump, rename, and restore into a new
schema:
   $ pg_dump -s -n taxes2006 | sed -e 's/taxes2006/taxes2007/g' | psql -qa
(This is a little dangerous because I've assumed that the string
'taxes2006' occurs only as a schema name.  I've also assumed Unix/Linux,
and I have no idea what you'd do on a Windows box.)

PostgreSQL doesn't have a read-only mode per se. The closest you can get
is to write a script to revoke insert/update/delete on all tables in an
archived schema; that's also pretty easy:
  $ psql -Atc "select 'REVOKE INSERT, UPDATE, DELETE ON '||nspname||'.'||relname
      ||' FROM someuser;' from pg_class C join pg_namespace N
      on C.relnamespace=N.oid and N.nspname='taxes2006'
      where C.relkind='r'" \
  | psql -qa
(Note the REVOKE syntax: privileges are revoked ON the table FROM the
role. In practice you would reduce this to a one-liner.)
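The statement-generation trick in that pipe can be sketched in any scripting language; here is a hedged Python equivalent (the table names and the role 'someuser' are hypothetical). The key detail is that privileges are revoked ON the table FROM the role:

```python
def revoke_statements(schema, tables, user):
    """Build one REVOKE statement per table, like the psql pipe above."""
    return [
        f"REVOKE INSERT, UPDATE, DELETE ON {schema}.{table} FROM {user};"
        for table in tables
    ]

for stmt in revoke_statements("taxes2006", ["returns", "payments"], "someuser"):
    print(stmt)
```

Feeding the printed statements back into psql -qa mirrors the second half of the pipe.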

One of the advantages of having archived schemas in a single database is
that you'll be able to write queries that involve multiple years.  You
wouldn't be able to do that (easily*) if you archived the full database.

-Reece


* This is where David Fetter will mention dblink.

-- 
Reece Hart, http://harts.net/reece/, GPG:0x25EC91A0


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Feature request - have postgresql log warning when new sub-release comes out.

2007-04-25 Thread Ron Mayer
Carlos Moreno wrote:
> Tom Lane wrote:
>> Well, if you can't update major versions that's understandable; that's
>> why we're still maintaining the old branches.  But there is no excuse
>> for not running a reasonably recent sub-release within your branch.
> 
> Slammer... bug in Microsucks SQL Server... fix had been available

Feature request.

How about if PostgreSQL periodically checked for updates on the
internet and logged WARNINGs as soon as it saw it was not running
the newest minor version for its branch?  Ideally, the time of day
could be made configurable to avoid setting off pagers in the
middle of the night.

I might not lurk on the mailing lists enough to notice every
dot release; but I sure would notice if pagers went off with
warnings in the log files from production servers.

Is that a possible TODO?



(The thread started on the performance mailing lists but
I moved it to general since it drifted off topic).

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Audit-trail engine: getting the application's layer user_id

2007-04-25 Thread Jorge Godoy
"Marcelo de Moraes Serpa" <[EMAIL PROTECTED]> writes:

> Hello all,
>
> I know I may be asking too much, but I have a very limited C/C++ (as well as
> PostgreSQL internal architecture) knowledge. I've tried compiling the C
> source code Manuel sent as a PostgreSQL loadable module on Visual Studio
> .NET 2003 (C++) without success (lots of missing identifiers, "int Datum"
> redefinition, and other things I didn't really understand). The comments in
> the Postgres 8.0 manual didn't help much.
>
> If anyone could point me in the right direction on how to write/build C/C++
> PostgreSQL modules on the Windows platform (specifically Windows XP), I would
> be grateful, as I really need this working as soon as possible.
>
> Thanks,
>
> Marcelo.

If you really need this, shouldn't you consider hiring someone who
works professionally with PostgreSQL?

They'd certainly do the work for you, and you'd accomplish your target
by the due date.

-- 
Jorge Godoy  <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Vacuum-full very slow

2007-04-25 Thread Listmail



I don't see a way to remove the old index entries before inserting new
ones without creating a window where the index and table will be
inconsistent if vacuum fails.


VACUUM FULL is slow because it plays with the indexes...
CLUSTER is slow because it has to order the rows...

Maybe drop all the indexes, VACUUM FULL only the table, then recreate
all the indexes?

If vacuum fails, the index drop would be rolled back.

By the way, about indexes:

When you have a small table (say, for a website, maybe a few tens of
megabytes max...), reindexing it takes just a few seconds, maybe 10-20.

It could be interesting, performance-wise, to tell postgres not to bother
about crash-survivability of indexes on such a table, like temporary
tables: write nothing to WAL, and if it crashes, reindex the table on
recovery.

btree indexing is so fast on postgres that I'd definitely use this
feature. I'd rather trade a minute of recovery for less disk IO on
index updates.

You could even do that for whole tables (like a web sessions table) that
hold "perishable" data...



CLUSTER avoids all this thrashing by recopying the whole table, but
of course that has peak space requirements approximately twice the
table size (and is probably not a win anyway unless most of the table
rows need to be moved).  You pays your money, you takes your choice.

regards, tom lane





---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Vacuum-full very slow

2007-04-25 Thread Tom Lane
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Steve Crawford wrote:
>> Am I reading that what it actually does is to thrash around keeping
>> indexes unnecessarily updated, bloating them in the process?

> Yes.

Just for the record, it's not "unnecessary".  The point of that is to
not leave a corrupted table behind if VACUUM FULL fails midway through.
The algorithm is:

1. copy tuples to lower blocks, inserting index entries for them too

During this stage, if we fail then the copied tuples are invalid (since
they were inserted by a failed transaction) and so no corruption.
Meanwhile the original tuples are marked as "moved by this vacuum
transaction", but their validity is not affected by that.

2. mark the transaction committed

This atomically causes all the copied tuples to be GOOD and all the
originals to be INVALID according to the tuple validity rules.

3. remove the index entries for moved-off tuples

If we crash here, some of the invalid tuples will have index entries
and some won't, but that doesn't matter because they're invalid.
(The next vacuum will take care of finishing the cleanup.)

4. remove the moved-off tuples (which just requires truncating the
table)
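The reason step 2 is safe can be sketched with a toy model (hypothetical names, not PostgreSQL's actual tuple-header code): tuple validity is derived from transaction state, so committing the vacuum transaction flips both tuple sets in a single atomic step:

```python
IN_PROGRESS, COMMITTED = "in progress", "committed"
txn_state = {"vacuum_txn": IN_PROGRESS}

# Step 1: the original tuple is marked "moved by this vacuum transaction";
# the copied tuple records that the vacuum transaction inserted it.
original = {"moved_by": "vacuum_txn"}
copied = {"inserted_by": "vacuum_txn"}

def is_valid(t):
    if "inserted_by" in t:   # copy: valid only once its inserter commits
        return txn_state[t["inserted_by"]] == COMMITTED
    if "moved_by" in t:      # original: invalid once its mover commits
        return txn_state[t["moved_by"]] != COMMITTED
    return True

# Crash before step 2: copies invalid, originals still valid -> no corruption.
assert not is_valid(copied) and is_valid(original)

# Step 2: committing is one state change that atomically makes all
# copies GOOD and all originals INVALID.
txn_state["vacuum_txn"] = COMMITTED
assert is_valid(copied) and not is_valid(original)
```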


I don't see a way to remove the old index entries before inserting new
ones without creating a window where the index and table will be
inconsistent if vacuum fails.

CLUSTER avoids all this thrashing by recopying the whole table, but
of course that has peak space requirements approximately twice the
table size (and is probably not a win anyway unless most of the table
rows need to be moved).  You pays your money, you takes your choice.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] [DOCS] Incrementally Updated Backups: Docs Clarification

2007-04-25 Thread Thomas F. O'Connell


On Apr 25, 2007, at 9:42 AM, Simon Riggs wrote:


On Thu, 2007-04-19 at 15:48 -0500, Thomas F. O'Connell wrote:


"If we take a backup of the standby server's files while it is
following logs shipped from the primary, we will be able to reload
that data and restart the standby's recovery process from the last
restart point. We no longer need to keep WAL files from before the
restart point. If we need to recover, it will be faster to recover
from the incrementally updated backup than from the original base
backup."

I'm specifically confused about the meaning of the following phrases:

"backup of the standby server's files" - Which files?


the files that make up the database server:
- data directory
- all tablespace directories


"reload that data" - What does this mean in postgres terms?


copy back from wherever you put them in the first place

"that data" referring to the "files that make up the db server"


"last restart point" - What is this? Wouldn't it be able to restart
from the last recovered file, which would presumably occur later than
the last restart point?


No, we don't restart file-by-file.

http://developer.postgresql.org/pgdocs/postgres/continuous-archiving.html#BACKUP-PITR-RECOVERY


"If recovery finds a corruption in the WAL..." onwards explains the
restart mechanism. It's much like checkpointing: we don't restart
from the last log file; we restart from a point possibly many log files
in the past.


Does this mean make a filesystem backup of the standby server's data
directory while it's stopped, and then start it again with that data
and the restricted set of WAL files needed to continue recovery?


No need to stop the server. Where did you read that you need to do that?


I'd like to see the language here converted to words that have more
meaning in the context of postgres. I'd be happy to attempt a  
revision

of this section once I'm able to complete an incrementally updated
backup successfully.


Feel free to provide updates that make it clearer.


Here's how I envision it playing out in practice:

1. stop standby postgres server
2. [optional] preserve data directory, remove unnecessary WAL files
3. restart standby server


step 2 only.

Clearly not an optional step, since it's a one-stage process. :-)

--
  Simon Riggs
  EnterpriseDB   http://www.enterprisedb.com


Well, this conversation made things a lot clearer, but I'm not sure 
(yet) how to patch the docs. It seems like the original version is 
written in general terms, whereas what our Q&A produced here is very 
postgres-specific. I'll see if I can produce a version that would 
add clarity (for me).


--
Thomas F. O'Connell

optimizing modern web applications
: for search engines, for usability, and for performance :

http://o.ptimized.com/
615-260-0005

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Audit-trail engine: getting the application's layer user_id

2007-04-25 Thread Marcelo de Moraes Serpa

Hello all,

I know I may be asking too much, but I have a very limited C/C++ (as well as
PostgreSQL internal architecture) knowledge. I've tried compiling the C
source code Manuel sent as a PostgreSQL loadable module on Visual Studio
.NET 2003 (C++) without success (lots of missing identifiers, "int Datum"
redefinition, and other things I didn't really understand). The comments in
the Postgres 8.0 manual didn't help much.

If anyone could point me in the right direction on how to write/build C/C++
PostgreSQL modules on the Windows platform (specifically Windows XP), I would
be grateful, as I really need this working as soon as possible.

Thanks,

Marcelo.

On 4/25/07, Marcelo de Moraes Serpa <[EMAIL PROTECTED]> wrote:


I'm doing it from the application layer, and I don't think it can be
> done in the database layer; how would the trigger figure out which user
> is doing the query? It's the same problem you are trying to solve!
>

Duh! That is what happens when you start having high levels of caffeine
in your blood and haven't had a good night's sleep... thanks for putting me
back on track.

I will study Tim's approach more. This thing got more complicated than I
thought it would be. At least I'm learning more about PostgreSQL internal
architecture.

Thank you for your help.

Marcelo.

On 4/25/07, Manuel Sugawara <[EMAIL PROTECTED]> wrote:
>
> "Marcelo de Moraes Serpa" <[EMAIL PROTECTED]> writes:
>
> > Hi Manuel,
> >
> >> each time the user sends a request I do more or less the following:
> >
> > Could a trigger be used to implement this ? Or are you doing this from
> the
> > application layer?
>
> I'm doing it from the application layer, and I don't think it can be
> done in the database layer; how would the trigger figure out which user
> is doing the query? It's the same problem you are trying to solve!
>
> > My problem is that, like Til, I don't have full control over my
> > request cycle as I'm over a very high-level framework (Actually it
> > is an data-oriented application generator, called GeneXus).
>
> Maybe you can use Til's approach with temporary tables.
>
> Regards,
> Manuel.
>




Re: [GENERAL] PostgreSQL support

2007-04-25 Thread Richard Broersma Jr
oops, I didn't complete the first email.
--- Richard Broersma Jr <[EMAIL PROTECTED]> wrote:

> 
> --- "Porell, Chris" <[EMAIL PROTECTED]> wrote:
> 
> > My question to the list:  Do any companies that offer professional support
> > services stand out from the rest?  Please let me know who you are using.
> 
> The only indication that I can give to help answer your question would be
> to look at the companies that employ the main contributors of postgresql:
> http://www.postgresql.org/developer/bios
> and then compare these companies to the
ones listed on the support link:
http://www.postgresql.org/support/professional_support

Regards,
Richard Broersma Jr.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] PostgreSQL support

2007-04-25 Thread Porell, Chris
All very good points, thank you!

We have an immediate obstacle which we need assistance with (I'll be posing
this issue to one of the lists!), but generally we will need to maintain 24/7
support as a matter of company policy.  Any service located in the US should
be fine, since this functional area of the company operates on a national
level.

Regarding the depth of knowledge, we are the kind of group that hates to
call support unless we really have to.  We have usually done all the
preliminary legwork by the time we call, so it is important to have an
expert pick up the phone.

To your third point, I absolutely agree.

I will continue to look through the archives as well.

Thank you!!


-Original Message-
From: Richard Huxton [mailto:[EMAIL PROTECTED]
Sent: Wednesday, April 25, 2007 3:02 PM
To: Porell, Chris
Cc: 'pgsql-general@postgresql.org'
Subject: Re: [GENERAL] PosegreSQL support


Porell, Chris wrote:
> 
> My question to the list:  Do any companies that offer professional support
> services stand out from the rest?  Please let me know who you are using.

Many of the people who reply will be offering services, so it's 
difficult to be unbiased. Without knowing what your requirements are, 
there are three things that probably matter:
1. When do you need support and for how long - do you want 24/7 
worldwide or Berlin office hours? A couple of days' work, or are you 
looking for a long-term partnership? If it's short-term work, locally 
based might help your comfort-level.

2. How deep/specific are your support requirements? Various of the 
companies employ people working on developing code for PG. Now that 
doesn't necessarily mean they'll know more about real-world usage than 
someone else, but it does mean they'll have insights regarding the inner 
workings of the database.

3. Do you relate well on an individual level? I've always found it helps 
to naturally be on the same wavelength as a client. Not vital, but 
definitely helps in the early days.

Finally, check the list archives for people providing useful advice in 
areas you are interested in. That might give you some insight as to 
whether firms will be able to help you.

-- 
   Richard Huxton
   Archonet Ltd
-
IMPORTANT: The sender intends that this electronic message is for
exclusive use by the person to whom it is addressed. This message
may contain information that is confidential or privileged and
exempt from disclosure under applicable law. If the reader of this
message is not an intended recipient, be aware that any disclosure,
dissemination, distribution or copying of this communication, or
the use of its contents, is prohibited. If you have received this
message in error, please immediately notify the sender of your
inadvertent receipt and delete this message from all data storage
systems. Thank you.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] PostgreSQL support

2007-04-25 Thread Richard Broersma Jr

--- "Porell, Chris" <[EMAIL PROTECTED]> wrote:

> My question to the list:  Do any companies that offer professional support
> services stand out from the rest?  Please let me know who you are using.

The only indication that I can give to help answer your question would be
to look at the
companies that employ the main contributors of postgresql:
http://www.postgresql.org/developer/bios
and then compare these companies to the 


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Vacuum-full very slow

2007-04-25 Thread Alvaro Herrera
Steve Crawford wrote:

> So my mental-model is utterly and completely wrong. My assumption was
> that since a full vacuum requires an access exclusive lock, it would do
> the intelligent and efficient thing which would be to first compact the
> table and then recreate the indexes.

Right, it doesn't do the intelligent and efficient thing.  There are
differences though: VACUUM FULL does not need an extra copy of the table
and indexes, while CLUSTER does.

OTOH, VACUUM FULL also needs to WAL log every action, which makes it
slower; CLUSTER only calls fsync when it's done, but since it keeps the
original files around it doesn't need to involve WAL.

> Am I reading that what it actually does is to thrash around keeping
> indexes unnecessarily updated, bloating them in the process?

Yes.

> Will cluster reduce the on-disk size like vacuum does?

Yes.  And a bit more because indexes don't suffer.

> And am I the only one who thinks the cluster command is backwards -
> after all it is the table that is being reordered based on an index so:

No, you're not, which is why a new syntax has been introduced for 8.3.

-- 
Alvaro Herrera                          http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] PostgreSQL support

2007-04-25 Thread Richard Huxton

Porell, Chris wrote:


My question to the list:  Do any companies that offer professional support
services stand out from the rest?  Please let me know who you are using.


Many of the people who reply will be offering services, so it's 
difficult to be unbiased. Without knowing what your requirements are, 
there are three things that probably matter:
1. When do you need support and for how long - do you want 24/7 
worldwide or Berlin office hours? A couple of days' work, or are you 
looking for a long-term partnership? If it's short-term work, locally 
based might help your comfort-level.


2. How deep/specific are your support requirements? Various of the 
companies employ people working on developing code for PG. Now that 
doesn't necessarily mean they'll know more about real-world usage than 
someone else, but it does mean they'll have insights regarding the inner 
workings of the database.


3. Do you relate well on an individual level? I've always found it helps 
to naturally be on the same wavelength as a client. Not vital, but 
definitely helps in the early days.


Finally, check the list archives for people providing useful advice in 
areas you are interested in. That might give you some insight as to 
whether firms will be able to help you.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Vacuum-full very slow

2007-04-25 Thread Steve Crawford
Martijn van Oosterhout wrote:
> On Wed, Apr 25, 2007 at 09:36:35AM -0700, Steve Crawford wrote:
>> Hmmm, why would cluster be faster?
> 
> Basically, vacuum full moves tuples from the end to the beginning of a
> table so it can compact the table. In the process it needs to update
> all the indexes too. So you save heap space but it tends to fragment
> your index. Lots of disk writes also.
> 
> OTOH, cluster simply scans the table, sorts it, writes it out then
> rebuilds the indexes. If you've removed a lot of tuples, empirically
> it's faster.
> 
> VACUUM FULL is discouraged these days, simply because it isn't actually
> as efficient as you might expect. Better to make sure it doesn't grow
> big in the first place, and use CLUSTER to rebuild the table if you
> really need to.
> 
> Hope this helps,

So my mental model is utterly and completely wrong. My assumption was
that since a full vacuum requires an access-exclusive lock, it would do
the intelligent and efficient thing, which would be to first compact the
table and then recreate the indexes.

Am I reading that what it actually does is to thrash around keeping
indexes unnecessarily updated, bloating them in the process?

Will cluster reduce the on-disk size like vacuum does?

(

And am I the only one who thinks the cluster command is backwards -
after all it is the table that is being reordered based on an index so:

CLUSTER tablename ON indexname

seems way more intuitive than

CLUSTER indexname ON tablename

)

Cheers,
Steve


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Where to find kind code for STATISTIC_KIND GEOMETRY?

2007-04-25 Thread Tom Lane
[EMAIL PROTECTED] writes:
> I am looking for the kind code for STATISTIC_KIND GEOMETRY to calculate
> the selectivity/stats for geometry type (st_geometry). 

Presumably, the PostGIS people would be the ones to ask.

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Stored Procedure Speed

2007-04-25 Thread Tom Lane
Richard Huxton <[EMAIL PROTECTED]> writes:
> Ah, fair enough. I *am* right in thinking that trivial SQL functions 
> will have their expressions inlined though?

Yes.

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] PostgreSQL support

2007-04-25 Thread Porell, Chris
Hi everyone,

newbie here...

I have a need to get paid support services for a new postgres implementation.
I've looked at some of the companies listed at www.postgresql.org that offer
professional services.

I've perused some of the mailing list archives, but I can't seem to find the
info I need.

My question to the list:  Do any companies that offer professional support
services stand out from the rest?  Please let me know who you are using.

Thanks!
Chris

-
IMPORTANT: The sender intends that this electronic message is for
exclusive use by the person to whom it is addressed. This message
may contain information that is confidential or privileged and
exempt from disclosure under applicable law. If the reader of this
message is not an intended recipient, be aware that any disclosure,
dissemination, distribution or copying of this communication, or
the use of its contents, is prohibited. If you have received this
message in error, please immediately notify the sender of your
inadvertent receipt and delete this message from all data storage
systems. Thank you.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Stored Procedure Speed

2007-04-25 Thread Richard Huxton

Tom Lane wrote:

Richard Huxton <[EMAIL PROTECTED]> writes:
I must say I thought recent versions of PG delayed planning the query 
until first call though.


No, you're thinking of parameterized queries sent through the FE/BE
protocol.  Functions still plan without any assumptions about parameter
values.


Ah, fair enough. I *am* right in thinking that trivial SQL functions 
will have their expressions inlined though?


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Schema as versioning strategy

2007-04-25 Thread Richard Huxton

Owen Hartnett wrote:
I want to "freeze" a snapshot of the database every year (think of end 
of year tax records).  However, I want this frozen version (and all the 
previous frozen versions) available to the database user as read-only.  
My thinking is to copy the entire public schema (which is where all the 
current data lives) into a new schema, named 2007 (2008, etc.)


Sounds perfectly reasonable. You could either do it as a series of:
  CREATE TABLE archive2007.foo AS SELECT * FROM public.foo;
or do a pg_dump of schema "public", tweak the file to change the schema 
names and restore it.
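A minimal sketch of the first approach (the schema and table names here are illustrative, not from the original post):

```sql
-- Snapshot one table into a yearly archive schema and expose it read-only.
-- "archive2007" and "foo" are placeholder names; repeat the CREATE TABLE
-- for each table you want frozen.
BEGIN;
CREATE SCHEMA archive2007;
CREATE TABLE archive2007.foo AS SELECT * FROM public.foo;
REVOKE ALL ON SCHEMA archive2007 FROM PUBLIC;
GRANT USAGE ON SCHEMA archive2007 TO PUBLIC;
REVOKE ALL ON archive2007.foo FROM PUBLIC;
GRANT SELECT ON archive2007.foo TO PUBLIC;  -- read-only for everyone
COMMIT;
```

Note that CREATE TABLE ... AS copies only the data, not indexes, constraints, or defaults; the pg_dump route preserves those.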


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Stored Procedure Speed

2007-04-25 Thread Tom Lane
Richard Huxton <[EMAIL PROTECTED]> writes:
> I must say I thought recent versions of PG delayed planning the query 
> until first call though.

No, you're thinking of parameterized queries sent through the FE/BE
protocol.  Functions still plan without any assumptions about parameter
values.

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Vacuum-full very slow

2007-04-25 Thread Martijn van Oosterhout
On Wed, Apr 25, 2007 at 09:36:35AM -0700, Steve Crawford wrote:
> Hmmm, why would cluster be faster?

Basically, vacuum full moves tuples from the end to the beginning of a
table so it can compact the table. In the process it needs to update
all the indexes too. So you save heap space but it tends to fragment
your index. Lots of disk writes also.

OTOH, cluster simply scans the table, sorts it, writes it out then
rebuilds the indexes. If you've removed a lot of tuples, empirically
it's faster.

VACUUM FULL is discouraged these days, simply because it isn't actually
as efficient as you might expect. Better to make sure it doesn't grow
big in the first place, and use CLUSTER to rebuild the table if you
really need to.
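As a hedged illustration (8.1-era CLUSTER syntax; "foo" and "foo_pkey" are placeholder names):

```sql
-- Rewrite the bloated table in index order, rebuilding all its indexes
-- in the process.
CLUSTER foo_pkey ON foo;

-- Refresh planner statistics after the rewrite.
ANALYZE foo;
```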

Hope this helps,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.




Re: [GENERAL] Audit-trail engine: getting the application's layer user_id

2007-04-25 Thread Marcelo de Moraes Serpa


I'm doing it from the application layer and I don't think it can be
done in the database layer; how would the trigger figure out which user
is issuing the query? It's the same problem you are trying to solve!



Duh! That is what happens when you start having high levels of caffeine in
your blood and haven't had a good night's sleep... thanks for putting me on
the right track again.

I will study Til's approach more. This thing got more complicated than I
thought it would be. At least I'm learning more about PostgreSQL's internal
architecture.

Thank you for your help.

Marcelo.

On 4/25/07, Manuel Sugawara <[EMAIL PROTECTED]> wrote:


"Marcelo de Moraes Serpa" <[EMAIL PROTECTED]> writes:

> Hi Manuel,
>
>> each time the user sends a request I do more or less the following:
>
> Could a trigger be used to implement this ? Or are you doing this from
the
> application layer?

I'm doing it from the application layer and I don't think it can be
done in the database layer; how would the trigger figure out which user
is issuing the query? It's the same problem you are trying to solve!

> My problem is that, like Til, I don't have full control over my
> request cycle as I'm over a very high-level framework (Actually it
> is a data-oriented application generator, called GeneXus).

Maybe you can use Til's approach with temporary tables.

Regards,
Manuel.



Re: [GENERAL] Audit-trail engine: getting the application's layer user_id

2007-04-25 Thread Manuel Sugawara
"Marcelo de Moraes Serpa" <[EMAIL PROTECTED]> writes:

> Hi Manuel,
>
>> each time the user sends a request I do more or less the following:
>
> Could a trigger be used to implement this ? Or are you doing this from the
> application layer? 

I'm doing it from the application layer and I don't think it can be
done in the database layer; how would the trigger figure out which user
is issuing the query? It's the same problem you are trying to solve!

> My problem is that, like Til, I don't have full control over my
> request cycle as I'm over a very high-level framework (Actually it
> is a data-oriented application generator, called GeneXus).

Maybe you can use Til's approach with temporary tables.

Regards,
Manuel.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Audit-trail engine: getting the application's layer user_id

2007-04-25 Thread Marcelo de Moraes Serpa

Hi Manuel,

each time the user sends a request I do more or less

the following:

Could a trigger be used to implement this ? Or are you doing this from the

application layer? My problem is that, like Til, I don't have full control
over my request cycle as I'm over a very high-level framework (Actually it
is a data-oriented application generator, called GeneXus).

Thanks,

Marcelo.


On 4/25/07, Manuel Sugawara <[EMAIL PROTECTED]> wrote:


"Marcelo de Moraes Serpa" <[EMAIL PROTECTED]> writes:

> I'm sorry Manuel, but after some time trying to fully understand your
> approach, I think I really don't have the required elements to do so.
>
> How do you pass your application's username to this table? Or do you not
> keep the username at all?
>
> Could you give a more concrete example? Maybe showing the spots on your
> application where you called these functions and why?

I keep my user-names (agents) in the database along with a hashed
version of their passphrases, when a user logs in I have a procedure
written in plpgsql that checks the provided passphrase against the one
in the database and if they match the user is granted a session, and
a corresponding row is inserted in the session table. I keep the user
information (the session id and a key) in the session of the web tier
(I'm using java servlets but the concept is the same for other
frameworks). Now, each time the user sends a request I do more or less
the following:

   retrieve from the web session the id of the session in the database
   request a fresh connection from the pool
   check if the session is still alive (if not throw an exception)
   set the session id of the user
   handle the user request
   reset the session id
   return the connection to the pool

The implementation details are left to the reader ;-). Hope that helps
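A minimal SQL-side sketch of those details (every object name below is hypothetical, not from the original post):

```sql
-- Hypothetical per-request flow on a pooled connection; "app_session"
-- and "current_user_id" are illustrative names only.
SELECT 1 FROM app_session
 WHERE session_id = 'abc123' AND expires_at > now();  -- session still alive?

UPDATE current_user_id SET user_id = 42;   -- "set the session id of the user"

-- ... handle the user's request; audit triggers read current_user_id ...

UPDATE current_user_id SET user_id = NULL; -- "reset the session id"
```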

Regards,
Manuel.



[GENERAL] Schema as versioning strategy

2007-04-25 Thread Owen Hartnett


Hi:

I'm a new user of Postgresql (8.2.3), and I'm very happy with both 
the performance and operation of the system.  My compliments to the
many authors who keep this database running and useful.


My question is:

I want to "freeze" a snapshot of the database every year (think of 
end of year tax records).  However, I want this frozen version (and 
all the previous frozen versions) available to the database user as 
read-only.  My thinking is to copy the entire public schema (which is 
where all the current data lives) into a new schema, named 2007 
(2008, etc.)


Is this a valid plan?  I had thought of using a different database, 
but that would require multiple opens.  I looked to see if there were 
an easy way to script doing an exact schema copy, but I haven't found 
anything like it in the docs.


This is not heavy usage, nor is there a large amount of data (current 
pg_dump backups are around 30 megabytes).


Am I on the right track, or would you suggest a different strategy?

-Owen

Clipboard, Inc.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Vacuum-full very slow

2007-04-25 Thread Tom Lane
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> I wonder, though, if you set maintenance_work_mem too high and are
> causing the OS to swap?

AFAIR, vacuum full pays no attention to maintenance_work_mem anyway.
If the data it needs doesn't fit in memory, you lose ...

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Audit-trail engine: getting the application's layer user_id

2007-04-25 Thread Tilmann Singer
* Manuel Sugawara <[EMAIL PROTECTED]> [20070425 17:57]:
> I think you can use a plpgsql function with execute. For instance, if
> the name of your temp table is current_user_id the function will be
> something like:
> 
> create function get_current_user_id() returns int as $$
> declare
>   v_rec record;
>   v_user int;
>   v_query text;
> begin
>   v_query := 'SELECT user_id FROM current_user_id';
>   for v_rec in execute v_query loop
> v_user := v_rec.user_id;
>   end loop;
>   return v_user;
> end;   
> $$ language plpgsql;
> 
> Untested, but that's the idea: you need to use EXECUTE to avoid
> caching of the plan. You might also want to control what happens when
> the table does not exist; that can be done by handling the
> corresponding exception. Check the docs for the details.

Excellent, that works exactly as needed! I will rely on the permanent
table being there always to provide the default value if no temporary
table has been created.


Thanks! Til

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Vacuum-full very slow

2007-04-25 Thread Steve Crawford
> You could try CLUSTER instead of VACUUM FULL, as I think it should be
> faster.  And the indexes will be devoid of any bloat, which will be a
> nice side effect.
> 
> I wonder, though, if you set maintenance_work_mem too high and are
> causing the OS to swap?
> 

Hmmm, why would cluster be faster?

No swapping - "top" shows swap mem of 3MB used and that wasn't changing.
Just to be sure I ran "swapoff -a ; swapon -a" which brought it back to
zero and it's not budging from there.

Cheers,
Steve

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Vacuum-full very slow

2007-04-25 Thread Alvaro Herrera
Steve Crawford wrote:
> I'm in the process of archiving data on one of my PG machines. After
> backing up the data, I delete the old records and then run a "vacuum
> full" on each table.
> 
> I'm vacuuming the first table now and it is taking much longer than I
> expected (I'm now past the 2-hour mark). Some info:
> 
> Version: 8.1.2
> On-disk table size: ~1.9GB
> Records deleted from the table: 10,290,892 (~60% of records)
> Physical memory: 2GB
> Connection maintenance_work_mem: 1GB
> Table indexes: 7
> CPU: Intel(R) Pentium(R) 4 CPU 3.00GHz
> Disk: 2x200GB SATA as RAID-1 using 3-ware card
> 
> The vacuum full is the only significant load on the server at the moment
> (PG or otherwise). IO is probably the bottleneck as CPU is running near
> 50% idle and 40% wait-state, with PG's CPU use in the 5-15% range.

You could try CLUSTER instead of VACUUM FULL, as I think it should be
faster.  And the indexes will be devoid of any bloat, which will be a
nice side effect.

I wonder, though, if you set maintenance_work_mem too high and are
causing the OS to swap?

-- 
Alvaro Herrera                                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Audit-trail engine: getting the application's layer user_id

2007-04-25 Thread Manuel Sugawara
Tilmann Singer <[EMAIL PROTECTED]> writes:

> Can I define a view which references a table in a way so that it will
> use a temporary table of the same name if it exists, otherwise the
> permanent table with that name?

I think you can use a plpgsql function with execute. For instance, if
the name of your temp table is current_user_id the function will be
something like:

create function get_current_user_id() returns int as $$
declare
  v_rec record;
  v_user int;
  v_query text;
begin
  v_query := 'SELECT user_id FROM current_user_id';
  for v_rec in execute v_query loop
v_user := v_rec.user_id;
  end loop;
  return v_user;
end;   
$$ language plpgsql;


Untested, but that's the idea: you need to use EXECUTE to avoid
caching of the plan. You might also want to control what happens when
the table does not exist; that can be done by handling the
corresponding exception. Check the docs for the details.
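One untested way to handle the missing-table case mentioned above is to trap the undefined_table condition:

```sql
-- Variant of the function above that falls back to NULL when neither a
-- temporary nor a permanent current_user_id table is visible.
create or replace function get_current_user_id() returns int as $$
declare
  v_rec record;
  v_user int;
begin
  -- EXECUTE avoids caching a plan against whichever table resolved first
  for v_rec in execute 'SELECT user_id FROM current_user_id' loop
    v_user := v_rec.user_id;
  end loop;
  return v_user;
exception
  when undefined_table then
    return null;  -- no table visible: no application user set
end;
$$ language plpgsql;
```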

Regards,
Manuel.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] FOREIGN KEY CONSTRAINT AND INHERITANCE

2007-04-25 Thread A. Kretschmer
am  Wed, dem 25.04.2007, um 19:25:46 +0400 mailte Sergey Karin folgendes:
> vka7=# insert into style_type_object values(1, 2, false, 0, 0);
> ERROR:  insert or update on table "style_type_object" violates foreign key
> constraint "fk_id_style"
> DETAIL:  Key (id_style)=(2) is not present in table "style".

Read my answer yesterday in the thread "inherits and foreign key
problem", this mailinglist.

Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Vacuum-full very slow

2007-04-25 Thread Steve Crawford
I'm in the process of archiving data on one of my PG machines. After
backing up the data, I delete the old records and then run a "vacuum
full" on each table.

I'm vacuuming the first table now and it is taking much longer than I
expected (I'm now past the 2-hour mark). Some info:

Version: 8.1.2
On-disk table size: ~1.9GB
Records deleted from the table: 10,290,892 (~60% of records)
Physical memory: 2GB
Connection maintenance_work_mem: 1GB
Table indexes: 7
CPU: Intel(R) Pentium(R) 4 CPU 3.00GHz
Disk: 2x200GB SATA as RAID-1 using 3-ware card

The vacuum full is the only significant load on the server at the moment
(PG or otherwise). IO is probably the bottleneck as CPU is running near
50% idle and 40% wait-state, with PG's CPU use in the 5-15% range.

So

What amount of time might I expect to wait for this process to complete?

Should I be changing settings other than maintenance_work_mem?

What is the effect of the indexes and would dropping and recreating them
help?

Other suggestions?

Cheers,
Steve






---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] reasonable limit to number of schemas in a database?

2007-04-25 Thread Ben
I currently am using a normal system like the one you suggest, in  
which every user puts their data into a single schema, and uses keys  
to keep things separate. The problem comes in database upgrades.  
They're not common, but as I ramp up the number of users, it becomes  
increasingly infeasible to upgrade everybody at once. But everybody  
using the same schema has to be on the same schema version.


Each session will probably touch most if not all of the tables  
eventually, but will only touch a dozen or so from each schema with  
any regularity.


Is the 300k files/directory my only real bottleneck, or should I  
worry about catalog cache and lock table space too? How would I  
overcome those last two?


On Apr 24, 2007, at 10:14 PM, Tom Lane wrote:

The number of schemas doesn't scare me so much as the number of tables.

Are you using a filesystem that can cope gracefully with 300K files in
one directory?  How many of these tables do you anticipate any one
session touching?  (That last translates to catalog cache and lock
table space...)

Generally, when someone proposes a scheme like this, they are thinking
that N identical tables are somehow better than one table with an
additional key column.  The latter is usually the better design, unless
you have special requirements you didn't mention.

regards, tom lane



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Audit-trail engine: getting the application's layer user_id

2007-04-25 Thread Tilmann Singer
* Manuel Sugawara <[EMAIL PROTECTED]> [20070425 17:14]:
> > So the view will reference the original table and not the temporary
> > table. Is there a way to achieve this kind of transaction local
> > setting? A transaction based solution would give more security in a
> > situation where a web app server uses a connection pool and you can
> > not guarantee 100% that your reset code is called properly at request
> > init.
> 
> Nope, you do the reset part *at the end* of the request cycle:
> 
>set session id
>handle request
>reset session id
> 
> So, you can guarantee that the id of the session is reset and the
> connection properly disposed to the pool.

If you have full confidence in having control over the request cycle
in your app server, yes.

However, I'm using Ruby on Rails' ActiveRecord and am not 100%
familiar with the way it uses connection pooling and request setup, so
I would feel safer if I could set up something like a temporary table
with ON COMMIT DROP to be sure my app user id will only be visible
from within the transaction that it was set in, and that a value set
can not leak to another web request that reuses the previous db
connection.
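The temp-table idea sketched in SQL (untested; assuming a table named current_user_id, as elsewhere in this thread):

```sql
BEGIN;
-- Shadows any permanent table of the same name for this transaction only,
-- since pg_temp is searched first.
CREATE TEMPORARY TABLE current_user_id (user_id int) ON COMMIT DROP;
INSERT INTO current_user_id VALUES (42);

-- ... audited statements run here; triggers can read current_user_id ...

COMMIT;  -- the temporary table is dropped automatically
```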

Rephrasing the question:

Can I define a view which references a table in a way so that it will
use a temporary table of the same name if it exists, otherwise the
permanent table with that name?



Til

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] reg:bitmap index

2007-04-25 Thread sangeetha k.s

Hello,
Thanks for your reply.
Will it be possible for me to get the simple bitmap-index
implementation code now? I went through the on-disk implementation
code but I can't understand it properly. I want a simplified
implementation of it. Could you kindly help me out?

Sangeetha.K.S.


Re: [GENERAL] Regarding WAL

2007-04-25 Thread Simon Riggs
On Tue, 2007-04-24 at 11:31 +0200, Alexander Staubo wrote:
> On 4/24/07, Mageshwaran <[EMAIL PROTECTED]> wrote:
> > I want to do replication using WAL , please tell the methods by which
> > log shipping is done ie moving the wal files to slaves and executing it.
> 
> Not possible at the moment: the log shipping facility that was
> introduced in 8.2 only lets you set up a so-called warm standby, which
> cannot be queried; it's not live replication.
> 
> The warm standby system is a fairly crude hack that relies on WAL
> files being copied from the main server to the standby and then
> starting the standby in recovery mode when you want to bring it up. At
> this point the standby is your main database, and it can no longer apply WAL
> files. Documented here:
> 
>  http://www.postgresql.org/docs/8.2/static/warm-standby.html

The use-case for Warm Standby is for people that want a simple,
efficient mechanism for providing High Availability replication.

If there are any feature requests, please let me know. 

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] FOREIGN KEY CONSTRAINT AND INHERITANCE

2007-04-25 Thread Sergey Karin

Hi, All!

I have the tables as defined:

create table style
(
  ID_STYLE INT4 not null,
  STYLE_STRING VARCHAR  not null,
  constraint PK_STYLE primary key (ID_STYLE)
);

create table style_type_object
(
  ID_TYPE_OBJECT   INT4 not null,
  ID_STYLE INT4 not null,
  ZOOMABLE BOOL not null default false,
  START_SCALE  INT4 not null,
  END_SCALEINT4 not null
);

alter table STYLE_TYPE_OBJECT
  add constraint FK_ID_STYLE foreign key (ID_STYLE)
 references STYLE (ID_STYLE)
 on delete restrict on update restrict;

create table style_1
(
   constraint PK_STYLE_1 primary key (id_style),
   check (id_style between 1 and 1)
) inherits (style);


create rule r_style_1 as on insert to style
where (id_style between 1 and 1)
do instead
 insert into style_1 (id_style, style_string)
 values(new.id_style, new.style_string);


insert into style values (2,  'PEN(w:1px)' );

vka7=# select id_style, style_string from style;
 id_style | style_string
----------+--------------
        2 | PEN(w:1px)


vka7=# insert into style_type_object values(1, 2, false, 0, 0);
ERROR:  insert or update on table "style_type_object" violates foreign key
constraint "fk_id_style"
DETAIL:  Key (id_style)=(2) is not present in table "style".


THE QUESTION IS: are there any abilities to check foreign key constraint not
only in master table, but also in all tables that inherited the master?

Thanks in advance.

Sergey Karin


[GENERAL] reg: bitmap index implementation

2007-04-25 Thread sangeetha k.s

Hello sir,
Does PostgreSQL support bitmap indexing for indexing the
data tables?
Will it be possible to get the source for that?

Sangeetha.K.S


Re: [GENERAL] Audit-trail engine: getting the application's layer user_id

2007-04-25 Thread Manuel Sugawara
Tilmann Singer <[EMAIL PROTECTED]> writes:

> So the view will reference the original table and not the temporary
> table. Is there a way to achieve this kind of transaction local
> setting? A transaction based solution would give more security in a
> situation where a web app server uses a connection pool and you can
> not guarantee 100% that your reset code is called properly at request
> init.

Nope, you do the reset part *at the end* of the request cycle:

   set session id
   handle request
   reset session id

So, you can guarantee that the id of the session is reset and the
connection properly disposed to the pool.

Regards,
Manuel.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] pg_connect sometimes works sometimes not

2007-04-25 Thread Martijn van Oosterhout
On Wed, Apr 25, 2007 at 04:50:46PM +0200, [EMAIL PROTECTED] wrote:
> Richard Huxton wrote:
> >[EMAIL PROTECTED] wrote:
> >>What other variable reasons (which are sometimes there, sometimes not)
> >>can there be for pg_connect to fail?
> >
> >What result-code/error do you get? What do your logs show?
> 
> I have -
> 
> $connection = pg_connect("$host $db $user $pass");
> 
> When I get the error it is because $connection is 'false'.
> 
> Thought of using pg_result_error - but it seems applicable only to 
> queries (i.e. with pg_query or pg_execute). How can I get an error code?

In the C interface it's called PQerrorMessage(). I'm sure PHP has an
equivalent.

Have a nice day,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.




Re: [GENERAL] Audit-trail engine: getting the application's layer user_id

2007-04-25 Thread Manuel Sugawara
"Marcelo de Moraes Serpa" <[EMAIL PROTECTED]> writes:

> I'm sorry Manuel, but after some time trying to fully understand your
> approach, I think I really don't have the required elements to do so.
>
> How do you pass your application's username to this table? Or do you not keep
> the username at all?
>
> Could you give a more concrete example? Maybe showing the spots on your
> application where you called these functions and why?

I keep my user-names (agents) in the database along with a hashed
version of their passphrases, when a user logs in I have a procedure
written in plpgsql that checks the provided passphrase against the one
in the database and if they match the user is granted a session, and
a corresponding row is inserted in the session table. I keep the user
information (the session id and a key) in the session of the web tier
(I'm using java servlets but the concept is the same for other
frameworks). Now, each time the user sends a request I do more or less
the following:

   retrieve from the web session the id of the session in the database
   request a fresh connection from the pool
   check if the session is still alive (if not throw an exception)
   set the session id of the user
   handle the user request
   reset the session id
   return the connection to the pool

The implementation details are left to the reader ;-). Hope that helps

Regards,
Manuel.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] pg_connect sometimes works sometimes not

2007-04-25 Thread [EMAIL PROTECTED]

Richard Huxton wrote:
> Did you try pg_last_error()?

No. Will try now.

> Are you logging connection attempts/failures? Details in the manuals.

Understood.

Thank you very much!
Iv

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] pg_connect sometimes works sometimes not

2007-04-25 Thread Raymond O'Donnell

On 25/04/2007 15:50, [EMAIL PROTECTED] wrote:

Thought of using pg_result_error - but it seems applicable only to 
queries (i.e. with pg_query or pg_execute). How can I get an error code?


pg_last_error()

http://www.php.net/manual/en/function.pg-last-error.php

Ray.

---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] [HACKERS] Kill a Long Running Query

2007-04-25 Thread Heikki Linnakangas
Please don't cross-post to multiple mailing lists. And pgsql-hackers is 
not the correct list for basic usage questions. And long end-of-mail 
disclaimers are not generally appreciated.


Mageshwaran wrote:
Can anybody tell me how to kill a long-running query in PostgreSQL? Is
there any statement to kill a query?


See the user manual on administration functions, pg_cancel_backend in 
particular:


http://www.postgresql.org/docs/8.2/interactive/functions-admin.html

Basically you issue a "SELECT * FROM pg_stat_activity", or plain ps to 
find out the pid of the backend executing the long running query, and 
then use pg_cancel_backend (or kill -INT) to cancel it.
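For instance (untested sketch; 12345 stands in for whatever pid you find, and the column names are the pre-9.2 ones):

```sql
-- Identify the long-running backend.
SELECT procpid, current_query, query_start
  FROM pg_stat_activity
 ORDER BY query_start;

-- Ask that backend to cancel its current query.
SELECT pg_cancel_backend(12345);
```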



and also tell me how to log slow queries to a log file.


Using the log_min_duration_statement configuration variable.

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] pg_connect sometimes works sometimes not

2007-04-25 Thread Richard Huxton

[EMAIL PROTECTED] wrote:

Richard Huxton wrote:

[EMAIL PROTECTED] wrote:
What other variable reasons (which are sometimes there, sometimes not)
can there be for pg_connect to fail?


What result-code/error do you get? What do your logs show?


I have -

$connection = pg_connect("$host $db $user $pass");

When I get the error it is because $connection is 'false'.

Thought of using pg_result_error - but it seems applicable only to 
queries (i.e. with pg_query or pg_execute). How can I get an error code?


Did you try pg_last_error()?

PostgreSQL is instructed to log into the syslog, which is 
/var/log/messages. There are only two type of things there from today -


[1-1] WARNING: nonstandard use of \\ in a string literal at character XXX

[1-2] HINT: Use the escape string syntax for backslashes, e.g., E'\\'.

But it does not seem like any of these are related to pg_connect, or am 
I wrong (I guess they are related to bad code somewhere).


Are you logging connection attempts/failures? Details in the manuals.

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] [HACKERS] Kill a Long Running Query

2007-04-25 Thread Andrew Dunstan

Mageshwaran wrote:

Hi ,
Can anybody tell me how to kill a long-running query in PostgreSQL? Is 
there any statement to kill a query? And can you also tell me how to log 
slow queries to a log file?





First, please do not cross-post like this. Pick the correct list and use it.

Second, this query definitely does not belong on the -hackers list.

Third, please find a way of posting to lists that does not include a 
huge disclaimer and advertisements. If that is added by your company's 
mail server, you should look at using some other method of posting such 
as gmail.


Fourth, please read our excellent documentation. It contains the answers 
to your questions, I believe.


cheers

andrew

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] WAL files, warm spares and minor versions

2007-04-25 Thread Simon Riggs
On Tue, 2007-04-24 at 12:57 -0500, Michael Nolan wrote:
> Can WAL files be used to create/update a warm standby on a different
> minor version of PostgreSQL (eg, using files from a server running
> 8.2.3 on an 8.2.4 server, or vice-versa?)
> 
> I suspect this is a FAQ, but I didn't see it in the docs on WALs and
> PITR (section 23.3)

The docs for Warm Standby are in 23.4

"It is the policy of the PostgreSQL Global Development Group not to make
changes to disk formats during minor release upgrades, so it is likely
that running different minor release levels on primary and standby
servers will work successfully. However, no formal support for that is
offered and you are advised to keep primary and standby servers at the
same release level as much as possible. When updating to a new minor
release, the safest policy is to update the standby servers first — a
new minor release is more likely to be able to read WAL files from a
previous minor release than vice versa."

http://developer.postgresql.org/pgdocs/postgres/warm-standby.html

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] [ADMIN] Kill a Long Running Query

2007-04-25 Thread Aaron Bono

On 4/25/07, Mageshwaran <[EMAIL PROTECTED]> wrote:


Hi ,
Can anybody tell me how to kill a long-running query in PostgreSQL? Is
there any statement to kill a query? And can you also tell me how to log
slow queries to a log file?

Regards
J Mageshwaran




See if this helps:
http://archives.postgresql.org/pgsql-hackers-win32/2004-12/msg00039.php




--
==
  Aaron Bono
  Aranya Software Technologies, Inc.
  http://www.aranya.com
  http://codeelixir.com
==


Re: [GENERAL] [HACKERS] Kill a Long Running Query

2007-04-25 Thread Richard Huxton

Mageshwaran wrote:

Hi,
Can anybody tell me how to kill a long-running query in PostgreSQL? Is
there any statement to kill a query? Also, how can I log slow queries to
a log file?


Oh, and please don't post to several lists at once - this clearly isn't 
a hacker-related question.

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] pg_connect sometimes works sometimes not

2007-04-25 Thread [EMAIL PROTECTED]

Richard Huxton wrote:

[EMAIL PROTECTED] wrote:
What other variable factors (present sometimes but not always) could
cause pg_connect to fail?


What result-code/error do you get? What do your logs show?


I have -

$connection = pg_connect("$host $db $user $pass");

When I get the error it is because $connection is 'false'.

Thought of using pg_result_error - but it seems applicable only to 
queries (i.e. with pg_query or pg_execute). How can I get an error code?


PostgreSQL is instructed to log into the syslog, which is 
/var/log/messages. There are only two type of things there from today -


[1-1] WARNING: nonstandard use of \\ in a string literal at character XXX

[1-2] HINT: Use the escape string syntax for backslashes, e.g., E'\\'.

But it does not seem like any of these are related to pg_connect, or am 
I wrong (I guess they are related to bad code somewhere).


Thank you,
Iv


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Kill a Long Running Query

2007-04-25 Thread A. Kretschmer
am  Wed, dem 25.04.2007, um 20:03:37 +0530 mailte Mageshwaran folgendes:
> Hi,
> Can anybody tell me how to kill a long-running query in PostgreSQL? Is
> there any statement to kill a query? Also, how can I log slow queries
> to a log file?

pg_cancel_backend(pid int) to kill. You need to know the pid, you can
obtain the pid from pg_stat_activity.

To log slow queries you can set log_min_duration_statement to a proper
value, in ms. All queries with an execution time longer than that are logged.
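A minimal sketch of both steps (the pid 12345 is hypothetical; look it up first, and note the column names shown are those of the 8.x pg_stat_activity view):

```sql
-- Find the backend running the long query.
SELECT procpid, current_query FROM pg_stat_activity;

-- Cancel the query running in backend 12345 (hypothetical pid from above).
SELECT pg_cancel_backend(12345);
```

For logging, set e.g. log_min_duration_statement = 1000 in postgresql.conf and reload the server; statements taking longer than one second are then written to the log.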


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Stored Procedure Speed

2007-04-25 Thread Richard Huxton

Scott Schulthess wrote:

Hey Ya'll,

I'm a little puzzled by the speed of the stored procedures I am writing.

Here is the query alone in pgAdmin

select distinct featuretype from gnis_placenames where state='CT'
TIME: 312+16ms

 


Here is a stored procedure

create or replace function getfeaturetypes(text) returns setof text as
$$
select distinct featuretype from gnis_placenames where state=$1;
$$ language sql;

TIME: 2391+15ms


Basically, the planner has more information with the hard-coded example. 
It should know enough to come up with different plans for 'CT' and 'XX'. 
Functions (and this varies per-language, but plpgsql is the usual 
culprit) cache their query-plans, so you end up with "one size fits all".


You can see what plan it comes up with by using PREPARE ...  
followed by EXPLAIN EXECUTE ...
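A sketch of that check against the table from the post (the statement name getft is arbitrary):

```sql
PREPARE getft(text) AS
    SELECT DISTINCT featuretype FROM gnis_placenames WHERE state = $1;

-- Shows the generic, parameterized plan a SQL function would get.
EXPLAIN EXECUTE getft('CT');

DEALLOCATE getft;
```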


I must say I thought recent versions of PG delayed planning the query 
until first call though.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] [DOCS] Incrementally Updated Backups: Docs Clarification

2007-04-25 Thread Simon Riggs
On Thu, 2007-04-19 at 15:48 -0500, Thomas F. O'Connell wrote:

> "If we take a backup of the standby server's files while it is
> following logs shipped from the primary, we will be able to reload
> that data and restart the standby's recovery process from the last
> restart point. We no longer need to keep WAL files from before the
> restart point. If we need to recover, it will be faster to recover
> from the incrementally updated backup than from the original base
> backup."
> 
> 
> I'm specifically confused about the meaning of the following phrases:
> 
> 
> "backup of the standby server's files" - Which files?

the files that make up the database server:
- data directory
- all tablespace directories

> "reload that data" - What does this mean in postgres terms?

copy back from wherever you put them in the first place

"that data" referring to the "files that make up the db server"

> "last restart point" - What is this? Wouldn't it be able to restart
> from the last recovered file, which would presumably occur later than
> the last restart point?

No, we don't restart file-by-file. 

http://developer.postgresql.org/pgdocs/postgres/continuous-archiving.html#BACKUP-PITR-RECOVERY

"If recovery finds a corruption in the WAL..." onwards explains the
restart mechanism. It's much like checkpointing: we don't restart
from the last log file, we restart from a point possibly many log files
in the past.

> Does this mean make a filesystem backup of the standby server's data
> directory while it's stopped, and then start it again with that data
> and the restricted set of WAL files needed to continue recovery? 

No need to stop the server. Where did you read that you need to do that?

> I'd like to see the language here converted to words that have more
> meaning in the context of postgres. I'd be happy to attempt a revision
> of this section once I'm able to complete an incrementally updated
> backup successfully.

Feel free to provide updates that make it clearer.

> Here's how I envision it playing out in practice:
> 
> 
> 1. stop standby postgres server
> 2. [optional] preserve data directory, remove unnecessary WAL files
> 3. restart standby server

step 2 only. 

Clearly not an optional step, since it's a one-stage process. :-)

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] query from a list of ids

2007-04-25 Thread Andrei Kovalevski

   You can try this one:

   SELECT
       table2.*
   FROM
       (SELECT string_to_array(ids, ', ')
        FROM table1 WHERE name = 'Peter') AS a(a),
       (SELECT generate_series(1, array_upper(string_to_array(ids, ', '), 1) + 1, 1)
        FROM table1 WHERE name = 'Peter') c(n),
       table2
   WHERE
       table2.id = a[c.n]

finecur wrote:

Hi,

Here is my first table:

Table1

name| ids
-
Peter| 2, 3, 4, 5
Jack| 100, 34, 3

Both name and ids are in text format.

Here is my second table

Table2

id | Flag | Title
-
2 | Red| good
3 | Blue   | poor
4 | Green| middle

id is in integer (serial) format.

I would like to list all the rows in table 2 where the id is in the
ids field of peter. So I did

select * from tables where id in (select ids from table1 where
name='Peter')

It did not work. How can I do the query?

Thanks,

ff


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
  



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] pg_connect sometimes works sometimes not

2007-04-25 Thread Richard Huxton

[EMAIL PROTECTED] wrote:
What other variable factors (present sometimes but not always) could
cause pg_connect to fail?


What result-code/error do you get? What do your logs show?

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] Kill a Long Running Query

2007-04-25 Thread Mageshwaran

Hi,
Can anybody tell me how to kill a long-running query in PostgreSQL? Is
there any statement to kill a query? Also, how can I log slow queries to
a log file?


Regards
J Mageshwaran

** DISCLAIMER **
Information contained and transmitted by this E-MAIL is proprietary to 
Sify Limited and is intended for use only by the individual or entity to 
which it is addressed, and may contain information that is privileged, 
confidential or exempt from disclosure under applicable law. If this is a 
forwarded message, the content of this E-MAIL may not have been sent with 
the authority of the Company. If you are not the intended recipient, an 
agent of the intended recipient or a  person responsible for delivering the 
information to the named recipient,  you are notified that any use, 
distribution, transmission, printing, copying or dissemination of this 
information in any way or in any manner is strictly prohibited. If you have 
received this communication in error, please delete this mail & notify us 
immediately at [EMAIL PROTECTED]





---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Audit-trail engine: getting the application's layer user_id

2007-04-25 Thread Tilmann Singer
* Manuel Sugawara <[EMAIL PROTECTED]> [20070425 00:17]:
> I solved the problem using a C program and keeping all the information
> in the database, that means, users, passwords and ``sessions''. Each
> time a user opens a session the system register it in a table that
> looks like:

This looks very useful, thanks!

Do you know if there is a way to set such a variable for a transaction
only?

I thought it might work by creating a temporary table, which will
overlay a non-temporary table with the same name, so there could be a
permanent table with the default value and a temporary table with the
transaction specific user_id:


test=# create table current_application_user (user_id int);
CREATE TABLE
test=# insert into current_application_user values (NULL); -- the default
INSERT 0 1
test=# select user_id from current_application_user ;
 user_id
-

(1 row)
test=# begin;
BEGIN
test=# create temporary table current_application_user (user_id int) on commit 
drop;
CREATE TABLE
test=# insert into current_application_user values (1); -- the current 
application user
INSERT 0 1
test=# select user_id from current_application_user ;
 user_id
-
   1
(1 row)

test=# commit;
COMMIT
test=# select user_id from current_application_user ;
 user_id
-

(1 row)


But is it possible to create a database view that accesses the value
of that temporary table when present and otherwise the value of the
default table? I tried the following, but apparently the view
definition will contain a reference to the public schema as soon as a
temporary table with the same name is present:


test=# create table some_content (body text, owner_id int);
CREATE TABLE
test=# create view some_content_restricted as select * from some_content where 
owner_id=(select user_id from current_application_user );
CREATE VIEW
test=# \d some_content_restricted
View "public.some_content_restricted"
  Column  |  Type   | Modifiers
--+-+---
 body | text|
 owner_id | integer |
View definition:
 SELECT some_content.body, some_content.owner_id
   FROM some_content
  WHERE some_content.owner_id = (( SELECT current_application_user.user_id
   FROM current_application_user));

test=# begin;
BEGIN
test=# create temporary table current_application_user (user_id int) on commit 
drop;
CREATE TABLE
test=# \d some_content_restricted
View "public.some_content_restricted"
  Column  |  Type   | Modifiers
--+-+---
 body | text|
 owner_id | integer |
View definition:
 SELECT some_content.body, some_content.owner_id
   FROM some_content
  WHERE some_content.owner_id = (( SELECT current_application_user.user_id
   FROM public.current_application_user));



So the view will reference the original table and not the temporary
table. Is there a way to achieve this kind of transaction local
setting? A transaction based solution would give more security in a
situation where a web app server uses a connection pool and you can
not guarantee 100% that your reset code is called properly at request
init.
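One transaction-scoped alternative (a sketch, not from the thread: it uses the custom-GUC mechanism mentioned above, which on 8.x requires something like custom_variable_classes = 'myapp' in postgresql.conf) is set_config() with is_local = true:

```sql
BEGIN;
-- Third argument true = the setting is local to this transaction only.
SELECT set_config('myapp.user_id', '42', true);
SELECT current_setting('myapp.user_id');   -- '42' inside the transaction
COMMIT;
-- After COMMIT (or ROLLBACK) the setting reverts to its previous value.
```

This sidesteps the connection-pool reset problem, because the value cannot leak past the end of the transaction.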


tia, Til

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Stored Procedure Speed

2007-04-25 Thread Scott Schulthess
Hey Ya'll,

 

I'm a little puzzled by the speed of the stored procedures I am writing.


 

Here is the query alone in pgAdmin

 

select distinct featuretype from gnis_placenames where state='CT'

TIME: 312+16ms

 

Here is a stored procedure

 

create or replace function getfeaturetypes(text) returns setof text as
$$

select distinct featuretype from gnis_placenames where state=$1;

$$ language sql;

TIME: 2391+15ms

 

Now if I hardcode the stored procedure with the input

create or replace function getfeaturetypes(text) returns setof text as
$$

select distinct featuretype from gnis_placenames where state='CT';

$$ language sql;

TIME: 312+16ms

 

I also tried plPgsql

 

CREATE OR REPLACE FUNCTION  getfeaturetypes(text) returns setof text as
$$

declare r record;

begin

for r in SELECT featuretype as text from gnis_placenames where state=$1
group by featuretype order by featuretype asc

LOOP

return next r.text;

END LOOP;

return;

end;

$$ language plpgsql;

grant execute on function getfeaturetypes(text) to tzuser;

TIME: 2609+16ms

 

What gives? How can I speed up this stored procedure?  

 

-Scott



Re: [GENERAL] Audit-trail engine: getting the application's layer user_id

2007-04-25 Thread Marcelo de Moraes Serpa

I'm sorry Manuel, but after some time trying to fully understand your
approach, I think I really don't have the required elements to do so.

How do you pass your application's username to this table? Or do you not
keep the username at all?

Could you give a more concrete example? Maybe showing the spots on your
application where you called these functions and why?


At least, for the C shared library compiling on Windows, I think I'm
half-way done - I've found a really useful comment on a PostgreSQL manual
page teaching how to compile PostgreSQL modules under Windows - you can see
it here: http://www.postgresql.org/docs/8.0/interactive/xfunc-c.html

Thank you again.

Marcelo.


On 4/24/07, Manuel Sugawara <[EMAIL PROTECTED]> wrote:


"Marcelo de Moraes Serpa" <[EMAIL PROTECTED]> writes:

> Hey guys,
>
> Mine is an web application - three tier. The app connects to the db
using
> only one user and it has it's own authentication system and doesn't
> rely on the database for user management.

I solved the problem using a C program and keeping all the information
in the database, that means, users, passwords and ``sessions''. Each
time a user opens a session the system register it in a table that
looks like:

auth.session
                                Tabla «auth.session»
    Columna    |            Tipo             |                 Modificadores
---------------+-----------------------------+----------------------------------------------------------------
 id            | integer                     | not null default nextval(('auth.session_sid'::text)::regclass)
 skey          | text                        | not null
 agent_id      | integer                     | not null
 host          | text                        | not null default 'localhost'::text
 start_time    | timestamp without time zone | not null default now()
 end_time      | timestamp without time zone |
 su_session_id | integer                     |
Índices:
    «session_pkey» PRIMARY KEY, btree (id)
Restricciones de llave foránea:
    «$1» FOREIGN KEY (agent_id) REFERENCES auth.agent(id)
    «session_su_session_id_fkey» FOREIGN KEY (su_session_id) REFERENCES auth."session"(id)

Each time the application gets a connection from the pool it sets the
session id of that user in a static variable (that was the tricky
part) using a function set_session_id and then you can retrieve it
using another function current_session_id (nowadays I think that can
be done without C at all but using the new GUC infrastructure
). So you can put in your log table something like:


  session_id int not null default current_session_id() references
auth.session(id),


Finally before returning the connection to the pool the application
resets the session id of that user using reset_session_id. The code
is:

#include "postgres.h"

#include "fmgr.h"

static int session_id = 0;
static int session_id_is_set = false;

Datum set_session_id(PG_FUNCTION_ARGS);
Datum current_session_id(PG_FUNCTION_ARGS);
Datum reset_session_id(PG_FUNCTION_ARGS);

PG_FUNCTION_INFO_V1(set_session_id);
PG_FUNCTION_INFO_V1(current_session_id);
PG_FUNCTION_INFO_V1(reset_session_id);

Datum
set_session_id(PG_FUNCTION_ARGS) {
session_id = PG_GETARG_INT32(0);
session_id_is_set = true;
PG_RETURN_INT32(session_id);
}

Datum
current_session_id(PG_FUNCTION_ARGS) {
if (! session_id_is_set)
PG_RETURN_NULL();
PG_RETURN_INT32(session_id);
}

Datum
reset_session_id(PG_FUNCTION_ARGS) {
session_id_is_set = false;
PG_RETURN_BOOL(1);
}

Hope that helps.

Regards,
Manuel.



[GENERAL] pg_connect sometimes works sometimes not

2007-04-25 Thread [EMAIL PROTECTED]

Hello,

we migrated PHP code from FreeBSD 5.x, PostgreSQL 8.x and PHP 4.x to the 
latest versions of these, keeping the configuration options. Now 
pg_connect has started to fail at irregular intervals for no obvious reason. 
Before, we had a problem with the number of connections, but we monitor 
them and they are fewer than 10 (out of 100 available).


What other variable factors (present sometimes but not always) could
cause pg_connect to fail?


Thank you,
Iv

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Audit-trail engine: getting the application's layer user_id

2007-04-25 Thread Marcelo de Moraes Serpa

Hi Manuel, just a quick question: What C libraries do I need to compile this
function? Or better: Where can I find a reference manual about db stored
procedures written in C for PostgreSQL?

Thanks!

On 4/24/07, Manuel Sugawara <[EMAIL PROTECTED]> wrote:


"Marcelo de Moraes Serpa" <[EMAIL PROTECTED]> writes:

> Hey guys,
>
> Mine is an web application - three tier. The app connects to the db
using
> only one user and it has it's own authentication system and doesn't
> rely on the database for user management.

I solved the problem using a C program and keeping all the information
in the database, that means, users, passwords and ``sessions''. Each
time a user opens a session the system register it in a table that
looks like:

auth.session
                                Tabla «auth.session»
    Columna    |            Tipo             |                 Modificadores
---------------+-----------------------------+----------------------------------------------------------------
 id            | integer                     | not null default nextval(('auth.session_sid'::text)::regclass)
 skey          | text                        | not null
 agent_id      | integer                     | not null
 host          | text                        | not null default 'localhost'::text
 start_time    | timestamp without time zone | not null default now()
 end_time      | timestamp without time zone |
 su_session_id | integer                     |
Índices:
    «session_pkey» PRIMARY KEY, btree (id)
Restricciones de llave foránea:
    «$1» FOREIGN KEY (agent_id) REFERENCES auth.agent(id)
    «session_su_session_id_fkey» FOREIGN KEY (su_session_id) REFERENCES auth."session"(id)

Each time the application gets a connection from the pool it sets the
session id of that user in a static variable (that was the tricky
part) using a function set_session_id and then you can retrieve it
using another function current_session_id (nowadays I think that can
be done without C at all, using the new GUC infrastructure). So you can
put in your log table something like:


  session_id int not null default current_session_id() references
auth.session(id),


Finally before returning the connection to the pool the application
resets the session id of that user using reset_session_id. The code
is:

#include "postgres.h"

#include "fmgr.h"

static int session_id = 0;
static int session_id_is_set = false;

Datum set_session_id(PG_FUNCTION_ARGS);
Datum current_session_id(PG_FUNCTION_ARGS);
Datum reset_session_id(PG_FUNCTION_ARGS);

PG_FUNCTION_INFO_V1(set_session_id);
PG_FUNCTION_INFO_V1(current_session_id);
PG_FUNCTION_INFO_V1(reset_session_id);

Datum
set_session_id(PG_FUNCTION_ARGS) {
session_id = PG_GETARG_INT32(0);
session_id_is_set = true;
PG_RETURN_INT32(session_id);
}

Datum
current_session_id(PG_FUNCTION_ARGS) {
if (! session_id_is_set)
PG_RETURN_NULL();
PG_RETURN_INT32(session_id);
}

Datum
reset_session_id(PG_FUNCTION_ARGS) {
session_id_is_set = false;
PG_RETURN_BOOL(1);
}

Hope that helps.

Regards,
Manuel.



Re: [GENERAL] Audit-trail engine: getting the application's layer user_id

2007-04-25 Thread Marcelo de Moraes Serpa

Hey guys, I really appreciate your help, thank you very much for your time.

@Manuel: What a comprehensive solution! Thanks a lot for that :)
@Joris: That would be a simpler alternative, I will try it out too!

Marcelo.

On 4/24/07, Joris Dobbelsteen <[EMAIL PROTECTED]> wrote:




 --
*From:* [EMAIL PROTECTED] [mailto:
[EMAIL PROTECTED] *On Behalf Of *Marcelo de Moraes Serpa
*Sent:* dinsdag 24 april 2007 21:06
*To:* pgsql-general@postgresql.org
*Subject:* Re: [GENERAL] Audit-trail engine: getting the application's
layer user_id

 Thank you for the replies.

@Richard: I've thought about having one DB user for each APP user.
However, a coworker told me that it would be infeasible to do that on the web
enviroment, specifically for J2EE where a DB connection pool is used, so I
gave up on that.

As Richard mentioned, he has done it.
Remember, with a pool you DO NOT set up a new connection every time, but
you can certainly still utilize the pool. The trick is PostgreSQL's
Role-Based Access Control (RBAC) implementation, i.e. you can just do a
SET LOCAL ROLE ...
After transaction commit or rollback, or execution of SET LOCAL ROLE
NONE or RESET ROLE, you will have your original role (own user) again. This
should work just fine.

See also: http://www.postgresql.org/docs/8.1/interactive/sql-set-role.html
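A sketch of that pattern per pooled request (the role name app_alice is hypothetical, standing in for whatever role is mapped to the current application user):

```sql
BEGIN;
SET LOCAL ROLE app_alice;  -- hypothetical role for the current application user
-- ... run this request's statements; they execute with app_alice's privileges ...
COMMIT;                    -- the session reverts to its original role automatically
```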

 @Jorge: Is this "connection id" you say equivalent to the "applicationid"
mentioned in the ibm db2 article? If so, how could I get this data through
my application?

On 4/24/07, Marcelo de Moraes Serpa <[EMAIL PROTECTED]> wrote:
>
> Thank you for the replies.
>
> @Richard: I've thought about having one DB user for each APP user.
> However, a coworker told me that it would be infeasible to do that on the web
> enviroment, specifically for J2EE where a DB connection pool is used, so I
> gave up on that.
>
> @Jorge: Is this "connection id" you say equivalent to the
> "applicationid" mentioned in the ibm db2 article? If so, how could I get
> this data through my application?
>
> Marcelo.
>
> On 4/24/07, Jorge Godoy <[EMAIL PROTECTED]> wrote:
> >
> > "Marcelo de Moraes Serpa" <[EMAIL PROTECTED]> writes:
> >
> > > I forgot to add the link to the article I've mentioned:
> > >
> > >
> > 
http://www-128.ibm.com/developerworks/db2/library/techarticle/0302stolze/0302stolze.html#section2b
> > >
> > > This is what I'd like to do on PostgreSQL,
> >
> > So, translating it to a simpler example:
> >
> > You want that your function gets the connection ID it is using and
> > ties it to your current user ID at your application and then have
> > all your tables use a trigger to retrieve the user name from the
> > auxiliar table that maps "connection ID -> user", right?
> >
> > That's what's in that page: a UDF (user defined function) named
> > getapplicationid() that will return the user login / name / whatever
> > and
> > triggers.
> >
> > What is preventing you from writing that?  What is your doubt with
> > regards to how create that feature on your database?
> >
> >
> >
> > --
> > Jorge Godoy  <[EMAIL PROTECTED] >
> >
>
>



[GENERAL] a math question

2007-04-25 Thread tom
I have a math question and a benchmark question and I'm not sure how  
to benchmark it.


What I'm trying to do is use pgsql as a bayes token store for a spam  
filter I'm writing.
In doing this I have a data structure with index keys and two integer  
fields 'h_msgs' and 's_msgs' for each token and another pair for each  
user (H_msgs, S_msgs), making four data pieces for each user-token  
relationship.


For Bayes these are run through an equation of the form:
(s_msgs/S_msgs)/(s_msgs/S_msgs + h_msgs/H_msgs)
Which I currently do in perl.
In pgsql I have to modify this a bit with 'cast (s_msgs as double  
precision)' or 'cast(s_msgs as real)' in order to get floating point  
math.

( cast(s_msgs as double precision)/S_msgs)  and so on...
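Spelled out as a full query, that approach looks like this (a sketch; the table name and the names standing in for the per-user totals S_msgs/H_msgs are hypothetical):

```sql
-- ::float8 is shorthand for CAST(... AS double precision); one cast per
-- division is enough to force floating-point math.
SELECT (s_msgs::float8 / s_total)
       / (s_msgs::float8 / s_total + h_msgs::float8 / h_total) AS spam_prob
FROM tokens;
```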

Question:  Is there a better way to get floating point math out of a  
set of integers?


The thought occurred to me that if I let pgsql do this, it should be
considerably faster, since Perl is slower than C. But I don't know if
I have any good way of proving this.
The data retrieval process tends to dwarf everything else -- which  
may mean I really shouldn't waste my time with this anyways.


But I was wondering if the thinking is valid, and how I might  
benchmark the differences.


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] understanding output from Explain

2007-04-25 Thread tom

Is there a tutorial on this stuff?
I did check the Really Fine Manual, but it says explain plans are
beyond its scope.




On Apr 22, 2007, at 10:36 AM, A. Kretschmer wrote:

am  Sun, dem 22.04.2007, um  9:43:32 -0400 mailte Tom Allison  
folgendes:

"Seq Scan on"  - is this just a FULL TABLE scan?


Yes.


Andreas
--
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net

---(end of broadcast)---

TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] query from a list of ids

2007-04-25 Thread Listmail




name| ids
-
Peter| 2, 3, 4, 5
Jack| 100, 34, 3

Both name and ids are in text format.


	If you really do not want to use a link table (user_id, flag_id), you
could use an array of ints instead of a string...
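A sketch of that layout (the column types are assumptions; = ANY matches a value against the elements of an array):

```sql
-- ids stored as int[] rather than a comma-separated string
CREATE TABLE table1 (name text, ids int[]);

SELECT t2.*
FROM table2 t2
JOIN table1 t1 ON t2.id = ANY (t1.ids)
WHERE t1.name = 'Peter';
```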


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster