Re: [GENERAL] General Ledger db design

2007-02-26 Thread Ron Johnson

On 02/26/07 01:39, Karl O. Pinc wrote:
> 
> On 02/25/2007 06:21:45 PM, Kenneth Downs wrote:
>> Martin Winsler wrote:
> 
[snip]
> The above proposal takes care of the data
> structure/referential integrity
> issues, but does not solve the data integrity issues.
> 
> The only way, at present, to solve the data integrity
> issues is to write a FOR EACH STATEMENT trigger to be sure that
> all the rows agree with each other and everything balances.
> But this can only be done after all the data goes into the database.
[snip]
> FWIW, I have long lusted after a per-row trigger that would
> fire on transaction commit to solve these problems.
> (Or any sort of trigger with access to the row
> data so that it can be checked.)
> I couldn't say whether such triggers are technically feasible,
> but I'm pretty sure nobody's
> interested enough to do the implementation.

Why wouldn't deferred (commit-time) constraints be adequate to the
task?  That's what they were designed for.
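
They largely would be. Deferrable constraints in PostgreSQL cover
foreign-key checks, and a CREATE CONSTRAINT TRIGGER can likewise be
deferred to commit time to verify that a batch balances. A minimal
sketch, with table and column names invented for illustration (not from
this thread):

```sql
-- Hypothetical ledger table; positive amounts are debits, negative credits.
CREATE TABLE gl_entry (
    batch_id integer       NOT NULL,
    amount   numeric(14,2) NOT NULL
);

CREATE OR REPLACE FUNCTION check_batch_balance() RETURNS trigger AS $$
BEGIN
    -- Every batch must sum to zero once the transaction is complete.
    IF EXISTS (SELECT 1 FROM gl_entry
               GROUP BY batch_id
               HAVING sum(amount) <> 0) THEN
        RAISE EXCEPTION 'ledger batch out of balance';
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

-- Constraint triggers must be row-level, but when deferred they run at
-- COMMIT, after all rows of the batch are already in place.
CREATE CONSTRAINT TRIGGER batch_balance
    AFTER INSERT OR UPDATE OR DELETE ON gl_entry
    DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW EXECUTE PROCEDURE check_batch_balance();
```

With this in place, unbalanced inserts succeed within the transaction,
but the COMMIT itself fails unless each batch sums to zero.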



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] complex referential integrity constraints

2007-02-26 Thread Alban Hertroys
Robert Haas wrote:
> I don't understand what a weighted constraint would mean.  Either the
> attacker_id can be a wolf, or it can't.  Knowing that it is only 1%
> likely over the long haul is insufficient to disallow any particular
> transaction.

Basically, I suggested combining the constraint with a probability. If
the probability of one animal attacking another is 0%, it can't attack
the other animal - that's a strict constraint. Otherwise it can, and
you'll also immediately know how likely that is to happen.

An added bonus is that you can generalize certain constraints. If
certain animals are less than - say - 25% likely to attack certain other
animals, you could determine that the attacked animal is not in fact
prey. An example would probably be wolves attacking other wolves (or
predators in general). For relations that depend on an animal being
prey, a constraint would be that this number be <25%.

In this discussion it is also not entirely defined what "attacking" means.
Is a ram defending his herd from wolves attacking (the wolves)?

I realise this all started from an analogy to a real problem, so most of
this is probably not very relevant to your actual problem. No less
interesting, though.

-- 
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] PostgreSQL on Windows Paper

2007-02-26 Thread Dave Page
Scott Marlowe wrote:
> On Fri, 2007-02-23 at 12:22, Chris Travers wrote:
>> Hi all;
>>
>> Microsoft has seen it fit to publish a paper I have written as an 
>> introduction to PostgreSQL on Windows.  This paper covers the basics of 
>> installing and configuring the software.  I thought it might be of 
>> interest here so here is the link:
>>
>> http://port25.technet.com/archive/2007/02/22/postgresql-on-windows-a-primer.aspx
>>
>> If there are any editorial concerns they can be directed to me.  It is 
>> my hope that this will help introduce our favorite RDBMS to a wider 
>> audience.
> 
> One point, the paper mentions that you can't run pgsql under an admin
> account, but I thought that changed with 8.2.  Or is that with Vista or
> something?

Yes, it did change with 8.2. A couple of other points whilst we're on
the subject:

- It says to click postgresql-8.2-int.msi to install. That's wrong -
it's postgresql-8.2.msi.

- It says that 'one should expect performance on Windows to be lower
[because of the per-process architecture], especially where large
numbers of small queries are made.' That's not really accurate - it will
be slower when there are large numbers of short lived connections. Lots
of queries in one connection should be fine though.

- It should be noted that on Vista, UAC can be re-enabled following
installation. We are aiming to fix this for 8.3.

- It's "pgAdmin III", not PGAdmin III or PgAdmin III (yeah, I know, get
a life Dave...)

Regards, Dave



Re: [GENERAL] SQL Question - Using Group By

2007-02-26 Thread Alban Hertroys
[EMAIL PROTECTED] wrote:
> You could use COUNT() in conjunction with NULLIF:
> 
> select "Type",
> count(nullif("Active", false)) as "Active Count",
> count(nullif("Active", true)) as "Inactive Count",
> 100 * count(nullif("Active", false)) / count(*) as "Active Percent"
> from table_name group by "Type"

Tom Lane suggested I use sum("Active"::int) in a similar situation,
except that I had boolean expressions instead of values. It is a bit
faster; IMO readability is just "different".
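
Side by side, using the names from the query above (a sketch, not tested
against the original data):

```sql
-- Both expressions count rows where "Active" is true; sum() casts the
-- boolean to 0/1, while count() skips the NULL that NULLIF produces.
SELECT "Type",
       sum("Active"::int)             AS "Active Count",    -- cast form
       count(nullif("Active", false)) AS "Active Count 2",  -- NULLIF form
       count(nullif("Active", true))  AS "Inactive Count"
FROM table_name
GROUP BY "Type";
```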

-- 
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] problem installing NPGSQL and pgOLEDB with .NET

2007-02-26 Thread RPK

I am using NPGSQL but noticed degraded performance even when only 5 to 10
records are present. ODBC was very fast.

-- 
View this message in context: 
http://www.nabble.com/problem-installing-NPGSQL-and-pgOLEDB-with-.NET-tf3282768.html#a9156510
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] General Ledger db design

2007-02-26 Thread Raymond O'Donnell

Kenneth Downs wrote:

So far so good.  Now we have AR invoices, and AP vouchers.  Let's 


My apologies if this is a stupid question, but what do "AR" and "AP" 
stand for? I'm following this thread with interest.


Ray.

---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---



Re: [GENERAL] General Ledger db design

2007-02-26 Thread Michael Glaesemann


On Feb 26, 2007, at 19:32 , Raymond O'Donnell wrote:


Kenneth Downs wrote:


So far so good.  Now we have AR invoices, and AP vouchers.  Let's


My apologies if this is a stupid question, but what do "AR" and  
"AP" stand for? I'm following this thread with interest.


Accounts Receivable and Accounts Payable.


Michael Glaesemann
grzm seespotcode net





[GENERAL] PostgreSQL 8.2.x and JDBC driver

2007-02-26 Thread DANTE Alexandra

Hello List,

I am not sure that this is the appropriate list, but I'll try...
I am trying to use BenchmarkSQL (release 2.3.2, built on February 12, 2006)
with PostgreSQL 8.2.2 and then 8.2.3.
By default, the JDBC driver included with this release of BenchmarkSQL
is "postgresql-8.0.309.jdbc3.jar".
So I downloaded "postgresql-8.2-504.jdbc3.jar" from
http://jdbc.postgresql.org/download.html, as I use JDK 1.4 and
PostgreSQL 8.2.x.


I have a question about the release of this JDBC driver: version 8.2-504
was built on December 1st, 2006, that is to say for PostgreSQL 8.2.0.
Can I use it with 8.2.2 and 8.2.3? Does a release exist for these
releases of PostgreSQL, or is it correct to use version 8.2-504?


Thank you in advance.
Regards,
Alexandra






[GENERAL] Strange Problem. Please Help.

2007-02-26 Thread hengky liwandouw
Dear all,

I have a very strange problem with Access 2003 as frontend and PgSQL 8.1
as backend.

In my Access application, I have an order form with an orderdetail subform.

The subform's recordsource is based on a saved Access query with a very
simple calculated field "AMOUNT" that calculates QTY*Unitprice. In the
form footer, I have a SUBTOTAL textbox with a "=Sum(Amount)" function.
All fields have "Standard" format, with 2 decimals.

The problem is: when the subtotal should have a decimal value, e.g.
123.45, it just shows 123.00.

What causes this problem? I didn't experience the same problem with an
MDB tables database.

I remember that when I installed PostgreSQL, I checked all functions, and
my database has about 500 built-in functions! Is this the source of the
problem?

Thank you for any help!

 

Re: [GENERAL] [HACKERS] urgent: upgraded to 8.2, getting kernel panics

2007-02-26 Thread Merlin Moncure

On 2/23/07, Tom Lane <[EMAIL PROTECTED]> wrote:

"Merlin Moncure" <[EMAIL PROTECTED]> writes:
> On friday we upgraded a critical backend server to postgresql 8.2
> running on fedora core 4.

Umm ... why that particular choice of OS?  Red Hat dropped update
support for FC4 some time ago, and AFAIK the Fedora Legacy project
is not getting things done.  How old is the kernel you're using?


Linux mojo 2.6.17-1.2142_FC4smp #1 SMP Tue Jul 11 22:57:02 EDT 2006
i686 i686 i386 GNU/Linux


Unfortunately, the decision about which kernel to run is more or less
out of my hands.  I personally really dislike Fedora and would much
prefer to be running CentOS/Red Hat AS.  That said, your comments and
those of others are very helpful with regard to fixing that.

We tried updating to the latest via yum update, with no help.

As promised, here is the best photo of the panic we could get:
http://img144.imageshack.us/my.php?image=dumpic6.jpg

We did an emergency downgrade to 8.1 and will monitor the
situation...the decision to get a new server has already been made and
hopefully it will be on a more stable platform.

big thanks to all who took a few minutes out of their day to lend a hand.

merlin



Re: [GENERAL] General Ledger db design

2007-02-26 Thread Kenneth Downs

Karl O. Pinc wrote:


On 02/25/2007 06:21:45 PM, Kenneth Downs wrote:

Martin Winsler wrote:



This is a real world situation where referential
integrity needs to be broken in theory, I believe.  Does anybody 
have any experience or knowledge of building financial accounting 
databases?  Am I wrong about this?


The problem is that with "double entry accounting" you have records 
in tables that both reference other records in the same table as 
well as different records in other tables depending on some fairly 
complex logic.
For instance an invoice is a financial instrument, so the "parent 
record" would naturally want to be part of a company wide "journal" 
or "ledger."  However, its child records would be actual invoice 
lines as well as two different sets of entries in the general ledger 
detail, all 3 sets of records must agree with each other on the 
invoice parent record total.


The solution I've always used is to introduce a table of batches.  
This is the table that unifies all of the others.  When you post an 
invoice, you generate a new batch, give it type "AR".  The invoice is 
stamped with the batch #, as are the GL transaction rows.   When you 
post an AP voucher, do the same thing.   Same for checks received, 
checks paid, etc, all of them have different batch types.


It's been a while since I've done finance apps but
this is my recollection of the situation.

The above proposal takes care of the data
structure/referential integrity
issues, but does not solve the data integrity issues.

The only way, at present, to solve the data integrity
issues is to write a FOR EACH STATEMENT trigger to be sure that
all the rows agree with each other and everything balances.
But this can only be done after all the data goes into the database.
For instance, insert the credit and debit rows
into a temporary table, then insert from the temporary
table into the actual GL transaction table in one go,
and have a AFTER ... FOR EACH STATEMENT go through
and make sure the entire ledger is still in balance.
From a performance standpoint this bites.


Yeah, there is going to be some kind of extra work here.

My own solution is to add a "closed flag" to the batch and a calculated 
column on the GL entries.  If the closed flag is "N", the calculated 
column is zero, so that the ledger remains in balance while the entries 
are going in one-by-one.


A trigger on the batch table traps the setting of closed="Y" and sets 
the calculated values to the trx values, so the entire batch is 
committed inside of a single transaction.  If the batch is not balanced, 
it will reject a setting of closed="Y".


Other trigger code prevents new entries to a closed batch or the 
re-opening of a batch.
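

A rough sketch of the scheme described above, with all table and column
names invented for illustration (this is not Kenneth's actual code):

```sql
-- Batch header with the closed flag.
CREATE TABLE batch (
    batch_id   serial  PRIMARY KEY,
    batch_type char(2) NOT NULL,   -- 'AR', 'AP', ...
    closed     char(1) NOT NULL DEFAULT 'N' CHECK (closed IN ('N', 'Y'))
);

-- GL detail: trx_amount is entered row by row; amount (the "calculated
-- column") stays zero until the batch is closed, so the ledger balances
-- throughout.
CREATE TABLE gl_detail (
    batch_id   integer       NOT NULL REFERENCES batch,
    trx_amount numeric(14,2) NOT NULL,
    amount     numeric(14,2) NOT NULL DEFAULT 0
);

CREATE OR REPLACE FUNCTION close_batch() RETURNS trigger AS $$
BEGIN
    IF NEW.closed = 'Y' AND OLD.closed = 'N' THEN
        -- Refuse to close an unbalanced batch.
        IF (SELECT coalesce(sum(trx_amount), 0) FROM gl_detail
            WHERE batch_id = NEW.batch_id) <> 0 THEN
            RAISE EXCEPTION 'batch % does not balance', NEW.batch_id;
        END IF;
        -- Activate the entries inside the same transaction.
        UPDATE gl_detail SET amount = trx_amount
        WHERE batch_id = NEW.batch_id;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER batch_close BEFORE UPDATE ON batch
    FOR EACH ROW EXECUTE PROCEDURE close_batch();
```

Guards against inserting into a closed batch, or re-opening one, would
be additional triggers in the same style.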



--
Kenneth Downs
Secure Data Software, Inc.
www.secdat.com / www.andromeda-project.org
Office: 631-689-7200   Cell: 631-379-0010

::Think you may have a problem with programming? Ask yourself this 
::question: do you worry about how to throw away a garbage can?





Re: [GENERAL] perfromance world records

2007-02-26 Thread Merlin Moncure

On 2/24/07, Tomi N/A <[EMAIL PROTECTED]> wrote:


That's the kind of leverage I'd like to have when talking about using
pgsql with my colleagues.
Anyone care to comment?


The name-brand tests are basically paid PR for the big databases. The
tests are run in environments controlled completely by the vendor.  Like
almost everything else in the commercial database world, the major point
is to distract and confuse people, not to provide any substantive
information.

Some people who follow graphics card developments might remember how
both ATI and nVidia caught a lot of heat by optimizing their drivers
for specific benchmarks to make themselves look good.  In the database
world, this kind of behavior is encouraged and publication of user run
benchmarks is prohibited.

A much fairer benchmark would be to publish the hardware/OS platform in
advance, but not the SQL. Only standard SQL would be used, and if any
unexpected results came back, the test would be considered 'failed' by
the database.  I don't think such a thing will ever happen, but I would
expect PostgreSQL to do extremely well in such a test.

merlin



Re: [GENERAL] [HACKERS] urgent: upgraded to 8.2, getting kernel panics

2007-02-26 Thread Devrim GUNDUZ
Hi,

On Mon, 2007-02-26 at 08:24 -0500, Merlin Moncure wrote:
> we tried update to the latest via yum update with no help.

As Tom stated, FC4 is no longer supported; therefore you won't be able
to get a newer kernel via yum.

> as promised, here is the  best photo of the panic we could get:
> http://img144.imageshack.us/my.php?image=dumpic6.jpg

...bad locking...

The picture reminded me of a SCSI driver bug in older kernels -- I
googled again now and saw a post that says "native drivers are being
used in FC5+ kernels". If that is really the case, you may still hit the
problem sometime later.

Upgrading the OS will probably solve your problem, since there is no way
to upgrade the FC4 kernel unless you want to compile the kernel from
source on your system.

Regards,

-- 
Devrim GÜNDÜZ
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, ODBCng - http://www.commandprompt.com/






[GENERAL] proper export table to csv? multilineproblem.

2007-02-26 Thread peter pilsl


I need to export several tables as CSV. I use the \f command to set the
separator and pipe the output of my SELECT directly to a file.


Unfortunately, that's all the control I have over the created CSV file.
I cannot set the field delimiter and - which is critical to me - I cannot
set an alternate record separator (newline at the moment). The latter is
important to me because many of my field values contain \n or \r, so the
CSV import filter has a hard time distinguishing the record separator
from a newline inside the data.


On the server I have Postgres 7.2, so the COPY command does not know
about the CSV option yet (nor does Postgres 8).


Is there any ready-made tool to create flexible CSV files, or any trick
I haven't found yet?
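
For what it's worth, the CSV mode added to COPY in PostgreSQL 8.0 quotes
field values containing newlines, which addresses exactly this problem
on newer servers. A sketch (table name and path are illustrative):

```sql
-- Server-side export; embedded \n and \r end up inside quoted fields,
-- so an importer can tell record separators from data:
COPY mytable TO '/tmp/mytable.csv' WITH CSV HEADER;

-- Or client-side from psql, which avoids needing server file access:
-- \copy mytable to 'mytable.csv' with csv
```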


thnx,
peter

--
mag. peter pilsl - goldfisch.at
IT-Consulting
Tel: +43-650-3574035
Tel: +43-1-8900602
Fax: +43-1-8900602-15
skype: peter.pilsl
[EMAIL PROTECTED]
www.goldfisch.at



[GENERAL] how to sort an array and remove duplicate in plpgsql

2007-02-26 Thread David Gagnon
Hi all,

  I've been struggling with this problem for a while and I searched the web
without success.  I have an array of timestamps, and I need it sorted and
I need to remove duplicate values.  The SELECT statement offers ORDER BY
and DISTINCT, which may help, but so far I haven't found a way to plug my
array variable into the select and get back the sorted array in return.

Any help or clue will be really appreciated!

Regards 

David

 

 



Re: [GENERAL] how to sort an array and remove duplicate in plpgsql

2007-02-26 Thread A. Kretschmer
On Mon, 2007-02-26 at 09:15:52 -0500, David Gagnon wrote:
> Hi all,
> 
>   I'm messing up with this problem for a while and I searched the web without
> success.  I have an array of timestamp and I needed sorted and I need to 
> remove
> duplicate value.  The Select statement offers the SORT BY and UNIQUE that may
> help me but so far I didn't find the way to plug my array variable into the
> select and get back the sorted array in return.

CREATE OR REPLACE FUNCTION array_sort (ANYARRAY)
RETURNS ANYARRAY
LANGUAGE SQL
AS $$
SELECT ARRAY(
SELECT $1[s.i] AS "foo"
FROM
generate_series(array_lower($1,1), array_upper($1,1)) AS s(i)
ORDER BY foo
);
$$;


You can change the 'SELECT $1[s.i]' to 'SELECT DISTINCT $1[s.i]' to remove
duplicate values.
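
A quick usage sketch of the function above (the timestamps are invented
for illustration):

```sql
-- Returns the array elements in sorted order; with the DISTINCT variant
-- of the function, duplicates are removed as well.
SELECT array_sort(ARRAY['2007-02-26'::timestamp,
                        '2007-01-01',
                        '2007-02-26']);
```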


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net



[GENERAL] Querying all months even if don't exist

2007-02-26 Thread Robert Fitzpatrick
I have a query that pulls totals for the month, and from there I am
building a crosstab to show all months. My dilemma is that sometimes
there is no data for a month and the crosstab becomes skewed. I made a
table with all 12 months in it and joined it to the query in my view to
get all the months of any year in which there were sales - surely there
is a better way? But when spanning different years, as in the query
below, that does not work: I only get the 12 months of the years where
sales occurred, leaving out 2005 since this user had no sales in 2005.

primepay=# select * from view_pick1 WHERE rep = 'aespinal' and nmonth >=
'12/01/2005' and nmonth <= '11/30/2006' ORDER BY 1;
   rep    |   nmonth   | units | revenue
----------+------------+-------+---------
 aespinal | 2006-01-01 |       |
 aespinal | 2006-02-01 |       |
 aespinal | 2006-03-01 |       |
 aespinal | 2006-04-01 |       |
 aespinal | 2006-05-01 |     4 |
 aespinal | 2006-06-01 |     3 |
 aespinal | 2006-07-01 |       |
 aespinal | 2006-08-01 |       |
 aespinal | 2006-09-01 |       |
 aespinal | 2006-10-01 |       |
 aespinal | 2006-11-01 |       |
(11 rows)

I need to make sure there are always 12 rows, with all months, for each
type. Is there any kind of query I could write to build a list of all
months, whether they had sales in that year or not? Right now, the query
below is what I'm using to get all the months of any year in which there
were sales. The view_pick1_data view is the query where the totals are
built. The view_pick1 shown in the query above takes all the months in
the result of the query below and joins them to view_pick1_data. I know
there must be a better way; I'm struggling to figure it out.

SELECT DISTINCT view_pick1_data.rep, view_pick1_data.nyear, months."month"
FROM view_pick1_data, months
ORDER BY view_pick1_data.rep, view_pick1_data.nyear, months."month";

Thanks for any help in advance!

-- 
Robert


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] how to sort an array and remove duplicate in plpgsql

2007-02-26 Thread Chris Coleman
Hi,
 
I'm no postgres guru and am not sure if this is the accepted way or not,
but:
 
CREATE OR REPLACE FUNCTION explode_array(in_array anyarray)
RETURNS SETOF anyelement AS
$BODY$
SELECT ($1)[s] FROM generate_series(1, array_upper($1, 1)) AS s;
$BODY$
LANGUAGE 'sql' IMMUTABLE;
ALTER FUNCTION explode_array(in_array anyarray) OWNER TO postgres;

Then you could do:
 
SELECT DISTINCT explode_array(ARRAY[1, 2, 5, 3, 1, 2]) AS data ORDER BY
data
 
==>
 
1,
2,
3,
5
 
Hope that helps,
 
Cheers
Chris



From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of David Gagnon
Sent: 26 February 2007 14:16
To: pgsql-general@postgresql.org
Subject: [GENERAL] how to sort an array and remove duplicate in plpgsql 



Hi all,

  I'm messing up with this problem for a while and I searched the web
without success.  I have an array of timestamp and I needed sorted and I
need to remove duplicate value.  The Select statement offers the SORT BY
and UNIQUE that may help me but so far I didn't find the way to plug my
array variable into the select and get back the sorted array in return.

Any help or clue will be really appreciated!

Regards 

David

 

 

_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

This e-mail is confidential and may be read only by the intended recipient.
If you are not the intended recipient, please do not forward, copy or take
any action based on it and, in addition, please delete this email and
inform the sender.
We cannot be sure that this e-mail or its attachments are free from
viruses.  In keeping with good computing practice, please ensure that
you take adequate steps to check for any viruses.  Before replying
or sending any email to us, please consider that the internet is inherently
insecure and is an inappropriate medium for certain kinds of information.
We reserve the right to access and read all e-mails and attachments
entering or leaving our systems.

Registered office: Eurocom House, Ashbourne Road, Derby DE22 4NB Company 
number: 01574696. 


Re: [GENERAL] Querying all months even if don't exist

2007-02-26 Thread A. Kretschmer
am  Mon, dem 26.02.2007, um 10:10:45 -0500 mailte Robert Fitzpatrick folgendes:
> I have a query that pulls totals for the month and from there I am
> building a crosstab to show all months. My dilemma is that sometimes
> there is no data for a month and the crosstab becomes skewed. I made a
> table with all the 12 months in it and joined to the query in my view to

Try:

select ('2007-01-01'::date+(s||'month')::interval)::date from 
generate_series(0,11)s;


to generate a list of all months in this year. You can use this to join
to other tables.
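
Applied to the earlier question, a sketch (reusing the view and column
names from Robert's query; untested against his data):

```sql
-- Generate the 12 months of the reporting year and LEFT JOIN the totals
-- view, so months without sales still produce a row:
SELECT m.nmonth, v.units, v.revenue
FROM (SELECT ('2005-12-01'::date + (s || ' month')::interval)::date AS nmonth
      FROM generate_series(0, 11) AS s) AS m
LEFT JOIN view_pick1 v ON v.nmonth = m.nmonth AND v.rep = 'aespinal'
ORDER BY m.nmonth;
```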


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Writing oracle/postgress generic SQL

2007-02-26 Thread Richard Troy

On Fri, 23 Feb 2007, David Fetter wrote:
> On Fri, Feb 23, 2007 at 08:28:06AM -0800, Richard Troy wrote:
> > On Fri, 23 Feb 2007, David Fetter wrote:
> > > On Fri, Feb 23, 2007 at 10:23:56AM +0100, Ben Edwards wrote:
> > > > Anyone know of any guidelines for writing SQL which works under
> > > > Oracle which will also work under Postgres.  This is to ensure that
> > > > SQL written for an Oracle database can be migrated to Postgres
> > > > later.
> > >
> > > You've just bumped into the problem that while standard SQL exists,
> > > only Mimer and possibly DB2 implement it.  The presentation below
> > > outlines your main choices for supporting more than one DB back-end,
> > > and they're all expensive and troublesome to maintain.
> > >
> > > http://www.powerpostgresql.com/Downloads/database_depends_public.swf
> >
> > With all due respect to Josh's presentation, there's a lot more to
> > the story than those couple of slides.
>
> With all due respect, the presentation was if anything an
> understatement.

Yes; it didn't say very much. I'm sure Josh, as speaker, articulated what
wasn't in those slides, but we didn't get the benefit of that on the web.

>  Unless, as with rare beasties like Science Tools, the
> major purpose of the application is to support multiple DBMS
> back-ends, it's just too expensive.  Even in those rare cases, it's
> expensive.

I guess anything you have to pay for is too expensive. (Sounds like dogma
to me. And you know what dogma makes - just don't step in it.)

> > Are there things it misses?  Yes, but not much.  I'll take the wild
> > guess that more than 80% of applications are completely and
> > adequately served.
>
> That says something about the applications you've seen, and not about
> the adequacy of such a library.

That remark is uninformed and arrogantly presumptuous about both me and
the library, and uninsightful regarding the implementation of
applications. It's also needlessly offensive, if you'll forgive the pun.

>  What point is there in using a
> powerful tool like an RDBMS and then hobbling yourself by only using
> 10% of the available features?  It's certainly a bad thing to do by
> default.

10%? Whatever. I never said anything of the kind - and I'm reminded that
an unsupported argument can be dismissed without support. But there ARE
good reasons. We read on this very list about two weeks ago a long
treatise on the subject by an obviously long-in-the-tooth DBA type who
articulately took at least four pages to tell us why it was his practice
and advice to always be able to move to another RDBMS. Perhaps read the
archives and become informed...

> > It has pass-through capability so you can still get at engine-specific
> > features, though it does completely side-step stored procedures
>
> Oops!  There went 60% of the code in some of the databases I've seen
> in production.  80% in at least one case I've seen in the past year.

Lots of people use stored procedures and some people over-use them while
some others under-utilize them in their architectures. It should be no
surprise that some people follow dogma while others consider every arrow
in their quiver. Yet I detect a certain flippant bigotry in your response
- Oops! Perhaps a more considered argument would be more effective than
just an attack - that is, presuming there's a considered argument to be made.

The short of it is that Science Tools is surely not alone in having
developed an SQL dialect translator, though we may be the only ones to
offer it to customers. Either way, automated dialect translation, whether
by us or otherwise, is another useful choice whether _you_ like it or not.

Ciao,
Richard

-- 
Richard Troy, Chief Scientist
Science Tools Corporation
510-924-1363 or 202-747-1263
[EMAIL PROTECTED], http://ScienceTools.com/


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] complex referential integrity constraints

2007-02-26 Thread Alban Hertroys
Robert Haas wrote:
> I sort of think that this kind of stuff belongs in a separate table
> somehow.  For example in this case I would want to:

I wasn't suggesting otherwise. A constraint is a constraint; whether it
involves an extra table or not wasn't really relevant until now
(apparently).

> CREATE TABLE attack_probability (
>   attacker_type_id integer not null references animal_type (id), 
>   victim_type_id integer not null references animal_type (id), 
>   percent_chance integer not null,

Personally I would prefer something that can hold decimals, a numeric
for example. And I usually add cascading behaviour definitions explicitly.

>   primary key (attacker_type_id, victim_type_id)
> );
> 
> ...and then declare that the CONSTRAINT on the "maulings" table is that
> if I look up the types of the attacker and victim, that pair of types
> must be present in the attack_probability table with percent_chance > 0.

This was pretty much what I had in mind. Not _exactly_, as I didn't
concern myself with the implementation details yet. But it pretty much
boils down to what you suggested, yes.

> I guess my point here is that I think in your proposal you are letting
> domain-specific data creep into the schema.  It's the job of the schema
> to enforce integrity constraints, but not to know specifically how
> things work.  The fact (if it is a fact) that the chance of one type of
> animal attacking another can be captured as a probability (rather than,
> say, one probability for the day time and another probability for the
> night time, or one probability for each specific animal rather than each
> animal type, or I don't know what's going on and want to infer the
> probabilities from the data after I've gathered it) is domain-specific.
> I don't really want the information about attack probabilities (or
> whatever) to be something that's hardcoded in my schema; I want it to be
> part of the data in the schema, with the schema enforcing such
> constraints on that data as I may see fit to define.  I don't want to
> have to select things out of system tables to find out attack
> probabilities.  Also, as a practical matter, I suspect that such a setup
> would result in an absurdly complex constraint language.

I was merely pointing out the possibility of such a constraint - maybe
"business rule" is more appropriate (but isn't that a constraint as
well?). The original suggestion didn't allow for this; it simply stated
that wolves cannot attack lions - which isn't necessarily true.

I suppose that's the point I was trying to make in general; the whole
who-attacks-who business isn't that black-and-white - more like
different shades of gray.

In the end you'll have to make a decision about what combinations are
possible depending on what you want to get out of your database. It may
not be necessary to get into this much detail - or it may. Not for me to
say.

> ...Robert

-- 
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //



Re: [GENERAL] Best way to store and retrieve photo from PostGreSQL

2007-02-26 Thread Brent Wood


We store images as files, with names/paths, metadata, etc. in PostGIS.
As our images are often associated with a location - being things like
field specimen images or seabed pictures - we use PostGIS to store and
query the location info, and UMN MapServer as the engine driving a
map-based image selection tool - www.atlasmd.com.


Atlas originally used the Firebird RDBMS for image data, but was
migrated to PostGIS to fully support OGC-compliant spatial functionality.


The additional step we have implemented, to ensure a two-way link
between the images and the data, is to write the db primary key to an
EXIF field in each image as it is uploaded. This means that given an
image, we can access its data from the DB (by looking up the key
embedded in the image), instead of the usual approach, which only
permits a link the other way.


Cheers,

 Brent Wood

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] complex referential integrity constraints

2007-02-26 Thread Robert Haas
I sort of think that this kind of stuff belongs in a separate table
somehow.  For example in this case I would want to:

CREATE TABLE attack_probability (
attacker_type_id integer not null references animal_type (id), 
victim_type_id integer not null references animal_type (id), 
percent_chance integer not null,
primary key (attacker_type_id, victim_type_id)
);

...and then declare that the CONSTRAINT on the "maulings" table is that
if I look up the types of the attacker and victim, that pair of types
must be present in the attack_probability table with percent_chance > 0.
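
One way to enforce such a constraint is a trigger that looks the pair up. This is only a sketch: the animal table, its type_id column, and the maulings columns (attacker_id, victim_id) are assumptions for illustration, not names from the thread.

```sql
CREATE OR REPLACE FUNCTION check_attack_allowed() RETURNS trigger AS $$
BEGIN
    -- Reject the mauling unless the (attacker type, victim type) pair
    -- appears in attack_probability with a nonzero chance.
    IF NOT EXISTS (
        SELECT 1
        FROM attack_probability ap
        JOIN animal att ON att.id = NEW.attacker_id
        JOIN animal vic ON vic.id = NEW.victim_id
        WHERE ap.attacker_type_id = att.type_id
          AND ap.victim_type_id   = vic.type_id
          AND ap.percent_chance > 0
    ) THEN
        RAISE EXCEPTION 'animals of these types cannot attack each other';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER maulings_attack_check
    BEFORE INSERT OR UPDATE ON maulings
    FOR EACH ROW EXECUTE PROCEDURE check_attack_allowed();
```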

I guess my point here is that I think in your proposal you are letting
domain-specific data creep into the schema.  It's the job of the schema
to enforce integrity constraints, but not to know specifically how
things work.  The fact (if it is a fact) that the chance of one type of
animal attacking another can be captured as a probability (rather than,
say, one probability for the day time and another probability for the
night time, or one probability for each specific animal rather than each
animal type, or I don't know what's going on and want to infer the
probabilities from the data after I've gathered it) is domain-specific.
I don't really want the information about attack probabilities (or
whatever) to be something that's hardcoded in my schema; I want it to be
part of the data in the schema, with the schema enforcing such
constraints on that data as I may see fit to define.  I don't want to
have to select things out of system tables to find out attack
probabilities.  Also, as a practical matter, I suspect that such a setup
would result in an absurdly complex constraint language.

...Robert

-Original Message-
From: Alban Hertroys [mailto:[EMAIL PROTECTED] 
Sent: Monday, February 26, 2007 4:15 AM
To: Robert Haas
Cc: David Fetter; pgsql-general@postgresql.org
Subject: Re: [GENERAL] complex referential integrity constraints

Robert Haas wrote:
> I don't understand what a weighted constraint would mean.  Either the
> attacker_id can be a wolf, or it can't.  Knowing that it is only 1%
> likely over the long haul is insufficient to disallow any particular
> transaction.

Basically I suggested combining the constraint with a probability. If
the probability of one animal attacking another is 0%, it can't attack
the other animal - that's a strict constraint. The other way around it
can, and you'll also immediately know how likely that is to happen.

An added bonus is that you can generalize certain constraints. If
certain animals are less than - say 25% - likely to attack certain
other animals, you could determine that the attacked animal is not in
fact prey. An example would probably be wolves attacking other wolves
(or predators in general). For relations that depend on an animal being
prey, a constraint would be that this number be <25%.

In this discussion it is also not entirely defined what attacking means.
Is a ram defending his herd from wolves attacking (the wolves)?

I realise this all started from an analogy to a real problem, so most of
this is probably not very relevant to your actual problem. No less
interesting, though.

-- 
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] General Ledger db design

2007-02-26 Thread Filipe Fernandes
[snip]
>>> Martin Winsler wrote:
 Does anybody have any experience or knowledge of building
 financial accounting databases?
[snip]

I too was thinking about building a double entry accounting system and
I've been following this thread closely, learning a few tricks on the way :)

I've been gathering up ideas on how a db schema might look by reading the
gnuCash tutorials on double-entry accounting, but does anybody know if
there are open source applications that have already solved this on the db
layer?

filipe

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] General Ledger db design

2007-02-26 Thread Karl O. Pinc


On 02/26/2007 07:40:17 AM, Kenneth Downs wrote:

Karl O. Pinc wrote:


On 02/25/2007 06:21:45 PM, Kenneth Downs wrote:

Martin Winsler wrote:



This is a real world situation where referential
integrity needs to be broken in theory, I believe.


The problem is that with "double entry accounting" you have  
records in tables that both reference other records in the same  
table as well as different records in other tables depending on  
some fairly complex logic.
For instance an invoice is a financial instrument, so the "parent  
record" would naturally want to be part of a company wide  
"journal" or "ledger."  However, its child records would be actual  
invoice lines as well as two different sets of entries in the  
general ledger detail, all 3 sets of records must agree with each  
other on the invoice parent record total.


The solution I've always used is to introduce a table of batches.   
This is the table that unifies all of the others.  When you post an  
invoice, you generate a new batch, give it type "AR".  The invoice  
is stamped with the batch #, as are the GL transaction rows.   When  
you post an AP voucher, do the same thing.   Same for checks  
received, checks paid, etc, all of them have different batch types.




My own solution is to add a "closed flag" to the batch and a  
calculated column on the GL entries.  If the closed flag is "N", the  
calculated column is zero, so that the ledger remains in balance  
while the entries are going in one-by-one.


A trigger on the batch table traps the setting of closed="Y" and sets  
the calculated values to the trx values, so the entire batch is  
committed inside of a single transaction.  If the batch is not  
balanced, it will reject a setting of closed="Y".


Other trigger code prevents new entries to a closed batch or the  
re-opening of a batch.

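A minimal sketch of the closed-flag scheme described above. All names here (batch, gl_detail, trx_amount, effective_amount) are assumed for illustration; the actual generated code would differ.

```sql
CREATE OR REPLACE FUNCTION close_batch() RETURNS trigger AS $$
BEGIN
    IF OLD.closed = 'N' AND NEW.closed = 'Y' THEN
        -- Refuse to close an unbalanced batch.
        IF (SELECT COALESCE(SUM(trx_amount), 0)
              FROM gl_detail WHERE batch_id = NEW.id) <> 0 THEN
            RAISE EXCEPTION 'batch % does not balance', NEW.id;
        END IF;
        -- "Distribute" the close: copy the trx values into the
        -- calculated column, making the whole batch count at once.
        UPDATE gl_detail SET effective_amount = trx_amount
         WHERE batch_id = NEW.id;
    ELSIF OLD.closed = 'Y' THEN
        -- No changes (including re-opening) once the batch is closed.
        RAISE EXCEPTION 'batch % is closed', OLD.id;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER batch_close
    BEFORE UPDATE ON batch
    FOR EACH ROW EXECUTE PROCEDURE close_batch();
```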

I haven't entirely focused my brain around this problem, but
it seems to me that you still get rows in, e.g., the financial
transaction detail table -- the credits and debits -- that
don't balance out for some period of time.  Forever if the
application has a bug.  Why is your approach better than
just putting the "check that everything balances" code,
plus whatever updating you want to do elsewhere in the db,
directly into the parent table's trigger (the batch table)
and having the application insert into the batch table last?

You can put triggers into the financial transaction detail table
that says that the batch id has to be valid if it exists
to get your referential integrity right.
You can also disallow inserting new rows once there is already a
batch row; thus the insertion of the batch row "closes" the batch.
Attempts to add new credits or debits, or to change the values of
existing credits or debits (trapped in the cr/db table's update
trigger), would cause the batch to go out of balance and are thus
prevented.

If something
goes wrong, you've got some extra rows lying about, and
you can easily identify them because there's no corresponding
row in the batches table.  (Your proposal has good error
recovery too, but it seems like more work to implement,
what with having to go back and update the "closed" flag,
and even more instruction needing to be given to the
application programmer come time to use the db.)
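
A rough sketch of this alternative, where inserting the batch row itself closes the batch. The batch and gl_detail names are hypothetical, made up for illustration:

```sql
CREATE OR REPLACE FUNCTION batch_must_balance() RETURNS trigger AS $$
BEGIN
    -- The batch row is inserted last, so the detail rows must already
    -- balance at this moment or the "close" is rejected.
    IF (SELECT COALESCE(SUM(amount), 0)
          FROM gl_detail WHERE batch_id = NEW.id) <> 0 THEN
        RAISE EXCEPTION 'batch % does not balance', NEW.id;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER batch_insert_closes
    BEFORE INSERT ON batch
    FOR EACH ROW EXECUTE PROCEDURE batch_must_balance();

-- A companion trigger on gl_detail would reject INSERT/UPDATE/DELETE
-- whenever a row for that batch_id already exists in batch.
```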

Karl <[EMAIL PROTECTED]>
Free Software:  "You don't pay back, you pay forward."
 -- Robert A. Heinlein


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] General Ledger db design

2007-02-26 Thread Kenneth Downs

Karl O. Pinc wrote:


You can put triggers into the financial transaction detail table
that says that the batch id has to be valid if it exists
to get your referential integrity right.


Right.


You can also not allow new rows to be inserted if there
is already a batch row, thus the insertion of a
batch row "closes" the batch.


Not sure what you mean, but you can in fact have any number of open 
batches, on the assumption that it is a multi-user system.



Attempts to add new credits or debits, or to change the values of
existing credits or debits (trapped in the cr/db table's update
trigger), would cause the batch to go out of balance and are thus
prevented.


Using the batch # as the foreign key allows all batches to be isolated 
from each other.





If something
goes wrong, you've got some extra rows lying about and
you can easily identify them because there's no corresponding
row in the batches table.  (Your proposal has good error
recovery too, but seems like it's more work to implement,
as far as having to go back and update the "closed" flag,
and even more instruction needs to be given to the
application programmer come time to use the db.)


Well to be honest I don't manually code any of it, I have a generator 
that does it, I don't trust myself to code something like that properly :)


The code generator lets me do necessary things like sum the transactions 
to the batch row, preventing a close unless they balance, preventing an 
update to the batch row when it is already closed, which as a bonus 
prevents new rows being added, and "Distributing" (as we call it) the 
close flag to the transaction rows when the batch closes.





Karl <[EMAIL PROTECTED]>
Free Software:  "You don't pay back, you pay forward."
 -- Robert A. Heinlein




--
Kenneth Downs
Secure Data Software, Inc.
www.secdat.com / www.andromeda-project.org
Office: 631-689-7200   Cell: 631-379-0010

::Think you may have a problem with programming? Ask yourself this 
::question: do you worry about how to throw away a garbage can?



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] General Ledger db design

2007-02-26 Thread Karl O. Pinc


On 02/26/2007 11:41:18 AM, Kenneth Downs wrote:


You can also not allow new rows to be inserted if there
is already a batch row, thus the insertion of a
batch row "closes" the batch.


Not sure what you mean, but you can in fact have any number of open  
batches, on the assumption that it is a multi-user system.


I'm saying why have a close flag at all?  Why not signal the
close of the batch by the creation of the batch row?

Karl <[EMAIL PROTECTED]>
Free Software:  "You don't pay back, you pay forward."
 -- Robert A. Heinlein


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [pgsql-advocacy] [GENERAL] PostgreSQL on Windows Paper

2007-02-26 Thread Magnus Hagander
Dave Page wrote:
> Scott Marlowe wrote:
>> On Fri, 2007-02-23 at 12:22, Chris Travers wrote:
>>> Hi all;
>>>
>>> Microsoft has seen it fit to publish a paper I have written as an 
>>> introduction to PostgreSQL on Windows.  This paper covers the basics of 
>>> installing and configuring the software.  I thought it might be of 
>>> interest here so here is the link:
>>>
>>> http://port25.technet.com/archive/2007/02/22/postgresql-on-windows-a-primer.aspx
>>>
>>> If there are any editorial concerns they can be directed to me.  It is 
>>> my hope that this will help introduce our favorite RDBMS to a wider 
>>> audience.
>> One point, the paper mentions that you can't run pgsql under an admin
>> account, but I thought that changed with 8.2.  Or is that with Vista or
>> something?
> 
> Yes, it did change with 8.2. A couple of other points whilst we're on
> the subject:
> 
> - It says to click postgresql-8.2-int.msi to install. That's wrong -
> it's postgresql-8.2.msi.
> 
> - It says that 'one should expect performance on Windows to be lower
> [because of the per-process architecture], especially where large
> numbers of small queries are made.' That's not really accurate - it will
> be slower when there are large numbers of short lived connections. Lots
> of queries in one connection should be fine though.

I believe you will still see worse performance, because of at least two
things: context switching is more expensive (much more), and shared
memory access appears to be more expensive.
It will be worse if you have short-lived connections, of course.


> - It should be noted that on Vista, UAC can be re-enabled following
> installation. We are aiming to fix this for 8.3.
> 
> - It's "pgAdmin III", not PGAdmin III or PgAdmin III (yeah, I know, get
> a life Dave...)

Nah, you gotta keep at them. it's like all those people who use postgre.

//Magnus

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] proper export table to csv? multilineproblem.

2007-02-26 Thread Reid Thompson
On Mon, 2007-02-26 at 14:54 +0100, peter pilsl wrote:
> I need to export several tables as CSV. I use the \f command to set the
> separator and pipe the output of my select directly to a file.
> 
> Unfortunately that's all the control I have over the created CSV file. I
> cannot set the field delimiter and - which is critical to me - I cannot
> set an alternate record separator (newline at the moment). The latter is
> important to me because many of my field values have \n or \r in them, so
> the CSV import filter has a hard time distinguishing the record separator
> from a newline inside the data.
> 
> On the server I have Postgres 7.2, so the COPY command does not know about
> the CSV option yet (nor does Postgres 8).
> 
> Is there any ready tool to create flexible CSV files, or any trick I did
> not find out yet?
> 
> thnx,
> peter
> 
tablename=>\pset fieldsep ,
tablename=>\pset recordsep ^

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] General Ledger db design

2007-02-26 Thread Ron Johnson
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 02/26/07 10:35, Filipe Fernandes wrote:
> [snip]
 Martin Winsler wrote:
> Does anybody have any experience or knowledge of building
> financial accounting databases?
> [snip]
> 
> I too was thinking about building a double entry accounting system and
> I've been following this thread closely, learning a few tricks on the way :)
> 
> I've been gathering up ideas on how a db schema might look by reading the
> gnuCash tutorials on double-entry accounting, but does anybody know if
> there are open source applications that have already solved this on the db
> layer?

Ledger SMB (http://www.ledgersmb.org/about/) might be a place to start.



-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFF4ySeS9HxQb37XmcRAm5rAJ0R8/GVHF/Zdycyca63aJATp4urdQCfW8+n
6Z3D1feasFASpUQS1To91Hc=
=2s3j
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] proper export table to csv? multilineproblem.

2007-02-26 Thread Reid Thompson
On Mon, 2007-02-26 at 13:20 -0500, Reid Thompson wrote:
> On Mon, 2007-02-26 at 14:54 +0100, peter pilsl wrote:
> > I need to export several tables as CSV. I use the \f command to set the
> > separator and pipe the output of my select directly to a file.
> > 
> > Unfortunately that's all the control I have over the created CSV file. I
> > cannot set the field delimiter and - which is critical to me - I cannot
> > set an alternate record separator (newline at the moment). The latter is
> > important to me because many of my field values have \n or \r in them,
> > so the CSV import filter has a hard time distinguishing the record
> > separator from a newline inside the data.
> > 
> > On the server I have Postgres 7.2, so the COPY command does not know
> > about the CSV option yet (nor does Postgres 8).
> > 
> > Is there any ready tool to create flexible CSV files, or any trick I did
> > not find out yet?
> > 
> > thnx,
> > peter
> > 
> tablename=>\pset fieldsep ,
> tablename=>\pset recordsep ^
sorry, forgot to set unaligned data mode

tablename=>\a
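
Putting the pieces together, a full export session might look something like this (the table and output file names are made up):

```
db=> \a
db=> \pset fieldsep ','
db=> \pset recordsep '^'
db=> \o /tmp/export.csv
db=> SELECT * FROM mytable;
db=> \o
```

Note that embedded newlines in field values will still appear literally in the output; the '^' record separator just gives the import filter an unambiguous end-of-record marker.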

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Composite Keys

2007-02-26 Thread RPK

Jorge,

For other tables I have an ID field which is incremented by a sequence. But
for this table, there is no ID field. The Receipt No will be generated by
finding the max value of the existing Receipt Nos corresponding to that
Book No. This case has a drawback compared to the sequences in the other
tables. Sequences are handled automatically by the database if two users
enter data simultaneously. But for the Receipts table, where there is no
need to define a sequence, one user will find Max(ReceiptNo) and type in
the rest of the entries. So there is a chance that in the meantime another
user on a different machine will also get the same Max(ReceiptNo), until
the record of the previous user gets saved.

So how to solve this problem?
-- 
View this message in context: 
http://www.nabble.com/Composite-Keys-tf3282722.html#a9164794
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] stored procedure optimisation...

2007-02-26 Thread Anton Melser

Hi,
I need to analyse some html to get some links out, and with only 25
lines in exports_tmp_links (and text_to_parse no more than around
10KB) this function has taken 10 minutes and counting. Something
horribly wrong is going on here! Can someone give me any pointers?
Cheers
Anton

delete from tmp_links_all;
analyze exports_tmp_links;

FOR tempRow IN SELECT * FROM exports_tmp_links LOOP
startString := tempRow.text_to_parse;
LOOP

linkString := substring(startString, 
'(linkadministration.*=[0-9]+)');
IF linkString is null THEN
EXIT;
END IF;

newlinkid := substring(linkString,'([0-9]+)');
INSERT INTO tmp_links_all
(link_id,content_section,item_id,item_name,location_of_link)
values (newlinkid,tempRow.content_section,tempRow.item_id,
tempRow.item_name, tempRow.location_of_link);

startString := substring(startString from position(newlinkid in 
startString));

END LOOP;
END LOOP;

analyze tmp_links_all;
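
One likely culprit in the loop above: position(newlinkid in startString) finds the first occurrence of those digits anywhere in the string, which can sit at or before the current match, so startString may never shrink and the inner loop spins forever. The greedy '.*' also lets linkString swallow everything up to the last '=digits'. A sketch of a safer inner loop, using PostgreSQL's non-greedy ARE quantifier and advancing past the whole matched link text (this is an untested fragment, not the poster's code):

```sql
-- Inside the inner LOOP: match non-greedily, then skip past the match
-- so each iteration is guaranteed to make progress.
linkString := substring(startString from 'linkadministration.*?=[0-9]+');
IF linkString IS NULL THEN
    EXIT;
END IF;
newlinkid := substring(linkString from '=([0-9]+)');
-- advance past the matched text, not just to the digits
startString := substring(startString
                 from position(linkString in startString)
                      + char_length(linkString));
```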

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Composite Keys

2007-02-26 Thread Alvaro Herrera
RPK wrote:
> 
> Jorge,
> 
> For other tables I have ID field which is incremented by sequence. But for
> this table, there is not ID field. Receipt No will be incremented by finding
> the max value from the existing Receipt Nos. corresponding to that Book No.
> This case has a drawback as compared to the sequences in other tables.
> Sequences are automatically handled by the database if two users
> simultaneously enter data. But for the Receipts table where there is no need
> to define a sequence, one user will find the Max(ReceiptNo) and type in the
> rest of the entries. So there is a chance that in the meantime another user
> on a different machine will also get the same max(ReceiptNo) until the
> record of the previous user gets saved.
> 
> So how to solve this problem?

Lock the table beforehand.  Only one user can be getting the
max(ReceiptNo) that way.

Alternatively, you could use userlocks, so that you can lock, generate
the number, unlock.  And you can use it to lock that particular BookNo,
not the whole table.  (In Postgres 8.2 the facility is called "advisory
locks" and comes with the core code; in older releases it's in contrib
and it's called "userlock").
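
With 8.2's advisory locks, that could look roughly like the following. The receipts table, its columns, and the choice of BookNo as the lock key are assumptions for illustration:

```sql
BEGIN;
-- Serialize receipt numbering per book: 42 stands in for the BookNo.
SELECT pg_advisory_lock(42);

-- Safe now: no other session holding this lock can read a stale max.
SELECT COALESCE(MAX(receipt_no), 0) + 1
  FROM receipts
 WHERE book_no = 42;

-- INSERT the new receipt here using the number just computed, then:
SELECT pg_advisory_unlock(42);
COMMIT;
```

Unlike table locks, advisory locks are not released automatically at commit, hence the explicit pg_advisory_unlock call.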

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] help required regarding queryin postgis database from google maps

2007-02-26 Thread Andrew Hammond
On Feb 25, 9:34 am, [EMAIL PROTECTED] (Andrew Dunstan) wrote:
> Phani Kishore wrote:
>
> > hi !
>
> > i think you could probably help me with how to query
> > pgsql/PostGIS from the Google Maps API to display the markers on
> > Google Maps which are stored in the PostGIS database.
> > Phani Kishore
> > Tata Consultancy Services
> > Mailto: [EMAIL PROTECTED]
> > Website:http://www.tcs.com
>
> This list is not about how to use postgres. Please ask in the correct forum.

pgsql-general would be a good place to start.

Andrew


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Composite Keys

2007-02-26 Thread RPK

Alvaro,

I am using VB.NET. How do I enable locking for PostgreSQL from within
VB.NET? Which command needs to be executed?



Alvaro Herrera-7 wrote:
> 
>>Lock the table beforehand.  Only one user can be getting the
>>max(ReceiptNo) that way.
> 
>>Alternatively, you could use userlocks, so that you can lock, generate
>>the number, unlock.  And you can use it to lock that particular BookNo,
>>not the whole table.  (In Postgres 8.2 the facility is called "advisory
>>locks" and comes with the core code; in older releases it's in contrib
>>and it's called "userlock").
> 
> -- 
> Alvaro Herrera   
> http://www.CommandPrompt.com/
> The PostgreSQL Company - Command Prompt, Inc.
> 
> ---(end of broadcast)---
> TIP 5: don't forget to increase your free space map settings
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Composite-Keys-tf3282722.html#a9166441
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] psql : password on Win32

2007-02-26 Thread Fabio D'Ovidio
Hi folks !
I am new to the list, even though I have used PostgreSQL/PostGIS since I
started working at Planetek.

I have a problem with the psql command line on Windows XP. I want to set
the password for the user postgres, in order to execute an SQL script
directly from a batch file without entering the password at the prompt.

The command is :

psql -U postgres -d mydb -f script.sql

I have used pgpass.conf file with the syntax :

localhost:5432:mydb:postgres:password

and it doesn't work.

How can i do ?

Thanks!

--
Fabio D'Ovidio

Web GIS Staff 
Planetek Italia s.r.l.
Via Massaua 12, I-70123 Bari 
Tel.: +39 080 5343750 Fax: +39 080 5340280 
[EMAIL PROTECTED] - http://www.planetek.it 
--




---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [pgsql-advocacy] [GENERAL] PostgreSQL on Windows Paper

2007-02-26 Thread Dave Page
Magnus Hagander wrote:
>> - It says that 'one should expect performance on Windows to be lower
>> [because of the per-process architecture], especially where large
>> numbers of small queries are made.' That's not really accurate - it will
>> be slower when there are large numbers of short lived connections. Lots
>> of queries in one connection should be fine though.
> 
> I believe ou will still see worse performance, because of at least two
> things: context switching is more expensive (much more), and shared
> memory access appears to be more expensive.
> It will be worse if you have short lived connections, of course.

OK, 'relatively speaking'. I think the important part is the connection
setup time, if only because many web apps may setup new connections for
every page (for example) which is where people often seem to come
unstuck and really see the performance hit.

Regards, Dave.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] psql : password on Win32

2007-02-26 Thread A. Kretschmer
On Mon, 26 Feb 2007 at 19:34:25 +0100, Fabio D'Ovidio wrote:
> I have used pgpass.conf file with the syntax :
> 
> localhost:5432:mydb:postgres:password
> 
> and it doesn't work.

That colon-separated syntax (hostname:port:database:username:password) is
actually correct for pgpass.conf. On Windows, check that the file lives at
%APPDATA%\postgresql\pgpass.conf and that the last field contains the
actual password rather than the placeholder word "password".


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] General Ledger db design

2007-02-26 Thread Filipe Fernandes
> Look at SQL-Ledger and LedgerSMB
[snip]
> Ledger SMB (http://www.ledgersmb.org/about/) might be a place to start.

Thanks Ron and Brent for the suggestion... very much appreciated.

filipe

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Writing oracle/postgress generic SQL

2007-02-26 Thread Joshua D. Drake

>>  Unless, as with rare beasties like Science Tools, the
>> major purpose of the application is to support multiple DBMS
>> back-ends, it's just too expensive.  Even in those rare cases, it's
>> expensive.
> 
> I guess anything you have to pay for is too expensive. (Sounds like dogma
> to me. And you know what dogma makes - just don't step in it.)

*cough* There really isn't a good argument in general for abstracting
out database access to support multiple platforms.

The only argument I ever see is:

We want our product to support as many databases as possible. That is
certainly a valid business argument, but it is not a good technical
argument.

> 
>>> Are there things it misses?  Yes, but not much.  I'll take the wild
>>> guess that more than 80% of applications are completely and
>>> adequately served.
>> That says something about the applications you've seen, and not about
>> the adequacy of such a library.
> 
> That remark is uninformed and arrogantly presumptuous about both me and
> the library, and uninsightful regarding the implementation of
> applications. It's also needlessly offensive, if you'll forgive the pun.
>

I am not sure why you would be offended by another's experience. I am
offended that you are offended that he wasn't offended. Good lord, take
a breath.

I would agree that in my experience most applications that choose to
abstract their database usage generally make bad choices in how they do
it and thus have a negative impact on not only the survivability of
existing code but the maintainability of said code.

Are there apps out there that do it right? Oh, probably; I have never
seen one though.

> 
> The short of it is that Science Tools is surely not alone in having
> developed an SQL dialect translator, though we may be the only ones to
> offer it to customers. Either way, automated dialect translation, whether
> by us otherwise, is another useful choice whether _you_ like it or not.

useful not always == good.

Windows is useful.

Windows is not good.

Joshua D. Drake


> 
> Ciao,
> Richard
> 


-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Writing oracle/postgress generic SQL

2007-02-26 Thread David Fetter
On Mon, Feb 26, 2007 at 08:01:52AM -0800, Richard Troy wrote:
> On Fri, 23 Feb 2007, David Fetter wrote:
> > On Fri, Feb 23, 2007 at 08:28:06AM -0800, Richard Troy wrote:
> > > On Fri, 23 Feb 2007, David Fetter wrote:
> > > > On Fri, Feb 23, 2007 at 10:23:56AM +0100, Ben Edwards wrote:
> > > > > Anyone know of any guidelines for writing SQL which works under
> > > > > Oracle witch will also work under postgress.  This is to ensure that
> > > > > SQL written for an Oracle database can be migrated to postgress
> > > > > later.
> > > >
> > > > You've just bumped into the problem that while standard SQL exists,
> > > > only Mimer and possibly DB2 implement it.  The presentation below
> > > > outlines your main choices for supporting more than one DB back-end,
> > > > and they're all expensive and troublesome to maintain.
> > > >
> > > > http://www.powerpostgresql.com/Downloads/database_depends_public.swf
> > >
> > > With all due respect to Josh's presentation, there's a lot more
> > > to the story than those couple of slides.
> >
> > With all due respect, the presentation was if anything an
> > understatement.
> 
> Yes; it didn't say very much.  I'm sure Josh, as speaker, articulated
> what wasn't in those slides, but we didn't get the benefit of that
> on the web.

The presentation understated the problems with trying to support more
than one DBMS back-end.

> > Unless, as with rare beasties like Science Tools, the major
> > purpose of the application is to support multiple DBMS back-ends,
> > it's just too expensive.  Even in those rare cases, it's
> > expensive.
> 
> I guess anything you have to pay for is too expensive. (Sounds like
> dogma to me.  And you know what dogma makes - just don't step in it.)

If you're determined to take offense, especially after your phone call
wherein I thought we had discussed this rationally, I can't stop you.

I can tell you that I've tried many times over the years and seen
plenty of other efforts to make database-independent code, and in no
case was it cheap even to attempt.  Either it pushes lots of work from
the database out into application land, or it's duplicating database
code that essentially does the same thing for each back-end RDBMS.

The first is expensive because the applications are now doing things
that the database is good at, and the second is expensive because
maintaining parallel code bases where the design criterion is that
they must behave identically is never going to be cheap.  The first
piles on the second one's cost as soon as there is more than one
application.

None of this has anything to do with the business model.  It has to do
with essential qualities of software development.

> > > Are there things it misses?  Yes, but not much.  I'll take the
> > > wild guess that more than 80% of applications are completely and
> > > adequately served.
> >
> > That says something about the applications you've seen, and not
> > about the adequacy of such a library.
> 
> That remark is uninformed and arrogantly presumptuous about both me
> and the library, and uninsightful regarding the implementation of
> applications.  It's also needlessly offensive, if you'll forgive the
> pun.

Since Science Tools is not in the business of selling SQL translators,
you'll of course be delighted to show just exactly how it works and
for what cases.  The "treat the DBMS as a dumb data store" model is
one that's been widely tested and proven inadequate from the viewpoint
of the organization that has to maintain said data store.  That model
can be quite lucrative for vendors, and more power to them.

> > What point is there in using a powerful tool like an RDBMS and
> > then hobbling yourself by only using 10% of the available
> > features?  It's certainly a bad thing to do by default.
> 
> 10%?  Whatever.  I never said anything of the kind - and I'm reminded
> that an unsupported argument can be dismissed without support.  But
> there ARE good reasons.  We read on this very list about two weeks
> ago a long treatise on the subject by an obviously long-in-the-tooth
> DBA type who articulately took at least four pages to tell us why it
> was his practice and advice to always be able to move to another
> RDBMS.  Perhaps read the archives and become informed...

I'm informed.  I am aware that some of the cute tricks DBMS vendors
used to play by making it expensive to switch back-ends weren't
terribly ethical, just as the cute tricks Unix vendors used to play
weren't.  That was the late 1980s and early 1990s, and the situation
now is different.  Without needing to introduce intentional
incompatibilities, RDBMSs are so different from one another that it's
just about impossible to make code that's exactly identical, one to
the other.  I'd contend that it's impossible without pushing work out
into the application layers, which is that "dumb data store" model.

> > > It has pass-through capability so you can still get at
> > > engine-specific features, though it does completely side

[GENERAL] grant on sequence and pg_restore/pg_dump problem

2007-02-26 Thread Tony Caduto

Hi,
I did a quick search and didn't see anything on this, if I missed it 
sorry in advance.
Anyway, I was doing a restore of an 8.1 database (on an 8.1 server) using 
the 8.2 pg_restore, and it was throwing errors when it was trying to
restore the permissions on the sequences.
Basically, pg_restore was issuing GRANT ON SEQUENCE against the 8.1 
database, which of course 8.1 knows nothing about.


Is there a switch or something I missed that would allow this to work 
properly on an 8.1 or lower database?

Or do I have to now have 2 versions of dump/restore in order to do this?

Thanks,

--
Tony 



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Most efficient report of number of records in all tables?

2007-02-26 Thread D. Dante Lorenso

All,

I can find the names of all tables in the database with this query:

   SELECT table_name
   FROM information_schema.tables
   WHERE table_type = 'BASE TABLE'
   AND table_schema NOT IN ('pg_catalog', 'information_schema')
   ORDER BY table_name ASC;


Then, in code, I can loop through all the table names and run the 
following query:


   SELECT COUNT(*) AS result
   FROM $table;


But this can be slow when I have a large number of tables or some 
tables have several million rows.


Is there a faster way to get this data using table statistics or 
something like that?  Perhaps something in a single query?


-- Dante




---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] preventing ALTER TABLE RENAME from changing view definitions?

2007-02-26 Thread George Pavlov
Currently "ALTER TABLE ... RENAME TO ..." results in all views that
refer to the table being rewritten with the new table name. This is a
good thing in the general case, but there are also situations where it
is not (e.g. temporarily renaming tables for data reorg reasons). I
can't seem to find a clean way to only rename the table without causing
a change to the view. The ONLY keyword does not work in this case.
Is there anything I am missing (short of re-creating all views)? I am on 8.1.

test=> create table a (col int);
CREATE TABLE
test=> create view v_a as select col from a;
CREATE VIEW
test=> \d v_a
  View "public.v_a"
 Column |  Type   | Modifiers 
+-+---
 col| integer | 
View definition:
 SELECT a.col
   FROM a;

test=> alter table a rename to b;
ALTER TABLE
test=> \d v_a
  View "public.v_a"
 Column |  Type   | Modifiers 
+-+---
 col| integer | 
View definition:
 SELECT a.col
   FROM b a;

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Most efficient report of number of records in all tables?

2007-02-26 Thread Dann Corbit
If you only need a cardinality estimate, then pg_class.reltuples may be
of help (it will be accurate as of when the last vacuum was performed).
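
For example, something along these lines pulls all the estimates in one
query (my sketch, untested here; the schema filter is an assumption to
match your original query):

```sql
-- Approximate row counts for all user tables, straight from the
-- planner statistics; reltuples is a float and only as fresh as
-- the last VACUUM or ANALYZE on each table.
SELECT c.relname AS table_name,
       c.reltuples::bigint AS approx_rows
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
ORDER BY c.relname;
```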

If you need exact counts then there are a couple of problems:
1.  An MVCC database cannot store an exact count, because the count can
differ from user to user (each transaction sees its own snapshot).
Hence, to collect the exact number, a table scan is necessary.
2.  The number can be invalid immediately after the query and might be
different for different users anyway.

What are you doing with those numbers?

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:pgsql-general-
> [EMAIL PROTECTED] On Behalf Of D. Dante Lorenso
> Sent: Monday, February 26, 2007 2:20 PM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] Most efficient report of number of records in all
> tables?
> 
> All,
> 
> I can find the names of all tables in the database with this query:
> 
> SELECT table_name
> FROM information_schema.tables
> WHERE table_type = 'BASE TABLE'
> AND table_schema NOT IN ('pg_catalog', 'information_schema')
> ORDER BY table_name ASC;
> 
> 
> Then, in code, I can loop through all the table names and run the
> following query:
> 
> SELECT COUNT(*) AS result
> FROM $table;
> 
> 
> But this can be slow when I have a large number of tables or some
> tables have several million rows.
> 
> Is there a faster way to get this data using table statistics or
> something like that?  Perhaps something in a single query?
> 
> -- Dante
> 
> 
> 
> 
> ---(end of
broadcast)---
> TIP 6: explain analyze is your friend

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Most efficient report of number of records in all tables?

2007-02-26 Thread D. Dante Lorenso

Dann Corbit wrote:

If you only need a cardinality estimate, then pg_class.reltuples may be
of help (it will be accurate as of when the last vacuum was performed).
  


Last vacuum ... how does that work with autovacuum?


If you need exact counts then there are a couple of problems:
1.  An MVCC database cannot store an exact count, because it can differ
by user.  Hence, to collect the exact number, a table scan is necessary.
  
A table scan ... ouch?  I just assumed that SELECT COUNT(*) FROM table_name 
would be a fast query internally.  I see what you mean about MVCC, though.



2.  The number can be invalid immediately after the query and might be
different for different users anyway.
  
The numbers don't really need to be 100% accurate (it's just a ballpark 
stat).



What are you doing with those numbers?
  


It's just an administrative report showing patterns of growth in our 
database storage.  We are trying to keep statistics for users, and our 
stats tables are generating about 50,000 records daily.  We only know 
this because we have this report, which shows table record 
counts daily.


-- Dante

  

-Original Message-
From: [EMAIL PROTECTED] [mailto:pgsql-general-
[EMAIL PROTECTED] On Behalf Of D. Dante Lorenso
Sent: Monday, February 26, 2007 2:20 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Most efficient report of number of records in all
tables?

All,

I can find the names of all tables in the database with this query:

SELECT table_name
FROM information_schema.tables
WHERE table_type = 'BASE TABLE'
AND table_schema NOT IN ('pg_catalog', 'information_schema')
ORDER BY table_name ASC;


Then, in code, I can loop through all the table names and run the
following query:

SELECT COUNT(*) AS result
FROM $table;


But this can be slow when I have a large number of tables or some
tables have several million rows.

Is there a faster way to get this data using table statistics or
something like that?  Perhaps something in a single query?

-- Dante




---(end of


broadcast)---
  

TIP 6: explain analyze is your friend



  




---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] preventing ALTER TABLE RENAME from changing view definitions?

2007-02-26 Thread Richard Huxton

George Pavlov wrote:

Currently "ALTER TABLE ... RENAME TO ..." results in all views that
refer to the table being rewritten with the new table name. This is a
good thing in the general case, but there are also situations where it
is not (e.g. temporarily renaming tables for data reorg reasons). I
can't seem to find a clean way to only rename the table without causing
a change to the view. The ONLY keyword does not work in this case.
Is there anything I am missing (short of re-creating all views)? I am on 8.1.


It'll either rename in all views, or not happen at all. Otherwise, 
there'd be nothing to stop you replacing the table with one that has an 
incompatible definition.


Is this something you do regularly, and if so what are you trying to 
achieve?


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Most efficient report of number of records in all tables?

2007-02-26 Thread Alvaro Herrera
D. Dante Lorenso wrote:
> Dann Corbit wrote:
> >If you only need a cardinality estimate, then pg_class.reltuples may be
> >of help (it will be accurate as of when the last vacuum was performed).
> 
> Last vacuum ... how does that work with autovacuum?

The same, only that you'd have to monitor autovac activity to know for
sure how far back it is :-)

However, note that the optimizer uses the following trick to guesstimate
the number of tuples on any given table: first, it asks the kernel for
the number of blocks in the files corresponding to the table.  Then, it
uses the pg_class.reltuples and relpages values to estimate a "tuple
density" (tuples per page), then multiplies by the number of blocks.  As
far as estimates go, this one is pretty good.

The only thing I'm not sure how to get from SQL is the actual number of
blocks.  You could trivially build a C function to get it.  In fact,
contrib/pgstatindex in CVS head (not sure if it's in 8.2) contains such
a function which you could borrow, pg_relpages().
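
As a rough SQL-only version of that trick (my sketch, assuming
pg_relation_size() is available -- it is in 8.1 and later -- and the
default 8 kB block size):

```sql
-- Estimate rows as tuple density (reltuples/relpages) times the
-- table's current number of blocks (on-disk size / 8192-byte pages).
SELECT relname,
       CASE WHEN relpages > 0
            THEN ((reltuples / relpages)
                  * (pg_relation_size(oid) / 8192))::bigint
            ELSE 0
       END AS estimated_rows
FROM pg_class
WHERE relkind = 'r';
```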

-- 
Alvaro Herrera                         http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] dropping a master table and all of its partitions?

2007-02-26 Thread George Nychis

Hey everyone,

I created a master table, and created ~2000 partitions for it.

*no* data is in any of these partitions.

I am trying to drop the master and all of the partitions with a cascade:
DROP TABLE master CASCADE;

Except after about 30 seconds my memory usage (4GB) jumps to 99%, and 
after about 10 minutes it keels over and drops my connection.


How do you delete a master and all of its partitions?

Thanks!
George

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] dropping a master table and all of its partitions?

2007-02-26 Thread Erik Jones
Did you use some kind of sensical naming convention for the child  
tables?  If so, couldn't you write a script to loop through and drop  
them one at a time?
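
If the names don't follow a convention, one alternative (my sketch,
using the system catalogs) is to generate the DROP statements from
pg_inherits and feed the output back through psql:

```sql
-- Emit one DROP TABLE per direct child of "master"; run the result
-- back through psql, e.g.:
--   psql -tA -c "<this query>" dbname | psql dbname
SELECT 'DROP TABLE ' || c.relname || ';'
FROM pg_inherits i
JOIN pg_class c ON c.oid = i.inhrelid
WHERE i.inhparent = 'master'::regclass;
```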


On Feb 26, 2007, at 6:42 PM, George Nychis wrote:


Hey everyone,

I created a master table, and created ~2000 partitions for it.

*no* data is in any of these partitions.

I am trying to drop the master and all of the partitions with a  
cascade:

DROP TABLE master CASCADE;

Except after about 30 seconds my memory usage (4GB) jumps to 99%,  
and after about 10 minutes it keels over and drops my connection.


How do you delete a master and all of its partitions?

Thanks!
George

---(end of  
broadcast)---

TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


erik jones <[EMAIL PROTECTED]>
software developer
615-296-0838
emma(r)





Re: [GENERAL] dropping a master table and all of its partitions?

2007-02-26 Thread George Nychis

Sure I can do that, but why is this happening?  Is this normal behavior?

- George


Erik Jones wrote:
Did you use some kind of sensical naming convention for the child 
tables?  If so, couldn't you write a script to loop through and drop 
them one at a time?


On Feb 26, 2007, at 6:42 PM, George Nychis wrote:


Hey everyone,

I created a master table, and created ~2000 partitions for it.

*no* data is in any of these partitions.

I am trying to drop the master and all of the partitions with a cascade:
DROP TABLE master CASCADE;

Except after about 30 seconds my memory usage (4GB) jumps to 99%, and 
after about 10 minutes it keels over and drops my connection.


How do you delete a master and all of its partitions?

Thanks!
George

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


erik jones <[EMAIL PROTECTED] >
software developer
615-296-0838
emma(r)






---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] dropping a master table and all of its partitions?

2007-02-26 Thread Joshua D. Drake
George Nychis wrote:
> Sure I can do that, but why is this happening?  Is this normal behavior?

Well, that is the better question. If it is indeed doing what you say it
is doing, I would say it is a bug. However, you have not mentioned
several important items, like what PostgreSQL version you are running.

Joshua D. Drake


> 
> - George
> 
> 
> Erik Jones wrote:
>> Did you use some kind of sensical naming convention for the child
>> tables?  If so, couldn't you write a script to loop through and drop
>> them one at a time?
>>
>> On Feb 26, 2007, at 6:42 PM, George Nychis wrote:
>>
>>> Hey everyone,
>>>
>>> I created a master table, and created ~2000 partitions for it.
>>>
>>> *no* data is in any of these partitions.
>>>
>>> I am trying to drop the master and all of the partitions with a cascade:
>>> DROP TABLE master CASCADE;
>>>
>>> Except after about 30 seconds my memory usage (4GB) jumps to 99%, and
>>> after about 10 minutes it keels over and drops my connection.
>>>
>>> How do you delete a master and all of its partitions?
>>>
>>> Thanks!
>>> George
>>>
>>> ---(end of broadcast)---
>>> TIP 4: Have you searched our list archives?
>>>
>>>   http://archives.postgresql.org/
>>
>> erik jones <[EMAIL PROTECTED] >
>> software developer
>> 615-296-0838
>> emma(r)
>>
>>
>>
> 
> 
> ---(end of broadcast)---
> TIP 3: Have you checked our extensive FAQ?
> 
>   http://www.postgresql.org/docs/faq
> 


-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] dropping a master table and all of its partitions?

2007-02-26 Thread George Nychis

I tend to forget the important details ;)

[EMAIL PROTECTED]:~$ psql --version
psql (PostgreSQL) 8.1.8
contains support for command-line editing
[EMAIL PROTECTED]:~$ uname -a
Linux sn001 2.6.17-10-server #2 SMP Tue Dec 5 21:17:26 UTC 2006 x86_64 
GNU/Linux


- George


Joshua D. Drake wrote:

George Nychis wrote:

Sure I can do that, but why is this happening?  Is this normal behavior?


Well, that is the better question. If it is indeed doing what you say it
is doing, I would say it is a bug. However, you have not mentioned
several important items, like what PostgreSQL version you are running.

Joshua D. Drake



- George


Erik Jones wrote:

Did you use some kind of sensical naming convention for the child
tables?  If so, couldn't you write a script to loop through and drop
them one at a time?

On Feb 26, 2007, at 6:42 PM, George Nychis wrote:


Hey everyone,

I created a master table, and created ~2000 partitions for it.

*no* data is in any of these partitions.

I am trying to drop the master and all of the partitions with a cascade:
DROP TABLE master CASCADE;

Except after about 30 seconds my memory usage (4GB) jumps to 99%, and
after about 10 minutes it keels over and drops my connection.

How do you delete a master and all of its partitions?

Thanks!
George

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/

erik jones <[EMAIL PROTECTED] >
software developer
615-296-0838
emma(r)





---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq







---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Most efficient report of number of records in all tables?

2007-02-26 Thread Merlin Moncure

On 2/27/07, D. Dante Lorenso <[EMAIL PROTECTED]> wrote:

Dann Corbit wrote:
> If you only need a cardinality estimate, then pg_class.reltuples may be
> of help (it will be accurate as of when the last vacuum was performed).
>

Last vacuum ... how does that work with autovacuum?


'analyze' updates pg_class.reltuples also.  It is also cheaper than
vacuum...you can force a fresh one by doing an analyze before you do
your sweep.

merlin

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] preventing ALTER TABLE RENAME from changing view definitions?

2007-02-26 Thread Tom Lane
"George Pavlov" <[EMAIL PROTECTED]> writes:
> Currently "ALTER TABLE ... RENAME TO ..." results in all views that
> refer to the table to be rewritten with the new table name.

They are not "rewritten".  Views refer to tables by OID, and are
therefore entirely insensitive to RENAME operations.  This is not
something we're likely to change.

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] dropping a master table and all of its partitions?

2007-02-26 Thread Alvaro Herrera
George Nychis wrote:
> I tend to forget the important details ;)
> 
> [EMAIL PROTECTED]:~$ psql --version
> psql (PostgreSQL) 8.1.8
> contains support for command-line editing
> [EMAIL PROTECTED]:~$ uname -a
> Linux sn001 2.6.17-10-server #2 SMP Tue Dec 5 21:17:26 UTC 2006 x86_64 
> GNU/Linux

Just tried it here, worked without a hitch.

create a table "parent", then 2000 children
for i in `seq 1 2000`; do psql -c "create table child_$i (b int) inherits 
(parent)"; done

then
DROP TABLE parent CASCADE

and it took some seconds.  I have 1 GB of physical RAM here.

You're going to have to give us a lot more details ...

-- 
Alvaro Herrera                         http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] dropping a master table and all of its partitions?

2007-02-26 Thread Tom Lane
"Joshua D. Drake" <[EMAIL PROTECTED]> writes:
> George Nychis wrote:
>> Sure I can do that, but why is this happening?  Is this normal behavior?

> Well that is the better question. If it is indeed doing what you say it
> is doing, I would say it is a bug. However you have not mentioned
> several important items, like what postgresql version you are running.

Or even more to the point, the table schemas.  I would guess that this
could be related to any number of things; perhaps the number of foreign
keys involved, to take one example.

In short: please provide a test case.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Connecting to a remote server, pg_hba.conf error?

2007-02-26 Thread novnov

I am trying to connect to a postgres 8.1 installation on an Ubuntu box from
Windows XP. I get this error: FATAL: missing or erroneous pg_hba.conf file,
HINT: see server log for details.

The server ip address is 192.168.1.10. The workstation ip address is
192.168.2.100. The server firewall allows connections on port 5432. I am
past the 'is the server open on port 5432?' errors, now it's 'missing or
erroneous pg_hba.conf'. I've added a line like this to that file:

host   all   all   192.168.2.100   md5 (have tried trust also)

As far as spacing of those entries goes, it's in keeping with the other
entries. 

Also I edited the postgresql.conf file so that listen_address = '*'

If I rem the line I added, I get a different error message (FATAL: no
pg_hba.conf entry for host "192.168.1.100", user "postgres", database
"template1", SSL off).

Note that this is not the IP address of the XP workstation (192.168.2.100) or the
Ubuntu server (192.168.1.10); it's the IP address of a router that stands
between them. But I'm guessing that the workstation can 'see' the Ubuntu
PostgreSQL installation, or altering the pg_hba.conf file wouldn't have any
effect at all.

I'd sure appreciate any help with this. I'm not sure how to check the log on
the Ubuntu server as the error suggests... Yes, I'm new to this; I might do
better if I could use one of the Postgres GUI tools, but that's
exactly what I'm not able to do at the moment.


-- 
View this message in context: 
http://www.nabble.com/Connecting-to-a-remote-server%2C-pg_hba.conf-error--tf3297991.html#a9174496
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Connecting to a remote server, pg_hba.conf error?

2007-02-26 Thread Douglas McNaught
novnov <[EMAIL PROTECTED]> writes:

> I am trying to connect to a postgres 8.1 installation on a ubuntu box from
> windows xp. I get this error: FATAL: missing or erroneous pg_hba.conf file
> HINT see server log for details.
>
> The server ip address is 192.168.1.10. The workstation ip address is
> 192.168.2.100. The server firewall allows connections on port 5432. I am
> past the 'is the server open on port 5432?' errors, now it's 'missing or
> erroneous pg_hba.conf'. I've added a line like this to that file:
>
> host   all   all   192.168.2.100   md5 (have tried trust also)

I think you need a netmask on this IP:

host   all   all   192.168.2.100/32   md5

-Doug

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Connecting to a remote server, pg_hba.conf error?

2007-02-26 Thread novnov

Wonderful, that solved it...THANK YOU!


Douglas McNaught wrote:
> 
> novnov <[EMAIL PROTECTED]> writes:
> 
>> I am trying to connect to a postgres 8.1 installation on a ubuntu box
>> from
>> windows xp. I get this error: FATAL: missing or erroneous pg_hba.conf
>> file
>> HINT see server log for details.
>>
>> The server ip address is 192.168.1.10. The workstation ip address is
>> 192.168.2.100. The server firewall allows connections on port 5432. I am
>> past the 'is the server open on port 5432?' errors, now it's 'missing or
>> erroneous pg_hba.conf'. I've added a line like this to that file:
>>
>> host   all   all   192.168.2.100   md5 (have tried trust also)
> 
> I think you need a netmask on this IP:
> 
> host   all   all   192.168.2.100/32   md5
> 
> -Doug
> 
> ---(end of broadcast)---
> TIP 3: Have you checked our extensive FAQ?
> 
>http://www.postgresql.org/docs/faq
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Connecting-to-a-remote-server%2C-pg_hba.conf-error--tf3297991.html#a9175269
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] PostgreSQL 8.2.x and JDBC driver

2007-02-26 Thread Kris Jurka



On Mon, 26 Feb 2007, DANTE Alexandra wrote:


I am not sure that this is the appropriate list, but I'll try...
I am trying to use BenchmarkSQL (release 2.3.2, built on February 12, 2006) with 
PostgreSQL 8.2.2 and then 8.2.3.
By default, the JDBC driver included with this release of BenchmarkSQL is 
"postgresql-8.0.309.jdbc3.jar".
So I downloaded "postgresql-8.2-504.jdbc3.jar" at 
http://jdbc.postgresql.org/download.html, as I use JDK 1.4 and PostgreSQL 8.2.x.


I have a question about the release of this JDBC driver: version 8.2-504 
was built on December 1st, 2006, that is to say for PostgreSQL 8.2.0, so 
can I use it with 8.2.2 and 8.2.3?  Does a release exist for these releases 
of PostgreSQL, or is it correct to use version 8.2-504?




8.2-504 is the release to use.  The JDBC driver has a different release 
schedule than the server, so it will be ahead of or behind the server 
release date, but the latest version is always OK to use.


Kris Jurka

---(end of broadcast)---
TIP 6: explain analyze is your friend