Re: [GENERAL] Need help with complicated SQL statement

2007-11-19 Thread Shane Ambler

Ted Byers wrote:

Thanks Shane,

It works reasonably well.  It gets the right answer,
but I guess my data set is much larger than your test.


What indexes have you got?

Using this index on the sample I sent gets the response time down to about 
5ms per stock_id (as opposed to 900ms with these columns indexed 
separately):


CREATE INDEX idx_stockprices_id_date ON stockprices (stock_id,price_date);



Please consider the appended data.

The first two SQL statements are directly comparable. 
My Left join is marginally simpler, as shown by

EXPLAIN, and runs to completion in about 3 seconds
(elapsed real time), as opposed to about 30 seconds
for the two views.  It makes a little sense, though,
in that according to EXPLAIN, the LEFT JOIN needs to
consider a dramatically smaller number of rows.  What
I find puzzling, though, is that it takes less time to
get the report for 28 stocks at a given time than it
does to get the report for 1. (Both take about 30
seconds, but for 28 stocks, it takes about 0.005
seconds less time ;-)

This is a case where LEFT JOINS appear to be much
faster than subqueries.

I appreciate all your help, but I am struggling to
figure out how best to adapt my LEFT JOINs in your
VIEWs, so that the latter benefit from the speed of
the JOINs.  The heart of my problem is to figure out
how to  use a stock_id in the WHERE clause.


That is where I have moved away from your select - the way you are 
joining makes it hard to adapt to where you want it to end up (more than 
one stock_id per query).


By using the view that generates the rows you want, you make the query 
sent from the client much simpler, and it becomes easy to fetch any 
single stock_id, or any list of them, that you want.
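For instance (a sketch - stock_price_history is the view name that appears in the EXPLAIN output later in this thread; adjust to your schema):

```sql
-- One stock:
SELECT * FROM stock_price_history WHERE stock_id = 1;

-- Any list of stocks:
SELECT * FROM stock_price_history WHERE stock_id IN (1, 5, 28);

-- Every stock on file, via a join against the stocks table:
SELECT h.*
FROM stocks s
JOIN stock_price_history h ON h.stock_id = s.stock_id;
```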


The second view adds little overhead and can be part of the select 
sent from the client if you wish. I separated them into two views to 
prevent duplicating the same selects for the calculations. You can merge 
them into one view if you wish - the first view would become a subselect 
for the second view.




One thing I am not certain of is whether there is a way to preserve the
logic of the WHERE clauses (which I use to sample the time series at 22
days ago, 66 days ago, 132 days ago, &c.) by replacing them with a
"GROUP BY" clause, grouping by stock_id. If so, might that, along with
an additional LEFT JOIN, get me the result I am after?

I created a stored procedure that takes an id argument
(and can usefully invoke it on any stock_id in the
database), but the problem remains as to how to
construct a record set by applying the procedure to
each id in a set of ids returned, e.g., by SELECT
stock_id FROM stocks;
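One way to do that last step, sketched with a hypothetical function name (in PostgreSQL, a function returning a composite row can be expanded per stock_id):

```sql
-- stock_report(integer) is assumed to be the stored procedure above,
-- returning one composite row of report columns per call:
SELECT s.stock_id, (stock_report(s.stock_id)).*
FROM stocks s;
```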

Ted



--

Shane Ambler
[EMAIL PROTECTED]

Get Sheeky @ http://Sheeky.Biz

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] plpythonu under OS X 10.4 -- a brief HOWTO

2007-11-19 Thread Philip Semanchuk

Hi all,
I just got plpythonu working under OS X, and I'm posting my HOWTO  
notes here in case I (or anyone else) needs them.


The install was pretty straightforward for me once I'd Googled the  
proper magic commands. I'm on OS X Tiger (10.4.10) which comes with  
Python 2.3 installed by default. I wanted to have Python 2.5  
available, so my first step was to download and build that. One must  
build Python with the "--enable-shared" flag in order to be able to  
use plpythonu, and I don't know if the stock OS X install of Python  
has that flag. So even if you're satisfied with the stock Python  
install version, you might need to build a new version anyway.


Here's how I made Python:

export MACOSX_DEPLOYMENT_TARGET=10.4
./configure --enable-shared --enable-framework
make
sudo make frameworkinstall

That installed into /Library/Frameworks/Python.framework/Versions/2.5. 
(Note that OS X's python lives in 
/System/Library/Frameworks/Python.framework/Versions/2.3, so the 2.5 
install lives alongside the system install rather than replacing it. 
This is good -- OS X uses Python and you don't want to monkey around 
with something on which OS X relies.) The installer helpfully symlinked 
/usr/local/bin/python to 
/Library/Frameworks/Python.framework/Versions/2.5/bin/python, so putting 
/usr/local/bin in my path before /usr/bin ensures that I invoke Python 
2.5 automatically.


After that, I rebuilt postgres (I'm using 8.1.4 and building from 
/usr/local/src) with the following commands. I don't know if the 
"--with-includes" and "--with-libraries" options are necessary for 
plpythonu, but I needed them to get readline support in psql. The 
'make clean' step was necessary for me to resolve a linker error.


./configure --with-includes=/usr/local/include --with-libraries=/usr/local/lib --with-python --enable-nls='en'

sudo make clean
sudo make install

I didn't back up my data or config files before reinstalling and do  
not appear to have lost anything. YMMV.


You should now have plpythonu available as a language in Postgres. In  
order to make it available in database my_stuff, start psql as  
superuser and execute these commands:


\c my_stuff

CREATE LANGUAGE plpythonu;

Since plpythonu is an untrusted language (that's what the 'u' on the  
end stands for), only superusers can create functions in that  
language. So as superuser you can create a function like so:


CREATE OR REPLACE FUNCTION show_python_version() RETURNS TEXT AS $$
import sys

return sys.version
$$ LANGUAGE plpythonu;

Then invoke psql as a normal user and try out your function:
> select * from show_python_version();
show_python_version

2.5.1 (r251:54863, Nov 17 2007, 21:19:53)
[GCC 4.0.1 (Apple Computer, Inc. build 5367)]
(1 row)


That's all! Enjoy.

Philip Semanchuk
first name @ last name.com




Re: [GENERAL] GIN: any ordering guarantees for the hits returned?

2007-11-19 Thread Alex Drobychev


David Fetter <[EMAIL PROTECTED]> wrote:

On Fri, Nov 16, 2007 at 07:56:45PM -0800, adrobj wrote:
> 
> Hello,
> 
> I have a moderately large (~10-20GB) table:
> 
> CREATE TABLE msgs (
> msg varchar(2048),
> msg_tsv tsvector,
> posted timestamp
> );
> 
> CREATE INDEX msgs_i ON msgs USING gin(msg_tsv);
> 
> The table never gets updated (more specifically, it gets re-created
> once a day with no updates in between).
> 
> I want to run queries of the following form:
> 
> SELECT msg, posted FROM msgs WHERE 'blah blah'::tsquery @@ msg_tsv
> ORDER BY posted DESC; (with various LIMIT/OFFSET)
> 
> Which obviously may get too expensive, for it will cause reading and
> sorting of all rows meeting the condition, i.e. too many disk reads.
> 
> On the other hand, (as far as I understand) GIN always produces hits
> already sorted in the insertion order.
> 
> So - what if I just populate my table in the order of decreasing
> 'posted', remove the "ORDER BY" clause and just hope for the best?
> Will the correct ordering be guaranteed?

Ordering is never guaranteed without an ORDER BY, except in the time
between a CLUSTER and the first write operation after it.

  Which sounds like my case - there are no writes to the table!
   
  Do I really need to CLUSTER - or would just doing the INSERTs in the right 
order be sufficient?
   
  
> If not, are there any other ideas around?

Rather than assuming you know where problems will arise, do some
profiling and find out where they actually do :)
   
  I agree with this maybe 98% - but not 100%. :-) Unfortunately performance can 
change rather unpredictably when the DB stops fitting in memory - say, 3-4 
months after a production roll-out, too late for profiling experiments. :-(
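For reference, the CLUSTER route David mentions would look roughly like this (a sketch - the index name is made up, and CLUSTER requires a btree index; it cannot use the GIN index):

```sql
-- Physically reorder the table by posted; the layout then holds
-- until the first write after the CLUSTER:
CREATE INDEX msgs_posted_idx ON msgs (posted);
CLUSTER msgs_posted_idx ON msgs;   -- 8.2-era syntax
-- (from 8.3 on: CLUSTER msgs USING msgs_posted_idx)
```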
  

Cheers,
David.
-- 
David Fetter http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: [EMAIL PROTECTED]

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate




Re: [GENERAL] ERROR: invalid restriction selectivity: 224359728.000000

2007-11-19 Thread xeb
On Sunday 18 November 2007 05:00:35, Scott Marlowe wrote:
> On Nov 16, 2007 11:59 AM,  <[EMAIL PROTECTED]> wrote:
> > Hello!
> > The postmaster process has been completely eating my processor for a long
> > time, and I see that message in the logs.
> > Does anybody know what the subject means and why it occurs?
>
> You're giving us WAY too little information to troubleshoot this problem.
>
> What message in the logs?  What does top / vmstat / ps / iostat really
> show?  What kind of query is running?  Have you got query logging
> turned on for long running queries?  Is this actually affecting the
> performance of your machine? Is the process chewing through all your
> memory?  If you know what query is causing this, what does explain
> analyze of the query look like?
>
> The more info you can provide about the issue, the more we can help.
> But first we have to be sure there's a problem.
>
> If you're asking postgresql to sort 1billion rows it is going to eat
> up a lot of CPU and there's little you can do about it.

OK, when it occurs again I'll collect all the information.

> Have you got query logging
> turned on for long running queries? 

How can I do that?
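(Not stated in the thread, but the usual way is the log_min_duration_statement setting - a sketch:)

```sql
-- In postgresql.conf:
--   log_min_duration_statement = 1000   # log statements taking >= 1000 ms
-- then reload the configuration, e.g. from psql as superuser:
SELECT pg_reload_conf();
```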

> Indeed, but it seems to have something to do with a broken selectivity
> estimator function (see restriction_selectivity()).  What PG version
> is this?  Do you have any add-on datatypes or operators installed?
> Exactly what is the query that triggers the problem?

PG 8.2.4

The database contains a phpBB forum, jabberd-2, and a gentoo-wiki mirror; 
which one causes the trouble, and with what query, I don't know.

I dropped gentoo-wiki yesterday and the trouble has not occurred since, so it 
seems gentoo-wiki was the cause, but I want to restore that database. 
As you may know, that database is fairly large.



Re: [GENERAL] postgresql storage and performance questions

2007-11-19 Thread Trevor Talbot
On 11/19/07, Josh Harrison <[EMAIL PROTECTED]> wrote:

> I have 2 tables with 2 cols each( 1 numeric(8,0) and 1 varchar(3) ).
> In table1 both the cols are filled and in table2 the varchar column is null
>
> So when I checked the tablesize for these two tables (using pg_relation_size)
> table1 - 57344 bytes (no null columns)
> table2 - 49152 bytes (varchar column is null)
>
> There is not much difference between the two sizes. So even if a column
> is null, PostgreSQL still has lots of overhead.
>  Does postgres occupy space even when the column is NULL?

PostgreSQL's disk storage works in "pages", where each page is 8KB.
It will use as much space within each page as it can.  Filip's last
link details this.

Is there a specific reason you're looking at this, as in you have some
requirement to meet?  Or just curious how it works?



Re: [GENERAL] plpython array support

2007-11-19 Thread Sean Davis
On Nov 19, 2007 9:08 PM, Jorge Godoy <[EMAIL PROTECTED]> wrote:
> Em Monday 19 November 2007 19:29:51 Sean Davis escreveu:
> > Just a simple question--does plpythonu (8.3beta) have support for
> > arrays?  I don't see a specific mention of it in the docs, so I
> > suppose not.
>
> Arrays have worked for a long time now.  I've been using them since 8.1, for sure,
> but I think that on 7.4 I already had code with arrays inside the database...

Thanks, Jorge.  I know that postgresql supports arrays, but I still
don't see how plpython supports arrays.  I noticed this post:

http://archives.postgresql.org/pgsql-general/2007-01/msg01417.php

from January, 2007 that suggests how to use the string representation
of an array to convert to/from python arrays.  This will work, but I
was wondering if there is now a true conversion from postgresql arrays
to python and vice-versa.
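In the meantime, the string-representation workaround from that post looks roughly like this (a sketch with a made-up function name; it hand-parses the array's text form and only handles flat integer arrays):

```sql
CREATE OR REPLACE FUNCTION sum_int_array(a text) RETURNS integer AS $$
# 'a' arrives as the array's text form, e.g. '{1,2,3}'.
# Strip the braces and split on commas; quoted or nested
# elements would need real parsing.
items = a.strip('{}')
if not items:
    return 0
return sum(int(x) for x in items.split(','))
$$ LANGUAGE plpythonu;

-- Called with the array cast to text:
--   SELECT sum_int_array(ARRAY[1,2,3]::text);
```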

Thanks,
Sean



Re: [GENERAL] plpython and error catching

2007-11-19 Thread Adrian Klaver
On Monday 19 November 2007 10:37 am, Sean Davis wrote:
> What is the expected behavior of a construct like this:
>
> def insert_xml(elem):
> id=int(elem.findtext('PMID'))
> try:
> plpy.execute(plan,[unicode(ET.tostring(elem)),id])
> except:
> plpy.execute(plan2,[unicode(ET.tostring(elem)),id])
>
> id is a primary key on the table into which I am inserting.  "plan" is
> the execute plan for inserting new data.  "plan2" is for updating data
> already in the database.  When I run this, I am not able to catch
> errors of this type:
>
> WARNING:  plpython: in function insert_medline_file:
> DETAIL:  plpy.Error: Unknown error in PLy_spi_execute_plan
> ERROR:  duplicate key value violates unique constraint "medlinexml_pkey"
> CONTEXT:  SQL statement "insert into medlinexml(content,id) values
> (xmlparse(CONTENT $1),$2)"
>
> Why am I not able to catch this error and execute the update plan?
> The manual (8.3beta2) implies that errors generated in functions are
> genuine python errors that I should be able to catch.
>
> Thanks,
> Sean
You might want to take another look at the manual. It says an exception can be 
raised from within the function, using plpy.error or plpy.fatal, and passed to 
the calling query unless caught there. In other words, it is the opposite of 
what you are counting on.
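A common workaround, given that the SPI error is not catchable from Python here, is to decide between insert and update up front (a sketch reusing the table from Sean's error message; function name is made up, and this is not safe against concurrent writers):

```sql
CREATE OR REPLACE FUNCTION insert_or_update_xml(doc text, id integer)
RETURNS void AS $$
# Probe for the key first instead of catching the duplicate-key error.
plan_check = plpy.prepare("SELECT 1 FROM medlinexml WHERE id = $1", ["integer"])
plan_ins = plpy.prepare(
    "INSERT INTO medlinexml(content, id) VALUES (xmlparse(CONTENT $1), $2)",
    ["text", "integer"])
plan_upd = plpy.prepare(
    "UPDATE medlinexml SET content = xmlparse(CONTENT $1) WHERE id = $2",
    ["text", "integer"])
if len(plpy.execute(plan_check, [id])) == 0:
    plpy.execute(plan_ins, [doc, id])
else:
    plpy.execute(plan_upd, [doc, id])
$$ LANGUAGE plpythonu;
```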

-- 
Adrian Klaver
[EMAIL PROTECTED]



Re: [GENERAL] plpython array support

2007-11-19 Thread Jorge Godoy
Em Monday 19 November 2007 19:29:51 Sean Davis escreveu:
> Just a simple question--does plpythonu (8.3beta) have support for
> arrays?  I don't see a specific mention of it in the docs, so I
> suppose not.

Arrays have worked for a long time now.  I've been using them since 8.1, for sure, 
but I think that on 7.4 I already had code with arrays inside the database... 

-- 
Jorge Godoy  <[EMAIL PROTECTED]>




[GENERAL] PostgreSQL Conference 08 East!

2007-11-19 Thread Joshua D. Drake
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


It's that time: after a wildly successful conference last October in
Portland, Oregon, we are now beginning to ramp up for the East Coast 08
conference! The current plan is to host a two-day conference of
Tutorials (new) and Talks on March 28th and 29th. The currently
designated location for the conference is the University of Maryland.
This will be confirmed within two weeks.

For now, we are making a call out to the community: it was the hands of
the community that made the October conference great, and it will be the
hands of the community that make the March conference great!

We have already had a couple of offers of help, for which we are
grateful, but we want to make sure that we open this up to anyone who
may want to help organize the conference. Of specific interest are
community members that are geographically close to the Maryland area.
We will need boots on the ground to help us follow up with others (such
as student unions, etc.) to make sure we kick this conference off
without a hitch.

As a reminder all proceeds from the Conference series go directly to
PostgreSQL via Software in the Public Interest, a 501(c)3 non-profit,
and will be used for PostgreSQL development, support and advocacy.

So if you are on the east coast and can help with organizing this
conference please let me know.

Sincerely,

Joshua D. Drake

- -- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
PostgreSQL solutions since 1997  http://www.commandprompt.com/
UNIQUE NOT NULL
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFHQiXYATb/zqfZUUQRApK+AJ0WPG39t8CF2oOFF/uHhtgzo7zELgCghYy+
FNjnokvLINAvh8DxJxmctAI=
=gvjX
-END PGP SIGNATURE-



Re: [GENERAL] Need help with complicated SQL statement

2007-11-19 Thread Ted Byers
Thanks Shane,

It works reasonably well.  It gets the right answer,
but I guess my data set is much larger than your test.

Please consider the appended data.

The first two SQL statements are directly comparable. 
My Left join is marginally simpler, as shown by
EXPLAIN, and runs to completion in about 3 seconds
(elapsed real time), as opposed to about 30 seconds
for the two views.  It makes a little sense, though,
in that according to EXPLAIN, the LEFT JOIN needs to
consider a dramatically smaller number of rows.  What
I find puzzling, though, is that it takes less time to
get the report for 28 stocks at a given time than it
does to get the report for 1. (Both take about 30
seconds, but for 28 stocks, it takes about 0.005
seconds less time ;-)

This is a case where LEFT JOINS appear to be much
faster than subqueries.

I appreciate all your help, but I am struggling to
figure out how best to adapt my LEFT JOINs in your
VIEWs, so that the latter benefit from the speed of
the JOINs.  The heart of my problem is to figure out
how to  use a stock_id in the WHERE clause.

One thing I am not certain of is whether there is a way to preserve the
logic of the WHERE clauses (which I use to sample the time series at 22
days ago, 66 days ago, 132 days ago, &c.) by replacing them with a
"GROUP BY" clause, grouping by stock_id. If so, might that, along with
an additional LEFT JOIN, get me the result I am after?

I created a stored procedure that takes an id argument
(and can usefully invoke it on any stock_id in the
database), but the problem remains as to how to
construct a record set by applying the procedure to
each id in a set of ids returned, e.g., by SELECT
stock_id FROM stocks;

Ted

=== test data =
EXPLAIN SELECT A1.stock_id,
       A1.price_date,
       A1.adjusted,
       A2.price_date AS pd22,
       100.0 * (A1.adjusted - A2.adjusted) / A2.adjusted AS gl22pc,
       A3.price_date AS pd66,
       100.0 * (A1.adjusted - A3.adjusted) / A3.adjusted AS gl66pc,
       A4.price_date AS pd132,
       100.0 * (A1.adjusted - A4.adjusted) / A4.adjusted AS gl132pc,
       A5.price_date AS pd264,
       100.0 * (A1.adjusted - A5.adjusted) / A5.adjusted AS gl264pc
FROM
  (SELECT * FROM stockprices WHERE stock_id = 1
   ORDER BY price_date DESC LIMIT 1) AS A1
LEFT JOIN
  (SELECT * FROM stockprices WHERE stock_id = 1
   ORDER BY price_date DESC LIMIT 1 OFFSET 22) AS A2
  ON A1.stock_id = A2.stock_id
LEFT JOIN
  (SELECT * FROM stockprices WHERE stock_id = 1
   ORDER BY price_date DESC LIMIT 1 OFFSET 66) AS A3
  ON A1.stock_id = A3.stock_id
LEFT JOIN
  (SELECT * FROM stockprices WHERE stock_id = 1
   ORDER BY price_date DESC LIMIT 1 OFFSET 132) AS A4
  ON A1.stock_id = A4.stock_id
LEFT JOIN
  (SELECT * FROM stockprices WHERE stock_id = 1
   ORDER BY price_date DESC LIMIT 1 OFFSET 264) AS A5
  ON A1.stock_id = A5.stock_id;
+----+-------------+-------------+--------+---------------+---------+---------+------+-------+-------------+
| id | select_type | table       | type   | possible_keys | key     | key_len | ref  | rows  | Extra       |
+----+-------------+-------------+--------+---------------+---------+---------+------+-------+-------------+
|  1 | PRIMARY     |             | system | NULL          | NULL    | NULL    | NULL |     1 |             |
|  1 | PRIMARY     |             | system | NULL          | NULL    | NULL    | NULL |     1 |             |
|  1 | PRIMARY     |             | system | NULL          | NULL    | NULL    | NULL |     1 |             |
|  1 | PRIMARY     |             | system | NULL          | NULL    | NULL    | NULL |     1 |             |
|  1 | PRIMARY     |             | system | NULL          | NULL    | NULL    | NULL |     1 |             |
|  6 | DERIVED     | stockprices | ref    | PRIMARY       | PRIMARY | 4       |      | 17442 | Using where |
|  5 | DERIVED     | stockprices | ref    | PRIMARY       | PRIMARY | 4       |      | 17442 | Using where |
|  4 | DERIVED     | stockprices | ref    | PRIMARY       | PRIMARY | 4       |      | 17442 | Using where |
|  3 | DERIVED     | stockprices | ref    | PRIMARY       | PRIMARY | 4       |      | 17442 | Using where |
|  2 | DERIVED     | stockprices | ref    | PRIMARY       | PRIMARY | 4       |      | 17442 | Using where |
+----+-------------+-------------+--------+---------------+---------+---------+------+-------+-------------+
10 rows in set (0.08 sec)

EXPLAIN SELECT * FROM stock_price_history WHERE
stock_id = 1;
+----+-------------+-------+------+---------------+------+---------+------+------+-------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra       |
+----+-------------+-------+------+---------------+------+---------+------+------+-------------+
|  1 | PRIMARY     |       | ALL  | NULL          | NULL | NULL    | NULL |  494 | Using where |
|  3 | DERIVED     | ST

Re: [GENERAL] Postgre and XML

2007-11-19 Thread Raymond O'Donnell

On 19/11/2007 22:05, Raymond O'Donnell wrote:

On 19/11/2007 21:44, Thomas Kellerer wrote:

But I have no clue how you would enable the module after the 
installation has finished.


I think you have to run an SQL script to enable the module.


Sorry, that wasn't very helpful. :-) I should have said that there's 
usually an SQL script that comes with the particular module, which you 
need to run against your database to enable the module.


On my XP laptop, a peek into

  C:\Program Files\PostgreSQL\8.2\share\contrib

reveals files named pgxml.sql and uninstall_pgxml.sql which ought to do 
the job.
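So enabling it should be a matter of running that script against the target database, something like (a sketch - database and user names are made up, and the install path is quoted because of the space):

```sql
-- From a Windows command prompt:
--   psql -U postgres -d my_db -f "C:\Program Files\PostgreSQL\8.2\share\contrib\pgxml.sql"
-- or, from psql while connected to the database (after changing to the
-- contrib directory or copying pgxml.sql somewhere convenient):
\i pgxml.sql
-- uninstall_pgxml.sql reverses it the same way.
```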


A further peek into

  C:\Program Files\PostgreSQL\8.2\doc\contrib

reveals a whole bunch of README files, one for each contrib module, with 
detailed instructions and other good stuff.


Ray.

---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---



Re: [GENERAL] Postgre and XML

2007-11-19 Thread Raymond O'Donnell

On 19/11/2007 21:44, Thomas Kellerer wrote:

But I have no clue how you would enable the module after the 
installation has finished.


I think you have to run an SQL script to enable the module.

Ray.


---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---



Re: [GENERAL] Foreign keys and inheritance

2007-11-19 Thread Jeff Davis
On Mon, 2007-11-19 at 17:19 -0300, Alvaro Herrera wrote:
> Kynn Jones escribió:
> > I have two classes of objects, A and B, where B is just a special case
> > of A.  (I.e., to describe a B-type object I need to specify the same
> > fields as for an A-type object, plus a whole bunch additional fields
> > specific to B alone.)  Furthermore, there's a third class T that is in
> > a many-to-one relation with A (and hence also B) objects.
> > 
> > The question is, what's the "best practice" for implementing this
> > situation in PostgreSQL.  My first idea was to define B as inheriting
> > from A, which is OK, except that I have not figured out how to
> > implement the reference from T.  Is inheritance indeed the right tool
> > for this problem, or should I use a different approach?
> 
> It would be the right tool if the FKs worked :-(  Sadly, they don't.
> 

I don't think it's that bad of a situation. It would be great if
PostgreSQL did support keys across tables, but it's not necessary for a
good design in his case.

The difference between using inheritance and just using multiple tables
(like the alternative that I suggested) is the difference between
vertically partitioning and horizontally partitioning. Both seem like
good choices to me.

Regards,
Jeff Davis




Re: [GENERAL] Postgre and XML

2007-11-19 Thread Thomas Kellerer

x asasaxax wrote on 19.11.2007 12:17:

Hi,

   I'm interested in using XML with Postgres. I use Postgres version 8.2 on
Windows XP, and I would like to know how I can enable XML in PostgreSQL.

Do you know if it is safe to use these XML functions of Postgres in
commercial applications? How trustworthy is this module? Can anyone
explain to me how to install the XML module?

Thanks for the help.

The Postgres Windows installer will ask you if you want to install (and 
enable) the XML module (at least it did for me).


But I have no clue how you would enable the module after the installation has 
finished.


Regards
Thomas





Re: [GENERAL] Postgre and XML

2007-11-19 Thread Alvaro Herrera
x asasaxax escribió:
> I don't know how to enable XML in Postgres. I've tried to write an XML
> search function, but it seems that Postgres doesn't recognize the
> function. How can I install and use XML in Postgres?

What version of Postgres?  In Postgres 8.2 and earlier, you need to
install a Postgres contrib module.  In Postgres 8.3 (currently in beta),
you need to rebuild Postgres with --with-libxml to enable the internal
Postgres XML functionality.


Note to Postgres people: do we think that repetition will cause the
correct name to stick?

-- 
Alvaro Herrera  http://www.amazon.com/gp/registry/5ZYLFMCVHXC
"La gente vulgar solo piensa en pasar el tiempo;
el que tiene talento, en aprovecharlo"



[GENERAL] plpython array support

2007-11-19 Thread Sean Davis
Just a simple question--does plpythonu (8.3beta) have support for
arrays?  I don't see a specific mention of it in the docs, so I
suppose not.

Thanks,
Sean



Re: [GENERAL] Postgre and XML

2007-11-19 Thread x asasaxax
I don't know how to enable XML in Postgres. I've tried to write an XML
search function, but it seems that Postgres doesn't recognize the
function. How can I install and use XML in Postgres?

Thanks

2007/11/19, Bruce Momjian <[EMAIL PROTECTED]>:
>
> Peter Eisentraut wrote:
> > Am Montag, 19. November 2007 schrieb x asasaxax:
> > >I'm interested in running XML with Postgres. I use Postgres version
> 8.2
> > > and windows xp. I would like to know how can i enable the xml in the
> > > postgresql.
> >
> > That depends on what you want to do with it.  XML is quite a broad
> topic.
>
> Here is our Postgres 8.2 documentation about XML:
>
> http://www.postgresql.org/docs/8.2/static/datatype-xml.html
>
> Postgres 8.3 has some new XML features:
>
> http://www.postgresql.org/docs/8.3/static/release-8-3.html
>
> --
>   Bruce Momjian  <[EMAIL PROTECTED]>http://momjian.us
>   EnterpriseDB
> http://postgres.enterprisedb.com
>
>   + If your life is a hard drive, Christ can be your backup. +
>


Re: [GENERAL] Foreign keys and inheritance

2007-11-19 Thread Alvaro Herrera
Kynn Jones escribió:
> I have two classes of objects, A and B, where B is just a special case
> of A.  (I.e., to describe a B-type object I need to specify the same
> fields as for an A-type object, plus a whole bunch additional fields
> specific to B alone.)  Furthermore, there's a third class T that is in
> a many-to-one relation with A (and hence also B) objects.
> 
> The question is, what's the "best practice" for implementing this
> situation in PostgreSQL.  My first idea was to define B as inheriting
> from A, which is OK, except that I have not figured out how to
> implement the reference from T.  Is inheritance indeed the right tool
> for this problem, or should I use a different approach?

It would be the right tool if the FKs worked :-(  Sadly, they don't.

alvherre=# create table foo (a int primary key);
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "foo_pkey" for 
table "foo"
CREATE TABLE
alvherre=# create table bar (a int not null references foo);
CREATE TABLE
alvherre=# create table baz () inherits (foo);
CREATE TABLE
alvherre=# insert into baz values  (1);
INSERT 0 1
alvherre=# select * from foo;
 a 
---
 1
(1 row)

alvherre=# insert into bar values (1);
ERROR:  insert or update on table "bar" violates foreign key constraint 
"bar_a_fkey"
DETAIL:  Key (a)=(1) is not present in table "foo".


This is a Postgres shortcoming, but I don't think there's anybody
working on fixing it, so don't hold your breath.

Uniqueness also fails in inheritance: for example

alvherre=# insert into foo values (1);
INSERT 0 1
alvherre=# select * from foo;
 a 
---
 1
 1
(2 rows)

(Note that column is the PK)

-- 
Alvaro Herrera  Developer, http://www.PostgreSQL.org/
"Investigación es lo que hago cuando no sé lo que estoy haciendo"
(Wernher von Braun)



Re: [GENERAL] Foreign keys and inheritance

2007-11-19 Thread brian

Kynn Jones wrote:

I have two classes of objects, A and B, where B is just a special case
of A.  (I.e., to describe a B-type object I need to specify the same
fields as for an A-type object, plus a whole bunch additional fields
specific to B alone.)  Furthermore, there's a third class T that is in
a many-to-one relation with A (and hence also B) objects.

The question is, what's the "best practice" for implementing this
situation in PostgreSQL.  My first idea was to define B as inheriting
from A, which is OK, except that I have not figured out how to
implement the reference from T.  Is inheritance indeed the right tool
for this problem, or should I use a different approach?



It seems that inheritance is precisely what you want.

WRT your table T, you should be able to join to B in the same way you 
would join to A. But perhaps you should give an example of both B & T 
(and maybe A).


brian



Re: [GENERAL] Foreign keys and inheritance

2007-11-19 Thread Jeff Davis
On Mon, 2007-11-19 at 14:36 -0500, Kynn Jones wrote:
> I have two classes of objects, A and B, where B is just a special case
> of A.  (I.e., to describe a B-type object I need to specify the same
> fields as for an A-type object, plus a whole bunch additional fields
> specific to B alone.)  Furthermore, there's a third class T that is in
> a many-to-one relation with A (and hence also B) objects.
> 
> The question is, what's the "best practice" for implementing this
> situation in PostgreSQL.  My first idea was to define B as inheriting
> from A, which is OK, except that I have not figured out how to
> implement the reference from T.  Is inheritance indeed the right tool
> for this problem, or should I use a different approach?
> 

I would probably do something like:

CREATE TABLE A (a_id INT PRIMARY KEY, a_attr text);
CREATE TABLE B (b_id INT PRIMARY KEY, 
  a_id int references A(a_id) UNIQUE, b_attr text);
CREATE TABLE T (t_id INT PRIMARY KEY, 
  a_id int references A(a_id), t_attr text);

I can't tell whether you mean that every A has many T or vice versa, but
minor modification will make it work in the opposite direction.

To look at all A objects, you just look in table A.
You can do "A NATURAL JOIN T" to realize the many-to-one relationship
from A to T.
You can do "A NATURAL JOIN B" to see all B objects (which have a_attr
since they are a special case of A).

This is a normal relational design that is very flexible and doesn't
require the PostgreSQL-specific inheritance feature. You don't need to
use natural joins, of course; it was just easier for this example.

Regards,
Jeff Davis




[GENERAL] Foreign keys and inheritance

2007-11-19 Thread Kynn Jones
I have two classes of objects, A and B, where B is just a special case
of A.  (I.e., to describe a B-type object I need to specify the same
fields as for an A-type object, plus a whole bunch additional fields
specific to B alone.)  Furthermore, there's a third class T that is in
a many-to-one relation with A (and hence also B) objects.

The question is, what's the "best practice" for implementing this
situation in PostgreSQL.  My first idea was to define B as inheriting
from A, which is OK, except that I have not figured out how to
implement the reference from T.  Is inheritance indeed the right tool
for this problem, or should I use a different approach?

TIA!

kj

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] possible to create multivalued index from xpath() results in 8.3?

2007-11-19 Thread Matt Magoffin
> "Matt Magoffin" <[EMAIL PROTECTED]> writes:
>> 2) Even if I could have an xpath() result return an array with multiple
>> values, like {value1,value2,value3} I wasn't able to define a GIN index
>> against the xml[] type. Should this be possible?
>
> Dunno about your other questions, but the answer to this one is "no"
> --- AFAICS there is no indexing support of any kind for the xml type
> in 8.3.  Try casting to text[] instead.

I had tried that, but it does not actually work because of my first issue:
I need a way for the XPath to return 3 individual text nodes. If I return 3
elements, like
elements, like

/[EMAIL PROTECTED]"mykey"] => {
  value1,
  value2,
  value3
}

and cast that to text[], I get 3 XML strings, each including the element
markup. I want only the element text content.

Should the xpath() function return 3 individual text nodes like this:

/[EMAIL PROTECTED]"mykey"]/text() => {
  value1,
  value2,
  value3
}

rather than concatenating these into a single text node result? I also
tried something like

string(/[EMAIL PROTECTED]"mykey"])

but that throws an XPath error. It looks like the code is converting this
to /x/string(/[EMAIL PROTECTED]"mykey"] internally, which is not a valid XPath.

So if xpath() cannot return individual text node results, would a possible
solution be to write a trigger function that generates a tsvector from the
XML array, and then use text search to locate matches?
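
Until that's resolved, the shape being asked for would look roughly like this (hypothetical document structure and table/column names; the `/text()` step is what should yield bare text nodes):

```sql
-- Index the text content of all matching nodes as a text array:
CREATE INDEX idx_mykey ON mytable
    USING gin ((xpath('/doc/attr[@key="mykey"]/text()', xml_col)::text[]));

-- Query against the same expression:
SELECT * FROM mytable
WHERE ARRAY['value2'] <@ xpath('/doc/attr[@key="mykey"]/text()', xml_col)::text[];
```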

-- m@

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] postgresql storage and performance questions

2007-11-19 Thread Josh Harrison
Thanks Filip.
I have 2 tables with 2 cols each( 1 numeric(8,0) and 1 varchar(3) ).
In table1 both the cols are filled and in table2  the varchar colm is null

So when I checked the tablesize for these two tables (using pg_relation_size)
table1 - 57344 bytes (no null columns)
table2 - 49152 bytes (varchar colm is null)

There is not much difference between the two sizes. So even if a column
is null, PostgreSQL still has a lot of overhead.
Does postgres occupy space even when the column is NULL?
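
One way to see this per row (a sketch; `pg_column_size` exists since 8.1, and `table1`/`table2` are the tables described above):

```sql
SELECT pg_column_size(t.*) FROM table1 t LIMIT 1;  -- both columns filled
SELECT pg_column_size(t.*) FROM table2 t LIMIT 1;  -- varchar column is NULL
```

A NULL stores no data of its own; it just sets a bit in the tuple's null bitmap, so most of the per-row cost is the fixed tuple header either way.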

This is not spam; I posted it twice because my question did not
show up in the mailing list the first time, even after 30 minutes. So I
tried again and then both showed up... kind of strange though!

Thanks again
Josh

On Nov 19, 2007 1:37 PM, Filip Rembiałkowski <[EMAIL PROTECTED]> wrote:
> 2007/11/19, Josh Harrison <[EMAIL PROTECTED]>:
> > Hi,
> > I have a few questions about the storage and performance
> >
> > 1. How do you estimate the table size in postgresql?
> > For example if I have a table 'Dummy' with 1 varchar (40) & 1
> > numeric(22,0) fields and 1000 rows, what is the tablesize estimate for
> > this (including the row overhead etc)? How many pages will this
> > occupy?
> >
> > 2. Also if the table contains null columns, does postgres allocates
> > the same space for these nulls columns? How does it handle 'nulls' in
> > terms of storage?
>
> Try these:
> http://www.postgresql.org/docs/current/static/datatype-numeric.html
> http://www.postgresql.org/docs/current/static/datatype-character.html
> http://www.postgresql.org/docs/current/static/storage-page-layout.html
>
> ad.1 )  There is a fixed-size header (occupying 27 bytes on most
> machines) for each tuple
>
> so you will have about 27 + 1 +  varchar data + numeric data per row,
> plus some overhead for block headers
>
> ad.2 ) there is a null bitmap for each tuple which has nullable fields
> - so every 8 NULLable columns occupy one byte of bitmap.
>
>
> PS. why do you post same thing many times? this is kinda.. spam?
>
> --
> Filip Rembiałkowski
>

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] postgresql storage and performance questions

2007-11-19 Thread Filip Rembiałkowski
2007/11/19, Josh Harrison <[EMAIL PROTECTED]>:
> Hi,
> I have a few questions about the storage and performance
>
> 1. How do you estimate the table size in postgresql?
> For example if I have a table 'Dummy' with 1 varchar (40) & 1
> numeric(22,0) fields and 1000 rows, what is the tablesize estimate for
> this (including the row overhead etc)? How many pages will this
> occupy?
>
> 2. Also if the table contains null columns, does postgres allocates
> the same space for these nulls columns? How does it handle 'nulls' in
> terms of storage?

Try these:
http://www.postgresql.org/docs/current/static/datatype-numeric.html
http://www.postgresql.org/docs/current/static/datatype-character.html
http://www.postgresql.org/docs/current/static/storage-page-layout.html

ad.1 )  There is a fixed-size header (occupying 27 bytes on most
machines) for each tuple

so you will have about 27 + 1 +  varchar data + numeric data per row,
plus some overhead for block headers

ad.2 ) there is a null bitmap for each tuple which has nullable fields
- so every 8 NULLable columns occupy one byte of bitmap.


PS. why do you post same thing many times? this is kinda.. spam?

-- 
Filip Rembiałkowski

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] plpython and error catching

2007-11-19 Thread Sean Davis
What is the expected behavior of a construct like this:

def insert_xml(elem):
id=int(elem.findtext('PMID'))
try:
plpy.execute(plan,[unicode(ET.tostring(elem)),id])
except:
plpy.execute(plan2,[unicode(ET.tostring(elem)),id])

id is a primary key on the table into which I am inserting.  "plan" is
the execute plan for inserting new data.  "plan2" is for updating data
already in the database.  When I run this, I am not able to catch
errors of this type:

WARNING:  plpython: in function insert_medline_file:
DETAIL:  plpy.Error: Unknown error in PLy_spi_execute_plan
ERROR:  duplicate key value violates unique constraint "medlinexml_pkey"
CONTEXT:  SQL statement "insert into medlinexml(content,id) values
(xmlparse(CONTENT $1),$2)"

Why am I not able to catch this error and execute the update plan?
The manual (8.3beta2) implies that errors generated in functions are
genuine python errors that I should be able to catch.
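
One workaround, if it turns out these SPI errors abort the surrounding transaction and never surface as catchable Python exceptions (which is what the WARNING suggests), is to test for the key first and pick the plan explicitly instead of relying on `except`. The sketch below models the table as a dict so the branch logic runs outside the server; `check_plan` is a hypothetical prepared `SELECT 1 FROM medlinexml WHERE id = $1`:

```python
def upsert(table, key, value):
    # In PL/Python this membership test would be:
    #   rows = plpy.execute(check_plan, [key])
    # followed by checking whether any row came back.
    existed = key in table
    table[key] = value  # plan2 (UPDATE) if existed, else plan (INSERT)
    return "update" if existed else "insert"

docs = {}
print(upsert(docs, 17284679, "<MedlineCitation/>"))  # -> insert
print(upsert(docs, 17284679, "<MedlineCitation/>"))  # -> update
```

This costs one extra query per call but sidesteps the uncatchable duplicate-key error entirely.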

Thanks,
Sean

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Temporary, In-memory Postgres DB?

2007-11-19 Thread Guy Rouillier

Michelle Konzack wrote:


I run an Opteron 140 with 8 GByte of memory and sometimes I have
problems with too little memory...  but unfortunately I have not found
a single-Opteron mainboard which supports more than 8 GByte of memory,
whereas I would prefer to use 16-32 GByte...


Tyan makes a server motherboard with a single Opteron socket and 8 DIMM 
slots: http://www.tyan.com/product_board_detail.aspx?pid=229


--
Guy Rouillier

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] IP addresses

2007-11-19 Thread Harald Fuchs
In article <[EMAIL PROTECTED]>,
"Tom Allison" <[EMAIL PROTECTED]> writes:

> I am planning on doing a LOT of work with ip addresses and thought that the
> inet data type would be a great place to start.

Forget inet.  Check out http://pgfoundry.org/projects/ip4r/ and be happy.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Postgre and XML

2007-11-19 Thread Bruce Momjian
Peter Eisentraut wrote:
> Am Montag, 19. November 2007 schrieb x asasaxax:
> >I'm interested in running xml with postgre. I use postgre version 8.2
> > and windows xp. I would like to know how can i enable the xml in the
> > postgresql.
> 
> That depends on what you want to do with it.  XML is quite a broad topic.

Here is our Postgres 8.2 documentation about XML:

http://www.postgresql.org/docs/8.2/static/datatype-xml.html

Postgres 8.3 has some new XML features:

http://www.postgresql.org/docs/8.3/static/release-8-3.html

-- 
  Bruce Momjian  <[EMAIL PROTECTED]>http://momjian.us
  EnterpriseDB http://postgres.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Postgres file structure doubt

2007-11-19 Thread Scott Marlowe
On Nov 19, 2007 11:24 AM,  <[EMAIL PROTECTED]> wrote:
>
> Hi everyone,
>
> Got a doubt in my setup, please correct me if I'm wrong.
>
> In my postgres setup,
> /usr/local/pgsql (where postgres install)
> /usr/local/pgsql/data (PGDATA)
> /database/pg/mydata (tablespace which use for all the table I create)
> /database/pg/myindex (index which use for all the table I create)
>
> 1) In this setup, the actual user data are store in PGDATA
> the table structure & index are store in /database/pg/mydata &
> /database/pg/myindex
>
> Am I correct?

The data that defines users, tables, and other objects is in
PGDATA.  The data from users is stored in mydata/myindex.  Not sure
if that matches what you wrote or not...

> 2) So to backup (not pg_dump), I should make sure it include these 2 folder
> right?
>
> /usr/local/pgsql/data
> /database/pg/

To back up, you should generally use pg_dump.  Are you planning on
using PITR?  Are you planning on shutting down your database when you
back it up?  If you're not using PITR, you must shut down PostgreSQL
to take a file-system backup.

> 3) I think my setup is not quite right, I should move the PGDATA to
> /database/pg right?

Sort of a personal choice, really.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Postgres file structure doubt

2007-11-19 Thread mailtolouis2020-postgres
Hi everyone,

Got a doubt in my setup, please correct me if I'm wrong.

In my postgres setup, 
/usr/local/pgsql (where postgres install)
/usr/local/pgsql/data (PGDATA)
/database/pg/mydata (tablespace which use for all the table I create)
/database/pg/myindex (index which use for all the table I create)

1) In this setup, the actual user data are store in PGDATA
the table structure & index are store in /database/pg/mydata & 
/database/pg/myindex

Am I correct?

2) So to backup (not pg_dump), I should make sure it include these 2 folder 
right?
/usr/local/pgsql/data 
/database/pg/
 

3) I think my setup is not quite right, I should move the PGDATA to 
/database/pg right?


Regards
Louis

Re: [ADMIN] [GENERAL] Error while starting postgreSQL service

2007-11-19 Thread Magnus Hagander

On Mon, 2007-11-19 at 10:33 +, Richard Huxton wrote:
> Bebarta, Simanchala wrote:
>  >>
>  >> Does the problem go away when you put shared_buffers back to a lower
>  >> number?
>  >>
> > Yes, when I set the value to 1300 MB, everything goes fine. Any value
> > higher than this value does not allow me to start the service.
> 
> It's quite possible that you can't go any higher (I don't know enough 
> about Windows' memory handling). It's quite possible you don't want to 
> anyway.
> Don't forget, you want to allow space for the following:
> - Windows itself
> - Other applications
> - Each backend of PostgreSQL will need its own memory when running queries.
> - Filesystem caching.
> 
> PostgreSQL isn't like some other RDBMS where you dedicate a big block of 
> memory just to it.

Yeah, going above 1300MB shared_buffers is certainly going to be a
problem. You'll run out of address space in the process (limited to
2GB, and that's including code and OS overhead).

That said, you're also likely to have a lot of other reasons for having
it lower. We've seen a lot of cases where putting shared_buffers as low
as possible gives a lot better performance, and I've yet to see a
single case where having shared_buffers very high has helped.

(Before someone comments on it, the previous paragraph is valid for win32
only, of course.)

//Magnus


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Timestamp comparison with string in some special cases

2007-11-19 Thread Tom Lane
Dragan Matic <[EMAIL PROTECTED]> writes:
> And isn't this:

> SELECT * from table where timestamp_column < to_timestamp('11/19/2007 
> 15:46:09 PM','MM/DD/ HH24:MI:SS')

> just doing the same thing that implicit string to timestamp conversion 
> should have done in the first case?

No.  The entire reason for existence of to_timestamp() is to accommodate
translation of formats that are too weird, ambiguous, or inconsistent
to be reasonable for the main timestamp input code to accept.
(Like this one.)

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Timestamp comparison with string in some special cases

2007-11-19 Thread Sam Mason
On Mon, Nov 19, 2007 at 06:03:36PM +0100, Dragan Matic wrote:
> Sam Mason wrote:
> >On Mon, Nov 19, 2007 at 04:52:10PM +0100, Dragan Matic wrote:
> >  
> >>select * from table where timestamp_column < '11/19/2007 15:46:09 PM'
> >>
> >
> >Maybe the to_timestamp() function would help you:
> >
> >  SELECT to_timestamp('11/19/2007 15:46:09 PM','MM/DD/ HH24:MI:SS')
> >-> 2007-11-19 15:46:09+00
> >
> >That just ignores the AM/PM flag, which may or may not be what you want
> >to do.
> >
> >
> >  Sam
> >
> >
> >  
> Tnx, this helped a lot. But now I am confused about something. I thought 
> there was supposed to be an implicit conversion from string to timestamp in 
> the first case. And isn't this:
> 
> SELECT * from table where timestamp_column < to_timestamp('11/19/2007 
> 15:46:09 PM','MM/DD/ HH24:MI:SS')
> 
> just doing the same thing that implicit string to timestamp conversion 
> should have done in the first case?

When you type 'some text' into postgres it's treated as a "literal";
postgres then uses the type's input function to convert the literal to
the actual internal encoding used by the database.  With timestamps you
have some control over this conversion, but not much.
The to_timestamp function treats the literal as text and then parses out
the date with more flexibility.

Does that help?


  Sam

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Compressed Backup too big

2007-11-19 Thread Magnus Hagander

On Thu, 2007-11-15 at 20:35 +0200, Andrus wrote:
> "PostgreSQL 8.2.3 on i686-pc-mingw32, compiled by GCC gcc.exe (GCC) 3.4.2 
> (mingw-special)"
> Database size in disk returned by  pg_database_size() is 210 MB
> 
> Database compressesed  backup file size is now 125 MB.
> This seems too much. I expected compression to decrease the size about 10 
> times; also, indexes are not backed up. A year ago the compressed backup was only 9 MB.
> 
> I created a query returning the biggest tables with and without indexes and found:
> 
>   1  pg_toast_22185  95 MB   96 MB
>   2  rid 21 MB   27 MB
>   3  klient  13 MB   19 MB
>   4  mailbox 10 MB   11 MB
>   5  dok 7640 kB 12 MB
>   6  desktop 8080 kB 8200 kB
>   7  strings 5536 kB 6584 kB
>   8  pg_toast_22338  5232 kB 5368 kB
> 
> ...
> 
> Questions:
> 
> 1. Tables are relatively small and thus cannot account for a 125 MB compressed 
> backup file.
> Why is the backup file so big ?

Is this a pg_dump backup or a PITR style backup?
If it's a pg_dump backup, you can open it up in an editor to find out
what's taking so much space.

> 2. How can I determine what data is contained in pg_toast_22185 ?
> Why is it so big ?

Could it be that you haven't been VACUUMing properly? Possibly you need
to run a VACUUM FULL if you haven't kept up. If it's a PITR-style backup
in question 1, that could be the same reason.


To find what table has pg_toast_22185, try:
SELECT relname FROM pg_class WHERE oid=22185
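
A related sketch (assuming `pg_relation_size` is available, i.e. 8.1+) that lists each table next to its TOAST table and the TOAST size, largest first:

```sql
SELECT c.relname AS table_name,
       t.relname AS toast_table,
       pg_relation_size(t.oid) AS toast_bytes
FROM pg_class c
JOIN pg_class t ON c.reltoastrelid = t.oid
ORDER BY pg_relation_size(t.oid) DESC;
```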


//Magnus


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Timestamp comparison with string in some special cases

2007-11-19 Thread Dragan Matic

Sam Mason wrote:

On Mon, Nov 19, 2007 at 04:52:10PM +0100, Dragan Matic wrote:
  

select * from table where timestamp_column < '11/19/2007 15:46:09 PM'



Maybe the to_timestamp() function would help you:

  SELECT to_timestamp('11/19/2007 15:46:09 PM','MM/DD/ HH24:MI:SS')
-> 2007-11-19 15:46:09+00

That just ignores the AM/PM flag, which may or may not be what you want
to do.


  Sam


  
Tnx, this helped a lot. But now I am confused about something. I thought 
there was supposed to be an implicit conversion from string to timestamp in 
the first case. And isn't this:


SELECT * from table where timestamp_column < to_timestamp('11/19/2007 
15:46:09 PM','MM/DD/ HH24:MI:SS')


just doing the same thing that implicit string to timestamp conversion 
should have done in the first case?


Dragan Matic



--
This message has been scanned for viruses and
dangerous content by MailScanner, and is
believed to be clean.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [ADMIN] [GENERAL] Error while starting postgreSQL service

2007-11-19 Thread Scott Marlowe
On Nov 19, 2007 4:15 AM, Bebarta, Simanchala
<[EMAIL PROTECTED]> wrote:
> Yes, when I set the value to 1300 MB, everything goes fine. Any value
> higher than this value does not allow me to start the service.

Please don't top post.

Anyway, are you sure that increasing shared_buffers that high is
helping?  There's some evidence that windows pgsql can't use
shared_buffer settings that high efficiently.  Have you tested
shared_buffer settings starting low and working your way up and seen a
steady increase in performance?  Or are you just guessing that larger
shared_buffers = better performance?

I'm guessing that it's a limitation of windows and pgsql that you're
running into.  If you want to run a large memory postgresql database
server with lots of load, I highly recommend using unix as the
underlying OS, as pgsql is far better tested and supported there.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Substitute column in SELECT with static value? (Crosstab problem?)

2007-11-19 Thread Scott Marlowe
On Nov 19, 2007 1:34 AM, Stefan Schwarzer <[EMAIL PROTECTED]> wrote:
> >>
> >> Hi there,
> >>
> >> I run an aggregation on national statistics to retrieve regional
> >> values (for
> >> Africa, Europe, ...). Now, I want to have a global aggregation as
> >> well. The
> >> easiest thing for my PHP/HTML procedure would be to have the
> >> global row make
> >> appear within the regional result. So it would be something like
> >>
> >>name | y_2001  |y_2002 .
> >> 
> >>   Africa  |  2323 | 342323
> >>   Europe| 
> >>   .
> >>   Global |849309   |.
> >
> >> Is there a way to substitute this with a "static" value, such as
> >> "Global"?
> >> So, that the query still results in three columns?
> >
> > Sure, just include it as 'Global'
> >
> > Note the single, not double, quotes.
> >
>
> That's what I thought at the beginning too. But it didn't work.
>
> Both queries are being executed separately correctly.

> SELECT * FROM crosstab( '
> SELECT
> COALESCE(r.name, ) AS name,
> year_start AS year,
> SUM(value) AS value
> FROM
> co2_total_cdiac AS d
> RIGHT JOIN
> countries_view AS c ON c.id = id_country
> RIGHT JOIN
> regions AS r ON r.id = c.reg_id
> WHERE
> year_start = 2002
> GROUP BY
> r.name,
> year_start
>
> UNION ALL
>
> SELECT
> 'Global' AS name,

Remember, you're calling this as an arg to a function, so you need to
double up your quotes...

''Global''
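
The general rule: inside a single-quoted string every embedded single quote must be doubled, and since 8.0 dollar-quoting avoids the doubling entirely. A minimal illustration (not the poster's exact query):

```sql
SELECT 'it''s a literal';    -- doubled quote inside a quoted string
SELECT $$it's a literal$$;   -- dollar-quoted: no doubling needed
```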

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] convert custom datatype to array

2007-11-19 Thread Mike Charnoky
Thanks!  That did the trick.  For posterity, I was able to do the final
conversion using:

alter table mytable alter column mycolumn type float4[] using
string_to_array(trim(both '[]' from
textin(nbf4a_out(mycolumn))),',')::float4[];


Mike

Alvaro Herrera wrote:
> Mike Charnoky wrote:
>> OK, forgive my ignorance here, but the maintainer of our custom data
>> type code is no longer with us and this is new territory for me.  We do
>> have a function which takes our custom data type and returns a cstring.
>>  Is there a pg function which converts a cstring to text type?  This
>> seems to be the missing link to eventually get the data to float4[].
> 
> Sure, textin() does that.
> 

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] Postgresql storage question

2007-11-19 Thread Josh Harrison
Hi,

I have a few questions about the storage and performance

1. How do you estimate the table size in postgresql?
For example if I have a table 'Dummy' with 1 varchar (40) & 1
numeric(22,0) fields and 1000 rows, what is the tablesize estimate for
this (including the row overhead etc)? How many pages will this
occupy?

2. Also if the table contains null columns, does postgres allocates
the same space for these nulls columns? How does it handle 'nulls' in
terms of storage?

3. How does oracle handle these 2 cases?

4. Does increasing the block size in postgres improve query performance?

Thanks in advance
Josh

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Timestamp comparison with string in some special cases

2007-11-19 Thread Sam Mason
On Mon, Nov 19, 2007 at 04:52:10PM +0100, Dragan Matic wrote:
> select * from table where timestamp_column < '11/19/2007 15:46:09 PM'

Maybe the to_timestamp() function would help you:

  SELECT to_timestamp('11/19/2007 15:46:09 PM','MM/DD/ HH24:MI:SS')
-> 2007-11-19 15:46:09+00

That just ignores the AM/PM flag, which may or may not be what you want
to do.


  Sam

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] IP addresses

2007-11-19 Thread Ian Barwick
2007/11/19, Tom Allison <[EMAIL PROTECTED]>:
> I am planning on doing a LOT of work with ip addresses and thought that the
> inet data type would be a great place to start.
>
> But I'm not sure how this works in with accessing the addresses.  In perl or
> ruby how is the value returned?

In Perl the value is returned as a scalar.

> Or should I stricly use host() and other functions to be explicit about what
> I'm doing.
>
>
> Another question.
> Given a subnet (eg: 192.168.1.0/24) is there some way to pull all the
> addresses therein?
> I can do this in code - but I was curious if there was a postgres way of
> doing it (didn't see any, but..)

You want the network address functions and operators, I presume:
http://www.postgresql.org/docs/8.2/interactive/functions-net.html
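
A few of those functions and operators applied to the subnet from the question (a sketch; all of these exist for the inet type):

```sql
SELECT '192.168.1.42'::inet << '192.168.1.0/24'::inet;  -- contained in subnet?
SELECT host('192.168.1.42/24');       -- the address without the netmask
SELECT network('192.168.1.42/24');    -- the network part
SELECT broadcast('192.168.1.0/24');   -- the highest address in the subnet
```

There is still no built-in operator that enumerates every address in a subnet; that part stays in application code (or a generate_series-based query).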

HTH

Ian Barwick

-- 
http://sql-info.de/index.html

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Timestamp comparison with string in some special cases

2007-11-19 Thread Andreas Kretschmer
Dragan Matic <[EMAIL PROTECTED]> schrieb:

> I have just found out that comparing a timestamp with a time value where 
> the time is represented in 24h format + an AM/PM sign doesn't always work.
> for instance, the following query works in this format:
> 
> select * from table where timestamp_column < '11/19/2007 3:46:09 PM'

This isn't a timestamp, this is a simple string. But you can CAST it to
a TIMESTAMP:

test=*# select now() > '11/19/2007 3:46:09 PM'::timestamp;
 ?column?
--
 t


Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly."(unknow)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] postgresql performance and storage questions

2007-11-19 Thread Josh Harrison
Hi,
I have a few questions about the storage and performance

1. How do you estimate the table size in postgresql?
For example if I have a table 'Dummy' with 1 varchar (3) & 1
numeric(8,0) fields and 1000 rows, what is the tablesize estimate for
this (including the row overhead etc)? How many pages will this
occupy?

2. Also if the table contains null columns, does postgres allocates
the same space for these nulls columns? How does it handle 'nulls' in
terms of storage?

3. How does oracle handle these 2 cases?

4. Does increasing the block size in postgres improve query performance?

Thanks in advance
Josh

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL]

2007-11-19 Thread Tom Hart

Reply from 127.0.0.1: bytes=32 time=51ms TTL=241

Josh Harrison wrote:

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/
  



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL]

2007-11-19 Thread Josh Harrison


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] Timestamp comparison with string in some special cases

2007-11-19 Thread Dragan Matic
I have just found out that comparing a timestamp with a time value 
where the time is represented in 24h format + an AM/PM sign doesn't 
always work.

for instance, the following query works in this format:

select * from table where timestamp_column < '11/19/2007 3:46:09 PM'

and also in this format

select * from table where timestamp_column < '11/19/2007 15:46:09'

BUT: if the time is presented in this format:

select * from table where timestamp_column < '11/19/2007 15:46:09 PM'

postgres returns an error:

ERROR: date/time field value out of range

Now, I know that the 24h time format + an AM/PM sign is redundant, but this 
is how the Windows clock format is set up for a few of our customers, and 
this is what is sent to postgres. We will alter this presentation through 
our client program, but is there a way to make postgres accept values like these?
Postgres is version 8.2.3 on fedora 7 (64-bit).


Dragan Matic

--
This message has been scanned for viruses and
dangerous content by MailScanner, and is
believed to be clean.


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] convert custom datatype to array

2007-11-19 Thread Alvaro Herrera
Mike Charnoky wrote:
> OK, forgive my ignorance here, but the maintainer of our custom data
> type code is no longer with us and this is new territory for me.  We do
> have a function which takes our custom data type and returns a cstring.
>  Is there a pg function which converts a cstring to text type?  This
> seems to be the missing link to eventually get the data to float4[].

Sure, textin() does that.

-- 
Alvaro Herrera http://www.amazon.com/gp/registry/DXLWNGRJD34J
"Coge la flor que hoy nace alegre, ufana. ¿Quién sabe si nacera otra mañana?"

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] convert access sql to postgresql

2007-11-19 Thread Tom Hart

Shane Ambler wrote:

Tom Hart wrote:
Hey guys. I have a long piece of sql that I'm trying to take out of 
an existing Access db and modify to work with Postgresql. I've 
started trying to convert it, but I've come across a problem that I 
don't even know how to describe, let alone google. Here's the function


INSERT INTO MemberMailingAddress (
mb_cn_num,
mb_acct_num,
isactive,
ismember,
ismail,
ispromomail,
...  ...

)
SELECT
mb_cn_num,
mb_acct_num,
mb_stat_cd<>1 as isactive,
mb_stat_cd=0 as ismember,
isactive and (mb_mail_cd=0 or mb_mail_cd=1) as ismail,
ismail and (mb_stat_cd=0 or mb_stat_cd=2) as ispromomail,
...  ...

FROM member
ORDER BY mb_cn_num, mb_acct_num
;

Specifically I'm looking at these two lines

isactive and (mb_mail_cd=0 or mb_mail_cd=1) as ismail,
ismail and (mb_stat_cd=0 or mb_stat_cd=2) as ispromomail,


Let's clarify something
- the insert puts data into the MemberMailingAddress table which 
includes the column isactive

The select retrieves and calculates data from the member table.
The select calculates a value for the third column that you alias to 
be named isactive.

That's correct.
Are you expecting the 5th and 6th column (as ismail - as ispromomail) 
to calculate from the 3rd and 5th column of the select or from the 
columns of MemberMailingAddress?
It's not that I expect pgSQL to do this; rather, I've seen Access 
behave this way, where column aliases could be used within the 
same select query in a "variable" type fashion.
If you expect the later you need to add a join to the 
MemberMailingAddress table to get those columns. (or use a sub-select 
to get the data)
If you are only calculating from the member table then you will need 
to repeat the calculations instead of referring to the alias. And if 
this is the case how does the insert fit with the issue of the select?
I figured I would have to repeat the calculations (I've tried this with 
a small chunk of the SQL, with success). What I was really looking for 
was an answer on whether PostgreSQL behaves this way, or if this is 
standard SQL. The answer I've been hearing is no on both counts; just 
wanted to be sure. Thank you for your reply.
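
To make that concrete: in standard SQL (and PostgreSQL) a SELECT-list alias cannot be referenced by a later expression in the same list; you either repeat the expression or push it into a subquery. A sketch using the columns from the original query:

```sql
-- Option 1: repeat the expression
SELECT mb_stat_cd <> 1 AS isactive,
       (mb_stat_cd <> 1) AND mb_mail_cd IN (0, 1) AS ismail
FROM member;

-- Option 2: compute once in a subquery, reference it outside
SELECT isactive,
       isactive AND mb_mail_cd IN (0, 1) AS ismail
FROM (SELECT mb_stat_cd <> 1 AS isactive, mb_mail_cd FROM member) AS m;
```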


Thomas R. Hart II
[EMAIL PROTECTED]

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] convert custom datatype to array

2007-11-19 Thread Mike Charnoky
OK, forgive my ignorance here, but the maintainer of our custom data
type code is no longer with us and this is new territory for me.  We do
have a function which takes our custom data type and returns a cstring.
 Is there a pg function which converts a cstring to text type?  This
seems to be the missing link to eventually get the data to float4[].


Mike

Alvaro Herrera wrote:
> Mike Charnoky wrote:
>> Our database schema was designed before postgresql supported arrays and
>> contains a custom type which is basically a float4 array.  I would like
>> to clean things up and convert the custom datatype to float4[], as this
>> would obviate the need for us to compile a custom shared object.  We are
>> hitting problems with pg8.3 and I would rather just drop the custom stuff.
>>
>> Problem is, I cannot do an ALTER COLUMN:
>>
>> mydb=# alter table mytable alter column mycolumn float4[];
>> ERROR:  column "mycolumn" cannot be cast to type "float4[]"
> 
> You can specify a USING clause to let the system know exactly how to
> convert your type to float[].  I suspect you can use float4in() and
> something to read out your type into the format that float4in expects (a
> hacked up version of your out function perhaps).
> 

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Temporary, In-memory Postgres DB?

2007-11-19 Thread Gauthier, Dave
Sounds like a lot of work.  I don't want to do anything risky.  And they
probably won't give me a ramdisk anyway.  

Being able to run a small but full featured, purely in-memory DB (sort
of like SQLite) would probably fit a niche that Postgres competitors
don't address.  So I guess this is just a wish list / suggestion matter
at this point.

-dave

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Michelle
Konzack
Sent: Saturday, November 17, 2007 11:48 AM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Temporary, In-memory Postgres DB?

Am 2007-11-07 10:03:24, schrieb Gauthier, Dave:
> Is there such a thing as a temporary, probably in-memory, version of a
> Postgres DB?  Sort of like SQLite, only with the features/function of
> PG?  A DB like this would exist inside of, and for the duration of, a
> script/program that created it, then vanish when the script/program
> ends.

I have done this before, but it requires a great deal of memory
if you need it writable, and you must vacuum very often.


You need a shell script which replaces the "startup script" for
PostgreSQL, in which you

  1)  create the RAMDISK
  2)  decompress the previously built data.tar.bz2
  3)  start PostgreSQL

and replace the shutdown script with your own shell script which will

  1)  stop write access to PostgreSQL
  2)  vacuum the database
  3)  shut down PostgreSQL
  4)  make a backup of the previously created data.tar.bz2
  5)  compress the data directory to data.tar.bz2

I run an Opteron 140 with 8 GByte of memory, and sometimes I have
problems with too little memory...  but unfortunately I have not found
a single-Opteron mainboard which supports more than 8 GByte of memory;
I would prefer to use 16-32 GByte...

Thanks, Greetings and nice Day
Michelle Konzack
Tamay Dogan Network
Open Hardware Developer
Debian GNU/Linux Consultant


-- 
Linux-User #280138 with the Linux Counter, http://counter.li.org/
# Debian GNU/Linux Consultant #
Michelle Konzack   Apt. 917  ICQ #328449886
   50, rue de Soultz MSN LinuxMichi
0033/6/6192519367100 Strasbourg/France   IRC #Debian (irc.icq.com)

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] possible to create multivalued index from xpath() results in 8.3?

2007-11-19 Thread Tom Lane
"Matt Magoffin" <[EMAIL PROTECTED]> writes:
> 2) Even if I could have an xpath() result return an array with multiple
> values, like {value1,value2,value3} I wasn't able to define a GIN index
> against the xml[] type. Should this be possible?

Dunno about your other questions, but the answer to this one is "no"
--- AFAICS there is no indexing support of any kind for the xml type
in 8.3.  Try casting to text[] instead.
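A sketch of that cast-to-text[] approach, with a hypothetical table, column, and XPath expression:

```sql
-- Build a GIN index on the xpath() result by casting the xml[] array
-- to text[], which does have GIN support.  Table, column, and the
-- XPath expression are illustrative assumptions.
CREATE INDEX idx_mytable_keywords
    ON mytable
 USING gin ((xpath('//keyword/text()', xmldata)::text[]));
```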

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] convert custom datatype to array

2007-11-19 Thread Alvaro Herrera
Mike Charnoky wrote:
> Our database schema was designed before postgresql supported arrays and
> contains a custom type which is basically a float4 array.  I would like
> to clean things up and convert the custom datatype to float4[], as this
> would obviate the need for us to compile a custom shared object.  We are
> hitting problems with pg8.3 and I would rather just drop the custom stuff.
> 
> Problem is, I cannot do an ALTER COLUMN:
> 
> mydb=# alter table mytable alter column mycolumn type float4[];
> ERROR:  column "mycolumn" cannot be cast to type "float4[]"

You can specify a USING clause to let the system know exactly how to
convert your type to float[].  I suspect you can use float4in() and
something to read out your type into the format that float4in expects (a
hacked up version of your out function perhaps).
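Concretely, the USING clause takes an arbitrary expression, so the statement would look roughly like this (my_custom_to_float4_array is a placeholder for whatever conversion expression you end up with):

```sql
-- The USING expression must produce a float4[] from the old column value.
ALTER TABLE mytable
    ALTER COLUMN mycolumn TYPE float4[]
    USING my_custom_to_float4_array(mycolumn);
```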

-- 
Alvaro Herrera  http://www.amazon.com/gp/registry/5ZYLFMCVHXC
"La tristeza es un muro entre dos jardines" (Khalil Gibran)

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] postgresql storage and performance questions

2007-11-19 Thread Josh Harrison
Hi,
I have a few questions about the storage and performance

1. How do you estimate the table size in postgresql?
For example if I have a table 'Dummy' with 1 varchar (40) & 1
numeric(22,0) fields and 1000 rows, what is the tablesize estimate for
this (including the row overhead etc)? How many pages will this
occupy?

2. Also, if the table contains null columns, does postgres allocate
the same space for these null columns? How does it handle nulls in
terms of storage?

3. How does oracle handle these 2 cases?

4. Does increasing the block size in postgres improve query performance?

Thanks in advance
Josh
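As a partial answer to question 1: once the table is loaded, the built-in size functions give the real numbers directly (the table name 'dummy' is hypothetical; 8192 assumes the default 8 kB block size):

```sql
-- Total on-disk size including indexes and TOAST:
SELECT pg_size_pretty(pg_total_relation_size('dummy'));

-- Heap only; divide by the block size for an approximate page count:
SELECT pg_relation_size('dummy') / 8192 AS pages;
```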

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Temporary, In-memory Postgres DB?

2007-11-19 Thread Michelle Konzack
On 2007-11-07 10:03:24, Gauthier, Dave wrote:
> Is there such a thing as a temporary, probably in-memory, version of a
> Postgres DB?  Sort of like SQLite, only with the features/function of
> PG?  A DB like this would exist inside of, and for the duration of, a
> script/program that created it, then vanish when the script/program
> ends.

I have done this before, but it requires a great deal of memory
if you need it writable, and you must vacuum very often.


You need a shell script which replaces the "startup script" for
PostgreSQL, in which you

  1)  create the RAMDISK
  2)  decompress the previously built data.tar.bz2
  3)  start PostgreSQL

and replace the shutdown script with your own shell script which will

  1)  stop write access to PostgreSQL
  2)  vacuum the database
  3)  shut down PostgreSQL
  4)  make a backup of the previously created data.tar.bz2
  5)  compress the data directory to data.tar.bz2

I run an Opteron 140 with 8 GByte of memory, and sometimes I have
problems with too little memory...  but unfortunately I have not found
a single-Opteron mainboard which supports more than 8 GByte of memory;
I would prefer to use 16-32 GByte...

Thanks, Greetings and nice Day
Michelle Konzack
Tamay Dogan Network
Open Hardware Developer
Debian GNU/Linux Consultant


-- 
Linux-User #280138 with the Linux Counter, http://counter.li.org/
# Debian GNU/Linux Consultant #
Michelle Konzack   Apt. 917  ICQ #328449886
   50, rue de Soultz MSN LinuxMichi
0033/6/6192519367100 Strasbourg/France   IRC #Debian (irc.icq.com)


signature.pgp
Description: Digital signature


[GENERAL] convert custom datatype to array

2007-11-19 Thread Mike Charnoky
Our database schema was designed before postgresql supported arrays and
contains a custom type which is basically a float4 array.  I would like
to clean things up and convert the custom datatype to float4[], as this
would obviate the need for us to compile a custom shared object.  We are
hitting problems with pg8.3 and I would rather just drop the custom stuff.

Problem is, I cannot do an ALTER COLUMN:

mydb=# alter table mytable alter column mycolumn type float4[];
ERROR:  column "mycolumn" cannot be cast to type "float4[]"

Any ideas on how to get around this, other than writing code to read
data from every row and copy it out to a new table?  The data looks like
this:

mydb=# select mycolumn from mytable limit 4;
                      mycolumn
----------------------------------------------------
 [30.910,12.300]
 [5.950,15.780,1.580,1.070,1.050,0.940,1.750,7.880]
 [10.680,29.030]
 [15.250,32.880]


Mike

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] tsearch2 best practices

2007-11-19 Thread Ian Barwick
2007/11/18, Mag Gam <[EMAIL PROTECTED]>:
> Hi All,
>
> Planning to implement tsearch2 for my websitem and dbschema. I wanted to
> know if there is a "Best practices" guide I should be following. While
> reading about it, I noticed there were lot of 'gotchas' with this, such as
> back-up/restore, Slony 1 replication issues, etc..
>
> What do most people recommend for backup/restore solution: 1) Install
> tsearch 2)Alter tables for tsearch2, update trigger, 3) do work .. 4)
> uninstall tsearch2, 5)backup? Is that the right approach? Or something else?
>
> Also, when will tsearch2 part of core, instead of contrib?

tsearch2 has been integrated into the upcoming 8.3 release (currently beta).

-- 
http://sql-info.de/index.html

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] IP addresses

2007-11-19 Thread Tom Allison
I am planning on doing a LOT of work with ip addresses and thought that the
inet data type would be a great place to start.

But I'm not sure how this works when accessing the addresses.  In Perl or
Ruby, how is the value returned?
Or should I strictly use host() and other functions to be explicit about what
I'm doing?


Another question.
Given a subnet (e.g. 192.168.1.0/24), is there some way to pull all the
addresses therein?
I can do this in code - but I was curious if there was a postgres way of
doing it (didn't see any, but..)
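For the subnet question, a server-side sketch (the inet + integer operator only exists in newer releases, so check your version first; the hosts table and addr column are hypothetical):

```sql
-- Enumerate the host addresses of a /24 by adding an offset to the
-- network address:
SELECT '192.168.1.0'::inet + i AS addr
  FROM generate_series(1, 254) AS s(i);

-- Or, to find stored addresses that fall inside the subnet, use the
-- "is contained within" operator:
SELECT * FROM hosts WHERE addr << '192.168.1.0/24'::inet;
```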


Re: [GENERAL] Calculation for Max_FSM_pages : Any rules of thumb?

2007-11-19 Thread Bill Moran
In response to Ow Mun Heng <[EMAIL PROTECTED]>:
> 
> Even with the regular vacuuming and even a vacuum full ( on my test DB)
> I still see that perhaps something is wrong (from the below)
> 
> (I got this gem from the mailling list archives)
> hmxmms=> SELECT
> c.relname,
> c.reltuples::bigint as rowcnt,
> pg_stat_get_tuples_inserted(c.oid) AS inserted,
> pg_stat_get_tuples_updated(c.oid) AS updated,
> pg_stat_get_tuples_deleted(c.oid) AS deleted
> FROM pg_class c
> WHERE c.relkind = 'r'::"char"
> GROUP BY c.oid, c.relname, c.reltuples
> HAVING pg_stat_get_tuples_updated(c.oid) +
> pg_stat_get_tuples_deleted(c.oid) > 1000
> ORDER BY pg_stat_get_tuples_updated(c.oid) +
> pg_stat_get_tuples_deleted(c.oid) DESC;
> relname|  rowcnt  | inserted | updated | deleted
> ---+--+--+-+--
>  tst_r | 11971691 |0 |   0 | 22390528 <--
>  pg_statistic  | 1465 |  280 |7716 |  153
>  dr_ns |  2305571 | 1959 |   0 | 1922
>  pg_attribute  | 3787 | 1403 | 184 | 1292
> 
> No matter how many times I vacuum/full the deleted number still doesn't
> go down.

Are you sure you're interpreting that number correctly?  I took it to
mean a counter of the number of delete operations since server start.
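One way to check that interpretation: reset the cumulative counters and watch whether "deleted" climbs only with new activity (pg_stat_reset() affects the current database's statistics):

```sql
-- Zero the cumulative statistics counters for the current database,
-- then re-run the query above; "deleted" should grow again only as
-- new DELETEs actually occur.
SELECT pg_stat_reset();
```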

-- 
Bill Moran
http://www.potentialtech.com

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Composite types for composite primary/foreign keys?

2007-11-19 Thread Michael Glaesemann


On Nov 19, 2007, at 6:17 , Wolfgang Keller wrote:

I wanted to simplify the schema and make it more "readable" for  
clueless morons like me. >;->


Simplifying the schema is fine (and good!) as long as it exhibits the  
same behavior as the more complex one: often in the course of  
simplifying you find a solution yourself. However, we cannot help you  
if you don't provide adequate information.


Michael Glaesemann
grzm seespotcode net



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Postgre and XML

2007-11-19 Thread Peter Eisentraut
On Monday, 19 November 2007, x asasaxax wrote:
>    I'm interested in using XML with PostgreSQL. I use PostgreSQL version 8.2
> on Windows XP. I would like to know how I can enable XML in
> PostgreSQL.

That depends on what you want to do with it.  XML is quite a broad topic.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Composite types for composite primary/foreign keys?

2007-11-19 Thread Wolfgang Keller

Hello,

and thanks for your reply.


I'm sorry, but I'm apparently too dumb to figure out for
myself whether this means that I can use a single composite-type
column as a primary / foreign key or not...?


What have you actually tried?


I wanted to simplify the schema and make it more "readable" for 
clueless morons like me. >;->



You can learn a lot by a few minutes of
exploration at a psql prompt.


Yes, I will have to get used to using the Postgres prompt just like I 
do with the Python prompt. ;-)


Sincerely,

Wolfgang Keller



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Postgre and XML

2007-11-19 Thread x asasaxax
Hi,

   I'm interested in using XML with PostgreSQL. I use PostgreSQL version 8.2 on
Windows XP. I would like to know how I can enable XML in PostgreSQL.

Do you know whether it is safe to use the XML functions of PostgreSQL in
commercial applications? How trustworthy is this module? Can anyone
explain to me how to install the XML module?

Thanks for the help.


Re: [ADMIN] [GENERAL] Error while starting postgreSQL service

2007-11-19 Thread Richard Huxton

Bebarta, Simanchala wrote:
>>
>> Does the problem go away when you put shared_buffers back to a lower
>> number?
>>

Yes, when I set the value to 1300 MB, everything goes fine. Any value
higher than this value does not allow me to start the service.


It's quite possible that you can't go any higher (I don't know enough 
about Windows' memory handling). It's quite possible you don't want to 
anyway.

Don't forget, you want to allow space for the following:
- Windows itself
- Other applications
- Each backend of PostgreSQL will need its own memory when running queries.
- Filesystem caching.

PostgreSQL isn't like some other RDBMS where you dedicate a big block of 
memory just to it.
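As an illustration only (not a recommendation for any particular workload), a conservative postgresql.conf starting point for a 4 GB Windows box might look like:

```
# postgresql.conf sketch -- values are illustrative assumptions
shared_buffers = 512MB      # modest; leaves room for the OS cache and backends
work_mem = 8MB              # per sort/hash operation, per backend
effective_cache_size = 2GB  # planner hint only, not an allocation
```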


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [ADMIN] [GENERAL] Error while starting postgreSQL service

2007-11-19 Thread Bebarta, Simanchala
Yes, when I set the value to 1300 MB, everything goes fine. Any value
higher than this value does not allow me to start the service.

sima

-Original Message-
From: Richard Huxton [mailto:[EMAIL PROTECTED] 
Sent: Monday, November 19, 2007 3:40 PM
To: Bebarta, Simanchala
Cc: PG-General Mailing List; [EMAIL PROTECTED]
Subject: Re: [ADMIN] [GENERAL] Error while starting postgreSQL service

Don't forget to cc: the list, other people will probably know more than
me.

Bebarta, Simanchala wrote:
> 
> Hi,
> 
> My postgreSQL 8.2.4 is installed on Windows server 2003 Enterprise
> Edition.

OK. Thanks.

8.2.5 has been released, and you should consider upgrading soon. The 
Windows-related changes seem to be:
# Windows socket and semaphore improvements (Magnus)
# Make pg_ctl -w work properly in Windows service mode (Dave Page)
# Fix memory allocation bug when using MIT Kerberos on Windows (Magnus)
# Suppress timezone name (%Z) in log timestamps on Windows because of 
possible encoding mismatches (Tom)

http://www.postgresql.org/docs/8.2/static/release-8-2-5.html

Could the Kerberos-related bug be affecting you?

> Windows Server configuration:
> RAM 4GB
> HDD 50 GB
> Processor: Pentium 4 CPU 3.20 GHz
> 
>  
> When the shared_buffers = 2048 MB is set in the Postgresql.conf file,
> while starting the service, I got the error
> 
>  Could not open process token "error code-5".

Does the problem go away when you put shared_buffers back to a lower
number?

-- 
   Richard Huxton
   Archonet Ltd

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [ADMIN] [GENERAL] Error while starting postgreSQL service

2007-11-19 Thread Richard Huxton

Don't forget to cc: the list, other people will probably know more than me.

Bebarta, Simanchala wrote:


Hi,

My postgreSQL 8.2.4 is installed on Windows server 2003 Enterprise
Edition.


OK. Thanks.

8.2.5 has been released, and you should consider upgrading soon. The 
Windows-related changes seem to be:

# Windows socket and semaphore improvements (Magnus)
# Make pg_ctl -w work properly in Windows service mode (Dave Page)
# Fix memory allocation bug when using MIT Kerberos on Windows (Magnus)
# Suppress timezone name (%Z) in log timestamps on Windows because of 
possible encoding mismatches (Tom)


http://www.postgresql.org/docs/8.2/static/release-8-2-5.html

Could the Kerberos-related bug be affecting you?


Windows Server configuration:
RAM 4GB
HDD 50 GB
Processor: Pentium 4 CPU 3.20 GHz

 
When the shared_buffers = 2048 MB is set in the Postgresql.conf file,

while starting the service, I got the error

 Could not open process token "error code-5".


Does the problem go away when you put shared_buffers back to a lower number?

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Error while starting postgreSQL service

2007-11-19 Thread Richard Huxton

Bebarta, Simanchala wrote:


Need immediate attention to my concern.


OK.


While starting the postgreSQL service, I get the error message
Could not open process token "error code-5"


What version of PostgreSQL?
What operating-system?
Have you had any problems before this point?

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] Error while starting postgreSQL service

2007-11-19 Thread Bebarta, Simanchala
Hi,

 

Need immediate attention to my concern.

 

While starting the postgreSQL service, I get the error message

Could not open process token "error code-5"

 

Appreciate if any one can help me out on this issue.

 

Thanks & Regards,

Sima

 



Re: [GENERAL] Function Problems

2007-11-19 Thread Richard Huxton

Francis Waweru wrote:

I am creating a function that will sort data from a table
that has over 400,000 rows. I want to pass variables from a Java
application I am running, but I have not been able to do so. Please
advise on how to pass a variable from an application to a function
so that I can speed up my select queries.


You haven't said what you've tried so far. There's nothing special about 
passing variables to a function - it's just part of a query.
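For example (function name, table, and column here are hypothetical): define the function once on the server, then call it as an ordinary parameterized query, e.g. from JDBC with a PreparedStatement and SELECT * FROM rows_since(?):

```sql
-- A hypothetical set-returning function taking a cutoff date parameter:
CREATE OR REPLACE FUNCTION rows_since(p_from date)
RETURNS SETOF mytable AS $$
    SELECT * FROM mytable WHERE created_on >= $1 ORDER BY created_on;
$$ LANGUAGE sql STABLE;

-- Called from any client as a plain query:
SELECT * FROM rows_since('2007-01-01');
```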



--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Function Problems

2007-11-19 Thread Francis Waweru
I am creating a function that will sort data from a table
that has over 400,000 rows. I want to pass variables from a Java
application I am running, but I have not been able to do so. Please
advise on how to pass a variable from an application to a function
so that I can speed up my select queries.
Waweru

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings