[GENERAL] Hash problem

2005-03-11 Thread Madison Kelly
Hi all,
  I am back to trying to get the forum software called 'Ikonboard' 
working under postgres (it's advertised as being supported but the 
developers confirmed their pgSQL person left a while back).

  At the moment I am (well, the program is) trying to perform 
this query:

SELECT * FROM tf_forum_posts WHERE FORUM_ID = HASH(0x868d4e0) AND 
TOPIC_ID = AND QUEUED = '0' ORDER BY POST_DATE LIMIT 2;

which throws this error:
ERROR:  parser: parse error at or near "x868d4e0" at character 53
  Most of the problems so far have stemmed from changes from (I think) 
pgSQL 7.2 (the last version the program was actively tested on) to 7.4. 
I have yet to test it under 8.0. Does this seem like an obvious syntax 
error in the newer versions of pgSQL? If so, what is a valid substitution?
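
For reference, "HASH(0x868d4e0)" is how Perl stringifies a hash reference, 
so the application is almost certainly interpolating a hash instead of a 
scalar value into the SQL; the empty right-hand side of "TOPIC_ID =" points 
the same way. A sketch of what the statement presumably should look like 
once real values are supplied (the numbers below are invented):

SELECT * FROM tf_forum_posts
 WHERE FORUM_ID = '12'
   AND TOPIC_ID = '345'
   AND QUEUED = '0'
 ORDER BY POST_DATE
 LIMIT 2;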

  Thanks a lot! (I'm not -quite- a n00b but I am still learning. :) )
Madison
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Madison Kelly (Digimer)
TLE-BU, The Linux Experience; Back Up
http://tle-bu.thelinuxexperience.com
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Oracle's Virtual Private Database functionality

2005-03-11 Thread Qingqing Zhou
Check out the CREATE RULE command.

Regards,
Qingqing

"Doug Bloebaum" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> [ Sorry if this appears twice on the list: sent it via a non-subscribed
email the first time around.  Of course, while waiting for it to appear I
figured out a workaround; see the "Late breaking update" at the end. ]
>
> In the spirit of "tell us what you're trying to do..."
>
> I'd like to mimic a subset of Oracle's Virtual Private Database
functionality (see
http://dba-oracle.com/oracle_news/2004_1_16_vpd_security.htm for an
overview) in Postgres: based on some per-connection setting, I'd like a
query to return a different set of rows.  In VPD, the Oracle engine actually
applies a defined predicate (say, country_code='USA') to every query.  The
idea is that a given set of users can only see rows in a table that match
this predicate, while the other rows are invisible to them.
>
> Now for the "how I tried to do it" part...
>
> I thought I was on my way to doing this in Postgres by making use of
schemas and search_path:
>
> CREATE USER user1 UNENCRYPTED PASSWORD 'user1';
> CREATE SCHEMA AUTHORIZATION user1;
>
> CREATE SCHEMA canada;
> GRANT USAGE ON SCHEMA canada TO user1;
> CREATE TABLE canada.row_limiter (country_code VARCHAR(3));
> INSERT INTO canada.row_limiter VALUES('CAN');
> GRANT SELECT ON canada.row_limiter TO user1;
>
> CREATE SCHEMA usa;
> GRANT USAGE ON SCHEMA usa TO user1;
> CREATE TABLE usa.row_limiter (country_code VARCHAR(3));
> INSERT INTO usa.row_limiter VALUES('USA');
> GRANT SELECT ON usa.row_limiter TO user1;
>
> \CONNECT - user1
>
> SET SEARCH_PATH TO '$user',canada;
>
> CREATE TABLE my_data (country_code VARCHAR(3), data NUMERIC);
> INSERT INTO my_data VALUES('USA',11);
> INSERT INTO my_data VALUES('USA',12);
> INSERT INTO my_data VALUES('USA',13);
>
> INSERT INTO my_data VALUES('CAN',21);
> INSERT INTO my_data VALUES('CAN',22);
> INSERT INTO my_data VALUES('CAN',23);
>
> CREATE VIEW my_data_v AS
> SELECT md.*
>   FROM my_data md,
>row_limiter rl
>  WHERE rl.country_code=md.country_code;
>
> SELECT * FROM my_data_v;
>
> -- Looks great - I only see Canadian data!!
> -- country_code | data
> -- --------------+------
> -- CAN  |   21
> -- CAN  |   22
> -- CAN  |   23
>
> SET SEARCH_PATH TO '$user',usa;
>
> SELECT * FROM my_data_v;
>
> -- Darn, I still only see Canadian data :-(
> -- country_code | data
> -- --------------+------
> -- CAN  |   21
> -- CAN  |   22
> -- CAN  |   23
>
> \d my_data_v
>
> View definition:
>  SELECT md.country_code, md.data
>FROM my_data md, CANADA.row_limiter rl -- <--ah, and here's the
reason...
>   WHERE rl.country_code::text = md.country_code::text;
>
>
> It's apparent why: the view determines which table it's going to use at
view creation time, not at query time, so this method is no good.
>
> Is there a "right way" to accomplish what I'm trying to do?
>
> Late breaking update:
>
> Looks like a function returning SETOF does the trick, so it must not
resolve the table early like the view does:
>
> CREATE OR REPLACE FUNCTION my_data_f() RETURNS SETOF my_data AS '
>   SELECT md.country_code,
>  md.data
> FROM my_data md, row_limiter rl
>WHERE rl.country_code = md.country_code;
> ' LANGUAGE sql;
>
> SET SEARCH_PATH TO '$user',usa;
> SELECT * FROM my_data_f();
>  country_code | data
> --+--
>  USA  |   11
>  USA  |   12
>  USA  |   13
>
> SET SEARCH_PATH TO '$user',canada;
> SELECT * FROM my_data_f();
>  country_code | data
> --+--
>  CAN  |   21
>  CAN  |   22
>  CAN  |   23
>
> Can I rely on this behavior?  Is this the best way to do what I'm after?
>
> ---(end of broadcast)---
> TIP 9: the planner will ignore your desire to choose an index scan if your
>   joining column's datatypes do not match
>



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] [PATCHES] A way to let Vacuum warn if FSM settings are low. [final?]

2005-03-11 Thread Bruce Momjian

I have applied your patch with minor modifications.  Applied version
attached.

I think the "pages" message:

INFO:  free space map: 44 relations, 28 pages stored; 704 total pages 
used
DETAIL:  FSM size: 1000 relations + 2 pages = 182 kB shared memory.

should remain DEBUG2 for non-VERBOSE, and INFO for VERBOSE.  The
information is pretty complex and probably of little interest to a
typical vacuum user.  In fact, the new messages make the information
even less important because problems are now flagged.

I adjusted your output levels for the new messages.  I realize the
"checkpoint warning" is a LOG message, but it has to be because there is
no active session when a checkpoint is being run.  In the case of
VACUUM, there is an active session so I think the messages should be
sent to that session.  Sending them to the logs and not to the user
seems unusual because they are the ones who asked for the VACUUM.  I
realize they might not be able to change the server settings.
 
These new messages:

NOTICE:  max_fsm_relations(1000) equals the number of relations checked
HINT:  You have >= 44 relations.
Consider increasing the configuration parameter "max_fsm_relations".
NOTICE:  the number of page slots needed (704) exceeds max_fsm_pages 
(2)
HINT:  Consider increasing the configuration parameter 
"max_fsm_relations"
to a value over 704.
VACUUM

should be NOTICE.  NOTICE is for unusual events that are not warnings,
and that fits these cases.  If the administrator wants those in the
logs, he can set log_min_messages to NOTICE.

I also adjusted your output strings to more closely match our checkpoint
warning message.

Another idea would be to send the output to both the client and the logs
by default.
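
For quick reference, the settings those hints point at can be inspected 
and raised like this (the values are examples only; max_fsm_relations and 
max_fsm_pages live in shared memory, so changing them in postgresql.conf 
requires a server restart):

SHOW max_fsm_relations;
SHOW max_fsm_pages;
-- then, in postgresql.conf, something like:
--   max_fsm_relations = 1000
--   max_fsm_pages = 50000
-- and, to also get NOTICE-level messages into the server log:
--   log_min_messages = notice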

---

Ron Mayer wrote:
> On Sun, 27 Feb 2005, Simon Riggs wrote:
> > On Fri, 2005-02-25 at 16:48 -0800, Ron Mayer wrote:
> > > Getting closer?
> > For me, yes.  [...]
> > The not-warnings seem a little wordy for me, but they happen when and
> > how I would hope for. 
> > 
> > So, for me, it looks like a polish of final wording and commit.
> 
> Thanks for the feedback.  How about I replace the grammatically poor:
> 
>  LOG:  max_fsm_relations(%d) is equal than the number of relations vacuum 
> checked (%d)",
>  HINT:  You probably have more than %d relations. You should increase 
> max_fsm_relations. Pages needed for 
> max_fsm_pages may have been underestimated. 
> 
> with this:
> 
>  LOG:  max_fsm_relations(100) equals the number of relations checked
>  HINT:  You have >= 100 relations. You should increase max_fsm_relations.
> 
> 
> and replace this:
> 
>  LOG:  max_fsm_pages(%d) is smaller than the actual number of page slots 
> needed(%.0f)",
>  HINT:  You may want to increase max_fsm_pages to be larger than %.0f"
> 
> with the slightly smaller
> 
>  LOG:  the number of page slots needed (2832) exceeds max_fsm_pages (1601)
>  HINT:  You may want to increase max_fsm_pages to a value over 2832.
> 
> 
> These updated messages would fit on an 80-column display if the numbers 
> aren't too big.   Here's 80 characters for a quick reference.
>  
> 01234567890123456789012345678901234567890123456789012345678901234567890123456789
> The "pages needed...underestimate" part of the first message was no longer 
> useful anyway, since fsm_pages information is no longer logged when the 
> max_fsm_relations condition occurs.
> 
>   Ron
> 
> The patch now looks like:
> 
> 
> % diff -u postgresql-8.0.1/src/backend/storage/freespace/freespace.c 
> postgresql-patched/src/backend/storage/freespace/freespace.c
> --- postgresql-8.0.1/src/backend/storage/freespace/freespace.c2004-12-31 
> 14:00:54.0 -0800
> +++ postgresql-patched/src/backend/storage/freespace/freespace.c
> 2005-02-27 11:54:39.776546200 -0800
> @@ -705,12 +705,25 @@
>  /* Convert stats to actual number of page slots needed */
>  needed = (sumRequests + numRels) * CHUNKPAGES;
>  
> -ereport(elevel,
> -(errmsg("free space map: %d relations, %d pages stored; %.0f 
> total pages needed",
> +ereport(INFO,
> +(errmsg("free space map: %d relations, %d pages stored; %.0f 
> total pages used",
>  numRels, storedPages, needed),
> - errdetail("Allocated FSM size: %d relations + %d pages = %.0f 
> kB shared memory.",
> + errdetail("FSM size: %d relations + %d pages = %.0f kB shared 
> memory.",
> MaxFSMRelations, MaxFSMPages,
> (double) FreeSpaceShmemSize() / 1024.0)));
> +
> +if (numRels == MaxFSMRelations)
> +ereport(LOG,
> +(errmsg("max_fsm_relations(%d) equals the number of relations 
> checked",
> + MaxFSMRelations),
> + errhint("You have >= 

Re: [JDBC] [GENERAL] MS Access to PostgreSQL

2005-03-11 Thread Brian Gunning
Hi Shelby,

I don't think the problem is with the backslashes. They are escaped and
seem to be working fine, e.g. to insert a backslash in Access I had to use
one escape character ('\\') whereas in PostgreSQL four backslashes ('\\\\')
are required. The line that inserts the % is as follows...

  String aPath = group.getPath() + aOldGroupName + "\\%";

It just doesn't seem to be having the same effect in PostgreSQL as in
Access.



B
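
For reference, PostgreSQL's string parser and LIKE each consume one level 
of backslash escaping, so a pattern written as an ordinary quoted literal 
needs four backslashes per literal backslash (see Michael Fuhr's reply 
elsewhere in this digest). Roughly, with the identifiers from this thread:

SELECT "groupID", "fullpath"
  FROM "groups"
 WHERE "fullpath" LIKE 'folder\\\\another folder\\\\%';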

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Shelby Cain
Sent: 11 March 2005 17:32
To: William Shatner; [EMAIL PROTECTED];
pgsql-general@postgresql.org; pgsql-jdbc@postgresql.org
Subject: Re: [JDBC] [GENERAL] MS Access to PostgreSQL

--- William Shatner <[EMAIL PROTECTED]> wrote:
> I have recently migrated from MS Access to
> PostgreSQL.Previously I had
> a SQL command
> 
>   ResultSet aGroupResultSet =
> aGroupPathStmt.executeQuery(
>   "SELECT \"groupID\",\"fullpath\" FROM
> \"groups\" WHERE
> \"fullpath\" Like '" +
>   aPath + "'");
> 
> 
> 
> where aPath was equal to  'folder\another folder\%'.
> 
> The field to be edited stores the full path in the
> format
> 'folder\folder1\folder2' and so on...
> The purpose being to change all groups at this level
> of the
> hieracarchy and below, this was achieved using the
> '%' in Access, this
> however doesn't seem to work in PostgreSQL, it
> doesn't error out but
> it just seems to see the '%' as a normal character.
> 
> How can this be done in PostgreSQL?
> 

I suspect that, unlike Access, PostgreSQL will
intrepret C-style escape sequences (ie: \r, \n, \t) so
you'll need to properly escape the backslash in aPath
like so:

folder1\\folder2\\folder3

Regards,

Shelby Cain




---(end of broadcast)---
TIP 8: explain analyze is your friend



---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] Best practices: Handling Daylight-saving time

2005-03-11 Thread Mike Benoit
Instead of spikes or dips, with your method customers will just be
confused as to why the labels skip an hour, or have two of the same
hour. It would make for a more accurate graph though, you're right.

6 of 1, half a dozen of another I guess.


On Fri, 2005-03-11 at 23:33 +, Andrew - Supernews wrote:
> On 2005-03-11, Mike Benoit <[EMAIL PROTECTED]> wrote:
> > Here is a scenario I ran in to with collecting bandwidth usage and
> > displaying it back in graph form to customers.
> >
> > You can store the timestamps in GMT, but the customer wants to see when
> > spikes happen in his localtime, which most likely has DST. So twice a
> > year, you are either compressing two hours of bandwidth usage into one,
> > or the opposite, stretching one hour in to two, which of course produces
> > somewhat odd looking graphs during that time.
> 
> That seems an odd way to handle it. If you graph the data by days according
> to the customer's time, then on one day in the year your graph is one hour
> smaller, and on another day it is one hour larger. The point to notice is
> that the customer's local time should affect only the _labels_ on the graph,
> and possibly your choice of start and end times, and not the _data_ being
> plotted.
> 
> For example, suppose I have a table:
> 
> create table tztst (ts timestamptz primary key, value float8 not null);
> 
> and I want to plot individual days from it in the customer's timezone:
> 
> test=> set timezone to 'America/Denver';  -- or wherever he is
> SET
> 
> test=> select ts::time,value from tztst
>  where ts between '2005-04-02 00:00:00' and '2005-04-02 23:59:59' order by ts;
> ts|  value   
> --+--
>  00:00:00 | 286.764410064167
>  01:00:00 | 291.294525072763
>  02:00:00 | 294.912455364789
>  03:00:00 | 297.582051776698
>  04:00:00 | 299.276640583591
>  05:00:00 | 299.979290014267
>  06:00:00 |  299.68297942788
>  07:00:00 | 298.390669461862
>  08:00:00 | 296.115272450212
>  09:00:00 | 292.879523407724
>  10:00:00 | 288.715752869235
>  11:00:00 | 283.665563853606
>  12:00:00 | 277.779416180109
>  13:00:00 | 271.116122290598
>  14:00:00 | 263.742259615024
>  15:00:00 | 255.731505351766
>  16:00:00 |  247.16390030942
>  17:00:00 | 238.125049165494
>  18:00:00 | 228.705265132773
>  19:00:00 | 218.998667579544
>  20:00:00 | 209.102241619985
>  21:00:00 |  199.11486907096
>  22:00:00 | 189.136340457592
>  23:00:00 | 179.266357939324
> (24 rows)
> 
> test=> select ts::time,value from tztst
>  where ts between '2005-04-03 00:00:00' and '2005-04-03 23:59:59' order by ts;
> ts|  value   
> --+--
>  00:00:00 | 169.603539118895
>  01:00:00 | 160.244431687857
>  03:00:00 | 151.282548753949
>  04:00:00 | 142.807434489044
>  05:00:00 | 134.903769433375
>  06:00:00 | 127.650524395576
>  07:00:00 | 121.120171402458
>  08:00:00 | 115.377959582483
>  09:00:00 | 110.481263218032
>  10:00:00 | 106.479008480546
>  11:00:00 | 103.411184576393
>  12:00:00 | 101.308444187935
>  13:00:00 |  100.19179720206
>  14:00:00 | 100.072400786337
>  15:00:00 | 100.951447910284
>  16:00:00 | 102.820155425614
>  17:00:00 | 105.659851824544
>  18:00:00 | 109.442163799338
>  19:00:00 | 114.129299739007
>  20:00:00 | 119.674427330605
>  21:00:00 | 126.022141492211
>  22:00:00 | 133.109017962198
>  23:00:00 | 140.864247013488
> (23 rows)
> 
> test=> select ts::time,value from tztst
>  where ts between '2005-10-30 00:00:00' and '2005-10-30 23:59:59' order by ts;
> ts|  value   
> --+--
>  00:00:00 | 110.349122831853
>  01:00:00 | 114.741289638094
>  01:00:00 | 119.837588745288
>  02:00:00 | 125.595930978012
>  03:00:00 | 131.968759497219
>  04:00:00 | 138.903442561358
>  05:00:00 | 146.342708199957
>  06:00:00 | 154.225117209803
>  07:00:00 | 162.485570567354
>  08:00:00 | 171.055847066766
>  09:00:00 | 179.865166743321
>  10:00:00 | 188.840775429059
>  11:00:00 | 197.908545612907
>  12:00:00 |  206.99358864294
>  13:00:00 | 216.020873214721
>  14:00:00 | 224.915845037786
>  15:00:00 | 233.605042562575
>  16:00:00 | 242.016703682664
>  17:00:00 | 250.081358401684
>  18:00:00 | 257.732402570221
>  19:00:00 | 264.906647954345
>  20:00:00 | 271.544844092858
>  21:00:00 | 277.592167633387
>  22:00:00 | 282.998675105977
>  23:00:00 |  287.71971539486
> (25 rows)
> 
> All of these can be converted to meaningful (and un-distorted) graphs.
> 
> -- 
> Andrew, Supernews
> http://www.supernews.com - individual and corporate NNTP services
> 
> ---(end of broadcast)---
> TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
-- 
Mike Benoit <[EMAIL PROTECTED]>




[GENERAL] Postgres jobs mailing list?

2005-03-11 Thread Jerry Sievers
Hello.

The PG mailing lists web page contains artifacts of a jobs list but no
such list appeared in the dropdown of available lists.

I am referring to;

http://www.postgresql.org/community/lists/subscribe

Is there such a resource and if so, could someone direct me to it?

Thanks.


-- 
---
Jerry Sievers   305 854-3001 (home)     WWW ECommerce Consultant
                305 321-1144 (mobile)   http://www.JerrySievers.com/

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Best practices: Handling Daylight-saving time

2005-03-11 Thread Andrew - Supernews
On 2005-03-11, Mike Benoit <[EMAIL PROTECTED]> wrote:
> Here is a scenario I ran in to with collecting bandwidth usage and
> displaying it back in graph form to customers.
>
> You can store the timestamps in GMT, but the customer wants to see when
> spikes happen in his localtime, which most likely has DST. So twice a
> year, you are either compressing two hours of bandwidth usage into one,
> or the opposite, stretching one hour in to two, which of course produces
> somewhat odd looking graphs during that time.

That seems an odd way to handle it. If you graph the data by days according
to the customer's time, then on one day in the year your graph is one hour
smaller, and on another day it is one hour larger. The point to notice is
that the customer's local time should affect only the _labels_ on the graph,
and possibly your choice of start and end times, and not the _data_ being
plotted.

For example, suppose I have a table:

create table tztst (ts timestamptz primary key, value float8 not null);

and I want to plot individual days from it in the customer's timezone:

test=> set timezone to 'America/Denver';  -- or wherever he is
SET

test=> select ts::time,value from tztst
 where ts between '2005-04-02 00:00:00' and '2005-04-02 23:59:59' order by ts;
ts|  value   
--+--
 00:00:00 | 286.764410064167
 01:00:00 | 291.294525072763
 02:00:00 | 294.912455364789
 03:00:00 | 297.582051776698
 04:00:00 | 299.276640583591
 05:00:00 | 299.979290014267
 06:00:00 |  299.68297942788
 07:00:00 | 298.390669461862
 08:00:00 | 296.115272450212
 09:00:00 | 292.879523407724
 10:00:00 | 288.715752869235
 11:00:00 | 283.665563853606
 12:00:00 | 277.779416180109
 13:00:00 | 271.116122290598
 14:00:00 | 263.742259615024
 15:00:00 | 255.731505351766
 16:00:00 |  247.16390030942
 17:00:00 | 238.125049165494
 18:00:00 | 228.705265132773
 19:00:00 | 218.998667579544
 20:00:00 | 209.102241619985
 21:00:00 |  199.11486907096
 22:00:00 | 189.136340457592
 23:00:00 | 179.266357939324
(24 rows)

test=> select ts::time,value from tztst
 where ts between '2005-04-03 00:00:00' and '2005-04-03 23:59:59' order by ts;
ts|  value   
--+--
 00:00:00 | 169.603539118895
 01:00:00 | 160.244431687857
 03:00:00 | 151.282548753949
 04:00:00 | 142.807434489044
 05:00:00 | 134.903769433375
 06:00:00 | 127.650524395576
 07:00:00 | 121.120171402458
 08:00:00 | 115.377959582483
 09:00:00 | 110.481263218032
 10:00:00 | 106.479008480546
 11:00:00 | 103.411184576393
 12:00:00 | 101.308444187935
 13:00:00 |  100.19179720206
 14:00:00 | 100.072400786337
 15:00:00 | 100.951447910284
 16:00:00 | 102.820155425614
 17:00:00 | 105.659851824544
 18:00:00 | 109.442163799338
 19:00:00 | 114.129299739007
 20:00:00 | 119.674427330605
 21:00:00 | 126.022141492211
 22:00:00 | 133.109017962198
 23:00:00 | 140.864247013488
(23 rows)

test=> select ts::time,value from tztst
 where ts between '2005-10-30 00:00:00' and '2005-10-30 23:59:59' order by ts;
ts|  value   
--+--
 00:00:00 | 110.349122831853
 01:00:00 | 114.741289638094
 01:00:00 | 119.837588745288
 02:00:00 | 125.595930978012
 03:00:00 | 131.968759497219
 04:00:00 | 138.903442561358
 05:00:00 | 146.342708199957
 06:00:00 | 154.225117209803
 07:00:00 | 162.485570567354
 08:00:00 | 171.055847066766
 09:00:00 | 179.865166743321
 10:00:00 | 188.840775429059
 11:00:00 | 197.908545612907
 12:00:00 |  206.99358864294
 13:00:00 | 216.020873214721
 14:00:00 | 224.915845037786
 15:00:00 | 233.605042562575
 16:00:00 | 242.016703682664
 17:00:00 | 250.081358401684
 18:00:00 | 257.732402570221
 19:00:00 | 264.906647954345
 20:00:00 | 271.544844092858
 21:00:00 | 277.592167633387
 22:00:00 | 282.998675105977
 23:00:00 |  287.71971539486
(25 rows)

All of these can be converted to meaningful (and un-distorted) graphs.

-- 
Andrew, Supernews
http://www.supernews.com - individual and corporate NNTP services

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Replication suggestions (weak multi-master)

2005-03-11 Thread Andrew Sullivan
On Tue, Mar 01, 2005 at 11:41:11AM +0100, Martijn van Oosterhout wrote:
> I'm impressed by Slony and I could probably make it do what I want with
> enough trickery. But again the need to be able to connect from any
> server to any other. To put it in Slony terms, the origin never needs
> to move. If the machine is down, nothing needs to be updated.
> 
> I'm considering starting with dbmirror and simply writing a script to
> make it do what I want. But I'm asking in case someone has a simple
> solution I've overlooked.

My suspicion is that the log shipping support in Slony 1 (HEAD right
now) or the approach outlined at OSCON last year (anonymous file
shipping based on dbmirror -- mighty cool) is something closeish to
what you'll need.

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
This work was visionary and imaginative, and goes to show that visionary
and imaginative work need not end up well. 
--Dennis Ritchie

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Best practices: Handling Daylight-saving time

2005-03-11 Thread Scott Marlowe
On Fri, 2005-03-11 at 13:47, Mike Benoit wrote:
> Not exactly...
> 
> Here is a scenario I ran in to with collecting bandwidth usage and
> displaying it back in graph form to customers.
> 
> You can store the timestamps in GMT, but the customer wants to see when
> spikes happen in his localtime, which most likely has DST. So twice a
> year, you are either compressing two hours of bandwidth usage into one,
> or the opposite, stretching one hour in to two, which of course produces
> somewhat odd looking graphs during that time.
> 
> Besides making note of DST on the graph so the customer can see it, I
> haven't found an elegant solution to this problem.

I would think that if you stored them with the local timezone, and used 
AT TIME ZONE to convert them to GMT for sorting, then they should show
up in the right order.  Just a guess.
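
A minimal sketch of the AT TIME ZONE operator (table, column and zone names 
are made up): with timestamptz storage the values sort unambiguously, and 
AT TIME ZONE only changes how they are rendered.

CREATE TABLE bw_samples (ts timestamptz NOT NULL, bytes bigint NOT NULL);

-- the same instant rendered as customer-local wall-clock time and as GMT
SELECT ts AT TIME ZONE 'America/Denver' AS local_time,
       ts AT TIME ZONE 'GMT'            AS gmt_time,
       bytes
  FROM bw_samples
 ORDER BY ts;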

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Best practices: Handling Daylight-saving time

2005-03-11 Thread Mike Benoit
Not exactly...

Here is a scenario I ran in to with collecting bandwidth usage and
displaying it back in graph form to customers.

You can store the timestamps in GMT, but the customer wants to see when
spikes happen in his localtime, which most likely has DST. So twice a
year, you are either compressing two hours of bandwidth usage into one,
or the opposite, stretching one hour in to two, which of course produces
somewhat odd looking graphs during that time.

Besides making note of DST on the graph so the customer can see it, I
haven't found an elegant solution to this problem.


On Fri, 2005-03-11 at 12:19 -0500, Christopher Browne wrote:
> [EMAIL PROTECTED] (Együd Csaba) wrote:
> > I'd like to ask your opinion about how to handle DST on a 7/24 system.
> > Where should it be handled: on the server side or on the client side? And
> > how could I (at all could I???) make it transparent? 
> 
> Don't use DST.
> 
> Use GMT/UTC.
> 
> That makes the issue go away.
-- 
Mike Benoit <[EMAIL PROTECTED]>


signature.asc
Description: This is a digitally signed message part


Re: [GENERAL] 8.0 Setup problems

2005-03-11 Thread Tom Lane
"DEV" <[EMAIL PROTECTED]> writes:
> zic.o(.text+0x3c40): undefined reference to `__ctype_b'
> zic.o(.text+0x3c5b): undefined reference to `__ctype_tolower'

Apparently your /usr/include/ctype.h file doesn't agree with your libc.
Some googling on __ctype_tolower suggests that there were changes in
that part of glibc a couple years ago.  I think you've probably updated
the library and not the header file, or perhaps vice versa.

regards, tom lane

---(end of broadcast)---
TIP 8: explain analyze is your friend


[GENERAL] 8.0 Setup problems

2005-03-11 Thread DEV

Hello,

I am trying to build and install 8.0 on RedHat 9.0. The configure runs
without a problem, but when I type make I get:

gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith
-Wendif-labels -fno-strict-aliasing zic.o ialloc.o scheck.o localtime.o
-L../../src/port  -Wl,-rpath,/usr/local/pgsql/lib -lpgport -lz -lreadline
-ltermcap -lcrypt -lresolv -lnsl -ldl -lm -lbsd  -o zic

zic.o: In function `lowerit':
zic.o(.text+0x3c40): undefined reference to `__ctype_b'
zic.o(.text+0x3c5b): undefined reference to `__ctype_tolower'
zic.o: In function `getfields':
zic.o(.text+0x3e36): undefined reference to `__ctype_b'
zic.o(.text+0x3eb0): undefined reference to `__ctype_b'
zic.o(.text+0x3ec3): undefined reference to `__ctype_b'
collect2: ld returned 1 exit status
make[2]: *** [zic] Error 1
make[2]: Leaving directory `/home/doyleb/postgresql-8.0.1/src/timezone'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/home/doyleb/postgresql-8.0.1/src'
make: *** [all] Error 2

What do I need to do to fix this?

Currently I am running 7.3.2 and need to get some of the functionality
that is in 8.0.

Re: [GENERAL] Best practices: Handling Daylight-saving time

2005-03-11 Thread Randall Nortman
On Fri, Mar 11, 2005 at 03:25:28PM +0100, Együd Csaba wrote:
> Hi All,
> I'd like to ask your opinion about how to handle DST on a 7/24 system.
> Where should it be handled: on the server side or on the client side? And
> how could I (at all could I???) make it transparent? 

As others have mentioned, store timestamps on the server in UTC, and
translate to/from local time on the client side if desired.  Postgres
can do this for you in the query; just look in the docs to see how.

I have personally encountered situations where that is not quite
adequate, however, because the data elements may originate in
different time zones, and it may be necessary to display in the
original time zone instead of (or in addition to) the local time zone
of the client.  (Weather data, for example, should generally be
displayed using the time zone of the source of the data.)  In this
case, you must store some representation of the source time zone in a
separate field, in addition to the UTC timestamp.  You can then use
both fields together to retrieve the correct original local time.
(Make sure to note in that time zone field whether or not DST is
observed in that locale.)  Exactly how to do this depends on your
application.
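
A rough sketch of that arrangement, with invented names; the extra column 
lets the original wall-clock time be reconstructed from the UTC value:

CREATE TABLE readings (
    ts        timestamptz NOT NULL,   -- the instant, stored internally as UTC
    source_tz text        NOT NULL,   -- e.g. 'Europe/Budapest'
    value     numeric     NOT NULL
);

SELECT ts AT TIME ZONE source_tz AS source_local_time, value
  FROM readings;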

> Or we must bow to the fact that twice a year there are two unusable hours?
> If it cannot be solved technically, then it is acceptable, but if there is a
> chance to do it, I'd like to try it. 

When timestamps are stored in UTC, the missing (when skipping forward)
and duplicate (when setting back) hours are only an issue in the user
interface, for entry and display of the times.  The missing hours are
generally easier to deal with, since the only thing affected is the
calculation of durations.  If your interface displays elapsed time,
make sure you take this into account.  (The easy way is to do the
duration calculations in the database, using UTC.)  Duplicate times
are more difficult: when displaying, you need to indicate whether DST
was in effect or not (i.e., was it the first 02:30 or the second?).
If times are to be entered manually in local time, the interface needs
to notice when an ambiguous time has been entered and ask the user to
disambiguate somehow.  This is a hell of a lot of trouble to go to for
something that will only come up very rarely or never in most
applications, but you have to do it if you want to get it right.
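
On the duration point above: subtracting two timestamptz values yields an 
interval that DST transitions cannot distort, because both operands are 
absolute instants. With hypothetical column names:

SELECT ended_at - started_at AS elapsed FROM work_log;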

> Our system stores 200-1000 measured data per minute, coming from
> substations. The substations' clocks are synchronized periodically as well.
> When the DST is switched there is 1 hour of data missing (or overlapped).
> Certainly the client machines are automatically adjusted for the DST.

If you have control over the production of data on these client
machines, just make sure it is produced in UTC, and the issue goes
away.  Otherwise, you can convert their local time back to UTC for
storage in the database, but then you have the duplicate hour
ambiguity to deal with.  If you know the data will be coming in
sequentially and/or in near real-time, you can probably figure it out
with a little extra logic in the app that loads the data into the DB.

Randall

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Unique Indexes

2005-03-11 Thread Michael Fuhr
On Fri, Mar 11, 2005 at 04:21:49PM +, ILove TheSpam wrote:

> My problem is that I want to:
> a) Be able to detect when an error occurs and be able to read that error
> b) Find out the FieldID for the row which already has the value in it. So 
> if I try to enter 'blah' into FieldValue and it's already in the row with 
> FieldID=3, I want the value 3 to be returned to me. I realise this can be 
> done using a SELECT after the error, but I'd like to avoid that. I'm hoping 
> Postgres itself has some way of detecting this sort of thing.

Could you describe the problem you're trying to solve?  It might
be easier to help if we knew the ultimate purpose of what you're
trying to do.  Something like "I want to do this because"

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] Best practices: Handling Daylight-saving time

2005-03-11 Thread Steve Crawford
On Friday 11 March 2005 6:25 am, Együd Csaba wrote:
> Hi All,
> I'd like to ask your opinion about how to handle DST on a 7/24
> system. Where should it be handled: on the server side or on the
> client side? And how could I (at all could I???) make it
> transparent?
>
> Or we must bow to the fact that twice a year there are two unusable
> hours? If it cannot be solved technically, then it is acceptable,
> but if there is a chance to do it, I'd like to try it.
>
> Our system stores 200-1000 measured data per minute, coming from
> substations. The substations' clocks are synchronized periodically as
> well. When the DST is switched there is 1 hour of data missing (or
> overlapped). Certainly the client machines are automatically
> adjusted for the DST.

First, a point in time is a point in time. You might call it 
2005-03-11 01:02:03.12345 GMT or you might represent it in any other 
time zone with or without Daylight Saving Time (or European Summer 
Time or...) but it is still the same point in time.

Your assertion about unusable hours is incorrect. Rather, if you have 
this problem then you aren't sending/storing the time stamp data 
completely/correctly.

Here on the West Coast of the US in the spring the time simply moves 
seamlessly and continuously from 01:59:59 PST to 03:00:00 PDT. 
Similarly in the fall it goes from 01:59:59 PDT to 01:00:00 PST. If 
you throw away the time zone then you will naturally have problems.

The localization in both *nix and PostgreSQL will display the times 
just fine based on whether or not they are in standard or daylight 
time. (Try "select now();" and "select now() + '1 month'::interval;".)

Whether you choose to fix things by specifying complete timestamps, 
standardize on GMT, use unix timestamps, etc. is up to you. It will 
all work fine if both ends use the same convention and you don't drop 
critical parts of the timezone information.

Side note: if you are using cron/at to schedule your data collection 
then you need to investigate the behaviour of your versions of those 
programs and compare it to your desired outcome.

Cheers,
Steve


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] Slony uninstall info/warning

2005-03-11 Thread Andrew Sullivan
On Tue, Feb 15, 2005 at 02:00:20PM -0500, Christopher Browne wrote:
> As has been noted by others, this is an artifact of the JDBC driver
> holding onto a stored query plan.  And it's somewhat worth noting that
> this is not unique to Slony-I...

(Yes, I'm behind on reading mail lists.)

Actually, I don't think this _is_ a JDBC issue.  AIUI, The problem
bites you with Slony because its triggers use SPI.  So the reference
to the oid is to the trigger's oid, which you won't lose until the
back ends are all recycled.

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
I remember when computers were frustrating because they *did* exactly what 
you told them to.  That actually seems sort of quaint now.
--J.D. Baldwin

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Best practices: Handling Daylight-saving time

2005-03-11 Thread Christopher Browne
[EMAIL PROTECTED] (Együd Csaba) wrote:
> I'd like to ask your opinion about how to handle DST on a 7/24 system.
> Where should it be handled: on the server side or on the client side? And
> how could I (at all could I???) make it transparent? 

Don't use DST.

Use GMT/UTC.

That makes the issue go away.
-- 
(reverse (concatenate 'string "gro.mca" "@" "enworbbc"))
http://cbbrowne.com/info/slony.html
Signs of a Klingon  Programmer #2: "You  question the worthiness of my
code? I should kill you where you stand!"

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] MS Access to PostgreSQL

2005-03-11 Thread Michael Fuhr
On Fri, Mar 11, 2005 at 05:19:35PM +, William Shatner wrote:
> I have recently migrated from MS Access to PostgreSQL.Previously I had
> a SQL command
> 
>   ResultSet aGroupResultSet = aGroupPathStmt.executeQuery(
>   "SELECT \"groupID\",\"fullpath\" FROM \"groups\" WHERE
> \"fullpath\" Like '" +
>   aPath + "'");
> 
> where aPath was equal to  'folder\another folder\%'.
> 
> The field to be edited stores the full path in the format
> 'folder\folder1\folder2' and so on...
> The purpose being to change all groups at this level of the
> hieracarchy and below, this was achieved using the '%' in Access, this
> however doesn't seem to work in PostgreSQL, it doesn't error out but
> it just seems to see the '%' as a normal character.

You're running into problems with \ being the escape character in
string constants and again in patterns.  There's some discussion
of this in the "Pattern Matching" section of the "Functions and
Operators" chapter in the documentation:

http://www.postgresql.org/docs/8.0/interactive/functions-matching.html

Here are some ways to make it work (dollar quoting available only
in 8.0 and later):

fullpath LIKE 'folder\\\\another folder\\\\%'
fullpath LIKE 'folder\\another folder\\%' ESCAPE ''
fullpath LIKE $$folder\\another folder\\%$$
fullpath LIKE $$folder\another folder\%$$ ESCAPE ''

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] SSL error: decryption failed or bad record mac (pg as Samba backend)

2005-03-11 Thread Tom Lane
Fernando Schapachnik <[EMAIL PROTECTED]> writes:
> And the server log says:
> [24129]  LOG:  SSL error: decryption failed or bad record mac
> [24129]  LOG:  pq_recvbuf: recv() failed: Connection reset by peer

I think you need to find some SSL hackers; this is below libpq's level too.

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] speak of the devil

2005-03-11 Thread Richard_D_Levine
But no Windows support yet.  It needs future hardware: new Intel processors
with better support for virtualization.  VMware is drooling over the upcoming
Intel support also.

Rick



 
  tony <[EMAIL PROTECTED]>
  Sent by: [EMAIL PROTECTED]
  To: postgres list
  Subject: [GENERAL] speak of the devil
  Date: 03/11/2005 11:57 AM

http://linux.slashdot.org/article.pl?sid=05/03/11/130217&tid=143&tid=163&tid=223&tid=1

Tony


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly




---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] MS Access to PostgreSQL

2005-03-11 Thread Shelby Cain
--- William Shatner <[EMAIL PROTECTED]> wrote:
> I have recently migrated from MS Access to
> PostgreSQL.Previously I had
> a SQL command
> 
>   ResultSet aGroupResultSet =
> aGroupPathStmt.executeQuery(
>   "SELECT \"groupID\",\"fullpath\" FROM
> \"groups\" WHERE
> \"fullpath\" Like '" +
>   aPath + "'");
> 
> 
> 
> where aPath was equal to  'folder\another folder\%'.
> 
> The field to be edited stores the full path in the
> format
> 'folder\folder1\folder2' and so on...
> The purpose being to change all groups at this level
> of the
> hieracarchy and below, this was achieved using the
> '%' in Access, this
> however doesn't seem to work in PostgreSQL, it
> doesn't error out but
> it just seems to see the '%' as a normal character.
> 
> How can this be done in PostgreSQL?
> 

I suspect that, unlike Access, PostgreSQL will
interpret C-style escape sequences (ie: \r, \n, \t) so
you'll need to properly escape the backslash in aPath
like so:

folder1\\folder2\\folder3

Regards,

Shelby Cain




---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[GENERAL] MS Access to PostgreSQL

2005-03-11 Thread William Shatner
I have recently migrated from MS Access to PostgreSQL. Previously I had
a SQL command

  ResultSet aGroupResultSet = aGroupPathStmt.executeQuery(
  "SELECT \"groupID\",\"fullpath\" FROM \"groups\" WHERE
\"fullpath\" Like '" +
  aPath + "'");



where aPath was equal to  'folder\another folder\%'.

The field to be edited stores the full path in the format
'folder\folder1\folder2' and so on...
The purpose being to change all groups at this level of the
hierarchy and below, this was achieved using the '%' in Access, this
however doesn't seem to work in PostgreSQL, it doesn't error out but
it just seems to see the '%' as a normal character.

How can this be done in PostgreSQL?


Many Thanks,
B

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


[GENERAL] speak of the devil

2005-03-11 Thread tony
http://linux.slashdot.org/article.pl?sid=05/03/11/130217&tid=143&tid=163&tid=223&tid=1

Tony


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] PostgreSQL still for Linux only?

2005-03-11 Thread tony
On Friday 11 March 2005 at 17:41 +0100, Magnus Hagander wrote:
> > > Do *not* do this with a production database.
> > > 
> > > Vmware does *not* correctly handle fsync()s (or O_SYNC or any of 
> > > those) through to disk. If your host PC crashes, your database will 
> > > almost certainly be corrupted. fsync() on the client just 
> > puts it in 
> > > the RAM cache on the host. Not even in the write cache on 
> > the disk/raid.
...snip
> It's possible this works fine if you use direct disk access in vmware
> (giving the session a native disk to access), but I haven't tried that.

OK! I understand your worries now. I always do this because initial
reading through the different disk modes when 3.0 came out made my hair
stand on end. The speed and size of disks today means that each virtual
machine can treat a part of the disk as its own as far as I'm concerned.
The other disk modes always seemed strange to me - maybe they have uses
for others... When I am in my virtual machine I like to see the HD diode
go on each time I do a save, improves my tan =:-D

> After some looking around (and with some hints from Dave Page) for my
> own needs of virtualising linux-on-linux, I've moved to linux-vservers.
> While it doesn't virtualise everything, it's good enough for me. I
> suggest you at least look at it before going down the vmware path - it's
> also free software unlike vmware.

Looked at that. It requires heavy guru voodoo magic at host OS install
time. VMware (I already own the licence I was going to use) can be
installed on a machine that is up and running.

Thanks 

Tony


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] PostgreSQL still for Linux only?

2005-03-11 Thread Magnus Hagander
> > Do *not* do this with a production database.
> > 
> > Vmware does *not* correctly handle fsync()s (or O_SYNC or any of 
> > those) through to disk. If your host PC crashes, your database will 
> > almost certainly be corrupted. fsync() on the client just 
> puts it in 
> > the RAM cache on the host. Not even in the write cache on 
> the disk/raid.
> 
> Putting Windows NT inside a virtual machine (VMware 
> workstation) solved all hardware stability problems in my 
> case. NT would only crash if we forgot to reboot every 45 
> days or so... The Linux host had a 9 month uptime at one point. 
> 
> If you could be more explicit as to why VMware client does 
> not write to disk I would much appreciate. I was thinking of 
> virtualising a couple of servers (Linux client on Linux host). TIA

PostgreSQL relies on fsync() putting your data all the way through to
the disc. It must *not* stay in cache memory, because then you can lose
transactions. If write ordering is also lost (which is likely in this
case), you can get a corrupt database.

In the tests I've been running on vmware, a fsync() in the guest OS will
flush it out of the guest OSs buffer, but the data will stay in the host
OS buffers.

This means that you may be hosed if your host OS crashes. It should
survive a *guest* OS crash without problems.

I haven't had any actual crashes on this, but there is plenty of
evidence that syncing doesn't go all the way through (see my other mail)
at least with Windows as the host OS. Which means you are basically
running with write-cache enabled all the time with no way to turn it
off, and some reading of the pg lists should tell you how bad that is.

It's possible this works fine if you use direct disk access in vmware
(giving the session a native disk to access), but I haven't tried that.


After some looking around (and with some hints from Dave Page) for my
own needs of virtualising linux-on-linux, I've moved to linux-vservers.
While it doesn't virtualise everything, it's good enough for me. I
suggest you at least look at it before going down the vmware path - it's
also free software unlike vmware.

//Magnus

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[GENERAL]

2005-03-11 Thread William Shatner
subscribe-digest pgsql-general

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] Mac OS X 10.3 and SYSV shared memory settings

2005-03-11 Thread Kevin Murphy
Because I was just burned by this, I'd like to remind OS X users that 
OS 10.3 (Panther) may be a less desirable platform than 10.2 for 
running PostgreSQL and other applications that benefit from customizing 
SYSV shared memory settings.

The problem is that in 10.3, there unbelievably seems to be no reliable 
way to customize the SYSV shared memory settings such that the settings 
are preserved across OS updates (10.3.6 to 10.3.7, e.g.)!

The following reminder from the PG 8.0 manual (thanks to whomever put 
this in) also applies to PG 7.4.X, and it is incredibly important:

"In OS X 10.3, these commands [shared memory sysctl commands] have been 
moved to /etc/rc and must be edited there. You'll need to reboot to 
make changes take effect. Note that /etc/rc is usually overwritten by 
OS X updates (such as 10.3.6 to 10.3.7) so you should expect to have to 
redo your editing after each update."

If someone has a solution to this problem, please let me know.  In the 
meantime we'll have to implement a cron-driven alert and a policy 
change, hope that transitioning to a new sysadmin at a later date does 
not screw us, and pray that 10.4 provides a solution to this problem.  
A simple change to /etc/rc on Apple's part seems to be all that would 
be required - it already executes sysctl commands from a user-defined 
/etc/sysctl.conf file, but the shared memory settings can only be set 
*once*, and /etc/rc happens to set the defaults before reading 
/etc/sysctl.conf.  I can have a cron job remove the shared memory 
sysctls in /etc/rc if they reappear, but that would be a decidedly 
imperfect kludge.

Kevin Murphy
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] VMWare and PostgreSQL: WAS :PostgreSQL still for Linux only?

2005-03-11 Thread Magnus Hagander
> Thanks Magnus,
> 
> Just the information I need.  Is this the case if Linux is 
> the host (and running PostgreSQL) and Windows is the guest 
> running under VMWare?  I care about the data in the Linux 
> realm.  I could give a rip if a PowerPoint presentation gets 
> hosed.  On the other hand, if it lost one of my games saves

I have only run it with windows as the host, but with both linux and
windows as the guest OS.

One way is to see how much performance you get. If the performance is
unreasonably good, that means it's not syncing :-) Try small write
transactions; if you get significantly above around 60 per second, then it's not
syncing properly (assuming single-spindle IDE system).
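
A rough way to run that test from psql (hypothetical table; with \timing 
turned on, each INSERT below commits on its own thanks to autocommit):

CREATE TABLE sync_test (id serial PRIMARY KEY, t timestamptz DEFAULT now());
INSERT INTO sync_test DEFAULT VALUES;  -- repeat this and time it
-- Sustaining far more than ~60 single-row commits per second on a single
-- 7200 rpm IDE spindle suggests fsync is not reaching the platters.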

//Magnus

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[GENERAL] Unique Indexes

2005-03-11 Thread ILove TheSpam
Hi,
I'm new to Postgres and plpgsql so my sincerest apologies if the answer to 
my problem is common :-)
I want to add entries to a table. The table has 2 fields, FieldID which is 
the primary key and FieldValue which is a unique field.
I can add to FieldValue just fine when the unique constraint hasn't been 
breached. However, if I try to add to FieldValue and there's already another 
field with that value, obviously I'll get an error.
My problem is that I want to:
a) Be able to detect when an error occurs and be able to read that error
b) Find out the FieldID for the row which already has the value in it. So if 
I try to enter 'blah' into FieldValue and it's already in the row with 
FieldID=3, I want the value 3 to be returned to me. I realise this can be 
done using a SELECT after the error, but I'd like to avoid that. I'm hoping 
Postgres itself has some way of detecting this sort of thing.

Even though I'm new to Postgres, I have looked at triggers and functions and 
I'm feeling fairly confident with them, so please, if that's what it takes, 
suggest it :-)

Thanks!

---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [GENERAL] Best practices: Handling Daylight-saving time

2005-03-11 Thread Martijn van Oosterhout
On Fri, Mar 11, 2005 at 03:25:28PM +0100, Együd Csaba wrote:
> Hi All,
> I'd like to ask your opinion about how to handle DST on a 7/24 system.
> Where should it be handled: on the server side or on the client side? And
> how could I (at all could I???) make it transparent? 
> 
> Or we must bow to the fact that twice a year there are two unusable hours?
> If it cannot be solved technically, then it is acceptable, but if there is a
> chance to do it, I'd like to try it. 

I guess the same way computers have been handling it for years: store
time as "seconds since epoch" and let the user interface handle the
translation. Timezone files for anywhere in the world are available as
well as routines for converting seconds since epoch to localtime and
vice-versa.
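
In PostgreSQL terms, a small sketch of moving between the two 
representations (the literal epoch value is arbitrary):

SELECT extract(epoch FROM now());                               -- timestamptz -> seconds
SELECT timestamptz 'epoch' + 1110500000 * interval '1 second';  -- seconds -> timestamptz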

Hope this helps,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.




Re: [GENERAL] PostgreSQL still for Linux only?

2005-03-11 Thread tony
On Friday 11 March 2005 at 16:51 +0100, Magnus Hagander wrote:

> Do *not* do this with a production database.
> 
> Vmware does *not* correctly handle fsync()s (or O_SYNC or any of those)
> through to disk. If your host PC crashes, your database will almost
> certainly be corrupted. fsync() on the client just puts it in the RAM
> cache on the host. Not even in the write cache on the disk/raid.

Putting Windows NT inside a virtual machine (VMware workstation) solved
all hardware stability problems in my case. NT would only crash if we
forgot to reboot every 45 days or so... The Linux host had a 9 month
uptime at one point. 

If you could be more explicit as to why VMware client does not write to
disk I would much appreciate. I was thinking of virtualising a couple of
servers (Linux client on Linux host). TIA

Tony Grant


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] Best practices: Handling Daylight-saving time

2005-03-11 Thread Bruno Wolff III
On Fri, Mar 11, 2005 at 15:25:28 +0100,
  Együd Csaba <[EMAIL PROTECTED]> wrote:
> Hi All,
> I'd like to ask your opinion about how to handle DST on a 7/24 system.

My advice would be to use GMT and not have to worry about DST while
collecting data. When displaying data you might convert the timestamps
to localtime if that is useful for people using the data.

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] PostgreSQL still for Linux only?

2005-03-11 Thread tony
On Friday 11 March 2005 at 10:10 -0500, [EMAIL PROTECTED] wrote:
> An idea I like, because I have entrenched windows clients also, is to run
> things that run best under Linux on VMWare (vmware.com) and to run good
> Windows things (like desktop apps) under Windows.  Linux can be either the
> host or guest OS under VMWare, so the options of which OS is truly in
> control are symmetrical.  I'm proposing this to my customer to solve a
> completely different set of problems (not PostgreSQL related) but the
> approach might have merit here as well.
> 
> If anyone has tried this please respond.

=:-D A man with good ideas! Yes this rocks. 

I had a database running like this for quite some time at a clients. It
was an NT server running on a Linux host but other way round it works
just as well. This permits easy replication, easy backup (take a VMware
snapshot of your virtual disk from time to time). I could ssh into the
Linux box and reboot the NT virtual machine after working hours.

Right now the high end virtualisation stuff from the ESX and GSX virtual
machines is trickling down into the Workstation variant. You will be
able to do much more with the VMware 5 Workstation which is on beta test
at the moment.

For all Windows shops this is a very good way of running Linux without
getting your hands dirty. I would recommend
http://lwn.net/Articles/69534/ any distribution based on RHEL. If you
decide to go all the way later you will already have RHEL experience for
$189 outlay - the cost of the VMware workstation licence.

Cheers

Tony


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] VMWare and PostgreSQL: WAS :PostgreSQL still for Linux only?

2005-03-11 Thread Richard_D_Levine
Thanks Magnus,

Just the information I need.  Is this the case if Linux is the host (and
running PostgreSQL) and Windows is the guest running under VMWare?  I care
about the data in the Linux realm.  I could give a rip if a PowerPoint
presentation gets hosed.  On the other hand, if it lost one of my games
saves

Thanks,

Rick



 
  "Magnus Hagander" 
 
  <[EMAIL PROTECTED]>
  Sent by: [EMAIL PROTECTED]
  To: <[EMAIL PROTECTED]>, "Neil Dugan" <[EMAIL PROTECTED]>
  cc: <[EMAIL PROTECTED]>
  Subject: Re: [GENERAL] PostgreSQL still for Linux only?
  Date: 03/11/2005 10:51 AM

> An idea I like, because I have entrenched windows clients
> also, is to run things that run best under Linux on VMWare
> (vmware.com) and to run good Windows things (like desktop
> apps) under Windows.  Linux can be either the host or guest
> OS under VMWare, so the options of which OS is truly in
> control are symmetrical.  I'm proposing this to my customer
> to solve a completely different set of problems (not
> PostgreSQL related) but the approach might have merit here as well.
>
> If anyone has tried this please respond.

Do *not* do this with a production database.

Vmware does *not* correctly handle fsync()s (or O_SYNC or any of those)
thruogh to disk. If your host PC crashes, your database will almost
certainly be corrupted. fsync() on the client just puts it in the RAM
cache on the host. Not even in the write cache on the disk/raid.

This is vmware workstation, of course. I'm sure their server line of
products act differently.

//Magnus

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match




---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Temporary tables privileges

2005-03-11 Thread Bruce Momjian
Alejandro D. Burne wrote:
> Hi, I'm new at pg. 
> I'll be using temp tables as in other RDBMSs. A user can create his own
> temp tables (GRANT TEMPORARY) but can't drop them (I don't want to
> grant drop privileges).
> The other way is using ON COMMIT, but I can't make this work.
> 
> Example:
> CREATE TEMPORARY TABLE tmp ON COMMIT DROP AS 
> SELECT code FROM mytable WHERE code BETWEEN 1 AND 10;
> 
> shows error near ON

Our TODO has:

* Add ON COMMIT capability to CREATE TABLE AS SELECT
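
In the meantime, a possible two-step workaround (a sketch only: the column 
type is guessed, and it has to run inside a transaction block for ON COMMIT 
DROP to be useful):

BEGIN;
CREATE TEMPORARY TABLE tmp (code integer) ON COMMIT DROP;
INSERT INTO tmp SELECT code FROM mytable WHERE code BETWEEN 1 AND 10;
-- ... use tmp here ...
COMMIT;  -- tmp is dropped automatically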

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] PostgreSQL still for Linux only?

2005-03-11 Thread Magnus Hagander
> An idea I like, because I have entrenched windows clients 
> also, is to run things that run best under Linux on VMWare 
> (vmware.com) and to run good Windows things (like desktop 
> apps) under Windows.  Linux can be either the host or guest 
> OS under VMWare, so the options of which OS is truly in 
> control are symmetrical.  I'm proposing this to my customer 
> to solve a completely different set of problems (not 
> PostgreSQL related) but the approach might have merit here as well.
> 
> If anyone has tried this please respond.

Do *not* do this with a production database.

VMware does *not* correctly handle fsync()s (or O_SYNC or any of those)
through to disk. If your host PC crashes, your database will almost
certainly be corrupted. fsync() on the client just puts it in the RAM
cache on the host. Not even in the write cache on the disk/raid.

This is VMware Workstation, of course. I'm sure their server line of
products acts differently.

//Magnus

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


[GENERAL] SSL error: decryption failed or bad record mac (pg as Samba backend)

2005-03-11 Thread Fernando Schapachnik
Hi,

I'm trying to use an SSL-enabled (OpenSSL 0.9.7d) Postgres 7.3.9 as the database
backend to Samba 3.0.11. On startup Samba opens a connection and passes it to
every fork()ed process. In some scenarios (consistently, when somebody tries to
log into a workstation after a reboot), Samba spits out:

SELECT ... (details omitted)
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.

And the server log says:
[24129]  LOG:  SSL error: decryption failed or bad record mac
[24129]  LOG:  pq_recvbuf: recv() failed: Connection reset by peer

There is no problem when not using SSL. The Samba code doesn't have any 
SSL-specifics, leaving it to libpq. Any ideas?

Thanks in advance.

Regards.

Fernando.

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] PostgreSQL still for Linux only?

2005-03-11 Thread Bruce Momjian
Richard Huxton wrote:
> > 2. This response is alarming: Tom Lane wrote in digest V1.5092:
> > 
> >> We are supporting Windows as a Postgres platform for the benefit of
> >> developers who want to do testing on their laptops (and for reasons
> >> best known to themselves feel a need to run >Windows on their
> >> laptops).
> > 
> > 
> > a. Who are the 'we' Tom is talking about?
> 
> In an email in the public lists we = Tom
> 
>  > b. Is he speaking for
> > PostgreSQL Developers and the entire PostgreSQL community?
> 
> Official pronouncements from "core" will be marked as such. No-one 
> speaks for the "entire" PostgreSQL community. You're part of that 
> community, just by virtue of downloading a copy and subscribing to the 
> lists.

As a core member I can confirm that "we = Tom" in this context.  The
core group has made no decisions about the relative stability of Win32
vs Unix, and is unlikely to in the future.

Decisions about operating system and stability are to be made by
end users based on their experience.  We do our best to make all
platforms as well supported as possible.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] PostgreSQL still for Linux only?

2005-03-11 Thread Richard_D_Levine
An idea I like, because I have entrenched windows clients also, is to run
things that run best under Linux on VMWare (vmware.com) and to run good
Windows things (like desktop apps) under Windows.  Linux can be either the
host or guest OS under VMWare, so the options of which OS is truly in
control are symmetrical.  I'm proposing this to my customer to solve a
completely different set of problems (not PostgreSQL related) but the
approach might have merit here as well.

If anyone has tried this please respond.

Thanks,

Rick



  
From: Neil Dugan <[EMAIL PROTECTED]>
To: pgsql-general@postgresql.org
Sent by: [EMAIL PROTECTED]
Subject: Re: [GENERAL] PostgreSQL still for Linux only?
Date: 03/10/2005 05:29 PM


On Thu, 2005-03-10 at 16:19 +, Tope Akinniyi wrote:
> Hi all,
>
--- cut ---
>
> I sought a Windows replication tool and could not get one.  I checked
> PgFoundry and the one there put up a banner and said NOT FOR WINDOWS.
> Then I said: is this PostgreSQL for Windows a joke?  That prompted
> my post - IS POSTGRESQL FOR LINUX ONLY?

Have you tried setting up the PostgreSQL server on a Linux computer (with
replication) and using it via PostgreSQL clients running on Windows(tm)
computers?  This way your clients will still have the OS they are used to
and the database server will be running on the best OS for it.

--- cut ---




---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org




---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] PostgreSQL still for Linux only?

2005-03-11 Thread Christopher Browne
Quoth [EMAIL PROTECTED] ("Jim C. Nasby"):
> With the attitude of "Windows can not be made to reliably run a
> database", how many developers do you think will be attracted?

That remains to be seen.

I wouldn't consider it the least bit worthwhile to try to evaluate it
now, as what is happening now is that "WinFolk" are getting their very
first exposure to the software.

It would seem surprising for new developers to emerge from the
Windows(tm) population before at least 6 months have passed.

The way developers emerge is that users come along, work with the
software for a while, and discover things that "itch" them the wrong
way.  They have become sufficiently committed that it is worth putting
a little effort into scratching some of the itches.  That starts
getting them into understanding the code a little better, allowing
them to subsequently scratch deeper itches.
-- 
output = reverse("moc.liamg" "@" "enworbbc")
http://linuxdatabases.info/info/nonrdbms.html
Share and Enjoy!!

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] Oracle's Virtual Private Database functionality

2005-03-11 Thread Doug Bloebaum
[ Sorry if this appears twice on the list: sent it via a non-subscribed email 
the first time around.  Of course, while waiting for it to appear I figured out 
a workaround; see the "Late breaking update" at the end. ]

In the spirit of "tell us what you're trying to do..." 

I'd like to mimic a subset of Oracle's Virtual Private Database functionality 
(see http://dba-oracle.com/oracle_news/2004_1_16_vpd_security.htm for an 
overview) in Postgres: based on some per-connection setting, I'd like a query 
to return a different set of rows.  In VPD, the Oracle engine actually applies 
a defined predicate (say, country_code='USA') to every query.  The idea is that 
a given set of users can only see rows in a table that match this predicate, 
while the other rows are invisible to them.

Now for the "how I tried to do it" part...

I thought I was on my way to doing this in Postgres by making use of schemas 
and search_path:

CREATE USER user1 UNENCRYPTED PASSWORD 'user1';
CREATE SCHEMA AUTHORIZATION user1;

CREATE SCHEMA canada;
GRANT USAGE ON SCHEMA canada TO user1;
CREATE TABLE canada.row_limiter (country_code VARCHAR(3));
INSERT INTO canada.row_limiter VALUES('CAN');
GRANT SELECT ON canada.row_limiter TO user1;

CREATE SCHEMA usa;
GRANT USAGE ON SCHEMA usa TO user1;
CREATE TABLE usa.row_limiter (country_code VARCHAR(3));
INSERT INTO usa.row_limiter VALUES('USA');
GRANT SELECT ON usa.row_limiter TO user1;

\CONNECT - user1

SET SEARCH_PATH TO '$user',canada;

CREATE TABLE my_data (country_code VARCHAR(3), data NUMERIC);
INSERT INTO my_data VALUES('USA',11);
INSERT INTO my_data VALUES('USA',12);
INSERT INTO my_data VALUES('USA',13);

INSERT INTO my_data VALUES('CAN',21);
INSERT INTO my_data VALUES('CAN',22);
INSERT INTO my_data VALUES('CAN',23);

CREATE VIEW my_data_v AS
SELECT md.*
  FROM my_data md,
   row_limiter rl
 WHERE rl.country_code=md.country_code;

SELECT * FROM my_data_v;

-- Looks great - I only see Canadian data!!
-- country_code | data
-- --------------+------
-- CAN          |   21
-- CAN          |   22
-- CAN          |   23

SET SEARCH_PATH TO '$user',usa;

SELECT * FROM my_data_v;

-- Darn, I still only see Canadian data :-(
-- country_code | data
-- --------------+------
-- CAN          |   21
-- CAN          |   22
-- CAN          |   23

\d my_data_v

View definition:
 SELECT md.country_code, md.data
   FROM my_data md, CANADA.row_limiter rl -- <--ah, and here's the reason...
  WHERE rl.country_code::text = md.country_code::text;


It's apparent why: the view determines which table it's going to use at view 
creation time, not at query time, so this method is no good.

Is there a "right way" to accomplish what I'm trying to do?

Late breaking update:

Looks like a function returning SETOF does the trick, so it must not resolve 
the table early like the view does:

CREATE OR REPLACE FUNCTION my_data_f() RETURNS SETOF my_data AS '
  SELECT md.country_code,
 md.data
FROM my_data md, row_limiter rl
   WHERE rl.country_code = md.country_code;
' LANGUAGE sql;

SET SEARCH_PATH TO '$user',usa;
SELECT * FROM my_data_f();
 country_code | data
--------------+------
 USA          |   11
 USA          |   12
 USA          |   13

SET SEARCH_PATH TO '$user',canada;
SELECT * FROM my_data_f();
 country_code | data
--------------+------
 CAN          |   21
 CAN          |   22
 CAN          |   23

Can I rely on this behavior?  Is this the best way to accomplish what I'm
trying to do?
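
A small addition (a sketch, not part of the original post): the per-connection
SET can be made automatic by attaching the search_path to the role, e.g.

    ALTER USER user1 SET search_path TO '$user', usa;

user1's next connection then starts with that search_path. Note the user can
still override it with SET, so this is a convenience, not a security boundary.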



---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] row numbering

2005-03-11 Thread Bruno Wolff III
On Thu, Mar 10, 2005 at 13:22:05 +0100,
  Karsten Hilbert <[EMAIL PROTECTED]> wrote:
> 
> Also notice that we do have views that display the missing
> shots per schedule per patient. I just have not found a way to
> join the two views (that is, given and missing) because that
> would AFAICT require the output "row numbering".

If you have a view of patients' shots and a view of patients' missing shots,
the way to connect them is with a union. You can then group or order
the union to display data about a patient in one place.

Another approach would be to cross join the patients with the required shots
and then left join that result with the shots patients have had.
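
A sketch of that second approach (the table and column names here are invented
placeholders, not the actual GnuMed schema):

    SELECT p.pk_patient, s.schedule, s.vacc_no, g.given
      FROM patient p
     CROSS JOIN required_shot s
      LEFT JOIN given_shot g
        ON  g.pk_patient = p.pk_patient
       AND  g.schedule   = s.schedule
       AND  g.vacc_no    = s.vacc_no
     ORDER BY p.pk_patient, s.schedule, s.vacc_no;

Rows where g.given IS NULL are the missing shots.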

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] Best practices: Handling Daylight-saving time

2005-03-11 Thread Együd Csaba
Hi All,
I'd like to ask your opinion about how to handle DST on a 24/7 system.
Where should it be handled: on the server side or on the client side? And
how could I (if it can be done at all) make it transparent?

Or must we bow to the fact that twice a year there are two unusable hours?
If it cannot be solved technically, then that is acceptable, but if there is a
chance to do it, I'd like to try.

Our system stores 200-1000 measured values per minute, coming from
substations. The substations' clocks are synchronized periodically as well.
When DST switches there is 1 hour of data missing (or overlapping).
Certainly the client machines are automatically adjusted for DST.

How do others handle this?
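
One approach that often comes up (a sketch only; the table and column names
below are invented): store the measurements as timestamp with time zone, i.e.
as absolute instants, and convert to local time only for display. The DST
switch then neither loses nor duplicates an hour in storage. Full zone names
like the one below are available in the 8.0 series:

    SELECT measured_at AT TIME ZONE 'Europe/Budapest' AS local_time, value
      FROM measurements
     ORDER BY measured_at;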

Many many thanks,

-- Csaba



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Problem with inherited table, can you help?...

2005-03-11 Thread Alvaro Herrera
On Fri, Mar 11, 2005 at 06:16:28PM +1100, Russell Smith wrote:
> On Fri, 11 Mar 2005 03:39 am, Michael Fuhr wrote:
> > On Thu, Mar 10, 2005 at 01:31:21AM -0800, Net Virtual Mailing Lists wrote:
> 
> > Some have suggested that PostgreSQL should use a weaker lock on the
> > referenced key, but that hasn't been implemented yet.
> 
> Are there actually any problems with only getting a AccessShareLock?

Yes, because there is no infrastructure to get any type of lock save
an exclusive lock (I'm not sure which lmgr lock type that is equivalent to)
on a per-row basis.

-- 
Alvaro Herrera (<[EMAIL PROTECTED]>)
"Ni aun el genio muy grande llegaría muy lejos
si tuviera que sacarlo todo de su propio interior" (Goethe)

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] PostgreSQL still for Linux only?

2005-03-11 Thread Thomas Hallgren
Richard Huxton wrote:
It can also be bad - the more time spent supporting Windows, the less 
time is spent working on PostgreSQL itself.

Unless the Windows support attracts more resources. Personally I'd be 
surprised if that's not the case.

That's clearly a decision only you can make. Getting replication working 
on Windows will happen quicker the more people help. If all you want is 
an off-machine backup, perhaps look at PITR (see manuals for details).

If you're using a Java based client perhaps something like C-JDBC 
http://c-jdbc.objectweb.org would help. It's known to run well with 
PostgreSQL.

Regards,
Thomas Hallgren
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [GENERAL] Convert Cursor to array

2005-03-11 Thread GIROIRE Nicolas (COFRAMI)
My sort algorithm is a very particular solution.
In fact, I put the records in an array to allow me to sort them. I sort them
using one column of my table, but the indx column can have duplicate values,
and that is intentional.

Here is the ordering algorithm:

for i in 1..recordcount loop
    tmp_row := children[i];
    indx := tmp_row[5];

    if (indx < i) then
        -- shift elements to make room for the element being moved
        for j in 0..(i-indx-1) loop
            children[i-j] := children[i-j-1];
        end loop;
        -- put the element at its new position
        children[indx] := tmp_row;
    end if;
end loop;

It is particular to my system. I won't explain the choice here, but I would
welcome any idea that saves time.






-----Original Message-----
From: Richard Huxton [mailto:dev@archonet.com]
Sent: Friday, 11 March 2005 15:01
To: GIROIRE Nicolas (COFRAMI)
Cc: 'pgsql-general@postgresql.org'
Subject: Re: [GENERAL] Convert Cursor to array



GIROIRE Nicolas (COFRAMI) wrote:
> Hi,
>
> In a procedure, I put the data from a cursor into a two-dimensional array.
> This allows me to sort the rows.
>
> The problem is that this method is too slow.
>
> In fact, I am translating a PL/SQL procedure to PL/pgSQL. Under Oracle we
> use BULK COLLECT, and I am looking for an equivalent under PostgreSQL.
>
> Does that exist?


No, but there might be other ways to do this. Can you explain why you 
need to sort within the function rather than using "order by"?


If you're sure you want to sort within the function, take a look at one 
of the alternative procedural languages. PL/pgsql evaluates all 
expressions by passing them to the SQL parser, which keeps things simple 
  and consistent but isn't efficient if you want to do a lot of data 
processing.
--
   Richard Huxton
   Archonet Ltd



Re: [GENERAL] Convert Cursor to array

2005-03-11 Thread Richard Huxton
GIROIRE Nicolas (COFRAMI) wrote:
> Hi,
> In a procedure, I put the data from a cursor into a two-dimensional array.
> This allows me to sort the rows.
> The problem is that this method is too slow.
> In fact, I am translating a PL/SQL procedure to PL/pgSQL. Under Oracle we
> use BULK COLLECT, and I am looking for an equivalent under PostgreSQL.
> Does that exist?
No, but there might be other ways to do this. Can you explain why you 
need to sort within the function rather than using "order by"?

If you're sure you want to sort within the function, take a look at one 
of the alternative procedural languages. PL/pgsql evaluates all 
expressions by passing them to the SQL parser, which keeps things simple 
 and consistent but isn't efficient if you want to do a lot of data 
processing.
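
For reference, if the repositioning by indx really does amount to ordering on
that column (only you can confirm that), the loop could be written directly as
something like:

    FOR childRecord IN
        SELECT * FROM nico.child WHERE ... ORDER BY indx
    LOOP
        -- process childRecord directly; no intermediate array needed
    END LOOP;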
--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] PostgreSQL still for Linux only?

2005-03-11 Thread Dick Davies
* Jim C. Nasby <[EMAIL PROTECTED]> [0336 21:36]:

> With the attitude of "Windows can not be made to reliably run a
> database", how many developers do you think will be attracted?

People are entitled to an opinion, and in many cases it's formed from
experience. I think it's unrealistic to expect a large team of programmers who
have been using *NIX to think Windows is equally good.
If they did, they'd run on it, right?

The process model is presumably there because for 90% of platforms it makes
sense to do it that way. No-one is going to object to a well-written
thread-based postmaster, but it's expecting a bit much for it to spring into
life right off the bat.

To me a database is a service, like a DNS or DHCP server, and wanting to
put it on Windows is like wanting to run BIND or IPF there.

For most people it's going to be easier to stick Linux on a dedicated box
and run PostgreSQL on that. I don't see what the problem is with that.

Just to be clear:

I have no interest in or opinion on Windows, Microsoft or anything else that
makes Slashdotters jump up and down, beyond playing Civ3 on it.
You like it, that's great.

The one thing the world doesn't need is another 'my OS can beat up your OS'
thread.

-- 
'That question was less stupid; though you asked it in a profoundly stupid way.'
-- Prof. Farnsworth
Rasputin :: Jack of All Trades - Master of Nuns

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] more execution time

2005-03-11 Thread Richard Huxton
ALİ ÇELİK wrote:
> Why does this query need more time? It's very slow.
Difficult to say for sure - could you provide the output of EXPLAIN 
ANALYSE rather than just EXPLAIN?

Some other immediate observations:
1. Perhaps don't post to so many mailing lists at once. If you reply to 
this, maybe reduce it to pgsql-performance?
2. You don't say whether the row estimates are accurate in the EXPLAIN.
3. You seem to be needlessly coalescing personaldetails.masterid since 
you check for it being null in your WHERE clause
4. Do you really need to cast to numeric and generate a "sorting" column 
that you then don't ORDER BY?
5. Is ppid an id number? And are you sure it's safe to calculate it like 
that?
6. What is balance() and how long does it take to calculate its result?

select
  coalesce(personaldetails.masterid::numeric,personaldetails.id) +
(coalesce(personaldetails.id::numeric,0)/100) as sorting,
  floor(coalesce(personaldetails.masterid::numeric,personaldetails.id) +
(coalesce(personaldetails.id::numeric,0)/100)) as ppid,

  balance('MASTER-REGISTRATION',personaldetails.id) as balance,

  balance('MASTER-REGISTRATION',pd2.id) as accbalance,
I'm guessing point 6 is actually your problem - try it without the calls 
to balance() and see what that does to your timings.
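
A quick way to check that in isolation (a sketch, reusing the function, table
and column names from your query):

    EXPLAIN ANALYZE
    SELECT balance('MASTER-REGISTRATION', personaldetails.id)
      FROM personaldetails
     WHERE personaldetails.masterid IS NULL;

If most of the total runtime shows up here, it is the balance() function
itself that needs attention.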
--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faq


[GENERAL] Convert Cursor to array

2005-03-11 Thread GIROIRE Nicolas (COFRAMI)
Hi,

In a procedure, I put the data from a cursor into a two-dimensional array.
This allows me to sort the rows.

The problem is that this method is too slow.

In fact, I am translating a PL/SQL procedure to PL/pgSQL. Under Oracle we
use BULK COLLECT, and I am looking for an equivalent under PostgreSQL.

Does that exist?


Oracle procedure:

cursor childCursor is select * from CHILD WHERE..

open childCursor;
fetch childCursor bulk collect into children;
close childCursor;


PostgreSQL procedure:

FOR childRecord IN select * from nico.CHILD WHERE...
LOOP
    -- put the elements into the array
    children[recordcount] := '{'
                    || childRecord.child_id || ','
                    || childRecord.evolution || ','
                    || childRecord.isremoved || ','
                    || childRecord.child_class || ','
                    || childRecord.indx || ','
                    || childRecord.ele_id || ','
                    || childRecord.doc_id ||
                '}';

    recordcount := recordcount + 1;

END LOOP;


BULK COLLECT uses native Oracle arrays, and it is probably faster than
building the array this way. Is there a native PostgreSQL method to
replace BULK COLLECT?

Best regards.





[GENERAL] Data mining setup/hardware?

2005-03-11 Thread Mike Harding
Any suggestions on how to set up a server -strictly- for data mining?
This means:

- I want queries to run as quickly as possible, and
- I don't care if I lose data if the power goes down.

A solid state disk would obviously speed things up, like the to-disk
sorts that Postgres does when it runs out of RAM, and help the seek
time, etc.  This would also be somewhat expensive, of course.

I can buy a custom server and throw lots of RAM in it and so on.
Commodity PC hardware seems to be limited to about 4G, or 8G or so for
AMD64 hardware, so it seems unlikely I can run a pure memory disk.

I'm using FreeBSD, so any FreeBSD-related hints are appreciated, but I
can run Linux or whatever if needed for this server.
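
(A sketch of the kind of postgresql.conf settings this implies, using the
8.0-series parameter names; the numbers are placeholders to adjust for the
machine's RAM, not recommendations:)

    fsync = off                    # acceptable only because data loss is OK
    shared_buffers = 20000         # 8kB pages, ~160MB
    work_mem = 262144              # kB per sort/hash; sort_mem in 7.4
    maintenance_work_mem = 524288  # kB; faster index builds and VACUUM
    checkpoint_segments = 32       # fewer, larger checkpoints
    effective_cache_size = 262144  # 8kB pages; tells the planner about OS cache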

Thanks,

-- 
Mike Harding <[EMAIL PROTECTED]>


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] SRF, JDBC and result info

2005-03-11 Thread ntinos
Solution found! Thanks to Kris Jurka's advice I managed to pass this info
using elog(INFO,...) or elog(NOTICE,...). These messages together with
.getWarnings() do the job.

e.g. message returned by the SQLWarning: 

SNOTICEC0M#SUCCESSFUL EXECUTION.  NO TUPLES FROM PEER(S): 
mobileb#Ftestmybuild.cL2558Ranswer 

Getting the plain message is then trivial (e.g. using flag chars like '#' 
above) 

Of course the appropriate logging must be set in postgresql.conf.

Just in case someone wants to do the same thing:
I don't know if this is the best solution (or if any other exists) but it
surely works.

Regards,
Ntinos Katsaros 

PS: libpq has nothing to do with the above :-)! 
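
For anyone reading along, the plpgsql flavour Kris suggests below looks
roughly like this (a sketch only; the function, its result type and the
message text are invented):

    CREATE OR REPLACE FUNCTION my_srf() RETURNS SETOF integer AS '
    BEGIN
        RAISE NOTICE ''extra result info for the client'';
        RETURN NEXT 1;
        RETURN;
    END;
    ' LANGUAGE plpgsql;

The NOTICE then shows up on the JDBC side via Statement/ResultSet
.getWarnings(), as described below.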

[EMAIL PROTECTED] writes: 

Thank you very much for your reply. The thing is that my SRF is written in
C, not plpgsql, but I'll look into RAISE NOTICE anyway. (I think there is
something equivalent in libpq.)

Thanks again,
Ntinos Katsaros  

Kris Jurka writes:  

  

On Tue, 8 Mar 2005 [EMAIL PROTECTED] wrote:  

Hi everybody!   

I have an SRF which is called from a JAVA app with JDBC. Everything
works fine and I want now to be able to pass some result-related info to
my app. It is not about the format of the results (ResultSetMetaData) or
something like that.  

Is it possible to return some string (or other type of) info together
with the result tuples (even if it requires some hacking, i.e. there is
no provision for something like that)? Any ideas?

The only idea that comes to mind is using RAISE NOTICE in your plpgsql 
function and Statement or ResultSet .getWarnings() on the Java side to 
retrieve that info.  There really isn't any other out of band data path.  

Kris Jurka  

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster
 

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ? 

  http://www.postgresql.org/docs/faq

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] row numbering

2005-03-11 Thread Karsten Hilbert
> You can look up our complete schema in our Wiki:
> 
> http://salaam.homeunix.com/twiki/bin/view/Gnumed/WebHome
> 
Go to Developer Guide -> Database Structure.

 http://salaam.homeunix.com/twiki/bin/view/Gnumed/DatabaseSchema

is more convenient for you guys.

Karsten
-- 
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] more execution time

2005-03-11 Thread ALİ ÇELİK
Why does this query need more time? It's very slow.

thx

//QUERY
select
  coalesce(personaldetails.masterid::numeric,personaldetails.id) +
(coalesce(personaldetails.id::numeric,0)/100) as sorting,
  floor(coalesce(personaldetails.masterid::numeric,personaldetails.id) +
(coalesce(personaldetails.id::numeric,0)/100)) as ppid,
  personaldetails.id as pid,
  personaldetails.masterid,
  coalesce(personaldetails.prefix,'') || '' ||
coalesce(personaldetails.firstname,' ') || ' ' ||
coalesce(personaldetails.lastname,' ') as fullname,
  personaldetails.regtypeid,
  personaldetails.regdate,
  personaldetails.regprice,
  coalesce(regtypes.regtype,' ') || ' ' ||
coalesce(regtypes.subregtype,' ') as regtypetitle,
  regtypes.regtype,
  regtypes.subregtype,
  regtypedates.title,
  balance('MASTER-REGISTRATION',personaldetails.id) as balance,
   coalesce(pd2.prefix,' ') || ' ' || coalesce(pd2.firstname,' ') ||
' ' || coalesce(pd2.lastname,' ') as accfullname,
  coalesce(rt2.regtype,' ') || ' ' || coalesce(rt2.subregtype,' ') as
accregtypetitle,
  pd2.id as accid,
  pd2.regtypeid as accregtypeid,
  pd2.regdate as accregdate,
  pd2.regprice as accregprice,
  rt2.regtype as accregtype,
  rt2.subregtype as accsubregtype,
  rd2.title as acctitle,
  balance('MASTER-REGISTRATION',pd2.id) as accbalance,
  case when coalesce(balance('REGISTRATION',personaldetails.id),0)<=0
then 1 else 0 end as balancestatus

from personaldetails
left outer join regtypes on regtypes.id=personaldetails.regtypeid
left outer join regtypedates on regtypes.dateid=regtypedates.id
left outer join personaldetails pd2 on personaldetails.id=pd2.masterid
left outer join regtypes rt2 on rt2.id=pd2.regtypeid
left outer join regtypedates rd2 on rt2.dateid=rd2.id
where personaldetails.masterid is null
// RESULT STATISTICS
Total query runtime: 348892 ms.
Data retrieval runtime: 311 ms.
763 rows retrieved.
// EXPLAIN QUERY

Hash Left Join  (cost=109.32..109.95 rows=5 width=434)
  Hash Cond: ("outer".dateid = "inner".id)
  ->  Merge Left Join  (cost=108.27..108.46 rows=5 width=409)
Merge Cond: ("outer".regtypeid = "inner".id)
->  Sort  (cost=106.19..106.20 rows=5 width=347)
  Sort Key: pd2.regtypeid
  ->  Hash Left Join  (cost=90.11..106.13 rows=5 width=347)
Hash Cond: ("outer".id = "inner".masterid)
->  Hash Left Join  (cost=45.49..45.71 rows=5 width=219)
  Hash Cond: ("outer".dateid = "inner".id)
  ->  Merge Left Join  (cost=44.44..44.63 rows=5
width=194)
Merge Cond: ("outer".regtypeid = "inner".id)
->  Sort  (cost=42.36..42.37 rows=5
width=132)
  Sort Key: personaldetails.regtypeid
  ->  Seq Scan on personaldetails
(cost=0.00..42.30 rows=5 width=132)
Filter: (masterid IS NULL)
->  Sort  (cost=2.08..2.16 rows=31 width=66)
  Sort Key: regtypes.id
  ->  Seq Scan on regtypes
(cost=0.00..1.31 rows=31 width=66)
  ->  Hash  (cost=1.04..1.04 rows=4 width=33)
->  Seq Scan on regtypedates
(cost=0.00..1.04 rows=4 width=33)
->  Hash  (cost=42.30..42.30 rows=930 width=132)
  ->  Seq Scan on personaldetails pd2
(cost=0.00..42.30 rows=930 width=132)
->  Sort  (cost=2.08..2.16 rows=31 width=66)
  Sort Key: rt2.id
  ->  Seq Scan on regtypes rt2  (cost=0.00..1.31 rows=31
width=66)
  ->  Hash  (cost=1.04..1.04 rows=4 width=33)
->  Seq Scan on regtypedates rd2  (cost=0.00..1.04 rows=4 width=33)




---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Statement cancel or transaction cancel?

2005-03-11 Thread Sean Davis
On Mar 11, 2005, at 4:07 AM, Qingqing Zhou wrote:
Pg8.0.1
backend> begin;
backend> create table a (i int);
backend> insert into a values(1);
 1: i   (typeid = 23, len = 4, typmod = -1, byval = t)

backend> select * from pg_class;
 1: relname (typeid = 19, len = 64, typmod = -1, byval = f)
 2: relnamespace(typeid = 26, len = 4, typmod = -1, 
byval =
t)
 3: reltype (typeid = 26, len = 4, typmod = -1, byval = t)
 4: relowner(typeid = 23, len = 4, typmod = -1, byval = t)

... before the query is finished, press a ctrl_c here ...
ERROR:  canceling query due to user request
backend> select * from a;
ERROR:  current transaction is aborted, commands ignored until end of
transaction block
backend> commit;
backend> select * from a;
ERROR:  relation "a" does not exist
So ctrl_c in a backend actually means cancel the whole transaction 
(instead
of statement)?
CTRL-C yields an error, which I think will cause the transaction to 
fail, yes.
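
As an aside (a sketch, not something from this exchange): with 8.0's
savepoints the damage can be limited to the cancelled statement itself:

    BEGIN;
    CREATE TABLE a (i int);
    INSERT INTO a VALUES (1);
    SAVEPOINT s1;
    SELECT * FROM pg_class;      -- suppose this one is cancelled with ctrl_c
    ROLLBACK TO SAVEPOINT s1;    -- the transaction is usable again
    SELECT * FROM a;             -- still sees the row
    COMMIT;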

Sean
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] pg_restore error

2005-03-11 Thread Richard Huxton
go wrote:
Hi All!
  While restoring a PG 7.2.3 dump into a PG 8.0.1 database
(pg_restore -Fc -d a3 -v arz3.test.dat) I got this error:
pg_restore: ERROR:  invalid byte sequence for encoding "UNICODE": 0xfc3535
CONTEXT:  COPY mytable, line 9, column post3: "¦¦ 55146/23.12.04
"
pg_restore: [archiver (db)] error returned by PQendcopy
pg_restore: *** aborted because of error
  Please tell me how to ignore all errors.
With bad data? You can't. Are you sure the original database was UNICODE 
 (UTF8) and not SQL_ASCII encoding?

Try restoring into an SQL_ASCII version 8.0 database, then you can 
correct your data and repeat the dump/import. Alternatively, you could 
produce a text-file of the dump and correct it with perl/awk/etc.
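
A sketch of the first suggestion (the database name a3_fix is invented; the
dump file name is the one from your report):

    CREATE DATABASE a3_fix WITH ENCODING 'SQL_ASCII' TEMPLATE template0;

and then restore into it with:

    pg_restore -Fc -d a3_fix -v arz3.test.dat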

--
  Richard Huxton
  Archonet Ltd
---(end of broadcast)---
TIP 8: explain analyze is your friend


[GENERAL] pg_restore error

2005-03-11 Thread go
Hi All!

  While restoring a PG 7.2.3 dump into a PG 8.0.1 database
(pg_restore -Fc -d a3 -v arz3.test.dat) I got this error:
pg_restore: ERROR:  invalid byte sequence for encoding "UNICODE": 0xfc3535
CONTEXT:  COPY mytable, line 9, column post3: "¦¦ 55146/23.12.04
"
pg_restore: [archiver (db)] error returned by PQendcopy
pg_restore: *** aborted because of error

  Please tell me how to ignore all errors.

Thanks


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] IO Timeout

2005-03-11 Thread Qingqing Zhou

How do you know that PG is in maintenance mode?

I am not sure what PG's maintenance mode is, and I am not aware of any
configuration parameters that could control the IO response waiting time.
Postgres just issues the ordinary file system read/write commands and waits
for the response.

Regards,
Qingqing

"Alex Turner" <[EMAIL PROTECTED]>
news:[EMAIL PROTECTED]
> I have a question about IO timeouts:
>
> We are using the 3ware escalade 9500S series of cards, and we had a
> drive failure this morning.  Apparently the card waits 30 seconds for
> the drive to respond, and if it doesn't, it puts the drive in a fail
> state.  Postgres, it seems, didn't wait 30 seconds before it decided
> that the system was upset, and put the database in maintenance mode.
>
> Is there a way to increase to IO wait timeout so this doesn't happen?
>
> Alex Turner
> netEconomist
>
> ---(end of broadcast)---
> TIP 6: Have you searched our list archives?
>
>http://archives.postgresql.org
>



---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] row numbering

2005-03-11 Thread Karsten Hilbert
> I don't know that much about medicine, so this might be a funny
> question, but do you really need to know that "shots 4 and 5 are
> missing",
I want to be able to display "shot 4: ..." and "shot 5: ..." in
the application but pull the data from the database, not
calculate it in the application. Reason being that there
are/can be several different clients that should all be getting
this right. Our current view does just that but only with shots
already given.

> or just that the patient needs to be shot two more times,
No, that would simply be

select max(shots in schedule) - count(shots given) where patient = 

> or do you really want the *application dates*?
No. Those are under more conditions than the database should
have to handle. IOW I would not suggest putting *that* much
business logic into the database. There would be some value in
getting the *by-schedule* next application date but that's of
secondary importance to me.

> This is a description of steps you decided would get you to your
> goal. Instead of describing the steps, what's the goal?
Sure, fine.

I want a view that roughly looks like this:

pk_patient   schedule   vacc_no   given
1            Tet        1         1.1.1980
1            Tet        2         1.1.1985
1            Hep        1         1.1.1980
1            Hep        2         NULL   -> IOW missing
2            ...        ...       ...
...

Why is there no missing Tet shot ? Because the schedule does
not define more than 2 shots. Medically, this is a bad example
because Tet actually requires more and also requires a booster
every 10 years but that does not make a difference to the
problem at hand.

The less technical goal is: I want the database to be able to
provide a vaccination *status* view to me.

Also notice that we do have views that display the missing
shots per schedule per patient. I just have not found a way to
join the two views (that is, given and missing) because that
would AFAICT require the output "row numbering".

You can look up our complete schema in our Wiki:

http://salaam.homeunix.com/twiki/bin/view/Gnumed/WebHome

Go to Developer Guide -> Database Structure.

Karsten Hilbert, MD
GnuMed i18n coordinator
-- 
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] partitionning

2005-03-11 Thread Karsten Hilbert
> Actually I have a strong feeling what really _ought_ to happen here is that
> the inherited tables support in postgres, which never really worked anyways,
> should be deprecated and eventually removed.
Hopefully not. They are useful for other things, too.

Karsten
-- 
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[GENERAL] Statement cancel or transaction cancel?

2005-03-11 Thread Qingqing Zhou
Pg8.0.1

backend> begin;
backend> create table a (i int);
backend> insert into a values(1);
 1: i   (typeid = 23, len = 4, typmod = -1, byval = t)

backend> select * from pg_class;
 1: relname (typeid = 19, len = 64, typmod = -1, byval = f)
 2: relnamespace(typeid = 26, len = 4, typmod = -1, byval =
t)
 3: reltype (typeid = 26, len = 4, typmod = -1, byval = t)
 4: relowner(typeid = 23, len = 4, typmod = -1, byval = t)

... before the query is finished, press a ctrl_c here ...

ERROR:  canceling query due to user request
backend> select * from a;
ERROR:  current transaction is aborted, commands ignored until end of
transaction block
backend> commit;
backend> select * from a;
ERROR:  relation "a" does not exist

So ctrl_c in a backend actually means cancel the whole transaction (instead
of statement)?


Thanks,
Qingqing




---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[GENERAL] New PostgreSQL RPMs are available for many platforms

2005-03-11 Thread Devrim GUNDUZ
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
- -
PostgreSQL New RPM Sets
2005-03-11
Versions: 8.0.1, 7.4.7
Operating Systems:
* Red Hat Enterprise Linux Enterprise Server 2.1
* Red Hat Enterprise Linux Enterprise Server 4
* Fedora Core 2 x86_64
* Fedora Core 3 x86_64
Set labels: 7.4.7-2PGDG, 8.0.1-2PGDG
- -
- -
Package Info:
PostgreSQL RPM Building Project has released RPMs for Red Hat Enterprise
Linux Enterprise Server 2.1 & 4, Fedora Core 2 x86_64, and Fedora Core 3
x86_64, for PostgreSQL 8.0.1 and 7.4.7. They will be available on the main
FTP site and its mirrors after sync.

It should be noted, for those looking to upgrade from a 7.4.0/7.4.1
server to 7.4.7, that a dump-n-reload is *recommended* (but not
required) to address a fix introduced between 7.4.1 and 7.4.2. Please
refer to the 7.4.7 HISTORY file for information on how this can be
accomplished without a dump-n-reload, as there are steps that can be
followed for this.
Each RPM has been signed by the builder, and each directory contains a
CURRENT_MAINTAINER file which includes the name/email of the package
builder and link to their PGP key.
- -
Changes since 8.0.1-1PGDG
- - Repair improper error message in init script when PGVERSION doesn't
match. (Tom Lane)
- - Arrange for auto update of version embedded in init script. (Tom Lane)
- - Update JDBC jars (Devrim Gunduz)
- -
- -
Changes since 7.4.7-1PGDG
- - Repair improper error message in init script when PGVERSION doesn't
match. (Tom Lane)
- - Arrange for auto update of version embedded in init script. (Tom Lane)
- - Patch additional buffer overruns in plpgsql (CAN-2005-0247) (Tom Lane)
- - Fix improper call of strerror_r, which leads to junk error messages in
libpq (Tom Lane, Devrim Gunduz)
- -
If you experience problems with the RPMs or if you have a feature
request, please join
 [EMAIL PROTECTED]
More info about the list is found at:
 http://lists.pgfoundry.org/mailman/listinfo/pgsqlrpms-general
You can also report any bugs to pgsql-bugs@postgresql.org
- -
Please download these from one of our FTP mirror sites:
http://www.PostgreSQL.org/mirrors-ftp.html
or via our  FTP browser:
http://www.PostgreSQL.org/ftp/binary
or from Bittorrent (Thanks to David Fetter) :
http://bt.PostgreSQL.org
Regards,
- --
Devrim GUNDUZ 
devrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.tdmsoft.com http://www.gunduz.org
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.1 (GNU/Linux)

iD8DBQFCMVnOtl86P3SPfQ4RAo2hAKDibbkA+YH4+qS6c77Sr4OEOElyAwCglhdy
HHH4h79l79nkz2b1sMLFt+0=
=Hi0h
-END PGP SIGNATURE-
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] PostgreSQL still for Linux only?

2005-03-11 Thread Richard Huxton
[EMAIL PROTECTED] wrote:
...it will be the first time they have seen your name... ...with your first
email have criticised the project...
Check the archives.  This poster has been active on the list for a while.
He has indeed, and even posted a news item, but it will still be the 
first time many people have seen Tope's name. Given the traffic on the 
various lists and the number of new users we've gained recently, you 
need to post a *lot* for people to recognise you.

--
  Richard Huxton
  Archonet Ltd
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]