[GENERAL] Cartesian product bug?

2003-10-31 Thread Åsmund Kveim Lie
Hi,

We have found a possible bug in 7.3.1. It seems that using CROSS JOIN and 
doing a plain Cartesian product by listing two tables in the FROM clause give 
different results. According to the documentation these should be 
equivalent. The following example should explain the problem:

CREATE TABLE a (a1 text, a2 text);
CREATE TABLE b (b1 text, b2 text);
CREATE TABLE c (a1 text, b1 text, c1 text);
INSERT INTO a VALUES('a1', 'a2');
INSERT INTO b VALUES('b1', 'b2');
INSERT INTO c VALUES('a3', 'b1', 'c1');
SELECT * FROM a,b NATURAL JOIN c;
 a1 | a2 | b1 | b2 | a1 | c1
----+----+----+----+----+----
 a1 | a2 | b1 | b2 | a3 | c1
(1 row)

SELECT * FROM a CROSS JOIN b NATURAL JOIN c;
 a1 | b1 | a2 | b2 | c1
----+----+----+----+----
(0 rows)
These two example queries should give the same result. In the first query, 
it seems to be doing the natural join between b and c first, and then the 
Cartesian product of that result with a. In the second query, it does as 
we assume it should, namely the Cartesian product first.

Is this the correct behavior?

Regards

Åsmund

--
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [GENERAL] Recomended FS

2003-10-31 Thread Lynn . Tilby

Really solid microcode actually reads the sectors
just written and confirms the write at the hardware level
by comparing it with what is in the controller memory.  
It then returns a successful confirmation, or an error
if differences were detected.

Any data storage device controller, disk, memory stick, whatever
that does not follow this fundamental common sense protocol is
not reliable and should not be used, period!

Perhaps the IDE designers have folded to management pressure 
and tried to make their drives "seem" faster by not taking
the time to actually confirm the write at the hardware level.
I don't know, but it looks like this may be a possibility.

Lynn

Quoting "scott.marlowe" <[EMAIL PROTECTED]>:

> On Sat, 25 Oct 2003, James Moe wrote:
> 
> > -BEGIN PGP SIGNED MESSAGE-
> > Hash: SHA1
> > 
> > On Sun, 26 Oct 2003 16:24:17 +1300, Mark Kirkwood wrote:
> > 
> > >I would conclude that it not *always* the case that power failure 
> > >renders the database unuseable.
> > >
> > >I have just noticed a similar posting from Scott were he finds the
> cache 
> > >enabled case has a dead database after power failure.
> > >
> >   Other posts have noted that SCSI never fails under this condition.
> Apparently SCSI 
> > drives sense an impending power loss and flush the cache before power
> completely 
> > disappears. Speed *and* reliability. Hm.
> 
> Actually, it would appear that the SCSI drives simply don't lie about 
> fsync.  I.e. when they tell the OS that they wrote the data, they wrote
> 
> the data.  Some of them may have cache flushing combined with lying about
> fsync built in, but the performance looks more like just good fsyncing to me. 
> 
> It's all a guess without examining the microcode though... :-)
> 
> >   Of course, anyone serious about a server would have it backed up
> with a UPS and 
> > appropriate software to shut the system down during an extended power
> outage. This just 
> > leaves people tripping over the power cords or maliciously pulling the
> plugs.
> 
> Or a CPU frying, or a power supply dying, or a motherboard failure, or a
> 
> kernel panic, or any number of other possibilities.  Admittedly, the
> first 
> line of defense is always good backups, but it's nice knowing that if
> one 
> of my CPUs fry, I can pull it, put in the terminator / replacement, and
> my 
> whole machine will likely come back up.
> 
> But anyone serious about a server will also likely be running on SCSI as
> 
> well as on a UPS.  We use a hosting center with 3 UPS and a Diesel 
> generator, and we still managed to lose power about a year ago when one
> 
> UPS went haywire, browned out the circuits of the other two, and the 
> diesel generator's switch burnt out.  Millions of dollars worth of UPS /
> 
> high reliability equipment, and a $50 switch brought it all down. 
> 
> 


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] [OT] Choosing a scripting language.

2003-10-31 Thread Martijn van Oosterhout
On Tue, Oct 28, 2003 at 04:57:34PM +0100, Marco Colombo wrote:
> processing. Try and do the same in any other language, with less
> keystrokes:
> 
> find / -print0 | perl -ln0e 'print if -p'

I would imagine:

find / -type p

is shorter, but that probably wasn't your point :)

-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> "All that is needed for the forces of evil to triumph is for enough good
> men to do nothing." - Edmond Burke
> "The penalty good people pay for not being interested in politics is to be
> governed by people worse than themselves." - Plato




Re: [GENERAL] [CYGWIN] Info Required!!

2003-10-31 Thread Claudio Natoli

Native windows support is not available in postgresql-7.4.

Hopefully it will be in 7.5

Cheers,
Claudio


-Original Message-
From: Raghu Holla
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: 10/28/03 8:49 PM
Subject: [CYGWIN] Info Required!!

Hi,

Could you please let me know whether postgresql-7.4beta4 or
postgresql-7.4beta5 has complete support for native windows? 

If yes, how do I install it on windows? 

 

I have downloaded the beta releases from
http://developer.postgresql.org/beta.php. I would also like to
mention that the README and INSTALL files are not updated in the beta
releases. The INSTALL file still talks about version 7.3.

 

Regards,

 

Raghu Holla B.S 
EAI Messaging Team, Putnam Investments

Tel (Putnam): 1 x 0075, 1 x 0074, 1 x 0073
Tel (WIPRO): 91-80-8520408 Extn: 83232

 


**Disclaimer

Information contained in this E-MAIL being proprietary to Wipro Limited is
'privileged' and 'confidential' and intended for use only by the individual
or entity to which it is addressed. You are notified that any use, copying
or dissemination of the information contained in the E-MAIL in any manner
whatsoever is strictly prohibited.

***



--- 
WE HAVE MOVED - PLEASE NOTE OUR NEW CONTACT DETAILS: 
THE BASEMENT, 33 EWELL STREET, BALMAIN NSW 2041 
TEL: +61 2 9555 1544 FAX: +61 2 9555 6911 
Certain disclaimers and policies apply to all email sent from Memetrics.
For the full text of these disclaimers and policies see 
http://www.memetrics.com/emailpolicy.html

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] database speed

2003-10-31 Thread Doug McNaught
Dennis Gearon <[EMAIL PROTECTED]> writes:

> Is it possible for a trigger to fork off a 'reindex' command and have the
> trigger terminate, leaving the reindex running?

Even if possible, ick.

I'd say the best thing to do would be to have such a trigger set a flag
(a value in a one-row table) and have a cron job that checks for that
flag and runs REINDEX.
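The flag-plus-cron pattern can be sketched as follows. This is only an illustration using an in-memory SQLite table to stand in for the real database: in PostgreSQL the flag would be set by a trigger on the busy table, and the check would run from cron and issue a real REINDEX. All table and function names here are invented for the sketch.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# One-row flag table; in PostgreSQL a trigger on the busy table would set it.
cur.execute("CREATE TABLE maint_flag (needs_reindex INTEGER NOT NULL)")
cur.execute("INSERT INTO maint_flag VALUES (0)")

def mark_dirty():
    # Stands in for the trigger body.
    cur.execute("UPDATE maint_flag SET needs_reindex = 1")

def cron_job():
    """Periodic job: if the flag is set, clear it and run the maintenance."""
    dirty = cur.execute("SELECT needs_reindex FROM maint_flag").fetchone()[0]
    if not dirty:
        return False
    cur.execute("UPDATE maint_flag SET needs_reindex = 0")
    # Here the real job would issue: REINDEX TABLE busy_table;
    return True

mark_dirty()
ran = cron_job()        # flag was set, so the maintenance step runs
ran_again = cron_job()  # flag already cleared, nothing to do
```

The point of the indirection is that the trigger only does a cheap one-row UPDATE; the expensive REINDEX runs outside any user transaction.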

-Doug

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Custom types and arrays

2003-10-31 Thread Tom Lane
"mlunnon @ RWA" <[EMAIL PROTECTED]> writes:
> I get:
> psql:Supplements.sql:34: ERROR:  Type "test_type[]" does not exist
> psql:Supplements.sql:41: ERROR:  Type "_test_type" does not exist

It works for me ... are you sure it's not something silly like mistyping
the type name, or adding/omitting double quotes?

regards, tom lane

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] slow query performance

2003-10-31 Thread Mark Kirkwood
Dave Weaver wrote:

>> - clustering the "obs" table on "station"
>
> Sorry, I don't understand what you mean by this - can you explain?

Supposing obs_pkey is on (station, valid_time):

CLUSTER obs_pkey ON obs;

will re-order the rows in obs based on the index obs_pkey. (This is 
clustering on 'station' and 'valid_time', to do just station you could 
use an index on just 'station').

The down side is that the row ordering is slowly lost as rows are 
updated, so periodic running of the CLUSTER command is needed - this is 
a pain, as it will take a while for a 13-million-row table.

regards

Mark

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] regular backups - super user

2003-10-31 Thread DeJuan Jackson
quick answer: research/read the pg_hba.conf in the data directory.

Jeff MacDonald wrote:

Hi,

When I run pg_dumpall as the superuser [postgres in my case] it asks
for a password for every database. I don't know my users' passwords. Is
there a way to make the superuser able to back up without passwords?
Thanks.

Jeff.



---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] table locking.. FK tables locking on insert

2003-10-31 Thread Stephan Szabo

On Tue, 28 Oct 2003, Ati Rosselet wrote:

> I have several tables with common FKs.. the problem occurs when
> performing a large number of inserts on any of the tables at the same
> time.  The first to start inserting seems to get a lock on the
> referenced FK tables and doesn't release them until it's done, forcing
> the other apps to just wait... and wait... since they also seem to want a
> lock for their FK tables. Something just doesn't seem right here :(
>
> when I remove the FKs, everything goes fine.. but I need my foreign
> keys.. really I do.
>
> What does postgres do here?  or is the problem in the jdbc driver.. or
> maybe the server? (using Jboss 3.2.1, hibernate and 7.3.2 postgres)..
>
> any ideas?

A lock is grabbed on the associated pk row (so that some other transaction
can't delete it). Unfortunately that lock conflicts with other fk
modifications attempting to grab the lock. Two possible solutions involve
read locks or dirty reads. Neither of these is trivial; one will get
implemented eventually, but there's no concrete timetable.

For more details, you may want to look up info in the archives.

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Cartesian product bug?

2003-10-31 Thread Greg Stark
Åsmund Kveim Lie <[EMAIL PROTECTED]> writes:

> SELECT * FROM a,b NATURAL JOIN c;

This parses as 

select * from a, (b natural join c)

> SELECT * FROM a CROSS JOIN b NATURAL JOIN c;

This parses as

select * from (a cross join b) natural join c

> These two example queries should give the same result. In the first query, it
> seems like it’s doing the natural join between b and c, and then does
> the Cartesian product on that result with a. On the second query, it does as
> we assume it should, namely does the Cartesian product first.
> 
> Is this the correct behavior?

Yes.

You can use parentheses to change the grouping of the explicit joins, like

select * from a cross join (b natural join c);

But the implicit join is harder to fix. I think you either need to use an
explicit join like above or a subquery like

select * from (select * from a,b) as ab natural join c

I tend to find it's easier to stick to all explicit or all implicit joins and
not mix them. Personally I like explicit joins for aesthetic reasons
especially in 7.4 where they get optimized as well as implicit joins.
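The two groupings can be checked side by side even without a PostgreSQL server at hand. The sketch below uses Python's sqlite3 (whose NATURAL JOIN semantics match for this example); both groupings are spelled with explicit subqueries so the parse is unambiguous regardless of each engine's precedence rules:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE a (a1 TEXT, a2 TEXT);
CREATE TABLE b (b1 TEXT, b2 TEXT);
CREATE TABLE c (a1 TEXT, b1 TEXT, c1 TEXT);
INSERT INTO a VALUES ('a1', 'a2');
INSERT INTO b VALUES ('b1', 'b2');
INSERT INTO c VALUES ('a3', 'b1', 'c1');
""")

# Grouping 1: a x (b NATURAL JOIN c) -- what "FROM a, b NATURAL JOIN c" means.
n1 = cur.execute(
    "SELECT count(*) FROM a CROSS JOIN (SELECT * FROM b NATURAL JOIN c)"
).fetchone()[0]

# Grouping 2: (a x b) NATURAL JOIN c -- what "a CROSS JOIN b NATURAL JOIN c" means.
n2 = cur.execute(
    "SELECT count(*) FROM (SELECT * FROM a CROSS JOIN b) NATURAL JOIN c"
).fetchone()[0]

print(n1, n2)  # grouping 1 joins only on b1; grouping 2 also requires a1 to match
```

Grouping 1 yields one row (b1 matches), grouping 2 yields zero rows (a1 = 'a1' vs 'a3'), reproducing the two results from the original post.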


-- 
greg


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] regular backups - super user

2003-10-31 Thread Stuart Johnston
DeJuan Jackson wrote:
quick answer: research/read the pg_hba.conf in the data directory.

Jeff MacDonald wrote:

Hi,

When I run pg_dumpall as the superuser [postgres in my case] it asks
for a password for every database. I don't know my users' passwords. Is
there a way to make the superuser able to back up without passwords?
If your users always connect through TCP/IP then you can set local 
connections to not use passwords but I'm pretty sure there is no way to 
configure this just for the postgres user (I wish you could).

If you need passworded local connections for users then you'll probably 
want to turn on 'ident' mode just long enough to set the password for 
the postgres user (or some other 'super user').  Then you may also want 
to use the PGPASSWORD environment variable or the .pgpass file to avoid 
typing the password repeatedly.
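For reference, each line of ~/.pgpass has the form hostname:port:database:username:password, with * acting as a match-anything wildcard in any of the first four fields. A minimal sketch of the matching rule (ignoring the file format's backslash-escaping of ':' and '\'):

```python
def pgpass_lookup(lines, host, port, db, user):
    """Return the password from the first matching .pgpass-style line."""
    for line in lines:
        h, p, d, u, pw = line.strip().split(":")
        # a field matches if it is "*" or equal to the requested value
        if all(want in ("*", got)
               for want, got in zip((h, p, d, u), (host, port, db, user))):
            return pw
    return None

pw = pgpass_lookup(["localhost:5432:*:postgres:s3cret"],
                   "localhost", "5432", "mydb", "postgres")
print(pw)
```

With the wildcard in the database field, one line covers every database the postgres superuser dumps, which is exactly what pg_dumpall needs.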

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [GENERAL] [SQL] connectby

2003-10-31 Thread BenLaKnet




I use PostgreSQL 7.2.3.
How can I use connectby?

Must I install files or packages? Or is it recommended to upgrade the
database server?




George Essig wrote:

  
hi

I have menu table:
  id  | integer | not null default 
nextval('public.menu_id_seq'::text)
  parent_id   | integer |
  description | text|

I do select:
test=> SELECT * FROM connectby('menu','id','parent_id','2',0,'~') t(id 
integer, parent_id integer, level int, branch text);
 id | parent_id | level |   branch
----+-----------+-------+-------------
  2 |           |     0 | 2
  4 |         2 |     1 | 2~4
  7 |         4 |     2 | 2~4~7
 10 |         7 |     3 | 2~4~7~10
 16 |        10 |     4 | 2~4~7~10~16
  9 |         4 |     2 | 2~4~9

How can I select the description field from the menu table, not only the id, 
parent_id, level, and branch fields?

-- 
WBR, sector119

  
  
Try a join with the original table:

SELECT t.*, description FROM connectby('menu','id','parent_id','2',0,'~') 
AS t(id integer, parent_id integer, level int, branch text), menu 
WHERE t.id = menu.id

George Essig
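The same join-back pattern can be exercised without the tablefunc module by emulating connectby with a recursive query. This hedged sketch uses Python's sqlite3 and a WITH RECURSIVE query over an invented menu table (connectby itself, from contrib/tablefunc, is PostgreSQL-specific):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE menu (id INTEGER PRIMARY KEY, parent_id INTEGER, description TEXT);
INSERT INTO menu VALUES (2, NULL, 'root'), (4, 2, 'child'), (7, 4, 'grandchild');
""")

rows = cur.execute("""
WITH RECURSIVE t(id, parent_id, level, branch) AS (
    SELECT id, parent_id, 0, CAST(id AS TEXT) FROM menu WHERE id = 2
    UNION ALL
    SELECT m.id, m.parent_id, t.level + 1, t.branch || '~' || m.id
    FROM menu m JOIN t ON m.parent_id = t.id
)
-- join the tree back to menu to pick up description, as in the answer above
SELECT t.id, t.branch, menu.description
FROM t JOIN menu ON t.id = menu.id
ORDER BY t.branch
""").fetchall()

print(rows)
```

The final SELECT is the point: the recursive part only carries id/parent_id/branch, and any extra columns (here description) come from joining the tree back to the original table on id.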

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]

  





[GENERAL] Where are the database files located?

2003-10-31 Thread Victor Spång Arthursson
Well, I bought a new disk for my Powerbook yesterday and reinstalled Mac 
OS X. I'm in the middle of restoring everything now, and need to know 
where the database files were located. I thought /usr/local/pgsql/data 
would be the appropriate place, but I can't find anything there that 
makes sense…

Regards,

Victor

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


[GENERAL] slow query performance

2003-10-31 Thread Dave Weaver

I'm having severe performance issues with a conceptually simple
database.  The database has one table, containing weather observations.
The table currently has about 13.5 million rows, and is being updated
constantly. The database is running on a dual 550MHz PIII with 512MB RAM.

On the whole, queries are of the form:

SELECT ? FROM obs WHERE station = ?
AND valid_time < ? AND valid_time > ?
or:
SELECT ? FROM obs WHERE station IN (?, ?, ...)
AND valid_time < ? AND valid_time > ?

Queries like these are taking around 4 to 5 minutes each, which seems
excessively slow to me (or are my expectations far too optimistic?).

For instance:
SELECT station, air_temp FROM obs
 WHERE station = 'EGBB'
AND valid_time > '28/8/03 00:00'
AND valid_time < '28/10/03 00:00'

takes 4 mins 32 secs.

An EXPLAIN of the above query says:
  NOTICE:  QUERY PLAN:

  Index Scan using obs_pkey on obs  (cost=0.00..9.01 rows=1 width=20)

A simple "SELECT count(*) from obs" query takes around that sort of time
too.

I have run "vacuumdb --analyze obs", to little effect.

How can I speed this up? Where am I going wrong? Is there a problem with
the table structure, or the indexes? Does the continual updating of the
database (at the rate of somewhere between 1-3 entries per second) cause
problems?

The table and indexes are defined as follows:


   Table "obs"
 Attribute  |   Type   | Modifier 
+--+--
 valid_time | timestamp with time zone | 
 metar_air_temp | double precision | 
 relative_humidity  | double precision | 
 pressure_change| double precision | 
 ceiling| double precision | 
 metar_dew_point| double precision | 
 metar_gusts| double precision | 
 wet_bulb_temperature   | double precision | 
 past_weather   | text | 
 visibility | double precision | 
 metar_visibility   | double precision | 
 precipitation  | double precision | 
 station| character(10)| 
 pressure_msl   | double precision | 
 metar_min_temperature_6hr  | double precision | 
 precipitation_period   | double precision | 
 metar_wet_bulb | double precision | 
 saturation_mixing_ratio| double precision | 
 metar_pressure | double precision | 
 metar_sky_cover| text | 
 dew_point  | double precision | 
 wind_direction | double precision | 
 actual_time| timestamp with time zone | 
 gust_speed | double precision | 
 high_cloud_type| text | 
 precipitation_24hr | double precision | 
 metar_precipitation_24hr   | double precision | 
 pressure_tendency  | text | 
 metar_relative_humidity| double precision | 
 low_cloud_type | text | 
 metar_max_temperature_6hr  | double precision | 
 middle_cloud_type  | text | 
 air_temp   | double precision | 
 low_and_middle_cloud_cover | text | 
 metar_wind_dir | double precision | 
 metar_weather  | text | 
 snow_depth | double precision | 
 metar_snow_depth   | double precision | 
 min_temp_12hr  | double precision | 
 present_weather| text | 
 wind_speed | double precision | 
 snow_cover | text | 
 metar_wind_speed   | double precision | 
 metar_ceiling  | double precision | 
 max_temp_12hr  | double precision | 
 mixing_ratio   | double precision | 
 pressure_change_3hr| double precision | 
 total_cloud| integer  | 
 max_temp_24hr  | double precision | 
 min_temp_24hr  | double precision | 
 snow_amount_6hr| double precision | 
Indices: obs_pkey,
 obs_station,
 obs_valid_time

   Index "obs_pkey"
 Attribute  |   Type   
+--
 valid_time | timestamp with time zone
 station| character(10)
unique btree

Index "obs_station"
 Attribute | Type  
---+---
 station   | character(10)
btree

Index "obs_valid_time"
 Attribute  |   Type   

[GENERAL] Fixed field text import

2003-10-31 Thread Brent Wood


Is there an easy way (similar to COPY) to import fixed width text files
directly into Postgres tables?

COPY is fine for files with delimited fields, but I have fixed format text
files to import into tables.
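A common workaround, in the absence of a fixed-width mode for COPY, is to preprocess the file into delimited form and feed that to COPY FROM. A hedged sketch (the three field widths and the sample record are hypothetical; adjust to the real file layout):

```python
import io

WIDTHS = [10, 8, 6]  # assumed column widths; adjust to match the actual file

def fixed_to_tab(line, widths=WIDTHS):
    """Slice one fixed-width record into stripped, tab-delimited fields."""
    fields, pos = [], 0
    for w in widths:
        fields.append(line[pos:pos + w].strip())
        pos += w
    return "\t".join(fields)

# a stand-in for the real input file
src = io.StringIO("EGBB      20031028  12.5\n")
out = [fixed_to_tab(line.rstrip("\n")) for line in src]
print(out)
```

The converted lines can then be streamed straight into a COPY tablename FROM stdin, so the fixed-width file never needs an intermediate copy on disk.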



Thanks,

  Brent Wood


---(end of broadcast)---
TIP 8: explain analyze is your friend


[GENERAL] Request for Interview Candidates - PG Research Help

2003-10-31 Thread Jillian Carroll
Good morning,

I have a contact who is looking to speak with up to five (5) individuals
regarding: open-source DBMS, end users who are using PostgreSQL in
applications that they use to run their business, and ISVs who have based
their applications on PostgreSQL.

If you are interested, please contact me off-list with the category you fit
into (developer/user/var/isv) along with what your suggested topic of
discussion would be.

There will be a perspective published from these interviews, which will be
made available to everyone free of charge.

--
Jillian


Re: [GENERAL] EMBEDDED BUG?!?!?!?

2003-10-31 Thread Michael Meskes
On Mon, Oct 27, 2003 at 06:25:46PM -0700, [EMAIL PROTECTED] wrote:
> [EMAIL PROTECTED] pgsql]$ postmaster -V
> postmaster (PostgreSQL) 7.1.3

This is pretty old already.

> EXEC SQL FETCH stck_info_crsr INTO :i_symbol :i_data_date, :i_open, :i_high,
> :i_low, :i_close, :i_volume;

If this is taken literally from your program be aware that it cannot
work as there is no "," between ":i_symbol" and ":i_data_date" thus the
latter is considered an indicator.

> when executing the fetch I get:
> 
> sqlcaid = SQLCA   
> sqlabc = 4.642502
> sqlcode = 4.642506
> sqlca.sqlerrm.sqlerrml = 49
> sqlca.sqlerrm.sqlerrmc = Not correctly formatted int type: 29.25 line 500.

Okay, it seems you did use the line above and try to read a real value
into an int.

> BEFORE ** A   6 20020102.00 28.51 29.34 28.45 0
> SYMBOL ** 'A  '
> D_DATE ** '0'
> i_open ** '20020102.00'
> VOLUME ** '0'
> 
> The BEFORE, SYMBOL, D_DATE, i_open, VOLUME are from simple printf's.

This is utterly useless unless you provide the sources so we can see
what you actually printf.

> Apparently postgres is misreading the declaration of i_symbol and
> loading part of the declaration into the actual value of the data

Nope. The "6" comes from beyond the limit of the variable. The string simply
seems to lack the delimiting \0.

> The folks who do embedded stuff need to take a look at this!!

They just did. They even used their spare time to create a test case
themselves as you did only provide code snippets.

Michael
-- 
Michael Meskes
Email: Michael at Fam-Meskes dot De
ICQ: 179140304, AIM/Yahoo: michaelmeskes, Jabber: [EMAIL PROTECTED]
Go SF 49ers! Go Rhein Fire! Use Debian GNU/Linux! Use PostgreSQL!

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] database speed

2003-10-31 Thread Dennis Gearon
Vivek Khera wrote:

"DM" == Doug McNaught <[EMAIL PROTECTED]> writes:
   

DM> The index bloat problem can occur when you have a an indexed SERIAL
DM> column whose value always increases, and you delete older rows on a
DM> regular basis.  VACUUM recycles the row storage, but the index never
DM> shrinks. 

I get this on tables which are updated a lot, even if I don't do any
mass deletes of older rows.  The indexes on other columns also bloat
rapidly on some tables.  It is a significant problem, especially since
reindex locks the tables for significant amounts of time.
 

Is it possible for a trigger to fork off a 'reindex' command and have the 
trigger terminate, leaving the reindex running?

Using PL/PGSQL?

What if a counter were kept in some table, and after a certain number of 
updates/deletes, REINDEX were run automatically, on a more frequent 
schedule?

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] help from Portugal

2003-10-31 Thread Fernando Schapachnik
Looks like the new DB might have been initialized with a different locale. Read
about locales and initdb in the docs.

Good luck.

Fernando.

En un mensaje anterior, Pedro Miguel Guedes escribió:
[...]
> 
> The problem seems to be on accent on letters. So I thought maybe some stuff
> on charsets/encoding and I tried all the advices on the html/docs changing
> database and client encodings to LATIN1 (it was SQL_ASCII) to no help.
> Environ var like LANG and LC_ALL equal "pt_PT" on /etc/init.d/postgres.

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[GENERAL] duplicates rejected by index reports PGSQL_FATAL_ERROR

2003-10-31 Thread tj
v7.2.2

When an insert fails because it's in violation of an index (duplicate), I get 
a return code 7 (PGSQL_FATAL_ERROR) in my PHP 4.2.3 interface.

It's not really a fatal error, is it?  Shouldn't it be reported as a 
NONFATAL_ERROR?

I have a "dumb" synchronizing task that attempts to import a daily 
cumulative batch of records, relying on a specific table index to reject 
the records that were successfully imported by previous runs.  The "index 
rejection" method takes only 25 seconds for my batch of 30,000 records, 
while the "select to find if non-existing, then perform the insert" method 
takes 25 minutes.  Is there a better way?
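The "index rejection" approach described above can be sketched like this, with SQLite via Python standing in for PostgreSQL-plus-PHP and an invented table layout; the idea is the same: let the unique index throw, catch the error per row, and move on:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE log (day TEXT, item TEXT)")
cur.execute("CREATE UNIQUE INDEX log_key ON log (day, item)")

def import_batch(batch):
    """Insert each record, letting the unique index reject duplicates."""
    accepted = 0
    for rec in batch:
        try:
            cur.execute("INSERT INTO log VALUES (?, ?)", rec)
            accepted += 1
        except sqlite3.IntegrityError:
            pass  # duplicate from a previous run: not fatal, just skip it
    return accepted

first = import_batch([("d1", "a"), ("d1", "b")])
# the next day's cumulative batch repeats the old rows and adds one new one
second = import_batch([("d1", "a"), ("d1", "b"), ("d2", "a")])
print(first, second)
```

The duplicate violation is handled as an expected, recoverable condition rather than an abort, which is exactly why reporting it as a fatal error feels wrong.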

-- 
TJ

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


[GENERAL] help from Portugal

2003-10-31 Thread Pedro Miguel Guedes
Dear sirs

I have a crazy problem here.
Nobody seems to know why.
I am not a database expert, nor do I know much about Postgres.
I am just a system administrator trying to move a database from
one system to another.
On the old system (Slackware Linux) the Postgres was downloaded
and compiled from source.
On the new system (Red Hat 9.0) the Postgres was installed from
RPM precompiled binaries.
Everything seemed OK.
I ran pg_dump on the original.
I ran createdb and pg_restore on the destination.
I createdb anf pg_restore on the destination.

On the old system I did the query:
SELECT * FROM contactos WHERE empresa_nome ILIKE '%nestlé%';

and it did work.
On the new system it produces nothing!?

If I do
SELECT * FROM contactos WHERE empresa_nome ILIKE '%NESTLÈ%';
it works!

For queried material where accents on letters do not matter, like NISA/nisa,
there are no problems with the query.

The problem seems to be with accents on letters. So I thought maybe some stuff
with charsets/encoding, and I tried all the advice in the html/docs, changing
database and client encodings to LATIN1 (it was SQL_ASCII), to no avail.
Environment variables like LANG and LC_ALL equal "pt_PT" in /etc/init.d/postgres.

The guys who did the programming on the database don't have any clues.
Me, as I said, I am not an expert and I am a bit lost, not knowing what to do
next.

Do you have any ideas?

Thanks for any help

Pedro Guedes
PORTUGAL
[EMAIL PROTECTED]



---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] Altering columns order

2003-10-31 Thread Daniele Varrazzo
Hi everybody.

Is there any way to alter the order the columns appear in a table (apart from using 
views)?

I tried changing attnum in pg_attribute... but I only managed to mess the server up! 
Subsequent INSERT commands crashed it until I restored the previous order... :-\

If the answer is "You have to create a new table and dump everything into it": is 
there a simple way to also move the foreign keys referencing it, and possibly 
alter other objects that depend on it?

Thanks a lot

Daniele


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] INSERT performance

2003-10-31 Thread Razvan Surdulescu
I'm running PostgreSQL 7.3.4 on Windows 2000/Cygwin (AMD Athlon XP+ 
1800, 512MB RAM).

I want to insert about 500 records into a table. The table is heavily 
indexed (has about 10-12 indices created on it). The insert is performed 
in a transaction block.

If I keep the indices on the table, the insert takes about 12 seconds. 
If I drop the indices, do the insert, and then re-create the indices, 
the entire process takes about 3-4 seconds.
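The drop-indices/insert/recreate pattern described above looks roughly like this, sketched with SQLite via Python (the table layout and index name are invented for the sketch; in PostgreSQL the same steps would be DROP INDEX, a transacted INSERT loop, then CREATE INDEX):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE t (k INTEGER, v TEXT)")
cur.execute("CREATE INDEX t_k ON t (k)")

# 1. drop the index so inserts don't pay per-row index maintenance
cur.execute("DROP INDEX t_k")

# 2. bulk insert the 500 records inside a single transaction
cur.executemany("INSERT INTO t VALUES (?, ?)", [(i, str(i)) for i in range(500)])
con.commit()

# 3. recreate the index in one pass over the data
cur.execute("CREATE INDEX t_k ON t (k)")

n = cur.execute("SELECT count(*) FROM t").fetchone()[0]
idx = cur.execute(
    "SELECT name FROM sqlite_master WHERE type = 'index' AND name = 't_k'"
).fetchone()
print(n, idx)
```

The win comes from step 3: building an index once over all rows is far cheaper than updating 10-12 indices on every one of the 500 inserts.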

I am somewhat surprised at both of those performance numbers above -- I 
would have expected both of them to be a lot smaller (about half of what 
they are). 500 records is not that big of a number! In particular, the 
transacted insert without indices should be very fast!

Are these numbers about in range to what I should expect from 
PostgreSQL? I looked through the various performance FAQs and I cannot 
use the COPY command since the data needs to be processed first, and 
this can only be done in memory.

Any suggestions/pointers would be much appreciated.

Thanks,

Razvan.

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [GENERAL] Attribute 'rtrim' is repeated

2003-10-31 Thread Stephan Szabo
On Tue, 28 Oct 2003 [EMAIL PROTECTED] wrote:

>
> In trying to trim some data while selecting and loading from 2 tables
> into 1 I get the following:
>
> SELECT TRIM(TRAILING ' ' FROM s.strk_sym_cmplt), TRIM(TRAILING ' ' FROM
> s.stk_sym), s.c_p, o.dta_date, o.dta_date_num, o.dta_date_dys, o.strk_sym,
> o.strk_yr, s.strk_num, s.exp_sym, s.exp_mo, s.exp_mo_num, s.exp_yr,
> s.exp_yr_num, s.exp_date, s.exp_date_dys, o.opn, o.hi, o.lw, o.cls, o.vol,
> o.opn_intrst, o.bd, o.ak
> INTO TABLE optn_calls_tbl
> FROM option_data_oprtnl o, strk_data_oprtnl s
> WHERE TRIM(TRAILING ' ' FROM o.strk_sym_ful) = TRIM(TRAILING ' ' FROM
> s.strk_sym_cmplt) AND s.c_p = 'C';
> ERROR:  Attribute 'rtrim' is repeated
>
> I understand that rtrim is an underlying function to trim.  Is this a BUG???
> How do I fix this problem?

You probably want to give your columns names,
SELECT TRIM(...) AS strk_sym_cmplt, TRIM(...) AS stk_sym, ...

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] problem with huge joins

2003-10-31 Thread Tom Lane
Kolus Maximiliano <[EMAIL PROTECTED]> writes:
>   In order to get this, I run this query:

> SELECT ip, TO_CHAR(date, '-MM-DD'), protocol, port 
> FROM tProxyPort, tProxyList, tProxyHost
> WHERE tProxyPort.listId=tProxyList.listId
>   AND tProxyList.output=tProxyHost.hostId
> ORDER BY ip, port

>   Whose query plan is:

> Sort  (cost=311874.07..311874.07 rows=986130 width=44) (actual time=300086.42..302580.25 rows=986130 loops=1)
>   ->  Hash Join  (cost=39735.96..96907.83 rows=986130 width=44) (actual time=86226.28..223195.50 rows=986130 loops=1)
>         ->  Seq Scan on tport  (cost=0.00..18629.30 rows=986130 width=12) (actual time=0.15..25910.56 rows=986130 loops=1)
>         ->  Hash  (cost=35972.38..35972.38 rows=403034 width=32) (actual time=86194.99..86194.99 rows=0 loops=1)
>               ->  Hash Join  (cost=9787.92..35972.38 rows=403034 width=32) (actual time=12180.64..84316.65 rows=403927 loops=1)
>                     ->  Seq Scan on thost  (cost=0.00..7850.41 rows=457341 width=16) (actual time=619.09..10032.85 rows=458787 loops=1)
>                     ->  Hash  (cost=6812.34..6812.34 rows=403034 width=16) (actual time=6656.36..6656.36 rows=0 loops=1)
>                           ->  Seq Scan on tlist  (cost=0.00..6812.34 rows=403034 width=16) (actual time=6.90..5030.22 rows=403927 loops=1)
> Total runtime: 317046.69 msec

The joins and sort steps seem to take rather a long time.  What do you
have sort_mem set to?  You probably want it on the order of 10MB so that
these joins are done in memory rather than spilling to disk.

The hash indexes are a waste of time for this :-(
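For reference, a sketch of the relevant postgresql.conf line in the 7.x era (sort_mem is per sort/hash operation and is measured in kilobytes; the particular value is an assumption matching the ~10MB suggestion above):

```
# postgresql.conf
sort_mem = 10240    # kilobytes per sort/hash operation, i.e. ~10MB
```

Because the limit applies per operation, a plan with several sorts/hashes can use a multiple of this, so it shouldn't be set recklessly high on a 512MB box.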

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: 7.3.5 release (was: Re: [GENERAL] SELECT with row>32k hangs over

2003-10-31 Thread Lynn . Tilby

Perhaps I am not the appropriate one to mention this (and if you already
have a QA group, great!) since I am not a Postgres developer, but...  Is
there a QA group that looks at the design specs, does independent desk
checking, writes tests according to those specs, and then tests the code
before it is released?

Just a thought...

Lynn

Quoting Tom Lane <[EMAIL PROTECTED]>:

> Jan Wieck <[EMAIL PROTECTED]> writes:
> > Tom Lane wrote:
> >> Yeah, I think we have accumulated enough changes in the 7.3 branch
> to
> >> justify a 7.3.5, but I'm not sure when we'll get around to it.
> 
> > On 10/03/2003 Bruce was the only one responding to my question if the
> 
> > namespace fix I had for PL/Tcl should be backpatched into 7.3.5. He 
> > claimed that we'll probably not release any 7.3.X any more and we 
> > dropped the issue.
> > Guess the question is open again then.
> 
> I'm on the fence right now, but one or two more fixes in the 7.3
> branch
> will be enough to make me feel we should put out 7.3.5.  If you are
> confident of that namespace fix, then I'd say by all means commit it
> into the 7.3 branch so it will be there when 7.3.5 happens.
> 
> Attached are the current CVS log entries for post-7.3.4 changes in
> REL7_3_STABLE.  What do you think, is it time yet?
> 
>   regards, tom lane
> 
> 2003-10-20 16:01  tgl
> 
>   * src/backend/rewrite/: rewriteManip.c (REL7_3_STABLE),
>   rewriteManip.c: It is possible for ResolveNew to be used to insert
>   a sublink into a subquery that didn't previously have one.  We have
>   traditionally made the caller of ResolveNew responsible for
>   updating the hasSubLinks flag of the outermost query, but this
>   fails to account for hasSubLinks in subqueries.  Fix ResolveNew to
>   handle this.  We might later want to change the calling convention
>   of ResolveNew so that it can fix the outer query too, simplifying
>   callers.  But I went with the localized fix for now.  Per bug
>   report from J Smith, 20-Oct-03.
> 
> 2003-10-02 18:25  tgl
> 
>   * src/backend/utils/adt/ruleutils.c (REL7_3_STABLE): When dumping
>   CREATE INDEX, must show opclass name if the opclass isn't in the
>   schema search path.  Otherwise pg_dump doesn't correctly dump
>   scenarios where a custom opclass is created in 'public' and then
>   used by indexes in other schemas.
> 
> 2003-09-29 14:53  momjian
> 
>   * src/bin/scripts/clusterdb (REL7_3_STABLE): 
>   [ Patch applied only to 7.3.X.]
>   
>   Hi There's a bug in the clusterdb script where it looks like the
>   arguments to the psql command are being passed in the wrong order,
>   so it fails when you run it on a database that is not on localhost.
>   Here's the output from the command:
>   
>   133 anands-Computer:bin/scripts> clusterdb -h wooster -U rr granada
>   psql: warning: extra option wooster ignored psql: warning: extra
>   option -U ignored psql: warning: extra option rr ignored psql:
>   warning: extra option -F: ignored psql: warning: extra option -P
>   ignored psql: warning: extra option format=unaligned ignored psql:
>   warning: extra option -t ignored psql: warning: extra option -c
>   ignored psql: warning: extra option SELECT nspname,
>   pg_class.relname, pg_class_2.relname FROM pg_class, pg_class AS
>   pg_class_2 JOIN pg_namespace ON (pg_namespace.oid=relnamespace),
>   pg_index WHERE pg_class.oid=pg_index.indrelid AND
>   pg_class_2.oid=pg_index.indexrelid AND pg_index.indisclustered AND
>   pg_class.relowner=(SELECT usesysid FROM pg_user WHERE
>   usename=current_user) ignored psql: FATAL:  user "-h" does not
>   exist
>   
>   I'm attaching a patch that fixes the problem. The diff was run on
>   postgresql 7.3.4
>   
>   Thanks a lot.  Anand Ranganathan
> 
> 2003-09-28 13:46  wieck
> 
>   * src/bin/pg_dump/pg_dump.c (REL7_3_STABLE): Backpatched changes
>   for rules when casts are dumped according to discussion on hackers.
>   
>   Jan
> 
> 2003-09-23 11:11  tgl
> 
>   * src/backend/executor/spi.c (REL7_3_STABLE): _SPI_cursor_operation
>   forgot to check for failure return from _SPI_begin_call.  Per gripe
>   from Tomasz Myrta.
> 
> 2003-09-17 14:40  tgl
> 
>   * src/pl/plpython/plpython.c (REL7_3_STABLE): Back-patch fix for
>   plpython problems with dropped table columns; per bug report from
>   Arthur Ward, who also tested this patch.
> 
> 2003-09-03 15:01  tgl
> 
>   * src/backend/utils/adt/formatting.c (REL7_3_STABLE): Back-patch
>   the other part of Karel's formatting bug fix.
> 
> 2003-09-03 11:00  tgl
> 
>   * src/backend/utils/adt/formatting.c (REL7_3_STABLE): Repair
>   problems with to_char() overrunning its input string.  From Karel
>   Zak.
> 
> 2003-08-24 17:26  petere
> 
>   * src/bin/psql/po/de.po (REL7_3_STABLE): Fix translation mistake.
> 
> 20

Re: [GENERAL] [SQL] connectby

2003-10-31 Thread Benoît BOURNON




I use PostgreSQL 7.2.3.
How can I use connectby?

Do I have to install files or packages, or is it recommended to upgrade the
database server?


George Essig wrote:

  
hi

I have menu table:
  id          | integer | not null default nextval('public.menu_id_seq'::text)
  parent_id   | integer |
  description | text    |

I do select:
test=> SELECT * FROM connectby('menu','id','parent_id','2',0,'~') t(id 
integer, parent_id integer, level int, branch text);
 id | parent_id | level |   branch
----+-----------+-------+-------------
  2 |           |     0 | 2
  4 |         2 |     1 | 2~4
  7 |         4 |     2 | 2~4~7
 10 |         7 |     3 | 2~4~7~10
 16 |        10 |     4 | 2~4~7~10~16
  9 |         4 |     2 | 2~4~9

How can I also select the description field from the menu table, not only the
id, parent_id, level, and branch fields?

-- 
WBR, sector119

  
  
Try a join with the original table:

SELECT t.*, description FROM connectby('menu','id','parent_id','2',0,'~') 
AS t(id integer, parent_id integer, level int, branch text), menu 
WHERE t.id = menu.id

George Essig

Re: [GENERAL] Viewing linked large postgres tables with Access: Running out of memory

2003-10-31 Thread Emmon Simbo
Hi

I had a customer with a similar problem to yours. I believe the solution
is to set UseDeclareFetch to Yes. This stops the driver from
pulling the entire result set onto the client.

Emmon Simbo
OpenLink Software



[EMAIL PROTECTED] (moonman) wrote in message news:<[EMAIL PROTECTED]>...
> Hello all,
> 
> First, let me explain that I'm an absolute database novice. The reason
> I've made so much progress so far with PostgreSQL is that it is an
> amazing software suite with exceptional documentation. So if I'm
> asking an idiotic question, I apologize... but at least tell me what
> the obvious is.
> 
> I just downloaded the psqlodbc_07_03_0200.zip windows odbc drivers for
> postgresql.
> 
> I can link to my postgresql server and generally do queries no
> problem.
> 
> My problem is that my users have the annoying habit of clicking on the
> linked table icon to view some of the data.
> 
> When they do this, the memory used by MS Access grows until there is
> no memory left.
> 
> Short of ridiculing the users into proper behaviour, is there some way
> to configure the ODBC driver to only show 'x'-number of records at a
> time (similar to the Oracle ODBC driver we currently have on other
> databases)?
> 
> Thanks in advance
> 
> moonman



[GENERAL] Attribute 'rtrim' is repeated

2003-10-31 Thread Lynn . Tilby

In trying to trim some data while selecting and loading from 2 tables
into 1 I get the following:

SELECT TRIM(TRAILING ' ' FROM s.strk_sym_cmplt), TRIM(TRAILING ' ' FROM
s.stk_sym), s.c_p, o.dta_date, o.dta_date_num, o.dta_date_dys, o.strk_sym,
o.strk_yr, s.strk_num, s.exp_sym, s.exp_mo, s.exp_mo_num, s.exp_yr,
s.exp_yr_num, s.exp_date, s.exp_date_dys, o.opn, o.hi, o.lw, o.cls, o.vol,
o.opn_intrst, o.bd, o.ak
INTO TABLE optn_calls_tbl
FROM option_data_oprtnl o, strk_data_oprtnl s
WHERE TRIM(TRAILING ' ' FROM o.strk_sym_ful) = TRIM(TRAILING ' ' FROM
s.strk_sym_cmplt) AND s.c_p = 'C';
ERROR:  Attribute 'rtrim' is repeated

I understand that rtrim is the function underlying TRIM.  Is this a BUG???
How do I fix this problem?
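(For what it's worth, this error usually means both TRIM expressions got the
same default output column name — rtrim, the underlying function — and
SELECT ... INTO needs every column name to be distinct. A sketch of the fix,
aliasing each expression; the column list below is abbreviated from the query
above:

```sql
-- Give each trimmed expression an explicit, distinct alias so the new
-- table's columns have unique names.
SELECT TRIM(TRAILING ' ' FROM s.strk_sym_cmplt) AS strk_sym_cmplt,
       TRIM(TRAILING ' ' FROM s.stk_sym)        AS stk_sym,
       s.c_p, o.dta_date, o.opn, o.cls
INTO TABLE optn_calls_tbl
FROM option_data_oprtnl o, strk_data_oprtnl s
WHERE TRIM(TRAILING ' ' FROM o.strk_sym_ful) =
      TRIM(TRAILING ' ' FROM s.strk_sym_cmplt)
  AND s.c_p = 'C';
```
)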

Thanks for your help!
Lynn Tilby 



[GENERAL] table locking.. FK tables locking on insert

2003-10-31 Thread Ati Rosselet
I have several tables with common FKs. The problem occurs when
performing a large number of inserts on any of the tables at the same
time.  The first to start inserting seems to take a lock on the
referenced FK tables and doesn't release it until it's done, forcing
the other apps to just wait... and wait... since they also want a
lock on their FK tables. Something just doesn't seem right here :(

when I remove the FKs, everything goes fine.. but I need my foreign
keys.. really I do.

What does postgres do here?  or is the problem in the jdbc driver.. or
maybe the server? (using Jboss 3.2.1, hibernate and 7.3.2 postgres)..

any ideas? 
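(A way to see the blocking while the inserts run — a sketch, assuming the
pg_locks view and the regclass cast available from 7.3 on:

```sql
-- Show lock requests that are currently waiting; the blocked inserters
-- should show up here with ungranted locks on the referenced FK tables.
SELECT l.relation::regclass AS locked_table, l.pid, l.mode, l.granted
FROM pg_locks l
WHERE NOT l.granted;
```
)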




Re: [GENERAL] retrieve statement from catalogs

2003-10-31 Thread Shridhar Daithankar
On Tuesday 28 October 2003 14:33, Jaime Casanova wrote:
> Hi everybody, can anyone tell me if there's a way to retrieve the select
> instruction executed from the catalogs, or maybe via some structure in a
> trigger?
>
> The reason is that i have some selects constructed on-the-fly (just part of
> it) and i want to save that in a table in order to know what are the most
> used for optimizing them.
>
> Another reason for doing that is that i don't know all the selects in the
> program nor the page and this is an easier solution than examine all the
> code.

Start the postmaster with pg_ctl's -l logfile option so server output goes to
a log file, and set log_statement = true in postgresql.conf.

All SQL statements will then be logged.
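Concretely, with the 7.3-era setting names (later releases change
log_statement into a multi-valued option):

```ini
# postgresql.conf
log_statement = true    # write every SQL statement to the server log
```

Then start the server with something like `pg_ctl start -l /path/to/logfile`.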

 HTH

 Shridhar




Re: [GENERAL] Problem with ecpg

2003-10-31 Thread Michael Meskes
On Fri, Oct 31, 2003 at 11:20:27AM -0300, Carmen Gloria Sepulveda Dedes wrote:
> I have pgsql 7.3.4 and ecpg 2.10.0
> 
> Does exists a patch for this version?

No, but it is rather simple. Just search for "Unterminated" in
preproc/pgc.l and you'll find some mmerror() calls. In these calls just
replace the ET_ERROR with ET_FATAL. That's it.

Michael
-- 
Michael Meskes
Email: Michael at Fam-Meskes dot De
ICQ: 179140304, AIM/Yahoo: michaelmeskes, Jabber: [EMAIL PROTECTED]
Go SF 49ers! Go Rhein Fire! Use Debian GNU/Linux! Use PostgreSQL!



Re: [GENERAL] problem with huge joins

2003-10-31 Thread Kolus Maximiliano
> > Total runtime: 317046.69 msec

Total runtime: 216001.94 msec

    A lot better! Thanks!

> The hash indexes are a waste of time for this :-(

    Which kind should I use?

[GENERAL] grant all to database inheritence

2003-10-31 Thread Dave Cramer
I have a challenge: to be able to grant all on the database, and then
have subsequently created tables accessible by all users.

It seems to me that this is how a database should work. I do realize
that postgres doesn't do this now. Is there a way around this? Using
rules or some other mechanism?
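One common workaround, lacking real database-level defaults, is to generate
per-table GRANTs from the catalogs; a sketch (the schema name is an example):

```sql
-- Emit a GRANT statement for every ordinary table in the public schema;
-- feed the output back into psql to apply it.
SELECT 'GRANT ALL ON ' || c.relname || ' TO PUBLIC;'
FROM pg_class c, pg_namespace n
WHERE c.relnamespace = n.oid
  AND n.nspname = 'public'
  AND c.relkind = 'r';
```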

Dave
-- 
Dave Cramer <[EMAIL PROTECTED]>
fastcrypt




Re: [GENERAL] database speed

2003-10-31 Thread Vivek Khera
> "DM" == Doug McNaught <[EMAIL PROTECTED]> writes:

DM> The index bloat problem can occur when you have a an indexed SERIAL
DM> column whose value always increases, and you delete older rows on a
DM> regular basis.  VACUUM recycles the row storage, but the index never
DM> shrinks. 

I get this on tables that are updated a lot, even if I don't do any
mass deletes of older rows.  The indexes on other columns also bloat
rapidly on some tables.  It is a significant problem, especially since
REINDEX locks the tables for long periods.
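A sketch for watching the bloat and rebuilding (the table and index names
here are hypothetical; REINDEX holds an exclusive lock for its duration):

```sql
-- Watch an index's on-disk size (in 8 kB pages) grow over time...
SELECT relname, relpages FROM pg_class WHERE relname = 'my_table_pkey';

-- ...and rebuild it when it gets out of hand; this locks out writers.
REINDEX INDEX my_table_pkey;
```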

-- 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.Khera Communications, Inc.
Internet: [EMAIL PROTECTED]   Rockville, MD   +1-240-453-8497
AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/
