Re: [GENERAL] Newbie-question

2003-10-29 Thread Andrew Ayers
Victor Spång Arthursson wrote:
 We are presently converting from MySQL to PostgreSQL, and my first 
 newbie question is how to make all the rows in a result from a select 
 just swoosh by? That is, I don't want to see them page by page; just 
 scroll by so I can see the last line with the number of corresponding rows.

You seem to want to see the number of corresponding rows, not the stuff 
swishing by - or at least, I would hope you are more interested in the 
count than in text flying by...

In that case, do a SELECT count(*) with the same WHERE clause as your 
original select...
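For example (table and condition names here are only placeholders):

```sql
-- Count the matching rows instead of printing them all
SELECT count(*) FROM mytable WHERE some_condition;
```

Alternatively, if you really do want the rows to scroll straight past, psql's pager can be turned off with \pset pager off, or the output piped through cat.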

Andrew Ayers
Phoenix, Arizona


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] shared memory on OS X - 7.4beta4

2003-10-29 Thread Tom Lane
John DeSoi [EMAIL PROTECTED] writes:
 What version of OS X are you running? I thought I replicated the 
 problem after upgrading to 10.2.8, but now I'm not certain I ran initdb 
 again (I may have just rebuilt beta 5). So possibly it could be a 
 difference between 10.2.6 and 10.2.8. If not, I have no clue. I'll 
 report back if I find anything.

I'm running 10.2.6 (a pretty fresh install, see prior bellyaching about
hardware problems with my laptop ;-)).  IIRC, someone else reported
success with a clean 10.2.8 installation in this thread.  I have also
checked PG against a 10.3 beta recently, and so have other people.
It's fairly likely that there are problems with 10.1, if anyone still
uses that, but I have no reason to think that PG 7.4 will fail with
either 10.2.* or 10.3.

regards, tom lane

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[GENERAL] Performance question

2003-10-29 Thread Lada 'Ray' Lostak

Dear list,

First of all I want to say sorry if my question was answered somewhere,
or if it is my fault. If so, please give me a link/hint; my own search failed
:( I have experience with MySQL, MSSQL and Oracle (and MS Access, ouch :) I
am new to PgSql.

We are running a server - OpenBSD 2.9, latest Apache, latest PHP, latest
PgSql, latest mod_ssl. No other related packages are used. All programs are
compiled the 'normal' way, including PgSql - except unicode support. There are
no special runtime conditions. The server is an Intel P3 800/Intel MB, 512M RAM,
plenty of HDD, etc. We have a web application, thin client type (Mozilla/IE).
Users connect through HTTPS to Apache, where PHP scripts talk
over TCP/IP to a local Postgres DB (native PgSql support in PHP). PgSql uses
default settings from the source package. The system is huge, many tables, but
nothing special. I hope this is enough info. Postmaster options: -h
127.0.0.1 -i -p 5432, postgresql.conf is 'empty' (no default values
overridden)...

Let's say:

SELECT
id,parent,alias,aliasfull,name,comment,type,typeflags,flags,cluster,viewprior
FROM dtditems WHERE cluster IN (42)

(ohhh, what a complicated example :)

My problem is performance. Sometimes the SQL statement takes ~20 ms,
sometimes (the same one) takes 200 ms, sometimes 2,000 ms (!). I am sure it is
not because of CPU/memory (both plenty available at the moment). The average is
~600 ms. 'dtditems' is a table without _any_ foreign keys/indexes, etc. It has
592 rows. 'cluster' is an integer. Don't tell me to create indexes or foreign
keys. I know they help, but for a table with ~600 rows, where ~15% is
selected, the difference will be small, not 10x.

This SQL statement takes 982 ms; after a few 'refreshes' it takes 604 ms,
after another few 56, and after another few 12,480 ms. I didn't find
any conditions explaining why these times are so different. My experience says that
this kind of SQL and row count can be done on the given CPU in somewhere around
~5-10 ms. The above statement hits 113 rows.

When I run the system connected to another DB, this problem doesn't
arise. I think (90% :) the problem is somewhere in PgSql (maybe related to
the system).

Initially, _ALL_ SQLs took hundreds of ms. I created ONE index (for
testing) and the problem was over. But it was over for ALL tables. It seems PgSql
has some trouble with indexes.

I already tried creating indexes (for the table in the above SQL statement), but
the problem wasn't solved. So, I removed the indexes again.

Please, can anyone give me a hint where I should try to find what causes
the problem? Why is PgSql (or is it a PHP/Apache bug?) so slow? And
mainly, WHY is there a 10x difference between executions of the same SQL
statement? Is there any way to say 100% whether the problem is in PHP or PgSql?

I understand that because of the multitasking OS we can't measure 'exactly'; so
if one request takes 10 ms and the next ones 12, 8, 14, 20, 12, 40, 34, ... that will be fine.
But something like 50, 500, 2000 makes me crazy.

I have attached a 'log' - the part of the PgSql log corresponding to the above SQL.

Feel free to ask any questions about the system/program configuration.

And sorry again if I missed something.

Thank you !

Best regards,
Lada 'Ray' Lostak
Unreal64 Develop group
http://www.orcave.com
http://www.unreal64.net


--
In the 1960s you needed the power of two C64s to get a rocket
to the moon. Now you need a machine which is a vast number
of times more powerful just to run the most popular GUI.



log
Description: Binary data

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


7.3.5 release (was: Re: [GENERAL] SELECT with row >32k hangs over SSL-Connection)

2003-10-29 Thread Jan Wieck
Tom Lane wrote:
Bruno Wolff III [EMAIL PROTECTED] writes:
I think that a 7.3.5 release should be done a bit after 7.4 is released,
but my opinion doesn't count for much.
Yeah, I think we have accumulated enough changes in the 7.3 branch to
justify a 7.3.5, but I'm not sure when we'll get around to it.
On 10/03/2003 Bruce was the only one responding to my question if the 
namespace fix I had for PL/Tcl should be backpatched into 7.3.5. He 
claimed that we'll probably not release any 7.3.X any more and we 
dropped the issue.

Guess the question is open again then.

Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [GENERAL] JDBC DatabaseMetaData problem

2003-10-29 Thread Fernando Nasser
Wrong list.  Please post this to the [EMAIL PROTECTED] list.

I am sure there are people there that can help you with this.

Regards,
Fernando
Aleksey wrote:
Hello,



I have the following problem working with DatabaseMetaData. There is a 
database with table and attribute names in Russian. Database cluster was 
initialized with appropriate ru_RU.KOI8-R locale. All the databases were 
created with KOI8-R encoding. No problems were encountered in accessing 
database table data with JDBC.

Database has foreign key constraints that I try to get with 
DatabaseMetaData methods. Both getTables and getPrimaryKeys work fine, 
all the results have correct encoding and values.

The following fragment of code causes exception:

rs = meta.getImportedKeys(null, null, tableName);
while (rs.next()) {
   String pkTable = rs.getString("PKTABLE_NAME");
   String pkColumn = rs.getString("PKCOLUMN_NAME");  /* here */
   String fkTable = rs.getString("FKTABLE_NAME");
   String fkColumn = rs.getString("FKCOLUMN_NAME"); /* and here */
}

PKTABLE_NAME and FKTABLE_NAME fields are fetched correctly. Both the 
marked lines produce exception with this stack trace:

at org.postgresql.core.Encoding.decodeUTF8(Encoding.java:270)
at org.postgresql.core.Encoding.decode(Encoding.java:165)
at org.postgresql.core.Encoding.decode(Encoding.java:181)
at 
org.postgresql.jdbc1.AbstractJdbc1ResultSet.getString(AbstractJdbc1ResultSet.java:97) 

at 
org.postgresql.jdbc1.AbstractJdbc1ResultSet.getString(AbstractJdbc1ResultSet.java:337) 

The error message is: "Invalid character data was found.  This is most 
likely caused by stored data containing characters that are invalid for 
the character set the database was created in.  The most common example 
of this is storing 8bit data in a SQL_ASCII database.",

but database is not SQL_ASCII (actually KOI8-R) and all the characters 
in column names are taken from this codepage. Other DatabaseMetaData 
methods work with these characters fine.

I tested the same methods with the same database but with tables with 
latin names - everything worked fine, but renaming all the columns will 
cause a huge amount of extra work with database and applications.

I use PostgreSQL-7.3.4 compiled from source, JDBC driver from 
http://jdbc.postgresql.org/download/pg73jdbc3.jar on Linux, J2SDK 1.4.1_02.

I will appreciate any help with this.

Thank you.

Sincerely yours,
Aleksey.
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org



--
Fernando Nasser
Red Hat Canada Ltd. E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [GENERAL] PG_RESTORE/DUMP Question

2003-10-29 Thread Doug McNaught
Alex [EMAIL PROTECTED] writes:

 Hi,
 
 I have a test system that is set up the same as a production system and
 would like to frequently copy the database over.
 pg_dump takes a few hours and even sometimes hangs.
 
 Are there any reasons not to simply copy the entire data
 directory over to the test system? I could not find any postings on
 the net suggesting otherwise. Is there anything to pay attention to?

If the two systems are the same architecture and OS, this can work,
but in order to get a consistent copy, you need to either:

a) Stop (completely shut down) the source database while the copy
runs, or
b) Use volume management and take a snapshot of the source database,
then copy the snapshot over.  This will lose open transactions but
will be otherwise consistent.
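A rough sketch of option (a), assuming a stock layout with the data directory at /usr/local/pgsql/data on both machines and rsync available; paths and the hostname are only placeholders:

```shell
# Stop the source cluster completely so the on-disk files are consistent
pg_ctl -D /usr/local/pgsql/data stop

# Copy the entire data directory to the test machine
rsync -a /usr/local/pgsql/data/ test-host:/usr/local/pgsql/data/

# Restart the source (then start the postmaster on the test machine)
pg_ctl -D /usr/local/pgsql/data start
```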

-Doug

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] PG_RESTORE/DUMP Question

2003-10-29 Thread Alex
Hi,

I have a test system that is set up the same as a production system and 
would like to frequently copy the database over.
pg_dump takes a few hours and even sometimes hangs.

Are there any reasons not to simply copy the entire data directory 
over to the test system? I could not find any postings on the net 
suggesting otherwise. Is there anything to pay attention to?

Thanks for any advice
Alex


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Performance question

2003-10-29 Thread Martijn van Oosterhout
Firstly, have you run ANALYZE across the database?

Secondly, the queries that are varying so much, can you post the EXPLAIN
ANALYZE output so we can see what is actually going on.

Note also that the query log can be very helpful in finding out if the delay
is in the database or not.
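For example, against the table from the original post (column list shortened here for readability):

```sql
-- Refresh the planner's statistics for the whole database
ANALYZE;

-- Then look at actual runtimes, not just the planner's estimates
EXPLAIN ANALYZE
SELECT id, parent, alias
FROM dtditems
WHERE cluster IN (42);
```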

Hope this helps,

On Wed, Oct 29, 2003 at 02:28:51PM +0100, Lada 'Ray' Lostak wrote:
 
 Dear list,
 
 First of all I want to say sorry if my question was answered somewhere,
 or if it is my fault. If so, please give me a link/hint; my own search failed
 :( I have experience with MySQL, MSSQL and Oracle (and MS Access, ouch :) I
 am new to PgSql.
 

-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 All that is needed for the forces of evil to triumph is for enough good
 men to do nothing. - Edmund Burke
 The penalty good people pay for not being interested in politics is to be
 governed by people worse than themselves. - Plato




[GENERAL] Error size varchar

2003-10-29 Thread Edwin Quijada




Hi!!
I get an error about the length of a varchar field. I have a table with a field of type 
varchar(20), but if I try to set this field to more than 20 characters I get an 
error.
I wrote a function to control the length of the data and put it in a trigger, but 
when the error occurs I get it anyway and the trigger does not fire.
Is this error checked before trigger execution??

This is the error

ERROR:  value too long for type character varying(30)

*---*
*-Edwin Quijada
*-Developer DataBase
*-JQ Microsistemas
*-809-747-2787
*  Si deseas lograr cosas excepcionales debes de hacer cosas fuera de lo 
comun
*---*


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


[GENERAL] N Texas PostgreSQL Consultant Needed

2003-10-29 Thread Stuart Johnston
I am looking for an experienced DBA in the North Texas area who would be 
available for a brief (probably about a day) consultation on database 
design and optimization in PostgreSQL.

Please contact me if interested,

Stuart Johnston
[EMAIL PROTECTED]
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [GENERAL] PG_RESTORE/DUMP Question

2003-10-29 Thread Shridhar Daithankar
Alex wrote:

Hi,

I have a test system that is set up the same as a production system and 
would like to frequently copy the database over.
pg_dump takes a few hours and even sometimes hangs.

Are there any reasons not to simply copy the entire data directory 
over to the test system? I could not find any postings on the net 
suggesting otherwise. Is there anything to pay attention to?
Yes. Just shut down the production postmaster and copy the entire data directory over 
to the test system.

The two systems should be absolutely identical. Same architecture, preferably same 
OS, same postgresql client and server version, etc.

Or investigate some of the asynchronous replication systems. That would save you 
some time but will affect production performance a bit.

HTH

 Shridhar

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: 7.3.5 release (was: Re: [GENERAL] SELECT with row >32k hangs over SSL-Connection)

2003-10-29 Thread Tom Lane
Jan Wieck [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 Yeah, I think we have accumulated enough changes in the 7.3 branch to
 justify a 7.3.5, but I'm not sure when we'll get around to it.

 On 10/03/2003 Bruce was the only one responding to my question if the 
 namespace fix I had for PL/Tcl should be backpatched into 7.3.5. He 
 claimed that we'll probably not release any 7.3.X any more and we 
 dropped the issue.
 Guess the question is open again then.

I'm on the fence right now, but one or two more fixes in the 7.3 branch
will be enough to make me feel we should put out 7.3.5.  If you are
confident of that namespace fix, then I'd say by all means commit it
into the 7.3 branch so it will be there when 7.3.5 happens.

Attached are the current CVS log entries for post-7.3.4 changes in
REL7_3_STABLE.  What do you think, is it time yet?

regards, tom lane

2003-10-20 16:01  tgl

* src/backend/rewrite/: rewriteManip.c (REL7_3_STABLE),
rewriteManip.c: It is possible for ResolveNew to be used to insert
a sublink into a subquery that didn't previously have one.  We have
traditionally made the caller of ResolveNew responsible for
updating the hasSubLinks flag of the outermost query, but this
fails to account for hasSubLinks in subqueries.  Fix ResolveNew to
handle this.  We might later want to change the calling convention
of ResolveNew so that it can fix the outer query too, simplifying
callers.  But I went with the localized fix for now.  Per bug
report from J Smith, 20-Oct-03.

2003-10-02 18:25  tgl

* src/backend/utils/adt/ruleutils.c (REL7_3_STABLE): When dumping
CREATE INDEX, must show opclass name if the opclass isn't in the
schema search path.  Otherwise pg_dump doesn't correctly dump
scenarios where a custom opclass is created in 'public' and then
used by indexes in other schemas.

2003-09-29 14:53  momjian

* src/bin/scripts/clusterdb (REL7_3_STABLE): 
[ Patch applied only to 7.3.X.]

Hi There's a bug in the clusterdb script where it looks like the
arguments to the psql command are being passed in the wrong order,
so it fails when you run it on a database that is not on localhost.
 Here's the output from the command:

133 anands-Computer:bin/scripts clusterdb -h wooster -U rr granada
psql: warning: extra option wooster ignored psql: warning: extra
option -U ignored psql: warning: extra option rr ignored psql:
warning: extra option -F: ignored psql: warning: extra option -P
ignored psql: warning: extra option format=unaligned ignored psql:
warning: extra option -t ignored psql: warning: extra option -c
ignored psql: warning: extra option SELECT nspname,
pg_class.relname, pg_class_2.relname FROM pg_class, pg_class AS
pg_class_2 JOIN pg_namespace ON (pg_namespace.oid=relnamespace),
pg_index WHERE pg_class.oid=pg_index.indrelid AND
pg_class_2.oid=pg_index.indexrelid AND pg_index.indisclustered AND
pg_class.relowner=(SELECT usesysid FROM pg_user WHERE
usename=current_user) ignored psql: FATAL:  user -h does not
exist

I'm attaching a patch that fixes the problem. The diff was run on
postgresql 7.3.4

Thanks a lot.  Anand Ranganathan

2003-09-28 13:46  wieck

* src/bin/pg_dump/pg_dump.c (REL7_3_STABLE): Backpatched changes
for rules when casts are dumped according to discussion on hackers.

Jan

2003-09-23 11:11  tgl

* src/backend/executor/spi.c (REL7_3_STABLE): _SPI_cursor_operation
forgot to check for failure return from _SPI_begin_call.  Per gripe
from Tomasz Myrta.

2003-09-17 14:40  tgl

* src/pl/plpython/plpython.c (REL7_3_STABLE): Back-patch fix for
plpython problems with dropped table columns; per bug report from
Arthur Ward, who also tested this patch.

2003-09-03 15:01  tgl

* src/backend/utils/adt/formatting.c (REL7_3_STABLE): Back-patch
the other part of Karel's formatting bug fix.

2003-09-03 11:00  tgl

* src/backend/utils/adt/formatting.c (REL7_3_STABLE): Repair
problems with to_char() overrunning its input string.  From Karel
Zak.

2003-08-24 17:26  petere

* src/bin/psql/po/de.po (REL7_3_STABLE): Fix translation mistake.

2003-08-24 01:13  ishii

* src/backend/utils/mb/Unicode/gb18030_to_utf8.map (REL7_3_STABLE):
Fix GB18030 to UTF-8 mapping table

2003-08-24 01:00  ishii

* src/backend/utils/mb/Unicode/UCS_to_GB18030.pl (REL7_3_STABLE):
Fix bug in GB18030 conversion script

2003-08-22 17:57  tgl

* src/interfaces/libpq/fe-secure.c (REL7_3_STABLE): Sigh, I'm an
idiot ... SSL_ERROR_WANT_READ isn't an error condition at all, it
just means 'no data available 

Re: [GENERAL] Performance question

2003-10-29 Thread Lada 'Ray' Lostak

 Firstly, have you run ANALYZE across the database?
Not ANALYZE, only EXPLAIN. And it looks pretty normal :( There is basically
nothing interesting in such a simple SQL.

 Secondly, the queries that are varying so much, can you post the EXPLAIN
 ANALYZE output so we can see what is actually going on.
Thx, ANALYZE was a good idea. Here it comes - right now, there is an index on
'cluster' (BTREE) @ dtditems. But it was not used - I guess because reading
& searching will cost more than a pure seq scan on 'few' items...

EXPLAIN SELECT
id,parent,alias,aliasfull,name,comment,type,typeflags,flags,cluster,viewprior
FROM dtditems WHERE cluster IN (42)

QUERY PLAN
 Seq Scan on dtditems  (cost=0.00..253.40 rows=150 width=84)
   Filter: (cluster = 42)

QUERY PLAN
 Seq Scan on dtditems  (cost=0.00..253.40 rows=150 width=84)
   (actual time=440.10..472.00 rows=113 loops=1)
   Filter: (cluster = 42)
 Total runtime: 519.86 msec

Current 'top' output (while running test)

load averages:  1.31,  1.11,  0.78
50 processes:  1 running, 47 idle, 2 stopped
CPU states:  1.4% user,  0.0% nice,  2.2% system,  0.2% interrupt, 96.3%
idle

There also more than 200M free memory.

Just to compare - the same SQL executed by the MS Access database (ouch, it hurts
to type that name !) takes ~12 ms. MySQL also takes a similar time...

There are no differences between variations on WHERE - like using = instead
of IN, etc.

 Note also that the query log can be very helpful in finding out if the
 delay
 is in the database or not.
I added the log from PgSql to the email (hope it arrived well last time, coz I am
sitting @ [EMAIL PROTECTED] :) and there you can see that it really takes 500 ms
to select 100 records from a ~500 row table... Let me know if the log was
damaged. But the times correspond.

What can I do (or where is some document regarding this topic ?) to speed up
PgSql ? I really think half a second for selecting ~100 rows from a ~600 row
table is pretty slow. A Commodore 64 (1 MHz 6510) would do it faster :)

Any hints ?
Thanks,
Best regards,
Lada 'Ray' Lostak
Unreal64 Develop group
http://www.orcave.com
http://www.unreal64.net


--
In the 1960s you needed the power of two C64s to get a rocket
to the moon. Now you need a machine which is a vast number
of times more powerful just to run the most popular GUI.



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[GENERAL] backup another server

2003-10-29 Thread Edwin Quijada


Hi!!
I wanna take a whole database running on box1 (pg 7.3.4) and move it to 
another box2 with 7.3.4 too.
Is there a fast way to do that??
Which??



*---*
*-Edwin Quijada
*-Developer DataBase
*-JQ Microsistemas
*-809-747-2787
*  Si deseas lograr cosas excepcionales debes de hacer cosas fuera de lo 
comun
*---*


---(end of broadcast)---
TIP 8: explain analyze is your friend


[GENERAL] Trigger delete

2003-10-29 Thread Edwin Quijada
What is it that I have to return in a delete trigger??
Return OLD or NEW?

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [GENERAL] backup another server

2003-10-29 Thread scott.marlowe
On Wed, 29 Oct 2003, Edwin Quijada wrote:

 
 
 
 Hi!!
 I wanna take a whole database running on box1 (pg 7.3.4) and move it to 
 another box2 with 7.3.4 too.
 Is there a fast way to do that??
 Which??

The way I do it is:

pg_dumpall -h source_machine |psql -h dest_machine




---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[GENERAL] dump schema schema only?

2003-10-29 Thread Patrick Hatcher
I have a development server where I, well, do my development. Occasionally,
I will create a new schema within an existing database that I would like to
use on my production machine.   I know that doing a pg_dump -s database
> somefile.sql will dump the entire schema of the database, but is there a
way to export only schema X from the database?

TIA
Patrick Hatcher



---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] backup another server

2003-10-29 Thread Ron Jr
On Wed, 2003-10-29 at 11:46, Edwin Quijada wrote:
 
 Hi!!
 I wanna take a whole database running on box1 (pg 7.3.4) and move it to 
 another box2 with 7.3.4 too.
 Is there a fast way to do that??
 Which??

oldserver:~ pg_dumpall > foo.dmp ; scp foo.dmp newserver:.
oldserver:~ ssh newserver
newserver:~ psql -f foo.dmp template1

-- 
-
Ron Johnson, Jr. [EMAIL PROTECTED]
Jefferson, LA USA

LUKE: Is Perl better than Python?
YODA: No... no... no. Quicker, easier, more seductive.
LUKE: But how will I know why Python is better than Perl?
YODA: You will know. When your code you try to read six months
from now.


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] dump schema schema only?

2003-10-29 Thread Tom Lane
Patrick Hatcher [EMAIL PROTECTED] writes:
 I have a development server where I, well, do my development. Occasionally,
 I will create a new schema within an existing database that I would like to
 use on my production machine.   I know that doing a pg_dump -s database
 > somefile.sql will dump the entire schema of the database, but is there a
 way to export only schema X from the database?

7.4's pg_dump has an option to dump the contents of just one schema.
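For example (database and schema names here are placeholders; -n is the single-schema switch in 7.4's pg_dump):

```shell
# Dump only the definitions (-s) of one schema (-n) of the database
pg_dump -s -n myschema mydatabase > myschema.sql
```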

regards, tom lane

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: 7.3.5 release (was: Re: [GENERAL] SELECT with row >32k hangs over

2003-10-29 Thread Marc G. Fournier

I'd say yes based on the SSL and pg_dump fixes that were back patched ...


On Wed, 29 Oct 2003, Tom Lane wrote:

 Jan Wieck [EMAIL PROTECTED] writes:
  Tom Lane wrote:
  Yeah, I think we have accumulated enough changes in the 7.3 branch to
  justify a 7.3.5, but I'm not sure when we'll get around to it.

  On 10/03/2003 Bruce was the only one responding to my question if the
  namespace fix I had for PL/Tcl should be backpatched into 7.3.5. He
  claimed that we'll probably not release any 7.3.X any more and we
  dropped the issue.
  Guess the question is open again then.

 I'm on the fence right now, but one or two more fixes in the 7.3 branch
 will be enough to make me feel we should put out 7.3.5.  If you are
 confident of that namespace fix, then I'd say by all means commit it
 into the 7.3 branch so it will be there when 7.3.5 happens.

 Attached are the current CVS log entries for post-7.3.4 changes in
 REL7_3_STABLE.  What do you think, is it time yet?

   regards, tom lane


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Error size varchar

2003-10-29 Thread Edwin Quijada
What is an unconstrained varchar???



*---*
*-Edwin Quijada
*-Developer DataBase
*-JQ Microsistemas
*-809-747-2787
*  Si deseas lograr cosas excepcionales debes de hacer cosas fuera de lo 
comun
*---*





From: Tom Lane [EMAIL PROTECTED]
To: Edwin Quijada [EMAIL PROTECTED]
CC: [EMAIL PROTECTED]
Subject: Re: [GENERAL] Error size varchar Date: Wed, 29 Oct 2003 11:28:23 
-0500

Edwin Quijada [EMAIL PROTECTED] writes:
 I get an error about the length of a varchar field. I have a table with a
 field of type varchar(20), but if I try to set this field to more than 20
 characters I get an error.
 I wrote a function to control the length of the data and put it in a
 trigger, but when the error occurs I get it anyway and the trigger does
 not fire.

The length constraint is checked before triggers are fired, I believe.
If you want silent truncation rather than an error, use a text column
(or unconstrained varchar) and put the truncation behavior into your
trigger.
			regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [GENERAL] Error size varchar

2003-10-29 Thread Adam Kavan
At 05:06 PM 10/29/03 +, Edwin Quijada wrote:

What is an unconstrained varchar???


Define the column as just varchar.  This allows a string of any 
length.  Then have a trigger truncate it after it is inserted.
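A sketch of such a trigger in PL/pgSQL - the table mytable and its name column are invented for illustration, the column is assumed to be plain (unconstrained) varchar, and the 20-character limit is just an example:

```sql
CREATE FUNCTION truncate_name() RETURNS trigger AS '
BEGIN
    -- Silently cut the value down instead of letting a varchar(n)
    -- length check raise an error
    NEW.name := substring(NEW.name FROM 1 FOR 20);
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER truncate_name_trig
    BEFORE INSERT OR UPDATE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE truncate_name();
```

Note this is a BEFORE trigger, so the value is already truncated by the time the row is stored.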

--- Adam Kavan
--- [EMAIL PROTECTED]


---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [GENERAL] dump schema schema only?

2003-10-29 Thread Rick Seeger

You can use -t to specify a table:

% pg_dump -s database -t tablename


-Rick



-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Patrick
Hatcher
Sent: Wednesday, October 29, 2003 12:57 PM
To: [EMAIL PROTECTED]
Subject: [GENERAL] dump schema schema only?


I have a development server where I, well, do my development.
Occasionally,
I will create a new schema within an existing database that I would
like to
use on my production machine.   I know that doing a pg_dump -s
database
> somefile.sql will dump the entire schema of the database, but is
there a
way to export only schema X from the database?

TIA
Patrick Hatcher



---(end of
broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to
[EMAIL PROTECTED])




---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Error size varchar

2003-10-29 Thread Tom Lane
Edwin Quijada [EMAIL PROTECTED] writes:
 I get an error about the length of a varchar field. I have a table with a field of type 
 varchar(20), but if I try to set this field to more than 20 characters I get an 
 error.
 I wrote a function to control the length of the data and put it in a trigger, but 
 when the error occurs I get it anyway and the trigger does not fire.

The length constraint is checked before triggers are fired, I believe.
If you want silent truncation rather than an error, use a text column
(or unconstrained varchar) and put the truncation behavior into your
trigger.

regards, tom lane

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


[GENERAL] Database design question: ugliness or referential integrity?

2003-10-29 Thread Paulo Jan
Hi all:

	Let's say I'm designing a database (Postgres 7.3) with a list of all 
email accounts in a certain server:

CREATE TABLE emails (
clienteid INT4,
direccion VARCHAR(512) PRIMARY KEY,
login varchar(128) NOT NULL,
password VARCHAR(128),
dominio VARCHAR(256)
);
	The PHBs want to have a log of when an email account was added, which 
technician did it, when it was deleted, when we had to reset its 
password, etc.:

CREATE TABLE emails_log (
direccion varchar(512) references emails,
fecha date,
autor varchar(32),
texto varchar(1024)
);
	texto would be a free form text field explaining what has been done.
	Now, let's suppose that an email account is deleted, and six months 
later another user requests it and we add it again. Do we want to keep 
an audit trail for the old version of that account? The PHBs say yes. 
Which means that we can't use the email address as primary key. Fine, we 
add an ID column to the emails table and make it the primary key, 
and point the foreign key in emails_log to that column. But now we 
have two options, and here is my question:

	-In emails, the direccion column needs to be unique... but only for 
the active email addresses (there can be 5, 10, or 20 dead addresses 
called [EMAIL PROTECTED], but only one alive at the moment). We could 
add an active boolean column to emails, and write a custom 
constraint to check this condition, but I find it ugly (and I saw 
similar objections when another user came up with a similar problem some 
time ago)...
	-...Or we could create a table called dead_emails, and add to it the 
email addresses that we delete (using an ON DELETE trigger, perhaps). 
Basically, store the deleted email accounts in another table... but then 
we lose the referential integrity check in emails_log.

	The question is: what would you do? (I don't really like the idea of 
creating yet another dead_emails_log table pointing to dead_emails; 
I find it almost as ugly as the first one).
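For what it's worth, the first option need not require a hand-written custom constraint. Assuming the "active" boolean column described above, a partial unique index (which PostgreSQL supports) enforces uniqueness over the live rows only:

```sql
-- Only rows with active = true take part in the uniqueness check, so
-- any number of dead copies of an address can coexist with one live one.
CREATE UNIQUE INDEX emails_direccion_active_key
    ON emails (direccion)
    WHERE active;
```

The index name here is invented; the emails_log foreign key can then keep pointing at the id primary key untouched.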



Paulo Jan.
DDnet.


---(end of broadcast)---
TIP 8: explain analyze is your friend


[GENERAL] Automatic auditing suggestion

2003-10-29 Thread Scott Chapman
In my further discussion with Andrew offline, we came up with a joint 
suggestion to have PostgreSQL do automatic auditing.  This would be VERY 
NICE, imho.  Any input?

Scott wrote:
 It seems like it would be nice if you could flip a toggle on a 
 table and have it automatically build audit entries in another table.

Andrew replied:
 Yeah - that would be a great feature - automatic auditing...
 Maybe you should post that to someone (whoever it would be?) at 
 PostgreSQL - sure, there would be major performance hit problems (maybe 
 rather than at table level, field/column level would be better), but it 
 would be a boon for many...
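Until something like that exists in the backend, the idea can be hand-rolled today with a per-table trigger. A sketch (all table and column names are invented, and the audited table is assumed to have an integer id column):

```sql
-- One audit table per audited table
CREATE TABLE mytable_audit (
    changed_at  TIMESTAMP DEFAULT now(),
    changed_by  NAME      DEFAULT current_user,
    operation   TEXT,
    row_id      INTEGER
);

CREATE FUNCTION audit_mytable() RETURNS trigger AS '
BEGIN
    -- TG_OP tells the function which statement fired it
    IF TG_OP = ''DELETE'' THEN
        INSERT INTO mytable_audit (operation, row_id)
            VALUES (TG_OP, OLD.id);
        RETURN OLD;
    END IF;
    INSERT INTO mytable_audit (operation, row_id)
        VALUES (TG_OP, NEW.id);
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER audit_mytable_trig
    AFTER INSERT OR UPDATE OR DELETE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE audit_mytable();
```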


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])