Re: [GENERAL] Log File Maintainance

2005-04-13 Thread Richard Huxton
Neil Dugan wrote:
On Tue, 2005-04-12 at 08:51 +0100, Richard Huxton wrote:
Inpreet Singh wrote:
syslog = 2  # range 0-2
syslog_facility = 'LOCAL0'
syslog_ident = 'postgres'
You also need to update your syslog.conf and restart syslogd
Add a line like:
local0.* /var/log/pgsql

Saw this post and tried to set up the log info to be sent to a different
file '/var/log/pgsql'.  There was no trouble with this.
I then thought I would use a more logical name than 'LOCAL0' for the
facility name.  However, no matter what name I tried,
'/etc/init.d/postgresql' wouldn't restart the 'postmaster' program.
The local0/1/etc. facilities are for custom applications logging to
syslog. You can see a list of valid facilities with man syslog.conf.

HTH
--
  Richard Huxton
  Archonet Ltd


Re: [GENERAL] psql vs perl prepared inserts

2005-04-13 Thread Dawid Kuroczko
On 4/12/05, Matt Van Mater [EMAIL PROTECTED] wrote:
 I've been experimenting with loading a large amount of data into a
 fairly simple database using both psql and perl prepared statements.
 Unfortunately I'm seeing no appreciable difference between the two
 methods, whereas I was under the impression that prepared statements
 should be much faster (in my case, they are slightly slower).

I've been playing with a similar issue, and in my case the best solution
for bulk inserts was using perl to format the data in a form suitable for the
COPY command.

I believe there may be problems if you have RULEs on the table you want
to copy data into (IIRC such RULEs will be ignored).

For a test you might want to try also this approach (both from perl and
from psql):

$dbh->do('PREPARE sth_tim (int,inet,boolean,timestamptz) AS INSERT
INTO timestamps VALUES ($1,$2,$3,$4)');
$sth_tim = $dbh->prepare("EXECUTE sth_tim(?,?,?,?)");

...and later execute it (and likewise with psql).  If you see a gain in speed
with perl, it means your DBD::Pg wasn't using server-side prepared
statements.  (And there is a good chance that your psql will outperform
perl using this syntax.)

   Regards,
  Dawid

PS: I have not tested these statements, but they should be OK; make
sure the argument list (int,inet,boolean...) is correct for your data.
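
For reference, the psql side of the same test might look like the sketch
below; the table and argument types follow the Perl example above, and the
literal values are only placeholders.

PREPARE sth_tim (int, inet, boolean, timestamptz) AS
    INSERT INTO timestamps VALUES ($1, $2, $3, $4);

EXECUTE sth_tim(1, '10.0.0.1', true, now());
EXECUTE sth_tim(2, '10.0.0.2', false, now());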



Re: [GENERAL] psql vs perl prepared inserts

2005-04-13 Thread Sean Davis
On Apr 13, 2005, at 4:12 AM, Dawid Kuroczko wrote:
On 4/12/05, Matt Van Mater [EMAIL PROTECTED] wrote:
I've been experimenting with loading a large amount of data into a
fairly simple database using both psql and perl prepared statements.
Unfortunately I'm seeing no appreciable difference between the two
methods, whereas I was under the impression that prepared statements
should be much faster (in my case, they are slightly slower).
I've been playing with a similar issue, and in my case the best solution
for bulk inserts was using perl to format the data in a form suitable for the
COPY command.

I second this approach.  Generally, getting the data into the database 
can be done VERY quickly (for the 18k rows you have, it would likely be 
instantaneous to copy them).  I often create a separate loader schema 
into which I load text files.  Then, I can use SQL, triggers, or 
functions to clean up the data, enforce referential integrity, etc. 
within the database.  If you have perl code to do this, you can 
probably modify it just slightly to be used in a pl/perl function to do 
the same thing as before, but now it is done on the server side and 
will probably be significantly faster.

Sean
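
A minimal sketch of the loader-schema pattern Sean describes, with
hypothetical table and column names:

CREATE SCHEMA loader;
CREATE TABLE loader.raw_events (ip text, port text, seen text);

-- bulk-load the text file with no constraints in the way
COPY loader.raw_events FROM '/tmp/events.txt';

-- then clean, cast, and enforce constraints on the way into the real table
INSERT INTO events (ip, port, seen)
SELECT ip::inet, port::int, seen::timestamptz
FROM loader.raw_events;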


Re: [GENERAL] What are the consequences of a bad database design (never seen that before !)

2005-04-13 Thread Robert Treat
On Wednesday 13 April 2005 01:21, Jinane Haddad wrote:
 Thanks guys for the advice.

 I think I will have to find some POLITICAL approach in order to
 restructure the existing database, which I am not so good at, but it is
 worth trying. Note that even the code is bad (they are using PHP for a big
 application - no object-oriented design - a lot of code redundancy ...).

 However, it seems difficult to fix the database bit by bit because, as far as
 I have seen, one or more primary table(s) are missing !! So instead of
 using an ID, 3-4 fields are being rewritten in almost every table ! So if I
 have to build the primary tables, I have to change all the other tables,
 replacing the combined fields with the corresponding ID ... and there are
 many other modifications which could lead to eventual code modification
 even if I change the Views in order to mask the changes. (Thank God they
 are using Views !)

 Anyway, it seems I have a major modification that will need time, and they
 are giving me time for adding modules, not time for fixing the existing code.

 So basically what is happening is due to the bad database and code design:
 writing a simple task is becoming difficult and requires a minimum of 4 times
 more time than in the case of a good design.
 So development time is wasted, and data corrections are being done almost
 every day by the staff here ...


Remember that the goal is not to fix everything *now*... but to fix it bit by 
bit.  The first time you need to access those 3-4 fields in any new module, 
rather than adding them into new tables, rework the schema to be 
normalized... even if you can't pull those 3-4 fields out of every table, pull 
them out of a core few tables and use your new key in your new tables so that 
you start down the path to a better schema.  

But be careful how you approach things... having a 3-4 field primary key in 10 
different tables is perfectly fine within the relational model... in fact 
some purists would even argue for something like that rather than creating a 
surrogate key... so just because they have done that doesn't mean that they 
are wrong, even if your way is better.

-- 
Robert Treat
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL
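
As a sketch of the first normalization step Robert describes (all names here
are hypothetical): pull the repeated fields into one core table, and have new
tables reference its key.

-- core table holding the 3-4 repeated fields exactly once
CREATE TABLE customer (
    customer_id serial PRIMARY KEY,
    name        text NOT NULL,
    address     text NOT NULL,
    city        text NOT NULL,
    UNIQUE (name, address, city)
);

-- new modules reference the key instead of repeating the fields
CREATE TABLE new_orders (
    order_id    serial PRIMARY KEY,
    customer_id integer NOT NULL REFERENCES customer
);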



Re: [GENERAL] psql vs perl prepared inserts

2005-04-13 Thread Neil Conway
Dawid Kuroczko wrote:
For a test you might want to try also this approach (both from perl and
from psql):
$dbh->do('PREPARE sth_tim (int,inet,boolean,timestamptz) AS INSERT
INTO timestamps VALUES ($1,$2,$3,$4)');
$sth_tim = $dbh->prepare("EXECUTE sth_tim(?,?,?,?)");
...and later execute it (and likewise with psql).  If you see a gain in speed
with perl, it means your DBD::Pg wasn't using server-side prepared
statements.
The intent of prepared statements is to reduce the overhead of running 
the parser, rewriter and planner multiple times for a statement that is 
executed multiple times. For an INSERT query without any sub-selects 
that is not rewritten by any rules, the cost to parse, rewrite and plan 
the statement is trivial. So I wouldn't expect prepared statements to be 
a big win -- you would gain a lot more from batching multiple inserts 
into a single transaction, and more still from using COPY.

-Neil
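
Concretely, the progression Neil describes looks like the sketch below (the
timestamps table is borrowed from Dawid's example earlier in the thread; the
values are placeholders):

-- one transaction per batch instead of one per row
BEGIN;
INSERT INTO timestamps VALUES (1, '10.0.0.1', true, now());
INSERT INTO timestamps VALUES (2, '10.0.0.2', false, now());
-- ... many more rows ...
COMMIT;

-- faster still: a single COPY for the whole batch
-- (columns are tab-separated; the data ends with a backslash-period line)
COPY timestamps FROM STDIN;
1	10.0.0.1	t	2005-04-13 10:00:00+00
2	10.0.0.2	f	2005-04-13 10:01:00+00
\.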


Re: [GENERAL] psql vs perl prepared inserts

2005-04-13 Thread Matt Van Mater
Thanks to all who replied.  Thanks for the tip on that last thread,
Tom; I don't know how I missed it.  I have a hunch that it's not
applicable to me at this time because I'm running year-and-a-half-old
software (included in OpenBSD 3.4), but I will have to check which
version of DBD::Pg was installed.

 The intent of prepared statements is to reduce the overhead of running
 the parser, rewriter and planner multiple times for a statement that is
 executed multiple times. For an INSERT query without any sub-selects
 that is not rewritten by any rules, the cost to parse, rewrite and plan
 the statement is trivial. So I wouldn't expect prepared statements to be
 a big win -- you would gain a lot more from batching multiple inserts
 into a single transaction, and more still from using COPY.

I was thinking something along the same lines, and was considering
using the COPY statement as my next step, but as someone mentioned,
I would then have to load the data into a temporary table and do some more
internal magic to preserve referential integrity, unique constraints,
etc.  For that reason I was hoping to keep it in perl, and it's always
nice to keep everything in a single neat portable package.

Also, I forgot to mention earlier that I tried using transactions to
speed things up, but since I expect to see certain inserts fail I
would need to rework my code so the whole transaction doesn't fail if
one insert goes bad.  This is somewhat contrary to the purpose of
transactions so I'm not sure how to accomplish this.  I saw roughly a
20% speed improvement by turning autocommit off and only committing at
the end of parsing each file.

I think in the end I need to check the version of my Pg driver and
perhaps upgrade to 7.4 or 8.0 in order to take advantage of the server
side prepared statements.  This is only a development box and I'm
doing this mostly as an academic exercise that will someday help me
speed up the production side, so upgrading isn't out of the question.

Matt



Re: [GENERAL] psql vs perl prepared inserts

2005-04-13 Thread Bruno Wolff III
On Wed, Apr 13, 2005 at 09:57:09 -0400,
  Matt Van Mater [EMAIL PROTECTED] wrote:
 
 Also, I forgot to mention earlier that I tried using transactions to
 speed things up, but since I expect to see certain inserts fail I
 would need to rework my code so the whole transaction doesn't fail if
 one insert goes bad.  This is somewhat contrary to the purpose of
 transactions so I'm not sure how to accomplish this.  I saw roughly a
 20% speed improvement by turning autocommit off and only committing at
 the end of parsing each file.

You might get a small benefit using savepoints in version 8.
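
Sketched in psql terms (the table name is hypothetical), the idea is to wrap
each risky insert in a savepoint so that one failure doesn't abort the whole
transaction:

BEGIN;

SAVEPOINT row1;
INSERT INTO events VALUES (1, 'first row');
RELEASE SAVEPOINT row1;

SAVEPOINT row2;
INSERT INTO events VALUES (1, 'fails on a duplicate key');
ROLLBACK TO SAVEPOINT row2;  -- discards only this row

COMMIT;  -- the successful rows are kept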



Re: [GENERAL] Composite type versus Domain constraints.

2005-04-13 Thread James Robinson
On Apr 12, 2005, at 4:48 PM, Tom Lane wrote:
James Robinson [EMAIL PROTECTED] writes:
insert into simple_table values (null, '(43)'); -- GRR works!!! It'll
let any smallint in. What happened to the constraint?
The composite-type input routine doesn't check any constraints ...
and that includes domains.  You can make it work if you don't use
a composite literal:
regression=# insert into simple_table values (null, row(43));
ERROR:  value for domain simple violates check constraint limits
Thank you for the great info. If I may, here's another question. I am in
need of new scalar types, essentially domain'd smallints, hence
why my composite type had but one composite member. Domain'd
smallints would be great, but it seems when they get returned in a
result set to the client, they come shipped with the oid of smallint
(21 on my box), not the oid of the domain. I'm experimenting with
a client driver (Python's psycopg) which allows you to register
handlers for arbitrary oids -- but if the result set's metadata
contains the oid for smallint, this does not bode well: if I register
for the domain's oid, it never gets returned to me in a select, and
if I register for int2's oid, hilarity ensues.
Is there an easy path to creating (many) scalar types which piggyback
on int2's functions? Naive experimentation fails:
social=# create type MyType (
    INTERNALLENGTH = 2,
    INPUT = int2in,
    OUTPUT = int2out
);
ERROR:  function int2out(mytype) does not exist
Which is reasonable.
In short, I need a domain'd smallint with a different type oid returned 
from selects. Is there a short path? The composite type solution works 
at a purely SQL level, although something feels not quite right.


James Robinson
Socialserve.com
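
For readers joining the thread here, the definitions under discussion were
presumably along these lines (a reconstruction: the domain and constraint
names follow the error message quoted above, but the actual range in the
CHECK is invented):

CREATE DOMAIN simple AS smallint
    CONSTRAINT limits CHECK (VALUE BETWEEN 0 AND 9);

CREATE TYPE simple_composite AS (val simple);

CREATE TABLE simple_table (id integer, s simple_composite);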


Re: [GENERAL] psql vs perl prepared inserts

2005-04-13 Thread Daniel Verite
Neil Conway wrote:

 For an INSERT query without any sub-selects 
 that is not rewritten by any rules, the cost to parse, rewrite and plan 
 the statement is trivial. So I wouldn't expect prepared statements to be 
 a big win -- you would gain a lot more from batching multiple inserts 
 into a single transaction, and more still from using COPY.

FWIW, when testing pgstream [1] I typically see a 50% increase in execution
speed when switching to prepared statements in such a scenario.
I'm attaching a small test program that inserts 10,000 rows into 5 columns,
first without and then with prepared statements, and displays the elapsed time.

Example of results:
  elapsed time in loop 0 is 1873 ms (PQexec)
  elapsed time in loop 1 is 1136 ms (PQexecPrepared)
That's with Unix-domain sockets and an 8.0.1 server.

[1] a thin C++ layer on top of libpq (http://manitou-mail.org/pgstream) that
happens to have a unified API for prepared/non-prepared statements.

-- 
 Daniel
 PostgreSQL-powered mail user agent and storage: http://www.manitou-mail.org

#include "pgstream.h"
#include <iostream>
#include <stdlib.h>
#include <stdio.h>
#include <sys/time.h>

int main(int argc, char** argv)
{
  const int loops=10000;
  pg_cnx cnx;
  char buf[]="abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";
  try {
    struct timeval tv1, tv2;
    if (argc < 2) {
      std::cerr << "Usage: " << argv[0] << " connection_string\n";
      exit(1);
    }
    cnx.connect(argv[1]);

    for (int j=0; j<2; j++) {
      cnx.set_option("bind_variables", j>0);
      cnx.set_option("prepare_statements", j>0);
      {
        pg_trans trans(cnx);
        pg_stmt("CREATE TABLE txt1(pk1 int,t1 text,t2 text,"
                "t3 text, t4 text)", cnx);
        gettimeofday(&tv1,NULL);
        pg_stream s1("INSERT INTO txt1(pk1,t1,t2,t3,t4) "
                     "VALUES(:i,:p1,:p2,:p3,:p4)", cnx);
        for (int i=0; i<loops; i++) {
          s1 << i << buf << buf << buf << buf;
        }
        gettimeofday(&tv2,NULL);
        pg_stmt("DROP TABLE txt1", cnx);
        trans.commit();
        printf("elapsed time in loop %d is %ld ms\n", j,
               (tv2.tv_sec-tv1.tv_sec)*1000+(tv2.tv_usec-tv1.tv_usec)/1000);
      }
    }
  }
  catch(pg_excpt p) {
    std::cerr << p.full_error_txt();
  }
  return 0;
}



[GENERAL] Foreign Keys Question

2005-04-13 Thread Matthias Loitsch

First of all, hi. I'm new to this mailing list.


I searched for this on the net, but I didn't get any usable answers...


So here's my problem:

I have 2 schemas. One is called SITE, one SITE_forum.

What I wanted is to separate the forum from the whole Site db, so I can
put them on different servers if I encounter performance problems or
anything.
So I started the SITE_forum schema, made all my tables, and realized
that I don't want to create the table users in the schema SITE_forum,
because I want to use the exact same users as in the schema SITE.

One possibility would be to create 2 identical tables in both schemas,
but that really really is not what I'd like to do.

So I thought I could make a foreign key to a different schema (db), and
use the same table.

And well, that's where I started to search whether this is possible ... and
in fact my main question is: is this a good idea?
I have no idea if this will be fast enough, or if I will have lots of
problems afterwards.

Could anyone help me with this?


Thanks in advance,
Matthias Loitsch



-- 
THEK
Matthias Loitsch
www.studiothek.com/





Re: [GENERAL] psql vs perl prepared inserts

2005-04-13 Thread Tom Lane
Neil Conway [EMAIL PROTECTED] writes:
 The intent of prepared statements is to reduce the overhead of running 
 the parser, rewriter and planner multiple times for a statement that is 
 executed multiple times. For an INSERT query without any sub-selects 
 that is not rewritten by any rules, the cost to parse, rewrite and plan 
 the statement is trivial.

You'd be surprised ...

I was looking into this just the other day, with a test case that looks
like

 create table bench1 (id int NOT NULL,id2 int NOT NULL,id3 int NOT NULL,
  dummy1 char(30));
 create unique index bench1_index_ on bench1  (id,id2);
 create index bench1_index_1 on bench1  (id3);
 begin;
 insert into bench1 values (0,0,0,'ABCDEFGHIJ');
 ... 300,000 inserts ...
 insert into bench1 values (167151,167151,167151,'CDEFGHIJKL');
 commit;

According to gprof, the above-1% functions are

  %   cumulative   self              self     total
 time   seconds   seconds    calls   s/call   s/call  name
 6.62     53.38    53.38       37     0.00     0.00  yyparse
 5.31     96.19    42.81 11808781     0.00     0.00  hash_search
 4.44    132.00    35.81 61268959     0.00     0.00  AllocSetAlloc
 4.23    166.13    34.13   902304     0.00     0.00  XLogInsert
 3.20    191.99    25.86 13688735     0.00     0.00  _bt_compare
 2.94    215.75    23.76 12158347     0.00     0.00  LWLockAcquire
 2.67    237.32    21.58  4500066     0.00     0.00  base_yylex
 2.56    258.00    20.68  6000510     0.00     0.00  SearchCatCache
 1.99    274.07    16.07 12160856     0.00     0.00  LWLockRelease
 1.88    289.25    15.18 13008925     0.00     0.00  hash_any
 1.49    301.25    12.01  2452386     0.00     0.00  PinBuffer
 1.36    312.25    11.00  1201324     0.00     0.00  fmgr_info_cxt_security
 1.36    323.24    10.98       30     0.00     0.00  planner
 1.19    332.81     9.57 20700142     0.00     0.00  MemoryContextAllocZeroAligned

I don't trust gprof's tree-structured breakdown entirely, but it puts
a pretty significant fraction of the blame on parse/plan activities:

                3.66  767.69      37/37           PostgresMain [4]
[5]     95.6    3.66  767.69      37              exec_simple_query [5]
                6.13  283.12      37/37           PortalRun [6]
                0.48  167.39      37/37           pg_analyze_and_rewrite [9]
                0.47  122.85      37/37           pg_plan_queries [16]
                1.56   93.29      37/37           pg_parse_query [23]
                0.62   34.78      37/37           pg_rewrite_queries [52]
                0.99   17.39      37/37           PortalDrop [79]
                0.56   16.26  600014/600014       finish_xact_command [84]
                1.19    6.89      37/37           CreatePortal [126]
                1.08    2.29  600014/600014       start_xact_command [186]
                1.88    0.36      37/37           PortalStart [218]
                0.76    1.44      37/37           pq_puttextmessage [220]
                1.35    0.00      37/600017       set_ps_display [210]
                1.27    0.00      37/37           CreateCommandTag [271]
                0.89    0.22      37/37           printtup_create_DR [286]
                0.74    0.00      37/37           CreateDestReceiver [328]
                0.61    0.00      37/600015       pgstat_report_activity [277]
                0.10    0.14      37/13864259     pfree [112]
                0.23    0.00      37/37           PortalSetResultFormat [449]
                0.19    0.00      37/37           IsAbortedTransactionBlockState [474]
                0.07    0.00      37/37           printtup_destroy [564]
                0.05    0.00      37/37           PortalDefineQuery [580]
                0.03    0.00      37/37           EndCommand [617]
                0.01    0.00      37/37           BeginCommand [670]

That adds up to over 50% of the runtime spent in parse/rewrite/plan.

I haven't gotten around to converting the test case into a program that
can use a prepared INSERT command, but it looks plausible to expect
a factor of 2 or so speedup ... of course, using COPY would completely
blow this away, anyway ...

regards, tom lane
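
The prepared-statement variant of the test that Tom says he hasn't gotten
around to would presumably look something like this sketch, reusing his
bench1 table:

PREPARE ins (int, int, int, char(30)) AS
    INSERT INTO bench1 VALUES ($1, $2, $3, $4);

BEGIN;
EXECUTE ins(0, 0, 0, 'ABCDEFGHIJ');
-- ... one EXECUTE per row ...
COMMIT;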



Re: [GENERAL] Foreign Keys Question

2005-04-13 Thread Bruno Wolff III
On Wed, Apr 13, 2005 at 13:54:05 +0200,
  Matthias Loitsch [EMAIL PROTECTED] wrote:
 
 So I thought I could make a foreign key to a different schema (db), and
 use the same table.
 
 And well, that's where I started to search whether this is possible ... and
 in fact my main question is: is this a good idea?
 I have no idea if this will be fast enough, or if I will have lots of
 problems afterwards.
 
 Could anyone help me with this?

Schemas are just a namespace, so using foreign keys across schemas
shouldn't be a problem.

However, if you later put the contents of the schema on another server,
or even in another database, then you are going to need to make a copy
of the data, as you can't make foreign key references outside of
the current database.
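
A minimal sketch of such a cross-schema reference (the schema names follow
Matthias's description; the columns are hypothetical):

CREATE SCHEMA "SITE";
CREATE SCHEMA "SITE_forum";

CREATE TABLE "SITE".users (
    user_id serial PRIMARY KEY,
    name    text NOT NULL
);

CREATE TABLE "SITE_forum".posts (
    post_id serial PRIMARY KEY,
    user_id integer NOT NULL REFERENCES "SITE".users (user_id),
    body    text
);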



Re: [GENERAL] PostGreSQL (7.3?) recovery, Mac OS X (10.3.8)

2005-04-13 Thread Eric D. Nielsen
Thank you. I'm used to installing from source or Debian packages.  I 
haven't poked around too much at the console level of the Mac.  I would 
assume I install from source and just point the install at the old data 
directory? Any gotchas to watch for that aren't in the manual? I would 
assume I should upgrade to 7.3.4 and see if that fixes enough that I 
can generate a database dump.  Afterwards, bring it forward to the 8.0 
series.

Eric
On Apr 12, 2005, at 10:09 PM, Tom Lane wrote:
Eric D. Nielsen [EMAIL PROTECTED] writes:
PANIC:  XLogWrite: write request 0/2364000 is past end of log 
0/2364000
This is a known corner-case bug in some 7.3 releases.  If you care 
about
getting the data out of it, you can update-in-place to the latest 7.3
release.  If not, well, 7.3 was a long time ago ...

			regards, tom lane



Re: [GENERAL] psql vs perl prepared inserts

2005-04-13 Thread Sean Davis
On Apr 13, 2005, at 9:57 AM, Matt Van Mater wrote:
Thanks to all who replied.  Thanks for the tip on that last thread,
Tom; I don't know how I missed it.  I have a hunch that it's not
applicable to me at this time because I'm running year-and-a-half-old
software (included in OpenBSD 3.4), but I will have to check which
version of DBD::Pg was installed.
The intent of prepared statements is to reduce the overhead of running
the parser, rewriter and planner multiple times for a statement that is
executed multiple times. For an INSERT query without any sub-selects
that is not rewritten by any rules, the cost to parse, rewrite and plan
the statement is trivial. So I wouldn't expect prepared statements to be
a big win -- you would gain a lot more from batching multiple inserts
into a single transaction, and more still from using COPY.
I was thinking something along the same lines, and was considering
using the COPY statement as my next step, but as someone mentioned,
I would then have to load the data into a temporary table and do some more
internal magic to preserve referential integrity, unique constraints,
etc.  For that reason I was hoping to keep it in perl, and it's always
nice to keep everything in a single neat portable package.
You can use pl/perl stored functions to do exactly what you do in perl 
on the client side, but it happens much faster since the data is 
already on the server.  Then, your neat little portable package IS the 
database.

Sean


[GENERAL] About ERROR: could not convert UTF-8 character 0x00e9 to ISO8859-1

2005-04-13 Thread Ying Lu
Greetings,
PostgreSQL 8.0.1 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3.2. 
When I run

select * from table1;
I got an error:
=
ERROR:  could not convert UTF-8 character 0x00e9 to ISO8859-1
I tried to google but cannot find much info about it. Can somebody help?
Thanks,
Emi



Re: [GENERAL] Composite type versus Domain constraints.

2005-04-13 Thread Tom Lane
James Robinson [EMAIL PROTECTED] writes:
 Thank you for the great info. If I may, here's another question. I am in
 need of new scalar types, essentially domain'd smallints, hence
 why my composite type had but one composite member. Domain'd
 smallints would be great, but it seems when they get returned in a
 result set to the client, they come shipped with the oid of smallint
 (21 on my box), not the oid of the domain.

Yeah.  IIRC that was a deliberate decision on the grounds that most
client software would probably break if we sent the domain OID.
Maybe we should reconsider, but I think the answer would be the same.

 Is there an easy path to creating (many) scalar types which piggyback
 on int2's functions. Naive experimentation fails:

 social=# create type MyType (
     INTERNALLENGTH = 2,
     INPUT = int2in,
     OUTPUT = int2out
 );
 ERROR:  function int2out(mytype) does not exist

 Which is reasonable.

What you'd have to do is also create dummy I/O functions as aliases
for the internal int2in/out functions.

regression=# create function mytypein(cstring) returns mytype
regression-# as 'int2in' language internal strict immutable;
NOTICE:  type mytype is not yet defined
DETAIL:  Creating a shell type definition.
CREATE FUNCTION
regression=# create function mytypeout(mytype) returns cstring
regression-# as 'int2out' language internal strict immutable;
NOTICE:  argument type mytype is only a shell
CREATE FUNCTION
regression=# create type mytype( input = mytypein, output = mytypeout,
regression(# internallength = 2, passedbyvalue, alignment = int2 );
CREATE TYPE
regression=# select '42'::mytype;
 mytype
--------
 42
(1 row)

regression=#

Note that it's absolutely critical that you get the size/alignment/byval
properties right ;-)

With suitable casts to/from int2 (probably implicit to int2, but not
implicit from), this would probably work fairly well.

regression=# select '42'::mytype + '42'::mytype;
ERROR:  operator does not exist: mytype + mytype
HINT:  No operator matches the given name and argument type(s). You may need to 
add explicit type casts.
regression=# create cast (mytype as int2) without function as implicit;
CREATE CAST
regression=# select '42'::mytype + '42'::mytype;
 ?column?
----------
       84
(1 row)

regression=# create table zit(f1 mytype primary key);
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "zit_pkey" for 
table "zit"
CREATE TABLE
regression=# \d zit
   Table "public.zit"
 Column |  Type  | Modifiers
--------+--------+-----------
 f1     | mytype | not null
Indexes:
    "zit_pkey" PRIMARY KEY, btree (f1 int2_ops)

regression=#

Sweet ... I wasn't actually expecting the index to work without more
hacking than that ...

regards, tom lane



Re: [GENERAL] psql vs perl prepared inserts

2005-04-13 Thread Alvaro Herrera
On Wed, Apr 13, 2005 at 09:57:09AM -0400, Matt Van Mater wrote:

 Also, I forgot to mention earlier that I tried using transactions to
 speed things up, but since I expect to see certain inserts fail I
 would need to rework my code so the whole transaction doesn't fail if
 one insert goes bad.  This is somewhat contrary to the purpose of
 transactions so I'm not sure how to accomplish this.

Try the pgloader project at pgfoundry.  It tries to insert all rows
using COPY and the rejected ones go to a file, using something akin to
binary search.  It may be of some use.  I haven't tried it so I'm not
sure of its maturity.  Let us know how it goes if you try it!

-- 
Alvaro Herrera ([EMAIL PROTECTED])
And as an added bonus, now my computer goes to the toilet for me, leaving me
free to spend time on more useful activities! yay slug codefests! (C. Parker)



Re: [GENERAL] Foreign Keys Question

2005-04-13 Thread Oleg Bartunov
Will inherits help you?
create table SITE_forum.t1 () inherits (SITE.t);
Oleg
On Wed, 13 Apr 2005, Matthias Loitsch wrote:
First of all, hi. I'm new to this mailing list.
I searched for this on the net, but I didn't get any usable answers...
So here's my problem:
I have 2 schemas. One is called SITE, one SITE_forum.
What I wanted is to separate the forum from the whole Site db, so I can
put them on different servers if I encounter performance problems or
anything.
So I started the SITE_forum schema, made all my tables, and realized
that I don't want to create the table users in the schema SITE_forum,
because I want to use the exact same users as in the schema SITE.
One possibility would be to create 2 identical tables in both schemas,
but that really really is not what I'd like to do.
So I thought I could make a foreign key to a different schema (db), and
use the same table.
And well, that's where I started to search whether this is possible ... and
in fact my main question is: is this a good idea?
I have no idea if this will be fast enough, or if I will have lots of
problems afterwards.
Could anyone help me with this?
Thanks in advance,
Matthias Loitsch


Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83


[GENERAL] Postgresql 8.0.2 and Tiger?

2005-04-13 Thread Jerry LeVan
Mac OS X 10.4, aka Tiger, is going to be released in a couple of weeks;
does anyone know if Tiger will still be PostgreSQL friendly?

Jerry


Re: [GENERAL] Postgresql 8.0.2 and Tiger?

2005-04-13 Thread Joshua D. Drake
Jerry LeVan wrote:
Mac OS X 10.4, aka Tiger, is going to be released in a couple of weeks;
does anyone know if Tiger will still be PostgreSQL friendly?
There is no reason to think that it wouldn't be.
Sincerely,
Joshua D. Drake

Jerry

--
Your PostgreSQL solutions provider, Command Prompt, Inc.
24x7 support - 1.800.492.2240, programming, and consulting
Home of PostgreSQL Replicator, plPHP, plPerlNG and pgPHPToolkit
http://www.commandprompt.com / http://www.postgresql.org


Re: [GENERAL] Postgresql 8.0.2 and Tiger?

2005-04-13 Thread Steve Atkins
On Wed, Apr 13, 2005 at 01:21:41PM -0400, Jerry LeVan wrote:
 Mac OS X 10.4, aka Tiger, is going to be released in a couple of weeks;
 does anyone know if Tiger will still be PostgreSQL friendly?

8.0.2 doesn't build out of the box on build 8A351B. I've not had a chance
to see what's going on yet, but it doesn't look like anything too hard
to fix.

Cheers,
  Steve




Re: [GENERAL] Postgresql 8.0.2 and Tiger?

2005-04-13 Thread Vivek Khera
On Apr 13, 2005, at 1:52 PM, Steve Atkins wrote:
8.0.2 doesn't build out of the box on build 8A351B. I've not had a chance
to see what's going on yet, but it doesn't look like anything too hard
to fix.

My guess would be that any issues with building would be related to gcc4 in
Tiger, and any changes needed for that will be applicable to gcc4 on
any other platform too.  But that's just a SWAG.

Vivek Khera, Ph.D.
+1-301-869-4449 x806




Re: [GENERAL] About ERROR: could not convert UTF-8 character 0x00e9 to ISO8859-1

2005-04-13 Thread Secrétariat
Specify your codepage:
SET CLIENT_ENCODING TO 'LATIN1' ;
SELECT * FROM table1 ;
Your database would be coded as LATIN1.
Luc
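
If you are working interactively, psql's \encoding meta-command is shorthand
for the same thing; either way, only the client-side encoding changes, not
how the data is stored:

\encoding LATIN1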
- Original Message - 
From: Ying Lu [EMAIL PROTECTED]
To: pgsql-general@postgresql.org
Sent: Wednesday, April 13, 2005 5:49 PM
Subject: [GENERAL] About ERROR: could not convert UTF-8 character 0x00e9 to 
ISO8859-1


Greetings,
PostgreSQL 8.0.1 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3.2. 
When I run

select * from table1;
I got an error:
=
ERROR:  could not convert UTF-8 character 0x00e9 to ISO8859-1
I tried to google but cannot find much info about it. Can somebody help?
Thanks,
Emi



Re: [GENERAL] Composite type versus Domain constraints.

2005-04-13 Thread James Robinson
On Apr 13, 2005, at 11:50 AM, Tom Lane wrote:
Thank you for the great info. If I may, here's another question. I am in
need of new scalar types, essentially domain'd smallints, hence
why my composite type had but one composite member. Domain'd
smallints would be great, but it seems when they get returned in a
result set to the client, they come shipped with the oid of smallint
(21 on my box), not the oid of the domain.
Yeah.  IIRC that was a deliberate decision on the grounds that most
client software would probably break if we sent the domain OID.
Maybe we should reconsider, but I think the answer would be the same.
[snipped fantastic PG type trickery -- thanks! We'll run with that style
for now]
Those wrapper in/out functions and casts to int2 look great and will work
for what we absolutely need, but still seem a bit on the wordy side. What
about, for databases + client applications which expect it, an option to
have a domain expose its oid in result-set metadata? Domains created
without the extra syntactical flag would operate exactly as they do now
-- returning the oid of the wrapped type. But if created like:

CREATE DOMAIN mytype AS int2 EXPOSE OID
CONSTRAINT test CHECK (VALUE IN (0,1,2,3));
Then when these guys are returned in queries, the domain's oid is
returned as the metadata for the column.
Would psql or pg_dump care? Our client apps would be expecting it
and would love it.
I would suspect that an additional boolean column in pg_type, something
along the lines of 'typobscuresbasetype', defaulting to false, but set to
true if 'EXPOSE OID' was provided would be enough for the system to
decide which oid to send back.
That seems less fragile and error prone than casts, wrapping in/out
functions, etc. Reduces the barrier of effort towards making lots and
lots of these little guys and harnessing easy extra value on the client
side of things. We've essentially got hordes of enumerated types
on the client side adding lots of value and functionality to the
database-stored smallint, and having only two lines of SQL to build
to educate the database about each one would be a real winner.
Not to sound ungrateful -- the scalar type tricks can work, but with
more SQL-level effort.

James Robinson
Socialserve.com


Re: [GENERAL] About ERROR: could not convert UTF-8 character 0x00e9

2005-04-13 Thread Ying Lu
Hello,
Actually, both the server and client sides have been set up with encoding 
latin1.

I am trying to read data from one DB and save it to another DB. When I 
tried to read data from one DB (using postgresql-8.0-310.jdbc3.jar), it 
seems that I lost all my French characters (they become ?).

Also, I am *not* able even to *paste* French characters from the PSQL terminal.
I am thinking: should I make some changes to my terminal drivers to 
allow French characters?


(Note: I already tried
  javac -encoding ISO-8859-1
  java -Dfile.encoding=ISO-8859-1 test
when compiling and running my Java programs.)
Thanks a lot,
Emi




SET CLIENT_ENCODING TO 'LATIN1' ;
SELECT * FROM table1 ;
Your database would be coded as LATIN1.
Luc
- Original Message - From: Ying Lu [EMAIL PROTECTED]
To: pgsql-general@postgresql.org
Sent: Wednesday, April 13, 2005 5:49 PM
Subject: [GENERAL] About ERROR: could not convert UTF-8 character 
0x00e9 to ISO8859-1


Greetings,
PostgreSQL 8.0.1 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 
3.3.2. When I run

select * from table1;
I got an error:
=
ERROR:  could not convert UTF-8 character 0x00e9 to ISO8859-1
I tried to google but cannot find much info about it. Can somebody help?
Thanks,
Emi



[GENERAL] [PHP] unsubscribe

2005-04-13 Thread Libia Andrea Ramos Sánchez
unsubscribe
LIBIA ANDREA RAMOS SÁNCHEZ
INGENIERO DE SISTEMAS
ESCUELA COLOMBIANA DE INGENIERIA
CEL. 3103271242
OFC. 5452799 - 2550469 CMN-Consulting
_
Charla con tus amigos en línea mediante MSN Messenger: 
http://messenger.latam.msn.com/



Re: [GENERAL] About ERROR: could not convert UTF-8 character 0x00e9

2005-04-13 Thread Ying Lu
Luc,
You are right. I am afraid I checked a different machine's 
client_encoding, which had the correct encodings set up.

While for my working environment, I have
CLIENT_ENCODING = SQL_ASCII ;
SERVER_ENCODING = SQL_ASCII;
After we set up the correct encoding for the client and server sides, we 
should be able to read correct French characters through JDBC.

However, I am still *not able* to *paste French characters* such as ç 
directly from the PSQL terminal screen.

Please advise,
Emi

Actually, both the server and client sides have been set up with encoding 
latin1.

I am trying to read data from one DB and save it to another DB. When I 
tried to read data from one DB (using postgresql-8.0-310.jdbc3.jar), 
it seems that I lost all my French characters (they become ?).

Also, I am *not* able even to *paste* French characters from the PSQL 
terminal.

I am thinking: should I make some changes to my terminal drivers to 
allow French characters?


(Note: I already tried
  javac -encoding ISO-8859-1
  java -Dfile.encoding=ISO-8859-1 test
when compiling and running my Java programs.)
Thanks a lot,
Emi


SET CLIENT_ENCODING TO 'LATIN1' ;
SELECT * FROM table1 ;
Your database would be coded as LATIN1.
Luc
- Original Message - From: Ying Lu [EMAIL PROTECTED]
To: pgsql-general@postgresql.org
Sent: Wednesday, April 13, 2005 5:49 PM
Subject: [GENERAL] About ERROR: could not convert UTF-8 character 
0x00e9 to ISO8859-1


Greetings,
PostgreSQL 8.0.1 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 
3.3.2. When I run

select * from table1;
I got an error:
=
ERROR:  could not convert UTF-8 character 0x00e9 to ISO8859-1
I tried to google but cannot find much info about it. Can somebody 
help?

Thanks,
Emi







Re: [GENERAL] Postgresql 8.0.2 and Tiger?

2005-04-13 Thread Scott Ribe
 Mac OS X 10.4, aka Tiger, is going to be released in a couple of weeks;
 does anyone know if Tiger will still be PostgreSQL friendly?

Well, considering that PostgreSQL is now used by Apple Remote Desktop, for
storing client-management kinds of info, I expect there will be some way to
get it working ;-)

 
-- 
Scott Ribe
[EMAIL PROTECTED]
http://www.killerbytes.com/
(303) 665-7007 voice





Re: [GENERAL] Postgresql 8.0.2 and Tiger?

2005-04-13 Thread Benjamin Reed
On 4/13/05, Vivek Khera [EMAIL PROTECTED] wrote:

 My guess would be that any issues with building would be related to gcc4 in
 Tiger, and any changes needed for that will be applicable to gcc4 on
 any other platform too.  But that's just a SWAG.

I confirmed that my postgresql 8.0.2 and 7.4.7 packages build in fink
in the latest revision seeded to developers...  It's only a few builds
away from what is said to be final, so I'm betting it's just fine.



Re: [GENERAL] grant all privileges to all tables in a database

2005-04-13 Thread Florin Andrei
On Mon, 2005-04-11 at 03:28 +0200, Daniel Verite wrote:
   Florin Andrei wrote:
 
  On MySQL, it's enough to do this:
  
  GRANT ALL PRIVILEGES ON dbname.* TO username [IDENTIFIED BY 'password'];
  
  On PostgreSQL, you have to give it privileges not only to the database,
  but to all components within (tables, sequences and whatnot). The
  following three commands will grant those privileges, first to the
  database, then to the tables, then to the sequences.
 
 In this case, why not let 'username' create the database and all its
 objects, so that it will have all privileges on them afterwards without
 any specific GRANT required?

Those are not system accounts, just DB accounts.

-- 
Florin Andrei

http://florin.myip.org/
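
For reference, Daniel's suggestion boils down to something like the sketch
below (the role, password, and database names are hypothetical); objects
created by the owning role need no per-object GRANTs afterwards:

CREATE USER appuser PASSWORD 'secret';
CREATE DATABASE appdb OWNER appuser;
-- connect as appuser to create the tables and sequences;
-- as their owner, appuser has full privileges without further GRANTs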

