[GENERAL] [ANN]VTD-XML 1.8 released

2006-11-22 Thread Jimmy Zhang

Version 1.8 of VTD-XML is now released. The new features are:

·XMLModifier is an easy-to-use class that takes advantage of the 
incremental update capability offered by VTD-XML


·XPath built-in functions are now almost complete

·This release adds encoding support for iso-8859-2~10 and Windows code 
pages 1250~1258


·Added various functions to autoPilot that evaluate XPath to string, 
number and boolean


·This release also fixes a number of XPath bugs related to string 
handling


To download the latest release please visit http://vtd-xml.sf.net








---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Thanks for all the help on replacing 'unique'

2006-11-22 Thread Thomas Kellerer

On 21.11.2006 15:20 Wm.A.Stafford wrote:
 Distinct is the equivalent aggregate function as many pointed out.  A 
 co-worker said that Oracle's 'unique' is probably an optimized version 
 of distinct that takes advantage of some Oracle-specific indexing.

I don't think so. They are merely two different keywords for the same 
thing. I've seen more than one DB using UNIQUE instead of DISTINCT. Quote 
from the Oracle manual:


these two keywords are synonymous
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_103a.htm#2065648
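
As a quick illustration (table and column names are just placeholders), both 
of these return the same result set on Oracle; only DISTINCT is portable SQL:

  SELECT DISTINCT department_id FROM employees;
  SELECT UNIQUE   department_id FROM employees;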

Thomas


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] list archives

2006-11-22 Thread Ray Stell


The large print giveth and the small print taketh away - Tom Waits




On Tue, Nov 21, 2006 at 10:34:28PM +0100, Magnus Hagander wrote:
  I intended to read a thread I caught part of in Oct, so I went to the
  archive:  http://archives.postgresql.org/pgsql-general/
  Seems most of the month's posts are missing.  Is this SOP or 
  is something busted?  Seems to hold true for most months this year. 
 
 We had this question on a different list just a week ago or so. You
 didn't happen to also miss the Next Page link, did you?
 
 //Magnus
 

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] MSSQL to PostgreSQL : Encoding problem

2006-11-22 Thread Arnaud Lesauvage

Richard Huxton wrote:

Arnaud Lesauvage wrote:

Hi list !

I already posted this as COPY FROM encoding error, but I have been 
doing some more tests since then.


I'm trying to export data from MS SQL Server to PostgreSQL.
The tables are quite big (20M rows), so a CSV export and a COPY FROM 
import seem to be the only reasonable solution.


Or go via MS-Access/Perl and ODBC/DBI perhaps?


Yes, I think it would work. The problem is that the DB is 
too big for this kind of export. Using DTS from MSSQL to 
export directly to PostgreSQL using the psqlODBC Unicode Driver, 
I exported ~1000 rows per second in a 2-column table with 
~20M rows. That means several days just for this table, and 
I have bigger ones !



In DTS, I have 3 options to export a table as a text file : ANSI, OEM 
and UNICODE.

I tried all these options (and I have three files, one for each).


Well, what character-set is your database in?



Collation in MSSQL is SQL_Latin1_General_CP1_CI_AS.
DTS documentation tells me that exporting in ANSI should 
export using the current codepage.

According to my local setting, my codepage is Windows-1252.
This file is not correctly read by COPY when using 
client_encoding of WIN1252 though...



I then try to import into PostgreSQL. The furthest I can get is when 
using the UNICODE export, and importing it using a client_encoding set 
to UTF8 (I tried WIN1252, LATIN9, LATIN1, ...).

The copy then stops with an error :
ERROR: invalid byte sequence for encoding UTF8: 0xff
SQL state: 22021

The problematic character is the euro currency symbol.


You'll want UTF-8 or LATIN9 for the euro symbol. LATIN1 supports that 
character-number but it is used for a different symbol.


Your first step needs to be to find out what character-set your data is in.
Your second is then to decide what char-set you want to use to store it 
in PG.

Then you can decide how to get there.


In PG, UTF8 was my choice (the DB already exists, I am just 
adding some tables that are still stored in MSSQL), and 
according to what you say this was the right choice.

The problem is really about reading this file I think.

I thought that given the character sets available in 
PostgreSQL, I would be able to COPY directly from my 
exported files.
If I have to convert them using some third party tool, I'll 
do that, but that's a bit more painful...


--
Arnaud

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] MSSQL to PostgreSQL : Encoding problem

2006-11-22 Thread Arnaud Lesauvage

Tomi NA wrote:

I think I'll go this way... No other choice, actually !
The MSSQL database is in SQL_Latin1_General_CP1_CI_AS.
I don't really understand what this is. It supports the euro
symbol, so it is probably not pure LATIN1, right ?


I suppose you'd have to look at the latin1 codepage character table
somewhere...I'm a UTF-8 guy so I'm not well suited to respond to the
question. :)


Yep, http://en.wikipedia.org/wiki/Latin-1 tells me that 
LATIN1 is missing the euro sign...

Gr I hate this !!!

--
Arnaud

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] MSSQL to PostgreSQL : Encoding problem

2006-11-22 Thread Tomi NA

2006/11/21, Arnaud Lesauvage [EMAIL PROTECTED]:

Hi list !

I already posted this as COPY FROM encoding error, but I have
been doing some more tests since then.

I'm trying to export data from MS SQL Server to PostgreSQL.
The tables are quite big (20M rows), so a CSV export and a COPY
FROM import seem to be the only reasonable solution.


I believe you might have more luck working without files altogether.
Use an ETL tool like kettle or even DTS with the pgsql ODBC driver.
That's exactly what those tools are for.

You still have to get the encodings right, though.
I suggest unicode for pgsql, but only you know how the MSSQL database
is encoded.
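
If you do settle on unicode on the pgsql side, a minimal sketch of the target 
(the database name is a placeholder) would be:

  CREATE DATABASE target_db WITH ENCODING 'UTF8';

The per-session client_encoding setting then tells the server which encoding 
to convert from when you COPY or insert.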

t.n.a.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] MSSQL to PostgreSQL : Encoding problem

2006-11-22 Thread Arnaud Lesauvage

Richard Huxton wrote:

Arnaud Lesauvage wrote:

Richard Huxton wrote:


Or go via MS-Access/Perl and ODBC/DBI perhaps?


Yes, I think it would work. The problem is that the DB is too big for 
this kind of export. Using DTS from MSSQL to export directly to 
PostgreSQL using the psqlODBC Unicode Driver, I exported ~1000 rows per 
second in a 2-column table with ~20M rows. That means several days just 
for this table, and I have bigger ones !


Well it's about 0.25 days, but if it's too long, it's too long.


Sure, sorry for the confusion, the problem is with the other 
tables (same number of rows but a lot of columns, some very 
large).



 Collation in MSSQL is SQL_Latin1_General_CP1_CI_AS.
 DTS documentation tells me that exporting in ANSI should export using 
 the current codepage.

 According to my local setting, my codepage is Windows-1252.
 This file is not correctly read by COPY when using client_encoding of 
 WIN1252 though...


Hmm. Odd that they don't agree on what WIN1252 is. I'm not sure how to
check the file and confirm one way or the other. Anyone else on the list 
got an idea?


I just downloaded the GnuWin32 version of iconv.
I'm giving it a try and I'll tell you how it went.


--
Arnaud

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Vienna migrating some DB's from Oracle to PG

2006-11-22 Thread Albe Laurenz
Reid Thompson wrote:
 not much info... just a blurb about RH, but with the PG comment thrown
 in..
 
 http://opensource.sys-con.com/read/303017.htm
 
 The city of Vienna is going with Red Hat on its file servers and
Oracle
 database servers, some of which are evidently going to become
PostgreSQL
 databases.

PostgreSQL is up, running and productive here!

Greetings from the City of Vienna,
Laurenz Albe

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] URL Decoding

2006-11-22 Thread Arnaud Lesauvage

Hi List !

I am looking for an easy URL decoding function.
I thought about using regexp_replace, but I can't get it to 
work :


SELECT regexp_replace('foo%B5bar', '%(..)', '\x\\1' , 'g');
 'fooxB5bar'

I wanted to replace %B5 with the character \xB5 (µ, I 
think), but of course I am doing this wrong...

Is there a simple way to do this ?

Thanks a lot !

--
Arnaud

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] MSSQL to PostgreSQL : Encoding problem

2006-11-22 Thread Alvaro Herrera
Arnaud Lesauvage wrote:
 Tomi NA wrote:
 I think I'll go this way... No other choice, actually !
 The MSSQL database is in SQL_Latin1_General_CP1_CI_AS.
 I don't really understand what this is. It supports the euro
 symbol, so it is probably not pure LATIN1, right ?
 
 I suppose you'd have to look at the latin1 codepage character table
 somewhere...I'm a UTF-8 guy so I'm not well suited to respond to the
 question. :)
 
 Yep, http://en.wikipedia.org/wiki/Latin-1 tells me that 
 LATIN1 is missing the euro sign...
 Gr I hate this !!!

So use Latin9 ...

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] build for non-standard socket

2006-11-22 Thread garry saddington
I have searched and Googled but can't find how to do a ./configure to
use a different socket than /tmp/.s.PGSQL.5432. It says in the manual
that it can be done but then does not say how.
Anyone done this before?
Regards
Garry 


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] MSSQL to PostgreSQL : Encoding problem

2006-11-22 Thread Tomi NA

2006/11/22, Arnaud Lesauvage [EMAIL PROTECTED]:

Tomi NA wrote:
 2006/11/21, Arnaud Lesauvage [EMAIL PROTECTED]:
 Hi list !

 I already posted this as COPY FROM encoding error, but I have
 been doing some more tests since then.

 I'm trying to export data from MS SQL Server to PostgreSQL.
 The tables are quite big (20M rows), so a CSV export and a COPY
 FROM import seem to be the only reasonable solution.

 I believe you might have more luck working without files altogether.
 Use an ETL tool like kettle or even DTS with the pgsql ODBC driver.
 That's exactly what those tools are for.

 You still have to get the encodings right, though.
 I suggest unicode for pgsql, but only you know how the MSSQL database
 is encoded.

I think I'll go this way... No other choice, actually !
The MSSQL database is in SQL_Latin1_General_CP1_CI_AS.
I don't really understand what this is. It supports the euro
symbol, so it is probably not pure LATIN1, right ?


I suppose you'd have to look at the latin1 codepage character table
somewhere...I'm a UTF-8 guy so I'm not well suited to respond to the
question. :)

t.n.a.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] build for non-standard socket

2006-11-22 Thread Matthias . Pitzl
Hi!

You can at least configure the socket directory, the socket group and the
socket permissions in your postgresql.conf.
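
For reference, a minimal sketch of those settings (8.x parameter names; the 
directory value is only an example):

  unix_socket_directory   = '/var/run/postgresql'
  unix_socket_group       = ''
  unix_socket_permissions = 0777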

Greetings,
Matthias

 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of 
 garry saddington
 Sent: Wednesday, November 22, 2006 8:58 AM
 To: Postgres General
 Subject: [GENERAL] build for non-standard socket
 
 
 I have searched and Googled but can't find how to do a ./configure to
 use a different socket than /tmp/.s.PGSQL.5432. It says in the manual
 that it can be done but then does not say how.
 Anyone done this before?
 Regards
 Garry 
 
 
 

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] MSSQL to PostgreSQL : Encoding problem

2006-11-22 Thread Magnus Hagander

  I have done this in Delphi using its built-in UTF8 encoding and 
  decoding routines.   You can get a free copy of Delphi 
 Turbo Explorer 
  which includes components for MS SQL server and ODBC, so it 
 would be 
  pretty straight forward to get this working.
  
  The actual method in Delphi is system.UTF8Encode(widestring).  This 
  will encode unicode to UTF8 which is compatible with a 
 Postgresql UTF8 database.
 
 Ah, that's useful to know. Windows just doesn't have the same 
 quantity of tools installed as a *nix platform.

If your file is small enough, you can just open it up in Notepad and
re-save it as UTF8. It might play funny with the BOMs though
(byte-order-marks).

There is also, IIRC, an iconv binary available for Windows that should
be able to do such a conversion. Can't remember where though :-)

//Magnus

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] MSSQL to PostgreSQL : Encoding problem

2006-11-22 Thread Magnus Hagander
  I already posted this as COPY FROM encoding error, but I 
 have been 
  doing some more tests since then.
  
  I'm trying to export data from MS SQL Server to PostgreSQL.
  The tables are quite big (20M rows), so a CSV export and a COPY 
  FROM import seem to be the only reasonable solution.
  
  Or go via MS-Access/Perl and ODBC/DBI perhaps?
 
 Yes, I think it would work. The problem is that the DB is too 
 big for this kind of export. Using DTS from MSSQL to export 
 directly to PostgreSQL using the psqlODBC Unicode Driver, I 
 exported ~1000 rows per second in a 2-column table with ~20M 
 rows. That means several days just for this table, and I have 
 bigger ones !
 

Interesting. What did you set the "Insert batch size" to? (I think that's
available for all transformation tasks). And did you remember to check
the box for "use transactions"?

While it's never as fast as a COPY, it should be possible to make it
faster than that, I think.


Another option is to just BCP the file out, and then COPY it into
postgresql. No nice GUI, but you can tune almost everything with BCP.

//Magnus

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] MSSQL to PostgreSQL : Encoding problem

2006-11-22 Thread Arnaud Lesauvage

Alvaro Herrera wrote:

Arnaud Lesauvage wrote:

Tomi NA wrote:
I think I'll go this way... No other choice, actually !
The MSSQL database is in SQL_Latin1_General_CP1_CI_AS.
I don't really understand what this is. It supports the euro
symbol, so it is probably not pure LATIN1, right ?

I suppose you'd have to look at the latin1 codepage character table
somewhere...I'm a UTF-8 guy so I'm not well suited to respond to the
question. :)

Yep, http://en.wikipedia.org/wiki/Latin-1 tells me that 
LATIN1 is missing the euro sign...

Gr I hate this !!!


So use Latin9 ...


Of course, but it doesn't work !!!
Whatever client encoding I choose in postgresql before 
COPYing, I get the 'invalid byte sequence error'.


The furthest I can get is exporting to UNICODE and importing 
as UTF8. Then COPY only breaks on the euro symbol (otherwise 
it breaks very early, I think on the first non-ascii 
character).


--
Arnaud

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] MSSQL to PostgreSQL : Encoding problem

2006-11-22 Thread Arnaud Lesauvage

Tomi NA wrote:

2006/11/21, Arnaud Lesauvage [EMAIL PROTECTED]:

Hi list !

I already posted this as COPY FROM encoding error, but I have
been doing some more tests since then.

I'm trying to export data from MS SQL Server to PostgreSQL.
The tables are quite big (20M rows), so a CSV export and a COPY
FROM import seem to be the only reasonable solution.


I believe you might have more luck working without files altogether.
Use an ETL tool like kettle or even DTS with the pgsql ODBC driver.
That's exactly what those tools are for.

You still have to get the encodings right, though.
I suggest unicode for pgsql, but only you know how the MSSQL database
is encoded.


I think I'll go this way... No other choice, actually !
The MSSQL database is in SQL_Latin1_General_CP1_CI_AS.
I don't really understand what this is. It supports the euro 
symbol, so it is probably not pure LATIN1, right ?


--
Arnaud

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] MSSQL to PostgreSQL : Encoding problem

2006-11-22 Thread Alvaro Herrera
Arnaud Lesauvage wrote:
 Alvaro Herrera wrote:
 Arnaud Lesauvage wrote:
 Tomi NA wrote:
 I think I'll go this way... No other choice, actually !
 The MSSQL database is in SQL_Latin1_General_CP1_CI_AS.
 I don't really understand what this is. It supports the euro
 symbol, so it is probably not pure LATIN1, right ?
 
 I suppose you'd have to look at the latin1 codepage character table
 somewhere...I'm a UTF-8 guy so I'm not well suited to respond to the
 question. :)
 
 Yep, http://en.wikipedia.org/wiki/Latin-1 tells me that 
 LATIN1 is missing the euro sign...
 Gr I hate this !!!
 
 So use Latin9 ...
 
 Of course, but it doesn't work !!!
 Whatever client encoding I choose in postgresql before 
 COPYing, I get the 'invalid byte sequence error'.

Humm ... how are you choosing the client encoding?  Is it actually
working?  I don't see how choosing Latin1 or Latin9 and feeding whatever
byte sequence would give you an invalid byte sequence.  These charsets
don't have any way to validate the bytes, as opposed to what UTF-8 can
do.  So you could end up with invalid bytes if you choose the wrong
client encoding, but that's a different error.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] MSSQL to PostgreSQL : Encoding problem

2006-11-22 Thread Richard Huxton

Arnaud Lesauvage wrote:

Richard Huxton wrote:


Or go via MS-Access/Perl and ODBC/DBI perhaps?


Yes, I think it would work. The problem is that the DB is too big for 
this kind of export. Using DTS from MSSQL to export directly to 
PostgreSQL using the psqlODBC Unicode Driver, I exported ~1000 rows per 
second in a 2-column table with ~20M rows. That means several days just 
for this table, and I have bigger ones !


Well it's about 0.25 days, but if it's too long, it's too long.

In DTS, I have 3 options to export a table as a text file : ANSI, OEM 
and UNICODE.

I tried all these options (and I have three files, one for each).


Well, what character-set is your database in?



Collation in MSSQL is SQL_Latin1_General_CP1_CI_AS.
DTS documentation tells me that exporting in ANSI should export using 
the current codepage.

According to my local setting, my codepage is Windows-1252.
This file is not correctly read by COPY when using client_encoding of 
WIN1252 though...


Hmm. Odd that they don't agree on what WIN1252 is. I'm not sure how to 
check the file and confirm one way or the other. Anyone else on the list 
got an idea?


--
  Richard Huxton
  Archonet Ltd


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] ERROR: 42601: a column definition list is only allowed for functions returning record

2006-11-22 Thread Gopal
Hi,

 

I'm running the following postgres version on a Win2K server (PostgreSQL 8.1.4
on i686-pc-mingw32, compiled by GCC gcc.exe (GCC) 3.4.2 (mingw-special))

 

I am using c# and npgsql 1.0 (same behaviour with 0.7) to call a
function that has 6 input parameters and 4 output parameters.

 

When I call .ExecuteNonQuery(), I get the following message

 

ERROR: 42601: a column definition list is only allowed for functions
returning record

 

Here is my function declaration

 

CREATE or REPLACE FUNCTION public.usp_hello(
    IN centrex float8,
    IN centrey float8,
    IN pixelwidth int4,
    IN pixelheight int4,
    IN originaldataset text,
    IN originallevel int4,
    IN srid int4,
    OUT newdataset text,
    OUT newlevel int4,
    OUT newx float8,
    OUT newy float8)
returns record AS

 

 

My calling c# code is as follows

 

NpgsqlCommand sqlCommand1 = new NpgsqlCommand();
sqlCommand1.CommandText = "usp_hello";
sqlCommand1.CommandType = System.Data.CommandType.StoredProcedure;
sqlCommand1.Connection = sqlConnection1;

sqlCommand1.Parameters.Add(new NpgsqlParameter("centrex", NpgsqlTypes.NpgsqlDbType.Double));
sqlCommand1.Parameters.Add(new NpgsqlParameter("centrey", NpgsqlTypes.NpgsqlDbType.Double));
sqlCommand1.Parameters.Add(new NpgsqlParameter("pixelwidth", NpgsqlTypes.NpgsqlDbType.Integer));
sqlCommand1.Parameters.Add(new NpgsqlParameter("pixelheight", NpgsqlTypes.NpgsqlDbType.Integer));
sqlCommand1.Parameters.Add(new NpgsqlParameter("originaldataset", NpgsqlTypes.NpgsqlDbType.Text));
sqlCommand1.Parameters.Add(new NpgsqlParameter("originallevel", NpgsqlTypes.NpgsqlDbType.Integer));
sqlCommand1.Parameters.Add(new NpgsqlParameter("srid", NpgsqlTypes.NpgsqlDbType.Integer));

sqlCommand1.Parameters["centrex"].Value = centreX;
sqlCommand1.Parameters["centrey"].Value = centreY;
sqlCommand1.Parameters["pixelwidth"].Value = pixelWidth;
sqlCommand1.Parameters["pixelheight"].Value = pixelHeight;
sqlCommand1.Parameters["originaldataset"].Value = originalDataset;
sqlCommand1.Parameters["originallevel"].Value = originalLevel;
sqlCommand1.Parameters["srid"].Value = 27700;

NpgsqlParameter newds = new NpgsqlParameter("newdataset", NpgsqlTypes.NpgsqlDbType.Text);
newds.Direction = ParameterDirection.Output;
sqlCommand1.Parameters.Add(newds);

NpgsqlParameter newlvl = new NpgsqlParameter("newlevel", NpgsqlTypes.NpgsqlDbType.Integer);
newlvl.Direction = ParameterDirection.Output;
sqlCommand1.Parameters.Add(newlvl);

NpgsqlParameter newx = new NpgsqlParameter("newx", NpgsqlTypes.NpgsqlDbType.Double);
newx.Direction = ParameterDirection.Output;
sqlCommand1.Parameters.Add(newx);

NpgsqlParameter newy = new NpgsqlParameter("newy", NpgsqlTypes.NpgsqlDbType.Double);
newy.Direction = ParameterDirection.Output;
sqlCommand1.Parameters.Add(newy);

try
{
    sqlConnection1.Open();
    sqlCommand1.ExecuteNonQuery();

    newDataset = sqlCommand1.Parameters["newdataset"].Value.ToString();
    newLevel = Convert.ToInt32(sqlCommand1.Parameters["newlevel"].Value);
}

 

 

I cannot find any other matching issues, please help!

 

Gopal



Re: [GENERAL] MSSQL to PostgreSQL : Encoding problem

2006-11-22 Thread Tony Caduto




Of course, but it doesn't work !!!
Whatever client encoding I choose in postgresql before COPYing, I get 
the 'invalid byte sequence error'.


The farther I can get is exporting to UNICODE and importing as UTF8. 
Then COPY only breaks on the euro symbol (otherwise it breaks very 
early, I think on the first non-ascii character).


--
Like I said before, UNICODE != UTF8. That's why the COPY command breaks 
on the Euro symbol.
You will have to export as UNICODE, then encode it as UTF8; then you 
won't get the breakage.


UTF8 is simply a means to store UNICODE pretty much as ASCII text.

You could grab a copy of Delphi TurboExplorer and create an import 
routine using the dbGO ADO components and the PG ODBC driver.
Basically you need to encode any UNICODE data going to the PG server with 
the system.utf8encode function:


[Delphi] function *UTF8Encode*(const WS: WideString): UTF8String;
Call Utf8Encode to convert a Unicode string to UTF-8. WS is the Unicode 
string to convert. Utf8Encode returns the corresponding UTF-8 string.


I would imagine that Perl also has such routines, but I don't know for 
sure. These routines might be in FreePascal as well.


--
Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com
Home of PG Lightning Admin for Postgresql
Your best bet for Postgresql Administration 



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] URL Decoding

2006-11-22 Thread Tino Wildenhain

Arnaud Lesauvage wrote:

Hi List !

I am looking for an easy URL decoding function.
I thought about using regexp_replace, but I can't get it to work :

SELECT regexp_replace('foo%B5bar', '%(..)', '\x\\1' , 'g');
  'fooxB5bar'

I wanted to replace %B5 with the character \xB5 (µ, I think), but of 
course I am doing this wrong...

Is there a simple way to do this ?


I was simply using a stored function using pl/pythonu with
urllib (split and unquote). Works flawlessly :-)
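
A minimal sketch of such a function (the name and the lack of error handling 
are mine; Python 2 urllib as used by plpythonu at the time):

  CREATE OR REPLACE FUNCTION url_decode(encoded text) RETURNS text AS $$
      import urllib
      # unquote() turns %B5 back into the raw byte 0xB5; if the server
      # encoding differs from the URL's encoding you still need to recode.
      return urllib.unquote(encoded)
  $$ LANGUAGE plpythonu;

  SELECT url_decode('foo%B5bar');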

Regards
T.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] MSSQL to PostgreSQL : Encoding problem

2006-11-22 Thread Arnaud Lesauvage

Magnus Hagander wrote:
 I have done this in Delphi using its built-in UTF8 encoding and 
 decoding routines.   You can get a free copy of Delphi 
Turbo Explorer 
 which includes components for MS SQL server and ODBC, so it 
would be 
 pretty straight forward to get this working.
 
 The actual method in Delphi is system.UTF8Encode(widestring).  This 
 will encode unicode to UTF8 which is compatible with a 
Postgresql UTF8 database.


Ah, that's useful to know. Windows just doesn't have the same 
quantity of tools installed as a *nix platform.


If your file is small enough, you can just open it up in Notepad and
re-save it as UTF8. It might play funny with the BOMs though
(byte-order-marks).

There is also, IIRC, an iconv binary available for Windows that should
be able to do such a conversion. Can't remember where though :-)


The file is way too big for notepad. It is even too big for 
notepad++.


I do have the GnuWin32 version of iconv (*great* software 
collection, BTW), but still no go...
I tried iconv -f CP1252 -t UTF-8 
detailrecherche_ansi.csv > detailrecherche_cp1252utf8.csv
and iconv -f LATIN-9 -t UTF-8 detailrecherche_ansi.csv 
> detailrecherche_latin9utf8.csv


Neither will load as UTF8 (invalid byte sequence 0x00).

I am desperate...
--
Arnaud

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] MSSQL to PostgreSQL : Encoding problem

2006-11-22 Thread Arnaud Lesauvage

Alvaro Herrera wrote:

Arnaud Lesauvage wrote:

Alvaro Herrera wrote:
Arnaud Lesauvage wrote:
Tomi NA wrote:
I think I'll go this way... No other choice, actually !
The MSSQL database is in SQL_Latin1_General_CP1_CI_AS.
I don't really understand what this is. It supports the euro
symbol, so it is probably not pure LATIN1, right ?

I suppose you'd have to look at the latin1 codepage character table
somewhere...I'm a UTF-8 guy so I'm not well suited to respond to the
question. :)

Yep, http://en.wikipedia.org/wiki/Latin-1 tells me that 
LATIN1 is missing the euro sign...

Gr I hate this !!!

So use Latin9 ...

Of course, but it doesn't work !!!
Whatever client encoding I choose in postgresql before 
COPYing, I get the 'invalid byte sequence error'.


Humm ... how are you choosing the client encoding?  Is it actually
working?  I don't see how choosing Latin1 or Latin9 and feeding whatever
byte sequence would give you an invalid byte sequence.  These charsets
don't have any way to validate the bytes, as opposed to what UTF-8 can
do.  So you could end up with invalid bytes if you choose the wrong
client encoding, but that's a different error.



mydb=# SET client_encoding TO LATIN9;
SET
mydb=# COPY statistiques.detailrecherche (log_gid, 
champrecherche, valeurrecherche) FROM 
'E:\\Production\\Temp\\detailrecherche_ansi.csv' CSV;

ERROR:  invalid byte sequence for encoding LATIN9: 0x00
HINT:  This error can also happen if the byte sequence does 
not match the encoding expected by the server, which is 
controlled by client_encoding.

CONTEXT:  COPY detailrecherche, line 9212
mydb=# SET client_encoding TO WIN1252;
SET
mydb=# COPY statistiques.detailrecherche (log_gid, 
champrecherche, valeurrecherche) FROM 
'E:\\Production\\Temp\\detailrecherche_ansi.csv' CSV;

ERROR:  invalid byte sequence for encoding WIN1252: 0x00
HINT:  This error can also happen if the byte sequence does 
not match the encoding expected by the server, which is 
controlled by client_encoding.

CONTEXT:  COPY detailrecherche, line 9212


Really, I'd rather have another error, but this is all I can 
get.

This is with the ANSI export.
With the UNICODE export :

mydb=# SET client_encoding TO UTF8;
SET
mydb=# COPY statistiques.detailrecherche (log_gid, 
champrecherche, valeurrecherche) FROM 
'E:\\Production\\Temp\\detailrecherche_unicode.csv' CSV;

ERROR:  invalid byte sequence for encoding UTF8: 0xff
HINT:  This error can also happen if the byte sequence does 
not match the encoding expected by the server, which is 
controlled by client_encoding.

CONTEXT:  COPY detailrecherche, line 592680

So, line 592680 is *a lot* better, but it is still not good!

--
Arnaud


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] MSSQL to PostgreSQL : Encoding problem

2006-11-22 Thread Thomas H.



Or go via MS-Access/Perl and ODBC/DBI perhaps?


Yes, I think it would work. The problem is that the DB is too big for 
this kind of export. Using DTS from MSSQL to export directly to 
PostgreSQL using the psqlODBC Unicode Driver, I exported ~1000 rows per 
second in a 2-column table with ~20M rows. That means several days just 
for this table, and I have bigger ones !


Well it's about 0.25 days, but if it's too long, it's too long.


Sure, sorry for the confusion, the problem is with the other tables (same 
number of rows but a lot of columns, some very large).




well, if it's too slow, then you will have to dump the db to a textfile (DTS 
does this for you) and then convert the textfile to utf8 manually before 
importing it to pgsql. iconv for win32 will help you there. i found tho it 
removes some wanted special characters, so watch out.
a less scientific approach would be using a unicode-aware text editor to 
convert it (ultraedit does this pretty nicely, for example). have had good 
results with it.


loading several million rows will always take some time, tho.

- thomas 




---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] advanced index (descending and table-presorted descending)

2006-11-22 Thread Peter Childs

On 21/11/06, Vic Cekvenich [EMAIL PROTECTED] wrote:


Can pgSQL 8.x do descending indexes like mySQL 5.1?
(so the 1st column is descending and the rest are ascending?)


Can pgSQL 8.x do a physically sorted table (like a forced index order) so we
don't need order by?

tia,
.V




No, and I don't really believe mySQL can; if it can, it's a bug and I would
not trust it. If you want your results sorted you need the order
by.

However, CLUSTER might work for you, but you need to re-cluster after
every update or insert, so it will probably be fine for static data.
I suspect the same is true for MySQL but I can't be bothered to
check; if MySQL really works like this it's worse than I originally
thought it was. Even so I would always include the order by clause for
safety.
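
For completeness, clustering and re-clustering looks like this in the pre-8.3
syntax (table and index names are placeholders):

  CREATE INDEX foo_created_idx ON foo (created);
  CLUSTER foo_created_idx ON foo;   -- rewrites foo in index order, once
  -- after later inserts/updates, just repeat:
  CLUSTER foo;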

(It's a bug because the table (file) will always grow and grow and grow
and eventually take up the entire disk with tonnes of gaps which cannot
be closed without unsorting the data or rewriting the entire file, ie
cluster)

Peter.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] build for non-standard socket

2006-11-22 Thread Tom Lane
garry saddington [EMAIL PROTECTED] writes:
 I have searched and Googled but can't find how to do a ./configure to
 use a different socket than /tmp/.s.PGSQL.5432. It says in the manual
 that it can be done but then does not say how.

See DEFAULT_PGSOCKET_DIR in src/include/pg_config_manual.h

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] PGSQL Newbie

2006-11-22 Thread Wolfe, Robert

Good morning all!  I'm a newbie to PGSQL here so thought I would ask my
first question since I joined this list this morning...

Is it possible to run Postgresql and MySQL together on the same machine?

-- 
Robert Wolfe, Linux and Network Admin
net261.com | http://www.net261.com:85
[EMAIL PROTECTED]


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Multiple currencies in a application

2006-11-22 Thread novnov

Bumping this in hopes that someone can give me a bit of input?



novnov wrote:
 
 I'm working on an application that will eventually need to support various
 currencies. I've never touched this area before, have dealt with dollars
 till now. I'm not sure what the regular practices are re multiple
 currencies in the same application.
 
 The app includes calculations like price per unit, which of course
 involves division and thus fractional monetary units. I'm more concerned
 about that stuff than formatting/presentation.
 
 Users of the app will be able to identify the currency that is relevant
 for their use, and each individual user won't be using more than one
 currency.
 
 Do I handle all currency calcs in functions that adjust for currency?
 

-- 
View this message in context: 
http://www.nabble.com/Multiple-currencies-in-a-application-tf2605959.html#a7485709
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] ERROR: 42601: a column definition list is only allowed for functions returning record

2006-11-22 Thread Tom Lane
Gopal [EMAIL PROTECTED] writes:
 I am using c# and npgsql 1.0 (same behaviour with 0.7) to call a
 function that has 6 input parameters and 4 output parameters.
 When I call .ExecuteNonQuery(), I get the following message
 ERROR: 42601: a column definition list is only allowed for functions
 returning record

I think you need a newer version of npgsql, ie, one that understands
about OUT parameters.  It sounds to me like it is noticing the function
result type is RECORD and it doesn't realize that that doesn't mean
it has to specify the output column types in the query.
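
For what it's worth, a function declared with OUT parameters can be exercised
from plain SQL without any column definition list, e.g. (argument values are
placeholders):

  SELECT newdataset, newlevel, newx, newy
    FROM public.usp_hello(1.0, 2.0, 256, 256, 'olddataset', 0, 27700);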

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] PGSQL Newbie

2006-11-22 Thread Richard Broersma Jr
 Good morning all!  I'm a newbie to PGSQL here so thought I would ask my
 first question since I joined this list this morning...
 
 Is it possible to run Postgresql and MySQL together on the same machine?

Yes, it is even possible to have multiple clusters of postgresql running at the same 
time.  Just
remember that you have to configure each to only use its share of your hardware 
resources so that
they play nice together (i.e. you don't start swapping to disk).

Regards,

Richard Broersma Jr.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] PGSQL Newbie

2006-11-22 Thread Matthias . Pitzl
Hi Robert!

Sure, why not? Both databases use different directories, ports, sockets
and so on.

Greetings,
Matthias

 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of Wolfe, Robert
 Sent: Wednesday, November 22, 2006 4:53 PM
 To: pgsql-general@postgresql.org
 Subject: [GENERAL] PGSQL Newbie
 
 
 
 Good morning all!  I'm a newbie to PGSQL here so thought I 
 would ask my
 first question since I joined this list this morning...
 
 Is it possible to run Postgresql and MySQL together on the 
 same machine?
 
 -- 
 Robert Wolfe, Linux and Network Admin
 net261.com | http://www.net261.com:85
 [EMAIL PROTECTED]
 
 
 

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Multiple currencies in a application

2006-11-22 Thread Alban Hertroys
novnov wrote:
 Bumping this in hopes that someone can give me a bit of input?

Wow, nobody replied?...

 novnov wrote:
 I'm working on an application that will eventually need to support various
 currencies. I've never touched this area before, have dealt with dollars
 till now. I'm not sure what the regular practices are re multiple
 currencies in the same application.

 The app includes calculations like price per unit, which of course
 involves division and thus fractional monetary units. I'm more concerned
 about that stuff than formatting/presentation.

 Users of the app will be able to identify the currency that is relevant
 for their use, and each individual user won't be using more than one
 currency.

 Do I handle all currency calcs in functions that adjust for currency?

A few things you'll probably want:
- Store prices in your db with their original currency
- Make sure you have up-to-date conversion rates (how up to date that
needs to be is up to you)
- Calculate actual prices on demand

We are satisfied with daily updates to our conversion rates, which we
store in a table. Conversion isn't too difficult that way.

Say you want to convert the price of a product from dollars (the
original currency) to euros, your query would look something like this:

SELECT price * target.rate / source.rate
  FROM products
   INNER JOIN rates source ON (products.currency = source.currency),
rates target
 WHERE products.id = 1234
   AND target.currency = 'euro';

I don't think you'll need any functions, unless to retrieve real-time
conversion rates somehow. Otherwise a cron job will do nicely.
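
For reference, a minimal sketch of the tables the query above assumes (names
and types are illustrative only; rate is expressed as units of that currency
per one unit of a common base currency):

  CREATE TABLE rates (
      currency text PRIMARY KEY,
      rate     numeric NOT NULL
  );

  CREATE TABLE products (
      id       integer PRIMARY KEY,
      price    numeric NOT NULL,
      currency text NOT NULL REFERENCES rates (currency)
  );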

-- 
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] Functional Index

2006-11-22 Thread Alexander Presber

Hello everybody,

I am trying to speed up a query on an integer column by defining an 
index as follows


 CREATE INDEX idx_main_subject ON pdb.main (lower(main_subject::text) 
using varchar_ops);


on column main_subject.

I had hoped to get speedups for right-fuzzy LIKE-searches, but EXPLAIN 
ANALYZE yields that the index is not used:


 EXPLAIN ANALYZE SELECT COUNT(*) FROM pdb.main WHERE 
lower(main_subject::text) LIKE lower('10%'::text);
QUERY 
PLAN

Aggregate  (cost=137759.92..137759.93 rows=1 width=0) (actual 
time=3421.696..3421.697 rows=1 loops=1)
  ->  Seq Scan on main  (cost=0.00..137727.17 rows=13096 width=0) 
(actual time=0.036..3300.961 rows=77577 loops=1)

Filter: (lower((main_subject)::text) ~~ '10%'::text)
Total runtime: 3421.751 ms
(4 Zeilen)


Am I misunderstanding the concept of functional indexes? Is there 
another way to achieve this?

Any help is greatly
appreciated.

Yours,
Alexander Presber


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Data transfer between databases over the Internet

2006-11-22 Thread John McCawley
I have found myself somewhat trapped in a project where the software 
being used is essentially set in stone, and its limitations are 
strangling progress.  It is a client-side app written in VB with an 
Access backend, and it has no reasonable data migration/synchronization 
ability.  It was designed for a single user system, and cannot even hook 
up to a different backend.  I have no confidence in the vendor's ability 
to fix this situation in a timely fashion.


A lot of people are trying to tackle this (relatively simple) problem, 
and trying to use bulldozers to hammer a nail.  I am hearing Active 
Directory and Citrix thrown around, and it scares me.  I want to 
offer a clean, quick solution to consolidating this data as our field 
guys enter it without having to change the current work process (i.e. 
while continuing to use this software...I only have so much pull!)


What I would like to do is set up a relational database at our site 
(preferably Postgres) which would ultimately house all of the data from 
the many different client machines.  This database is extremely simple, 
only consisting of 5 tables, and I have already written a simple VB tool 
that is capable of merging these databases into one PostgreSQL backend, 
so logically my task is simple.


What I need to do, however, is write a client-side app that can sit on 
my users' computers and accomplish this task over the Internet.  I want 
to create a simple check-in/check-out type of system that my users can 
use to do the data upload and download themselves.  As I said before, 
the database structure is exceedingly simple, so I will have no 
difficulties writing the logic.  I don't want to re-invent the wheel, 
though, so I wanted to ask this list what would be the most sensible 
transfer mechanism.


1) Simply expose the PostgreSQL database backend to the Internet over an 
encrypted pipe and write my client-side app almost identical to my 
current VB app.  This seems to be the most logically simple solution, 
however I question how fast the app will be since it has to do its work 
through ODBC over the Internet.  Also, is opening up a port like this, 
even encrypted, considered a bad practice?


2) Write my own client and server program that manually transfer the 
data themselves over my own encrypted pipe.  The client side would 
basically do queries to pull all of the data, I'd send it over the pipe 
to the server program, and it would insert or update all of the 
records.  As far as the protocol for sending the data, I suppose I could 
wrap all of the data in XML, but that's pretty verbose and may bulk up 
the data.  Is there any established wire protocol for doing this?  This 
approach seems pretty labor intensive, and I feel like I'd be 
reinventing the wheel.


3) Perhaps there is some existing app, protocol, or combination of apps 
that people use for this purpose that I've never heard of.  Are there 
any other approaches you guys have used when faced with this problem?


John


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Functional Index

2006-11-22 Thread Teodor Sigaev
Use the varchar_pattern_ops operator class; LIKE cannot use varchar_ops for non-C 
locales.


Alexander Presber wrote:

Hello everybody,

I am trying to speed up a query on an integer column by defining an 
index as follows


  CREATE INDEX idx_main_subject ON pdb.main (lower(main_subject::text) 
using varchar_ops);


on column main_subject.

I had hoped to get speedups for right-fuzzy LIKE-searches, but EXPLAIN 
ANALYZE yields that the index is not used:


  EXPLAIN ANALYZE SELECT COUNT(*) FROM pdb.main WHERE 
lower(main_subject::text) LIKE lower('10%'::text);
QUERY 
PLAN
 

Aggregate  (cost=137759.92..137759.93 rows=1 width=0) (actual 
time=3421.696..3421.697 rows=1 loops=1)
  ->  Seq Scan on main  (cost=0.00..137727.17 rows=13096 width=0) 
(actual time=0.036..3300.961 rows=77577 loops=1)

Filter: (lower((main_subject)::text) ~~ '10%'::text)
Total runtime: 3421.751 ms
(4 Zeilen)


Am I misunderstanding the concept of functional indexes? Is there 
another way to achieve

Any help is greatly
appreciated.

Yours,
Alexander Presber

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


--
Teodor Sigaev   E-mail: [EMAIL PROTECTED]
   WWW: http://www.sigaev.ru/

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] PGSQL Newbie

2006-11-22 Thread Joshua D. Drake
On Wed, 2006-11-22 at 10:53 -0500, Wolfe, Robert wrote:
 Good morning all!  I'm a newbie to PGSQL here so thought I would ask my
 first question since I joined this list this morning...
 
 Is it possible to run Postgresql and MySQL together on the same machine?
 

Yes. You might also want to take a look at pgsql-novice :)

Sincerely,

Joshua D. Drake



-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate




---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Functional Index

2006-11-22 Thread Stephan Szabo
On Wed, 22 Nov 2006, Alexander Presber wrote:

 Hello everybody,

 I am trying to speed up a query on an integer column by defining an
 index as follows

   CREATE INDEX idx_main_subject ON pdb.main (lower(main_subject::text)
 using varchar_ops);

 on column main_subject.

 I had hoped to get speedups for right-fuzzy LIKE-searches,

IIRC, unless you're in C locale, you'll want varchar_pattern_ops rather
than varchar_ops on the index to make it considered for a LIKE search.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] MSSQL to PostgreSQL : Encoding problem

2006-11-22 Thread Alvaro Herrera
Arnaud Lesauvage wrote:

 mydb=# SET client_encoding TO LATIN9;
 SET
 mydb=# COPY statistiques.detailrecherche (log_gid, 
 champrecherche, valeurrecherche) FROM 
 'E:\\Production\\Temp\\detailrecherche_ansi.csv' CSV;
 ERROR:  invalid byte sequence for encoding LATIN9: 0x00
 HINT:  This error can also happen if the byte sequence does 
 not match the encoding expected by the server, which is 
 controlled by client_encoding.

Huh, why do you have a 0x00 byte in there?  That's certainly not
Latin9 (nor UTF8 as far as I know).

Is the file actually Latin-something or did you convert it to something
else at some point?

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Functional Index

2006-11-22 Thread Tom Lane
Stephan Szabo [EMAIL PROTECTED] writes:
 On Wed, 22 Nov 2006, Alexander Presber wrote:
 CREATE INDEX idx_main_subject ON pdb.main (lower(main_subject::text)
 using varchar_ops);

 IIRC, unless you're in C locale, you'll want varchar_pattern_ops rather
 than varchar_ops on the index to make it considered for a LIKE search.

text_pattern_ops would be better, seeing that the output of lower() is
text not varchar.  I'm a bit surprised the planner is able to make use
of this index at all.
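
A minimal version of the suggested index, reusing the names from the original
post:

  CREATE INDEX idx_main_subject ON pdb.main (lower(main_subject::text) text_pattern_ops);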

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Data transfer between databases over the Internet

2006-11-22 Thread Tony Caduto

John McCawley wrote:
I have found myself somewhat trapped in a project where the software 
being used is essentially set in stone, and its limitations are 
strangling progress.  It is a client-side app written in VB with an 
Access backend, and it has no reasonable data 
migration/synchronization ability.  It was designed for a single user 
system, and cannot even hook up to a different backend.  I have no 
confidence in the vendor's ability to fix this situation in a timely 
fashion.


I guess I would recommend upgrading your VB application to either 
vb.net or C#, then you can use the Postgresql .net data provider which 
requires no additional client libraries.
This means you could write a nice compact application that would require 
one exe and the .net data provider assembly.

http://npgsql.projects.postgresql.org/

Or you could use Turbo Delphi (http://www.turboexplorer.com/delphi) 
along with a native client access library such as 
PostgresDAC(http://www.microolap.com) or Zeos (http://www.zeoslib.net).


You could also use ASP.net (with npgsql) to create a nice web application.

For C# or vb.net, you don't need a M$ ide, you could use SharpDevelop 
(http://www.icsharpcode.com/OpenSource/SD/Default.aspx) which is excellent.


Hope this helps,

--
Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com
Home of PG Lightning Admin for Postgresql
Your best bet for Postgresql Administration 



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] Trapping PL/Perl spi_query_exec errors

2006-11-22 Thread Worky Workerson

This is probably a really basic question, but how do I trap when
spi_query_exec returns an error in PL/Perl?  I would expect to be able
to use the return value to determine an error condition, however I get
the message:

ERROR:  error from Perl function: relation "foo" does not exist at line 7.

where line 7 looks like:

my $rv = spi_exec_query('select count(*) from foo');

Alternatively, I would expect to be able to trap it with eval, however
the trusted PL/Perl does not support eval.

Thanks!

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


[GENERAL] Buffer overflow in psql

2006-11-22 Thread Jack Orenstein

I'm using PostgreSQL 7.4.8. In January, I reported a psql bug. The
problem was that an INSERT issued through psql would cause a
crash. There was no problem with other operations I tried, or with the
same INSERT submitted through JDBC. The discussion thread begins here:
http://archives.postgresql.org/pgsql-bugs/2006-01/msg00071.php

There was no resolution to this problem -- a bad psql build was
suggested and I couldn't disprove it.

The problem has occurred again, and I've found a buffer overflow in
psql that explains it. Here is code from src/bin/psql/common.c, from
the PrintQueryResults function:

case PGRES_COMMAND_OK:
{
    char buf[10];

    success = true;
    sprintf(buf, "%u", (unsigned int) PQoidValue(results));

In 8.1.5, the sprintf is replaced by an snprintf, resulting in a less
serious form of the bug.

I believe that we end up in this code after an INSERT is processed
through psql. If PQoidValue returns 1000000000 (1 billion) or higher,
which requires 10 characters, then we overflow buf due to the
terminating zero.

Looking at my databases, I find that the problem occurs exactly in the
databases with OIDs above 1 billion for newly inserted rows. (Because
the psql crash occurs in processing results, the INSERT succeeds, and
I can examine the OIDs of the inserted rows.)

My January email indicates that we had been loading data for three
months, so OIDs could conceivably have gotten as high as 1 billion (if
I understand OIDs correctly). The problem occurred again at about the
same time -- three months into a test.

Changing the 10 to 11, INSERTs through psql no longer cause psql to
crash.

I have two questions:

1) Is one of the postgresql developers willing to get this fix into
   the next release? (We're patching our own 7.4.8 build.)

2) If no one else has hit this, then it suggests I might be in
   uncharted territory with OIDs getting this high.  Do I need to
   review my vacuuming strategy? (I can summarize my vacuuming
   strategy for anyone interested.)

Jack Orenstein

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Data transfer between databases over the Internet

2006-11-22 Thread John McCawley
I'm not really sure what switching development platforms for the tool 
would buy me.  Are you saying that the wire-protocol used by vb.net and 
C# etc. is substantially faster than what is available in VB?  Regarding 
ASP.net, I don't really see how a web app would help here, as it has to 
run on hundreds of client machines...Are you suggesting that I install a 
webserver on them?


Tony Caduto wrote:

I guess I would recommend you to upgrade your VB application to either 
vb.net or C#, then you can use the Postgresql .net data provider which 
requires no additional client libraries.
This means you could write a nice compact application that would 
require one exe and the .net data provider assembly.

http://npgsql.projects.postgresql.org/

Or you could use Turbo Delphi (http://www.turboexplorer.com/delphi) 
along with a native client access library such as 
PostgresDAC(http://www.microolap.com) or Zeos (http://www.zeoslib.net).


You could also use ASP.net to(with npgsql)  create a nice web 
application.


For C# or vb.net, you don't need a M$ ide, you could use SharpDevelop 
(http://www.icsharpcode.com/OpenSource/SD/Default.aspx) which is 
excellent.


Hope this helps,



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Functional Index

2006-11-22 Thread Bernhard Weisshuhn
On Wed, Nov 22, 2006 at 11:24:33AM -0500, Tom Lane [EMAIL PROTECTED] wrote:

 Stephan Szabo [EMAIL PROTECTED] writes:
  On Wed, 22 Nov 2006, Alexander Presber wrote:
  CREATE INDEX idx_main_subject ON pdb.main (lower(main_subject::text)
  using varchar_ops);
 
  IIRC, unless you're in C locale, you'll want varchar_pattern_ops rather
  than varchar_ops on the index to make it considered for a LIKE search.
 
 text_pattern_ops would be better, seeing that the output of lower() is
 text not varchar.  I'm a bit surprised the planner is able to make use
 of this index at all.


Since the original poster Alex is a colleague of mine and just ran out
the door, let me pass on his big THANK YOU on his behalf. He is all
smiles now, and the query is fast.

He should also be wearing that ole' brown paper bag, since we even
have an inhouse wiki page documenting the need for varchar_pattern_ops :-)

Anyway, thanks a bunch everybody!

regards,
  Alex  bkw


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Buffer overflow in psql

2006-11-22 Thread Martijn van Oosterhout
On Wed, Nov 22, 2006 at 11:11:09AM -0500, Jack Orenstein wrote:
 I'm using Postgrseql 7.4.8. In January, I reported a psql bug. The
 problem was that an INSERT issued through psql would cause a
 crash. There was no problem with other operations I tried, or with the
 same INSERT submitted through JDBC. The discussion thread begins here:
 http://archives.postgresql.org/pgsql-bugs/2006-01/msg00071.php

snip
   case PGRES_COMMAND_OK:
   {
       char buf[10];
 
       success = true;
       sprintf(buf, "%u", (unsigned int) PQoidValue(results));
 
 In 8.1.5, the sprintf is replaced by an snprintf, resulting in a less
 serious form of the bug.

Looks like you found something.

 1) Is one of the postgresql developers willing to get this fix into
the next release? (We're patching our own 7.4.8 build.)

Probably, though I don't know the release cycle for backpatches.

 2) If no one else has hit this, then it suggests I might be in
uncharted territory with OIDs getting this high.  Do I need to
review my vacuuming strategy? (I can summarize my vacuuming
strategy for anyone interested.)

I think most people have OIDs disabled, which avoids the problem
entirely. Perhaps that's why it hasn't been run into before.

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.




Re: [GENERAL] Data transfer between databases over the Internet

2006-11-22 Thread Merlin Moncure

On 11/22/06, John McCawley [EMAIL PROTECTED] wrote:

I'm not really sure what switching development platforms for the tool
would buy me.  Are you saying that the wire-protocol used by vb.net and
C# etc. is substantially faster than what is available in VB?  Regarding


no, the wire protocols are the same (dictated by the server).  also,
c# and vb.net are almost functionally the same language.


ASP.net, I don't really see how a web app would help here, as it has to
run on hundreds of client machines...Are you suggesting that I install a
webserver on them?


no, i think he is suggesting you rewrite your app as a server-side
application using asp.net.  There are pros and cons to this argument
but if you have a very short timeframe I would suggest going with what
you know.  This could be odbc based or whatever you are most
comfortable with.  Simple transfers can actually work quite well with
vanilla insert statements via odbc/vbscript, especially if you use
(8.1) multiple line insert statements.

My suggestion would be to have your client app connect directly to the
database and transfer the data via standard sql...insert statements and
such.  You can connect directly via ssl or set up an ssh tunnel
forwarding the port to the protected port on the server side...I
prefer to tunnel because this allows me to compile postgresql without
ssl support.  What to do here is really a matter of style.  It is
perfectly safe as long as you take the proper precautions although I
would use nonstandard ports regardless.
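
As a rough sketch of the multiple-row insert form (the table and column
names here are made up, and this VALUES-list syntax is only accepted by
newer servers -- 8.2 betas as of this writing):

INSERT INTO field_report (report_id, technician, body)
VALUES (1, 'jsmith', 'first visit'),
       (2, 'jsmith', 'follow-up'),
       (3, 'mdoe',   'site inspection');

Batching many rows per statement (and many statements per transaction)
cuts down the per-statement round trips, which matters a lot over a WAN.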

merlin

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Data transfer between databases over the Internet

2006-11-22 Thread Tony Caduto

John McCawley wrote:
I'm not really sure what switching development platforms for the tool 
would buy me.  Are you saying that the wire-protocol used by vb.net 
and C# etc. is substantially faster than what is available in VB?  
Regarding ASP.net, I don't really see how a web app would help here, 
as it has to run on hundreds of client machines...Are you suggesting 
that I install a webserver on them?



I would say it probably is a bit faster, but the nice thing is that no 
client libraries are required, hence a thinner deployment.


With a web application you would have the application run at a single 
web server and the clients would access through a browser.  That way you 
would not have to deploy anything.


--
Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com
Home of PG Lightning Admin for Postgresql
Your best bet for Postgresql Administration 



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] MSSQL to PostgreSQL : Encoding problem

2006-11-22 Thread Alvaro Herrera
Arnaud Lesauvage wrote:
 Alvaro Herrera a écrit :
 Arnaud Lesauvage wrote:
 
 mydb=# SET client_encoding TO LATIN9;
 SET
 mydb=# COPY statistiques.detailrecherche (log_gid, 
 champrecherche, valeurrecherche) FROM 
 'E:\\Production\\Temp\\detailrecherche_ansi.csv' CSV;
 ERROR:  invalid byte sequence for encoding LATIN9: 0x00
 HINT:  This error can also happen if the byte sequence does 
 not match the encoding expected by the server, which is 
 controlled by client_encoding.
 
 Huh, why do you have a 0x00 byte in there?  That's certainly not
 Latin9 (nor UTF8 as far as I know).
 
 Is the file actually Latin-something or did you convert it to something
 else at some point?
 
 This is the file generated by DTS with ANSI encoding. It 
 was not altered in any way after that !
 The doc states that ANSI exports with the local codepage 
 (which is Win1252). That's all I know. :(

I thought Win1252 was supposed to be almost the same as Latin1.  While
I'd expect certain differences, I wouldn't expect it to use 0x00 as
data!

Maybe you could have DTS export Unicode, which would presumably be
UTF-16, then recode that to something else (possibly UTF-8) with GNU
iconv.

FWIW, I think the preferred way to set the client encoding on psql is
\encoding.  I'm not sure if it does anything different from the SET
command though.
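
As a sketch of the final import once the file has been recoded to UTF-8
(the recoded filename below is hypothetical):

SET client_encoding TO 'UTF8';
COPY statistiques.detailrecherche (log_gid, champrecherche, valeurrecherche)
    FROM 'E:\\Production\\Temp\\detailrecherche_utf8.csv' CSV;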

-- 
Alvaro Herrera   Valdivia, Chile   ICBM: S 39º 49' 18.1, W 73º 13' 56.4
In Europe they call me Niklaus Wirth; in the US they call me Nickel's worth.
 That's because in Europe they call me by name, and in the US by value!

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Buffer overflow in psql

2006-11-22 Thread Tom Lane
Jack Orenstein [EMAIL PROTECTED] writes:
 The problem has occurred again, and I've found a buffer overflow in
 psql that explains it. Here is code from src/bin/psql/common.c, from
 the PrintQueryResults function:

   case PGRES_COMMAND_OK:
   {
   char    buf[10];

   success = true;
   sprintf(buf, "%u", (unsigned int)
 PQoidValue(results));

Good catch!  What platform and compiler are you using exactly?  I'd
imagine that on most platforms, the size of that array is effectively
rounded up to 12 bytes due to alignment/padding considerations, which
would mask the mistake.  Yours must somehow be putting something
critical right after the array.

 1) Is one of the postgresql developers willing to get this fix into
 the next release? (We're patching our own 7.4.8 build.)

Yeah, we'll fix it.

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Data transfer between databases over the Internet

2006-11-22 Thread John McCawley
I think I may not have described my problem clearly enough...I *already* 
have a server-side app written in PHP with a Postgres backend...This is 
the ultimate destination of the data.  The problem is that I am being 
forced by my client to deploy a 3rd party app on all of my field guys' 
laptops...This app (that I have NO ability to modify) is set in stone, I 
cannot use a different one or modify it.  I am trying to write an app 
client side that exports the data from the .mdb backend of this 3rd 
party program and into my Postgres backend.  This needs to work over the 
Internet, as my guys are not necessarily on my LAN.


Believe me, you don't have to sell me on the benefits of web 
programming.  What I am trying to do is work around the limitations of 
someone else's app.  Everything that I have written on this project 
(with the exception of these glue apps) is PHP/Postgres.



Tony Caduto wrote:


John McCawley wrote:

I'm not really sure what switching development platforms for the tool 
would buy me.  Are you saying that the wire-protocol used by vb.net 
and C# etc. is substantially faster than what is available in VB?  
Regarding ASP.net, I don't really see how a web app would help here, 
as it has to run on hundreds of client machines...Are you suggesting 
that I install a webserver on them?



I would say it probably is a bit faster, but the nice thing is that no 
client libraries are required, hence a thinner deployment.


With a web application you would have the application run at a single 
web server and the clients would access through a browser.  That way 
you would not have to deploy anything.




---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Data transfer between databases over the Internet

2006-11-22 Thread Tony Caduto

Merlin Moncure wrote:


no, the wire protocols are the same (dictated by the server).  also,
c# and vb.net are almost functionally the same language.



Hi,

From experience NPGSQL seems faster than ODBC, though they have the 
same wire protocol.  Must have something to do with the overhead of ODBC.


All .net languages are basically the same but C# has less baggage than 
VB.net because VB.net requires special assemblies for VB features. 

I suggested .net as the npgsql .net data provider is pretty thin 
compared to libpq.dll which has a ton of dependencies these days that 
must also be deployed.
Of course you would need .net on all the client PCs, but more than 
likely it's already installed.


If the client side app is pretty simple then it would be a good idea to 
migrate away from VB 6 as it really is a dead language now.


Food for thought :-)

Later,

--
Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com
Home of PG Lightning Admin for Postgresql
Your best bet for Postgresql Administration 



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Uninstalling PostgreSql

2006-11-22 Thread Guy Rouillier
Bob Pawley wrote:
 I've done this before but being DOS illiterate I can't remember the
 details. 
 
 In order to uninstall PostgreSQL a manual operation needs to take
 place. My instructions are to enter - net user /delete postgres -
 from the command prompt.  
 
 My command prompt comes up C:\  (c/w a directory name depending on
 how I start the prompt) and doesn't recognize the instructions. 

What version of Windows are you using?  Some of the ancient ones didn't
have the net command.  That command should be found in
c:\windows\system32, depending of course on how you've got your OS
installed.  When you say it doesn't recognize the instructions, what
is it saying?  I just ran net help user from the command line, and it
says your last two arguments should be reversed.

At any rate, if you have admin rights on this computer, you'll
accomplish exactly the same thing if you simply open up User Accounts
from the Control Panel and delete the account that way.

-- 
Guy Rouillier

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Data transfer between databases over the Internet

2006-11-22 Thread Tony Caduto

John McCawley wrote:
I think I may not have described my problem clearly enough...I 
*already* have a server-side app written in PHP with a Postgres 
backend...This is the ultimate destination of the data.  The problem 
is that I am being forced by my client to deploy a 3rd party app on 
all of my field guys' laptops...This app (that I have NO ability to 
modify) is set in stone, I cannot use a different one or modify it.  I 
am trying to write an app client side that exports the data from the 
.mdb backend of this 3rd party program and into my Postgres backend.  
This needs to work over the Internet, as my guys are not necessarily 
on my LAN.


Believe me, you don't have to sell me on the benefits of web 
programming.  What I am trying to do is work around the limitations of 
someone else's app.  Everything that I have written on this project 
(with the exception of these glue apps) is PHP/Postgres.




Ok, then scratch the asp.net idea.

The other stuff is still good advice
With C# or VB.net you could use the ado.net driver to access the .mdb 
files,loop through the rows, then use the NPGSQL .net provider to do the 
inserts into the PG backend server for each row iteration.


You could even use Perl to do this pretty easily.

Later,

--
Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com
Home of PG Lightning Admin for Postgresql
Your best bet for Postgresql Administration 



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] MSSQL to PostgreSQL : Encoding problem

2006-11-22 Thread Arnaud Lesauvage

Alvaro Herrera a écrit :

Arnaud Lesauvage wrote:

Alvaro Herrera a écrit :
Arnaud Lesauvage wrote:

mydb=# SET client_encoding TO LATIN9;
SET
mydb=# COPY statistiques.detailrecherche (log_gid, 
champrecherche, valeurrecherche) FROM 
'E:\\Production\\Temp\\detailrecherche_ansi.csv' CSV;

ERROR:  invalid byte sequence for encoding LATIN9: 0x00
HINT:  This error can also happen if the byte sequence does 
not match the encoding expected by the server, which is 
controlled by client_encoding.


Huh, why do you have a 0x00 byte in there?  That's certainly not
Latin9 (nor UTF8 as far as I know).

Is the file actually Latin-something or did you convert it to something
else at some point?

This is the file generated by DTS with ANSI encoding. It 
was not altered in any way after that !
The doc states that ANSI exports with the local codepage 
(which is Win1252). That's all I know. :(


I thought Win1252 was supposed to be almost the same as Latin1.  While
I'd expect certain differences, I wouldn't expect it to use 0x00 as
data!

Maybe you could have DTS export Unicode, which would presumably be
UTF-16, then recode that to something else (possibly UTF-8) with GNU
iconv.


UTF-16 ! That's something I haven't tried !
I'll try an iconv conversion tomorrow from UTF16 to UTF8 !

--
Arnaud

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] MSSQL to PostgreSQL : Encoding problem

2006-11-22 Thread Bruce Momjian
Arnaud Lesauvage wrote:
  I thought Win1252 was supposed to be almost the same as Latin1.  While
  I'd expect certain differences, I wouldn't expect it to use 0x00 as
  data!
  
  Maybe you could have DTS export Unicode, which would presumably be
  UTF-16, then recode that to something else (possibly UTF-8) with GNU
  iconv.
 
 UTF-16 ! That's something I haven't tried !
 I'll try an iconv conversion tomorrow from UTF16 to UTF8 !

Right!  To clarify, Unicode is the character set, and UTF8 and UTF16 are
ways of representing that character set in 8-bit and 16-bit segments,
respectively.  PostgreSQL only supports UTF8, and Win32 only supports
UTF16 in the operating system.  And 0x00 is not a valid value in any of
those, that I know of, but perhaps it is in UTF16.

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Uninstalling PostgreSql

2006-11-22 Thread Guy Rouillier
Bob Pawley wrote:

 My Windows version is 5.1.

Which makes it Windows XP.  I believe the net command is available in
all versions.  I run XP at home and it is there.

 
 My command prompt states that - net or delete - are not recognized as
 an 
 internal or external command.

From the command prompt, enter the command path and hit enter.
Perhaps c:\windows\system32 is not in your path.

 
 The user account displays only the administrative account.
 
 I did find the net file under system 32.

If your path does not include this directory, then just change to it cd
\windows\system32 and issue your command from there.  The current
directory is searched first before the path.

 
 Still baffled.

Run it the way you were given, and if that doesn't work, reverse the
last two params per the help.

 
 Bob
 
 - Original Message -
 From: Guy Rouillier [EMAIL PROTECTED]
 To: Postgresql pgsql-general@postgresql.org
 Sent: Wednesday, November 22, 2006 9:06 AM
 Subject: Re: [GENERAL] Uninstalling PostgreSql
 
 
 Bob Pawley wrote:
 I've done this before but being DOS illiterate I can't remember the
 details. 
 
 In order to uninstall PostgreSQL a manual operation needs to take
 place. My instructions are to enter - net user /delete postgres -
 from the command prompt. 
 
 My command prompt comes up C:\  (c/w a directory name depending on
 how I start the prompt) and doesn't recognize the instructions.
 
 What version of Windows are you using?  Some of the ancient ones
 didn't have the net command.  That command should be found in
 c:\windows\system32, depending of course on how you've got your OS
 installed.  When you say it doesn't recognize the instructions,
 what is it saying?  I just ran net help user from the command line,
 and it says your last two arguments should be reversed. 
 
 At any rate, if you have admin rights on this computer, you'll
 accomplish exactly the same thing if you simply open up User Accounts
 from the Control Panel and delete the account that way.  



-- 
Guy Rouillier


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] MSSQL to PostgreSQL : Encoding problem

2006-11-22 Thread Magnus Hagander
   I thought Win1252 was supposed to be almost the same as Latin1.  
   While I'd expect certain differences, I wouldn't expect it to use 
   0x00 as data!
   
   Maybe you could have DTS export Unicode, which would 
 presumably be 
   UTF-16, then recode that to something else (possibly 
 UTF-8) with GNU 
   iconv.
  
  UTF-16 ! That's something I haven't tried !
  I'll try an iconv conversion tomorrow from UTF16 to UTF8 !
 
 Right!  To clarify, Unicode is the character set, and UTF8 
 and UTF16 are ways of representing that character set in 
 8-bit and 16-bit segments, respectively.  PostgreSQL only 
 supports UTF8, and Win32 only supports
 UTF16 in the operating system.  And 0x00 is not a valid value 
 in any of those, that I know of, but perhaps it is in UTF16.

Actually, Win32 supports UTF8 as well. There are a few operations that
aren't supported on it, but you can certainly read and write files in it
from most builtin apps.

One other problem is that most (all) win32 documentation talks about
UNICODE when it means UTF16 (in <= NT4, UCS-2). And PostgreSQL used to
say UNICODE when we meant UTF8. Adds to the confusion.

Finally, UTF-8 does not represent the characters in 8-bit segments - it
can use anything from 8 to 32 bits. UTF-16 always uses 16 bits. This
also means that you can't talk about 0x00 being valid in UTF-16,
because all characters are 16-bit. It would be 0x0000 or 0x00 0x00.
But that requires an application that knows UTF16, which postgresql
doesn't, so it reports on the first 0x00.

//Magnus

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] PGSQL Newbie

2006-11-22 Thread Jason Earl
Wolfe, Robert [EMAIL PROTECTED] writes:

 Good morning all!  I'm a newbie to PGSQL here so thought I would ask
 my first question since I joined this list this morning...

 Is it possible to run Postgresql and MySQL together on the same
 machine?

Sure, my development machine is setup this way so that I can write
code to migrate away from MySQL.

Jason

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Buffer overflow in psql

2006-11-22 Thread Jack Orenstein

Tom Lane wrote:

Jack Orenstein [EMAIL PROTECTED] writes:

The problem has occurred again, and I've found a buffer overflow in
psql that explains it. Here is code from src/bin/psql/common.c, from
the PrintQueryResults function:



case PGRES_COMMAND_OK:
{
char    buf[10];



success = true;
sprintf(buf, "%u", (unsigned int)
PQoidValue(results));


Good catch!  What platform and compiler are you using exactly?  I'd
imagine that on most platforms, the size of that array is effectively
rounded up to 12 bytes due to alignment/padding considerations, which
would mask the mistake.  Yours must somehow be putting something
critical right after the array.


We're using gcc-4.0.2-8.fc4 on FC4 (intel). I believe that we didn't
just get lucky with the overflow. One of our Linux experts says that
our libc is doing memory bounds checking. Note that the stack
goes through __sprintf_chk:

/lib/libc.so.6(__chk_fail+0x41)[0xb7c0bbc5]
/lib/libc.so.6(__vsprintf_chk+0x0)[0xb7c0b490]
/lib/libc.so.6(_IO_default_xsputn+0x97)[0xb7b8e8d8]
/lib/libc.so.6(_IO_vfprintf+0x1aeb)[0xb7b6a2f7]
/lib/libc.so.6(__vsprintf_chk+0xa1)[0xb7c0b531]
/lib/libc.so.6(__sprintf_chk+0x30)[0xb7c0b484]
/usr/bin/psql[0x804ea63]
/usr/bin/psql[0x805195b]
/usr/bin/psql[0x804afdd]
/usr/bin/psql[0x804cde9]
/usr/bin/psql[0x804deb5]
/usr/bin/psql[0x80510cf]
/usr/bin/psql[0x805336d]
/lib/libc.so.6(__libc_start_main+0xdf)[0xb7b42d7f]
/usr/bin/psql[0x804a8e1]

Our Linux kernel is 2.6.17-1.2139

Jack Orenstein

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] Grant group revoke user isue

2006-11-22 Thread Wim Bertels

Hallo,

version 7.4.7

i first grant a group some privileges,
then i take those privileges away from a user in this group,
somehow this doens't work (no errors, but the rights aren't revoked)

eg:

GRANT ALL ON TABLE test TO GROUP groep;
--the user jimmy is part of groep;
REVOKE ALL ON TABLE test FROM jimmy;
--no complaint, but also nothing is revoked

suggestion, explanations?
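
For what it's worth, a sketch of the behaviour in play: privileges granted
to a group are held by the group itself, so they cannot be revoked from an
individual member. Either revoke them from the group or drop the member:

REVOKE ALL ON TABLE test FROM GROUP groep;
-- or
ALTER GROUP groep DROP USER jimmy;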

tnx,



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Buffer overflow in psql

2006-11-22 Thread Tom Lane
Jack Orenstein [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 Good catch!  What platform and compiler are you using exactly?  I'd
 imagine that on most platforms, the size of that array is effectively
 rounded up to 12 bytes due to alignment/padding considerations, which
 would mask the mistake.  Yours must somehow be putting something
 critical right after the array.

 We're using gcc-4.0.2-8.fc4 on FC4 (intel). I believe that we didn't
 just get lucky with the overflow. One of our Linux experts says that
 our libc is doing memory bounds checking.

Ah so, that explains how come it noticed.  BTW, I see that somebody
already changed the array size to 16 bytes in HEAD --- so it's just
the back branches that need fixing.

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] MSSQL to PostgreSQL : Encoding problem

2006-11-22 Thread Brandon Aiken
It also might be a big/little endian problem, although I always thought that 
was platform specific, not locale specific.

Try the UCS-2-INTERNAL and UCS-4-INTERNAL codepages in iconv, which should use 
the two-byte or four-byte versions of UCS encoding using the system's default 
endian setting.

There's many Unicode codepage formats that iconv supports:
UTF-8
ISO-10646-UCS-2 UCS-2 CSUNICODE
UCS-2BE UNICODE-1-1 UNICODEBIG CSUNICODE11
UCS-2LE UNICODELITTLE
ISO-10646-UCS-4 UCS-4 CSUCS4
UCS-4BE
UCS-4LE
UTF-16
UTF-16BE
UTF-16LE
UTF-32
UTF-32BE
UTF-32LE
UNICODE-1-1-UTF-7 UTF-7 CSUNICODE11UTF7
UCS-2-INTERNAL
UCS-2-SWAPPED
UCS-4-INTERNAL
UCS-4-SWAPPED

Gee, didn't Unicode just so simplify this codepage mess?  Remember when it was 
just ASCII, EBCDIC, ANSI, and localized codepages?

--
Brandon Aiken
CS/IT Systems Engineer
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Arnaud Lesauvage
Sent: Wednesday, November 22, 2006 12:38 PM
To: Arnaud Lesauvage; General
Subject: Re: [GENERAL] MSSQL to PostgreSQL : Encoding problem

Alvaro Herrera a écrit :
 Arnaud Lesauvage wrote:
 Alvaro Herrera a écrit :
 Arnaud Lesauvage wrote:
 
 mydb=# SET client_encoding TO LATIN9;
 SET
 mydb=# COPY statistiques.detailrecherche (log_gid, 
 champrecherche, valeurrecherche) FROM 
 'E:\\Production\\Temp\\detailrecherche_ansi.csv' CSV;
 ERROR:  invalid byte sequence for encoding LATIN9: 0x00
 HINT:  This error can also happen if the byte sequence does 
 not match the encoding expected by the server, which is 
 controlled by client_encoding.
 
 Huh, why do you have a 0x00 byte in there?  That's certainly not
 Latin9 (nor UTF8 as far as I know).
 
 Is the file actually Latin-something or did you convert it to something
 else at some point?
 
 This is the file generated by DTS with ANSI encoding. It 
 was not altered in any way after that !
 The doc states that ANSI exports with the local codepage 
 (which is Win1252). That's all I know. :(
 
 I thought Win1252 was supposed to be almost the same as Latin1.  While
 I'd expect certain differences, I wouldn't expect it to use 0x00 as
 data!
 
 Maybe you could have DTS export Unicode, which would presumably be
 UTF-16, then recode that to something else (possibly UTF-8) with GNU
 iconv.

UTF-16 ! That's something I haven't tried !
I'll try an iconv conversion tomorrow from UTF16 to UTF8 !

--
Arnaud

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] BUG #2772: Undefined Subroutine Pg::connectdb ( );

2006-11-22 Thread Bruno Wolff III
This isn't a bug and you should have asked on a different list. I am going
to move this over to pgsql-general.

On Mon, Nov 20, 2006 at 12:07:25 +,
  S.Balaji [EMAIL PROTECTED] wrote:
 
 In Redhat 7.2 use Pg; command in perl Scripts will working with out any
 Problem.But in CentOS use Pg command will not Working it shows error is
 cannot Locate Pg.pm in Include Directory.After I am Solved this Problem by
 export PERLLIB=/usr/lib/perl5/vendor_perl/5.8.5/i386-linux-thread-multi/DBD/
 command.After it shows error is undefined subroutine Pg::connectdb
 command.how can i solve this Problem without changing the use Pg command or
 any other equivalent command is there.Please help me to solve this Problem

The Pg perl module is not getting much support, but you can get a copy of
the source from ftp://gborg.postgresql.org/pub/pgperl/stable/Pg-2.1.1.tar.gz
and build it yourself.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Uninstalling PostgreSql

2006-11-22 Thread Bob Pawley
Guy Wrote - If your path does not include this directory, then just change 
to it cd \windows\system32 and issue your command from there.  The current

directory is searched first before the path.

This worked - Thanks very much for your help.

Perhaps the PostgreSQL designers can include this in the uninstall procedure 
to make PostgreSQL a little more user friendly to those of us without a PhD 
in the tools we use to develop the systems in which we do have expertise.


Software tools should be as easy to use as a wrench or hammer. It should 
not be required that the user launch a second career in order to make use of 
them.

(With due apologies to Joel Orr.)

Thanks again.

Bob
- Original Message - 
From: Guy Rouillier [EMAIL PROTECTED]

To: PostgreSQL General pgsql-general@postgresql.org
Sent: Wednesday, November 22, 2006 10:13 AM
Subject: Re: [GENERAL] Uninstalling PostgreSql


Bob Pawley wrote:


My Windows version is 5.1.


Which makes it Windows XP.  I believe the net command is available in
all versions.  I run XP at home and it is there.



My command prompt states that - net or delete - are not recognized as
an
internal or external command.



From the command prompt, enter the command path and hit enter.

Perhaps c:\windows\system32 is not in your path.



The user account displays only the administrative account.

I did find the net file under system 32.


If your path does not include this directory, then just change to it cd
\windows\system32 and issue your command from there.  The current
directory is searched first before the path.



Still baffled.


Run it the way you were given, and if that doesn't work, reverse the
last two params per the help.



Bob

- Original Message -
From: Guy Rouillier [EMAIL PROTECTED]
To: Postgresql pgsql-general@postgresql.org
Sent: Wednesday, November 22, 2006 9:06 AM
Subject: Re: [GENERAL] Uninstalling PostgreSql


Bob Pawley wrote:

I've done this before but being DOS illiterate I can't remember the
details.

In order to uninstall PostgreSQL a manual operation needs to take
place. My instructions are to enter - net user /delete postgres -
from the command prompt.

My command prompt comes up C:\  (c/w a directory name depending on
how I start the prompt) and doesn't recognize the instructions.


What version of Windows are you using?  Some of the ancient ones
didn't have the net command.  That command should be found in
c:\windows\system32, depending of course on how you've got your OS
installed.  When you say it doesn't recognize the instructions,
what is it saying?  I just ran net help user from the command line,
and it says your last two arguments should be reversed.

At any rate, if you have admin rights on this computer, you'll
accomplish exactly the same thing if you simply open up User Accounts
from the Control Panel and delete the account that way.




--
Guy Rouillier


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Superuser lost access to particular database

2006-11-22 Thread Francisco Reyes

[EMAIL PROTECTED] writes:


Francisco,

I just read your mail in the pgsql archives.
I have the same problem:
On my server, pg_dump crashes the backend with a SELECT statement.
I'm using pgsql 8.1 on FreeBSD 6
Did you figure out what could be the cause?

This could be a workaround:
I issued the commands
reindexdb -s -U SU DB
and
reindexdb -U SU DB

and pg_dump worked again.



Those commands seem to have fixed my problem too.
In particular pg_dumpall works. Also checked and was able to access the 
trouble DB with the super user.  


I found out the following:
The original SELECT command
(SELECT classid, objid, refclassid, refobjid, deptype FROM pg_depend
WHERE deptype != 'p' ORDER BY 1,2)
issued by pg_dump lets the backend process eat up CPU time.
The command
(SELECT classid, objid, refclassid, refobjid, deptype FROM pg_depend
ORDER BY 1)
also never gets finished but does NOT use CPU time.

All other variations of this command work (at least on
my server) if you drop the ORDER clause for column 1.
E.g.
(SELECT classid, objid, refclassid, refobjid, deptype FROM pg_depend
WHERE deptype != 'p')
works
and
(SELECT classid, objid, refclassid, refobjid, deptype FROM pg_depend
WHERE deptype != 'p' ORDER BY 2)
also works.

So the ORDER BY 1 lets postmaster hang and if it is issued together
with the WHERE  clause it additionally starts eating CPU time.

It seems to me that a nightly vacuum script somehow leads to
this bug: When this bug first ocurred, I switched off vacuum.
Quite a long time nothing happened. A few days ago I switched
it back on again and again the nightly pg_dump would fail.


It has been working for me the same... I had a nightly vacuum too. After the 
reindex I turned it off. I will turn it back on and see if it breaks again. 
I am using 8.1.3, but plan to upgrade to 8.1.5 soon. Will test with 8.1.3... 
if the nightly vacuum breaks the dump, will reindex again... then try to see 
if 8.1.5 has the same issue.



If you are interested in these, please let me know.


Yes please.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] advanced index (descending and table-presorted descending)

2006-11-22 Thread John D. Burger

However, Cluster might work for you, but you need to re-cluster after
every updates or inserts, so it will probably be fine for static data.


This reminds me of a (somewhat off-topic) question I have had:

I have a static database, and most of the tables are 100% correlated  
with one column or another (because I build them that way, or due to  
clustering).  In some cases I join two tables on one of these  
perfectly correlated columns, and so the planner wants to sort the  
two on that column.  Of course, this is unnecessary, and for large  
tables, the sorts get spilled to disk (I suppose) and can take a  
while.  Is there any way to convince the planner that the sorts are  
unnecessary, and it can just zip the two tables together as is?


This is under PG 7.4, by the way.  Any comments welcome.
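
One knob that may be worth experimenting with, offered only as a sketch
(the tables below are hypothetical, and whether it actually helps depends
on what indexes exist): discourage explicit sorts for the session so the
planner leans toward plans that avoid them.

SET enable_sort TO off;   -- per-session cost penalty on sort nodes, not a hard rule

EXPLAIN ANALYZE
SELECT a.id, b.payload
  FROM big_a a
  JOIN big_b b USING (id);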

- John D. Burger
  MITRE


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Trapping PL/Perl spi_query_exec errors

2006-11-22 Thread Tom Lane
Worky Workerson [EMAIL PROTECTED] writes:
 This is probably a really basic question, but how do I trap when
 spi_query_exec returns an error in PL/Perl?

I think you don't, without using plperlu :-(.  Unless perhaps recent
Perls have added an error trapping construct that's separate from eval?

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] MSSQL to PostgreSQL : Encoding problem

2006-11-22 Thread Tomi NA

2006/11/22, Brandon Aiken [EMAIL PROTECTED]:


Gee, didn't Unicode just so simplify this codepage mess?  Remember when it was 
just ASCII, EBCDIC, ANSI, and localized codepages?


Unicode is heaven-sent, compared to 3 or 4 codepages representing
any given (obviously non-English) language, and 3 or 4 more for every
other language you have to deal with in your application. Perfect?
Hardly. But then again, much more so than natural languages.
I'd say we'd deliver products 10-20% faster (in the company I work in)
if people had looked ahead a couple of decades ago and decided upon
something along the lines of Unicode instead of ASCII.

Cheers,
t.n.a.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Multiple currencies in a application

2006-11-22 Thread Jorge Godoy
Alban Hertroys [EMAIL PROTECTED] writes:

 A few things you'll probably want:
 - Store prices in your db with their original currency
 - Make sure you have up-to-date conversion rates (how up to date that
 needs to be is up to you)
 - Calculate actual prices on demand

- Keep the highest precision on prices you can, if you can get sub-cents its
  better

You'll have to worry with rounding / truncation rules (they differ from
country to country so you'll have to either state what you do and why this
might not be accurate or you'll have to code it some way that you can do the
right thing all the time...). 

If you store up to cents, then you might end up losing money or charging too
much.  

If you can get to the fourth decimal place you'll have a safe zone to work
with cents if you deal with a few thousands units of the product.  The more
you sell / buy, the more decimal places would be interesting to have.  Of
course, I'm supposing that cents are important, if you're selling / buying
products that cost thousands or millions of currency here, then this loses
that importance (this is relative to how many monetary units the value
represents in the destination currency).

 We are satisfied with daily updates to our conversion rates, which we
 store in a table. Conversion isn't too difficult that way.

With webservices you can get almost real time rates and you don't even have to
store these rates on your database (even though it is nice to have it for
summaries and reports and also for auditing operations). 

 Say you want to convert the price of a product from dollars (the
 original currency) to euros, your query would look something like this:

 SELECT price * target.rate / source.rate
   FROM products
INNER JOIN rates source ON (products.currency = source.currency),
   rates target
  WHERE products.id = 1234
AND target.currency = 'euro';

 I don't think you'll need any functions, unless to retrieve real-time
 conversion rates somehow. Otherwise a cron job will do nicely.

Yep...  There are some apps that request for the rate to be used when you
login (they usually suppose you'll be working with two currencies: a reference
currency and a local currency).


I've also seen projects that have an artificial value for each product and then
apply conversion rates from this value to any currency they want.  Something
like making 1 unit equal to 10 cents, so a product that costs 10 currency
would be stored as costing 100 units.  Then you just have to have a
conversion table from the basic unit value to the currency you want to deal
with. 

This makes it easier to update prices and do some historical analysis besides
making it easier to make a comparative analysis of each market.
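
A minimal sketch of that internal-unit idea (every table and column name
here is made up):

CREATE TABLE unit_rate (
    currency  char(3) PRIMARY KEY,     -- e.g. 'USD', 'EUR', 'BRL'
    per_unit  numeric(14,6) NOT NULL   -- value of one internal unit in this currency
);

CREATE TABLE product (
    id          serial PRIMARY KEY,
    name        text NOT NULL,
    price_units numeric(14,4) NOT NULL -- price expressed in internal units
);

-- price of product 1234 expressed in euros
SELECT p.name, p.price_units * r.per_unit AS price
  FROM product p, unit_rate r
 WHERE p.id = 1234
   AND r.currency = 'EUR';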


I've also seen systems where each currency (local, actually) can have a
different price.  It makes a lot of sense since paying something like 2
dollars for a coffee on the US doesn't sound all that much but it would be a
robbery here in Brazil if the value of such product was converted to reais :-)


Last but not least, you should also consider how you're going to add /
represent S&H costs, import / export taxes, etc., and how this will impact
the value you'll be showing to the user.


This is just the tip of the iceberg, but I hope it helps a little to see what
is important or not for you.

-- 
Jorge Godoy  [EMAIL PROTECTED]

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] MSSQL to PostgreSQL : Encoding problem

2006-11-22 Thread Martijn van Oosterhout
On Wed, Nov 22, 2006 at 01:55:55PM -0500, Brandon Aiken wrote:
 Gee, didn't Unicode just so simplify this codepage mess?  Remember
 when it was just ASCII, EBCDIC, ANSI, and localized codepages?

I think that's one reason why Unix has standardised on UTF-8 rather
than one of the other Unicode variants. For transmission between
systems it's the easiest to get right...

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


Re: [GENERAL] Data transfer between databases over the Internet

2006-11-22 Thread Karsten Hilbert
 John McCawley wrote:
 I think I may not have described my problem clearly enough...I 
 *already* have a server-side app written in PHP with a Postgres 
 backend...This is the ultimate destination of the data.  The problem 
 is that I am being forced by my client to deploy a 3rd party app on 
 all of my field guys' laptops...This app (that I have NO ability to 
 modify) is set in stone, I cannot use a different one or modify it.  I 
 am trying to write an app client side that exports the data from the 
 .mdb backend of this 3rd party program and into my Postgres backend.  
If that's really all you need you might just write a Python
script. Or do you need a frontend for the PostgreSQL data as
well ? If so it might still be useful to separate it from
the ex-/importer.

 This needs to work over the Internet, as my guys are not necessarily 
 on my LAN.
No problem. I'd just make sure it works over an encrypted
pipe (as you said) - be it ssh or ssl.

Karsten
-- 
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] BUG #2772: Undefined Subroutine Pg::connectdb ( );

2006-11-22 Thread Ragnar
On mið, 2006-11-22 at 13:28 -0600, Bruno Wolff III wrote:
 This isn't a bug and you should have asked on a different list. I am going
 to move this over to pgsql-general.
 
 On Mon, Nov 20, 2006 at 12:07:25 +,
   S.Balaji [EMAIL PROTECTED] wrote:
  
  In Redhat 7.2 use Pg; command in perl Scripts will working with out any
  Problem.But in CentOS use Pg command will not Working it shows error is
  cannot Locate Pg.pm in Include Directory.

this probably means that the package containing the 
old Pg module has not been installed. have you checked 
if such a CentOS package exists ?

for example in debian, this modules comes with the
package libpg-perl

 After I am Solved this Problem by
  export PERLLIB=/usr/lib/perl5/vendor_perl/5.8.5/i386-linux-thread-multi/DBD/
  command.

this certainly does not Solve any problem, as this would fool perl into
loading the module DBD::Pg instead of Pg.

you should not do this, as this may well interfere with
the operation of the correct Pg module if you manage to 
install it from a rpm ,or by source as suggested by 
Bruno below.

 After it shows error is undefined subroutine Pg::connectdb

unfortunately DBD::Pg does not define connectdb()  :-)
  command.how can i solve this Problem without changing the use Pg command or
  any other equivalent command is there.Please help me to solve this Problem
 
 The Pg perl module is not getting much support, but you can get a copy of
 the source from ftp://gborg.postgresql.org/pub/pgperl/stable/Pg-2.1.1.tar.gz
 and build it yourself.

yep

gnari



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Multiple currencies in a application

2006-11-22 Thread novnov

Thanks to the three of you for your thoughts; those are very helpful
perspectives that are going to help me design this. One note: I won't have
to worry about multiple currencies on the internal bookkeeping side of
things, which is a major plus.


Jorge Godoy-2 wrote:
 
 Alban Hertroys [EMAIL PROTECTED] writes:
 
 A few things you'll probably want:
 - Store prices in your db with their original currency
 - Make sure you have up-to-date conversion rates (how up to date that
 needs to be is up to you)
 - Calculate actual prices on demand
 
 - Keep the highest precision on prices you can, if you can get sub-cents
 its
   better
 
 You'll have to worry with rounding / truncation rules (they differ from
 country to country so you'll have to either state what you do and why this
 might not be accurate or you'll have to code it some way that you can do
 the
 right thing all the time...). 
 
 If you store up to cents, then you might end up loosing money or charging
 too
 much.  
 
 If you can get to the fourth decimal place you'll have a safe zone to work
 with cents if you deal with a few thousands units of the product.  The
 more
 you sell / buy, the more decimal places would be interesting to have.  Of
 course, I'm supposing that cents are important, if you're selling / buying
 products that cost thousands or millions of currency here, then this
 looses
 that importance (this is relative to how much monetary units the value
 represents in the destination currency).
 
 We are satisfied with daily updates to our conversion rates, which we
 store in a table. Conversion isn't too difficult that way.
 
 With webservices you can get almost real time rates and you don't even
 have to
 store these rates on your database (even though it is nice to have it for
 summaries and reports and also for auditing operations). 
 
 Say you want to convert the price of a product from dollars (the
 original currency) to euros, your query would look something like this:

 SELECT price * target.rate / source.rate
   FROM products
INNER JOIN rates source ON (products.currency = source.currency),
  rates target
  WHERE products.id = 1234
AND target.currency = 'euro';

 I don't think you'll need any functions, unless to retrieve real-time
 conversion rates somehow. Otherwise a cron job will do nicely.
 
 Yep...  There are some apps that request for the rate to be used when you
 login (they usually suppose you'll be working with two currencies: a
 reference
 currency and a local currency).
 
 
 I've also seem projects that have an artifical value for each product and
 then
 apply conversion rates from this value to any currency they want. 
 Something
 like making 1 unit equal to 10 cents, so a product that costs 10
 currency
 would be stored as costing 100 units.  Then you just have to have a
 conversion table from the basic unit value to the currency you want to
 deal
 with. 
 
 This makes it easier to update prices and do some historical analisys
 besides
 making it easier to make a comparative analisys of each market.
 
 
 I've also seen systems where each currency (local, actually) can have a
 different price.  It makes a lot of sense since paying something like 2
 dollars for a coffee on the US doesn't sound all that much but it would be
 a
 robbery here in Brazil if the value of such product was converted to reais
 :-)
 
 
 Last but not less important you should also consider how you're going to
 add /
 represent SH costs, import / export taxes, etc. and how this will impact
 on
 the value you'll be showing to the user.
 
 
 This is just the tip of the iceberg, but I hope it helps a little to see
 what
 is important or not for you.
 
 -- 
 Jorge Godoy  [EMAIL PROTECTED]
 
 ---(end of broadcast)---
 TIP 2: Don't 'kill -9' the postmaster
 
 

-- 
View this message in context: 
http://www.nabble.com/Multiple-currencies-in-a-application-tf2605959.html#a7498357
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Data

2006-11-22 Thread Bob Pawley
I had an access violation which corrupted the PostgreSQL server to the point 
that it would not open.

I took the opportunity to upgrade to version 8.1.

Is there a way of retrieving the project in the old version and opening it in 
the new version without reinstalling the 8.0 version for a pg_dump?

Bob Pawley

Re: [GENERAL] Data

2006-11-22 Thread Adrian Klaver
On Wednesday 22 November 2006 03:16 pm, Bob Pawley wrote:
 I had an access violation which corrupted the PostgreSQL server to the
 point that it would not open.

 I took the opportunity to upgrade to version 8.1.

 Is there a way of retreiving the project in the old version and opening it
 in the new version without reinstalling the 8.0 version for a pgdump?

 Bob Pawley

Offhand I would say no. The 8.0 server needs to be running in order for 
pg_dump to make a connection and retrieve the data. Transferring the binary 
files will not work because of the version difference. Even if you could copy 
the files I would hesitate because of the corruption issue. 
-- 
Adrian Klaver   
[EMAIL PROTECTED]

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Buffer overflow in psql

2006-11-22 Thread John D. Burger

Tom Lane wrote:


Ah so, that explains how come it noticed.  BTW, I see that somebody
already changed the array size to 16 bytes in HEAD --- so it's just
the back branches that need fixing.


Um, is that really considered a fix???  We all know that there's no  
guarantee at all, even in ANSI C, that unsigned int isn't bigger than  
32 bits, right?  There are still some weird architectures out there.


Whenever I need to print some integer x, I use code like this:

  char buf[1 + sizeof(x) * CHAR_BIT / 3];

I let the compiler figure out the length needed to print in octal,  
and use that as a (slight) over-estimate of the length for decimal.   
As a bonus, the type of x can be changed without having to track down  
this kind of crap.


Alternatively, the code in question could just cast to one of the  
newer fixed-length int types, like int32_t, although that has its own  
problems.


Sorry for the pedantry ...

- John D. Burger
  MITRE


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Buffer overflow in psql

2006-11-22 Thread Tom Lane
John D. Burger [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 Ah so, that explains how come it noticed.  BTW, I see that somebody
 already changed the array size to 16 bytes in HEAD --- so it's just
 the back branches that need fixing.

 Um, is that really considered a fix???  We all know that there's no  
 guarantee at all, even in ANSI C, that unsigned int isn't bigger than  
 32 bits, right?

OID is 32 bits.  Full stop.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] 8.2Beta3 - create view testview (a,b,c) values (1,2,3),(4,5,6),...

2006-11-22 Thread Richard Broersma Jr
I just learned about creating a static view using VALUES in 8.2.  I was surprised to see that
it worked.  My question is, where are these values stored? And are there any limitations to the
number of values that can be added?
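
For context, a minimal sketch of the kind of view I mean (the names and
values are just an example):

CREATE VIEW testview (a, b, c) AS
VALUES (1, 2, 3),
       (4, 5, 6);

SELECT * FROM testview WHERE a = 4;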

Regards,

Richard Broersma Jr.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] IS it a good practice to use SERIAL as Primary Key?

2006-11-22 Thread carter ck

Hi all,

I am wondering if it is a good practice to use a SERIAL index as primary key, 
as it is only available up to 999?


Currently I am dealing with storing LDAP users into Postgres and I am 
looking for a better way to make use of the DN as primary key instead of a 
SERIAL index.


Any advice or suggestion is appreciated.

Thanks.

Regards,
Carter

_
Find singles online in your area with MSN Dating and Match.com! 
http://match.sg.msn.com/match/mt.cfm?pg=channeltcid=281203



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] 2 problems of my postgresql

2006-11-22 Thread zenith siea

see my operations first:
[EMAIL PROTECTED] postgresql-8.0.8]# su postgres
bash-2.05b$ initdb -D /usr/local/pgsql/data
initdb: error while loading shared libraries: libssl.so.4: cannot open
shared object file: No such file or directory
bash-2.05b$ service postgresql status
postmaster stopped
bash-2.05b$ service postgresql start

An old version of the database format was found.
You need to upgrade the data format before using PostgreSQL.
See /usr/share/doc/postgresql-8.0.8/README.rpm-dist for more information.
bash-2.05b$

Question 1:
error while initdb'ing my pgsql database: libssl.so.4 - what is it? what shall I do?

Question 2:
the old version of database format: I have deleted the old
database /usr/local/pgsql/data,
but why does pgsql complain about it?
There is no database in my pgsql now and no database can be updated.
I have looked into the file
/usr/share/doc/postgresql-8.0.8/README.rpm-dist
and found nothing about this question. Is this file the right manual?
What should I do?

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] IS it a good practice to use SERIAL as Primary Key?

2006-11-22 Thread Steve Atkins


On Nov 22, 2006, at 6:23 PM, carter ck wrote:


Hi all,

I am wonderring if it is a good practice to use SERIAL index as  
primary key, as it is only available up to 999?


Where did you get that idea? A serial should be good up to at least  
2,000,000,000 or so, and if that's not enough there's always bigserial.




Currently i am dealing with storing LDAP users into Postgres and i  
am looking for a better way to make use of the DN as primary key  
instead of SERIAL index.


Any advice or suggestion is appreciated.


If you want a synthetic primary key then a serial field is the  
easiest way to create one.


Cheers,
  Steve

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] 8.2Beta3 - create view testview (a,b,c) values (1,2,3),(4,5,6),...

2006-11-22 Thread Tom Lane
Richard Broersma Jr [EMAIL PROTECTED] writes:
 I just learned that about creating a static view using values for 8.2.  I was 
 surpised to see that
 it worked.  My question is, where are these values stored?

In the text of the view definition.

 and Are there any limitations to the
 number of values that can be added?

I think you'd get unhappy with the performance for more than a few
thousand.  This isn't meant to be a substitute for a real table.

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] IS it a good practice to use SERIAL as Primary Key?

2006-11-22 Thread Tom Lane
carter ck [EMAIL PROTECTED] writes:
 I am wonderring if it is a good practice to use SERIAL index as primary key, 
 as it is only available up to 999?

Where in the world did you get that idea?

SERIAL goes up to 2^31 (2 billion); if you need more use BIGSERIAL.

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] IS it a good practice to use SERIAL as Primary Key?

2006-11-22 Thread Richard Broersma Jr
 I am wonderring if it is a good practice to use SERIAL index as primary key, 
 as it is only available up to 999?

That isn't true.  It is much larger than that. If you need more than that there 
is always
bigserial.

serial= int4
bigserial = int8


The type names serial and serial4 are equivalent: both create integer columns. 
The type names
bigserial and serial8 work just the same way, except that they create a bigint 
column. bigserial
should be used if you anticipate the use of more than 2^31 identifiers over the 
lifetime of the
table. 

http://www.postgresql.org/docs/8.2/interactive/datatype-numeric.html#DATATYPE-SERIAL

 Currently i am dealing with storing LDAP users into Postgres and i am 
 looking for a better way to make use of the DN as primary key instead of 
 SERIAL index.
 
 Any advice or suggestion is appreciated.

Here is a similar discussion that you may be interested in:
http://archives.postgresql.org/pgsql-general/2006-10/msg00024.php
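
As a quick sketch of the usual pattern (the table here is hypothetical:
a synthetic serial key plus the DN kept enforceable as a natural unique key):

CREATE TABLE ldap_user (
    id  serial PRIMARY KEY,     -- int4 sequence; switch to bigserial if you expect > 2^31 rows
    dn  text   NOT NULL UNIQUE  -- the LDAP distinguished name
);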

Regards,

Richard Broersma Jr.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Possible problem with PQescapeStringConn and

2006-11-22 Thread Bruce Momjian

I tried to add this to the documentation, config.sgml, but everything I
come up with sounds so long-winded that it would do more harm than good.

---

Jeff Davis wrote:
 On Thu, 2006-10-26 at 19:46 -0400, Tom Lane wrote:
  Jeff Davis [EMAIL PROTECTED] writes:
   You can set standard_conforming_strings in postgresql.conf at any time
   and reload the config, changing the value for all active connections.
   That means that if a client opens a connection, and you SIGHUP postgres,
   and then the client issues a PQescapeStringConn, the client will get an
   incorrectly-escaped string.
  
  The window for this is pretty narrow, because PQescapeStringConn will
  use the latest-delivered parameter status, but it's certainly true that
  randomly changing standard_conforming_strings wouldn't be a bright idea.
  Probably a documentation note recommending against changing it via
  SIGHUP would be sufficient.
  
 
 It's not a narrow time window (which was my original test), but you're
 right that it is narrow in the sense that any command executed on that
 connection will update the status. So, a potential attacker has one
 chance :)
 
 It seems like a documentation note would be sufficient to prevent people
 from changing it too haphazardly. You wouldn't want to change it at
 runtime if the bulk of your queries involved escape sequences.
 
 Regards,
   Jeff Davis
 
 
 ---(end of broadcast)---
 TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not
match

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] ISO week dates

2006-11-22 Thread Bruce Momjian
Peter Eisentraut wrote:
 Brendan Jurd wrote:
   * add an ISO day format pattern to to_char() called 'ID', which
  starts at Monday = 1, and
   * add an ISO year field to extract() called 'isoyear'?
 
 That seems reasonable.  Do you volunteer?

Added to TODO:

* Add ISO day of week format 'ID' to to_char() where Monday = 1
* Add an ISO year field to extract() called 'isoyear'
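
Once implemented, usage would look roughly like this (a sketch of the
proposed behaviour, not of anything in a current release):

SELECT to_char(current_date, 'ID');            -- ISO day of week, Monday = 1
SELECT extract(isoyear FROM current_date);     -- year of the ISO week-numbering calendar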

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] None standard install directories

2006-11-22 Thread garry saddington
Thanks to this list I have managed to install into a non-standard
location and have the socket where I want it - thanks, all. However, now I
am getting a socket.lock as well, and my application is getting a
permission denied error on connect; how can I prevent this? I have
checked that all directory permissions are OK.
Regards
Garry


---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] log database changes - a design problem?

2006-11-22 Thread Luca Ferrari
Hi all,
I've got my database on which I'm building a Java application. Since I'd like 
to store information about who inserted and updated a specified record, I 
placed in my main tables a few additional fields to store such information:
CREATE TABLE myTable(

insertUser varchar(30),
insertDate date,
updateUser varchar(30),
updateDate date,
FOREIGN KEY (insertUser) REFERENCES users(username),
FOREIGN KEY (updateUser) REFERENCES users(username)
);

where 'users' is a table that stores the usernames allowed to connect through my 
application. This solution works, of course, but as you can imagine it is quite 
tedious to insert values for these additional fields into each SQL statement, 
and even if I can set up default values, this can be error prone.
After a while, in order to get some debugging information for my application, I 
started logging into a table the SQL query that each user/client has 
executed. This is to understand which query can be wrong and why, which client 
version has issued it, from where, etc. So I've got a kind of backtrace of 
changes in the database disregarding the above additional fields. Even if 
this log is more difficult to consult than the above additional fields (I 
need to search within the SQL statement string), I was wondering whether to remove 
those additional fields.
Here comes my question: how can I capture user changes to each record in the 
database without being burdened with the user/date details? Can anyone suggest a 
smart solution and/or database design? Has anyone already run into this kind of 
problem?
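
One approach would be a trigger that stamps the user/date columns
automatically, so neither the application nor hand-written SQL has to
mention them; a sketch (assuming the columns above stay in place and
PL/pgSQL is installed) - is there something smarter than this?

CREATE OR REPLACE FUNCTION stamp_audit_cols() RETURNS trigger AS '
BEGIN
    IF TG_OP = ''INSERT'' THEN
        NEW.insertUser := current_user;
        NEW.insertDate := current_date;
    ELSE
        NEW.updateUser := current_user;
        NEW.updateDate := current_date;
    END IF;
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER mytable_audit
    BEFORE INSERT OR UPDATE ON myTable
    FOR EACH ROW EXECUTE PROCEDURE stamp_audit_cols();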

Thanks,
Luca

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/