Re: [GENERAL] plpython2.dll missing from Enterprise DB Postgres distribution

2017-09-06 Thread Alan Millington
I had the following subsequent correspondence with EnterpriseDB.


From: EnterpriseDB Customer Support <supp...@enterprisedb.com>
To: <admilling...@yahoo.co.uk> 
Sent: 25 Aug 2017, 10:19 AM

Your concern about the missing 'plpython2.dll' file in the PostgreSQL 9.6
installer (by EnterpriseDB) was redirected to the EnterpriseDB support portal
by one of our Senior Principal Engineers.
Regarding your queries, I would like to mention that the PostgreSQL 9.6
installer from EDB is complete and includes Python v3 within the package; the
dynamic-link library file for it is present as 'plpython3.dll' inside
your lib directory (C:\Program Files\PostgreSQL\9.6\lib\plpython3.dll).

Please let us know if you have any more queries in this regard.

Vipul Shukla 
EnterpriseDB Corporation 


From: Alan Millington <admilling...@yahoo.co.uk>
To: EnterpriseDB Customer Support <supp...@enterprisedb.com>
Sent: 25 Aug 2017, 12:12 PM

My question was concerning plpython2.dll, not plpython3.dll. plpython2.dll
supports Python 2, and plpython3.dll supports Python 3. Python 2 and Python
3 are two different languages. They are not interchangeable.

The Postgres documentation states in Chapter 43.1 (I am looking at the
documentation for v. 9.5) that "PL/Python supports both the Python 2 and
Python 3 language variants . . . the default Python language variant . . .
is currently Python 2 . . . The default will probably be changed to Python 3
in a distant future release of PostgreSQL, depending on the progress of the
migration to Python 3 in the Python community."

That is a sensible policy. Some users want the new features of Python 3, and
no doubt anyone starting a new system from scratch will use Python 3, but
many organisations have complex systems written in Python 2, and many of
them see no point in spending effort on translating their code to Python 3.

Postgres distributions for other platforms (e.g. Ubuntu) include
plpython2.so.
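Two small illustrations of the point (run under Python 3; my own examples, not taken from the thread) where the same code means different things in the two variants, so a PL/Python function body cannot simply be moved from plpython2u to plpython3u:

```python
# Integer division: Python 2's 1/2 is 0; Python 3's is 0.5.
half = 1 / 2          # 0.5 under Python 3 (would be 0 under Python 2)
floor_half = 1 // 2   # explicit floor division: 0 under both variants

# Text vs bytes: Python 2's plain string is a byte string; Python 3's is
# Unicode, and bytes is a distinct type that does not mix with str.
text = 'abc'
data = b'abc'
bytes_are_not_str = not isinstance(data, str)  # True in Python 3 only
```

Code relying on either old behaviour will run under plpython3u but return different results, which is why the two DLLs are not substitutes.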


From: EnterpriseDB Customer Support <supp...@enterprisedb.com>
To: <admilling...@yahoo.co.uk> 
Sent: 25 Aug 2017, 8:13 PM

As per the update from our development team, we only ship and support
plpython3 with our PostgreSQL installation package.

We also do not plan to ship plpython2.

Vipul Shukla 
EnterpriseDB Corporation 


From: Alan Millington <admilling...@yahoo.co.uk>
To: EnterpriseDB Customer Support <supp...@enterprisedb.com>
Sent: 29 Aug 2017, 10:43 AM

Thank you for that update.

It leaves unanswered the question of why EnterpriseDB does not include
plpython2.dll in its distribution, given the statement in the Postgres
documentation that Python 2 is the default version of Python, and will
remain so for the foreseeable future.


From: EnterpriseDB Customer Support <supp...@enterprisedb.com>
To: <admilling...@yahoo.co.uk> 
Sent: 29 Aug 2017, 11:16 AM

EnterpriseDB stopped providing support for PLPython a long time ago. It is
not possible to support each and every version of an extension indefinitely,
as it might require fixes from time to time. We have been supporting the
latest version and will continue to support it in future.

The documentation for Postgres was also updated to reflect this, to help
customers plan their migration strategy, or use plpythonu if they do not plan
to migrate. Here is an excerpt from the documentation:

https://www.postgresql.org/docs/9.2/static/plpython-python23.html

Existing users and users who are currently not interested in Python 3 use
the language name plpythonu and don't have to change anything for the
foreseeable future. It is recommended to gradually "future-proof" the code
via migration to Python 2.6/2.7 to simplify the eventual migration to Python
3.

In practice, many PL/Python functions will migrate to Python 3 with few or
no changes.

Amit Sharma
EnterpriseDB Corporation 


That leaves me with the last question in my original post. If I try to build
the library myself, would I have to build the whole of Postgres, or is it
feasible to build just the library that I am missing? If the latter, which
source files would I need to include in the build?






-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] plpython2.dll missing from Enterprise DB Postgres distribution

2017-08-24 Thread Alan Millington
There were several queries in 2014 regarding the absence of plpython2.dll from 
the Enterprise DB Postgres distribution for Win32. I should have thought that 
this omission was simply an oversight, which would have been rectified by now. 
However, I find that plpython2.dll is still missing from the EDB distribution 
of Postgres 9.6.3.



I have raised this with EDB. I received an acknowledgment, but I have not heard 
anything more. That is not surprising, as I am a person of no importance. A 
suggestion from Postgres.org that the library should be included might carry 
more weight.
The point was made in 2014 that Postgres.org is not responsible for what EDB 
includes in its distributions. On the other hand, there is a link to the EDB 
site in the Postgres site, and that might be taken as an endorsement. If EDB 
are not willing to provide the library, at the least you might include a 
warning that the distribution is not complete.
I suppose I could try to build the library myself. Would I have to build the 
whole of Postgres, or is it feasible to build just the library that I am 
missing? If the latter, which source files would I need to include in the build?


   



[GENERAL] Using psql -f to load a UTF8 file

2012-09-20 Thread Alan Millington
I am running Postgres 8.4.1 on Windows XP Professional Service Pack 3. My 
database is UTF8. I use psql -f to load files containing DDL and DML commands. 
I discovered a long time ago that psql does not like UTF8 files: it complains 
about the byte order mark on the first line. Up to now I have worked round that 
by making sure that the files were saved as what Microsoft calls ANSI. 
However, that option is not available if I want to insert data which includes 
non-ASCII characters.
 
I have found a suggestion that psql can be told to expect UTF8 files by 
creating a file called psqlrc.conf containing the line "\encoding unicode". I 
have tried putting this file (i) in the Postgres data directory, along with 
postgresql.conf, and (ii) in %APPDATA%\postgresql, but I still get an error:
 
psql:120919_insert_into_letter.sql:1: ERROR:  syntax error at or near "insert"

LINE 1: insert into LETTER_VAR (var_name, type) values ('REPORT_COP...
 
I have found a workaround, which is to start the file with a line containing 
just a semicolon. Then the empty statement fails, but the other statements 
succeed:
 
psql:120919_insert_into_letter_copy2.sql:1: ERROR:  syntax error at or near 

LINE 1: ;
    ^
INSERT 0 1
INSERT 0 1

 
However, I feel sure that there must be a better way.
I have noted bug report #6271, "psql -f reporting unexpected syntax errors on 
first command", which involves the same problem. Álvaro Herrera advised that "You 
need to remove [the byte order mark] before passing the file to psql." But how 
am I supposed to remove the byte order mark from a UTF8 file? I thought that 
the whole point of the byte order mark was to tell programs what the file 
encoding is. Other programs, such as Python, rely on this.
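Failing a psql-side fix, the BOM can be stripped before the file reaches psql -f. A minimal sketch using only Python's standard library (my own helper, not anything psql provides):

```python
import codecs

def strip_utf8_bom(path_in, path_out):
    # psql chokes on the three BOM bytes EF BB BF at the start of the
    # file; drop them if present and leave everything else untouched.
    with open(path_in, 'rb') as src:
        data = src.read()
    if data.startswith(codecs.BOM_UTF8):
        data = data[len(codecs.BOM_UTF8):]
    with open(path_out, 'wb') as dst:
        dst.write(data)
```

Running psql -f against the stripped copy then avoids the spurious syntax error on the first statement.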

Re: [GENERAL] Using psql -f to load a UTF8 file

2012-09-20 Thread Alan Millington
Thank you for the link. I am using Notepad, which inserts the byte order mark. 
Following the links a bit further, I gather that the version of Notepad that I 
am using may not identify a UTF8 file correctly if the byte order mark is 
omitted. Also, as I mentioned, Python makes use of it. (From the Python 
documentation on "Encoding declarations": "If the first bytes of the file are the 
UTF-8 byte-order mark ('\xef\xbb\xbf'), the declared file encoding is UTF-8 
(this is supported, among others, by Microsoft’s Notepad).")
 
The conclusion seems to be that I must use one editor for Python, and another 
for Postgres.
 



From: Leif Biberg Kristensen l...@solumslekt.org
To: Postgres general mailing list pgsql-general@postgresql.org 
Cc: Alan Millington admilling...@yahoo.co.uk 
Sent: Thursday, 20 September 2012, 16:44
Subject: Re: [GENERAL] Using psql -f to load a UTF8 file

On Thursday 20 September 2012 16:56:16, Alan Millington wrote:
 psql. But how am I supposed to remove the byte order mark from a UTF8
 file? I thought that the whole point of the byte order mark was to tell
 programs what the file encoding is. Other programs, such as Python, rely
 on this.

http://en.wikipedia.org/wiki/Byte_order_mark

While the Byte Order Mark is important for UTF-16, it's totally irrelevant to 
the UTF-8 encoding. Still you'll find several editors that automatically insert 
BOMs in every text file. There is usually a setting "Insert Byte Order Mark" 
somewhere in the configuration, and it may be on by default.

regards, Leif

Re: [GENERAL] invalid input syntax for type bytea

2011-09-01 Thread Alan Millington
Thank you for that. Sure enough, at character 36202 I have found \\ 51. I 
assume that the space should be a zero. That appears to be the only error. How 
odd!

--- On Wed, 31/8/11, Tom Lane t...@sss.pgh.pa.us wrote:

From: Tom Lane t...@sss.pgh.pa.us
Subject: Re: [GENERAL] invalid input syntax for type bytea
To: Alan Millington admilling...@yahoo.co.uk
Cc: Postgres general mailing list pgsql-general@postgresql.org
Date: Wednesday, 31 August, 2011, 18:24

Alan Millington admilling...@yahoo.co.uk writes:
 I recently hit an error when trying to insert into a bytea column. The 
 following is the first part of what appears in the Postgres log:
 2011-08-25 14:42:40 BST HINT:  Use the escape string syntax for backslashes, e.g., E'\\'.
 2011-08-25 14:42:40 BST ERROR:  invalid input syntax for type bytea at character 75
 2011-08-25 14:42:40 BST STATEMENT:  insert into FAMILY_DATA (family_id, seq_num, family_pod) values (177, 20,
  
'\\200\\002\\135q\\001\\050cperson\\012Person\\012q\\002\\051\\201q\\003\\175q\\004\\050U\\006\\137namesq\\005cperson\\012\\137Names\\012q\\006\\051\\201q\\007cperson\\012\\137Name\\012q\\010\\051\\201q\\011\\050\\135q\\012X\\003\\000\\000\\00017\\052q\\013a\\135q\\014X\\002\\000\\000\\000ABq\\015ae\\175q\\016U\\006parentq\\017h\\007sba\\175q\\020U\\006personq\\021h\\003sbU\\021nationality\\137eventq\\022cperson\\012Dictof\\137Nationality\\137Event\\012q\\023\\051\\201q\\024X\\004\\000\\000\\000UKBSq\\025\\175q\\026\\051cperson\\012Listof\\137Nationality\\137Event\\012q\\027\\051\\201q\\030\\050cperson\\012Nationality\\137Event\\012q\\031\\051\\201q\\032\\175q\\033\\050U\\015contra\\137eventsq\\034\\051U\\006parentq\\035h\\030U\\004bvosq\\036c\\137\\137builtin\\137\\137\\012set\\012q\\037\\135q
 
U\\002ATq\\041a\\205Rq\\042U\\007cfflagsq\\043\\051U\\006effectq\\044U\\001AU\\005cinfoq\\045ccinfo\\012BCInfo
 (I omit the rest, but there is a close quote and a close parenthesis at the 
 end.)

Well, the part you showed us is perfectly valid bytea data, so the
problem is somewhere in what you omitted.

 If I have counted right, character 75 is the '7' in '\\175', which looks to 
 be a valid octal value.

Unfortunately, that number is just a pointer to the start of the bytea
literal within the statement, so it's not very helpful here :-(.  What
you need to look for is a backslash that's not followed by either a
second backslash or octal digits.

            regards, tom lane
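Tom's rule can be mechanised. A rough helper of my own (not anything shipped with Postgres) that scans the statement text exactly as it appears in the log, where each escaped byte shows up as a doubled backslash plus three octal digits:

```python
def suspect_escapes(literal):
    # In the logged statement a byte appears as '\\' followed by three
    # octal digits (e.g. '\\200'), and a literal backslash byte as four
    # backslashes. Report offsets where neither form follows, such as
    # the '\\ 51' that broke this insert.
    octal = set('01234567')
    bad = []
    i, n = 0, len(literal)
    while i < n:
        if literal[i] != '\\':
            i += 1
        elif literal[i + 1:i + 2] != '\\':
            bad.append(i)              # lone backslash: malformed
            i += 1
        elif literal[i + 2:i + 4] == '\\\\':
            i += 4                     # four backslashes: escaped backslash
        elif len(literal[i + 2:i + 5]) == 3 and \
                all(c in octal for c in literal[i + 2:i + 5]):
            i += 5                     # '\\' + three octal digits
        else:
            bad.append(i)              # e.g. '\\ 51'
            i += 2
    return bad
```

Running it over the full logged literal would have pointed straight at character 36202.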


[GENERAL] invalid input syntax for type bytea

2011-08-31 Thread Alan Millington
I am running Postgres 8.4.1 on Windows XP Professional Service Pack 3. My 
database is UTF8. My program code is written in Python, and to interface to 
Postgres I use mxODBC 3.0 and the PostgreSQL Unicode driver PSQLODBCW.DLL 
version 8.01.02.00 dated 31/01/2006.
I recently hit an error when trying to insert into a bytea column. The 
following is the first part of what appears in the Postgres log:
2011-08-25 14:42:40 BST HINT:  Use the escape string syntax for backslashes, e.g., E'\\'.
2011-08-25 14:42:40 BST ERROR:  invalid input syntax for type bytea at character 75
2011-08-25 14:42:40 BST STATEMENT:  insert into FAMILY_DATA (family_id, seq_num, family_pod) values (177, 20,
 
'\\200\\002\\135q\\001\\050cperson\\012Person\\012q\\002\\051\\201q\\003\\175q\\004\\050U\\006\\137namesq\\005cperson\\012\\137Names\\012q\\006\\051\\201q\\007cperson\\012\\137Name\\012q\\010\\051\\201q\\011\\050\\135q\\012X\\003\\000\\000\\00017\\052q\\013a\\135q\\014X\\002\\000\\000\\000ABq\\015ae\\175q\\016U\\006parentq\\017h\\007sba\\175q\\020U\\006personq\\021h\\003sbU\\021nationality\\137eventq\\022cperson\\012Dictof\\137Nationality\\137Event\\012q\\023\\051\\201q\\024X\\004\\000\\000\\000UKBSq\\025\\175q\\026\\051cperson\\012Listof\\137Nationality\\137Event\\012q\\027\\051\\201q\\030\\050cperson\\012Nationality\\137Event\\012q\\031\\051\\201q\\032\\175q\\033\\050U\\015contra\\137eventsq\\034\\051U\\006parentq\\035h\\030U\\004bvosq\\036c\\137\\137builtin\\137\\137\\012set\\012q\\037\\135q
 
U\\002ATq\\041a\\205Rq\\042U\\007cfflagsq\\043\\051U\\006effectq\\044U\\001AU\\005cinfoq\\045ccinfo\\012BCInfo
(I omit the rest, but there is a close quote and a close parenthesis at the 
end.)
If I have counted right, character 75 is the '7' in '\\175', which looks to be 
a valid octal value.
The statement as shown in the log is generated by mxODBC, not by me, so if 
there is something wrong with it I shall have to take the matter up with 
eGenix. Before I do so it would be useful to know what is wrong with the 
statement.
I had to tweak my own code when I upgraded from Postgres 8.1 to 8.4, but since 
then I have inserted 5092 rows with an average binary data length of 40,000 
bytes. This is the only insert that has failed.


Re: [GENERAL] When is an explicit cast necessary?

2010-04-14 Thread Alan Millington
If you think that smallints are more bother than they are worth, perhaps you 
should remove support for smallints completely. Then people would know where 
they stood. (Or you could make smallint a synonym for int.)
 
The other half of my problem was having to cast the literal 'R' to char(1) 
explicitly. I know that string literals can be used to represent all sorts of 
datatypes, but that is no reason to disallow interpreting them as strings.
 
Or would an implicit cast have worked if I had omitted the length specifier?


--- On Tue, 13/4/10, Tom Lane t...@sss.pgh.pa.us wrote:


From: Tom Lane t...@sss.pgh.pa.us
Subject: Re: [GENERAL] When is an explicit cast necessary?
To: Martijn van Oosterhout klep...@svana.org
Cc: Alan Millington admilling...@yahoo.co.uk, Alban Hertroys 
dal...@solfertje.student.utwente.nl, Greg Smith g...@2ndquadrant.com, 
Postgres general mailing list pgsql-general@postgresql.org
Date: Tuesday, 13 April, 2010, 20:16


Martijn van Oosterhout klep...@svana.org writes:
 The question is: does the column really need to be smallint.

Yeah.  Usually, declaring a function's argument as int rather than
smallint is the easiest fix.  We have looked into this in the past,
and concluded that the negative aspects of allowing integer constants
to implicitly cast to smallint parameters would outweigh the
positives.  As an example, such simple expressions as 2 + 2 would
start to fail because it'd be unclear whether int or smallint addition
is meant.  (And the point isn't academic, since for example it would
affect the overflow threshold.)

            regards, tom lane



  

Re: [GENERAL] When is an explicit cast necessary?

2010-04-10 Thread Alan Millington
In C, if I declare a short variable shortvar, I can write an assignment 
shortvar = 1. The literal value 1 is an int, but the compiler will implicitly 
convert it to a short. Similarly, if I write a function func() which takes a 
short argument, then provided I use function prototypes, I can write a call 
func(1). Again the compiler will implicitly convert the int 1 to a short.
 
In the case of my PL/pgSQL function, the syntax that I have to use to call it
 
    select col1, col2, col3 from func(?, ?, ?) as (col1 int, col2 smallint, 
col3 char(1))
 
clearly indicates what I expect the return datatypes to be. There is no 
ambiguity. The only question is whether 1 can be converted to a smallint, and 
'R' to char(1). Clearly the answer is Yes, as otherwise the explicit casts 
would fail. But it seems that with the change made in Postgres 8.3, I now have 
to write the type information twice. I do not regard that as an improvement!


--- On Sat, 10/4/10, Alban Hertroys dal...@solfertje.student.utwente.nl wrote:


From: Alban Hertroys dal...@solfertje.student.utwente.nl
Subject: Re: [GENERAL] When is an explicit cast necessary?
To: Alan Millington admilling...@yahoo.co.uk
Cc: Greg Smith g...@2ndquadrant.com, Postgres general mailing list 
pgsql-general@postgresql.org
Date: Saturday, 10 April, 2010, 11:51


On 9 Apr 2010, at 16:57, Alan Millington wrote:

 However, every language that I have ever used will implicitly convert an 
 integer 1 to a smallint (or short) 1 when required to do so. How can such a 
 cast be called surprising behaviour, or produce misleading results, to 
 quote the first article?

No, they probably don't cast integer values down to smallint. What they do is 
cast the smallint up to integer, as that's a safe cast. After all, numbers that 
fit in an int may not fit in a smallint (try select 75000::smallint; for 
example) and you'd lose data casting it down, but it's fine the other way 
around.
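Alban's point, that the down-cast is the unsafe direction, is easy to show outside SQL. This sketch models smallint as a signed 16-bit value using Python's struct module (my own analogy for illustration, not how the server implements it):

```python
import struct

def to_smallint(n):
    # Round-trip the value through a signed 16-bit slot. struct raises
    # on overflow, which is the analogue of 75000::smallint failing.
    return struct.unpack('<h', struct.pack('<h', n))[0]
```

to_smallint(1) succeeds, while to_smallint(75000) raises, mirroring why the database will silently widen a smallint to int but refuses to narrow an int implicitly.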

Since your function has smallint as one of its parameter types the database 
can't cast the smallint up to an int like it would normally do in such cases as 
the function doesn't accept integer values for that parameter. PG can't do much 
but throw an error.

Alban Hertroys






  

[GENERAL] When is an explicit cast necessary?

2010-04-09 Thread Alan Millington
I am using Postgres 8.4.1 on Windows XP Professional Service Pack 3.
 
I have a PL/pgSQL function which is defined as returns record. The record 
contains three values. In one execution path, the values are read from a table, 
the selected columns being of types int, smallint and char(1). In another 
execution path, the second and third values are the literals 1 and 'R'. In the 
original version of the function the assignment in the second case was as 
follows:
 
   v_rv = (v_id, 1, 'R') ;

where v_rv is a variable of type record, and v_id is of type int. The client 
application calls the function as follows:
 
   select col1, col2, col3 from func(?, ?, ?) as (col1 int, col2 smallint, col3 
char(1))
 
As far as I remember, when I was using Postgres 8.1.4 that worked, but under 
Postgres 8.4.1 it results in the errors "Returned type integer does not match 
expected type smallint" and "Returned type unknown does not match expected type 
character". I can avoid the error by altering the assignment thus:
 
   v_rv = (v_id, 1::smallint, 'R'::char(1)) ;

but I am puzzled as to why these explicit casts should be necessary. Is this 
covered anywhere in the documentation?
 


  

Re: [GENERAL] When is an explicit cast necessary?

2010-04-09 Thread Alan Millington
Thank you for that helpful information. I thought I was going mad!
 
It would never have occurred to me to write a join which relied on an implicit 
cast between int and string. However, every language that I have ever used will 
implicitly convert an integer 1 to a smallint (or short) 1 when required to do 
so. How can such a cast be called surprising behaviour, or produce 
misleading results, to quote the first article?


--- On Fri, 9/4/10, Greg Smith g...@2ndquadrant.com wrote:


From: Greg Smith g...@2ndquadrant.com
Subject: Re: [GENERAL] When is an explicit cast necessary?
To: Alan Millington admilling...@yahoo.co.uk
Cc: Postgres general mailing list pgsql-general@postgresql.org
Date: Friday, 9 April, 2010, 15:06


Alan Millington wrote:
 
 As far as I remember, when I was using Postgres 8.1.4 that worked, but under 
 Postgres 8.4.1 it results in the errors Returned type integer does not match 
 expected type smallint and Returned type unknown does not match expected 
 type character.
 

There was a major breaking change to how casting is handled in 8.3.  Some good 
reading about what happened:

http://www.depesz.com/index.php/2008/05/05/error-operator-does-not-exist-integer-text-how-to-fix-it/
http://petereisentraut.blogspot.com/2008/03/readding-implicit-casts-in-postgresql.html

-- Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndquadrant.us




  

Re: [GENERAL] Error invalid byte sequence for encoding UTF8 on insert into BYTEA column

2010-01-23 Thread Alan Millington
You probably need to ask the mxODBC developers (who AFAIK don't hang out
on this list) what they are doing with that data.  It sounds fairly
likely to me that the bytea value is just being sent as a string without
any special encoding.  That would explain both the null sensitivity you
mention later in the thread, and the encoding validity complaints ---
PG 8.1 was much less picky about string encoding validity than recent
versions are.
There are basically two ways that you could make this work reliably:
arrange for the bytea value to be sent as an out-of-line binary
parameter, or encode it using backslash sequences (eg, '\000' for a
null).  Whether the former is possible with mxODBC I dunno.  The latter
might be something that mxODBC will do for you if it knows the value
is supposed to be bytea, but without that knowledge I don't see how it
could.  You might end up having to do the encoding yourself.
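The second option mentioned above, encoding with backslash sequences, can be sketched in Python. This is a hand-rolled illustration of the escape-format idea (real drivers have their own adaptors, and the result still needs proper SQL string quoting):

```python
def bytea_escape(data):
    # Build the body of an escape-format bytea literal: printable ASCII
    # passes through, a backslash byte is doubled, and every other byte
    # (including NUL) becomes a three-digit octal escape like \000.
    out = []
    for b in bytearray(data):
        if b == 0x5C:                  # backslash
            out.append('\\\\')
        elif 0x20 <= b <= 0x7E:        # printable ASCII
            out.append(chr(b))
        else:
            out.append('\\%03o' % b)
    return ''.join(out)
```

So three NUL bytes become the text \000\000\000, which survives any text-oriented path; the doubled-backslash form seen in the earlier logs is this output after SQL string escaping.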

Preliminary notes:
 
1. I have now confirmed that at some point I upgraded from mxODBC 3.0 to 3.0.3. 
The statement in my original posting that my mxODBC installation had not 
changed was wrong.
 
2. The Python 'str' datatype is used for any sequence of single bytes, like C's 
array of char. One cannot tell from the datatype what these bytes are intended 
to represent: it could be ASCII characters, characters in any single-byte 
encoding, Unicode in any encoding, or binary data.
 
I have discovered a workaround, which is to pass the data to mxODBC in a Python 
buffer object, which clearly identifies the data as binary.
 
I wrote to eGenix about this as follows:

For some years I have used a Postgres 8.1.4 database (UTF8) with the Postgres 
Unicode ODBC driver. At first I used mxODBC 2.0 as my interface with Python 
2.4, then I upgraded to mxODBC 3.0 for Python 2.5. I am now on Python 2.6.
  
 With mxODBC 2.0 I had to use the Binary() function to convert binary data to a 
buffer object if I was to insert it successfully into a bytea column. With 
mxODBC 3.0 I found that that was no longer necessary, though now I had to check 
the “bytea as LO” option of the driver.
  
 At some point I upgraded to mxODBC 3.0.3, and recently I upgraded to Postgres 
8.4.1. On trying to insert into a bytea column yesterday I found I was no 
longer able to do so. Either I received an invalid byte sequence for encoding 
UTF8 error from Postgres, or the data was truncated at the first NULL, as 
though it was a C-style string.
  
 I find, however, that if I use Binary() again, I can insert the data correctly.
  
 It seems to make no difference whether I use BIND_USING_SQLTYPE or 
BIND_USING_PYTHONTYPE.
  
 Possibilities:
  
 i) Postgres has changed its processing of bytea input.
ii) mxODBC has changed the way it handles such data.
iii) Both have changed.
  
 Can you shed any light on this?
Marc-Andre Lemburg replied as follows:

mxODBC has not changed in this respect between 3.0.0 and 3.0.3,
so it must be the PostgreSQL code that has changed.
 
Note that changing the bind type doesn't help in this case, since
the PostgreSQL ODBC driver does not provide the necessary APIs
to do SQL type binding. As a result, mxODBC will always use
Python type binding.
 
By using the Binary() wrapper for binary data (which really is just
an alias for buffer()), you tell mxODBC to send the data as
binary data to PostgreSQL. Without the wrapper, mxODBC will send
the data as text data and since it is not possible to store
binary data as UTF-8 you get the errors you are seeing.
What puzzles me is hinted at in the last sentence: why does Postgres 8.4.1 
(though apparently not 8.1.4) try to interpret the bytes as UTF8 when they are 
being sent to a column that is typed as bytea?
 
I apologise if this posting is excessively long, but I like to understand the 
reasons for things, and others may find the information useful.
 
 
 
 


  

Re: [GENERAL] Error invalid byte sequence for encoding UTF8 on insert into BYTEA column

2010-01-22 Thread Alan Millington
 Today for the first time since upgrading to Postgres 8.4.1 I tried
 out part of the code which inserts some binary data into a table. The
 insert failed with the error invalid byte sequence for encoding
 UTF8. That is odd, because the column into which the insert was made
 is of type bytea: the data is meant to be binary data, not
 UTF8-encoded Unicode.
 
Inserting in bytea needs an extra level of escaping when the parameters
are sent inline. See
 
http://www.postgresql.org/docs/8.4/interactive/datatype-binary.html
 
I have compared the 8.4 documentation for bytea with the 8.1 documentation, and 
as far as I can see, nothing of significance has changed.
 
 When I was using Postgres 8.1.4 the same code worked. My code, the
 mxODBC code and the driver are all unaltered: only the Postgres
 version has changed. Is there something I can tweak to get it to
 work on Postgres 8.4.1?
 
Very odd. The syntax for this hasn't changed in a long time. And I
would have thought you'd be sending your parameters out of line anyway.
Can you check that?
 
The difficulty here is that several layers of software are involved. It is not 
as though I were writing in C and calling Postgres interface functions directly.
 
What I send to mxODBC is the command as a string containing placeholders 
(question marks), e.g. insert into mytable (intcol, byteacol) values (?, ?), 
plus the actual values as separate arguments, e.g. 1, data (where 1 is a 
literal and data is a Python variable containing binary data). What mxODBC does 
with this I have no idea. It is a black box as far as I am concerned.
 
However, when I was running Postgres 8.1.4 it used to work. If I knew that 
something had changed in Postgres between 8.1.4 and 8.4.1, I could go to eGenix 
and ask whether the latest mxODBC version supports the change. However, if you 
are telling me that nothing in this area has changed in Postgres, I am baffled!



  

Re: [GENERAL] Error invalid byte sequence for encoding UTF8 on insert into BYTEA column

2010-01-22 Thread Alan Millington
Having done some investigation, I can shed further light on the problem.
 
Within an interactive Python session I connected to the database using mxODBC: 
the variable csr is an mxODBC cursor object. I made the following assignments:
 
sql = 'insert into mytable (seq_num, data) values (?, ?)'
data = '\x00\x00\x00'
 
i.e. data is three NULLs.
 
I then executed the command
 
csr.execute(sql, (1, data))
 
The insert succeeded, but when I selected length(data) back from the table 
(using pgAdmin III) it was zero.
 
I tried again, using data = '\x01\x01\x00\x01' and seq_num 2. This time, when I 
selected back the length of the data it was 2, namely the first two bytes.
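For contrast, binding the same bytes as an explicitly binary parameter preserves them. Here is a sketch using Python's bundled sqlite3 module purely as a neutral stand-in (an assumption for illustration only; mxODBC's binding machinery is different):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('create table mytable (seq_num integer, data blob)')

payload = b'\x01\x01\x00\x01'
# Wrapping the value in Binary() marks it as a BLOB, so all four bytes,
# embedded NUL included, reach the table intact.
con.execute('insert into mytable (seq_num, data) values (?, ?)',
            (2, sqlite3.Binary(payload)))
(stored_len,) = con.execute('select length(data) from mytable').fetchone()
```

Here stored_len is 4, not the 2 observed when the bytes were evidently treated as a NUL-terminated text string somewhere along the way.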
 
So it seems that at some point the string is being parsed as though it were a 
C-style NULL-terminated string. Having checked the dates on all my files, I 
think I may have downloaded a new mxODBC version since I last successfully did 
a binary insert, so it may be that mxODBC is the culprit, but it would be 
helpful if you could confirm that the Postgres parsing has not changed. I 
recall that I came across a similar problem when I tried to write a PL/Python 
function which returned binary data to a PL/pgSQL function: in order to get it 
to work I had to convert the data to base64Binary in the Python function and 
then convert it back in the PL/pgSQL function. That option is not open to me in 
this case.
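The base64 workaround mentioned above amounts to this on the Python side (my own minimal sketch; on the SQL side the PL/pgSQL function would reverse it with decode(..., 'base64')):

```python
import base64

def encode_for_transfer(data):
    # Hand the binary out as base64 text, which survives any
    # text-oriented, NUL-unfriendly path unharmed.
    return base64.b64encode(data).decode('ascii')

def decode_after_transfer(text):
    # The receiving side simply reverses the encoding.
    return base64.b64decode(text)
```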
 
 

 


  

[GENERAL] Error invalid byte sequence for encoding UTF8 on insert into BYTEA column

2010-01-21 Thread Alan Millington
I am running Postgres 8.4.1 on Windows XP Professional Service Pack 3. My 
database is UTF8. My program code is written in Python, and to interface to 
Postgres I use mxODBC 3.0 and the PostgreSQL Unicode driver PSQLODBCW.DLL 
version 8.01.02.00 dated 31/01/2006.
 
Today for the first time since upgrading to Postgres 8.4.1 I tried out part of 
the code which inserts some binary data into a table. The insert failed with 
the error "invalid byte sequence for encoding UTF8". That is odd, because the 
column into which the insert was made is of type bytea: the data is meant to be 
binary data, not UTF8-encoded Unicode.
 
When I was using Postgres 8.1.4 the same code worked. My code, the mxODBC code 
and the driver are all unaltered: only the Postgres version has changed. Is 
there something I can tweak to get it to work on Postgres 8.4.1?
 
I confirm that the driver option "bytea as LO" is checked. That has always been 
necessary in the past.
 


  

Re: [GENERAL] Could not open relation XXX: No such file or directory

2009-08-22 Thread Alan Millington
On 19/08/2009 6:38 PM, Craig Ringer wrote:

Got a virus scanner installed? If so, remove it (do not just disable it) and 
see if you can reproduce the problem. Ditto anti-spyware software. 
 
You should also `chkdsk' your file system(s) and use a SMART diagnostic tool to 
test your hard disk (assuming it's a single ATA disk). 
chkdsk reported that the disc is clean.
 
Since installing Postgres in early 2007 I have been running it together with 
McAfee with no problem. A few days ago McAfee was deinstalled and Kaspersky 
installed in its place, so Kaspersky appeared to be a suspect.
 
However, on looking at the matter again, I am now almost certain that I caused 
the problem myself. I have a Python function which (as a workaround to a 
problem which exists in Python 2.4, the version to which Postgres 8.1.4 is 
tied) executes a chdir. It appears that once this has happened, the current 
Postgres session is no longer able to find any new data files, though evidently 
it is still able to use those that it has located previously. If you can 
confirm that Postgres does indeed rely on the current working directory to 
locate its data files, the problem is solved.
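For anyone stuck with such a chdir, wrapping it so the previous directory is always restored limits the damage. A minimal sketch (my own helper, nothing Postgres-specific):

```python
import os
from contextlib import contextmanager

@contextmanager
def preserve_cwd(path):
    # Run a block of code in `path`, then always restore the old working
    # directory, even if the block raises, so the server process is not
    # left somewhere it cannot find its relation files.
    old = os.getcwd()
    os.chdir(path)
    try:
        yield
    finally:
        os.chdir(old)
```

Used as `with preserve_cwd(workdir): ...` inside the PL/Python function, the backend's working directory is the same after the call as before it.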
 
Moral: never underestimate the stupidity of the people who post the questions 
(in this case, me)! No doubt this provides one example of why Python is deemed 
unsafe.
 


  

[GENERAL] Could not open relation XXX: No such file or directory

2009-08-19 Thread Alan Millington
I am running Postgres 8.1.4 on Windows XP Professional Service Pack 3.
 
Yesterday when attempting to select from one particular database I repeatedly 
encountered errors such as the following:
 
2009-08-18 10:49:54 ERROR:  could not open relation 1663/51796/1247: No such 
file or directory
2009-08-18 10:49:54 STATEMENT:  SELECT typname FROM pg_type WHERE oid = 1043
 
1247 is pg_type. In Explorer I could see the file 51796/1247, and the timestamp 
indicated that it had not recently been updated. I had never previously seen 
this error.
 
As that database is used only for testing, I dropped it and loaded a new one 
from a dump. However, earlier today a colleague working against this new 
database experienced a slightly different error on several occasions:
 
2009-08-19 03:06:45 ERROR:  could not read block 0 of relation 
1663/52752/52896: No such file or directory
2009-08-19 03:06:45 STATEMENT:  SELECT * FROM RP.RP_ATTRIBUTE WHERE rp_id = 
(('MM100')::text)
 
52752/52896 is the Primary Key index on the referenced table. Again, I can see 
the file in Explorer, and it has not been updated since the database was 
created. When I try the same Select statement now I get no error.
 
Clearly something is amiss, but I don't know what. I should be grateful for any 
suggestions as to what I should check.
 


  

[GENERAL] plpython.dll dependencies

2009-08-12 Thread Alan Millington

I note that plpython.dll references a specific Python version. In the case of 
Postgres 8.1.4, which is what I have installed, the reference is to 
Python24.dll.
 
Is there yet a Postgres version in which plpython.dll references Python26.dll? 
(I am running on Windows XP Professional Service Pack 3.)


  

[GENERAL] Problem with Check Constraint in pg_restore

2009-07-13 Thread Alan Millington
I am running Postgres 8.1.4 on Windows XP Professional Service Pack 3.
 
I recently tried to make a copy of an existing database. I made a dump using 
pg_dump -Fc, I created a new database from template0, and attempted to restore 
into it using pg_restore. The tables were created and about half were 
populated, but the restore was halted by the following error (messages from the 
log):
 
2009-07-10 17:47:52 ERROR:  new row for relation RP_DATE violates check 
constraint RP_DATE_check
2009-07-10 17:47:52 CONTEXT:  COPY RP_DATE, line 565: 
CB11 OLD_CI 1901-01-01 ? [OLD]: Date appointed by HM Proclamation 17 Sep 1900
2009-07-10 17:47:52 STATEMENT:  COPY RP_DATE (rp_id, date_type, date, 
accuracy, notes) FROM stdin;

That is puzzling, as both data and constraint came from the existing database, 
where they had happily coexisted. I find that I am able to insert the 
complained-of row into the new database using either INSERT or COPY, so the 
statement that the constraint is violated cannot be correct.
 
The constraint as displayed by pgAdmin III is as follows (it is the same in the 
original and the restored database):
 
CONSTRAINT RP_DATE_check CHECK (date IS NULL AND accuracy = '?'::bpchar OR 
date IS NOT NULL AND date > '1099-12-31'::date AND (accuracy = 'D'::bpchar OR 
accuracy = 'M'::bpchar AND date::text ~~ '%-01'::text OR (accuracy = 
'Y'::bpchar OR accuracy = '?'::bpchar) AND date::text ~~ '%-01-01'::text))
 
The solution may be to upgrade to a more recent Postgres version, but as that 
is likely to involve making a database dump and restoring it, I thought I 
should check whether the problem is caused by a bug which has now been fixed.
 


  

Re: [GENERAL] Problem with Check Constraint in pg_restore

2009-07-13 Thread Alan Millington
Thank you for that.
 
Having done some further investigation I had concluded that the problem was 
probably with the LIKE (~~) comparison. I created a text dump file with the -d 
(use INSERT rather than COPY) option so that I could see which rows failed. All 
of the rows which exercised the LIKE test failed, and all those that did not 
succeeded.
 
I remain a bit puzzled as to why the comparison should work one way when INSERT 
or COPY is invoked from pgAdmin III, and another when COPY is invoked from 
pg_restore or INSERT is contained in a file fed to psql. Be that as it may, I 
shall take your advice and use EXTRACT instead, if that is going to be more 
robust.
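Tom's point can be illustrated outside SQL as well: a check written against the textual form of a date silently depends on whatever output formatting is in effect, whereas a check on the extracted components does not. A small Python analogue (not the Postgres constraint itself; the two format strings stand in for different DateStyle settings):

```python
from datetime import date

d = date(1901, 1, 1)

# Two plausible textual renderings of the same date under different
# date-style settings (analogous to Postgres's DateStyle).
iso_style = d.strftime("%Y-%m-%d")   # '1901-01-01'
dmy_style = d.strftime("%d/%m/%Y")   # '01/01/1901'

# A LIKE-style test against the text passes or fails depending on
# which rendering the session happens to produce...
assert iso_style.endswith("-01-01")
assert not dmy_style.endswith("-01-01")

# ...whereas extracting the components (SQL's EXTRACT) is style-proof.
assert d.month == 1 and d.day == 1
```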


--- On Mon, 13/7/09, Tom Lane t...@sss.pgh.pa.us wrote:


From: Tom Lane t...@sss.pgh.pa.us
Subject: Re: [GENERAL] Problem with Check Constraint in pg_restore
To: Alan Millington admilling...@yahoo.co.uk
Cc: pgsql-general@postgresql.org
Date: Monday, 13 July, 2009, 3:06 PM


Alan Millington admilling...@yahoo.co.uk writes:
 CONSTRAINT RP_DATE_check CHECK (date IS NULL AND accuracy = '?'::bpchar OR 
 date IS NOT NULL AND date > '1099-12-31'::date AND (accuracy = 'D'::bpchar OR 
 accuracy = 'M'::bpchar AND date::text ~~ '%-01'::text OR (accuracy = 
 'Y'::bpchar OR accuracy = '?'::bpchar) AND date::text ~~ '%-01-01'::text))

These ~~ (LIKE) tests on date columns seem horribly unsafe.  I suspect
your problem is that you're trying to load the data with DateStyle
set to some setting other than what this constraint is hard-wired
to assume.  Personally I'd suggest replacing the LIKEs with something
using EXTRACTs, or some other way that doesn't depend on the textual
formatting of dates.

            regards, tom lane