Thanks for the suggestion ... since the data involved came from different 
sources, I suspect there may be more than one encoding, but this has great 
promise.

Greg


-----Original Message-----
From:   Russell Smith [mailto:[EMAIL PROTECTED]
Sent:   Thu 11/16/2006 7:27 PM
To:     Gregory S. Williamson
Cc:     pgsql-general@postgresql.org
Subject:        Re: [GENERAL] Eliminating bad characters from a database for 
upgrading  from 7.4 to 8.1

Gregory S. Williamson wrote:
> Dear list,
>
> I have been banging my head against a problem for a few days now, and 
> although I am making progress it is painfully slow, and I am hoping that some 
> one out there can steer me in a better way.
>
> I've got a medium size database (7.4), about 64 gigs of data, about 1/3rd of 
> which is in one table, which has ~32 million rows (22 gigs when dumped). This 
> largish table has about 20 different columns of varchar or text.
>
> There are some records that have illegal characters in them, according to 
> postgres 8.1.5, which imposes stricter standards on UTF encoding.
>
> I've been using copy to dump the big table to disk, then trying to load it 
> into my new table. When it fails, I use split to break the file into 
> manageable chunks and then use "vi" to find the offending line, then figure 
> out the column. Then I use something like:
>
> create table bad_char_gids as select gid from parcels where position('Ñ' in 
> s_street) > 0;
>
> And so create a table with the ids of the bad records, and then use replace 
> to either replace or eliminate the offending characters from that column. 
> This example got 5001 records, but often it is just one record in the whole 
> DB that has some other offending character. I fix the problem in the load 
> data as well, and continue.
>
> The problem is that there are errors in quite a few of the columns (but only 
> a few tens of thousands of records), and the offending characters are all 
> quite different (weird diacritics and characters, upper and lower case). And 
> so this is a very slow process.
>
> Is there any way to get a list of records, even if done repeatedly for each 
> column, that would let me find the offending records in 7.4 which have any 
> invalid UTF chars? I am feeling stupid for not seeing one ... I can find any 
> individual bad character, but I want to find them all at once, if possible.
>   
Try converting the dump file's encoding to UTF-8.  Before 8.1 you could 
insert invalid characters into the DB because it accepted other 
encodings, and it will dump in those other encodings as well.  For 
example, to convert something with Windows characters in it:

iconv -f "WINDOWS-1251" -t "UTF-8" dump_file > converted_dump_file

Then import the converted file.  You may need to try a couple of 
different input encodings if you aren't sure what encoding was used when 
inserting data into the DB.
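If the encodings are mixed, as Greg suspects, another option is to scan the 
dump up front for every line that fails UTF-8 decoding, rather than fixing 
one character at a time. A minimal sketch (the helper name and the context 
width are my own, not from the thread):

```python
# Sketch: scan dump data for lines that are not valid UTF-8 and report
# the line number, the byte offset of the first bad byte, and a little
# surrounding context. Hypothetical helper, not part of the thread.

def find_invalid_utf8(data: bytes):
    """Yield (line_number, byte_offset, context) for each bad line."""
    for lineno, line in enumerate(data.split(b"\n"), start=1):
        try:
            line.decode("utf-8")
        except UnicodeDecodeError as exc:
            start = exc.start
            yield lineno, start, line[max(0, start - 10):start + 10]

# Example: a lone 0xD1 byte (Latin-1 'Ñ') is not valid UTF-8.
sample = b"good line\nbad \xd1 byte\nanother good line\n"
for lineno, offset, ctx in find_invalid_utf8(sample):
    print(lineno, offset)  # prints: 2 4
```

In practice you would read the dump with `open(path, "rb")` and feed the 
bytes through the same loop; the reported line numbers then map straight 
back to the COPY data.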

Russell.

> TIA,
>
> Greg Williamson
> DBA
> GlobeXplorer LLC
>
>
> ---------------------------(end of broadcast)---------------------------
> TIP 6: explain analyze is your friend
>
>
>   








