Craig Ringer <cr...@postnewspapers.com.au> wrote:

>> I am in the process of moving a FoxPro based system to PostgreSQL.

>> We have several tables that have memo fields which contain carriage
>> returns and line feeds that I need to preserve. I thought if I converted
>> these into the appropriate \r and \n codes that they would be imported as
>> carriage returns and line feeds, but instead they are stored in the
>> database as \r and \n.

> PostgreSQL doesn't process escapes in CSV import mode.

> You can reformat the data into the non-CSV COPY format,
> which WILL process escapes. Or you can post-process it after
> import to expand them. Unfortunately PostgreSQL doesn't
> offer an option to process escapes when "CSV" mode COPY is
> requested.
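
If you go the post-processing route, one UPDATE after the
import should do it. A minimal sketch; the table "memos" and
its column "memo" are made-up names, and the connection
string is a placeholder:

import psycopg2

conn = psycopg2.connect("dbname=mydb")
with conn, conn.cursor() as cur:
    # E'...' strings always interpret escapes, regardless of
    # standard_conforming_strings: E'\\r' is the literal
    # two-character sequence backslash-r, E'\r' a real CR.
    cur.execute(r"""
        UPDATE memos
           SET memo = replace(replace(memo, E'\\r', E'\r'),
                              E'\\n', E'\n')
    """)
conn.close()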

> I posted a little Python script that reads CSV data and
> spits out COPY-friendly output a few days ago. It should be
> trivially adaptable to your needs; you'd just need to change
> the input dialect options. See the archives for the script.
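
For anyone who doesn't want to dig through the archives, the
core of such a converter is small. This is a sketch along
those lines, not Craig's actual script; adjust the csv
dialect options to match the FoxPro export:

import csv, sys

def escape(field):
    # COPY text format: double the backslash first, then turn
    # the control characters into two-character escapes.
    return (field.replace('\\', '\\\\')
                 .replace('\t', '\\t')
                 .replace('\r', '\\r')
                 .replace('\n', '\\n'))

for row in csv.reader(sys.stdin):   # change dialect options here
    print('\t'.join(escape(f) for f in row))

Pipe the output into something like
psql -c '\copy memos from stdin' and the \r and \n escapes
are expanded on the way in.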

Another option is a small Perl script or something similar
that connects to both the FoxPro and the PostgreSQL database
and transfers the data with parameterized "INSERT" statements.
The advantage of this is that you have tight control over
charsets, date formats, EOL conventions and the like, and you
never have to wonder which stage of the conversion process a
given file is in. The disadvantage is obviously that you lose
the speed benefit of a bulk "COPY".
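
In Python rather than Perl, and with the ODBC driver string
and all table and column names being pure assumptions, a
sketch of that approach:

import pyodbc, psycopg2

# FoxPro side via ODBC; driver name and DBF directory are
# assumptions, adjust to your installation.
src = pyodbc.connect(r'DRIVER={Microsoft Visual FoxPro Driver};'
                     r'SourceType=DBF;SourceDB=C:\data;')
dst = psycopg2.connect('dbname=mydb')

with dst, dst.cursor() as out:
    for row in src.cursor().execute('SELECT id, memo FROM memos'):
        # Parameterized INSERT: the driver handles quoting, so
        # embedded CR/LF pass through intact.
        out.execute('INSERT INTO memos (id, memo) VALUES (%s, %s)',
                    (row.id, row.memo))
src.close()

Batching the inserts (e.g. with executemany) can claw back
some of the difference, but "COPY" will still be faster.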

Tim

