On Mon, 2007-08-27 at 11:55 +0800, Ow Mun Heng wrote:
> I just ran into trouble with this. The rule seems to work when I do
> simple inserts, but since I will be doing \copy bulk loads, it balks
> and fails.
> Now would be a good time to teach me how to skin this cat differently.
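The failure above is expected: COPY does not go through the query rewriter, so ON INSERT rules never fire for it. A common workaround is to \copy into an unconstrained staging table first and then merge into the real table in one transaction. A minimal sketch of that pattern, assuming a hypothetical table "target" with primary key "id" and a CSV file "data.csv":

```sql
BEGIN;

-- 1. Load the CSV into a temporary, unconstrained staging table.
CREATE TEMP TABLE staging (LIKE target INCLUDING DEFAULTS);
\copy staging FROM 'data.csv' WITH CSV

-- 2. Mimic mysqlimport --replace: drop existing rows that collide,
--    then insert the fresh copies.
DELETE FROM target
 WHERE id IN (SELECT id FROM staging);

INSERT INTO target
SELECT DISTINCT ON (id) * FROM staging;

COMMIT;
```

DISTINCT ON (id) also deduplicates rows within the CSV itself; without an ORDER BY it keeps an arbitrary one, so add an ORDER BY if "last row wins" semantics matter.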
On 8/15/07, Gregory Stark <[EMAIL PROTECTED]> wrote:
> "Tom Lane" <[EMAIL PROTECTED]> writes:
> > I for one have a reputation of running spam filters that eat pets and small
> > children ... so if you want to be sure to get through to me, don't forget to
> > cc: the list.
>
> They eat all my emails
"Tom Lane" <[EMAIL PROTECTED]> writes:
> Ow Mun Heng <[EMAIL PROTECTED]> writes:
>> Ps : Is it this list's norm to have the OP/sender in the "to" list and
>> mailing list on the "CC" list?
>
> Yes. If you don't like that you can try including a "Reply-To: "
> header in what you send to the list;
Ow Mun Heng <[EMAIL PROTECTED]> writes:
> Ps : Is it this list's norm to have the OP/sender in the "to" list and
> mailing list on the "CC" list?
Yes. If you don't like that you can try including a "Reply-To: "
header in what you send to the list; or perhaps better, I think there's
a way to tell
On Tue, 2007-08-14 at 10:16 -0500, Scott Marlowe wrote:
> On 8/14/07, Ow Mun Heng <[EMAIL PROTECTED]> wrote:
> >
> > In MySql, I was using mysqlimport --replace, which essentially
> > provided the means to load data into the DB while at the same time
> > providing the necessary logic to replace rows on duplicate keys.
I'm seeing an obstacle in my aim to migrate from mysql to PG, mainly in
the manner in which PG handles duplicate entries, whether from primary
keys or unique constraints.
Data is extracted via perl DBI into (for now) CSV files, which are then
loaded into the table with psql's \copy command.
In MySql, I was using mysqlimport --replace, which essentially provided
the means to load data into the DB while at the same time providing the
necessary logic to replace rows on duplicate keys.
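For row-at-a-time inserts, a widely used substitute for mysqlimport --replace semantics is the update-then-insert pattern from the PostgreSQL documentation's PL/pgSQL examples: try the UPDATE first and fall back to INSERT only if no row matched, looping to guard against a concurrent insert between the two statements. A sketch, with the hypothetical table "target"(id integer primary key, val text):

```sql
CREATE OR REPLACE FUNCTION merge_target(k integer, v text) RETURNS void AS $$
BEGIN
    LOOP
        -- First try to update an existing row.
        UPDATE target SET val = v WHERE id = k;
        IF found THEN
            RETURN;
        END IF;
        -- No row there: try to insert. If another session inserted
        -- the same key in the meantime, trap the error and retry.
        BEGIN
            INSERT INTO target (id, val) VALUES (k, v);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- loop back and try the UPDATE again
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;

-- Usage: SELECT merge_target(7, 'some value');
```

This handles single rows; for \copy-style bulk loads the staging-table merge is usually far faster, since it works in set-oriented statements rather than per-row calls.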