Hi,

> > Right, I want to use it with a bulk operation, say importing a million
> > records with COPY. Calling nextval one million times looks to me like
> > an enormous waste of resources. Suppose you are on an ADSL line: it
> > will cost one million times the ping time of the ADSL line (say 10
> > milliseconds per call). Well OK, one could write a server function that
> > does this, but then the one million result values must be transported
> > back to the client, because they are not guaranteed to be contiguous.
> > Unneeded complexity compared to a simple nextval increment parameter.
>
> The usual way to use nextval() is to use it on the server as an
> expression in an INSERT or DEFAULT. If you're using COPY and don't have
> a column default set up then, hm, I guess you're kind of stuck. That
> would make a good use case for a one-time nextval(increment) or
> something like that.
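To make the quoted cost concrete, here is a quick back-of-the-envelope calculation. The 10 ms round-trip figure is the assumption from the mail above, not a measurement:

```python
# Cost of issuing one nextval() round trip per imported row,
# using the figures quoted above (1 million rows, ~10 ms ping).
calls = 1_000_000        # one nextval() call per record
round_trip_s = 0.010     # assumed ~10 ms ADSL round-trip time

total_s = calls * round_trip_s
print(f"{total_s:.0f} s ~ {total_s / 3600:.1f} hours")  # 10000 s ~ 2.8 hours
```

So the per-call network latency alone dominates the import, which is why evaluating the expression server-side (via a DEFAULT, or the COPY extension discussed below) avoids the problem entirely.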
Coincidentally, I very briefly discussed (offline) supporting expressions while doing loads using COPY FROM with Heikki a while back. From the above mail exchange, it does appear that adding this kind of functionality would be useful when doing bulk imports into tables using COPY.

Heikki's initial suggestion was as follows:

COPY <table> FROM <file> USING <query>

where query could be any SELECT query, executed once per row using the values from the input data file. For example:

COPY footable (strcol, strcollen, moredata) FROM <file> USING SELECT $1, length($1), $2;

The SQL expressions could refer to the columns being read, or could be user-defined procedures, built-in functions, etc. These expressions would need to be executed for each row read from the input data file to form a new set of values[], nulls[] entries before forming the corresponding tuple entry.

I think the above would be a very useful enhancement to COPY. The syntax and other details mentioned above are of course subject to discussion and approval on the list.

Regards,
Nikhils
--
EnterpriseDB http://www.enterprisedb.com
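For illustration only, here is a minimal Python sketch of the per-row evaluation the proposed COPY ... USING would imply (the real implementation would live in PostgreSQL's C code; the function names, the tab-delimited parsing, and the lambda-per-column representation are all assumptions of this sketch). It mimics the example USING SELECT $1, length($1), $2, mapping $n to the n-th parsed field and producing the values[]/nulls[] pair for each tuple:

```python
# Hypothetical sketch: apply the USING expressions to each parsed input
# row, producing values[] and nulls[] for the tuple to be formed.

def eval_using_row(fields, expressions):
    """Apply one expression per target column; None marks a NULL."""
    values = [expr(fields) for expr in expressions]
    nulls = [v is None for v in values]
    return values, nulls

# Expressions for: SELECT $1, length($1), $2   ($n -> fields[n-1])
expressions = [
    lambda f: f[0],        # $1
    lambda f: len(f[0]),   # length($1)
    lambda f: f[1],        # $2
]

for line in ["hello\tmore", "pg\tdata"]:
    fields = line.split("\t")  # parse one row of the input data file
    values, nulls = eval_using_row(fields, expressions)
    print(values, nulls)
```

The point of the sketch is just that the expressions run once per input row, entirely on the server, before the tuple is materialized, so no per-row round trip to the client is needed.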