Jaime Silvela wrote:
Brian, that's not what I meant.
Parsing the uploaded file is just for extracting the components of each spreadsheet row and constructing the INSERTs. In fact, whenever I copy from a file, whether with COPY or with a custom importer, I put the data into a staging table so that I can pre-process it before writing to the main table. But why would COPYing from a file be so insecure?
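
For illustration, a rough sketch of that staging pattern (the table and column names here are invented, not from the real schema):

CREATE TABLE staging_entries (
        ticker TEXT,
        amount NUMERIC
);

-- Bulk-load the parsed file, pre-process in SQL, then move the
-- rows to the main table. Server-side COPY reads a file on the
-- database host; from psql, \copy reads the client's file instead.
COPY staging_entries FROM '/tmp/upload.txt';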


I was under the impression that you were copying indiscriminately from an uploaded CSV file ("spreadsheet" being ambiguous). Obviously, that would be a Bad Thing to rely upon.

nextval() and sequences are not what I'm looking for. I want to assign the same id to all the rows imported from the same file. Let's say user A is working on portfolio_id 3 and decides to upload a spreadsheet with new values. I want to be able to import the spreadsheet into the staging table and assign a portfolio_id of 3 to all its entries. Of course, I can't just UPDATE the staging table to have portfolio_id = 3, because user B might be uploading a sheet for portfolio_id = 9 at the same time.
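
One common way around that collision (only a sketch; tmp_staging, main_table, ticker, and amount are all invented names) is to load each upload into a per-session temporary table, since PostgreSQL temp tables are private to the connection that creates them, and stamp the id on the rows as they move out:

-- Visible only to this connection, so user B's concurrent
-- upload for portfolio 9 can't touch it.
CREATE TEMPORARY TABLE tmp_staging (
        ticker TEXT,
        amount NUMERIC
);

-- ... load and pre-process the uploaded rows here ...

-- Stamp this upload's portfolio_id on every row on the way out.
INSERT INTO main_table (portfolio_id, ticker, amount)
        SELECT 3, ticker, amount FROM tmp_staging;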


Seems like you need to adjust your schema to use a pivot table:

CREATE TABLE portfolio (
        id SERIAL PRIMARY KEY,
        ...
);

CREATE TABLE portfolio_entries (
        portfolio_id INT4 NOT NULL,
        ...
        CONSTRAINT fk_portfolio_entries FOREIGN KEY (portfolio_id)
                REFERENCES portfolio
                ON DELETE CASCADE
);
Then you should be able to insert directly into the second table a row for each entry (for want of a better word) that corresponds to a particular portfolio.
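
For instance, the importer can bake the per-upload id into the INSERTs it constructs, or pull everything from a staging table in one statement (ticker and amount stand in for the real columns):

INSERT INTO portfolio_entries (portfolio_id, ticker, amount)
        SELECT 3, ticker, amount FROM staging_entries;

And with the ON DELETE CASCADE in place, deleting portfolio 3 later will remove its entries automatically.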

brian
