If you have not already recorded the data, you could play with
auto_increment_increment and auto_increment_offset so that each box
generates ids that cannot collide.
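
For example, assuming just two collection boxes (the value 2 and the
offsets below are only an illustration -- set the increment to the
number of boxes), something like this on each box keeps their
auto_increment values interleaved instead of overlapping:

    -- on box 1
    SET GLOBAL auto_increment_increment = 2;
    SET GLOBAL auto_increment_offset    = 1;

    -- on box 2
    SET GLOBAL auto_increment_increment = 2;
    SET GLOBAL auto_increment_offset    = 2;

Note these are server-wide settings, so they affect every
auto_increment table on that box; putting them in my.cnf keeps them
across restarts.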

If you already have the data, you could do an INSERT ... SELECT on the
storage server, letting it generate new primary keys, and add an extra
column to the destination table to hold the old primary key. One UPDATE
on the table that references the old primary key and you are set.
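
A rough sketch of that second approach for data__ProcessedDataFrames
and data__Raw_in_Processed (column names are taken from your DESCs
below; the old_processed_id column and the "incoming" database, where
I'm assuming you load each box's dump, are just placeholders):

    -- remember the old key next to the newly generated one
    ALTER TABLE data__ProcessedDataFrames
      ADD COLUMN old_processed_id INT UNSIGNED NULL;

    INSERT INTO data__ProcessedDataFrames
           (top_level_product_name, test_id, payload_time,
            universal_time, processed_data, old_processed_id)
    SELECT  top_level_product_name, test_id, payload_time,
            universal_time, processed_data, processed_id
    FROM    incoming.data__ProcessedDataFrames;

    -- copy the referencing rows as-is, then remap the key in one UPDATE
    INSERT INTO data__Raw_in_Processed (processed_id, frame_name, payload_time)
    SELECT  processed_id, frame_name, payload_time
    FROM    incoming.data__Raw_in_Processed;

    UPDATE data__Raw_in_Processed rp
    JOIN   data__ProcessedDataFrames p ON p.old_processed_id = rp.processed_id
    SET    rp.processed_id = p.processed_id;

In practice you would limit that UPDATE to the rows you just imported
(and clear old_processed_id afterwards) so a later merge does not remap
rows from an earlier one. The same trick works for data__RawDataFrames
if anything else keys off raw_id.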

On 9/27/07, D$ <[EMAIL PROTECTED]> wrote:
>
>
>
> I have a setup where data will be collected on several local machines and
> then needs to be merged into a single database on a data storage server. I
> have two tables with auto_incrementing ids that I don't want to clash when
> merging, and another table that uses one of these auto_incrementing numbers
> as a key, which would need to be changed when transferred to the storage
> server.
>
> The tables look like:
> data__ProcessedDataFrames
> data__RawDataFrames
> data__Raw_in_Processed (this has a key into data__ProcessedDataFrames on
> the auto_incremented id)
>
>
> mysql> desc data__ProcessedDataFrames;
> +------------------------+------------------+------+-----+---------+----------------+
> | Field                  | Type             | Null | Key | Default | Extra          |
> +------------------------+------------------+------+-----+---------+----------------+
> | processed_id           | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
> | top_level_product_name | varchar(255)     | YES  | MUL | NULL    |                |
> | test_id                | int(10) unsigned | YES  | MUL | NULL    |                |
> | payload_time           | double           | YES  | MUL | NULL    |                |
> | universal_time         | double           | YES  |     | NULL    |                |
> | processed_data         | mediumblob       | YES  |     | NULL    |                |
> +------------------------+------------------+------+-----+---------+----------------+
> 6 rows in set (0.01 sec)
>
> mysql> desc data__RawDataFrames;
> +--------------+------------------+------+-----+---------+----------------+
> | Field        | Type             | Null | Key | Default | Extra          |
> +--------------+------------------+------+-----+---------+----------------+
> | raw_id       | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
> | test_id      | int(10) unsigned | NO   | PRI | 0       |                |
> | payload_time | double           | YES  | MUL | NULL    |                |
> | raw_data     | blob             | YES  |     | NULL    |                |
> | frame_name   | varchar(100)     | YES  | MUL | NULL    |                |
> +--------------+------------------+------+-----+---------+----------------+
> 5 rows in set (0.02 sec)
>
> mysql> desc data__Raw_in_Processed;
> +--------------+------------------+------+-----+---------+-------+
> | Field        | Type             | Null | Key | Default | Extra |
> +--------------+------------------+------+-----+---------+-------+
> | processed_id | int(10) unsigned | NO   | MUL |         |       |
> | frame_name   | varchar(100)     | YES  |     | NULL    |       |
> | payload_time | double           | YES  |     | NULL    |       |
> +--------------+------------------+------+-----+---------+-------+
> 3 rows in set (0.00 sec)
>
> mysql>
>
> Again, each individual box will be collecting data separately, and when we
> decide to store the data we need to merge it into the same database on a
> storage server.
>
>
>
> Are there any docs out there that could be helpful, or any tips from anyone?
>
>
> D$
>
>


--
Rob Wultsch
(480)223-2566
[EMAIL PROTECTED] (email/google im)
wultsch (aim)
[EMAIL PROTECTED] (msn)



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]
