My thought is that perhaps the loading table may have an error, quite apart from 
reading every line prior to inserting into his main table. Just unload it where 
it is and see what happens on reload. If it fails there also, I would tend to 
believe it would fail everywhere.

I also find the process is so fast that it takes any questions out of the equation.
KISS & JM.02

Sincerely,
Paul D.

-----Original Message-----
From: [email protected] [mailto:[email protected]] On Behalf Of MikeB
Sent: Monday, September 14, 2009 7:44 AM
To: RBASE-L Mailing List
Subject: [RBASE-L] - RE: Strange Update

From his original post, it begs the question why the data is printed to disk 
at all. Is there any sound reason this is not done as an INSERT / SELECT, 
since the data goes to a Master Table from a source table?
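
MikeB's suggestion can be sketched as a single set-based statement. The table
and column names below are invented for illustration (they are not from Ed's
application), and the WHERE clause would need to match his actual selection
logic:

```sql
-- Hypothetical sketch: append this month's rows straight from the
-- source table into the master Rent table, with no intermediate
-- disk file to print, load, or clean up.
INSERT INTO Rent (AcctNo, Period, Balance)
  SELECT AcctNo, Period, Balance
  FROM RentSource
  WHERE Period = .vCurPeriod
```

Because the rows never leave the database, there is no file to flush or
close, which also sidesteps the timing question Alastair raises.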



----- Original Message ----- 
From: "Alastair Burr" <[email protected]>
To: "RBASE-L Mailing List" <[email protected]>
Sent: Monday, September 14, 2009 3:07 AM
Subject: [RBASE-L] - RE: Strange Update


If you are doing the unload and then an immediate load, are you sure that the 
data has been written to disk and the file closed before the load is 
attempted?

Regards,
Alastair.


  ----- Original Message ----- 
  From: Ed Rivkin
  To: RBASE-L Mailing List
  Sent: Monday, September 14, 2009 2:09 AM
  Subject: [RBASE-L] - RE: Strange Update


  Anne,
  Thanks for the suggestion. It was certainly behaving like a corrupted table.
  I ran the DB through R:Scope and it came through clean on both structure and data.

  Emmitt, good point, but it seems odd that a Disc / Copy / Conn would cause
  what I am seeing. Since putting in a pause for 30 secs after the Conn (did
  that after posting my question), it has been working fine. As I am updating
  3 tables in the .rmd, I just thought it easier to keep everything together
  rather than doing 3 unloads....

  Ed

  Sep 13, 2009 06:34:30 AM, [email protected] wrote:


    Try running R:Scope on the database.

    I did have something similar happen when updating an operations table.
    Every time a record was updated a 2nd copy of the record would appear.

    Found the customer table, which is a feeder table, had corrupted.

    Reloaded the customer table from a backup copy and all has been running
    fine.



      -------- Original Message --------
      Subject: [RBASE-L] - Strange Update
      From: Ed Rivkin <[email protected]>
      Date: Fri, September 11, 2009 11:04 pm
      To: [email protected] (RBASE-L Mailing List)

      I am curious if someone has faced a similar situation.
      The problem revolves around a .rmd file originally
      written for my application 20 years ago using r:base 4.0.

      I update a table monthly with 140 entries; one for each
      account. To do so, I print a report to disk, load it into a table
      and append the table to my master table which is the current
      Rent table. In addition all rows for accounts with a zero balance
      are moved to the Rent History table and deleted from the Rent Table.

      Prior to the print / load / append the DB is disconnected
      and the DB is copied so I have a recovery point in case
      something fails.
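
      The zero-balance archiving step described above could be done with two
      set-based statements rather than row-by-row moves. This is only a sketch
      with guessed table and column names (RentHistory, Balance), not Ed's
      actual code:

      ```sql
      -- Hypothetical sketch: archive paid-off accounts, then purge them.
      -- Assumes Rent and RentHistory share the same column layout.
      INSERT INTO RentHistory SELECT * FROM Rent WHERE Balance = 0
      DELETE FROM Rent WHERE Balance = 0
      ```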

      The only changes from 4.0 to 4.5 to 7.6 are the screen handling messages.

      After testing the conversion everything seemed to work fine. Running
      in parallel, however, we are having a strange problem: many of the current
      Rent records are duplicated, others tripled or quadrupled, and some of the
      history data is finding its way back into the Rent file.

      When I commented out the Disconnect / Copy to Disk / Connect
      everything worked fine again.

      Am I hitting some sort of DB corruption issue? Should I put a
      pause for 30-45 secs after the connect? Or am I totally missing
      something else?
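
      The sequence being described, with the delay Ed later reports adding,
      would look roughly like this in a command file. The database name is
      invented, and the COPY syntax and timed-pause mechanism vary by R:BASE
      version, so treat this as pseudocode:

      ```sql
      DISCONNECT
      COPY rentdb.* backup\*.*  -- recovery point; exact syntax version-dependent
      CONNECT rentdb
      PAUSE                     -- or a 30-45 sec timed delay, giving the OS
                                -- time to flush and close the copied files
      -- ... print / load / append steps follow ...
      ```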

      Thanks as always,
      Ed





------------------------------------------------------------------------------


