[GENERAL] Instability in copying large quantities of data
Hi all... I've got a big thorn in my side at the moment. I'm developing a web app based essentially on a set of reports. These reports are generated from queries on my client's legacy system. For obvious security reasons, my app doesn't interact directly with the main server; it is built around a Postgres DB on a separate machine (which is also the web server), and I set up a "poor man's replication" that batch-transfers data from the legacy server to the pgsql server. In practice, the legacy server generates ASCII dumps of the data needed for the reports, zips them, and ftps them to the web server. Then a little process scheduled in cron picks them up and COPYs them into the pgsql system. I built this process using C and libpq (if necessary, I can post the code, but it's a very simple thing and I assume you can figure out how it works).

I've used this scheme many times for various web apps and never ran into problems (I've got an app built eons ago, based on Slackware 3.5 and PG 6.3.2, housed at a far-away provider, that has never stopped for a single second in all this time. Wow!). Now I'm trying it on a brand new RH 6.2 with PG 7.0.2, RPM version.

The problem is that the COPY of the data, apparently, sometimes leaves a table in an inconsistent state. The command doesn't throw any error, but when I try to SELECT or VACUUM that table the backend dumps core. Apparently the only thing I can do is drop the table and recreate it. This is EXTREMELY unfortunate, since it all must be automated, and if I can't catch any error condition during the update, then the web app starts crashing down as well... Sadly, this happens in a very inconsistent way. However, the size of the data file seems related to how often the problem occurs, and since some of the table dumps are more than 20 MB, that is not good news.

I don't have any logs, because the RPM version doesn't create them; I'll try to fix that as soon as possible. In the meantime, can anybody share some hints on how to resolve this nightmare?

/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Fabrizio Ermini            Alternate E-mail:
C.so Umberto, 7            [EMAIL PROTECTED]
loc. Meleto Valdarno       Mail on GSM: (keep it short!)
52020 Cavriglia (AR)       [EMAIL PROTECTED]
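For anyone curious, here is a minimal sketch of the sort of C/libpq loader described above. This is not the original program; the connection string, table name, and file path are placeholders. The main point is that the result status of every command is checked, so a failed COPY can be reported by the cron job instead of passing silently:

/*
 * Hypothetical cron-driven loader sketch using libpq.
 * The DSN, table name, and file path below are placeholders.
 */
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn   *conn;
    PGresult *res;

    /* Placeholder connection string. */
    conn = PQconnectdb("dbname=reports");
    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Reload inside one transaction so a failed COPY rolls back and
     * never leaves a partially loaded table visible to the web app. */
    res = PQexec(conn, "BEGIN");
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        goto fail;
    PQclear(res);

    /* Placeholder table name. */
    res = PQexec(conn, "DELETE FROM report_data");
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        goto fail;
    PQclear(res);

    /* Server-side COPY from the file dropped off by the ftp transfer
     * (placeholder path). */
    res = PQexec(conn, "COPY report_data FROM '/var/load/report_data.txt'");
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        goto fail;
    PQclear(res);

    res = PQexec(conn, "COMMIT");
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        goto fail;
    PQclear(res);

    PQfinish(conn);
    return 0;

fail:
    /* Report the error so cron can mail it back; the open transaction
     * is rolled back when the connection closes. */
    fprintf(stderr, "load failed: %s", PQerrorMessage(conn));
    PQclear(res);
    PQfinish(conn);
    return 1;
}

Wrapping the delete and the COPY in a single transaction also means readers never see a half-loaded table if the load dies partway through.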
Re: [GENERAL] Instability in copying large quantities of data
[EMAIL PROTECTED] writes:
> The problem is that the COPY of the data, apparently, sometimes leaves
> a table in an inconsistent state. The command doesn't throw any error,
> but when I try to SELECT or VACUUM that table the backend dumps core.

Backtrace from core file, please? (Compiling the backend with -g first would improve the usefulness of the trace, but it might tell us something even without.)

regards, tom lane