Okay, I think it comes down to machine-precision issues. In the configuration where the code runs, the CG convergence tolerance was only barely satisfied (in topology optimization, the near-'void' regions worsen the conditioning of the stiffness matrix); in the other configuration it never reaches that tolerance and ends up diverging. If I slightly relax the tolerance, the analysis runs fine in that configuration too.
Thanks!
-Julian

--- On Mon, 12/27/10, Wolfgang Bangerth <[email protected]> wrote:

From: Wolfgang Bangerth <[email protected]>
Subject: Re: [deal.II] Problem with CG
To: [email protected]
Cc: "Julian" <[email protected]>
Date: Monday, December 27, 2010, 11:46 am

Julian,

> I have a code using preconditioned (SSOR) CG that converges (in <500
> iterations) on one system (32 bit, Ubuntu 9.10, gcc 4.4.1, no threading,
> deal.II 6.2.1), but fails to converge (in 10,000 iterations) on another
> system (64 bit, RedHat 4, gcc 4.4.2, no threading, deal.II 6.3.1). Has
> anyone run into a similar problem or might otherwise know what the
> problem could be?

Those are too many variables, and too few details. What have you tried so far to find out what the problem is? For example, did you try to use the same deal.II version on both machines? Did you try to write out the matrices and compare their values? What happens if you use a very small linear system? Your mail just doesn't have enough detail for anyone to give useful feedback beyond stabbing in the dark...

Best
 WB

--
-------------------------------------------------------------------------
Wolfgang Bangerth                email: [email protected]
                                 www:   http://www.math.tamu.edu/~bangerth/
_______________________________________________
dealii mailing list
http://poisson.dealii.org/mailman/listinfo/dealii
