Hello Devs,

When a group starts financial operations with a small number of transactions (both loans and savings), the minimum machine specs work fine for some time. This changes as the number of transactions grows and the overnight jobs start hitting out-of-memory issues.
Is it a good idea to tweak a job so that it approaches the task batch-wise? For example, an 8GB machine will process 1000 standing instructions without a problem, but runs into memory exceptions when the number of instructions grows to 1200. So is it a good idea to tweak the job so that it handles those 1000 to 1200 instructions in batches of, say, 250 at a time? With the same machine, the job would then handle the task without running into memory issues. Is this the way it should work? Is there another way (without changing the server specs)? To make the batch-wise idea concrete, I have put a sketch below.
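Here is a minimal sketch of the kind of loop I have in mind, assuming a hypothetical paginated repository; the class and method names (StandingInstructionRepository, findDuePage, etc.) are placeholders I made up for illustration, not the actual job's classes:

import java.util.List;

// Hypothetical types for illustration only -- not the real Fineract API.
interface StandingInstructionRepository {
    // Fetch at most 'limit' due instructions, skipping the first 'offset'.
    List<StandingInstruction> findDuePage(int offset, int limit);
}

interface StandingInstructionProcessor {
    void execute(StandingInstruction instruction);
}

class StandingInstruction {
    // id, debit/credit accounts, amount, schedule, etc.
}

public class StandingInstructionBatchJob {

    private static final int BATCH_SIZE = 250; // tuned to what the heap can hold

    private final StandingInstructionRepository repository;
    private final StandingInstructionProcessor processor;

    public StandingInstructionBatchJob(StandingInstructionRepository repository,
                                       StandingInstructionProcessor processor) {
        this.repository = repository;
        this.processor = processor;
    }

    public void run() {
        int offset = 0;
        List<StandingInstruction> batch;
        do {
            // Load only one batch instead of all due instructions at once.
            batch = repository.findDuePage(offset, BATCH_SIZE);
            for (StandingInstruction instruction : batch) {
                processor.execute(instruction);
            }
            offset += batch.size();
            // The processed batch becomes eligible for garbage collection
            // before the next one is fetched, keeping peak memory bounded.
        } while (!batch.isEmpty());
    }
}

One thing I am not sure about with offset-based paging: if processing an instruction changes which rows the query still considers due, the offsets shift and rows could be skipped; paging by last-seen id would probably be safer.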
Regards,
Wilfred