Hi Michael!

Thanks for reporting. There can definitely be unresolved bugs introduced by the Postgres/EclipseLink migration. I have found and fixed a few myself, but I haven't checked the loan side yet, so it's quite possible that there is a bug lurking somewhere. I went over the code diff of the Postgres migration for the Portfolio microservice again (there are not many changed lines) but I didn't spot anything suspicious.
I suggest you try to increase the logging. You can tweak the logging of the containers this way:

1. inside "fineract-cn-docker-compose" create the subdirectories "node_logs" and "node_config"

2. in the "node_config" subdirectory create a file named logback-portfolio.xml with contents like this:

<configuration>
  <timestamp key="bySecond" datePattern="HHmmss"/>
  <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>/tmp/logs/portfolio.${bySecond}.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <fileNamePattern>/tmp/logs/archive/portfolio.%d{yyyy-MM-dd}_${bySecond}.log</fileNamePattern>
      <maxHistory>2</maxHistory>
      <totalSizeCap>200MB</totalSizeCap>
    </rollingPolicy>
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} XABC [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="DEBUG">
    <appender-ref ref="FILE"/>
  </root>
</configuration>

3. in docker-compose.yml mount the two directories into the portfolio-ms container and point logging.config at the config file. For this, find:

portfolio-ms:
  image: apache/fineract-cn-portfolio:latest
  ports:
    - "2026:2026"
  environment:

and replace it with:

portfolio-ms:
  image: apache/fineract-cn-portfolio:latest
  ports:
    - "2026:2026"
  volumes:
    - ./node_config:/etc/configs/:ro
    - ./node_logs:/tmp/logs
  environment:
    logging.config: "/etc/configs/logback-portfolio.xml"

I hope you then get more informative error messages into the log file. If you do, feel free to share them here.
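If root-level DEBUG turns out to be too noisy, logback also lets you raise the level only for the packages you care about and keep everything else at INFO. A minimal sketch of that variant (the package names below are only my guesses, use whatever prefixes you actually see in the output, and note the EclipseLink line only helps if its logging is routed through SLF4J):

  <!-- inside the same <configuration> element, as an alternative to root-level DEBUG -->
  <logger name="io.mifos" level="DEBUG"/>                 <!-- assumed package prefix of the portfolio service -->
  <logger name="org.eclipse.persistence" level="DEBUG"/>  <!-- only effective if EclipseLink logs via SLF4J -->
  <root level="INFO">
    <appender-ref ref="FILE"/>
  </root>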
Kind regards
Juhan Aasaru

On Fri, 1 Nov 2019 at 13:51, Michael E. (<michael.elg...@gmail.com>) wrote:

> I've been doing some testing with Fineract-CN, using fineract-cn-docker-compose.
>
> In previous testing of Fineract-CN before the Postgres migration, I was able to create and open loan products for members. However, using the latest versions of the microservices and provisioning with the scripts provided in fineract-cn-docker-compose, I am not able to open loan cases for members.
> I am able to create members, create deposit and loan products, add these products to members, and open the deposit account, but when attempting to open the loan product, the portfolio-ms freezes up and doesn't respond to any requests until it is restarted.
>
> This is the relevant section from the portfolio-ms log:
>
> [EL Fine]: sql: 2019-11-01 11:24:42.54--ServerSession(817490653)--Connection(1699212784)--Thread(Thread[qtp1761217448-17,5,main])--SELECT t2.id, t2.a_comment, t2.executed_by, t2.executed_on, t2.case_id, t2.task_def_id FROM bastet_cases t3, bastet_c_task_insts t2, bastet_p_task_defs t1, bastet_products t0 WHERE (((t0.identifier = ?) AND (t3.identifier = ?)) AND (((t1.id = t2.task_def_id) AND (t0.id = t1.product_id)) AND (t3.id = t2.case_id)))
>   bind => [test, test]
> [EL Finest]: query: 2019-11-01 11:24:42.543--UnitOfWork(396538513)--Thread(Thread[qtp1761217448-20,5,main])--Execute query ReadAllQuery(referenceClass=TaskDefinitionEntity sql="SELECT t1.id, t1.actions, t1.description, t1.four_eyes, t1.identifier, t1.mandatory, t1.a_name, t1.product_id FROM bastet_products t0, bastet_p_task_defs t1 WHERE ((t0.identifier = ?) AND (t0.id = t1.product_id))")
> [EL Finest]: connection: 2019-11-01 11:24:42.544--ServerSession(817490653)--Connection(1400896017)--Thread(Thread[qtp1761217448-20,5,main])--Connection acquired from connection pool [read].
> [EL Finest]: connection: 2019-11-01 11:24:42.544--ServerSession(817490653)--Thread(Thread[qtp1761217448-20,5,main])--reconnecting to external connection pool
> [EL Finer]: transaction: 2019-11-01 11:24:42.548--UnitOfWork(441609303)--Thread(Thread[qtp1761217448-17,5,main])--release unit of work
> [EL Finer]: connection: 2019-11-01 11:24:42.548--ClientSession(782248230)--Thread(Thread[qtp1761217448-17,5,main])--client released
> [EL Finer]: connection: 2019-11-01 11:24:48.267--ServerSession(817490653)--Thread(Thread[qtp1761217448-17,5,main])--client acquired: 1594588247
> [EL Finer]: transaction: 2019-11-01 11:24:48.268--ClientSession(1594588247)--Thread(Thread[qtp1761217448-17,5,main])--acquire unit of work: 1487145812
> [EL Finest]: query: 2019-11-01 11:24:48.268--UnitOfWork(1487145812)--Thread(Thread[qtp1761217448-17,5,main])--Execute query ReadAllQuery(referenceClass=ProductEntity )
> [EL Finest]: connection: 2019-11-01 11:24:48.269--ServerSession(817490653)--Connection(1454951413)--Thread(Thread[qtp1761217448-17,5,main])--Connection acquired from connection pool [read].
> [EL Finest]: connection: 2019-11-01 11:24:48.269--ServerSession(817490653)--Thread(Thread[qtp1761217448-17,5,main])--reconnecting to external connection pool
> [EL Finer]: connection: 2019-11-01 11:24:48.28--ServerSession(817490653)--Thread(Thread[qtp1761217448-18,5,main])--client acquired: 1954958561
> [EL Finer]: transaction: 2019-11-01 11:24:48.281--ClientSession(1954958561)--Thread(Thread[qtp1761217448-18,5,main])--acquire unit of work: 119694586
> [EL Finest]: query: 2019-11-01 11:24:48.281--UnitOfWork(119694586)--Thread(Thread[qtp1761217448-18,5,main])--Execute query ReadAllQuery(referenceClass=ProductEntity )
> [EL Finest]: connection: 2019-11-01 11:24:48.282--ServerSession(817490653)--Connection(287564597)--Thread(Thread[qtp1761217448-18,5,main])--Connection acquired from connection pool [read].
> [EL Finest]: connection: 2019-11-01 11:24:48.282--ServerSession(817490653)--Thread(Thread[qtp1761217448-18,5,main])--reconnecting to external connection pool
>
> After "reconnecting to external connection pool", the microservice stops responding. I assume this issue was introduced during the Postgres migration. Can anyone else recreate/resolve this issue?
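One more idea for narrowing this down, based on the log excerpt above: while portfolio-ms is hanging, you could check on the Postgres side whether the connections it holds are stuck waiting on something. A rough sketch of the kind of query I mean (plain pg_stat_activity, nothing Fineract-specific; filter on your tenant database name if you have several databases):

  -- list sessions that are not idle; wait_event_type = 'Lock' would mean
  -- a session is blocked on a row or table lock rather than doing work
  SELECT pid, datname, state, wait_event_type, wait_event, left(query, 120) AS query
  FROM pg_stat_activity
  WHERE state <> 'idle'
  ORDER BY pid;

If nothing there is waiting and the connections are simply idle, the hang is more likely inside the service itself (for example the connection pool no longer handing out connections), which is exactly the kind of thing the DEBUG log from the steps above should help to pin down.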