Hello everyone,

I hope you are all well and healthy.
I am writing because we are seeing some weird behavior of the JobPoller in a
Docker environment.
Here are the details of our environment:
- ofbiz version is 13.07.03
- Ubuntu 18 server
- Docker container server
- Multiple docker containers running: 1 container -> 1 customer -> 1 full
ofbiz instance
- Multi-tenant enabled: 1 container -> 1 tenant
- 1 container running MySQL Server (shared by all the ofbiz containers)
- 1 container with Apache Web Server, acting as a proxy
- each container has its own volumes to persist data; one of the persisted
files is *general.properties*
- each ofbiz-container is set up with its own, specific unique.instanceId (a
sketch of this setup follows this list)
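
Just to make the setup concrete, here is a minimal sketch of how each
container is configured (paths and names are only illustrative, not our real
ones): the per-customer general.properties is mounted as a volume over the
default one and carries the per-container id.

    # volume mount at container start (illustrative host path)
    docker run -v /srv/customerA/general.properties:/opt/ofbiz/framework/common/config/general.properties ...

    # inside the mounted general.properties of customer A
    unique.instanceId=ofbiz-customerA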

*Problem context*
Each ofbiz instance has some scheduled services that run at certain times,
mostly for integration with external ERP systems.
In particular, we have one job that reads shipped sales order data and
creates a CSV file in a location inside the ofbiz-home directory: this
location is mapped as a volume so that an external program can come and read
the generated file from the "physical" server.
The problem is that often the service runs with no problems and the order
header records are marked/flagged as processed, but the file is not
generated inside the container where I expect to see it; it appears in
another container instead.
As a result, a customer ends up with another customer's order data
registered in his ERP system; not good.

I know that when there are multiple ofbiz instances running, the unique
instance id is crucial to keep things working, so I double-checked them and
I can confirm that each instance has its own unique id.

What I did then was to take a look at the JOB_SANDBOX entity of each tenant,
and there I noticed that services, when executed by the JobPoller, are
recorded with an instance id different from the one of "their" container,
which means that the job is executed by the "wrong" container.
I should also add that the JobPoller does not always pick up the same
"wrong" instanceId; it often differs from one job execution to the next.
A query along the lines of the one below is enough to see the mismatch.
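
(Assuming the default MySQL table/column naming generated by the entity
engine; run it against each tenant's database.)

    -- which instance actually ran each completed job, most recent first
    SELECT JOB_ID, JOB_NAME, SERVICE_NAME, STATUS_ID, RUN_BY_INSTANCE_ID, FINISH_DATE_TIME
    FROM JOB_SANDBOX
    WHERE RUN_BY_INSTANCE_ID IS NOT NULL
    ORDER BY FINISH_DATE_TIME DESC;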

This happens only when the service is run by the JobManager; if I execute
the same service by hand on the proper container, everything is fine.

To summarize, it seems that when the JobManager reads the instanceId from
the general.properties file, it picks up the id belonging to another
(random) container: if we are lucky, it picks up its own unique id and the
service runs correctly.
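
A small helper like the one below (just a sketch; the class and method names
are mine, but UtilProperties and Debug are the standard 13.07 APIs) could be
used to log the id that the running JVM actually resolves, to compare it
with the RUN_BY_INSTANCE_ID recorded in JOB_SANDBOX:

    import org.ofbiz.base.util.Debug;
    import org.ofbiz.base.util.UtilProperties;

    public class InstanceIdCheck {
        public static final String module = InstanceIdCheck.class.getName();

        // Log the unique.instanceId that this JVM resolves from general.properties,
        // so it can be compared with RUN_BY_INSTANCE_ID in JOB_SANDBOX.
        public static void logInstanceId() {
            String instanceId = UtilProperties.getPropertyValue("general.properties", "unique.instanceId", "unknown");
            Debug.logInfo("This container resolves unique.instanceId=" + instanceId, module);
        }
    }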

Has anyone ever experienced this or a similar issue?

Thank you very much in advance,

Giulio

-- 
Giulio Speri


*Mp Style Srl*
via Antonio Meucci, 37
41019 Limidi di Soliera (MO)
T 059/684916
M 347/0965506

www.mpstyle.it
