Hmm, for a batch this code doesn't mean anything - there is no request scope. Did you hack something around DeltaSpike to make it work?
If this entity manager is used in an EJB this should be fine; if not, then you need to ensure transactions are handled as you expect. That should be the case with BatchEE, but it doesn't cost anything to validate it. Finally, do you use @Asynchronous in your code? Otherwise you shouldn't see it.

Romain Manni-Bucau
@rmannibucau <https://twitter.com/rmannibucau> | Blog <http://rmannibucau.wordpress.com> | Github <https://github.com/rmannibucau> | LinkedIn <https://www.linkedin.com/in/rmannibucau> | Tomitriber <http://www.tomitribe.com>

2015-03-02 18:10 GMT+01:00 Karl Kildén <[email protected]>:

> Hello,
>
> I have some @Stateless beans that I use from batches. After the job has
> finished, I can see from a heap dump that the async thread seems to keep a
> reference to the RepeatableWriteUnitOfWork. From what I can find, this is
> the EclipseLink entity manager, and since nobody seems to have called
> clear() on it my heap is getting pretty full...
>
> I have defined my batches with the normal read/process/write pattern. They
> are @Named and simply inject my @Stateless beans. The @Stateless beans use
> an EntityManager, and it is produced like this:
>
> @PersistenceContext(unitName = APP_NAME)
> private EntityManager entityManager;
>
> @Produces
> @RequestScoped
> protected EntityManager createEntityManager() {
>     return this.entityManager;
> }
>
> Not sure if I am missing some kind of disposal here? I don't think so,
> because only the jobs get the UnitOfWork stuck on the heap.
>
> I'm not sure I understand any of this very well. I can just clearly see
> that my entire heap is now RepeatableWriteUnitOfWork instances tied to
> @Asynchronous threads.
>
> My memory dump could of course be sent to someone, or I can share my
> desktop if someone wants to help me understand this... Or maybe a pointer
> on where to debug?
>
> cheers
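[Editorial note on the producer above: one thing worth checking is that a CDI producer only releases what its disposer method releases; a plain @Produces @RequestScoped producer with no matching @Disposes method never calls clear() on the delegate, so EclipseLink's unit-of-work caches can stay reachable. The sketch below is a minimal, container-free simulation of that lifecycle, not the poster's actual code: the EntityManager stub and the TrackingEm/DisposerSketch names are made up for illustration, and the commented @Disposes snippet shows where a real disposer would go. Whether clear() actually fixes this particular leak would need to be verified against the heap dump.]

```java
// Stub modelling just the part of javax.persistence.EntityManager we need.
interface EntityManager {
    void clear(); // detaches all managed entities, releasing unit-of-work state
}

// In real CDI code, a disposer would sit next to the producer, e.g.:
//
//   @Produces
//   @RequestScoped
//   protected EntityManager createEntityManager() {
//       return this.entityManager;
//   }
//
//   protected void disposeEntityManager(@Disposes EntityManager em) {
//       em.clear(); // hypothetical cleanup; releases detached-entity caches
//   }
//
// The simulation below shows the same lifecycle without a container.
public class DisposerSketch {

    /** Test double that records whether clear() was ever invoked. */
    static final class TrackingEm implements EntityManager {
        boolean cleared = false;

        @Override
        public void clear() {
            cleared = true;
        }
    }

    public static void main(String[] args) {
        TrackingEm em = new TrackingEm();
        // ... request-scoped work using em would happen here ...
        // When the request scope ends, CDI calls the disposer, which clears:
        em.clear();
        System.out.println("cleared=" + em.cleared); // prints cleared=true
    }
}
```

The point of the sketch: without the disposer step, nothing ever flips `cleared`, which mirrors how the produced EntityManager's unit of work is never emptied.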
