Hi Rene,

Regarding the upgrade instruction docs improvement:
> Regarding the migration from 3.8 to 3.9, here there is a file for
> upgrade instructions in general:
> https://github.com/apache/james-project/blob/master/upgrade-instructions.md#390-version
> I must admit people need to figure out which upgrade concerns which
> product, if concerned or not, it's not really practical sorry. We got
> room for improvement here I think too.

Maybe we could think about a matrix table in upgrade-instructions.md:
- row(s): list of changes
- column(s): app variants (Distributed, JPA, Postgres...)

The value would be a boolean: affected by the change or not.

Quan

On Tue, Apr 7, 2026 at 3:47 PM Rene Cordier <[email protected]> wrote:

> Hi Gilberto,
>
> I'm not much of a JPA expert, but I will try to answer to the best of my
> abilities, hoping it helps you.
>
> I think the EmbeddedActiveMQ class creates a default ActiveMQ broker
> with default storage settings, which must be around 20 MB. The attachment
> from your user seems slightly bigger than this limit, filling up the
> queue. James then keeps crashing, I guess, as the message stays in the
> queue.
>
> Is the faulty email important to keep or not? If not, you can try to
> clear out the queue. The ActiveMQ data should be inside the container at
> /var/store/activemq.
>
> I think you can try to clean what's in
> /var/store/activemq/brokers/KahaDB/* (cf.
> https://github.com/apache/james-project/blob/3.8.x/server/queue/queue-activemq/src/main/java/org/apache/james/queue/activemq/EmbeddedActiveMQ.java
> )
>
> I guess you could back up the data as well, from /var/store/activemq,
> and reinject it into an external volume with more space, which you can
> then mount at /var/store/activemq?
>
> There is unfortunately no configuration for ActiveMQ storage space. The
> configuration file would be activemq.properties, but I think it only
> covers some metrics configuration so far. Maybe that could be a nice
> contribution to propose?
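To make the clear-out step above a bit more concrete, here is a rough sketch of "back up the KahaDB data, then leave the store empty" in one move. This is an untested sketch: the /var/store/activemq paths are the ones Rene mentions above and should be verified against your container, James must be fully stopped first, and note that whatever mail is still sitting in queue://spool (including the oversized message) is lost once the store is cleared.

```python
import shutil
from pathlib import Path

def backup_and_clear_kahadb(store_dir="/var/store/activemq/brokers/KahaDB",
                            backup_dir="/var/store/activemq-backup/KahaDB"):
    """Move the KahaDB journal files aside so the embedded broker
    restarts with an empty (and therefore no longer full) store.

    Run only while James is stopped. Mail still spooled in the store
    is lost (it survives only as raw files in backup_dir).
    """
    src, dst = Path(store_dir), Path(backup_dir)
    dst.mkdir(parents=True, exist_ok=True)
    moved = []
    for entry in sorted(src.iterdir()):
        # Moving (rather than deleting) doubles as the backup step.
        shutil.move(str(entry), str(dst / entry.name))
        moved.append(entry.name)
    return moved
```

The same effect can of course be had with a plain `mv` inside the container; the point is just: stop James, move everything out of the KahaDB directory, restart.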
> Regarding the migration from 3.8 to 3.9, there is a file for upgrade
> instructions in general:
> https://github.com/apache/james-project/blob/master/upgrade-instructions.md#390-version
>
> I must admit people need to figure out which upgrade concerns which
> product, whether they are concerned or not; it's not really practical,
> sorry. We've got room for improvement here too, I think.
>
> Let me know if this helps you.
>
> Good luck :)
>
> Rene.
>
> On 4/3/26 08:05, Gilberto Espinoza wrote:
> > Hi Apache James Brain Trust,
> >
> > I am having issues with my James server. I am running James 3.8, JPA,
> > in a Docker container. A user was attempting to email a large video
> > file, and that seems to have crashed the server. I get this message in
> > the log file:
> >
> > [WARN ] o.a.a.b.r.Queue - Usage(Main:store:queue://spool:store)
> > percentUsage=100%, usage=20779975, limit=20765548,
> > percentUsageMinDelta=1%;Parent:Usage(Main:store) percentUsage=100%,
> > usage=20779975, limit=20765548, percentUsageMinDelta=1%: Persistent store
> > is Full, 100% of 20765548. Stopping producer
> > (ID:7a04cd6de807-43041-1775136440917-5:2:1:1) to prevent flooding
> > queue://spool.
> >
> > I am struggling to figure out how to resolve this issue (I found this
> > link, but I am unable to figure out where the ActiveMQ files are located):
> > https://knowledge.informatica.com/s/article/Persistent-store-is-Full-100-of-Stopping-producer-ID-to-prevent-flooding-error-in-DataExchange-logs?language=en_US
> >
> > I would appreciate your help in resolving this issue (restarting the
> > server has not helped). How do you stop the "producer"?
> >
> > Also, where can I find the configuration file for ActiveMQ?
> >
> > I would also like to know how to back up the database files, if that is
> > possible?
> >
> > Is there any documentation on how to migrate from 3.8 to 3.9?
> >
> > I look forward to any feedback.
> > Best regards,
> >
> > Gil Espinoza

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
