Hi Justin,

We are currently using JDBC persistence, and the time taken for 500k
transactions to be enqueued and dequeued is almost 4.5 hours, which is huge
compared to ActiveMQ (1.5 hours). My broker.xml is below; if you have any
suggestions for its parameters, please let me know. I agree that file storage
is faster, but this seems like a huge difference.

I tried tweaking journal-min-files and journal-pool-files, but no luck.
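(For what it's worth, journal-min-files and journal-pool-files tune the file-based journal, so they would not be expected to affect a database-store. A sketch of JDBC-specific knobs to experiment with instead — jdbc-journal-sync-period appears in the Artemis configuration reference, and the values below are illustrative assumptions, not recommendations:)

```xml
<store>
   <database-store>
      <!-- how often (ms) buffered journal operations are synced to the
           database; a larger period batches more writes per sync at the
           cost of a larger window of unsynced data -->
      <jdbc-journal-sync-period>5</jdbc-journal-sync-period>
      <!-- JDBC network timeout in ms -->
      <jdbc-network-timeout>20000</jdbc-network-timeout>
   </database-store>
</store>
```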

<configuration xmlns="urn:activemq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:xi="http://www.w3.org/2001/XInclude"
               xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">

   <core xmlns="urn:activemq:core"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="urn:activemq:core ">

      <name>0.0.0.0</name>

      <store>
         <database-store>
            <!-- The most efficient persistence layer for Artemis is the file-store;
                 however, if you require a database please refer to your database
                 provider for any database-specific questions.
                 We don't endorse any specific JDBC provider. Derby is
                 provided by default for demonstration purposes. -->
            <jdbc-driver-class-name>org.postgresql.Driver</jdbc-driver-class-name>
            <jdbc-connection-url>jdbc:postgresql://pxinfra-development-artemis.cluster-cvresftoa15j.us-east-2.rds.amazonaws.com:5432/ce-3cd7123d-99c3-4ca3-8906-a44ba979956f-batch-queue?currentSchema=provenir&amp;user=ce-3cd7123d-99c3-4ca3-8906-a44ba979956f-batch&amp;password=67aCwLeaki4S1a145LaOeo8!Jd#$u^p*b*w1HoR*&amp;maximumPoolSize=100</jdbc-connection-url>
            <message-table-name>MESSAGES</message-table-name>
            <bindings-table-name>BINDINGS</bindings-table-name>
            <large-message-table-name>LARGE_MESSAGES</large-message-table-name>
            <page-store-table-name>PAGE_STORE</page-store-table-name>
            <node-manager-store-table-name>NODE_MANAGER_STORE</node-manager-store-table-name>
            <jdbc-lock-expiration>10000</jdbc-lock-expiration>
            <jdbc-lock-renew-period>2000</jdbc-lock-renew-period>
            <jdbc-network-timeout>20000</jdbc-network-timeout>
         </database-store>
      </store>

      <persistence-enabled>true</persistence-enabled>

      <!-- this could be ASYNCIO, MAPPED, NIO
           ASYNCIO: Linux Libaio
           MAPPED: mmap files
           NIO: Plain Java Files
       -->
      <journal-type>NIO</journal-type>

      <paging-directory>data/paging</paging-directory>

      <bindings-directory>data/bindings</bindings-directory>

      <journal-directory>data/journal</journal-directory>

      <large-messages-directory>data/large-messages</large-messages-directory>


      <!-- if you want to retain your journal, uncomment the following configuration.

      This will allow your system to keep 7 days of your data, up to 10G.
      Tweak it according to your use case and capacity.

      It is recommended to use a separate storage unit from the journal for
      performance considerations.

      <journal-retention-directory period="7" unit="DAYS" storage-limit="10G">data/retention</journal-retention-directory>

      You can also enable retention by using the argument journal-retention
      on the `artemis create` command -->



      <journal-datasync>true</journal-datasync>

      <journal-min-files>10</journal-min-files>

      <journal-pool-files>100</journal-pool-files>

      <journal-device-block-size>4096</journal-device-block-size>

      <journal-file-size>10M</journal-file-size>
      <!--
        You can verify the network health of a particular NIC by specifying
        the <network-check-NIC> element.
        <network-check-NIC>theNicName</network-check-NIC>
      -->

      <!--
        Use this to use an HTTP server to validate the network
        <network-check-URL-list>http://www.apache.org</network-check-URL-list>
      -->

      <!-- <network-check-period>10000</network-check-period> -->
      <!-- <network-check-timeout>1000</network-check-timeout> -->

      <!-- this is a comma-separated list, no spaces, just DNS names or IPs;
           it should accept IPv6

           Warning: Make sure you understand your network topology, as this
           is meant to validate that your network is valid. Using IPs that
           could eventually disappear or be only partially visible may defeat
           the purpose. You can use a list of multiple IPs, and any
           successful ping will make the server OK to continue running -->
      <!-- <network-check-list>10.0.0.1</network-check-list> -->

      <!-- use this to customize the ping used for IPv4 addresses -->
      <!-- <network-check-ping-command>ping -c 1 -t %d %s</network-check-ping-command> -->

      <!-- use this to customize the ping used for IPv6 addresses -->
      <!-- <network-check-ping6-command>ping6 -c 1 %2$s</network-check-ping6-command> -->




      <!-- how often to check how many bytes are being used on the disk, in ms -->
      <disk-scan-period>5000</disk-scan-period>

      <!-- once the disk hits this limit the system will block, or close the
           connection for protocols that don't support flow control. -->
      <max-disk-usage>90</max-disk-usage>

      <!-- should the broker detect deadlocks and other issues -->
      <critical-analyzer>true</critical-analyzer>

      <critical-analyzer-timeout>120000</critical-analyzer-timeout>

      <critical-analyzer-check-period>60000</critical-analyzer-check-period>

      <critical-analyzer-policy>HALT</critical-analyzer-policy>



      <!-- the system will enter page mode once you hit this limit.
           This is an estimate, in bytes, of how much memory the messages
           are using.

           The system will use half of the available memory (-Xmx) by default
           for the global-max-size. You may specify a different value here if
           you need to customize it to your needs.

           <global-max-size>100Mb</global-max-size>
      -->

      <acceptors>

         <!-- useEpoll means: it will use Netty epoll if you are on a system
              (Linux) that supports it -->
         <!-- amqpCredits: The number of credits sent to AMQP producers -->
         <!-- amqpLowCredits: The server will send the # credits specified at
              amqpCredits at this low mark -->
         <!-- amqpDuplicateDetection: If you are not using duplicate
              detection, set this to false, as duplicate detection requires
              applicationProperties to be parsed on the server. -->
         <!-- amqpMinLargeMessageSize: Determines how many bytes are
              considered large, so we start using files to hold their data.
              default: 102400; -1 would mean disabling large message control -->

         <!-- Note: If an acceptor needs to be compatible with HornetQ
              and/or Artemis 1.x clients, add
              "anycastPrefix=jms.queue.;multicastPrefix=jms.topic." to the
              acceptor url. See
              https://issues.apache.org/jira/browse/ARTEMIS-1644 for more
              information. -->


         <!-- Acceptor for every supported protocol -->
         <acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true;supportAdvisory=false;suppressInternalManagementObjects=false</acceptor>

         <!-- AMQP Acceptor. Listens on the default AMQP port for AMQP traffic. -->
         <acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpMinLargeMessageSize=102400;amqpDuplicateDetection=true</acceptor>

         <!-- STOMP Acceptor. -->
         <acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor>

         <!-- HornetQ Compatibility Acceptor. Enables HornetQ Core and STOMP
              for legacy HornetQ clients. -->
         <acceptor name="hornetq">tcp://0.0.0.0:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor>

         <!-- MQTT Acceptor -->
         <acceptor name="mqtt">tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor>

      </acceptors>


      <security-settings>
         <security-setting match="#">
            <permission type="createNonDurableQueue" roles="amq"/>
            <permission type="deleteNonDurableQueue" roles="amq"/>
            <permission type="createDurableQueue" roles="amq"/>
            <permission type="deleteDurableQueue" roles="amq"/>
            <permission type="createAddress" roles="amq"/>
            <permission type="deleteAddress" roles="amq"/>
            <permission type="consume" roles="amq"/>
            <permission type="browse" roles="amq"/>
            <permission type="send" roles="amq"/>
            <!-- we need this otherwise ./artemis data imp wouldn't work -->
            <permission type="manage" roles="amq"/>
         </security-setting>
      </security-settings>

      <address-settings>
         <!-- if you define auto-create on certain queues, management has to
              be auto-create -->
         <address-setting match="activemq.management#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!-- with -1 only the global-max-size is in use for limiting -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>
         <!--default for catch all-->
         <address-setting match="#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!-- with -1 only the global-max-size is in use for limiting -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
            <auto-delete-queues>false</auto-delete-queues>
            <auto-delete-addresses>false</auto-delete-addresses>
         </address-setting>
      </address-settings>

      <addresses>
         <address name="DLQ">
            <anycast>
               <queue name="DLQ" />
            </anycast>
         </address>
         <address name="ExpiryQueue">
            <anycast>
               <queue name="ExpiryQueue" />
            </anycast>
         </address>

      </addresses>


      <!-- Uncomment the following if you want to use the standard
           LoggingActiveMQServerPlugin plugin to log events
      <broker-plugins>
         <broker-plugin class-name="org.apache.activemq.artemis.core.server.plugin.impl.LoggingActiveMQServerPlugin">
            <property key="LOG_ALL_EVENTS" value="true"/>
            <property key="LOG_CONNECTION_EVENTS" value="true"/>
            <property key="LOG_SESSION_EVENTS" value="true"/>
            <property key="LOG_CONSUMER_EVENTS" value="true"/>
            <property key="LOG_DELIVERING_EVENTS" value="true"/>
            <property key="LOG_SENDING_EVENTS" value="true"/>
            <property key="LOG_INTERNAL_EVENTS" value="true"/>
         </broker-plugin>
      </broker-plugins>
      -->

   </core>
</configuration>

-----Original Message-----
From: Justin Bertram <jbert...@apache.org>
Sent: Wednesday, September 13, 2023 10:59 PM
To: users@activemq.apache.org
Subject: Re: Artemis File Storage Persistence vs JDBC Persistence

If you are sending one or more durable messages in your transaction, the
broker will attempt to write them to storage when they arrive. If the broker
fails to write the messages, for whatever reason, the sender will receive an
exception when it attempts to commit the transaction. If the broker succeeds
in writing the messages to storage but stops for whatever reason before the
messages are consumed, then when the broker restarts those messages will be
loaded from storage and made available to consumers.

This is the same whether you're using the file-based journal or a database
to store the messages.
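A toy sketch of the guarantee described here (plain Python, not Artemis code): a commit writes to storage before succeeding, and a restarted broker reloads whatever was committed but not yet consumed. The class and method names are illustrative only.

```python
class Storage:
    """Stands in for the persistent store (file journal or JDBC tables);
    its contents survive broker restarts."""
    def __init__(self):
        self.records = []

class Broker:
    def __init__(self, storage):
        self.storage = storage
        # on startup, reload messages that were committed but never consumed
        self.queue = list(storage.records)

    def commit(self, messages):
        # the storage write happens before the commit succeeds, so a failed
        # write would surface as an exception to the sender
        self.storage.records.extend(messages)
        self.queue.extend(messages)

    def consume(self):
        msg = self.queue.pop(0)
        self.storage.records.remove(msg)
        return msg

storage = Storage()
broker = Broker(storage)
broker.commit(["m1", "m2"])        # sender commits a transaction of 2 messages
assert broker.consume() == "m1"    # one message is consumed...

broker = Broker(storage)           # ...then the broker "crashes" and restarts
assert broker.consume() == "m2"    # the unconsumed message survives the restart
```

The point of the model is that the choice of storage backend changes the speed of `commit`, not the semantics: either way, nothing is acknowledged to the sender until it is durable.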


Justin

On Wed, Sep 13, 2023 at 11:57 AM Shivang Modi <sm...@provenir.com.invalid>
wrote:

> Hi Justin,
>
> We are using the Artemis Docker image and starting Kubernetes pods with it.
> We have one sender that writes messages to a queue and one receiver that
> reads messages from the queue.
> Now suppose that, for whatever reason, the Kubernetes queue pod gets
> restarted. For transactions that were enqueued by the sender but not yet
> read by the receiver before the restart, will those be persisted with file
> storage? And if yes, is there any scenario with file storage where there is
> a chance of losing transactions?
>
> Thanks,
> Shivang.
>
> -----Original Message-----
> From: Justin Bertram <jbert...@apache.org>
> Sent: Wednesday, September 13, 2023 10:23 PM
> To: users@activemq.apache.org
> Subject: Re: Artemis File Storage Persistence vs JDBC Persistence
>
> I'm not really sure what you're asking. Are you asking whether you
> should use the file-based journal or a database if you have 100k
> transactions?
>
> To be clear, what is "best" in one situation is often not "best" in
> another.
> Everything depends on the specifics of your particular use-case.
>
>
> Justin
>
> On Wed, Sep 13, 2023 at 11:47 AM Shivang Modi
> <sm...@provenir.com.invalid>
> wrote:
>
> > The scenario is 100% no loss of transactions: if the queue goes down,
> > whatever transactions were enqueued should get dequeued once the queue
> > comes back up. We have 100k transactions or more that need to flow
> > through the queue. What would be best in such scenarios?
> >
> > Thanks,
> > Shivang
> >
> > -----Original Message-----
> > From: Justin Bertram <jbert...@apache.org>
> > Sent: Wednesday, September 13, 2023 8:38 PM
> > To: users@activemq.apache.org
> > Subject: Re: Artemis File Storage Persistence vs JDBC Persistence
> >
> > When deciding between the file-based journal on local storage versus
> > a remote database I think the three main considerations are:
> >
> >  - Performance
> >  - Infrastructure
> >  - Reliability
> >
> > The file-based journal on local storage will be faster than a
> > database for a few reasons:
> >  - The storage is local so there's no network latency to deal with.
> >  - The file-based journal was specifically written and heavily
> > optimized for the message broker use-case.
> >
> > The file-based journal on local storage requires less infrastructure
> > than a database since most servers already come with local storage.
> > Using a database requires provisioning additional hardware as well
> > as installing and maintaining a distinct piece of software. This can
> > be costly both in terms of money and man-power.
> >
> > Generally speaking, local storage is always going to be more
> > reliable than a remote database simply because it's much simpler
> > (i.e. no network, no database with its own maintenance requirements,
> > etc.).
> > This simplicity tends to reduce downtime.
> >
> > In my experience the only folks who choose to use a database are
> > those in an environment where there's already been a substantial
> > investment in an enterprise database and stuff like automated
> > backups, redundant networking, data replications, etc. are available.
> >
> > No matter which option you choose, the broker is written so that you
> > should
> > *never* lose messages.
> >
> >
> > Justin
> >
> >
> >
> > On Wed, Sep 13, 2023 at 7:14 AM Shivang Modi
> > <sm...@provenir.com.invalid>
> > wrote:
> >
> > > Hi Team,
> > >
> > >
> > >
> > > Can anyone share in-depth pros and cons of both? I see only that
> > > file storage is faster than JDBC storage. Is there any
> > > disadvantage of file storage, like losing the enqueued data or
> > > anything?
> > >
> > >
> > >
> > > Thanks,
> > >
> > > Shivang.
> > >
> > > --
> > > *This e-mail contains PRIVILEGED AND CONFIDENTIAL INFORMATION
> > > intended solely for the use of the addressee(s). If you are not
> > > the intended recipient, please notify the sender by e-mail and
> > > delete the original message. Further, you are not to copy,
> > > disclose, or distribute this e-mail or its contents to any other
> > > person and any such actions maybe unlawful*.
> > > This e-mail may contain viruses. Provenir has taken every
> > > reasonable precaution to minimize this risk, but is not liable for
> > > any damage you may sustain as a result of any virus in this
> > > e-mail. You should carry out your own virus checks before opening
> > > the e-mail or
> attachment.
> > > Provenir reserves the right to monitor and review the content of
> > > all messages sent to or from this e-mail address. Messages sent to
> > > or from this e-mail address may be stored on the Provenir e-mail
> > > system.
> > >
> >
>

