[ https://issues.apache.org/jira/browse/QPID-8692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17995725#comment-17995725 ]
Robert Godfrey commented on QPID-8692:
--------------------------------------

Can you reliably reproduce this with a fixed set of data? It would be useful for us to have a way to debug why this is occurring. Obviously this is a bug, but in general the queue entries should be unique by definition - each entry is basically just a record of the queue id and message id, both Qpid-generated identifiers (e.g. sending the same e-mail message more than once will result in the messages having different "message ids" as far as Qpid is concerned). Clearly this combination needs to be unique - a message cannot be in the same queue more than once.

It might help clarify things if you enable DEBUG level logging on org.apache.qpid.store.derby.DerbyMessageStore.

re: "The only weird messages we see is about Suspension" - this is normal; it basically means that the broker has temporarily stopped sending messages to the consumer because the consumer is not consuming them as fast as the broker can send them. This is not something that the Derby layer will even be aware of.

> [Broker-J] Failed to enqueue messages caused by duplicate key value in on
> 'QPID_QUEUE_ENTRIES'
> ----------------------------------------------------------------------------------------------
>
>                 Key: QPID-8692
>                 URL: https://issues.apache.org/jira/browse/QPID-8692
>             Project: Qpid
>          Issue Type: Bug
>          Components: Broker-J
>    Affects Versions: qpid-java-broker-9.2.0
>         Environment: java version : openjdk 21.0.6 2025-01-21 LTS
> Red Hat Enterprise Linux 8.10
> qpid broker: qpid-broker-9.2.0
>            Reporter: Vijay Danda
>            Priority: Major
>         Attachments: derby_logs_message5119.txt, qpid-complete-run.log, qpid_log.txt
>
>
> Use case: We use a Qpid instance to drop email messages into the queue. Clients connect to the Qpid instance, read those email messages and send them to an SMTP server.
> Issue: When sending emails on the second day, we saw the following error in the Qpid logs:
> {code:java}
> java.sql.BatchUpdateException: The statement was aborted because it would have caused a duplicate key value in a unique or primary key constraint or unique index identified by 'SQL200930134600720' defined on 'QPID_QUEUE_ENTRIES'.
>         at org.apache.derby.impl.jdbc.EmbedStatement.executeLargeBatch(Unknown Source){code}
> Complete Qpid logs are attached. See qpid_log.txt.
> Test case:
> We sent around 5000 email messages. We observed that for each message there is an insert into QPID_QUEUE_ENTRIES and a subsequent delete from QPID_QUEUE_ENTRIES. However, when we connect to the database and inspect the QPID_QUEUE_ENTRIES table, there are still around 1520 records in it. If we send more messages the next day, we run into the above error. The attached derby_logs_message5119.txt is the Derby log with trace statements for message id 5119.
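For anyone wanting to check which rows are left behind after a run, a minimal offline inspection sketch follows. It is only an illustration, not part of the broker: the store path and the QueueEntriesInspector class name are hypothetical, and the queue_id/message_id column names are an assumption based on the description of the table above, so check them against your actual schema. The broker must be stopped first, since the embedded Derby database can only be opened by one JVM at a time.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Offline look at leftover rows in QPID_QUEUE_ENTRIES.
// Run with the Derby jars on the classpath and the broker stopped.
public class QueueEntriesInspector
{
    public static void main(String[] args) throws Exception
    {
        // Placeholder path - point this at the virtual host's Derby store directory.
        String url = "jdbc:derby:/path/to/qpid/work/default/messageStore";

        try (Connection con = DriverManager.getConnection(url);
             Statement stmt = con.createStatement();
             // queue_id / message_id are assumed column names; adjust if your schema differs.
             ResultSet rs = stmt.executeQuery(
                     "SELECT queue_id, message_id FROM QPID_QUEUE_ENTRIES ORDER BY queue_id, message_id"))
        {
            int count = 0;
            while (rs.next())
            {
                System.out.println(rs.getString(1) + " / " + rs.getLong(2));
                count++;
            }
            System.out.println("Leftover queue entries: " + count);
        }
    }
}
{code}

Listing the leftover message ids this way and matching them against the insert/delete trace in the attached derby_logs_message5119.txt should make it easier to see which entries were inserted but apparently never deleted.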