[ 
https://issues.apache.org/jira/browse/QPID-8692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17995732#comment-17995732
 ] 

Robert Godfrey commented on QPID-8692:
--------------------------------------

Just to add, your SQL log implies that message 5119 should have been deleted:


{code:java}
Wed Apr 16 10:04:37 EDT 2025 Thread[#48,IO-/127.0.0.1:39656,5,main] (XID = 
117512053), (SESSIONID = 18617), (DATABASE = /path), (DRDAID = null), Executing 
prepared statement: DELETE FROM QPID_QUEUE_ENTRIES WHERE queue_id = ? AND 
message_id =? :End prepared statement with 2 parameters begin parameter #1: 
2d7bab30-e9aa-4a4d-9541-0623fb55bdce :end parameter begin parameter #2: 5119 
:end parameter

Wed Apr 16 10:04:37 EDT 2025 Thread[#111,yhr-store-1,5,main] (XID = 117512053), 
(SESSIONID = 18617), (DATABASE = /path), (DRDAID = null), Committing
Wed Apr 16 10:04:37 EDT 2025 Thread[#111,yhr-store-1,5,main] (XID = 117512053), 
(SESSIONID = 18617), (DATABASE = /path), (DRDAID = null), Rolling back
{code}

I don't know why you get that final "Rolling back", but since the prior line 
says the transaction was committed, I would have expected the row to have 
been deleted from the table.
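To illustrate that reasoning: once a transaction commits, a later rollback only affects the (empty) transaction that follows, so it cannot undo the committed delete. A minimal sketch of that semantics, using Python's sqlite3 as a self-contained stand-in for Derby and a simplified two-column version of QPID_QUEUE_ENTRIES (the real Broker-J schema has more columns):

```python
import sqlite3

# In-memory stand-in for the Derby store. QPID_QUEUE_ENTRIES is simplified
# to (queue_id, message_id); this illustrates generic SQL transaction
# semantics, not Broker-J's actual schema.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE QPID_QUEUE_ENTRIES (queue_id TEXT, message_id INTEGER, "
    "PRIMARY KEY (queue_id, message_id))"
)
conn.execute(
    "INSERT INTO QPID_QUEUE_ENTRIES VALUES (?, ?)",
    ("2d7bab30-e9aa-4a4d-9541-0623fb55bdce", 5119),
)
conn.commit()

# The prepared-statement delete from the trace, followed by the commit ...
conn.execute(
    "DELETE FROM QPID_QUEUE_ENTRIES WHERE queue_id = ? AND message_id = ?",
    ("2d7bab30-e9aa-4a4d-9541-0623fb55bdce", 5119),
)
conn.commit()

# ... and then a rollback, as in the trace. The rollback applies only to
# the new, empty transaction, so the committed delete stands.
conn.rollback()

remaining = conn.execute("SELECT COUNT(*) FROM QPID_QUEUE_ENTRIES").fetchone()[0]
print(remaining)  # 0 -- the row stays deleted despite the later rollback
```

If rows for already-delivered messages are still present the next day, something other than this commit/rollback sequence must be leaving them behind.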

Also, apologies, I see you are actually running Derby as an external database 
server rather than embedded into Qpid, in which case the logging would be on 
{code:java}
org.apache.qpid.server.store.jdbc.GenericJDBCMessageStore
{code}
*not* 
{code:java}
org.apache.qpid.server.store.derby.DerbyMessageStore
{code}
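For example, if you are tuning log levels through a plain logback.xml (Broker-J can also manage loggers through its own Broker Logger configuration, so the exact mechanism depends on your setup), a hypothetical fragment targeting the JDBC store class would look like:

{code:xml}
<!-- Hypothetical logback.xml fragment: enable debug logging for the
     generic JDBC message store, which is the store class in use when
     Derby runs as an external database server. -->
<logger name="org.apache.qpid.server.store.jdbc.GenericJDBCMessageStore"
        level="DEBUG"/>
{code}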

Apologies - it has been over 10 years since we wrote this code :-)



> [Broker-J] Failed to enqueue messages caused by duplicate key value in on 
> 'QPID_QUEUE_ENTRIES'
> ----------------------------------------------------------------------------------------------
>
>                 Key: QPID-8692
>                 URL: https://issues.apache.org/jira/browse/QPID-8692
>             Project: Qpid
>          Issue Type: Bug
>          Components: Broker-J
>    Affects Versions: qpid-java-broker-9.2.0
>         Environment: java version : openjdk 21.0.6 2025-01-21 LTS
> Red Hat Enterprise Linux 8.10
> qpid broker: qpid-broker-9.2.0
>            Reporter: Vijay Danda
>            Priority: Major
>         Attachments: derby_logs_message5119.txt, qpid-complete-run.log, 
> qpid_log.txt
>
>
> Use case: We use a Qpid instance to drop email messages into a queue. 
> Clients connect to the Qpid instance, read those email messages, and send 
> them to an SMTP server.
> Issue: When sending emails on the second day, we saw the following error in 
> the qpid logs
> {code:java}
> java.sql.BatchUpdateException: The statement was aborted because it would 
> have caused a duplicate key value in a unique or primary key constraint or 
> unique index identified by 'SQL200930134600720' defined on 
> 'QPID_QUEUE_ENTRIES'.
>         at 
> org.apache.derby.impl.jdbc.EmbedStatement.executeLargeBatch(Unknown 
> Source){code}
> Complete qpid logs are attached. See qpid_log.txt
> Test case:
> We sent around 5000 email messages. We observed that for each message there 
> is an insert into QPID_QUEUE_ENTRIES and a subsequent delete from 
> QPID_QUEUE_ENTRIES. However, when we connect to the database and inspect 
> the QPID_QUEUE_ENTRIES table, there are still around 1520 records in it. 
> If we send more messages the next day, we run into the above error message. 
> The attached derby_logs_message5119.txt contains the derby log with trace 
> statements for message id 5119.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
