Hi Gustavo,

There's a great deal of noise in this log that ManifoldCF has nothing to do
with.  Did you turn on logging for the JDBC driver?  If so, can you turn it
off?

I *do* see signs that the forensics ran:

October 23rd 2018, 18:22:41.662 message:DEBUG 2018-10-23T18:22:57,492
(Worker thread '5') - ==== Forensics for record 1540306481078, current
status: 12 ==== @version:1 @timestamp:October 23rd 2018, 18:22:41.662
ALCH_TENANT_ID:9c4ba694-9cf4-440c-8b63-d4f46ee61e5b type:syslog
timestamp:October 23rd 2018, 18:22:57.493
application_id_str:74f6767d-fcdd-43e2-8d85-521844f1fef0
message_type_str:OUT source_id_str:APP
space_id_str:9c4ba694-9cf4-440c-8b63-d4f46ee61e5b
org_id_str:666eed00-59f5-4800-8775-b816fa85b915 app_name_str:apwmcf
space_name_str:apps_pre instance_id_str:0
org_name_str:PRE_INTRANET_CD1 origin_str:rep tags_array:log,
unknown_msg_format _id:AWahvHx_nkO5H-t2xNdM _type:syslog
_index:logstash-2018.10.23 _score:


But here is what you will need to do: when you see the "Forensics for..."
message, grep the log for the displayed record ID and include the last
chunk of those entries in your log dump.  What we see when the database
isn't working right is a clear indication that the record state was
changed and committed, and then later the record state holds a different
value, which should be impossible.  It's as if the database forgets how
to do ACID properly.
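As a minimal sketch of that grep (the log file name manifoldcf.log and the
record ID below are placeholders for illustration; substitute your own):

```shell
# Placeholders -- use your actual log path and the record ID printed in
# the "Forensics for record ..." message.
LOG=manifoldcf.log
RECORD_ID=1540306481078

# Collect every line mentioning the record, keeping only the last chunk
# (here, the last 200 matching lines) so the dump stays manageable.
grep "$RECORD_ID" "$LOG" | tail -n 200 > "forensics-$RECORD_ID.txt"
```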


Karl



On Wed, Oct 24, 2018 at 5:35 AM Gustavo Beneitez <gustavo.benei...@gmail.com>
wrote:

> Hi Karl,
>
> we reproduced the issue yesterday and managed to get several logs.
> The exact error was:
>
> Error: Unexpected jobqueue status - record id 1540306481078, expecting
> active status, saw 12
>
> So we performed a new search on the database, and this is what we got:
> [image: mail1.png]
>
> Status 'G', which means STATUS_PENDINGPURGATORY:
> [image: mail2.png]
> Please find enclosed a log extraction filtered by this record; on line 53
> there is an ERROR message "Unexpected jobqueue status - record id
> 1540306481078, expecting active status, saw 12".  I don't know if that's
> what you need, or whether we also have to increase the general log level.
>
> Thanks in advance.
>
>
> El mar., 23 oct. 2018 a las 14:28, Gustavo Beneitez (<
> gustavo.benei...@gmail.com>) escribió:
>
>> Thanks Karl, we are going to run new crawls with that property enabled
>> and will get back to you.
>>
>> El mar., 23 oct. 2018 a las 10:09, Karl Wright (<daddy...@gmail.com>)
>> escribió:
>>
>>> Add this to your properties.xml:
>>>
>>> <property name="org.apache.manifoldcf.diagnostics" value="DEBUG"/>
>>>
>>> This keeps stuff in memory and dumps a lot to the log as well.
>>>
>>> I'm afraid that groveling through the logs after a failure to confirm
>>> it's the same kind of thing we've seen before takes many hours.  I can only
>>> promise to do this when I have the time.
>>>
>>> Karl
>>>
>>>
>>> On Tue, Oct 23, 2018 at 2:34 AM Gustavo Beneitez <
>>> gustavo.benei...@gmail.com> wrote:
>>>
>>>> Hi Karl,
>>>>
>>>> MySQL.  As per the config variables:
>>>> version  5.7.23-log
>>>> version_comment  MySQL Community Server (GPL)
>>>>
>>>> In which file should I enable logging/debugging?
>>>>
>>>> Thanks!
>>>>
>>>> El lun., 22 oct. 2018 a las 21:36, Karl Wright (<daddy...@gmail.com>)
>>>> escribió:
>>>>
>>>>> Hi Gustavo,
>>>>>
>>>>> I have seen this error before; it is apparently due to the database
>>>>> failing to properly gate transactions and behave according to the
>>>>> concurrency model selected for a transaction.  We have a debugging
>>>>> setting you can configure that logs the needed information so that
>>>>> forensics get dumped; when they do, it's apparent what is happening.
>>>>>
>>>>> Note well that I have never been able to make this problem appear
>>>>> here, so I suspect that the issue is related to network latency or some
>>>>> other external factor I cannot easily reproduce.
>>>>>
>>>>> Just so I know -- what database is this?  The place where we've seen
>>>>> this is PostgreSQL; later versions of MySQL do not seem to have the issue.
>>>>>
>>>>> Thanks,
>>>>> Karl
>>>>>
>>>>>
>>>>> On Mon, Oct 22, 2018 at 1:44 PM Gustavo Beneitez <
>>>>> gustavo.benei...@gmail.com> wrote:
>>>>>
>>>>>> Hi Karl,
>>>>>>
>>>>>> lately we have been facing job status problems.  After a few minutes
>>>>>> the job ends suddenly, always in the same way:
>>>>>>
>>>>>> Error: Unexpected jobqueue status - record id 1539339908660,
>>>>>> expecting active status, saw 2
>>>>>> Error: Unexpected jobqueue status - record id 1539291541171,
>>>>>> expecting active status, saw 2
>>>>>> Error: Unexpected jobqueue status - record id 1539294182173,
>>>>>> expecting active status, saw 2
>>>>>> Error: Unexpected jobqueue status - record id 1539338949797,
>>>>>> expecting active status, saw 2
>>>>>>
>>>>>> I did some investigations and a select to the database after the
>>>>>> error appeared and BEFORE rerunning the job:
>>>>>>
>>>>>> SELECT * FROM `jobqueue` WHERE id = 1539336459053 and jobid =
>>>>>> 1539269973731
>>>>>>
>>>>>>
>>>>>> it returned status = 'G'
>>>>>>
>>>>>>
>>>>>> After the run was repeated, it finished OK, and the same query
>>>>>> returned status = 'C'.
>>>>>>
>>>>>> I don't understand much about the "active" workers, but it seems the
>>>>>> item is processed twice.  Do you have an idea of what we should
>>>>>> investigate?
>>>>>>
>>>>>>
>>>>>> Thanks in advance!
>>>>>>
>>>>>
