On 2012-08-22 15:06, Jan Stancek wrote:
>
> ----- Original Message -----
>> From: "Kang Kai"<[email protected]>
>> To: "Jan Stancek"<[email protected]>
>> Cc: [email protected], [email protected], "Zhenfeng 
>> Zhao"<[email protected]>
>> Sent: Wednesday, 22 August, 2012 4:29:25 AM
>> Subject: Re: [LTP] [PATCH] pthread_detach/4-3: workaround for segment fault
>>
>> On 2012-08-21 18:35, Jan Stancek wrote:
>>> ----- Original Message -----
>>>> From: "Kang Kai"<[email protected]>
>>>> To: "Jan Stancek"<[email protected]>
>>>> Cc: [email protected], [email protected],
>>>> "Zhenfeng Zhao"<[email protected]>
>>>> Sent: Tuesday, 21 August, 2012 11:59:19 AM
>>>> Subject: Re: [LTP] [PATCH] pthread_detach/4-3: workaround for
>>>> segment fault
>>> <snip>
>>>
>>>> Hi Jan,
>>>>
>>>> Thanks.
>>>> I am sorry, it doesn't work; I still get a segmentation fault.
>>> Maybe we can narrow it down by running only a subset of scenarios.
>>> I would suggest trying to limit "sc" or "NSCENAR" and see which
>>> one triggers it.
>>>
>>> Another thing that looks suspicious is the altstack scenarios:
>>> there seems to be a small window where more than one thread can
>>> use the same altstack. Can you try to reproduce it without the
>>> altstack scenarios?
>>>
>>> diff --git
>>> a/testcases/open_posix_testsuite/conformance/interfaces/pthread_detach/4-3.c
>>> b/testcases/open_posix_testsuite/conforman
>>> index 5c15e93..63b6ee7 100644
>>> ---
>>> a/testcases/open_posix_testsuite/conformance/interfaces/pthread_detach/4-3.c
>>> +++
>>> b/testcases/open_posix_testsuite/conformance/interfaces/pthread_detach/4-3.c
>>> @@ -162,6 +162,11 @@ static void *test(void *arg)
>>>                   output("Starting test with scenario (%i): %s\n",
>>>                          sc, scenarii[sc].descr);
>>>    #endif
>> Hi Jan,
>>
>>> +               if (scenarii[sc].altstack) {
>>> +                       sc++;
>>> +                       sc %= NSCENAR;
>>> +                       continue;
>>> +               }
>>>
>> It fails with another error randomly, and output is:
>>
>> [09:45:16]System abilities:
>> [09:45:16] TSA: 200809
>> [09:45:16] TSS: 200809
>> [09:45:16] TPS: 200809
>> [09:45:16] pagesize: 4096
>> [09:45:16] min stack size: 16384
>> [09:45:16]WARNING: The TPS option is claimed to be supported but
>> setscope fails
>> [09:45:16]WARNING: The TPS option is claimed to be supported but
>> setscope fails
>> [09:45:16]WARNING: The TPS option is claimed to be supported but
>> setscope fails
>> [09:45:16]WARNING: The TPS option is claimed to be supported but
>> setscope fails
>> [09:45:16]WARNING: The TPS option is claimed to be supported but
>> setscope fails
>> [09:45:16]WARNING: The TPS option is claimed to be supported but
>> setscope fails
>> [09:45:16]WARNING: The TPS option is claimed to be supported but
>> setscope fails
>> [09:45:16]WARNING: The TPS option is claimed to be supported but
>> setscope fails
>> [09:45:16]WARNING: The TPS option is claimed to be supported but
>> setscope fails
>> [09:45:16]All 33 thread attribute objects were initialized
>>
>> [09:45:18]Test ../../../conformance/interfaces/pthread_detach/4-3.c
>> unresolved: got 12 (Cannot allocate memory) on line 179 (Failed to
>> create this thread)

Hi Jan,

> Is the above output from mips or x86_64?
> Are you running it as root user?

This was tested on x86_64 as an unprivileged user. I retested it as
root and the test passes.
It also passes on routerstation (mips) with your patch.
So it looks like a race condition between threads on the stack
attribute, right?

Thanks a lot.
Kai

>
> Regards,
> Jan
>
>>
>> Regards,
>> Kai
>>
>>>                   count_ope++;
>>>
>>>
>>> Regards,
>>> Jan
>>>
>>>> Regards,
>>>> Kai
>>>>
>>


_______________________________________________
Ltp-list mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/ltp-list
