Dear *.
I'm aware that the following is not really a mod_perl related mail, but I
already posted this question elsewhere (e.g. on perlmonks.org) without
luck, and I didn't find another community with comparable knowledge of
this topic.
Feel free to reply to me directly to keep this list clean.
I have to maintain a multithreaded (150+ threads) C++ application (running
as a 64-bit app on Solaris 10, SPARC, built with SunCC) which uses embedded
perl to allow easy modification of data (UTF-8 encoded strings) in certain
places. This works in general, but every now and then the application
crashes, even though the same perl code has been running fine for days.
The core dumps usually show a stack similar to this:
-- snip --
----------------- lwp# 114 / thread# 114 --------------------
ffffffff723b0e00 Perl_sv_clear
ffffffff723b16f4 Perl_sv_free
ffffffff723a7a94 S_visit
ffffffff723a7cd8 Perl_sv_clean_objs
ffffffff7232e414 perl_destruct
ffffffff7510568c __1cHrun_sub6Fpvpc_1_
ffffffff75106434 __1cTcEmbdPerlFilterListJrunFilter6MrknKipsCString_3r1_b_
...
-- / snip --
Currently the invocation of the perl filters is implemented as follows:
when a new (previously unencountered) perl script is to be loaded, the
following code is executed, allocating a new perl interpreter that is used
solely for this script:
-- snip --
void * new_perlFilter( char * pcScriptName )
{
    int   _iArgc     = 2;
    char* _ppcArgv[] = { "", NULL, NULL };
    _ppcArgv[1] = pcScriptName;

    pthread_mutex_lock(&cloneMux);
    PerlInterpreter * ppi = perl_alloc();
    PERL_SET_CONTEXT( ppi );
    perl_construct( ppi );
    /* perl_parse() returns non-zero if compiling the script failed */
    if( perl_parse( ppi, xs_init, _iArgc, _ppcArgv, (char **)NULL ) )
    {
        ppi = NULL;
    }
    pthread_mutex_unlock(&cloneMux);
    return ppi;
}
-- / snip --
There is a global, mutex-protected list of the instances returned by the
above function, keyed by script name.
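The registry itself isn't shown here; it is basically a map along the
following lines (just a sketch to illustrate the idea; the map type, the
mutex and the name get_perlFilter are placeholders, not the actual code):
-- snip --
#include <map>
#include <string>
#include <pthread.h>

static pthread_mutex_t filterListMux = PTHREAD_MUTEX_INITIALIZER;
// script name -> interpreter returned by new_perlFilter()
static std::map<std::string, void*> g_perlFilters;

void * get_perlFilter( char * pcScriptName )
{
    pthread_mutex_lock(&filterListMux);
    void * pPi = NULL;
    std::map<std::string, void*>::iterator _oItr =
        g_perlFilters.find(pcScriptName);
    if( _oItr != g_perlFilters.end() ) {
        pPi = _oItr->second;
    } else {
        // first encounter: parse the script into its own interpreter
        pPi = new_perlFilter( pcScriptName );
        if( pPi != NULL )
            g_perlFilters[pcScriptName] = pPi;
    }
    pthread_mutex_unlock(&filterListMux);
    return pPi;
}
-- / snip --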
When the respective script is needed, the following code is executed,
calling a sub "ips_handler" in that script (with pPi being the interpreter
instance taken from that list):
-- snip --
char * run_sub( void * pPi, tPerlCallMap* pCallMap )
{
    char* _pcRet = NULL;
    try {
        if( pPi ) {
            STRLEN _lLength = 0;
            SV*    _pSV     = NULL;
            char*  _pcErg   = NULL;
            int    _iStat   = 0;

            /* clone the script's interpreter for this invocation */
            pthread_mutex_lock(&cloneMux);
            PERL_SET_CONTEXT( (PerlInterpreter*)pPi );
            PerlInterpreter* ppicl = perl_clone( (PerlInterpreter*)pPi,
                                                 CLONEf_COPY_STACKS );
            pthread_mutex_unlock(&cloneMux);

            PerlInterpreter* my_perl = ppicl; /* necessary for dSP */
            PERL_SET_CONTEXT( my_perl );

            dSP;
            ENTER;
            SAVETMPS;
            PUSHMARK(SP);
            /* push the call arguments as UTF-8 strings */
            for( tPerlCallMap::iterator _oItr = pCallMap->begin();
                 _oItr != pCallMap->end();
                 ++_oItr )
            {
                SV* _pSVIn = newSVpv(_oItr->second.c_str(), 0);
                SvUTF8_on(_pSVIn);
                XPUSHs(sv_2mortal(_pSVIn));
            }
            PUTBACK;

            _iStat = call_pv("ips_handler",
                             G_SCALAR | G_EVAL | G_KEEPERR);
            if( _iStat ) {
                SPAGAIN;
                _pSV = POPs;
                SvUTF8_on(_pSV);
                _pcErg = SvPV(_pSV, _lLength);
                /* copy the result out before the clone is torn down */
                long _lErgLength = strlen(_pcErg);
                _pcRet = new char[_lErgLength + 1];
                if( _pcRet != NULL ) {
                    _pcRet[_lErgLength] = 0;
                    strcpy( _pcRet, _pcErg );
                }
                PUTBACK;
            }
            FREETMPS;
            LEAVE;

            /* throw the clone away again */
            PERL_SET_CONTEXT( my_perl );
            perl_destruct( my_perl );
            perl_free( my_perl );
        }
    }
    catch(int nWhy) {}
    catch(...) {}
    return _pcRet;
}
-- / snip --
I don't see where the problem comes from.
The scripts run OK for a large number of invocations, there is no "exit"
in them, and a "die" should be caught by the G_EVAL flag...
It even happens with perl-only scripts (i.e. no XS, only our script,
warnings, strict, URI::Escape, and (indirectly) Carp).
Maybe it's some concurrency problem, but I don't see it.
Besides my missing insight into the problem, I'm wondering if there isn't a
better approach to embedding perl:
* using one interpreter per thread,
* having that same interpreter parse every script the respective
thread is told to invoke, and
* invoking the scripts using a modification of the "package
Embed::Persistent" example in `perldoc perlembed`, roughly as sketched below.
This should eliminate the "perl_destruct" calls which may be triggering the
cores (although it would increase memory consumption)?!
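Something along these lines is what I have in mind (untested sketch only;
thread_interp() and run_filter() are placeholder names, and persistent.pl
with Embed::Persistent::eval_file is the wrapper from the perlembed
example, which would still need the mentioned modification to pass the
filter arguments in and hand the result back):
-- snip --
#include <EXTERN.h>
#include <perl.h>
#include <pthread.h>
#include <stdio.h>

EXTERN_C void xs_init(pTHX);

static pthread_key_t  perlKey;
static pthread_once_t perlKeyOnce = PTHREAD_ONCE_INIT;

static void make_key() { pthread_key_create(&perlKey, NULL); }

/* One interpreter per thread, created lazily and kept for the whole
   lifetime of the thread, so perl_destruct()/perl_free() never run
   while filters are being invoked. */
static PerlInterpreter * thread_interp()
{
    pthread_once(&perlKeyOnce, make_key);
    PerlInterpreter * my_perl =
        (PerlInterpreter *)pthread_getspecific(perlKey);
    if( my_perl == NULL ) {
        char *embedding[] = { (char *)"", (char *)"persistent.pl", NULL };
        my_perl = perl_alloc();
        PERL_SET_CONTEXT(my_perl);
        perl_construct(my_perl);
        perl_parse(my_perl, xs_init, 2, embedding, (char **)NULL);
        perl_run(my_perl);
        pthread_setspecific(perlKey, my_perl);
    }
    PERL_SET_CONTEXT(my_perl);
    return my_perl;
}

/* Per invocation: no clone and no destruct; the persistent wrapper
   (re)compiles the script into its own package and runs it. */
static void run_filter( const char * pcScriptName )
{
    PerlInterpreter * my_perl = thread_interp();
    char *args[] = { (char *)pcScriptName, NULL };
    call_argv("Embed::Persistent::eval_file", G_DISCARD | G_EVAL, args);
    if( SvTRUE(ERRSV) ) {
        /* a die() in the script ends up here instead of unwinding into C++ */
        fprintf(stderr, "filter error: %s\n", SvPV_nolen(ERRSV));
    }
}
-- / snip --
run_filter() would of course still have to grow the argument and result
handling that run_sub() does today, but the interpreter itself would only
be torn down when its thread ends.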
Any hints are welcome!
Best regards and many thanks for your help and patience
Heiko
Cologne, Germany