Hi Torsten,

Torsten Juergeleit wrote:
> Hi,
>
> after reading through Castor's XML best practices document, mailing list
> entries and JIRA tickets regarding the use of Castor's XML binding in
> multi-threaded environments, I came to the following conclusions:
>
> 1. For multi-threaded use it's recommended to re-use a single instance of
> XMLClassDescriptorResolver, either indirectly via XMLContext or directly
> by creating your own instance via
> ClassDescriptorResolverFactory.createClassDescriptorResolver(BindingType.XML)

+1.
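
For the archives, the shared-resolver setup via XMLContext looks roughly
like the following. I typed this straight into the mail, so treat it as an
untested sketch against the 1.3 API; the holder class is my own invention:

import org.exolab.castor.xml.Marshaller;
import org.exolab.castor.xml.Unmarshaller;
import org.exolab.castor.xml.XMLContext;

public class CastorHolder {

    // One XMLContext per application (per class loader), created once at
    // startup; it owns the single XMLClassDescriptorResolver that all
    // threads end up sharing.
    private static final XMLContext CONTEXT = new XMLContext();

    // Marshaller / Unmarshaller instances are cheap and must NOT be shared
    // between threads; each thread asks the shared context for its own.
    public static Marshaller newMarshaller() {
        return CONTEXT.createMarshaller();
    }

    public static Unmarshaller newUnmarshaller() {
        return CONTEXT.createUnmarshaller();
    }
}
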
> 2. XMLClassDescriptorResolverImpl is not thread-safe (due to the
> non-thread-safe implementation of the inner class DescriptorCacheImpl)

If I remember the mails correctly, you are right.

> 3. To work around the issue mentioned in 2. it's recommended to pre-load
> all mapping and CDR files before starting to marshal / unmarshal

Most definitely; see the pre-loading sketch at the bottom of this mail.

> But how to deal with the issue mentioned in 2. if one isn't able to use
> the work-around mentioned in 3.?

Help us make this cache thread-safe .... (cache sketch at the bottom)

> We're starting to use Castor's XML binding (version 1.3) in a
> multi-threaded environment with multiple WARs / EARs and >1000
> Castor-generated classes. Each of these WARs / EARs uses an isolated
> class loader with its own instance of XMLClassDescriptorResolverImpl.
> Fully pre-loading these instances of XMLClassDescriptorResolverImpl
> means loading all Castor-generated classes and their descriptor classes
> at the same time in different class loaders.
>
> This implies longer deployment / initialization times for the WARs / EARs
> and increased JVM class-loading memory (permanent generation). Neither is
> viable for us (deployment / startup time is already 30 minutes and
> PermGen is already at 512M).
>
> Any suggestions?

Yes, see above. There's nothing that should prevent us from making the
descriptor cache thread-safe. All it takes is somebody's time and a little
bit of energy ....

> Cheers
> Torsten
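
Here's the pre-loading setup I mean. If I remember the API correctly,
addPackage() reads the .castor.cdr file of a package and registers every
descriptor listed there, so by the end of startup the resolver cache is
fully populated and no thread writes to it while marshalling. The package
names are placeholders, of course:

import org.exolab.castor.xml.ResolverException;
import org.exolab.castor.xml.XMLContext;

public class DescriptorPreloader {

    // Call once per WAR / EAR during startup, before any worker thread
    // touches Castor. List every package that contains Castor-generated
    // classes here.
    public static XMLContext preload() throws ResolverException {
        XMLContext context = new XMLContext();
        context.addPackage("com.example.generated.model");
        context.addPackage("com.example.generated.messages");
        return context;
    }
}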

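And the cache sketch: what I mean by making the cache thread-safe. I don't
have the Castor source in front of me, so the names below are illustrative
rather than a patch against DescriptorCacheImpl (which has more lookup
paths, e.g. by XML name), but the idea is simply to swap the plain maps
for java.util.concurrent ones:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import org.exolab.castor.xml.XMLClassDescriptor;

class ThreadSafeDescriptorCache {

    // ConcurrentHashMap allows lock-free reads and safe concurrent writes,
    // which matches the read-mostly access pattern of the descriptor cache.
    private final ConcurrentMap<String, XMLClassDescriptor> byClassName =
            new ConcurrentHashMap<String, XMLClassDescriptor>();

    XMLClassDescriptor getDescriptor(String className) {
        return byClassName.get(className);
    }

    void addDescriptor(String className, XMLClassDescriptor descriptor) {
        // putIfAbsent keeps the first descriptor registered when two
        // threads resolve the same class at the same time.
        byClassName.putIfAbsent(className, descriptor);
    }
}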
