_all yade users, please introduce yourself!_ OR YOU WILL BE BANNED! ;-)

>> Another option, for quasistatic simulations, is to use triangulation or
>> one of the old colliders with radiusFactor = 1.2, and update the contact
>> list only once per N iterations. Depending on the max velocity in the
>> model, N can be set to at least 10. You can adjust N automatically at
>> runtime if maxVel increases somewhere. In some very slow compression
>> tests, N could really be 50 or so.
>
> Agreed. But is there such a collider written? (I know there are papers
> about that.) What is the computational complexity of the triangulation
> collider? I know that the grid-based ones achieve O(N), but we don't
> have that implemented either.

Written? Yes, any of the colliders we have would do the job. Just set an
interval for e.g. insertionSortCollider and run it periodically with
detectionFactor = 1.5. During N timesteps, use the same list of
interactions (some will go from virtual to real, or real to virtual; that
is no problem as long as they can change many times). I experimented with
that before using the triangulation collider; it worked well, but it was
not faster than SAPcollider for a silly reason (*), and I realised the
same thing could be achieved with SAPcollider with intDetectionFactor = 1.5.
It may need a few adjustments in other classes now, perhaps not. For
instance, I'm not sure requestDeletion(i) will let "i" become real again,
but those would be minimal changes. The problem with CGAL::triangulation
is that it is not parallel.
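The stride logic above (enlarge the detection radius by a factor, reuse the contact list for N steps, shrink N when maxVel grows) can be sketched in plain Python. This is only an illustration of the reasoning, not Yade's actual API; the function name, the cap of 50 (taken from the "N could really be 50 or so" remark), and the worst-case closing speed of `2 * max_vel` are assumptions of the sketch.

```python
def max_safe_stride(max_vel, dt, radius, detection_factor, n_max=50):
    """Number of steps the cached contact list stays valid.

    The enlarged bound gives (detection_factor - 1) * radius of slack
    around each body; two bodies can close the gap by at most
    2 * max_vel * dt per step, so the list is safe for slack / closing
    distance steps (clamped to [1, n_max]).
    """
    slack = (detection_factor - 1.0) * radius
    if max_vel <= 0.0:
        return n_max  # nothing moves: reuse the list as long as allowed
    n = int(slack / (2.0 * max_vel * dt))
    return max(1, min(n, n_max))
```

Re-evaluating this every step (or every few steps) gives the runtime adjustment of N mentioned above: a burst of velocity immediately shortens the interval between collider runs.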
(*) The triangulation itself was fast, but iterating over all edges (i.e.
virtual contacts) was slow, because the data structure has only
vector<edge> and vector<cell>. There are workarounds, but I'm not working
on that now. Soon, however, we will have a triangulation in MetaBody to
simulate flow, so I guess it will be time to remove that collider and use
the same triangulation for both flow and contact detection (in our
specific problems).

> Maybe http://software.intel.com/en-us/intel-mkl/, but there must be
> more, I am quite sure. But you can easily limit the current number of
> threads for OpenMP and allocate those to taucs, or just let the OS
> handle the thread battle by itself.

We will experiment, thank you for the tip. I'm not so sure many algebra
libraries with parallel features are available.

> Before, it was the geometry functor deciding which interactions to
> delete (and there had to be IS2IS4SCGWater); now this responsibility has
> shifted to the constitutive law, so it is logical.

Note that we stopped using IS2IS4SCGWater after intFactor was implemented
in IS2IS4SCG. I think it is correct to define intFactor in the same law,
because anybody using more than one physical law between the same bodies
would have the same problem.

> Don't add it to ElasticContactLaw, however, but to
> ef2_Spheres_Elastic_ElasticLaw (shouldn't it be renamed to
> Law2_Spheres[ContactGeometry]_Elastic[ContactInteraction]_Elastic for
> clarity?) and use ConstitutiveLawDispatcher/InteractionDispatchers.
> ElasticContactLaw is nothing but a loop around calls to
> ef2_Spheres_Elastic_ElasticLaw with a typecheck. Slower and
> non-parallelizable.

Good point.

> Oh, BTW, is there agreement on having useShear by default and
> progressively removing prevNormal from various InteractionPhysics
> classes?

I'd prefer to keep the traditional (Cundall) incremental form for now.
The equations are simpler, especially in 3D, so it is easier to
understand what happens and to implement different laws.
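For reference, a minimal sketch of one step of that incremental (Cundall-style) shear update, using prevNormal to keep the stored shear force in the contact plane. This is a simplified illustration, not Yade's implementation: the linearised rotation below ignores spin about the normal, and all names and signatures are hypothetical.

```python
import numpy as np

def update_shear_incremental(shear_f, prev_normal, normal,
                             rel_shear_vel, ks, dt, mu, normal_f):
    """One incremental shear step: rotate the stored shear force onto
    the new contact plane, add the elastic increment, cap by Coulomb
    friction. All arguments are length-3 arrays except ks, dt, mu,
    normal_f (scalars)."""
    # linearised rotation of the shear force as the normal tilts
    # from prev_normal to normal
    axis = np.cross(prev_normal, normal)
    shear_f = shear_f - np.cross(axis, shear_f)
    # project out any residual out-of-plane component
    shear_f = shear_f - np.dot(shear_f, normal) * normal
    # elastic increment from the relative tangential velocity
    shear_f = shear_f - ks * rel_shear_vel * dt
    # Coulomb cap: |Fs| <= mu * |Fn|
    fs = np.linalg.norm(shear_f)
    f_max = mu * normal_f
    if fs > f_max and fs > 0.0:
        shear_f = shear_f * (f_max / fs)
    return shear_f
```

The appeal of this form, as stated above, is that each step is a local, simple update; the alternative (total formulation with useShear) has to reconstruct the shear displacement in the current contact frame at every step.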
I will change my mind one day perhaps, but for now I feel like the
incremental form is good enough for dry friction. :)

Bruno

--
_______________
Chareyre Bruno
Maitre de conference

Grenoble INP
Laboratoire 3SR - bureau E145
BP 53 - 38041, Grenoble cedex 9 - France
Tél : 33 4 56 52 86 21
Fax : 33 4 76 82 70 43
________________

_______________________________________________
Mailing list: https://launchpad.net/~yade-users
Post to     : [email protected]
Unsubscribe : https://launchpad.net/~yade-users
More help   : https://help.launchpad.net/ListHelp
