> We actually already do this in Rubinius, but we do it in the JIT directly.
> It's far less problematic than generating C code that you have to figure out
> how to link back in, plus we can inline methods, all that without losing any
> backtrace information.
Interesting. So does that mean that only the LLVM-enabled build will use this
optimization?

> We already don't need the user to specify that they're done setting things
> up, as the JIT can run when it wants and the system can gracefully degrade
> should any JIT assumptions be invalidated by a user adding a new method or
> class.
>
> Lastly, your technique looks pretty much like a normal inline cache, which is
> also something Rubinius has. We don't currently push them down into the
> execution stream like you have here, because we use them in the interpreter
> as well.

Same question here.

Thanks.

-r

--
--- !ruby/object:MailingList
name: rubinius-dev
view: http://groups.google.com/group/rubinius-dev?hl=en
post: [email protected]
unsubscribe: [email protected]
