author     Koichi Sasada <ko1@atdot.net>   2020-01-08 16:14:01 +0900
committer  Koichi Sasada <ko1@atdot.net>   2020-02-22 09:58:59 +0900
commit     b9007b6c548f91e88fd3f2ffa23de740431fa969
tree       1746393d1c5f704e8dc7e0a458198264062273bf /mjit.c
parent     f2286925f08406bc857f7b03ad6779a5d61443ae
Introduce disposable call-cache.
This patch contains several ideas:
(1) Disposable inline method cache (IMC) for race-free inline method cache
access
* Make the call-cache (CC) an RVALUE (a GC-managed object) and allocate a
new CC on each cache miss.
* This technique allows race-free access from parallel processing
elements, similar to RCU.
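The disposable-cache idea can be sketched as follows. This is a minimal, hypothetical model (struct and field names are illustrative, not the actual CRuby definitions): on a miss, a fresh CC is filled in and then published with a single pointer store, so a concurrent reader observes either the old or the new cache, each internally consistent.

```c
#include <stdlib.h>

/* Hypothetical model of a disposable call cache. In CRuby the CC is an
 * RVALUE reclaimed by the GC; here malloc stands in for allocation. */
struct call_cache {
    unsigned int class_id;  /* receiver class this cache is valid for */
    void *method_entry;     /* resolved method */
};

struct call_site {
    const struct call_cache *cc;  /* readers only ever load this pointer */
};

/* On a cache miss, never mutate the published cache in place: fill a fresh
 * one, then swap the pointer (RCU-style publication). */
static const struct call_cache *
cache_miss(struct call_site *site, unsigned int class_id, void *me)
{
    struct call_cache *fresh = malloc(sizeof(*fresh));
    fresh->class_id = class_id;
    fresh->method_entry = me;
    site->cc = fresh;  /* old cache becomes unreachable; the GC reclaims it */
    return fresh;
}
```

Because the old CC is never written after publication, a reader that loaded it before the miss can keep using it safely until the GC collects it.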
(2) Introduce per-Class method cache (pCMC)
* Instead of the fixed-size global method cache (GMC), pCMC allows a
flexible cache size.
* Caching CCs reduces CC allocations and allows sharing a CC's fast path
between call-sites that have the same call-info (CI).
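A per-class cache table can be sketched as a small hash table owned by each class; this is a hypothetical model (names, bucket count, and chaining are illustrative, not the real CRuby `cc_tbl`), but it shows how entries can be shared and grown per class rather than competing for one fixed-size global array.

```c
#include <stdlib.h>

/* Hypothetical sketch of a per-class method-cache table. */
#define PCMC_BUCKETS 8  /* illustrative; a real table would grow as needed */

struct pcmc_entry {
    unsigned int mid;        /* method id (key) */
    const void *cc;          /* call cache shared by call-sites with this CI */
    struct pcmc_entry *next; /* chaining for bucket collisions */
};

struct class_cache {
    struct pcmc_entry *buckets[PCMC_BUCKETS];  /* owned by one class */
};

/* Return the cached CC for mid if present; otherwise insert cc and return
 * it. Call-sites with the same call-info can thus share one CC. */
static const void *
pcmc_lookup_or_insert(struct class_cache *klass, unsigned int mid,
                      const void *cc)
{
    struct pcmc_entry **slot = &klass->buckets[mid % PCMC_BUCKETS];
    for (struct pcmc_entry *e = *slot; e != NULL; e = e->next) {
        if (e->mid == mid)
            return e->cc;  /* hit: reuse the shared CC */
    }
    struct pcmc_entry *e = malloc(sizeof(*e));
    e->mid = mid;
    e->cc = cc;
    e->next = *slot;
    *slot = e;
    return cc;
}
```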
(3) Invalidate an inline method cache by invalidating corresponding method
entries (MEs)
* Instead of using class serials, we set an "invalidated" flag on the
method entry itself to represent cache invalidation.
* Compared with class serials, the impact of a method modification
(add/overwrite/delete) is small.
* Updating a class serial invalidates all method caches of the class and
its subclasses.
* The proposed approach invalidates only the caches of the one affected ME.
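The flag-based scheme can be sketched like this; the structures are hypothetical stand-ins for the real ME and CC, but they capture the contrast: flagging one method entry (ME) invalidates only the caches that resolved to it, where a serial bump would discard every cache of the class and its subclasses.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical sketch of flag-based cache invalidation. */
struct method_entry {
    bool invalidated;  /* set when the method is redefined or removed */
    /* ... definition, owner class, etc. ... */
};

struct call_cache {
    struct method_entry *me;  /* the ME this cache resolved to */
};

/* A cached dispatch stays usable only while its ME is still valid. */
static bool
cc_valid_p(const struct call_cache *cc)
{
    return cc->me != NULL && !cc->me->invalidated;
}

/* Redefining or removing a method flags just that one entry; caches for
 * unrelated methods of the same class are untouched. */
static void
invalidate_method(struct method_entry *me)
{
    me->invalidated = true;
}
```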
See [Feature #16614] for more details.
Notes:
Merged: https://github.com/ruby/ruby/pull/2888
Diffstat (limited to 'mjit.c')
 mjit.c | 19 +++++++++++++------
1 file changed, 13 insertions(+), 6 deletions(-)
```diff
@@ -25,6 +25,9 @@
 #include "internal/warnings.h"
 #include "mjit_worker.c"

+#include "vm_callinfo.h"
+
+static void create_unit(const rb_iseq_t *iseq);

 // Copy ISeq's states so that race condition does not happen on compilation.
 static void
@@ -51,14 +54,18 @@ mjit_copy_job_handler(void *data)
     }

     const struct rb_iseq_constant_body *body = job->iseq->body;
-    if (job->cc_entries) {
-        unsigned int i;
-        struct rb_call_cache *sink = job->cc_entries;
-        const struct rb_call_data *calls = body->call_data;
-        for (i = 0; i < body->ci_size; i++) {
-            *sink++ = calls[i].cc;
+    unsigned int ci_size = body->ci_size;
+    if (ci_size > 0) {
+        const struct rb_callcache **cc_entries = ALLOC_N(const struct rb_callcache *, ci_size);
+        if (body->jit_unit == NULL) {
+            create_unit(job->iseq);
+        }
+        body->jit_unit->cc_entries = cc_entries;
+        for (unsigned int i=0; i<ci_size; i++) {
+            cc_entries[i] = body->call_data[i].cc;
         }
     }
+
     if (job->is_entries) {
         memcpy(job->is_entries, body->is_entries, sizeof(union iseq_inline_storage_entry) * body->is_size);
     }
```
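The rewritten hunk snapshots each call-site's CC pointer into an array owned by the ISeq's JIT unit, so the MJIT worker reads a stable copy instead of racing with the VM. A minimal, hypothetical model of that copy step (the struct names stand in for `rb_callcache`/`rb_call_data`, and `malloc` stands in for `ALLOC_N`):

```c
#include <stdlib.h>

/* Hypothetical model of the copy in mjit_copy_job_handler. Only the
 * pointers are copied, so the worker keeps a stable snapshot even if the
 * VM later swaps in fresh, disposable CCs at the call-sites. */
struct cc { int tag; };
struct call_data { const struct cc *cc; };

static const struct cc **
snapshot_cc_entries(const struct call_data *call_data, unsigned int ci_size)
{
    const struct cc **entries = malloc(sizeof(*entries) * ci_size);
    for (unsigned int i = 0; i < ci_size; i++)
        entries[i] = call_data[i].cc;  /* shallow copy: pointers only */
    return entries;
}
```

Because CCs are immutable after publication (idea (1) above), copying pointers is enough; the contents behind them will not change under the worker.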