path: root/debug_counter.h
Age  Commit message  Author
2021-06-18  Add a cache for class variables  (eileencodes)
Redo of 34a2acdac788602c14bf05fb616215187badd504 and 931138b00696419945dc03e10f033b1f53cd50f3, which were reverted. GitHub PR #4340.

This change implements a cache for class variables. Previously there was no cache for cvars. Cvar access is slow because it needs to travel all the way up the ancestor tree before returning the cvar value; the deeper the ancestor tree, the slower cvar access will be. The benefits of the cache are more visible with a higher number of included modules, due to the way Ruby looks up class variables. The benchmark here includes 26 modules and shows that, with the cache, this branch is 6.5x faster when accessing class variables.

```
compare-ruby: ruby 3.1.0dev (2021-03-15T06:22:34Z master 9e5105c) [x86_64-darwin19]
built-ruby: ruby 3.1.0dev (2021-03-15T12:12:44Z add-cache-for-clas.. c6be009) [x86_64-darwin19]

|         |compare-ruby|built-ruby|
|:--------|-----------:|---------:|
|vm_cvar  |      5.681M|   36.980M|
|         |           -|     6.51x|
```

Benchmark.ips calling `ActiveRecord::Base.logger` from within a Rails application. ActiveRecord::Base.logger has 71 ancestors. The more ancestors a tree has, the clearer the speed increase, i.e. if Base had only one ancestor we'd see no improvement. This benchmark is run on a vanilla Rails application.

Benchmark code:

```ruby
require "benchmark/ips"
require_relative "config/environment"

Benchmark.ips do |x|
  x.report "logger" do
    ActiveRecord::Base.logger
  end
end
```

Ruby 3.0 master / Rails 6.1:

```
Warming up --------------------------------------
              logger   155.251k i/100ms
Calculating -------------------------------------
```

Ruby 3.0 with cvar cache / Rails 6.1:

```
Warming up --------------------------------------
              logger     1.546M i/100ms
Calculating -------------------------------------
              logger     14.857M (± 4.8%) i/s -     74.198M in   5.006202s
```

Lastly we ran a benchmark to demonstrate the difference between master and our cache when the number of modules increases. This benchmark measures 1 ancestor, 30 ancestors, and 100 ancestors.

Ruby 3.0 master:

```
Warming up --------------------------------------
            1 module     1.231M i/100ms
          30 modules   432.020k i/100ms
         100 modules   145.399k i/100ms
Calculating -------------------------------------
            1 module     12.210M (± 2.1%) i/s -     61.553M in   5.043400s
          30 modules      4.354M (± 2.7%) i/s -     22.033M in   5.063839s
         100 modules      1.434M (± 2.9%) i/s -      7.270M in   5.072531s

Comparison:
            1 module:  12209958.3 i/s
          30 modules:   4354217.8 i/s - 2.80x  (± 0.00) slower
         100 modules:   1434447.3 i/s - 8.51x  (± 0.00) slower
```

Ruby 3.0 with cvar cache:

```
Warming up --------------------------------------
            1 module     1.641M i/100ms
          30 modules     1.655M i/100ms
         100 modules     1.620M i/100ms
Calculating -------------------------------------
            1 module     16.279M (± 3.8%) i/s -     82.038M in   5.046923s
          30 modules     15.891M (± 3.9%) i/s -     79.459M in   5.007958s
         100 modules     16.087M (± 3.6%) i/s -     81.005M in   5.041931s

Comparison:
            1 module:  16279458.0 i/s
         100 modules:  16087484.6 i/s - same-ish: difference falls within error
          30 modules:  15891406.2 i/s - same-ish: difference falls within error
```

Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>

Notes: Merged: https://github.com/ruby/ruby/pull/4544
2021-05-11Revert "Filling cache values on cvar write"Aaron Patterson
This reverts commit 08de37f9fa3469365e6b5c964689ae2bae0eb9f3. This reverts commit e8ae922b62adb00a80d3d4c49f7d7b0e6026eaba.
2021-05-11  Filling cache values on cvar write  (eileencodes)
Instead of on read. Once the value is in the inline cache we never have to look it up again. We want to eventually put the value into the cache, and the best opportunity to do that is when the value is written.

Notes: Merged: https://github.com/ruby/ruby/pull/4340
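For illustration, here is a minimal sketch of the write-filled cache idea using simplified stand-in types (not the actual CRuby structures): the write path records where the class variable was resolved together with a validity serial, so later reads can skip the ancestor walk while the serial is unchanged.

```c
#include <stdint.h>
#include <stddef.h>

/* Simplified stand-ins; the real cache lives in CRuby's VM internals. */
typedef struct cvar_cache_entry {
    uint64_t class_serial;   /* hypothetical serial used to detect invalidation */
    void    *defining_class; /* class the cvar was resolved to at write time */
} cvar_cache_entry;

/* Called from the cvar write path: record where the value now lives. */
static void
cvar_cache_fill_on_write(cvar_cache_entry *ic, void *defining_class, uint64_t serial)
{
    ic->defining_class = defining_class;
    ic->class_serial   = serial;
}

/* Read fast path: the cache is valid only while the recorded serial is current. */
static void *
cvar_cache_lookup(const cvar_cache_entry *ic, uint64_t current_serial)
{
    return (ic->class_serial == current_serial) ? ic->defining_class : NULL;
}
```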
2021-05-11  Add a cache for class variables  (eileencodes)
This change implements a cache for class variables. Previously there was no cache for cvars. Cvar access is slow because it needs to travel all the way up the ancestor tree before returning the cvar value; the deeper the ancestor tree, the slower cvar access will be. The benefits of the cache are more visible with a higher number of included modules, due to the way Ruby looks up class variables. The benchmark here includes 26 modules and shows that, with the cache, this branch is 6.5x faster when accessing class variables.

```
compare-ruby: ruby 3.1.0dev (2021-03-15T06:22:34Z master 9e5105ca45) [x86_64-darwin19]
built-ruby: ruby 3.1.0dev (2021-03-15T12:12:44Z add-cache-for-clas.. c6be0093ae) [x86_64-darwin19]

|         |compare-ruby|built-ruby|
|:--------|-----------:|---------:|
|vm_cvar  |      5.681M|   36.980M|
|         |           -|     6.51x|
```

Benchmark.ips calling `ActiveRecord::Base.logger` from within a Rails application. ActiveRecord::Base.logger has 71 ancestors. The more ancestors a tree has, the clearer the speed increase, i.e. if Base had only one ancestor we'd see no improvement. This benchmark is run on a vanilla Rails application.

Benchmark code:

```ruby
require "benchmark/ips"
require_relative "config/environment"

Benchmark.ips do |x|
  x.report "logger" do
    ActiveRecord::Base.logger
  end
end
```

Ruby 3.0 master / Rails 6.1:

```
Warming up --------------------------------------
              logger   155.251k i/100ms
Calculating -------------------------------------
```

Ruby 3.0 with cvar cache / Rails 6.1:

```
Warming up --------------------------------------
              logger     1.546M i/100ms
Calculating -------------------------------------
              logger     14.857M (± 4.8%) i/s -     74.198M in   5.006202s
```

Lastly we ran a benchmark to demonstrate the difference between master and our cache when the number of modules increases. This benchmark measures 1 ancestor, 30 ancestors, and 100 ancestors.

Ruby 3.0 master:

```
Warming up --------------------------------------
            1 module     1.231M i/100ms
          30 modules   432.020k i/100ms
         100 modules   145.399k i/100ms
Calculating -------------------------------------
            1 module     12.210M (± 2.1%) i/s -     61.553M in   5.043400s
          30 modules      4.354M (± 2.7%) i/s -     22.033M in   5.063839s
         100 modules      1.434M (± 2.9%) i/s -      7.270M in   5.072531s

Comparison:
            1 module:  12209958.3 i/s
          30 modules:   4354217.8 i/s - 2.80x  (± 0.00) slower
         100 modules:   1434447.3 i/s - 8.51x  (± 0.00) slower
```

Ruby 3.0 with cvar cache:

```
Warming up --------------------------------------
            1 module     1.641M i/100ms
          30 modules     1.655M i/100ms
         100 modules     1.620M i/100ms
Calculating -------------------------------------
            1 module     16.279M (± 3.8%) i/s -     82.038M in   5.046923s
          30 modules     15.891M (± 3.9%) i/s -     79.459M in   5.007958s
         100 modules     16.087M (± 3.6%) i/s -     81.005M in   5.041931s

Comparison:
            1 module:  16279458.0 i/s
         100 modules:  16087484.6 i/s - same-ish: difference falls within error
          30 modules:  15891406.2 i/s - same-ish: difference falls within error
```

Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>

Notes: Merged: https://github.com/ruby/ruby/pull/4340
2021-04-26  Fix some typos by spell checker  (Ryuta Kamizono)
Notes: Merged: https://github.com/ruby/ruby/pull/4414
2021-01-29  global call-cache cache table for rb_funcall*  (Koichi Sasada)
The rb_funcall* functions (rb_funcall(), rb_funcallv(), ...) invoke a Ruby method on a given receiver. Ruby 2.7 introduced an inline method cache with a static memory area. However, Ruby 3.0 reimplemented the method cache data structures and the inline cache was removed. Without an inline cache, rb_funcall* searched for methods every time. In most cases the per-class method cache (pCMC) will help, but the pCMC requires VM-wide locking, which hurts performance on multi-Ractor execution, especially when all Ractors call methods with rb_funcall*. This patch introduces a Global Call-Cache Cache Table (gccct) for rb_funcall*. Call-caches were introduced in Ruby 3.0 to manage method cache entries atomically, and the gccct enables method caching without VM-wide locking. This table solves the performance issue on multi-Ractor execution. [Bug #17497] Ruby-level method invocation does not use the gccct because it has an inline method cache, and the table size is limited. rb_funcall* is basically not used frequently, so 1023 entries should be enough. We will revisit the table size if it is not.

Notes: Merged: https://github.com/ruby/ruby/pull/4129
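As an illustration only, a minimal sketch of a fixed-size call-cache cache keyed by (class, method id); the table size comes from the commit message, while the hashing and field layout here are assumptions, not CRuby's actual implementation.

```c
#include <stdint.h>
#include <stddef.h>

#define GCCCT_SIZE 1023  /* entry count mentioned in the commit message */

typedef struct cc_slot {
    uintptr_t klass; /* identity of the receiver's class */
    uintptr_t mid;   /* method id */
    void     *cc;    /* cached call-cache, refreshed on miss */
} cc_slot;

static cc_slot gccct_table[GCCCT_SIZE];

/* Map (class, method id) to a slot; each slot can be refreshed atomically
 * instead of taking a VM-wide lock on every rb_funcall*. */
static cc_slot *
gccct_slot(uintptr_t klass, uintptr_t mid)
{
    uintptr_t h = klass ^ (mid * 2654435761u); /* cheap illustrative mixing */
    return &gccct_table[h % GCCCT_SIZE];
}
```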
2021-01-05  enable constant cache on ractors  (Koichi Sasada)
The constant cache `IC` is accessed in a non-atomic manner and has thread-safety issues, so Ruby 3.0 disabled the const cache on non-main ractors. This patch enables it by introducing `imemo_constcache`, which is allocated on every re-fill of the const cache, like `imemo_callcache`. [Bug #17510] Now `IC` has only one entry, `IC::entry`, and it points to an `iseq_inline_constant_cache_entry` managed as a T_IMEMO object. `IC` is an atomic data structure, so `rb_mjit_before_vm_ic_update()` and `rb_mjit_after_vm_ic_update()` are not needed.

Notes: Merged: https://github.com/ruby/ruby/pull/4022
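A minimal sketch of the layout described above, with simplified field names and types (the real structures live in CRuby's iseq/VM headers): because the inline cache holds a single pointer to a GC-managed entry, a re-fill is one pointer store, and readers on other ractors never observe a half-updated entry.

```c
#include <stdint.h>

/* Stand-in for the T_IMEMO-managed entry; a fresh one is allocated per re-fill. */
typedef struct const_cache_entry {
    uint64_t  validity_serial; /* illustrative: checked before trusting the value */
    uintptr_t value;           /* cached constant value */
} const_cache_entry;

/* Stand-in for IC: a single slot that is replaced wholesale on every re-fill. */
typedef struct inline_constant_cache {
    const const_cache_entry *entry;
} inline_constant_cache;

/* Re-fill: publish a newly allocated entry with a single pointer store. */
static void
ic_refill(inline_constant_cache *ic, const const_cache_entry *fresh)
{
    ic->entry = fresh;
}
```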
2020-12-17  add debug counters for gc start events  (Koichi Sasada)
2020-12-17  make RB_DEBUG_COUNTER_INC() thread-safe  (Koichi Sasada)
Notes: Merged: https://github.com/ruby/ruby/pull/3915
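One way to make such an increment thread-safe is an atomic add with relaxed ordering; this is only a sketch using C11 atomics and an arbitrary array size, not the ATOMIC_* wrappers CRuby itself uses.

```c
#include <stdatomic.h>
#include <stddef.h>

#define COUNTER_MAX 256  /* arbitrary size for the sketch */

static _Atomic size_t debug_counters[COUNTER_MAX];

static inline void
debug_counter_inc(size_t type)
{
    /* relaxed ordering is enough: the counters are only read for reporting */
    atomic_fetch_add_explicit(&debug_counters[type], 1, memory_order_relaxed);
}
```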
2020-12-16  add vm_sync debug counters  (Koichi Sasada)
* vm_sync_lock
* vm_sync_lock_enter
* vm_sync_lock_enter_nb
* vm_sync_lock_enter_cr
* vm_sync_barrier

Notes: Merged: https://github.com/ruby/ruby/pull/3910
2020-12-15  add several debug counters  (Koichi Sasada)
Add cc_found_in_ccs (renamed from cc_found_ccs), cc_not_found_in_ccs, call0_public, and call0_other debug counters to measure more details. It also contains several other modifications.

Notes: Merged: https://github.com/ruby/ruby/pull/3903
2020-12-14  fix condition and add another debug counter  (Koichi Sasada)
mc_inline_miss_same_def is added to check whether it is the same method or not. The mc_inline_miss_same_cc calculation was also fixed.
2020-12-14  add ccs_not_found debug counter  (Koichi Sasada)
ccs_not_found counts lookups that are not found in the ccs table.
2020-12-14  add debug counters to survey the IMC miss  (Koichi Sasada)
2020-12-14  add cc_invalidate_negative debug counter  (Koichi Sasada)
It counts invalidations of the negative cache.

Notes: Merged: https://github.com/ruby/ruby/pull/3892
2020-11-09  Add debug counter for ivar inline cache misses that could hit  (Aaron Patterson)
This commit adds a debug counter for the case where the inline cache *missed* but the ivar index table has an entry for that ivar. This is a case where a polymorphic cache could help.

Notes: Merged: https://github.com/ruby/ruby/pull/3750
2020-11-09  remove unused debug counter  (Aaron Patterson)
Notes: Merged: https://github.com/ruby/ruby/pull/3740
2020-07-10  Explicit conversion to boolean to suppress shorten-64-to-32 warnings  (Nobuyoshi Nakada)
2020-05-28  Add a debug_counter for JIT cancel on leave  (Takashi Kokubun)
2020-05-11  sed -i 's|ruby/impl|ruby/internal|'  (卜部昌平)
To fix build failures. Notes: Merged: https://github.com/ruby/ruby/pull/3079
2020-05-11  sed -i s|ruby/3|ruby/impl|g  (卜部昌平)
This shall fix compile errors. Notes: Merged: https://github.com/ruby/ruby/pull/3079
2020-04-13  Make vm_call_cfunc_with_frame a fastpath (#3027)  (Takashi Kokubun)
when there's no need to call CALLER_SETUP_ARG and CALLER_REMOVE_EMPTY_KW_SPLAT (i.e. !rb_splat_or_kwargs_p(ci) && !calling->kw_splat).

Micro benchmark:

```
$ benchmark-driver -v --rbenv 'before;after' benchmark/vm_send_cfunc.yml --repeat-count=4
before: ruby 2.8.0dev (2020-04-13T23:45:05Z master b9d3ceee8f) [x86_64-linux]
after: ruby 2.8.0dev (2020-04-14T00:48:52Z no-splat-fastpath 418d363722) [x86_64-linux]
Calculating -------------------------------------
                      before       after
vm_send_cfunc        69.585M     88.724M i/s - 100.000M times in 1.437097s 1.127096s

Comparison:
        vm_send_cfunc
       after:  88723605.2 i/s
      before:  69584737.1 i/s - 1.28x  slower
```

Optcarrot:

```
$ benchmark-driver -v --rbenv 'before;after' benchmark.yml --repeat-count=12 --output=all
before: ruby 2.8.0dev (2020-04-13T23:45:05Z master b9d3ceee8f) [x86_64-linux]
after: ruby 2.8.0dev (2020-04-14T00:48:52Z no-splat-fastpath 418d363722) [x86_64-linux]
Calculating -------------------------------------
                                     before                 after
Optcarrot Lan_Master.nes  50.76119601545175     42.73858236484051 fps
                          50.76388649761503     51.04211379912850
                          50.80930672252514     51.39455790755538
                          50.90236000778749     51.75656936556145
                          51.01744746340430     51.86875277356489
                          51.06495279015112     51.88692482485558
                          51.07785337168974     51.93429603190578
                          51.20163525187862     51.95768145071314
                          51.34671771913112     52.45577266040274
                          51.35918340835583     52.53163888762858
                          51.46641337418146     52.62172484121034
                          51.50835463462257     52.85064021113239
```

Notes: Merged-By: k0kubun <takashikkbn@gmail.com>
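A sketch of the guard described above, with simplified stand-in types: the call cache is only switched to the fast path when no splat or kw-splat normalization is needed, so repeated calls through the same cache can skip the argument setup work. The real check uses rb_splat_or_kwargs_p(ci) and calling->kw_splat; everything else here is illustrative.

```c
#include <stdbool.h>

struct call_info    { bool splat_or_kwargs; }; /* stand-in for rb_splat_or_kwargs_p(ci) */
struct calling_info { bool kw_splat; };
struct call_cache   { void (*call)(void); };

static void vm_call_cfunc_fast(void) { /* fast C-function call, no arg normalization */ }
static void vm_call_cfunc_slow(void) { /* general path with CALLER_SETUP_ARG etc. */ }

static void
choose_cfunc_path(struct call_cache *cc, const struct call_info *ci,
                  const struct calling_info *calling)
{
    cc->call = (!ci->splat_or_kwargs && !calling->kw_splat)
             ? vm_call_cfunc_fast   /* safe to skip argument setup entirely */
             : vm_call_cfunc_slow;
}
```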
2020-04-08  Merge pull request #2991 from shyouhei/ruby.h  (卜部昌平)
Split ruby.h Notes: Merged-By: shyouhei <shyouhei@ruby-lang.org>
2020-03-30  Optimize exivar access on JIT-ed getivar  (Takashi Kokubun)
JIT support of dd723771c11.

```
$ benchmark-driver -v --rbenv 'before;before --jit;after --jit' benchmark/mjit_exivar.yml --repeat-count=4
before: ruby 2.8.0dev (2020-03-30T12:32:26Z master e5db3da9d3) [x86_64-linux]
before --jit: ruby 2.8.0dev (2020-03-30T12:32:26Z master e5db3da9d3) +JIT [x86_64-linux]
after --jit: ruby 2.8.0dev (2020-03-31T05:57:24Z mjit-exivar 128625baec) +JIT [x86_64-linux]
Calculating -------------------------------------
                  before  before --jit  after --jit
mjit_exivar      57.944M       53.579M      54.471M i/s - 200.000M times in 3.451588s 3.732772s 3.671687s

Comparison:
          mjit_exivar
       before:  57944345.1 i/s
  after --jit:  54470876.7 i/s - 1.06x  slower
 before --jit:  53579483.4 i/s - 1.08x  slower
```
2020-03-16  Fix typos [ci skip]  (Kazuhiro NISHIYAMA)
2020-03-15  Add debug counter for unload_units  (Takashi Kokubun)
This also changes add_iseq_to_process's debug counter name for comparison.
2020-02-22  Introduce disposable call-cache.  (Koichi Sasada)
This patch contains several ideas:

(1) Disposable inline method cache (IMC) for race-free inline method caching
  * Make the call-cache (CC) an RVALUE (GC target object) and allocate a new CC on cache miss.
  * This technique allows race-free access from parallel processing elements, like RCU.

(2) Introduce a per-class method cache (pCMC)
  * Instead of the fixed-size global method cache (GMC), the pCMC allows a flexible cache size.
  * Caching CCs reduces CC allocation and allows sharing a CC's fast-path between call-sites with the same call-info (CI).

(3) Invalidate an inline method cache by invalidating the corresponding method entries (MEs)
  * Instead of using class serials, we set an "invalidated" flag on the method entry itself to represent cache invalidation.
  * Compared with using class serials, the impact of method modification (add/overwrite/delete) is small.
  * Updating class serials invalidates all method caches of the class and its sub-classes.
  * The proposed approach invalidates the method cache of only one ME.

See [Feature #16614] for more details.

Notes: Merged: https://github.com/ruby/ruby/pull/2888
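A minimal sketch of idea (3), with simplified stand-in types: each cached method entry carries its own "invalidated" flag, so redefining one method only kills the caches that point at that entry, instead of bumping a class serial and wiping every cache for the class and its sub-classes.

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct method_entry {
    bool invalidated;       /* set on add/overwrite/delete of this method only */
    /* ... resolved target, visibility, etc. ... */
} method_entry;

typedef struct call_cache {
    const method_entry *me; /* disposable: a new CC is allocated on cache miss */
} call_cache;

/* Cache hit only while the referenced method entry is still valid. */
static const method_entry *
cc_check(const call_cache *cc)
{
    return (cc->me && !cc->me->invalidated) ? cc->me : NULL;
}

/* Invalidation touches exactly one method entry. */
static void
invalidate_method(method_entry *me)
{
    me->invalidated = true;
}
```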
2020-02-22  VALUE size packed callinfo (ci).  (Koichi Sasada)
Now, rb_call_info contains how to call the method with a tuple of (mid, orig_argc, flags, kwarg). In most cases kwarg == NULL, and mid+argc+flags requires only 64 bits, so this patch packs rb_call_info into a VALUE (1 word) in such cases. If it cannot be represented in a VALUE, then imemo_callinfo is used, which contains a conventional callinfo (rb_callinfo, renamed from rb_call_info). iseq->body->ci_kw_size is removed because all callinfo is VALUE sized (a packed ci or a pointer to imemo_callinfo). To access ci information, we need to use these functions: vm_ci_mid(ci), _flag(ci), _argc(ci), _kwarg(ci). struct rb_call_info_kw_arg is renamed to rb_callinfo_kwarg. rb_funcallv_with_cc() and rb_method_basic_definition_p_with_cc() are temporarily removed because cd->ci should be marked.

Notes: Merged: https://github.com/ruby/ruby/pull/2888
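An illustrative bit layout for packing (mid, argc, flags) into a single word on a 64-bit build; the accessor names mirror the vm_ci_* functions mentioned above, but the exact layout and tagging here are assumptions, not CRuby's.

```c
#include <stdint.h>
#include <assert.h>

typedef uint64_t packed_ci; /* stand-in for a VALUE-sized word */

static packed_ci
ci_pack(uint32_t mid, uint16_t argc, uint16_t flags)
{
    assert(flags < 0x8000); /* 15 flag bits in this sketch */
    /* Bit 0 tags the word as "packed", so it can be told apart from an
     * aligned pointer to a heap-allocated (imemo) callinfo used for kwargs. */
    return ((packed_ci)mid << 32) | ((packed_ci)argc << 16)
         | ((packed_ci)flags << 1) | 1u;
}

static uint32_t ci_mid(packed_ci ci)   { return (uint32_t)(ci >> 32); }
static uint16_t ci_argc(packed_ci ci)  { return (uint16_t)(ci >> 16); }
static uint16_t ci_flags(packed_ci ci) { return (uint16_t)((ci >> 1) & 0x7fff); }
```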
2019-12-26  debug_counter.h must be self-contained  (卜部昌平)
Include what is necessary. Notes: Merged: https://github.com/ruby/ruby/pull/2711
2019-12-25  add debug_counter access functions.  (Koichi Sasada)
These functions are enabled only when USE_DEBUG_COUNTER=1.
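A sketch of how such accessors can compile down to nothing when the counters are disabled; the names here are hypothetical stand-ins, not the exact functions the commit added.

```c
#include <stddef.h>

#if USE_DEBUG_COUNTER
extern size_t my_debug_counters[];  /* hypothetical counter storage */
static inline size_t my_counter_get(int type)   { return my_debug_counters[type]; }
static inline void   my_counter_clear(int type) { my_debug_counters[type] = 0; }
#else
/* Disabled build: the accessors still exist but do nothing, so callers
 * don't need their own #ifdef guards. */
static inline size_t my_counter_get(int type)   { (void)type; return 0; }
static inline void   my_counter_clear(int type) { (void)type; }
#endif
```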
2019-12-23  add more debug counters to count numeric objects.  (Koichi Sasada)
2019-12-22  Fixed misspellings  (Nobuyoshi Nakada)
Fixed misspellings reported at [Bug #16437]: one that was missed and a new typo.
2019-12-18  describe mc_miss_reuse_call [ci skip]  (卜部昌平)
2019-12-17  add debug counter to count `call` reusing cases.  (Koichi Sasada)
2019-10-03  add debug counters for vm_search_method_slowpath()  (卜部昌平)
Implemented fine-grained inspection of cache misses. Handy for counting the reasons why an inline method cache was evicted.
2019-09-25  introduce `obj_ary_extracapa`.  (Koichi Sasada)
Introduce a new debug counter `obj_ary_extracapa`, which counts arrays where `len < capa`.
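For illustration, a tiny sketch of where such a counter fires, with a simplified array layout: it counts arrays whose allocated capacity exceeds their length.

```c
#include <stddef.h>

struct ary { long len; long capa; };

static size_t obj_ary_extracapa; /* stand-in for the RB_DEBUG_COUNTER slot */

static void
survey_array(const struct ary *a)
{
    if (a->len < a->capa) obj_ary_extracapa++; /* buffer has unused room */
}
```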
2019-09-20  Fix rb_define_singleton_method warning  (Takashi Kokubun)
for debug counters

```
../include/ruby/intern.h:1175:137: warning: passing argument 3 of 'rb_define_singleton_method0' from incompatible pointer type [-Wincompatible-pointer-types]
 #define rb_define_singleton_method(klass, mid, func, arity) rb_define_singleton_method_choose_prototypem3((arity),(func))((klass),(mid),(func),(arity));
                                                                                                                                         ^
../vm.c:2958:5: note: in expansion of macro 'rb_define_singleton_method'
     rb_define_singleton_method(rb_cRubyVM, "show_debug_counters", rb_debug_counter_show, 0);
     ^~~~~~~~~~~~~~~~~~~~~~~~~~
../include/ruby/intern.h:1139:99: note: expected 'VALUE (*)(VALUE) {aka long unsigned int (*)(long unsigned int)}' but argument is of type 'VALUE (*)(void) {aka long unsigned int (*)(void)}'
 __attribute__((__unused__,__weakref__("rb_define_singleton_method"),__nonnull__(2,3)))static void rb_define_singleton_method0 (VALUE,const char*,VALUE(*)(VALUE),int);
```
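The warning says the macro expects VALUE (*)(VALUE) but was given VALUE (*)(void): a singleton method bound with arity 0 still receives its receiver. A sketch of the usual fix, with a stand-in VALUE type (the real function body prints the counters):

```c
typedef unsigned long VALUE; /* stand-in for CRuby's VALUE */

static VALUE
rb_debug_counter_show(VALUE klass) /* was: (void) */
{
    (void)klass;  /* the receiver must be accepted even if unused */
    /* ... print the debug counters ... */
    return 0;     /* Qnil in the real code */
}
```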
2019-08-07  Add a way to print debug counters without exiting  (Aaron Patterson)
I am trying to study debug counters inside a Rails application. Accessing debug counters by killing the process is hard because child processes don't get the same TRAP as the parent, and Rails seems to intercept calls to `exit`. Adding this method lets me print the debug counters when I want (at the end of requests for example)
2019-08-02  add debug_counters to check details.  (Koichi Sasada)
add debug_counters to check the Hash object statistics.
2019-07-31  Use 1 byte hint for ar_table [Feature #15602]  (Koichi Sasada)
On ar_table, do not keep the full-length hash value (FLHV, 8 bytes) but keep a 1-byte hint derived from the FLHV (its lowest byte). An ar_table contains at most 8 entries, so the hints consume 8 bytes at most. We can store the hints in RHash::ar_hint. On 32-bit CPUs, we use a 4-entry ar_table.

The advantages:
* We don't need to keep the FLHV, so an ar_table entry only consumes 16 bytes (the VALUEs of key and value) * 8 entries = 128 bytes.
* We don't need to scan the ar_table, but only need to check the hints in many cases. Especially, we don't need to access the ar_table at all if there are no matching entries (in many cases). This increases memory cache locality.

The disadvantages:
* This technique can increase `#eql?` time because hints can conflict (in theory, once in 256 times). It can introduce an incompatibility if there is an object x where x.eql? returns true even though the hash values are different. I believe we don't need to care about such an irregular case.
* We need to re-calculate the FLHV if we need to switch from ar_table to st_table (e.g. when exceeding 8 entries). This can also introduce an incompatibility when mutating key objects. I believe we don't need to care about such an irregular case either.

Add new debug counters to measure the performance:
* artable_hint_hit - hint is matched and eql? #=> true
* artable_hint_miss - hint is matched but eql? #=> false
* artable_hint_notfound - lookup counts
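A minimal sketch of the hint-first lookup described above, with simplified stand-in types: the one-byte hints (low byte of the full hash) are compared before calling eql?, so most non-matching entries are rejected without touching the key objects at all.

```c
#include <stdint.h>
#include <stdbool.h>

#define AR_MAX 8  /* at most 8 entries, as described above */

struct ar_entry { uintptr_t key, val; };

struct ar_table {
    uint8_t         hints[AR_MAX]; /* low byte of each key's full hash */
    struct ar_entry entries[AR_MAX];
    int             n;
};

/* Returns the entry index or -1; eql is a stand-in for the #eql? call. */
static int
ar_find(const struct ar_table *t, uintptr_t key, uint64_t full_hash,
        bool (*eql)(uintptr_t, uintptr_t))
{
    uint8_t hint = (uint8_t)full_hash;
    for (int i = 0; i < t->n; i++) {
        if (t->hints[i] != hint) continue;         /* cheap reject, no eql? call */
        if (eql(t->entries[i].key, key)) return i; /* hint matched, confirm with eql? */
    }
    return -1;
}
```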
2019-07-19  fix debug counter for Hash counts.  (Koichi Sasada)
Change debug_counters for Hash object counts:
* obj_hash_under4 (1-3) -> obj_hash_1_4 (1-4)
* obj_hash_ge4 (4-7) -> obj_hash_5_8 (5-8)
* obj_hash_ge8 (>=8) -> obj_hash_g8 (> 8)

For example on rdoc benchmark:

```
[RUBY_DEBUG_COUNTER]    obj_hash_empty                   554,900
[RUBY_DEBUG_COUNTER]    obj_hash_under4                  572,998
[RUBY_DEBUG_COUNTER]    obj_hash_ge4                       1,825
[RUBY_DEBUG_COUNTER]    obj_hash_ge8                       2,344

[RUBY_DEBUG_COUNTER]    obj_hash_empty                   553,097
[RUBY_DEBUG_COUNTER]    obj_hash_1_4                     571,880
[RUBY_DEBUG_COUNTER]    obj_hash_5_8                         982
[RUBY_DEBUG_COUNTER]    obj_hash_g8                        2,189
```
2019-07-19  fix shared array terminology.  (Koichi Sasada)
Shared arrays created by Array#dup and so on point to a shared_root object to manage the lifetime of the Array buffer. However, the shared_root is sometimes called just "shared", which is confusing, so this wording is fixed from "shared" to "shared_root":

* RArray::heap::aux::shared -> RArray::heap::aux::shared_root
* ARY_SHARED() -> ARY_SHARED_ROOT()
* ARY_SHARED_NUM() -> ARY_SHARED_ROOT_REFCNT()

Also, add some debug_counters to count shared array objects:

* ary_shared_create: shared ary created by Array#dup and so on.
* ary_shared: finished as shared.
* ary_shared_root_occupied: shared_root but has only 1 refcnt. The number (ary_shared - ary_shared_root_occupied) is meaningful.
2019-05-07  Reduce ONIG_NREGION from 10 to 4: power of 2 and testing revealed most pattern matches are less than or equal to 4 results  (Lourens Naudé)
Closes: https://github.com/ruby/ruby/pull/2135
2019-05-07  add new debug_counters about is_pointer_to_heap().  (Koichi Sasada)
is_pointer_to_heap() is used for conservative marking. To analyze this function's behavior, introduce some debug_counters.
2019-04-20  Invalidate JIT-ed code if ISeq is moved by GC.compact  (k0kubun)
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@67638 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
2019-04-14  Fix missing debug counter name  (k0kubun)
r67550 introduced the typo. git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@67553 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
2019-04-14  Add debug counter for MJIT stale_units  (k0kubun)
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@67546 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
2019-04-14  Add RubyVM.reset_debug_counters when RB_DEBUG_COUNTER  (k0kubun)
is defined. It's 0 by default and so it disappears in an actual build. git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@67544 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
2019-04-06  Add debug counter for VM <-> MJIT calls  (k0kubun)
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@67460 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
2019-03-29  Add mjit_compile_failures debug counter  (k0kubun)
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@67382 b2dd03c8-39d4-4d8f-98ff-823fe69b080e