path: root/vm_insnhelper.h
2021-10-20  Count interpreter instructions when -DYJIT_STATS=1  (Alan Wu)
The interpreter instruction count was enabled based on RUBY_DEBUG as opposed to YJIT_STATS. In builds with YJIT_STATS=1 but RUBY_DEBUG=0, the count was not available. Move YJIT_STATS to yjit.h, where declarations are exposed to code outside of YJIT. Also reduce the changes made to the interpreter for calling into YJIT's instruction counting function.
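For illustration, one way to check such counters from Ruby in a stats-enabled build is sketched below. This is an assumption-laden sketch: it presumes a Ruby built with -DYJIT_STATS=1 and started with --yjit --yjit-stats, and the :vm_insns_count key name is an assumption, not something taken from this commit.

```ruby
# Sketch: inspect YJIT's runtime counters (assumes a -DYJIT_STATS=1 build
# started with `ruby --yjit --yjit-stats`; the :vm_insns_count key is an
# assumption and may differ between Ruby versions).
if defined?(RubyVM::YJIT) && RubyVM::YJIT.enabled?
  stats = RubyVM::YJIT.runtime_stats
  if stats && stats.key?(:vm_insns_count)
    puts "interpreter instructions counted: #{stats[:vm_insns_count]}"
  else
    puts "stats counters not available (non-stats build?)"
  end
else
  puts "YJIT is not enabled"
end
```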
2021-10-20  Yet Another Ruby JIT!  (Jose Narvaez)
Renaming uJIT to YJIT. AKA s/ujit/yjit/g.
2021-10-20  Implement --ujit-stats and instruction counting  (Alan Wu)
VM and ujit instruction counting in debug builds. shopify/ruby#19
2021-06-18  Add a cache for class variables  (eileencodes)
Redo of 34a2acdac788602c14bf05fb616215187badd504 and 931138b00696419945dc03e10f033b1f53cd50f3, which were reverted. GitHub PR #4340.

This change implements a cache for class variables. Previously there was no cache for cvars. Cvar access is slow because it needs to travel all the way up the ancestor tree before returning the cvar value; the deeper the ancestor tree, the slower cvar access will be. The benefits of the cache are more visible with a higher number of included modules due to the way Ruby looks up class variables. The benchmark here includes 26 modules and shows that with the cache this branch is 6.5x faster when accessing class variables.

```
compare-ruby: ruby 3.1.0dev (2021-03-15T06:22:34Z master 9e5105c) [x86_64-darwin19]
built-ruby: ruby 3.1.0dev (2021-03-15T12:12:44Z add-cache-for-clas.. c6be009) [x86_64-darwin19]

|         |compare-ruby|built-ruby|
|:--------|-----------:|---------:|
|vm_cvar  |      5.681M|   36.980M|
|         |           -|     6.51x|
```

Benchmark.ips calling `ActiveRecord::Base.logger` from within a Rails application. ActiveRecord::Base.logger has 71 ancestors. The more ancestors a tree has, the clearer the speed increase; i.e. if Base had only one ancestor we'd see no improvement. This benchmark is run on a vanilla Rails application.

Benchmark code:

```ruby
require "benchmark/ips"
require_relative "config/environment"

Benchmark.ips do |x|
  x.report "logger" do
    ActiveRecord::Base.logger
  end
end
```

Ruby 3.0 master / Rails 6.1:

```
Warming up --------------------------------------
              logger   155.251k i/100ms
Calculating -------------------------------------
```

Ruby 3.0 with cvar cache / Rails 6.1:

```
Warming up --------------------------------------
              logger     1.546M i/100ms
Calculating -------------------------------------
              logger     14.857M (± 4.8%) i/s -     74.198M in   5.006202s
```

Lastly we ran a benchmark to demonstrate the difference between master and our cache when the number of modules increases. This benchmark measures 1 ancestor, 30 ancestors, and 100 ancestors.

Ruby 3.0 master:

```
Warming up --------------------------------------
            1 module     1.231M i/100ms
          30 modules   432.020k i/100ms
         100 modules   145.399k i/100ms
Calculating -------------------------------------
            1 module     12.210M (± 2.1%) i/s -     61.553M in   5.043400s
          30 modules      4.354M (± 2.7%) i/s -     22.033M in   5.063839s
         100 modules      1.434M (± 2.9%) i/s -      7.270M in   5.072531s

Comparison:
            1 module:  12209958.3 i/s
          30 modules:   4354217.8 i/s - 2.80x  (± 0.00) slower
         100 modules:   1434447.3 i/s - 8.51x  (± 0.00) slower
```

Ruby 3.0 with cvar cache:

```
Warming up --------------------------------------
            1 module     1.641M i/100ms
          30 modules     1.655M i/100ms
         100 modules     1.620M i/100ms
Calculating -------------------------------------
            1 module     16.279M (± 3.8%) i/s -     82.038M in   5.046923s
          30 modules     15.891M (± 3.9%) i/s -     79.459M in   5.007958s
         100 modules     16.087M (± 3.6%) i/s -     81.005M in   5.041931s

Comparison:
            1 module:  16279458.0 i/s
         100 modules:  16087484.6 i/s - same-ish: difference falls within error
          30 modules:  15891406.2 i/s - same-ish: difference falls within error
```

Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>

Notes: Merged: https://github.com/ruby/ruby/pull/4544
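The commit describes the N-modules benchmark above without showing its source; the following is a hypothetical reconstruction of that kind of harness (the module counts and report labels here are made up), useful for seeing why a deep ancestor chain makes uncached cvar reads slow.

```ruby
# Hypothetical reconstruction of the "N included modules" cvar benchmark
# described above; not the script that produced the numbers in this log.
require "benchmark/ips"

def build_class(module_count)
  klass = Class.new
  # Use class_eval with a string so @@cvar and read_cvar get the class's cref.
  klass.class_eval <<~RUBY
    @@cvar = 1
    def read_cvar
      @@cvar
    end
  RUBY
  module_count.times { klass.include(Module.new) }
  klass
end

shallow = build_class(1).new
deep    = build_class(100).new

Benchmark.ips do |x|
  x.report("1 module")    { shallow.read_cvar }
  x.report("100 modules") { deep.read_cvar }
  x.compare!
end
```

Without a cache, the 100-module case walks the whole ancestor chain on every read; with the cache the two reports should be close, as in the numbers above.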
2021-06-02  Clarify these are just for MJIT  (Takashi Kokubun)
and not for third-party libraries. See: e6484a153038703447b50fcac26349249922ab28
2021-05-11  Revert "Filling cache values on cvar write"  (Aaron Patterson)
This reverts commit 08de37f9fa3469365e6b5c964689ae2bae0eb9f3. This reverts commit e8ae922b62adb00a80d3d4c49f7d7b0e6026eaba.
2021-05-11  Add a cache for class variables  (eileencodes)
This change implements a cache for class variables. Previously there was no cache for cvars. Cvar access is slow because it needs to travel all the way up the ancestor tree before returning the cvar value; the deeper the ancestor tree, the slower cvar access will be. The benefits of the cache are more visible with a higher number of included modules due to the way Ruby looks up class variables. The benchmark here includes 26 modules and shows that with the cache this branch is 6.5x faster when accessing class variables.

```
compare-ruby: ruby 3.1.0dev (2021-03-15T06:22:34Z master 9e5105ca45) [x86_64-darwin19]
built-ruby: ruby 3.1.0dev (2021-03-15T12:12:44Z add-cache-for-clas.. c6be0093ae) [x86_64-darwin19]

|         |compare-ruby|built-ruby|
|:--------|-----------:|---------:|
|vm_cvar  |      5.681M|   36.980M|
|         |           -|     6.51x|
```

Benchmark.ips calling `ActiveRecord::Base.logger` from within a Rails application. ActiveRecord::Base.logger has 71 ancestors. The more ancestors a tree has, the clearer the speed increase; i.e. if Base had only one ancestor we'd see no improvement. This benchmark is run on a vanilla Rails application.

Benchmark code:

```ruby
require "benchmark/ips"
require_relative "config/environment"

Benchmark.ips do |x|
  x.report "logger" do
    ActiveRecord::Base.logger
  end
end
```

Ruby 3.0 master / Rails 6.1:

```
Warming up --------------------------------------
              logger   155.251k i/100ms
Calculating -------------------------------------
```

Ruby 3.0 with cvar cache / Rails 6.1:

```
Warming up --------------------------------------
              logger     1.546M i/100ms
Calculating -------------------------------------
              logger     14.857M (± 4.8%) i/s -     74.198M in   5.006202s
```

Lastly we ran a benchmark to demonstrate the difference between master and our cache when the number of modules increases. This benchmark measures 1 ancestor, 30 ancestors, and 100 ancestors.

Ruby 3.0 master:

```
Warming up --------------------------------------
            1 module     1.231M i/100ms
          30 modules   432.020k i/100ms
         100 modules   145.399k i/100ms
Calculating -------------------------------------
            1 module     12.210M (± 2.1%) i/s -     61.553M in   5.043400s
          30 modules      4.354M (± 2.7%) i/s -     22.033M in   5.063839s
         100 modules      1.434M (± 2.9%) i/s -      7.270M in   5.072531s

Comparison:
            1 module:  12209958.3 i/s
          30 modules:   4354217.8 i/s - 2.80x  (± 0.00) slower
         100 modules:   1434447.3 i/s - 8.51x  (± 0.00) slower
```

Ruby 3.0 with cvar cache:

```
Warming up --------------------------------------
            1 module     1.641M i/100ms
          30 modules     1.655M i/100ms
         100 modules     1.620M i/100ms
Calculating -------------------------------------
            1 module     16.279M (± 3.8%) i/s -     82.038M in   5.046923s
          30 modules     15.891M (± 3.9%) i/s -     79.459M in   5.007958s
         100 modules     16.087M (± 3.6%) i/s -     81.005M in   5.041931s

Comparison:
            1 module:  16279458.0 i/s
         100 modules:  16087484.6 i/s - same-ish: difference falls within error
          30 modules:  15891406.2 i/s - same-ish: difference falls within error
```

Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>

Notes: Merged: https://github.com/ruby/ruby/pull/4340
2020-12-26  Fixed leaked global symbols  (Nobuyoshi Nakada)
Notes: Merged: https://github.com/ruby/ruby/pull/4003
2020-11-15  Functions defined in headers should be static inline  (Nobuyoshi Nakada)
2020-10-14  ruby_vm_global_method_state is no longer needed.  (Koichi Sasada)
Now ruby_vm_global_method_state is not used so let's remove it. Notes: Merged: https://github.com/ruby/ruby/pull/3657
2020-09-03  Introduce Ractor mechanism for parallel execution  (Koichi Sasada)
This commit introduces the Ractor mechanism to run Ruby programs in parallel. See doc/ractor.md for more details about Ractor, and ticket [Feature #17100] for the implementation details and discussions. This commit does not complete the implementation; you can find many bugs when using Ractor. Also, the specification may still change, so this feature is experimental: you will see a warning when you make the first Ractor with `Ractor.new`. I hope this feature can help protect programmers from thread-safety issues. Notes: Merged: https://github.com/ruby/ruby/pull/3365
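For reference, a minimal Ractor usage example (not taken from the commit); as noted above, the first `Ractor.new` prints an experimental-feature warning.

```ruby
# Two Ractors running in parallel; they communicate by passing values,
# since unshareable objects are not shared between Ractors.
ractors = [10_000_000, 20_000_000].map do |limit|
  Ractor.new(limit) do |n|
    (1..n).sum  # runs in parallel with the other Ractor
  end
end

p ractors.map(&:take)  # => [50000005000000, 200000010000000]
```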
2020-06-30  Extracted METHOD_ENTRY_CACHEABLE macro  (Nobuyoshi Nakada)
2020-06-22  Use canary cond also if not VM_CHECK_MODE to suppress warnings  (Nobuyoshi Nakada)
2020-06-21  Verify builtin inline annotation with VM_CHECK_MODE (#3244)  (Takashi Kokubun)
* Verify builtin inline annotation with VM_CHECK_MODE * Remove static to fix the link issue on MJIT Notes: Merged-By: k0kubun <takashikkbn@gmail.com>
2020-05-22  Suppress warnings no inline ruby debug (#3107)  (Kenta Murata)
* Suppress unused warnings that occurred due to -fno-inline
* Suppress a warning that occurred due to RUBY_DEBUG=1

Notes: Merged-By: mrkn <mrkn@ruby-lang.org>
2020-04-13  add #include guard hack  (卜部昌平)
According to the MSVC manual (*1), cl.exe can skip including a header file when it:

- contains #pragma once, or
- starts with #ifndef, or
- starts with #if ! defined.

GCC has a similar trick (*2), but it is stricter (e.g. there must be _no tokens_ outside of #ifndef...#endif). Sun C lacked #pragma once for a looong time; Oracle Developer Studio 12.5 finally implemented it, but we cannot assume such a recent version. This changeset modifies the header files so that each of them includes exactly one #ifndef...#endif. I believe this is the most portable way to trigger the compiler optimization. [Bug #16770]

*1: https://docs.microsoft.com/en-us/cpp/preprocessor/once
*2: https://gcc.gnu.org/onlinedocs/cppinternals/Guard-Macros.html

Notes: Merged: https://github.com/ruby/ruby/pull/3023
2020-03-17  Reduce allocations for keyword argument hashes  (Jeremy Evans)
Previously, passing a keyword splat to a method always allocated a hash on the caller side, and accepting arbitrary keywords in a method allocated a separate hash on the callee side. Passing explicit keywords to a method that accepted a keyword splat did not allocate a hash on the caller side, but resulted in two hashes allocated on the callee side.

This commit makes passing a single keyword splat to a method not allocate a hash on the caller side. Passing multiple keyword splats or a mix of explicit keywords and a keyword splat still generates a hash on the caller side. On the callee side, if arbitrary keywords are not accepted, it does not allocate a hash. If arbitrary keywords are accepted, it will allocate a hash, but this commit uses a callinfo flag to indicate whether the caller already allocated a hash, and if so, the callee can use the passed hash without duplicating it. So this commit should make it so that a maximum of a single hash is allocated during method calls.

To set the callinfo flag appropriately, method call argument compilation checks whether only a single keyword splat is given. If only one keyword splat is given, the VM_CALL_KW_SPLAT_MUT callinfo flag is not set, since in that case the keyword splat is passed directly and is not mutable. If more than one splat is used, a new hash needs to be generated on the caller side, and in that case the callinfo flag is set, indicating the keyword splat is mutable by the callee.

In compile_hash, used for both hash and keyword argument compilation, if compiling keyword arguments and only a single keyword splat is used, pass the argument directly.

On the callee side, in vm_args.c, the callinfo flag needs to be recognized and handled. Because the keyword splat argument may not be a hash, it needs to be converted to a hash first if it is not. Then, unless the callinfo flag is set, the hash needs to be duplicated. The temporary copy of the callinfo flag, kw_flag, is updated if a hash was duplicated, to prevent the need to duplicate it again. If we are converting to a hash or duplicating a hash, we need to update the argument array, which can include duplicating the positional splat array if one was passed. CALLER_SETUP_ARG and a couple of other places need to be modified to handle similar issues for other types of calls.

This includes fairly comprehensive tests for the different ways keywords are handled internally, checking that you get equal results but that keyword splats on the caller side result in distinct objects for keyword rest parameters.

Included are benchmarks for keyword argument calls. Brief results when compiled without optimization:

  def kw(a: 1) a end
  def kws(**kw) kw end
  h = {a: 1}
  kw(a: 1)       # about same
  kw(**h)        # 2.37x faster
  kws(a: 1)      # 1.30x faster
  kws(**h)       # 2.19x faster
  kw(a: 1, **h)  # 1.03x slower
  kw(**h, **h)   # about same
  kws(a: 1, **h) # 1.16x faster
  kws(**h, **h)  # 1.14x faster

Notes: Merged: https://github.com/ruby/ruby/pull/2945
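The caller/callee behavior described above can be observed from plain Ruby; a small sketch using the same method shapes as the benchmark above:

```ruby
def kw(a: 1)
  a
end

def kws(**kw)
  kw
end

h = { a: 1 }

kw(**h)            # => 1; per the commit, a single keyword splat no longer
                   #    allocates a caller-side hash
rest = kws(**h)    # a **rest parameter still receives its own hash
rest.equal?(h)     # => false: a distinct object, as the tests above check
rest[:b] = 2
h                  # => {:a=>1}; the caller's hash is not mutated
```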
2020-02-22  Introduce disposable call-cache.  (Koichi Sasada)
This patch contains several ideas:

(1) Disposable inline method cache (IMC) for a race-free inline method cache
  * Make the call cache (CC) an RVALUE (GC target object) and allocate a new CC on cache miss.
  * This technique allows race-free access from parallel processing elements, like RCU.

(2) Introduce a per-class method cache (pCMC)
  * Instead of the fixed-size global method cache (GMC), pCMC allows a flexible cache size.
  * Caching CCs reduces CC allocation and allows sharing a CC's fast-path between call-sites with the same call-info (CI).

(3) Invalidate an inline method cache by invalidating the corresponding method entries (MEs)
  * Instead of using class serials, we set an "invalidated" flag on the method entry itself to represent cache invalidation.
  * Compared with using class serials, the impact of method modification (add/overwrite/delete) is small: updating class serials invalidates all method caches of the class and its sub-classes, while the proposed approach invalidates the method cache of only one ME.

See [Feature #16614] for more details.

Notes: Merged: https://github.com/ruby/ruby/pull/2888
2020-02-22  VALUE size packed callinfo (ci).  (Koichi Sasada)
Now rb_call_info describes how to call the method with a tuple of (mid, orig_argc, flags, kwarg). In most cases kwarg == NULL and mid+argc+flags requires only 64 bits, so this patch packs rb_call_info into a VALUE (1 word) in such cases. If it cannot be represented in a VALUE, imemo_callinfo is used instead, which contains a conventional callinfo (rb_callinfo, renamed from rb_call_info). iseq->body->ci_kw_size is removed because every callinfo is now VALUE-sized (a packed ci or a pointer to imemo_callinfo). To access ci information, we need to use these functions: vm_ci_mid(ci), _flag(ci), _argc(ci), _kwarg(ci). struct rb_call_info_kw_arg is renamed to rb_callinfo_kwarg. rb_funcallv_with_cc() and rb_method_basic_definition_p_with_cc() are temporarily removed because cd->ci should be marked. Notes: Merged: https://github.com/ruby/ruby/pull/2888
2019-12-18  per-method serial number  (卜部昌平)
Methods and their definitions can be allocated/deallocated on the fly. One pathological situation is when a method is deallocated and another one is allocated immediately after that. The addresses of those old/new method entries/definitions can then be the same, depending on the underlying malloc/free implementation, so pointer comparison is insufficient; we have to check the contents. To do so we introduce def->method_serial, an integer unique to that specific method definition. PS: Note that method_serial being uintptr_t rather than rb_serial_t is intentional: rb_serial_t can be bigger than a pointer on a 32-bit system (rb_serial_t is at least 64 bits), and in order to preserve the old packing of struct rb_call_cache, rb_serial_t is inappropriate. Notes: Merged: https://github.com/ruby/ruby/pull/2759
2019-12-17  Define PREV_CLASS_SERIAL  (John Hawthorn)
Avoids generating a "throwaway" sentinel class serial. There wasn't any real harm in doing so (we're at no risk of exhaustion and there'd be no measurable performance impact), but it feels cleaner that all class serials actually end up assigned and used (especially now that we won't overwrite them in a single method definition). Notes: Merged: https://github.com/ruby/ruby/pull/2752
2019-12-17  convert macros into inline functions  (卜部昌平)
For better readability.
2019-12-16  ensure cc->def == cc->me->def  (卜部昌平)
The equation shall hold for every call cache. However, prior to this changeset, cc->me could be updated without also updating cc->def. Let's ensure it by introducing a new macro named CC_SET_ME, which sets cc->me and cc->def at once.
2019-10-24  Combine call info and cache to speed up method invocation  (Alan Wu)
To perform a regular method call, the VM needs two structs, `rb_call_info` and `rb_call_cache`. At the moment, we allocate these two structures in separate buffers. In the worst case, the CPU needs to read 4 cache lines to complete a method call. Putting the two structures together reduces the maximum number of cache line reads to 2. Combining the structures also saves 8 bytes per call site, as the current layout uses two separate pointers for the call info and the call cache. This saves about 2 MiB on Discourse. This change improves the Optcarrot benchmark by at least 3%. For more details, see the attached bugs.ruby-lang.org ticket.

Complications:
- A new instruction attribute `comptime_sp_inc` is introduced to calculate the SP increase at compile time without using call caches. At compile time, a `TS_CALLDATA` operand points to a call info struct, but at runtime the same operand points to a call data struct. Instructions that explicitly define `sp_inc` also need to define `comptime_sp_inc`.
- MJIT code for copying the call cache becomes slightly more complicated.
- This changes the bytecode format, which might break existing tools.

[Misc #16258]

Notes: Merged: https://github.com/ruby/ruby/pull/2564
2019-09-03  Merge pull request #2422 from jeremyevans/rb_keyword_given_p  (Jeremy Evans)
Add rb_keyword_given_p to the C-API Notes: Merged-By: jeremyevans <code@jeremyevans.net>
2019-08-30  Don't pass an empty keyword hash when double splatting empty hash  (Jeremy Evans)
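A small example of the resulting behavior: double-splatting an empty hash now passes no arguments at all, instead of an empty positional hash.

```ruby
def positional(*args)
  args
end

h = {}
positional(**h)   # => []  (previously the empty hash could end up as [{}])
positional(**{})  # => []
```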
2019-07-25  Initialize vm_throw_data::throw_state as int  (Nobuyoshi Nakada)
As `struct vm_throw_data::throw_state` is initialized as `VALUE` by rb_imemo_new, the extended MSW part is assigned to it on LP64 big-endian platforms. Fix up 1feda1c2b091b950efcaa481a11fd660efa9e717
2019-07-22  constify again.  (Koichi Sasada)
Same as the last commit, make some fields `const`.

include/ruby/ruby.h:
* RBasic::klass
* RArray::heap::aux::shared_root
* RRegexp::src

internal.h:
* rb_classext_struct::origin_, redefined_class
* vm_svar::cref_or_me, lastline, backref, others
* vm_throw_data::throw_obj
* vm_ifunc::data
* MEMO::v1, v2, u3::value

While working on this patch, I found a write-barrier miss on rb_classext_struct::redefined_class. Also, vm_throw_data::throw_state is only an `int`, so its type is changed.
2019-07-14  Make export declaration place more consistent  (Takashi Kokubun)
2019-07-14  MJIT Support for getblockparamproxy  (Takashi Kokubun)
2019-03-21  Share vm_call_iseq_optimizable_p to reduce copy-paste  (k0kubun)
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@67329 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
2019-02-01  on-smash canary detection  (shyouhei)
In addition to detecting a dead canary, we try to detect the very moment when we smash the stack top. Requested by k0kubun: https://twitter.com/k0kubun/status/1085180749899194368

git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@66981 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
2019-01-25  vm.inc now in C99  (shyouhei)
This changeset modifies the VM generator so that vm.inc is written in C99. Also added some comments in _insn_entry.erb so that the intention of each part is made clearer. I think this improves the overall readability of the generated VM. Confirmed that the exact same binary is generated before/after this changeset.

git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@66923 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
2018-12-28  vm_insnhelper.c: delete unused macros  (shyouhei)
- FIXNUM_2_P: moved to vm_insnhelper.c because that is the only place this macro is used.
- FLONUM_2_P: ditto.
- FLOAT_HEAP_P: not used anywhere.
- FLOAT_INSTANCE_P: ditto.
- GET_TOS: ditto.
- USE_IC_FOR_SPECIALIZED_METHOD: ditto.
- rb_obj_hidden_p: ditto.
- REG_A: ditto.
- REG_B: ditto.
- GET_CONST_INLINE_CACHE: ditto.
- vm_regan_regtype: moved inside of VM_COLLECT_USAGE_DETAILS because that is the only place this enum is used.
- vm_regan_acttype: ditto.
- GET_GLOBAL: used only once; removed, replacing that usage.
- SET_GLOBAL: ditto.
- rb_method_definition_create: declaration moved to vm_insnhelper.c because that is the only place this declaration makes sense.
- rb_method_definition_set: ditto.
- rb_method_definition_eq: ditto.
- rb_make_no_method_exception: ditto.

git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@66597 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
2018-12-26  insns.def: refactor to avoid CALL_METHOD macro  (shyouhei)
The send instruction and its variants are the most frequently executed paths in the entire process. Reducing their macro expansions into a dedicated function called vm_sendish() is the main goal of this changeset. It reduces the size of vm_exec_core from 25,552 bytes to 23,728 bytes on my machine. I see no significant slowdown. Fix: [GH-2056]

vanilla: ruby 2.6.0dev (2018-12-19 trunk 66449) [x86_64-darwin15]
ours: ruby 2.6.0dev (2018-12-19 refactor-send 66449) [x86_64-darwin15] last_commit=insns.def: refactor to avoid CALL_METHOD macro

Calculating -------------------------------------
                          vanilla    ours
   vm2_defined_method      2.645M  2.823M i/s - 6.000M times in 5.109888s 4.783254s
           vm2_method      8.553M  8.873M i/s - 6.000M times in 1.579892s 1.524026s
   vm2_method_missing      3.772M  3.858M i/s - 6.000M times in 3.579482s 3.499220s
vm2_method_with_block      8.494M  8.944M i/s - 6.000M times in 1.589774s 1.509463s
      vm2_poly_method       0.571   0.607 i/s - 1.000 times in 3.947570s 3.733528s
   vm2_poly_method_ov       5.514   5.168 i/s - 1.000 times in 0.408156s 0.436169s
 vm3_clearmethodcache       2.875   2.837 i/s - 1.000 times in 0.783018s 0.793493s

Comparison:
   vm2_defined_method
                 ours:  2822555.4 i/s
              vanilla:  2644878.1 i/s - 1.07x slower
           vm2_method
                 ours:  8872947.8 i/s
              vanilla:  8553433.1 i/s - 1.04x slower
   vm2_method_missing
                 ours:  3858192.3 i/s
              vanilla:  3772296.3 i/s - 1.02x slower
vm2_method_with_block
                 ours:  8943825.1 i/s
              vanilla:  8493955.0 i/s - 1.05x slower
      vm2_poly_method
                 ours:        0.6 i/s
              vanilla:        0.6 i/s - 1.06x slower
   vm2_poly_method_ov
              vanilla:        5.5 i/s
                 ours:        5.2 i/s - 1.07x slower
 vm3_clearmethodcache
              vanilla:        2.9 i/s
                 ours:        2.8 i/s - 1.01x slower

git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@66565 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
2018-09-18  vm_insnhelper.h: rename CI_SET_FASTPATH to CC_SET_FASTPATH  (k0kubun)
because it's actually setting fastpath to cc instead of ci since r51903.
vm_insnhelper.c: ditto
mjit_compile.c: ditto
tool/ruby_vm/views/_mjit_compile_send.erb: ditto

git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@64772 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
2018-09-14  move ADD_PC around (take 2)  (shyouhei)
Now that we can say for sure whether an instruction calls a method internally, it is now possible to route around the bugs that forced us to revert the "move PC around" optimization. First try: r62051. Reverted: r63763. See also: r63999.

----
trunk: ruby 2.6.0dev (2018-09-13 trunk 64736) [x86_64-darwin15]
ours: ruby 2.6.0dev (2018-09-13 trunk 64736) [x86_64-darwin15] last_commit=move ADD_PC around (take 2)

Calculating -------------------------------------
                          trunk   ours
          so_ackermann    1.884  2.278 i/s - 1.000 times in 0.530926s 0.438935s
              so_array    1.178  1.157 i/s - 1.000 times in 0.848786s 0.864467s
       so_binary_trees    0.176  0.177 i/s - 1.000 times in 5.683895s 5.657707s
        so_concatenate    0.220  0.221 i/s - 1.000 times in 4.546896s 4.518949s
        so_count_words    6.729  6.470 i/s - 1.000 times in 0.148602s 0.154561s
          so_exception    3.324  3.688 i/s - 1.000 times in 0.300872s 0.271147s
           so_fannkuch    0.546  0.968 i/s - 1.000 times in 1.831328s 1.033376s
              so_fasta    0.541  0.547 i/s - 1.000 times in 1.849923s 1.827091s
       so_k_nucleotide    0.800  0.777 i/s - 1.000 times in 1.250635s 1.286295s
              so_lists    2.101  1.848 i/s - 1.000 times in 0.475954s 0.541095s
         so_mandelbrot    0.435  0.408 i/s - 1.000 times in 2.299328s 2.450535s
             so_matrix    1.946  1.912 i/s - 1.000 times in 0.513872s 0.523076s
     so_meteor_contest    0.311  0.317 i/s - 1.000 times in 3.219297s 3.152052s
              so_nbody    0.746  0.703 i/s - 1.000 times in 1.339815s 1.423441s
        so_nested_loop    0.899  0.901 i/s - 1.000 times in 1.111767s 1.109555s
             so_nsieve    0.559  0.579 i/s - 1.000 times in 1.787763s 1.726552s
        so_nsieve_bits    0.435  0.428 i/s - 1.000 times in 2.296282s 2.333852s
             so_object    1.368  1.442 i/s - 1.000 times in 0.731237s 0.693684s
       so_partial_sums    0.616  0.546 i/s - 1.000 times in 1.623592s 1.833097s
           so_pidigits    0.831  0.832 i/s - 1.000 times in 1.203117s 1.202334s
             so_random    2.934  2.724 i/s - 1.000 times in 0.340791s 0.367150s
 so_reverse_complement    0.583  0.866 i/s - 1.000 times in 1.714144s 1.154615s
              so_sieve    1.829  2.081 i/s - 1.000 times in 0.546607s 0.480562s
       so_spectralnorm    0.524  0.558 i/s - 1.000 times in 1.908716s 1.792382s

git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@64737 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
2018-09-13  vm_insnhelper.h: drop OPT_CALL_FASTPATH macro support  (k0kubun)
because cc->call is NULL by default and it is not overridden by vm_search_super_method if OPT_CALL_FASTPATH is 0. So this macro is not just a switch for optimization but now it's mandatory.

vm_insnhelper.c: cosmetic change. Use boolean-ish `TRUE` instead of 1 to specify `enabled` flag.

git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@64735 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
2018-09-13  Revert "vm_insnhelper.h: simplify EXEC_EC_CFP implementation"  (k0kubun)
This reverts commit r64711, because EXEC_EC_CFP on JIT-ed code does not call jit_func with the patch when catch_except_p is true. It wasn't intentional. git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@64730 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
2018-09-13  vm_insnhelper.h: simplify EXEC_EC_CFP implementation  (k0kubun)
and possibly memory access for iseq->body may be reduced. No significant impact for performance on Optcarrot.

* before
fps: 55.03865935187656
fps: 57.16854675983188
fps: 57.672458407661765
fps: 58.28989837869383
fps: 58.80503815099268
fps: 59.068054176528534
fps: 59.55736806358244
fps: 61.01018920533034
fps: 63.34167049232186
fps: 65.20575018321766
fps: 65.46758316561318

* after
fps: 55.21860411005677
fps: 55.34840351179166
fps: 58.23666596747484
fps: 59.71987124578901
fps: 61.131485120234935
fps: 61.279905164649485
fps: 61.66060774175459
fps: 64.11215576508765
fps: 64.63699742853154
fps: 65.28260058920769
fps: 65.85447796482678

git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@64711 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
2018-09-13  move canary-related statements into macros  (shyouhei)
This is mostly cosmetic. Should generate a slightly more readable vm.inc output.

git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@64709 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
2018-08-07  mjit.c: initial support for mswin MJIT  (k0kubun)
With this commit's changes in other files, MJIT now starts to work on VC++. Unfortunately some features are still broken; they'll be fixed later. This also suppresses cl.exe's default output to stdout, because there seems to be no cl.exe option for that. Tweaking some log messages as well.

vm_core.h: declare `__declspec(dllimport)` to export them correctly on mswin.
vm_insnhelper.h: ditto
mjit.h: ditto
test_jit.rb: skipped some pending tests.

git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@64221 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
2018-07-19  mjit_compile.c: reduce sp motion on JIT  (k0kubun)
This retries r62655, which was reverted at r63863 for r63763.

tool/ruby_vm/views/_mjit_compile_insn.erb: revert the revert.
tool/ruby_vm/views/_mjit_compile_insn_body.erb: ditto.
tool/ruby_vm/views/_mjit_compile_pc_and_sp.erb: ditto.
tool/ruby_vm/views/_mjit_compile_send.erb: ditto.
tool/ruby_vm/views/mjit_compile.inc.erb: ditto.
tool/ruby_vm/views/_insn_entry.erb: revert half of r63763. The commit was originally reverted since changing pc motion was bad for tracing, but changing sp motion was totally fine. For JIT, I wanna resurrect the sp motion change in r62051.
tool/ruby_vm/models/bare_instructions.rb: ditto.
insns.def: ditto.
vm_insnhelper.c: ditto.
vm_insnhelper.h: ditto.

* benchmark

$ benchmark-driver benchmark.yml --rbenv 'before;after;before --jit;after --jit' --repeat-count 12 -v
before: ruby 2.6.0dev (2018-07-19 trunk 63998) [x86_64-linux]
after: ruby 2.6.0dev (2018-07-19 add-sp 63998) [x86_64-linux] last_commit=mjit_compile.c: reduce sp motion on JIT
before --jit: ruby 2.6.0dev (2018-07-19 trunk 63998) +JIT [x86_64-linux]
after --jit: ruby 2.6.0dev (2018-07-19 add-sp 63998) +JIT [x86_64-linux] last_commit=mjit_compile.c: reduce sp motion on JIT

Calculating -------------------------------------
                            before   after  before --jit  after --jit
Optcarrot Lan_Master.nes    51.354  50.238        70.010       72.139 fps

Comparison:
             Optcarrot Lan_Master.nes
              after --jit:      72.1 fps
             before --jit:      70.0 fps - 1.03x slower
                   before:      51.4 fps - 1.40x slower
                    after:      50.2 fps - 1.44x slower

git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@63999 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
2018-06-27  remove assertion for hidden objects.  (ko1)
* vm_insnhelper.h (PUSH): r63763 increased "PUSH()" ops and revealed that there are hidden objects (at least T_IMEMO/iseq objects) located on the VM stack. Remove this check; I'll revisit it later.

git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@63765 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
2018-06-27  give up insn attr handles_frame  (shyouhei)
I introduced this mechanism in r62051 to speed things up. Later it was reported that the change causes problems. I searched for workarounds but nothing seemed appropriate. I hereby officially give it up. The idea to move ADD_PC around was a mistake. Fixes [Bug #14809] and [Bug #14834]. Signed-off-by: Urabe, Shyouhei <shyouhei@ruby-lang.org> git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@63763 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
2018-06-19  NULL class Data_Wrap_Struct is not allowed.  (ko1)
* spec/ruby/optional/capi/typed_data_spec.rb: same as r63692.
* spec/ruby/optional/capi/ext/typed_data_spec.c: ditto.
* vm_insnhelper.h (PUSH): re-enable assertion to check hidden objects.

git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@63694 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
2018-06-18  remove assertion because rubyspec hit this assert  (ko1)
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@63688 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
2018-06-18  hidden objects should not be pushed.  (ko1)
* vm_insnhelper.h (PUSH): hidden objects (klass == 0) should not be pushed to a VM value stack. Add assertion for it.

git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@63687 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
2018-03-04  mjit_compile.c: use local variables for stack  (k0kubun)
if catch_except_p is FALSE. If catch_except_p is TRUE, stack values should be on the VM's stack when an exception is thrown and the JIT-ed frame is re-executed by the VM's exception handler. If it's FALSE, the JIT-ed frame won't be re-executed and we don't need to keep values on the VM's stack. Using local variables allows us to reduce cfp->sp motion. Moving cfp->sp is needed only for insns whose handles_frame? is false, so this improves performance.

_mjit_compile_insn.erb: Prepare a `stack_size` variable for the GET_SP, STACK_ADDR_FROM_TOP, TOPN macros. Share the pc and sp motion partial view. Use the cancel handler created in mjit_compile.c.
_mjit_compile_send.erb: ditto. Also, when iseq->body->catch_except_p is TRUE, this stops calling mjit_exec directly. I described the reason in vm_insnhelper.h's comment for EXEC_EC_CFP.
_mjit_compile_pc_and_sp.erb: Shared logic for moving sp and pc. As you can see from this file, when status->local_stack_p is TRUE and insn.handles_frame? is false, moving sp is skipped. But if insn.handles_frame? is true, values should be rolled back to the VM's stack.
common.mk: add a dependency for the file.
_mjit_compile_insn_body.erb: Set the sp value before canceling JIT on DISPATCH_ORIGINAL_INSN. Replace the GET_SP, STACK_ADDR_FROM_TOP, TOPN macros for the case where local_stack_p is TRUE and insn.handles_frame? is false. In that case, values are not available on the VM's stack and those macros should be replaced.
mjit_compile.inc.erb: updated comments about which macros are supported by the JIT compiler. All references to `cfp->sp` should be replaced, and thus INC_SP, SET_SV, PUSH are no longer supported for now, because they are not used now.
vm_exec.h: moved the EXEC_EC_CFP definition to vm_insnhelper.h because it's tightly coupled to CALL_METHOD.
vm_insnhelper.h: Has the revised EXEC_EC_CFP definition moved from vm_exec.h. Now it triggers mjit_exec for the VM and has the guard for catch_except_p on JIT-ed code. See comments for details. CALL_METHOD delegates triggering mjit_exec to EXEC_EC_CFP.
insns.def: Stopped using EXEC_EC_CFP for the cases where we don't want to trigger mjit_exec. Those insns (defineclass, opt_call_c_function) are not supported by JIT and it's safe to use RESTORE_REGS(), NEXT_INSN(). expandarray is changed to pass GET_SP() to replace the macro in _mjit_compile_insn_body.erb.
vm_insnhelper.c: change to take sp for the above reason.

[close https://github.com/ruby/ruby/pull/1828]

This patch resurrects the performance which was attached in [Feature #14235].

* Benchmark

Optcarrot (with configuration for benchmark_driver.gem) https://github.com/benchmark-driver/optcarrot

$ benchmark-driver benchmark.yml --verbose 1 --rbenv 'before;before+JIT::before,--jit;after;after+JIT::after,--jit' --repeat-count 10
before: ruby 2.6.0dev (2018-03-04 trunk 62652) [x86_64-linux]
before+JIT: ruby 2.6.0dev (2018-03-04 trunk 62652) +JIT [x86_64-linux]
after: ruby 2.6.0dev (2018-03-04 local-variable.. 62652) [x86_64-linux] last_commit=mjit_compile.c: use local variables for stack
after+JIT: ruby 2.6.0dev (2018-03-04 local-variable.. 62652) +JIT [x86_64-linux] last_commit=mjit_compile.c: use local variables for stack

Calculating -------------------------------------
              before  before+JIT   after  after+JIT
optcarrot     53.552      59.680  53.697     63.358 fps

Comparison:
             optcarrot
            after+JIT:      63.4 fps
           before+JIT:      59.7 fps - 1.06x slower
                after:      53.7 fps - 1.18x slower
               before:      53.6 fps - 1.18x slower

git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@62655 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
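As a rough Ruby-level illustration of the catch_except_p distinction above: whether an iseq has a catch table is visible in its disassembly, and a method with a rescue clause is one case that gets one (treat the exact conditions that set catch_except_p as an internal detail; this sketch only shows how to inspect the catch table).

```ruby
def add(a, b)
  a + b
end

def safe_div(a, b)
  a / b
rescue ZeroDivisionError
  0
end

# The disassembly of safe_div contains a "== catch table" section, while
# add has none; per the commit above, JIT-ed code for iseqs that can't be
# re-executed by the exception handler may keep stack values in C locals.
puts RubyVM::InstructionSequence.of(method(:add)).disasm
puts RubyVM::InstructionSequence.of(method(:safe_div)).disasm
```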
2018-03-03  vm.c: add mjit_enable_p flag  (k0kubun)
to count up total calls properly. Some places (especially CALL_METHOD) invoke mjit_exec twice for one method call. That would be problematic when debugging, or possibly it would result in a wrong profiling result. This commit has no impact on performance:

* Optcarrot benchmark

** before
fps: 59.37757770848619
fps: 56.49998488958699
fps: 59.07900362739362
fps: 58.924749807695996
fps: 57.667905665594894
fps: 57.540021018385254
fps: 59.5518055679647
fps: 55.93831555148311
fps: 57.82685112863262
fps: 59.22391754481736
checksum: 59662

** after
fps: 58.461881158098194
fps: 59.32685183081354
fps: 54.11334310279802
fps: 59.2281560439788
fps: 58.60495705318312
fps: 55.696478648491045
fps: 58.49003452654724
fps: 58.387771929393224
fps: 59.24156772816439
fps: 56.68804731968107
checksum: 59662

* Discourse

Your Results: (note for timings - percentile is first, duration is second in millisecs)

** before (without JIT)
categories_admin: 50: 16 75: 17 90: 24 99: 37
home_admin: 50: 20 75: 20 90: 24 99: 42
topic_admin: 50: 16 75: 16 90: 18 99: 28
categories: 50: 36 75: 37 90: 45 99: 68
home: 50: 38 75: 40 90: 53 99: 92
topic: 50: 14 75: 15 90: 17 99: 26

** after (without JIT)
categories_admin: 50: 16 75: 16 90: 24 99: 36
home_admin: 50: 19 75: 20 90: 23 99: 41
topic_admin: 50: 16 75: 16 90: 19 99: 33
categories: 50: 35 75: 36 90: 44 99: 61
home: 50: 38 75: 40 90: 52 99: 101
topic: 50: 14 75: 15 90: 15 99: 24

** before (with JIT)
categories_admin: 50: 19 75: 23 90: 29 99: 44
home_admin: 50: 24 75: 26 90: 32 99: 46
topic_admin: 50: 20 75: 22 90: 27 99: 44
categories: 50: 41 75: 43 90: 51 99: 66
home: 50: 46 75: 49 90: 56 99: 68
topic: 50: 18 75: 19 90: 22 99: 31

** after (with JIT)
categories_admin: 50: 18 75: 21 90: 28 99: 42
home_admin: 50: 23 75: 25 90: 31 99: 51
topic_admin: 50: 19 75: 20 90: 24 99: 31
categories: 50: 41 75: 44 90: 52 99: 69
home: 50: 45 75: 48 90: 61 99: 88
topic: 50: 19 75: 20 90: 24 99: 33

git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@62641 b2dd03c8-39d4-4d8f-98ff-823fe69b080e