path: root/rjit_c.rb

2024-03-01  Update a stubbed type for RJIT  (Takashi Kokubun)

cfunc.func is actually used by RJIT.

2024-03-01  Update bindgen for YJIT and RJIT  (Takashi Kokubun)

2024-02-21  [PRISM] Provide runtime flag for prism in iseq  (Kevin Newton)

2024-02-15  Bump the required BASERUBY version to 3.0 (#9976)  (Takashi Kokubun)

2024-01-24  Introduce Allocationless Anonymous Splat Forwarding  (Jeremy Evans)

Ruby makes it easy to delegate all arguments from one method to another:

```ruby
def f(*args, **kw)
  g(*args, **kw)
end
```

Unfortunately, this indirection decreases performance. One reason is that it allocates an array and a hash per call to `f`, even if `args` and `kw` are not modified. Due to Ruby's ability to modify almost anything at runtime, it's difficult to avoid the array allocation in the general case. For example, it's not safe to avoid the allocation in a case like this:

```ruby
def f(*args, **kw)
  foo(bar)
  g(*args, **kw)
end
```

because `foo` may be `eval` and `bar` may be a string referencing `args` or `kw`. To fix this correctly, you would need to perform something similar to escape analysis on the variables.

However, there is one case where you can avoid the allocation without doing escape analysis: when the splat variables are anonymous:

```ruby
def f(*, **)
  g(*, **)
end
```

When splat variables are anonymous, they cannot be referenced directly; they can only be used as splats in calls to other methods. Since that is the case, if `f` is called with a regular splat and a keyword splat, it can pass the arguments directly to `g` without copying them, avoiding the allocation. For example:

```ruby
def g(a, b:)
  a + b
end

def f(*, **)
  g(*, **)
end

a = [1]
kw = {b: 2}
f(*a, **kw)
```

I call this technique Allocationless Anonymous Splat Forwarding.

This is implemented using a couple of additional iseq param flags, anon_rest and anon_kwrest. If anon_rest is set, and an array splat is passed when calling the method, and the array splat can be used without modification, `setup_parameters_complex` does not duplicate it. Similarly, if anon_kwrest is set and a keyword splat is passed when calling the method, `setup_parameters_complex` does not duplicate it.
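As a rough way to observe the effect described above, here is a measurement sketch (illustrative only; the helper names are hypothetical, anonymous forwarding requires Ruby 3.2+, and exact per-call counts vary by Ruby version) comparing per-call object allocations of named versus anonymous forwarding via `GC.stat` deltas:

```ruby
# Hypothetical helper names; counts are approximate and version-dependent.
def target(a, b:)
  a + b
end

def forward_named(*args, **kw)
  target(*args, **kw)
end

def forward_anon(*, **)   # anonymous splats: Ruby 3.2+
  target(*, **)
end

# Estimate objects allocated per call by diffing the GC allocation counter.
def allocations_per_call(n = 100_000)
  before = GC.stat(:total_allocated_objects)
  yield n
  (GC.stat(:total_allocated_objects) - before).fdiv(n)
end

a  = [1]
kw = { b: 2 }
named = allocations_per_call { |n| n.times { forward_named(*a, **kw) } }
anon  = allocations_per_call { |n| n.times { forward_anon(*a, **kw) } }
puts "named: ~#{named.round(2)} objects/call"
puts "anon:  ~#{anon.round(2)} objects/call"
```

On a Ruby with this optimization, the anonymous variant should allocate measurably fewer objects per call than the named variant.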

2024-01-23  Leave a comment about the limitation of Primitive  (Takashi Kokubun)

Also adjust some code styling from that PR.

2024-01-22  `cexpr!` must be up to one per line now  (Nobuyoshi Nakada)

2024-01-18  RJIT: Properly reject keyword splat with `yield`  (Alan Wu)

See the corresponding fix for YJIT.

2024-01-16  Drop obsoleted BUILTIN_ATTR_NO_GC attribute  (Takashi Kokubun)

The thing that used this in the past was very buggy, and we've never revisited it. Let's remove it until we need it again.

2024-01-05  Do not `poll` first  (Koichi Sasada)

Before this patch, the MN scheduler waits for IO with the following steps:

1. `poll(fd, timeout=0)` to check whether the fd is ready.
2. If the fd is not ready, wait with the MN thread scheduler.
3. Call `func` to issue the blocking I/O call.

The advantage of polling in advance is that we can wait for readiness on any fd. However, `poll()` is pure overhead for fds that are already ready. This patch changes the steps to:

1. Call `func` to issue the blocking I/O call.
2. If `func` returns `EWOULDBLOCK`, the fd is `O_NONBLOCK` and we need to wait for it to become ready, so wait with the MN thread scheduler.

In this case, we can wait only for `O_NONBLOCK` fds. Other fds wait with blocking operations such as the `read()` system call, but we no longer need an advance `poll()` to check readiness.

With this patch we can observe a performance improvement on a microbenchmark which repeats blocking I/O (not an `O_NONBLOCK` fd) with and without the MN thread scheduler:

```ruby
require 'benchmark'

f = open('/dev/null', 'w')
f.sync = true

TN = 1
N = 1_000_000 / TN

Benchmark.bm{|x|
  x.report{
    TN.times.map{
      Thread.new{
        N.times{f.print '.'}
      }
    }.each(&:join)
  }
}

__END__
TN = 1
               user     system      total        real
ruby32     0.393966   0.101122   0.495088 (  0.495235)
ruby33     0.493963   0.089521   0.583484 (  0.584091)
ruby33+MN  0.639333   0.200843   0.840176 (  0.840291) <- Slow
this+MN    0.512231   0.099091   0.611322 (  0.611074) <- Good
```

2023-12-22  RJIT: Distinguish Pointer from Array  (Takashi Kokubun)

This is more convenient for accessing those fields.

2023-12-21  RJIT: Update bindgen  (Takashi Kokubun)

2023-12-21  RJIT: Rename pause/resume to disable/enable  (Takashi Kokubun)

This matches YJIT. They don't work the same way yet, but making the naming consistent first means we won't need to rename them later.

2023-12-18  RJIT: Share rb_vm_insns_count for vm_insns_count  (Takashi Kokubun)

2023-12-08  Thread specific storage APIs  (Koichi Sasada)

This patch introduces thread-specific storage APIs for tools that use the `rb_internal_thread_event_hook` APIs.

* `rb_internal_thread_specific_key_create()` creates a tool-specific thread-local storage key and allocates the storage if it is not yet available.
* `rb_internal_thread_specific_set()` stores data in the thread- and tool-specific storage.
* `rb_internal_thread_specific_get()` retrieves data from the thread- and tool-specific storage.

Note that `rb_internal_thread_specific_get|set(thread_val, key)` can be called without the GVL, is async-signal-safe, and is safe for multi-threading (native threads), so you can call it from any internal thread event hook. Furthermore, you can call it from other native threads. Of course, `thread_val` must remain alive while the data is accessed through these functions. Also, do not forget to clean up the stored data.

2023-11-13  Revert "Revert "Remove SHAPE_CAPACITY_CHANGE shapes""  (Peter Zhu)

This reverts commit 5f3fb4f4e397735783743fe52a7899b614bece20.

2023-11-10  Revert "Remove SHAPE_CAPACITY_CHANGE shapes"  (Peter Zhu)

This reverts commit f6910a61122931e4193bcc0fad18d839c319b720. We're seeing crashes in the test suite of Shopify's core monolith after this change.

2023-11-09  Remove SHAPE_CAPACITY_CHANGE shapes  (Peter Zhu)

We don't need to create a shape to transition capacity, as we can transition the capacity when the capacity of the SHAPE_IVAR changes.

2023-11-08  Refactor rb_shape_transition_shape_capa out  (Jean Boussier)

Right now, callers of `rb_shape_get_next` first need to check whether there is capacity left, and if not, call `rb_shape_transition_shape_capa` before they can call `rb_shape_get_next`. On each of these calls they also need to check for a TOO_COMPLEX result. All this logic is duplicated in the interpreter, YJIT, and RJIT.

Instead, `rb_shape_get_next` can perform the capacity transition itself when needed. The caller can compare the old and new shapes' capacities to know whether resizing is needed, and it only has to check for TOO_COMPLEX once.

2023-11-02  Make every initial size pool shape a root shape  (Peter Zhu)

This commit makes every initial size pool shape a root shape and assigns it a capacity of 0.

2023-10-24  Use a functional red-black tree for indexing the shapes  (Aaron Patterson)

This is an experimental commit that uses a functional red-black tree to create an index of the ancestor shapes. It uses an Okasaki-style functional red-black tree: https://www.cs.tufts.edu/comp/150FP/archive/chris-okasaki/redblack99.pdf

This tree is advantageous because:

* It offers O(log n) insertions and O(log n) lookups.
* It shares memory with previous "versions" of the tree.

When we insert a node into the tree, only the parts that need to be rebalanced are newly allocated. Parts that don't need rebalancing are not reallocated, so "new" trees share memory with old trees. This is in contrast to a sorted set, which we would have to duplicate and re-sort on each insertion.

I've added a new stat to RubyVM.stat so we can understand how the red-black tree grows.
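For illustration, here is a minimal Okasaki-style persistent red-black tree sketched in Ruby (the real shape index is implemented in C; every name here is hypothetical). Insertion returns a new root while sharing every untouched subtree with the previous version:

```ruby
# Minimal persistent red-black tree in the Okasaki style, for illustration.
Node  = Struct.new(:color, :left, :key, :right)
EMPTY = nil

def insert(tree, key)
  root = ins(tree, key)
  Node.new(:black, root.left, root.key, root.right)  # root is always black
end

def ins(node, key)
  return Node.new(:red, EMPTY, key, EMPTY) if node.nil?
  if    key < node.key then balance(node.color, ins(node.left, key), node.key, node.right)
  elsif key > node.key then balance(node.color, node.left, node.key, ins(node.right, key))
  else  node  # already present: share the whole subtree unchanged
  end
end

def red?(node) = !node.nil? && node.color == :red

# Okasaki's four rebalancing cases: a black node with a red child and a red
# grandchild becomes a red node with two black children.
def balance(color, left, key, right)
  if color == :black
    if red?(left) && red?(left.left)
      return rotate(left.left.left, left.left.key, left.left.right, left.key, left.right, key, right)
    elsif red?(left) && red?(left.right)
      return rotate(left.left, left.key, left.right.left, left.right.key, left.right.right, key, right)
    elsif red?(right) && red?(right.left)
      return rotate(left, key, right.left.left, right.left.key, right.left.right, right.key, right.right)
    elsif red?(right) && red?(right.right)
      return rotate(left, key, right.left, right.key, right.right.left, right.right.key, right.right.right)
    end
  end
  Node.new(color, left, key, right)
end

def rotate(a, x, b, y, c, z, d)
  Node.new(:red, Node.new(:black, a, x, b), y, Node.new(:black, c, z, d))
end

v1 = [10, 5, 20].reduce(EMPTY) { |t, k| insert(t, k) }
v2 = insert(v1, 15)  # v1 is still valid and shares structure with v2
```

That structural sharing is the point of the commit: each insertion allocates only the rebalanced path, so old and new "versions" of the index coexist cheaply.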

2023-10-18  Revert "shape.h: Make attr_index_t uint8_t"  (Katherine Oelsner)

This reverts commit e3afc212ec059525fe4e5387b2a3be920ffe0f0e.

2023-10-11  shape.h: Make attr_index_t uint8_t  (Jean Boussier)

Given `SHAPE_MAX_NUM_IVS 80`, we transition to TOO_COMPLEX well before we could overflow an 8-bit counter. This reduces the size of `rb_shape_t` from 32B to 24B. If we decide to raise `SHAPE_MAX_NUM_IVS`, we can always widen the type again.

2023-10-10  Refactor rb_shape_transition_shape_capa to not accept capacity  (Jean Boussier)

This way the growth factor is encapsulated, which allows rb_shape_transition_shape_capa to be smarter about ideal sizes.

2023-10-01  Use reference counting to avoid memory leak in kwargs  (HParker)

Track other callinfo that references the same kwargs, and free them once all references are cleared. [Bug #19906]

Co-authored-by: Peter Zhu <peter@peterzhu.ca>
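A minimal sketch of the reference-counting pattern being described, in illustrative Ruby (the actual change reference-counts C-level kwargs structs shared between callinfo objects; all names here are hypothetical):

```ruby
# Illustrative only: several call sites share one kwargs descriptor, and the
# descriptor is freed when the last reference is released.
class SharedKwargs
  def initialize(keywords)
    @keywords = keywords.freeze
    @refcount = 1
  end

  # Another callinfo starts referencing the same kwargs.
  def retain
    @refcount += 1
    self
  end

  # A callinfo drops its reference; free on the last release.
  def release
    @refcount -= 1
    free if @refcount.zero?
  end

  private

  def free
    puts "freeing kwargs #{@keywords.inspect}"
  end
end

kw     = SharedKwargs.new(%i[a b])
shared = kw.retain   # a second call site references the same kwargs
kw.release           # first reference dropped; still alive
shared.release       # last reference dropped; prints "freeing kwargs [:a, :b]"
```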

2023-09-22  [Bug #19896] Fix memory leak in vm_method  (Adam Hess)

This introduces a unified reference_count to clarify who is referencing a method. It also allows us to treat the refinement method as the def owner, since it counts itself as a reference.

Co-authored-by: Peter Zhu <peter@peterzhu.ca>

2023-08-08  YJIT: Compile exception handlers (#8171)  (Takashi Kokubun)

Co-authored-by: Maxime Chevalier-Boisvert <maximechevalierb@gmail.com>

Notes: Merged-By: k0kubun <takashikkbn@gmail.com>

2023-07-17  Remove __bp__ and speed-up bmethod calls (#8060)  (Alan Wu)

Remove rb_control_frame_t::__bp__ and optimize bmethod calls.

This commit removes the __bp__ field from rb_control_frame_t. It was introduced to help MJIT, but since MJIT was replaced by RJIT, we can use vm_base_ptr() to compute it from the SP of the previous control frame instead. Removing the field avoids needing to set it up when pushing new frames.

Simply removing __bp__ would cause crashes, since RJIT and YJIT used a slightly different stack layout for bmethod calls than the interpreter. At the moment of the call, the two layouts looked as follows:

                    ┌────────────┐    ┌────────────┐
                    │ frame_base │    │ frame_base │
                    ├────────────┤    ├────────────┤
                    │    ...     │    │    ...     │
                    ├────────────┤    ├────────────┤
                    │    args    │    │    args    │
                    ├────────────┤    └────────────┘<─prev_frame_sp
                    │  receiver  │
    prev_frame_sp─> └────────────┘
                      RJIT & YJIT       interpreter

Essentially, vm_base_ptr() needs to compute the address of frame_base given prev_frame_sp in the diagrams. The presence of the receiver created an off-by-one situation.

Make the interpreter use the layout the JITs use for iseq-to-iseq bmethod calls. Doing so removes unnecessary argument shifting and vm_exec_core() re-entry from the interpreter, yielding a speed improvement visible through `benchmark/vm_defined_method.yml`:

    patched: 7578743.1 i/s
     master: 4796596.3 i/s - 1.58x slower

C-to-iseq bmethod calls now store one more VALUE than before, but that should have negligible impact on overall performance.

Note that re-entering vm_exec_core() used to be necessary for firing TracePoint events, but that's no longer the case since 9121e57a5f50bc91bae48b3b91edb283bf96cb6b.

Closes ruby/ruby#6428

2023-06-23  Expose rb_hash_resurrect  (Aaron Patterson)

This is for implementing the `duphash` instruction.

Notes: Merged: https://github.com/ruby/ruby/pull/7969
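For context, `duphash` is the instruction emitted for small static hash literals: it duplicates a prebuilt frozen hash, which is the copy `rb_hash_resurrect` performs. A quick way to see the instruction (output abbreviated):

```ruby
# Static hash literals compile to `duphash`, which copies a frozen
# prebuilt hash at runtime.
puts RubyVM::InstructionSequence.compile("{ a: 1, b: 2 }").disasm
# The disassembly includes a line like:
#   0000 duphash    {:a=>1, :b=>2}
```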

2023-06-06  Unify length field for embedded and heap strings (#7908)  (Peter Zhu)

* Unify length field for embedded and heap strings. The length field is of the same type and position in RString for both embedded and heap-allocated strings, so we can unify it.
* Remove RSTRING_EMBED_LEN.

Notes: Merged-By: maximecb <maximecb@ruby-lang.org>

2023-04-18  Update RJIT to support newarray_send  (Aaron Patterson)

This also adds max / hash support.

Notes: Merged: https://github.com/ruby/ruby/pull/6090

2023-04-11  Move `catch_except_p` to `compile_data`  (eileencodes)

The `catch_except_p` flag is used for communicating between parent and child iseqs that a throw instruction was emitted. For example, if a child iseq contains a throw and the parent wants to catch it, we use this flag to tell the parent iseq that a throw instruction was emitted. The flag is only useful at compile time; it only impacts the compilation process, so it seems fine to move it from the iseq body to the compile_data struct.

Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>

Notes: Merged: https://github.com/ruby/ruby/pull/7652
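As an illustrative example (not from the commit): a `return` inside a block compiles to a `throw` instruction in the block's iseq, which is exactly the child-to-parent situation this flag communicates:

```ruby
# The block's iseq emits `throw` because `return` must unwind out of the
# `each` call back to the method frame; `disasm` prints nested iseqs too.
iseq = RubyVM::InstructionSequence.compile(<<~RUBY)
  def m
    [1].each { return 1 }
  end
RUBY
puts iseq.disasm  # the nested block iseq shows a `throw 1` instruction
```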

2023-04-07  Expose rb_sym_to_proc via RJIT  (Aaron Patterson)

This is needed for getblockparamproxy.

Notes: Merged: https://github.com/ruby/ruby/pull/7673

2023-04-04  [Feature #19579] Remove !USE_RVARGC code (#7655)  (Peter Zhu)

The Variable Width Allocation feature was turned on by default in Ruby 3.2. Since then, we haven't received bug reports or backport requests for the non-Variable Width Allocation code paths, so we assume that nobody is using them. We also don't plan on maintaining the non-Variable Width Allocation code, so we are removing it.

Notes: Merged-By: maximecb <maximecb@ruby-lang.org>

2023-04-04  RJIT: Add --rjit-verify-ctx option  (Takashi Kokubun)

2023-04-02  RJIT: Store type information in Context  (Takashi Kokubun)

2023-04-02  RJIT: Support entry with different PCs  (Takashi Kokubun)

2023-04-02  RJIT: Support has_opt ISEQs  (Takashi Kokubun)

2023-04-02  RJIT: Simplify cfunc implementation  (Takashi Kokubun)

2023-04-02  RJIT: Simplify invokesuper implementation  (Takashi Kokubun)

2023-04-02  RJIT: Group blockarg exit reasons  (Takashi Kokubun)

2023-04-02  RJIT: Support splat args  (Takashi Kokubun)

2023-04-02  RJIT: Update exit reasons  (Takashi Kokubun)

2023-04-01  Remove an unneeded function copy  (Takashi Kokubun)

2023-04-01  RJIT: Support rest args  (Takashi Kokubun)

2023-04-01  RJIT: Fix has_rest exit conditions  (Takashi Kokubun)

2023-04-01  RJIT: Remove unused counters  (Takashi Kokubun)

2023-04-01  RJIT: Start moving away from VM-like ISEQ handling  (Takashi Kokubun)

2023-03-26  RJIT: Implement leaf builtin call  (Takashi Kokubun)

2023-03-26  RJIT: Implement attr_writer  (Takashi Kokubun)