|
HashSet::clear() doesn't deallocate the backing buffer or shrink the
capacity. Replace it with a 0-capacity set instead so we reclaim some
memory on each code GC.
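As a minimal Rust sketch of the difference (not YJIT's actual code): `clear()`
keeps the allocation around, while swapping in a fresh set lets the old buffer
be freed.
```rust
use std::collections::HashSet;

fn main() {
    let mut seen: HashSet<u64> = (0..10_000).collect();

    // clear() removes the elements but keeps the backing buffer allocated.
    seen.clear();
    assert!(seen.capacity() >= 10_000);

    // Replacing the set with a fresh, 0-capacity one drops the old buffer.
    seen = HashSet::new();
    assert_eq!(seen.capacity(), 0);
}
```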
Notes:
Merged: https://github.com/ruby/ruby/pull/6833
|
|
Notes:
Merged-By: maximecb <maximecb@ruby-lang.org>
|
|
YJIT: x86_64: Fix cmp with number where sign bit is set
Before this commit, we were unconditionally treating unsigned ints as
signed ints when counting the number of bits required to represent
the immediate in machine code. When the size of the immediate matches
the size of the other operand, no sign extension happens, so this was
incorrect. `asm.cmp(opnd64, 0x8000_0000)` panicked even though it's
encodable as `CMP r/m32, imm32`. Large shape ids were impacted by this
issue.
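A small Rust sketch of the distinction (the helper names are hypothetical, not
the actual backend API): an immediate with the sign bit set fits in an unsigned
32-bit field even though a signed count says it needs 33 bits.
```rust
fn unsigned_bits_needed(value: u64) -> u32 {
    64 - value.leading_zeros()
}

fn signed_bits_needed(value: i64) -> u32 {
    // Bits needed to hold the value in two's complement form.
    if value >= 0 { 65 - value.leading_zeros() } else { 65 - value.leading_ones() }
}

fn main() {
    let imm: u64 = 0x8000_0000;
    assert_eq!(unsigned_bits_needed(imm), 32);      // encodable as CMP r/m32, imm32
    assert_eq!(signed_bits_needed(imm as i64), 33); // the signed count overflows 32 bits
}
```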
Co-Authored-By: Aaron Patterson <tenderlove@ruby-lang.org>
Co-Authored-By: Alan Wu <alanwu@ruby-lang.org>
Co-authored-by: Aaron Patterson <tenderlove@ruby-lang.org>
Co-authored-by: Alan Wu <alanwu@ruby-lang.org>
Notes:
Merged-By: maximecb <maximecb@ruby-lang.org>
|
|
This commit changes the shape id comparisons to use a 32-bit comparison
rather than a 64-bit one. That means we don't need to load the shape id
into a register on x86 machines.
Given the following program:
```ruby
class Foo
def initialize
@foo = 1
@bar = 1
end
def read
[@foo, @bar]
end
end
foo = Foo.new
foo.read
foo.read
foo.read
foo.read
foo.read
puts RubyVM::YJIT.disasm(Foo.instance_method(:read))
```
The machine code we generated _before_ this change is like this:
```
== BLOCK 1/4, ISEQ RANGE [0,3), 65 bytes ======================
# getinstancevariable
0x559a18623023: mov rax, qword ptr [r13 + 0x18]
# guard object is heap
0x559a18623027: test al, 7
0x559a1862302a: jne 0x559a1862502d
0x559a18623030: cmp rax, 4
0x559a18623034: jbe 0x559a1862502d
# guard shape, embedded, and T_OBJECT
0x559a1862303a: mov rcx, qword ptr [rax]
0x559a1862303d: movabs r11, 0xffff00000000201f
0x559a18623047: and rcx, r11
0x559a1862304a: movabs r11, 0xb000000002001
0x559a18623054: cmp rcx, r11
0x559a18623057: jne 0x559a18625046
0x559a1862305d: mov rax, qword ptr [rax + 0x18]
0x559a18623061: mov qword ptr [rbx], rax
== BLOCK 2/4, ISEQ RANGE [3,6), 0 bytes =======================
== BLOCK 3/4, ISEQ RANGE [3,6), 47 bytes ======================
# gen_direct_jmp: fallthrough
# getinstancevariable
# regenerate_branch
# getinstancevariable
# regenerate_branch
0x559a18623064: mov rax, qword ptr [r13 + 0x18]
# guard shape, embedded, and T_OBJECT
0x559a18623068: mov rcx, qword ptr [rax]
0x559a1862306b: movabs r11, 0xffff00000000201f
0x559a18623075: and rcx, r11
0x559a18623078: movabs r11, 0xb000000002001
0x559a18623082: cmp rcx, r11
0x559a18623085: jne 0x559a18625099
0x559a1862308b: mov rax, qword ptr [rax + 0x20]
0x559a1862308f: mov qword ptr [rbx + 8], rax
```
After this change, it's like this:
```
== BLOCK 1/4, ISEQ RANGE [0,3), 41 bytes ======================
# getinstancevariable
0x5560c986d023: mov rax, qword ptr [r13 + 0x18]
# guard object is heap
0x5560c986d027: test al, 7
0x5560c986d02a: jne 0x5560c986f02d
0x5560c986d030: cmp rax, 4
0x5560c986d034: jbe 0x5560c986f02d
# guard shape
0x5560c986d03a: cmp word ptr [rax + 6], 0x19
0x5560c986d03f: jne 0x5560c986f046
0x5560c986d045: mov rax, qword ptr [rax + 0x10]
0x5560c986d049: mov qword ptr [rbx], rax
== BLOCK 2/4, ISEQ RANGE [3,6), 0 bytes =======================
== BLOCK 3/4, ISEQ RANGE [3,6), 23 bytes ======================
# gen_direct_jmp: fallthrough
# getinstancevariable
# regenerate_branch
# getinstancevariable
# regenerate_branch
0x5560c986d04c: mov rax, qword ptr [r13 + 0x18]
# guard shape
0x5560c986d050: cmp word ptr [rax + 6], 0x19
0x5560c986d055: jne 0x5560c986f099
0x5560c986d05b: mov rax, qword ptr [rax + 0x18]
0x5560c986d05f: mov qword ptr [rbx + 8], rax
```
The first ivar read is a bit more complex, but the second ivar read is
much simpler. I think eventually we could teach the context about the
shape, then emit only one shape guard.
Notes:
Merged: https://github.com/ruby/ruby/pull/6737
|
|
* YJIT: Always encode Opnd::Value in 64 bits on x86_64 for GC offsets
Co-authored-by: Alan Wu <alansi.xingwu@shopify.com>
* Introduce heap_object_p
* Leave original mov intact
* Remove unneeded branches
* Add a test for movabs
Co-authored-by: Alan Wu <alansi.xingwu@shopify.com>
Notes:
Merged-By: k0kubun <takashikkbn@gmail.com>
|
|
Notes:
Merged-By: k0kubun <takashikkbn@gmail.com>
|
|
When RUBY_DEBUG is enabled, shape ids are 16 bits. I would like to do
16-bit comparisons, so I need to load halfwords sometimes. This commit
adds LDURH so that I can load halfwords.
https://developer.arm.com/documentation/ddi0596/2021-12/Base-Instructions/LDURH--Load-Register-Halfword--unscaled--?lang=en
I verified the bytes using clang:
```
$ cat asmthing.s
.global _start
.align 2
_start:
ldurh w10, [x1]
ldurh w10, [x1, #123]
$ as asmthing.s -o asmthing.o && objdump --disassemble asmthing.o
asmthing.o: file format mach-o arm64
Disassembly of section __TEXT,__text:
0000000000000000 <ltmp0>:
0: 2a 00 40 78 ldurh w10, [x1]
4: 2a b0 47 78 ldurh w10, [x1, #123]
```
Notes:
Merged: https://github.com/ruby/ruby/pull/6729
|
|
We switch to a new page when we detect dropped_bytes flipping from false
to true. Previously, when patching code for invalidation during code GC,
we started with the flag already set to true, so we failed to apply
patches that straddle pages. We would write out jumps halfway and then
stop, which left the code corrupted.
Reset the flag before patching so we patch across pages properly.
Notes:
Merged: https://github.com/ruby/ruby/pull/6686
|
|
Notes:
Merged-By: maximecb <maximecb@ruby-lang.org>
|
|
Notes:
Merged-By: maximecb <maximecb@ruby-lang.org>
|
|
Co-Authored-By: Alan Wu <alansi.xingwu@shopify.com>
Co-authored-by: Alan Wu <alansi.xingwu@shopify.com>
Notes:
Merged-By: maximecb <maximecb@ruby-lang.org>
|
|
Co-Authored-By: Alan Wu <alansi.xingwu@shopify.com>
Co-Authored-By: Maxime Chevalier-Boisvert <maxime.chevalierboisvert@shopify.com>
Co-authored-by: Alan Wu <alansi.xingwu@shopify.com>
Co-authored-by: Maxime Chevalier-Boisvert <maxime.chevalierboisvert@shopify.com>
Notes:
Merged-By: maximecb <maximecb@ruby-lang.org>
|
|
* YJIT: Add RubyVM::YJIT.code_gc
* Rename compiled_page_count to live_page_count
Notes:
Merged-By: maximecb <maximecb@ruby-lang.org>
|
|
when it fails to allocate a new page.
Co-authored-by: Alan Wu <alansi.xingwu@shopify.com>
Notes:
Merged-By: k0kubun <takashikkbn@gmail.com>
|
|
Previously, we found the current page by rounding the current pointer
down to the nearest multiple of the page size. This is incorrect because
pages are relative to the start of the address range we reserve. For
example, if the starting address is 12KiB modulo the 16KiB page size,
once we have more than 4KiB of code, calculating from the raw address
would incorrectly give us page 1 when we're actually still on page 0.
Previously, I could reproduce crashes with:
make btest RUN_OPTS=--yjit-code-page-size=32
on ARM64 macOS, where the system page size is 16KiB.
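Here is the arithmetic from the example as a small Rust sketch (the constants
follow the commit message; the calculation is illustrative, not the actual
YJIT code):
```rust
fn main() {
    let page_size: usize = 16 * 1024;
    let start_addr: usize = 0x10_3000;        // 12KiB modulo the 16KiB page size
    let current_addr = start_addr + 5 * 1024; // a bit more than 4KiB of code written

    // Rounding the raw address down to a page boundary: looks like we've
    // crossed into the next page relative to where we started.
    let wrong_page = current_addr / page_size - start_addr / page_size;
    // Measuring relative to the start of the reserved region: still page 0.
    let right_page = (current_addr - start_addr) / page_size;

    assert_eq!(wrong_page, 1);
    assert_eq!(right_page, 0);
}
```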
Notes:
Merged: https://github.com/ruby/ruby/pull/6607
Merged-By: XrXr
|
|
YJIT: Skip dumping code for the other cb on --yjit-dump-disasm
Notes:
Merged-By: maximecb <maximecb@ruby-lang.org>
|
|
|
|
Previously, enabling only "disasm" didn't actually build. Since these
two features are closely related and we don't really use one without the
other, let's simplify and merge the two features together.
Notes:
Merged-By: maximecb <maximecb@ruby-lang.org>
|
|
Notes:
Merged-By: maximecb <maximecb@ruby-lang.org>
|
|
Co-authored-by: Alan Wu <alansi.xingwu@shopify.com>
Co-authored-by: Maxime Chevalier-Boisvert <maxime.chevalierboisvert@shopify.com>
Notes:
Merged-By: k0kubun <takashikkbn@gmail.com>
|
|
Notes:
Merged-By: maximecb <maximecb@ruby-lang.org>
|
|
* fixes more clippy warnings
* Fix x86 c_callable to have doc_strings
Notes:
Merged-By: maximecb <maximecb@ruby-lang.org>
|
|
Notes:
Merged-By: maximecb <maximecb@ruby-lang.org>
|
|
This reverts commit 9a6803c90b817f70389cae10d60b50ad752da48f.
|
|
For the 32-bit forms of logical instructions such as AND, the N bit of
the bitmask immediate must be 0. We weren't respecting this constraint
previously and were silently emitting undefined instructions.
Check for this condition in the assembler and tweak the backend to
correctly detect whether a number can be encoded as an immediate in a
32-bit logical instruction. Due to the nature of the immediate encoding,
the same numeric value encodes differently depending on the size of
the register the instruction works on.
We currently don't have cases where we use 32-bit immediates, but we ran
into this encoding issue during development.
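As a minimal sketch of the constraint (the helper name is hypothetical; the
field layout follows the A64 encoding): the 32-bit form of a logical
instruction has sf = 0, and combining that with N = 1 is a reserved, i.e.
undefined, encoding.
```rust
fn logical_imm_fields_valid(sf: u32, n: u32) -> bool {
    // sf selects the 64-bit (1) or 32-bit (0) form; N is the top bit of the
    // bitmask immediate. sf == 0 with N == 1 is a reserved encoding.
    !(sf == 0 && n == 1)
}

fn main() {
    assert!(logical_imm_fields_valid(1, 1));  // 64-bit form may set N
    assert!(!logical_imm_fields_valid(0, 1)); // 32-bit form must keep N == 0
}
```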
Notes:
Merged-By: maximecb <maximecb@ruby-lang.org>
|
|
This reverts commit 68bc9e2e97d12f80df0d113e284864e225f771c2.
|
|
Notes:
Merged-By: maximecb <maximecb@ruby-lang.org>
|
|
Object Shapes is used for accessing instance variables and representing the
"frozenness" of objects. Object instances have a "shape" and the shape
represents some attributes of the object (currently which instance variables are
set and the "frozenness"). Shapes form a tree data structure, and when a new
instance variable is set on an object, that object "transitions" to a new shape
in the shape tree. Each shape has an ID that is used for caching. The shape
structure is independent of class, so objects of different types can have the
same shape.
For example:
```ruby
class Foo
def initialize
# Starts with shape id 0
@a = 1 # transitions to shape id 1
@b = 1 # transitions to shape id 2
end
end
class Bar
def initialize
# Starts with shape id 0
@a = 1 # transitions to shape id 1
@b = 1 # transitions to shape id 2
end
end
foo = Foo.new # `foo` has shape id 2
bar = Bar.new # `bar` has shape id 2
```
Both `foo` and `bar` instances have the same shape because they both set
instance variables of the same name in the same order.
This technique can help to improve inline cache hits as well as generate more
efficient machine code in JIT compilers.
This commit also adds some methods for debugging shapes on objects. See
`RubyVM::Shape` for more details.
For more context on Object Shapes, see [Feature: #18776]
Co-Authored-By: Aaron Patterson <tenderlove@ruby-lang.org>
Co-Authored-By: Eileen M. Uchitelle <eileencodes@gmail.com>
Co-Authored-By: John Hawthorn <john@hawthorn.email>
|
|
Add assertion wrt label names
Notes:
Merged-By: maximecb <maximecb@ruby-lang.org>
|
|
* Change IncrCounter lowering on AArch64
Previously we were using LDADDAL, which is not available on
Graviton 1 chips. Instead, we're going to use an exclusive
load/store sequence built from the LDAXR/STLXR instructions.
* Update yjit/src/backend/arm64/mod.rs
Co-authored-by: Maxime Chevalier-Boisvert <maximechevalierb@gmail.com>
Notes:
Merged-By: maximecb <maximecb@ruby-lang.org>
|
|
Revert "* expand tabs. [ci skip]"
This reverts commit 830b5b5c351c5c6efa5ad461ae4ec5085e5f0275.
Revert "This commit implements the Object Shapes technique in CRuby."
This reverts commit 9ddfd2ca004d1952be79cf1b84c52c79a55978f4.
|
|
Object Shapes is used for accessing instance variables and representing the
"frozenness" of objects. Object instances have a "shape" and the shape
represents some attributes of the object (currently which instance variables are
set and the "frozenness"). Shapes form a tree data structure, and when a new
instance variable is set on an object, that object "transitions" to a new shape
in the shape tree. Each shape has an ID that is used for caching. The shape
structure is independent of class, so objects of different types can have the
same shape.
For example:
```ruby
class Foo
def initialize
# Starts with shape id 0
@a = 1 # transitions to shape id 1
@b = 1 # transitions to shape id 2
end
end
class Bar
def initialize
# Starts with shape id 0
@a = 1 # transitions to shape id 1
@b = 1 # transitions to shape id 2
end
end
foo = Foo.new # `foo` has shape id 2
bar = Bar.new # `bar` has shape id 2
```
Both `foo` and `bar` instances have the same shape because they both set
instance variables of the same name in the same order.
This technique can help to improve inline cache hits as well as generate more
efficient machine code in JIT compilers.
This commit also adds some methods for debugging shapes on objects. See
`RubyVM::Shape` for more details.
For more context on Object Shapes, see [Feature: #18776]
Co-Authored-By: Aaron Patterson <tenderlove@ruby-lang.org>
Co-Authored-By: Eileen M. Uchitelle <eileencodes@gmail.com>
Co-Authored-By: John Hawthorn <john@hawthorn.email>
Notes:
Merged: https://github.com/ruby/ruby/pull/6386
|
|
* YJIT: Add Opnd#sub_opnd to use only 8 bits
* Add with_num_bits and let arm64_split use it
* Add another assertion to with_num_bits
* Use only with_num_bits
Notes:
Merged-By: maximecb <maximecb@ruby-lang.org>
|
|
* Introduce InstructionOffset for AArch64
There are a lot of instructions on AArch64 where we take an offset
from PC in terms of the number of instructions. This is for loading
a value relative to the PC or for jumping.
We were usually accepting an A64Opnd or an i32. It can get
confusing and inconsistent though because sometimes you would
divide by 4 to get the number of instructions or multiply by 4 to
get the number of bytes.
This commit adds a struct that wraps an i32 in order to keep all of
that logic in one place. It makes it much easier to read and reason
about how these offsets are getting used (see the sketch after this
list).
* Use b instruction when the offset fits on AArch64
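A hedged sketch of the wrapper (the method names are illustrative and may not
match the actual type): the multiply/divide-by-4 conversion lives in one place
instead of being scattered across call sites.
```rust
// Illustrative newtype for a PC-relative offset counted in instructions.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
struct InstructionOffset(i32);

impl InstructionOffset {
    /// Build an offset from a number of 32-bit instructions.
    fn from_insns(insns: i32) -> Self {
        InstructionOffset(insns)
    }

    /// Build an offset from a number of bytes, which must be instruction-aligned.
    fn from_bytes(bytes: i32) -> Self {
        assert_eq!(bytes % 4, 0, "byte offsets must be a multiple of 4");
        InstructionOffset(bytes / 4)
    }

    fn to_insns(self) -> i32 {
        self.0
    }

    fn to_bytes(self) -> i32 {
        self.0 * 4
    }
}

fn main() {
    let offset = InstructionOffset::from_bytes(16);
    assert_eq!(offset.to_insns(), 4);
    assert_eq!(offset.to_bytes(), 16);
    assert_eq!(offset, InstructionOffset::from_insns(4));
}
```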
Notes:
Merged-By: maximecb <maximecb@ruby-lang.org>
|
|
* Let --yjit-dump-disasm=all dump ocb code as well
* Use an enum instead
* Add a None Option to DumpDisasm (#444)
* Add a None Option to DumpDisasm
* Update yjit/src/asm/mod.rs
Co-authored-by: Maxime Chevalier-Boisvert <maximechevalierb@gmail.com>
* Fix a build failure
* Use only a single name
* Only None will be a disabled case
* Fix cargo test
* Fix --yjit-dump-disasm=all to print outlined cb
Co-authored-by: Jimmy Miller <jimmyhmiller@gmail.com>
Co-authored-by: Maxime Chevalier-Boisvert <maximechevalierb@gmail.com>
Notes:
Merged-By: k0kubun <takashikkbn@gmail.com>
|
|
* Better b.cond usage on AArch64
When we're lowering a conditional jump, we previously had a somewhat
complicated setup where we emitted a conditional jump to skip over an
unconditional jump that was the next instruction, then wrote out the
destination and branched through a register.
Now we use the b.cond instruction when the offset fits (not common,
but not unused either); when it doesn't, we emit the inverse condition
to jump past the code that loads the destination and branches through
a register.
* Added an inverse fn for Condition (#443)
Prevents the need to pass two params and potentially reduces errors.
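An illustrative sketch of the idea (the variant set is trimmed and the names
follow the A64 condition codes; the actual type differs): each condition knows
its own inverse, so callers no longer pass both and risk mismatching them.
```rust
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum Condition {
    Eq, Ne, Lt, Ge, Gt, Le,
}

impl Condition {
    // Return the condition that is true exactly when `self` is false.
    fn inverse(self) -> Condition {
        match self {
            Condition::Eq => Condition::Ne,
            Condition::Ne => Condition::Eq,
            Condition::Lt => Condition::Ge,
            Condition::Ge => Condition::Lt,
            Condition::Gt => Condition::Le,
            Condition::Le => Condition::Gt,
        }
    }
}

fn main() {
    assert_eq!(Condition::Eq.inverse(), Condition::Ne);
    assert_eq!(Condition::Lt.inverse(), Condition::Ge);
}
```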
Co-authored-by: Jimmy Miller <jimmyhmiller@jimmys-mbp.lan>
Co-authored-by: Maxime Chevalier-Boisvert <maximechevalierb@gmail.com>
Co-authored-by: Jimmy Miller <jimmyhmiller@jimmys-mbp.lan>
Notes:
Merged-By: maximecb <maximecb@ruby-lang.org>
|
|
There are many cases when encoding AArch64 instructions where we
need to represent an integer value with a custom fixed width. For
example, the offset for a B instruction is 26 bits, so we store an
i32 on the instruction struct and then mask it when we encode.
We've been doing this masking everywhere, which has worked, but
it's getting a bit copy-pasty all over the place. This commit
centralizes that logic to make sure we stay consistent.
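A minimal sketch of the centralized masking (the function name is
hypothetical): truncate a signed offset to an n-bit field, as needed for the
26-bit offset of a B instruction.
```rust
fn truncate_imm(imm: i32, num_bits: u32) -> u32 {
    // Keep only the low `num_bits` bits of the two's-complement pattern.
    let mask = (1u32 << num_bits) - 1;
    (imm as u32) & mask
}

fn main() {
    // A small negative offset keeps its two's-complement bit pattern,
    // truncated to the field width.
    assert_eq!(truncate_imm(-2, 26), 0x3FF_FFFE);
    // A positive offset is unchanged as long as it fits in the field.
    assert_eq!(truncate_imm(1024, 26), 1024);
}
```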
Notes:
Merged: https://github.com/ruby/ruby/pull/6289
|
|
Notes:
Merged: https://github.com/ruby/ruby/pull/6289
|
|
Notes:
Merged: https://github.com/ruby/ruby/pull/6289
|
|
(https://github.com/Shopify/ruby/pull/430)
* Add --yjit-dump-disasm to dump all compiled code
* Just use get_option
* Carve out disasm_from_addr
* Avoid push_str with format!
* Share the logic through asm.compile
* This seems to negatively impact the compilation speed
Notes:
Merged: https://github.com/ruby/ruby/pull/6289
|
|
* When we're storing an immediate 0 value at a memory address, we
can use STUR XZR, [Xd] instead of loading 0 into a register and
then storing that register.
* When we're moving 0 into an argument register, we can use
MOV Xd, XZR instead of loading the value into a register first.
* In the newarray instruction, we can skip looking at the stack at
all if the number of values we're using is 0.
Notes:
Merged: https://github.com/ruby/ruby/pull/6289
|
|
|
|
|
|
(https://github.com/Shopify/ruby/pull/395)
`YJIT.simulate_oom!` used to leave one byte of space in the code block,
so our test didn't expose a problem with asserting that the write
position is in bounds in `CodeBlock::set_pos`. We do the following when
patching code:
1. save current write position
2. seek to middle of the code block and patch
3. restore old write position
The bounds check fails on (3) when the code block is already filled up.
Leaving one byte of space also meant that when we write that byte, we
need to fill the entire code region with trapping instructions in
`VirtualMem`, which made the OOM tests unnecessarily slow.
Remove the incorrect bounds check and stop leaving space in the code
block when simulating OOM.
|
|
(https://github.com/Shopify/ruby/pull/382)
* LDR instruction for AArch64
* Split loads in arm64_split when memory address displacements do not fit
|
|
* Left and right shift for IR
* Update yjit/src/backend/x86_64/mod.rs
Co-authored-by: Alan Wu <XrXr@users.noreply.github.com>
Co-authored-by: Maxime Chevalier-Boisvert <maximechevalierb@gmail.com>
|
|
(https://github.com/Shopify/ruby/pull/342)
|
|
(https://github.com/Shopify/ruby/pull/344)
* A64: Fix off by one in offset encoding for BL
It's relative to the address of the instruction, not the end of it
(see the worked example after this list).
* A64: Fix off by one when encoding B
It's relative to the start of the instruction, not the end.
* A64: Add some tests for boundary offsets
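A worked example of the convention (a sketch, not the assembler's code): the
offset is measured from the address of the branch instruction itself, so a
branch to the very next instruction encodes 1, not 0.
```rust
fn b_offset_insns(branch_addr: u64, target_addr: u64) -> i64 {
    // B/BL encode (target - address of the branch itself) / 4.
    (target_addr as i64 - branch_addr as i64) / 4
}

fn main() {
    assert_eq!(b_offset_insns(0x1000, 0x1004), 1); // branch to the next instruction
    assert_eq!(b_offset_insns(0x1000, 0x1000), 0); // branch to itself
}
```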
|
|
* Fix conditional jumps to label
* Bitmask immediates cannot be u64::MAX
|
|
* Better splitting for Op::Add, Op::Sub, and Op::Cmp
* Split stores if the displacement is too large
* Use a shifted immediate argument
* Split all places where shifted immediates are used (see the sketch after this list)
* Add more tests to the cirrus workflow
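A hedged sketch of the shifted-immediate check (the helper name is
hypothetical): ADD/SUB (immediate) take a 12-bit value, optionally shifted
left by 12 bits, so anything else has to be split into a load plus a register
operand.
```rust
fn encodable_as_add_sub_imm(value: u64) -> bool {
    // Either a plain 12-bit immediate, or a 12-bit immediate shifted left by 12.
    value < (1 << 12) || ((value & 0xfff) == 0 && value < (1 << 24))
}

fn main() {
    assert!(encodable_as_add_sub_imm(4095));      // plain 12-bit immediate
    assert!(encodable_as_add_sub_imm(0x7ff000));  // 12-bit immediate, LSL #12
    assert!(!encodable_as_add_sub_imm(0x123456)); // needs a split
}
```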
|