
ruby-changes:73301

From: Kevin <ko1@a...>
Date: Tue, 30 Aug 2022 01:09:58 +0900 (JST)
Subject: [ruby-changes:73301] b00606eb64 (master): Even more prep for instruction enum (https://github.com/Shopify/ruby/pull/413)

https://git.ruby-lang.org/ruby.git/commit/?id=b00606eb64

From b00606eb644e4ffb42b9267f7d81b352845a29ae Mon Sep 17 00:00:00 2001
From: Kevin Newton <kddnewton@g...>
Date: Wed, 17 Aug 2022 16:08:41 -0400
Subject: Even more prep for instruction enum
 (https://github.com/Shopify/ruby/pull/413)

* Mutate in place for register allocation

Currently, during register allocation, we allocate a new instruction
every time: we first split the instruction into its component parts,
map the operands and the output, and then push all of those parts
onto the new assembler.

Since we don't need the old instruction, we can mutate the existing
one in place. While that isn't a big win on its own, it matches much
more closely what we'll have to do once the instruction switches from
a struct to an enum: an instruction that knows its own shape can
modify itself far more easily than we can reconstruct and push a new
instruction that very closely matches it.
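A minimal sketch of the in-place idea, using simplified stand-in
types (this `Opnd`/`Insn` and `alloc_regs_in_place` are illustrative
only, not YJIT's real definitions):

```rust
// Instead of destructuring an instruction and pushing a rebuilt copy,
// the register-allocation pass can rewrite the operands of the
// instruction it already owns.

#[derive(Debug, Clone, Copy, PartialEq)]
enum Opnd {
    InsnOut(usize), // output of an earlier instruction
    Reg(u8),        // allocated hardware register
}

struct Insn {
    opnds: Vec<Opnd>,
}

// Map virtual outputs to registers by mutating the existing
// instruction rather than building a new one.
fn alloc_regs_in_place(insn: &mut Insn, assigned: &[u8]) {
    for opnd in insn.opnds.iter_mut() {
        if let Opnd::InsnOut(idx) = *opnd {
            *opnd = Opnd::Reg(assigned[idx]);
        }
    }
}

fn main() {
    let mut insn = Insn { opnds: vec![Opnd::InsnOut(0), Opnd::Reg(3)] };
    alloc_regs_in_place(&mut insn, &[11]);
    println!("{:?}", insn.opnds); // [Reg(11), Reg(3)]
}
```

Because the loop rewrites operands through `iter_mut`, no second
instruction or operand vector is ever allocated.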

* Mutate in place for arm64 split

When we're splitting instructions for the arm64 backend, we map any
Opnd::Value operands that a given instruction carries. We can do this
in place on the existing operands instead of allocating a new vector
each time. This also enables us to pattern match against the entire
instruction instead of just the opcode, which is much closer to
matching against an enum.
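A sketch of that in-place split, again with simplified stand-ins:
`special_const_p` here is a fake predicate (the real code asks the
Ruby runtime), and the `loads` vector stands in for the new assembler
that the real pass pushes Load instructions onto:

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum Opnd {
    Value(u64),     // heap object or special constant
    UImm(u64),      // plain unsigned immediate
    InsnOut(usize), // result of an emitted Load
}

struct Insn {
    is_load: bool,
    opnds: Vec<Opnd>,
}

// Fake stand-in: pretend odd values are special constants.
fn special_const_p(v: u64) -> bool {
    v & 1 == 1
}

// Rewrite Value operands in place instead of collecting a new
// Vec<Opnd> for every instruction.
fn split_value_opnds(insn: &mut Insn, loads: &mut Vec<Opnd>) {
    let is_load = insn.is_load;
    for opnd in insn.opnds.iter_mut() {
        if let Opnd::Value(v) = *opnd {
            if special_const_p(v) {
                *opnd = Opnd::UImm(v);
            } else if !is_load {
                loads.push(*opnd);
                *opnd = Opnd::InsnOut(loads.len() - 1);
            }
        }
    }
}

fn main() {
    let mut insn = Insn { is_load: false, opnds: vec![Opnd::Value(5), Opnd::Value(8)] };
    let mut loads = Vec::new();
    split_value_opnds(&mut insn, &mut loads);
    println!("{:?} with loads {:?}", insn.opnds, loads);
}
```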

* Match against entire instruction in arm64_emit

Instead of matching against the opcode and then reaching into the
various fields of the instruction when emitting machine code for
arm64, we now match against the entire instruction. This makes it
much closer to what will happen when we switch it over to being an
enum.
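The two matching styles can be contrasted with a toy `Insn` (these
types and emit functions are simplified illustrations, not YJIT's
real backend API). Matching the whole struct binds the fields in the
pattern itself, exactly as a match on a future
`enum Insn { Add(..), .. }` would:

```rust
#[derive(Debug, Clone, Copy)]
enum Op { Add, Ret }

#[derive(Debug, Clone, Copy)]
enum Opnd { UImm(u64) }

struct Insn {
    op: Op,
    opnds: Vec<Opnd>,
}

// Old style: match the opcode, then reach back into insn for fields.
fn emit_by_opcode(insn: &Insn) -> String {
    match insn.op {
        Op::Add => format!("add {:?}, {:?}", insn.opnds[0], insn.opnds[1]),
        Op::Ret => "ret".to_string(),
    }
}

// New style: match the whole instruction, binding fields in the
// pattern. Only the pattern needs to change when Insn becomes an enum.
fn emit_by_insn(insn: &Insn) -> String {
    match insn {
        Insn { op: Op::Add, opnds, .. } => format!("add {:?}, {:?}", opnds[0], opnds[1]),
        Insn { op: Op::Ret, .. } => "ret".to_string(),
    }
}

fn main() {
    let add = Insn { op: Op::Add, opnds: vec![Opnd::UImm(1), Opnd::UImm(2)] };
    println!("{}", emit_by_insn(&add));
}
```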

* Match against entire instruction in x86_64 backend

When we're splitting or emitting code for x86_64, we should match
against the entire instruction instead of matching against just the
opcode. This gets us closer to matching against an enum instead of
a struct.

* Reuse instructions for arm64_split

Previously, the default behavior when splitting was to break the
instruction into its component parts and then reassemble them into a
new instruction. Instead, we can reuse the existing instruction.
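The difference in the default arm can be sketched like this (toy
types; `push_insn`/`push_insn_parts` stand in for the real Assembler
methods and are not YJIT's exact signatures):

```rust
#[derive(Debug, PartialEq)]
struct Insn {
    op: u8,
    opnds: Vec<u64>,
}

#[derive(Default)]
struct Assembler {
    insns: Vec<Insn>,
}

impl Assembler {
    // Old default arm: rebuild the instruction from its parts.
    fn push_insn_parts(&mut self, op: u8, opnds: Vec<u64>) {
        self.insns.push(Insn { op, opnds });
    }

    // New default arm: take ownership of the existing instruction
    // and push it through unchanged.
    fn push_insn(&mut self, insn: Insn) {
        self.insns.push(insn);
    }
}

fn main() {
    let mut asm = Assembler::default();
    asm.push_insn(Insn { op: 1, opnds: vec![2, 3] });
    println!("{:?}", asm.insns);
}
```

Both paths end with the same instruction in the assembler; the second
just skips the destructure-and-rebuild step.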
---
 yjit/src/backend/arm64/mod.rs  | 264 +++++++++++++++++-----------------
 yjit/src/backend/ir.rs         | 132 +++++++++--------
 yjit/src/backend/x86_64/mod.rs | 311 ++++++++++++++++++++++-------------------
 3 files changed, 367 insertions(+), 340 deletions(-)

diff --git a/yjit/src/backend/arm64/mod.rs b/yjit/src/backend/arm64/mod.rs
index d2693fee32..501e0a6138 100644
--- a/yjit/src/backend/arm64/mod.rs
+++ b/yjit/src/backend/arm64/mod.rs
@@ -186,29 +186,27 @@ impl Assembler https://github.com/ruby/ruby/blob/trunk/yjit/src/backend/arm64/mod.rs#L186
         let asm = &mut asm_local;
         let mut iterator = self.into_draining_iter();
 
-        while let Some((index, insn)) = iterator.next_mapped() {
+        while let Some((index, mut insn)) = iterator.next_mapped() {
             // Here we're going to map the operands of the instruction to load
             // any Opnd::Value operands into registers if they are heap objects
             // such that only the Op::Load instruction needs to handle that
             // case. If the values aren't heap objects then we'll treat them as
             // if they were just unsigned integer.
-            let opnds: Vec<Opnd> = insn.opnds.into_iter().map(|opnd| {
+            for opnd in &mut insn.opnds {
                 match opnd {
                     Opnd::Value(value) => {
                         if value.special_const_p() {
-                            Opnd::UImm(value.as_u64())
-                        } else if insn.op == Op::Load {
-                            opnd
-                        } else {
-                            asm.load(opnd)
+                            *opnd = Opnd::UImm(value.as_u64());
+                        } else if insn.op != Op::Load {
+                            *opnd = asm.load(*opnd);
                         }
                     },
-                    _ => opnd
-                }
-            }).collect();
+                    _ => {}
+                };
+            }
 
-            match insn.op {
-                Op::Add => {
+            match insn {
+                Insn { op: Op::Add, opnds, .. } => {
                     match (opnds[0], opnds[1]) {
                         (Opnd::Reg(_) | Opnd::InsnOut { .. }, Opnd::Reg(_) | Opnd::InsnOut { .. }) => {
                             asm.add(opnds[0], opnds[1]);
@@ -225,24 +223,24 @@ impl Assembler https://github.com/ruby/ruby/blob/trunk/yjit/src/backend/arm64/mod.rs#L223
                         }
                     }
                 },
-                Op::And | Op::Or | Op::Xor => {
+                Insn { op: Op::And | Op::Or | Op::Xor, opnds, target, text, pos_marker, .. } => {
                     match (opnds[0], opnds[1]) {
                         (Opnd::Reg(_), Opnd::Reg(_)) => {
-                            asm.push_insn_parts(insn.op, vec![opnds[0], opnds[1]], insn.target, insn.text, insn.pos_marker);
+                            asm.push_insn_parts(insn.op, vec![opnds[0], opnds[1]], target, text, pos_marker);
                         },
                         (reg_opnd @ Opnd::Reg(_), other_opnd) |
                         (other_opnd, reg_opnd @ Opnd::Reg(_)) => {
                             let opnd1 = split_bitmask_immediate(asm, other_opnd);
-                            asm.push_insn_parts(insn.op, vec![reg_opnd, opnd1], insn.target, insn.text, insn.pos_marker);
+                            asm.push_insn_parts(insn.op, vec![reg_opnd, opnd1], target, text, pos_marker);
                         },
                         _ => {
                             let opnd0 = split_load_operand(asm, opnds[0]);
                             let opnd1 = split_bitmask_immediate(asm, opnds[1]);
-                            asm.push_insn_parts(insn.op, vec![opnd0, opnd1], insn.target, insn.text, insn.pos_marker);
+                            asm.push_insn_parts(insn.op, vec![opnd0, opnd1], target, text, pos_marker);
                         }
                     }
                 },
-                Op::CCall => {
+                Insn { op: Op::CCall, opnds, target, .. } => {
                     assert!(opnds.len() <= C_ARG_OPNDS.len());
 
                     // For each of the operands we're going to first load them
@@ -257,9 +255,9 @@ impl Assembler https://github.com/ruby/ruby/blob/trunk/yjit/src/backend/arm64/mod.rs#L255
 
                     // Now we push the CCall without any arguments so that it
                     // just performs the call.
-                    asm.ccall(insn.target.unwrap().unwrap_fun_ptr(), vec![]);
+                    asm.ccall(target.unwrap().unwrap_fun_ptr(), vec![]);
                 },
-                Op::Cmp => {
+                Insn { op: Op::Cmp, opnds, .. } => {
                     let opnd0 = match opnds[0] {
                         Opnd::Reg(_) | Opnd::InsnOut { .. } => opnds[0],
                         _ => split_load_operand(asm, opnds[0])
@@ -268,15 +266,14 @@ impl Assembler https://github.com/ruby/ruby/blob/trunk/yjit/src/backend/arm64/mod.rs#L266
                     let opnd1 = split_shifted_immediate(asm, opnds[1]);
                     asm.cmp(opnd0, opnd1);
                 },
-                Op::CRet => {
+                Insn { op: Op::CRet, opnds, .. } => {
                     if opnds[0] != Opnd::Reg(C_RET_REG) {
                         let value = split_load_operand(asm, opnds[0]);
                         asm.mov(C_RET_OPND, value);
                     }
                     asm.cret(C_RET_OPND);
                 },
-                Op::CSelZ | Op::CSelNZ | Op::CSelE | Op::CSelNE |
-                Op::CSelL | Op::CSelLE | Op::CSelG | Op::CSelGE => {
+                Insn { op: Op::CSelZ | Op::CSelNZ | Op::CSelE | Op::CSelNE | Op::CSelL | Op::CSelLE | Op::CSelG | Op::CSelGE, opnds, target, text, pos_marker, .. } => {
                     let new_opnds = opnds.into_iter().map(|opnd| {
                         match opnd {
                             Opnd::Reg(_) | Opnd::InsnOut { .. } => opnd,
@@ -284,9 +281,9 @@ impl Assembler https://github.com/ruby/ruby/blob/trunk/yjit/src/backend/arm64/mod.rs#L281
                         }
                     }).collect();
 
-                    asm.push_insn_parts(insn.op, new_opnds, insn.target, insn.text, insn.pos_marker);
+                    asm.push_insn_parts(insn.op, new_opnds, target, text, pos_marker);
                 },
-                Op::IncrCounter => {
+                Insn { op: Op::IncrCounter, opnds, .. } => {
                     // We'll use LDADD later which only works with registers
                     // ... Load pointer into register
                     let counter_addr = split_lea_operand(asm, opnds[0]);
@@ -299,7 +296,7 @@ impl Assembler https://github.com/ruby/ruby/blob/trunk/yjit/src/backend/arm64/mod.rs#L296
 
                     asm.incr_counter(counter_addr, addend);
                 },
-                Op::JmpOpnd => {
+                Insn { op: Op::JmpOpnd, opnds, .. } => {
                     if let Opnd::Mem(_) = opnds[0] {
                         let opnd0 = split_load_operand(asm, opnds[0]);
                         asm.jmp_opnd(opnd0);
@@ -307,10 +304,10 @@ impl Assembler https://github.com/ruby/ruby/blob/trunk/yjit/src/backend/arm64/mod.rs#L304
                         asm.jmp_opnd(opnds[0]);
                     }
                 },
-                Op::Load => {
+                Insn { op: Op::Load, opnds, .. } => {
                     split_load_operand(asm, opnds[0]);
                 },
-                Op::LoadSExt => {
+                Insn { op: Op::LoadSExt, opnds, .. } => {
                     match opnds[0] {
                         // We only want to sign extend if the operand is a
                         // regist (... truncated)

--
ML: ruby-changes@q...
Info: http://www.atdot.net/~ko1/quickml/
