Lines Matching defs:bytecode
84 // Reset the bytecode age and OSR state (optimized to a single write).
95 // bytecode. If there is baseline code on the shared function info, converts an
97 // code. Otherwise execution continues with bytecode.
125 // Start with bytecode as there is no baseline code.
167 // Compute baseline pc for bytecode offset.
181 // not a valid bytecode offset.
198 // Get bytecode array from the stack frame.
230 // If the bytecode offset is kFunctionEntryOffset, get the start address of
231 // the first bytecode.
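The matches above (source lines 95-231) come from the baseline-or-interpreter entry path: if the shared function info carries baseline code, the interpreter frame is converted to a baseline frame at the pc matching the current bytecode offset; otherwise execution continues with bytecode, and the kFunctionEntryOffset sentinel is replaced by the start address of the first bytecode. A minimal C++ sketch of that decision, where the types, constant values, and BaselinePcForBytecodeOffset are assumed stand-ins rather than V8's real API:

    // Standalone sketch (not V8 code). Types, fields, and constant values are
    // assumptions standing in for the real SharedFunctionInfo and frame layout.
    #include <optional>

    constexpr int kFunctionEntryBytecodeOffset = -1;  // assumed sentinel value
    constexpr int kFirstBytecodeOffset = 0;           // assumed first offset

    struct SharedFunctionInfo {
      std::optional<const void*> baseline_code;  // present once baseline code exists
    };

    // Placeholder for the real bytecode-offset -> baseline-pc mapping.
    const void* BaselinePcForBytecodeOffset(const void* baseline_code, int offset);

    enum class Continuation { kBaseline, kBytecode };

    Continuation ChooseEntry(const SharedFunctionInfo& sfi, int& bytecode_offset) {
      // The function-entry sentinel is not a valid bytecode offset; continue at
      // the start address of the first bytecode instead.
      if (bytecode_offset == kFunctionEntryBytecodeOffset) {
        bytecode_offset = kFirstBytecodeOffset;
      }
      if (sfi.baseline_code) {
        // Convert the interpreter frame to a baseline frame and continue at
        // BaselinePcForBytecodeOffset(*sfi.baseline_code, bytecode_offset).
        return Continuation::kBaseline;
      }
      // Start with bytecode, as there is no baseline code.
      return Continuation::kBytecode;
    }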
405 // JavaScript frame. This is the case when OSR is triggered from bytecode.
722 // Underlying function needs to have bytecode available.
1206 // Advance the current bytecode offset. This simulates what all bytecode
1208 // label if the bytecode (without prefix) is a return bytecode. Will not advance
1209 // the bytecode offset if the current bytecode is a JumpLoop, instead just
1210 // re-executing the JumpLoop to jump to the correct bytecode.
1214 Register bytecode, Register scratch1,
1217 Register scratch3 = bytecode;
1219 // The bytecode offset value will be increased by one in wide and extra wide
1220 // cases. In the case of having a wide or extra wide JumpLoop bytecode, we
1221 // will restore the original bytecode. In order to simplify the code, we have
1225 bytecode, original_bytecode_offset));
1230 // Check if the bytecode is a Wide or ExtraWide prefix bytecode.
1237 __ cmpi(bytecode, Operand(0x3));
1239 __ andi(r0, bytecode, Operand(0x1));
1242 // Load the next bytecode and update table to the wide scaled table.
1244 __ lbzx(bytecode, MemOperand(bytecode_array, bytecode_offset));
1250 // Load the next bytecode and update table to the extra wide scaled table.
1252 __ lbzx(bytecode, MemOperand(bytecode_array, bytecode_offset));
1256 // Load the size of the current bytecode.
1259 // Bailout to the return label if this is a return bytecode.
1261 __ cmpi(bytecode, \
1270 __ cmpi(bytecode,
1274 // increased it to skip the wide / extra-wide prefix bytecode.
1279 // Otherwise, load the size of the current bytecode and advance the offset.
1280 __ lbzx(scratch3, MemOperand(bytecode_size_table, bytecode));
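Taken together, the matches at source lines 1206-1280 describe the offset-advance helper: skip a Wide or ExtraWide prefix and switch to the corresponding scaled size table, bail out to the return label for return bytecodes, leave the offset untouched for JumpLoop (which re-executes and jumps itself), and otherwise add the current bytecode's size. A minimal C++ sketch of that control flow, with made-up bytecode numbering and sizes (the real values come from Ignition's bytecode definitions):

    // Standalone sketch (not V8 code); bytecode values and sizes are invented.
    #include <cstdint>

    enum Bytecode : uint8_t {
      kWide = 0,       // prefix: operands twice as wide
      kExtraWide = 1,  // prefix: operands four times as wide
      kJumpLoop = 2,
      kReturn = 3,
      kAddSmi = 4,     // stand-in for an ordinary bytecode
    };

    // One row per operand scale: single, wide, extra wide (sizes are made up).
    constexpr uint8_t kSizeTable[3][5] = {
        {1, 1, 3, 1, 2},
        {1, 1, 5, 1, 3},
        {1, 1, 9, 1, 5},
    };

    // Returns the new offset, or -1 to signal "bail out to the return label".
    int AdvanceBytecodeOffsetOrReturn(const uint8_t* bytecodes, int offset) {
      int scale = 0;
      uint8_t bytecode = bytecodes[offset];
      // A Wide/ExtraWide prefix scales the next bytecode's operands: step past
      // the prefix and remember which scaled size table to use.
      if (bytecode == kWide || bytecode == kExtraWide) {
        scale = (bytecode == kWide) ? 1 : 2;
        ++offset;
        bytecode = bytecodes[offset];
      }
      if (bytecode == kReturn) return -1;  // return bytecode: bail out
      if (bytecode == kJumpLoop) {
        // Do not advance; re-executing the (possibly prefixed) JumpLoop jumps
        // to the correct bytecode itself, so undo the prefix skip.
        return offset - (scale != 0 ? 1 : 0);
      }
      // Otherwise advance by the size of the current bytecode at this scale.
      return offset + kSizeTable[scale][bytecode];
    }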
1392 // We'll use the bytecode for both code age/OSR resetting, and pushing onto
1401 // store the bytecode offset.
1483 // Get the bytecode array from the function object and load it into
1487 // Load original bytecode array or the debug copy.
1493 // The bytecode array could have been flushed from the shared function info,
1556 // Load initial bytecode offset.
1560 // Push bytecode array and Smi tagged bytecode array offset.
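The matches at source lines 1483-1560 cover fetching the bytecode array before the frame is built: load it through the function's shared function info (or the debugger's copy) and fall back to lazy compilation if it has been flushed. A minimal sketch under assumed types; none of these structs match V8's real object layout:

    // Standalone sketch (not V8 code); all types and fields are assumptions.
    struct BytecodeArray { /* bytecodes + header elided */ };

    struct SharedFunctionInfo {
      BytecodeArray* bytecode = nullptr;        // may be flushed back to null
      BytecodeArray* debug_bytecode = nullptr;  // debugger's instrumented copy
    };

    struct JSFunction { SharedFunctionInfo* shared = nullptr; };

    // Returns the array to interpret, or null meaning "compile lazily first".
    BytecodeArray* GetBytecodeArrayForEntry(const JSFunction& function) {
      SharedFunctionInfo* sfi = function.shared;
      // Prefer the debug copy if the debugger installed one.
      if (sfi->debug_bytecode != nullptr) return sfi->debug_bytecode;
      // The bytecode array could have been flushed from the shared function
      // info; the real trampoline would fall back to lazy compilation here.
      return sfi->bytecode;  // may be null
    }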
1590 // If the bytecode array has a valid incoming new target or generator object
1614 // Load the dispatch table into a register and dispatch to the bytecode
1615 // handler at the current bytecode offset.
1630 // Any returns to the entry trampoline are either due to the return bytecode
1633 // Get bytecode array and bytecode offset from the stack frame.
1640 // Either return, or advance to the next bytecode and dispatch.
1655 // Modify the bytecode offset in the stack to be kFunctionEntryBytecodeOffset
1664 // After the call, restore the bytecode array, bytecode offset and accumulator
1665 // registers again. Also, restore the bytecode offset in the stack to its
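The remaining matches in this range (source lines 1556-1665) outline the interpreter entry trampoline proper: push the bytecode array and a Smi-tagged bytecode offset into the frame, dispatch through the dispatch table at the current bytecode offset, and on return either leave the frame or advance and dispatch again. A toy, self-contained C++ model of that shape; the Smi tagging, frame layout, and handler table here are assumptions, not V8's:

    // Standalone sketch (not V8 code) of a table-driven dispatch loop over a
    // frame that stores its bytecode offset Smi-tagged.
    #include <cstdint>
    #include <cstdio>

    constexpr intptr_t SmiTag(intptr_t v) { return v << 1; }    // assumed 1-bit tag
    constexpr intptr_t SmiUntag(intptr_t v) { return v >> 1; }

    struct InterpreterFrame {
      const uint8_t* bytecode_array;  // pushed when the frame is built
      intptr_t smi_bytecode_offset;   // stored Smi-tagged, as in the frame slot
      bool returned = false;          // set by the return handler
    };

    using Handler = void (*)(InterpreterFrame&);

    void HandleNop(InterpreterFrame& f) {
      // An ordinary handler performs its operation, then advances the offset
      // so the next dispatch picks up the following bytecode.
      f.smi_bytecode_offset = SmiTag(SmiUntag(f.smi_bytecode_offset) + 1);
    }
    void HandleReturn(InterpreterFrame& f) { f.returned = true; }

    // Dispatch table indexed by bytecode value (0 = Nop, 1 = Return here).
    constexpr Handler kDispatchTable[] = {HandleNop, HandleReturn};

    int main() {
      const uint8_t bytecodes[] = {0, 0, 1};  // Nop, Nop, Return
      InterpreterFrame frame{bytecodes, SmiTag(0)};
      // Dispatch to the handler at the current bytecode offset; each handler
      // either returns or advances the offset for the next dispatch.
      while (!frame.returned) {
        uint8_t bc = frame.bytecode_array[SmiUntag(frame.smi_bytecode_offset)];
        kDispatchTable[bc](frame);
      }
      std::printf("returned at offset %ld\n",
                  static_cast<long>(SmiUntag(frame.smi_bytecode_offset)));
    }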
1867 // Get the bytecode array pointer from the frame.
1883 // Get the target bytecode offset from the frame.
1898 // Dispatch to the target bytecode.
1910 // Get bytecode array and bytecode offset from the stack frame.
1923 // Load the current bytecode.
1927 // Advance to the next bytecode.
1934 // Convert new bytecode offset to a Smi and save in the stack frame.
1944 // not a valid bytecode offset. Detect this case and advance to the first
1945 // actual bytecode.
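Source lines 1910-1945 describe the enter-at-next-bytecode path: read the Smi-tagged offset out of the frame, advance it past the current bytecode, write it back as a Smi, and treat the function-entry sentinel as "start at the first real bytecode" since it is not a valid offset to advance from. A small sketch under the same assumed constants and tagging as above:

    // Standalone sketch (not V8 code); sentinel value, tagging, and the size
    // table are assumptions. `frame_offset_slot` stands in for the frame slot
    // that holds the Smi-tagged bytecode offset.
    #include <cstdint>

    constexpr intptr_t kFunctionEntryBytecodeOffset = -1;  // assumed sentinel
    constexpr intptr_t kFirstBytecodeOffset = 0;           // assumed first offset
    constexpr intptr_t SmiTag(intptr_t v) { return v << 1; }
    constexpr intptr_t SmiUntag(intptr_t v) { return v >> 1; }

    void EnterAtNextBytecode(const uint8_t* bytecode_array,
                             intptr_t* frame_offset_slot,
                             const uint8_t* bytecode_sizes) {
      intptr_t offset = SmiUntag(*frame_offset_slot);
      if (offset == kFunctionEntryBytecodeOffset) {
        // The sentinel is not a valid bytecode offset: continue at the first
        // actual bytecode instead of trying to advance past it.
        offset = kFirstBytecodeOffset;
      } else {
        uint8_t current = bytecode_array[offset];  // load the current bytecode
        offset += bytecode_sizes[current];         // advance to the next one
      }
      *frame_offset_slot = SmiTag(offset);         // save back as a Smi
    }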