Previously, YJIT returned truthy for the block given query at the top
level. That's incorrect because the top level script never receives a
block, and `yield` is a syntax error there.
Inside methods, the number of hops to get from `iseq` to
`iseq->body->local_iseq` is the same as the number of
`VM_ENV_PREV_EP(ep)` hops to get to an environment with
`VM_ENV_FLAG_LOCAL`. YJIT and the interpreter both rely on this as can
be seen in get_lvar_level(). However, this identity does not hold for
the top level frame because of vm_set_eval_stack(), which sets up
`TOPLEVEL_BINDING`.
Since only methods can take a block that `yield` goes to, have ISEQs
that are the child of a non-method ISEQ return falsy for the block given
query. This fixes the issue for the top level script and is an
optimization for non-method contexts such as inside `ISEQ_TYPE_CLASS`.
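For illustration, assuming the query in question is the one backing `defined?(yield)` (the commit doesn't include a reproduction), the intended behavior looks like this:
```ruby
def with_block
  # Inside a method, the query reflects whether a block was passed.
  defined?(yield) ? "block given" : "no block"
end

p with_block { } # => "block given"
p with_block     # => "no block"

# At the top level no block can ever be given, so the query should be falsy.
p defined?(yield) # => nil
```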
The test was added in #5221 4 years ago but:
1. The insn it targets was removed in 2022 in #6187
2. The YJIT API `blocks_for` appears to have been dropped in 2022 when YJIT switched to Rust in #5826
So this test has not been run in more than 3 years and can't be run
anymore. I think we can remove it.
* Added `Ractor::Port`
  * `Ractor::Port#receive` (supports multiple threads)
  * `Ractor::Port#close`
  * `Ractor::Port#closed?`
* Added some methods
  * `Ractor#join`
  * `Ractor#value`
  * `Ractor#monitor`
  * `Ractor#unmonitor`
* Removed some methods
  * `Ractor#take`
  * `Ractor.yield`
* Changed the spec
  * `Ractor.select`
You can wait for multiple sequences of messages with `Ractor::Port`.
```ruby
ports = 3.times.map{ Ractor::Port.new }
ports.map.with_index do |port, ri|
  Ractor.new port, ri do |port, ri|
    3.times{|i| port << "r#{ri}-#{i}"}
  end
end
p ports.each{|port| pp 3.times.map{port.receive}}
```
In this example, we use 3 ports, and 3 Ractors send messages to them respectively.
We can receive a series of messages from each port.
You can use `Ractor#value` to get the last value of a Ractor's block:
```ruby
result = Ractor.new do
heavy_task()
end.value
```
You can wait for the termination of a Ractor with `Ractor#join` like this:
```ruby
Ractor.new do
some_task()
end.join
```
`#value` and `#join` are similar to `Thread#value` and `Thread#join`.
To implement `#join`, `Ractor#monitor` and `Ractor#unmonitor` are introduced.
This commit changes the `Ractor.select()` method.
It now only accepts ports or Ractors, and returns when a port receives a message or a Ractor terminates.
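A rough sketch of the new shape; the exact return value format is an assumption here (taken to be a `[ready_object, value]` pair), not something spelled out in this commit message:
```ruby
ports = [Ractor::Port.new, Ractor::Port.new]

ractors = ports.map.with_index do |port, i|
  Ractor.new(port, i) { |p, i| p << "hello from r#{i}" }
end

# Waits until any of the given ports has a message (or a given Ractor terminates).
2.times do
  ready_port, message = Ractor.select(*ports)
  pp [ready_port, message]
end

ractors.each(&:join)
```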
We remove `Ractor.yield` and `Ractor#take` because:
* `Ractor::Port` supports most of the same use cases in a simpler manner.
* Removing them significantly simplifies the code.
We also change the internal thread scheduler code (thread_pthread.c):
* During barrier synchronization, we keep the `ractor_sched` lock to avoid deadlocks. This lock is released by `rb_ractor_sched_barrier_end()`, which is called at the end of operations that require the barrier.
* Fix potential deadlock issues by checking interrupts just before setting UBF.
https://bugs.ruby-lang.org/issues/21262
Avoid generating an infinite loop in the case where:
1. Block `first` is adjacent to block `second`, and the branch from `first` to
`second` is a fallthrough, and
2. Block `second` immediately exits to the interpreter, and
3. Block `second` is invalidated and YJIT is OOM
While pondering how to fix this, I think I've stumbled on another related edge case:
1. Block `incoming_one` and `incoming_two` both branch to block `second`. Block
`incoming_one` has a fallthrough
2. Block `second` immediately exits to the interpreter (so it starts with its exit)
3. When Block `second` is invalidated, the incoming fallthrough branch from
`incoming_one` might be rewritten first, which overwrites the start of block
`second` with a jump to a new branch stub.
4. YJIT runs out of memory
5. The incoming branch from `incoming_two` is then rewritten, but because we're
OOM we can't generate a new stub, so we use `second`'s exit as the branch
target. However `second`'s exit was already overwritten with a jump to the
branch stub for `incoming_one`, so `incoming_two` will end up jumping to
`incoming_one`'s branch stub.
Fixes [Bug #21257]
* YJIT: Fix indentation [ci skip]
Fixes: cdf33ed5f3
* YJIT: Initialize locals in ISeqs defined with `...`
Previously, callers of forwardable ISeqs moved the stack pointer up
without writing to the stack. If there happens to be a stale value in
the area skipped over, it could crash due to "try to mark T_NONE". Also,
the uninitialized local variables were observable through `binding`.
Initialize the locals to nil.
[Bug #21021]
As evidenced by the crash reported in [Bug #20997], the C replacement codegen functions aren't written to handle block arguments (nor should they be, since the extra code and complexity would defeat the optimization). Filter out call sites with VM_CALL_ARGS_BLOCKARG.
Code like the following is crashing for us on 3.4.1:
```ruby
def a(&) = yield(x: 0)
1000.times { a { |x:| x } }
```
Crash:
```
ruby: YJIT has panicked. More info to follow...
thread '<unnamed>' panicked at ./yjit/src/codegen.rs:8018:13:
assertion `left == right` failed
left: 0
right: 1
```
Co-authored-by: Dani Acherkan <dtl.117@gmail.com>
* Add opt_duparray_send insn to skip the allocation on `#include?`
If the method isn't going to modify the array we don't need to copy it.
This avoids the allocation / array copy for things like `[:a, :b].include?(x)`.
This adds a BOP for include? and tracks redefinition for it on Array.
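A rough way to observe the effect (the allocation counts in the comment are an assumption about how the saved copy shows up, not figures from this change):
```ruby
def static_include?(x) = [:a, :b].include?(x)

def allocated
  before = GC.stat(:total_allocated_objects)
  yield
  GC.stat(:total_allocated_objects) - before
end

static_include?(:a) # warm up
# Without the optimization each call copies the literal array; with
# opt_duparray_send the copy is skipped, so this should be (roughly) zero.
p allocated { 1_000.times { static_include?(:a) } }
```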
Co-authored-by: Andrew Novoselac <andrew.novoselac@shopify.com>
* YJIT: Implement opt_duparray_send include_p
Co-authored-by: Andrew Novoselac <andrew.novoselac@shopify.com>
* Update opt_newarray_send to support simple forms of include?(arg)
Similar to opt_duparray_send but for non-static arrays.
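For contrast with the `duparray` case above, a sketch of the non-static shape this covers (method name is illustrative):
```ruby
# The elements are only known at runtime, so the compiler emits newarray
# rather than duparray; opt_newarray_send include_p lets this skip the
# temporary array as well.
def dynamic_include?(a, b, x) = [a, b].include?(x)
```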
* YJIT: Implement opt_newarray_send include_p
---------
Co-authored-by: Andrew Novoselac <andrew.novoselac@shopify.com>
* YJIT: Specialize `String#[]` (`String#slice`) with fixnum arguments
String#[] is among the top few C calls of several YJIT benchmarks: liquid-compile, rubocop, mail, and sudoku.
This speeds up these benchmarks by 1-2%.
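Illustrative call shapes with fixnum arguments (my examples, not taken from the commit):
```ruby
s = "hello world"
s[0]          # => "h"      single fixnum index
s[0, 5]       # => "hello"  fixnum start and length
s.slice(6, 5) # => "world"  String#slice is the same method
```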
* YJIT: Try harder to get type info for `String#[]`
In the large amount of generated code for the mail gem, the context doesn't have the type info. In that case, if we peek at the stack and add a guard, we can still apply the specialization, which speeds up the mail benchmark by 5%.
Co-authored-by: Maxime Chevalier-Boisvert <maxime.chevalierboisvert@shopify.com>
Co-authored-by: Takashi Kokubun (k0kubun) <takashikkbn@gmail.com>
---------
Co-authored-by: Maxime Chevalier-Boisvert <maxime.chevalierboisvert@shopify.com>
Co-authored-by: Takashi Kokubun (k0kubun) <takashikkbn@gmail.com>
[Feature #20205]
The warning now suggests running with --debug-frozen-string-literal:
```
test.rb:3: warning: literal string will be frozen in the future (run with --debug-frozen-string-literal for more information)
```
When using --debug-frozen-string-literal, the location where the string
was created is shown:
```
test.rb:3: warning: literal string will be frozen in the future
test.rb:1: info: the string was created here
```
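For reference, a hypothetical `test.rb` (without a `frozen_string_literal` magic comment) that would produce warnings shaped like the above, with line numbers chosen to match:
```ruby
str = "hello"   # line 1: where the literal was created (the "info" location)

str << " world" # line 3: first mutation emits the deprecation warning
```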
When resurrecting strings and debug mode is not enabled, the overhead is a simple FL_TEST_RAW.
When mutating chilled strings and deprecation warnings are not enabled,
the overhead is a simple warning category enabled check.
Co-authored-by: Jean Boussier <byroot@ruby-lang.org>
Co-authored-by: Nobuyoshi Nakada <nobu@ruby-lang.org>
This commit expands inlining for simple ISeqs to accept
callees that have unused keyword parameters and callers
that specify unused keywords. The following shows 2 new
callsites that will be inlined:
```ruby
def let(a, checked: true) = a
let(1)
let(1, checked: false)
```
Co-authored-by: Kaan Ozkan <kaan.ozkan@shopify.com>
This commit fixes splat and block handling when calling into a forwarding iseq. In the case of a splat, we need to avoid expanding the array onto the stack. We also need to ensure the CI write is flushed to the SP; otherwise it's possible for a block handler to clobber the CI.
[ruby-core:118360]
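A sketch of the shape being fixed (my reduction, not the original reproduction): a splat plus a block literal passing through a `...` callee.
```ruby
def callee(*args, &blk)
  [args, blk.call]
end

def fwd(...) = callee(...)

arr = [1, 2, 3]
p fwd(*arr) { :from_block } # => [[1, 2, 3], :from_block]
```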
This patch optimizes forwarding callers and callees. It only optimizes methods that take `...` as their only parameter and then pass `...` on to other calls.
Calls it optimizes look like this:
```ruby
def bar(a) = a
def foo(...) = bar(...) # optimized
foo(123)
```
```ruby
def bar(a) = a
def foo(...) = bar(1, 2, ...) # optimized
foo(123)
```
```ruby
def bar(*a) = a
def foo(...)
list = [1, 2]
bar(*list, ...) # optimized
end
foo(123)
```
All variants of the above but using `super` are also optimized, including a bare super like this:
```ruby
def foo(...)
super
end
```
This patch eliminates intermediate allocations made when calling methods that accept `...`.
We can observe allocation elimination like this:
```ruby
def m
x = GC.stat(:total_allocated_objects)
yield
GC.stat(:total_allocated_objects) - x
end
def bar(a) = a
def foo(...) = bar(...)
def test
m { foo(123) }
end
test
p test # allocates 1 object on master, but 0 objects with this patch
```
```ruby
def bar(a, b:) = a + b
def foo(...) = bar(...)
def test
m { foo(1, b: 2) }
end
test
p test # allocates 2 objects on master, but 0 objects with this patch
```
How does it work?
-----------------
This patch works by using a dynamic stack size when passing forwarded parameters to callees.
The caller's info object (known as the "CI") contains the stack size of the
parameters, so we pass the CI object itself as a parameter to the callee.
When forwarding parameters, the forwarding ISeq uses the caller's CI to determine how much stack to copy, then copies the caller's stack before calling the callee.
The CI at the forwarded call site is adjusted using information from the caller's CI.
I think this description is kind of confusing, so let's walk through an example with code.
```ruby
def delegatee(a, b) = a + b
def delegator(...)
delegatee(...) # CI2 (FORWARDING)
end
def caller
delegator(1, 2) # CI1 (argc: 2)
end
```
Before we call the delegator method, the stack looks like this:
```
Executing Line | Code | Stack
---------------+---------------------------------------+--------
1| def delegatee(a, b) = a + b | self
2| | 1
3| def delegator(...) | 2
4| # |
5| delegatee(...) # CI2 (FORWARDING) |
6| end |
7| |
8| def caller |
-> 9| delegator(1, 2) # CI1 (argc: 2) |
10| end |
```
The ISeq for `delegator` is tagged as "forwardable", so when `caller` calls into `delegator`, it writes `CI1` onto the stack as a local variable for the `delegator` method. The `delegator` method has a special local called `...` that holds the caller's CI object.
Here is the ISeq disasm for `delegator`:
```
== disasm: #<ISeq:delegator@-e:1 (1,0)-(1,39)>
local table (size: 1, argc: 0 [opts: 0, rest: -1, post: 0, block: -1, kw: -1@-1, kwrest: -1])
[ 1] "..."@0
0000 putself ( 1)[LiCa]
0001 getlocal_WC_0 "..."@0
0003 send <calldata!mid:delegatee, argc:0, FCALL|FORWARDING>, nil
0006 leave [Re]
```
The local called `...` will contain the caller's CI: CI1.
Here is the stack when we enter `delegator`:
```
Executing Line | Code | Stack
---------------+---------------------------------------+--------
1| def delegatee(a, b) = a + b | self
2| | 1
3| def delegator(...) | 2
-> 4| # | CI1 (argc: 2)
5| delegatee(...) # CI2 (FORWARDING) | cref_or_me
6| end | specval
7| | type
8| def caller |
9| delegator(1, 2) # CI1 (argc: 2) |
10| end |
```
The CI at `delegatee` on line 5 is tagged as "FORWARDING", so it knows to
memcopy the caller's stack before calling `delegatee`. In this case, it will
memcopy self, 1, and 2 to the stack before calling `delegatee`. It knows how much
memory to copy from the caller because `CI1` contains stack size information
(argc: 2).
Before executing the `send` instruction, we push `...` on the stack. The
`send` instruction pops `...`, and because it is tagged with `FORWARDING`, it
knows to memcopy (using the information in the CI it just popped):
```
== disasm: #<ISeq:delegator@-e:1 (1,0)-(1,39)>
local table (size: 1, argc: 0 [opts: 0, rest: -1, post: 0, block: -1, kw: -1@-1, kwrest: -1])
[ 1] "..."@0
0000 putself ( 1)[LiCa]
0001 getlocal_WC_0 "..."@0
0003 send <calldata!mid:delegatee, argc:0, FCALL|FORWARDING>, nil
0006 leave [Re]
```
Instruction 0001 puts the caller's CI on the stack. `send` is tagged with FORWARDING, so it reads the CI and _copies_ the caller's stack to this stack:
```
Executing Line | Code | Stack
---------------+---------------------------------------+--------
1| def delegatee(a, b) = a + b | self
2| | 1
3| def delegator(...) | 2
4| # | CI1 (argc: 2)
-> 5| delegatee(...) # CI2 (FORWARDING) | cref_or_me
6| end | specval
7| | type
8| def caller | self
9| delegator(1, 2) # CI1 (argc: 2) | 1
10| end | 2
```
The "FORWARDING" call site combines information from CI1 with CI2 in order
to support passing other values in addition to the `...` value, as well as
perfectly forward splat args, kwargs, etc.
Since we're able to copy the stack from `caller` into `delegator`'s stack, we can avoid allocating objects.
I want to do this to eliminate object allocations for delegate methods. My long-term goal is to implement `Class#new` in Ruby, and it uses `...`.
I was able to implement `Class#new` in Ruby
[here](https://github.com/ruby/ruby/pull/9289).
If we adopt the technique in this patch, then we can optimize allocating
objects that take keyword parameters for `initialize`.
For example, this code will allocate 2 objects: one for `SomeObject`, and one
for the kwargs:
```ruby
SomeObject.new(foo: 1)
```
If we combine this technique, plus implement `Class#new` in Ruby, then we can
reduce allocations for this common operation.
Co-Authored-By: John Hawthorn <john@hawthorn.email>
Co-Authored-By: Alan Wu <XrXr@users.noreply.github.com>
The implementation of assert_equal inside bootstraptest/runner.rb wraps
a print around all the test code specified in the string, making returns
useless.
This change fixes this test for platforms that don't implement GC.compact.
Co-Authored-By: Peter Zhu <peter@peterzhu.ca>
The warning category should be enabled if we want to call
`Warning.warn`.
This commit speeds up the following benchmark:
```ruby
eval "def test; " +
1000.times.map { "' '.chomp!" }.join(";") + "; end"
def run_benchmark count
i = 0
while i < count
start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
yield
ms = Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
puts "itr ##{i}: #{(ms * 1000).to_i}ms"
i += 1
end
end
run_benchmark(25) do
250.times do
test
end
end
```
On `master` this runs at about 92ms per iteration. With this patch, it
is 7ms per iteration.
[Bug #20573]
* YJIT: Fix `Struct` accessors not firing tracing events
Reading and writing to structs should fire `c_call` and `c_return`, but
YJIT wasn't correctly dropping those calls when tracing.
This has been missing since this functionality was added in 3081c83169, but the added test only fails when run in isolation with `--yjit-call-threshold=1`. The test sometimes failed on CI.
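An illustrative check along these lines (an assumed reproduction, not the test that was added):
```ruby
S = Struct.new(:a)
s = S.new(1)

seen = []
TracePoint.new(:c_call) { |tp| seen << tp.method_id }.enable do
  s.a     # struct reader
  s.a = 2 # struct writer
end
p seen.include?(:a) && seen.include?(:a=) # should be true, with or without YJIT
```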
* RJIT: YJIT: Fix `Struct` readers not firing tracing events
Same issue as YJIT, but it looks like RJIT doesn't support writing to
structs, so only reading needs changing.
* YJIT: Add specialized codegen function for `TrueClass#===`
TrueClass#=== is currently number 10 in the most frequent C calls list of the lobsters benchmark.
```ruby
require "benchmark/ips"

def wrap
  true === true
  true === false
  true === :x
end

Benchmark.ips do |x|
  x.report(:wrap) do
    wrap
  end
end
```
```
before
Warming up --------------------------------------
wrap 1.791M i/100ms
Calculating -------------------------------------
wrap 17.806M (± 1.0%) i/s - 89.544M in 5.029363s
after
Warming up --------------------------------------
wrap 4.024M i/100ms
Calculating -------------------------------------
wrap 40.149M (± 1.1%) i/s - 201.223M in 5.012527s
```
Co-authored-by: Maxime Chevalier-Boisvert <maxime.chevalierboisvert@shopify.com>
Co-authored-by: Takashi Kokubun (k0kubun) <takashikkbn@gmail.com>
Co-authored-by: Kevin Menard <kevin.menard@shopify.com>
Co-authored-by: Alan Wu <XrXr@users.noreply.github.com>
* Fix the new test for RJIT
---------
Co-authored-by: Maxime Chevalier-Boisvert <maxime.chevalierboisvert@shopify.com>
Co-authored-by: Takashi Kokubun (k0kubun) <takashikkbn@gmail.com>
Co-authored-by: Kevin Menard <kevin.menard@shopify.com>
Co-authored-by: Alan Wu <XrXr@users.noreply.github.com>
Types like `Type::CString` really only assert that at one point the object had
its class field equal to `String`. Once a singleton class is created for any
strings, the type makes no assertion about any class field anymore, and becomes
the same as `Type::TString`.
Previously, the `--yjit-verify-ctx` option wasn't allowing objects of this kind that have singleton classes to pass verification, even though the code generators handle them just fine.
Found through `ruby/spec`.
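For context, any of the following gives a String instance a singleton class, after which `Type::CString` can no longer guarantee the class field is exactly `String` (illustrative, not from the commit):
```ruby
s = +"abc"
def s.shout = upcase # defining a singleton method forces a singleton class
s.singleton_class    # so does asking for it directly
s.shout # => "ABC"
```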
* Revert "Revert "YJIT: Optimize local variables when EP == BP" (#10584)"
This reverts commit c878344195.
* YJIT: Take care of GC references in ISEQ invariants
Co-authored-by: Alan Wu <alansi.xingwu@shopify.com>
---------
Co-authored-by: Alan Wu <alansi.xingwu@shopify.com>
Add a specialized codegen function for `Class#superclass`.
Co-authored-by: Maxime Chevalier-Boisvert <maxime.chevalierboisvert@shopify.com>
Co-authored-by: Takashi Kokubun (k0kubun) <takashikkbn@gmail.com>
Co-authored-by: Randy Stauner <randy.stauner@shopify.com>
Co-authored-by: Alan Wu <XrXr@users.noreply.github.com>
Previously, passing objects that respond to #to_int to `String#setbyte`
resulted in a crash when compiled by YJIT. This was due to the lazily
pushed frame from rb_yjit_lazy_push_frame() lingering and not being
popped by an exception as expected.
The fix is to ensure that `ec->cfp` is restored to its value from before the lazy frame push in case the method call for conversion succeeds. Right now, this is only done for conversion to integers.
Found running `ruby/spec`.
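An assumed reproduction shape (the failing spec isn't quoted in the commit): an index object that implements `#to_int`.
```ruby
class IntLike
  def initialize(i)
    @i = i
  end

  def to_int
    @i
  end
end

s = +"hello"
s.setbyte(IntLike.new(0), 72) # index converted via #to_int; previously crashed under YJIT
p s # => "Hello"
```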
* clarify comment
We just need to make sure `ec->cfp` is always preserved, and this can convert without raising when `raise` is true.
* YJIT: Fix shrinking block with assumption too much
Under very specific circumstances, discovered by a test case in `ruby/spec`, an `expandarray` block can contain just a branch and carry a method lookup assumption. Previously, when we regenerated the branch, we allowed it to shrink to empty, since we put the code at the jump target immediately after it. That was incorrect and caused a crash when the block was invalidated, since it left no room to patch in an exit.
When regenerating a branch that makes up a block entirely, and the block could be invalidated, we need to ensure there is room for invalidation. When there is code before the branch, it acts as padding, so we don't need to worry about those cases.
* skip on RJIT
In `jit_rb_int_lshift()`, we guard against the right hand side changing
since we want to avoid generating variable length shifts. When control
reaches a `putobject` and `opt_ltlt` pair, though, we know that the right
hand side never changes.
This commit detects this situation and substitutes an implementation
that does not guard against the right hand side changing, saving that
work.
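The pattern in question is visible in the bytecode for a literal shift amount (illustrative):
```ruby
puts RubyVM::InstructionSequence.compile("x = 1; x << 8").disasm
# The listing contains `putobject 8` immediately followed by `opt_ltlt`,
# so the shift amount is a compile-time constant and the guard can be dropped.
```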
Deleted some `putobject` Rust tests since they aren't that valuable and
cause linking issues.
Nice boost to `optcarrot` and `protoboeuf`:
```
----------  ------------------
bench       yjit-pre/yjit-post
optcarrot   1.09
protoboeuf  1.12
----------  ------------------
```
This mainly targets things like `T.unsafe()` from Sorbet, which is just an
identity function at runtime and only a hint for the static checker.
Only deal with simple callers and callees (no keywords, splats, etc.).
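A minimal sketch of the kind of call site this covers (the names are mine, not Sorbet's real definitions):
```ruby
# An identity method, like T.unsafe at runtime: with this change, simple
# callers of such methods can have the call inlined away entirely.
def unsafe(x) = x

def compute(a)
  unsafe(a) + 1
end
```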
Co-authored-by: Takashi Kokubun (k0kubun) <takashikkbn@gmail.com>
[Feature #20205]
As a path toward enabling frozen string literals by default in the future, this commit introduces "chilled strings". From a user perspective, chilled strings pretend to be frozen, but on the first attempt to mutate them, they lose their frozen status and emit a warning rather than raising a `FrozenError`.
Implementation-wise, `rb_compile_option_struct.frozen_string_literal` is no longer a boolean but a tri-state of `enabled/disabled/unset`.
When code is compiled with frozen string literals neither explicitly enabled nor disabled, string literals are compiled with a new `putchilledstring` instruction. This instruction is identical to `putstring` except that it marks the String with the `STR_CHILLED (FL_USER3)` and `FL_FREEZE` flags.
Chilled strings have the `FL_FREEZE` flag so as to minimize the need to check for chilled strings across the codebase, and to improve compatibility with C extensions.
Notes:
- `String#freeze`: clears the chilled flag.
- `String#-@`: acts as if the string was mutable.
- `String#+@`: acts as if the string was mutable.
- `String#clone`: copies the chilled flag.
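Putting the notes above together, a sketch of the user-visible behavior in a file with no `frozen_string_literal` magic comment (comments reflect the intended semantics, not verified output):
```ruby
str = "hello"          # literal compiled with putchilledstring
str << " world"        # warns that the literal will be frozen in the future, then mutates

frozen = "abc".freeze  # freeze clears the chilled flag; later mutation raises FrozenError
interned = -"abc"      # String#-@ behaves as if the string were mutable
copy = "abc".clone     # clone carries the chilled flag over

String.new("x") << "y" # built at runtime, never chilled, no warning
```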
Co-authored-by: Jean Boussier <byroot@ruby-lang.org>
This type of cfunc shows up as consuming a lot of cycles in profiles of the lobsters benchmark, even though according to the stats they don't happen that frequently. It might be a bug in the profiling, but these calls are not too bad to support, so we might as well do it.
Co-authored-by: Maxime Chevalier-Boisvert <maxime.chevalierboisvert@shopify.com>
Usually we deal with splats by speculating that they're of a specific
size. In this case, the C method takes a pointer and a length, so
we can support changing sizes just fine.
* YJIT: Lazily push a frame for specialized C funcs
Co-authored-by: Maxime Chevalier-Boisvert <maxime.chevalierboisvert@shopify.com>
* Fix a comment on pc_to_cfunc
* Rename rb_yjit_check_pc to rb_yjit_lazy_push_frame
* Rename it to jit_prepare_lazy_frame_call
* Fix a typo
* Optimize String#getbyte as well
* Optimize String#byteslice as well
---------
Co-authored-by: Maxime Chevalier-Boisvert <maxime.chevalierboisvert@shopify.com>