Both Red Hat and Debian-like systems configure the minimum TLS version
to be 1.2 by default, but allow users to change this via configs.
On Red Hat and derivatives this happens via crypto-policies[1], which
writes its settings to /etc/crypto-policies/back-ends/opensslcnf.config.
Most notably, it sets TLS.MinProtocol there. For Debian there's
MinProtocol in /etc/ssl/openssl.cnf. Both default to TLSv1.2, which is
considered a secure default.
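For illustration, Debian's mechanism amounts to an excerpt like the
following in /etc/ssl/openssl.cnf (exact contents vary by release):

    [openssl_init]
    ssl_conf = ssl_sect

    [ssl_sect]
    system_default = system_default_sect

    [system_default_sect]
    MinProtocol = TLSv1.2
    CipherString = DEFAULT@SECLEVEL=2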
In contrast, SSLContext hard-codes OpenSSL::SSL::TLS1_VERSION for
min_version, even though TLS 1.0 and 1.1 are considered insecure.
Because this is always set in the default parameters, the system-wide
default cannot be respected, even if a developer wants to respect it.
This commit takes the same approach already used for ciphers: the value
is only set for OpenSSL < 1.1.0.
[1]: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/security_hardening/using-the-system-wide-cryptographic-policies_security-hardening
ae215a47ae
ASAN leaves a pointer to the fake frame on the stack; we can use the
__asan_addr_is_in_fake_stack API to work out the extent of that fake
frame and thus mark any VALUEs contained therein.
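A minimal sketch of the idea, assuming `fake_stack` came from
__asan_get_current_fake_stack() and `mark_maybe` stands in for the GC's
conservative marking hook (names here are illustrative):

    #include <stdint.h>
    #include <sanitizer/asan_interface.h>

    typedef uintptr_t VALUE;  /* stand-in for Ruby's VALUE type */

    /* If `ptr`, found while scanning the machine stack, points into a
     * fake frame, mark every word of that frame as a possible VALUE. */
    static void
    mark_fake_frame_maybe(void *fake_stack, void *ptr,
                          void (*mark_maybe)(VALUE))
    {
        void *beg, *end;
        if (fake_stack &&
            __asan_addr_is_in_fake_stack(fake_stack, ptr, &beg, &end)) {
            for (VALUE *p = (VALUE *)beg; p < (VALUE *)end; p++) {
                mark_maybe(*p);
            }
        }
    }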
[Bug #20001]
It looks like `sched_getcpu(3)` returns a strange number in some
(virtual?) environments.
I decided to remove the setaffinity mechanism because, in a quick
benchmark, performance does not appear to degrade without it.
[Bug #20172]
When opening a file with `File.open` and then setting the encoding with
`IO#set_encoding`, Ruby still correctly performs CRLF -> LF conversion
on Windows when reading files that contain CRLF line endings (in text
mode).
However, if the file is instead opened with the `rb_io_fdopen` or
`rb_file_open` C APIs, the CRLF conversion is _NOT_ set up correctly: it
works if the encoding is not specified, but once `IO#set_encoding` is
called, the conversion stops happening. This seems to be because the
encflags never get ECONV_DEFAULT_NEWLINE_DECORATOR set in these
codepaths.
Concretely, this means that the conversion doesn't happen in the
following circumstances:
* When loading Ruby files with require (which calls rb_io_fdopen)
* When parsing Ruby files with RubyVM::AbstractSyntaxTree (which calls
  rb_file_open).
This then causes the ErrorHighlight tests to fail on Windows if Git has
checked them out with CRLF line endings: the error messages being
tested wind up with literal \r\n sequences in them, because the iseq
text from the parser contains strings whose newlines were never
converted.
This commit fixes the problem by copying the snippet that sets this up
in `rb_io_extract_modeenc` (the File.open path) into the `rb_io_fdopen`
and `rb_file_open` codepaths, as sketched below.
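A simplified sketch of that snippet, using the Ruby-internal names
mentioned above (the actual patch differs in detail):

    #if DEFAULT_TEXTMODE
        /* in text mode, with no newline decorator explicitly chosen,
         * request the platform's default newline decoration so that a
         * later IO#set_encoding keeps converting newlines */
        if ((fmode & FMODE_TEXTMODE) &&
            !(ecflags & ECONV_NEWLINE_DECORATOR_MASK)) {
            ecflags |= ECONV_DEFAULT_NEWLINE_DECORATOR;
        }
    #endif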
[Bug #20101]
The terminator is not actually getting filled in; we're simply passing
(two) bytes of uninitialized memory as the NUL terminator. This can
lead to garbage characters being written to registry values.
Fix this by explicitly putting a WCHAR_NUL character into the string to
be sent to the registry API, like we do in the MULTI_SZ case.
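For reference, the Win32-level invariant is that the terminator must be
part of the buffer actually handed to the API and counted in cbData; an
illustrative C sketch (not the Ruby patch itself):

    #include <windows.h>
    #include <wchar.h>

    static LONG
    set_reg_sz(HKEY key, const WCHAR *name, const WCHAR *str)
    {
        /* +1 so an explicit L'\0' is written, rather than whatever two
         * bytes happen to follow the string in memory */
        DWORD cb = (DWORD)((wcslen(str) + 1) * sizeof(WCHAR));
        return RegSetValueExW(key, name, 0, REG_SZ,
                              (const BYTE *)str, cb);
    }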
[Bug #20096]
This avoids pinning an ID to the symbol when a dynamic symbol is passed
in as a hash key.
rb_sym2str is available since Ruby 2.2, and json depends on Ruby >= 2.3.
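A sketch of the difference at the C level (the helper name is
illustrative):

    #include "ruby.h"

    static VALUE
    sym_name(VALUE sym)
    {
        /* rb_sym2str returns the Symbol's name without interning an ID,
         * so a dynamic symbol stays collectable */
        return rb_sym2str(sym);
        /* the old pattern pins the symbol forever:
         *   return rb_id2str(SYM2ID(sym)); */
    }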
5cbafb8dbe
Object IDs became more expensive in Ruby 2.7. Using
`Hash#compare_by_identity` lets us get the same effect without needing
to force all these objects to have object_ids assigned to them.
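A sketch from the C side (hypothetical helper; rb_funcall is used for
simplicity):

    #include "ruby.h"

    static VALUE
    make_identity_table(void)
    {
        /* keys are compared by identity, so storing an object as a key
         * never forces an object_id to be materialized for it */
        return rb_funcall(rb_hash_new(),
                          rb_intern("compare_by_identity"), 0);
    }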
df69e4a12e
The `:nodoc:` directive does not work at a method definition in C; it
must be placed at the implementation function. That is, when two
methods share one implementation function, there is no way to make one
visible and the other invisible at the same time.
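An illustrative example of the constraint:

    #include "ruby.h"

    /* a :nodoc: comment would have to go here, on the shared
     * implementation, and would hide BOTH methods defined on it */
    static VALUE
    foo_impl(VALUE self)
    {
        return self;
    }

    void
    Init_foo(void)
    {
        VALUE cFoo = rb_define_class("Foo", rb_cObject);
        rb_define_method(cFoo, "visible", foo_impl, 0);
        rb_define_method(cFoo, "hidden", foo_impl, 0);
        /* there is no way to :nodoc: only this second definition */
    }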
This reverts commit cbda94edd8, because
`:nodoc:` does not work for constants.
In the case of `rb_define_const`, RDoc parses the preceding comment in
the `"/* definition: comment */"` form.