While __php_mempcpy is only used by ext/standard/crypt_sha*, the
mempcpy "pattern" is used everywhere.
This commit removes __php_mempcpy, adds zend_mempcpy and transforms
open-coded parts into function calls.
Thanks to Maurício Fauth for finding and reporting this bug.
The bug was introduced in October 2022. It originally only affected
text encodings which do not have a fixed byte width per character
and for which mbstring does not have an mblen_table. However, I recently
made another change to mbstring, such that mb_substr no longer relies
on the mblen_table even if one is available. Because of that change,
the bug introduced in October 2022 now affects a greater number of
text encodings, including UTF-8.
For GB18030, it is not generally possible to identify character
boundaries without scanning through the entire string. Therefore,
implement mb_strcut using a strategy similar to the mblen_table-based
implementation in mbstring.c. The difference is that for GB18030, we
need to look at two leading bytes to determine the byte length of a
multi-byte character.
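To illustrate the strategy, here is a rough PHP-level sketch (not the
actual C code) of how the byte length of a GB18030 character can be
determined from its first two bytes:

function gb18030_char_length(string $s, int $i): int {
    $b1 = ord($s[$i]);
    if ($b1 < 0x81 || $b1 > 0xFE) {
        return 1; // single-byte character (or an invalid lead byte)
    }
    $b2 = isset($s[$i + 1]) ? ord($s[$i + 1]) : 0;
    if ($b2 >= 0x30 && $b2 <= 0x39) {
        return 4; // 4-byte form: the second byte is a digit
    }
    if (($b2 >= 0x40 && $b2 <= 0x7E) || ($b2 >= 0x80 && $b2 <= 0xFE)) {
        return 2; // ordinary 2-byte form
    }
    return 1; // broken sequence; consume one byte and resynchronize
}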
The new implementation is 4-5x faster for short strings, and more than
10x faster for long strings. (Part of the reason the new code has
such a large performance advantage is that it replaces code
based on the older text conversion filters provided by libmbfl, which
were quite slow.)
The behavior is the same as before for valid GB18030 strings; for
some invalid strings, mb_strcut will choose different 'cut' points
as compared to before. (Clang's libFuzzer was used to compare the
old and new implementations on valid GB18030 strings, searching for
test cases where they behaved differently; no such cases were found.)
For many years, libmbfl has included an 'mblen_table' for selected
text encodings. This table allows looking up the byte length of a
(possibly multi-byte) character from the value of the first byte.
libmbfl uses these tables to optimize certain operations; if a
text-processing operation can be performed using an mblen_table,
it may not be necessary to decode the text to codepoints. Since
libmbfl's decoding filters are generally slow, this improves
performance.
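As a sketch of the idea (the real tables and lookups live in C),
splitting a byte string into characters with an mblen_table looks
roughly like this, where $mblen_table is assumed to be a 256-entry
array mapping a first byte to a character's byte length:

function split_with_mblen_table(string $s, array $mblen_table): array {
    $chars = [];
    for ($i = 0, $n = strlen($s); $i < $n; ) {
        $len = $mblen_table[ord($s[$i])]; // length from the first byte only
        $chars[] = substr($s, $i, $len);  // 'bite off' one character
        $i += $len;
    }
    return $chars;
}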
Since mbstring is (or was) based on libmbfl, it has always used
these mblen_tables to implement some functions. This design has a
significant downside. Let me explain:
While some mbstring functions are implemented by converting input text
to codepoints and operating on the codepoints, others operate directly
on the original input bytes (using an mblen_table to identify character
boundaries). Both of these implementation styles, if correctly coded,
yield equivalent results on valid strings. However, on strings which
contain encoding errors, the results are often different.
When decoding byte strings to codepoints using some text encoding,
mbstring uses the non-existent codepoint 0xFFFFFFFF to represent a
byte sequence which cannot be decoded. Then, when mbstring indexes into
the resulting sequence of codepoints, the index of any particular
character depends on the number of such 'error markers' which were
produced during the decoding process. In contrast, when an mblen_table
is used to split a byte sequence into characters, there is no question
of counting encoding errors; rather, table lookups into the mblen_table
are used to repeatedly 'bite off' some number of bytes (which are
treated as one 'character'). In the presence of encoding errors, these
two methods of mapping between byte indices and character indices are
inherently different and will rarely agree.
(For completeness, it must be said that some internal mbstring code
which operates only on UTF-8 text uses a third method for mapping
between byte indices and character indices, that is: counting
non-continuation UTF-8 bytes, which are all bytes whose binary
representation is NOT like 0b10xxxxxx. This method happens to agree with
the method which involves decoding the input text to codepoints and then
counting the codepoints.)
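A sketch of that third method, counting non-continuation bytes, might
look like this in PHP:

function utf8_char_count(string $s): int {
    $count = 0;
    for ($i = 0, $n = strlen($s); $i < $n; $i++) {
        // Every byte except continuation bytes (0b10xxxxxx) starts a
        // new 'character'.
        if ((ord($s[$i]) & 0xC0) !== 0x80) {
            $count++;
        }
    }
    return $count;
}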
I have been aware of this issue for years, but only recently became
aware that in the case of mb_strstr, mb_strpos, and mb_substr,
this issue can cause seriously unintuitive behavior (and even security
vulnerabilities). This was reported by Stefan Schiller.
Stefan Schiller shared the following example for mb_strstr:
var_dump(mb_strstr("\xf0start", "start", false, "UTF-8"));
// string(2) "rt"
Similarly, when mb_strpos and mb_substr are used to identify and
extract a substring from a string with encoding errors, Stefan Schiller
pointed out that the extracted portion may be completely different than
desired. This is because (for UTF-8 strings) mb_strpos works by counting
non-continuation bytes, but mb_substr uses an mblen_table.
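The affected pattern is ordinary-looking code like the following
(illustrative only; the byte string and field names are made up), where
mb_strpos computes a character index with one method and mb_substr
interprets it with another, so on invalid input the extracted slice
could start somewhere other than where mb_strpos found the needle:

$input = "\xF0" . "user=guest;role=admin"; // stray UTF-8 lead byte up front
$start = mb_strpos($input, "role", 0, "UTF-8");
$role  = mb_substr($input, $start, 10, "UTF-8");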
Since some mbstring functions *cannot* be implemented using an
mblen_table, as long as mblen_tables are used, similar inconsistencies
cannot be totally avoided. But the mblen_tables are critical to
mbstring's performance. Or are they? Benchmarking mb_substr on various
UTF-8, SJIS, and EUC-JP strings revealed something interesting.
On all SJIS and EUC-JP test cases, mb_substr was slightly faster when
the mblen_table based code was deleted. For some UTF-8 test cases, the
mblen_table-based code was a tiny bit faster, while for others the
fallback code was a touch faster; in no case was the difference
significant.
Therefore, the simple fix is to delete the mblen_table-based
implementation of mb_substr.
Aside from making the function behave consistently with other mbstring
functions on invalid strings, there is ONE case where behavior is now
different on valid strings: that is, on SJIS-Mac (MacJapanese) strings
which contain any of the following code units:
0x85AB-0x85AD, 0x85BF, 0x85C0, 0x85C1, 0x8645, 0x864B, 0x865D, 0x869E,
0x86CE, 0x86D3-0x86D5, 0x86D6, 0x8971, 0x8792, 0x879D, 0x87FB, 0x87FC,
0xEB41, 0xEB42, 0xEB50, 0xEB5B, 0xEB5D, 0xEB60-0xEB6E, and all from
0xEB81 and above.
All of these SJIS-Mac code units share the (very unusual) property that
they do not correspond to any one Unicode codepoint. When converting
from SJIS-Mac to Unicode, these must be converted to 2, 3, 4, or 5
codepoints each.
The previous, mblen_table-based implementation of mb_substr would treat
all of these SJIS-Mac byte sequences as 'one character'. Now, they are
treated as multiple characters (one for each of the Unicode codepoints
which they decode to). The new behavior is more consistent with other
mbstring functions.
I don't know if SJIS-Mac users will like this change or not (probably
most will never notice), but the BC break is justified by the very
real security impact of the previous, inconsistent behavior.
Finally, I should comment on whether similar changes are needed
elsewhere. The remaining functions which use an mblen_table are:
mb_str_split, mb_strcut, and various search functions (such as
mb_strpos). The search functions are now only affected when they
receive a positive 'offset' parameter specifying where to start
searching.
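Concretely, the affected calls are ones like this (illustrative),
where a positive character offset has to be converted into a starting
byte position before the search begins:

$haystack = "......needle......"; // placeholder haystack
$pos = mb_strpos($haystack, "needle", 3, "SJIS");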
The search functions should definitely be fixed so they do not use
an mblen_table to implement the 'offset' parameter. I am not convinced
that there is any good reason to change mb_str_split and mb_strcut.
This internal function is used to implement mb_strstr, mb_stristr,
mb_strrchr, mb_strrichr, mb_substr, mb_strimwidth, mb_trim, and
mb_str_pad. All of these functions will be faster if we return
early when a zero-length "substring" is requested.
For legacy text encodings where mb_strcut is implemented using an
mblen_table (such as the various SJIS variants), mb_strcut is now
~30% faster on small strings (about 10 bytes). This is because we
are now avoiding an extra, unnecessary copy operation on the
output string.
When used on large strings, the difference in performance is
negligible, as almost the entire runtime is spent stepping through
the string to find the starting and ending cut points.
On microbenchmarks run on my dev machine, mb_strcut is now ~50% faster
for fixed-byte-length text encodings like ASCII. (This is because the
previous code did an extra, unnecessary copy operation on the
resulting output string.)
* Fast path for when there is nothing to trim in mb_trim
* Make mb_trim decide between linear search and hash table lookup
In empirical experiments on my i7-4790, I found that the hash table
approach becomes faster once there are more than 4 code points in the
trim characters, when evaluated on the worst case.
This patch changes the logic so that a hash table is used for a large
number of trim characters, and linear search when the number of trim
characters is <= 4.
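A PHP-level analogue of the chosen strategy (the real implementation is
in C, operating on decoded code points) might look like this:

function make_trim_char_checker(array $trimCodePoints): callable {
    if (count($trimCodePoints) <= 4) {
        // Few trim characters: a linear search is cheapest.
        return fn (int $cp): bool => in_array($cp, $trimCodePoints, true);
    }
    // Many trim characters: use hash table lookups instead.
    $set = array_fill_keys($trimCodePoints, true);
    return fn (int $cp): bool => isset($set[$cp]);
}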
This bug was introduced in cb840799b4.
Thanks to Ignace Nyamagana Butera for discovering this bug and
to Sebastian Bergmann for doing an initial investigation and opening
a bug ticket.
The old implementation runs through the entire string to pick out the
part which should be returned by mb_strcut. This creates significant
performance overhead. The new specialized implementation of mb_strcut
for UTF-8 usually only examines a few bytes around the starting and
ending cut points, meaning it generally runs in constant time.
For UTF-8 strings just a few bytes long, the new implementation is
around 10% faster (according to microbenchmarks which I ran locally).
For strings around 10,000 bytes in length, it is 50-300x faster.
(Yes, that is 300x and not 300%.)
The new implementation behaves identically to the old one on VALID
UTF-8 strings; a fuzzer was used to help ensure this is the case.
On invalid UTF-8 strings, there is a difference: in some cases, the
old implementation will pass invalid byte sequences through unchanged,
while in others it will remove them. The new implementation has
behavior which is perhaps slightly more predictable: it simply backs
up the starting and ending cut points to the preceding "starter
byte" (one which is not a UTF-8 continuation byte).
We need to remove the value from the GC buffer before freeing it. Otherwise
shutdown will trigger a use-after-free when running the GC. Do that by switching from
zend_hash_destroy to zend_array_destroy, which should also be faster for freeing
members due to inlining of i_zval_ptr_dtor.
Closes GH-11822
When not providing a pad string, *and* not having other defaulted
arguments, the function would crash on a NULL pad zend_string*.
Despite testing with an empty pad string, the issue wasn't found because
when using named arguments the pad string *is* filled in.
I tweaked the #if check such that the workaround only applies on GCC
versions older than 8.0.
I tested this with GCC 7.5, 8.4, 9.4, and 13.1.1, and with Clang 10.0.
Closes GH-11516.
In 6fc8d014df, pakutoma added specialized validity checking functions
for some legacy text encodings like ISO-2022-JP and UTF-7. These
check functions perform a more strict validity check than the encoding
conversion functions for the same text encodings. For example, the
check function for ISO-2022-JP verifies that the string ends in the
correct state required by the specification for ISO-2022-JP.
These check functions are already being used to make detection of text
encoding more accurate when 'strict' detection mode is enabled.
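For reference, 'strict' detection mode is opted into via the third
parameter of mb_detect_encoding; for example:

$data = "..."; // some input whose encoding is unknown (placeholder)
$enc = mb_detect_encoding($data, ["ISO-2022-JP", "UTF-8", "SJIS"], true); // strict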
However, since the default is 'non-strict' detection (a bad API design
but we're stuck with it now), most users will not benefit from
pakutoma's work. I was previously reluctant to enable this new logic
for non-strict detection mode. My intention was to reduce the scope of
behavior changes, since almost *any* behavior change may affect *some*
user in a way we don't expect.
However, we definitely have users whose (production) code was broken
by the changes I made in 28b346bc06, and enabling pakutoma's check
functions for non-strict detection mode would un-break it. (See
GH-10192 as an example.) The added checks also make sense.
In non-strict detection mode, we will not immediately reject candidate
encodings whose validity check function returns false; but they will
be much less likely to be selected. However, failure of the validity
check function is weighted less heavily than an encoding error detected
by the encoding conversion function.
The documentation for mb_detect_encoding says that this function
"Detects the most likely character encoding for string `string` from an
ordered list of candidates".
Prior to 28b346bc06, mb_detect_encoding did not really attempt to
determine the "most likely" text encoding for the input string. It
would just return the first candidate encoding for which the string was
valid. In 28b346bc06, I amended this function so that it uses heuristics
to try to guess which candidate encoding is "most likely".
However, the caller did not have any way to indicate which candidate
text encoding(s) they consider to be more likely, in case the
heuristics applied are inconclusive. In the language of Bayesian
probability, there was no way for the caller to indicate their 'prior'
assignment of probabilities.
Further, the documentation for mb_detect_encoding also says that the
second parameter `encodings` is "a list of character encodings to try,
in order". The documentation clearly implies that the order of
the `encodings` argument should be significant.
Therefore, amend mb_detect_encoding so that while it still uses
heuristics to guess the most likely text encoding for the input string,
it favors those which are earlier in the list of candidate encodings.
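For example (byte string chosen for illustration; 0xA4 0xA2 is valid
both as two single-byte characters in SJIS and as the hiragana 'あ' in
EUC-JP), these two calls may now return different results when the
other heuristics are inconclusive, with the first favoring SJIS and the
second favoring EUC-JP:

$bytes = "\xA4\xA2";
var_dump(mb_detect_encoding($bytes, ["SJIS", "EUC-JP", "UTF-8"]));
var_dump(mb_detect_encoding($bytes, ["EUC-JP", "SJIS", "UTF-8"]));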
One complication is that many callers of mb_detect_encoding use it
in this way:
mb_detect_encoding($string, mb_list_encodings());
In a majority of cases, this is bad code; mb_detect_encoding will be
much slower, and its results less reliable, than if a smaller
list of candidates is used. However, since such code already exists and
people are using it in production, we should not unnecessarily break it.
The order of candidate encodings obviously does not express any prior
belief of which candidates are more likely in this case, and treating
it as if it did will degrade the accuracy of the result.
Since mb_list_encodings now returns a single, immutable array on each
call, we can avoid that problem by turning off the new behavior when
we receive the array of encodings returned by mb_list_encodings.
This implementation means that if the user does this:
$a = mb_list_encodings();
mb_detect_encoding($string, $a);
...then the order of candidate encodings will not be considered.
However, if the user explicitly initializes their own array of all
supported legacy text encodings, then the order *will* be considered.
The other functions which also follow this new behavior are:
• mb_convert_variables
• mb_convert_encoding (when multiple candidate input encodings are
listed)
Other places where "detection" (or really "guessing") of text encoding
may be performed include:
• mb_send_mail
• Zend engine, when determining the encoding of a PHP script
• mbstring processing of HTTP request contents, when http_input INI
parameter is set to a list
In these cases, the new logic based on order of candidate encodings
is *not* enabled. It *might* be logical to consider the order of
candidate encodings in some or all of these cases, but I'm not sure if
that is true, so it seems wiser to avoid more behavior changes than is
necessary. Further, ever since the new encoding detection heuristics
were implemented in 28b346bc06, we have not received any complaints of
user code being broken in these areas. So I am reluctant to "fix what
isn't broken".
Well, some might say that applying the new detection heuristics
to mb_send_mail, etc. in 28b346bc06 was "fixing what wasn't broken",
but (cough cough) I don't have any comment on that...
This will allow us to easily check in other mbstring functions if the
list of all supported encodings, returned by mb_list_encodings, is
passed in as input to another function.
Co-authored-by: Ilija Tovilo <ilija.tovilo@me.com>
For mb_parse_str, when mbstring.http_input (INI parameter) is a list of
multiple possible text encodings (which is not the case by default),
this new implementation is about 25% faster.
When mbstring.http_input is a single value, then nothing is changed.
(No automatic encoding detection is done in that case.)
In 6fc8d014df, pakutoma added some additional validation logic to
mb_detect_encoding. Since the implementation of mb_detect_encoding
has changed significantly between PHP 8.2 and 8.3, when merging this
change down from PHP-8.2 into master, I had to port his code over to
the new implementation in master.
However, I did this incorrectly. In merge commit 0779950768,
the ported code modifies a function argument (to mb_guess_encoding)
which is marked 'const'. In the Windows CI job, MS VC++ rightly
flags this as a compile error.
Adjust the code to accomplish the same thing, but without destructively
modifying 'const' arguments.
Previously, mbstring used the same logic for encoding validation as for
encoding conversion.
However, there are cases where we want to use different logic for
validation and conversion. For example, if a string is truncated and is
missing input required by the encoding, or if it contains a character
which is invalid in that encoding but can still be converted, then
conversion should succeed but validation should fail.
To achieve this, a function pointer mb_check_fn has been added to
struct mbfl_encoding to implement the logic used for validation.
Also, validation logic has been implemented for UTF-7, UTF7-IMAP,
ISO-2022-JP, and JIS.
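As a rough illustration of the difference (byte sequence chosen for
illustration, and behavior described as I understand these check
functions): an ISO-2022-JP string which switches into JIS X 0208 mode
but never switches back to ASCII can still be converted, but it does
not end in the state the specification requires, so validation should
reject it:

$s = "\x1B\x24\x42\x24\x22"; // ESC $ B, then 0x24 0x22 ('あ'), no return to ASCII
var_dump(mb_convert_encoding($s, "UTF-8", "ISO-2022-JP")); // conversion still works
var_dump(mb_check_encoding($s, "ISO-2022-JP"));            // expected to fail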
The behavior of the new mb_encode_mimeheader implementation closely
follows the old implementation, except for three points:
• The old implementation was missing a call to the mbfl_convert_filter
flush function. So it would sometimes truncate the input string just
before its end.
• The old implementation would drop zero bytes when QPrint-encoding.
So for example, if you tried to QPrint-encode the UTF-32BE string
"\x00\x00\x12\x34", its QPrint-encoding would be "=12=34", which
does not decode to a valid UTF-32BE string. This is now fixed.
• In some rare corner cases, the new implementation will choose to
Base64-encode or QPrint-encode the input string, where the old
implementation would have just added newlines to it. Specifically,
this can happen when there is a non-space ASCII character, followed
by a large number of ASCII spaces, followed by a non-ASCII character.
The new implementation is around 2.5-8x faster than the old one,
depending on the text encoding and transfer encoding used. Performance
gains are greater with Base64 transfer encoding than with QPrint
transfer encoding; this is not because QPrint-encoding bytes is slow,
but because QPrint-encoded output is much bigger than Base64-encoded
output and takes more lines, so we have to go through the process of
finding the right place to break a line many more times.
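Typical usage, for reference; the third argument selects the transfer
encoding, "B" for Base64 (where the speedup is largest) or "Q" for
quoted-printable:

mb_internal_encoding("UTF-8");
$subject = "件名: テスト Subject with mixed ASCII and non-ASCII text";
echo mb_encode_mimeheader($subject, "UTF-8", "B"), "\n";
echo mb_encode_mimeheader($subject, "UTF-8", "Q"), "\n";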