While __php_mempcpy is only used by ext/standard/crypt_sha*, the
mempcpy "pattern" is used everywhere.
This commit removes __php_mempcpy, adds zend_mempcpy and transforms
open-coded parts into function calls.
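For reference, a mempcpy-style helper is just memcpy that returns a pointer
past the end of the copied data instead of the start of the destination,
which is what makes consecutive copies easy to chain. A minimal sketch of
the idea (the actual zend_mempcpy declaration may differ):

    #include <string.h>

    /* Sketch: copy n bytes and return the position just past the copied
     * data, so that successive copies can be chained. */
    static inline void *sketch_mempcpy(void *dest, const void *src, size_t n)
    {
        return (char *) memcpy(dest, src, n) + n;
    }

    /* The open-coded pattern this replaces typically looks like:
     *     memcpy(p, src, n); p += n;
     */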
Thanks to Maurício Fauth for finding and reporting this bug.
The bug was introduced in October 2022. It originally only affected
text encodings which do not have a fixed byte width per character
and for which mbstring does not have an mblen_table. However, I recently
made another change to mbstring, such that mb_substr no longer relies
on the mblen_table even if one is available. Because of this change,
the bug introduced in October 2022 now affects a greater
number of text encodings, including UTF-8.
For GB18030, it is not generally possible to identify character
boundaries without scanning through the entire string. Therefore,
implement mb_strcut using a strategy similar to the mblen_table-based
implementation in mbstring.c. The difference is that for GB18030, we
need to look at two leading bytes to determine the byte length of a
multi-byte character.
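To illustrate, here is a sketch of how the byte length of a GB18030
character can be read off its first two bytes; this just reflects the
general structure of GB18030 (1-, 2-, and 4-byte characters), and the
helper name and error handling are made up for the example:

    #include <stddef.h>

    /* Sketch: byte length of one GB18030 character, judged from its first
     * two bytes. Characters are 1 byte (0x00-0x7F), 2 bytes (lead 0x81-0xFE,
     * trail 0x40-0xFE except 0x7F), or 4 bytes (lead 0x81-0xFE, trail
     * 0x30-0x39). Invalid or truncated input is treated as a single byte so
     * the caller always makes progress. */
    static size_t gb18030_char_length(const unsigned char *p, size_t remaining)
    {
        if (p[0] < 0x80) {
            return 1;
        }
        if (p[0] >= 0x81 && p[0] <= 0xFE && remaining >= 2) {
            if (p[1] >= 0x30 && p[1] <= 0x39) {
                return 4;
            }
            if ((p[1] >= 0x40 && p[1] <= 0x7E) || (p[1] >= 0x80 && p[1] <= 0xFE)) {
                return 2;
            }
        }
        return 1;
    }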
The new implementation is 4-5x faster for short strings, and more than
10x faster for long strings. (Part of the reason why this new code has
such a great performance advantage is because it is replacing code
based on the older text conversion filters provided by libmbfl, which
were quite slow.)
The behavior is the same as before for valid GB18030 strings; for
some invalid strings, mb_strcut will choose different 'cut' points
as compared to before. (Clang's libFuzzer was used to compare the
old and new implementations on valid GB18030 strings, searching for
cases where they behaved differently; no such cases were found.)
Starting many years ago, libmbfl included a 'mblen_table' for selected
text encodings. This table allows looking up the byte length of a
(possibly multi-byte) character from the value of the first byte.
libmbfl uses these tables to optimize certain operations; if a
text-processing operation can be performed using an mblen_table,
it may not be necessary to decode the text to codepoints. Since
libmbfl's decoding filters are generally slow, this improves
performance.
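As a sketch of the idea (names here are only illustrative), stepping through
a string character by character with such a table looks roughly like this:

    #include <stddef.h>

    /* Sketch: walk a multi-byte string using an mblen_table, which maps the
     * first byte of a character to that character's total byte length.
     * No decoding to codepoints is needed. */
    static size_t count_chars_with_table(const unsigned char *s, size_t len,
                                         const unsigned char mblen_table[256])
    {
        size_t chars = 0, i = 0;
        while (i < len) {
            i += mblen_table[s[i]]; /* 'bite off' one character */
            chars++;
        }
        return chars;
    }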
Since mbstring is (or was) based on libmbfl, it has always used
these mblen_tables to implement some functions. This design has a
significant downside. Let me explain:
While some mbstring functions are implemented by converting input text
to codepoints and operating on the codepoints, others operate directly
on the original input bytes (using an mblen_table to identify character
boundaries). Both of these implementation styles, if correctly coded,
yield equivalent results on valid strings. However, on strings which
contain encoding errors, the results are often different.
When decoding byte strings to codepoints using some text encoding,
mbstring uses the non-existent codepoint 0xFFFFFFFF to represent a
byte sequence which cannot be decoded. Then, when mbstring indexes into
the resulting sequence of codepoints, the index of any particular
character depends on the number of such 'error markers' which were
produced during the decoding process. In contrast, when an mblen_table
is used to split a byte sequence into characters, there is no question
of counting encoding errors; rather, table lookups into the mblen_table
are used to repeatedly 'bite off' some number of bytes (which are
treated as one 'character'). In the presence of encoding errors, these
two methods of mapping between byte indices and character indices are
inherently different and will rarely agree.
(For completeness, it must be said that some internal mbstring code
which operates only on UTF-8 text uses a third method for mapping
between byte indices and character indices, that is: counting
non-continuation UTF-8 bytes, which are all bytes whose binary
representation is NOT like 0b10xxxxxx. This method happens to agree with
the method which involves decoding the input text to codepoints and then
counting the codepoints.)
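A sketch of that third method: a byte is a UTF-8 continuation byte exactly
when its top two bits are 10, so character positions can be found by counting
the bytes which do not have that form:

    #include <stddef.h>

    /* Sketch: count 'characters' in possibly-invalid UTF-8 by counting
     * bytes which are NOT continuation bytes (0b10xxxxxx). */
    static size_t count_noncontinuation_bytes(const unsigned char *s, size_t len)
    {
        size_t count = 0;
        for (size_t i = 0; i < len; i++) {
            if ((s[i] & 0xC0) != 0x80) {
                count++;
            }
        }
        return count;
    }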
I have been aware of this issue for years, but only recently became
aware that in the case of mb_strstr, mb_strpos, and mb_substr,
this issue can cause seriously unintuitive behavior (and even security
vulnerabilities). This was reported by Stefan Schiller.
Stefan Schiller shared the following example for mb_strstr:
var_dump(mb_strstr("\xf0start", "start", false, "UTF-8"));
// string(2) "rt"
Similarly, when mb_strpos and mb_substr are used to identify and
extract a substring from a string with encoding errors, Stefan Schiller
pointed out that the extracted portion may be completely different than
desired. This is because (for UTF-8 strings) mb_strpos works by counting
non-continuation bytes, but mb_substr uses an mblen_table.
Since some mbstring functions *cannot* be implemented using an
mblen_table, as long as mblen_tables are used, similar inconsistencies
cannot be totally avoided. But the mblen_tables are critical to
mbstring's performance. Or are they? Benchmarking mb_substr on various
UTF-8, SJIS, and EUC-JP strings revealed something interesting.
On all SJIS and EUC-JP test cases, mb_substr was slightly faster when
the mblen_table based code was deleted. For some UTF-8 test cases, the
mblen_table-based code was a tiny bit faster, while for others the
fallback code was a touch faster; in no case was the difference
significant.
Therefore, the simple fix is to delete the mblen_table-based
implementation of mb_substr.
Aside from making the function behave consistently with other mbstring
functions on invalid strings, there is ONE case where behavior is now
different on valid strings: that is, on SJIS-Mac (MacJapanese) strings
which contain any of the following code units:
0x85AB-0x85AD, 0x85BF, 0x85C0, 0x85C1, 0x8645, 0x864B, 0x865D, 0x869E,
0x86CE, 0x86D3-0x86D5, 0x86D6, 0x8971, 0x8792, 0x879D, 0x87FB, 0x87FC,
0xEB41, 0xEB42, 0xEB50, 0xEB5B, 0xEB5D, 0xEB60-0xEB6E, and all from
0xEB81 and above.
All of these SJIS-Mac code units share the (very unusual) property that
they do not correspond to any one Unicode codepoint. When converting
from SJIS-Mac to Unicode, these must be converted to 2, 3, 4, or 5
codepoints each.
The previous, mblen_table-based implementation of mb_substr would treat
all of these SJIS-Mac byte sequences as 'one character'. Now, they are
treated as multiple characters (one for each of the Unicode codepoints
which they decode to). The new behavior is more consistent with other
mbstring functions.
I don't know if SJIS-Mac users will like this change or not (probably
most will never notice), but the BC break is justified by the very
real security impact of the previous, inconsistent behavior.
Finally, I should comment on whether similar changes are needed
elsewhere. The remaining functions which use an mblen_table are:
mb_str_split, mb_strcut, and various search functions (such as
mb_strpos). The search functions are only affected now when they
receive a positive 'offset' parameter specifying where to start
searching from.
The search functions should definitely be fixed so they do not use
an mblen_table to implement the 'offset' parameter. I am not convinced
that there is any good reason to change mb_str_split and mb_strcut.
This internal function is used to implement mb_strstr, mb_stristr,
mb_strrchr, mb_strrichr, mb_substr, mb_strimwidth, mb_trim, and
mb_str_pad. All of these functions will be faster if we return
early when a zero-length "substring" is requested.
For legacy text encodings where mb_strcut is implemented using an
mblen_table (such as the various SJIS variants), mb_strcut is now
~30% faster on small strings (about 10 bytes). This is because we
are now avoiding an extra, unnecessary copy operation on the
output string.
When used on large strings, the difference in performance is
negligible, as almost the entire runtime is spent stepping through
the string to find the starting and ending cut points.
On microbenchmarks run on my dev machine, mb_strcut is now ~50% faster
for fixed-byte-length text encodings like ASCII. (This is because the
previous code did an extra, unnecessary copy operation on the
resulting output string.)
* Fast path for when there is nothing to trim in mb_trim
* Make mb_trim decide between linear search vs hash table lookup
Empirical testing on my i7-4790 showed that the hash table approach
becomes faster once there are more than 4 code points in the trim
characters, when evaluated on the worst case.
This patch changes the logic so that a hash table is used for a large
number of trim characters, and linear search when the number of trim
characters is <= 4.
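As an illustration of that dispatch, the shape is roughly the following;
the names, the open-addressing hash set, and the exact threshold handling
are only a sketch, not the actual mbstring code:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Empirically, a linear scan wins for up to 4 trim code points. */
    #define TRIM_LINEAR_THRESHOLD 4

    /* Sketch of a hash-set membership test: open addressing over a
     * power-of-two table; UINT32_MAX marks an empty slot (valid codepoints
     * never equal it). The table must contain at least one empty slot. */
    static bool trim_set_contains(const uint32_t *table, size_t table_size, uint32_t cp)
    {
        size_t mask = table_size - 1;
        for (size_t idx = cp & mask; table[idx] != UINT32_MAX; idx = (idx + 1) & mask) {
            if (table[idx] == cp) {
                return true;
            }
        }
        return false;
    }

    /* Few trim characters: linear search. Many: hash table lookup. */
    static bool is_trim_codepoint(uint32_t cp, const uint32_t *trim_cps, size_t n,
                                  const uint32_t *hash_table, size_t hash_size)
    {
        if (n <= TRIM_LINEAR_THRESHOLD) {
            for (size_t i = 0; i < n; i++) {
                if (trim_cps[i] == cp) {
                    return true;
                }
            }
            return false;
        }
        return trim_set_contains(hash_table, hash_size, cp);
    }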
This has been the case at least since PHP 5.4. Thanks to Girgias for
pointing it out.
It appears that there are several global variables internal to mbstring
which can be queried via mb_get_info() and which could be NULL, but
at the very least, we know that "mbstring.http_input" is one of them.
* Use binary search for cp932ext*_ucs_table lookups
A large amount of time is spent doing a linear search through these
tables in the CP932 encoding. Instead, we can add sorted versions of
these tables which also store each entry's index in the unsorted
version, and perform a binary search on the sorted versions.
This reduces the time spent from 1.54s to 0.91s for the reference
benchmark [1].
[1] https://github.com/php/php-src/issues/12684#issuecomment-1813799924
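The scheme is roughly the following sketch; the struct layout and names are
illustrative, not the actual generated tables:

    #include <stddef.h>
    #include <stdint.h>

    /* Sketch: a sorted copy of a mapping table where each entry also stores
     * its index in the original, unsorted table. A binary search over the
     * sorted copy replaces the linear scan, and the returned index can still
     * be used with the original table. */
    typedef struct {
        uint16_t code;  /* value to search for */
        uint16_t index; /* index into the original, unsorted table */
    } sorted_entry;

    static int find_in_sorted_table(const sorted_entry *tbl, size_t n, uint16_t code)
    {
        size_t lo = 0, hi = n;
        while (lo < hi) {
            size_t mid = lo + (hi - lo) / 2;
            if (tbl[mid].code < code) {
                lo = mid + 1;
            } else {
                hi = mid;
            }
        }
        if (lo < n && tbl[lo].code == code) {
            return tbl[lo].index;
        }
        return -1; /* not found */
    }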
* Fix search bounds
mbfl_name2encoding() uses a linear loop through the encodings, comparing
the name one by one, which is very slow. For the benchmark [1] just
looking up the name takes about 50% of run-time.
By using perfect hashing instead, we no longer have to loop over the
list, and the number of string comparisons is reduced to just a single
one. The perfect hash table is generated using GNU gperf and then
amended by hand to fit in with mbstring and to reduce the cache size.
[1] https://github.com/php/php-src/issues/12684#issuecomment-1813799924
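To show the general shape of a gperf-style lookup (everything below is a
toy: the hash function, table, and ids are made up; the real table is
generated by gperf from mbstring's full list of encoding names and aliases):

    #include <stddef.h>
    #include <string.h>

    typedef struct { const char *name; int encoding_id; } name_entry;

    /* Toy key set where the name length alone happens to be a perfect hash. */
    static const name_entry name_table[8] = {
        [4] = { "SJIS", 1 },
        [5] = { "UTF-8", 2 },
        [6] = { "EUC-JP", 3 },
    };

    static unsigned toy_perfect_hash(const char *s, size_t len)
    {
        (void) s; /* a real gperf hash also mixes in selected characters */
        return (unsigned) (len & 7); /* collision-free for the keys above */
    }

    /* At most one string comparison, instead of a loop over all encodings. */
    static int lookup_encoding(const char *name)
    {
        const name_entry *e = &name_table[toy_perfect_hash(name, strlen(name))];
        if (e->name != NULL && strcmp(e->name, name) == 0) {
            return e->encoding_id;
        }
        return -1;
    }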
Similar to the fast, specialized mb_strcut implementation for UTF-8
in 1f0cf133db, this new implementation of mb_strcut for UTF-16 strings
just examines a few bytes before each cut point.
Even for short strings, the new implementation is around 2x faster.
For strings around 10,000 bytes in length, it comes out about 100-500x
faster in my microbenchmarks.
The new implementation behaves identically to the old one on valid
UTF-16 strings; a fuzzer was used to help verify this.
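The core of the idea is sketched below for big-endian UTF-16; the function
name is invented for the example, and the real implementation will differ
(for instance in how byte order is handled):

    #include <stddef.h>
    #include <stdint.h>

    /* Sketch: adjust a byte offset so it does not split a UTF-16BE character.
     * Only a couple of bytes near the cut point are examined: align the
     * offset to a 2-byte code unit, then back up one unit if the cut would
     * land on the trailing (low) half of a surrogate pair. */
    static size_t utf16be_adjust_cut(const unsigned char *s, size_t len, size_t pos)
    {
        if (pos > len) {
            pos = len;
        }
        pos &= ~(size_t)1; /* code units are 2 bytes wide */
        if (pos >= 2 && pos + 1 < len) {
            uint16_t unit = ((uint16_t) s[pos] << 8) | s[pos + 1];
            if (unit >= 0xDC00 && unit <= 0xDFFF) {
                pos -= 2; /* don't cut in the middle of a surrogate pair */
            }
        }
        return pos;
    }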
This bug was introduced in cb840799b4.
Thanks to Ignace Nyamagana Butera for discovering this bug and
to Sebastian Bergmann for doing an initial investigation and opening
a bug ticket.
...So conditionally including code which uses __builtin_usub_overflow
(for performance) if the macro is defined is not correct. We also need
to check if the macro is defined as a non-zero value.
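A sketch of the distinction (the macro name here is illustrative; the point
is that the build system may define it to either 1 or 0):

    /* Wrong: this is true even when the macro is defined as 0 */
    #ifdef PHP_HAVE_BUILTIN_USUB_OVERFLOW
    /* ... code using __builtin_usub_overflow() ... */
    #endif

    /* Right: the macro must be defined AND have a non-zero value */
    #if defined(PHP_HAVE_BUILTIN_USUB_OVERFLOW) && PHP_HAVE_BUILTIN_USUB_OVERFLOW
    /* ... code using __builtin_usub_overflow() ... */
    #endif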
Apparently this broke the build for a user whose C compiler is GCC
4.9.4. Sorry, user! That was my fault!
Thanks to Jakub Zelenka for reporting the issue.
The old implementation runs through the entire string to pick out the
part which should be returned by mb_strcut. This creates significant
performance overhead. The new specialized implementation of mb_strcut
for UTF-8 usually only examines a few bytes around the starting and
ending cut points, meaning it generally runs in constant time.
For UTF-8 strings just a few bytes long, the new implementation is
around 10% faster (according to microbenchmarks which I ran locally).
For strings around 10,000 bytes in length, it is 50-300x faster.
(Yes, that is 300x and not 300%.)
The new implementation behaves identically to the old one on VALID
UTF-8 strings; a fuzzer was used to help ensure this is the case.
On invalid UTF-8 strings, there is a difference: in some cases, the
old implementation will pass invalid byte sequences through unchanged,
while in others it will remove them. The new implementation has
behavior which is perhaps slightly more predictable: it simply backs
up the starting and ending cut points to the preceding "starter
byte" (one which is not a UTF-8 continuation byte).
These unit tests cover situations which were not previously tested by the
mbstring test suite. Adding them will make the test suite more complete.
To be specific, the 'obscure' case which we are now testing is: what happens
when the first half of a surrogate pair appears at the end of an improperly
terminated Base64 section in UTF7-IMAP text?
I don't believe such a buffer overrun will ever occur, but just in
case the code is changed in the future, it will be good to have an
assertion here to help catch bugs. (A similar assertion is already
used in the UTF-7 version of this function.)
We need to remove the value from the GC buffer before freeing it. Otherwise,
shutdown will hit a use-after-free when running the GC. Do that by switching
from zend_hash_destroy to zend_array_destroy, which should also be faster for
freeing members due to the inlining of i_zval_ptr_dtor.
Closes GH-11822