Commit graph

2382 commits

Author SHA1 Message Date
Alex Dowad
a90358639d Implement conditional casing for Greek letter sigma when title-casing text 2023-01-12 17:41:11 +02:00
Alex Dowad
290efe842d Adjust code which checks if encoding is ISO-8859-9 when converting case
Instead of checking the 'encoding number' to see if we are converting
case for ISO-8859-9 text, compare pointers.

This should free up 1 register in php_unicode_convert_case.
2023-01-12 17:41:11 +02:00
Alex Dowad
39b46a5398 Implement Unicode conditional casing rules for Greek letter sigma
The capital Greek letter sigma (Σ) should be lowercased as σ except
when it appears at the end of a word; in that case, it should be
lowercased as the special form ς.

This rule is included in the Unicode data file SpecialCasing.txt.
The condition for applying the rule is called "Final_Sigma" and is
defined in Unicode technical report 21. The rule is:

• For the special casing form to apply, the capital letter sigma must
  be preceded by 0 or more "case-ignorable" characters, preceded by
  at least 1 "cased" character.
• Further, capital sigma must NOT be followed by a sequence of 0 or
  more case-ignorable characters and then at least 1 cased character.

"Case-ignorable" characters include certain punctuation marks, like
the apostrophe, as well as various accent marks. There are actually
close to 500 different case-ignorable characters, including accent marks
from Cyrillic, Hebrew, Armenian, Arabic, Syriac, Bengali, Gujarati,
Telugu, Tibetan, and many other alphabets. This category also includes
zero-width spaces, codepoints which indicate RTL/LTR text direction,
certain musical symbols, etc.

Since the rule involves scanning over "0 or more" of such
case-ignorable characters, it may be necessary to scan arbitrarily far
to the left and right of capital sigma to determine whether the special
lowercase form should be used or not. However, since we are trying to
be both memory-efficient and CPU-efficient, this implementation limits
how far to the left we will scan: up to 63 characters, looking for
a "cased" character, but no further.

When scanning to the right, we go up to the end of the string if
necessary, even if it means scanning over thousands of characters.

Anyways, it is almost impossible to imagine that natural text will
include "words" with more than 63 successive apostrophes (for example)
followed by a capital sigma.
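
To illustrate, here is a sketch of the expected userland behavior
(assuming UTF-8 input; "\u{03C2}" is ς and "\u{03C3}" is σ):

    <?php
    // Σ at the end of a word lowercases to the final form ς.
    var_dump(mb_strtolower('ΟΔΟΣ', 'UTF-8') === "οδο\u{03C2}");   // bool(true)
    // Σ in the middle of a word lowercases to the ordinary σ.
    var_dump(mb_strtolower('ΣΟΦΙΑ', 'UTF-8') === "\u{03C3}οφια"); // bool(true)
    // An apostrophe is case-ignorable, so it does not block the rule.
    var_dump(mb_strtolower("ΟΔΟ'Σ", 'UTF-8') === "οδο'\u{03C2}"); // bool(true)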

Closes GH-8096.
2023-01-12 17:41:11 +02:00
Alex Dowad
4427b2e1ab Mark UTF-8 strings emitted by mbstring functions as valid UTF-8
We now have a couple of mbstring functions which have fast paths for
strings marked as 'valid UTF-8', and we will likely add more. So
that these fast paths can be used more frequently, mark UTF-8 strings
emitted by mbstring as 'valid UTF-8'. This is always a correct thing
to do, because mbstring never returns invalid UTF-8 as the result of
a conversion (or similar) operation.

Internally, we do have a conversion mode which deliberately emits
invalid UTF-8 in some cases. (This is done to prevent unwanted matches
when we are converting strings to UTF-8 before performing matching
operations on them.) For such strings, don't set the 'valid UTF-8' flag.
It probably wouldn't hurt anything to set it, because strings generated
using that special conversion mode should *never* be returned to
userland, and I don't think we do anything with them which cares about
the IS_STR_VALID_UTF8 flag... but still, it would likely cause
confusion for developers.
2023-01-11 17:08:27 +02:00
Max Kellermann
308fd311ea ext/{standard,json,random,...}: add missing includes 2023-01-10 14:19:03 +00:00
Alex Dowad
b4cbaabd9b Add fast SSE2-based implementation of mb_strlen for known-valid UTF-8 strings
One small piece of this was obtained from Stack Overflow. According to
Stack Overflow's Terms of Service, all user-contributed code on SO is
provided under a Creative Commons license. I believe this license is
compatible with the code being included in PHP.

Benchmarking results (UTF-8 only, for strings which have already been
checked using mb_check_encoding):

For very short (0-5 byte) strings, mb_strlen is 12% faster.
The speedup grows with the length of the input string; for strings
around 100KB, mb_strlen is 23 times faster.

Currently the 'fast' code is gated behind a GC flag check which ensures
it is only used on strings which have already been checked for UTF-8
validity. This is because the accelerated code will return different
results on some invalid UTF-8 strings.
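
The gating pattern looks like this in userland (a sketch based on the
benchmark setup described above; $input is a placeholder):

    <?php
    if (mb_check_encoding($input, 'UTF-8')) {
        // $input is now known to be valid UTF-8, so this call can
        // take the SSE2-accelerated path.
        $len = mb_strlen($input, 'UTF-8');
    }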
2023-01-09 07:50:40 +02:00
Alex Dowad
092ad3e462 Optimize branch structure of UTF-8 decoder routine
I like the asm which gcc -O3 generates on this modified code...
and guess what: my CPU likes it too!

(The asm is noticeably tighter, without any extra operations in the
path which dispatches to the code for decoding a 1-byte, 2-byte,
3-byte, or 4-byte character. It's just CMP, conditional jump, CMP,
conditional jump, CMP, conditional jump.

...Though I was admittedly impressed to see gcc could implement the
boolean expression `c >= 0xC2 && c <= 0xDF` with just 3 instructions:
add, CMP, then conditional jump. Pretty slick stuff there, guys.)

Benchmark results:

UTF-8, short - to UTF-16LE  faster by 7.36% (0.0001 vs 0.0002)
UTF-8, short - to UTF-16BE  faster by 6.24% (0.0001 vs 0.0002)
UTF-8, medium - to UTF-16BE faster by 4.56% (0.0003 vs 0.0003)
UTF-8, medium - to UTF-16LE faster by 4.00% (0.0003 vs 0.0003)
UTF-8, long - to UTF-16BE   faster by 1.02% (0.0215 vs 0.0217)
UTF-8, long - to UTF-16LE   faster by 1.01% (0.0209 vs 0.0211)
2023-01-08 17:27:19 +02:00
Alex Dowad
d8b5b9fa55 Add unit tests for mb_str_split/mb_substr on MacJapanese encoding
MacJapanese has a somewhat unusual feature that when mapped to
Unicode, many characters map to sequences of several codepoints.
Add test cases demonstrating how mb_str_split and mb_substr behave in
this situation.

When adding these tests, I found the behavior of mb_substr was wrong
due to an inconsistency between the string "length" as measured by
mb_strlen and the number of native MacJapanese characters which
mb_substr would count when iterating over the string using the
mblen_table. This has been fixed.

I believe that mb_strstr will also return wrong results in some cases
for MacJapanese. I still need to come up with unit tests which
demonstrate the problem and figure out how to fix it.
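
In userland terms, the cases being probed look roughly like this
($macJapanese is a placeholder for a MacJapanese string containing
one of the multi-codepoint characters; see the tests for concrete
byte sequences):

    <?php
    // How do 1-character pieces relate to mb_strlen when one native
    // character maps to several codepoints?
    $pieces = mb_str_split($macJapanese, 1, 'SJIS-mac');
    var_dump(count($pieces), mb_strlen($macJapanese, 'SJIS-mac'));
    // And does mb_substr cut at the same boundaries?
    var_dump(mb_substr($macJapanese, 0, 1, 'SJIS-mac'));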
2023-01-08 17:23:47 +02:00
Alex Dowad
cca4ca6d3d Remove 'fast path' using mblen_table from mb_get_strlen (it's actually a slow path)
Various mbstring legacy text encodings have what is called an 'mblen_table';
a table which gives the length of a multi-byte character using a lookup on
the first byte value. Several mbstring functions have a 'fast path' which uses
this table when it is available.
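
(For reference, this 'fast path' amounts to a walk like the following
sketch, a rough PHP transcription of the C loop; the table contents
are per-encoding data, with every entry at least 1:)

    <?php
    // Count characters by hopping from first byte to first byte,
    // using a 256-entry table of character lengths.
    function strlen_via_mblen_table(string $s, array $mblen_table): int {
        $chars = 0;
        for ($i = 0, $n = strlen($s); $i < $n; $chars++) {
            $i += $mblen_table[ord($s[$i])]; // length of this character
        }
        return $chars;
    }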

However, it turns out that iterating through a string using the mblen_table
is surprisingly slow. I found that deleting this 'fast path' from
mb_strlen makes it a few percent slower on very small strings
(0-5 bytes), but yields very large performance gains on medium to
long input strings.

Part of the reason for this is that our text decoding filters are so much
faster now.

Here are some benchmarks:

    EUC-KR, short (0-5 chars)        - master faster by 11.90% (0.0000 vs 0.0000)
    EUC-JP, short (0-5 chars)        - master faster by 10.88% (0.0000 vs 0.0000)
    BIG-5, short (0-5 chars)         - master faster by 10.66% (0.0000 vs 0.0000)
    UTF-8, short (0-5 chars)         - master faster by 8.91% (0.0000 vs 0.0000)
    CP936, short (0-5 chars)         - master faster by 6.27% (0.0000 vs 0.0000)
    UHC, short (0-5 chars)           - master faster by 5.38% (0.0000 vs 0.0000)
    SJIS, short (0-5 chars)          - master faster by 5.20% (0.0000 vs 0.0000)

    UTF-8, medium (~100 chars)       - new faster by 127.51% (0.0004 vs 0.0002)
    UTF-8, long (~10000 chars)       - new faster by 87.94% (0.0319 vs 0.0170)
    UTF-8, very long (~100000 chars) - new faster by 88.25% (0.3199 vs 0.1699)

    SJIS, medium (~100 chars)        - new faster by 208.89% (0.0004 vs 0.0001)
    SJIS, long (~10000 chars)        - new faster by 253.57% (0.0319 vs 0.0090)

    CP936, medium (~100 chars)       - new faster by 126.08% (0.0004 vs 0.0002)
    CP936, long (~10000 chars)       - new faster by 200.48% (0.0319 vs 0.0106)

    EUC-KR, medium (~100 chars)      - new faster by 146.71% (0.0004 vs 0.0002)
    EUC-KR, long (~10000 chars)      - new faster by 212.05% (0.0319 vs 0.0102)

    EUC-JP, medium (~100 chars)      - new faster by 186.68% (0.0004 vs 0.0001)
    EUC-JP, long (~10000 chars)      - new faster by 295.37% (0.0320 vs 0.0081)

    BIG-5, medium (~100 chars)       - new faster by 173.07% (0.0004 vs 0.0001)
    BIG-5, long (~10000 chars)       - new faster by 269.19% (0.0319 vs 0.0086)

    UHC, medium (~100 chars)         - new faster by 196.99% (0.0004 vs 0.0001)
    UHC, long (~10000 chars)         - new faster by 256.39% (0.0323 vs 0.0091)

This does raise the question: is using the 'mblen_table' worthwhile for
other mbstring functions, such as mb_str_split? The answer is yes.
While mb_strlen only needs to decode the input string, mb_str_split
(when implemented using the conversion filters) must both decode the
string and then re-encode it, so there is more potential to gain
performance from the 'mblen_table'. Benchmarking shows that in a
few cases, mb_str_split becomes faster when the 'mblen_table fast path'
is deleted, but in the majority of cases, it becomes slower.
2023-01-08 17:23:47 +02:00
Alex Dowad
3ab72a4357 Merge branch 'PHP-8.2'
* PHP-8.2:
  Use different mblen_table for different SJIS variants
  Correct entry for 0x80,0xFD-FF in SJIS multi-byte character length table
2023-01-06 14:34:10 +02:00
Alex Dowad
1751f34cfa Merge branch 'PHP-8.1' into PHP-8.2
* PHP-8.1:
  Use different mblen_table for different SJIS variants
  Correct entry for 0x80,0xFD-FF in SJIS multi-byte character length table
2023-01-06 14:13:21 +02:00
Alex Dowad
3152b7b26f Use different mblen_table for different SJIS variants 2023-01-06 14:09:43 +02:00
Alex Dowad
d104481af8 Correct entry for 0x80,0xFD-FF in SJIS multi-byte character length table
As a performance optimization, mbstring implements some functions using
tables which give the (byte) length of a multi-byte character using a
lookup based on the value of the first byte. These tables are called
`mblen_table`.

For many years, the mblen_table for SJIS has had '2' in position 0x80.
That is wrong; it should have been '1'. Reasons:

For SJIS, SJIS-2004, and mobile variants of SJIS, 0x80 has never been
treated as the first byte of a 2-byte character. It has always been
treated as a single erroneous byte. On the other hand, 0x80 is a valid
character in MacJapanese... but a 1-byte character, not a 2-byte one.

The same applies to bytes 0xFD-FF; these are 1-byte characters in
MacJapanese, and in other SJIS variants, they are not valid (as the
first byte of a character).
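
In terms of observable behavior, the corrected entries line up with
the following (a sketch; 'SJIS-mac' is mbstring's name for the
MacJapanese variant):

    <?php
    var_dump(mb_check_encoding("\x80", 'SJIS')); // bool(false): an erroneous byte
    var_dump(mb_strlen("\x80", 'SJIS-mac'));     // int(1): a 1-byte character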

Thanks to the GitHub user 'youkidearitai' for finding this problem.
2023-01-05 14:05:39 +02:00
Alex Dowad
204694cc71 Optimize out more checks from hot path for BIG5 decoding
This boosts the speed of BIG5 encoding conversion by just 1-2%.

I tried various other tweaks to the BIG5 decoding routine to see if
I could make it faster at the cost of using a larger conversion table,
but at least on the machine I am using for benchmarking, these other
changes just made things slower.
2023-01-05 08:05:05 +02:00
Alex Dowad
d75c78b0c8 Optimize out checks in hot path for SJIS decoding
This gives about a 20% speed boost when converting SJIS to some other
encoding.
2023-01-05 08:04:58 +02:00
Alex Dowad
9c283850fb Optimize out another bounds check in BIG5 decoder
This gives about a 9% speed boost for BIG5 encoding conversion.
(Not as much as I was hoping!)
2023-01-05 08:04:51 +02:00
Alex Dowad
e837a8800b Optimize another check out of hot path for UHC decoding
This gives about another 8-9% speed boost to UHC decoding.
2023-01-04 21:58:27 +02:00
Alex Dowad
a76658b329 Optimize out bounds check in UHC decoder
This gives a 25% speed boost for conversion operations on long strings
(~10,000 codepoints). For shorter strings, the speed boost is smaller
(as the input shrinks, the gain is increasingly swamped by the
overhead of entering and exiting the conversion function).

When benchmarking string conversion speed, we are measuring not only
the speed of the decoder, but also the time which it takes to re-encode
the string in another encoding like UTF-8 or UTF-16. So the performance
increase for functions which only need to decode but not re-encode the
input string will be much more than 25%.
2023-01-04 21:58:27 +02:00
Alex Dowad
ffbddc4848 Optimize conversion of GB18030 to Unicode
As with CP936, iterating over the PUA table and looking for matches in
it was a significant bottleneck for GB18030 decoding (though not as
severe a bottleneck as for CP936, since more is involved in GB18030
decoding than CP936 decoding).

Here are some benchmark results after optimizing out that bottleneck:

    GB18030, medium - to UTF-16BE - faster by 60.71% (0.0007 vs 0.0017)
    GB18030, medium - to UTF-8    - faster by 59.88% (0.0007 vs 0.0017)
    GB18030, long - to UTF-8      - faster by 44.91% (0.0669 vs 0.1214)
    GB18030, long - to UTF-16BE   - faster by 43.05% (0.0672 vs 0.1181)
    GB18030, short - to UTF-8     - faster by 27.22% (0.0003 vs 0.0004)
    GB18030, short - to UTF-16BE  - faster by 26.98% (0.0003 vs 0.0004)

(The 'short' test strings had 0-5 codepoints each, 'medium' ~100
codepoints, and 'long' ~10,000 codepoints. For each benchmark, the
test harness cycled through all the test strings 40,000 times.)
2023-01-04 21:58:27 +02:00
Alex Dowad
703725e43b Optimize conversion of CP936 to Unicode
In the previous commit, the branch in mb_strlen which implements the
function using the mblen_table (when one is available) was removed.
This made mb_strlen faster for just about every legacy text encoding
which had an mblen_table... except for CP936, which became much slower.

This indicated that our decoding filter for CP936 was slow. I checked
and found iterating over the PUA table was a major bottleneck. After
optimizing that bottleneck out, benchmarks for text encoding conversion
speed were as follows:

    CP936, short - to UTF-8     - faster by 10.44% (0.0003 vs 0.0003)
    CP936, short - to UTF-16BE  - faster by 11.45% (0.0003 vs 0.0003)
    CP936, medium - to UTF-8    - faster by 139.09% (0.0012 vs 0.0005)
    CP936, medium - to UTF-16BE - faster by 140.34% (0.0013 vs 0.0005)
    CP936, long - to UTF-16BE   - faster by 215.88% (0.0538 vs 0.0170)
    CP936, long - to UTF-8      - faster by 232.41% (0.0528 vs 0.0159)

This does not fully express how much faster the CP936 decoder is now,
since these conversion benchmarks are not only measuring the speed of
decoding CP936, but then also re-encoding the codepoints as UTF-8 or
UTF-16.

For functions like mb_strlen, which just need to decode but not
re-encode the text, the gain in performance is much larger.
2023-01-04 21:58:27 +02:00
Alex Dowad
b15d0a9ba5 Remove redundant bounds check for lookup in BIG5 conversion table
For CP950 conversion, the bounds check is needed before doing a lookup
in big5_ucs_table, since the first byte of a CP950 multibyte
character can be up to 0xFE. For BIG5, we only accept 1st bytes up
to 0xF9, and it is not possible for the lookup to go out of bounds.
2023-01-04 18:18:37 +02:00
Alex Dowad
74319de2f9 Combine uhc1_ucs_table and uhc2_ucs_table for UHC/EUC-KR/ISO-2022-KR conversion
These two tables cover contiguous ranges of the KS X 1001 (KS C 5601)
charset. There seems to be no reason to divide them into two tables
instead of one.
2023-01-04 18:18:28 +02:00
Alex Dowad
ef114f94b9 Simplify code for conversion of UHC to Unicode
I was hoping to get some performance gains here, but the performance
is just the same as before, +/- a fraction of a percent.
2023-01-04 18:18:22 +02:00
Alex Dowad
3b5072f6f6 Use smart_str in mb_http_input rather than mbfl_memory_device
For many years, the code has contained a TODO comment indicating
that the original author had wanted to do this.

Using smart_str makes the code shorter and cleaner, and it is another
step towards removing a bunch of legacy mbstring code which will soon
be unneeded.
2023-01-03 09:10:13 +02:00
Alex Dowad
0e7160b836 Implement mb_detect_encoding using fast text conversion filters
Regarding the optional 3rd `strict` argument to mb_detect_encoding,
the documentation states:

  Controls the behaviour when string is not valid in any of the listed encodings.
  If strict is set to false, the closest matching encoding will be returned;
  if strict is set to true, false will be returned.

(Ref: https://www.php.net/manual/en/function.mb-detect-encoding.php)

Because of bugs in the implementation, mb_detect_encoding did not always
behave according to this description when `strict` was false.
For example:

  <?php
  echo var_export(mb_detect_encoding("\xc0\x00", "UTF-8", false));
  // Before this commit, prints: false
  // After this commit, prints: 'UTF-8'

Because `strict` is false in the above example, mb_detect_encoding
should return the 'closest matching encoding', which is UTF-8, since
that is the only candidate encoding. (Incidentally, this example shows
that using mb_detect_encoding with a single candidate encoding in
non-strict mode is useless.)
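
For comparison, strict mode behaves as the documentation quoted above
describes:

  <?php
  echo var_export(mb_detect_encoding("\xc0\x00", "UTF-8", true));
  // Prints: false (the string is not valid in any candidate encoding)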

The new implementation fixes this bug. It also fixes another problem
with the old implementation as regards non-strict detection mode:

The old implementation would stop processing of the input string using
a particular candidate encoding as soon as it saw an error in that
encoding, even in non-strict mode. This means that it could not really
detect the 'closest matching encoding'; rather, what it would return
in non-strict mode was 'the encoding in which the first decoding error
is furthest from the beginning of the input string'.

In non-strict mode, the new implementation continues trying to process
the input string to its end even after seeing an error. This makes it
possible to determine in which candidate encoding the string has the
smallest number of errors, i.e. the 'closest matching encoding'.

Rejecting candidate encodings as soon as it saw an error gave the old
implementation a marked performance advantage in non-strict mode;
however, the new implementation still beats it in most cases. Here are
a few sample microbenchmark results:

  UTF-8, ~100 codepoints, strict mode
  Old: 0.080s (100,000 calls)
  New: 0.026s ("       "    )

  UTF-8, ~100 codepoints, non-strict mode
  Old: 0.079s (100,000 calls)
  New: 0.033s ("       "    )

  UTF-8, ~10000 codepoints, strict mode
  Old: 6.708s (60,000 calls)
  New: 1.383s ("      "    )

  UTF-8, ~10000 codepoints, non-strict mode
  Old: 6.705s (60,000 calls)
  New: 3.044s ("      "    )

Notice that the old implementation had almost identical performance
between strict and non-strict mode, while the new suffers a significant
performance penalty for non-strict detection. This is the cost of
implementing the behavior specified in the documentation.

A couple more sample results:

  SJIS, ~10000 codepoints, strict mode
  Old: 4.563s
  New: 1.084s

  SJIS, ~10000 codepoints, non-strict mode
  Old: 4.569s
  New: 2.863s

This is the only case I found where the new implementation loses:

  UTF-16LE, ~10000 codepoints, non-strict mode
  Old: 1.514s
  New: 2.813s

The reason is that the test strings happened to be invalid within the
first few bytes for every candidate encoding except UTF-16LE, so the
old implementation would immediately reject all the other encodings
and process the entire string only in UTF-16LE.

I believe mb_detect_encoding could be made much faster if we identified
good criteria for when to reject candidate encodings before reaching
the end of the input string.
2023-01-03 09:10:10 +02:00
Alex Dowad
953864661a Implement php_mb_zend_encoding_converter using fast text conversion filters 2023-01-03 09:02:21 +02:00
Alex Dowad
88c99afdac Implement mb_str_split using fast text conversion filters
There is no great difference between the old and new code for text
encodings which either 1) use a fixed number of bytes per codepoint or
2) for which we have an 'mblen' table which enables us to find the
length of a multi-byte character using a table lookup indexed by the
first byte value.

The big difference is for other text encodings, where we have to
actually decode the string to split it. For such text encodings,
such as ISO-2022-JP and UTF-16, I measured a speedup of 50%-120% over
the previous implementation.
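
For reference, a quick sketch of the userland behavior being
reimplemented; only the machinery that finds character boundaries
differs:

    <?php
    var_dump(mb_str_split('日本語', 1, 'UTF-8') === ['日', '本', '語']); // bool(true)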
2023-01-03 09:02:21 +02:00
Alex Dowad
a9a672048b Implement mb_output_handler using fast text conversion filters 2023-01-03 09:02:21 +02:00
Alex Dowad
f40c3fca88 Improve mb_detect_encoding's recognition of Turkish text
Add 4 codepoints commonly used to write Turkish text to our table
of 'commonly used' Unicode codepoints. These are:

• U+011F LATIN SMALL LETTER G WITH BREVE
• U+0130 LATIN CAPITAL LETTER I WITH DOT ABOVE
• U+0131 LATIN SMALL LETTER DOTLESS I
• U+015F LATIN SMALL LETTER S WITH CEDILLA
2022-12-30 14:22:46 +02:00
Alex Dowad
8b37c4ea5e Merge branch 'PHP-8.2'
* PHP-8.2:
  Allow 'h' and 'k' flags to be combined for mb_convert_kana
2022-12-29 20:39:22 +02:00
Alex Dowad
f7a19181d7 Allow 'h' and 'k' flags to be combined for mb_convert_kana
The 'h' flag makes mb_convert_kana convert zenkaku hiragana to hankaku
katakana; 'k' makes it convert zenkaku katakana to hankaku katakana.

When working on the implementation of mb_convert_kana, I added some
additional checks to catch combinations of flags which do not make
sense; but there is no conflict between 'h' and 'k' (they control
conversions for two disjoint ranges of codepoints) and this combination
should not have been restricted.
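
For example (a sketch; both scripts come out as hankaku katakana):

    <?php
    // 'h': zenkaku hiragana -> hankaku katakana
    // 'k': zenkaku katakana -> hankaku katakana
    echo mb_convert_kana('あいう アイウ', 'hk', 'UTF-8'), "\n";
    // Expected output: ｱｲｳ ｱｲｳ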

Thanks to the GitHub user 'akira345' for reporting this problem.

Closes GH-10174.
2022-12-29 20:38:01 +02:00
Yuya Hamada
e0e587cdb8 mbstring: Do not stop when an mbstring test fails
I wanted to compare mbstring behavior between PHP 8.1 (or newer) and
PHP 8.0 (or older), but when I ported the phpt files from PHP 8.1 to
PHP 8.0, testing stopped at the first failure because of a die()
call. I want to build a complete list of failures, so the tests
should not stop there.

To run the full tests without this limit, set $testFailedLimit to -1
in encoding_tests.inc.
2022-12-19 16:29:17 +02:00
Alex Dowad
7f44559516 mb_str{i,}pos does not match illegal byte sequences against occurrences of mb_substitute_char
In GitHub issue 9613, it was reported that mb_strpos wrongly matches the
character '?' against any invalid string, even when the character '?'
clearly does not appear in the invalid string. This behavior has existed
at least since PHP 5.2.

The reason for the behavior is that mb_strpos internally converts the
haystack and needle to UTF-8 before performing a search. When converting
to UTF-8, regardless of the setting of mb_substitute_character, libmbfl
would use '?' as an error marker for invalid byte sequences. Once those
invalid input sequences were replaced with '?', then naturally, they
would match against occurrences of the actual character '?' (when it
appeared as a 'normal' character, not as an error marker). This would
happen regardless of whether the error was in the haystack and '?' was
used in the needle, or whether the error was in the needle and '?' was
used in the haystack.

Why would libmbfl use '?' rather than the mb_substitute_character set
by the user? Remember that libmbfl was originally a separate library
which was imported into the PHP codebase. mb_substitute_character is an
mbstring API function, not something built into libmbfl. When mbstring
would call into libmbfl, it would provide the error replacement
character to libmbfl as a parameter. However, when libmbfl would perform
conversion operations internally, and not because of a direct call from
mbstring, it would use its own error replacement character.

Example:

    <?php
    $questionMark = "\x00?";
    $badUTF16 = "\xDB\x00"; // half of a surrogate pair
    echo mb_strpos($questionMark, $badUTF16, 0, 'UTF-16BE'), "\n";
    echo mb_strpos($badUTF16, $questionMark, 0, 'UTF-16BE'), "\n";

Incidentally, this behavior does not occur if the text encoding is
UTF-8, because no conversion is needed in that case.

mb_stripos had a similar issue, but instead of always using '?' as an
error marker internally, it would use the selected
mb_substitute_character. So, for example, if the mb_substitute_character
was '%', then occurrences of '%' in the haystack would match invalid
bytes in the needle, and vice versa.

Example:

    <?php
    mb_substitute_character(0x25); // '%'
    $percent = "\x00%";
    $badUTF16 = "\xDB\x00"; // half of a surrogate pair
    echo mb_stripos($percent, $badUTF16, 0, 'UTF-16BE'), "\n";
    echo mb_stripos($badUTF16, $percent, 0, 'UTF-16BE'), "\n";

This behavior (of mb_stripos) still occurs even if the text encoding is
UTF-8, because case folding is still needed to make the search
case-insensitive.

It is not hard to think of scenarios where these strange and unintuitive
behaviors could cause security vulnerabilities. In the discussion on
GH issue 9613, Christoph Becker suggested that mb_str{i,}pos should
simply refuse to operate on invalid strings. However, this would almost
certainly break existing production code.

This commit mitigates the problem in a less intrusive way: it ensures
that while invalid haystacks can match invalid needles (even if the
specific invalid bytes are different), invalid bytes in the haystack
will never match '?' OR occurrences of the mb_substitute_character in
the needle, and vice versa.
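
With this change, the examples above stop matching; e.g. both
mb_strpos calls now return false instead of a position:

    <?php
    $questionMark = "\x00?";
    $badUTF16 = "\xDB\x00"; // half of a surrogate pair
    var_dump(mb_strpos($questionMark, $badUTF16, 0, 'UTF-16BE')); // bool(false)
    var_dump(mb_strpos($badUTF16, $questionMark, 0, 'UTF-16BE')); // bool(false)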

This does represent a backwards compatibility break, but a small one.
Since it mitigates a potential security problem, I believe this is
appropriate.

Closes GH-9613.
2022-12-18 15:31:20 +02:00
Alex Dowad
744ca16e73 Speed boost for mb_stripos (when not using UTF-8)
Instead of case-folding a string and then converting it to UTF-8 as a
separate operation, why not convert it to UTF-8 at the same time as
we fold case?

For non-UTF-8 encodings, this typically makes mb_stripos about 2x
faster.
2022-12-18 15:31:20 +02:00
Alex Dowad
b9cd1cdb4f Implement mb_substr_count using fast text conversion filters
The performance gain from this change depends on the text encoding and
input string size. For very small strings, other overheads tend to swamp
the performance gains to some extent, such that the speedup is less than
2x. For medium-length strings (~100 bytes or so), the speedup is
typically around 2.5x.

The greatest performance gains are for UTF-8 strings which have already
been marked as valid (using the GC flags on the zend_string object);
for those, the speedup is more than 10x in many cases.

The previous implementation first converted the haystack and needle to
wchars, then searched for matches between the two sequences of wchars.
Because we use -1 as an error marker when converting to wchars, error
markers from invalid byte sequences in the haystack would match error
markers from invalid byte sequences in the needle, even if the specific
invalid byte sequence was different. I am not sure whether this behavior
is really desirable or not, but anyways, this new implementation
follows the same behavior so as not to cause BC breaks.
2022-12-15 07:54:26 +02:00
Alex Dowad
e36c600a31 Optimize SJIS-Mobile#SOFTBANK decoder for speed
From my microbenchmarks, the new decoder makes encoding conversion
from SJIS-Mobile#SOFTBANK about 15-40% faster.
2022-12-12 16:28:49 +02:00
Alex Dowad
6bf0c44f48 Optimize SJIS-Mobile#KDDI decoder for speed
From my microbenchmarks, the new decoder makes encoding conversion
from SJIS-Mobile#KDDI about 30-50% faster.
2022-12-12 16:28:49 +02:00
Alex Dowad
43cdfa3190 Optimize SJIS-Mobile#DOCOMO decoder for speed
From my microbenchmarks, the new decoder makes encoding conversion
from SJIS-Mobile#DOCOMO about 15-20% faster.
2022-12-12 16:28:49 +02:00
Alex Dowad
4ebfddfad4 Move mobile variants of SJIS into mbfilter_sjis.c 2022-12-12 16:28:49 +02:00
Alex Dowad
005e49e552 Optimize MacJapanese decoder for speed
On longer MacJapanese strings, conversion speed is boosted by 60-80%.
On medium-length strings, conversion speed is boosted around 20-30%.
For very short strings, there is no appreciable difference.
2022-12-12 16:28:49 +02:00
Alex Dowad
4072a76e3f Move MacJapanese implementation into mbfilter_sjis.c 2022-12-12 16:28:49 +02:00
Alex Dowad
b3d197d688 Optimize SJIS decoder for speed
While benchmarking the new implementation of mb_substr, I found it was
slower than the old one only when the selected encoding was SJIS.
Investigation showed that the new text conversion filter for SJIS
was a touch slower than the old one.

With this optimization, the new SJIS decoder is about 20% faster than
the old one.
2022-12-12 16:28:49 +02:00
Alex Dowad
0c0774f5b4 Use fast text conversion filters for mb_strpos, mb_stripos, mb_substr, etc
This boosts the performance of mb_strpos, mb_stripos, mb_strrpos,
mb_strripos, mb_strstr, mb_stristr, mb_strrchr, and mb_strrichr when
used on non-UTF-8 strings. mb_substr is also faster.

With UTF-8 input, there is no appreciable difference in performance for
mb_strpos, mb_stripos, mb_strrpos, etc. This is expected, since the only
real difference here (aside from shorter and simpler code) is that the
new text conversion code is used when converting non-UTF-8 input strings
to UTF-8. (This is done because internally, mb_strpos, etc. work only
on UTF-8 text.)

For ASCII, speed is boosted by 30-65%. For other legacy text encodings,
the degree of performance improvement will depend on how slow the
legacy conversion code was.

One other minor, but notable difference is that strings encoded using
UTF-8 variants from Japanese mobile vendors (SoftBank, KDDI, Docomo)
will not undergo encoding conversion but will be processed "as is". It
is expected that this will result in a large performance boost for
such input strings; but realistically, the number of users who work
with such strings is probably minute.

I was not originally planning to include mb_substr in this commit, but
fuzzing of the reimplemented mb_strstr revealed that mb_substr needed
to be reimplemented, too; using the old mbfl_substr, which was based
on the old text conversion filters, in combination with functions which
use the new text conversion filters caused bugs.

The performance boost for mb_substr varies from 10%-500%, depending
on the encoding and input string used.
2022-12-12 16:28:49 +02:00
Alex Dowad
14110bff7f Merge branch 'PHP-8.2'
* PHP-8.2:
  Support Microsoft's "Best Fit" mappings for Windows-1252 text encoding
2022-12-09 15:41:07 +02:00
Alex Dowad
b79a86f53a Merge branch 'PHP-8.1' into PHP-8.2
* PHP-8.1:
  Support Microsoft's "Best Fit" mappings for Windows-1252 text encoding
2022-12-09 15:37:56 +02:00
Alex Dowad
a1a69c3734 Support Microsoft's "Best Fit" mappings for Windows-1252 text encoding
In b5ff87ca71, I made a number of adjustments to our conversion code
for CP1252. One of the adjustments was to make the mappings match those
published by the Unicode Consortium in the file CP1252.TXT. These do
not include mappings for the CP1252 bytes 0x81, 0x8D, 0x8F, 0x90, and
0x9D.

Rostyslav Gulka reported that this caused a problem. His application
stores binary JPEG data in an MS-SQL database. When they SELECT the
binary data out of the database, it is treated as CP1252 text and
automatically converted to UTF-8. To recover the original binary
data, they then do a conversion from UTF-8 to CP1252.

Obviously, that does not work if certain CP1252 bytes do not map to
any Unicode codepoint at all.

While this is a very unusual application of text encoding conversion,
and we might choose not to support it if there was no other basis for
including those mappings, it seems that Microsoft does actually include
them in the Win32 API as "best fit" mappings. These are extra mappings
from Unicode to other text encodings, which the Win32 API function
WideCharToMultiByte uses by default unless the WC_NO_BEST_FIT_CHARS
flag was passed.

A list of these "best fit" mappings for CP1252 can be found here:

https://www.unicode.org/Public/MAPPINGS/VENDORS/MICSFT/WindowsBestFit/bestfit1252.txt
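
With both directions mapped, the round trip from the bug report can
be sketched like this:

    <?php
    // Every byte 0x00-0xFF must survive CP1252 -> UTF-8 -> CP1252 for
    // binary data stored as 'text' to come back intact.
    $bytes = implode('', array_map('chr', range(0, 255)));
    $utf8  = mb_convert_encoding($bytes, 'UTF-8', 'Windows-1252');
    $back  = mb_convert_encoding($utf8, 'Windows-1252', 'UTF-8');
    var_dump($back === $bytes); // bool(true)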
2022-12-09 15:18:37 +02:00
Alex Dowad
0109aa62ec Simplify decoding filter for UTF-8
When decoding a 3-byte UTF-8 code unit, redundant checks for overlong
code unit and for illegal codepoints from U+D800-DFFF were included.
Both of these conditions are caught by the line which reads:

    if ((c2 & 0xC0) != 0x80 || (c == 0xE0 && c2 < 0xA0) || (c == 0xED && c2 >= 0xA0)) {

As such, there is no reason to check for the same error conditions again.

Likewise, when decoding a 4-byte UTF-8 code unit, there was a
redundant check for overlong code unit. That was already caught by the
line which reads:

    if ((c2 & 0xC0) != 0x80 || (c == 0xF0 && c2 < 0x90) || (c == 0xF4 && c2 >= 0x90)) {
2022-11-28 17:04:00 +02:00
Alex Dowad
0e540ed739 Merge branch 'PHP-8.2'
* PHP-8.2:
  Fix mangled kana output for JIS encoding
2022-11-22 15:50:43 +02:00
Alex Dowad
8f84192403 Fix mangled kana output for JIS encoding
For JIS encoding, hiragana and katakana can be input in multiple forms.
One form uses JIS X 0201 escape sequences. Another is called 'GR-invoked'
kana.

In the context of ISO-2022 encoding, bytes with the MSB clear are
called "GL" (or "graphics left") and those with the MSB set are
called "GR" (or "graphics right"). Regarding the variants of
ISO-2022-JP which are called "JIS7" and "JIS8", Wikipedia states:

"Other, older variants known as JIS7 and JIS8 build directly on the
7-bit and 8-bit encodings defined by JIS X 0201 and allow use of JIS X
0201 kana from G1 without escape sequences, using Shift Out and Shift
In or setting the eighth bit (GR-invoked), respectively."

In harmony with this, we have always accepted bytes from 0xA3-0xDF and
decoded them to the corresponding hiragana/katakana. However, at some
point I accidentally broke output for these kana. You can see the
problem in 3v4l.org by running this program:

    <?php
    echo bin2hex(mb_convert_encoding("\xA3", 'JIS', 'JIS'));

The results are:

    Output for 8.2rc1 - rc3
    1b244200231b2842
    Output for 7.4.0 - 7.4.33, 8.0.1 - 8.0.25, 8.1.12
    1b2849231b2842
    Output for 8.1.0 - 8.1.11
    1b284923

You can see that from 8.1.0 - 8.1.11, there was a missing escape
sequence at the end. That was caused because the flush functions were
not being called properly, and has already been fixed. However, this
also shows that the output for 8.2rc1-rc3 is completely invalid.
It is trying to output a JIS X 0208 sequence, but with 0x00 as one of
the JIS X 0208 bytes, which is illegal.

Add the missing code which will make the new text conversion filters
behave the same as the old ones when outputting hiragana/katakana in
JIS encoding.
2022-11-22 15:49:19 +02:00
Alex Dowad
3e743e9ba1 Merge branch 'PHP-8.2'
* PHP-8.2:
  For UTF-7, flag unnecessary extra trailing byte in Base64 section as error
2022-11-21 14:49:55 +02:00