Commit graph

907 commits

Author SHA1 Message Date
Alex Dowad
d14ed12783 Adjust code to finish validating remaining 0-8 bytes at end of UTF-8 string
This code is a few percent faster for short UTF-8 strings. For long
(~10,000 byte) strings, it is also consistently faster on my local
microbenchmarks, but by less than 1%.
2023-01-26 09:49:58 +02:00
Alex Dowad
4f36623c1e Use RETURN_STR_COPY in mb_output_handler
This means the same thing and makes the code read a tiny bit better.

Thanks to Nikita Popov for the tip.
2023-01-22 13:53:04 +02:00
Alex Dowad
6f53dbb83e mb_scrub does not attempt to scrub known-valid UTF-8 strings 2023-01-22 13:53:04 +02:00
Alex Dowad
23dab38fe9 Use smart_str as dynamic buffer for extra headers in mb_send_mail 2023-01-21 23:12:58 +02:00
Alex Dowad
8a73a68190 Use fast encoding conversion filters in mb_send_mail 2023-01-21 23:12:58 +02:00
Jakub Zelenka
443eb50a4c
Merge branch 'PHP-8.2' 2023-01-19 19:06:38 +00:00
Jakub Zelenka
cc931af35d
Fix GH-8086: Introduce mail.mixed_lf_and_crlf INI
When this INI option is enabled, it reverts the line separator for
headers and the message body to LF, which was non-conformant behavior
in PHP 7. This is done because some non-conformant MTAs fail to parse
the CRLF line separator for headers and body.

This is used by the mail and mb_send_mail functions.
2023-01-19 19:05:39 +00:00
Alex Dowad
cb840799b4 mb_detect_encoding is more accurate on strings with UTF-8/16 BOM
Thanks to the GitHub user 'titanz35' for pointing out that the new
implementation of mb_detect_encoding had poor detection accuracy on
UTF-8 and UTF-16 strings with a byte-order mark.
2023-01-19 08:40:39 +02:00
Alex Dowad
8902e47f3d Simplify checks (in mb_fast_check_utf8) for overlong code units and invalid codepoint values 2023-01-18 17:14:53 +02:00
Alex Dowad
d58f70455b Simplify check (in mb_fast_check_utf8) for seeing if 16 bytes are all ASCII characters 2023-01-18 17:14:53 +02:00
Alex Dowad
b189aaacc2 Tweaks for accelerated implementation of mb_strlen for UTF-8
On longer strings, this gives a small speed boost of 10% or less.
2023-01-17 10:07:53 +02:00
Alex Dowad
3ae4779305 Add accelerated (SIMD-based) implementation of mb_check_encoding for UTF-8
The new SSE2-based implementation of mb_check_encoding for UTF-8 is
about 10% faster for 0-5 byte strings, more than 3 times faster for
~100-byte strings, and just under 4 times faster for ~10,000-byte
strings.

I believe it may be possible to make this function much faster again.
Some possible directions for further performance optimization include:

• If other ISA extensions like AVX or AVX-512 are available, use a
  similar algorithm, but process text in blocks of 32 or 64 bytes
  (instead of 16 bytes).
• If other SIMD ISA extensions are available, use the greater variety
  of available instructions to make some of the checks tighter.
• Even if only SSE/SSE2 are available, find clever ways to squeeze
  instructions out of the hot path. This would probably require a lot
  of perusing instruction manuals and thinking hard about which SIMD
  instructions could be used to perform the same checks with fewer
  instructions.
• Find a better algorithm, possibly one where more checks could be
  combined (just as the current algorithm combines the checks for
  certain overlong code units and reserved codepoints).
2023-01-17 10:07:53 +02:00
Alex Dowad
4427b2e1ab Mark UTF-8 strings emitted by mbstring functions as valid UTF-8
We now have a couple of mbstring functions which have fast paths for
strings marked as 'valid UTF-8', and we will likely have more later. So
that these fast paths can be used more frequently, mark UTF-8 strings
emitted by mbstring as 'valid UTF-8'. This is always a correct thing
to do, because mbstring never returns invalid UTF-8 as the result of
a conversion (or similar) operation.

Internally, we do have a conversion mode which deliberately emits
invalid UTF-8 in some cases. (This is done to prevent unwanted matches
when we are converting strings to UTF-8 before performing matching
operations on them.) For such strings, don't set the 'valid UTF-8' flag.
It probably wouldn't hurt anything to set it, because strings generated
using that special conversion mode should *never* be returned to
userland, and I don't think we do anything with them which cares about
the IS_STR_VALID_UTF8 flag... but still, it would likely cause
confusion for developers.
2023-01-11 17:08:27 +02:00
Alex Dowad
b4cbaabd9b Add fast SSE2-based implementation of mb_strlen for known-valid UTF-8 strings
One small piece of this was obtained from Stack Overflow. According to
Stack Overflow's Terms of Service, all user-contributed code on SO is
provided under a Creative Commons license. I believe this license is
compatible with the code being included in PHP.

Benchmarking results (UTF-8 only, for strings which have already been
checked using mb_check_encoding):

For very short (0-5 byte) strings, mb_strlen is 12% faster.
The speedup gets greater and greater on longer input strings; for
strings around 100KB, mb_strlen is 23 times faster.

Currently the 'fast' code is gated behind a GC flag check which ensures
it is only used on strings which have already been checked for UTF-8
validity. This is because the accelerated code will return different
results on some invalid UTF-8 strings.
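
For illustration, here is a scalar PHP sketch of the standard counting
trick that SIMD implementations of this kind typically vectorize (an
assumption on my part; this commit does not spell out the exact
algorithm): in valid UTF-8, the codepoint count equals the number of
bytes which are not continuation bytes.

    <?php
    // Scalar sketch only; the real code is C and processes many
    // bytes at a time with SSE2. Assumes $s is known-valid UTF-8.
    function utf8_codepoint_count(string $s): int {
        $count = 0;
        for ($i = 0, $n = strlen($s); $i < $n; $i++) {
            // Continuation bytes have the bit pattern 0b10xxxxxx
            if ((ord($s[$i]) & 0xC0) !== 0x80) {
                $count++;
            }
        }
        return $count;
    }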
2023-01-09 07:50:40 +02:00
Alex Dowad
d8b5b9fa55 Add unit tests for mb_str_split/mb_substr on MacJapanese encoding
MacJapanese has a somewhat unusual feature: when mapped to Unicode,
many of its characters map to sequences of several codepoints.
Add test cases demonstrating how mb_str_split and mb_substr behave in
this situation.

When adding these tests, I found the behavior of mb_substr was wrong
due to an inconsistency between the string "length" as measured by
mb_strlen and the number of native MacJapanese characters which
mb_substr would count when iterating over the string using the
mblen_table. This has been fixed.

I believe that mb_strstr will also return wrong results in some cases
for MacJapanese. I still need to come up with unit tests which
demonstrate the problem and figure out how to fix it.
2023-01-08 17:23:47 +02:00
Alex Dowad
cca4ca6d3d Remove 'fast path' using mblen_table from mb_get_strlen (it's actually a slow path)
Various mbstring legacy text encodings have what is called an 'mblen_table';
a table which gives the length of a multi-byte character using a lookup on
the first byte value. Several mbstring functions have a 'fast path' which uses
this table when it is available.

However, it turns out that iterating through a string using the mblen_table
is surprisingly slow. I found that deleting this 'fast path' from mb_strlen
makes mb_strlen a few percent slower on very small strings (0-5 bytes), but
yields very large performance gains on medium to long input strings.

Part of the reason for this is because our text decoding filters are so much
faster now.

Here are some benchmarks:

    EUC-KR, short (0-5 chars)        - master faster by 11.90% (0.0000 vs 0.0000)
    EUC-JP, short (0-5 chars)        - master faster by 10.88% (0.0000 vs 0.0000)
    BIG-5, short (0-5 chars)         - master faster by 10.66% (0.0000 vs 0.0000)
    UTF-8, short (0-5 chars)         - master faster by 8.91% (0.0000 vs 0.0000)
    CP936, short (0-5 chars)         - master faster by 6.27% (0.0000 vs 0.0000)
    UHC, short (0-5 chars)           - master faster by 5.38% (0.0000 vs 0.0000)
    SJIS, short (0-5 chars)          - master faster by 5.20% (0.0000 vs 0.0000)

    UTF-8, medium (~100 chars)       - new faster by 127.51% (0.0004 vs 0.0002)
    UTF-8, long (~10000 chars)       - new faster by 87.94% (0.0319 vs 0.0170)
    UTF-8, very long (~100000 chars) - new faster by 88.25% (0.3199 vs 0.1699)

    SJIS, medium (~100 chars)        - new faster by 208.89% (0.0004 vs 0.0001)
    SJIS, long (~10000 chars)        - new faster by 253.57% (0.0319 vs 0.0090)

    CP936, medium (~100 chars)       - new faster by 126.08% (0.0004 vs 0.0002)
    CP936, long (~10000 chars)       - new faster by 200.48% (0.0319 vs 0.0106)

    EUC-KR, medium (~100 chars)      - new faster by 146.71% (0.0004 vs 0.0002)
    EUC-KR, long (~10000 chars)      - new faster by 212.05% (0.0319 vs 0.0102)

    EUC-JP, medium (~100 chars)      - new faster by 186.68% (0.0004 vs 0.0001)
    EUC-JP, long (~10000 chars)      - new faster by 295.37% (0.0320 vs 0.0081)

    BIG-5, medium (~100 chars)       - new faster by 173.07% (0.0004 vs 0.0001)
    BIG-5, long (~10000 chars)       - new faster by 269.19% (0.0319 vs 0.0086)

    UHC, medium (~100 chars)         - new faster by 196.99% (0.0004 vs 0.0001)
    UHC, long (~10000 chars)         - new faster by 256.39% (0.0323 vs 0.0091)

This does raise the question: is using the 'mblen_table' worthwhile for
other mbstring functions, such as mb_str_split? The answer is yes, it
is worthwhile; you see, while mb_strlen only needs to decode the input
string (not re-encode it), when mb_str_split is implemented using
the conversion filters, it needs to both decode the string and then
re-encode it. This means that there is more potential to gain
performance by using the 'mblen_table'. Benchmarking shows that in a
few cases, mb_str_split becomes faster when the 'mblen_table fast path'
is deleted, but in the majority of cases, it becomes slower.
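
As a sketch of what the 'mblen_table fast path' does (PHP used here
for clarity; the real code is C, and the function name is mine):

    <?php
    // Hypothetical illustration: $table maps a first-byte value to
    // the byte length of the character which starts with that byte.
    function strlen_via_mblen_table(string $s, array $table): int {
        $chars = 0;
        for ($i = 0, $n = strlen($s); $i < $n; $chars++) {
            $i += $table[ord($s[$i])]; // skip over one character
        }
        return $chars;
    }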
2023-01-08 17:23:47 +02:00
Alex Dowad
3b5072f6f6 Use smart_str in mb_http_input rather than mbfl_memory_device
For many years, the code has contained a TODO comment indicating
that the original author had wanted to do this.

Using smart_str makes the code shorter and cleaner, and it is another
step towards removing a bunch of legacy mbstring code which will soon
be unneeded.
2023-01-03 09:10:13 +02:00
Alex Dowad
0e7160b836 Implement mb_detect_encoding using fast text conversion filters
Regarding the optional 3rd `strict` argument to mb_detect_encoding,
the documentation states:

  Controls the behaviour when string is not valid in any of the listed encodings.
  If strict is set to false, the closest matching encoding will be returned;
  if strict is set to true, false will be returned.

(Ref: https://www.php.net/manual/en/function.mb-detect-encoding.php)

Because of bugs in the implementation, mb_detect_encoding did not always
behave according to this description when `strict` was false.
For example:

  <?php
  echo var_export(mb_detect_encoding("\xc0\x00", "UTF-8", false));
  // Before this commit, prints: false
  // After this commit, prints: 'UTF-8'

Because `strict` is false in the above example, mb_detect_encoding
should return the 'closest matching encoding', which is UTF-8, since
that is the only candidate encoding. (Incidentally, this example shows
that using mb_detect_encoding with a single candidate encoding in
non-strict mode is useless.)

The new implementation fixes this bug. It also fixes another problem
with the old implementation as regards non-strict detection mode:

The old implementation would stop processing of the input string using
a particular candidate encoding as soon as it saw an error in that
encoding, even in non-strict mode. This means that it could not really
detect the 'closest matching encoding'; rather, what it would return
in non-strict mode was 'the encoding in which the first decoding error
is furthest from the beginning of the input string'.

In non-strict mode, the new implementation continues trying to process
the input string to its end even after seeing an error. This makes it
possible to determine in which candidate encoding the string has the
smallest number of errors, i.e. the 'closest matching encoding'.

Rejecting candidate encodings as soon as it saw an error gave the old
implementation a marked performance advantage in non-strict mode;
however, the new implementation still beats it in most cases. Here are
a few sample microbenchmark results:

  UTF-8, ~100 codepoints, strict mode
  Old: 0.080s (100,000 calls)
  New: 0.026s ("       "    )

  UTF-8, ~100 codepoints, non-strict mode
  Old: 0.079s (100,000 calls)
  New: 0.033s ("       "    )

  UTF-8, ~10000 codepoints, strict mode
  Old: 6.708s (60,000 calls)
  New: 1.383s ("      "    )

  UTF-8, ~10000 codepoints, non-strict mode
  Old: 6.705s (60,000 calls)
  New: 3.044s ("      "    )

Notice that the old implementation had almost identical performance
between strict and non-strict mode, while the new suffers a significant
performance penalty for non-strict detection. This is the cost of
implementing the behavior specified in the documentation.

A couple more sample results:

  SJIS, ~10000 codepoints, strict mode
  Old: 4.563s
  New: 1.084s

  SJIS, ~10000 codepoints, non-strict mode
  Old: 4.569s
  New: 2.863s

This is the only case I found where the new implementation loses:

  UTF-16LE, ~10000 codepoints, non-strict mode
  Old: 1.514s
  New: 2.813s

The reason is because the test strings happened to be invalid right from
the first few bytes for all the candidate encodings except for UTF-16LE;
so the old implementation would immediately reject all those encodings
and only process the entire string in UTF-16LE.

I believe mb_detect_encoding could be made much faster if we identified
good criteria for when to reject candidate encodings before reaching
the end of the input string.
2023-01-03 09:10:10 +02:00
Alex Dowad
953864661a Implement php_mb_zend_encoding_converter using fast text conversion filters 2023-01-03 09:02:21 +02:00
Alex Dowad
88c99afdac Implement mb_str_split using fast text conversion filters
There is no great difference between the old and new code for text
encodings which either 1) use a fixed number of bytes per codepoint, or
2) have an 'mblen' table which enables us to find the length of a
multi-byte character using a table lookup indexed by the first byte
value.

The big difference is for other text encodings, where we have to
actually decode the string to split it. For such text encodings,
such as ISO-2022-JP and UTF-16, I measured a speedup of 50%-120% over
the previous implementation.
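
A usage sketch for one such encoding (example mine):

    <?php
    $jis = mb_convert_encoding("日本語", "ISO-2022-JP", "UTF-8");
    // Splitting requires actually decoding the string; each chunk
    // is then re-encoded, so (I believe) each one carries its own
    // escape sequences and is valid ISO-2022-JP on its own.
    var_dump(mb_str_split($jis, 1, "ISO-2022-JP"));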
2023-01-03 09:02:21 +02:00
Alex Dowad
a9a672048b Implement mb_output_handler using fast text conversion filters 2023-01-03 09:02:21 +02:00
Alex Dowad
8b37c4ea5e Merge branch 'PHP-8.2'
* PHP-8.2:
  Allow 'h' and 'k' flags to be combined for mb_convert_kana
2022-12-29 20:39:22 +02:00
Alex Dowad
f7a19181d7 Allow 'h' and 'k' flags to be combined for mb_convert_kana
The 'h' flag makes mb_convert_kana convert zenkaku hiragana to hankaku
katakana; 'k' makes it convert zenkaku katakana to hankaku katakana.

When working on the implementation of mb_convert_kana, I added some
additional checks to catch combinations of flags which do not make
sense; but there is no conflict between 'h' and 'k' (they control
conversions for two disjoint ranges of codepoints) and this combination
should not have been restricted.

Thanks to the GitHub user 'akira345' for reporting this problem.
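
A quick illustration of the now-permitted combination (example mine;
expected output shown as a comment):

    <?php
    // 'h' converts the hiragana, 'k' converts the katakana; the two
    // flags touch disjoint codepoint ranges, so combining is safe.
    echo mb_convert_kana("ひらがな カタカナ", "hk", "UTF-8");
    // Expected: ﾋﾗｶﾞﾅ ｶﾀｶﾅ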

Closes GH-10174.
2022-12-29 20:38:01 +02:00
Alex Dowad
7f44559516 mb_str{i,}pos does not match illegal byte sequences against occurrences of mb_substitute_char
In GitHub issue 9613, it was reported that mb_strpos wrongly matches the
character '?' against any invalid string, even when the character '?'
clearly does not appear in the invalid string. This behavior has existed
at least since PHP 5.2.

The reason for the behavior is that mb_strpos internally converts the
haystack and needle to UTF-8 before performing a search. When converting
to UTF-8, regardless of the setting of mb_substitute_character, libmbfl
would use '?' as an error marker for invalid byte sequences. Once those
invalid input sequences were replaced with '?', then naturally, they
would match against occurrences of the actual character '?' (when it
appeared as a 'normal' character, not as an error marker). This would
happen regardless of whether the error was in the haystack and '?' was
used in the needle, or whether the error was in the needle and '?' was
used in the haystack.

Why would libmbfl use '?' rather than the mb_substitute_character set
by the user? Remember that libmbfl was originally a separate library
which was imported into the PHP codebase. mb_substitute_character is an
mbstring API function, not something built into libmbfl. When mbstring
would call into libmbfl, it would provide the error replacement
character to libmbfl as a parameter. However, when libmbfl would perform
conversion operations internally, and not because of a direct call from
mbstring, it would use its own error replacement character.

Example:

    <?php
    $questionMark = "\x00?";
    $badUTF16 = "\xDB\x00"; // half of a surrogate pair
    echo mb_strpos($questionMark, $badUTF16, 0, 'UTF-16BE'), "\n";
    echo mb_strpos($badUTF16, $questionMark, 0, 'UTF-16BE'), "\n";

Incidentally, this behavior does not occur if the text encoding is
UTF-8, because no conversion is needed in that case.

mb_stripos had a similar issue, but instead of always using '?' as an
error marker internally, it would use the selected
mb_substitute_character. So, for example, if the mb_substitute_character
was '%', then occurrences of '%' in the haystack would match invalid
bytes in the needle, and vice versa.

Example:

    <?php
    mb_substitute_character(0x25); // '%'
    $percent = "\x00%";
    $badUTF16 = "\xDB\x00"; // half of a surrogate pair
    echo mb_stripos($percent, $badUTF16, 0, 'UTF-16BE'), "\n";
    echo mb_stripos($badUTF16, $percent, 0, 'UTF-16BE'), "\n";

This behavior (of mb_stripos) still occurs even if the text encoding is
UTF-8, because case folding is still needed to make the search
case-insensitive.

It is not hard to think of scenarios where these strange and unintuitive
behaviors could cause security vulnerabilities. In the discussion on
GH issue 9613, Christoph Becker suggested that mb_str{i,}pos should
simply refuse to operate on invalid strings. However, this would almost
certainly break existing production code.

This commit mitigates the problem in a less intrusive way: it ensures
that while invalid haystacks can match invalid needles (even if the
specific invalid bytes are different), invalid bytes in the haystack
will never match '?' OR occurrences of the mb_substitute_character in
the needle, and vice versa.

This does represent a backwards compatibility break, but a small one.
Since it mitigates a potential security problem, I believe this is
appropriate.

Closes GH-9613.
2022-12-18 15:31:20 +02:00
Alex Dowad
744ca16e73 Speed boost for mb_stripos (when not using UTF-8)
Instead of case-folding a string and then converting it to UTF-8 as a
separate operation, why not convert it to UTF-8 at the same time as
we fold case?

For non-UTF-8 encodings, this typically makes mb_stripos about 2x
faster.
2022-12-18 15:31:20 +02:00
Alex Dowad
b9cd1cdb4f Implement mb_substr_count using fast text conversion filters
The performance gain from this change depends on the text encoding and
input string size. For very small strings, other overheads tend to swamp
the performance gains to some extent, such that the speedup is less than
2x. For medium-length strings (~100 bytes or so), the speedup is
typically around 2.5x.

The greatest performance gains are for UTF-8 strings which have already
been marked as valid (using the GC flags on the zend_string object);
for those, the speedup is more than 10x in many cases.

The previous implementation first converted the haystack and needle to
wchars, then searched for matches between the two sequences of wchars.
Because we use -1 as an error marker when converting to wchars, error
markers from invalid byte sequences in the haystack would match error
markers from invalid byte sequences in the needle, even if the specific
invalid byte sequence was different. I am not sure whether this behavior
is really desirable or not, but anyways, this new implementation
follows the same behavior so as not to cause BC breaks.
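
Basic usage, for reference (example mine):

    <?php
    // Counts non-overlapping occurrences of the needle
    echo mb_substr_count("こんにちは、こんばんは", "こん", "UTF-8"); // 2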
2022-12-15 07:54:26 +02:00
Alex Dowad
0c0774f5b4 Use fast text conversion filters for mb_strpos, mb_stripos, mb_substr, etc
This boosts the performance of mb_strpos, mb_stripos, mb_strrpos,
mb_strripos, mb_strstr, mb_stristr, mb_strrchr, and mb_strrichr when
used on non-UTF-8 strings. mb_substr is also faster.

With UTF-8 input, there is no appreciable difference in performance for
mb_strpos, mb_stripos, mb_strrpos, etc. This is expected, since the only
real difference here (aside from shorter and simpler code) is that the
new text conversion code is used when converting non-UTF-8 input strings
to UTF-8. (This is done because internally, mb_strpos, etc. work only
on UTF-8 text.)

For ASCII, speed is boosted by 30-65%. For other legacy text encodings,
the degree of performance improvement will depend on how slow the
legacy conversion code was.

One other minor, but notable difference is that strings encoded using
UTF-8 variants from Japanese mobile vendors (SoftBank, KDDI, Docomo)
will not undergo encoding conversion but will be processed "as is". It
is expected that this will result in a large performance boost for
such input strings; but realistically, the number of users who work
with such strings is probably minute.

I was not originally planning to include mb_substr in this commit, but
fuzzing of the reimplemented mb_strstr revealed that mb_substr needed
to be reimplemented, too; using the old mbfl_substr, which was based
on the old text conversion filters, in combination with functions which
use the new text conversion filters caused bugs.

The performance boost for mb_substr varies from 10%-500%, depending
on the encoding and input string used.
2022-12-12 16:28:49 +02:00
Alex Dowad
b1954f5fd6 Use fast text conversion filters to implement mb_convert_variables 2022-11-18 10:19:07 +02:00
Alex Dowad
d0d834429f Cache UTF-8-validity status of strings in GC flags
The PCRE extension is already doing this. The flag is set when a string
is determined to be valid UTF-8, and cleared in
zend_string_forget_hash_val.

We might as well make good use of it in mbstring, too.

This should result in a negligible slowdown for non-UTF-8 strings,
bad UTF-8 strings, and good UTF-8 strings which are checked only once.
However, when microbenchmarking this change using a variety of text
encodings and string lengths, I found that in most of these cases,
the 'new' code was a few percent faster. In a couple of cases, the 'old'
code was a few percent faster. This was not a result of sampling error,
since I could reproduce these test results repeatedly, and even when
running a large number of iterations. Both the new and old code
were compiled with -O3 -march=native. My (unproved) hypothesis is that
although the new code appears to only add one more conditional branch,
the compiler may emit slightly different code from before (perhaps
with different register allocation and so on), and this may cause some
cases to run slightly faster and others to run slightly slower. I have
not disassembled the old and new binaries to see if an examination of
the emitted assembly code would support this hypothesis.

For good UTF-8 strings which are checked repeatedly, the speedup is
about 40% even for strings 1-5 bytes in length. For ~100 byte strings,
it is ~700%, and for ~10000 byte strings, it is ~80000%.

I tried fuzzing MBString's php_mb_check_encoding function and
pcre2lib's valid_utf function to see if I could find any cases where
their output would be different. After running the fuzzer for a couple
of minutes, it had tried more than 1 million test cases without finding
any where the output was different. Therefore, it appears that
MBString's UTF-8 validation is compatible with PCRE's.
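
The effect is easiest to see from userland when the same string is
validated repeatedly (sketch mine):

    <?php
    $s = str_repeat("é", 100000);
    // The first call must scan the whole string; if it is valid,
    // the result is cached in the zend_string's GC flags.
    var_dump(mb_check_encoding($s, "UTF-8"));
    // A repeated call on the same zend_string can short-circuit.
    var_dump(mb_check_encoding($s, "UTF-8"));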
2022-11-15 19:14:35 +02:00
Alex Dowad
3ce888a837 Use uint32_t for 'illegal_substchar' codepoint in mbstring
This value is a wchar, so the best type for it is uint32_t.
2022-10-05 10:02:02 +09:00
Alex Dowad
20769fb9ab Make enum for valid case_mode values (for php_unicode_convert_case) 2022-10-05 10:02:02 +09:00
Alex Dowad
7eef2fb45e Use fast text conversion filters for mb_convert_case, mb_strtoupper, mb_strtolower
Speed increase is only about 50% for title casing, but 2-3x for other
forms of case conversion.
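
For reference, the functions covered (examples mine):

    <?php
    echo mb_strtoupper("héllo wörld", "UTF-8");         // HÉLLO WÖRLD
    echo mb_strtolower("HÉLLO WÖRLD", "UTF-8");         // héllo wörld
    echo mb_convert_case("héllo wörld", MB_CASE_TITLE); // Héllo Wörld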
2022-10-05 10:02:02 +09:00
Nikita Popov
9f9042fd43 Fix always non-null warning
ZSTR_VAL cannot be null.
2022-09-11 22:13:01 +02:00
Alex Dowad
8df515555b Remove unused 'to_language' and 'from_language' struct fields 2022-08-16 16:43:26 +02:00
Christoph M. Becker
d013d94985
Fix GH-9248: Segmentation fault in mb_strimwidth()
We need to initialize the optional argument `trimmarker` with its
default value.

Closes GH-9273.
2022-08-08 18:35:37 +02:00
Alex Dowad
5370f344d2 mb_strimwidth inserts error markers in invalid input string (for backwards compatibility)
The old implementation did this. It also did the same to the
trim marker, if the trim marker was invalid in the specified
encoding, but I have not imitated that behavior (for performance).
2022-08-02 11:07:06 +02:00
Alex Dowad
78ee18413f Move kana conversion function to mbfilter_cp5022x.c
...To avoid a dependency from libmbfl to mbstring.

Thanks to Nikita Popov for pointing this issue out.
2022-08-02 11:07:06 +02:00
Alex Dowad
7299096095 New implementation of mb_strimwidth
This new implementation of mb_strimwidth uses the new text
encoding conversion filters. Changes from the previous
implementation:

• mb_strimwidth allows a negative 'from' argument, which
should count backwards from the end of the string. However,
the implementation of this feature was buggy (starting right
from when it was first implemented).

It used the following code:

    if ((from < 0) || (width < 0)) {
        swidth = mbfl_strwidth(&string);
    }
    if (from < 0) {
        from += swidth;
    }

Do you see the bug? 'from' is a count of CODEPOINTS, but
'swidth' is a count of TERMINAL COLUMNS. Adding those two
together does not make sense. If there were no fullwidth
characters in the input string, then the two counts coincide
and the feature would work correctly. However, each
fullwidth character would throw the result off by one,
causing more characters to be skipped than was requested.

• mb_strimwidth also allows a negative 'width' argument,
which again counts backwards from the end of the string;
in this case, it is not determining the START of the portion
which we want to extract, but rather, the END of that portion.
Perhaps unsurprisingly, this feature was also buggy.

Code:

    if (width < 0) {
        width = swidth + width - from;
    }

'swidth + width' is fine here; the problem is '- from'.
Again, that is subtracting a count of CODEPOINTS from a
count of TERMINAL COLUMNS. In this case, we really need
to count the terminal width of the string prefix skipped
over by 'from', and subtract that rather than the number
of codepoints which are being skipped.

As a result, if a 'from' count was passed along with a
negative 'width', for every fullwidth character in the
skipped prefix, the result of mb_strimwidth was one
terminal column wider than requested.

Since these situations were covered by unit tests, you
might wonder why the bugs were not caught. Well, as far as
I can see, it looks like the author of the 'tests' just
captured the actual output of mb_strimwidth and defined it
as 'correct'. The tests were written in such a way that it
was difficult to examine them and see whether they made
sense or not; but a careful examination of the inputs and
outputs clearly shows that the legacy tests did not conform
to the documented contract of mb_strimwidth.

• The old implementation would always pass the input string
through decoding/encoding filters before returning it to
the caller, even if it fit within the specified width. This
means that invalid byte sequences would be converted to
error markers. For performance, the new implementation
returns the very same string which was passed in if it
does not exceed the specified width. This means that
erroneous byte sequences are not converted to error markers
unless it is necessary to trim the string.

• The same applies to the 'trim marker' string.

• The old implementation was buggy in the (unusual)
case that the trim marker is wider than the requested
maximum width of the result. It did an unsigned subtraction
of the requested width and the width of the trim marker. If the
width of the trim marker was greater, that subtraction would
underflow and yield a huge number. As a result, mb_strimwidth
would then pass the input string through, even if it was
far wider than the requested maximum width.

In that case, since the input string is wider than the
requested width, and NONE of it will fit together with the
trim marker, the new implementation returns just the trim
marker. This is the one case where the output can be wider
than the requested width: when BOTH the input string and
also the trim marker are too wide.

• Since it passed the input string and trim marker through
decoding/encoding filters, when using "Quoted-Printable" as
the encoding, newlines could be inserted into the trim marker
to maintain the maximum line length for QP.

This is an extremely bizarre use case and I don't think there
is any point in worrying about it. QP will be removed from
mbstring in time, anyways.

PERFORMANCE:

• From micro-benchmarking with various input string lengths and
text encodings, it appears that the new implementation is 2-3x
faster for UTF-8 and UTF-16. For legacy Japanese text encodings
like ISO-2022-JP or SJIS, the new implementation is perhaps 25%
faster.

• Note that correctly implementing negative 'from' and 'width'
arguments imposes a small performance burden in such cases; one
which the old implementation did not pay. This slightly skews
benchmarking results in favor of the old implementation. However,
even so, the new implementation is faster in all cases which I
tested.
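
A sketch of the documented contract (examples mine; the first one is
the classic case from the manual):

    <?php
    // The requested width of 10 includes the trim marker
    echo mb_strimwidth("Hello World", 0, 10, "...", "UTF-8");
    // "Hello W..."

    // Negative 'from' counts codepoints back from the end
    echo mb_strimwidth("Hello World", -5, 5, "", "UTF-8");
    // "World"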
2022-08-02 11:07:06 +02:00
Alex Dowad
94fde1566f Move implementation of mb_strlen to mbstring.c
mbfl_strlen (in mbfilter.c) is still being used in a couple
of places but will go away soon.
2022-08-02 11:07:06 +02:00
Christoph M. Becker
3e922bf08f
Merge branch 'PHP-8.1'
* PHP-8.1:
  Fix GH-9008: mb_detect_encoding(): wrong results with null $encodings
2022-07-20 17:01:42 +02:00
Christoph M. Becker
c2bdaa48e1
Fix GH-9008: mb_detect_encoding(): wrong results with null $encodings
Passing `null` to `$encodings` is supposed to behave like passing the
result of `mb_detect_order()`.  Therefore, we need to remove the
non-encodings from the `elist` in this case as well.  Thus, we
duplicate the global `elist`, so we can modify it.

Closes GH-9063.
2022-07-20 16:58:55 +02:00
Alex Dowad
9ac49c0dd3 New implementation of mb_convert_kana
mb_convert_kana now uses the new text encoding conversion
filters. Microbenchmarking shows speed gains of 50%-150%
across various text encodings and input string lengths.

The behavior is the same as the old mb_convert_kana
except for one fix: if the 'zero codepoint' U+0000 appeared
in the input, the old implementation would sometimes drop
it, not passing it through to the output. This is now
fixed.
2022-07-20 07:44:19 +02:00
Alex Dowad
76a92c26e3 mb_decode_numericentity decodes valid entities which are truncated at end of string
Since mb_decode_numericentity does not require all HTML entities
to end with ';', but allows them to be terminated by ANY non-digit
character, it doesn't make sense that valid entities which butt
up against the end of the input string are not converted.

As it turned out, supporting this case also made it possible
to simplify the code nicely.
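
For example (sketch mine; the convmap covers all of Unicode):

    <?php
    $convmap = [0x0, 0x10FFFF, 0, 0x1FFFFF];
    // "&#65" is cut off by the end of the string rather than being
    // terminated by a non-digit character, but is still decoded now
    echo mb_decode_numericentity("&#65", $convmap, "UTF-8"); // "A"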
2022-07-18 15:11:47 +02:00
Alex Dowad
5d6bd557b3 mb_decode_numericentity converts entities which immediately follow a valid/invalid entity
Thanks to Kamil Tieleka for suggesting that some of the behaviors of
the legacy implementation which the new mb_decode_numericentity
implementation took care to maintain were actually bugs and should
be fixed. Thanks also to Trevor Rowbotham for providing a link to
the HTML specification, showing how HTML numeric entities should
be interpreted.

mb_decode_numericentity now processes numeric entities in the
following situations where the old implementation would not:

- &<ENTITY> (for example, &&#65;)
- &#<ENTITY>
- &#x<ENTITY>
- <VALID BUT UNTERMINATED DECIMAL ENTITY><ENTITY> (for example, &#65&#65;)
- <VALID BUT UNTERMINATED HEX ENTITY><ENTITY>
- <INVALID AND UNTERMINATED DECIMAL ENTITY><ENTITY> (it does not matter why
  the first entity is invalid; the value could be too big, it could have
  too many digits, or it could not match the 'convmap' parameter)
- <INVALID AND UNTERMINATED HEX ENTITY><ENTITY>

This is consistent with the way that web browsers process
HTML entities.
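
Taking one case from the list above (sketch mine):

    <?php
    $convmap = [0x0, 0x10FFFF, 0, 0x1FFFFF];
    // A valid but unterminated entity followed by another entity
    echo mb_decode_numericentity("&#65&#65;", $convmap, "UTF-8"); // "AA"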
2022-07-18 15:11:32 +02:00
Alex Dowad
91969e908f New implementation of mb_{de,en}code_numericentity
This new implementation uses the new encoding conversion filters.
Aside from fewer LOC and (hopefully) improved readability,
the differences are as follows:

BEHAVIOR CHANGES:

- The old implementation used signed arithmetic when operating
on the 'convmap'. This meant that results could be surprising when
using convmap entries with 1 in the MSB. Further, types like 'int'
were used rather than those with a specific bit width, such as
'int32_t'. This meant that results could also depend on the
platform width of an 'int'.

Now unsigned arithmetic is used, with explicit bit widths.

- Similarly, while converting decimal numeric entities, the
legacy implementation would ensure that the value never overflowed
INT_MAX, and if it did, the entity would be treated as invalid
and passed through unconverted.

However, that again means that results depend on the platform
size of an 'int'. So now, we use a value with explicit bit width
(32 bits) to hold the value of a decoded decimal entity, and
ensure that the entity value does not overflow that.

Further, because we are using an UNSIGNED 32-bit value rather
than a signed one, the ceiling for how large a decimal entity
can be is higher now.

All of this will probably not affect anyone, since Unicode
codepoints above U+10FFFF are invalid anyways. To see the
difference, you need to be using a text encoding like UCS-4,
which allows huge 'codepoints'.

- If it saw something which looked like a hex entity, but
turned out not to be a valid numeric entity, the old
implementation would sometimes convert the hexadecimal
digits a-f to A-F (uppercase). The new implementation passes
invalid numeric entities through without performing case
conversion.

- The old implementation of mb_encode_numericentity was
limited in how many decimal/hex digits it could emit.
If a text encoding like UCS-4 was in use, where 'codepoints'
can have huge values (larger than the valid range
stipulated by the Unicode standard), it would not error
out on a 'codepoint' whose value was too large for it,
but would rather mangle the value and emit a numeric
entity which decoded to some other random codepoint.
The new implementation is able to emit enough digits to
express any value which fits in 32 bits.

PERFORMANCE:

Based on micro-benchmarks run on my development machine:

Decoding numeric HTML entities is about 4 times faster, for
both decimal and hexadecimal entities, across a variety of
input string lengths. Encoding is about 3 times faster.
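
A usage sketch for the encoding direction (example mine):

    <?php
    // Encode every non-ASCII codepoint as a decimal entity
    $convmap = [0x80, 0x10FFFF, 0, 0x1FFFFF];
    echo mb_encode_numericentity("héllo", $convmap, "UTF-8");
    // "h&#233;llo"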
2022-07-18 15:11:30 +02:00
Christoph M. Becker
d6fc165028
Drop useless TODO comment
Cf. <https://github.com/php/php-src/pull/9018#issuecomment-1185481492>.
2022-07-15 14:23:59 +02:00
Máté Kocsis
56137cd26e
Declare ext/mbstring constants in stubs (#8798) 2022-06-23 17:34:08 +02:00
Alex Dowad
880803a21e Use fast conversion filters to implement php_mb_ord
Even for single-character strings, this is about 50% faster for
ASCII, UTF-8, and UTF-16. For long strings, the performance gain is
enormous, since the old code would convert the ENTIRE string, just
to pick out the first codepoint.
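
For reference (example mine):

    <?php
    // mb_ord returns the codepoint of the FIRST character only, so
    // converting the whole string up front was wasted work
    echo mb_ord("€uro", "UTF-8"); // 8364 (U+20AC)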
2022-06-12 15:24:41 +02:00
Alex Dowad
950a7db9fe Use fast text conversion filters to implement mb_check_encoding
Benchmarking reveals that this is about 8% slower for UTF-8 strings
which have a bad codepoint at the very beginning of the string.
For good strings, or those where the first bad codepoint is much
later in the string, it is significantly faster (2-3 times faster
in many cases).
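
Basic usage (examples mine):

    <?php
    var_dump(mb_check_encoding("h\xC3\xA9llo", "UTF-8")); // true
    // Truncated multi-byte sequence at the very start:
    var_dump(mb_check_encoding("\xC3hello", "UTF-8"));    // false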
2022-06-12 15:24:41 +02:00
Remi Collet
5b2e413e89
Merge branch 'PHP-8.1'
* PHP-8.1:
  NEWS for GH-8685
  NEWS for GH-8685
  Fix GH-8685 mbstring requires pcre
2022-06-03 07:55:51 +02:00