If buf_len is zero, this would leave behind a dangling pointer
to an already released header.str. Make sure this can't happen
by always overwriting the pointer.
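A minimal sketch of the intended pattern (variable and field names assumed):

    /* Release the previously captured header string and reassign it
     * unconditionally, so a zero-length buffer cannot leave header.str
     * pointing at freed memory. */
    if (ch->header.str) {
        zend_string_release(ch->header.str);
    }
    ch->header.str = zend_string_init(buf, buf_len, 0);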
Closes GH-7376.
The CURLOPT_DEBUGDATA will point to the old curl handle after
copying. Update it to point to the new handle.
We don't separately store whether CURLINFO_HEADER_OUT is enabled,
so I'm doing this unconditionally. It should be harmless if
CURLOPT_DEBUGFUNCTION is not used.
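Roughly (handle and field names assumed):

    /* After duplicating the easy handle, re-point the debug userdata at the
     * new wrapper so the debug callback no longer sees the old handle. */
    dupch->cp = curl_easy_duphandle(ch->cp);
    curl_easy_setopt(dupch->cp, CURLOPT_DEBUGDATA, (void *) dupch);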
As these hold on to some internal resource, there can't be two
"equal" objects with different identity. Make sure the lack of
public properties doesn't result in these being treated as always
equal.
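One way to express this is an identity-based compare handler; a sketch
(handler struct name assumed):

    /* Two handle objects compare equal only if they are the same object,
     * regardless of the (missing) public properties. */
    static int curl_compare_objects(zval *o1, zval *o2)
    {
        return Z_OBJ_P(o1) == Z_OBJ_P(o2) ? 0 : 1;
    }

    /* registered during MINIT */
    curl_object_handlers.compare = curl_compare_objects;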
This is mainly to work around https://github.com/curl/curl/issues/6455,
but not building the mime structure for empty hashtables is a general
performance optimization, so we do not restrict it to affected cURL
versions (7.56.0 to 7.75.0).
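Conceptually (variable names assumed):

    /* If there are no POST fields, skip building the curl_mime structure
     * entirely instead of attaching an empty one. */
    if (zend_hash_num_elements(Z_ARRVAL_P(zpostfields)) == 0) {
        return SUCCESS;
    }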
The minor change to bug79033.phpt is unexpected, but should not matter
in practice.
Closes GH-6606.
Not keeping a reference will not result in use after free, because
curl protects against it, but it will result in a memory leak,
because curl_share_cleanup() will fail. We should make sure that
the share handle object stays alive as long as the curl handles
use it.
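A sketch of the intended ownership (struct and field names assumed):

    /* When CURLOPT_SHARE is set, take a reference on the share object ... */
    GC_ADDREF(&sh->std);
    ch->share = sh;

    /* ... and release it again in the easy handle's free_obj handler. */
    if (ch->share) {
        OBJ_RELEASE(&ch->share->std);
    }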
Currently we treat paths with null bytes as a TypeError, which is
incorrect, and rather inconsistent, as we treat empty paths as
ValueError. We do this because the error is generated by zpp and
it's easier to always throw TypeError there.
This changes the zpp implementation to throw a TypeError only if
the type is actually wrong and throw ValueError for null bytes.
The error message is also split accordingly, to be more precise.
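A simplified sketch of the new distinction (not the literal zpp code):

    if (Z_TYPE_P(arg) != IS_STRING) {
        /* wrong type: still a TypeError */
        zend_argument_type_error(arg_num, "must be of type string, %s given",
            zend_zval_type_name(arg));
    } else if (memchr(Z_STRVAL_P(arg), '\0', Z_STRLEN_P(arg)) != NULL) {
        /* correct type, invalid value: now a ValueError */
        zend_argument_value_error(arg_num, "must not contain any null bytes");
    }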
Closes GH-6094.
To allow exporting the php_curl.h header containing curl class
entries, split off a separate curl_private.h header with all the
implementation details.
We may move or expose additional APIs in php_curl.h on an as-needed
basis.
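The resulting layout, roughly (the exported symbols shown are illustrative):

    /* php_curl.h: public part, only what other extensions may need */
    extern zend_class_entry *curl_ce;
    extern zend_class_entry *curl_share_ce;

    /* curl_private.h: implementation details (php_curl struct, handlers,
     * internal helpers), includes the public header */
    #include "php_curl.h"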
A recurring pattern in old extensions: putting the whole source
code behind HAVE_EXTNAME. This is pointless, as the code is only
compiled if the extension is enabled.
This removes a couple of them, but not all.
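The pattern in question looks like this (illustrative):

    #ifdef HAVE_FOO    /* already guaranteed by the build system */
    /* ... entire source of ext/foo ... */
    #endif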
From an engine perspective, named parameters mainly add three
concepts:
* The SEND_* opcodes now accept a CONST op2, which is the
argument name. For now, it is looked up by linear scan and
runtime cached.
* This may leave UNDEF arguments on the stack. To avoid having
to deal with them in other places, a CHECK_UNDEF_ARGS opcode
is used to either replace them with defaults, or error.
* For variadic functions, the extra named arguments are collected into
EX(extra_named_params) and need to be freed based on
ZEND_CALL_HAS_EXTRA_NAMED_PARAMS, as sketched below.
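For the last point, the cleanup is roughly (sketch; execute_data is the
frame being torn down):

    if (ZEND_CALL_INFO(execute_data) & ZEND_CALL_HAS_EXTRA_NAMED_PARAMS) {
        zend_free_extra_named_params(EX(extra_named_params));
    }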
RFC: https://wiki.php.net/rfc/named_params
Closes GH-5357.
This properly addresses the issue from bug #79741. Silently
interpreting objects as mangled property tables is almost
always a bad idea.
Closes GH-5773.
While performing resource -> object migrations, we're adding
defensive classes that are final, non-serializable and non-clonable
(unless they are, of course). This patch adds a ZEND_ACC_NO_DYNAMIC_PROPERTIES
flag, that also forbids the creation of dynamic properties on these objects.
This is a subset of #3931 and targeted at internal usage only
(though may be extended to userland at some point in the future).
It's already possible to achieve this (what the removed
WeakRef/WeakMap code does), but there are some caveats: First, this
simple approach is only possible if the class has no declared
properties, otherwise it's necessary to special-case those
properties. Second, it's easy to make it overly strict, e.g. by
forbidding isset($obj->prop) as well. And finally, it requires a
lot of boilerplate code for each class.
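With the flag, the per-class boilerplate reduces to something like this
sketch (class name illustrative, methods omitted):

    zend_class_entry tmp;
    INIT_CLASS_ENTRY(tmp, "CurlHandle", NULL);
    curl_ce = zend_register_internal_class(&tmp);
    curl_ce->ce_flags |= ZEND_ACC_FINAL | ZEND_ACC_NO_DYNAMIC_PROPERTIES;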
Closes GH-5572.
(int) $curlHandle will return spl_object_id($curlHandle). This
makes curl handle objects backwards compatible with code using
(int) $curlHandle to obtain a resource ID.
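A sketch of a cast handler that produces this behavior (registration
details omitted):

    static zend_result curl_cast_object(zend_object *obj, zval *result, int type)
    {
        if (type == IS_LONG) {
            /* same value as spl_object_id($handle) */
            ZVAL_LONG(result, obj->handle);
            return SUCCESS;
        }
        return zend_std_cast_object_tostring(obj, result, type);
    }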
Closes GH-5743.
Unfortunately, some web servers (e.g. IIS) do not implement the (F)CGI
specifications correctly wrt. chunked uploads (i.e. Transfer-Encoding:
chunked), but instead pass -1 as CONTENT_LENGTH to the CGI
application. However, our (F)CGI SAPIs (i.e. cgi and cgi-fcgi) do not
support this.
Therefore we try to retrieve the stream size in advance and pass it to
`curl_mime_data_cb()` to prevent libcurl from doing chunked uploads.
This is basically the same approach that `curl_mime_filedata()`
implements, except that we are keeping already opened streams open for
the `read_cb()`.
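In outline (helper and variable names assumed):

    /* Stat the stream to learn its size up front, then register a read
     * callback with a known size so libcurl does not fall back to chunked
     * transfer encoding. */
    php_stream_statbuf ssb;
    curl_off_t datalen = -1;
    if (php_stream_stat(stream, &ssb) == 0 && ssb.sb.st_size >= 0) {
        datalen = (curl_off_t) ssb.sb.st_size;
    }
    curl_mime_data_cb(part, datalen, read_cb, seek_cb, free_cb, cb_arg);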
To cater to `curl_copy_handle()` of cURL handles with attached
`CURLFile`s, we must not attach an already opened stream, because the
stream may not be seekable, so we could not rewind it when the same
stream is going to be uploaded multiple times. Instead, we open the stream
lazily in the read callback.
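The read callback then opens the stream on first use; a sketch (struct and
field names assumed):

    struct mime_data_cb_arg {
        zend_string *filename;
        php_stream *stream;
    };

    static size_t read_cb(char *buffer, size_t size, size_t nitems, void *arg)
    {
        struct mime_data_cb_arg *cb_arg = arg;
        ssize_t numread;

        if (cb_arg->stream == NULL) {
            /* opened lazily, so a duplicated handle gets its own stream */
            cb_arg->stream = php_stream_open_wrapper(ZSTR_VAL(cb_arg->filename),
                    "rb", IGNORE_PATH, NULL);
            if (cb_arg->stream == NULL) {
                return CURL_READFUNC_ABORT;
            }
        }
        numread = php_stream_read(cb_arg->stream, buffer, nitems * size);
        return numread < 0 ? CURL_READFUNC_ABORT : (size_t) numread;
    }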
Since `curl_multi_perform()` processes easy handles asynchronously, we
have no control of the operation sequence. Since duplicated cURL
handles may be used with multi handles, we cannot use a single arg
structure, but actually have to rebuild the whole mime structure on
handle duplication and attach this to the new handle.
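Sketch of the duplication path (function and field names assumed):

    /* When the handle is copied, rebuild the mime structure from the saved
     * postfields zval and attach it to the duplicated easy handle. */
    if (Z_TYPE(ch->postfields) != IS_UNDEF) {
        if (build_mime_structure_from_hash(dupch, &ch->postfields) == FAILURE) {
            zend_throw_error(NULL, "Failed to duplicate mime post fields");
            return;
        }
    }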
In order to better test this behavior, we extend the test responder to
print the size of the upload, and patch the existing tests accordingly.
libcurl 7.29.0 was released almost eight years ago, so this
version can be assumed to be available practically everywhere. This bump
also allows us to get rid of quite a bit of conditional code and tests
catering to very old libcurl versions.
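Typical of the guards that can now be dropped (illustrative):

    #if LIBCURL_VERSION_NUM >= 0x071c00 /* 7.28.0 -- always true now */
        /* code that used to require a "newer" libcurl */
    #endif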