deps: update undici to 5.24.0

PR-URL: https://github.com/nodejs/node/pull/49559
Reviewed-By: Matteo Collina <matteo.collina@gmail.com>
Reviewed-By: Matthew Aitken <maitken033380023@gmail.com>
Author: Node.js GitHub Bot
Date: 2023-09-23 12:40:45 +01:00 (committed via GitHub)
Parent: da7962fd4d
Commit: ef062b981e
33 changed files with 2679 additions and 311 deletions


@@ -18,30 +18,34 @@ npm i undici
 ## Benchmarks
 
 The benchmark is a simple `hello world` [example](benchmarks/benchmark.js) using a
-number of unix sockets (connections) with a pipelining depth of 10 running on Node 16.
+number of unix sockets (connections) with a pipelining depth of 10 running on Node 20.6.0.
+The benchmarks below have the [simd](https://github.com/WebAssembly/simd) feature enabled.
 
 ### Connections 1
 
 | Tests               | Samples | Result        | Tolerance | Difference with slowest |
 |---------------------|---------|---------------|-----------|-------------------------|
-| http - no keepalive | 15      | 4.63 req/sec  | ± 2.77 %  | -                       |
-| http - keepalive    | 10      | 4.81 req/sec  | ± 2.16 %  | + 3.94 %                |
-| undici - stream     | 25      | 62.22 req/sec | ± 2.67 %  | + 1244.58 %             |
-| undici - dispatch   | 15      | 64.33 req/sec | ± 2.47 %  | + 1290.24 %             |
-| undici - request    | 15      | 66.08 req/sec | ± 2.48 %  | + 1327.88 %             |
-| undici - pipeline   | 10      | 66.13 req/sec | ± 1.39 %  | + 1329.08 %             |
+| http - no keepalive | 15      | 5.32 req/sec  | ± 2.61 %  | -                       |
+| http - keepalive    | 10      | 5.35 req/sec  | ± 2.47 %  | + 0.44 %                |
+| undici - fetch      | 15      | 41.85 req/sec | ± 2.49 %  | + 686.04 %              |
+| undici - pipeline   | 40      | 50.36 req/sec | ± 2.77 %  | + 845.92 %              |
+| undici - stream     | 15      | 60.58 req/sec | ± 2.75 %  | + 1037.72 %             |
+| undici - request    | 10      | 61.19 req/sec | ± 2.60 %  | + 1049.24 %             |
+| undici - dispatch   | 20      | 64.84 req/sec | ± 2.81 %  | + 1117.81 %             |
 
 ### Connections 50
 
 | Tests               | Samples | Result           | Tolerance | Difference with slowest |
 |---------------------|---------|------------------|-----------|-------------------------|
-| http - no keepalive | 50      | 3546.49 req/sec  | ± 2.90 %  | -                       |
-| http - keepalive    | 15      | 5692.67 req/sec  | ± 2.48 %  | + 60.52 %               |
-| undici - pipeline   | 25      | 8478.71 req/sec  | ± 2.62 %  | + 139.07 %              |
-| undici - request    | 20      | 9766.66 req/sec  | ± 2.79 %  | + 175.39 %              |
-| undici - stream     | 15      | 10109.74 req/sec | ± 2.94 %  | + 185.06 %              |
-| undici - dispatch   | 25      | 10949.73 req/sec | ± 2.54 %  | + 208.75 %              |
+| undici - fetch      | 30      | 2107.19 req/sec  | ± 2.69 %  | -                       |
+| http - no keepalive | 10      | 2698.90 req/sec  | ± 2.68 %  | + 28.08 %               |
+| http - keepalive    | 10      | 4639.49 req/sec  | ± 2.55 %  | + 120.17 %              |
+| undici - pipeline   | 40      | 6123.33 req/sec  | ± 2.97 %  | + 190.59 %              |
+| undici - stream     | 50      | 9426.51 req/sec  | ± 2.92 %  | + 347.35 %              |
+| undici - request    | 10      | 10162.88 req/sec | ± 2.13 %  | + 382.29 %              |
+| undici - dispatch   | 50      | 11191.11 req/sec | ± 2.98 %  | + 431.09 %              |
 
 ## Quick Start
@@ -432,6 +436,7 @@ and `undici.Agent`) which will enable the family autoselection algorithm when es
 * [__Ethan Arrowood__](https://github.com/ethan-arrowood), <https://www.npmjs.com/~ethan_arrowood>
 * [__Matteo Collina__](https://github.com/mcollina), <https://www.npmjs.com/~matteo.collina>
 * [__Robert Nagy__](https://github.com/ronag), <https://www.npmjs.com/~ronag>
+* [__Matthew Aitken__](https://github.com/KhafraDev), <https://www.npmjs.com/~khaf>
 
 ## License


@@ -17,11 +17,13 @@ Returns: `Client`
 
 ### Parameter: `ClientOptions`
 
+> ⚠️ Warning: The `H2` support is experimental.
+
 * **bodyTimeout** `number | null` (optional) - Default: `300e3` - The timeout after which a request will time out, in milliseconds. Monitors time between receiving body data. Use `0` to disable it entirely. Defaults to 300 seconds.
-* **headersTimeout** `number | null` (optional) - Default: `300e3` - The amount of time the parser will wait to receive the complete HTTP headers while not sending the request. Defaults to 300 seconds.
+* **headersTimeout** `number | null` (optional) - Default: `300e3` - The amount of time, in milliseconds, the parser will wait to receive the complete HTTP headers while not sending the request. Defaults to 300 seconds.
-* **keepAliveMaxTimeout** `number | null` (optional) - Default: `600e3` - The maximum allowed `keepAliveTimeout` when overridden by *keep-alive* hints from the server. Defaults to 10 minutes.
+* **keepAliveMaxTimeout** `number | null` (optional) - Default: `600e3` - The maximum allowed `keepAliveTimeout`, in milliseconds, when overridden by *keep-alive* hints from the server. Defaults to 10 minutes.
-* **keepAliveTimeout** `number | null` (optional) - Default: `4e3` - The timeout after which a socket without active requests will time out. Monitors time between activity on a connected socket. This value may be overridden by *keep-alive* hints from the server. See [MDN: HTTP - Headers - Keep-Alive directives](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Keep-Alive#directives) for more details. Defaults to 4 seconds.
+* **keepAliveTimeout** `number | null` (optional) - Default: `4e3` - The timeout, in milliseconds, after which a socket without active requests will time out. Monitors time between activity on a connected socket. This value may be overridden by *keep-alive* hints from the server. See [MDN: HTTP - Headers - Keep-Alive directives](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Keep-Alive#directives) for more details. Defaults to 4 seconds.
-* **keepAliveTimeoutThreshold** `number | null` (optional) - Default: `1e3` - A number subtracted from server *keep-alive* hints when overriding `keepAliveTimeout` to account for timing inaccuracies caused by e.g. transport latency. Defaults to 1 second.
+* **keepAliveTimeoutThreshold** `number | null` (optional) - Default: `1e3` - A number of milliseconds subtracted from server *keep-alive* hints when overriding `keepAliveTimeout` to account for timing inaccuracies caused by e.g. transport latency. Defaults to 1 second.
 * **maxHeaderSize** `number | null` (optional) - Default: `16384` - The maximum length of request headers in bytes. Defaults to 16KiB.
 * **maxResponseSize** `number | null` (optional) - Default: `-1` - The maximum length of response body in bytes. Set to `-1` to disable.
 * **pipelining** `number | null` (optional) - Default: `1` - The amount of concurrent requests to be sent over the single TCP/TLS connection according to [RFC7230](https://tools.ietf.org/html/rfc7230#section-6.3.2). Carefully consider your workload and environment before enabling concurrent requests as pipelining may reduce performance if used incorrectly. Pipelining is sensitive to network stack settings as well as head of line blocking caused by e.g. long running requests. Set to `0` to disable keep-alive connections.
 
@@ -30,6 +32,8 @@ Returns: `Client`
 * **interceptors** `{ Client: DispatchInterceptor[] }` - Default: `[RedirectInterceptor]` - A list of interceptors that are applied to the dispatch method. Additional logic can be applied (such as, but not limited to: 302 status code handling, authentication, cookies, compression and caching). Note that the behavior of interceptors is Experimental and might change at any given time.
 * **autoSelectFamily**: `boolean` (optional) - Default: depends on local Node version, on Node 18.13.0 and above is `false`. Enables a family autodetection algorithm that loosely implements section 5 of [RFC 8305](https://tools.ietf.org/html/rfc8305#section-5). See [here](https://nodejs.org/api/net.html#socketconnectoptions-connectlistener) for more details. This option is ignored if not supported by the current Node version.
 * **autoSelectFamilyAttemptTimeout**: `number` - Default: depends on local Node version, on Node 18.13.0 and above is `250`. The amount of time in milliseconds to wait for a connection attempt to finish before trying the next address when using the `autoSelectFamily` option. See [here](https://nodejs.org/api/net.html#socketconnectoptions-connectlistener) for more details.
+* **allowH2**: `boolean` - Default: `false`. Enables support for H2 if the server has assigned bigger priority to it through ALPN negotiation.
+* **maxConcurrentStreams**: `number` - Default: `100`. Dictates the maximum number of concurrent streams for a single H2 session. It can be overridden by a SETTINGS remote frame.
 
 #### Parameter: `ConnectOptions`
 
@@ -38,7 +42,7 @@ Furthermore, the following options can be passed:
 
 * **socketPath** `string | null` (optional) - Default: `null` - An IPC endpoint, either Unix domain socket or Windows named pipe.
 * **maxCachedSessions** `number | null` (optional) - Default: `100` - Maximum number of TLS cached sessions. Use 0 to disable TLS session caching.
-* **timeout** `number | null` (optional) - Default `10e3`
+* **timeout** `number | null` (optional) - In milliseconds. Default `10e3`.
 * **servername** `string | null` (optional)
 * **keepAlive** `boolean | null` (optional) - Default: `true` - TCP keep-alive enabled
 * **keepAliveInitialDelay** `number | null` (optional) - Default: `60000` - TCP keep-alive interval for the socket in milliseconds
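As a rough illustration of how the `ClientOptions` documented above fit together, a hypothetical options bag might look like this. The values are the documented defaults plus the new experimental H2 switches; this is a sketch, not a production recommendation:

```javascript
// Sketch only: a ClientOptions object using the defaults documented above,
// plus the new experimental H2 options.
const clientOptions = {
  bodyTimeout: 300e3,             // ms between body chunks before timing out
  headersTimeout: 300e3,          // ms to wait for complete response headers
  keepAliveTimeout: 4e3,          // ms an idle socket stays open
  keepAliveMaxTimeout: 600e3,     // ceiling for server keep-alive hints, in ms
  keepAliveTimeoutThreshold: 1e3, // ms subtracted from server hints
  maxHeaderSize: 16384,           // 16 KiB of request headers
  pipelining: 1,                  // one in-flight request per connection
  allowH2: false,                 // opt in to experimental HTTP/2 via ALPN
  maxConcurrentStreams: 100       // per-session H2 stream cap
}

// `300e3` is scientific notation: 300 * 1000 ms = 300 seconds.
console.log(clientOptions.bodyTimeout / 1000)
```

A `new Client(origin, clientOptions)` call would consume these; note that `allowH2` only has an effect when the server actually negotiates `h2` during the ALPN handshake.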


@@ -13,8 +13,8 @@ Every Tls option, see [here](https://nodejs.org/api/tls.html#tls_tls_connect_opt
 
 Furthermore, the following options can be passed:
 
 * **socketPath** `string | null` (optional) - Default: `null` - An IPC endpoint, either Unix domain socket or Windows named pipe.
-* **maxCachedSessions** `number | null` (optional) - Default: `100` - Maximum number of TLS cached sessions. Use 0 to disable TLS session caching. Default: 100.
+* **maxCachedSessions** `number | null` (optional) - Default: `100` - Maximum number of TLS cached sessions. Use 0 to disable TLS session caching. Default: `100`.
-* **timeout** `number | null` (optional) - Default `10e3`
+* **timeout** `number | null` (optional) - In milliseconds. Default `10e3`.
 * **servername** `string | null` (optional)
 
 Once you call `buildConnector`, it will return a connector function, which takes the following parameters.


@@ -200,8 +200,9 @@ Returns: `Boolean` - `false` if dispatcher is busy and further dispatch calls wo
 * **blocking** `boolean` (optional) - Default: `false` - Whether the response is expected to take a long time and would end up blocking the pipeline. When this is set to `true` further pipelining will be avoided on the same connection until headers have been received.
 * **upgrade** `string | null` (optional) - Default: `null` - Upgrade the request. Should be used to specify the kind of upgrade i.e. `'Websocket'`.
 * **bodyTimeout** `number | null` (optional) - The timeout after which a request will time out, in milliseconds. Monitors time between receiving body data. Use `0` to disable it entirely. Defaults to 300 seconds.
-* **headersTimeout** `number | null` (optional) - The amount of time the parser will wait to receive the complete HTTP headers while not sending the request. Defaults to 300 seconds.
+* **headersTimeout** `number | null` (optional) - The amount of time, in milliseconds, the parser will wait to receive the complete HTTP headers while not sending the request. Defaults to 300 seconds.
 * **throwOnError** `boolean` (optional) - Default: `false` - Whether Undici should throw an error upon receiving a 4xx or 5xx response from the server.
+* **expectContinue** `boolean` (optional) - Default: `false` - For H2, it appends the expect: 100-continue header, and halts the request body until a 100-continue is received from the remote server.
 
 #### Parameter: `DispatchHandler`


@@ -35,7 +35,8 @@ const mockPool = mockAgent.get('http://localhost:3000')
 
 ### `MockPool.intercept(options)`
 
-This method defines the interception rules for matching against requests for a MockPool. We can intercept multiple times on a single instance.
+This method defines the interception rules for matching against requests for a MockPool. We can intercept multiple times on a single instance, but each intercept is only used once.
+For example if you expect to make 2 requests inside a test, you need to call `intercept()` twice. Assuming you use `disableNetConnect()` you will get a `MockNotMatchedError` on the second request when you only call `intercept()` once.
 
 When defining interception rules, all the rules must pass for a request to be intercepted. If a request is not intercepted, a real request will be attempted.
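The one-shot consumption described above can be pictured with a toy matcher queue. This is only an illustration of the semantics, not undici's implementation; `ToyMockPool` and its method shapes are invented for the sketch:

```javascript
// Toy model of MockPool.intercept() semantics: each registered intercept
// matches at most one request, then is consumed.
class MockNotMatchedError extends Error {}

class ToyMockPool {
  constructor () {
    this.intercepts = []
  }

  intercept (options) {
    this.intercepts.push(options)
    return this
  }

  request ({ path, method }) {
    const i = this.intercepts.findIndex(
      (o) => o.path === path && o.method === method
    )
    if (i === -1) {
      // With net connect disabled, an unmatched request fails.
      throw new MockNotMatchedError(`no intercept for ${method} ${path}`)
    }
    // Consume the intercept so it cannot match a second request.
    const [matched] = this.intercepts.splice(i, 1)
    return matched.reply
  }
}

const pool = new ToyMockPool()
pool.intercept({ path: '/foo', method: 'GET', reply: 'ok' })

pool.request({ path: '/foo', method: 'GET' }) // first request matches
// A second identical request would now throw MockNotMatchedError,
// because the single registered intercept has been consumed.
```

The fix in a real test is the same as described above: register one `intercept()` per expected request.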


@@ -2,9 +2,9 @@
 
 const fetchImpl = require('./lib/fetch').fetch
 
-module.exports.fetch = async function fetch (resource) {
+module.exports.fetch = async function fetch (resource, init = undefined) {
   try {
-    return await fetchImpl(...arguments)
+    return await fetchImpl(resource, init)
   } catch (err) {
     Error.captureStackTrace(err, this)
     throw err
@@ -14,3 +14,4 @@ module.exports.FormData = require('./lib/fetch/formdata').FormData
 module.exports.Headers = require('./lib/fetch/headers').Headers
 module.exports.Response = require('./lib/fetch/response').Response
 module.exports.Request = require('./lib/fetch/request').Request
+module.exports.WebSocket = require('./lib/websocket/websocket').WebSocket
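The `Error.captureStackTrace(err, this)` call in the wrapper above exists so rejections surface with the caller's frames rather than library internals. The pattern can be sketched generically; `fetchLike` and `inner` are invented names standing in for the exported wrapper and the internal implementation:

```javascript
// Generic sketch of the stack-trimming wrapper pattern used in index.js.
async function inner () {
  // Deep internal frame that should not appear in the user-facing stack.
  throw new TypeError('failed')
}

async function fetchLike (resource, init = undefined) {
  try {
    return await inner(resource, init)
  } catch (err) {
    if (typeof err === 'object') {
      // Replace the throw-site stack (which points into internals) with one
      // captured here, hiding fetchLike itself and everything above it.
      Error.captureStackTrace(err, fetchLike)
    }
    throw err
  }
}
```

The `typeof err === 'object'` guard mirrors the change in `lib/fetch`: thrown values are not guaranteed to be `Error` objects, and `captureStackTrace` would throw on a primitive.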


@@ -1,57 +1,3 @@
-import Dispatcher from './types/dispatcher'
-import { setGlobalDispatcher, getGlobalDispatcher } from './types/global-dispatcher'
-import { setGlobalOrigin, getGlobalOrigin } from './types/global-origin'
-import Pool from './types/pool'
-import { RedirectHandler, DecoratorHandler } from './types/handlers'
-import BalancedPool from './types/balanced-pool'
-import Client from './types/client'
-import buildConnector from './types/connector'
-import errors from './types/errors'
-import Agent from './types/agent'
-import MockClient from './types/mock-client'
-import MockPool from './types/mock-pool'
-import MockAgent from './types/mock-agent'
-import mockErrors from './types/mock-errors'
-import ProxyAgent from './types/proxy-agent'
-import { request, pipeline, stream, connect, upgrade } from './types/api'
-
-export * from './types/cookies'
-export * from './types/fetch'
-export * from './types/file'
-export * from './types/filereader'
-export * from './types/formdata'
-export * from './types/diagnostics-channel'
-export * from './types/websocket'
-export * from './types/content-type'
-export * from './types/cache'
-export { Interceptable } from './types/mock-interceptor'
-
-export { Dispatcher, BalancedPool, Pool, Client, buildConnector, errors, Agent, request, stream, pipeline, connect, upgrade, setGlobalDispatcher, getGlobalDispatcher, setGlobalOrigin, getGlobalOrigin, MockClient, MockPool, MockAgent, mockErrors, ProxyAgent, RedirectHandler, DecoratorHandler }
+export * from './types/index'
+import Undici from './types/index'
 export default Undici
-
-declare namespace Undici {
-  var Dispatcher: typeof import('./types/dispatcher').default
-  var Pool: typeof import('./types/pool').default;
-  var RedirectHandler: typeof import('./types/handlers').RedirectHandler
-  var DecoratorHandler: typeof import('./types/handlers').DecoratorHandler
-  var createRedirectInterceptor: typeof import('./types/interceptors').createRedirectInterceptor
-  var BalancedPool: typeof import('./types/balanced-pool').default;
-  var Client: typeof import('./types/client').default;
-  var buildConnector: typeof import('./types/connector').default;
-  var errors: typeof import('./types/errors').default;
-  var Agent: typeof import('./types/agent').default;
-  var setGlobalDispatcher: typeof import('./types/global-dispatcher').setGlobalDispatcher;
-  var getGlobalDispatcher: typeof import('./types/global-dispatcher').getGlobalDispatcher;
-  var request: typeof import('./types/api').request;
-  var stream: typeof import('./types/api').stream;
-  var pipeline: typeof import('./types/api').pipeline;
-  var connect: typeof import('./types/api').connect;
-  var upgrade: typeof import('./types/api').upgrade;
-  var MockClient: typeof import('./types/mock-client').default;
-  var MockPool: typeof import('./types/mock-pool').default;
-  var MockAgent: typeof import('./types/mock-agent').default;
-  var mockErrors: typeof import('./types/mock-errors').default;
-  var fetch: typeof import('./types/fetch').fetch;
-  var caches: typeof import('./types/cache').caches;
-}


@@ -106,7 +106,10 @@ if (util.nodeMajor > 16 || (util.nodeMajor === 16 && util.nodeMinor >= 8)) {
       try {
         return await fetchImpl(...arguments)
       } catch (err) {
-        Error.captureStackTrace(err, this)
+        if (typeof err === 'object') {
+          Error.captureStackTrace(err, this)
+        }
         throw err
       }
     }


@@ -1,7 +1,7 @@
 'use strict'
 
-const { InvalidArgumentError, RequestAbortedError, SocketError } = require('../core/errors')
 const { AsyncResource } = require('async_hooks')
+const { InvalidArgumentError, RequestAbortedError, SocketError } = require('../core/errors')
 const util = require('../core/util')
 const { addSignal, removeSignal } = require('./abort-signal')
@@ -50,7 +50,13 @@ class ConnectHandler extends AsyncResource {
     removeSignal(this)
 
     this.callback = null
-    const headers = this.responseHeaders === 'raw' ? util.parseRawHeaders(rawHeaders) : util.parseHeaders(rawHeaders)
+
+    let headers = rawHeaders
+    // rawHeaders is `null` when the connection was established over an HTTP2Session
+    if (headers != null) {
+      headers = this.responseHeaders === 'raw' ? util.parseRawHeaders(rawHeaders) : util.parseHeaders(rawHeaders)
+    }
 
     this.runInAsyncScope(callback, null, null, {
       statusCode,
       headers,
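For context on what the null-check branch above skips: the HTTP/1 parser hands the handler headers as a flat `[name, value, name, value, …]` array that `util.parseHeaders` turns into an object, while an `HTTP2Session` hands it `null`. A minimal stand-in for the parsing step (not undici's exact implementation, which also handles repeated and binary headers) looks like:

```javascript
// Minimal stand-in for util.parseHeaders: the HTTP/1 parser emits headers
// as a flat [name1, value1, name2, value2, ...] array.
function parseHeadersLike (rawHeaders) {
  const headers = {}
  for (let i = 0; i < rawHeaders.length; i += 2) {
    // Header names are case-insensitive, so normalize to lower case.
    headers[String(rawHeaders[i]).toLowerCase()] = String(rawHeaders[i + 1])
  }
  return headers
}

parseHeadersLike(['Content-Type', 'text/plain', 'X-Trace', '1'])
// → { 'content-type': 'text/plain', 'x-trace': '1' }
```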


@@ -95,7 +95,6 @@ class RequestHandler extends AsyncResource {
 
     this.callback = null
     this.res = body
-
     if (callback !== null) {
       if (this.throwOnError && statusCode >= 400) {
         this.runInAsyncScope(getResolveErrorBodyCallback, null,


@@ -379,11 +379,7 @@ class Cache {
         const reader = stream.getReader()
 
         // 11.3
-        readAllBytes(
-          reader,
-          (bytes) => bodyReadPromise.resolve(bytes),
-          (error) => bodyReadPromise.reject(error)
-        )
+        readAllBytes(reader).then(bodyReadPromise.resolve, bodyReadPromise.reject)
       } else {
         bodyReadPromise.resolve(undefined)
       }
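The refactor above replaces resolve/reject callbacks with a promise-returning `readAllBytes`. A simplified sketch of what such a function does, assuming a reader that follows the Web Streams `read()` contract (this is not undici's exact implementation):

```javascript
// Simplified promise-based readAllBytes: drain a reader that yields
// { done, value } from read() and concatenate the chunks into one Buffer.
async function readAllBytesLike (reader) {
  const chunks = []
  while (true) {
    const { done, value } = await reader.read()
    if (done) {
      return Buffer.concat(chunks)
    }
    chunks.push(value)
  }
}
```

Returning a promise lets the caller wire it straight into a deferred, exactly as the diff does with `readAllBytes(reader).then(bodyReadPromise.resolve, bodyReadPromise.reject)`.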


@@ -6,6 +6,7 @@
 
 const assert = require('assert')
 const net = require('net')
+const { pipeline } = require('stream')
 const util = require('./core/util')
 const timers = require('./timers')
 const Request = require('./core/request')
@@ -67,8 +68,40 @@ const {
   kDispatch,
   kInterceptors,
   kLocalAddress,
-  kMaxResponseSize
+  kMaxResponseSize,
+  kHTTPConnVersion,
+  // HTTP2
+  kHost,
+  kHTTP2Session,
+  kHTTP2SessionState,
+  kHTTP2BuildRequest,
+  kHTTP2CopyHeaders,
+  kHTTP1BuildRequest
 } = require('./core/symbols')
 
+/** @type {import('http2')} */
+let http2
+try {
+  http2 = require('http2')
+} catch {
+  // @ts-ignore
+  http2 = { constants: {} }
+}
+
+const {
+  constants: {
+    HTTP2_HEADER_AUTHORITY,
+    HTTP2_HEADER_METHOD,
+    HTTP2_HEADER_PATH,
+    HTTP2_HEADER_CONTENT_LENGTH,
+    HTTP2_HEADER_EXPECT,
+    HTTP2_HEADER_STATUS
+  }
+} = http2
+
+// Experimental
+let h2ExperimentalWarned = false
+
 const FastBuffer = Buffer[Symbol.species]
 
 const kClosedResolve = Symbol('kClosedResolve')
@@ -122,7 +155,10 @@ class Client extends DispatcherBase {
       localAddress,
       maxResponseSize,
       autoSelectFamily,
-      autoSelectFamilyAttemptTimeout
+      autoSelectFamilyAttemptTimeout,
+      // h2
+      allowH2,
+      maxConcurrentStreams
     } = {}) {
     super()
@@ -205,10 +241,20 @@ class Client extends DispatcherBase {
       throw new InvalidArgumentError('autoSelectFamilyAttemptTimeout must be a positive number')
     }
 
+    // h2
+    if (allowH2 != null && typeof allowH2 !== 'boolean') {
+      throw new InvalidArgumentError('allowH2 must be a valid boolean value')
+    }
+
+    if (maxConcurrentStreams != null && (typeof maxConcurrentStreams !== 'number' || maxConcurrentStreams < 1)) {
+      throw new InvalidArgumentError('maxConcurrentStreams must be a positive integer, greater than 0')
+    }
+
     if (typeof connect !== 'function') {
       connect = buildConnector({
         ...tls,
         maxCachedSessions,
+        allowH2,
         socketPath,
         timeout: connectTimeout,
         ...(util.nodeHasAutoSelectFamily && autoSelectFamily ? { autoSelectFamily, autoSelectFamilyAttemptTimeout } : undefined),
@@ -240,6 +286,18 @@ class Client extends DispatcherBase {
     this[kMaxRequests] = maxRequestsPerClient
     this[kClosedResolve] = null
     this[kMaxResponseSize] = maxResponseSize > -1 ? maxResponseSize : -1
+    this[kHTTPConnVersion] = 'h1'
+
+    // HTTP/2
+    this[kHTTP2Session] = null
+    this[kHTTP2SessionState] = !allowH2
+      ? null
+      : {
+        // streams: null, // Fixed queue of streams - For future support of `push`
+        openStreams: 0, // Keep track of them to decide whether or not to unref the session
+        maxConcurrentStreams: maxConcurrentStreams != null ? maxConcurrentStreams : 100 // Max peerConcurrentStreams for a Node h2 server
+      }
+    this[kHost] = `${this[kUrl].hostname}${this[kUrl].port ? `:${this[kUrl].port}` : ''}`
 
     // kQueue is built up of 3 sections separated by
     // the kRunningIdx and kPendingIdx indices.
@@ -298,7 +356,9 @@ class Client extends DispatcherBase {
   [kDispatch] (opts, handler) {
     const origin = opts.origin || this[kUrl].origin
 
-    const request = new Request(origin, opts, handler)
+    const request = this[kHTTPConnVersion] === 'h2'
+      ? Request[kHTTP2BuildRequest](origin, opts, handler)
+      : Request[kHTTP1BuildRequest](origin, opts, handler)
 
     this[kQueue].push(request)
     if (this[kResuming]) {
@@ -319,6 +379,8 @@ class Client extends DispatcherBase {
   }
 
   async [kClose] () {
+    // TODO: for H2 we need to gracefully flush the remaining enqueued
+    // requests and close each stream.
     return new Promise((resolve) => {
       if (!this[kSize]) {
         resolve(null)
@@ -345,6 +407,12 @@ class Client extends DispatcherBase {
         resolve()
       }
 
+      if (this[kHTTP2Session] != null) {
+        util.destroy(this[kHTTP2Session], err)
+        this[kHTTP2Session] = null
+        this[kHTTP2SessionState] = null
+      }
+
       if (!this[kSocket]) {
         queueMicrotask(callback)
       } else {
@@ -356,6 +424,64 @@
   }
 }
 
+function onHttp2SessionError (err) {
+  assert(err.code !== 'ERR_TLS_CERT_ALTNAME_INVALID')
+
+  this[kSocket][kError] = err
+
+  onError(this[kClient], err)
+}
+
+function onHttp2FrameError (type, code, id) {
+  const err = new InformationalError(`HTTP/2: "frameError" received - type ${type}, code ${code}`)
+
+  if (id === 0) {
+    this[kSocket][kError] = err
+    onError(this[kClient], err)
+  }
+}
+
+function onHttp2SessionEnd () {
+  util.destroy(this, new SocketError('other side closed'))
+  util.destroy(this[kSocket], new SocketError('other side closed'))
+}
+
+function onHTTP2GoAway (code) {
+  const client = this[kClient]
+  const err = new InformationalError(`HTTP/2: "GOAWAY" frame received with code ${code}`)
+  client[kSocket] = null
+  client[kHTTP2Session] = null
+
+  if (client.destroyed) {
+    assert(this[kPending] === 0)
+
+    // Fail entire queue.
+    const requests = client[kQueue].splice(client[kRunningIdx])
+    for (let i = 0; i < requests.length; i++) {
+      const request = requests[i]
+      errorRequest(this, request, err)
+    }
+  } else if (client[kRunning] > 0) {
+    // Fail head of pipeline.
+    const request = client[kQueue][client[kRunningIdx]]
+    client[kQueue][client[kRunningIdx]++] = null
+
+    errorRequest(client, request, err)
+  }
+
+  client[kPendingIdx] = client[kRunningIdx]
+
+  assert(client[kRunning] === 0)
+
+  client.emit('disconnect',
+    client[kUrl],
+    [client],
+    err
+  )
+
+  resume(client)
+}
+
 const constants = require('./llhttp/constants')
 const createRedirectInterceptor = require('./interceptor/redirectInterceptor')
 const EMPTY_BUF = Buffer.alloc(0)
@@ -946,16 +1072,18 @@ function onSocketReadable () {
 }
 
 function onSocketError (err) {
-  const { [kParser]: parser } = this
+  const { [kClient]: client, [kParser]: parser } = this
 
   assert(err.code !== 'ERR_TLS_CERT_ALTNAME_INVALID')
 
-  // On Mac OS, we get an ECONNRESET even if there is a full body to be forwarded
-  // to the user.
-  if (err.code === 'ECONNRESET' && parser.statusCode && !parser.shouldKeepAlive) {
-    // We treat all incoming data so far as a valid response.
-    parser.onMessageComplete()
-    return
+  if (client[kHTTPConnVersion] !== 'h2') {
+    // On Mac OS, we get an ECONNRESET even if there is a full body to be forwarded
+    // to the user.
+    if (err.code === 'ECONNRESET' && parser.statusCode && !parser.shouldKeepAlive) {
+      // We treat all incoming data so far as a valid response.
+      parser.onMessageComplete()
+      return
+    }
   }
 
   this[kError] = err
@@ -984,28 +1112,32 @@ function onError (client, err) {
 }
 
 function onSocketEnd () {
-  const { [kParser]: parser } = this
+  const { [kParser]: parser, [kClient]: client } = this
 
-  if (parser.statusCode && !parser.shouldKeepAlive) {
-    // We treat all incoming data so far as a valid response.
-    parser.onMessageComplete()
-    return
+  if (client[kHTTPConnVersion] !== 'h2') {
+    if (parser.statusCode && !parser.shouldKeepAlive) {
+      // We treat all incoming data so far as a valid response.
+      parser.onMessageComplete()
+      return
+    }
   }
 
   util.destroy(this, new SocketError('other side closed', util.getSocketInfo(this)))
 }
 
 function onSocketClose () {
-  const { [kClient]: client } = this
+  const { [kClient]: client, [kParser]: parser } = this
 
-  if (!this[kError] && this[kParser].statusCode && !this[kParser].shouldKeepAlive) {
-    // We treat all incoming data so far as a valid response.
-    this[kParser].onMessageComplete()
+  if (client[kHTTPConnVersion] === 'h1' && parser) {
+    if (!this[kError] && parser.statusCode && !parser.shouldKeepAlive) {
+      // We treat all incoming data so far as a valid response.
+      parser.onMessageComplete()
+    }
+
+    this[kParser].destroy()
+    this[kParser] = null
   }
-
-  this[kParser].destroy()
-  this[kParser] = null
 
   const err = this[kError] || new SocketError('closed', util.getSocketInfo(this))
 
   client[kSocket] = null
@@ -1092,24 +1224,54 @@ async function connect (client) {
return return
} }
if (!llhttpInstance) {
llhttpInstance = await llhttpPromise
llhttpPromise = null
}
client[kConnecting] = false client[kConnecting] = false
assert(socket) assert(socket)
socket[kNoRef] = false const isH2 = socket.alpnProtocol === 'h2'
socket[kWriting] = false if (isH2) {
socket[kReset] = false if (!h2ExperimentalWarned) {
socket[kBlocking] = false h2ExperimentalWarned = true
socket[kError] = null process.emitWarning('H2 support is experimental, expect it to change at any time.', {
socket[kParser] = new Parser(client, socket, llhttpInstance) code: 'UNDICI-H2'
socket[kClient] = client })
}
const session = http2.connect(client[kUrl], {
createConnection: () => socket,
peerMaxConcurrentStreams: client[kHTTP2SessionState].maxConcurrentStreams
})
client[kHTTPConnVersion] = 'h2'
session[kClient] = client
session[kSocket] = socket
session.on('error', onHttp2SessionError)
session.on('frameError', onHttp2FrameError)
session.on('end', onHttp2SessionEnd)
session.on('goaway', onHTTP2GoAway)
session.on('close', onSocketClose)
session.unref()
client[kHTTP2Session] = session
socket[kHTTP2Session] = session
} else {
if (!llhttpInstance) {
llhttpInstance = await llhttpPromise
llhttpPromise = null
}
socket[kNoRef] = false
socket[kWriting] = false
socket[kReset] = false
socket[kBlocking] = false
socket[kParser] = new Parser(client, socket, llhttpInstance)
}
socket[kCounter] = 0 socket[kCounter] = 0
socket[kMaxRequests] = client[kMaxRequests] socket[kMaxRequests] = client[kMaxRequests]
socket[kClient] = client
socket[kError] = null
socket socket
.on('error', onSocketError) .on('error', onSocketError)
.on('readable', onSocketReadable) .on('readable', onSocketReadable)
@@ -1208,7 +1370,7 @@ function _resume (client, sync) {
const socket = client[kSocket] const socket = client[kSocket]
if (socket && !socket.destroyed) { if (socket && !socket.destroyed && socket.alpnProtocol !== 'h2') {
if (client[kSize] === 0) { if (client[kSize] === 0) {
if (!socket[kNoRef] && socket.unref) { if (!socket[kNoRef] && socket.unref) {
socket.unref() socket.unref()
@@ -1273,7 +1435,7 @@ function _resume (client, sync) {
return return
} }
if (!socket) { if (!socket && !client[kHTTP2Session]) {
connect(client) connect(client)
return return
} }
@@ -1334,6 +1496,11 @@ function _resume (client, sync) {
} }
function write (client, request) { function write (client, request) {
if (client[kHTTPConnVersion] === 'h2') {
writeH2(client, client[kHTTP2Session], request)
return
}
const { body, method, path, host, upgrade, headers, blocking, reset } = request const { body, method, path, host, upgrade, headers, blocking, reset } = request
// https://tools.ietf.org/html/rfc7231#section-4.3.1 // https://tools.ietf.org/html/rfc7231#section-4.3.1
@@ -1489,9 +1656,291 @@ function write (client, request) {
return true return true
} }
function writeStream ({ body, client, request, socket, contentLength, header, expectsPayload }) { function writeH2 (client, session, request) {
const { body, method, path, host, upgrade, expectContinue, signal, headers: reqHeaders } = request
let headers
if (typeof reqHeaders === 'string') headers = Request[kHTTP2CopyHeaders](reqHeaders.trim())
else headers = reqHeaders
if (upgrade) {
errorRequest(client, request, new Error('Upgrade not supported for H2'))
return false
}
try {
// TODO(HTTP/2): Should we call onConnect immediately or on stream ready event?
request.onConnect((err) => {
if (request.aborted || request.completed) {
return
}
errorRequest(client, request, err || new RequestAbortedError())
})
} catch (err) {
errorRequest(client, request, err)
}
if (request.aborted) {
return false
}
let stream
const h2State = client[kHTTP2SessionState]
headers[HTTP2_HEADER_AUTHORITY] = host || client[kHost]
headers[HTTP2_HEADER_PATH] = path
if (method === 'CONNECT') {
session.ref()
// We are already connected, streams are pending; the first request
// will create a new stream. We trigger a request to create the stream and wait until
// the `ready` event is triggered
// We disable endStream to allow the user to write to the stream
stream = session.request(headers, { endStream: false, signal })
if (stream.id && !stream.pending) {
request.onUpgrade(null, null, stream)
++h2State.openStreams
} else {
stream.once('ready', () => {
request.onUpgrade(null, null, stream)
++h2State.openStreams
})
}
stream.once('close', () => {
h2State.openStreams -= 1
// TODO(HTTP/2): unref only if current streams count is 0
if (h2State.openStreams === 0) session.unref()
})
return true
} else {
headers[HTTP2_HEADER_METHOD] = method
}
// https://tools.ietf.org/html/rfc7231#section-4.3.1
// https://tools.ietf.org/html/rfc7231#section-4.3.2
// https://tools.ietf.org/html/rfc7231#section-4.3.5
// Sending a payload body on a request that does not
// expect it can cause undefined behavior on some
// servers and corrupt connection state. Do not
// re-use the connection for further requests.
const expectsPayload = (
method === 'PUT' ||
method === 'POST' ||
method === 'PATCH'
)
if (body && typeof body.read === 'function') {
// Try to read EOF in order to get length.
body.read(0)
}
let contentLength = util.bodyLength(body)
if (contentLength == null) {
contentLength = request.contentLength
}
if (contentLength === 0 || !expectsPayload) {
// https://tools.ietf.org/html/rfc7230#section-3.3.2
// A user agent SHOULD NOT send a Content-Length header field when
// the request message does not contain a payload body and the method
// semantics do not anticipate such a body.
contentLength = null
}
if (request.contentLength != null && request.contentLength !== contentLength) {
if (client[kStrictContentLength]) {
errorRequest(client, request, new RequestContentLengthMismatchError())
return false
}
process.emitWarning(new RequestContentLengthMismatchError())
}
if (contentLength != null) {
assert(body, 'no body must not have content length')
headers[HTTP2_HEADER_CONTENT_LENGTH] = `${contentLength}`
}
session.ref()
const shouldEndStream = method === 'GET' || method === 'HEAD'
if (expectContinue) {
headers[HTTP2_HEADER_EXPECT] = '100-continue'
/**
* @type {import('node:http2').ClientHttp2Stream}
*/
stream = session.request(headers, { endStream: shouldEndStream, signal })
stream.once('continue', writeBodyH2)
} else {
/** @type {import('node:http2').ClientHttp2Stream} */
stream = session.request(headers, {
endStream: shouldEndStream,
signal
})
writeBodyH2()
}
// Increment the counter as we now have several streams open
++h2State.openStreams
stream.once('response', headers => {
if (request.onHeaders(Number(headers[HTTP2_HEADER_STATUS]), headers, stream.resume.bind(stream), '') === false) {
stream.pause()
}
})
stream.once('end', () => {
request.onComplete([])
})
stream.on('data', (chunk) => {
if (request.onData(chunk) === false) stream.pause()
})
stream.once('close', () => {
h2State.openStreams -= 1
// TODO(HTTP/2): unref only if current streams count is 0
if (h2State.openStreams === 0) session.unref()
})
stream.once('error', function (err) {
if (client[kHTTP2Session] && !client[kHTTP2Session].destroyed && !this.closed && !this.destroyed) {
h2State.streams -= 1
util.destroy(stream, err)
}
})
stream.once('frameError', (type, code) => {
const err = new InformationalError(`HTTP/2: "frameError" received - type ${type}, code ${code}`)
errorRequest(client, request, err)
if (client[kHTTP2Session] && !client[kHTTP2Session].destroyed && !this.closed && !this.destroyed) {
h2State.streams -= 1
util.destroy(stream, err)
}
})
// stream.on('aborted', () => {
// // TODO(HTTP/2): Support aborted
// })
// stream.on('timeout', () => {
// // TODO(HTTP/2): Support timeout
// })
// stream.on('push', headers => {
// // TODO(HTTP/2): Support push
// })
// stream.on('trailers', headers => {
// // TODO(HTTP/2): Support trailers
// })
return true
function writeBodyH2 () {
/* istanbul ignore else: assertion */
if (!body) {
request.onRequestSent()
} else if (util.isBuffer(body)) {
assert(contentLength === body.byteLength, 'buffer body must have content length')
stream.cork()
stream.write(body)
stream.uncork()
request.onBodySent(body)
request.onRequestSent()
} else if (util.isBlobLike(body)) {
if (typeof body.stream === 'function') {
writeIterable({
client,
request,
contentLength,
h2stream: stream,
expectsPayload,
body: body.stream(),
socket: client[kSocket],
header: ''
})
} else {
writeBlob({
body,
client,
request,
contentLength,
expectsPayload,
h2stream: stream,
header: '',
socket: client[kSocket]
})
}
} else if (util.isStream(body)) {
writeStream({
body,
client,
request,
contentLength,
expectsPayload,
socket: client[kSocket],
h2stream: stream,
header: ''
})
} else if (util.isIterable(body)) {
writeIterable({
body,
client,
request,
contentLength,
expectsPayload,
header: '',
h2stream: stream,
socket: client[kSocket]
})
} else {
assert(false)
}
}
}
function writeStream ({ h2stream, body, client, request, socket, contentLength, header, expectsPayload }) {
assert(contentLength !== 0 || client[kRunning] === 0, 'stream body cannot be pipelined') assert(contentLength !== 0 || client[kRunning] === 0, 'stream body cannot be pipelined')
if (client[kHTTPConnVersion] === 'h2') {
// For HTTP/2, it is enough to pipe the stream
const pipe = pipeline(
body,
h2stream,
(err) => {
if (err) {
util.destroy(body, err)
util.destroy(h2stream, err)
} else {
request.onRequestSent()
}
}
)
pipe.on('data', onPipeData)
pipe.once('end', () => {
pipe.removeListener('data', onPipeData)
util.destroy(pipe)
})
function onPipeData (chunk) {
request.onBodySent(chunk)
}
return
}
let finished = false let finished = false
const writer = new AsyncWriter({ socket, request, contentLength, client, expectsPayload, header }) const writer = new AsyncWriter({ socket, request, contentLength, client, expectsPayload, header })
@@ -1572,9 +2021,10 @@ function writeStream ({ body, client, request, socket, contentLength, header, ex
.on('error', onFinished) .on('error', onFinished)
} }
async function writeBlob ({ body, client, request, socket, contentLength, header, expectsPayload }) { async function writeBlob ({ h2stream, body, client, request, socket, contentLength, header, expectsPayload }) {
assert(contentLength === body.size, 'blob body must have content length') assert(contentLength === body.size, 'blob body must have content length')
const isH2 = client[kHTTPConnVersion] === 'h2'
try { try {
if (contentLength != null && contentLength !== body.size) { if (contentLength != null && contentLength !== body.size) {
throw new RequestContentLengthMismatchError() throw new RequestContentLengthMismatchError()
@@ -1582,10 +2032,16 @@ async function writeBlob ({ body, client, request, socket, contentLength, header
const buffer = Buffer.from(await body.arrayBuffer()) const buffer = Buffer.from(await body.arrayBuffer())
socket.cork() if (isH2) {
socket.write(`${header}content-length: ${contentLength}\r\n\r\n`, 'latin1') h2stream.cork()
socket.write(buffer) h2stream.write(buffer)
socket.uncork() h2stream.uncork()
} else {
socket.cork()
socket.write(`${header}content-length: ${contentLength}\r\n\r\n`, 'latin1')
socket.write(buffer)
socket.uncork()
}
request.onBodySent(buffer) request.onBodySent(buffer)
request.onRequestSent() request.onRequestSent()
@@ -1596,11 +2052,11 @@ async function writeBlob ({ body, client, request, socket, contentLength, header
resume(client) resume(client)
} catch (err) { } catch (err) {
util.destroy(socket, err) util.destroy(isH2 ? h2stream : socket, err)
} }
} }
async function writeIterable ({ body, client, request, socket, contentLength, header, expectsPayload }) { async function writeIterable ({ h2stream, body, client, request, socket, contentLength, header, expectsPayload }) {
assert(contentLength !== 0 || client[kRunning] === 0, 'iterator body cannot be pipelined') assert(contentLength !== 0 || client[kRunning] === 0, 'iterator body cannot be pipelined')
let callback = null let callback = null
@@ -1622,6 +2078,33 @@ async function writeIterable ({ body, client, request, socket, contentLength, he
} }
}) })
if (client[kHTTPConnVersion] === 'h2') {
h2stream
.on('close', onDrain)
.on('drain', onDrain)
try {
// It's up to the user to somehow abort the async iterable.
for await (const chunk of body) {
if (socket[kError]) {
throw socket[kError]
}
if (!h2stream.write(chunk)) {
await waitForDrain()
}
}
} catch (err) {
h2stream.destroy(err)
} finally {
h2stream
.off('close', onDrain)
.off('drain', onDrain)
}
return
}
socket socket
.on('close', onDrain) .on('close', onDrain)
.on('drain', onDrain) .on('drain', onDrain)
@@ -71,7 +71,7 @@ if (global.FinalizationRegistry) {
} }
} }
function buildConnector ({ maxCachedSessions, socketPath, timeout, ...opts }) { function buildConnector ({ allowH2, maxCachedSessions, socketPath, timeout, ...opts }) {
if (maxCachedSessions != null && (!Number.isInteger(maxCachedSessions) || maxCachedSessions < 0)) { if (maxCachedSessions != null && (!Number.isInteger(maxCachedSessions) || maxCachedSessions < 0)) {
throw new InvalidArgumentError('maxCachedSessions must be a positive integer or zero') throw new InvalidArgumentError('maxCachedSessions must be a positive integer or zero')
} }
@@ -79,7 +79,7 @@ function buildConnector ({ maxCachedSessions, socketPath, timeout, ...opts }) {
const options = { path: socketPath, ...opts } const options = { path: socketPath, ...opts }
const sessionCache = new SessionCache(maxCachedSessions == null ? 100 : maxCachedSessions) const sessionCache = new SessionCache(maxCachedSessions == null ? 100 : maxCachedSessions)
timeout = timeout == null ? 10e3 : timeout timeout = timeout == null ? 10e3 : timeout
allowH2 = allowH2 != null ? allowH2 : false
return function connect ({ hostname, host, protocol, port, servername, localAddress, httpSocket }, callback) { return function connect ({ hostname, host, protocol, port, servername, localAddress, httpSocket }, callback) {
let socket let socket
if (protocol === 'https:') { if (protocol === 'https:') {
@@ -99,6 +99,8 @@ function buildConnector ({ maxCachedSessions, socketPath, timeout, ...opts }) {
servername, servername,
session, session,
localAddress, localAddress,
// TODO(HTTP/2): Add support for h2c
ALPNProtocols: allowH2 ? ['http/1.1', 'h2'] : ['http/1.1'],
socket: httpSocket, // upgrade socket connection socket: httpSocket, // upgrade socket connection
port: port || 443, port: port || 443,
host: hostname host: hostname
@@ -5,6 +5,7 @@ const {
NotSupportedError NotSupportedError
} = require('./errors') } = require('./errors')
const assert = require('assert') const assert = require('assert')
const { kHTTP2BuildRequest, kHTTP2CopyHeaders, kHTTP1BuildRequest } = require('./symbols')
const util = require('./util') const util = require('./util')
// tokenRegExp and headerCharRegex have been lifted from // tokenRegExp and headerCharRegex have been lifted from
@@ -62,7 +63,8 @@ class Request {
headersTimeout, headersTimeout,
bodyTimeout, bodyTimeout,
reset, reset,
throwOnError throwOnError,
expectContinue
}, handler) { }, handler) {
if (typeof path !== 'string') { if (typeof path !== 'string') {
throw new InvalidArgumentError('path must be a string') throw new InvalidArgumentError('path must be a string')
@@ -98,6 +100,10 @@ class Request {
throw new InvalidArgumentError('invalid reset') throw new InvalidArgumentError('invalid reset')
} }
if (expectContinue != null && typeof expectContinue !== 'boolean') {
throw new InvalidArgumentError('invalid expectContinue')
}
this.headersTimeout = headersTimeout this.headersTimeout = headersTimeout
this.bodyTimeout = bodyTimeout this.bodyTimeout = bodyTimeout
@@ -150,6 +156,9 @@ class Request {
this.headers = '' this.headers = ''
// Only for H2
this.expectContinue = expectContinue != null ? expectContinue : false
if (Array.isArray(headers)) { if (Array.isArray(headers)) {
if (headers.length % 2 !== 0) { if (headers.length % 2 !== 0) {
throw new InvalidArgumentError('headers array must be even') throw new InvalidArgumentError('headers array must be even')
@@ -269,13 +278,64 @@ class Request {
return this[kHandler].onError(error) return this[kHandler].onError(error)
} }
// TODO: adjust to support H2
addHeader (key, value) { addHeader (key, value) {
processHeader(this, key, value) processHeader(this, key, value)
return this return this
} }
static [kHTTP1BuildRequest] (origin, opts, handler) {
// TODO: Migrate header parsing here, to make Requests
// HTTP agnostic
return new Request(origin, opts, handler)
}
static [kHTTP2BuildRequest] (origin, opts, handler) {
const headers = opts.headers
opts = { ...opts, headers: null }
const request = new Request(origin, opts, handler)
request.headers = {}
if (Array.isArray(headers)) {
if (headers.length % 2 !== 0) {
throw new InvalidArgumentError('headers array must be even')
}
for (let i = 0; i < headers.length; i += 2) {
processHeader(request, headers[i], headers[i + 1], true)
}
} else if (headers && typeof headers === 'object') {
const keys = Object.keys(headers)
for (let i = 0; i < keys.length; i++) {
const key = keys[i]
processHeader(request, key, headers[key], true)
}
} else if (headers != null) {
throw new InvalidArgumentError('headers must be an object or an array')
}
return request
}
static [kHTTP2CopyHeaders] (raw) {
const rawHeaders = raw.split('\r\n')
const headers = {}
for (const header of rawHeaders) {
const [key, value] = header.split(': ')
if (value == null || value.length === 0) continue
if (headers[key]) headers[key] += `,${value}`
else headers[key] = value
}
return headers
}
} }
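The `kHTTP2CopyHeaders` static above folds an H1-style raw header string into the plain object H2 expects, comma-joining repeated keys. A standalone sketch of that logic (the `copyHeaders` name here is hypothetical):

```javascript
// Fold a "key: value\r\n"-joined raw header string into a plain object,
// comma-joining repeated keys, mirroring kHTTP2CopyHeaders above.
function copyHeaders (raw) {
  const headers = {}
  for (const header of raw.split('\r\n')) {
    const [key, value] = header.split(': ')
    if (value == null || value.length === 0) continue
    if (headers[key]) headers[key] += `,${value}`
    else headers[key] = value
  }
  return headers
}

const headers = copyHeaders('host: example.com\r\naccept: text/html\r\naccept: application/json')
// headers.accept === 'text/html,application/json'
```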
function processHeaderValue (key, val) { function processHeaderValue (key, val, skipAppend) {
if (val && typeof val === 'object') { if (val && typeof val === 'object') {
throw new InvalidArgumentError(`invalid ${key} header`) throw new InvalidArgumentError(`invalid ${key} header`)
} }
@@ -286,10 +346,10 @@ function processHeaderValue (key, val) {
throw new InvalidArgumentError(`invalid ${key} header`) throw new InvalidArgumentError(`invalid ${key} header`)
} }
return `${key}: ${val}\r\n` return skipAppend ? val : `${key}: ${val}\r\n`
} }
function processHeader (request, key, val) { function processHeader (request, key, val, skipAppend = false) {
if (val && (typeof val === 'object' && !Array.isArray(val))) { if (val && (typeof val === 'object' && !Array.isArray(val))) {
throw new InvalidArgumentError(`invalid ${key} header`) throw new InvalidArgumentError(`invalid ${key} header`)
} else if (val === undefined) { } else if (val === undefined) {
@@ -357,10 +417,16 @@ function processHeader (request, key, val) {
} else { } else {
if (Array.isArray(val)) { if (Array.isArray(val)) {
for (let i = 0; i < val.length; i++) { for (let i = 0; i < val.length; i++) {
request.headers += processHeaderValue(key, val[i]) if (skipAppend) {
if (request.headers[key]) request.headers[key] += `,${processHeaderValue(key, val[i], skipAppend)}`
else request.headers[key] = processHeaderValue(key, val[i], skipAppend)
} else {
request.headers += processHeaderValue(key, val[i])
}
} }
} else { } else {
request.headers += processHeaderValue(key, val) if (skipAppend) request.headers[key] = processHeaderValue(key, val, skipAppend)
else request.headers += processHeaderValue(key, val)
} }
} }
} }
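The `skipAppend` flag above means one `processHeader` now feeds two representations: the H1 path concatenates `key: value\r\n` pairs onto a string, while the H2 path stores values on an object and comma-joins repeats. A simplified sketch of just that accumulation step (name validation and the special-cased headers are elided):

```javascript
// Simplified: the real processHeader also validates names/values and
// special-cases headers such as connection and content-length.
function appendHeader (request, key, val, skipAppend = false) {
  if (skipAppend) {
    if (request.headers[key]) request.headers[key] += `,${val}`
    else request.headers[key] = val
  } else {
    request.headers += `${key}: ${val}\r\n`
  }
}

const h1 = { headers: '' }
appendHeader(h1, 'accept', 'text/html')
// h1.headers === 'accept: text/html\r\n'

const h2 = { headers: {} }
appendHeader(h2, 'accept', 'text/html', true)
appendHeader(h2, 'accept', 'application/json', true)
// h2.headers.accept === 'text/html,application/json'
```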
@@ -51,5 +51,11 @@ module.exports = {
kProxy: Symbol('proxy agent options'), kProxy: Symbol('proxy agent options'),
kCounter: Symbol('socket request counter'), kCounter: Symbol('socket request counter'),
kInterceptors: Symbol('dispatch interceptors'), kInterceptors: Symbol('dispatch interceptors'),
kMaxResponseSize: Symbol('max response size') kMaxResponseSize: Symbol('max response size'),
kHTTP2Session: Symbol('http2Session'),
kHTTP2SessionState: Symbol('http2Session state'),
kHTTP2BuildRequest: Symbol('http2 build request'),
kHTTP1BuildRequest: Symbol('http1 build request'),
kHTTP2CopyHeaders: Symbol('http2 copy headers'),
kHTTPConnVersion: Symbol('http connection version')
} }
@@ -168,7 +168,7 @@ function bodyLength (body) {
return 0 return 0
} else if (isStream(body)) { } else if (isStream(body)) {
const state = body._readableState const state = body._readableState
return state && state.ended === true && Number.isFinite(state.length) return state && state.objectMode === false && state.ended === true && Number.isFinite(state.length)
? state.length ? state.length
: null : null
} else if (isBlobLike(body)) { } else if (isBlobLike(body)) {
@@ -199,6 +199,7 @@ function destroy (stream, err) {
// See: https://github.com/nodejs/node/pull/38505/files // See: https://github.com/nodejs/node/pull/38505/files
stream.socket = null stream.socket = null
} }
stream.destroy(err) stream.destroy(err)
} else if (err) { } else if (err) {
process.nextTick((stream, err) => { process.nextTick((stream, err) => {
@@ -218,6 +219,9 @@ function parseKeepAliveTimeout (val) {
} }
function parseHeaders (headers, obj = {}) { function parseHeaders (headers, obj = {}) {
// For H2 support
if (!Array.isArray(headers)) return headers
for (let i = 0; i < headers.length; i += 2) { for (let i = 0; i < headers.length; i += 2) {
const key = headers[i].toString().toLowerCase() const key = headers[i].toString().toLowerCase()
let val = obj[key] let val = obj[key]
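With H2 in play, `parseHeaders` may now receive an already-parsed plain object instead of the flat `[name, value, ...]` Buffer array H1 produces, and the early return above passes it through untouched. A trimmed-down sketch of the two shapes (the real loop also merges repeated keys):

```javascript
// H2 hands over a plain object: return it as-is. H1 hands over a flat
// [name, value, name, value, ...] array of Buffers: fold it into an
// object with lower-cased keys. (Repeated-key merging elided.)
function parseHeaders (headers, obj = {}) {
  if (!Array.isArray(headers)) return headers // H2 path
  for (let i = 0; i < headers.length; i += 2) {
    const key = headers[i].toString().toLowerCase()
    if (obj[key] === undefined) obj[key] = headers[i + 1].toString()
  }
  return obj
}

const h2Headers = { ':status': '200', 'content-type': 'text/plain' }
// parseHeaders(h2Headers) returns the same object, untouched

const h1Headers = parseHeaders([Buffer.from('Content-Type'), Buffer.from('text/plain')])
// h1Headers['content-type'] === 'text/plain'
```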
@@ -355,6 +359,12 @@ function getSocketInfo (socket) {
} }
} }
async function * convertIterableToBuffer (iterable) {
for await (const chunk of iterable) {
yield Buffer.isBuffer(chunk) ? chunk : Buffer.from(chunk)
}
}
let ReadableStream let ReadableStream
function ReadableStreamFrom (iterable) { function ReadableStreamFrom (iterable) {
if (!ReadableStream) { if (!ReadableStream) {
@@ -362,8 +372,7 @@ function ReadableStreamFrom (iterable) {
} }
if (ReadableStream.from) { if (ReadableStream.from) {
// https://github.com/whatwg/streams/pull/1083 return ReadableStream.from(convertIterableToBuffer(iterable))
return ReadableStream.from(iterable)
} }
let iterator let iterator
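`convertIterableToBuffer` exists because `ReadableStream.from` would otherwise enqueue whatever the iterable yields; normalising every chunk to a `Buffer` keeps downstream byte handling uniform. A self-contained sketch (the `toBuffers`/`collect` names are illustrative):

```javascript
// Yield every chunk of an async iterable as a Buffer, converting
// strings (and other Buffer.from-able values) on the fly.
async function * toBuffers (iterable) {
  for await (const chunk of iterable) {
    yield Buffer.isBuffer(chunk) ? chunk : Buffer.from(chunk)
  }
}

// Helper to drain the converted iterable back into a single string.
async function collect (iterable) {
  const out = []
  for await (const buf of toBuffers(iterable)) out.push(buf)
  return Buffer.concat(out).toString()
}

// collect(['abc', Buffer.from('def')]) resolves to 'abcdef'
```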
@@ -387,6 +387,7 @@ function bodyMixinMethods (instance) {
try { try {
busboy = Busboy({ busboy = Busboy({
headers, headers,
preservePath: true,
defParamCharset: 'utf8' defParamCharset: 'utf8'
}) })
} catch (err) { } catch (err) {
@@ -532,7 +533,7 @@ async function specConsumeBody (object, convertBytesToJSValue, instance) {
// 6. Otherwise, fully read objects body given successSteps, // 6. Otherwise, fully read objects body given successSteps,
// errorSteps, and objects relevant global object. // errorSteps, and objects relevant global object.
fullyReadBody(object[kState].body, successSteps, errorSteps) await fullyReadBody(object[kState].body, successSteps, errorSteps)
// 7. Return promise. // 7. Return promise.
return promise.promise return promise.promise
@@ -1760,7 +1760,7 @@ async function httpNetworkFetch (
fetchParams.controller.connection.destroy() fetchParams.controller.connection.destroy()
// 2. Return the appropriate network error for fetchParams. // 2. Return the appropriate network error for fetchParams.
return makeAppropriateNetworkError(fetchParams) return makeAppropriateNetworkError(fetchParams, err)
} }
return makeNetworkError(err) return makeNetworkError(err)
@@ -1979,19 +1979,37 @@ async function httpNetworkFetch (
let location = '' let location = ''
const headers = new Headers() const headers = new Headers()
for (let n = 0; n < headersList.length; n += 2) {
const key = headersList[n + 0].toString('latin1')
const val = headersList[n + 1].toString('latin1')
if (key.toLowerCase() === 'content-encoding') { // For H2, the headers are a plain JS object
// https://www.rfc-editor.org/rfc/rfc7231#section-3.1.2.1 // We distinguish between them and iterate accordingly
// "All content-coding values are case-insensitive..." if (Array.isArray(headersList)) {
codings = val.toLowerCase().split(',').map((x) => x.trim()).reverse() for (let n = 0; n < headersList.length; n += 2) {
} else if (key.toLowerCase() === 'location') { const key = headersList[n + 0].toString('latin1')
location = val const val = headersList[n + 1].toString('latin1')
if (key.toLowerCase() === 'content-encoding') {
// https://www.rfc-editor.org/rfc/rfc7231#section-3.1.2.1
// "All content-coding values are case-insensitive..."
codings = val.toLowerCase().split(',').map((x) => x.trim())
} else if (key.toLowerCase() === 'location') {
location = val
}
headers.append(key, val)
} }
} else {
const keys = Object.keys(headersList)
for (const key of keys) {
const val = headersList[key]
if (key.toLowerCase() === 'content-encoding') {
// https://www.rfc-editor.org/rfc/rfc7231#section-3.1.2.1
// "All content-coding values are case-insensitive..."
codings = val.toLowerCase().split(',').map((x) => x.trim()).reverse()
} else if (key.toLowerCase() === 'location') {
location = val
}
headers.append(key, val) headers.append(key, val)
}
} }
this.body = new Readable({ read: resume }) this.body = new Readable({ read: resume })
@@ -49,7 +49,7 @@ class Response {
} }
// https://fetch.spec.whatwg.org/#dom-response-json // https://fetch.spec.whatwg.org/#dom-response-json
static json (data = undefined, init = {}) { static json (data, init = {}) {
webidl.argumentLengthCheck(arguments, 1, { header: 'Response.json' }) webidl.argumentLengthCheck(arguments, 1, { header: 'Response.json' })
if (init !== null) { if (init !== null) {
@@ -426,15 +426,15 @@ function filterResponse (response, type) {
} }
// https://fetch.spec.whatwg.org/#appropriate-network-error // https://fetch.spec.whatwg.org/#appropriate-network-error
function makeAppropriateNetworkError (fetchParams) { function makeAppropriateNetworkError (fetchParams, err = null) {
// 1. Assert: fetchParams is canceled. // 1. Assert: fetchParams is canceled.
assert(isCancelled(fetchParams)) assert(isCancelled(fetchParams))
// 2. Return an aborted network error if fetchParams is aborted; // 2. Return an aborted network error if fetchParams is aborted;
// otherwise return a network error. // otherwise return a network error.
return isAborted(fetchParams) return isAborted(fetchParams)
? makeNetworkError(new DOMException('The operation was aborted.', 'AbortError')) ? makeNetworkError(Object.assign(new DOMException('The operation was aborted.', 'AbortError'), { cause: err }))
: makeNetworkError('Request was cancelled.') : makeNetworkError(Object.assign(new DOMException('Request was cancelled.'), { cause: err }))
} }
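Attaching the low-level error as `cause` (via `Object.assign`, since the `DOMException` constructor does not accept a `cause` option) lets callers of fetch inspect why the request was cancelled. A small illustration with a hypothetical underlying error:

```javascript
// Hypothetical failure: surface the underlying socket error as the
// cause of the AbortError, as the change above does.
const underlying = new Error('socket hang up')
const abortError = Object.assign(
  new DOMException('The operation was aborted.', 'AbortError'),
  { cause: underlying }
)

// abortError.name === 'AbortError'
// abortError.cause === underlying
```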
// https://whatpr.org/fetch/1392.html#initialize-a-response // https://whatpr.org/fetch/1392.html#initialize-a-response
@@ -556,16 +556,37 @@ function bytesMatch (bytes, metadataList) {
const algorithm = item.algo const algorithm = item.algo
// 2. Let expectedValue be the val component of item. // 2. Let expectedValue be the val component of item.
const expectedValue = item.hash let expectedValue = item.hash
// See https://github.com/web-platform-tests/wpt/commit/e4c5cc7a5e48093220528dfdd1c4012dc3837a0e
// "be liberal with padding". This is annoying, and it's not even in the spec.
if (expectedValue.endsWith('==')) {
expectedValue = expectedValue.slice(0, -2)
}
// 3. Let actualValue be the result of applying algorithm to bytes. // 3. Let actualValue be the result of applying algorithm to bytes.
const actualValue = crypto.createHash(algorithm).update(bytes).digest('base64') let actualValue = crypto.createHash(algorithm).update(bytes).digest('base64')
if (actualValue.endsWith('==')) {
actualValue = actualValue.slice(0, -2)
}
// 4. If actualValue is a case-sensitive match for expectedValue, // 4. If actualValue is a case-sensitive match for expectedValue,
// return true. // return true.
if (actualValue === expectedValue) { if (actualValue === expectedValue) {
return true return true
} }
let actualBase64URL = crypto.createHash(algorithm).update(bytes).digest('base64url')
if (actualBase64URL.endsWith('==')) {
actualBase64URL = actualBase64URL.slice(0, -2)
}
if (actualBase64URL === expectedValue) {
return true
}
} }
// 6. Return false. // 6. Return false.
@@ -812,17 +833,17 @@ function iteratorResult (pair, kind) {
/** /**
* @see https://fetch.spec.whatwg.org/#body-fully-read * @see https://fetch.spec.whatwg.org/#body-fully-read
*/ */
function fullyReadBody (body, processBody, processBodyError) { async function fullyReadBody (body, processBody, processBodyError) {
// 1. If taskDestination is null, then set taskDestination to // 1. If taskDestination is null, then set taskDestination to
// the result of starting a new parallel queue. // the result of starting a new parallel queue.
// 2. Let successSteps given a byte sequence bytes be to queue a // 2. Let successSteps given a byte sequence bytes be to queue a
// fetch task to run processBody given bytes, with taskDestination. // fetch task to run processBody given bytes, with taskDestination.
const successSteps = (bytes) => queueMicrotask(() => processBody(bytes)) const successSteps = processBody
// 3. Let errorSteps be to queue a fetch task to run processBodyError, // 3. Let errorSteps be to queue a fetch task to run processBodyError,
// with taskDestination. // with taskDestination.
const errorSteps = (error) => queueMicrotask(() => processBodyError(error)) const errorSteps = processBodyError
// 4. Let reader be the result of getting a reader for bodys stream. // 4. Let reader be the result of getting a reader for bodys stream.
// If that threw an exception, then run errorSteps with that // If that threw an exception, then run errorSteps with that
@@ -837,7 +858,12 @@ function fullyReadBody (body, processBody, processBodyError) {
} }
// 5. Read all bytes from reader, given successSteps and errorSteps. // 5. Read all bytes from reader, given successSteps and errorSteps.
readAllBytes(reader, successSteps, errorSteps) try {
const result = await readAllBytes(reader)
successSteps(result)
} catch (e) {
errorSteps(e)
}
} }
/** @type {ReadableStream} */ /** @type {ReadableStream} */
@@ -906,36 +932,23 @@ function isomorphicEncode (input) {
* @see https://streams.spec.whatwg.org/#readablestreamdefaultreader-read-all-bytes * @see https://streams.spec.whatwg.org/#readablestreamdefaultreader-read-all-bytes
* @see https://streams.spec.whatwg.org/#read-loop * @see https://streams.spec.whatwg.org/#read-loop
* @param {ReadableStreamDefaultReader} reader * @param {ReadableStreamDefaultReader} reader
* @param {(bytes: Uint8Array) => void} successSteps
* @param {(error: Error) => void} failureSteps
*/ */
async function readAllBytes (reader, successSteps, failureSteps) { async function readAllBytes (reader) {
const bytes = [] const bytes = []
let byteLength = 0 let byteLength = 0
while (true) { while (true) {
let done const { done, value: chunk } = await reader.read()
let chunk
try {
({ done, value: chunk } = await reader.read())
} catch (e) {
// 1. Call failureSteps with e.
failureSteps(e)
return
}
if (done) { if (done) {
// 1. Call successSteps with bytes. // 1. Call successSteps with bytes.
successSteps(Buffer.concat(bytes, byteLength)) return Buffer.concat(bytes, byteLength)
return
} }
// 1. If chunk is not a Uint8Array object, call failureSteps // 1. If chunk is not a Uint8Array object, call failureSteps
// with a TypeError and abort these steps. // with a TypeError and abort these steps.
if (!isUint8Array(chunk)) { if (!isUint8Array(chunk)) {
failureSteps(new TypeError('Received non-Uint8Array chunk')) throw new TypeError('Received non-Uint8Array chunk')
return
} }
// 2. Append the bytes represented by chunk to bytes. // 2. Append the bytes represented by chunk to bytes.
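After the rewrite, `readAllBytes` simply returns a promise and lets `fullyReadBody` route success and failure. A standalone sketch of the loop, driven by a WHATWG `ReadableStream` (global in Node 18+; the real code uses `isUint8Array` from `util/types` rather than `instanceof`):

```javascript
// Drain a ReadableStreamDefaultReader, concatenating Uint8Array chunks
// into a single Buffer; throw on a non-Uint8Array chunk.
async function readAllBytes (reader) {
  const bytes = []
  let byteLength = 0
  while (true) {
    const { done, value: chunk } = await reader.read()
    if (done) return Buffer.concat(bytes, byteLength)
    if (!(chunk instanceof Uint8Array)) {
      throw new TypeError('Received non-Uint8Array chunk')
    }
    bytes.push(chunk)
    byteLength += chunk.length
  }
}

const stream = new ReadableStream({
  start (controller) {
    controller.enqueue(new TextEncoder().encode('hello '))
    controller.enqueue(new TextEncoder().encode('world'))
    controller.close()
  }
})

// readAllBytes(stream.getReader()) resolves to a Buffer of 'hello world'
```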
@@ -34,6 +34,7 @@ class Pool extends PoolBase {
socketPath, socketPath,
autoSelectFamily, autoSelectFamily,
autoSelectFamilyAttemptTimeout, autoSelectFamilyAttemptTimeout,
allowH2,
...options ...options
} = {}) { } = {}) {
super() super()
@ -54,6 +55,7 @@ class Pool extends PoolBase {
connect = buildConnector({ connect = buildConnector({
...tls, ...tls,
maxCachedSessions, maxCachedSessions,
allowH2,
socketPath, socketPath,
timeout: connectTimeout == null ? 10e3 : connectTimeout, timeout: connectTimeout == null ? 10e3 : connectTimeout,
...(util.nodeHasAutoSelectFamily && autoSelectFamily ? { autoSelectFamily, autoSelectFamilyAttemptTimeout } : undefined), ...(util.nodeHasAutoSelectFamily && autoSelectFamily ? { autoSelectFamily, autoSelectFamilyAttemptTimeout } : undefined),
@ -66,7 +68,7 @@ class Pool extends PoolBase {
: [] : []
this[kConnections] = connections || null this[kConnections] = connections || null
this[kUrl] = util.parseOrigin(origin) this[kUrl] = util.parseOrigin(origin)
this[kOptions] = { ...util.deepClone(options), connect } this[kOptions] = { ...util.deepClone(options), connect, allowH2 }
this[kOptions].interceptors = options.interceptors this[kOptions].interceptors = options.interceptors
? { ...options.interceptors } ? { ...options.interceptors }
: undefined : undefined


@@ -1,6 +1,5 @@
 'use strict'

-const { randomBytes, createHash } = require('crypto')
 const diagnosticsChannel = require('diagnostics_channel')
 const { uid, states } = require('./constants')
 const {
@@ -22,6 +21,14 @@ channels.open = diagnosticsChannel.channel('undici:websocket:open')
 channels.close = diagnosticsChannel.channel('undici:websocket:close')
 channels.socketError = diagnosticsChannel.channel('undici:websocket:socket_error')

+/** @type {import('crypto')} */
+let crypto
+try {
+  crypto = require('crypto')
+} catch {
+}
+
 /**
  * @see https://websockets.spec.whatwg.org/#concept-websocket-establish
  * @param {URL} url
@@ -66,7 +73,7 @@ function establishWebSocketConnection (url, protocols, ws, onEstablish, options)
   // 5. Let keyValue be a nonce consisting of a randomly selected
   //    16-byte value that has been forgiving-base64-encoded and
   //    isomorphic encoded.
-  const keyValue = randomBytes(16).toString('base64')
+  const keyValue = crypto.randomBytes(16).toString('base64')

   // 6. Append (`Sec-WebSocket-Key`, keyValue) to request's
   //    header list.
@@ -148,7 +155,7 @@ function establishWebSocketConnection (url, protocols, ws, onEstablish, options)
   //    trailing whitespace, the client MUST _Fail the WebSocket
   //    Connection_.
   const secWSAccept = response.headersList.get('Sec-WebSocket-Accept')
-  const digest = createHash('sha1').update(keyValue + uid).digest('base64')
+  const digest = crypto.createHash('sha1').update(keyValue + uid).digest('base64')

   if (secWSAccept !== digest) {
     failWebsocketConnection(ws, 'Incorrect hash received in Sec-WebSocket-Accept header.')
     return


@@ -1,15 +1,22 @@
 'use strict'

-const { randomBytes } = require('crypto')
 const { maxUnsigned16Bit } = require('./constants')

+/** @type {import('crypto')} */
+let crypto
+try {
+  crypto = require('crypto')
+} catch {
+}
+
 class WebsocketFrameSend {
   /**
    * @param {Buffer|undefined} data
    */
   constructor (data) {
     this.frameData = data
-    this.maskKey = randomBytes(4)
+    this.maskKey = crypto.randomBytes(4)
   }

   createFrame (opcode) {


@@ -3,6 +3,7 @@
 const { webidl } = require('../fetch/webidl')
 const { DOMException } = require('../fetch/constants')
 const { URLSerializer } = require('../fetch/dataURL')
+const { getGlobalOrigin } = require('../fetch/global')
 const { staticPropertyDescriptors, states, opcodes, emptyBuffer } = require('./constants')
 const {
   kWebSocketURL,
@@ -57,18 +58,28 @@ class WebSocket extends EventTarget {
     url = webidl.converters.USVString(url)
     protocols = options.protocols

-    // 1. Let urlRecord be the result of applying the URL parser to url.
+    // 1. Let baseURL be this's relevant settings object's API base URL.
+    const baseURL = getGlobalOrigin()
+
+    // 1. Let urlRecord be the result of applying the URL parser to url with baseURL.
     let urlRecord

     try {
-      urlRecord = new URL(url)
+      urlRecord = new URL(url, baseURL)
     } catch (e) {
-      // 2. If urlRecord is failure, then throw a "SyntaxError" DOMException.
+      // 3. If urlRecord is failure, then throw a "SyntaxError" DOMException.
       throw new DOMException(e, 'SyntaxError')
     }

-    // 3. If urlRecord's scheme is not "ws" or "wss", then throw a
-    //    "SyntaxError" DOMException.
+    // 4. If urlRecord's scheme is "http", then set urlRecord's scheme to "ws".
+    if (urlRecord.protocol === 'http:') {
+      urlRecord.protocol = 'ws:'
+    } else if (urlRecord.protocol === 'https:') {
+      // 5. Otherwise, if urlRecord's scheme is "https", set urlRecord's scheme to "wss".
+      urlRecord.protocol = 'wss:'
+    }
+
+    // 6. If urlRecord's scheme is not "ws" or "wss", then throw a "SyntaxError" DOMException.
     if (urlRecord.protocol !== 'ws:' && urlRecord.protocol !== 'wss:') {
       throw new DOMException(
         `Expected a ws: or wss: protocol, got ${urlRecord.protocol}`,
@@ -76,19 +87,19 @@ class WebSocket extends EventTarget {
       )
     }

-    // 4. If urlRecord's fragment is non-null, then throw a "SyntaxError"
+    // 7. If urlRecord's fragment is non-null, then throw a "SyntaxError"
     //    DOMException.
-    if (urlRecord.hash) {
+    if (urlRecord.hash || urlRecord.href.endsWith('#')) {
       throw new DOMException('Got fragment', 'SyntaxError')
     }

-    // 5. If protocols is a string, set protocols to a sequence consisting
+    // 8. If protocols is a string, set protocols to a sequence consisting
     //    of just that string.
     if (typeof protocols === 'string') {
       protocols = [protocols]
     }

-    // 6. If any of the values in protocols occur more than once or otherwise
+    // 9. If any of the values in protocols occur more than once or otherwise
     //    fail to match the requirements for elements that comprise the value
     //    of `Sec-WebSocket-Protocol` fields as defined by The WebSocket
     //    protocol, then throw a "SyntaxError" DOMException.
@@ -100,12 +111,12 @@ class WebSocket extends EventTarget {
       throw new DOMException('Invalid Sec-WebSocket-Protocol value', 'SyntaxError')
     }

-    // 7. Set this's url to urlRecord.
-    this[kWebSocketURL] = urlRecord
+    // 10. Set this's url to urlRecord.
+    this[kWebSocketURL] = new URL(urlRecord.href)

-    // 8. Let client be this's relevant settings object.
+    // 11. Let client be this's relevant settings object.

-    // 9. Run this step in parallel:
+    // 12. Run this step in parallel:
     //    1. Establish a WebSocket connection given urlRecord, protocols,
     //       and client.
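The new URL steps above (resolve against a base URL, map http(s) to ws(s), reject other schemes and fragments) can be sketched as a standalone helper. `normalizeWebSocketURL` is a name invented here for illustration, not an undici export, and it throws plain `SyntaxError`s where undici throws `DOMException`s:

```javascript
// Sketch of the scheme-mapping and validation steps added in this hunk.
function normalizeWebSocketURL (url, baseURL) {
  const urlRecord = new URL(url, baseURL)

  // http(s) URLs are now accepted and rewritten to ws(s); this works
  // because the WHATWG URL protocol setter allows switching between
  // special schemes.
  if (urlRecord.protocol === 'http:') {
    urlRecord.protocol = 'ws:'
  } else if (urlRecord.protocol === 'https:') {
    urlRecord.protocol = 'wss:'
  }

  if (urlRecord.protocol !== 'ws:' && urlRecord.protocol !== 'wss:') {
    throw new SyntaxError(`Expected a ws: or wss: protocol, got ${urlRecord.protocol}`)
  }
  if (urlRecord.hash || urlRecord.href.endsWith('#')) {
    throw new SyntaxError('Got fragment')
  }
  return urlRecord
}

console.log(normalizeWebSocketURL('https://example.com/chat').href)
// wss://example.com/chat
```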


@@ -1,6 +1,6 @@
 {
   "name": "undici",
-  "version": "5.23.0",
+  "version": "5.25.2",
   "description": "An HTTP/1.1 client, written from scratch for Node.js",
   "homepage": "https://undici.nodejs.org",
   "bugs": {
@@ -11,12 +11,41 @@
     "url": "git+https://github.com/nodejs/undici.git"
   },
   "license": "MIT",
-  "author": "Matteo Collina <hello@matteocollina.com>",
   "contributors": [
+    {
+      "name": "Daniele Belardi",
+      "url": "https://github.com/dnlup",
+      "author": true
+    },
+    {
+      "name": "Ethan Arrowood",
+      "url": "https://github.com/ethan-arrowood",
+      "author": true
+    },
+    {
+      "name": "Matteo Collina",
+      "url": "https://github.com/mcollina",
+      "author": true
+    },
+    {
+      "name": "Matthew Aitken",
+      "url": "https://github.com/KhafraDev",
+      "author": true
+    },
     {
       "name": "Robert Nagy",
       "url": "https://github.com/ronag",
       "author": true
+    },
+    {
+      "name": "Szymon Marczak",
+      "url": "https://github.com/szmarczak",
+      "author": true
+    },
+    {
+      "name": "Tomas Della Vedova",
+      "url": "https://github.com/delvedor",
+      "author": true
     }
   ],
   "keywords": [
@@ -64,10 +93,11 @@
     "bench:run": "CONNECTIONS=1 node benchmarks/benchmark.js; CONNECTIONS=50 node benchmarks/benchmark.js",
     "serve:website": "docsify serve .",
     "prepare": "husky install",
+    "postpublish": "node scripts/update-undici-types-version.js && cd types && npm publish",
     "fuzz": "jsfuzz test/fuzzing/fuzz.js corpus"
   },
   "devDependencies": {
-    "@sinonjs/fake-timers": "^10.0.2",
+    "@sinonjs/fake-timers": "^11.1.0",
     "@types/node": "^18.0.3",
     "abort-controller": "^3.0.0",
     "atomic-sleep": "^1.0.0",
@@ -98,7 +128,7 @@
     "standard": "^17.0.0",
     "table": "^6.8.0",
     "tap": "^16.1.0",
-    "tsd": "^0.28.1",
+    "tsd": "^0.29.0",
     "typescript": "^5.0.2",
     "wait-on": "^7.0.1",
     "ws": "^8.11.0"

deps/undici/src/types/README.md (new vendored file)

@@ -0,0 +1,6 @@
# undici-types
This package is a dual-publish of the [undici](https://www.npmjs.com/package/undici) library types. The `undici` package **still contains types**. This package is for users who _only_ need undici types (such as for `@types/node`). It is published alongside every release of `undici`, so you can always use the same version.
- [GitHub nodejs/undici](https://github.com/nodejs/undici)
- [Undici Documentation](https://undici.nodejs.org/#/)


@ -1,7 +1,6 @@
import { URL } from 'url' import { URL } from 'url'
import { TlsOptions } from 'tls' import { TlsOptions } from 'tls'
import Dispatcher from './dispatcher' import Dispatcher from './dispatcher'
import DispatchInterceptor from './dispatcher'
import buildConnector from "./connector"; import buildConnector from "./connector";
/** /**
@ -19,14 +18,14 @@ export class Client extends Dispatcher {
export declare namespace Client { export declare namespace Client {
export interface OptionsInterceptors { export interface OptionsInterceptors {
Client: readonly DispatchInterceptor[]; Client: readonly Dispatcher.DispatchInterceptor[];
} }
export interface Options { export interface Options {
/** TODO */ /** TODO */
interceptors?: OptionsInterceptors; interceptors?: OptionsInterceptors;
/** The maximum length of request headers in bytes. Default: `16384` (16KiB). */ /** The maximum length of request headers in bytes. Default: `16384` (16KiB). */
maxHeaderSize?: number; maxHeaderSize?: number;
/** The amount of time the parser will wait to receive the complete HTTP headers (Node 14 and above only). Default: `300e3` milliseconds (300s). */ /** The amount of time, in milliseconds, the parser will wait to receive the complete HTTP headers (Node 14 and above only). Default: `300e3` milliseconds (300s). */
headersTimeout?: number; headersTimeout?: number;
/** @deprecated unsupported socketTimeout, use headersTimeout & bodyTimeout instead */ /** @deprecated unsupported socketTimeout, use headersTimeout & bodyTimeout instead */
socketTimeout?: never; socketTimeout?: never;
@ -40,13 +39,13 @@ export declare namespace Client {
idleTimeout?: never; idleTimeout?: never;
/** @deprecated unsupported keepAlive, use pipelining=0 instead */ /** @deprecated unsupported keepAlive, use pipelining=0 instead */
keepAlive?: never; keepAlive?: never;
/** the timeout after which a socket without active requests will time out. Monitors time between activity on a connected socket. This value may be overridden by *keep-alive* hints from the server. Default: `4e3` milliseconds (4s). */ /** the timeout, in milliseconds, after which a socket without active requests will time out. Monitors time between activity on a connected socket. This value may be overridden by *keep-alive* hints from the server. Default: `4e3` milliseconds (4s). */
keepAliveTimeout?: number; keepAliveTimeout?: number;
/** @deprecated unsupported maxKeepAliveTimeout, use keepAliveMaxTimeout instead */ /** @deprecated unsupported maxKeepAliveTimeout, use keepAliveMaxTimeout instead */
maxKeepAliveTimeout?: never; maxKeepAliveTimeout?: never;
/** the maximum allowed `idleTimeout` when overridden by *keep-alive* hints from the server. Default: `600e3` milliseconds (10min). */ /** the maximum allowed `idleTimeout`, in milliseconds, when overridden by *keep-alive* hints from the server. Default: `600e3` milliseconds (10min). */
keepAliveMaxTimeout?: number; keepAliveMaxTimeout?: number;
/** A number subtracted from server *keep-alive* hints when overriding `idleTimeout` to account for timing inaccuracies caused by e.g. transport latency. Default: `1e3` milliseconds (1s). */ /** A number of milliseconds subtracted from server *keep-alive* hints when overriding `idleTimeout` to account for timing inaccuracies caused by e.g. transport latency. Default: `1e3` milliseconds (1s). */
keepAliveTimeoutThreshold?: number; keepAliveTimeoutThreshold?: number;
/** TODO */ /** TODO */
socketPath?: string; socketPath?: string;
@ -71,7 +70,17 @@ export declare namespace Client {
/** Enables a family autodetection algorithm that loosely implements section 5 of RFC 8305. */ /** Enables a family autodetection algorithm that loosely implements section 5 of RFC 8305. */
autoSelectFamily?: boolean; autoSelectFamily?: boolean;
/** The amount of time in milliseconds to wait for a connection attempt to finish before trying the next address when using the `autoSelectFamily` option. */ /** The amount of time in milliseconds to wait for a connection attempt to finish before trying the next address when using the `autoSelectFamily` option. */
autoSelectFamilyAttemptTimeout?: number; autoSelectFamilyAttemptTimeout?: number;
/**
* @description Enables support for H2 if the server has assigned bigger priority to it through ALPN negotiation.
* @default false
*/
allowH2?: boolean;
/**
* @description Dictates the maximum number of concurrent streams for a single H2 session. It can be overriden by a SETTINGS remote frame.
* @default 100
*/
maxConcurrentStreams?: number
} }
export interface SocketInfo { export interface SocketInfo {
localAddress?: string localAddress?: string
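To make the three keep-alive knobs documented above concrete, here is hypothetical arithmetic invented for illustration (it is not undici's internal code) showing how a server keep-alive hint would interact with them:

```javascript
// Per the docs above: a server hint is reduced by keepAliveTimeoutThreshold
// and capped at keepAliveMaxTimeout; without a hint, keepAliveTimeout applies.
function effectiveKeepAlive (serverHintMs, {
  keepAliveTimeout = 4e3,
  keepAliveMaxTimeout = 600e3,
  keepAliveTimeoutThreshold = 1e3
} = {}) {
  if (serverHintMs == null) return keepAliveTimeout
  return Math.min(serverHintMs - keepAliveTimeoutThreshold, keepAliveMaxTimeout)
}

console.log(effectiveKeepAlive(10000)) // 9000 -- hint minus threshold
console.log(effectiveKeepAlive(null))  // 4000 -- default keepAliveTimeout
```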


@@ -109,7 +109,7 @@ declare namespace Dispatcher {
     blocking?: boolean;
     /** Upgrade the request. Should be used to specify the kind of upgrade i.e. `'Websocket'`. Default: `method === 'CONNECT' || null`. */
     upgrade?: boolean | string | null;
-    /** The amount of time the parser will wait to receive the complete HTTP headers. Defaults to 300 seconds. */
+    /** The amount of time, in milliseconds, the parser will wait to receive the complete HTTP headers. Defaults to 300 seconds. */
    headersTimeout?: number | null;
     /** The timeout after which a request will time out, in milliseconds. Monitors time between receiving body data. Use 0 to disable it entirely. Defaults to 300 seconds. */
     bodyTimeout?: number | null;
@@ -117,6 +117,8 @@ declare namespace Dispatcher {
     reset?: boolean;
     /** Whether Undici should throw an error upon receiving a 4xx or 5xx response from the server. Defaults to false */
     throwOnError?: boolean;
+    /** For H2, it appends the expect: 100-continue header, and halts the request body until a 100-continue is received from the remote server */
+    expectContinue?: boolean;
   }
   export interface ConnectOptions {
     path: string;

deps/undici/src/types/index.d.ts (new vendored file)

@@ -0,0 +1,57 @@
import Dispatcher from'./dispatcher'
import { setGlobalDispatcher, getGlobalDispatcher } from './global-dispatcher'
import { setGlobalOrigin, getGlobalOrigin } from './global-origin'
import Pool from'./pool'
import { RedirectHandler, DecoratorHandler } from './handlers'
import BalancedPool from './balanced-pool'
import Client from'./client'
import buildConnector from'./connector'
import errors from'./errors'
import Agent from'./agent'
import MockClient from'./mock-client'
import MockPool from'./mock-pool'
import MockAgent from'./mock-agent'
import mockErrors from'./mock-errors'
import ProxyAgent from'./proxy-agent'
import { request, pipeline, stream, connect, upgrade } from './api'
export * from './cookies'
export * from './fetch'
export * from './file'
export * from './filereader'
export * from './formdata'
export * from './diagnostics-channel'
export * from './websocket'
export * from './content-type'
export * from './cache'
export { Interceptable } from './mock-interceptor'
export { Dispatcher, BalancedPool, Pool, Client, buildConnector, errors, Agent, request, stream, pipeline, connect, upgrade, setGlobalDispatcher, getGlobalDispatcher, setGlobalOrigin, getGlobalOrigin, MockClient, MockPool, MockAgent, mockErrors, ProxyAgent, RedirectHandler, DecoratorHandler }
export default Undici
declare namespace Undici {
var Dispatcher: typeof import('./dispatcher').default
var Pool: typeof import('./pool').default;
var RedirectHandler: typeof import ('./handlers').RedirectHandler
var DecoratorHandler: typeof import ('./handlers').DecoratorHandler
var createRedirectInterceptor: typeof import ('./interceptors').createRedirectInterceptor
var BalancedPool: typeof import('./balanced-pool').default;
var Client: typeof import('./client').default;
var buildConnector: typeof import('./connector').default;
var errors: typeof import('./errors').default;
var Agent: typeof import('./agent').default;
var setGlobalDispatcher: typeof import('./global-dispatcher').setGlobalDispatcher;
var getGlobalDispatcher: typeof import('./global-dispatcher').getGlobalDispatcher;
var request: typeof import('./api').request;
var stream: typeof import('./api').stream;
var pipeline: typeof import('./api').pipeline;
var connect: typeof import('./api').connect;
var upgrade: typeof import('./api').upgrade;
var MockClient: typeof import('./mock-client').default;
var MockPool: typeof import('./mock-pool').default;
var MockAgent: typeof import('./mock-agent').default;
var mockErrors: typeof import('./mock-errors').default;
var fetch: typeof import('./fetch').fetch;
var caches: typeof import('./cache').caches;
}

deps/undici/src/types/package.json (new vendored file)

@@ -0,0 +1,55 @@
{
"name": "undici-types",
"version": "5.25.1",
"description": "A stand-alone types package for Undici",
"homepage": "https://undici.nodejs.org",
"bugs": {
"url": "https://github.com/nodejs/undici/issues"
},
"repository": {
"type": "git",
"url": "git+https://github.com/nodejs/undici.git"
},
"license": "MIT",
"types": "index.d.ts",
"files": [
"*.d.ts"
],
"contributors": [
{
"name": "Daniele Belardi",
"url": "https://github.com/dnlup",
"author": true
},
{
"name": "Ethan Arrowood",
"url": "https://github.com/ethan-arrowood",
"author": true
},
{
"name": "Matteo Collina",
"url": "https://github.com/mcollina",
"author": true
},
{
"name": "Matthew Aitken",
"url": "https://github.com/KhafraDev",
"author": true
},
{
"name": "Robert Nagy",
"url": "https://github.com/ronag",
"author": true
},
{
"name": "Szymon Marczak",
"url": "https://github.com/szmarczak",
"author": true
},
{
"name": "Tomas Della Vedova",
"url": "https://github.com/delvedor",
"author": true
}
]
}

deps/undici/undici.js (vendored)

File diff suppressed because it is too large.


@@ -28,7 +28,7 @@ This a list of all the dependencies:
 * [openssl 3.0.8][]
 * [postject 1.0.0-alpha.6][]
 * [simdutf 3.2.17][]
-* [undici 5.23.0][]
+* [undici 5.25.2][]
 * [uvwasi 0.0.16][]
 * [V8 11.3.244.8][]
 * [zlib 1.2.13.1-motley-f5fd0ad][]
@@ -291,7 +291,7 @@ The [postject](https://github.com/nodejs/postject) dependency is used for the
 The [simdutf](https://github.com/simdutf/simdutf) dependency is
 a C++ library for fast UTF-8 decoding and encoding.

-### undici 5.23.0
+### undici 5.25.2

 The [undici](https://github.com/nodejs/undici) dependency is an HTTP/1.1 client,
 written from scratch for Node.js..
@@ -345,7 +345,7 @@ performance improvements not currently available in standard zlib.
 [openssl 3.0.8]: #openssl-308
 [postject 1.0.0-alpha.6]: #postject-100-alpha6
 [simdutf 3.2.17]: #simdutf-3217
-[undici 5.23.0]: #undici-5230
+[undici 5.25.2]: #undici-5252
 [update-openssl-action]: ../../../.github/workflows/update-openssl.yml
 [uvwasi 0.0.16]: #uvwasi-0016
 [v8 11.3.244.8]: #v8-1132448


@@ -2,5 +2,5 @@
 // Refer to tools/update-undici.sh
 #ifndef SRC_UNDICI_VERSION_H_
 #define SRC_UNDICI_VERSION_H_
-#define UNDICI_VERSION "5.23.0"
+#define UNDICI_VERSION "5.25.2"
 #endif  // SRC_UNDICI_VERSION_H_