Mirror of https://github.com/nodejs/node.git (synced 2025-08-15 13:48:44 +02:00)
deps: update undici to 5.24.0
PR-URL: https://github.com/nodejs/node/pull/49559
Reviewed-By: Matteo Collina <matteo.collina@gmail.com>
Reviewed-By: Matthew Aitken <maitken033380023@gmail.com>
This commit is contained in:
parent
da7962fd4d
commit
ef062b981e
33 changed files with 2679 additions and 311 deletions

33 deps/undici/src/README.md (vendored)
@@ -18,30 +18,34 @@ npm i undici

 ## Benchmarks

 The benchmark is a simple `hello world` [example](benchmarks/benchmark.js) using a
-number of unix sockets (connections) with a pipelining depth of 10 running on Node 16.
-The benchmarks below have the [simd](https://github.com/WebAssembly/simd) feature enabled.
+number of unix sockets (connections) with a pipelining depth of 10 running on Node 20.6.0.

 ### Connections 1

 | Tests               | Samples | Result        | Tolerance | Difference with slowest |
 |---------------------|---------|---------------|-----------|-------------------------|
-| http - no keepalive | 15      | 4.63 req/sec  | ± 2.77 %  | -                       |
-| http - keepalive    | 10      | 4.81 req/sec  | ± 2.16 %  | + 3.94 %                |
-| undici - stream     | 25      | 62.22 req/sec | ± 2.67 %  | + 1244.58 %             |
-| undici - dispatch   | 15      | 64.33 req/sec | ± 2.47 %  | + 1290.24 %             |
-| undici - request    | 15      | 66.08 req/sec | ± 2.48 %  | + 1327.88 %             |
-| undici - pipeline   | 10      | 66.13 req/sec | ± 1.39 %  | + 1329.08 %             |
+| http - no keepalive | 15      | 5.32 req/sec  | ± 2.61 %  | -                       |
+| http - keepalive    | 10      | 5.35 req/sec  | ± 2.47 %  | + 0.44 %                |
+| undici - fetch      | 15      | 41.85 req/sec | ± 2.49 %  | + 686.04 %              |
+| undici - pipeline   | 40      | 50.36 req/sec | ± 2.77 %  | + 845.92 %              |
+| undici - stream     | 15      | 60.58 req/sec | ± 2.75 %  | + 1037.72 %             |
+| undici - request    | 10      | 61.19 req/sec | ± 2.60 %  | + 1049.24 %             |
+| undici - dispatch   | 20      | 64.84 req/sec | ± 2.81 %  | + 1117.81 %             |
 ### Connections 50

 | Tests               | Samples | Result           | Tolerance | Difference with slowest |
 |---------------------|---------|------------------|-----------|-------------------------|
-| http - no keepalive | 50      | 3546.49 req/sec  | ± 2.90 %  | -                       |
-| http - keepalive    | 15      | 5692.67 req/sec  | ± 2.48 %  | + 60.52 %               |
-| undici - pipeline   | 25      | 8478.71 req/sec  | ± 2.62 %  | + 139.07 %              |
-| undici - request    | 20      | 9766.66 req/sec  | ± 2.79 %  | + 175.39 %              |
-| undici - stream     | 15      | 10109.74 req/sec | ± 2.94 %  | + 185.06 %              |
-| undici - dispatch   | 25      | 10949.73 req/sec | ± 2.54 %  | + 208.75 %              |
+| undici - fetch      | 30      | 2107.19 req/sec  | ± 2.69 %  | -                       |
+| http - no keepalive | 10      | 2698.90 req/sec  | ± 2.68 %  | + 28.08 %               |
+| http - keepalive    | 10      | 4639.49 req/sec  | ± 2.55 %  | + 120.17 %              |
+| undici - pipeline   | 40      | 6123.33 req/sec  | ± 2.97 %  | + 190.59 %              |
+| undici - stream     | 50      | 9426.51 req/sec  | ± 2.92 %  | + 347.35 %              |
+| undici - request    | 10      | 10162.88 req/sec | ± 2.13 %  | + 382.29 %              |
+| undici - dispatch   | 50      | 11191.11 req/sec | ± 2.98 %  | + 431.09 %              |

 ## Quick Start
@@ -432,6 +436,7 @@ and `undici.Agent`) which will enable the family autoselection algorithm when es

 * [__Ethan Arrowood__](https://github.com/ethan-arrowood), <https://www.npmjs.com/~ethan_arrowood>
 * [__Matteo Collina__](https://github.com/mcollina), <https://www.npmjs.com/~matteo.collina>
 * [__Robert Nagy__](https://github.com/ronag), <https://www.npmjs.com/~ronag>
+* [__Matthew Aitken__](https://github.com/KhafraDev), <https://www.npmjs.com/~khaf>

 ## License
14 deps/undici/src/docs/api/Client.md (vendored)
@@ -17,11 +17,13 @@ Returns: `Client`

 ### Parameter: `ClientOptions`

+> ⚠️ Warning: The `H2` support is experimental.
+
 * **bodyTimeout** `number | null` (optional) - Default: `300e3` - The timeout after which a request will time out, in milliseconds. Monitors time between receiving body data. Use `0` to disable it entirely. Defaults to 300 seconds.
-* **headersTimeout** `number | null` (optional) - Default: `300e3` - The amount of time the parser will wait to receive the complete HTTP headers while not sending the request. Defaults to 300 seconds.
-* **keepAliveMaxTimeout** `number | null` (optional) - Default: `600e3` - The maximum allowed `keepAliveTimeout` when overridden by *keep-alive* hints from the server. Defaults to 10 minutes.
-* **keepAliveTimeout** `number | null` (optional) - Default: `4e3` - The timeout after which a socket without active requests will time out. Monitors time between activity on a connected socket. This value may be overridden by *keep-alive* hints from the server. See [MDN: HTTP - Headers - Keep-Alive directives](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Keep-Alive#directives) for more details. Defaults to 4 seconds.
-* **keepAliveTimeoutThreshold** `number | null` (optional) - Default: `1e3` - A number subtracted from server *keep-alive* hints when overriding `keepAliveTimeout` to account for timing inaccuracies caused by e.g. transport latency. Defaults to 1 second.
+* **headersTimeout** `number | null` (optional) - Default: `300e3` - The amount of time, in milliseconds, the parser will wait to receive the complete HTTP headers while not sending the request. Defaults to 300 seconds.
+* **keepAliveMaxTimeout** `number | null` (optional) - Default: `600e3` - The maximum allowed `keepAliveTimeout`, in milliseconds, when overridden by *keep-alive* hints from the server. Defaults to 10 minutes.
+* **keepAliveTimeout** `number | null` (optional) - Default: `4e3` - The timeout, in milliseconds, after which a socket without active requests will time out. Monitors time between activity on a connected socket. This value may be overridden by *keep-alive* hints from the server. See [MDN: HTTP - Headers - Keep-Alive directives](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Keep-Alive#directives) for more details. Defaults to 4 seconds.
+* **keepAliveTimeoutThreshold** `number | null` (optional) - Default: `1e3` - A number of milliseconds subtracted from server *keep-alive* hints when overriding `keepAliveTimeout` to account for timing inaccuracies caused by e.g. transport latency. Defaults to 1 second.
 * **maxHeaderSize** `number | null` (optional) - Default: `16384` - The maximum length of request headers in bytes. Defaults to 16KiB.
 * **maxResponseSize** `number | null` (optional) - Default: `-1` - The maximum length of response body in bytes. Set to `-1` to disable.
 * **pipelining** `number | null` (optional) - Default: `1` - The amount of concurrent requests to be sent over the single TCP/TLS connection according to [RFC7230](https://tools.ietf.org/html/rfc7230#section-6.3.2). Carefully consider your workload and environment before enabling concurrent requests as pipelining may reduce performance if used incorrectly. Pipelining is sensitive to network stack settings as well as head of line blocking caused by e.g. long running requests. Set to `0` to disable keep-alive connections.
@@ -30,6 +32,8 @@ Returns: `Client`

 * **interceptors** `{ Client: DispatchInterceptor[] }` - Default: `[RedirectInterceptor]` - A list of interceptors that are applied to the dispatch method. Additional logic can be applied (such as, but not limited to: 302 status code handling, authentication, cookies, compression and caching). Note that the behavior of interceptors is Experimental and might change at any given time.
 * **autoSelectFamily**: `boolean` (optional) - Default: depends on local Node version, on Node 18.13.0 and above is `false`. Enables a family autodetection algorithm that loosely implements section 5 of [RFC 8305](https://tools.ietf.org/html/rfc8305#section-5). See [here](https://nodejs.org/api/net.html#socketconnectoptions-connectlistener) for more details. This option is ignored if not supported by the current Node version.
 * **autoSelectFamilyAttemptTimeout**: `number` - Default: depends on local Node version, on Node 18.13.0 and above is `250`. The amount of time in milliseconds to wait for a connection attempt to finish before trying the next address when using the `autoSelectFamily` option. See [here](https://nodejs.org/api/net.html#socketconnectoptions-connectlistener) for more details.
+* **allowH2**: `boolean` - Default: `false`. Enables support for H2 if the server has assigned bigger priority to it through ALPN negotiation.
+* **maxConcurrentStreams**: `number` - Default: `100`. Dictates the maximum number of concurrent streams for a single H2 session. It can be overridden by a SETTINGS remote frame.

 #### Parameter: `ConnectOptions`

@@ -38,7 +42,7 @@ Furthermore, the following options can be passed:

 * **socketPath** `string | null` (optional) - Default: `null` - An IPC endpoint, either Unix domain socket or Windows named pipe.
 * **maxCachedSessions** `number | null` (optional) - Default: `100` - Maximum number of TLS cached sessions. Use 0 to disable TLS session caching.
-* **timeout** `number | null` (optional) - Default `10e3`
+* **timeout** `number | null` (optional) - In milliseconds. Default `10e3`.
 * **servername** `string | null` (optional)
 * **keepAlive** `boolean | null` (optional) - Default: `true` - TCP keep-alive enabled
 * **keepAliveInitialDelay** `number | null` (optional) - Default: `60000` - TCP keep-alive interval for the socket in milliseconds
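Putting the documented `ClientOptions` defaults together, the shape looks like the sketch below. This is a plain object only (undici itself is not imported here), and every value is simply the default already listed in the doc above:

```javascript
// All durations are in milliseconds; `Ne3` is scientific notation,
// so 300e3 ms is 300 seconds, matching the prose defaults above.
const clientOptions = {
  bodyTimeout: 300e3, // 300 s between body chunks
  headersTimeout: 300e3, // 300 s to receive complete headers
  keepAliveTimeout: 4e3, // 4 s idle-socket timeout
  keepAliveMaxTimeout: 600e3, // cap on server keep-alive hints (10 min)
  keepAliveTimeoutThreshold: 1e3, // subtracted from server hints (1 s)
  maxHeaderSize: 16384, // 16 KiB of request headers
  maxResponseSize: -1, // unlimited response body
  pipelining: 1 // one in-flight request per connection
}

console.log(clientOptions.headersTimeout / 1000) // 300 (seconds)
```

An object of this shape would be the second argument to `new Client(origin, options)`.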
4 deps/undici/src/docs/api/Connector.md (vendored)
@@ -13,8 +13,8 @@ Every Tls option, see [here](https://nodejs.org/api/tls.html#tls_tls_connect_options_callback).

 Furthermore, the following options can be passed:

 * **socketPath** `string | null` (optional) - Default: `null` - An IPC endpoint, either Unix domain socket or Windows named pipe.
-* **maxCachedSessions** `number | null` (optional) - Default: `100` - Maximum number of TLS cached sessions. Use 0 to disable TLS session caching. Default: 100.
-* **timeout** `number | null` (optional) - Default `10e3`
+* **maxCachedSessions** `number | null` (optional) - Default: `100` - Maximum number of TLS cached sessions. Use 0 to disable TLS session caching. Default: `100`.
+* **timeout** `number | null` (optional) - In milliseconds. Default `10e3`.
 * **servername** `string | null` (optional)

 Once you call `buildConnector`, it will return a connector function, which takes the following parameters.
3 deps/undici/src/docs/api/Dispatcher.md (vendored)
@@ -200,8 +200,9 @@ Returns: `Boolean` - `false` if dispatcher is busy and further dispatch calls wo

 * **blocking** `boolean` (optional) - Default: `false` - Whether the response is expected to take a long time and would end up blocking the pipeline. When this is set to `true` further pipelining will be avoided on the same connection until headers have been received.
 * **upgrade** `string | null` (optional) - Default: `null` - Upgrade the request. Should be used to specify the kind of upgrade i.e. `'Websocket'`.
 * **bodyTimeout** `number | null` (optional) - The timeout after which a request will time out, in milliseconds. Monitors time between receiving body data. Use `0` to disable it entirely. Defaults to 300 seconds.
-* **headersTimeout** `number | null` (optional) - The amount of time the parser will wait to receive the complete HTTP headers while not sending the request. Defaults to 300 seconds.
+* **headersTimeout** `number | null` (optional) - The amount of time, in milliseconds, the parser will wait to receive the complete HTTP headers while not sending the request. Defaults to 300 seconds.
 * **throwOnError** `boolean` (optional) - Default: `false` - Whether Undici should throw an error upon receiving a 4xx or 5xx response from the server.
+* **expectContinue** `boolean` (optional) - Default: `false` - For H2, it appends the expect: 100-continue header, and halts the request body until a 100-continue is received from the remote server.

 #### Parameter: `DispatchHandler`
3 deps/undici/src/docs/api/MockPool.md (vendored)
@@ -35,7 +35,8 @@ const mockPool = mockAgent.get('http://localhost:3000')

 ### `MockPool.intercept(options)`

-This method defines the interception rules for matching against requests for a MockPool. We can intercept multiple times on a single instance.
+This method defines the interception rules for matching against requests for a MockPool. We can intercept multiple times on a single instance, but each intercept is only used once.
+For example, if you expect to make 2 requests inside a test, you need to call `intercept()` twice. Assuming you use `disableNetConnect()` you will get `MockNotMatchedError` on the second request when you only call `intercept()` once.

 When defining interception rules, all the rules must pass for a request to be intercepted. If a request is not intercepted, a real request will be attempted.
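The single-use behaviour described above can be pictured as a queue of one-shot matchers. The sketch below is purely illustrative: `addIntercept` and `match` are invented names for this example, not undici internals.

```javascript
const intercepts = []

function addIntercept (path, reply) {
  intercepts.push({ path, reply, consumed: false })
}

function match (path) {
  // Find the first rule for this path that has not been used yet.
  const rule = intercepts.find(r => !r.consumed && r.path === path)
  if (!rule) throw new Error('MockNotMatchedError: ' + path)
  rule.consumed = true // each rule matches exactly once
  return rule.reply
}

addIntercept('/user', 'first')
addIntercept('/user', 'second')

console.log(match('/user')) // "first"
console.log(match('/user')) // "second"
// A third match('/user') would now throw, mirroring MockNotMatchedError.
```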
5 deps/undici/src/index-fetch.js (vendored)
@@ -2,9 +2,9 @@

 const fetchImpl = require('./lib/fetch').fetch

-module.exports.fetch = async function fetch (resource) {
+module.exports.fetch = async function fetch (resource, init = undefined) {
   try {
-    return await fetchImpl(...arguments)
+    return await fetchImpl(resource, init)
   } catch (err) {
     Error.captureStackTrace(err, this)
     throw err

@@ -14,3 +14,4 @@ module.exports.FormData = require('./lib/fetch/formdata').FormData
 module.exports.Headers = require('./lib/fetch/headers').Headers
 module.exports.Response = require('./lib/fetch/response').Response
 module.exports.Request = require('./lib/fetch/request').Request
+module.exports.WebSocket = require('./lib/websocket/websocket').WebSocket
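The `try`/`catch` around `fetchImpl` above exists so a rejection's stack trace points at the caller of `fetch` rather than at undici internals. The same pattern in isolation, using `fakeFetch` as an invented stand-in for the real implementation:

```javascript
// Stand-in for the real fetch implementation (hypothetical).
async function fakeFetch () {
  throw new Error('boom')
}

async function fetchWrapper (resource, init = undefined) {
  try {
    return await fakeFetch(resource, init)
  } catch (err) {
    // Re-anchor the captured stack at fetchWrapper's call site,
    // hiding the internal frames below it.
    Error.captureStackTrace(err, fetchWrapper)
    throw err
  }
}

fetchWrapper('https://example.invalid').catch((err) => {
  console.log(err.message) // "boom"
})
```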
58 deps/undici/src/index.d.ts (vendored)
@@ -1,57 +1,3 @@
-import Dispatcher from './types/dispatcher'
-import { setGlobalDispatcher, getGlobalDispatcher } from './types/global-dispatcher'
-import { setGlobalOrigin, getGlobalOrigin } from './types/global-origin'
-import Pool from './types/pool'
-import { RedirectHandler, DecoratorHandler } from './types/handlers'
-
-import BalancedPool from './types/balanced-pool'
-import Client from './types/client'
-import buildConnector from './types/connector'
-import errors from './types/errors'
-import Agent from './types/agent'
-import MockClient from './types/mock-client'
-import MockPool from './types/mock-pool'
-import MockAgent from './types/mock-agent'
-import mockErrors from './types/mock-errors'
-import ProxyAgent from './types/proxy-agent'
-import { request, pipeline, stream, connect, upgrade } from './types/api'
-
-export * from './types/cookies'
-export * from './types/fetch'
-export * from './types/file'
-export * from './types/filereader'
-export * from './types/formdata'
-export * from './types/diagnostics-channel'
-export * from './types/websocket'
-export * from './types/content-type'
-export * from './types/cache'
-export { Interceptable } from './types/mock-interceptor'
-
-export { Dispatcher, BalancedPool, Pool, Client, buildConnector, errors, Agent, request, stream, pipeline, connect, upgrade, setGlobalDispatcher, getGlobalDispatcher, setGlobalOrigin, getGlobalOrigin, MockClient, MockPool, MockAgent, mockErrors, ProxyAgent, RedirectHandler, DecoratorHandler }
+export * from './types/index'
+import Undici from './types/index'
+export default Undici
-
-declare namespace Undici {
-  var Dispatcher: typeof import('./types/dispatcher').default
-  var Pool: typeof import('./types/pool').default;
-  var RedirectHandler: typeof import('./types/handlers').RedirectHandler
-  var DecoratorHandler: typeof import('./types/handlers').DecoratorHandler
-  var createRedirectInterceptor: typeof import('./types/interceptors').createRedirectInterceptor
-  var BalancedPool: typeof import('./types/balanced-pool').default;
-  var Client: typeof import('./types/client').default;
-  var buildConnector: typeof import('./types/connector').default;
-  var errors: typeof import('./types/errors').default;
-  var Agent: typeof import('./types/agent').default;
-  var setGlobalDispatcher: typeof import('./types/global-dispatcher').setGlobalDispatcher;
-  var getGlobalDispatcher: typeof import('./types/global-dispatcher').getGlobalDispatcher;
-  var request: typeof import('./types/api').request;
-  var stream: typeof import('./types/api').stream;
-  var pipeline: typeof import('./types/api').pipeline;
-  var connect: typeof import('./types/api').connect;
-  var upgrade: typeof import('./types/api').upgrade;
-  var MockClient: typeof import('./types/mock-client').default;
-  var MockPool: typeof import('./types/mock-pool').default;
-  var MockAgent: typeof import('./types/mock-agent').default;
-  var mockErrors: typeof import('./types/mock-errors').default;
-  var fetch: typeof import('./types/fetch').fetch;
-  var caches: typeof import('./types/cache').caches;
-}
5 deps/undici/src/index.js (vendored)
@@ -106,7 +106,10 @@ if (util.nodeMajor > 16 || (util.nodeMajor === 16 && util.nodeMinor >= 8)) {
     try {
       return await fetchImpl(...arguments)
     } catch (err) {
-      Error.captureStackTrace(err, this)
+      if (typeof err === 'object') {
+        Error.captureStackTrace(err, this)
+      }
+
       throw err
     }
   }
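The `typeof err === 'object'` guard added above matters because JavaScript allows throwing primitives, and `Error.captureStackTrace` only accepts objects. A small sketch of the same guard (`safeCapture` is an invented name for this example):

```javascript
function safeCapture (fn) {
  try {
    fn()
  } catch (err) {
    // Only objects can carry a stack; primitives pass through untouched.
    if (typeof err === 'object' && err !== null) {
      Error.captureStackTrace(err, safeCapture)
    }
    return err
  }
}

// eslint-disable-next-line no-throw-literal
console.log(typeof safeCapture(() => { throw 'plain string' })) // "string"
```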
10 deps/undici/src/lib/api/api-connect.js (vendored)
@@ -1,7 +1,7 @@
 'use strict'

-const { InvalidArgumentError, RequestAbortedError, SocketError } = require('../core/errors')
 const { AsyncResource } = require('async_hooks')
+const { InvalidArgumentError, RequestAbortedError, SocketError } = require('../core/errors')
 const util = require('../core/util')
 const { addSignal, removeSignal } = require('./abort-signal')

@@ -50,7 +50,13 @@ class ConnectHandler extends AsyncResource {
     removeSignal(this)

     this.callback = null
-    const headers = this.responseHeaders === 'raw' ? util.parseRawHeaders(rawHeaders) : util.parseHeaders(rawHeaders)
+
+    let headers = rawHeaders
+    // A null rawHeaders indicates an HTTP2Session
+    if (headers != null) {
+      headers = this.responseHeaders === 'raw' ? util.parseRawHeaders(rawHeaders) : util.parseHeaders(rawHeaders)
+    }

     this.runInAsyncScope(callback, null, null, {
       statusCode,
       headers,
1 deps/undici/src/lib/api/api-request.js (vendored)
@@ -95,7 +95,6 @@ class RequestHandler extends AsyncResource {

     this.callback = null
     this.res = body
-
     if (callback !== null) {
       if (this.throwOnError && statusCode >= 400) {
         this.runInAsyncScope(getResolveErrorBodyCallback, null,
6 deps/undici/src/lib/cache/cache.js (vendored)
@@ -379,11 +379,7 @@ class Cache {
     const reader = stream.getReader()

     // 11.3
-    readAllBytes(
-      reader,
-      (bytes) => bodyReadPromise.resolve(bytes),
-      (error) => bodyReadPromise.reject(error)
-    )
+    readAllBytes(reader).then(bodyReadPromise.resolve, bodyReadPromise.reject)
   } else {
     bodyReadPromise.resolve(undefined)
   }
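The cache.js change above swaps a callback-style `readAllBytes` for one that returns a promise. A minimal promise-returning "read all bytes" over a WHATWG `ReadableStream` reader might look like this sketch (an assumption about the helper's shape, not undici's actual implementation):

```javascript
async function readAllBytes (reader) {
  const chunks = []
  let total = 0
  while (true) {
    const { done, value } = await reader.read()
    if (done) {
      // Concatenate every chunk into one contiguous Uint8Array.
      const out = new Uint8Array(total)
      let offset = 0
      for (const chunk of chunks) {
        out.set(chunk, offset)
        offset += chunk.length
      }
      return out
    }
    chunks.push(value)
    total += value.length
  }
}

// Wired up exactly like the changed line in the diff:
// readAllBytes(reader).then(bodyReadPromise.resolve, bodyReadPromise.reject)
```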
571 deps/undici/src/lib/client.js (vendored)
@@ -6,6 +6,7 @@

 const assert = require('assert')
 const net = require('net')
+const { pipeline } = require('stream')
 const util = require('./core/util')
 const timers = require('./timers')
 const Request = require('./core/request')

@@ -67,8 +68,40 @@
   kDispatch,
   kInterceptors,
   kLocalAddress,
-  kMaxResponseSize
+  kMaxResponseSize,
+  kHTTPConnVersion,
+  // HTTP2
+  kHost,
+  kHTTP2Session,
+  kHTTP2SessionState,
+  kHTTP2BuildRequest,
+  kHTTP2CopyHeaders,
+  kHTTP1BuildRequest
 } = require('./core/symbols')

+/** @type {import('http2')} */
+let http2
+try {
+  http2 = require('http2')
+} catch {
+  // @ts-ignore
+  http2 = { constants: {} }
+}
+
+const {
+  constants: {
+    HTTP2_HEADER_AUTHORITY,
+    HTTP2_HEADER_METHOD,
+    HTTP2_HEADER_PATH,
+    HTTP2_HEADER_CONTENT_LENGTH,
+    HTTP2_HEADER_EXPECT,
+    HTTP2_HEADER_STATUS
+  }
+} = http2
+
+// Experimental
+let h2ExperimentalWarned = false
+
+const FastBuffer = Buffer[Symbol.species]
+
 const kClosedResolve = Symbol('kClosedResolve')
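The guarded `require('http2')` above is the standard optional-dependency pattern: if the Node build lacks http2 support, the `{ constants: {} }` fallback keeps the destructuring below it from throwing, and the H2 code paths simply never run. The same pattern in isolation:

```javascript
let http2
try {
  http2 = require('http2')
} catch {
  // Fallback object so destructuring `constants` still succeeds.
  http2 = { constants: {} }
}

// On builds without http2 this is undefined instead of a crash.
const { constants: { HTTP2_HEADER_STATUS } } = http2

console.log(HTTP2_HEADER_STATUS) // ":status" when http2 is available
```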
@@ -122,7 +155,10 @@ class Client extends DispatcherBase {
       localAddress,
       maxResponseSize,
       autoSelectFamily,
-      autoSelectFamilyAttemptTimeout
+      autoSelectFamilyAttemptTimeout,
+      // h2
+      allowH2,
+      maxConcurrentStreams
     } = {}) {
     super()

@@ -205,10 +241,20 @@ class Client extends DispatcherBase {
       throw new InvalidArgumentError('autoSelectFamilyAttemptTimeout must be a positive number')
     }

+    // h2
+    if (allowH2 != null && typeof allowH2 !== 'boolean') {
+      throw new InvalidArgumentError('allowH2 must be a valid boolean value')
+    }
+
+    if (maxConcurrentStreams != null && (typeof maxConcurrentStreams !== 'number' || maxConcurrentStreams < 1)) {
+      throw new InvalidArgumentError('maxConcurrentStreams must be a positive integer, greater than 0')
+    }
+
     if (typeof connect !== 'function') {
       connect = buildConnector({
         ...tls,
         maxCachedSessions,
+        allowH2,
         socketPath,
         timeout: connectTimeout,
         ...(util.nodeHasAutoSelectFamily && autoSelectFamily ? { autoSelectFamily, autoSelectFamilyAttemptTimeout } : undefined),

@@ -240,6 +286,18 @@ class Client extends DispatcherBase {
     this[kMaxRequests] = maxRequestsPerClient
     this[kClosedResolve] = null
     this[kMaxResponseSize] = maxResponseSize > -1 ? maxResponseSize : -1
+    this[kHTTPConnVersion] = 'h1'
+
+    // HTTP/2
+    this[kHTTP2Session] = null
+    this[kHTTP2SessionState] = !allowH2
+      ? null
+      : {
+        // streams: null, // Fixed queue of streams - For future support of `push`
+        openStreams: 0, // Keep track of them to decide whether or not to unref the session
+        maxConcurrentStreams: maxConcurrentStreams != null ? maxConcurrentStreams : 100 // Max peerConcurrentStreams for a Node h2 server
+      }
+    this[kHost] = `${this[kUrl].hostname}${this[kUrl].port ? `:${this[kUrl].port}` : ''}`

     // kQueue is built up of 3 sections separated by
     // the kRunningIdx and kPendingIdx indices.
@@ -298,7 +356,9 @@ class Client extends DispatcherBase {
   [kDispatch] (opts, handler) {
     const origin = opts.origin || this[kUrl].origin

-    const request = new Request(origin, opts, handler)
+    const request = this[kHTTPConnVersion] === 'h2'
+      ? Request[kHTTP2BuildRequest](origin, opts, handler)
+      : Request[kHTTP1BuildRequest](origin, opts, handler)

     this[kQueue].push(request)
     if (this[kResuming]) {

@@ -319,6 +379,8 @@ class Client extends DispatcherBase {
   }

   async [kClose] () {
+    // TODO: for H2 we need to gracefully flush the remaining enqueued
+    // request and close each stream.
     return new Promise((resolve) => {
       if (!this[kSize]) {
         resolve(null)

@@ -345,6 +407,12 @@ class Client extends DispatcherBase {
         resolve()
       }

+      if (this[kHTTP2Session] != null) {
+        util.destroy(this[kHTTP2Session], err)
+        this[kHTTP2Session] = null
+        this[kHTTP2SessionState] = null
+      }
+
       if (!this[kSocket]) {
         queueMicrotask(callback)
       } else {
@@ -356,6 +424,64 @@ class Client extends DispatcherBase {
   }
 }

+function onHttp2SessionError (err) {
+  assert(err.code !== 'ERR_TLS_CERT_ALTNAME_INVALID')
+
+  this[kSocket][kError] = err
+
+  onError(this[kClient], err)
+}
+
+function onHttp2FrameError (type, code, id) {
+  const err = new InformationalError(`HTTP/2: "frameError" received - type ${type}, code ${code}`)
+
+  if (id === 0) {
+    this[kSocket][kError] = err
+    onError(this[kClient], err)
+  }
+}
+
+function onHttp2SessionEnd () {
+  util.destroy(this, new SocketError('other side closed'))
+  util.destroy(this[kSocket], new SocketError('other side closed'))
+}
+
+function onHTTP2GoAway (code) {
+  const client = this[kClient]
+  const err = new InformationalError(`HTTP/2: "GOAWAY" frame received with code ${code}`)
+  client[kSocket] = null
+  client[kHTTP2Session] = null
+
+  if (client.destroyed) {
+    assert(this[kPending] === 0)
+
+    // Fail entire queue.
+    const requests = client[kQueue].splice(client[kRunningIdx])
+    for (let i = 0; i < requests.length; i++) {
+      const request = requests[i]
+      errorRequest(this, request, err)
+    }
+  } else if (client[kRunning] > 0) {
+    // Fail head of pipeline.
+    const request = client[kQueue][client[kRunningIdx]]
+    client[kQueue][client[kRunningIdx]++] = null
+
+    errorRequest(client, request, err)
+  }
+
+  client[kPendingIdx] = client[kRunningIdx]
+
+  assert(client[kRunning] === 0)
+
+  client.emit('disconnect',
+    client[kUrl],
+    [client],
+    err
+  )
+
+  resume(client)
+}
+
 const constants = require('./llhttp/constants')
 const createRedirectInterceptor = require('./interceptor/redirectInterceptor')
 const EMPTY_BUF = Buffer.alloc(0)
@@ -946,16 +1072,18 @@ function onSocketReadable () {
 }

 function onSocketError (err) {
-  const { [kParser]: parser } = this
+  const { [kClient]: client, [kParser]: parser } = this

   assert(err.code !== 'ERR_TLS_CERT_ALTNAME_INVALID')

-  // On Mac OS, we get an ECONNRESET even if there is a full body to be forwarded
-  // to the user.
-  if (err.code === 'ECONNRESET' && parser.statusCode && !parser.shouldKeepAlive) {
-    // We treat all incoming data so far as a valid response.
-    parser.onMessageComplete()
-    return
+  if (client[kHTTPConnVersion] !== 'h2') {
+    // On Mac OS, we get an ECONNRESET even if there is a full body to be forwarded
+    // to the user.
+    if (err.code === 'ECONNRESET' && parser.statusCode && !parser.shouldKeepAlive) {
+      // We treat all incoming data so far as a valid response.
+      parser.onMessageComplete()
+      return
+    }
   }

   this[kError] = err
@@ -984,28 +1112,32 @@ function onError (client, err) {
 }

 function onSocketEnd () {
-  const { [kParser]: parser } = this
+  const { [kParser]: parser, [kClient]: client } = this

-  if (parser.statusCode && !parser.shouldKeepAlive) {
-    // We treat all incoming data so far as a valid response.
-    parser.onMessageComplete()
-    return
+  if (client[kHTTPConnVersion] !== 'h2') {
+    if (parser.statusCode && !parser.shouldKeepAlive) {
+      // We treat all incoming data so far as a valid response.
+      parser.onMessageComplete()
+      return
+    }
   }

   util.destroy(this, new SocketError('other side closed', util.getSocketInfo(this)))
 }

 function onSocketClose () {
-  const { [kClient]: client } = this
+  const { [kClient]: client, [kParser]: parser } = this

-  if (!this[kError] && this[kParser].statusCode && !this[kParser].shouldKeepAlive) {
-    // We treat all incoming data so far as a valid response.
-    this[kParser].onMessageComplete()
+  if (client[kHTTPConnVersion] === 'h1' && parser) {
+    if (!this[kError] && parser.statusCode && !parser.shouldKeepAlive) {
+      // We treat all incoming data so far as a valid response.
+      parser.onMessageComplete()
+    }
+
+    this[kParser].destroy()
+    this[kParser] = null
   }

-  this[kParser].destroy()
-  this[kParser] = null
-
   const err = this[kError] || new SocketError('closed', util.getSocketInfo(this))

   client[kSocket] = null
@@ -1092,24 +1224,54 @@ async function connect (client) {
     return
   }

-  if (!llhttpInstance) {
-    llhttpInstance = await llhttpPromise
-    llhttpPromise = null
-  }
-
   client[kConnecting] = false

   assert(socket)

-  socket[kNoRef] = false
-  socket[kWriting] = false
-  socket[kReset] = false
-  socket[kBlocking] = false
-  socket[kError] = null
-  socket[kParser] = new Parser(client, socket, llhttpInstance)
-  socket[kClient] = client
+  const isH2 = socket.alpnProtocol === 'h2'
+  if (isH2) {
+    if (!h2ExperimentalWarned) {
+      h2ExperimentalWarned = true
+      process.emitWarning('H2 support is experimental, expect it to change at any time.', {
+        code: 'UNDICI-H2'
+      })
+    }
+
+    const session = http2.connect(client[kUrl], {
+      createConnection: () => socket,
+      peerMaxConcurrentStreams: client[kHTTP2SessionState].maxConcurrentStreams
+    })
+
+    client[kHTTPConnVersion] = 'h2'
+    session[kClient] = client
+    session[kSocket] = socket
+    session.on('error', onHttp2SessionError)
+    session.on('frameError', onHttp2FrameError)
+    session.on('end', onHttp2SessionEnd)
+    session.on('goaway', onHTTP2GoAway)
+    session.on('close', onSocketClose)
+    session.unref()
+
+    client[kHTTP2Session] = session
+    socket[kHTTP2Session] = session
+  } else {
+    if (!llhttpInstance) {
+      llhttpInstance = await llhttpPromise
+      llhttpPromise = null
+    }
+
+    socket[kNoRef] = false
+    socket[kWriting] = false
+    socket[kReset] = false
+    socket[kBlocking] = false
+    socket[kParser] = new Parser(client, socket, llhttpInstance)
+  }
+
+  socket[kCounter] = 0
+  socket[kMaxRequests] = client[kMaxRequests]
+  socket[kClient] = client
+  socket[kError] = null

   socket
     .on('error', onSocketError)
     .on('readable', onSocketReadable)
@@ -1208,7 +1370,7 @@ function _resume (client, sync) {

   const socket = client[kSocket]

-  if (socket && !socket.destroyed) {
+  if (socket && !socket.destroyed && socket.alpnProtocol !== 'h2') {
     if (client[kSize] === 0) {
       if (!socket[kNoRef] && socket.unref) {
         socket.unref()

@@ -1273,7 +1435,7 @@ function _resume (client, sync) {
     return
   }

-  if (!socket) {
+  if (!socket && !client[kHTTP2Session]) {
     connect(client)
     return
   }

@@ -1334,6 +1496,11 @@ function _resume (client, sync) {
 }

 function write (client, request) {
+  if (client[kHTTPConnVersion] === 'h2') {
+    writeH2(client, client[kHTTP2Session], request)
+    return
+  }
+
   const { body, method, path, host, upgrade, headers, blocking, reset } = request

   // https://tools.ietf.org/html/rfc7231#section-4.3.1
@@ -1489,9 +1656,291 @@ function write (client, request) {
   return true
 }
 
-function writeStream ({ body, client, request, socket, contentLength, header, expectsPayload }) {
+function writeH2 (client, session, request) {
+  const { body, method, path, host, upgrade, expectContinue, signal, headers: reqHeaders } = request
+
+  let headers
+  if (typeof reqHeaders === 'string') headers = Request[kHTTP2CopyHeaders](reqHeaders.trim())
+  else headers = reqHeaders
+
+  if (upgrade) {
+    errorRequest(client, request, new Error('Upgrade not supported for H2'))
+    return false
+  }
+
+  try {
+    // TODO(HTTP/2): Should we call onConnect immediately or on stream ready event?
+    request.onConnect((err) => {
+      if (request.aborted || request.completed) {
+        return
+      }
+
+      errorRequest(client, request, err || new RequestAbortedError())
+    })
+  } catch (err) {
+    errorRequest(client, request, err)
+  }
+
+  if (request.aborted) {
+    return false
+  }
+
+  let stream
+  const h2State = client[kHTTP2SessionState]
+
+  headers[HTTP2_HEADER_AUTHORITY] = host || client[kHost]
+  headers[HTTP2_HEADER_PATH] = path
+
+  if (method === 'CONNECT') {
+    session.ref()
+    // we are already connected, streams are pending, first request
+    // will create a new stream. We trigger a request to create the stream and wait until
+    // `ready` event is triggered
+    // We disabled endStream to allow the user to write to the stream
+    stream = session.request(headers, { endStream: false, signal })
+
+    if (stream.id && !stream.pending) {
+      request.onUpgrade(null, null, stream)
+      ++h2State.openStreams
+    } else {
+      stream.once('ready', () => {
+        request.onUpgrade(null, null, stream)
+        ++h2State.openStreams
+      })
+    }
+
+    stream.once('close', () => {
+      h2State.openStreams -= 1
+      // TODO(HTTP/2): unref only if current streams count is 0
+      if (h2State.openStreams === 0) session.unref()
+    })
+
+    return true
+  } else {
+    headers[HTTP2_HEADER_METHOD] = method
+  }
+
+  // https://tools.ietf.org/html/rfc7231#section-4.3.1
+  // https://tools.ietf.org/html/rfc7231#section-4.3.2
+  // https://tools.ietf.org/html/rfc7231#section-4.3.5
+
+  // Sending a payload body on a request that does not
+  // expect it can cause undefined behavior on some
+  // servers and corrupt connection state. Do not
+  // re-use the connection for further requests.
+
+  const expectsPayload = (
+    method === 'PUT' ||
+    method === 'POST' ||
+    method === 'PATCH'
+  )
+
+  if (body && typeof body.read === 'function') {
+    // Try to read EOF in order to get length.
+    body.read(0)
+  }
+
+  let contentLength = util.bodyLength(body)
+
+  if (contentLength == null) {
+    contentLength = request.contentLength
+  }
+
+  if (contentLength === 0 || !expectsPayload) {
+    // https://tools.ietf.org/html/rfc7230#section-3.3.2
+    // A user agent SHOULD NOT send a Content-Length header field when
+    // the request message does not contain a payload body and the method
+    // semantics do not anticipate such a body.
+
+    contentLength = null
+  }
+
+  if (request.contentLength != null && request.contentLength !== contentLength) {
+    if (client[kStrictContentLength]) {
+      errorRequest(client, request, new RequestContentLengthMismatchError())
+      return false
+    }
+
+    process.emitWarning(new RequestContentLengthMismatchError())
+  }
+
+  if (contentLength != null) {
+    assert(body, 'no body must not have content length')
+    headers[HTTP2_HEADER_CONTENT_LENGTH] = `${contentLength}`
+  }
+
+  session.ref()
+
+  const shouldEndStream = method === 'GET' || method === 'HEAD'
+  if (expectContinue) {
+    headers[HTTP2_HEADER_EXPECT] = '100-continue'
+    /**
+     * @type {import('node:http2').ClientHttp2Stream}
+     */
+    stream = session.request(headers, { endStream: shouldEndStream, signal })
+
+    stream.once('continue', writeBodyH2)
+  } else {
+    /** @type {import('node:http2').ClientHttp2Stream} */
+    stream = session.request(headers, {
+      endStream: shouldEndStream,
+      signal
+    })
+    writeBodyH2()
+  }
+
+  // Increment counter as we have new several streams open
+  ++h2State.openStreams
+
+  stream.once('response', headers => {
+    if (request.onHeaders(Number(headers[HTTP2_HEADER_STATUS]), headers, stream.resume.bind(stream), '') === false) {
+      stream.pause()
+    }
+  })
+
+  stream.once('end', () => {
+    request.onComplete([])
+  })
+
+  stream.on('data', (chunk) => {
+    if (request.onData(chunk) === false) stream.pause()
+  })
+
+  stream.once('close', () => {
+    h2State.openStreams -= 1
+    // TODO(HTTP/2): unref only if current streams count is 0
+    if (h2State.openStreams === 0) session.unref()
+  })
+
+  stream.once('error', function (err) {
+    if (client[kHTTP2Session] && !client[kHTTP2Session].destroyed && !this.closed && !this.destroyed) {
+      h2State.streams -= 1
+      util.destroy(stream, err)
+    }
+  })
+
+  stream.once('frameError', (type, code) => {
+    const err = new InformationalError(`HTTP/2: "frameError" received - type ${type}, code ${code}`)
+    errorRequest(client, request, err)
+
+    if (client[kHTTP2Session] && !client[kHTTP2Session].destroyed && !this.closed && !this.destroyed) {
+      h2State.streams -= 1
+      util.destroy(stream, err)
+    }
+  })
+
+  // stream.on('aborted', () => {
+  //   // TODO(HTTP/2): Support aborted
+  // })
+
+  // stream.on('timeout', () => {
+  //   // TODO(HTTP/2): Support timeout
+  // })
+
+  // stream.on('push', headers => {
+  //   // TODO(HTTP/2): Suppor push
+  // })
+
+  // stream.on('trailers', headers => {
+  //   // TODO(HTTP/2): Support trailers
+  // })
+
+  return true
+
+  function writeBodyH2 () {
+    /* istanbul ignore else: assertion */
+    if (!body) {
+      request.onRequestSent()
+    } else if (util.isBuffer(body)) {
+      assert(contentLength === body.byteLength, 'buffer body must have content length')
+      stream.cork()
+      stream.write(body)
+      stream.uncork()
+      request.onBodySent(body)
+      request.onRequestSent()
+    } else if (util.isBlobLike(body)) {
+      if (typeof body.stream === 'function') {
+        writeIterable({
+          client,
+          request,
+          contentLength,
+          h2stream: stream,
+          expectsPayload,
+          body: body.stream(),
+          socket: client[kSocket],
+          header: ''
+        })
+      } else {
+        writeBlob({
+          body,
+          client,
+          request,
+          contentLength,
+          expectsPayload,
+          h2stream: stream,
+          header: '',
+          socket: client[kSocket]
+        })
+      }
+    } else if (util.isStream(body)) {
+      writeStream({
+        body,
+        client,
+        request,
+        contentLength,
+        expectsPayload,
+        socket: client[kSocket],
+        h2stream: stream,
+        header: ''
+      })
+    } else if (util.isIterable(body)) {
+      writeIterable({
+        body,
+        client,
+        request,
+        contentLength,
+        expectsPayload,
+        header: '',
+        h2stream: stream,
+        socket: client[kSocket]
+      })
+    } else {
+      assert(false)
+    }
+  }
+}
+
+function writeStream ({ h2stream, body, client, request, socket, contentLength, header, expectsPayload }) {
   assert(contentLength !== 0 || client[kRunning] === 0, 'stream body cannot be pipelined')
+
+  if (client[kHTTPConnVersion] === 'h2') {
+    // For HTTP/2, is enough to pipe the stream
+    const pipe = pipeline(
+      body,
+      h2stream,
+      (err) => {
+        if (err) {
+          util.destroy(body, err)
+          util.destroy(h2stream, err)
+        } else {
+          request.onRequestSent()
+        }
+      }
+    )
+
+    pipe.on('data', onPipeData)
+    pipe.once('end', () => {
+      pipe.removeListener('data', onPipeData)
+      util.destroy(pipe)
+    })
+
+    function onPipeData (chunk) {
+      request.onBodySent(chunk)
+    }
+
+    return
+  }
 
   let finished = false
 
   const writer = new AsyncWriter({ socket, request, contentLength, client, expectsPayload, header })
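The content-length handling in `writeH2` above can be read as a small pure rule: only methods that anticipate a payload (PUT/POST/PATCH) get a `content-length` header, and a zero-length body never does. A distilled sketch of just that decision (the function name is an illustration, not part of the diff):

```javascript
// Returns the content-length value to send, or null to omit the header,
// following the rule in writeH2: per RFC 7230 §3.3.2, a user agent
// SHOULD NOT send Content-Length when the method semantics do not
// anticipate a body.
function h2ContentLength (method, bodyLength) {
  const expectsPayload = (
    method === 'PUT' ||
    method === 'POST' ||
    method === 'PATCH'
  )
  if (bodyLength === 0 || !expectsPayload) return null
  return bodyLength
}
```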
@@ -1572,9 +2021,10 @@ function writeStream ({ body, client, request, socket, contentLength, header, ex
     .on('error', onFinished)
 }
 
-async function writeBlob ({ body, client, request, socket, contentLength, header, expectsPayload }) {
+async function writeBlob ({ h2stream, body, client, request, socket, contentLength, header, expectsPayload }) {
   assert(contentLength === body.size, 'blob body must have content length')
 
+  const isH2 = client[kHTTPConnVersion] === 'h2'
   try {
     if (contentLength != null && contentLength !== body.size) {
       throw new RequestContentLengthMismatchError()
 
@@ -1582,10 +2032,16 @@ async function writeBlob ({ body, client, request, socket, contentLength, header
 
     const buffer = Buffer.from(await body.arrayBuffer())
 
-    socket.cork()
-    socket.write(`${header}content-length: ${contentLength}\r\n\r\n`, 'latin1')
-    socket.write(buffer)
-    socket.uncork()
+    if (isH2) {
+      h2stream.cork()
+      h2stream.write(buffer)
+      h2stream.uncork()
+    } else {
+      socket.cork()
+      socket.write(`${header}content-length: ${contentLength}\r\n\r\n`, 'latin1')
+      socket.write(buffer)
+      socket.uncork()
+    }
 
     request.onBodySent(buffer)
     request.onRequestSent()
@@ -1596,11 +2052,11 @@ async function writeBlob ({ body, client, request, socket, contentLength, header
 
     resume(client)
   } catch (err) {
-    util.destroy(socket, err)
+    util.destroy(isH2 ? h2stream : socket, err)
   }
 }
 
-async function writeIterable ({ body, client, request, socket, contentLength, header, expectsPayload }) {
+async function writeIterable ({ h2stream, body, client, request, socket, contentLength, header, expectsPayload }) {
   assert(contentLength !== 0 || client[kRunning] === 0, 'iterator body cannot be pipelined')
 
   let callback = null
 
@@ -1622,6 +2078,33 @@ async function writeIterable ({ body, client, request, socket, contentLength, he
     }
   })
 
+  if (client[kHTTPConnVersion] === 'h2') {
+    h2stream
+      .on('close', onDrain)
+      .on('drain', onDrain)
+
+    try {
+      // It's up to the user to somehow abort the async iterable.
+      for await (const chunk of body) {
+        if (socket[kError]) {
+          throw socket[kError]
+        }
+
+        if (!h2stream.write(chunk)) {
+          await waitForDrain()
+        }
+      }
+    } catch (err) {
+      h2stream.destroy(err)
+    } finally {
+      h2stream
+        .off('close', onDrain)
+        .off('drain', onDrain)
+    }
+
+    return
+  }
+
   socket
     .on('close', onDrain)
     .on('drain', onDrain)
6  deps/undici/src/lib/core/connect.js  vendored
@@ -71,7 +71,7 @@ if (global.FinalizationRegistry) {
   }
 }
 
-function buildConnector ({ maxCachedSessions, socketPath, timeout, ...opts }) {
+function buildConnector ({ allowH2, maxCachedSessions, socketPath, timeout, ...opts }) {
   if (maxCachedSessions != null && (!Number.isInteger(maxCachedSessions) || maxCachedSessions < 0)) {
     throw new InvalidArgumentError('maxCachedSessions must be a positive integer or zero')
   }
 
@@ -79,7 +79,7 @@ function buildConnector ({ maxCachedSessions, socketPath, timeout, ...opts }) {
   const options = { path: socketPath, ...opts }
   const sessionCache = new SessionCache(maxCachedSessions == null ? 100 : maxCachedSessions)
   timeout = timeout == null ? 10e3 : timeout
-
+  allowH2 = allowH2 != null ? allowH2 : false
   return function connect ({ hostname, host, protocol, port, servername, localAddress, httpSocket }, callback) {
     let socket
     if (protocol === 'https:') {
 
@@ -99,6 +99,8 @@ function buildConnector ({ maxCachedSessions, socketPath, timeout, ...opts }) {
         servername,
         session,
         localAddress,
+        // TODO(HTTP/2): Add support for h2c
+        ALPNProtocols: allowH2 ? ['http/1.1', 'h2'] : ['http/1.1'],
         socket: httpSocket, // upgrade socket connection
         port: port || 443,
         host: hostname
78  deps/undici/src/lib/core/request.js  vendored
@@ -5,6 +5,7 @@ const {
   NotSupportedError
 } = require('./errors')
 const assert = require('assert')
+const { kHTTP2BuildRequest, kHTTP2CopyHeaders, kHTTP1BuildRequest } = require('./symbols')
 const util = require('./util')
 
 // tokenRegExp and headerCharRegex have been lifted from
 
@@ -62,7 +63,8 @@ class Request {
       headersTimeout,
       bodyTimeout,
       reset,
-      throwOnError
+      throwOnError,
+      expectContinue
     }, handler) {
     if (typeof path !== 'string') {
       throw new InvalidArgumentError('path must be a string')
 
@@ -98,6 +100,10 @@ class Request {
       throw new InvalidArgumentError('invalid reset')
     }
 
+    if (expectContinue != null && typeof expectContinue !== 'boolean') {
+      throw new InvalidArgumentError('invalid expectContinue')
+    }
+
     this.headersTimeout = headersTimeout
 
     this.bodyTimeout = bodyTimeout
 
@@ -150,6 +156,9 @@ class Request {
 
     this.headers = ''
 
+    // Only for H2
+    this.expectContinue = expectContinue != null ? expectContinue : false
+
     if (Array.isArray(headers)) {
       if (headers.length % 2 !== 0) {
         throw new InvalidArgumentError('headers array must be even')
 
@@ -269,13 +278,64 @@ class Request {
     return this[kHandler].onError(error)
   }
 
+  // TODO: adjust to support H2
   addHeader (key, value) {
     processHeader(this, key, value)
     return this
   }
+
+  static [kHTTP1BuildRequest] (origin, opts, handler) {
+    // TODO: Migrate header parsing here, to make Requests
+    // HTTP agnostic
+    return new Request(origin, opts, handler)
+  }
+
+  static [kHTTP2BuildRequest] (origin, opts, handler) {
+    const headers = opts.headers
+    opts = { ...opts, headers: null }
+
+    const request = new Request(origin, opts, handler)
+
+    request.headers = {}
+
+    if (Array.isArray(headers)) {
+      if (headers.length % 2 !== 0) {
+        throw new InvalidArgumentError('headers array must be even')
+      }
+      for (let i = 0; i < headers.length; i += 2) {
+        processHeader(request, headers[i], headers[i + 1], true)
+      }
+    } else if (headers && typeof headers === 'object') {
+      const keys = Object.keys(headers)
+      for (let i = 0; i < keys.length; i++) {
+        const key = keys[i]
+        processHeader(request, key, headers[key], true)
+      }
+    } else if (headers != null) {
+      throw new InvalidArgumentError('headers must be an object or an array')
+    }
+
+    return request
+  }
+
+  static [kHTTP2CopyHeaders] (raw) {
+    const rawHeaders = raw.split('\r\n')
+    const headers = {}
+
+    for (const header of rawHeaders) {
+      const [key, value] = header.split(': ')
+
+      if (value == null || value.length === 0) continue
+
+      if (headers[key]) headers[key] += `,${value}`
+      else headers[key] = value
+    }
+
+    return headers
+  }
 }
 
-function processHeaderValue (key, val) {
+function processHeaderValue (key, val, skipAppend) {
   if (val && typeof val === 'object') {
     throw new InvalidArgumentError(`invalid ${key} header`)
   }
 
@@ -286,10 +346,10 @@ function processHeaderValue (key, val) {
     throw new InvalidArgumentError(`invalid ${key} header`)
   }
 
-  return `${key}: ${val}\r\n`
+  return skipAppend ? val : `${key}: ${val}\r\n`
 }
 
-function processHeader (request, key, val) {
+function processHeader (request, key, val, skipAppend = false) {
   if (val && (typeof val === 'object' && !Array.isArray(val))) {
     throw new InvalidArgumentError(`invalid ${key} header`)
   } else if (val === undefined) {
 
@@ -357,10 +417,16 @@ function processHeader (request, key, val) {
   } else {
     if (Array.isArray(val)) {
       for (let i = 0; i < val.length; i++) {
-        request.headers += processHeaderValue(key, val[i])
+        if (skipAppend) {
+          if (request.headers[key]) request.headers[key] += `,${processHeaderValue(key, val[i], skipAppend)}`
+          else request.headers[key] = processHeaderValue(key, val[i], skipAppend)
+        } else {
+          request.headers += processHeaderValue(key, val[i])
+        }
      }
     } else {
-      request.headers += processHeaderValue(key, val)
+      if (skipAppend) request.headers[key] = processHeaderValue(key, val, skipAppend)
+      else request.headers += processHeaderValue(key, val)
     }
   }
 }
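The `kHTTP2CopyHeaders` static above converts the HTTP/1-style raw header string into the plain object that `http2` expects, folding repeated keys into a single comma-separated value. A standalone sketch of the same transformation (the function name is an assumption for illustration):

```javascript
// Split a raw "key: value\r\n..." header string into a plain object,
// joining duplicate keys with commas as HTTP field semantics allow.
function copyHeaders (raw) {
  const headers = {}
  for (const header of raw.split('\r\n')) {
    const [key, value] = header.split(': ')

    // Skip blank lines and keys without a value.
    if (value == null || value.length === 0) continue

    if (headers[key]) headers[key] += `,${value}`
    else headers[key] = value
  }
  return headers
}
```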
8  deps/undici/src/lib/core/symbols.js  vendored
@@ -51,5 +51,11 @@ module.exports = {
   kProxy: Symbol('proxy agent options'),
   kCounter: Symbol('socket request counter'),
   kInterceptors: Symbol('dispatch interceptors'),
-  kMaxResponseSize: Symbol('max response size')
+  kMaxResponseSize: Symbol('max response size'),
+  kHTTP2Session: Symbol('http2Session'),
+  kHTTP2SessionState: Symbol('http2Session state'),
+  kHTTP2BuildRequest: Symbol('http2 build request'),
+  kHTTP1BuildRequest: Symbol('http1 build request'),
+  kHTTP2CopyHeaders: Symbol('http2 copy headers'),
+  kHTTPConnVersion: Symbol('http connection version')
 }
15  deps/undici/src/lib/core/util.js  vendored
@@ -168,7 +168,7 @@ function bodyLength (body) {
     return 0
   } else if (isStream(body)) {
     const state = body._readableState
-    return state && state.ended === true && Number.isFinite(state.length)
+    return state && state.objectMode === false && state.ended === true && Number.isFinite(state.length)
       ? state.length
       : null
   } else if (isBlobLike(body)) {
 
@@ -199,6 +199,7 @@ function destroy (stream, err) {
       // See: https://github.com/nodejs/node/pull/38505/files
       stream.socket = null
     }
+
     stream.destroy(err)
   } else if (err) {
     process.nextTick((stream, err) => {
 
@@ -218,6 +219,9 @@ function parseKeepAliveTimeout (val) {
 }
 
 function parseHeaders (headers, obj = {}) {
+  // For H2 support
+  if (!Array.isArray(headers)) return headers
+
   for (let i = 0; i < headers.length; i += 2) {
     const key = headers[i].toString().toLowerCase()
     let val = obj[key]
 
@@ -355,6 +359,12 @@ function getSocketInfo (socket) {
   }
 }
 
+async function * convertIterableToBuffer (iterable) {
+  for await (const chunk of iterable) {
+    yield Buffer.isBuffer(chunk) ? chunk : Buffer.from(chunk)
+  }
+}
+
 let ReadableStream
 function ReadableStreamFrom (iterable) {
   if (!ReadableStream) {
 
@@ -362,8 +372,7 @@ function ReadableStreamFrom (iterable) {
   }
 
   if (ReadableStream.from) {
-    // https://github.com/whatwg/streams/pull/1083
-    return ReadableStream.from(iterable)
+    return ReadableStream.from(convertIterableToBuffer(iterable))
   }
 
   let iterator
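`convertIterableToBuffer` above normalizes every chunk of an (async) iterable to a `Buffer` before the iterable is handed to `ReadableStream.from`. The same idea in a standalone, runnable form (the `collect` helper is an assumption added for demonstration):

```javascript
// Yield each chunk as a Buffer, converting strings and other
// Buffer.from-able values; pass real Buffers through untouched.
async function * toBuffers (iterable) {
  for await (const chunk of iterable) {
    yield Buffer.isBuffer(chunk) ? chunk : Buffer.from(chunk)
  }
}

// Drain the generator and concatenate the result.
async function collect (iterable) {
  const out = []
  for await (const buf of toBuffers(iterable)) out.push(buf)
  return Buffer.concat(out)
}
```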
3  deps/undici/src/lib/fetch/body.js  vendored
@@ -387,6 +387,7 @@ function bodyMixinMethods (instance) {
       try {
         busboy = Busboy({
           headers,
+          preservePath: true,
           defParamCharset: 'utf8'
         })
       } catch (err) {
 
@@ -532,7 +533,7 @@ async function specConsumeBody (object, convertBytesToJSValue, instance) {
 
   // 6. Otherwise, fully read object’s body given successSteps,
   //    errorSteps, and object’s relevant global object.
-  fullyReadBody(object[kState].body, successSteps, errorSteps)
+  await fullyReadBody(object[kState].body, successSteps, errorSteps)
 
   // 7. Return promise.
   return promise.promise
40  deps/undici/src/lib/fetch/index.js  vendored
@@ -1760,7 +1760,7 @@ async function httpNetworkFetch (
     fetchParams.controller.connection.destroy()
 
     // 2. Return the appropriate network error for fetchParams.
-    return makeAppropriateNetworkError(fetchParams)
+    return makeAppropriateNetworkError(fetchParams, err)
   }
 
   return makeNetworkError(err)
 
@@ -1979,19 +1979,37 @@ async function httpNetworkFetch (
         let location = ''
 
         const headers = new Headers()
-        for (let n = 0; n < headersList.length; n += 2) {
-          const key = headersList[n + 0].toString('latin1')
-          const val = headersList[n + 1].toString('latin1')
-
-          if (key.toLowerCase() === 'content-encoding') {
-            // https://www.rfc-editor.org/rfc/rfc7231#section-3.1.2.1
-            // "All content-coding values are case-insensitive..."
-            codings = val.toLowerCase().split(',').map((x) => x.trim()).reverse()
-          } else if (key.toLowerCase() === 'location') {
-            location = val
-          }
-
-          headers.append(key, val)
-        }
+        // For H2, the headers are a plain JS object
+        // We distinguish between them and iterate accordingly
+        if (Array.isArray(headersList)) {
+          for (let n = 0; n < headersList.length; n += 2) {
+            const key = headersList[n + 0].toString('latin1')
+            const val = headersList[n + 1].toString('latin1')
+            if (key.toLowerCase() === 'content-encoding') {
+              // https://www.rfc-editor.org/rfc/rfc7231#section-3.1.2.1
+              // "All content-coding values are case-insensitive..."
+              codings = val.toLowerCase().split(',').map((x) => x.trim())
+            } else if (key.toLowerCase() === 'location') {
+              location = val
+            }
+
+            headers.append(key, val)
+          }
+        } else {
+          const keys = Object.keys(headersList)
+          for (const key of keys) {
+            const val = headersList[key]
+            if (key.toLowerCase() === 'content-encoding') {
+              // https://www.rfc-editor.org/rfc/rfc7231#section-3.1.2.1
+              // "All content-coding values are case-insensitive..."
+              codings = val.toLowerCase().split(',').map((x) => x.trim()).reverse()
+            } else if (key.toLowerCase() === 'location') {
+              location = val
+            }
+
+            headers.append(key, val)
+          }
+        }
 
         this.body = new Readable({ read: resume })
8  deps/undici/src/lib/fetch/response.js  vendored
@@ -49,7 +49,7 @@ class Response {
   }
 
   // https://fetch.spec.whatwg.org/#dom-response-json
-  static json (data = undefined, init = {}) {
+  static json (data, init = {}) {
     webidl.argumentLengthCheck(arguments, 1, { header: 'Response.json' })
 
     if (init !== null) {
 
@@ -426,15 +426,15 @@ function filterResponse (response, type) {
 }
 
 // https://fetch.spec.whatwg.org/#appropriate-network-error
-function makeAppropriateNetworkError (fetchParams) {
+function makeAppropriateNetworkError (fetchParams, err = null) {
   // 1. Assert: fetchParams is canceled.
   assert(isCancelled(fetchParams))
 
   // 2. Return an aborted network error if fetchParams is aborted;
   //    otherwise return a network error.
   return isAborted(fetchParams)
-    ? makeNetworkError(new DOMException('The operation was aborted.', 'AbortError'))
-    : makeNetworkError('Request was cancelled.')
+    ? makeNetworkError(Object.assign(new DOMException('The operation was aborted.', 'AbortError'), { cause: err }))
+    : makeNetworkError(Object.assign(new DOMException('Request was cancelled.'), { cause: err }))
 }
 
 // https://whatpr.org/fetch/1392.html#initialize-a-response
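The `makeAppropriateNetworkError` change above threads the low-level failure through as a `cause` property on the `DOMException` the caller sees. The pattern in isolation (assuming the global `DOMException` available in recent Node versions; the sample error is invented for illustration):

```javascript
// Attach the underlying network failure as `cause` on the surfaced
// DOMException, so callers can inspect why the fetch was aborted.
const cause = new Error('socket hang up')
const abortError = Object.assign(
  new DOMException('The operation was aborted.', 'AbortError'),
  { cause }
)
```

`Object.assign` is used rather than the `Error` constructor's `cause` option because `DOMException` does not take an options bag.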
59  deps/undici/src/lib/fetch/util.js  vendored
@@ -556,16 +556,37 @@ function bytesMatch (bytes, metadataList) {
     const algorithm = item.algo
 
     // 2. Let expectedValue be the val component of item.
-    const expectedValue = item.hash
+    let expectedValue = item.hash
+
+    // See https://github.com/web-platform-tests/wpt/commit/e4c5cc7a5e48093220528dfdd1c4012dc3837a0e
+    // "be liberal with padding". This is annoying, and it's not even in the spec.
+
+    if (expectedValue.endsWith('==')) {
+      expectedValue = expectedValue.slice(0, -2)
+    }
 
     // 3. Let actualValue be the result of applying algorithm to bytes.
-    const actualValue = crypto.createHash(algorithm).update(bytes).digest('base64')
+    let actualValue = crypto.createHash(algorithm).update(bytes).digest('base64')
+
+    if (actualValue.endsWith('==')) {
+      actualValue = actualValue.slice(0, -2)
+    }
 
     // 4. If actualValue is a case-sensitive match for expectedValue,
     //    return true.
     if (actualValue === expectedValue) {
       return true
     }
+
+    let actualBase64URL = crypto.createHash(algorithm).update(bytes).digest('base64url')
+
+    if (actualBase64URL.endsWith('==')) {
+      actualBase64URL = actualBase64URL.slice(0, -2)
+    }
+
+    if (actualBase64URL === expectedValue) {
+      return true
+    }
   }
 
   // 6. Return false.
 
@@ -812,17 +833,17 @@ function iteratorResult (pair, kind) {
 /**
  * @see https://fetch.spec.whatwg.org/#body-fully-read
  */
-function fullyReadBody (body, processBody, processBodyError) {
+async function fullyReadBody (body, processBody, processBodyError) {
   // 1. If taskDestination is null, then set taskDestination to
   //    the result of starting a new parallel queue.
 
   // 2. Let successSteps given a byte sequence bytes be to queue a
   //    fetch task to run processBody given bytes, with taskDestination.
-  const successSteps = (bytes) => queueMicrotask(() => processBody(bytes))
+  const successSteps = processBody
 
   // 3. Let errorSteps be to queue a fetch task to run processBodyError,
   //    with taskDestination.
-  const errorSteps = (error) => queueMicrotask(() => processBodyError(error))
+  const errorSteps = processBodyError
 
   // 4. Let reader be the result of getting a reader for body’s stream.
   //    If that threw an exception, then run errorSteps with that
 
@@ -837,7 +858,12 @@ function fullyReadBody (body, processBody, processBodyError) {
   }
 
   // 5. Read all bytes from reader, given successSteps and errorSteps.
-  readAllBytes(reader, successSteps, errorSteps)
+  try {
+    const result = await readAllBytes(reader)
+    successSteps(result)
+  } catch (e) {
+    errorSteps(e)
+  }
 }
 
 /** @type {ReadableStream} */
 
@@ -906,36 +932,23 @@ function isomorphicEncode (input) {
  * @see https://streams.spec.whatwg.org/#readablestreamdefaultreader-read-all-bytes
  * @see https://streams.spec.whatwg.org/#read-loop
  * @param {ReadableStreamDefaultReader} reader
- * @param {(bytes: Uint8Array) => void} successSteps
- * @param {(error: Error) => void} failureSteps
 */
-async function readAllBytes (reader, successSteps, failureSteps) {
+async function readAllBytes (reader) {
   const bytes = []
   let byteLength = 0
 
   while (true) {
-    let done
-    let chunk
-
-    try {
-      ({ done, value: chunk } = await reader.read())
-    } catch (e) {
-      // 1. Call failureSteps with e.
-      failureSteps(e)
-      return
-    }
+    const { done, value: chunk } = await reader.read()
 
     if (done) {
       // 1. Call successSteps with bytes.
-      successSteps(Buffer.concat(bytes, byteLength))
-      return
+      return Buffer.concat(bytes, byteLength)
     }
 
     // 1. If chunk is not a Uint8Array object, call failureSteps
     //    with a TypeError and abort these steps.
     if (!isUint8Array(chunk)) {
-      failureSteps(new TypeError('Received non-Uint8Array chunk'))
-      return
+      throw new TypeError('Received non-Uint8Array chunk')
     }
 
     // 2. Append the bytes represented by chunk to bytes.
4  deps/undici/src/lib/pool.js  vendored
@@ -34,6 +34,7 @@ class Pool extends PoolBase {
     socketPath,
     autoSelectFamily,
     autoSelectFamilyAttemptTimeout,
+    allowH2,
     ...options
   } = {}) {
     super()
 
@@ -54,6 +55,7 @@ class Pool extends PoolBase {
       connect = buildConnector({
         ...tls,
         maxCachedSessions,
+        allowH2,
         socketPath,
         timeout: connectTimeout == null ? 10e3 : connectTimeout,
         ...(util.nodeHasAutoSelectFamily && autoSelectFamily ? { autoSelectFamily, autoSelectFamilyAttemptTimeout } : undefined),
 
@@ -66,7 +68,7 @@ class Pool extends PoolBase {
       : []
     this[kConnections] = connections || null
     this[kUrl] = util.parseOrigin(origin)
-    this[kOptions] = { ...util.deepClone(options), connect }
+    this[kOptions] = { ...util.deepClone(options), connect, allowH2 }
     this[kOptions].interceptors = options.interceptors
       ? { ...options.interceptors }
       : undefined
13  deps/undici/src/lib/websocket/connection.js  vendored
@@ -1,6 +1,5 @@
 'use strict'

-const { randomBytes, createHash } = require('crypto')
 const diagnosticsChannel = require('diagnostics_channel')
 const { uid, states } = require('./constants')
 const {
@@ -22,6 +21,14 @@ channels.open = diagnosticsChannel.channel('undici:websocket:open')
 channels.close = diagnosticsChannel.channel('undici:websocket:close')
 channels.socketError = diagnosticsChannel.channel('undici:websocket:socket_error')

+/** @type {import('crypto')} */
+let crypto
+try {
+  crypto = require('crypto')
+} catch {
+
+}
+
 /**
  * @see https://websockets.spec.whatwg.org/#concept-websocket-establish
  * @param {URL} url
@@ -66,7 +73,7 @@ function establishWebSocketConnection (url, protocols, ws, onEstablish, options)
   // 5. Let keyValue be a nonce consisting of a randomly selected
   // 16-byte value that has been forgiving-base64-encoded and
   // isomorphic encoded.
-  const keyValue = randomBytes(16).toString('base64')
+  const keyValue = crypto.randomBytes(16).toString('base64')

   // 6. Append (`Sec-WebSocket-Key`, keyValue) to request’s
   // header list.
@@ -148,7 +155,7 @@ function establishWebSocketConnection (url, protocols, ws, onEstablish, options)
     // trailing whitespace, the client MUST _Fail the WebSocket
     // Connection_.
     const secWSAccept = response.headersList.get('Sec-WebSocket-Accept')
-    const digest = createHash('sha1').update(keyValue + uid).digest('base64')
+    const digest = crypto.createHash('sha1').update(keyValue + uid).digest('base64')
     if (secWSAccept !== digest) {
       failWebsocketConnection(ws, 'Incorrect hash received in Sec-WebSocket-Accept header.')
       return
11 deps/undici/src/lib/websocket/frame.js vendored
@@ -1,15 +1,22 @@
 'use strict'

-const { randomBytes } = require('crypto')
 const { maxUnsigned16Bit } = require('./constants')

+/** @type {import('crypto')} */
+let crypto
+try {
+  crypto = require('crypto')
+} catch {
+
+}
+
 class WebsocketFrameSend {
   /**
    * @param {Buffer|undefined} data
    */
   constructor (data) {
     this.frameData = data
-    this.maskKey = randomBytes(4)
+    this.maskKey = crypto.randomBytes(4)
   }

   createFrame (opcode) {
37 deps/undici/src/lib/websocket/websocket.js vendored
@@ -3,6 +3,7 @@
 const { webidl } = require('../fetch/webidl')
 const { DOMException } = require('../fetch/constants')
 const { URLSerializer } = require('../fetch/dataURL')
+const { getGlobalOrigin } = require('../fetch/global')
 const { staticPropertyDescriptors, states, opcodes, emptyBuffer } = require('./constants')
 const {
   kWebSocketURL,
@@ -57,18 +58,28 @@ class WebSocket extends EventTarget {
     url = webidl.converters.USVString(url)
     protocols = options.protocols

-    // 1. Let urlRecord be the result of applying the URL parser to url.
+    // 1. Let baseURL be this's relevant settings object's API base URL.
+    const baseURL = getGlobalOrigin()
+
+    // 1. Let urlRecord be the result of applying the URL parser to url with baseURL.
     let urlRecord

     try {
-      urlRecord = new URL(url)
+      urlRecord = new URL(url, baseURL)
     } catch (e) {
-      // 2. If urlRecord is failure, then throw a "SyntaxError" DOMException.
+      // 3. If urlRecord is failure, then throw a "SyntaxError" DOMException.
       throw new DOMException(e, 'SyntaxError')
     }

-    // 3. If urlRecord’s scheme is not "ws" or "wss", then throw a
-    //    "SyntaxError" DOMException.
+    // 4. If urlRecord’s scheme is "http", then set urlRecord’s scheme to "ws".
+    if (urlRecord.protocol === 'http:') {
+      urlRecord.protocol = 'ws:'
+    } else if (urlRecord.protocol === 'https:') {
+      // 5. Otherwise, if urlRecord’s scheme is "https", set urlRecord’s scheme to "wss".
+      urlRecord.protocol = 'wss:'
+    }
+
+    // 6. If urlRecord’s scheme is not "ws" or "wss", then throw a "SyntaxError" DOMException.
     if (urlRecord.protocol !== 'ws:' && urlRecord.protocol !== 'wss:') {
       throw new DOMException(
         `Expected a ws: or wss: protocol, got ${urlRecord.protocol}`,
@@ -76,19 +87,19 @@ class WebSocket extends EventTarget {
       )
     }

-    // 4. If urlRecord’s fragment is non-null, then throw a "SyntaxError"
+    // 7. If urlRecord’s fragment is non-null, then throw a "SyntaxError"
     //    DOMException.
-    if (urlRecord.hash) {
+    if (urlRecord.hash || urlRecord.href.endsWith('#')) {
       throw new DOMException('Got fragment', 'SyntaxError')
     }

-    // 5. If protocols is a string, set protocols to a sequence consisting
+    // 8. If protocols is a string, set protocols to a sequence consisting
     //    of just that string.
     if (typeof protocols === 'string') {
       protocols = [protocols]
     }

-    // 6. If any of the values in protocols occur more than once or otherwise
+    // 9. If any of the values in protocols occur more than once or otherwise
     //    fail to match the requirements for elements that comprise the value
     //    of `Sec-WebSocket-Protocol` fields as defined by The WebSocket
     //    protocol, then throw a "SyntaxError" DOMException.
@@ -100,12 +111,12 @@ class WebSocket extends EventTarget {
       throw new DOMException('Invalid Sec-WebSocket-Protocol value', 'SyntaxError')
     }

-    // 7. Set this's url to urlRecord.
-    this[kWebSocketURL] = urlRecord
+    // 10. Set this's url to urlRecord.
+    this[kWebSocketURL] = new URL(urlRecord.href)

-    // 8. Let client be this's relevant settings object.
+    // 11. Let client be this's relevant settings object.

-    // 9. Run this step in parallel:
+    // 12. Run this step in parallel:

     //   1. Establish a WebSocket connection given urlRecord, protocols,
     //      and client.
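The constructor changes above follow the updated WHATWG steps: parse the URL relative to the global origin, rewrite `http`/`https` schemes to `ws`/`wss`, and reject fragments (including a bare trailing `#`). A standalone sketch of that normalization (the `normalizeWebSocketUrl` helper is illustrative, not undici's API, and uses plain `SyntaxError` instead of `DOMException`):

```javascript
// Illustrative sketch of the URL normalization in the diff above.
function normalizeWebSocketUrl (url, baseURL) {
  let urlRecord
  try {
    // Parse relative to baseURL, mirroring the "with baseURL" spec step.
    urlRecord = new URL(url, baseURL)
  } catch (e) {
    throw new SyntaxError(String(e))
  }
  // http/https are rewritten to their WebSocket equivalents.
  if (urlRecord.protocol === 'http:') urlRecord.protocol = 'ws:'
  else if (urlRecord.protocol === 'https:') urlRecord.protocol = 'wss:'
  if (urlRecord.protocol !== 'ws:' && urlRecord.protocol !== 'wss:') {
    throw new SyntaxError(`Expected a ws: or wss: protocol, got ${urlRecord.protocol}`)
  }
  // An empty fragment ("#") is rejected just like a non-empty one.
  if (urlRecord.hash || urlRecord.href.endsWith('#')) {
    throw new SyntaxError('Got fragment')
  }
  return urlRecord
}

console.log(normalizeWebSocketUrl('/chat', 'https://example.com').href)
// → 'wss://example.com/chat'
```

Relative URLs like `/chat` were previously rejected outright; resolving against the global origin is what makes them work in the new code.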
38 deps/undici/src/package.json vendored
@@ -1,6 +1,6 @@
 {
   "name": "undici",
-  "version": "5.23.0",
+  "version": "5.25.2",
   "description": "An HTTP/1.1 client, written from scratch for Node.js",
   "homepage": "https://undici.nodejs.org",
   "bugs": {
@@ -11,12 +11,41 @@
     "url": "git+https://github.com/nodejs/undici.git"
   },
   "license": "MIT",
-  "author": "Matteo Collina <hello@matteocollina.com>",
+  "contributors": [
+    {
+      "name": "Daniele Belardi",
+      "url": "https://github.com/dnlup",
+      "author": true
+    },
+    {
+      "name": "Ethan Arrowood",
+      "url": "https://github.com/ethan-arrowood",
+      "author": true
+    },
+    {
+      "name": "Matteo Collina",
+      "url": "https://github.com/mcollina",
+      "author": true
+    },
+    {
+      "name": "Matthew Aitken",
+      "url": "https://github.com/KhafraDev",
+      "author": true
+    },
+    {
+      "name": "Robert Nagy",
+      "url": "https://github.com/ronag",
+      "author": true
+    },
+    {
+      "name": "Szymon Marczak",
+      "url": "https://github.com/szmarczak",
+      "author": true
+    },
+    {
+      "name": "Tomas Della Vedova",
+      "url": "https://github.com/delvedor",
+      "author": true
+    }
+  ],
   "keywords": [
@@ -64,10 +93,11 @@
     "bench:run": "CONNECTIONS=1 node benchmarks/benchmark.js; CONNECTIONS=50 node benchmarks/benchmark.js",
     "serve:website": "docsify serve .",
     "prepare": "husky install",
+    "postpublish": "node scripts/update-undici-types-version.js && cd types && npm publish",
     "fuzz": "jsfuzz test/fuzzing/fuzz.js corpus"
   },
   "devDependencies": {
-    "@sinonjs/fake-timers": "^10.0.2",
+    "@sinonjs/fake-timers": "^11.1.0",
     "@types/node": "^18.0.3",
     "abort-controller": "^3.0.0",
     "atomic-sleep": "^1.0.0",
@@ -98,7 +128,7 @@
     "standard": "^17.0.0",
     "table": "^6.8.0",
     "tap": "^16.1.0",
-    "tsd": "^0.28.1",
+    "tsd": "^0.29.0",
     "typescript": "^5.0.2",
     "wait-on": "^7.0.1",
     "ws": "^8.11.0"
6 deps/undici/src/types/README.md vendored (new file)
@@ -0,0 +1,6 @@
+# undici-types
+
+This package is a dual-publish of the [undici](https://www.npmjs.com/package/undici) library types. The `undici` package **still contains types**. This package is for users who _only_ need undici types (such as for `@types/node`). It is published alongside every release of `undici`, so you can always use the same version.
+
+- [GitHub nodejs/undici](https://github.com/nodejs/undici)
+- [Undici Documentation](https://undici.nodejs.org/#/)
23 deps/undici/src/types/client.d.ts vendored
@@ -1,7 +1,6 @@
 import { URL } from 'url'
 import { TlsOptions } from 'tls'
 import Dispatcher from './dispatcher'
-import DispatchInterceptor from './dispatcher'
 import buildConnector from "./connector";

 /**
@@ -19,14 +18,14 @@ export class Client extends Dispatcher {

 export declare namespace Client {
   export interface OptionsInterceptors {
-    Client: readonly DispatchInterceptor[];
+    Client: readonly Dispatcher.DispatchInterceptor[];
   }
   export interface Options {
     /** TODO */
     interceptors?: OptionsInterceptors;
     /** The maximum length of request headers in bytes. Default: `16384` (16KiB). */
     maxHeaderSize?: number;
-    /** The amount of time the parser will wait to receive the complete HTTP headers (Node 14 and above only). Default: `300e3` milliseconds (300s). */
+    /** The amount of time, in milliseconds, the parser will wait to receive the complete HTTP headers (Node 14 and above only). Default: `300e3` milliseconds (300s). */
     headersTimeout?: number;
     /** @deprecated unsupported socketTimeout, use headersTimeout & bodyTimeout instead */
     socketTimeout?: never;
@@ -40,13 +39,13 @@ export declare namespace Client {
     idleTimeout?: never;
     /** @deprecated unsupported keepAlive, use pipelining=0 instead */
     keepAlive?: never;
-    /** the timeout after which a socket without active requests will time out. Monitors time between activity on a connected socket. This value may be overridden by *keep-alive* hints from the server. Default: `4e3` milliseconds (4s). */
+    /** the timeout, in milliseconds, after which a socket without active requests will time out. Monitors time between activity on a connected socket. This value may be overridden by *keep-alive* hints from the server. Default: `4e3` milliseconds (4s). */
     keepAliveTimeout?: number;
     /** @deprecated unsupported maxKeepAliveTimeout, use keepAliveMaxTimeout instead */
     maxKeepAliveTimeout?: never;
-    /** the maximum allowed `idleTimeout` when overridden by *keep-alive* hints from the server. Default: `600e3` milliseconds (10min). */
+    /** the maximum allowed `idleTimeout`, in milliseconds, when overridden by *keep-alive* hints from the server. Default: `600e3` milliseconds (10min). */
     keepAliveMaxTimeout?: number;
-    /** A number subtracted from server *keep-alive* hints when overriding `idleTimeout` to account for timing inaccuracies caused by e.g. transport latency. Default: `1e3` milliseconds (1s). */
+    /** A number of milliseconds subtracted from server *keep-alive* hints when overriding `idleTimeout` to account for timing inaccuracies caused by e.g. transport latency. Default: `1e3` milliseconds (1s). */
     keepAliveTimeoutThreshold?: number;
     /** TODO */
     socketPath?: string;
@@ -71,7 +70,17 @@ export declare namespace Client {
     /** Enables a family autodetection algorithm that loosely implements section 5 of RFC 8305. */
     autoSelectFamily?: boolean;
     /** The amount of time in milliseconds to wait for a connection attempt to finish before trying the next address when using the `autoSelectFamily` option. */
-    autoSelectFamilyAttemptTimeout?: number;
+    autoSelectFamilyAttemptTimeout?: number;
+    /**
+     * @description Enables support for H2 if the server has assigned bigger priority to it through ALPN negotiation.
+     * @default false
+     */
+    allowH2?: boolean;
+    /**
+     * @description Dictates the maximum number of concurrent streams for a single H2 session. It can be overriden by a SETTINGS remote frame.
+     * @default 100
+     */
+    maxConcurrentStreams?: number
   }
   export interface SocketInfo {
     localAddress?: string
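The reworded doc comments above pin every keep-alive option to milliseconds. How the three options interact can be sketched as follows (illustrative arithmetic only, with a hypothetical `effectiveKeepAlive` helper, not undici's internal scheduling):

```javascript
// Hypothetical helper showing the documented relationship between the
// keep-alive options (all values in milliseconds).
function effectiveKeepAlive (serverHintMs, {
  keepAliveTimeout = 4e3,          // default idle timeout (4s)
  keepAliveMaxTimeout = 600e3,     // cap on server-hinted timeouts (10min)
  keepAliveTimeoutThreshold = 1e3  // safety margin for transport latency (1s)
} = {}) {
  // Without a server keep-alive hint, the plain idle timeout applies.
  if (serverHintMs == null) return keepAliveTimeout
  // A server hint overrides it, minus the threshold,
  // but never beyond the configured maximum.
  return Math.min(serverHintMs - keepAliveTimeoutThreshold, keepAliveMaxTimeout)
}

console.log(effectiveKeepAlive(10e3)) // 9000
console.log(effectiveKeepAlive(null)) // 4000
```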
4 deps/undici/src/types/dispatcher.d.ts vendored
@@ -109,7 +109,7 @@ declare namespace Dispatcher {
     blocking?: boolean;
     /** Upgrade the request. Should be used to specify the kind of upgrade i.e. `'Websocket'`. Default: `method === 'CONNECT' || null`. */
     upgrade?: boolean | string | null;
-    /** The amount of time the parser will wait to receive the complete HTTP headers. Defaults to 300 seconds. */
+    /** The amount of time, in milliseconds, the parser will wait to receive the complete HTTP headers. Defaults to 300 seconds. */
     headersTimeout?: number | null;
     /** The timeout after which a request will time out, in milliseconds. Monitors time between receiving body data. Use 0 to disable it entirely. Defaults to 300 seconds. */
     bodyTimeout?: number | null;
@@ -117,6 +117,8 @@ declare namespace Dispatcher {
     reset?: boolean;
     /** Whether Undici should throw an error upon receiving a 4xx or 5xx response from the server. Defaults to false */
     throwOnError?: boolean;
+    /** For H2, it appends the expect: 100-continue header, and halts the request body until a 100-continue is received from the remote server*/
+    expectContinue?: boolean;
   }
   export interface ConnectOptions {
     path: string;
57 deps/undici/src/types/index.d.ts vendored (new file)
@@ -0,0 +1,57 @@
+import Dispatcher from'./dispatcher'
+import { setGlobalDispatcher, getGlobalDispatcher } from './global-dispatcher'
+import { setGlobalOrigin, getGlobalOrigin } from './global-origin'
+import Pool from'./pool'
+import { RedirectHandler, DecoratorHandler } from './handlers'
+
+import BalancedPool from './balanced-pool'
+import Client from'./client'
+import buildConnector from'./connector'
+import errors from'./errors'
+import Agent from'./agent'
+import MockClient from'./mock-client'
+import MockPool from'./mock-pool'
+import MockAgent from'./mock-agent'
+import mockErrors from'./mock-errors'
+import ProxyAgent from'./proxy-agent'
+import { request, pipeline, stream, connect, upgrade } from './api'
+
+export * from './cookies'
+export * from './fetch'
+export * from './file'
+export * from './filereader'
+export * from './formdata'
+export * from './diagnostics-channel'
+export * from './websocket'
+export * from './content-type'
+export * from './cache'
+export { Interceptable } from './mock-interceptor'
+
+export { Dispatcher, BalancedPool, Pool, Client, buildConnector, errors, Agent, request, stream, pipeline, connect, upgrade, setGlobalDispatcher, getGlobalDispatcher, setGlobalOrigin, getGlobalOrigin, MockClient, MockPool, MockAgent, mockErrors, ProxyAgent, RedirectHandler, DecoratorHandler }
+export default Undici
+
+declare namespace Undici {
+  var Dispatcher: typeof import('./dispatcher').default
+  var Pool: typeof import('./pool').default;
+  var RedirectHandler: typeof import ('./handlers').RedirectHandler
+  var DecoratorHandler: typeof import ('./handlers').DecoratorHandler
+  var createRedirectInterceptor: typeof import ('./interceptors').createRedirectInterceptor
+  var BalancedPool: typeof import('./balanced-pool').default;
+  var Client: typeof import('./client').default;
+  var buildConnector: typeof import('./connector').default;
+  var errors: typeof import('./errors').default;
+  var Agent: typeof import('./agent').default;
+  var setGlobalDispatcher: typeof import('./global-dispatcher').setGlobalDispatcher;
+  var getGlobalDispatcher: typeof import('./global-dispatcher').getGlobalDispatcher;
+  var request: typeof import('./api').request;
+  var stream: typeof import('./api').stream;
+  var pipeline: typeof import('./api').pipeline;
+  var connect: typeof import('./api').connect;
+  var upgrade: typeof import('./api').upgrade;
+  var MockClient: typeof import('./mock-client').default;
+  var MockPool: typeof import('./mock-pool').default;
+  var MockAgent: typeof import('./mock-agent').default;
+  var mockErrors: typeof import('./mock-errors').default;
+  var fetch: typeof import('./fetch').fetch;
+  var caches: typeof import('./cache').caches;
+}
55 deps/undici/src/types/package.json vendored (new file)
@@ -0,0 +1,55 @@
+{
+  "name": "undici-types",
+  "version": "5.25.1",
+  "description": "A stand-alone types package for Undici",
+  "homepage": "https://undici.nodejs.org",
+  "bugs": {
+    "url": "https://github.com/nodejs/undici/issues"
+  },
+  "repository": {
+    "type": "git",
+    "url": "git+https://github.com/nodejs/undici.git"
+  },
+  "license": "MIT",
+  "types": "index.d.ts",
+  "files": [
+    "*.d.ts"
+  ],
+  "contributors": [
+    {
+      "name": "Daniele Belardi",
+      "url": "https://github.com/dnlup",
+      "author": true
+    },
+    {
+      "name": "Ethan Arrowood",
+      "url": "https://github.com/ethan-arrowood",
+      "author": true
+    },
+    {
+      "name": "Matteo Collina",
+      "url": "https://github.com/mcollina",
+      "author": true
+    },
+    {
+      "name": "Matthew Aitken",
+      "url": "https://github.com/KhafraDev",
+      "author": true
+    },
+    {
+      "name": "Robert Nagy",
+      "url": "https://github.com/ronag",
+      "author": true
+    },
+    {
+      "name": "Szymon Marczak",
+      "url": "https://github.com/szmarczak",
+      "author": true
+    },
+    {
+      "name": "Tomas Della Vedova",
+      "url": "https://github.com/delvedor",
+      "author": true
+    }
+  ]
+}
1804 deps/undici/undici.js vendored (file diff suppressed because it is too large)
@@ -28,7 +28,7 @@ This a list of all the dependencies:
 * [openssl 3.0.8][]
 * [postject 1.0.0-alpha.6][]
 * [simdutf 3.2.17][]
-* [undici 5.23.0][]
+* [undici 5.25.2][]
 * [uvwasi 0.0.16][]
 * [V8 11.3.244.8][]
 * [zlib 1.2.13.1-motley-f5fd0ad][]
@@ -291,7 +291,7 @@ The [postject](https://github.com/nodejs/postject) dependency is used for the
 The [simdutf](https://github.com/simdutf/simdutf) dependency is
 a C++ library for fast UTF-8 decoding and encoding.

-### undici 5.23.0
+### undici 5.25.2

 The [undici](https://github.com/nodejs/undici) dependency is an HTTP/1.1 client,
 written from scratch for Node.js..
@@ -345,7 +345,7 @@ performance improvements not currently available in standard zlib.
 [openssl 3.0.8]: #openssl-308
 [postject 1.0.0-alpha.6]: #postject-100-alpha6
 [simdutf 3.2.17]: #simdutf-3217
-[undici 5.23.0]: #undici-5230
+[undici 5.25.2]: #undici-5252
 [update-openssl-action]: ../../../.github/workflows/update-openssl.yml
 [uvwasi 0.0.16]: #uvwasi-0016
 [v8 11.3.244.8]: #v8-1132448
@@ -2,5 +2,5 @@
 // Refer to tools/update-undici.sh
 #ifndef SRC_UNDICI_VERSION_H_
 #define SRC_UNDICI_VERSION_H_
-#define UNDICI_VERSION "5.23.0"
+#define UNDICI_VERSION "5.25.2"
 #endif  // SRC_UNDICI_VERSION_H_