Lines Matching full:just

103               "desc": "<p>Node.js uses statically linked libraries such as V8, libuv, and OpenSSL. All\naddons are required to link to V8 and may link to any of the other dependencies\nas well. Typically, this is as simple as including the appropriate\n<code>#include &#x3C;...></code> statements (e.g. <code>#include &#x3C;v8.h></code>) and <code>node-gyp</code> will locate\nthe appropriate headers automatically. However, there are a few caveats to be\naware of:</p>\n<ul>\n<li>\n<p>When <code>node-gyp</code> runs, it will detect the specific release version of Node.js\nand download either the full source tarball or just the headers. If the full\nsource is downloaded, addons will have complete access to the full set of\nNode.js dependencies. However, if only the Node.js headers are downloaded,\nthen only the symbols exported by Node.js will be available.</p>\n</li>\n<li>\n<p><code>node-gyp</code> can be run using the <code>--nodedir</code> flag pointing at a local Node.js\nsource image. Using this option, the addon will have access to the full set of\ndependencies.</p>\n</li>\n</ul>",
202 "desc": "<p>Node-API (formerly N-API) is an API for building native Addons. It is\nindependent from the underlying JavaScript runtime (for example, V8) and is\nmaintained as part of Node.js itself. This API will be Application Binary\nInterface (ABI) stable across versions of Node.js. It is intended to insulate\naddons from changes in the underlying JavaScript engine and allow modules\ncompiled for one major version to run on later major versions of Node.js without\nrecompilation. The <a href=\"https://nodejs.org/en/docs/guides/abi-stability/\">ABI Stability</a> guide provides a more in-depth explanation.</p>\n<p>Addons are built/packaged with the same approach/tools outlined in the section\ntitled <a href=\"addons.html\">C++ Addons</a>. The only difference is the set of APIs that are used by\nthe native code. Instead of using the V8 or <a href=\"https://github.com/nodejs/nan\">Native Abstractions for Node.js</a>\nAPIs, the functions available in Node-API are used.</p>\n<p>APIs exposed by Node-API are generally used to create and manipulate\nJavaScript values. Concepts and operations generally map to ideas specified\nin the ECMA-262 Language Specification. The APIs have the following\nproperties:</p>\n<ul>\n<li>All Node-API calls return a status code of type <code>napi_status</code>. This\nstatus indicates whether the API call succeeded or failed.</li>\n<li>The API's return value is passed via an out parameter.</li>\n<li>All JavaScript values are abstracted behind an opaque type named\n<code>napi_value</code>.</li>\n<li>In case of an error status code, additional information can be obtained\nusing <code>napi_get_last_error_info</code>. More information can be found in the error\nhandling section <a href=\"n-api.html#error-handling\">Error handling</a>.</li>\n</ul>\n<p>Node-API is a C API that ensures ABI stability across Node.js versions\nand different compiler levels. A C++ API can be easier to use.\nTo support using C++, the project maintains a\nC++ wrapper module called <a href=\"https://github.com/nodejs/node-addon-api\"><code>node-addon-api</code></a>.\nThis wrapper provides an inlinable C++ API. Binaries built\nwith <code>node-addon-api</code> will depend on the symbols for the Node-API C-based\nfunctions exported by Node.js. <code>node-addon-api</code> is a more\nefficient way to write code that calls Node-API. Take, for example, the\nfollowing <code>node-addon-api</code> code. The first section shows the\n<code>node-addon-api</code> code and the second section shows what actually gets\nused in the addon.</p>\n<pre><code class=\"language-cpp\">Object obj = Object::New(env);\nobj[\"foo\"] = String::New(env, \"bar\");\n</code></pre>\n<pre><code class=\"language-cpp\">napi_status status;\nnapi_value object, string;\nstatus = napi_create_object(env, &#x26;object);\nif (status != napi_ok) {\n napi_throw_error(env, ...);\n return;\n}\n\nstatus = napi_create_string_utf8(env, \"bar\", NAPI_AUTO_LENGTH, &#x26;string);\nif (status != napi_ok) {\n napi_throw_error(env, ...);\n return;\n}\n\nstatus = napi_set_named_property(env, object, \"foo\", string);\nif (status != napi_ok) {\n napi_throw_error(env, ...);\n return;\n}\n</code></pre>\n<p>The end result is that the addon only uses the exported C APIs. 
As a result,\nit still gets the benefits of the ABI stability provided by the C API.</p>\n<p>When using <code>node-addon-api</code> instead of the C APIs, start with the API <a href=\"https://github.com/nodejs/node-addon-api#api-documentation\">docs</a>\nfor <code>node-addon-api</code>.</p>\n<p>The <a href=\"https://nodejs.github.io/node-addon-examples/\">Node-API Resource</a> offers\nan excellent orientation and tips for developers just getting started with\nNode-API and <code>node-addon-api</code>. Additional media resources can be found on the\n<a href=\"https://github.com/nodejs/abi-stable-node/blob/HEAD/node-api-media.md\">Node-API Media</a> page.</p>",
276 "desc": "<p>In order to use the Node-API functions, include the file <a href=\"https://github.com/nodejs/node/blob/HEAD/src/node_api.h\"><code>node_api.h</code></a> which\nis located in the src directory in the node development tree:</p>\n<pre><code class=\"language-c\">#include &#x3C;node_api.h>\n</code></pre>\n<p>This will opt into the default <code>NAPI_VERSION</code> for the given release of Node.js.\nIn order to ensure compatibility with specific versions of Node-API, the version\ncan be specified explicitly when including the header:</p>\n<pre><code class=\"language-c\">#define NAPI_VERSION 3\n#include &#x3C;node_api.h>\n</code></pre>\n<p>This restricts the Node-API surface to just the functionality that was available\nin the specified (and earlier) versions.</p>\n<p>Some of the Node-API surface is experimental and requires explicit opt-in:</p>\n<pre><code class=\"language-c\">#define NAPI_EXPERIMENTAL\n#include &#x3C;node_api.h>\n</code></pre>\n<p>In this case the entire API surface, including any experimental APIs, will be\navailable to the module code.</p>\n<p>Occasionally, experimental features are introduced that affect already-released\nand stable APIs. These features can be disabled by an opt-out:</p>\n<pre><code class=\"language-c\">#define NAPI_EXPERIMENTAL\n#define NODE_API_EXPERIMENTAL_&#x3C;FEATURE_NAME>_OPT_OUT\n#include &#x3C;node_api.h>\n</code></pre>\n<p>where <code>&#x3C;FEATURE_NAME></code> is the name of an experimental feature that affects both\nexperimental and stable APIs.</p>",
1441 "desc": "<pre><code class=\"language-c\">napi_status napi_create_external(napi_env env,\n void* data,\n napi_finalize finalize_cb,\n void* finalize_hint,\n napi_value* result)\n</code></pre>\n<ul>\n<li><code>[in] env</code>: The environment that the API is invoked under.</li>\n<li><code>[in] data</code>: Raw pointer to the external data.</li>\n<li><code>[in] finalize_cb</code>: Optional callback to call when the external value is being\ncollected. <a href=\"n-api.html#napi_finalize\"><code>napi_finalize</code></a> provides more details.</li>\n<li><code>[in] finalize_hint</code>: Optional hint to pass to the finalize callback during\ncollection.</li>\n<li><code>[out] result</code>: A <code>napi_value</code> representing an external value.</li>\n</ul>\n<p>Returns <code>napi_ok</code> if the API succeeded.</p>\n<p>This API allocates a JavaScript value with external data attached to it. This\nis used to pass external data through JavaScript code, so it can be retrieved\nlater by native code using <a href=\"n-api.html#napi_get_value_external\"><code>napi_get_value_external</code></a>.</p>\n<p>The API adds a <code>napi_finalize</code> callback which will be called when the JavaScript\nobject just created has been garbage collected.</p>\n<p>The created value is not an object, and therefore does not support additional\nproperties. It is considered a distinct value type: calling <code>napi_typeof()</code> with\nan external value yields <code>napi_external</code>.</p>",
1457 "desc": "<pre><code class=\"language-c\">napi_status\nnapi_create_external_arraybuffer(napi_env env,\n void* external_data,\n size_t byte_length,\n napi_finalize finalize_cb,\n void* finalize_hint,\n napi_value* result)\n</code></pre>\n<ul>\n<li><code>[in] env</code>: The environment that the API is invoked under.</li>\n<li><code>[in] external_data</code>: Pointer to the underlying byte buffer of the\n<code>ArrayBuffer</code>.</li>\n<li><code>[in] byte_length</code>: The length in bytes of the underlying buffer.</li>\n<li><code>[in] finalize_cb</code>: Optional callback to call when the <code>ArrayBuffer</code> is being\ncollected. <a href=\"n-api.html#napi_finalize\"><code>napi_finalize</code></a> provides more details.</li>\n<li><code>[in] finalize_hint</code>: Optional hint to pass to the finalize callback during\ncollection.</li>\n<li><code>[out] result</code>: A <code>napi_value</code> representing a JavaScript <code>ArrayBuffer</code>.</li>\n</ul>\n<p>Returns <code>napi_ok</code> if the API succeeded.</p>\n<p><strong>Some runtimes other than Node.js have dropped support for external buffers</strong>.\nOn runtimes other than Node.js this method may return\n<code>napi_no_external_buffers_allowed</code> to indicate that external\nbuffers are not supported. One such runtime is Electron as\ndescribed in this issue\n<a href=\"https://github.com/electron/electron/issues/35801\">electron/issues/35801</a>.</p>\n<p>In order to maintain broadest compatibility with all runtimes\nyou may define <code>NODE_API_NO_EXTERNAL_BUFFERS_ALLOWED</code> in your addon before\nincludes for the node-api headers. Doing so will hide the 2 functions\nthat create external buffers. This will ensure a compilation error\noccurs if you accidentally use one of these methods.</p>\n<p>This API returns a Node-API value corresponding to a JavaScript <code>ArrayBuffer</code>.\nThe underlying byte buffer of the <code>ArrayBuffer</code> is externally allocated and\nmanaged. The caller must ensure that the byte buffer remains valid until the\nfinalize callback is called.</p>\n<p>The API adds a <code>napi_finalize</code> callback which will be called when the JavaScript\nobject just created has been garbage collected.</p>\n<p>JavaScript <code>ArrayBuffer</code>s are described in\n<a href=\"https://tc39.github.io/ecma262/#sec-arraybuffer-objects\">Section 24.1</a> of the ECMAScript Language Specification.</p>",
1473 "desc": "<pre><code class=\"language-c\">napi_status napi_create_external_buffer(napi_env env,\n size_t length,\n void* data,\n napi_finalize finalize_cb,\n void* finalize_hint,\n napi_value* result)\n</code></pre>\n<ul>\n<li><code>[in] env</code>: The environment that the API is invoked under.</li>\n<li><code>[in] length</code>: Size in bytes of the input buffer (should be the same as the\nsize of the new buffer).</li>\n<li><code>[in] data</code>: Raw pointer to the underlying buffer to expose to JavaScript.</li>\n<li><code>[in] finalize_cb</code>: Optional callback to call when the <code>ArrayBuffer</code> is being\ncollected. <a href=\"n-api.html#napi_finalize\"><code>napi_finalize</code></a> provides more details.</li>\n<li><code>[in] finalize_hint</code>: Optional hint to pass to the finalize callback during\ncollection.</li>\n<li><code>[out] result</code>: A <code>napi_value</code> representing a <code>node::Buffer</code>.</li>\n</ul>\n<p>Returns <code>napi_ok</code> if the API succeeded.</p>\n<p><strong>Some runtimes other than Node.js have dropped support for external buffers</strong>.\nOn runtimes other than Node.js this method may return\n<code>napi_no_external_buffers_allowed</code> to indicate that external\nbuffers are not supported. One such runtime is Electron as\ndescribed in this issue\n<a href=\"https://github.com/electron/electron/issues/35801\">electron/issues/35801</a>.</p>\n<p>In order to maintain broadest compatibility with all runtimes\nyou may define <code>NODE_API_NO_EXTERNAL_BUFFERS_ALLOWED</code> in your addon before\nincludes for the node-api headers. Doing so will hide the 2 functions\nthat create external buffers. This will ensure a compilation error\noccurs if you accidentally use one of these methods.</p>\n<p>This API allocates a <code>node::Buffer</code> object and initializes it with data\nbacked by the passed in buffer. While this is still a fully-supported data\nstructure, in most cases using a <code>TypedArray</code> will suffice.</p>\n<p>The API adds a <code>napi_finalize</code> callback which will be called when the JavaScript\nobject just created has been garbage collected.</p>\n<p>For Node.js >=4 <code>Buffers</code> are <code>Uint8Array</code>s.</p>",
5789 "desc": "<p><em><a href=\"https://github.com/nodejs/corepack\">Corepack</a></em> is an experimental tool to help with\nmanaging versions of your package managers. It exposes binary proxies for\neach <a href=\"corepack.html#supported-package-managers\">supported package manager</a> that, when called, will identify whatever\npackage manager is configured for the current project, transparently install\nit if needed, and finally run it without requiring explicit user interactions.</p>\n<p>This feature simplifies two core workflows:</p>\n<ul>\n<li>\n<p>It eases new contributor onboarding, since they won't have to follow\nsystem-specific installation processes anymore just to have the package\nmanager you want them to use.</p>\n</li>\n<li>\n<p>It allows you to ensure that everyone in your team will use exactly the\npackage manager version you intend them to use, without them having to\nmanually synchronize it each time you need to make an update.</p>\n</li>\n</ul>",
9177 "desc": "<p>Type: Documentation-only</p>\n<p><a href=\"net.html#socketbuffersize\"><code>socket.bufferSize</code></a> is just an alias for <a href=\"stream.html#writablewritablelength\"><code>writable.writableLength</code></a>.</p>",
14155 "desc": "<p>To control how ICU is used in Node.js, four <code>configure</code> options are available\nduring compilation. Additional details on how to compile Node.js are documented\nin <a href=\"https://github.com/nodejs/node/blob/HEAD/BUILDING.md\">BUILDING.md</a>.</p>\n<ul>\n<li><code>--with-intl=none</code>/<code>--without-intl</code></li>\n<li><code>--with-intl=system-icu</code></li>\n<li><code>--with-intl=small-icu</code></li>\n<li><code>--with-intl=full-icu</code> (default)</li>\n</ul>\n<p>An overview of available Node.js and JavaScript features for each <code>configure</code>\noption:</p>\n<table>\n<thead>\n<tr>\n<th>Feature</th>\n<th><code>none</code></th>\n<th><code>system-icu</code></th>\n<th><code>small-icu</code></th>\n<th><code>full-icu</code></th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td><a href=\"https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/normalize\"><code>String.prototype.normalize()</code></a></td>\n<td>none (function is no-op)</td>\n<td>full</td>\n<td>full</td>\n<td>full</td>\n</tr>\n<tr>\n<td><code>String.prototype.to*Case()</code></td>\n<td>full</td>\n<td>full</td>\n<td>full</td>\n<td>full</td>\n</tr>\n<tr>\n<td><a href=\"https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Intl\"><code>Intl</code></a></td>\n<td>none (object does not exist)</td>\n<td>partial/full (depends on OS)</td>\n<td>partial (English-only)</td>\n<td>full</td>\n</tr>\n<tr>\n<td><a href=\"https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/localeCompare\"><code>String.prototype.localeCompare()</code></a></td>\n<td>partial (not locale-aware)</td>\n<td>full</td>\n<td>full</td>\n<td>full</td>\n</tr>\n<tr>\n<td><code>String.prototype.toLocale*Case()</code></td>\n<td>partial (not locale-aware)</td>\n<td>full</td>\n<td>full</td>\n<td>full</td>\n</tr>\n<tr>\n<td><a href=\"https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/toLocaleString\"><code>Number.prototype.toLocaleString()</code></a></td>\n<td>partial (not locale-aware)</td>\n<td>partial/full (depends on OS)</td>\n<td>partial (English-only)</td>\n<td>full</td>\n</tr>\n<tr>\n<td><code>Date.prototype.toLocale*String()</code></td>\n<td>partial (not locale-aware)</td>\n<td>partial/full (depends on OS)</td>\n<td>partial (English-only)</td>\n<td>full</td>\n</tr>\n<tr>\n<td><a href=\"url.html#legacy-url-api\">Legacy URL Parser</a></td>\n<td>partial (no IDN support)</td>\n<td>full</td>\n<td>full</td>\n<td>full</td>\n</tr>\n<tr>\n<td><a href=\"url.html#the-whatwg-url-api\">WHATWG URL Parser</a></td>\n<td>partial (no IDN support)</td>\n<td>full</td>\n<td>full</td>\n<td>full</td>\n</tr>\n<tr>\n<td><a href=\"buffer.html#buffertranscodesource-fromenc-toenc\"><code>require('node:buffer').transcode()</code></a></td>\n<td>none (function does not exist)</td>\n<td>full</td>\n<td>full</td>\n<td>full</td>\n</tr>\n<tr>\n<td><a href=\"repl.html#repl\">REPL</a></td>\n<td>partial (inaccurate line editing)</td>\n<td>full</td>\n<td>full</td>\n<td>full</td>\n</tr>\n<tr>\n<td><a href=\"util.html#class-utiltextdecoder\"><code>require('node:util').TextDecoder</code></a></td>\n<td>partial (basic encodings support)</td>\n<td>partial/full (depends on OS)</td>\n<td>partial (Unicode-only)</td>\n<td>full</td>\n</tr>\n<tr>\n<td><a href=\"https://github.com/tc39/proposal-regexp-unicode-property-escapes\"><code>RegExp</code> Unicode Property Escapes</a></td>\n<td>none (invalid <code>RegExp</code> 
error)</td>\n<td>full</td>\n<td>full</td>\n<td>full</td>\n</tr>\n</tbody>\n</table>\n<p>The \"(not locale-aware)\" designation denotes that the function carries out its\noperation just like the non-<code>Locale</code> version of the function, if one\nexists. For example, under <code>none</code> mode, <code>Date.prototype.toLocaleString()</code>'s\noperation is identical to that of <code>Date.prototype.toString()</code>.</p>",
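<p>A minimal sketch of how the table above can be observed at runtime; the exact output depends on which <code>--with-intl</code> mode the running binary was built with:</p>
<pre><code class="language-js">// Rough probe of the ICU mode of the current Node.js binary.
if (typeof Intl === 'undefined') {
  // Only happens for builds configured with --with-intl=none.
  console.log('Intl is not available (none mode)');
} else {
  const month = new Intl.DateTimeFormat('es', { month: 'long', timeZone: 'UTC' })
    .format(new Date(Date.UTC(2000, 0, 1)));
  console.log(month);
  // 'enero' with full-icu (or system-icu with Spanish locale data),
  // 'January' with small-icu (English-only data).
}
</code></pre>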
15032 "desc": "<p>Write the package in CommonJS or transpile ES module sources into CommonJS, and\ncreate an ES module wrapper file that defines the named exports. Using\n<a href=\"packages.html#conditional-exports\">Conditional exports</a>, the ES module wrapper is used for <code>import</code> and the\nCommonJS entry point for <code>require</code>.</p>\n<pre><code class=\"language-json\">// ./node_modules/pkg/package.json\n{\n \"type\": \"module\",\n \"exports\": {\n \"import\": \"./wrapper.mjs\",\n \"require\": \"./index.cjs\"\n }\n}\n</code></pre>\n<p>The preceding example uses explicit extensions <code>.mjs</code> and <code>.cjs</code>.\nIf your files use the <code>.js</code> extension, <code>\"type\": \"module\"</code> will cause such files\nto be treated as ES modules, just as <code>\"type\": \"commonjs\"</code> would cause them\nto be treated as CommonJS.\nSee <a href=\"esm.html#enabling\">Enabling</a>.</p>\n<pre><code class=\"language-cjs\">// ./node_modules/pkg/index.cjs\nexports.name = 'value';\n</code></pre>\n<pre><code class=\"language-js\">// ./node_modules/pkg/wrapper.mjs\nimport cjsModule from './index.cjs';\nexport const name = cjsModule.name;\n</code></pre>\n<p>In this example, the <code>name</code> from <code>import { name } from 'pkg'</code> is the same\nsingleton as the <code>name</code> from <code>const { name } = require('pkg')</code>. Therefore <code>===</code>\nreturns <code>true</code> when comparing the two <code>name</code>s and the divergent specifier hazard\nis avoided.</p>\n<p>If the module is not simply a list of named exports, but rather contains a\nunique function or object export like <code>module.exports = function () { ... }</code>,\nor if support in the wrapper for the <code>import pkg from 'pkg'</code> pattern is desired,\nthen the wrapper would instead be written to export the default optionally\nalong with any named exports as well:</p>\n<pre><code class=\"language-js\">import cjsModule from './index.cjs';\nexport const name = cjsModule.name;\nexport default cjsModule;\n</code></pre>\n<p>This approach is appropriate for any of the following use cases:</p>\n<ul>\n<li>The package is currently written in CommonJS and the author would prefer not\nto refactor it into ES module syntax, but wishes to provide named exports for\nES module consumers.</li>\n<li>The package has other packages that depend on it, and the end user might\ninstall both this package and those other packages. For example a <code>utilities</code>\npackage is used directly in an application, and a <code>utilities-plus</code> package\nadds a few more functions to <code>utilities</code>. Because the wrapper exports\nunderlying CommonJS files, it doesn't matter if <code>utilities-plus</code> is written in\nCommonJS or ES module syntax; it will work either way.</li>\n<li>The package stores internal state, and the package author would prefer not to\nrefactor the package to isolate its state management. See the next section.</li>\n</ul>\n<p>A variant of this approach not requiring conditional exports for consumers could\nbe to add an export, e.g. <code>\"./module\"</code>, to point to an all-ES module-syntax\nversion of the package. 
This could be used via <code>import 'pkg/module'</code> by users\nwho are certain that the CommonJS version will not be loaded anywhere in the\napplication, such as by dependencies; or if the CommonJS version can be loaded\nbut doesn't affect the ES module version (for example, because the package is\nstateless):</p>\n<pre><code class=\"language-json\">// ./node_modules/pkg/package.json\n{\n \"type\": \"module\",\n \"exports\": {\n \".\": \"./index.cjs\",\n \"./module\": \"./wrapper.mjs\"\n }\n}\n</code></pre>",
16046 "desc": "<p>If <code>message</code> is falsy, the error message is set as the values of <code>actual</code> and\n<code>expected</code> separated by the provided <code>operator</code>. If just the two <code>actual</code> and\n<code>expected</code> arguments are provided, <code>operator</code> will default to <code>'!='</code>. If\n<code>message</code> is provided as third argument it will be used as the error message and\nthe other arguments will be stored as properties on the thrown object. If\n<code>stackStartFn</code> is provided, all stack frames above that function will be\nremoved from stacktrace (see <a href=\"errors.html#errorcapturestacktracetargetobject-constructoropt\"><code>Error.captureStackTrace</code></a>). If no arguments are\ngiven, the default message <code>Failed</code> will be used.</p>\n<pre><code class=\"language-mjs\">import assert from 'node:assert/strict';\n\nassert.fail('a', 'b');\n// AssertionError [ERR_ASSERTION]: 'a' != 'b'\n\nassert.fail(1, 2, undefined, '>');\n// AssertionError [ERR_ASSERTION]: 1 > 2\n\nassert.fail(1, 2, 'fail');\n// AssertionError [ERR_ASSERTION]: fail\n\nassert.fail(1, 2, 'whoops', '>');\n// AssertionError [ERR_ASSERTION]: whoops\n\nassert.fail(1, 2, new TypeError('need array'));\n// TypeError: need array\n</code></pre>\n<pre><code class=\"language-cjs\">const assert = require('node:assert/strict');\n\nassert.fail('a', 'b');\n// AssertionError [ERR_ASSERTION]: 'a' != 'b'\n\nassert.fail(1, 2, undefined, '>');\n// AssertionError [ERR_ASSERTION]: 1 > 2\n\nassert.fail(1, 2, 'fail');\n// AssertionError [ERR_ASSERTION]: fail\n\nassert.fail(1, 2, 'whoops', '>');\n// AssertionError [ERR_ASSERTION]: whoops\n\nassert.fail(1, 2, new TypeError('need array'));\n// TypeError: need array\n</code></pre>\n<p>In the last three cases <code>actual</code>, <code>expected</code>, and <code>operator</code> have no\ninfluence on the error message.</p>\n<p>Example use of <code>stackStartFn</code> for truncating the exception's stacktrace:</p>\n<pre><code class=\"language-mjs\">import assert from 'node:assert/strict';\n\nfunction suppressFrame() {\n assert.fail('a', 'b', undefined, '!==', suppressFrame);\n}\nsuppressFrame();\n// AssertionError [ERR_ASSERTION]: 'a' !== 'b'\n// at repl:1:1\n// at ContextifyScript.Script.runInThisContext (vm.js:44:33)\n// ...\n</code></pre>\n<pre><code class=\"language-cjs\">const assert = require('node:assert/strict');\n\nfunction suppressFrame() {\n assert.fail('a', 'b', undefined, '!==', suppressFrame);\n}\nsuppressFrame();\n// AssertionError [ERR_ASSERTION]: 'a' !== 'b'\n// at repl:1:1\n// at ContextifyScript.Script.runInThisContext (vm.js:44:33)\n// ...\n</code></pre>"
17036 "desc": "<p>An asynchronous resource represents an object with an associated callback.\nThis callback may be called multiple times, such as the <code>'connection'</code>\nevent in <code>net.createServer()</code>, or just a single time like in <code>fs.open()</code>.\nA resource can also be closed before the callback is called. <code>AsyncHook</code> does\nnot explicitly distinguish between these different cases but will represent them\nas the abstract concept that is a resource.</p>\n<p>If <a href=\"worker_threads.html#class-worker\"><code>Worker</code></a>s are used, each thread has an independent <code>async_hooks</code>\ninterface, and each thread will use a new set of async IDs.</p>",
17043 "desc": "<p>Following is a simple overview of the public API.</p>\n<pre><code class=\"language-mjs\">import async_hooks from 'node:async_hooks';\n\n// Return the ID of the current execution context.\nconst eid = async_hooks.executionAsyncId();\n\n// Return the ID of the handle responsible for triggering the callback of the\n// current execution scope to call.\nconst tid = async_hooks.triggerAsyncId();\n\n// Create a new AsyncHook instance. All of these callbacks are optional.\nconst asyncHook =\n async_hooks.createHook({ init, before, after, destroy, promiseResolve });\n\n// Allow callbacks of this AsyncHook instance to call. This is not an implicit\n// action after running the constructor, and must be explicitly run to begin\n// executing callbacks.\nasyncHook.enable();\n\n// Disable listening for new asynchronous events.\nasyncHook.disable();\n\n//\n// The following are the callbacks that can be passed to createHook().\n//\n\n// init() is called during object construction. The resource may not have\n// completed construction when this callback runs. Therefore, all fields of the\n// resource referenced by \"asyncId\" may not have been populated.\nfunction init(asyncId, type, triggerAsyncId, resource) { }\n\n// before() is called just before the resource's callback is called. It can be\n// called 0-N times for handles (such as TCPWrap), and will be called exactly 1\n// time for requests (such as FSReqCallback).\nfunction before(asyncId) { }\n\n// after() is called just after the resource's callback has finished.\nfunction after(asyncId) { }\n\n// destroy() is called when the resource is destroyed.\nfunction destroy(asyncId) { }\n\n// promiseResolve() is called only for promise resources, when the\n// resolve() function passed to the Promise constructor is invoked\n// (either directly or through other means of resolving a promise).\nfunction promiseResolve(asyncId) { }\n</code></pre>\n<pre><code class=\"language-cjs\">const async_hooks = require('node:async_hooks');\n\n// Return the ID of the current execution context.\nconst eid = async_hooks.executionAsyncId();\n\n// Return the ID of the handle responsible for triggering the callback of the\n// current execution scope to call.\nconst tid = async_hooks.triggerAsyncId();\n\n// Create a new AsyncHook instance. All of these callbacks are optional.\nconst asyncHook =\n async_hooks.createHook({ init, before, after, destroy, promiseResolve });\n\n// Allow callbacks of this AsyncHook instance to call. This is not an implicit\n// action after running the constructor, and must be explicitly run to begin\n// executing callbacks.\nasyncHook.enable();\n\n// Disable listening for new asynchronous events.\nasyncHook.disable();\n\n//\n// The following are the callbacks that can be passed to createHook().\n//\n\n// init() is called during object construction. The resource may not have\n// completed construction when this callback runs. Therefore, all fields of the\n// resource referenced by \"asyncId\" may not have been populated.\nfunction init(asyncId, type, triggerAsyncId, resource) { }\n\n// before() is called just before the resource's callback is called. 
It can be\n// called 0-N times for handles (such as TCPWrap), and will be called exactly 1\n// time for requests (such as FSReqCallback).\nfunction before(asyncId) { }\n\n// after() is called just after the resource's callback has finished.\nfunction after(asyncId) { }\n\n// destroy() is called when the resource is destroyed.\nfunction destroy(asyncId) { }\n\n// promiseResolve() is called only for promise resources, when the\n// resolve() function passed to the Promise constructor is invoked\n// (either directly or through other means of resolving a promise).\nfunction promiseResolve(asyncId) { }\n</code></pre>",
17349 "desc": "<p>When an asynchronous operation is initiated (such as a TCP server receiving a\nnew connection) or completes (such as writing data to disk) a callback is\ncalled to notify the user. The <code>before</code> callback is called just before said\ncallback is executed. <code>asyncId</code> is the unique identifier assigned to the\nresource about to execute the callback.</p>\n<p>The <code>before</code> callback will be called 0 to N times. The <code>before</code> callback\nwill typically be called 0 times if the asynchronous operation was cancelled\nor, for example, if no connections are received by a TCP server. Persistent\nasynchronous resources like a TCP server will typically call the <code>before</code>\ncallback multiple times, while other operations like <code>fs.open()</code> will call\nit only once.</p>"
23250 "desc": "<p>The worker processes are spawned using the <a href=\"child_process.html#child_processforkmodulepath-args-options\"><code>child_process.fork()</code></a> method,\nso that they can communicate with the parent via IPC and pass server\nhandles back and forth.</p>\n<p>The cluster module supports two methods of distributing incoming\nconnections.</p>\n<p>The first one (and the default one on all platforms except Windows)\nis the round-robin approach, where the primary process listens on a\nport, accepts new connections and distributes them across the workers\nin a round-robin fashion, with some built-in smarts to avoid\noverloading a worker process.</p>\n<p>The second approach is where the primary process creates the listen\nsocket and sends it to interested workers. The workers then accept\nincoming connections directly.</p>\n<p>The second approach should, in theory, give the best performance.\nIn practice however, distribution tends to be very unbalanced due\nto operating system scheduler vagaries. Loads have been observed\nwhere over 70% of all connections ended up in just two processes,\nout of a total of eight.</p>\n<p>Because <code>server.listen()</code> hands off most of the work to the primary\nprocess, there are three cases where the behavior between a normal\nNode.js process and a cluster worker differs:</p>\n<ol>\n<li><code>server.listen({fd: 7})</code> Because the message is passed to the primary,\nfile descriptor 7 <strong>in the parent</strong> will be listened on, and the\nhandle passed to the worker, rather than listening to the worker's\nidea of what the number 7 file descriptor references.</li>\n<li><code>server.listen(handle)</code> Listening on handles explicitly will cause\nthe worker to use the supplied handle, rather than talk to the primary\nprocess.</li>\n<li><code>server.listen(0)</code> Normally, this will cause servers to listen on a\nrandom port. However, in a cluster, each worker will receive the\nsame \"random\" port each time they do <code>listen(0)</code>. In essence, the\nport is random the first time, but predictable thereafter. To listen\non a unique port, generate a port number based on the cluster worker ID.</li>\n</ol>\n<p>Node.js does not provide routing logic. It is therefore important to design an\napplication such that it does not rely too heavily on in-memory data objects for\nthings like sessions and login.</p>\n<p>Because workers are all separate processes, they can be killed or\nre-spawned depending on a program's needs, without affecting other\nworkers. As long as there are some workers still alive, the server will\ncontinue to accept connections. If no workers are alive, existing connections\nwill be dropped and new connections will be refused. Node.js does not\nautomatically manage the number of workers, however. It is the application's\nresponsibility to manage the worker pool based on its own needs.</p>\n<p>Although a primary use case for the <code>node:cluster</code> module is networking, it can\nalso be used for other use cases requiring worker processes.</p>"
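<p>A minimal sketch of generating a per-worker port from the cluster worker ID, as suggested above (the base port <code>8000</code> is illustrative):</p>
<pre><code class="language-js">const cluster = require('node:cluster');
const http = require('node:http');

const BASE_PORT = 8000; // illustrative base port

if (cluster.isPrimary) {
  cluster.fork();
  cluster.fork();
} else {
  // Each worker listens on its own port derived from its ID, instead of
  // relying on listen(0), which would hand every worker the same "random" port.
  http.createServer((req, res) => {
    res.end('handled by worker ' + cluster.worker.id);
  }).listen(BASE_PORT + cluster.worker.id);
}
</code></pre>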
23535 "desc": "<p>This property is <code>true</code> if the worker exited due to <code>.disconnect()</code>.\nIf the worker exited any other way, it is <code>false</code>. If the\nworker has not exited, it is <code>undefined</code>.</p>\n<p>The boolean <a href=\"cluster.html#workerexitedafterdisconnect\"><code>worker.exitedAfterDisconnect</code></a> allows distinguishing between\nvoluntary and accidental exit; the primary may choose not to respawn a worker\nbased on this value.</p>\n<pre><code class=\"language-js\">cluster.on('exit', (worker, code, signal) => {\n  if (worker.exitedAfterDisconnect === true) {\n    console.log('Oh, it was just voluntary – no need to worry');\n  }\n});\n\n// kill worker\nworker.kill();\n</code></pre>"
24444 "desc": "<p>Try to construct a table with the columns of the properties of <code>tabularData</code>\n(or use <code>properties</code>) and rows of <code>tabularData</code> and log it. Falls back to just\nlogging the argument if it can't be parsed as tabular.</p>\n<pre><code class=\"language-js\">// These can't be parsed as tabular data\nconsole.table(Symbol());\n// Symbol()\n\nconsole.table(undefined);\n// undefined\n\nconsole.table([{ a: 1, b: 'Y' }, { a: 'Z', b: 2 }]);\n// ┌─────────┬─────┬─────┐\n// │ (index) │ a │ b │\n// ├─────────┼─────┼─────┤\n// │ 0 │ 1 │ 'Y' │\n// │ 1 │ 'Z' │ 2 │\n// └─────────┴─────┴─────┘\n\nconsole.table([{ a: 1, b: 'Y' }, { a: 'Z', b: 2 }], ['a']);\n// ┌─────────┬─────┐\n// │ (index) │ a │\n// ├─────────┼─────┤\n// │ 0 │ 1 │\n// │ 1 │ 'Z' │\n// └─────────┴─────┘\n</code></pre>"
25118 "desc": "<p>Creates and returns a <code>Cipher</code> object, with the given <code>algorithm</code>, <code>key</code> and\ninitialization vector (<code>iv</code>).</p>\n<p>The <code>options</code> argument controls stream behavior and is optional except when a\ncipher in CCM or OCB mode (e.g. <code>'aes-128-ccm'</code>) is used. In that case, the\n<code>authTagLength</code> option is required and specifies the length of the\nauthentication tag in bytes, see <a href=\"crypto.html#ccm-mode\">CCM mode</a>. In GCM mode, the <code>authTagLength</code>\noption is not required but can be used to set the length of the authentication\ntag that will be returned by <code>getAuthTag()</code> and defaults to 16 bytes.\nFor <code>chacha20-poly1305</code>, the <code>authTagLength</code> option defaults to 16 bytes.</p>\n<p>The <code>algorithm</code> is dependent on OpenSSL, examples are <code>'aes192'</code>, etc. On\nrecent OpenSSL releases, <code>openssl list -cipher-algorithms</code> will\ndisplay the available cipher algorithms.</p>\n<p>The <code>key</code> is the raw key used by the <code>algorithm</code> and <code>iv</code> is an\n<a href=\"https://en.wikipedia.org/wiki/Initialization_vector\">initialization vector</a>. Both arguments must be <code>'utf8'</code> encoded strings,\n<a href=\"buffer.html\">Buffers</a>, <code>TypedArray</code>, or <code>DataView</code>s. The <code>key</code> may optionally be\na <a href=\"crypto.html#class-keyobject\"><code>KeyObject</code></a> of type <code>secret</code>. If the cipher does not need\nan initialization vector, <code>iv</code> may be <code>null</code>.</p>\n<p>When passing strings for <code>key</code> or <code>iv</code>, please consider\n<a href=\"crypto.html#using-strings-as-inputs-to-cryptographic-apis\">caveats when using strings as inputs to cryptographic APIs</a>.</p>\n<p>Initialization vectors should be unpredictable and unique; ideally, they will be\ncryptographically random. They do not have to be secret: IVs are typically just\nadded to ciphertext messages unencrypted. It may sound contradictory that\nsomething has to be unpredictable and unique, but does not have to be secret;\nremember that an attacker must not be able to predict ahead of time what a\ngiven IV will be.</p>"
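<p>A minimal sketch of the GCM case described above, using a freshly generated key and a random 12-byte IV (all values here are illustrative):</p>
<pre><code class="language-js">const { createCipheriv, randomBytes } = require('node:crypto');

const key = randomBytes(32); // aes-256 requires a 32-byte key
const iv = randomBytes(12);  // unpredictable and unique, but not secret

const cipher = createCipheriv('aes-256-gcm', key, iv);
const encrypted = Buffer.concat([
  cipher.update('some clear text', 'utf8'),
  cipher.final(),
]);
const authTag = cipher.getAuthTag(); // 16 bytes unless authTagLength was set

// The IV and the authentication tag travel with the ciphertext; neither is secret.
</code></pre>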
25251 "desc": "<p>Creates and returns a <code>Decipher</code> object that uses the given <code>algorithm</code>, <code>key</code>\nand initialization vector (<code>iv</code>).</p>\n<p>The <code>options</code> argument controls stream behavior and is optional except when a\ncipher in CCM or OCB mode (e.g. <code>'aes-128-ccm'</code>) is used. In that case, the\n<code>authTagLength</code> option is required and specifies the length of the\nauthentication tag in bytes, see <a href=\"crypto.html#ccm-mode\">CCM mode</a>. In GCM mode, the <code>authTagLength</code>\noption is not required but can be used to restrict accepted authentication tags\nto those with the specified length.\nFor <code>chacha20-poly1305</code>, the <code>authTagLength</code> option defaults to 16 bytes.</p>\n<p>The <code>algorithm</code> is dependent on OpenSSL, examples are <code>'aes192'</code>, etc. On\nrecent OpenSSL releases, <code>openssl list -cipher-algorithms</code> will\ndisplay the available cipher algorithms.</p>\n<p>The <code>key</code> is the raw key used by the <code>algorithm</code> and <code>iv</code> is an\n<a href=\"https://en.wikipedia.org/wiki/Initialization_vector\">initialization vector</a>. Both arguments must be <code>'utf8'</code> encoded strings,\n<a href=\"buffer.html\">Buffers</a>, <code>TypedArray</code>, or <code>DataView</code>s. The <code>key</code> may optionally be\na <a href=\"crypto.html#class-keyobject\"><code>KeyObject</code></a> of type <code>secret</code>. If the cipher does not need\nan initialization vector, <code>iv</code> may be <code>null</code>.</p>\n<p>When passing strings for <code>key</code> or <code>iv</code>, please consider\n<a href=\"crypto.html#using-strings-as-inputs-to-cryptographic-apis\">caveats when using strings as inputs to cryptographic APIs</a>.</p>\n<p>Initialization vectors should be unpredictable and unique; ideally, they will be\ncryptographically random. They do not have to be secret: IVs are typically just\nadded to ciphertext messages unencrypted. It may sound contradictory that\nsomething has to be unpredictable and unique, but does not have to be secret;\nremember that an attacker must not be able to predict ahead of time what a given\nIV will be.</p>"
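<p>A minimal round-trip sketch: the <code>Decipher</code> must be given the same algorithm, key, and IV that were used for encryption (values here are illustrative):</p>
<pre><code class="language-js">const { createCipheriv, createDecipheriv, randomBytes } = require('node:crypto');

const key = randomBytes(32);
const iv = randomBytes(16);

const cipher = createCipheriv('aes-256-cbc', key, iv);
const encrypted = Buffer.concat([cipher.update('secret message', 'utf8'), cipher.final()]);

const decipher = createDecipheriv('aes-256-cbc', key, iv);
const decrypted = Buffer.concat([decipher.update(encrypted), decipher.final()]);
console.log(decrypted.toString('utf8')); // Prints: secret message
</code></pre>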
32810 "desc": "<p>Domain error handlers are not a substitute for closing down a\nprocess when an error occurs.</p>\n<p>By the very nature of how <a href=\"https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/throw\"><code>throw</code></a> works in JavaScript, there is almost\nnever any way to safely \"pick up where it left off\", without leaking\nreferences, or creating some other sort of undefined brittle state.</p>\n<p>The safest way to respond to a thrown error is to shut down the\nprocess. Of course, in a normal web server, there may be many\nopen connections, and it is not reasonable to abruptly shut those down\nbecause an error was triggered by someone else.</p>\n<p>The better approach is to send an error response to the request that\ntriggered the error, while letting the others finish in their normal\ntime, and stop listening for new requests in that worker.</p>\n<p>In this way, <code>domain</code> usage goes hand-in-hand with the cluster module,\nsince the primary process can fork a new worker when a worker\nencounters an error. For Node.js programs that scale to multiple\nmachines, the terminating proxy or service registry can take note of\nthe failure, and react accordingly.</p>\n<p>For example, this is not a good idea:</p>\n<pre><code class=\"language-js\">// XXX WARNING! BAD IDEA!\n\nconst d = require('node:domain').create();\nd.on('error', (er) => {\n // The error won't crash the process, but what it does is worse!\n // Though we've prevented abrupt process restarting, we are leaking\n // a lot of resources if this ever happens.\n // This is no better than process.on('uncaughtException')!\n console.log(`error, but oh well ${er.message}`);\n});\nd.run(() => {\n require('node:http').createServer((req, res) => {\n handleRequest(req, res);\n }).listen(PORT);\n});\n</code></pre>\n<p>By using the context of a domain, and the resilience of separating our\nprogram into multiple worker processes, we can react more\nappropriately, and handle errors with much greater safety.</p>\n<pre><code class=\"language-js\">// Much better!\n\nconst cluster = require('node:cluster');\nconst PORT = +process.env.PORT || 1337;\n\nif (cluster.isPrimary) {\n // A more realistic scenario would have more than 2 workers,\n // and perhaps not put the primary and worker in the same file.\n //\n // It is also possible to get a bit fancier about logging, and\n // implement whatever custom logic is needed to prevent DoS\n // attacks and other bad behavior.\n //\n // See the options in the cluster documentation.\n //\n // The important thing is that the primary does very little,\n // increasing our resilience to unexpected errors.\n\n cluster.fork();\n cluster.fork();\n\n cluster.on('disconnect', (worker) => {\n console.error('disconnect!');\n cluster.fork();\n });\n\n} else {\n // the worker\n //\n // This is where we put our bugs!\n\n const domain = require('node:domain');\n\n // See the cluster documentation for more details about using\n // worker processes to serve requests. How it works, caveats, etc.\n\n const server = require('node:http').createServer((req, res) => {\n const d = domain.create();\n d.on('error', (er) => {\n console.error(`error ${er.stack}`);\n\n // We're in dangerous territory!\n // By definition, something unexpected occurred,\n // which we probably didn't want.\n // Anything can happen now! 
Be very careful!\n\n try {\n // Make sure we close down within 30 seconds\n const killtimer = setTimeout(() => {\n process.exit(1);\n }, 30000);\n // But don't keep the process open just for that!\n killtimer.unref();\n\n // Stop taking new requests.\n server.close();\n\n // Let the primary know we're dead. This will trigger a\n // 'disconnect' in the cluster primary, and then it will fork\n // a new worker.\n cluster.worker.disconnect();\n\n // Try to send an error to the request that triggered the problem\n res.statusCode = 500;\n res.setHeader('content-type', 'text/plain');\n res.end('Oops, there was a problem!\\n');\n } catch (er2) {\n // Oh well, not much we can do at this point.\n console.error(`Error sending 500! ${er2.stack}`);\n }\n });\n\n // Because req and res were created before this domain existed,\n // we need to explicitly add them.\n // See the explanation of implicit vs explicit binding below.\n d.add(req);\n d.add(res);\n\n // Now run the handler function in the domain.\n d.run(() => {\n handleRequest(req, res);\n });\n });\n server.listen(PORT);\n}\n\n// This part is not important. Just an example routing thing.\n// Put fancy application logic here.\nfunction handleRequest(req, res) {\n switch (req.url) {\n case '/error':\n // We do some async stuff, and then...\n setTimeout(() => {\n // Whoops!\n flerb.bark();\n }, timeout);\n break;\n default:\n res.end('ok');\n }\n}\n</code></pre>"
32879 "desc": "<p>Explicitly adds an emitter to the domain. If any event handlers called by\nthe emitter throw an error, or if the emitter emits an <code>'error'</code> event, it\nwill be routed to the domain's <code>'error'</code> event, just like with implicit\nbinding.</p>\n<p>This also works with timers that are returned from <a href=\"timers.html#setintervalcallback-delay-args\"><code>setInterval()</code></a> and\n<a href=\"timers.html#settimeoutcallback-delay-args\"><code>setTimeout()</code></a>. If their callback function throws, it will be caught by\nthe domain <code>'error'</code> handler.</p>\n<p>If the Timer or <code>EventEmitter</code> was already bound to a domain, it is removed\nfrom that one, and bound to this one instead.</p>"
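<p>A minimal sketch of explicit binding with <code>d.add()</code>; the emitter's <code>'error'</code> event is routed to the domain instead of being thrown:</p>
<pre><code class="language-js">const domain = require('node:domain');
const EventEmitter = require('node:events');

const d = domain.create();
d.on('error', (er) => {
  console.error(`caught by domain: ${er.message}`);
});

const emitter = new EventEmitter();
d.add(emitter); // Explicitly bind the emitter to the domain.

// Without the domain this would throw, because there is no 'error' listener.
emitter.emit('error', new Error('boom'));
</code></pre>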
41377 "desc": "<p>Watch for changes on <code>filename</code>. The callback <code>listener</code> will be called each\ntime the file is accessed.</p>\n<p>The <code>options</code> argument may be omitted. If provided, it should be an object. The\n<code>options</code> object may contain a boolean named <code>persistent</code> that indicates\nwhether the process should continue to run as long as files are being watched.\nThe <code>options</code> object may specify an <code>interval</code> property indicating how often the\ntarget should be polled in milliseconds.</p>\n<p>The <code>listener</code> gets two arguments: the current stat object and the previous\nstat object:</p>\n<pre><code class=\"language-mjs\">import { watchFile } from 'node:fs';\n\nwatchFile('message.text', (curr, prev) => {\n  console.log(`the current mtime is: ${curr.mtime}`);\n  console.log(`the previous mtime was: ${prev.mtime}`);\n});\n</code></pre>\n<p>These stat objects are instances of <code>fs.Stats</code>. If the <code>bigint</code> option is <code>true</code>,\nthe numeric values in these objects are specified as <code>BigInt</code>s.</p>\n<p>To be notified when the file was modified, not just accessed, it is necessary\nto compare <code>curr.mtimeMs</code> and <code>prev.mtimeMs</code>.</p>\n<p>When an <code>fs.watchFile</code> operation results in an <code>ENOENT</code> error, it\nwill invoke the listener once, with all the fields zeroed (or, for dates, the\nUnix Epoch). If the file is created later on, the listener will be called\nagain, with the latest stat objects. This is a change in functionality since\nv0.10.</p>\n<p>Using <a href=\"fs.html#fswatchfilename-options-listener\"><code>fs.watch()</code></a> is more efficient than <code>fs.watchFile</code> and\n<code>fs.unwatchFile</code>. <code>fs.watch</code> should be used instead of <code>fs.watchFile</code> and\n<code>fs.unwatchFile</code> when possible.</p>\n<p>When a file being watched by <code>fs.watchFile()</code> disappears and reappears,\nthen the contents of <code>previous</code> in the second callback event (the file's\nreappearance) will be the same as the contents of <code>previous</code> in the first\ncallback event (its disappearance).</p>\n<p>This happens when:</p>\n<ul>\n<li>the file is deleted, followed by a restore</li>\n<li>the file is renamed and then renamed a second time back to its original name</li>\n</ul>"
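<p>A minimal sketch of the <code>curr.mtimeMs</code>/<code>prev.mtimeMs</code> comparison mentioned above (the watched filename is taken from the example):</p>
<pre><code class="language-js">const { watchFile } = require('node:fs');

watchFile('message.text', { interval: 1000 }, (curr, prev) => {
  // React only when the file content changed, not merely when it was accessed.
  if (curr.mtimeMs !== prev.mtimeMs) {
    console.log(`modified at ${curr.mtime}`);
  }
});
</code></pre>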
45590 "desc": "<p>On Windows, <code>file:</code> <a href=\"url.html#the-whatwg-url-api\" class=\"type\">&lt;URL&gt;</a>s with a host name convert to UNC paths, while <code>file:</code>\n<a href=\"url.html#the-whatwg-url-api\" class=\"type\">&lt;URL&gt;</a>s with drive letters convert to local absolute paths. <code>file:</code> <a href=\"url.html#the-whatwg-url-api\" class=\"type\">&lt;URL&gt;</a>s\nwith no host name and no drive letter will result in an error:</p>\n<pre><code class=\"language-mjs\">import { readFileSync } from 'node:fs';\n// On Windows :\n\n// - WHATWG file URLs with hostname convert to UNC path\n// file://hostname/p/a/t/h/file => \\\\hostname\\p\\a\\t\\h\\file\nreadFileSync(new URL('file://hostname/p/a/t/h/file'));\n\n// - WHATWG file URLs with drive letters convert to absolute path\n// file:///C:/tmp/hello => C:\\tmp\\hello\nreadFileSync(new URL('file:///C:/tmp/hello'));\n\n// - WHATWG file URLs without hostname must have a drive letters\nreadFileSync(new URL('file:///notdriveletter/p/a/t/h/file'));\nreadFileSync(new URL('file:///c/p/a/t/h/file'));\n// TypeError [ERR_INVALID_FILE_URL_PATH]: File URL path must be absolute\n</code></pre>\n<p><code>file:</code> <a href=\"url.html#the-whatwg-url-api\" class=\"type\">&lt;URL&gt;</a>s with drive letters must use <code>:</code> as a separator just after\nthe drive letter. Using another separator will result in an error.</p>\n<p>On all other platforms, <code>file:</code> <a href=\"url.html#the-whatwg-url-api\" class=\"type\">&lt;URL&gt;</a>s with a host name are unsupported and\nwill result in an error:</p>\n<pre><code class=\"language-mjs\">import { readFileSync } from 'node:fs';\n// On other platforms:\n\n// - WHATWG file URLs with hostname are unsupported\n// file://hostname/p/a/t/h/file => throw!\nreadFileSync(new URL('file://hostname/p/a/t/h/file'));\n// TypeError [ERR_INVALID_FILE_URL_PATH]: must be absolute\n\n// - WHATWG file URLs convert to absolute path\n// file:///tmp/hello => /tmp/hello\nreadFileSync(new URL('file:///tmp/hello'));\n</code></pre>\n<p>A <code>file:</code> <a href=\"url.html#the-whatwg-url-api\" class=\"type\">&lt;URL&gt;</a> having encoded slash characters will result in an error on all\nplatforms:</p>\n<pre><code class=\"language-mjs\">import { readFileSync } from 'node:fs';\n\n// On Windows\nreadFileSync(new URL('file:///C:/p/a/t/h/%2F'));\nreadFileSync(new URL('file:///C:/p/a/t/h/%2f'));\n/* TypeError [ERR_INVALID_FILE_URL_PATH]: File URL path must not include encoded\n\\ or / characters */\n\n// On POSIX\nreadFileSync(new URL('file:///p/a/t/h/%2F'));\nreadFileSync(new URL('file:///p/a/t/h/%2f'));\n/* TypeError [ERR_INVALID_FILE_URL_PATH]: File URL path must not include encoded\n/ characters */\n</code></pre>\n<p>On Windows, <code>file:</code> <a href=\"url.html#the-whatwg-url-api\" class=\"type\">&lt;URL&gt;</a>s having encoded backslash will result in an error:</p>\n<pre><code class=\"language-mjs\">import { readFileSync } from 'node:fs';\n\n// On Windows\nreadFileSync(new URL('file:///C:/path/%5C'));\nreadFileSync(new URL('file:///C:/path/%5c'));\n/* TypeError [ERR_INVALID_FILE_URL_PATH]: File URL path must not include encoded\n\\ or / characters */\n</code></pre>",
45664 "desc": "<p>An <code>Agent</code> is responsible for managing connection persistence\nand reuse for HTTP clients. It maintains a queue of pending requests\nfor a given host and port, reusing a single socket connection for each\nuntil the queue is empty, at which time the socket is either destroyed\nor put into a pool where it is kept to be used again for requests to the\nsame host and port. Whether it is destroyed or pooled depends on the\n<code>keepAlive</code> <a href=\"http.html#new-agentoptions\">option</a>.</p>\n<p>Pooled connections have TCP Keep-Alive enabled for them, but servers may\nstill close idle connections, in which case they will be removed from the\npool and a new connection will be made when a new HTTP request is made for\nthat host and port. Servers may also refuse to allow multiple requests\nover the same connection, in which case the connection will have to be\nremade for every request and cannot be pooled. The <code>Agent</code> will still make\nthe requests to that server, but each one will occur over a new connection.</p>\n<p>When a connection is closed by the client or the server, it is removed\nfrom the pool. Any unused sockets in the pool will be unrefed so as not\nto keep the Node.js process running when there are no outstanding requests\n(see <a href=\"net.html#socketunref\"><code>socket.unref()</code></a>).</p>\n<p>It is good practice to <a href=\"http.html#agentdestroy\"><code>destroy()</code></a> an <code>Agent</code> instance when it is no\nlonger in use, because unused sockets consume OS resources.</p>\n<p>Sockets are removed from an agent when the socket emits either\na <code>'close'</code> event or an <code>'agentRemove'</code> event. When intending to keep one\nHTTP request open for a long time without keeping it in the agent, something\nlike the following may be done:</p>\n<pre><code class=\"language-js\">http.get(options, (res) => {\n  // Do stuff\n}).on('socket', (socket) => {\n  socket.emit('agentRemove');\n});\n</code></pre>\n<p>An agent may also be used for an individual request. By providing\n<code>{agent: false}</code> as an option to the <code>http.get()</code> or <code>http.request()</code>\nfunctions, a one-time use <code>Agent</code> with default options will be used\nfor the client connection.</p>\n<p><code>agent:false</code>:</p>\n<pre><code class=\"language-js\">http.get({\n  hostname: 'localhost',\n  port: 80,\n  path: '/',\n  agent: false,  // Create a new agent just for this one request\n}, (res) => {\n  // Do stuff with response\n});\n</code></pre>",
46957 "description": "The `rawPacket` is the current buffer that was just parsed. Adding this buffer to the error object of the `'clientError'` event makes it possible for developers to log the broken packet."
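<p>A minimal sketch of logging <code>rawPacket</code> from a <code>'clientError'</code> handler (the port is illustrative):</p>
<pre><code class="language-js">const http = require('node:http');

const server = http.createServer((req, res) => {
  res.end('ok');
});

server.on('clientError', (err, socket) => {
  if (err.rawPacket) {
    // The raw bytes of the broken request, useful for diagnostics.
    console.error('broken packet:', err.rawPacket.toString());
  }
  socket.end('HTTP/1.1 400 Bad Request\r\n\r\n');
});

server.listen(8000);
</code></pre>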
48180 "desc": "<p>Similar to <a href=\"http.html#messageheaders\"><code>message.headers</code></a>, but there is no join logic and the values are\nalways arrays of strings, even for headers received just once.</p>\n<pre><code class=\"language-js\">// Prints something like:\n//\n// { 'user-agent': ['curl/7.22.0'],\n// host: ['127.0.0.1:8000'],\n// accept: ['*/*'] }\nconsole.log(request.headersDistinct);\n</code></pre>"
48288 "desc": "<p>Similar to <a href=\"http.html#messagetrailers\"><code>message.trailers</code></a>, but there is no join logic and the values are\nalways arrays of strings, even for trailers received just once.\nOnly populated at the <code>'end'</code> event.</p>"
49428 "textRaw": "`createConnection` {Function} A function that produces a socket/stream to use for the request when the `agent` option is not used. This can be used to avoid creating a custom `Agent` class just to override the default `createConnection` function. See [`agent.createConnection()`][] for more details. Any [`Duplex`][] stream is a valid return value.",
49431 "desc": "A function that produces a socket/stream to use for the request when the `agent` option is not used. This can be used to avoid creating a custom `Agent` class just to override the default `createConnection` function. See [`agent.createConnection()`][] for more details. Any [`Duplex`][] stream is a valid return value."
49680 "textRaw": "`createConnection` {Function} A function that produces a socket/stream to use for the request when the `agent` option is not used. This can be used to avoid creating a custom `Agent` class just to override the default `createConnection` function. See [`agent.createConnection()`][] for more details. Any [`Duplex`][] stream is a valid return value.",
49683 "desc": "A function that produces a socket/stream to use for the request when the `agent` option is not used. This can be used to avoid creating a custom `Agent` class just to override the default `createConnection` function. See [`agent.createConnection()`][] for more details. Any [`Duplex`][] stream is a valid return value."
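<p>A minimal sketch of the <code>createConnection</code> option described above (the host and path are illustrative):</p>
<pre><code class="language-js">const http = require('node:http');
const net = require('node:net');

const req = http.request({
  hostname: 'example.com',
  path: '/',
  // Used because no `agent` option is given; any Duplex stream may be returned.
  createConnection: () => net.connect(80, 'example.com'),
}, (res) => {
  console.log(res.statusCode);
  res.resume();
});
req.end();
</code></pre>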
55256 "desc": "<p>Use the internal <code>require()</code> machinery to look up the location of a module,\nbut rather than loading the module, just return the resolved filename.</p>\n<p>If the module cannot be found, a <code>MODULE_NOT_FOUND</code> error is thrown.</p>",
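<p>A brief sketch (the missing specifier is hypothetical):</p>
<pre><code class="language-js">// Resolves the location without executing the module.
console.log(require.resolve('path')); // 'path' (built-in identifiers are returned as-is)

try {
  require.resolve('./does-not-exist.js'); // hypothetical missing file
} catch (err) {
  console.error(err.code); // 'MODULE_NOT_FOUND'
}
</code></pre>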
56146 "desc": "<blockquote>\n<p><strong>Warning:</strong> This hook will be removed in a future version. Use\n<a href=\"module.html#initialize\"><code>initialize</code></a> instead. When a hooks module has an <code>initialize</code> export,\n<code>globalPreload</code> will be ignored.</p>\n</blockquote>\n<ul>\n<li><code>context</code> <a href=\"https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object\" class=\"type\">&lt;Object&gt;</a> Information to assist the preload code\n<ul>\n<li><code>port</code> <a href=\"worker_threads.html#class-messageport\" class=\"type\">&lt;MessagePort&gt;</a></li>\n</ul>\n</li>\n<li>Returns: <a href=\"https://developer.mozilla.org/en-US/docs/Web/JavaScript/Data_structures#String_type\" class=\"type\">&lt;string&gt;</a> Code to run before application startup</li>\n</ul>\n<p>Sometimes it might be necessary to run some code inside of the same global\nscope that the application runs in. This hook allows the return of a string\nthat is run as a sloppy-mode script on startup.</p>\n<p>Similar to how CommonJS wrappers work, the code runs in an implicit function\nscope. The only argument is a <code>require</code>-like function that can be used to load\nbuiltins like \"fs\": <code>getBuiltin(request: string)</code>.</p>\n<p>If the code needs more advanced <code>require</code> features, it has to construct\nits own <code>require</code> using <code>module.createRequire()</code>.</p>\n<pre><code class=\"language-mjs\">export function globalPreload(context) {\n return `\\\nglobalThis.someInjectedProperty = 42;\nconsole.log('I just set some globals!');\n\nconst { createRequire } = getBuiltin('module');\nconst { cwd } = getBuiltin('process');\n\nconst require = createRequire(cwd() + '/&#x3C;preload>');\n// [...]\n`;\n}\n</code></pre>\n<p>Another argument is provided to the preload code: <code>port</code>. This is available as a\nparameter to the hook and inside of the source text returned by the hook. This\nfunctionality has been moved to the <code>initialize</code> hook.</p>\n<p>Care must be taken in order to properly call <a href=\"worker_threads.html#portref\"><code>port.ref()</code></a> and\n<a href=\"worker_threads.html#portunref\"><code>port.unref()</code></a> to prevent a process from being in a state where it won't\nclose normally.</p>\n<pre><code class=\"language-mjs\">/**\n * This example has the application context send a message to the hook\n * and sends the message back to the application context\n */\nexport function globalPreload({ port }) {\n port.on('message', (msg) => {\n port.postMessage(msg);\n });\n return `\\\n port.postMessage('console.log(\"I went to the hook and back\");');\n port.on('message', (msg) => {\n eval(msg);\n });\n `;\n}\n</code></pre>\n<h3>Examples</h3>\n<p>The various module customization hooks can be used together to accomplish\nwide-ranging customizations of the Node.js code loading and evaluation\nbehaviors.</p>"
72787 "textRaw": "`detailed` {boolean} Include the full certificate chain if `true`, otherwise include just the peer's certificate.",
72790 "desc": "Include the full certificate chain if `true`, otherwise include just the peer's certificate."
73692 "description": "If the `key` option is an array, individual entries do not need a `passphrase` property anymore. `Array` entries can also just be `string`s or `Buffer`s now."
82251 "desc": "<p>If <code>options</code> is a string, then it specifies the filename.</p>\n<p>Creating a new <code>vm.Script</code> object compiles <code>code</code> but does not run it. The\ncompiled <code>vm.Script</code> can be run later multiple times. The <code>code</code> is not bound to\nany global object; rather, it is bound before each run, just for that run.</p>"
83395 "desc": "<p><code>Promise</code>s and <code>async function</code>s can schedule tasks run by the JavaScript\nengine asynchronously. By default, these tasks are run after all JavaScript\nfunctions on the current stack are done executing.\nThis allows escaping the functionality of the <code>timeout</code> and\n<code>breakOnSigint</code> options.</p>\n<p>For example, the following code executed by <code>vm.runInNewContext()</code> with a\ntimeout of 5 milliseconds schedules an infinite loop to run after a promise\nresolves. The scheduled loop is never interrupted by the timeout:</p>\n<pre><code class=\"language-js\">const vm = require('node:vm');\n\nfunction loop() {\n console.log('entering loop');\n while (1) console.log(Date.now());\n}\n\nvm.runInNewContext(\n 'Promise.resolve().then(() => loop());',\n { loop, console },\n { timeout: 5 },\n);\n// This is printed *before* 'entering loop' (!)\nconsole.log('done executing');\n</code></pre>\n<p>This can be addressed by passing <code>microtaskMode: 'afterEvaluate'</code> to the code\nthat creates the <code>Context</code>:</p>\n<pre><code class=\"language-js\">const vm = require('node:vm');\n\nfunction loop() {\n while (1) console.log(Date.now());\n}\n\nvm.runInNewContext(\n 'Promise.resolve().then(() => loop());',\n { loop, console },\n { timeout: 5, microtaskMode: 'afterEvaluate' },\n);\n</code></pre>\n<p>In this case, the microtask scheduled through <code>promise.then()</code> will be run\nbefore returning from <code>vm.runInNewContext()</code>, and will be interrupted\nby the <code>timeout</code> functionality. This applies only to code running in a\n<code>vm.Context</code>, so e.g. <a href=\"vm.html#vmruninthiscontextcode-options\"><code>vm.runInThisContext()</code></a> does not take this option.</p>\n<p>Promise callbacks are entered into the microtask queue of the context in which\nthey were created. For example, if <code>() => loop()</code> is replaced with just <code>loop</code>\nin the above example, then <code>loop</code> will be pushed into the global microtask\nqueue, because it is a function from the outer (main) context, and thus will\nalso be able to escape the timeout.</p>\n<p>If asynchronous scheduling functions such as <code>process.nextTick()</code>,\n<code>queueMicrotask()</code>, <code>setTimeout()</code>, <code>setImmediate()</code>, etc. are made available\ninside a <code>vm.Context</code>, functions passed to them will be added to global queues,\nwhich are shared by all contexts. Therefore, callbacks passed to those functions\nare not controllable through the timeout either.</p>",
87928 "desc": "<ul>\n<li>Extends: <a href=\"events.html#class-eventemitter\" class=\"type\">&lt;EventEmitter&gt;</a></li>\n</ul>\n<p>The <code>Worker</code> class represents an independent JavaScript execution thread.\nMost Node.js APIs are available inside of it.</p>\n<p>Notable differences inside a Worker environment are:</p>\n<ul>\n<li>The <a href=\"process.html#processstdin\"><code>process.stdin</code></a>, <a href=\"process.html#processstdout\"><code>process.stdout</code></a>, and <a href=\"process.html#processstderr\"><code>process.stderr</code></a>\nstreams may be redirected by the parent thread.</li>\n<li>The <a href=\"worker_threads.html#workerismainthread\"><code>require('node:worker_threads').isMainThread</code></a> property is set to <code>false</code>.</li>\n<li>The <a href=\"worker_threads.html#workerparentport\"><code>require('node:worker_threads').parentPort</code></a> message port is available.</li>\n<li><a href=\"process.html#processexitcode\"><code>process.exit()</code></a> does not stop the whole program, just the single thread,\nand <a href=\"process.html#processabort\"><code>process.abort()</code></a> is not available.</li>\n<li><a href=\"process.html#processchdirdirectory\"><code>process.chdir()</code></a> and <code>process</code> methods that set group or user ids\nare not available.</li>\n<li><a href=\"process.html#processenv\"><code>process.env</code></a> is a copy of the parent thread's environment variables,\nunless otherwise specified. Changes to one copy are not visible in other\nthreads, and are not visible to native add-ons (unless\n<a href=\"worker_threads.html#workershare_env\"><code>worker.SHARE_ENV</code></a> is passed as the <code>env</code> option to the\n<a href=\"worker_threads.html#class-worker\"><code>Worker</code></a> constructor). On Windows, unlike the main thread, a copy of the\nenvironment variables operates in a case-sensitive manner.</li>\n<li><a href=\"process.html#processtitle\"><code>process.title</code></a> cannot be modified.</li>\n<li>Signals are not delivered through <a href=\"process.html#signal-events\"><code>process.on('...')</code></a>.</li>\n<li>Execution may stop at any point as a result of <a href=\"worker_threads.html#workerterminate\"><code>worker.terminate()</code></a>\nbeing invoked.</li>\n<li>IPC channels from parent processes are not accessible.</li>\n<li>The <a href=\"tracing.html\"><code>trace_events</code></a> module is not supported.</li>\n<li>Native add-ons can only be loaded from multiple threads if they fulfill\n<a href=\"addons.html#worker-support\">certain conditions</a>.</li>\n</ul>\n<p>Creating <code>Worker</code> instances inside of other <code>Worker</code>s is possible.</p>\n<p>Like <a href=\"https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API\">Web Workers</a> and the <a href=\"cluster.html\"><code>node:cluster</code> module</a>, two-way communication\ncan be achieved through inter-thread message passing. Internally, a <code>Worker</code> has\na built-in pair of <a href=\"worker_threads.html#class-messageport\"><code>MessagePort</code></a>s that are already associated with each\nother when the <code>Worker</code> is created. 
While the <code>MessagePort</code> object on the parent\nside is not directly exposed, its functionalities are exposed through\n<a href=\"worker_threads.html#workerpostmessagevalue-transferlist\"><code>worker.postMessage()</code></a> and the <a href=\"worker_threads.html#event-message_1\"><code>worker.on('message')</code></a> event\non the <code>Worker</code> object for the parent thread.</p>\n<p>To create custom messaging channels (which is encouraged over using the default\nglobal channel because it facilitates separation of concerns), users can create\na <code>MessageChannel</code> object on either thread and pass one of the\n<code>MessagePort</code>s on that <code>MessageChannel</code> to the other thread through a\npre-existing channel, such as the global one.</p>\n<p>See <a href=\"worker_threads.html#portpostmessagevalue-transferlist\"><code>port.postMessage()</code></a> for more information on how messages are passed,\nand what kind of JavaScript values can be successfully transported through\nthe thread barrier.</p>\n<pre><code class=\"language-js\">const assert = require('node:assert');\nconst {\n Worker, MessageChannel, MessagePort, isMainThread, parentPort,\n} = require('node:worker_threads');\nif (isMainThread) {\n const worker = new Worker(__filename);\n const subChannel = new MessageChannel();\n worker.postMessage({ hereIsYourPort: subChannel.port1 }, [subChannel.port1]);\n subChannel.port2.on('message', (value) => {\n console.log('received:', value);\n });\n} else {\n parentPort.once('message', (value) => {\n assert(value.hereIsYourPort instanceof MessagePort);\n value.hereIsYourPort.postMessage('the worker is sending this');\n value.hereIsYourPort.close();\n });\n}\n</code></pre>",
88462 "desc": "<p>The <code>node:zlib</code> module can be used to implement support for the <code>gzip</code>, <code>deflate</code>\nand <code>br</code> content-encoding mechanisms defined by\n<a href=\"https://tools.ietf.org/html/rfc7230#section-4.2\">HTTP</a>.</p>\n<p>The HTTP <a href=\"https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.3\"><code>Accept-Encoding</code></a> header is used within an HTTP request to identify\nthe compression encodings accepted by the client. The <a href=\"https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.11\"><code>Content-Encoding</code></a>\nheader is used to identify the compression encodings actually applied to a\nmessage.</p>\n<p>The examples given below are drastically simplified to show the basic concept.\nUsing <code>zlib</code> encoding can be expensive, and the results ought to be cached.\nSee <a href=\"zlib.html#memory-usage-tuning\">Memory usage tuning</a> for more information on the speed/memory/compression\ntradeoffs involved in <code>zlib</code> usage.</p>\n<pre><code class=\"language-js\">// Client request example\nconst zlib = require('node:zlib');\nconst http = require('node:http');\nconst fs = require('node:fs');\nconst { pipeline } = require('node:stream');\n\nconst request = http.get({ host: 'example.com',\n path: '/',\n port: 80,\n headers: { 'Accept-Encoding': 'br,gzip,deflate' } });\nrequest.on('response', (response) => {\n const output = fs.createWriteStream('example.com_index.html');\n\n const onError = (err) => {\n if (err) {\n console.error('An error occurred:', err);\n process.exitCode = 1;\n }\n };\n\n switch (response.headers['content-encoding']) {\n case 'br':\n pipeline(response, zlib.createBrotliDecompress(), output, onError);\n break;\n // Or, just use zlib.createUnzip() to handle both of the following cases:\n case 'gzip':\n pipeline(response, zlib.createGunzip(), output, onError);\n break;\n case 'deflate':\n pipeline(response, zlib.createInflate(), output, onError);\n break;\n default:\n pipeline(response, output, onError);\n break;\n }\n});\n</code></pre>\n<pre><code class=\"language-js\">// server example\n// Running a gzip operation on every request is quite expensive.\n// It would be much more efficient to cache the compressed buffer.\nconst zlib = require('node:zlib');\nconst http = require('node:http');\nconst fs = require('node:fs');\nconst { pipeline } = require('node:stream');\n\nhttp.createServer((request, response) => {\n const raw = fs.createReadStream('index.html');\n // Store both a compressed and an uncompressed version of the resource.\n response.setHeader('Vary', 'Accept-Encoding');\n let acceptEncoding = request.headers['accept-encoding'];\n if (!acceptEncoding) {\n acceptEncoding = '';\n }\n\n const onError = (err) => {\n if (err) {\n // If an error occurs, there's not much we can do because\n // the server has already sent the 200 response code and\n // some amount of data has already been sent to the client.\n // The best we can do is terminate the response immediately\n // and log the error.\n response.end();\n console.error('An error occurred:', err);\n }\n };\n\n // Note: This is not a conformant accept-encoding parser.\n // See https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.3\n if (/\\bdeflate\\b/.test(acceptEncoding)) {\n response.writeHead(200, { 'Content-Encoding': 'deflate' });\n pipeline(raw, zlib.createDeflate(), response, onError);\n } else if (/\\bgzip\\b/.test(acceptEncoding)) {\n response.writeHead(200, { 'Content-Encoding': 'gzip' });\n 
pipeline(raw, zlib.createGzip(), response, onError);\n } else if (/\\bbr\\b/.test(acceptEncoding)) {\n response.writeHead(200, { 'Content-Encoding': 'br' });\n pipeline(raw, zlib.createBrotliCompress(), response, onError);\n } else {\n response.writeHead(200, {});\n pipeline(raw, response, onError);\n }\n}).listen(1337);\n</code></pre>\n<p>By default, the <code>zlib</code> methods will throw an error when decompressing\ntruncated data. However, if it is known that the data is incomplete, or\nthe desire is to inspect only the beginning of a compressed file, it is\npossible to suppress the default error handling by changing the flushing\nmethod that is used to decompress the last chunk of input data:</p>\n<pre><code class=\"language-js\">// This is a truncated version of the buffer from the above examples\nconst buffer = Buffer.from('eJzT0yMA', 'base64');\n\nzlib.unzip(\n buffer,\n // For Brotli, the equivalent is zlib.constants.BROTLI_OPERATION_FLUSH.\n { finishFlush: zlib.constants.Z_SYNC_FLUSH },\n (err, buffer) => {\n if (err) {\n console.error('An error occurred:', err);\n process.exitCode = 1;\n }\n console.log(buffer.toString());\n });\n</code></pre>\n<p>This will not change the behavior in other error-throwing situations, e.g.\nwhen the input data has an invalid format. Using this method, it will not be\npossible to determine whether the input ended prematurely or lacks the\nintegrity checks, making it necessary to manually check that the\ndecompressed result is valid.</p>",
92598 "desc": "<p>The <code>bigint</code> version of the <a href=\"process.html#processhrtimetime\"><code>process.hrtime()</code></a> method returning the\ncurrent high-resolution real time in nanoseconds as a <code>bigint</code>.</p>\n<p>Unlike <a href=\"process.html#processhrtimetime\"><code>process.hrtime()</code></a>, it does not support an additional <code>time</code>\nargument since the difference can just be computed directly\nby subtraction of the two <code>bigint</code>s.</p>\n<pre><code class=\"language-mjs\">import { hrtime } from 'node:process';\n\nconst start = hrtime.bigint();\n// 191051479007711n\n\nsetTimeout(() => {\n const end = hrtime.bigint();\n // 191052633396993n\n\n console.log(`Benchmark took ${end - start} nanoseconds`);\n // Benchmark took 1154389282 nanoseconds\n}, 1000);\n</code></pre>\n<pre><code class=\"language-cjs\">const { hrtime } = require('node:process');\n\nconst start = hrtime.bigint();\n// 191051479007711n\n\nsetTimeout(() => {\n const end = hrtime.bigint();\n // 191052633396993n\n\n console.log(`Benchmark took ${end - start} nanoseconds`);\n // Benchmark took 1154389282 nanoseconds\n}, 1000);\n</code></pre>"
92659 "desc": "<p>The <code>process.kill()</code> method sends the <code>signal</code> to the process identified by\n<code>pid</code>.</p>\n<p>Signal names are strings such as <code>'SIGINT'</code> or <code>'SIGHUP'</code>. See <a href=\"process.html#signal-events\">Signal Events</a>\nand <a href=\"http://man7.org/linux/man-pages/man2/kill.2.html\"><code>kill(2)</code></a> for more information.</p>\n<p>This method will throw an error if the target <code>pid</code> does not exist. As a special\ncase, a signal of <code>0</code> can be used to test for the existence of a process.\nWindows platforms will throw an error if the <code>pid</code> is used to kill a process\ngroup.</p>\n<p>Even though the name of this function is <code>process.kill()</code>, it is really just a\nsignal sender, like the <code>kill</code> system call. The signal sent may do something\nother than kill the target process.</p>\n<pre><code class=\"language-mjs\">import process, { kill } from 'node:process';\n\nprocess.on('SIGHUP', () => {\n console.log('Got SIGHUP signal.');\n});\n\nsetTimeout(() => {\n console.log('Exiting.');\n process.exit(0);\n}, 100);\n\nkill(process.pid, 'SIGHUP');\n</code></pre>\n<pre><code class=\"language-cjs\">const process = require('node:process');\n\nprocess.on('SIGHUP', () => {\n console.log('Got SIGHUP signal.');\n});\n\nsetTimeout(() => {\n console.log('Exiting.');\n process.exit(0);\n}, 100);\n\nprocess.kill(process.pid, 'SIGHUP');\n</code></pre>\n<p>When <code>SIGUSR1</code> is received by a Node.js process, Node.js will start the\ndebugger. See <a href=\"process.html#signal-events\">Signal Events</a>.</p>"