Recently I found some free time to continue working on the aria2 package I had started earlier, and I wrote down some interesting points from the process.

naria2 is a JavaScript / TypeScript client library that calls the aria2 RPC interface and wraps it in a higher-level downloader abstraction. It also ships a CLI application that adds some extra features on top of aria2c.
## Unified Installation Method Across Platforms
Before using aria2, we must first download aria2. This seems like a redundant statement, as aria2 is generally not included with the system by default, but as a native application written in C++, its installation and setup are not necessarily trivial.
There are a few possible answers. For example: why should I, a client package, download it for users at all? You can download it yourself and add it to `PATH` before using the library. Fine, no problem, that just takes a few more lines in the documentation, but it still doesn't feel very elegant.
Therefore, I hope users of this library do not need to be aware of too many details: we start a background process that serves the aria2 RPC interface, and users only need to know how to:
- Call a few simple asynchronous functions to complete the client initialization;
- Use the abstracted API to perform various torrent download and monitoring tasks;
- Destroy the client after all logic processing is complete.
So we still want to provide a convenient way for users on different platforms to obtain aria2.
### Pull and Run a Remote Installation Script
Many cross-platform applications currently choose this installation method: they run a web service that distributes an installation script. Rust, for example, lets you copy the command shown on its website, press Enter a few times, and you have Rust installed.

However, as that page also shows, Windows is the odd one out and needs a separate installer. Scripts that run fine on *nix systems do not run easily under Windows PowerShell or CMD. For example, Microsoft has kindly given Invoke-WebRequest (iwr) an alias of curl, but its usage is completely different from the real curl.
Second, maintaining a script-distribution service is both simple and not so simple. It is simple because you can deploy it to a static hosting or serverless platform and have the script up in no time. But do you need to buy a domain, obtain an SSL certificate, renew the domain every few years, and think about the Great Firewall in China? Even with Cloudflare handling some of the dirty work for you, it can still be a bit of a hassle. Of course, there are simpler options, thanks to egoist and their long-running project bina, which automatically generates a download-and-install-to-`PATH` script based on the platform-tagged filenames in your project's GitHub Releases.
### Publish to a Bunch of Package Managers
Rust applications are a classic example here. You are right, but Rust is a new memory-safe programming language developed by Mozilla. Compilation takes place in a build system called "Cargo", where references are granted "lifetimes" that guide safety. You will play a mysterious role known as the "developer", encounter surprising error messages while programming, navigate around them during compilation, and gradually uncover the truth about "Rust".
In fact, I believe there is not much difference between using a package manager and fetching a distributed install script to run. The package manager simply implements the "fetch the artifacts" and "run the local install script" logic internally, and it maintains the index of packages and their resources for you, saving effort and making the whole thing less likely to disappear. Of course, package managers can do more, such as unified version and dependency management.
In addition, there are package-management solutions like nix (which I don't know much about) and tea (which mixes in Web3?), and so on. The purpose of this article is not to introduce these; the two sections above are just casual notes on some existing solutions.
### Utilizing `optionalDependencies` in `package.json`
Thanks to Node's cross-platform nature, we consider publishing to npm. If you are familiar with the current front-end toolchain, you know that applications written in native languages, such as esbuild and swc, can be distributed via npm.
Taking esbuild as an example, if you are curious, open its npm homepage: you will find that it depends on a bunch of packages, one per platform. If you look at its `package.json`, you can see that these are actually `optionalDependencies`:
```json
{
  "name": "esbuild",
  "version": "0.19.5",
  "description": "An extremely fast JavaScript and CSS bundler and minifier.",
  "repository": "https://github.com/evanw/esbuild",
  "scripts": {
    "postinstall": "node install.js"
  },
  "main": "lib/main.js",
  "types": "lib/main.d.ts",
  "engines": {
    "node": ">=12"
  },
  "bin": {
    "esbuild": "bin/esbuild"
  },
  "optionalDependencies": {
    "@esbuild/android-arm": "0.19.5",
    "@esbuild/android-arm64": "0.19.5",
    "@esbuild/android-x64": "0.19.5",
    "@esbuild/darwin-arm64": "0.19.5",
    "@esbuild/darwin-x64": "0.19.5",
    "@esbuild/freebsd-arm64": "0.19.5",
    "@esbuild/freebsd-x64": "0.19.5",
    "@esbuild/linux-arm": "0.19.5",
    "@esbuild/linux-arm64": "0.19.5",
    "@esbuild/linux-ia32": "0.19.5",
    "@esbuild/linux-loong64": "0.19.5",
    "@esbuild/linux-mips64el": "0.19.5",
    "@esbuild/linux-ppc64": "0.19.5",
    "@esbuild/linux-riscv64": "0.19.5",
    "@esbuild/linux-s390x": "0.19.5",
    "@esbuild/linux-x64": "0.19.5",
    "@esbuild/netbsd-x64": "0.19.5",
    "@esbuild/openbsd-x64": "0.19.5",
    "@esbuild/sunos-x64": "0.19.5",
    "@esbuild/win32-arm64": "0.19.5",
    "@esbuild/win32-ia32": "0.19.5",
    "@esbuild/win32-x64": "0.19.5"
  },
  "license": "MIT"
}
```
If we click into any of these packages, we find that each `@esbuild/*` package contains the corresponding platform's `esbuild` binary and a `package.json`. Taking `@esbuild/win32-x64` as an example, its `package.json` declares the supported platforms through the `os` and `cpu` fields.
```json
{
  "name": "@esbuild/win32-x64",
  "version": "0.19.5",
  "description": "The Windows 64-bit binary for esbuild, a JavaScript bundler.",
  "repository": "https://github.com/evanw/esbuild",
  "license": "MIT",
  "preferUnplugged": true,
  "engines": {
    "node": ">=12"
  },
  "os": [
    "win32"
  ],
  "cpu": [
    "x64"
  ]
}
```
The install logic for `optionalDependencies` is that the package manager takes the current `process.platform` and `process.arch` and checks whether each package matches the platform; npm downloads only the package for the current platform. (You can pass flags to force it to download others; try checking which platform builds of `swc` and `esbuild` your own project is using.)
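The matching check can be sketched as follows (a minimal sketch; the function name is my own, not npm's internal API):

```typescript
// Hypothetical helper mirroring npm's platform check: a package is
// installable when its "os" and "cpu" fields (if present) contain the
// current process.platform / process.arch.
interface PlatformFields {
  os?: string[];
  cpu?: string[];
}

export function matchesCurrentPlatform(pkg: PlatformFields): boolean {
  // A missing field means "no restriction"
  const osOk = !pkg.os || pkg.os.includes(process.platform);
  const cpuOk = !pkg.cpu || pkg.cpu.includes(process.arch);
  return osOk && cpuOk;
}
```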
You can also see that the `scripts` section of the `esbuild` package has a `postinstall` entry, which can be run with `npm run` and is also executed automatically after `npm install` (other package managers behave similarly), completing the remaining initialization work, such as running the `install.js` script.
### `@naria2/node`
In summary, using a similar configuration, I borrowed a few prebuilt aria2 binaries from agalwood/Motrix and published them to npm; see packages/binary for details.
I feel that many other tools could be published to npm in the same way, so that in the future you only need Volta / fnm to get a Node environment, and then npm to install other commonly used tools.
After downloading the aria2 binary, you still need some code to select the corresponding binary package based on the current `process.platform` and `process.arch`:
```ts
import path from 'node:path';

function getPackage(): string {
  const { platform, arch } = process;
  switch (platform) {
    case 'win32':
      if (['x64', 'ia32'].includes(arch)) {
        return `@naria2/win32-${arch}`;
      }
      break;
    case 'darwin':
      if (['x64', 'arm64'].includes(arch)) {
        return `@naria2/darwin-${arch}`;
      }
      break;
    case 'linux':
      if (['x64', 'arm64'].includes(arch)) {
        return `@naria2/linux-${arch}`;
      }
      break;
  }
  throw new Error('naria2 does not provide an aria2 binary for your platform');
}

export const BINARY = getPackage();

export function getBinary(): string {
  // Resolve the platform package, then locate the binary next to its package.json
  const pkg = require.resolve(BINARY + '/package.json');
  const { platform } = process;
  // On Windows the binary ends with .exe
  return path.join(path.dirname(pkg), platform === 'win32' ? 'aria2c.exe' : 'aria2c');
}
```
Finally, you only need to call `getBinary()` to get the absolute path of the `aria2` binary for the current platform.
## Starting from Node.js Code
After the above efforts, we have successfully placed the aria2 binary into `node_modules`, just like esbuild and swc do. Of course, this is not enough; we keep working towards "calling a few simple asynchronous functions to complete client initialization."
maria2 is a basic aria2 RPC interface client that inspired me with its abstraction.
- Using a WebSocket connection:

  ```ts
  import { open, aria2 } from 'maria2'

  const conn = await open(
    new WebSocket('ws://localhost:6800/jsonrpc')
    // import { createWebSocket } from 'maria2/transport'
    // createWebSocket('ws://localhost:6800/jsonrpc')
  )

  const version = await aria2.getVersion(conn)
  ```
- Using an HTTP connection:

  ```ts
  import { open, aria2 } from 'maria2'
  import { createHTTP } from 'maria2/transport'

  const conn = await open(
    createHTTP('http://localhost:6800/jsonrpc')
  )

  const version = await aria2.getVersion(conn)
  ```
We can similarly wrap a locally started `aria2` process and its WebSocket connection into a single entity. What used to be just establishing a WebSocket connection becomes: first start the aria2 process from `node_modules`, then create the WebSocket connection.
```ts
import { spawn } from 'node:child_process';
import { WebSocket } from 'ws';

// Simplified: the real ChildProcessSocket also keeps a handle to the child process
export async function createChildProcess(): Promise<WebSocket> {
  // getBinary() resolves the aria2 binary for the current platform (see above)
  const child = spawn(getBinary(), ['--enable-rpc']);

  await new Promise<void>((res, rej) => {
    let spawned = false;
    if (child.stdout) {
      // The first output from aria2 means the RPC server is up
      child.stdout.once('data', () => {
        spawned = true;
        res();
      });
    } else {
      child.once('spawn', () => {
        spawned = true;
        res();
      });
    }
    child.once('error', (e) => {
      if (!spawned) {
        rej(e);
      }
    });
  });

  return new WebSocket(`ws://127.0.0.1:6800/jsonrpc`);
}
```
The example above omits a few details, but that's the general idea. Next comes the effect described in the `README.md`:
```bash
npm i naria2 @naria2/node
```
When using it, a single line, `await createClient(createChildProcess())`, completes both the creation of the aria2 process and the establishment of the WebSocket connection. Similarly, `client.close()` also takes care of shutting down the `aria2` process.
```ts
import { createClient } from 'naria2'
import { createChildProcess } from '@naria2/node'

// Initialize a client
const client = await createClient(createChildProcess())

// Start downloading a magnet URI
const torrent = await client.downloadUri('...')

// Watch torrent progress
await torrent.watchFollowedBy((torrent) => {
  console.log(`Downloading ${torrent.name}`)
})

// Shutdown the client
await client.shutdown()
```
Of course, some details cannot be ignored: an `aria2` process started this way may stay alive if the script exits abnormally. You may need to refer to the "death" of a Node.js command-line program to handle such edge cases.
## Starting as a CLI Application
Since we already have the current platform's `aria2` binary, and `aria2` is itself a CLI download tool, we can easily wrap it as above and turn it into a cross-platform CLI download tool. So on Windows / macOS / Linux, as long as you have a Node environment, you can simply `npm i -g` it like any other package.
```bash
npm i -g naria2c
```
Initially, it does something very simple: it gets the real path of `aria2` and runs it with `execa`, under a nice name, `naria2c`.
```ts
#!/usr/bin/env node
import { execa } from 'execa'

import { getBinary } from '@naria2/node'

const binary = getBinary()
const childProcess = await execa(binary, process.argv.slice(2))
```
Then, we add a few more details.
### Transforming `naria2c`'s Output
Because of the wrapper, what runs on the surface is no longer `aria2c`; yet if you run `naria2c --help`, it still tells you to use `aria2c` to run the program, which would undoubtedly confuse users. Therefore, we transform the stdout and stderr streams of `aria2c`, replacing `aria2c` with `naria2c`.
```ts
import { Transform } from 'node:stream'

childProcess.stdout.pipe(transformOutput()).pipe(process.stdout)
childProcess.stderr.pipe(transformOutput()).pipe(process.stderr)

function transformOutput() {
  return new Transform({
    transform(chunk, encoding, callback) {
      let text = chunk.toString()
      text = text.replace(/aria2c/g, 'naria2c')
      callback(null, text)
    }
  });
}
```
However, this naive replacement may hit unintended text, a pitfall similar to the one described in Production Replacement | Vite.
Thus, what `naria2c` actually does is:

- First check whether the command is `-h, --help` or `-v, --version`, and only transform the output of the help and version commands;
- For the standard error stream, which may also carry help text, apply the same transform stream (but with a more specific replacement pattern).
Now it really looks like something I wrote myself (
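The flag check can be sketched as follows (a sketch; the function name is my own, and naria2c's real logic is more involved):

```typescript
// Only transform output when the user asked for help or version text,
// i.e. the only aria2c commands whose output mentions the program name.
export function shouldTransformOutput(argv: string[]): boolean {
  return argv.some((arg) => ['-h', '--help', '-v', '--version'].includes(arg));
}
```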
### Passing Termination Signals to the aria2 Process
Following the "death" of a Node.js command-line program, we forward termination signals to the `aria2` process.
```ts
import { onDeath } from '@breadc/death';

const sleep = (ms: number) => new Promise<void>((res) => setTimeout(res, ms));

const cancelDeath = onDeath(async (signal) => {
  const killChildProcess = () => {
    childProcess.kill(signal);
    return new Promise<void>((res) => {
      if (childProcess.exitCode !== null || childProcess.killed) {
        res();
      } else {
        childProcess.once('exit', () => {
          res();
        });
      }
    });
  };

  // Wait for the child to exit, but give up after 5 seconds
  await Promise.race([
    killChildProcess(),
    sleep(5000)
  ]);
});
```
However, due to the different signal mechanisms on Windows and other platforms, this part does not seem able to fully reproduce the original termination behavior of `aria2c`. The expected behavior is that after pressing `Ctrl-C`, `aria2` exits and prints some summary information, or a graceful shutdown of `aria2` is triggered and pressing `Ctrl-C` again forces an exit.
But anyway, it’s still usable (
### Preventing aria2 from Automatically Using Proxy Environment Variables
Because many proxy providers ("airports" in Chinese slang) have auditing rules against BitTorrent traffic, and `aria2c` automatically configures a proxy from environment variables such as `http_proxy`, it is easy to forget to turn the proxy off and trigger those audits.
Since we start the `aria2c` process ourselves, we can do some preprocessing before launching it and strip out the proxy-related environment variables. First, we add a new option, `-I, --ignore-proxy`, for clearing proxy-related environment variables; then, when creating the process, we pass it a filtered `process.env`.
```ts
// Command-line argument parsing omitted
const env = options.ignoreProxy
  ? {
      ...process.env,
      HTTP_PROXY: undefined,
      HTTPS_PROXY: undefined,
      ALL_PROXY: undefined,
      http_proxy: undefined,
      https_proxy: undefined,
      all_proxy: undefined,
      no_proxy: undefined
    }
  : process.env;
```
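Equivalently, the proxy variables can be deleted from a copied environment and the result handed to the spawn call; a self-contained sketch under my own helper names:

```typescript
import { spawn, type ChildProcess } from 'node:child_process';

// Proxy-related variables that aria2c would otherwise pick up
const PROXY_VARS = [
  'HTTP_PROXY', 'HTTPS_PROXY', 'ALL_PROXY', 'NO_PROXY',
  'http_proxy', 'https_proxy', 'all_proxy', 'no_proxy'
];

// Return a copy of env with all proxy variables removed
export function stripProxyEnv(env: NodeJS.ProcessEnv): NodeJS.ProcessEnv {
  const cleaned = { ...env };
  for (const key of PROXY_VARS) {
    delete cleaned[key];
  }
  return cleaned;
}

// Spawn the binary with the cleaned environment
export function spawnWithoutProxy(binary: string, args: string[]): ChildProcess {
  return spawn(binary, args, { env: stripProxyEnv(process.env) });
}
```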
### Adding Command Line Parameters to Start the Web UI
If you just want to download a torrent the way a CLI tool like `curl` does, the current setup is entirely sufficient. But if you want to manage multiple torrents, monitor their real-time status, and so on, a TUI is clearly not very convenient, and we would still like a GUI to control aria2. So, since we are already building an aria2 RPC client library, why not use it to build a web application for aria2 as well?

To this end, we add a Web UI parameter, `--ui`, to `naria2c`, which starts a Web UI alongside the `aria2c` process.
```bash
naria2c --ui
```
Currently, the Web UI only implements some simple visualization and control.
With the single command `naria2c --ui`, you can start both `aria2c` and the Web UI. However, this involves some details:

- The `--ui` parameter is not supported by `aria2c` and needs to be filtered out in the startup script;
- Opening the Web UI requires the RPC service to be running, so if the `--ui` option is detected, we also need to set `--enable-rpc`;
- The Web UI needs to know the port number and authentication secret of the RPC service, and this information obviously should not have to be specified twice; we just need to parse the `--rpc-secret` option ourselves.
```ts
// Arguments passed through to aria2c
const args: string[] = []
// Parameters that may need to be supplemented
const missing = {
  enableRpc: true
}
// Web UI related parameters
const webui = {
  enable: false,
  secret: undefined as string | undefined
}

for (const arg of process.argv.slice(2)) {
  // Command-line argument parsing omitted
}

// If the Web UI is enabled but RPC was not requested, supplement the flag
if (webui.enable && missing.enableRpc) {
  args.push('--enable-rpc')
}
```
Thus, we have obtained the desired information from the command line arguments, and we can now start the embedded Web UI.
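The elided parsing loop could look roughly like this (a sketch under my own function name; naria2c's real parsing is more involved):

```typescript
// Recognise the flags naria2c handles itself and forward everything else to aria2c
export function parseArgs(argv: string[]) {
  const args: string[] = [];
  const webui = { enable: false, secret: undefined as string | undefined };
  let enableRpcMissing = true;

  for (const arg of argv) {
    if (arg === '--ui') {
      webui.enable = true; // handled by naria2c, not passed to aria2c
    } else if (arg === '--enable-rpc') {
      enableRpcMissing = false;
      args.push(arg);
    } else if (arg.startsWith('--rpc-secret=')) {
      webui.secret = arg.slice('--rpc-secret='.length);
      args.push(arg); // aria2c still needs the secret
    } else {
      args.push(arg);
    }
  }

  // Opening the Web UI requires the RPC service
  if (webui.enable && enableRpcMissing) {
    args.push('--enable-rpc');
  }
  return { args, webui };
}
```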
## Embedded Web UI
Following on from the previous section: why do I want to embed a Web UI into `naria2c`?
Looking back at the beginning:

- `naria2` is essentially just a client library that can run in environments like browsers and Node;
- `@naria2/node` provides Node-specific local support, such as directly starting the aria2 process locally;
- `naria2c` is a cross-platform wrapper for `aria2c`, merely a by-product.
Therefore, we should prioritize giving more capabilities to the client library: starting the Web UI can perfectly well live in `@naria2/node`, as a capability that the package offers its users. Also, we are not using technologies like Electron to build a desktop GUI application (as, for example, agalwood/Motrix does): a desktop application does not fit the needs of a library package, since it adds a lot of extra packaging and distribution cost, much like downloading the `aria2c` binary itself.
In fact, this is also my real need: AnimeSpace, another project of mine, is an automated solution for downloading and organizing new anime episodes. AnimeSpace is a command-line program that automatically downloads anime resources based on subscriptions. In its current version, it only displays a download progress bar in the TUI, which is not very intuitive. Bringing in this capability from `@naria2/node` can give it a richer download progress GUI.
### Starting the Web UI
So how do we embed the Web UI into `@naria2/node`? In fact, we only need to copy the build artifacts of the front-end application into a directory inside this package (and configure the `files` field in `package.json` accordingly); at runtime, we locate that directory and serve it with a static file HTTP server.
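For example, the `files` field of `@naria2/node` might look something like this (an illustrative fragment, not the package's exact manifest):

```json
{
  "files": [
    "client",
    "dist"
  ]
}
```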
The package structure of `@naria2/node` looks like this:
```text
@naria2/node
├─ client/     # Front-end application build artifacts
│  ├─ assets/**
│  └─ index.html
├─ dist/**     # Transpiled artifacts of the TypeScript source code
├─ ...
└─ package.json
```
Potential issues:

- The lookup depends on the location of the transpiled TypeScript build artifacts;
- It cannot support single-executable applications or inlined-dependency bundling, since it relies on the relative position of these non-code static artifacts in the file system.

But that's not a big problem.
Thus, we can find the path of the Web UI artifacts with `fileURLToPath(new URL('../client', import.meta.url))`, and use `serve-static` to start a local static file HTTP server. Ultimately, users can directly call `await launchWebUI({ ... })` to start the Web UI.
```ts
import { fileURLToPath } from 'node:url';

export interface WebUIOptions {
  port?: number;
  rpc: {
    port: number;
    secret: string | undefined;
  };
}

export async function launchWebUI(options: WebUIOptions) {
  const serveStatic = (await import('serve-static')).default;
  const finalhandler = (await import('finalhandler')).default;
  const http = await import('http');

  const port = options.port ?? 6801;
  const clientDir = fileURLToPath(new URL('../client', import.meta.url));
  const serve = serveStatic(clientDir, { index: ['index.html'] });

  const server = http.createServer(async (req, res) => {
    serve(req, res, finalhandler(req, res));
  });
  server.listen(port);

  return server;
}
```
### `vite-plugin-naria2`
I had never written React before; after browsing around a lot, I decided to use it for this application. Since we do not need SSR, we can use Vite directly, along with a pile of React ecosystem tools: react-router, TanStack Query, Zustand, Tailwind CSS, shadcn, and the pile of dependencies shadcn pulls in. At first it seemed rather complex, especially the chunk of component code that shadcn generates, which was quite overwhelming, but it turns out to be quite pleasant to write, and the default functionality is powerful.
The first problem when developing this application is that we need an aria2 service for the front end to talk to. Borrowing the idea from my earlier plugin, vite-plugin-cloudflare-functions, we simply start an aria2 inside the Vite plugin and pass the connection information to the front end through a virtual module. This way, there is no need to start and configure a separate service.
```ts
export default function Naria2(): Plugin[] {
  const childProcessRuntime = {
    process: undefined as ChildProcessSocket | undefined,
    url: undefined as string | undefined,
    secret: undefined as string | undefined
  };

  return [
    {
      name: 'vite-plugin-naria2:runtime',
      apply: 'serve',
      async configureServer(server) {
        if (!childProcessRuntime.url) {
          // Start an aria2 inside the Vite dev server
          const childProcess = await createChildProcess();
          childProcessRuntime.process = childProcess;
          childProcessRuntime.url = `ws://127.0.0.1:${childProcess.getOptions().listenPort}/jsonrpc`;
          childProcessRuntime.secret = childProcess.getOptions().secret;
        }
      },
      closeBundle() {
        childProcessRuntime.process?.close?.();
        childProcessRuntime.url = undefined;
        childProcessRuntime.secret = undefined;
      }
    },
    {
      name: 'vite-plugin-naria2:build',
      resolveId(id) {
        if (id === '~naria2/jsonrpc') {
          return '\0' + id;
        }
      },
      load(id) {
        if (id === '\0~naria2/jsonrpc') {
          // Expose the connection information to the front end as a virtual module
          const socketCode = childProcessRuntime.url
            ? `new WebSocket(${JSON.stringify(childProcessRuntime.url)})`
            : 'undefined';
          const clientCode = `socket ? await createClient(socket, { ${
            childProcessRuntime.secret
              ? 'secret: ' + JSON.stringify(childProcessRuntime.secret)
              : ''
          } }) : undefined`;

          return [
            `import { createClient } from 'naria2';`,
            `export const socket = ${socketCode};`,
            `export const client = ${clientCode};`
          ].join('\n');
        }
      }
    }
  ];
}
```
### Opening the Local File Explorer
We would also like to add some common conveniences to this Web UI, such as opening the local file explorer. Due to browser security restrictions, this is not easy to do directly; but since we already run a backend service, we can simply have that service do the opening!
```ts
export async function launchWebUI(options: WebUIOptions) {
  // ...
  const handler = await createWebUIHandler(options);
  const server = http.createServer(async (req, res) => {
    if (await handler(req, res)) {
      return;
    }
    serve(req, res, finalhandler(req, res));
  });
  // ...
}

export async function createWebUIHandler(options: Pick<WebUIOptions, 'rpc'>) {
  return async (req: IncomingMessage, res: ServerResponse<IncomingMessage>) => {
    if (!req.url) return false;
    try {
      const url = new URL(req.url, `http://${req.headers.host}`);
      if (url.pathname === '/_/open') {
        return await handleWebUIOpenRequest(url, req, res);
      }
      return false;
    } catch (error) {
      return false;
    }
  };
}

export async function handleWebUIOpenRequest(
  url: URL,
  req: IncomingMessage,
  res: ServerResponse<IncomingMessage>
) {
  try {
    // req.headers.authorization could be checked here for extra security
    const dir = url.searchParams.get('dir');
    res.setHeader('Content-Type', 'application/json');
    if (dir) {
      const open = (await import('open')).default;
      await open(dir).catch(() => {});
      res.write(JSON.stringify({ status: 'OK', open: dir }));
    }
    res.end();
    return true;
  } catch (error) {
    return false;
  }
}
```
We add an endpoint for opening directories, `/_/open?dir=...`, to the original static file server. The front end sends a request to this endpoint, and the local file explorer opens.
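On the front-end side, the call can be as simple as a `fetch`; a sketch under my own helper names:

```typescript
// Build the request URL for the open endpoint described above
export function buildOpenUrl(dir: string): string {
  return `/_/open?dir=${encodeURIComponent(dir)}`;
}

// Ask the backend to open the given directory in the local file explorer
export async function openFileExplorer(dir: string): Promise<void> {
  await fetch(buildOpenUrl(dir));
}
```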
There may be some security concerns here; you can additionally verify a token, or disable this feature via options.
### Proxying the aria2 RPC Port
Although it is not strictly necessary, perhaps we want to deploy this page on a server. That would require exposing two ports to the public internet, one for the front-end application and one for the aria2 RPC service; or it would need extra configuration, or something like nginx to locate the RPC service.

As in the previous section: since we already have a backend service, why not make full use of it? We simply add a proxy on the `/jsonrpc` route of this service that forwards to the aria2 RPC service. Deployment then only needs to expose the application itself, and connecting to the RPC service just means accessing the same-origin `/jsonrpc`.
```ts
export async function createWebUIHandler(options: Pick<WebUIOptions, 'rpc'>) {
  // Create a proxy middleware targeting the local aria2 RPC service
  const { createProxyMiddleware } = await import('http-proxy-middleware');
  const proxyMiddleware = createProxyMiddleware({
    target: `http://127.0.0.1:${options.rpc.port}`,
    changeOrigin: false,
    ws: true,
    logLevel: 'silent'
  });

  return async (req: IncomingMessage, res: ServerResponse<IncomingMessage>) => {
    if (!req.url) return false;
    try {
      const url = new URL(req.url, `http://${req.headers.host}`);
      if (url.pathname === '/jsonrpc') {
        // Forward the request to the aria2 RPC service
        proxyMiddleware(req as any, res as any, () => {});
        return true;
      } else if (url.pathname === '/_/open') {
        return await handleWebUIOpenRequest(url, req, res);
      }
      return false;
    } catch (error) {
      return false;
    }
  };
}
```
Based on http-proxy-middleware, we get HTTP request proxying for free, and since this library also supports forwarding WebSocket requests, we can use it directly.