Saturday morning I attempted to code an SSH server in JavaScript and gave up quickly when I hit `e = g^x mod p`. On Sunday morning I regrouped — I am a real coder 😤 — and I wrote an HTTP tunnel to prove it.
An HTTP tunnel, or HTTP proxy, is used to forward web requests, much like a VPN but only for HTTP traffic. It’s useful for bypassing firewall restrictions.
I’m using Deno because it has nice APIs for this stuff. And a sprinkle of TypeScript.
If you don’t care to read, skip to GitHub.
Note: my server only proxies secure connections. Proxying insecure HTTP doesn’t use a `CONNECT` request or a TLS connection. In fact, it can be achieved in one line:

```typescript
Deno.serve((request) => fetch(request.url, request));
```
Digging a Tunnel
The only research I bothered to do was look up the `CONNECT` method.

> The `CONNECT` HTTP method requests that a proxy establish an HTTP tunnel to a destination server, and if successful, blindly forward data in both directions until the tunnel is closed.

Deno has a nice HTTP server so why not start there? The `CONNECT` request expects an `HTTP/1.1 200 OK` response so let’s give it one and see what happens.
```typescript
Deno.serve((request) => {
  console.log(request);
  if (request.method === "CONNECT") {
    return new Response();
  }
  // Reject anything that isn't a CONNECT request
  return new Response(null, { status: 405 });
});
// Listening on http://0.0.0.0:8000/
```
We can test with `curl` and the `--proxy` option:

```shell
curl --proxy "http://0.0.0.0:8000" "https://example.com"
```
The server logs:

```
Request {
  bodyUsed: false,
  headers: Headers {
    host: "example.com:443",
    "proxy-connection": "Keep-Alive",
    "user-agent": "curl/8.7.1"
  },
  method: "CONNECT",
  redirect: undefined,
  url: "example.com:443"
}
```
Success? Well not really, because `curl` logs:

```
curl: (35) LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to example.com:443
```
What has gone wrong? After the first `CONNECT` request no more requests are logged by the server. That’s because, although a proxy connection is first negotiated with an HTTP request, all subsequent data is sent at the TCP level. `Deno.serve` operates at the higher HTTP level. We need a TCP server instead. Let’s create one of those. `Deno.listen` can do that:
```typescript
const handle = async (conn: Deno.TcpConn) => {
  const buffer = new Uint8Array(1024);
  while (true) {
    const read = await conn.read(buffer);
    if (read === null) break;
    if (read === 0) continue;
    const request = new TextDecoder().decode(buffer.subarray(0, read));
    console.log(request);
  }
};

const listener = Deno.listen({ port: 8000 });
for await (const conn of listener) {
  handle(conn);
}
```
Sending the same `curl` request to the TCP server will log:

```
CONNECT example.com:443 HTTP/1.1
Host: example.com:443
User-Agent: curl/8.7.1
Proxy-Connection: Keep-Alive
```
Success! This is a raw HTTP request. Each line ends with `\r\n`, i.e. CRLF (carriage return, line feed), with an empty CRLF following the final header.
Now let’s respond in the affirmative again within `handle`:
```typescript
if (request.startsWith("CONNECT")) {
  await conn.write(new TextEncoder().encode(
    "HTTP/1.1 200 OK\r\n\r\n",
  ));
}
```
Testing again with `curl` we now get a second request log:

```
�/9NW�V{B����b̨̩̪�0�,�(�$��:�- ��)?w)Պ�k9������=5���/�+ [...]
```
Success! This nonsense is part of the TLS handshake. `curl` is trying to connect securely to `https://example.com` because we’ve responded `200 OK` to the initial `CONNECT` request. We were supposed to open a TCP connection and proxy this data, not log it to the console. It seems cruel to make `curl` wait in perpetuity so let’s fix that next.
Step one is to yolo a regular expression to parse the request line. Look away now:

```typescript
const { 1: hostname, 2: port } = request.match(/^CONNECT (.+):(\d+) /)!;
```

Lovely stuff. Just as Tim Berners-Lee envisioned.
Next we can use `Deno.connect` to open a TCP connection:

```typescript
const counter = await Deno.connect({
  hostname,
  port: Number.parseInt(port),
});
```
You might think `Deno.connectTls` should be used. I tried that initially. However, that would terminate TLS at the server, whereas we need to tunnel it through the server. Now we have `conn` and its counterpart `counter`, which are two `Deno.TcpConn` instances. They both have read and write methods. Let’s plug them together and see what happens.
Connecting Connections
At this stage I experimented and refactored a few times.
I decided a map would be useful to store a reference to both parties:

```typescript
type TcpConnMap = Map<Deno.TcpConn, Deno.TcpConn>;
const connMap: TcpConnMap = new Map();
```
I also created a `closeConnection` function. If one side closes, or errors, the other side should be closed immediately too.
```typescript
const closeConnection = (conn: Deno.TcpConn): void => {
  const counter = connMap.get(conn);
  connMap.delete(conn);
  try {
    conn.close();
  } catch { /* Don't care */ }
  if (counter) {
    closeConnection(counter);
  }
};
```
There is little to do in response to errors other than close the tunnel. That is why everything is wrapped in `try` statements. There can be a whole cascade of errors but it’s an effective solution to ignore them all.
In my first version I created a function to which I can hand off each connection. This function continues reading data and writing it to the other end.
```typescript
const proxyConnection = async (conn: Deno.TcpConn): Promise<void> => {
  try {
    while (true) {
      const buffer = new Uint8Array(1024);
      const read = await conn.read(buffer);
      if (read === null) break;
      if (read === 0) continue;
      const counter = connMap.get(conn);
      // Stop if the other side has already been closed and removed
      if (counter === undefined) break;
      await counter.write(buffer.subarray(0, read));
    }
  } catch {
    /* Not my problem */
  } finally {
    closeConnection(conn);
  }
};
```
I set the read buffer size to 1 KB (1024 bytes) as this is safely within the 1500 byte MTU of a standard network packet. I thought I was being clever. I left a multiline comment smugly explaining this choice. Look at me, I’m a network engineer now. See the next section on performance. I’m dumb. Spoiler: the code above is pure overhead.
With this scaffolding in place and errors safely ignored we can now plug the connections together. Once connected the `200 OK` response is sent and the tunnel should tunnel.

```typescript
connMap.set(conn, counter);
connMap.set(counter, conn);
proxyConnection(conn);
proxyConnection(counter);
await conn.write(new TextEncoder().encode(
  "HTTP/1.1 200 OK\r\n\r\n",
));
return;
```
We return from the initial `handle` because `proxyConnection` has taken over reading data. It’s now a self-contained system running in the background. Both connections read and write to each other with encrypted TLS data. That’s a complete HTTP tunnel proxy thingy!
Find the full code on GitHub 👈
Optimising Performance
I pay my ISP for symmetrical “Gigabit” but what I get is another matter. I ran a few tests to establish a benchmark without using my tunnel.
Today my upload is half what I pay for but that could be speedtest.net’s fault.
Anyway, I tested again with the tunnel. The Firefox Multi-Account Containers add-on is handy for testing HTTP proxy servers.
Multiple tests showed a slight drop in speed and increase in ping but nothing crazy. Testing for buffer bloat was inconclusive. I saw a mix of A and B grades. It seemed like my proxy worked just fine. This was encouraging until I tested on a Raspberry Pi 5. On the Pi download speeds were reasonable but upload speeds were abysmal: around 20 Mbps.
I tested again on a Docker VM on my Mini-ITX home server.
```yaml
services:
  deno:
    container_name: proxy
    image: denoland/deno:2.0.0
    command: deno run --allow-env --allow-net jsr:@dbushell/http-tunnel
    ports:
      - 0.0.0.0:3000:3000/tcp
```
Like the Raspberry Pi, download speeds were as expected but upload was 10x slower than it should be. I have a dockerised VPN client on the same VM providing an HTTP proxy so I know it can do more. I experimented on the Raspberry Pi by shrinking the buffer size. This did improve upload but also decreased download. A few tests hit equilibrium around 200 Mbps but nothing was stable.
Well that was disappointing!
I went outside for a walk to think it over.
Remember when I said I was using Deno because it has nice APIs? Nice web standard APIs like streams. That is when I remembered about `ReadableStream.pipeTo`. I can just pipe streams together without the intermediary buffer. I replaced the `proxyConnection` function with:
```typescript
conn.readable
  .pipeTo(counter.writable)
  .catch(() => closeConnection(conn));
counter.readable
  .pipeTo(conn.writable)
  .catch(() => closeConnection(counter));
```
And boom: problem solved. That’s the Pi 5 maxing out one CPU core.
I’m still not sure why upload was affected so badly but at least it works now. Without the buffer I can’t monitor the transfer rate within the code itself. Not that I was planning to, but it might have made for a nice extension. If I find time I will experiment with `pipeThrough`, although I suspect that will just introduce the same issue.
See my HTTP Tunnel GitHub repo for progress as I clean up and improve the code. If you have any ideas I’m on Mastodon @dbushell@fosstodon.org, or leave a GitHub issue.