IPv6 PAC Support

If you’re not familiar with a Proxy Auto-Configuration (PAC) file, it’s a JavaScript function that determines whether web requests should be forwarded to a proxy server or not.

There’s a minimal set of JavaScript helper functions defined that you can use to conditionally send your web traffic through a proxy. One of those functions, isInNet(), returns true if an IPv4 address falls within a given network range. Unfortunately, there are no functions in this standard set that provide similar support for IPv6 addresses.

There are many IPv6 parsing libraries on GitHub, but all of them depend on at least a few npm packages. I’m normally not one to complain about npm dependencies, but this is one instance where the dependency model isn’t really tenable: a PAC file has to be a single, self-contained script.

Here I’ve put together a copy/pastable inIPv6Range() function that provides limited functionality for determining whether a given IPv6 address is in a predetermined range.

function expandIPv6( ipv6 ) {
	const parts = ipv6
		.split( ':' )
		.filter( p => !! p );
	const zerofill = 8 - parts.length;

	// Fill in :: with the missing zero groups
	return ipv6
		.replace( '::', `:${ '0:'.repeat( zerofill ) }` )
		.replace( /^:|:$/g, '' );
}

function parseIPv6( ipv6 ) {
	ipv6 = expandIPv6( ipv6 );

	// Check that this is a valid IPv6 address
	const groups = ipv6.split( ':' );
	if ( groups.length !== 8 ) {
		return false;
	}

	const parts = [];
	for ( const part of groups ) {
		if ( ! /^[0-9a-f]{1,4}$/i.test( part ) ) {
			return false;
		}

		let bin = parseInt( part, 16 ).toString( 2 );
		while ( bin.length < 16 ) {
			// Left pad each group to 16 bits
			bin = '0' + bin;
		}

		parts.push( bin );
	}

	// Return the address as a 128-bit binary string. A Number can't hold
	// 128 bits without losing precision, but equal-length binary strings
	// compare correctly with <, >, and friends.
	return parts.join( '' );
}

function inIPv6Range( ipv6, low = '::', high = 'ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff' ) {
	ipv6 = parseIPv6( ipv6 );
	low = parseIPv6( low );
	high = parseIPv6( high );

	if ( false === ipv6 || false === low || false === high ) {
		return false;
	}

	return ipv6 >= low && ipv6 <= high;
}

If you clean this up or add CIDR support, let me know!
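If you do want CIDR support, here’s a rough sketch of one way to start, converting a CIDR prefix into the low/high bounds that inIPv6Range() accepts. The helper names here are my own, and this relies on BigInt, so it won’t run in older PAC interpreters:

```javascript
function ipv6ToBigInt( ipv6 ) {
	// Expand '::' into the missing zero groups
	const [ head = '', tail = '' ] = ipv6.split( '::' );
	const headGroups = head ? head.split( ':' ) : [];
	const tailGroups = tail ? tail.split( ':' ) : [];
	const missing = 8 - headGroups.length - tailGroups.length;
	const groups = ipv6.includes( '::' )
		? [ ...headGroups, ...Array( missing ).fill( '0' ), ...tailGroups ]
		: headGroups;

	// Fold the eight 16-bit groups into a single 128-bit integer
	return groups.reduce( ( n, g ) => ( n << 16n ) | BigInt( parseInt( g, 16 ) ), 0n );
}

function bigIntToIPv6( n ) {
	const groups = [];
	for ( let i = 7; i >= 0; i-- ) {
		groups.push( ( ( n >> BigInt( i * 16 ) ) & 0xffffn ).toString( 16 ) );
	}
	return groups.join( ':' );
}

function cidrToRange( cidr ) {
	const [ prefix, bits ] = cidr.split( '/' );
	const prefixLength = Number( bits );

	// The netmask: prefixLength ones followed by zeros, 128 bits wide
	const ones = ( 1n << 128n ) - 1n;
	const mask = ( ones << BigInt( 128 - prefixLength ) ) & ones;

	const low = ipv6ToBigInt( prefix ) & mask;
	const high = low | ( ~mask & ones );
	return [ bigIntToIPv6( low ), bigIntToIPv6( high ) ];
}
```

For example, cidrToRange( '2001:db8::/32' ) returns [ '2001:db8:0:0:0:0:0:0', '2001:db8:ffff:ffff:ffff:ffff:ffff:ffff' ], which can be spread straight into inIPv6Range().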

SwiftUI Webview with a Progress Bar

Update (Nov 16, 2021): This now includes better constraint management, based on feedback on GitHub.

If you’re not familiar, SwiftUI is Apple’s new framework for building user interfaces in Swift apps. If you squint, the declarative syntax is vaguely reminiscent of contemporary JavaScript frameworks like React.

With a declarative Swift syntax that’s easy to read and natural to write, SwiftUI works seamlessly with new Xcode design tools to keep your code and design perfectly in sync. Automatic support for Dynamic Type, Dark Mode, localization, and accessibility means your first line of SwiftUI code is already the most powerful UI code you’ve ever written.

developer.apple.com/xcode/swiftui/

The API is still fairly immature. Once you get beyond the sample apps, it’s hard to build anything real with the components that come out of the box. Components you’d expect if you’re familiar with UIKit, like WKWebView and UIActivityIndicatorView, don’t exist in SwiftUI yet.

Luckily, it’s not that hard to create them yourself.

To get started with a basic view, you need a type that conforms to UIViewRepresentable. A simple Webview could look like this:

struct Webview: UIViewRepresentable {
	let url: URL

	func makeUIView(context: UIViewRepresentableContext<Webview>) -> WKWebView {
		let webview = WKWebView()

		let request = URLRequest(url: self.url, cachePolicy: .returnCacheDataElseLoad)
		webview.load(request)

		return webview
	}

	func updateUIView(_ webview: WKWebView, context: UIViewRepresentableContext<Webview>) {
		let request = URLRequest(url: self.url, cachePolicy: .returnCacheDataElseLoad)
		webview.load(request)
	}
}

Progress Bar Example

It’s also possible to wrap a UIViewController by conforming to UIViewControllerRepresentable.

For example, a view controller that renders a web view with a progress bar:

class WebviewController: UIViewController, WKNavigationDelegate {
	lazy var webview: WKWebView = WKWebView()
	lazy var progressbar: UIProgressView = UIProgressView()

	deinit {
		self.webview.removeObserver(self, forKeyPath: "estimatedProgress")
		self.webview.scrollView.removeObserver(self, forKeyPath: "contentOffset")
	}

	override func viewDidLoad() {
		super.viewDidLoad()

		self.webview.navigationDelegate = self
		self.view.addSubview(self.webview)

		self.webview.frame = self.view.frame
		self.webview.translatesAutoresizingMaskIntoConstraints = false
		self.view.addConstraints([
			self.webview.topAnchor.constraint(equalTo: self.view.topAnchor),
			self.webview.bottomAnchor.constraint(equalTo: self.view.bottomAnchor),
			self.webview.leadingAnchor.constraint(equalTo: self.view.leadingAnchor),
			self.webview.trailingAnchor.constraint(equalTo: self.view.trailingAnchor),
		])

		self.webview.addSubview(self.progressbar)
		self.setProgressBarPosition()

		webview.scrollView.addObserver(self, forKeyPath: "contentOffset", options: .new, context: nil)

		self.progressbar.progress = 0.1
		webview.addObserver(self, forKeyPath: "estimatedProgress", options: .new, context: nil)
	}

	func setProgressBarPosition() {
		self.progressbar.translatesAutoresizingMaskIntoConstraints = false
		self.webview.removeConstraints(self.webview.constraints)
		self.webview.addConstraints([
			self.progressbar.topAnchor.constraint(equalTo: self.webview.topAnchor, constant: self.webview.scrollView.contentOffset.y * -1),
			self.progressbar.leadingAnchor.constraint(equalTo: self.webview.leadingAnchor),
			self.progressbar.trailingAnchor.constraint(equalTo: self.webview.trailingAnchor),
		])
	}

	// MARK: - Web view progress
	override func observeValue(forKeyPath keyPath: String?, of object: Any?, change: [NSKeyValueChangeKey : Any]?, context: UnsafeMutableRawPointer?) {
		switch keyPath {
		case "estimatedProgress":
			if self.webview.estimatedProgress >= 1.0 {
				UIView.animate(withDuration: 0.3, animations: { () in
					self.progressbar.alpha = 0.0
				}, completion: { finished in
					self.progressbar.setProgress(0.0, animated: false)
				})
			} else {
				self.progressbar.isHidden = false
				self.progressbar.alpha = 1.0
				progressbar.setProgress(Float(self.webview.estimatedProgress), animated: true)
			}

		case "contentOffset":
			self.setProgressBarPosition()

		default:
			super.observeValue(forKeyPath: keyPath, of: object, change: change, context: context)
		}
	}
}

Then you can implement that as a UIViewControllerRepresentable like so:

struct Webview: UIViewControllerRepresentable {
	let url: URL

	func makeUIViewController(context: Context) -> WebviewController {
		let webviewController = WebviewController()

		let request = URLRequest(url: self.url, cachePolicy: .returnCacheDataElseLoad)
		webviewController.webview.load(request)

		return webviewController
	}

	func updateUIViewController(_ webviewController: WebviewController, context: Context) {
		let request = URLRequest(url: self.url, cachePolicy: .returnCacheDataElseLoad)
		webviewController.webview.load(request)
	}
}

You can see a working example of this view controller over on GitHub.

Highly Available Node

Updated 7/13/22: Reflects some changes to the gist that simplify the graceful shutdown logic.

At VIP, we run a highly available Node service that powers much of our platform. One challenge we see teams face is the question of how to scale a highly available API.

That’s a broad problem to solve, but let’s assume we already have adequate test coverage and that everything in front of the API is taken care of for us. We only care about things we can change about the Node app itself.

Our typical answer looks something like this:

  1. Use Node’s cluster module to fully take advantage of multiple CPUs
  2. Gracefully reload worker processes for deploys and uncaught exceptions

Node Cluster

Node’s cluster module uses child_process.fork() to create a new process where communication between the main process and the worker happens over a unix socket.

The net module’s server.listen() function hands most of the work off to the main process, allowing the child processes to act like they’re all listening on the same port.

HTTP Server Example

Let’s take a simple http server as an example. Here we have a server that listens on port 3000 by default and returns Hello World!. It also throws an uncaught exception 0.001% of the time to simulate a bug we haven’t accounted for.

/**
 * External dependencies
 */
const { createServer } = require( 'http' )

module.exports = createServer( ( req, res ) => {
	if ( Math.random() > 0.99999 ) {
		// Randomly throws an uncaught error 0.001% of the time
		throw Error( '0.001% error' )
	}

	res.end( 'Hello World!\n' )
} ).listen( process.env.port || 3000 )

Obviously a real server would be much more complex, but this toy version is adequate for our purposes. We can run it with node server.js and we’ll have an http server listening on port 3000.

The first thing we’ll do is use Node’s cluster module to start one copy of the server per CPU, which will automatically load balance between them.

#!/usr/bin/env node

/**
 * External dependencies
 */
const cluster = require( 'cluster' )

const WORKERS = process.env.WORKERS || require( 'os' ).cpus().length

if ( cluster.isMaster ) {
	for ( let i = 0; i < WORKERS; i++ ) {
		cluster.fork()
	}

	cluster.on( 'listening', ( worker, address ) => {
		console.log( 'Worker %d (pid %d) listening on http://%s:%d',
			worker.id,
			worker.process.pid,
			address.address || '127.0.0.1',
			address.port
		)
	} );
} else {
	const server = require( './server' )
}

This will start one copy of the server for each CPU in our system. The operating system will take care of scheduling these processes across the CPUs.

Graceful Reload

Now that we have multiple processes, we can gracefully reload these in case of errors and for deploys.

Errors

In case of errors, we terminate the worker process and spawn a new one. This is important because an uncaught exception means the process is now in an inconsistent state. In other words, an exception occurred that was not accounted for and we’re not sure what side effects that will have.

First, we’ll ensure that worker processes are restarted if any exit unexpectedly. In the isMaster branch:

cluster.on( 'exit', ( worker, code, signal ) => {
	if ( ! worker.exitedAfterDisconnect ) {
		console.log( 'Worker %d (pid %d) died with code %d and signal %s, restarting', worker.id, worker.process.pid, code, signal )
		cluster.fork()
	}
} )

Here worker.exitedAfterDisconnect would be true if we call worker.disconnect(), but false if the worker itself calls process.exit(). That becomes important in this next step, where we automatically terminate the worker process in the case of an uncaught exception.

process.on( 'uncaughtException', error => {
	console.log( error.stack )

	// Exit immediately, no need to wait for graceful shutdown
	process.exit( 1 );
} )

We terminate the process with process.exit( 1 ). Since there was some kind of uncaught error, we just want to terminate the worker and spawn a new one. There is no need to wait for graceful shutdown in this case.

Deploys

For deploys, we gracefully reload all the worker processes one at a time to avoid any downtime in the process.

In the worker, we watch for the disconnect event and call server.close(), which stops accepting new connections and invokes its callback once all active connections have closed; at that point the process terminates itself.

const server = require( './server' )
process.on( 'disconnect', () => {
	server.close( () => process.exit( 0 ) );
} );

Upon SIGHUP, we create one new worker for each active worker and gracefully shut down the old worker once the new one is ready to accept connections.

process.on( 'SIGHUP', () => {
	console.log( 'Caught SIGHUP, reloading workers' )

	for ( const id in cluster.workers ) {
		cluster.fork().on( 'listening', () => {
			gracefulShutdown( cluster.workers[ id ] )
		} )
	}
} )
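In practice, a reload is triggered by sending SIGHUP to the main process. For example, assuming the app was started as node index.js (adjust the pattern to match your setup):

```shell
# Send SIGHUP to the main (oldest) matching node process;
# the workers are then replaced one at a time.
kill -HUP "$(pgrep -of 'node index.js')"
```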

Gracefully shutting down a worker involves a few steps.

First, we call worker.disconnect() to trigger the disconnect event. As mentioned before, when all the connections are closed, the worker process will terminate itself. Since we want to ensure the worker stops within a reasonable timeframe, we force it to close with worker.kill() after SHUTDOWN_TIMEOUT (5 seconds by default).

const SHUTDOWN_TIMEOUT = process.env.SHUTDOWN_TIMEOUT || 5000
const gracefulShutdown = worker => {
	const shutdown = setTimeout( () => {
		// Force shutdown after timeout
		worker.kill();
	}, SHUTDOWN_TIMEOUT );

	worker.once( 'exit', () => clearTimeout( shutdown ) );
	worker.disconnect();
}

Upon SIGINT or ^C, we’ll perform a similar graceful shutdown routine. The only difference is that we don’t need to restart each worker this time.

process.on( 'SIGINT', () => {
	console.log( 'Caught SIGINT, initiating graceful shutdown' )

	for ( const id in cluster.workers ) {
		gracefulShutdown( cluster.workers[ id ] )
	}
} )

To prevent the initial SIGINT from propagating to the worker processes and immediately terminating them, we’ll handle the signal separately in the worker. The first SIGINT is ignored, but if you press ^C or otherwise send SIGINT a second time, all workers exit immediately, bypassing the graceful shutdown.

process.on( 'SIGINT', () => {
	// Ignore first SIGINT from parent

	process.on( 'SIGINT', () => {
		process.exit( 1 )
	} )
} )

I hope this was helpful. You can see the full example on GitHub.

Wire 1.5

There have been several recent updates to Wire focused on making the app more responsive and easier to use.

The original goal of Wire was to build an RSS reader that renders content in the format of the website, instead of a stylized view of the text. The whole idea is that a website is more than just what’s in the <content:encoded> tags of an RSS feed; it’s also the CSS and JavaScript that browsers render.

There are downsides to loading the URL of an article in a web view, though: namely, the overhead of downloading the article and then rendering it. On a fast connection, it’s noticeable. On a slow connection, it can be annoying.

To that end, the last couple of releases have been focused on improving that aspect of the experience. As of version 1.4, Wire downloads every article, which improves performance and makes offline viewing possible. As of version 1.5, articles are pre-rendered to make the transition from the article list to the web view as fast as if we were just displaying the text from the RSS feed.

I’ve been using this for a couple of weeks and the more responsive experience feels like magic.

Spying TVs are getting cheaper

But the most interesting and telling reason for why TVs are now so cheap is because TV manufacturers have found a new revenue stream: advertising. If you buy a new TV today, you’re most likely buying a “smart” TV with software from either the manufacturer itself or a third-party company like Roku.

Noah Kulwin in The Outline

It is so creepy when Roku TVs show a message to “continue watching from the beginning” when you’re watching something on an Apple TV. I assume the TV is constantly sending frames of whatever is on screen to Roku’s servers for analysis. It seems unlikely that the TV is capable of doing this recognition on its own.

The first time this happened, I finally broke down and bought a Raspberry Pi so I could set up Pi-hole.

I can’t believe this spying is not a huge story.