The Stealth Server

By Simon, 10 June, 2023
[Image: DNS root tree]

I’ve been involved with breaking, fixing, and expanding DNS since the very early days of the World Wide Web. Several books have quoted my work on DNS.

Back in the day, one of the things I “invented” was the use of a “stealth server” to protect the root DNS zone of the Internet. However, I never really considered it an invention in the conventional, patentable sense that you normally think of. It was new and novel at the time, but it was also a major breach of the rules. It’s now regarded as a best practice and is well documented.

Here’s the description of the DNS stealth server according to IBM:

A stealth server is a server that answers authoritatively for a zone, but is not listed in that zone's NS records. Stealth servers can be used as a way to centralize distribution of a zone, without having to edit the zone on a remote name server. When the master file for a zone is on a stealth server in this way, it is often referred to as a hidden primary configuration. Stealth servers can also be a way to keep a local copy of a zone for rapid access to the zone's records, even if all official name servers for the zone are inaccessible.

Necessity is the mother of invention.

This is the story of how it all came about. It was 1994. I was an early customer of EarthLink in Los Angeles (how early? Sky Dayton, the founder/CEO at the time, whom I’d met through the Electronic Cafe, was still answering the phones). I couldn’t afford the thousands of dollars a month that a T1, or even a partial T1, cost, nor could I afford the price of the very new and very expensive commercial web hosting of the day. So I asked Sky if I could get an EarthLink account that allowed me to permanently connect a dial-up modem, and, oh, could I have a Class C IP address to go with it? This let me do my own web hosting, from home, on my own machines, for a reasonable price, which included surfing the web 24/7. Fortunately for me, he said yes.

One of the things you have to understand is the amount of bandwidth available at the time, or rather the lack of it. We’re talking a snail’s-crawl 19kbps initially, and up to 56kbps after upgrading the modem. We eventually figured out the modem had a practical maximum daily throughput of about 250MB. That’s right: all the data we could squeeze through the modem in 24 hours was 250MB. That’s it. It might not seem like much now, but my Mac at the time only had an 80MB hard drive.
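For a sense of scale, a quick back-of-the-envelope check (my arithmetic here, not a measurement from back then) shows why ~250MB was a realistic daily ceiling:

```python
# Rough ceiling on daily modem throughput. The 56kbps figure is from the
# story above; everything else is simple arithmetic.
bits_per_second = 56_000
bytes_per_second = bits_per_second / 8           # 7,000 bytes/s
seconds_per_day = 24 * 60 * 60                   # 86,400 s
theoretical_mb = bytes_per_second * seconds_per_day / 1_000_000

print(f"theoretical ceiling: {theoretical_mb:.0f} MB/day")
# prints: theoretical ceiling: 605 MB/day
# The observed ~250MB/day is well under half of that, which is what line
# noise, protocol overhead, and retransmissions will do to you.
```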

Fortunately, the EarthLink account came with a whole 1MB of web hosting space (not a typo, it really was 1MB), and with the Class C IP address space, they also offered to provide secondary DNS for me. So we carefully uploaded all our most popular website graphics into the 1MB of hosting space on EarthLink. That way, all we had to do was squeeze a small HTML file from our servers at home through the modem, and the graphics would all be served from EarthLink’s web server on their much greater bandwidth.

The Class C IP address space allowed all the computers at home to have a public IP address. This also meant naming them, which is where DNS comes in. As an Apple developer at the time, we were testing out a lot of new software, including a DNS server called MacDNS. This was a privately developed app (by The Jourvian Group) which was freeware at the time. Right in the middle of all this, Apple announced their own DNS server for their Workgroup Server and called their unreleased software MacDNS as well. I wrote a cease and desist for the owner of MacDNS to file against Apple, and I’m told they came to a mutually agreeable arrangement.

Anyhow, I was testing multiple DNS servers at the time, including MacDNS, MIND, and QuickDNS. I had set up MacDNS as my primary DNS server, behind the modem, and used EarthLink’s two DNS servers as secondaries. Then, as the traffic over the modem increased, I realized I could offload all the public DNS requests onto EarthLink’s DNS servers simply by no longer advertising my primary DNS server (i.e. removing the public NS records for it). This would save a ton of bandwidth going through our poor little modem. And that’s how the “stealth server” (or “hidden master”) for DNS was born.
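In modern BIND terms, a hidden primary looks something like the sketch below. The zone name, addresses, and server names are placeholders for illustration (the original setup used MacDNS, which predates this syntax entirely); the key move is that the zone’s NS records list only the secondaries, never the primary itself:

```conf
# named.conf fragment on the hidden primary (illustrative, not the 1994 setup)
zone "example.com" {
    type primary;
    file "example.com.zone";
    allow-transfer { 203.0.113.10; 203.0.113.11; };  # zone transfers to the two secondaries only
    also-notify   { 203.0.113.10; 203.0.113.11; };   # push NOTIFYs when the zone changes
    notify explicit;                                  # notify only the also-notify list
};

; example.com.zone -- the primary has no NS record of its own,
; so the public only ever queries the secondaries:
example.com.  IN  NS  ns1.example.net.
example.com.  IN  NS  ns2.example.net.
```

With notify in place, updates still flow out to the secondaries immediately, but the public never needs to know the primary exists.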

I could still update the primary DNS server, and any changes I made would still propagate out to the secondary DNS servers. This saved me a lot of modem bandwidth. Not only that, it was also much more secure. If, for any reason, EarthLink’s DNS servers got hacked, or their cache got polluted, the bad data would only live until the zone’s TTL (time to live) expired, and then all the data would be refreshed and replaced from the (now stealth/hidden) primary DNS server. It was self-healing.
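The self-healing window is governed by the zone’s SOA record, which tells the secondaries how often to re-sync. A sketch with illustrative values (not the actual 1994 zone):

```conf
; SOA record on the hidden primary; names and values are illustrative.
example.com. 3600 IN SOA ns1.example.net. hostmaster.example.com. (
    2023061001  ; serial  -- bumped on every edit; a higher serial triggers a transfer
    3600        ; refresh -- secondaries re-check the primary every hour
    600         ; retry   -- re-poll interval when the primary is unreachable
    1209600     ; expire  -- secondaries stop answering after 2 weeks without contact
    300 )       ; minimum -- negative-caching TTL (per RFC 2308)
```

In other words, even if a secondary’s data is tampered with, the damage lasts at most until the next successful refresh from the hidden primary (an hour in this sketch), plus whatever TTL resolvers have already cached.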

While all this was going on, I was also collaborating with the Internet Engineering Task Force (IETF), participating in the Newdom mailing list, and hosting the Shared-TLD mailing list, which I had set up to explore sharing domain name registry data amongst a group of peers in order to break the registry monopoly. In April 1996 I’d written an Internet Draft entitled “Top Level Domain Classification and Categorization” (intended as a replacement for RFC 1591), which was very unofficially coordinated with Jon Postel’s Draft Postel; the goal of both documents was to open up the domain name space to new top-level domains. I was also a root server operator for the Open Root Server Confederation (ORSC), which tried unsuccessfully to bid on the IANA contract; the contract instead went sole-source to ICANN.

It was on one of my regular trips to see Jon Postel at IANA that I told him, and Bill Manning, about my success with my DNS stealth server. Jon’s reaction was initially “you can’t do that”, referring to the fact that a primary DNS server declares itself authoritative for a zone with a flag it broadcasts. Then he laughed and said, “I guess we can fudge the authoritative bit.” And that’s exactly what they did.

A couple of weeks later, Network Solutions (now Verisign), who hosted the “A” root server, announced they were using a stealth DNS server to protect all of the root DNS servers. What they had done was firewall off their stealth server from the public internet, making it unreachable except by the 13 root servers it would update. They even made it a point of pride in tours of their facilities.

I didn’t think the idea worthy of an Internet Draft or RFC at the time, which is why stealth servers weren’t mentioned until RFC 2182, which examines the roles and public/private availability of DNS servers, came out in July 1997.

In 2010, InfoWorld came out with an article arguing why you should use a stealth master DNS server.