Awesome article! Interesting related work is in , where we used DNS TTLs as a covert channel for passing data, without needing to control the domain(s) being used. While developing that covert channel, we found a variety of idiosyncrasies in the client-side DNS infrastructure and discussed them in . Some devices report an erroneously high TTL, some unnecessarily shorten the TTL, some represent entire clusters of DNS resolvers with interesting properties, and so on. Based on your work, it appears that over the past five years the number of open resolvers has dropped dramatically, from ~30M to ~3M.
Your email response really is indicative of some of the folks that get cranky when you send them packets :)
I wish I had a more insightful comment, but I'll just say this:
I love posts like this where someone applies a theoretical concept in a fun and interesting (even if not practical) way.
A couple of other silly projects that attempt to store data without using disk space (some cheating required):
πfs (a file system)
0byte (a programming language)
Reminds me of this, one of my all-time favorites:
Guessing date as circa 2003. Could be wrong.
As for DNS, djbdns can store arbitrary bytes in RRs (e.g., TXT), encoded as octal escapes. For example, a modified dnstxt can print formatted text stored in TXT records, with linefeeds, etc.
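For illustration, here's a minimal sketch (this is my own hypothetical decoder, not djbdns's actual code) of turning `\nnn` octal escapes back into raw bytes, the way djbdns data files represent arbitrary octets:

```python
import re

def decode_octal_escapes(s: str) -> bytes:
    # Each \nnn escape (three octal digits) becomes one raw byte;
    # everything else passes through unchanged.
    parts = re.split(r"(\\[0-7]{3})", s)
    out = bytearray()
    for p in parts:
        if len(p) == 4 and p.startswith("\\"):
            out.append(int(p[1:], 8))  # e.g. \012 -> 0o12 -> newline
        else:
            out.extend(p.encode("latin-1"))
    return bytes(out)

print(decode_octal_escapes(r"line one\012line two"))
```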
Please don't. There's a perfectly lovely naturally emerged digital life form living in the spaces in between on the Internet, and this would threaten their habitat. Sure they haven't figured out we exist yet, and are certainly a long way from being able to communicate with us, but they seem kind to one another and I'd hate to see their evolution displaced.
Even "unlimited" is bound by memory/storage, probably with an LRU eviction scheme. So unless your stored data is hot, or their storage is very large, it might not stay around long.
You'd need a background worker that periodically reads all the data to keep things in cache (like a RAID or SSD background check).
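A sketch of such a refresher, assuming you already have a list of (resolver, record name) pairs and some `fetch` callable that re-queries a record against a resolver (all names here are made up for illustration):

```python
import time
import threading

def refresh_all(entries, fetch):
    """Re-read every stored chunk so the resolver's LRU cache keeps it hot.

    `entries` is an iterable of (resolver_ip, record_name) pairs;
    `fetch` issues a DNS query for record_name against resolver_ip.
    """
    for resolver, name in entries:
        try:
            fetch(resolver, name)
        except OSError:
            pass  # resolver unreachable; that chunk may already be gone

def start_refresher(entries, fetch, interval=300.0):
    """Run refresh_all every `interval` seconds in a daemon thread,
    like a RAID/SSD background scrub for cached DNS records."""
    def loop():
        while True:
            refresh_all(entries, fetch)
            time.sleep(interval)
    t = threading.Thread(target=loop, daemon=True)
    t.start()
    return t
```

The interval would have to stay comfortably below both the record TTL and however quickly the resolver's cache churns.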
Super fun article. I also like to see a "real" implementation of crazy ideas like this.
Can anyone confirm whether Microsoft DNS servers default to caching an unlimited amount of data? The article claims "Unlimited??" as the default for these systems. Eyeballing the pie chart, it looks like ~20% of the servers are running Microsoft, which could provide quite a lot of storage.
Wouldn't simply caching DNS SRV records do that?
Maybe; I haven’t fully thought it all through yet.
An enhancement of this technique could be used on one’s own private network of DNS resolvers for the specific purpose of acting like a highly available directory of private cloud nodes, storing the following information:
This would kind of be like a mashup of Apple Bonjour and this technique.
The big question: how long should the information be cached in such a setup, assuming the cloud itself is highly unreliable, so as to make the entire thing extremely fault tolerant?
Too bad he couldn't use FUSE. Would be nice to do `ls` and other commands with this.
Yes! Reading the description of DNSFS, I was sure Dan Kaminsky had done something like this years ago, but I couldn't track it down. He has done a lot of things with DNS.
While this is an interesting use, abusing DNS in a similar way has been a long-known (15-year-old) security vulnerability. For example, OzymanDNS. Even then, that was just one of the first published exploits; people had been performing DNS tunneling for some time.
There are detectors of DNS abuse that I imagine the people who actually would store files in DNS would not want pointed at their files.
People are already sneaking data through DNS in both directions. Here's a quick example from a year ago that popped in my head first: http://4lemon.ru/2017-01-17_facebook_imagetragick_remote_cod...
This is very neat as well. Still trying to understand it.
I have tested various use cases for Iodine, which works great unless you are blocking all outbound DNS traffic.
FYI re: PoC
NS for hacker.toys not responding
I've typically blocked outgoing DNS requests to arbitrary resolvers on every network I've managed, which disables the use of this FS.
Reason being, if users on my network are using resolvers other than my own, they can resolve all sorts of domains I would have otherwise blackholed.
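For example, forcing clients through the local resolver might look like this (a rough iptables sketch; 192.0.2.53 stands in for your own resolver's address):

```
# allow DNS only to the local resolver, drop everything else on port 53
iptables -A FORWARD -p udp --dport 53 -d 192.0.2.53 -j ACCEPT
iptables -A FORWARD -p tcp --dport 53 -d 192.0.2.53 -j ACCEPT
iptables -A FORWARD -p udp --dport 53 -j DROP
iptables -A FORWARD -p tcp --dport 53 -j DROP
```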
Controlling network access on DNS level seems pretty ineffective to me.
Especially with things like Google DNS over HTTPS and https://github.com/pforemski/dingo ...
Oh, I'm with you; you've got to put other controls in place. It's still in my basic ACL for every network, because it's one of the first things users will do to circumvent controls.
It's also not uncommon to not use the default DNS settings of a network.
Doing this sounds like a good way to increase the noise-to-signal ratio in your support calls...
Pretty much a 100% waste of time, I think. Users can easily just use raw IP addresses, right?
HTTP/1.1 servers need the host name in the request so that a single IP can host multiple domains that resolve to it. If you just go to the IP address, you get an error or a default host. It should work fine with most other protocols, though.
Adding to what others say here: if you have/know the ip address, you probably also know the host name. There's nothing magical about:
# from memory, syntax might not quite work
telnet 22.214.171.124 80
With HTTP/1.0, blocking/filtering IPs was enough; with 1.1 you need a proxy. With TLS/SSL, you have the choice between (having the capability to) decrypt everything or filtering nothing. (Obviously IP-level filtering works, but it's a little crude in an HTTP/1.1 world; ditto for HTTP/2, etc.)
Add entry to /etc/hosts (or the windows equivalent), navigate in browser.
Too high a hurdle for your average user, though, in which case blocking sites at the DNS resolver works.
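The hosts-file trick mentioned above is just one line, for what it's worth (the address and domain here are placeholders):

```
# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows)
203.0.113.5   blocked-site.example
```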
I'm pretty sure you can send a request to an IP address with the host name in the request.
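Right. A minimal sketch of that in Python: dial the raw IP, then set the Host header yourself so the server can pick the right virtual host (the IP and host name below are placeholders):

```python
import socket

def build_request(host: str, path: str = "/") -> bytes:
    # The Host header is what lets the server choose the right
    # virtual host, even though we connected to a raw IP address.
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode("ascii")

def fetch_by_ip(ip: str, host: str, path: str = "/") -> bytes:
    with socket.create_connection((ip, 80), timeout=5) as s:
        s.sendall(build_request(host, path))
        chunks = []
        while data := s.recv(4096):
            chunks.append(data)
    return b"".join(chunks)

# e.g. fetch_by_ip("203.0.113.5", "blocked-site.example")  # placeholders
```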
Wish I had more to add than: "This is so neat!"
Seems like this would be a good way to circumvent web filters that block remote file services (as long as they allow DNS over TCP or UDP).
How would one restrict this capability from an administrative perspective?
Fun article, kudos!
Just a tiny correction: RIPE Atlas' reliability tags (e.g., "-stable-Xd") have nothing to do with the probe "changing the public IP address once a day". Those filters simply measure the probe's uptime over different time windows.
In fact, the "-stable-1d" tag you mentioned would be true even for probes that have been down "up to 2h" over the last day.
You can use the dig utility to see if a DNS server is recursive. Just do the scan in two steps: one major port scan using masscan, netscan, etc., then a smaller scan of the IPs with port 53 open to see whether they are recursive. You'll see this in dig's output if the server is not recursive:
;; WARNING: recursion requested but not available
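If you'd rather not shell out to dig, the same check can be sketched in Python: set the RD (recursion desired) bit in a hand-built query and test the RA (recursion available) bit in the reply. This is a rough sketch, not a full DNS client; the probe name is a placeholder:

```python
import socket
import struct

def build_query(qname: str, qid: int = 0x1234) -> bytes:
    # Header: id, flags (RD=1), QDCOUNT=1, then the question section.
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    labels = b"".join(
        bytes([len(part)]) + part.encode("ascii")
        for part in qname.split(".")
    ) + b"\x00"
    return header + labels + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN

def recursion_available(response: bytes) -> bool:
    # RA is bit 0x0080 of the flags word (bytes 2-3 of the header).
    (flags,) = struct.unpack(">H", response[2:4])
    return bool(flags & 0x0080)

def is_open_resolver(ip: str, probe_name: str = "example.com") -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(3)
        s.sendto(build_query(probe_name), (ip, 53))
        reply, _ = s.recvfrom(512)
    return recursion_available(reply)
```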
I'm surprised too, since I'm running it and it dies every few days.
dnsmasq is very popular with SOHO routers.
And just about every mobile device (hotspot mode)
I'm surprised at the marketshare dnsmasq has, I would've thought BIND and dnsmasq numbers to be flipped.
Ha! Combine this idea with my proof-of-concept CDN53 Chrome extension and it would be serving websites directly from others' DNS resolvers =:)
Great article. I've noticed a trend: anything which requires masscan is probably going to be fun/interesting.