DNS Edge: Looking Back, and Looking Forward

Last updated on April 29, 2021.

It’s been just over one year since we released DNS Edge, BlueCat’s DNS-based security solution. In that time we’ve seen our customers use it to leverage their existing DNS infrastructure to get visibility, control, and detection capabilities they previously did not have. Over the past year we’ve also learned a lot from our customers, so we wanted to check in with our CTO, Andrew Wertkin, to get his perspective on the past 12 months as well as where he and the team plan to take DNS Edge next.

What major highlights for DNS Edge stand out for you after its first year?

A couple of things really stand out for me. The first has been our ability to scale: Edge has easily scaled to every milestone we expected to hit based on our testing. We've pushed six updates of the product, so we've been able to innovate rapidly on top of critical infrastructure. That required a great deal of protective engineering to ensure that our customers continue to trust us to deliver unbelievably reliable DNS, and we will never take that lightly. Yet we've been able to innovate pretty rapidly while keeping that promise.

The other thing that we’re really proud of is the bigger promise of Edge. We always envisioned that Edge would be able to deliver unique DNS solutions that weren’t only focused on security. A good example of that this year was our multi-namespace solution, giving our customers a very smart way to deal with mixed zones. It’s becoming more and more of an issue as they break out to the internet from more locations, implement SD-WAN strategies, or adopt Office 365… and of course everything else that is going on out there. So we’ve been able to leverage Edge to drive core DNS innovation as well.
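
To make the multi-namespace idea a bit more concrete, here is a rough sketch of namespace-aware forwarder selection. The domains, zones, and resolver addresses below are purely illustrative placeholders, not BlueCat’s implementation:

```python
# Rough sketch of namespace-aware forwarder selection (hypothetical domains
# and resolver addresses; not BlueCat's implementation). Each namespace maps
# a set of domain suffixes to the resolvers that should answer for that
# slice of the namespace.
NAMESPACES = [
    # (matching suffixes, forwarders to use)
    (("corp.example.com",), ["10.0.0.53"]),                         # internal zones
    (("office365.example", "sharepoint.example"), ["192.0.2.53"]),  # direct internet breakout
]
DEFAULT_FORWARDERS = ["10.0.0.53", "192.0.2.53"]                    # fallback order

def pick_forwarders(qname: str) -> list:
    """Return the forwarder list for the namespace that matches qname."""
    name = qname.rstrip(".").lower()
    for suffixes, forwarders in NAMESPACES:
        if any(name == s or name.endswith("." + s) for s in suffixes):
            return forwarders
    return DEFAULT_FORWARDERS

print(pick_forwarders("intranet.corp.example.com"))   # -> ['10.0.0.53']
print(pick_forwarders("mail.office365.example."))     # -> ['192.0.2.53']
```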

What impact have our customers had on DNS Edge?

Our customers are always coming back with interesting ideas. Their own review of the unique information Edge provides has led to suggestions on different ways we might analyze and present the data. For instance, things they thought were false positives have helped us look at improved methods to screen for them. We’re lucky to have a very engaged customer base who really like working with us to help make the product better. They’ve also suggested things that have shifted the way we think about the product.

Right now we’re developing some DNS traffic management capabilities; that’s an example where customers have come back and said “since this thing is on the edge of the network, maybe this is the right place to attack this problem”. I thought we would be focused on building core security capabilities for at least another year before we started diverging into more DNS-focused capabilities, but our customers have played a major role in where we’ve gone. I guess that’s the thing that surprised me most looking back – our earlier-than-expected investment in core DNS solutions in Edge while we enhance our security capabilities in parallel.

As you’ve talked with network and security teams about DNS Edge, who’s really getting excited by Edge? Do you see mutual interest and collaboration with these teams when it comes to DNS?

For sure, I mean it’s been changing quite rapidly. It used to be that the DNS team was owned by the network team or the server team, or in some cases the security team, and oftentimes it sits in the critical infrastructure team. But things are changing, and there are several reasons why. First, and most importantly, core networking and security are working much closer together. Second, automation and the rollout of new applications onto micro-segmented, or more generally segmented, networks has been a huge factor. You think about security beforehand and deploy architectures that meet your security requirements, as opposed to pen testing afterward and trying to plug holes. That requires joint effort much earlier, and I think the sophistication in the way security is thought about has changed, so DNS has become much more top of mind with our customers.

And why is that? I think there is a broad recognition that the data is interesting. The data is useful. It’s a good signal because anything that needs to reach a network endpoint has to use DNS to resolve an address. As a control point its efficiency is unquestionable: the ubiquity of the protocol and the efficiency of actually blocking something in DNS is pretty unique. In parallel, some of our customers are well aware that they have bought a lot of overlapping security solutions they don’t use enough of, and there’s a lot that’s on their network already that they’re not taking advantage of. Our belief with Edge has always been that this is a pretty critical layer in defense-in-depth, a pretty substantial layer that utilizes what you already need anyway. You need DNS, and the fact that you need DNS makes it critical to any sort of malware, C2, or other attack technique. Therefore, it becomes a critical control point. It doesn’t have to be any more complicated than that.
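
As a rough illustration of DNS as a control point, the sketch below shows the general shape of a policy check at the resolver: a queried name that matches a blocklist gets a sinkhole answer instead of its real address. The names, addresses, and blocklist are hypothetical, not Edge’s actual policy engine:

```python
# Rough sketch of DNS-layer policy enforcement (hypothetical names and
# addresses; not Edge's policy engine). A query matching the blocklist never
# reaches its real destination -- the client gets a sinkhole address instead.
BLOCKLIST = {"evil-c2.example", "malware-drop.example"}
SINKHOLE_IP = "192.0.2.1"   # documentation address standing in for a sinkhole

def answer(qname: str, resolve) -> str:
    """Return a sinkhole answer for blocked names, otherwise resolve normally."""
    name = qname.rstrip(".").lower()
    if any(name == d or name.endswith("." + d) for d in BLOCKLIST):
        return SINKHOLE_IP    # block: the C2 callback goes nowhere useful
    return resolve(name)      # allow: hand off to the normal resolution path

# Stand-in resolver for demonstration:
print(answer("evil-c2.example.", lambda n: "198.51.100.7"))   # -> 192.0.2.1
print(answer("www.example.com.", lambda n: "198.51.100.7"))   # -> 198.51.100.7
```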

Do teams you speak to immediately get the intrinsic value of where Edge lives, at the recursive layer, and therefore its ability to provide a unique view into internal traffic?

It’s interesting when we talk to customers about what data they assess already. Usually you can’t get the same answer from anybody at the table, and they’re not quite sure where they’re pulling it from. They’re not quite sure if they’re getting queries and answers. They’re not quite sure if the data is being cached before they get it, so are they losing a bunch of data? They’re not sure how it’s being stored. Are they just storing a number of unique pairs throughout the day, or are they storing the breadth of data that allows for frequency analysis and other things? And then there are bigger questions, like where would they apply blocking policy? The data is usually being captured and stored somewhere, but nobody really understands what’s being captured and how it’s being utilized. It requires a discussion, because it goes from a “hey, we already do that” to a much more interesting and eye-opening conversation. Most of the time we just have to show them, and we’re able to find some pretty interesting things in the network just doing that. It often creates “lightbulb” moments: customers look at the raw data and it helps them whiteboard policy and investigate things they otherwise wouldn’t have been able to see previously.
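
For a sense of why keeping the full breadth of query data matters, here is a small, hypothetical sketch of the kind of frequency analysis it enables; the log format and domains are invented for illustration:

```python
# Hypothetical sketch of frequency analysis over full query logs (invented
# log format and domains; not Edge's analytics). Domains seen only once
# across the whole client base are often worth a second look -- exactly the
# signal that disappears if only unique client/domain pairs are kept.
from collections import Counter

# Each record: (client_ip, queried_name, answer_ip)
query_log = [
    ("10.1.1.5",  "www.example.com",     "93.184.216.34"),
    ("10.1.1.9",  "www.example.com",     "93.184.216.34"),
    ("10.1.2.14", "login.example.com",   "93.184.216.35"),
    ("10.1.3.2",  "xj2k9f.badhost.test", "203.0.113.66"),   # one query, one host
]

domain_counts = Counter(name for _, name, _ in query_log)
rare_domains = [name for name, count in domain_counts.items() if count == 1]
print("Domains queried only once:", rare_domains)
```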

So enough looking back. What is the team focusing on now in terms of future development?

We’re doing some more work specifically around traffic management. We’re also doing research in our lab right now to develop and introduce enhanced analytics-based methods to identify signs of compromise on customer networks. Some of it is adding additional dimensions to the data, such as: where is the server on the other side of the query? Where is the DNS server, and where is the IP address that the service resolves to? Where is it geolocated? We’re using more and more reference data along those lines. To give you an example, we’ll be updating policies to support more sinkholing configurability and other types of remediation. We also want to get our users excited about the data in as many ways as we possibly can. We’re constantly looking at the usefulness of DNS data from a security standpoint, but also from any other standpoint that is relevant and actionable.
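
As a simplified illustration of adding dimensions like geolocation to the data, the sketch below enriches resolved answers with a stand-in reference lookup; the table, fields, and addresses are hypothetical, not Edge’s actual data pipeline:

```python
# Simplified sketch of enriching DNS answer data with geolocation/ASN
# reference data (the lookup table is a hypothetical stand-in for a real
# reference data source; not Edge's data pipeline).
GEO_DB = {
    "93.184.216.34": {"country": "US", "asn": "AS15133"},
    "203.0.113.66":  {"country": "ZZ", "asn": "AS64512"},   # unexpected destination
}

def enrich(record: dict) -> dict:
    """Attach geolocation/ASN context to a resolved answer record."""
    geo = GEO_DB.get(record["answer_ip"], {"country": "unknown", "asn": "unknown"})
    return {**record, **geo}

print(enrich({"qname": "xj2k9f.badhost.test", "answer_ip": "203.0.113.66"}))
# -> {'qname': 'xj2k9f.badhost.test', 'answer_ip': '203.0.113.66',
#     'country': 'ZZ', 'asn': 'AS64512'}
```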

Things are only going to get more interesting from here on out. Here’s to another great year of Edge, and many more! 
