When cybersecurity teams think about the Domain Name System (DNS) at all, they’re most likely to think of it in terms of external web servers, not the internal servers that sit behind the DMZ.
There’s a built-in assumption that most malicious activity is associated with domains out in the wild west of the internet. That’s why so much effort is put into protecting external network traffic. Everyone wants to make sure that the IP addresses and web pages they're connecting to through public DNS servers are legit.
To make this happen, security teams put all kinds of filters and firewalls in place to secure external DNS queries. They want the ability to monitor, redirect, or simply block that traffic before an outside connection is made. They want to check the patterns of DNS queries and guard against the ones that indicate inappropriate or malicious activity.
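One common pattern check is flagging query names that look machine-generated, since DNS tunneling and domain-generation algorithms tend to produce long, high-entropy labels. Here is a minimal, hypothetical sketch of that idea; the thresholds and example domains are illustrative assumptions, not values from any real product or dataset:

```python
import math
from collections import Counter

def label_entropy(label: str) -> float:
    """Shannon entropy of a DNS label, in bits per character."""
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_suspicious(qname: str, entropy_threshold: float = 3.5,
                     length_threshold: int = 40) -> bool:
    """Flag query names whose longest label is unusually long or random-looking.

    The thresholds here are illustrative guesses; a real detector would tune
    them against observed traffic and combine many more signals.
    """
    labels = [l for l in qname.rstrip(".").split(".") if l]
    longest = max(labels, key=len)
    return len(longest) > length_threshold or label_entropy(longest) > entropy_threshold

print(looks_suspicious("www.example.com"))                              # False
print(looks_suspicious("a9x2k1q8z3j7m4n6p0r5t8v2b6c1d4.evil.example"))  # True
```

A filter like this is only a heuristic, but it shows why security teams want visibility into the queries themselves, not just the destination IPs.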
All of this is necessary. External traffic needs to be secured, and (in our humble opinion) DNS is usually the best way to do it.
Yet here’s a hard fact: the majority of DNS queries never make it to the outside internet. Most of the traffic on your corporate network is composed of internal, “east-west” queries.
How do we know this?
As the core DNS service for so many large enterprises, BlueCat has access to some pretty interesting data about how networks really operate. Looking across our customer base, we found that roughly 60% of all network traffic is actually directed at internal resources. The numbers are remarkably consistent – usually within just a few percentage points for every enterprise we serve.
So what does this mean for network security?
First, it highlights an urgent need for visibility into internal network traffic. The downside of most boundary-level filters and firewalls is that they can’t see what’s going on inside the network.
It’s a network architecture issue. Recursive servers sit between client devices and the network boundary. When filters and firewalls look back into the network, they can only see the last hop server. The internal DNS servers, internal IP addresses, and devices making DNS queries are essentially invisible.
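The attribution problem above can be shown with a toy model. The addresses below are made up for illustration; the point is simply that every outbound query carries the recursive resolver's source IP, so a boundary sensor can never see the originating client:

```python
# Toy model of last-hop visibility (all IPs and domains are hypothetical).
RESOLVER_IP = "10.0.0.53"  # internal recursive DNS server, the "last hop"

def client_query(client_ip: str, qname: str, boundary_log: list) -> None:
    """A client queries the internal resolver, which forwards upstream.

    The boundary sensor logs the forwarded packet, whose source address
    is the resolver -- the real client IP never crosses the boundary.
    """
    boundary_log.append({"src": RESOLVER_IP, "qname": qname})

log = []
client_query("10.1.4.17", "malware-c2.example", log)
client_query("10.1.9.42", "updates.example", log)

# Every query appears to come from the same place:
print({entry["src"] for entry in log})  # {'10.0.0.53'}
```

Two different devices, one of them possibly compromised, are indistinguishable from the boundary's point of view.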
That lack of visibility might be excusable if the number of internal queries were small. But it isn’t. The fact that a majority of network queries never even reach external-facing security sensors exposes a significant weakness in the “set it and forget it” mentality associated with boundary-level security systems.
Second, security teams need more than just visibility into internal network activity – they need the ability to act based on what they find. A sensor alone will tell you that a problem exists, but it doesn’t necessarily help with a solution.
At the network boundary, this challenge is solved relatively easily. You simply place your security system on an external-facing server and let it run.
For internal queries, however, there’s the question of where control should be exercised. You can enforce policies at strategic internal choke points, but that would require extensive deployments across a constantly changing network topography. You can use on-device agents, but those have performance implications and aren’t always an option for IoT and mobile devices. Malicious software has a way of finding these gaps.
The quest for visibility and control of internal network traffic naturally highlights the need for zero trust security systems across the enterprise. The underlying assumption behind boundary-level filters and firewalls is that everything on the outside is inherently suspicious, but everything inside trusted internal networks is fine.
That’s not to say that the same level of suspicion should be assigned to internal and external traffic – clearly there are priorities when it comes to assigning resources to security challenges, and most will naturally default toward external protections.
At the same time, the damage caused by insider and advanced persistent threats only continues to grow. In both cases, only a comprehensive picture of who can gain access to internal network traffic and how they use it can prevent significant reputational damage and data loss. No security administrator can afford to overlook the implications of malicious activity inside the network. Everyone should be looking to prevent unauthorized access to critical data.
Securing internal network traffic through DNS
It’s easy enough to sound the alarm bell about 60% of your network being at risk, but harder to do something about it. This is where BlueCat’s unique approach to DNS security comes in.
We mentioned before that BlueCat got that statistic about internal traffic from an analysis of the DNS traffic it handles every day across its large, diverse customer base. It’s that very position on the network, and the role we play in directing all network traffic, that gives us the ability to solve the challenge of visibility and control.
The recursive layers that prevent visibility from the network boundary can also provide a great deal of visibility at the client level. By acting as the “first hop” recursive server for every network query, BlueCat sees everything coming off a client device – both internal and external – without the need for an on-device agent. That query data holds a ton of valuable intelligence which can provide insights into what’s going on across the enterprise, all implemented with a very light touch.
That same position on the network allows security teams to act on every DNS resolution before it goes anywhere. Even more importantly, BlueCat also provides insight into the source of malicious queries, allowing administrators to mitigate infected devices directly rather than trying to connect the dots through multiple layers of data. Our customers rave not only about the ability to shut down traffic from a compromised client, but also about the fact that they can do it in real time, not after days of data analysis.
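The first-hop decision point described above can be sketched in a few lines. This is not BlueCat's implementation – real resolvers use a policy engine such as Response Policy Zones – and the domains, IPs, and blocklist here are invented for illustration. The point is that the resolver both enforces policy and records the real client address in one step:

```python
# Hypothetical first-hop policy sketch (illustrative domains and IPs only).
BLOCKLIST = {"malware-c2.example", "exfil.example"}

def resolve(client_ip: str, qname: str, audit: list) -> str:
    """Per-query decision: block flagged domains, otherwise resolve.

    Because the first-hop resolver talks directly to the client, the audit
    trail names the offending device -- no cross-log correlation needed.
    """
    domain = qname.rstrip(".").lower()
    action = "blocked" if domain in BLOCKLIST else "allowed"
    audit.append({"client": client_ip, "qname": domain, "action": action})
    return "NXDOMAIN" if action == "blocked" else "resolved"

trail = []
print(resolve("10.1.4.17", "malware-c2.example.", trail))  # NXDOMAIN
print(resolve("10.1.9.42", "intranet.example", trail))     # resolved
print(trail[0]["client"])  # 10.1.4.17 -- the compromised device, identified directly
```

Contrast this with a boundary-only view, where the same blocked query would be attributed to the resolver's IP and the infected device would still have to be hunted down.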
Learn more about how BlueCat leverages DNS management as part of its intelligent security offering.