Three solid protection techniques you can implement on your firewall to enhance network security
Are you wondering how you can improve your network defenses? Or maybe you are wondering how to get the most out of your next-generation firewall (NGFW)?
This post will cover specific techniques you can implement on your firewall to enhance your network defenses. These could be considered more advanced: they are for you if you've already deployed your firewalls and are looking to enable more features, or if you have a mature security posture and want to get more value from your network firewall investment.
"The firewall is dead," a lot of people say. "There's no use in getting unified threat management (UTM) licenses because everything is encrypted, so only endpoint protection matters." It is true that endpoint protection is vital to protecting against exploitation; however, there is still a need for a firewall. Even if you are not decrypting traffic, you can still enhance your perimeter and internal security by deploying firewalls. I feel the firewall will remain integral to network security for years to come, and we should continue to hammer home defense in depth, which includes firewalls, in cybersecurity education.
The techniques covered will encompass these goals of network firewalls:
Improving traffic filtering to prevent initial access, data exfiltration, and/or command and control.
Network segmentation to prevent lateral movement and device discovery.
Network traffic inspection to prevent layer 3, 4 and 7 evasion and reconnaissance.
We will mainly cover Palo Alto Networks and Fortinet as those are best of breed, but I will try to also provide information for other vendors in addition to general information. We will not be covering decryption.
First, a small disclaimer: whether you are a mature shop with thousands of firewalls or a small shop with one firewall, management should create the policy that determines what can and should be blocked at the application and filtering level. The network security administrator advises, helps craft the policy, and ultimately implements it; however, management is always the policy maker. Just wanted to get that out of the way.
Improving Traffic Filtering
As you might know, firewall rules or access control lists are processed top down. You'll generally have heavy-hitting rules at the top for administrative and (nanosecond) performance reasons. However, there are some other important rules you should have at the top of the rule set: your Geo-Filtering rules and your known malicious IP and domain block lists.
The reason you have these at the top is so that, before any forwarding rules are processed, traffic is checked to see whether it is malicious or violates your geographically allowed regions, and is therefore blocked.
Geo-IP, IP, and domain block lists are sometimes perceived as unreliable; however, the lists from the firewall vendors tend to be more accurate than ones curated by some random website. Some IP lists come from other security companies like Spamhaus or Barracuda, which can be considered trusted, but it's up to you how far you want to go and how trusting you are.
I wouldn't worry as much about false positives when it comes to the known risky IP addresses; Geo-Filtering deserves a bit more caution.
Geo-IP Filtering
Even if you are an international enterprise, there are likely adversarial countries you don't do business with and/or that are well known for security malpractice and as the origin of hacking activity. Therefore, on all of your firewalls you should have a Geo-policy to only allow friendly countries.
This includes your remote access VPN and inbound traffic to external-facing services; why let those bad countries flood your VPN login when you can just block them? Geo-Blocking can definitely remove some of the noise from less sophisticated attackers as well, especially if you're using it as ingress filtering before your load balancers.
See if the rule can be a negate rule, since you probably have a smaller list of allowed countries than blocked countries: you include the countries you want to allow and then negate that list, so anything not included is denied. If it's the opposite and you only want to block a few bad-actor countries, then you wouldn't use the negate type of rule.
Be sure to include your own IP space if using the negate type of rule (RFC 1918 space such as 10.0.0.0/8 and 192.168.0.0/16, for example); otherwise you will hose yourself and block all your traffic, because the Geo-Filtering database might not include your specific subnet or internal IP space. You'd have two block policies: one for traffic sourced from those countries and a second for traffic destined to those countries.
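As a rough sketch of how this can look in FortiOS CLI (the interface names, country, and object names below are placeholders; check your version's syntax), the allowed countries and your RFC 1918 space go into one group, which is then negated in a deny rule:

```
# Geography address objects, one per allowed country (placeholder: US)
config firewall address
    edit "geo-US"
        set type geography
        set country "US"
    next
    edit "rfc1918-10"
        set subnet 10.0.0.0 255.0.0.0
    next
end
# Group the allowed countries plus internal space
config firewall addrgrp
    edit "Allowed-Geo"
        set member "geo-US" "rfc1918-10"
    next
end
# Deny anything NOT sourced from the allowed group (the negate)
config firewall policy
    edit 1
        set name "Block-NonAllowed-Geo-In"
        set srcintf "wan1"
        set dstintf "internal"
        set srcaddr "Allowed-Geo"
        set srcaddr-negate enable
        set dstaddr "all"
        set service "ALL"
        set schedule "always"
        set action deny
        set logtraffic all
    next
end
```

A mirror policy using dstaddr-negate would cover the outbound direction.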
If the aggressor's infrastructure is hosted in a country you have blocked, that will make them work harder and use more resources to acquire infrastructure in an allowed IP range to relay to their preferred hosts.
Most service providers won't be able to do this, as they likely allow their customers to reach all geographies. However, if you have firewalls toward the edge, perhaps some hyperscale devices for session logging, DDoS safeguards, or NAT, then you can likely implement the next technique.
External Block Lists
The next item is the malicious IP and domain/URL block lists. Any enterprise network operator, and even service providers, will likely want to block the IP addresses of known spam or command-and-control (C2) destinations (among other categories).
Above is an example of a Geo-IP negate rule and an ISDB block rule for known risky IPs on a Fortinet firewall.
You'd want to create rules just like the Geo-Blocking rules (placed either above or below them, if you are using both), with the list objects set as the sources or destinations to block. No negate is needed here, as you want to block exactly the IPs (or URLs) listed in the database(s) rather than everything else.
There are various ways to get the feeds, sometimes called External Threat Feeds or Dynamic IP lists. Cisco, Fortinet, Palo Alto, and Juniper for instance all provide threat feeds and have ways to implement theirs, yours, or pull from other sources. I'd guess other firewall vendors do as well but you'll have to review their documentation.
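As an illustration of the do-it-yourself side on a FortiGate, an external feed can be pulled in and then referenced directly in a deny policy. The feed URL and all names here are hypothetical placeholders:

```
# Pull a third-party or self-hosted IP list (hypothetical URL)
config system external-resource
    edit "risky-ip-feed"
        set type address
        set resource "https://feeds.example.com/risky-ips.txt"
        set refresh-rate 30
    next
end
# Deny egress traffic destined to any address in the feed
config firewall policy
    edit 2
        set name "Block-Risky-IP-Egress"
        set srcintf "internal"
        set dstintf "wan1"
        set srcaddr "all"
        set dstaddr "risky-ip-feed"
        set service "ALL"
        set schedule "always"
        set action deny
        set logtraffic all
    next
end
```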
By using built-in threat feeds you will drastically increase the amount of high risk IP addresses and URLs you can block which will enhance your organization's defenses. Even though you have implemented a NGFW it is possible you are not getting all of the threat intelligence possible in your system.
Let's look at why this could be important. For example, if a user goes to a website with a malicious ad, or receives a phishing e-mail with a link and clicks on it, it may be blocked by one of the block policies you just created, thereby preventing the drive-by payload delivery for initial access or that phishing login screen.
Moreover, let's say a user is actually compromised. If the C2 beacons out to a known malicious IP and is blocked, it's possible the attacker will never know their payload was executed or that someone clicked their link. To go even further, if data is being exfiltrated, there's a slim chance, but if it's going to an IP or domain on one of these lists, it could be blocked.
Above is an example of what Palo offers out of the box (with licensing), called Dynamic IP lists. Fortinet offers similar lists via their Internet Services Database (ISDB), which was shown in a previous picture of example policies. Cisco calls them Security Intelligence feeds; here is a blurb from their documentation:
"Security Intelligence feeds are updated regularly with the latest threat intelligence from Talos:
Cisco-DNS-and-URL-Intelligence-Feed (under DNS Lists and Feeds)
Cisco-Intelligence-Feed (for IP addresses, under Network Lists and Feeds)"
Another benefit of having these policies is that you can log the traffic hitting the block rules, providing notifications of possible indicators of compromise (IOCs) that can help the SOC triage and investigate incidents.
These methodologies would of course be used in conjunction with your web filtering, DNS filtering, and intrusion prevention (IPS) profiles to help elevate the degree of filtering.
Manage your own!
Alternatively, with Palo or Fortinet you can host a text file on your own web server to create your own lists of IP addresses or domains you find out about or want to block for various reasons.
By having your own threat feed (even better in conjunction with a built-in list) you can more easily and quickly add URLs or IP addresses to block which streamlines security operations.
Maybe you get e-mails from CISA or other advisories and want to make sure you are blocking their intel, or what if you have a list of 500 URLs to block? Manually adding those to a policy one by one is not ideal, but with this method it is a simple copy and paste. Double-check the formatting the vendor wants before implementing.
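To give a sense of the formatting, vendors generally expect a plain text file with one entry per line, something like the sketch below (these entries are documentation-range examples; accepted notations and comment support vary by vendor, so verify against their docs):

```
203.0.113.15
198.51.100.0/24
203.0.113.128/25
```

IP lists and domain/URL lists are typically separate list types pointing at separate files, so don't mix entry types in one file unless the vendor says you can.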
Even without your own feed, by using the built-in lists you can add millions of known bad-actor IP addresses and domains to restrict. See the links below for some of the documentation to get started:
Feeds:
Here is how the architecture could look:
Preventing Lateral Movement and Device Discovery
In this next section we will discuss the philosophy of preventing lateral movement and discovery on the firewall side. Let's say there is a breach and a bad actor has persistence on an infected computer within the network; how can firewall policy restrict their movement?
First, if user subnets are segmented between locations in a distributed organization, it will help contain the infection to a single network. Why should a user at location A be able to ping or SMB directly to a user's device at office B? By disallowing these types of flows, an attacker will discover fewer possible victims, thereby decreasing the chances of lateral movement to other networks. Keep your sites separated wherever possible.
In a distributed hub and spoke enterprise it might not be a requirement for sites to communicate, therefore you could have all sites going to a central hub firewall which means you can more easily control the traffic from the site to the data center or between each other.
Alternatively, when talking about servers or applications, perhaps only a few locations need to access a certain application, so why would you allow all locations to reach that web server? Moreover, is the application just web based? Then maybe allow only HTTPS for the users, and allow SSH for the system administrators only, which can help reduce the chance of the server being compromised. Restrictions like these help reduce lateral movement and prevent attackers from scanning to find vulnerable hosts inside the network.
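A sketch of that pair of rules in FortiOS CLI (all interface, address, and policy names below are placeholders for illustration):

```
config firewall policy
    # Broad user population gets the web front end only
    edit 10
        set name "Users-to-App-HTTPS"
        set srcintf "user-vlan"
        set dstintf "dmz"
        set srcaddr "Site-A-Users"
        set dstaddr "App-Web-Server"
        set service "HTTPS"
        set schedule "always"
        set action accept
        set logtraffic all
    next
    # Only the admin hosts get management access
    edit 11
        set name "Admins-to-App-SSH"
        set srcintf "mgmt-vlan"
        set dstintf "dmz"
        set srcaddr "SysAdmin-Hosts"
        set dstaddr "App-Web-Server"
        set service "SSH"
        set schedule "always"
        set action accept
        set logtraffic all
    next
end
```

Anything not matching either rule falls through to the implicit deny at the bottom of the rule set.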
For service providers the production network might not be segmented however the internal management and office networks should be; that might be where this type of segmentation can occur.
Continuing on, you might reasonably have a ton of rules and want to clean things up; here are a few tips.
First, it's always good to be specific with ports/protocols and sources/destinations. As you might know, you'll want to move away from 'Any/Any' type rules (in any direction of traffic). However, it's perfectly okay to use 'Any' as the source or destination in a rule when the zone/interface is tied to only one subnet; in that case the zone/interface acts as the controlling mechanism instead of the IP address object.
There are also vendor tools that can help identify unused rules, shadow rules, or rules that might be risky, consider those if you are a large operator with many firewalls and policies.
Making the firewall the default gateway for user subnets also helps reduce pivoting, because all traffic exiting a subnet must go through the firewall for inspection, thereby better containing the subnets. Be mindful of whether rules are intra-zone or inter-zone when adopting this.
One tactic you can use to clean up undocumented legacy or 'Any' type firewall rules is to create a more specific rule (source/destination, port/protocols) for the traffic that is matching the old rule, then place that new, more specific rule above the legacy rule. With the more specific rule on top, new sessions will fall off the old rule and hit the new one. After a period of doing this and reviewing logs, you will be able to disable the old rule and then delete it.
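Sketched in FortiOS CLI terms (policy IDs and object names are placeholders), the cleanup amounts to creating the specific rule, moving it above the legacy one, and then watching the legacy rule's hit count:

```
config firewall policy
    # New, specific rule for the traffic currently matching legacy rule 5
    edit 100
        set name "Specific-Replaces-Legacy"
        set srcintf "internal"
        set dstintf "wan1"
        set srcaddr "Site-A-Users"
        set dstaddr "Known-App-Servers"
        set service "HTTPS"
        set schedule "always"
        set action accept
        set logtraffic all
    next
    # Place the specific rule above legacy rule 5 so it matches first
    move 100 before 5
end
```

Once rule 5's hit counter stops incrementing for an acceptable period, disable it, then delete it.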
User Identification Techniques
Lastly, the next level of firewall rule philosophy is to move away from source IPs in certain rules and move to user groups tied into Windows Active Directory (AD). Especially with IPv6 and the advent of larger global networks, it might be hard to track hundreds of subnets that need access to a single application cluster; therefore this solution could scale more securely.
Moving away from dependence on source IP objects means you can have wider allow rules at the sources for your cookie-cutter location policies, and then specific rules at the destinations in the core that allow only a certain Active Directory group of users. So when a user is created and their job will utilize that application, no matter where they are located, they can simply be added to the AD security group that is linked to your firewall object.
With Palo it's called User-ID and with Fortinet it's called FSSO. Usually you will have agents on the devices that check into a centralized server, and/or an AAA-type server will parse the AD logs for user-mapping information; the latter can be slightly less accurate in my opinion. See the diagram below from vendor documentation to understand one form of this type of architecture.
Accuracy is important because if the mapping is not updated, the user won't match the allow rule. The host agents update the firewalls they are integrated with, saying this user in this group has this IP address; the centralized servers, meanwhile, parse the Windows logs for login/logout events to determine the user and related IP address.
This means the firewall keeps track of the IP addresses related to users within the groups you have created specifically to be used as objects in firewall policies. Therefore, if an attacker has persistence on a device in a subnet and that subnet has a User-ID rule, it's possible the attacker will not have access to the application despite being adjacent to another user who does.
On the other hand, if this example used a rule that only had the one subnet, then anyone in the subnet would be able to access the server. Moreover, by using the AD group technique you won't have to give users static IP addresses or DHCP reservations, which removes some administrative overhead.
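As a sketch of the Fortinet (FSSO) flavor of this, again with placeholder names and a hypothetical AD group DN, the AD group is wrapped in an FSSO user group and referenced in the policy with `set groups`:

```
# FSSO user group tied to an AD security group (hypothetical DN)
config user group
    edit "App-Users-FSSO"
        set group-type fsso-service
        set member "CN=App-Users,OU=Groups,DC=corp,DC=example,DC=com"
    next
end
# Identity-based policy: being in the subnet alone is not enough to match
config firewall policy
    edit 20
        set name "AD-Group-to-App"
        set srcintf "user-vlan"
        set dstintf "dmz"
        set srcaddr "all"
        set dstaddr "App-Web-Server"
        set groups "App-Users-FSSO"
        set service "HTTPS"
        set schedule "always"
        set action accept
        set logtraffic all
    next
end
```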
Restrict Specific Traffic Types to Prevent Evasion and Recon
In this final section we will take it a step further and look at another strategy for layering protection: inspecting allowed traffic for layer 3, 4, and 7 behaviors. Maybe you have your rules tuned, but how can you stop risky traffic if it's allowed?
We will assume your defensive position is mature and you have anti-virus (A/V) and IPS (aka threat) inspection enabled on your rules. I recommend using A/V and IPS inspection on all rules, or as many as possible (consider performance). Even if the traffic or payload is encrypted, there are signatures or behaviors captured in the headers or protocol negotiations that can be stopped.
Moving on, one of the first things an attacker does, as illustrated in the MITRE ATT&CK framework, is recon or information gathering (the framework shows it early in the attack chain, but it could be a later step depending on the situation and TTPs). This includes scanning address blocks to discover devices and build a network topology.
It can also mean scanning ports and services to try to find vulnerabilities, in addition to fingerprinting hosts. The malicious actor will likely attempt to evade detection during these operations.
TCP SYN with data, TCP SYN/ACK with data, TCP stealth activity, TCP split handshakes, ICMP reverse shells, host scanning, port scanning, and fragmented or malformed IP packets are all examples of layer 3 and 4 traffic types or behaviors you can block using IPS (Fortinet, Cisco, Check Point), zone protection profiles (Palo), or general system settings (WatchGuard, SonicWall). There are also IPv6 options that can be enabled in these areas.
We won't delve into what each of these means, but here is a nice article about TCP-level attacks. You'll have to do your own protocol research in the RFCs or something; some nice nighttime reading. Also, be careful with the fragmented-packet options, as there is a possibility of legitimate fragmented traffic.
See some configuration snippets below:
PA has a lot of options for IP, TCP, ICMP, flood protection, etc.
Like PA, Fortinet has signatures, but under IPS; here is a snippet of the TCP options.
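For the scan-related anomalies specifically, FortiOS also exposes them in a DoS policy from the CLI. Here is a sketch with a placeholder interface and thresholds you would tune to your environment:

```
config firewall DoS-policy
    edit 1
        set interface "wan1"
        set srcaddr "all"
        set dstaddr "all"
        set service "ALL"
        config anomaly
            # Port scan detection: block a source probing many ports
            edit "tcp_port_scan"
                set status enable
                set log enable
                set action block
                set threshold 1000
            next
            # Host sweep detection via ICMP
            edit "icmp_sweep"
                set status enable
                set log enable
                set action block
                set threshold 100
            next
        end
    next
end
```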
When searching for new victim machines or mapping the network, attackers will often throttle and control how wide their scanning scope is. This can make it hard to detect and may require some tuning of the profiles based on your knowledge and how aggressively you want them to trigger, but it's good to have these features enabled to block possible recon attempts, or at least to alert you.
Likewise, during scanning there are TCP approaches that Nmap (and perhaps other tools) use to probe open ports and running services by sending crafted TCP packets using one of the methods listed above. Additionally, even if these packets are not blocked, detecting them might alert you that some suspicious activity is going on. I've also seen vulnerabilities triggered by malformed or crafted packets, which makes it wise to just block those anomalies at layers 3 and 4.
If a port scan attempt is blocked, the aggressor might not discover that vulnerable device. Furthermore, if you have your host-sweep detection threshold set to sensitive, you'll have a higher chance of detecting that someone is scanning the network. By blocking some of those TCP evasion methods, you reduce the options the attacker has to evade detection.
Notice in both examples above that you need to manually create a policy to block some of these signatures. Obviously your mileage may vary, and you probably want to investigate how often you are getting hits on these types of signatures if possible.
These methods also apply to multicast traffic, which is something to consider when reviewing and creating policy if multicast exists in your environment. But multicast is beyond the scope of this post.
Lastly, PA and Forti also have different flavors of application policy enforcement for egress traffic. App-ID with PA helps allow only the actual application based on its behavior, and Forti's Application Control performs a similar role. It's usually recommended to monitor layer 7 traffic using these more in-depth application modules when filtering egress to the internet, to attempt to prevent tunneling, which is a common evasion technique.
If you or your security team has your own scanner for auditing, make sure you don't block it, and have specific policies for it only.
Related information:
To conclude, in this post we talked about different ways you can improve your defensive posture. By using dynamic block lists you can streamline operations and increase the chances of blocking initial access, or command and control on egress traffic. By crafting sound firewall policy with proper segmentation, or by integrating user groups, you can limit what an attacker can discover and reduce the range of lateral movement. Finally, we looked at additional methods to reduce possible evasion or reconnaissance attempts by blocking certain types of traffic.
There is no way to protect 100% against all network-level attacks, yet we can do our best to ensure we are utilizing all the options at our disposal and reviewing each vendor's capabilities. I hope this post helps you do that. Thank you and stay secure!
Would you like to know more? Here are some related articles: