Archive - Using transport rules as a security tool

💡 I'm working on getting the images back for this article, or I may just revamp it when I get a chance. I didn't realize I'd have to pull it from the Wayback machine, so it's missing some stuff :(

Introduction

At a conference last May, I had an opportunity to facilitate a session focused on email security, covering topics such as SPF/DKIM/DMARC/MTA-STS, spam/phishing filtering, message routing, archiving/holds, and more. During that session, I shared the methodology I use to track current trends in spam and phishing attacks rather than relying on the native spam filtering in our email platforms (which is regularly inadequate) or paying for additional security products, which are often out of budget and sometimes ineffective because they focus on businesses rather than the unique challenges of educational institutions. I believe creating rules based on patterns you are actually seeing can be every bit as effective, and making better use of the tools you already have can save a lot of money that can be invested elsewhere.

Since the think tank sessions were not recorded, several people have asked for more details so they could customize this method to their own liking. Hopefully this post can serve as a reference that is kept up to date as new ideas are shared and the process is refined. I will try to keep it focused on the methodology rather than a specific product, but since we use Office 365 (and G Suite isn't an exact 1:1 feature match), there may be some gaps you need to figure out depending on your platform.

At a high level, this method uses rules to copy email that is likely spam or phishing but wasn't caught by the spam filters into a mailbox for review. That content is then used to create transport rules that not only block those messages but can also be refined to become more effective over time. It's also a good idea to set up a dedicated phishing-report address for users to forward anything they believe to be phishing. Let's just say that even a few years in, users still have difficulty distinguishing between phishing and spam, so plan accordingly by having pre-canned replies for both scenarios.

Setting up the mailbox

Before we begin, it's important to note that this process requires copying email destined for your users, or potentially for mail-driven processes (like accounts payable workflows), into a mailbox that you and/or others have access to. This means you will most likely inadvertently intercept personal communications or even confidential information, so let that guide your decisions as you set up this mailbox and who you give permission to access it. It would be a good idea to have an approval process for every step of this to make sure it is well documented and well understood.

The first thing we need to do is create a mailbox that we can dump spam and potential phishing attacks into for review. We use a shared mailbox in Office 365, and as a best practice, we create a security group specifically for accessing that mailbox and add the appropriate people to it. The documentation on creating a shared mailbox and assigning permissions can be found here: https://docs.microsoft.com/en-us/office365/admin/email/create-a-shared-mailbox (G Suite users look here: https://support.google.com/a/answer/167430)
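If you'd rather script it than click through the admin center, a minimal sketch in Exchange Online PowerShell might look like this (the mailbox, group, and member names are all placeholders, not the ones I actually use):

```powershell
# Connect-ExchangeOnline (from the ExchangeOnlineManagement module) first

# Create the shared mailbox that spam/phish copies will land in
New-Mailbox -Shared -Name "Spam Review" -DisplayName "Spam Review" -Alias spamreview

# Create a mail-enabled security group to control who can open the mailbox
New-DistributionGroup -Name "Spam Review Access" -Type Security `
    -Members itsec1@yourdistrict.org, itsec2@yourdistrict.org

# Grant that group full access to the shared mailbox
Add-MailboxPermission -Identity "Spam Review" -User "Spam Review Access" `
    -AccessRights FullAccess -InheritanceType All
```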

Once that is set up, we need to configure the spam filter to BCC messages into this mailbox and enable more stringent controls in test mode so that those messages are copied into our shared mailbox. Things like attachment types, JavaScript, web bugs, etc. are good candidates for testing, and since we have so little foreign-origin/foreign-language email, evaluating those settings and possibly increasing spam confidence levels may be worth a look as well. G Suite users may be able to do something similar with Content Compliance rules for Gmail, and if you have good examples, please reach out and I'll update this post! The documentation on how to do that in Office 365 can be found here: https://docs.microsoft.com/en-us/office365/securitycompliance/configure-your-spam-filter-policies
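In Office 365, those settings live on the hosted content filter (anti-spam) policy and can also be set from PowerShell. Here's a sketch against the default policy, assuming the shared mailbox from the previous step is spamreview@yourdistrict.org and picking just a few of the advanced options to run in Test mode:

```powershell
# Run a few Advanced Spam Filter options in Test mode, and BCC anything they
# match into the review mailbox instead of affecting delivery
Set-HostedContentFilterPolicy -Identity "Default" `
    -MarkAsSpamJavaScriptInHtml Test `
    -MarkAsSpamWebBugsInHtml Test `
    -MarkAsSpamSpfRecordHardFail Test `
    -TestModeAction BccMessage `
    -TestModeBccToRecipients spamreview@yourdistrict.org
```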

This is an example of how mine looks

At this point, we should have a mailbox our team can access with some data flowing in, and hopefully we can start seeing patterns that give us an idea of how we might be able to use transport rules to start catching these. If we set up a user reported phishing mailbox, we can also use that data to improve our transport rules as well. We’re going to let this fill up for a week, and in the next part of this series, we will cover how to identify campaigns (and how they are often conducted), things that make them unique, and how to create rules that are effective in catching them.

Spam / Phishing Campaigns

In our previous post, we set up a mailbox so we could copy in potential spam or phishing attacks that are making it past our current filter rules. After a few hours, you should have some data to start looking at to get an idea of the types of things your end users are seeing (maybe like the image below). This will help us create rules to begin filtering out what we can, and hopefully we'll see patterns or key words that help us do better. Since attackers are always changing their tools, we will need to monitor and evolve our rule sets to address the current methods we are seeing.

Maybe you’ll see something like this?

Before we dive into setting up rules, it is important to understand how these campaigns operate. Most people reading this will understand what spam and phishing are, have ideas on how to identify them, and sometimes even know why the tactics they use are effective (right timing, sense of urgency, etc.). What isn't as widely understood is how they get created and how they end up in our mailboxes. A large portion of these messages are actually sent by kits that are either purchased or rented. While everyone makes a big deal about the markets on the dark web, you can actually purchase these services for very little money simply by joining some Facebook groups… The tools are extremely simple, have built-in templates, and are easily configurable so that the attacker can load in custom lists. Some of these can even automate deployment of HTTPS-encrypted websites hosted out of Azure blobs to give a sense of legitimacy. These kits or services mean an attacker needs little to no technical expertise to pull off these attacks.

Here’s a good example of a kit generated phish…

Transport rule goodness

I highly recommend giving your transport rules ID numbers (like ID01, ID02, etc.) or something else easy to map to folders in the shared mailbox. There are some basic transport rules you really should consider, such as blocking auto-forwarded messages, quarantining mail from hostile anonymous email services, and maybe even a rule to block social media or other services you want to prevent employees from signing up for with their district email. As much as I hate them, I also have rules that prepend banners to messages to let users know an email was external, spoofed our domain, etc.
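As a rough sketch, a couple of those baseline rules might be created like this in PowerShell (the rule names, IDs, and banner wording are just examples, not my production rules):

```powershell
# ID01: reject messages that are auto-forwarded from inside to external recipients
New-TransportRule -Name "ID01: Block external auto-forward" `
    -FromScope InOrganization -SentToScope NotInOrganization `
    -MessageTypeMatches AutoForward `
    -RejectMessageReasonText "Automatic forwarding to external addresses is not permitted."

# ID02: prepend a banner to anything that originates outside the organization
New-TransportRule -Name "ID02: External sender banner" `
    -FromScope NotInOrganization `
    -ApplyHtmlDisclaimerLocation Prepend `
    -ApplyHtmlDisclaimerText "<p style='color:#b00'>CAUTION: This message came from outside the district.</p>" `
    -ApplyHtmlDisclaimerFallbackAction Wrap
```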

In my setup, we use several types of rules: some based on specific keywords, some on regex, some on message headers, and so on. For each of these types, I have three separate rules that I use as stages: audit (BCC), quarantine, and delete. This means the rules quickly add up, so naming them well is very important. This list may give a general idea to go off of (a rough PowerShell sketch of a few of them follows the list):

ID003: Delete – Keywords
ID004: Delete – Regex
ID005: Delete – From header
ID006: Delete – Authentication results header
ID007: Quarantine – Keywords
ID008: Quarantine – Regex
ID009: Quarantine – From header
ID010: Quarantine – Authentication results header
ID011: BCC – Keywords
ID012: BCC – Regex
ID013: BCC – From header
ID014: BCC – Authentication results header
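Here is a minimal sketch of one rule from each stage in PowerShell; the keywords, regex, and report mailbox are placeholders you would swap for your own:

```powershell
# ID003: Delete - Keywords (with an incident report so we still see what was caught)
New-TransportRule -Name "ID003: Delete - Keywords" `
    -SubjectOrBodyContainsWords "example keyword one", "example keyword two" `
    -DeleteMessage $true `
    -GenerateIncidentReport spamreview@yourdistrict.org `
    -IncidentReportContent Sender, Recipients, Subject, RuleDetail, AttachOriginalMail

# ID008: Quarantine - Regex
New-TransportRule -Name "ID008: Quarantine - Regex" `
    -SubjectOrBodyMatchesPatterns "(wallet|bitcoin|btc|cryptocurrency) (address|transfer|account|wallet)" `
    -Quarantine $true `
    -GenerateIncidentReport spamreview@yourdistrict.org `
    -IncidentReportContent Sender, Recipients, Subject, RuleDetail

# ID011: BCC - Keywords (audit stage: tag the message and copy it for review)
New-TransportRule -Name "ID011: BCC - Keywords" `
    -SubjectOrBodyContainsWords "candidate keyword" `
    -SetHeaderName "X-TransportRuleID" -SetHeaderValue "ID011" `
    -BlindCopyTo spamreview@yourdistrict.org
```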

For the delete and quarantine rules, I set them to generate an incident report and have it sent to the shared mailbox. For the BCC rules, I set a message header (X-TransportRuleID) to the ID number for that rule and then BCC the message to the shared mailbox. In the shared mailbox, I have created a folder for each transport rule and set up an Inbox rule per transport rule to sort the emails into the appropriate folders. For the delete and quarantine rules, the ID numbers are in the body of the incident report, so they are easy to sort. For the BCC'd messages, the Inbox rules have to look at the header we set.
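A sketch of those sorting rules, assuming a shared mailbox alias of spamreview and folders named after the rule IDs that already exist:

```powershell
# Incident reports for ID003 carry the rule name in the body, so match on that
New-InboxRule -Mailbox spamreview -Name "Sort ID003" `
    -SubjectOrBodyContainsWords "ID003" `
    -MoveToFolder "spamreview:\ID003"

# BCC'd copies only carry the X-TransportRuleID header we set, so match on the header instead
New-InboxRule -Mailbox spamreview -Name "Sort ID011" `
    -HeaderContainsWords "ID011" `
    -MoveToFolder "spamreview:\ID011"
```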

We could also do similar things for DLP-type content, as well as anything else you might need to monitor, such as users emailing passwords or other sensitive information. Just keep in mind that we could be exposing confidential information into this mailbox, so plan accordingly!

Refining transport rules

Hopefully you aren’t like me with over 3K messages… 🙂

At this point, we should now have a shared mailbox full of messages (like above) that have been copied in due to testing our spam policy and our transport rules. If your organization is on the smaller side, there may not be much to look at, but even still, we want to refine our transport rules and start using Inbox rules so we can spend less time reviewing and more time doing other things!

You will notice certain patterns in spam and phishing attacks that can be added to your transport rules. Sometimes this is a specific phrase such as "<district name> Email Admin Team"; other times you eventually end up with a regex like "(wallet|bitcoin|btc|cryptocurrency) (address|transfer|account|wallet)" to catch extortion emails built around one of your users' (hopefully former) passwords found in a data dump. Once you've added these to the quarantine rules and evaluated them, you can promote them to delete, which will reduce how many messages you have to deal with in the shared mailbox. Over time, this will help you identify new campaigns and tactics faster and more easily, and you can simply add them to the transport rules. Here's what that might look like:
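One gotcha when promoting a pattern from quarantine to delete: setting -SubjectOrBodyContainsWords replaces the rule's entire keyword list, so append to the existing values rather than overwriting them. A sketch using the rule names from my list above (the phrase is a placeholder):

```powershell
# Append a newly confirmed phrase to the delete-by-keywords rule without losing the rest
$rule  = Get-TransportRule "ID003: Delete - Keywords"
$words = $rule.SubjectOrBodyContainsWords + "<district name> Email Admin Team"
Set-TransportRule -Identity $rule.Identity -SubjectOrBodyContainsWords $words
```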

Keep only the junk you want!

You may have already noticed that the rules sometimes pull in things like social media notifications, random online services your staff use, and other legitimate bulk mail. It seems counter-intuitive, but we want to add these senders to our allow list so they land in the shared mailbox's Inbox. The catch is that Inbox rules only run on items that hit the Inbox, not Junk, so we have no way to delete them automatically unless they end up in the Inbox.

Create an Inbox rule in the shared mailbox that deletes based on keywords, regex, headers, etc., in the same way you do for transport rules, and simply add things like twitter.com or mysteryscience.com to the list. This will cut down on what you have to look through, and it should result in faster response and less work overall.
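A sketch of that cleanup rule, using the same hypothetical spamreview mailbox and the example senders above:

```powershell
# Silently clear out known-legitimate bulk senders that the audit rules keep copying in
New-InboxRule -Mailbox spamreview -Name "Delete known-good bulk mail" `
    -FromAddressContainsWords "twitter.com", "mysteryscience.com" `
    -DeleteMessage $true
```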

The very first Inbox rule deletes anything that passed DKIM and DMARC for our domain, which clears out any legitimate internal email that gets copied in. Incident reports for internal rule matches aren't caught by this because the matched content is attached to the report rather than being the message itself.

There are some things I just don't feel comfortable moving into the always-delete tier of rules, especially anything that could generate a false positive and affect a business function like accounts payable. In those cases, I leave the rule set to quarantine even if it has never produced a false positive. If we do get a false positive, we simply go over to the Security and Compliance Center and release the message from quarantine. Once I feel comfortable with a specific part of a rule match, I'll set up an Inbox rule to delete just those specific messages.
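If you prefer the shell, releasing a false positive looks something like this (the sender address is a placeholder):

```powershell
# Find the quarantined message and release it to all of its original recipients
Get-QuarantineMessage -SenderAddress vendor@example.com |
    Release-QuarantineMessage -ReleaseToAll
```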

Looking to the future

An unfortunate truth about transport rules is that there is a limit to how many keywords or regex entries can be used in a single rule. A smaller district may never hit that limit, but larger districts may hit it very quickly. I have hit it a few times, which forced me to create a second rule to hold the overflow, but when possible, I've found that many of the keywords overlap, can be re-written, or can even be handled better with regex.

About every three months, I pull a list of all of our keywords and do a string comparison to look for phrases similar enough that a single regex can handle multiple keyword strings. I'm sure this is more computationally intensive, but I think it's worth it (especially when cloud hosted, since it's not our servers!). I do this review mostly to keep things clean and easier for my staff to look through and update, but there is nothing wrong with simply creating additional rules when you hit the limit.
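My review starts by dumping every keyword from the staged rules into one sorted list so near-duplicates sit next to each other; a rough sketch (the "ID0" naming prefix and output path are my conventions, not anything built in):

```powershell
# Export every keyword from the staged rules into one sorted CSV for review
Get-TransportRule |
    Where-Object { $_.Name -like "ID0*" -and $_.SubjectOrBodyContainsWords } |
    ForEach-Object {
        $rule = $_
        foreach ($word in $rule.SubjectOrBodyContainsWords) {
            [pscustomobject]@{ Rule = $rule.Name; Keyword = $word.ToString() }
        }
    } |
    Sort-Object Keyword |
    Export-Csv -Path .\transport-rule-keywords.csv -NoTypeInformation
```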
