After you’ve gotten InsightConnect stood up and started experimenting with workflows from the Extension Library, it’s time to think about taking some of your existing manual processes and automating them with InsightConnect. We refer to this as “Use-Case Mapping”.
Overview
Use-case mapping isn’t magic. It’s just the process of taking your “manual” actions and “mapping” them onto InsightConnect workflow elements. This isn’t really about building workflows; it’s about translating your existing process into something that can be automated. Yes, you can take the manual process and just start building a workflow, but you’re almost guaranteed to create an inefficient, hard-to-maintain workflow. This isn’t because there’s anything wrong with your existing process; it just has a different set of secondary objectives than an automated process. Manual processes tend to be designed to minimize effort, while automated processes rely on an abundance of data. While you may order the steps in a manual process so everything done by one person happens at the same time, an automated workflow may incrementally add data from the same source at different points in the workflow. Mapping allows you to identify these differences and account for them up front, making your end product better organized and easier to understand.
While not complicated, mapping a manual process can feel uncomfortable because it requires a different way of thinking. The more “complex” things you do manually, like cross-referencing data, accessing various management systems, and ensuring everything matches up properly, are actually quite trivial in a framework like InsightConnect. After all, computers are very good at talking to other computers.
Many of the simplest things you do as part of your process are actually the hardest for a computer to replicate. For example, say you’re analyzing a potential phishing email. The very first thing you do is look at the email and compare what it’s asking for to the sender’s domain. Seems trivial, right? What you’ve actually done is developed an expectation of who should have sent the email based on what the email is asking you to do. The email asks you to confirm your Google account information, but the sender is “jon.smith@facebook.com”? Instant decision - phishing attempt. Half the time this step is so simple you don’t even realize it’s part of your process, but from a computer’s perspective it’s a monumental accomplishment that even the most complex AI systems have trouble replicating. Everything you do in your process is influenced by this initial intuitive assessment. Recognizing and accounting for this “human element” in your processes is the single most challenging aspect of mapping use-cases.
The Mapping Process
The basic process:
- create a process catalog
- link and rank outcomes to data
1. Process Catalog
The mapping process starts with creating a catalog of the systems, data, and decisions involved. This catalog must include the following:
- The condition(s) that must exist to start the process - this is called the “trigger” in InsightConnect
- The systems or services directly involved in the process
- The data required for the process - and the systems or services that supply that data
- The outcomes of the process
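There’s no required format for this catalog - a whiteboard or spreadsheet works fine - but if you like working in code, here’s a minimal sketch of the same four elements as a Python data structure. The field names are our own shorthand, not anything InsightConnect defines:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProcessCatalog:
    """The four elements every use-case catalog needs."""
    trigger: str                                                # condition(s) that start the process
    systems_modified: List[str] = field(default_factory=list)  # systems or services the process touches
    data_required: List[str] = field(default_factory=list)     # data needed, plus where it comes from
    outcomes: List[str] = field(default_factory=list)          # possible results of the process
```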
Let’s examine a phishing process as an example. Our manual process goes like this:
- user reports a potential phishing attempt
- analyst reviews the email and makes a determination using various analysis tools at their disposal
- analyst responds with the outcome of their analysis
- analyst purges the message from the company’s email system
Our initial catalog looks like this:
trigger: e-mail report received from user
systems modified: email system
data required: analyst’s opinion
outcomes: malicious message, clean message
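Continuing with the ProcessCatalog sketch from above, this initial catalog is only a few lines:

```python
# (uses the ProcessCatalog sketch defined earlier)
initial_catalog = ProcessCatalog(
    trigger="e-mail report received from user",
    systems_modified=["email system"],
    data_required=["analyst's opinion"],
    outcomes=["malicious message", "clean message"],
)
```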
When constructing a catalog, pay close attention to any intuitive leaps your process includes. In this example, we’ve already made a couple - the analyst’s opinion of the message and the potential outcomes. The analyst’s opinion is obviously a leap, but the outcomes? We’re missing some potential results of our process, which will become clear as we continue.
When you’re just starting out, it can be useful to start with the outcomes and work your way backwards, eliminating the intuitive leaps and refining both the data required and the systems modified in the process. As you gain experience translating manual processes into workflows, this will become second nature and you may even be able to develop efficient workflows while skipping the mapping process entirely!
In this scenario we actually have 3 potential outcomes so far:
- Message is clean
- Message is malicious
- Message is malicious and needs to be immediately purged from all mailboxes
With these 3 potential outcomes identified, we can start cataloging the requirements for each:
- Message is clean - the default condition
- Message is malicious - the message contains malicious indicators
- Message needs to be purged - high confidence in the malicious indicators and/or a high threat index
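Before digging into the malicious case, here’s a rough sketch of how those requirements rank the outcomes. The confidence threshold is illustrative only - it stands in for whatever “high confidence and/or high threat index” means in your environment:

```python
def rank_outcome(indicator_count: int, confidence: float, purge_threshold: float = 0.9) -> str:
    """Map indicator data onto one of the three outcomes (thresholds are illustrative)."""
    if indicator_count == 0:
        return "clean"        # the default condition - no malicious indicators found
    if confidence >= purge_threshold:
        return "purge"        # high confidence in the indicators and/or high threat index
    return "malicious"        # indicators found, but not enough confidence to purge
```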
The “Malicious” process outcome is where we need to focus. What indicators do we need to determine an email is malicious?
- malicious URLs in the message
- reputation of sender or message source
- malicious files attached to the message
And where can we get these indicators?
- Malicious URLs - we’ll use VirusTotal URL analysis
- Malicious files - we’ll use VirusTotal and Hybrid Analysis
- Reputation of the sender - we’ll run the domains in certain headers through VirusTotal domain analysis
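Inside InsightConnect these lookups become plugin steps, but it can be helpful to see roughly what the same checks look like as code. The sketch below assumes the VirusTotal v3 REST API and an API key in a VT_API_KEY environment variable - verify the endpoint paths and response fields against VirusTotal’s current documentation - and leaves Hybrid Analysis as a placeholder rather than guessing at its API:

```python
import base64
import os

import requests

VT_BASE = "https://www.virustotal.com/api/v3"
HEADERS = {"x-apikey": os.environ["VT_API_KEY"]}  # assumes your VirusTotal API key is set in the environment


def vt_malicious_count(endpoint: str) -> int:
    """Return how many engines flagged the object as malicious (VirusTotal v3 report layout)."""
    resp = requests.get(f"{VT_BASE}/{endpoint}", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    return stats.get("malicious", 0)


def check_url(url: str) -> int:
    # VirusTotal identifies URLs by an unpadded URL-safe base64 encoding of the URL itself
    url_id = base64.urlsafe_b64encode(url.encode()).decode().rstrip("=")
    return vt_malicious_count(f"urls/{url_id}")


def check_file(sha256: str) -> int:
    # attachment reports are looked up by hash (SHA-256 here)
    return vt_malicious_count(f"files/{sha256}")


def check_sender_domain(domain: str) -> int:
    # domain pulled from the original message headers (e.g. the From: or Return-Path: address)
    return vt_malicious_count(f"domains/{domain}")

# Hybrid Analysis would give a second opinion on attachments; its API isn't shown here,
# so treat it as a placeholder for whatever sandbox/file-analysis service you use.
```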
Wait - we just made another intuitive leap! Did you see it? We assumed we had the original message to access file attachments and message headers. We actually have a fourth outcome: an incorrectly reported message, where the suspect message was forwarded instead of attached.
Our catalog now looks like this:
trigger: e-mail report received from user
systems modified: email system
data required: VirusTotal URL analysis, VirusTotal file analysis, Hybrid Analysis file analysis, original message headers
outcomes: malicious message, clean message, incorrectly reported message, purge message
This looks pretty good, but what data do we need for the “purge message” outcome? Purging is a high-risk task and this is a new automation, so we’ll use two data points: the scope of the proposed purge and our old standby - an analyst’s opinion. Over time we can transition the analyst’s opinion to an automated decision.
Our final catalog looks like this:
trigger: e-mail report received from user
systems modified: email system
data required: VirusTotal URL analysis, VirusTotal file analysis, Hybrid Analysis file analysis, original message headers, analyst opinion for purge, purge scope (messages/user)
outcomes: malicious message, clean message, incorrectly reported message, purge message
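In the same data-structure sketch, the final catalog now looks like this:

```python
# (uses the ProcessCatalog sketch defined earlier)
final_catalog = ProcessCatalog(
    trigger="e-mail report received from user",
    systems_modified=["email system"],
    data_required=[
        "VirusTotal URL analysis",
        "VirusTotal file analysis",
        "Hybrid Analysis file analysis",
        "original message headers",
        "analyst opinion for purge",
        "purge scope (messages/user)",
    ],
    outcomes=[
        "malicious message",
        "clean message",
        "incorrectly reported message",
        "purge message",
    ],
)
```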
2. Link and Rank Outcomes to Data
Now that we have a process catalog, we need to determine the relationships between its elements. This lets us put things in the order that makes sense for an automated process. For demonstration purposes, we’ve simplified some of our data step names.
We start by determining which outcomes rely on other outcomes:
Next, we add in the data linked to each outcome:
And finally, we reorder things so every piece of data sits above the outcome that relies on it, and put our trigger at the top!
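These relationships are easiest to see in a diagram, but as a rough text stand-in, the reordered map for our phishing example reads top to bottom something like this (the step names are illustrative):

```python
# Trigger at the top, every piece of data above the outcome that relies on it.
phishing_map = [
    "TRIGGER:  email report received from user",
    "DATA:     original message headers and attachments",
    "OUTCOME:  incorrectly reported message (no original message attached)",
    "DATA:     VirusTotal URL analysis",
    "DATA:     VirusTotal file analysis + Hybrid Analysis file analysis",
    "DATA:     VirusTotal domain analysis of sender/header domains",
    "OUTCOME:  clean message (default - no malicious indicators)",
    "OUTCOME:  malicious message (indicators found)",
    "DATA:     purge scope (messages/user) + analyst opinion",
    "OUTCOME:  purge message (high confidence / high threat index)",
]
```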
We’ve now mapped our manual process and it’s ready to be implemented in an InsightConnect workflow!
Converting to a Workflow
Looking at our completed map, we already have something that looks a lot like an InsightConnect workflow:
- we have a triggering event
- we have an order of operations
- from our catalog, we have a list of the systems and services we’ll need connections to
Once we’ve confirmed that the connections we’re going to use are set up, we can start by determining the decisions our use-case map indicates we need to make. It should look like this:
Note that the “Should we Purge?” step is a human decision step, accounting for the “Analyst Opinion” data point prior to purging the message.
We’ve just started this workflow, but we already know what needs to be done. To complete it, we place our outcomes after the appropriate decision points, build out our data-gathering steps, and finally implement our logic in the decision points!
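InsightConnect workflows are built visually, so the outline below is not a real workflow definition - just an illustrative sketch of the trigger, decision points, and branches our map gives us, including the human decision step that captures the analyst’s opinion before a purge:

```python
# Illustrative outline only - not InsightConnect's workflow format.
workflow_skeleton = {
    "trigger": "email report received from user",
    "steps": [
        {"step": "extract headers and attachments from the reported message"},
        {"decision": "Original message attached?",
         "no": "OUTCOME: respond - incorrectly reported message"},
        {"step": "gather indicators (VirusTotal URL/file/domain, Hybrid Analysis)"},
        {"decision": "Malicious indicators found?",
         "no": "OUTCOME: respond - clean message",
         "yes": "OUTCOME: respond - malicious message"},
        {"decision": "Should we Purge?",  # human decision step: analyst opinion + purge scope
         "yes": "OUTCOME: purge message from the email system"},
    ],
}
```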
Happy building!