SentinelOne as a trigger means a lot of new jobs running. It also means one original threat can spawn hundreds of new threat alerts under it.
It would be nice to be able to group the new threat alerts by one common denominator that is unique to the threat. _______ is what is unique for each threat. What goes in the blank? That is the question - threat.id, threat.hash, endpoint name, etc. - while at the same time not overdoing it so that we end up missing things.
In other words each new threat should have a single ticket per threat. Each new threat can be treated as a single entity.
What I have done so far: the threat file hash changes every time, but I am experimenting with the threat ID to see if that stays the same.
Sorry, I don’t use S1 to know what to filter on. I think I understand what you are seeing because I’ve had other similar situations but haven’t found a good way to address that.
Hundreds of parallel jobs and a single loop iterating hundreds of times are both performance hits in their own ways, even before accounting for aggregation.
Thanks Brandon. A loop with a counter would have been my next attempt, but as you say, that comes with a performance hit - makes sense. I even tried a global artifact strategy, but how would you delete the entries at an interval to ensure that nothing new is missed? Back to the drawing board.
With the Global Artifact method you would need a second workflow that checks your GA and does a time comparison. If it is older than x amount of time, then clean it up.
So your GA structure would be something like:
TimeStamp | User | Asset | Description | Ticket Number
When a new threat pops in, it first checks the GA to see if that user or asset is on the list. If it is, check whether the entry is within the acceptable amount of time. If yes, then maybe add something from the new threat as a comment to the original ticket, but do not add a new entry to the GA. If you want to maintain a count of how many threats were tied to that original ticket, you could include a count field in your overall structure that increments by one every time a threat is matched to the original entry.
If the user or asset is not found in the GA, then add it.
The second workflow would need to cycle through the list, check the time, and, if the entry is older than (for example) a day, clear it.
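To make the flow above concrete, here is a minimal plain-Python sketch of the dedup check and the cleanup pass. The function names (`handle_threat`, `clean_stale`, `make_ticket`) and the list-of-dicts stand-in for the Global Artifact are hypothetical illustrations, not ICON workflow steps:

```python
from datetime import datetime, timedelta

# Acceptable age for an existing GA entry (the "x amount of time" above).
WINDOW = timedelta(days=1)

def handle_threat(ga, user, asset, description, now, make_ticket):
    """Return the ticket number for this threat, deduplicating via the GA.

    ga mirrors the structure TimeStamp | User | Asset | Description |
    Ticket Number, plus a count of threats tied to the original ticket.
    """
    for entry in ga:
        if entry["user"] == user and entry["asset"] == asset:
            if now - entry["timestamp"] <= WINDOW:
                # Fresh match: comment on the original ticket instead of
                # opening a new one, and bump the running count.
                entry["count"] += 1
                return entry["ticket"]
    # No fresh match: open a new ticket and record it in the GA.
    ticket = make_ticket(description)
    ga.append({"timestamp": now, "user": user, "asset": asset,
               "description": description, "ticket": ticket, "count": 1})
    return ticket

def clean_stale(ga, now, max_age=WINDOW):
    """Second (timer) workflow: drop entries older than max_age."""
    ga[:] = [e for e in ga if now - e["timestamp"] <= max_age]
```

A second threat from the same user and asset inside the window returns the original ticket number and increments the count; once the entry ages out (or is cleaned up), the next threat opens a fresh ticket.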
In the GUI of SentinelOne you have the option to "Group by Hash" for your threats. Rather than using Asset or Host you could potentially use one of the hash values, but you would need to compare a threat you have captured in ICON against the GUI to see exactly which hash it is matching on.
If you want a single ticket per endpoint or asset you can keep the structure mentioned above, but also add in a hash value.
@Eric-Wilson brought up a great point, which is that you don’t need to have a Global Artifact cleanup workflow if you don’t want it. ICON will only reference the latest 1000 entries, but the GA itself can have more than 1000.
You will want to make sure to create a Global Artifact with an object structure. Add a field titled enabled and make it a boolean value, setting it to true for every entry created.
For your workflow on the timer, look up items in the GA where enabled = true. The next step can then delete the items from the previous lookup, which should be all items in the GA.
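The enabled-flag variant can be sketched the same way. This is a plain-Python illustration of the lookup-then-delete pattern, assuming hypothetical helpers `add_entry` and `timed_cleanup` and the same list-of-dicts stand-in for the GA:

```python
def add_entry(ga, user, asset, ticket):
    """Every new entry is created with enabled = True."""
    ga.append({"user": user, "asset": asset,
               "ticket": ticket, "enabled": True})

def timed_cleanup(ga):
    """Timer workflow: look up entries with enabled == True,
    then delete everything that lookup returned."""
    matched = [e for e in ga if e["enabled"]]
    for e in matched:
        ga.remove(e)
    return len(matched)  # number of entries cleared
```

Because every entry is created with enabled set to true, the lookup matches the whole list and the delete step empties the GA on each timer run, with no per-entry timestamp comparison needed.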