Anyone use S1 as a trigger and have a way to filter down the results to be useful?

SentinelOne as a trigger means a lot of new jobs running. It also means one original threat can have hundreds of new threats grouped under it.

It would be nice to be able to group the new threat alerts by one common denominator that is unique to the threat. _______ is what is unique for each threat. What is the blank? That is the question: threat.Id, threat.hash, endpoint name, etc., while at the same time not overdoing it so we end up missing things.

In other words, each new threat should get a single ticket; each new threat can be treated as a single entity.

What I have done so far: the threat file hash changes every time, but I am experimenting with the ID to see if that stays the same.

Are you looking to just create a ticket in an external ticketing system every time SentinelOne creates an incident?

That is the goal yes.

But a new incident in S1 can be 150+ new threats on the same machine, for the same binary, just doing slightly different activity.

Are you talking specifically about STAR rules, or incidents that trigger from the S1 engines? I ask because I have not seen this before.

The S1 engines then. The S1 plugin only has one trigger, and it's New Threats.

@jon_schipp1 @brandon_mcclure still here?

Sorry, I don’t use S1, so I don't know what to filter on. I think I understand what you are seeing because I’ve had other similar situations, but I haven’t found a good way to address them.
Hundreds of parallel jobs versus a loop with a count in the hundreds are both performance hits in different ways, and that's before even accounting for aggregation.

Thanks Brandon. A loop with a counter would have been my next attempt, but as you say it comes with a performance hit, which makes sense. I even tried a global artifact strategy, but how would you delete the entries at an interval to ensure that nothing new is missed? Back to the drawing board.

Also, GAs have a max search limit of 1000 if I remember correctly, so I don’t use them for large datasets.
What I do is outside of SOAR, in our SIEM.

With the Global Artifact method you would need a second workflow that checks your GA and does a time comparison. If it is older than x amount of time, then clean it up.

So your GA structure would be something like:

TimeStamp | User | Asset | Description | Ticket Number
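
For illustration, one entry following that structure might look like this as an object (the field names here are an assumption, not anything ICON mandates):

```python
# Illustrative GA entry matching the structure above; field names are
# an assumption, not an ICON requirement.
entry = {
    "timestamp": "2024-05-01T14:32:00Z",  # when the first threat was seen
    "user": "jdoe",
    "asset": "WKSTN-042",
    "description": "S1 threat: suspicious binary execution",
    "ticket_number": "INC-10123",
}
```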

When a new threat pops in, it first checks the GA to see if that user or asset is on the list. If it is on the list, is it within the acceptable amount of time? If yes, then maybe add something from the new threat as a comment to the original ticket, but do not add a new entry to the GA. If you want to maintain a count of how many threats were tied to that original ticket, you could include a count field in your overall structure that increments by one every time a threat is matched to the original entry.

If the user or asset is not found in the GA, then add it.

The second workflow would need to cycle through the list and check the time; if an entry is older than a day, as an example, clear it.
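
ICON workflows are assembled visually rather than in code, but the decision logic above reads naturally as a short Python sketch. This is a minimal sketch with in-memory stand-ins; the field names, the tickets dict, and the INC- numbering are all illustrative, not ICON's API:

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(hours=24)  # the "acceptable amount of time" window

def handle_new_threat(threat, ga_entries, tickets):
    """Return the ticket number a new S1 threat ends up tied to.

    ga_entries: list of dicts shaped like the structure above
                (timestamps stored as datetimes here for simplicity).
    tickets:    dict of ticket_number -> list of comments, standing in
                for the real ticketing system.
    """
    now = datetime.now(timezone.utc)
    for entry in ga_entries:
        same_target = (entry["user"] == threat["user"]
                       or entry["asset"] == threat["asset"])
        within_window = now - entry["timestamp"] <= MAX_AGE
        if same_target and within_window:
            # Match: comment on the original ticket and bump the count,
            # but do NOT add a new GA entry.
            tickets[entry["ticket_number"]].append(threat["description"])
            entry["count"] += 1
            return entry["ticket_number"]
    # No match: open a new ticket and record it in the GA.
    ticket_number = f"INC-{10000 + len(tickets)}"
    tickets[ticket_number] = [threat["description"]]
    ga_entries.append({
        "timestamp": now,
        "user": threat["user"],
        "asset": threat["asset"],
        "description": threat["description"],
        "ticket_number": ticket_number,
        "count": 1,
    })
    return ticket_number

def cleanup_ga(ga_entries):
    """Second workflow: drop entries older than MAX_AGE."""
    now = datetime.now(timezone.utc)
    ga_entries[:] = [e for e in ga_entries if now - e["timestamp"] <= MAX_AGE]
```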

Thanks Darrick, I like this idea a lot. I can try it and report back.

In the GUI of SentinelOne you have the option to “Group by Hash” for your threats. Rather than using Asset or Host you could potentially use one of the hash values, but you would need to look into a threat you have captured in ICON, then look in the GUI and see exactly which hash it is matching on.

If you want a single ticket per endpoint or asset you can keep the structure mentioned above, but also add in a hash value.

@Eric-Wilson brought up a great point, which is that you don’t need to have a Global Artifact cleanup workflow if you don’t want it. ICON will only reference the latest 1000 entries, but the GA itself can have more than 1000.

So I believe the hash they are grouping by is the “file content hash” value.

Is there a rule for including multiple value pairs in Global Artifacts? It looks like GAs only allow one value pair.

EDIT: That can work, as you can just look up a value that “Contains” whatever string you want.

Is there a way to run a daily workflow to delete all the entries of a Global Artifact list?

Maybe you could use a timer trigger, somehow look up all the values, and then delete them all.

I’d like to keep the two workflows separate. One to maintain the GA list and one to work on the new threat.

As I see it:

Option A

Workflow 1

  1. Create new ticket for each threat
  2. Look up the file hash from the new threat in open tickets from the same day and find who is assigned to the original ticket
  3. Assign additional tickets for the same threat to same person
  4. Do some other stuff

Option B

Workflow 1

  1. Look up the file hash for the new threat in the GA
  2. If it exists and is from the same day, grab the previous ticket number (also stored in the GA)
  3. Create a new ticket for the threat and assign it to the same person as the original ticket

Workflow 2

  1. Every 24 hours, delete all the entries in the Global Artifact so Workflow 1 starts fresh

Option C (If you are limited on the number of Workflows)

Have an external system do the GA cleanup using the REST API
https://docs.rapid7.com/insightconnect/api/#tag/Global-Artifacts-Entities/operation/Global%20Artifacts%20Entities#Delete%20Entity
or just delete and recreate the GA
https://docs.rapid7.com/insightconnect/api/#tag/Global-Artifacts/operation/Global%20Artifacts#Delete
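
If an outside system is doing the cleanup, something along these lines could run on a daily cron. This is a hedged sketch only: the base URL, endpoint paths, and auth header below are placeholders (assumptions), so confirm the exact routes and authentication against the API docs linked above:

```python
import requests

# Assumptions for illustration: the base URL, endpoint paths, and auth
# header are placeholders -- check the linked API docs for the real ones.
BASE = "https://us.api.insight.rapid7.com/connect"  # region-specific
HEADERS = {"X-Api-Key": "YOUR_API_KEY"}

def delete_ga_entry(ga_id: str, entry_id: str) -> None:
    """Delete a single entry from a Global Artifact ("Delete Entity")."""
    resp = requests.delete(
        f"{BASE}/v1/global_artifacts/{ga_id}/entities/{entry_id}",  # placeholder path
        headers=HEADERS, timeout=30)
    resp.raise_for_status()

def delete_ga(ga_id: str) -> None:
    """Delete the whole Global Artifact, then recreate it fresh."""
    resp = requests.delete(
        f"{BASE}/v1/global_artifacts/{ga_id}",  # placeholder path
        headers=HEADERS, timeout=30)
    resp.raise_for_status()
```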

You will want to make sure to create a Global Artifact with an object structure. Add a field titled enabled and make it a boolean value. For every entry created, set the enabled field to true.

For your workflow on the timer, look up items in the GA with the field enabled = true. The next step can then delete the items returned by that lookup, which should be all items in the GA.
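
The logic of that two-step cleanup, as a plain-Python sketch (in ICON this would be a GA lookup filtered on enabled = true, followed by whatever step your version exposes for removing the returned entries):

```python
# Sketch of the timer workflow's logic with in-memory stand-ins.
ga_entries = [
    {"enabled": True, "asset": "WKSTN-042", "ticket_number": "INC-10123"},
    {"enabled": True, "asset": "SRV-007",   "ticket_number": "INC-10124"},
]

matches = [e for e in ga_entries if e["enabled"]]  # lookup: enabled = true
for e in matches:
    ga_entries.remove(e)  # delete everything the lookup returned

assert ga_entries == []  # the GA starts fresh for the next run
```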