Delete outdated Vulnerability Data on an Asset

Hello everyone
I would like to describe the following case and hope to find answers here.
It happens from time to time that authentication on an asset no longer works during a scan. In the following example, it is a Windows 10 client that has already been updated to version 20H2 according to the information from MS 365 Defender. The R7 vulnerability scanners were last able to authenticate on this system in February 2022 and read the vulnerabilities that were present at that time. After that, authentication no longer worked, but the vulnerability data from February is still present in the asset database. Whether those vulnerabilities are actually still present on the system is uncertain. In my opinion, this distorts the reporting.
Are there any settings that delete outdated vulnerability data from assets? We are also still trying to solve the problem with the failed authentication. The clients are all configured identically, and the user we scan with has the highest privileges. One assumption is that the RPC service is no longer working properly on some clients; we are currently investigating this.
[Screenshots attached: Scan History, Asset Information, Vulnerabilities, Failed Credentials]


Hi @david_altanian
My guess is that these are vulnerabilities detected from the external interface of the device, so even though the local authenticated part of the scan fails, the scan engine will still assess the asset from a network perspective. Sharing the remaining 6 vulnerabilities will confirm whether they are indeed triggered by services the asset exposes to the network.

Yes, the 6 vulnerabilities are composed of 2 scanning diagnostics messages and 4 vulnerabilities that are normally found without authentication (see screenshot).
[Screenshot: the remaining vulnerabilities, found without authentication]
In my opinion, only these vulnerabilities should be displayed on the asset. But instead, the vulnerabilities that were found in the February scan are still displayed. As I said, first and foremost we have to look at why the authentication does not work. But in addition, I am looking for a solution to get rid of outdated vulnerabilities in the asset DB.
Is there any information in the database about when a vulnerability was last found on the target system? There is information about when a vulnerability was first discovered on the target system.

So there are a couple of different ways to go about this, but not exactly an automatic way. You could set your data retention settings to remove scan data after a certain amount of time, but that gets rid of ALL old data after that time; it's not going to remove only the old scan data "that may not be accurate anymore" and leave the other "good data". Basically, the one true way to clean this up is with an authenticated scan. InsightVM is going to hold onto that data because, as far as it knows, you still have those vulnerabilities. The last time it ran a fully authenticated scan it found those vulnerabilities, and since every scan after that was unauthenticated, it can't be sure those vulnerabilities no longer exist.

I'm not sure how many endpoints you have that are experiencing this, but to help with credential issues I would suggest checking out the Scan Assistant.


I will second reviewing your data retention settings for scan data.

Things to verify or explore:

  • Verify your credentials for service accounts. I know IVM has integrations with some secrets management solutions for rotating credentials. Otherwise, you'll be like me and manually rotate the creds every x days.
  • Depending on your scenario, it is probably a good idea to get the Insight Agent deployed to those systems. That way you have proactive data collection coming in from your assets; it's effectively another credentialed look at each asset 4 times in a 24-hour period (a rough install sketch follows below this list).
  • If you don't have an agent deployed, the Scan Assistant, as others mentioned, will suffice too.
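
If you do roll the agent out broadly, the install can be scripted. Below is a rough sketch of a token-based silent install; the token and paths are placeholders, and the CUSTOMTOKEN property reflects my understanding of the token-based installer, so verify it against the agent install docs for your environment.

```powershell
# Rough sketch: silent, token-based install of the Insight Agent.
# Assumes the MSI has already been downloaded locally; the token below
# is a placeholder and must be replaced with your organization's token.
$msi   = "C:\Temp\agentInstaller-x86_64.msi"
$token = "us:00000000-0000-0000-0000-000000000000"

Start-Process msiexec.exe -Wait -ArgumentList @(
    "/i", $msi,
    "/quiet",
    "/l*v", "C:\Temp\insight_agent_install.log",
    "CUSTOMTOKEN=$token"
)
```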

I would not put all your eggs in one basket with unauthenticated scans, since they are merely a best guess at fingerprinting the asset.

Good Luck @david_altanian


Hello everyone,
first, thank you for your feedback. Yesterday I went through everything again step by step with an IT colleague to find out why the scanners can't authenticate on his system. Ironically, we ended up looking at the most obvious cause of the problem last… for some reason, the AD group Domain Admins, which includes my service account for scanning, was not included in the local Administrators group on the client. This group should be present by default on all of our more than 5k Windows 10 clients (and on Windows servers as well). A re-scan worked afterwards. We now hope that the same is true for the remaining 200 clients.
The next step will now be to check on which clients and servers this group is missing and to fix it before the next global scan.
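
In case it helps anyone with the same problem, this is roughly what I have in mind for the check. It is only a sketch: it assumes WinRM is enabled on the clients and PowerShell 5.1+ for Get-LocalGroupMember, and the domain name, hostname list, and output path are placeholders.

```powershell
# Rough sketch: report which computers are missing "Domain Admins" from
# their local Administrators group. Hostname list, domain name, and
# output path are placeholders; WinRM must be enabled on the targets.
$computers = Get-Content "C:\Temp\windows_clients.txt"   # one hostname per line

$missing = Invoke-Command -ComputerName $computers -ErrorAction SilentlyContinue -ScriptBlock {
    $members = Get-LocalGroupMember -Group "Administrators" |
               Select-Object -ExpandProperty Name
    if ($members -notcontains "CONTOSO\Domain Admins") {   # replace CONTOSO with your domain
        $env:COMPUTERNAME
    }
}

$missing | Sort-Object -Unique | Out-File "C:\Temp\missing_domain_admins.txt"
```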

@john_hartman I have already considered the workaround with the Scan Assistant. We already use it on some servers that are not connected to the domain, where authentication with the service account is therefore not possible. It works great.

@brzrkstrk Since we use Thycotic as a secret management solution, we implemented a custom integration together with Rapid7 early this year. Credentials are now automatically changed on a daily basis and synced with the R7 console.

Yes, my goal is to scan all Windows 10 clients and servers in an authenticated way. I have created corresponding target cards to monitor this process.
[Screenshot]
Thank you and best regards


Way cool! Super happy to hear that you found what you were looking for. I am curious to hear more about the custom integration with Thycotic. We do use that vault, too. In the past I know they had an integration built to rotate the credentials, but it has since been deprecated.

What did you end up doing to ensure credentials are being rotated? Did you leverage a PowerShell script, or API calls?

Cheers.


Unrelated to your original inquiry, but I see you have a drastic increase in scan times starting in mid-June. I am seeing this same issue across all of my scanning: authenticated, discovery, internal, external. Curious whether you are seeing this across all your sites/assets or if it's just a coincidence that your screenshot matches my situation. My ticket with support didn't go anywhere, as I suspect this is a product issue, but I haven't heard anything and they closed my ticket despite my attempts to ask them for further explanation. If anyone else can confirm increased scan times, I would appreciate it.

@brzrkstrk: We use Thycotic Secret Server as a password vault.
The password is automatically changed in Thycotic. A Rapid7 developer, together with our Thycotic admin, then created a script that exports the current password from Thycotic via API on a daily basis and enters it in the Rapid7 Console in the correct place under Shared Credentials (overwriting the existing entry accordingly).
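
I can't share the original script, but the console side of it is conceptually simple. Here is a rough sketch of what such a daily sync could look like; it assumes the InsightVM Console API v3 shared_credentials endpoints and field names (please verify against your console's API documentation), and the export from Thycotic is reduced to a placeholder.

```powershell
# Sketch: overwrite the password of an existing Shared Credential on the
# Security Console. Endpoint and field names reflect my understanding of
# the InsightVM API v3 and should be verified against your console.
$console = "https://ivm-console.example.local:3780"
$headers = @{ Authorization = "Basic " + [Convert]::ToBase64String(
                 [Text.Encoding]::ASCII.GetBytes("apiuser:apipassword")) }

# Placeholder: in our setup, a separate step pulls the current password
# from Thycotic Secret Server via its REST API.
$newPassword = (Get-Content "C:\Secure\current_scan_password.txt" -Raw).Trim()

# Find the shared credential by name and push the new password back.
# -SkipCertificateCheck needs PowerShell 7+; drop it if the console
# certificate is trusted.
$creds  = Invoke-RestMethod -Uri "$console/api/3/shared_credentials" -Headers $headers -SkipCertificateCheck
$target = $creds.resources | Where-Object { $_.name -eq "Windows Scan Account" }

$target.account | Add-Member -NotePropertyName password -NotePropertyValue $newPassword -Force
Invoke-RestMethod -Method Put -Uri "$console/api/3/shared_credentials/$($target.id)" `
    -Headers $headers -ContentType "application/json" `
    -Body ($target | ConvertTo-Json -Depth 6) -SkipCertificateCheck
```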

@jonathan_hieftje
Yes Jonathan, I have already created a support case about this on 30.06.2022. Something must have changed between 14 June and 28 June. The scan site that covers the most assets, and which is checked by 3 scan engines, took 3h 30min on 14 June. On 28 June, the second monthly scan, the scan no longer worked and I cancelled it after 24 hours. I then adjusted the template and set "Maximum assets scanned simultaneously per scan engine" and "Maximum scan processes simultaneously used on each asset". After that, the scan stopped automatically after 6h 10min with the message "Not enough memory to complete scan". At that time I had equipped all the scan engines with 8 GB RAM as standard and provided additional scanners for the sites with many assets. After I selectively increased the memory from 8 GB to 12 GB, the scan completed successfully after 8h 10min. In July, the scan took 6h 18min, partly because I increased the values in the scan template and partly because Rapid7 reduced the memory consumption according to the release notes.

Finally, I would like to share with you the answer from Rapid7 Support:
We have been informed that with the newer version updates of IVM, the new system requirement for the scan engine will be 16 GB, and the documentation is due to be released soon.
There has been an update to Nmap in the newer versions, and especially with the newer Nmap scripts being used for Log4Shell and Spring4Shell, our memory needs for scans have gone up, increasing the possibility of problems.

It's probably time to increase the memory of the scan engines. Let me know if you are going to increase the memory and whether you still see the same performance issue.

A quick update. After our IT made sure that the Domain Admins group was added back on all clients, I did a re-scan today. I also further adjusted the scan template so that even more assets are scanned at the same time per scan engine.
The scan time has now been reduced from 6h 29min to 3h 34min. My setup for this scan site: 3 scan engines with 12 GB memory each.