Agent + Scan Engine fails on Windows


We are running into a situation where, on Windows systems of all flavors, if we have the agent installed and running and also run a remote scan against the same asset, the vulnerability data the agent collected is dropped and the scan engine logs only externally visible vulnerability data (e.g., TLS handshake issues). The OS for the asset also changes from the correct one detected by the agent to a generic “Microsoft Windows”.

We’ve been working around this by simply not scanning any assets that have the agent installed, but we’d like to be able to use both so we can do ad hoc scanning as necessary.

We have asset correlation enabled, as well as the “do not repeat checks the agent ran” option.

Has anyone else run into this and/or have any pointers?

Hello, I have definitely seen this before. The agent reports in correctly, but when the scan runs, Nmap fingerprints the asset incorrectly, which certainly causes some issues. I would be interested in what we can do to fix this.

Do you have the “Enable Windows services during a scan” option enabled? Also, are credentials with appropriate permissions on the Windows devices configured in the site(s)?

Yes to both.

Okay, another issue I ran into with some of our Windows endpoints was that our endpoint protection would react to the scanning and temporarily block any inbound communication from the engine, effectively stopping the scan on the asset before it could really even start. We got around this with specific exceptions in the endpoint protection to allow/ignore activity coming from the scan engines.

The last thing I can think of would be to check your endpoints’ firewall rules to make sure inbound connections from the engines are allowed and not blocked, e.g. CIFS (TCP/445), at least for the IP(s) of the scan engines, as well as any hardening settings controlling remote connections to the endpoint.
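As an illustration, a Windows Firewall exception of that kind might look like the following sketch. The rule name and the engine address 10.0.10.5 are placeholders; substitute your own scan engine IP(s), and note that your environment may manage this through GPO rather than local rules.

```shell
# Allow inbound SMB/CIFS (TCP 445), but only from the scan engine's IP.
# "10.0.10.5" is a placeholder; replace it with your engine address(es).
netsh advfirewall firewall add rule name="Allow scan engine (CIFS)" dir=in action=allow protocol=TCP localport=445 remoteip=10.0.10.5
```

Scoping `remoteip` to the engine addresses keeps TCP/445 closed to everything else while still letting authenticated scans reach the endpoint.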

You can create a separate site for ad hoc scanning without the agent exclusions and scan schedule. Then use that site for scans.

We did try this and we still experience the problem.

We have the same problem of agent data being “overwritten” by engine scan data (only remote / unauthenticated scans). This kicks the data of higher quality (agent) out of dashboards and overviews. It also makes the risk score graphic of individual assets look a bit like a sawtooth wave. With individual vulnerabilities we saw assets suddenly disappearing from dashboards (because they were just scanned by our engine) and reappearing days later (when the agent reported again).

Is there a solution to this? Shouldn’t InsightVM have some mechanism to pick the data that was collected with higher privilege for its analysis?

We found the problem and we were able to verify and resolve it in our environment. It turned out that we had an entry in the global exclusion list that was matching the subdomain for these Windows systems. So the scan would start and then during the asset discovery the subdomain was discovered and the scan would terminate (without an obvious warning).

I’d still call it a bug: if the scan terminates due to the exclusion list, it shouldn’t overwrite/corrupt the existing vulnerability data.

So in the end, the trigger was a misconfiguration on our part, but how the vulnerability data was affected is a bug, IMO.

Our situation differs a little from yours since I expect the engine scan data to be incomplete (unauthenticated scan). But I agree with you concerning the merging of the data resulting from two different types of data collection (agent and scan engine). This seems to be a bug.