Differentiate between very out of date and slightly out of date systems

Hi, I have a SQL query that outputs Solutions for all systems with Critical Vulnerabilities.

However, I’m unable to find a way to differentiate between systems that are just missing the latest Windows or Chrome update, for example, and those that have failed to update for months.

Other than manually editing the query each month to ignore specific text in vulns\solutions, could anyone let me know if there is a way to identify systems that are missing updates more than 60 days old? Vuln published and modified dates won’t work for various reasons (a vuln may have been published 6 months ago but only recently resolved, for example).

We have systems that automate the updates, so I don’t want to create this report and send support teams off to fix things that the automated system is likely to do anyway within the next week or two.


I don’t think there is a straightforward way to write this query. Like you said, the published dates or finding dates could be useful, but they have their own intricacies. The only way I know of to get a good timeline would involve a lot of manual effort, comparing the software versions in the proof data of the findings.

What level of detail is the report at? (Asset, Solution, Vuln, etc.)

Assuming you are using the console schema, can you use fact_asset_vulnerability_age and rank systems by taking the average (I would also add a min and max) of the ages of all vulnerabilities for each solution? You could then create your own buckets, which could help the team group systems that are “very out of date” vs “slightly out of date”.
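A rough sketch of that idea, at the asset level. This assumes the console schema’s fact_asset_vulnerability_age table exposes an age value per asset/vulnerability pair and that dim_asset has host_name and ip_address columns; the bucket thresholds (14 and 60 days) are just illustrative, and if age is stored as an interval rather than a number you would need to extract days first:

```sql
-- Sketch only: table/column names and bucket thresholds are assumptions;
-- adjust to match your console schema and your own definition of "out of date".
SELECT da.ip_address,
       da.host_name,
       ROUND(AVG(fava.age), 0) AS avg_vuln_age_days,
       MIN(fava.age)           AS newest_vuln_age_days,
       MAX(fava.age)           AS oldest_vuln_age_days,
       CASE
         WHEN AVG(fava.age) >= 60 THEN 'very out of date'
         WHEN AVG(fava.age) >= 14 THEN 'slightly out of date'
         ELSE 'current'
       END AS bucket
FROM fact_asset_vulnerability_age fava
JOIN dim_asset da USING (asset_id)
GROUP BY da.ip_address, da.host_name
ORDER BY avg_vuln_age_days DESC
```

Grouping by solution instead would just mean joining through the solution dimension and grouping on the solution summary rather than the asset.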

If this is on the right track, I have some other ideas that might be able to enhance the results using weighted averages.

The assumption here is that the older a vuln is, the more out of date the system is…

Thanks - before you replied I decided to do something similar by including the vulnerability publish date in the query, and only including findings where the vuln was published more than x days ago. It fixes more problems than it creates, and it gets close enough to limit sending support staff out to fix systems that will likely update themselves anyway. Once that process has run for a while I’ll revisit this and look at your buckets option.
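For anyone following along, the publish-date filter described above might look roughly like this. It assumes dim_vulnerability carries date_published and severity columns (with 'Critical' as a severity value) and uses the 60-day threshold mentioned earlier in the thread as the example cutoff:

```sql
-- Sketch: only report critical findings whose vulnerability was published
-- more than 60 days ago, so freshly released patches (which the automated
-- update process will likely apply soon) are excluded.
SELECT da.host_name,
       dv.title,
       dv.date_published
FROM fact_asset_vulnerability_finding favf
JOIN dim_asset da USING (asset_id)
JOIN dim_vulnerability dv USING (vulnerability_id)
WHERE dv.severity = 'Critical'
  AND dv.date_published < CURRENT_DATE - INTERVAL '60 days'
ORDER BY dv.date_published
```

As discussed below, the publish date is a property of the vulnerability, not of the asset, so this is an approximation rather than a true measure of how long the asset has been unpatched.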

Great! Keep us posted.

Wait…are you using the vulnerability published date, or the age of that instance of that particular vulnerability on that asset? I ask because the former is not related to your assets or when they were last updated, and you may get some strange results because of that. You’ll get better results by using fact_asset_vulnerability_age and then filtering for anything that is older than X days. Granted, there are cases where a brand-new asset with an older OS and few/no patches won’t show up using that…but in those cases the vulnerability count on that asset will be totally bananas and should be noticed anyway by other means.
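The age-based version of the filter could be sketched like this. Again, column names are assumptions against the console schema, and if fact_asset_vulnerability_age stores age as an interval rather than a day count, the comparison would need `fava.age > INTERVAL '60 days'` instead:

```sql
-- Sketch: filter on how long each finding has actually been present on
-- the asset, rather than on when the vulnerability was published.
SELECT da.host_name,
       dv.title,
       fava.age AS days_present
FROM fact_asset_vulnerability_age fava
JOIN dim_asset da USING (asset_id)
JOIN dim_vulnerability dv USING (vulnerability_id)
WHERE fava.age > 60
ORDER BY fava.age DESC
```

The trade-off named above applies: a freshly scanned asset with an old, unpatched OS will have low per-asset ages and slip through this filter, even though it is badly out of date.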


Hi, I’m using the vulnerability published date. It doesn’t matter (at the moment) what the age is relative to the asset - some of these are likely to be awful. We need to get our systems a bit more under control before we start worrying about the ages of vulnerabilities on specific assets.
We have systems that automatically update some software (Windows Updates, Adobe, etc.), and this mostly works. What I’m trying to find is where this process isn’t working, but I need to filter out the systems that require solutions to newer vulnerabilities so that we don’t waste time on systems that just need newer fixes. This should fix 80+% of our systems. Getting to the point where I need to be more specific about assets with old vulnerabilities is my target - at that point I’ll start looking at the ages per asset. Although now that I’ve written this I’m starting to doubt myself, the reports I’m getting out at the moment are definitely working for us.