Asset/scan data retention

I’ve had questions from my peers about asset retention policies. There’s been a big push to decommission older machines located in segmented areas of our network. Because our asset retention is set to two months, decommissioned assets keep showing up in reports even after they’ve been taken down. They’d like to reduce our asset retention to a month or even two weeks.

My concern here is that since I reduced our retention to two months, I’ve had entire sites get “zeroed out” on asset count: once a site hasn’t been scanned in two months, retention kicks in and purges its assets.
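To make that failure mode concrete, here’s a purely illustrative Python sketch (the asset names, dates, and sites below are made up, not pulled from any product or API) of the purge condition as I understand it: an asset is dropped once the time since its last scan exceeds the retention window, so any site scanned less often than the retention period zeroes out between scans.

```python
from datetime import date, timedelta

# Hypothetical inventory: (asset name, last scan date) per site.
sites = {
    "lab-segment": [("lab-01", date(2024, 5, 28)), ("lab-02", date(2024, 5, 27))],
    "user-endpoints": [("wks-101", date(2024, 4, 15)), ("wks-102", date(2024, 4, 20))],
}

def surviving_assets(assets, retention_days, today):
    """Keep only assets scanned within the retention window."""
    cutoff = today - timedelta(days=retention_days)
    return [name for name, last_scan in assets if last_scan >= cutoff]

today = date(2024, 6, 1)
for retention in (60, 30, 14):
    print(f"retention = {retention} days")
    for site, assets in sites.items():
        kept = surviving_assets(assets, retention, today)
        print(f"  {site}: {len(kept)}/{len(assets)} assets remain")
```

With a 60-day window both sites keep their assets, but at 30 or 14 days the monthly-cadence site drops to zero between scans, which matches what I’m seeing.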

My recommendation was to use “only assets in the last scan” on reporting, but we extract data to an external portal, and that’s where he sees the counts, so the assets aren’t disappearing as fast as he’d like.

So, having said all of that, my question is: what is the overall impact of an asset purge when a retention milestone kicks in? What data gets lost, and what can I rely on still being in the database? If I set retention to two weeks, are all of my monthly scan counts going to drop to zero?

I see in the documentation that “asset data retention settings do not affect historical scan data”. But SOMETHING must be getting purged… Otherwise, my older sites wouldn’t be going to zero…

“Asset data retention settings do not affect historical scan data” basically means the asset is soft-deleted: it keeps all of its trending and historical data in case it is ever discovered again, and the purge does not affect the bigger trending graphs. Most customers I work with scan everything weekly, and I recommend setting the asset data retention to about four times the scan schedule, so around a month, which works pretty well. Is there a reason that group of assets has a scan frequency so much longer than the other sites?
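If it helps, here is that rule of thumb expressed as a quick check, again just a sketch with made-up per-site scan intervals (nothing product-specific): the slowest site’s cadence is what constrains how low the global retention can safely go.

```python
# Hypothetical per-site scan intervals, in days.
scan_interval_days = {"lab-segment": 7, "user-endpoints": 30}

RETENTION_MULTIPLIER = 4      # rule of thumb: retention ~= 4x the scan interval
current_retention_days = 14   # the proposed two-week setting

for site, interval in scan_interval_days.items():
    recommended = interval * RETENTION_MULTIPLIER
    at_risk = current_retention_days < interval
    print(f"{site}: scans every {interval}d, suggested retention >= {recommended}d, "
          f"{'WILL zero out' if at_risk else 'ok'} at {current_retention_days}d retention")
```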

It really comes down to remediation managers working at different paces. Small labs scan at a rapid pace, while most user endpoints are scanned on more of a monthly cadence, which reflects the non-emergency patching cycle.

Whether they need the new data or not, I’d recommend a weekly scan cadence; it’s just good practice overall to ensure they’re picking up the latest vulnerabilities. The only other thought is a second console instance for departments with less frequent scan requirements: then you could have the ‘global’ data retention option set differently for the two sets of assets, though that brings its own challenges.

You could also just do the deletes manually using the ‘last scan date’ filter in a dynamic asset group (DAG), select only the applicable sites, and leave the data retention as it is.
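If you’d rather script those one-off deletes than click through a DAG, and assuming this is an InsightVM/Nexpose console (the product isn’t named in the thread), something along these lines against what I believe is the v3 REST API could work. The endpoint paths, filter field names, and operators here are my assumptions, so verify them against your console’s API docs, and review the output with the delete call commented out before letting it actually remove anything.

```python
import requests
from requests.auth import HTTPBasicAuth

CONSOLE = "https://console.example.local:3780"   # hypothetical console URL
AUTH = HTTPBasicAuth("apiuser", "apipassword")   # use a least-privilege account

# Assumed v3 search criteria: assets in specific sites whose last scan is on
# or before a cutoff date. Field/operator names may differ in your version.
criteria = {
    "match": "all",
    "filters": [
        {"field": "site-id", "operator": "in", "values": [42, 43]},  # hypothetical site IDs
        {"field": "last-scan-date", "operator": "is-on-or-before", "value": "2024-04-01"},
    ],
}

# verify=False only because many consoles use self-signed certificates.
resp = requests.post(f"{CONSOLE}/api/3/assets/search", json=criteria,
                     auth=AUTH, verify=False, params={"size": 500})
resp.raise_for_status()
stale_assets = resp.json().get("resources", [])

for asset in stale_assets:
    print(f"Would delete asset {asset['id']} ({asset.get('hostName', 'unknown')})")
    # Uncomment once you've reviewed the list:
    # requests.delete(f"{CONSOLE}/api/3/assets/{asset['id']}", auth=AUTH, verify=False)
```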
