On our security console there are a large number of .json files being created under /opt/rapid7/nexpose/shared/temp/nexpose-export/. The number of files has grown steadily since we upgraded/deployed this security console last August.
Can someone tell me what this folder is used for and if there is a retention setting to ensure that we can manage storage?
[user@server nexpose-export]# du -hd1
[user@server nexpose-export]# pwd
/opt/rapid7/nexpose/shared/temp/nexpose-export/ is the directory where the product saves JSON blobs that are queued to be sent to the cloud/platform. This is a critical part of the product’s current functionality, ensuring that your console and cloud/platform remain in sync with one another. While it is natural for this directory to contain data, it should fluctuate as the console processes and sends the data over to the platform. The amount of data will depend on your scan frequency, number of agents, bandwidth constraints, etc.
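If it helps to eyeball that fluctuation, here’s a quick sketch you could run from the shell. The helper name `backlog_report` is purely illustrative, and the path is the one discussed in this thread; adjust both for your install:

```shell
#!/bin/sh
# Sketch of a backlog check for the export queue (path is an assumption
# based on this thread -- adjust for your environment).
EXPORT_DIR="${EXPORT_DIR:-/opt/rapid7/nexpose/shared/temp/nexpose-export}"

backlog_report() {
  du -sh "$1"                                # total size of queued blobs
  find "$1" -type f -name '*.json' | wc -l   # number of queued .json files
}

# Only report if the directory exists on this host.
if [ -d "$EXPORT_DIR" ]; then
  backlog_report "$EXPORT_DIR"
fi
```

Running this a few times over an hour should show the numbers moving in a healthy deployment; steady growth suggests the queue isn’t draining.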
In case you’re curious, there is another directory, /opt/rapid7/nexpose/nsc/spillover-directory:
This is where JSON blobs are stored if, for some reason, they were being uploaded to the platform but failed to send. They will reprocess at the next opportunity, once whatever interfered with normal operations is no longer hindering the connection.
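A quick way to check whether anything is stuck waiting for a retry might look like this (the spillover path is the one named in this thread; adjust for your install):

```shell
#!/bin/sh
# Check for failed uploads waiting to be resent (path is an assumption
# based on this thread -- adjust for your environment).
SPILLOVER_DIR="${SPILLOVER_DIR:-/opt/rapid7/nexpose/nsc/spillover-directory}"

if [ -d "$SPILLOVER_DIR" ]; then
  echo "Spillover present: $(find "$SPILLOVER_DIR" -type f | wc -l) file(s) pending retry"
else
  echo "No spillover directory - nothing is waiting to be resent"
fi
```

As noted later in the thread, the directory not existing at all is a good sign: nothing has failed to upload.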
I hope this helps clarify the questions you had!
Thank you for clarifying the purpose of the folders and the data within. Do you know if it is normal to retain this data on the security console for an extended time or should something be periodically grooming the data?
The data on our security console goes back just over 8 months. For example, the 108G
/opt/rapid7/nexpose/shared/temp/nexpose-export/asset/default folder holds over a million files, going back to August 27, 2020.
We do not currently have a
/opt/rapid7/nexpose/nsc/spillover-directory folder so hopefully that means our data is being synchronized in a timely manner.
I hope you had a lovely weekend. Unfortunately, it’s not normal to retain data in this folder for any extended period of time; it should constantly fluctuate based on your assessment/change frequency and, depending on your bandwidth constraints, it should slowly decrease.
We have something called a collector on the console. The collector is the mechanism that allows the console to send and receive data to our platform. We are able to see its status on our backend systems and any errors that it may be encountering. I hope you don’t mind, but I did a little digging in our backend for your environment. I noticed something may be contributing towards the backlog you’re seeing in the
/nexpose-export/ directory, so I made a quick adjustment to see if that would modify your collector’s behavior. Please keep an eye on this directory and let me know if it starts to decrease. If not, and/or if you prefer to keep the rest of these details private, please don’t hesitate to open a case with our support team and reference this discussion link so they have some context.
Thank you so much for your time, Jim!
I had a pretty good weekend, I hope yours was magnificent.
Taking a quick look, the
/opt/rapid7/nexpose/shared/temp/nexpose-export/tag folder only has data from today, whereas it previously had data going back to last August. The number of files in the
/opt/rapid7/nexpose/shared/temp/nexpose-export tree is also slowly decreasing, and the folder size has dropped by about 2G since I started looking. I’ll report back later today, but I suspect you have corrected the issue.
I don’t mind the discussion here as long as we avoid anything potentially sensitive. Perhaps someone else will benefit down the road. If you need a case for this, I’m more than willing to spin one up. Easy enough.
Thanks so much for looking into this!
I’m glad to hear you had a good weekend, mine was pretty relaxing, thank you!
Yeeees, I’m so glad to hear! I’ll keep watching for an update as well. I don’t think a case will be necessary unless we’re not able to resolve this successfully here.
Just as an FYI, I’ve asked our Product Management team to expedite a story we have to add a console notification mechanism so that customers may be notified via the bell and drop down menu whenever the embedded collector is in a non-operational status and the console is not syncing data to the cloud/platform. Hopefully, it won’t be too long and you’ll see that in our release notes soon. This way, if this ever happens again you’ll receive a proactive notification so that you know to reach out to us and open up a case.
Since checking Monday morning, roughly 200k files consuming 20GB of space have been processed. There are still about a million files and 105G of data to sift through. It might take another minute.
[nexpose-export]# ls -lR | wc -l
[nexpose-export]# du -hd1
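For a rough sense of how long the remaining backlog might take, here’s a back-of-envelope sketch using the figures above. It assumes a steady drain rate over roughly 48 hours since the Monday check, and it ignores any new files added by ongoing scans, so the real time would be longer:

```shell
#!/bin/sh
# Back-of-envelope catch-up estimate (all figures taken from this thread;
# the 48-hour window is an assumption).
processed=200000    # files drained since Monday
hours=48            # approximate elapsed time in hours
remaining=1000000   # files still queued

rate=$(( processed / hours ))       # files drained per hour (~4166)
eta_hours=$(( remaining / rate ))   # hours until the queue is empty (~240)
echo "~${rate} files/hour; roughly ${eta_hours} hours ($(( eta_hours / 24 )) days) to drain"
```

So “another minute” works out to something on the order of ten days at the current pace.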
Was the data synchronization process broken or just the clean-up on our security console? If it wasn’t synchronizing correctly, that might explain some discrepancies I’ve seen in the exposure analytics dashboards and I’ll be suuuper happy to see those working as expected.
Thank you for the heads-up on the new monitoring/alerting. That’s awesome that you were able to identify the problem, fix, and slot the alert so quickly. I hope I never see it! (Except in the patch notes!)
Sounds like just another minute or two will do it. If (and this is a big if, depending on your business requirements) you have the ability to do so, setting a blackout for a couple of days where you don’t run any scans will likely give the console the ‘catch up’ time it needs. As you continue to scan, new files are added to the export folder even as older ones are sent, so it’s a bit of a balancing act. This is only because it got into the state it was in for so long, unfortunately.
It looks like either at setup of the console, during a restore, or something else (our logs don’t go back that far, so it’s impossible to tell now), something put the embedded collector on your console into a down state, which meant it wasn’t able to function normally and handle the sync jobs. In a lot of cases, customers notice this while using the cloud features: they see a discrepancy between the counts of sites, tags, or assets reported by the console and the platform, open a case with support, and it gets sorted rather quickly.
Once this folder is empty-ish (remember, we can always expect some data in here, but it should be on the order of MB, not GB), you should see your dashboard cards, Query Builder, projects, and the rest of the platform functionality reflect your console data with near real-time accuracy.
Hopefully this catches up for you soon and you can start to enjoy all of the benefits of the new dashboards we’ve recently added, complete with all of your appropriate asset data.
Have a wonderful rest of your week, Jim!
Circling back on this. The data synchronization looks like it finished up today. We’re down to 316K of data within 119 files, and it’s holding steady there. The exposure analytics dashboards are also filling out with current data. I’m looking forward to using some of the new dashboards and cards. I believe we’re back in business!
Once again, you’ve been wonderful and thank you for your help!
Thanks so much for checking in and letting me know - I’m so glad to hear this is all set for you now!
It’s been my pleasure to assist you and I hope you have an amazing weekend!
I am encountering the same issue; it is eating up 400+ GB and we are out of disk space.
Can you perform a checkup and fix this issue?
The application has completely stopped working because these files ate up the HDD space.
A quick response and solution would be highly appreciated.
Hi Naveen! Based on what you’ve shared here, I would recommend opening a Support case in the Customer Portal. Our Support team will be able to dig into your environment in more detail than we’re able to here on the forum and help figure out what’s causing this disk space issue.
You can open a Support case here: https://r7support.force.com/