Splunk TA add-on Data from Nexpose not populating

@here I have a customer trying to get Nexpose data into Splunk. Nexpose is on one server and Splunk is on a laptop. Connectivity seems to be working and the login tests fine, but I'm wondering if the certificate is the problem. He has the TA and a Splunk heavy forwarder installed. Can anyone confirm whether the errors below are relevant? No data is reaching the Splunk TA interface.
splunkd
03-23-2020 17:03:35.426 -0400 ERROR X509Verify - X509 certificate (CN=10.120.51.89) failed validation; error=19, reason="self signed certificate in certificate chain"
03-23-2020 17:03:35.426 -0400 WARN SSLCommon - Received fatal SSL3 alert. ssl_state='SSLv3 read server certificate B', alert_description='unknown CA'.
03-23-2020 17:03:47.466 -0400 ERROR HttpListener - Exception while processing request from 127.0.0.1 for /en-US/manager/TA-rapid7_nexpose/apps/local/TA-rapid7_nexpose/setup?action=edit: Connection closed by peer
03-23-2020 17:03:47.466 -0400 ERROR HttpListener - Handler for /en-US/manager/TA-rapid7_nexpose/apps/local/TA-rapid7_nexpose/setup?action=edit sent a 0 byte response after earlier claiming a Content-Length of 12785!
03-23-2020 09:22:47.834 -0400 INFO ExecProcessor - setting reschedule_ms=67032166, for command=python "C:\Program Files\Splunk\etc\apps\TA-rapid7_nexpose\bin\rapid7nexpose.py"
03-23-2020 15:28:21.862 -0400 INFO WatchedFile - Will begin reading at offset=0 for file='C:\Program Files\Splunk\var\log\splunk\metrics.log'.
03-23-2020 15:28:21.880 -0400 INFO WatchedFile - Will begin reading at offset=24992560 for file='C:\Program Files\Splunk\var\log\splunk\metrics.log.1'.
03-23-2020 16:36:17.977 -0400 WARN HttpListener - Connection from 127.0.0.1 didn't send us any data, disconnecting
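For context, error=19 is OpenSSL's X509_V_ERR_SELF_SIGNED_CERT_IN_CHAIN: the certificate chain the peer presented terminates in a CA the connecting side does not trust. You can reproduce that failure class outside Splunk with plain openssl (a diagnostic sketch; the CN below just mirrors the address in the error and is purely illustrative):

```shell
# Create a throwaway self-signed cert, standing in for a default
# Splunk/Nexpose certificate (CN copied from the error above for illustration).
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
    -days 1 -subj "/CN=10.120.51.89"

# Verifying against the default CA store fails -- the same class of
# failure splunkd is logging:
openssl verify cert.pem || true

# Verification succeeds once the signing cert is explicitly trusted,
# which is what pointing Splunk at the correct CA bundle accomplishes:
openssl verify -CAfile cert.pem cert.pem
```

Against the live system, `openssl s_client -connect 10.120.51.89:8089 -showcerts` (management port; substitute 9997 if the failure is on the forwarding channel) will show the exact chain being presented.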

@david_munoz These errors and warnings seem to be occurring during the setup of the Nexpose TA under Manage Apps. We don't make any requests to the Nexpose console when settings are configured. Is it possible the error is occurring due to the heavy forwarder's self-signed certificate and its communication with the indexer? Or is it possible there is a proxy sitting between the two?
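If it does turn out to be the heavy forwarder's connection to the indexer, the usual remedies are to add the signing CA to the forwarder's trust store, or (only while testing) to disable server-certificate verification. A hedged sketch of the relevant settings, assuming default Splunk paths on Windows; the CA file name and output group name are placeholders:

```
# server.conf on the heavy forwarder -- point Splunk at the CA that signed
# the indexer's certificate (path/file name below is a placeholder):
[sslConfig]
sslRootCAPath = C:\Program Files\Splunk\etc\auth\myCA.pem

# outputs.conf -- only if you must bypass validation while troubleshooting;
# not recommended for production:
[tcpout:my_indexers]
sslVerifyServerCert = false
```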

Looks like there may be a helpful answer to a similar issue: https://answers.splunk.com/answers/552930/why-is-ssl-on-universal-forwarder-failing-with-err.html

Let us know if this helps!