User Trust and Engagement
This usually happens when ML is conducted primarily by data scientists in isolation from users and stakeholders, and can be avoided by:
Engage with users from the start - understand what problem they expect the model to solve for them, and use that to frame your initial investigation and analysis.
Demo and explain your model results to users as part of your iterative model development - take them on the journey with you.
Focus on explainability - this may relate to the model itself: your users may want feedback on how it arrived at its decision (e.g. surfacing the values of the most important features used to provide a recommendation). Or it may mean guiding your users on how to act on the end result (e.g. talking through how to threshold against a credit risk score).
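Surfacing the most important features alongside a prediction can be sketched as below. The feature names and contribution values are hypothetical; in practice the per-prediction contributions might come from a tool such as SHAP or from a linear model's weighted inputs.

```python
def top_feature_contributions(contributions, k=3):
    """Return the k features that most influenced a prediction,
    largest absolute contribution first."""
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return ranked[:k]

# Hypothetical per-prediction contributions for a credit decision.
contributions = {
    "income": 0.42,
    "missed_payments": -0.35,
    "account_age_years": 0.08,
    "num_credit_cards": -0.03,
}
print(top_feature_contributions(contributions, k=2))
# [('income', 0.42), ('missed_payments', -0.35)]
```

Showing signed contributions (rather than just names) lets users see which factors pushed the decision in each direction.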
Users will prefer concrete, domain-based values over abstract scores or data points, so feed this consideration into your algorithm selection.
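One lightweight way to translate an abstract score into a domain-based value is to band it into labels users can act on. This is a minimal sketch; the band boundaries and wording are illustrative and should be agreed with your domain experts.

```python
def risk_band(score):
    """Translate an abstract credit-risk score in [0, 1] into a
    domain label users can act on. Boundaries are illustrative."""
    if score < 0.3:
        return "low risk - auto-approve"
    elif score < 0.7:
        return "medium risk - manual review"
    return "high risk - decline"

print(risk_band(0.15))  # low risk - auto-approve
print(risk_band(0.82))  # high risk - decline
```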
Give users access to model monitoring and metrics (link here) once you are in production - this helps maintain trust by letting them check on model health whenever they have concerns.
Provide a feedback mechanism - ideally available directly alongside the model result. This lets users confirm good results and flag suspicious ones, and can be a great source of labelling data. Knowing that their actions can have a direct impact on the model builds trust and empowerment.