Poor security practices
Operationalising ML uses a mixture of infrastructure, code and data, all of which should be implemented and operated securely. Our guidance describes the practices we know are important for secure development and operations, and these should be applied to your ML development and operations.
Some specific security pitfalls to watch out for in ML-based solutions are:
Making the model accessible to the whole internet - making your model endpoint publicly accessible may expose unintended inferences or prediction metadata that you would rather keep private. Even if your predictions are safe for public exposure, making your endpoint anonymously accessible may present cost management issues. A machine learning model endpoint can be secured using the same mechanisms as any other online service.
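One of those standard mechanisms is simply requiring an authentication credential on every inference call. The sketch below is illustrative, not tied to any particular framework: `authorise_request`, `predict` and the `MODEL_API_TOKEN` variable are hypothetical names, and the fixed score stands in for real model inference.

```python
import hmac
import os

# Hypothetical secret; in practice this would come from a secret store,
# not a hard-coded default.
MODEL_API_TOKEN = os.environ.get("MODEL_API_TOKEN", "change-me")


def authorise_request(headers: dict) -> bool:
    """Reject any call to the model endpoint without a valid bearer token.

    Uses a constant-time comparison to avoid leaking the token length
    or prefix through timing differences.
    """
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    supplied = auth[len("Bearer "):]
    return hmac.compare_digest(supplied, MODEL_API_TOKEN)


def predict(headers: dict, features: list) -> dict:
    """Model endpoint handler: authenticate first, infer second."""
    if not authorise_request(headers):
        return {"status": 401, "error": "unauthorised"}
    # ... run the model here; a fixed score stands in for real inference
    return {"status": 200, "score": 0.5}
```

Anonymous callers get a 401 before any inference runs, which addresses both the information-exposure and the cost-management concerns above.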
Exposure of data in the pipeline - your solution will almost certainly include data pipelines, and in some cases these may handle personal data during training. These pipelines should be protected to the same standards as in any other development.
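One common protection is to pseudonymise direct identifiers before records enter the training set, so raw personal data never sits in the pipeline at all. The sketch below assumes hypothetical field names (`name`, `email`) and a `pseudonymise` helper that is purely illustrative:

```python
import hashlib


def pseudonymise(record: dict, pii_fields=("name", "email")) -> dict:
    """Replace direct identifiers with salted hashes before the record
    enters the training set, so downstream stages never see raw PII."""
    # Assumption: in a real pipeline the salt would come from a secret
    # store, not a literal in the source.
    salt = b"per-project-secret-salt"
    out = dict(record)
    for field in pii_fields:
        if field in out:
            digest = hashlib.sha256(salt + str(out[field]).encode()).hexdigest()
            out[field] = digest[:16]  # truncated hash keeps joins possible
    return out
```

Non-identifying features pass through unchanged, while the hashed fields still allow records from the same person to be joined without exposing who that person is.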
Embedding API Keys in mobile apps - a mobile application may need specific credentials to directly access your model endpoint. Embedding these credentials in your app allows them to be extracted by third parties and used for other purposes. Securing your model endpoint behind your app backend can prevent uncontrolled access.
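The backend-proxy pattern above can be sketched as follows. All names here (`backend_predict`, `call_model`, `MODEL_KEY`) are hypothetical: the point is that the model credential lives only on the server, and the app authenticates users to the backend instead.

```python
# Secret held only by the backend; it is never shipped in the app binary.
MODEL_KEY = "server-side-secret"


def call_model(features: list, api_key: str) -> dict:
    """Stand-in for the real model endpoint, which checks the credential."""
    if api_key != MODEL_KEY:
        raise PermissionError("bad model credential")
    return {"score": 0.5}  # fixed score stands in for real inference


def backend_predict(user_session: str, features: list,
                    valid_sessions: set) -> dict:
    """The route the mobile app actually calls: validate the user's
    session, then forward to the model with the server-held key."""
    if user_session not in valid_sessions:
        raise PermissionError("invalid session")
    return call_model(features, MODEL_KEY)
```

A third party who decompiles the app finds no model credential to extract; revoking a single user's session cuts off their access without rotating the key for everyone.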