Make sure that off-the-shelf AI model is legit – it could be a poisoned dependency
Another kind of supply chain attack that can quietly mess up bots and apps
Updated

French outfit Mithril Security has managed to poison a large language model (LLM) and make it available to developers – to prove a point about misinformation.…
Author: Thomas Claburn. [Source: The Register]