GPT's "long-term memory" allows prompt injections to become permanent
Facepalm: "The code is TrustNoAI." This is a phrase that a white hat hacker recently used while demonstrating how he could exploit ChatGPT to steal anyone's data. So, it might be a code we should all adopt. He discovered a way hackers could use the LLM's persistent memory to exfiltrate data from any user continuously.
Copilot for Telegram uses GPT and Bing to assist users. You can use Copilot on the Telegram desktop and mobile apps for free to ask questions, search, or chat with the AI bot. Sharing your phone number is a requirement though.
In the future, you won't buy a car without an AI chip inside
A hot potato: Nvidia has been attempting to license its GPU technology to third-party chip manufacturers for quite some time. Taiwanese fabless chipmaker MediaTek has now announced a new partnership with the GPU giant to bring new "experiences" and AI edge capabilities to cars.
In context: Most, if not all, large language models censor responses when users ask for things considered dangerous, unethical, or illegal. Good luck getting Bing to tell you how to cook your company's books or crystal meth. Developers block the chatbot from fulfilling these queries, but that hasn't stopped people from figuring out workarounds.
Hackers could deploy the worms in plain text emails or hidden in images
In context: Big Tech continues to recklessly shovel billions of dollars into bringing AI assistants to consumers. Microsoft's Copilot, Google's Bard, Amazon's Alexa, and Meta's Chatbot already have generative AI engines. Apple is one of the few that seems to be taking its time upgrading Siri to an LLM and hopes to compete with an LLM that runs locally rather than in the cloud.