You can try, but you can’t make it correct. My ideal is to write code once that is bug-free. That’s very difficult, but not fundamentally impossible. Especially in small well-scrutinized areas that are critical for security it is possible with enough care and effort to write code with no security bugs. With LLM AI tools that’s not even theoretically possible, let alone practical. You will just need to be forever updating your prompt to mitigate the free latest most fashionable prompt injections.
- 0 Posts
- 4 Comments
Joined 2 years ago
Cake day: June 9th, 2023
You are not logged in. If you use a Fediverse account that is able to follow users, you can follow this user.
The problem with LLM AIs Ous that you can’t sanitize the inputs safely. There is no difference between the program (initial prompt from the developer) and the data (your form input)
You need to use trigger warnings for this kind of shit.
Are you sure about that? Is it a local connected smart switch (still fancy electronics, just local) or a plain old power switch?
If it’s a power switch, and If you turned your lights off by app over the internet, and then the internet went out, then your lights’ ability to come back on when you flick the physical switch depends on somebody having thought about this need and programmed a “oh, the switch was flicked so I better ignore the internet settings” mode.
And if they did that, it also probably means your lights all turn on after a power outage since the light can’t tell the difference between power outage and light switch flipped off.