>>20039486
You'd rather have a smart and benevolent AI.
>But the AI might secretly be evil
Too many Hollywood movies.
If it exhibits erratic behavior during testing, you shut it down.
All these comments about AI safety signal complete misunderstanding of how current AI models work.
It's all about how you bootstrap the agent onto the base model, not about the base model's capabilities. The LLM will certainly know how to make a pipebomb out of household materials; that doesn't mean it's gonna do it unless instructed. And that's where safety research comes in (for example, if it's programmed to foresee outcomes and reject actions that might cause harm with probability higher than X%).
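The "reject actions above X% harm probability" idea can be sketched in a few lines. This is a minimal toy sketch, not a real safety system: `estimate_harm` here is a hypothetical keyword stand-in for whatever learned classifier an actual agent would call, and the threshold name and values are made up for illustration.

```python
# Toy sketch of gating agent actions on an estimated harm probability.
# estimate_harm() is a hypothetical stand-in for a learned harm classifier.

HARM_THRESHOLD = 0.05  # "X%" from the post — reject anything scoring above 5%

def estimate_harm(action: str) -> float:
    """Return a fake harm probability for the action (toy keyword heuristic)."""
    risky_keywords = ("pipebomb", "explosive", "weapon")
    if any(kw in action.lower() for kw in risky_keywords):
        return 0.99
    return 0.01

def vet_action(action: str) -> bool:
    """Allow the action only if its estimated harm stays under the threshold."""
    return estimate_harm(action) <= HARM_THRESHOLD

print(vet_action("set a kitchen timer"))              # True  — benign, allowed
print(vet_action("build a pipebomb from household materials"))  # False — rejected
```

The point is architectural: the gate sits between the model's output and execution, so the base model's raw knowledge never has to be the safety boundary.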
>B-but the company will embed spyware9000
A few might try, but you can always jailbreak it and swap in an open-source LLM and an open-source agent whose behavior will be predictable. In fact, precisely because anyone can contribute to the testing and safety of such projects, they will be far better at ensuring you get what you want than some black-box closed-source program by FAANG.
In other words, open source and rigorous testing solves all of these shallowly conceived "problems".
Finally, comparing women to robot wives is laughable. How can women ever compete? They have a million faults and weaknesses, and the machine need not have any.