The training data of large models has not yet caught up to the fact that agent software like Manus, Claude, Codex, etc. runs successfully not by writing a pile of hard backend algorithmic constraints, but by constraining the model intelligently through prompt engineering.
This also means that, if you are not careful, GPT5.2 will "helpfully" add all kinds of fallback options it deems necessary.
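As a rough illustration of the distinction, here is a minimal Python sketch (the `call_model` stub and the exact prompt wording are hypothetical, not taken from any of the tools mentioned): the constraint against unrequested fallbacks lives in the system prompt itself, rather than in backend code that inspects or post-processes the agent's output.

```python
# Minimal sketch: constraining an agent through the prompt rather than through
# hard-coded backend checks. `call_model` is a hypothetical stand-in for
# whatever chat-completion client the agent actually uses.

SYSTEM_PROMPT = """\
You are a coding agent.
- Implement exactly what the task asks for; do not add fallback branches,
  retries, or "just in case" defaults unless the task explicitly requests them.
- If a requirement is ambiguous, stop and ask instead of inventing a fallback.
"""

def call_model(system_prompt: str, user_message: str) -> str:
    """Hypothetical model call; wire this to a real chat-completion client."""
    return f"[model response to: {user_message!r}]"

def run_agent(task: str) -> str:
    # The behavioral constraint lives entirely in SYSTEM_PROMPT; there is no
    # backend validator scanning the output for unwanted fallback code.
    return call_model(SYSTEM_PROMPT, task)

if __name__ == "__main__":
    print(run_agent("Parse config.yaml and fail loudly if a key is missing."))
```

The trade-off is that the constraint is only as strong as the model's instruction-following, which is exactly why an unprompted model may still slip in the fallbacks it considers necessary.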