How do you deal with incomplete answers from LLMs when generating code?
One of the big problems I have with ChatGPT/OpenAI and other LLMs is that they give incomplete or partial answers. For example, I might provide 10 function prototypes and prompt "Please write implementations for these ten function prototypes," and the bot will respond with implementations for only 3 of them. Even if I insist by saying something like "I want you to implement ALL of the prototypes, not just 3 of them," it still will not do it. Is there a way to cope successfully with this problem?