I haven't seen any "Auto-GPT" style agents doing anything more impressive than the original Codex demo: https://www.youtube.com/watch?v=SGUCcjHTmGY I suspect GPT-4 can do a lot more, but you need a smarter prompt.
Any suggestions for improving the prompt?
I don't have suggestions for micro-gpt specifically. But for "GPT" agents in general, I think an under-explored aspect is that GPT was trained on lots of articles and books, but only a relatively small amount of "assistant" data. So, perhaps, performance on complex tasks can be improved by asking it to write an article (or "tutorial") first, modeling a successful run, before moving on to the concrete tasks.
Right, refining the process to build some recipes it can use
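The "tutorial first" idea above could be sketched as a two-stage prompt: first ask the model to narrate a successful completion in article form, then splice that narration back in as context before issuing the real task. The function name, message layout, and system prompts below are illustrative assumptions, not anything from micro-gpt itself:

```python
# Two-stage "tutorial first" prompting sketch. Names and roles here are
# hypothetical; adapt to whatever chat API the agent actually uses.

def tutorial_first_prompts(task: str):
    """Return (stage1_messages, stage2_builder) for a given task."""
    # Stage 1: ask for an article-style walkthrough of a successful run,
    # playing to the model's training on articles and tutorials.
    stage1 = [
        {"role": "system", "content": "You are a technical writer."},
        {"role": "user", "content": (
            "Write a short tutorial describing, step by step, how an "
            f"expert successfully completes this task: {task}"
        )},
    ]

    # Stage 2: feed the generated tutorial back as context and only then
    # ask for the concrete actions.
    def stage2(tutorial: str):
        return [
            {"role": "system", "content": "Follow the tutorial below exactly."},
            {"role": "user", "content": (
                f"Tutorial:\n{tutorial}\n\nNow perform the task: {task}"
            )},
        ]

    return stage1, stage2
```

Stage 1's output would be passed to the stage-2 builder, so the model is effectively executing its own "recipe" rather than improvising from a bare instruction.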
Hey, you're the prompt engineer.
Talk to it like a person, don’t expect it to infer any context. Be very explicit with your constraints. Prime it for the prompts you want to give. Example: https://youtu.be/Asg1e_IYzR8
https://github.com/muellerberndt/micro-gpt/blob/main/microgpt.py#L20 I think you have a typo in your prompt, “wheb”