What ChatGPT does not solve
ChatGPT's capabilities are astonishing and frightening, to me at least. Still, after taking a deeper look at it and playing with it, I feel that it will not be as groundbreaking as one may fear, at least in its current version and the immediate future. Here are a couple of limitations I have observed and commented on elsewhere, written down so that I do not forget.
Answers are only one part of the problem
Working with customers of all sizes, there is one thing I have retained, and it sits at the core of every mission: problem definition. Every solution design I have been involved in normally starts with a good understanding of the issue at stake. And here's the trick: often, problems cannot be precisely defined by customers. It may be because they are missing what is needed to thoroughly investigate them (time, money, staff, expertise, pick any that apply). That is usually where you, the developer/consultant/product person... come into play (with your time, expertise, ...) to help make the root issue stand out.
ChatGPT, in my opinion, does not help with identifying the root cause, because it does not ask questions (unless you ask it to). It just assumes that you know what you are asking, which sounds reasonable on paper. One may argue that you could simply reframe the question iteratively based on the answers, but this does not account for the fact that it is really hard to figure out how to ask if you are not fully aware of the what and the why.
ChatGPT still needs you to be smarter in order to be smart itself
I have read on LinkedIn that, with the code-generation capabilities offered by ChatGPT, it will be useful for non-developers who wish to add some code to their website without having to understand programming in the first place. I disagree, mostly because 1/ ChatGPT is not perfect, just like humans ironically, and may output good-looking code that is just plain wrong; 2/ without an understanding of programming, even in its basic form, one may not be able to correctly frame the initial question, and may end up with suboptimal or, worse, wrong answers.
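To make the "good looking but wrong" point concrete, here is a hypothetical illustration (not actual ChatGPT output): a leap-year check that reads plausibly, runs without errors, and agrees with the correct version on most inputs, which is exactly the kind of bug a non-developer would never catch.

```python
# Hypothetical example of plausible-but-wrong generated code.
# Goal: decide whether a year is a leap year.

def is_leap_naive(year):
    # Looks right, runs fine, but misses the century rule.
    return year % 4 == 0

def is_leap(year):
    # Correct Gregorian rule: divisible by 4, except century years,
    # unless the year is also divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# The naive version agrees with the correct one on common inputs...
print(is_leap_naive(2024), is_leap(2024))  # True True
# ...but silently fails on century years like 1900.
print(is_leap_naive(1900), is_leap(1900))  # True False
```

Without knowing that the century rule exists, you cannot even think of the test case that exposes the bug, which is the whole problem with skipping the understanding step.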
To be continued!