It’s interesting how people are amazed by ChatGPT passing exams. Exams are narrowly designed processes with somewhat clear rubrics for determining scores, exactly the type of process that has long been used to train and improve machine learning and artificial intelligence systems. Never mind that it is passing Wharton MBA or law exams; these are special situations that, by design, can be somewhat ‘gamed’. And these are the situations where machines are in their element.
The fact that they merely pass the exams rather than excel reflects the variability of the exams and the desire to really pick out top human candidates. This is also a test of exam-setting itself: it reveals whether an exam is about more than just getting the answers right. Exams should be designed and set to be open to ‘surprise me’ kinds of answers.
We could all become machine-like, ask ‘What is going to be on the test?’ and then approach it by trying to get every answer right. Or we can learn to solve real-world problems by acting like humans: accepting our weaknesses and vulnerability, and cracking on bit by bit. Problems are rarely solved by invulnerability; they are typically solved by first acknowledging what we don’t know and moving at the edges of what we do know.