ChatGPT doesn't acknowledge being wrong?
8 by caldarons | 7 comments on Hacker News.
So far, in all the cases I have seen of ChatGPT providing a wrong answer, I have never seen it actually acknowledge that the answer it provides might be inaccurate. Am I the only one worried by this? For all the talk about "AI ethics", it seems striking to me that the current state-of-the-art model will opt for providing a convincing argument for why its wrong answer is correct, rather than say that the answer it provides might be inaccurate. (Funnily enough, this is what humans tend to do as well.)

Given that these models are often trained on data found on the internet, could this be a sign of the bias we tend to have when writing on social platforms and the internet in general? (As in, justifying our answers instead of trying to get to the correct one.)

So the questions are:

1. What are the consequences of this in the development of LLMs and in their application to various fields?

2. How would one implement this capability of recognizing where the model might be inaccurate? (See the sketch below.)

3. Is 2. really that much more complicated than what is currently being done? If not, then why hasn't it been done yet?
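Not an answer from the post itself, but as a rough illustration for question 2: one commonly discussed baseline is self-consistency, i.e. sampling the same question several times at non-zero temperature and treating disagreement between the samples as a signal that the answer may be inaccurate. The `ask_model` function below is a hypothetical placeholder for whatever chat API is in use, and the agreement threshold is arbitrary.

```python
from collections import Counter


def ask_model(question: str, temperature: float = 0.8) -> str:
    """Hypothetical placeholder: call whatever LLM API you use and
    return its answer as a string. Not a real library function."""
    raise NotImplementedError


def answer_with_confidence(question: str, n_samples: int = 5) -> str:
    """Sample the model several times and use agreement between the
    samples as a crude proxy for confidence (self-consistency)."""
    answers = [ask_model(question).strip().lower() for _ in range(n_samples)]
    top_answer, votes = Counter(answers).most_common(1)[0]
    confidence = votes / n_samples
    # Arbitrary threshold: below it, the answer is wrapped in a hedge
    # instead of being stated as fact.
    if confidence < 0.6:
        return f"I'm not sure, but possibly: {top_answer} (agreement {confidence:.0%})"
    return top_answer
```

This is only a sketch of one technique; calibrated uncertainty (e.g. from token-level probabilities or a separately trained verifier) is a harder research problem than a simple voting heuristic like this.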