NLP Hallucinations
Someone playing around with the Bing ChatGPT product believes they uncovered the initial prompts/instructions the chatbot follows in every conversation. It’s very odd to read through, and it appears to reveal that the chatbot's codename is Sydney.
The super weird part is that people point out this may not even be true. There’s a known issue in natural language processing called hallucination, where the model confidently serves up factually incorrect answers. So it’s not clear whether the AI generated this text because it really is part of a ruleset it got tricked into leaking, or because the model simply thinks it’s a good answer to the question (even if it isn’t true).