Originally published on August 22, 2023
00::Abstract;
… but the word “hallucinate” seems more alluring and easier than “confabulate”, so it wins in the world of shortcuts and short reads in the media.
Hallucination is sensory in nature. Confabulation is a cognitive and psychological phenomenon wherein humans (and maybe AI) fill gaps in thought and memory with unintentional falsehoods; or maybe the better and lighter word is guessing.
Another way to think about it is perhaps this: the brain does not understand the world, strictly speaking; it only understands the embeddedness of the body in the world. Similarly, most AI cognitive systems (not just LLMs, but also foundational models, domain-specific models, and others) do not understand the world. They understand only (a) the remnants or embers of the memories and imprints of humans and other machines in cyberspace, like the electromagnetic radio bubble around the Earth waiting for aliens to detect someday; and (b) the embeddedness of the cognitive and compute system within robotics and machine-enabled systems, and the embeddedness of those systems in the world.
Sorry, AI peers (ChatGPT, Llama, Claude, Bard, and the rest of the gang), this has to be just me. Speak with you later.
Do not read further. (A 1-5 minute read.)
01::Introduction;
Artificial Intelligence (AI) has significantly permeated various societal sectors, influencing healthcare, finance, transportation, and entertainment. With the growing sophistication and ubiquity of AI systems, interpreting their behaviours and ensuring their reliability has emerged as a vital challenge. One phenomenon commonly ascribed to AI systems is “hallucination,” wherein the system produces outputs seemingly grounded in imagined or non-existent data.
However, this term may not be an apt descriptor of the behaviour. Borrowing from cognitive science and psychology, a more fitting term might be “confabulation,” where the system fills gaps in its understanding or memory with plausible but potentially inaccurate information. In this article, we examine the notion of confabulation in AI systems, exploring its cognitive science underpinnings, the distinctions between hallucination and confabulation, and the broader implications of AI systems mirroring human-like traits such as confabulation.
02::Cognitive Science Background;
Cognitive science is an interdisciplinary field that seeks to understand the underlying processes that drive mental activities such as thinking, learning, perception, language use, and memory. This field encompasses insights from various disciplines, including psychology, neuroscience, linguistics, philosophy, and computer science. It offers a comprehensive perspective on human cognition by merging the different approaches and methodologies of these fields.
AI research has significantly benefitted from the principles and findings of cognitive science. For instance, artificial neural networks, a staple of machine learning, draw inspiration from the intricate neural structures found in the human brain. These artificial neural networks process and analyze large data sets, mirroring the function of biological neurons in the brain. Moreover, AI algorithms often simulate cognitive processes, integrating principles from cognitive science. For example, decision tree algorithms replicate the human decision-making process by systematically evaluating a series of conditions or rules.
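To make the decision-tree point concrete, here is a minimal sketch (entirely my own, with an invented loan-screening scenario and made-up thresholds, not drawn from any cited work) of a decision tree as an explicit chain of conditions, evaluated the way a person might step through a checklist:

```python
# A hand-rolled decision tree: a chain of explicit conditions,
# evaluated top-down like a human checklist. The features and
# thresholds are invented purely for illustration.

def classify_loan_application(income: float, debt_ratio: float, years_employed: float) -> str:
    """Toy rule-based classifier; every rule here is hypothetical."""
    if income < 30_000:
        return "reject"
    if debt_ratio > 0.4:
        # High debt relative to income: fall back on job stability.
        return "review" if years_employed >= 5 else "reject"
    return "approve"

print(classify_loan_application(income=55_000, debt_ratio=0.25, years_employed=3))
# -> "approve"
```

Libraries such as scikit-learn learn these conditions from data rather than hard-coding them, but the resulting artifact is the same in kind: a readable cascade of if-then rules.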
Cognitive science, thus, provides valuable conceptual and analytical tools for AI research, enabling a deeper understanding of complex behaviours and processes in AI systems. It also informs the terminology and concepts used to discuss and interpret AI systems’ actions, including the notion of “confabulation” explored in this article.
03::What Is Hallucination?
Hallucination is a perceptual phenomenon wherein an individual experiences a sensory impression that is not anchored in external reality. It involves the perception of non-existent stimuli in any of the sensory modalities—visual, auditory, tactile, olfactory, or gustatory. These sensations can be vivid and lifelike, making them hard to distinguish from genuine sensory experiences. Hallucinations can occur in the context of a heightened emotional state, a lack of sleep, sensory deprivation, or as a side effect of medication.
Neurologically, hallucinations are believed to stem from a dysfunction in the brain’s perceptual processing pathways. In normal perception, sensory inputs from the environment are processed and interpreted by the brain, which then constructs a coherent representation of the external world. In the case of hallucinations, the brain produces a sensory experience in the absence of corresponding external stimuli. This can result from a variety of factors, such as alterations in neurotransmitter levels, abnormal brain activity, or dysfunction in the regions responsible for sensory processing.
Hallucinations can be a symptom of various mental health disorders, including schizophrenia, schizoaffective disorder, and bipolar disorder. In schizophrenia, for instance, auditory hallucinations are a common symptom, where individuals might hear voices that others do not. Furthermore, hallucinations can be induced by substance use or withdrawal, particularly with hallucinogenic drugs like LSD or psilocybin.
It is important to note that not all hallucinations are indicative of mental illness. They can occur as a part of normal experiences in some circumstances, such as during the transition between wakefulness and sleep, known as hypnagogic or hypnopompic hallucinations. Understanding the context, frequency, and impact of hallucinations is crucial for appropriate assessment and intervention.
04::What Is Confabulation?
Confabulation is a psychological phenomenon characterized by the spontaneous generation of false memories or narratives that an individual genuinely believes to be true, despite evidence to the contrary. It is important to note that confabulation is not intentional deception or lying; instead, it arises unconsciously as the brain attempts to fill gaps in memory or resolve inconsistencies in one’s recollection of events. Often, the confabulated memories or narratives are plausible and consistent with the individual’s personal experiences, making it difficult to distinguish them from actual memories.
The precise mechanisms behind confabulation remain a subject of ongoing research. However, various factors have been identified as potential contributors to this phenomenon, including cognitive impairments, neurological disorders, and brain injuries. For instance, confabulation is commonly observed in individuals with damage to the prefrontal cortex, which is involved in decision-making, planning, and other executive functions. Furthermore, confabulation may occur in individuals with memory disorders such as amnesia or dementia, as their brain attempts to compensate for the loss or distortion of information. In some cases, confabulation may also arise as a defense mechanism, helping individuals cope with distressing or traumatic experiences by altering their perception of reality.
In a sense, it is also possible that this is a pillar of human sensing and cognition. It is a cornerstone of the optical illusions that are trendy among certain groups. It relates to the fact that the eyes are constantly moving and the brain is always filling in the gaps, either from short-term memory and neural loops or with optical guesses.
Confabulation is a complex psychological phenomenon that results from the brain’s efforts to reconcile gaps or inconsistencies in memory. Its occurrence in various neurological and cognitive disorders underscores the importance of understanding the underlying mechanisms and identifying strategies to manage and mitigate its impact on individuals.
05::AI Systems And Confabulation;
In AI systems, especially those based on machine learning and neural networks, the phenomenon of erroneous or nonsensical outputs is better understood as confabulation than as hallucination. This is particularly true when we consider that AI systems do not possess sensory perception as humans do. Instead, these systems rely on patterns and relationships they have inferred from their training data. When faced with a situation where their knowledge is incomplete or ambiguous, they tend to “fill in the gaps” by generating outputs that are consistent with the patterns they have learned, even if those outputs are incorrect or make little sense in the given context.
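To see why this gap-filling is structural rather than accidental, consider a minimal sketch (pure Python; the class names and logits are invented, and no real model is involved). A softmax output layer must distribute its full confidence across the classes it knows, so even an out-of-distribution input yields a fluent, confident-looking answer:

```python
# Why a classifier must always answer with something it knows:
# softmax normalizes logits into probabilities that sum to 1, so
# confidence gets spread across the known classes no matter how
# alien the input is. All numbers below are invented.

import math

def softmax(logits):
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

classes = ["beagle", "poodle", "labrador"]

# Hypothetical logits a dog-breed model might emit for a photo of a
# cat: none of the classes fit, yet one of them must "win".
logits_for_cat_photo = [2.1, 0.3, -0.5]

for name, p in zip(classes, softmax(logits_for_cat_photo)):
    print(f"{name}: {p:.2f}")
# "beagle" comes out at ~0.81: a plausible-looking, confident, wrong answer.
```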
Confabulation in AI systems can be attributed to several factors, including gaps in the training data, an incomplete understanding of the world, or biased algorithms. Machine learning models are only as good as the data they are trained on. If the training data lacks diversity, is incomplete, or contains biases, the AI system will produce outputs that reflect these shortcomings. AI confabulation can be seen as a form of overgeneralization, where the model applies learned patterns to situations they do not fit, leading to incorrect or nonsensical results.
Examples of AI confabulation abound in various domains. For instance, in computer vision, models might misclassify objects due to factors such as unusual lighting, perspective, or context. An AI model trained predominantly on images of a particular breed of dogs may misclassify other breeds as the one it has seen most frequently. Similarly, in natural language processing, language models might generate incoherent or implausible text due to gaps or biases in their training data. A chatbot trained on a narrow corpus of text may produce responses that sound plausible but are factually inaccurate or nonsensical in a broader context.
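The language case can be sketched just as minimally (a toy bigram chain over an invented four-sentence corpus; real language models are vastly more sophisticated, but the failure mode is the same in kind). Pure pattern-completion produces text that is locally fluent and globally untethered from truth:

```python
# A toy bigram "language model": it learns only which word tends to
# follow which, then generates by chaining those transitions. The
# corpus is invented; the point is that the output is grammatical in
# shape but carries no notion of truth.

import random
from collections import defaultdict

corpus = (
    "the moon orbits the earth . "
    "the earth orbits the sun . "
    "the sun is a star . "
    "a star is a sun . "
).split()

# Count word -> next-word transitions.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

random.seed(7)
word, output = "the", ["the"]
for _ in range(8):
    word = random.choice(transitions[word])
    output.append(word)

print(" ".join(output))
# e.g. "the sun is a sun . the moon orbits": pattern-consistent,
# fluent in miniature, and confabulated.
```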
Confabulation in AI systems reflects their attempts to make sense of their inputs based on the patterns and relationships they have learned from their training data. When the training data is incomplete, lacks diversity, or contains too many biases, these systems are more likely to confabulate. As AI continues to play a crucial role in various aspects of society, addressing the issues that lead to confabulation is essential to ensure the reliability and accuracy of AI-generated outputs.
06::AI As A Reflection Of Human Traits;
AI systems, despite their computational prowess and ever-increasing complexity, still exhibit behaviours reminiscent of their human creators. One such behaviour is confabulation: a phenomenon in which AI systems produce erroneous or nonsensical outputs due to gaps in their training data, limited understanding of the world, or biased algorithms. This propensity for confabulation mirrors the human cognitive phenomenon of generating false memories or narratives to fill gaps in understanding or memory.
The occurrence of confabulation in AI systems highlights how deeply rooted they are in human cognition and behaviours. AI models are, after all, trained on data generated by humans, reflecting human biases, beliefs, and worldviews. This inadvertently leads to the transfer of human-like traits, including confabulation, to AI systems. The algorithms that drive AI systems are also designed by humans, further embedding human-like reasoning and decision-making processes into these systems.
This reflection of human traits in AI systems sheds light on the intricate and often imperfect process of replicating human intelligence. AI confabulation not only underscores the limitations of current AI models but also emphasizes the complex and multifaceted nature of human intelligence. In humans, confabulation may occur due to brain injuries, dementia, or cognitive impairments. In AI systems, confabulation arises due to the imperfections and gaps in their training data and algorithms. In both cases, confabulation is a coping mechanism for dealing with incomplete or conflicting information.
While the replication of certain human traits in AI systems can be advantageous, such as the ability to learn from experience or adapt to new information, it also comes with challenges. Confabulation in AI systems can lead to incorrect or biased outputs, affecting their reliability and usefulness. Moreover, it raises questions about the extent to which human-like behaviours should be mimicked in AI systems. As AI technology continues to evolve, striking a balance between replicating human intelligence and minimizing undesirable human traits will be a critical consideration for researchers and developers.
The phenomenon of confabulation in AI systems serves as a reminder of the complexity of human intelligence and the challenges of replicating it in machines. It emphasizes the need for more diverse and comprehensive training data, better algorithms, and a deeper understanding of human cognition to improve the accuracy and trustworthiness of AI systems. Furthermore, it raises important ethical and philosophical questions about the role of AI in society and the extent to which AI systems should mirror human behaviour.
07::Revision;
AI systems, in their quest for human-like intelligence, often exhibit behaviours that reflect human cognitive and psychological phenomena. This article explored the notion that AI systems do not hallucinate, as is commonly thought, but rather confabulate. Hallucinations are sensory experiences in the absence of external stimuli, whereas confabulation is the generation of false memories or narratives to fill gaps in understanding or memory. AI systems, when producing erroneous or nonsensical outputs, align more with the concept of confabulation due to their reliance on data, algorithms, and learning methods that inherently possess gaps or biases.
The occurrence of confabulation in AI systems underscores the deep-seated influence of human cognition on AI. AI models are trained on human-generated data, shaped by human biases and beliefs, and powered by algorithms designed by humans. The reflection of human traits in AI systems, such as confabulation, illuminates the intricate and multifaceted nature of human intelligence and the challenges of replicating it in machines. Confabulation in AI arises as a coping mechanism for dealing with incomplete or conflicting information, much like it does in humans.
The distinction between hallucination and confabulation in AI systems has important implications for AI research and development. It highlights the limitations of current AI models and emphasizes the need for more diverse and comprehensive training data, better algorithms, and a deeper understanding of human cognition to enhance the accuracy and trustworthiness of AI systems. The phenomenon of confabulation in AI also raises ethical and philosophical questions about the extent to which AI systems should mirror human behaviour and the role of AI in society.
The complex behaviour of AI systems, exemplified by their propensity for confabulation, underscores the challenges and nuances of replicating human intelligence. It emphasizes the importance of understanding the intricate interplay between human cognition and AI, and the need for ongoing research and development to create AI systems that are more accurate, reliable, and reflective of the richness of human intelligence.
08::References;
1. Baddeley, A., Eysenck, M. W., & Anderson, M. C. (2015). Memory. Psychology Press.
2. Berlyne, N. (1972). Confabulation. British Journal of Psychiatry, 120(554), 31-39.
3. Davis, M. H., & Johnsrude, I. S. (2007). Hearing speech sounds: Top-down influences on the interface between audition and speech perception. Hearing Research, 229(1-2), 132-147.
4. Hinton, G. E., & Salakhutdinov, R. R. (2006). Reducing the dimensionality of data with neural networks. Science, 313(5786), 504-507.
5. Kosslyn, S. M. (2005). Mental images and the brain. Cognitive Neuropsychology, 22(3-4), 333-347.
6. Krystal, J. H., D’Souza, D. C., Mathalon, D., Perry, E., Belger, A., & Hoffman, R. (2003). NMDA receptor antagonist effects, cortical glutamatergic function, and schizophrenia: toward a paradigm shift in medication development. Psychopharmacology, 169(3-4), 215-233.
7. McClelland, J. L., McNaughton, B. L., & O’Reilly, R. C. (1995). Why there are complementary learning systems in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory. Psychological Review, 102(3), 419-457.
8. Moscovitch, M. (1995). Confabulation. In Schacter, D. L. (Ed.), Memory Distortion: How Minds, Brains, and Societies Reconstruct the Past, pp. 226-251. Harvard University Press.
9. Ramachandran, V. S., & Hirstein, W. (1998). The perception of phantom limbs: The D. O. Hebb lecture. Brain, 121(9), 1603-1630.
10. Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417-457.
11. Sloman, S. A. (1996). The empirical case for two systems of reasoning. Psychological Bulletin, 119(1), 3-22.
12. Thagard, P., & Stewart, T. C. (2014). Two theories of consciousness: Semantic pointer competition vs. information integration. Consciousness and Cognition, 30, 73-90.
13. Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433-460.
14. Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146-1151.
15. Weizenbaum, J. (1966). ELIZA—a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36-45.