17/06/2024
Alongside worries about the rise of Skynet and the use of LLMs such as ChatGPT to replace work that could and should be done by humans, one line of inquiry concerns what exactly these programs are up to: in particular, there is a question about the nature and meaning of the text produced, and of its connection to truth. In this paper, we argue against the view that when ChatGPT and the like produce false claims they are lying or even hallucinating, and in favour of the position that the activity they are engaged in is bullshitting, in the Frankfurtian sense (Frankfurt, 2002, 2005). Because these programs cannot themselves be concerned with truth, and because they are designed to produce text that looks truth-apt without any actual concern for truth, it seems appropriate to call their outputs bullshit.
https://link.springer.com/article/10.1007/s10676-024-09775-5
Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, are better understood as bullshit in the Frankfurtian sense: the models are, in an important way, indifferent to the truth of their outputs.