This article by Ethan Zuckerman in Prospect magazine set me thinking about bullshit: Tech has an innate problem with bullshitters. But we don’t need to let them win.
During my tech career in the 1980s and 1990s I was very familiar with the concept of bullshit, and was staggered by the amount of bullshit generated by senior managers in my organisation. We even had a ‘bullshitter of the year’ award.
The Prospect article suggests that the term was made popular in the 1980s by philosopher Harry G Frankfurt in a short book, On Bullshit. For Frankfurt, bullshit is more complex than falsehood. The liar knows the things he is saying are untrue, while the bullshitter, bluffing persuasively, “does not care whether the things he says describe reality correctly. He just picks them out, or makes them up, to suit his purpose.” Sounds like a few politicians and entrepreneurs you may be aware of? Try Trump, Johnson, Musk…
Now, it has struck me that the emerging AI chatbots are potentially going to massively increase the amount of bullshit around, in this case not deliberately, but simply because of how they operate, hoovering up information from the web. So I thought I'd ask the OpenAI tool ChatGPT to help me along. Here's our conversation.
Conversation with ChatGPT on Bullshit
Me: How do you define bullshit?
ChatGPT: “Bullshit” refers to statements, claims, or information that are untrue, misleading, or insincere. It’s often used to describe information that is deliberately intended to deceive or manipulate people.
Bullshit can take many forms, such as false advertising, propaganda, spin, or exaggeration. It can also refer to nonsensical or meaningless statements that lack any real substance or value.
In general, bullshit is information that is not based on evidence, reason, or fact and is not intended to inform or educate, but rather to mislead or manipulate people. It’s important to be able to recognize and distinguish between bullshit and trustworthy information in order to make informed decisions and avoid being misled.
Me: How can I detect bullshit?
ChatGPT: There are several strategies you can use to help detect bullshit:
- Verify the source: Check if the information is coming from a credible source.
- Fact-check: Use reliable sources such as fact-checking websites, government agencies, and academic journals to verify the information.
- Look for evidence: Does the information have solid evidence to support it, or is it based on opinions or conjecture?
- Consider the motive: What is the motive behind the information being shared? Is it to inform, entertain, or deceive?
- Seek other perspectives: Look for information from a variety of sources to get a well-rounded understanding of the issue.
- Use your critical thinking skills: Evaluate the information objectively and don’t accept everything at face value.
Remember, just because information is widely shared or appears on a reputable website does not mean it is true. By using these strategies, you can help reduce your exposure to bullshit and make more informed decisions.
Me: Is there a correlation between level of management and amount of bullshit spoken?
ChatGPT: There is no definitive answer to this question as it can vary depending on the individual and the organization. Some research suggests that there may be a correlation between the level of management and the amount of bullshit spoken, as higher-level managers may be more likely to engage in spin or manipulation in order to protect their interests or present a positive image of their organization.
However, it is important to note that not all managers engage in this behavior and that it is not exclusive to higher-level positions. In some cases, lower-level employees may also engage in bullshit in order to advance their own careers or protect their own interests.
Ultimately, the amount of bullshit spoken by an individual or in an organization is influenced by a complex interplay of factors, including personal values, organizational culture, and power dynamics. It’s up to each individual to use their critical thinking skills and assess the information they receive based on evidence, reason, and fact.
Me: Can ChatGPT be used to create bullshit?
ChatGPT: As an AI language model, OpenAI’s ChatGPT can be used to generate a wide variety of text, including potentially misleading or false information. However, it is important to note that the AI does not have personal beliefs or motivations, and generates text based solely on patterns it has learned from the data it was trained on.
It is up to the user to evaluate the information generated by the model and determine its credibility. Just like with any other source of information, it’s important to use critical thinking and fact-checking to verify the accuracy of the information and distinguish between trustworthy information and potential bullshit.
It’s also important to remember that while OpenAI’s language model is trained on a large corpus of text, it is not a source of truth or a substitute for human judgement and expertise. It is up to individuals to use their own judgement and discretion when evaluating the information generated by the model.
The answers are scarily plausible, all generated within a few seconds. ChatGPT is impressive, if a bit verbose!
Basically it’s saying: never believe what you read or hear unless it’s corroborated by other credible sources and by your own judgement and intuition.
Yes, these tools will be generating a load more bullshit, but not intentionally.
It also occurs to me that these tools can be used to generate blog posts; there could even be AI bloggers ‘who’ emit quite plausible posts. Wow. A strange world we’re moving into!
Featured image was created with Inkscape by Anynobody, composing work: Mabdul ., CC BY-SA 3.0 https://creativecommons.org/licenses/by-sa/3.0, via Wikimedia Commons