The Age of Bullshit and AI

This article by Ethan Zuckerman in Prospect magazine set me thinking about bullshit: Tech has an innate problem with bullshitters. But we don’t need to let them win.

During my tech career in the 1980s and 1990s I was very familiar with the concept of bullshit, and was staggered by the amount of bullshit generated by senior managers in my organisation. We even had a ‘bullshitter of the year’ award.

The Prospect article suggests that the term was made popular in the 1980s by philosopher Harry G Frankfurt in a short book, On Bullshit. For Frankfurt, bullshit is more complex than falsehood. The liar knows the things he is saying are untrue, while the bullshitter, bluffing persuasively, “does not care whether the things he says describe reality correctly. He just picks them out, or makes them up, to suit his purpose.” Sounds like a few politicians and entrepreneurs you may be aware of? Try Trump, Johnson, Musk…

Now, it has struck me that the emerging AI chatbots are potentially going to massively increase the amount of bullshit around, in this case not deliberately, but simply because of how they operate, hoovering up information from the web. So I thought I’d ask the OpenAI tool ChatGPT to help me along. Here’s our conversation.

Conversation with ChatGPT on Bullshit

Me: How do you define bullshit?

ChatGPT: “Bullshit” refers to statements, claims, or information that are untrue, misleading, or insincere. It’s often used to describe information that is deliberately intended to deceive or manipulate people.

Bullshit can take many forms, such as false advertising, propaganda, spin, or exaggeration. It can also refer to nonsensical or meaningless statements that lack any real substance or value.

In general, bullshit is information that is not based on evidence, reason, or fact and is not intended to inform or educate, but rather to mislead or manipulate people. It’s important to be able to recognize and distinguish between bullshit and trustworthy information in order to make informed decisions and avoid being misled.

Me: How can I detect bullshit?

ChatGPT: There are several strategies you can use to help detect bullshit:

  1. Verify the source: Check if the information is coming from a credible source.
  2. Fact-check: Use reliable sources such as fact-checking websites, government agencies, and academic journals to verify the information.
  3. Look for evidence: Does the information have solid evidence to support it, or is it based on opinions or conjecture?
  4. Consider the motive: What is the motive behind the information being shared? Is it to inform, entertain, or deceive?
  5. Seek other perspectives: Look for information from a variety of sources to get a well-rounded understanding of the issue.
  6. Use your critical thinking skills: Evaluate the information objectively and don’t accept everything at face value.

Remember, just because information is widely shared or appears on a reputable website does not mean it is true. By using these strategies, you can help reduce your exposure to bullshit and make more informed decisions.

Me: Is there a correlation between level of management and amount of bullshit spoken?

ChatGPT: There is no definitive answer to this question as it can vary depending on the individual and the organization. Some research suggests that there may be a correlation between the level of management and the amount of bullshit spoken, as higher-level managers may be more likely to engage in spin or manipulation in order to protect their interests or present a positive image of their organization.

However, it is important to note that not all managers engage in this behavior and that it is not exclusive to higher-level positions. In some cases, lower-level employees may also engage in bullshit in order to advance their own careers or protect their own interests.

Ultimately, the amount of bullshit spoken by an individual or in an organization is influenced by a complex interplay of factors, including personal values, organizational culture, and power dynamics. It’s up to each individual to use their critical thinking skills and assess the information they receive based on evidence, reason, and fact.

Me: Can ChatGPT be used to create bullshit?

Chatgpt: As an AI language model, OpenAI’s ChatGPT can be used to generate a wide variety of text, including potentially misleading or false information. However, it is important to note that the AI does not have personal beliefs or motivations, and generates text based solely on patterns it has learned from the data it was trained on.

It is up to the user to evaluate the information generated by the model and determine its credibility. Just like with any other source of information, it’s important to use critical thinking and fact-checking to verify the accuracy of the information and distinguish between trustworthy information and potential bullshit.

It’s also important to remember that while OpenAI’s language model is trained on a large corpus of text, it is not a source of truth or a substitute for human judgement and expertise. It is up to individuals to use their own judgement and discretion when evaluating the information generated by the model.

Thoughts

The answers are scarily plausible, all generated within a few seconds. ChatGPT is impressive, if a bit verbose!

Basically it’s saying: never believe what you read or hear unless it’s corroborated by other credible sources and your own judgement and intuition.

Yes, these tools will be generating a load more bullshit, but not intentionally.

It also occurs to me that these tools can be used to generate blog posts; there could even be AI bloggers, ‘who’ emit quite plausible posts. Wow. A strange world we’re moving into!

Featured image was created with Inkscape by Anynobody, composing work: Mabdul ., CC BY-SA 3.0 https://creativecommons.org/licenses/by-sa/3.0, via Wikimedia Commons

AI, Art and Life

Eric Wayne has just published a most thought-provoking post entitled AI Won. Human Artists and Humankind are Defeated. It’s well worth reading, particularly if, like me, you’re not aware of the amazing capabilities exhibited by the latest AI programs. As Eric says: “the latest algorithm from Midjourney enables anyone at all to make astounding art without any prior skill, training, understanding, or even exposure to art…” Surely an amazing statement, but Eric is an accomplished artist and I’m sure he’s correct.

So, whatever inspiration the artist puts into his/her work can be simulated by the AI forever more and in great variety. Art would appear to have really gone the way of chess, where computers can now easily beat the best human players. And of course other forms of AI are being developed by the likes of Google to make informed decisions from huge amounts of data that would be beyond the individual human being, potentially revolutionising transport, healthcare, environmental management and other sectors of the economy.

What the AI can never do is copy the inner lived experience of the human being, the pleasure of playing a game of chess with another human, the joy of following one’s own creativity, or appreciating the creativity of another, or the appreciation of the AI work itself. Yes, it can simulate all these things, but AI is all on the surface; there is no depth, no life. It is a massive simulation of what the left brain can do and understand. There is no equivalent of the right brain, other than through simulation.

So we face a world of massive change, through an artificial intelligence that has no inner world, no conscience, no morality, no intuition. In a sense this is the ultimate left-brain project whereby, somewhere along the way of our development, morality was replaced by laws, the inspiration of the prophets was superseded by institutionalised religions, and now creativity is replaced by algorithms.

We cannot stop all this development, which is itself wonderfully creative. However, we are approaching a world of some peril. Consider the use of AI in warfare. The AI has no moral sense, no common sense, other than a set of rules that someone may have encoded into it. The challenge, as Isaac Asimov was telling us all those years ago, is how do we keep any sort of control over this stuff? Maybe we can’t and, in the end, good and bad things will happen…

Featured image was generated by AI in a few seconds – see Eric’s post.

Multiple Me’s?

I’m generally a great fan of The Guardian/ The Observer, but they do sometimes publish a load of nonsense, because they have a blind spot, being entirely materialistic and denying the interiority of mind and the spiritual. Here is a recent example that just appeared in my inbox: What happens if your mind lives for ever on the internet?

This article actually takes this question seriously and goes on to examine the implications of multiple versions of ‘you’.

I would suggest that this is nonsense, like much that is written about so-called Artificial Intelligence.

Yes, I accept that at some point it may be possible to understand aspects of my/your brain activity and put it up on the internet as some sort of simulation of me/you. But it will be just that, a simulation. It will be algorithmic and will not be conscious. It will be all ‘outer’ and no ‘inner’. It will not contain the essence of me/you.

And thank God for that!

Featured image is from the article.


Can computers ever be conscious?

This question is posed in an interesting paper True Artificial Consciousness – Is It Possible? from Sean Webb on the IONS blog. The paper is quite detailed and worth a read if you’re interested in the subject. My take is somewhat simpler, as follows.

Everything has an ‘inner’ and an ‘outer’. Science and technology deal with the ‘outer’, consciousness is a feature of the ‘inner’. Could the twain ever meet? Explaining consciousness is regarded as a ‘hard problem’ of science – too right – they operate in different domains.

So-called artificial intelligence is basically technology that emulates the real intelligence that flows forth from consciousness. This emulation can increasingly appear to be conscious, and even pass the so-called Turing Test of intelligent behaviour, but I would suggest it is not really conscious – could its ‘inner’ conceivably emerge from the ‘outer’ algorithms?

So, if we let machines control things, we finish up with a mechanistic universe that is devoid of the spark of consciousness, and indeed could become its persecutor.

Featured illustration of the Turing Test by Mushii, via Wikimedia Commons