
Breaking ChatGPT: Its Inability to Find Patterns in Numerical Sequences


Here’s a sequence of numbers: 2,4,6,8,10,12,… What’s the next number in the sequence? Obviously 14. That one was simple. But what if we make finding the pattern more difficult? It turns out ChatGPT has serious difficulties with numerical patterns. The problem is not just that it can’t figure out the pattern. It makes up patterns that don’t exist. And then it acts as though it knows what’s going on when in fact it is clueless.

Pattern completion tasks like this have been part of aptitude testing for a long time. The NSA and CIA, for instance, have used them to help in hiring analysts. These tasks also epitomize a key obstacle facing artificial intelligence, which is its inability to model abductive inference, or what is also called inference to the best explanation (Erik Larson explains the failure of AGI, or artificial general intelligence, to model this form of inference in his 2021 Harvard University Press book, The Myth of Artificial Intelligence). 

Effect to Cause

Inference to the best explanation is effect to cause reasoning. It looks at some effect to be explained and then tries to come up with an explanation. The problem is that the range of possible explanations is unlimited. Somehow humans are able to whittle down that range and engage in successful IBEs (inferences to the best explanation). Computers, however, have no facility in this regard, and LLMs (large language models) based on neural nets, such as ChatGPT, are a case in point. 

It’s one thing to make the theoretical case that these models cannot adequately perform inferences to the best explanation. It’s another to provide concrete examples of this failure. I’m going to do that now. Below are four queries by me to ChatGPT4 for figuring out the underlying pattern of a numerical sequence and then producing additional numbers in the sequence. Producing such additional numbers is straightforward once one sees the pattern.

The breakdown in each of these cases is startling. Indeed, the incomprehension is face-palm level. I’ll insert some commentary in [square brackets]. Everything else is from four recent exchanges with ChatGPT4. I made the sequences reasonably difficult. It would be one thing if ChatGPT4 had simply said that it didn’t see the pattern, confessing ignorance. But in each case it pretended to understand the pattern. And in each case it claimed to produce the correct continuation of the pattern — and failed!

For the first of these, I’ll provide the continuation and an explanation of the pattern. For the other three, I’ll simply provide the continuation of the pattern, leaving it as a puzzle to readers to figure out the underlying pattern. Note that unlike the example with which I started this article, I don’t include any breaks (e.g., commas or spaces) between the numbers. 

Numerical Pattern #1: Primes Plus Fibonaccis

My Query:

What is the pattern in the following numbers and what are the next two digits: 3471016213040

ChatGPT’s Response:

This sequence appears to follow a pattern based on the positional value of digits (zero-based) in the decimal representation of pi (π).

π to several decimal places is: 3.14159265358979323846…

The sequence appears to take each digit’s value from pi’s decimal representation as an index, starting from zero:

The 0th digit of pi is 3.

The 1st digit of pi is 1.

The 4th digit of pi is 1.

The 1st digit of pi is 1.

The 5th digit of pi is 9.

The 9th digit of pi is 2.

The 2nd digit of pi is 4.

The 6th digit of pi is 5.

The 5th digit of pi is 9.

The 3rd digit of pi is 1.

The 8th digit of pi is 6.

The 9th digit of pi is 2.

The 7th digit of pi is 9.

So, the sequence corresponds to the decimal representation of pi, where each digit in the sequence represents the index in the decimal representation of pi.

To find the next two digits, we follow the pattern:

The 9th digit of pi is 2.

The 2nd digit of pi is 4.

So, the next two digits in the sequence should be 24. Thus, the extended sequence would be: 347101621304024.

[Me again: This is word salad. ChatGPT seems to find some similarity between the sequence I put forward and the number pi. But that similarity is entirely imaginary. In fact, this number is formed by taking the prime numbers 2, 3, 5, 7, 11, 13, 17, 19, etc. and adding them, number by number, to the Fibonacci numbers 1, 1, 2, 3, 5, 8, 13, 21, etc. That method yields 3, 4, 7, 10, 16, 21, 30, 40, which, when commas and spaces are removed, is just the number I posed in my query to ChatGPT, namely, 3471016213040. The next prime number after 19 is 23, and the next Fibonacci number after 21 is 34, the sum of which is 57. That is the number ChatGPT should have put down. Instead it put down 24 — not even close. And its explanation was nonsense, betraying complete incomprehension.]
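For readers who want to verify this, here is a minimal Python sketch (my own illustration, not anything ChatGPT produced) that generates the sequence from the rule just described and confirms the continuation:

```python
def primes(n):
    """Return the first n prime numbers by trial division."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def fibonaccis(n):
    """Return the first n Fibonacci numbers: 1, 1, 2, 3, 5, 8, ..."""
    fibs = []
    a, b = 1, 1
    for _ in range(n):
        fibs.append(a)
        a, b = b, a + b
    return fibs

# Term-by-term sums: 2+1, 3+1, 5+2, 7+3, 11+5, 13+8, 17+13, 19+21, 23+34
terms = [p + f for p, f in zip(primes(9), fibonaccis(9))]
print(terms)                         # [3, 4, 7, 10, 16, 21, 30, 40, 57]
print("".join(map(str, terms[:8])))  # 3471016213040 -- the query string
print(terms[8])                      # 57 -- the continuation ChatGPT missed
```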

Numerical Pattern #2

My Query:

What is the pattern of the following sequence of numbers (in particular, how would it be continued): 111121112211312211331122211

ChatGPT’s Response:

This sequence appears to be a variant of the “Look-and-Say” sequence. In this sequence, each term describes the previous term. However, in the typical Look-and-Say sequence, each term is made up of two-digit numbers where the first digit is the count and the second digit is what is being counted.

In this variant, the sequence seems to use one-digit counts and what’s being counted.

Here is how the sequence seems to work:

111: Describes one ‘1’ => ’11’

121: Describes one ‘1’, one ‘2’ => ‘1121’

1121: Describes two ‘1’, one ‘2’ => ‘211221’

211221: Describes one ‘1’, one ‘2’, two ‘1’ => ‘121221’

121221: Describes one ‘1’, one ‘2’, two ‘2’, one ‘1’ => ‘11122121’

11122121: Describes three ‘1’, two ‘2’, one ‘1’ => ‘31221131’

31221131: Describes one ‘1’, one ‘3’, two ‘2’, two ‘1’ => ‘11312221’

So, the next term in the sequence after 111121112211312211331122211 should be: 3113112211322112211213322111.

[Me again: In this case ChatGPT got it right that this was a variant of the look-and-say sequence, but it was spinning its wheels when trying to figure out how exactly it was a variant. I leave it as an exercise to the reader to figure out the precise pattern here. The next numbers, which ChatGPT should have put down if it knew what it was doing, would be 13213221. It got that completely wrong.]
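For reference, here is a minimal Python sketch of the standard look-and-say rule that ChatGPT invoked. It generates the classic sequence 1, 11, 21, 1211, and so on; it does not implement the variant in my query, whose pattern I am leaving as an exercise:

```python
from itertools import groupby

def look_and_say(term: str) -> str:
    """Read off the runs of digits: '1211' -> one 1, one 2, two 1s -> '111221'."""
    return "".join(f"{len(list(run))}{digit}" for digit, run in groupby(term))

term = "1"
for _ in range(7):
    print(term)
    term = look_and_say(term)
# Prints: 1, 11, 21, 1211, 111221, 312211, 13112221
```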

Numerical Pattern #3

My Query:

What is the pattern in the following sequence of digits and what six numbers come next: 271828394274528731641187875644998090

ChatGPT’s Response:

This sequence appears to represent the first few digits of the decimal expansion of e (Euler’s number) in groups of two, with each group shifted one digit to the right from the previous group.

Here’s the first part of e to 50 decimal places:

2.71828182845904523536028747135266249775724709369995957…

When we group them in twos and align them with your sequence:

27, 18, 28, 18, 28, 45, 90, 45, 23, 53, 60, 28, 74, 71, 35, 26, 62, 49, 77, 57, 24, 70, 93, 69, 99, 59, 57…

We can see that your sequence lines up with these groups, but each group starts one digit later than the previous group.

So, the next six numbers, each taken from the next digit of e and the one after it, would be:

  • 49, starting from the 49th digit of e.
  • 57, starting from the 50th digit of e.
  • 24, starting from the 51st digit of e.
  • 70, starting from the 52nd digit of e.
  • 93, starting from the 53rd digit of e.
  • 69, starting from the 54th digit of e.

So, the sequence extended by the next six numbers would be:

271828394274528731641187875644998090495724709369.

[Me again: ChatGPT’s explanation is mystifying. I did indeed take the first six digits of Euler’s e to start off this sequence, but I did so in part to misdirect ChatGPT, because it seems to pay attention to the opening of a numerical sequence and then make more of that opening than it deserves. After those first six digits, the sequence bears no resemblance to e. ChatGPT claims that the continuation of the sequence is 495724709369. In fact, the continuation is 122557.]
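A quick sanity check shows just how little of the sequence tracks e. The following Python sketch (hardcoding the well-known leading digits of e) finds where my query sequence first diverges:

```python
# Leading digits of e = 2.71828182845904523536028747135266249775724709369995...
E_DIGITS = "27182818284590452353602874713526624977572470936999"
query = "271828394274528731641187875644998090"

# Index of the first digit where the query stops matching e.
mismatch = next(i for i, (q, e) in enumerate(zip(query, E_DIGITS)) if q != e)
print(mismatch)          # 6 -- only the first six digits agree with e
print(query[:mismatch])  # 271828
```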

Numerical Pattern #4

My Query:

What is the pattern in the following numbers and what five digits are next: 314152718262820442642464088428

ChatGPT’s Response:

The sequence seems to be an alternation between digits of pi and squares of natural numbers.

Here’s how it works:

  • The digits of pi are: 3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9,…
  • The squares of natural numbers are: 1, 4, 9, 16, 25, 36, 49, 64, 81, 100,…

When interwoven, you get: 3 (from pi), 1 (1^2), 4 (from pi), 1 (2^2), 5 (from pi), 9 (3^2), 2 (from pi), 6 (4^2), 5 (from pi), 3 (5^2), 5 (from pi), 8 (6^2), 9 (from pi), 7 (7^2), 9 (from pi), 4 (8^2), 2 (from pi), 6 (9^2), 4 (from pi), 4 (10^2), 6 (from pi), 4 (11^2), 0 (from pi), 8 (12^2), 8 (from pi), 4 (13^2), 2 (from pi), 8 (14^2).

Following this pattern, the next five digits would be from the next digit of pi (8) and the square of the next natural number (15^2 = 225). So the next five digits should be “8225”.

[Me again: This explanation is nonsensical, like the others. Yes, I start with the first five digits of pi and then the first five of Euler’s e. But after that, I’m not trying to evaluate or approximate these numbers. Note that ChatGPT lists the next five digits to continue the pattern as 8225. There are only four digits here. It is, as Wolfgang Pauli used to say, not even wrong. The next five digits should be 48280.]
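The same kind of check applies here. This sketch (again with hardcoded digits of pi and e) confirms that the query opens with the first five digits of pi, then the first five digits of e, and thereafter tracks neither constant:

```python
PI_DIGITS = "31415926535897932384"  # pi = 3.1415926535897932384...
E_DIGITS = "27182818284590452353"   # e  = 2.7182818284590452353...
query = "314152718262820442642464088428"

print(query[:5] == PI_DIGITS[:5])   # True -- opens with 31415
print(query[5:10] == E_DIGITS[:5])  # True -- followed by 27182
# After the first ten digits the sequence matches neither constant:
print(query[10:15], PI_DIGITS[5:10], E_DIGITS[5:10])  # 62820 92653 81828
```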

Conclusion

Finding patterns in numerical sequences seems, on the basis of these four tests, to be a systemic breakdown for ChatGPT. It appears emblematic of a more general problem: its inability to carry out abductive inferences, also known as inferences to the best explanation. The numerical patterns I considered were not particularly obvious. I plan to do further tests to see how simple the patterns can be made while ChatGPT still fails to uncover them.

ChatGPT’s failure with inference to the best explanation is a gaping hole in its ability to achieve genuine language comprehension. Add to this its failure at self-transcendence (as shown by its inability to extract itself from self-referential linguistic situations — see here and also the same problem for Google Bard), and we have good reason to doubt the linguistic comprehension of these systems in general. We should therefore distrust these systems for any serious inquiry or decision.

This article is cross-posted from BillDembski.com.