Following its formal inception, the field of AI experienced an initial period of significant achievements, largely driven by the symbolic AI paradigm.
Pioneering Programs: Logic Theorist, General Problem Solver, and Game AI
Early AI programs quickly demonstrated the potential of machines to perform tasks previously thought to require human intellect. The Logic Theorist, developed by Allen Newell and Herbert Simon and presented at the Dartmouth workshop in 1956, was an early AI program capable of proving mathematical theorems from a given set of axioms and inference rules.3 It provided an effective demonstration of symbolic reasoning in AI.6 Building on this, Newell and Simon created the General Problem Solver (GPS) in 1957 (with an improved version in 1961), an early attempt at a general-purpose problem-solving system intended to work across many domains, pioneering the use of search (specifically, means-ends analysis) as a core technique.3 GPS showed that theorem-proving techniques could be applied broadly: a problem is cast as a goal, and the program searches for a sequence of valid moves that reaches it.11
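The core idea GPS generalized, casting a problem as a goal and searching for a sequence of valid moves that reaches it, can be illustrated with a minimal sketch. The breadth-first search and toy arithmetic domain below are illustrative assumptions, not Newell and Simon’s actual implementation, which relied on means-ends analysis over symbolic operators.

```python
from collections import deque

def goal_directed_search(start, is_goal, successors):
    """Breadth-first search over a state space: explore valid moves
    outward from the start state until a goal state is reached."""
    frontier = deque([(start, [])])   # (state, moves taken so far)
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path                # sequence of moves reaching the goal
        for move, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, path + [move]))
    return None                        # goal unreachable from the start state

# Toy usage: reach the number 10 from 1 using "+1" and "*2" as the valid moves.
path = goal_directed_search(
    start=1,
    is_goal=lambda s: s == 10,
    successors=lambda s: [("+1", s + 1), ("*2", s * 2)],
)
print(path)  # -> ['+1', '*2', '+1', '*2']
```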
In the realm of game AI, Arthur Samuel developed a program to play checkers in 1952, notable as the first program to learn the game independently.1 Early chess programs also emerged during this period.3 To facilitate AI research, John McCarthy invented LISP (List Processing) in 1958, the first programming language specifically designed for AI, which remains in use today.1
The successes of programs like Logic Theorist and GPS were crucial because they validated the “physical symbol system hypothesis,” articulated by Allen Newell and Herbert A. Simon.3 This hypothesis proposed that “A physical symbol system has the necessary and sufficient means for general intelligent action”.3 These early programs, by effectively manipulating high-level symbols to solve problems, provided empirical evidence for this paradigm, leading to the dominance of symbolic AI for several decades. This causal link between a theoretical hypothesis and practical demonstration fueled immense optimism within the burgeoning field.
The Rise of Expert Systems and Conversational Agents
The 1960s and 1970s saw the development of more specialized AI applications. In 1965, Edward Feigenbaum and Joshua Lederberg created the first “expert system,” known as Dendral.1 This system was programmed to replicate the thinking and decision-making abilities of human experts, specifically assisting organic chemists in identifying unknown organic molecules by analyzing mass spectra and applying chemical knowledge.12 Dendral pioneered the concept of “knowledge engineering,” focusing on attaining productive interaction between a knowledge base and problem-solving techniques.12 The success of expert systems like XCON (expert configurer), which entered the commercial market in 1980 to assist in ordering computer systems, further demonstrated AI’s commercial viability.1
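The architectural idea behind knowledge engineering, a domain-specific knowledge base kept separate from a general inference procedure, can be sketched with a simple forward-chaining rule engine. This is a generic illustration, not Dendral’s actual plan-generate-and-test method, and the rule names below are invented placeholders rather than real chemistry knowledge.

```python
# Knowledge base: if-then rules, kept separate from the inference engine.
# The facts and conclusions here are invented placeholders for illustration.
knowledge_base = [
    # (conditions that must all hold, conclusion to add)
    ({"peak_at_m31", "formula_contains_O"}, "possible_methoxy_group"),
    ({"possible_methoxy_group", "peak_at_m15"}, "candidate_structure_A"),
]

def forward_chain(facts, rules):
    """Generic inference engine: repeatedly apply any rule whose conditions
    are satisfied by the known facts, until nothing new can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

observed = {"peak_at_m31", "formula_contains_O", "peak_at_m15"}
print(forward_chain(observed, knowledge_base))
# Derives the intermediate and final conclusions from the observed facts.
```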
Another notable development was Joseph Weizenbaum’s ELIZA in 1966, the first “chatterbot” (later shortened to chatbot).1 ELIZA functioned as a mock psychotherapist, using natural language processing (NLP) to converse with humans.1 In the realm of robotics, James L. Adams created the Stanford Cart in 1961, which in 1979 successfully navigated a room full of chairs without human intervention, marking one of the first examples of an autonomous vehicle.1
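ELIZA’s conversational ability rested on keyword spotting and scripted transformations rather than any understanding of language. The fragment below is a deliberately tiny illustration of that pattern-matching style; the rules and reflections are assumptions for demonstration and share only the general technique with Weizenbaum’s actual DOCTOR script.

```python
import re

# Simple word reflections and keyword rules in the ELIZA style (illustrative only).
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment):
    """Swap first-person words for second-person ones in the matched fragment."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(utterance):
    """Return a scripted response for the first keyword pattern that matches."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."   # default prompt when no keyword matches

print(respond("I feel anxious about my exams"))
# -> Why do you feel anxious about your exams?
```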
While early AI often aimed for “general intelligence,” the success of expert systems like Dendral and conversational agents like ELIZA demonstrated that AI could achieve practical utility by focusing on narrow, domain-specific problems.1 This shift represented a pragmatic response to the immense difficulty of achieving general intelligence, showing that even limited AI could be “useful”.3 This established a pathway for commercialization and real-world application, even if it did not immediately solve the grand challenge of human-level AI.
The Perceptron’s Promise and Its Early Limitations
Alongside symbolic AI, early neural network models also emerged. Frank Rosenblatt’s Perceptron, developed in 1958, was a simple neural network model inspired by the brain’s structure and promised to “learn” patterns such as handwritten digits.3 It was hailed as a harbinger of machine intelligence, capable of learning by adjusting the weights between artificial neurons.14
However, in 1969, Marvin Minsky and Seymour Papert published their influential book, Perceptrons, which critically dissected the mathematical limitations of single-layer perceptrons.3 They rigorously proved that these networks were fundamentally “local machines,” incapable of computing “global” properties such as connectedness or parity in an efficient manner.14 Their work exposed a “linear threshold bottleneck” that severely restricted the types of problems perceptrons could solve.14
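The practical consequence of that bottleneck is easy to demonstrate. In the sketch below (an illustration, not Minsky and Papert’s formal argument), a single threshold unit trained with the perceptron learning rule converges on AND, which is linearly separable, but never converges on XOR, the two-input case of the parity property they analyzed.

```python
import itertools

def train_perceptron(samples, epochs=100, lr=0.1):
    """Rosenblatt-style learning rule for a single linear threshold unit:
    nudge the weights and bias whenever the predicted label is wrong."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for (x1, x2), target in samples:
            predicted = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - predicted
            if err != 0:
                w[0] += lr * err * x1
                w[1] += lr * err * x2
                b += lr * err
                errors += 1
        if errors == 0:
            return w, b, True    # converged: the data is linearly separable
    return w, b, False           # never converged within the epoch budget

inputs = list(itertools.product([0, 1], repeat=2))
and_data = [(x, int(all(x))) for x in inputs]   # AND: linearly separable
xor_data = [(x, x[0] ^ x[1]) for x in inputs]   # XOR: 2-bit parity, not separable

print(train_perceptron(and_data)[2])  # True  - AND is learned
print(train_perceptron(xor_data)[2])  # False - parity defeats a single threshold unit
```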
This critique had a profound impact, contributing directly to the “first AI winter” for neural network research.3 Funding for connectionist approaches dried up as researchers grappled with the perceived limitations of these models.3 The story of the perceptron is a microcosm of AI’s broader historical trajectory: initial enthusiasm and “romanticism” were met with a rigorous mathematical critique that exposed fundamental limitations, and connectionist research stagnated as interest and funding receded. This pattern of over-optimism, followed by a sober reckoning with technological or theoretical barriers, is a recurring theme that has shaped the field’s trajectory and resource allocation.

