Artificial Intelligence Research in Europe, 1950s-1980s

Artificial intelligence (AI) emerged as a field of scientific research during the second half of the twentieth century. Inspired in part by analogies between the computer and the brain, AI researches and develops computer systems that simulate intelligent human behaviour. In Europe, scientific networks began to form around AI in earnest from the 1970s onwards, despite problems with acceptance and funding, before consolidating in the 1980s, when information technology received growing political attention.

Illustration 1: ENIAC (Electronic Numerical Integrator and Computer), the first programmable, electronic and digital computer, finished in 1945 (Wikimedia Commons)
Illustration 2: The first logo of ECCAI. Source: Wolfgang Bibel, “ECCAI got started”, KI-Rundbrief, n° 28, 1982, p. 46-47.

Electronic computers have been called “electronic brains” or “thinking machines” since their development in the 1940s, when computers were essentially room-sized calculators. But it quickly became apparent that these machines might be capable of much more. In 1955, the American mathematician John McCarthy (1927-2011) first officially used the term “artificial intelligence”, proposing research that was “to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it”. Since then, scientists – a handful at first – have tried to make computers play games, process languages or recognise shapes.

Early Networks, Successes and Setbacks for European Artificial Intelligence

In Europe, interest in and research on AI and AI-related topics picked up in the 1960s, though not always under the term “artificial intelligence”. As early as 1958, researchers primarily from Western Europe and the US met near London to discuss “the mechanisation of thought”. Later conferences covered “learning automata” or “cognitive systems” before AI became the most widely used term. Although the Austrian society for AI used the English term in its name (Österreichische Gesellschaft für Artificial Intelligence), most countries translated it into their respective languages: intelligence artificielle, kunstmatige intelligentie, etc. The founding of national societies showed that AI was attracting enough researchers for them to organise beyond individual groups or institutes. Possibly the oldest AI society is the British Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB), founded in 1964. Britain also had one of the earliest large AI groups in Europe, built up by Donald Michie (1923–2007) in Edinburgh. Apart from the AISB, a few other societies date from the 1960s, though they were not as AI-specific (e.g. the Italian Associazione Italiana per il Calcolo Automatico of 1961, or the Czechoslovak Cybernetics Society of 1966).

The societies initiated regular conferences, mostly at the national level, like the German Workshops on AI that started in the late 1970s. Researchers also strove for international exchanges of ideas and results. In 1978, for instance, the British AISB and the German special interest group for AI co-organised a conference in Hamburg. Yet language proved to be a barrier between AI research communities. One problem was that scientists researching whether and how computers might learn languages naturally worked in their own native languages, which complicated the sharing of results. More generally, researchers felt that European AI research suffered from international invisibility. The French AI researcher Jacques Pitrat (1934-2019) and his Swedish colleague Erik Sandewall (1945-2024) noted in the 1970s that British AI research was much better known internationally than its counterparts elsewhere in Western Europe. Other researchers felt that their papers, written by non-native English speakers, were unfairly rejected by the big international conferences, which were dominated by researchers from the United States.

Language was one problem; research habits and topics were another. Several projects conducted in Europe focused more on the theoretical foundations of AI than on building systems, partly because they lacked the relevant hardware and software. The expense of early computers meant that not every university could afford them, and where they could, researchers only had shared access. On the one hand, these different emphases diversified the research field; on the other hand, they could lead to research being dismissed as irrelevant. (Perceived) irrelevance of research became a particular problem for British AI research in the early 1970s. The mathematician Sir James Lighthill (1924-1998) had been tasked by the Science Research Council, which governed publicly funded research in Britain, with evaluating AI. Lighthill concluded that much of what was being promised was not being delivered and that current research was not linked closely enough to real-world problems. As a consequence, funding was reduced.

Consolidating Artificial Intelligence and Becoming Part of Europe’s Strategies for Information Technology

The 1980s saw an increase in applied research as well as a consolidation of AI across Europe. The European Coordinating Committee for Artificial Intelligence (ECCAI) was established in 1982, with the German Wolfgang Bibel (b. 1938) as its founding chairperson (ill. 2). The first conference to call itself European Conference on AI (ECAI; four previous AISB meetings count retrospectively as ECAI meetings) was organised in Orsay (France) that same year. By the end of the decade, the number of national societies had tripled. Importantly, research policies now explicitly included AI. Two developments played into this. Firstly, computers were increasingly used by non-specialists such as office workers and private individuals. Secondly, European politicians paid increasing attention to the research and development (R&D) of information technologies (IT). Comparing itself with how the United States and Japan handled their R&D, Europe worried about remaining sufficiently competitive and innovative.

Two European programmes were devised in reaction to these developments: FAST (Forecasting and Assessment in Science and Technology, from 1978) and ESPRIT (European Strategic Programme for Research and Development in Information Technology, 1983-1998). They aimed at establishing scientific networks between academia and industry and at strengthening both basic and applied research. ESPRIT’s budget was provided half by the European Commission and half by participating companies and institutions. Within the overall focus on IT R&D, AI’s role was to develop knowledge-based systems. These were meant to “understand” both the problems they were built for and their users, and thus be more efficient and less error-prone than other systems. A second relevant strand of AI research concerned natural language processing, which allowed users to communicate with the computer in their native language rather than in a programming language. A database enquiry could thus be posed simply by typing “How many flights leave on Sunday?” The overall aims were greater user-friendliness and better user interfaces.
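To make the idea of such a natural-language database enquiry concrete, the following minimal sketch in Python shows how a question like “How many flights leave on Sunday?” could be mapped onto a structured query over a small flight table. It is a hypothetical, modern illustration only: the flight data, function names and keyword-matching rules are invented for this example, and systems of the ESPRIT era were typically built in languages such as Prolog or Lisp with far richer grammars.

    # Hypothetical illustration of a natural-language database enquiry.
    # The FLIGHTS table and the keyword rules are invented for this sketch.
    FLIGHTS = [
        {"destination": "Paris", "day": "Sunday"},
        {"destination": "Rome", "day": "Sunday"},
        {"destination": "Madrid", "day": "Monday"},
    ]
    DAYS = {"monday", "tuesday", "wednesday", "thursday", "friday", "saturday", "sunday"}

    def answer(question: str) -> str:
        """Answer a very restricted class of questions about FLIGHTS."""
        words = [w.strip("?.,").lower() for w in question.split()]
        # Pick out a weekday mentioned in the question, if any.
        day = next((w.capitalize() for w in words if w in DAYS), None)
        matches = [f for f in FLIGHTS if day is None or f["day"] == day]
        if "how" in words and "many" in words:
            return f"{len(matches)} flight(s)"  # counting question
        return ", ".join(f["destination"] for f in matches) or "none"  # listing question

    print(answer("How many flights leave on Sunday?"))  # -> 2 flight(s)
    print(answer("Which flights leave on Monday?"))     # -> Madrid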

The beginnings of AI date back almost three-quarters of a century, and its history is still being written. As with many emerging disciplines, it started with individual researchers and groups that formed increasingly large, (inter)national networks. Together with IT as a whole, AI was the concern of the first Europe-wide research policies, which shaped the current EU’s emphasis on applied science with economic uses.

To quote from this article

Helen Piel, « Artificial Intelligence Research in Europe, 1950s-1980s », Encyclopédie d'histoire numérique de l'Europe [online], ISSN 2677-6588, published on 19/02/25, consulted on 18/03/2025. Permalink: https://ehne.fr/en/node/22523

This article is licensed under the CC-BY 4.0 licence. This licence allows reuse of the content, provided that the author is properly credited.

Bibliography

Agar, Jon, “What is science for? The Lighthill report on artificial intelligence reinterpreted”, British Journal for the History of Science, vol. 53, n°3 (2020): 289-310.

Piel, Helen, Seising, Rudolf, Pfau, Dinah, Müller, Florian and Tschandl, Jakob (eds.), “Perspectives on Artificial Intelligence in Europe. Special Issue”, IEEE Annals of the History of Computing, vol. 45, n°3 (2023).

Van Laer, Arthe, « Vers une politique de recherche commune. Du silence du Traité CEE au titre de l’Acte unique », in Bouneau, Christophe, Burigana, David & Varsori, Antonio (eds.), Les trajectoires de l’innovation technologique et la construction européenne. Des voies de structuration durable ? (Brussels: Peter Lang, 2010): 77-96.
