This rapid exploration is part of the project Foresight towards the 2nd Strategic Plan of Horizon Europe.
How far can we get with Artificial Intelligence (AI) - here, meant as “machine learning”? Computers and supercomputers are extremely good at sequential calculations, calculating correlations and recognising patterns (machine learning, big data) at scales where human capabilities fail. Nonetheless, complex decisions, emotional context and moral aspects are still out of scope for artificial intelligence. There are promises of next-generation, generalised AI (Artificial General Intelligence, AGI), opening up new possibilities for autonomous self-learning systems. What is the limit of control, and where is the limit of autonomy for these next-generation AI machines? What are the stakes and benefits for society, humanity and the world when autonomous machines are included in daily life (e.g. level 5 self-driving vehicles)? How can the development of AI be governed, and where is the limit if AI is autonomous? How can autonomous machines be trusted to act morally, and how do they decide on ethical questions?
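The claim that machines excel at finding correlations at scales where human inspection fails can be illustrated with a minimal sketch (illustrative only; the data are synthetic and the `pearson` helper is written here for the example): a few lines of Python recover a linear relationship buried in noise across ten thousand points.

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A linear pattern (y ≈ 2x) buried in Gaussian noise: trivial for a machine
# to detect across 10,000 points, impossible for a human by inspection.
random.seed(0)
xs = [i / 100 for i in range(10_000)]
ys = [2 * x + random.gauss(0, 5) for x in xs]
print(f"correlation: {pearson(xs, ys):.3f}")  # close to 1.0 despite the noise
```

The point of the sketch is the asymmetry it makes concrete: statistical pattern-finding is cheap and reliable for machines, while the judgement questions raised above remain entirely outside what such a computation provides.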
DRIVERS AND BARRIERS

Massive computing power and quantum computers are pushing forward machine learning and the development of Artificial General Intelligence. In addition, progress is being made in systems combining sensors, actuators, and information processing. AI has proven useful in many practical applications, but it remains far from “understanding” or consciousness. Huge interest in AI comes from industry, the economy, and the military, as “intelligent” robots could do work, assist humans, and even fight a war without shedding blood. This form of high tech also promises high revenues for companies, and supranational companies have the resources to finance the advances privately.

Nonetheless, there are considerable concerns in society as well. One counter-trend could be the “back to nature and frugality” movement, which might link the social divide to the urban-rural nexus and the topic of “rising social confrontation”. A central issue is how to safeguard security, safety and morality when the driver is (human) competition. There is already an ethical and philosophical discourse: what would be the right value-setting for artificial intelligence? Assuming that there is such a thing as general natural intelligence, what are the relationships between intelligence, morality and wisdom? Do we want general intelligence or general wisdom? And what would happen if AI started training itself? This poses the question of control of AI.

FUTURES
What if AI makes our lives much easier and people are used to the applications?
What if AI is used for dull tasks, and human intelligence focuses on creativity?
What if mobility is exclusively run by autonomous machines/vehicles?
What if AI changes the way we understand “intelligence”?
What if AI changes the way we organise schools and education?
What if AI changes how we think about knowledge and makes us all computer scientists?
What if general AI challenges human decisions? What if AI decides? What if AI were used in (most) decision-making processes? What if AI goes further than we want?
What if general AI decides that human life can be sacrificed in certain situations for the sake of the community or other species?