IN SPACE, ROBOTS DEPEND ON HUMANS

by Marcello Spagnulo

Machines with artificial intelligence are not yet able to do without human control. The constraints of the operating environment limit computers. When will the paradigm shift take place? Musk’s ambitions. Leonov’s drama.

1. “The longer I work here, the more I think I understand the hosts. It’s the human beings who confuse me.” This is how Bernard Lowe addresses his partner Robert Ford. The two founded Westworld together, an amusement park set in the Wild West of the pioneers, where visitors can have fun and satisfy their fantasies, even the most violent ones, using the hosts as victims. The hosts, however, are not biological beings but androids that look every bit as human as humans do. Everyone thinks Bernard is human, when in fact he too is an android, created by Robert in the image of his old partner who died years before. All the hosts at Westworld are unaware of their mechanical nature, yet possess an articulate consciousness, the result of Robert’s programming skills. When some of them start acting abnormally and endanger the visitors, it turns out that their genius creator-demiurge has given them the chance to evolve. From that moment on, the androids embark on a journey of consciousness-raising punctuated by dramatic surprises and twists that would undoubtedly have won the praise of Michael Crichton.

This is in fact the initial plot of Westworld, the successful HBO television series loosely inspired by the film of the same name that the late American novelist, author of masterpieces such as The Andromeda Strain and Jurassic Park, wrote and directed in 1973. The film brought to the big screen, for the first time, the theme of anthropomorphic machines rising against their human creators, to the point of exterminating them.

In the original 1970s film, Crichton unspooled formidable insights that may seem naive today but were unsettling at the time. He pushed his dystopian reflection on the paradox of technological evolution creating a conscious awakening in robots to the point of disavowing Asimov’s three laws of robotics and planting the doubt that an android rebellion against biological humanity might be possible. This theme was then taken up in a thousand other films and novels, up to the HBO television series, whose authors, immersed in the prediction of technological evolution in the third millennium, have tried to portray, in their own words, ‘a dark odyssey about the dawn of artificial consciousness and the future of sin’.

2. Unlike the film of half a century ago, the TV series’ narrative revolves around robots, not humans. The anthropomorphic androids are entirely similar to the visitors but live as prisoners in a virtual reality, and slowly set out on a quest for what they call ‘forbidden knowledge’, namely self-consciousness. This is hidden within thousands of lines of programming instructions. When it is reached, the last barrier between androids and humans is broken too. Standing in the way of this robotic odyssey are violent, ruthless enemies driven by destructive instincts: the visitors of Westworld Park.

Now, however, let us switch off the dystopian TV world and look at the real one, where events happen that allow us to read fragments of predictions of the future. Let us go to Palo Alto, California, where the second Tesla Artificial Intelligence Day was held on September 30, 2022: a six-hour event, also streamed live, at which the carmaker’s managers, with Elon Musk in the lead, presented the latest achievements of the Dojo supercomputer software, designed to train the very artificial intelligence (AI) systems that perform complex driver-assistance tasks such as Tesla Autopilot or Full Self-Driving. The guest of honour at the event, however, was Optimus, a prototype humanoid robot that, for just over a minute during the presentation, gestured and waved to the audience alongside Elon Musk. “But it can really do much more than we just showed you,” said the head of Tesla Motors as he showed a video of Optimus carrying boxes and wandering around a garden with a can to water the plants.

According to the company’s plans, when mass production of the robots begins, everyone will be able to buy one for the price of a small car. Thus, according to the American multi-billionaire, we will have mechanical helpers in our homes, powered by rechargeable 2.3 kWh batteries, which we will be able to connect to Wi-Fi and LTE networks via a special application on our smartphones.

In reality, Elon Musk’s declared primary objective is to make Optimus’ next descendants into skilled workers to be employed in the manufacturing industry, such as the car industry, to fill labour shortages. Or to replace labour, we would be inclined to say – but that’s another story.

To achieve all this, the companies belonging to the multi-billionaire visionary, who has just paid 44 billion dollars to buy the social network Twitter, are developing software to support robots by means of supercomputers with artificial intelligence, i.e. machines that will be used to teach androids how to behave. Like a new Robert Ford in the flesh, Elon Musk will have in Optimus his Bernard Lowe, a bionic creature into whose microchips millions of lines of software will be instilled, pushing towards the furthest frontiers of knowledge. All the way to self-awareness? Who knows? Let us leave that difficult answer to science fiction fans and instead stay in sunny California.

Let us go to Hawthorne, about six hundred kilometres south of Palo Alto, because it is there, in that Los Angeles suburb, that Elon Musk builds his Falcon space rockets and his Dragon spaceships, on which Optimus’s grandchildren will sooner or later board. Launching robots into Space will be an inescapable step, precisely because the space environment is lethal to humans, which makes it an ideal habitat for machines indifferent to microgravity or cosmic radiation.

Moreover, space exploration, with its characteristically high media impact, is able to attract not only the interest but also the consent and approval of public opinion. This too will favour the use of AI-equipped robots in space missions. But as the latter become increasingly sophisticated and autonomous, will there still be a role in Space for human beings? The first to answer – in the affirmative, needless to say – are the astronauts. That is understandable, but to contextualise the theme it is worth recalling two illustrative episodes of opposite significance that prompt reflection on the human-machine dichotomy in Space. Both events date back to the dawn of astronautics, or rather cosmonautics.

12 April 1961. Preparations are in full swing at the Baikonur Cosmodrome for the launch of the first man into Space. Yuri Gagarin, in his orange suit, walks towards the ladder of Vostok 1 and mechanically listens to the instruction that the engineers have been tirelessly and obsessively repeating to him for days: “Do not touch anything!”. The man in the space capsule is in fact considered fragile and unpredictable by nature, the weak link in such a complicated mission. He must therefore act as a flesh-and-blood robot awaiting instructions from the ground.

Thinking back to this event makes us smile, but also reflect on the issue of safety and on a degree of technological complexity enormously harder to manage than it would be in designing a mission whose crew is made up of anthropomorphic machines. But there is another episode that leads to the opposite reflection.

It happened on 18 March 1965, four years after Gagarin’s flight. Cosmonaut Alexei Leonov has just completed the first spacewalk in history and tries to re-enter the spaceship Voskhod 2. He notices, however, that his suit has expanded unexpectedly and prevents him from re-entering the capsule. Procedures call for him to re-enter feet first, followed by the torso, but all attempts are in vain: the suit has overinflated and will no longer pass through the hatch. Leonov is seized by a panic attack, barely kept under control by his rigid training. He struggles to move and his vision becomes increasingly blurred. The stress causes him to lose six kilos in a very short time, burning fat and perspiring profusely, but at the same time his sympathetic nervous system releases the neurotransmitter adrenaline, which enters his system by binding to adrenergic receptors. In those moments he undergoes a vasoconstriction of the peripheral vessels and a bronchodilation of the airways. He becomes more reactive, and billions of neural contacts fire in his brain that no quantum computer with AI has yet managed to emulate. Leonov decides to violate procedures. He opens the suit’s pressure valve and re-enters the spaceship head-first, like a diver from the diving board of death. And he is saved.
This episode is always cited, with good reason, to show that one can never predict everything from Earth and that the human spirit of initiative allows one to resolve situations that would prove fatal for a machine.

3. Humans or robots then? The question is not new, but compared to the aforementioned beginnings of astronautics, the context and consequently the possible answer have changed. As technological progress gives machines ever greater autonomy, the question is no longer framed in the same terms as in the 1980s or 1990s, when men and robots were in fact systematically pitted against each other. Today, the two have become complementary.

“We need robots and we need humans,” they say at NASA, where experts now agree that robotic machines must be sent out to explore before humans, regardless of their level of autonomy. Just as on Earth Elon Musk wants to employ robots, albeit upgraded by progressive AI levels, for repetitive manufacturing, so space robots are expected to perform the so-called 3D tasks: Dull, Difficult, and Dangerous. And although expectations for the use of robots in space are very high today, it must be recognised that at present they are still mostly used as passive tools remote-controlled by astronauts.

The use of autonomous robots is limited by the technologies that can be implemented, two of them above all: software and energy supply.

At the moment, space robot autonomy is usually a localised function designed for safety, to protect the robot and its surroundings from damage, as in the case of the risk-prevention software on board NASA’s Mars rovers.
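To make the idea of this localised safety autonomy concrete, here is a minimal sketch in Python. It is purely illustrative and not NASA's actual flight software: the sensor fields, threshold names, and limit values are all assumptions invented for the example. The point is only that the "autonomy" consists of halting and waiting for ground control when a limit is exceeded.

```python
# Illustrative sketch of localised safety autonomy: the robot's only
# autonomous decision is whether it is safe to keep moving.
# All names and threshold values here are hypothetical.
from dataclasses import dataclass

@dataclass
class SensorReading:
    tilt_deg: float      # current body tilt of the rover
    obstacle_m: float    # distance to the nearest detected obstacle

MAX_TILT_DEG = 30.0      # hypothetical tip-over limit
MIN_CLEARANCE_M = 0.5    # hypothetical minimum obstacle clearance

def safe_to_proceed(r: SensorReading) -> bool:
    """Return False (halt and wait for ground control) if any limit is hit."""
    return r.tilt_deg <= MAX_TILT_DEG and r.obstacle_m >= MIN_CLEARANCE_M

print(safe_to_proceed(SensorReading(tilt_deg=12.0, obstacle_m=2.0)))  # True
print(safe_to_proceed(SensorReading(tilt_deg=35.0, obstacle_m=2.0)))  # False
```

Everything beyond this check – route planning, recovery, diagnosis – stays with human operators on Earth, which is exactly the division of labour the article describes.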

Unmanned missions are still far from being carried out by fully autonomous robots. Human operators remain an essential component, especially in planning and reacting to unforeseen circumstances. This does not mean that robotic missions cannot be carried out, for example in Earth orbit, where the delay in communications and in telemetry and remote control transmissions is acceptable. But as space agencies are pushing the frontier of exploration towards manned missions further from Earth, in cislunar orbit for example, having intermittent or delayed communications makes remote-controlled operations difficult.

Unlike the rovers now on Mars, which largely operate in remote-controlled mode, humans cannot wait for a response to be transmitted from Earth once a day, or delay their operational mission several times, as has happened to the Mars rovers on many occasions. Beyond commercial factors, therefore, the need for autonomous systems for the future of Space is paradoxically driven by the very manned missions being planned. And here another decisive factor for the design of future space robots comes into play: the operating environment.
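The scale of the communication problem is easy to quantify: a radio signal travels at the speed of light, so the one-way delay is simply distance divided by c. The short sketch below computes it for the approximate minimum and maximum Earth–Mars separations (round figures commonly cited, not ephemeris data).

```python
# One-way light delay between Earth and Mars at closest and farthest
# approach. Distances are approximate round figures, for illustration.
C_KM_S = 299_792.458          # speed of light in km/s

def one_way_delay_minutes(distance_km: float) -> float:
    """Signal travel time in minutes over the given distance."""
    return distance_km / C_KM_S / 60

MIN_DISTANCE_KM = 54.6e6      # approximate closest approach
MAX_DISTANCE_KM = 401e6       # approximate farthest separation

print(f"min delay: {one_way_delay_minutes(MIN_DISTANCE_KM):.1f} min")
print(f"max delay: {one_way_delay_minutes(MAX_DISTANCE_KM):.1f} min")
```

The result ranges from roughly three minutes to over twenty minutes each way: a joystick-style remote control loop is physically impossible, which is why a crew far from Earth needs on-board autonomy rather than instructions from the ground.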

For example, some scientists draw a distinction between robots in orbit and planetary robots, since each of these two environments has different characteristics in which the machine operates. In orbit, robots are in microgravity, so Newton’s third law must be considered before anything else when planning robotic movement: every motion of a robotic arm produces an equal and opposite reaction on the body that carries it. On a surface, lunar or Martian for example, one has to deal with gravity and many other factors, such as transmission latency or natural phenomena (dust storms, micrometeorites, etc.). All this influences the design of the mechanical structure and of the software. As a result, robotic mobility can differ considerably.

Ultimately, the robot’s ability to achieve different levels of autonomy is limited by the hardware but mainly by the software that enables its capabilities. In fact, the computer is strongly constrained by its operating environment, i.e. whether it is shielded from solar radiation or not.

One of the first robot prototypes to have flown in Space is Cimon, the mobile interactive companion of the International Space Station (ISS) crew. The robot has therefore operated within a shielded orbital environment. A kind of space Alexa, Cimon has always remained connected to the Columbus Control Center in Germany, the Biotechnology Space Support Center in Lucerne, and the IBM Cloud in Frankfurt. It was the first step of a technological demonstrator of what could in the future be a space exploration mission’s travel assistant, but still far removed from the suggestions of 2001: A Space Odyssey.
The situation is very different if we instead consider an ‘external’, unshielded operating environment, such as the computer of NASA’s Curiosity rover, which has been roaming the surface of Mars since 2012. It is equipped with two BAE RAD750 processors clocked at up to 200 MHz, with 256 MB of RAM and 2 GB of flash storage. Having been running for several years on the red planet, it is in fact now the most reliable thing that can be sent into space millions of kilometres from Earth. Indeed, the Perseverance rover, which arrived on Mars in 2021, uses the same processor as Curiosity.

The fact is that this on-board CPU is based on the PowerPC 750 processor that IBM and Motorola introduced in the late 1990s to compete with Intel’s Pentium II. This means that the most reliable space computer, which has been operating for years in deep space on the surface of Mars, could smoothly run a thirty-year-old video game but could not handle the computational load of a modern one.

Certainly the technological push we are witnessing for the development of autonomous robots in Space will be fuelled by the progress of machine-learning software, improved computing capabilities and, above all, the progressive use of microchips of the RHBD (Radiation-Hardening-by-Design) type, which are based on the manufacturing process known as CMOS (Complementary Metal-Oxide-Semiconductor) and can be manufactured in commercial foundries, lowering costs and enabling space mission designers to recover advanced computational performance.

However, it is unlikely that within two or three decades the rise of autonomous robots in Space will take operations from a basic level, where the robot operates as a remotely directed tool, to the level of full autonomy, where the robot becomes an independent operator.

Humans will remain in the loop, one way or another, even for the most advanced level of robotic autonomy.

And this shifts the focus to another issue that is often overlooked when talking generically about robots: the human-machine interface, so-called AI-HRI (Artificial Intelligence for Human-Robot Interaction). AI-HRI is the functional system that allows human and robot to communicate with each other and work productively as a team. The quality of operations obviously depends on the human’s skill and on the robot’s level of autonomy, but nothing could take place without AI-HRI’s adaptive functions. From the simple movement of a joystick, to a voice command, to a perceived human movement processed and reproduced by a robot, the different forms of AI-HRI unfold under the influence of the operational environment in which each entity finds itself. It should be borne in mind, however, that shared human-robot autonomy is not necessarily the only possible future merely because fully autonomous technologies are not yet mature.

Space exploration is a scientific and geopolitical frontier. The idea that its operational future can be planned and conducted by autonomous robots without human input requires a non-human-centred culture. But culture changes at a much slower pace than technology. Until then, the design of shared human-robot autonomy will be dominant, but as soon as the cultural leap occurs, the relationship will be reversed.

Predicting when this will happen is divination. Preparing for it would be wise. Starting to think about how to do so is now urgent.

The article appeared in Limes 12/2022, L’intelligenza non è artificiale (“Intelligence Is Not Artificial”).
Translated into English by Mark Sammut Sassi.