Why we are skeptical of robots


Robots raise a number of concerns. Some experts fear they could take away our jobs; others warn that, if artificial intelligence continues to advance, machines might one day threaten humanity itself, whether by reducing us to servitude or by destroying mankind altogether.

But robots are intriguing machines, and not only for the reasons usually given. It is natural that we should have some misgivings about them.

Picture yourself in the Quai Branly-Jacques Chirac museum in Paris, devoted to anthropology and ethnology. As you explore the exhibits, your curiosity is piqued by a particular artifact. After a while, you sense someone approaching, apparently drawn to the same object.

As you begin to move, a peculiar sensation comes over you: at the edge of your vision you glimpse an indistinct, non-human figure. A rush of anxiety takes hold.

The feeling intensifies as you turn your head and your eyes come into focus. You realize you are looking at a humanoid machine called Berenson, a robot created by the roboticist Philippe Gaussier (Image and Signal Processing Lab) and the anthropologist Denis Vidal (Institut de recherche sur le développement), and named after the American art critic Bernard Berenson. The experiment has been running at the Quai Branly museum since 2012.

The strangeness of the encounter suddenly fills you with fear, and you step back to put some distance between yourself and the machine.

This emotion has been studied in robotics since the 1970s, when the Japanese researcher Masahiro Mori proposed his theory of the "uncanny valley." According to Mori, the more closely a robot resembles a human, the more we tend to perceive it as we would another human being.

But when its true nature as a machine is exposed, we feel a deep unease. This drop in affinity is what Mori called the uncanny valley: the robot comes to seem to us something like a zombie.

Mori's theory has never been systematically confirmed, but the emotions we feel when faced with an autonomous machine are unquestionably a mix of confusion and fascination.

The Berenson experiment at the Quai Branly, for instance, showed that the robot could provoke contradictory behavior in museum visitors. It highlights the ambiguity inherent in human-robot interaction, and in particular the many communication challenges it raises.

Our wariness of these machines stems largely from our uncertainty about their intentions, or about whether they have intentions at all. Any engagement with them requires us to work out what they are "trying" to do and to establish a basis for minimal interaction. That is why visitors at the Quai Branly can often be seen adopting social behaviors toward the machine, talking to it or placing themselves in front of it, in an attempt to grasp how it perceives its surroundings.

When visitors interact with robots, their main aim is usually to establish some kind of connection. Treating the robot as a person, even fleetingly, has a strategic element to it. Nor are these social behaviors reserved for robots that look like us: whenever humans and robots come together, we seem to project human-like qualities onto the machines.

An interdisciplinary group has recently been formed to investigate what such exchanges reveal, in particular the moments when we are prepared, in our own minds, to attribute intentions and intelligence to robots.

The PsyPhINe project grew out of the study of human-robot interaction using a robotic lamp. Its aim is to better understand our inclination to attribute human-like traits to machines.

Once people get used to the strangeness of the situation, it is quite common to see them engaging socially with the lamp. During a game with the robot, participants can be observed responding to its movements and occasionally talking to it, commenting on its actions or on the situation.

The first moments of our interactions with machines are often marked by mistrust. Beyond their outward appearance, most people have little idea of what robots are made of, what they are for, or what their intentions might be. The world of robots seems very remote from our own.

Yet this feeling fades quickly. Provided they have not already fled the device, people generally look for ways to establish and maintain a framework for communication. They often fall back on familiar patterns, like those we use with pets or with any other living being that perceives the world differently from us.

In the end, it seems that we humans are at once intrigued by the possibilities our technologies offer and wary of them.
