As is tradition, the EU has gone into its political summer break, which lasts until the beginning of September. One topic that occupied the representatives of the “Telecommunications Working Group” right up to the last session is the so-called AI Act (Artificial Intelligence Act). The regulations, currently in draft status, are intended to define a legal framework for all providers and professional users of artificial intelligence throughout the EU. Reason enough for us, as developers and integrators of AI-controlled robot technology, to give this topic some thought.
Is the Artificial Intelligence Act already “set in stone”?
In spring 2018, after three years of preparatory work, a first proposal was drafted; however, it drew considerable criticism for its very broad definition of AI. Since then, attempts have been made to find a solution that – hopefully – takes all interests into account.
The latest proposals from the Czech parliamentarians attempt to define AI more precisely, and their definition comes close: “Autonomous systems that use algorithms and machine-based learning to achieve human-defined goals.” This is close to the way our robots approach tasks.
In our view, the current risk-based approach (the higher the potential danger, the stricter the regulations) also leaves considerable room for improvement: instead of assuming a fundamental risk, AI-based systems should only be classified as a risk if they act completely autonomously, i.e. without human control, without human situation assessment and thus without a final human decision.
Robotic-AI + human intelligence = perfect security partners
In all safety concepts and application scenarios of our robot portfolio, human and machine are deployed as parallel, complementary and mutually supporting components. Of course, robots such as Spot, Argus or Beehive follow your instructions, patrol along defined routes, scan and check, and use their intelligence to, for example, avoid obstacles independently, doing so in increasingly efficient ways.
If an incident occurs, such as the detection of an unauthorised person in the patrolled area, the robot immediately informs the control centre, where experienced human security personnel analyse and evaluate the situation and finally trigger actions. Here, the clear division of tasks, and also the limits of an AI, become apparent. Or to put it another way:
“AI in robotics helps the system to work more efficiently, optimise performance and thus save time and money, without making humans obsolete or removing them from responsibility.”
Thus, in our industry, AI-based systems do not pose a potential danger, and this is precisely what needs to be communicated. The further development of artificial intelligence must be given space and rules, not a tight bureaucratic corset.
Have a say, participate, co-decide
Innovative companies, developers and AI experts must no longer see themselves as mere spectators, but as stakeholders and experts in equal measure, and must actively participate in the proper design and implementation of the Artificial Intelligence Act.
In this way, we have the chance not only to keep pace internationally in the future, but to gain ground and take a leading role in robotics and AI.
Let’s make use of it!
CONTACT FOR PRESS & COMMUNICATION:
Michael Engel | m.engel@security-robotics.de
Landsberger Allee 366, 12681 Berlin
Phone: +49 341 2569 3369