Low-code real-time control of swarm agents

Slash the time and cost of developing your swarm applications using simple descriptions of real-time behaviour, layered on top of digital twins as abstractions for devices and processes. This article describes work being done in the SmartEdge project to support Use Case 3 (smart manufacturing).

Low code is an approach for empowering domain experts to create applications more efficiently. In this post we describe how simple descriptions can be used by cognitive agents to control swarms consisting of one or more digital twins by using facts, event-driven rules and intent-based asynchronous actions. This is expressed in a convenient syntax using Chunks & Rules [1], drawing upon decades of work in the cognitive sciences on how humans execute tasks. A chunk is just a collection of properties, including the chunk’s type and identifier.
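As an illustrative sketch, a chunk in the syntax of [1] consists of a type, an optional identifier, and a set of name/value pairs. The type, identifier and property names below are hypothetical, not taken from the factory demo:

```
# a chunk of type "bottle" with identifier "bottle1" and two properties
bottle bottle1 {
  status empty
  position belt1
}
```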

Each cognitive agent includes one or more chunk graphs, each associated with a buffer that holds a single chunk. Chunk rules are conditioned on the current state of these buffers, and associated with one or more actions. Actions are asynchronous, and when complete, update the chunk graphs or queue chunks to the buffers, triggering further rules for the follow-on behaviour.
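As a hedged sketch of this cycle (all rule, operation and property names here are hypothetical), a rule matches the chunk held in a buffer, and its actions update that buffer and queue further work:

```
# if the goal buffer holds a "fill" chunk in state "start", record that
# filling is under way and asynchronously start the filling station;
# "start-filling" stands in for an application-defined operation
fill {state start; bottle ?b}
  => fill {state filling},
     station {@do start-filling; bottle ?b}
```

When the filling station completes, it can queue a chunk back to a buffer, triggering the follow-on rules.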

Digital twins provide an abstraction for devices and processes that hides the complexity of the underlying technologies and standards. Digital twins expose intents that describe an aim, purpose, goal or objective rather than how these are to be realised. An example is an intent to move a robot gripper to a specified position and orientation. The cognitive agent delegates the implementation to the robot controller which needs to devise and execute a plan for driving each of the robot’s joints.
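A hedged sketch of such an intent expressed as a rule action, where the operation name and parameters are hypothetical rather than taken from the demo:

```
# intent: move the gripper to a goal position and orientation;
# the robot controller devises how to drive each joint to get there
robot {@do move; x 120; y 40; z 25; angle 90; gap 30}
```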

Application development is a partnership between domain experts and systems programmers, who are responsible for implementing the intents, as well as for updating the chunk graphs that provide the agent’s model of the environment. The factory demo [2] models a bottling plant that fills and caps bottles, and packs them six at a time into boxes for shipment. A single cognitive agent controls a robot arm and two conveyor belts, along with the filling and capping stations. The overall behaviour is described in just twenty rules.

Real-time control is realised through delegation and low-latency feedback loops, mimicking the cortico-cerebellar circuits in the brain responsible for motor control. Agents map the intents to plans for realising them and then execute those plans, adapting them as needed to compensate for imprecise data and to react to changing circumstances. Intents are safer than lower-level procedural APIs. Rule execution is fast because (a) the rule conditions apply to the buffers rather than directly to the chunk graphs, and (b) the rule actions are asynchronous, enabling cognition to continue while the intents are being realised. When control is needed at very short timescales (e.g. microseconds), the behaviour can be implemented as local reflex actions. An example is how conveyor belts automatically stop when an object reaches the end of the belt. This raises an event, triggering behavioural rules, e.g. for the robot arm to grab the bottle at the end of the belt.
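As an illustration of such an event-triggered rule (the chunk and operation names are hypothetical), the event raised by the belt’s local reflex could trigger the grab as follows:

```
# the conveyor's local reflex stops the belt and raises this event chunk;
# the rule responds by asking the robot arm to grab the bottle
after-stop {belt belt1; object ?bottle}
  => robot {@do grab; object ?bottle}
```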

Chunks & Rules can be used to control devices via the Robot Operating System (ROS), see [3]. ROS is an open-source software framework with a strong developer community. It provides hardware abstraction through message passing, featuring topic-based streams, request/response services, nodes for message exchange, and a shared database for parameters. ROS topic streams can be used to update chunk graphs that model the robots and their environment. Chunk rule actions can likewise be used to invoke ROS services. Swarm entities (cognitive agents or digital twins) are associated with unique identifiers, enabling messaging without applications needing to know the details of the underlying communication technologies.
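As a rough sketch of this mapping (the topic, service and operation names are hypothetical), an incoming ROS topic message could be reflected as a chunk in the facts graph, while a rule action could invoke a ROS request/response service:

```
# a message on a gripper pose topic, mapped to a chunk in the facts graph
gripper {x 120; y 40; z 25}

# a rule whose action invokes a ROS request/response service
planning {state ready; goal ?pose}
  => service {@do call; name plan-path; goal ?pose}
```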

Rules are good for repetitive behaviours, but struggle with unforeseen situations, which have to be attended to by the developers. In the bottling use case, a bottle might fall over or be incorrectly filled or capped. This requires an iterative development process in which new situations are identified and dealt with through collaboration between the domain expert and the systems programmer. As an example, the computer vision software may need to be extended to recognise and characterise faults. The domain expert can then add new rules to handle the situation appropriately.
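As an example of such a rule (all names here are hypothetical), once the vision software can report a fallen bottle as a fault chunk, the domain expert could add:

```
# handle a fallen bottle reported by the computer vision software:
# ask the robot to remove it, then clear the fault from the buffer
fault {kind fallen-bottle; bottle ?b}
  => robot {@do remove; bottle ?b},
     fault {@do clear}
```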

In the future, thanks to advances in AI, programming will be superseded by explaining and showing agents what you want, where they apply their knowledge to fill in the gaps in your account. AI will make it much easier to curate use cases and to identify and address the difficult corner cases that plague conventional software development. Generative AI is showing promise, but further research is needed to enable sentient agents that exhibit continual learning and continual reasoning, along with the ability to remember and reflect on their experiences. In the longer term, sentient AI will provide the desired flexibility and resilience, acting in the front line with human experts as backup. Research priorities should be directed to address these opportunities, e.g. continual prediction as a basis for continual learning.

References:
[1] Chunks & Rules specification: https://w3c.github.io/cogai/chunks-and-rules.html
[2] Factory demo: https://www.w3.org/Data/demos/chunks/robot/
[3] Robot Operating System: https://www.ros.org

Dave Raggett (W3C/ERCIM)
