
= Classifying Orchestration =

== Types ==
In general, the approach taken for orchestration depends very much on the kind of problem being solved. To illustrate, a few potential examples:

 * '''Scenario Scripted''' - a collection of modelled agents, all tightly controlled by pre-programmed higher-level script(s) that play out scenarios. This approach tends to become centralised, with a linear storyline that assumes direct control over all flows and modelled entities in the system. Whilst simple, it is inflexible, and as such it has problems scaling and handling uncertainties or small variations in the system as a whole.
 * '''Static Link Graphs''' - a fixed, pre-configured link configuration for a multi-agent solution. For users familiar with roslaunch syntax for pre-linking and connecting a distributed robot control system, this method simply moves up a level to do the same for a multi-agent version. It differs from the former approach by being interrupt driven by default - making the required connections can generate a scenario without the need for a controlling script. This is the approach we adopted for the first rocon demo, purely for the sake of simplicity. When an agent connects, the launcher immediately executes an application on the agent, remapped according to the needs of the solution. This fixes the task for the robot, as well as its connectivity, for the duration of the solution. Whilst we can supply some logical rules to increase flexibility, there is no real way to take advantage of dynamism, either in the agent clientele or in the ability to retask agents mid-solution. The lack of dynamic retasking in particular limits the scope of what can be achieved with a finite set of agents.
 * '''Supervised Orchestration''' - starcraft for robots! One possible means of increasing the potential applicability of a fixed clientele is to put a human in the controller's chair and let them conduct who should participate and the nature of the tasks they should run. The very obvious and familiar analogy is starcraft. Whilst this is certainly an entertaining option, we would like to find ways to automate rocon solutions as much as possible.
 * '''Assisted Orchestration''' - enabling humans to inject expert knowledge into the orchestration to assist multi-agent reasoning and planning. Orchestrating a solution for a given variety of devices (both sensors and actuated devices), robots, interactive humans and pc algorithms is a problem whose space of possible solutions grows exponentially with the number of agents in the system. Determining which arrangements of components are valid, which are best able to realise the solution, and how to monitor and re-plan if necessary - these will often (at least in the short term) require human assistance. The goal would be to find ways of providing human assistance without requiring a human in the driver's seat continuously.
 * '''Autonomous Intelligence''' - a cognisant intelligence lurking behind the concert that is able to re-arrange, learn and configure itself as required by the tasks it is given. This would be the ultimate goal, requiring the orchestration layer to be reasonably self-aware and capable of learning and adapting to changes in the environment and the tasks required of it. This is a driving ambition of the rubicon project.
 * '''Free Agents''' - autonomous agents independently moving across concerts and accessing resources as necessary to accomplish their given tasks. Aka 007 agents. In many environments, robots may not be practically connected to an all-encompassing network of devices (sensors, inputs). When given a task, whether through its own inputs or via a concert (or the web), a robot may then have to move through a disconnected series of local concerts, accessing sensors and information as it passes through them to achieve its task. Web services may help it plan and predict the feasibility of this movement, but more than in any other kind of orchestration, it must plot out its task in ways very similar to a human.
 * '''Swarm Robotics''' - often ad-hoc connected networks of many robots controlled loosely to form emergent behaviours. Such systems are controllable only from the local perspective of an individual robot, and as such, any system-level behaviour is dependent on the emergent nature of interactions - robot to robot, and robot to environment.
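To make the static link graph idea above concrete, here is a minimal sketch of what such a launcher reduces to: a fixed table that binds each agent type to one app and one set of remappings at connect time. All names here are hypothetical illustrations, not the actual rocon API.

```python
# Hypothetical sketch of a static link graph: each agent type is bound,
# ahead of time, to a fixed app and fixed topic remappings. There is no
# mechanism here for retasking an agent mid-solution - that is the
# limitation described above.

STATIC_LINKS = {
    # agent type -> (app to launch on connect, topic remappings)
    "turtlebot": ("follow_waypoints", {"/cmd_vel": "/turtlebot/cmd_vel"}),
    "sensor_node": ("publish_scan", {"/scan": "/concert/scan"}),
}

def on_agent_connect(agent_type):
    """Called once when an agent joins the concert; the returned
    assignment is fixed for the duration of the solution."""
    if agent_type not in STATIC_LINKS:
        return None  # no rule for this agent - it simply idles
    app, remappings = STATIC_LINKS[agent_type]
    return {"app": app, "remappings": remappings}
```

The only flexibility available is adding more rules to the table before launch; once `on_agent_connect` has fired for an agent, its role never changes.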

== Traits ==
There are also several traits, often seen in real use cases, that distinctly influence the nature of the orchestration.

 * '''Environment''' - how strongly can the system make guarantees about the environment the agents will work in? Certainty about the environment's structure can vary from fully structured to completely unstructured. For example, a robot soccer field is exactly known; a factory environment may have to be learned, but can be re-used; an outdoor environment for a survey robot would always have to be learned on the fly from potentially unreliable sensors. Communication reliability can vary significantly as well - web, ethernet lan, wireless, 3G and the presence of dead zones make very different demands on a system. These represent just two kinds of uncertainty, but the key point is that such factors can significantly affect the co-operability and/or required independent autonomy of agents in the system.
 * '''Composition''' - what is the nature of the array of robots, devices, sensors and pc's available to the system? Are the agents in the system completely known at startup? Can they be swapped in and out for 'similar' agents? Can agents be flexibly added/removed at runtime? Systems with very certain knowledge about their composition are significantly easier to handle - you can hard code into the system the agents required to fulfil a goal. Generalising for systems with a varying list of available resources becomes increasingly difficult, as the correct composition of agents must be determined dynamically, and many combinations may be possible at a single time.
 * '''Dynamism''' - robots are fairly dynamic compared to the static devices present in usual digital ecosystems. Robots will join and leave mid-operation, often escape to recharge, and are autonomous to the point that they can and should continue their task even after moving out of wireless range. This creates a tremendous amount of complexity.
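The composition trait above can be sketched as a small matching problem: given the roles a solution requires and the agents currently available, enumerate which assignments are valid. The role and agent names are purely illustrative; the point is that the candidate set grows factorially with the number of agents, which is what makes dynamic composition hard.

```python
# Hypothetical sketch of dynamic composition: enumerate every valid
# assignment of distinct agents to the roles a solution requires.
from itertools import permutations

def valid_compositions(required_roles, available_agents):
    """Yield each assignment of distinct agents to roles such that the
    agent advertises the capability the role needs. available_agents
    maps agent name -> set of capabilities. The candidate count grows
    factorially with the number of agents, so real systems need a
    human or a planner to prune this search."""
    agents = list(available_agents)
    for combo in permutations(agents, len(required_roles)):
        if all(role in available_agents[agent]
               for role, agent in zip(required_roles, combo)):
            yield dict(zip(required_roles, combo))

# Illustrative clientele: two mobile robots and one fixed camera.
available = {
    "robot_a": {"navigation", "delivery"},
    "robot_b": {"navigation"},
    "cam_1": {"sensing"},
}
matches = list(valid_compositions(["navigation", "sensing"], available))
# Two valid compositions: either robot can navigate, only cam_1 can sense.
```

Even in this three-agent toy there are already two equally valid compositions; choosing between them (battery level, proximity, reliability) is exactly the judgement that assisted orchestration injects a human for.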