
ANSWERS

I have chosen TouringMachines as my example of a hybrid agent architecture.

 

a) The objective of the architecture.

  • First, it will need to be reactive in order to cope with situations that it may not have had enough time or resources to predict.

  • Second, since the agent's primary objective in our example will be to go from one point to another in a certain amount of time, it must be capable of rational, resource-constrained, goal-directed behaviour.

  • Third, since it will be living in a world full of other entities, it must be able to reason about what is going on around it, assess how it could affect its own goals, and predict what might happen in the near future.
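The three requirements above can be sketched as a toy decision procedure. Everything here (the `Percept` fields, the simple deadline check) is a hypothetical illustration of how reactivity, resource-bounded goal pursuit, and prediction might be prioritized, not part of TouringMachines itself:

```python
from dataclasses import dataclass

@dataclass
class Percept:
    """A snapshot of the world as sensed by the agent (illustrative fields)."""
    obstacle_ahead: bool
    position: int   # current position along a one-dimensional route
    goal: int       # target position
    deadline: int   # time by which the goal must be reached
    clock: int      # current time

def choose_action(p: Percept) -> str:
    """Combine the three required capabilities in priority order."""
    # 1. Reactive: cope immediately with situations there was no time to predict.
    if p.obstacle_ahead:
        return "avoid-obstacle"
    # 2. Resource-constrained, goal-directed: revise the goal if the
    #    remaining time can no longer cover the remaining distance.
    remaining_time = p.deadline - p.clock
    if abs(p.goal - p.position) > remaining_time:
        return "replan-goal"
    # 3. Otherwise pursue the planned route toward the goal.
    return "move-toward-goal"
```

The fixed priority order here is only a stand-in; the real architecture resolves such conflicts with a dedicated control framework, as described later.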

 

-  The problem researchers are attempting to solve is that as operating environments such as automated factories, nuclear power plants, and space stations grow more complex, centralized scheduling policies that are both robust to unexpected events and flexible in the face of operational change become increasingly difficult to maintain. One approach that is gaining traction is to distribute control and scheduling among a number of intelligent, task-achieving computational or robotic agents. Most of today's robotic agents, however, are constrained to a small number of well-defined, pre-programmed, or human-assisted activities. To survive and flourish in complicated, real-world environments, future agents will need to be far more robust and versatile than they are now, and hybrid agent architectures are intended to provide this.

Such environments are likely to be inhabited by multiple agents, each pursuing different purposes. Because agents will have limited knowledge of the world and will compete for shared, restricted resources, some of their goals will inevitably clash. In real-world contexts, agents will often carry out complicated tasks that require attention to computational resource constraints, temporal deadlines, and the influence of shorter-term actions on longer-term objectives; yet time will not halt or slow down for them to consider every conceivable outcome in every world state. Intelligent agents will therefore need a range of abilities: reacting promptly to unforeseen events while also completing pre-programmed tasks and resolving unanticipated conflicts in a timely and effective way.

 

b) Explain how the architecture works.

 

[Figure 1: the three-layer TouringMachines architecture]

 

The TouringMachines architecture is split into three continuously running, independently motivated, activity-producing layers: a reactive layer, a planning layer, and a reflective-predictive or modelling layer. Figure 1 shows the reactive layer first, the planning layer second, and the modelling layer third. Each layer models the agent's environment at a different level of abstraction and is endowed with different task-oriented capabilities from the others. Besides being decomposed vertically into these task-accomplishing layers, the architecture can realize several functional (horizontal) faculties within a single layer, and it is this combination that makes the framework hybrid. For example, in the modelling layer, the concepts of hypothetical reasoning and focus of attention are both realized.

 

The basic premise of vertical decomposition is to construct activity-producing subsystems, each capable of directly connecting perception to action and of deciding whether or not to act in a given world state. But since each layer is an approximation machine, its abstracted world model is inevitably imperfect, so the actions suggested by one layer will often clash with those of another. Layers must therefore be mediated by an enveloping control framework if the agent is to respond effectively to diverse world conditions while acting as a single unit.
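A minimal sketch of this mediation, assuming three toy layer functions and a fixed precedence order that stands in for the architecture's actual context-activated control rules (all names and rules here are illustrative assumptions):

```python
# Each layer independently maps a percept to a proposed action (or None),
# and an enveloping control frame selects which proposal is acted on.

def reactive_layer(percept: dict):
    # Fires only on immediate hazards the other layers may miss.
    return "swerve" if percept.get("obstacle") else None

def planning_layer(percept: dict):
    # Always has a next step of the current route plan to offer.
    return "follow-plan"

def modelling_layer(percept: dict):
    # Proposes a goal revision when a predicted conflict is detected.
    return "revise-goal" if percept.get("predicted_conflict") else None

def mediate(percept: dict):
    """Enveloping control: suppress lower-priority proposals that clash."""
    proposals = {
        "reactive": reactive_layer(percept),
        "modelling": modelling_layer(percept),
        "planning": planning_layer(percept),
    }
    # A simple fixed precedence; TouringMachines itself uses
    # context-sensitive control rules rather than a static ordering.
    for layer in ("reactive", "modelling", "planning"):
        if proposals[layer] is not None:
            return layer, proposals[layer]
```

The point of the sketch is that every layer computes a proposal every cycle; the control frame decides which one reaches the effectors, so the agent acts as a single unit.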

 

[Figure 5: the agent's Model Library]

- The models an agent uses are really filled-in instances of model templates obtained from a Model Library (Figure 5). While all templates share the same basic four-way structure, they can be customized in the depth of information they can represent or reason about (for example, one template component may require that modelled beliefs be treated as hypothetical), in the initial default values provided, and in their cost; the cost is taken into account each time the agent draws an inference from the selected model. The essence of reasoning from a model of an entity is looking for discrepancies between the entity's actual behaviour and that anticipated by its model, or, in the case of a self-model, between the agent's actual behaviour and that intended by the agent.

 

Predictions are made by temporally projecting the parameters that make up the modelled entity's configuration vector in the context of the current world state and the entity's stated intentions. A discrepancy between actual and projected (or intended) behaviour, however, does not always require a complete revision of the agent's model, because the testbed user can choose the size of the upper and lower bounds associated with each parameter of a model's configuration vector. Only if the entity's observed configuration parameters move beyond the corresponding bounds in the agent's model does the model need to be revised. Clearly, the values chosen for these parameter bounds affect both how much environmental change is noticeable to the agent and how much time the agent must spend updating its models; recent research is examining exactly these tradeoffs in TouringMachines.
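A toy sketch of this bounds-based discrepancy check; the class and field names are illustrative assumptions, not the actual TouringMachines data structures:

```python
from dataclasses import dataclass

@dataclass
class ModelledParameter:
    """One entry of a model's configuration vector, with testbed-chosen
    tolerance bounds around the predicted value."""
    predicted: float
    lower_slack: float
    upper_slack: float

    def discrepant(self, observed: float) -> bool:
        # Only an observation outside [predicted - lower, predicted + upper]
        # counts as a genuine discrepancy.
        return not (self.predicted - self.lower_slack
                    <= observed
                    <= self.predicted + self.upper_slack)

def model_needs_revision(model: dict, observation: dict) -> bool:
    """True if any observed parameter falls outside its tolerance bounds."""
    return any(model[name].discrepant(value)
               for name, value in observation.items())
```

Widening the slack values makes the agent blind to small environmental changes but saves model-update time; narrowing them does the reverse, which is the tradeoff the text describes.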
