Intelligent Agents

Created on Nov. 30, 2012, 12:30 a.m. by Hevok & updated by Hevok on May 2, 2013, 4:55 p.m.

An Intelligent Agent interacts with an environment. The agent perceives the state of the environment through its sensors and affects that state through its actuators. The challenge is to design the function that maps sensor data to actuator commands, which is called the control policy of the agent. In particular, how can the agent, based on past sensor data, make decisions that it can carry out with its actuators?

The loop of environment feedback to the sensors, agent decision, and actuator interaction with the environment is the so-called perception-action cycle.
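A minimal sketch of this cycle, assuming a toy one-dimensional environment and a hypothetical threshold policy (the names sense, control_policy, and act are illustrative, not from any particular library):

```python
import random

def sense(environment):
    # Sensor: read a (slightly noisy) measurement of the environment state.
    return environment["position"] + random.gauss(0, 0.1)

def control_policy(percept):
    # Control policy: map sensor data to an actuator command.
    return -1 if percept > 0 else +1

def act(environment, action):
    # Actuator: the chosen action changes the environment state.
    environment["position"] += action

environment = {"position": 5.0}
for step in range(10):
    percept = sense(environment)       # environment -> sensors
    action = control_policy(percept)   # agent decision
    act(environment, action)           # actuators -> environment
    print(step, round(environment["position"], 2))
```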

A truly Intelligent Agent uses the computer as its environment.

An Artificial Intelligence (AI) is therefore an entity that mimics biological organisms.

Definition

An Intelligent Agent (IA) is an autonomous entity which observes an environment through sensors, acts upon it using actuators, and directs its activity towards achieving goals. Intelligent Agents may also learn or use knowledge to achieve their objectives. IAs can be very simple or very complex.

Cognitive Science aims to understand the nature and workings of intelligent systems. An intelligent system is something that processes internal information in order to do something purposeful and meaningful. Many things satisfy that description: humans, animals, computers, robots, sensory systems, etc. However, one sort of intelligent system is of particular interest for Cognitive Scientists: the artificial autonomous intelligent agent. Intelligent Agents are increasingly needed as a way to use artificial intelligence routines in software to help users accomplish many kinds of tasks in Research and Science.

Terminologies

To distinguish the different types of problems an AI needs to master, some basic terminology needs to be defined.

Fully vs. Partially Observable: An environment is fully observable if what the agent can sense at any point in time is completely sufficient to make an optimal decision (e.g. chess). This is in contrast to other environments where memory is needed on the side of the agent to make the best possible decision. Specifically, an environment is fully observable if the sensors can see the entire state of the environment, while when it is partially observable the sensors can only see a fraction of the state. Yet memorizing past measurements gives additional information about the state. Agents operating in partially observable environments need to maintain internal memory (e.g. via Hidden Markov Models) to understand what the state of the environment is and how to predict its changes.
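As an illustration of such internal memory, the following sketch implements a discrete Bayes (HMM forward) filter over two hidden states; the transition and observation probabilities are made-up values used only for illustration:

```python
states = ["rain", "sun"]
transition = {"rain": {"rain": 0.7, "sun": 0.3},
              "sun":  {"rain": 0.3, "sun": 0.7}}
observation = {"rain": {"umbrella": 0.9, "no_umbrella": 0.1},
               "sun":  {"umbrella": 0.2, "no_umbrella": 0.8}}

def update_belief(belief, observed):
    # Predict: propagate the belief through the transition model.
    predicted = {s: sum(belief[p] * transition[p][s] for p in states)
                 for s in states}
    # Correct: weight each state by the likelihood of the new measurement.
    unnormalized = {s: observation[s][observed] * predicted[s] for s in states}
    total = sum(unnormalized.values())
    return {s: unnormalized[s] / total for s in states}

belief = {"rain": 0.5, "sun": 0.5}      # the agent's internal memory
for measurement in ["umbrella", "umbrella", "no_umbrella"]:
    belief = update_belief(belief, measurement)
    print({s: round(p, 3) for s, p in belief.items()})
```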

Deterministic vs. Stochastic: In a deterministic environment the agent's actions uniquely determine the outcome (e.g. playing chess). Games with dice, in contrast, are stochastic, as there is a certain amount of randomness involved in the outcome of a dice roll.
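A toy contrast, with hypothetical step functions standing in for the two kinds of environments:

```python
import random

def deterministic_step(position, action):
    # Chess-like: the action uniquely determines the next state.
    return position + action

def stochastic_step(position, action):
    # Dice-like: randomness influences the outcome of the same action.
    return position + action + random.choice([-1, 0, 1])

print(deterministic_step(0, 2), deterministic_step(0, 2))  # always equal
print(stochastic_step(0, 2), stochastic_step(0, 2))        # may differ
```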

Discrete vs. Continuous: A discrete environment is one where there are finitely many action choices and finitely many things that can be sensed. In a continuous environment the space of possible actions or things that can be sensed may be infinite.
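A small illustration of the difference, using made-up action spaces:

```python
import random

# Discrete: finitely many action choices the agent can enumerate.
discrete_actions = ["up", "down", "left", "right"]
print(random.choice(discrete_actions))

# Continuous: a steering angle can take any value in an interval,
# so the space of possible actions is infinite.
continuous_action = random.uniform(-30.0, 30.0)
print(continuous_action)
```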

Benign vs. Adversarial Environments: A benign environment might be random and stochastic, but it has no objective of its own that would contradict the agent's objective (e.g. the weather). In an adversarial environment, by contrast, opponent agents counteract the agent's actions (e.g. a chess opponent).

Uncertainty

AI is the technique of managing uncertainty in computer software. Put differently, AI is the discipline that is applied when it is not known what to do.

[Figure: Intelligent-agent.jpg]

Tags: ai, singularity, decision making
Parent: Denigma
Children: Intelligent Agent
