


ALIVE



Image(s): 640*480



Jpeg Image (40 Ko) Jpeg Image (64 Ko)
Jpeg Image (50 Ko)


Author(s)

Institute(s)

Project : Autonomous Agents

  • URL : http://agents.www.media.mit.edu/groups/agents
  • URL : http://www-white.media.mit.edu/vismod

    Video(s) and extracted images: 320*240

    Film 1 : Video QuickTime -> Film/Video (2.6 Mo) | Jpeg Images -> (9 Ko)

    Film 2 : Video QuickTime -> Film/Video (3.0 Mo) | Jpeg Images -> (11 Ko)

    Film 3 : Video QuickTime -> Film/Video (3.4 Mo) | Jpeg Images -> (10 Ko)



    Description



    In this video we discuss the design and implementation of a novel system which allows wireless full-body interaction between a human participant and a graphical world inhabited by autonomous agents. The system is called ALIVE, which stands for Artificial Life Interactive Video Environment.

    More Information...


    • Bibliography :

      • "Modeling Interactive Agents in ALIVE"
      • [1] Joseph Bates, Barbara Hayes-Roth and Pattie Maes, Workshop notes of the AAAI Spring Symposium on Interactive Story Systems: Plot and Character, AAAI, March 1995.
      • [2] Bruce Blumberg, "Action Selection in Hamsterdam: Lessons from Ethology", Proceedings of the 3rd International Conference on the Simulation of Adaptive Behavior, Brighton, August 1994, MIT Press.
      • [3] Bruce Blumberg and Tinsley Galyean, "Multi-Level Direction of Autonomous Creatures for Real-Time Virtual Environments", Proceedings of the Siggraph 1995 conference, Los Angeles, CA, August 1995.
      • [4] Krueger M.W., Artificial Reality II, Addison-Wesley, 1990.
      • [5] Pattie Maes, Trevor Darrell, Bruce Blumberg and Alex Pentland, "The ALIVE System: Full-body Interaction with Autonomous Agents", Proceedings of the Computer Animation '95 Conference, Geneva, Switzerland, IEEE-Press, April 1995.


    • Abstract :

      In this video we discuss the design and implementation of a novel system which allows wireless full-body interaction between a human participant and a graphical world inhabited by autonomous agents. The system is called "ALIVE," an acronym for Artificial Life Interactive Video Environment [5]. One of the goals of the ALIVE project is to demonstrate that virtual environments can offer a more emotional and evocative experience by allowing the participant to interact with animated characters which have complex behaviors and which react to the user and the user's actions in the virtual world.

      The ALIVE system has been demonstrated and tested in several public forums. It was demonstrated for 5 days at the SIGGRAPH-93 Tomorrow's Realities show in Anaheim, California and for 3 days at the AAAI-94 Art Show in Seattle, Washington. The system is installed permanently at the MIT Media Laboratory in Cambridge, Massachusetts. It will also be featured in the Ars Electronica Museum, currently under construction in Linz, Austria, and at the ArcTec electronic arts biennale in Tokyo, Japan, in May 1995.

      In the style of Myron Krueger's Videoplace system, the ALIVE system offers an unencumbered, full-body interface to a virtual world [4]. The ALIVE user moves around in a space of approximately 16 by 16 feet. A video camera captures the user's image and removes the background environment; thus, no blue-screens or other special walls are needed. The separated outline is then composited into a 3D graphical world. The resulting scene is projected onto a large (approximately 10'x16') screen which faces the user and acts as a "magic mirror": the user sees him/herself in the environment, surrounded by objects and agents. No goggles, gloves, or tethering wires are needed for interaction with the virtual world. Computer vision techniques are used to extract information about the person, such as where in the space the person stands and the position of various body parts.
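      The silhouette extraction and compositing described above can be sketched as simple per-pixel background differencing. This is a minimal illustration, not ALIVE's actual vision pipeline; the threshold value and function names are assumptions for the example.

```python
import numpy as np

def extract_silhouette(frame, background, threshold=30):
    """Separate the user from a known static background by per-pixel
    differencing. frame and background are HxWx3 uint8 arrays; returns
    a boolean foreground mask (no blue-screen required)."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    # A pixel is foreground if any color channel differs enough.
    return diff.max(axis=2) > threshold

def composite(frame, mask, scene):
    """Paste the masked user pixels into a rendered scene image,
    producing the 'magic mirror' view."""
    out = scene.copy()
    out[mask] = frame[mask]
    return out
```

Real systems refine the raw mask (shadow handling, morphological cleanup) before locating body parts, but the differencing step is the core idea.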
A pattern-matching technique called dynamic time-warping is used to recognize simple gestures as they are performed. ALIVE combines active vision and domain knowledge to achieve robust real-time performance [5]. The user's position as well as hand and body gestures are used as input to affect the behavior of agents in the virtual world. Agents have vision sensors which allow them to "see" the user and react to gestures such as pointing or throwing a virtual ball. The user receives visual (on the big screen) and auditory (prerecorded sound) feedback about the agents' internal state and reactions. Agents have a set of needs and motivations, a set of sensors to perceive their environment, a repertoire of activities which they can perform and a physically based motor system that allows them to move in and act on the environment. The behavior system decides in real-time which activity the agents should engage in so as to meet their internal needs and to take advantage of opportunities presented by the current state of the environment. The system allows a direct-manipulation style of interaction in which users interact directly with the environment - such as pushing buttons or moving objects - and also an indirect style of interaction in which users give agents commands and the agents carry out the commands based on the user's input, the current environment, and their internal state. For example, the meaning of a gesture is interpreted by the agents based on the situation the agent and user find themselves in. When the user points away from herself, and thereby "gives the command" to send a character away, the character responding to the command will go to a different place in the virtual environment depending on where the user is standing and which direction she is pointing. In this manner, a relatively small set of gestures can be employed to mean many different things in many different situations.
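      The dynamic time-warping technique mentioned above aligns a performed gesture against stored templates even when the timing differs. The sketch below shows the standard DTW recurrence on 2D trajectories; it is a generic illustration under assumed data shapes, not ALIVE's implementation.

```python
import math

def dtw_distance(seq_a, seq_b):
    """Dynamic time-warping distance between two gesture trajectories,
    each a list of (x, y) positions sampled over time."""
    n, m = len(seq_a), len(seq_b)
    # cost[i][j] = best alignment cost of seq_a[:i] against seq_b[:j]
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(seq_a[i - 1], seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch seq_b
                                 cost[i][j - 1],      # stretch seq_a
                                 cost[i - 1][j - 1])  # advance both
    return cost[n][m]

def recognize(gesture, templates):
    """Classify a gesture as the template with the smallest DTW distance."""
    return min(templates, key=lambda name: dtw_distance(gesture, templates[name]))
```

Because the warping path may stretch either sequence, a gesture performed slowly still matches a template recorded at normal speed.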
The ALIVE system incorporates a tool, called "Hamsterdam" [2] [3], for modeling semi-intelligent autonomous agents that can interact with one another and with the user. Hamsterdam produces agents that respond with a relevant activity on every time step, given their internal needs and motivations, their past history and the environment they perceive with its attendant opportunities, challenges and changes. Moreover, the pattern and rhythm of the chosen activities are such that the agents neither dither between multiple activities, nor persist too long in a single activity. They are capable of interrupting a given activity if a more pressing need or an unforeseen opportunity arises. The Hamsterdam activity model is based on elements taken from animal behavior models initially proposed by ethologists. In particular, several ethological concepts such as behavior hierarchies, releasers, fatigue, and so on have proven to be crucial in guaranteeing the robust and flexible behavior required by autonomous interacting agents [2] [3].
      When using Hamsterdam to build a creature, the designer specifies the sensors of the agent, its motivations or internal needs, and its activities (behaviors) and actions (motor system movements necessary to fulfill a behavior). Given that information, the Hamsterdam software automatically infers which of the activities is most relevant to the agent at a particular moment in time according to the state of the agent, the situation it finds itself in, relevant input from the environment, and its recent behavior history. The observed actions of the agent are the final result of numerous potentially executable behaviors competing for control of the agent. The activities compete on the basis of the value of a given activity to the agent at that instant, given the above factors. The details of the behavior model and a discussion of its features are reported in [2] [3]. The most sophisticated creature built so far is a dog called Silas. Silas's behavioral repertoire currently includes following the user, sitting when asked by the user, going away when ordered by the user to do so, and performing other tricks such as standing on his hind legs, fetching a ball, lying down and shaking paws. Silas also will chase the Hamster if the latter creature is introduced into the same virtual environment as the dog. Along with visual sensors and feedback, the ALIVE environment also uses sound. Silas provides auditory output in the form of a variety of prerecorded samples. The ALIVE system demonstrates that entertainment and the effort to model believable creatures in simple virtual environments can be a challenging and interesting application area for autonomous agents research. ALIVE provides a novel environment for studying architectures for intelligent autonomous agents.
As a testbed for agent architectures, it avoids the problems associated with physical hardware agents or robots, but at the same time forces us to face non-trivial problems, such as noisy sensors and an unpredictable, fast-changing environment. It makes possible our study of agents with higher levels of cognition, without oversimplifying the world in which these agents live. ALIVE represents only the beginning of a whole range of novel applications that could be explored with this kind of system. We are currently investigating ALIVE for interactive storytelling applications in which the user plays one of the characters in the story and all other characters are artificial agents which collaborate to make the story move forwards (for more exposition on this topic, see the three short papers on ALIVE in [1]). Another obvious entertainment application of ALIVE is video games. We have hooked up the ALIVE vision-based interface to existing video game software, so as to let the user control a game with his full body. In addition, we are investigating how autonomous video game characters can learn and improve their competence over time, so as to keep challenging a video game player. Finally, we are modeling animated characters that teach a user a physical skill in a personalized way. The agent is modeled as a personal trainer that demonstrates to the user how to perform an action and provides personalized and timely feedback to the user, on the basis of the sensory information about the user's gestures and body positions. The ALIVE system shows that animated characters that are based on Artificial Life models can not only look convincing - that is, allow suspension of disbelief on viewing - but can act and interact in a realistic enough manner to maintain this suspension of disbelief during unpredictable real-time interaction with users.
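      The Hamsterdam-style activity selection described earlier - activities competing for control based on internal needs, environmental opportunities, and fatigue - can be sketched as a simple bidding loop. The fatigue constants and class layout below are illustrative assumptions, not the published model.

```python
class Activity:
    """One candidate behavior of a creature (e.g. 'eat', 'play')."""
    def __init__(self, name, need, opportunity):
        self.name = name
        self.need = need                # callable: internal motivation level
        self.opportunity = opportunity  # callable: relevance of environment
        self.fatigue = 0.0              # grows while active

def select_activity(activities, state, env):
    """Each time step, every activity bids a value combining internal
    need and environmental opportunity minus accumulated fatigue; the
    highest bidder wins control of the creature."""
    def value(act):
        return act.need(state) * act.opportunity(env) - act.fatigue
    winner = max(activities, key=value)
    # Fatigue rises on the winner and decays on the losers, so the
    # creature neither dithers nor persists too long in one activity.
    for act in activities:
        if act is winner:
            act.fatigue += 0.2
        else:
            act.fatigue = max(0.0, act.fatigue - 0.1)
    return winner
```

For example, a hungry creature near food keeps choosing "eat" until fatigue lets a lower-priority activity such as "play" interrupt, mimicking the ethological pattern the text describes.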




    • Some internal links :

      • Same Author : Artificial Creatures
      • Same Institute : Artificial Creatures

  • Copyright © 1994-2015 mediaport.net/w3architect.com | Hosted by p2pweb