From IPRE Wiki
Revision as of 21:19, 31 January 2013 by Myro-user (Talk | contribs)


Luar stands for Lua Universal Architecture for Human Robotics (the H is silent, and, in fact, invisible!)


If you would like to work on this on your own computer, you'll need the following:

  1. Download Webots - comes with RobotStadium simulator
  2. Download Luar - humanoid control system


  • sudo apt-get install liblua5.1-0-dev
  • You may need to edit WebotsController/Makefile:
    • add -I/usr/include/lua5.1/ to the include flags, OR copy the headers instead:
    • sudo cp /usr/include/lua5.1/* /usr/include/
    • change WEBOTS_HOME to point to your Webots installation
  • Fix the Makefiles in UPennalizers/Lib, or just copy the headers:
    • sudo cp /usr/include/lua5.1/* /usr/include/

To set up the system, follow these instructions:

Quick Start

  1. git clone the UPennalizers repository
  2. cd UPennalizers/WebotsController
  3. make
  4. cd ../Lib
  5. make setup_webots

For continual access to the code through your github account, or to set up an account or SSH key, see the github documentation.

Every once in a while, you may want to:

  1. cd UPennalizers
  2. git pull

That will take the items from the git repository and merge them with your own code.

Assign the teams

When Webots starts up, it looks in the subdirectories:

  • /usr/local/webots/projects/contests/robotstadium/controllers/nao_team_0
  • /usr/local/webots/projects/contests/robotstadium/controllers/nao_team_1

You can list out the contents of those directories in Linux with:

ls -al /usr/local/webots/projects/contests/robotstadium/controllers/nao_team_0
ls -al /usr/local/webots/projects/contests/robotstadium/controllers/nao_team_1

You will (probably) see that these two directories (nao_team_0 and nao_team_1) are links that point somewhere else.

If you "change directory" (cd) to that subdirectory and "list" (ls) the files, you'll see the files nao_team_0 and nao_team_1. These are the programs (or pointers to programs) that Webots starts. Looking at the contents of "nao_team_0" (or the program it points to) with an editor, or "less", you'll see something like:

export COMPUTER=`uname`
export PLAYER_ID=$1
export TEAM_ID=$2
export PLATFORM=webots
exec lua -l controller start.lua

This is a shell script that assigns PLATFORM, COMPUTER, PLAYER_ID, and TEAM_ID. PLAYER_ID and TEAM_ID get their values from Webots (parameters $1 and $2), and the value of COMPUTER is set by the shell program "uname". PLATFORM is set to "webots", and then the script starts up Lua with the program "start.lua".

start.lua (and other programs) can read these EXPORTed values from the environment (e.g., os.getenv('PLAYER_ID')).
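Any outside tool launched in the same environment can read the same exported values. A minimal Python sketch (the variable names come from the shell script above; the defaults are illustrative assumptions, not part of the launcher):

```python
import os

def read_robot_env(env=os.environ):
    """Read the values EXPORTed by the nao_team_* launcher script.
    The fallback defaults here are illustrative only."""
    return (int(env.get("PLAYER_ID", "1")),
            int(env.get("TEAM_ID", "0")),
            env.get("PLATFORM", "webots"))
```

Called inside a process started by the same script, `read_robot_env()` would see the IDs that Webots passed in as $1 and $2.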

After that point, lua has control.

You can use the maketeam command to set the teams:

  1. maketeam /usr/local/webots/projects/contests/robotstadium/controllers/nao_team_0.bak nao_team_0
  2. maketeam /home/dblank/UPennalizers/WebotsController nao_team_1

Here, team_0 is the original, and team_1 is your new Luar team.

Play soccer

  1. webots
  2. Select Robotstadium Contest
  3. close 4 error dialogs
  4. Soccer will automatically start


RoboCup Rules 2011

The rules for RoboCup 2011 can be found in pdf form here:


Scale diagram of entire field (dimensions in mm).

RoboCup Field


A Rao-Blackwellized particle filter is used for localization of the robot on the field (see PoseFilter.lua).

The state of every object in the world is represented by its 2D pose:

 x: x-axis coordinate [m]
 y: y-axis coordinate [m]
 a: orientation angle [rad]
vector form: {x, y, a}

NOTE: All calculations are done using the right hand rule.

Some poses are in a frame relative to the robot and others are global. To convert between the two use:

util.pose_global(pRelative, pose) - transform the pose pRelative, expressed in the frame defined by pose, into the global frame
util.pose_relative(pGlobal, pose) - transform the global pose pGlobal into the frame defined by pose

Both of these functions use the vector form of a pose, {x, y, a}.

Example: Converting ball position local to robot to a global position:

ball = wcm.get_ball()
robotPose = wcm.get_pose()
ballGlobal = util.pose_global({ball.x, ball.y, 0}, {robotPose.x, robotPose.y, robotPose.a})
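The two transforms are plain 2D rigid-body compositions. A minimal Python sketch of the same math, using the {x, y, a} convention above (the function names mirror util.pose_global/util.pose_relative, but this is not the Lua source):

```python
import math

def pose_global(p_rel, pose):
    """Transform p_rel = [x, y, a], expressed in the frame of `pose`,
    into the global frame: rotate by the frame angle, then translate."""
    x, y, a = pose
    ca, sa = math.cos(a), math.sin(a)
    return [x + ca * p_rel[0] - sa * p_rel[1],
            y + sa * p_rel[0] + ca * p_rel[1],
            a + p_rel[2]]

def pose_relative(p_glob, pose):
    """Inverse of pose_global: express the global pose p_glob in the
    frame defined by `pose`."""
    x, y, a = pose
    ca, sa = math.cos(a), math.sin(a)
    dx, dy = p_glob[0] - x, p_glob[1] - y
    return [ca * dx + sa * dy,
            -sa * dx + ca * dy,
            p_glob[2] - a]

# A ball seen 1 m straight ahead of a robot at (2, 3) facing +90 degrees:
ball_global = pose_global([1.0, 0.0, 0.0], [2.0, 3.0, math.pi / 2])
# → approximately [2.0, 4.0, 1.5708]
```

Applying pose_relative to the result with the same robot pose recovers the original relative pose, which is a handy sanity check when debugging localization.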


etc/hostname holds the presets for each robot's ID and the team's number. These IDs are primarily used in Team.lua to set up the field roles as follows:

  • 1 = goal
  • 2 = defense
  • 3 = support
  • 4 = attack

Shared Memory Interface

Luar is a multi-process architecture: multiple processes run simultaneously and at different frequencies. Luar is composed of two main processes: a Motion process and a Cognitive process. The Motion process runs at about 100 updates per second (~100 Hz) and the Cognitive process at about 30 updates per second (~30 Hz).

Of course, there are data that need to be shared between these parallel processes. To accomplish this, Luar uses shared memory. In Luar, all shared memory interfaces are handled by special Lua modules called Communication Managers. In general, a communication manager's name ends with *cm.lua (e.g., vcm.lua is the Vision Communication Manager and handles all of the shared memory vision information). The Communication Manager is designed to standardize and abstract all shared memory interfacing and to make adding additional shared memory variables easy.

Every Communication Manager has the following special variables:

  • shared -- a table indicating the shared memory variables to define, including the type and size of each variable. Each member of the shared table indicates a new shared memory segment, and the variables in each segment are indicated by the members of that segment's table.
  • shsize -- a table indicating the sizes of the shared memory segments to create. You can indicate the total size of a segment by adding its key (the same key used in the shared table) with the size of the segment as the value. If no entry exists in the shsize table, the default size (2^16 bytes) is used.

The second half of a communication manager is the creation of the shared memory segments and the accessors for the data. This is handled by the util.init_shm_segment function. This function will only create/destroy a segment or variable if needed: if the segment already exists and is the correct size, the current segment is used. Similarly for variables, new instances are only created if they do not already exist or if the existing one is the incorrect size.

The accessors and setter functions are automatically created from the shared table structure. The same basic format is followed for all the segments/variables.

Simply put, the following entry:

 shared.[segment name].[variable name]

will result in the following function definitions to access and set the data respectively:

 get_[segment name]_[variable name]()
 set_[segment name]_[variable name](val)
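The naming scheme can be sketched in a few lines of Python (a hypothetical stand-in for the accessor generation in util.init_shm_segment, not the actual implementation):

```python
def accessor_names(shared):
    """Given a nested dict {segment: {variable: spec}}, list the
    get_/set_ function names a Communication Manager would expose."""
    names = []
    for segment, variables in sorted(shared.items()):
        for var in sorted(variables):
            names.append("get_%s_%s" % (segment, var))
            names.append("set_%s_%s" % (segment, var))
    return names

shared = {"segment1": {"number_a": 1, "vector_b": 3}}
print(accessor_names(shared))
# → ['get_segment1_number_a', 'set_segment1_number_a',
#    'get_segment1_vector_b', 'set_segment1_vector_b']
```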

The following data types are supported:

  • single numbers -- indicated by setting the value to a vector of length 1.
    NOTE: the accessor will return the number, not a vector of length 1.
  • fixed-length arrays -- indicated by setting the value to a vector of length n > 1; the length of the array is specified by the size of the vector (e.g., vector.zeros(4) tells the program to create an array of length 4 in shared memory).
  • variable-length strings -- indicated by setting the value to any string; the actual value set in the shared table is ignored, only the fact that it is a string is noted.
  • fixed-size C arrays -- indicated by setting the value to a number giving the size in bytes of the array; the accessor/setter for a C array accepts and returns userdata.

NOTE: When using the setters you must give them a value of the correct size (unless it is a string).


The following shows how to set up an example Communication Manager, which we will call the Test Communication Manager (tcm), in the Lua module tcm.lua.


 module(..., package.seeall);
 -- shared properties
 shared = {};
 shsize = {};
 shared.segment1 = {};
 shared.segment1.number_a = vector.zeros(1); -- vector of size 1 indicates a number
 shared.segment1.vector_b = vector.zeros(3); -- vector of size > 1 indicates an array
 shared.segment1.string_c = ' '; -- any string indicates a string (the actual value here is ignored when creating the block)
 shared.segment2 = {};
 shared.segment2.number_a = vector.zeros(1); -- vector of size 1 indicates a number
 -- indicate segment size for segment2
 shsize.segment2 = 2^12; -- bytes
 -- initialize shared memory segments/variables and create accessors/setters
 util.init_shm_segment(getfenv(), _NAME, shared, shsize);

This Communication Manager will have two shared memory segments indicated by the 'segment1' and 'segment2' fields in the table shared. The actual segment names for these will be 'tcmSegment1' and 'tcmSegment2' respectively and 'tcmSegment2' will have a total size of 2^12 bytes. The variables in each segment are:

  • tcmSegment1: number_a (a single number), vector_b (an array of size 3), string_c (a variable-length string)
  • tcmSegment2: number_a (a single number)

The initialization also automatically creates accessors and setters for the shared memory variables.


Testing the Test Communication Manager:

We can test the new tcm.lua module interactively in Lua. First, create the tcm.lua file as shown above, then open two terminals, each with an instance of Lua. Each line below is a step in time.

Terminal 1:

 > dofile('init.lua');
 > require('tcm')
 > tcm.set_segment1_number_a(4)
 > print(tcm.get_segment1_number_a())
 > tcm.set_segment1_vector_b({1,2,3})
 > tcm.set_segment1_string_c('hello')
 > print(tcm.get_segment1_string_c())
 > print(tcm.get_segment2_number_a())

Terminal 2:

 > dofile('init.lua')
 > require('tcm')
 > print(tcm.get_segment1_number_a())
 > tcm.set_segment1_number_a(1)
 > print(tcm.get_segment1_number_a())
 > print(tcm.get_segment1_vector_b())
 {1, 2, 3}
 > print(tcm.get_segment1_string_c())
 > tcm.set_segment1_string_c('bye')
 WARNING: Input size 3 != current block size 5. Resizing string_c block.
 > tcm.set_segment2_number_a(2)

Independent Shared Memories

Memory segment names have been changed to make them personal to each robot, on each team, in each user account:

 before: [segment name][variable name]
 after:  [segment name][variable name][teamID][playerID][user]

You can check the segment names by running "cd /dev/shm/" and then "ls" in a terminal.
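Composing a per-robot name is simple string concatenation; a Python sketch of the convention (the helper name is hypothetical; the example output matches the vcmImage name format used later in this page):

```python
def shm_segment_name(base, team_id, player_id, user):
    """Compose a per-robot shared memory segment name, e.g.
    base 'vcmImage' + team 1 + player 2 + user 'dblank'."""
    return "%s%d%d%s" % (base, team_id, player_id, user)

print(shm_segment_name("vcmImage", 1, 2, "dblank"))
# → vcmImage12dblank
```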

Now when testing *cm files, we must define the team and player IDs so that we are watching the memory of a single robot. Example:

 > = 1;
 > = 2;

Shared Memory Access outside of Lua

An additional use of shared memory is that we, as an outside observer, can open up shared memory and keep an eye on what is happening, to better refine strategies and debug.

The Lib/Util/Python and Lib/Util/PyQt directories have sample code for doing this.

To build Lib/Util/Python:

Note: You may have to edit the Makefile in this directory, replacing python2.6 with python2.7 (or whatever Python version you are using; the script has not yet been tested on Python 3).

cd Lib/Util/Python
make
python
>>> import shm
(if an error appears, see the note above)

Other Basics

All of the useful functions in shm.ShmWrapper are automatically generated from the shared memory segment that it is connected to. It basically takes all of the existing variables in the shared memory segment and creates the get_x() and set_x(val) methods.

Make sure it is using the correct segment name. It will depend on the team, the robot ID, and your user name. For me, when I use the nao1.wbt world, it is team 1 with robot ID 2, so the segment name becomes vcmImage12USERNAME. You should check that the name being used in the script is correct for the robot you are trying to connect to.

The robots do not copy the images to shared memory by default. In the Vision config file you need to set the following flags to 1.

In the file ./Player/Config/Config_Webots_Nao_Vision.lua:

-- use this to enable copying images to shm (for colortables, testing)
vision.copy_image_to_shm = 1;
-- use this to enable storing all images
vision.store_all_images = 1;

The command vision.copy_image_to_shm = 1 enables copying in general. The command vision.store_all_images = 1 tells Luar to copy all images. There are other options for copying only specific kinds of images, such as when a ball or goal is detected.

NOTE: It is important that you run any external debugging tools *after* you start the simulator. Most of them depend on reading the shared memory segments; if those are not initialized, they will not know what variables are available.

You can run python interactively to test:

>>> import shm
>>> segmentName = 'vcmImage12USERNAME'    # your segment name will probably differ
>>> vcmImage = shm.ShmWrapper(segmentName);
>>> dir(vcmImage)
['_ShmWrapper__generate_accessors_setters', '__doc__', '__init__', '__module__', '_format_input', '_format_output', 'get_axisMajor', 
 'get_axisMinor', 'get_centroid', 'get_da', 'get_detect', 'get_dr', 'get_r',  'get_v', 'handle', 's', 'set_axisMajor', 'set_axisMinor', 
 'set_centroid', 'set_da', 'set_detect', 'set_dr', 'set_r', 'set_v', 'test']
>>> vcmImage.get_v()
array([ 0.22508712,  0.07112646, -0.35909602,  1.        ])

NOTE: You should only create the shm.ShmWrapper once for each segmentName. For example, if accessing segmentName multiple times or in a loop, set vcmImage = shm.ShmWrapper(segmentName) outside of that loop and access it through vcmImage.get_whateveryouwant inside the loop.
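Following that rule, a monitoring loop might be structured like this (a sketch; the caller constructs the wrapper once with shm.ShmWrapper from Lib/Util/Python, and get_centroid is one of the accessors shown above):

```python
import time

def monitor(wrapper, reads, period=1.0 / 30):
    """Poll an already-constructed shared-memory wrapper.
    The wrapper is created ONCE by the caller, outside this loop;
    each iteration only calls a cheap accessor."""
    values = []
    for _ in range(reads):
        values.append(wrapper.get_centroid())
        time.sleep(period)   # roughly the Cognitive process rate
    return values
```

With the real wrapper: create `vcmImage = shm.ShmWrapper('vcmImage12USERNAME')` once, then call `monitor(vcmImage, 30)` as often as you like.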

Running a viewer:


NOTE: As noted above, the robots do not copy images to shared memory by default; vision.copy_image_to_shm = 1 and vision.store_all_images = 1 must be set in the Vision config file.

There are other image storing flags that can be set in Config_Webots_Nao_Vision.lua like storing goal detections or ball detections.

Shared Memory Segment Documentation

This section is an attempt to document each value in the shared memory files and what they are used for, to aid the creation of debugging tools. The following is a list of shared memory segment names (each segment is stored under the name given, followed by the team number (indexed from 0), the robot number (indexed from 1), and the user name), followed by the values within that segment (determined by running Jordan's Python shm script to view all possible functions on each shared memory segment during normal gameplay). In other words, the wrapper provides get_[name] and set_[name](x) functions for each of the values below. Note that this is a work in progress - the next step is to document what each of these memory values is used for - but hopefully this provides a good starting point.


half, kickoff, last_update, nplayers, opponent_penalty, penalty, state, time_remaining


color, number, player_id, role


debuglevel, visiondebuglevel, visiondebugmode


dSum, distance, free, left, obstacles, right


bodyHeight, bodyOffset, footY, stepHeight, stillTime, supportX, supportY, tStep, uLeft, uRight


axisMajor, axisMinor, centroid, da, detect, dr, r, v


command, select


enable_shm_copy, store_all_images, store_ball_detections, store_goal_detections


color, detect, postBoundingBox1, postBoundingBox2, type, v1, v2


count, fps, headAngles, height, horizonA, horizonB, labelA, labelB, select, time, width, yuyv


angle, detect, v, vcentroid, vendpoint


dodge, t, velocity, xy


attack, attack_angle, attack_bearing, defend, defend_angle, t


pose, uTorso

Webots Supervisor

The Webots Supervisor (similar to the RoboCup referee) sends out 'GameControl' packets of game information like penalties and team goals. The structure of these packets can be seen in UPennalizers/GameController/spl/RoboCupGameControlData.h

GameControl.lua (or in Webots, UPennalizers/Lib/Platform/Webots/GameControl/WebotsNaoGameControl.lua) parses these packets and stores the information in the shared memory.


When controlling the robot joints we consider the robot's body as an array of joints. To control a joint you must use the index value for that joint. Below is the index to joint mapping that we use:

(# is the index, followed by joint name)

  1. Head Yaw
  2. Head Pitch
  3. Left Shoulder Pitch
  4. Left Shoulder Roll
  5. Left Elbow Yaw
  6. Left Elbow Roll
  7. Left Hip Yaw Pitch
  8. Left Hip Roll
  9. Left Hip Pitch
  10. Left Knee Pitch
  11. Left Ankle Pitch
  12. Left Ankle Roll
  13. Right Hip Yaw Pitch
  14. Right Hip Roll
  15. Right Hip Pitch
  16. Right Knee Pitch
  17. Right Ankle Pitch
  18. Right Ankle Roll
  19. Right Shoulder Pitch
  20. Right Shoulder Roll
  21. Right Elbow Yaw
  22. Right Elbow Roll
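Keeping the mapping above in a table lets code avoid hard-coded magic indices. A Python sketch, using condensed joint names (an assumption; the list above spells them out) and the 1-based indexing of the actuator interface:

```python
# 1-based joint order from the table above, in condensed form.
JOINT_NAMES = [
    "HeadYaw", "HeadPitch",
    "LShoulderPitch", "LShoulderRoll", "LElbowYaw", "LElbowRoll",
    "LHipYawPitch", "LHipRoll", "LHipPitch",
    "LKneePitch", "LAnklePitch", "LAnkleRoll",
    "RHipYawPitch", "RHipRoll", "RHipPitch",
    "RKneePitch", "RAnklePitch", "RAnkleRoll",
    "RShoulderPitch", "RShoulderRoll", "RElbowYaw", "RElbowRoll",
]

def joint_index(name):
    """Return the 1-based index used by the actuator interface."""
    return JOINT_NAMES.index(name) + 1

print(joint_index("LKneePitch"))
# → 10
```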

Code Structure

The code is divided into two main components: high- and low-level processing. The low-level code is mainly written in C/C++ and compiled into libraries that have Lua interfaces. These libraries are used mainly for device drivers and anything designed to execute quickly (e.g., image processing and forward/inverse kinematics calculations). The high-level code is mainly scripted Lua and includes the robots' behavioral state machines, which use the low-level libraries.

The following is a brief description of the code following the provided directory structure. The low level code is rooted at the Lib directory and the high level code is rooted at the Player directory.

Low Level Interface

The low-level interfaces are found in the ./Lib directory.

  • Platform -- humanoid robot platforms (Nao, OP, Webots, Webots_OP)
    There are several directories contained here, one corresponding to each of the robot platforms supported. The code contained in these directories is everything that is platform dependent. This includes the robot's forward/inverse kinematics and device drivers for controlling the robot's sensors and actuators. All of the libraries have the same interface, to allow you to drop in the needed libraries/Lua files without changing the high-level behavioral code. The directory trees for each platform (Webots/Nao) are the same:
    • Body -- Body contains the device interface for controlling the robot's sensors and actuators. This includes controlling joint angles, reading IMU data, etc.
    • Camera -- Camera contains the device interface for controlling the robot's camera.
    • Kinematics -- Kinematics contains library for computing the forward and inverse kinematics of the robot.
  • ImageProc -- This directory contains all of the image processing libraries written in C/C++: Segmentation and finding connected components.
  • Util -- These are all of the C/C++ utility function libraries.
    • CArray -- CArray allows access to C arrays in Lua.
    • CUtil -- CUtil contains functions that can be performed on C arrays (such as array-to-string conversion) that can then be called by Lua.
    • Shm -- Shm is the Lua interface to Boost shared memory objects (only used for the Nao).
    • Unix -- This library provides a Lua interface to a number of important Unix functions; including time, sleep, and chdir to name a few.
    • Matlab -- Contains the Matlab utility used to create the colortables necessary for vision
      • The colortable process has also been translated into a Python program, since Python is open source
    • PyQT -- An incomplete attempt at writing a Python colortable utility, to replace the existing Matlab one.
  • NaoQi -- This contains the custom NaoQi module allowing access to the Nao device communication manager in Lua.
    • actuator_process.cpp --This script seems to handle command priority and execution in the shared memory space for actuators.
    • dcmprocess.cpp -- This deals with the DCM (Device Communication Manager), which is accessed from shared memory and allows code to control the actuators and sensors on the robot itself.
    • luadcm.cpp -- This allows Lua to send commands to the DCM.
    • sensor_process.cpp -- This script seems to handle command priority and execution in the shared memory space for sensors.
    • shm_util.cpp -- This is the script that actually creates and manages the SHM (shared memory space) itself.
    • shmmap.cpp -- This code allows the sensor and actuator to access the SHM (though a good deal of it is commented out).

Vision (High Level)

The vision is handled through the following classes:

  • Vision
  • Velocity
  • detectBall
  • detectGoal
  • detectLine
  • detectSpot
  • Detection
  • detectLandmark
  • detectLandmarks
  • headTransform

Vision (Low Level)

The low-level C/C++ functions responsible for the robot's vision:

  • Block_bitor(): Takes the parameters of the scene from the camera, then fragments the image according to color classes.
  • Color_count(): Counts how many colors there are in a frame or scene.
  • ConnectRegion(): Analyzes the connected regions between two blocks of colors. It takes as parameters the fragmented image and the region coordinates.
  • Luar_color_stats(): Calculates statistics for the fragmented image, such as the number of colors and the area over which each color is spread.
  • Lua_field_lines(): Takes the fragmented image as its argument and analyzes the colors; according to this analysis it determines the field lines.
  • Lua_goal_posts(): Takes the analysis of the colors and generates locations; using these locations it determines the goal posts.
  • luaImageProc(): Uses the output of the different sub-functions; its main goal is to determine the location of the robot on the field.

High Level Interface

player.lua is the main entry point for the code and contains the robot's main loop.

  • BodyFSM -- The state machine definition and states for the robot body are found here. These robot states include spinning to look for the ball, walking toward the ball and kicking the ball when positioned.
  • Config -- This directory contains the only high level platform dependent code, in the form of configuration files. Config.lua links to other environment-dependent configuration files, where the walk parameters, camera parameters and the names of the device interface libraries to use are all defined.
  • Dev -- This directory contains the Lua modules for controlling the devices (actuators/sensors) on the robot.
  • Data -- This directory contains any logging information produced. Currently this is only in the form of saved images.
  • HeadFSM -- The head state machine definition and states are located here. The head is controlled separately from the rest of the body and transitions between searching for the goals, searching for the ball, and tracking the ball once found.
  • Lib -- Lib contains all of the compiled, low level C libraries and Lua files that were created in ./Lib.
  • Motion -- Here is where all of the robot's motions are defined. It contains the walk and kick engines along with keyframe motions used for the get-up routines.
  • Util -- Utility functions are located here. The base finite state machine description and a Lua vector class are defined here.
  • Vision -- The main image processing pipeline is located here. It uses the output from the low level image processing to detect objects of interest (ball, goals, lines, spot, and landmarks).
  • World -- This is the code relating to the robot's world model.

Team Positions

The robot with Player ID 1 is automatically the goalie (see Config_Nao.lua) and never changes positions; the other members of the team, however, change positions dynamically (see Team.lua). Based on its distance to the goal, a team member plays either attack, defense, or support.

Lua Functions

This section documents the high-level interface in Lua.


  • set_actuator_command(cmd, index)
    set the position of the given joint. Starting at index, set the joints to the cmd values (based on the size of the cmd array)
  • set_actuator_velocity(vel, index)
    set the velocity of the given joint. Starting at index, set the joint velocities to the vel values (based on the size of the vel array)
  • set_actuator_hardness(h, index)
    set the hardness of the given joint. Starting at index, set the joint hardnesses to the h values (based on the size of the h array)
  • get_sensor_position(index)
    return the current position of the given joint. If index is not specified then return an array of all joint positions.
  • get_sensor_imuAngle(index)
    return the state of the 3-axis IMU. If index is specified then return only that axis's value; otherwise return all 3.
  • get_sensor_button()
    return the state of the button (1 - pressed/0 - not pressed)
  • get_head_position()
    convenience function to return an array of all the head joint positions
  • get_larm_position()
    convenience function to return an array of all the left arm joint positions
  • get_rarm_position()
    convenience function to return an array of all the right arm joint positions
  • get_lleg_position()
    convenience function to return an array of all the left leg joint positions
  • get_rleg_position()
    convenience function to return an array of all the right leg joint positions
  • set_body_hardness(h)
    convenience function to set the hardness of all the joints in the body to the same value
  • set_head_hardness(h)
    convenience function to set the hardness of all the head joints to the same value
  • set_larm_hardness(h)
    convenience function to set the hardness of all the left arm joints to the same value
  • set_rarm_hardness(h)
    convenience function to set the hardness of all the right arm joints to the same value
  • set_lleg_hardness(h)
    convenience function to set the hardness of all the left leg joints to the same value
  • set_rleg_hardness(h)
    convenience function to set the hardness of all the right leg joints to the same value
  • set_head_command(cmd)
    convenience function to set the head joint positions
  • set_larm_command(cmd)
    convenience function to set the left arm joint positions
  • set_rarm_command(cmd)
    convenience function to set the right arm joint positions
  • set_lleg_command(cmd)
    convenience function to set the left leg joint positions
  • set_rleg_command(cmd)
    convenience function to set the right leg joint positions
  • update()
    runs at every time step on the robot; it updates the robot's joint positions based on the target velocity and desired position.
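A plausible sketch of what one such update step does, per joint (an assumption for illustration: the joint moves toward its commanded position, limited by its velocity and the time step; this is not the driver source):

```python
def update_joint(position, command, velocity, dt):
    """Move one joint toward `command`, by at most `velocity * dt`."""
    max_step = abs(velocity) * dt
    error = command - position
    if abs(error) <= max_step:
        return command                     # close enough: snap to target
    return position + max_step * (1 if error > 0 else -1)

# Example: joint at 0.0 rad, target 1.0 rad, 2 rad/s, 100 Hz updates
p = 0.0
for _ in range(10):
    p = update_joint(p, 1.0, 2.0, 0.01)
print(round(p, 3))
# → 0.2
```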


  • set_param(param, val)
    set the given camera parameter
  • get_param(param)
    return the current value of the camera parameter
  • get_height()
    return the height of the images the camera takes
  • get_width()
    return the width of the images the camera takes
  • get_image()
    return the actual image from the camera. This image should be in YUV422 format.
  • get_camera_position()
    returns the position of the camera being used. This is needed for the Naos because they have two cameras.
  • select(bottom)
    select a camera (used for Nao). If bottom != 0 then select the bottom camera otherwise select the top camera.
  • get_select()
    return the current selected camera.


  • os.getenv('USER')
    get the current user name; useful for avoiding naming schemes that are not multi-user friendly


Illustration of Motion Finite State Machine

Motion has its own shared memory and FSM; all files are found in Player/Motion/


The NaoWalk.lua file is responsible for the walking of the Nao. Like most files, it draws from the Config files; in this case, mainly from Player/Config/Config_Nao_Walk.lua.

The walk itself uses ZMP computation to calculate where each motor should be at every point in time, and sends it there. It only stops when it is not in the middle of a step.

Walk Engine

A dynamic walk engine is responsible for the locomotion of the robot. This walk engine utilizes a ZMP controller to maintain stability throughout a step.

For more information on ZMP control see these papers: Zero-Moment Point - Thirty Five Years of its Life and Zero-Moment Point - Proper Interpretation.

The walk engine is omni-directional. In other words, the robot can change direction "instantaneously" in order to react to ball movement. Using inputs from our vision and localization modules, the walk engine generates trajectories for the robot’s Center of Mass (COM). The robot uses the information about the ball’s location and its relative orientation to determine rotational and translational velocities. Inverse kinematics (IK) are then used to generate joint trajectories that satisfy the ZMP criterion. This process is repeated to generate alternate support and swing phases for the two legs.

Information from the Inertial Measurement Unit (IMU) and the foot sensors of the robot is used to modulate the commanded joint angles and phase of the gait cycles to correct against perturbations. Hence, minor disturbances caused by irregularities in the carpet and bumping into obstacles do not cause the robot to lose stability.

The walk engine has tunable parameters that allow adjustments to be made based on the nature of the playing surface. These include step-height, step-length, COM position during stance phase, etc. An intuition for the effect that each parameter has on the gait of the robot is crucial in being able to re-tune the walk on a different playing surface.

Keyframe Motions

Luar has the ability to play back sequences of moves, or keyframe motions. For examples, see Player/Motion/keyframes/*.lua

Keyframe motions are scripted, open-loop motions that the robots can run. These are mainly used for the get-up routines and kicks. All keyframe motions we provide follow the naming convention km_<Name>.lua (e.g., km_StandupFromFront.lua is the keyframe motion file for standing up from the robot's front/stomach).

Keyframe motions are Lua tables with the following members:

  • mot -- the motion table, which includes:
    • mot.servos -- list of motors/servos you want to control.
    • mot.keyframes -- list of frames that the robot will go through. Each frame has:
      • stiffness -- the motor stiffness/hardness for this frame
      • angles -- the target motor positions for this frame
      • duration -- the amount of time it should take the robot to reach the target motor positions
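Playing back a keyframe table amounts to moving the joints between successive frames' angle lists over each frame's duration. A minimal sketch (linear interpolation is an assumption; the field names follow the mot table above):

```python
def interpolate_frames(angles_a, angles_b, duration, t):
    """Linearly interpolate between two keyframes' angle lists;
    t is the elapsed time within the transition, clamped to duration."""
    s = max(0.0, min(1.0, t / duration))
    return [a + (b - a) * s for a, b in zip(angles_a, angles_b)]

# Halfway through a 0.5 s transition of two joints:
print([round(v, 3) for v in interpolate_frames([0.0, 0.2], [1.0, 0.4], 0.5, 0.25)])
# → [0.5, 0.3]
```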

The motor id numbers can be found in Luar#Body

Further information about the servos can be found here

Finite State Machine

Assuming that the UPennalizers code is saved to the home directory, the FSM Lua files are found in the directory:

  • ~/UPennalizers/Player/BodyFSM/NaoPlayer


  • gameInitial.lua
    The Body and Head states are set to Idle. The robots set their team color and determine whether it is their team's turn for kickoff.
  • gameReady.lua
    The robots use walk.start() to walk to their positions, then set the Body and Head states to Ready.
  • gameSet.lua
    The Robots stop walking and wait for the game to start. The Body state is set to Stop, and the Head state is set to Track.
  • gamePlaying.lua
    The Head and Body states are set to Start. Unlike in other states, the robots check to see if they are penalized.
  • gamePenalized.lua
    The Body and Head states are set to Idle. The robot checks to see if it can go back to Playing.
  • gameFinished.Lua
    The Body and Head states are set to Idle.



The main code that defines the Body FSM's states and transitions is called BodyFSM.lua.

All the other files in the directory are files that define the entry, update, and exit methods for the different states:

  • bodyIdle.lua
    The initial state of the Body FSM; it moves the robot from a sitting position to a stance position, using states in the Motion directory.
  • bodyReady.lua
    Set during the gameReady state of the game FSM; moves the robots into their game positions.
  • bodyStop.lua
    The robot stops walking.
  • bodyStart.lua
    Called in the Game FSM state gamePlaying, the Robot begins walking. State transitions to bodyPosition.
  • bodyPosition.lua
    Behavior depends on the role assigned to the robot:
    • Role 2: based on the position of the ball, position the body to defend the goal
    • Role 3 (support): move towards the attacking goal and face the ball
    • All other roles: move towards the ball
  • bodyOrbit.lua
    It appears that the robot 'orbits' the ball, trying to align its body with the ball and the goal.
  • bodySearch.lua
    Attempts to locate the ball by spinning in a circle. If it cannot locate the ball, the search will time out.
  • bodyGotoCenter.lua
    If the robot is the goalie, move to the center of the goal; otherwise move to either the attacking or defensive position.
  • bodyApproach.lua
    Using the known location of the ball, move towards the ball.
  • bodyKick.lua
    Determines the kind of kick based on the ball position and attempts to kick the ball. After a kick, the head state is changed to 'headTrack'.
  • bodyObstacle.lua
    Stops walking and checks whether there is an obstacle, then resumes walking.
  • bodyObstacleAvoid.lua
    Attempts to avoid the obstacle by walking away from the detected object. If not completed within an appropriate amount of time, the state times out.
  • bodyChase.lua
    It appears that the robot chases after the ball while checking for obstacles.
  • bodyPause.lua
    The robot stops walking for a certain amount of time, and says "Defending."
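The shared pattern in these files can be sketched as follows. This is a minimal, hypothetical FSM driver in Python; the real BodyFSM.lua is Lua and differs in detail, and the event names and transitions below are invented for illustration, with state names taken from the list above.

```python
# Minimal sketch of an entry/update/exit state machine in the style of
# BodyFSM.lua. States and transitions here are illustrative only.
class State:
    def __init__(self, event=None):
        self.event = event   # event this state's update will report, if any
    def entry(self): pass    # run once when the state is entered
    def update(self): return self.event   # run every cycle; may return an event
    def exit(self): pass     # run once when the state is left

class FSM:
    def __init__(self, states, transitions, start):
        self.states = states            # name -> state object
        self.transitions = transitions  # (state name, event) -> next state name
        self.current = start
        self.states[start].entry()

    def update(self):
        # Each update may return an event ("done", "timeout", "ballLost", ...)
        # that drives a transition, much as the .lua state files do.
        event = self.states[self.current].update()
        nxt = self.transitions.get((self.current, event))
        if nxt:
            self.states[self.current].exit()
            self.current = nxt
            self.states[nxt].entry()

fsm = FSM(
    states={"bodyStart": State("done"), "bodyPosition": State()},
    transitions={("bodyStart", "done"): "bodyPosition"},
    start="bodyStart",
)
fsm.update()
print(fsm.current)  # bodyPosition
```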



The main code that defines the Head FSM's states and transitions is called HeadFSM.lua.

All the other files in the directory are files that define the entry, update, and exit methods for the different states:

  • headIdle.lua
    This is the initial state which is called in the game FSM state gameInitial. The default head position is defined by 'yaw' and 'pitch' constants in 'set_head_command.' This also continuously switches cameras. Transitions to headStart.
  • headStart.lua
    Called in the game FSM state gamePlaying. This state seems to be a placeholder as no changes are made to the head. Transitions to headTrack.
  • headReady.lua
    Called in the game FSM state gameReady. It finds the head angle and, based on this value, sets the head to a new position. A timer is set; if this takes more than 5 seconds, it transitions to headReadyLookGoal. Again, it continuously switches cameras.
  • headReadyLookGoal.lua
    Determines the closest attack between attackAngle and defendAngle. This state only uses the top camera. The 'yaw' used to set the head position is determined based on the closest attack. If this takes more than 1.5 seconds, it either times out or determines it is lost; both cases transition back to headReady.
  • headTrack.lua
    Transitions from headStart. This state uses only the bottom camera. The robot locates the ball and updates its head based on this location. The timer either times out and transitions to headLookGoal or determines that the ball is lost and transitions to headScan.
  • headScan.lua
    Entered (from headTrack) in order to locate the ball. Continuously switches between the two cameras in a figure-8 motion, updating the head position by continuously scanning left-right and up-down. The scan is parameterized in an attempt to cover the maximum field of view in the shortest time, and it starts in the ball's last known direction. If the ball is found, it transitions back to headTrack; if it times out, it stays in the same state and starts over.
  • headLookGoal.lua
    Uses only the top camera. This also determines the closest attack between attackAngle and defendAngle and updates the head position based on the closest attack. If it times out it transitions to headTrack, and if the ball is lost, it transitions to headSweep.
  • headSweep.lua
    Entered when the goal is lost (i.e., it wasn't where it was expected to be during headLookGoal). Uses the top camera to sweep the head from left to right; it does not look up or down. It finds the head angle and determines the new head position based on this value. If this takes more than 2 seconds, it transitions to headTrack.
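The timeout-driven transitions described above can be sketched for a single state. This is a hypothetical rendering of the headTrack pattern; the timeout constant and method names are assumptions, not values from the real headTrack.lua.

```python
import time

# Hypothetical sketch of one head state in the style of headTrack.lua:
# track the ball until a timeout sends us to headLookGoal, or a lost
# ball sends us to headScan. The timing constant is illustrative.
TRACK_TIMEOUT = 4.0  # seconds (made up; not the real tuning value)

class HeadTrack:
    def entry(self):
        self.t0 = time.time()   # remember when we entered the state

    def update(self, ball_visible, now=None):
        now = time.time() if now is None else now
        if not ball_visible:
            return "headScan"        # ball lost: go search for it
        if now - self.t0 > TRACK_TIMEOUT:
            return "headLookGoal"    # periodically glance at the goal
        return None                  # no event: stay in headTrack

state = HeadTrack()
state.entry()
print(state.update(ball_visible=False))  # headScan
```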

Tools and Utilities

The following are tools and utilities (developed by Doug Blank's Androids class) to aid in development and debugging. Note that, unless otherwise marked, these are works in progress and should not be used for any actual debugging.

Finite State Machine Visualizer

  • This Tkinter-based Python utility will (when complete) allow the user to open any Lua FSM file, pick a robot, and watch the robot transition through the FSM in real time. It is currently under development and can be found on GitHub here: (note that it is both incomplete and poorly documented; both of those things will be changed soon).

Localization Visualizer

  • A Python script that shows, on a scaled version of the field, what the robot thinks its position and heading are. There are two versions: the first shows a single robot and where it thinks the ball is located; the second shows where all the robots on the field think they are located.
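The core of such a visualizer is a scaling transform from field coordinates to canvas pixels. A minimal sketch follows; the field dimensions and canvas size are assumptions for illustration, not values taken from the script.

```python
# Map field coordinates (meters, origin at field center, y up) to canvas
# pixels (origin top-left, y down). Field/canvas sizes are illustrative.
FIELD_W, FIELD_H = 6.0, 4.0      # assumed field size in meters
CANVAS_W, CANVAS_H = 600, 400    # assumed canvas size in pixels

def field_to_canvas(x, y):
    px = (x + FIELD_W / 2) / FIELD_W * CANVAS_W
    py = (FIELD_H / 2 - y) / FIELD_H * CANVAS_H  # flip y: canvas y grows down
    return px, py

print(field_to_canvas(0.0, 0.0))   # (300.0, 200.0) -- field center
print(field_to_canvas(-3.0, 2.0))  # (0.0, 0.0)     -- top-left corner
```

A robot's heading can then be drawn as a short line from its canvas position in the direction of its estimated angle.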

Python Colortable Generator

  • A (currently incomplete/untested) Python version of the Matlab colortable generation utility. This code is currently messy, undocumented, untested, and incomplete. Once it is tested and known to work, the code will be cleaned up and documented fully. For now, it serves as an emergency way to access the files if need be.
  • VERSION A: This version takes a bitmap file as input, allows for the selection of colors, and then writes a look-up table with the selected colors. It should work but is completely untested. There is a chance that the RGB to YUYV conversion is incorrect, so if you try this and it doesn't work, see if changing it helps (an alternative algorithm is provided in the comments). Note that you must have PIL installed for this to work correctly.
  • VERSION B: This version takes as input an array of YUYV values from the robot's shared memory. It is the version currently being developed, but it is currently INCOMPLETE. Do not try to use it yet; it is mostly here for reference and in case of emergencies.
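For reference, one common RGB-to-YUV conversion is the BT.601 full-range formula sketched below. This is one plausible choice, not necessarily the formula the utility uses; as noted above, the scripts themselves carry an alternative algorithm in their comments.

```python
# One common RGB -> YUV conversion (BT.601 full-range, with U and V
# offset by 128 and clamped to byte range). Treat this as a candidate
# formula, not the one the colortable utility is known to use.
def clamp(c):
    return max(0, min(255, int(round(c))))

def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128
    return clamp(y), clamp(u), clamp(v)

print(rgb_to_yuv(128, 128, 128))  # mid-gray maps to (128, 128, 128)
```

A YUYV pixel pair shares one U and one V sample between two Y samples, so a lookup table is usually indexed by (Y, U, V) triples derived this way.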


Working With the Nao

Scary attempting to stand up from a backward fall.


Some quick notes:

  • Numbers for the head, body, and battery can be found under the head cap and on the battery pack of the Nao. These are useful when reporting hardware problems to Aldebaran.
  • The USB port is located under the head cap. Code must be compiled and transferred to the USB stick. The USB stick should be placed in the head slot with the black side facing upwards (away from the feet).

Getting the Software on the Nao

Based on

  1. Nao Website ID: Digitowls, Password: brynmawr999
  2. Aldebaran Robotics Website ID: Nao2021, Password: 9999
  3. Download the Nao OS Robocup Image from Aldebaran. This release has been tested with version 1.10.52 of the Nao OS.
  4. Download the flasher utility provided by Aldebaran to copy the image to the USB stick.
  5. Run the flasher as root or with sudo.
  6. Load the image from above.
  7. Select the USB port, and install.

A hardcoded Lua include path needs to be set:

  1. mkdir -p /usr/include/lua5.1/
  2. cd /usr/include/lua5.1/
  3. ln -s /usr/include/lauxlib.h .
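The three steps above can be run as a short script. The sketch below demonstrates them against a scratch root so it can be tried off the robot; on the Nao itself, run the commands directly (as root) against /usr/include.

```shell
#!/bin/sh
# Recreate the hardcoded Lua include path. ROOT is a scratch directory
# for safe demonstration; on the robot, operate on /usr/include directly.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/usr/include/lua5.1"
cd "$ROOT/usr/include/lua5.1"
ln -sf /usr/include/lauxlib.h .
ls -l lauxlib.h
```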

Set network IP

  • /home/nao
  • ./remotes
  • ./remotes/config
  • ./remotes/lircd.conf
  • ./naoqi
  • ./naoqi/preferences
  • ./naoqi/preferences/eeprom
  • ./naoqi/preferences/Device_Body.xml
  • ./naoqi/preferences/ALMotion.xml
  • ./naoqi/preferences/Device_Head.xml
  • ./naoqi/share
  • ./naoqi/share/naoqi
  • ./naoqi/share/naoqi/vision
  • ./behaviors

Getting Started with NaoQi

  • Open PuTTY and login using the robot's hostname. (Ginger, Posh, Sporty, Baby, Scary)
  • Login to the shell
    • Username: Nao
    • Password: Nao
  • In a browser you can type robot_IP:port to get the robot's page. On this page you can find all of the available toolboxes and their various functions for your Nao. (Note: to do the things below, you will need ALTextToSpeech, ALVisionToolbox, and ALVideoDevice.)
  • You can also get documentation and tutorials through Choregraphe --> HELP --> Documentation.
  • Quick Tutorial: Making the Nao Say Something from Command Line
 nao@Nao$ python
 >>> from naoqi import ALProxy
 >>> audioProxy = ALProxy("ALTextToSpeech", robot_IP, 9559)  # be sure to put robot_IP in quotes
 >>> audioProxy.say("I can speak!")
  • Quick Tutorial: Recording and Saving a Video from Nao from Command Line
 nao@Nao$ python
 >>> from naoqi import ALProxy
 >>> camProxy = ALProxy("ALVisionToolbox", robot_IP, 9559)  # be sure to put robot_IP in quotes
 >>> vidProxy = ALProxy("ALVideoDevice", robot_IP, 9559)
 >>> vidProxy.setParam(18, 1)  # camera selection; 1 is the bottom camera, 0 is the top
 >>> camProxy.takePicture()  # should take 3 consecutive pictures and store them in naoqi/share/naoqi/vision

Sample Images

Interpreting Signals from the Nao

Here is a link that explains what the LED blink patterns and spoken messages mean when your Nao is trying to tell you something important.

When Running the Code

  • On boot --> Initial
  • 1st button press --> Ready
  • 2nd button press --> Set
  • 3rd button press --> Playing
  • 4th button press --> Penalized
  • The robot then alternates between Playing and Penalized continuously until Finished.
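The button sequence above can be sketched as a simple state progression. This is an illustrative model of the behavior described, not code from the repository.

```python
# Sketch of the game-state sequence driven by chest-button presses.
# After reaching Playing, each press toggles Playing <-> Penalized.
STATES = ["Initial", "Ready", "Set", "Playing"]

def next_state(current):
    if current in ("Playing", "Penalized"):
        return "Penalized" if current == "Playing" else "Playing"
    return STATES[STATES.index(current) + 1]

state = "Initial"       # state on boot
for _ in range(4):      # four button presses
    state = next_state(state)
print(state)            # Penalized
```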

Generating Keyframe Motions

A keyframe motion generator for the Nao can be found at /Player/Test/gen_motion.lua. Command instructions are included in the program. New motions are saved in the /Player/ directory and can be tested and stepped through with /Player/Test/test_keyframe.lua. Successful motions should be saved/moved to the /Player/Motions/keyframes directory.
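Stepping through a keyframe motion amounts to moving each servo from its current angle to the frame's target over the frame's duration. A minimal sketch, assuming linear interpolation (the actual generator may use a different interpolation scheme):

```python
# Linearly interpolate servo angles toward a keyframe's targets.
# t is the time elapsed within the current frame, in seconds.
def interpolate(start_angles, target_angles, duration, t):
    frac = min(t / duration, 1.0)   # clamp so we stop at the target
    return [a + (b - a) * frac for a, b in zip(start_angles, target_angles)]

start = [0.0, 0.0]
target = [1.0, -0.5]
print(interpolate(start, target, 2.0, 1.0))  # halfway: [0.5, -0.25]
print(interpolate(start, target, 2.0, 5.0))  # past the end: [1.0, -0.5]
```

Chaining frames, with each frame's final angles becoming the next frame's start, reproduces the stepping behavior of test_keyframe.lua.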

Updating Nao V4


Login Information Password: adminadmin