Developmental Robotics: Creativity and Simulation


June 1st - June 5th

Monday, June 1st

Key points: Creativity, Unsupervised vs. Supervised neural networks, Developmental Robotics, Microsoft Surface

I started my research experience here at BMC this summer late; however, I came in with a slew of ideas, which I discussed with Doug. We discussed my thoughts on creativity and developmental robots: whether genuine creativity is a possible phenomenon in robots (which I now presume it is), how it can be measured, and what kind of environment best fosters it. We also spoke more generally about Developmental Robotics and about removing the anthropomorphic bias inherent in other forms of robotics research. I agreed with Doug that removing this bias, and letting the robot generate its own motivations, is key to discovering whether creativity is possible for robots. From this point we began discussing unsupervised vs. supervised neural networks with the rest of the research group. Meena is our expert in this area.

I spent the rest of the day learning about the Microsoft Surface for which we will do some programming this summer.

Tuesday and Wednesday, June 2nd and 3rd

Key points: Webots and Humanoid Robot Simulation

For the next two days Doug introduced me to the Webots robot simulator, which is what we will be using for our humanoid robot, the Robonova-1. Webots is very cool and, when you aren't programming, very simple to use. It allows the user to create worlds, robots, and code to control those robots, as well as the physics that govern the world the robot lives in. I learned that the main goal of this project would be to create an abstract library, in the Python programming language, for programming humanoid robots. This project would work in conjunction with the Pyro Project (Python Robotics).
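
To give a flavor of what "abstract" means here, below is a purely hypothetical sketch of the kind of interface such a library might expose. None of these names exist yet; the idea is just that the same calls should eventually drive either a Webots simulation or a real humanoid:

class Humanoid:
  """A backend-neutral humanoid (hypothetical sketch, not real code)."""

  def __init__(self, backend):
    self.backend = backend            # e.g. a Webots driver or a serial link

  def setJoint(self, name, angle):
    # the backend translates an abstract joint name into whatever
    # the simulator or the hardware actually expects
    self.backend.setJoint(name, angle)

  def play(self, motionName):
    self.backend.play(motionName)     # e.g. "wave" or "step_forward"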

Thursday and Friday, June 4th and 5th

Key Points: Learning Webots Worlds and Simple Robot Models

I spent all of Thursday and Friday learning how to create worlds in Webots and how to build very basic robot models. I primarily studied world lighting, the creation of objects, the creation of boundaries around objects (to keep my robot from walking through walls), object materials, textures, and such. I eventually created my own simple world, which consists of a fairly simple maze. The end point of the maze is a blue box; when the blue box is spotted, my robot will stop moving around.

Maze.jpg

I also worked on a differential-wheels model of a robot (a robot body with one wheel on each side). I gave this little robot two eyes, each containing an IR sensor, and a little hat containing a camera. I don't have a name for her yet; suggestions would be appreciated.

Differentialwheels.jpg
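
Putting the camera and the maze together, here is a rough sketch of how the blue-box stop might work with the Webots Python API. The device name 'camera', the blueness test, and the thresholds are my own guesses, not tested code:

from controller import *

timestep = 64

class BlueBoxStopper (DifferentialWheels):
  def run(self):
    camera = self.getCamera('camera')   # assumes the device is named 'camera'
    camera.enable(timestep)
    width = camera.getWidth()
    height = camera.getHeight()

    while self.step(timestep) != -1:
      image = camera.getImageArray()    # image[x][y] -> [red, green, blue], 0-255
      blue = 0
      for x in range(width):
        for y in range(height):
          r, g, b = image[x][y]
          if b > 2 * r and b > 2 * g:   # a crude "this pixel is blue" test
            blue += 1
      if blue > (width * height) / 4:   # a quarter of the view is blue: stop
        self.setSpeed(0, 0)
      else:
        self.setSpeed(60, 60)           # otherwise keep going (no wall avoidance here)

controller = BlueBoxStopper()
controller.run()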

June 8th - June 12th

Monday and Tuesday, June 8th and 9th

Key Points: Code for Webots and Reading on Creativity and Developmental Robotics

Over the weekend I finished all the reading on Doug's Developmental Robotics research, including his PhD thesis. His thesis was awesome: he investigated whether robots can learn to make analogies, and they did. It's a bigger version of what I might want to achieve this summer. I also spoke to a Psychology professor and a Philosophy professor here to guide me in my research.

After speaking with them and Doug, I realized that the creativity coming from my robot would be most similar to the creativity of babies. My robot will have only so many primitive actions, just as a baby has very few abilities at birth (sucking, movement of limbs, etc.). The creativity exhibited by my robot will be built on these primitives. For example, rather than my robot doing art, something creative might be the way my robot takes a movement it already knows and combines it with another movement to create a newer, more complex movement (even better if it serves some purpose). Another example might be the robot using one of its primitive movements to manipulate its environment for some purpose, or in an attempt to relieve its own boredom, the way a baby bounces to music or pushes a button to reveal some cool consequence (like hearing a noise or seeing a picture). My mother pointed out that she finds it most interesting when babies realize they are in control of their own bodies. She particularly likes it when babies stare at their hands, flipping them back and forth, realizing that they have control over them. If a robot could make a discovery about its own body, and use this discovery to manipulate its environment or itself, that would be very interesting and very creative.

I spent my Monday and Tuesday working on Webots controller code. The basic code for the robot to move around the maze was originally written in C; I translated this code into Python. (There are VERY few publicly available controllers written in Python, so making the switch was difficult, as the library was totally new to me.) The code itself is quite simple. I mostly struggled with the translation in terms of object-oriented programming (C is not object oriented; Python is). Because of this, I had trouble determining what belonged in which class, and whether I needed to create new classes or whether they already existed in the imported libraries.

C Code

 /*
 * File:         mybot_simple.c
 * Date:         August 8th, 2006
 * Description:  A really simple controller which moves the MyBot robot
 *               and avoids the walls
 * Author:       Simon Blanchoud
 * Modifications:
 *
 * Copyright (c) 2008 Cyberbotics - www.cyberbotics.com
 */

#include <webots/robot.h>
#include <webots/differential_wheels.h>
#include <webots/distance_sensor.h>

#define SPEED 60
#define TIME_STEP 64

int main()
{
  wb_robot_init(); /* necessary to initialize webots stuff */
  
  /* Get and enable the distance sensors. */
  WbDeviceTag ir0 = wb_robot_get_device("ir0");
  WbDeviceTag ir1 = wb_robot_get_device("ir1");
  wb_distance_sensor_enable(ir0, TIME_STEP);
  wb_distance_sensor_enable(ir1, TIME_STEP);
  
  while(wb_robot_step(TIME_STEP)!=-1) {
    
    /* Get distance sensor values */
    double ir0_value = wb_distance_sensor_get_value(ir0);
    double ir1_value = wb_distance_sensor_get_value(ir1);

    /* Compute the motor speeds */
    double left_speed, right_speed;
    if (ir1_value > 500) {

        /*
         * If both distance sensors are detecting something, this means that
         * we are facing a wall. In this case we need to move backwards.
         */
        if (ir0_value > 500) {
            left_speed = -SPEED;
            right_speed = -SPEED / 2;
        } else {

            /*
             * We turn proportionally to the sensor values because the
             * closer we are to the wall, the more we need to turn.
             */
            left_speed = -ir1_value / 10;
            right_speed = (ir0_value / 10) + 5;
        }
    } else if (ir0_value > 500) {
        left_speed = (ir1_value / 10) + 5;
        right_speed = -ir0_value / 10;
    } else {

        /*
         * If nothing has been detected we can move forward at maximal speed.
         */
        left_speed = SPEED;
        right_speed = SPEED;
    }

    /* Set the motor speeds. */
    wb_differential_wheels_set_speed(left_speed, right_speed);
  }
  
  return 0;
}

My Python Code

# File:        Mybot_Simple_Python.py 
# Date:        6/8/2009 
# Description:  Python Testing Program  
# Author:       AshGavs

from controller import *  
timestep = 64
speed = 60  
class Mybot_Simple_Python (DifferentialWheels):
  
  def run(self):
    distanceSensor0 = self.getDistanceSensor('ir0') #Get the IR sensors and enable them
    distanceSensor1 = self.getDistanceSensor('ir1')
    distanceSensor0.enable(timestep)
    distanceSensor1.enable(timestep)

    while self.step(timestep) != -1: # advance the simulation one time step at a time
      ir0_value = distanceSensor0.getValue()
      ir1_value = distanceSensor1.getValue()

      if ir1_value > 500:
        if ir0_value > 500: # both IR sensors sense a wall: back up
          left_speed = -speed
          right_speed = -speed / 2
        else: # just IR1 senses a wall: turn away from it
          left_speed = -ir1_value / 10
          right_speed = (ir0_value / 10) + 5

      elif ir0_value > 500: # just IR0 senses a wall: turn the other way
        left_speed = (ir1_value / 10) + 5
        right_speed = -ir0_value / 10
      else: # no wall: move forward at full speed
        left_speed = speed
        right_speed = speed

      self.setSpeed(left_speed, right_speed) # set the speed of the wheels
      
controller = Mybot_Simple_Python()
controller.run()
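
For anyone making the same C-to-Python switch: the mapping that finally clicked for me is that wb_robot_init() and main() become the controller class and its run() method, wb_robot_get_device() becomes self.getDistanceSensor(), the wb_distance_sensor_* functions become methods on the sensor object itself, and wb_differential_wheels_set_speed() becomes self.setSpeed(), inherited from DifferentialWheels. Once you notice that each C function's first argument (a device tag, or the implicit robot) is the object the Python method hangs off of, the class structure falls into place.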


Wednesday, June 10th

Key Points: Testing Python on a Humanoid Robot

I have decided to start working on using Python on a humanoid robot. Though there is no Webots model of the RN-1 (the robot we will end up using), I started doing work on the Kondo KHR-2HV, a similar bot. Its controller is written in C++, which is object oriented; however, the code is very complicated and will be difficult to translate. For now I am just reading and attempting to understand the code, the model, and the physics of the world, and determining whether it would be more worthwhile to build my own world and model from scratch (which we may need to do eventually) or to experiment with this robot.

Humanoid.jpg

Thursday, June 11th

Key Points: Huba the Robot, More Reading

Today I continued some reading about humanoid robots; I read about using them in the classroom and whatnot. The reading was good. I also began to go over the C++ code, and I helped to edit, perhaps too seriously, an abstract for an upcoming conference. I have been made one of the co-authors! Today Doug, the other researchers, and I held a meeting about designing a humanoid robot specifically for Bryn Mawr. We have decided to name her Huba. We talked about what size she would be, how many joints she'd have, what kind of motors we should use, what sensors we can get, etc. We also discussed designs for her neck, hands, and head, and whether it would be possible to use LEGO sensor parts; that is, whether we can stick some LEGO bricks onto her shell and attach the LEGO Mindstorms sensors to her. We thought a microphone and a stereo camera would be best.


Friday, June 12th

Key Points: KHR-2HV C++ Controller

In addition to 'evaluating the Microsoft Surface interface' by playing chess and tiles with fellow researchers Alex and Meena, I read and re-read...and re-read the C++ code for the KHR-2HV controller until I finally, sort of, understood it. It is a lot of code, and a lot of it is uninteresting, hard-coded movements, so I will not post it here, though I may post some of my Python translation. From what I gleaned, it looks like a surprisingly simple controller. I believe the real complexity of the bot (how it balances, where the centers of mass for each body part are, etc.) is really in the physics code, which I have not read up on yet. I suppose we will learn more about that when we attempt to create our own model. I wonder if we can just modify the physics code for this bot and make it compatible with ours.

As for the rest of the code, it was just more classes and library definitions that I hadn't yet seen. Though the functions and classes were more complex than those I had seen for my differential-wheels robot, the principles are still the same. I think I will do a couple of read-throughs until I really understand the code, and then attempt to rewrite it in Python.
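
To check that I understand the shape of things before diving in, I roughed out a skeleton of what I expect the translation to look like: grab each servo by name, then play back hard-coded keyframe tables, just as the C++ controller does. The servo names, angles, and timings below are made up, not the KHR-2HV's real ones:

from controller import *

timestep = 32

class HumanoidSkeleton (Robot):
  def run(self):
    # grab each servo by name (a hypothetical subset of the real joint list)
    names = ['neck', 'left_shoulder', 'right_shoulder']
    servos = dict((n, self.getServo(n)) for n in names)

    # a hard-coded motion: (servo name, target angle in radians) keyframes,
    # in the spirit of the C++ controller's movement tables
    wave = [('right_shoulder', 1.5), ('right_shoulder', 0.8),
            ('right_shoulder', 1.5), ('right_shoulder', 0.0)]

    for name, angle in wave:
      servos[name].setPosition(angle)
      for _ in range(10):          # let the simulation run so the servo gets there
        if self.step(timestep) == -1:
          return

controller = HumanoidSkeleton()
controller.run()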


June 15th - June 19th

Monday, June 15th

Key Points: A visit to Swarthmore to learn about Growing Neural Gas, Intelligent Adaptive Curiosity, and Rovio

Today we spent most of the day at Swarthmore with a colleague of Doug's, Lisa Meeden, and her students Rachel and Ryan. Rachel was working on a developmental robotics project similar to ours. Hers, however, involved the use of a Growing Neural Gas (GNG) rather than a more typical neural network. Essentially, the goal of the experiment was to see whether a simulated robot could begin to make accurate predictions about its environment. In its environment are two objects of different colors: one red and one blue. The red object is stationary; the blue one moves. The robot can also sense the color green and how centered an object is in its vision. Information about the current situation is fed into the GNG; if the situation is novel, a new node for that situation is made and placed closest to its most similar nodes. The robot makes predictions over and over again about what it thinks it SHOULD be seeing based on the situation. If it gets to a point where it can always predict a given situation, that situation is considered learned. If it finds that a situation is unlearnable, such as random noise, it will move on to something else (hence the Intelligent Adaptive Curiosity). I REALLY liked learning about this and think I will ask for the code for the algorithm.
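
To make sure I understood the algorithm, I jotted down a minimal GNG sketch in Python. This is my own reading of the standard algorithm, NOT Rachel's code (which I haven't gotten yet), and it leaves out node removal and other refinements:

import random

class GrowingNeuralGas:
  """A minimal GNG sketch: nodes with error accumulators, edges with ages."""

  def __init__(self, dim, max_age=50, insert_every=100):
    # start with two random nodes and no edges
    self.nodes = [[random.random() for _ in range(dim)] for _ in range(2)]
    self.error = [0.0, 0.0]      # accumulated prediction error per node
    self.edges = {}              # frozenset((i, j)) -> age
    self.max_age = max_age
    self.insert_every = insert_every
    self.steps = 0

  def _d2(self, a, b):
    return sum((p - q) ** 2 for p, q in zip(a, b))

  def _neighbors(self, i):
    return [j for e in self.edges for j in e if i in e and j != i]

  def observe(self, x):
    self.steps += 1
    # 1. find the two nodes nearest the input
    order = sorted(range(len(self.nodes)),
                   key=lambda i: self._d2(self.nodes[i], x))
    s1, s2 = order[0], order[1]
    # 2. novelty accumulates as squared error at the winning node
    self.error[s1] += self._d2(self.nodes[s1], x)
    # 3. nudge the winner (a lot) and its neighbors (a little) toward x
    for i, rate in [(s1, 0.05)] + [(j, 0.006) for j in self._neighbors(s1)]:
      self.nodes[i] = [p + rate * (q - p) for p, q in zip(self.nodes[i], x)]
    # 4. age the winner's edges, dropping any that get too old,
    #    then refresh (or create) the edge between the two winners
    for e in list(self.edges):
      if s1 in e:
        self.edges[e] += 1
        if self.edges[e] > self.max_age:
          del self.edges[e]
    self.edges[frozenset((s1, s2))] = 0
    # 5. periodically grow a node between the node with the most error
    #    and its worst neighbor -- new resources go to the novel regions
    if self.steps % self.insert_every == 0:
      q = max(range(len(self.nodes)), key=lambda i: self.error[i])
      nb = self._neighbors(q)
      if nb:
        f = max(nb, key=lambda i: self.error[i])
        self.nodes.append([(a + b) / 2.0
                           for a, b in zip(self.nodes[q], self.nodes[f])])
        r = len(self.nodes) - 1
        self.error[q] *= 0.5
        self.error[f] *= 0.5
        self.error.append(self.error[q])
        del self.edges[frozenset((q, f))]
        self.edges[frozenset((q, r))] = 0
        self.edges[frozenset((f, r))] = 0
    # 6. decay all errors so old novelty fades
    self.error = [e * 0.995 for e in self.error]

# feeding it, e.g., (red, blue, how-centered) sensor values each step:
# gng = GrowingNeuralGas(dim=3)
# gng.observe([0.9, 0.1, 0.5])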

Ryan was working on making the Rovio robot work with Pyro. He has come very far, and we are interested in maybe using Rovio for our intro course. Rovio is very cute, has 3 wheels, and an AWESOME camera (you can even take video).

I spent my last hour re-reading the C++ code for the KHR-2HV robot. I think I now understand pretty much completely how all eight files of code work together, and I am ready to begin writing my Python translation tomorrow!

Tuesday, June 16th - Friday, June 19th

Key Points: Modeling the Robonova

This past week I worked on modeling the Robonova for Webots. I am happy to say that my model is near perfect. There are a couple of motor restrictions that I still need to program in, but I think that can easily be done through the controller (see the sketch after the picture). I based the model on the KHR-2HV, shown above; the two robots have the same number of servos and very similar proportions. After altering the model to make it work with the Robonova, I added textures and modeled the Fluke (the Robonova's head). I still don't have the exact perspective correct for the camera, but it's getting there. Here is my model:

RN1.jpg
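
As for those motor restrictions, here is a small sketch of how I plan to clamp target angles in the controller. The limits are placeholders, not the Robonova's real ranges:

def clampedSetPosition(servo, angle, low, high):
  # clip the requested angle into the joint's legal range
  # before sending it to the servo
  servo.setPosition(max(low, min(high, angle)))

# e.g. a knee that should only bend between 0 and ~130 degrees:
# clampedSetPosition(knee, angle, 0.0, 2.27)   # radians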

Next week I am going to begin working on code for a Python controller. I am going to convert the Nao robot controller from MATLAB to Python and then attempt to run this controller on the RN-1. Wish me luck!

June 22nd - June 26th

Monday, June 22nd

Key Points: Swarthmore Visits, Nao Code, IAC

Today Swarthmore came to visit, along with a professor from Sarah Lawrence. We showed them what we have been working on and continued to discuss different learning algorithms. We discussed new ways to look at GNG and IAC, and we discussed the use of the Governor to help neural networks with catastrophic forgetting.

I was becoming concerned with how to structure my own research, so I discussed this with Doug. He recommended that I look at an article on the original IAC, which I will do. I also found out today that Teyvonia, a BMC alum, has already converted the MATLAB code, so I am going to try to edit her code instead.

Tuesday, June 23rd - Thursday, June 25th

Key Points: Robonova and Webots Wiki

I have started working on Webots tutorials using the Robonova. Soon these will be ready to go, and I will post them ASAP!