May 20, 2011

Coding - Final Project

In order to build the eyes and eyelids, Christine and I needed the materials that Hande ordered. So, while we were waiting for the materials, we started coding our eye movements in LabVIEW. However, before we could code the eye movements, we needed to understand how the ultrasound sensors worked. So, we started there.

We tested the ultrasound sensors with some trivial code: read the value of the sensor and print it to the screen of the NXT, looping until somebody physically stops the program.
Trivial code which reads the value of the sensor.
There are no icons because the version of LabVIEW
on my computer is experimental, so I couldn't load the
NXT module.
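
For reference, the logic of that trivial test program was roughly the following, shown here as a minimal Python sketch since the real code was LabVIEW blocks (the names read_ultrasound and print_to_nxt_screen are hypothetical stand-ins for the NXT read and display blocks):

    import random
    import time

    def read_ultrasound():
        # Hypothetical stand-in for the NXT ultrasound-read block:
        # returns 0-255, where 0 means something is touching the sensor
        # and 255 means nothing is in range.
        return random.randint(0, 255)

    def print_to_nxt_screen(value):
        # Hypothetical stand-in for the NXT display block.
        print(value)

    # Read and display the sensor value forever, until somebody
    # physically stops the program (Ctrl+C here).
    while True:
        print_to_nxt_screen(read_ultrasound())
        time.sleep(0.1)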

We found that the ultrasound sensors sensed pretty well. They printed out values between 0 and 255, where 0 is when you have your hand on the sensor and 255 is the maximum value. They were a bit sketchy when objects got too close (within a few centimeters of the sensor): the readings would oscillate between the real value and the max value. However, we did not think that this would be a detriment to our project, since we assumed that nobody would walk within a few centimeters of our puppet. We also found that the sensors were quite sensitive and would pick up values when we waved our hands about half a meter in front of them.
Ultrasound sensors configured to test values
So, after we tested our sensors, we started coding. However, minutes into our coding, we decided that the problem was too complex to start directly in code. Rather, we decided to reason about our code on paper, and then translate that reasoning into LabVIEW.
An outline of our code

An outline of our design decisions:
  •  Three sensors
  •  5 angles for eyes based on which sensors are currently sensing somebody: -60, -30, 0, 30, 60.
  •  The direction that the eyes move in depends on which angle the eyes are currently at and which sensors are currently sensing somebody
    • For example: Wendy just woke up, so her eyes are at 0. However, Bob walks over to Wendy, and is only in front of the first and second sensors [sensors numbered in order]. So the eyes move to look at Bob, which means that the motor controlling the eyes goes backward until it registers -30 degrees. However, if Wendy hadn't just woken up and instead was watching Bob's friend Carol, who was to the left of Bob and covering only the first sensor, the motor goes forward until it registers -30 degrees.
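
To make this concrete, here is a rough sketch of the decision logic in Python (the real implementation was LabVIEW case structures; the sensor-to-angle mapping below is my reconstruction from the Bob/Carol example, not copied from the code):

    # Which sensors detect somebody -> target eye angle, in degrees.
    # Sensor 1 is on Wendy's far left, sensor 3 on her far right.
    TARGET_ANGLES = {
        (True,  False, False): -60,  # only sensor 1 (Carol's position)
        (True,  True,  False): -30,  # sensors 1 and 2 (Bob's position)
        (False, True,  False):   0,  # only sensor 2: straight ahead
        (False, True,  True):   30,  # sensors 2 and 3
        (False, False, True):   60,  # only sensor 3
    }

    def eye_motor_command(current_angle, sensing):
        """Decide which way the eye motor should turn.

        sensing is a tuple of three booleans, one per ultrasound sensor,
        True if that sensor currently detects somebody."""
        # Combinations not listed above just stay put in this sketch.
        target = TARGET_ANGLES.get(sensing, current_angle)
        if current_angle < target:
            return "forward"
        elif current_angle > target:
            return "backward"
        return "brake"

    # Wendy just woke up (eyes at 0) and Bob covers sensors 1 and 2:
    print(eye_motor_command(0, (True, True, False)))    # backward, toward -30
    # Wendy was watching Carol (eyes at -60) when Bob shows up:
    print(eye_motor_command(-60, (True, True, False)))  # forward, toward -30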

We spent a really long time iterating on our code for a variety of reasons.
  • We originally planned on our range being only 90 degrees, instead of 120 degrees. However, with 90 degrees the movement was not that noticeable for people who were just left or right of center. This was the easiest fix we made, as we just had to change our constants.
  • In the first version of our code, we measured rotation every single time we wanted to use the value. Now, this is absurdly inefficient. Measuring rotation is expensive, and doing it two to four times every loop iteration would [I assume] make our eyes too jerky and not particularly creepy. So, instead of measuring rotation every single time we wanted to use it, we measured it once and wired that value to every spot where we used it. I realized the original strategy was too inefficient when we were not particularly far into it, so we never actually came close to completing or testing that version.
Second Iteration of our code
  • So, the second (and the first, for that matter) version of our code was written under the assumption that only boolean values could be entered into case structures, an assumption based on experience with other programming languages and lack of experience with LabVIEW. So, these booleans were generated by comparing the rotation value to our target values. We then made a really complicated case structure that combined these booleans with other boolean values asking "is there somebody in front of my sensor?" to determine whether the motor for the eyes should go forward, backward, or brake. However convoluted this scheme was, we did not know a better way until Chris suggested a new one in class the next day.
  • Our third version of the code entered the rotation value into the case structure. This way, we could make cases based on ranges, like 3-27 degrees, instead of teasing out the ranges from a bunch of booleans. [Tip: the code will not compile if all cases are not covered. For True/False cases, it comes already preset. If you use numbers as input, like we did, you either need a case called "default" or cases that cover +/- infinity, by having one case that goes from .. to minVal and another that goes from maxVal to .. ; the ".." signifies "on to infinity."] We still had nested case structures, to tease out which sensors were currently detecting something. We also changed the values of our rotations to be not the actual value, but the value mod 360, which made this value reflect the angle on the circle. When we ran our code after making this one change, our eyes kind of followed our hand, a great improvement over our previous iterations.
Third iteration of the code- this case is simpler than most
  • The only differences between our third iteration and what we ended up with at the end were minor.
    • Instead of aiming for a single target value, we aimed for a target range, within +/- 2 or 3 degrees of the target value.
    • We got rid of the shift registers. We tried to implement some proportional controls, and that made our code work even worse, so we gave up on ever implementing derivative controls.
    • We noticed that sometimes the first value was not particularly accurate, so we decided to throw away the first reading.
    • We also noticed that when there was no hard background behind her, she would always register that somebody was in front of her. So I added a case structure: if the measured value is greater than 180 (that is, if there is no close background), then just let 180 be the background; else, let the measured background be the background value. This makes the code more modular, since it works regardless of whether there is a close background or not.
    • I also connected the pink wires into the case structure. This really didn't change anything, at least from observation, but it makes the code more logical, since the pink wires show sequence.
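
Putting these refinements together, the final eye logic behaved roughly like the following Python sketch. This is a reconstruction, not a translation of the LabVIEW diagram, and the tolerance and the "short way around the circle" arithmetic are my way of writing it:

    def calibrate_background(read_ultrasound):
        """Measure the background distance once at startup.

        The first reading was often inaccurate, so it is thrown away, and
        the background is capped at 180 so the code also works when there
        is no close wall behind Wendy."""
        read_ultrasound()                # discard the first reading
        return min(read_ultrasound(), 180)

    def eye_motor_command(rotation, target, tolerance=3):
        """Return 'forward', 'backward', or 'brake' for the eye motor.

        The motor's cumulative rotation is reduced mod 360 so it reflects
        the angle on the circle, and instead of a single target value we
        accept a target range of +/- tolerance degrees."""
        error = (target % 360) - (rotation % 360)
        error = (error + 180) % 360 - 180   # take the short way around the circle
        if abs(error) <= tolerance:
            return "brake"
        return "forward" if error > 0 else "backward"

    # The eyes have spun to a cumulative rotation of 365 degrees (5 on the
    # circle) and the target is -30 degrees:
    print(eye_motor_command(365, -30))   # backward
    print(eye_motor_command(-29, -30))   # brake: inside the +/- 3 degree range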

Final iteration of code broken up into two parts because
it did not entirely fit on one computer screen.
The other case structures control the brows and lids.


After exhausting all of my time on the eyes, I did not really have that much time to invest in creative coding for the eyebrows and eyelids. My first iteration did the exact opposite of what I wanted (it closed the eyelids when somebody was there, and opened them when nobody was). My second iteration worked. The code for the eyebrows and the code for the lids are almost the same. They are proportionally controlled and move only based on whether any sensor is sensing somebody or no sensor is sensing somebody. I should note that the code assumes that the eyelids start closed. We ran into issues here when we reset Wendy without resetting her eyelids. However, I cannot think of a good solution to fix this - the NXT automatically resets the rotation to zero whenever it is restarted. I could have assumed that the eyes start open, but that makes no sense.
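
As a rough illustration of that proportional control (with made-up numbers for the fully-open rotation and the gain, since I am recalling the scheme rather than the LabVIEW constants):

    CLOSED = 0    # the code assumes the lids start fully closed, at rotation 0
    OPEN = 90     # illustrative fully-open rotation; the real value was tuned by hand

    def lid_motor_power(current_rotation, anyone_present, gain=0.5):
        """Proportional control for the eyelids (the eyebrow code is nearly identical).

        If any sensor is sensing somebody, drive toward OPEN; if nobody is
        there, drive back toward CLOSED. The power shrinks as the lids
        approach the target, so they ease in instead of slamming."""
        target = OPEN if anyone_present else CLOSED
        return gain * (target - current_rotation)

    print(lid_motor_power(0, True))     # 45.0: somebody arrived, lids open
    print(lid_motor_power(45, False))   # -22.5: they left, lids drift closed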


Jaw:

So, I was by no means responsible for the jaw. However, I did help out with the coding. [Also, all of the sound recordings are my voice.] Hande coded all of the individual phrases, moving the jaw to mimic the motion that a human's jaw would make. I helped her integrate all of the phrases into one file, "mouthGo.vi". Here is the design process from after Hande coded all of the individual phrases until we finished:

  • We needed to hook the jaw up to some kind of sensor so her speech wouldn't be completely routine. We could not use the three ultrasound sensors, because each NXT only has room for running 3 motors, and the NXT hooked up to the ultrasound sensors was already hooked up to 3 motors (eyes, eyelids, eyebrows), so the jaw had to run off a separate NXT that could not see those sensors. So, we decided to use a microphone: we wanted to try a different sensor, our options at the last minute were limited, and it was fun to try out a new sensor. The microphone also let us have three states: "not present", "quiet but present" and "present and talking".
    • "not present" -> do nothing
    • "quiet but present" -> say "come closer"
    • "present and talking" -> start a conversation
  • Next we tested the microphone. We found that the microphone was extremely sensitive, and the difference in background noise between different places made it impossible to have an absolute range for "quiet but present" and "present and talking". So, we used the difference between the current value and the background value as input, rather than just the current value. We also noticed that the variability in any given place was about 20 units, so we set "quiet but present" to be 15-30 above background, and anything above that to be "present and talking."
  • Finally we combined everything into one big while loop. Every time around the loop, we measure the current sound, subtract off the background value, and wire that number into a large case structure. We had 3 cases corresponding to the three states defined above.
    • "Not present" was empty.
    • In "quiet but present" we pasted in the code for "come closer."
    • "Present and talking" had the largest amount of code. We had two case structures: one for the greeting and the other for the talking. However, we weren't sure what to put in the case structure. My first thought mod-ing a random number by 2. But, I couldn't find a random number so I used the sound, since this number was random enough. So, we hooked up this number mod two to the greeting block. If the number was equivalent to 0 mod 2, then she says "hi", else she says "hello." She then waits for awhile for the person to talk back to her. Then if the floor of the number divided by 2 was equivalent to 0 mod 2, then she says "hmm... interesting" else she says "tell me more."

First half of the final code


Second half of the final code



I've embedded some tips above, but here are a few more general ones.
  • LabVIEW lets you label your code, and everything is a lot clearer if you use that feature. I wasted hours and hours looking for a bug that probably could have been avoided if I had labeled everything.
  • Before going on an intense bug hunt, first check that all of your ports are labeled correctly. Incorrect or forgotten port labels caused the majority of our bugs. Sometimes it is execution, not design.
  • Note that the "clean-up" button does not always clean up the same way. In addition, with our final version of the code, pressing that button repeatedly swaps all the elements around and sometimes makes the area larger or smaller, which makes it confusing to resume working exactly where you left off. So, be very careful right after you press that button, so you don't end up changing a large chunk of code. I did this; it took me hours to undo.
  • It's easier to code things in chunks in different files and then copy the contents of each of those files into a new file. Hande did this with the code for the jaw, where each phrase was coded in a different file and combined in the final file. Christine and I did this in the eye code by coding up the brows and lids separately from the eyeballs and then pasting everything together in a final file.
[As a side note, the coding part of this project reminded me of this comic: http://xkcd.com/844/. ]