Robert Whitney
Welcome to my Fast Robots page! I'm a Master of Engineering student in the Cornell University Graduate School. I've always been fascinated by robots, airplanes, rocketships, drones, anything that moves fast! Unsurprisingly, I'm thrilled to be taking this class where I will develop an autonomous RC car for localization, mapping, path planning, and some cool stunts too.
The purpose of this lab is to set up and become familiar with the Arduino IDE and the Artemis board. After this lab, I will be able to program my board, blink the LED, read/write serial messages over USB, display the output from the onboard temperature sensor, measure the loudest frequency recorded by the Pulse Density Microphone, and run the board from a battery instead of my computer.
The first part of lab 1 was simple because I already had the Arduino IDE installed on my computer and am familiar with how these devices work. I uploaded the “Blink” sketch from the examples folder to ensure that sketches could be uploaded to the Artemis. From previous experience, I know that the upload is successful when the RX and TX lights on the Artemis blink rapidly while the sketch is uploading. To double-check that the code was working properly, I changed the timing of the blinking, uploaded the new sketch, and observed the light blinking faster as expected.
The “Serial” example worked as expected, parsing inputs sent to the Artemis and using them to display messages on the Serial Monitor. I had to change the Serial Monitor's baud rate to match the sketch before the messages displayed properly.
The “analogRead” example worked as well, although not all of its functionality could be exercised. The temperature sensor reported gradually higher temperatures as I held my thumb to the board. The pin being read as an analog input, however, was left floating, as expected, since it was not grounded or connected to any signal.
The “microphoneOutput” example sketch also worked as described, returning the frequency of the loudest sound. When whistling loudly, I could change the output on the serial monitor by changing the pitch of my whistling over a range of roughly 100 to 2500 Hz.
To prepare for testing on battery power, I inserted an “if” statement to turn on the onboard LED (pin 19) if the loudest frequency was above 2000 Hz and turn it off otherwise. I also had to set pin 19 as an output in the setup function. This modification worked as expected: I could whistle a low tone while watching the serial monitor, slowly raise the output above 2000 Hz, and watch the LED turn on.
I also unplugged the Artemis from the USB cable and plugged in the battery. The power LED turned on, indicating the sketch was probably running. I tested the sketch with whistling again, this time with no serial monitor to check the frequency. Again, the LED turned on when I whistled a high enough frequency and turned off when it was below the threshold.
The Artemis board and a computer are capable of low-latency, moderate-throughput wireless communication via Bluetooth LE. In this lab, you will expand a basic application for wireless communication over a Generic Attribute (GATT) framework on top of robust BLE stacks.
After getting a reliable bluetooth connection, the round-trip latency of the link was tested. After changing a few lines in main.py and running it, the VM terminal output the response time for sending messages to the Artemis board over bluetooth.
The latency of the wireless messages was most commonly between 180 and 210 ms, although some bytes were sent and received in as little as 160 ms and as much as 250 ms.
To test the latency of a wired serial connection, a short script was written in Arduino and another in Matlab. The Arduino script waits to receive a single 'ping' character ('p') over a serial port set to 115200 baud before sending a single-character response.
while (Serial.available()) { //wait for ping
if (Serial.read() == 'p') Serial.println('r'); //send response
}
The Matlab script logs the time between sending the initial 'ping' and finishing parsing the response from the Artemis.
tic %start timer before the ping is sent
write(s,'p','char'); %send ping char
msg = readline(s); %wait for response and read it
tResponse(i) = toc; %stop timer and save the round-trip time
The latency of a wired serial connection over 1000 trials was significantly lower compared to that of the Bluetooth connection. The round-trip latency for sending a single char and then parsing the response averaged 4ms, though it must be noted that only one char was sent at a time.
To check how many bytes are in the data packet, Serial.println(l_Rcvd); was added to the Arduino script after the other print statements. l_Rcvd is the total length, in bytes, of the packet being sent. It continuously printed 99, indicating the ping-pong exchange was sending a full packet back and forth every time.
Next, I used the same data structure to request a float with the VM and send a response with the Artemis. For the VM, I simply uncommented await theRobot.sendCommand(Commands.REQ_FLOAT). For the Arduino script, I had to insert a float (I chose pi to 7 significant digits) into res_cmd->data as well as specify res_cmd->command_type and the total number of bytes in the packet (1 command byte + 1 length byte + 4 float bytes = 6 total bytes).
After running both programs, the VM terminal output "3.141592502593994" instead of the float I sent, "3.141596". The two values are as close as 4-byte floats allow without being equal.
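This truncation can be reproduced off-board. Below is a minimal Python sketch (not part of the lab code) that round-trips a value through a 4-byte float using the struct module; the exact value sent in the lab may have differed slightly, but the rounding behavior is the same.

import struct

value = 3.1415926                           # example double-precision value
packed = struct.pack('<f', value)           # encode as a 4-byte little-endian float
recovered = struct.unpack('<f', packed)[0]  # decode back into a Python (double) float
print(recovered)                            # 3.1415925025939941 -- precision lost to 4 bytes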
Last, the data rate was tested by sending 32 and 64 bit integers to the VM. The data structure was packed with either a uint32 or a uint64 then sent off while keeping track of the time in between.
In the VM program, I only had to uncomment await theRobot.testByteStream(25).
The Artemis sent packets averaging about one every 10.7 ms, regardless of whether they contained uint32 or uint64 values.
After completing this lab, I see the role that this bluetooth capability will play as the project progresses. The latency and data rate of the connection are far too limited for a fast robot to react dynamically while relying on the VM for the majority of its calculations. For example, I expect that for the inverted pendulum lab to work, the Artemis will have to sense, compute, and actuate 20-100 times per second to achieve closed-loop stability, an update frequency too fast to be doing much computation over bluetooth. Thus, the Artemis and the VM must work together and compensate for each other's weaknesses to solve whatever task is issued.
Before we start taking the car apart, we can leverage the remote control to familiarize ourselves with the capabilities of the hardware. In later labs we will seek to replicate manual behavior with autonomous control and, in some cases, exceed what manual control can achieve for planar navigation and more dynamic (‘flippy’) behaviors. This is a fairly open-ended lab, but the goal is to methodically document the car in any way that might be useful later on.
There are important physical characteristics of the car that will be essential for any model we intend to implement in future labs. A more complete model accelerates iterative testing and will simplify future labs.
I started by determining the relevant dimensions with electronic calipers. First, I removed the plastic top as I expected it would get in the way of future labs. The wheelbase (front to rear tires) is W = 80mm and the track (left to right tires) is T = 105mm. The diameter of each tire is D = 77mm, although this may change with loading as the wheels are deformable.
The front and rear axles are equally far (40mm) from the geometric center of the body of the car whose length, width, and height are 119mm, 65mm, and 45mm respectively.
The battery lasts about 20 minutes of continuous use involving accelerating, braking, spinning in place, doing tricks, and general driving. I consider the battery "fully exhausted" when the motors can no longer overcome the rolling resistance on carpet to move the robot. Each battery takes about 4 hours to charge from dead (4.8V) to fully charged (5.6V).
The performance of the car is dependent on the surface it is driving on, the driving mode (fast or slow mode), and the battery. On tile and hardwood, the car has no problem accelerating, turning in place, and doing tricks, even on low battery. However, the car struggles to overcome friction when on carpet. Even on a full battery, turning in place is not possible so a K-turn maneuver is required to change the heading.
The car can do a few cool stunts when it has enough battery. By accelerating or decelerating very suddenly, I can flip the car over itself. I am unable to balance it on two wheels, but I can get it to flip and stay up for an instant. Additionally, I can somewhat reliably get the car to stop inches in front of a wall starting from full speed.
Here is a video of the car doing several very fast flips at full throttle.
And here is a video of me trying to hit 4 targets as quickly as I can and return to the starting point.
I performed several tests to get a rough estimate of the speed, acceleration, and braking distance. All tests were performed with a full battery on the tiled floor where I expect to be running the robot for the remaining labs. A slow motion camera recorded a stopwatch and several trials of the robot driving across the 12" x 12" tiles. Speed was tested by getting the car up to speed and recording the time for the car to cross 4 tiles (4 feet). The average speed while in fast and slow mode over 5 trials was 8.85 ft/s (2.69 m/s) and 6.50 ft/s (1.98 m/s) respectively. Acceleration was tested by recording the time it takes for the car to cross 1 tile, starting from rest and assuming constant acceleration equal to a = 2d/t^2 where d is the distance traveled in time t. The average acceleration while in fast and slow mode over 5 trials was 20.0 ft/s^2 (6.10 m/s^2) and 6.49 ft/s^2 (1.98 m/s^2) respectively. I have included a video showing how I did the acceleration testing below. I was surprised at the level of repeatability of these tests, although I would expect very different numbers if I used a battery closer to empty.
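As a sanity check on the constant-acceleration estimate, the formula can be inverted to recover the implied crossing time. The short Python sketch below is illustrative only; the raw stopwatch times from the trials are not reproduced here.

import math

d = 1.0                        # distance of one tile, ft
a_fast = 20.0                  # reported average acceleration in fast mode, ft/s^2
t = math.sqrt(2 * d / a_fast)  # crossing time implied by a = 2d/t^2
print(round(t, 2))             # ~0.32 s to cross one tile from rest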
I performed another two tests for braking. First, I let the car get up to speed in fast mode, then switched to slow mode as I reversed the throttle on both sets of wheels. Next, I performed the same test, but let the car get up to speed in slow mode before reversing the throttle. The first and second tests covered 2-3 feet and 1-1.25 feet, respectively, between initiating braking and coming to rest. The repeatability of the braking tests did not compare to that of the speed and acceleration tests; the discrepancy comes from the difficulty of reversing the throttle at the exact moment the car begins to cross a marked tile.
The simulator will be critical for iterative testing in future labs. We can use it to quickly test path planning, mapping, and localization tools before implementing them on the real robot. After being set up in the VM, some tests were performed to check the basic functionality of the simulator.
The robot can be controlled using the keyboard teleoperation tool, which varies the linear and angular velocities independently to trace straight lines or graceful curves, turn in place, reverse, etc. The minimum linear and angular velocity is zero, and the maximum seems to be unbounded, although this is difficult to test because the robot runs into a wall before reaching its maximum speed.
Upon striking an obstacle, the robot is replaced with a warning sign and will not move or respond to commands. It must be moved manually with the cursor before it will continue moving at its previous linear and angular velocity. The robot can also be moved with the cursor at any time, not just when it strikes a wall.
The purpose of this lab is to move from manual to open-loop control of the car. By the end of this lab, the car was able to execute a pre-programmed series of moves using the Artemis board and the motor driver communicating over the I2C interface.
To begin, I removed the existing circuit board and replaced it with the provided motor driver. I was able to undo one of the Qwiic connectors, feed it through the old hole for the "On" button, and reassemble the connector.
Before closing up the car, I installed the Serial Control Motor Driver library and uploaded an example sketch to verify that I could control each motor individually.
First, I wanted to determine the minimum motor speed command that would actually turn the wheels. I modified the example sketch to iterate the motor speed from 0-255, pausing for half a second between commands and printing the motor speed to the Serial Monitor. The wheels would overcome friction and begin spinning reliably at 60-65 LSB out of 255. Both wheels always started spinning within 1-2 LSB of each other, indicating that the motor drive trains are very symmetric and will probably drive straight under open-loop control.
It must be noted that this minimum motor speed was determined from rest with no load. The actual minimum speed command will be higher if the car were driving on the floor with rolling resistance and lower if the wheels were already spinning.
I then modified the example sketch to set both wheels in forward drive for one second, pause for one second, then set both wheels in reverse for one second, then pause for another second. I let the robot run this loop on the tiled floor and observed how straight its trajectory was.
Next, I modified the example sketch with a sequence of motor speeds and delays. Ideally, the robot will drive forward, trace out a square, drive back to where it started, then spin in place. With no feedback to correct itself, achieving this ideal trajectory would be impossible, but I just wanted to test how it turns by itself.
Finally, as suggested on CampusWire, the "PSWC" pins on the Artemis can serve as a power switch. I soldered pins to them and added a removable jumper. Removing/replacing the jumper to turn the Artemis on/off is far more convenient than plugging and unplugging the battery. Below is one picture with the jumper in place and the car off, and one picture with the jumper removed and the car on.
I also did some open-loop testing in the simulator. The instructions in the lab manual were followed to set up the Jupyter notebook and simulator. I modified the code to trace a perfect square with the robot. This was easy because the robot is given precise linear and angular velocity commands. I made four equally long, straight paths interspersed with four 90-degree turns to form a perfect square and return the robot to where it started.
The purpose of this lab is to enable the robot to perform obstacle avoidance. To do this, we first need to equip the robot with distance sensors - the further the robot can see and the more it can trust a sensor reading, the faster it is able to drive. Once you have your sensors working, you will mount them on your robot and attempt fast motion around your room without crashing into obstacles. You will use the VM simulator to implement similar behavior.
First, I installed the library and opened a few example sketches. The I2C address matched, so I began taking readings immediately. The sensor measures the intensity of the returning IR beam it projects into the environment. Consequently, it reacts very differently depending on the texture and color of the surface it is viewing.
I performed some simple tests to observe how well the sensor could be used as a distance sensor. I recorded the sensor measurements for 7 different materials at 3 fixed distances each. When comparing measurements at the same distance, glossy, smooth, lighter-colored materials like white paper returned the IR light the most, while dull, darker materials returned it the least. Over the 3 distances for each material, the relationship between sensor measurement and distance was more linear for the dull, darker materials than for the glossy, lighter ones. I also tested the sensor in a dark room and while shining a flashlight at it, but neither had a noticeable effect.
The update rate of this sensor was very quick, coming in at less than one millisecond. However, the sensor performed poorly for distances beyond 200mm and was very noisy for distances under 50mm. For this reason, I decided against using this sensor for this lab: at high speeds, the robot would have too much inertia to begin slowing down with only 200mm to an object in front of it. Other reasons were the non-linear relationship between measurement and distance and the variability of the measurement with respect to material.
Despite its drawbacks, I see potential in this sensor. If I attach pieces of white tape radially to the inside of one of the black tires, I may be able to use it as an optical encoder. There is significant need for a speed sensor and this may suffice, but this will be investigated further in the next lab.
I followed a similar procedure to test the ToF sensor. Interestingly, the I2C address on the datasheet (0x52) did not match what the I2C scanner found (0x29); 0x52 is the 8-bit write address, which corresponds to the 7-bit address 0x29. Nonetheless, I used the scanner address and the sensor worked with no issues. I soon discovered that the range and dependability of this sensor were impressive.
My range tests for the ToF sensor were performed over 1-5 ft and with only 3 materials. The sensor was accurate to within an inch and did not depend on the type of material or lighting conditions. It has an operating range of ~5mm to 6m and updates every 100ms, although I changed this to 50ms for a more reactive robot. I also tested it with the "Status and Rate" example sketch, which gives indicators of measurement quality. I waved a notebook in front of it quickly to simulate a fast-moving object, but the sensor rarely returned an invalid measurement.
I also performed the calibration procedure using the example sketch; the measured offset was 11mm. I decided to use this sensor as the primary range finder for the robot because its range, update frequency, and accuracy were suitable for obstacle avoidance.
After sensor testing was complete, I mounted a small piece of cardboard to the front of the robot with hot glue and tape, then taped both sensors to the front and wired them to the motor driver and the Artemis.
The sketch for obstacle avoidance was broken into 5 simple sections: initialize, sense, filter, decide, actuate.
I combined code snippets from examples and initialized the global variables and constants I would need to decide when an obstacle was too close, how fast to drive, how far away an obstacle was, etc.
I continually took ToF measurements without clearing the interrupt, to save time, because doing so had no noticeable effect on the measurements. When a measurement was ready, I subtracted the offset from the raw measurement and saved it in measToF. Then, I filtered the measurements using a complementary filter, weighting each new measurement by alpha and the previous filtered value by (1 - alpha). alpha was set to 0.25 because the measurements were very reliable. In hindsight, I think the system would work without filtering the data.
Once the filtered range dropped below closeEnough, the motors were set to turn right; otherwise, they were set to drive straight ahead. I tested some avoidance schemes that used delay() to perform braking and turning, but none worked as well at higher speeds as the simple approach I settled on.
For testing obstacle avoidance, I gradually increased the speed of the robot, which presented new problems. At high speeds, the robot had too much inertia to turn to the right before hitting an obstacle. I solved this by increasing closeEnough, although that created problems when the robot was in an area where most obstacles were closer than this threshold, causing it to just turn in place. All told, the final version of this code drove the motors at full speed and usually avoided hitting most obstacles. To make it better, I would implement the optical encoder with the IR sensor I mentioned and vary closeEnough with the speed of the robot.
I used a similar code structure for the simulator. The simulated range sensor measurements had no noise, so the filtering step was unnecessary and was not included. The robot needs to turn enough so that when it is done turning and continues forward, its trajectory will encounter an object within closeEnough before the side of the robot hits a wall. The robot is a small square, about 0.3m per side.
I noticed that the robot will hit a wall and stop when it moves along the wall at a shallow angle. I added a delay after the turn command to lengthen the minimum turn angle when it sees an obstacle. This did not solve the problem, but certainly helped. I'm not sure how to implement a perfect obstacle avoidance algorithm without additional sensors.
The purpose of this lab is to set up your IMU (to enable your robot to estimate its orientation). Note that there are several ways to compute orientation estimates; through this lab you should understand the trade-offs of each approach. Once the IMU is implemented, you will mount it on your robot and attempt to do PID control on the rotational speed. Enabling reliable rotation will help you implement navigation and localization in labs 8-9; enabling reliable and slow rotation will help you do ToF rotational scans to compute a map in lab 7.
First, I secured the IMU to the robot chassis near the center of mass and familiarized myself with the accelerometer, gyroscope, and magnetometer.
For the accelerometer, I used the equations from class to convert the raw data into pitch and roll. At {-90, 0, 90} degrees of pitch and roll, the accelerometer estimate was accurate to within 1-2 degrees. The deviations are a combination of white noise, inaccuracies of the accelerometer itself, and the fact that the IMU is not actually oriented at exactly {-90, 0, 90} degrees. The average deviation for each was almost negligible, but I adjusted the pitch by a 1.25-degree constant offset to bring its average output closer to its expected value. Any acceleration not due to gravity induces large errors in the accelerometer angle estimates.
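For reference, the conversion amounts to a pair of arctangents over the measured gravity vector. The sketch below uses one common axis convention; the exact equations from class may differ in sign or axis ordering.

import math

def accel_to_pitch_roll(ax, ay, az):
    """Estimate pitch and roll (degrees) from accelerometer readings.
    Assumes the only acceleration present is gravity."""
    pitch = math.degrees(math.atan2(ax, az))  # rotation about the y-axis
    roll = math.degrees(math.atan2(ay, az))   # rotation about the x-axis
    return pitch, roll

print(accel_to_pitch_roll(0.0, 0.0, 1.0))   # flat: (0.0, 0.0)
print(accel_to_pitch_roll(1.0, 0.0, 0.0))   # pitched to 90 degrees: (90.0, 0.0)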
I also analyzed the frequency content of the accelerometer output. I observed the raw output with the IMU at rest and fed it into the Fast Fourier Transform from the lab instructions. The output showed a small peak near 225 Hz, so my low-pass filter for the accelerometer should be designed to reject noise at that frequency.
For the gyroscope, I integrated the raw data through time to get yaw, pitch, and roll. Since the error in each measurement accumulates, each angle estimate drifts by 1-3 degrees per second. This effect was mitigated by subtracting the average offset of the raw data measured while the IMU is stationary. The angles computed from gyroscope data are also more accurate in the short term and less susceptible to noise.
I created a complementary filter that is both stable and accurate by combining the estimates from each sensor in fixed proportions. The figure below shows the robustness of the filter in comparison to each sensor's individual estimate when the IMU is agitated in roll. The green line is the complementary filter estimate, blue is the accelerometer (notice the noisy output), and red is the gyroscope estimate (notice the accumulated error).
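A minimal sketch of the blend (illustrative, not the code running on the Artemis): the gyro-integrated angle is trusted over short timescales while the accelerometer angle slowly pulls the estimate back to correct drift. The weighting constant alpha here is an assumed value.

def complementary_filter(angle_prev, gyro_rate, angle_accel, dt, alpha=0.9):
    """Fuse a gyro rate (deg/s) and an accelerometer angle (deg) into one estimate.
    alpha near 1 trusts the gyro integration; (1 - alpha) slowly corrects drift
    toward the accelerometer angle."""
    gyro_angle = angle_prev + gyro_rate * dt          # short-term: integrate the gyro
    return alpha * gyro_angle + (1 - alpha) * angle_accel

# Example: a stationary IMU with a slowly drifting gyro settles near the accel angle.
angle = 0.0
for _ in range(500):
    angle = complementary_filter(angle, gyro_rate=0.5, angle_accel=0.0, dt=0.01)
print(round(angle, 3))   # ~0.045: a bounded offset instead of unbounded gyro drift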
For the magnetometer, ...
Before implementing feedback control, I performed some open-loop testing to get a better feel for how the robot spins.
Although I was supposed to use bluetooth for wireless communication, I had several issues with the code and the VM. I found a workaround by using a telemetry radio connected to the other UART on the Artemis. This setup allowed me to send messages over Serial1 to my desktop, where the ground module of the telemetry radio was plugged in. The serial messages were parsed by Matlab and contained the estimated yaw rate from the gyroscope and the current motor command.
The open-loop script made the motors spin in opposite directions at increasing speed until maximum speed, then at decreasing speed until the robot stopped. By observing the response, I was able to determine the deadband (the motor value below which the motors don't spin) of each motor, the hysteresis (the deadband may depend on whether the motor command is increasing or decreasing), and the maximum rotational speed.
With the ramp response, I estimated that the lowest speed that the open-loop robot would rotate about its own axis was about 360 deg/s.
The structure of the closed-loop script was similar to that of the object-avoidance program: sense, compute, actuate. The PID library I used in this lab was the same I had used for other projects, so I was comfortable with how it worked.
I decided to implement a PI controller to avoid the pitfalls associated with the D-term and high-frequency noise in the measurement. To begin, I set the reference of the controller to 360 deg/s and used low P and I gains. With little tuning, I was able to get the average yaw rate right around 360 deg/s.
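For reference, the control law is a proportional term plus an integral term on the yaw-rate error. The sketch below is a generic discrete PI update written in Python, not the Arduino PID library used on the robot; the gains and limits are placeholders.

def pi_step(setpoint, measurement, integral, dt, kp=0.5, ki=2.0, u_min=0, u_max=255):
    """One discrete PI update for yaw-rate control. Returns (motor command, new integral)."""
    error = setpoint - measurement          # yaw-rate error in deg/s
    integral += error * dt                  # accumulate error over time
    u = kp * error + ki * integral          # PI control law
    u = max(u_min, min(u_max, u))           # clamp to the valid motor command range
    return u, integral

u, integ = pi_step(360, 300, 0.0, dt=0.02)  # example: 60 deg/s of error on one 20 ms step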
I continued testing and tuning, progressively decreasing the reference yaw rate and increasing the gains appropriately.
This worked until I approached 180 deg/s, where the steady-state motor command is close enough to the deadband of the right motor that the motor would not spin.
To attempt to further reduce the minimum reliable yaw rate, I used the same program to turn only one motor (the left, stronger one) while the right motor was set to a low value (40) to avoid braking. This worked well, and I was able to reduce the minimum reliable yaw rate to 120 deg/s (one rotation every 3 seconds). The downside of this method is that the robot no longer rotates about its own axis, but instead moves in a circle (a reliable one) with a radius of about 8 inches.
Below is a video of this program working. From 0:06 - 0:23 (17 s), the robot completes about 6 rotations, i.e. roughly one rotation every 3 seconds, or about 120 deg/s.
The intention of controlling the yaw rate is to perform mapping using a ranging sensor. A slower yaw rate is desirable because it will produce a map with finer resolution per rotation and because the uncertainty associated with the ranging sensor will be lower.
The ToF sensor has a minimum sampling time (the time during which the measurement is actually taken) of 20ms in short mode and 33ms otherwise. At the minimum reliable open-loop yaw rate of 360 deg/s and a distance of 0.5m from a perpendicular wall, the car will rotate ~0.125 rad (~0.207 rad if not in short mode) during the sampling time. Using trigonometry, the distance between the sensor and the wall changes by almost 4mm (11 mm if not in short mode). The uncertainty in the measurement is made far worse when the robot is originally oriented at 45 degrees to the wall. In this case, the distance between the sensor and the wall changes by 108 mm! (208 mm if not in short mode!).
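These numbers follow from simple trigonometry: for a flat wall at perpendicular distance d, with the beam at angle theta from the wall normal, the range is d/cos(theta), so rotating by d_theta during the sample changes the range by d*(1/cos(theta + d_theta) - 1/cos(theta)). A quick check of the figures above, assuming d = 0.5 m:

import math

def range_change(d, theta, d_theta):
    """Change in measured range (m) to a flat wall at perpendicular distance d (m)
    when the beam rotates from theta to theta + d_theta (rad) off the wall normal."""
    return d / math.cos(theta + d_theta) - d / math.cos(theta)

d = 0.5                                   # perpendicular distance to the wall, m
for theta_deg in (0, 45):
    for d_theta in (0.125, 0.207):        # rotation during one sample at 360 deg/s
        dr = range_change(d, math.radians(theta_deg), d_theta)
        print(theta_deg, d_theta, round(1000 * dr), "mm")
# prints ~4 and ~11 mm at 0 degrees off-normal, and ~108 and ~208 mm at 45 degrees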
This level of uncertainty will be difficult to deal with because the ToF sensor flags measurements as invalid if they violate either of the two configurable parameters used to qualify measurements: Sigma, which represents the estimated standard deviation of the measurement (default tolerance 15mm), and Signal Rate, which represents the amplitude of the reflected signal.
Assuming the robot is rotating at 120 deg/s about its own axis, the distance between the sensor and the wall changes by only 15 mm (54 mm if not in short mode) in the worst case scenario (45 degrees to the wall). This great reduction in uncertainty makes mapping with the ToF far more feasible. I expect this will be investigated further in the coming labs.
The remaining section of the lab was very simple. I used the simulator and a plotter to command the robot in an environment, record its pose according to odometry, and compare that to its actual pose.
Clearly, the errors associated with the measurements and the model itself induce deviations in the dead-reckoning estimate of the robot's position. These deviations are never corrected to bring the dead-reckoning pose estimate back to the true pose, making it impossible to get a reliable pose estimate from odometry alone.
In this lab, I laid the groundwork for implementing grid localization in simulation for the next lab. I also did some rudimentary mapping with the physical robot and the ToF sensor.
To begin, I familiarized myself with the classes and functions that would be required to implement a Bayes filter in the next lab. Next, I sent the virtual robot on its pre-planned path through the simulation environment. Below is a figure of the environment and a figure with the true trajectory and the odometry-estimated trajectory shown in green and blue, respectively.
Last, I developed pseudocode for the Bayesian Filter. The skeleton code provided to us included functions with inputs and outputs to help organize the final implementation.
Using geometry and the odometry model defined in class, I wrote out the compute_control function to compute the control action sequence (initial rotation, translation, final rotation) that moved the robot from a previous pose to its current one.
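A minimal sketch of that geometry, assuming poses are (x, y, yaw) with yaw in degrees as in the course skeleton (angle normalization to [-180, 180) is omitted here but needed in practice, as noted in the next lab):

import math

def compute_control(cur_pose, prev_pose):
    """Recover (rot1, trans, rot2) that takes prev_pose to cur_pose.
    Poses are (x, y, yaw) with yaw in degrees."""
    dx = cur_pose[0] - prev_pose[0]
    dy = cur_pose[1] - prev_pose[1]
    heading = math.degrees(math.atan2(dy, dx))     # direction of travel
    rot1 = heading - prev_pose[2]                  # turn to face the new position
    trans = math.hypot(dx, dy)                     # straight-line distance traveled
    rot2 = cur_pose[2] - heading                   # turn to the final heading
    return rot1, trans, rot2

print(compute_control((1.0, 1.0, 90.0), (0.0, 0.0, 0.0)))   # (45.0, 1.414..., 45.0)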
Using the previous function, I also wrote the odom_motion_model function, which returns the probability that the robot is at a new state given its previous state and the control input over the last timestep.
I also wrote the sensor_model function, which is responsible for returning the likelihood of the actual observations given the observations the robot would expect to make from a given pose.
The remaining prediction and update functions each require a set of 3 nested for loops to iterate through every grid cell in the 20x20x18 grid. For each cell, the belief is predicted using the odometry data, then updated using the measurement data. After the update step, the entire grid must be normalized or all of the probabilities will converge to zero. I wrote a short function to normalize a 1-D array as well.
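One way those loops can be organized is sketched below in Python. This is a simplification: pose_of, motion_prob, and expected_of stand in for the helper functions described above, the sensor noise sigma is a placeholder, and the prediction conditions on a single most-confident previous pose, as described in the next lab.

import numpy as np

GRID = (20, 20, 18)                      # (x, y, yaw) grid dimensions

def gaussian(x, mu, sigma):
    """Unnormalized gaussian likelihood, applied elementwise to arrays."""
    return np.exp(-0.5 * ((np.asarray(x) - np.asarray(mu)) / sigma) ** 2)

def normalize(grid):
    """Rescale so the grid sums to 1; without this the probabilities shrink toward zero."""
    return grid / np.sum(grid)

def prediction_step(best_prev_pose, u, pose_of, motion_prob):
    """Fill bel_bar with the probability of reaching each cell's pose from the most
    confident previous pose under the control u = (rot1, trans, rot2)."""
    bel_bar = np.zeros(GRID)
    for idx in np.ndindex(GRID):          # 3 nested loops over the 20x20x18 grid
        bel_bar[idx] = motion_prob(pose_of(idx), best_prev_pose, u)
    return normalize(bel_bar)

def update_step(bel_bar, z_actual, expected_of, sigma=0.1):
    """Weight bel_bar by how well each cell's expected 18-beam scan matches z_actual."""
    bel = np.zeros(GRID)
    for idx in np.ndindex(GRID):
        z_expected = expected_of(idx)     # 18 ray-cast ranges for this cell's pose
        bel[idx] = np.prod(gaussian(z_actual, z_expected, sigma)) * bel_bar[idx]
    return normalize(bel)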
Before beginning any actual mapping, I modified a copy of the bluetooth example to send measurement data over bluetooth to the VM, where it could be post-processed into a map. This new program combined the functionality of the previous labs, including ToF sensor ranging, reading and filtering gyroscope measurements into a reliable yaw estimate, using a PI controller to track 120 deg/s of yaw rate with one wheel, and sending ToF measurements and yaw estimates to the VM. The robot was programmed to drive in one circle while sending data, then stop.
For the mapping part of the lab, I set up boxes to make an enclosure with a few interesting features. I made sure to have an obstacle in the middle of the floor in addition to a 'hole' in the wall. Below is the actual environment as well as a sketch of the environment.
The first plots looked better than I expected (I was not expecting them to even vaguely resemble the actual environment). I thought that the combination of the uncertainty in the yaw estimate, the fact that the robot is not rotating about its own axis, and the sampling frequency of the ToF sensor would make the map unrecognizable, but I was wrong.
Next, I post-processed the data in Matlab. I assumed that the robot moved in a perfect circle with a radius of 6", starting at the top of the circle and moving counterclockwise. I then transformed the measurements from the sensor frame to the robot frame, and finally to the inertial frame. I assumed that the inertial frame's origin was located where the robot began its trajectory.
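A sketch of that chain of transforms in Python (the Matlab code itself is not reproduced; the sensor offset here is an assumed value, and the robot position would come from the assumed circular trajectory):

import math

def to_inertial(range_m, yaw_deg, robot_x, robot_y, sensor_offset=0.07):
    """Transform a forward-looking ToF range into an inertial-frame (x, y) point.
    sensor_offset (m) is an assumed forward offset of the sensor from the robot center."""
    # sensor frame -> robot frame: the beam points along the robot's +x axis
    xr = sensor_offset + range_m
    yr = 0.0
    # robot frame -> inertial frame: rotate by the yaw estimate, then translate
    yaw = math.radians(yaw_deg)
    xi = robot_x + xr * math.cos(yaw) - yr * math.sin(yaw)
    yi = robot_y + xr * math.sin(yaw) + yr * math.cos(yaw)
    return xi, yi

print(to_inertial(1.0, 90.0, 0.0, 0.0))   # a 1 m reading at 90 deg yaw lands near (0, 1.07)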
The results of post-processing were very noticeable. First, the features of the environment are clear: I can see corners, straight walls, the obstacle, and the 'hole' in the environment. Second, the inconsistencies are less noticeable, namely the range discrepancy at 0 degrees. This point in space is ranged twice, once at the beginning of the robot's turn and once at the end. Since the robot does not move in a perfect circle, the wall appears to have an object protruding from it when it is in fact straight.
I spot-checked the map with a tape-measure. Most major walls and features are accurate to within 10cm and are straight with no protrusions.
I converted the environment into a format the simulator could use for visualization in the plotter tool.
The purpose of this lab is to implement the Bayesian Grid Localization algorithm we've been discussing in lecture. The implementation in this lab is a precursor to the localization algorithm that will be used on the physical robot in the next lab.
Programming the Bayes filter was simple considering the general structure was already laid out in the previous lab.
For compute_control, I only had to make a small modification to my previous implementation for it to work. The angles for turning were not bounded to [-180, 180) degrees, which could have led to problems around the discontinuity, so I used .normalize_angles() to make it more robust.
odom_motion_model didn't require any modifications.
For prediction_step, I had the general structure correct, but misunderstood what the inputs and outputs were supposed to be. We need the previous and current pose from odometry to calculate, using compute_control, what control inputs were applied over the last timestep. We also need the pose we were most confident of in the last timestep, best_prev_belief. The prediction probability of each cell in bel_bar is then set, using odom_motion_model, to the probability of reaching that cell's pose from best_prev_belief under the computed control input. Poses near the pose the robot would reach from best_prev_belief under those control inputs get high probability compared to poses on the other side of the environment.
update_step had a similar structure to the prediction step. For each of the 18 measurements, the actual measurement z_actual is compared to the expected measurement from each cell. The belief bel of a pose is updated by multiplying the product of the array of measurement probabilities with the bel_bar of the corresponding cell. Thus, if the actual and expected measurements are similar AND the bel_bar of that cell is high, that pose gets a high probability.
I added a probability tolerance check to reduce computation time. If any of the elements in the measurement probability array is less than 1e-40, the bel of that cell is set to zero. The tolerance was chosen heuristically to reduce computation time without ruling out any highly probable poses. Typically, the algorithm only has to calculate the bel for 100-400 cells instead of the full 7200.
The filter was able to reliably track the trajectory of the robot. The entire trajectory took a little under 7 minutes, with the extra time due to computation. It took a little over a second to compute each prediction step and a little less than half a second to compute each update step. The figures below show the robot in the simulator and the plotter at 3 timesteps in the trajectory.
The actual trajectory is in green, the odometry estimate in blue, and the Bayes filter in yellow.
The prediction-step belief distribution is shown as shaded cells, where lighter cells have a higher probability of containing the robot. Occasionally, the prediction is completely off due to extremely noisy odometry information. However, the filter could usually rely on the sensor measurements to retain an acceptable pose estimate.
Over several trials, the filter was able to reliably track the trajectory of the robot with few deviations.
In this lab, I implemented the full Bayes filter that we have been working on. This required developing interfaces between several modules and proved challenging.
I began with writing a new Arduino sketch to control the robot and communicate with the VM. Using the example bluetooth sketch as the skeleton, I developed functionality to receive commands from the VM over bluetooth and send data back upon request.
Upon receiving REQUEST_SCAN, the robot would drive in a counter-clockwise circle at 120 deg/s using PID control while recording ToF measurements (as in the previous labs). The measurements were divided into 18 bins over the 360-degree scan (20 deg per bin), where each bin contained the average of all the ToF measurements corresponding to those 20 degrees. After completing one circle, the robot sent the 18-element array back to the VM.
scan_startingYaw is the yaw angle of the robot at the start of a scan (0 deg for the first scan). For a reason that has yet to be discovered, scan_startingYaw goes to nan in the middle of the second scan, so the robot only ever completes its first update and prediction steps.
For this lab, I used the proximity sensor as an encoder to keep track of how far the robot moves in straight lines. The Int pin on the proximity sensor was programmed to change state when the proximity reading changed enough, and I painted opposite quarters of the wheel with white acrylic paint to give the sensor contrast to pick up. Upon receiving REQUEST_ENCODER, the robot would send the current encoder count to the VM as an indication of how far the robot had moved. I also integrated the accelerometer data (with a low-end cutoff to reduce drift) to fulfill the same role and for comparison. Since I couldn't get the whole filter working for more than one step, and because of some issues I was having with the proximity sensor, I just used the integrated accelerometer data.
I also included a simple movement function. Since the movement commands from bluetooth are unsigned bytes, I had to convert them to signed bytes and then scale them appropriately.
For Python, the functions we had to implement ended up being simpler than I expected. get_pose() simply returns the odometry pose (stored in a global variable) in the map frame. perform_observation_loop sends a REQUEST_SCAN command to the Artemis and waits for the response.
set_vel(v,w,t) translates the linear and angular velocity commands v, w into left and right wheel velocity commands, then converts them to unsigned bytes to be sent over bluetooth. The function then waits the prescribed time t, sends a velocity command to stop the robot, then sends REQUEST_ENCODER to get the odometry data and update the odometry pose.
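A sketch of that conversion (the track width, speed scaling, and byte convention here are assumptions, not the exact values used on the robot):

def wheel_speed_bytes(v, w, track=0.105, max_speed=2.0):
    """Convert linear velocity v (m/s) and angular velocity w (rad/s) into
    left/right wheel commands packed as unsigned bytes for the BLE link.
    track is the left-right wheel separation (m); max_speed maps to full scale."""
    v_left = v - w * track / 2.0                       # differential-drive kinematics
    v_right = v + w * track / 2.0
    def to_byte(speed):
        scaled = int(round(127 * speed / max_speed))   # signed command in [-127, 127]
        scaled = max(-127, min(127, scaled))
        return scaled & 0xFF                           # reinterpret as an unsigned byte
    return to_byte(v_left), to_byte(v_right)

print(wheel_speed_bytes(0.5, 0.0))    # straight ahead: equal wheel bytes
print(wheel_speed_bytes(0.0, 2.0))    # spin in place: one forward byte, one reverse byte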
I was able to get the Bluetooth working in the Jupyter notebook after much trouble. My biggest challenges involved interfacing the synchronous and asynchronous parts of the code because I did not understand how python expected the code to be managed. Gradually, the functionality of the code came together. In the end, I could connect the Artemis to the Jupyter notebook with two-way bluetooth communication.
Using the Bayes filter module, I called initialize_bayes_filter() and the robot did an observation scan as expected. The result of the first observation of the filter is shown below, where the yellow dot is the filter's belief and the green dot is the actual position of the robot.
After the initialization, I called move_robot() to move the robot forward, then turn. Within move_robot, set_vel(v,w,t) sends the motor commands to the Artemis and updates the odometry pose.
The resulting change in odometry pose is fed into step_bayes_filter(), which wouldn't work properly because of the issues in the Arduino code.
Even with only the one update step, the filter localizes well and gets the initial position correct to within a few inches.
In this lab, I combined path planning with the Bayes filter localization from the previous labs to have the robot navigate to a goal. This required developing interfaces between several modules and proved challenging.
To begin, I had to set up a new environment for the physical robot, as I had traveled home for Thanksgiving. I am also in the middle of moving between houses, so I had lots of boxes to make an interesting environment. After using a measuring tape to map out the environment, I manually imported it into the VM as a list of start_points and end_points and as a binary occupancy grid.
The discretized grid has the same size cells as in previous labs (20cm x 20cm) for consistency. In the grid below, 1s represent occupied cells and 0s represent open cells.
Next, I considered options for a global planner. Due to time constraints, implementing my own algorithm was out of the question. I was familiar with Dijkstra's algorithm and found an implementation on this Github page. The repository also contained an implementation of A* with an example. Although time was not a huge constraint and the map I was using was not complex, I decided to use the A* implementation. It produced the same shortest path, but used a simple heuristic to guide the search toward the goal instead of doing an exhaustive search.
Next, I intended to use a local planner to have the robot follow a list of waypoints generated by the global planner. Since the global planner outputs the sequence of cells to get from the start to the goal, there can be a superfluous number of waypoints along the path. The actual list of waypoints still contains the start and goal cells, but skips cells according to the parameter cell_step_size (e.g., when cell_step_size is 2, the list of waypoints contains every other cell along the path, plus the start and goal).
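A minimal sketch of that downsampling (illustrative; the actual path format comes from the A* implementation):

def downsample_path(path, cell_step_size=2):
    """Keep every cell_step_size-th cell of a planned path, always retaining
    the start and the goal cells as waypoints."""
    waypoints = path[::cell_step_size]
    if waypoints[-1] != path[-1]:
        waypoints.append(path[-1])        # make sure the goal is always included
    return waypoints

path = [(0, 0), (0, 1), (1, 1), (1, 2), (2, 2), (2, 3), (3, 3)]
print(downsample_path(path, 2))           # [(0, 0), (1, 1), (2, 2), (3, 3)]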
For the virtual robot, I used a turn, move, localize sequence to help the robot follow the path. I used execute_control(rot1, trans, rot2) with rot2 = 0. I kept the default movement time of 1 second and adjusted the angular and linear velocity to move the robot where it needed to go: the angular rate is calculated so that the robot faces the next waypoint, and the linear velocity is calculated so that the robot covers the distance to the waypoint in 1 second. Once the distance between the robot and the current waypoint is less than the parameter close_enough, the current waypoint is updated to the next waypoint in the list. This loop runs until the current waypoint is the last waypoint in the list, indicating that the robot has reached the goal.
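Sketched in Python, the per-waypoint command computation looks roughly like this (the 1-second movement time follows the description above; everything else is a placeholder):

import math

def waypoint_command(pose, waypoint, move_time=1.0):
    """Compute (turn angle, angular rate, linear velocity) to reach a waypoint in
    move_time seconds. pose is (x, y, yaw in degrees); waypoint is (x, y)."""
    dx = waypoint[0] - pose[0]
    dy = waypoint[1] - pose[1]
    heading = math.degrees(math.atan2(dy, dx))
    rot1 = (heading - pose[2] + 180) % 360 - 180   # turn needed, wrapped to [-180, 180)
    dist = math.hypot(dx, dy)
    w = rot1 / move_time                           # deg/s to face the waypoint
    v = dist / move_time                           # m/s to cover the distance in one step
    return rot1, w, v

print(waypoint_command((0.0, 0.0, 0.0), (1.0, 1.0)))   # turn ~45 deg, then ~1.41 m/s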
To test the algorithms, I ran my code on the virtual robot in the environment I set up. First, the global planner searched for the shortest path from the start (red) to the goal (green) location.
The blue cells represent the cells that have been searched. You can see A* searching towards the goal, but also searching around the path. The path is drawn in red in the final picture.
After the path was determined, I ran the code for the local planner. The robot was able to localize and control itself well enough to follow the path and reach its goal, although some tuning of cell_step_size and close_enough was required. In the first figure, green is the ground-truth path of the robot and yellow is the Bayes filter path.
I also ran another trial with different start and goal positions to check for robustness. In the second run, the robot was also able to successfully navigate to the goal cell.
In the second trial, the robot got close enough to the center obstacle to hit it, though not hard enough to stop it completely. To prevent this on the real robot, I padded ALL obstacles in the binary occupancy grid by 1 cell so that the global planner wouldn't output a path so close to walls.
Before modifying my code for the physical robot, I looked into improving my observation loop because this lab was being performed on a smooth, uniform hardwood floor instead of a bumpy tiled floor. I wrapped the front wheels with electrical tape and retuned the PID controller (increased Ki, decreased Kp) to smooth out the response. After these two changes, the robot could reliably turn with one wheel in place at 90 deg/s, a massive improvement over driving in a large circle at 120 deg/s. I also discovered that the format of my observation vector was incorrect. The first element was supposed to be the scan reading from the current heading instead of the scan reading from 0 deg yaw as I had it for the previous lab. This improved the localization module's ability to track the yaw of the robot using the observation loop.
Next, I modified how I send movement commands to the robot. I decided to use a turn-then-move sequence at constant speed, adjusting turn_time, move_time, and the direction of the turn (isCC = 1 for counter-clockwise turns, 0 otherwise) in proportion to how much turning or moving was needed. The parameters that control how long the robot turns or moves were tuned heuristically. After the movement was complete, the robot immediately sent the odometry data to the VM for the prediction step. I also modified the corresponding modules in the Arduino code to turn or move until turn_time or move_time had elapsed.
Finally, I modified the virtual-robot local planner code, replacing much of the initialization with await init_bayes_filter(), replacing execute_control() with await move_robot(move_time, turn_time, isCC), and replacing the prediction and update steps with await step_bayes_filter(). The rest of the code involving distance checks to waypoints and deciding how to move to the next waypoint was left the same.
I tuned the turning and moving parameters first to prevent over/under turning and moving. Once that was functional, I noticed that the robot was better at making small adjustments to its heading than large ones, so I modified cell_step_size to reduce the number of waypoints in the path and increase the distance between them. This was also desirable because it reduced the number of scans the robot had to perform, which saved time. One caveat of reducing the number of waypoints was that it became possible for the robot to try to take a path through an obstacle. I solved this either by finding an appropriate number of waypoints for that particular trajectory or by bloating the obstacles in the occupancy grid so that the global planner found a path further away from the obstacles.
For the first trial on the real robot, I used the same waypoints as in the first trial of the virtual robot. The robot began to track the waypoints reliably after the tuning I did. Below are videos of an unsuccessful early attempt and a subsequent successful attempt as well as a figure of the trajectory of the Bayes filter belief. For the second video, the robot ended up directly over the cross representing the goal.
I also did a test with the robot starting away from its supposed starting position. The robot is still able to localize and adjust itself to track the waypoints and reach the goal. The robot ended up just over 12 inches away from the goal.
I performed one last trial using different waypoints. For this trial, I bloated the obstacles in the occupancy grid after the robot kept hitting the boxes in the middle. The robot ended up just under 6 inches away from the goal.
The robot was able to plan paths, localize, and execute well considering what sensors were available. In previous classes, I have solved this path planning and localization problem with far better precision, but it required reliable, complex sensors on a slow robot. This robot is capable of much faster, albeit more 'sloppy' navigation.
In the lab, I implemented state feedback and LQR control on a simulation of an inverted pendulum on a cart. I also introduced some nonlinearities to more closely resemble the real robot.
Before implementing any control, I ran the simulation to verify the nonlinear dynamics. The animation makes sense to me: as the mass falls to the left, the cart moves to the right and the whole system begins to oscillate as a normal pendulum would. Next, to begin implementing the controller, I defined an array of 4 negative real poles in pendulumParam.py to describe the desired dynamics of the closed-loop system. I used the poles and the command control.place(A,B,poles) to get the 1x4 control input gain matrix Kr. Last, I set u = Kr*(z_ref-z_curr) in pendulumControllerDynamics.py.
After running runSimulation.py and adjusting the poles to shape the response, I had a controller that was robust enough to stabilize the pendulum and track reference positions. The poles I settled on were poles = [-0.9, -2.5, -1, -2]. I had the cart track different waveforms, frequencies, and amplitudes. In the figures below, the top blue line is the reference and the orange line is the position of the pendulum.
Next, I designed a more aggressive controller by placing the eigenvalues of the system much farther into the left-half plane. The poles I came up with were poles = [-10, -5.1, -5.11, -5]. I tested this controller with the same references as the previous one.
Although the second controller is more aggressive and can track the sine wave and the low-frequency square wave more accurately, it is not robust enough to track the high-frequency square wave, which breaks the simulation.
To implement the LQR, I only had to change how I was selecting the values for the control input gain matrix Kr. In pendulumParam.py, I defined Q and R matrices to penalize the deviation of the state and control input from their references. scipy.linalg.solve_continuous_are(A, B, Q, R) solves for the associated Riccati matrix S, which is used to solve for Kr with the equation Kr = np.linalg.inv(R).dot(B.transpose().dot(S)).
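The same recipe on a self-contained toy system (a double integrator standing in for the cart-pendulum A and B matrices, which are not reproduced here):

import numpy as np
import scipy.linalg

# Toy double-integrator dynamics standing in for the pendulum's A and B matrices.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

Q = np.diag([10.0, 1.0])      # penalize state deviation (position more than velocity)
R = np.array([[0.1]])         # penalize control effort

S = scipy.linalg.solve_continuous_are(A, B, Q, R)   # solve the Riccati equation
Kr = np.linalg.inv(R).dot(B.transpose().dot(S))     # LQR gain, as in the lab code
print(Kr)                                           # 1x2 gain matrix

# Closed-loop poles of A - B*Kr should all lie in the left-half plane.
print(np.linalg.eigvals(A - B.dot(Kr)))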
Again, I ran the simulation and played with the values on the diagonals of Q and R until I got a response that I was happy with. After some testing, I subjected this controller to some harsh reference trajectories in an attempt to break it. I included a plot of a high frequency square wave below to demonstrate that although it was not able to track the reference well (due to the inherent limitations of the system), it was still stable.
Finally, I increased the cost of control usage as high as I could to try to break the controller. Interestingly, no matter how high I raised the cost of control and lowered the cost of state deviation, the controller would still stabilize the pendulum (although it wouldn't track the reference; it would just remain stationary). Essentially, the controller was completely insensitive to any deviation in the state after stabilizing the initial deviation. I even tried increasing the initial angle, but the controller still stabilized the pendulum.
First, I implemented a minimum and maximum control input (deadband and saturation). I modified pendulumControllerDynamics.py so that u = 0 when u was calculated to be below the deadband and u = saturation when u was calculated to be above the saturation value. When I ran the changes, they kept breaking the simulation. After consulting Campuswire and implementing the changes suggested there, I got the simulation to run, but I didn't notice significant differences in the trajectory. To get the simulation to run, the controller didn't seem to be allowed to actually engage the deadband or saturation, so the changes didn't appear to have an effect. Below is a picture of the broken simulation and of the functioning one with the deadband/saturation implementation.
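The nonlinearity itself is only a few lines. A sketch of the intent (the threshold values are placeholders):

def apply_deadband_saturation(u, deadband=0.5, saturation=5.0):
    """Zero out control inputs too small to move the motors and clamp large ones."""
    if abs(u) < deadband:
        return 0.0                                   # too small to overcome friction
    return max(-saturation, min(saturation, u))      # clamp to the actuator limits

for u in (0.2, 1.0, -7.5):
    print(u, "->", apply_deadband_saturation(u))     # 0.0, 1.0, -5.0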
Last, I tried implementing sensor noise, adding random gaussian noise with np.random.normal(loc=0, scale=self.sigma) to each of the states prior to calculating the control. Unfortunately, I couldn't get the simulation to run through to the end even after consulting Campuswire and tweaking Q, R, the reference, and sigma. The most complete run that I have is pictured below.
In the lab, I familiarized myself with the Kalman filter by using it to estimate the state of the inverted pendulum and react using state feedback.
We calculate new A and B matrices for the Kalman Filter because we may not have a perfect model of the system.
We can see which states we are measuring directly by looking at the C matrix. The C matrix is a vector of 3 zeros and a 1 in the last position, meaning that our observation is the angular velocity, the last element in the state vector. All other states are being inferred from the dynamics of the A and B matrices.
After running runSimulation.py, we can observe the animation of the inverted pendulum and see plots comparing the actual states of the cart with the states estimated by the Kalman filter. Without adjusting anything, the Kalman filter estimates the states perfectly because we are simulating a nearly perfect system.
After printing control.obsv(A,C) (shown below), we can see that the observability matrix is not full rank because its first column is all zeros. This implies that the full state is not observable: there is no way for the filter to correct an initial error in the position, the first state.
In the A matrix, the changes in the first and third states (position and angle) depend only on the second and fourth states (velocity and angular velocity). Since we only measure the angular velocity (C = [0 0 0 1]), we can only infer the position of the cart and its angle. We can correct for an initial error in the angle estimate, but not in the position. Below is a figure demonstrating this: an initial error of +/- 0.2 is applied to all states, and all states except position recover within half a second, while the position estimate retains its initial error.
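This can be checked numerically. The sketch below uses a toy A matrix with the same zero-first-column structure (not the actual pendulum matrices) to show the rank deficiency with the python-control package:

import numpy as np
import control

# Toy A with the same structure: the first column is all zeros, so no measurement
# of the other states carries any information about the first state (cart position).
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, -0.1, 2.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, -0.2, 15.0, -0.3]])
C = np.array([[0.0, 0.0, 0.0, 1.0]])     # we only measure angular velocity

O = control.obsv(A, C)                   # observability matrix [C; CA; CA^2; CA^3]
print(np.linalg.matrix_rank(O))          # 3, not 4 -> position is unobservable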
To further demonstrate this phenomenon, I tried to plot the uncertainty in each state over time, represented by the diagonal elements of the covariance matrix. I couldn't get the plot to look the way I wanted in time for the rest of the lab, but you can still see that the uncertainty in position grows over time while the uncertainty in all other states changes little after 1 second.
I introduced each non-linearity and imperfection individually before trying them all together.
As before, I introduced errors in the initial estimate of the kalman filter. This time, I introduced an error of +0.4 one state at a time to see how sensitive each was. Position errors had little effect on the stability of the cart and other states. Velocity errors had little impact on the stability of the cart, but had a significant effect on the angle and position estimates. Within 2 seconds, the velocity and angle errors were corrected, but the position error persisted as expected. The plots of these two trials are shown below.
Angle errors had the largest impact on the stability of the cart because the cart tried to correct for an error that didn't exist. It took less than a second to correct this error and had little effect on the estimates of other states. Angular velocity errors had the least effect overall because they are the only state that is being measured directly. The error was corrected almost immediately.
Next, I introduced deadband and saturation values into the dynamics. I chose 0.5 and 5 to be the values because I figured the real robot had about 100 values to write to the motors that actually did something (150-255 LSB). I expected the cart to oscillate fiercely about its reference values and it did exactly that. The kalman filter still successfully tracked all of the states, but I noticed small errors in the unmeasured states.
Next, I introduced sensor and process noise. For sensor noise, I decided that adding random gaussian noise with a standard deviation of 0.05 rad/s (about 3 deg/s) would be appropriate, judging from the noise level of the gyroscopes we used in class. I added the same amount of noise into the dynamics (adjusting dydt by np.random.randn()*0.05) as process noise. Watching the animation, I didn't notice a difference. However, the plots clearly show the Kalman filter being affected by the noise: the estimates for angle and angular velocity oscillate within roughly +/- 5 degrees and +/- 5 deg/s. Luckily, the controller is robust enough to ignore this high-frequency noise.
Next, I tried adjusting a few elements of the A and B matrices. I only adjusted non-zero elements in the second and fourth rows of the A and B matrices because those are the elements of the model that depend on measured parameters or simplifying assumptions. I randomly added or subtracted 10% from each of the 6 non-zero values because that seemed like a reasonable error for measuring those parameters. The effect was noticeable, but not drastic; the angle estimate was most affected. I continued increasing the percent error until the Kalman filter failed when the terms were off by more than 40%.
Last, I tried reducing the update frequency of the Kalman filter and the controller. If I were implementing this on the real robot, I would want my controller and Kalman filter running no slower than about 25 Hz (a 40 ms loop time). This did not seem to have much effect on the animation, but the plots show its effect on the angle estimate. Any slower than 15 Hz and the cart could no longer stabilize the pendulum.
Finally, I tried implementing all of these imperfections (initial state error, saturation and deadband, measurement and process noise, parameter errors, lower update frequency) into the simulation and seeing how the kalman filter reacted. Without tuning the kalman filter, the LQG was still able to stabilize the cart, but only just barely. It was shaking back and forth violently and did not smoothly track the reference position.
Afterwards, I individually increased the effect of each imperfection by just a little and saw that most increases made the cart unstable (raising the deadband, increasing parameter errors, and decreasing the update frequency). This indicated that the cart was very close to instability. After seeing this simulation, I don't believe that the robot would be able to stabilize itself. First, some of my choices for the amount of imperfection were optimistic and the real effects on the robot would be greater. Additionally, there are some imperfections that this simulator doesn't even consider. In my opinion, one of the biggest contributors is that the noise the IMU experiences depends on how the robot is moving, and this would have a great effect on the Kalman filter's efficacy. Another is that the simulation is only a 2-degree-of-freedom system while the robot has a full 6 degrees of freedom.