Bounded Verification with On-the-Fly Discrepancy Computation
Abstract. Simulation-based verification algorithms can provide formal safety guarantees for nonlinear and hybrid systems. Previous algorithms rely on user-provided model annotations called discrepancy functions, which are crucial for computing reachtubes from simulations. In this paper, we eliminate this requirement by presenting an algorithm for computing piecewise exponential discrepancy functions. The algorithm computes local convergence or divergence rates of trajectories along a simulation by constructing a coarse over-approximation of the reach set and bounding the maximum eigenvalue of the Jacobian over this over-approximation. The resulting discrepancy function preserves the soundness and relative completeness of the verification algorithm. We also provide a coordinate-transformation method that improves the local estimates of the convergence or divergence rates in practical examples. We further extend the method to compute the input-to-state discrepancy of nonlinear dynamical systems, which can be used for compositional analysis. Our experiments show that the approach is effective in terms of running time on several benchmark problems, scales reasonably to higher-dimensional systems, and compares favorably with available tools for nonlinear models.
The full version of the paper and experiment code:
Two case studies:
1. Cardiac Cell (link)
2. Powertrain Control System (link)
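The rate-bounding step described in the abstract, taking the maximum eigenvalue of the symmetric part of the Jacobian over a coarse over-approximation of the reach set, can be sketched as follows. This is only an illustration, not the paper's implementation: the 2-D system, the grid used as the reach-set surrogate, and all constants are made-up placeholders.

```python
import numpy as np

def jacobian(x):
    # Jacobian of a toy 2-D nonlinear system dx/dt = f(x),
    # with f(x) = [-x0 + x1**2, -2*x1]  (hypothetical example system)
    return np.array([[-1.0, 2.0 * x[1]],
                     [0.0, -2.0]])

def rate_bound(points):
    """Bound the exponential rate over a coarse over-approximation S:
    b >= lambda_max((J(x) + J(x)^T) / 2) for every sampled x in S."""
    rates = []
    for x in points:
        J = jacobian(x)
        sym = (J + J.T) / 2.0
        rates.append(np.max(np.linalg.eigvalsh(sym)))
    return max(rates)

# Sample the over-approximation on a grid (coarse reach-set surrogate)
grid = [np.array([x0, x1]) for x0 in np.linspace(-1, 1, 5)
                           for x1 in np.linspace(-1, 1, 5)]
b = rate_bound(grid)

# Piecewise exponential discrepancy over one simulation step of length dt:
# |xi(t) - xi'(t)| <= |x0 - x0'| * exp(b * t) for t in [0, dt]
dt, delta0 = 0.1, 0.01
bound = delta0 * np.exp(b * dt)
```

Here `b` is negative, so nearby trajectories contract over the step; repeating this per simulation step yields the piecewise exponential discrepancy function.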
Face Modeling based on Kinect and depth reconstruction via grating
In this project, we want to build software that helps people try on new clothes, something like a "magic fitting mirror." Traditional fitting software has one problem: the clothes look pasted onto a photo of the user. So instead, we replace the model's head with the user's head and replace the model's background with the user's background. This work is based on Kinect. We record photos of the model from every angle and then use the depth map to separate and move the background and the head. Changing the background is easy: we simply composite the rest of the body onto the current background. We are now working on face modeling; the basic idea is to find over 100 key points on the face and build a model of the face from them.
The depth-reconstruction project tries to build a cheaper Kinect. The Kinect's principle is phase shifting; here we instead exploit a property of gratings: viewed through a grating, the projection of an object shifts in one direction, and the length of the shift is positively related to the distance between the object and the grating. We use this principle to obtain the depth map.
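The shift-to-depth mapping can be sketched as follows. The linear calibration model and its constants are assumptions for illustration only; the real relation depends on the grating geometry and camera setup.

```python
import numpy as np

# Depth from grating-induced shift: a point's projection shifts by s pixels,
# and s grows with the distance z between the object and the grating.
# A simple (assumed) linear calibration model: z = a * s + z0.
A_CAL, Z0 = 2.5, 100.0   # hypothetical calibration constants (mm/pixel, mm)

def depth_from_shift(shift_map):
    """Convert a per-pixel shift map (pixels) to a depth map (mm)."""
    return A_CAL * np.asarray(shift_map, dtype=float) + Z0

shift = np.array([[0.0, 4.0], [8.0, 12.0]])   # measured shifts (pixels)
depth = depth_from_shift(shift)               # depth map in mm
```

In practice `A_CAL` and `Z0` would be fitted from measurements of objects at known distances.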
Optimized parameters in the experiment; Performed comparison experiments
The traditional way of zooming is interpolation. In recent years, Professor Thomas Huang at UIUC proposed using sparse representation for super resolution, an approach known as the synthesis sparsity model.
In our paper, we use a different model called the analysis sparsity model. The general advantages and disadvantages of the two sparse models are still under discussion; however, our experience shows that the analysis model performs better. This is our first contribution.
The second contribution is that we introduce the Lab color space into the super resolution of color images. In this color space the three channels are treated as independent, so we can apply a different SR method to each channel: say, the first channel with our proposed method and the other two with simpler interpolation methods. In this way we achieve good results with low computational complexity.
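The channel-wise idea can be illustrated with a minimal Python sketch. This is not the actual analysis-sparse algorithm from the paper: the color-conversion constants are the standard sRGB/D65 ones, and nearest-neighbour upscaling stands in both for the proposed SR method and for the simpler interpolation.

```python
import numpy as np

def rgb2lab(rgb):
    """Minimal sRGB (values in 0..1) -> CIE Lab conversion (D65 white)."""
    # Undo sRGB gamma
    c = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    # Linear RGB -> XYZ
    m = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = c @ m.T
    # Normalize by the D65 white point and apply the Lab nonlinearity
    xyz = xyz / np.array([0.95047, 1.0, 1.08883])
    f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16.0 / 116.0)
    L = 116.0 * f[..., 1] - 16.0
    a = 500.0 * (f[..., 0] - f[..., 1])
    b = 200.0 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def upscale_nearest(ch, s):
    """Cheap interpolation stand-in (nearest neighbour)."""
    return np.kron(ch, np.ones((s, s)))

def sr_lab(rgb, s=2, sr_method=upscale_nearest):
    """Channel-wise SR in Lab: the L channel gets the expensive SR method,
    a and b get cheap interpolation. sr_method is a placeholder for the
    analysis-sparse SR algorithm described in the text."""
    lab = rgb2lab(rgb)
    L = sr_method(lab[..., 0], s)
    a = upscale_nearest(lab[..., 1], s)
    b = upscale_nearest(lab[..., 2], s)
    return np.stack([L, a, b], axis=-1)

img = np.random.rand(4, 4, 3)       # tiny random sRGB test image
hi = sr_lab(img, s=2)               # upscaled (8, 8, 3) Lab image
```

Only the lightness channel pays for the expensive reconstruction, which is where the complexity saving comes from.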
Here you can see the paper.
Designed the control system for Beobot 2.0 using the classical PID control and Relay-based Auto-tune method
Built a small simplified robot on which we could conduct many control experiments
Beobot is a human-size robot; you can find it on YouTube. Since I worked on it for only six weeks, I mainly designed the speed-control system. Before I arrived at USC, the robot had a serious speed-control problem: it used two DC motors whose characteristics differed, and road conditions and battery voltage also affected the speed. The students in the lab wanted to control the two motors using camera video, but the computational cost was too high and the control frequency too low. So I designed a simple control system for the robot using wheel encoders and a microcontroller. I applied classical PID control together with the relay-based auto-tune method to find good PID parameters. When the microcontroller detects oscillation in the speed, it switches to the auto-tune state, spends 4 to 5 seconds estimating new parameters, and hands them to the PID controller. The robot then holds its speed precisely and stably. I also built a small simplified robot on which we could run many control experiments.
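A minimal sketch of the relay-based auto-tune idea, assuming the standard describing-function estimate of the ultimate gain and the Ziegler-Nichols tuning rules. The relay amplitude, measured oscillation, and sampling period below are made-up numbers, and the actual controller ran on a microcontroller rather than in Python.

```python
import math

def relay_autotune(d, a, Pu):
    """Relay-based auto-tune: with relay amplitude d, the speed oscillates
    with amplitude a and period Pu. Describing-function estimate of the
    ultimate gain, then Ziegler-Nichols PID rules."""
    Ku = 4.0 * d / (math.pi * a)          # ultimate gain estimate
    Kp = 0.6 * Ku
    Ki = Kp / (Pu / 2.0)                  # integral time Ti = Pu / 2
    Kd = Kp * (Pu / 8.0)                  # derivative time Td = Pu / 8
    return Kp, Ki, Kd

class PID:
    def __init__(self, Kp, Ki, Kd, dt):
        self.Kp, self.Ki, self.Kd, self.dt = Kp, Ki, Kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.Kp * err + self.Ki * self.integral + self.Kd * deriv

# Hypothetical oscillation measured during the relay experiment
Kp, Ki, Kd = relay_autotune(d=1.0, a=0.5, Pu=0.8)
pid = PID(Kp, Ki, Kd, dt=0.01)
u = pid.step(setpoint=1.0, measured=0.0)  # first control output
```

On the robot, the relay experiment replaces the PID output with a square wave until a steady oscillation appears, after which the PID loop resumes with the new gains.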
The poster of my work is here.
Also, you can find my slides here.
Modeled and simulated the system in MATLAB; optimized the control solution
Added an elastic band to drive the state on the phase plane into a limit cycle
This is a project I did while participating in the Student Research Training program. We tried to figure out how an ape swings itself through the trees, or how people pump themselves on a playground swing. We used springs to simulate the creature's action.
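A toy version of the pumping idea can be sketched in Python. This uses a variable-length pendulum with an angular-momentum-conserving length change rather than the spring model we actually used in MATLAB, and all constants are illustrative.

```python
import math

# A toy model of swing pumping: the "creature" stands up (shortening the
# effective pendulum length) while rising from the bottom of the arc and
# squats (lengthening it) while descending, which injects energy each
# cycle so the amplitude grows.
G, DT = 9.81, 0.0005
L_LONG, L_SHORT = 2.0, 1.8     # squatting / standing effective lengths (m)

def pumped_swing(steps, th0=0.1):
    th, om, l = th0, 0.0, L_LONG   # angle (rad), angular velocity, length
    peak = abs(th)
    for _ in range(steps):
        # stand while rising from the bottom (th and om share a sign),
        # squat while falling from an extreme
        target = L_SHORT if th * om > 0 else L_LONG
        if target != l:
            om *= (l / target) ** 2     # angular momentum l**2 * om conserved
            l = target
        om -= (G / l) * math.sin(th) * DT   # pendulum dynamics (Euler step)
        th += om * DT
        peak = max(peak, abs(th))
    return peak

amp = pumped_swing(20000)   # without pumping the peak would stay near th0
```

The energy gain comes from shortening the pendulum where the speed is highest, the same mechanism the spring models approximate.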