The DeepGreen snooker robot project aims to build a robot capable of challenging the best human players. Snooker (a billiards game played on a 1.8 x 3.8 m table, much larger than a pool table) is challenging because of its very long shots and tight pockets, and because of the complex strategies players use to block their opponents.
The robot consists of a robot arm with a linear-motor cueing action, supported by a ceiling camera and a cue-mounted camera. It can already take shots with near-human accuracy, but would benefit from more reliable camera-based ball detection. A video of the robot in action can be found here:
https://vimeo.com/335260829
We currently decide the robot's strategy using approximate dynamic programming based on a stochastic Monte Carlo tree search and a heuristic approximation of the value function.
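As a rough illustration of this decision loop, the sketch below samples candidate shots, rolls out a simple stochastic shot model, and scores outcomes with a hand-crafted value heuristic. Everything here is hypothetical: the toy state (reds left plus a scalar "position quality"), the shot model, and the single-ply search are stand-ins for the full table state and deeper tree search the project actually uses.

```python
import random

def heuristic_value(reds_left, position_quality):
    """Toy heuristic value: fewer reds remaining and better position are worth more."""
    return (15 - reds_left) + position_quality

def simulate_shot(reds_left, position_quality, aim_noise, rng):
    """Stochastic shot model: potting probability degrades with aiming noise."""
    pot_prob = max(0.0, position_quality - aim_noise)
    if reds_left > 0 and rng.random() < pot_prob:
        # Potted a red; cue-ball position changes randomly.
        return reds_left - 1, min(1.0, position_quality + rng.uniform(-0.2, 0.2))
    # Missed: position typically worsens.
    return reds_left, max(0.0, position_quality - 0.3)

def monte_carlo_search(reds_left, position_quality, candidate_shots,
                       n_rollouts=200, seed=0):
    """Pick the candidate shot whose noisy rollouts score best on average."""
    rng = random.Random(seed)
    best_shot, best_value = None, float("-inf")
    for aim_noise in candidate_shots:
        total = 0.0
        for _ in range(n_rollouts):
            r, p = simulate_shot(reds_left, position_quality, aim_noise, rng)
            total += heuristic_value(r, p)
        avg = total / n_rollouts
        if avg > best_value:
            best_shot, best_value = aim_noise, avg
    return best_shot, best_value
```

With full reds on the table and good position, the search favours the lowest-noise candidate, e.g. `monte_carlo_search(15, 0.8, [0.1, 0.3, 0.5])`.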
With this project we want to improve the physics engine to account for more realistic dynamics, and to employ advanced reinforcement learning techniques to improve the robot's strategy. In particular, we plan to use learning-based methods to improve the approximate value function so that it captures advanced strategies (cover shots, safety shots) that are common in professional snooker play.
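One standard learning-based route for improving a value approximation from simulated play is a temporal-difference update on a parametric value function. The sketch below shows a TD(0) step for a linear approximation over the same hypothetical toy state as above; the feature map and learning rates are illustrative assumptions, not the project's design.

```python
def features(reds_left, position_quality):
    # Hypothetical feature vector; a real table state would need far richer features.
    return [1.0, 15 - reds_left, position_quality]

def td0_update(weights, state, reward, next_state, alpha=0.01, gamma=0.95):
    """One TD(0) step on a linear value approximation V(s) = w . phi(s)."""
    phi, phi_next = features(*state), features(*next_state)
    v = sum(w * f for w, f in zip(weights, phi))
    v_next = sum(w * f for w, f in zip(weights, phi_next))
    delta = reward + gamma * v_next - v  # temporal-difference error
    return [w + alpha * delta * f for w, f in zip(weights, phi)]
```

Repeatedly applying `td0_update` to rewarded transitions (e.g. potting a red) pushes the estimated value of the originating states upward, which is how the learned function comes to encode strategic preferences the hand-written heuristic misses.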
The project will proceed according to the following steps:
1) Evaluate the existing AI algorithms
2) Develop an efficient implementation of a more realistic physics engine (taking spin effects into consideration)
3) Evaluate the current algorithms with the improved physics engine
4) Develop and train novel reinforcement learning algorithms using the new physics engine
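To give a flavour of step 2, the sketch below integrates a textbook sliding-then-rolling friction model for a single ball: while the contact point slips, sliding friction decelerates the ball and torques it toward rolling contact; once rolling, only rolling resistance slows it. The 1-D simplification and all constants are illustrative assumptions (the friction coefficients are not measured from the DeepGreen table), and a real engine would additionally handle sidespin and cushion/ball collisions.

```python
import math

# Illustrative constants, not measured from the DeepGreen table.
MU_SLIDE = 0.2    # cloth-ball sliding friction coefficient
MU_ROLL = 0.01    # rolling-resistance coefficient
G = 9.81          # gravity, m/s^2
R = 0.02625       # snooker ball radius, m (52.5 mm diameter)

def step(v, omega, dt=1e-3):
    """Advance linear speed v (m/s) and spin omega (rad/s) by one Euler step."""
    slip = v - omega * R  # contact-point velocity relative to the cloth
    if abs(slip) > 1e-6:
        # Sliding: friction opposes the slip, decelerating v and torquing the
        # ball toward rolling contact (solid sphere: I = (2/5) m R^2).
        a = -MU_SLIDE * G * math.copysign(1.0, slip)
        v_new = v + a * dt
        omega_new = omega - a * dt * (5.0 / (2.0 * R))
        if (v_new - omega_new * R) * slip <= 0.0:
            # The step overshot rolling contact: snap to pure rolling.
            omega_new = v_new / R
        return v_new, omega_new
    # Pure rolling: only rolling resistance slows the ball.
    v = max(0.0, v - MU_ROLL * G * dt)
    return v, v / R

def simulate(v0, omega0, t_max=5.0, dt=1e-3):
    """Integrate until the ball stops or t_max elapses."""
    v, omega = v0, omega0
    for _ in range(int(t_max / dt)):
        v, omega = step(v, omega, dt)
        if v == 0.0:
            break
    return v, omega
```

For a cue ball struck without spin, e.g. `simulate(2.0, 0.0)`, the model first skids (losing speed while gaining topspin) and then settles into pure rolling; exactly this slide-to-roll transition determines where the cue ball comes to rest, which is what the improved engine needs to predict for positional play.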
Marcello Colombino (mcolombi@ethz.ch)
Samuel Balula (sbalula@ethz.ch)