Programming Contest Format
Based on feedback from last year, we have changed the programming contest to a 3-day format. All problems from both the AI Round and Optimization Round will be released on Friday evening, and you will have until Sunday evening to work on them.
In the AI Round, you and your team write programs that play mini games against other teams. Your most recent submission will regularly be run against the other teams' programs, and your scores will be updated on a live leaderboard. You'll be able to view your games and update your strategies accordingly! You will have 46 hours to write and improve your code for 2 different mini games. All code should be written in Python 3.
In the Optimization Round, you and your team will work on 2 optimization-based problems, each with random test case generators. These problems are not intended to be solved perfectly - instead, you will be competing against other teams to get the closest approximation to an optimal solution. These problems will have a mathy flair and require some computational tools/thinking to get a decent answer. You will have 46 hours to gradually optimize your solutions. Your score for each problem will be determined by your most recent submission, and displayed on a live leaderboard. All code should be written in Python 3.
For both rounds, we will provide starter code to help you get started on each of the problems.
All code must be written in Python 3. Your code will be graded on a system running Python 3.8. You may only import modules from the Python Standard Library. You can submit your code whenever you want during the contest, and your most recent submission will be graded and displayed on the leaderboards. Each problem specifies maximum time and memory limits for your program (both per turn and in total), which will be listed at the bottom of the problem statement.
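Since exceeding a time limit counts against you, it helps to budget your computation each turn. Here is a minimal sketch of one way to do that; the 0.9-second budget and the `score_fn` interface are made-up examples, so check each problem statement for the real limits:

```python
import time

# Made-up per-turn budget in seconds; the real per-turn and total
# limits are listed at the bottom of each problem statement.
TURN_BUDGET = 0.9

def best_move(candidate_moves, score_fn):
    """Return the best-scoring move found before the budget runs out."""
    start = time.perf_counter()
    best, best_score = None, float("-inf")
    for move in candidate_moves:
        if time.perf_counter() - start > TURN_BUDGET:
            break  # stop early rather than time out and forfeit the turn
        s = score_fn(move)
        if s > best_score:
            best, best_score = move, s
    return best
```

Returning the best move found so far, rather than searching exhaustively, means a slow turn degrades gracefully instead of timing out.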
To join a round, click Contests and select the given round. There will be an option to download the starter code at the bottom of each problem statement.
If you have not yet submitted code for a problem, or your code makes an illegal move, runs into an error, or times out, your bot will fall back to the default strategy of doing nothing. The starter code comes with a simple random strategy that will perform better than the default strategy in most cases.
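For intuition, a "simple random strategy" of the kind the starter code provides looks roughly like the sketch below. The `legal_moves` list and its contents are invented for illustration; the real game interface is defined in the starter code for each mini game:

```python
import random

def choose_move(legal_moves):
    """Pick a uniformly random legal move, or None if there are none.

    Returning None here stands in for the do-nothing default; any
    legal move is usually better than forfeiting the turn.
    """
    if not legal_moves:
        return None
    return random.choice(legal_moves)
```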
All submissions in both rounds will be regraded to produce final rankings at the end of the contest (each problem's scores will be normalized so that the best submission is worth 100 points). In the spirit of the competition, and so that everyone can compete to develop cooler solutions, we encourage teams not to hoard solutions and submit them only at the last minute.
We have implemented a rating system: your rating will go up and down depending on which games you win and how many games you have played. Every time you submit your code, you will receive a "burst" of game data (many games played at once) so that you have immediate feedback as well as a rating update. Otherwise, we will run your most recent submission at a rate of roughly one game every 5 minutes (depending on the problem), and you can view game history and rating changes by clicking Match Results. At the end of the contest, we will reset all ratings and run a large batch of games to determine final rankings.
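The exact rating formula is not specified here; purely for intuition, an Elo-style update (an assumption, not necessarily the system the contest uses) captures the described behavior, where beating a stronger opponent gains you more rating than beating a weaker one:

```python
# Illustrative Elo-style update only; the contest's actual rating
# formula is not specified, so treat this as intuition, not spec.

def elo_update(rating_a, rating_b, score_a, k=32):
    """Return new (rating_a, rating_b) after one game.

    score_a is 1 for a win, 0.5 for a draw, 0 for a loss,
    from player A's perspective.
    """
    expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
    delta = k * (score_a - expected_a)
    return rating_a + delta, rating_b - delta
```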
Submitted code will be run against 10 tests per generator, produced by random test case generators that are provided to all contestants and can be run locally. At the end of the contest, all submissions will be run against the same set of 100 random tests per generator, and these will determine the final scores. Scoring details can be found in the problem statements.
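A generator in this style is typically just a seeded random procedure, so the same seed reproduces the same test case locally. The sketch below is hypothetical (the real generators are problem-specific and included in the handout); a list of integers stands in for an actual test case:

```python
import random

def generate_test(seed):
    """Hypothetical test case generator: seeded, hence reproducible."""
    rng = random.Random(seed)   # independent RNG so the seed fully
                                # determines the test case
    n = rng.randint(5, 20)      # random problem size
    return [rng.randint(0, 100) for _ in range(n)]
```

Because the generators are seeded, you can rerun the exact case your solution scored poorly on and debug against it.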
Graders can be run locally via the test_runner.py script. Before running, open test_runner.py, uncomment the games you want to test, and modify the appropriate arguments. Running will produce replays in the replays subdirectory, which can then be viewed in the visualizer.
To print intermediate data while your program is running, write to standard error instead of standard output. Printing normally (to standard output) will send text to the grader, causing an error.
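A minimal pattern for this in Python, assuming the grader reads your program's standard output (so debug text must go to standard error instead):

```python
import sys

def debug(*args):
    """Print debug output to stderr so it never mixes with the
    moves/answers the grader reads from stdout."""
    print(*args, file=sys.stderr)

debug("turn", 3, "score:", 42)   # safe: goes to stderr
# print("turn", 3)               # unsafe: stdout is read by the grader
```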
Visualizations of played matches will be available online shortly after they are played in the match results tab. To visualize locally-played matches, run visualization_hoster.py from the contest handout locally, and go to the Local Visualization tab. The browser visualizers will pull data off the hoster’s server, without sending any data to CMIMC.
To use the online visualizer, click the "Load" button on a replay to load it. Then, you can use the left/right arrow keys to step forwards/backwards, the up/down keys to zoom in and out, and you may click and drag with the mouse.