CMIMC 2021 has ended! See the final results.

Programming Contest Format


AI Round

In the AI Round, you and your team write programs that play mini games (such as tic-tac-toe) against other teams. Every 5 minutes (after a 30-minute grace period), your most recent submission will be run against the programs of the other teams, and your scores will be updated on a live leaderboard. You'll be able to view your games and update your strategies accordingly! You will have 3 hours to write and improve your code for 3 different mini games. All code should be written in Python 3.

Optimization Round

In the Optimization Round, you and your team will work on 3 optimization-based problems. These problems are not intended to be solved perfectly; instead, you will be competing against other teams to get the closest approximation to an optimal solution. These problems have a mathy flair and require some computational tools and thinking to get a decent answer. You will have 3 hours to gradually optimize your solutions. Each problem has its own set of test cases. Your score for each problem is determined by your best-performing submission and displayed on a live leaderboard. You can use any programming language for this round, since you will only be submitting a text output.

For both rounds, we will provide starter code to help you get started on each of the problems.

Detailed Rules

AI Round

All code must be written in Python 3. Your code will be graded on a system running Python 3.9; the visualizers require Python 3.6+. You can only import modules from the Python Standard Library. You can submit your code whenever you want. We have implemented a rating system: your score will go up and down depending on which games you win and how many games you have played. Every time you submit your code, you will receive a "burst" of game data (many games played at once) so that you get immediate feedback as well as a rating update. Otherwise, we will run your most recent submission at a rate of roughly one game per minute (depending on the problem), and you can view game history and rating changes to see how well you are doing by clicking Match Results.

To join the contest, click contests and then AI Round. There will be an option to download starter code at the bottom of the problem statement.

If you have not yet submitted code for a problem or your code returns invalid results, we will give you a default strategy as described in the problem statements. The default strategy will be very bad. The starter code comes with a simple, random strategy that will perform better than the default strategy in most cases.

How to use the starter code and visualizer:

The starter code is fully commented, describing the input data and format, the output format, where to write your code, and how to test your bots against each other locally (you can use the default random bot or any other bots you write). The starter code also includes a built-in local visualizer that prints a text visualization of each game to your console.
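The exact interface differs from game to game, but a random strategy of the kind the starter code ships with might look like this for tic-tac-toe (the `get_move` name and the board encoding here are illustrative, not the actual starter-code API):

```python
import random

def get_move(board):
    """Pick a uniformly random empty cell on a 3x3 tic-tac-toe board.

    `board` is assumed to be a flat list of 9 cells, with None marking
    an empty cell. This encoding is hypothetical; the real starter code
    defines its own input format for each game.
    """
    empty = [i for i, cell in enumerate(board) if cell is None]
    return random.choice(empty)
```

A strategy like this at least never plays an illegal move, which is why it beats the punitive default strategy in most cases.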

To run the visualizer file that you download from the Match Results page, make sure the visualizer file and the game replay history file are in the same folder. The game history is a JSON file that the visualizer parses and renders for you. Running the visualizer lets you see what happened in a game you played in that mini round. The visualizers will also display any error messages your code threw, when applicable.
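Since the replays are plain JSON, you can also inspect them yourself with a few lines of Python (the file name and any keys you look up are illustrative; the actual replay schema is defined by each game's visualizer):

```python
import json

def load_replay(path):
    """Load a downloaded game-replay JSON file into a Python dict/list.

    `path` is the replay file from the Match Results page, e.g. a
    hypothetical "replay.json" sitting next to the visualizer.
    """
    with open(path) as f:
        return json.load(f)
```

This can be handy for quick scripted analysis, such as counting how often your bot made a particular move across many games.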

Optimization Round

This is a 3 hour round, and your code can be written in any language you want. You will submit a .txt file in the format specified for each task in each problem statement. Our graders will then read in your text file and compute a score using the scoring formula specified in the problem statements.

This gives you your raw score for that task. For each task, the scores are normalized to be in the range [0, 100], with the current best submission getting 100 points. This ensures that tasks are weighted equally. Your score for a problem is the average of these normalized scores for the corresponding tasks. This is what gets displayed on the leaderboard. For example, if you have the best score in one out of four tasks and have no submissions for the other three, your score for that problem would be (100+0+0+0)/4 = 25.
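The normalization above can be sketched in Python. This helper and its linear scaling are illustrative; the actual raw-score formula for each task is given in the problem statement, and here a higher raw score is assumed to be better:

```python
def problem_score(raw_scores, best_scores):
    """Average of per-task normalized scores for one problem.

    raw_scores:  your raw score per task, or None if you have no submission.
    best_scores: the current best raw score per task across all teams.
    Each task is scaled so the best submission earns 100; missing
    submissions count as 0.
    """
    normalized = []
    for raw, best in zip(raw_scores, best_scores):
        normalized.append(100 * raw / best if raw is not None and best else 0)
    return sum(normalized) / len(normalized)

# The example from the text: best on one task out of four, no
# submissions on the other three.
print(problem_score([50, None, None, None], [50, 80, 90, 70]))  # 25.0
```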

How to use the starter code:

You can download a starter file in either Python or C++, though you may use any other language or tool you want to solve these problems. The starter code reads the input file that you download for each task (note that the input file has to be in the same folder as the starter file, and you will need to type the input file's name into the designated place in the starter file). The starter file also writes to a .txt output file, but it does not check that the output matches the format required by the problem statement. The region where you write your own logic to solve or approximate each task is also marked with comments.
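The overall shape of the starter files is a simple read-solve-write pipeline, roughly like the sketch below. The function names and file paths here are made up, and the placeholder `solve` just echoes its input; each task's real input and output formats come from its problem statement:

```python
def solve(lines):
    """Placeholder solution logic: replace this with your own
    per-task algorithm or approximation. Here it just echoes input."""
    return lines

def run(input_path, output_path):
    # Read the downloaded input file for the task.
    with open(input_path) as f:
        data = f.read().splitlines()

    answer = solve(data)

    # Write the .txt output you will submit. Note: nothing here checks
    # that the output actually matches the required format.
    with open(output_path, "w") as f:
        f.write("\n".join(answer) + "\n")
```

Because you only submit the text output, you are free to produce it however you like; a skeleton like this is just one convenient way to organize it.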

General Philosophy for the Optimization Round:

You can adopt any strategy you want for these problems. For the smaller tasks, you can work the problem by hand, with pictures, a calculator, your favorite computational tool, etc., to get a feel for the problem or even to find an optimal solution! The problems are designed so that some problem solving is needed to design a good algorithm for the larger test cases. You can also use a different, custom strategy for each task; in particular, feel free to try out different ideas or write different code depending on the task and the size of the input.