The 2021 Competitive Programming Contest (CPC ‘21)

Last week, the 2021 Competitive Programming Contest (CPC ‘21) was held on DMOJ. Editorials have been published, so I won’t discuss the solutions here. Instead, I’m here to talk about the mistakes my friends and I made while creating the contest.

Preparing the Contest

Creating Problems

Making problems is a creative activity, which means that you can’t force yourself to create them on the spot. You first need good ideas that can be turned into problems. I get my best ideas when I shower before going to bed 🚿🛏️.

Most ideas are not interesting enough to turn into contest problems. Codeforces is one example of a platform with a high standard: for Codeforces LATOKEN Round 1, which contained 8 problems, my team rejected 20 problems because they were not interesting enough, were not at a suitable difficulty, or would skew the contest too far in favour of one topic. Some authors have had dozens of rejections for a single round: 29/36 rejected and 72/78 rejected. We had an easier time making our problems because DMOJ has a lower standard than Codeforces. AQT and I already had the ideas for P6 and P7. As for P5, Plasmatic created multiple problems and we went with the current one.

Mistakes

Problem 2

The data for this problem was very weak. Some solutions that passed should have received the Wrong Answer or Time Limit Exceeded verdicts. The data-creation process for P2 consisted of creating incorrect solutions and then generating data on which they failed. Unfortunately, this was not enough to stop all the 🧀s from passing.
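
This kind of stress testing is simple to script. Here is a minimal sketch of the idea in Python, with illustrative names that are not the actual contest files: `gen.py` is a seeded random test generator, `./correct` is a model solution, and `./wrong` is an intentionally incorrect one. It keeps any test on which the two disagree.

```python
import subprocess

def run(cmd, test_input=""):
    """Run a command on the given input and return its stdout."""
    return subprocess.run(cmd, input=test_input, capture_output=True,
                          text=True, timeout=10).stdout

# gen.py, ./correct, and ./wrong are hypothetical names, not the actual
# contest files: a seeded test generator, a model solution, and an
# intentionally incorrect solution.
kept = 0
for seed in range(10_000):
    test = run(["python3", "gen.py", str(seed)])
    if run(["./wrong"], test) != run(["./correct"], test):
        # The incorrect solution fails on this test: keep it in the data.
        kept += 1
        with open(f"countertest_{kept}.txt", "w") as f:
            f.write(test)
print(f"kept {kept} countertests")
```

The catch, and exactly what bit P2, is that this only defeats the incorrect solutions you thought to write; any 🧀 you didn’t anticipate can still pass.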

Problem 5

Plasmatic forgot to turn off the interactor’s feedback when the contest began. The feedback only told contestants how many queries their solution used on each test case, so it wasn’t a big deal.

Problem 6

I created worst-case data to kill solutions that did well on random cases but were in reality $\mathcal{O}(N^2)$ or worse. I also tightened the time limit to kill solutions that ran in $\mathcal{O}(N \log^2 N)$. To kill greedy strategies, I wrote generators that produced cases whose answers required each of $A$, $B$, and $C$ to be non-zero.
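
Tightening a time limit can be done empirically: time the intended solution and a deliberately slow one on the worst-case data, then place the limit between them. A rough sketch, where `./intended` and `./too_slow` are hypothetical binaries and `data/*.in` is the test set:

```python
import subprocess
import time
from pathlib import Path

def worst_time(binary, tests):
    """Return the slowest wall-clock time of `binary` across all tests."""
    worst = 0.0
    for t in tests:
        with open(t) as f:
            start = time.perf_counter()
            subprocess.run([binary], stdin=f, stdout=subprocess.DEVNULL)
            worst = max(worst, time.perf_counter() - start)
    return worst

# ./intended and ./too_slow are illustrative: the model solution and an
# O(N log^2 N) solution that should not pass.
tests = sorted(Path("data").glob("*.in"))
fast = worst_time("./intended", tests)
slow = worst_time("./too_slow", tests)
print(f"intended worst: {fast:.2f}s, too-slow worst: {slow:.2f}s")
# Choose a limit comfortably above `fast` but clearly below `slow`.
```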

On the morning of Day 1 of the contest, I found out that I had forgotten to enable short-circuiting, which stops judging subsequent subtasks as soon as one of them fails. dalt had a bug in his code that made it fail subtask 1 but pass subtask 2. He ended up getting 60/100 in-contest, then fixed his solution to get the full 100/100 mere minutes after his window ended. Since the contest had already started, I could not turn on short-circuiting: that would have been unfair either to dalt or to the other contestants, depending on whether I let him keep his 60/100. At this point, I was not yet worried about the integrity of the data.

In the afternoon, Joelfu submitted a solution that simply printed the minimum of the maximum values of $A$, $B$, and $C$. It failed the first subtask but passed the second. I was shocked: the issue was not just that I had forgotten to turn on short-circuiting; I had also messed up the data! This had somehow slipped past my testing. I was now worried that more 🧀s were incoming. Thankfully, there were none.

My mistake would have been caught if I had run invocations, like I had done when creating problems on Polygon. An invocation runs your solutions on the test data and checks that each solution’s verdict matches the label you gave it.

*Creating an invocation*

*Results of an invocation*

I don’t like using testlib.h and Polygon, so I have been relying on my own scripts. In the future, I should run invocations on my data.
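
A minimal version of that check is not much code, so it could bolt onto my existing scripts. Here is a sketch with hypothetical solution names and expected verdicts: it judges every labeled solution on the full data and flags any verdict that disagrees with its label.

```python
import subprocess
from pathlib import Path

# Hypothetical labels: each solution binary mapped to the verdict it is
# supposed to receive on the full data set.
EXPECTED = {"./model": "AC", "./greedy": "WA", "./quadratic": "TLE"}

def judge(binary, tests, time_limit=2.0):
    """Return the first non-AC verdict of `binary` on `tests`, or AC."""
    for t in tests:
        answer = t.with_suffix(".out").read_text().split()
        try:
            with open(t) as f:
                result = subprocess.run([binary], stdin=f, capture_output=True,
                                        text=True, timeout=time_limit)
        except subprocess.TimeoutExpired:
            return "TLE"
        if result.stdout.split() != answer:  # token-by-token comparison
            return "WA"
    return "AC"

tests = sorted(Path("data").glob("*.in"))
for binary, expected in EXPECTED.items():
    verdict = judge(binary, tests)
    flag = "" if verdict == expected else "  <-- MISMATCH: check the data!"
    print(f"{binary}: expected {expected}, got {verdict}{flag}")
```

A 🧀 like Joelfu’s would show up here immediately: a solution labeled WA coming back with AC.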

Relevant meme by Plasmatic:

*(meme image)*

Closing Remarks

It was fun working on CPC ‘21, and I hope you enjoyed hearing about our blunders. Bye for now! 👋