Romanticism in Programming

Why there is art in programming.

During my early days as a CS student, one of the first mind-blowing moments was watching Hello World! get printed to the console a thousand times from just two lines of code.

for _ in range(1, 1001):
  print("Hello World!")

At the time it felt like having the Elder Wand.

Dumbledore with the Elder Wand pic

But there was more to the lesson. The TA then asked us to put our newfound power to use by computing the sum of the numbers from 1 to 100. Of course, it was a natural application of what we had just done:

# Add the numbers 1 through 100, one at a time.
total = 0
for i in range(1, 100 + 1):
  total = total + i
print(total)

Sure enough, we saw the answer 5050 in the console. But then the TA reminded us that we were making our computer work too hard. In other words, the computer has to execute one hundred ADD instructions to make this computation happen.

What if the number was a million? How well would our method scale?

Well, then it would take a million ADD instructions. We call this scaling linearly with the input size. Later we would formalize this as $\mathcal{O}(n)$ (pronounced "Big O of n").
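
As an aside (this sketch is mine, not part of the original lesson), you can watch that linear growth yourself by timing the loop for increasingly large inputs. The helper name sum_with_loop is just for illustration:

import time

def sum_with_loop(n):
  # Add the numbers 1 through n one at a time: n ADDs in total.
  total = 0
  for i in range(1, n + 1):
    total = total + i
  return total

# The runtime grows roughly in proportion to n, which is what O(n) captures.
for n in (10_000, 100_000, 1_000_000):
  start = time.perf_counter()
  sum_with_loop(n)
  elapsed = time.perf_counter() - start
  print(f"n = {n:>9,}: {elapsed:.4f} s")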

The TA hinted that there was a better way, and that we already knew it from math.

Teaching meme

$\displaystyle S_n = \sum\limits_{i=1}^n i = 1 + 2 + \dots + n = \frac{n (n + 1)}{2}$

With this we are no longer using a loop, but a known mathematical fact about arithmetic series. If you don’t believe me, see the proof.
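
For the impatient, here is a sketch of the classic pairing argument behind that identity (not the linked proof itself): write the sum forwards and backwards and add the two copies term by term.

$\displaystyle 2 S_n = (1 + n) + \big(2 + (n - 1)\big) + \dots + (n + 1) = \underbrace{(n + 1) + (n + 1) + \dots + (n + 1)}_{n \text{ times}} = n (n + 1)$

Each of the $n$ pairs adds up to $n + 1$, so dividing both sides by 2 gives the formula.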

Written as code:

total = n * (n + 1) // 2

This one-liner solves our problem with just 3 instructions (ADD, MULTIPLY, DIVIDE). Crucially, unlike our previous solution, it does not depend on the size of the input: no matter the input, it always takes 3 instructions to compute. This is a HUGE win!
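
Here is a quick sanity check (my own, not the TA's) that the closed form agrees with the loop, and that it stays instant even for inputs where looping would be painful. The function names are just illustrative:

def sum_with_loop(n):
  total = 0
  for i in range(1, n + 1):
    total = total + i
  return total

def sum_closed_form(n):
  # n(n + 1)/2 is always an integer, so integer division is exact.
  return n * (n + 1) // 2

# Both approaches agree wherever the loop is feasible...
for n in (1, 10, 100, 10_000):
  assert sum_with_loop(n) == sum_closed_form(n)

print(sum_closed_form(100))            # 5050
# ...and the closed form is instant even when the loop would not be.
print(sum_closed_form(1_000_000_000))  # 500000000500000000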

Math meme

Later we would formalize this as $\mathcal{O}(1)$, or constant scaling.

Yes, yes, I know that IRL the compiler would optimize the loop solution so that it does not take N instructions, but for the purposes of learning we were not allowed to depend on that.

Looking back at it now, both solutions are equally correct, and a modern compiler would optimize the first solution in the final instructions sent to the CPU, so any performance difference would be negligible. In other words, the computer wouldn’t actually be working so hard.

Objectively, the first solution is more readable and friendlier to a new observer than the second.

Why then am I still so drawn to the second solution?

The first solution reminds me of the saying “to a hammer, everything looks like a nail”. It’s a brute-force approach. In comparison, the second solution uses the exact tool for our particular problem. It somehow feels personalized and, dare I say, romantic.

When I reflect on moments like this, it reminds me that there is emergent elegance and beauty even in the seemingly arbitrary sequence of symbols that is code.

The Matrix code going by image

Programming is not quite as objective as people would have you believe. There are trade-offs to each solution, and which one you prefer reflects the trade-offs you are willing to accept; that preference varies by observer, much like art.

By Kumar Damani, 2022-11-29