Advent of Code 2018 Week 1: Recap

So I’ve decided to do Advent of Code again this year (no surprise there), but this time, I’m encouraging everyone to join me as well.

I’ve completed 8 challenges, and thought it was time for a recap.  I plan on breaking down solutions day by day, and then ending with some lessons learned that might help others.  I compete each night with ugly, hacked-together code, then work on refactoring it the next day.  What I share is the refactored version (don’t think I spit something like this out in just an hour).  You can find all my code on my GitHub.

So let’s get started.

Day 1

""" Advent Of Code Day 1 """

from itertools import accumulate, cycle
from common.input_file import read_numbers

def get_sum_of_frequencies(numbers):
    """
        Given a list of frequencies (positive and negative)
        What is the ending frequency when added together
    """
    return sum(numbers)

def get_first_frequency_listed_twice(numbers):
    """
        Given a list of frequencies (positive and negative)
        What is the first cumulative frequency seen twice
    """
    seen_so_far = {0}
    for running_total in accumulate(cycle(numbers)):

        if running_total in seen_so_far:
            return running_total

        seen_so_far.add(running_total)

    raise RuntimeError("This code is unreachable")

NUMBERS = read_numbers("input/input1.txt")

if __name__ == "__main__":
    print(get_sum_of_frequencies(NUMBERS))
    print(get_first_frequency_listed_twice(NUMBERS))

This was a nice, straightforward one.  I try to minimize my loops, and I didn’t like that the second part had me looping through frequencies in essentially a while True loop.  I knew about cycle, but accumulate was a nifty trick that I had never learned before.  The more logic you can express declaratively (removing conditionals and loops, because those are what human brains have trouble with), the better off you will be.
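For anyone who hasn’t met these two itertools functions, here’s a quick standalone illustration (not from my solution):

```python
from itertools import accumulate, cycle, islice

# accumulate yields running totals: 1, 1+2, 1+2+3, ...
running_totals = list(accumulate([1, 2, 3, 4]))  # [1, 3, 6, 10]

# cycle repeats a sequence forever; islice caps it so we can look at a prefix
cycled_totals = list(islice(accumulate(cycle([1, -2])), 6))
```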

Day 2

My Day 2 code was slow, and even refactored, I think it’s a bit wordy.  I’m not going to link it here because I didn’t feel there was anything special about it.

Day 3

""" Advent of Code 2018 Day 3 """
from collections import Counter
from itertools import chain
import re

from common.input_file import get_transformed_input
from common.grid import Grid

def get_claim(claim):
    """
        Transform a claim to an ID and a list of points
    """
    claim_id, left, top, width, height = re.split(r"@|,|:|x", claim)
    left, top, width, height = int(left), int(top), int(width), int(height)
    points = Grid(top=top, bottom=top + height - 1, left=left, right=left + width - 1)
    return (claim_id, points)

def get_claimed_squares(claims):
    """
        Return a dictionary of all claimed squares where key is the coordinate
        and the value is the number of claims
    """
    return Counter(chain.from_iterable(points for _, points in claims))

def get_number_of_overlapping_squares(claimed_squares):
    """
        Get the number of overlapping squares
    """
    return len([_ for _, claims in claimed_squares.items() if claims > 1])

def get_non_overlapping_claim(claims, claimed_squares):
    """
        Get the only claim that is not overlapping
    """
    def is_unclaimed(points):
        return all(claimed_squares[point] == 1 for point in points)

    try:
        return next(claim_id for claim_id, points in claims if is_unclaimed(points))
    except StopIteration:
        assert False, "We should never hit this line, as this means we didn't find an answer"

CLAIMS = get_transformed_input("input/input3.txt", get_claim)
CLAIMED_SQUARES = get_claimed_squares(CLAIMS)

if __name__ == "__main__":
    print(get_number_of_overlapping_squares(CLAIMED_SQUARES))
    print(get_non_overlapping_claim(CLAIMS, CLAIMED_SQUARES))

I was proud of this code, as I really liked the simplicity.  From the regex parsing to using chain.from_iterable (which I’d never used before), I felt this was straightforward.

Day 4

Day 4 was rough.  No code to show here due to length and ugliness.  I did finally get to use a Python 3 feature, `a, *b = [1,2,3,4,5]`, to unpack collections better.
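As a quick aside, here’s what that extended unpacking looks like in isolation:

```python
# The starred name soaks up whatever the other names don't claim
a, *b = [1, 2, 3, 4, 5]                     # a -> 1, b -> [2, 3, 4, 5]

# It also works in the middle, which is handy when parsing log lines
first, *middle, last = "one two three four".split()
```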

Day 5

Day 5 was one of my favorites.  I wanted to do this in one pass, and it felt very similar to parentheses balancing, so I just used a stack to get through the polymer

""" Advent of Code day 5 """
from common.input_file import read_single_line

def are_units_reactive(unit1, unit2):
    """
        Return if two units are reactive to one another
        like aA or Aa (not aa or AA)
    """
    return unit1.lower() == unit2.lower() and unit1 != unit2

def get_reduced_polymer_length(polymer):
    """
        Get the polymer length after all reactions have taken place
    """
    stack = []
    for letter in polymer:
        if not stack:
            stack.append(letter)
            continue

        if are_units_reactive(letter, stack[-1]):
            stack.pop()
        else:
            stack.append(letter)

    return len(stack)

def get_optimized_polymer_length(polymer):
    """
        Find the shortest reduced polymer if we were to remove all of one unit (such as A/a)
    """
    lowercase_units = set(l.lower() for l in polymer)
    return min(get_length_after_unit_removal(l, polymer) for l in lowercase_units)

def get_length_after_unit_removal(letter, polymer):
    """
        Remove all capital and lowercase versions of the letter
        and return the reduced length of the polymer
    """
    candidate_polymer = polymer.replace(letter.lower(), "").replace(letter.upper(), "")
    return get_reduced_polymer_length(candidate_polymer)

POLYMER = read_single_line("input/input5.txt")
if __name__ == "__main__":
    print(get_reduced_polymer_length(POLYMER))
    print(get_optimized_polymer_length(POLYMER))

Day 6

Day 6 was another rough one, but once I extracted out a Grid class, my life became significantly easier.  Nothing to note in the code there.
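My actual Grid class isn’t shown here, but based on how the Day 3 code above uses it (an iterable rectangle of coordinates), a minimal sketch might look like this; my real version has more to it:

```python
class Grid:
    """A rectangle of integer (x, y) coordinates; all bounds inclusive."""

    def __init__(self, top, bottom, left, right):
        self.top, self.bottom = top, bottom
        self.left, self.right = left, right

    def __iter__(self):
        # Yield every coordinate inside the rectangle, row by row
        return ((x, y)
                for y in range(self.top, self.bottom + 1)
                for x in range(self.left, self.right + 1))
```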

Day 7

Day 7's code was long, but it taught me the value of good abstraction along the way (I had a Worker and a WorkerPool class, as well as a DAG.)  As the problems get harder, I need to start out this way

Day 8

All that Functional Programming paid off, because I was ready to think recursively.

""" Advent of Code 2018 """

from common.input_file import read_single_line

class Tree:
    """
        A simple recursive tree structure
    """
    def __init__(self):
        self.metadata = []
        self.children = []

    def get_metadata_sum(self):
        """
            Gets the sum of all metadata and all children's metadata
        """
        return sum(self.metadata) + sum(child.get_metadata_sum() for child in self.children)

    def get_value(self):
        """
            Get the value (sum of metadata if no children), otherwise sum of
            children based on metadata
        """
        if not self.children:
            return sum(self.metadata)
        indices = [index - 1 for index in self.metadata if self.is_valid_child_index(index)]
        values = [self.children[i].get_value() for i in indices]
        return sum(values)

    def is_valid_child_index(self, index):
        """
            Return if the 1 based index is greater than zero,
            but can index into the children
        """
        return 0 < index <= len(self.children)

def create_tree(tree_iter):
    """
        Build a tree from an iterator of numbers: a header
        (number of children, number of metadata entries),
        then the child nodes, then the metadata
    """
    num_children = next(tree_iter)
    num_metadata = next(tree_iter)
    tree = Tree()
    tree.children = [create_tree(tree_iter) for _ in range(num_children)]
    tree.metadata = [next(tree_iter) for _ in range(num_metadata)]
    return tree

TREE_INPUT = read_single_line("input/input8.txt")
TREE = create_tree(iter(map(int, TREE_INPUT.split())))
if __name__ == "__main__":
    print(TREE.get_metadata_sum())
    print(TREE.get_value())

I feel this one is nice, short, and sweet.  The Tree abstraction also saved my bacon in coding this up quickly.


Lessons Learned

  • Build abstractions early.  Graphs, Grids, Workers, Worker Pools, etc.  Keep them as generic as possible and keep the business logic out of them (as that's what you'll iterate on to get it right)
  • Learn the libraries
    • Itertools is one of my favorites in Python.  I've used chain, product, combinations, groupby, and others to give me clean, concise code, and to limit my loops
    • I’ve also been using collections.Counter more and more; it may be my new favorite class
  • Think of similar algorithms (such as parentheses balancing)
  • Always refactor, as it will give you tools for the future (especially if you think through the Single Responsibility Principle)
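To make the Counter love concrete, here’s a tiny standalone example of the same chain.from_iterable + Counter combination that powered my Day 3 solution:

```python
from collections import Counter
from itertools import chain

# Flatten one level, then tally every item in a single expression
letters = Counter(chain.from_iterable(["abc", "abd"]))

# Counters act like dicts, so filtering for duplicates is one comprehension
duplicated = [letter for letter, count in letters.items() if count > 1]
```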
